Thursday, May 16, 2019

The Promise and Risks of AI in Health Care: How to Ensure the Good Guys Win

I have to be honest. A year ago, my understanding of AI (a.k.a. artificial intelligence, and related technologies such as machine learning and deep learning) sat somewhere between the 2001 Steven Spielberg movie of the same name and The Matrix. I increasingly found myself in circles where AI was being discussed as “the game changer of the century” and realized that my Hollywood knowledge was not going to suffice.
Fortunately, just like in the movies, I was in luck: a few weeks ago I found myself at a keynote discussion, part of the 8th annual Biopharma Sustainability Roundtable, listening to Dr. Helen Routh. And, much to my surprise, I learned there is a huge role for corporate responsibility (CR) professionals in ensuring their organizations practice “responsible AI.”
But let’s start with our leading lady. Helen is as much a superhero as any you’ll find on the silver screen. She spent 25 years at Philips in innovation, business, and strategy roles, including four years as Senior Vice President of Strategy & Innovation. A common thread throughout her career has been the use of data to drive significant outcome improvements in health care, leading to what would be described today as “AI.” Today, she is a board member and advisor in both the public and private sectors and currently chairs Ultromics, an outcomes-based AI company spun out of the University of Oxford. It develops ultrasound-based diagnostic support tools for cardiovascular disease by combining deep clinical insights with machine learning and some of the largest cardiac ultrasound datasets in the world.
While Helen’s keynote didn’t involve any car chases or love triangles, it didn’t disappoint in painting a picture of immense hope for the future of health care through the power of AI. And, like an Oscar-winning director, Helen infused her story with dramatic risks, driving home that the only way the world can reap the vast fruits of this innovation is by gaining a real understanding of the potential unintended negative effects and how to avoid them.
Let’s start with the positive. If used well, data and analytics, whether strictly AI, machine learning, or deep learning, have the power to dramatically improve health outcomes, enhance the quality of care, reduce health care costs, and expand access globally.
Starting in the developed world, Helen shared several real-world examples:
·       N of One: Recently acquired by Qiagen, it has developed proprietary technology and a knowledgebase called MarkerMine™, which provides high-quality, actionable clinical interpretation of molecular tests for oncology patients. The clinical decision support technology links patients’ tumor profiles with potential therapeutic strategies, including those still in clinical trials, greatly improving the chances of effective treatment.
·       HeartFlow: Developer of cloud-based software that aids cardiologists in diagnosing coronary artery disease. HeartFlow creates a 3-D model of a patient’s coronary arteries and applies algorithms to locate blockages to blood flow, helping determine a more precise treatment plan. It has been in beta testing for the past three years in 80 health care centers in the United States and abroad. The technology helps reduce unnecessary angiograms and other invasive procedures and shortens turnaround time to diagnosis.
·       CareSage™: Developed by Philips, this predictive analytics engine integrates data from patients’ records with Lifeline (medical alert company) enrollment and medical alert service activity. The information is merged into models to score a patient’s risk of admission to the hospital or nursing center in the next 30 days. In a retrospective analysis of 2,000 Lifeline subscribers, the software’s predictions pointed to a potential 40 percent reduction in admissions.
And here’s one more that I learned about since Helen’s talk that illustrates how robotics can be combined with AI in health care delivery. Mazor Robotics uses AI to aid minimally invasive surgical operations as well as operations with complex anatomy. Before an operation, a patient’s CT scan is loaded into a 3-D computerized planning system to indicate where a surgeon should place implants—all before the patient even arrives. Mazor’s spinal surgery robot arm guides the orthopedic surgeon’s instruments, allowing for an extremely high degree of precision.
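To make the CareSage example a bit more concrete, here is what “merging data into models to score risk” can look like in practice. To be clear, this is my own minimal sketch, not Philips’ actual system; the features, training data, and threshold are all invented for illustration.

```python
# A minimal sketch of predictive risk scoring, in the spirit of the
# CareSage example above. NOT Philips' actual model: the features,
# data, and threshold below are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical merged features per patient: [age, ER visits last year,
# medical-alert button presses last month, number of chronic conditions]
X_train = np.array([
    [82, 3, 5, 4],
    [67, 0, 1, 1],
    [91, 2, 8, 3],
    [74, 1, 0, 2],
    [88, 4, 6, 5],
    [70, 0, 2, 1],
])
# 1 = admitted to a hospital within 30 days, 0 = not admitted
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new patient: estimated probability of admission in the next 30 days.
new_patient = np.array([[85, 2, 7, 3]])
risk = model.predict_proba(new_patient)[0, 1]
if risk > 0.7:  # invented threshold for flagging high-risk patients
    print(f"High risk ({risk:.0%}): prioritize outreach")
else:
    print(f"Risk score: {risk:.0%}")
```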
Pretty cool stuff. And AI also has potential to drive game-changing improvements in health in the developing world. A new report, Artificial Intelligence in Global Health: Defining a Collective Path Forward, recently published by USAID, the Rockefeller Foundation, and the Bill & Melinda Gates Foundation, sees AI as a tool to enable community health workers to better serve patients in remote areas, help governments prevent deadly disease outbreaks, and greatly improve health care delivery to vulnerable communities.
The report includes dozens of scenarios of AI’s potential use for good. One tells the story of Anita, a woman living in a rural village in Western Kenya, six hours from Nairobi and two hours on dirt roads from the closest hospital. Anita recently became a community health worker and now goes door to door in her community, providing local patients with health advice and selling basic health products to address their needs.
Anita has a smartphone with various apps that she uses in her work; she enters simple information on her patients’ health conditions, including symptoms they are currently experiencing. Her AI-enabled apps then provide diagnoses, treatment advice, and self-care recommendations, allowing her to give her patients the best possible care and sparing them a trip to a health facility hours away.
Enter Dramatic Music
But now comes the dramatic moment in the story when the good guys do their thing (which surely involves some karate kicks and leaps off tall buildings) to ensure the risks are avoided so that the world can benefit from this life-altering technology.
In the case of the developing world, the USAID/Rockefeller/Gates report calls out the challenge of taking AI solutions from high-income countries and deploying and scaling them to address the unique needs of populations in low-income environments. Fortunately, the report’s authors serve up recommendations to guide the appropriate use of AI in low- and middle-income contexts.
Back at the keynote, Helen revealed several of the critical factors that must be addressed to deliver the potential of data, analytics and AI to both developed and developing countries:
·       Data security and privacy should already be top of mind for all companies and particularly those in health care. “Any business or organization not already thinking about how to mitigate the risk of data breaches or other cyber-security-related threats is at risk.”
·       Data quality is critical to delivering meaningful results. Helen draws on the principle of “garbage in, garbage out,” noting that it is key to understand the quality of the data used in building an application and how it applies to the specific clinical problem at hand. (A short sketch of what such checks can look like appears after this list.)
·       Bias can be embedded (even unintentionally) in historical data sets, results, and interpretations – whether it arises from using data for a different application, from not understanding how the data was collected, or from population differences.
·       Public understanding and trust are critical to allow AI to thrive. Organizations must clearly communicate how a patient’s or citizen’s data will be used and how value is returned to patients or citizens, and they must be able to explain at a high level what a particular application does. It’s hard to build trust when AI is perceived as a “black box” where something magical happens to your data.
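On the data-quality point, here is a hedged sketch of what “garbage in, garbage out” guardrails might look like before clinical data ever reaches a model. The column names and valid ranges are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of basic sanity checks on a clinical dataset before it
# is used to train or feed a model. Columns and valid ranges are hypothetical.
import pandas as pd

def check_data_quality(df):
    """Return a list of human-readable data-quality warnings."""
    warnings = []
    # Missing values undermine any downstream model.
    missing = df.isna().mean()
    for col, frac in missing[missing > 0.05].items():
        warnings.append(f"{col}: {frac:.0%} missing values")
    # Physiologically impossible readings suggest collection errors.
    if "age" in df and ((df["age"] < 0) | (df["age"] > 120)).any():
        warnings.append("age: values outside 0-120")
    if "heart_rate" in df and ((df["heart_rate"] < 20) | (df["heart_rate"] > 250)).any():
        warnings.append("heart_rate: values outside 20-250 bpm")
    # Duplicated records can silently bias training data.
    if df.duplicated().any():
        warnings.append(f"{df.duplicated().sum()} duplicate rows")
    return warnings

# Hypothetical usage:
# df = pd.read_csv("cardiac_ultrasound_metadata.csv")
# for w in check_data_quality(df):
#     print("WARNING:", w)
```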
While not part of Helen’s remarks, in my own research I came across a tool from Accenture’s new Applied Intelligence practice called the AI Fairness Tool™, which helps identify and fix unintended biases in AI solutions. The tool examines the “data influence” of sensitive variables (age, gender, race, etc.) on other variables in a model, measuring how strongly the variables correlate with one another to see whether they are skewing the model and its outcomes.
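For the curious, the core idea can be sketched in a few lines of code. To be clear, this is not Accenture’s tool; it is a minimal illustration of correlation-based bias screening, and all column names are hypothetical.

```python
# A minimal sketch of correlation-based bias screening: measure how strongly
# each sensitive variable tracks the other variables (including the model's
# output) so that proxies for age, gender, etc. can be flagged for review.
# NOT the AI Fairness Tool itself; all column names are hypothetical.
import pandas as pd

def sensitive_correlations(df, sensitive, outcome):
    """Rank variables by how strongly they correlate with sensitive ones."""
    others = [c for c in df.columns if c not in sensitive]
    rows = []
    for s in sensitive:
        for col in others:
            # Pearson correlation; assumes columns are numerically encoded.
            corr = df[s].corr(df[col])
            rows.append({"sensitive": s, "variable": col,
                         "correlation": corr, "is_outcome": col == outcome})
    return pd.DataFrame(rows).sort_values("correlation", key=abs, ascending=False)

# Hypothetical usage: flag variables acting as proxies for age or gender.
# df = pd.read_csv("patients.csv")
# report = sensitive_correlations(df, sensitive=["age", "gender"], outcome="risk_score")
# print(report[report["correlation"].abs() > 0.5])
```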
Another group working to maximize the public good of AI and related technologies is the Partnership on AI to Benefit People and Society. With more than 80 partners (including UNICEF) spanning 13 countries, the Partnership studies and formulates best practices for AI technologies, works to advance the public’s understanding and trust in AI, and serves as an open platform for discussion about the influences of AI on people and society.
“Where AI tools are used to supplement or replace human decision making, we must be sure that they are safe, trustworthy and aligned with the ethics and preferences of people who are influenced by them,” reads the Partnership’s website.
In health care, building trust in and understanding of AI among the public will also require engagement with patient advocates and patient/disease-focused societies and, as Helen notes, communicating the clinical rather than the business benefits of the technology.
And this is precisely where the role of CR professionals comes in: We can help identify potential unintended risks; put in place responsible policies; bring in the voice of diverse stakeholders who may be affected by the AI-enabled technologies; and ensure transparency around the organization’s use of AI (thus, helping to build public trust). In summary, CR professionals must help ensure their organizations practice “responsible AI.”
Helen’s address taught me that AI is not science fiction; it is here, all around us already. It has the potential for much good, but society – business, government, academia, and research institutions – must come together to define and practice responsible AI in order to reap the potential benefits and ensure a happy ending that would make Hollywood proud.
