In our second most-read post, author and Perspectives columnist Anne Bruce explores the ethical and human dilemmas created by advancing healthcare technology.
Welcome to Mindfulness in Medicine, an ongoing column by best-selling author Anne Bruce, designed to cultivate leadership and collaborative relationships among hospital leaders, nurses, providers and ancillary staff. Mindfulness is a powerful leadership tool that enhances emotional intelligence in medicine. It is a tool that, when practiced, can help us develop and implement relational coaching skills and illuminate various ways to improve hospital operations and cross-departmental performance. Mindfulness also improves our capacity for decision-making and participatory medicine, all while enhancing our own health and well-being. Your comments and insights on these postings are greatly valued.
Joining me for this month's column is training expert and healthcare consultant Dolly Hinshaw, who has been engaging healthcare professionals in thoughtful discussions about artificial intelligence for several years.
Putting Artificial Intelligence to Work and Avoiding Medical Missteps
Medical decision-making is a highly complex process. Providers must weigh information from many sources, including accepted best practices, in-house protocols, the current medical literature and their own clinical judgment. In the acute care setting, this synthesis of ideas often occurs under time pressure. Given these realities, it's no wonder that medical errors and missteps are so common.
Fortunately, advances in artificial intelligence (AI) are putting a wealth of information at providers’ fingertips — in many cases instantly. AI uses complex computer algorithms to make sense of unstructured data. It can discern useful patterns in physician notes, journal articles, public databases and other sources and transform these findings into actionable intelligence.
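To make the idea of "making sense of unstructured data" concrete, here is a deliberately simplified sketch (not any vendor's actual algorithm) of how free-text provider notes can be converted into structured, queryable data. The note text and the pattern are invented for illustration; real clinical natural-language processing is far more sophisticated.

```python
import re

# A hypothetical free-text provider note.
NOTE = ("Pt c/o chest pain. Started metoprolol 25 mg BID. "
        "Continue aspirin 81 mg daily. F/u in 2 weeks.")

# Toy pattern: a drug name followed by a numeric dose in milligrams.
DOSE_PATTERN = re.compile(r"([A-Za-z]+)\s+(\d+)\s*mg", re.IGNORECASE)

def extract_medications(note):
    """Return a list of (drug, dose_mg) pairs found in the note."""
    return [(drug.lower(), int(dose))
            for drug, dose in DOSE_PATTERN.findall(note)]

print(extract_medications(NOTE))
# → [('metoprolol', 25), ('aspirin', 81)]
```

Once notes are reduced to structured records like these, they can be aggregated across millions of patients and mined for the patterns described above.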
AI's use in medicine has been a sensitive issue, drawing both curiosity and concern. In this month's column, we'll examine the possibilities, limitations and potential pitfalls of AI, plus offer some tips for incorporating it into practice.
Medical Decision-Making — Minding the Gap
Our world contains a wealth of medical information that can inform patient care. Unfortunately, even the most informed physician knows only a fraction of the available data relevant to a given case.
While information technology has helped to close the knowledge gap, traditional data analysis methods have limits. Specifically, they can't process unstructured information such as the language, abbreviations and images that make up many data sources (e.g., electronic medical records).
In recent years, researchers have developed new algorithms to translate and process unstructured information. Perhaps their most notable success is IBM's artificially intelligent supercomputer, Watson, which famously defeated two Jeopardy! champions. IBM recently announced that Watson's first commercial application will involve clinical decision support for healthcare.
While healthcare-specific AI platforms are still in the early stages of development, experts hope they will one day give providers robust intelligence to inform their clinical decisions. To illustrate, here are just a few of the data sources AI can draw upon:
- Public databases (e.g., cancer registries, voluntary reporting programs)
- Electronic medical records (EMRs)
- Journal articles
- Diagnostic images
- Results of clinical trials
- Insurance records
- Genomic profiles
- Provider notes
- Wearable devices and activity trackers (e.g., Fitbit)
AI not only compiles information but also translates it into usable intelligence. For example, it might crunch millions of EMRs, journal articles and cancer registry entries to predict the optimal treatment for a patient with a rare form of breast cancer. Recommendations would be personalized based on the person's genome, health history and response to past treatments. (Such programs are currently in the works at the cancer centers of Cedars-Sinai Medical Center and Memorial Sloan Kettering.)
AI platforms are also capable of "machine learning," meaning they incorporate new findings into their operating algorithms. This means they grow more powerful over time while adjusting to rapid advances in medical research.
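The "machine learning" idea described above can be sketched in miniature: a model that incorporates each new outcome into its recommendation as it arrives, rather than being reprogrammed. The treatment names and outcomes below are invented for illustration only.

```python
class OutcomeModel:
    """Toy model that updates treatment recommendations incrementally."""

    def __init__(self):
        self.successes = {}   # treatment -> count of good outcomes
        self.trials = {}      # treatment -> count of cases observed

    def observe(self, treatment, success):
        """Incorporate one new case into the model's statistics."""
        self.trials[treatment] = self.trials.get(treatment, 0) + 1
        self.successes[treatment] = (self.successes.get(treatment, 0)
                                     + int(success))

    def recommend(self):
        """Return the treatment with the best observed success rate."""
        return max(self.trials,
                   key=lambda t: self.successes[t] / self.trials[t])

model = OutcomeModel()
for treatment, success in [("drug_a", True), ("drug_a", False),
                           ("drug_b", True), ("drug_b", True)]:
    model.observe(treatment, success)

print(model.recommend())  # drug_b: 2/2 beats drug_a's 1/2
```

Real clinical platforms use vastly more sophisticated statistical models, but the principle is the same: each new case sharpens the next recommendation.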
Healthcare Applications for AI
The Affordable Care Act requires healthcare providers to lower costs while improving outcomes and patient experience. AI could go a long way toward helping meet these goals. A study at Indiana University found that when physicians used an AI framework to aid decision-making, patient outcomes improved significantly. What's more, healthcare costs declined by 50 percent.
Here are just a few of the ways AI is already making healthcare safer and more cost effective:
- The Medical University of Vienna has developed an artificially intelligent surveillance program that tracks the spread and evolution of germs capable of causing hospital-acquired infections.
- The University of Pittsburgh and Carnegie Mellon University are using artificial intelligence to develop personalized treatment plans based on health history and lifestyle factors. (The platform bases its recommendations on outcome data for similar patients.)
- Hospitals can now use the Einstein II workforce management solution to optimize scheduling based on census, acuity, employee availability and historical data.
- AiCure is a mobile phone app that uses facial recognition and motion sensing technology to monitor medication compliance.
- Alme Health Coach acts as a "virtual assistant" to people living with chronic health conditions. It provides medication and testing reminders, monitors sleep and automates appointment scheduling. It even converses with noncompliant patients to determine the root causes of their behaviors.
- AdverseEvents Explorer provides real-time medication safety and cost data to physicians and payers based on prescription patterns and entries into the FDA's voluntary adverse event reporting system.
- Modernizing Medicine is a cloud-based EHR that leverages an enormous data warehouse to anticipate treatment decisions and provide personalized care recommendations.
Potential Downsides of AI in Medicine
Mindful providers acknowledge the potential of AI while remaining vigilant for side effects and unintended consequences.
AI technology, while powerful, has inherent limitations when compared with human understanding. Existing programs are not very good at processing natural language or images, though these deficits will likely diminish as technology advances.
More significantly, AI will never match human intelligence in terms of understanding information in context. For example, a machine might infer that a physician ordered a CT scan because they suspected a serious neurologic condition. However, a practicing provider familiar with the litigation landscape might correctly interpret this as an example of defensive medicine.
For these reasons, AI experts emphasize that the technology should be used to enhance human decision-making — not replace it. Providers and administrators should be vigilant in ensuring that increased automation and efficiency never compromise patient safety. A good example of this is the SEDASYS robot-delivered anesthesia system, which allows non-anesthesiologist physicians to administer general anesthesia. To ensure patient safety, Virginia Mason Medical Center worked closely with anesthesiologists to implement the program and provide support to users.
Mindshare Tips For Embracing Artificial Intelligence
AI implementation can greatly enhance care delivery, but it can also be disruptive and divisive for healthcare teams. These strategies can help promote mindful implementation and adoption of this important technology:
- Frame AI as a tool to augment (not replace) training, experience and clinical judgment.
- Emphasize the ways AI platforms can be a win-win-win solution for patients, providers and the hospital (e.g., more provider-patient face time, more personalized treatment plans).
- Advocate to EMR vendors and end-users for increased interoperability between systems. Lack of EMR integration remains a major barrier to the effective use of AI.
- Engage colleagues, collaborative partners, medical students and even patients in mindful discussions about the possibilities and limitations of AI.
- Welcome the benefits of AI while acknowledging the need for vigilance in implementation and use. AI should never serve as a substitute for medical knowledge and judgment.
When you consider all of the changes that are happening in healthcare today, what do you think of the coming merger between human and artificial intelligence? Comment below to tell us about it.
About the Authors
Anne Bruce has provided training and performance coaching for MedAmerica and CEP America. She also serves as MBSI's Employee Development Coach and Leadership Facilitator. Anne is a bestselling author with more than 20 books published by McGraw-Hill Publishing, New York. Her next book on mindful behavior is titled Conscious Engagement and is scheduled for release in 2016. She considers her award-winning life-coaching book, Discover True North: A 4-Week Approach to Ignite Passion and Activate Potential (McGraw-Hill Publishing), to be one of her most "mindful" books to date. She also leads a popular Discover True North Expedition group on LinkedIn. Anne can be reached at 214-507-8242, by writing to her at Anne@AnneBruce.com, or by visiting her on LinkedIn or Facebook.
Dolly Hinshaw is a training design expert and healthcare consultant who has been engaging healthcare professionals in thoughtful discussions about AI for several years. Her company, Hinshaw & Associates, embraces the powerful possibilities of combining human and automated reasoning.