Imagine walking into a doctor’s office in the future. As you describe your symptoms, a discreet device on the desk lights up. The doctor glances at it, nods thoughtfully, and says, “Well, based on your symptoms and the AI’s analysis of your medical history, we’re looking at a 92% probability of condition X. Let’s run a few more tests to confirm.”
Artificial Intelligence (AI) is no longer the stuff of science fiction. It’s here, it’s real, and it’s making its presence felt in the hallowed halls of medicine. From radiology to pathology, from oncology to ophthalmology, AI is increasingly becoming the doctor’s indispensable assistant. However, as with any revolutionary technology, AI in medical diagnosis comes with its own set of promises and perils.
Here, we’ll explore how AI is reshaping the landscape of medical diagnosis, examine its potential to revolutionize healthcare, and confront the challenges and ethical dilemmas it presents, from the labs of Silicon Valley to the neural networks of modern medicine.
The Evolution of Medical Diagnosis
The art of medical diagnosis has come a long way since Hippocrates first advised his disciples to observe and reason. For centuries, diagnosis relied heavily on a physician’s intuition and experience, closer to pseudoscience than to quackery by modern standards. The 20th century saw a shift towards evidence-based medicine, with randomized controlled trials becoming the gold standard for clinical decision-making.1
But the 21st century brought with it a data deluge. A 2011 analysis projected that by 2020 the volume of medical knowledge would be doubling every 73 days.2 From electronic health records to genomic data, from wearable devices to medical imaging, the amount of information available to inform a diagnosis has exploded, creating a problem that never existed before: how do we make sense of this data tsunami?
This is where Artificial Intelligence enters the game, and so far, it has played well. We are now on the brink of a new frontier, a brave new world of AI-assisted diagnosis, and whether you’re ready or not, AI is here – and it’s here to stay.
How AI-Assisted Diagnosis Works
At its core, AI-assisted diagnosis harnesses the power of machine learning algorithms that have been meticulously trained on immense datasets containing millions of medical cases. Like ravenous students devouring textbooks, these algorithms ingest and analyze vast troves of information – from medical imaging and genomic data to doctor’s notes and published research studies.
Through this intensive training process, the algorithms develop an uncanny ability to recognize intricate patterns and relationships that often elude the human eye. With lightning speed and computational might, they can cross-reference a patient’s symptoms, test results, and medical history against what they have learned to arrive at diagnostic predictions whose accuracy, on narrow and well-defined tasks, can rival or even exceed that of human experts.
Deep learning, a subset of machine learning, has been particularly transformative. Deep learning techniques like Convolutional Neural Networks (CNNs) have proven particularly adept at image recognition tasks crucial for radiology and pathology.3 These AI models can scrutinize medical scans, from X-rays to MRIs, with a level of granular detail and consistency that human radiologists simply cannot match. This helps a doctor catch hidden abnormalities and arrive at a more precise diagnosis at an earlier stage.
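For the technically curious, here is a minimal sketch of what such an image model looks like under the hood: a deliberately tiny convolutional network in PyTorch, with made-up layer sizes and a generic “normal vs. abnormal” output. It illustrates the shape of the idea, not the architecture of any deployed diagnostic system.

```python
# A minimal sketch of a CNN classifier for single-channel medical scans
# (e.g., chest X-rays), using PyTorch. The architecture, image size, and
# two-class output are illustrative assumptions, not a real diagnostic model.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_classes),        # map learned features to class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ScanClassifier()
batch = torch.randn(4, 1, 224, 224)          # 4 fake grayscale scans
probs = torch.softmax(model(batch), dim=1)   # per-scan class probabilities
print(probs)
```

Real systems add many more layers, vastly more training data, and extensive validation, but the basic recipe of stacked convolutional filters feeding a classifier is the same.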
Natural Language Processing (NLP) is another powerful AI technique, one that increasingly surfaces in clinical decision-support systems. NLP extracts critical insights from the unstructured data found in clinical notes, doctor-patient conversations, and medical literature.4 Trained to parse the nuances of human language, NLP algorithms can pinpoint relevant diagnostic information buried within these text-based sources and augment a physician’s decision-making, even if some of the claims made for it today are exaggerated.
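Purely as an illustration, the toy pipeline below flags free-text notes that look suggestive of a condition using a simple bag-of-words model. The notes, labels, and the diabetes example are invented for this sketch; production clinical NLP systems are trained on large de-identified corpora with far richer models.

```python
# Toy sketch: flagging free-text clinical notes that suggest a target condition,
# using TF-IDF features and logistic regression (scikit-learn).
# The notes and labels below are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports polyuria, polydipsia and blurred vision",
    "routine follow-up, no new complaints, vitals stable",
    "fasting glucose elevated, family history of type 2 diabetes",
    "annual physical, labs within normal limits",
]
suggests_diabetes = [1, 0, 1, 0]  # hypothetical labels

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipeline.fit(notes, suggests_diabetes)

new_note = "complains of increased thirst and frequent urination, glucose pending"
# probability (per this toy model) that the note suggests diabetes
print(pipeline.predict_proba([new_note])[0, 1])
```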
Real-world examples underscore AI’s diagnostic prowess. Take IDx-DR, the pioneering FDA-approved autonomous AI system for detecting diabetic retinopathy, a leading cause of blindness in patients with chronically elevated blood sugar levels. In its pivotal clinical trial, the system correctly identified the pathology with a sensitivity of 87.2% and a specificity of 90.7%, performance on par with expert human graders.5
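If those two percentages feel abstract, they fall straight out of a confusion matrix. The snippet below shows the arithmetic with hypothetical counts chosen only to produce similar numbers; they are not the trial’s actual data.

```python
# How sensitivity and specificity fall out of a confusion matrix.
# The counts below are hypothetical and are NOT the IDx-DR trial data;
# they only illustrate the arithmetic behind figures like 87.2% / 90.7%.
def sensitivity(tp: int, fn: int) -> float:
    """Of everyone who truly has the disease, what fraction did the test catch?"""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Of everyone who is truly disease-free, what fraction did the test clear?"""
    return tn / (tn + fp)

tp, fn = 171, 25   # hypothetical: diseased patients flagged vs. missed
tn, fp = 527, 54   # hypothetical: healthy patients cleared vs. falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # ~87.2%
print(f"specificity = {specificity(tn, fp):.1%}")  # ~90.7%
```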
The Promise of AI in Diagnosis
The potential benefits of AI in medical diagnosis are nothing short of revolutionary. First and foremost is the promise of improved accuracy and consistency. A study published by McKinney et al. in Nature showed that an AI system outperformed six radiologists at reading mammograms, reducing both false positives and false negatives.6
Mammograms are radiological images used to screen for breast cancer. As with ultrasound, their interpretation depends heavily on the reader’s skill, and this variability among radiologists has produced a substantial number of false positive and false negative results.7 AI’s ability to handle complex data sets goes beyond what any one clinician can hold in mind: it can integrate diverse data types, from genetic information to lifestyle factors, to provide a more comprehensive diagnostic picture. This is particularly valuable in rare diseases, where AI can identify patterns that might escape even experienced clinicians.8
In 2016, doctors at the University of Tokyo reported that IBM’s Watson correctly diagnosed a rare form of leukemia in a patient after the condition had stumped human doctors for months. Watson compared the patient’s genetic changes against a database of 20 million cancer research papers and identified the correct condition in just 10 minutes.9 This raises the question: should Watson be consulted for a second opinion?
Early detection of disease is another area where AI shines. In the same mammography study, McKinney et al. reported an absolute reduction of 5.7% in false positives and 9.4% in false negatives on US data compared with the original radiologists’ reads, potentially saving thousands of lives through earlier intervention. Perhaps most excitingly, AI has the potential to democratize expert-level diagnostics. In resource-poor settings where specialists are scarce, AI could provide high-quality diagnostic support, potentially reducing healthcare disparities.10
Certainly, if you are a specialist reading this article (especially a radiologist), it is natural to fret at this point. You might wonder whether your job, for which you slogged away for years and still carry unpaid student debt, will end up in a yellow bin. Not yet. The next section explains why.
The Perils and Pitfalls of AI in Medicine
Before we get carried away with visions of AI-powered medical utopias (or dystopias, if diagnosis is your livelihood), let’s take a sobering look at the challenges and risks.
The “black box” problem looms large in AI-assisted diagnosis. Many machine learning models, particularly deep learning ones, operate in ways that are opaque even to their creators. This lack of interpretability is a significant issue in medicine, where understanding the reasoning behind a diagnosis is crucial.11
There’s also the risk of over-reliance on technology, leading to the deskilling of medical professionals. If doctors become too dependent on AI for diagnoses, they may lose the ability to make independent judgments, a dangerous prospect when AI inevitably makes mistakes.12
Bias in AI systems is another critical concern. If an AI is trained on data that reflects existing healthcare disparities, it may perpetuate or even exacerbate those biases. In 2018, dermatologists cautioned that AI systems trained to identify skin cancer from images, built largely on pictures of lighter skin, may be less accurate for darker skin tones, potentially leading to missed diagnoses in people of color. The episode highlighted the critical importance of diverse training data in developing AI systems.13 Another study, in Science, found that a widely used algorithm was less likely to refer Black patients than equally sick White patients to programs that improve care, affecting millions of people.14
Legal and ethical considerations add another layer of complexity. Who’s responsible when an AI-assisted diagnosis goes wrong? The doctor? The hospital? The AI developer? These questions are still being grappled with in legal and ethical circles.15
The Human Factor: How AI is Changing the Doctor’s Role
As AI becomes more prevalent in diagnosis, the role of the doctor is evolving. Rather than being replaced by AI, doctors are becoming interpreters and integrators of AI-generated insights.
The importance of human intuition and empathy cannot be overstated. While AI can process vast amounts of data, it cannot replicate the nuanced understanding that comes from a doctor’s experience and the ability to pick up on subtle cues from patients.16 This is especially true in psychiatric illness and in any encounter where a doctor’s social skills are indispensable.
The challenge for modern physicians is to strike a balance between leveraging AI insights and applying their own clinical judgment. Doctors should recognize that AI is not replacing them; it is augmenting them. They do, however, need to learn to use it effectively, lest they be replaced in the future not by AI but by colleagues who use AI well.17
Preparing for an AI-Assisted Future
The integration of AI into medical diagnosis necessitates significant changes in medical education and training. Future doctors will need to develop AI literacy alongside their clinical skills. They’ll need to understand not just how to use AI tools, but also their limitations and potential biases.18 Interdisciplinary collaboration between medicine and tech is becoming increasingly crucial. We’re seeing the emergence of new roles like “clinical data scientists” who bridge the gap between these fields.19
As I mentioned earlier, the rise of AI in diagnosis raises profound ethical questions. How do we ensure patient trust in and acceptance of AI diagnoses? A survey published in NPJ Digital Medicine found that while many patients were open to AI in healthcare, they still preferred human doctors for most tasks.20 Data privacy and security concerns are paramount: the vast datasets required to train AI systems raise questions about patient confidentiality and the potential for data breaches.21
There’s also the question of how AI will impact healthcare disparities. While it has the potential to improve access to expert-level diagnostics, there’s also a risk that it could widen the gap between those who have access to AI-enhanced care and those who don’t.22
A Symbiotic Future?
As we wade through this new world of AI-assisted diagnosis, it is beyond any reasonable doubt that we’re dealing with a double-edged sword. The potential benefits are enormous: improved accuracy, early detection of diseases, and democratization of expert-level diagnostics. However, the challenges are equally significant: the black box problem, potential biases, and over-reliance on technology.
Optimistically speaking, the key to harnessing AI’s power while mitigating its risks lies in fostering a symbiotic relationship between human doctors and AI systems. AI should be seen as a tool to augment human capabilities, not replace them. As Eric Topol puts it in his book Deep Medicine, AI can enhance and empower medical professionals, not usurp them.
“Eventually, doctors will adopt AI and algorithms as their work partners. This leveling of the medical knowledge landscape will ultimately lead to a new premium: to find and train doctors who have the highest level of emotional intelligence.”
Eric Topol, Deep Medicine
If history has proven anything, it is that skepticism and calculated pessimism ensure our survival. We must approach this new frontier with a combination of enthusiasm and caution. Rigorous testing, ongoing monitoring, and a commitment to addressing ethical concerns are essential as we integrate AI into medical practice.
Several trends are emerging in AI-assisted diagnosis. Federated learning, which allows AI models to be trained on decentralized data without patient records ever leaving the institutions that hold them, could help address privacy concerns.23 Privacy is one of the central concerns of bioethics, and care is needed to ensure that it is not violated, at least as far as medical practice is concerned.
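For readers who like to see the mechanics, here is a minimal sketch of the federated-averaging idea: each hospital fits a model on data that never leaves its walls, and only the model parameters are pooled centrally. The linear model and the synthetic “hospital” datasets are assumptions made purely for illustration.

```python
# Minimal sketch of federated averaging: each site fits a model on its own
# (never-shared) data, and only the fitted parameters are pooled and averaged.
# The linear model and synthetic hospital datasets are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3])  # ground-truth weights for the simulation

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least squares computed entirely inside one hospital."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three hospitals, each with a private dataset that never leaves the site.
site_sizes = (120, 300, 80)
local_weights = []
for n_patients in site_sizes:
    X = rng.normal(size=(n_patients, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_patients)
    local_weights.append(local_fit(X, y))

# Central server averages parameters, weighted by each site's sample size.
global_w = np.average(np.stack(local_weights), axis=0, weights=np.array(site_sizes, float))
print("federated estimate:", np.round(global_w, 3), "vs. truth:", true_w)
```

The point is simply that the raw patient records stay put; real federated systems exchange model updates iteratively, often with additional privacy protections layered on top.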
Integrating AI with other emerging technologies, such as genomics and wearable devices, promises to usher in an era of truly personalized medicine. Imagine an AI system that can predict your risk of heart disease based on your genetic profile, lifestyle data from your smartwatch, and environmental factors gleaned from your smartphone’s GPS.24
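A toy version of that scenario might look like the sketch below, where features derived from a genetic profile, a smartwatch, and the environment are simply concatenated and fed to a single model. Every feature, value, and label here is invented; this is not a validated cardiovascular risk score.

```python
# Toy sketch of "multimodal" risk prediction: concatenate features from a genetic
# profile, a smartwatch, and environmental exposure into one vector and fit one
# model. All features, values, and labels are made-up assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
genetic = rng.binomial(1, 0.3, size=(n, 2))              # presence of two hypothetical risk variants
wearable = rng.normal([70, 6000], [10, 2000], (n, 2))    # resting heart rate, daily step count
environment = rng.normal(35, 10, size=(n, 1))            # e.g., an air-quality index

X = np.hstack([genetic, wearable, environment])
# Simulated outcome: risk rises with the variants and resting HR, falls with steps.
logit = (1.2 * genetic[:, 0] + 0.8 * genetic[:, 1]
         + 0.03 * (wearable[:, 0] - 70)
         - 0.0002 * (wearable[:, 1] - 6000)
         + 0.02 * (environment[:, 0] - 35) - 1.0)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
patient = np.array([[1, 0, 82, 3500, 48]])               # one hypothetical patient
print("predicted 'risk':", model.predict_proba(patient)[0, 1])
```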
Explainable Artificial Intelligence (XAI) is another area of active research, aiming to make the decision-making processes of AI systems more transparent.25 Not long ago, our devices were essentially private; today, few of them are safe from prying eyes, thanks to leaps in information technology and social media. Our bodies should not become the next device under corporate surveillance.
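To make “explainable” slightly less abstract, one simple flavor of it is permutation importance: shuffle each input feature and see how much the model’s performance drops. The sketch below applies scikit-learn’s permutation_importance to a toy model on synthetic data; real explainable-AI research goes well beyond this, but the intuition carries.

```python
# One simple flavor of explainability: permutation importance measures how much a
# trained model's score drops when each feature is shuffled, hinting at which
# inputs the model actually relies on. The toy data here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 500
age = rng.normal(55, 12, n)
blood_pressure = rng.normal(130, 15, n)
noise = rng.normal(0, 1, n)                      # a feature the simulated outcome ignores
X = np.column_stack([age, blood_pressure, noise])
y = ((0.04 * (age - 55) + 0.05 * (blood_pressure - 130) + rng.normal(0, 1, n)) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, score in zip(["age", "blood_pressure", "noise"], result.importances_mean):
    print(f"{name:>15}: importance drop = {score:.3f}")
```

Permutation importance is a crude lens and says nothing about causality, which is part of why interpretability in medicine remains an open research problem.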
As we stand on the brink of this AI revolution in medicine, one thing is clear: the future of diagnosis will be a collaboration between humans and artificial intelligence. It’s up to us to ensure that this collaboration enhances rather than diminishes the art and science of medicine.
In the words of the ancient Greek physician Hippocrates, “Wherever the art of medicine is loved, there is also a love of humanity.” As we embrace AI’s possibilities in diagnosis, let us ensure that this love of humanity remains at the heart of medical practice.
References:
1. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996 Feb;312(7023):3–5.
2. Densen P. Challenges and Opportunities Facing Medical Education. Trans Am Clin Climatol Assoc. 2011;122:48.
3. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017 Dec 1;42:60–88.
4. Demner-Fushman D, Chapman WW, McDonald CJ. What can natural language processing do for clinical decision support? J Biomed Inform. 2009 Oct 1;42(5):760–72.
5. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018 Aug 28;1(1):1–8.
6. McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020 Jan 2;577(7788):89–94.
7. Lehman CD, Arao RF, Sprague BL, Lee JM, Buist DSM, Kerlikowske K, et al. National performance benchmarks for modern screening digital mammography: Update from the Breast Cancer Surveillance Consortium. Radiology. 2017 Apr 1;283(1):49–58.
8. Shen J, Zhang CJP, Jiang B, Chen J, Song J, Liu Z, et al. Artificial Intelligence Versus Clinicians in Disease Diagnosis: Systematic Review. JMIR Med Inform. 2019 Aug 16;7(3):e10010.
9. Luxton DD. Should Watson be consulted for a second opinion? AMA J Ethics. 2019 Feb 1;21(2):131–7.
10. Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020 May 16;395(10236):1579–86.
11. Rudin C. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat Mach Intell. 2019 May 1;1(5):206–15.
12. Cabitza F, Rasoini R, Gensini GF. Unintended Consequences of Machine Learning in Medicine. JAMA. 2017 Aug 8;318(6):517–8.
13. Adamson AS, Smith A. Machine Learning and Health Care Disparities in Dermatology. JAMA Dermatol. 2018 Nov 1;154(11):1247–8.
14. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 Oct 25;366(6464):447–53.
15. Price WN, Gerke S, Cohen IG. Potential Liability for Physicians Using Artificial Intelligence. JAMA. 2019 Nov 12;322(18):1765–6.
16. Verghese A, Shah NH, Harrington RA. What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA. 2018 Jan 2;319(1):19–20.
17. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019 Jan 7;25(1):44–56.
18. Wartman SA, Combs CD. Medical Education Must Move From the Information Age to the Age of Artificial Intelligence. Acad Med. 2018;93(8):1107–9.
19. Wang F, Preininger A. AI in Health: State of the Art, Challenges, and Future Directions. Yearb Med Inform. 2019 Aug 1;28(1):16–26.
20. Tran VT, Riveros C, Ravaud P. Patients’ views of wearable devices and AI in healthcare: findings from the ComPaRe e-cohort. NPJ Digit Med. 2019 Dec 1;2(1).
21. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med. 2019 Jan 1;25(1):37–43.
22. Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: Addressing ethical challenges. PLoS Med. 2018 Nov 1;15(11).
23. Rieke N, Hancox J, Li W, Milletarì F, Roth HR, Albarqouni S, et al. The future of digital health with federated learning. NPJ Digit Med. 2020 Dec 1;3(1).
24. Triantafyllidis AK, Tsanas A. Applications of Machine Learning in Real-Life Digital Health Interventions: Review of the Literature. J Med Internet Res. 2019 Apr 1;21(4).
25. Holzinger A, Langs G, Denk H, Zatloukal K, Müller H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul 1;9(4).