Moorfields Eye Hospital, AI & paradigm shifts
By Ian Ellul
The application of AI in medicine has always intrigued clinicians, so much so that Elsevier started publishing the journal Artificial Intelligence in Medicine way back in 1989. To put this into perspective, that was the period which saw the first years of marketing of fluoxetine and lovastatin (the first statin), as well as the fall of the Berlin Wall. In those early issues, Artificial Intelligence in Medicine carried papers such as ‘Machine over mind’, ‘‘Deep’ models and their relation to diagnosis’ and ‘Expert systems in laboratory medicine and pathology’.
Here we are, merely 30 years later, championing a machine which can learn to interpret eye scans with an error rate of only 5.5%! Indeed, in the study conducted by Moorfields Eye Hospital NHS Foundation Trust, University College London and Google’s DeepMind Technologies Limited, published in the journal Nature Medicine,1 the authors found that the system can learn to read complex eye scans and accurately detect more than 50 eye conditions. To put it simply, the London-based DeepMind created an algorithm enabling a computer to analyse optical coherence tomography (OCT), a high-resolution 3D scan of the retina. Approximately 15,000 anonymised scans were used to ‘train’ the machine to read OCTs. The next step pitted machine against mankind: the AI and eight clinicians were asked to triage scans from 1,000 patients whose clinical outcomes were already known. The AI performed as well as leading retina specialists, with an error rate of 5.5%. Significantly, the algorithm did not miss a single urgent case.
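To make that evaluation step concrete, the short sketch below (hypothetical Python, not the authors’ code; the referral categories and sample data are illustrative assumptions, not taken from the study) shows how a triage comparison of this kind can be scored: each suggested referral is compared against the gold-standard decision derived from the known outcome, giving an overall error rate and, separately, a count of missed urgent cases.

```python
# Hypothetical sketch of scoring triage decisions against known outcomes.
# Category names and the toy sample are illustrative, not from the study.
from dataclasses import dataclass


@dataclass
class Case:
    predicted: str  # referral suggested by the AI (or a clinician)
    actual: str     # gold-standard referral based on the known clinical outcome


def error_rate(cases: list[Case]) -> float:
    """Fraction of cases where the suggested referral differs from the gold standard."""
    wrong = sum(1 for c in cases if c.predicted != c.actual)
    return wrong / len(cases)


def missed_urgent(cases: list[Case]) -> int:
    """Number of truly urgent cases triaged as anything less than urgent."""
    return sum(1 for c in cases if c.actual == "urgent" and c.predicted != "urgent")


if __name__ == "__main__":
    # Toy sample standing in for the 1,000 evaluation patients.
    sample = [
        Case("urgent", "urgent"),
        Case("routine", "semi-urgent"),  # one disagreement
        Case("routine", "routine"),
        Case("urgent", "urgent"),
    ]
    print(f"Error rate: {error_rate(sample):.1%}")          # 25.0% on this toy sample
    print(f"Missed urgent cases: {missed_urgent(sample)}")  # 0
```

On this kind of scoring, the headline 5.5% corresponds to the overall error rate, while the claim that no urgent case was missed corresponds to the second, stricter measure.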
Although all this seems really exciting, a question naturally comes to mind: what’s the next step? The answer is seeing the AI system through the clinical trial phase and subsequent regulatory approval; if approval is granted, the system will then be available for use across all of Moorfields’ sites. DeepMind is also working with Imperial College London to improve the accuracy of breast cancer screening, and with University College London Hospitals to examine whether AI can differentiate between cancerous and healthy tissue on scans.
Needless to say, the speed of diagnosis and the simultaneous reduction in diagnostic errors make AI a much-needed prioritisation tool. AI also has scope in the training of clinicians. However, it certainly raises a number of ethical and societal questions which need to be addressed, including the validation of AI systems, who is ultimately responsible when AI is used to support decision-making, and how to ensure the security and privacy of potentially sensitive data, to name a few. These have been clearly laid out by the Nuffield Council on Bioethics.
Once again, it seems apt to end this editorial with Nicholson Price’s note in his piece Black-Box Medicine, namely that medicine “already does and increasingly will use the combination of large-scale high-quality datasets with sophisticated predictive algorithms to identify and use implicit, complex connections between multiple patient characteristics.”2 Quo vadis?
References
1. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine 2018 [published online 13 August 2018]. Available from: http://www.nature.com/articles/s41591-018-0107-6
2. Price N. Black-Box Medicine. Harvard Journal of Law & Technology 2015;28(2):420-467.