4 Reasons the Healthcare Industry is Ready for Machine Learning

Understanding the landscape of health-tech and artificial intelligence

SudoPurge
Geek Culture


Artificial intelligence and machine learning were applied to healthcare as early as the 1960s and 1970s. Stanford developed an expert system called MYCIN that could identify bacteria causing severe infections and propose an appropriate therapy. It recommended an acceptable therapy in about 69% of cases, which was actually better than the human experts it was evaluated against at the time.


In the 1980s, the University of Pittsburgh developed the INTERNIST-1/Quick Medical Reference (QMR) model. It could diagnose which of hundreds of diseases or conditions a patient had, based on a long list of reported symptoms. At a high level, it was later reformulated as a Bayesian network, although the original system was more heuristic in nature. Around fifteen person-years were spent deriving the prior probabilities for each of these diseases and symptoms.

INTERNIST-1/QMR Model Schema. Miller et al. (1986), Shwe et al. (1991).

But INTERNIST-1 was never actually integrated into the clinical workflow. Manually collecting and entering the data in a structured form and then producing a diagnosis was much slower than a human expert, so the system fit poorly into day-to-day practice. There was also no real learning going on in the model: it was artificial intelligence, but not machine learning. It would not work in a different part of the world with the same hand-derived priors, because the prior probabilities of diseases and symptoms vary from region to region and culture to culture; they would have to be derived again from scratch for each setting. In other words, it was difficult to generalize, effectively overfitted to the place where it was first developed. The main leverage was human expertise and domain knowledge, which is tedious, expensive, and unreliable to encode.
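To make the idea concrete, here is a minimal sketch, in Python, of the kind of diagnosis-from-priors reasoning the Bayesian-network reformulation relied on. Every number and disease name below is made up purely for illustration; in the real system those values took years of expert effort to estimate, and they are exactly what would have to be re-derived for a new region.

# Hypothetical sketch of diagnosis from priors and symptom likelihoods.
# All numbers are invented for illustration only.

# Prior probability of each disease in the population being modeled.
priors = {"flu": 0.05, "malaria": 0.001}

# P(symptom present | disease), per disease.
likelihoods = {
    "flu":     {"fever": 0.90, "chills": 0.40},
    "malaria": {"fever": 0.95, "chills": 0.85},
}

def posterior(observed_symptoms):
    """Naive-Bayes-style posterior over diseases given observed symptoms."""
    scores = {}
    for disease, prior in priors.items():
        p = prior
        for symptom in observed_symptoms:
            p *= likelihoods[disease].get(symptom, 0.01)  # small default for unmodeled symptoms
        scores[disease] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

print(posterior({"fever", "chills"}))
# Swapping in malaria-endemic priors (e.g. priors["malaria"] = 0.05) flips the
# ranking, which is exactly why the hand-derived priors did not transfer.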

In the 1990s, a number of papers were published that used simple neural networks in medicine. As illustrated below, at most four layers were involved, and most used the bare minimum of three. The number of features was low. Data was collected, structured, and curated manually, specifically for machine learning purposes, so the number of samples was also quite small. Training data was scarce, and the models did not fit well into the clinical workflow of the time. And just like INTERNIST-1, they were difficult to generalize.

Neural Network Studies in Medical Decision Making. Penny and Frost (1996)
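For a sense of scale, here is a rough, hypothetical sketch of what one of those "three-layer" networks looks like in modern terms: one small hidden layer, a handful of hand-curated features, and a tiny synthetic dataset standing in for the manually collected samples of the era.

# Toy three-layer network (input, one hidden layer, output) on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                  # ~100 samples, 6 hand-picked features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy outcome label

# Input layer -> one hidden layer of 8 units -> output layer.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))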

These early endeavors never really came to fruition, but they did hint at the potential of integrating ML with healthcare. Over the last ten years, however, a number of variables have changed that have drastically altered the landscape of the healthcare industry.

1. Boom in Data

After the financial crisis of 2008, the US government issued stimulus packages of around $30 billion to hospitals for adopting electronic medical records. This encouraged more and more hospitals to maintain large amounts of patient data in a computer-friendly form. Large datasets like PhysioNet and DeepLesion were released, containing patient vitals, clinical notes, imaging data, blood test results, and treatment outcomes. Collaborative efforts by researchers around the world have led to curated online databases (such as NCBI, HMDB, and DrugBank) of genomic, proteomic, metabolomic, and drug-molecule data. Even unconventional sources of medical data, like social media (for mental health indicators), are now widely used in clinical research, thanks to open-source initiatives that let researchers quickly build on each other's work. This was one of the first times that policy decisions reshaped the healthcare data landscape in a way that would ultimately open the door to intensive data-driven machine learning research. Availability of suitable data is one of the major bottlenecks in building accurate and impactful ML products. This article discusses why.

Percent of non-Federal acute care hospitals with adoption of at least a Basic EHR (Electronic Health Record) with notes system, and possession of a certified EHR. (Henry et al., ONC Data Brief, May 2016)

2. Standardization of Data

However, data in itself is of little use, of course. Standardization of data across the field is another cornerstone that has altered the healthcare landscape. ICD-10 is an international system for classifying and coding diseases. National Drug Codes (NDC) classify and code drugs in a structured taxonomy so that stakeholders can easily access and use them. LOINC (Logical Observation Identifiers Names and Codes) is a standard for identifying medical lab tests and observations. Formats like FASTA standardize how genomic and proteomic sequence data are stored and shared.
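To see why a shared format matters, here is a minimal sketch that reads sequences from a FASTA file. Because the convention is so simple (a ">" header line followed by sequence lines), a few lines of code can consume data produced by any lab in the world; the file name used here is hypothetical.

# Minimal FASTA reader: a ">" line starts a record, following lines hold the sequence.
def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
        if header is not None:
            yield header, "".join(chunks)

# Hypothetical file name, purely for illustration.
for name, seq in read_fasta("example_proteins.fasta"):
    print(name, len(seq))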

The UMLS (Unified Medical Language System) standardizes medical concepts across vocabularies, which dramatically eases communication between software systems and leads to interoperable systems in the medical industry. The OMOP Common Data Model is another convention that lets researchers map data from sources, each with its own intricacies, into a common schema, making different data structures interconvertible. Nonprofit organizations like the International Society for Biocuration work to ensure the availability of curated biological data that is interoperable and follows shared standards and best practices. All of these standardizations enable better communication and compatibility within the industry. Think of how the development of language enabled cooperation and unity among early civilizations. In the same way, biological data produced by different scientific communities can now easily talk to each other and work towards something greater.

3. Financial Opportunities

Tech advancements rarely stick around without interest from investors, and these financial opportunities have not gone unnoticed. Around $5.6 billion in venture funding was reported in 2017 alone, a figure predicted to nearly double by 2021. Thousands of digital healthcare startups with a primary focus on AI and ML have been sprouting up across the US, Canada, Europe, and India. Insurance companies, too, are incentivized to predict which customers are more likely to fall sick.

In fact, data is so essential in this sector that firms are racing to secure rights to it. IBM spent around $3.6 billion acquiring Merge Healthcare and Truven Health Analytics, both of which came with tremendous amounts of medical imaging and health insurance claims data. Roche purchased Flatiron Health for around $1.9 billion to acquire large amounts of electronic health records in oncology.

4. Technological Advancements

Lastly, the most obvious change has been the immense development of new machine learning algorithms and hardware. Much deeper and more complex neural networks such as convolutional and recurrent nets, semi-supervised and unsupervised learning algorithms, different variants of stochastic gradient descent, and the ability to learn from high-dimensional data all allow models to extract as much knowledge as possible from the many kinds of medical data. In 2020, DeepMind's AlphaFold 2 effectively solved the 50-year-old problem of predicting a protein's folded structure from its sequence alone, a massive milestone for artificial intelligence and the life sciences.
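As a rough illustration of how little code these architectures now require, here is a minimal convolutional network sketch in PyTorch for a hypothetical single-channel medical-imaging task, trained with an SGD variant. The layers and sizes are illustrative only and are not tied to any real dataset or published model.

# Minimal CNN for a hypothetical 64x64 grayscale, two-class imaging task.
import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)             # (N, 32, 16, 16) for 64x64 inputs
        return self.classifier(x.flatten(1))

model = TinyMedicalCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGD with momentum
dummy_batch = torch.randn(8, 1, 64, 64)  # random stand-in for real images
logits = model(dummy_batch)
print(logits.shape)                      # torch.Size([8, 2])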

Even with the general rise in data availability, rare diseases by definition have few recorded cases, so good data is still lacking for many medical problems. Much of the data is also owned by large corporations, leaving independent and smaller firms with data deficiencies. Generative adversarial networks (GANs) have been used to generate synthetic medical data, such as radiology images, which can help work around this data-availability bottleneck, as sketched below.
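The core GAN idea fits in a few dozen lines. The sketch below is a deliberately simplified toy (fully connected networks on random stand-in data), not the convolutional architectures actually used for radiology images, but it shows the generator-versus-discriminator training loop that makes synthetic data generation possible.

# Toy GAN skeleton: a generator maps noise to fake "images" while a
# discriminator learns to tell real from fake. Fully connected nets on 32x32
# stand-in data; real medical GANs use far larger convolutional architectures.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, img_dim) * 2 - 1  # random stand-in for a real batch

for step in range(3):  # a few illustrative steps, not real training
    # Train discriminator: push real toward 1, fake toward 0.
    fake = G(torch.randn(16, latent_dim)).detach()
    d_loss = loss_fn(D(real_images), torch.ones(16, 1)) + \
             loss_fn(D(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train generator: make the discriminator label fakes as real.
    fake = G(torch.randn(16, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After (much longer) training, G(noise) would yield synthetic samples that can
# augment scarce datasets, with careful validation before any clinical use.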

According to Yuval Noah Harari, we are at the dawn of the age of biotechnology. There is of course still a long way to go, but curiosity is a fascinating thing. Proactive, preventive medicine is likely to become the new norm, with people actively monitoring their health in order to intervene early. That means more and more data produced by devices such as phones and wearables, and more structured, labeled data opens the door to more opportunities for machine learning.

References:

  1. Penny, W., & Frost, D. (1996). Neural networks in clinical medicine. Medical Decision Making, 16(4), 386–398. https://doi.org/10.1177/0272989X9601600409
  2. Middleton, B., Shwe, M. A., Heckerman, D. E., Henrion, M., Horvitz, E. J., Lehmann, H. P., & Cooper, G. F. (1991). Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base. II. Evaluation of diagnostic performance. Methods of Information in Medicine, 30(4), 256–267. https://doi.org/10.1055/s-0038-1634847
  3. Adoption of Electronic Health Record Systems among U.S. Non-Federal Acute Care Hospitals: 2008–2015. (n.d.). Retrieved June 20, 2021, from https://dashboard.healthit.gov/evaluations/data-briefs/non-federal-acute-care-hospital-ehr-adoption-2008-2015.php
  4. Sontag, D. (Spring 2019). What Makes Healthcare Unique? Lecture 1, Machine Learning for Healthcare, MIT 6.S897.

P.S. For more short and to-the-point articles on Data Science, Programming, and how a biologist navigates his way through the data revolution, consider following my blog.

With thousands of videos being uploaded every minute, it's important to filter them so that you consume only the highest-quality content. I will email you educational videos, hand-picked by me, on the topics you are interested in learning about. Sign up here.

Thank you for reading!
