Artificial Intelligence (AI) refers to a computer mimicking human cognitive function. Machine Learning (ML) is a strategy for developing AI in which the computer’s AI algorithms are continuously and automatically improved through the use of new data. In Care1’s situation, the goal of our AI is to mimic the function of a human ophthalmologist (OMD) in making diagnostic and treatment recommendations for patients. We created our AI by utilizing ML, whereby the continuous input of new patient data continuously improves the performance of our AI.
An artificial neuron represents, at the most basic level, how AI makes a single decision. Data comes in as an input (x). The question then becomes how we can utilize ML to create a mathematical function using weights (w), bias (b), a summation function (Σ), and an activation function (f) to ultimately reach the output (y) that we want.
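As a minimal sketch (not Care1's actual implementation), a single artificial neuron can be written in a few lines of Python, using the symbols from the text: inputs x, weights w, bias b, a summation step, and an activation f. The input values and weights below are invented for illustration.

```python
import math

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation to produce the output y."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b   # summation step
    return 1.0 / (1.0 + math.exp(-z))              # activation f (sigmoid)

# Example: two inputs with illustrative weights and bias
y = neuron([0.7, 0.3], [2.0, -1.0], b=0.5)         # output between 0 and 1
```

The ML process described below is precisely the search for values of w and b that make this function produce the outputs we want.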
ML strategies for an artificial neuron can be grouped into (a) supervised learning, where inputs and outputs are known and provided to the artificial neuron in a training environment, or (b) unsupervised learning, where no labels are available in the training set and the algorithm must discover structure in the data on its own. Care1 uses supervised learning.
Example of Supervised Learning in Eyecare by an Artificial Neuron
Let's use a hypothetical example, where we want to use ML to create an artificial neuron that can assist us with deciding when to treat a patient for glaucoma, based on the cup:disc ratio of the optic disc within the eye. (This is a purely hypothetical example, because in reality, we make treatment decisions based on numerous variables, not solely the cup:disc ratio.) These are example photographs of optic nerves with cup:disc ratios ranging from 0.0 to 1.0.
These are the mathematical steps that we would take to implement ML to solve the above question:
Step 1: Control for all other variables, such as age, IOP, VF, OCT, etc. (this is controlling for the weights, w, other than the single weight we are trying to solve for)
Step 2: Obtain a large set of patients and photograph all of their eyes.
Step 3: Label all the eyes with their cup:disc ratio (this is x, the input)
Step 4: Show the photos to different eye doctors, and obtain a consensus recommendation for each with regard to whether or not they would start medication for each of the patients (this is y, the output)
Step 5: Create mathematical formulas which continually self-adjust the factors w, b, Σ and f until a mathematical function is created which can reliably predict whether the eye doctor will treat or not, based on the cup:disc ratio (this is the machine-learning process)
Step 6: Continuously test the resulting mathematical function on additional patients in different clinical situations, comparing the mathematically predicted patient recommendation against what doctors acting as independent validators recommend (this is the clinical validation). Modify the function to progressively improve accuracy.
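The steps above can be sketched as a short Python program. This is a toy illustration only: the cup:disc ratios, treat/observe labels, learning rate, and iteration count are all invented, and real decisions depend on many more variables.

```python
import math

# Steps 3 & 4: labelled inputs (cup:disc ratio) and outputs (1 = treat, 0 = observe)
data = [(0.3, 0), (0.4, 0), (0.5, 0), (0.6, 0),
        (0.7, 1), (0.8, 1), (0.9, 1), (1.0, 1)]

# Step 5: continually self-adjust w and b (gradient descent on a sigmoid neuron)
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in data:
        pred = 1.0 / (1.0 + math.exp(-(w * x + b)))  # summation + activation
        w -= 0.5 * (pred - y) * x                    # nudge weight toward the label
        b -= 0.5 * (pred - y)                        # nudge bias toward the label

# Step 6: the learned function can now be validated on unseen cases
def predict(cup_disc_ratio):
    return 1.0 / (1.0 + math.exp(-(w * cup_disc_ratio + b)))
```

After training, `predict` returns a probability of treatment that rises with the cup:disc ratio, which is exactly the kind of learned function Step 6 would then validate against independent doctors.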
Artificial Neural Networks (ANN)
An ANN is a collection of multiple artificial neurons. It mimics the brain, which is able to make very complex decisions yet is built from billions of relatively simple neurons.
Deep Neural Network (DNN)
A DNN is a collection of multiple layers of ANNs. Traditionally, an ANN is described as “deep” if it has three or more layers. The entire DNN becomes the learned algorithm.
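The layering idea can be sketched as follows. This is a toy example, not Care1's architecture; the layer sizes and weights are placeholders.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each neuron takes every input (weighted sum + bias + sigmoid)."""
    return [1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
            for ws, b in zip(weights, biases)]

def deep_network(x, layers):
    """Pass the input through each layer in turn; with three or more
    layers, the stacked function is conventionally called 'deep'."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Illustrative 3-layer network: 2 inputs -> 3 neurons -> 3 neurons -> 1 output
net = [([[0.1, -0.2], [0.3, 0.4], [0.2, 0.2]], [0.0, 0.1, -0.1]),
       ([[0.5, -0.5, 0.2], [0.1, 0.2, 0.3], [-0.2, 0.4, 0.1]], [0.0, 0.0, 0.1]),
       ([[0.6, -0.3, 0.2]], [0.0])]
```

Each layer reuses the same simple neuron computation; complexity comes only from stacking and from the learned values of the weights and biases.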
In Care1’s situation, our inputs are the 100+ data points acquired from the patient at each visit, multiplied by the number of visits the patient has ever had on our platform. The outputs are not only the recommended diagnosis and treatment plans, but also how these recommendations are worded in the final consultation letter from the OMD to the referring primary care doctor.
Medically Intelligent Deep Learning (MIDL)
Care1 software is designed in-house, specifically with medical applications in mind.
The primary limitation of existing AI in eyecare is its dependence on off-the-shelf AI design patterns, which rely on purely mathematical methods to match inputs (e.g. photos) with outputs (e.g. diagnoses). This strategy works only in industries where 100% AI accuracy is not needed. For the same reason, companies researching self-driving cars do not build their AI using off-the-shelf AI software.
MIDL is a proprietary approach, created by Care1’s in-house engineering team, for creating customized medical AI that can reach near-100% accuracy for patient care. Roughly, the MIDL steps are as follows:
Step 1: Clinical guidelines and fundamental medical knowledge are utilized to architect the deep neural network at a high level.
Step 2: Utilizing fuzzy set theory, ML fills in the blanks within the architecture created in Step 1.
Step 3: Automated self-modification of the learned function is allowed only after approval by doctors.
One of the key differences between the approach needed for medical AI and that found within generic off-the-shelf AI software is that medical care has certain clinical truths which cannot be violated. Some of these truths are published in clinical guidelines and can be identified by reading them, but others are harder to elucidate: fundamental knowledge that surgeons acquire by seeing tens of thousands of patients over 12-15 years of training.
For example, one hard clinical truth in ophthalmology would be that a glaucoma patient with an intraocular pressure (IOP) of <5 mmHg should not require further IOP-lowering medications. Another clinical truth might be that a patient with an IOP >50 mmHg needs to start taking oral acetazolamide pills, unless they are either blind or have a sulfa allergy.
Hard clinical truths like these should not be learned by medical AI, but instead hard-coded into the framework of the AI itself. Worse, anomalies within data sets can sometimes cause off-the-shelf AI design patterns to learn the opposite of a clinical truth and potentially cause harm to the patient. This is one of the reasons why the AI strategies employed by our competitors without in-house AI development teams will never reach the levels of accuracy which Care1 has achieved.
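A minimal sketch of what "hard-coding a clinical truth" means in software, using the two example truths above. The function name, thresholds, and fallback model are illustrative placeholders, not Care1's actual rules.

```python
def recommend(iop_mmhg, model_suggestion, blind=False, sulfa_allergy=False):
    """Apply hard clinical truths before trusting the learned suggestion."""
    if iop_mmhg < 5:
        # Truth 1: at IOP < 5 mmHg, no further IOP-lowering medication.
        return "no further IOP-lowering medication"
    if iop_mmhg > 50 and not (blind or sulfa_allergy):
        # Truth 2: at IOP > 50 mmHg, start oral acetazolamide,
        # unless the patient is blind or has a sulfa allergy.
        return "start oral acetazolamide"
    # Otherwise, fall back to whatever the learned DNN suggests.
    return model_suggestion
```

Because the rules sit outside the learned function, no amount of anomalous training data can cause the system to violate them.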
Another key difference in approach between medical AI and that found within off-the-shelf AI software is the role of doctors in monitoring self-modification of the DNN.
ML becomes more accurate when data sets are extremely large (in theory, maximum accuracy is achieved with an infinitely large data set), but data sets within clinical medicine cannot be extremely large - there are a limited number of patients, each patient needs to sign a consent form, and each patient requires counseling with a doctor. When a data set cannot be infinitely large, there is always the possibility that an AI left to modify itself freely will self-generate errors.
The most common types of limitations/errors generated by AI include:
(a) Bias: when inaccurate training data leads to improper changes to branches of the DNN.
(b) Overfitting: when overcomplexity in the learned function is inadvertently created by too many successive iterations of DNN modification.
(c) Information gain: when features measured at higher levels of detail in the data set tend to be over-weighted by the DNN.
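Overfitting in particular is easy to demonstrate on a small data set. In this hypothetical illustration (all numbers invented), a degree-7 polynomial fits 8 noisy training points exactly, yet predicts far worse on held-out points than a simple straight line does.

```python
# Training data: true relationship is y = x, plus small measurement noise
train_x = list(range(8))
noise = [0.5, -0.5, 0.4, -0.4, 0.3, -0.3, 0.2, -0.2]
train_y = [x + n for x, n in zip(train_x, noise)]

def interpolate(x):
    """Overfit model: degree-7 Lagrange interpolation through all 8 points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(train_x, train_y)):
        basis = 1.0
        for j, xj in enumerate(train_x):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

def line(x):
    """Simple model: degree-1 least-squares fit of the same points."""
    n = len(train_x)
    mx, my = sum(train_x) / n, sum(train_y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(train_x, train_y))
             / sum((xi - mx) ** 2 for xi in train_x))
    return my + slope * (x - mx)

# Held-out points lie between the training points, on the true line y = x
val_x = [x + 0.5 for x in range(7)]

def mse(model):
    """Mean squared error of a model on the held-out points."""
    return sum((model(x) - x) ** 2 for x in val_x) / len(val_x)
```

The overfit polynomial has essentially zero training error but a much larger held-out error than the line, which is exactly why unchecked self-modification of a learned function is dangerous.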
Due to the significant consequences of providing improper or harmful care to patients, we cannot risk errors of bias, overfitting or information gain. We therefore do not allow our DNN to self-modify based on ML findings without first going through an approval and validation process with doctors and surgeons.
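A minimal sketch of this doctor-gated approval process: ML may propose an updated function, but it replaces the live one only after human approval. The class and method names are invented for illustration.

```python
class GatedModel:
    """A learned function that cannot self-modify without human approval."""

    def __init__(self, live_fn):
        self.live_fn = live_fn      # the validated function used for patients
        self.proposed_fn = None     # a pending ML-proposed update

    def propose(self, new_fn):
        """ML suggests an updated function; nothing changes for patients yet."""
        self.proposed_fn = new_fn

    def approve(self):
        """A doctor validates the proposal, promoting it to production."""
        if self.proposed_fn is not None:
            self.live_fn = self.proposed_fn
            self.proposed_fn = None

    def predict(self, x):
        return self.live_fn(x)
```

Until `approve` is called, every prediction still comes from the previously validated function, so a faulty ML update can never silently reach patients.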
Examples of Care1’s Level of Detail
It is beyond the scope of this memo to review the specific algorithms that Care1 utilizes. To give a sense of the level of detail of our AI, we can state that our software analyzes data as obscure as whether a doctor used a question mark in their office notes (potentially implying that the patient case may be more challenging than normal) or whether a letter was received from a GP (implying that consultation letters could have improved readability if they utilized fewer eye-specific terms and more general medicine terms).
Summary: Care1's AI is custom-built by our in-house engineering team. Existing methods of creating AI are insufficient for the healthcare field, as they do not lead to AI accuracy close enough to 100%. Our method revolves around a proprietary design pattern known as MIDL, which focuses on (a) building a clinical framework based on published clinical guidelines and unpublished fundamental medical and surgical knowledge, and (b) ensuring that doctor supervision is in place to avoid harm to patients from the mathematical limitations of AI theory.