This is the direction in which the federal government is taking us. We must oppose it. Action Alert!
Late last year, the FDA released a draft guidance document that will loosen the reins on some types of medical software. This development is part of a larger trend in which artificial intelligence (AI) and other technologies take on a more central role in the doctor/patient relationship. It is “evidence-based” medicine, as determined by government regulators.
The FDA’s guidance deals with “clinical decision support” (CDS) software: programs that “analyze data within electronic health records to provide prompts and reminders to assist health care providers in implementing evidence-based clinical guidelines at the point of care.” The 21st Century Cures Act excluded from the definition of medical devices any CDS that meets four statutory criteria, meaning such software is not subject to certain FDA regulations. Further, the FDA laid out a policy of risk-based enforcement discretion for CDS that do not meet all four criteria, but that the agency considers low-risk.
The issue is less which types of software the FDA considers medical devices than the broader trend of medicine by algorithm. Make no mistake: CDS is the future of medicine. The AI health market is projected to hit $6.6 billion by 2021, up from $600 million in 2014.
AI supported by machine learning algorithms is already being integrated into the practice of oncology. The most common application of this technology is the recognition of potentially cancerous lesions in radiology images. But we are rapidly moving beyond this to a point where AI is used to make clinical decisions. IBM has developed Watson for Oncology, a program that uses patient data and national treatment guidelines to guide cancer management. As we’ve seen, Google and other tech giants are greedy for our health data so they can develop more of these tools.
Technology should absolutely be harnessed to improve medicine and clinical outcomes, but AI cannot replace the doctor/patient relationship. There are obvious ethical questions. First is the lack of transparency. The algorithms, particularly the “deep learning” algorithms currently used to analyze medical images, are all but impossible to interpret or explain, even for the people who build them. As patients, we have a right to know how and why a decision about our health is made; when that decision is made by an algorithm, we are deprived of that right. Further, when a mistake is made, who is accountable? The algorithm, or the doctor? How do we hold an algorithm accountable?
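To make the transparency problem concrete, compare a human-readable clinical rule with even a toy neural network. The sketch below is our own illustration, not any real diagnostic system: the referral rule, its thresholds, and the tiny untrained network are all invented, standing in for production models whose parameters number in the millions.

```python
# A minimal sketch of the transparency problem (illustration only).
import numpy as np

rng = np.random.default_rng(0)

# A human-readable clinical rule: the "why" is right there in the code.
def rule(age, blood_pressure):
    return "refer" if age > 65 and blood_pressure > 140 else "no referral"

# A tiny neural network making the same kind of call. Its "reasons" are
# nothing but these matrices of numbers (here random, i.e. untrained).
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=8), rng.normal()

def network(age, blood_pressure):
    x = np.array([age / 100, blood_pressure / 200])  # crude normalization
    hidden = np.maximum(0, x @ W1 + b1)              # ReLU layer
    score = hidden @ W2 + b2
    return "refer" if score > 0 else "no referral"

print(rule(70, 150))     # a decision plus an inspectable reason
print(network(70, 150))  # a decision only; ask "why?" and you get...
print(W1)                # ...a matrix of floats
```

The rule can be read, questioned, and appealed; the network’s only “explanation” is arrays of numbers, and real medical models are vastly larger than this toy.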
There are other problems with using machine learning in medicine. It can introduce bias, for example by predicting a greater likelihood of disease on the basis of race or gender when those are not causal factors. AI also fails to account for the assumptions clinicians routinely make. The University of Pittsburgh Medical Center used a model to evaluate the risk of death for pneumonia patients arriving in its emergency department. The model reported that mortality decreased when patients were 100 years old or had a diagnosis of asthma. Sound ridiculous? Rather than actually being at low risk of death, these patients’ risk was so high that they were immediately given antibiotics, before they were even registered in the electronic medical record—throwing off the AI’s analysis and producing a ridiculous conclusion.
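To see how a model learns exactly the wrong lesson from data like this, consider the following minimal Python sketch. It is our illustration, not the Pittsburgh model: the patient counts, the 20% asthma rate, and the effect of early treatment are all invented for demonstration. The point is that when high-risk patients are treated before the outcome is recorded, the data make them look safe.

```python
# A minimal sketch of how a hidden intervention teaches a model the
# wrong lesson (invented numbers, illustration only).
import random

random.seed(0)

def simulate_patient():
    asthma = random.random() < 0.2        # assume 20% of patients have asthma
    base_risk = 0.25 if asthma else 0.10  # asthma truly RAISES mortality risk
    # Clinicians know this, so asthma patients get immediate aggressive care,
    # which cuts their risk sharply. The dataset never records this step.
    risk = base_risk * (0.2 if asthma else 1.0)
    died = random.random() < risk
    return asthma, died

patients = [simulate_patient() for _ in range(100_000)]

def mortality(group):
    return sum(died for _, died in group) / len(group)

with_asthma = [p for p in patients if p[0]]
without_asthma = [p for p in patients if not p[0]]

print(f"Observed mortality with asthma:    {mortality(with_asthma):.3f}")    # ~0.05
print(f"Observed mortality without asthma: {mortality(without_asthma):.3f}") # ~0.10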
There is also a danger that medicine becomes even more monolithic than it is today. One commentator put it this way:
The machines do not and cannot verify the accuracy of the underlying data they are given. Rather, they assume the data are perfectly accurate, reflect high quality, and are representative of optimal care and outcomes. Hence, the models generated will be optimized to approximate the outcomes that we are generating today.
AI that is programmed to incorporate current disease guidelines and best practices into its analysis will suffer from the same shortcomings we see in conventional medicine today. In other words, if American Medical Association guidelines are the standard for the “evidence-based medicine” that AI is meant to deliver, the analysis begins with the biases of conventional medicine already baked in. Sophisticated data analysis doesn’t get us anywhere if we’re asking the wrong questions in the first place.
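A toy sketch, again with invented details, makes this concrete. Here a hypothetical treatment guideline (our own made-up rule, not an actual AMA guideline) is used to label the training data, and the “model” simply memorizes those labels, which is effectively what any learner that fits this data well would do:

```python
# Sketch of the "garbage in, garbage out" point: if training labels come
# from today's guidelines, the best any model can do is echo those
# guidelines. The guideline below is a made-up illustration.
def current_guideline(age: int, cholesterol: int) -> str:
    # Hypothetical rule with a built-in blind spot: it ignores
    # cholesterol entirely for patients under 50.
    return "treat" if age >= 50 and cholesterol >= 240 else "no treatment"

# Training data labeled by the guideline itself.
training = [(age, chol, current_guideline(age, chol))
            for age in range(20, 90, 5) for chol in range(150, 320, 10)]

# A "model" that memorizes the labels, standing in for any learner
# that fits this data well.
model = {(age, chol): label for age, chol, label in training}

# The model reproduces the guideline's blind spot exactly: a 40-year-old
# with very high cholesterol is still labeled "no treatment".
print(model[(40, 300)])  # -> "no treatment"
```

However much data we feed it, such a model can never be better than the guideline that generated its labels; it inherits the guideline’s blind spots exactly.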
Action Alert! Write to the FDA, the American Hospital Association, and the Federation of State Medical Boards, telling them you oppose the use of AI to make clinical decisions. Please send your message immediately.