The science is unstoppable, and so is the flow of funding. But at least one roadblock stands in the way: a big, bureaucratic Cold War–era regulatory apparatus that could prove to be fundamentally incompatible with the very nature of artificial intelligence.
* * *
Every professional subculture has its heroes. At the Food and Drug Administration, the greatest hero is Frances Oldham Kelsey, who in the early 1960s stubbornly refused to approve Kevadon, a sedative that alleviated symptoms of morning sickness in pregnant women. As mothers in other countries would learn, the drug—better known by its generic name, thalidomide—could cause horrible birth defects. Kelsey’s vigilance in the face of heavy corporate pressure helped inspire the rigorous evaluation model that the FDA now applies to everything from pharmaceuticals to hospital equipment to medical software.
At the very core of this model is the assumption that any product can be clinically tested, produced, marketed, and used in a fixed, unchanging form. That’s why the blood-pressure machines many people use in pharmacies look a lot like the ones they used a decade ago. Deviation from an old FDA-approved model often requires an entirely new approval process, with all the attendant costs and delays.
But that build-and-freeze model isn’t the way AI software development typically works—especially when it comes to machine learning. These systems are essentially meta-algorithms that spit out new operational products every time fresh data is added—producing, in effect, a potentially infinite number of newly minted “medical devices” every day. (A nonmedical example would be the speech-recognition programs that gradually teach themselves to better understand a user’s voice.) This dynamic is creating a culture gap between the small, nimble medical-software boutiques building these technologies and the legacy regulatory system that developed to serve large corporate manufacturers.
Consider, for instance, Cloud DX: This Canadian company uses AI technology to scrutinize the audio waveform of a human cough, which allows it to detect asthma, tuberculosis, pneumonia, and other lung diseases. In April, the California-based XPRIZE foundation named Cloud DX its “Bold Epic Innovator” in its Star Trek–inspired Qualcomm Tricorder competition, in which participants were asked to create a single device that an untrained person could use to measure their own vital signs. The company received a $100,000 prize and lots of great publicity—but it doesn’t yet have FDA approval to market this product for clinical applications. And getting such approval may prove difficult.
Which helps explain why many health-software innovators are finding other, creative ways to get their ideas to market. “There’s a reason that tech companies like Google haven’t been going the FDA route [of clinical trials aimed at diagnostic certification],” says Robert Kaul, the founder and CEO of Cloud DX. “It can be a bureaucratic nightmare, and they aren’t used to working at this level of scrutiny and slowness.” He notes that merely obtaining ISO 13485 certification, the quality-management baseline underpinning the FDA’s device standards, can take two years and cost seven figures. “How many investors are going to give you that amount of money just so you can get to the starting line?”