Roadblocks in AI identified by the COVID-19 pandemic

by Mathias Unberath | Jun 9, 2020

The current pandemic has provided an opportunity to observe the resilience of every link in the healthcare chain, and the AI link appears to have given way.

Image credit: Getty Images

After nearly 60 years on the test bench, many would have argued that artificial intelligence (AI) for clinical decision support had — finally — transitioned to the bedside. The popular and scientific press discussed learning-based algorithms that accurately forecast the onset of septic shock, reported on learning-based pattern recognition methods capable of classifying skin lesions with dermatologist-level accuracy, and praised AI-based breast cancer screening tools that outperformed radiologists by a fairly large margin, among other accomplishments.

Today, after the outbreak of a novel respiratory disease, COVID-19, this rather sunny outlook has become overcast. The COVID-19 global pandemic, while certainly a rare worst-case scenario, provided an opportunity to observe the resilience of every link in the healthcare chain. And, though heavily stressed, most links of this chain withstood the stress test of the first surge of COVID-19, while the highly acclaimed AI link appeared to give way. This breakdown occurred despite immediate opportunities for AI-assisted tools to have a positive impact on patient outcomes, e.g., via personalized treatment decision support or automated monitoring.

The notable absence of AI-based clinical decision support tools for COVID-19 during this first wave of the pandemic means there are still needs, and opportunities, to foster AI readiness.

Because the clinical needs and opportunities are evolving with the pandemic, for AI tools to succeed we need both a deep understanding of the clinical use case and effective mechanisms to accrue and capitalize on relevant data. While clinician-engineer teams, like ours in the Malone Center at Johns Hopkins University, are well positioned to rapidly identify needs as they emerge and devise solutions, developing and deploying those solutions is usually slow.

The reasons for this slow pace are diverse and include difficulties in collecting meaningful data on the target population, lengthy ethical and regulatory review and approval processes, and infrastructural shortcomings that complicate secure data sharing and computing. Several recent developments seek to address these roadblocks. For example, the FDA has requested feedback on a regulatory framework for AI-based software as a medical device, and the Office for Human Research Protections organized a workshop series that touched on potential operational solutions to the challenges of ethics and human subjects protection in “big data research”.

As a community, we should take the time to identify and analyze the organizational, institutional, and regulatory hurdles that this healthcare crisis has highlighted, as well as the solution paths that emerged to bring AI-based systems to the bedside. These insights should contribute to an open discussion of how we can improve the AI readiness of current practices and protocols. After all, no one knows when the next severe test will arrive or what it will look like. Let’s be prepared!

Mathias Unberath, assistant research professor in the Department of Computer Science at Johns Hopkins University
