Investigations vol. 6

Artificial Intelligence at the Frontiers of Medicine

Are we ready for algorithms to meet life sciences?

By Yuchen Shao


In a laboratory at the University of Michigan Medical School, Art Szacik, a graduate student in Biomedical Engineering, stares intently at his computer screen. The display shows cancer-related genetic mutations being analyzed by an artificial intelligence (AI) system, and the rapidly changing data fills him with both excitement and bewilderment. “This is absolutely remarkable,” he murmurs. “The AI has completed in just one hour what would have previously taken us an entire day of analysis.”

Standing beside him, Kaiwen Deng, a bioinformatics PhD student, has spent more time immersed in the intersection of AI and bioengineering and offers a more measured perspective. “Yes, the efficiency gains are impressive, but I wonder if we’re becoming overly reliant on these algorithms,” he reflects. “When AI makes decisions in life sciences, do we truly understand its reasoning process? For instance, explainable models like decision trees provide clear, interpretable pathways for their conclusions, while less explainable forms, such as convolutional neural networks, operate as ‘black boxes’ where the logic behind decisions can be opaque. How much confidence can we place in these results before they’ve been thoroughly validated through experimental work?”
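Deng's contrast between interpretable and opaque models can be made concrete with a toy sketch. The rules and variable names below are entirely hypothetical, not any deployed clinical system; the point is only that a decision tree's prediction carries the exact sequence of tests that produced it, an audit trail a deep network does not readily provide.

```python
# Toy sketch of decision-tree explainability (hypothetical rules).

def classify(node, sample, path=None):
    """Walk the tree; the 'yes' branch means the feature value is >= the threshold."""
    path = [] if path is None else path
    if isinstance(node, str):                  # leaf: a final label
        return node, path
    feature, threshold = node["test"]
    branch = sample[feature] >= threshold
    path.append(f"{feature} {'>=' if branch else '<'} {threshold}")
    return classify(node["yes" if branch else "no"], sample, path)

# Hypothetical triage rules for flagging a genetic variant for expert review.
TREE = {
    "test": ("conservation_score", 0.8),       # evolutionarily conserved site?
    "yes": {
        "test": ("mutation_frequency", 0.01),  # common in the population?
        "yes": "likely_benign",
        "no": "flag_for_review",               # rare AND conserved: suspicious
    },
    "no": "likely_benign",
}

variant = {"conservation_score": 0.95, "mutation_frequency": 0.002}
label, path = classify(TREE, variant)
# 'path' now lists every threshold test behind the label, test by test.
```

A clinician can read the recorded path and check each step against domain knowledge; the same question posed to a convolutional network yields only millions of learned weights.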

This scene vividly captures the current state of AI applications in biomedical research at Michigan: a delicate balance between promising technological innovation on one side and careful ethical consideration on the other.

Reasons to embrace artificial intelligence

The academic community’s growing enthusiasm for artificial intelligence has become increasingly evident in recent years. Take, for instance, the Nobel Prize, arguably science’s most prestigious recognition. The 2024 Nobel Prizes in Physics and Chemistry were both awarded to interdisciplinary experts who have made groundbreaking contributions in their respective fields as well as in artificial intelligence, reflecting the scientific community’s recognition of AI’s impact across disciplines and its anticipated future potential.

Yihan Lei, a computer science student and researcher at the University of Michigan, offers his perspective on AI’s potential and applications: “Without question, AI will significantly accelerate human research across fundamental scientific fields, from mathematics to physics to chemistry. Even if AI occasionally makes mistakes, these can be identified and corrected during the final experimental validation phase. Integrating AI into any research field appears to offer substantial advantages, with potential challenges that can be managed.”

The natural necessity of artificial intelligence in the field of biomedicine

Lei’s sentiments echo those of many others, including advocates for deeper AI integration in biomedical and pharmaceutical research. These proponents believe that the biomedical and pharmaceutical fields face unique challenges that make AI integration particularly crucial. The core issue lies in the nature of healthcare data. According to HealthTech statistics by Brian Eastwood: “The average hospital produces roughly 50 petabytes of data every year. That’s more than twice the amount of data housed in the Library of Congress, and the amount of data generated in healthcare has been increasing at a rate of 47 percent per year.” Moreover, according to research from Precision Health, this data is inherently complex, containing thousands of variables that are often incomplete, inconsistent across providers, or missing entirely. The combination of volume, rapid growth, and inherent complexity makes human processing alone inadequate, creating an urgent need for AI-driven analysis tools.

These challenges play directly to AI’s strengths. AI systems can process data, fill gaps, filter anomalies, and draw preliminary conclusions far faster than human researchers, enabling more efficient diagnoses and personalized treatment plans.

The Precision Health Initiative, launched by the University of Michigan in October 2017, is a prime example of this innovative approach. Its Michigan Genomics Initiative leverages the power of artificial intelligence to collect and organize genetic data from over 80,000 participants, integrating it with corresponding electronic health records to create a robust resource database for future experimental analyses. Furthermore, the initiative continues to explore additional applications of artificial intelligence in the field of biomedical research, pushing the boundaries of what is possible in this exciting domain.

Wider applications of artificial intelligence in biomedicine

In fact, computer scientists like Dr. Barzan Mozafari, Assistant Professor of Computer Science and Engineering at the University of Michigan, believe that helping doctors find better answers is just the beginning. As he suggests, “You can’t replace scientists with machines, at least not soon, but the idea is that the machine should be able to suggest things.”

This vision is already being put into practice, with Insilico Medicine serving as a prime example. The company’s small molecule drug candidate INS018_055, designed to treat idiopathic pulmonary fibrosis (IPF), has advanced to Phase II clinical trials, marking a historic milestone as the first AI-discovered and AI-designed drug to reach this stage.

Given these developments, there seems to be a compelling case for expanding AI’s role in biomedicine. Whether considering the field’s inherent dependence on computational tools to manage its vast and complex datasets, or AI’s impressive achievements thus far, many argue for fewer restrictions on AI development and deployment in biomedical research. However, a significant number of scientists maintain a more cautious and measured stance on this issue.

Potential risks behind artificial intelligence

Jenna Wiens, an Associate Professor of Computer Science and Engineering (CSE) at the University of Michigan, whose primary research interests lie at the intersection of machine learning and healthcare, cautions against relaxing restrictions on artificial intelligence in the biomedical field. Her cautious perspective stems from two key sources: her deep understanding of AI’s underlying logic as a professor specializing in artificial intelligence, and the practical challenges she’s encountered through years of collaboration with healthcare institutions.

For instance, Professor Wiens was involved in evaluating the Epic Sepsis Model, a proprietary AI software designed for early sepsis detection. Her research led to a sobering conclusion: the model didn’t seem to extract any more meaningful information from patient data than clinicians already could. “We suspect that some of the health data the Epic Sepsis Model relies on may have inadvertently encoded clinicians’ existing suspicions about patients having sepsis,” she explains.

In another AI study focused on cardiac arrhythmia, Wiens discovered that the AI system was using the presence of pacemakers as a key indicator for diagnosing arrhythmia. The AI was originally meant to predict potential arrhythmia based on cardiac pathology images, but instead, it was merely identifying pacemakers, a marker that appears only after arrhythmia has already been diagnosed. This effectively rendered the model useless as a predictive tool. Although these incidents represent just minor setbacks in Professor Wiens’ research journey, they’ve led her to maintain a measured skepticism about AI’s reliability in biomedical applications. 

Limitations of existing data

A study published in IEEE Intelligent Systems provides a concise overview of the principles of artificial intelligence: AI systems learn patterns from training data to make decisions, rather than following explicit programming. This data-driven approach means their behavior can be less predictable than that of traditional computer systems. Given the inherently cautious nature of the biomedical field, such unanticipated behavior is clearly unacceptable, as it could have immeasurable consequences for the health and lives of countless individuals. At the same time, the root cause of this unanticipated behavior—data unreliability—is not an issue that can be resolved in the short term. As highlighted in a study shared on LinkedIn: “The data used by artificial intelligence systems often reflects the biases and stereotypes present in society. This can result in AI algorithms perpetuating or even amplifying these biases, leading to unfair and discriminatory outcomes.” In the biomedical domain, one of the most prominent biases is gender bias in data representation. While human data analysts can sometimes consciously account for and mitigate these biases, AI systems struggle to do so. This inherent bias in the data, combined with the lack of transparency in AI’s decision-making processes, often leads experts in the biomedical field to approach AI-derived results with skepticism.
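The mechanism by which under-representation becomes bias can be shown with a synthetic toy example. The numbers below are invented, not drawn from any real dataset: a diagnostic threshold "learned" from a pool dominated by one group ends up tuned to that group's baseline, and systematically misclassifies the under-represented group.

```python
import statistics

# Synthetic healthy biomarker readings; group B is only 10% of the pool
# and has a higher healthy baseline than group A.
group_a_healthy = [1.0, 1.1, 0.9, 1.05, 0.95] * 9   # 45 samples: 90% of pool
group_b_healthy = [1.6, 1.9, 1.5, 1.85, 1.8]        # 5 samples: only 10%
ill = [2.4, 2.5, 2.6, 2.3, 2.45]

# "Training": place the decision threshold midway between the pooled
# healthy mean and the ill mean. The pooled mean is dragged toward group A.
pooled_healthy_mean = statistics.mean(group_a_healthy + group_b_healthy)
threshold = (pooled_healthy_mean + statistics.mean(ill)) / 2

def false_positive_rate(healthy_readings):
    """Fraction of healthy readings the threshold wrongly flags as ill."""
    return sum(x > threshold for x in healthy_readings) / len(healthy_readings)

fpr_a = false_positive_rate(group_a_healthy)  # the majority group is fine
fpr_b = false_positive_rate(group_b_healthy)  # the minority group is not
```

Overall accuracy looks excellent here because group A dominates the evaluation, which is exactly why per-group metrics, rather than aggregate ones, are needed to surface this kind of failure.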

Unintentional violation of privacy

In the biomedical field, privacy is a paramount concern for AI implementation. AI algorithms can re-identify supposedly anonymized data by cross-referencing it with other datasets, potentially exposing research participants’ identities. Hospitals and research institutions must also contend with cybersecurity risks, as their valuable medical datasets become attractive targets for attacks. At the University of Michigan Medical Center, this has led to the development of a De-identified Research Data Warehouse, in which data is de-identified and date-shifted before storage to protect patient privacy and confidentiality.
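The date-shifting idea can be sketched in a few lines. The mechanics below are assumptions for illustration, not Michigan Medicine's actual pipeline: every date in a patient's record is moved by the same patient-specific offset, so real calendar dates are hidden while the intervals between events, which most analyses depend on, are preserved.

```python
import datetime
import hashlib

def shift_days(patient_id: str, secret: str = "site-secret", max_days: int = 365) -> int:
    """Derive a stable per-patient offset in [-max_days, max_days] from a keyed hash."""
    digest = hashlib.sha256((secret + patient_id).encode()).hexdigest()
    return int(digest, 16) % (2 * max_days + 1) - max_days

def date_shift(patient_id: str, dates):
    """Shift every date in a patient's record by that patient's fixed offset."""
    delta = datetime.timedelta(days=shift_days(patient_id))
    return [d + delta for d in dates]

admit = datetime.date(2023, 3, 1)
discharge = datetime.date(2023, 3, 8)
shifted = date_shift("patient-42", [admit, discharge])
# The 7-day length of stay survives the shift; the real calendar dates need not.
```

Because the offset is derived from a keyed hash rather than stored in a lookup table, the same patient always shifts by the same amount, and an attacker without the secret cannot reverse the shift from the released dates alone.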

Due to these concerns, the University of Michigan Medical School maintains particularly stringent regulations regarding medical data access. According to Yiwen Yang, a biostatistics student, “Access to data collected by the University of Michigan Hospital requires the use of designated computers and specific primary and secondary Medical School accounts. While this significantly reduces the risk of data breaches, the complex application process for secondary accounts means many researchers end up working with outdated public datasets instead.” These public datasets often suffer from quality issues and contain numerous anomalies. The time required to clean and prepare such data frequently offsets the efficiency gains that AI technologies could potentially offer, which creates a paradoxical situation.

Consequently, traditional biomedical researchers and practitioners find themselves in a delicate position. While they acknowledge AI’s potential to significantly streamline their work by reducing the burden of routine data processing and even catalyzing new biomedical discoveries, they must also consider its potentially devastating consequences. This has led many to adopt a cautious approach, often limiting AI research to older datasets rather than current information to prevent latent privacy violations.

Future outlook

In the end, due to the unique nature of biomedicine, the integration of AI in this field is likely to remain contentious. Although this cautious approach might further widen the research progress gap between biomedical sciences and other disciplines, when dealing with something as precious and irreplaceable as human life, there’s no such thing as being too careful.

Looking to the future, as explainable AI models improve and privacy-preserving technologies like federated learning become more robust, the balance between caution and innovation might shift. Collaborative frameworks between AI researchers, ethicists, and biomedical professionals will likely play a crucial role in shaping responsible AI applications. One promising development is the emergence of “model cards”: detailed documentation that makes AI systems more transparent by outlining their testing conditions, performance metrics, and intended uses in medical settings.
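The privacy-preserving idea behind federated learning can be illustrated with a minimal federated-averaging sketch, using invented numbers and a deliberately simple "model" (a mean). Each hospital shares only a local summary and its sample count, never raw patient records, and the server's weighted combination still matches what pooling all the data would give.

```python
def local_update(records):
    """Each site computes a summary (here, just a mean) from its own data."""
    return sum(records) / len(records), len(records)

def federated_average(site_updates):
    """The server combines site summaries, weighted by each site's sample count."""
    total = sum(n for _, n in site_updates)
    return sum(mean * n for mean, n in site_updates) / total

# Three hypothetical hospitals; in a real deployment the raw lists stay local.
hospital_data = (
    [2.0, 2.2, 1.8],        # hospital 1
    [2.5, 2.7],             # hospital 2
    [1.9, 2.1, 2.0, 2.0],   # hospital 3
)
site_updates = [local_update(data) for data in hospital_data]
global_estimate = federated_average(site_updates)
```

Real federated learning exchanges model weight updates over many rounds rather than a single mean, but the governing principle is the same: statistics travel, data does not.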

 

Photo by Google DeepMind on Unsplash