We know how to regulate new drugs and medical devices, but we’re about to let health care AI run amok

There’s a lot of buzz surrounding artificial intelligence and its potential to transform industries, and health care ranks high on that list. If applied appropriately, artificial intelligence could significantly improve patient outcomes by enabling earlier detection and diagnosis of cancer, accelerating the discovery of more effective targeted therapies, predicting disease progression, and shaping personalized treatment plans.

Beyond this exciting potential lies an inconvenient fact: the data used to train medical AI models reflects the inherent biases and inequities that have long plagued the U.S. health system, and it often lacks key information about underrepresented communities. If left unchecked, these biases will widen inequities and cost lives based on socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation.

People will die

To build artificial intelligence models, data scientists use algorithms to discover, or learn, associations with predictive power from large data sets. In large language models (LLMs), a form of generative artificial intelligence, deep learning techniques analyze and learn patterns in the input text data, whether that information is true, false, or simply inaccurate. This data, however imperfect, enables the model to formulate coherent and relevant responses to a wide variety of queries.

In health care, disparities in whether and how patients receive treatment are embedded in the data used to train artificial intelligence tools. When applied across a large and diverse population, this means the medical needs of specific groups of people, such as people of color, members of underrepresented communities, people with disabilities, or those with a particular type of health plan coverage, may be overlooked, ignored, or misdiagnosed. If left unchecked, people will die needlessly, and we may not even be aware that underlying misinformation or errors exist.

Artificial intelligence systems do not operate in isolation. Let’s take a real-world example: if machine-learning software is trained on massive amounts of data shaped by deep-rooted systemic biases, in which white patients receive different care than patients of color, that inequity is passed to the artificial intelligence algorithm and amplified as the model learns and iterates. Research conducted four years before our current AI renaissance already showed how dire the consequences can be for those who are underserved. A landmark 2019 study in Science examined an artificial intelligence-based prediction algorithm used in hospitals serving more than 100 million patients and found that Black patients had to be much sicker than white patients to be candidates for the same level of care.

In this case, the underlying data used to train the AI model were flawed, and so was the algorithm, which used health care spending as a proxy for health care need. The algorithm reflected a historical disparity in which Black patients have had less access to care than white patients with the same level of need, resulting in fewer commercial insurance claims and lower health care spending. By using historical health care costs as an indicator of health, the AI model incorrectly concluded that Black patients were healthier than equally sick white patients, which cut by more than half the number of Black patients identified as needing additional care. When the algorithm was revised, the proportion of Black patients determined to require additional care based on medical need rose from 18% to 47%.

Another algorithm, designed to estimate how many hours of in-home assistance should be provided to severely disabled state residents, was found to have biases that produced errors about recipients’ medical needs. As a result, the algorithm directed cuts to much-needed services, causing significant disruptions in care for many patients and, in some cases, hospitalization.

The consequences of a flawed algorithm can be fatal. A recent study focused on an artificial intelligence-based tool meant to facilitate early detection of sepsis, a condition that kills approximately 270,000 people each year. The tool, deployed in more than 170 hospitals and health systems, failed to predict sepsis in 67% of patients who developed it, and it generated false sepsis alerts for thousands of others. The researchers found that the detection failures stemmed from the tool being used in a new region with a different patient demographic than the one it was trained on. The conclusion: artificial intelligence tools perform differently across geographies and demographics because patients vary in their lifestyles, disease incidence, and access to diagnosis and treatment.

Of particular concern is the potential for AI-powered chatbots to rely on LLMs built on data that has not been vetted for accuracy, which can lead to misinformation, incorrect patient advice, and harmful medical outcomes.

We need to step up

Before artificial intelligence can transform health care, the medical community needs to step up its efforts, insist on human oversight at every stage of development, and apply ethical standards to deployment.

Developing artificial intelligence for medicine requires a comprehensive, multidimensional approach. This is not a task for data scientists alone. It requires the deep involvement of a range of professionals with diverse backgrounds and perspectives, including data scientists, technologists, hospital administrators, physicians, and other medical practitioners, all of whom understand the dangers of poorly managed AI and can provide the oversight necessary to ensure that artificial intelligence becomes a positive, transformative tool for health care.

Just as drug trials require FDA oversight, with guiding principles and publicly shared data and evidence, artificial intelligence in health care requires independent audits, evaluations, and reviews before it is applied in clinical settings. The FDA has a process for regulating medical devices, but it lacks dedicated funding and a clear path for regulating new AI-based tools. That leaves AI developers to build bias-reducing processes on their own, if they recognize the need to do so at all. Private industry, data scientists, and the medical community must build diversity into the teams that develop and deploy artificial intelligence. AI can and should be developed and applied in medicine because its potential is enormous, but we all need to acknowledge the complexity of medicine, especially the biases ingrained in the data used for training, and design models that account for them at every step of the process.

As a doctor, one of the first tenets I learned in medical school was the Hippocratic Oath: “First, do no harm.” Now, as a health care executive and innovator, my goal is to go beyond that. Building the infrastructure for artificial intelligence to function properly in health care will take us a giant step forward on the path to transforming health care for the benefit of all.

Chevon Rariy, MD, is chief health officer and senior vice president of digital health at Oncology Care Partners, an innovative value-based oncology care network, as well as an oncology-focused investor and practicing endocrinologist. She is the co-founder of Equity in STEMM, Innovation and AI, which works with academia, industry, and policymakers to reduce barriers to health care and advance STEMM (science, technology, engineering, mathematics, and medicine) in underrepresented communities. Dr. Rariy serves on various nonprofit and private boards at the intersection of digital health, technology, and equity, and is a 2023 JOURNEY Fellow.


The opinions expressed in Fortune commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
