AI has become an integral part of the life sciences landscape, contributing to advancements in diagnosis, treatment, and overall healthcare delivery. But many worry that without proper government oversight, the use of AI and machine learning (ML) algorithms could also cause negative and unintended consequences, including critical errors and bias amplification.
That’s why many international regulatory bodies have recently focused on AI development in several industries, including life sciences.
What Are the Benefits of AI in Life Sciences?
AI and big data analytics have already become an important part of the life sciences. AI is used in software for medical devices, and large language models (LLMs) and generative AI can alleviate staff shortages and other pressures hospitals face.
The World Health Organization says AI is useful in several life sciences-related applications, including:
- Responding to written patient queries
- Other clerical tasks such as summarizing patient visits
- Symptom/treatment investigation and diagnostics
- Powering lifelike simulations for healthcare staff training
- Analyzing data to discover new compounds and risks for drug discovery and development
The WHO says AI tools have the potential to “transform the health sector” by strengthening clinical trials, improving diagnostics and personalized healthcare, and supplementing healthcare providers’ knowledge and skills.
AI can also benefit organizations involved in pharmacovigilance, streamline pharmaceutical supply chains, improve patient satisfaction, and support advanced diagnostics and personalized medical interventions.
AI Regulations Affecting Life Sciences
But AI also carries significant risks, especially in life sciences and healthcare scenarios where patients’ lives are on the line. Errors, biases, and data breaches (as healthcare providers generate, receive, and store massive amounts of data) are just a few.
Regardless, regulators’ increased awareness of AI mirrors the already-intense interest in AI within the life sciences industry, with regulatory submissions involving AI increasing nearly 10x between 2020 and 2021.
Life sciences news outlet Bio Buzz says most of these submissions were Investigational New Drug (IND) applications.
This flurry of activity has led several regulatory bodies to enact new compliance frameworks and guidelines for the use of AI in the life sciences, aiming to ensure healthcare products are safe and effective.
Here’s a look at some of the most impactful AI regulations for life sciences from around the world:
United States
The U.S. Food and Drug Administration (FDA) recently published several documents on the use of AI in life sciences, including the Center for Drug Evaluation and Research’s Artificial Intelligence in Drug Manufacturing paper and a discussion paper exploring the use of AI and ML in the development of drugs and biological products.
The FDA has also reviewed and authorized a number of AI/ML-enabled medical devices in the U.S. (marketed via 510(k) clearance, De Novo authorization, or premarket approval). However, as of late 2023, the agency has not authorized any devices that use generative AI or are powered by LLMs.
FDA-authorized devices span the following practice areas:
- Radiology
- Cardiovascular
- Neurology
- Hematology
- Gastroenterology/urology
- Ophthalmic
- Ear, nose, and throat
Interest in AI regulations for life sciences goes beyond borders, however. Another recent FDA document, produced in tandem with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency, outlines Good Machine Learning Practice (GMLP) guiding principles for medical device development.
European Union
The E.U. recently passed legislation to regulate AI (the AI Act), which came into force this year. The act will affect life sciences companies in several ways, but the key takeaway is that most AI-enabled medical devices are now classified as “high-risk” AI systems in the E.U.
- Class IIa and above medical devices and in vitro diagnostic (IVD) tests using AI: These will be treated as high-risk AI systems, subject to tighter risk management obligations, testing, data governance rules, and documentation requirements.
- Digital companion diagnostics (CDx): These products will also be classified as high-risk AI systems and subject to the same requirements.
- Clinical trials: Clinical software that relies on AI to perform tasks such as optimizing molecular screening or predicting drug efficacy, and that is classified as a medical device or IVD, is also considered a high-risk AI system.
- Non-device AI systems: AI systems used elsewhere in the medical product lifecycle are not automatically categorized as high-risk, although the act says each system should be evaluated on a case-by-case basis.
