ICMRA: Permanent AI Working Group Is Needed
Artificial intelligence is evolving so rapidly that it is challenging current regulations on medicines and medical devices. This is evident from a global inventory by the ICMRA (International Coalition of Medicines Regulatory Authorities). In a report on the subject, this international consortium of regulators concludes that a permanent AI working group should be appointed to keep a sharp eye on the regulation of AI as it is developed and assessed.
On August 6, 2021, ICMRA released a report describing the results of two field studies, which served as stress tests for member agencies. The authors emphasize the need to bring in ethics experts, as the pharmaceutical industry is making greater use of AI. A regulatory framework for AI will also be needed, one that takes into account factors such as the validity and origin of data as well as the reliability and transparency of AI algorithms. For this, a permanent AI working group is needed.
Ad Hoc Working Group Says A Permanent AI Working Group Is Needed
For this purpose, an informal network for innovation was previously formed, consisting of international regulators who are members of ICMRA. It includes delegates from the EMA and the WHO, as well as members from Italy, Denmark, Canada, Ireland and Switzerland. The FDA also participates in the network, although with observer status.
According to a summary of the report, there are opportunities to apply AI at each stage of the drug life cycle: in target validation and biomarker identification, but also in the annotation and analysis of clinical data, for example in trials, pharmacovigilance and the optimization of clinical use.
Hypothetical Case Studies
The algorithms underlying AI often lack transparency, especially when machine learning techniques are used. Therein lies a significant challenge across the whole range of AI applications. Machine learning can, for example, create clinical and regulatory conflicts when the exact processes by which results are produced are no longer directly observable: a so-called "black box" situation.
The working group developed two imaginary situations:
1. An app designed to capture data related to the central nervous system;
2. The use of AI in monitoring drug use.
The ad hoc working group concludes that a permanent AI working group is needed.
Data-Capturing App Related To The Central Nervous System
This imaginary app could serve as an aid in selecting patients for participation in clinical trials, for example by recording and analyzing the baseline disease status of prospective participants. The app could also record:
- adherence to trial interventions;
- the response to therapies;
- endpoints identified as changes in disease status.
This field study highlights the importance of early advice from regulators during product development, given the considerable complexity and novelty of the research. The process will need to involve not only regulators in the medical device field but also academically trained experts, because in addition to ethical and legal considerations there is a scientific aspect to be weighed in regulatory decisions.
Black Box An Important Reason Why A Permanent AI Working Group Is Needed
Assessing whether such a complicated product meets the full set of requirements poses considerable challenges. Here the working group identifies the "black box" problem: the algorithms and the training and validation datasets used for AI development. Even if regulators are given access to them, full validation may not be possible. The report's authors write that more sophisticated approaches may be needed, such as examining the behavior of the machines themselves.
Apps need to be updated, and bridging or validation studies may then reveal a change in the benefit-risk profile of the product, which in turn may create the need for additional regulatory action. It is the developers' job to carry out this task. Ideally, developers have robust governance systems in place to oversee AI algorithms as they evolve through use.
App With AI Monitoring The Use Of Medicine
In the report, the study group states that AI systems seem in principle suitable for safety signal detection. Current signal detection tools still rely heavily on manual work; an app like the one in the second case study holds great promise for reducing that reliance. A permanent AI working group is needed to oversee the possibilities and limitations.
As AI starts playing a more prominent role here, the challenge lies in finding a balance between human supervision and AI. The benefits and risks of each therapy must be weighed continually. AI can scan very large data sets and aggregate disparate information, which is promising for discovering safety signals that current methods may miss, such as:
- drug-disease interactions;
- secondary malignancies;
- misuse of drugs;
- changes in patterns of drug use and side effects.
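As an illustration (not taken from the report), automated signal detection in pharmacovigilance is often based on disproportionality statistics such as the proportional reporting ratio (PRR), which flags drug-event pairs reported more often than expected. A minimal sketch with hypothetical report counts:

```python
# Minimal illustration of disproportionality-based signal detection
# using the proportional reporting ratio (PRR). All counts below are
# hypothetical examples, not data from the ICMRA report.

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table.

    a: reports of the drug with the event of interest
    b: reports of the drug with any other event
    c: reports of all other drugs with the event
    d: reports of all other drugs with any other event
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts from a spontaneous-reporting database
value = prr(a=20, b=180, c=50, d=4750)
print(round(value, 2))  # → 9.6; a PRR well above 1 suggests a potential signal
```

In practice such a ratio would be combined with statistical thresholds and, as the report stresses, human review, rather than acted on automatically.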
The MAH (Marketing Authorization Holder) should ensure that both AI experts and specialists in data quality and signal detection work in pharmacovigilance. A third party may also be involved in managing the AI component of the pharmacovigilance program; it will then have to assure the license holder and regulators that it will maintain and update the AI as necessary, and provide regulators with appropriate access to the AI tool.
The report outlines the current AI activities and future strategies of various regulators:
- Health Canada;
- Japan's Pharmaceutical and Medical Devices Agency (PMDA);
- the EU's European Medicines Agency;
- the relevant agencies of the European Commission;
- Swissmedic;
- Australia's Therapeutic Goods Administration (TGA).
They share a number of common points, including recognition of AI's potential to contribute to pharmacoeconomic developments and to ease the work of regulators. However, regulatory science must keep abreast of the rapid developments in AI, and a robust ethical framework is also needed.
Reference: Kari Oakes