March 14, 2024
One sector significantly impacted by the use and development of Artificial Intelligence is healthcare, and by extension, Clinical Research. The new Regulation is crucial for AI development, as it affects not only technological advancement but also our fundamental rights.
By Guiomar León Salvatierra, a lawyer specializing in digital environments and Artificial Intelligence, and an advisor at Sermes CRO
Rarely does a day go by without the media announcing some new revolution brought about by Artificial Intelligence. New developments, applications, and uses of AI systems emerge at speed, promising to radically change our daily lives.
What is undeniable is that the transformative power of AI at every level leads us to the conviction that we are facing a revolution destined to change how we work, live, and interact. The European Union, through the new Artificial Intelligence Regulation, therefore seeks to safeguard the fundamental rights of individuals, while firmly intending not to fall behind in technological, and therefore economic, development.
AI is not a recent creation; it dates back to the 1950s and has gone through periods of growth and decline over the years. In 2023, with the arrival of ChatGPT in everyday life as a consumer product accessible to everyone, AI experienced a global explosion.
In light of these circumstances, the European Union stepped up work on a regulation whose drafting had begun in 2021. On December 8, 2023, under the Spanish presidency of the Council of the European Union, the institutions – Parliament, Commission, and Council – reached a historic agreement on the new European AI Regulation, a global milestone given the regulatory reach Europe enjoys through what is known as the Brussels Effect. In principle, this effect could turn the text approved in Europe into the global standard in the field, although that remains to be seen.
The agreed text was endorsed on February 2 by COREPER, the Committee of Permanent Representatives of the Member States, so it will see the light of day relatively soon, becoming applicable, as a general rule, two years after its publication. The European Union has had to balance the protection of security, fundamental rights, and the values of the Union against the no less important interest of promoting investment and innovation within its borders, a matter of sheer survival in the face of great powers such as the United States and China, among others.
The risk of AI, according to the Regulation
The European Artificial Intelligence Regulation classifies AI systems by level of risk, imposing stricter requirements and rules the greater the risk a system's use poses to fundamental rights. AI systems of minimal or no risk carry no regulatory obligations, although they are encouraged to follow voluntary standards so as to be more trustworthy for citizens.
AI systems of limited risk will be subject to transparency obligations (so that users know they are interacting with an AI system); they should also ensure human oversight, technical robustness and security, respect for privacy and data governance, and respect for diversity, avoiding discrimination and unfair treatment. High-risk AI systems must comply with obligations across the design, production, and marketing phases alike, covering risk management, data governance, technical documentation, automatic record-keeping (logs), transparency, human oversight, cybersecurity, and conformity assessment.
Finally, AI systems of unacceptable risk, such as social scoring systems, emotion recognition in certain settings, and techniques that manipulate decision-making, among others, are prohibited by the Regulation. Certain areas also fall outside the scope of Union law: systems created for military or defense purposes, for research and innovation, or for non-professional use, as well as matters of national security, which remain the exclusive competence of the Member States.
Impact of AI on the healthcare sector
One sector significantly impacted by the use and development of AI is healthcare. Automated tasks, patient relations, far more precise diagnoses, disease prediction, information retrieval, and more are breaking into our daily lives. For the average citizen, the first reaction AI provokes is fear: of losing one's job, of no longer being able to trust what one sees or hears, even to the point of talk about the danger AI poses to humanity. A restrictive, protective regulation is therefore well received.
The business sector, however, fears that this regulation will unduly constrain technological development in Europe and, as a result, hamper our economy by causing it to miss the innovation train.
During the negotiation of the text and up to its endorsement by COREPER, countries such as France, Italy, and Germany were highly critical of its wording, which they saw as a brake on development in Europe; agreement was nevertheless reached, although much of the text remains to be developed.
Europeans trust that, much like in other realms such as privacy protection, Europe will serve as a bastion of our rights and freedoms, exporting this ethos globally. This trust stems from the realization that while our technological advancements hold great promise, safeguarding our fundamental rights is equally vital.
While the regulation typically allows a two-year period for implementation from its publication, the rapid pace of AI development may render it outdated by the time it takes effect.
AI’s Role in Accelerating Clinical Trials
An intriguing European endeavor dubbed “Accelerating Clinical Trials in the EU (ACT EU)” seeks to enhance and expedite clinical trials within the European Union. This initiative advocates for streamlining administrative and regulatory procedures, easing access to data and resources, and fostering collaboration among stakeholders.
These objectives seamlessly align with AI’s potential in optimizing processes, analyzing data, and personalizing treatments. AI can identify patterns, correlations, and trends in vast datasets generated during clinical trials, facilitating more efficient patient selection, study design, and data monitoring.
In essence, with proper precautions and regulation, AI stands poised to play a pivotal role in advancing ACT EU’s clinical research efforts.
What if the rest of the world forges ahead with more liberal AI development while Europe lags behind?
Europe boasts a rich legacy of individual rights and liberties, epitomized by the Hippocratic Oath and globally revered values. Yet, external perspectives often cast Europe as a pampered child needing to mature and address essential needs like defense, health, and progress, much like its counterparts do—sometimes at the expense of individual welfare for economic gains.
While a text mindful of individual rights may potentially stifle innovation, Europe remains steadfast in its identity and history. To avoid being relegated to irrelevance, it concurrently champions data sharing within secure frameworks, regulatory sandboxes to foster technological advancement, and other initiatives.
Companies specializing in Clinical Research and AI, such as Sermes CRO, must prioritize the development of innovative, ethically sound AI systems. Such systems should address industry challenges while ensuring a positive impact on individuals, reflecting our primary goal of caring for people.