
AI regulation in healthcare: will legislation impact innovation?

The EU AI Act regulates the use of artificial intelligence for the first time. Its impact will be felt far beyond EU member states, setting a precedent for AI regulation around the world. Here we explore how the new legislation could impact innovation in the already heavily regulated healthcare industry.  


Back in 2021, the European Commission proposed the first regulatory framework specific to AI in the EU. It proposed that AI systems should be governed according to how much risk they pose to users. Now, after EU countries voted unanimously to approve the Act, it’s expected to take full effect in 2026.

Undoubtedly, the passing of the AI Act is a huge milestone in AI regulation. But, if your health organisation is based outside Europe, you might be wondering to what extent the EU legislation will impact you. After all, AI regulation is still in its infancy and looks very different from one geography to the next. 

The UK government, for example, has said it will not yet legislate the use of AI systems. Regulators in Hong Kong and Singapore are also holding off on introducing new laws, preferring to overlay existing ones with guidance like the Model AI Governance Framework for Healthcare.

Despite these differences, one thing is clear: the AI Act will set a precedent for AI regulation worldwide. All companies that sell products or services within the EU will need to comply, and the Act’s risk-based framework for AI governance will have implications and applications far beyond EU member states. The FDA, for example, is expected to adopt a similar risk-based approach.

The hope across all industries is that AI regulation will ensure greater clarity on the requirements needed to develop and use AI-enabled technologies safely. But within the healthcare sector, that hope is coupled with concerns.

Many organisations, particularly pharmaceutical and medical device companies, for whom AI capabilities form a large part of their USP, worry that new regulation may slow down or even hold back innovation. So are their concerns founded?  

A new framework for AI in healthcare

From automatic image analysis and chatbots to remote patient monitoring devices and patient-flow forecasting systems, it’s safe to say AI is already transforming healthcare.

As AI-enabled healthcare continues to advance, governments around the world are grappling with how to regulate a rapidly evolving environment. In the EU specifically, the hope is that AI regulation will ensure systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Most importantly, the EU’s focus is on protecting against unintended harmful consequences of AI-enabled products, such as discrimination or the perpetuation of inequities.

The biggest changes coming from these new regulatory frameworks will be felt outside highly regulated industries like health, where, until now, organisations have technically been ‘free’ to launch AI-enabled products without formally accounting for the consequences of bringing them to market.

But what do these new regulations mean for healthcare organisations in particular? Well, depending on the level of risk posed by a particular AI system, certain obligations will be placed on the associated provider.  

For example, if an AI system is used in a medical device that falls under the EU’s product safety legislation and the level of risk is therefore deemed ‘high’, the organisation responsible will need to fulfil multiple criteria, from ensuring the system’s accuracy, robustness, and cybersecurity, to confirming the presence of human oversight.  

If, on the other hand, the AI application is a fairly straightforward chatbot that does not give medical advice but, for example, makes appointments, it will most likely be deemed ‘limited risk’, and the associated healthcare provider will only be required to fulfil transparency obligations. 
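To make this tiering concrete, here’s a minimal Python sketch of how risk tiers might map to obligations under the act. The tier names and obligations paraphrase the descriptions above, but the data structures and the classify() helper are hypothetical simplifications – a real classification requires a legal assessment of the system and its intended use.

```python
# Illustrative sketch only: a simplified mapping of EU AI Act risk tiers
# to the kinds of obligations described above. The classify() helper and
# its inputs are hypothetical; this is not a legal assessment tool.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # e.g. AI in regulated medical devices
    LIMITED = "limited"            # e.g. appointment-booking chatbots
    MINIMAL = "minimal"            # everything else

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskTier.HIGH: [
        "accuracy, robustness, and cybersecurity measures",
        "risk management and human oversight",
        "technical documentation and registration",
    ],
    RiskTier.LIMITED: ["transparency obligations (disclose AI use to users)"],
    RiskTier.MINIMAL: ["no mandatory obligations"],
}

def classify(is_safety_component: bool, interacts_with_users: bool) -> RiskTier:
    """Hypothetical triage of a system into a risk tier."""
    if is_safety_component:   # e.g. AI inside a device covered by product safety law
        return RiskTier.HIGH
    if interacts_with_users:  # e.g. a scheduling chatbot giving no medical advice
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A diagnostic model embedded in a medical device lands in the high tier:
tier = classify(is_safety_component=True, interacts_with_users=False)
print(tier, OBLIGATIONS[tier])
```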

Figure: the EU AI Act’s risk-based approach

It’s worth noting that although these obligations will be ‘new’, insofar as they’ll need to be formally documented and registered, the EU AI Act is really just reiterating what’s already good practice with machine learning technologies.

For example, here at Zühlke, where we partner with medical device companies and other health tech providers, we ensure that safe, ethical, and sustainable AI practices are adopted as a standard. All the AI models we work with undergo thorough validation, plus careful risk analysis and mitigation as set out in our responsible AI framework.
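As an illustration only, here’s what a simple pre-release validation gate might look like in code. The metrics, thresholds, and report structure below are invented for the example – they are not taken from our responsible AI framework or from the act itself.

```python
# A minimal sketch of a pre-release validation gate for an AI model.
# All metric names and thresholds are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float          # performance on a held-out clinical test set
    robustness_drop: float   # accuracy loss under input perturbations
    bias_gap: float          # largest performance gap across patient subgroups

def passes_validation(report: ValidationReport) -> bool:
    """Release only if every check clears its (illustrative) threshold."""
    return (
        report.accuracy >= 0.95
        and report.robustness_drop <= 0.02
        and report.bias_gap <= 0.03
    )

report = ValidationReport(accuracy=0.96, robustness_drop=0.01, bias_gap=0.05)
print(passes_validation(report))  # False: the subgroup bias gap is too large
```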

Concerns over AI regulation in healthcare

The EU AI Act has had mixed reactions from healthcare bodies. Perhaps the most notable was the response from the Johner Institut, which warned that the potential restrictiveness of yet another healthcare regulation could constrain innovation and competitiveness among European manufacturers.

In a similar vein, MedTech Europe raised concerns over specific obligations such as the requirement for more human oversight. The trade association argued that excessive human interference could ‘negatively impact the benefit-risk ratio of medical devices, which in turn inhibits the uptake of innovative and potentially life-saving applications and limits learnings from them’.  

Clearly, the concern underlying these responses centres on the potential stifling of innovation – but is it warranted? First, it’s worth acknowledging that the EU AI Act has undergone multiple amendments since most healthcare bodies’ initial responses were published. Many of the inconsistencies and unclear requirements highlighted in the Johner Institut’s response, for example, have been resolved since last October – as the institute has since acknowledged.

What’s more, many authorities in the sector have shown a great degree of optimism about the new regulation. Despite their concerns, for example, MedTech Europe has also highlighted the potential of the act to give individuals ‘the confidence to embrace AI-based solutions, including AI-enabled digital health services and tools’. And similar sentiments have been shared by other key industry players (more on that later).

At Zühlke, we recognise and understand the concerns around the regulation. But we’re also convinced that the EU AI Act has the potential to be a facilitator of innovation. Why? Because it brings much-needed clarity on how to develop high-quality products, along with the transparency required for these tools to be adopted.

Here’s why the EU AI Act is not the innovation blocker that some people fear...  

Three reasons to be optimistic about AI regulation in healthcare

1. It’s an opportunity to build trust in AI in healthcare

It’s no secret that building and securing trust is a top challenge for healthcare organisations. This is even more true for AI technologies, especially against the backdrop of highly publicised incidents like the time IBM’s Watson reportedly prescribed a drug that could have killed a patient during a simulation.

This mistrust partly stems from a lack of understanding of ‘black box’ solutions, but also from the fact that, in many areas, there has been no quality control for AI technologies. Even within healthcare, AI technologies that weren’t covered by the Medical Device Regulation (MDR) or other regulations did not have to undergo any form of scrutiny.

But viral headlines and media-fuelled doomsday scenarios aside, the fact that there are still no laws or harmonised standards specifically regulating the use of machine learning technologies gives the public a pretty decent reason to question the trustworthiness of the technology.

Once the EU AI Act becomes law, however, healthcare organisations will have to ensure their products comply with its requirements. This paves the way for higher-quality AI solutions, which can help build trust in AI. That trust, in turn, can become a key enabler for AI-based health solutions that have a real impact on consumers and patients.

Beyond that, it might also restore trust in AI in healthcare among the key industry bodies – which brings us to the next point… 

2. It’s a potential gateway to investment

One of the core objectives of the EU AI Act is to facilitate investment in AI by giving providers and manufacturers greater legal certainty. While some have highlighted that the compliance costs incurred under the act might have a chilling effect on investment in AI in Europe, others argue this will only apply to high-risk applications. Even then, they argue, the impact on investment won’t be as significant as some have suggested.

Many also paint a positive picture of investment under the EU AI Act, including the world’s largest association of private capital providers, which openly welcomed the EU AI Act this year. 

3. It won’t require a complete rethinking of compliance processes

As highlighted in the Johner Institut’s response to the EU AI Act, many of the ‘new’ obligations actually overlap with pre-existing healthcare regulation, such as the MDR and IVDR, which already demand cybersecurity, risk management, post-market surveillance, and other requirements stated in the new act.

Because manufacturers won’t need to demonstrate compliance twice for requirements that overlap with the MDR and IVDR, the amount of work needed to comply with the act is minimised. Many organisations are therefore already well on their way to complying with the new regulation.

How to prepare your business for AI regulation in healthcare


Adopt a risk mindset

The easiest way to achieve compliance with the new regulation is to take a proactive approach to risk management and compliance. This starts with awareness of the EU AI Act and being mindful of which risk categories your own products and solutions fall under.

Most importantly, it means designing, developing, and testing all AI-enabled technologies according to the standards set out in the act, rather than attempting to fulfil compliance obligations retrospectively, once a product has already been developed.
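One way to put that mindset into practice is to treat compliance artefacts like any other build artefact and check for them automatically. The sketch below assumes hypothetical file paths and artefact names; it simply fails a build when a required compliance document is missing.

```python
# A minimal sketch of 'compliance by design': codify required compliance
# artefacts as an automated check that runs on every build, instead of
# auditing after the fact. All paths and artefact names are hypothetical.

from pathlib import Path

REQUIRED_ARTEFACTS = {
    "risk analysis": Path("docs/risk_analysis.md"),
    "human oversight plan": Path("docs/human_oversight.md"),
    "model validation report": Path("reports/validation.json"),
}

def compliance_gate() -> bool:
    """Return True only if every required compliance artefact exists."""
    missing = [name for name, path in REQUIRED_ARTEFACTS.items()
               if not path.exists()]
    for name in missing:
        print(f"missing compliance artefact: {name}")
    return not missing

if __name__ == "__main__":
    # Non-zero exit code fails the CI pipeline when artefacts are missing.
    raise SystemExit(0 if compliance_gate() else 1)
```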

Leverage the right expertise

Ideally, you’ll have an in-house resource or an external partner who can help you navigate the path to compliance. For example, you’ll need someone who can identify all the risks entailed by your AI system and who understands nuances like the difference between actual risk and the severity of potential harm. Once that’s done, they should be able to help you scope an initial use case that poses the lowest possible risk, giving you a more manageable starting point.

We recommend starting in-house or with a smaller solution, learning from it, and building up the knowledge you’ll need for more advanced solutions. Over time, you and/or your partner will be able to ensure that any product you develop is compliant and in line with the standard long before it launches. Plus, with the right systems in place, you’ll be able to automate many of the steps involved once your initial use case is complete, accelerating execution of other, more complex use cases.

AI regulation in healthcare is a golden opportunity

The EU AI Act presents an opportunity for healthcare organisations to win back public trust in AI technologies. It paves the way for products of the highest standard to monitor our health, diagnose disease, learn about the causes of disease, develop new therapies, triage patients, and positively influence lifestyles.  

If your business is ready to take this opportunity, we’d love to help you launch AI-enabled solutions that garner respect, confidence, and investment opportunities.   

Interested in learning more about how to navigate the EU AI Act? Want to pick the brains of our AI healthcare experts? Below you’ll find the contact details for our authors, where you can book a free consultation.


The EU AI Act at a glance

On March 13th, 2024, the European Parliament passed the Artificial Intelligence Act, with 523 votes in favour, 46 against and 49 abstentions. Here are the basics:

This is the first ever attempt to enact a horizontal regulation for AI.

It’ll affect all providers placing AI systems on the EU market, putting AI systems into service in the EU, or relying on outputs that’ll be used in the EU, regardless of whether those providers are established within or outside the EU themselves.

AI systems in consumer-facing products will be classified according to how much risk they pose to users.

AI systems presenting 'unacceptable' risks will be prohibited. ‘High-risk' AI systems will be authorised, but subject to a set of requirements and obligations to gain access to the EU market. AI systems presenting 'limited risk' will be subject to very light transparency obligations.

Organisations will have two years to ‘adjust’ to the new regulation, after which non-compliance will result in heavy fines and can lead to the AI system being recalled.
Contact person for Switzerland

Dr. Lisa Falco

Lead Data Consultant

Lisa Falco is passionate about the positive impact that AI and machine learning can bring to society. She has more than 15 years of industry experience in medical applications of data science and has helped bring several AI-driven MedTech products to market. Lisa has a PhD from EPFL, Switzerland, in Biomedical Image Analysis and an MSc in Engineering Physics from Chalmers, Sweden.
