
Beyond ChatGPT: An Objective Look at Use Cases for Generative AI in Healthcare

Artificial intelligence (AI) is already playing a significant role in healthcare, benefiting all relevant stakeholders. Patients receive more personalised healthcare services. Healthcare providers can offer better care by using AI to improve diagnosis and treatment. Healthcare payers can enhance their business models by cutting costs and improving patient outcomes.


Generative AI is a powerful technology with the ability to create new data points derived from existing data. It has proven to be an invaluable tool for a variety of use cases. Hence the recent media frenzy around ChatGPT, a large language model (LLM) that uses generative AI for text and language purposes. There are also alternative models able to generate images, audio, and synthetic data, but our focus in this blog post will be on the opportunities created by large language models, and on the benefits of and challenges involved in implementing LLMs in healthcare.

For more information on large language models, take a look at our previous blog post “ChatGPT: generate revenue not just text!”

So, if you’re thinking about deploying generative AI in healthcare, which use cases offer the greatest potential? Building on the strength of generative AI, most use cases fall into one of the following categories:

  1. Generating text from brief notes
  2. Automated dialogue
  3. Extracting information from data
  4. Synthetic data generation
  5. Finding documents and information

1. Streamlining Administrative Tasks with Generative AI

One promising area of application for text generation lies in reducing the administrative burden on healthcare professionals. Generative AI can draft a full medical report from just a few brief notes from the doctor. It can also be used to extract medical conditions from medical reports to enable automated billing. In Switzerland, more than two thirds of primary care physicians consider time spent on administrative activities to be a major problem. Generative AI could significantly reduce this burden.
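As an illustration, here is a minimal sketch of how such a report-drafting prompt might be assembled before being sent to an LLM. The section names and wording are illustrative assumptions, not a validated clinical template, and the resulting draft would always need review by the clinician.

```python
def report_prompt(notes, patient_context):
    """Build an instruction prompt asking an LLM to expand a clinician's
    brief notes into a structured draft report (illustrative only)."""
    bullet_list = "\n".join(f"- {note}" for note in notes)
    return (
        "You are assisting a physician with documentation.\n"
        f"Patient context: {patient_context}\n"
        "Expand the following brief consultation notes into a structured "
        "medical report with the sections: History, Findings, Assessment, "
        "Plan. Do not invent clinical facts not supported by the notes.\n\n"
        f"Notes:\n{bullet_list}"
    )

# Example usage with invented notes:
prompt = report_prompt(
    ["persistent dry cough, 3 weeks", "no fever", "non-smoker"],
    "54-year-old female",
)
```

The prompt string would then be passed to whichever model API is in use; constraining the model not to invent facts is one small mitigation against hallucinated content in the draft.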

2. Transforming Patient Interaction using Medical Chatbots

Medical chatbots are another area where generative AI could offer major benefits. Chatbots are already in widespread use, but recent advances are set to take their use to a whole new level. Because they are better able to understand questions and to generate tailored responses in a natural, engaging tone, chatbots will become much more acceptable as a first point of contact. They will also be able to collect and condense knowledge within a specific field, enabling more accurate responses. This will deliver benefits both to patients looking for information and advice, and to researchers. Generative AI in general, though not specifically ChatGPT, is also able to produce excellent translations, thereby enabling chatbots to interact with a much larger potential audience.

3. Leveraging Generative AI for Knowledge Management and Compound Identification

A further strength of LLMs is their ability to extract information from data and to search for documents. This gives rise to a further useful application – knowledge management within an organisation. Large healthcare organisations often struggle with document structure and with ensuring that knowledge gets shared across different departments. Generative AI offers a great opportunity to extract and condense knowledge, making this knowledge easier for employees to understand, and saving time and effort for subject matter experts and other employees responsible for knowledge sharing. This can also contribute to ensuring that responses produced by different teams and departments are consistent and accurate.

In the healthcare sector, these capabilities could be leveraged to provide rapid access to patient information or relevant medical expertise. In emergency rooms, in the field, or on rescue missions, healthcare professionals are often confronted with situations in which there are many unknowns. ChatGPT can be a great tool for rapidly condensing the information needed to treat patients and presenting it to healthcare professionals.

The ability of LLMs to extract information can also be leveraged in identifying novel active substances as alternatives to existing drugs. By combining the GPT-4 model with a literature search and embeddings tool, a molecule search tool, a web search, a purchase check tool, and a chemical synthesis planner, researchers were able to successfully identify alternative, purchasable chemicals for a given drug.
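The tool-use pattern behind that experiment can be sketched as a simple loop: the model picks a tool and an argument at each step, the result is fed back, and the loop ends when the model produces a final answer. In this minimal sketch, the planner and all tool implementations are stubs standing in for the real LLM and the tools named above; the names and return values are invented for illustration.

```python
def literature_search(query):   # stub: a real tool would query a literature database
    return f"papers about {query}"

def molecule_search(query):     # stub: a real tool would search a molecule database
    return "candidate: compound X"

def purchase_check(compound):   # stub: a real tool would query suppliers
    return f"{compound} is purchasable"

TOOLS = {
    "literature_search": literature_search,
    "molecule_search": molecule_search,
    "purchase_check": purchase_check,
}

def fake_planner(history):
    """Stand-in for the LLM planner: returns (tool, argument) or a final
    answer. A real system would prompt the model with the history instead."""
    steps = [
        ("literature_search", "alternatives to drug Y"),
        ("molecule_search", "drug Y analogues"),
        ("purchase_check", "compound X"),
    ]
    if len(history) < len(steps):
        return steps[len(history)]
    return ("answer", "compound X is a purchasable alternative")

def run_agent():
    history = []
    while True:
        tool, arg = fake_planner(history)
        if tool == "answer":
            return arg
        # Execute the chosen tool and feed the observation back to the planner.
        history.append((tool, TOOLS[tool](arg)))
```

The design point is that the LLM never executes anything itself: it only chooses the next tool call, which keeps each tool auditable and replaceable.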

For the pharmaceutical industry, this gives rise to enormous opportunities. The potential negative consequences if it were used to identify alternatives to harmful or controlled substances, however, are equally enormous. In addition, results need to be carefully tested, as ChatGPT can often be overconfident and does not notify the user of potential errors or uncertainties. The potential boost to the efficiency of the drug discovery process is certainly tempting, and we expect to see significant progress in this area.

4. Using Generative AI to Create Synthetic Data

One of the biggest challenges in expanding the use of machine learning in healthcare is the scarcity of available data. Because healthcare data is sensitive, it is often siloed or otherwise inaccessible. Generative AI and LLMs are not going to do away with this issue completely, but, by using synthetic data, they may at least substantially mitigate it.

When researchers from the University of Texas investigated the ability of ChatGPT to extract structured information from unstructured healthcare texts, they found that “it is not adequate to apply ChatGPT alone to healthcare tasks since it was not specifically trained for this domain.” Rather than applying LLMs directly, they therefore used them in a zero-shot learning setup to generate large volumes of labelled synthetic data, which was then used to train a smaller local model. In their paper on the research, they report that “the proposed pipeline significantly enhances the performance of the local model compared to LLMs’ zero-shot performance.”
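The shape of that pipeline can be sketched roughly as follows. The synthetic examples and the word-counting "local model" below are toy stand-ins for the LLM-generated data and the actual model trained in the study; in practice the examples would come from prompting an LLM, and the local model would be a proper classifier.

```python
import collections

def synthetic_examples():
    """Stand-in for an LLM generating labelled synthetic clinical
    sentences; these four examples are invented for illustration."""
    return [
        ("patient reports severe chest pain", "symptom"),
        ("prescribed 20 mg atorvastatin daily", "medication"),
        ("complains of persistent headache", "symptom"),
        ("started metformin 500 mg twice daily", "medication"),
    ]

def train(examples):
    """Train a tiny bag-of-words scorer (the 'local model'):
    count how often each word appears under each label."""
    counts = collections.defaultdict(collections.Counter)
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
    return counts

def predict(model, text):
    """Score each label by summing its word counts over the input."""
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in model.items()
    }
    return max(scores, key=scores.get)

model = train(synthetic_examples())
```

The key idea is that only the (non-sensitive) synthetic data ever touches the LLM, while the small local model that runs on real patient data stays entirely in-house.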

According to Gartner, by 2030, synthetic data is expected to overshadow real data in AI models. This approach needs to be used with caution and responsibly, but synthetic data could play a significant role in advancing the use of AI in the healthcare sector.

5. Answering Medical Questions Using Generative AI

Researchers have already developed LLMs specifically designed to answer medical questions. One example is Med-PaLM from Google Research and DeepMind, which answers medical exam questions at a level comparable to that of clinicians. It is designed to provide safe, helpful answers to questions from healthcare professionals and patients. Other healthcare-specific language models include Microsoft's BioGPT, which is being used to aid diagnosis in clinical trials.

Right now, generative AI should be viewed as a tool for making healthcare professionals’ lives easier, and not as a replacement for them. In the long run, however, it is likely to be able to perform more and more tasks independently, and eventually we may arrive at a point where it is the doctor who is assisting generative AI in making critical decisions rather than vice versa.

Challenges in Realising the Full Potential of Generative AI

The potential of generative AI in healthcare is immense, but implementing healthcare use cases is not without its challenges. One of the biggest obstacles is data security: in deploying generative AI, careful consideration needs to be given to regulations on patient privacy and patient rights. In addition, to minimise the potential for incorrect or harmful content, it is essential that solutions are designed with extra filters.
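One simple form such a filter might take is a rule-based check applied to the model's output before it reaches the user. This is only a sketch: the patterns below are illustrative assumptions, not a real clinical safety list, and a production system would more likely use a dedicated moderation model or service alongside rules like these.

```python
import re

# Illustrative block-list of risky output patterns (invented examples).
BLOCKED_PATTERNS = [
    r"\bdiagnos(e|is)\b.*\bcertain\b",  # overconfident diagnostic claims
    r"\bstop taking\b",                 # unsupervised medication advice
]

def passes_filter(text):
    """Return True if no blocked pattern matches the model's output."""
    lowered = text.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def safe_output(model_reply):
    """Gate the model's reply: withhold it if the filter flags it."""
    if passes_filter(model_reply):
        return model_reply
    return "This response was withheld; please consult a healthcare professional."
```

Placing the check between the model and the user means flagged content can be withheld, logged, and reviewed rather than shown.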

When using generative AI for your business, all of this needs to be taken into account, and there are ways of doing so. Firstly, when leveraging generative AI, you’re going to need to make use of pre-trained third-party models – training your own from scratch is simply too costly. OpenAI, for example, provides access to its models via an API. To adapt a model for your purposes, you can fine-tune it or use embeddings based on your own data. The embeddings approach restricts the model to answering questions based solely on the data you have provided, enabling you to exert some control over what the model outputs. It’s also possible to incorporate additional moderation by means of separate quality controls before output is displayed to the end user.
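The embeddings approach can be sketched as follows: embed your documents, embed the incoming question, retrieve the most similar document, and constrain the model's prompt to that context. The bag-of-words `embed` function here is a crude stand-in for a real embeddings API, and the documents are invented examples.

```python
import math
import re
from collections import Counter

STOPWORDS = {"a", "an", "the", "is", "are", "on", "and", "by", "when", "can", "do"}

def embed(text):
    """Stub embedding: a bag-of-words vector. A real deployment would
    call an embeddings API here instead of counting words."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented internal documents standing in for your own data.
DOCUMENTS = [
    "Our clinic offers vaccinations on weekdays between 9 and 12.",
    "Billing questions are handled by the administration office.",
    "Physiotherapy referrals require a note from your physician.",
]

def build_prompt(question):
    """Retrieve the most relevant document and constrain the model to it."""
    query = embed(question)
    best = max(DOCUMENTS, key=lambda doc: cosine(query, embed(doc)))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you do not know.\n"
        f"Context: {best}\nQuestion: {question}"
    )
```

The final string would be sent to the model; because the prompt instructs it to rely solely on the retrieved context, answers stay grounded in your own data rather than in whatever the model memorised during training.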

Using models as described above does not, however, get around the problem of sensitive data being uploaded to OpenAI or another commercial provider. New models are being released continuously, though, and some of them can be run locally. This overcomes the problem of data sharing and makes it easier to meet privacy requirements. You should, however, be aware that many of these models are licensed for research purposes only.

The use of generative AI in healthcare may, however, have larger ethical implications, and these need to be thoroughly examined. It’s difficult to anticipate all of the implications of this new technology. One possibility, for example, is that patients come to have greater trust in chatbots than in medical professionals, the consequences of which could be either positive or negative. The key point to note is that you should always carry out a thorough risk analysis for your use case beforehand. Depending on the risks and potential harms identified, it may be necessary to modify the use case or build other mitigations into the AI itself.

Use Generative AI Responsibly

At Zühlke, we advise healthcare providers to partner with companies which have already gained extensive experience of deploying AI in healthcare. These companies can be expected to have a profound understanding of regulatory requirements relating to the use of generative AI in healthcare and the ethical implications of doing so.

In conclusion, generative AI in healthcare offers significant opportunities, but it’s important that companies act responsibly in exploiting them. Balancing the risks and opportunities that come with using this technology is crucial. Zühlke can help healthcare providers navigate this complex ecosystem and develop innovative solutions that improve patient outcomes while respecting patient privacy and patient rights.

Contact person for Switzerland

Dr. Lisa Falco

Lead Data Consultant

Lisa Falco is passionate about the positive impact that AI and machine learning can bring to society. She has more than 15 years of industry experience working in medical applications of data science and has helped bring several AI-driven MedTech products to market. Lisa has a PhD from EPFL, Switzerland, in Biomedical Image Analysis and an MSc in Engineering Physics from Chalmers, Sweden.
