ChatGPT and medical communications – what are the implications?

Alexander Currie
10 Minute Read
Artificial intelligence (AI) chatbots powered by large language models (LLMs) are becoming increasingly accessible to mass audiences. One such chatbot is ChatGPT, which has been making headlines in recent weeks, particularly around the implications it will have for academia and the medical communications industry.
ChatGPT (GPT = Generative Pre-trained Transformer) was launched by OpenAI in November 2022. Since then, Microsoft has invested a reported $10 billion and plans to incorporate ChatGPT into its search engine Bing, beginning an AI arms race with Google, which is developing its own chatbot called Bard.
Once prompted, the chatbot generates sentences by predicting the most likely next token (a word or component of a word), one at a time, based on the patterns of language it learned during training; the current third-generation model, GPT-3, contains approximately 175 billion parameters. Responses are generated with incredible speed and, for the most part, accuracy, and this is expected to improve substantially with the release of GPT-4, anticipated sometime in 2023. With each update to this technology, it is going to become even harder to distinguish between text generated by the chatbot and text written by people.
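For the technically curious, the core idea of next-token prediction can be illustrated with a toy sketch in Python. This is a deliberate simplification, not the real GPT-3: the prompt, candidate tokens and probabilities below are invented purely for the example.

```python
import random

# Invented probabilities a model might assign to the next token after the
# prompt "The patient was treated with" (illustrative numbers only).
next_token_probs = {
    "antibiotics": 0.45,
    "surgery": 0.25,
    "placebo": 0.20,
    "enthusiasm": 0.10,
}

def sample_next_token(probs):
    """Pick the next token in proportion to its predicted probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The patient was treated with", sample_next_token(next_token_probs))
```

A real model repeats this step over a vocabulary of tens of thousands of tokens, appending each chosen token to the prompt, until the response is complete.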
In the scientific community, excitement and consternation have been increasing in equal measure around the implications of such a tool. The ability of the chatbot to generate presentable student essays and answer questions well enough to pass medical exams led schools in New York to announce that they were banning the use of ChatGPT on their computers and networks. However, there have also been calls to embrace it and teach students how to use the tool.
But what are the possible implications of such a tool for the future of academia and, more specifically, medical communications?
Nature reported that scientists have been using the tool as a research assistant, and it has even been named as an author on a number of preprint publications. Many are quick (and right) to point out that ChatGPT does not meet the standards to be a study author, given the complete lack of one fundamental criterion: it cannot be held accountable for content and scientific integrity.
At InkLab, we are always striving to use the latest technologies to advance medical education, so we decided to consider some of the pros and cons of the chatbot in and around medical communications.
One use of the chatbot that has gained a lot of interest in the medical communications community is its ability to generate accurate, almost perfect plain language summaries (PLS) of scientific articles. Many roles in the medical communications sector are dedicated entirely to writing PLS. To test the chatbot, we asked it to generate a PLS for a publication written by one of our medical writers, Alexander Currie, during his PhD. The response was both astonishingly and frighteningly spot on!
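For readers who want to experiment, here is a minimal sketch of how such a request could be scripted against OpenAI's API using its Python library (as available in early 2023). The model name, prompt wording and settings are illustrative assumptions rather than the exact setup used for our test, and any output would still need review by a qualified medical writer.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: your own OpenAI key

abstract = "..."  # paste the abstract of the publication to be summarised

# Ask the model for a plain language summary of the abstract.
response = openai.Completion.create(
    model="text-davinci-003",  # the GPT-3 model available at the time
    prompt=(
        "Write a plain language summary of the following scientific "
        "abstract for a non-specialist reader:\n\n" + abstract
    ),
    max_tokens=300,
    temperature=0.3,  # lower temperature keeps the wording conservative
)

print(response.choices[0].text.strip())
```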

The introduction of ChatGPT to the medical communications sphere could change the roles of many medical writers – i.e., will we become less focused on writing PLS ourselves and instead be trained to prompt the chatbot accurately and provide editorial oversight of the content it generates?
This will be an area to watch, specifically the cost and time benefits of utilising this tool for these types of medical writing. In fact, the chatbot itself highlights this as one of the advantages of using ChatGPT for medical communications: “Cost-effective: Using ChatGPT can reduce the need for additional staff, providing a cost-effective solution for medical organizations.”

One limitation of the tool we have discussed as a team is its inability to apply context. In medical communications, scientific findings must be explained in the correct context – for example, whether something is promotional versus non-promotional, or whether it applies to a specific disease or treatment. We carried out a context check by asking the chatbot a joke: “what do you get if you cross a football player with a payphone?” Its response was, “it is not possible to cross a football player with a payphone as they are both inanimate objects and cannot reproduce” (the answer to the joke is “a wide receiver”). Although this is a rudimentary example with an only partially accurate response (disclaimer: at InkLab we agree football players are neither inanimate nor sterile), it is clear the context was lost in this case.

Whilst we are not on a fault-finding mission, the lack of context has to be highlighted so that the necessary safeguards can be applied to any content generated for medical communication purposes. In fact, the chatbot is “self-aware” regarding this limitation: when asked “what are some of the disadvantages of using ChatGPT for medical communications,” one of its responses was “Limited Understanding of Context: Although ChatGPT has a large knowledge base, it can struggle to fully understand the context of a situation, leading to errors or misunderstandings.” This, however, may be something that improves with future updates.
One major criticism, which may limit its use in medical communications, is that it cannot (at the moment) accurately cite and reference the content it generates. Again, this may be something that arrives in future versions of ChatGPT. Competitor LLMs, such as Med-PaLM, being developed by Google, may also provide this feature. If it is introduced, more medical writers may find themselves copy-editing and fact-checking chatbot-generated medical content as a new normal. This is not meant to be hyperbolic, but when new technologies disrupt our normal ways of life (just as personal computers did in the 1980s and the modern internet in the 1990s), we must adapt.
The points presented in this blog are not exhaustive and merely cover some of the main discussion points in the medical communications sector around the use of chatbots. ChatGPT is currently being widely reported on in the media, but its ongoing development can also be followed on industry blogs such as The Medical Futurist.
The advantages of ChatGPT are clear. The chatbot’s ability to generate (for the most part) accurate content quickly gives it the potential to be used across vast areas of medical communications. We must proceed with caution, however, and apply appropriate scrutiny to content generated by ChatGPT. Additionally, necessary safeguards must be put in place, with mandatory requirements to state whenever chatbots have been utilised in the generation of medical content. Whether you are a proponent or opponent of such technologies, one thing is clear: ChatGPT technology (much like radio, television and the internet) is here to stay, and there is no doubt we all have to curiously but cautiously adapt and evolve together.