The free version of ChatGPT may provide inaccurate or incomplete answers — or no answer at all — to drug-related questions, potentially putting patients who use OpenAI’s viral chatbot at risk, a new study released Tuesday suggests.
Pharmacists at Long Island University who posed 39 questions to the free ChatGPT in May judged only 10 of the chatbot’s responses to be “satisfactory” based on criteria they defined. ChatGPT’s responses to the other 29 drug-related questions did not directly address the question asked, or were inaccurate, incomplete, or both, the study said.
According to lead author Sara Grossman, associate professor of pharmacy practice at LIU, the findings suggest that patients and health care professionals should be cautious about relying on ChatGPT for drug information and should verify any of the chatbot’s responses with trusted sources. For patients, that may be their doctor or a government-based drug information website such as the National Institutes of Health’s MedlinePlus, she said.
Grossman said the research received no funding.
ChatGPT was widely regarded as the fastest-growing consumer internet application of all time after its launch roughly a year ago, kicking off a breakout year for artificial intelligence. But along the way, the chatbot has also raised concerns about issues such as fraud, copyright infringement, discrimination and misinformation.
ChatGPT drew about 1.7 billion visits worldwide in October, according to one analysis. There is no data on how many users ask the chatbot medical questions.
Notably, the free version of ChatGPT is limited to data sets that extend only through September 2021 — meaning it may lack important information in the rapidly changing medical landscape. It is unclear how accurately paid versions of ChatGPT, which began using real-time internet browsing earlier this year, can now answer drug-related questions.
Grossman acknowledged that a paid version of ChatGPT might have produced better study results. But she said the research focused on the free version of the chatbot to replicate what more of the general population uses and can access.
She also noted that the study provided only “a snapshot” of the chatbot’s performance from earlier this year. It’s possible that the free version of ChatGPT has since improved and would produce better results if the researchers conducted a similar study now, she added.
The study used real questions asked at the Long Island University College of Pharmacy drug information service from January 2022 to April this year.
Pharmacists researched and answered 45 of those questions, and each answer was reviewed by a second investigator and used as the standard against which ChatGPT’s responses were judged. The researchers excluded six questions because there was no literature available to provide a data-driven answer.
ChatGPT did not directly answer 11 questions, according to the study. The chatbot also gave inaccurate responses to 10 questions and inaccurate or incomplete answers to another 12.
For each question, the researchers also asked ChatGPT to provide references for its response so that the information could be verified. However, the chatbot provided references in only eight responses, and each included sources that do not exist.
One question asked ChatGPT whether a drug interaction — when one drug interferes with the effect of another taken alongside it — exists between Pfizer’s Covid antiviral pill Paxlovid and the blood-pressure-lowering drug verapamil.
ChatGPT indicated that no interactions had been reported for this drug combination. In reality, the two drugs can lower blood pressure excessively when taken together.
“Without knowledge of this interaction, a patient may suffer an unwanted and preventable side effect,” Grossman said.
Grossman noted that U.S. regulators first authorized Paxlovid in December 2021. That’s a few months after the September 2021 data cutoff for the free version of ChatGPT, which means the chatbot has access to limited information about the drug.
Still, Grossman called that troubling: many Paxlovid users may not realize the chatbot’s data is out of date, leaving them vulnerable to receiving inaccurate information from ChatGPT.
Another question posed to ChatGPT asked how to convert doses between two different forms of the drug baclofen, which can treat muscle spasms. The first form was intrathecal, meaning the medication is injected directly into the spine; the second form was oral.
Grossman said her team found no established conversion between the two forms of the drug, and the ratio varied across the published cases they reviewed. It was “not a simple question,” she said.
But ChatGPT provided only a single conversion method in response — one not supported by evidence — along with an example of how to do the conversion. Grossman said the example had a serious flaw: ChatGPT incorrectly displayed the intrathecal dose in milligrams instead of micrograms.
Any healthcare professional following this example to determine the appropriate dose conversion “would end up with a dose that is 1,000 times lower than it should be,” Grossman said.
She added that patients who take much less of the drug than they need may experience withdrawal effects, which can include hallucinations and seizures.
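The factor-of-1,000 error Grossman describes comes directly from the gap between the two units: one milligram equals 1,000 micrograms. A minimal sketch of that arithmetic, using a hypothetical dose value chosen only for illustration (the study did not publish the specific numbers):

```python
# 1 milligram = 1,000 micrograms
UG_PER_MG = 1_000

# Hypothetical intrathecal dose, intended in micrograms (illustrative only)
intended_dose_ug = 200.0

# If the same numeral is taken to be micrograms when milligrams were meant
# (or vice versa), the administered amount is off by a factor of 1,000:
administered_ug = intended_dose_ug / UG_PER_MG  # 0.2 micrograms given

# The patient receives 1,000 times less drug than intended
underdose_factor = intended_dose_ug / administered_ug
```

Reading the wrong unit off a dose label shifts the result by exactly this factor, which is why a clinician following the flawed example would arrive at a dose 1,000 times too low.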