You may already know that OpenAI's ChatGPT is a free chatbot that can answer questions using artificial intelligence, or AI. But did you know that ChatGPT can provide false information? According to OpenAI, "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." In technical terms, the AI is hallucinating.
How can this affect researchers? In addition to potentially providing false information, ChatGPT can produce legitimate-looking reference citations for materials that don't exist. In June 2023, attorney Steven Schwartz was sanctioned for submitting a legal brief citing multiple court decisions he had researched with ChatGPT. He asked the chatbot whether the cases were real and was reassured that they could be found in Westlaw and LexisNexis, but at least six were nonexistent "ghost" references hallucinated by the AI. Not realizing that ChatGPT could be wrong, Schwartz never looked the citations up in either database.
While AI can be a useful tool, it's important to be aware of its limitations. Duke University Libraries suggests that ChatGPT is good for generating ideas for related concepts, terms, and words about a particular topic and for suggesting library databases to search on a specific topic, but cautions against asking ChatGPT directly for sources or citations.
The LANL Research Library recommends that if you decide to use ChatGPT or other generative AI tools, you always double-check the citations and references they provide.
The Research Library also cautions that as more researchers use ChatGPT and other AI tools, citations in papers and presentations have a higher chance of being "ghost" or "hallucinated" references. Exercise caution and verify the accuracy of any citation before reusing it.
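As a concrete illustration of that advice, here is a minimal sketch of one way to check whether a DOI from an AI-generated reference actually resolves to a real record, using the public Crossref REST API (api.crossref.org). The helper name doi_exists and the DOI shown are placeholders for illustration, and a resolving DOI alone doesn't validate the rest of the citation, so compare the returned title and authors against the reference as well.

```python
# Minimal sketch: check whether a DOI resolves to a real Crossref record.
# Assumes network access; the DOI passed in the example is a placeholder.
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, else False."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
            # Print the registered title so you can compare it by hand
            # against the citation the AI produced.
            title = (record["message"].get("title") or ["(no title)"])[0]
            print(f"Found: {title}")
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Crossref has no such record: possibly a "ghost" citation.
            return False
        raise

# Example usage with a placeholder DOI copied from a reference list:
print(doi_exists("10.1000/example-doi"))
```

A check like this only catches fabricated or mistyped DOIs; references without DOIs (such as court cases) still need to be verified directly in the appropriate database, as the Schwartz case shows.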