Mental health researchers relying on ChatGPT to speed up their work should take note of an unsettling finding from Australia: the AI chatbot gets citations wrong, or invents them outright, more than half the time.
When scientists at Deakin University tasked GPT-4o with writing six literature reviews on mental health topics, they discovered that 19.9% of the 176 citations the AI generated were completely fabricated. Among the 141 real citations, 45.4% contained errors such as wrong publication dates, incorrect page numbers, or invalid digital object identifiers.
