“Despite their impressive capabilities, generative AI models often produce content that is incorrect, misleading, or not directly based on their training data, a phenomenon sometimes referred to by experts as ‘hallucinations’ or fabrications.” – Laflamme & Bruneault, 2025, p. 496
AI hallucination occurs when an artificial intelligence system, such as ChatGPT, produces information that is factually incorrect, fabricated, or not supported by any real source, even though it sounds confident and plausible.
For example, the AI might invent a quotation, cite an academic article that does not exist, or give an answer that merely sounds right. Hallucinations happen because AI generates responses from statistical language patterns, not from genuine understanding or verified data.
Unlike a search engine such as Google, generative AI tools do not, by default, search the internet in real time or consult live sources to find answers. They generate content from patterns learned from past training data.
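To see what pattern-based generation looks like in practice, here is a minimal sketch. It assumes the Hugging Face `transformers` package and the small public `gpt2` model, neither of which is mentioned above; the point is simply that the model continues the prompt from learned patterns, entirely offline, without retrieving or checking any facts.

```python
# Minimal sketch: a language model continues a prompt from statistical
# patterns learned during training. It never consults the internet or
# verifies facts while generating. Assumes `transformers` and `gpt2`.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "According to a 2023 study published in the Journal of"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The continuation reads fluently, but any journal name, author, or
# finding it produces is a pattern-based guess, not a retrieved fact.
print(result[0]["generated_text"])
```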
In academic work, using hallucinated content can lead to misinformation or violations of academic integrity, including plagiarism.
Don’t forget to double-check what ChatGPT gives you against reliable sources. It can sound right while still including made-up details, which is exactly the kind of hallucination described above.
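One practical way to act on this advice is to confirm that any citation an AI tool gives you actually exists. The sketch below is only an illustration: it assumes the Python `requests` package and Crossref's public REST API at https://api.crossref.org, and the helper name `doi_exists` is made up for this example. For sources without a DOI, search the title in your library catalogue or Google Scholar instead.

```python
# Minimal sketch: check whether a DOI cited by an AI tool is registered
# in Crossref. A 404 response means Crossref has no record of the DOI,
# a strong sign the citation may be fabricated. Assumes `requests`.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered in Crossref."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

# Replace the placeholder with the DOI you want to verify before citing it.
print(doi_exists("10.1000/example-doi"))
```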