
In the context of generative AI, the term hallucination refers to the phenomenon where a model produces inaccurate or entirely fabricated information while presenting it with high confidence. This typically occurs when a generative model, such as a language model, is asked to produce content about topics for which it has seen little or no training data. Hallucinations can be mitigated through fine-tuning or through techniques such as Retrieval Augmented Generation (RAG), which grounds the model's answer in documents retrieved at query time.
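To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-generate pattern. Everything in it is illustrative: the `DOCUMENTS` list, the word-overlap `retrieve` function (a stand-in for real embedding or vector-store search), and `build_prompt` are hypothetical names, and the final prompt would be sent to whatever model API is actually in use.

```python
from typing import List

# Toy in-memory "knowledge base"; in practice this would be a vector store.
DOCUMENTS = [
    "RAG grounds a language model's answers in retrieved documents.",
    "Fine-tuning adapts a pretrained model to a narrower domain.",
    "Hallucinations are confident but false statements from a generative model.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive word overlap with the query (stand-in for embedding search)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: List[str]) -> str:
    """Prepend retrieved context so the model answers from evidence rather than memory alone."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What is a hallucination in generative AI?"
    context = retrieve(question, DOCUMENTS)
    prompt = build_prompt(question, context)
    # In a real system this prompt would be passed to the generative model;
    # here we just print it to show the grounded input the model would receive.
    print(prompt)
```

The key point of the sketch is that the model is constrained to answer from retrieved evidence, which reduces the chance of it confidently inventing facts it was never trained on.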