Hallucinations
AI
Instances in which a Generative AI model produces output that is not grounded in its input or training data, i.e., it "makes things up". This is particularly common in text generation with large language models, where the model may produce plausible-sounding but factually incorrect or nonsensical information.