Hallucination (artificial intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by an AI that contains false or misleading information presented as fact.

For example, a hallucinating chatbot might, when asked to generate a financial report for a company, falsely state that the company's revenue was $13.6 billion (or some other figure apparently "plucked from thin air"). Such phenomena are termed "hallucinations", in loose analogy with the phenomenon of hallucination in human psychology. One key difference, however, is that human hallucination usually involves false percepts, whereas an AI hallucination involves unjustified responses or beliefs. Some researchers believe the term "AI hallucination" unreasonably anthropomorphizes computers.

AI hallucination gained prominence during the AI boom, alongside the rollout of widely used chatbots based on large language models (LLMs), such as ChatGPT. Users complained that such chatbots often seemed to pointlessly embed plausible-sounding random falsehoods within their generated content. By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with some estimating that chatbots hallucinate as much as 27% of the time and one study finding factual errors in 46% of generated responses.