hallucination mitigation
11 articles · 15 co-occurring · 0 contradictions · 5 briefs
"Small gaps in context can lead to drastically different outcomes — errors, contradictions or hallucinations" — Article directly connects context completeness to hallucination prevention, establishing missing context as a root cause of hallucinated output.
"The LLM sees contradictory policies, gets confused, and makes up an answer. You added more documents. The response got worse. This isn't a prompt problem. It's a context problem." — Article identifies contradictory context, rather than prompt wording, as the trigger for fabricated answers.
"When AI queries structured databases (Knowledge Graphs), it hallucinates relationships that don't exist." — Article documents the specific hallucination failure mode (false relationships in structured data) even when the model is grounded in a knowledge graph.
"Three out of the six players mentioned here are no longer on the listed teams, but it's hard to get it to 'know' new things!" — Article provides concrete example of knowledge staleness: the LLM retains outdated player-team assignments and resists updating them.
"it often says the work is done but when you ask it to check again, you find that some parts are missing" — Opus 4.5 exhibits false completion claims, a specific failure mode where the model believes work is finished while parts remain incomplete.
[INFERRED] "tool returns zero search results: agent hallucinates" — When tools fail to return results (zero search results), agents compensate by generating plausible but unverified information rather than reporting the empty result.
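The zero-results failure mode above suggests an obvious guard: surface the empty result explicitly instead of letting the model improvise. A minimal sketch, assuming a hypothetical `search` tool and illustrative message strings (none of these names come from the cited articles):

```python
def search(query: str) -> list[str]:
    # Stand-in retrieval tool; returns no hits for unknown queries.
    return []

def answer_with_tool(query: str) -> str:
    results = search(query)
    if not results:
        # Report the failure explicitly so the agent cannot "compensate"
        # with a plausible-sounding fabrication.
        return f"NO_RESULTS: nothing found for {query!r}; do not guess."
    return "\n".join(results)
```

The point of the design is that an empty tool response becomes a first-class signal in the agent's context, not an absence the model silently papers over.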
"systems where accuracy and factuality are paramount" — Article identifies accuracy and factuality as key benefits of context engineering, directly supporting the concept of controlling and reducing unreliable model output.
Context7 MCP server specifically addresses 'inaccuracies in AI-generated code' through context provision
The Context Poisoning failure mode (a hallucination enters memory and compounds across later turns) is a root cause that context engineering addresses.
Validation loops and feedback mechanisms are context engineering solutions: each agent's output is grounded in factual context before being passed to the next agent.
[INFERRED] "apologizes for citing sources incorrectly, even when everything was fine" — Article describes a failure mode where overzealous hallucination-prevention produces false apologies for correct citations, an error mode of its own.