AI Hallucinations

Here are some common types of hallucinations and how to prevent them.

LLM answers from the wrong document

Problem: The Language Model (LLM) provides an answer sourced from a document other than the one intended, even though both relevant documents were retrieved.

Cause: The LLM struggles to discern, based on the user's question or the content of the retrieved documents, which document contains the definitively "correct" answer. This often occurs when documents contain similar or overlapping information.

Potential solutions:

  • Document Improvement: Enhance the target document's clarity and specificity, making it easier for the LLM to identify it as the most relevant source for particular queries (see the first sketch after this list). This could involve:

    • Adding unique identifiers or key phrases.

    • Structuring information more clearly.

    • Ensuring the document directly answers common questions it's intended for.

  • Persona Refinement: Modify the LLM's persona or instructions to guide its behavior when document relevance is ambiguous (see the second sketch after this list). For example, instruct the LLM to:

    • "If multiple documents contain similar information, always present information from all relevant documents, noting their distinct sources."

    • "If uncertain about the primary source, state the source of the information provided."
