How to Reduce Artificial Intelligence Hallucinations

September 30, 2023
Not getting the desired response from an AI chatbot? You might be dealing with an artificial intelligence hallucination, a problem that occurs when the model produces confident-sounding but inaccurate, fabricated, or irrelevant results.
It’s caused by various factors, such as the quality of the data used to train the model, lack of context, or ambiguity in the prompt. Fortunately, there are techniques you can use to get more reliable results.
Prompt Techniques to Reduce AI Hallucinations
To reduce AI model hallucinations, there are some best practices to follow, such as:
1. Provide Clear and Specific Prompts
The first step to minimizing AI hallucinations is to create clear and highly specific instructions. Vague or ambiguous prompts can lead to unpredictable results because the model has to guess at the intent behind them. Instead, be explicit in your instructions.
Instead of asking, “Tell me about dogs,” you could ask, “Give me a detailed description of the physical characteristics and temperament of Golden Retrievers.” Refining your prompt until it becomes clear is a straightforward way to prevent AI hallucinations.
2. Use the “According to…” Technique
One of the challenges of using AI systems is that they may generate results that are substantially incorrect, distorted, or inconsistent with your opinions or values. This can happen because AI chatbots are trained on broad and diverse datasets that may contain errors, opinions, or contradictions.
To avoid this, you can use the “according to…” technique, attributing the output to a specific source or perspective. For example, you can ask the AI system to write a fact on a topic according to Wikipedia, Google Scholar, or a specific publicly accessible source.
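In code, this technique amounts to a small prompt template. The sketch below is illustrative (the helper name and example question are my own, not from any particular library):

```python
def attribute_prompt(question: str, source: str) -> str:
    # Prefix the question with a named source so the model anchors its
    # answer to that source instead of inventing facts.
    return f"According to {source}, {question}"

prompt = attribute_prompt(
    "what are the main stages of photosynthesis?",
    "Wikipedia",
)
```

The resulting string, "According to Wikipedia, what are the main stages of photosynthesis?", is what you would send to the chatbot in place of the bare question.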
3. Employ Constraints and Rules
Constraints and rules can help prevent the AI system from generating inappropriate, inconsistent, contradictory, or illogical results. They can also help shape and refine the output based on the desired outcome and purpose. Constraints and rules can be stated explicitly in the prompt or implied by the context of the task.
Suppose you want to use an AI tool to write a love poem. Instead of giving a generic prompt like “Write a poem about love,” you can provide a more constrained, rule-based prompt like “Write a sonnet about love with 14 lines and 10 syllables per line.”
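If you build prompts programmatically, constraints can be appended as an explicit rule list. This is a minimal sketch with illustrative names, not any library's API:

```python
def constrained_prompt(task: str, rules: list[str]) -> str:
    # Append an explicit, bulleted rule list so the model's output
    # is bounded by stated constraints rather than left open-ended.
    bullet_rules = "\n".join(f"- {rule}" for rule in rules)
    return f"{task}\n\nFollow these rules:\n{bullet_rules}"

prompt = constrained_prompt(
    "Write a sonnet about love.",
    ["Use exactly 14 lines.", "Use 10 syllables per line."],
)
```

Keeping the rules as a separate list also makes it easy to reuse the same constraints across many tasks.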
4. Break Down the Request into Multiple Steps
Sometimes, complex questions can lead to AI hallucinations because the model attempts to answer in a single step. To overcome this issue, break your questions into multiple steps.
For example, instead of asking, “What is the most effective treatment for diabetes?” you can ask, “What are common treatments for diabetes?” You can then follow up with, “Which of these treatments is considered the most effective according to medical studies?”
Multi-step prompts force the AI model to provide intermediate information before arriving at a final answer, which can lead to more accurate and precise responses.
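The multi-step idea can be sketched as a small loop that feeds each answer back in as context for the next question. The chatbot call is stubbed out here with a fake function, since the real call depends on whichever AI service you use:

```python
def ask_in_steps(ask, prompts):
    # Send prompts one at a time, giving the model its own earlier
    # answers as context, instead of one big compound question.
    history = []
    for prompt in prompts:
        context = "\n".join(answer for _, answer in history)
        full_prompt = f"{context}\n\n{prompt}".strip()
        answer = ask(full_prompt)
        history.append((full_prompt, answer))
    return history

# Stub standing in for a real chatbot call.
def fake_ask(prompt: str) -> str:
    return f"(answer to: {prompt.splitlines()[-1]})"

history = ask_in_steps(fake_ask, [
    "What are common treatments for diabetes?",
    "Which of these treatments is considered the most effective"
    " according to medical studies?",
])
```

Each entry in `history` pairs the prompt that was sent with the answer received, so the intermediate information is preserved alongside the final answer.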
5. Assign a Role to the AI
When you assign a specific role to the AI model in your prompt, you clarify its purpose and reduce the likelihood of hallucinations. For example, instead of saying, “Tell me about the history of quantum mechanics,” you can ask the AI: “Take on the role of a diligent researcher and provide a summary of key milestones in the history of quantum mechanics.”
This framing encourages the model to behave like a careful researcher rather than a speculative narrator.
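With chat-style AI services, role assignment is typically done through a system message that precedes the user's request. The message structure below follows the common chat format of role/content pairs; the helper name is my own:

```python
def role_messages(role: str, request: str) -> list[dict]:
    # A chat-style message list: the system message pins the model
    # to a role before it sees the user's request.
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": request},
    ]

messages = role_messages(
    "a diligent researcher",
    "Provide a summary of key milestones in the history of quantum mechanics.",
)
```

The same `messages` list can then be passed to whichever chat API you use.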
6. Add Contextual Information
Failing to provide contextual information when it is needed is a common pitfall when using ChatGPT or other AI models. Contextual information helps the model understand the background, scope, or purpose of the task and generate more relevant and coherent output. Contextual information can include keywords, tags, categories, examples, references, and sources.
For example, if you want to generate a product review for a pair of headphones, you can provide contextual information such as the product name, brand, features, price, rating, or customer feedback.
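Structured context like this can be folded into a prompt as labeled "key: value" lines. A minimal sketch (the product name and details below are hypothetical, used only to show the shape):

```python
def contextual_prompt(task: str, context: dict) -> str:
    # List each piece of background as "key: value" under the task
    # so the model has concrete details to draw on.
    details = "\n".join(f"{key}: {value}" for key, value in context.items())
    return f"{task}\n\nContext:\n{details}"

prompt = contextual_prompt(
    "Write a short product review for these headphones.",
    {
        "Product": "SoundPro 300",  # hypothetical product
        "Price": "$89",
        "Key features": "noise cancellation, 30-hour battery",
    },
)
```

Grouping the context under an explicit "Context:" label keeps the task itself easy for the model to pick out.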
It can be frustrating when you don’t get the response you expect from an AI model. However, by using these prompt techniques, you can reduce the likelihood of hallucinations and get better and more reliable responses.
Keep in mind that these techniques are not foolproof and may not work for every task or topic. You should always check and verify AI outputs before using them for any serious purpose.