
Avoiding Hallucinations Using Neurosymbolic AI

Written by David A. Stark | Oct 8, 2024 5:46:39 PM

Hallucinations, a Detective, Two Bakers, and Neurosymbolic AI

I hate to be the one to break it to you, but trusting the output of Generative AI and GPT-based systems without a system of checks, balances, and reasoning is foolhardy. Throwing $6.6 billion at any company that proposes an AGI future built primarily upon this capability is the type of hallucinatory narrative that only an LLM could generate and make sound truthful and sage. According to a recent Gartner report, ‘Over 50% of generative AI solution deployments in the enterprise will fail through 2026’. But why?

An LLM’s nature is to hallucinate

Large Language Models (LLMs), which are trained on massive corpora of text, only ever predict the next most likely word at any given moment of their processing, even if that word is not the accurate one. This type of AI model simply does not know right from wrong, or truth from fiction. It operates on a system of educated mathematical guesses, predicting what typically comes ‘next’ in a sentence using single words or groups of words (tokens).
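
To see why, consider a stripped-down sketch of how next-token prediction works. The vocabulary and scores below are invented for illustration; a real model scores tens of thousands of tokens with learned weights, but the principle is the same.

```python
import math

# Toy illustration: at each step an LLM scores every token in its
# vocabulary and emits the most probable one -- whether or not it is true.
vocab = ["Paris", "London", "Berlin", "pizza"]
logits = [4.2, 2.1, 1.9, 0.3]  # hypothetical model scores for the next token

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: pick the single most likely token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "Paris" -- likely is not the same thing as accurate
```

Nothing in this loop consults the truth; ‘Paris’ wins only because the model’s training data makes it statistically likely.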

Anthropomorphizing LLMs is a mistake, yet the term ‘hallucination’ is often used to describe the accuracy, or lack thereof, of an LLM’s output. An LLM is not a conscious being like you or me, and therefore it cannot hallucinate something out of thin air. Tech companies have made enormous investments in “guardrails” for LLMs, but the models will continue to hallucinate. It’s their nature.

Hire a Detective to Avoid Hallucinations

One approach to avoiding hallucinations is to have the LLM generate the meaning representation, but then have a logical interpreter evaluate it against databases for which there are, in fact, truthful answers. Think of the logical interpreter as an AI detective whose primary role is to observe a suspect and determine whether the suspect is telling the truth.

Detectives would fail if they simply took what a suspect told them at face value. Instead, detectives gather facts, use logic and deductive reasoning, and observe behaviors and mannerisms, combining all of these to determine the truth. And like a detective, the logical interpreter should also be able to evaluate statements of “theory of mind”, i.e., meaning representations of a person’s mental state, such as their beliefs, goals, and intentions.
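
To make the detective’s job concrete, here is a minimal sketch of that verification step, assuming the LLM has already produced a structured meaning representation (reduced here to a simple subject-predicate-object triple). The knowledge base, order IDs, and facts are all hypothetical; a production interpreter would evaluate far richer logical forms, including belief and goal statements.

```python
# Hypothetical knowledge base of verified enterprise facts,
# stored as subject-predicate-object triples.
KNOWLEDGE_BASE = {
    ("order-1138", "status", "shipped"),
    ("order-1138", "carrier", "FedEx"),
}

def verify(claim):
    """The 'detective': accept a claim only if the database supports it."""
    return claim in KNOWLEDGE_BASE

# Suppose the LLM proposed this meaning representation for its answer.
llm_claim = ("order-1138", "status", "delivered")

if verify(llm_claim):
    print("Grounded: safe to include in the response.")
else:
    print("Unsupported claim -- block it or regenerate the response.")
```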

Reasoning capability

The ability to reason is critical to planning actions in the world, where assessing the truth of the preconditions and effects that describe various actions is equally critical. This is why Openstream.ai has implemented a planner that can reason its way through multiple steps while remaining controllable by, and explainable to, an end user. Such a planner can be deployed to reason about what to say, and to infer why the user said what they did.
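
As a rough illustration of the idea (a toy in the style of classical planning, not Openstream’s actual planner), the sketch below checks an action’s preconditions against the current state before applying its effects; the action, state facts, and names are all invented.

```python
# Toy classical-planning step: an action applies only when its
# preconditions hold in the current state, and its effects update the state.
state = {"user_authenticated", "balance_known"}

action = {
    "name": "transfer_funds",
    "preconditions": {"user_authenticated", "balance_known", "amount_confirmed"},
    "effects": {"transfer_complete"},
}

missing = action["preconditions"] - state
if missing:
    # Because the gap is an explicit symbol, the planner can explain
    # itself: it knows it must first ask the user to confirm the amount.
    print(f"Cannot run {action['name']} yet; still need: {missing}")
else:
    state |= action["effects"]
    print(f"{action['name']} complete; state is now {state}")
```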

Neurosymbolic AI

This combination of planning with reasoning is at the heart of our neurosymbolic multimodal Conversational AI platform, Eva (Enterprise Virtual Assistant). Eva uses this AI scaffolding, in combination with a number of other AI tools like knowledge graphs, to engage humans in multimodal conversations with AI agents, crafting dialogue in real time by developing a plan to help them achieve their goals. And it does so without hallucinating, by combining the strengths of Neural AI with Symbolic AI to create a symbiotic partnership between the two.

Eating cake

In the world of enterprise Conversational AI, clients are trying to scale human relationships with AI agents. This requires massive scalability, with grounded truths realized and actuated within milliseconds. We want it all. However, as the old idiom goes, we simply couldn’t have our cake and eat it too with any one approach. By combining the best attributes of Neural AI and Symbolic AI within a single platform, we can deliver a sum greater than its parts.

Neural models like LLMs are highly skilled pattern recognizers. They are excellent at finding complex patterns in large amounts of data, much like how our brains process information. Symbolic AI is a logical reasoning system. It uses predefined rules and knowledge to make decisions, similar to how someone might follow a flowchart or a set of instructions. 

Combined, they allow our platform to reason over and handle massive amounts of unstructured data, identifying patterns that are transposed into symbolic representations that can be reliably and transparently verified and refined. We get to have our cake and eat it too.
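
In code, that division of labor might look something like the sketch below, with a stubbed-in function standing in for the neural model and a hand-written rule standing in for the symbolic layer. Every name, intent, and rule here is hypothetical, not Eva’s actual API.

```python
def neural_extract(utterance):
    """Stand-in for a neural model that turns free text into symbols."""
    # A real system would run an LLM or classifier here; this stub
    # returns the kind of structured output such a model might produce.
    return {"intent": "book_flight", "destination": "Rome", "passengers": 9}

# Symbolic layer: explicit, inspectable business rules.
# Hypothetical rule: group bookings are capped at 8 passengers.
RULES = {
    "book_flight": lambda slots: 1 <= slots.get("passengers", 0) <= 8,
}

slots = neural_extract("I'd like to fly nine of us to Rome")
rule = RULES.get(slots["intent"])

if rule and rule(slots):
    print("Verified: hand the plan to fulfillment.")
else:
    # The symbolic check catches what the neural layer missed, and it
    # can explain exactly which rule was violated.
    print("Rejected by rule: group size exceeds the booking limit.")
```

The neural layer proposes, the symbolic layer verifies, and the verification step can always explain its decision.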

Two bakers

Think about it like this. Imagine that you’re running an Italian bakery and it’s a busy Saturday morning. In your kitchen, you have two bakers.

The first baker, Neural AI, is creative and can come up with brilliant and amazing new pastries and cakes based solely on experience and intuition. They can create wonderful flavors and plate them so they look like works of art. But occasionally, those creations use odd combinations of ingredients or lack crucial ones. The results can be odd, ugly, or just taste bad.

The other baker, Symbolic AI, sticks strictly to the recipe book. They ensure that every item is prepared consistently and according to specific rules. However, they lack creativity and won’t try new ingredients or take patron preferences into account.

Neurosymbolic AI combines the best of both bakers. It's like having the creative baker (Neural) work alongside a knowledgeable baker (Symbolic) who understands the rules of cuisine. The Neural baker can come up with innovative ideas, while the Symbolic baker validates those ideas against established culinary principles, ensuring the desserts are both creative and sensible.

In an enterprise deploying AI agents, Neurosymbolic AI helps to:

  1. Reduce hallucinations - The symbolic component can fact-check the neural network's outputs against a knowledge base, preventing nonsensical results.

  2. Improve explainability - The reasoning process becomes more transparent, as the symbolic component can provide logical explanations for decisions.

  3. Enhance flexibility - The system can handle both pattern recognition tasks and logical reasoning, making it more versatile.

  4. Operate with less data - Symbolic rules can guide the neural network when data is scarce.

  5. Improve speed and reduce latency - AI agents can understand and generate non-hallucinatory multimodal dialogue in real time, allowing for trustworthy, natural, and intuitive conversations.

For enterprises operating in highly regulated industries, or those that simply do not want to be exposed to risk or lawsuits, hallucinations are not an acceptable outcome.

Reduced AI costs for enterprises and the environment

LLMs are expensive for an enterprise to access due to their ever-growing size and required throughput. Enterprises whose AI agents already engage in tens of thousands of conversations can reduce the costs associated with LLMs by moving to a neurosymbolic Conversational AI platform like Eva, and they can do so while improving the AI agent’s access to real-time enterprise data and dynamic dialogue.

LLMs also require enormous amounts of electricity to power the data centers that run them. Consider that, according to Harvard Business Review, “The training process for a single AI model, such as an LLM, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon. AI model training can also lead to the evaporation of an astonishing amount of freshwater into the atmosphere for data center heat rejection…” This is why vendors like Microsoft are moving to restart long-dormant nuclear power plants in the United States.

Neurosymbolic AI, on the other hand, is quite energy efficient at runtime – nuclear power is not required, and there is no need to construct mega data centers full of expensive GPUs. Once a Neurosymbolic AI system has been trained and fine-tuned on the corpus of enterprise knowledge it needs to be successful, the entire system can run on lower-end GPUs or even standard CPUs within existing data centers, and can operate at the edge on mobile devices with limited bandwidth.

When Relationships Matter

Relationships with your customers, prospects, and employees start and end with trust. AI that hallucinates does not build trust. Neurosymbolic AI is required for more reliable and trustworthy AI systems that can handle complex tasks while still being explainable and aligned with business rules and knowledge. 

And this should matter to anyone considering deploying AI virtual assistants, AI voice agents, or AI Avatars within their enterprise.