Skip to content

Can we get Veracity from AI? Ten Questions to ask

From padded resumes to fictitious court cases to high school essays containing made-up history, it is now clear to everyone that ChatGPT and other generative AI technologies have a veracity problem.

Veracity – aka truthfulness – is clearly a priority in the business world. The only thing worse than inaccurate data is some AI bot telling you that you should believe those data.

Veracity is especially important for conversational AI – AI created for the express purpose of interacting with humans via a natural, back-and-forth conversation, either to answer humans’ questions or complete tasks as per their request.

Veracity, however, is surprisingly difficult to pin down, especially when we’re talking about AI. How should we approach the question of veracity from our AI-based applications? Here are ten questions about your AI you should have answers to.

Veracity and Reasoning

Under the covers of most conversational AI offerings is generative AI technology. Generative AI vendors have optimized their output for plausibility rather than veracity.

Their technologies put phrases and sentences together based upon massive quantities of training data with no understanding of what the output means or why it reaches a particular conclusion.

Even when a prompt calls for ChatGPT to create a logical argument, the best it can do is appear to mimic human reasoning, coming up with a plausible but questionable facsimile of human thought.

Businesses require more than spurious reasoning, especially when the AI is conducting conversations with people. They need answers to the following questions:

  • Factual provenance – for the statements the AI takes as representing facts, how did it learn that those statements were in fact true?
  • Patterns of inference – when AI uses some kind of logic, what reasoning patterns did it follow to come up with a particular conclusion?
  • Probabilistic judgment – when the AI is making a judgment about the probability a statement is true, how did it come up with that probability?
  • Relevance of assumptions – if the AI makes assumptions as part of its reasoning, how did it conclude those assumptions were relevant to the argument at hand?
  • Based on the answers to these questions, either the developers of the AI or perhaps the businesspeople using it must be able to understand how the AI is coming to its conclusions.
Veracity and Trust

The more confidence a human has in the decision-making capabilities of AI, the more that human will trust it.

Trust, however, is a more complex human process than a simple evaluation of capabilities. For humans to trust an AI application, they must believe it is not only accurate, but also reliable and predictable in its behavior.standard-quality-control-collage-concept

Such trust also requires an expectation that the AI trusts its human users in return. Trust, after all, is a two-way street.

People must have a reasonable expectation that anyone interacting with the AI is not acting maliciously, perhaps providing the AI with poor data or attempting to subvert its reasoning ability.

For businesses to trust AI, therefore, they need answers to the following questions:

  • Predictability – is the AI reliable and predictable in its behavior? Is there a reasonable expectation that the AI will behave in a manner consistent with the business priorities set out for it?
  • Protections against ill-intentioned users – is there a mechanism in place to govern interactions with users to prevent malicious or otherwise counterproductive prompts or other data that might impact the AI’s veracity?
  • Awareness of constraints – is the AI aware of what it should not or must not do, either because of business or regulatory constraints?

Veracity must be placed into the context of trust. We need more than simple factual outputs from our AI – we need AI we can rely upon.

Veracity and Transparency

As the old Russian proverb says, trust but verify. It is not good enough for AI to come up with the right answers. We need to understand how it came up with those answers.

In fact, explainable AI (XAI) has been a top business priority since well before generative AI became a hot topic. Neural networks and other AI technologies act as black boxes: data go in one end, and inferences come out the other. How are we to trust those answers unless we can see into the box?

The explainability of AI reasoning is only part of the transparency challenge. Transparency is also essential for privacy, and by extension, many aspects of data governance. It is not good enough for your AI to obey privacy rules or other regulations, you must know it is obeying those rules and regulations.

In other words, auditability is essential to transparency and is thus a critical enabler of veracity. In the case of AI, we expect the AI itself to provide the necessary auditability. We should always be able to prompt our AI with ‘prove that your reasoning complied with privacy and other regulations' and get a satisfactory answer in return.

For businesses to have sufficient transparency into their AI, therefore, they need answers to the following questions:

  • Explainability – can the AI explain how it conducted its reasoning to come up with the conclusions it did?
  • Privacy – can the AI confirm that it has complied with all privacy guidelines regarding the data it works with?
  • Compliance – can the AI provide proof of compliance sufficient to meet the requirements of auditors?
  • Without sufficient transparency, businesses won’t be able to trust their AI or the reasoning it performs.
The Intellyx Take

Reasoning, trust, and transparency seem to be high bars for our AI to meet. And yet, today’s AI has the ability to answer all of the questions I posed in this article.

It is true that ChatGPT and other generative AI offerings on the market may fall short – and thus, their solutions may be of limited business value, especially for conversational AI purposes.

Conversational AI vendors like Openstream.ai, in contrast, aren’t simple ChatGPT add-ons. The company has been building reasoning, explainability, trust, governance, and privacy protection into its technology long before ChatGPT hit the public consciousness.

Don’t be fooled by AI that favors plausibility over veracity. AI that delivers veracity is available today.

Copyright © Intellyx LLC. Openstream.ai is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used to write this article.