From padded resumes to fictitious court cases to high school essays containing made-up history, it is now clear to everyone that ChatGPT and other generative AI technologies have a veracity problem.
Veracity – aka truthfulness – is clearly a priority in the business world. The only thing worse than inaccurate data is an AI bot telling you to believe that data.
Veracity is especially important for conversational AI – AI created for the express purpose of interacting with humans via natural, back-and-forth conversation, either to answer their questions or to complete tasks at their request.
Veracity, however, is surprisingly difficult to pin down, especially when we’re talking about AI. How should we approach the question of veracity in our AI-based applications? Here are ten questions about your AI you should have answers to.
Under the covers of most conversational AI offerings is generative AI technology. Generative AI vendors have optimized their output for plausibility rather than veracity.
Their technologies put phrases and sentences together based upon massive quantities of training data with no understanding of what the output means or why it reaches a particular conclusion.
Even when a prompt calls for ChatGPT to create a logical argument, the best it can do is appear to mimic human reasoning, coming up with a plausible but questionable facsimile of human thought.
Businesses require more than spurious reasoning, especially when the AI is conducting conversations with people. They need answers to the following questions:
The more confidence a human has in the decision-making capabilities of AI, the more that human will trust it.
Trust, however, is a more complex human process than a simple evaluation of capabilities. For humans to trust an AI application, they must believe it is not only accurate, but also reliable and predictable in its behavior.
Such trust also requires an expectation that the AI trusts its human users in return. Trust, after all, is a two-way street.
People must have a reasonable expectation that anyone interacting with the AI is not acting maliciously, perhaps providing the AI with poor data or attempting to subvert its reasoning ability.
For businesses to trust AI, therefore, they need answers to the following questions:
Veracity must be placed into the context of trust. We need more than simple factual outputs from our AI – we need AI we can rely upon.
As the old Russian proverb says, “trust, but verify.” It is not good enough for AI to come up with the right answers. We need to understand how it came up with those answers.
In fact, explainable AI (XAI) has been a top business priority since well before generative AI became a hot topic. Neural networks and other AI technologies act as black boxes: data go in one end, and inferences come out the other. How are we to trust those answers unless we can see into the box?
The explainability of AI reasoning is only part of the transparency challenge. Transparency is also essential for privacy, and by extension, many aspects of data governance. It is not good enough for your AI to obey privacy rules or other regulations; you must know it is obeying those rules and regulations.
In other words, auditability is essential to transparency and is thus a critical enabler of veracity. In the case of AI, we expect the AI itself to provide the necessary auditability. We should always be able to prompt our AI with ‘prove that your reasoning complied with privacy and other regulations’ and get a satisfactory answer in return.
For businesses to have sufficient transparency into their AI, therefore, they need answers to the following questions:
Reasoning, trust, and transparency seem to be high bars for our AI to meet. And yet, today’s AI can answer all of the questions I posed in this article.
It is true that ChatGPT and other generative AI offerings on the market may fall short – and thus, their solutions may be of limited business value, especially for conversational AI purposes.
Conversational AI vendors like Openstream.ai, in contrast, aren’t simple ChatGPT add-ons. The company had been building reasoning, explainability, trust, governance, and privacy protection into its technology long before ChatGPT hit the public consciousness.
Don’t be fooled by AI that favors plausibility over veracity. AI that delivers veracity is available today.
Copyright © Intellyx LLC. Openstream.ai is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used to write this article.