Transparency and Ethics for Conversational AI

We are living in an era in which AI and machine learning are rapidly evolving and reshaping our lives in real time. The moment calls to mind Isaac Asimov and the Three Laws of Robotics, which he wrote in 1941 (published in 1942), a set of principles that still influences ethical AI discussions today:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Pretty prophetic.

Thanks to the proliferation of ChatGPT, AI is being used and tested under the scrutiny of the public eye as individual platforms evolve and new use cases emerge. Companies like Openstream must continue to set, articulate, and embody a highly transparent and ethical AI framework for our customers and the audiences they serve.

The Artificial Intelligence industry owes it to humanity to ‘not harm’. Humans need to understand ‘why’ a system reached a particular conclusion, ‘how’ it came to that conclusion, ‘what’ rationale led it there, ‘where’ it acquired the knowledge to draw that conclusion, and what privacy protections apply throughout.

Not every AI company has decades of research and academia in its pedigree, or the depth of ethical considerations baked into its DNA, that we are fortunate enough to have here at Openstream. With this in mind, Openstream’s Chief Scientist, Dr. Phil Cohen, recently signed the Future of Life Institute’s ‘Pause Giant AI Experiments: An Open Letter’ to lend our support to the idea that the industry can do a better job of being transparent and as ethical as possible as it continues to rapidly evolve. This is particularly important because many platforms today are effectively in public beta, even those from the biggest technology companies in the business.

What is the right framework on which to base a highly ethical and transparent Conversational AI platform, and arguably any AI platform? We have implemented a framework built on a foundation of reasoning & inference, explainability, trust, data governance, and privacy protection.

Reasoning & Inference

Reasoning is the ability of an artificial intelligence system to apply logical, probabilistic, common sense, and/or other methods to arrive at conclusions or make decisions based on available data and information. Critical to the ability to reason is first a notion of truth:

  • What statements are facts and what is their provenance, i.e., how did the system learn that a statement is true?
  • What patterns of inference does the system follow?
  • If the system takes a statement to have a probability of being true, from where did it get that probabilistic judgment?
  • If a system makes assumptions in order to reach a conclusion, what rule of inference enabled it to relate the assumption with the conclusion?

The critical element in all the above questions is that people (at least developers, and perhaps customers) should be able to understand the AI system’s reasoning and fix it if the system reaches an incorrect or unwarranted conclusion. Included in the repair process should be the ability to simply tell the system new information, provided the user is trusted on that topic and the provenance of the new information is properly recorded.
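To make this concrete, here is a minimal sketch in Python of a knowledge base that records provenance and accepts new statements only from trusted sources. The class names, trust model, and example facts are illustrative assumptions, not a description of Openstream’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """A statement paired with a record of where it came from."""
    statement: str
    provenance: str          # who or what asserted the statement
    confidence: float = 1.0  # the probabilistic judgment, if any

class KnowledgeBase:
    def __init__(self, trusted_sources):
        self.trusted = set(trusted_sources)
        self.facts = []

    def tell(self, statement, source, confidence=1.0):
        """Accept new information only from trusted sources,
        recording provenance so later conclusions can be audited."""
        if source not in self.trusted:
            raise PermissionError(f"{source!r} is not trusted to assert facts")
        self.facts.append(Fact(statement, source, confidence))

    def why(self, statement):
        """Answer the 'where' question: return the provenance of a statement."""
        return [f for f in self.facts if f.statement == statement]

kb = KnowledgeBase(trusted_sources=["curated_ontology", "trusted_user"])
kb.tell("Flight 42 departs at 09:00", source="curated_ontology")
print(kb.why("Flight 42 departs at 09:00"))
```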

AI inference is the process of using a model to make predictions, classify data, derive new conclusions via reasoning, or make decisions based on new data. It matters because it enables the deployment of AI models in real-world applications, where the model needs to process new data and generate accurate predictions in real time. It allows businesses and organizations to automate tasks, improve decision-making, and provide personalized experiences to users.

Take the example of image recognition with neural networks, whereby an AI model is trained on a dataset of labeled images; during inference, it can classify new images based on the learned patterns. Similarly, in Natural Language Processing (NLP), an AI model can be trained to predict the sentiment of a text, and during inference it can classify new texts as positive, negative, or neutral based on the learned patterns. Because inference is so essential to our trusting its operation, the AI system needs to be able to explain how it reached its conclusions.
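As a toy illustration of the train-then-infer split for sentiment, here is a sketch using scikit-learn; the example texts and labels are invented, and a real system would of course train on far more data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training: the model learns patterns from a (tiny, invented) labeled dataset.
texts = ["great service, very helpful", "awful, slow and rude",
         "loved the whole experience", "terrible support, never again"]
labels = ["positive", "negative", "positive", "negative"]
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Inference: the trained model classifies new, unseen text.
new_text = ["the agent was friendly and quick"]
print(model.predict(new_text))        # e.g. ['positive']
print(model.predict_proba(new_text))  # the probabilistic judgment behind it
```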

Explainability

One of the primary ethical concerns with AI is the lack of explainability. Explainability for AI refers to the system's ability to describe how it arrived at a given prediction or decision, including the decision of what to say. It involves rendering the decision-making process of the AI system transparent and interpretable so that humans can understand and validate the reasoning behind its outputs. 

Simply put, the AI system needs to know what it is doing, and why. But it cannot just make up an after-the-fact reconstruction of its reasoning. For the system to tell the truth, the reasoning must be “causally connected” to what it did. Explainability is critical for several reasons, including ensuring fairness and accountability, increasing trust in AI systems, and enabling humans to intervene and correct errors if necessary.
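One way to picture a “causally connected” explanation is a rule-based system that records its derivation as it reasons, so that the explanation it offers later is the very proof it used, not a reconstruction. The rules and facts below are invented purely for illustration:

```python
# Illustrative rules: conclusion -> (premises, rule name).
rules = {
    "refund_approved": (["item_returned", "within_30_days"], "refund-policy"),
}
# Illustrative known facts, each with a recorded source.
facts = {"item_returned": "user statement", "within_30_days": "order database"}

def derive(goal, trace):
    """Try to establish `goal`, appending each step actually used to `trace`."""
    if goal in facts:
        trace.append(f"{goal}: known fact (source: {facts[goal]})")
        return True
    premises, rule = rules.get(goal, (None, None))
    if premises and all(derive(p, trace) for p in premises):
        trace.append(f"{goal}: derived from {premises} via rule '{rule}'")
        return True
    return False

trace = []
if derive("refund_approved", trace):
    # The explanation IS the reasoning that was performed, step by step.
    print("Why:", *trace, sep="\n  ")
```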

Trust

Trust refers to the level of confidence that humans have in the decisions and actions of artificial intelligence systems. It involves a belief that the AI system is reliable and accurate, and that it behaves in a manner that aligns with human values and expectations. Trust is an essential component of the relationship between humans and AI systems.

Trust allows users to feel comfortable interacting with AI systems and relying on their outputs. It also enables AI systems to operate predictably and efficiently without the need for constant human supervision or intervention. And trust is crucial for ensuring that AI systems are used ethically and do not cause harm to individuals or society as a whole. 

But trust is a two-way street. An AI system also needs to know how and when to trust its users. Users with malevolent intentions should not be able to manipulate the system into performing actions that it should not (see Europol’s March 27, 2023 report for the law enforcement implications). Thus, AI systems need to be defensive and know what they are permitted to do and obligated not to do.
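Here is a minimal sketch of what such a defensive stance could look like in code, assuming a simple role-based policy; the roles, actions, and policy tables are invented for illustration, not an actual Eva policy:

```python
# What each role is permitted to request (illustrative policy).
PERMITTED = {"agent": {"lookup_order", "issue_refund"},
             "guest": {"lookup_order"}}
# What the system is obligated never to do, regardless of who asks.
FORBIDDEN = {"export_all_customer_data", "disable_logging"}

def authorize(user_role: str, action: str) -> bool:
    """Gate every requested action against permissions and obligations."""
    if action in FORBIDDEN:
        return False  # an obligation the system holds toward everyone
    return action in PERMITTED.get(user_role, set())

assert authorize("agent", "issue_refund")
assert not authorize("guest", "issue_refund")
assert not authorize("agent", "export_all_customer_data")
```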

Additional checks, balances, and regulations are also being explored in the United States. The Biden administration has begun to examine whether guardrails need to be placed on artificial intelligence amid growing concerns that the technology could be used to discriminate or spread harmful information.

Data Governance

Data governance comprises the policies, procedures, and frameworks put in place to manage data effectively and ensure that it is used in a responsible, ethical, and compliant manner. It involves establishing standards for data quality, accessibility, security, and privacy, as well as defining roles and responsibilities for data management and oversight.

Effective data governance is essential for AI applications because the quality and reliability of the data used to train and operate AI systems have a significant impact on their performance and accuracy. Data governance also plays a crucial role in ensuring that AI systems operate in a manner that is consistent with legal and ethical requirements.
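As one hedged sketch of governance in code, suppose every dataset must carry quality, ownership, and consent metadata before an AI pipeline may consume it; the field names and approval rule below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Governance metadata that travels with a dataset (illustrative fields)."""
    name: str
    owner: str               # role accountable for this data
    source: str              # provenance of the data
    contains_pii: bool
    consent_obtained: bool
    quality_checked: bool

def approved_for_training(ds: DatasetRecord) -> bool:
    """Gate enforced by the governance framework, not by individual teams."""
    if ds.contains_pii and not ds.consent_obtained:
        return False
    return ds.quality_checked and bool(ds.owner)

ds = DatasetRecord("support_transcripts_2023", owner="data-steward",
                   source="contact-center logs", contains_pii=True,
                   consent_obtained=True, quality_checked=True)
print(approved_for_training(ds))  # True only when the governance criteria hold
```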

Privacy Protection

Privacy protection covers the measures and techniques used to ensure that personal information and data are collected, processed, and stored in a way that respects individual privacy rights. With the increasing use of AI systems in various domains, there is a growing concern about the potential privacy risks posed by these systems.

Privacy protection in AI also involves complying with relevant privacy regulations, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States, among others.

These regulations, along with proposed ones such as the EU AI Act, provide (or will provide) legal frameworks for the collection, processing, and storage of personal data, setting out requirements for obtaining consent from individuals, providing transparency about data usage, and ensuring the right to access and delete personal data.
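To illustrate what honoring those rights might look like at the code level, here is a hedged sketch of consent-gated storage plus access and deletion requests over conversation data, assuming a simple in-memory store; a real deployment would use durable, authenticated infrastructure:

```python
from collections import defaultdict

store = defaultdict(list)  # user_id -> list of stored utterances

def record(user_id: str, utterance: str, consented: bool):
    """Store personal data only when consent has been obtained."""
    if not consented:
        return  # no consent, no storage
    store[user_id].append(utterance)

def access_request(user_id: str):
    """Right of access: return everything held about this user."""
    return list(store.get(user_id, []))

def deletion_request(user_id: str):
    """Right to erasure: remove all personal data for this user."""
    store.pop(user_id, None)

record("u42", "please send it to my home address", consented=True)
print(access_request("u42"))   # the user's stored data
deletion_request("u42")
print(access_request("u42"))   # [] after erasure
```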

Learn more about Openstream’s Conversational AI, Eva

To learn more about Eva, its underlying theories, and its methods, we encourage you to read Eva: A Planning-Based Explanatory Collaborative Dialogue System by Dr. Philip R. Cohen and Dr. Lucian Galescu for more insight into, and transparency about, our platform.