Openstream.ai Patents

Innovation in AI Matters

Openstream.ai’s 20-year pursuit of innovation on behalf of our clients has earned the company several patents. Explore them for a glimpse into what makes Openstream.ai visionary.

Recent AI Patents

Openstream.ai Granted New Patent for Multimodal AI System that Eliminates Hallucinations

This innovation enhances Openstream.ai's platform, Eva (Enterprise Virtual Assistant), by using neuro-symbolic AI to prevent AI hallucinations, errors in which an AI generates false or misleading information.

Businesses need AI systems that can accurately interact with users and carry out tasks reliably. Eva’s neuro-symbolic AI combines the data-processing power of Neural AI with the logical reasoning of Symbolic AI. This ensures the AI operates based on known facts, providing clear and transparent responses without the risks associated with traditional AI models.
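
The sketch below illustrates that pattern in toy form: a neural component proposes an answer, and a symbolic layer verifies it against a fact base before anything reaches the user. It is an assumption-laden illustration of neuro-symbolic checking in general, not Eva's actual implementation; every name in it is a placeholder.

```python
# Toy neuro-symbolic check: a neural model drafts an answer, and a symbolic
# layer accepts only claims grounded in a fact base. Illustrative only;
# not Openstream.ai's implementation.

FACTS = {("order-1042", "status"): "shipped"}   # symbolic knowledge base

def neural_propose(question):
    # Stand-in for a neural model's (possibly hallucinated) draft answer.
    return {"subject": "order-1042", "predicate": "status", "value": "delivered"}

def symbolic_check(claim):
    # A claim passes only if the fact base contains exactly this triple.
    key = (claim["subject"], claim["predicate"])
    return FACTS.get(key) == claim["value"]

def answer(question):
    claim = neural_propose(question)
    if symbolic_check(claim):
        return f"{claim['subject']} {claim['predicate']}: {claim['value']}"
    # Fall back to the verified fact rather than the unverified guess.
    key = (claim["subject"], claim["predicate"])
    if key in FACTS:
        return f"{claim['subject']} {claim['predicate']}: {FACTS[key]}"
    return "I don't have a verified answer for that."

print(answer("What's the status of order 1042?"))  # -> ... status: shipped
```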

Openstream.ai Gets Key Patent for Multimodal AI-Driven Digital Twins of Humans to Scale Access to Experts

This patented approach gives the visionary clients of its Eva™ (Enterprise Virtual Assistant) platform the ability to deploy advanced virtual assistants: digital twins of human experts that carry the experts' knowledge and unique personas and can engage in empathetic, hallucination-free natural conversations with end users.

Digital Twins of Experts can take the form of AI Avatar, AI Virtual, or AI Voice agents on any channel and in any language, collaborating with end users to help them achieve their goals. This allows an enterprise to scale and deploy twins to support customer service, employee help desk, or co-pilot use cases as needed, 24/7/365.

Additional AI Patents

System and Method for Active Learning Based Multilingual Semantic Parser (June '24)

Described is a system and method for training a multilingual semantic parser. A method includes receiving, by the multilingual semantic parser, a multilingual training dataset, wherein the dataset includes pairs of utterances and meaning representations from at least one high-resource language and at least one low-resource language and is initially a machine-translated dataset; training the multilingual semantic parser by translating the utterances in the dataset to a target language; and iteratively selecting, by an acquisition-functions estimator, a subset of the dataset for human translation, updating the dataset with the human-translated subset, and retraining the multilingual semantic parser with the updated dataset.
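
The abstract describes an active-learning loop: train on machine-translated data, let an acquisition function pick the examples most worth sending to human translators, and retrain. A minimal Python sketch of that loop follows; the parser, acquisition function, and translation helpers are all hypothetical stand-ins, not the patented components.

```python
import random

def acquisition_score(pair):
    # Placeholder acquisition function; in practice this would estimate
    # model uncertainty or expected error reduction for each example.
    return random.random()

def human_translate(pair):
    # Placeholder for routing an utterance to a human translator.
    utterance, meaning = pair
    return (utterance + " [human-translated]", meaning)

class ToyParser:
    def train(self, dataset):
        # Placeholder training step over (utterance, meaning) pairs.
        self.trained_on = len(dataset)

def active_learning_loop(dataset, rounds=3, budget=2):
    parser = ToyParser()
    parser.train(dataset)                        # initial training on MT data
    for _ in range(rounds):
        ranked = sorted(dataset, key=acquisition_score, reverse=True)
        subset, rest = ranked[:budget], ranked[budget:]
        dataset = [human_translate(p) for p in subset] + rest
        parser.train(dataset)                    # retrain on the updated data
    return parser

mt_dataset = [(f"utterance {i}", f"mr_{i}") for i in range(10)]
active_learning_loop(mt_dataset)
```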

Publication Date: June 2024

System and method for multi-modality soft-agent for query population and information mining (Oct. '22)

Methods and systems for multi-modality soft-agents for an enterprise virtual assistant tool are disclosed. An exemplary method comprises capturing, with a computing device, one or more user requests based on at least one multi-modality interaction, populating, with a computing device, soft-queries to access associated data sources and applications, and mining information retrieved by executing at least one populated soft-query. A soft-query is created from user requests. A multi-modality user interface engine annotates the focus of user requests received via text, speech, touch, image, video, or object scanning. A query engine populates queries by identifying the sequence of multi-modal interactions, executes queries and provides results by mining the query results. The multi-modality interactions identify specific inputs for query building and specific parameters associated with the query. A query is populated and used to generate micro-queries associated with the applications involved.
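
As a rough illustration of the flow the abstract describes, the sketch below populates a query from an ordered sequence of multimodal interactions and fans it out as micro-queries to the associated applications. The data shapes and function names are assumptions for illustration, not the patented interfaces.

```python
def populate_soft_query(interactions):
    """Build a query from ordered multimodal interactions: some supply the
    query focus (e.g., speech), others supply parameters (e.g., touch)."""
    query = {"focus": None, "params": {}}
    for modality, payload in interactions:
        if "focus" in payload:
            query["focus"] = payload["focus"]
        query["params"].update(payload.get("params", {}))
    return query

def micro_queries(query, applications):
    # Fan the populated query out to each associated application.
    return [{"app": app, **query} for app in applications]

interactions = [
    ("speech", {"focus": "open support tickets"}),
    ("touch",  {"params": {"region": "EMEA"}}),  # user tapped a chart region
]
for mq in micro_queries(populate_soft_query(interactions), ["crm", "helpdesk"]):
    print(mq)
```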

Publication Date: October 2022

System and Method for Temporal Attention Behavioral Analysis of Multi-Modal Conversations in a Question and Answer System (May '22)

Methods and systems for attention behavioral analysis for a conversational question and answer system are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first-modality inputs, and at least one attention region in the first-modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models the representations of the multimodality inputs at different levels of granularity, including the entity, turn, and conversation levels. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation.
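
To make the granularity levels concrete, here is an encoder-only PyTorch sketch that embeds tokens (entity level), summarizes each turn (turn level), runs a recurrent pass over turns (conversation level), and applies attention across turns. It is a loose illustration under the assumption of a PyTorch-style stack; the patented multilevel encoder-decoder and its focalized-attention mechanism are not specified here.

```python
import torch
import torch.nn as nn

class MultilevelEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)                  # entity level
        self.turn_enc = nn.GRU(dim, dim, batch_first=True)     # turn level
        self.conv_enc = nn.GRU(dim, dim, batch_first=True)     # conversation level
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, turns):
        # turns: (batch, n_turns, n_tokens) token ids
        b, t, k = turns.shape
        tokens = self.embed(turns.view(b * t, k))              # token embeddings
        _, turn_state = self.turn_enc(tokens)                  # one vector per turn
        turn_vecs = turn_state[-1].view(b, t, -1)
        conv_out, _ = self.conv_enc(turn_vecs)                 # dialogue context
        # Attention over turns, conditioned on the conversation state.
        fused, weights = self.attn(conv_out, turn_vecs, turn_vecs)
        return fused, weights

dialogues = torch.randint(0, 1000, (2, 5, 7))  # 2 dialogues, 5 turns, 7 tokens
fused, attn_weights = MultilevelEncoder()(dialogues)
```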

Publication Date: May 2022

Methods for Reinforcement Document Transformer for Multimodal Conversations and Devices Thereof (Dec. '22)

A computer-implemented method and system for enrichment of responses in a multimodal conversation environment are disclosed. A Question Answer (QA) engine, such as a reinforcement document transformer, exploits a document's template structure or layout, adapts information extraction using a domain ontology, stores the enriched contents in hierarchical form, and learns context and query patterns based on the intent and utterances of one or more queries. The region of enriched content used to prepare a response to a given query is expanded or collapsed by navigating upward or downward in the hierarchy. The QA engine returns the most relevant answer, with the proper context, for one or more questions. The responses are provided to the user in one or more modalities.
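
The hierarchical storage and expand/collapse navigation can be pictured with a toy tree of document sections: answering starts at the best-matching node and walks upward until the region carries enough context. The node structure and the length-based expansion rule below are illustrative assumptions, not the patented learned policy.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list["Node"] = field(default_factory=list)
    parent: "Node | None" = None

def add_child(parent, text):
    child = Node(text, parent=parent)
    parent.children.append(child)
    return child

def answer_region(node, min_context=40):
    """Expand the response region by navigating up the hierarchy until
    enough surrounding context is collected (a toy stand-in for the
    learned expand/collapse decision)."""
    parts, region = [node.text], node
    while region.parent and sum(len(p) for p in parts) < min_context:
        region = region.parent
        parts.append(region.text)     # expand: pull in the parent section
    return " | ".join(reversed(parts))

doc = Node("Policy handbook: travel and expenses overview")
section = add_child(doc, "Expenses section: caps and receipts")
leaf = add_child(section, "Meal cap: $50")
print(answer_region(leaf))  # leaf answer, expanded with parent context
```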

Publication Date: December 2022

System and Method for Cooperative Plan-Based Utterance-Guided Multimodal Dialogue (Dec. '22)

Methods and systems for multimodal conversational dialogue are disclosed. The multimodal conversational dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal conversational dialogue system includes a multimodal semantic parser that performs semantic parsing and multimodal fusion of the multimodal inputs to determine a goal of the user. The multimodal conversational dialogue system includes a dialogue manager that generates a dialogue with the user in real time. The dialogue includes system-generated utterances that are used to conduct a conversation between the user and the multimodal conversational dialogue system.
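
A minimal sketch of that pipeline, assuming toy stand-ins for each stage: per-modality parses are fused into one semantic frame, a goal is inferred from the frame, and a dialogue manager plans the next system utterance toward that goal. None of these functions are Openstream.ai APIs.

```python
def fuse(inputs):
    # Toy multimodal fusion: merge per-modality semantic frames.
    frame = {}
    for modality, parse in inputs.items():
        frame.update(parse)
    return frame

def infer_goal(frame):
    # Toy goal inference from the fused frame.
    return frame.get("intent", "unknown")

def next_utterance(goal, frame):
    # Toy dialogue manager: ask for missing slots, else acknowledge.
    if goal == "book_flight" and "date" not in frame:
        return "What date would you like to fly?"
    return f"Working on your goal: {goal}."

inputs = {
    "speech": {"intent": "book_flight", "destination": "Paris"},
    "touch":  {"destination": "Paris"},  # user tapped Paris on a map
}
frame = fuse(inputs)
print(next_utterance(infer_goal(frame), frame))
```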

Publication Date: December 2022
