The present embodiments relate to a virtual dialog system employing an automated virtual dialog agent, such as, for example, a “chatbot,” and a related computer program product and computer-implemented method. In certain exemplary embodiments, a knowledge gap between one or more requests and corresponding expected responses is identified and resolved, with the resolution directed at bridging or minimizing the knowledge gap to improve performance of the automated virtual dialog agent.
A chatbot is a computer program that uses artificial intelligence (AI) as a platform to conduct a transaction between an automated virtual dialog agent and, typically, a user such as a consumer. The transaction may involve product sales, customer service, information acquisition, or other types of transactions. Chatbots interact with the user through dialog, often either textual (e.g., online or by text message) or auditory (e.g., by telephone). It is known in the art for the chatbot to function as a question-answer component between a user and the AI platform. The quality of the questions and answers is derived from the quality of question understanding, question transformation, and answer resolution. A frequent cause of error, commonly manifested as a failure to find a corresponding response to a question, is a lack of knowledge for an effective transformation of the question into an equivalent knowledge representation that maps to the answer. For example, a lack of synonyms or concept relations can limit the ability of the AI platform to determine that the question is equivalent or related to a known question for which an answer is available.
The embodiments include a system, computer program product, and method for improving performance of a dialog system, and in particular embodiments the improvements are directed to active explanation to dynamically solicit input for relevant knowledge for bridging a knowledge gap.
In one aspect, a system is provided for use with a computer system including a processing unit, e.g., a processor, operatively coupled to memory, and an artificial intelligence (AI) platform in communication with the processing unit. The AI platform is configured with tools to direct performance of an operatively coupled dialog system. The tools include a dialog manager, an artificial intelligence (AI) manager, and a director. The dialog manager functions to receive and process natural language (NL) as related to an interaction with an automated virtual dialog agent of the dialog system. The NL includes dialog events in the form of one or more input instances and one or more corresponding output instances or output actions. The AI manager functions to apply the dialog event to a learning program to interpret the one or more input instances, identify a knowledge gap, and dynamically bridge the knowledge gap. The director functions to refine the automated virtual dialog agent commensurate with the bridged knowledge gap, with the refinement directed at improving performance of the dialog system.
In another aspect, a computer program product is provided for improving performance of a virtual dialog agent system. The computer program product comprises a computer readable storage medium having program code embodied therewith. The program code is executable by a processor to direct performance of an operatively coupled dialog system. Program code functions to receive and process natural language (NL) as related to an interaction with an automated virtual dialog agent of the dialog system. The NL includes dialog events in the form of one or more input instances and one or more corresponding output instances or output actions. Program code further functions to apply the dialog event to a learning program to interpret the one or more input instances, identify a knowledge gap, and dynamically bridge the knowledge gap. Program code is further provided to refine the automated virtual dialog agent commensurate with the bridged knowledge gap, with the refinement directed at improving performance of the dialog system.
In yet another aspect, a computer-implemented method is provided of improving performance of a dialog system. The method comprises: receiving and processing, by a processor of a computing device, natural language (NL) as related to an interaction with an automated virtual dialog agent. The NL includes dialog events in the form of one or more input instances and one or more corresponding output instances or output actions. The dialog event is applied to a learning program to interpret the one or more input instances, identify a knowledge gap, and dynamically bridge the knowledge gap. The automated virtual dialog agent is subject to refinement commensurate with the bridged knowledge gap, with the refinement directed at improving performance of the dialog system.
In a further aspect, a computer system is provided with an artificial intelligence (AI) platform in communication with a processor. The AI platform is configured with tools to direct performance of an operatively coupled dialog system. The tools include a dialog manager, an artificial intelligence (AI) manager, and a director. The dialog manager functions to receive and process natural language (NL) as related to an interaction with an automated virtual dialog agent of the dialog system. The AI manager functions to apply a dialog event of the NL to a learning program to interpret one or more input instances, identify a knowledge gap, and dynamically bridge the knowledge gap. The director functions to refine the automated virtual dialog agent commensurate with the bridged knowledge gap, with the refinement directed at improving performance of the dialog system.
These and other features and advantages will become apparent from the following detailed description of the presently exemplary embodiment(s), taken in conjunction with the accompanying drawings.
The drawings referenced herein form a part of the specification and are incorporated herein by reference. Features shown in the drawings are meant as illustrative of only some embodiments, and not of all embodiments, unless otherwise explicitly indicated.
It will be readily understood that the components of the present embodiments, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the apparatus, system, method, and computer program product of the present embodiments, as presented in the Figures, is not intended to limit the scope of the embodiments, as claimed, but is merely representative of selected embodiments.
Reference throughout this specification to “a select embodiment,” “one embodiment,” “an exemplary embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “a select embodiment,” “in one embodiment,” “in an exemplary embodiment,” or “in an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment. The embodiments described herein may be combined with one another and modified to include features of one another. Furthermore, the described features, structures, or characteristics of the various embodiments may be combined and modified in any suitable manner.
The illustrated embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and processes that are consistent with the embodiments as claimed herein.
In the field of artificially intelligent computer systems, natural language systems (such as the IBM Watson® artificially intelligent computer system and other natural language systems) process natural language based on knowledge acquired by the system. To process natural language, the system may be trained with data derived from a database or corpus of knowledge, but the resulting outcome can be incorrect or inaccurate for a variety of reasons.
Machine learning (ML), which is a subset of Artificial Intelligence (AI), utilizes algorithms to learn from data and create foresights based on this data. AI refers to the intelligence demonstrated when machines, based on information, are able to make decisions that maximize the chance of success in a given topic. More specifically, AI is able to learn from a data set to solve problems and provide relevant recommendations. Cognitive computing is a mixture of computer science and cognitive science. Cognitive computing utilizes self-teaching algorithms that use data mining, visual recognition, and natural language processing to solve problems and optimize human processes.
At the core of AI and associated reasoning lies the concept of similarity. The process of understanding natural language and objects requires reasoning from a relational perspective that can be challenging. Structures, including static structures and dynamic structures, dictate a determined output or action for a given determinate input. More specifically, the determined output or action is based on an express or inherent relationship within the structure. This arrangement may be satisfactory for select circumstances and conditions. However, it is understood that dynamic structures are inherently subject to change, and the output or action may be subject to change accordingly. Existing solutions for efficiently identifying objects, understanding natural language, and processing content responsive to the identification and understanding, as well as to changes to the structures, are extremely difficult to implement at a practical level.
A chatbot is an Artificial Intelligence (AI) program that simulates interactive human conversation by using pre-calculated phrases and auditory or text-based signals. Chatbots are increasingly used in electronic platforms for customer service support. In one embodiment, the chatbot may function as an intelligent virtual agent. Each chatbot experience comprises a set of communications composed of user actions and dialog system actions, with the experience having a discriminative behavior pattern. It is understood in the art that chatbot dialogs may be evaluated and subject to diagnosis to ascertain elements of the chatbot that may warrant changes to improve future chatbot experiences. Such evaluations identify patterns of behavior. By studying these patterns, and more specifically by identifying different characteristics of the patterns, the chatbot program may be refined or amended to improve chatbot metrics and future chatbot experiences.
A system, computer program product, and method automatically identify and resolve a knowledge gap. As shown and described herein, the knowledge gap is defined as context representations that are expected to be equivalent but which cannot be derived from one another with sufficient accuracy. The cause of the knowledge gap may stem from different scenarios. For example, domain specific concepts used to describe the request may not be the same as those used to describe the action context, or the action context may not be complete because a designer missed or omitted specification of elements, e.g., because providing such details increases solution design labor, or because commonly known or accepted knowledge is assumed.
Two avenues are provided and described in detail below to resolve the identified knowledge gap and provide explanations thereof, including online, with an end-user interacting with an artificial intelligence (AI) solution, e.g., a chatbot platform, and offline, by leveraging a subject matter expert (SME) to review system-produced questions and answers. Examples of the provided explanation(s) include, but are not limited to, enforcement of an existing concept relation for a positive response, and qualifying concept relations in context for a negative response. The goal of the explanation is to enrich the concept relations represented in the solution. More specifically, domain knowledge is extended to capture concepts and relations, and is not limited to questions and answers. One example type of concept relation is a concept relation of equivalence, in which “A” is identical or unconditionally equivalent to “B”, e.g., “LAN” is equivalent to “local area network”. In any occurrence, A or B can be replaced with the other concept without a change of meaning. Another example type of concept relation is a concept relation of contextual implication. For example, “A” co-occurs with “B”, e.g., Ethernet co-occurs with wire, such that any occurrence of A generates a context of B, or “A” implies “B”, e.g., Ethernet implies network and wired network, such that any occurrence of A can be replaced by B and assertions remain true. In an exemplary embodiment, and as described in detail below, one or more multiple choice questions are generated to elicit possible reasons for similarities or differences.
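By way of a non-limiting illustration, the two types of concept relations described above, unconditional equivalence and contextual implication, may be sketched as follows. The relation tables, terms, and the function name `expand_concepts` are illustrative assumptions, not elements of the described embodiments.

```python
# Illustrative sketch only: hypothetical relation tables for the two
# concept-relation types described above.
EQUIVALENT = {"lan": "local area network"}            # unconditional equivalence
IMPLIES = {"ethernet": ["network", "wired network"]}  # contextual implication

def expand_concepts(concepts):
    """Expand a set of concepts using equivalence and implication relations."""
    expanded = set(concepts)
    for concept in concepts:
        key = concept.lower()
        if key in EQUIVALENT:
            # either concept can replace the other without a change of meaning
            expanded.add(EQUIVALENT[key])
        for implied in IMPLIES.get(key, []):
            # any occurrence of the concept generates a context of the implied concept
            expanded.add(implied)
    return expanded

print(sorted(expand_concepts({"ethernet", "lan"})))
# ['ethernet', 'lan', 'local area network', 'network', 'wired network']
```

In such a sketch, enriching the domain knowledge amounts to adding entries to the relation tables, which in turn widens the set of representations the platform can recognize as equivalent or related.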
The chatbot platform functions as an AI interaction interface. As shown and described herein, the chatbot platform is supplemented by leveraging a subject matter expert (SME) to provide ground truth data to bootstrap the system. Ground truth (GT) is a term used in machine learning that refers to information provided by direct observation, e.g., empirical evidence, as opposed to information provided by inference. Attaching one or more taxonomy tags, referred to herein as labels, to GT data provides structure and meaning to the data. Annotated GT, or an annotation, is attached to the document or, in one embodiment, to elements of the document, and indicates the subject matter of elements present within the document. The annotation is created and attached by annotators of different skillsets reviewing the documents. Accordingly, domain specific relations are collected from the chatbot platform and the SME.
Referring to
The AI platform (150) is operatively coupled to the network (105) to support interaction with the virtual dialog agent (162) from one or more of the computing devices (180), (182), (184), (186), (188), and (190). More specifically, the computing devices (180), (182), (184), (186), and (188) communicate with each other and with other devices or components via one or more wired and/or wireless data communication links, where each communication link may comprise one or more of wires, routers, switches, transmitters, receivers, or the like. In this networked arrangement, the server (110) and the network connection (105) enable communication detection, recognition, and resolution. Other embodiments of the server (110) may be used with components, systems, sub-systems, and/or devices other than those that are depicted herein.
The AI platform (150) is shown herein operatively coupled to the dialog agent (162), which is configured to receive input (102) from various sources across the network (105). For example, the dialog system (160) may receive input across the network (105) and leverage the data source (170), also referred to herein as the knowledge domain or corpus of information, to create output or response content.
As shown, the data source (170) is configured with a plurality of libraries, shown herein by way of example as libraryA (172A), libraryB (172B), . . . , and libraryN (172N). Each library is populated with data in the form of feedback data and ground truth data. In an exemplary embodiment, each library may be directed to specific subject matter. For example, in an embodiment, libraryA (172A) may be populated with items directed to athletics, libraryB (172B) may be populated with items directed to finance, etc. Similarly, in an embodiment, the libraries may be populated based on industry. The dialog system (160) is operatively coupled to the knowledge domain (170) and the corresponding libraries.
The dialog system (160) is an interactive AI interface to support communication between a virtual agent and a non-virtual agent, such as a user, which can be a human or software, or potentially another AI virtual agent. The interactions that transpire generate what are referred to as conversations, with the content of such conversations stored in conversation log files (also referred to as records). Each log file records interactions with the virtual dialog agent (162). According to an exemplary embodiment, each conversation log, e.g., log file, is a recording of questions presented to the dialog system (160) and corresponding answers generated from the dialog system (160). Accordingly, the communication that transpires includes a dialog in an electronic platform between a user and a virtual agent.
The dialog system (160) is operatively coupled to a knowledge base (140) to store the records generated by the dialogs. As shown by way of example, the knowledge base (140) is shown herein with data structures to store representations of the dialog. In this example, there are two data structures, DSA (142A) and DSB (142B). The quantity of data structures shown is for illustrative purposes and should not be considered limiting. Each data structure is populated with representations of the requests, e.g. questions, and corresponding generated responses or response actions, e.g. answers. As shown and described in
The various computing devices (180), (182), (184), (186), (188), and (190) in communication with the network (105) may include access points to the dialog system (160). The network (105) may include local network connections and remote connections in various embodiments, such that the AI platform (150) may operate in environments of any size, including local and global, e.g., the Internet. Additionally, the AI platform (150) serves as a back-end system that can make available a variety of knowledge extracted from or represented in documents, network accessible sources and/or structured data sources. In this manner, some processes populate the AI platform (150), with the AI platform (150) also including input interfaces to receive requests and respond accordingly.
As shown, content may be represented in one or more models operatively coupled to the AI platform (150) via the knowledge base (140). Content users may access the AI platform (150) and the operatively coupled dialog system (160) via a network connection or an Internet connection to the network (105), and may submit natural language input to the dialog system (160) from which the AI platform (150) may effectively determine an output response related to the input by leveraging the operatively coupled data source (170) and the tools that comprise the AI platform (150).
The AI platform (150) is shown herein with several tools to support the dialog system (160) and the corresponding virtual agent (e.g., chatbot) (162), and more specifically, with the tools directed at improving performance of the dialog system (160) and virtual agent (162) experience. The AI platform (150) employs a plurality of tools to interface with and support improving performance of the virtual agent (162). The tools include a dialog manager (152), an artificial intelligence (AI) manager (154), and a director (156).
The dialog manager (152) interfaces with the dialog system (160) via receipt of natural language (NL) related to interaction with the automated virtual dialog agent (162). The received natural language is shown herein populated and stored in the corresponding data structures of the knowledge base (140). Each dialog event in the knowledge base (140) includes one or more requests and one or more corresponding responses or response actions.
The AI manager (154) is shown herein operatively coupled to the dialog manager (152). The AI manager (154) functions to apply a received or obtained dialog event to a learning program for knowledge gap assessment and remediation. As shown, the learning program (154A) is operatively coupled to the AI manager (154). The learning program (154A) subjects the dialog to an interpretation by producing a response in the form of an explanation of an interpreted request and associated response. In an exemplary embodiment, the explanation is a rule between or bridging two or more concepts. The interpretation enables the learning program (154A) to automatically identify the presence of a knowledge gap, if any. More specifically, the learning program (154A) determines whether the input, also referred to herein as an input instance, includes one or more concepts that are not present in, e.g., are absent from, the corresponding output instance or output action. Identification of the knowledge gap and instances that require explanation is automated. Examples of scenarios that lead to the knowledge gap identification include, but are not limited to, the following: identification of concepts in the question that do not appear as related to the concepts in a corresponding ground truth answer, identification of synonyms in a preferred answer to the terms in the question, and identification of terms that describe a differentiating context setting that was not explicitly mentioned in the question yet present in the preferred answer. In an embodiment, the learning program (154A) determines when the input instance and the corresponding output instance or output action(s) are not aligned. In addition to identification of the knowledge gap, the learning program (154A) dynamically solicits pieces of knowledge to extend the explanation of the input instance(s) and map the request(s) to the output instance(s) or output action(s), thereby effectively bridging the identified knowledge gap.
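A minimal, non-limiting sketch of the automated gap identification described above is shown below, under the assumption that concepts have already been extracted from the input and output as simple strings; the function name and the relation table are hypothetical.

```python
def find_knowledge_gap(input_concepts, output_concepts, known_relations):
    """Return input concepts with no direct or known-related match in the output.

    known_relations maps a concept to a set of concepts it is known to relate to;
    a gap exists when an input concept neither appears in the output nor relates,
    per existing knowledge, to any concept in the output.
    """
    gap = set()
    for concept in input_concepts:
        related = known_relations.get(concept, set()) | {concept}
        if related.isdisjoint(output_concepts):
            gap.add(concept)
    return gap

# Hypothetical example: "ethernet" is known to relate to "network",
# so only "batt" lacks a mapping into the output and constitutes a gap.
relations = {"ethernet": {"network", "wired network"}}
print(find_knowledge_gap({"ethernet", "batt"}, {"network"}, relations))  # {'batt'}
```

The concepts returned by such a check are the candidates for which the learning program would solicit additional pieces of knowledge.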
In an exemplary embodiment, the AI manager (154) receives ground truth data to bootstrap the data source (170). Similarly, in an embodiment, the AI manager (154) enriches the data source (170) with the explanation and the solicited pieces of knowledge. In an exemplary embodiment, the learning program (154A) analyzes multiple pairs of input instance(s) and output instance(s) or action(s), e.g. input and output, of the dialog system. This extends the functionality of the learning program (154A) to identify one or more features common among two or more pairs of requests and responses, which may then be leveraged to solicit pieces of knowledge to reduce or eliminate the knowledge gap.
The AI manager (154) represents the interpretation of the dialog in the form of a model, such as, but not limited to, modelQ,0 (142Q,0) representing the question(s) and modelA,0 (142A,0) representing the corresponding answer(s). The model representation enables the AI manager (154) to leverage the structure and functionality of the corresponding models to compare content therein, e.g., text of the question-answer pair, and to determine any similarities or differences corresponding to concept relations. In an exemplary embodiment, the model is a representation of the dialog events in a subtree format. The AI manager (154) quantifies the knowledge gap with a measurement of a distance between subgraphs with a common node label. In an exemplary embodiment, the measured distance corresponds to or is an indicator of the complexity of the identified knowledge gap. In addition to the comparison, the AI manager (154) may generate one or more questions for the dialog manager (152) to communicate through the dialog system (160) and corresponding virtual agent (162). In an exemplary embodiment, the dialog manager (152) receives one or more answers to the generated questions. The received answer functions as an indicator of the identified knowledge gap and is communicated to the AI manager (154) for knowledge gap assessment and remediation.
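The distance measurement described above may be sketched as follows, under the assumption that each subgraph is represented as a set of (node, relation, node) triplets; the function name and example triplets are illustrative assumptions rather than the actual measurement of an embodiment.

```python
def subgraph_distance(triplets_a, triplets_b):
    """Distance between two subgraphs sharing common node labels: the number of
    (node, relation, node) triplets present in only one of the two subgraphs."""
    return len(set(triplets_a) ^ set(triplets_b))

# Hypothetical question and answer subgraphs; one triplet differs.
question = {("replace", ":ARG1", "product"), ("product", ":name", "batt")}
answer = {("replace", ":ARG1", "product")}
print(subgraph_distance(question, answer))  # 1
```

Under this sketch, a distance of zero indicates aligned representations, while a larger distance serves as the indicator of a more complex knowledge gap.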
The director (156) is shown herein operatively coupled to the dialog manager (152), the AI manager (154) and the dialog system (160). The director (156) is configured to refine the virtual agent (162) in relation to the corresponding dialog event. The refinement is in the form of leveraging the chatbot platform to facilitate and enable system interaction to collect domain specific relations from correct responses, such as ground truth, or positive feedback. Details of the refinement are shown and described in
The dialog events that are created or enabled by the dialog system (160) may be processed by the IBM Watson® server (110), and the corresponding artificial intelligence platform (150). The dialog manager (152) performs an analysis of received natural language using a variety of reasoning algorithms. There may be hundreds or even thousands of reasoning algorithms applied, each of which performs different analysis, e.g., comparisons. For example, some reasoning algorithms may look at the matching of terms and synonyms within the language of the received dialog and the corresponding responses or response actions. In one embodiment, the dialog manager (152) may process the electronic communication to identify and extract features within the communication. Whether through use of extracted features and feature representations, or an alternative platform for processing electronic records, the dialog manager (152) processes the dialog events in an effort to identify and parse events and a behavioral characteristic of the dialog events. In an exemplary embodiment, behavioral characteristics include, but are not limited to, language and knowledge. In one embodiment, the platform identifies grammatical components, such as nouns, verbs, adjectives, punctuation marks, etc., in the requests and corresponding responses or response actions. Similarly, in one embodiment, one or more reasoning algorithms may look at temporal or spatial features in the language of the electronic records.
In some illustrative embodiments, server (110) may be the IBM Watson® system available from International Business Machines Corporation of Armonk, New York, augmented with the mechanisms of the illustrative embodiments described hereafter.
The dialog manager (152), the AI manager (154), and the director (156), hereinafter referred to collectively as AI tools, are shown as being embodied in or integrated within the artificial intelligence platform (150) of the server (110). The AI tools may be implemented in a separate computing system (e.g., 190) that is connected across the network (105) to the server (110). Wherever embodied, the AI tools function to evaluate dialog events, extract behavior characteristics from the requests and responses, selectively identify a knowledge gap, and, through active explanation, dynamically solicit input for knowledge that is relevant to bridge the knowledge gap and improve question transformation and knowledge representation.
In selected example embodiments, the dialog manager (152) may be configured to apply NL processing to identify the behavior characteristics of the dialog events. For example, the dialog manager (152) may perform a sentence structure analysis, with the analysis entailing a parse of the subject sentence(s), with the parse denoting grammatical terms and parts of speech. In one embodiment, the dialog manager (152) may use a Slot Grammar Logic (SGL) parser to perform the parsing. The dialog manager (152) may also be configured to apply one or more learning methods to match detected content to known content to decide and assign a value to the behavior characteristic.
Types of information handling systems that can utilize the artificial intelligence platform (150) range from small handheld devices, such as handheld computer/mobile telephone (180) to large mainframe systems, such as mainframe computer (182). Examples of handheld computers (180) include personal digital assistants (PDAs), personal entertainment devices, such as MP4 players, portable televisions, and compact disc players. Other examples of information handling systems include a pen or tablet computer (184), a laptop or notebook computer (186), a personal computer system (188), and a server (190). As shown, the various information handling systems can be networked together using computer network (105). Types of computer network (105) that can be used to interconnect the various information handling systems include Local Area Networks (LANs), Wireless Local Area Networks (WLANs), the Internet, the Public Switched Telephone Network (PSTN), other wireless networks, and any other network topology that can be used to interconnect the information handling systems. Many of the information handling systems include nonvolatile data stores, such as hard drives and/or nonvolatile memory. Some of the information handling systems may use separate nonvolatile data stores (e.g., server (190) utilizes nonvolatile data store (190A), and mainframe computer (182) utilizes nonvolatile data store (182A)). The nonvolatile data store (182A) can be a component that is external to the various information handling systems or can be internal to one of the information handling systems.
The information handling system employed to support the artificial intelligence platform (150) may take many forms, some of which are shown in
An Application Program Interface (API) is understood in the art as a software intermediary between two or more applications. With respect to the artificial intelligence platform (150) shown and described in
Referring to
The feedback, e.g., final feedback, is stored in a repository at (314), and leveraged by the system starting at step (316) to enrich the domain knowledge. The enrichment may be responsive to positive feedback or negative feedback. Similarly, the enrichment may be through the chatbot platform or through ground truth provided by a subject matter expert (SME). Similarly, the enrichment may assess the presence of a knowledge gap, if any, and resolve such gaps through domain knowledge enrichment in the form of feedback interaction via the chatbot (162). It is understood in the art that various methods may have been utilized to generate responses to the chatbot, and as such, some matches between a question, e.g., input, and a corresponding generated answer may be partial, e.g., partially good or partially bad. Although such an answer may be considered overall a good answer, knowledge gaps may be present that could benefit from additional information. Similarly, the system may consider an answer to be good when the feedback is negative, and as such the system requires, or could benefit from, additional information or knowledge to clarify the discrepancy.
Learning, also referred to herein as learning interactions, is shown herein bifurcated into feedback interaction (316) and knowledge enrichment interaction (318). Feedback interaction leverages the chatbot platform to facilitate and enable system interaction to collect domain specific relations from correct responses, such as ground truth, or positive feedback. These relations are used to augment the domain knowledge. Feedback interaction (316) is followed by generation of an explanation prompt (320), details of which include a knowledge gap assessment, as shown and described in
Referring to
Referring to
A representation of components of the question as identified at step (504), also referred to herein as a request, is stored in a corresponding data structure (506). For example, in an embodiment, the identified question components are represented as an Abstract Meaning Representation (AMR) tree or a parse tree. AMR is a semantic representation that expresses the logical meaning of sentences with rooted, directed, acyclic graphs. AMR associates semantic concepts with nodes on a graph, while relations are labeled edges between concept nodes. In an exemplary embodiment, AMR expresses the semantic representation of the sentences in a hierarchy, which is an organization technique in which items are layered or grouped to reduce complexity.
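A minimal, non-limiting sketch of such an AMR-style representation follows, with concepts as nodes and relations as labeled edges of a rooted, directed graph. The node identifiers and role labels are illustrative assumptions, not actual output of an AMR parser.

```python
# Illustrative AMR-style representation: concepts as nodes, relations as
# labeled edges of a rooted, directed graph. Identifiers and role labels
# are assumptions for the sake of illustration.
amr = {
    "root": "r",
    "nodes": {"r": "replace-01", "p": "product", "b": "batt"},
    "edges": [
        ("r", ":ARG1", "p"),   # the thing being replaced
        ("p", ":name", "b"),   # the product's name
    ],
}

def children(graph, node):
    """Return the (relation, target) pairs for a node, i.e., its labeled edges."""
    return [(rel, tgt) for src, rel, tgt in graph["edges"] if src == node]

print(children(amr, "r"))  # [(':ARG1', 'p')]
```

Representing both the question and the answer in this common graph form is what enables the subtree comparisons described herein.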
Following the extraction and representation at steps (504) and (506), respectively, the answer to the question is analyzed (508) similar to the question analysis. More specifically, one or more answers to the question are generated and subject to analysis at step (508). In an exemplary embodiment, the answers are obtained from a corresponding knowledge base or knowledge domain. The analysis subjects the answer(s) to NL processing similar to that applied to the question, wherein one or more components in the answer, e.g., subject or object, and corresponding characteristics, etc., are identified. The analysis identifies the relevance of the extracted concepts and relations instances, and in an embodiment, subjects the concepts and relations instances to a ranking, and represents or characterizes the relevance in the data structure representing the question. In an exemplary embodiment, information in the knowledge domain is used for the analysis and identification. The information includes, but is not limited to, concepts or relations of a specific type, such as failure actions and attributes, product management actions, e.g., restart, delete, and configure, product names, and product components. Similarly, in an embodiment, a classifier based on a neural language model or sequence model is utilized to label which terms are relevant for the domain. In an exemplary embodiment, the extracted concepts and relations may be arranged in a hierarchy based on the ranking. Similar to the question processing, the concepts within the answer text, and relations of the answer to the question, are extracted, and a representation of components of the answer as identified at step (510), also referred to herein as a response, is stored in a corresponding data structure (512). Similar to the question processing, in an embodiment the identified answer components are represented as an Abstract Meaning Representation (AMR) tree or a parse tree.
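The relevance ranking described above may be sketched with a simple lexical scoring heuristic, standing in for the neural language model or sequence model classifier of an actual embodiment; the scoring scheme, names, and example vocabulary below are illustrative assumptions.

```python
def rank_relevance(concepts, domain_terms):
    """Rank extracted concepts by a simple domain-relevance score: 1.0 for a
    known domain term, otherwise the fraction of tokens shared with any term."""
    def score(concept):
        if concept in domain_terms:
            return 1.0
        tokens = set(concept.split())
        return max(
            (len(tokens & set(term.split())) / len(tokens) for term in domain_terms),
            default=0.0,
        )
    return sorted(concepts, key=score, reverse=True)

# Hypothetical domain vocabulary of product-management actions and components.
domain = {"restart", "delete", "configure", "wired network"}
print(rank_relevance(["the", "restart", "network cable"], domain))
# ['restart', 'network cable', 'the']
```

The resulting ordering illustrates how extracted concepts could be arranged in a hierarchy based on the ranking.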
In an exemplary embodiment, representations of the question and answer can comprise multiple data structures representing multiple cognitive features. Accordingly, both the question and corresponding answer text are subject to concept and relations extraction.
The lists of elements of the question(s) and answer(s) populated in the corresponding data structures or models at steps (506) and (512) correspond to specific selection criteria, e.g. matching or different. The lists may be generated by applying comparison methods specific to each type of feature representation. For example, the two data structures may be received as input, with output from the comparison method generating a list of elements that exist in both data structures, a list of elements that exist in only the first data structure, and a list of elements that exist in only the second data structure. For a relation graph, differences in semantic or syntactic relation graph subtrees are identified to determine subtrees that are identical or within a distance threshold. For example, a distance may be measured between subgraphs with common node labels, such as the quantity of relations that originate or conclude in a common node.
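A minimal sketch of the list-comparison method described above follows; the function and variable names are illustrative only. Given two lists of elements, one per data structure, it returns the elements present in both, only in the first, and only in the second.

```python
# Sketch of the three-way list comparison: elements in both data
# structures, elements only in the first, and elements only in the second.
def compare_elements(first, second):
    first_set, second_set = set(first), set(second)
    return (sorted(first_set & second_set),   # in both data structures
            sorted(first_set - second_set),   # only in the first
            sorted(second_set - first_set))   # only in the second

both, only_q, only_a = compare_elements(
    ["replace", "batt", "product"],      # hypothetical question concepts
    ["replace", "battery", "product"],   # hypothetical answer concepts
)
```

Comparison methods for other feature representations, such as relation graphs, would follow the same input/output contract while using a structure-aware notion of equality.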
Following step (512), the representations of the question and corresponding answer are subject to comparison to extract overlaps and differences (514). The comparison determines differences in semantic or syntactic relation graph subtrees to identify those subtrees that are identical or within a distance threshold. Distance may be measured between subgraphs with common node labels, such as the number of relations that originate or end in the common node that are different, i.e. have a different node in a triplet (common-node, common-relation, node), or are missing, i.e. a common-node relation present in only one of the compared trees. For example, in an exemplary embodiment, the question in the scenario is “product batt must be replaced” and is shown with the following AMR representation:
and the answer text ‘How to replace battery on product’ is shown with the following AMR representation:
Based upon this example, the subtree representations have a common node “replace-01” and the related subtrees are at a distance of 1 because the relations <replace, ‘action’, battery> and <batt, ‘action’, unknown> are different. The analysis determines that ‘battery’ is a concept of type ‘component’ and ‘batt’ is unknown. Accordingly, the comparison leverages the data structures created at steps (506) and (512).
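The distance measure from this example can be sketched as follows; the representation of a subtree as a mapping from relation label to target node is an illustrative simplification, not the actual embodiment.

```python
# Sketch of the subtree-distance measure.  Each subtree hanging off a
# common node is modeled as {relation: target-node}; the distance counts
# relations whose targets differ plus relations present in only one tree.
def subtree_distance(subtree_a, subtree_b):
    distance = 0
    for relation in set(subtree_a) | set(subtree_b):
        if relation not in subtree_a or relation not in subtree_b:
            distance += 1                 # relation missing from one subtree
        elif subtree_a[relation] != subtree_b[relation]:
            distance += 1                 # same relation, different node
    return distance

# Common node "replace-01": the question relates it to "batt",
# the answer relates it to "battery", giving a distance of 1.
question_subtree = {"action": "batt"}
answer_subtree = {"action": "battery"}
```

Subtrees within a chosen distance threshold would then be treated as overlapping, consistent with the comparison at step (514).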
It is then determined if the entirety of the extracted question concepts and relations is represented in the extracted answer concepts and relations (516). A positive response to the determination is an indication that the question and answer match, e.g. align, and there is no knowledge gap present, and the comparison process terminates (518). In an exemplary embodiment, the assessment at step (516) is directed at a concept represented in a tier of the hierarchy. A negative response to the determination at step (516) is followed by a subsequent determination to assess the concepts and relations with respect to the relevancy identified within the question at step (506) and their representation in the answer (524). In an exemplary embodiment, the assessment at step (524) is directed at a different tier in the hierarchy, e.g. a concept different from the assessment at step (516). A positive response to the assessment at step (524) is an indication that the assessed question, or aspect of the question, is determined to be present within the corresponding answer, or in an embodiment, within an assessed or identified concept of the corresponding answer. Accordingly, as shown herein, an overlap of the answer and question may be directed to an assessment of one or more concepts identified therein.
As shown and described, the assessment at step (524) is directed to a concept identified or represented within the question, and may not pertain to the question in its entirety. Following a positive response to the assessment at step (524), it is determined if the assessment between the question and answer should continue by evaluating additional concepts represented within the hierarchy of the question (526). In an exemplary embodiment, a user may refer to one element of a concept represented in a hierarchy, but the relevant content is about a related concept in a different tier in the hierarchy. Continued assessment corresponding to a positive response to step (526) will require additional time, and in an embodiment, may be a distraction from the chatbot experience. A negative response to the determination at step (526) terminates the assessment process as shown herein by a jump to the determination sequence starting at step (518). However, a positive response to the determination at step (526) or a negative response to the determination at step (524) is followed by transforming the text of the question based on equivalents, such as is-a, symptom-action, and other relations available in the domain knowledge (528). The transformation at step (528) is directed at identifying differences between the question and the corresponding answer. In an exemplary embodiment, one or more language models are used to determine equivalent sentences to the question that match with terms found in any previous matches corresponding to the question. It is then determined whether any new alternative formulations of the question were produced (530). A positive response at step (530) is followed by a return to step (514), and a negative response is followed by a jump to the termination sequence starting at step (518).
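The transformation at step (528) can be sketched as simple term substitution over an equivalence table; the table below is a hypothetical example standing in for the is-a, synonym, and symptom-action relations held in the domain knowledge, and the language-model-based variant described above would generate richer reformulations.

```python
# Sketch of step (528): generate alternative formulations of the question
# by substituting equivalents (e.g. is-a or synonym relations) drawn from
# the domain knowledge.  The equivalence table here is hypothetical.
def alternative_formulations(question_terms, equivalents):
    alternatives = []
    for i, term in enumerate(question_terms):
        for equivalent in equivalents.get(term, []):
            alternatives.append(
                question_terms[:i] + [equivalent] + question_terms[i + 1:]
            )
    return alternatives

new_forms = alternative_formulations(
    ["replace", "batt"], {"batt": ["battery"]}
)
```

If the returned list is non-empty, corresponding to a positive response at step (530), each reformulation would be fed back into the comparison at step (514); an empty list corresponds to the jump to step (518).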
At such time as the evaluation concludes, shown herein as a negative response to the determination at steps (526) and (530), and following step (518), corresponding first and second data structures are populated. More specifically, a representation of matching concepts, relations, and graph regions is populated in a first corresponding data structure (520). Similarly, a representation of non-matching concepts, relations, and graph regions is populated in a second corresponding data structure (522). Although shown sequentially, in an exemplary embodiment, the population of the first and second data structures at steps (520) and (522) may take place in parallel or in an alternative order. The first data structure is populated with elements of the question and elements of the answer that correspond to a specific selection criterion, e.g. matching in both representations, and the second data structure is populated with elements of the question and elements of the answer that correspond to a specific selection criterion that is different, e.g. present in only one of the representations. Lists are generated by applying comparison methods specific to each type of feature representation. For example, in a list of concepts, the procedure receives two lists as input, traverses and compares the items in the lists, and outputs a list of elements that exist in both, e.g. matching concepts, a list of elements that exist in only the first input, and a list of elements that exist in only the second input, e.g. non-matching concepts. In an embodiment that utilizes AMR, the procedure may output the pairs of subgraphs that have the same root node labels, and the edges in the graphs that have the same label and same adjacent node labels, e.g. matching concepts.
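A minimal sketch of populating the first and second data structures follows. Modeling each graph as a set of (parent, relation, child) triples is an illustrative assumption; an edge is "matching" when the same label and the same adjacent node labels appear in both graphs, as described above for the AMR embodiment.

```python
# Sketch of steps (520) and (522): partition graph edges into the first
# (matching) and second (non-matching) data structures.  Each graph is
# modeled as a collection of (parent, relation, child) triples.
def partition_edges(question_edges, answer_edges):
    q, a = set(question_edges), set(answer_edges)
    matching = q & a                      # first data structure (520)
    non_matching = (q - a) | (a - q)      # second data structure (522)
    return matching, non_matching

matching, non_matching = partition_edges(
    [("replace-01", "action", "battery"), ("replace-01", "mod", "product")],
    [("replace-01", "action", "battery")],
)
```

Because the two populations are independent set operations, they may be computed in parallel or in either order, consistent with the paragraph above.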
As shown and described, an assessment is conducted to determine or identify the presence of a knowledge gap, which is akin to ascertaining that there is not a match between the question(s) and the answer(s). In an exemplary embodiment, the use of the term match is directed at an exact match as well as a match of synonymous terms. The use of the term ‘match’ is based on a comparison of the words expressed in the question and the presence of those words in the answer. The knowledge gap might be in the form of missing connections, such as missing synonyms or a missing inference of a relationship. Although the user of the system provides feedback in the form of final feedback, the system analyzes the questions and corresponding answers stored in the repository. In an exemplary embodiment, an engagement policy directed at the user with respect to knowledge gap assessment is provided to address users of different skills and different patience levels, and as such the engagement policy selectively directs learning interaction of the knowledge gap. Partial matches may be satisfactory or good responses; the system can learn what made the match good and learn additional relations that increased confidence in the answer. Accordingly, the engagement policy may learn from both positive and negative feedback collected.
Output from the knowledge gap assessment shown in
The following table, Table 2, is an example of presentation templates to generate output for the user provided explanations:
The following table, Table 3, is an example of management policy to be applied when more than one explanation type is a match:
Referring to
As shown, input is provided in the form of the data structures generated in
The following is an example of a structured question for a user for online interaction:
The following is an example annotation question for batch interaction:
Accordingly, as shown herein, a set of presentation templates in the form of questions and multiple choice answers is used to generate the output for the user-provided explanations. The questions qualify how relationships would be represented in a corresponding knowledge graph.
As shown and described herein, a policy corresponds to phrasing one or more questions, e.g. prompts, to generate synchronous responses that will explain the semantic relationship, or lack thereof, between the question and the answer. Similarly, in an embodiment, an asynchronous response may be obtained by requesting an explanation of an answer, or a batch of answers, from the SME. Accordingly, the explanation may be obtained via a synchronous channel through use of the chatbot platform or an asynchronous channel through the SME.
Referring to
A negative response to the determination at step (712) is an indication that knowledge artifacts can be created from the user indicated relation. A mapping is created to associate an explanation type, e.g. type of relation and type of concepts, to one or more domain knowledge components and artifact type(s) (714), followed by an update action to the knowledge domain for each knowledge artifact that can be generated from the user indicated relation (716).
Referring to
Returning to
As shown and described in
Embodiments shown and described herein may be in the form of a computer system for use with an intelligent computer platform for enriching domain knowledge. Aspects of the tools (152), (154), and (156) and their associated functionality may be embodied in a computer system/server in a single location, or in an embodiment, may be configured in a cloud based system sharing computing resources. With reference to
The host (902) may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. The host (902) may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
As shown in
The system memory (906) can include computer system readable media in the form of volatile memory, such as random access memory (RAM) (930) and/or cache memory (932). By way of example only, storage system (934) can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus (908) by one or more data media interfaces.
Program/utility (940), having a set (at least one) of program modules (942), may be stored in the system memory (906) by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating systems, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules (942) generally carry out the functions and/or methodologies of embodiments to dynamically interpret and understand request and action descriptions, and to effectively augment corresponding domain knowledge. For example, the set of program modules (942) may include the tools (152), (154), and (156) as shown in
The host (902) may also communicate with one or more external devices (914), such as a keyboard, a pointing device, etc.; a display (924); one or more devices that enable a user to interact with the host (902); and/or any devices (e.g., network card, modem, etc.) that enable the host (902) to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interface(s) (922). Still yet, the host (902) can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter (920). As depicted, the network adapter (920) communicates with the other components of the host (902) via the bus (908). In an embodiment, a plurality of nodes of a distributed file system (not shown) is in communication with the host (902) via the I/O interface (922) or via the network adapter (920). It should be understood that although not shown, other hardware and/or software components could be used in conjunction with the host (902). Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
In this document, the terms “computer program medium,” “computer usable medium,” and “computer readable medium” are used to generally refer to media such as main memory (906), including RAM (930), cache (932), and storage system (934), such as a removable storage drive and a hard disk installed in a hard disk drive.
Computer programs (also called computer control logic) are stored in memory (906). Computer programs may also be received via a communication interface, such as network adapter (920). Such computer programs, when run, enable the computer system to perform the features of the present embodiments as discussed herein. In particular, the computer programs, when run, enable the processing unit (904) to perform the features of the computer system. Accordingly, such computer programs represent controllers of the computer system.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a dynamic or static random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a magnetic storage device, a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present embodiments may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server or cluster of servers. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the embodiments.
The functional tools described in this specification have been labeled as managers. A manager may be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. The managers may also be implemented in software for processing by various types of processors. An identified manager of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, function, or other construct. Nevertheless, the executables of an identified manager need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the managers and achieve the stated purpose of the managers.
Indeed, a manager of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices. Similarly, operational data may be identified and illustrated herein within the manager, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, as electronic signals on a system or network.
Referring now to
Referring now to
The hardware and software layer (1110) includes hardware and software components. Examples of hardware components include mainframes, in one example IBM® zSeries® systems; RISC (Reduced Instruction Set Computer) architecture based servers, in one example IBM pSeries® systems; IBM xSeries® systems; IBM BladeCenter® systems; storage devices; networks and networking components. Examples of software components include network application server software, in one example IBM WebSphere® application server software; and database software, in one example IBM DB2® database software. (IBM, zSeries, pSeries, xSeries, BladeCenter, WebSphere, and DB2 are trademarks of International Business Machines Corporation registered in many jurisdictions worldwide).
Virtualization layer (1120) provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers; virtual storage; virtual networks, including virtual private networks; virtual applications and operating systems; and virtual clients.
In an example, management layer (1130) may provide the following functions: resource provisioning, metering and pricing, user portal, service level management, and SLA planning and fulfillment. Resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing provides cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal provides access to the cloud computing environment for consumers and system administrators. Service level management provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer (1140) provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include, but are not limited to: mapping and navigation; software development and lifecycle management; virtual classroom education delivery; data analytics processing; transaction processing; and natural language enrichment.
While particular embodiments of the present embodiments have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the embodiments and their broader aspects. Therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the embodiments. Furthermore, it is to be understood that the embodiments are solely defined by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. For a non-limiting example, as an aid to understanding, the following appended claims contain usage of the introductory phrases “at least one” and “one or more” to introduce claim elements. However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to embodiments containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an”; the same holds true for the use in the claims of definite articles. As used herein, the term “and/or” means either or both (or one or any combination or all of the terms or expressions referred to).
The present embodiments may be a system, a method, and/or a computer program product. In addition, selected aspects of the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and/or hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present embodiments may take the form of a computer program product embodied in a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present embodiments. Thus embodied, the disclosed system, method, and/or computer program product is operative to support natural language enrichment.
Aspects of the present embodiments are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
It will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the embodiments. Accordingly, the scope of protection of the embodiments is limited only by the following claims and their equivalents.
Number | Name | Date | Kind |
---|---|---|---|
8538744 | Roberts et al. | Sep 2013 | B2 |
9240128 | Bagchi | Jan 2016 | B2 |
9378459 | Skiba | Jun 2016 | B2 |
10169454 | Ait-Mokhtar | Jan 2019 | B2 |
10379712 | Brown et al. | Aug 2019 | B2 |
10978053 | Smythe | Apr 2021 | B1 |
20110320187 | Motik et al. | Dec 2011 | A1 |
20130017524 | Barborak | Jan 2013 | A1 |
20140072948 | Boguraev | Mar 2014 | A1 |
20140236577 | Malon | Aug 2014 | A1 |
20140310001 | Kalns | Oct 2014 | A1 |
20140324747 | Crowder et al. | Oct 2014 | A1 |
20150121216 | Brown | Apr 2015 | A1 |
20160098394 | Bruno | Apr 2016 | A1 |
20180053119 | Zeng | Feb 2018 | A1 |
20180054523 | Zhang | Feb 2018 | A1 |
20180075359 | Brennan | Mar 2018 | A1 |
20180097749 | Ventura | Apr 2018 | A1 |
20180293221 | Finkelstein | Oct 2018 | A1 |
20180357272 | Bagchi et al. | Dec 2018 | A1 |
20190026654 | Allen | Jan 2019 | A1 |
20190042988 | Brown | Feb 2019 | A1 |
20190043106 | Talmor | Feb 2019 | A1 |
20190065576 | Peng | Feb 2019 | A1 |
20190163818 | Mittal | May 2019 | A1 |
20190171712 | Eisenzopf | Jun 2019 | A1 |
20190171758 | Pinel | Jun 2019 | A1 |
20190180639 | Dechu | Jun 2019 | A1 |
20190182382 | Mazza | Jun 2019 | A1 |
20190212879 | Anand | Jul 2019 | A1 |
20190213284 | Anand et al. | Jul 2019 | A1 |
20190347297 | Galitsky | Nov 2019 | A1 |
20190361977 | Crudele | Nov 2019 | A1 |
20190370342 | Luke et al. | Dec 2019 | A1 |
20200242444 | Zhang | Jul 2020 | A1 |
20210097140 | Chatterjee | Apr 2021 | A1 |
20210097978 | Mei | Apr 2021 | A1 |
20210150152 | Galitsky | May 2021 | A1 |
20210216714 | Drzewucki | Jul 2021 | A1 |
20220027707 | Wu | Jan 2022 | A1 |
20220108188 | Wu | Apr 2022 | A1 |
Number | Date | Country |
---|---|---|
105550302 | May 2016 | CN |
115803734 | Mar 2023 | CN |
2023535913 | Aug 2023 | JP |
2022018676 | Jan 2022 | WO |
Entry |
---|
Li et al., “Efficient similarity search for tree-structured data”, In Scientific and Statistical Database Management: 20th International Conference, SSDBM 2008, Hong Kong, China, Jul. 9-11, 2008, Proceedings, pp. 131-149, Springer Berlin Heidelberg, 2008. |
Bordes et al., “Question answering with subgraph embeddings”, arXiv preprint arXiv:1406.3676, Jun. 14, 2014, pp. 615-620. |
Liu et al., “Approximate subgraph matching-based literature mining for biomedical events and relation”, PLoS One, Apr. 17, 2013; 8(4):e60954. |
Xiong et al., “Improving question answering over incomplete kbs with knowledge-aware reader”, arXiv preprint arXiv:1905.07098, May 1, 2019, pp. 1-6. |
Hu et al., “Answering natural language questions by subgraph matching over knowledge graphs”, IEEE Transactions on Knowledge and Data Engineering, Oct. 26, 2017; 30(5):824-37. |
PCT/IB2021-056620, Written Opinion of the International Searching Authority, dated Nov. 1, 2021. |
Damljanovic, D., et al., “Natural Language Interfaces to Ontologies: Combining Syntactic Analysis and Ontology-Based Lookup through the User Interaction”, Extended Semantic Web Conference, Springer, Berlin, 2010, pp. 106-120. |
Blair-Goldensohn, Sasha J., “Long-Answer Question Answering and Rhetorical-Semantic Relations”, Thesis, Columbia University, 2007. |
Moore, Johanna D., et al., “A Reactive Approach to Explanation”, Speech and Natural Language, pp. 1504-1510, 1989. |
Carlisle, Scott A., et al., “Explanation Capabilities of Production-Based Consultation Systems”, Departments of Computer Science and Clinical Pharmacology, Stanford University, Software Patent Institute, IP.com, IPCOM000150622D, Feb. 28, 1977, 30 pages. |
Schlimmer, Jeffrey C., et al., “A Case Study of Incremental Concept Induction”, AAAI-86 Proceedings, 1986, pp. 496-501. |
Peng, Bin-Bin, et al., “SVM-Based Incremental Active Learning for User Adaptation for Online Graphics Recognition System”, Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, Nov. 4-5, 2002, pp. 1379-1386. |
Riedel, Kurt S., et al., “Orthonormal Representations for Output Systems Pairs”, arXiv: 1803.06571v1, Mar. 17, 2018. |
Butler, A.C., et al., Explanation Feedback Is Better Than Correct Answer Feedback for Promoting Transfer of Learning, Journal of Educational Psychology, 2013, vol. 105, No. 2, pp. 290-298. |
Ferrer-Troyano, Francisco, et al., “Incremental Rule Learning based on Example Nearness from Numerical Data Streams”, SAC '05, Proceedings of the 2005 ACM Symposium on Applied Computing, Mar. 2005, pp. 568-572. |
Chen, Wenhu, et al., “Variational Knowledge Graph Reasoning”, arXiv: 1803.06581v3, Oct. 23, 2018. |
Duke University, Marsh Lab, “Understanding learning and memory”, accessed on Oct. 31, 2023, 2 pages. |
Wikipedia, Machine Learning, accessed on Oct. 31, 2023, 20 pages. |
Zendesk, “Zendesk Al already speaks customer service”, Artificial Intelligence, accessed on Oct. 31, 2023, 15 pages. |
Number | Date | Country | |
---|---|---|---|
20220027768 A1 | Jan 2022 | US |