AUTOMATIC METAMODEL GENERATION FOR ARTIFICIAL INTELLIGENCE REASONING

Information

  • Patent Application
  • 20240212348
  • Publication Number
    20240212348
  • Date Filed
    December 22, 2022
  • Date Published
    June 27, 2024
  • CPC
    • G06V20/41
    • G06F40/30
    • G06V10/774
    • G06V10/82
    • G06V20/52
    • G06V20/70
  • International Classifications
    • G06V20/40
    • G06F40/30
    • G06V10/774
    • G06V10/82
    • G06V20/52
    • G06V20/70
Abstract
In one embodiment, a student agent identifies a topic of interest. The student agent issues a set of one or more questions to a teacher agent regarding the topic of interest. The student agent receives, from the teacher agent, answer data in response to the set of one or more questions. The student agent uses the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer.
Description
TECHNICAL FIELD

The present disclosure relates generally to computer networks, and, more particularly, to automatic metamodel generation for artificial intelligence reasoning.


BACKGROUND

Video analytics techniques are becoming increasingly ubiquitous as a complement to new and existing surveillance systems. For instance, person detection and reidentification now allows for a specific person to be tracked across different video feeds throughout a location. More advanced video analytics techniques also attempt to detect certain types of events/activities, such as a person leaving a suspicious package in an airport. Underlying such functionality are machine learning (ML)/deep learning (DL) models that have been trained using a set of training data that include examples of the objects or activities to be detected by the model.


While ML/DL-based analytics models can be quite powerful, their capabilities are also a function of the training process used to generate them. Indeed, the performance of such a model in terms of recall, precision, etc., largely depends on the quality and quantity of training examples. In addition, the more robust the capabilities of the model, the more cumbersome the training process becomes. For instance, training a model to identify people in video is a relatively trivial task. However, training a model to not only identify people, but to also discern when the behavior of a person represents a potentially hazardous situation is much more difficult. In other words, the more intelligent/capable the model, the more demanding its training process will be. These training challenges are further compounded when ML/DL models are paired with semantic reasoning as part of a neuro-symbolic metamodel.





BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments herein may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements, of which:



FIGS. 1A-1B illustrate an example computer network;



FIG. 2 illustrates an example network device/node;



FIG. 3 illustrates an example hierarchy for a neuro-symbolic metamodel;



FIG. 4 illustrates an example metamodel architecture;



FIG. 5 illustrates an example of various inference types;



FIG. 6 illustrates an example architecture for multiple metamodel agents;



FIG. 7 illustrates an example neuro-symbolic metamodel;



FIG. 8 illustrates an example of student and teacher agents interacting to generate a neuro-symbolic metamodel;



FIG. 9 illustrates an example of answer data being provided by a teacher agent to a student agent; and



FIG. 10 illustrates an example simplified procedure for automatic metamodel generation for artificial intelligence reasoning.





DESCRIPTION OF EXAMPLE EMBODIMENTS
Overview

According to one or more embodiments of the disclosure, a student agent identifies a topic of interest. The student agent issues a set of one or more questions to a teacher agent regarding the topic of interest. The student agent receives, from the teacher agent, answer data in response to the set of one or more questions. The student agent uses the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer.


DESCRIPTION

A computer network is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as personal computers, cellular phones, workstations, or other devices, such as sensors, etc. Many types of networks are available, with the types ranging from local area networks (LANs) to wide area networks (WANs). LANs typically connect the nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), or synchronous digital hierarchy (SDH) links, or Powerline Communications (PLC) such as IEEE 61334, IEEE P1901.2, and others. The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. The nodes typically communicate over the network by exchanging discrete frames or packets of data according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). In this context, a protocol consists of a set of rules defining how the nodes interact with each other. Computer networks may be further interconnected by an intermediate network node, such as a router, to forward data from one network to another.


Smart object networks, such as sensor networks, in particular, are a specific type of network having spatially distributed autonomous devices such as sensors, actuators, etc., that cooperatively monitor physical or environmental conditions at different locations, such as, e.g., energy/power consumption, resource consumption (e.g., water/gas/etc. for advanced metering infrastructure or “AMI” applications), temperature, pressure, vibration, sound, radiation, motion, pollutants, etc. Other types of smart objects include actuators, e.g., responsible for turning on/off an engine or performing other actions. Sensor networks, a type of smart object network, are typically shared-media networks, such as wireless or PLC networks. That is, in addition to one or more sensors, each sensor device (node) in a sensor network may generally be equipped with a radio transceiver or other communication port such as PLC, a microcontroller, and an energy source, such as a battery. Often, smart object networks are considered field area networks (FANs), neighborhood area networks (NANs), personal area networks (PANs), etc. Generally, size and cost constraints on smart object nodes (e.g., sensors) result in corresponding constraints on resources such as energy, memory, computational speed and bandwidth.



FIG. 1A is a schematic block diagram of an example computer network 100 illustratively comprising nodes/devices, such as a plurality of routers/devices interconnected by links or networks, as shown. For example, customer edge (CE) routers 110 may be interconnected with provider edge (PE) routers 120 (e.g., PE-1, PE-2, and PE-3) in order to communicate across a core network, such as an illustrative network backbone 130. For example, routers 110, 120 may be interconnected by the public Internet, a multiprotocol label switching (MPLS) virtual private network (VPN), or the like. Data packets 140 (e.g., traffic/messages) may be exchanged among the nodes/devices of the computer network 100 over links using predefined network communication protocols such as the Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM) protocol, Frame Relay protocol, or any other suitable protocol. Those skilled in the art will understand that any number of nodes, devices, links, etc. may be used in the computer network, and that the view shown herein is for simplicity.


In some implementations, a router or a set of routers may be connected to a private network (e.g., dedicated leased lines, an optical network, etc.) or a virtual private network (VPN), such as an MPLS VPN utilizing a Service Provider network, via one or more links exhibiting very different network and service level agreement characteristics. For the sake of illustration, a given customer site may fall under any of the following categories:


1.) Site Type A: a site connected to the network (e.g., via a private or VPN link) using a single CE router and a single link, with potentially a backup link (e.g., a 3G/4G/5G/LTE backup connection). For example, a particular CE router 110 shown in network 100 may support a given customer site, potentially also with a backup link, such as a wireless connection.


2.) Site Type B: a site connected to the network using two MPLS VPN links (e.g., from different Service Providers) using a single CE router, with potentially a backup link (e.g., a 3G/4G/5G/LTE connection). A site of type B may itself be of different types:


2a.) Site Type B1: a site connected to the network using two MPLS VPN links (e.g., from different Service Providers), with potentially a backup link (e.g., a 3G/4G/5G/LTE connection).


2b.) Site Type B2: a site connected to the network using one MPLS VPN link and one link connected to the public Internet, with potentially a backup link (e.g., a 3G/4G/5G/LTE connection). For example, a particular customer site may be connected to network 100 via PE-3 and via a separate Internet connection, potentially also with a wireless backup link.


2c.) Site Type B3: a site connected to the network using two links connected to the public Internet, with potentially a backup link (e.g., a 3G/4G/5G/LTE connection).


Notably, MPLS VPN links are usually tied to a committed service level agreement, whereas Internet links may either have no service level agreement or a loose service level agreement (e.g., a “Gold Package” Internet service connection that guarantees a certain level of performance to a customer site).


3.) Site Type C: a site of type B (e.g., types B1, B2 or B3) but with more than one CE router (e.g., a first CE router connected to one link while a second CE router is connected to the other link), and potentially a backup link (e.g., a wireless 3G/4G/5G/LTE backup link). For example, a particular customer site may include a first CE router 110 connected to PE-2 and a second CE router 110 connected to PE-3.



FIG. 1B illustrates an example of network 100 in greater detail, according to various embodiments. As shown, network backbone 130 may provide connectivity between devices located in different geographical areas and/or different types of local networks. For example, network 100 may comprise local/branch networks 160, 162 that include devices/nodes 10-16 and devices/nodes 18-20, respectively, as well as a data center/cloud environment 150 that includes servers 152-154. Notably, local networks 160-162 and data center/cloud environment 150 may be located in different geographic locations.


Servers 152-154 may include, in various embodiments, a network management server (NMS), a dynamic host configuration protocol (DHCP) server, a constrained application protocol (CoAP) server, an outage management system (OMS), an application policy infrastructure controller (APIC), an application server, etc. As would be appreciated, network 100 may include any number of local networks, data centers, cloud environments, devices/nodes, servers, etc.


In some embodiments, the techniques herein may be applied to other network topologies and configurations. For example, the techniques herein may be applied to peering points with high-speed links, data centers, etc.


In various embodiments, network 100 may include one or more mesh networks, such as an Internet of Things network. Loosely, the term “Internet of Things” or “IoT” refers to uniquely identifiable objects (things) and their virtual representations in a network-based architecture. In particular, the next frontier in the evolution of the Internet is the ability to connect more than just computers and communications devices, but rather the ability to connect “objects” in general, such as lights, appliances, vehicles, heating, ventilating, and air-conditioning (HVAC), windows and window shades and blinds, doors, locks, etc. The “Internet of Things” thus generally refers to the interconnection of objects (e.g., smart objects), such as sensors and actuators, over a computer network (e.g., via IP), which may be the public Internet or a private network.


Notably, shared-media mesh networks, such as wireless or PLC networks, etc., are often deployed on what are referred to as Low-Power and Lossy Networks (LLNs), which are a class of network in which both the routers and their interconnect are constrained: LLN routers typically operate with constraints, e.g., processing power, memory, and/or energy (battery), and their interconnects are characterized by, illustratively, high loss rates, low data rates, and/or instability. LLNs are comprised of anything from a few dozen to thousands or even millions of LLN routers, and support point-to-point traffic (between devices inside the LLN), point-to-multipoint traffic (from a central control point such as the root node to a subset of devices inside the LLN), and multipoint-to-point traffic (from devices inside the LLN towards a central control point). Often, an IoT network is implemented with an LLN-like architecture. For example, as shown, local network 160 may be an LLN in which CE-2 operates as a root node for devices/nodes 10-16 in the local mesh, in some embodiments.


In contrast to traditional networks, LLNs face a number of communication challenges. First, LLNs communicate over a physical medium that is strongly affected by environmental conditions that change over time. Some examples include temporal changes in interference (e.g., other wireless networks or electrical appliances), physical obstructions (e.g., doors opening/closing, seasonal changes such as the foliage density of trees, etc.), and propagation characteristics of the physical media (e.g., temperature or humidity changes, etc.). The time scales of such temporal changes can range from milliseconds (e.g., transmissions from other transceivers) to months (e.g., seasonal changes of an outdoor environment). In addition, LLN devices typically use low-cost and low-power designs that limit the capabilities of their transceivers. In particular, LLN transceivers typically provide low throughput. Furthermore, LLN transceivers typically support limited link margin, making the effects of interference and environmental changes visible to link and network protocols. The high number of nodes in LLNs in comparison to traditional networks also makes routing, quality of service (QoS), security, network management, and traffic engineering extremely challenging, to mention a few.



FIG. 2 is a schematic block diagram of an example node/device 200 (e.g., an apparatus) that may be used with one or more embodiments described herein, e.g., as any of the computing devices shown in FIGS. 1A-1B, particularly the PE routers 120, CE routers 110, nodes/devices 10-20, servers 152-154 (e.g., a network controller located in a data center, etc.), any other computing device that supports the operations of network 100 (e.g., switches, etc.), or any of the other devices referenced below. The device 200 may also be any other suitable type of device depending upon the type of network architecture in place, such as IoT nodes, etc. Device 200 comprises one or more network interfaces 210, one or more processors 220, and a memory 240 interconnected by a system bus 250, and is powered by a power supply 260.


The network interfaces 210 include the mechanical, electrical, and signaling circuitry for communicating data over physical links coupled to the network 100. The network interfaces may be configured to transmit and/or receive data using a variety of different communication protocols. Notably, a physical network interface 210 may also be used to implement one or more virtual network interfaces, such as for virtual private network (VPN) access, known to those skilled in the art.


The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processes and/or services executing on the device. These software processes and/or services may comprise a neuro-symbolic metamodel process 248, as described herein.


It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). Further, while processes may be shown and/or described separately, those skilled in the art will appreciate that processes may be routines or modules within other processes.


Metamodel process 248 includes computer executable instructions that, when executed by processor(s) 220, cause device 200 to provide cognitive reasoning services to a network. In various embodiments, metamodel process 248 may utilize artificial intelligence/machine learning techniques, in whole or in part, to perform its analysis and reasoning functions. In general, machine learning is concerned with the design and the development of techniques that take as input empirical data (such as network statistics and performance indicators) and recognize complex patterns in these data. One very common pattern among machine learning techniques is the use of an underlying model M, whose hyper-parameters are optimized for minimizing the cost function associated with M, given the input data. The learning process then operates by adjusting the hyper-parameters such that the number of misclassified points is minimal. After this optimization phase (or learning phase), the model M can be used very easily to classify new data points. Often, M is a statistical model, and the minimization of the cost function is equivalent to the maximization of the likelihood function, given the input data.
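
To make the preceding description concrete, the following minimal Python sketch fits a model M on labeled input data during a learning/optimization phase and then classifies new data points. It uses scikit-learn purely as an illustrative stand-in; the disclosure does not mandate any particular library, and the data shown is hypothetical.

```python
# Minimal sketch: fit a model M by minimizing its cost function on labeled
# empirical data, then classify new data points (illustration only).
from sklearn.linear_model import LogisticRegression

X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]   # empirical input data
y_train = [0, 1, 0, 1]                                        # labels

M = LogisticRegression().fit(X_train, y_train)   # optimization/learning phase
print(M.predict([[0.15, 0.1], [0.85, 0.95]]))    # classify new points, e.g., [0 1]
```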


In various embodiments, metamodel process 248 may employ one or more supervised, unsupervised, or self-supervised machine learning models. Generally, supervised learning entails the use of a large training set of data, as noted above, that is used to train the model to apply labels to the input data. For example, in the case of video recognition and analysis, the training data may include sample video data that depicts a certain object and is labeled as such. On the other end of the spectrum are unsupervised techniques that do not require a training set of labels. Notably, while a supervised learning model may look for previously seen patterns that have been labeled as such, an unsupervised model may instead look to whether there are sudden changes in the behavior. Self-supervised learning is a representation learning approach that eliminates the prerequisite of having humans label data. Self-supervised learning systems extract and use the naturally available relevant context and embedded metadata as supervisory signals. Self-supervised learning takes a middle ground approach: it differs from unsupervised learning in that the system does not merely learn the inherent structure of the data, and it differs from supervised learning in that the system learns entirely without using explicitly provided labels.


Example machine learning techniques that metamodel process 248 can employ may include, but are not limited to, nearest neighbor (NN) techniques (e.g., k-NN models, replicator NN models, etc.), statistical techniques (e.g., Bayesian networks, etc.), clustering techniques (e.g., k-means, mean-shift, etc.), neural networks (e.g., reservoir networks, artificial neural networks, etc.), support vector machines (SVMs), logistic or other regression, Markov models or chains, principal component analysis (PCA) (e.g., for linear models), multi-layer perceptron (MLP) artificial neural networks (ANNs) (e.g., for non-linear models), replicating reservoir networks (e.g., for non-linear models, typically for time series), random forest classification, or the like. Accordingly, metamodel process 248 may employ deep learning, in some embodiments. Generally, deep learning is a subset of machine learning that employs ANNs with multiple layers, with a given layer extracting features or transforming the outputs of the prior layer.


The performance of a machine learning model can be evaluated in a number of ways based on the number of true positives, false positives, true negatives, and/or false negatives of the model. For example, the false positives of the model may refer to the number of times the model incorrectly identified an object or condition within a video feed. Conversely, the false negatives of the model may refer to the number of times the model failed to identify an object or condition within a video feed. True negatives and positives may refer to the number of times the model correctly determined that the object or condition was absent in the video or was present in the video, respectively. Related to these measurements are the concepts of recall and precision. Generally, recall refers to the ratio of true positives to the sum of true positives and false negatives, which quantifies the sensitivity of the model. Similarly, precision refers to the ratio of true positives to the sum of true and false positives.
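
As a minimal worked example of these definitions, the following Python sketch computes recall and precision from hypothetical confusion counts; the counts and function names are illustrative only.

```python
# Recall and precision from confusion counts (hypothetical values).
def recall(tp: int, fn: int) -> float:
    """Ratio of true positives to the sum of true positives and false negatives."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Ratio of true positives to the sum of true and false positives."""
    return tp / (tp + fp)

# Example: a video-analytics model evaluated over a labeled test feed.
tp, fp, fn = 90, 10, 30
print(f"recall={recall(tp, fn):.2f}, precision={precision(tp, fp):.2f}")
# recall=0.75, precision=0.90
```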


According to various embodiments, FIG. 3 illustrates an example hierarchy 300 for a neuro-symbolic metamodel. For example, metamodel process 248 shown in FIG. 2 may execute a neuro-symbolic metamodel for any number of purposes. In particular, metamodel process 248 may be configured to analyze sensor data in an IoT deployment (e.g., video data, etc.), to analyze networking data for purposes of network assurance, control, enforcing security policies and detecting threats, facilitating collaboration, or, as described in greater detail below, to aid in the development of a collaborative knowledge generation and learning system for visual programming.


In general, a reasoning engine, also known as a ‘semantic reasoner,’ ‘reasoner,’ or ‘rules engine,’ is a specialized form of machine learning software that uses asserted facts or axioms to infer consequences, logically. Typically, a reasoning engine is a form of inference engine that applies inference rules defined via an ontology language. As introduced herein, a neuro-symbolic metamodel is an enhanced form of reasoning engine that further leverages the power of sub-symbolic machine learning techniques, such as neural networks (e.g., deep learning), allowing the system to operate across the full spectrum of sub-symbolic data all the way to the symbolic level.


At the lowest layer of hierarchy 300 is sub-symbolic layer 302 that processes the sensor data 312 collected from the network. For example, sensor data 312 may include video feed/stream data from any number of cameras located throughout a location. In some embodiments, sensor data 312 may comprise multimodal sensor data from any number of different types of sensors located throughout the location. At the core of sub-symbolic layer 302 may be one or more DNNs 308 or other machine learning-based model that processes the collected sensor data 312. In other words, sub-symbolic layer 302 may perform sensor fusion on sensor data 312 to identify hidden relationships between the data.


At the opposing end of hierarchy 300 may be symbolic layer 306 that may leverage symbolic learning. In general, symbolic learning includes a set of symbolic grammar rules specifying the representation language of the system, a set of symbolic inference rules specifying the reasoning competence of the system, and a semantic theory containing the definitions of “meaning.” This approach differs from other learning approaches that try to establish generalizations from facts, as it is about reasoning and extracting knowledge from knowledge. It combines knowledge representations and reasoning to acquire and ground knowledge from observations in a non-axiomatic way. In other words, in sharp contrast to the sub-symbolic learning performed in layer 302, the symbolic learning and generalized intelligence performed at symbolic layer 306 requires a variety of reasoning and learning paradigms that more closely follow how humans learn and are able to explain why a particular conclusion was reached.


Symbolic learning models what are referred to as “concepts,” which comprise a set of properties. Typically, these properties include an “intent” and an “extent,” whereby the intent offers a symbolic way of identifying the extent of the concept. For example, consider the concept that represents motorcycles. The intent for this concept may be defined by properties such as “having two wheels” and “motorized,” which can be used to identify the extent of the concept (e.g., whether a particular vehicle is a motorcycle).


Linking sub-symbolic layer 302 and symbolic layer 306 may be conceptual layer 304 that leverages conceptual spaces. In general, conceptual spaces are a proposed framework for knowledge representation by a cognitive system on the conceptual level that provides a natural way of representing similarities. Conceptual spaces enable the interaction between different types of data representations as an intermediate level between sub-symbolic and symbolic representations.


More formally, a conceptual space is a geometrical structure which is defined by a set of quality dimensions to allow for the measurement of semantic distances between instances of concepts and for the assignment of quality values to their quality dimensions, which correspond to the properties of the concepts. Thus, a point in a conceptual space S may be represented by an n-dimensional conceptual vector v=<d1, . . . , di, . . . , dn> where di represents the quality value for the ith quality dimension. For example, consider the concept of taste. A conceptual space for taste may include the following dimensions: sweet, sour, bitter, and salty, each of which may be its own dimension in the conceptual space. The taste of a given food can then be represented as a vector of these qualities in a given space (e.g., ice cream may fall farther along the sweet dimension than that of peanut butter, peanut butter may fall farther along the salty dimension than that of ice cream, etc.). By representing concepts within a geometric conceptual space, similarities can be compared in geometric terms, based on the Manhattan distance between domains or the Euclidean distance within a domain in the space. In addition, similar objects can be grouped into meaningful conceptual space regions through the application of clustering techniques, which extract concepts from data (e.g., observations).
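
The following Python sketch illustrates the geometric view above using the taste example. The quality values assigned to the sweet, sour, bitter, and salty dimensions are hypothetical; similarity is simply metric distance between the conceptual vectors.

```python
# Conceptual vectors v = <d1, ..., dn>, one quality value per quality dimension
# (sweet, sour, bitter, salty). All values are illustrative.
from math import dist  # Euclidean distance (Python 3.8+)

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

ice_cream     = (0.9, 0.1, 0.0, 0.1)
peanut_butter = (0.4, 0.1, 0.1, 0.6)
lemon         = (0.2, 0.9, 0.1, 0.0)

# Similarity is metric distance in the space: ice cream lies closer to
# peanut butter than to lemon along these dimensions.
print(dist(ice_cream, peanut_butter))   # Euclidean distance within a domain
print(manhattan(ice_cream, lemon))      # Manhattan distance between domains
```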


Said differently, a conceptual space is a framework for representing information that models human-like reasoning to compose concepts using other existing concepts. Note that these representations are not competing with symbolic or associationistic representations. Rather, the three kinds can be seen as three levels of representation of cognition with different scales of resolution that complement one another. Namely, a conceptual space is built up from geometrical representations based on a number of quality dimensions that complement the symbolic and deep learning models of symbolic layer 306 and sub-symbolic layer 302, representing an operational bridge between them. Each quality dimension may also include any number of attributes, which present other features of objects in a metric subspace based on their measured quality values. Here, similarity between concepts is just a matter of metric distance between them in the conceptual space in which they are embedded.


In other words, a conceptual space is a geometrical representation which allows the discovery of regions that are physically or functionally linked to each other and to abstract symbols used in symbolic layer 306, allowing for the discovery of correlations shared by the conceptual domains during concepts formation. For example, an alert prioritization module may use connectivity to directly acquire and evaluate alerts as evidence. Possible enhancements may include using volume of alerts and novelty of adjacent (spatially/temporally) alerts, to tune level of alertness.


In general, the conceptual space at conceptual layer 304 allows for the discovery of regions that are naturally linked to abstract symbols used in symbolic layer 306. The overall model is bi-directional as it is planned for predictions and action prescriptions depending on the data causing the activation in sub-symbolic layer 302.


Layer hierarchy 300 shown is particularly appealing when matched with the attention mechanism provided by a cognitive system that operates under the assumption of limited resources and time-constraints. For practical applications, the reasoning logic in symbolic layer 306 may be non-axiomatic and constructed around the assumption of insufficient knowledge and resources (AIKR). It may be implemented, for example, with a Non-Axiomatic Reasoning System (open-NARS) 310. However, other reasoning engines can also be used, such as Auto-catalytic Endogenous Reflective Architecture (AERA), OpenCog, and the like, in symbolic layer 306, in further embodiments. Even Prolog may be suitable, in some cases, to implement a reasoning engine in symbolic layer 306. In turn, an output 314 coming from symbolic layer 306 may be provided to a user interface (UI) for review. For example, output 314 may comprise a video feed/stream augmented with inferences or conclusions made by the metamodel, such as the locations of unstocked or under-stocked shelves, etc.


By way of example of symbolic reasoning, consider the ancient Greek syllogism: (1.) All men are mortal, (2.) Socrates is a man, and (3.) therefore, Socrates is mortal. Depending on the formal language used for the symbolic reasoner, these statements can be represented as symbols of a term logic. For example, the first statement can be represented as “man→[mortal]” and the second statement can be represented as “{Socrates}→man.” Thus, the relationship between terms can be used by the reasoner to make inferences and arrive at a conclusion (e.g., “Socrates is mortal”). Non-axiomatic reasoning systems (NARS) generally differ from more traditional axiomatic reasoners in that the former applies a truth value to each statement, based on the amount of evidence available and observations retrieved, while the latter relies on axioms that are treated as a baseline of truth from which inferences and conclusions can be made.
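
As a rough illustration of how a non-axiomatic reasoner might attach truth values to the syllogism above, the following Python sketch pairs each statement with (frequency, confidence) values and combines them with a deduction truth function of the form commonly cited for NAL-style systems. This is an assumption made for illustration, not a specification of any particular reasoner.

```python
# Term-logic deduction with NARS-style (frequency, confidence) truth values.
# The truth function f = f1*f2, c = f1*f2*c1*c2 is one commonly cited NAL
# deduction rule, used here purely for illustration.
def deduction(premise1, premise2):
    (f1, c1), (f2, c2) = premise1, premise2
    return (f1 * f2, f1 * f2 * c1 * c2)

# "man -> [mortal]" and "{Socrates} -> man", each with evidence-based truth.
man_is_mortal   = (1.0, 0.9)
socrates_is_man = (1.0, 0.9)

# Conclusion: "{Socrates} -> [mortal]", i.e., Socrates is mortal.
print(deduction(socrates_is_man, man_is_mortal))  # (1.0, 0.81)
```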


Thus, a neuro-symbolic metamodel generally refers to a cognitive engine capable of taking sub-symbolic data as input (e.g., raw or processed sensor data regarding a monitored system), recognizing symbolic concepts from that data, and applying symbolic reasoning to the concepts, to draw conclusions about the monitored system.


According to various embodiments, FIG. 4 illustrates an example neuro-symbolic metamodel architecture 400. As shown, architecture 400 may be implemented across any number of devices or fully on a particular device, as desired. At the core of architecture 400 may be middleware 402 that offers a collection of services, each of which may have its own interface. In general, middleware 402 may leverage a library for interfacing, configuring, and orchestrating each service of middleware 402.


In various embodiments, middleware 402 may also provide services to support semantic reasoning, such as by an AIKR reasoner. For example, as shown, middleware 402 may include a NARS agent that performs semantic reasoning for structural learning. In other embodiments, OpenCog or another suitable AIKR semantic reasoner could be used.


One or more metamodel agents 404 may interface with middleware 402 to orchestrate the various services available from middleware 402. In addition, metamodel agent 404 may feed and interact with the AIKR reasoner so as to populate and leverage a metamodel knowledge graph with knowledge.


More specifically, in various embodiments, middleware 402 may obtain sub-symbolic data 408. In turn, middleware 402 may leverage various ontologies, programs, rules, and/or structured text 410 to translate sub-symbolic data 408 into symbolic data 412 for consumption by metamodel agent 404. This allows metamodel agent 404 to apply symbolic reasoning to symbolic data 412, to populate and update a metamodel knowledge base (KB) 416 with knowledge 414 regarding the problem space (e.g., the network under observation, etc.). In addition, metamodel agent 404 can leverage the stored knowledge 414 in metamodel KB 416 to make assessments/inferences.


For example, metamodel agent 404 may perform semantic graph decomposition on metamodel KB 416 (e.g., a knowledge graph), so as to compute a graph from the knowledge graph of KB 416 that addresses a particular problem. Metamodel agent 404 may also perform post-processing on metamodel KB 416, such as performing graph cleanup, applying deterministic rules and logic to the graph, and the like. Metamodel agent 404 may further employ a definition of done, to check goals and collect answers using metamodel KB 416.


In general, metamodel KB 416 may comprise any or all of the following:

    • Data
    • Ontologies
    • Evolutionary steps of reasoning
    • Knowledge (e.g., in the form of a knowledge graph)
    • The Knowledge graph also allows different reasoners to:
      • Have their internal subgraphs
      • Share or coalesce knowledge
      • Work cooperatively


In other words, metamodel KB 416 acts as a dynamic and generic memory structure. In some embodiments, metamodel KB 416 may also allow different reasoners to share or coalesce knowledge, have their own internal sub-graphs, and/or work collaboratively in a distributed manner. For example, a first metamodel agent 404 may perform reasoning on a first sub-graph, a second metamodel agent 404 may perform reasoning on a second sub-graph, etc., to evaluate the health of the network and/or find solutions to any detected problems. To communicate with metamodel agent 404, metamodel KB 416 may include a bidirectional Narsese interface or other interface using another suitable grammar.


In various embodiments, metamodel KB 416 can be visualized on a user interface. For example, Cytoscape, which has its building blocks in bioinformatics and genomics, can be used to implement graph analytics and visualizations.


Said differently, architecture 400 may include any or all of the following components:

    • Middleware 402 that comprises:
      • Structural learning component
      • JSON, textual data, ML/DL pipelines, and/or other containerized services (e.g., using Docker)
      • Hierarchical goal support
    • Metamodel Knowledge Base (KB) 416 that supports:
      • Bidirectional Narsese interface
      • Semantic graph decomposition algorithms
      • Graph analytics
      • Visualization services
    • Metamodel Agent 404
      • Metamodel Control System


More specifically, in some embodiments, middleware 402 may include any or all of the following:

    • Subsymbolic services:
      • Data services to collect sub-symbolic data for consumption
    • Reasoner(s) for structural learning
    • NARS
    • OpenCog
    • Optimized hierarchical goal execution
      • Probabilistic programming
      • Causal inference engines
    • Visualization Services (e.g., Cytoscape, etc.)


Middleware 402 may also allow the addition of new services needed by different problem domains.


During execution, metamodel agent 404 may, thus, perform any or all of the following:

    • Orchestration of services
    • Focus of attention
      • Semantic graph decomposition
        • Addresses combinatorial issues via an automated divide and conquer approach that works even in non-separable problems because the overall knowledge graph 416 may allow for overlap.
    • Feeding and interacting with the AIKR reasoner via bidirectional translation layer to the metamodel knowledge graph.
      • Call middleware services
    • Post processing of the graph
      • Graph clean-up
      • Apply deterministic rules and logic to the graph
    • Definition of Done (DoD)
      • Check goals and collect answers



FIG. 5 illustrates an example 500 showing the different forms of structural learning that the neuro-symbolic metamodel framework can employ. More specifically, the inference rules in example 500 relate premises S→M and M→P, leading to a conclusion S→P. Using these rules, the structural learning herein can be implemented using an ontology with respect to an Assumption of Insufficient Knowledge and Resources (AIKR) reasoning engine, as noted previously. This allows the system to rely on finite processing capacity in real time and be prepared for unexpected tasks. In particular, as shown, the metamodel may support any or all of the following (a brief illustrative sketch follows the list):

    • Syllogistic Logic
      • Logical quantifiers
    • Various Reasoning Types
      • Deduction
      • Abduction
      • Induction
      • Revision
    • Different Types of Inference
      • Local inference
      • Backward inference
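
The sketch below shows, in simplified Python, how the syllogistic patterns listed above arrange two premises over terms S, M, and P to yield the candidate conclusion S→P under deduction, induction, or abduction. The representation of premises as term pairs is purely illustrative and not part of the disclosure.

```python
# Illustrative sketch (not a reasoner): which syllogistic pattern produces
# the candidate conclusion S -> P from two premises over terms S, M, P.
def conclude(premises):
    results = []
    if ("M", "P") in premises and ("S", "M") in premises:
        results.append(("S -> P", "deduction"))    # M->P, S->M  |-  S->P
    if ("M", "P") in premises and ("M", "S") in premises:
        results.append(("S -> P", "induction"))    # M->P, M->S  |-  S->P
    if ("P", "M") in premises and ("S", "M") in premises:
        results.append(("S -> P", "abduction"))    # P->M, S->M  |-  S->P
    return results

print(conclude({("M", "P"), ("S", "M")}))  # [('S -> P', 'deduction')]
print(conclude({("M", "P"), ("M", "S")}))  # [('S -> P', 'induction')]
print(conclude({("P", "M"), ("S", "M")}))  # [('S -> P', 'abduction')]
```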


To address combinatorial explosion, the metamodel knowledge graph may be partitioned such that each partition is processed by one or more metamodel agents 404, as shown in architecture 600 in FIG. 6, in some embodiments. More specifically, any number of metamodel agents 404 (e.g., a first metamodel agent 404a through an Nth metamodel agent 404n) may be executed by devices connected via a network 602 or by the same device. In some embodiments, metamodel agents 404a-404n may be deployed to different platforms (e.g., platforms 604a-604n) and/or utilize different learning approaches. For instance, metamodel agent 404a may leverage neural networks 606, metamodel agent 404b may leverage Bayesian learning 608, metamodel agent 404c may leverage statistical learning 610, and metamodel agent 404n may leverage decision tree learning 612.


As would be appreciated, graph decomposition can be based on any or all of the following (see the sketch after this list):

    • Spatial relations—for instance, this could include the vertical industry of a customer, physical location (country) of a network, scale of a network deployment, or the like.
    • Descriptive properties, such as severity, service impact, next step, etc.
    • Graph-based components (isolated subgraphs, minimum spanning trees, all shortest paths, strongly connected components . . . )
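
A minimal sketch of the graph-based decomposition criteria listed above is shown below, using the third-party networkx library as an assumed tool and a toy graph standing in for the metamodel knowledge graph.

```python
# Graph-based decomposition primitives: isolated subgraphs, a minimum
# spanning tree, shortest paths, and strongly connected components.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("router_A", "router_B", 1.0),
                           ("router_B", "site_1", 2.0),
                           ("camera_1", "site_2", 1.5)])   # two isolated subgraphs

partitions = [G.subgraph(c).copy() for c in nx.connected_components(G)]
mst        = nx.minimum_spanning_tree(G)
paths      = list(nx.all_shortest_paths(G, "router_A", "site_1"))

D = nx.DiGraph([("a", "b"), ("b", "a"), ("b", "c")])        # directed view
sccs = list(nx.strongly_connected_components(D))

print(len(partitions), paths, sccs)
# e.g., 2 [['router_A', 'router_B', 'site_1']] [{'c'}, {'a', 'b'}]
```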


Any new knowledge and related reasoning steps can also be input back to the knowledge graph, in various embodiments.


In further embodiments, the metamodel framework may also support various user interface functions, so as to provide visualizations, actions, etc. to the user. To do so, the framework may leverage Cytoscape, web services, or any other suitable mechanism.


At the core of the techniques herein is an artificial intelligence metamodel 700 for knowledge representation at different levels of abstraction, as shown in FIG. 7, according to various embodiments. In various embodiments, the metamodel knowledge graph groups information into four different levels, which are labeled L0, L1, L2, and L* and represent different levels of abstraction, with L0 being closest to raw data coming in from various sensors and external systems and L2 representing the highest levels of abstraction typically obtained via mathematical means such as statistical learning and reasoning. L* can be viewed as the layer where high-level goals and motivations are stored. The overall structure of this knowledge is also based on anti-symmetric and symmetric relations.


One key advantage of the metamodel knowledge graph is that human level domain expertise, ontologies, and goals are entered at the L2 level. This leads, by definition, to an unprecedented ability to generalize at the L2 level thus minimizing the manual effort required to ingest domain expertise.


More formally:

    • L* represents the overall status of the abstraction. In case of a problem, it triggers problem solving in lower layers via a metamodel agent 702.
    • L2.1-L2.∞=Higher-level representations of the world in which most concepts and relations are collapsed into simpler representations. The higher-level representations are domain-specific representations of lower levels.
    • L1=Descriptive, teleological, and structural information about L0.
    • L0=Object level: the symbolic representation of the physical world.


In various embodiments, L2 may comprise both expertise and experience stored in long-term memory, as well as a focus of attention (FOA) in short-term memory. In other words, when a problem is triggered at L*, a metamodel agent 702 that operates on L2-L0 may control the FOA so as to focus on different things, in some embodiments.


As would be appreciated, there may be hundreds of thousands or even millions of data points that need to be extracted at L0. The FOA of the metamodel is based on the abstraction and the knowledge graph (KG) may be used to keep combinatorial explosion under control.


Said differently, metamodel 700 may generally take the form of a knowledge graph in which semantic knowledge is stored regarding a particular system, such as a computer network and its constituent networking devices. By representing the relationships between such real-world entities (e.g., router A, router B, etc.), as well as their more abstract concepts (e.g., a networking router), metamodel agent 702 can make evaluations regarding the particular system at different levels of abstraction. Indeed, metamodel 700 may differ from a more traditional knowledge graph through the inclusion of any or all of the following, in various embodiments:

    • A formal mechanism to represent different levels of abstraction, and for moving up and down the abstraction hierarchy (e.g., ranging from extension to intension).
    • Additional structure that leverages distinctions/anti-symmetric relations, as the backbone of the knowledge structures.
    • Similarity/symmetric relation-based relations.


Thus, metamodel 700 is a neuro-symbolic metamodel that leverages both sub-symbolic processing (e.g., using deep/neural networks) and symbolic reasoning. This allows it to perform zero-shot learning whereby it is able to make inferences about objects, behaviors, interactions, conditions, and the like, that are outside those on which it was specifically trained, as well as one-shot learning of class labels. For instance, say the system uses a seed ontology comprising the concepts of ‘standing’ and ‘running,’ but then encounters video data of a person walking. Even without specific knowledge about the concept of walking, the semantic reasoning engine may determine that this is an intermediate state between standing and running, based on the velocities involved, the pose analysis of the person, etc. This allows the system to learn different conditions and make inferences about situations that it may not have seen before.
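
The following Python sketch illustrates the walking example above in a highly simplified form: with only ‘standing’ and ‘running’ in a seed ontology (reduced here to hypothetical prototype velocities), an observation falling between the two prototypes is described as an intermediate state. All names and values are illustrative.

```python
# Zero-shot style inference over a seed ontology of two known states.
SEED_ONTOLOGY = {"standing": 0.0, "running": 5.0}   # hypothetical prototype velocity (m/s)

def describe(velocity_mps: float) -> str:
    lo = min(SEED_ONTOLOGY, key=lambda k: SEED_ONTOLOGY[k])
    hi = max(SEED_ONTOLOGY, key=lambda k: SEED_ONTOLOGY[k])
    if velocity_mps <= SEED_ONTOLOGY[lo]:
        return lo
    if velocity_mps >= SEED_ONTOLOGY[hi]:
        return hi
    return f"intermediate state between '{lo}' and '{hi}'"   # e.g., walking

print(describe(1.4))   # intermediate state between 'standing' and 'running'
```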


As noted above, training machine learning (ML)/deep learning (DL)-based models can be quite cumbersome on its own. Indeed, the capabilities and performances of such models largely depend on the amount of training and training data/ground truth examples that are available. For instance, training a model to identify people within video can be relatively trivial, whereas training a model to discern when the behavior of a person represents a potentially hazardous situation is much more difficult. These complexities are further compounded when generating a neuro-symbolic metamodel, such as metamodel 700, that bridges symbolic and sub-symbolic processing.


——Automatic Metamodel Generation for Artificial Intelligence Reasoning——

The techniques herein allow for the automatic generation of a neuro-symbolic metamodel using student and teacher agents. In some aspects, the two agents may interact with one another via a question and answer mechanism, to capture information that can be used to generate a neuro-symbolic metamodel.


Illustratively, the techniques described herein may be performed by hardware, software, and/or firmware, such as in accordance with process 248, which may include computer executable instructions executed by the processor 220 (or independent processor of interfaces 210), to perform functions relating to the techniques described herein.


Specifically, according to various embodiments, a student agent identifies a topic of interest. The student agent issues a set of one or more questions to a teacher agent regarding the topic of interest. The student agent receives, from the teacher agent, answer data in response to the set of one or more questions. The student agent uses the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer.


Operationally, FIG. 8 illustrates an example 800 of student and teacher agents interacting to generate a neuro-symbolic metamodel, as described in greater detail above. As shown, in order to generate a neuro-symbolic metamodel 822, a student agent 808 and a teacher agent 810 may be configured to interact with one another, in order to produce a set 820 of questions and answers, as well as the original topic, on which neuro-symbolic metamodel 822 may be trained/generated/updated, according to various embodiments. In various embodiments, student agent 808 and teacher agent 810 may be implemented through the execution of metamodel process 248 by one or more devices (e.g., a device 200). In the distributed case, the executing devices can also be viewed as their own singular device for purposes of performing the techniques herein, as well.


The question and answer generation mechanism may begin by student agent 808 identifying a topic of interest. For instance, as shown, assume that there is input data 802 that comprises a textual topic, such as “a cat is running on the prairie.” Note that input data 802 may also include one or more media examples of the topic, such as an actual image, video, etc. depicting a cat running on the prairie, as well. In other embodiments, the examples may be provided to student agent 808 by asking teacher agent 810 for them. In various instances, student agent 808 may obtain input data 802 via a user interface (e.g., an engineer entering a topic to be learned), from a pre-established topical database, or even through an automated mechanism whereby student agent 808 seeks topics (e.g., via web searches, etc.) to learn.


Student agent 808 may leverage a natural language understanding (NLU) unit 804 that parses the textual topic in input data 802 and extracts concepts from it, in order to form textual queries to send to teacher agent 810. For instance, in the case of the topic of “a cat is running on the prairie,” student agent 808 may not have an initial conception of what a cat is or looks like, what running entails, and/or what a prairie is or looks like.


In some embodiments, as shown, another function of student agent 808 may be to perform semantic segmentation 812 and/or object detection 814 on the example(s) of the topic. As would be appreciated, semantic segmentation seeks to assign a classification label to each pixel of an image. For instance, in the case of an image of a cat running on the prairie, the result of object detection 814 may be to detect the presence of the cat and the result of semantic segmentation 812 may be to divide the image into regions classified as being one of: sky, trees, grass, or cat. In the case of video, for instance, a similar approach can also be taken by student agent 808 to perform activity identification 816 (e.g., by discerning between a stationary cat, a slow-moving cat, and a running cat).
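
As one hedged illustration of what components such as 812 and 814 might look like in practice, the sketch below runs pretrained torchvision models for object detection and semantic segmentation on a placeholder frame. The disclosure does not prescribe these models, and the exact torchvision API varies by version (older releases use pretrained=True instead of weights="DEFAULT").

```python
import torch
import torchvision
from torchvision import transforms

frame = torch.rand(3, 480, 640)   # placeholder for one video frame, values in [0, 1]

# Object detection: boxes, labels, and scores for objects such as "cat".
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    detections = detector([frame])[0]          # dict with 'boxes', 'labels', 'scores'

# Semantic segmentation: a per-pixel class map (sky, grass, cat, ...).
segmenter = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
with torch.no_grad():
    logits = segmenter(normalize(frame).unsqueeze(0))["out"]   # (1, num_classes, H, W)
    pixel_labels = logits.argmax(dim=1)                         # per-pixel class index

print(detections["labels"], pixel_labels.shape)
```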


In order to provide suitable examples as answer data back to student agent 808, teacher agent 810 may leverage a knowledge graph 806. Associated with knowledge graph 806 may be media examples (e.g., images, video, and/or audio) of the answer returned back to student agent 808 for any given question. For instance, in the case of a cat, one of the initial questions asked by student agent 808 regarding the topic of input data 802 may be “What does a cat look like?” In response, teacher agent 810 may search knowledge graph 806 for the concept of a cat and return any number of representative images 818 from it to student agent 808.


As a result of the interactions between student agent 808 and teacher agent 810, set 820 will eventually be populated with the original topic, the questions issued by student agent 808, and the answer data provided by teacher agent 810, such as examples of the answers. Thus, not only does set 820 represent discrete topics and labeled images that can be used to train the ML/DL model(s) of neuro-symbolic metamodel 822, but also the relationships between these concepts that represent the entire concept of the input topic (e.g., not just the concept of a cat, but the concept of a cat running on the prairie), which can be used to populate the conceptual layer(s) of neuro-symbolic metamodel 822 and support its semantic reasoning. Thus, even if neuro-symbolic metamodel 822 is later confronted with an image of a dog running on the prairie, so long as it has a basic conception of what a dog is, it can infer that the dog is on the prairie and that its corresponding action is ‘running.’
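
A highly simplified sketch of the student/teacher question-and-answer loop described above is shown below. Every class, method, and file name here is a hypothetical illustration of one possible structure, not a required implementation.

```python
from dataclasses import dataclass, field

@dataclass
class QASet:                      # corresponds loosely to the topic/question/answer set
    topic: str
    questions: list = field(default_factory=list)
    answers: list = field(default_factory=list)

class TeacherAgent:
    def __init__(self, knowledge_graph):
        self.kg = knowledge_graph                 # e.g., concept -> example media

    def answer(self, question, concept):
        return self.kg.get(concept, [])           # representative images/videos

class StudentAgent:
    def __init__(self, teacher):
        self.teacher = teacher

    def learn(self, topic, concepts):
        qa = QASet(topic)
        for concept in concepts:                  # concepts extracted by NLU
            question = f"What does a {concept} look like?"
            qa.questions.append(question)
            qa.answers.append(self.teacher.answer(question, concept))
        return qa                                 # used to train/update the metamodel

teacher = TeacherAgent({"cat": ["cat_001.jpg"], "prairie": ["prairie_004.jpg"]})
student = StudentAgent(teacher)
qa_set = student.learn("a cat is running on the prairie", ["cat", "running", "prairie"])
print(qa_set.questions, qa_set.answers)
```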



FIG. 9 illustrates an example 900 of answer data being provided by a teacher agent to a student agent, according to various embodiments. Continuing the example of FIG. 8, example 900 again shows the interactions of a student agent 904 and a teacher agent 906, so as to produce a set 912 of the original topic, questions from student agent 904, and answer data from teacher agent 906.


As shown, student agent 904 and/or teacher agent 906 may leverage any number of different data sources 902, to perform tasks such as identifying topics of interest, formulating questions, and returning answer data. By way of example, data sources 902 may include, but are not limited to, any or all of the following: prior human domain knowledge 902a about learning a topic; Generative Pre-trained Transformer 3 (GPT-3) 902b, which is an autoregressive language model developed by OpenAI; Wordnet 902c (e.g., a lexical database of semantic relationships between words/concepts); other online sources 902d, such as DALL-E from OpenAI, search engines, crowdsourcing sites, etc.; ImageNet 902e, which is an image database organized according to Wordnet 902c; and VideoNet 902f, which is a video database organized according to Wordnet 902c.


From these data sources 902, student agent 904 may formulate questions 908 regarding the topic of interest. For instance, in the case of the topic of a cat running on the prairie, student agent 904 may leverage Wordnet 902c and/or prior human domain knowledge 902a, to formulate questions such as “What does a cat look like?”, “How does a cat run?”, “How many legs does a cat have?” and the like. In turn, teacher agent 906 may leverage data sources 902, such as other online sources 902d, ImageNet 902e, VideoNet 902f, etc., to provide answers 910. For instance, as shown, answers 910 may comprise an image of a cat, a video of a cat running, and an image of all four legs of a cat, each representing the answer to the respective questions issued by student agent 904.
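
As a sketch of how a student agent might lean on Wordnet to expand a topic term into follow-up questions, the following uses NLTK's WordNet interface (an assumed tool; it requires nltk.download("wordnet") to have been run once). The question templates are illustrative.

```python
from nltk.corpus import wordnet as wn

def questions_for(term):
    questions = [f"What does a {term} look like?"]
    for synset in wn.synsets(term, pos=wn.NOUN)[:1]:
        for hypernym in synset.hypernyms():
            parent = hypernym.lemma_names()[0].replace("_", " ")
            questions.append(f"How does a {term} differ from other {parent}s?")
        for part in synset.part_meronyms():
            piece = part.lemma_names()[0].replace("_", " ")
            questions.append(f"What does a {term}'s {piece} look like?")
    return questions

print(questions_for("cat"))
# e.g., ['What does a cat look like?', 'How does a cat differ from other felines?', ...]
```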


In this way, set 912 may be formulated through the interactions between the two agents 904-906, and can be used to generate a neuro-symbolic metamodel 914, accordingly. For instance, in the case of metamodel 914 being used to analyze surveillance feeds, it may be used to analyze video data captured from at least one of: a port, a train station, a bus station, an airport, or a stadium, using its sub-symbolic layers to detect certain objects and actions, as well as its symbolic layers to perform semantic reasoning on their related concepts, to make inferences about the monitored environment.


As would be appreciated, the ontologies generated by the system and used for neuro-symbolic metamodel 914 or others may be associated with raw data that is categorized into nodes in an ontology based on some sort of similarity metric, which may be spatial, temporal, and/or spatio-temporal in nature. In turn, the mechanism used to generate the ontology can also be used to align/match at least some of the nodes in the generated ontology with counterparts in any of the source ontologies on which it is based (e.g., Wordnet 902c, other human domain knowledge 902a, etc.). Indeed, there may be cases in which multiple nodes in the machine-generated ontology can be aligned to the same node in a human-generated ontology. In these cases, the system could rank candidates (also based on the similarity metric) along with a measure of the relevance of the machine-generated ontology, such as based on metrics that represent how often the node under consideration applies to the input data.


In many cases, the machine-generated ontologies will likely be far greater in size and complexity than the human-sourced ontologies and language. In some embodiments, the system may also apply a quantization to its generated ontology, to map it to human language. For instance, there may be hundreds of ways that the system categorizes the basic action of a person walking (e.g., one category may correspond to a person walking faster than a threshold velocity, with their arms swinging, with a particular gait, etc.). However, for humans, such granular classification is often unnecessary, as simply saying “the person is walking” is often enough to convey the idea. This functionality would be even more noticeable when the input data is not related to daily human activity. For example, processing massive datasets such as genetic data, protein/molecular interactions, physics experiments, network telemetry, etc. may lead to very granular and difficult-to-understand levels of detail. In various embodiments, to address this, the system may take any or all of the following approaches:

    • Leveraging plausible words for new concepts that could be amenable to human language
    • Using unique numbers/identifiers for each new concept


Regardless, another potential function of the system when generating a neuro-symbolic metamodel may be to align the generated ontology associated with the metamodel with one or more human-generated ontologies and/or a human language, to aid in the explainability of the conclusions made by the metamodel during use.


FIG. 10 illustrates an example simplified procedure (e.g., a method) for automatic metamodel generation for artificial intelligence reasoning, in accordance with one or more embodiments described herein. For example, a non-generic, specifically configured device (e.g., device 200) may perform procedure 1000 by executing stored instructions (e.g., process 248), such as to function as a student agent in accordance with the techniques herein. The procedure 1000 may start at step 1005 and continue to step 1010, where, as described in greater detail above, the student agent may identify a topic of interest. In some embodiments, the student agent uses natural language processing to identify the topic of interest. In various embodiments, the topic of interest comprises a particular type of action associated with a particular type of object.


At step 1015, as detailed above, the student agent may issue a set of one or more questions to a teacher agent regarding the topic of interest. For instance, in the case of the topic involving a particular type of object, the one or more questions may ask what that type of object looks like or how it performs a particular type of action.


At step 1020, the student agent may receive, from the teacher agent, answer data in response to the set of one or more questions, as described in greater detail above. In some embodiments, the answer data comprises images. In one embodiment, the teacher agent bases the answer data on results from a search engine. In another embodiment, the teacher agent bases the answer data on crowdsourced or human-provided information.
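By way of a non-limiting sketch, the snippet below shows one way a teacher agent could assemble such answer data. The `image_search()` and `crowdsourced_answers()` helpers are hypothetical stubs; the embodiments herein only state that the answer data may come from a search engine or from crowdsourced/human-provided information, not how those sources are accessed:

```python
# Illustrative sketch only: a teacher agent assembling answer data from a
# (hypothetical) image search backend and (hypothetical) human-provided
# descriptions. Both helpers are stubs so the sketch remains runnable.

from typing import Dict, List

def image_search(query: str, limit: int = 25) -> List[str]:
    """Hypothetical search-engine wrapper; would return image URLs."""
    return []

def crowdsourced_answers(query: str) -> List[str]:
    """Hypothetical source of human-provided descriptions for a query."""
    return []

def answer(question: str) -> Dict:
    """Combine search-engine imagery and human-provided text into answer data."""
    return {
        "question": question,
        "images": image_search(question),
        "descriptions": crowdsourced_answers(question),
    }

if __name__ == "__main__":
    print(answer("Show examples of a person leaving a package."))
```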


At step 1025, as detailed above, the student agent may use the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer. In some embodiments, the metamodel is generated in part by populating a knowledge graph that links the sub-symbolic layer to a symbolic layer on which the semantic reasoner operates. In further embodiments, the metamodel is generated in part by performing semantic segmentation and object detection on the answer data. In another embodiment, the metamodel may be generated in part by training a neural network at the sub-symbolic layer of the neuro-symbolic metamodel using the answer data. In various embodiments, the neuro-symbolic metamodel is then used to analyze video data captured from at least one of: a port, a train station, a bus station, an airport, or a stadium.
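The sketch below illustrates, in a deliberately minimal and hypothetical way, how a sub-symbolic layer, a knowledge graph, and a rule-based semantic reasoner could be wired together. The detector is a stub standing in for a neural network trained or fine-tuned on the answer data, and the single reasoning rule is only an example; none of these names or structures are prescribed by the embodiments above:

```python
# Illustrative sketch only: a minimal "neuro-symbolic metamodel" with a
# sub-symbolic layer (stubbed detector), a knowledge graph linking it to a
# symbolic layer, and a tiny rule-based semantic reasoner.

from typing import List, Set, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def sub_symbolic_detect(frame_id: str) -> List[str]:
    """Stub for the sub-symbolic layer (e.g., a trained object detector)."""
    return ["person", "package"] if frame_id == "frame-42" else ["person"]

def populate_knowledge_graph(frame_id: str, detections: List[str]) -> Set[Triple]:
    """Link sub-symbolic detections to symbolic concepts as graph triples."""
    kg: Set[Triple] = set()
    for label in detections:
        kg.add((frame_id, "contains", label))
        kg.add((label, "is_a", "object"))
    return kg

def semantic_reasoner(kg: Set[Triple], frame_id: str) -> List[str]:
    """Tiny rule-based reasoner operating on the symbolic layer."""
    conclusions = []
    present = {o for (s, p, o) in kg if s == frame_id and p == "contains"}
    if {"person", "package"} <= present:
        conclusions.append("person-with-package scene of interest")
    return conclusions

if __name__ == "__main__":
    for frame in ("frame-41", "frame-42"):
        kg = populate_knowledge_graph(frame, sub_symbolic_detect(frame))
        print(frame, semantic_reasoner(kg, frame))
```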


Procedure 1000 then ends at step 1030.


It should be noted that while certain steps within procedure 1000 may be optional as described above, the steps shown in FIG. 10 are merely examples for illustration, and certain other steps may be included or excluded as desired. Further, while a particular order of the steps is shown, this ordering is merely illustrative, and any suitable arrangement of the steps may be utilized without departing from the scope of the embodiments herein.


While there have been shown and described illustrative embodiments that provide for automatic metamodel generation for artificial intelligence reasoning, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the embodiments herein. For example, while certain embodiments are described herein with respect to specific types of artificial intelligence development systems, the techniques can be extended without undue experimentation to other use cases, as well.


The foregoing description has been directed to specific embodiments. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the components and/or elements described herein can be implemented as software being stored on a tangible (non-transitory) computer-readable medium (e.g., disks/CDs/RAM/EEPROM/etc.) having program instructions executing on a computer, hardware, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the embodiments herein. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the embodiments herein.

Claims
  • 1. A method comprising: identifying, by a student agent, a topic of interest; issuing, by the student agent, a set of one or more questions to a teacher agent regarding the topic of interest; receiving, at the student agent and from the teacher agent, answer data in response to the set of one or more questions; and using, by the student agent, the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer.
  • 2. The method as in claim 1, wherein the student agent uses natural language processing to identify the topic of interest.
  • 3. The method as in claim 1, wherein the answer data comprises images.
  • 4. The method as in claim 1, wherein using the answer data to generate the neuro-symbolic metamodel comprises: populating a knowledge graph that links the sub-symbolic layer to a symbolic layer on which the semantic reasoner operates.
  • 5. The method as in claim 1, wherein using the answer data to generate the neuro-symbolic metamodel comprises: performing semantic segmentation and object detection on the answer data.
  • 6. The method as in claim 1, wherein the teacher agent bases the answer data on results from a search engine.
  • 7. The method as in claim 1, wherein the teacher agent bases the answer data on crowdsourced or human-provided information.
  • 8. The method as in claim 1, wherein the topic of interest comprises a particular type of action associated with a particular type of object.
  • 9. The method as in claim 1, wherein using the answer data to generate the neuro-symbolic metamodel comprises: training a neural network at the sub-symbolic layer of the neuro-symbolic metamodel using the answer data.
  • 10. The method as in claim 1, wherein the neuro-symbolic metamodel is used to analyze video data captured from at least one of: a port, a train station, a bus station, an airport, or a stadium.
  • 11. An apparatus, comprising: a network interface to communicate with a computer network; a processor coupled to the network interface and configured to execute one or more processes; and a memory configured to store a process that is executed by the processor, the process when executed configured to: identify, by a student agent, a topic of interest; issue, by the student agent, a set of one or more questions to a teacher agent regarding the topic of interest; receive, at the student agent and from the teacher agent, answer data in response to the set of one or more questions; and use the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer.
  • 12. The apparatus as in claim 11, wherein the student agent uses natural language processing to identify the topic of interest.
  • 13. The apparatus as in claim 11, wherein the answer data comprises images.
  • 14. The apparatus as in claim 11, wherein the apparatus uses the answer data to generate the neuro-symbolic metamodel by: populating a knowledge graph that links the sub-symbolic layer to a symbolic layer on which the semantic reasoner operates.
  • 15. The apparatus as in claim 11, wherein the apparatus uses the answer data to generate the neuro-symbolic metamodel by: performing semantic segmentation and object detection on the answer data.
  • 16. The apparatus as in claim 11, wherein the teacher agent bases the answer data on results from a search engine.
  • 17. The apparatus as in claim 11, wherein the teacher agent bases the answer data on crowdsourced or human-provided information.
  • 18. The apparatus as in claim 11, wherein the topic of interest comprises a particular type of action associated with a particular type of object.
  • 19. The apparatus as in claim 11, wherein the apparatus uses the answer data to generate the neuro-symbolic metamodel by: training a neural network at the sub-symbolic layer of the neuro-symbolic metamodel using the answer data.
  • 20. A tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising: identifying, by a student agent, a topic of interest; issuing, by the student agent, a set of one or more questions to a teacher agent regarding the topic of interest; receiving, at the student agent and from the teacher agent, answer data in response to the set of one or more questions; and using, by the student agent, the answer data to generate a neuro-symbolic metamodel that comprises a semantic reasoner and a sub-symbolic layer.