The following specification describes and ascertains the nature of this invention and the manner in which it is to be performed.
The present disclosure relates to a method to prevent capturing of an AI module and an AI system thereof.
With the advent of data science, data processing and decision making systems are implemented using artificial intelligence modules. The artificial intelligence modules use different techniques such as machine learning, neural networks, deep learning etc. Most AI based systems receive large amounts of data and process the data to train AI models. Trained AI models generate output based on the use cases requested by the user. Typically, AI systems are used in the fields of computer vision, speech recognition, natural language processing, audio recognition, healthcare, autonomous driving, manufacturing, robotics etc., where they process data to generate the required output based on certain rules/intelligence acquired through training.
To process the inputs and give a desired output, the AI systems use various models/algorithms which are trained using the training data. Once the AI system is trained using the training data, the AI systems use the models to analyze the real time data and generate appropriate results. The models may be fine-tuned in real-time based on the results. The models in the AI systems form the core of the system. A lot of effort, resources (tangible and intangible), and knowledge goes into developing these models.
It is possible that some adversary may try to capture/copy/extract the model from AI systems. The adversary may use different techniques to capture the model from the AI systems. One simple technique is for the adversary to send different queries to the AI system iteratively, using its own test data. The test data may be designed in a way to extract internal information about the working of the models in the AI system. The adversary uses the generated results to train its own models. By repeating these steps, it is possible to capture the internals of the model, and a parallel model can be built using similar logic. This will cause hardships to the original developer of the AI systems. The hardships may be in the form of business disadvantages, loss of confidential information, loss of lead time spent in development, loss of intellectual property, loss of future revenues etc.
There are methods known in the prior art to identify such attacks by adversaries and to protect the models used in the AI system. The prior art US 20190095629A1—Protecting Cognitive Systems from Model Stealing Attacks—discloses one such method. It discloses a method wherein the input data is processed by applying a trained model to the input data to generate an output vector having values for each of a plurality of pre-defined classes. A query engine modifies the output vector by inserting a query in a function associated with generating the output vector, to thereby generate a modified output vector. The modified output vector is then output. The query engine modifies one or more values to disguise the trained configuration of the trained model logic while maintaining accuracy of classification of the input data.
An embodiment of the invention is described with reference to the following accompanying drawings:
It is important to understand some aspects of artificial intelligence (AI) technology and artificial intelligence (AI) based systems, herein also referred to as AI systems. This disclosure covers two aspects of AI systems. The first aspect is related to the training of a submodule in the AI system and the second aspect is related to the prevention of capturing of the AI module in an AI system.
Some important aspects of the AI technology and AI systems can be explained as follows. Depending on the architecture of the implementation, AI systems may include many components. One such component is an AI module. An AI module with reference to this disclosure can be explained as a component which runs a model. A model can be defined as a reference or an inference set of data, which uses different forms of correlation matrices. Using these models and the data from these models, correlations can be established between different types of data to arrive at some logical understanding of the data. A person skilled in the art would be aware of the different types of AI models such as linear regression, naïve bayes classifier, support vector machine, neural networks and the like. It must be understood that this disclosure is not specific to the type of model being executed in the AI module and can be applied to any AI module irrespective of the AI model being executed. A person skilled in the art will also appreciate that the AI module may be implemented as a set of software instructions, a combination of software and hardware, or any combination of the same.
Some of the typical tasks performed by AI systems are classification, clustering, regression etc. The majority of classification tasks depend upon labeled datasets; that is, the data sets are labelled manually in order for a neural network to learn the correlation between labels and data. This is known as supervised learning. Some of the typical applications of classification are: face recognition, object identification, gesture recognition, voice recognition etc. Clustering or grouping is the detection of similarities in the inputs. The cluster learning techniques do not require labels to detect similarities. Learning without labels is called unsupervised learning. The majority of data in the world is unlabeled. One law of machine learning is: the more data an algorithm can train on, the more accurate it will be. Therefore, unsupervised learning models/algorithms have the potential to produce accurate models as the training dataset size grows. The two tasks are contrasted in the sketch below.
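By way of a non-limiting illustration only, and not forming part of the claimed invention, the following sketch contrasts supervised classification with unsupervised clustering using scikit-learn-style estimators; the synthetic data, the estimator choices and the parameters are assumptions.

```python
# Illustrative sketch: supervised classification (labels provided) versus
# unsupervised clustering (no labels). Data and parameters are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # input features
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # manually provided labels

classifier = LogisticRegression().fit(X, y)      # supervised: learns label/data correlation
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # unsupervised: no labels needed
```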
As the AI module forms the core of the AI system, the module needs to be protected against attacks. Attackers attempt to attack the model within the AI module and steal information from the AI module. The attack is initiated through an attack vector. In computing technology, a vector may be defined as a method by which malicious code or a virus propagates itself, for example to infect a computer, a computer system or a computer network. Similarly, an attack vector is defined as a path or means by which a hacker can gain access to a computer or a network in order to deliver a payload or a malicious outcome. A model stealing attack uses a kind of attack vector that can make a digital twin/replica/copy of an AI module.
The attacker typically generates random queries of the size and shape of the input specifications and starts querying the model with these arbitrary queries. This querying produces input-output pairs for random queries and generates a secondary dataset that is inferred from the pre-trained model. The attacker then takes these input-output pairs and trains a new model from scratch using this secondary dataset. This is a black-box attack vector, where no prior knowledge of the original model is required. As prior information regarding the model becomes available and increases, the attacker moves towards more intelligent attacks. The attacker chooses a relevant dataset at his disposal to extract the model more efficiently. This is a domain-intelligence model-based attack vector. With these approaches, it is possible to demonstrate model stealing attacks across different models and datasets, as illustrated by the sketch below.
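By way of a non-limiting illustration of the black-box attack vector described above, the following sketch shows how an attacker could query a victim model with arbitrary inputs and train a surrogate on the resulting input-output pairs. The function names, the use of an MLP surrogate and the query distribution are assumptions, not drawn from the disclosure.

```python
# Sketch of black-box model extraction: query the victim with random inputs,
# collect the answers as a secondary dataset, and train a replica from scratch.
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_surrogate(victim_predict, input_shape, n_queries=10000, seed=0):
    rng = np.random.default_rng(seed)
    # Random queries matching the published input size and shape (input_shape is a tuple).
    queries = rng.uniform(-1.0, 1.0, size=(n_queries,) + input_shape)
    flat = queries.reshape(n_queries, -1)
    # Labels inferred from the pre-trained victim form the secondary dataset.
    labels = victim_predict(flat)
    surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
    surrogate.fit(flat, labels)          # train the parallel model from scratch
    return surrogate
```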
A module with respect to this disclosure can either be logic circuitry or a software program that responds to and processes logical instructions to produce a meaningful result. A module may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parent board, hardwired logic, software stored by a memory device and executed by a microprocessor, microcontrollers, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). As explained above, these various modules can either be software embedded in a single chip or a combination of software and hardware where each module and its functionality is executed by separate independent chips connected to each other to function as the system. For example, a neural network (in an embodiment, the AI module) mentioned hereinafter can be software residing in the system or the cloud, or embodied within an electronic chip. Such neural network chips are specialized silicon chips which incorporate AI technology and are used for machine learning.
The blocker module (18) is configured to block a user when the information gain exceeds a predefined threshold. The information gain is calculated based on the input attack queries and compared with the predefined threshold value. The blocker module (18) is further configured to modify a first output generated by the AI module (12). This is done only when the input is identified as an attack vector. One possible realization of this behaviour is sketched below.
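The following is a minimal sketch, under assumptions, of the blocker module (18) behaviour described above: block the user once the information gain derived from the attack queries exceeds a predefined threshold, and otherwise modify the first output when the input was identified as an attack vector. The noise-based perturbation used to modify the output and the class interface are assumptions for illustration only.

```python
# Hypothetical sketch of the blocker module (18); not a definitive implementation.
import numpy as np

class BlockerModule:
    def __init__(self, gain_threshold: float, noise_scale: float = 0.05, seed: int = 0):
        self.gain_threshold = gain_threshold
        self.noise_scale = noise_scale
        self.blocked_users = set()
        self._rng = np.random.default_rng(seed)

    def should_block(self, cumulative_gain: float) -> bool:
        # Block once the information gain exceeds the predefined threshold.
        return cumulative_gain > self.gain_threshold

    def block_user(self, user_id) -> None:
        self.blocked_users.add(user_id)

    def modify(self, first_output: np.ndarray) -> np.ndarray:
        # Disguise the true output with small noise; renormalize if it is a probability vector.
        noisy = np.clip(first_output + self._rng.normal(0.0, self.noise_scale, first_output.shape), 0.0, None)
        total = noisy.sum()
        return noisy / total if total > 0 else noisy
```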
The AI module (12) is configured to process said input data and generate the first output data corresponding to said input. The AI module (12) executes a first model (M) based on the input to generate a first output. The first model could be any one from those mentioned above, such as linear regression, naïve bayes classifier, support vector machine, neural networks and the like.
The submodule (14) is configured to identify an attack vector from the received input. The submodule comprises a computation module (141), a memory (142) and at least a comparator module (143). The computation module (141) is configured to at least derive an instantaneous frequency domain transformation signature of the received input. The memory (142) is configured to store a set of pre-derived frequency domain transformation signatures. The set of pre-derived frequency domain transformation signatures comprises frequency domain transformation signatures for known inputs comprising a range of non-attack vectors.
The comparator module (143) is configured to compare the instantaneous frequency domain transformation signature with the set of pre-derived frequency domain transformation signatures. The comparator module (143) can be a conventional electronic comparator or a specialized electronic comparator, either embedded with neural networks or executing another AI model to enhance its functions. The above-mentioned components of the submodule can either be implemented in a single chip or as any or a combination of: one or more microchips or integrated circuits interconnected using a parent board, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). A signature derivation and comparison of this kind is sketched below.
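A minimal sketch of how such a signature derivation and comparison might be realized is given below, assuming the signature is a normalized FFT magnitude spectrum and that similarity to the pre-derived signatures of known non-attack inputs is measured by Euclidean distance; these choices and the threshold are assumptions, not a definitive implementation of the submodule (14).

```python
# Hypothetical sketch of the computation module (141) and comparator module (143).
import numpy as np

def fft_signature(sample: np.ndarray) -> np.ndarray:
    # Instantaneous frequency domain transformation signature:
    # normalized magnitude spectrum of the flattened input (assumption).
    spectrum = np.abs(np.fft.rfft(sample.ravel()))
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm > 0 else spectrum

def is_attack_vector(sample, known_signatures, threshold=0.25):
    # known_signatures: pre-derived signatures for known non-attack inputs of the
    # same length as the received input (assumption).
    sig = fft_signature(sample)
    distances = [np.linalg.norm(sig - ref) for ref in known_signatures]
    # Far from every known benign signature => flag the input as an attack vector.
    return min(distances) > threshold
```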
The blocker notification module (20) transmits a notification to the owner of said AI system (10) on detecting an attack vector. The notification could be transmitted in any audio/visual/textual form.
The information gain module (16) is configured to calculate an information gain and send the information gain value to the blocker module (18). The information gain is calculated using an information gain methodology, as sketched below. In one embodiment, if the information gain extracted exceeds a pre-defined threshold, the AI system (10) is configured to lock out the user from the system. The locking out of the system is also initiated if the cumulative information gain extracted by a plurality of users exceeds a pre-defined threshold.
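A minimal sketch of one possible per-user information gain accounting is given below. The disclosure does not fix a particular formula here; the entropy-based proxy (bits revealed by the output distributions returned for a user's attack queries) and the threshold value are assumptions for illustration only.

```python
# Hypothetical sketch of the information gain module (16).
import numpy as np

class InformationGainModule:
    def __init__(self, threshold_bits: float = 50.0):
        self.threshold_bits = threshold_bits
        self.cumulative_gain = {}                 # user_id -> accumulated bits

    def update(self, user_id, output_probs: np.ndarray) -> float:
        # Approximate the gain of one response by the Shannon entropy of the
        # returned output distribution (assumption).
        p = np.clip(output_probs, 1e-12, 1.0)
        gain = float(-np.sum(p * np.log2(p)))
        self.cumulative_gain[user_id] = self.cumulative_gain.get(user_id, 0.0) + gain
        return self.cumulative_gain[user_id]

    def should_block(self, user_id) -> bool:
        # Signal the blocker module (18) once the pre-defined threshold is exceeded.
        return self.cumulative_gain.get(user_id, 0.0) > self.threshold_bits
```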
The output interface (22) sends output to said at least one user. The output sent by the output interface (22) comprises the first output data when the submodule (14) does not identify an attack vector from the received input. The output sent by the output interface (22) comprises a modified output received from the blocker module (18) when an attack vector is detected from the input.
It should be understood at the outset that, although exemplary embodiments are illustrated in the figures and described below, the present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described below. As explained above, these various modules and submodules can either be a software embedded in a single chip or a combination of software and hardware where each module and its functionality is executed by separate independent chips connected to each other to function as the system.
In method step 301, input interface (11) receives input data from at least one user. In step 302, this input data is transmitted through a blocker module (18) to an AI module (12). In step 303, the AI module (12) computes a first output based on the input data.
In step 304, the input is processed by the submodule (14) to identify an attack vector from the input data, and the identification information of the attack vector is sent to the information gain module (16). Processing of the input data further comprises computing an instantaneous frequency domain transformation signature of the received input by means of the computation module (141). This is followed by comparing the instantaneous frequency domain transformation signature with a set of pre-derived frequency domain transformation signatures by means of the comparator module (143). Finally, an attack vector is identified based on said comparison. The set of pre-derived frequency domain transformation signatures comprises frequency domain transformation signatures for known inputs comprising a range of non-attack vectors.
Reference can be made to the training method (200) elucidated in accordance with
In step 305, an output is sent to a user by means of the output interface (22). The output sent by the output interface (22) comprises the first output data when the submodule (14) does not identify an attack vector from the received input. Once the attack vector identification information is sent to the information gain module (16), an information gain is calculated. The information gain is sent to the blocker module (18). In an embodiment, if the information gain exceeds a pre-defined threshold, the user is blocked, and a notification is sent to the owner of the AI system (10) using the blocker notification module (20). If the information gain is below the pre-defined threshold, although an attack vector was detected, the blocker module (18) may modify the first output generated by the AI module (12) and send it to the output interface (22). This overall flow is sketched below.
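A minimal end-to-end sketch of method steps 301 to 305 is given below. The helper objects stand in for the AI module (12), the submodule (14), the information gain module (16), the blocker module (18), the blocker notification module (20) and the output interface (22); their names and interfaces are assumptions for illustration only, not the claimed implementation.

```python
# Hypothetical orchestration of steps 301-305 described above.
def handle_request(user_id, input_data,
                   ai_module, submodule, info_gain_module, blocker, output_interface):
    first_output = ai_module.predict(input_data)               # step 303: compute first output
    if not submodule.is_attack_vector(input_data):             # step 304: no attack detected
        return output_interface.send(user_id, first_output)    # step 305: send first output

    # Attack vector detected: account for the information gain of this query.
    info_gain_module.update(user_id, first_output)
    if info_gain_module.should_block(user_id):
        blocker.block_user(user_id)                             # lock out the user
        blocker.notify_owner(user_id)                           # notification via module (20)
        return None
    # Information gain still below threshold: send a modified (disguised) output.
    return output_interface.send(user_id, blocker.modify(first_output))
```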
In addition, the user profile may be used to determine whether the user is a habitual attacker, or whether it was a one-time or merely an incidental attack. Depending upon the user profile, the steps for unlocking of the system may be determined. If the user is a first-time attacker, the user may be locked out temporarily. If the attacker is a habitual attacker, then stricter locking steps may be suggested, and so on.
A person skilled in the art will appreciate that while these method steps describe only a series of steps to accomplish the objectives, these methodologies may be implemented with slight modification to the AI system (10) described herein. The method to prevent capturing of an AI module (12) and the AI system (10) thereof is particularly useful for time series inputs, where the time series can be sliced and FFT or other frequency domain transformations can be generated, as sketched below.
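A minimal sketch of slicing a time series input and deriving a frequency domain transformation signature per slice is given below; the window length, the step size and the use of the real FFT magnitude spectrum are assumptions for illustration only.

```python
# Hypothetical sketch: derive one frequency domain signature per time slice.
import numpy as np

def sliced_fft_signatures(series: np.ndarray, window: int = 256, step: int = 128):
    signatures = []
    for start in range(0, len(series) - window + 1, step):
        slice_ = series[start:start + window]                 # one time slice
        spectrum = np.abs(np.fft.rfft(slice_))                # FFT of the slice
        norm = np.linalg.norm(spectrum)
        signatures.append(spectrum / norm if norm > 0 else spectrum)
    return np.array(signatures)                               # one signature per slice
```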
It must be understood that the embodiments explained in the above detailed description are only illustrative and do not limit the scope of this invention. Any modification to a method to prevent capturing of an AI module and an AI system thereof is envisaged and forms a part of this invention. The scope of this invention is limited only by the claims.
Number | Date | Country | Kind
---|---|---|---
2022 4101 0164 | Feb 2022 | IN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2023/053355 | 2/10/2023 | WO |