A portion of this disclosure contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the material subject to copyright protection as it appears in the United States Patent & Trademark Office's patent file or records, but otherwise reserves all copyright rights whatsoever.
This disclosure relates generally to cyber security and, in an embodiment, to the use of Artificial Intelligence in cyber security.
Cybersecurity attacks have become a pervasive problem for enterprises as many computing devices and other resources have been subjected to attack and compromised. A “cyberattack” constitutes a threat to security of an enterprise (e.g., enterprise network, one or more computing devices connected to the enterprise network, or the like). A cyber threat from a cyberattack may involve malicious software, an insider attack, or other threats introduced into a computing device and/or the network. The cyber threats may further represent malicious or criminal activity, ranging from theft of credentials to even a nation-state attack, where the source initiating or causing the security threat is commonly referred to as a “malicious” source.
One attack vector that may be employed by a malicious actor in the context of a cyber threat is the use of barcodes that can be scanned by a user, such as quick-response (QR) codes. Such codes may be embedded in an email, instant message, shared file, or website, or printed on a physical object. A target of the cyber threat may scan the QR code, which may advance a cyberattack, such as by directing the target to a malicious phishing website. It is, therefore, valuable for a cyber security system to be able to detect QR codes so that appropriate action can be taken to prevent the cyber threat from developing. To bypass such cyber security systems, malicious actors are becoming more sophisticated, obfuscating QR codes so that they are not detected by an automated cyber security system but can still be scanned by a target. Aspects of the present application address these or related problems.
Methods, systems, and apparatus are disclosed for an Artificial Intelligence-based cyber security system. In an embodiment, a cyber security appliance for detecting a presence of a barcode is provided. The cyber security appliance comprises a barcode reader module that is configured to: receive input data; obtain one or more images from the received input data; detect that a candidate barcode is present in the one or more images; and output an alert, wherein when the candidate barcode can be decoded, the alert indicates that a valid barcode is present in the received input data, and wherein when the candidate barcode cannot be decoded, the alert indicates the candidate barcode present in the received input data cannot be decoded.
Optionally, detecting that a candidate barcode is present in an image comprises: one or more image processing steps to increase a detectability of any candidate barcode that may be present in the one or more images; and one or more scanning steps to detect and attempt to decode the candidate barcode in the one or more images.
Optionally, the one or more image processing steps comprise detecting contours in the one or more images.
Optionally, a first image of the one or more images is determined not to comprise the candidate barcode when a number of contours is below a first threshold; and wherein when the first image is determined not to comprise the candidate barcode it is then discarded.
Optionally, one or more regions of the one or more images likely to have the candidate barcode are determined based on the detected contours; and areas not within the one or more regions of the one or more images likely to have the candidate barcode are then discarded.
Optionally, the one or more image processing steps comprise detecting edges.
Optionally, a first image of the one or more images is determined not to comprise the candidate barcode when a number of edges is below a second threshold; and wherein when the first image is determined not to comprise the candidate barcode it is then discarded.
Optionally, the edges are detected using Sobel gradients.
Optionally, the one or more image processing steps comprise one or more of: inverting a first image; resizing the first image one or more times; adjusting or replacing one or more colors of the first image; sharpening the first image; brightening the first image; and increasing a contrast of the first image.
Optionally, the one or more image processing steps comprise two or more image processing steps and wherein at least one of the one or more scanning steps is interspersed between two of the two or more image processing steps.
Optionally, the one or more scanning steps comprise two or more scanning steps, the two or more scanning steps performed using at least two different QR scanners.
Optionally, obtaining the one or more images from the received input data comprises: determining that the received input data comprises multiple image frames; and extracting each image frame from the multiple image frames.
Optionally, obtaining the one or more images from the received input data further comprises: determining a similarity between each image frame; and selecting the one or more images from the multiple image frames based on the similarity between each image frame.
Optionally, determining that the received input data comprises multiple image frames comprises determining that the received input data is a GIF or a video.
Optionally, obtaining one or more images from the received input data comprises: determining that the received input data comprises one or more images; determining that at least one of the one or more images is larger than a threshold size; and chunking the at least one image that is larger than the threshold size into a plurality of image chunks, each image chunk being smaller than the threshold size; wherein the obtained one or more images comprise the plurality of image chunks.
Optionally, when the candidate barcode can be decoded, the barcode reader module is further configured to determine whether the candidate barcode is malicious.
Optionally, when it is determined that the candidate barcode is malicious, the barcode reader module is further configured to remove the malicious candidate barcode from the input data, replace the malicious candidate barcode with replacement data, or both.
Optionally, the alert indicating that a valid barcode is present in the received input data further comprises decoded data from the candidate barcode for further investigation by a cyber security system.
Optionally, the received input data comprises an email or instant message received over a network, or wherein the received input data comprises an image received from a camera associated with a device running the cyber security appliance.
Optionally, the barcode is a linear barcode, a two-dimensional barcode, or a matrix barcode, wherein optionally the barcode is a quick-response (QR) code.
According to one embodiment, a method for a barcode reader module of a cyber security appliance for detecting a presence of a barcode is provided. The method comprises: receiving input data; obtaining one or more images from the received input data; detecting that a candidate barcode is present in the one or more images; and outputting an alert, wherein when the candidate barcode can be decoded, the alert indicates that a valid barcode is present in the received input data, and wherein when the candidate barcode cannot be decoded, the alert indicates the candidate barcode present in the received input data cannot be decoded.
Optionally, the method may comprise any of the steps above that may be performed by the cyber security appliance according to some embodiments.
According to one embodiment, a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium comprises instructions that, when executed by one or more processing devices, cause the one or more processing devices to: receive input data; obtain one or more images from the received input data; detect that a candidate barcode is present in the one or more images; and output an alert, wherein when the candidate barcode can be decoded, the alert indicates that a valid barcode is present in the received input data, and wherein when the candidate barcode cannot be decoded, the alert indicates the candidate barcode present in the received input data cannot be decoded.
Optionally, the non-transitory computer-readable medium may further comprise instructions that, when executed by one or more processing devices, cause the one or more processing devices to perform any of the steps above performed by the cyber security appliance according to some embodiments.
These and other features of the design provided herein can be better understood with reference to the drawings, description, and claims, all of which form the disclosure of this patent application.
The drawings refer to some embodiments of the design provided herein in which:
While the design is subject to various modifications, equivalents, and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will now be described in detail. It should be understood that the design is not limited to the particular embodiments disclosed, but—on the contrary—the intention is to cover all modifications, equivalents, and alternative forms using the specific embodiments.
In the following description, numerous specific details are set forth, such as examples of specific data signals, named components, number of servers in a system, etc., in order to provide a thorough understanding of the present design. It will be apparent, however, to one of ordinary skill in the art that the present design can be practiced without these specific details. In other instances, well known components or methods have not been described in detail but rather in a block diagram in order to avoid unnecessarily obscuring the present design. Further, specific numeric references such as a first server, can be made. However, the specific numeric reference should not be interpreted as a literal sequential order but rather interpreted that the first server is different than a second server. Thus, the specific details set forth are merely exemplary. Also, the features implemented in one embodiment may be implemented in another embodiment where logically possible. The specific details can be varied from and still be contemplated to be within the spirit and scope of the present design.
A barcode is a type of machine-readable visual representation of encoded data. Many types of barcodes exist, generally categorizable into linear barcodes, which use lines of different thickness to encode data, and two-dimensional barcodes. Two-dimensional barcodes may still be primarily line-based but may have a number of stacked rows of linear barcodes, or they may take the form of matrix barcodes, which utilize a two-dimensional arrangement of squares, rectangles, hexagons, dots, or other shapes. A particularly common example of a type of barcode to which the present teachings can be applied is the quick-response (QR) code, which is a type of two-dimensional matrix barcode.
Method 10 begins with step 11 of receiving input data. The received input data is the data for which it is to be determined whether a barcode is present therein. The input data may be received over a variety of different protocols and technologies. In one example, the input data is an email received over a network, such as the internet. Method 10 then determines whether a barcode is present within the email, such that a viewer of the email may be able to scan the barcode (e.g., with a camera of a cellular phone pointing at a screen displaying the email). Other forms of input data and manners in which input data may be received include instant messages, shared files, web pages, and images taken by a camera (such as a camera associated with a device running a cyber security appliance performing method 10). The input data may be received over a wired or wireless connection; over a network such as the internet, a local area network (LAN), wide area network (WAN), intranet, or enterprise network; or be collected locally by a device running a cyber security appliance performing method 10.
Once the input data has been received at step 11, the method proceeds with obtaining one or more images from the input data at step 13. At this step, images are downloaded, extracted, or otherwise obtained from the received input data such that they can be investigated to determine whether they are, or comprise, a barcode. The step 13 of obtaining one or more images from the received input data may also comprise some initial processing to ensure that the images are in a suitable format to be scanned by a barcode scanner, to determine whether a barcode is present. This processing may involve, for example, extracting image frames from a video or GIF detected within the received input data. Other processing that may be performed at step 13 includes chunking large images into smaller image chunks that are suitable for further processing and scanning with a barcode scanner.
Having obtained one or more images from the received input data at step 13, the method continues with detecting that a candidate barcode is present in the one or more images at step 15. A candidate barcode may be a suspected barcode. A candidate barcode can be detected using a barcode scanner that finds and successfully decodes a valid barcode within an image. The appropriate barcode scanner will depend upon the type of barcodes being looked for. As an example, for the case where the barcode is a QR code, the barcode scanner (a QR scanner) may be . . . . In this case, the candidate barcode is also a valid barcode. Alternatively, a candidate barcode can be detected using other image processing procedures that determine that something similar (in one or more regards) to an actual barcode is present. In this case, the candidate barcode may not necessarily be a valid barcode. Nevertheless, it may still be useful to know that such a candidate barcode is present in an image in the received input data, even if it cannot be verified as a valid barcode by scanning and decoding it. This is because it may still represent an attempted cyberattack that has gone wrong: the sender obfuscated the barcode so heavily that it no longer remains valid. In this case, the knowledge of an attempted cyberattack is valuable, and this knowledge can be combined with other information such as the source of the input data (e.g., the sender). Additionally, different barcode scanners perform differently, and one barcode scanner may detect and decode barcodes that another barcode scanner would not be able to. This is especially so for barcodes that have been obfuscated in some way to attempt to avoid detection by cyber security appliances. Different barcode scanners will be able to “see through” different obfuscation techniques more effectively, and so while the barcode scanners employed in method 10 may not be able to decode a candidate barcode, a user device of the person who receives the input data may nonetheless be able to decode it. The risk of a barcode not being decodable by a barcode scanner employed in method 10 yet being decodable by an end user's device is increased because many cellular telephones, commonly used by people for scanning barcodes, utilize proprietary barcode scanners that cannot, therefore, be utilized within method 10.
After a candidate barcode is detected in an image of the received input data at step 15, an alert is output at step 17. If the candidate barcode can be decoded, then the alert indicates that a valid barcode is present in the received input data. On the other hand, if the candidate barcode cannot be decoded, the alert indicates that a candidate barcode that cannot be decoded is present in the received input data. This alert may take various forms and be provided to a number of different people and other systems within a broader cyber security system. In some cases, an alert may be provided to the end user who received the input data informing them of the candidate barcode, such that they can determine an appropriate further course of action. Similarly, in addition or alternatively to providing an alert to the user who received the input data, an alert may be sent to a cyber security professional (e.g., within the same organization as the user to whom the received data was sent) for them to take appropriate further action. In other cases, the alert may be sent not to a human but instead to another part of a cyber security system within which a cyber security appliance performing method 10 exists. For example, as discussed further below, the alert may provide information to the detection engine 100 and the autonomous response engine 140 to feed into the detection of and response to a cyberattack, as well as associated AI models, such as normal pattern of life models that are updated on a continuing basis and then used, amongst other tasks, to help detect and respond to cyber threats. Automated responses, which may be considered a part of the step of outputting an alert, may include, for example, removing a candidate barcode from the received input data before releasing the received input data to the intended recipient, or replacing the candidate barcode with an alternative image or safe barcode (for example, directing the user to an information page detailing why the candidate barcode was replaced). In some examples, the output alert may be a combination of alerts to human operators and users and information provided to other parts of a cyber security system.
The output alerts at step 17 may comprise any information that has been determined or is known about the candidate barcode. If the candidate barcode has been decoded, then the alert may include the data decoded from the barcode, which typically will be a URL or other web link, though may in principle be any type of data. Other information that may be included in the alert, whether or not the candidate barcode has been successfully decoded, includes metadata about the input data that the candidate barcode was received in. This can include a source or sender of the received input data, destination information (i.e., who the intended recipients were), and a time and date that the input data was sent and received. Information regarding the detection of the candidate barcode can be included as well. For example, details can be provided regarding how and where the candidate barcode was inserted within the input data (e.g., whether it was within a video or a GIF), as well as how many image processing steps had to be performed before the candidate barcode could be decoded and which steps these were (e.g., to give an indication of the level of obfuscation of the candidate barcode). The output alerts at step 17 may also comprise the images in which the candidate barcodes are identified. This may be in the format in which the image was received, the image subsequent to any image processing steps applied to it as part of the detection of the candidate barcode at step 15, or both formats. This may provide useful information to a human recipient of the alert, and in particular may enable them to determine whether a candidate barcode that could not be decoded does in fact appear to be a valid barcode that was obfuscated so that it could not be decoded by a barcode scanner or is instead an image possessing similar characteristics to a barcode.
An example of an implementation of method 10 is method 200 illustrated in
Method 200 begins, as illustrated in
At step 203, it is first determined whether the received input data comprises image data, whether in the form of (still) images, such as raster images like JPEGs and PNGs or vector images like SVGs, or image data in other formats, such as videos and GIFs. If no such image data is present, the method ends, as there is no need to scan for any candidate QR code in the received input data. The received input data may then be otherwise used or processed, such as in accordance with any other rules or procedures being applied to received data of that type by an organization or as desired by the end user.
On the other hand, if, at step 203, it is determined that the received input data does comprise image data in some form, the input data must be processed to obtain one or more images from the received input data, as in step 13 of method 10. In method 200, steps 205 to 213, also shown in
At step 205, the method determines whether any of the image data in the received input data is in the form of a video or a GIF (or any other file type consisting of a number of frames displayed in some sequence). If no such image data is present in the received input data, the method moves to step 211, discussed in more detail shortly.
If there are videos or GIFs in the image data of the received input data, the method proceeds to extract the frames from the videos or GIFs as still images at step 207. As this may result in a large number of individual images, to reduce the number of images which have to be further processed and scanned for QR codes, a similarity may optionally be determined between each of the extracted images. Images that are within a similarity threshold to a previous image may be discarded and not processed or scanned further. The similarity between images may be determined in any appropriate manner. After a subset of the frames of the videos and GIFs in the received input data have been selected based on the similarity of the frames, the method proceeds to step 211.
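By way of illustration only, and not as part of the claimed design, the frame extraction and similarity filtering of steps 207 and 209 might be sketched in Python with OpenCV as follows. The function name and similarity threshold are hypothetical, mean absolute pixel difference is merely one of the many possible similarity measures the disclosure leaves open, and reading a GIF or video this way assumes an OpenCV build able to open the container:

```python
import cv2
import numpy as np

def extract_distinct_frames(path, similarity_threshold=5.0):
    """Extract frames from a video or GIF, discarding frames that are
    nearly identical to the previously kept frame."""
    capture = cv2.VideoCapture(path)  # assumes the build can read this container
    kept, previous = [], None
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # no more frames
        if previous is not None:
            # Mean absolute pixel difference: one possible similarity
            # measure; the disclosure leaves the measure open.
            difference = float(np.mean(cv2.absdiff(frame, previous)))
            if difference < similarity_threshold:
                continue  # too similar to the last kept frame: skip it
        kept.append(frame)
        previous = frame
    capture.release()
    return kept
```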
At step 211, whether the method has arrived directly from step 205 or via steps 207 and 209, the sizes of the images in the received input data are considered. At step 211, it is determined whether any of the images, from the received input data directly or extracted from videos or GIFs in the received input data, are over a threshold size. The threshold size may be defined in a number of different ways, such as file size, total number of pixels, or number of pixels in a specific dimension. The threshold size may be set based on various factors, such as the processing power available to the system on which the method is being run, the (anticipated) number of images that the method will need to process, and so on, to provide the required level of speed and performance. If images over the threshold size are present, these may be chunked, such as with a moving window so that the resulting image chunks comprise overlaps, into a plurality of image chunks that are each below the threshold size. Using a moving window can help ensure that no matter where in an image a QR code is located it will be captured in one of the resulting image chunks. Chunking large images in this manner means that the method can avoid processing images over a certain size, which can take disproportionately more time and resources than processing a larger number of smaller images, particularly the steps of using a QR scanner, discussed further below.
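A minimal sketch of such moving-window chunking, assuming images held as numpy arrays; the tile size and overlap values are illustrative. Because consecutive tiles overlap, any region no larger than the overlap is guaranteed to appear whole in at least one tile, which is the property the moving window is intended to provide:

```python
import numpy as np

def chunk_image(image, max_side=1024, overlap=256):
    """Split an oversized image into overlapping tiles of at most
    max_side pixels per dimension (a moving window), so that a QR code
    straddling a tile boundary still appears whole in another tile."""
    height, width = image.shape[:2]
    if max(height, width) <= max_side:
        return [image]  # already below the threshold size
    step = max_side - overlap  # window stride
    chunks = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            chunks.append(image[y:y + max_side, x:x + max_side])
    return chunks
```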
Once the images have been obtained from the received input data, including extracting any frames from GIFs or videos and chunking any large images, the images are processed to detect that a candidate barcode is present in the one or more images, as per step 15 of method 10. This function is generally performed by steps 215, 217, 219, 221, 223, 225, 227, 229, 231, 233, 235, 237, 239, 241, 243, 245, 247, 249, 251, 253, and 255 in method 200.
To begin this processing, method 200 proceeds to step 215, illustrated on
With the first image selected for processing at step 215, at step 217 the image is processed to detect contours in the image. A contour in an image refers to a steep gradient in luminosity or color. Contours in an image can be detected using any appropriate algorithm, such as edge-based approaches (like Canny edge detection), region-based approaches (including active contours), pixel-based approaches, and deep learning-based methods. These contours are used, at step 219, as an initial pass to determine whether a candidate QR code may exist in the image or whether the image can be quickly discarded as having no candidate QR codes. This is done by determining whether the number of contours in the image is above a threshold, also referred to as a first threshold herein. If the number of contours is below the first threshold, then the image can be determined not to comprise a candidate QR code and can therefore be discarded at step 265. This is because a QR code has a high number of contours inherent in its design, and so images with few contours will not comprise a QR code. In some examples, instead of determining whether the number of contours is above a threshold, it may be determined whether the number of contours is within a range (i.e., above a lower threshold and below an upper threshold). This may exclude some images, such as screenshots of spreadsheets on a computer, that have a very high number of contours, well above what a QR code will have.
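Purely as a non-limiting sketch, steps 217 and 219 might look like the following, assuming OpenCV 4 and Canny edge detection as the contour-finding approach; the threshold values are illustrative placeholders, not values from the disclosure:

```python
import cv2

def passes_contour_filter(image, first_threshold=40, upper_threshold=5000):
    """Cheap first pass: count contours and discard images whose count is
    implausibly low (or, optionally, implausibly high) for a QR code."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    edges = cv2.Canny(gray, 100, 200)
    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return first_threshold <= len(contours) <= upper_threshold
```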
If the selected image for processing comprises at least the first threshold number of contours, then the method proceeds to attempt to scan and decode any candidate QR code within the image. The scanning and decoding performed in method 200 generally comprises steps 241, 243, 245, 247, 251, 253, and 257, illustrated in
From step 219, if sufficient contours are present in the selected image, the method, therefore, proceeds to step 241, illustrated in
Once the QR scanner is selected at step 241, at step 243 the QR code formats to be searched for by the QR scanner may be restricted. This may be based on any existing information known to the cyber security appliance performing method 200, or more broadly within an encompassing cyber security system. For example, if the sender of the received data is known to send QR codes in a particular format, then this may be used to restrict the formats of QR codes searched for.
Prior knowledge can also be used at the subsequent step 245, whereby any further hints can be provided to the QR scanner to try to speed up the scanning and decoding of any QR code present in the selected image. For example, a likely location of a QR code may be input to the scanner to direct it to start looking for a QR code at this location.
After any restrictions and hints have been input to the QR scanner at steps 243 and 245, the method proceeds to scan the image for a QR code using the selected QR scanner at step 247. If the scanner finds a QR code, which is confirmed by it being able to decode a QR code, then it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261 indicating that a valid QR code is present in the input data. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. On the other hand, if the QR scanner is unable to find a QR code (i.e., unable to decode anything), then the method proceeds to step 251.
At step 251, if there are additional QR scanners yet to be employed on the selected image, then the method returns to step 241 and a different, untried QR scanner is selected. The method then proceeds through steps 243, 245, and 247 as described above, except with the newly selected QR scanner. If this QR scanner is able to decode a QR code at step 249, then it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. Otherwise, the method again returns to step 251. If further QR scanners remain to be tested, then the method can repeat steps 241 to 249 until they have all been tested. If all of the QR scanners have been tested, then the method proceeds to step 253.
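The scan loop of steps 241 to 251 could be sketched as below, assuming, purely for illustration, two freely available scanners (OpenCV's QRCodeDetector and pyzbar); the disclosure does not name particular scanners. The format restriction of step 243 is shown via pyzbar's symbols parameter, and the helper name is hypothetical:

```python
from typing import Optional

import cv2
from pyzbar.pyzbar import decode, ZBarSymbol

def try_decode(image) -> Optional[str]:
    """Try each available QR scanner in turn; return the decoded data
    from the first scanner that succeeds, or None if all fail."""
    # Scanner 1: OpenCV's built-in detector.
    data, _points, _raw = cv2.QRCodeDetector().detectAndDecode(image)
    if data:
        return data
    # Scanner 2: pyzbar, restricted to the QR symbology (cf. step 243).
    for result in decode(image, symbols=[ZBarSymbol.QRCODE]):
        if result.data:
            return result.data.decode("utf-8", errors="replace")
    return None  # no scanner decoded anything: proceed to more processing
```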
At step 253, it is determined whether there are additional processing steps that have not yet been performed on the selected image. In this discussion, step 241 was arrived at from step 219, and as can be seen in
At step 221, the image is inverted. One simple way by which a QR code can be obfuscated is by switching the black and white parts of the QR code, which can be countered by inverting the image. Once the image is inverted, the method 200 again proceeds to scan the image for a QR code, moving through steps 241, 243, 245, 247, 249, 251, 253, and 255 as discussed above. Should a QR code be decoded by a QR scanner, it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. Otherwise, the method then selects the next processing step again at step 255, and so proceeds to step 223.
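Inversion is a one-line transform in most image libraries; an illustrative sketch using OpenCV:

```python
import cv2

def invert(image):
    """Step 221: invert the image so that a QR code whose black and white
    modules were swapped is restored to the dark-on-light form that
    scanners expect."""
    return cv2.bitwise_not(image)
```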
At step 223, the image is resized. For example, the image may be compressed or the number of pixels reduced. The size of an image may affect the performance of QR scanners that are optimized to search for QR codes of a particular range of sizes. In particular, QR scanners may struggle to detect and decode very large QR codes. Once the image is resized, the method 200 again proceeds to scan the image for a QR code, moving through steps 241, 243, 245, 247, 249, 251, 253, and 255 as discussed above. Should a QR code be decoded by a QR scanner, it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. Otherwise, the method then selects the next processing step again at step 255, and so proceeds to step 225.
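A sketch of such resizing, assuming OpenCV; the scale factor is illustrative only:

```python
import cv2

def shrink(image, factor=0.5):
    """Step 223: downscale the image so that a very large QR code falls
    within the size range scanners handle best."""
    return cv2.resize(image, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_AREA)
```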
At step 225, one or more colors in the image are replaced. For example, most QR scanners are optimized to search for black QR codes, and so QR codes in a different color, such as grey or red, may not be detected as well by the QR scanners. Colors may be selected to be replaced based on existing knowledge or previous steps of method 200. For example, pixels in the image can be looked at to determine average colors for the selected image and expected colors for a QR code. Based on this, colors can be replaced to attempt to normalize the selected image to give a black QR code on a white background. In some examples, step 225 may involve replacing transparent regions of an image with a color (e.g., white or black in particular). This step may also comprise converting the image to greyscale or black and white. Once the image is recolored, the method 200 again proceeds to scan the image for a QR code, moving through steps 241, 243, 245, 247, 249, 251, 253, and 255 as discussed above. Should a QR code be decoded by a QR scanner, it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. Otherwise, the method then selects the next processing step again at step 255, and so proceeds to step 227.
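One possible realization of step 225, offered only as a sketch, is to flatten the image to greyscale and binarize it with Otsu's threshold, which pushes, e.g., a grey or red code on a tinted background toward black modules on white; the disclosure permits other recoloring strategies:

```python
import cv2

def normalize_colors(image):
    """Step 225 (one possibility): greyscale conversion followed by Otsu
    binarization, mapping non-black codes toward black-on-white."""
    if image.ndim == 3:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```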
At step 227, contours in the image are detected. This step may reuse any calculations or determinations performed at step 217. The detected contours at step 227 are used at step 229 to identify regions of the image likely to contain a candidate QR code. Generally, such regions will be parts of the image with a high density of contours. This information can be utilized, for example, to provide hints to the QR scanners at step 245. The contours are also used to detect edges in the image at step 231. Edges may be detected, for example, using Sobel gradients. A QR code has a high number of edges, and so, similarly to the determination performed at step 219, at step 233 the method can determine whether the selected image comprises a threshold number of edges, based on a second threshold. If the image comprises too few edges, below the second threshold, then the selected image is discarded at step 265 as not comprising a candidate QR code. In some cases, the second threshold may be replaced with a range, and step 233 can determine whether the selected image comprises a number of edges within an acceptable range. If the number of edges is outside of the threshold range (i.e., too high or too low), then again, the selected image is discarded at step 265 as not comprising a candidate QR code.
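The Sobel-based edge check of steps 231 and 233 might be sketched as follows; the gradient magnitude cutoff and the edge-count range are illustrative placeholders, not values from the disclosure:

```python
import cv2
import numpy as np

def passes_edge_filter(image, lower=2000, upper=200_000):
    """Steps 231/233: estimate edge content from Sobel gradients and keep
    only images whose edge-pixel count lies in a QR-plausible range."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    edge_pixels = int(np.count_nonzero(magnitude > 100))
    return lower <= edge_pixels <= upper
```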
Assuming that the selected image does have a number of edges meeting the requirements of step 233, the method then proceeds to step 235. At step 235, the image is sharpened. One way in which a QR code can be obfuscated is by blurring or smoothing the edges in the QR code. Sharpening the image at step 235 can help detect QR codes obfuscated in this manner. Once the image is sharpened, the method 200 again proceeds to scan the image for a QR code, moving through steps 241, 243, 245, 247, 249, 251, 253, and 255 as discussed above. Should a QR code be decoded by a QR scanner, it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. Otherwise, the method then selects the next processing step again at step 255, and so proceeds to step 237.
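Unsharp masking is one common way such sharpening can be implemented; a sketch with illustrative parameters:

```python
import cv2

def sharpen(image, amount=1.0, sigma=2.0):
    """Step 235: unsharp mask, i.e., subtract a blurred copy from a
    boosted original to re-emphasize edges blurred by an obfuscator."""
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    return cv2.addWeighted(image, 1.0 + amount, blurred, -amount, 0)
```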
At step 237, the image is brightened, which can help enable QR scanners to better detect QR codes that may have been obfuscated by darkening them outside of the range of brightnesses a QR scanner is optimized to operate at. Once the image is brightened, the method 200 again proceeds to scan the image for a QR code, moving through steps 241, 243, 245, 247, 249, 251, 253, and 255 as discussed above. Should a QR code be decoded by a QR scanner, it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. Otherwise, the method then selects the next processing step again at step 255, and so proceeds to step 239.
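Brightening can be as simple as a linear lift (pixel' = gain * pixel + offset, clipped to [0, 255]); the gain and offset here are illustrative:

```python
import cv2

def brighten(image, gain=1.2, offset=40):
    """Step 237: linear brightness/contrast lift to pull a deliberately
    darkened QR code back into the range scanners are tuned for."""
    return cv2.convertScaleAbs(image, alpha=gain, beta=offset)
```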
At step 239, the image is resized. Step 239 may be performed in much the same way as step 223 discussed above, in which the image is also resized. For example, the image may be compressed or the number of pixels can be reduced. The size of an image may affect the performance of QR scanners that are optimized to search for QR codes of a particular range of sizes. In particular, QR scanners may struggle to detect and decode very large QR codes. Resizing an image multiple times (e.g., at step 223 and at step 239) increases the likelihood of the QR scanners being able to detect and decode a QR code in the image. Once the image is resized, the method 200 again proceeds to scan the image for a QR code, moving through steps 241, 243, 245, 247, 249, and 251, as discussed above. Should a QR code be decoded by a QR scanner, it is determined whether the QR code is malicious. If so, the QR code is removed from the selected image and may be replaced with other data, such as another QR code, and then an alert is output at step 261. If the QR code is not malicious, then the method proceeds directly to step 261 at which an alert is output indicating that a valid QR code is present in the input data. However, as step 239 is the last processing step applied to the selected image in method 200, if no QR code is decoded at step 249 and no additional QR scanners are left to try at step 251, at step 253, method 200 proceeds to output an alert at step 263 that indicates that a candidate QR code is present in the received input data but that the candidate QR code cannot be decoded.
A candidate QR code is determined to be present in the selected image because the image has proceeded through the processing steps without being discarded as not having a QR code in it (e.g., as a result of step 219 or step 233). This may indicate that a QR code is present but that it has been obfuscated to such an extent that it cannot be decoded. Alternatively, it may mean that no QR code is present and the selected image simply shares certain properties with a QR code, such as a high number of contours and edges.
After the method 200 has finished processing the image selected at step 215, either by discarding the image at step 265 because it does not have a QR code or by outputting an alert at step 261 or step 263, the method then proceeds to step 267. At step 267, the method determines whether there are further images to process to determine whether a QR code is present or not. If there are additional images in the received data that are to be processed, then the method returns to step 215 to select the next image for processing, and then proceeds from step 215 as described above with the newly selected image. When the method has processed all of the images to determine the presence of any QR codes, and no further images exist to be processed by method 200, the method ends. In this manner, method 200 may process received data to determine the presence of any QR codes within the received data.
In some cases, method 200 may store the alerts output at step 261 and step 263 until each image in the received data that is to be processed has been processed, and generate a final output alert comprising the data from each alert regarding all of the candidate QR codes in the received data.
It will be understood that method 200 is an exemplary method and that the processing steps described above may be modified, reordered, repeated or removed. Generally speaking, methods of the present disclosure may have any combination of image processing steps, in any order and repeated any number of times, including detecting the number of contours (e.g., step 217) and determining whether an image comprises a threshold number of contours (e.g., step 219); inverting an image (e.g., step 221); resizing an image (e.g., step 223 and step 239); replacing colors in an image (e.g., step 225); detecting contours in an image (e.g., step 227) and identifying regions of an image likely to contain a candidate QR code (or other barcode, e.g., step 229); detecting edges in an image (e.g., step 231) and determining whether an image comprises a threshold number of edges (e.g., step 233); sharpening an image (e.g., step 235); and brightening an image (e.g., step 237), as well as other image processing steps such as increasing or decreasing a contrast of the image, reducing the brightness of an image, increasing a saturation of an image, rotating an image, and stretching an image in one or more dimensions.
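Pulling the above together, the overall detect, process, and rescan loop might be wired up as in the sketch below, which reuses the illustrative helpers from the earlier snippets (passes_contour_filter, try_decode, invert, shrink, normalize_colors, sharpen, brighten). The step ordering is one choice among the many the preceding paragraph permits:

```python
# Ordering is illustrative; the disclosure allows any combination,
# order, and repetition of processing steps.
PROCESSING_STEPS = [invert, shrink, normalize_colors, sharpen, brighten]

def detect_candidate(image):
    """Return ("valid", data) if a scanner decodes a QR code after some
    prefix of the processing steps; ("candidate", None) if the image
    still looks QR-like but nothing decoded; ("none", None) otherwise."""
    if not passes_contour_filter(image):
        return ("none", None)        # cf. steps 217/219: discard cheaply
    data = try_decode(image)         # scan before any processing
    if data:
        return ("valid", data)
    for step in PROCESSING_STEPS:    # scans interspersed between steps
        image = step(image)
        data = try_decode(image)
        if data:
            return ("valid", data)
    return ("candidate", None)       # QR-like yet undecodable: alert
```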
The particular steps utilized in any given implementation will largely depend on: (i) the processing power available to perform the method; (ii) the volume of data expected to be processed by the method; (iii) the latency requirements for the method; (iv) the required reliability of the method to pick up obfuscated barcodes; and (v) the barcode scanner or barcode scanners being used in the method. The more processing power available, the more processing steps (and the more computationally intensive processing steps) can be performed in a given period of time. The more data that is expected to need processing, the fewer processing steps can be used in order to process all of the data in a given period of time with a given amount of processing power. A stricter latency requirement (i.e., a requirement for lower latency, so that data is processed more quickly) will allow fewer processing steps for a given amount of computing power. A desire for high reliability in detecting candidate barcodes will require more processing steps, while a lower reliability requirement (i.e., a higher tolerance for missing barcodes in received data) will mean fewer processing steps need to be used. Finally, the particular processing steps chosen to give the best balance of speed and reliability will depend on the particular barcode scanner or barcode scanners being used.
As noted above, different barcode scanners are better at detecting and decoding barcodes obfuscated in different ways. Tailoring the processing steps to address the weaknesses of the barcode scanners will help to optimize the method. For example, if the barcode scanners being used perform poorly at detecting fuzzy or blurry barcodes, then a processing step of sharpening the image (e.g., step 235 in method 200) could be included or prioritized in the method. Similarly, if the barcode scanners being used perform poorly at detecting barcodes not in black and white, then a processing step of replacing the colors in an image (e.g., step 225 in method 200) may be included or prioritized in the method.
The ordering of the processing steps within the method may also be based on the same considerations of (i) the processing power available to perform the method; (ii) the volume of data expected to be processed by the method; (iii) the latency requirements for the method; (iv) the required reliability of the method to pick up obfuscated barcodes; and (v) the barcode scanner or barcode scanners being used in the method.
To increase the speed and reduce the processing requirements of the method, processing steps that are able to filter out images that do not contain a barcode with relatively low processing power requirements may be prioritized and placed early in the method, such as the steps of detecting the contours in an image and determining whether the image comprises a threshold number of contours (e.g., step 217 and step 219 in method 200). Similarly, processing steps that address the weaknesses of the barcode scanners can also be prioritized and placed earlier in the method. For example, if the barcode scanners being used perform poorly at detecting fuzzy or blurry barcodes, then a processing step of sharpening the image (e.g., step 235 in method 200) could be prioritized in the method. Similarly, if the barcode scanners being used perform poorly at detecting barcodes not in black and white, then a processing step of replacing the colors in an image (e.g., step 225 in method 200) may be prioritized in the method.
The initial processing steps to obtain images from the received input data may also vary from those illustrated in method 200, again based on the considerations of (i) the processing power available to perform the method; (ii) the volume of data expected to be processed by the method; (iii) the latency requirements for the method; (iv) the required reliability of the method to pick up obfuscated barcodes; and (v) the barcode scanner or barcode scanners being used in the method. In some cases, the method may omit steps 205, 207, and 209, used to extract images from GIFs or videos, if such formats are not a concern. In some cases, steps 211 and 213, relating to chunking oversized images, may be omitted. This may especially be the case if there are already existing protocols in place that restrict the size of objects in the received input data, such as email servers having a maximum attachment file size.
The steps for scanning and decoding a barcode in an image (e.g., steps 241, 243, 245, 249, 251, and 253) may also vary from those illustrated in method 200. In particular, restricting the format of barcode searched for (e.g., step 243 in method 200) and providing hints to a barcode scanner (e.g., step 245 in method 200) may be omitted. Furthermore, if only one barcode scanner is being used, then the steps of selecting the barcode scanner (e.g., step 241 in method 200) and determining if there is an additional barcode scanner to use (e.g., step 251 in method 200) can be omitted. Furthermore, the points in the method at which the scanning for a barcode takes place can vary. In the exemplary method 200, scanning takes place after most processing steps. However, this need not be the case. In some examples, scanning may only take place after all of the processing steps have been applied, which may help reduce the computing overhead of the method. In other examples, scanning may take place after some but not all of the processing steps, or scanning may take place after every processing step.
It will also be appreciated that parts of the method may be performed in parallel. For example, two or more different barcode scanners may each scan an image in parallel, which may use more computing resources but take less time. In some examples, the method may comprise parallel processing steps. For example, a selected image to be processed may be duplicated and processed, at the same time, by a first set of processing steps and by a second set of processing steps, resulting in two different images that may then be scanned in parallel for a barcode by the same or different barcode scanners. When two (or more) parallel processing streams are implemented, this may be to optimize the resulting image in different ways for scanning by different barcode scanners. That is, each parallel processing stream may be configured, through the selection of the particular processing steps, to increase the likelihood that a corresponding barcode scanner will be able to detect and decode any barcodes present in the image.
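As a brief, non-limiting sketch of such parallelism, assuming the hypothetical try_decode helper above, Python's standard concurrent.futures can fan scanning out across a pool (threads often suffice here because libraries like OpenCV and pyzbar do their work outside the Python interpreter lock):

```python
from concurrent.futures import ThreadPoolExecutor

def scan_in_parallel(images, max_workers=4):
    """Scan many images concurrently; returns {index: decoded_data} for
    every image in which some scanner decoded a QR code."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(try_decode, images))
    return {i: data for i, data in enumerate(results) if data}
```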
Methods 10 and 200, or other methods in accordance with the above discussion, may be implemented by a cyber security appliance. In particular, methods 10 and 200, or other methods in accordance with the present disclosure, may be implemented by a barcode reader module of a cyber security appliance. In some cases, this cyber security appliance may be a part of a broader cyber security system. The following text discusses how some of the other components in the cyber security system operate and, thus, how these components respond to the commands, requests, and communications of, and otherwise interact with, the methods described above.
The cyber security appliance 100 can host the cyber threat detection engine and other components. The cyber security appliance 100 includes a set of modules cooperating with one or more Artificial Intelligence models configured to perform a machine-learned task of detecting a cyber threat incident. The detection engine uses the set of modules cooperating with the one or more Artificial Intelligence models in the cyber security appliance 100 to prevent a cyber threat from compromising the nodes (e.g. devices, end users, etc.) and/or spreading through the nodes of the network being protected by the cyber security appliance 100.
The cyber security appliance 100 with the Artificial Intelligence (AI)-based cyber security system may protect a network/domain from a cyber threat (insider attack, malicious files, malicious emails, etc.). The cyber security appliance 100 can protect all of the devices on the network(s)/domain(s) being monitored. For example, the IT network domain module (e.g., first domain module 145) may communicate with network sensors to monitor network traffic going to and from the computing devices on the network as well as receive secure communications from software agents embedded in host computing devices/containers. Other domain modules, such as the barcode reader module 150 and a cloud domain module, operate similarly within their respective domains. The methods described above, including method 10 and method 200, may be implemented by the cyber security appliance within the barcode reader module 150. The steps below will detail the activities and functions of several of the components in the cyber security appliance 100.
The gather module 110 may be configured with one or more process identifier classifiers. Each process identifier classifier may be configured to identify and track one or more processes and/or devices in the network, under analysis, making communication connections. The data store 135 cooperates with the process identifier classifier to collect and maintain historical data of processes and their connections, which is updated over time as the network is in operation. In an example, the process identifier classifier can identify each process running on a given device along with its endpoint connections, which are stored in the data store 135. In addition, a feature classifier can examine the data being analyzed and sort its features into different categories.
The analyzer module 115 can cooperate with the AI model(s) 160 or other modules in the cyber security appliance 100 to confirm a presence of a cyber threat in a cyberattack against one or more domains in an enterprise's system (e.g., see system/enterprise network 791, 792, and 747 of
The cyber threat analyst module 120 allows two levels of investigation of a cyber threat that may suggest a potential impending cyberattack. In a first level of investigation, the analyzer module 115 and AI model(s) 160 can rapidly detect, and then the autonomous response engine 140 will autonomously respond to, overt and obvious cyberattacks (generally indicated by high scores of 80 or more; see
The cyber threat analyst module 120, in conjunction with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis, forms and investigates hypotheses on what are a possible set of cyber threats. The cyber threat analyst module 120 can also cooperate with the analyzer module 115 with its one or more data analysis processes to conduct an investigation on a possible set of cyber threats hypotheses that would include an anomaly of at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) any combination of both, identified through cooperation with, for example, the AI model(s) 160 trained with machine learning on the normal pattern of life of entities in the system. For example, as shown in
Returning back to
The gather module 110 cooperates with the cyber threat analyst module 120 and/or analyzer module 115 to collect data to support or to refute each of the one or more possible cyber threat hypotheses that could include this abnormal behavior or suspicious activity by cooperating with one or more of the cyber threat hypotheses mechanisms to form and investigate hypotheses on what are a possible set of cyber threats.
Thus, the cyber threat analyst module 120 is configured to cooperate with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis to form and investigate hypotheses on what are a possible set of cyber threats and then can cooperate with the analyzer module 115 with the one or more data analysis processes to confirm the results of the investigation on the possible set of cyber threats hypotheses that would include the at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) any combination of both, identified through cooperation with the AI model(s) 160 trained with machine learning on the normal pattern of life/normal behavior of entities in the domains under analysis.
Note, in the first level of threat detection, the gather module 110 and the analyzer module 115 cooperate to supply any data and/or metrics requested by the analyzer module 115, cooperating with the AI model(s) 160 trained on possible cyber threats, to support or rebut each possible type of cyber threat. Generally, the presence of an anomaly with a high threat/anomaly score and/or the occurrence of a specific event deemed a serious cyber threat in itself will cause the analyzer module 115 to send a signal and this information to the autonomous response engine 140. Again, the analyzer module 115 can cooperate with the AI model(s) 160 and/or other modules to rapidly detect and then cooperate with the autonomous response engine 140 to autonomously respond to overt and obvious cyberattacks (including ones found to be supported by the cyber threat analyst module 120).
As a starting point, the AI-based cyber security appliance 100 can use multiple modules, each capable of identifying abnormal behavior and/or suspicious activity against the AI model(s) 160 trained on a normal pattern of life for the entities in the network/domain under analysis, which is supplied to the analyzer module 115 and/or the cyber threat analyst module 120. The analyzer module 115 and/or the cyber threat analyst module 120 may also receive other inputs, such as AI model breaches, AI classifier breaches, etc., or a trigger to start an investigation from an external source.
Many other model breaches of the AI model(s) 160 trained with machine learning on the normal behavior of the system can send an input into the cyber threat analyst module 120 and/or the trigger module to trigger an investigation to start the formation of one or more hypotheses on what are a possible set of cyber threats that could include the initially identified abnormal behavior and/or suspicious activity.
The cyber threat analyst module 120, which forms and investigates hypotheses on what are the possible set of cyber threats, can use hypotheses mechanisms including any of 1) one or more of the AI model(s) 160 trained on how human cyber security analysts form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis that would include at least an anomaly of interest, 2) one or more scripts outlining how to conduct an investigation on a possible set of cyber threats hypotheses that would include at least the anomaly of interest, 3) one or more rules-based models on how to conduct an investigation on a possible set of cyber threats hypotheses and how to form a possible set of cyber threats hypotheses that would include at least the anomaly of interest, and 4) any combination of these. Again, the AI model(s) 160 trained on ‘how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis’ may use supervised machine learning on human-led cyber threat investigations and then steps, data, metrics, and metadata on how to support or to refute a plurality of the possible cyber threat hypotheses, and then the scripts and rules-based models will include the steps, data, metrics, and metadata on how to support or to refute the plurality of the possible cyber threat hypotheses. The cyber threat analyst module 120 and/or the analyzer module 115 can feed the cyber threat details to the assessment module 125 to generate a threat risk score that indicates a level of severity of the cyber threat.
Each Artificial Intelligence-based engine has an interface to communicate with another separate Artificial Intelligence-based engine, which is configured to understand a type of information and communication that this other separate Artificial Intelligence-based engine needs to make determinations on an ongoing cyberattack from that other Artificial Intelligence-based engine's perspective. The autonomous response engine 140 works with the assessment module 125 in the detection engine when the cyber threat is detected and autonomously takes one or more actions to mitigate the cyber threat.
The cyber threat detection engine can also have an anomaly alert system in a formatting module configured to report out anomalous incidents and events as well as the cyber threat detected to a display screen viewable by a human cyber-security professional. Each Artificial Intelligence-based engine has a rapid messaging system to communicate with a human cyber-security team to keep the human cyber-security team informed on actions autonomously taken and actions needing human approval to be taken.
Referring to
The example multiple Artificial Intelligence-based engines cooperating with each other can include i) the cyber threat detection engine, ii) an autonomous response engine 140, iii) a cyber-security restoration engine 190, and iv) a cyber-attack simulator 105. i) The cyber threat detection engine (consisting of the modules making up the cyber security appliance 100) can be configured to use Artificial Intelligence algorithms trained to perform a machine-learned task of detecting the cyber threat. (See for example
The multiple Artificial Intelligence-based engines have communication hooks in between them to exchange a significant amount of behavioral metrics and data between the multiple Artificial Intelligence-based engines to work together to provide an overall cyber threat response.
The intelligent orchestration component can be configured as a discrete intelligent orchestration component that exists on top of the multiple Artificial Intelligence-based engines to orchestrate the overall cyber threat response and an interaction between the multiple Artificial Intelligence-based engines, each configured to perform its own machine-learned task. Alternatively, the intelligent orchestration component can be configured as a distributed collaboration with a portion of the intelligent orchestration component implemented in each of the multiple Artificial Intelligence-based engines to orchestrate the overall cyber threat response and an interaction between the multiple Artificial Intelligence-based engines. In an embodiment, whether implemented as a distributed portion on each AI engine or a discrete AI engine itself, the intelligent orchestration component can use self-learning algorithms to learn how to best assist the orchestration of the interaction between itself and the other AI engines, which also implement self-learning algorithms themselves to perform their individual machine-learned tasks better.
The multiple Artificial Intelligence-based engines can be configured to cooperate to combine an understanding of normal operations of the nodes, an understanding of emerging cyber threats, an ability to contain those emerging cyber threats, and a restoration of the nodes of the system to heal the system, with an adaptive feedback between the multiple Artificial Intelligence-based engines in light of simulations of the cyberattack to predict what might occur in the nodes in the system based on the progression of the attack so far, the mitigation actions taken to contain those emerging cyber threats, and the remediation actions taken to heal the nodes using the simulated cyberattack information.
The multiple Artificial Intelligence-based engines each have an interface to communicate with the other separate Artificial Intelligence-based engines, configured to understand a type of information and communication that the other separate Artificial Intelligence-based engine needs to make determinations on an ongoing cyberattack from that other Artificial Intelligence-based engine's perspective. Each Artificial Intelligence-based engine has an instant messaging system to communicate with a human cyber-security team to keep the human cyber-security team informed on actions autonomously taken and actions needing human approval, as well as to generate reports for the human cyber-security team.
Each of these Artificial Intelligence-based engines has bi-directional communications, including the exchange of raw data, with each other as well as with software agents resident in physical and/or virtual devices making up the system being protected as well as bi-directional communications with sensors within the system being protected. Note, the system under protection can be, for example, an IT network, an OT network, a Cloud network, an email network, a source code database, an endpoint device, etc.
In an example, the autonomous response engine 140 uses its intelligence to cooperate with a cyber-attack simulator and its Artificial Intelligence-based simulations to choose and initiate an initial set of one or more mitigation actions indicated as a preferred targeted initial response to the detected cyber threat by autonomously initiating those mitigation actions to defend against the detected cyber threat, rather than a human taking an action. The autonomous response engine 140, rather than the human taking the action, is configured to autonomously cause the one or more mitigation actions to be taken to contain the cyber threat when a threat risk parameter from an assessment module in the detection engine is equal to or above an actionable threshold. Example mitigation actions can include 1) the autonomous response engine 140 monitoring and sending signals to a potentially compromised node to restrict communications of the potentially compromised node to merely normal recipients and types of communications according to the Artificial Intelligence model trained to model the normal pattern of life for each node in the protected system, 2) the autonomous response engine 140 trained on how to isolate a compromised node as well as to take mitigation acts with other nodes that have a direct nexus to the compromised node.
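As a minimal sketch of this actionable-threshold gating, assuming a threat risk parameter normalized to the range 0 to 1 and illustrative action names:

ACTIONABLE_THRESHOLD = 0.8  # assumed value; user-configurable in practice

def autonomous_mitigation(node_id, threat_risk):
    """Return the mitigation actions autonomously initiated for a node."""
    # Below the actionable threshold: continue monitoring, take no action.
    if threat_risk < ACTIONABLE_THRESHOLD:
        return []
    # At or above the threshold: restrict the node toward its normal pattern
    # of life and prepare isolation (action names are illustrative).
    return [
        "restrict_communications_to_normal_recipients:" + node_id,
        "isolate_node_and_mitigate_adjacent_nodes:" + node_id,
    ]

print(autonomous_mitigation("device-42", 0.91))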
In another example, the cyber-attack simulator 105 and its Artificial Intelligence-based simulations use intelligence to cooperate with the cyber-security restoration engine 190 to assist in choosing one or more remediation actions to perform on nodes affected by the cyberattack to restore them back to a trusted operational state while still mitigating the cyber threat during an ongoing cyberattack, based on effects determined through the simulation of possible remediation actions and their effects on the nodes making up the system being protected, and to preempt possible escalations of the cyberattack while restoring one or more nodes back to a trusted operational state.
In another example, the cyber security restoration engine 190 restores the one or more nodes in the protected system by cooperating with at least two or more of 1) an Artificial Intelligence model trained to model a normal pattern of life for each node in the protected system, 2) an Artificial Intelligence model trained on what are a possible set of cyber threats and their characteristics and symptoms to identify the cyber threat (e.g. malicious actor/device/file) that is causing a particular node to behave abnormally (e.g. malicious behavior) and fall outside of that node's normal pattern of life, and 3) the autonomous response engine 140.
The cyber-attack simulator 105 may be implemented via i) a simulator to model the system being protected and/or ii) a clone creator to spin up a virtual network and create a virtual clone of the system being protected configured to pentest one or more defenses provided by the cyber security appliance 100. Threat risk scores are based on both the level of confidence that the cyber threat is a viable threat and the severity of the cyber threat (e.g., attack type where a ransomware attack has greater severity than a phishing attack; degree of infection; computing devices likely to be targeted, etc.). The threat risk scores can be used to rank alerts that may be directed to enterprise or computing device administrators. This risk assessment and ranking is conducted to avoid frequent "false positive" alerts that diminish the degree of reliance/confidence on the cyber security appliance 100. The cyber-attack simulator 105 may include and cooperate with one or more AI models trained with machine learning on the contextual knowledge of the organization. These trained AI models may be configured to identify data points from the contextual knowledge of the organization and its entities, which may include, but is not limited to, language-based data, email/network connectivity and behavior pattern data, and/or historic knowledgebase data. The cyber-attack simulator 105 may use the trained AI models to cooperate with one or more AI classifier(s) by producing a list of specific organization-based classifiers for the AI classifier. The cyber-attack simulator 105 is further configured to calculate, based at least in part on the results of the one or more hypothetical simulations of a possible cyberattack path and/or of an actual ongoing cyberattack path from a cyber threat, a threat risk score for each node (e.g. each device, user account, etc.), the threat risk score being indicative of a possible severity of the compromise prior to an autonomous response action being taken in response to the actual cyberattack of the cyber incident. See for example
Again, similarly named components in each Artificial Intelligence-based engine can 1) perform similar functions and/or 2) have a communication link from that component located in one of the Artificial Intelligence-based engines, where information needed from that component is communicated to another Artificial Intelligence-based engine through the interface to that Artificial Intelligence-based engine.
Training of AI Pre-Deployment and then During Deployment
In step 1, an initial training of the Artificial Intelligence model trained on cyber threats can occur using unsupervised learning and/or supervised learning on characteristics and attributes of known potential cyber threats including malware, insider threats, and other kinds of cyber threats that can occur within that domain. Each Artificial Intelligence model (e.g. neural network, decision tree, etc.) can be programmed and configured with the background information to understand and handle particulars, including different types of data, protocols used, types of devices, user accounts, etc. of the system being protected. The Artificial Intelligence models pre-deployment can all be trained on the specific machine learning task that they will perform when put into deployment. For example, the AI model, such as AI model(s) 160 for example (hereinafter "AI model(s) 160"), trained on identifying a specific cyber threat learns at least both in the pre-deployment training i) the characteristics and attributes of known potential cyber threats as well as ii) a set of characteristics and attributes of each category of potential cyber threats and the weights assigned to how indicatively certain characteristics and attributes correlate to potential cyber threats of that category of threats. In this example, one of the AI models 160 trained on identifying a specific cyber threat can be trained with machine learning such as Linear Regression, Regression Trees, Non-Linear Regression, Bayesian Linear Regression, Deep learning, etc. to learn and understand the characteristics and attributes in that category of cyber threats. Later, when in deployment in a domain/network being protected by the cyber security appliance 100, the AI model trained on cyber threats can determine whether a potentially unknown threat has been detected via a number of techniques including an overlap of some of the same characteristics and attributes in that category of cyber threats. The AI model may use unsupervised learning when deployed to better learn newer and updated characteristics of cyberattacks.
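A minimal sketch of this kind of supervised pre-deployment training, assuming the scikit-learn library, a decision tree learner, and hypothetical binary threat-characteristic features:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature vectors for known threats, e.g., [uses_encryption,
# spawns_child_processes, beacons_periodically, modifies_registry].
X_train = np.array([
    [1, 1, 1, 0],  # known ransomware sample
    [0, 1, 0, 1],  # known insider-threat pattern
    [1, 0, 1, 0],  # known command-and-control malware
    [0, 0, 0, 0],  # benign baseline
])
y_train = ["ransomware", "insider", "c2_malware", "benign"]

clf = DecisionTreeClassifier().fit(X_train, y_train)

# In deployment, a potentially unknown threat is categorized via the overlap
# of its characteristics with each learned category of cyber threat.
unknown = np.array([[1, 1, 1, 1]])
print(clf.predict(unknown), clf.predict_proba(unknown))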
In an embodiment, one or more of the AI models 160 trained on a normal pattern of life of entities in the system may be self-learning AI models using unsupervised machine learning and machine learning algorithms to analyze patterns and 'learn' what is the 'normal behavior' of the network by analyzing data on the activity on, for example, the network level, at the device level, and at the employee level. The self-learning AI model using unsupervised machine learning understands the system under analysis' normal patterns of life in, for example, a week of being deployed on that system, and grows more bespoke with every passing minute. The AI unsupervised learning model learns patterns from the features in the day-to-day dataset and detects abnormal data which would not have fallen into the category (cluster) of normal behavior. The self-learning AI model using unsupervised machine learning can simply be placed into an observation mode for an initial week or two when first deployed on a network/domain in order to establish an initial normal behavior for entities in the network/domain under analysis.
Thus, a deployed Artificial Intelligence model 160 trained on a normal behavior of entities in the system can be configured to observe the nodes in the system being protected. Training on a normal behavior of entities in the system can occur while monitoring for the first week or two until enough data has been observed to establish a statistically reliable set of normal operations for each node (e.g., user account, device, etc.). Initial training of one or more Artificial Intelligence models 160 trained with machine learning on a normal behavior of the pattern of life of the entities in the network/domain can occur where each type of network and/or domain will generally have some common typical behavior, with each model trained specifically to understand the components/devices, protocols, activity level, etc. of that type of network/system/domain. Alternatively, pre-deployment machine learning training of one or more Artificial Intelligence models trained on a normal pattern of life of entities in the system can occur. What is the normal behavior of each entity within that system can be established either prior to the deployment and then adjusted during deployment, or alternatively the model can simply be placed into an observation mode for an initial week or two when first deployed on a network/domain in order to establish an initial normal behavior for entities in the network/domain under analysis. During the deployment of the model, what is considered normal behavior will change as each different entity's behavior changes, and this will be reflected through the use of unsupervised learning in the model such as various Bayesian techniques, clustering, etc. Again, the AI models 160 can be implemented with various mechanisms, such as neural networks, decision trees, etc. and combinations of these. Likewise, one or more supervised machine learning AI models 160 may be trained to create possible hypotheses and perform cyber threat investigations on agnostic examples of past historical incidents of detecting a multitude of possible types of cyber threat hypotheses previously analyzed by human cyber security analysts.
At its core, the self-learning AI models 160 that model the normal behavior (e.g. a normal pattern of life) of entities in the network mathematically characterize what constitutes 'normal' behavior, based on the analysis of a large number of different measures of a device's network behavior: packet traffic and network activity/processes including server access, data volumes, timings of events, credential use, connection type, volume, and directionality of, for example, uploads/downloads into the network, file type, packet intention, admin activity, resource and information requests, commands sent, etc.
In order to model what should be considered as normal for a device or cloud container, its behavior can be analyzed in the context of other similar entities on the network. The AI models (e.g., AI model(s) 160) can use unsupervised machine learning to algorithmically identify significant groupings, a task which is virtually impossible to do manually. To create a holistic image of the relationships within the network, the AI models and AI classifiers employ a number of different clustering methods, including matrix-based clustering, density-based clustering, and hierarchical clustering techniques. The resulting clusters can then be used, for example, to inform the modeling of the normative behaviors and/or similar groupings.
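A minimal sketch, assuming scikit-learn and synthetic per-device behavior features, of how density-based and hierarchical clustering could produce such peer groupings:

import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering

rng = np.random.default_rng(0)
# Hypothetical per-device features: [mean daily connections, mean data
# volume (GB), count of distinct endpoints] for 20 devices.
X = rng.normal(loc=[50.0, 5.0, 10.0], scale=[5.0, 1.0, 2.0], size=(20, 3))

# Density-based clustering finds significant groupings without a preset count.
density_labels = DBSCAN(eps=8.0, min_samples=3).fit_predict(X)
# Hierarchical clustering offers an alternative peer grouping of the devices.
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Each resulting cluster serves as the peer group against which a device's
# behavior is judged normal for entities similar to it.
print(density_labels, hier_labels)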
The AI models and AI classifiers can employ a large-scale computational approach to understand sparse structure in models of network connectivity based on applying L1-regularization techniques (the lasso method). This allows the artificial intelligence to discover true associations between different elements of a network which can be cast as efficiently solvable convex optimization problems and yield parsimonious models. Various mathematical approaches assist.
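A minimal sketch of the L1-regularization (lasso) approach named above, using scikit-learn's GraphicalLasso on synthetic activity measures; the planted association and penalty value are assumptions for illustration:

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
# Hypothetical observations: rows are time windows, columns are activity
# measures for six network elements (e.g., per-subnet traffic counts).
X = rng.normal(size=(200, 6))
X[:, 1] += 0.8 * X[:, 0]  # plant one true association between elements 0 and 1

model = GraphicalLasso(alpha=0.1).fit(X)
# Zero entries in the estimated precision matrix imply conditional
# independence; surviving off-diagonal entries form the parsimonious model
# of true associations between network elements.
print(np.round(model.precision_, 2))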
Next, one or more supervised machine learning AI models are trained on how to create possible hypotheses and perform cyber threat investigations on agnostic examples of past historical incidents of detecting a multitude of possible types of cyber threat hypotheses previously analyzed by human cyber threat analysts. AI models 160 trained on forming and investigating hypotheses on what are a possible set of cyber threats can be trained initially with supervised learning. Thus, these AI models 160 can be trained on how to form and investigate hypotheses on what are a possible set of cyber threats and the steps to take in supporting or refuting hypotheses. The AI models trained on forming and investigating hypotheses are updated with unsupervised machine learning algorithms when correctly supporting or refuting the hypotheses, including what additional collected data proved to be the most useful. More on the training of the AI models that are trained to create one or more possible hypotheses and perform cyber threat investigations will be discussed later.
Next, the various Artificial Intelligence models and AI classifiers combine the use of unsupervised and supervised machine learning to learn 'on the job'; they do not depend solely upon knowledge of previous cyberattacks. The Artificial Intelligence models and classifiers combining the use of unsupervised and supervised machine learning constantly revise assumptions about behavior, using probabilistic mathematics, and are always up to date on what current normal behavior is, not solely reliant on human input. The Artificial Intelligence models and classifiers combining the use of unsupervised and supervised machine learning on cyber security are capable of seeing hitherto undiscovered cyber events, from a variety of threat sources, which would otherwise have gone unnoticed. These cyber threats can include, for example: insider threats (malicious or accidental); zero-day attacks (previously unseen, novel exploits); latent vulnerabilities; machine-speed attacks (ransomware and other automated attacks that propagate and/or mutate very quickly); Cloud and SaaS-based attacks; and other silent and stealthy attacks such as advanced persistent threats and advanced spear-phishing.
The assessment module 125 and/or cyber threat analyst module 120 of
As discussed in more detail below, the analyzer module 115 and/or cyber threat analyst module 120 can cooperate with the one or more unsupervised AI (machine learning) model 160 trained on the normal pattern of life/normal behavior in order to perform anomaly detection against the actual normal pattern of life for that system to determine whether an anomaly (e.g., the identified abnormal behavior and/or suspicious activity) is malicious or benign. In the operation of the cyber security appliance 100, the emerging cyber threat can be previously unknown, but the emerging threat landscape data 170 representative of the emerging cyber threat shares enough (or does not share enough) in common with the traits from the AI models 160 trained on cyber threats to now be identified as malicious or benign. Note, if later confirmed as malicious, then the AI models 160 trained with machine learning on possible cyber threats can update their training. Likewise, as the cyber security appliance 100 continues to operate, the one or more AI models trained on a normal pattern of life for each of the entities in the system can be updated and trained with unsupervised machine learning algorithms. The analyzer module 115 can use any number of data analysis processes (discussed in more detail below and including the agent analyzer data analysis process here) to help obtain system data points so that this data can be fed and compared to the one or more AI models trained on a normal pattern of life as well as the one or more machine learning models trained on potential cyber threats, and to create and store data points with the connection fingerprints.
All of the above AI models 160 can continually learn and train with unsupervised machine learning algorithms on an ongoing basis when deployed in the system that the cyber security appliance 100 is protecting. Thus, they learn and train on what is normal behavior for each user, each device, and the system overall, lowering the threshold of what constitutes an anomaly.
Anomaly detection can discover unusual data points in a dataset. Anomaly can be a synonym for the word 'outlier'. Anomaly detection (or outlier detection) is the identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data. Anomalous activities can be linked to some kind of problem or rare event. Since there are countless ways to induce a particular cyberattack, it is very difficult to have information about all of these attacks beforehand in a dataset. But, since the majority of the user activity and device activity in the system under analysis is normal, the system over time captures almost all of the ways which indicate normal behavior. From the inclusion-exclusion principle, if an activity under scrutiny does not give indications of normal activity, the self-learning AI model using unsupervised machine learning can predict with high confidence that the given activity is anomalous/unusual. The AI unsupervised learning model learns patterns from the features in the day-to-day dataset and detects abnormal data which would not have fallen into the category (cluster) of normal behavior. The goal of the anomaly detection algorithm, through the data fed to it, is to learn the patterns of normal activity so that when an anomalous activity occurs, the modules can flag the anomalies through the inclusion-exclusion principle. The cyber threat module can perform its two-level analysis on anomalous behavior and determine correlations.
In an example, 95% of data in a normal distribution lies within two standard deviations of the mean. Since the likelihood of anomalies in general is very low, the modules cooperating with the AI model of normal behavior can say with high confidence that data points spread near the mean value are non-anomalous. And since the probability distribution values between the mean and two standard deviations are large enough, the modules cooperating with the AI model of normal behavior can set a value in this example range as a threshold (a parameter that can be tuned over time through the self-learning), where feature values with probability larger than this threshold indicate that the given feature's values are non-anomalous; otherwise they are anomalous. Note, this anomaly detection can determine that a data point is anomalous/non-anomalous on the basis of a particular feature. In reality, the cyber security appliance 100 should not flag a data point as an anomaly based on a single feature. Only when a combination of all the probability values for all features for a given data point is calculated can the modules cooperating with the AI model of normal behavior say with high confidence whether a data point is an anomaly or not.
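A worked sketch of this per-feature probability thresholding, assuming independent Gaussian feature models fitted to synthetic normal data and an illustrative threshold value:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Fit per-feature Gaussians to observed normal behavior (synthetic here),
# e.g., [daily data volume (MB), logins per hour].
normal_data = rng.normal(loc=[100.0, 5.0], scale=[10.0, 1.0], size=(500, 2))
mu, sigma = normal_data.mean(axis=0), normal_data.std(axis=0)

def joint_probability(x):
    """Combined probability of one data point across all features."""
    return float(np.prod(norm.pdf(x, loc=mu, scale=sigma)))

EPSILON = 1e-6  # assumed threshold, tunable over time through self-learning

for point in (np.array([102.0, 5.2]), np.array([160.0, 9.0])):
    # Flag on the combined probability over all features, never one feature.
    verdict = "anomalous" if joint_probability(point) < EPSILON else "non-anomalous"
    print(point, verdict)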
Again, the AI models trained on a normal pattern of life of entities in a network (e.g., domain) under analysis may perform the cyber threat detection through a probabilistic change in a normal behavior through the application of, for example, an unsupervised Bayesian mathematical model to detect the behavioral change in computers and computer networks. The Bayesian probabilistic approach can determine periodicity in multiple time series data and identify changes across single and multiple time series data for the purpose of anomalous behavior detection. Please reference U.S. Pat. No. 10,701,093 granted Jun. 30, 2020, titled "Anomaly alert system for cyber threat detection" for an example Bayesian probabilistic approach, which is incorporated by reference in its entirety. In addition, please reference US patent publication number US2021273958A1, filed Feb. 26, 2021, titled "Multi-stage anomaly detection for process chains in multi-host environments" for another example anomalous behavior detector using a recurrent neural network and a bidirectional long short-term memory (LSTM), which is incorporated by reference in its entirety. In addition, please reference US patent publication number US2020244673A1, filed Apr. 23, 2019, titled "Multivariate network structure anomaly detector," which is incorporated by reference in its entirety, for another example anomalous behavior detector with a Multivariate Network and Artificial Intelligence classifiers.
Next, as discussed further below, during pre-deployment the cyber threat analyst module 120 and the analyzer module 115 can use data analysis processes and cooperate with AI model(s) 160 trained on forming and investigating hypotheses on what are a possible set of cyber threats. In addition, another set of AI models can be trained on how to form and investigate hypotheses on the steps to take in supporting or refuting hypotheses. The AI models trained on forming and investigating hypotheses are updated with unsupervised machine learning algorithms when correctly supporting or refuting the hypotheses, including what additional collected data proved to be the most useful.
Referring back to
The gather module 110 may have a process identifier classifier. The process identifier classifier can identify and track each process and device in the network, under analysis, making communication connections. The data store 135 cooperates with the process identifier classifier to collect and maintain historical data of processes and their connections, which is updated over time as the network is in operation. In an example, the process identifier classifier can identify each process running on a given device along with its endpoint connections, which are stored in the data store. Similarly, data from any of the domains under analysis may be collected and compared. Examples of domains/networks under analysis being protected can include any of i) an Information Technology network, ii) an Operational Technology network, iii) a Cloud service, iv) a SaaS service, v) an endpoint device, vi) an email domain, and vii) any combinations of these.
A domain module is constructed and coded to interact with and understand a specific domain. For instance, the IT network domain module 145 may receive information from and send information to, in this example, IT network-based sensors (i.e., probes, taps, etc.). The IT network domain module 145 also has algorithms and components configured to understand, in this example, IT network parameters, IT network protocols, IT network activity, and other IT network characteristics of the network under analysis. The barcode reader module 150 can receive information from and send information to, in this example, email-based sensors (i.e., probes, taps, etc.) to receive input data to determine the presence of barcodes in the email data. The barcode reader module 150 also has algorithms and components configured to understand, in this example, email parameters, email protocols and formats, email activity, and other email characteristics of the network under analysis. Additional domain modules, such as a cloud domain module can also collect domain data from another respective domain. As noted above, the methods described herein may be implemented on the domain module corresponding to the domain the method is to work in. If the method is intended to detect barcodes in email data, the method may be implemented in the barcode reader module 150. In another example, the method may detect barcodes in instant messages within an organization or in files shared within an organization, and so the method may be implemented within the network domain module 145. In some cases, multiple domain modules may run methods as disclosed herein on their respective domains. Different domain modules may run the same method as each other, or the methods may be adapted in accordance with the teachings above to the different domains.
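For illustration, a minimal sketch of a detect-and-attempt-to-decode step for a QR code in an image obtained from email input data, using OpenCV's QR detector; the file name and alert wording are assumptions:

import cv2

# Hypothetical image extracted from the received email input data.
img = cv2.imread("email_attachment.png")
if img is None:
    raise SystemExit("image could not be loaded")

detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)

if points is None:
    print("no candidate barcode detected in image")
elif data:
    # Decodable candidate: alert that a valid barcode is present.
    print("alert: valid barcode present, payload=" + repr(data))
else:
    # Candidate located but not decodable: possibly an obfuscated barcode.
    print("alert: candidate barcode present but cannot be decoded")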
The coordinator module 155 is configured to work with various machine learning algorithms and relational mechanisms to i) assess, ii) annotate, and/or iii) position in a vector diagram, a directed graph, a relational database, etc., activity including events occurring, for example, in the first domain compared to activity including events occurring in the second domain. The domain modules can cooperate to exchange and store their information with the data store, such as output alerts.
As discussed, the process identifier classifier in the gather module 110 can cooperate with additional classifiers in each of the domain modules 145/150 to assist in tracking individual processes and associating them with entities in a domain under analysis as well as individual processes and how they relate to each other. The process identifier classifier can cooperate with other trained AI classifiers in the modules to supply useful metadata along with helping to make logical nexuses. A feedback loop of cooperation exists between the gather module 110, the analyzer module 115, the domain specific modules such as the IT network module and/or email module, the AI model(s) 160 trained on different aspects of this process, and the cyber threat analyst module 120 to gather information to determine whether a cyber threat is potentially attacking the networks/domains under analysis.
In the following examples the analyzer module 115 and/or cyber threat analyst module 120 can use multiple factors in the determination of whether a process, event, object, entity, etc. is likely malicious.
In an example, the analyzer module 115 and/or cyber threat analyst module 120 can cooperate with one or more of the AI model(s) 160 trained on certain cyber threats to detect whether the anomalous activity detected, such as suspicious email messages, exhibits traits that may suggest a malicious intent, such as phishing links, scam language, or being sent from suspicious domains. The analyzer module 115 and/or cyber threat analyst module 120 can also cooperate with one or more of the AI model(s) 160 trained on potential IT based cyber threats to detect whether the anomalous activity detected, such as suspicious IT links, URLs, domains, user activity, etc., may suggest a malicious intent as indicated by the AI models trained on potential IT based cyber threats.
In the above example, the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the one or more AI models 160 trained with machine learning on the normal pattern of life for entities in an email domain under analysis to detect, in this example, anomalous emails which are detected as outside of the usual pattern of life for each entity, such as a user, email server, etc., of the email network/domain. Likewise, the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the one or more AI models trained with machine learning on the normal pattern of life for entities in a second domain under analysis (in this example, an IT network) to detect, in this example, anomalous network activity by user and/or devices in the network, which is detected as outside of the usual pattern of life (e.g. abnormal) for each entity, such as a user or a device, of the second domain's network under analysis.
Thus, the analyzer module 115 and/or the cyber threat analyst module 120 can be configured with one or more data analysis processes to cooperate with the one or more of the AI model(s) 160 trained with machine learning on the normal pattern of life in the system, to identify an anomaly of at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) the combination of both, from one or more entities in the system. Note, other sources, such as other model breaches, can also identify at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) the combination of both to trigger the investigation.
Accordingly, during this cyber threat determination process, the analyzer module 115 and/or the cyber threat analyst module 120 can also use AI classifiers that look at the features and determine a potential maliciousness based on commonality or overlap with known characteristics of malicious processes/entities. Many factors, including anomalies that include unusual and suspicious behavior, and other indicators of processes and events, are examined by the one or more AI models 160 trained on potential cyber threats including some supporting AI classifiers looking at specific features for their malicious nature in order to make a determination of whether an individual factor and/or whether a chain of anomalies is determined to be likely malicious.
Initially, in this example of activity in an IT network analysis, the rare JA3 hash and/or rare user agent connections for this network coming from a new or unusual process are factored in, just as suspicious wireless signals are considered in a first wireless domain. These are quickly determined by referencing the one or more of the AI model(s) 160 trained with machine learning on the pattern of life of each device and its associated processes in the system. Next, the analyzer module 115 and/or the cyber threat analyst module 120 can have an external input to ingest threat intelligence from other devices in the network cooperating with the cyber security appliance 100. Next, the analyzer module 115 and/or the cyber threat analyst module 120 can look for other anomalies, such as model breaches, while the AI models trained on potential cyber threats can assist in examining and factoring other anomalies that have occurred over a given timeframe to see if a correlation exists between a series of two or more anomalies occurring within that time frame.
The analyzer module 115 and/or the cyber threat analyst module 120 can combine these Indicators of Compromise (e.g., unusual network JA3, unusual device JA3, . . . ) with many other weak indicators to detect the earliest signs of an emerging threat, including previously unknown threats, without using strict blacklists or hard-coded thresholds. However, the AI classifiers can also routinely look at blacklists, etc. to identify the maliciousness of the features looked at. A deeper analysis may assist in confirming an analysis to determine that indeed a cyber threat has been detected. The analyzer module 115 can also look at factors such as how rare the endpoint connection is, how old the endpoint is, where geographically the endpoint is located, and whether a security certificate associated with a communication is verified only by an endpoint device or by an external 3rd party, just to name a few additional factors. The analyzer module 115 (and similarly the cyber threat analyst module 120) can then assign the weighting given to these factors in the machine learning, which can be supervised based on how strongly that characteristic has been found to match up to actual malicious cyber threats learned in the training.
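A minimal sketch of combining such weak indicators into a single score without strict blacklists or hard-coded per-indicator verdicts; the indicator names and weights below are illustrative assumptions rather than learned values:

# Hypothetical weak indicators and weights; a supervised step would adjust
# the weights by how strongly each factor matched actual malicious threats.
INDICATOR_WEIGHTS = {
    "rare_ja3_hash": 0.30,
    "rare_user_agent": 0.20,
    "young_endpoint_domain": 0.15,
    "unusual_geolocation": 0.15,
    "certificate_not_third_party_verified": 0.20,
}

def combined_indicator_score(observed):
    """Combine weak indicators (each scored 0..1) into one threat score."""
    return sum(
        INDICATOR_WEIGHTS[name] * strength
        for name, strength in observed.items()
        if name in INDICATOR_WEIGHTS
    )

# Several individually weak signals can together form an early warning.
print(combined_indicator_score({
    "rare_ja3_hash": 0.9,
    "young_endpoint_domain": 0.8,
    "unusual_geolocation": 0.6,
}))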
In another example, an AI classifier supporting the AI models 160 is trained to find potentially malicious indicators. The agent analyzer data analysis process in the analyzer module 115 and/or cyber threat analyst module 120 may cooperate with the process identifier classifier to identify the additional factors of i) whether one or more processes are running independently of other processes, ii) whether the one or more processes running independently are recent to this network, and iii) whether the one or more processes running independently connect to an endpoint that is a rare connection for this network, which are referenced and compared to one or more AI models 160 trained with machine learning on the normal behavior of the pattern of life of the system.
The analyzer module 115 and/or the cyber threat analyst module 120 may use the agent analyzer data analysis process that detects a potentially malicious agent previously unknown to the system to start an investigation on one or more possible cyber threat hypotheses. The determination and output of this step is the set of possible cyber threats that can include or be indicated by the identified abnormal behavior and/or identified suspicious activity found by the agent analyzer data analysis process.
In an example, the cyber threat analyst module 120 can use the agent analyzer data analysis process and the AI model(s) trained on forming and investigating hypotheses on what are a possible set of cyber threats to use the machine learning and/or set scripts to aid in forming one or more hypotheses and to support or refute each hypothesis. The cyber threat analyst module 120 can cooperate with the AI models trained on forming and investigating hypotheses to form an initial set of possible hypotheses, which needs to be intelligently filtered down. The cyber threat analyst module 120 can be configured to use the one or more supervised machine learning models trained on i) agnostic examples of a past history of detection of a multitude of possible types of cyber threat hypotheses previously analyzed by a human who was a cyber security professional, ii) a behavior and input of how a plurality of human cyber security analysts make a decision and analyze a risk level and a probability of a potential cyber threat, iii) steps to take to conduct an investigation starting with an anomaly, via learning how expert humans tackle investigations into specific real and synthesized cyber threats and then the steps taken by the human cyber security professional to narrow down and identify a potential cyber threat, and iv) what type of data and metrics were helpful to further support or refute each of the types of cyber threats, in order to determine a likelihood of whether the abnormal behavior and/or suspicious activity is either i) malicious or ii) benign.
The cyber threat analyst module 120 using AI models, scripts and/or rules-based modules is configured to conduct initial investigations regarding the anomaly of interest, collect additional information to form a chain of potentially related/linked information under analysis, then form one or more hypotheses that could explain this chain of potentially related/linked information under analysis, and then gather additional information in order to refute or support each of the one or more hypotheses.
In an example, a behavioral pattern analysis of what are the unusual behaviors of the network/system/device/user under analysis by the machine learning models may be as follows. The coordinator module can tie the alerts, activities, and events from, in this example, the email domain to the alerts, activities, and events from the IT network domain.
The autonomous response engine 140 of the cyber security system is configured to take one or more autonomous mitigation actions to mitigate the cyber threat during the cyberattack by the cyber threat. The autonomous response engine 140 is configured to reference an Artificial Intelligence model trained to track a normal pattern of life for each node of the protected system in order to perform an autonomous act of restricting a potentially compromised node, having i) an actual indication of compromise and/or ii) being merely adjacent to a known compromised node, to merely take actions that are within that node's normal pattern of life to mitigate the cyber threat. Similarly named components in the cyber security restoration engine 190 can operate and function similarly to those described for the detection engine.
In the next step, the analyzer module 115 and/or cyber threat analyst module 120 generates one or more supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses. The analyzer module generates the supporting data and details of why each individual hypothesis is supported or not. The analyzer module can also generate one or more possible cyber threat hypotheses and the supporting data and details of why they were refuted.
In general, the analyzer module 115 cooperates with the following three sources. The analyzer module 115 cooperates with the AI models trained on cyber threats to determine whether an anomaly such as the abnormal behavior and/or suspicious activity is either 1) malicious or 2) benign when the potential cyber threat under analysis is previously unknown to the cyber security appliance 100. The analyzer module cooperates with the AI models trained on a normal behavior of entities in the network under analysis. The analyzer module cooperates with various AI-trained classifiers. With all of these sources, when they input information that indicates a potential cyber threat that is i) severe enough to cause real harm to the network under analysis and/or ii) a close match to known cyber threats, then the analyzer module can make a final determination to confirm that a cyber threat likely exists and send that cyber threat to the assessment module to assess the threat score associated with that cyber threat. Certain model breaches will always trigger a potential cyber threat, which the analyzer will compare against and confirm as a cyber threat.
In the next step, an assessment module with the AI classifiers is configured to cooperate with the analyzer module. The analyzer module supplies the identity of the supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses to the assessment module. The assessment module with the AI classifiers cooperates with the AI model trained on possible cyber threats to make a determination on whether a cyber threat exists and what level of severity is associated with that cyber threat. The assessment module with the AI classifiers cooperates with the one or more AI models trained on possible cyber threats in order to assign a numerical assessment of a given cyber threat hypothesis that was found likely to be supported by the analyzer module with the one or more data analysis processes, via the abnormal behavior, the suspicious activity, or the collection of system data points. The assessment module with the AI classifiers' output can be a score (ranked number system, probability, etc.) that a given identified process is likely a malicious process.
The assessment module with the AI classifiers can be configured to assign a numerical assessment, such as a probability, of a given cyber threat hypothesis that is supported and a threat level posed by that cyber threat hypothesis which was found likely to be supported by the analyzer module, which includes the abnormal behavior or suspicious activity as well as one or more of the collection of system data points, with the one or more AI models trained on possible cyber threats.
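A minimal sketch of this numerical assessment, assuming an illustrative severity weight per threat category and a hypothesis support probability supplied by the analyzer module:

# Illustrative severity weights per threat category (assumed values); e.g.,
# a ransomware attack is treated as more severe than a phishing attack.
SEVERITY = {"ransomware": 1.0, "data_exfiltration": 0.8, "phishing": 0.5}

def threat_risk_score(category, support_probability):
    """Numerical assessment of a supported cyber threat hypothesis."""
    return SEVERITY.get(category, 0.5) * support_probability

supported_hypotheses = [("phishing", 0.9), ("ransomware", 0.6)]
# Rank the supported hypotheses by score for the formatted incident report.
ranked = sorted(supported_hypotheses,
                key=lambda h: threat_risk_score(*h), reverse=True)
print([(h, round(threat_risk_score(*h), 2)) for h in ranked])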
The cyber threat analyst module 120 in the AI-based cyber security appliance 100 component provides an advantage over competitors' products as it reduces the time taken for cybersecurity investigations, provides an alternative to manpower for small organizations and improves detection (and remediation) capabilities within the cyber security platform.
The AI-based cyber threat analyst module 120 performs its own computation of threat and identifies interesting network events with one or more processors. These methods of detection and identification of threats all add to the above capabilities that make the AI-based cyber threat analyst module a desirable part of the cyber security appliance 100. The AI-based cyber threat analyst module 120 offers a method of prioritizing that does not simply treat the highest-scoring alert of an event evaluated by itself as the most severe, and prevents more complex attacks from being missed because their composite parts/individual threats only produced low-level alerts.
The AI classifiers can be part of the assessment component, which scores the outputs of the analyzer module. Again, as for the other AI classifiers discussed, the AI classifier can be coded to take in multiple pieces of information about an entity, object, and/or thing and based on its training and then output a prediction about the entity, object, or thing. Given one or more inputs, the AI classifier model will try to predict the value of one or more outcomes. The AI classifiers cooperate with the range of data analysis processes that produce features for the AI classifiers. The various techniques cooperating here allow anomaly detection and assessment of a cyber threat level posed by a given anomaly; but more importantly, an overall cyber threat level posed by a series/chain of correlated anomalies under analysis.
In the next step, the formatting module can generate an output such as a printed or electronic report with the relevant data. The formatting module can cooperate with the analyzer module, the cyber threat analyst module, and the assessment module depending on what the user wants to be reported.
The formatting module is configured to format, present a rank for, and output one or more detected cyber threats from the analyzer module or from the assessment module into a formalized report, from one or more report templates populated with the data for that incident. Many different types of formalized report templates exist to be populated with data and can be outputted in an easily understandable format for a human user's consumption.
The formalized report on the template is outputted for a human user's consumption in a medium of any of 1) a printable report, 2) presented digitally on a user interface, 3) in a machine-readable format for further use in machine-learning reinforcement and refinement, or 4) any combination of the three. The formatting module is further configured to generate a textual write-up of an incident report in the formalized report for a wide range of breaches of normal behavior, used by the AI models trained with machine learning on the normal behavior of the system, based on analyzing previous reports with one or more models trained with machine learning on assessing and populating relevant data into the incident report corresponding to each possible cyber threat. The formatting module can generate a threat incident report in the formalized report from a multitude of dynamic human-supplied and/or machine-created templates corresponding to different types of cyber threats, each template corresponding to different types of cyber threats that vary in format, style, and standard fields in the multitude of templates. The formatting module can populate a given template with relevant data, graphs, or other information as appropriate in various specified fields, along with a ranking of a likelihood of whether that hypothesis cyber threat is supported and its threat severity level for each of the supported cyber threat hypotheses, and then output the formatted threat incident report with the ranking of each supported cyber threat hypothesis, which is presented digitally on the user interface and/or printed as the printable report.
In the next step, the assessment module with the AI classifiers, once armed with the knowledge that malicious activity is likely occurring/is associated with a given process from the analyzer module, then cooperates with the autonomous response engine 140 to take an autonomous action such as i) deny access in or out of the device or the network and/or ii) shutdown activities involving a detected malicious agent.
The autonomous response engine 140, rather than a human taking an action, can be configured to cause one or more rapid autonomous mitigation actions to be taken to counter the cyber threat. A user interface for the response engine can program the autonomous response engine 140 i) to merely make a suggested response to take to counter the cyber threat that will be presented on a display screen and/or sent by a notice to an administrator for explicit authorization when the cyber threat is detected, or ii) to autonomously take a response to counter the cyber threat without a need for a human to approve the response when the cyber threat is detected. The autonomous response engine 140 will then send a notice of the autonomous response as well as display the autonomous response taken on the display screen. Example autonomous responses may include cutting off connections, shutting down devices, changing the privileges of users, deleting and removing malicious links in emails, slowing down a transfer rate, and other autonomous actions against the devices and/or users. The autonomous response engine 140 uses one or more Artificial Intelligence models that are configured to intelligently work with other third-party defense systems in that customer's network against threats. The autonomous response engine 140 can send its own protocol commands to devices and/or take actions on its own. In addition, the autonomous response engine 140 uses the one or more Artificial Intelligence models to orchestrate with other third-party defense systems to create a unified defense response against a detected threat within or external to that customer's network. The autonomous response engine 140 can be an autonomous self-learning response coordinator that is trained specifically to control and reconfigure the actions of traditional legacy computer defenses (e.g., firewalls, switches, proxy servers, etc.) to contain threats propagated by, or enabled by, networks and the internet. The cyber threat module can cooperate with the autonomous response engine 140 to cause one or more autonomous actions to be taken in response to counter the cyber threat, which improves computing devices in the system by limiting an impact of the cyber threat from consuming unauthorized CPU cycles, memory space, and power consumption in the computing devices via responding to the cyber threat without waiting for some human intervention.
The trigger module, analyzer module, assessment module, and formatting module cooperate to improve the analysis and formalized report generation with less repetition, consuming fewer CPU cycles and operating with greater efficiency than humans repetitively going through these steps and re-duplicating steps to filter and rank the one or more supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses.
The autonomous response engine 140 is configured to use one or more Application Programming Interfaces to translate desired mitigation actions for nodes (devices, user accounts, etc.) into the specific language and syntax utilized by that device, user account, etc., potentially from multiple different vendors being protected, in order to send the commands and other information to cause the desired mitigation actions to change, for example, a behavior of a detected threat of a user and/or a device acting abnormally to the normal pattern of life. The selected mitigation actions on the selected nodes minimize an impact on other parts of the system being protected (e.g., devices and users) that are i) currently active in the system being protected and ii) not in breach of being outside the normal behavior benchmark. The autonomous response engine 140 can have a discovery module to i) discover the capabilities of each node/device being protected and the other cyber security devices (e.g., firewalls) in the system being protected, ii) discover the mitigation actions they can take to counter and/or contain the detected threat to the system being protected, as well as iii) discover the communications needed to initiate those mitigation actions.
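A minimal sketch of this Application Programming Interface translation layer; the vendor names and command templates are hypothetical, not real product syntax:

# Hypothetical per-vendor command templates for the same desired action.
VENDOR_SYNTAX = {
    "firewall_vendor_a": {"block_ip": "deny ip {ip} any"},
    "firewall_vendor_b": {"block_ip": "set security block source {ip}"},
}

def translate_mitigation(action, vendor, **params):
    """Translate a generic mitigation action into one vendor's syntax."""
    template = VENDOR_SYNTAX[vendor][action]
    return template.format(**params)

# The same desired mitigation action renders differently per vendor device.
for vendor in VENDOR_SYNTAX:
    print(vendor, "->", translate_mitigation("block_ip", vendor, ip="203.0.113.7"))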
For example, the autonomous response engine 140 cooperates and coordinates with an example set of network capabilities of various network devices. The network devices may have various capabilities such as identity management including setting user permissions, network security controls, firewalls denying or granting access to various ports, encryption capabilities, centralized logging, antivirus anti-malware software quarantine and immunization, patch management, etc., and can also freeze any similar network activity, for example, that is triggering the harmful activity on the system being protected.
Accordingly, the autonomous response engine 140 will take an autonomous mitigation action to, for example, shutdown the device or user account, block login failures, perform file modifications, block network connections, restrict the transmission of certain types of data, restrict a data transmission rate, remove or restrict user permissions, etc. The autonomous response engine 140 for an email system could initiate example mitigation actions to either remedy or neutralize the tracking link, when determined to be the suspicious covert tracking link, while not stopping every email entering the email domain with a tracking link, or hold the email communication entirely if the covert tracking link is highly suspicious, and also freeze any similar, for example, email activity triggering the harmful activity on the system being protected.
The autonomous response engine 140 has a default set of autonomous mitigation actions shown on its user interface that it knows how to perform when the threat posed by the different types of cyber threats is equal to or above a user-configurable threshold. The autonomous response engine 140 is also configurable in its user interface to allow the user to augment and change what type of automatic mitigation actions, if any, the autonomous response engine 140 may take when the different types of cyber threats are equal to or above the configurable level of threat posed by a cyber threat.
Referring to
The cyber-attack simulator 105 with Artificial Intelligence-based simulations is configured to integrate with the cyber security appliance 100 and cooperate with components within the cyber security appliance 100 installed and protecting the network from cyber threats by making use of outputs, data collected, and functionality from two or more of a data store, other modules, and one or more AI models already existing in the cyber security appliance 100.
The cyber-attack simulator 105 may include a cyber threat generator module to generate many different types of cyber threats with the past historical attack patterns to attack the simulated system to be generated by the simulated attack module 750 that will digitally/virtually replicate the system being protected, such as a phishing email generator configured to generate one or more automated phishing emails to pentest the email defenses and/or the network defenses provided by the cyber security appliance 100. For example, the system being protected can be an email system and then the phishing email generator may be configured to cooperate with the trained AI models to customize the automated phishing emails based on the identified data points of the organization and its entities.
The email module and IT network module may use a vulnerability tracking module to track and profile, for example, versions of software and a state of patches and/or updates compared to a latest patch and/or update of the software resident on devices in the system/network. The vulnerability tracking module can supply results of the comparison of the version of software as an actual detected vulnerability for each particular node in the system being protected, which is utilized by the node exposure score generator and the cyber-attack simulator 105 with Artificial Intelligence-based simulations in calculating 1) the spread of a cyber threat and 2) a prioritization of remediation actions on a particular node compared to the other network nodes with actual detected vulnerabilities. The node exposure score generator is configured to also factor in whether the particular node is exposed to direct contact by an entity generating the cyber threat (when the threat is controlled from a location external to the system e.g., network) or the particular node is downstream of a node exposed to direct contact by the entity generating the cyber threat external to the network.
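A minimal sketch of a node exposure score combining a patch lag with direct or downstream exposure to the threat actor; the formula and weights are illustrative assumptions only:

def node_exposure_score(patches_behind, directly_exposed, downstream_of_exposed):
    """Higher scores indicate nodes to prioritize for remediation."""
    vulnerability = min(patches_behind / 5.0, 1.0)  # saturates after 5 missed patches
    if directly_exposed:
        exposure = 1.0   # in direct contact with the entity generating the threat
    elif downstream_of_exposed:
        exposure = 0.6   # downstream of a directly exposed node
    else:
        exposure = 0.2
    return round(vulnerability * exposure, 2)

print(node_exposure_score(4, directly_exposed=True, downstream_of_exposed=False))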
The node exposure score generator and the simulated attack module 750 in the cyber-attack simulator 105 cooperate to run the one or more hypothetical simulations of an actual detected cyber threat incident and/or a hypothetical cyberattack incident to calculate the node paths of least resistance in the virtualized instance/modeled instance of the system being protected. The progress through the node path(s) of least resistance through the system being protected is plotted through the various simulated instances of components of the graph of the system being protected until reaching a suspected end goal of the cyber-attack scenario, all based on historic knowledge of connectivity and behavior patterns of users and devices within the system under analysis. See for example
The cyber-attack simulator 105 with Artificial Intelligence-based simulations is configured to simulate the compromise and spread of the cyber threat being simulated in the simulated cyber-attack scenario, based on historical and/or similar cyber threat attack patterns, between the devices connected to the virtualized network, via a calculation of an ease of transmission of the cyber threat, from 1) an originally compromised node by the cyber threat, 2) through to other virtualized/simulated instances of components of the virtualized network, 3) until reaching a suspected end goal of the cyber-attack scenario, including key network devices. The cyber-attack simulator 105 with Artificial Intelligence-based simulations also calculates how likely it would be for the cyber-attack to spread to achieve either 1) a programmable end goal of that cyber-attack scenario set by a user, or 2) an end goal scripted by default into the selected cyber-attack scenario.
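By way of example, and not limitation, the spread calculation may be sketched as a Monte Carlo simulation over a small hypothetical network; the node names and ease-of-transmission values below are illustrative placeholders, not the actual ease-of-transmission algorithm:

    import random

    # Hypothetical virtualized network: edge weights are the ease of
    # transmission (probability the threat spreads across that link).
    edges = {
        "laptop-3":   [("fileserver", 0.7), ("mail-gw", 0.4)],
        "fileserver": [("db-server", 0.5)],
        "mail-gw":    [("db-server", 0.2)],
        "db-server":  [],
    }

    def simulate_spread(start, trials=10_000):
        """Monte Carlo estimate of how likely each node is to be
        compromised when the attack begins at `start`."""
        hits = {n: 0 for n in edges}
        for _ in range(trials):
            compromised, frontier = {start}, [start]
            while frontier:
                node = frontier.pop()
                for nbr, ease in edges[node]:
                    if nbr not in compromised and random.random() < ease:
                        compromised.add(nbr)
                        frontier.append(nbr)
            for n in compromised:
                hits[n] += 1
        return {n: hits[n] / trials for n in edges}

    print(simulate_spread("laptop-3"))   # likelihood each node is reached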
The email module and the IT network module can include a profile manager module. The profile manager module is configured to maintain a profile tag on all of the devices connecting to the actual system/network under analysis, based on their behavior and security characteristics, and then supply the profile tag for the devices connecting to the virtualized instance of the system/network when the construction of the graph occurs. The profile manager module is configured to maintain a profile tag for each device before the simulation is carried out, and thus eliminates the need to search and query for known data about each device being simulated during the simulation. This also assists in running multiple simulations of the cyberattack in parallel.
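By way of example, and not limitation, the profile tags may be held in a simple pre-built lookup structure so that each parallel simulation reads cached data rather than querying the live system mid-simulation; the device names and tag fields below are hypothetical:

    # Hypothetical profile tags maintained ahead of the simulation, so
    # each parallel run does a dictionary lookup instead of a live query.
    profile_tags = {
        "laptop-3":  {"role": "marketing", "os": "macOS", "mfa": True},
        "db-server": {"role": "database",  "os": "linux", "mfa": False},
    }

    def tag_for(device: str) -> dict:
        return profile_tags.get(device, {"role": "unknown"})

    print(tag_for("laptop-3"))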
The cyber-attack simulator 105 with Artificial Intelligence-based simulations is configured to construct the graph of the virtualized system, e.g. a network with its nets and subnets, where two or more of the devices connecting to the virtualized network are assigned different weighting resistances to malicious compromise from the cyber-attack being simulated in the simulated cyber-attack scenario, based on the actual cyber-attack on the virtualized instance of the network and their node vulnerability scores. In addition to a weighting resistance to the cyberattack, the calculations in the model for the simulated attack module 750 factor in knowledge of a layout and connection pattern of each particular network device in the network, an amount of connections and/or hops to other network devices in the network, how important a particular device is (a key importance) as determined by the function of that network device, the user(s) associated with that network device, and the location of the device within the network. Note, multiple simulations can be conducted in parallel by the orchestration module. The simulations can occur on a periodic, regular basis to pentest the cyber security of the system and/or in response to a detected ongoing cyberattack in order to get ahead of the ongoing cyberattack and predict its likely future moves. Again, the graph of the virtualized instance of the system is created with two or more of 1) known characteristics of the network itself, 2) pathway connections between devices on that network, 3) security features and credentials of devices and/or their associated users, and 4) behavioral characteristics of the devices and/or their associated users connecting to that network, all of which is obtained from what was already known about the network from the cyber security appliance 100.
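By way of example, and not limitation, plotting a node path of least resistance over such a weighted graph may be sketched with Dijkstra's algorithm in Python; the node names and weighting resistances below are hypothetical illustrations, not the simulated attack module 750's actual model:

    import heapq

    # Hypothetical weighted graph: each edge carries a resistance to
    # compromise derived from the target node's vulnerability score.
    graph = {
        "internet-facing": {"web-app": 1.0, "vpn-gw": 3.0},
        "web-app":         {"fileserver": 2.0},
        "vpn-gw":          {"fileserver": 1.5},
        "fileserver":      {"db-server": 0.5},
        "db-server":       {},
    }

    def least_resistance_path(src, goal):
        """Dijkstra's algorithm: the path with the lowest summed
        weighting resistance is the attack path of least resistance
        through the virtualized instance."""
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, resistance in graph[node].items():
                if nbr not in seen:
                    heapq.heappush(pq, (cost + resistance, nbr, path + [nbr]))
        return float("inf"), []

    print(least_resistance_path("internet-facing", "db-server"))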
During an ongoing cyberattack, the simulated attack module 750 is configured to run the one or more hypothetical simulations of the detected cyber threat incident; details of the incident detected by a cyber threat module in the detection engine are fed into the collections module of the cyber-attack simulator 105 using Artificial Intelligence-based simulations. The simulated attack module 750 is configured to run one or more hypothetical simulations of that detected incident in order to predict and assist in triggering an autonomous response by the autonomous response engine 140, and then restoration by the restoration engine in response to the detected incident.
The simulated attack module 750 ingests the information for the purposes of modeling and simulating potential cyberattacks against the network and the routes that an attacker would take through the network. The simulated attack module 750 can construct the graph of nodes with information to i) understand an importance of network nodes in the network compared to other network nodes in the network, and ii) determine key pathways within the network and vulnerable network nodes in the network that a cyber-attack would use during the cyber-attack, via modeling the cyber-attack on at least one of 1) a simulated device version and 2) a virtual device version of the system being protected under analysis. Correspondingly, the calculated likelihood of the compromise and timeframes for the spread of the cyberattack are tailored and accurate to each actual device/user account (e.g., node) being simulated in the system, because the cyber-attack scenario is based upon security credentials and behavior characteristics from actual traffic data fed to the modules, data store, and AI models of the cyber security appliance.
The cyber-attack simulator 105, with its Artificial Intelligence trained on how to conduct and perform cyberattacks in a simulation, in either a simulator or in a clone creator spinning up virtual instances on virtual machines, will take a sequence of actions and then evaluate the actual impact after each action in the sequence, in order to yield the best possible result to contain/mitigate the detected threat while minimizing the impact on other network devices and users that are i) currently active and ii) not in breach, from among different possible actions to take. Again, multiple simulations can be run in parallel so that the different sequences of mitigation actions and restoration actions can be evaluated essentially simultaneously. The cyber-attack simulator 105 with Artificial Intelligence-based simulations is configured to use one or more mathematical functions to generate a score and/or likelihood for each possible action and/or sequence of multiple possible actions that can be taken, in order to determine which set of actions to choose among the many possible actions to initiate. The one or more possible actions to take and their calculated scores can be stacked against each other to factor 1) a likelihood of containing the detected threat acting abnormally with each possible set of actions, 2) a severity level of the detected threat to the network, and 3) the impact of taking each possible set of actions i) on users and ii) on devices currently active in the network that are not acting abnormally relative to the normal behavior of the network. The cyber-attack simulator 105 then communicates with the cyber threat detection engine, the autonomous response engine 140, and the cyber-security restoration engine 190, respectively, to initiate the chosen set of actions to cause the best targeted change of the behavior of the detected threat acting abnormally relative to the normal pattern of life on the network, while minimizing the impact on other network devices and users that are i) currently active and ii) not outside the normal behavior benchmark. The cyber-attack simulator cooperates with the AI models modelling a normal pattern of life for entities/nodes in the system being protected.
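By way of example, and not limitation, stacking the possible sets of actions against each other may be sketched as follows; the candidate actions, containment likelihoods, impact values, severity weighting, and scoring function are hypothetical illustrations rather than the appliance's actual mathematical functions:

    # Hypothetical candidate sets of mitigation actions, each with an
    # estimated likelihood of containing the threat and an estimated
    # impact on users/devices behaving normally.
    candidates = [
        {"actions": ["block-ip"],               "contain": 0.60, "impact": 0.05},
        {"actions": ["block-ip", "quarantine"], "contain": 0.90, "impact": 0.25},
        {"actions": ["disable-account"],        "contain": 0.75, "impact": 0.40},
    ]
    severity = 0.8   # severity level of the detected threat (hypothetical)

    def score(c):
        # Reward likely containment, scaled by severity; penalize
        # impact on nodes not acting abnormally.
        return severity * c["contain"] - (1 - severity) * c["impact"]

    best = max(candidates, key=score)
    print(best["actions"], round(score(best), 3))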
The simulated attack module 750 is programmed itself, and can also cooperate with the artificial intelligence in the restoration engine, to factor an intelligent prioritization of remediation actions and determine which nodes (e.g., devices and user accounts) in the simulated instance of the system being protected should have priority compared to other nodes. This can also be reported out to assist in allocating the human security team personnel resources needed to restore, or to approve the restoration of, the nodes based on results of the one or more hypothetical simulations of the detected incident.
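By way of example, and not limitation, such a prioritization may be sketched as a ranking by the product of a node's exposure and its importance; the node names and scores below are hypothetical:

    # Hypothetical prioritization: rank nodes by exposure x importance
    # so human security personnel handle the highest-priority
    # restorations first.
    nodes = [
        {"name": "db-server", "exposure": 0.9, "importance": 1.0},
        {"name": "laptop-3",  "exposure": 0.7, "importance": 0.3},
        {"name": "mail-gw",   "exposure": 0.5, "importance": 0.8},
    ]
    for n in sorted(nodes, key=lambda n: n["exposure"] * n["importance"],
                    reverse=True):
        print(n["name"])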
Note, the cyber-attack simulator 105, when doing attack path modelling, does not need to calculate every theoretically possible path from the virtualized instance of the source device to the end goal of the cyber-attack scenario, but rather a set of the most likely paths, each time a hop is made from one node in the virtualized network to another device in the virtualized network, in order to reduce the amount of computing cycles needed by the one or more processing units as well as the amount of memory storage needed in the one or more non-transitory storage mediums.
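By way of example, and not limitation, keeping only a set of the most likely paths at each hop may be sketched as a beam search in Python; the graph, hop probabilities, and beam width below are hypothetical illustrations:

    # Hypothetical beam search: at each hop only the `beam` most likely
    # partial paths are kept, rather than enumerating every possible
    # path, reducing processing cycles and memory needed.
    def top_paths(graph, src, goal, beam=3, max_hops=6):
        frontier = [([src], 1.0)]          # (path so far, likelihood)
        complete = []
        for _ in range(max_hops):
            expanded = []
            for path, p in frontier:
                for nbr, edge_p in graph.get(path[-1], {}).items():
                    if nbr in path:
                        continue
                    new = (path + [nbr], p * edge_p)
                    (complete if nbr == goal else expanded).append(new)
            # Keep only the most likely partial paths at each hop.
            frontier = sorted(expanded, key=lambda x: -x[1])[:beam]
        return sorted(complete, key=lambda x: -x[1])

    graph = {"a": {"b": 0.8, "c": 0.3}, "b": {"d": 0.5}, "c": {"d": 0.9}}
    print(top_paths(graph, "a", "d"))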
Referring back to
The cyber security appliance 100 in the computer builds and maintains a dynamic, ever-changing model of the 'normal behavior' of each user and machine within the system. The approach is based on Bayesian mathematics and monitors all interactions, events, and communications within the system: which computer is talking to which, files that have been created, networks that are being accessed.
For example, a second computer is based in a company's San Francisco office and operated by a marketing employee who regularly accesses the marketing network, usually communicates with machines in the company's U.K. office in second computer system 40 between 9:30 AM and midday, and is active from about 8:30 AM until 6 PM.
The same employee virtually never accesses the employee time sheets, very rarely connects to the company's Atlanta network, and has no dealings in South-East Asia. The security appliance takes all the information that is available relating to this employee and establishes a 'pattern of life' for that person and the devices used by that person in that system, which is dynamically updated as more information is gathered. The model of the normal pattern of life for an entity in the network under analysis is used as a moving benchmark, allowing the cyber security appliance 100 to spot behavior on a system that seems to fall outside of this normal pattern of life and flag this behavior as anomalous, requiring further investigation and/or autonomous action.
The cyber security appliance 100 is built to deal with the fact that today's attackers are getting stealthier, and an attacker/malicious agent may be 'hiding' in a system to ensure that they avoid raising suspicion in an end user, for example by not noticeably slowing that user's machine down. The Artificial Intelligence model(s) in the cyber security appliance 100 builds a sophisticated 'pattern of life' that understands what represents normality for every person, device, and network activity in the system being protected by the cyber security appliance 100.
The self-learning algorithms in the AI can, for example, understand the normal patterns of life of each node (user account, device, etc.) in an organization in about a week, and grow more bespoke with every passing minute. Conventional AI typically relies solely on identifying threats based on historical attack data and reported techniques, requiring data to be cleansed, labelled, and moved to a centralized repository. The detection engine's self-learning AI can learn 'on the job' from real-world data occurring in the system and constantly evolves its understanding as the system's environment changes. The Artificial Intelligence can use machine learning algorithms to analyze patterns and 'learn' what is the 'normal behavior' of the network by analyzing data on the activity on the network at the device and employee level. The unsupervised machine learning does not need humans to supervise the learning in the model, but rather discovers hidden patterns or data groupings without the need for human intervention. The unsupervised machine learning discovers the patterns and related information using the unlabeled data monitored in the system itself. Unsupervised learning algorithms can include clustering, anomaly detection, neural networks, etc. Unsupervised learning can break down features of what it is analyzing (e.g., a network node of a device or user account), which can be useful for categorization, and then identify what else has similar or overlapping feature sets matching what it is analyzing.
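By way of example, and not limitation, identifying what else has similar or overlapping feature sets may be sketched with a simple Jaccard similarity over observed behavioral features; the node names and feature sets below are hypothetical:

    # Hypothetical behavioral feature sets observed per node; nodes
    # whose features overlap strongly (high Jaccard similarity) are
    # treated as the same category of entity.
    features = {
        "laptop-3":  {"smtp", "web", "uk-office", "9-6"},
        "laptop-7":  {"smtp", "web", "uk-office", "9-5"},
        "db-server": {"sql", "backup", "always-on"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def similar_to(name, threshold=0.5):
        return [other for other in features
                if other != name
                and jaccard(features[name], features[other]) >= threshold]

    print(similar_to("laptop-3"))   # ['laptop-7']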
The cyber security appliance 100 can use unsupervised machine learning to work things out without pre-defined labels. In the case of sorting a series of different entities, such as animals, the system analyzes the information and works out the different classes of animals. This allows the system to handle the unexpected and embrace uncertainty when new entities and classes are examined. The modules and models of the cyber security appliance 100 do not always know what they are looking for, but can independently classify data and detect compelling patterns.
The cyber security appliance's 100 unsupervised machine learning methods do not require training data with pre-defined labels. Instead, they are able to identify key patterns and trends in the data without the need for human input. The advantage of unsupervised learning in this system is that it allows computers to go beyond what their programmers already know and discover previously unknown relationships. The unsupervised machine learning methods can use a probabilistic approach based on a Bayesian framework. The machine learning allows the cyber security appliance 100 to integrate a huge number of weak indicators of potentially anomalous network behavior, each having a low threat value by itself, to produce a single clear overall measure of these correlated anomalies and thereby determine how likely a network device is to be compromised. This probabilistic mathematical approach provides an ability to understand important information amid the noise of the network, even when it does not know what it is looking for.
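By way of example, and not limitation, the integration of many weak indicators into a single overall measure may be sketched in log-odds space in Python; the likelihood ratios and prior below are hypothetical illustrations rather than the appliance's actual Bayesian framework:

    import math

    # Hypothetical weak indicators: each carries a likelihood ratio of
    # being observed on a compromised device versus a normal device.
    # Individually weak evidence is combined in log-odds space into one
    # overall probability that the device is compromised.
    likelihood_ratios = [1.3, 1.1, 1.4, 1.2, 1.6]   # each weak on its own
    prior = 0.01                                     # prior P(compromised)

    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)

    posterior = 1 / (1 + math.exp(-log_odds))
    print(round(posterior, 4))   # single clear overall measure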
The models in the cyber security appliance 100 can use Recursive Bayesian Estimation to combine these multiple analyses of different measures of network behavior into a single overall/comprehensive picture of the state of each device; the cyber security appliance 100 takes advantage of the power of Recursive Bayesian Estimation (RBE) via an implementation of the Bayes filter.
Using RBE, the cyber security appliance 100's AI models are able to constantly adapt themselves, in a computationally efficient manner, as new information becomes available to the system. The cyber security appliance 100's AI models continually recalculate threat levels in the light of new evidence, identifying changing attack behaviors where conventional signature-based methods fall down.
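By way of example, and not limitation, such a recursive update may be sketched as a discrete Bayes filter over two states; the belief values and evidence likelihoods below are hypothetical illustrations rather than the appliance's actual models:

    # Hypothetical discrete Bayes filter over two states, recursively
    # updated as each new piece of evidence arrives, so the threat
    # level adapts without reprocessing the full history.
    def bayes_update(belief, likelihood):
        """belief, likelihood: dicts over {'normal', 'compromised'}."""
        unnorm = {s: belief[s] * likelihood[s] for s in belief}
        total = sum(unnorm.values())
        return {s: v / total for s, v in unnorm.items()}

    belief = {"normal": 0.99, "compromised": 0.01}
    for evidence in [{"normal": 0.7, "compromised": 1.8},   # odd login hour
                     {"normal": 0.6, "compromised": 2.5}]:  # rare destination
        belief = bayes_update(belief, evidence)
        print(belief)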
Training a model can be accomplished by having the model learn good values for all of the weights and the bias from labeled examples created by the system, in this case starting with no labels initially. A goal of the training of the model can be to find a set of weights and biases that have low loss, on average, across all examples.
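By way of example, and not limitation, such training may be sketched as a simple gradient descent in Python; the one-feature model, learning rate, and example data are hypothetical illustrations:

    # Hypothetical one-feature model: gradient descent searches for a
    # weight and bias with low squared-error loss across the examples.
    examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # (input, label)
    w, b, lr = 0.0, 0.0, 0.1

    for _ in range(500):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / len(examples)
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / len(examples)
        w, b = w - lr * grad_w, b - lr * grad_b

    print(round(w, 2), round(b, 2))   # approaches w=2, b=1 for this data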
The AI classifier can receive supervised machine learning with a labeled data set to learn to perform its task as discussed herein. One anomaly detection technique that can be used is supervised anomaly detection, which requires a data set that has been labeled as 'normal' and 'abnormal' and involves training a classifier. Another anomaly detection technique that can be used is unsupervised anomaly detection, which detects anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least well with the remainder of the data set. The model representing normal behavior from a given normal training data set can detect anomalies by establishing the normal pattern and then testing the likelihood that a test instance under analysis was generated by the model. Anomaly detection can identify rare items, events, or observations which raise suspicions by differing significantly from the majority of the data, which includes rare objects as well as things like unexpected bursts in activity.
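By way of example, and not limitation, unsupervised anomaly detection against a learned normal pattern may be sketched by fitting a Gaussian to unlabeled 'normal' observations and flagging low-likelihood test instances; the data values and threshold below are hypothetical:

    import math

    # Hypothetical unsupervised anomaly detection: model 'normal' as a
    # Gaussian fitted to unlabeled observations, then flag test
    # instances the model is unlikely to have generated.
    normal = [52.1, 49.8, 50.5, 51.2, 48.9, 50.0, 49.5]   # e.g., MB/day
    mu = sum(normal) / len(normal)
    var = sum((x - mu) ** 2 for x in normal) / len(normal)

    def likelihood(x):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for test in [50.3, 97.0]:
        print(test, "anomalous" if likelihood(test) < 1e-4 else "normal")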
The methods and systems shown in the Figures and discussed in the text herein can be coded to be performed, at least in part, by one or more processing components with any portions of software stored in an executable format on a computer readable medium. Thus, any portions of the method, apparatus and system implemented as software can be stored in one or more non-transitory storage devices in an executable format to be executed by one or more processors. The computer readable storage medium may be non-transitory and does not include radio or other carrier waves. The computer readable storage medium could be, for example, a physical computer readable storage medium such as semiconductor memory or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disk, such as a CD-ROM, CD-R/W or DVD. The various methods described above may also be implemented by a computer program product. The computer program product may include computer code arranged to instruct a computer to perform the functions of one or more of the various methods described above. The computer program and/or the code for performing such methods may be provided to an apparatus, such as a computer, on a computer readable medium or computer program product. For the computer program product, a transitory computer readable medium may include radio or other carrier waves.
A computing system can be, wholly or partially, part of one or more of the server or client computing devices in accordance with some embodiments. Components of the computing system can include, but are not limited to, a processing unit having one or more processing cores, a system memory, and a system bus that couples various system components including the system memory to the processing unit.
The computing device may include one or more processors or processing units 620 to execute instructions, one or more memories 630-632 to store information, one or more data input components 660-663 to receive data input from a user of the computing device 600, one or more modules that include the management module, a network interface communication circuit 670 to establish a communication link to communicate with other computing devices external to the computing device, one or more sensors where an output from the sensors is used for sensing a specific triggering condition and then correspondingly generating one or more preprogrammed actions, a display screen 691 to display at least some of the information stored in the one or more memories 630-632 and other components. Note, portions of this design implemented in software 644, 645, 646 are stored in the one or more memories 630-632 and are executed by the one or more processors 620. The processing unit 620 may have one or more processing cores, which couples to a system bus 621 that couples various system components including the system memory 630. The system bus 621 may be any of several types of bus structures selected from a memory bus, an interconnect fabric, a peripheral bus, and a local bus using any of a variety of bus architectures.
Computing device 602 typically includes a variety of computing machine-readable media. Machine-readable media can be any available media that can be accessed by computing device 602 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, use of computing machine-readable media includes storage of information, such as computer-readable instructions, data structures, other executable software, or other data. Computer-storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information, and which can be accessed by the computing device 602. Transitory media such as wireless channels are not included in the machine-readable media. Machine-readable media typically embody computer readable instructions, data structures, and other executable software. In an example, a volatile memory drive 641 is illustrated for storing portions of the operating system 644, application programs 645, other executable software 646, and program data 647.
A user may enter commands and information into the computing device 602 through input devices such as a keyboard, touchscreen, or software or hardware input buttons 662, a microphone 663, a pointing device and/or scrolling input component, such as a mouse, trackball, or touch pad 661. The microphone 663 can cooperate with speech recognition software. These and other input devices are often connected to the processing unit 620 through a user input interface 660 that is coupled to the system bus 621, but can be connected by other interface and bus structures, such as a lightning port, game port, or a universal serial bus (USB). A display monitor 691 or other type of display screen device is also connected to the system bus 621 via an interface, such as a display interface 690. In addition to the monitor 691, computing devices may also include other peripheral output devices such as speakers 697, a vibration device 699, and other output devices, which may be connected through an output peripheral interface 695.
The computing device 602 can operate in a networked environment using logical connections to one or more remote computers/client devices, such as a remote computing system 680. The remote computing system 680 can be a personal computer, a mobile computing device, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computing device 602. The logical connections can include a personal area network (PAN) 672 (e.g., Bluetooth®), a local area network (LAN) 671 (e.g., Wi-Fi), and a wide area network (WAN) 673 (e.g., cellular network). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. A browser application and/or one or more local apps may be resident on the computing device and stored in the memory.
When used in a LAN networking environment, the computing device 602 is connected to the LAN 671 through a network interface 670, which can be, for example, a Bluetooth® or Wi-Fi adapter. When used in a WAN networking environment (e.g., Internet), the computing device 602 typically includes some means for establishing communications over the WAN 673. With respect to mobile telecommunication technologies, for example, a radio interface, which can be internal or external, can be connected to the system bus 621 via the network interface 670, or other appropriate mechanism. In a networked environment, other software depicted relative to the computing device 602, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, remote application programs 685 can reside on the remote computing device 680. It will be appreciated that the network connections shown are examples and other means of establishing a communications link between the computing devices may be used. It should be noted that the present design can be carried out on a single computing device or on a distributed system in which different portions of the present design are carried out on different parts of the distributed computing system.
Note, an application described herein includes, but is not limited to, software applications, mobile applications, and program routines, objects, widgets, and plug-ins that are part of an operating system application. Some portions of this description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. These algorithms can be written in a number of different software programming languages such as Python, C, C++, Java, or other similar languages. Also, an algorithm can be implemented with lines of code in software, configured logic gates in hardware, or a combination of both. In an embodiment, the logic consists of electronic circuits that follow the rules of Boolean Logic, software that contains patterns of instructions, or any combination of both. A module may be implemented in hardware electronic components, software components, or a combination of both. A software engine is a core component of a complex system consisting of hardware and software that is capable of performing its function discretely from other portions of the entire complex system but is designed to interact with the other portions of the entire complex system. The systems and methods described herein can be implemented with the algorithms discussed herein.
Unless specifically stated otherwise as apparent from the above discussions, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers, or other such information storage, transmission or display devices.
While the foregoing design and embodiments thereof have been provided in considerable detail, it is not the intention of the applicant(s) for the design and embodiments provided herein to be limiting. Additional adaptations and/or modifications are possible, and, in broader aspects, these adaptations and/or modifications are also encompassed. Accordingly, departures may be made from the foregoing design and embodiments without departing from the scope afforded by the following claims, which scope is only limited by the claims when appropriately construed.
This application claims priority under 35 USC 119 to U.S. provisional patent application 63/623,939, titled "CYBER SECURITY AND METHODS OF OPERATION," filed Jan. 23, 2024, the disclosure of which is incorporated herein by reference in its entirety.