The present invention is generally related to cyber security threats to enterprise and personal computer systems, and more particularly relates to systems and methods for monitoring and mitigating cyber security threats and cyber risks to corporate and employee computer systems.
With the ever-increasing dependence of individuals and corporations on Information Technology (IT) systems, and the ubiquitous interconnection of those systems to the Internet, there has unfortunately been a significant rise in the number and range of malicious attacks (cyber threats or cyber-attacks) by hackers and the like, operating in ‘cyber space,’ that seek to attack or undermine the operation of the IT systems. Cyber-attacks are more frequent than ever before because of the ever-increasing availability of Internet connectivity across all types of devices, from laptops, desktops, notepads, and mobile phones to the broad spectrum of everyday devices connected to the Internet (e.g., the Internet of Things or IoT), and they significantly affect the productivity and monetary interests of businesses and individuals. Cyber-attacks normally target vulnerabilities in the IT systems to steal confidential information, and can take many forms, including phishing attacks, distributed denial-of-service attacks, account takeover attempts, ransomware attacks, and other known malicious types of activity. They have come to dominate the everyday operations of organizations, thereby requiring significant labor force and enterprise attention and resources. Additionally, cyber-attacks can target individual employees through sophisticated, personalized social engineering attacks. These activities have collectively become known as cyber-crimes.
Cyber-crimes have become one of the world's major problems with new breaches of data and releases of ransomware occurring hourly at an alarming rate. Cyber-crimes cost many businesses billions of dollars every year. Any person or business regardless of size is potentially vulnerable to cyber risks, from some of the world's largest corporations, to critical national infrastructure, to small local enterprises, and to individuals. These types of cyber-crimes will continue to increase, particularly as evolving programs such as Internet of Things (IoT), smart cities, and mass digitization become the reality of daily life. Further, the cost of preventing and responding to cyber-crimes will continue to grow exponentially causing serious financial and reputational damage to individuals and businesses.
In order to properly address these cyber security threats, a significant cyber security infrastructure and related personnel need to be deployed and maintained. The security infrastructure can include a number of different security tool software applications as well as associated hardware devices, all maintained by technical personnel. As the cyber threats increase in size and scale, and become more sophisticated, businesses and the employees who manage the security infrastructure have needed to adapt. This adaptation requires new skills, new tools, new processes, new policies, and enterprise-level training.
Currently, cyber security solutions rely on tedious human labor centered around a diverse combination of point solutions (to cyber threat management) with limited knowledge sharing between competitive data silos, networks, and associated security vendors. Even within businesses, data related to different types of cyber threats is often siloed and not shared across the security infrastructure platform. Cyber criminals can hence capitalize on the static controls set on siloed data sources that still define the security landscape for many businesses. As such, information security teams are often slow to respond to real-time cyber-attacks on the system, since they are not provided with critical cyber-attack data across the entire system in real time. In an effort to keep businesses secure, information security teams spend a large amount of resources fine-tuning the static security rules. The teams also tend to set conservative risk thresholds in an effort to identify system attacks in a timely manner, which can unfortunately lead to a large number of false alarms. Both of these activities are time consuming, costly, and hard to scale across the platform. Still further, the volume of cyber tools and log data generated by the system also makes it difficult to detect attacks because of the difficulty of processing the large amounts of data and then identifying attack data therefrom. To further complicate matters, the industry as a whole is struggling with a scarcity of trained cyber specialists. As a result, most companies remain highly vulnerable to cyber threats due to the constantly changing nature of cyber-attacks, the siloed nature of the security data, and the lack of trained personnel.
In light of the current cyber security risks, there is an urgent need for a cyber security solution that aggregates security data across the platform and enables adaptive and proactive capabilities for identifying new cyber threats and allowing end users to reduce, mitigate, or eliminate them. Such a solution must go beyond being a point solution and instead enable each business to address its own individual risk profile by fundamentally enabling scale and reducing dependence on humans as the connection between point solutions. The core enablers for such a solution can thus provide businesses with the data insight and infrastructure to accurately identify, measure, quantify, and respond to the cyber risks and threats to which the business is exposed.
In light of the above-mentioned needs, the present invention is directed to a cyber security threat management system that employs a unique and fully scalable Artificial Intelligence (AI) based analytical tool for adaptive, intuitive, automated, and seamless review of security data, thereby giving cybersecurity teams the ability to monitor, identify, remediate, mitigate and resolve cyber security issues at scale. The system and method of the present invention assists businesses with transforming their cyber security infrastructure into a robust, integrated security platform that employs artificial intelligence and machine learning.
The cyber security monitoring and mitigation system of the present invention employs an analytics module that processes security data so as to determine cyber security risks and generate predictions based on the processing of the security data. The analytics module can employ a data connector unit for retrieving selected security data based on preselected cyber features. The data from the data connector unit is then preprocessed by profiling and cleaning the data and generating cleaned security data that is disposed in a structured format. The consistency of the structured data format allows the system to quickly and efficiently process the data to analyze and identify real-time cyber security risks and needs. The cleaned security data then has one or more cyber feature elements overlaid thereon to extract and identify selected security data for processing by one or more machine learning techniques. The machine learning techniques help identify selected portions of the security data, based on the cyber feature elements, for processing by the prediction unit. The prediction unit then generates prediction or probability values or information based on the security data and the cyber features. The system can respond to the predictions by addressing, reducing, and eliminating cyber security threats and issues. The prediction information can be subsequently processed by a data visualization unit for generating one or more user interfaces that display selected types of information to the system user. The present invention thus helps businesses address and reduce the occurrence of cyber security threats and attacks by leveraging the power of AI and machine learning enabled technologies. As such, once the system and method of the present invention are adopted, the information technology infrastructure of the underlying business can be transformed from a conventional, reactive, siloed system into a proactive system that monitors, measures, and mitigates cyber threats and risks in real time.
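By way of a non-limiting illustration only, the end-to-end flow summarized above can be sketched in Python using pandas and scikit-learn. The column names, the tiny synthetic data set, the Random Forest model choice, and the function boundaries below are assumptions made solely for the example and are not a description of the actual implementation of the analytics module.

```python
# Hedged end-to-end sketch of the described flow (collect -> clean ->
# featurize -> train -> score). All column names, values, and the model
# choice are illustrative assumptions, not the actual implementation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# "Data connector": security data organized into a structured data frame.
raw = pd.DataFrame({
    "user": ["alice", "alice", "bob", "bob"],
    "login_count_1h": [3, 120, 2, 4],
    "geo_change_km": [0.0, 8500.0, 10.0, None],
    "label": [0, 1, 0, 0],                     # 1 = previously flagged as suspicious
})

# "Data preprocessing": fill gaps and clean into a consistent structure.
cleaned = raw.fillna({"geo_change_km": 0.0}).astype({"login_count_1h": int})

# "Cyber feature unit": overlay the preselected cyber features on the frame.
features = cleaned[["login_count_1h", "geo_change_km"]]

# "Model development / prediction": train and generate probability values.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(features, cleaned["label"])
cleaned["risk_probability"] = model.predict_proba(features)[:, 1]

# "Results integrator": hand the scored records to a report or dashboard.
print(cleaned[["user", "risk_probability"]])
```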
The present invention is directed to a cyber security monitoring and mitigation system comprising one or more data sources for storing security data, a deployment infrastructure subsystem having a security tool layer for generating at least a portion of the security data and one or more storage elements for storing at least a portion of the security data, and a data analytics module for processing the security data. The analytics module includes a data connector unit for collecting the security data from one or more of the data sources and then organizing the security data into a selected format to form organized security data, a data preprocessing unit for profiling and correcting the organized security data to form cleaned security data, a cyber feature unit for identifying based on preselected cyber features selected portions of the cleaned security data associated with the cyber features, a model development unit for applying one or more selected machine learning techniques to the features from the cyber feature unit to form output model data, and a model prediction unit for generating based on the output model data one or more prediction values based on the cleaned security data and the cyber features.
The system further includes a results integrator unit for generating from the prediction values one or more user interfaces for displaying the prediction values, a network for communicating with the one or more of the one or more data sources, the data analytics module, and the deployment infrastructure, and for communicating the security data therebetween, and a data merger unit for merging cleaned security data from two or more of the plurality of data sources. The system can also include a data search engine communicating with the data connector unit and the security data for searching the security data for one or more selected parameters.
According to the present invention, the data preprocessing unit comprises a data profiler unit that is configured to analyze and to process the organized security data in a data frame received from the data connector unit and summarize one or more values associated with the organized security data contained in the data frame to form profiled security data, and a data cleaner unit for detecting and correcting selected information in the profiled security data within the data frame to form the cleaned security data. The one or more values associated with the organized security data comprise selected numerical fields, timestamp information, categorical field information, information related to changes in the security data, and historical trend information. The data cleaner unit comprises a cleaning schema module for applying a uniform cleaning schema to the profiled security data, and the cyber feature unit comprises a plurality of selectable cyber features.
According to one practice, the machine learning technique of the model development unit comprises one or more of a supervised machine learning technique, an unsupervised machine learning technique, a semi-supervised learning technique, a self-learning technique, or a reinforcement machine learning technique.
The present invention is also directed to a computer implemented method comprising providing security data from one or more data sources, generating at least a portion of the security data and storing at least a portion of the security data in one or more storage elements, and processing the security data. The security data can be processed by collecting the security data from one or more of the data sources and then organizing the security data into a selected format to form organized security data, profiling and correcting the organized security data to form cleaned security data, identifying based on one or more preselected cyber features selected portions of the cleaned security data associated with the cyber features, applying one or more selected machine learning techniques to the cleaned security data to form output model data, generating based on the output model data one or more prediction values based on the cleaned security data and the cyber features, and generating from the prediction values one or more user interfaces for displaying the prediction values.
The computer-implemented method of the present invention also includes merging cleaned security data from two or more of the data sources, and providing a data search engine for searching the security data for one or more selected parameters. Further, the step of collecting the security data comprises generating a data frame that includes therein the organized security data, and the step of profiling and correcting the organized security data further comprises analyzing and processing the organized security data in the data frame and summarizing one or more values associated with the organized security data contained in the data frame to form profiled security data, and detecting and correcting selected information in the profiled security data within the data frame to form the cleaned security data.
These and other features and advantages of the present invention will be more fully understood by reference to the following detailed description in conjunction with the attached drawings, in which like reference numerals refer to like elements throughout the different views. The drawings illustrate principles of the invention and, although not to scale, show relative dimensions.
The present invention is directed to a cyber security monitoring and mitigation system 10 that helps organizations, such as enterprise businesses, protect their critical IT infrastructure and associated data from cyber criminals. A simplified schematic representation of the cyber security monitoring and mitigation system 10 of the present invention is shown in
The security data 13 from the data sources 12 can be conveyed or transferred to other portions of the cyber security monitoring and mitigation system 10 via wired connections or wireless connections, such as via a network 14. The security data 13 is eventually transferred to a data analytics module 16 that communicates with a deployment infrastructure subsystem 18. The illustrated deployment infrastructure subsystem 18 can include any selected number or collection of computer infrastructure components, including both hardware and software, that are constructed and configured to develop, test, monitor, control, support or deliver selected information technology services. The deployment infrastructure subsystem can be housed or located at a single location or can be distributed across an on-premises IT platform, and can include if desired any selected type and number of cloud hosting services. The deployment infrastructure subsystem 18 can include, for example, one or more client devices, one or more servers, such as for example Linux servers, Windows servers, Cloud Operating System servers, docker and Kubernetes servers, one or more types of data engines, one or more cluster type frameworks, such as a Spark cluster, and one or more types of containerization software applications, as well as other known hardware and software components. The deployment infrastructure subsystem 18 can also include client data sources and computer clusters, server networks and farms, and the like. The system 10 can also include a security tool layer 24 that includes a plurality of known types of security tools for monitoring and maintaining the security of the network. The security data 13 can be generated by the security tools in the security tool layer 24 or from other security tools in associated networks or a combination of both. The data sources 12 can thus reside within or form part of the security tool layer 24, can be external to the security tool layer 24, or can be a combination of both. The deployment infrastructure subsystem 18 can also include electronic or computer devices, such as servers and clients, having known processing capabilities, as well as selected and varied types of storage and memory associated therewith, indicated in a simplified manner as storage elements 20 and 22. The illustrated storage elements 20, 22 are represented as a database for the sake of simplicity.
The deployment infrastructure subsystem 18 communicates via known communication technology with the data analytics module 16. The illustrated data analytics module 16 can include selected types of data preprocessing and processing subsystems or units and associated functionality, as well as include selected types of artificial intelligence and machine learning capabilities. For example, and according to one embodiment, the illustrated data analytics module 16 can include a data connector unit 30 for collecting selected types of data, including security data, from the data sources 12a-12n and then organizing the data into a selected type or format. The organized security data generated or produced by the data connector unit 30 is then conveyed to a data preprocessing unit 32. The data preprocessing unit 32 can be employed to profile and clean the security data for subsequent use by the system. Specifically, the data profiler can be configured to summarize selected values or statistics associated with the data. Further, the data preprocessing unit 32 can also clean the data by comparing the security data to a selected cleaning schema so as to generate or create a common data type or data structure, such as a data frame. The preprocessed security data from the various data sources can then be merged by an optional data merger unit 34. The merged security data is then introduced or conveyed to other elements or portions of the data analytics module 16.
According to one embodiment of the present invention, the data analytics module 16 can include a cyber feature unit 36 for selecting one or more, and preferably a plurality, of cyber features for processing by the module 16. The cyber features can include a list or table of selected types or attributes of security data that can be preselected by the system user and overlaid on the cleaned data frame so as to identify selected portions of the security data. The extracted security data is then introduced to the AI training module 38 for further processing and for training any selected machine learning component associated therewith. The AI module can include and employ one or more known machine learning model training components or techniques and the like.
The illustrated data analytics module 16 can further include a model development unit 40 and a model prediction unit 42. The model development unit 40 develops and deploys or applies a machine learning model for processing the curated and extracted security data and associated information, and the model prediction unit 42 forms or generates predictions based on the machine learning techniques as applied to the extracted security data. The model development unit 40 can employ any selected known type of machine learning technique, including for example one or more of, or any combination of, a neural network technique, a Random Forest technique, an XGBoost technique, one or more known methods or techniques for explaining predictions, such as for example a Shapley Additive Explanations (SHAP) technique, and the like.
The security data and associated predictions can then be introduced or conveyed to a results integrator unit 44 for integrating the data results into a format usable by the system and for displaying the results to an end user. The results integrator unit 44 can employ one or more known report generators, data visualization software applications, and/or one or more known user interface units for generating one or more user interfaces. The report generator and/or the user interface unit can also be deployed separately from the results integrator unit 44.
Further details of the cyber security monitoring and mitigation system 10 of the present invention are shown in
The security data 13 is initially supplied to the data connector unit 30 as part of a data ingestion methodology employed by the data analytics module 16. As shown in
The data preprocessing unit 32 of the data analytics module 16 can include a data profiler unit 60 and a data cleaner unit 70. The data profiler unit 60 is configured to analyze and process the organized security data in the data frame received from the data connector unit 30 and to summarize one or more values associated with the organized security data contained in the data frame, such as selected numerical fields, timestamp information, categorical field information, information related to changes in the security data, and historical trend information, so as to form profiled security data.
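For illustration only, the kind of profiling summary such a data profiler unit might produce can be sketched as follows with pandas; the field names and the particular summary statistics are assumptions for the example rather than the unit's actual behavior.

```python
# Hedged profiling sketch; the field names and statistics are illustrative.
import pandas as pd

def profile_frame(frame: pd.DataFrame) -> dict:
    """Summarize numerical, timestamp, and categorical fields of a data frame."""
    profile = {}
    numeric = frame.select_dtypes(include="number")
    profile["numeric_summary"] = numeric.describe()            # count/mean/std/min/max
    timestamps = frame.select_dtypes(include="datetime")
    profile["time_span"] = {
        col: (frame[col].min(), frame[col].max()) for col in timestamps.columns
    }
    categorical = frame.select_dtypes(include=["object", "category"])
    profile["category_counts"] = {
        col: frame[col].value_counts() for col in categorical.columns
    }
    profile["missing_values"] = frame.isna().sum()              # completeness check
    return profile

# Example usage on a small synthetic security log.
log = pd.DataFrame({
    "event_time": pd.to_datetime(["2020-09-01 10:00", "2020-09-01 11:30"]),
    "event_type": ["login", "login_failed"],
    "bytes_out": [1200, 0],
})
print(profile_frame(log))
```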
The data cleaner unit 70 is configured to clean the profiled security data received from the data profiler unit 60 by detecting and correcting inaccurate or incomplete data within the data frame according to known techniques to form cleaned security data. The data cleaner unit 70 thus ensures that the data is accurate, valid, correct, complete, consistent, and uniform (e.g., cleaned). The data cleaner unit 70 can include a cleaning schema module 72 for cleaning the data by applying a uniform cleaning schema or process to the profiled security data. For example, as shown in
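A minimal sketch of applying a uniform cleaning schema to profiled security data appears below; the schema contents (expected types and default values) are assumptions chosen for illustration and are not the cleaning schema of the described system.

```python
# Hedged cleaning-schema sketch; the schema itself is an illustrative
# assumption, not the cleaning schema applied by the described system.
import pandas as pd

CLEANING_SCHEMA = {
    "src_ip":     {"dtype": "string",   "default": "0.0.0.0"},
    "bytes_out":  {"dtype": "float",    "default": 0.0},
    "event_time": {"dtype": "datetime", "default": pd.NaT},
}

def clean_frame(frame: pd.DataFrame, schema: dict = CLEANING_SCHEMA) -> pd.DataFrame:
    """Coerce each field to its schema type, fill incomplete values, drop duplicates."""
    cleaned = frame.copy()
    for column, rule in schema.items():
        if column not in cleaned:
            cleaned[column] = rule["default"]                   # add missing fields
        if rule["dtype"] == "datetime":
            cleaned[column] = pd.to_datetime(cleaned[column], errors="coerce")
        elif rule["dtype"] == "float":
            cleaned[column] = pd.to_numeric(cleaned[column], errors="coerce")
        else:
            cleaned[column] = cleaned[column].astype("string")
        if pd.notna(rule["default"]):
            cleaned[column] = cleaned[column].fillna(rule["default"])
    return cleaned.drop_duplicates()

# Example usage on an incomplete, inconsistent record set.
raw = pd.DataFrame({"src_ip": ["10.0.0.1", None], "bytes_out": ["512", "oops"]})
print(clean_frame(raw))
```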
The security data from each of the data sources 12 is processed by the data connector unit 30, the data profiler unit 60, and the data cleaner unit 70 in the container unit 80, and the resultant cleaned data frames 78 are merged in the data merger unit 34 with other cleaned data frames that are processed by the system and that are received from other data sources 12. The data merger unit 34 ensures that all of the data sources are combined or merged in a standardized manner so that all numerical data can be accessed in the same way irrespective of the original source of data. The merger unit 34 also ensures that the data sources 12 are correctly identified (e.g., tagged) to the right data sets so remediation and auditing of the data sources can be easily done. Along with the raw security data from the data sources 12, the merger unit 34 can also normalize (e.g., make the baseline representation of each data set equal) the data sets so that similar attributes of cyber information between different cyber systems are treated in the same manner by the system 10.
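The merging, source tagging, and normalization described above might, purely for illustration, resemble the following sketch; the source names and the z-score normalization are assumptions made for the example.

```python
# Hedged merge-and-normalize sketch; the source names and the z-score
# normalization are assumptions made for the example.
import pandas as pd

def merge_frames(named_frames: dict) -> pd.DataFrame:
    """Tag each cleaned data frame with its data source and stack them."""
    tagged = [frame.assign(source=name) for name, frame in named_frames.items()]
    return pd.concat(tagged, ignore_index=True)

def normalize_numeric(frame: pd.DataFrame) -> pd.DataFrame:
    """Bring shared numerical attributes onto a common baseline (z-score)."""
    out = frame.copy()
    for column in out.select_dtypes(include="number").columns:
        std = out[column].std()
        if std:                                                 # skip constant columns
            out[column] = (out[column] - out[column].mean()) / std
    return out

# Example usage with two hypothetical cleaned sources.
merged = normalize_numeric(merge_frames({
    "firewall_logs": pd.DataFrame({"bytes_out": [100.0, 2000.0]}),
    "auth_logs":     pd.DataFrame({"bytes_out": [0.0, 50.0]}),
}))
print(merged)
```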
The merged security data is then conveyed from the data merger unit 34 to the cyber feature unit 36. The cyber feature unit 36 converts the data in the merged security data and originating from the various cyber data sources into a set of features that can be used by one or more of the artificial intelligence and machine learning algorithms. More specifically, features that are generated or outputted by the cyber feature unit, as that term is used herein, can include a defined table of outputs, which may be numerals, category variables, or binary variables, and the like. The cyber features allow the system to interpret the security data that may have originated from various and different locations and systems in a common way. The cyber feature unit 36 also functions as a way for the users of the system to transfer human learned knowledge of how cyber attackers act into a meaningful numerical, or machine interpretable, score or value to be used for all use cases that can be integrated with the overall system. The features generated by the cyber feature unit 36 can be considered as the cyber knowledge repository for any institution that implements the present invention. All the features generated as part of the daily operation of the present invention are retained by the system and hence act as a central storage of all the features that are used by cyber analysts within the organization to do their daily cyber remediation activities.
The cyber feature unit 36 can include or communicate with a feature generator module 104 that includes a plurality of different cyber features or characteristics that the system can review, analyze, and evaluate. The feature generator module 104 can be located in the cyber feature unit 36, or at other locations in the system 10, such as for example by forming part of the model prediction unit 42. The feature generator module 104 can generate cyber feature profile data 94 that can be stored in the system, such as for example in the deployment infrastructure subsystem 18. According to the present invention, the cyber features can include without limitation rate or volume of cyber events, network traffic volume (e.g., number of log-in events and connections), changes in geo-location, time span between changes in geo-location, changes in connection or log-in behavior, whether information associated with a log-in or user has previously been identified as suspicious, log-in frequency, time span between log-ins, and the like. Thus, the cyber features correspond if desired to the cyber characteristics of the system 10 that the client wishes to monitor or investigate. Further, the cyber features can be preselected based on client needs, and can be aligned to the datasets that users already have stored. Thus, the cyber feature unit 36 or the feature generator module 104 can comprise if desired a plurality of selectable cyber features. As a simple example, if a user is noted to log into the system from different geographic locations, the system can determine, based on preselected cyber features such as a geolocation cyber feature, a time span cyber feature, and a reasonable time and distance between geolocations cyber feature, whether it is feasible for the user to travel between the locations in the allotted period of time. If not, then the security data is marked as suspicious.
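A hedged sketch of the geolocation and time-span cyber features described in the example above (sometimes called an "impossible travel" check) is shown below; the field names and the assumed maximum feasible travel speed are illustrative assumptions only.

```python
# Hedged "impossible travel" feature sketch; the 900 km/h ceiling and the
# field names are illustrative assumptions only.
import math
import pandas as pd

MAX_FEASIBLE_SPEED_KMH = 900.0          # assumed ceiling, roughly airliner speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two coordinates."""
    rlat1, rlon1, rlat2, rlon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((rlat2 - rlat1) / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin((rlon2 - rlon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def flag_impossible_travel(logins: pd.DataFrame) -> pd.DataFrame:
    """Mark consecutive logins whose implied travel speed is infeasible."""
    logins = logins.sort_values(["user", "login_time"]).copy()
    prev = logins.groupby("user")[["login_time", "lat", "lon"]].shift(1)
    distance_km = [
        haversine_km(a, b, c, d) if pd.notna(a) else 0.0
        for a, b, c, d in zip(prev["lat"], prev["lon"], logins["lat"], logins["lon"])
    ]
    hours = (logins["login_time"] - prev["login_time"]).dt.total_seconds() / 3600.0
    feasible_km = hours.fillna(float("inf")) * MAX_FEASIBLE_SPEED_KMH
    logins["suspicious"] = pd.Series(distance_km, index=logins.index) > feasible_km
    return logins

# Example usage: New York at 09:00, London one hour later -> marked suspicious.
logins = pd.DataFrame({
    "user": ["alice", "alice"],
    "login_time": pd.to_datetime(["2020-09-28 09:00", "2020-09-28 10:00"]),
    "lat": [40.7, 51.5],
    "lon": [-74.0, -0.1],
})
print(flag_impossible_travel(logins)[["user", "login_time", "suspicious"]])
```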
The model development unit 40 then applies one or more selected machine learning techniques to the cyber features extracted from the merged security data in order to assess and encode into machine language which cyber features help distinguish everyday ‘normal’ security or cyber data from threat actor based cyber data. The machine learning techniques are commonly available methodologies (e.g., computer science algorithms) that have been proven to work with large volumes of cyber data and are able to capture and identify intricate or detailed patterns in the data. The present invention can optionally allow the users to preselect the machine learning methodology applied to the data prior to processing of the data. The machine learning techniques can be a supervised learning technique (e.g., regression or classification techniques), an unsupervised learning technique (e.g., mining techniques, clustering techniques, and recommendation system techniques), a semi-supervised technique, a self-learning technique, or a reinforcement learning technique. Examples of suitable machine learning techniques include Random Forest, neural networks, clustering, XGBoost, bootstrap XGBoost, deep learning neural networks, decision trees, regression trees, and the like. The machine learning algorithms may also extend from the use of a single algorithm to the use of a combination of algorithms (e.g., an ensemble methodology), may employ existing methods of boosting the algorithmic learning or bagging of results to enhance learning, and may incorporate stochastic and deterministic approaches, and the like, to ensure that the machine learning is comprehensive and complete. As such, the machine learning technique that is employed by the model development unit 40 essentially maps one or more of the input values of the extracted security data to one or more outputs, or determines inferences, patterns, or classifications between the security data and the cyber features based on the extracted security data, and responds accordingly. The output of the model development unit 40 is cyber or output model data in the form, for example, of a computer model that has a well-defined interpretation and can be interpreted and run by commonly available computer code libraries. Further, the model development unit 40 may also incorporate a series of methodologies (e.g., computer algorithms) that allow the models to also output which cyber data features were of highest importance to the decision making while connecting input data with the desired output inference. Methods like Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) may be used to accomplish the importance mapping. The steps taken by the model development unit 40 are sometimes referred to as the machine learning training step, and this step represents the encoding of institutional cyber knowledge (in the form of cyber data features and cyber incident labels for the cyber data) into well-defined computer methodologies.
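Assuming a labeled cyber feature table, the training step might resemble the following sketch, which fits a Random Forest with scikit-learn and uses the shap library for the importance mapping; the feature names, labels, and model settings are hypothetical.

```python
# Hedged training sketch: a Random Forest fit to labeled cyber features,
# with SHAP values for the importance mapping. Feature names, labels, and
# model settings are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

features = pd.DataFrame({
    "login_count_1h": [3, 120, 2, 4, 200, 1, 5, 90],
    "geo_change_km":  [0, 8500, 10, 0, 6000, 0, 5, 3000],
    "failed_logins":  [0, 12, 1, 0, 30, 0, 0, 9],
})
labels = pd.Series([0, 1, 0, 0, 1, 0, 0, 1])    # 1 = analyst-labeled incident

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Importance mapping: which cyber features drove each decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
```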
The model prediction unit 42 can be used to repeatedly label or tag the cyber data generated by the model development unit 40. The model prediction unit 42 then generates one or more inference outputs or prediction data, which may correlate to what humans would have labeled the data had they been present in place of the model, in the form of prediction or probability values and associated information, as well as feature profiles 94 and predictions 96 from a prediction module 106, based on the output model data of the model development unit 40, the cyber features generated by the cyber feature unit 36, and the trained machine learning techniques. The prediction information can be in any selected form or format, and can include a prediction or probability score. The cyber feature unit 36 and the model prediction (ML model) unit 42 can form part of the same software container, such as the Train Classification Docker 88. Further, the Train Classification Docker 88, the model prediction unit 42, and the data merger unit 34 can form part of a common software container, such as the Model Train and Predict Docker 90. The feature profile data 94 and the prediction value data 96 can be separately stored if desired in one or more storage units, such as the storage units 20, 22 of the deployment infrastructure subsystem 18, for subsequent use by the system 10.
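Continuing the hedged training sketch above, the inference step could reduce to scoring new feature records with the trained model and retaining both the probability value and a tag; the tagging threshold shown is an assumption.

```python
# Hedged inference sketch, reusing the model and feature columns fitted in
# the previous sketch; the 0.5 tagging threshold is an assumption.
import pandas as pd

def score_events(model, new_events: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Attach a probability value and a suspicious/normal tag to each event."""
    scored = new_events.copy()
    scored["risk_probability"] = model.predict_proba(new_events)[:, 1]
    scored["tag"] = (scored["risk_probability"] >= threshold).map(
        {True: "suspicious", False: "normal"})
    return scored

# Example usage with two new, unlabeled feature records.
new_events = pd.DataFrame({
    "login_count_1h": [2, 150],
    "geo_change_km":  [0, 7000],
    "failed_logins":  [0, 20],
})
print(score_events(model, new_events))
```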
The illustrated cyber security monitoring and mitigation system 10 can also include a model training and governance unit 100 for training the machine learning techniques employed by the system and for providing model governance of the techniques. The model governance helps establish the rules and controls for the machine learning techniques employed by the system, including access control, testing, validation, change and access logs, and the traceability of model results. Further, the model training can occur in the model training and governance unit 100 based on prior learning data sets as well as current data sets. The data sets can include if desired learning security data as well as real time security data. The unit 100 can also extract and/or determine selected types of data if desired, including performance metrics, model parameters, feature importance information, feature profile information, model files, LIME explanation related information, and the like.
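For illustration, the kind of governance record such a unit might retain (model parameters, performance metrics, feature importances, and a traceable timestamp) is sketched below; the file layout and field names are assumptions rather than the system's actual governance scheme.

```python
# Hedged governance-record sketch; file layout and field names are
# assumptions, not the system's actual governance scheme.
import json
from datetime import datetime, timezone

import joblib

def record_model_version(model, metrics: dict, feature_names: list, path_prefix: str):
    """Persist the model file plus a traceable metadata record."""
    joblib.dump(model, f"{path_prefix}.joblib")                  # model file
    record = {
        "trained_at": datetime.now(timezone.utc).isoformat(),    # traceability
        "model_parameters": model.get_params(),                  # change-control input
        "performance_metrics": metrics,                          # e.g., accuracy, recall
        "feature_importance": dict(
            zip(feature_names, model.feature_importances_.tolist())),
    }
    with open(f"{path_prefix}.json", "w") as handle:
        json.dump(record, handle, indent=2, default=str)

# Example usage, continuing the training sketch above.
record_model_version(
    model, {"holdout_accuracy": float(model.score(X_test, y_test))},
    list(features.columns), "rf_risk_model_v1")
```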
The fully processed security data and the associated prediction information generated by the model prediction unit 42 are conveyed to the results integrator unit 44. In the current example, the results integrator unit 44 can include a data visualization unit 110. The data visualization unit 110 can include any selected hardware and associated visualization software for generating reports or graphs for display on a suitable display device. The display device can form part of the system or can form part of an electronic or computer device that communicates with the system, as is known. The reports can be preselected or can be customized to present or display the processed security data and associated predictions in a suitable manner. The data visualization unit 110 can include any selected software application suitable for generating the reports and graphs, such as for example Splunk from Splunk Inc., USA. Examples of the reports or user interfaces that can be generated by the data visualization unit 110 are shown in
After the prediction data and the security data are received by the data visualization unit 110, the unit 110 or an associated user interface unit can generate one or more selected reports or user interfaces.
The left topmost pane element 122c can be configured as an Identity Risk pane element for illustrating through a graphical element 124 an identity risk score or value. Specifically, the risk score can be a unique monotonically increasing number that maps to the perceived or assigned risk for a specific machine, employee, contractor, vendor, customer, or electronic device (e.g., mobile phone, computer, and the like) that is identified as a “known entity” to the institution. According to one practice, the Identity Risk pane element can be structured to cover a selected time increment or amount, such as for example one hour. Those of ordinary skill in the art will readily recognize that the time span or duration can be for any selected length of time. The graphical element 124 can be any desired graphical element that easily and readily displays the identity risk data to the user or observer. In the current example, the graphical element can take the form of a number set in or on a background 126. The number represents the extent to which the specific risk is of concern to the institution. The background 126 can be structured so as to display the security data in a visually distinctive manner that easily and readily imparts to the viewer the importance of the information in the pane element 122c. In the current example, a suitable color background can be employed to visually indicate the importance or risk profile of the information in the pane element.
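Purely as an illustration of how a monotonically increasing risk score and its display emphasis could be derived from a model probability, consider the following sketch; the score scale and color bands are assumptions.

```python
# Hedged sketch: map a prediction probability to a monotonically increasing
# risk score and a display emphasis. The scale and bands are assumptions.
def identity_risk_score(probability: float) -> int:
    """Monotonic mapping of a [0, 1] probability to a 0-1000 risk score."""
    clipped = max(0.0, min(1.0, probability))
    return int(round(clipped * 1000))

def background_for(score: int) -> str:
    """Pick a visually distinctive background color for the pane element."""
    if score >= 800:
        return "red"
    if score >= 500:
        return "amber"
    return "green"

score = identity_risk_score(0.87)
print(score, background_for(score))       # e.g., 870 on a red background
```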
The right top pane element 122a can be a Network Risk pane element illustrating through a graphical element 134 a network risk score or value. Specifically, the risk score can be a unique monotonically increasing number that maps to the perceived or assigned risk for a specific electronic device and the like from performing computer network traffic activity that seems improper to the institution. The network risk score indicates the overall risk to the network from cyber-attacks and the like. According to one practice, the Network Risk pane element can be structured to cover a selected time increment or amount, such as for example one hour. Those of ordinary skill in the art will readily recognize that the time span or duration can be for any selected length of time. The graphical element 134 can be any desired graphical element that easily and readily displays the network risk data to the user or observer. In the current example, the graphical element can take the form of a number set on a suitable background 136 that represents an example of a network threat. The background 136 can be structured so as to display the security data or associated score in a visually distinctive manner that easily and readily imparts to the viewer the importance of the information in the pane element 122a. In the current example, a suitable color background can be employed to visually indicate the importance or risk profile of the information in the pane element 122a.
The left bottom pane element 122d can be an Endpoint Risks pane element illustrating through a graphical element 144 an endpoint risk score or value. Specifically, the endpoint risk score can be a unique monotonically increasing number that maps to the perceived or assigned risk for a specific electronic device and the like, because the electronic device characteristics at a certain moment in time do not correlate with what is known within the system as being a predefined normal score. According to one practice, the Endpoint Risk pane element can be structured to cover a selected time increment or amount, such as for example one hour. Those of ordinary skill in the art will readily recognize that the time span or duration can be for any selected length of time. The graphical element 144 can be any desired graphical element that easily and readily displays the endpoint risk data to the user or observer. In the current example, the graphical element can take the form of a number set on a background 146. The background 146 can also be structured so as to display the security data in a visually distinctive manner that easily and readily imparts to the viewer the importance of the information in the pane element 122d. In the current example, a suitable color background can be employed to visually indicate the importance or risk profile of the information in the pane element 122d.
The right bottom pane element 122b can be a Data Loss Risk pane element illustrating through a graphical element 154 a data loss risk score or value. Specifically, the data loss risk score can be a unique monotonically increasing number that maps to the perceived or assigned risk of actually losing, corrupting, or misusing enterprise, customer or employee data. The data loss risk indicates the likelihood that data can be lost based on real time cyber-attacks or threats to the system. According to one practice, the Data Loss Risk pane element can be structured to cover a selected time increment or amount, such as for example one day. Those of ordinary skill in the art will readily recognize that the time span or duration can be for any selected length of time. The graphical element 154 can be any desired graphical element that easily and readily displays the data loss risk data to the user or observer. In the current example, the graphical element can take the form of a number set on a background 156. The background 156 can be structured so as to display the security data in a visually distinctive manner that easily and readily imparts to the viewer the importance of the information in the pane element 122b. In the current example, a suitable color background can be employed to visually indicate the importance or risk profile of the information in the pane element.
The bottommost pane element 122e can be a Traffic Origins pane element illustrating through a graphical element 164 the origins of the traffic on the network. The graphical element 164 can be any desired graphical element that easily and readily displays the traffic origin data to the user or observer. In the current example, the graphical element can take the form of a world map that includes visual identifiers 168 identifying the location of the traffic on the network. The identifier can be sized so as to correspond to the volume of network traffic emanating from any of the identified locations. That is, the visual identifier can have a size that corresponds to the size of the data traffic emanating or originating in that region.
Similar to the pane elements 122 described above, a further user interface generated by the data visualization unit 110 can include a plurality of pane elements 172 that display selected security and risk related information through associated graphical elements.
The top middle pane element 172b can be configured as a Third Party Authorization Score pane element for illustrating through a graphical element a risk probability distribution. The system can employ suitable software application tools that obtain risk level information for each login and then assign each login an authorization score. To confirm that the score is effective in identifying threat and fraudulent activities in the system, the tool compares a behavioral based risk score with the authorization score. The right top pane element 172c can be configured as a Number of Users Identified As Compromised pane element for illustrating through a graphical element the number of system users that are compromised. According to one practice, the pane element 172c can be structured to cover a selected time increment or amount, such as for example one day.
The left bottom pane element 172d can display multiple graphical elements related to High Risk IP to Investigate and High Risk Account to Investigate. The graphical elements can relate to the number of IP addresses to review so as to determine if they are compromised. The middle bottom pane element 172e can be configured as a High Risk Users Based Machine Learning (ML) Risk Score Over Time pane element for illustrating through a graphical element a risk score, generated by the model prediction unit 42, over time. According to one practice, the pane element 172e can be structured to cover any selected time increment or amount. The right bottom pane element 172f can be configured as an Indicator of Compromise pane element for illustrating through a graphical element indicator of compromise data. The graphical element represents the two common scenarios of cyber threat to the system, namely high risk IP addresses and high risk accounts. The number represents the cases that analysts need to investigate for these two threat scenarios.
The illustrated window element 190 also includes a lower pane element 194C that can be configured as an Individual IP Investigation pane element that can include graphical elements that illustrate the number of high risk sessions of one or more users as well as the number of low risk sessions.
It should be appreciated that the various concepts, methods, and systems introduced above and discussed below may be implemented in any number of ways, as the disclosed concepts are not limited to any particular manner of implementation or system configuration. Examples of specific implementations and applications are provided below primarily for illustrative purposes and for providing or describing the operating environment and associated hardware of the cyber security monitoring and mitigation system 10 of the present invention. The cyber security monitoring and mitigation system 10 of the present invention can employ a plurality of electronic devices, such as one or more servers, clients, computers and the like, that are networked together or which are arranged so as to effectively communicate with each other. Specifically, one or more of the aforementioned data preprocessing unit 32, including the data profiler unit 60 and the data cleaner unit 70, the data connector unit 30, the data merger unit 34, the AI module 38, the model development unit 40, the model prediction unit 42, and the results integrator unit 44, can be implemented in software, hardware, or a combination of both, and preferably one or more of the units can be implemented via one or more electronic devices employing suitable software applications to perform the functions associated with that device. The network 14 can be any type or form of network. The electronic devices can be on the same network or on different networks. In some embodiments, the network system may include multiple, logically-grouped servers. In one of these embodiments, the logical group of servers may be referred to as a server farm or a machine farm. In another of these embodiments, the servers may be geographically dispersed. The electronic devices can communicate through wired connections or through wireless connections. The clients can also be generally referred to as local machines, clients, client nodes, client machines, client computers, client devices, endpoints, or endpoint nodes. The servers can also be referred to herein as servers, nodes, or remote machines. In some embodiments, a client has the capacity to function as both a client or client node seeking access to resources provided by a server or node and as a server providing access to hosted resources for other clients. The clients can be any suitable electronic or computing device, including for example, a computer, a server, a smartphone, a smart electronic pad, a portable computer, and the like, such as the electronic device 300 illustrated in
The illustrated electronic device 300 can include any suitable electronic circuitry that includes a main memory unit 305 that is connected to a processor 311 having a CPU 315 and a cache unit 340 configured to store copies of the data from the most frequently used main memory 305.
Further, the methods and procedures for carrying out the methods disclosed herein can be performed by one or more programmable processors executing a computer program to perform the functions, operations, and methods of the present invention by operating on input data and generating output data. Further, the methods and procedures disclosed herein can also be performed by, and the apparatus disclosed herein can be implemented as, special purpose logic circuitry, such as a FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Modules and units disclosed herein can also refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
The processor 311 can be any logic circuitry that responds to, processes or manipulates instructions received from the main memory unit, and can be any suitable processor for execution of a computer program. For example, the processor 311 can be a general and/or special purpose microprocessor and/or a processor of a digital computer. The CPU 315 can be any suitable processing unit known in the art. For example, the CPU 315 can be a general and/or special purpose microprocessor, such as an application-specific instruction set processor, graphics processing unit, physics processing unit, digital signal processor, image processor, coprocessor, floating-point processor, network processor, and/or any other suitable processor that can be used in a digital computing circuitry. Alternatively or additionally, the processor can comprise at least one of a multi-core processor and a front-end processor. Generally, the processor 311 can be embodied in any suitable manner. For example, the processor 311 can be embodied as various processing means such as a microprocessor or other processing element, a coprocessor, a controller or various other computing or processing devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a hardware accelerator, or the like. Additionally or alternatively, the processor 311 can be configured to execute instructions stored in the memory 305 or otherwise accessible to the processor 311. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 311 can represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to embodiments disclosed herein while configured accordingly. Thus, for example, when the processor 311 is embodied as an ASIC, FPGA or the like, the processor 311 can be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor 311 is embodied as an executor of software instructions, the instructions can specifically configure the processor 311 to perform the operations described herein. In many embodiments, the central processing unit 315 is provided by a microprocessor unit, e.g.: those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by Motorola Corporation of Schaumburg, Ill.; the ARM processor and TEGRA system on a chip (SoC) manufactured by Nvidia of Santa Clara, Calif.; the POWER7 processor, those manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The processor can be configured to receive and execute instructions received from the main memory 305.
The illustrated electronic device 300 applicable to the hardware of the present invention can be based on any of these processors, or any other processor capable of operating as described herein. The central processing unit 315 may utilize instruction level parallelism, thread level parallelism, different levels of cache, and multi-core processors. A multi-core processor may include two or more processing units on a single computing component. Examples of multi-core processors include the AMD PHENOM II X2, INTEL CORE i5, and INTEL CORE i7.
The processor 311 and the CPU 315 can be configured to receive instructions and data from the main memory 305 (e.g., a read-only memory or a random access memory or both) and execute the instructions. The instructions and other data can be stored in the main memory 305. The processor 311 and the main memory 305 can be included in or supplemented by special purpose logic circuitry. The main memory unit 305 can include one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the processor 311. The main memory unit 305 may be volatile and faster than other memory in the electronic device, or can be dynamic random access memory (DRAM) or any variants, including static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Single Data Rate Synchronous DRAM (SDR SDRAM), Double Data Rate SDRAM (DDR SDRAM), Direct Rambus DRAM (DRDRAM), or Extreme Data Rate DRAM (XDR DRAM). In some embodiments, the main memory 305 may be non-volatile; e.g., non-volatile read access memory (NVRAM), flash memory, non-volatile static RAM (nvSRAM), Ferroelectric RAM (FeRAM), Magnetoresistive RAM (MRAM), Phase-change memory (PRAM), conductive-bridging RAM (CBRAM), Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive RAM (RRAM), Racetrack, Nano-RAM (NRAM), or Millipede memory. The main memory 305 can be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. The main memory can be configured to communicate with other system memory, including without limitation the storage elements 20, 22.
In the embodiment shown in
The main memory 305 can comprise an operating system 320 that is configured to implement various operating system functions. For example, the operating system 320 can be responsible for controlling access to various devices, memory management, and/or implementing various functions of the cyber security monitoring and mitigation system disclosed herein. Generally, the operating system 320 can be any suitable system software that can manage computer hardware and software resources and provide common services for computer programs.
The main memory 305 can also hold suitable application software 330. For example, the main memory 305 and application software 330 can include computer executable instructions, application software, and data structures that implement various aspects of the embodiments described herein, such as computer executable instructions and data structures that implement various aspects of the cyber security monitoring and mitigation system disclosed herein, including the processing and capture of information. Generally, the functions performed by the systems disclosed herein can be implemented in digital electronic circuitry or in computer hardware that executes software, firmware, or combinations thereof. The implementation can be as a computer program product (e.g., a computer program tangibly embodied in a non-transitory machine-readable storage device) for execution by, or to control the operation of, a data processing apparatus (e.g., a computer, a programmable processor, or multiple computers). Generally, the program codes that can be used with the embodiments disclosed herein can be implemented and written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a component, module, subroutine, or other unit suitable for use in a computing environment. A computer program can be configured to be executed on a computer, or on multiple computers, at one site or distributed across multiple sites and interconnected by a communications network, such as the Internet.
The processor 311 can further be coupled to a database or data storage 380. The data storage 380 can be configured to store information and data relating to various functions and operations of the systems disclosed herein. For example, as detailed above, the data storage 380 can store information including but not limited to captured information, multimedia, processed information, and characterized content.
A wide variety of I/O devices may be present in or connected to the electronic device 300. For example, the device can include a display 370. The display 370 can be configured to display information and instructions received from the processor 311. Further, the display 370 can generally be any suitable display available in the art, for example a liquid crystal display (LCD), a light emitting diode (LED) display, a digital light processing (DLP) display, a liquid crystal on silicon (LCOS) display, an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a liquid crystal laser display, a time-multiplexed optical shutter (TMOS) display, a 3D display, or an electronic paper (e-ink) display. Furthermore, the display 370 can be a smart and/or touch sensitive display that can receive instructions from a user and forward the received information to the processor 311. The display can be associated with one or more of the system units, such as the results integrator unit 44, and can be employed to display the user interfaces set forth in
The electronic device 300 can also include an Input/Output (I/O) interface 350 that is configured to connect the processor 311 to various interfaces via an input/output (I/O) device interface 380. The device 300 can also include a communications interface 360 that is responsible for providing the circuitry 300 with a connection to a communications network (e.g., communications network 120). Transmission and reception of data and instructions can occur over the communications network.
It will thus be seen that the invention efficiently attains the objects set forth above, among those made apparent from the preceding description. Since certain changes may be made in the above constructions without departing from the scope of the invention, it is intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense.
It is also to be understood that the following claims are to cover all generic and specific features of the invention described herein, and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween.
Having described the invention, what is claimed as new and desired to be secured by Letters Patent is:
The present application is a continuation of U.S. patent application Ser. No. 17/034,788, entitled SYSTEM AND METHOD FOR MONITORING, MEASURING, AND MITIGATING CYBER THREATS TO A COMPUTER SYSTEM, filed on Sep. 28, 2020, the contents of which are herein incorporated by reference.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 17034788 | Sep 2020 | US |
| Child | 17493138 | | US |