The present invention relates to computer security, and in particular to preventing attackers from breaching computer networks.
Reference is made to
Access to computers 110 and databases 120 in network 100 may optionally be governed by an access governor 150, such as a directory service, that authorizes users to access computers 110 and databases 120 based on “credentials”. Access governor 150 may be a name directory, such as ACTIVE DIRECTORY® developed by Microsoft Corporation of Redmond, Wash., for WINDOWS® environments. Background information about ACTIVE DIRECTORY® is available at Wikipedia. Other access governors for WINDOWS and non-WINDOWS environments include inter alia Lightweight Directory Access Protocol (LDAP), Remote Authentication Dial-In User Service (RADIUS), and Apple Filing Protocol (AFP), formerly APPLETALK®, developed by Apple Inc. of Cupertino, Calif. Background information about LDAP, RADIUS and AFP is available at Wikipedia.
Access governor 150 may be one or more local machine access controllers. Access governor 150 may be one or more authorization servers, such as a database server or an application server.
In lieu of access governor 150, the endpoints and/or servers of network 100 determine their local access rights.
Credentials for accessing computers 110 and databases 120 include inter alia server account credentials such as <address> <username> <password> for an FTP server, an SQL server, or an SSH server. Credentials for accessing computers 110 and databases 120 also include user login credentials <username> <password>, or <username> <ticket>, where “ticket” is an authentication ticket, such as a ticket for the Kerberos authentication protocol or NTLM hash used by Microsoft Corp., or login credentials via certificates or via another implementation used today or in the future. Background information about the Kerberos protocol and the LM hash is available at Wikipedia.
Access governor 150 may maintain a directory of computers 110, databases 120 and their users. Access governor 150 authorizes users and computers, assigns and enforces security policies, and installs and updates software. When a user logs into a computer 110, access governor 150 checks the submitted password, and determines if the user is an administrator (admin), a normal user (user) or other user type.
Computers 110 may run a local or remote security service, which is an operating system process that verifies users logging in to computers, to single sign-on systems, and to other credential storage systems.
Network 100 may include a security information and event management (SIEM) server 160, which provides real-time analysis of security alerts generated by network hardware and applications. Background information about SIEM is available at Wikipedia.
Network 100 may include a domain name system (DNS) server 170, or such other name service system, for translating domain names to IP addresses. Background information about DNS is available at Wikipedia.
Network 100 may include a firewall 180 located within a demilitarized zone (DMZ), which is a gateway between enterprise network 100 and external internet 10. Firewall 180 controls incoming and outgoing traffic for network 100. Background information about firewalls and DMZ is available at Wikipedia.
One of the most prominent threats that organizations face is a targeted attack; i.e., an individual or group of individuals that attacks the organization for a specific purpose, such as stealing data, using data and systems, modifying data and systems, and sabotaging data and systems. Targeted attacks are carried out in multiple stages, typically including inter alia reconnaissance, penetration, lateral movement and payload. Lateral movement involves orientation, movement and propagation, and includes establishing a foothold within the organization and expanding that foothold to additional systems within the organization.
In order to carry out the lateral movement stage, an attacker, whether a human being operating tools within the organization's network or a tool with “learning” capabilities, learns information about the environment it is operating in, such as network topology and organization structure, learns “where can I go from my current step” and “how can I go from my current step (privileges required)”, learns which security solutions are implemented, and then operates in accordance with that data. One method to defend against such attacks, termed “honeypots”, is to plant and monitor misleading information/decoys/bait, with the objective that the attacker learn of their existence and consume those bait resources, and to notify an administrator of the malicious activity. Background information about honeypots is available at Wikipedia.
Conventional honeypot systems operate by monitoring access to a supervised element in a computer network. Access monitoring generates many false alerts, caused by non-malicious access from automatic monitoring systems and by user mistakes. Conventional systems try to mitigate this problem by adding a level of interactivity to the honeypot, and by performing behavioral analysis of suspected malware if it has infected the honeypot itself.
An advanced attacker may use different attack techniques to enter a corporate network and to move laterally within the network in order to obtain his resource goals. The advanced attacker may begin with a workstation, server or any other network entity to start his lateral movement. He uses different methods to enter the first network node, including inter alia social engineering, an existing exploit and/or vulnerability that he knows how to exercise, and a Trojan horse or any other malware allowing him to control the first node.
Reference is made to
Exemplary attack vectors include inter alia credentials of users with enhanced privileges, existing share names on different servers, and details of an FTP server, an email server, an SQL server or an SSH server and its credentials. Attack vectors are often available to an attacker because a user did not log off his workstation or clear his cache. E.g., if a user contacted a help desk and gave the help desk remote access to his workstation and did not log off his workstation, then the help desk access credentials may still be stored in the user's local cache and available to the attacker. Similarly, if the user accessed an FTP server, then the FTP account login parameters may be stored in the user's local cache or profile and available to the attacker.
Attack vectors enable inter alia a move from workstation A→server B based on a shared name and its credentials, connection to a different workstation using local admin credentials that reside on a current workstation, and connection to an FTP server using specific access credentials.
Reference is made to
When the attacker implements such a discovery process on all nodes in the network, he is able to “see” all attack vectors of the corporate network and generate a “maximal attack map”. Before the attacker discovers all attack vectors on network nodes and completes the discovery process, he has a “current attack map” of the attack vectors currently available to him.
An objective of the attacker is to discover an attack path that leads him to a target network node. The target may be a bank authorized server that is used by the corporation for ordering bank account transfers of money, it may be an FTP server that updates the image of all corporate points of sale, it may be a server or workstation that stores confidential information such as source code and secret formulas of the corporation, or it may be any other network node that is of value to the attacker and is his “attack goal node”.
When the attacker lands on the first node but does not yet know how to reach the attack goal node, he expands his current attack map incrementally until it leads to the attack goal node.
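By way of illustration only, the following sketch models an attack map as a directed graph and searches it for an attack path from the attacker's first node to the attack goal node. The node names and attack vectors are hypothetical, and the sketch is not part of the claimed embodiments.

```python
from collections import deque

# Minimal sketch: an attack map modeled as a directed graph, where an edge
# A -> B means "an attack vector on node A enables movement to node B".
# Node names and vectors are hypothetical, for illustration only.
attack_map = {
    "workstation-A": ["server-B", "workstation-C"],   # shared-name credentials, cached admin
    "server-B": ["ftp-server"],                       # FTP login parameters found on B
    "workstation-C": [],
    "ftp-server": ["bank-server"],                    # attack goal reachable from the FTP server
}

def find_attack_path(attack_map, start, goal):
    """Breadth-first search for an attack path from the first node to the goal node."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in attack_map.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal not reachable from the current attack map

print(find_attack_path(attack_map, "workstation-A", "bank-server"))
# ['workstation-A', 'server-B', 'ftp-server', 'bank-server']
```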
It is common today for networks to include containerized clusters. Conventional network security systems are designed for non-containerized networks. Containerized networks, which include many servers, are more likely to hold sensitive data and services than non-containerized environments, which consist mostly of workstations that generally do not hold sensitive data.
Current network security solutions prevent attacks by examining configuration files and noticing violations. Such solutions always involve a human element, and there are always errors in the environment that they cannot address.
It would thus be of great advantage to have methods and systems to protect against attackers who target containerized clusters.
Embodiments of the present invention provide methods and systems to protect against attackers who target containerized clusters.
Embodiments of the present invention address containerized networks. These embodiments detect attackers as they land on container instances and push their way towards the most important assets in the environment, referred to as “crown jewels”. These embodiments hinder attackers in case the container orchestrator and configuration files have been compromised.
Embodiments of the present invention detect, with no false positives, attackers who exploit human errors; specifically, attackers who land on a specific instance, generally from the outside world, and attackers who reach the orchestrator/configuration, either from the API or by actually finding the files.
Embodiments of the present invention provide approaches to generating deceptions that protect the orchestrator, and modify the data that might be intercepted by an attacker in ways that lead the attacker towards traps.
There is thus provided in accordance with an embodiment of the present invention a system for detecting and hindering attackers who target containerized clusters, including a container orchestrator that manages, deploys and monitors a number of container instances, a container registry comprising a collection of configuration files that hold the definition of the environment that is managed by the container orchestrator, at least one host, at least one database, at least one file share, and a management server that learns the environment, creates deceptions in accordance with the environment learned, plants the created deceptions via the container orchestrator, via the container registry, and via secure shell (SSH) directly to the containers, and issues an alert when an attacker attempts to connect to a deceptive entity.
There is additionally provided in accordance with an embodiment of the present invention a method for operation of a deception management server, for detecting and hindering attackers who target containerized clusters of a network, including learning the network environment, including finding existing container instances, finding existing services and relationships, extracting naming conventions in the environment, and classifying the most important assets in the environment, creating deceptions based on the learning phase, the deceptions including one or more of (i) secrets, (ii) environment variables pointing to deceptive databases, web servers or active directories, (iii) mounts, (iv) additional container instances comprising one or more of file server, database, web applications and SSH, (v) URLs to external services, and (vi) namespaces to fictional environments, planting the created deceptions via a container orchestrator, via SSH directly to the containers, or via the container registry, and issuing an alert when an attacker attempts to connect to a deceptive entity.
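By way of illustration only, the following sketch outlines the four phases of the above method: learning, creation, planting and alerting. The objects, names and values are hypothetical stand-ins for an orchestrator and registry interface, under the assumption that deceptions are named to match the learned naming convention.

```python
# Minimal sketch of the method's four phases: learn, create, plant, alert.
# The orchestrator object, names and values are hypothetical stand-ins and do
# not represent an actual container-orchestrator API.

def learn_environment(orchestrator):
    """Learning phase: existing instances, a naming convention, and the crown jewels."""
    instances = orchestrator["instances"]
    prefix = instances[0].split("-")[0] if instances else "svc"
    crown_jewels = [i for i in instances if "db" in i or "billing" in i]
    return {"instances": instances, "prefix": prefix, "crown_jewels": crown_jewels}

def create_deceptions(env):
    """Creation phase: deceptions follow the learned naming convention so they blend in."""
    return [
        {"type": "secret", "name": f"{env['prefix']}-db-password", "value": "decoy"},
        {"type": "env_var", "name": "DB_HOST", "value": f"{env['prefix']}-db-decoy.internal"},
        {"type": "container", "name": f"{env['prefix']}-ssh-decoy"},
    ]

def plant(deceptions, orchestrator):
    """Planting phase: via the orchestrator, the registry/configuration files, or SSH."""
    orchestrator.setdefault("planted", []).extend(deceptions)

def on_access(deception_name):
    """Alerting phase: any touch of a deceptive entity is, by construction, suspicious."""
    print(f"ALERT: attempt to connect to deceptive entity {deception_name}")

orchestrator = {"instances": ["shop-web", "shop-db", "shop-billing"]}
environment = learn_environment(orchestrator)
plant(create_deceptions(environment), orchestrator)
on_access(f"{environment['prefix']}-ssh-decoy")
```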
The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
For reference to the figures, the following index of elements and their numerals is provided. Similarly numbered elements represent elements of the same type, but they need not be identical elements.
Elements numbered in the 1000's are operations of flow charts.
The following definitions are employed throughout the specification.
In accordance with embodiments of the present invention, systems and methods are provided to protect against attackers who target containerized clusters.
Reference is made to
Any or all of the components of network 200 may be replaced by containers that are managed by an orchestrator 310 (
A deception approach to protecting such an orchestrator requires modifying the data that might be intercepted, in such a way as to lead an attacker to traps.
Database 220 stores attack vectors that fake movement and access to computers 110, databases 120 and other resources in network 200. Attack vectors include inter alia:
The attack vectors stored in database 220 are categorized by families, such as inter alia
Credentials for a computer B that reside on a computer A provide an attack vector for an attacker from computer A→computer B.
Database 220 communicates with an update server 250, which updates database 220 as attack vectors for accessing, manipulating and hopping to computers evolve over time.
Policy database 230 stores, for each group of computers, G1, G2, . . . , policies for planting decoy attack vectors in computers of that group. Each policy specifies decoy attack vectors that are planted in each group, in accordance with attack vectors stored in database 220. For user credentials, the decoy attack vectors planted on a computer lead to another resource in the network. For attack vectors to access an FTP or other server, the decoy attack vectors planted on a computer lead to a trap server 240.
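By way of illustration only, a per-group planting policy of the kind stored in policy database 230 may be sketched as follows; the group names, decoy values and destinations are hypothetical.

```python
# Illustrative sketch of per-group planting policies of the kind stored in policy
# database 230. Group names, decoy values and destinations are hypothetical.
policies = {
    "G1": [  # e.g., finance workstations
        # decoy user credentials lead to another resource in the network
        {"decoy": "cached_credentials", "username": "fin-admin", "leads_to": "server-B"},
        # decoy FTP access details lead to a trap server 240
        {"decoy": "ftp_account", "address": "ftp.decoy.internal",
         "username": "backup", "password": "decoy", "leads_to": "trap-server-240"},
    ],
    "G2": [  # e.g., developer workstations
        {"decoy": "ssh_credentials", "host": "build.decoy.internal", "leads_to": "trap-server-240"},
    ],
}
```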
It will be appreciated by those skilled in the art that databases 220 and 230 may be combined into a single database, or distributed over multiple databases.
Deception management server 210 includes a policy manager 211, a deployment module 212, and a forensic application 213. Policy manager 211 defines a decoy and response policy. The response policy defines different decoy types, different decoy combinations, response procedures, notification services, and assignments of policies to specific network nodes, network users, groups of nodes or users or both. Once policies are defined, they are stored in policy database 230 with the defined assignments.
Deception management server 210 obtains the policies and their assignments from policy database 230, and delivers them to appropriate nodes and groups. It then launches deployment module 212 to plant decoys in end points, servers, applications, routers, switches, relays and other entities in the network. Deployment module 212 plants each decoy, based on its type, in memory (RAM), disk, or in any other data or information storage area, as appropriate. Deployment module 212 plants the decoy attack vectors in such a way that the chances of a valid user accessing the decoy attack vectors are low. Deployment module 212 may or may not stay resident.
Forensic application 213 is a real-time application that is transmitted to a destination computer in the network, when a decoy attack vector is accessed by a computer 110. When forensic application 213 is launched on the destination computer, it identifies a process running within that computer 110 that accessed that decoy attack vector, logs the activities performed by the thus-identified process in a forensic report, and transmits the forensic report to deception management server 210.
Once an attacker is detected, a “response procedure” is launched. The response procedure includes inter alia various notifications to various addresses, and actions on a trap server such as launching an investigation process, and isolating, shutting down and re-imaging one or more network nodes. The response procedure collects information available on one or more nodes that may help in identifying the attacker's attack acts, attention and progress.
Each trap server 240 may be in the form of a container instance, a mounted folder and agent, and/or a real trap server. Each trap server 240 includes a tar-pit module 241, which is a process that purposely delays incoming connections, thereby providing additional time for forensic application 213 to launch and log activities on a computer 110 that is accessing the trap server. Each trap server 240 also includes a forensic alert module 242, which alerts deception management server 210 that an attacker is accessing the trap server via a computer 110 of the network, and causes deception management server 210 to send forensic application 213 to the computer that is accessing the trap server. In an alternative embodiment of the present invention, trap server 240 may store forensic application 213, in which case trap server 240 may transmit forensic application 213 directly to the computer that is accessing the trap server. In another alternative embodiment of the present invention, deception management server 210 or trap server 240 may transmit forensic application 213 to a destination computer other than the computer that is accessing the trap server 240.
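By way of illustration only, the following sketch shows how a tar-pit such as tar-pit module 241 may purposely delay an incoming connection; the port, banner and delay values are hypothetical, and an actual implementation would serve connections concurrently.

```python
import socket
import time

# Minimal sketch of a tar-pit: a TCP listener that accepts a connection and then
# purposely stalls it, buying time for the forensic application to be delivered to
# and launched on the accessing computer. The port, banner and delays are
# illustrative, and connections are handled one at a time for brevity.
def tarpit(port=2121, delay_seconds=5):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        print(f"ALERT: connection from {addr[0]} to trap service on port {port}")
        try:
            # Dribble the banner out one byte at a time with long pauses, so the
            # attacker's tool keeps waiting instead of moving on.
            for byte in b"220 service ready\r\n":
                conn.sendall(bytes([byte]))
                time.sleep(delay_seconds)
        except OSError:
            pass
        finally:
            conn.close()

if __name__ == "__main__":
    tarpit()
```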
Notification servers (not shown) are notified when an attacker uses a decoy. The notification servers may discover this by themselves, or by using information stored on access governor 150 and SIEM 160. The notification servers forward notifications, or results of processing multiple notifications, to create notification time lines or such other analytics.
Reference is made to
At operation 1105, deployment module 212 plants decoy attack vectors in computers 110 in accordance with the policies in database 230. At operation 1110, trap server B recognizes that it is being accessed from a computer A via a decoy attack vector. At operation 1115, tar-pit module 241 of trap server B delays access to data and resources on trap server B. The delaying performed at operation 1115 provides additional time for trap server B to send a request to deception management server 210 to transmit forensic application 213 to computer A, and for computer A to receive and run forensic application 213. At operation 1120, trap server B sends a request to deception management server 210, to transmit real-time forensic application 213 to computer A.
At operation 1125, deception management server 210 receives the request sent by trap server B, and at operation 1130 deception management server 210 transmits forensic application 213 to computer A.
At operation 1135, computer A receives forensic application 213 from deception management server 210, and launches the application. At operation 1140, forensic application 213 identifies a process, P, running on computer A that is accessing trap server B. At operation 1145, forensic application 213 logs activities performed by process P. At operation 1150, forensic application 213 transmits a forensic report to deception management server 210. Finally, at operation 1155, deception management server 210 receives the forensic report from computer A.
In accordance with an alternative embodiment of the present invention, trap server B may store forensic application 213, in which case trap server B may transmit forensic application 213 directly to computer A, and operations 1120, 1125 and 1130 can be eliminated.
In accordance with another alternative embodiment of the present invention, forensic application 213 is transmitted by deception management server 210 or by trap server B to a destination computer other than computer A. When the destination computer launches forensic application 213, the application communicates with computer A to identify the process, P, running on computer A that is accessing trap server B, log the activities performed by process P, and transmit the forensic report to deception management server 210.
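By way of illustration only, the following sketch shows the forensic step of operations 1140-1150: identifying the local process that holds a connection to the trap server, assembling a report of its activity, and transmitting the report to the deception management server. It assumes the third-party psutil library is available on the destination computer, and the addresses shown are hypothetical.

```python
import json
import socket
import psutil  # third-party library, assumed available on the destination computer

# Minimal sketch of operations 1140-1150: identify the process connected to the
# trap server, build a forensic report of its activity, and transmit the report.
# The addresses and report fields are illustrative only.
TRAP_SERVER_IP = "10.0.0.240"
MANAGEMENT_SERVER = ("10.0.0.210", 9000)

def identify_process(trap_ip):
    """Find the local process holding a connection to the trap server."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.raddr.ip == trap_ip and conn.pid:
            return psutil.Process(conn.pid)
    return None

def build_report(proc):
    """Log a snapshot of the identified process's activity."""
    return {
        "pid": proc.pid,
        "name": proc.name(),
        "exe": proc.exe(),
        "cmdline": proc.cmdline(),
        "username": proc.username(),
        "open_files": [f.path for f in proc.open_files()],
    }

def send_report(report):
    """Transmit the forensic report to the deception management server."""
    with socket.create_connection(MANAGEMENT_SERVER, timeout=5) as sock:
        sock.sendall(json.dumps(report).encode())

if __name__ == "__main__":
    process = identify_process(TRAP_SERVER_IP)
    if process is not None:
        send_report(build_report(process))
```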
Reference is made to
At operation 1205, deployment module 212 plants decoy credentials in computers 110 in accordance with the policies in database 230. At operation 1210, access governor 150 receives an authorization request from a computer A for a login to a computer B using invalid user credentials. At operation 1215, access governor 150 reports the attempted invalid login to SIEM server 160.
At operation 1225, deception management server 210 identifies an invalid login attempt event reported by SIEM server 160, and at operation 1230 deception management server 210 transmits real-time forensic application 213 to computer A.
At operation 1235, computer A receives forensic application 213 from deception management server 210, and launches the application. At operation 1240, forensic application 213 identifies a process, P, running on computer A that is accessing computer B. At operation 1245, forensic application 213 logs activities performed by process P. At operation 1250, forensic application 213 transmits a forensic report to deception management server 210. Finally, at operation 1255, deception management server 210 receives the forensic report from computer A.
In accordance with an alternative embodiment of the present invention, forensic application 213 is transmitted by deception management server 210 to a destination computer other than computer A. When the destination computer launches forensic application 213, the application communicates with computer A to identify the process, P, running on computer A that is accessing computer B, log the activities performed by process P, and transmit the forensic report to deception management server 210.
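By way of illustration only, the following sketch shows one way the deception management server may recognize an invalid login attempt reported by the SIEM server as use of planted decoy credentials, under the assumption that the server keeps a record of the decoy usernames it planted; the event fields and names are hypothetical, not a specific SIEM product's schema.

```python
# Minimal sketch of recognizing a decoy-credential login among SIEM events,
# assuming the deception management server records the decoy usernames it planted.
# Event fields and names are hypothetical, not a specific SIEM product's schema.
PLANTED_DECOY_USERS = {"svc-backup-decoy", "fin-admin-decoy"}

def handle_siem_event(event):
    """React only to invalid logins that use a planted decoy username."""
    if event.get("type") != "invalid_login":
        return
    if event.get("username") in PLANTED_DECOY_USERS:
        print(f"ALERT: decoy credentials used from {event.get('source_host')}; "
              f"dispatching forensic application")

handle_siem_event({"type": "invalid_login",
                   "username": "fin-admin-decoy",
                   "source_host": "computer-A"})
```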
Containerized Clusters
It is common today for some portions of network 200 (
Non-containerized networks are generally confined within a net/subnet. In contrast, networks with containerized environments can reside in a cloud and have connections to resources outside the cloud. E.g., trap management may reside in or out of the cloud. A containerized network has predictable use, since the user is a program or operator.
Containers are light-weight nodes in the network and, as such, there are several key differences between a container node and a non-container node that change the attack vectors and how they can be mitigated.
Containers are stateless objects that are recreated from a read-only image as frequently as needed. As such, any changes made to a container instance by either an attacker or by a deception management tool are lost when the image is discarded. To protect container nodes, deceptions need to be planted either in the image or in the orchestrator before the container is instantiated.
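By way of illustration only, the following sketch shows why planting before instantiation matters: the decoys are injected into the deployment specification itself, so every container instance recreated from that specification carries them. The Kubernetes-style dictionary below is a hypothetical stand-in; the embodiments refer to the orchestrator and its configuration files generically.

```python
# Minimal sketch: the decoys are injected into the (read-only) deployment
# specification itself, so every instance recreated from it carries them. The
# Kubernetes-style dictionary below is a hypothetical stand-in for whatever
# configuration format the orchestrator actually uses.
deployment_spec = {
    "name": "shop-web",
    "image": "registry.internal/shop-web:1.4",
    "env": [{"name": "CACHE_HOST", "value": "cache.internal"}],
    "mounts": [],
}

def inject_deceptions(spec):
    """Add deceptive environment variables and a deceptive mount before instantiation."""
    spec["env"].append({"name": "DB_ADMIN_PASSWORD", "value": "decoy-q1w2e3"})   # leads to a trap
    spec["env"].append({"name": "BACKUP_FTP_URL", "value": "ftp://ftp-decoy.internal/"})
    spec["mounts"].append({"name": "finance-share", "source": "//fileshare-decoy/finance"})
    return spec

# Any container instance created from this specification now carries the decoys,
# even after individual instances are discarded and recreated.
deployment_spec = inject_deceptions(deployment_spec)
```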
When built properly, containers are very small and hold only the data and tools required for their immediate operation. As such, if any data of interest to an attacker exists on a container node, it is easier to find. The proverbial haystack doesn't hide the needle.
On the other hand, many of the most basic tools used by an attacker to study the network and perform lateral movement do not exist in a properly built container (even a word processor), and need to be installed by the attacker. As such, conventional attack vectors may not apply in a container node, or may require different approaches, such as manually installing tools required by the attacker, whereas other attack vectors present themselves that are unique to containerized systems; e.g.:
Embodiments of the present invention address containerized networks. These embodiments detect attackers as they land on container instances and push their way towards the “crown jewels”. These embodiments hinder and detect attackers in case the container orchestrator and configuration files have been compromised.
Embodiments of the present invention detect, with no false positives, attackers who exploit human errors; specifically, attackers who land on a specific instance, generally from the outside world, and attackers who reach the orchestrator/configuration, either from the API or by actually finding the files.
Embodiments of the present invention provide approaches to generating deceptions that protect the orchestrator, and modify the data that might be intercepted by an attacker in ways that lead the attacker toward traps.
Reference is made to
Reference is made to
Reference is made to
At operation 1320 management server 210 creates deceptions based on the learning phase. At operation 1330 management server 210 plants deceptions via container orchestrator 310, via SSH directly to the containers, or via container registry 320. Inter alia, the following deceptions may be planted:
At operation 1340 management server 210 issues an alert when an attacker attempts to connect to a deceptive entity. The alert may be displayed on a console of management server 210.
At operation 1350 forensics, such as log files and network traffic captures, are collected. Management server 210 may connect to container orchestrator 310 and use container orchestrator 310 to collect forensics. Alternatively, container orchestrator 310 may attach a forensics tool to each deceptive container instance, and forensics may be collected from the deceptive container instance via the tool, when an attacker attempts to connect to the deceptive container instance. The forensic data may relate inter alia to memory, file system, process and network information.
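By way of illustration only, forensics may be pulled from a deceptive container instance through the orchestrator as sketched below; the command-line tool "orchestratorctl" and its sub-commands are hypothetical placeholders for whatever interface the actual orchestrator exposes.

```python
import subprocess

# Illustrative sketch of pulling forensics from a deceptive container instance.
# The command-line tool "orchestratorctl" and its sub-commands are hypothetical
# placeholders for whatever interface the actual orchestrator exposes.
def collect_forensics(instance_name):
    """Gather log, process and network snapshots from the named instance."""
    commands = {
        "logs":      ["orchestratorctl", "logs", instance_name],
        "processes": ["orchestratorctl", "exec", instance_name, "--", "ps", "aux"],
        "network":   ["orchestratorctl", "exec", instance_name, "--", "netstat", "-tanp"],
    }
    report = {}
    for artifact, cmd in commands.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        report[artifact] = result.stdout
    return report
```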
There are two types of attack vectors that are addressed. For an attacker who gains access to the deceptive configuration files, either via the container orchestrator API or directly, the attacker will find a much larger containerized environment, and as soon as he tries to connect to a deceptive entity, such as one of the deceptions listed hereinabove, the attacker reaches a trap and is detected. E.g., if the attacker attempts to exploit a secret file with deceptive passwords to databases, websites and/or file shares, the attacker is led to a trap server 240 that triggers an alert.
For an attacker who exploits the container from the outside and directly gains access to the container instance, the attacker is confronted inter alia with deceptive attributes, tools and mounts, and as soon as the attacker attempts to use any of them, the attacker is detected. E.g., management server 210 may replace package managers, such as Yum, with proprietary tools that trigger an alert if an attacker attempts to access them. Alternatively, management server 210 may replace package repositories with a trap server 240. Alternatively, management server 210 may listen for outgoing traffic that is not supposed to go out of the container instance, and trigger an alert in response thereto.
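By way of illustration only, the following sketch shows a decoy package-manager wrapper of the kind described above: installed in place of the real tool, it reports any invocation and then fails plausibly, since a properly built container has no legitimate reason to run a package manager at runtime. The alert endpoint and error text are hypothetical placeholders.

```python
#!/usr/bin/env python3
# Minimal sketch of a decoy package-manager wrapper installed in place of the
# real tool inside the container image. It does nothing useful, but reports any
# invocation, since a properly built container has no legitimate reason to run a
# package manager at runtime. The alert endpoint is a hypothetical placeholder.
import json
import socket
import sys

ALERT_ENDPOINT = ("deception-mgmt.internal", 9000)  # hypothetical management server address

def send_alert(argv):
    event = {"event": "package_manager_invoked", "argv": argv}
    try:
        with socket.create_connection(ALERT_ENDPOINT, timeout=2) as s:
            s.sendall(json.dumps(event).encode())
    except OSError:
        pass  # never reveal the deception by failing loudly

if __name__ == "__main__":
    send_alert(sys.argv)
    # Mimic a plausible failure so the attacker is slowed down rather than tipped off.
    print("Error: cannot retrieve repository metadata (repomd.xml)", file=sys.stderr)
    sys.exit(1)
```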
For an attacker who breaches the container orchestrator 310, new secret files are added, with deceptive passwords to databases, websites and file shares. Detection is based on a trap machine that triggers an alert when someone connects to it.
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
This application is a continuation-in-part of U.S. patent application Ser. No. 15/722,351, entitled SYSTEM AND METHOD FOR CREATION, DEPLOYMENT AND MANAGEMENT OF AUGMENTED ATTACKER MAP, and filed on Oct. 2, 2017 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated herein in their entirety. U.S. patent application Ser. No. 15/722,351 is a continuation of U.S. patent application Ser. No. 15/403,194, now U.S. Pat. No. 9,787,715, entitled SYSTEM AND METHOD FOR CREATION, DEPLOYMENT AND MANAGEMENT OF AUGMENTED ATTACKER MAP, and filed on Jan. 11, 2017 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated herein in their entirety. U.S. patent application Ser. No. 15/403,194 is a continuation of U.S. patent application Ser. No. 15/004,904, now U.S. Pat. No. 9,553,885, entitled SYSTEM AND METHOD FOR CREATION, DEPLOYMENT AND MANAGEMENT OF AUGMENTED ATTACKER MAP, and filed on Jan. 23, 2016 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated herein in their entirety. U.S. patent application Ser. No. 15/004,904 is a non-provisional of U.S. Provisional Application No. 62/172,251, entitled SYSTEM AND METHOD FOR CREATION, DEPLOYMENT AND MANAGEMENT OF AUGMENTED ATTACKER MAP, and filed on Jun. 8, 2015 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated herein in their entirety. U.S. patent application Ser. No. 15/004,904 is a non-provisional of U.S. Provisional Application No. 62/172,253, entitled SYSTEM AND METHOD FOR MULTI-LEVEL DECEPTION MANAGEMENT AND DECEPTION SYSTEM FOR MALICIOUS ACTIONS IN A COMPUTER NETWORK, and filed on Jun. 8, 2015 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated by reference herein in their entirety. U.S. patent application Ser. No. 15/004,904 is a non-provisional of U.S. Provisional Application No. 62/172,255, entitled METHODS AND SYSTEMS TO DETECT, PREDICT AND/OR PREVENT AN ATTACKER'S NEXT ACTION IN A COMPROMISED NETWORK, and filed on Jun. 8, 2015 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated by reference herein in their entirety. U.S. patent application Ser. No. 15/004,904 is a non-provisional of U.S. Provisional Application No. 62/172,259, entitled MANAGING DYNAMIC DECEPTIVE ENVIRONMENTS, and filed on Jun. 8, 2015 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated by reference herein in their entirety. U.S. patent application Ser. No. 15/004,904 is a non-provisional of U.S. Provisional Application No. 62/172,261, entitled SYSTEMS AND METHODS FOR AUTOMATICALLY GENERATING NETWORK ENTITY GROUPS BASED ON ATTACK PARAMETERS AND/OR ASSIGNMENT OF AUTOMATICALLY GENERATED SECURITY POLICIES, and filed on Jun. 8, 2015 by inventors Shlomo Touboul, Hanan Levin, Stephane Roubach, Assaf Mischari, Itai Ben David, Itay Avraham, Adi Ozer, Chen Kazaz, Ofer Israeli, Olga Vingurt, Liad Gareh, Israel Grimberg, Cobby Cohen, Sharon Sultan and Matan Kubovsky, the contents of which are hereby incorporated by reference herein in their entirety.
Entry |
---|
Wikipedia, Active Directory, https://en.wikipedia.org/wiki/Active_Directory, Jun. 24, 2015. |
Wikipedia, Apple Filing Protocol, https://en.wikipedia.org/wiki/Apple_Filing_Protocol, Aug. 14, 2015. |
Wikipedia, DMZ (computing), https://en.wikipedia.org/wiki/DMZ_(computing), Jun. 17, 2015. |
Wikipedia, Domain Name System, https://en.wikipedia.org/wiki/Domain_Name_System, Jul. 14, 2015. |
Wikipedia, Firewall (computing), https://en.wikipedia.org/wiki/Firewall_(computing), Jul. 14, 2015. |
Wikipedia, Honeypot (computing), https://en.wikipedia.org/wiki/Honeypot_(computing), Jun. 21, 2015. |
Wikipedia, Kerberos (protocol), https://en.wikipedia.org/wiki/Kerberos_(protocol), Jun. 30, 2015. |
Wikipedia, Lightweight Directory Access Protocol, https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol, Aug. 15, 2015. |
Wikipedia, LM hash, https://en.wikipedia.org/wiki/LM_hash, Jun. 8, 2015. |
Wikipedia, RADIUS, https://en.wikipedia.org/wiki/RADIUS, Aug. 16, 2015. |
Wikipedia, Rainbow table, https://en.wikipedia.org/wiki/Rainbow_table, Jul. 14, 2015. |
Wikipedia, Secure Shell, https://en.wikipedia.org/wiki/Secure_Shell, Jul. 12, 2015. |
Wikipedia, Security Information and Event Management, https://en.wikipedia.org/wiki/Security_information_and_event_management, Jun. 23, 2015. |
Wikipedia, Tarpit (networking), https://en.wikipedia.org/wiki/Tarpit_(networking), Jul. 3, 2014. |
Mishra et al., Intrusion detection in wireless ad hoc networks, IEEE Wireless Communications, Feb. 2004, pp. 48-60. |
Zhang et al., Intrusion detection techniques for mobile wireless networks, Journal Wireless Networks vol. 9(5), Sep. 2003, pp. 545-556, Kluwer Academic Publishers, the Netherlands. |
U.S. Appl. No. 15/004,904, Office Action, dated May 27, 2016, 16 pages. |
U.S. Appl. No. 15/004,904, Notice of Allowance, dated Oct. 19, 2016, 13 pages. |
U.S. Appl. No. 15/175,048, Notice of Allowance, dated Oct. 13, 2016, 17 pages. |
U.S. Appl. No. 15/175,050, Office Action, dated Aug. 19, 2016, 34 pages. |
U.S. Appl. No. 15/175,050, Office Action, dated Nov. 30, 2016, 31 pages. |
U.S. Appl. No. 15/175,050, Notice of Allowance, dated Mar. 21, 2017, 13 pages. |
U.S. Appl. No. 15/175,052, Office Action, dated Feb. 13, 2017, 19 pages. |
U.S. Appl. No. 15/175,052, Office Action, dated Jun. 6, 2017, 19 pages. |
U.S. Appl. No. 15/175,054, Notice of Allowance, dated Feb. 21, 2017, 13 pages. |
U.S. Appl. No. 15/403,194, Office Action, dated Feb. 28, 2017, 13 pages. |
U.S. Appl. No. 15/403,194, Notice of Allowance, dated Jun. 16, 2017, 9 pages. |
U.S. Appl. No. 15/406,731, Notice of Allowance, dated Apr. 20, 2017. |
PCT Application No. PCT/IL16/50103, International Search Report and Written Opinion, dated May 26, 2016, 9 pages. |
PCT Application No. PCT/IL16/50579, International Search Report and Written Opinion, dated Sep. 30, 2016, 7 pages. |
PCT Application No. PCT/IL16/50581, International Search Report and Written Opinion, dated Nov. 29, 2016, 10 pages. |
PCT Application No. PCT/IL16/50582, International Search Report and Written Opinion, dated Nov. 16, 2016, 11 pages. |
PCT Application No. PCT/IL16/50583, International Search Report and Written Opinion, dated Dec. 8, 2016, 10 pages. |
U.S. Appl. No. 15/175,052, Notice of Allowance, dated Jan. 2, 2018, 9 pages. |
U.S. Appl. No. 15/679,180, Notice of Allowance, dated Mar. 26, 2018, 14 pages. |
U.S. Appl. No. 15/722,351, Office Action, dated Mar. 9, 2018, 17 pages. |
U.S. Appl. No. 15/682,577, Notice of Allowance, dated Jun. 14, 2018, 15 pages. |
U.S. Appl. No. 15/641,817, Office Action, dated Jul. 26, 2018, 29 pages. |
Number | Date | Country | |
---|---|---|---|
20190089737 A1 | Mar 2019 | US |
Number | Date | Country | |
---|---|---|---|
62172251 | Jun 2015 | US | |
62172253 | Jun 2015 | US | |
62172255 | Jun 2015 | US | |
62172259 | Jun 2015 | US | |
62172261 | Jun 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15403194 | Jan 2017 | US |
Child | 15722351 | US | |
Parent | 15004904 | Jan 2016 | US |
Child | 15403194 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15722351 | Oct 2017 | US |
Child | 16163579 | US |