1. Field of the Invention
The present invention relates generally to the field of information leak prevention. More specifically, but not exclusively, the present invention deals with methods for efficient identification of attempts to steal private and confidential information by means of information stealing software and phishing.
2. Description of the Related Technology
The information and knowledge created and accumulated by organizations and businesses are among their most valuable assets. As such, keeping information and knowledge inside the organization and restricting its distribution outside is of paramount importance for almost any organization, government entity or business, and significantly leverages their value. Unauthorized dissemination of intellectual property, financial information and other confidential or sensitive information can significantly damage a company's reputation and competitive advantage. In addition, the private information of individuals inside organizations, as well as that of clients, customers and business partners, includes sensitive details that can be abused by a user with criminal intentions.
Another aspect of the problem is compliance with regulations with respect to information: regulations within the United States of America, such as the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA) and the Sarbanes-Oxley Act (SOX), mandate that the information assets within organizations be monitored and subjected to an information management policy, in order to protect clients' privacy and to mitigate the risks of potential misuse and fraud. Information and data leakage therefore poses a severe risk from both business and legal perspectives.
One of the emerging threats to the privacy and the confidentiality of digital information is Information Stealing Software, such as Trojan horses and "spyware". Such software may be installed on the computer by malicious users who gained access to the user's computer, or by "infection", e.g., from a web-site, an e-mail or shared files in a file-sharing network. The Information Stealing Software can then detect sensitive or confidential information—e.g., by employing a "key logger" that logs keystrokes, or by searching for confidential information within the user's computer—and send it to a predefined destination.
Current attempts to deal with Information Stealing Software are based mainly on detecting its presence on the host—e.g., by looking for its signatures. However, as these types of software are carefully designed to avoid such detection, the effectiveness of this approach is limited.
Another aspect of information stealing is known as "phishing" and "pharming". In phishing attempts, users are solicited, usually by official-looking e-mails, to submit their sensitive details to web-sites designed to steal this information. There have been many attempts to mitigate phishing risks, such as helping users identify legitimate sites, alerting users to fraudulent web-sites, augmenting password logins and eliminating phishing mail. Yet effective phishing attacks remain very common.
Pharming attacks aim to redirect a web-site's traffic to another, bogus web-site. Pharming can be conducted either by changing the hosts file on a victim's computer or by exploiting a vulnerability in DNS server software. Current attempts to mitigate the risks of pharming, such as DNS protection and web-browser add-ins such as toolbars, are of limited value.
A system and method for identifying infection of unwanted software on an electronic device is disclosed. A software agent configured to generate a bait is installed on the electronic device. The bait can simulate a situation in which the user performs a login session and submits personal information, or it may simply contain artificial sensitive information. Additionally, parameters may be inserted into the bait, such as the identity of the electronic device on which the bait is installed. The electronic output of the device is then monitored and analyzed for attempts to transmit the bait. The output is analyzed by correlating it with the bait, e.g., by comparing information about the bait with the traffic over a computer network, in order to decide on the existence and the location of unwanted software. Furthermore, it is possible to store information about the bait in a database and then compare information about a user with the information in the database in order to determine whether the electronic device that transmitted the bait contains unwanted software.
It is also possible to simulate sensitive information within the bait in the context of a target site and then configure the simulated sensitive information to identify the electronic device. The target site is then monitored for detection of the simulated sensitive information to determine the existence of unwanted software on the electronic device.
A system for identifying unwanted software on at least one electronic device has a management unit in communication with the electronic device. The management unit is configured to install a software agent on the electronic device that generates a bait to be transmitted by the electronic device over a computer network as an output. The management unit can be configured to insert a parameter into the bait in order to identify the electronic device. A traffic analyzer in communication with the computer network analyzes the output of the electronic device. The traffic analyzer may be installed on a network gateway in communication with the computer network. A decision system in communication with the traffic analyzer correlates the bait from the electronic device with the output of the electronic device in order to determine the existence of unwanted software.
In addition to the foregoing, it is also possible to use two groups of electronic devices to determine the existence of unwanted software. In this scenario, a bait is installed on at least one of the electronic devices of the first group of electronic devices. The output of the first and second groups of electronic devices is monitored and analyzed wherein the second group of electronic devices is used as a baseline for analyzing the output of the first group of electronic devices. The output of the first group and second group of electronic devices can be correlated in order to determine the existence of unwanted software.
A method for controlling the dissemination of sensitive information over an electronic network is disclosed. The method includes analyzing the traffic of the network and detecting the sensitive information. Next, the sensitivity level and the risk level of the information leaving the electronic network are assessed. A required action is determined based upon the sensitivity level and the risk level.
The sensitivity level of the information is assessed by analyzing the content of the information. The information may include a password, and the sensitivity level may be assessed by analyzing the strength of the password. For example, a strong password would indicate that the information is highly sensitive. The risk level of the information leaving the network may be assessed using heuristics including at least one of geolocation, analysis of a recipient URL, previous knowledge about the destination and analysis of the content of the site.
For a better understanding of the invention and to show how the same may be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings.
The inventors of the systems and methods described in this application have recognized a need for, and it would be highly advantageous to have, a method and system that allows for efficient detection of information disseminated by information stealing software and for mitigation of phishing and pharming attacks, while overcoming the drawbacks described above.
The presently preferred embodiments describe a method and system for efficient mitigation of hazards stemming from information stealing. Before explaining at least one embodiment in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. In addition, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. Also, it will be recognized that the described components may be implemented solely in software, hardware or the combination of both.
Behavioral detection of information stealing software in a potentially infected computerized device or software is achieved by simulating situations that will potentially trigger the information stealing software to attempt to disseminate "artificial sensitive information bait", and thereafter analyzing the traffic and other behavioral patterns of the potentially infected computerized device or software. As the situation is controlled and the information bait is known to the system, there are many cases of infection in which such an analysis will be able to detect the existence of the information stealing software.
For example, some malware types, such as certain key loggers, attempt to locate sensitive or personal information (e.g., usernames, passwords, financial information, etc.). When such information is discovered, either locally on the host computer or as the user uses it to log into a website or application, the malware attempts to capture it and send it out, either in plaintext or encrypted. This behavior is exploited by generating bogus credentials and artificial sensitive information baits, storing them on the host and/or sending them periodically to websites.
If such malware exists on the user's system, the malware captures the bogus information and attempts to send it out. Because the system provided this information in the first place, the system has a very good estimate of what the message sent by the malware will look like. Therefore, the system inspects all outgoing traffic from the user to spot these suspicious messages and deduce the existence of malware on the machine. For example, the system can simulate a situation in which the user attempts to access the website of a financial institution and submits his username and password. If information stealing software is installed on the user's computer or along the connection, then by intercepting and analyzing the outgoing traffic the system can detect attempts to steal the information.
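Purely by way of illustration, the following sketch (in Python) shows one way bait credentials might be generated and matched against captured outgoing traffic. The function names, and the embedding of a device identifier in the bogus username, are assumptions of this example rather than requirements of the method.

```python
import secrets
import string


def make_bait(device_id: str) -> dict:
    """Generate bogus credentials; device_id is embedded so that a later
    match also identifies which machine leaked the bait."""
    alphabet = string.ascii_letters + string.digits
    return {
        "username": f"user_{device_id}_{secrets.token_hex(4)}",
        "password": "".join(secrets.choice(alphabet) for _ in range(12)),
    }


def scan_outgoing(payload: bytes, baits: list[dict]) -> list[dict]:
    """Return the baits whose username or password appears in an outgoing
    payload, which indicates information stealing software at work."""
    text = payload.decode("utf-8", errors="ignore")
    return [b for b in baits if b["username"] in text or b["password"] in text]


# Plant a bait, then inspect traffic captured, e.g., at a network gateway.
bait = make_bait("host42")
leaked = f"POST /drop data={bait['username']}:{bait['password']}".encode()
assert scan_outgoing(leaked, [bait])  # the planted bait is detected
```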
Reference is now made to the accompanying drawings.
The artificial sensitive information bait typically comprises bogus personal data of the kind used to log in to e-banks, payment services, etc., and the system is operable to simulate a situation in which the user performs a login session to such a service and submits personal information. The baits implemented on different devices or software components can have unique characteristics, which enable identification of the infected machine. The software agent produces emulated keystrokes (e.g., utilizing the keyboard and/or the mouse drivers) that yield a sequence of characters at a variable rate reflecting natural typing.
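Purely by way of example, natural typing can be approximated by emitting characters with randomized inter-key delays, as sketched below; the injection callback stands in for a keyboard-driver interface and is an assumption of this sketch.

```python
import random
import time
from typing import Callable


def type_naturally(text: str, inject: Callable[[str], None],
                   base_delay: float = 0.12, jitter: float = 0.05) -> None:
    """Emit each character through `inject` with a variable per-keystroke
    delay that loosely mimics a human typing rhythm."""
    for ch in text:
        inject(ch)  # e.g., forwarded to a keyboard-driver shim
        time.sleep(max(0.02, random.gauss(base_delay, jitter)))


# Demonstration: "type" a bogus password to standard output.
type_naturally("hunter2", inject=lambda ch: print(ch, end="", flush=True))
```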
Also, the system can produce artificial sensitive documents that seem realistic—for example, financial reports to be publicly released, design documents, password files, network diagrams, etc.
Also, the system can produce the baits in a random fashion, such that each item of artificial sensitive information or each document is different, in order to further impede the information stealing software.
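By way of example, a randomized document bait might be produced as follows; the template text and the device-identifying token are illustrative assumptions of this sketch.

```python
import secrets


def make_document_bait(device_id: str) -> str:
    """Produce a unique, realistic-looking bait document. The random token
    makes every copy different and, together with device_id, allows a
    leaked copy to be traced back to the machine it was planted on."""
    revenue = secrets.randbelow(900) + 100
    token = secrets.token_hex(8)
    return (
        f"CONFIDENTIAL DRAFT {token}/{device_id}\n"
        f"Q3 Financial Summary: projected revenue ${revenue}M.\n"
        f"Not for release. Do not distribute.\n"
    )
```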
The software agents implemented in the various devices are disguised in order to avoid detection by the information stealing software. The software agents can also be hidden, e.g., in the manner commonly referred to as rootkits, by means ordinarily used in the art.
In order to prevent unwelcome traffic to the target sites (e.g., sites of e-banking) in the process of simulation, the target sites can be emulated by the gateway 260. Accordingly, no information is actually sent to the target sites.
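Purely by way of illustration, such gateway-side emulation might look as follows, assuming the gateway resolves the target site's hostname to a local listener; the handler merely records the bait submission, so nothing reaches the real site.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class EmulatedTargetSite(BaseHTTPRequestHandler):
    """Accepts simulated login submissions in place of the real site."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("bait login received:", body[:80])  # logged for correlation
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")


# HTTPServer(("", 8443), EmulatedTargetSite).serve_forever()
```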
Sophisticated information stealing software may utilize special means to avoid detection, and may encrypt and/or hide the disseminated information. In one embodiment, the system looks for encrypted content and correlates, statistically, the amount of encrypted data in the outgoing traffic with the number and size of the artificial sensitive information baits. This correlation may be a comparison, or it may be some other type of correlation. Detection of encrypted content can be based on the entropy of the content. In general, the sequence of bits that represents encrypted content appears to be random (i.e., has near-maximal entropy). However, adequately compressed content also contains sequences of bits with near-maximal entropy, and therefore the system preferably applies the entropy test for encryption only after establishing that the content is not compressed by standard compression means ordinarily used in the art.
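The entropy test can be sketched as follows, purely by way of example; the bits-per-byte threshold, the minimum length and the short list of compression magic numbers are illustrative assumptions.

```python
import math
from collections import Counter

# Magic numbers of common compression formats (gzip, zip, bzip2, xz).
COMPRESSED_MAGIC = (b"\x1f\x8b", b"PK\x03\x04", b"BZh", b"\xfd7zXZ")


def shannon_entropy(data: bytes) -> float:
    """Entropy of the byte distribution, in bits per byte (maximum 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    if any(data.startswith(m) for m in COMPRESSED_MAGIC):
        return False  # standard compression, not evidence of encryption
    return len(data) > 256 and shannon_entropy(data) >= threshold
```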
In order to further increase the probability of detection in an organizational environment, the software agents may be installed on some of the machines, and the system performs statistical tests, as explained below, in order to decide on the probability of the existence of infected computerized devices and software in the organization.
The statistical tests compare the output of the set of devices on which baits were installed with the output of a set of devices that serves as a baseline, in the manner of the two groups of electronic devices described above.
In some embodiments, the output of the computerized devices may be compared with the output of computerized devices that, with high probability, were not infected—e.g., new machines (real or virtual). In order to further increase the probability of detection, the method may also include cooperation with the sites to which the bogus login details are submitted, in order to detect attempts to use bogus usernames, passwords and other elements of sensitive information.
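Purely by way of example, the comparison between the bait-carrying group and the baseline group might be carried out with a two-proportion z-test, as sketched below; the choice of test and the significance level are assumptions of this sketch, not prescribed by the method.

```python
import math


def z_test_proportions(suspicious_a: int, n_a: int,
                       suspicious_b: int, n_b: int) -> float:
    """z statistic for the difference between the suspicious-output rates
    of the bait group (a) and the baseline group (b); a large positive
    value suggests the bait group leaks above the baseline rate."""
    p_a, p_b = suspicious_a / n_a, suspicious_b / n_b
    p = (suspicious_a + suspicious_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se if se else 0.0


z = z_test_proportions(suspicious_a=9, n_a=50, suspicious_b=1, n_b=50)
infection_likely = z > 2.33  # roughly a one-sided 1% significance level
```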
The system can detect patterns that correspond to the information planted by the system and that were possibly encoded in order to avoid detection: e.g., the system compares the monitored traffic with the planted content and attempts to decide whether there exists a transformation between the two contents. For example, the system can check for reversal of the order of the characters, replacement of characters (e.g., S->$), encoding of characters using numeric transformations, etc. The system can also decide that certain patterns are suspicious as attempts to avoid detection. Furthermore, the system can look at behavioral patterns and correlate them with the planting events in order to achieve a better accuracy level.
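This transformation check can be sketched as follows; the candidate transformations mirror the examples just given, and the full set used in practice is left open.

```python
# Character replacements of the kind mentioned above (e.g., S -> $).
LEET = str.maketrans({"s": "$", "S": "$", "a": "@", "e": "3",
                      "o": "0", "i": "1"})


def candidate_encodings(planted: str) -> set[str]:
    return {
        planted,                                # verbatim
        planted[::-1],                          # reversed character order
        planted.translate(LEET),                # character replacement
        "".join(str(ord(c)) for c in planted),  # simple numeric encoding
    }


def matches_planted(traffic: str, planted: str) -> bool:
    """True if any candidate encoding of the planted bait appears in the
    monitored traffic."""
    return any(enc in traffic for enc in candidate_encodings(planted))
```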
According to another aspect, the system identifies and blocks information stealing malicious code that is designed to compromise hosts, collect data and upload the data to a remote location, usually without the user's consent or knowledge. Such code is often installed as part of an attacker's toolkit, the use of which is becoming more popular, but it can also be part of a targeted attack scheme.
The system can also protect against attempts to steal personal information using methods commonly referred to as "phishing" and "pharming". The method is based on: identifying when private or sensitive information (e.g., a username, an email address and a password) is being passed in cleartext over a non-secure connection; assessing the risk involved in that scenario; and deciding to block or quarantine such an attempt according to the sensitivity of the information and the level of risk.
In order to provide an adequate level of security while maintaining minimum interference with the user's work, the system determines whether the destination site is suspicious, and differentiates accordingly between cases in which users send information to suspicious sites and cases in which the information is sent to benign sites. The system can thereafter employ different strategies accordingly, such that dissemination of potentially sensitive information to "suspicious" destinations is blocked.
Suspicious sites can be determined using various heuristics, including geolocation, analysis of the URL, previous knowledge about the destination site and analysis of the content of the site, as described below.
The system may also identify cases in which the sensitive private information is posted in cleartext over a non-secure connection, a situation that by itself is problematic and thus may justify blocking or quarantining. The private sensitive information may include credit card numbers, social security numbers, ATM PINs, expiration dates of credit cards, etc.
The system may utilize the categorization and classification of websites and assess the probability that a site is dangerous or malicious based on this categorization (e.g., using blacklists and whitelists), or employ real-time classification of the content of the destination site in order to assess its integrity and the probability that it is malicious.
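Purely by way of illustration, destination risk might be scored as follows, with simple blacklists, whitelists and URL heuristics standing in for the categorization and real-time classification described above; the hostnames and weights are invented for the example.

```python
from urllib.parse import urlparse

BLACKLIST = {"evil.example"}   # known-bad destinations (assumed)
WHITELIST = {"bank.example"}   # known-good destinations (assumed)


def destination_risk(url: str) -> float:
    """Return a risk score in [0, 1] for the destination of a post."""
    host = urlparse(url).hostname or ""
    if host in WHITELIST:
        return 0.0
    if host in BLACKLIST:
        return 1.0
    risk = 0.3                          # unknown destination: prior risk
    if host.replace(".", "").isdigit():
        risk += 0.4                     # raw IP address instead of a name
    if url.startswith("http://"):
        risk += 0.2                     # cleartext, non-secure connection
    return min(risk, 1.0)
```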
The system can also assess the strength of the password in order to assess the sensitivity level: strong passwords "deserve" higher protection, while common passwords, which can easily be guessed using a basic "dictionary attack", can be considered less sensitive. Note that sites that require strong passwords are in general more sensitive (e.g., financial institutions), while in many cases users select common passwords for "entertainment sites". In one embodiment, the strength of the password is determined according to one or more parameters of the password, as described below.
In a preferred embodiment of the present invention, the strength and the entropy of the password are evaluated using the methods described in Appendix A of the National Institute of Standards and Technology (NIST) Special Publication 800-63, Electronic Authentication Guideline, the contents of which are hereby incorporated herein by reference in their entirety.
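A simplified sketch of the Appendix A heuristic for user-chosen passwords is given below, purely by way of example; the dictionary-check bonus of the publication is omitted and the sensitivity threshold is invented, so the publication itself remains the authoritative procedure.

```python
def nist_password_entropy(password: str) -> float:
    """Simplified user-chosen password entropy estimate after NIST SP
    800-63 Appendix A (dictionary-check bonus omitted)."""
    bits = 0.0
    for i in range(1, len(password) + 1):
        if i == 1:
            bits += 4.0      # first character
        elif i <= 8:
            bits += 2.0      # characters 2 through 8
        elif i <= 20:
            bits += 1.5      # characters 9 through 20
        else:
            bits += 1.0      # characters beyond 20
    if (any(c.isupper() for c in password)
            and any(not c.isalpha() for c in password)):
        bits += 6.0          # composition-rule bonus
    return bits


# Illustrative threshold: treat >= 24 bits as a strong, sensitive password.
sensitivity = "high" if nist_password_entropy("Tr0ub4dor&3") >= 24 else "low"
```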
Reference is now made to the accompanying drawing, which illustrates the stages of the method.
At stage B, 620, detectors of sensitive information detect sensitive information such as passwords, usernames, mothers' maiden names, etc. At stage C, 630, the sensitivity level of the sensitive information is assessed, e.g., by analyzing password strength as explained above, by counting the number of personal details, etc. At stage D, 640, the level of risk is assessed using various heuristics, including geolocation, analysis of the URL, previous knowledge about the site and analysis of the content of the site. At stage E, 650, the system decides on the required action (such as blocking, quarantine, alert, etc.) based on both the sensitivity level and the risk, and at stage F, 660, the system enforces the required action accordingly.
While analyzing sensitivity and risk, there may be two clear-cut cases: a low-risk, low-sensitivity case (e.g., sending the password 1234 to a hobby-related site) and a high-risk, high-sensitivity case (e.g., sending many personal details and a strong password in cleartext to a doubtful site). However, dealing with cases in the "gray area" (e.g., "medium sensitivity—low risk" or "medium risk—low sensitivity") may depend on the organizational preferences. Typically, the operator of the system can set parameters that reflect the organizational trade-off in the two-dimensional risk-sensitivity plane.
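Purely by way of example, the operator-set trade-off might be parameterized as follows; the combination rule, thresholds and action names are illustrative stand-ins, not a prescribed policy.

```python
def required_action(sensitivity: float, risk: float,
                    block_at: float = 0.7, quarantine_at: float = 0.4) -> str:
    """Map a (sensitivity, risk) pair, each in [0, 1], to an action."""
    score = sensitivity * risk        # one simple way to combine the axes
    if score >= block_at:
        return "block"
    if score >= quarantine_at:
        return "quarantine"
    if max(sensitivity, risk) >= quarantine_at:
        return "alert"                # the "gray area" cases
    return "allow"


assert required_action(sensitivity=0.9, risk=0.9) == "block"
assert required_action(sensitivity=0.2, risk=0.1) == "allow"
```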
Turning now to the accompanying drawings, the system described below detects the dissemination of passwords from an organizational password file by means of a weak validation method.
The weak validation method may be based on a Bloom filter, as described in "Space/Time Trade-offs in Hash Coding with Allowable Errors" by Burton H. Bloom, Communications of the ACM, 13(7), pp. 422-426, 1970, the contents of which are hereby incorporated herein by reference in their entirety. The Bloom filter can assign a tunable probability to the existence of passwords from the organization password file. When the system tests for the existence of a password in the file, it queries the Bloom filter. If the Bloom filter returns "no", then the password does not exist in the file. If the Bloom filter returns "yes", then it is probable that the password exists in the file (and therefore in the organization). The Bloom filter therefore provides a probabilistic indication of the existence of a password in the organization, and this probabilistic indication p is tunable by the design of the filter. If p equals, e.g., 0.9, then there is a false-positive rate of 0.1. Since this validation appears in the context of password dissemination, which by itself conveys a potential risk, this level of false positives is acceptable while monitoring normal traffic.
However, if an attacker attempts a “dictionary attack” (an attack where the attacker systematically tests possible passwords, beginning with words that have a higher probability of being used, such as names, number sequences and places) on the file, the Bloom filter will return “yes” on an expected 10% of the password candidates, even though they do not exist in the file. This will add noise to results of the dictionary attack, making it impractical to distinguish the few true positives from the many false positives.
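A minimal Bloom-filter sketch of this weak validation method follows, purely by way of example. The sizing follows the standard formulas m = -n ln(p_fp)/(ln 2)^2 and k = (m/n) ln 2; the salted SHA-256 hashing is an implementation choice of the sketch, not mandated by the method.

```python
import hashlib
import math


class WeakValidator:
    """Bloom filter with a tunable false-positive rate fp_rate."""

    def __init__(self, items: list[str], fp_rate: float = 0.1):
        n = max(1, len(items))
        self.m = max(8, int(-n * math.log(fp_rate) / math.log(2) ** 2))
        self.k = max(1, round(self.m / n * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)
        for item in items:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def _positions(self, item: str):
        for i in range(self.k):  # k independent, salted hash functions
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def maybe_contains(self, item: str) -> bool:
        """False: definitely absent. True: probably present, with false
        positives at roughly the configured rate."""
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```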
The same method can be applied in order to safely identify other low-entropy items from a database, without exposing the items themselves to dictionary attacks. For example, suppose that the database comprises 10,000 U.S. Social Security Numbers (SSNs). As SSNs are 9-digit numbers, even if they are represented by strong cryptographic hashes, one can easily conduct an effective dictionary attack over all the valid social security numbers. Utilizing the weak validation method described above, one can assess whether a disseminated 9-digit number is, with high probability, an SSN from the database.
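Continuing the sketch above, the same filter applied to the SSN example behaves as follows; the numbers are arbitrary placeholders.

```python
# Populate the filter with (placeholder) SSNs from the database.
ssn_filter = WeakValidator(["078051120", "219099999"], fp_rate=0.1)

print(ssn_filter.maybe_contains("078051120"))  # True: probably in the database
print(ssn_filter.maybe_contains("000000001"))  # usually False; occasional
                                               # false positives are what
                                               # defeat a brute-force sweep
```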
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
This application is a continuation of U.S. patent application Ser. No. 12/051,670, filed Mar. 19, 2008, now U.S. Pat. No. 8,407,784, entitled “METHOD AND SYSTEM FOR PROTECTION AGAINST INFORMATION STEALING SOFTWARE” and assigned to the assignee hereof, the disclosure of which is hereby incorporated by reference in its entirety.