Method and system for protection against information stealing software

Information

  • Patent Grant
  • Patent Number
    9,130,986
  • Date Filed
    Wednesday, March 19, 2008
  • Date Issued
    Tuesday, September 8, 2015
Abstract
A system and method for identifying infection of unwanted software on an electronic device is disclosed. A software agent configured to generate a bait is installed on the electronic device. The bait can simulate a situation in which the user performs a login session and submits personal information, or it may simply contain artificial sensitive information. Parameters may be inserted into the bait, such as the identity of the electronic device that the bait is installed upon. The output of the electronic device is monitored and analyzed for attempts to transmit the bait. The output is analyzed by correlating it with the bait, which can be done by comparing information about the bait with the traffic over a computer network in order to decide about the existence and the location of unwanted software. Furthermore, it is possible to store information about the bait in a database and then compare information about a user with the information in the database in order to determine whether the electronic device that transmitted the bait contains unwanted software.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates generally to the field of information leak prevention. More specifically but not exclusively, the present invention deals with methods for an efficient identification of attempts to steal private and confidential information using information stealing software and phishing.


2. Description of the Related Technology


The information and knowledge created and accumulated by organizations and businesses are among their most valuable assets. As such, keeping the information and the knowledge inside the organization and restricting its distribution outside of it is of paramount importance for almost any organization, government entity or business, and provides a significant leverage of its value. Unauthorized dissemination of intellectual property, financial information and other confidential or sensitive information can significantly damage a company's reputation and competitive advantage. In addition, the private information of individuals inside organizations, as well as the private information of the clients, customers and business partners includes sensitive details that can be abused by a user with criminal intentions.


Another aspect of the problem is compliance with regulations with respect to information: regulations within the United States of America, such as the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA) and the Sarbanes-Oxley Act (SOX), mandate that the information assets within organizations be monitored and subjected to an information management policy, in order to protect clients' privacy and to mitigate the risks of potential misuse and fraud. Information and data leakage therefore poses a severe risk from both business and legal perspectives.


One of the emerging threats to the privacy and the confidentiality of digital information is Information Stealing Software, such as Trojan Horses and “Spyware”. Such software may be installed on the computer by malicious users that gained access to the user's computer or by “infection”, e.g., from a web-site, an email or shared files in a file-sharing network. The Information Stealing Software can then detect sensitive or confidential information, e.g., by employing a “keylogger” that logs keystrokes or by searching for confidential information within the user's computer, and send it to a predefined destination.


Current attempts to deal with Information Stealing Software are based mainly on detection of its existence on the host, e.g., by looking at signatures. However, as these types of software are carefully designed to avoid such detection, the effectiveness of this approach is limited.


Another aspect of information stealing is known as “phishing & pharming”. In phishing attempts, users are solicited, usually by official-looking e-mails, to post their sensitive details to web-sites designed for stealing this information. There have been many attempts to mitigate phishing risks, such as helping users identify legitimate sites, alerting users to fraudulent websites, augmenting password logins and eliminating phishing mail. Yet, effective phishing attacks remain very common.


Pharming attacks aim to redirect a website's traffic to another, bogus website. Pharming can be conducted either by changing the hosts file on a victim's computer or by exploitation of a vulnerability in DNS server software. Current attempts to mitigate risks of pharming, such as DNS protection and web browser add-ins such as toolbars are of limited value.


SUMMARY

A system and method for identifying infection of unwanted software on an electronic device is disclosed. A software agent is configured to generate a bait and is installed on the electronic device. The bait can simulate a situation in which the user performs a login session and submits personal information, or it may simply contain artificial sensitive information. Additionally, parameters may be inserted into the bait, such as the identity of the electronic device that the bait is installed upon. The output of the electronic device is then monitored and analyzed for attempts to transmit the bait. The output is analyzed by correlating it with the bait, which can be done by comparing information about the bait with the traffic over a computer network in order to decide about the existence and the location of unwanted software. Furthermore, it is possible to store information about the bait in a database and then compare information about a user with the information in the database in order to determine whether the electronic device that transmitted the bait contains unwanted software.


It is also possible to simulate sensitive information within the bait in the context of a target site and then configure the simulated sensitive information to identify the electronic device. The target site is then monitored for detection of the simulated sensitive information to determine the existence of unwanted software on the electronic device.


A system for identifying unwanted software on at least one electronic device has a management unit in communication with the electronic device. The management unit is configured to install a software agent on the electronic device that generates a bait to be transmitted by the electronic device over a computer network as an output. The management unit can be configured to insert a parameter into the bait in order to identify the electronic device. A traffic analyzer in communication with the computer network analyzes the output of the electronic device. The traffic analyzer may be installed on a network gateway in communication with the computer network. A decision system in communication with the traffic analyzer correlates the bait from the electronic device with the output of the electronic device in order to determine the existence of unwanted software.


In addition to the foregoing, it is also possible to use two groups of electronic devices to determine the existence of unwanted software. In this scenario, a bait is installed on at least one of the electronic devices of the first group of electronic devices. The output of the first and second groups of electronic devices is monitored and analyzed wherein the second group of electronic devices is used as a baseline for analyzing the output of the first group of electronic devices. The output of the first group and second group of electronic devices can be correlated in order to determine the existence of unwanted software.


A method for controlling the dissemination of sensitive information over an electronic network is disclosed. The method includes analyzing the traffic of the network and detecting the sensitive information. Next, the sensitivity level and the risk level of the information leaving the electronic network is assessed. A required action is determined based upon the sensitivity level and the risk level.


The sensitivity level of the information is assessed by analyzing the content of the information. The information may include a password, and the sensitivity level may be assessed by analyzing the strength of the password. For example, a strong password would indicate that the information is highly sensitive. The risk level of the information leaving the network may be assessed using heuristics including at least one of geolocation, analysis of a recipient URL, previous knowledge about the destination and analysis of the content of the site.





BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the invention and to show how the same may be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings, in which:



FIG. 1 is a flowchart illustrating a method of efficient detection of information stealing software.



FIG. 2 is an illustration of a system for mitigation of information-stealing software hazards according to FIG. 1.



FIG. 3 is a flowchart illustrating another method of efficient detection of information stealing software.



FIG. 4 is an illustration of a system for mitigation of information-stealing software hazards according to FIG. 3.



FIG. 5 is an illustration of a system that utilizes cooperation from target sites in order to detect information stealing software.



FIG. 6 is a flowchart illustrating another method of efficient detection of information stealing software.



FIG. 7 is an illustration of a system for mitigation of information stealing software hazards according to FIG. 6.





DETAILED DESCRIPTION OF CERTAIN INVENTIVE EMBODIMENTS

The inventors of the systems and methods described in this application have recognized a need for, and it would be highly advantageous to have, a method and system that allows for efficient detection of information disseminated by information stealing software and for mitigation of phishing and pharming attacks, while overcoming the drawbacks described above.


The presently preferred embodiments describe a method and system for efficient mitigation of hazards stemming from information stealing. Before explaining at least one embodiment in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. In addition, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. Also, it will be recognized that the described components may be implemented solely in software, hardware or the combination of both.


Behavioral detection of information stealing software in a potentially infected computerized device or software is achieved by simulating situations that will potentially trigger the information stealing software to attempt to disseminate “artificial sensitive information bait”, and thereafter analyzing the traffic and other behavioral patterns of the potentially infected computerized device or software. As the situation is controlled and the information bait is known to the system, there are many cases of infection in which such an analysis will be able to detect the existence of the information stealing software.


For example, some malware types, such as certain keyloggers, attempt to locate sensitive or personal information (e.g., usernames, passwords, financial information etc.). When such information is discovered, either locally on the host computer or as the user uses it to log into a website or application, the malware attempts to capture it and send it out, either in plaintext or encrypted. This behavior is exploited by generating bogus credentials and artificial sensitive information bait and storing them and/or sending them periodically to websites.


If such malware exists on the user's system, the malware captures the bogus information and attempts to send it out. Because the system provided this information in the first place, the system has a very good estimate of what the message sent by the malware will look like. Therefore, the system inspects all outgoing traffic from the user to spot these suspicious messages and deduce the existence of malware on the machine. The system can simulate a situation in which the user attempts to access the website of a financial institute and submits his username and password. If information stealing software is installed on the user's computer or along the connection, then by intercepting and analyzing the outgoing traffic the system can detect attempts to steal information.
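
The correlation step can be illustrated with a short sketch. The following Python fragment is not the patented implementation; the registry structure and the names BAIT_REGISTRY and check_outgoing are illustrative assumptions about how planted bait could be matched against outgoing traffic.

```python
# Hedged sketch (not the patented implementation): match outgoing traffic
# against bait credentials previously planted on each host. BAIT_REGISTRY
# and check_outgoing are illustrative names, not part of the patent text.

BAIT_REGISTRY = {
    "host-42": {"username": "jdoe1971", "password": "Xq7!rTen2"},
}

def check_outgoing(host_id: str, payload: bytes) -> bool:
    """Return True if the payload appears to carry bait planted on host_id."""
    bait = BAIT_REGISTRY.get(host_id)
    if bait is None:
        return False
    text = payload.decode("utf-8", errors="ignore")
    # A hit on either bogus credential suggests that information stealing
    # software captured the bait and is trying to exfiltrate it.
    return bait["username"] in text or bait["password"] in text

if __name__ == "__main__":
    print(check_outgoing("host-42", b"POST /c2 user=jdoe1971&pw=Xq7!rTen2"))
```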


Reference is now made to FIG. 1, which illustrates a method for detection of information stealing software. At stage A, 110, a software agent is installed on computerized devices. The software agent is preferably designed and implemented such that it can simulate various artificial inputs in a manner that would appear to be regular user input from the information stealing software's perspective (e.g., emulating sequences of keystrokes, accessing sites of e-banking, planting documents that would seem to be sensitive, etc.). At stage B, 120, in order to fine-tune the operation of the software agent, a set of parameters is preferably selected, such as scheduling bait tasks or providing keywords that produce an attractive bait in this context. At stage C, 130, various baits are implemented in the various computerized devices in accordance with the inserted parameters. Specifically, the baits are created and sent to predefined targets. At stage D, 140, the output and behavioral patterns of the computerized device are analyzed from the computer network, and at stage E, 150, the system estimates the probability that the device is infected by information stealing software from the output and behavioral patterns analyzed at stage D.


Turning now to FIG. 2, an illustration of a system for detection of information stealing software is provided. A remote installation & management unit 210 installs software agents 220 on various computerized devices 230 connected thereto by means ordinarily used in the art. The installation can include optional parameters inserted by an operator 240. The software agents produce artificial sensitive information baits, and the output and other behavioral parameters of the various computerized devices are analyzed by the software agents 220 and preferably by a traffic analyzer 250 on a network gateway 260. The traffic analyzer 250 may be software installed on the gateway for monitoring the flow of electronic traffic between the computer devices 230 and a WAN as is commonly known in the art. The results are sent for analysis to a decision system 270, which correlates the information in the traffic with the artificial sensitive information baits in order to decide about the existence and the location of potentially infected computerized devices or software. The decision system 270 may be a software or a hardware module in electronic communication with the traffic analyzer 250.


The artificial sensitive information bait typically comprises bogus personal data which is used to log in to e-banks, payment services, etc., and the system is operable to simulate a situation in which the user performs a login session to such a service and submits personal information. The baits implemented on different devices or software components can have unique characteristics, which enable identification of the infected machine. The software agent produces emulated keystrokes (e.g., utilizing the keyboard and/or the mouse drivers) that produce a sequence of characters at a variable rate that reflects natural typing.
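
As a rough illustration of the variable-rate typing emulation, the sketch below only generates a timing schedule; actually injecting the keystrokes through the keyboard or mouse drivers is platform specific and outside the scope of this fragment. The timing parameters are assumptions.

```python
# Illustrative only: build a (character, delay) schedule whose timing varies
# like natural typing. Injecting these keystrokes through the keyboard or
# mouse drivers is platform specific and not shown here.
import random

def typing_schedule(text: str, mean_delay: float = 0.18, jitter: float = 0.07):
    """Yield (char, seconds to wait before the keypress) with natural variation."""
    for ch in text:
        delay = max(0.02, random.gauss(mean_delay, jitter))
        yield ch, round(delay, 3)

if __name__ == "__main__":
    for ch, wait in typing_schedule("jdoe1971"):
        print(f"press {ch!r} after {wait}s")
```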


Also, the system can produce artificial sensitive documents that would seem realistic—for example, financial reports to be publicly released, design documents, password files, network diagrams, etc.


Also, the system can produce the baits in a random fashion, such that each item of artificial sensitive information or each document is different, in order to further impede the information stealing software.
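
A minimal sketch of such randomized bait generation follows; the credential format and the token construction are illustrative assumptions, the point being that each device receives a unique, traceable bogus credential.

```python
# Hedged sketch: generate a unique, randomized bogus credential per device so
# that any later sighting of the credential identifies the infected machine.
# The credential format is an illustrative assumption.
import secrets
import string

def make_bait(device_id: str) -> dict:
    alphabet = string.ascii_letters + string.digits
    token = "".join(secrets.choice(alphabet) for _ in range(10))
    return {
        "device_id": device_id,
        "username": f"user_{token[:6].lower()}",
        "password": token + secrets.choice("!@#$%"),
    }

if __name__ == "__main__":
    print(make_bait("host-42"))
```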


The software agents implemented in the various devices are masqueraded in order to avoid detection by the information stealing software. The software agents can also be hidden, e.g., in a manner commonly referred to as rootkits, by means ordinarily used in the art.


In order to prevent unwelcome traffic to the target sites (e.g., sites of e-banking) in the process of simulation, the target sites can be emulated by the gateway 260. Accordingly, no information is actually sent to the target sites.


Sophisticated information stealing software may utilize special means to avoid detection, and may encrypt and/or hide the disseminated information. In one embodiment, the system looks for encrypted content and statistically correlates the amount of encrypted data in the outgoing traffic with the number and size of the artificial sensitive information baits. This correlation may be a comparison, or it may be some other type of correlation. Detection of encrypted content can be based on the entropy of the content. In general, the sequence of bits that represents the encrypted content appears to be random (e.g., with maximal entropy). However, one should note that adequately compressed content also contains sequences of bits with maximal entropy, and therefore the system preferably applies the entropy test for encryption only after establishing that the content is not compressed by a standard compression means ordinarily used in the art.
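
A hedged sketch of such an entropy test is shown below. The entropy threshold and the short list of compression magic numbers are assumptions; the text above only requires that the content first be established as not compressed by standard means.

```python
# Sketch, assuming access to the raw outgoing payload: estimate the Shannon
# entropy per byte and treat near-maximal entropy (close to 8 bits/byte) as
# possibly encrypted, but only after ruling out common compressed formats.
# The threshold and the magic-number list are assumptions.
import math
from collections import Counter

COMPRESSED_MAGIC = (b"\x1f\x8b", b"PK\x03\x04", b"BZh", b"\x28\xb5\x2f\xfd")

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    if any(data.startswith(magic) for magic in COMPRESSED_MAGIC):
        return False  # standard compression also produces high-entropy bits
    return shannon_entropy(data) >= threshold
```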


In order to further increase the probability of detection, in an organizational environment, the software agents may be installed on some of the machines and the system performs statistical tests, as explained below, in order to decide about the probability of existence of infected computerized devices and software in the organization.



FIG. 3 illustrates a method for detection of information stealing software, substantially similar to the method of FIG. 1, but utilizing a two-set method: at stage A, 310, software agents are installed on some of the computerized devices, denoted as set S. At stage B, 320, in order to fine-tune the operation of the software agents, a set of parameters is preferably selected, such as scheduling bait tasks and providing keywords that would produce an attractive bait in this context. At stage C, 330, various baits are implemented in the various computerized devices in accordance with the inserted parameters. At stage D, 340, the output and behavioral patterns of the computerized devices in set S are analyzed and compared with those of the computerized devices in the complementary set S̄, and at stage E, 350, the system estimates the probability that a device is infected by information stealing software.



FIG. 4 illustrates a system for detection of information stealing software, substantially similar to the system of FIG. 2, but utilizing the two-set method described in FIG. 3 to improve detection of information stealing software. A remote installation & management unit 410 installs software agents 420 on the various computerized devices in the set S 430 (according to parameters inserted optionally by an operator), but not on the complementary set S̄ 455. The software agents then produce artificial sensitive information baits on the computerized devices of set S 430, and the output and other behavioral parameters of the various computerized devices in the set S and the complementary set S̄ are analyzed by a traffic analyzer 450 on a gateway 460. The results are sent for analysis to a decision system 470, which compares characteristics of the output between sets S and S̄ in order to decide about the existence of potentially infected computerized devices or software. Such characteristics may include, for example, the volume of the traffic, the number of TCP sessions, the geographical distribution of the recipients, the entropy of the traffic, the time of the sessions, etc. The results of the analysis of the set S̄ are thereafter used as a baseline in order to determine the statistical significance of the hypothesis that there are infected computerized devices or software in the set S that react to the existence of the artificial sensitive information baits.


The sets S and S̄ may be selected randomly and are changed dynamically in order to provide more information about the identity of the infected machines. The computerized devices in both S and S̄ are equipped with software agents which analyze and store outgoing traffic, but only the agents of set S produce artificial sensitive information baits.
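
The statistical comparison between the two sets can be sketched as follows. The traffic feature (bytes sent per host) and the z-score threshold are illustrative assumptions; the text above leaves the specific characteristics and tests open (traffic volume, TCP sessions, entropy, etc.).

```python
# Illustrative two-set comparison: the bait-free set provides a baseline
# distribution for some traffic feature (here, bytes sent per host), and a
# large standardized deviation of the bait set from that baseline supports
# the infection hypothesis. The feature and the threshold are assumptions.
from statistics import mean, stdev

def standardized_difference(set_s, set_s_bar):
    baseline_mu = mean(set_s_bar)
    baseline_sigma = stdev(set_s_bar) or 1.0
    return (mean(set_s) - baseline_mu) / baseline_sigma

def likely_infected(set_s, set_s_bar, z_threshold: float = 3.0) -> bool:
    return standardized_difference(set_s, set_s_bar) >= z_threshold

if __name__ == "__main__":
    bytes_out_s = [1_200_000, 950_000, 1_450_000]           # hosts with baits
    bytes_out_s_bar = [400_000, 420_000, 390_000, 410_000]  # baseline hosts
    print(likely_infected(bytes_out_s, bytes_out_s_bar))
```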


In some embodiments, the output of the computerized devices may be compared with the output of computerized devices that, with high probability, were not infected—e.g., new machines (real or virtual). In order to further increase the probability of detection, the method may also include cooperation with the sites to which the bogus login details are to be submitted in order to detect attempts to use the bogus username, password and other elements of sensitive information. Turning now to FIG. 5, there is illustrated a system that utilizes such cooperation. A remote installation & management unit 510 installs software agents 520 on various computerized devices according to optional parameters inserted by an operator 540. The software agents 520 then produce artificial sensitive information baits, such that each computerized device receives different bogus details. The bogus details are then sent via a gateway 560 to databases 582 at sites 580. If an attacker 590 tries to use a username and password in order to log in to the site 580, the site will check the database 582 to determine whether these were bogus details created by the software agents 520, and will send the details of the event to a decision system 570. The decision system 570 determines the infected machines based on the uniqueness of the bogus personal information.
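
A minimal sketch of the cooperating-site check follows; the table contents and function name are illustrative assumptions. The idea is simply that the site can map a bogus credential back to the device on which it was planted and report the event to the decision system.

```python
# Hedged sketch of the cooperating-site check: the site keeps a table mapping
# planted bogus credentials to the device they were installed on and reports
# any login attempt that uses them. Table contents are illustrative.
BOGUS_CREDENTIALS = {
    ("user_ab12cd", "Qx9!mMe2kP"): "host-17",
    ("user_zz48rt", "Lw2@pQe7vN"): "host-42",
}

def report_login_attempt(username: str, password: str):
    """Return an event for the decision system if the credentials are known bait."""
    device = BOGUS_CREDENTIALS.get((username, password))
    if device is None:
        return None
    return {"event": "bait_credentials_used", "infected_device": device}

if __name__ == "__main__":
    print(report_login_attempt("user_zz48rt", "Lw2@pQe7vN"))
```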


The system can detect patterns that correspond to the information planted by the system and that was possibly encoded in order to avoid detection: e.g., the system compares the monitored traffic with the planted content and attempts to decide whether there exists a transformation between the two contents. For example, the system can check for reversing the order of the characters, replacing characters (e.g., S→$), encoding characters using numeric transformations, etc. The system can also decide that certain patterns are suspicious as attempts to avoid detection.
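
A sketch of this transformation check, assuming a small set of candidate encodings (character reversal and a simple substitution table such as S→$), is shown below; the substitution table itself is an assumption.

```python
# Minimal sketch of the transformation check: compare monitored traffic with
# the planted bait under a few simple encodings (character reversal, simple
# character substitution). The substitution table is an assumption.
LEET = str.maketrans({"s": "$", "S": "$", "a": "@", "A": "@", "e": "3", "o": "0", "i": "1"})

def candidate_encodings(bait: str):
    yield bait
    yield bait[::-1]                  # reversed character order
    yield bait.translate(LEET)        # character replacement, e.g. S -> $
    yield bait.translate(LEET)[::-1]  # both transformations combined

def traffic_matches_bait(traffic: str, bait: str) -> bool:
    return any(enc in traffic for enc in candidate_encodings(bait))

if __name__ == "__main__":
    print(traffic_matches_bait("...p@$$w0rd_b@1t...", "password_bait"))
```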


Furthermore, the system can look at behavioral patterns and correlate them with the planting events in order to achieve a better accuracy level.


According to another aspect, the system identifies and blocks information stealing malicious code that is designed to compromise hosts, collect data, and upload it to a remote location, usually without the user's consent or knowledge. Such code is often installed as part of an attacker's toolkit, a practice that is becoming more popular, but it can also be part of a targeted attack scheme.


The system can also protect against attempts to steal personal information using methods commonly referred to as “phishing” and “pharming”. The method is based on:


Identifying when private or sensitive information (e.g., username, email address and password) are being passed in cleartext over a non-secure connection;


Assessing the risk involved in that scenario; and


Deciding to block or quarantine such an attempt according to the sensitivity of the information and the level of risk.


In order to provide an adequate level of security while maintaining minimum interference with the user's work, the system determines whether the destination site is suspicious, and differentiates accordingly between cases in which users send information to suspicious sites and cases in which the information is sent to benign sites. The system can thereafter employ different strategies accordingly, such that for "suspicious" destinations dissemination of potentially sensitive information is blocked.


Suspicious sites can be determined using various heuristics, including:


a. Geolocation to determine whether the location of the site in question is different from the location of the user attempting to access it—For example, it is less likely for someone in North America to access a financial site in Belarus, therefore making the transaction more suspicious.


b. Looking for a string such as www.&lt;popular site&gt;.com somewhere at the end of the URL string. Examples of a "popular site" may be paypal, ebay, etc., taken from a predefined list of popular spoofed sites (both heuristics are illustrated in the sketch below).
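
A hedged sketch combining both heuristics follows. The country lookup inputs and the list of popular spoofed sites are assumed to be supplied from elsewhere; they are not specified in the text above.

```python
# Hedged sketch combining both heuristics: a geolocation mismatch between the
# user and the destination, and a popular brand string embedded near the end
# of the URL on a host that is not actually that brand's site. The country
# inputs and the spoof-target list are assumed to be supplied from elsewhere.
from urllib.parse import urlparse

POPULAR_SPOOF_TARGETS = ["paypal", "ebay", "amazon"]

def url_embeds_popular_site(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    tail = url.lower()[-40:]  # "somewhere at the end of the URL string"
    for name in POPULAR_SPOOF_TARGETS:
        if f"www.{name}.com" in tail and not host.endswith(f"{name}.com"):
            return True
    return False

def is_suspicious(url: str, user_country: str, site_country: str) -> bool:
    geolocation_mismatch = user_country != site_country
    return geolocation_mismatch or url_embeds_popular_site(url)

if __name__ == "__main__":
    print(is_suspicious("http://evil.example/login?next=www.paypal.com", "US", "BY"))
```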


The system may also identify cases in which the sensitive private information is posted in cleartext over a non-secure connection, a case that by itself constitutes a problematic situation and thus may justify blocking or quarantining. The private sensitive information may include credit card numbers, social security numbers, ATM PINs, expiration dates of credit-card numbers, etc.
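
As an illustration only, such cleartext detection could rely on standard pattern checks; the regular expressions and the Luhn check below are common techniques used here as assumptions, not the patent's prescribed detectors.

```python
# Illustration only: detect some of the listed sensitive items in cleartext
# using standard techniques (an SSN pattern and a Luhn check on candidate
# card numbers). These are assumptions, not the patent's prescribed detectors.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def contains_cleartext_sensitive_info(text: str) -> bool:
    if SSN_RE.search(text):
        return True
    return any(luhn_valid(m.group()) for m in CARD_RE.finditer(text))

if __name__ == "__main__":
    print(contains_cleartext_sensitive_info("card=4111 1111 1111 1111"))
```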


The system may utilize the categorization and classification of websites and then assess the probability that the site is dangerous or malicious based on this categorization (e.g., using blacklists and whitelists), or employ real-time classification of the content of the destination site, in order to assess its integrity and the probability that the site is malicious.


The system can also assess the strength of the password in order to assess the sensitivity level: strong passwords "deserve" higher protection, while common passwords, which can be easily guessed using a basic "dictionary attack", can be considered less sensitive. Note that sites that require strong passwords are in general more sensitive (e.g., financial institutions), while in many cases users select common passwords for "entertainment sites". In one embodiment, the strength of the password is determined according to at least one of the following parameters:


The length of the password;


Similarity to common passwords, such as those used by “password cracking tools”; or


The entropy of the password.


In a preferred embodiment of the present invention, the strength and the entropy of the password are evaluated using the methods described in Appendix A of the National Institute of Standards and Technology (NIST) Special Publication 800-63, Electronic Authentication Guideline, the contents of which are hereby incorporated herein by reference in their entirety.
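
A simplified sketch of scoring the three factors listed above follows. The weights, the small common-password list and the charset-based entropy estimate are assumptions that loosely mirror NIST-style guidance rather than reproduce it.

```python
# Simplified sketch of the three strength factors listed above. The weights,
# the tiny common-password list and the charset-based entropy estimate are
# assumptions that loosely follow NIST-style guidance rather than reproduce it.
import math

COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "1234"}

def charset_size(pw: str) -> int:
    size = 0
    if any(c.islower() for c in pw): size += 26
    if any(c.isupper() for c in pw): size += 26
    if any(c.isdigit() for c in pw): size += 10
    if any(not c.isalnum() for c in pw): size += 33
    return size or 1

def password_strength(pw: str) -> float:
    """Return a score in [0, 1]; higher means stronger (more sensitive)."""
    if pw.lower() in COMMON_PASSWORDS:
        return 0.0  # trivially guessable by a basic dictionary attack
    entropy_bits = len(pw) * math.log2(charset_size(pw))
    length_score = min(len(pw) / 12, 1.0)
    entropy_score = min(entropy_bits / 60, 1.0)
    return round(0.4 * length_score + 0.6 * entropy_score, 2)

if __name__ == "__main__":
    print(password_strength("1234"), password_strength("Xq7!rTen2kLw"))
```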


Reference is now made to FIG. 6, which illustrates a method for protection against phishing and pharming attempts. Specifically, the electronic traffic is monitored and analyzed at stage A, 610, possibly using a system that is also used for other applications, such as monitoring and prevention of unauthorized dissemination of information, as described, e.g., in U.S. Published Patent Application Nos. 2002/0129140, entitled "A System and a Method for Monitoring Unauthorized Transport of Digital Content", and 2005/0288939, "A method and system for managing confidential information", the contents of which are hereby incorporated by reference herein in their entirety.


At stage B, 620, detectors of sensitive information detect sensitive information such as passwords, usernames, mother's maiden names, etc. At stage C, 630, the sensitivity level of the sensitive information is assessed, e.g., by analyzing password strength as explained above, by counting the number of personal details, etc. At stage D, 640, the level of risk is assessed using various heuristics, including geolocation, analysis of the URL, previous knowledge about the site, analysis of the content of the site, etc. At stage E, 650, the system decides about the required action (such as blocking, quarantine, alert, etc.) based on both the sensitivity level and the risk, and at stage F, 660, the system enforces the required action accordingly.


While analyzing sensitivity and risk, there may be two clear-cut cases: a low-risk, low-sensitivity case (e.g., sending the password 1234 to a hobby-related site) and a high-risk, high-sensitivity case (sending many personal details and a strong password in cleartext to a doubtful site). However, dealing with cases in the "gray area" (e.g., "medium sensitivity—low risk" or "medium risk—low sensitivity") may depend on the organizational preferences. Typically, the operator of the system can set parameters that reflect the organizational trade-off in the risk-sensitivity two-dimensional plane.
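
The trade-off can be sketched as a small decision function; the thresholds and the way the two axes are combined are illustrative assumptions standing in for the operator-settable parameters mentioned above.

```python
# Sketch of the risk/sensitivity trade-off: clear-cut corners map directly to
# an action, and operator-settable thresholds decide how the gray area is
# handled. The combination rule and threshold values are assumptions.
def required_action(sensitivity: float, risk: float,
                    block_threshold: float = 0.7,
                    alert_threshold: float = 0.4) -> str:
    """sensitivity and risk are assumed to be normalized to [0, 1]."""
    score = sensitivity * risk  # one simple way to combine the two axes
    if score >= block_threshold:
        return "block"
    if score >= alert_threshold:
        return "quarantine"
    if sensitivity >= alert_threshold or risk >= alert_threshold:
        return "alert"
    return "allow"

if __name__ == "__main__":
    print(required_action(0.1, 0.1))  # password "1234" to a hobby site
    print(required_action(0.9, 0.9))  # strong password, cleartext, doubtful site
```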


Turning now to FIG. 7, there is an illustration of a system for protection against phishing and pharming attempts, constructed in accordance with the method described in FIG. 6. A management unit 710 is used for setting a policy for protecting computerized devices 720 within the organizational perimeter 730, optionally according to parameters inserted by an operator 740, (e.g., parameters that will reflect the organizational trade-off in the risk-sensitivity two-dimensional plane, as explained above). A traffic analyzer 750 on a gateway 760 monitors incoming and outgoing traffic from at least one computerized device 720 to a site 780 and analyzes the sensitivity and the risk involved in the scenario. The results are sent for analysis to the decision system 770, which decides about the required action and sends instructions accordingly (such as “block”, “quarantine” or “alert”) to the gateway 760.


The system of FIG. 7 can perform a weak validation to check whether the disseminated password is, with high probability, the password used by a user to access his account (or other sensitive resources) inside the organization, without revealing significant information to an attacker who gains access to a weak validation file. This is in contrast to files that allow "strong validation" of passwords using their hash values—such files are known to be highly vulnerable to attacks commonly known as "dictionary attacks".


The weak validation method may be based on a Bloom filter, as described in: Space/Time Trade-offs in Hash Coding with Allowable Errors, by Burton H. Bloom, Communications of the ACM, 13 (7), 422-426, 1970, the contents of which are hereby incorporated herein by reference in their entirety. The Bloom filter can assign a tunable probability to the existence of passwords from the organization password file. When the system tests for the existence of a password in the file, it queries the Bloom filter. If the Bloom filter returns "no", then the password does not exist in the file. If the Bloom filter returns "yes", then it is probable that the password exists in the file (and therefore in the organization). The Bloom filter therefore provides a probabilistic indication of the existence of a password in the organization, and this probabilistic indication p is tunable by the design of the filter. If p equals, e.g., 0.9, then there is a false-positive rate of 0.1. Since this validation appears in the context of password dissemination, which by itself conveys a potential risk, this level of false positives is acceptable while monitoring normal traffic.
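
A minimal Bloom-filter sketch of this weak validation follows; the filter size, number of hash functions and SHA-256-based hashing are assumptions chosen for brevity, not parameters given in the text.

```python
# Minimal Bloom-filter sketch of the "weak validation" idea: membership
# queries answer "no" definitively and "yes" only probabilistically, so a
# stolen filter is far less useful to a dictionary attacker than a file of
# password hashes. Filter size and hash construction are assumptions.
import hashlib

class WeakValidator:
    def __init__(self, num_bits: int = 1 << 16, num_hashes: int = 3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

if __name__ == "__main__":
    validator = WeakValidator()
    validator.add("Xq7!rTen2kLw")                    # entry from the password file
    print(validator.maybe_contains("Xq7!rTen2kLw"))  # True: "probably in the file"
    print(validator.maybe_contains("123456"))        # usually False: "definitely not"
```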


However, if an attacker attempts a "dictionary attack" (an attack where the attacker systematically tests possible passwords, beginning with words that have a higher probability of being used, such as names, number sequences and places) on the file, the Bloom filter will return "yes" on an expected 10% of the password candidates, even though they do not exist in the file. This will add noise to the results of the dictionary attack, making it impractical to distinguish the few true positives from the many false positives.


The same method can be applied in order to safely identify other low-entropy items from a database, without exposing the items themselves to dictionary attacks. For example, suppose that the database comprises 10,000 U.S. Social Security Numbers (SSNs). As SSNs are 9-digit numbers, even if they are represented by strong cryptographic hashes, one can easily conduct an effective dictionary attack over all the valid social security numbers. Utilizing the weak validation method described above, one can assess whether a disseminated 9-digit number is, with high probability, an SSN from the database.


The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.


The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

Claims
  • 1. A computer-implemented method of controlling dissemination of sensitive information over an electronic network to a destination, the method comprising: analyzing traffic on the electronic network to detect an attempt to transmit a password to the destination, wherein the destination is an external site on the Internet; determining a strength of the password based on one or more of a length of the password, a similarity of the password to a set of other passwords, and an entropy score of the password; determining a sensitivity of information protected by the password based on the strength of the password, wherein the sensitivity is positively correlated with the strength of the password such that a stronger password results in a determination of higher sensitivity and a weaker password results in a determination of lower sensitivity; in response to the attempt to transmit the password to the destination, classifying content at the destination to determine a category of the content by executing computer instructions on a processor, wherein the category denotes whether the destination node is malicious; assessing a risk level incurred if the password leaves the electronic network and is passed to the destination based at least in part on the category and the sensitivity of information protected by the password; and determining a required action based on the risk level, wherein the required action includes one or more of blocking, quarantining, or alerting, wherein relatively stronger passwords receive relatively stronger protection from being passed in clear-text over a non-secure connection.
  • 2. The method of claim 1, wherein the risk level is further assessed based on at least one of geolocation, analysis of a recipient URL identifying content at the destination, and previous knowledge about the destination.
  • 3. The method of claim 1, wherein the required action is based, at least in part, on parameters settable by an operator.
  • 4. The method of claim 1, further comprising determining longer passwords are stronger than shorter passwords.
  • 5. The method of claim 1, further comprising determining less similar passwords are stronger than more similar passwords.
  • 6. The method of claim 1, further comprising determining higher entropy passwords are stronger than lower entropy passwords.
  • 7. The method of claim 1, further comprising determining a higher level of risk with a stronger password than with a weaker password.
  • 8. A system for controlling dissemination of sensitive information over an electronic network to a destination, the system comprising: a processor configured to execute computer instructions, wherein the computer instructions implement a traffic analyzer, the traffic analyzer in communication with the electronic network and configured to detect an attempt to transmit a password to the destination, wherein the destination is an external site on the Internet; the traffic analyzer configured to, in response to the attempt to transmit the password to the destination: determine a strength of the password based on one or more of a length of the password, a similarity of the password to a set of other passwords, and an entropy score of the password; determine a sensitivity of information protected by the password based on the strength of the password, wherein the sensitivity is positively correlated with the strength of the password such that a stronger password results in a determination of higher sensitivity and a weaker password results in a determination of lower sensitivity; classify content at the destination to determine a category of the content; assess a risk level incurred if the password leaves the electronic network and is passed to the destination based at least in part on the category and the sensitivity of the information protected by the password; and to determine a required action in response to the risk level, wherein the required action includes one or more of blocking, quarantining, or alerting, wherein relatively stronger passwords receive relatively stronger protection from being passed in clear-text over a non-secure connection.
  • 9. The system of claim 8, wherein the risk level is further assessed based on at least one of geolocation, analysis of a URL identifying content at the destination, and previous knowledge about the destination.
  • 10. The system of claim 8, wherein the traffic analyzer is configured to block transmission of the password over the network in response to the risk level.
  • 11. The system of claim 8, wherein the required action is based, at least in part, on parameters settable by an operator.
  • 12. The system of claim 8, wherein the traffic analyzer is further configured to, in response to the attempt to transmit the password to the destination, determine longer passwords are stronger than shorter passwords.
  • 13. The system of claim 8, wherein the traffic analyzer is further configured to, in response to the attempt to transmit the password to the destination, determine less similar passwords are stronger than more similar passwords.
  • 14. The system of claim 8, wherein the traffic analyzer is further configured to, in response to the attempt to transmit the password to the destination, determine higher entropy passwords are stronger than lower entropy passwords.
  • 15. The system of claim 8, wherein the traffic analyzer is further configured to, in response to the attempt to transmit the password to the destination, determine a higher level of risk with a stronger password than with a weaker password.
US Referenced Citations (203)
Number Name Date Kind
5414833 Hershey et al. May 1995 A
5581804 Cameron et al. Dec 1996 A
5590403 Cameron et al. Dec 1996 A
5596330 Yokev et al. Jan 1997 A
5712979 Graber et al. Jan 1998 A
5720033 Deo Feb 1998 A
5724576 Letourneau Mar 1998 A
5801747 Bedard Sep 1998 A
5828835 Isfeld et al. Oct 1998 A
5832228 Holden et al. Nov 1998 A
5899991 Karch May 1999 A
5905495 Tanaka et al. May 1999 A
5919257 Trostle Jul 1999 A
5937404 Csaszar et al. Aug 1999 A
6012832 Saunders et al. Jan 2000 A
6092194 Touboul Jul 2000 A
6185681 Zizzi Feb 2001 B1
6252884 Hunter Jun 2001 B1
6301658 Koehler Oct 2001 B1
6338088 Waters et al. Jan 2002 B1
6357010 Viets et al. Mar 2002 B1
6460141 Olden Oct 2002 B1
6493758 McLain Dec 2002 B1
6654787 Aronson et al. Nov 2003 B1
6732180 Hale et al. May 2004 B1
6804780 Touboul Oct 2004 B1
6832230 Zilliacus et al. Dec 2004 B1
6988209 Balasubramaniam et al. Jan 2006 B1
7051200 Manferdelli et al. May 2006 B1
7058822 Edery et al. Jun 2006 B2
7080000 Cambridge Jul 2006 B1
7089589 Chefalas et al. Aug 2006 B2
7100199 Ginter et al. Aug 2006 B2
7136867 Chatterjee et al. Nov 2006 B1
7155243 Baldwin et al. Dec 2006 B2
7185361 Ashoff et al. Feb 2007 B1
7249175 Donaldson Jul 2007 B1
7346512 Li-Chun Wang et al. Mar 2008 B2
7376969 Njemanze et al. May 2008 B1
7447215 Lynch et al. Nov 2008 B2
7464407 Nakae et al. Dec 2008 B2
7522910 Day Apr 2009 B2
7536437 Zmolek May 2009 B2
7617532 Alexander et al. Nov 2009 B1
7634463 Katragadda et al. Dec 2009 B1
7644127 Yu Jan 2010 B2
7693945 Dulitz et al. Apr 2010 B1
7707157 Shen Apr 2010 B1
7725937 Levy May 2010 B1
7783706 Robinson Aug 2010 B1
7787864 Provo Aug 2010 B2
7814546 Strayer et al. Oct 2010 B1
7818800 Lemley et al. Oct 2010 B1
7991411 Johnson et al. Aug 2011 B2
8041769 Shraim et al. Oct 2011 B2
8065728 Wang et al. Nov 2011 B2
8078625 Zhang et al. Dec 2011 B1
8165049 Salmi Apr 2012 B2
8315178 Makhoul et al. Nov 2012 B2
8498628 Shapiro et al. Jul 2013 B2
8655342 Weinzierl Feb 2014 B2
8695100 Cosoi Apr 2014 B1
8769671 Shraim et al. Jul 2014 B2
20010047474 Takagi Nov 2001 A1
20020078045 Dutta Jun 2002 A1
20020087882 Schneier et al. Jul 2002 A1
20020091947 Nakamura Jul 2002 A1
20020095592 Daniell et al. Jul 2002 A1
20020099952 Lambert et al. Jul 2002 A1
20020129140 Peled et al. Sep 2002 A1
20020129277 Caccavale Sep 2002 A1
20020133606 Mitomo et al. Sep 2002 A1
20020147915 Chefalas et al. Oct 2002 A1
20020162015 Tang Oct 2002 A1
20020174358 Wolff et al. Nov 2002 A1
20020194490 Halperin et al. Dec 2002 A1
20020199095 Bandini et al. Dec 2002 A1
20030018491 Nakahara et al. Jan 2003 A1
20030018903 Greca et al. Jan 2003 A1
20030074567 Charbonneau Apr 2003 A1
20030093694 Medvinsky et al. May 2003 A1
20030101348 Russo et al. May 2003 A1
20030110168 Kester et al. Jun 2003 A1
20030135756 Verma Jul 2003 A1
20030172292 Judge Sep 2003 A1
20030177361 Wheeler et al. Sep 2003 A1
20030185395 Lee et al. Oct 2003 A1
20030185399 Ishiguro Oct 2003 A1
20030188197 Miyata et al. Oct 2003 A1
20030195852 Campbell et al. Oct 2003 A1
20030202536 Foster et al. Oct 2003 A1
20040003139 Cottrille et al. Jan 2004 A1
20040003286 Kaler et al. Jan 2004 A1
20040034794 Mayer et al. Feb 2004 A1
20040039921 Chuang Feb 2004 A1
20040111632 Halperin Jun 2004 A1
20040111636 Baffes et al. Jun 2004 A1
20040117624 Brandt et al. Jun 2004 A1
20040139351 Tsang Jul 2004 A1
20040153644 McCorkendale Aug 2004 A1
20040162876 Kohavi Aug 2004 A1
20040187029 Ting Sep 2004 A1
20040203615 Qu et al. Oct 2004 A1
20040255147 Peled et al. Dec 2004 A1
20040260924 Peled et al. Dec 2004 A1
20050025291 Peled et al. Feb 2005 A1
20050027980 Peled et al. Feb 2005 A1
20050033967 Morino et al. Feb 2005 A1
20050048958 Mousseau et al. Mar 2005 A1
20050055327 Agrawal et al. Mar 2005 A1
20050066197 Hirata et al. Mar 2005 A1
20050086520 Dharmapurikar et al. Apr 2005 A1
20050091535 Kavalam et al. Apr 2005 A1
20050108557 Kayo et al. May 2005 A1
20050111367 Chao et al. May 2005 A1
20050120229 Lahti Jun 2005 A1
20050131868 Lin et al. Jun 2005 A1
20050149726 Joshi et al. Jul 2005 A1
20050210035 Kester et al. Sep 2005 A1
20050223001 Kester et al. Oct 2005 A1
20050229250 Ring et al. Oct 2005 A1
20050251862 Talvitie Nov 2005 A1
20050273858 Zadok et al. Dec 2005 A1
20050283836 Lalonde et al. Dec 2005 A1
20050288939 Peled et al. Dec 2005 A1
20060004636 Kester et al. Jan 2006 A1
20060020814 Lieblich et al. Jan 2006 A1
20060021031 Leahy et al. Jan 2006 A1
20060026105 Endoh Feb 2006 A1
20060026681 Zakas Feb 2006 A1
20060031504 Hegli et al. Feb 2006 A1
20060036874 Cockerille et al. Feb 2006 A1
20060053488 Sinclair et al. Mar 2006 A1
20060068755 Shraim et al. Mar 2006 A1
20060080735 Brinson et al. Apr 2006 A1
20060095459 Adelman et al. May 2006 A1
20060095965 Phillips et al. May 2006 A1
20060098585 Singh et al. May 2006 A1
20060101514 Milener et al. May 2006 A1
20060129644 Owen et al. Jun 2006 A1
20060191008 Fernando et al. Aug 2006 A1
20060212723 Sheymov Sep 2006 A1
20060251068 Judge et al. Nov 2006 A1
20060259948 Calow et al. Nov 2006 A1
20060265750 Huddleston Nov 2006 A1
20060272024 Huang et al. Nov 2006 A1
20060277259 Murphy et al. Dec 2006 A1
20060282890 Gruper et al. Dec 2006 A1
20060288076 Cowings et al. Dec 2006 A1
20070005762 Knox et al. Jan 2007 A1
20070011739 Zamir et al. Jan 2007 A1
20070027965 Brenes et al. Feb 2007 A1
20070028302 Brennan et al. Feb 2007 A1
20070067844 Williamson et al. Mar 2007 A1
20070143424 Schirmer et al. Jun 2007 A1
20070150827 Singh et al. Jun 2007 A1
20070156833 Nikolov et al. Jul 2007 A1
20070195779 Judge et al. Aug 2007 A1
20070199054 Florencio et al. Aug 2007 A1
20070220607 Sprosts et al. Sep 2007 A1
20070250920 Lindsay Oct 2007 A1
20070260602 Taylor Nov 2007 A1
20070261112 Todd et al. Nov 2007 A1
20070294199 Nelken et al. Dec 2007 A1
20070294428 Guy et al. Dec 2007 A1
20070294524 Katano Dec 2007 A1
20070299915 Shraim et al. Dec 2007 A1
20080009268 Ramer et al. Jan 2008 A1
20080040804 Oliver et al. Feb 2008 A1
20080047017 Renaud Feb 2008 A1
20080086638 Mather Apr 2008 A1
20080100414 Diab et al. May 2008 A1
20080216168 Larson et al. Sep 2008 A1
20080226069 Tan Sep 2008 A1
20080262991 Kapoor et al. Oct 2008 A1
20080267144 Jano et al. Oct 2008 A1
20080282338 Beer Nov 2008 A1
20080282344 Shuster Nov 2008 A1
20080295177 Dettinger et al. Nov 2008 A1
20090007243 Boodaei et al. Jan 2009 A1
20090064326 Goldstein Mar 2009 A1
20090064330 Shraim et al. Mar 2009 A1
20090100055 Wang Apr 2009 A1
20090100518 Overcash Apr 2009 A1
20090119402 Shull et al. May 2009 A1
20090131035 Aiglstorfer May 2009 A1
20090144823 Lamastra et al. Jun 2009 A1
20090222920 Chow et al. Sep 2009 A1
20090241173 Troyansky Sep 2009 A1
20090241187 Troyansky Sep 2009 A1
20090241191 Keromytis et al. Sep 2009 A1
20090241196 Troyansky et al. Sep 2009 A1
20090320135 Cavanaugh Dec 2009 A1
20100017879 Kuegler et al. Jan 2010 A1
20100024037 Grzymala-Busse et al. Jan 2010 A1
20100064347 More et al. Mar 2010 A1
20100069127 Fiennes Mar 2010 A1
20100077223 Maruyama et al. Mar 2010 A1
20100198928 Almeida Aug 2010 A1
20100257603 Chander et al. Oct 2010 A1
20100269175 Stolfo et al. Oct 2010 A1
20100312843 Robinson Dec 2010 A1
20120047217 Hewes et al. Feb 2012 A1
Foreign Referenced Citations (23)
Number Date Country
1367595 Sep 2002 CN
1756147 Apr 2006 CN
101060421 Oct 2007 CN
1 180 889 Feb 2002 EP
1 278 330 Jan 2003 EP
1 280 040 Jan 2003 EP
1 457 885 Sep 2004 EP
1 510 945 Mar 2005 EP
1571578 Sep 2005 EP
1 638 016 Mar 2006 EP
1 643 701 Apr 2006 EP
2418330 Mar 2006 GB
2000-235540 Aug 2000 JP
WO 9605549 Feb 1996 WO
WO 9642041 Dec 1996 WO
WO 0124012 Apr 2001 WO
WO 2005017708 Feb 2005 WO
WO 2005119488 Dec 2005 WO
WO 2006027590 Mar 2006 WO
WO 2006062546 Jun 2006 WO
WO 2006136605 Dec 2006 WO
WO 2007059428 May 2007 WO
WO 2007106609 Sep 2007 WO
Non-Patent Literature Citations (34)
Entry
“Risk and the Right Model” by John A. Long (Jan. 1986); 13 pages; originally downloaded from http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA161757.
NPL "Clear Text Password Risk Assessment Documentation" (2002) by SANS Institute; 10 pages; originally downloaded from http://www.sans.org/reading_room/whitepapers/authentication/clear-text-password-risk-assessment-documentation_113.
NPL: Spafford, “Preventing Weak Password Choices”, Computer Science Technical Report, Purdue University, Apr. 9, 1991.
National Institute of Standards (NIST) Special Publication 800-63, Electronic Authentication Guideline. Apr. 2006. 65 pages.
“Google + StopBadware.org = Internet Gestapo?”, http://misterpoll.wordpress.com/2007/01/05/google-stopbadwareorg-internet-gestapo/, Jan. 5, 2007.
“Trends in Badware 2007”, StopBadware.org.
George, Erica, “Google launches new anti-badware API”, http://blog.stopbadware.org//2007/06/19/google-launches-new-anti-badware-api, Jun. 19, 2007.
Wang et al., MBF: a Real Matrix Bloom Filter Representation Method on Dynamic Set, 2007 IFIP International Conference on Network and Parallel Computing—Workshops, Sep. 18, 2007, pp. 733-736, Piscataway, NJ, USA.
Adam Lyon, “Free Spam Filtering Tactics Using Eudora,”, May 21, 2004, pp. 1-4.
Cohen, F., A Cryptographic Checksum for Integrity Protection, Computers & Security, Elsevier Science Publishers, Dec. 1, 1987, vol. 6, Issue 6, pp. 505-510, Amsterdam, NL.
Dahan, M. Ed., “The Internet and government censorship: the case of the Israeli secret service” Online information., Proceedings of the International Online Information Meeting, Oxford, Learned Information, GB, Dec. 12-14, 1989, vol. Meeting 13, December, Issue XP000601363, pp. 41-48, Sections 1,3., London.
Gittler F., et al., The DCE Security Service, Pub: Hewlett-Packard Journal, Dec. 1995, pp. 41-48.
IBM Technical Disclosure Bulletin, Mean to Protect System from Virus, IBM Corp., Aug. 1, 1994, Issue 659-660.
Igakura, Tomohiro et al., Specific quality measurement and control of the service-oriented networking application., Technical Report of IEICE, IEICE Association, Jan. 18, 2002, vol. 101, Issue 563, pp. 51-56, Japan.
PCT International Search Report and Written Opinion for International Application No. PCT/US2008/052483, Feb. 11, 2009.
Reid, Open Systems Security: Traps and Pitfalls, Computer & Security, 1995, Issue 14, pp. 496-517.
Resnick, P. et al., “PICS: Internet Access Controls Without Censorship”, Communications of the Association for Computing Machinery, ACM, Oct. 1, 1996, vol. 39, Issue 10, pp. 87-93, New York, NY.
Stein, Web Security—a step by step reference guide, Addison-Wesley, 1997, pp. 387-415.
Symantec Corporation, E-security begins with sound security policies, Announcement Symantec, XP002265695, Jun. 14, 2001, pp. 1,9.
Williams, R., Data Integrity with Veracity, Retrieved from the Internet: <URL: ftp://ftp.rocksoft.com/clients/rocksoft/papers/vercty10.ps>, Sep. 12, 1994.
Zhang et al., The Role of URLs in Objectionable Web Content Categorization, Web Intelligence, 2006.
IronPort Web Reputation White Paper, A Comprehensive, Proactive Approach to Web-Based Threats, Ironport Systems,, 2009, pp. 10.
IronPort Web Reputation: Protect and Defend Against URL-Based Threats; Ironport Systems, Apr. 2006, 8 pages.
Aviv et al., Ssares: Secure Searchable Automated Remote Email Storage, 23rd Annual Computer Security Applications Conference, Jan. 2, 2008, pp. 129-138.
Long, John A., Risk and the Right Model, originally downloaded from http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GEtTRDoc.pdf&AD=ADA161757, Jan. 1986, pp. 13.
Song et al., Multi-pattern signature matching for hardware network intrusion detection systems, IEEE Globecom 2005, Jan. 23, 2006.
Spafford, Eugene, Preventing Weak Password Choices, Computer Science Technical Reports. Paper 875. http://docs.lib.purdue.edu/cstech/875, 1991.
Yang et al., Performance of Full Text Search in Structured and Unstructured Peer-to-Peer Systems, Proceedings IEEE Infocom; originally downloaded from http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04146962, 2006, pp. 12.
Shanmugasundaram et al, Payload Attribution via Hierarchical Bloom Filters, CCS, Oct. 25-29, 2004.
Shanmugasundaram et al., ForNet: A Distributed Forensics Network, In Proceedings of the Second International Workshop on Mathematical Methods, Models and Architectures for Computer Networks Security, 2003.
Wang Ping, “Research on Content Filtering-based Anti-spam Technology,” Outstanding Master's Degree Thesis of China, Issue 11, Nov. 15, 2006.
Ma Zhe, “Research and Realization of Spam Filtering System,” Outstanding Master's Degree Thesis of China, Issue 2, Jun. 15, 2005.
Zhang Yao Long, “Research and Application of Behavior Recognition in Anti-spam System,” Outstanding Master's Degree Thesis of China, Issue 11, Nov. 15, 2006.
Ruffo et al., EnFilter: A Password Enforcement and Filter Tool Based on Pattern Recognition Techniques, ICIAP 2005, LNCS 3617, pp. 75-82, 2005.
Related Publications (1)
Number Date Country
20090241173 A1 Sep 2009 US