Computer viruses and worms are often transmitted via electronic messages. An infectious message usually comes in the form of an e-mail with a file attachment, although other forms of infection are possible. Attackers have exploited many protocols that exchange electronic information, including e-mail, instant messaging, SQL, Hypertext Transfer Protocol (HTTP), Lightweight Directory Access Protocol (LDAP), File Transfer Protocol (FTP), telnet, etc. When the attachment is opened, the virus executes. Sometimes the virus is launched through a link provided in the e-mail. Virus or worm attacks can cause considerable damage to organizations. Thus, many anti-virus solutions have been developed to identify viruses and prevent further damage. Currently, most anti-virus products identify viruses using signatures derived from known viruses. Such systems, however, often do not protect the network effectively during the time window between a virus's first appearance and the deployment of its signature. Networks are particularly vulnerable during this window, which is referred to as "time zero" or "day zero". For a typical anti-virus system to function effectively, viruses must first be identified and their signatures developed and deployed. Even after the system adapts following an outbreak, the time zero threat can re-emerge as the virus mutates, rendering the old signature obsolete.
One approach to time zero virus detection is to use a content filter to identify and quarantine any message with a potentially executable attachment. This approach is cumbersome because it can incorrectly flag attachments in Word, Excel and other frequently used document formats even when the attachments are harmless, resulting in a high rate of misidentification (also referred to as false positives). Furthermore, the approach may not be effective if the virus author disguises the nature of the attachment. For example, some virus messages ask the recipients to rename a .txt file as .exe and then click on it. Sometimes the virus author exploits files that were not previously thought to be executable, such as JPEG files. Therefore, it would be useful to have a better time zero detection technique. It would also be desirable if the technique could detect viruses more accurately and generate fewer false positives.
Embodiments for evaluating a file attached to an electronic message for the presence of a virus are claimed.
In a first claimed embodiment, a method for evaluating a file attached to an electronic message for the presence of a virus includes receiving an electronic message at a computing device. The electronic message includes an attachment that has a file name. The computing device has at least a first virus detection routine and executable instructions stored in memory. Upon executing the instructions using the processor, the computing device applies at least a signature matching test that outputs a probability that the attachment includes a virus. The computing device quarantines the electronic message when the outputted probability that the attachment includes a virus exceeds a predetermined threshold. The computing device searches for another virus detection test stored in memory when the outputted probability that the attachment includes a virus does not exceed the predetermined threshold and applies the other virus detection test. The other virus detection test includes at least one of a file name test, a bit pattern test, or an N-gram test. The probability that the attachment includes a virus is updated based on the other virus detection test and the computing device quarantines the electronic message when the updated probability that the attachment includes a virus exceeds the predetermined threshold. The computing device identifies the electronic message as free of viruses when the updated probability that the attachment includes a virus does not exceed the predetermined threshold.
In a second claimed embodiment, a computer program is embodied on a non-transitory computer-readable storage medium. The program is executable by a processor to perform a method for evaluating a file attached to an electronic message for the presence of a virus. The method includes receiving an electronic message at a computing device. The electronic message includes an attachment that has a file name. The computing device has at least a first virus detection routine stored in memory. The method includes applying at least a signature matching test that outputs a probability that the attachment includes a virus. The method further includes quarantining the electronic message when the outputted probability that the attachment includes a virus exceeds a predetermined threshold. The method includes searching for another virus detection test stored in memory when the outputted probability that the attachment includes a virus does not exceed the predetermined threshold and applying the other virus detection test. The other virus detection test includes at least one of a file name test, a bit pattern test, or an N-gram test. The probability that the attachment includes a virus is updated based on the other virus detection test. The method further includes quarantining the electronic message when the updated probability that the attachment includes a virus exceeds the predetermined threshold. The method also includes identifying the electronic message as free of viruses when the updated probability that the attachment includes a virus does not exceed the predetermined threshold.
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
The invention can be implemented in numerous ways, including as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. A component such as a processor or memory described as being configured to perform a task includes either a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
Detecting infectious messages is disclosed. Analysis of individual characteristics of messages is performed in some embodiments to determine whether the message is suspicious. If a message is deemed suspicious, it is determined whether a similar message has been noted previously as possibly suspicious. If a similar message has been previously noted, the message is classified according to its individual characteristics and its similarity to the noted message. In some embodiments, if a message that was forwarded is later found to be infectious, the infectious message is reported to human or machine agents for appropriate action to take place.
In the process shown, if a message is determined to be legitimate, the message is forwarded to the appropriate recipient (204). If the message is determined to be infectious, the message is treated as appropriate (206). In some embodiments, the message is quarantined or deleted from the delivery queue. If a message is deemed to be suspicious, a traffic analysis is performed on the suspicious message (208). The traffic analysis identifies any traffic spike in the e-mail message stream that is consistent with the pattern of a virus outbreak. Details of the traffic analysis are described below. In this example, analysis of a message in the context of all message traffic yields another probability of the message being infectious, and classifies the suspicious message as either legitimate or infectious according to the probability. Legitimate messages are processed normally and forwarded to their destinations (204). Infectious messages are treated appropriately (206). Other classifications are also possible. The order of the analyses may be different in some implementations and some embodiments perform the analysis in parallel. In some embodiments, each analysis is performed independently.
It is then determined whether the probability exceeds the threshold for the message to be deemed infectious (320). If so, the message is considered infectious and may be quarantined, deleted from the send queue, or otherwise appropriately handled. If, however, the probability does not exceed the threshold, it is determined whether more tests are available (322). If so, the next available test is applied and the process of updating the probability and testing against the threshold is repeated. If no more tests are available, the probability is compared to the threshold required for a legitimate message (324). If the probability exceeds the legitimate threshold, the message is deemed suspicious. Otherwise, the tests indicate that the message is legitimate. The classification of the message is passed on to the next routine. According to process 200, depending on whether the message is legitimate, suspicious or infectious, the next routine may forward the message, perform traffic analysis on the message, or treat the message as infectious.
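The two-threshold loop described above can be sketched as follows. The threshold values and the rule for combining successive test outputs (a running maximum here) are illustrative assumptions, not values taken from the disclosure:

```python
INFECTIOUS_THRESHOLD = 0.9   # above this: quarantine (illustrative value)
LEGITIMATE_THRESHOLD = 0.3   # at or below this: forward (illustrative value)

def classify_message(message, tests):
    """Apply each test in turn, updating the running probability, and
    stop early as soon as the message can be deemed infectious."""
    probability = 0.0
    for test in tests:
        probability = max(probability, test(message))
        if probability > INFECTIOUS_THRESHOLD:
            return "infectious"   # quarantine or delete from the send queue
    # All tests applied without crossing the infectious threshold.
    if probability > LEGITIMATE_THRESHOLD:
        return "suspicious"       # hand off to traffic analysis
    return "legitimate"           # forward to the recipient
```

Because the loop returns as soon as the infectious threshold is crossed, cheaper tests placed earlier in the list can short-circuit the more expensive ones.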
Examples of the tests used in the individual message analysis include signature matching tests (304), file name tests (306), character tests (308), bit pattern tests (310), N-gram tests (312), and probabilistic finite state automata (PFSA) tests (314). The tests may be arranged in any appropriate order. Some tests may be omitted and different tests may be used.
Some of the tests analyze the intrinsic characteristics of the message and/or its attachments. In the embodiments shown, the signature matching test (304) compares the signature of the message with the signatures of known viruses. The test in some embodiments generates a probability on a sliding scale, where an exact match leads to a probability of 1, and an inexact match receives a probability value that depends on the degree of similarity.
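A sliding-scale signature match could look like the following sketch, where `difflib`'s sequence similarity stands in for whatever inexact-matching metric a given implementation actually uses:

```python
import difflib

def signature_probability(message_sig, known_sigs):
    """Return 1.0 on an exact signature match; otherwise the best
    similarity ratio against the known-virus signatures."""
    best = 0.0
    for sig in known_sigs:
        if message_sig == sig:
            return 1.0
        best = max(best, difflib.SequenceMatcher(None, message_sig, sig).ratio())
    return best
```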
The file name test (306) examines the name of the attachment and determines whether there is an anomaly. For example, a file name such as “read me.txt.exe” is highly suspicious since it would appear that the sender is attempting to misrepresent the nature of the executable and pass the file off as a text file.
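A minimal sketch of such a file name test follows; the extension lists and probability values are illustrative assumptions:

```python
import os

# Illustrative extension lists; a deployed filter would use fuller sets.
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".pif", ".bat", ".com", ".vbs"}
INNOCUOUS_EXTENSIONS = {".txt", ".doc", ".jpg", ".pdf"}

def file_name_probability(file_name):
    """Estimate how suspicious an attachment name is on its own."""
    root, ext = os.path.splitext(file_name.lower())
    if ext in EXECUTABLE_EXTENSIONS:
        # A harmless-looking inner extension before the real one,
        # as in "read me.txt.exe", suggests deliberate disguise.
        _, inner = os.path.splitext(root)
        if inner in INNOCUOUS_EXTENSIONS:
            return 0.95
        return 0.5   # plainly labelled executable: moderately suspicious
    return 0.05
```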
The bit pattern test (310) examines certain portions of the file and determines whether there is an anomaly. Many files contain embedded bit patterns that indicate the file type. The magic number or magic sequence is such a bit pattern. For example, an executable file includes a particular bit pattern that indicates to the operating system that the file is an executable. The operating system will execute any file that starts with the magic sequence, regardless of the file extension. If an attachment has an extension such as .txt or .doc that seems to indicate that the file is textual in nature, yet the starting sequence in the file contains the magic sequence of an executable, then there is a high probability that the sender is attempting to disguise an executable as a text document. Therefore, the attachment is highly suspicious.
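The check can be sketched as follows. The magic sequences are real ("MZ" opens DOS/Windows executables and b"\x7fELF" opens Linux binaries), while the extension list and probability values are illustrative assumptions:

```python
EXECUTABLE_MAGIC = (b"MZ", b"\x7fELF")
TEXTUAL_EXTENSIONS = (".txt", ".doc", ".csv", ".log")

def bit_pattern_probability(file_name, content):
    """Compare what the extension claims with what the leading bytes say."""
    claims_text = file_name.lower().endswith(TEXTUAL_EXTENSIONS)
    looks_executable = content.startswith(EXECUTABLE_MAGIC)
    if claims_text and looks_executable:
        return 0.95   # executable disguised as a text document
    if looks_executable:
        return 0.4    # executable, but honestly labelled
    return 0.05
```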
Some of the tests, such as the N-gram test (312) and the PFSA test (314), measure the deviation of the received message from a baseline. In this example, the baseline is built from a collection of known good messages. An N-gram model describes the properties of the good messages. The N-gram model is a collection of token sequences and the corresponding probability of each sequence. The tokens can be characters, words or other appropriate entities. The test compares the N-gram model to an incoming message to estimate the probability that the message is legitimate. The probabilities of the N-gram sequences of the incoming message can be combined with the probabilities of the N-gram sequences of the baseline model using any of several methods. In some embodiments, the N-gram test compares the test result with a certain threshold to determine the legitimacy of a message. In some embodiments, a message deemed legitimate by the N-gram test is not subject to further testing, thus reducing the false positive rate. In some embodiments, a message found to be legitimate by the N-gram test is further tested to ascertain its true legitimacy.
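One way to realize such an N-gram baseline, sketched here with character trigrams and add-one smoothing (both choices are assumptions; the disclosure leaves the combination method open):

```python
import math
from collections import Counter

def train_ngram_model(good_messages, n=3):
    """Count character n-grams over a corpus of known good messages."""
    counts = Counter()
    for msg in good_messages:
        for i in range(len(msg) - n + 1):
            counts[msg[i:i + n]] += 1
    return counts, sum(counts.values()), n

def legitimacy_score(model, message):
    """Average log-probability of the message's n-grams under the model.
    Values closer to zero mean the message looks more like the baseline."""
    counts, total, n = model
    grams = [message[i:i + n] for i in range(len(message) - n + 1)]
    if not grams:
        return 0.0
    # Add-one smoothing keeps the score finite for unseen n-grams.
    vocab = len(counts) + 1
    return sum(math.log((counts[g] + 1) / (total + vocab))
               for g in grams) / len(grams)
```

A message whose n-grams were never seen in the good-message corpus scores markedly lower than one built from familiar sequences, and the score can then be compared against a legitimacy threshold.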
In the example shown, the PFSA test (314) relies on a model that is built from a set of known good messages. The model describes the properties of legitimate messages. The model includes a plurality of tokens, such as characters and words, and the probabilities associated with the tokens. The test estimates the probability that a particular message, which includes a sequence of tokens, could have been generated by the model. In some embodiments, similar to the N-gram test, the test result is compared with a certain threshold to determine the legitimacy of a message. In some embodiments, a message deemed legitimate by the PFSA test is not subject to further testing to avoid false positives. In some embodiments, a message found to be legitimate by the PFSA test is further tested to ascertain its true legitimacy.
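A toy PFSA over word tokens illustrates the idea; the states, tokens and transition probabilities below are invented for illustration, not taken from any trained model:

```python
# state -> {token: (next state, transition probability)}
MODEL = {
    "start":   {"hello": ("greeted", 0.6), "hi": ("greeted", 0.4)},
    "greeted": {"friend": ("end", 0.7), "there": ("end", 0.3)},
}

def sequence_probability(tokens, state="start"):
    """Probability that the automaton generates this token sequence;
    an undefined transition makes the sequence impossible (0.0)."""
    prob = 1.0
    for tok in tokens:
        transitions = MODEL.get(state, {})
        if tok not in transitions:
            return 0.0
        state, p = transitions[tok]
        prob *= p
    return prob
```

A token sequence the automaton cannot generate receives probability zero, which in a real system would be smoothed rather than treated as an absolute verdict.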
In some embodiments, information about previously received messages is collected and used to identify an increase in the number of similar and potentially suspicious messages. Messages are clustered to establish a statistical model that can be used to detect similar messages. The data collected may include one or more of the following: time of receipt, the recipients, number of recipients, the sender, size of the attachment, number of attachments, number of executable attachments, file name, file extension, file type according to the starting sequence of the file binary, etc. The characteristics of an incoming message are compared to the model to determine whether similar messages have been noted previously. A traffic spike in similar messages that were previously noted as potentially suspicious indicates the likelihood of a virus outbreak.
In some embodiments, traffic patterns are analyzed on a global network level. In other words, the analysis may monitor all the messages routed through an internet service provider and note the suspicious ones. In some embodiments, the traffic patterns are analyzed locally. For example, messages on a local network or on different subnets of a local network may be analyzed separately. In some embodiments, a combination of global and local analyses is used.
In local traffic analysis, different subnets can have different traffic patterns. For example, within a corporation, the traffic on the engineering department subnet may involve a large number of executables and binary files. Thus, absent other indicators, executables and binary attachments will not always trigger an alarm. In contrast, the traffic pattern of the accounting department may mostly involve text documents and spreadsheets, therefore an increase in binary or executable attachments would indicate a potential outbreak. Tailoring traffic analysis based on local traffic can identify targeted attacks as well as variants of old viruses.
It is then determined whether the message is similar to the previously stored messages (406). If the message is not similar to any of the previously stored suspicious messages, a low probability of infectiousness is assigned. If, however, the message is similar to previously stored suspicious messages, information associated with the received message is also stored and the statistical model is updated accordingly (408). It is then determined whether the number of such similar and suspicious messages has exceeded a predefined threshold (410). If not, the message is not classified as infectious at this point, although a higher probability may be assigned to it. If the total number of such suspicious messages has exceeded the threshold, it is likely that the message is indeed infectious and should be treated appropriately. For example, consider the case where the threshold is set to 5 and there are already 4 instances of suspicious messages with executable attachments having the same extension and similar size. When a fifth message arrives with a similar-sized executable attachment with the same extension, the message will be classified as infectious. By selecting an appropriate threshold value, infectious messages can be detected and stopped before a major outbreak occurs.
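The threshold count in the example above can be sketched with a per-cluster counter (the class name and cluster key are illustrative):

```python
from collections import defaultdict

class OutbreakCounter:
    """Count suspicious messages per cluster (e.g. same extension and
    similar size) and flag the message that crosses the threshold."""
    def __init__(self, threshold=5):   # 5 matches the example above
        self.threshold = threshold
        self.counts = defaultdict(int)

    def record(self, cluster_key):
        """Return True when the arriving message should be classified
        as infectious."""
        self.counts[cluster_key] += 1
        return self.counts[cluster_key] >= self.threshold
```

With the default threshold of 5, the first four similar messages are recorded without being flagged, and the fifth arrival is classified as infectious.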
Sometimes the system may initially find a message to be legitimate or merely suspicious and forward the message to its destination. Later as more information becomes available, the system may find the message to be infectious.
Once an already forwarded message is deemed infectious, measures are taken to prevent the infectious forwarded message from spreading (508). In the example shown above, the system will take actions to keep the 4 instances of the previously forwarded message from being opened or resent by their recipients. Additionally, the system will not forward the fifth message. In some embodiments, the system reports the finding to the system administrator, the recipient, and/or other users on the network to prevent the previously forwarded infectious message from spreading further. Warning messages, log messages or other appropriate techniques may be used. In some embodiments, the system generates a cancellation request to a forwarding agent such as the mail server, which will attempt to prevent the messages from being forwarded by deleting them from the send queue, moving them into quarantine, or taking any other appropriate action.
Detecting and managing infectious messages have been disclosed. By performing individual message analysis and/or traffic analysis, infectious messages can be more accurately identified at time zero, and infectious messages that initially escaped detection can be later identified and prevented from further spreading.
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
This application is a continuation and claims the priority benefit of U.S. patent application Ser. No. 14/578,065 filed Dec. 19, 2014, now U.S. Pat. No. 9,237,163 that issued on Jan. 12, 2016, which is a continuation and claims the priority benefit of U.S. patent application Ser. No. 11/895,519 filed Aug. 24, 2007 and titled “Managing Infectious Forwarded Messages,” now U.S. Pat. No. 8,955,106 that issued on Feb. 10, 2015, which is a division of and claims priority to U.S. patent application Ser. No. 11/156,373 filed Jun. 16, 2005 and titled “Managing Infectious Messages As Identified by an Attachment,” now U.S. Pat. No. 7,343,624 that issued on Mar. 11, 2008, which claims the priority benefit of U.S. provisional patent application No. 60/587,839 filed Jul. 13, 2004 and titled “Detecting Malicious Message on Day Zero,” the disclosures of which are incorporated herein by reference.
Number | Date | Country | |
---|---|---|---|
20160127400 A1 | May 2016 | US |
Number | Date | Country | |
---|---|---|---|
60587839 | Jul 2004 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11156373 | Jun 2005 | US |
Child | 11895519 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14578065 | Dec 2014 | US |
Child | 14993059 | US | |
Parent | 11895519 | Aug 2007 | US |
Child | 14578065 | US |