Message classification using legitimate contact points

Information

  • Patent Grant
  • Patent Number
    8,935,348
  • Date Filed
    Saturday, June 8, 2013
  • Date Issued
    Tuesday, January 13, 2015
Abstract
A system and method are disclosed for classifying a message. The method includes receiving the message, identifying all items of a certain type in the message, determining whether each of the items meets a criterion, and in the event that all the items are determined to meet the criterion, determining a classification of the message. The system includes an interface configured to receive the message, a processor coupled to the interface, configured to identify all items of a certain type in the message; determine whether each of the items meets a criterion; and in the event that all the items are determined to meet the criterion, determine a classification of the message.
Description
FIELD OF THE INVENTION

The present invention relates generally to message classification. More specifically, a technique for avoiding junk messages (spam) is disclosed.


BACKGROUND OF THE INVENTION

Electronic messages have become an indispensable part of modern communication. Electronic messages such as email or instant messages are popular because they are fast, easy, and have essentially no incremental cost. Unfortunately, these advantages of electronic messages are also exploited by marketers who regularly send out unsolicited junk messages. The junk messages are referred to as “spam”, and spam senders are referred to as “spammers”. Spam messages are a nuisance for users. They clog people's inbox, waste system resources, often promote distasteful subjects, and sometimes sponsor outright scams.


There are a number of commonly used techniques for classifying messages and identifying spam. For example, blacklists are sometimes used for tracking known spammers. The sender address of an incoming message is compared to the addresses in the blacklist. A match indicates that the message is spam and prevents the message from being delivered. Other techniques such as rule matching and content filtering analyze the message and determine the classification of the message according to the analysis. Some systems have multiple categories for message classification. For example, a system may classify a message as one of the following categories: spam, likely to be spam, likely to be good email, and good email, where only good email messages are allowed through and the rest are either further processed or discarded.


Spam-blocking systems sometimes misidentify non-spam messages. For example, a system that performs content filtering may be configured to identify any messages that include certain word patterns, such as “savings on airline tickets” as spam. However, an electronic ticket confirmation message that happens to include such word patterns may be misidentified as spam or possibly spam. Misidentification of good messages is undesirable, since it wastes system resources, and in the worst case scenario, causes good messages to be classified as spam and lost.


It would be useful to have a technique that would more accurately identify non-spam messages. Such a technique would not be effective if spammers could easily alter parts of the spam messages they send so that the messages would be identified as non-spam. Thus, it would also be desirable if the non-spam messages identified by such a technique were not easily spoofed.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:



FIG. 1 is a flowchart illustrating the message classification process according to one embodiment.



FIG. 2 is a flowchart illustrating the details of the signature generation process according to one embodiment.



FIG. 3 is a flow chart illustrating the classification of a message according to another embodiment.



FIG. 4 is a flow chart illustrating a registration process for updating the database, according to one embodiment.



FIG. 5 is a table used for aggregating user inputs, according to one system embodiment.





DETAILED DESCRIPTION

It should be appreciated that the present invention can be implemented in numerous ways, including as a process, an apparatus, a system, or a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. It should be noted that the order of the steps of disclosed processes may be altered within the scope of the invention.


A detailed description of one or more preferred embodiments of the invention is provided below along with accompanying figures that illustrate by way of example the principles of the invention. While the invention is described in connection with such embodiments, it should be understood that the invention is not limited to any embodiment. On the contrary, the scope of the invention is limited only by the appended claims and the invention encompasses numerous alternatives, modifications and equivalents. For the purpose of example, numerous specific details are set forth in the following description in order to provide a thorough understanding of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.


In U.S. patent application Ser. No. 10/371,987 by Wilson et al., filed Feb. 20, 2003, entitled "USING DISTINGUISHING PROPERTIES TO CLASSIFY MESSAGES," which is herein incorporated by reference for all purposes, a technique using distinguishing properties to identify electronic messages is described. The technique uses distinguishing properties within messages, such as contact information, to identify messages that have previously been classified. In some embodiments, the technique is applied to identify spam messages. However, spammers aware of such a detection scheme may change their contact information frequently to prevent their messages from being identified.


An improved technique is disclosed. The technique prevents spammers from circumventing detection by using items in the message to identify non-spam messages. All items of a certain type in the message are identified and checked to determine whether they meet a certain criterion. In some embodiments, the items are distinguishing properties or signatures of distinguishing properties. They are identified and looked up in a database. In various embodiments, the database may be updated by a registration process, based on user input, and/or by post-processing stored messages. In some embodiments, the items are looked up in a database of acceptable items. A message is classified as non-spam if all the items are found in the database. If not all the items are found in the database, the message is further processed to determine its classification.


Spammers generally have some motive for sending spam messages. Although spam messages come in all kinds of forms and contain different types of information, nearly all of them contain some distinguishing properties that help the senders fulfill their goals. For example, in order for the spammer to ever make money from a recipient, there must be some way for the recipient to contact the spammer. Thus, most spam messages include at least one contact point, whether in the form of a phone number, an address, a universal resource locator (URL), or any other appropriate information for establishing contact with some entity. These distinguishing properties, such as contact points, instructions for performing certain tasks, distinctive terms such as stock ticker symbols, names of products or companies, or any other information essential to the message, are extracted and used to identify messages.


Similarly, non-spam messages may also have distinguishing properties. For example, electronic ticket confirmations and online purchase orders commonly include contact points such as URLs, email addresses, and telephone numbers for the sender's organization. Advantageously, spam messages generally include some distinguishing properties that are different from the distinguishing properties in non-spam messages. For example, the URL of the spammer's website is unlikely to appear in any non-spam message. To identify non-spam messages, a database is used for storing acceptable distinguishing properties. The database may be a table, a list, or any other appropriate combination of storage software and hardware. A message that has only acceptable distinguishing properties is unlikely to be spam. Since information that is not distinguishing is discarded during the classification process, it is more difficult for spammers to alter their message generation scheme to evade detection.


For the purpose of example, details of email message processing using contact points and contact point signatures to determine whether a message is acceptable are discussed below, although it should be noted that the technique is also applicable to the classification of other forms of electronic messages using other types of items. It should also be noted that different types of criteria and classifications may be used in various embodiments.



FIG. 1 is a flowchart illustrating the message classification process according to one embodiment. A message is received (100), and all the contact points are selected (102). It is then determined whether all the contact points can be found in a database of previously stored acceptable contact points (104). If all the contact points are found in the database, the message is classified as non-spam and delivered to the user (106). The contact points that are not found in the database may be contact points for a spammer or contact points for a legitimate sender that have not yet been stored in the database. Thus, if not all contact points are found in the database, the message cannot be classified as non-spam and further processing is needed to accurately classify the message (108). The processing may include any appropriate message classification techniques, such as performing a whitelist test on the sender's address, using summary information or rules to determine whether the content of the message is acceptable, etc.
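By way of illustration only, the all-or-nothing decision of steps 104-108 may be sketched in Python as follows. The in-memory set standing in for the database, the string verdicts, and the treatment of a message with no contact points are assumptions made for this sketch, not part of the disclosed embodiments.

```python
# Illustrative sketch of the FIG. 1 decision, assuming contact points have
# already been extracted (step 102) and the database is a simple in-memory set.
def classify(contact_points: list[str], database: set[str]) -> str:
    """A message is non-spam only if every contact point is known-acceptable."""
    if contact_points and all(point in database for point in contact_points):
        return "non-spam"            # step 106: deliver the message to the user
    return "further-processing"      # step 108: whitelist tests, content rules, etc.
```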


In some embodiments, the system optionally generates signatures based on the selected contact points. The signatures can be generated using a variety of methods, including compression, expansion, checksum, hash functions, etc. The signatures are looked up in a database of acceptable signatures. If all the signatures are found in the database, the message is classified as non-spam; otherwise, the message is further processed to determine its classification. Since signatures obfuscate the actual contact point information, using signatures provides better privacy protection for the intended recipient of the message, especially when the classification component resides on a different device than the recipient's.



FIG. 2 is a flowchart illustrating the details of the signature generation process according to one embodiment. Various contact points are extracted from the message and used to generate the signatures. This process is used both in classifying incoming messages and in updating the database with signatures that are known to come from non-spam messages. The sender address, email addresses, links to URLs such as web pages, images, etc. and the phone numbers in the message are extracted (200, 202, 204, 206). There are many ways to extract the contact information. For example, telephone numbers usually include 7-10 digits, sometimes separated by dashes and parentheses. To extract telephone numbers, the text of the message is scanned, and patterns that match various telephone number formats are extracted. Any other appropriate contact information is also extracted (208).
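The telephone-number extraction of step 206 may be sketched as follows. The particular pattern set is an assumption; the disclosure says only that patterns matching various telephone number formats are scanned for in the message text.

```python
import re

# Illustrative sketch of step 206: scan the message text for substrings that
# match common North American telephone-number formats (an assumed pattern set).
PHONE_PATTERN = re.compile(
    r"(?:\+?1[-.\s]?)?"          # optional country code
    r"(?:\(\d{3}\)|\d{3})"       # area code, with or without parentheses
    r"[-.\s]?\d{3}[-.\s]?\d{4}"  # subscriber number
)

def extract_phone_numbers(text: str) -> list[str]:
    """Return phone-number-like substrings found in the message text."""
    return [match.group(0) for match in PHONE_PATTERN.finditer(text)]
```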


The extracted contact points are then reduced to their canonical equivalents (210). The canonical equivalent of a piece of information is an identifier used to represent the same information, regardless of its format. For example, a telephone number may be represented as 1-800-555-5555 or 1(800)555-5555, but both are reduced to the same canonical equivalent of 18005555555. In some embodiments, the canonical equivalent of a URL or an email address is the domain name. For example, http://www.mailfrontier.com/contact, www.mailfrontier.com/support and jon@mailfrontier.com are all reduced to the same canonical equivalent of mailfrontier.com. It should be noted that there are numerous techniques for arriving at the canonical equivalent of any distinguishing property, and different implementations may employ different techniques.
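The reduction of step 210 may be sketched as follows, using the examples above. The sketch assumes phone numbers reduce to bare digits and that the domain is taken naively as the last two labels of the host; a real implementation would consult a public-suffix list.

```python
import re
from urllib.parse import urlparse

# Illustrative sketch of step 210 (canonicalization). Assumptions: a phone
# number reduces to its digits; a URL or email address reduces to a domain
# taken naively as the last two labels of the host name.
def canonicalize(contact_point: str) -> str:
    digits = re.sub(r"\D", "", contact_point)
    if len(digits) >= 7 and not re.search(r"[a-zA-Z]", contact_point):
        return digits                                  # phone number -> digits only
    if "@" in contact_point:
        host = contact_point.rsplit("@", 1)[1]         # email address -> host part
    else:
        url = contact_point if "://" in contact_point else "http://" + contact_point
        host = urlparse(url).netloc                    # URL -> host part
    host = host.split(":")[0].lower()                  # drop any port, normalize case
    return ".".join(host.split(".")[-2:])              # keep the last two labels
```

With these assumptions, all three mailfrontier.com examples above reduce to the same canonical equivalent, as do both telephone-number formats.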


After the contact points are reduced to their canonical equivalents, signatures corresponding to the canonical equivalents are generated and added to the database (212). There are various techniques for generating the signature, such as performing a hash function or a checksum function on the characters in the canonical equivalent.
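Step 212 may be sketched as follows. The disclosure permits hash or checksum functions over the characters of the canonical equivalent; SHA-256 is an illustrative choice here, not the disclosed implementation.

```python
import hashlib

# Illustrative sketch of step 212: generate a signature from a canonical
# equivalent and add it to the acceptable-signature database. SHA-256 is an
# assumed choice; any hash or checksum function could serve.
def signature(canonical: str) -> str:
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def add_acceptable(database: set[str], canonical: str) -> None:
    database.add(signature(canonical))  # store the signature, not the contact point
```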


The database shown in this embodiment stores signatures that correspond to various acceptable contact points. Such a database is also used in the subsequent embodiments for purposes of illustration. It should be noted that in some embodiments the acceptable contact points, other distinguishing properties, and/or their signatures may be stored in the database.



FIG. 3 is a flow chart illustrating the classification of a message according to another embodiment. In this embodiment, each contact point of the message is tested and used to classify the message. Once the message is received (300), it is optionally determined whether the message includes any contact points (301). If the message does not include any contact points, the message may or may not be spam. Therefore, control is transferred to 312 to further process the message to classify it. If the message includes at least one contact point, the message is parsed and an attempt is made to extract the next contact point in the message (302). There may not be another contact point to be extracted from the message if all the distinguishing properties in the message have been processed already. Hence, in the next step, it is determined whether the next contact point is available (304). If there are no more distinguishing properties available, the test has concluded without finding any contact point in the message that does not already exist in the database. Therefore, the message is classified as acceptable (306).


If the next contact point is available, it is reduced to its canonical equivalent (307) and a signature is generated based on the canonical equivalent (308). It is then determined whether the signature exists in the database (310). If the signature does not exist in the database, there is a possibility that the message is spam and further processing is needed to classify the message (312). If, however, the signature exists in the database, it indicates that the contact point is acceptable, and control is transferred to step 302 where the next contact point in the message is extracted and the process of generating and comparing the signature is repeated.
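The FIG. 3 loop (steps 300-312) may be rendered as follows, exiting as soon as any contact point's signature is missing from the database. The signature function and the simplified lower-casing canonicalization are stand-ins assumed for this sketch.

```python
import hashlib

# Illustrative sketch of the FIG. 3 loop. Canonicalization (step 307) is
# simplified to lower-casing; the hash choice is an assumption.
def signature(canonical: str) -> str:
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def classify_message(contact_points: list[str], signature_db: set[str]) -> str:
    if not contact_points:                            # step 301: no contact points
        return "further-processing"                   # step 312
    for point in contact_points:                      # steps 302-304
        canonical = point.lower()                     # step 307 (simplified)
        if signature(canonical) not in signature_db:  # step 310
            return "further-processing"               # step 312: possibly spam
    return "acceptable"                               # step 306: all signatures known
```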


For the message classification technique to be effective, the database should include as many signatures of acceptable contact points as possible, and exclude any signatures of contact points that may be distinguishing for spam messages. In some embodiments, the database is updated using a registration process. The registration process allows legitimate businesses or organizations to store the contact points used in the messages they send to their customers or target audience at a central spam filtering location. The legitimacy of the organization is established using certificates, such as a certificate issued by a certificate authority such as Verisign, an identifier or code issued by a central spam filtering authority, or any other appropriate certification mechanism that verifies the validity of an organization.



FIG. 4 is a flow chart illustrating a registration process for updating the database, according to one embodiment. Once a registration message is received (400), it is determined whether the certificate is valid (402). If the certificate is not valid, the message is ignored (404). In this embodiment, if the message certificate is valid, optional steps 405, 406 and 407 are performed. The classification of the message sender is obtained from the certificate (405). The message is then further tested using other spam determination techniques to determine whether it is spam (406). This optional step is used to prevent spammers from obtaining a valid certificate and adding their spam messages to the database. If the message is determined to be spam by these additional tests, control is transferred to step 404 and the message is ignored. If, however, the message is determined to be non-spam, one or more signatures are generated based on the contact points in the message (408). The signatures, sender classification, and other associated information for the message are then saved in the database (410).


Different organizations or individuals may have different criteria for which messages are acceptable, and may only allow a subset of the registered signature database. In some embodiments, the signature database from the registration site is duplicated by individual organizations that wish to use the signature database for spam blocking purposes. The system administrators or end users are then able to customize their message filtering policies using the database entries. Using a policy allows some database entries to be selected for filtering purposes.


In some embodiments, the database is updated dynamically as messages are received, based on classifications made by the recipients. Preferably, the system allows for collaborative spam filtering, where the responses from other recipients in the system are incorporated into the message classification process. Different recipients of the same message may classify the message, and therefore the contact points in the message, differently. The same contact point may appear both in a message that is classified as non-spam and in a message that is classified as spam. The system aggregates the classification information for a contact point and determines whether it should be added to the database of acceptable contact points.



FIG. 5 is a table used for aggregating user inputs, according to one system embodiment. The system extracts the contact points in the messages and generates their signatures. The state of each signature is tracked by three counters: acceptable, unacceptable, and unclassified, which are incremented whenever a message that includes the contact point is classified as non-spam, spam, or unknown, respectively. A probability of being acceptable is computed by the system based on the counter values and updated periodically. A signature is added to the database once its probability of being acceptable exceeds a certain threshold. In some embodiments, the signature is removed from the database if its probability of being acceptable falls below the threshold.
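The aggregation table of FIG. 5 may be sketched as follows. The probability formula (acceptable votes over total votes) and the 0.9 threshold are illustrative assumptions; the disclosure states only that a probability is computed from the counter values and compared to a threshold.

```python
from dataclasses import dataclass

# Illustrative sketch of the FIG. 5 table. Each signature carries three
# counters; the probability formula and threshold value are assumptions.
@dataclass
class SignatureRecord:
    acceptable: int = 0
    unacceptable: int = 0
    unclassified: int = 0

    def p_acceptable(self) -> float:
        total = self.acceptable + self.unacceptable + self.unclassified
        return self.acceptable / total if total else 0.0

THRESHOLD = 0.9  # assumed value

def record_classification(db: set[str], records: dict[str, SignatureRecord],
                          sig: str, verdict: str) -> None:
    rec = records.setdefault(sig, SignatureRecord())
    if verdict == "non-spam":
        rec.acceptable += 1
    elif verdict == "spam":
        rec.unacceptable += 1
    else:
        rec.unclassified += 1
    if rec.p_acceptable() > THRESHOLD:
        db.add(sig)          # enough recipients vouched for this contact point
    else:
        db.discard(sig)      # confidence fell below the threshold; remove it
```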


In some embodiments, the database is updated by post-processing previously stored messages. The messages are classified as spam or non-spam using spam classification techniques and/or previous user inputs. The contact points are extracted, their signatures generated, and their probabilities of being acceptable are computed. The signatures of the contact points that are likely to be acceptable are stored in the database.


An improved technique for classifying electronic messages has been disclosed. The technique uses distinguishing properties in a message and their corresponding signatures to classify the message and determine whether it is acceptable. In some embodiments, the distinguishing properties are contact points. A database of registered signatures is used in the classification process and can be customized to suit the needs of individuals. The technique has low overhead, and is able to quickly and accurately identify non-spam messages.


Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. It should be noted that there are many alternative ways of implementing both the process and apparatus of the present invention. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims
  • 1. A method for classifying a message based on contact points, the method comprising: storing a plurality of signatures in a database in memory, each of the signatures corresponding to a contact point previously identified as non-spam;receiving a message over a communication network, the message including one or more contact points; andexecuting instructions stored in memory, wherein execution of the instructions by a processor: extracts a plurality of contact points from the received message,determines whether the plurality of extracted contact points includes any contact points that does not correspond to any of the signatures in the database, wherein the received message is classified as non-spam if all of the extracted contact points in the received message corresponds to one or more of the signatures in the database, andwherein the received message is submitted to one or more additional classification processes if at least one of the extracted contact points does not correspond to any of the signatures in the database; andadds a new contact point to the database based on a plurality of user classifications that include different classifications, wherein signatures for the new contact point are not added to the database until a probability that the new contact point is non-spam meets a threshold.
  • 2. The method of claim 1, wherein extracting the contact points includes scanning text in the received message and matching against a plurality of formats associated with contact points.
  • 3. The method of claim 1, wherein determining whether the extracted contact points includes any contact points that does not correspond to any of the signatures in the database comprises generating a signature for each of the extracted contact points and matching the generated signatures to the signatures in the database.
  • 4. The method of claim 3, wherein generating the signature comprises reducing the extracted contact point to a canonical equivalent.
  • 5. The method of claim 1, further comprising excluding signatures associated with known spam messages from the database.
  • 6. The method of claim 1, further comprising updating the database based on registration.
  • 7. The method of claim 1, further comprising updating the database based on user classification.
  • 8. The method of claim 1, further comprising updating the database by post-processing previously stored messages.
  • 9. A system for classifying a message based on contact points, the system comprising: memory for storing a plurality of signatures in a database, each of the signatures corresponding to a contact point previously identified as non-spam;a communication interface for receiving a message over a communication network, the message including one or more contact points; anda processor for executing instructions stored in memory, wherein execution of the instructions by the processor: extracts a plurality of contact points from the received message,determines whether the plurality of extracted contact points includes any contact points that does not correspond to any of the signatures in the database, wherein the received message is classified as non-spam if all of the extracted contact points in the received message corresponds to one or more of the signatures in the database, andwherein the received message is submitted to one or more additional classification processes if at least one of the extracted contact points does not correspond to any of the signatures in the database; andadds a new contact point to the database based on a plurality of user classifications that include different classifications, wherein signatures for the new contact point are not added to the database until a probability that the new contact point is non-spam meets a threshold.
  • 10. The system of claim 9, wherein the processor extracts the contact points by scanning text in the received message and matching against a plurality of formats associated with contact points.
  • 11. The system of claim 9, wherein the processor determines whether the extracted contact points includes any contact points that does not correspond to any of the signatures in the database by generating a signature for each of the extracted contact points and matching the generated signatures to the signatures in the database.
  • 12. The system of claim 11, wherein the processor generates the signature by reducing the extracted contact point to a canonical equivalent.
  • 13. The system of claim 9, wherein the database stored in memory excludes signatures that are associated with known spam messages.
  • 14. The system of claim 9, wherein the database stored in memory is updated based on registration.
  • 15. The system of claim 9, wherein the database stored in memory is updated based on user classification.
  • 16. The system of claim 9, wherein the database stored in memory is updated based on post-processing previously stored messages.
  • 17. A non-transitory computer-readable storage medium, having embodied thereon a program executable by a processor to perform a method for classifying a message based on contact points, the method comprising: storing a plurality of signatures in a database, each of the signatures corresponding to a contact point previously identified as non-spam;receiving a message including one or more contact points;extracting a plurality of contact points from the received message; anddetermining whether the plurality of extracted contact points includes any contact points that does not correspond to any of the signatures in the database, wherein the received message is classified as non-spam if all of the extracted contact points in the received message corresponds to one or more of the signatures in the database, andwherein the received message is submitted to one or more additional classification processes if at least one of the extracted contact points does not correspond to any of the signatures in the database;adds a new contact point to the database based on a plurality of user classifications that include different classifications, wherein signatures for the new contact point are not added to the database until a probability that the new contact point is non-spam meets a threshold.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation and claims the priority benefit of U.S. patent application Ser. No. 13/361,659 filed Jan. 30, 2012, which will issue as U.S. Pat. No. 8,463,861 on Jun. 11, 2013, which is a continuation and claims the priority benefit of U.S. patent application Ser. No. 12/502,189 filed Jul. 13, 2009, now U.S. Pat. No. 8,108,477, which is a continuation and claims the priority benefit of U.S. patent application Ser. No. 11/927,497 filed Oct. 29, 2007, now U.S. Pat. No. 7,562,122, which is a continuation and claims the priority benefit of U.S. patent application Ser. No. 10/616,703 filed Jul. 9, 2003, now U.S. Pat. No. 7,406,502, which claims benefit of U.S. provisional application No. 60/476,419 filed Jun. 6, 2003 and is a continuation-in-part and claims the priority benefit of U.S. patent application Ser. No. 10/371,987 filed Feb. 20, 2003, now U.S. Pat. No. 8,266,215, the disclosures of which are incorporated herein by reference.

US Referenced Citations (158)
Number Name Date Kind
5905777 Foladare et al. May 1999 A
5999929 Goodman Dec 1999 A
6023723 McCormick et al. Feb 2000 A
6052709 Paul Apr 2000 A
6072942 Stockwell et al. Jun 2000 A
6076101 Kamakura et al. Jun 2000 A
6112227 Heiner Aug 2000 A
6161130 Horvitz et al. Dec 2000 A
6199102 Cobb Mar 2001 B1
6234802 Pella et al. May 2001 B1
6266692 Greenstein Jul 2001 B1
6421709 McCormick et al. Jul 2002 B1
6424997 Buskirk, Jr. et al. Jul 2002 B1
6438690 Patel et al. Aug 2002 B1
6453327 Nielsen Sep 2002 B1
6539092 Kocher Mar 2003 B1
6546416 Kirsch Apr 2003 B1
6615242 Riemers Sep 2003 B1
6615348 Gibbs Sep 2003 B1
6640301 Ng Oct 2003 B1
6643686 Hall Nov 2003 B1
6650890 Irlam et al. Nov 2003 B1
6654787 Aronson et al. Nov 2003 B1
6691156 Drummond et al. Feb 2004 B1
6708205 Sheldon et al. Mar 2004 B2
6728378 Garib Apr 2004 B2
6732149 Kephart May 2004 B1
6772196 Kirsch et al. Aug 2004 B1
6778941 Worrell et al. Aug 2004 B1
6779021 Bates et al. Aug 2004 B1
6829635 Townsend Dec 2004 B1
6842773 Ralston et al. Jan 2005 B1
6851051 Bolle et al. Feb 2005 B1
6868498 Katsikas Mar 2005 B1
6876977 Marks Apr 2005 B1
6931433 Ralston et al. Aug 2005 B1
6941348 Petry et al. Sep 2005 B2
6944772 Dozortsev Sep 2005 B2
6963928 Bagley et al. Nov 2005 B1
6965919 Woods et al. Nov 2005 B1
7003724 Newman Feb 2006 B2
7006993 Cheong et al. Feb 2006 B1
7016875 Steele et al. Mar 2006 B1
7016877 Steele et al. Mar 2006 B1
7032114 Moran Apr 2006 B1
7076241 Zondervan Jul 2006 B1
7103599 Buford et al. Sep 2006 B2
7127405 Frank et al. Oct 2006 B1
7149778 Patel et al. Dec 2006 B1
7162413 Johnson et al. Jan 2007 B1
7171450 Wallace et al. Jan 2007 B2
7178099 Meyer et al. Feb 2007 B2
7206814 Kirsch Apr 2007 B2
7216233 Krueger May 2007 B1
7222157 Sutton, Jr. et al. May 2007 B1
7231428 Teague Jun 2007 B2
7293063 Sobel Nov 2007 B1
7299261 Oliver et al. Nov 2007 B1
7392280 Rohall et al. Jun 2008 B2
7406502 Oliver et al. Jul 2008 B1
7539726 Wilson et al. May 2009 B1
7562122 Oliver et al. Jul 2009 B2
7580982 Owen et al. Aug 2009 B2
7693945 Dulitz et al. Apr 2010 B1
7711669 Liu et al. May 2010 B1
7711786 Zhu May 2010 B2
7725475 Alspector et al. May 2010 B1
7725544 Alspector et al. May 2010 B2
7827190 Pandya Nov 2010 B2
7836061 Zorky Nov 2010 B1
7873996 Emigh et al. Jan 2011 B1
7882189 Wilson Feb 2011 B2
8010614 Musat et al. Aug 2011 B1
8046832 Goodman et al. Oct 2011 B2
8091129 Emigh et al. Jan 2012 B1
8108477 Oliver et al. Jan 2012 B2
8112486 Oliver et al. Feb 2012 B2
8180837 Lu et al. May 2012 B2
8266215 Wilson Sep 2012 B2
8271603 Wilson Sep 2012 B2
8463861 Oliver et al. Jun 2013 B2
8484301 Wilson Jul 2013 B2
8688794 Oliver Apr 2014 B2
8713014 Alspector et al. Apr 2014 B1
20010002469 Bates et al. May 2001 A1
20010044803 Szutu Nov 2001 A1
20010047391 Szutu Nov 2001 A1
20020046275 Crosbie et al. Apr 2002 A1
20020052920 Umeki et al. May 2002 A1
20020052921 Morkel May 2002 A1
20020087573 Reuning et al. Jul 2002 A1
20020116463 Hart Aug 2002 A1
20020143871 Meyer et al. Oct 2002 A1
20020162025 Sutton Oct 2002 A1
20020169954 Bandini et al. Nov 2002 A1
20020188689 Michael Dec 2002 A1
20020199095 Bandini Dec 2002 A1
20030009526 Bellegarda et al. Jan 2003 A1
20030023692 Moroo Jan 2003 A1
20030023736 Abkemeier Jan 2003 A1
20030041126 Buford et al. Feb 2003 A1
20030041280 Malcolm et al. Feb 2003 A1
20030046421 Horvitz Mar 2003 A1
20030069933 Lim Apr 2003 A1
20030086543 Raymond May 2003 A1
20030105827 Tan Jun 2003 A1
20030115485 Miliken Jun 2003 A1
20030120651 Bernstein et al. Jun 2003 A1
20030126136 Omoigui Jul 2003 A1
20030149726 Spear Aug 2003 A1
20030154254 Awasthi Aug 2003 A1
20030158903 Rohall et al. Aug 2003 A1
20030167311 Kirsch Sep 2003 A1
20030195937 Kircher, Jr. et al. Oct 2003 A1
20030204569 Andrews et al. Oct 2003 A1
20030229672 Kohn Dec 2003 A1
20030233418 Goldman Dec 2003 A1
20040003283 Goodman et al. Jan 2004 A1
20040008666 Hardjono Jan 2004 A1
20040015554 Wilson Jan 2004 A1
20040024639 Goldman Feb 2004 A1
20040030776 Cantrell et al. Feb 2004 A1
20040059786 Caughey Mar 2004 A1
20040083270 Heckerman et al. Apr 2004 A1
20040107190 Gilmour et al. Jun 2004 A1
20040117451 Chung Jun 2004 A1
20040148330 Alspector et al. Jul 2004 A1
20040158554 Trottman Aug 2004 A1
20040162795 Dougherty et al. Aug 2004 A1
20040167964 Rounthwaite et al. Aug 2004 A1
20040167968 Wilson Aug 2004 A1
20040177120 Kirsch Sep 2004 A1
20040215963 Kaplan Oct 2004 A1
20050055410 Landsman et al. Mar 2005 A1
20050060643 Glass et al. Mar 2005 A1
20050081059 Bandini et al. Apr 2005 A1
20050125667 Sullivan et al. Jun 2005 A1
20050172213 Ralston et al. Aug 2005 A1
20060010217 Sood Jan 2006 A1
20060031346 Zheng et al. Feb 2006 A1
20060036693 Hulten et al. Feb 2006 A1
20060095521 Patinkin May 2006 A1
20060235934 Wilson Oct 2006 A1
20060282888 Bandini et al. Dec 2006 A1
20070143432 Klos et al. Jun 2007 A1
20080021969 Oliver et al. Jan 2008 A1
20090063371 Lin Mar 2009 A1
20090064323 Lin Mar 2009 A1
20090110233 Lu et al. Apr 2009 A1
20100017487 Patinkin Jan 2010 A1
20100017488 Oliver et al. Jan 2010 A1
20110184976 Wilson Jul 2011 A1
20110296524 Hines et al. Dec 2011 A1
20120131118 Oliver et al. May 2012 A1
20120131119 Oliver et al. May 2012 A1
20130173562 Alspector et al. Jul 2013 A1
20130275463 Wilson Oct 2013 A1
20140129655 Oliver May 2014 A1
Foreign Referenced Citations (1)
Number Date Country
WO 2004075029 Sep 2004 WO
Non-Patent Literature Citations (67)
Entry
“Active SMTP White Paper,” ESCOM Corp. (author unknown), 2000, 11 pp.
“Digital Signature,” http://www.cnet.com/Resources/Info/Glossary/Terms/digitalsignature.html last accessed Nov. 15, 2006.
“Hash Function,” http://en.wikipedia.org/wiki/Hash_value, last accessed Nov. 15, 2006.
“Majordomo FAQ,” Oct. 20, 2001.
Agrawal et al., “Controlling Spam Emails at the Routers,” IEEE 2005.
Anon, “Challenge Messages,” Mailblocks, http://support.mailblocks.com/tab_howto/Validation/detail_privacy_challenge.asp, Apr. 18, 2003.
Anon, “Cloudmark, Different Approaches to Spamfighting,” Whitepaper, Version 1.0, Nov. 2002.
Anon, “Correspondence Negotiation Protocol,” http://www.cs.sfu.ca/˜cameron/CNP.html, Mar. 17, 2003.
Anon, “DigiPortal Software, Creating Order from Chaos,” Support, Frequently Asked Questions, http://www.digiportal.com/support/choicemail/faq.html, Jul. 2002.
Anon, “DM Strategies Making a Living on Opt-In Email Marketing,” Interactive PR & Marketing News, Feb. 19, 1999, vol. 6, Issue 4.
Anon, “Giant Company Software Announces Full Integrated AOL Support for its Popular Spam Inspector Anti-Spam Software,” Giant Company Software, Inc., Nov. 15, 2002.
Anon, “How Challenge/Response Works,” http://about.mailblocks.com/challenge.html, Apr. 1, 2003.
Anon, “Project: Vipul's Razor: Summary,” http://sourceforge.net/projects/razor, Jan. 12, 2002.
Anon, “Tagged Message Delivery Agent (TMDA),” http://tmda.net/indext.html, Jul. 25, 2002.
Anon, “The Lifecycle of Spam,” PC Magazine, Feb. 25, 2003, pp. 74-97.
Balvanz, Jeff et al., “Spam Software Evaluation, Training, and Support: Fighting Back to Reclaim the Email Inbox,” in the Proc. of the 32nd Annual ACM SIGUCCS Conference on User Services, Baltimore, MD, pp. 385-387, 2004.
Byrne, Julian “My Spamblock,” Google Groups Thread, Jan. 19, 1997.
Cranor, Lorrie et al., “Spam!,” Communications of the ACM, vol. 41, Issue 8, pp. 74-83, Aug. 1998.
Dwork, Cynthia et al., “Pricing via Processing or Combating Junk Mail,” CRYPTO '92, Springer-Verlag LNCS 740, pp. 139-147, 1992.
Gabrilovich et al., “The Homograph Attack,” Communications of the ACM 45 (2):128, Feb. 2002.
Georgantopoulous, Bryan “MSc in Speech and Language Processing Dissertation: Automatic Summarizing Based on Sentence Extraction: A Statistical Approach,” Department of Linguistics, University of Edinburgh, http://cgi.di.uoa.gr/˜byron/msc.html, Apr. 21, 2001.
Gomes, Luiz et al., “Characterizing a Spam Traffic,” in the Proc. of the 4th ACM SIGCOMM Conference on Internet Measurement, Sicily, Italy, pp. 356-369, 2004.
Guilmette, Ronald F., “To Mung or Not to Mung,” Google Groups Thread, Jul. 24, 1997.
Hoffman, Paul and Crocker, Dave “Unsolicited Bulk Email: Mechanisms for Control” Internet Mail Consortium Report: UBE-SOL, IMCR-008, revised May 4, 1998.
Jung, Jaeyeon et al., “An Empirical Study of Spam Traffic and the Use of DNS Black Lists,” IMC'04, Taormina, Sicily, Italy, Oct. 25-27, 2004.
Kolathur, Satheesh and Subramanian, Subha “Spam Filter, A Collaborative Method of Eliminating Spam,” White paper, published Dec. 8, 2000 http://www.cs.uh.edu/˜kolarthur/Paper.htm.
Langberg, Mike “Spam Foe Needs Filter of Himself,” Email Thread dtd. Apr. 5, 2003.
Lie, D.H., “Sumatra: A System for Automatic Summary Generation,” http://www.carptechnologies.nl/SumatraTWLT14paper/SumatraTWLT14.html, Oct. 1999.
Mastaler, Jason “Tagged Message Delivery Agent (TMDA),” TMDA Homepage, 2003.
Maxwell, Rebecca, “Inxight Summarizer creates Document Outlines,” Jun. 17, 1999, www.itworldcanada.com.
McCullagh, Declan “In-Boxes that Fight Back,” News.com, May 19, 2003.
Prakash, Vipul Ved “Razor-agents 2.22,” http://razor.sourceforge.net, Aug. 18, 2000.
Skoll, David F., “How to Make Sure a Human is Sending You Mail,” Google Groups Thread, Nov. 17, 1996.
Spamarrest, The Product, How it Works, http://spamarrest.com/products/howitworks.jsp, Aug. 2, 2002.
SpamAssassin, “Welcome to SpamAssassin,” http://spamassassin.org, Jan. 23, 2003.
Templeton, Brad “Viking-12 Junk E-Mail Blocker,” (believed to have last been updated Jul. 15, 2003).
Von Ahn, Luis et al., “Telling Humans and Computers Apart (Automatically) or How Lazy Cryptographers do AI,” Communications of the ACM, Feb. 2004.
Weinstein, Lauren “Spam Wars,” Communications of the ACM, vol. 46, Issue 8, p. 136, Aug. 2003.
PCT Application No. PCT/US04/05172 International Search Report and Written Opinion mailed Dec. 7, 2004, 9 pages.
U.S. Appl. No. 11/903,413 Office Action dated Oct. 27, 2009.
U.S. Appl. No. 10/371,987 Final Office Action dated Jun. 27, 2008.
U.S. Appl. No. 10/371,987 Office Action dated Nov. 28, 2007.
U.S. Appl. No. 10/371,987 Final Office Action dated Jul. 6, 2007.
U.S. Appl. No. 10/371,987 Office Action dated Jan. 12, 2007.
U.S. Appl. No. 10/371,987 Final Office Action dated Aug. 10, 2006.
U.S. Appl. No. 10/371,987 Office Action dated Nov. 30, 2005.
U.S. Appl. No. 10/371,987 Final Office Action dated Jun. 6, 2005.
U.S. Appl. No. 10/371,987 Office Action dated Sep. 30, 2004.
U.S. Appl. No. 10/616,703 Office Action dated Nov. 28, 2007.
U.S. Appl. No. 10/616,703 Final Office Action dated Sep. 19, 2007.
U.S. Appl. No. 10/616,703 Office Action dated Apr. 9, 2007.
U.S. Appl. No. 11/455,037 Final Office Action dated Feb. 15, 2012.
U.S. Appl. No. 11/455,037 Office Action dated Oct. 28, 2011.
U.S. Appl. No. 11/455,037 Final Office Action dated Jan. 18, 2008.
U.S. Appl. No. 11/455,037 Office Action dated Jul. 17, 2007.
U.S. Appl. No. 11/455,037 Final Office Action dated Feb. 13, 2007.
U.S. Appl. No. 11/455,037 Office Action dated Oct. 20, 2006.
U.S. Appl. No. 11/926,819 Final Office Action dated Mar. 5, 2010.
U.S. Appl. No. 11/926,819 Office Action dated Jun. 25, 2009.
U.S. Appl. No. 11/927,497 Office Action dated Sep. 4, 2008.
U.S. Appl. No. 12/502,189 Final Office Action dated Aug. 2, 2011.
U.S. Appl. No. 12/502,189 Office Action dated Aug. 17, 2010.
U.S. Appl. No. 13/015,526 Office Action dated Aug. 10, 2012.
U.S. Appl. No. 13/361,659 Final Office Action dated Jul. 17, 2012.
U.S. Appl. No. 13/361,659 Office Action dated Mar. 16, 2012.
U.S. Appl. No. 13/360,971 Office Action dated Aug. 13, 2013.
U.S. Appl. No. 13/912,055 Office Action dated Nov. 7, 2014.
Related Publications (1)
Number Date Country
20130318108 A1 Nov 2013 US
Provisional Applications (1)
Number Date Country
60476419 Jun 2003 US
Continuations (4)
Number Date Country
Parent 13361659 Jan 2012 US
Child 13913413 US
Parent 12502189 Jul 2009 US
Child 13361659 US
Parent 11927497 Oct 2007 US
Child 12502189 US
Parent 10616703 Jul 2003 US
Child 11927497 US
Continuation in Parts (1)
Number Date Country
Parent 10371987 Feb 2003 US
Child 10616703 US