Proofs to filter spam

Information

  • Patent Grant
  • Patent Number
    8,065,370
  • Date Filed
    Thursday, November 3, 2005
  • Date Issued
    Tuesday, November 22, 2011
Abstract
Embodiments of proofs to filter spam are presented herein. Proofs are utilized to indicate that a sender used a set amount of computer resources in sending a message, in order to demonstrate that the sender is not a “spammer”. Varying the complexity of the proofs, and thus the level of resources used to send the message, indicates to the recipient the relative likelihood that the message is spam. Higher resource usage indicates that the message may not be spam, while lower resource usage increases the likelihood that the message is spam. Also, if the recipient requires a higher level of proof than was received, the receiver may request that the sender send additional proof to verify that the message is not spam.
Description
BACKGROUND

The prevalence of message communication continues to increase as users utilize a wide variety of computing devices to communicate, one to another. For example, users may use desktop computers, wireless phones, and so on, to communicate through the use of email (i.e., electronic mail). Email employs standards and conventions for addressing and routing such that the email may be delivered across a network, such as the Internet, utilizing a plurality of devices. Thus, email may be transferred within a company over an intranet, across the world using the Internet, and so on.


Unfortunately, as the prevalence of these techniques for sending messages has continued to expand, the amount of “spam” encountered by the user has also continued to increase. Spam is typically thought of as an email that is sent to a large number of recipients, such as to promote a product or service. Because sending an email generally costs the sender little or nothing, “spammers” have emerged that send the equivalent of junk mail to as many users as can be located. Even though a minute fraction of the recipients may actually desire the described product or service, this minute fraction may be enough to offset the minimal costs of sending the spam. Consequently, a vast number of spammers are responsible for communicating a vast number of unwanted and irrelevant emails. Thus, a typical user may receive a large number of these irrelevant emails, thereby hindering the user's interaction with relevant emails. In some instances, for example, the user may be required to spend a significant amount of time interacting with each of the unwanted emails in order to determine which, if any, of the emails received by the user might actually be of interest.


SUMMARY

Proof techniques to filter spam are described. Proofs may be utilized to indicate at least a minimal amount of resources were utilized by a sender in sending a message, thereby indicating that the sender is not likely a “spammer”. Additionally, different proofs may utilize different amounts of resources. The different proofs, therefore, may be used for different likelihoods that a message will be considered spam. For instance, a client may use a locally-executable spam filter to determine a relative likelihood that a message will be considered spam and select a proof to provide a proportional level of “proof” to the message, thereby increasing the likelihood that the message will not be considered as “spam” by a recipient of the message, e.g., a communication service that communicates the message to an intended recipient and/or the intended recipient itself.


This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an illustration of an environment operable for communication of messages, such as emails, instant messages, and so on, across a network and is also operable to employ proof strategies.



FIG. 2 is an illustration of a system in an exemplary implementation showing a plurality of clients and a communication service of FIG. 1 in greater detail.



FIG. 3 is a flow chart depicting a procedure in an exemplary implementation in which processing of a message is performed by local spam filters to determine whether a result of a proof should be included before communication of the message to an intended recipient.



FIG. 4 is a flow chart depicting a procedure in an exemplary implementation in which one or more proofs are selected based on a relative likelihood that a message will be considered spam.



FIG. 5 is a flow chart depicting a procedure in an exemplary implementation in which receiver-driven computation is performed.





The same reference numbers are used throughout the discussion to reference like structures and components.


DETAILED DESCRIPTION

Overview


As the prevalence of techniques for sending messages has continued to expand, the amount of “spam” encountered by the user has also continued to increase. Therefore, proofs may be utilized to differentiate between legitimate messages and messages that are sent by a spammer. For example, a proof may be computed that requires a significant amount of resources (e.g., processing and/or memory resources) beyond those typically required by a sender to send a message. A “memory bound” proof, for instance, may rely on memory latency to slow down computations that could otherwise be performed quickly by a processor alone, and therefore requires a certain amount of time for a computing device to process. The presence of this result may indicate that the sender of the message performed the computation and is therefore not likely a spammer, which may be taken into account when processing the message, such as by a spam filter.
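To make the memory-bound idea concrete, the following is a minimal sketch under assumed parameters (not the particular algorithm referenced above): the sender searches for a nonce whose chain of data-dependent lookups into a large shared pseudorandom table ends in a value with a prescribed number of zero bits, so each step stalls on a memory access, while a recipient verifies with a single chain. The table size, step count, and difficulty below are illustrative only.

```python
import hashlib

def build_table(size=1 << 20, seed=b"shared-table-seed"):
    """Pseudorandom lookup table known to both sender and verifier."""
    out, h = bytearray(), hashlib.sha256(seed).digest()
    while len(out) < size:
        h = hashlib.sha256(h).digest()
        out.extend(h)
    return bytes(out[:size])

def walk(table, message, nonce, steps=1024):
    """Chain of dependent lookups; each index depends on the previous value,
    so the work is dominated by memory accesses rather than raw arithmetic."""
    seed = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    idx = int.from_bytes(seed[:4], "big") % len(table)
    acc = 0
    for _ in range(steps):
        acc = (acc * 131 + table[idx]) & 0xFFFFFFFF
        idx = (idx * 31 + acc + 1) % len(table)
    return acc

def compute_proof(table, message, difficulty_bits=10):
    """Sender: try nonces until the walk ends in `difficulty_bits` zero bits."""
    nonce = 0
    while walk(table, message, nonce) & ((1 << difficulty_bits) - 1):
        nonce += 1
    return nonce

def verify_proof(table, message, nonce, difficulty_bits=10):
    """Recipient: a single walk checks work that took many walks to find."""
    return walk(table, message, nonce) & ((1 << difficulty_bits) - 1) == 0

table = build_table()
msg = b"Subject: lunch on Friday?"
nonce = compute_proof(table, msg)
print(nonce, verify_proof(table, msg, nonce))
```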


Additionally, different “levels” of proof may also be employed. For example, a computational proof having a particular amount of difficulty (e.g., requiring a certain amount of computer resources) may provide a certain amount of protection, while a computational proof having a greater amount of difficulty may be used to provide a correspondingly greater amount of protection. Therefore, a sender may be “aware” of these levels and try to “guess” a proper amount of proof (e.g., difficulty) to be included with the message when communicated. Thus, senders of messages that do not look like spam may use relatively little proof, while senders of messages that look like spam (e.g., a spammer) may use relatively larger amounts of proof. This improves the user experience for “good” users by allowing efficient use of proof that addresses the likely processing that will be performed on the message before the message is communicated.


In the following description, an exemplary environment is first described which is operable to employ the proof techniques. Exemplary procedures are then described which may operate in the exemplary environment, as well as in other environments.


Exemplary Environment



FIG. 1 is an illustration of an environment 100 operable for communication of messages across a network. The environment 100 is illustrated as including a plurality of clients 102(1), . . . , 102(N) that are communicatively coupled, one to another, over a network 104. The plurality of clients 102(1)-102(N) may be configured in a variety of ways. For example, one or more of the clients 102(1)-102(N) may be configured as a computer that is capable of communicating over the network 104, such as a desktop computer, a mobile station, a game console, an entertainment appliance, a set-top box communicatively coupled to a display device, a wireless phone, and so forth. The clients 102(1)-102(N) may range from full resource devices with substantial memory and processor resources (e.g., personal computers, television recorders equipped with a hard disk) to low-resource devices with limited memory and/or processing resources (e.g., traditional set-top boxes). In the following discussion, the clients 102(1)-102(N) may also relate to a person and/or entity that operates the client. In other words, a client 102(1)-102(N) may describe a logical client that includes a user, software, and/or a machine.


Additionally, although the network 104 is illustrated as the Internet, the network may assume a wide variety of configurations. For example, the network 104 may include a wide area network (WAN), a local area network (LAN), a wireless network, a public telephone network, an intranet, and so on. Further, although a single network 104 is shown, the network 104 may be configured to include multiple networks. For instance, clients 102(1), 102(N) may be communicatively coupled via a peer-to-peer network to communicate, one to another. Each of the clients 102(1), 102(N) may also be communicatively coupled to one or more of a plurality of communication services 106(m) (where “m” can be any integer from one to “M”) over the Internet.


Each of the plurality of clients 102(1), . . . , 102(N) is illustrated as including a respective one of a plurality of communication modules 108(1), . . . , 108(N). In the illustrated implementation, each of the plurality of communication modules 108(1)-108(N) is executable on a respective one of the plurality of clients 102(1)-102(N) to send and receive messages. For example, one or more of the communication modules 108(1)-108(N) may be configured to send and receive email. As previously described, email employs standards and conventions for addressing and routing such that the email may be delivered across the network 104 utilizing a plurality of devices, such as routers, other computing devices (e.g., email servers), and so on. In this way, emails may be transferred within a company over an intranet, across the world using the Internet, and so on. An email, for instance, may include a header, text, and attachments, such as documents, computer-executable files, and so on. The header contains technical information about the source and oftentimes may describe the route the message took from sender to recipient.


In another example, one or more of the communication modules 108(1)-108(N) may be configured to send and receive instant messages. Instant messaging provides a mechanism such that each of the clients 102(1)-102(N), when participating in an instant messaging session, may send text messages to each other. The instant messages are typically communicated in real time, although delayed delivery may also be utilized, such as by logging the text messages when one of the clients 102(1)-102(N) is unavailable, e.g., offline. Thus, instant messaging may be thought of as a combination of email and Internet chat in that instant messaging supports message exchange and is designed for two-way live chats. Therefore, instant messaging may be utilized for synchronous communication. For instance, like a voice telephone call, an instant messaging session may be performed in real-time such that each user may respond to each other user as the instant messages are received.


In an implementation, the communication modules 108(1)-108(N) communicate with each other through use of the communication service 106(m). For example, client 102(1) may form a message using communication module 108(1) and send that message over the network 104 to the communication service 106(m), where it is stored as one of a plurality of messages 110(j), where “j” can be any integer from one to “J”, in storage 112(m) through execution of a communication manager module 114(m). Client 102(N) may then “log on” to the communication service (e.g., by providing a name and password) and retrieve corresponding messages from storage 112(m) through execution of the communication module 108(N). A variety of other examples are also contemplated.


In another example, client 102(1) may cause the communication module 108(1) to form an instant message for communication to client 102(N). The communication module 108(1) is executed to communicate the instant message to the communication service 106(m), which then executes the communication manager module 114(m) to route the instant message to the client 102(N) over the network 104. The client 102(N) receives the instant message and executes the respective communication module 108(N) to display the instant message to a respective user. In another instance, when the clients 102(1), 102(N) are communicatively coupled directly, one to another (e.g., via a peer-to-peer network), the instant messages are communicated without utilizing the communication service 106(m). Although messages configured as emails and instant messages have been described, a variety of textual and non-textual messages (e.g., graphical messages, audio messages, and so on) may be communicated via the environment 100 without departing from the spirit and scope thereof. Additionally, computational proofs can be utilized for a wide variety of other communication techniques, such as to determine if a user will accept a voice-over-IP (VOIP) call or route the call to voicemail.


As previously described, the efficiency of the environment 100 has also resulted in communication of unwanted messages, commonly referred to as “spam”. Spam is typically provided via email that is sent to a large number of recipients, such as to promote a product or service. Thus, spam may be thought of as an electronic form of “junk” mail. Because a vast number of emails may be communicated through the environment 100 for little or no cost to the sender, a vast number of spammers are responsible for communicating a vast number of unwanted and irrelevant messages. Thus, each of the plurality of clients 102(1)-102(N) may receive a large number of these irrelevant messages, thereby hindering the client's interaction with actual messages of interest.


One technique which may be utilized to hinder the communication of unwanted messages is the use of computational proofs, i.e., “proofs”. Proofs provide a technique that allows a sender of a message to demonstrate “non-spammer” intentions by indicating that a significant amount of hardware and/or software resources were expended by the client in the communication of the message. For example, clients 102(1)-102(N) are each illustrated as including a respective plurality of proofs 116(f), 116(g), where “f” and “g” can be any integer from one to “F” and “G”, respectively. Proof of effort algorithms generally involve use of a significant amount of computing resources (e.g., hardware and software resources) when solving a defined proof, e.g., a hash collision, a solution to a cryptographic problem, a solution to a memory bound problem, a solution to a reverse Turing test, and so on. As previously described, it typically requires few resources for a spammer to send a message. Therefore, by indicating that resources have been utilized by a sender of the message, the sender may indicate a decreased likelihood of being a spammer.
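For instance, a partial hash collision proof of the kind mentioned above can be realized along the lines of the following sketch (a minimal, assumed construction in the style of hashcash, not necessarily the exact scheme used in any particular implementation): the sender finds a nonce such that a hash over the message and the nonce has a prescribed number of leading zero bits, while any recipient can verify the work with a single hash.

```python
import hashlib
import itertools

def compute_hash_proof(message: bytes, bits: int) -> int:
    """Find a nonce so that SHA-256(message || nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce          # expected cost: roughly 2**bits hashes

def verify_hash_proof(message: bytes, nonce: int, bits: int) -> bool:
    """A single hash suffices, so spam filters can check the proof cheaply."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

msg = b"From: alice@example.com\r\nSubject: lunch?\r\n\r\nNoon on Friday?"
nonce = compute_hash_proof(msg, bits=16)
print(nonce, verify_hash_proof(msg, nonce, bits=16))   # some nonce, True
```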


In the illustrated environment, the communication service 106(m) is also illustrated as including a plurality of proofs 116(h), where “h” can be any integer from one to “H”, which are stored in storage 118(m). Therefore, the communication service 106(m) in this instance may be used on behalf of one or more of the clients 102(1)-102(N) in the performance of the proofs 116(h). In another example, a third party 120 may also compute one or more of a plurality of proofs 116(i) (where “i” can be any integer from one to “I”) which are illustrated as stored in storage 122. For instance, the third party 120 may be configured as a web service to compute the proofs 116(i) when one or more of the clients 102(1)-102(N) is configured as a “thin” client as previously described. Therefore, the thin client may offload computation of the proof to the third party. In another instance, the third party 120 is another computing device that is owned by or accessible to the user (e.g., a desktop computer, work server, and so on) such that the user may transfer computation of the proofs between the user's computing devices, such as from a wireless phone to a home computer, after which the message is communicated for receipt by an intended recipient. A variety of other instances are also contemplated.


Because computation of the proofs indicates a decreased likelihood that a sender of the message is a “spammer”, spam filters employed in the environment 100 may take this into account when processing a message. For example, clients 102(1)-102(N) each include respective spam filters 124(1)-124(N) which are utilized to process messages received by the clients in order to “filter out” spam from legitimate messages. Spam filters 124(1)-124(N) may utilize a variety of techniques for filtering spam, such as through examination of message text, indicated sender, domains, and so on. The spam filters 124(1)-124(N), when processing the messages, may also take into account whether the message includes a result of a computational proof when determining whether the message is spam. Similar functionality may be employed by the spam filters 124(m) provided on the communication service 106(m). Therefore, a result of a computational proof may be utilized to obtain “safe passage” of the message through spam filters 124(1), 124(N), 124(m) employed in the environment 100.


Different amounts of resources, however, may be expended when computing different proofs 116(f), 116(g), 116(h), 116(i). For example, computation of a first one of the proofs 116(f) may consume more hardware and software resources than computation of another one of the proofs 116(f). Therefore, the spam filters 124(1)-124(N) may also be configured to address the amount of computation utilized to perform the respective proofs when determining whether or not a message is spam, further discussion of which may be found in relation to the following figure.


Generally, any of the functions described herein can be implemented using software, firmware (e.g., fixed logic circuitry), manual processing, or a combination of these implementations. The terms “module,” “functionality,” and “logic” as used herein generally represent software, firmware, or a combination of software and firmware. In the case of a software implementation, the module, functionality, or logic represents program code that performs specified tasks when executed on a processor (e.g., CPU or CPUs). The program code can be stored in one or more computer readable memory devices, further description of which may be found in relation to FIG. 2. The features of the proof strategies described below are platform-independent, meaning that the strategies may be implemented on a variety of commercial computing platforms having a variety of processors.



FIG. 2 is an illustration of a system 200 in an exemplary implementation showing the plurality of clients 102(n) and the communication service 106(m) of FIG. 1 in greater detail. Client 102(n) is representative of any of the plurality of clients 102(1)-102(N) of FIG. 1, and therefore reference will be made to client 102(n) in both singular and plural form. The communication service 106(m) is illustrated as being implemented by a plurality of servers 202(s), where “s” can be any integer from one to “S”, and the client 102(n) is illustrated as a client device. Further, the servers 202(s) and the clients 102(n) are illustrated as including respective processors 204(s), 206(n) and respective memory 208(s), 210(n).


Processors are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors may be comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions may be electronically-executable instructions. Alternatively, the mechanisms of or for processors, and thus of or for a computing device, may include, but are not limited to, quantum computing, optical computing, mechanical computing (e.g., using nanotechnology), and so forth. Additionally, although a single memory 208(s), 210(n) is shown for the respective server 202(s) and client 102(n), memory 208(s), 210(n) may be representative of a wide variety of types and combinations of memory that may be employed, such as random access memory (RAM), hard disk memory, removable medium memory, and other computer-readable media.


The clients 102(n) are illustrated as executing the communication module 108(n) and the spam filters 124(n) on the processor 206(n), which are also storable in memory 210(n). Additionally, the communication module 108(n) is illustrated as including a proof module 212(n), which is representative of functionality to select and perform proofs 116(1), . . . , 116(y), . . . , 116(Y) (which may or may not correspond to the proofs 116(f), 116(g) of FIG. 1). For example, the communication module 108(n), when executed, may be utilized to form a message for communication over the network 104. Before the message is communicated, the communication module 108(n) may process the message using the client's 102(n) spam filter 124(n) to determine a likelihood of whether the message, as is, will be considered spam by an intended recipient and/or a communication service 106(m) configured to communicate the message to the intended recipient. When the message is considered spam, the proof module 212(n) may perform one or more of the proofs 116(1)-116(Y), a result of which is then combined with the message before communication over the network 104. In this way, the client 102(n) may indicate, through the use of the proof, that the message is not spam; in this case, the client makes the determination of whether to even perform one or more of the proofs 116(1)-116(Y) without contacting an intended recipient beforehand. Further discussion of processing a message by a spam filter before communication over the network may be found in relation to FIG. 3.


As previously described, proofs 116(1)-116(Y) may require different amounts of resources to be performed, which is illustrated in FIG. 2 by an arrow 214 that indicates that proof 116(Y) is more resource intensive than proof 116(y), which is more resource intensive than proof 116(1). For example, different proof mechanisms may include parameters that specify a particular difficulty, e.g., in a hash collision case an “N” bit collision may be utilized in which computation time increases exponentially as “N” increases. These differences in resource amounts may also be utilized in conjunction with an indication of a relative likelihood that the message will be considered spam to select an appropriate proof 116(1)-116(Y) to be performed before communication of the message. For example, a message that, when processed by the spam filter 124(n), indicates a relatively low likelihood of being considered spam may include a result of a proof 116(1) that consumes relatively low resources, when performed. On the other hand, a message that, when processed by the spam filter 124(n), indicates a relatively high likelihood of being considered spam may include a result of a proof 116(Y) that consumes a relatively high amount of resources. In this way, the proof module 212(n) may select the proof 116(1)-116(Y), and even choose to forgo inclusion of a proof, in a manner which conserves resources of the client 102(n) yet still indicates that the client 102(n) is not a spammer. Further discussion of proof selection may be found in relation to FIG. 4.
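A simple way to realize this proportional selection, sketched below under assumed score and difficulty ranges, is to map the spam filter's relative likelihood onto the difficulty parameter (e.g., the number of collision bits “N”), remembering that each additional bit roughly doubles the expected computation.

```python
from typing import Optional

def select_difficulty(spam_score: float, min_bits: int = 10, max_bits: int = 22) -> Optional[int]:
    """Map a local spam-filter score in [0, 1] to a proof difficulty in bits.

    Because expected work grows exponentially with the bit count, a message
    that looks only slightly "spammy" costs the sender little, while a very
    "spammy" message requires substantially more computation.
    """
    if spam_score < 0.05:
        return None                   # nearly certain non-spam: skip the proof entirely
    return min_bits + round((max_bits - min_bits) * min(spam_score, 1.0))

print(select_difficulty(0.02))   # None
print(select_difficulty(0.5))    # 16
print(select_difficulty(0.95))   # 21
```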


The results of the proofs 116(1)-116(Y) may be combined with a variety of identifying mechanisms 216(x) that may also indicate a relative likelihood that a message is spam and/or sent by a spammer. For example, when a user receives a message, the communication modules 108(n) and/or manager module 114(m) gather and validate messages utilizing one or more applicable identifying mechanisms 216(x). For instance, the identifying mechanisms 216(x) may involve checking that part of a message is signed with a specific private key, that a message was sent from a machine that is approved via a sender's identification for a specified domain, and so on. A variety of identifying mechanisms 216(x) and combinations thereof may be employed by the communication modules 108(n), 114(m), and/or the spam filters 124(n), 124(m), examples of which are described as follows.


Email Address


The email address is a standard form of identity. The email address may be checked by looking at a ‘FROM’ line in the header of a message. Although the email address may be particularly vulnerable to attack, a combination of the email address and another one of the identifying mechanisms 216(x) and/or the proofs 116(1)-116(Y) may result in substantial protection.


Third Party Certificates


Third party certificates may involve the signing of a portion of a message with a certificate that can be traced to a third-party certifier. This signature can be attached utilizing a variety of techniques, such as through secure/multipurpose Internet mail extension (S/MIME) techniques, e.g., by including a header in the message that contains the signature. The level of security provided by this technique may also be based on the reputation of the third party certifier, a type of certificate (e.g. some certifiers offer several levels of increasingly secure certification), and on the amount of the message signed (signing more of the message is presumably more secure).


Self-Signed Certificate


A self-signed certificate involves signing a portion of a message with a certificate that the sender created. Like a third-party certificate, this identifying mechanism may be attached using a variety of techniques, such as through secure/multipurpose Internet mail extension (S/MIME) techniques, e.g., by including a header in the message that contains the signature. In an implementation, use of a self-signed certificate involves the creation of a public/private key pair by a sender, signing part of the message with the private key, and distributing the public key in the message (or via other standard methods). The level of security provided by this method is based on the amount of the message signed.
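The following sketch illustrates the self-signed approach using the third-party Python "cryptography" package and Ed25519 keys; the "X-Self-Signed-Key" and "X-Self-Signature" header names are invented here for illustration (an actual implementation might instead carry the signature via S/MIME as described above).

```python
from email.message import EmailMessage
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# Sender: create a key pair, sign the message body, attach signature + public key.
private_key = Ed25519PrivateKey.generate()
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "alice@example.com", "bob@example.com", "lunch?"
msg.set_content("Noon on Friday?")

body = msg.get_content().encode()
msg["X-Self-Signature"] = private_key.sign(body).hex()
msg["X-Self-Signed-Key"] = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw).hex()

# Recipient: rebuild the public key from the header and verify the signed portion.
recovered = Ed25519PublicKey.from_public_bytes(bytes.fromhex(msg["X-Self-Signed-Key"]))
recovered.verify(bytes.fromhex(msg["X-Self-Signature"]), body)  # raises InvalidSignature if tampered
print("signature verified")
```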


Passcode


The passcode identifying mechanism involves the use of a passcode in a message, such as by including a public key in a message but not signing any portion of the message with the associated private key. This identity mechanism may be useful for users who have mail transfer agents that modify messages in transfer and destroy the cryptographic properties of signatures, such that the signatures cannot be verified. This identifying mechanism is useful as a lightweight way to establish a form of identity. Although a passcode is still potentially spoofable, the passcode may be utilized with other identifying mechanisms to provide greater likelihood of verification (i.e., authenticity of the sender's identity).


IP Address


The IP address identifying mechanism involves validating whether a message was sent from a particular IP address or IP address range (e.g., the /24 range 204.200.100.*). In an implementation, this identity mechanism may support a less secure mode in which the IP address/range may appear in any of a message's “received” header lines. As before, the use of a particular IP address, IP address range, and/or where the IP address or range is located in a message can serve as a basis for a relative likelihood that the message was sent by a spammer.


Valid Sender ID


The valid Sender ID identifying mechanism involves validating whether a message was sent from a computer that is authorized to send messages for a particular domain via Sender ID. For example, reference may be made to a trusted domain. For instance, “test@test.com” is an address and “test.com” is the domain. It should be noted that the domain does not need to match exactly, e.g., the domain could also be formatted as foo.test.com. When a message from this address is received, the communication module 108(n) may perform a Sender ID test on the “test.com” domain, and if the message matches the entry, it is valid. This identifying mechanism can also leverage algorithms for detecting IP addresses in clients and any forthcoming standards for communicating IP addresses from edge servers, standards for communicating the results of Sender ID checks from the edge servers, and so on. Additionally, it should be noted that the Sender ID test is not limited to any particular sender identification technique or framework (e.g., sender policy framework (SPF), sender ID framework from MICROSOFT (Microsoft is a trademark of the Microsoft Corporation, Redmond, Wash.), and so on), but may include any mechanism that provides for authentication of a user or domain.
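A heavily simplified sketch of such a check is shown below; it assumes the domain's policy record (e.g., an SPF-style TXT record fetched via DNS elsewhere) is already available as a string and handles only plain "ip4:" mechanisms, whereas real Sender ID/SPF evaluation also covers include, a, mx, and redirect mechanisms and qualifiers.

```python
import ipaddress

def simplified_sender_check(policy_record: str, sender_ip: str) -> bool:
    """Return True if `sender_ip` matches an ip4: mechanism in the policy record."""
    if not policy_record.startswith("v=spf1"):
        return False
    ip = ipaddress.ip_address(sender_ip)
    for term in policy_record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

print(simplified_sender_check("v=spf1 ip4:192.0.2.0/24 -all", "192.0.2.55"))    # True
print(simplified_sender_check("v=spf1 ip4:192.0.2.0/24 -all", "198.51.100.4"))  # False
```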


Monetary Attachment


The monetary attachment identifying mechanism involves attaching a monetary amount to a message for sending, in what may be referred to as an “e-stamp”. For example, a sender of the message may attach a monetary amount to the message that is credited to the recipient. By attaching even a minimal monetary amount, the likelihood of a spammer sending a multitude of such messages may decrease, thereby increasing the probability that the sender is not a spammer. A variety of other techniques may also be employed for monetary attachment, such as through a central clearinghouse on the Internet that charges for certifying messages. Therefore, a certificate included with the message may act to verify that the sender paid an amount of money to send the message. Although a variety of identifying mechanisms have been described, a variety of other identifying mechanisms 216(x) may also be employed without departing from the spirit and scope thereof. Further discussion of message processing may be found in relation to the following figures.


Exemplary Procedures


The following discussion describes proof techniques that may be implemented utilizing the previously described systems and devices. Aspects of each of the procedures may be implemented in hardware, firmware, or software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. It should also be noted that the following exemplary procedures may be implemented in a wide variety of other environments without departing from the spirit and scope thereof.



FIG. 3 depicts a procedure 300 in an exemplary implementation in which processing of a message is performed by local spam filters to determine whether a result of a proof should be included before communication of the message to an intended recipient. A message is formed for communication over a network (block 302). For example, the communication module 108(1) may be executed to compose an email, an instant message, and so on.


The message is then processed using one or more spam filters (block 304). The communication module 108(1), for instance, may forward the composed message to spam filters 124(1) that are local on the client 102(1). From the processing, an indication is received as to whether the message is considered to be spam (block 306). The indication, for instance, may be configured as a binary indicator (e.g., “yes” or “no”) as to whether the message is considered spam by that spam filter 124(1). Therefore, the indication is utilized to determine whether the message is considered spam (decision block 308).


When the message is not indicated as spam (“no” from decision block 308), the message is output for communication to an intended recipient over a network (block 310). Thus, the client 102(1) in this instance determines that the message is not likely to be considered spam by the intended recipient, and therefore may simply communicate the message without performing another action.


When the message is indicated as spam (“yes” from decision block 308), a proof is computed (block 312). A result of the computation and the message are then output for communication to an intended recipient over a network (block 314). Thus, in this instance, the client 102(1) determines that the message is likely to be considered spam and therefore computes a proof to indicate the “non-spammer” intentions of the client 102(1).


Although a binary indication was described as being output from the spam filters, a relative likelihood (e.g., a score) may also be output and leveraged when selecting computational proofs. For example, an additional threshold may be utilized in conjunction with the spam filter's indication to protect against spam filters that are likely to be more aggressive than the spam filter employed by the client 102(1), such as a spam filter employed by a communication service 106(m). In this way, the additional threshold may account for out-of-date spam filters that find the message “more spammy” than the sender's filter. For instance, the threshold may be based on an update frequency of the spam filter 124(1), with more rapid updates requiring smaller thresholds.
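One way a sender might combine the local score with such a threshold is sketched below with assumed numbers (the thirty-day window and penalty weight are illustrative, not taken from the description above): the effective threshold is tightened as the local filter ages, so borderline messages still receive a proof when the filter may be out of date relative to the recipient's.

```python
from datetime import datetime, timedelta

def needs_proof(spam_score: float, filter_last_updated: datetime,
                base_threshold: float = 0.5) -> bool:
    """Sender-driven decision: attach a proof when the local score crosses a
    threshold that shrinks as the local spam filter grows stale."""
    age = datetime.utcnow() - filter_last_updated
    staleness_penalty = min(age / timedelta(days=30), 1.0) * 0.3
    return spam_score >= max(base_threshold - staleness_penalty, 0.1)

# A filter updated today keeps the 0.5 threshold; a month-old filter drops it
# to 0.2, so messages a recipient might score as "more spammy" still get a proof.
print(needs_proof(0.3, datetime.utcnow()))                        # False
print(needs_proof(0.3, datetime.utcnow() - timedelta(days=45)))   # True
```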


Additionally, logic may be employed for specific intended recipients and/or communicators of the message. For instance, a particular communication service may filter more aggressively, and therefore a larger threshold may be employed. In an implementation, messages that are sent to recipients within a local domain are not pre-processed, e.g., when recipients are located on a global address list, when recipients are included in a local domain of a sender, and so on. A variety of other instances are also contemplated, an example of which is described as follows.



FIG. 4 depicts a procedure 400 in an exemplary implementation in which one or more proofs are selected based on a relative likelihood that a message will be considered spam. In the previous example, an implementation was described in which an indication of “spamminess” of a message may be relative, such as provided by a score in which higher numbers indicate an increased likelihood of being spam. This relative likelihood may also be utilized to select one or more proofs such that different “levels” of proof may be employed based on the relative likelihood of the message being considered spam. As before, a message is processed by one or more spam filters (block 402) and an indication is received of a relative likelihood that the message is considered to be spam (block 404), such as a numerical score, a relative indication of a degree of “spamminess”, and so on.


One or more of a plurality of proofs are then selected based on the relative likelihood (block 406). Thus, the communication module 108(1) may determine a level of proof that is proportional to the apparent “spamminess” of the message. For example, if the message is almost certainly not spam, the client 102(1) may select a proof requiring a minimal amount of resources to compute. However, if the message is significantly “spammy”, the client 102(1) may select one or more proofs requiring a significantly greater amount of resources to compute. The selected one or more proofs are then computed (block 408) and the message and a result of the computation are output for communication to an intended recipient over a network (block 410).


Thus, in this example, the “amount” of proof is selected based on a guess as to how much proof will be required to bypass the spam filters of the intended recipient, as well as of the communication services that communicate the message. This guess may also be based on the local spam filter 124(1) (e.g., whether it is up-to-date), knowledge of the receiver's filters (e.g., the communication service 106(m) employs aggressive spam filters), and so on. In the previous example, the computations performed were “sender driven”, in that the sender (e.g., client 102(1)) made a guess as to whether the recipients (e.g., communication service 106(m) and client 102(N)) would consider the message to be spam. This determination may also be made, at least in part, through communication with a recipient of the message, an example of which is described in relation to the following figure.



FIG. 5 depicts a procedure 500 in an exemplary implementation in which receiver-driven computation is performed. A message is received over a network (block 502) and processed using one or more spam filters (block 504). For example, the communication service 106(m) may receive a message from client 102(1) and process the message using the spam filters 124(m). An indication is then received of a relative likelihood that the message is spam (block 506).


Based at least in part on the indication, a determination is made as to an amount of proof to be associated with the message such that the message is not considered spam (block 508). For instance, the indication may be configured as a numerical score, which may then be utilized to determine a proportional amount of proof (e.g., more or less computation) such that, when included, the message is not considered to be spam. Additional indicators may also be utilized when making this determination, such as through use of the identity mechanisms 216(x) previously described in relation to FIG. 2. Thus, a variety of factors may be utilized to determine the “amount” of proof to be included with the message.


A determination is then made as to whether the message includes the amount (decision block 510). If so (“yes” from decision block 510), the message is routed accordingly, e.g., to a client's inbox. If not (“no” from decision block 510), a communication is formed to be communicated to a sender of the message to request additional computation (block 514). Thus, in this instance, a receiver (e.g., a communication service 106(m) and/or the client 102(N) that is the intended recipient) may report back that additional proof is needed before further processing and/or routing, e.g., passing to an inbox, pushing to the intended recipient, and so forth. In other words, the recipient may communicate back that the sender's “guess” was wrong. Further, the recipient may also “give credit” for previous amounts of “proof” that were included in the message when requiring the additional proof, e.g., the sender's guess plus the additional proof required equals the minimum amount of proof needed to allow the message to be routed to a user's inbox. Thus, this cost may put an asymmetric burden of proof on spammers because receivers will require larger amounts of proof before the receiver is willing to place a “spammy” message in the intended recipient's inbox.
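The receiver-driven exchange might proceed along the lines of the following sketch (the score-to-difficulty mapping and bit values are assumptions carried over from the earlier sender-side example): the receiver derives the required amount of proof from its own filter's score, credits whatever proof was already attached, and asks the sender only for the shortfall.

```python
def receiver_decision(spam_score: float, included_bits: int,
                      min_bits: int = 10, max_bits: int = 22) -> dict:
    """Credit the attached proof; if it falls short, request only the difference."""
    required_bits = min_bits + round((max_bits - min_bits) * min(spam_score, 1.0))
    if included_bits >= required_bits:
        return {"action": "deliver"}                     # route to the inbox
    return {"action": "request_more",
            "additional_bits": required_bits - included_bits}

print(receiver_decision(spam_score=0.8, included_bits=12))
# {'action': 'request_more', 'additional_bits': 8}
print(receiver_decision(spam_score=0.2, included_bits=14))
# {'action': 'deliver'}
```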


These techniques may also be employed to address a situation, in which, the spam filters are not synchronized, e.g., one spam filter has been updated and another one has not. For example, due to a lack of synchronization, the sender (e.g., client 102(1)) might “guess” incorrectly, and therefore messages sent by the sender may end up in the intended recipients' (e.g., client 102(N)) “junk” mail folder. Therefore, by requesting additional proof, this situation may be avoided.


In an implementation, a recipient (e.g., the communication service 106(m) and/or the intended recipient, client 102(N)) may choose not to inform the sender (e.g., client 102(1)) that additional proof is required in order to avoid “web bugs” (i.e., techniques that spammers use to determine when a receiver reads a message) and address book mining (i.e., techniques used by spammers to determine when an account is live, and thus worth spamming). In such an instance, the recipient may require a certain minimum amount of proof before requesting additional proof from a sender. Thus, the amount of initial proof may be set such that using receiver-driven computation as a surrogate for web bugs and address book mining is uneconomical for spammers. In another example, the “challenge” may be limited to instances in which the sender indicated a willingness to receive challenges, such as in an email header field.


CONCLUSION

Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed invention.

Claims
  • 1. A method comprising: processing an outgoing message using one or more spam filters on a client computer; computing, via the client computer, a result from a proof to be included with the outgoing message that is communicated over a network to an intended recipient when the processing indicates that the outgoing message is considered spam, wherein an extent of processing performed by computing resources used to calculate the proof varies based on the relative probability the outgoing message is spam and the indicated relative probability the outgoing message is spam includes an additional threshold to protect from spam filters that are likely to be more aggressive than the one or more spam filters employed by the client computer; selecting the proof from a plurality of proofs based on the indicated relative probability the outgoing message is spam, such that selecting includes: selecting a complex proof that uses a higher amount of computer resources as the proof when the relative probability the outgoing message is spam is higher; and selecting a simple proof that uses a lower amount of computer resources as the proof when the relative probability the outgoing message is spam is lower; attaching the result from the proof with the outgoing message before the outgoing message is sent to the intended recipient, the result from the proof being included in the outgoing message as evidence that a sender of the outgoing message expended additional computer resources to indicate the outgoing message is less likely to be spam; and receiving a communication from the intended recipient indicating that the result attached to the outgoing message is not enough evidence that the message is less likely to be spam.
  • 2. A method as described in claim 1, wherein the outgoing message is an email or an instant message.
  • 3. A method as described in claim 1, wherein the proof is a Proof of Effort (POE) algorithm.
  • 4. A method as described in claim 1, wherein the processing and the computing are performed: on a client that composed the outgoing message; and before the outgoing message is communicated over the network.
  • 5. A method as described in claim 1, wherein: the processing indicates a relative probability that the outgoing message is spam; and the proof is selected from a plurality of proofs based on the indicated relative probability and an identity mechanism utilized by the outgoing message to identify a sender of the message.
  • 6. A method as described in claim 1, further comprising outputting the outgoing message and the result of the computation to be communicated to an intended recipient over the network.
  • 7. A method comprising: determining a relative probability that an outgoing message is spam by processing the outgoing message using one or more spam filters of a computing device; selecting one or more proofs for the outgoing message, a result of the one or more proofs to be computed by the computing device based on the relative probability the outgoing message is spam, wherein the proof for the outgoing message is an extraneous operation performed by a client computer that is unrelated to the computing device that determines the relative probability that the outgoing message is spam; wherein a complex proof that uses a higher amount of computer resources is selected for the outgoing message from the one or more proofs when the relative probability the outgoing message is spam is higher; wherein a simple proof that uses a lower amount of computer resources is selected for the outgoing message from the one or more proofs when the relative probability the outgoing message is spam is lower; including the result of the one or more proofs with the outgoing message before the outgoing message is communicated over a network to an intended recipient; and receiving a response from the intended recipient over the network indicating that the result of the one or more proofs for the outgoing message included with the outgoing message does not represent a sufficient amount of computer resources to indicate that the outgoing message is less likely to be spam.
  • 8. A method as described in claim 7, wherein the determining and the selecting are performed: before the outgoing message is received by an intended recipient; and by a client that composed the outgoing message before communication over a network.
  • 9. A method as described in claim 8, wherein the determining is not performed when an intended recipient of the outgoing message is in a same domain as a sender of the outgoing message.
  • 10. A method as described in claim 8, wherein the determining and the selecting are performed by a communication service that receives the outgoing message from a client that composed the outgoing message.
  • 11. A method as described in claim 7, wherein the determining is based at least in part on an identity mechanism utilized by the outgoing message to identify a sender of the outgoing message.
  • 12. A method as described in claim 7, wherein each said proof is a Proof of Effort (POE) algorithm.
  • 13. A method as described in claim 7, further comprising outputting the outgoing message and a result of the computation of the selected one or more said proofs for communication to the intended recipient over a network.
  • 14. Computer memory device storing computer-executable instructions that, when executed on one or more processors, performs acts comprising: computing a result from a proof for an outgoing message, the result to be included with the outgoing message that is communicated over a network to an intended recipient upon an indication that the outgoing message has a relative probability of being spam; determining a relative amount of requested processing to be performed by computing resources used to calculate the result of the proof for the outgoing message based on the relative probability that the outgoing message is spam, the proof for the outgoing message being at least one of a hash collision, a solution to a cryptographic problem, a solution to a memory bound problem, or a solution to a reverse Turing test; selecting the proof for the outgoing message from a plurality of proofs based on the relative amount of requested processing such that a complex proof that uses a larger amount of computer resources is selected as the proof for the outgoing message when the relative probability the outgoing message is spam is higher, and a simple proof that uses a smaller amount of computer resources is selected as the proof for the outgoing message when the relative probability the outgoing message is spam is lower; attaching the result from the proof for the outgoing message with the outgoing message before the outgoing message is sent to the intended recipient, the result from the proof for the outgoing message being included in the outgoing message as evidence that a sender of the outgoing message expended additional computer resources to indicate the outgoing message is less likely to be spam; sending the outgoing message with the attached result to the intended recipient; receiving a reply from the intended recipient in response to the sending, the reply indicating that the result of the proof for the outgoing message attached to the outgoing message is not enough evidence that the outgoing message is less likely to be spam; and sending an updated outgoing message to the intended recipient, the updated outgoing message including the result of the proof for the outgoing message and further including an additional result of an additional proof for the outgoing message computed in response to the receiving the reply.
  • 15. A method as described in claim 1, further comprising: computing, via the client computer, an additional result from an additional proof to be included with the outgoing message that is communicated over the network to the intended recipient; and attaching the additional result from the additional proof along with the result to the outgoing message before the outgoing message is sent again to the intended recipient, the additional result included along with the result in the outgoing message as further evidence that the outgoing message is less likely to be spam.
  • 16. A method as described in claim 7, further comprising including an additional result of an additional proof along with the result of the one or more proofs in a re-communication of the outgoing message over the network to the intended recipient in response to the receiving the response.
  • 17. A method as described in claim 1, wherein at least one of: the processing includes determining, by the one or more spam filters on the client computer, whether to perform one or more proofs; or the processing is based, at least in part, on an up-to-date status of the one or more spam filters on the client computer.
  • 18. A method as described in claim 1, wherein the determining includes receiving an initial estimation of a sender that a level of proof for the outgoing message will be sufficient for the outgoing message to the intended recipient.
  • 19. A method as described in claim 1, wherein the spam filters that are likely to be more aggressive operate on a computing device other than the client computer.
Related Publications (1)
Number Date Country
20070100949 A1 May 2007 US