The present disclosure relates to the monitoring of communication traffic generated by users of computer applications.
Various computer applications allow users to exchange communication with each other over a communication network, such as the Internet. Such an exchange may be actively performed, as when one user uses an application to send a text message to another user. Alternatively, such an exchange may be passively performed, as when the device of a first user passes to a second user, via the application server, a status-update message that contains information relating to the status of the first user with respect to the application. For example, upon a given user launching the application, the application may send a message to some or all of the user's contacts, indicating that the user is now “online.” (The user may also receive respective status-update messages from the user's contacts.) Subsequently, while the application is running, the application may periodically send the user's contacts additional status-update messages, to notify the contacts that the user remains online. As another example of a passive exchange, upon a first user opening a message from a second user, the first user's device may send a message to the second user, indicating that the message has been opened.
Many computer applications use encrypted protocols, such that the communication traffic exchanged by these applications is encrypted. Examples of such applications include Gmail, Facebook, Twitter, and WhatsApp. Examples of encrypted protocols include the Secure Sockets Layer (SSL) protocol, the Transport Layer Security (TLS) protocol, and proprietary end-to-end encrypted protocols.
US Patent Application Publication 2016/0285978 describes a monitoring system that monitors traffic flows exchanged over a communication network. The system characterizes the flows in terms of their temporal traffic features and uses this characterization to identify communication devices that participate in the same communication session. By identifying the communication devices that serve as endpoints in the same session, the system establishes correlations between the users of these communication devices. The monitoring system characterizes the flows using traffic features such as flow start time, flow end time, inter-burst time and burst size, and/or statistical properties of such features. The system typically generates compressed-form representations (“signatures”) for the traffic flows based on the temporal traffic features and finds matching flows by finding similarities between signatures.
There is provided, in accordance with some embodiments of the present invention, apparatus that includes a network interface and a processor. The processor is configured to receive a volume of communication traffic that includes a plurality of messages, each of which is exchanged between a server for an application and one of a plurality of users. The processor is further configured to identify in the received volume, by scanning the received volume for any message sequence that follows any one of a plurality of predetermined message-sequence patterns, at least one sequence of messages that is exchanged between the server and a particular pair of the users and follows one of the predetermined message-sequence patterns. The processor is further configured to, in response to the identifying, calculate a likelihood that the particular pair of the users used the application to communicate with one another, and, in response to the likelihood exceeding a threshold, generate an output that indicates the particular pair of the users.
In some embodiments, the messages are encrypted, and the processor is configured to scan the received volume without decrypting any of the messages. In some embodiments, the processor is configured to scan the received volume for any message sequence that follows any one of the predetermined message-sequence patterns by virtue of a property of the message sequence selected from the group of properties consisting of: respective sizes of messages in the message sequence, respective directionalities of the messages in the message sequence, and respective user-endpoints of the messages in the message sequence.
In some embodiments, the processor is further configured to, prior to scanning the received volume, identify multiple pairs of the users that potentially used the application to communicate with one another, the multiple pairs including the particular pair, by identifying in the volume of traffic, for each pair of the multiple pairs, a plurality of instances in which a first one of the messages destined to a first member of the pair was received within a given time interval of a second one of the messages destined to a second member of the pair, and the processor is configured to scan the received volume for any message sequence exchanged between the server and any one of the identified multiple pairs of the users.
In some embodiments, the processor is configured to identify the sequence in response to the sequence spanning a time interval that is less than a given threshold.
In some embodiments, the given threshold is a function of a number of round trips, between the server and the particular pair of users, that is implied by the sequence. In some embodiments, the processor is configured to identify a plurality of sequences that collectively follow a plurality of different ones of the predetermined message-sequence patterns, and the processor is configured to calculate the likelihood, using a machine-learned model, based at least on respective numbers of the identified sequences following the different ones of the predetermined message-sequence patterns.
In some embodiments, the volume is a first volume, and the processor is further configured to:
identify a plurality of true message sequences, each of which follows any one of the predetermined message-sequence patterns and is assumed to belong to a communication session between any two users, generate a second volume of communication traffic, by intermixing a first sequential series of messages exchanged with the server with a second sequential series of messages exchanged with the server, identify, in the second volume, a plurality of spurious message sequences, each of which follows any one of the predetermined message-sequence patterns and includes at least one message from the first sequential series and at least one message from the second sequential series, and train the model, using both the true message sequences and the spurious message sequences.
In some embodiments, the processor is further configured to, prior to scanning the volume, learn the message-sequence patterns, by identifying a plurality of ground-truth message sequences, each of which follows any one of the message-sequence patterns and is assumed to belong to any one of a plurality of communication sessions between one or more other pairs of users.
In some embodiments, the processor is further configured to ascertain that each one of the ground-truth message sequences is assumed to belong to one of the communication sessions, by identifying, for each pair of the other pairs of users, a plurality of instances in which a first message destined to a first member of the pair was received within a given time interval of a second message destined to a second member of the pair.
In some embodiments, the volume is a first volume, and the processor is further configured to:
generate a second volume of communication traffic, by intermixing a first sequential series of messages exchanged with the server with a second sequential series of messages exchanged with the server, and identify, in the second volume, a plurality of spurious message sequences, each of which includes at least one message from the first sequential series and at least one message from the second sequential series, and the processor is configured to, in learning the predetermined message-sequence patterns, exclude at least some patterns followed by the spurious message sequences from the predetermined message-sequence patterns, in response to identifying the spurious message sequences.
There is further provided, in accordance with some embodiments of the present invention, a method that includes receiving a volume of communication traffic that includes a plurality of messages, each of which is exchanged between a server for an application and one of a plurality of users. The method further includes, by scanning the received volume for any message sequence that follows any one of a plurality of predetermined message-sequence patterns, identifying, in the received volume, at least one sequence of messages that is exchanged between the server and a particular pair of the users, and follows one of the predetermined message-sequence patterns. The method further includes, in response to the identifying, calculating a likelihood that the particular pair of the users used the application to communicate with one another, and, in response to the likelihood exceeding a threshold, generating an output that indicates the particular pair of the users.
There is further provided, in accordance with some embodiments of the present invention, a computer software product including a tangible non-transitory computer-readable medium in which program instructions are stored. The instructions, when read by a processor, cause the processor to receive a volume of communication traffic that includes a plurality of messages, each of which is exchanged between a server for an application and one of a plurality of users. The instructions further cause the processor to identify in the received volume, by scanning the received volume for any message sequence that follows any one of a plurality of predetermined message-sequence patterns, at least one sequence of messages that is exchanged between the server and a particular pair of the users and follows one of the predetermined message-sequence patterns. The instructions further cause the processor to, in response to the identifying, calculate a likelihood that the particular pair of the users used the application to communicate with one another, and, in response to the likelihood exceeding a threshold, generate an output that indicates the particular pair of the users.
The present disclosure will be more fully understood from the following detailed description of embodiments thereof, taken together with the drawings, in which:
Overview
In some cases, interested parties may wish to identify relationships between users of computer applications, such as apps that run on a mobile phone, by monitoring the communication traffic generated by these applications. A challenge in doing so, however, is that the traffic generated by the applications may be encrypted. Moreover, many applications use a server to intermediate communication between users, such that traffic does not flow directly between the users. Furthermore, popular applications, such as WhatsApp, may have hundreds of thousands, or even millions, of active users at any given instant. As yet another challenge, each application of interest may generate different characteristic patterns of traffic.
Embodiments of the present disclosure address these challenges, using a technique that does not require performing any decryption, and that requires only passive monitoring of the communication exchanged with the relevant application servers. In particular, in embodiments described herein, a monitoring system receives the traffic exchanged with each relevant application server, and identifies, in the traffic, sequences of messages—or “n-grams”—that appear to belong to a communication session between a pair of users. Subsequently, based on the numbers and types of identified n-grams, the system identifies each pair of users that are likely to be related to one another via the application, in that these users used the application to communicate (actively and/or passively) with one another.
For example, for a particular messaging application, the monitoring system may identify a sequence of three messages, or a “3-gram,” that indicates that the following sequence of events occurred:
(i) A first user sent a text message to the server.
(ii) The server sent a text message to a second user.
(iii) The server sent an acknowledgement message to the first user.
Assuming that the time span of this sequence does not exceed a particular threshold, this sequence suggests possible communication between the first and second users. (Of course, this sequence does not definitively suggest such communication, since it is possible that the text message received by the second user was sent by a third user, rather than by the first user.) Hence, the identification of this sequence may increase the likelihood of a relationship between the first and second users.
Advantageously, the system typically does not need to learn to explicitly identify each type of message that a particular application generates. Rather, the system may identify those sequences of messages that, by virtue of the sizes of the messages in the sequence, and/or other properties of the messages that are readily discernable, indicate a possible user-pair relationship. Thus, for example, the system may identify the aforementioned 3-gram without identifying the first and second messages as text messages, and/or without identifying the third message as an acknowledgement message.
Typically, prior to looking for n-grams in the traffic, the system performs an initial screening, to identify candidate pairs of related users. For example, the system may identify all pairs of users who, in a sufficient number of instances, received messages from the server within a small time interval of one another, such as within 2 ms of one another. (Each such instance is termed an “Rx collision.”) The system then looks for n-grams only for the candidate pairs, while ignoring other pairs of users. This screening process generally reduces the time required to identify pairs of related users, without significantly increasing the number of missed related pairs. Hence, even applications having a large number of simultaneously-active users may be handled by the system.
Embodiments of the present disclosure also include techniques for learning the message-sequence patterns that potentially suggest user-pair relatedness, such that the system may subsequently search for n-grams that follow these specific patterns. (These patterns may also be referred to as “n-grams,” such that the term “n-gram” may refer either to a message-sequence pattern, or to an actual sequence of messages.) First, the system identifies “ground-truth” pairs of related users, using any suitable external source of information (e.g., contact lists), and/or by applying the above-described screening process to a large volume of communication traffic and identifying those pairs of users having a relatively large number of associated Rx collisions. Subsequently, the system identifies the most common patterns appearing in communication sessions between the pairs of related users.
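By way of non-limiting illustration, the following Python sketch shows one way in which such a list of common patterns could be assembled from the sessions of ground-truth related pairs. The message layout, the function names (pattern_of, learn_patterns), and the choice of n-gram lengths and cutoff are assumptions made for the illustration and are not specified by the present disclosure.

```python
from collections import Counter, namedtuple

# Illustrative message record: receipt time (s), size (bytes),
# direction ("u" upstream / "d" downstream), and user endpoint.
Msg = namedtuple("Msg", "time size direction user")

def pattern_of(seq):
    """Map a message sequence to its abstract pattern: a tuple of
    (size, direction, endpoint-ID) words, where the user appearing
    first in the sequence is assigned endpoint ID 0."""
    ids, words = {}, []
    for m in seq:
        ids.setdefault(m.user, len(ids))
        words.append((m.size, m.direction, ids[m.user]))
    return tuple(words)

def learn_patterns(ground_truth_sessions, n_values=(2, 3, 4, 5), top_k=50):
    """Count the patterns followed by consecutive message sequences
    ("n-grams") in sessions of known related pairs, and keep the most
    common ones as the list of predetermined patterns."""
    counts = Counter()
    for session in ground_truth_sessions:      # time-ordered list of Msg
        for n in n_values:
            for i in range(len(session) - n + 1):
                counts[pattern_of(session[i:i + n])] += 1
    return [pattern for pattern, _ in counts.most_common(top_k)]
```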
Embodiments of the present disclosure further include techniques for training a classifier to identify that a particular pair of users is related, based on the numbers and types of n-grams identified for the pair. To train the classifier, the system first records, for each of a plurality of pairs of related users, and for each of the potentially-meaningful patterns that were learned as described above, the number of sequences following the pattern that were identified in a volume of traffic spanning a particular time interval (e.g., 10 minutes). This information is supplied to the classifier in the form of a plurality of “feature vectors,” each of which corresponds to a respective pair of related users. Similar feature vectors are generated for a plurality of pairs of unrelated users and are likewise supplied to the classifier. Based on these feature vectors, the classifier learns to differentiate between related and unrelated pairs.
To facilitate generating feature vectors for pairs of unrelated users, the system may mix two separate volumes of traffic with one another, such as to create “spurious” n-grams that each include at least one message from each of the volumes. This mixing technique may be further used in the above-described learning stage, in that the system may identify a given pattern as potentially meaningful only if this pattern is exhibited by the true n-grams with a frequency that is sufficiently greater than the frequency with which the pattern is exhibited by the spurious n-grams.
System Description
Reference is initially made to
Typically, system 20 passively monitors the communication over network 22, in that the system does not intermediate the exchange of communication traffic between users 24 and servers 26, but rather, receives copies of the traffic from one or more network taps 32. Network taps 32 may be situated at any suitable point in the network; for example, network taps 32 may be situated near one or more Internet Service Providers (ISPs) 23.
The “units” of communication traffic exchanged over network 22 may include, for example, Transmission Control Protocol (TCP) packets, User Datagram Protocol (UDP) packets, or higher-level encapsulations of TCP or UDP packets, such as SSL frames or any encrypted proprietary frames. In some cases, a single unit of traffic corresponds to a single message. (For example, each SSL frame generally corresponds to a single message.) In other cases, a single unit may carry only part of a message or may carry multiple messages. Hence, system 20 is configured to combine or split units of traffic, as necessary, in order to identify the individual messages that underlie the communication. (Typically, in the event that a given message spans more than one packet, the time at which the first or last packet containing at least part of the message was received by the system is used as the receipt time of the message.) System 20 is further configured to identify the sizes of the underlying messages (e.g., by reading any unencrypted headers), and to use these sizes for related-user-pair identification, as described in detail below.
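As a non-limiting illustration of this combining and splitting, the sketch below parses a reassembled TCP byte stream into TLS/SSL records using only the unencrypted 5-byte record header, and assigns each record the receipt time of the first packet that carried part of it. The TLS framing, the dictionary layout, and the function name are assumptions made for the illustration; other encapsulations would require their own framing logic.

```python
import struct

def messages_from_tcp_stream(packets):
    """Split a reassembled TCP byte stream into TLS records, using only the
    unencrypted 5-byte record header (content type, version, length).  Each
    record is treated as one message, and the receipt time of the first
    packet carrying part of the record is used as the message's receipt
    time.  `packets` is a time-ordered list of (receipt_time, payload_bytes)
    tuples belonging to one direction of one TCP connection."""
    stream = b"".join(payload for _, payload in packets)
    # Map each byte offset to the receipt time of the packet that carried it.
    starts, pos = [], 0
    for t, payload in packets:
        starts.append((pos, t))
        pos += len(payload)

    def time_at(offset):
        return max(t for start, t in starts if start <= offset)

    messages, offset = [], 0
    while offset + 5 <= len(stream):
        _ctype, _version, length = struct.unpack("!BHH", stream[offset:offset + 5])
        if offset + 5 + length > len(stream):
            break                         # record not fully captured yet
        messages.append({"time": time_at(offset), "size": length})
        offset += 5 + length
    return messages
```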
Typically, any given message does not indicate (in an unencrypted form) the application to which the message belongs, the identity of the sender, or the identity of the entity for whom the message is destined. Rather, the message typically specifies only the communication protocol per which the message is constructed, a source Internet Protocol (IP) address and port, and a destination IP address and port. (For downstream messages, the source IP address and port belong to the server, and the destination IP address and port belong to the user for whom the message is destined; for upstream messages, the source IP address and port belong to the sending user, and the destination IP address and port belong to the server.) Notwithstanding this dearth of information, however, system 20 may identify the application to which the message belongs, along with the identity of the “endpoint user” with whom the message was exchanged.
For example, system 20 may identify the application from the source or destination IP address of the server that is contained in the message. For example, in response to identifying the IP address of the WhatsApp server, the system may ascertain that the message was generated from the WhatsApp application. In the event that the server serves multiple applications, system 20 may identify the application from an SSL handshake at the start of the communication session. Alternatively, system 20 may perform all of the techniques described herein even if the system does not know the application to which any given one of the messages belongs, by treating all communication exchanged with the server as belonging to a single application.
The system may further use any suitable technique to identify the endpoint user who sent or received the message. For example, the system may refer to a cellular service provider for a mapping between IP addresses and mobile phone numbers, such that the source or destination IP address may be used to identify the user who sent or received the message. (Such a mapping may be derived, for example, from General Packet Radio Service Tunneling Protocol (GTP) data.) In the event that a user is using a network address translator (NAT), which allows multiple devices to use a single IP address, techniques for discovering the identity of a device behind a NAT, such as any of the techniques described in US Patent Application Publication 2017/0222922, whose disclosure is incorporated herein by reference, may be applied. For example, the system may use one or more device identifiers, such as one or more Internet cookies, to identify the device.
System 20 comprises a network interface 28, such as a network interface controller (NIC), and a processor 30. Intercepted messages from network taps 32 are received by processor 30 via network interface 28. Processor 30 processes the messages as described herein, such as to identify relationships between users 24, or perform any other functions described herein. Further to processing the messages, the processor may generate any suitable output, such as a visual output displayed on a display 36. System 20 may further comprise any suitable input devices, such as a keyboard and/or mouse, to facilitate human interaction with the system.
In general, processor 30 may be embodied as a single processor, or as a cooperatively networked or clustered set of processors. The functionality of processor 30, as described herein, may be implemented in hardware, e.g., using one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs). Alternatively, this functionality may be implemented using software, or using a combination of hardware and software elements. For example, processor 30 may be a programmed digital computing device comprising a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD ROM drive, network interfaces, and/or peripheral devices. Program code, including software programs, and/or data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage, as is known in the art. The program code and/or data may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory. Such program code and/or data, when provided to the processor, produce a machine or special-purpose computer, configured to perform the tasks described herein.
N-Gram Identification and Pattern Matching
Reference is now made to
Given that each message 40 is encrypted, processor 30 cannot, typically, inspect the content of the message. However, processor 30 may identify certain features of each message, even without decrypting the message. These features include, in addition to the origin and destination of the message (the identification of which was described above with reference to
It is noted that the specific sizes and times shown in
By scanning volume 38, processor 30 identifies sequences of messages exchanged between the server and various pairs of users, each of these sequences potentially belonging to a communication session between the corresponding pair of users. A sequence is said to be exchanged between the server and a particular pair of users if each message in the sequence is exchanged between the server and either one of the users. For example,
(i) User B sent a text message (of size 157 bytes), destined for User A, to the server;
(ii) User B received an acknowledgement message (of size 36) from the server, acknowledging receipt of the text message by the server;
(iii) User A received the text message from the server;
(iv) User A sent an acknowledgement message (of size 64) to the server, acknowledging receipt of the text message; and
(v) User B received another acknowledgement message (of size 36) from the server, reporting that the text message was received by User A.
(It is noted that the present application may refer to a message as being sent by, received by, or destined to a particular user, even if the user is never explicitly made aware of the message, as long as the message is sent by, received by, or destined to the user's device.)
Typically, processor 30 scans volume 38 for specific predefined message-sequence patterns, contained in a list 41, that are known to potentially indicate a user-pair relationship. (As further described below with reference to
For example, the sequence of MSG1, MSG2, MSG4, MSG5, and MSG6 may be represented as the following 5-gram: {(157, u, 0), (36, d, 0), (157, d, 1), (64, u, 1), (36, d, 0)}. In this representation, “d” indicates that the message is passed downstream, from the server to one of the users, while “u,” for “upstream,” indicates the reverse directionality. One of the users—in this case, User B—is assigned a user-endpoint ID of 0, and the other user is assigned a user-endpoint ID of 1. Assuming that list 41 includes this 5-gram, this sequence may be identified as potentially indicating a relationship between User A and User B.
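A minimal sketch of this representation is given below, assuming that each message has already been reduced to its size, directionality, and user endpoint; the function and variable names are illustrative only and are not part of the present disclosure.

```python
def to_ngram(messages):
    """Represent a time-ordered sequence of messages, all exchanged between
    the server and a single pair of users, as a tuple of
    (size, direction, endpoint-ID) words, with endpoint ID 0 assigned to
    the user appearing first in the sequence."""
    endpoint_ids, words = {}, []
    for size, direction, user in messages:       # e.g. (157, "u", "B")
        endpoint_ids.setdefault(user, len(endpoint_ids))
        words.append((size, direction, endpoint_ids[user]))
    return tuple(words)

# The MSG1, MSG2, MSG4, MSG5, MSG6 sequence described above:
sequence = [(157, "u", "B"), (36, "d", "B"), (157, "d", "A"),
            (64, "u", "A"), (36, "d", "B")]
ngram = to_ngram(sequence)
# -> ((157, "u", 0), (36, "d", 0), (157, "d", 1), (64, "u", 1), (36, "d", 0))
predetermined_patterns = {ngram}                 # stand-in for list 41
assert ngram in predetermined_patterns
```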
Typically, when considering whether to identify a given sequence of messages for a particular pair of users, the processor considers the time span of the sequence. In response to this time span being less than a given threshold, the processor may identify the sequence. Typically, this threshold is a function of a number of round trips, between the server and the pair of users, that is implied by the sequence. (In general, each round trip includes an upstream message followed by a downstream message, or vice versa.)
For example, the time span of the five-message sequence described above—i.e., the interval between MSG1 and MSG6—is approximately 3.7 ms. (In practice, typically, the time span of such a sequence would be much larger than 3.7 ms; as noted above, however, the times in
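By way of illustration, the sketch below counts the round trips implied by an n-gram as the number of direction alternations, and accepts a candidate sequence only if its time span stays within a per-round-trip budget. The 0.5 s budget and the function names are assumed values chosen for the illustration, not values taken from the present disclosure.

```python
def implied_round_trips(ngram):
    """Count the round trips implied by an n-gram of (size, direction,
    endpoint-ID) words: each alternation between an upstream and a
    downstream message (or vice versa) is taken as one round trip."""
    directions = [direction for _size, direction, _endpoint in ngram]
    return sum(1 for a, b in zip(directions, directions[1:]) if a != b)

def within_time_threshold(receipt_times, ngram, budget_per_round_trip_s=0.5):
    """Accept a candidate sequence only if its time span does not exceed a
    threshold that grows with the number of implied round trips.  The
    0.5 s per-round-trip budget is an arbitrary illustrative value."""
    span_s = receipt_times[-1] - receipt_times[0]
    return span_s <= budget_per_round_trip_s * max(1, implied_round_trips(ngram))
```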
In many cases, the processor may identify a smaller sequence of messages that is subsumed within a larger sequence. For example, in addition to identifying a given 5-gram, the processor may identify a 2-gram, 3-gram, and/or 4-gram that is included in the 5-gram.
It is noted that each message may be characterized by any number of properties, alternatively or additionally to those specified above. Hence, a given sequence may be identified as following one of the predetermined message-sequence patterns by virtue of the respective sizes of the messages in the sequence, the respective directionalities of the messages, the respective user endpoints of the messages, and/or any other properties of the messages. For example, each word in any given n-gram may include, in addition to the size, directionality, and user endpoint of the corresponding message, the time interval between the receipt of the previous message and the receipt of the message. (Thus, for example, MSG4 may be represented by the word (157, d, 1, 0.3228)).
It is further noted that a given predetermined pattern may specify an upper bound, a lower bound, or a range of values for one or more “letters,” such that multiple sequences having different respective properties may be deemed to match the same pattern. For example, by way of illustration, the following predetermined pattern may describe the sending of a text message from one user to another user: {(x>MAXSIZE, u, 0), (36, d, 0), (x+/−RANGE, d, 1), (64, u, 1), (36, d, 0)}. To match this pattern, a given sequence requires the following (an illustrative matching sketch is given after the list):
(i) a first message, sent from the first user, whose size “x” is greater than the size MAXSIZE of the largest standard message belonging to the application;
(ii) a second message, received by the first user, whose size matches that of a standard acknowledgement message sent by the server (namely, 36 bytes);
(iii) a third message, received by the second user, whose size is within a given RANGE of “x”;
(iv) a fourth message, sent by the second user, whose size matches that of a standard acknowledgement message sent to the server (namely, 64 bytes); and
(v) a fifth message, sent to the first user, whose size matches that of a standard acknowledgement message sent by the server (namely, 36 bytes).
(The above-described sequence of MSG1, MSG2, MSG4, MSG5, and MSG6 matches this pattern.)
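A sketch of such bounded matching is given below. The MAXSIZE and RANGE values, the encoding of the bounds, and the function name are assumptions chosen for the illustration so that the five-message sequence described above matches the pattern; they are not values from the present disclosure.

```python
MAXSIZE = 100   # assumed size of the largest standard application message
RANGE = 8       # assumed allowed size difference between sent/received copies

# {(x > MAXSIZE, u, 0), (36, d, 0), (x +/- RANGE, d, 1), (64, u, 1), (36, d, 0)}
TEXT_MESSAGE_PATTERN = [
    (("gt", MAXSIZE), "u", 0),
    (36, "d", 0),
    (("near", 0, RANGE), "d", 1),   # size near that of letter 0 ("x")
    (64, "u", 1),
    (36, "d", 0),
]

def matches(sequence, pattern):
    """Check whether a (size, direction, endpoint) word sequence matches a
    bounded pattern; exact sizes, lower bounds, and ranges are supported."""
    if len(sequence) != len(pattern):
        return False
    for (size, direction, endpoint), (spec, p_dir, p_ep) in zip(sequence, pattern):
        if direction != p_dir or endpoint != p_ep:
            return False
        if isinstance(spec, int):
            if size != spec:
                return False
        elif spec[0] == "gt" and size <= spec[1]:
            return False
        elif spec[0] == "near" and abs(size - sequence[spec[1]][0]) > spec[2]:
            return False
    return True

# The MSG1, MSG2, MSG4, MSG5, MSG6 sequence from above matches this pattern:
assert matches([(157, "u", 0), (36, "d", 0), (157, "d", 1),
                (64, "u", 1), (36, "d", 0)], TEXT_MESSAGE_PATTERN)
```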
It is emphasized that the processor does not need to identify the meaning of any particular message, or of any particular sequence. Rather, the processor need only learn the message-sequence patterns that indicate user-pair relationships, and then identify sequences matching these patterns. Hence, a single framework for learning and scanning may be deployed across multiple applications.
Notwithstanding the above, the processor may, in some embodiments, classify particular types of messages, and include the message classification as another letter in the words that represent the messages. (Thus, for example, MSG4 may be represented by the word (157, d, 1, “text”)). In classifying messages, the processor may, for example, use any of the transfer-learning techniques described in Israel Patent Application No. 250948, whose disclosure is incorporated herein by reference.
For each identified sequence that matches one of the predefined patterns, the processor increments a counter that is maintained for the corresponding pair of users and for the particular pattern followed by the sequence. The processor thus generates, for each of a plurality of user pairs, a feature vector 44 that includes the count for each of the predetermined patterns. (In some embodiments, to reduce the dimensionality of feature vectors 44, multiple patterns may be grouped together into a “family” of patterns, such that the counts for these patterns are combined into a single feature.)
For example, by way of illustration,
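The following sketch illustrates this counting, under the assumption that the scanning step emits (user pair, matched pattern) tuples; the function name and the dictionary-based layout of the feature vectors are illustrative only.

```python
from collections import Counter

def build_feature_vectors(identified_sequences, pattern_list):
    """Aggregate identified sequences into one feature vector per user pair:
    entry i holds the number of identified sequences that followed
    pattern_list[i].  `identified_sequences` is assumed to be an iterable
    of (user_pair, matched_pattern) tuples emitted by the scan."""
    per_pair = {}                                   # user pair -> Counter
    for pair, pattern in identified_sequences:
        per_pair.setdefault(pair, Counter())[pattern] += 1
    index = {pattern: i for i, pattern in enumerate(pattern_list)}
    vectors = {}
    for pair, counter in per_pair.items():
        vector = [0] * len(pattern_list)
        for pattern, count in counter.items():
            vector[index[pattern]] = count
        vectors[pair] = vector
    return vectors
```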
Following the generation of feature vectors 44, the processor applies a machine-learned model to feature vectors 44. This model calculates, for each pair of users, a likelihood, or “confidence level,” that the pair of users are related to one another, i.e., that the pair of users used the application to communicate with one another, based on the counts contained in the pair's feature vector. (The values of this likelihood measure may be drawn from the range of 0-1, or from any other suitable range.) For example, the machine-learned model may comprise a classifier 46, which, given feature vectors 44, classifies each candidate related pair as “related” or “unrelated,” with an associated likelihood.
Typically, the processor divides the full volume of received communication into multiple sub-volumes that each have a predefined time interval (such as 10 minutes), generates a separate set of vectors 44 for each of the sub-volumes, and then separately processes each of these sets, as described above. Thus, for each relevant pair of users, and for each of the sub-volumes, the processor obtains a different respective likelihood of relatedness. The processor then combines the sub-volume-specific likelihoods of relatedness for each pair of users into a combined likelihood of relatedness, referred to hereinbelow as a “score,” which may be drawn from any suitable range of values. (This score may be continually updated over time, e.g., over several days or weeks, as further communication is received by the processor.) Each of the calculated scores is compared, by the processor, to a suitable threshold. In response to the score for a particular pair of users exceeding the threshold, the processor generates an output that indicates the particular pair of the users. (Typically, the output also indicates the score for the pair.) For example, the processor may generate a visual output on display 36 showing all pairs of users whose respective scores exceed the threshold, thus reporting these pairs of users as being potentially-related pairs.
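The present disclosure leaves open how the sub-volume-specific likelihoods are combined into a score; the sketch below shows one plausible choice, accumulating log-odds across sub-volumes and then reporting every pair whose score exceeds the threshold. The combination rule and the clipping constant are assumptions made for the illustration.

```python
import math

def combined_score(subvolume_likelihoods):
    """Combine a pair's per-sub-volume likelihoods of relatedness into a
    single score by accumulating log-odds; the clipping constant avoids
    infinities for likelihoods of exactly 0 or 1."""
    score = 0.0
    for p in subvolume_likelihoods:
        p = min(max(p, 1e-6), 1.0 - 1e-6)
        score += math.log(p / (1.0 - p))
    return score

def report_related_pairs(scores_by_pair, threshold):
    """Return every pair whose combined score exceeds the threshold,
    together with the score, sorted from highest to lowest score."""
    return sorted(((pair, score) for pair, score in scores_by_pair.items()
                   if score > threshold),
                  key=lambda item: item[1], reverse=True)
```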
Notwithstanding the above, it is noted that for some pairs of users, even a single one of the sub-volumes may provide sufficient evidence of relatedness, i.e., the likelihood of relatedness that is calculated from a single sub-volume may already exceed the threshold.
Screening the Volume for Candidate Related Pairs
Typically, volume 38 includes a large number of messages exchanged, collectively, with a large number of users, such that it is relatively time-consuming to consider all possible pairs of related users. Hence, the processor typically performs a first screening, to identify “candidate” pairs of related users who potentially used the application to communicate with one another. The processor then scans volume 38 for sequences of messages exchanged between the server and any one of the identified candidate related pairs, while generally ignoring other pairs of users who are assumed to be unrelated to one another.
In this regard, reference is now made to
Typically, to identify a particular pair of users as a candidate pair of related users, the processor identifies, in volume 38, a plurality of “Rx collisions,” i.e., a plurality of instances in which a first downstream message destined to one member of the pair was received within a given time interval (e.g., 1-6 ms, such as 2-3 ms) of a second downstream message destined to the other member of the pair. In some embodiments, to identify the Rx collisions, the processor passes, over volume 38, a sliding window whose duration is equal to the desired Rx-collision time interval and identifies all pairs of downstream messages contained in the sliding window. One such Rx collision is identified in
Typically, the processor counts the number of Rx collisions for each pair of users, and stores the counts, for example, in a table 42. Subsequently, the processor identifies each pair having an Rx-collision count that is greater than a particular threshold as being a candidate pair of related users. This threshold may be designated, for example, based on the total duration of volume 38, or based on a given percentile of the Rx-collision counts. Alternatively, this threshold may be implicitly designated, in that the processor may sort table 42 in decreasing order of Rx-collision count, and then select the top “M” pairs of users, where M is any suitable number, as candidate pairs. (After a particular pair of users is identified as being related, in that the pair's likelihood of relatedness exceeds the relevant threshold, the processor may ignore any Rx collisions between the pair, such as to make room, in the top M slots in table 42, for other pairs of users who were not yet identified as being related.)
For example, with reference to table 42, the processor may identify each of the pairs (A,B), (A,C), and (A,D), but not the pair (B,C), as a candidate related pair, based on a threshold of, for example, 52 or 100. Subsequently, the processor may scan volume 38 for n-grams belonging to (A,B), (A,C), and (A,D), but not for n-grams belonging to (B,C). Thus, for example, the processor may ignore the 3-gram of MSG1, MSG2, and MSG3, given that the processor already established, by the aforementioned screening process, that the pair (B,C) is likely not related.
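By way of non-limiting illustration, the sketch below implements such a sliding-window count of Rx collisions and selects the top M pairs as candidate related pairs; the 2 ms window, the value of M, and the data layout are illustrative assumptions.

```python
from collections import Counter

def candidate_related_pairs(downstream_messages, window_s=0.002, top_m=1000):
    """Count Rx collisions: instances in which two downstream messages,
    destined to two different users, are received within `window_s` of one
    another (2 ms here, by way of example).  `downstream_messages` is a
    time-ordered list of (receipt_time, destination_user) tuples.  The top
    M pairs by collision count are returned as candidate related pairs."""
    collisions = Counter()                       # analogous to table 42
    start = 0
    for i, (t_i, user_i) in enumerate(downstream_messages):
        # Advance the trailing edge of the sliding window.
        while downstream_messages[start][0] < t_i - window_s:
            start += 1
        for _t_j, user_j in downstream_messages[start:i]:
            if user_j != user_i:
                collisions[frozenset((user_i, user_j))] += 1
    return [pair for pair, _count in collisions.most_common(top_m)]
```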
In some embodiments, to improve the effectiveness of the screening, the processor counts only Rx collisions that involve downstream messages of particular types (or of particular sizes). Alternatively or additionally to identifying a candidate related pair by counting Rx collisions for the pair, the processor may identify the candidate related pair by identifying “Tx-Rx collisions” for the pair. In other words, the processor may identify instances in which a downstream message destined for one member of the pair was received within a given time interval of an upstream message sent from the other member of the pair.
Learning the Predetermined Message-Sequence Patterns
Reference is now made to
Typically, prior to scanning volume 38 as described above with reference to
For example, to learn the patterns, the processor may scan another volume 48 of communication traffic that functions as a “learning set,” in that volume 48 includes communication sessions between pairs of related users. By scanning volume 48, the processor may identify various message sequences exchanged between the server and these pairs of related users, and hence the patterns that are followed by these sequences. For example, in the specific scenario shown in
By scanning volume 48, the processor may also generate a ground-truth set 51 of training vectors, which specifies, for each pair of related users, and for each of the patterns in list 41, the number of sequences following the pattern that were observed for the pair. (The vectors in set 51 are thus analogous to feature vectors 44, described above with reference to
In some embodiments, volume 48 is expressly generated for learning the meaningful message-sequence patterns. For example, two or more users may deliberately perform various exchanges of communication with each other, such as to generate a variety of different message sequences. Alternatively or additionally, processor 30 may perform various exchanges of communication between automated user profiles. (For example, User E and User F, shown in
Alternatively or additionally, the processor may learn the message-sequence patterns from a volume of communication traffic that was not expressly generated for learning purposes. In such embodiments, the processor first identifies at least one ground-truth pair of related users, i.e., at least one pair of users who are assumed, with sufficient confidence, to be related to one another, such that the processor may subsequently identify various message-sequence patterns from the traffic of this pair. Further to identifying any given ground-truth pair, the processor may assume that any sequence exchanged with the pair belongs to a communication session between the pair.
In some embodiments, the aforementioned ground-truth pairs are identified from an information source other than the traffic generated by the application of interest. For example, via network taps 32, the processor may monitor Voice over IP (VoIP) communication, or any other peer-to-peer communication over network 22, such as to identify a pair of users who communicate with one another relatively frequently. Alternatively or additionally, ground-truth pairs may be identified by monitoring other communication sessions that are not exchanged over network 22, such as phone conversations or Short Message Service (SMS) sessions. Alternatively or additionally, ground-truth pairs may be identified from other information sources, such as contact lists.
Alternatively or additionally, the processor may identify ground-truth related pairs by applying the screening technique described above with reference to
Generating Training Vectors for Pairs of Unrelated Users
Reference is now made to
Typically, to generate set 57, the processor first generates “spurious” message sequences, each of which is known not to belong to any communication session. For example, the processor may generate a synthetic volume 56 of communication traffic, by intermixing a first sequential series 52 of messages exchanged with the server with a second sequential series 54 of messages exchanged with the server. For example, the processor may intermix a first volume of traffic obtained from network taps 32 with a second volume generated by automatic user profiles. As another example, the processor may intermix two volumes of communication traffic obtained from network taps 32 over two different time periods. This intermixing creates a plurality of spurious message sequences, each of which includes at least one message from first series 52 and at least one message from second series 54.
When intermixing the two series of messages, the receipt times of one of the series are “normalized” with respect to the receipt times of the other series, so that the resulting synthetic volume includes an intermingling of messages from the two series, as if the two series were received over the same time period. For example, in
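A minimal sketch of this intermixing is shown below, assuming each message is represented as a dictionary with a receipt time; the added "series" tag, used later to recognize spurious sequences, is an implementation convenience introduced for the illustration and not part of the present disclosure.

```python
def intermix(series_a, series_b):
    """Build a synthetic volume by shifting the receipt times of the second
    series so that it starts at the same instant as the first, and merging
    the two series into one time-ordered list.  Each message is a dict with
    at least a "time" key; a "series" tag is added so that spurious
    sequences can later be recognized."""
    if not series_a or not series_b:
        return list(series_a) + list(series_b)
    shift = series_a[0]["time"] - series_b[0]["time"]
    mixed = [dict(m, series="a") for m in series_a]
    mixed += [dict(m, time=m["time"] + shift, series="b") for m in series_b]
    return sorted(mixed, key=lambda m: m["time"])

def is_spurious(sequence):
    """A sequence drawn from the synthetic volume is spurious if it mixes
    messages from both series, and hence cannot belong to a real session."""
    return {m["series"] for m in sequence} == {"a", "b"}
```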
Subsequently, by scanning volume 56, the processor identifies the spurious sequences in volume 56, and the patterns followed by these sequences. For each identified pattern that is contained in list 41 (
For example,
Typically, volume 56 is divided into a plurality of sub-volumes, each having a standard, predefined time span (such as 10 minutes), and ground-truth training vectors are generated from each of the sub-volumes.
It is noted that synthetic volume 56 may also be used to exclude meaningless patterns from list 41, in that, if a given pattern is followed relatively frequently by the spurious sequences in volume 56, the processor may exclude the pattern from list 41. For example, for each candidate pattern, the processor may calculate the frequency with which the pattern appears in the “true” sequences of volume 48 (
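By way of illustration, and consistent with the criterion described in the Overview, the sketch below keeps a candidate pattern only if it appears in the true sequences sufficiently more often than in the spurious sequences; the ratio threshold and function name are assumed values for the illustration.

```python
def prune_patterns(true_counts, spurious_counts, min_ratio=3.0):
    """Keep a candidate pattern only if it appears in the true (ground-truth)
    sequences at least `min_ratio` times as often as in the spurious
    sequences of the synthetic volume.  Both arguments map pattern ->
    number of occurrences; the ratio threshold is illustrative only."""
    return [pattern for pattern, n_true in true_counts.items()
            if n_true >= min_ratio * max(spurious_counts.get(pattern, 0), 1)]
```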
Notwithstanding the above, it is noted that the processor may learn the most common message-sequence patterns (as described above with reference to
It is noted that if series 52 and/or series 54 includes known communication sessions, the processor may also identify true sequences in synthetic volume 56, such that the processor may use volume 56 to learn the patterns that are to be included in list 41, and to generate set 51 (
Training the Classifier
Subsequently to (i) learning the predetermined message-sequence patterns and generating a set of training vectors for pairs of related users from ground-truth true message sequences, as described above with reference to
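As a non-limiting illustration, the sketch below realizes such a classifier with a random-forest model from the scikit-learn library; the present disclosure does not specify the type of machine-learned model, so both the library choice and the hyperparameters are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier

def train_classifier(related_vectors, unrelated_vectors):
    """Train the classifier on training vectors derived from related pairs
    (label 1, ground-truth true sequences) and from unrelated pairs
    (label 0, spurious sequences in the synthetic volume)."""
    X = related_vectors + unrelated_vectors
    y = [1] * len(related_vectors) + [0] * len(unrelated_vectors)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model

def likelihood_of_relatedness(model, feature_vector):
    """Return the model's confidence (0 to 1) that the pair is related."""
    return model.predict_proba([feature_vector])[0][1]
```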
It is noted that each of the training vectors used to train the model may include any suitable features, alternatively or additionally to the numbers of identified sequences. (Hence, these features may also be used subsequently to the training, to identify new pairs of related users.) Advantageously, at least some of these features may be relevant across different applications; in other words, at least some of these features may help discriminate between related pairs and unrelated pairs, regardless of the application for which these features are used.
For example, each of the training vectors may include the number of unique time windows having at least one sequence following one of the predetermined patterns. (Each time window may have any suitable duration, such as, for example, one second.) Alternatively or additionally, each training vector may include one or more other features that are based on the timing of the identified sequences, and/or the distribution of the sequences over time. In general, a more uneven distribution indicates that the sequences belong to a communication session (and hence, that the users are related to one another), whereas a more even distribution indicates that the sequences are spurious (and hence, that the users are not related).
As another example, even features that do not necessarily relate to any identified sequences per se may be included in the training vectors. One such feature is the ratio of the number of messages sent from one of the users to the number of messages sent from the other user. In general, a ratio that is close to one indicates that the pair is related, whereas a ratio that is further from one indicates that the pair is unrelated. Another such feature is the number of Tx-Rx collisions in which the two temporally-colliding messages share the same size, or have respective sizes differing by a known, fixed offset. (One such collision is included in volume 38 (
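The sketch below computes two of these additional features for a single pair, under assumed data layouts: the ratio of the smaller to the larger per-user count of sent messages, and the number of same-size Tx-Rx collisions within an illustrative 2 ms window. The function name and tuple format are assumptions made for the illustration.

```python
def extra_pair_features(pair, pair_messages, collision_window_s=0.002):
    """Compute two additional features for one user pair: (i) the ratio of
    the smaller to the larger per-user count of sent (upstream) messages,
    and (ii) the number of Tx-Rx collisions in which the colliding upstream
    and downstream messages share the same size.  `pair_messages` is a
    time-ordered list of (time, size, direction, user) tuples for the pair."""
    user_a, user_b = pair
    sent = {user_a: 0, user_b: 0}
    for _t, _size, direction, user in pair_messages:
        if direction == "u":
            sent[user] += 1
    low, high = sorted(sent.values())
    message_ratio = low / high if high else 0.0

    same_size_txrx = 0
    for i, (t_i, size_i, dir_i, user_i) in enumerate(pair_messages):
        for t_j, size_j, dir_j, user_j in pair_messages[i + 1:]:
            if t_j - t_i > collision_window_s:
                break
            if dir_i != dir_j and user_i != user_j and size_i == size_j:
                same_size_txrx += 1
    return {"message_ratio": message_ratio,
            "same_size_txrx_collisions": same_size_txrx}
```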
In some embodiments, each feature vector is normalized by the number of messages contained in the sub-volume from which the vector was derived, or by the time span of this sub-volume. In such embodiments, the sub-volume time span used for generating vectors 44 (
For example, classifier 46 may be trained on sets of training vectors derived from 10-minute sub-volumes. Subsequently, the processor may generate vectors 44 from 20-minute sub-volumes, each of which contains approximately twice as many messages as a typical 10-minute sub-volume. Prior to passing vectors 44 to the classifier, the processor may normalize vectors 44 by dividing each of these vectors by two, since the count for any given pattern in a 20-minute volume is expected to be twice as high as the count in a 10-minute volume. (In some cases, normalization may not be needed, even if the time spans differ from one another. For example, a one-hour sub-volume of midnight traffic may contain approximately as many messages as a 10-minute sub-volume of midday traffic, such that a vector derived from the one-hour midnight sub-volume may be passed to a classifier that was trained on 10-minute midday sub-volumes, even without prior normalization of the vector.)
In some applications, such as Telegram, messages of varying types may share the same size. Even for such applications, however, the processor may identify pairs of related users, based on the identification of higher-order n-grams (e.g., 5-grams or 6-grams), message ratios, and/or any other relevant features extracted from the communication.
In some embodiments, the processor identifies, in the ground-truth volume(s), “meta-sequences” of messages (or n-n-grams), each of which follows a “meta-pattern.” For example, while a first user uses a particular messaging app to type a text message to a second user, the messaging app may send a “typing” message to the second user, which indicates that the first user is typing. This typing message, along with any associated acknowledgement messages, constitutes a first message sequence. Subsequently, when the first user sends the message, a second message sequence may be generated. The first sequence, together with the second sequence, constitute a meta-sequence (and in particular, a 2-n-gram) that indicates user-pair relatedness. Thus, the processor may learn the meta-pattern that is followed by this meta-sequence, and then identify pairs of related users by scanning the communication traffic for this meta-pattern.
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of embodiments of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
Number | Date | Country | Kind |
---|---|---|---|
256690 | Jan 2018 | IL | national |
This application is a continuation of U.S. application Ser. No. 16/228,929 filed on Dec. 21, 2018, and entitled “SYSTEM AND METHOD FOR IDENTIFYING PAIRS OF RELATED APPLICATION USERS”, which claims priority to Israel Application Serial No. 256690, filed on Jan. 1, 2018 and entitled “SYSTEM AND METHOD FOR IDENTIFYING PAIRS OF RELATED APPLICATION USERS.” The disclosure of both are incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
5689442 | Swanson et al. | Nov 1997 | A |
6404857 | Blair et al. | Jun 2002 | B1 |
6718023 | Zolotov | Apr 2004 | B1 |
6741992 | McFadden | May 2004 | B1 |
6757361 | Blair et al. | Jun 2004 | B2 |
7134141 | Crosbie | Nov 2006 | B2 |
7216162 | Amit et al. | May 2007 | B2 |
7225343 | Honig et al. | May 2007 | B1 |
7269157 | Klinker et al. | Sep 2007 | B2 |
7287278 | Liang | Oct 2007 | B2 |
7466816 | Blair | Dec 2008 | B2 |
RE40634 | Blair et al. | Feb 2009 | E |
7587041 | Blair | Sep 2009 | B2 |
7634528 | Horvitz | Dec 2009 | B2 |
7650317 | Basu et al. | Jan 2010 | B2 |
7769875 | Moisand et al. | Aug 2010 | B1 |
7941827 | John | May 2011 | B2 |
8005897 | Roka et al. | Aug 2011 | B1 |
RE43103 | Rozman et al. | Jan 2012 | E |
8176527 | Njemanze et al. | May 2012 | B1 |
8201245 | Dewey et al. | Jun 2012 | B2 |
RE43528 | Rozman et al. | Jul 2012 | E |
RE43529 | Rozman et al. | Jul 2012 | E |
8224761 | Rockwood | Jul 2012 | B1 |
RE43987 | Rozman et al. | Feb 2013 | E |
8402543 | Ranjan et al. | Mar 2013 | B1 |
8413244 | Nachenberg | Apr 2013 | B1 |
8463855 | Adams | Jun 2013 | B2 |
8499348 | Rubin | Jul 2013 | B1 |
8578493 | McFadden | Nov 2013 | B1 |
8682812 | Ranjan | Mar 2014 | B1 |
8762948 | Zaitsev | Jun 2014 | B1 |
8838951 | Hicks et al. | Sep 2014 | B1 |
8839417 | Jordan | Sep 2014 | B1 |
8850579 | Kalinichenko | Sep 2014 | B1 |
8869268 | Barger | Oct 2014 | B1 |
9225829 | Agúndez Dominguez et al. | Dec 2015 | B2 |
9804752 | Mall | Oct 2017 | B1 |
20020129140 | Peled et al. | Sep 2002 | A1 |
20030097439 | Strayer et al. | May 2003 | A1 |
20030103461 | Jorgensen | Jun 2003 | A1 |
20050018618 | Mualem et al. | Jan 2005 | A1 |
20050041590 | Olakangil et al. | Feb 2005 | A1 |
20050105712 | Williams | May 2005 | A1 |
20050108377 | Lee et al. | May 2005 | A1 |
20060026680 | Zakas | Feb 2006 | A1 |
20060146879 | Anthias | Jul 2006 | A1 |
20070027966 | Singhal | Feb 2007 | A1 |
20070180509 | Swartz et al. | Aug 2007 | A1 |
20070186284 | McConnell | Aug 2007 | A1 |
20070192863 | Kapoor et al. | Aug 2007 | A1 |
20070294768 | Moskovitch et al. | Dec 2007 | A1 |
20080014873 | Krayer et al. | Jan 2008 | A1 |
20080028463 | Dagon et al. | Jan 2008 | A1 |
20080069437 | Baker | Mar 2008 | A1 |
20080141376 | Clausen et al. | Jun 2008 | A1 |
20080184371 | Moskovitch et al. | Jul 2008 | A1 |
20080196104 | Tuvell et al. | Aug 2008 | A1 |
20080222127 | Bergin | Sep 2008 | A1 |
20080261192 | Huang et al. | Oct 2008 | A1 |
20080267403 | Boult | Oct 2008 | A1 |
20080285464 | Katzir | Nov 2008 | A1 |
20090106842 | Durie | Apr 2009 | A1 |
20090150999 | Dewey et al. | Jun 2009 | A1 |
20090158430 | Borders | Jun 2009 | A1 |
20090216760 | Bennett | Aug 2009 | A1 |
20090249484 | Howard et al. | Oct 2009 | A1 |
20090271370 | Jagadish et al. | Oct 2009 | A1 |
20090282476 | Nachenberg et al. | Nov 2009 | A1 |
20100002612 | Hsu et al. | Jan 2010 | A1 |
20100037314 | Perdisci et al. | Feb 2010 | A1 |
20100061235 | Pai et al. | Mar 2010 | A1 |
20100071065 | Khan et al. | Mar 2010 | A1 |
20100082751 | Meijer | Apr 2010 | A1 |
20100100949 | Sonwane | Apr 2010 | A1 |
20100306185 | Smith et al. | Dec 2010 | A1 |
20110099620 | Stavrou et al. | Apr 2011 | A1 |
20110154497 | Bailey | Jun 2011 | A1 |
20110167494 | Bowen et al. | Jul 2011 | A1 |
20110271341 | Satish et al. | Nov 2011 | A1 |
20110302653 | Frantz et al. | Dec 2011 | A1 |
20110320816 | Yao et al. | Dec 2011 | A1 |
20120017281 | Banerjee | Jan 2012 | A1 |
20120110677 | Abendroth et al. | May 2012 | A1 |
20120167221 | Kang et al. | Jun 2012 | A1 |
20120174225 | Shyamsunder et al. | Jul 2012 | A1 |
20120222117 | Wong et al. | Aug 2012 | A1 |
20120304244 | Xie et al. | Nov 2012 | A1 |
20120311708 | Agarwal et al. | Dec 2012 | A1 |
20120327956 | Vasudevan | Dec 2012 | A1 |
20120331556 | Alperovitch et al. | Dec 2012 | A1 |
20130014253 | Neou | Jan 2013 | A1 |
20130018964 | Osipkov | Jan 2013 | A1 |
20130096917 | Edgar et al. | Apr 2013 | A1 |
20130144915 | Ravi et al. | Jun 2013 | A1 |
20130151616 | Amsterdamski | Jun 2013 | A1 |
20130191917 | Warren et al. | Jul 2013 | A1 |
20130283289 | Adinarayan | Oct 2013 | A1 |
20130333038 | Chien | Dec 2013 | A1 |
20140059216 | Jerrim | Feb 2014 | A1 |
20140075557 | Balabine et al. | Mar 2014 | A1 |
20140207917 | Tock et al. | Jul 2014 | A1 |
20140298469 | Marion et al. | Oct 2014 | A1 |
20150055594 | Nirantar | Feb 2015 | A1 |
20150135265 | Bag | May 2015 | A1 |
20150135326 | Bailey, Jr. | May 2015 | A1 |
20150163187 | Nasir | Jun 2015 | A1 |
20150215429 | Weisblum et al. | Jul 2015 | A1 |
20150341297 | Barfield, Jr. | Nov 2015 | A1 |
20160197901 | Lester | Jul 2016 | A1 |
20160285978 | Zlatokrilov | Sep 2016 | A1 |
20170099240 | Evnine | Apr 2017 | A1 |
20170142039 | Martinazzi | May 2017 | A1 |
20180109542 | Katzir et al. | Apr 2018 | A1 |
20180212845 | Eriksson | Jul 2018 | A1 |
20190164082 | Wu | May 2019 | A1 |
20190207880 | Georgiou | Jul 2019 | A1 |
20190207888 | Kasheff | Jul 2019 | A1 |
20190207892 | Handte | Jul 2019 | A1 |
20190340945 | Malhotra | Nov 2019 | A1 |
20190370854 | Gao | Dec 2019 | A1 |
20200143241 | Gao | May 2020 | A1 |
Number | Date | Country |
---|---|---|
0989499 | Mar 2000 | EP |
1325655 | Jan 2007 | EP |
2104044 | Sep 2009 | EP |
2437477 | Apr 2012 | EP |
2012075347 | Jun 2012 | WO |
Number | Date | Country | |
---|---|---|---|
20210152512 A1 | May 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16228929 | Dec 2018 | US |
Child | 17159544 | US |