Method and system for preventing illicit use of a telephony platform

Information

  • Patent Grant
  • Patent Number
    11,063,972
  • Date Filed
    Thursday, September 26, 2019
  • Date Issued
    Tuesday, July 13, 2021
Abstract
A system and method for preventing illicit use of a telephony platform that includes enrolling a plurality of accounts on a telecommunications platform, wherein an account includes account configuration; at a fraud detection system of the telecommunications platform, receiving account usage data, wherein the usage data includes at least communication configuration data and billing configuration data of account configuration and further includes communication history of the plurality of accounts; calculating fraud scores of a set of fraud rules from the usage data, wherein at least a sub-set of the fraud rules include conditions of usage data patterns between at least two accounts; detecting when the fraud scores of an account satisfy a fraud threshold; and initiating an action response when a fraud score satisfies the fraud threshold.
Description
TECHNICAL FIELD

This invention relates generally to the telephony field, and more specifically to a new and useful method and system for preventing illicit use of a telephony platform in the telephony field.


BACKGROUND

Telephone fraud has long been a problem for telephony systems. With the introduction of VoIP networks and Session Initiation Protocol (SIP) trunks, the opportunities for telephony fraud are even greater. The recent development of new telephony platforms that enable a wider range of developers to create useful products also enables nefarious parties to create programs that commit telephony fraud. As one example, toll fraud has become a common problem on telephony platforms due in part to easier access to disposable telephone numbers. Other forms of telephony fraud can result in chargebacks for telephony platform providers when the telephony fraud involves stolen credit cards. Yet other forms of telephony fraud consume valuable resources that would otherwise be used for legitimate applications. Telephony fraud can be damaging to users who fall victim to it, to the profitability of telephony platforms, and to the performance of legitimate telephony applications. Furthermore, as developers more frequently build on top of other infrastructure, those developers may not have access to the raw information needed to prevent such illicit use of their applications. Thus, there is a need in the telephony field to create a new and useful method and system for preventing illicit use of a telephony platform. This invention provides such a new and useful method and system.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 is a schematic representation of a system of a preferred embodiment of the invention;



FIG. 2 is a flowchart representation of a preferred embodiment of the invention;



FIG. 3 is a schematic representation of a preferred embodiment of the invention;



FIG. 4 is a schematic representation of a preferred embodiment of the invention for integrating a fraud scoring system with a data stream;



FIG. 5 is a flowchart depicting a variation of a preferred embodiment of the invention for updating received usage data upon receiving a trigger signal;



FIG. 6 is a flowchart depicting a variation of a preferred embodiment of the invention for calculating a fraud score from usage data associated with call history data;



FIG. 7 is a flowchart depicting a variation of a preferred embodiment of the invention for calculating a fraud score from usage data associated with message history data;



FIG. 8 is a flowchart depicting a variation of a preferred embodiment of the invention for calculating a fraud score from usage data associated with platform account data;



FIG. 9 is a table depicting a fraud rule set of an exemplary implementation of a preferred embodiment of the invention; and



FIG. 10 is a flowchart depicting a variation of a preferred embodiment of the invention for generating a fraud rule.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.


1. System for Preventing Illicit Use of a Communication Platform

As shown in FIG. 1, a system for preventing illicit use of a communication platform of a preferred embodiment can include a communication platform 100 that includes a multitenant account system 110 and a fraud scoring system 120 communicatively coupled to operational components 130 of the communication platform. The system functions to apply various fraud-based heuristics across the accounts and/or sub-accounts of the platform 100, monitor and measure the scores based on the heuristics, and alter operation of the account within the communication platform. Such a system is preferably capable of mitigating fraudulent behavior carried out on top of a self sign-up communication platform. In one scenario, the system can be applied to preventing illicit use within a single account. The system can additionally be extended to detect illicit use through cooperative use of multiple accounts. Another aspect is that the multitenant account system may include functionality for an account to create sub-accounts. Sub-accounts can be used so that a developer can develop a service on top of the communication platform and provide that service to end customers. The system can additionally enable the sub-accounts of an account to be monitored for fraudulent behavior.


The communication platform 100 functions as the main infrastructure on which fraud is sought to be prevented or reduced. The communication platform is more preferably a telecommunication platform that facilitates synchronous voice communication sessions, synchronous video communication sessions, screen-sharing sessions, and asynchronous text or media communication. Traditional telecommunication protocols such as telephone-based networks (e.g., PSTN) and carrier-based messaging (e.g., SMS or MMS) are of particular concern in the prevention of fraud. The ecosystem of traditional telecommunication protocols includes user contracts and network/carrier contracts to facilitate interoperability and functioning of the communication network as a whole. The communication platform 100 in some variations may provide a way for account holders to avoid the various contract-related restrictions usually involved in using the network. For example, an account may be created and used through self sign-up, avoiding a contract lock-in or enrollment process. As described below, accounts can additionally acquire and drop communication endpoints on-demand. The fraud scoring system preferably functions to ensure that such beneficial features are not leveraged in implementing toll fraud, spamming techniques, scams, or other illicit uses of the communication platform 100.


The communication platform 100 can provide any suitable service. In one variation, the communication platform 100 provides routing functionality. In another variation, the communication platform 100 may provide communication bridging between at least two protocols such as a PSTN device talking to a SIP based device. In a preferred embodiment, the communication platform 100 provides communication application functionality and/or API based integration to communication sessions, events, and resources. The communication platform preferably enables accounts to configure applications to be responsive to incoming communications. The communication platform 100 can additionally facilitate initiating outbound communications to be controlled by an application or connected to an agent. The applications are preferably internet hosted telephony instruction documents hosted externally by the developers (e.g., the account holder). The applications are preferably configured as URI mappings within an account that relate an endpoint with an application URI. The URI based applications preferably enable web developers to easily apply web-based application skills to building dynamic telephony applications. The communication application platform is preferably substantially similar to the one described in U.S. Pat. No. 8,306,021, issued 6 Nov. 2012, which is hereby incorporated in its entirety by this reference. The communication platform 100 may alternatively be focused on providing some features directed at a targeted use case. For example, the communication platform 100 may be a customer service platform used by customers to build call centers. The communication platform 100 may be a conference call service, a personal voicemail system, a notification service, a two-factor authentication facilitating service, and/or any suitable type of communication platform.


The multitenant account system 110 functions to manage and facilitate the accounts within the communication platform 100. As described above, the communication platform 100 is preferably a multitenant infrastructure in that multiple users can independently operate on shared resources of the communication platform. Preferably, any given account is prevented from impacting the resources of others within a multitenant system. The account system 110 preferably includes a user interface and/or programming interface (API) to create and manage an account. The communication platform will often involve paid use of communication infrastructure. The account system may include a billing engine that stores payment information of the account. Within an individual account, at least one endpoint is preferably assigned as a communication address. The communication endpoint is preferably a phone number, but may alternatively be a SIP address, a user name, or any communication address. The account system 110 or an endpoint service may additionally facilitate an account in acquiring new endpoints, porting outside endpoints for use within the platform, and/or canceling endpoints. The account system 110 can additionally manage operational configuration such as storing resources, references to resources, parameter settings, or other aspects used in account usage of the communication platform 100. Preferably, the configuration can store the application URIs mapped to endpoints of the account.


Additionally, the multitenant account system 110 can include a sub-account system such that a hierarchy of accounts can be created. A first account (i.e., a parent account) can preferably create or contain multiple sub-accounts (i.e., child accounts). Sub-accounts may be created through an interface by the sub-account holder or alternatively through an API by the parent account holder. For example, an application developer may create a customer service application and then allow end users to sign up as customers within that account. The sub-accounts will preferably operate within the scope of the parent account. The sub-accounts can be customized by the parent account and/or customized by the sub-account holder. In one implementation, the sub-account system may function similarly to the system and method described in U.S. patent application Ser. No. 13/167,569, filed 23 Jun. 2011, which is hereby incorporated in its entirety by this reference.


The fraud scoring system 120 functions to monitor, measure, and detect instances of illicit use that occur within or through the communication platform. The fraud scoring system 120 may predominantly focus on preventing continued illicit use of the communication platform 100 that is initiated by an account and/or a parent account of the communication platform 100. The fraud scoring system 120 can additionally identify and prevent illicit actions initiated by parties outside of the platform but occurring through the communication platform 100.


The fraud scoring system 120 preferably includes a set of fraud rules. The fraud rules are preferably conditions that act as metrics upon which a score is based. The scores of the various fraud rules are preferably collectively analyzed to determine if fraud is occurring. A fraud rule in one variation is used in calculating a scalar measurement of one dimension or indicator of fraud. A fraud rule may alternatively be a set of discrete conditions with an assigned score based on the determined condition. Preferably, this will be a binary decision of assigning a fraud score or not. The fraud rules can target various aspects of communication and account usage and configuration. The fraud rules may simply evaluate indicators of fraud within an account or sub-account. Additionally, the fraud rules may include analysis across accounts/sub-accounts to detect patterns of illicit use implemented using multiple accounts. The fraud rules may be preconfigured or automatically generated based on algorithmically learned patterns in fraud or anomaly detection. The fraud scoring system 120 may additionally include an analyst-facilitated user interface wherein new rules can be created and issues can be manually ignored or acted upon, which functions to supplement automatic operation with human insight.
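
To make the rule-and-score structure described above concrete, the following sketch (in Python) models a fraud rule as a condition over usage data that contributes a defined score when satisfied. It is a minimal illustration only; the names (FraudRule, accounts_sharing_card_hash) and the example threshold are hypothetical and not taken from the patent.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    # Hypothetical sketch of a fraud rule: a condition over usage data that
    # contributes either a fixed score (binary rule) or, in other variations,
    # a scalar measurement of one dimension of fraud.
    @dataclass
    class FraudRule:
        name: str
        condition: Callable[[Dict[str, Any]], bool]   # pattern test on usage data
        score: float                                  # score assigned when the condition holds

        def evaluate(self, usage_data: Dict[str, Any]) -> float:
            """Return the rule's score if its condition is met, otherwise 0."""
            return self.score if self.condition(usage_data) else 0.0

    # Example: a cross-account rule that fires when the same billing hash
    # appears in more than three accounts (an assumed illustrative threshold).
    shared_billing_rule = FraudRule(
        name="shared_billing_info",
        condition=lambda usage: usage.get("accounts_sharing_card_hash", 0) > 3,
        score=25.0,
    )

    if __name__ == "__main__":
        usage = {"accounts_sharing_card_hash": 5}
        print(shared_billing_rule.evaluate(usage))  # -> 25.0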


The set of fraud rules can include a wide variety of rules that use a variety of data sources. The data sources may include communication history such as involved endpoints, duration of the communication, content of the communication, frequency of the communications, geographic information of the communication, and other logged information. Some of the conditions may be based on static configuration parameters (i.e., how the account is set up). If an entity is implementing illicit behavior across multiple accounts, similar resources are often reused, and thus similarities of account settings across multiple accounts may be a sign of suspicious abnormal behavior. Other conditions may be based on usage of the account.


Another data source may include billing information such as the number of credit cards on the account, the number of accounts that use a particular credit card, the number of names used on credit cards of an account, the number or frequency of changes to billing information, the country of the IP address matched against the credit card country, geographic diversity of billing addresses, and other billing-related information. The billing data source may be from a billing system of the communication platform. Outside data sources may additionally or alternatively be used. For example, a data source with stolen or flagged credit card information can be used.


Yet another data source can include the endpoints of an account. Patterns in endpoints may relate to the variety of endpoints owned or used by an account, the variety of endpoints of incoming communication, the variety of endpoints in outgoing communication, the number or percentage of communications that are international, and the types of endpoints (e.g., short codes, mobile numbers, landlines, business numbers, etc.).


In the variation where the communication platform is a communication application platform, the application configuration can be another data source used in fraud rule conditions. Preferably, an application parameter is set within an account to reference the application resource (e.g., a document with the communication instructions). The application parameter is preferably a URI string that points to an application server of the account holder. The number of times the URI is used in different accounts may be the basis of a fraud rule condition. The application parameter may alternatively be a binary data file or executable code, and the raw application resource can be compared to others. For example, a cryptographic hash or fingerprint may be generated and used in comparing applications across accounts or sub-accounts. While static application configuration may be used, applications may be able to redirect application state control to other URIs, and thus the fraud rule condition may be based on the URIs that are used throughout the processing of a communication session.
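
As an illustration of comparing application configuration across accounts, the sketch below counts how many accounts map endpoints to the same application URI (ignoring an assumed whitelist) and fingerprints a raw application resource with a cryptographic hash. The data layout and names are assumptions for the example.

    import hashlib
    from collections import Counter
    from typing import Dict, List

    # Assumed layout: account id -> list of application URIs mapped to its endpoints.
    accounts: Dict[str, List[str]] = {
        "AC1": ["https://example.com/voice.xml"],
        "AC2": ["https://example.com/voice.xml"],
        "AC3": ["https://other.example.org/app"],
    }

    # URIs allowed to appear in many accounts (hypothetical whitelist).
    WHITELISTED_URIS = {"https://demo.example.com/starter.xml"}

    def shared_application_uris(accounts: Dict[str, List[str]]) -> Dict[str, int]:
        """Count, per application URI, how many distinct accounts reference it."""
        counts = Counter()
        for uris in accounts.values():
            for uri in set(uris):
                if uri not in WHITELISTED_URIS:
                    counts[uri] += 1
        return {uri: n for uri, n in counts.items() if n > 1}

    def resource_fingerprint(resource_bytes: bytes) -> str:
        """Fingerprint a raw application resource (e.g., an instruction document)."""
        return hashlib.sha256(resource_bytes).hexdigest()

    print(shared_application_uris(accounts))  # -> {'https://example.com/voice.xml': 2}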


Similar to the fraud rules based on application configuration, media resource usage can additionally be used. If two or more accounts or sub-accounts are using the same media resources, then those accounts may be assumed to be operated by the same entity.


In addition to the data source, the time period in which the pattern is detected, the age of the account, the number of accounts, the percentage of usage that is not flagged as suspicious, and other qualifying conditions may provide additional context to the data source conditions.


The fraud scoring system 120 is communicatively coupled to the operational components 130 of the communication platform 100. The operational components 130 of the communication platform can include any servers, databases, processors or other resources that either define account configuration, account usage, or other aspects of the account within the platform. Preferably, the operational components include a call router that processes communication. In particular, the call router controls and facilitates the execution of a telephony application during a communication session. The various operational components 130 may additionally be used in enforcing some response to detection of illicit behavior by an account or sub-account.


2. Method for Preventing Illicit Use of a Communication Platform

As shown in FIG. 2, a method for preventing illicit use of a communication platform in accordance with a preferred embodiment may include enrolling a plurality of accounts in a telecommunications platform block S110, at a fraud scoring system, receiving usage data of a telephony platform component block S120, calculating a fraud score from the usage data block S130, detecting when fraud scores of an account satisfy a fraud threshold block S140, and taking action when a fraud score satisfies a fraud threshold block S150. The method functions to enable heuristic-based identification and prevention of telephony fraud. The method is preferably used to prevent illicit use cases in voice or video calls, short message service (SMS) messages, multimedia messaging service (MMS) messages, fax, or any suitable form of telephony communication. The method can additionally be applied to IP-based communication or proprietary communication channels such as SIP, video conferencing, screen sharing, or other suitable communication mediums. The method is preferably performed by a fraud scoring system, which is preferably a sub-component of a telephony application platform such as the telephony platform described in U.S. patent application Ser. No. 12/417,630, filed 2 Apr. 2009 and titled “System and Method for Processing Telephony Sessions”, which is incorporated in its entirety by this reference. Integration into a telephony platform preferably enables the gathering of usage data from a plurality of various telephony platform components. The telephony platform components are preferably those components that facilitate calls or messaging such as call databases or SMS databases, but may alternatively include components facilitating telephony application setup or operation such as account or credit card databases. The telephony platform is preferably a multitenant platform with multiple user accounts and optionally sub-accounts that independently use the platform. The telephony platform can be a self-sign-up service, and the programmatic interface into the telephony platform can make it appear more appealing for illicit use. Entities can be freed of the hassle and complexity of arranging long-term contracts or other agreements that normally act as a barrier to telephony-based fraud. The method is preferably applicable to preventing toll fraud in a telephony platform but may additionally or alternatively be used to prevent terms of service violations, denial of service attacks on a telephony platform or an outside system, suspicious behavior, credit card fraud, phishing attacks, and/or any suitable type of illicit use of a telephony platform.


The method is preferably capable of addressing internal telephony fraud (i.e., fraud performed by account holders on the telephony platform) and/or external telephony fraud (i.e., fraud attempts originating on outside systems but occurring through the telephony platform). The method is preferably capable of detecting coordinated illicit behavior performed across two or more accounts of the platform. Additionally or alternatively, the illicit behavior of a single account can be addressed. The method preferably uses a heuristic-based approach using rules defined in a rule set of the fraud scoring system. Rules used in the method can preferably be crafted and maintained by fraud analysts, which functions to enable analysts to use their unique insight into fraud scenarios to automatically detect future scenarios using the fraud scoring system. The method additionally can automate the detection and actions taken by fraud analysts for a system. The method may additionally include Bayesian learning, neural networks, reinforcement learning, cluster analysis, or any suitable machine learning or algorithmic approaches to facilitate identifying illicit use cases. Preferably, a combination of automatic fraud rule generation and fraud analyst input is used during the method of the fraud scoring system. The method is preferably capable of identifying a wide variety of illicit use cases as defined in the rule set. When illicit use of the telephony platform matches a rule, the fraud scoring system preferably acts to prevent that instance of illicit use from continuing.


Block S110, which includes enrolling a plurality of accounts in a telecommunications platform, functions to set up, configure, and instantiate multiple entities within the platform. An account within the telephony platform preferably has a unique identifier or uniquely identifying characteristics. Fraud is preferably detected within individual accounts or through two or more accounts that share usage data patterns (which often indicate a single entity is coordinating both accounts to distribute the signals of illicit behavior across multiple accounts). Enrolling an account may be initiated by a user through a user interface, but an account and/or a sub-account may alternatively be configured programmatically through an API such as a REST API of the platform. The enrollment may additionally include, within one account, enrolling at least one sub-account that is managed by the first account. The sub-account (i.e., the child account) will often be an end customer of a service of the primary/parent account holder. For example, a customer care application may create a parent account, and within that account each end-customer is given a sub-account so that usage, data, and configuration can be independently managed. The parent account holder preferably manages these accounts. Sub-accounts are preferably created and managed through an API. The method can be particularly useful for systems that use sub-accounts in that individual sub-accounts may be performing illicit behavior, and the account holder, operating on top of the platform, may not have sufficient data to detect the illicit behavior. The fraud detection service can be a beneficial service in encouraging app developers to build on top of a platform.


Basic configuration of an account preferably occurs during enrollment but can be completed at a later time. Enrolling an account preferably includes assigning at least one communication endpoint address to the account. Preferably, at least one phone number is associated with an account. Multiple phone numbers can additionally be configured. The communication endpoint may alternatively be a SIP address, email address, username, or any suitable address identifier used in routing communication to a destination. An assigned endpoint may be purchased/selected from the platform, ported from an existing system, or added to the account in any suitable manner.


The enrolling account additionally configures application resources. Preferably, an endpoint will be mapped to an application URI, which will be an external, internet-accessible resource that provides communication instructions for a communication session. Multiple application URIs may additionally be configured for different communication states or events. For example, there may be a primary application URI for incoming calls, an outgoing application URI that takes control of outgoing communication sessions, a fallback application URI used when errors occur, different application URIs for different mediums (e.g., voice, video, SMS, MMS, fax, etc.), and different application URIs for different regions or originating endpoints. Each endpoint assigned to an account can additionally be uniquely configured. The configured application resources may alternatively or additionally include media files used in an application such as an application executable binary, instruction file, playable audio or video, or other suitable media resources.
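
The following minimal sketch shows the kind of per-endpoint application configuration described above; the field names (voice_url, sms_url, fallback_url) are hypothetical placeholders rather than the platform's actual parameter names.

    # Hypothetical account configuration mapping an endpoint to application URIs
    # for different communication states and mediums.
    account_config = {
        "account_sid": "ACxxxxxxxx",
        "endpoints": {
            "+15551230001": {
                "voice_url": "https://apps.example.com/incoming-call",   # incoming calls
                "sms_url": "https://apps.example.com/incoming-sms",      # incoming messages
                "fallback_url": "https://apps.example.com/error",        # used when errors occur
            },
        },
        "media_resources": ["https://apps.example.com/greeting.mp3"],
    }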


The enrolling account may additionally configure billing information. The billing information will preferably include at least one credit card, but may alternatively be any suitable payment mechanism such as a bank account or a link to an outside account with credit/points. The payment mechanism information will preferably include an account identifier (e.g., a credit card number), a billing name, and a billing address. Multiple payment mechanisms may be set up.


Block S120, which recites at a fraud scoring system, receiving usage data of a telephony platform component, functions to collect data used to calculate a fraud score. The usage data is preferably data collected and maintained independently from the fraud scoring system. The usage data thus typically reflects operational metrics of a telephony platform. For example, a call history database may store records of when calls were made and what the destination endpoints were for those calls. In this example, the primary purpose of the call history database may be for analytics, but the data may additionally be used for calculating a fraud score. Alternatively, usage data may be collected with the explicit intent to measure data pertinent to calculating a fraud score. The fraud scoring system is preferably coupled through a network to a telephony platform component. More preferably, the fraud scoring system is coupled through a network to a plurality of telephony platform components as shown in FIG. 3. A telephony platform component is preferably a machine that provides the usage data. The telephony platform components coupled to the fraud scoring system may include call history databases, messaging history databases, account databases, credit card hash databases, client device information databases, IP address databases, phone number databases, credit card or spending databases, API logs, and/or any suitable machine containing data useful for calculating a fraud score. The fraud scoring system is preferably configured to actively initiate communication with the telephony platform components, and the platform components preferably respond with any requested usage data. Alternatively, the coupled machines may independently send usage data to the fraud scoring system through a subscription or push-based service.
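
A sketch of the pull-based variant described above, in which the fraud scoring system actively requests usage data from several platform components over HTTP. The component URLs and JSON response format are assumptions; a push subscription or stream integration would replace the polling call.

    import json
    import urllib.request
    from typing import Any, Dict

    # Assumed internal endpoints for telephony platform components.
    COMPONENT_URLS = {
        "call_history": "http://calls.internal/usage",
        "message_history": "http://messages.internal/usage",
        "accounts": "http://accounts.internal/usage",
    }

    def fetch_usage_data() -> Dict[str, Any]:
        """Actively request usage data from each platform component."""
        usage: Dict[str, Any] = {}
        for name, url in COMPONENT_URLS.items():
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    usage[name] = json.load(resp)
            except OSError:
                usage[name] = None  # component unavailable; skip until next refresh
        return usage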


The fraud scoring system preferably refreshes usage data periodically. For example, the fraud scoring system may receive new usage data from at least a subset of machines every half hour. In another variation, telephony platform components may send usage data continuously, when new data is collected, or for any suitable reason. In yet another variation, a fraud scoring system may be integrated into a data stream. In this variation, data would preferably not need to be replicated or sent through a separate fraud scoring system. A fraud scoring system can preferably subscribe to designated data streams as shown in FIG. 4 but may alternatively be integrated into a data stream in any suitable manner. The fraud scoring system may additionally poll or actively request updated usage data from components. Additionally or alternatively, a variation of a method of a preferred embodiment may include updating received usage data upon receiving a trigger signal Block S122 as shown in FIG. 5, which functions to enable fraud checking programmatically. In response to a trigger signal, the fraud scoring system preferably actively initiates the transmission of usage data from a telephony platform component to the fraud scoring system. The trigger signal is preferably an instruction associated with an application programming interface (API) call. The API call preferably causes usage data to be updated, a fraud score to be calculated, and action to be taken if appropriate. The API call may alternatively trigger a subset of the above steps. A telephony platform is preferably configured to send an API call to update the fraud scoring system when events occur that have a high correlation to fraud. For example, an API call to update the fraud scoring system may be sent before or during updating an account, performing a credit card transaction, detecting high account concurrency, or any suitable event. A fraud score API may additionally be used to perform other interactions with the fraud scoring system. For example, a fraud score API may trigger any suitable steps of the fraud scoring method; may create, edit, delete, or otherwise augment fraud rules, usage data, usage scores, fraud actions, or other parameters of the fraud scoring system; and/or may interact with the fraud scoring system in any suitable way.
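
The trigger-signal variation (block S122) can be sketched as a small HTTP handler that, on an API call from the platform, refreshes usage data and recalculates scores for the named account. The endpoint path and handler wiring are illustrative assumptions, not the platform's actual API.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def refresh_scores_for(account_sid: str) -> None:
        """Placeholder for: update usage data, recalculate fraud scores, take action."""
        print(f"fraud check triggered for {account_sid}")

    class FraudTriggerHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Assumed trigger endpoint, e.g. POST /fraud-check/AC123, sent by the
            # platform when an event with high correlation to fraud occurs.
            if self.path.startswith("/fraud-check/"):
                account_sid = self.path.rsplit("/", 1)[-1]
                refresh_scores_for(account_sid)
                self.send_response(202)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), FraudTriggerHandler).serve_forever()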


Block S130, which recites calculating a fraud score from the usage data, functions to process usage data to generate a metric that reflects the likelihood that illicit use of the telephony platform is occurring. Fraud scores are preferably calculated for a set of fraud rules. The set of fraud rules is used to calculate a set of fraud scores (e.g., measures or indicators of fraud). Additionally, fraud thresholds can define when particular types of actions are taken. A fraud rule preferably includes a usage condition, a usage data time window, and an account age condition. The fraud rules may additionally be conditions within a single account or pattern conditions across multiple accounts. The usage conditions are particular patterns in usage data (e.g., account configuration or communication history), such as a threshold on the number or percentage of events or resources that would trigger activating the fraud rule (e.g., assigning the defined fraud score for that rule). The usage condition can additionally specify conditions found across multiple accounts. For example, a usage condition may be for identical/corresponding billing information configured in more than three accounts. The usage data time window defines what data is analyzed. Some exemplary time windows could include the past 24 hours, the past week, the past month, the past year, or all data (e.g., no time window). The account age condition may define for how long the rule is monitored for an account. Some illicit use scenarios may only be seen with new accounts. For example, the account age condition may configure a fraud rule to apply to an account for the first week after the account is created. If the conditions of the fraud rule are satisfied, a defined score is preferably assigned. These fraud scores are preferably stored per account. If the fraud rule is defined for condition patterns across multiple accounts, the fraud score is preferably assigned to each account. The fraud score is preferably a numeric value but may alternatively be a label or any suitable construct to communicate fraud likelihood. In this document, high fraud scores are treated as indicating a greater likelihood of illicit use, but any suitable relationship may be defined. A fraud score is preferably associated with at least one key/identifier. The key may be an account, a sub-account, an endpoint (e.g., a phone number), a credit card hash, or any suitable key. A plurality of fraud scores (e.g., one per fraud rule) is preferably calculated to monitor various entities and approaches to performing fraud in a telephony platform. For example, a series of fraud scores may be calculated to monitor accounts for one form of telephone fraud, while another series of fraud scores may be calculated to monitor credit card abuse across accounts. The fraud score is preferably indicative of activity during a specified time window, but may alternatively be an aggregate value (preferably factoring in older fraud scores to reflect multiple time windows). Calculation of fraud scores may additionally involve creating associations between subsets of the received usage data. Associations can be made based on user accounts, credit cards used to pay for accounts, endpoints or endpoint prefixes, source or destination carriers, and/or any suitable parameter that can be used to associate various data points in the usage data.
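
To show how the usage condition, usage data time window, and account age condition combine, the sketch below filters usage events to the rule's window, skips accounts older than the rule allows, and assigns the rule's score when the condition holds. The event model, names, and thresholds are illustrative assumptions.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Callable, List, Optional

    @dataclass
    class UsageEvent:
        account_sid: str
        timestamp: datetime
        kind: str            # e.g. "call", "sms", "billing_update"

    @dataclass
    class WindowedFraudRule:
        name: str
        condition: Callable[[List[UsageEvent]], bool]  # usage condition over windowed events
        window: timedelta                              # usage data time window
        max_account_age: Optional[timedelta]           # account age condition (None = always applies)
        score: float

        def evaluate(self, events: List[UsageEvent], account_created: datetime,
                     now: Optional[datetime] = None) -> float:
            now = now or datetime.utcnow()
            if self.max_account_age and now - account_created > self.max_account_age:
                return 0.0                             # rule only monitored for young accounts
            windowed = [e for e in events if now - e.timestamp <= self.window]
            return self.score if self.condition(windowed) else 0.0

    # Example rule: more than 100 calls in 24 hours from an account under a week old.
    burst_rule = WindowedFraudRule(
        name="new_account_call_burst",
        condition=lambda evs: sum(1 for e in evs if e.kind == "call") > 100,
        window=timedelta(hours=24),
        max_account_age=timedelta(days=7),
        score=30.0,
    )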


As described, fraud scores are preferably calculated to generate metrics that reflect the likelihood of fraud. These metrics may be associated with various parameters or combinations of parameters of a telephony platform. Block S130 preferably includes calculating a fraud score from usage data associated with call history data Block S132, calculating a fraud score from usage data associated with messaging history data Block S134, and/or calculating a fraud score from usage data associated with platform account configuration data Block S136, but any suitable usage data may alternatively be used in calculating a fraud score. Correspondingly, block S130 preferably includes at least one fraud rule of the set of fraud rules that identifies communication-application configuration shared between at least two accounts, identifies shared patterns of media resource usage in two accounts, detects shared billing information across two or more accounts, detects communication history patterns across at least two accounts, or applies other suitable fraud rule conditions that are defined for patterns in usage data between multiple accounts.


Block S132, which recites calculating a fraud score from usage data associated with call history data, functions to create a fraud score based on patterns in calls occurring on the telephony platform. Several different parameters of a call may have been measured and included in the usage data, for example, call duration, account(s) associated with a call, call destination endpoints, caller endpoints, carrier origin of a call, destination carrier, frequency of calls, number of concurrent calls for an account, or any suitable parameter of call data. Such call-related usage data can preferably be used to calculate fraud scores based on various heuristics. In one variation, high call concurrency (i.e., multiple calls occurring on the telephony platform simultaneously) for a new account is indicative of illicit use of the telephony platform. A fraud score that reflects this is preferably calculated from such data. In this variation, the fraud score preferably has a direct relationship to concurrency and an inverse relationship to the age of the account. In another variation, numerous call endpoints matching designated prefix patterns may additionally be an indicator of illicit use. A fraud score that reflects this is preferably calculated. Preferably, a fraud rule is defined for each communication history condition or set of conditions. Additionally, audio or video of a call may be used in calculating a fraud score. For example, white noise analysis of a call may be included in or extracted from usage data. White noise analysis may enable the fraud scoring system to detect whether anyone was present on either side of a call. In this example, a long silent phone call may be associated with illicit use of the telephony platform, and the white noise detection could be used to calculate a fraud score that reflects this heuristic.
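
A sketch of the concurrency heuristic, where the score has a direct relationship to concurrent calls and an inverse relationship to account age. The functional form and constants are assumptions; the text specifies only the direction of the relationships.

    def concurrency_fraud_score(concurrent_calls: int, account_age_days: float,
                                weight: float = 10.0) -> float:
        """Score rises with simultaneous calls and falls as the account ages.

        The linear-over-age form and the weight are illustrative assumptions,
        not a formula given in the text.
        """
        return weight * concurrent_calls / (1.0 + account_age_days)

    # A day-old account with 40 concurrent calls scores far higher than a
    # year-old account with the same concurrency.
    print(concurrency_fraud_score(40, account_age_days=1))    # -> 200.0
    print(concurrency_fraud_score(40, account_age_days=365))  # -> ~1.1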


Block S134, which recites calculating a fraud score from usage data associated with messaging history data, functions to create a fraud score based on patterns in messages occurring on the telephony platform. Messaging history data may include any data related to SMS, MMS, or other suitable messages communicated through the telephony platform. Calculation of a fraud score may include the use of usage data analogous to the usage data described above for call data, such as message endpoints, account(s) associated with a message, message frequency, message frequency as a factor of account age, carrier origin of a message, carrier destination of a message, or any suitable parameter of a message or messages sent through the telephony platform. Message content and message conversations conveyed in usage data of the messages may additionally be used to calculate a fraud score. In one variation, messages replying to account messages that instruct the sender to stop sending messages (e.g., a message with the message ‘STOP’) preferably contribute towards a higher fraud score. Accounts that receive a higher percentage of stop-messages are more likely to be practicing behavior that is undesirable to users. In an alternative variation, if a large number of spam-like text messages are delivered to endpoints matching a prefix and no stop-messages are received, this may also be an indicator of illicit behavior (e.g., a nefarious user may be trying to terminate as many text messages as possible to a particular carrier).
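
The stop-message heuristic can be sketched as a ratio of inbound stop-replies to outbound messages; the keyword list, threshold, and score value are assumptions for illustration.

    from typing import List

    STOP_KEYWORDS = {"STOP", "UNSUBSCRIBE", "CANCEL"}  # assumed reply keywords

    def stop_message_score(outbound_count: int, inbound_replies: List[str],
                           score_when_exceeded: float = 20.0,
                           ratio_threshold: float = 0.05) -> float:
        """Assign a score when the share of stop-replies to outbound messages
        exceeds a threshold (5% here, an illustrative value)."""
        if outbound_count == 0:
            return 0.0
        stops = sum(1 for body in inbound_replies if body.strip().upper() in STOP_KEYWORDS)
        return score_when_exceeded if stops / outbound_count > ratio_threshold else 0.0

    print(stop_message_score(1000, ["STOP"] * 120))  # 12% stop rate -> 20.0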


Block S136, which recites calculating a fraud score from usage data associated with platform account configuration data, functions to use metrics collected from the telephony platform that do not directly relate to voice, video, or messaging. Usage data associated with platform account configuration data may include information pertaining to user accounts, credit cards, endpoints, client devices, telephony application URIs, or any suitable platform account data. The configuration data preferably includes communication-application configuration, which includes variables and resources used in customizing and defining the application(s) of the account. One fraud rule may be defined for a condition of identifying communication-application configuration shared between at least two accounts. If multiple accounts have the same application configuration, then this can be used as a signal that the two accounts are used for the same task. Outside entities may set up multiple accounts to perform the same task to avoid detection, but identical application configuration can be a signal that the accounts are managed by the same entity or two cooperating entities. Preferably, applications are defined by application URIs that are associated with/mapped to communication endpoints. String comparisons of the URIs can be performed to identify matching applications used in multiple accounts. In some situations, some application URIs may be whitelisted so that they can be used in multiple accounts. In a similar variation, the actual application media resources consumed during execution of an application can be used to indicate similar functionality. A communication platform may transfer application state to various application URIs during a communication session. These application URIs can be similarly tracked and compared. Also, media such as instruction documents (telephony instructions in an XML document), audio files, video files, and other resources can be fingerprinted or otherwise processed to create an identifier that can be used to detect similar or identical media resources. Fingerprinting data preferably includes creating an identifier of the content of the media file. The fingerprint identifier can preferably be easily compared to other fingerprint identifiers in other accounts to determine if identical or substantially similar media is used. A fingerprint identifier preferably functions so that media can be matched despite variations in the encoding of the content. For example, two images of the same picture but of slightly different dimensions and size ratios can be shown to be matching. Alternatively, the raw file may be compared. Media resource usage during communication sessions can also be used as a signal of illicit behavior. For example, an image sent over MMS by one account may be fingerprinted. A second account additionally sends an image over MMS, and the image is similarly fingerprinted. The fingerprint identifiers are then compared, and if they indicate the image content matches, this may trigger a fraud rule around two accounts sending identical images over MMS. Media fingerprinting can similarly be applied to audio, video, and other suitable media mediums.
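
A sketch of media fingerprint comparison across accounts. A bare cryptographic hash, as used here, only detects byte-identical media; matching the same picture across re-encodings, as the text describes, would require a perceptual fingerprint (e.g., an average-hash scheme), which is omitted for brevity.

    import hashlib
    from typing import Dict, Set

    def fingerprint(media_bytes: bytes) -> str:
        """Exact-duplicate fingerprint of a media payload (illustrative only)."""
        return hashlib.sha256(media_bytes).hexdigest()

    def accounts_sharing_media(media_by_account: Dict[str, Set[bytes]]) -> Dict[str, Set[str]]:
        """Map each fingerprint to the set of accounts that sent matching media."""
        seen: Dict[str, Set[str]] = {}
        for account, blobs in media_by_account.items():
            for blob in blobs:
                seen.setdefault(fingerprint(blob), set()).add(account)
        return {fp: accs for fp, accs in seen.items() if len(accs) > 1}

    sample = {"AC1": {b"<fake image bytes>"}, "AC2": {b"<fake image bytes>"}}
    print(accounts_sharing_media(sample))  # both accounts sent identical content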


In one variation, calculating a fraud score from usage data associated with credit card data preferably involves comparing hashes of credit card numbers. By comparing billing information within and across accounts, the fraud scoring system functions to check the diversity of payment mechanisms. Payment mechanisms are preferably not shared across numerous accounts; sharing can be a signal that one entity is setting up multiple accounts for some reason. Within an account, the payment mechanisms preferably have little diversity; several credit cards with multiple names and addresses may be a sign that stolen credit cards are being used. As an example, a plurality of new accounts created and set up using the same credit card may be an indicator of illicit use. Credit card hash records for new accounts are preferably compared to identify credit cards used multiple times. In this variation, a credit card used multiple times for different accounts would preferably contribute to a higher fraud score. Similarly, many telephony applications allow accounts to set up an application to handle calls or messages by specifying a URI. In one variation, if one URI is configured for a plurality of new accounts, then this may indicate illicit use as it indicates one entity is setting up multiple accounts for the same purpose.
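
A sketch of the credit-card-hash comparison: hashing card numbers, finding hashes reused across accounts, and counting distinct billing names within one account. Hashing a raw card number directly is shown only for illustration; a real system would tokenize or salt payment data.

    import hashlib
    from collections import defaultdict
    from typing import Dict, List, Set, Tuple

    def card_hash(card_number: str) -> str:
        # Illustration only; production systems tokenize or salt card data rather
        # than storing a bare hash of the card number.
        return hashlib.sha256(card_number.encode()).hexdigest()

    def accounts_per_card(billing_records: List[Tuple[str, str, str]]) -> Dict[str, Set[str]]:
        """billing_records: (account_sid, card_number, billing_name) tuples.
        Returns card hashes that appear on more than one account."""
        by_card: Dict[str, Set[str]] = defaultdict(set)
        for account_sid, number, _name in billing_records:
            by_card[card_hash(number)].add(account_sid)
        return {h: accs for h, accs in by_card.items() if len(accs) > 1}

    def names_per_account(billing_records: List[Tuple[str, str, str]]) -> Dict[str, Set[str]]:
        """Many distinct billing names on one account can indicate stolen cards."""
        by_account: Dict[str, Set[str]] = defaultdict(set)
        for account_sid, _number, name in billing_records:
            by_account[account_sid].add(name)
        return by_account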


Block S140, which recites detecting when fraud scores of an account satisfy a fraud threshold, functions to monitor and assess when a scenario of illicit behavior is occurring based on the fraud scores. Block S140 preferably includes storing/recording the fraud score. As described above, the fraud scores are preferably indicative of a fraud score for a particular time window, but may alternatively be an aggregate metric. The fraud scores are preferably stored such that an associated account, endpoint, application, and/or any suitable key may be referenced when retrieving data. In one variation, storing of the fraud scores is optional, and assessment can be performed directly after calculating fraud scores, without persistently storing fraud scores. Preferably, the same set of fraud rules is used in calculating fraud scores across all the accounts/sub-accounts. Fraud thresholds can define when particular types of actions are taken. In one implementation, the fraud scores associated with an account or sub-account are preferably summed, and if the total fraud score is above a defined fraud score threshold, a response is made in block S150. Additionally, there may be different levels of fraud thresholds. For example, a fraud threshold may be defined for fraud scores from 20-50, a second fraud threshold for 51-75, and a third fraud threshold for scores over 76. These three fraud thresholds can define three levels of actions taken in block S150. The fraud reaction can alternatively be based on the fraud scores of particular fraud rules. For example, specific fraud rules (when satisfied or for certain scores) may define a reaction of flagging an account or throttling an account, while some fraud rules may define more severe illicit behavior and can initiate automatic termination of the account.
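
The sketch below sums an account's per-rule scores and maps the total onto the three example threshold bands given above (20-50, 51-75, over 76), returning an escalation level. The band boundaries come from the text; the level names are assumptions.

    from typing import Dict, List

    def total_fraud_score(rule_scores: List[float]) -> float:
        """Sum the per-rule scores recorded for an account."""
        return sum(rule_scores)

    def threshold_level(total: float) -> str:
        """Map a total score onto the example threshold bands from the text."""
        if total >= 76:
            return "level_3"   # most severe response, e.g. termination/suspension
        if total >= 51:
            return "level_2"   # e.g. throttling and analyst review
        if total >= 20:
            return "level_1"   # e.g. flag the account
        return "none"

    account_scores: Dict[str, List[float]] = {"AC1": [25.0, 30.0], "AC2": [5.0]}
    for sid, scores in account_scores.items():
        print(sid, threshold_level(total_fraud_score(scores)))
    # AC1 -> level_2, AC2 -> none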


Block S150, which recites taking action when a fraud score satisfies a fraud threshold, functions to react to fraud scores that indicate illicit behavior. The reaction to a fraud score may include flagging the account, throttling communication of an account, requesting additional billing information, notifying the account holder, notifying an analyst of the communication platform, performing additional fraud detection analysis on the account, blocking particular actions on the account, or performing any suitable action. In a sub-account variation, the parent account of a sub-account is preferably notified of the sub-account's illicit behavior. The notification can be an email notification, a message within the communication platform web platform, or a notification made through the API of the communication platform. Account holders may have multiple sub-accounts using their service provided on top of the communication platform. By performing the fraud regulation by sub-account, the communication platform can avoid taking action against the account itself, since many sub-accounts may be using the communication platform in a proper manner. This functions to simplify and abstract the fraud prevention aspect away from account holders such that the communication platform can handle illicit use detection. A fraud scoring system preferably includes a set of fraud rules (i.e., a rule set) stored using any suitable schema. The rule set preferably enables various heuristics to be configured and/or updated to keep current with the latest fraud attempts. Fraud score patterns may include thresholds for a particular fraud score or alternatively a group of fraud scores. Some exemplary fraud score patterns may include taking action when there are more than a specified number of international calls lasting longer than a specified amount of time, when an average length of international calls is greater than a specified amount of time, when greater than a specified number of outbound SMS messages to a classification of prefixes (e.g., UK prefixes) are made, when more than a specified number of unique credit cards are added to an account, when the credit cards of an account use more than a specified number of zip codes, when one credit card is used on more than a specified number of accounts, when one IP address is used across more than a specified number of accounts, when the account balance is more than a specified amount for an account and the age of the account is less than a specified number of days, when the answer rate of outbound calls is less than a specified percentage, and/or when any suitable pattern is satisfied. As shown in FIG. 9, rule sets may be dependent on measured metrics in combination with a threshold, a time period for the metrics, and account age. Alternatively, any suitable parameters may be specified to determine a rule set. Fraud score patterns may alternatively be trending patterns from a time series of related fraud scores. Fraud reactions preferably include suspending an account, blacklisting credit card numbers, blacklisting application URIs or IPs, rate-limiting services provided to an offending account, removing or adjusting services provided to an offending account (e.g., removing international services), flagging the account for a human fraud analyst to investigate, and/or any suitable course of action.
The fraud reaction is preferably signaled to the telephony platform, and the resulting reaction preferably alters behavior of the telephony platform to prevent a suspected case of illicit use of the platform. There may additionally be different levels of response based on the severity of the fraud score, and fraud reactions may be applied in stages if the fraud score does not subside.
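
A sketch of dispatching staged fraud reactions by threshold level, with the parent account notified when the flagged entity is a sub-account. The level names match the previous sketch; the notification and enforcement calls are placeholders.

    from typing import Optional

    def notify(recipient: str, message: str) -> None:
        print(f"notify {recipient}: {message}")  # placeholder for email/API notification

    def apply_fraud_reaction(account_sid: str, level: str,
                             parent_account_sid: Optional[str] = None) -> None:
        """Apply a staged response; escalate if the score does not subside."""
        if level == "level_1":
            notify("fraud-analyst", f"{account_sid} flagged for review")
        elif level == "level_2":
            notify("fraud-analyst", f"{account_sid} throttled pending review")
            # placeholder: rate-limit outbound communications for this account
        elif level == "level_3":
            notify("fraud-analyst", f"{account_sid} suspended")
            # placeholder: suspend account and block further communications
        if parent_account_sid and level != "none":
            # For sub-accounts, inform the parent account holder rather than
            # penalizing the whole parent account.
            notify(parent_account_sid, f"sub-account {account_sid} triggered {level}")

    apply_fraud_reaction("AC_SUB_9", "level_2", parent_account_sid="AC_PARENT_1")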


Additionally or alternatively, a method of a preferred embodiment may include generating a fraud rule block S160 as shown in FIG. 10, which functions to produce a fraud rule based on collected data. In one variation, the fraud rule set is preferably predominantly generated by fraud analysts. This preferably enables fraud analysts to apply unique insight into fraud attempts to enable automatic detection. In another variation, at least a subset of the fraud rule set is generated through analysis of the data. As mentioned above, Bayesian learning, neural networks, reinforcement learning, cluster analysis, or any suitable machine learning techniques may be used to extract rules to identify fraud scenarios. The generating of a fraud rule may be active or reactive. Active generation of a fraud rule will preferably automatically generate a rule based on observed data. Reactive fraud rule generation preferably generates a fraud rule after a fraud scenario has happened. Data from the time of the fraud can preferably be replayed such that a fraud rule may be generated that would have set the fraud score to reflect the occurrence of the fraud scenario.


An alternative embodiment preferably implements the above methods in a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with a fraud scoring system. The fraud scoring system preferably includes a fraud rule set and a fraud scoring API. The fraud scoring system is preferably integrated into a telephony platform capable of facilitating voice, video, or message communication. The computer-readable instructions may be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a processor, but the instructions may alternatively or additionally be executed by any suitable dedicated hardware device.


As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.

Claims
  • 1. A system of automatically reacting to a fraud attempt on a telephony platform, the system comprising: one or more computer processors; one or more computer memories; a set of instructions incorporated into the one or more computer memories, the set of instructions configuring the one or more computer processors to perform operations comprising: automatically generating a set of fraud rules that each include a usage condition and a usage data time window, the automatic generating of the set of fraud rules based on algorithmically learned patterns in related time series data; obtaining one or more data streams of usage data from one or more components of the telephony platform, the usage data including one or more of call history data, messaging history data, or platform account data; calculating a set of fraud scores for each of the set of fraud rules, each of the set of fraud scores being an assigned score or a scalar measurement of a dimension of fraud based on a condition determined from the usage data; determining a total fraud score by collectively analyzing the set of fraud scores; and automatically taking an action with respect to an account on the telephony platform based on the total fraud score being above a fraud score threshold.
  • 2. The system of claim 1, wherein the action includes at least one of flagging the account, throttling communications of the account, requesting additional billing information, notifying a holder of the account, notifying an analyst of the communication platform, performing additional fraud detection analysis on the account, or blocking particular actions on the account.
  • 3. The system of claim 1, wherein the action includes at least one of suspending the account, blacklisting credit card numbers, blacklisting application URIs or IP addresses, rate-limiting services provided to the account, removing or adjusting services provided to the account, or flagging the account for investigation.
  • 4. The system of claim 1, wherein the set of fraud rules each further include an account age condition.
  • 5. The system of claim 1, wherein at least a subset of the set of fraud rules is generated automatically by applying a machine-learning technique to the usage data, the machine-learning technique including one or more of Bayesian learning, neural networks, reinforcement learning, or cluster analysis.
  • 6. The system of claim 1, wherein the automatic generation of the subset of the set of fraud rules includes automating actions taken by fraud analysts via an analyst-facilitated user interface.
  • 7. The system of claim 1, where the generating of the fraud score is further based on input received via an analyst-facilitated user interface.
  • 8. A method of automatically reacting to a fraud attempt on a telephony platform, the method comprising: automatically generating a set of fraud rules that each include a usage condition and a usage data time window, the automatic generating of the set of fraud rules based on algorithmically learned patterns in related time series data; obtaining one or more data streams of usage data from one or more components of the telephony platform, the usage data including one or more of call history data, messaging history data, or platform account data; calculating a set of fraud scores for each of the set of fraud rules, each of the set of fraud scores being an assigned score or a scalar measurement of a dimension of fraud based on a condition determined from the usage data; determining a total fraud score by collectively analyzing the set of fraud scores; and automatically taking an action with respect to an account on the telephony platform based on the total fraud score being above a fraud score threshold.
  • 9. The method of claim 8, wherein the action includes at least one of flagging the account, throttling communications of the account, requesting additional billing information, notifying a holder of the account, notifying an analyst of the communication platform, performing additional fraud detection analysis on the account, or blocking particular actions on the account.
  • 10. The method of claim 8, wherein the action includes at least one of suspending the account, blacklisting credit card numbers, blacklisting application URIs or IP addresses, rate-limiting services provided to the account, removing or adjusting services provided to the account, or flagging the account for investigation.
  • 11. The method of claim 8, wherein the set of fraud rules each further include an account age condition.
  • 12. The method of claim 8, wherein at least a subset of the set of fraud rules is generated automatically by applying a machine-learning technique to the usage data, the machine-learning technique including one or more of Bayesian learning, neural networks, reinforcement learning, or cluster analysis.
  • 13. The method of claim 12, wherein the automatic generation of the subset of the set of fraud rules includes automating actions taken by fraud analysts via an analyst-facilitated user interface.
  • 14. The method of claim 12, where the generating of the fraud score is further based on input received via an analyst-facilitated user interface.
  • 15. A non-transitory machine-readable storage medium storing a set of instructions that, when executed by one or more computer processors, cause the one or more computer processors to perform operations for automatically reacting to a fraud attempt on a telephony platform, the operations comprising: automatically generating a set of fraud rules that each include a usage condition and a usage data time window, the automatic generating of the set of fraud rules based on algorithmically learned patterns in related time series data; obtaining one or more data streams of usage data from one or more components of the telephony platform, the usage data including one or more of call history data, messaging history data, or platform account data; calculating a set of fraud scores for each of the set of fraud rules, each of the set of fraud scores being an assigned score or a scalar measurement of a dimension of fraud based on a condition determined from the usage data; determining a total fraud score by collectively analyzing the set of fraud scores; and automatically taking an action with respect to an account on the telephony platform based on the total fraud score being above a fraud score threshold.
  • 16. The non-transitory machine-readable storage medium of claim 15, wherein the action includes at least one of flagging the account, throttling communications of the account, requesting additional billing information, notifying a holder of the account, notifying an analyst of the communication platform, performing additional fraud detection analysis on the account, or blocking particular actions on the account.
  • 17. The non-transitory machine-readable storage medium of claim 15, wherein the action includes at least one of suspending the account, blacklisting credit card numbers, blacklisting application URIs or IP addresses, rate-limiting services provided to the account, removing or adjusting services provided to the account, or flagging the account for investigation.
  • 18. The non-transitory machine-readable storage medium of claim 15, wherein the set of fraud rules each further include an account age condition.
  • 19. The non-transitory machine-readable storage medium of claim 15, wherein at least a subset of the set of fraud rules is generated automatically by applying a machine-learning technique to the usage data, the machine-learning technique including one or more of Bayesian learning, neural networks, reinforcement learning, or cluster analysis.
  • 20. The non-transitory machine-readable storage medium of claim 19, wherein the automatic generation of the subset of the set of fraud rules includes automating actions taken by fraud analysts via an analyst-facilitated user interface.
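The following sketches are illustrative only. Claim 8 describes a pipeline in which each rule pairs a usage condition with a usage data time window, each rule contributes a fraud score, the scores are combined into a total, and exceeding a threshold triggers an action. A minimal Python sketch of that flow, in which the event fields, the example rule, and the assigned-score handling are assumptions rather than the patent's actual implementation, might look like:

```python
# Minimal sketch of the claim 8 pipeline. The event fields, the example rule,
# and the threshold handling are hypothetical illustrations, not the patent's rules.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Iterable, List

@dataclass
class UsageEvent:
    account_id: str
    kind: str               # e.g. "call" or "message" drawn from the platform history data
    destination: str
    timestamp: datetime

@dataclass
class FraudRule:
    name: str
    window: timedelta                               # usage data time window
    condition: Callable[[List[UsageEvent]], bool]   # usage condition
    score: float                                    # assigned score when the condition holds

def evaluate_account(events: Iterable[UsageEvent], rules: List[FraudRule],
                     now: datetime, threshold: float) -> dict:
    """Per-rule fraud scores, a combined total, and a threshold comparison."""
    events = list(events)
    scores = {}
    for rule in rules:
        recent = [e for e in events if now - e.timestamp <= rule.window]
        scores[rule.name] = rule.score if rule.condition(recent) else 0.0
    total = sum(scores.values())    # one simple way to "collectively analyze" the scores
    return {"scores": scores, "total": total, "over_threshold": total > threshold}

# Hypothetical rule: a burst of calls to one high-risk destination prefix within an hour.
burst_rule = FraudRule(
    name="premium_call_burst",
    window=timedelta(hours=1),
    condition=lambda evs: sum(1 for e in evs if e.kind == "call"
                              and e.destination.startswith("+882")) > 50,
    score=40.0,
)
```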
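Claims 9 and 10 enumerate two tiers of action responses, from lighter-touch measures such as flagging or throttling to harder ones such as suspension and blacklisting. A hypothetical dispatcher, with assumed score bands and action identifiers, could map the total fraud score onto those tiers:

```python
# Hypothetical escalation mapping from the total fraud score to action responses;
# the score bands and action identifiers are assumptions made for illustration.
def choose_actions(total_score: float) -> list:
    if total_score > 90:
        return ["suspend_account", "blacklist_payment_card", "flag_for_investigation"]
    if total_score > 60:
        return ["rate_limit_services", "blacklist_application_uri", "notify_analyst"]
    if total_score > 30:
        return ["flag_account", "request_additional_billing_info", "notify_account_holder"]
    return []

# e.g. choose_actions(72) -> ["rate_limit_services", "blacklist_application_uri", "notify_analyst"]
```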
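Claim 11 adds an account age condition to each fraud rule, reflecting that newly created accounts tend to carry more risk. A hypothetical gate of that kind, with an assumed seven-day cutoff, could be checked before a rule's usage condition is evaluated:

```python
# Hypothetical account-age condition for a rule; the seven-day cutoff is an assumption.
from datetime import datetime, timedelta

def age_condition_met(account_created_at: datetime, now: datetime,
                      max_age: timedelta = timedelta(days=7)) -> bool:
    """Only apply the rule to accounts younger than max_age."""
    return (now - account_created_at) <= max_age
```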
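Claim 12 permits a subset of the fraud rules to be generated automatically by machine learning over the usage data, naming Bayesian learning, neural networks, reinforcement learning, and cluster analysis. As one hedged illustration of the cluster-analysis option, in which the single call-volume feature, the two-cluster split, and the derived cutoff are all assumptions, a usage-condition threshold could be learned by separating accounts into a typical cluster and an outlier cluster:

```python
# Illustrative 1-D k-means with k=2: split per-account call volumes into a
# "typical" and an "outlier" cluster, then use the cluster boundary as the
# usage-condition threshold of an automatically generated rule.
def two_means_threshold(values: list, iterations: int = 25) -> float:
    lo, hi = float(min(values)), float(max(values))
    for _ in range(iterations):
        boundary = (lo + hi) / 2.0
        low = [v for v in values if v <= boundary]
        high = [v for v in values if v > boundary]
        if not low or not high:
            break
        lo, hi = sum(low) / len(low), sum(high) / len(high)   # recompute the two centroids
    return (lo + hi) / 2.0

calls_last_hour = [2, 3, 1, 4, 2, 180, 210, 3, 2, 195]   # illustrative sample values only
threshold = two_means_threshold(calls_last_hour)
# A generated rule's usage condition could then be: len(recent_calls) > threshold
```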
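Claims 13 and 14 fold analyst judgment, captured through an analyst-facilitated user interface, back into rule generation and scoring. One hedged way to picture the scoring side, where the smoothed weighting below is an assumption rather than the claimed mechanism, is to scale each rule's contribution by how often analysts confirmed versus dismissed its past alerts:

```python
# Hypothetical analyst-feedback weighting: rules whose alerts analysts confirmed
# count for more, rules they repeatedly dismissed count for less.
def analyst_weighted_total(rule_scores: dict, feedback: dict) -> float:
    total = 0.0
    for name, score in rule_scores.items():
        confirmed, dismissed = feedback.get(name, (0, 0))
        weight = (confirmed + 1) / (confirmed + dismissed + 2)   # Laplace-smoothed confirm rate
        total += score * weight
    return total

# Example: a hypothetical rule confirmed 8 times and dismissed twice by analysts.
print(analyst_weighted_total({"premium_call_burst": 40.0}, {"premium_call_burst": (8, 2)}))
```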
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/911,737, filed 5 Mar. 2018, which is a continuation of U.S. patent application Ser. No. 15/440,908, filed 23 Feb. 2017, which is a continuation of U.S. patent application Ser. No. 14/995,015, filed 13 Jan. 2016, which is a continuation of U.S. patent application Ser. No. 14/253,316, filed 15 Apr. 2014, which is a divisional of U.S. patent application Ser. No. 13/949,984, filed 24 Jul. 2013, which claims the benefit of U.S. Provisional Application Ser. No. 61/675,156, filed on 24 Jul. 2012, all of which are incorporated in their entirety by this reference.

US Referenced Citations (795)
Number Name Date Kind
5274700 Gechter et al. Dec 1993 A
5526416 Dezonno et al. Jun 1996 A
5581608 Jreij et al. Dec 1996 A
5598457 Foladare et al. Jan 1997 A
5633914 Rosa May 1997 A
5867495 Elliott et al. Feb 1999 A
5934181 Adamczewski Aug 1999 A
5978465 Corduroy et al. Nov 1999 A
6026440 Shrader et al. Feb 2000 A
6034946 Roginsky et al. Mar 2000 A
6094681 Shaffer et al. Jul 2000 A
6138143 Gigliotti et al. Oct 2000 A
6185565 Meubus et al. Feb 2001 B1
6192123 Grunsted et al. Feb 2001 B1
6206564 Adamczewski Mar 2001 B1
6223287 Douglas et al. Apr 2001 B1
6232979 Shochet May 2001 B1
6269336 Ladd et al. Jul 2001 B1
6317137 Rosasco Nov 2001 B1
6363065 Thornton et al. Mar 2002 B1
6373836 Deryugin et al. Apr 2002 B1
6425012 Trovato et al. Jul 2002 B1
6426995 Kim et al. Jul 2002 B1
6430175 Echols et al. Aug 2002 B1
6434528 Sanders Aug 2002 B1
6445694 Swartz Sep 2002 B1
6445776 Shank et al. Sep 2002 B1
6459913 Cloutier Oct 2002 B2
6463414 Su et al. Oct 2002 B1
6493558 Bernhart et al. Dec 2002 B1
6496500 Nance et al. Dec 2002 B2
6501739 Cohen Dec 2002 B1
6501832 Saylor et al. Dec 2002 B1
6507875 Mellen-Garnett et al. Jan 2003 B1
6571245 Huang et al. May 2003 B2
6574216 Farris et al. Jun 2003 B1
6577721 Vainio et al. Jun 2003 B1
6600736 Ball et al. Jul 2003 B1
6606596 Zirngibl et al. Aug 2003 B1
6614783 Sonesh et al. Sep 2003 B1
6625258 Ram et al. Sep 2003 B1
6625576 Kochanski et al. Sep 2003 B2
6636504 Albers et al. Oct 2003 B1
6662231 Drosset et al. Dec 2003 B1
6704785 Koo et al. Mar 2004 B1
6707889 Saylor et al. Mar 2004 B1
6711129 Bauer et al. Mar 2004 B1
6711249 Weissman et al. Mar 2004 B2
6738738 Henton May 2004 B2
6757365 Bogard Jun 2004 B1
6765997 Zirngibl et al. Jul 2004 B1
6768788 Langseth et al. Jul 2004 B1
6771955 Imura et al. Aug 2004 B2
6778653 Kallas et al. Aug 2004 B1
6785266 Swartz Aug 2004 B2
6788768 Saylor et al. Sep 2004 B1
6792086 Saylor et al. Sep 2004 B1
6792093 Barak et al. Sep 2004 B2
6798867 Zirngibl et al. Sep 2004 B1
6801604 Maes et al. Oct 2004 B2
6807529 Johnson et al. Oct 2004 B2
6807574 Partovi et al. Oct 2004 B1
6819667 Brusilovsky et al. Nov 2004 B1
6820260 Flockhart et al. Nov 2004 B1
6829334 Zirngibl et al. Dec 2004 B1
6831966 Tegan et al. Dec 2004 B1
6834265 Balasuriya Dec 2004 B2
6836537 Zirngibl et al. Dec 2004 B1
6842767 Partovi et al. Jan 2005 B1
6850603 Eberle et al. Feb 2005 B1
6870830 Schuster et al. Mar 2005 B1
6873952 Bailey et al. Mar 2005 B1
6874084 Dobner et al. Mar 2005 B1
6885737 Gao et al. Apr 2005 B1
6888929 Saylor et al. May 2005 B1
6892064 Qi et al. May 2005 B2
6895084 Saylor et al. May 2005 B1
6898567 Balasuriya May 2005 B2
6912581 Johnson et al. Jun 2005 B2
6922411 Taylor Jul 2005 B1
6928469 Duursma et al. Aug 2005 B1
6931405 El-Shimi et al. Aug 2005 B2
6934858 Woodhill Aug 2005 B2
6937699 Schuster et al. Aug 2005 B1
6940953 Eberle et al. Sep 2005 B1
6941268 Porter et al. Sep 2005 B2
6947417 Laursen et al. Sep 2005 B2
6947727 Brynielsson Sep 2005 B1
6947988 Saleh Sep 2005 B1
6961330 Cattan et al. Nov 2005 B1
6964012 Zirngibl et al. Nov 2005 B1
6970915 Partovi et al. Nov 2005 B1
6977992 Zirngibl et al. Dec 2005 B2
6985862 Strom et al. Jan 2006 B2
6993658 Engberg et al. Jan 2006 B1
6999576 Sacra Feb 2006 B2
7003464 Ferrans et al. Feb 2006 B2
7006606 Cohen et al. Feb 2006 B1
7010586 Allavarpu et al. Mar 2006 B1
7020685 Chen et al. Mar 2006 B1
7039165 Saylor et al. May 2006 B1
7062709 Cheung Jun 2006 B2
7065637 Nanja Jun 2006 B1
7076037 Gonen et al. Jul 2006 B1
7076428 Anastasakos et al. Jul 2006 B2
7080049 Truitt et al. Jul 2006 B2
7085727 Vanorman Aug 2006 B2
7089310 Ellerman et al. Aug 2006 B1
7092370 Jiang et al. Aug 2006 B2
7103003 Brueckheimer et al. Sep 2006 B2
7103171 Annadata et al. Sep 2006 B1
7106844 Holland Sep 2006 B1
7110513 Halpern et al. Sep 2006 B2
7110514 Brown et al. Sep 2006 B2
7111163 Haney Sep 2006 B1
7136932 Schneider Nov 2006 B1
7140004 Kunins et al. Nov 2006 B1
7143039 Stifelman et al. Nov 2006 B1
7197331 Anastasakos et al. Mar 2007 B2
7197461 Eberle et al. Mar 2007 B1
7197462 Takagi et al. Mar 2007 B2
7197544 Wang et al. Mar 2007 B2
7225232 Elberse May 2007 B2
7227849 Rasanen Jun 2007 B1
7260208 Cavalcanti Aug 2007 B2
7266181 Zirngibl et al. Sep 2007 B1
7269557 Bailey et al. Sep 2007 B1
7272212 Eberle et al. Sep 2007 B2
7272564 Phillips et al. Sep 2007 B2
7277851 Henton Oct 2007 B1
7283515 Fowler Oct 2007 B2
7286521 Jackson et al. Oct 2007 B1
7287248 Adeeb Oct 2007 B1
7289453 Riedel et al. Oct 2007 B2
7296739 Mo et al. Nov 2007 B1
7298732 Cho Nov 2007 B2
7298834 Homeier et al. Nov 2007 B1
7308085 Weissman Dec 2007 B2
7308408 Stifelman et al. Dec 2007 B1
7324633 Gao et al. Jan 2008 B2
7324942 Mahowald et al. Jan 2008 B1
7328263 Sadjadi Feb 2008 B1
7330463 Bradd et al. Feb 2008 B1
7330890 Partovi et al. Feb 2008 B1
7340040 Saylor et al. Mar 2008 B1
7349714 Lee et al. Mar 2008 B2
7369865 Gabriel et al. May 2008 B2
7370329 Kumar et al. May 2008 B2
7373660 Guichard et al. May 2008 B1
7376223 Taylor et al. May 2008 B2
7376586 Partovi et al. May 2008 B1
7376733 Connelly et al. May 2008 B2
7376740 Porter et al. May 2008 B1
7383572 Rolfe Jun 2008 B2
7395050 Tuomi et al. Jul 2008 B2
7412525 Cafarella et al. Aug 2008 B2
7418090 Reding et al. Aug 2008 B2
7428302 Zirngibl et al. Sep 2008 B2
7431202 Meador et al. Oct 2008 B1
7440898 Eberle et al. Oct 2008 B1
7447299 Partovi et al. Nov 2008 B1
7454459 Kapoor et al. Nov 2008 B1
7457249 Baldwin et al. Nov 2008 B2
7457397 Saylor et al. Nov 2008 B1
7473872 Takimoto Jan 2009 B2
7486780 Zirngibl et al. Feb 2009 B2
7496054 Taylor Feb 2009 B2
7496188 Saha et al. Feb 2009 B2
7496651 Joshi Feb 2009 B1
7500249 Kampe et al. Mar 2009 B2
7505951 Thompson et al. Mar 2009 B2
7519359 Chiarulli et al. Apr 2009 B2
7522711 Stein et al. Apr 2009 B1
7536454 Balasuriya May 2009 B2
7552054 Stifelman et al. Jun 2009 B1
7565547 Matta et al. Jul 2009 B2
7571226 Partovi et al. Aug 2009 B1
7577847 Nguyen et al. Aug 2009 B2
7606868 Le et al. Oct 2009 B1
7613287 Stifelman et al. Nov 2009 B1
7623648 Oppenheim et al. Nov 2009 B1
7630900 Strom Dec 2009 B1
7631310 Henzinger Dec 2009 B1
7644000 Strom Jan 2010 B1
7657433 Chang Feb 2010 B1
7657434 Thompson et al. Feb 2010 B2
7668157 Weintraub et al. Feb 2010 B2
7672275 Yajnik et al. Mar 2010 B2
7672295 Andhare et al. Mar 2010 B1
7675857 Chesson Mar 2010 B1
7676221 Roundtree et al. Mar 2010 B2
7685298 Day et al. Mar 2010 B2
7715547 Ibbotson et al. May 2010 B2
7716293 Kasuga et al. May 2010 B2
7742499 Erskine et al. Jun 2010 B1
7756507 Morper Jul 2010 B2
7764955 Mangal et al. Jul 2010 B1
7779065 Gupta et al. Aug 2010 B2
7875836 Imura et al. Jan 2011 B2
7882253 Pardo-Castellote et al. Feb 2011 B2
7920866 Bosch et al. Apr 2011 B2
7926099 Chakravarty et al. Apr 2011 B1
7929562 Petrovykh Apr 2011 B2
7936867 Hill et al. May 2011 B1
7946913 Yacenda May 2011 B2
7962644 Ezerzer et al. Jun 2011 B1
7979555 Rothstein et al. Jul 2011 B2
7983404 Croak et al. Jul 2011 B1
7992120 Wang et al. Aug 2011 B1
8023425 Raleigh Sep 2011 B2
8024567 Han Sep 2011 B2
8024785 Andress et al. Sep 2011 B2
8045689 Provenzale et al. Oct 2011 B2
8046378 Zhuge et al. Oct 2011 B1
8046823 Begen et al. Oct 2011 B1
8069096 Ballaro et al. Nov 2011 B1
8078483 Hirose et al. Dec 2011 B1
8081744 Sylvain Dec 2011 B2
8081958 Soderstrom et al. Dec 2011 B2
8103725 Gupta et al. Jan 2012 B2
8126128 Hicks, III et al. Feb 2012 B1
8126129 Mcguire Feb 2012 B1
8139730 Da Palma et al. Mar 2012 B2
8149716 Ramanathan et al. Apr 2012 B2
8150918 Edelman et al. Apr 2012 B1
8156213 Deng et al. Apr 2012 B1
8165116 Ku et al. Apr 2012 B2
8166185 Samuel et al. Apr 2012 B2
8166299 Kemshall Apr 2012 B2
8169936 Koren et al. May 2012 B2
8175007 Jain et al. May 2012 B2
8185619 Maiocco et al. May 2012 B1
8196133 Kakumani et al. Jun 2012 B2
8204479 Vendrow et al. Jun 2012 B2
8233611 Zettner Jul 2012 B1
8238533 Blackwell et al. Aug 2012 B2
8243889 Taylor et al. Aug 2012 B2
8249552 Gailloux et al. Aug 2012 B1
8266327 Kumar et al. Sep 2012 B2
8295272 Boni et al. Oct 2012 B2
8302175 Thoursie et al. Oct 2012 B2
8306021 Lawson et al. Nov 2012 B2
8315198 Corneille et al. Nov 2012 B2
8319816 Swanson et al. Nov 2012 B1
8326805 Arous et al. Dec 2012 B1
8346630 McKeown Jan 2013 B1
8355394 Taylor et al. Jan 2013 B2
8413247 Hudis et al. Apr 2013 B2
8416923 Lawson et al. Apr 2013 B2
8417817 Jacobs Apr 2013 B1
8429827 Wetzel Apr 2013 B1
8438315 Tao et al. May 2013 B1
8447025 Shaffer et al. May 2013 B2
8462670 Chien Jun 2013 B2
8462920 Gonen et al. Jun 2013 B2
8467502 Sureka et al. Jun 2013 B2
8477926 Jasper et al. Jul 2013 B2
8503639 Reding et al. Aug 2013 B2
8503650 Reding et al. Aug 2013 B2
8509068 Begall et al. Aug 2013 B2
8532686 Schmidt et al. Sep 2013 B2
8542805 Agranovsky et al. Sep 2013 B2
8543665 Ansari et al. Sep 2013 B2
8547962 Ramachandran et al. Oct 2013 B2
8565117 Hilt et al. Oct 2013 B2
8572391 Golan et al. Oct 2013 B2
8576712 Sabat et al. Nov 2013 B2
8577803 Chatterjee et al. Nov 2013 B2
8582450 Robesky Nov 2013 B1
8594626 Woodson et al. Nov 2013 B1
8601136 Fahlgren et al. Dec 2013 B1
8611338 Lawson et al. Dec 2013 B2
8613102 Nath Dec 2013 B2
8649268 Lawson et al. Feb 2014 B2
8656452 Li et al. Feb 2014 B2
8667056 Proulx et al. Mar 2014 B1
8675493 Buddhikot et al. Mar 2014 B2
8694025 Dupray et al. Apr 2014 B2
8695077 Gerhard et al. Apr 2014 B1
8737593 Lawson et al. May 2014 B2
8737962 Ballai et al. May 2014 B2
8738051 Nowack et al. May 2014 B2
8751801 Harris et al. Jun 2014 B2
8755376 Lawson et al. Jun 2014 B2
8767925 Sureka et al. Jul 2014 B2
8781975 Bennett et al. Jul 2014 B2
8806024 Toba Francis et al. Aug 2014 B1
8819133 Wang Aug 2014 B2
8825746 Ravichandran et al. Sep 2014 B2
8837465 Lawson et al. Sep 2014 B2
8838707 Lawson et al. Sep 2014 B2
8855271 Brock et al. Oct 2014 B2
8861510 Fritz Oct 2014 B1
8879547 Maes Nov 2014 B2
8938053 Cooke et al. Jan 2015 B2
8948356 Nowack et al. Feb 2015 B2
8964726 Lawson et al. Feb 2015 B2
8990610 Bostick et al. Mar 2015 B2
9014664 Kim et al. Apr 2015 B2
9015702 Bhat Apr 2015 B2
9031223 Smith et al. May 2015 B2
9137127 Nowack et al. Sep 2015 B2
9141682 Adoc, Jr. et al. Sep 2015 B1
9160696 Wilsher et al. Oct 2015 B2
9225840 Malatack et al. Dec 2015 B2
9226217 Malatack Dec 2015 B2
9270833 Ballai et al. Feb 2016 B2
9306982 Lawson et al. Apr 2016 B2
9307094 Nowack et al. Apr 2016 B2
9344573 Wolthuis et al. May 2016 B2
9378337 Kuhr Jun 2016 B2
9456008 Lawson et al. Sep 2016 B2
9514293 Moritz Dec 2016 B1
9614972 Ballai et al. Apr 2017 B2
9628624 Wolthuis et al. Apr 2017 B2
9632875 Raichstein et al. Apr 2017 B2
9948788 Ballai et al. Apr 2018 B2
10469670 Ballai et al. Nov 2019 B2
20010032192 Putta et al. Oct 2001 A1
20010037254 Glikman Nov 2001 A1
20010037264 Husemann et al. Nov 2001 A1
20010038624 Greenberg et al. Nov 2001 A1
20010043684 Guedalia et al. Nov 2001 A1
20010051996 Cooper et al. Dec 2001 A1
20020006124 Jimenez et al. Jan 2002 A1
20020006125 Josse et al. Jan 2002 A1
20020006193 Rodenbusch et al. Jan 2002 A1
20020020741 Sakaguchi Feb 2002 A1
20020032874 Hagen et al. Mar 2002 A1
20020035539 O'Connell Mar 2002 A1
20020057777 Saito et al. May 2002 A1
20020064267 Martin et al. May 2002 A1
20020067823 Walker et al. Jun 2002 A1
20020070273 Fujll Jun 2002 A1
20020077833 Arons et al. Jun 2002 A1
20020126813 Partovi et al. Sep 2002 A1
20020133587 Ensel et al. Sep 2002 A1
20020136391 Armstrong et al. Sep 2002 A1
20020138450 Kremer Sep 2002 A1
20020147913 Lun Yip Oct 2002 A1
20020165957 Devoe et al. Nov 2002 A1
20020169988 Vandergeest et al. Nov 2002 A1
20020176378 Hamilton et al. Nov 2002 A1
20020176404 Girard Nov 2002 A1
20020177433 Bravo et al. Nov 2002 A1
20020184361 Eden Dec 2002 A1
20020198941 Gavrilescu et al. Dec 2002 A1
20030005136 Eun Jan 2003 A1
20030006137 Wei et al. Jan 2003 A1
20030012356 Zino et al. Jan 2003 A1
20030014665 Anderson et al. Jan 2003 A1
20030018830 Chen et al. Jan 2003 A1
20030023672 Vaysman Jan 2003 A1
20030026426 Wright et al. Feb 2003 A1
20030046366 Pardikar et al. Mar 2003 A1
20030051037 Sundaram et al. Mar 2003 A1
20030058884 Kallner et al. Mar 2003 A1
20030059020 Meyerson et al. Mar 2003 A1
20030060188 Gidron et al. Mar 2003 A1
20030061317 Brown et al. Mar 2003 A1
20030061404 Atwal et al. Mar 2003 A1
20030088421 Maes et al. May 2003 A1
20030097330 Hillmer et al. May 2003 A1
20030097447 Johnston May 2003 A1
20030097639 Niyogi et al. May 2003 A1
20030103620 Brown et al. Jun 2003 A1
20030123640 Roelle et al. Jul 2003 A1
20030126076 Kwok et al. Jul 2003 A1
20030149721 Alfonso-Nogueiro et al. Aug 2003 A1
20030159068 Halpin et al. Aug 2003 A1
20030169881 Niedermeyer Sep 2003 A1
20030172272 Ehlers et al. Sep 2003 A1
20030195950 Huang et al. Oct 2003 A1
20030195990 Greenblat et al. Oct 2003 A1
20030196076 Zabarski et al. Oct 2003 A1
20030204616 Billhartz et al. Oct 2003 A1
20030204756 Ransom et al. Oct 2003 A1
20030211842 Kempf et al. Nov 2003 A1
20030221125 Rolfe Nov 2003 A1
20030231647 Petrovykh Dec 2003 A1
20030233276 Pearlman et al. Dec 2003 A1
20040008635 Nelson et al. Jan 2004 A1
20040011690 Marfino et al. Jan 2004 A1
20040044953 Watkins et al. Mar 2004 A1
20040052349 Creamer et al. Mar 2004 A1
20040054632 Remy Mar 2004 A1
20040071275 Bowater et al. Apr 2004 A1
20040073519 Fast Apr 2004 A1
20040097217 McClain May 2004 A1
20040101122 Da Palma et al. May 2004 A1
20040102182 Reith et al. May 2004 A1
20040117788 Karaoguz et al. Jun 2004 A1
20040122685 Bunce Jun 2004 A1
20040136324 Steinberg et al. Jul 2004 A1
20040165569 Sweatman et al. Aug 2004 A1
20040172482 Weissman et al. Sep 2004 A1
20040199572 Hunt et al. Oct 2004 A1
20040203595 Singhal Oct 2004 A1
20040205101 Radhakrishnan Oct 2004 A1
20040205689 Ellens et al. Oct 2004 A1
20040213400 Golitsin et al. Oct 2004 A1
20040216058 Chavers et al. Oct 2004 A1
20040218748 Fisher Nov 2004 A1
20040219904 De Petris Nov 2004 A1
20040228469 Andrews et al. Nov 2004 A1
20040236696 Aoki et al. Nov 2004 A1
20040240649 Goel Dec 2004 A1
20050005109 Castaldi et al. Jan 2005 A1
20050005200 Matena et al. Jan 2005 A1
20050010483 Ling Jan 2005 A1
20050015505 Kruis et al. Jan 2005 A1
20050021626 Prajapat et al. Jan 2005 A1
20050025303 Hostetler, Jr. Feb 2005 A1
20050038772 Colrain Feb 2005 A1
20050043952 Sharma et al. Feb 2005 A1
20050047579 Salame Mar 2005 A1
20050060411 Coulombe et al. Mar 2005 A1
20050066179 Seidlein Mar 2005 A1
20050083907 Fishler Apr 2005 A1
20050091336 Dehamer et al. Apr 2005 A1
20050091572 Gavrilescu et al. Apr 2005 A1
20050108770 Karaoguz et al. May 2005 A1
20050125251 Berger et al. Jun 2005 A1
20050125739 Thompson et al. Jun 2005 A1
20050128961 Miloslavsky et al. Jun 2005 A1
20050135578 Ress et al. Jun 2005 A1
20050141500 Bhandari et al. Jun 2005 A1
20050147088 Bao et al. Jul 2005 A1
20050176449 Cui et al. Aug 2005 A1
20050177635 Schmidt et al. Aug 2005 A1
20050181835 Lau et al. Aug 2005 A1
20050198292 Duursma et al. Sep 2005 A1
20050228680 Malik Oct 2005 A1
20050238153 Chevalier Oct 2005 A1
20050240659 Taylor Oct 2005 A1
20050243977 Creamer et al. Nov 2005 A1
20050246176 Creamer et al. Nov 2005 A1
20050273442 Bennett et al. Dec 2005 A1
20050286496 Malhotra et al. Dec 2005 A1
20050289222 Sahim Dec 2005 A1
20060008065 Longman et al. Jan 2006 A1
20060008073 Yoshizawa et al. Jan 2006 A1
20060008256 Khedouri et al. Jan 2006 A1
20060015467 Morken et al. Jan 2006 A1
20060020799 Kemshall Jan 2006 A1
20060021004 Moran et al. Jan 2006 A1
20060023676 Whitmore et al. Feb 2006 A1
20060047666 Bedi et al. Mar 2006 A1
20060067506 Flockhart et al. Mar 2006 A1
20060080415 Tu Apr 2006 A1
20060098624 Morgan et al. May 2006 A1
20060129638 Deakin Jun 2006 A1
20060143007 Koh et al. Jun 2006 A1
20060146792 Ramachandran et al. Jul 2006 A1
20060146802 Baldwin et al. Jul 2006 A1
20060168334 Potti et al. Jul 2006 A1
20060203979 Jennings Sep 2006 A1
20060209695 Archer, Jr. et al. Sep 2006 A1
20060212865 Vincent et al. Sep 2006 A1
20060215824 Mitby et al. Sep 2006 A1
20060217823 Hussey Sep 2006 A1
20060217978 Mitby et al. Sep 2006 A1
20060222166 Ramakrishna et al. Oct 2006 A1
20060235715 Abrams et al. Oct 2006 A1
20060256816 Yarlagadda et al. Nov 2006 A1
20060262915 Marascio et al. Nov 2006 A1
20060270386 Yu et al. Nov 2006 A1
20060285489 Francisco et al. Dec 2006 A1
20070002744 Mewhinney et al. Jan 2007 A1
20070027775 Hwang Feb 2007 A1
20070036143 Alt et al. Feb 2007 A1
20070038499 Margulies et al. Feb 2007 A1
20070042755 Singhal Feb 2007 A1
20070043681 Morgan et al. Feb 2007 A1
20070050306 McQueen Mar 2007 A1
20070064672 Raghav et al. Mar 2007 A1
20070070906 Thakur Mar 2007 A1
20070070980 Phelps et al. Mar 2007 A1
20070071223 Lee et al. Mar 2007 A1
20070074174 Thornton Mar 2007 A1
20070088836 Tai et al. Apr 2007 A1
20070091907 Seshadri et al. Apr 2007 A1
20070094095 Kilby Apr 2007 A1
20070107048 Halls et al. May 2007 A1
20070112574 Greene May 2007 A1
20070112673 Protti May 2007 A1
20070116191 Bermudez et al. May 2007 A1
20070121651 Casey et al. May 2007 A1
20070127691 Lert Jun 2007 A1
20070127703 Siminoff Jun 2007 A1
20070130260 Weintraub et al. Jun 2007 A1
20070133771 Stifelman et al. Jun 2007 A1
20070147351 Dietrich et al. Jun 2007 A1
20070149166 Turcotte et al. Jun 2007 A1
20070153711 Dykas et al. Jul 2007 A1
20070167170 Fitchett et al. Jul 2007 A1
20070174214 Welsh et al. Jul 2007 A1
20070192629 Saito Aug 2007 A1
20070201448 Baird et al. Aug 2007 A1
20070208862 Fox et al. Sep 2007 A1
20070232284 Mason et al. Oct 2007 A1
20070239761 Baio et al. Oct 2007 A1
20070242626 Altberg et al. Oct 2007 A1
20070255828 Paradise Nov 2007 A1
20070265073 Novi et al. Nov 2007 A1
20070286180 Marquette et al. Dec 2007 A1
20070291734 Bhatia et al. Dec 2007 A1
20070291905 Halliday et al. Dec 2007 A1
20070293200 Roundtree et al. Dec 2007 A1
20070295803 Levine et al. Dec 2007 A1
20080005275 Overton et al. Jan 2008 A1
20080025320 Bangalore et al. Jan 2008 A1
20080037715 Prozeniuk et al. Feb 2008 A1
20080037746 Dufrene et al. Feb 2008 A1
20080040484 Yardley Feb 2008 A1
20080049617 Grice et al. Feb 2008 A1
20080052395 Wright et al. Feb 2008 A1
20080091843 Kulkarni Apr 2008 A1
20080101571 Harlow et al. May 2008 A1
20080104348 Kabzinski et al. May 2008 A1
20080120702 Hokimoto May 2008 A1
20080123559 Haviv et al. May 2008 A1
20080134049 Gupta et al. Jun 2008 A1
20080139166 Agarwal et al. Jun 2008 A1
20080146268 Gandhi et al. Jun 2008 A1
20080152101 Griggs Jun 2008 A1
20080154601 Stifelman et al. Jun 2008 A1
20080155029 Helbling et al. Jun 2008 A1
20080162482 Ahern et al. Jul 2008 A1
20080165708 Moore et al. Jul 2008 A1
20080172404 Cohen Jul 2008 A1
20080177883 Hanai et al. Jul 2008 A1
20080192736 Jabri et al. Aug 2008 A1
20080201426 Darcie Aug 2008 A1
20080209050 Li Aug 2008 A1
20080212945 Khedouri et al. Sep 2008 A1
20080222656 Lyman Sep 2008 A1
20080229421 Hudis et al. Sep 2008 A1
20080232574 Baluja et al. Sep 2008 A1
20080235230 Maes Sep 2008 A1
20080256224 Kaji et al. Oct 2008 A1
20080275741 Loeffen Nov 2008 A1
20080307436 Hamilton Dec 2008 A1
20080310599 Purnadi et al. Dec 2008 A1
20080313318 Vermeulen et al. Dec 2008 A1
20080316931 Qiu et al. Dec 2008 A1
20080317222 Griggs et al. Dec 2008 A1
20080317232 Couse et al. Dec 2008 A1
20080317233 Rey et al. Dec 2008 A1
20090046838 Andreasson Feb 2009 A1
20090052437 Taylor et al. Feb 2009 A1
20090052641 Taylor et al. Feb 2009 A1
20090059894 Jackson et al. Mar 2009 A1
20090063502 Coimbatore et al. Mar 2009 A1
20090074159 Goldfarb et al. Mar 2009 A1
20090075684 Cheng et al. Mar 2009 A1
20090083155 Tudor et al. Mar 2009 A1
20090089165 Sweeney Apr 2009 A1
20090089352 Davis et al. Apr 2009 A1
20090089699 Saha et al. Apr 2009 A1
20090092674 Ingram et al. Apr 2009 A1
20090093250 Jackson et al. Apr 2009 A1
20090106829 Thoursie et al. Apr 2009 A1
20090125608 Werth et al. May 2009 A1
20090129573 Gavan et al. May 2009 A1
20090136011 Goel May 2009 A1
20090170496 Bourque Jul 2009 A1
20090171659 Pearce et al. Jul 2009 A1
20090171669 Engelsma et al. Jul 2009 A1
20090171752 Galvin et al. Jul 2009 A1
20090182896 Patterson et al. Jul 2009 A1
20090193433 Maes Jul 2009 A1
20090216835 Jain et al. Aug 2009 A1
20090217293 Wolber et al. Aug 2009 A1
20090220057 Waters Sep 2009 A1
20090221310 Chen et al. Sep 2009 A1
20090222341 Belwadi et al. Sep 2009 A1
20090225748 Taylor Sep 2009 A1
20090225763 Forsberg et al. Sep 2009 A1
20090228868 Drukman et al. Sep 2009 A1
20090232289 Drucker et al. Sep 2009 A1
20090234965 Viveganandhan et al. Sep 2009 A1
20090235349 Lai et al. Sep 2009 A1
20090241135 Wong et al. Sep 2009 A1
20090252159 Lawson et al. Oct 2009 A1
20090262725 Chen et al. Oct 2009 A1
20090276771 Nickolov et al. Nov 2009 A1
20090288012 Hertel et al. Nov 2009 A1
20090288165 Qiu et al. Nov 2009 A1
20090300194 Ogasawara Dec 2009 A1
20090316687 Kruppa Dec 2009 A1
20090318112 Vasten Dec 2009 A1
20100027531 Kurashima Feb 2010 A1
20100037204 Lin et al. Feb 2010 A1
20100054142 Moiso et al. Mar 2010 A1
20100070424 Monk Mar 2010 A1
20100071053 Ansari et al. Mar 2010 A1
20100082513 Liu Apr 2010 A1
20100087215 Gu et al. Apr 2010 A1
20100088187 Courtney et al. Apr 2010 A1
20100088698 Krishnamurthy Apr 2010 A1
20100094758 Chamberlain et al. Apr 2010 A1
20100103845 Ulupinar et al. Apr 2010 A1
20100107222 Glasser Apr 2010 A1
20100115041 Hawkins et al. May 2010 A1
20100138501 Clinton et al. Jun 2010 A1
20100142516 Lawson et al. Jun 2010 A1
20100150139 Lawson et al. Jun 2010 A1
20100167689 Sepehri-Nik et al. Jul 2010 A1
20100188979 Thubert et al. Jul 2010 A1
20100191915 Spencer Jul 2010 A1
20100208881 Kawamura Aug 2010 A1
20100217837 Ansari et al. Aug 2010 A1
20100217982 Brown et al. Aug 2010 A1
20100232594 Lawson et al. Sep 2010 A1
20100235539 Carter et al. Sep 2010 A1
20100250946 Korte et al. Sep 2010 A1
20100251329 Wei Sep 2010 A1
20100251340 Martin et al. Sep 2010 A1
20100265825 Blair et al. Oct 2010 A1
20100281108 Cohen Nov 2010 A1
20100291910 Sanding et al. Nov 2010 A1
20100299437 Moore Nov 2010 A1
20100312919 Lee et al. Dec 2010 A1
20100332852 Vembu et al. Dec 2010 A1
20110026516 Roberts et al. Feb 2011 A1
20110029882 Jaisinghani Feb 2011 A1
20110029981 Jaisinghani Feb 2011 A1
20110053555 Cai et al. Mar 2011 A1
20110078278 Cui et al. Mar 2011 A1
20110081008 Lawson et al. Apr 2011 A1
20110083069 Paul et al. Apr 2011 A1
20110083179 Lawson et al. Apr 2011 A1
20110093516 Geng et al. Apr 2011 A1
20110096673 Stevenson et al. Apr 2011 A1
20110110366 Moore et al. May 2011 A1
20110131293 Mori Jun 2011 A1
20110138453 Verma et al. Jun 2011 A1
20110143714 Keast et al. Jun 2011 A1
20110145049 Hertel et al. Jun 2011 A1
20110149810 Koren et al. Jun 2011 A1
20110149950 Petit-Huguenin et al. Jun 2011 A1
20110151884 Zhao Jun 2011 A1
20110158235 Senga Jun 2011 A1
20110167172 Roach et al. Jul 2011 A1
20110170505 Rajasekar et al. Jul 2011 A1
20110176537 Lawson et al. Jul 2011 A1
20110211679 Mezhibovsky et al. Sep 2011 A1
20110251921 Kassaei et al. Oct 2011 A1
20110253693 Lyons et al. Oct 2011 A1
20110255675 Jasper et al. Oct 2011 A1
20110258432 Rao et al. Oct 2011 A1
20110265168 Lucovsky et al. Oct 2011 A1
20110265172 Sharma et al. Oct 2011 A1
20110267985 Wilkinson et al. Nov 2011 A1
20110274111 Narasappa et al. Nov 2011 A1
20110276892 Jensen-Horne et al. Nov 2011 A1
20110276951 Jain Nov 2011 A1
20110280390 Lawson et al. Nov 2011 A1
20110283259 Lawson et al. Nov 2011 A1
20110289126 Aikas et al. Nov 2011 A1
20110299672 Chiu et al. Dec 2011 A1
20110310902 Xu Dec 2011 A1
20110313950 Nuggehalli et al. Dec 2011 A1
20110320449 Gudlavenkatasiva Dec 2011 A1
20110320550 Lawson et al. Dec 2011 A1
20120000903 Baarman et al. Jan 2012 A1
20120011274 Moreman Jan 2012 A1
20120017222 May Jan 2012 A1
20120023531 Meuninck et al. Jan 2012 A1
20120023544 Li et al. Jan 2012 A1
20120027228 Rijken et al. Feb 2012 A1
20120028602 Lisi et al. Feb 2012 A1
20120036574 Heithcock et al. Feb 2012 A1
20120039202 Song Feb 2012 A1
20120059709 Lieberman et al. Mar 2012 A1
20120079066 Li et al. Mar 2012 A1
20120083266 Vanswol et al. Apr 2012 A1
20120089572 Raichstein et al. Apr 2012 A1
20120094637 Jeyaseelan et al. Apr 2012 A1
20120101952 Raleigh et al. Apr 2012 A1
20120110564 Ran et al. May 2012 A1
20120114112 Rauschenberger et al. May 2012 A1
20120149404 Beattie, Jr. et al. Jun 2012 A1
20120166488 Kaushik et al. Jun 2012 A1
20120170726 Schwartz Jul 2012 A1
20120173610 Bleau et al. Jul 2012 A1
20120174095 Natchadalingam et al. Jul 2012 A1
20120179646 Hinton et al. Jul 2012 A1
20120179907 Byrd et al. Jul 2012 A1
20120180021 Byrd et al. Jul 2012 A1
20120180029 Hill et al. Jul 2012 A1
20120185561 Klein et al. Jul 2012 A1
20120198004 Watte Aug 2012 A1
20120201238 Lawson et al. Aug 2012 A1
20120208495 Lawson et al. Aug 2012 A1
20120221603 Kothule et al. Aug 2012 A1
20120226579 Ha et al. Sep 2012 A1
20120239757 Firstenberg et al. Sep 2012 A1
20120240226 Li Sep 2012 A1
20120246273 Bornstein et al. Sep 2012 A1
20120254828 Aiylam et al. Oct 2012 A1
20120266258 Tuchman et al. Oct 2012 A1
20120281536 Gell et al. Nov 2012 A1
20120288082 Segall Nov 2012 A1
20120290706 Lin et al. Nov 2012 A1
20120304245 Lawson et al. Nov 2012 A1
20120304275 Ji et al. Nov 2012 A1
20120316809 Egolf et al. Dec 2012 A1
20120321058 Eng et al. Dec 2012 A1
20120321070 Smith et al. Dec 2012 A1
20130029629 Lindholm et al. Jan 2013 A1
20130031158 Salsburg Jan 2013 A1
20130031613 Shanabrook et al. Jan 2013 A1
20130036476 Roever et al. Feb 2013 A1
20130047232 Tuchman et al. Feb 2013 A1
20130054517 Beechuk et al. Feb 2013 A1
20130054684 Brazier et al. Feb 2013 A1
20130058262 Parreira Mar 2013 A1
20130067232 Cheung et al. Mar 2013 A1
20130067448 Sannidhanam et al. Mar 2013 A1
20130097298 Ting et al. Apr 2013 A1
20130110658 Lyman May 2013 A1
20130132573 Lindblom May 2013 A1
20130139148 Berg et al. May 2013 A1
20130156024 Burg Jun 2013 A1
20130179942 Caplis et al. Jul 2013 A1
20130201909 Bosch et al. Aug 2013 A1
20130204786 Mattes et al. Aug 2013 A1
20130212603 Cooke et al. Aug 2013 A1
20130244632 Spence et al. Sep 2013 A1
20130268676 Martins et al. Oct 2013 A1
20130325934 Fausak et al. Dec 2013 A1
20130328997 Desai Dec 2013 A1
20130336472 Fahlgren et al. Dec 2013 A1
20140013400 Warshavsky et al. Jan 2014 A1
20140045456 Ballai et al. Feb 2014 A1
20140058806 Guenette et al. Feb 2014 A1
20140064467 Lawson et al. Mar 2014 A1
20140072115 Makagon et al. Mar 2014 A1
20140101058 Castel et al. Apr 2014 A1
20140101149 Winters et al. Apr 2014 A1
20140105372 Nowack et al. Apr 2014 A1
20140106704 Cooke et al. Apr 2014 A1
20140122600 Kim et al. May 2014 A1
20140123187 Reisman May 2014 A1
20140126715 Lum et al. May 2014 A1
20140129363 Lorah et al. May 2014 A1
20140153565 Lawson et al. Jun 2014 A1
20140185490 Holm et al. Jul 2014 A1
20140226803 Ballai et al. Aug 2014 A1
20140254600 Shibata et al. Sep 2014 A1
20140258481 Lundell Sep 2014 A1
20140269333 Boerjesson Sep 2014 A1
20140274086 Boerjesson et al. Sep 2014 A1
20140282473 Saraf et al. Sep 2014 A1
20140289391 Balaji et al. Sep 2014 A1
20140304054 Orun et al. Oct 2014 A1
20140317640 Harm et al. Oct 2014 A1
20140355600 Lawson et al. Dec 2014 A1
20140372508 Fausak et al. Dec 2014 A1
20140372509 Fausak et al. Dec 2014 A1
20140372510 Fausak et al. Dec 2014 A1
20140373098 Fausak et al. Dec 2014 A1
20140379670 Kuhr Dec 2014 A1
20150004932 Kim et al. Jan 2015 A1
20150004933 Kim et al. Jan 2015 A1
20150023251 Giakoumelis et al. Jan 2015 A1
20150026477 Malatack et al. Jan 2015 A1
20150066865 Yara et al. Mar 2015 A1
20150081918 Nowack et al. Mar 2015 A1
20150082378 Collison Mar 2015 A1
20150100634 He et al. Apr 2015 A1
20150119050 Liao et al. Apr 2015 A1
20150181631 Lee et al. Jun 2015 A1
20150236905 Bellan et al. Aug 2015 A1
20150281294 Nur et al. Oct 2015 A1
20150365480 Soto et al. Dec 2015 A1
20150370788 Bareket et al. Dec 2015 A1
20150381580 Graham, III et al. Dec 2015 A1
20160011758 Dornbush et al. Jan 2016 A1
20160028695 Binder Jan 2016 A1
20160112475 Lawson et al. Apr 2016 A1
20160112521 Lawson et al. Apr 2016 A1
20160119291 Zollinger et al. Apr 2016 A1
20160127254 Kumar et al. May 2016 A1
20160134757 Ballai et al. May 2016 A1
20160149956 Birnbaum et al. May 2016 A1
20160205519 Patel et al. Jul 2016 A1
20160226937 Patel et al. Aug 2016 A1
20160226979 Lancaster et al. Aug 2016 A1
20160234391 Wolthuis et al. Aug 2016 A1
20160239770 Batabyal et al. Aug 2016 A1
20170163817 Ballai et al. Jun 2017 A1
20180198923 Ballai et al. Jul 2018 A1
Foreign Referenced Citations (28)
Number Date Country
1684587 Mar 1971 DE
0282126 Sep 1988 EP
1387239 Feb 2004 EP
1464418 Oct 2004 EP
1522922 Apr 2005 EP
1770586 Apr 2007 EP
2053869 Apr 2009 EP
2134107 Sep 1999 ES
2362489 Nov 2001 GB
10294788 Nov 1998 JP
2004166000 Jun 2004 JP
2004220118 Aug 2004 JP
2006319914 Nov 2006 JP
WO-9732448 Sep 1997 WO
WO-0131483 May 2001 WO
WO-0167219 Sep 2001 WO
WO-0219593 Mar 2002 WO
WO-0235486 May 2002 WO
WO-02052879 Jul 2002 WO
WO-2002087804 Nov 2002 WO
WO-03063411 Jul 2003 WO
WO-2006037492 Apr 2006 WO
WO-2009018489 Feb 2009 WO
WO-2009124223 Oct 2009 WO
WO-2010037064 Apr 2010 WO
WO-2010040010 Apr 2010 WO
WO-2010101935 Sep 2010 WO
WO-2011091085 Jul 2011 WO
Non-Patent Literature Citations (168)
Entry
U.S. Appl. No. 13/949,984 U.S. Pat. No. 8,737,962, filed Jul. 24, 2013, Method and System for Preventing Illicit Use of a Telephony Platform.
U.S. Appl. No. 14/253,316 U.S. Pat. No. 9,270,833, filed Apr. 15, 2014, Method and System for Preventing Illicit Use of a Telephony Platform.
U.S. Appl. No. 14/995,015 U.S. Pat. No. 9,614,972, filed Jan. 13, 2016, Method and System for Preventing Illicit Use of a Telephony Platform.
U.S. Appl. No. 15/440,908 U.S. Pat. No. 9,948,788, filed Feb. 23, 2017, Method and System for Preventing Illicit Use of a Telephony Platform.
U.S. Appl. No. 15/911,737 U.S. Pat. No. 10,469,670, filed Mar. 5, 2018, Method and System for Preventing Illicit Use of a Telephony Platform.
“[Proposed] Order Granting Defendant Telesign Corporation's Motion to Dismiss”, Twilio, Inc., v. Telesign Corporation, Case No. 5:16-cv-6925-LHK, Filed Jan. 25, 2017, 2 pgs.
“ActivCard”, [Online], Retrieved from the Internet: <URL: http://www.activcard.com:80/products/client/tokens/token.pdf>, (1998), 26 pgs.
“Aepona's API Monetization Platform Wins Best of 4G Awards for Mobile Cloud Enabler”, 4G World 2012 Conference & Expo, [Online]. [Accessed Nov. 5, 2015]. Retrieved from the Internet: <URL: https://www.realwire.com/releases/%20Aeponas-API-Monetization>, (Oct. 30, 2012), 4 pgs.
“U.S. Appl. No. 13/949,984, Notice of Allowance dated Feb. 19, 2014”, 10 pgs.
“U.S. Appl. No. 13/949,984, Response filed Jan. 30, 2014 to Restriction Requirement dated Dec. 30, 2013”, 8 pgs.
“U.S. Appl. No. 13/949,984, Restriction Requirement dated Dec. 30, 2013”, 6 pgs.
“U.S. Appl. No. 14/253,316, Examiner Interview Summary dated Aug. 12, 2015”, 3 pgs.
“U.S. Appl. No. 14/253,316, Non Final Office Action dated Mar. 25, 2015”, 9 pgs.
“U.S. Appl. No. 14/253,316, Notice of Allowance dated Oct. 14, 2015”, 8 pgs.
“U.S. Appl. No. 14/253,316, Response filed Aug. 17, 2015 to Non Final Office Action dated Mar. 25, 2015”, 10 pgs.
“U.S. Appl. No. 14/995,015, Corrected Notice of Allowance dated Dec. 16, 2016”, 2 pgs.
“U.S. Appl. No. 14/995,015, Examiner Interview Summary dated Sep. 7, 2016”, 3 pgs.
“U.S. Appl. No. 14/995,015, Non Final Office Action dated Jun. 24, 2016”, 11 pgs.
“U.S. Appl. No. 14/995,015, Notice of Allowance dated Nov. 23, 2016”, 13 pgs.
“U.S. Appl. No. 14/995,015, Response filed Sep. 9, 2016 to Non Final Office Action dated Jun. 24, 2016”, 12 pgs.
“U.S. Appl. No. 15/440,908, Examiner Interview Summary dated Sep. 18, 2017”, 3 pgs.
“U.S. Appl. No. 15/440,908, Non Final Office Action dated Jun. 13, 2017”, 19 pgs.
“U.S. Appl. No. 15/440,908, Notice of Allowance dated Oct. 16, 2017”, 8 pgs.
“U.S. Appl. No. 15/440,908, Notice of Allowance dated Dec. 6, 2017”, 8 pgs.
“U.S. Appl. No. 15/440,908, Preliminary Amendment filed May 17, 2017”, 7 pgs.
“U.S. Appl. No. 15/440,908, Response filed Sep. 13, 2017 to Non Final Office Action dated Jun. 13, 2017”, 8 pgs.
“U.S. Appl. No. 15/911,737, Corrected Notice of Allowability dated Jan. 31, 2019”, 2 pgs.
“U.S. Appl. No. 15/911,737, Corrected Notice of Allowability dated Apr. 15, 2019”, 2 pgs.
“U.S. Appl. No. 15/911,737, Examiner Interview Summary dated Sep. 26, 2018”, 2 pgs.
“U.S. Appl. No. 15/911,737, Non Final Office Action dated Apr. 9, 2018”, 11 pgs.
“U.S. Appl. No. 15/911,737, Notice of Allowance dated Jan. 17, 2019”, 8 pgs.
“U.S. Appl. No. 15/911,737, Notice of Allowance dated Jun. 26, 2019”, 8 pgs.
“U.S. Appl. No. 15/911,737, Notice of Allowance dated Sep. 19, 2018”, 8 pgs.
“U.S. Appl. No. 15/911,737, Response filed Jul. 9, 2018 to Non Final Office Action dated Apr. 9, 2018”, 10 pgs.
“Archive Microsoft Office 365 Email I Retain Unified Archiving”, GWAVA, Inc., Montreal, Canada, [Online] Retrieved from the Internet: <URL: http://www.gwava.com/Retain/Retain for_Office_365.php>, (2015), 4 pgs.
“ASB Bank selects RSA Mobile two-factor authentication for Internet security; Leading New Zealand bank to integrate RSA Mobile solution to expand business opportunities and enhance”, RSA Security, M2 Presswire; Coventry [Coventry], (Jun. 23, 2003), 4 pgs.
“Authenex”, [Online], Retrieved from the Internet: <URL: http://www.authenex.com:80/isaserver/pdf/psasas.pdf>, (2003), 34 pgs.
“Aventail partners with phone-based two-factor authentication company; Aventail and SecurEnvoy join forces to offer easy-to-use authentication from mobile devices for secure, remote access”, Aventail-M2 Presswire; Coventry [Coventry], (Dec. 7, 2005), 4 pgs.
“Carrierinfo—Product Guide”, Mapinfo Corporation, (2005), 36 pgs.
“CDyne Phone Verifier”, BACKGROUND_WEB_ARCHIVE, (2005), 4 pgs.
“Classifying m-payments—a user-centric model”, Proceedings of the Third International Conference on Mobile Business, M-Business, (2004), 11 pgs.
“Complaint for Patent Infringement”, Telinit Technologies, LLC v. Twilio Inc 2:12-cv-663, (Oct. 12, 2012), 17 pgs.
“Complaint for Patent Infringement—Jury Trial Demanded”, Twilio Inc., vs. Telesign Corporation, Case 3:16-cv-06925 Filed Dec. 1, 2016, 240 pgs.
“CRYPTO-Tokens”, CryptoCard, (2003), 12 pgs.
“Cyber Locator”, (1999), 7 pgs.
“Declaration of Jesse J. Camacho in Support of Defendant Telesign Corporation's Reply to Motion to Dismiss”, Twilio, Inc., v. Telesign Corporation, Case No, 5:16-cv-6925-LHK, Filed Feb. 15, 2017, 17 pgs.
“Defendant Telesign Corporation's Notice of Motion and Motion to Dismiss; Memorandum of Points and Authorities in Support Thereof”, Twilio, Inc., v. Telesign Corporation, Case No. 5:16-cv-6925-LHK, Filed Jan. 25, 2017, 32 pgs.
“Defendant Telesign Corporation's Reply in Support of Motion to Dismiss”, Twilio, Inc., v. Telesign Corporation, Case No. 5:16-cv-6925-LHK, Filed Feb. 15, 2017, 22 pgs.
“DIGIPASS® GO 1”, Vasco, (2001), 36 pgs.
“Diversinet”, MobiSecure, 2 pgs.
“Entrust”, Entrust TruePass™ Product Portfolio, 28 pgs.
“Ethernet to Token Ring Bridge”, Black Box Corporation, [Online] Retrieved from the Internet: <URL: http://blackboxcanada.com/resource/files/productdetails/17044.pdf>, (Oct. 1999), 2 pgs.
“eToken”, Aladdin Knowledge Systems, [Online]. Retrieved from the Internet: <URL: http://www.aladdin.com:80/etoken/products.asp>, (2005), 20 pgs.
“File History U.S. Pat. No. 8,351,369”, 295 pgs.
“File History U.S. Pat. No. 8,462,920 B2”, 322 pgs.
“File History U.S. Pat. No. 8,737,593”, 261 pgs.
“File History U.S. Pat. No. 8,755,376 B2”, 1084 pgs.
“Final Written Decision 35 U.S.C. § 318(a)”, Telesign Corporation v. Twilio Inc., Case IPR2017-01976, U.S. Pat. No. 8,837,465B2, (Mar. 6, 2019), 42 pgs.
“Final Written Decision 35 U.S.C. § 318(a)”, Telesign Corporation v. Twilio Inc., Case IPR2017-01977, U.S. Pat. No. 8,755,376B2, (Mar. 6, 2019), 51 pgs.
“Fone Finder”, (Feb. 4, 2005), 12 pgs.
“iKey 2032”, Personal USB Authentication and Encryption Token, [Online] Retrieved from the Internet: <URL: http://www.safenet-inc.com:80/library/3/iKey_2032.pdf>, (2005), 5 pgs.
“International Numbering Plans”, BACKGROUND_WEB_ARCHIVE, (2005), 1 pg.
“Maag Holdings Selects RSA Security to Help Protect its Real Estate Information System”, (2003), 5 pgs.
“Microsoft Targets Mobile Developers with Tools and Devices”, Mobile Business Advisor, (2003), 1 pg.
“Multi-Factor Authentication Employing Voice Biometrics and Existing Infrastructures”, Background_WEB_ARCHIVE_Authentify, (2005), 15 pgs.
“Open Service Access (OSA); Parlay X Web Services; Part 11: Audio Call (Parlay X 2)”, ETSI ES 202 391-11 V1.2.1, (Dec. 2006), 19 pgs.
“Open Service Access (OSA); Parlay X Web Services; Part 2: Third Party Call (Parlay X 2)”, ETSI ES 202 391-2 V1.2.1, (Dec. 2006), 18 pgs.
“Open Service Access (OSA); Parlay X Web Services; Part 3: Call Notification (Parlay X 2)”, ETSI ES 202 391-3 V1.2.1, (Dec. 2006), 23 pgs.
“Open Service Access (OSA); Parlay X Web Services; Part 4: Short Messaging (Parlay X 2)”, ETSI ES 202 391-4 V1.2.1, (Dec. 2006), 26 pgs.
“Open Service Access (OSA); Parlay X Web Services; Part 7: Account Management (Parlay X 2)”, ETSI ES 202 391-7 V1.2.1, (Dec. 2006), 22 pgs.
“Order Granting in Part and Denying in Part Defendant's Motion to Dismiss”, Twilio, Inc., v. Telesign Corporation, Case No. 16-CV-06925-LHK, Filed Mar. 31, 2017, 58 pgs.
“Order Granting in Part Defendant's Motion to Dismiss”, Twilio, Inc., v. Telesign Corporation, Case No. 16-CV-06925-LHK, Filed Apr. 17, 2017, 54 pgs.
“PhoneID Fraud Prevention”, Delivers real-time security intelligence and data on phone numbers around the world to enable greater assurance and security against fraudulent activity, (Jun. 15, 2015), 7 pgs.
“PhoneID Score”, PhoneID Score—TeleSign REST API v1.50 documentation, (Jun. 16, 2015), 10 pgs.
“PhoneID Standard”, PhoneID Standard—TeleSign REST API v1.50 documentation, (Jun. 16, 2015), 1-10.
“Plaintiff's Opposition to Defendant's Motion to Dismiss”, Twilio Inc., vs. Telesign Corporation, Case No. 5:16-CV-06925-LHK, Filed Feb. 8, 2017, 28 pgs.
“Q3 2002 RSA Security Earnings Conference Call—Final”, Dow Jones, (Oct. 16, 2002), 12 pgs.
“Q4 2002 RSA Security Earnings Conference Call—Final”, Dow Jones, (Jan. 23, 2003), 8 pgs.
“Requests”, TeleSign REST API v1.51 documentation, (Nov. 3, 2015), 1 pg.
“Resources”, TeleSign REST API v1.51 documentation, (Nov. 2, 2015), 2 pgs.
“Responses”, TeleSign REST API v1.51 documentation, (Nov. 3, 2015), 1 pg.
“Risk factor put on hold—Security Solutions—Data Under Siege—A special advertising report”, The Australian—Dow Jones, (Sep. 24, 2002), 1 pg.
“RSA launches authentication solutions”, The China Post—Dow Jones, (Sep. 14, 2002), 2 pgs.
“RSA Mobile”, Two-factor authentication for a mobile world, (Jun. 12, 2004), 6 pgs.
“RSA Mobile New Product Review”, (2002), 1 pg.
“RSA SecurID® Authentication”, A Better Value for a Better ROI, (2003), 34 pgs.
“RSA Security and iRevolution Join Forces to Offer Two-Factor Authentication For Companies Using Microsoft(R) Passport”, PR Newswire; New York. (Oct. 8, 2002), 4 pgs.
“RSA Security and Nocom launch new service in Scandinavia: Flexible and secure solution for user identification”, NASDAQ OMX—Dow Jones, (Sep. 9, 2003), 2 pgs.
“RSA Security Announces Third Quarter Results”, PR Newswire—Dow Jones, (Oct. 16, 2002), 10 pgs.
“RSA Security Helps Banca Popolare di Sondrio (Suisse) Differentiate Itself from the Competition”, PR Newswire; New York, (Apr. 15, 2003), 4 pgs.
“RSA Security technology helps make an innovative information management solution even more compelling to the marketplace”, Maag Holdings Ltd., (2004), 3 pgs.
“RSA Security Unveils Innovative Two-Factor Authentication Solution for the Consumer Market”, PR Newswire; New York, (Sep. 4, 2002), 5 pgs.
“RSA Security uses phones as security token. (Business)”, RCR Wireless News. 21.36, Academic OneFile, [Online] Retrieved from the Internet: <URL: http://link.galegroup.com/apps/doc/A91672329/AONE?u=otta35732&sid=AONE&xid=2f576581>, (Sep. 9, 2002), 1 pg.
“RSA(R) Mobile and RSA SecurID(R) Two-Factor Authentication Products Recognized by SC Magazine as Best of 2002”, PR Newswire—Dow Jones, (Dec. 12, 2002), 2 pgs.
“Saintlogin”, BACKGROUND_WEB_ARCHIVE, (2005), 3 pgs.
“Score()—TeleSign Python SDK documentation”, score(), (Jun. 16, 2015), 2 pgs.
“Scottrade Selects PassMark for Strong Mutual Authentication”, PassMark, (Oct. 11, 2005), 8 pgs.
“SecurAccess Overview Video”, Securenvoy—Date for Overview.swf, [Online]. [Accessed Jan. 20, 2005]. Retrieved from the Internet: <URL: www.securenvoy.com/animations/Overview.swf>, 14 pgs.
“SecurAccess User Guide Video”, Securenvoy—Date for UserGuide.swf, [Online]. [Accessed Sep. 30, 2004]. Retrieved from the Internet: <URL: http://www.securenvoy.com/animations/UserGuide.swf>, 17 pgs.
“SecurAccess Video”, Securenvoy—Date for SecurAccess.swf, [Online]. [Accessed May 5, 2006]. Retrieved from the Internet: <URL: http://www.securenvoy.com:80/animations/SecurAccess.swf>, 8 pgs.
“Securenvoy”, Secure Email, (2004), 6 pgs.
“SecurEnvoy SecurAccess”, Protecting Access from outside the perimeter, (2005), 6 pgs.
“SecurMail and SecurAccess”, Securenvoy, 1 pg.
“SIEMENS”, System Description HiPath 3000 Version 1.2-3.0, (2002), 762 pgs.
“Simple, secure access control for the Web”, using SafeWord™ PremierAccess, (Nov. 2001), 46 pgs.
“Smart Verify | TeleSign”, Smart Verify, (Nov. 3, 2015), 9 pgs.
“SMS Authentication”, RSA Security Inc. Published in ComputerWorld Sep. 23, 2002, Technology, p. 38, (Sep. 23, 2002), 1 pg.
“SMS Verify—TeleSign”, SMS Verify, (Nov. 3, 2015), 8 pgs.
“Taking security online to new level”, Dow Jones, (2005), 2 pgs.
“TeleSign's PhoneID Score Named a New Products Winner”, TeleSign, (Jun. 27, 2014), 4 pgs.
“Trailblazers: RSA Security (specializes in access management tools for internal security)”, Dow Jones, (2003), 1 pg.
“Twilio Cloud Communications—APIs for Voice, VoIP, and Text Messaging”, Twilio, [Online] Retrieved from the Internet: <URL: http://www.twilio.com/docs/api/rest/call-feedback>, (Jun. 24, 2015), 8 pgs.
“Unified Authentication”, Verisign, (Mar. 21, 2005), 196 pgs.
“Verify Registration—TeleSign REST API v1.51 documentation”, Verify Registration, (Nov. 3, 2015), 7 pgs.
“Voice Verify With Call Forward Detection”, TeleSign Verification APIs, (2015), 2 pgs.
“What's a Mobile Phone, anyway?”, Australian PC World; Off Camera Fun, (Jun. 2005), 1 pg.
“Wifi WatchDog”, Newbury Networks, (2006), 11 pgs.
Abu-Lebdeh, et al., “A 3GPP Evolved Packet Core-Based Architecture for QoS-Enabled Mobile Video Surveillance Applications”, 2012 Third International Conference on the Network of the Future (NOF), (Nov. 21-23, 2012), 1-6.
Barakovic, Sabina, et al., “Survey and Challenges of QoE Management Issues in Wireless Networks”, Hindawi Publishing Corporation, (2012), 1-29.
Bennett, Robert, “American business has to start thinking of data with the same reverence that it think of money!”, Griffin Technologies, LLC. White Paper, (Oct. 2001), 6 pgs.
Berners-Lee, T., “RFC 3986: Uniform Resource Identifier (URI): Generic Syntax”, The Internet Society, [Online]. Retrieved from the Internet: <URL: http://tools.ietf.org/html/rfc3986>, (Jan. 2005), 57 pgs.
Curphey, Mark, et al., “A Guide to Building Secure Web Applications: The Open Web Application Security Project”, (2002), 70 pgs.
Doyle, Eric, “RSA uses SMS to offer secure Web access anywhere”, (2002), 1 pg.
Fonseca, Brian, “RSA and Entrust Target Web services security returns”, Dow Jones, (Oct. 8, 2002), 2 pgs.
Forbes, Bob, “The Fifth Factor: Behavior Profiling Opens New Possibilities For Web Access Control”, Data Security Management, 8 pgs.
Fred, Piper, et al., “Identities and authentication”, Cyber Trust & Crime Prevention Project, (Apr. 6, 2004), 1-15.
Hill, Kashmir, “Your Phone Number Is Going To Get A Reputation Score Forbes”, Forbes, (Jun. 16, 2015), 4 pgs.
Hong, Sungjune, et al., “The semantic PARLAY for 4G network”, 2nd International Conference on Mobile Technology, Applications and Systems. IEEE, (2005), 5 pgs.
Jamieson, Rodger, et al., “A Framework for Security, Control and Assurance of Knowledge Management Systems”, School of Information Systems, Technology and Management, University of New South Wales, Sydney, Australia, Chapter 25, (2004), 29 pgs.
Jones, Dow, “Awakens To The Fact That Prevention Is Better Than Cure”, India Inc., (Mar. 31, 2003), 1 pg.
Jones, Dow, “Event Brief of Q3 2002 RSA Security Earnings Conference Call—Final”, (Oct. 16, 2002), 5 pgs.
Jones, Dow, “Make sure you're secure”, Bristol Evening Post, (Oct. 25, 2004), 2 pgs.
Jones, Dow, “Regulatory News Service (RNS)”, REG-iRevolution Group Announces Partnership, (Oct. 9, 2002), 2 pgs.
Jörg, Tacke, et al., “Two-Factor Web Authentication Via Voice”, VOICE.TRUST AG1, (2003), 88 pgs.
Kemshall, A., et al., “Two Factor Authentication”, securenvoy, White Paper, (2005), 8 pgs.
Kim, Hwa-Jong, et al., “In-Service Feedback QoE Framework”, 2010 Third International Conference on Communication Theory. Reliability and Quality of Service, (2010), 135-138.
Kotanchik, J, “Kerberos And Two-Factor Authentication”, (Mar. 1994), 6 pgs.
Kumar, Bharat, et al., “Breaking into Cyberia”, Business Line, Dow Jones, (Nov. 5, 2003), 4 pgs.
Lebihan, Rachel, “New online banking security plan in doubt”, The Australian Financial Review, Dow Jones, (Aug. 2, 2004), 2 pgs.
Lebihan, Rachel, “Still Fishing For Answer To Internet Scams”, The Australian Financial Review, Dow Jones, (2004), 3 pgs.
Louise, Richardson, “RSA Security”, Dow Jones, (Dec. 1, 2003), 2 pgs.
Mallery, John, “Who Are You? You just can't trust a username/password combo to verify user identity. It's time for two-factor”, Security Technology & Design, (Nov. 1, 2005), 4 pgs.
Matos, et al., “Quality of Experience-based Routing in Multi-Service Wireless Mesh Networks”, Realizing Advanced Video Optimized Wireless Networks. IEEE, (2012), 7060-7065.
McCue, Andy, “Networks—ISP trials security via SMS”, Computing, (Sep. 12, 2002), 1 pg.
McCue, Andy, “SMS Secures Online Apps”, ITWEEK, Dow Jones, (Sep. 9, 2002), 2 pgs.
McCue, Andy, “United Utilities pilots SMS security software”, VNUnet Newswire, Dow Jones, (Sep. 4, 2002), 2 pgs.
Messmer, Ellen, “HIPAA deadline ups healthcare anxiety”, Network World, (Mar. 10, 2003), 1 pg.
Mills, Kelly, “Security merger to boost banks”, The Australian—Dow Jones, (2005), 2 pgs.
Mizuno, Shintaro, et al., “Authentication Using Multiple Communication Channels”, (Nov. 11, 2005), 9 pgs.
Mu, Mu, et al., “Quality Evaluation in Peer-to-Peer IP Services”, Data Traffic and Monitoring Analysis, LNCS 7754, 302-319, (2013), 18 pgs.
Nguyan, Thien-Loc, “National Identification Systems”, (Jun. 2003), 156 pgs.
Nystrom, M, “The SecurID(r) SASL Mechanism”, RSA Laboratories, (Apr. 2000), 11 pgs.
O'Gorman, “Comparing Passwords, Tokens, and Biometrics for User Authentication”, In Proceedings: The IEEE, vol. 91, Issue 12, (Dec. 2003), 20 pgs.
O'Gorman, Lawrence, et al., “Call Center Customer Verification by Query-Directed Passwords”, 15 pgs.
Parthasarathy, P R, “Resolving Webuser on the Fly”, (Jun. 2002), 6 pgs.
Pullar-Strecker, Tom, “Asb Shuts Out Online Fraud”, (Sep. 27, 2004), 2 pgs.
Pullar-Strecker, Tom, “Auckland Security Firm Turns Heads”, (May 30, 2005), 3 pgs.
Pullar-Strecker, Tom, “NZ bank adds security online”, (Nov. 8, 2004), 1 pg.
Pullar-Strecker, Tom, et al., “NZ start-up plans authentication trial”, (Aug. 23, 2004), 3 pgs.
Scarlet, Pruitt, “RSA secures mobile access to Web apps”, Dow Jones—InfoWorld Daily News, (Sep. 4, 2002), 1 pg.
Subramanya, et al., “Digital Signatures”, IEEE Potentials, (Mar./Apr. 2006), 5-8.
Tran, et al., “User to User adaptive routing based on QoE”, ICNS 2011: The Seventh International Conference on Networking and Services, (2011), 170-177.
Tynan, Dan, “What's a Cell Phone, Anyway?”, PC World.Com ; San Francisco, (Mar. 23, 2005), 3 pgs.
Wall, Matthew, “Fight business marauders the high-tech way”, Sunday Times ; London (UK), (Sep. 18, 2005), 4 pgs.
Wolfe, Daniel, “For PassMark, Image Is Everything in Phish Foiling”, American Banker. 169.43, (Mar. 4, 2004), 2 pgs.
Wright, Rob, “Paramount Protection Vendors have devised new ways to safeguard information”, VARbusiness, (Oct. 28, 2002), 4 pgs.
Wu, Min, et al., “Secure Web Authentication with Mobile Phones”, DIMACS Workshop on Usable Privacy and Security Software, (Jul. 2004), 5 pgs.
Wullems, Chris, et al., “Enhancing the Security of Internet Applications using location : A New Model for Tamper-resistant GSM Location”, Proceedings of the Eighth IEEE International Symposium on Computers and Communication (ISCC'03), (2003), 9 pgs.
Related Publications (1)
Number Date Country
20200236222 A1 Jul 2020 US
Provisional Applications (1)
Number Date Country
61675156 Jul 2012 US
Divisions (1)
Number Date Country
Parent 13949984 Jul 2013 US
Child 14253316 US
Continuations (4)
Number Date Country
Parent 15911737 Mar 2018 US
Child 16584069 US
Parent 15440908 Feb 2017 US
Child 15911737 US
Parent 14995015 Jan 2016 US
Child 15440908 US
Parent 14253316 Apr 2014 US
Child 14995015 US