To access a protected network resource, such as stored data (e.g., cloud data) or a network service, a user typically must supply user access credentials (e.g., login credentials). The network service provider, or application server, through which the user accesses the protected network resource, associates the credentials of the user with an identity of the user. Cybercriminals, however, have developed techniques for hacking user credentials and fraudulently obtaining access to protected network resources associated with the identities of other users.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. The following detailed description does not limit the invention.
As described herein, a risk assessment platform assesses a level of risk of identity fraud associated with users attempting to access protected resources, such as, for example, stored data (e.g., cloud data) or network services. The risk assessment platform monitors static, dynamic or multi-factor authentication processes engaged in by a user, and collects and stores user and/or device attributes associated with the authentication processes. The risk assessment platform then performs a risk score calculation process to determine a level of risk of identity fraud associated with the user based on the collected user and/or device attributes. The determined risk score for the user may be used by the risk assessment platform, or other application servers, for granting or denying the user access to protected resources.
Static authentication may include execution of an authentication process that verifies the identity of the user 105 without performing any computations. One example of a static authentication process includes a user 105 supplying login credentials (e.g., login ID and/or password) that are compared to known login credentials to verify the identity of the user 105. Dynamic authentication may include execution of an authentication process that verifies the identity of the user 105 by performing dynamic computations. Various different types of existing dynamic authentication processes may be used. One example of a dynamic authentication process includes a challenge-response protocol in which a challenge is sent to the user, and the user's device 110 responds by computing a cryptographic function that uses the challenge and secret data (e.g., a secret key) stored at the device. The challenge-response protocol may be based on, for example, a digital signature computing algorithm. Multi-factor (MF) authentication may include execution of an authentication protocol that verifies the identity of the user 105 by obtaining two or more pieces of evidence (or factors) and subjecting them to an authentication mechanism. An example of two-factor authentication includes verifying a user 105's identity by using something the user 105 knows (e.g., a password) and a second factor, such as the user 105 repeating something that was sent to them through an out-of-band mechanism (e.g., a verification code sent to the user 105 via an email).
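For illustration, below is a minimal sketch of a dynamic challenge-response exchange of the kind described above, assuming an HMAC over a random challenge and a pre-provisioned shared secret; the digital-signature-based variant mentioned above would substitute a signing key pair for the shared key. The function names are illustrative and not drawn from the platform.

```python
# Minimal sketch of a challenge-response exchange, assuming an HMAC-based
# response function stands in for the cryptographic computation described above.
import hmac
import hashlib
import os

def issue_challenge() -> bytes:
    """Server side: generate a fresh random challenge (nonce)."""
    return os.urandom(32)

def compute_response(challenge: bytes, secret_key: bytes) -> bytes:
    """Device side: combine the challenge with secret data stored at the device."""
    return hmac.new(secret_key, challenge, hashlib.sha256).digest()

def verify_response(challenge: bytes, response: bytes, secret_key: bytes) -> bool:
    """Server side: recompute the expected response and compare in constant time."""
    expected = hmac.new(secret_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Example round trip
shared_secret = os.urandom(32)          # assumed to be provisioned to device 110 ahead of time
challenge = issue_challenge()
response = compute_response(challenge, shared_secret)
assert verify_response(challenge, response, shared_secret)
```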
The user information attributes of the collected attributes may include data associated with the user 105, or with an account of the user 105. The user information attributes may include, for example, an account profile or a settings profile of an account of the user 105, account information associated with the user 105 (e.g., name, address, phone number, email address, billing address, service address, account age, privacy settings, payment information, etc. of the user 105's account), and/or biometric information associated with the user 105 (e.g., a “selfie” picture, a fingerprint, an audio file of user 105 speaking). The user behavior attributes of the collected attributes may include, for example, actions taken by the user 105 during the static, dynamic, and/or multi-factor authentication. The actions may include, for example, input actions taken by user 105 upon device 110, and/or on-line actions taken by user 105 via device 110 (e.g., as evidenced by signaling sent from device 110). The device information attributes of the collected attributes may include information associated with the device 110 such as, for example, device profile data, Mobile Directory Number (MDN), International Mobile Subscriber Identity (IMSI), Subscriber Identity Module (SIM) ID, International Mobile Equipment Identifier (IMEI), Mobile Equipment Identifier (MEID), device operational characteristics, device activity, device location (e.g., GPS coordinates), etc. The network information attributes of the collected attributes may include, for example, a network address (e.g., Internet Protocol (IP) address, Port address, Medium Access Control (MAC) address, etc.) of device 110 used by user 105 for performing the static, dynamic, and/or multi-factor authentication.
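As a rough illustration only, the attribute categories described above could be grouped into a single per-event record along the following lines; the field names are assumptions for the sketch and are not drawn from the platform.

```python
# Illustrative grouping of the attribute categories described above.
# Field names are assumptions for the sketch, not names used by the platform.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserInfoAttributes:
    name: Optional[str] = None
    email_address: Optional[str] = None
    billing_address: Optional[str] = None
    account_age_days: Optional[int] = None
    biometric_reference: Optional[bytes] = None   # e.g., a stored fingerprint template

@dataclass
class DeviceInfoAttributes:
    mdn: Optional[str] = None                     # Mobile Directory Number
    imsi: Optional[str] = None
    imei: Optional[str] = None
    sim_id: Optional[str] = None
    gps_location: Optional[tuple] = None          # (latitude, longitude)

@dataclass
class CollectedAttributes:
    user_info: UserInfoAttributes = field(default_factory=UserInfoAttributes)
    device_info: DeviceInfoAttributes = field(default_factory=DeviceInfoAttributes)
    user_behavior: list = field(default_factory=list)   # actions taken during authentication
    network_info: dict = field(default_factory=dict)    # e.g., {"ip": ..., "mac": ...}
```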
Subsequent to collection of the attributes/information associated with a user 105 engaging in the static, dynamic authentication, and/or multi-factor authentication, the authentication server 120 passes the collected attributes (identified with a “2” within a circle) to a risk score engine 125 of risk assessment platform 100. Risk score engine 125 uses a process for determining a risk score that identifies a level of risk of identity fraud associated with user 105 attempting to access protected digital resources, based on the collected attributes. The risk score, therefore, represents a score that serves as a proxy for identifying whether or not the user 105 is likely to be the user/person/entity that user 105 is claiming to be when attempting to access digital resources. The risk score determination process, in one implementation, may calculate a weighted sum associated with the collected attributes to determine a risk score associated with user 105, as described in further detail below. Risk score engine 125 may, in one implementation, include a machine learning system that uses a Bayesian computation set for determining the risk score for each user 105. Upon determination of the current risk score for user 105, risk score engine 125 passes the risk score (identified with a “3” within a circle) to authentication server(s) 120 which, in turn, passes the risk score (identified with a “4” within a circle) to policy manager 130.
Policy manager 130, upon receipt of the risk score associated with a user 105, compares the risk score with a policy threshold score or policy score range, previously set by, for example, an administrator, to determine whether the risk score indicates a risk failure, a risk passage, or an “attention required” state. A risk failure/denial (identified with a “5” within a circle) indicates that the determined risk score for the user 105 is too high, and that the attempt to access protected digital resources should be denied. The denial of the access attempt should be reported by policy manager 130 to a machine learning process/function 145 of risk score engine 125. A risk passage (identified with a “6” within a circle) indicates that the determined risk score for the user 105 is sufficiently low, and that the attempt to access protected digital resources should be granted. An “attention required” state (identified with a “7” within a circle) indicates that the determined risk score for the user 105 is ambiguous (i.e., neither too high nor too low), and that a further security measure(s) may be applied to the user 105's attempt to access the protected digital resources. The further security measure(s) may include triggering a re-authentication process 140, such as, for example, step-up authentication or multi-factor authentication, for user 105 by authentication server(s) 120. Completion of the extra security measure(s) by authentication server(s) 120 results in a denial or grant of access to the protected digital resource based on the results of the extra security measure. If completion of the extra security measure results in a denial of access, the failed case is reported to the machine learning process 145 of risk score engine 125.
As further shown in the exemplary overview of
Upon receipt of the user verification request, authentication server(s) 120 may retrieve a most recent determined risk score associated with the user 105, or may determine a completely new risk score for the user 105 based on previously collected attributes, and may return a user verification result (identified with a “B” within a circle) to the requesting app server 135 that includes an indication of the identity fraud risk. The indication of identity fraud risk may be a quantitative or qualitative indication of the likelihood that the user 105 is not the person/user/entity that the user 105 is claiming to be.
Risk assessment platform 100 includes one or more network devices that implement the user authentication server 120, the risk score engine 125, and the policy manager 130 of
Network(s) 220 may include one or more wired or wireless networks of various types including, for example, one or more wired telecommunications networks (e.g., Public Switched Telephone Networks (PSTNs)), one or more wireless networks (e.g., a Public Land Mobile Network(s) (PLMN(s)), a satellite network(s)), the Internet, a wired and/or wireless local area network (LAN), a wired and/or wireless wide area network (WAN), a wired and/or wireless metropolitan area network (MAN), an intranet, or a cable network (e.g., an optical cable network).
The configuration of network components of network environment 200 is shown in
Processing unit 320 may include one or more processors or microprocessors which may interpret and execute stored instructions associated with one or more processes, or processing logic that implements the one or more processes. In some implementations, processing unit 320 may include, but is not limited to, programmable logic such as Field Programmable Gate Arrays (FPGAs) or accelerators. Processing unit 320 may include software, hardware, or a combination of software and hardware, for executing the processes described herein. Main memory 330 may include a random access memory (RAM) or another type of dynamic storage device that may store information and instructions for execution by processing unit 320. ROM 340 may include a ROM device or another type of static storage device that may store static information and instructions for use by processing unit 320. Storage device 350 may include a magnetic and/or optical recording medium. Main memory 330, ROM 340 and storage device 350 may each be referred to herein as a “tangible non-transitory computer-readable medium” or “non-transitory storage medium.” The processes/methods set forth herein can, in some implementations, be implemented as instructions that are stored in main memory 330, ROM 340 and/or storage device 350 for execution by processing unit 320.
Input device 360 may include one or more mechanisms that permit an operator to input information into device 300, such as, for example, a keypad or a keyboard, a display with a touch sensitive screen or panel, voice recognition and/or biometric mechanisms, etc. Output device 370 may include one or more mechanisms that output information to the operator, including a display, a speaker, etc. Input device 360 and output device 370 may, in some implementations, be implemented as a user interface (UI) (e.g., a touch screen display) that displays UI information and which receives user input via the UI. Communication interface(s) 380 may include a transceiver that enables device 300 to communicate with other devices and/or systems. For example, communication interface(s) 380 may include a wired or wireless transceiver(s) for communicating via network(s) 220.
The configuration of components of device 300 illustrated in
In a second authentication process 400-2, the user 105 may use an application (app), or web browser, to log into an on-line account or network service. The network device(s) receiving the user login may, during authentication, collect attributes such as account/user behavior 430, a device profile 435, or additional verification 440 attributes. Account/user behavior 430 may include attributes associated with the user 105's on-line activity or behavior while logging into, or while logged into, the account or network service. Device profile 435 may include attributes associated with the device 110 that user 105 uses to log into the account or network service. The additional user verification attributes 440 may include additional attributes, other than those already described, that may be obtained during verification of the identity of the user that has logged into the account or network service. The attributes collected from the account/network service login authentication process 400-2 may be supplied to risk assessment platform 100.
In another authentication process 400-x, the user 105 may physically present themselves, in person, at a “brick and mortar” store to, for example, purchase a product or service, or to obtain assistance with a malfunctioning product. While in the store, a photo match process 450 may be performed, and a location 455 of user 105's device 110 may be determined as matching the location of the store. The photo match process may, in one instance, include manual comparison, by a store employee, of the user 105's picture ID with a previously stored image of the user 105. In another instance, the photo match process may include using an image scanning system to scan the user 105's picture ID, and the image scanning system may compare the scanned picture of the user 105 with a previously stored image of the user 105. Additionally, a “push to verify” (also called “touch ID” herein) process 460 may be initiated by which a pop-up message is sent to the user 105's device 110, requesting that the user 105 push a button on the device 110 to verify receipt of the message. Pushing of the button upon the user 105's device may, in itself, be used as a verification of the identity of user 105. In other implementations, when the button is pushed by the user 105 upon the device 110 (e.g., with a particular finger), a biometric scan of the user 105's fingerprint may be taken via the touch screen of the device 110. The scanned fingerprint may then be used for verification of the identity of the user 105 such as by comparison with a previously stored biometric scan of the user 105's fingerprint.
As further shown in
Various types of attribute icons may be displayed in user interface 465, including, for example, a phone number icon, a device icon, a photo ID icon, a location icon, a payment method icon, a social media icon, and a touch ID icon. A highlighted phone number icon indicates that a phone number of the device 110 which the user 105 is currently using matches the phone number on record for that user 105. A highlighted device icon indicates that the device information, obtained from the network providing service to the user 105's device 110, matches device information of the device 110 known to be used by the user 105. A highlighted photo ID icon indicates that the user 105 has presented a photo ID that matches the known identity of the user 105. A highlighted location icon indicates that a location of the user 105 matches a determined location of the device 110 known to be used by the user 105. The location of the device 110 may be determined using, for example, Global Positioning System (GPS) data obtained by the device 110. A highlighted payment method icon indicates that the user 105 has attempted to use a payment method known to be associated with the user 105 (e.g., a particular credit card number, with a particular security code obtained from the back of the credit card). A highlighted social media icon indicates that the user 105's information provided during the current transaction matches known social media information. A highlighted touch ID icon indicates that the touch ID process, described further below with respect to
User interface 465 may further display a risk score 478, including an associated risk score scale 475, and a flag 480 indicating whether or not a high risk of identity fraud is predicted for the user identified by name 470 and phone number 473. The risk score 478, and the risk score scale 475, present a quantified number (e.g., scaled between 0 and 100) that indicates a level of risk of identity fraud associated with the user 105 based on the collected attributes. For example, in some implementations, a higher risk score may indicate a higher risk of identity fraud, whereas a lower risk score may indicate a lower risk of identity fraud. The determined risk score may be compared, by risk assessment platform 100, with a risk policy threshold (e.g., set by an operator or administrator) to determine whether the risk score is higher or lower than the risk policy threshold. If, for example, the risk score is higher than the risk policy threshold, then the flag 480 may be set (i.e., indicating an unacceptable level of risk of identity fraud) and displayed in user interface 465.
User interface 465 may additionally present “take action” option buttons to the operator or administrator, including a button 485 for referring the user 105 to a fraud specialist, a button 488 for initiating a “touch ID” process (described below with respect to
The exemplary process includes user authentication server 120 receiving an indication of a user authentication event associated with a user's attempt to access a protected resource (block 500), and authentication server 120 conducting static, dynamic, and/or multi-factor authentication of the user 105 and the user's device 110 based on the user authentication event (block 505). The user 105, for example, may attempt, using device 110 over network(s) 220, to access data stored by an app server 135, or to access a network service provided by app server 135, and user authentication server 120 may, upon notification of the authentication event, initiate a static, dynamic, and/or multi-factor authentication process to verify the identity of the user 105.
User authentication server 120 collects user/device attributes associated with the user authentication (block 510), and stores the collected user/device attributes in a database (block 515). The collected user/device attributes may include user, device and/or network attributes associated with the user 105, the user's device 110, or the network(s) 220 through which the user's device 110 is connected. The collected user/device attributes may include, but are not limited to, the user 105's name, the user's address, the user's phone number (e.g., MDN), the user's account information (e.g., account age, account number, account status, customer type, call forwarding indicator, account email address, account primary phone, billing address, service address, privacy settings, etc.), the user's device 110 information (e.g., Mobile Equipment Identifier (MEID), International Mobile Equipment Identity (IMEI), International Mobile Subscriber Identifier (IMSI), Subscriber Identity Module (SIM) ID, prepaid device, burner device), the user's personal information (e.g., date of birth, gender, height), a network location of device 110 (e.g., IP address of the user's device 110), biometric data of the user 105 (e.g., “selfie” picture taken of the user 105, a biometric scan of user's fingerprint), device usage information, device location (e.g., using GPS), payment information (e.g., payment method, payment method type, card owner name, card type, card number, card expiration date, account owner name, bank account routing number, bank account number), and device profile information (e.g., SIM change date, device change date, MDN change date, equipment status change date, device make/model, SIM swap, device swap, roaming, roaming country).
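As a sketch of block 515 under assumed storage details (SQLite and a JSON column are illustrative choices, not the platform's actual database), the collected attributes for an authentication event might be persisted as follows.

```python
# Minimal sketch of block 515: persisting a collected attribute record.
# The table layout and use of SQLite are assumptions for illustration only.
import json
import sqlite3
import time

conn = sqlite3.connect("collected_attributes.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS auth_attributes (
           user_id TEXT,
           event_time REAL,
           attributes TEXT            -- JSON-serialized user/device/network attributes
       )"""
)

def store_attributes(user_id: str, attributes: dict) -> None:
    """Store one authentication event's collected attributes (block 515)."""
    conn.execute(
        "INSERT INTO auth_attributes (user_id, event_time, attributes) VALUES (?, ?, ?)",
        (user_id, time.time(), json.dumps(attributes)),
    )
    conn.commit()

store_attributes("user-105", {"mdn": "555-0100", "ip": "203.0.113.7", "sim_swap": False})
```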
User authentication server 120 sends the collected user/device attributes to risk score engine 125 (block 520), and risk score engine 125 determines an updated risk score for the user based on the collected user/device attributes (block 525) and sends the updated risk score to user authentication server 120 (block 530). Risk score engine 125 may, in some implementations, implement machine learning process 145 that can use a Bayesian computation set to calculate the risk score for the user 105. The calculation may include, in one implementation, calculating sums of weights corresponding to the collected attributes for a user authentication event i. Further details of one exemplary implementation of the updated risk score determination of block 525 are described below with respect to the exemplary process of
User authentication server 120 checks with policy manager 130 for a comparison of the received updated risk score with a policy threshold(s) (block 535). The policy thresholds may be established by, for example, an operator or administrator of risk assessment platform 100. The policy threshold value may be a pre-set value that may be changed manually by the operator or administrator, or the policy threshold may be determined by an algorithm that adjusts the policy threshold value based on the varying risk environment.
User authentication server 120 receives an authentication denial, passage, or “attention required” indication from policy manager 130 (block 540). If comparison of the updated risk score with the policy threshold indicates, for example, that the updated risk score is less than a first policy threshold (as an example, on a scale of 0-100, the first policy threshold may equal 40), then policy manager 130 may issue an authentication passage. If comparison of the updated risk score with the policy threshold indicates, for example, that the updated risk score is greater than a second policy threshold (as an example, on a scale of 0-100, the second policy threshold may equal 75), then policy manager 130 may issue an authentication denial/failure. If comparison of the updated risk score with the policy threshold indicates, for example, that the updated risk score is between the first policy threshold and the second policy threshold, then policy manager 130 may issue an “attention required” notification.
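A minimal sketch of this comparison, using the example thresholds from the text (pass below 40, deny above 75, “attention required” otherwise), might look like the following; the threshold values and function name are illustrative.

```python
# Sketch of the policy comparison in blocks 535-545, using the example
# thresholds from the text (pass below 40, deny above 75, otherwise review).
PASS_THRESHOLD = 40    # first policy threshold
DENY_THRESHOLD = 75    # second policy threshold

def evaluate_risk_score(risk_score: float) -> str:
    """Map an updated risk score (0-100) to an authentication outcome."""
    if risk_score < PASS_THRESHOLD:
        return "PASS"                 # grant access (block 560)
    if risk_score > DENY_THRESHOLD:
        return "DENIAL"               # deny access (block 550)
    return "ATTENTION_REQUIRED"       # trigger an extra security measure (block 570)

assert evaluate_risk_score(25) == "PASS"
assert evaluate_risk_score(90) == "DENIAL"
assert evaluate_risk_score(50) == "ATTENTION_REQUIRED"
```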
If the user authentication server 120 receives an authentication denial (DENIAL—block 545), then user authentication server 120 denies the user access to the resource (block 550), and sends data associated with the authentication denial to risk score engine 125 as a machine learning training sample (block 555).
If the user authentication server 120 receives an authentication passage indication (PASS—block 545), then user authentication server 120 grants the user access to the resource (block 560)(
If the user authentication server 120 receives an “attention required” indication (ATTENTION REQUIRED—block 545), then user authentication server 120 initiates an extra security measure (block 570)(
In one implementation, upon touching of the touch region 830, signaling/messaging may be returned to risk assessment platform 100 notifying platform 100 that the user 105's identity has been confirmed. In another implementation, upon touching of the touch region 830, a biometric scan may be taken of the user 105's fingerprint, and the data associated with the scan may be returned to risk assessment platform 100 for comparison with previously stored biometric data. If the comparison indicates a match between the previously stored biometric fingerprint data and the current scanned data, then risk assessment platform 100 may conclude that the user 105's identity has been confirmed.
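The two confirmation paths described above might be sketched as follows; the similarity matcher and threshold are placeholders, since real fingerprint verification depends on a biometric matching engine that the text does not specify.

```python
# Sketch of the two confirmation paths described above. The similarity
# matcher and threshold are placeholders, not the platform's biometric engine.
from typing import Callable

def confirm_by_button_press(press_received: bool) -> bool:
    """Path 1: the button press on device 110 itself confirms the identity."""
    return press_received

def confirm_by_fingerprint(scanned_template: bytes,
                           stored_template: bytes,
                           similarity: Callable[[bytes, bytes], float],
                           threshold: float = 0.9) -> bool:
    """Path 2: compare the fresh scan with previously stored biometric data.

    `similarity` is any matcher returning a score in [0, 1]; identity is
    treated as confirmed when the score meets the (assumed) threshold.
    """
    return similarity(scanned_template, stored_template) >= threshold
```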
If the user passes the extra security measure (YES—block 575), then user authentication server 120 grants the user access to the resource (block 580), and sends data associated with passing the extra security measure to risk score engine 125 as a machine learning training sample (block 585). Referring to the example “touch ID” process of
If the user does not pass the extra security measure (NO—block 575), then user authentication server 120 denies the user access to the resource (block 550), and sends data associated with the authentication denial to risk score engine 125 as a machine learning training sample (block 555). Referring again to the example “touch ID” process of
The exemplary process of
The exemplary process includes risk score engine 125 obtaining a set of attributes [a_{i,1} a_{i,2} a_{i,3} . . . ] for the current authentication event i (block 900). For each authentication event i, risk score engine 125 may receive a set of attributes that includes multiple different attributes a_{i,1}, a_{i,2}, a_{i,3}, etc. associated with the user 105 engaging in a static, dynamic or multi-factor authentication process. In a simplified example, the set of attributes may include four attributes: the user's physical location, the profile/settings of the user's account, a device profile of the user 105's device 110, and biometric data of the user 105. In this simplified example, the set of attributes [a_{i,1} a_{i,2} a_{i,3} a_{i,4}] may equal (location_i, account profile_i, device profile_i, biometric_i) for authentication event i.
Risk score engine 125 compares the set of attributes of the current authentication event i with a set of attributes [a_{i−1,1} a_{i−1,2} a_{i−1,3} . . . ] from a previous authentication event(s) i−1 to identify one or more attributes that have changed (block 905). At one or more previous occurrences of an authentication event, risk score engine 125 may have received a corresponding set of attributes. Referring again to the simplified example, the set of previous attributes may include: the user's previous physical location, the previous account profile/settings of the user's account, the previous device profile of the user 105's device 110, and the previous biometric data of the user 105. Therefore, in the simplified example, the set of previous attributes [a_{i−1,1} a_{i−1,2} a_{i−1,3} a_{i−1,4}] may equal (location_{i−1}, account profile_{i−1}, device profile_{i−1}, biometric_{i−1}) for authentication event i−1. In some implementations, instead of using the attribute set associated with the previous authentication event i−1, risk score engine 125 may perform a “look back” at the attribute set associated with the xth previous authentication event (i−x), or may perform a “look back” at all of the attribute sets between the current attribute set and the xth previous attribute set. Risk score engine 125 may compare any of the collected user behavior attributes, user information attributes, device information attributes, and network information attributes, such as those described above with respect to
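A minimal sketch of the comparison in block 905, assuming the attributes arrive as simple name/value pairs and using the simplified four-attribute example, might look like this.

```python
# Sketch of block 905: comparing the current attribute set with a previous
# one to find which attributes changed. Attribute names and values follow
# the simplified four-attribute example in the text and are illustrative.
def changed_attributes(current: dict, previous: dict) -> set:
    """Return the names of attributes whose values differ between events."""
    return {name for name, value in current.items() if previous.get(name) != value}

attrs_i = {"location": "Boston", "account_profile": "v2",
           "device_profile": "phone-A", "biometric": "template-1"}
attrs_i_minus_1 = {"location": "Chicago", "account_profile": "v1",
                   "device_profile": "phone-A", "biometric": "template-1"}

print(changed_attributes(attrs_i, attrs_i_minus_1))
# {'location', 'account_profile'} (set ordering may vary)
```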
Risk score engine 125 determines an applicability factor m_x, having a value of either zero or one, for each attribute a_x of the set of attributes [a_{i,1} a_{i,2} a_{i,3} . . . ] based on whether the attribute a_x has changed relative to the previous authentication event i−1 and based on results of the machine learning process 145 (block 910). Thus, an applicability factor m_x for each attribute a_x may be set either to zero or one depending on whether that attribute a_{i,x} has changed relative to a previous event's attribute a_{i−1,x}, and also based on machine learning process 145. Referring to the simplified example, the current attribute set (location_i, account profile_i, device profile_i, biometric_i) is compared to the previous attribute set (location_{i−1}, account profile_{i−1}, device profile_{i−1}, biometric_{i−1}) to determine which attributes in the current set do not match corresponding attributes in the previous set.
For example, if the user 105's physical location at authentication event i does not match the user 105's physical location at authentication event i−1, then the applicability factor m_1 for the physical location attribute a_{i,1} may be set to a value of one, and the applicability factors m_x of other attributes, that have not changed, may be set to a value of zero. If, as another example, the user 105's account/settings profile at authentication event i additionally does not match the user 105's account/settings profile at authentication event i−1, then the applicability factor m_2 for the account profile attribute a_{i,2} may be set to a value of one, and the applicability factors m_x of other attributes, that have not changed, may be set to a value of zero. The results of machine learning process 145 may, however, be used to alter the applicability factor assigned to an attribute a_x (e.g., changing an attribute's applicability factor from zero to one, or from one to zero). In one implementation, machine learning process 145 may use a Bayesian machine learning algorithm that incorporates feedback from previous authentication events to adjust the applicability factors m_x for each attribute a_x of the current attribute set.
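The text does not specify the Bayesian computation, so the following is only one possible sketch, assuming a Beta-Bernoulli model per attribute in which feedback from confirmed denials and passages adjusts whether a changed attribute keeps an applicability factor of one; the prior, feedback counts, and cut-off value are assumptions.

```python
# One possible Bayesian adjustment, assuming a Beta-Bernoulli model per
# attribute; the platform's actual "Bayesian computation set" is not
# specified, so the prior, feedback counts, and cut-off below are assumptions.
def fraud_probability(fraud_count: int, total_count: int,
                      prior_alpha: float = 1.0, prior_beta: float = 1.0) -> float:
    """Posterior mean probability that a change in this attribute indicates fraud."""
    return (prior_alpha + fraud_count) / (prior_alpha + prior_beta + total_count)

def adjust_applicability(changed: bool, fraud_count: int, total_count: int,
                         cutoff: float = 0.2) -> int:
    """Start from the raw change indicator, then let feedback override it."""
    m = 1 if changed else 0
    if m == 1 and fraud_probability(fraud_count, total_count) < cutoff:
        m = 0   # changes in this attribute have rarely coincided with fraud
    return m

# e.g., location changes were followed by confirmed fraud in 2 of 40 past events
print(adjust_applicability(changed=True, fraud_count=2, total_count=40))  # 0
```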
Risk score engine 125 determines a weight W_x for each attribute a_x of the set of attributes [a_{i,1} a_{i,2} a_{i,3} . . . ] for the current authentication event i based on results of the machine learning process (block 915). Thus, risk score engine 125 determines a weight W_1 for attribute a_1, a weight W_2 for attribute a_2, a weight W_3 for attribute a_3, etc. In one implementation, each weight W_x for each attribute a_x may be initially set by an administrator of risk assessment platform 100, with each weight W_x being dynamically adjusted, over time, based on machine learning process 145. Returning to the simplified example, a weight (W_1) of 30 may be specified for the physical location attribute, a weight (W_2) of 20 may be specified for the account profile attribute, a weight (W_3) of 20 may be specified for the device profile attribute, and a weight (W_4) of 30 may be specified for the biometric data attribute.
Risk score engine 125 determines the user 105's updated risk score:
RISK SCORE_i = m_1*W_1 + m_2*W_2 + m_3*W_3 + . . .    Eqn. (1)
where m_x is the applicability factor (having a value of either 0 or 1, as determined in block 910) for attribute a_{i,x}, and W_x is the determined weight (determined in block 915) for attribute a_{i,x}. Returning again to the simplified example, if the user 105's physical location (attribute a_{i,1}) at authentication event i does not match the user 105's physical location (attribute a_{i−1,1}) at authentication event i−1, and the user 105's account/settings profile (attribute a_{i,2}) at authentication event i additionally does not match the user 105's account/settings profile (attribute a_{i−1,2}) at authentication event i−1, then the applicability factor m_1 for the physical location attribute a_{i,1} may be set to a value of one and the applicability factor m_2 for the account/settings profile attribute a_{i,2} may be set to a value of one. Additionally, the applicability factor m_3 for the device profile attribute a_{i,3}, and the applicability factor m_4 for the biometric data attribute a_{i,4}, may both be set to zero. Therefore, with weights [W_1 W_2 W_3 W_4] being set (either by an administrator, or via dynamic adjustment by machine learning process 145) to [30 20 20 30], the updated risk score may be calculated as (1*30)+(1*20)+(0*20)+(0*30)=50. Calculation of the updated risk score concludes the exemplary process of
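A short sketch of Eqn. (1), reproducing the simplified example above (applicability factors [1 1 0 0] and weights [30 20 20 30] yielding a risk score of 50), might look like this.

```python
# Sketch of Eqn. (1) using the simplified four-attribute example: weights
# [30, 20, 20, 30] and applicability factors [1, 1, 0, 0] give a score of 50.
def risk_score(applicability: list, weights: list) -> int:
    """RISK SCORE_i = m_1*W_1 + m_2*W_2 + m_3*W_3 + ..."""
    return sum(m * w for m, w in zip(applicability, weights))

# location and account profile changed; device profile and biometric did not
m = [1, 1, 0, 0]
W = [30, 20, 20, 30]
print(risk_score(m, W))   # 50
```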
The foregoing description of implementations provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while series of blocks have been described with respect to
Certain features described above may be implemented as “logic” or a “unit” that performs one or more functions. This logic or unit may include hardware, such as one or more processors, microprocessors, application specific integrated circuits, or field programmable gate arrays, software, or a combination of hardware and software.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
To the extent the aforementioned embodiments collect, store, or employ personal information of individuals, it should be understood that such information shall be collected, stored, and used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information can be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information. Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information.
In the preceding specification, various preferred embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.