A conventional remote server may require the owner of an account to authenticate before granting login access to the owner. Along these lines, the owner may need to supply a username and password which the remote server compares to an expected username and password. If there is a match, the remote server grants the owner login access to the account. However, if there is not a match, the remote server denies login access to the account.
During such operation, if a certain number of unsuccessful login attempts is made, the remote server may lock out the account (i.e., deny login access) until a system manager resets the account or for a set length of time. For example, suppose that a malicious person attempts to log in to the account by trying different passwords in the hope of guessing correctly. If the limit on the number of failed login attempts is reached within some amount of time (e.g., three failed login attempts within a two-minute period), the remote server locks out the malicious person by preventing further login attempts to that account.
Unfortunately, there are deficiencies to the above-described conventional approach to locking out accounts in response to unsuccessful login attempts. For example, an attacker may try to avoid detection by attempting to log in just a few times to each of several accounts on the remote server. In this situation, the attacker may have the usernames of several account holders and hope to correctly guess the password to one of their accounts. If the attacker does not exceed the lockout limit (i.e., the threshold of failed login attempts that must be exceeded before the remote server locks out an account), the attacker's malicious activity, which can be referred to as a “touch the fence” style of attack, will go undetected.
In contrast to the above-described conventional approach to locking out an account on a remote server when the limit on the number of failed login attempts is reached during some amount of time, improved techniques are directed to authentication which involves velocity metrics identifying authentication performance for a set of authentication request sources (e.g., computerized devices, IP addresses, etc.). An example of such a velocity metric is the number of failed authentication attempts during a particular amount of time from a particular source device. If a malicious person attempts to authenticate using different usernames and passwords, there will be an increase in the number of failed authentication attempts (or an increase in the failure rate) from that source device. Accordingly, the malicious person's activity is detectable even if the malicious person tries to log in just a few times to several accounts in a “touch the fence” style of attack. Suitable actions in response to such detection include locking out the particular source device, locking out further authentication attempts across the entire system, placing the source device on a blacklist or providing a similar notification to devices of a fraud mitigation network, and so on.
One embodiment is directed to a method of performing authentication. The method includes performing a set of authentication operations in response to a set of authentication requests, and updating a set of velocity metrics which identifies authentication performance for a set of authentication request sources that originated the set of authentication requests. The method further includes, after updating the set of velocity metrics, receiving an authentication request from an authentication request source, and providing an authentication result in response to the authentication request from the authentication request source. The authentication result (i) is based on the set of velocity metrics and (ii) indicates whether the authentication request is considered to be legitimate.
In some arrangements, the set of velocity metrics includes a set of failed authentication velocities. In these arrangements, updating the set of velocity metrics includes updating the set of failed authentication velocities based on failed authentication operations of the set of authentication operations.
In some arrangements, updating the set of failed authentication velocities includes updating, for each source of the set of authentication request sources, a respective failed authentication velocity. It should be understood that the riskiness of that source increases as the respective failed authentication velocity for that source increases.
In some arrangements, updating the set of failed authentication velocities includes deriving, for each source of the set of authentication request sources, a respective rate of change in the respective failed authentication velocity. The riskiness of that source increases as the respective rate of change in the respective failed authentication velocity for that source increases.
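By way of a purely illustrative, non-limiting sketch (not a description of any particular embodiment), one way to derive such a rate of change is to compare two successive samples of a source's failed authentication velocity. The function name and the values in the example are hypothetical:

```python
def velocity_rate_of_change(previous_velocity, current_velocity, interval_seconds):
    """Hypothetical helper: change in a source's failed authentication velocity
    per second between two successive samples of that velocity metric."""
    return (current_velocity - previous_velocity) / float(interval_seconds)

# Example: a source whose failed authentication velocity jumps from 2 to 12
# failures per window over a 60-second interval shows a sharp positive rate
# of change, which is treated as increased riskiness of that source.
print(velocity_rate_of_change(2, 12, 60))  # ~0.167 failures/second of increase
```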
In some arrangements, the method further includes performing an authentication-related action based on the set of failed authentication velocities. A variety of such actions can be performed individually or in combination.
In some arrangements, the set of failed authentication velocities indicates an abnormally high failed authentication velocity for a particular authentication request source. In these arrangements, performing the authentication-related action based on the set of failed authentication velocities includes, in response to detection of the abnormally high failed authentication velocity for the particular authentication request source, (i) locking out the particular authentication request source, (ii) distributing a list of suspicious authentication request sources to a set of server devices of a fraud mitigation network, the list of suspicious authentication request sources identifying the particular authentication request source, and (iii) transitioning the processing circuitry from operating in a “not locked out” state in which further authentication requests are processed to a “locked out” state in which further authentication requests are denied.
In some arrangements, the method is performed in an authentication server. In these arrangements, the method may further include maintaining, as an overall server sensitivity index, a measure of riskiness indicating whether the authentication server is currently under attack from an attacker.
In some arrangements, the method further includes comparing the overall server sensitivity index to a predefined threshold. In these arrangements, the method further includes maintaining the authentication server in a “not locked out” state in which the authentication server performs further authentication operations in response to further authentication requests while the overall server sensitivity index is below the predefined threshold. Additionally, the method further includes operating the authentication server in a “locked out” state in which the authentication server denies further authentication requests while the overall server sensitivity index is above the predefined threshold.
In some arrangements, the authentication server is currently operating in the “locked out” state. In these arrangements, the method further includes, after the authentication server operates in the “locked out” state due to the overall server sensitivity index being above the predefined threshold, maintaining the authentication server in the “locked out” state until a human administrator resets the authentication server to the “not locked out” state.
In alternative arrangements, the authentication server is currently operating in the “locked out” state. In these arrangements, the method further includes, after the authentication server operates in the “locked out” state due to the overall server sensitivity index being above the predefined threshold, maintaining the authentication server in the “locked out” state for a predefined period of time and automatically transitioning the authentication server from the “locked out” state back to the “not locked out” state after expiration of the predefined period of time.
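By way of a purely illustrative, non-limiting sketch of the two lockout arrangements described above, the state transitions might be modeled as follows. The class names, the threshold value, and the timeout are hypothetical assumptions rather than values taken from the disclosure:

```python
import time
from enum import Enum

class ServerState(Enum):
    NOT_LOCKED_OUT = "not locked out"
    LOCKED_OUT = "locked out"

class ServerLockoutPolicy:
    """Hypothetical sketch of the server-wide lockout behavior described above."""

    def __init__(self, index_threshold=100.0, auto_unlock_seconds=None):
        self.index_threshold = index_threshold
        self.auto_unlock_seconds = auto_unlock_seconds  # None => manual reset only
        self.state = ServerState.NOT_LOCKED_OUT
        self._locked_at = None

    def update(self, sensitivity_index, now=None):
        now = time.time() if now is None else now
        if self.state is ServerState.NOT_LOCKED_OUT:
            if sensitivity_index > self.index_threshold:
                # Index above threshold: stop processing further requests.
                self.state = ServerState.LOCKED_OUT
                self._locked_at = now
        elif self.auto_unlock_seconds is not None:
            # Timed variant: automatically return to service after the period expires.
            if now - self._locked_at >= self.auto_unlock_seconds:
                self.reset()
        return self.state

    def reset(self):
        """Manual variant: a human administrator calls this to restore service."""
        self.state = ServerState.NOT_LOCKED_OUT
        self._locked_at = None
```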
It should be understood that, in the cloud context, at least some of the electronic circuitry is formed by remote computer resources distributed over a network. Such a computing environment is capable of providing certain advantages such as enhanced fault tolerance, load balancing, processing flexibility, etc.
Other embodiments are directed to electronic systems and apparatus, processing circuits, computer program products, and so on. Some embodiments are directed to various methods, electronic components and circuitry which are involved in security and authentication using velocity metrics which identify authentication performance for a set of devices.
The foregoing and other objects, features and advantages will be apparent from the following description of particular embodiments of the present disclosure, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of various embodiments of the present disclosure.
An improved technique is directed to providing security which involves velocity metrics identifying authentication performance for a set of authentication request sources (e.g., computerized devices, IP addresses, etc.). An example of such a velocity metric is the number of failed authentications during an amount of time from a particular source device (i.e., an authentication failure rate for that source device). If an attacker attempts to authenticate using different usernames and passwords, there will be an increase in the number of failed authentications (or an increase in the authentication failure rate) for that source. Accordingly, the attacker's activity is detectable even if the attacker tries to log in just a few times to several accounts in a “touch the fence” style of attack. Suitable security actions in response to such detection include locking out the particular source device, locking out further authentication attempts across the entire system, placing the source device on a blacklist or providing another notification to participants of a fraud mitigation network, and so on.
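For illustration only, and not as a description of any particular embodiment, such a per-source failed authentication velocity could be tracked with a sliding time window. The class below is a hypothetical sketch; its name, the window length, and the data structures are assumptions:

```python
import time
from collections import defaultdict, deque

class FailedAuthVelocityTracker:
    """Hypothetical sketch: tracks failed authentication attempts per source
    (e.g., per device identifier or IP address) over a sliding time window."""

    def __init__(self, window_seconds=120):
        self.window_seconds = window_seconds
        # source id -> deque of timestamps of failed attempts within the window
        self._failures = defaultdict(deque)

    def record_failure(self, source_id, now=None):
        now = time.time() if now is None else now
        self._failures[source_id].append(now)
        self._expire(source_id, now)

    def velocity(self, source_id, now=None):
        """Number of failed attempts from this source within the window."""
        now = time.time() if now is None else now
        self._expire(source_id, now)
        return len(self._failures[source_id])

    def _expire(self, source_id, now):
        # Drop failures that have aged out of the sliding window.
        q = self._failures[source_id]
        while q and q[0] < now - self.window_seconds:
            q.popleft()
```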
Each computerized device 22 includes a set of device identifiers 40 which enable other components of the electronic environment 20 to properly identify that computerized device 22. Suitable device identifiers 40 include computerized cookies, device addresses (e.g., MAC addresses, IP addresses, etc.), characteristics of various software (e.g., browser type, version numbers, installed features/languages/add-ons, user agent strings, etc.), serial numbers of circuits/modules/peripherals/software/etc., combinations thereof, and so on. Such device identifiers 40 can be conveyed directly to the other components and/or are discernable indirectly based on the behavior of and communications with the computerized devices 22.
It should be understood that the computerized devices 22 may connect to the communications medium 30 through respective local network equipment 42. For example, computerized device 22(1) connects to the communications medium 30 through network equipment 42(1), computerized device 22(2) connects to the communications medium 30 through network equipment 42(2), computerized device 22(3) connects to the communications medium 30 through network equipment 42(3), and so on. Such network equipment 42 may have its own identifying attributes which can be further used to identify the computerized devices 22 such as IP addresses, serial numbers, specialized protocols, etc. Moreover, due to participation of the network equipment 42 in communications (e.g., ISP information, packet headers and routing information, encapsulation, re-formatting, etc.), the identifying attributes of the network equipment 42 may enable further identification of the computerized devices 22 from the perspective of the other components of the electronic environment 20.
The set of protected resource servers 26 maintains protected resources 44 which can be accessed remotely by the computerized devices 22 after successful authentication. Examples of suitable protected resources 44 include (among others) accounts and databases of enterprises, VPNs/gateways/other networks, account access and transaction access with banks/brokerages/other financial institutions, transaction performance at online stores, databases containing movies/music/files/other content, access to email, access to online games, and so on.
The authentication server 28 performs authentication to control access to the protected resources 44 (e.g., by communicating with the set of protected resource servers 26, by communicating directly with the computerized devices 22, etc.). Along these lines, authentication results from the authentication server 28 can be based on (i) a set of authentication factors provided in authentication requests, (ii) user authentication profiles which profile the users 24, and (iii) other information which exists at the time of receiving the authentication requests such as the earlier-mentioned velocity metrics, device lockout states, user lockout states, system conditions, and so on.
The communications medium 30 is constructed and arranged to connect the various components of the electronic environment 20 together to enable these components to exchange electronic signals 50 (e.g., see the double arrow 50). At least a portion of the communications medium 30 is illustrated as a cloud to indicate that the communications medium 30 is capable of having a variety of different topologies including backbone, hub-and-spoke, loop, irregular, combinations thereof, and so on. Along these lines, the communications medium 30 may include copper-based data communications devices and cabling, fiber optic devices and cabling, wireless devices, combinations thereof, etc. Furthermore, the communications medium 30 is capable of supporting LAN-based communications, SAN-based communications, cellular communications, combinations thereof, etc.
During operation, the users 24 operate their respective computerized devices 22 to perform useful work. Such work may include accessing one or more protected resources 44 of the protected resource servers 26 (e.g., accessing a VPN, reading email, performing a banking transaction, making an online purchase, downloading and installing an application from a remote server, saving content in the cloud, and so on).
During the course of such operation, the authentication server 28 controls access to the protected resources 44. That is, the users 24 of the computerized devices 22 provide authentication requests 52, and the authentication server 28 provides authentication results (or responses) 54 to the authentication requests 52 indicating whether authentication is successful. Such authentication requests 52 may be conveyed through the protected resource servers 26 (i.e., the protected resource servers 26 may operate as authentication front-ends, and the authentication server 28 operates in the background in a manner which is transparent from the perspective of the computerized devices 22).
When the users 24 successfully authenticate, the authentication server 28 grants access to the protected resources 44 (e.g., the authentication server 28 signals the protected resource servers 26 that the users 24 are deemed to be authentic and thus are entitled to access the protected resources 44). However, when the users 24 do not properly authenticate, the authentication server 28 denies access to the protected resources 44 (e.g., the authentication server 28 signals the protected resource servers 26 that the users 24 are to be denied access due to unsuccessful authentication).
During such operation, the authentication server 28 maintains a set of velocity metrics 60 which identifies authentication performance for each computerized device 22 originating authentication requests 52. In particular, the authentication server 28 maintains, for each computerized device 22, a set of failed authentication velocities based on failed authentication attempts. An increase in the number of failed authentication attempts during a particular amount of time from a particular computerized device 22 (i.e., an increase in failed authentication velocity) indicates a strong likelihood of an attack from that computerized device 22.
It should be understood that the failed authentication velocity for a computerized device 22 increases with every failed authentication attempt from that device 22. Accordingly, a malicious person trying unsuccessfully to access the same account with different passwords will increase this velocity metric. Additionally, a malicious person trying unsuccessfully to access different accounts with a few authentication attempts trying not to be detected (i.e., a “touch the fence” attack) will increase this velocity metric. Even a malicious person trying unsuccessfully to guess usernames or user IDs will increase this velocity metric.
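Continuing the purely illustrative sketch introduced above (and reusing its hypothetical FailedAuthVelocityTracker class), the following example shows that a “touch the fence” pattern of only two attempts per account still drives up the per-device velocity metric:

```python
# Hypothetical demo: one device probes many accounts with only two
# failed attempts each, yet its per-device failed-auth velocity keeps climbing.
tracker = FailedAuthVelocityTracker(window_seconds=120)

t = 0.0
for account in ("alice", "bob", "carol", "dave", "erin"):
    for _ in range(2):                      # only two tries per account
        tracker.record_failure("device-1", now=t)
        t += 1.0

print(tracker.velocity("device-1", now=t))  # 10 failures within the window
```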
In response to such detection, the authentication server 28 performs a remedial operation. Examples of such operations include locking out the computerized device 22 which is the source of the failed authentication attempts, adding the computerized device 22 to security data which is shared among a fraud mitigation syndicate, increasing an overall server sensitivity index (i.e., a measure of riskiness indicating whether the authentication server 28 is currently under attack from an attacker) which can be used to control authentication operation globally, and so on. Further details will now be provided with reference to
The network interface 70 is constructed and arranged to connect the authentication server 28 to the communications medium 30. Accordingly, the network interface 70 enables the authentication server 28 to communicate with the other components of the electronic environment 20 (
The memory 72 is intended to represent both volatile storage (e.g., DRAM, SRAM, etc.) and non-volatile storage (e.g., flash memory, magnetic disk drives, etc.). The memory 72 stores a variety of software constructs 80 including an operating system 82 to manage the computerized resources of the authentication server 28, specialized applications 84 to perform authentication operations (e.g., code to form a risk engine, code to form a policy engine, code to maintain the set of velocity metrics 60, and so on), the set of velocity metrics 60, and a user database 86 to hold user information. Such user information can include user details (e.g., a user identifier, a username, contact data, etc.), user privileges (e.g., account information, a list of protected resources 44 which the user 24 owns, etc.), user PINs (or PIN hashes), user secrets/seeds for OTP derivation, user activity history, and so on.
The control circuitry 74 is constructed and arranged to operate in accordance with the various software constructs 80 stored in the memory 72. Such circuitry may be implemented in a variety of ways including via one or more processors (or cores) running specialized software, application specific ICs (ASICs), field programmable gate arrays (FPGAs) and associated programs, discrete components, analog circuits, other hardware circuitry, combinations thereof, and so on. In the context of one or more processors executing software, a computer program product 90 is capable of delivering all or portions of the software to the authentication server 28. The computer program product 90 has a non-transitory (or non-volatile) computer readable medium which stores a set of instructions which controls one or more operations of the authentication server 28. Examples of suitable computer readable storage media include tangible articles of manufacture and apparatus which store instructions in a non-volatile manner such as CD-ROM, flash memory, disk memory, tape memory, and the like.
The additional (or other) circuitry 76 is optional and represents additional hardware that can be utilized by the authentication server 28. For example, the authentication server 28 can include a user interface (i.e., a console or terminal) enabling a human administrator to set up new users 24, to deal with alarms or warning messages, to administer routine maintenance, to reset operation of the authentication server 28, and so on. As another example, a portion of the authentication server 28 may operate as a source for distributing computerized device code during configuration/enrollment (e.g., an app store, a central app repository, etc.). Other components and circuitry are suitable for use as well.
During operation, the authentication server 28 runs in accordance with the specialized applications 84 to reliably and robustly control access to the protected resources 44 of the protected resource servers 26. In particular, the authentication server 28 enrolls users 24 and stores the enrollment data in the user database 86. For example, the authentication server 28 can store, maintain and update user profiles on behalf of the users 24 of the computerized devices 22. Once the users 24 are properly enrolled, the authentication server 28 responds to authentication requests 52 from the users 24 with authentication results 54 which either grant or deny access to the protected resources 44 (also see
In some arrangements, the authentication server 28 performs standard multi-factor authentication (i.e., compares current authentication factors such as user identifiers, personal identification numbers or PINs, passwords, etc. to expected authentication factors). In other arrangements, the authentication server 28 performs risk-based or adaptive authentication (AA) in which a numerical risk score is generated indicating a measure of the risk that the authentication source is fraudulent. Other types of authentication are suitable for use as well such as knowledge-based authentication, biometric authentication, combinations of different forms of authentication, and so on.
While the authentication server 28 performs authentication operations, the authentication server 28 maintains a set of velocity metrics 60 for each computerized device 22, and authentication performance for each computerized device 22 serves as an indicator of whether that computerized device 22 is a source of a malicious attack (e.g., login attempts by a fraudster). As mentioned earlier, the authentication server 28 is able to accurately identify each computerized device 22 based on express device identifiers (e.g., cookies, IP addresses, etc.), indirect device identifiers (e.g., browser features, user agent strings, ISP data, etc.), combinations thereof, etc.
For example, for a particular computerized device 22, when the number of failed authentication attempts during a period of time or the velocity metric increases by a predetermined threshold amount, the authentication server 28 considers that computerized device 22 to be used by an attacker. In response, the authentication server 28 performs a remedial action such as locking out that computerized device 22, sending an alarm to a human administrator, notifying a fraud mitigation network, and so on.
In some arrangements, the authentication server 28 maintains, as part of the set of velocity metrics 60, an overall server sensitivity index. This index is a measure of riskiness indicating whether the authentication server 28, from an overall perspective, is currently under attack from an attacker. Along these lines, suppose that an attacker tries to overcome security by authenticating from different computerized devices 22 (i.e., somehow changing the device identifiers 40 and/or the identifiers of the network equipment 42). The overall server sensitivity index can serve as a measure of overall security health (e.g., increased in response to detected security weakness, lowered in response to detected security strength, etc.). Accordingly, the authentication server 28 updates the overall server sensitivity index based on current failed authentication velocities, among other things (e.g., an increase in the number of user lockouts, abnormal traffic patterns, high levels of traffic from blacklisted devices, etc.).
Furthermore, when the overall server sensitivity index remains below a particular predefined index threshold, the authentication server 28 operates in a normal mode by processing authentication requests 52 and providing authentication results 54. However, when the overall server sensitivity index exceeds the particular predefined index threshold, the authentication server 28 transitions from the normal mode to a high security mode by no longer granting access to protected resources 44 in response to authentication requests 52, i.e., a global lockout.
The global lockout can apply to groups of computerized devices 22, groups of protected resources 44, classes of communications, all protected resources 44, and so on. In some arrangements, the global lockout remains in effect until a human administrator resets the authentication server 28 (e.g., allowing time to evaluate/analyze the attack, impose additional safety measures, etc.). In other arrangements, the global lockout remains in effect for a predefined period of time (e.g., an hour, a day, etc.), and the authentication server 28 automatically transitions from the high security mode back to the normal mode once the period of time expires.
Additionally, the overall server sensitivity index can be used as an authentication factor in various forms of authentication such as risk-based authentication. Accordingly, each authentication operation performed in response to an individual authentication request 52 takes the overall security health of the authentication server 28 into account. Further details will now be provided with reference to
As shown in
The authentication circuitry 100 receives and processes authentication requests 52 from the computerized devices 22 (also see
Additionally, in response to each authentication request 52, the authentication circuitry 100 updates the set of velocity metrics 60. In particular, the authentication server 28 is constructed and arranged to maintain a respective velocity metric entry 104 for each computerized device 22. If the authentication circuitry 100 encounters an authentication request 52 from an unknown computerized device 22 (e.g., as identified uniquely by device identifiers 40 and/or associated network equipment 42), the authentication circuitry 100 creates a new set of entries 104 in the set of velocity metrics 60 in order to track authentication performance for that computerized device 22. However, if the authentication circuitry 100 encounters an authentication request 52 from a known computerized device 22, the authentication circuitry 100 updates the appropriate entries 104 in the set of velocity metrics 60 in order to track authentication performance for that computerized device 22. Moreover, such an entry 104 can be removed after a long period of inactivity.
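By way of a purely illustrative, non-limiting sketch, the per-device entries 104 described above could be managed roughly as follows. The dictionary layout, field names, and idle period are hypothetical assumptions:

```python
import time

class VelocityMetricTable:
    """Hypothetical sketch of per-device entries: a new entry is created the
    first time a device identifier is seen, existing entries are updated on
    later requests, and entries idle beyond max_idle_seconds are pruned."""

    def __init__(self, max_idle_seconds=30 * 24 * 3600):
        self.max_idle_seconds = max_idle_seconds
        self._entries = {}   # device id -> entry dict

    def entry_for(self, device_id, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(device_id)
        if entry is None:
            # Unknown device: start tracking its authentication performance.
            entry = {"device_id": device_id, "failed_attempts": 0,
                     "first_seen": now, "last_seen": now}
            self._entries[device_id] = entry
        entry["last_seen"] = now
        return entry

    def prune_inactive(self, now=None):
        # Remove entries after a long period of inactivity.
        now = time.time() if now is None else now
        stale = [d for d, e in self._entries.items()
                 if now - e["last_seen"] > self.max_idle_seconds]
        for d in stale:
            del self._entries[d]
```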
By way of example and as shown in
Also shown in
Each device identifier 110(1), 110(2), 110(3), . . . uniquely identifies a respective computerized device 22(1), 22(2), 22(3), . . . . Such an identifier 110 can be assigned by the authentication server 28 upon detection of a new computerized device 22.
Each velocity metric 112(1), 112(2), 112(3), . . . provides a measure of authentication performance of the respective computerized device 22(1), 22(2), 22(3), . . . . A suitable velocity metric 112 is a current authentication failure rate (e.g., the number of failed authentication attempts within a period of time such as 30 seconds, one minute, two minutes, five minutes, etc.). In some arrangements, the authentication server 28 maintains multiple velocity metrics 112 for each computerized device 22 in order to distinguish between a series of manual user authentication attempts and a series of automated authentication attempts (e.g., by software).
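As a hypothetical illustration (not drawn from the disclosure), one way to maintain multiple velocity metrics 112 per device is to evaluate the same failure timestamps over windows of different lengths, on the assumption that rapid bursts suggest automated attempts while slower sustained failures suggest manual probing. The window lengths and thresholds below are invented for the example:

```python
# Hypothetical sketch: two failure-rate windows per device. A burst of failures
# inside the short window suggests automated (scripted) attempts, while an
# elevated count over the longer window suggests slower manual probing.
SHORT_WINDOW_SECONDS = 30
LONG_WINDOW_SECONDS = 300

def classify_attempt_pattern(failure_timestamps, now):
    short = sum(1 for t in failure_timestamps if now - t <= SHORT_WINDOW_SECONDS)
    long_ = sum(1 for t in failure_timestamps if now - t <= LONG_WINDOW_SECONDS)
    if short >= 5:           # illustrative thresholds, not from the disclosure
        return "likely automated"
    if long_ >= 10:
        return "likely manual probing"
    return "normal"
```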
The additional device identification data 114(1), 114(2), 114(3), . . . is a collection of information which can serve multiple purposes. For example, in real time, such data 114 can serve as a source of one or more authentication factors, or as input to calculate other velocity metrics. Furthermore, such data 114 can be made available later for fraud investigation and circulated among participants 130 of a fraud mitigation network 132 (see arrow 134 in
The additional device history data 116(1), 116(2), 116(3), . . . is another collection of information which can serve multiple purposes. For example, such data 116 can identify attack frequency, common attack times, common attack levels, etc. Such data 116 can serve as a source of one or more authentication factors, or as input to calculate other velocity metrics. Additionally, such data 116 can be made available later for fraud investigation and circulated among participants 130 of a fraud mitigation network 132 (again, see arrow 134 in
It should be understood that, while such authentication-related operations take place, the velocity metric evaluation circuitry 102 monitors the velocity metric entries 104 to determine whether the authentication server 28 is being attacked. In particular, a velocity metric entry 104 indicating an unusual rise in failed authentication attempts over a set period of time may indicate the presence of an attacker. For example, a sharp increase in the authentication failure rate for a computerized device 22 is a sign that a malicious person is operating that computerized device 22. Example abnormal increases include increases of 5%, 10%, 15%, and so on over a predefined amount of time. Since authentication performance is maintained per computerized device 22 rather than per user 24, such detection occurs even if the attacker attempts to authenticate just a few times from the same computerized device 22 across multiple users 24.
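For illustration only, a check for such an abnormal percentage increase might look like the following hypothetical helper; the default threshold is simply one of the example figures mentioned above:

```python
def is_abnormal_increase(previous_rate, current_rate, threshold_pct=10.0):
    """Hypothetical check: flags a source whose authentication failure rate has
    risen by at least threshold_pct percent over the predefined amount of time
    (the 5%/10%/15% figures above are example thresholds)."""
    if previous_rate <= 0:
        # Any failures from a previously clean source count as an increase.
        return current_rate > 0
    increase_pct = 100.0 * (current_rate - previous_rate) / previous_rate
    return increase_pct >= threshold_pct
```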
Moreover, the velocity metric evaluation circuitry 102 updates the overall sensitivity index that serves as a measure of riskiness (or threat level) for the entire authentication server 28. In particular, the velocity metric evaluation circuitry 102 increases the overall sensitivity index if it senses a sudden increase in the authentication failure rate for a computerized device 22 or if it locks out a computerized device 22. The velocity metric evaluation circuitry 102 can lower the overall sensitivity index over time in response to the lack of threats such as a subsequent period of low authentication failure rates.
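A purely illustrative, non-limiting sketch of this behavior appears below; the increment sizes and the decay rate are hypothetical assumptions, not values from the disclosure:

```python
class OverallSensitivityIndex:
    """Hypothetical sketch of the server-wide sensitivity index: raised when a
    device shows a sudden jump in failures or is locked out, and decayed over
    time while no new threats are observed."""

    def __init__(self):
        self.value = 0.0

    def on_failure_rate_spike(self):
        # Sudden increase in a device's authentication failure rate.
        self.value += 10.0

    def on_device_lockout(self):
        # A computerized device was locked out.
        self.value += 25.0

    def decay(self, elapsed_seconds, per_hour=5.0):
        # Gradually lower the index during quiet periods with low failure rates.
        self.value = max(0.0, self.value - per_hour * (elapsed_seconds / 3600.0))
```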
If the overall sensitivity index exceeds a predefined threshold, the velocity metric evaluation circuitry 102 can be configured to lock out one or more computerized devices 22 to safeguard the protected resources 44. Additionally, the velocity metric evaluation circuitry 102 can control when the authentication server 28 is re-enabled to perform authentication operations to grant access to the protected resources 44 (e.g., after being manually reset by a human administrator, automatically after a period of time has elapsed, etc.).
It should be understood that the velocity metrics 104 and the overall sensitivity index can be used as authentication factors in future authentication operations. Along these lines, these metrics are well suited for risk-based authentication which uses weights and scores to generate an overall risk score indicating a measure of the risk of fraud. For example, for any authentication request 52 from a particular computerized device 22, the authentication server 28 can take into account the current authentication failure rate for that computerized device 22. Similarly, for any authentication request 52, the authentication server 28 can take into account the current overall sensitivity index. Further details will now be provided with reference to
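By way of a hypothetical, non-limiting illustration of such a weighted risk-based combination, the weights, scaling, and example values below are assumptions rather than values from the disclosure:

```python
def risk_score(device_failure_rate, overall_sensitivity_index,
               other_factor_score=0.0,
               w_device=0.5, w_server=0.3, w_other=0.2):
    """Hypothetical weighted combination used in a risk-based (adaptive)
    authentication decision; higher scores indicate greater risk of fraud."""
    device_component = min(device_failure_rate * 10.0, 100.0)   # scale to 0-100
    server_component = min(overall_sensitivity_index, 100.0)
    return (w_device * device_component
            + w_server * server_component
            + w_other * other_factor_score)

# Example: a device with 8 recent failures while the server-wide index is 60.
print(risk_score(device_failure_rate=8, overall_sensitivity_index=60))  # 58.0
```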
At 204, the authentication server 28, after updating the set of velocity metrics, receives a new authentication request 52 from an authentication request source. As mentioned above, if the authentication request source is new to the authentication server 28, the authentication server 28 creates a new entry 104 (see
At 206, the authentication server 28 provides an authentication result 54 in response to the authentication request 52 from the authentication request source. The authentication result 54 (i) is based on the set of velocity metrics 60 and (ii) indicates whether the authentication request 52 is considered to be legitimate. Additionally, the authentication server 28 updates the set of velocity metrics 60 as well as performs a remedial action if the set of velocity metrics 60 indicates an attack.
As described above, improved techniques are directed to providing security using velocity metrics 60 identifying authentication performance for a set of authentication request sources (e.g., computerized devices, IP addresses, etc.). An example of such a velocity metric 60 is the number of failed authentication attempts during a particular amount of time from a particular computerized device 22. If a malicious person attempts to authenticate using different usernames and passwords, there will be an increase in the number of failed authentication attempts (or an increase in the failure rate) from that computerized device 22. Accordingly, the malicious person's activity is detectable even if the malicious person tries to log in just a few times to several accounts in a “touch the fence” style of attack, tries unsuccessfully to guess usernames, etc. Suitable actions in response to such detection include locking out the particular computerized device 22, locking out further authentication attempts across the entire authentication server 28 and/or protected resource servers 26, placing the computerized device 22 on a blacklist or providing a similar notification to members of a fraud mitigation network, and so on.
While various embodiments of the present disclosure have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims.
For example, it should be understood that various components such as the servers 26, 28 of the electronic environment 20 are capable of being implemented in or “moved to” the cloud, i.e., to remote computer resources distributed over a network. Here, the various computer resources may be distributed tightly (e.g., a server farm in a single facility) or over relatively large distances (e.g., over a campus, in different cities, coast to coast, etc.). In these situations, the network connecting the resources is capable of having a variety of different topologies including backbone, hub-and-spoke, loop, irregular, combinations thereof, and so on. Additionally, the network may include copper-based data communications devices and cabling, fiber optic devices and cabling, wireless devices, combinations thereof, etc. Furthermore, the network is capable of supporting LAN-based communications, SAN-based communications, combinations thereof, and so on.
Additionally, it should be understood that the above-described techniques are well suited for discovering a “touch the fence” attack. In particular, if a hacker attempts to authenticate by trying many users 24 and two or three authentications per user 24 to avoid locking the users 24 out, the authentication server 28 will detect the attack by sensing the increase in authentication failures from the same computerized device 22.
Furthermore, such discovery may involve exposing device and IP velocity predictors to a policy engine. That is, a “device velocity” predictor can be input in combination with an “ip auth” predictor, where the “ip auth” predictor is used to track a computerized device 22 that fails a challenge. If the number of users 24 coming from a single computerized device 22 (the device velocity predictor) and failing the challenge (the ip auth predictor) increases, this sensed activity most likely indicates an attacker attempting to guess user credentials.
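By way of a purely illustrative sketch of combining these two predictors, the names and data structures below are hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch: the number of distinct users failing a challenge from
# the same computerized device (or IP address) serves as the combined signal.
failed_users_by_device = defaultdict(set)

def record_failed_challenge(device_id, user_id):
    failed_users_by_device[device_id].add(user_id)

def device_velocity(device_id):
    """Distinct users who have failed a challenge from this device."""
    return len(failed_users_by_device[device_id])

# A growing count of distinct failing users from one device suggests credential
# guessing, even though each individual account sees only a few attempts.
for user in ("u1", "u2", "u3", "u4"):
    record_failed_challenge("device-7", user)
print(device_velocity("device-7"))  # 4
```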
Additionally, the device, IP and other velocities can be input to the policy engine for automatic application of policies and policy setting, i.e., a policy management system. Accordingly, it is not necessary for a human administrator to add more fields into the policy management system. Rather, the same device or IP lockout policies as in a risk-based user lockout technique can be used, but to lock out computerized devices 22 instead of users 24. Furthermore, an IP address (e.g., of a computerized device 22, network equipment, etc.) associated with many users and a high authentication failure rate can be recommended for placement on a list which is distributed to a fraud mitigation network. Also, even a mobile device with many users, trying to access new apps from a new location, can be locked out.
Furthermore, the velocity metrics 60 can be used as input to set an overall sensitivity index for the system. That is, there may be an attacker that uses new IP addresses and/or new computerized devices 22 in each failed authentication attempt. Nevertheless, with the above-described techniques, if there are more failed authentication attempts coming from many new computerized devices and new IP addresses that the system has never seen before (i.e., that are unfamiliar and not previously associated with a particular organization receiving authentication services from the authentication server 28), the overall sensitivity index can be used for lockout in order to safeguard the protected resources 44. Specifically, the authentication server 28 can detect and lock out devices 22 and IP addresses without the need to go through a more comprehensive process. Additionally, such detection and locking out can be performed quickly as well as be reset with ease. The authentication server 28 may even detect an attack before any user accounts or other protected resources 44 are actually compromised. Such modifications and enhancements are intended to belong to various embodiments of the disclosure.