Credit card companies, financial institutions, insurance companies, healthcare companies, and governments incur losses due to fraudsters and fraud rings. Fraud rates are increasing, with identity theft, account takeover (including phishing attacks), and friendly fraud (chargebacks) the most prevalent threats. Fraud is most often committed by an outside individual, but internal fraud also occurs. Much of this fraud can be traced back to the contact centers or call centers associated with these companies.
Conventional identity authentication (IA) and fraud detection (FD) solutions focus on either the IA or the FD aspect, but not both. This is not sufficient as fraud continues to increase.
Systems and methods are provided to stop both external and internal fraud, to ensure that correct procedures are being followed, and to make information available to fraud teams for investigation. Most IAFD solutions focus on either the IA or the FD aspect. The disclosed systems contain components that can address: 1) behavioral analytics (ANI reputation, IVR behavior, account activity), which gives a risk assessment event before a call gets to an agent; 2) fraud detection, the ability to identify, in real time, if a caller is part of a 'fraudster cohort' and to alert the agent and escalate to the fraud team; 3) identity authentication, the ability to identify through natural language if the caller is who they say they are; and 4) two factor authentication, the ability to send a text message to the caller, automatically process the response, and create a case in the event of suspected fraud.
All of these components may be combined into one solution that simultaneously reduces authentication friction for valid callers and provides automatic escalation for fraudsters. This results in higher satisfaction for callers due to easier authentication, and reduced costs and time for the call center due to fewer incorrect call escalations.
In an embodiment, a system for authenticating calls and for preventing fraud is provided. The system includes one or more processors and a memory communicably coupled to the one or more processors. The memory stores an analysis module, a biometrics module, an authentication module, and a fraud module. The analysis module includes instructions that when executed by the one or more processors cause the one or more processors to: receive a call through a first channel, wherein the call is associated with a customer and a speaker; based on one or more characteristics of the received call, the customer, or the channel, assign a score to the call; determine if the score satisfies a threshold; and if the score does not satisfy the threshold, flag the call as a fraudulent call. The biometrics module includes instructions that when executed by the one or more processors cause the one or more processors to: analyze voice data associated with the call to determine whether the speaker is a fraudulent speaker; and if the speaker is a fraudulent speaker, flag the call as a fraudulent call. The authentication module includes instructions that when executed by the one or more processors cause the one or more processors to: generate a first code; retrieve a profile associated with the customer; send the first code to the customer through a second channel indicated by the profile associated with the customer; receive a second code through the first channel; determine if the first code matches the second code; and if it is determined that the first code matches the second code, flag the call as an authenticated call.
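By way of illustration only, the following Python sketch shows one way the sequencing of the analysis, biometrics, and authentication modules described above might be expressed. The class, function names, toy heuristics, and threshold are hypothetical and are not part of the disclosed system.

```python
# Minimal sketch of the combined flow; all names, scores, and thresholds are
# hypothetical illustrations, not the disclosed implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Call:
    first_channel: str          # e.g., "voip", "pots", "cellular"
    customer_id: str            # the customer the speaker purports to be
    voice_data: bytes = b""
    flags: List[str] = field(default_factory=list)

def risk_score(call: Call) -> int:
    """Analysis module stand-in: 1 (low risk) to 5 (high risk)."""
    return 5 if call.first_channel == "voip" else 1   # toy heuristic

def matches_fraudster_voiceprint(voice_data: bytes) -> bool:
    """Biometrics module stand-in for voiceprint matching."""
    return False

def two_factor_ok(call: Call) -> bool:
    """Authentication module stand-in for code send/receive matching."""
    return True

def process_call(call: Call, threshold: int = 3) -> Call:
    if risk_score(call) > threshold:                  # score fails the threshold
        call.flags.append("fraudulent")
    elif matches_fraudster_voiceprint(call.voice_data):
        call.flags.append("fraudulent")
    elif two_factor_ok(call):
        call.flags.append("authenticated")
    else:
        call.flags.append("fraudulent")
    return call

print(process_call(Call("voip", "c123")).flags)       # ['fraudulent']
```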
Embodiments may include some or all of the following features. The analysis module may further include instructions that when executed by the one or more processors cause the one or more processors to: if the score satisfies the threshold, flag the call as an authenticated call. The biometrics module may further include instructions that when executed by the one or more processors cause the one or more processors to: if the speaker is not a fraudulent speaker, flag the call as an authenticated call. The authentication module may include instructions that when executed by the one or more processors cause the one or more processors to: if it is determined that the first code does not match the second code, flag the call as a fraudulent call. The fraud module may include instructions that when executed by the one or more processors cause the one or more processors to: if the call is flagged as a fraudulent call: receive a recording of the call; process the recording to generate one or more voiceprints for the speaker associated with the call; and store the generated voiceprints. Analyzing voice data associated with the call to determine whether the speaker is a fraudulent speaker may include: retrieving voiceprints associated with fraudulent activities; determining if the voice data matches any of the voiceprints associated with fraudulent activities; and if the voice data matches any of the voiceprints associated with the fraudulent activities, flagging the call as a fraudulent call. The biometrics module may further include instructions that when executed by the one or more processors cause the one or more processors to: retrieve one or more voiceprints associated with the customer; determine if the voice data matches any of the voiceprints associated with the customer; and if the voice data matches any of the voiceprints associated with the customer, flag the call as an authenticated call.
In an embodiment, a method for authenticating calls and for preventing fraud is provided. The method includes: receiving a call through a first channel, wherein the call is associated with a customer and a speaker; determining if there are one or more voiceprints associated with the customer; if it is determined that there are one or more voiceprints associated with the customer: retrieving the one or more voiceprints associated with the customer; determining if voice data associated with the call matches any of the one or more voiceprints associated with the customer; and if the voice data matches any of the one or more voiceprints associated with the customer, flagging the call as an authenticated call.
Embodiments may include some or all of the following features. The method may further include: if the voice data does not match any of the one or more voiceprints associated with the customer, flagging the call as a fraudulent call. The method may further include: if it is determined that there are no voiceprints associated with the customer: generating a first code; retrieving a profile associated with the customer; sending the first code to the customer through a second channel indicated by the profile associated with the customer; receiving a second code through the first channel; determining if the first code matches the second code; and if it is determined that the first code matches the second code, flagging the call as an authenticated call. The method may further include: generating a voiceprint for the customer using voice data associated with the call; and associating the voiceprint with the customer.
In an embodiment, a method for authenticating calls and for preventing fraud is provided. The method includes: receiving a call through a first channel, wherein the call is associated with a customer and a speaker; based on one or more characteristics of the received call, the customer, or the channel, assigning a score to the call; determining if the score satisfies a threshold; and if the score does not satisfy the threshold, flagging the call as a fraudulent call; analyzing voice data associated with the call to determine whether the speaker is a fraudulent speaker; if the speaker is a fraudulent speaker, flagging the call as a fraudulent call; generating a first code; retrieving a profile associated with the customer; sending the first code to the customer through a second channel indicated by the profile associated with the customer; receiving a second code through the first channel; determining if the first code matches the second code; and if it is determined that the first code matches the second code, flagging the call as an authenticated call.
Embodiments may include some or all of the following features. The method may further include: if the score satisfies the threshold, flagging the call as an authenticated call. The method may further include: if the speaker is not a fraudulent speaker, flagging the call as an authenticated call. The method may further include: if it is determined that the first code does not match the second code, flagging the call as a fraudulent call. The method may further include: if the call is flagged as a fraudulent call: receiving a recording of the call; processing the recording to generate one or more voiceprints for the speaker associated with the call; and storing the generated one or more voiceprints. Analyzing voice data associated with the call to determine whether the speaker is a fraudulent speaker may include: retrieving voiceprints associated with fraudulent activities; determining if the voice data matches any of the voiceprints associated with fraudulent activities; and if the voice data matches any of the voiceprints associated with the fraudulent activities, flagging the call as a fraudulent call. The method may further include: retrieving one or more voiceprints associated with the customer; determining if the voice data matches any of the voiceprints associated with the customer; and if the voice data matches any of the voiceprints associated with the customer, flagging the call as an authenticated call.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The foregoing summary, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the embodiments, there is shown in the drawings example constructions of the embodiments; however, the embodiments are not limited to the specific methods and instrumentalities disclosed. In the drawings:
An embodiment described herein provides behavioral analytics, fraud detection, identity authentication, and two factor authentication. Behavioral analytics (e.g., ANI reputation, IVR behavior, account activity) gives a risk assessment event before a call gets to an agent. Fraud detection provides the ability to identify, in real time, if a caller is part of a ‘fraudster cohort’ and alert the agent and escalate to the fraud team. Identity authentication provides the ability to identify through natural language if the caller is who they say they are. Two factor authentication provides the ability to send a text message to the caller and automatically process the response and create a case in the event of suspected fraud.
The agent 152 may receive the call from the customer on an agent computing device 155. The agent computing device 155 may be equipped with both human and virtual voice agent capabilities.
Besides the agent 152, the call may also be received (at the same time or later) by a computing device 110 associated with the call center environment 100. The computing device 110 may provide one or more call center services to the user 102, such as interactive voice response ("IVR") services, where the user may be presented with an automated system that may determine the optimal agent 152 to whom to direct the call, may determine the identity of the customer, or may retrieve other information from the customer in an automated way.
As may be appreciated, the computing device 105, the agent computing device 155, and the computing device 110 may each be implemented by one or more general purpose computing devices such as the computing device 900 illustrated with respect to FIG. 9.
In order to detect fraud, detect intrusions, and provide authentication with respect to one or more calls and users 102, the computing device 110 may include one or more modules. As illustrated, these modules include an analytics module 115; a biometrics module 120; an authentication module 130; and a fraud module 140. More or fewer modules may be supported. Depending on the implementation, some or all of the modules 115, 120, 130, and 140 may be implemented by the same computing device 110, or by some combination of computing devices 110. In addition, some or all of the modules 115, 120, 130, and 140 may be implemented by a cloud-based computing system. Furthermore, some or all of the modules 115, 120, 130, and 140 may be implemented in a call center.
The analytics module 115 may generate a score 118 for a current or received call. The score 118 may represent how likely it is that a particular call is associated with fraud (e.g., a known fraudster or fraudulent user) or is otherwise suspect. For example, the score 118 may be a 1-5 score where 1 represents a call that is not likely to be associated with fraud, and 5 represents a call that is very likely to be associated with fraud. Other scales or scoring methods may be used.
As will be described further below, the analytics module 115 may determine the score 118 for a call and may transmit the score 118 to the agent computing device 155 associated with the agent 152 that is handling the call. For example, the score 118 may be displayed to the agent 152 on a dashboard or other user-interface being viewed by the agent 152 during the call. The dashboard may display information about the call such as a (purported) name of the customer and any information determined for the customer by an IVR or the agent 152.
In some implementations, the analytics module 115 may generate the score 118 for the call using one or more characteristics 117 of the call. The characteristics of the call may include knowledge-based characteristics 117 such as carrier information (e.g., is the carrier used by the caller associated with fraud?), the channel 108 associated with the call (e.g., some channels such as VoIP may be more associated with fraud than landlines), the country, city, or state of origin, and whether any ANI spoofing is detected or the number used for the call is known to be spoofed or associated with fraud.
The characteristics 117 may further include reputation-based characteristics 117 such as an account history associated with the number or customer, information from an ANI blacklist, previous indicators associated with the number or caller, and information associated with an escalating threat level (e.g., are fraud detections currently on the rise in the contact center?). The characteristics 117 may also include behavior-based characteristics 117 learned about the caller by the IVR (or PBX), such as call flow actions, caller pattern probing, call velocity and frequency, and cross program detection. Other characteristics 117 may be used.
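Purely as an illustration of how such characteristics 117 might be combined into a score 118, consider the following sketch; the characteristic names, the weights, and the clamping to the 1-5 scale are invented for clarity.

```python
# Hypothetical combination of call characteristics 117 into a 1-5 score 118;
# the feature names and weights below are invented for illustration.
def score_call(characteristics: dict) -> int:
    risk = 0
    if characteristics.get("carrier_flagged"):         # knowledge-based
        risk += 2
    if characteristics.get("channel") == "voip":       # knowledge-based
        risk += 1
    if characteristics.get("ani_blacklisted"):         # reputation-based
        risk += 2
    if characteristics.get("call_velocity", 0) > 10:   # behavior-based (calls/hr)
        risk += 1
    return min(5, max(1, risk))                        # clamp to the 1-5 scale

# Example: a VoIP call from a blacklisted ANI scores 3.
print(score_call({"channel": "voip", "ani_blacklisted": True}))
```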
Depending on the embodiment, if the score 118 generated for a call or caller is above (or below) a threshold, the call may be flagged by the analytics module 115 as authenticated or otherwise unlikely to be associated with fraud. After being authenticated, the agent 152 may proceed with the business or purpose of the call without further authenticating the user 102.
If the score 118 generated for the call is below (or above) the threshold, the computing device 110 may continue to authenticate the call. In particular, the biometrics module 120 may process voice data associated with the call to determine if the speaker associated with the call is the customer that the speaker purports to be and may process the voice data to determine if the speaker matches any known fraudsters (i.e., fraudulent users). As used herein the term speaker is used to distinguish the user 102 speaking on the call from the term customer. The customer is the user 102 associated with the account and may be the user 102 that the speaker on the call purports to be. When the call is authenticated, the speaker and the customer are determined to be the same user 102.
When the speaker is connected to the agent 152, the agent 152 welcomes the speaker and asks the speaker to provide their customer name and account information. As this information is provided, the biometrics module 120 processes the voice data in real-time using one or more voiceprints 121 that are associated with the customer. Depending on the embodiment, the biometrics module 120 may use passive voice biometrics, which may not require the customer to actively enroll their voice. Instead, voiceprints 121 may be created automatically for a customer based on their voice data collected from a call. Any method for creating voiceprints 121 may be used.
If the biometrics module 120 determines that the voice data matches a voiceprint 121 associated with the customer (or a voiceprint associated with customers on a white list), then the biometrics module 120 may flag the call as authenticated and may proceed as described above. If the biometrics module 120 determines that the voice data does not match any voiceprint 121 associated with the customer (or there are no voiceprints associated with the customer), the biometrics module 120 may hand off processing of the call to the authentication module 130.
The biometrics module 120 may further retrieve voiceprints 121 associated with known fraudsters or fraudulent users. If the voice data associated with the call matches a voiceprint 121 associated with a known fraudulent user, then the biometrics module 120 may flag the call as being a fraudulent call.
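A simplified sketch of this two-sided voiceprint check, testing first against known-fraudster voiceprints 121 and then against the customer's voiceprints 121, might look as follows. The embedding representation, cosine similarity, and threshold are assumptions standing in for whatever voiceprint-matching method is used.

```python
# Illustrative only: a real system would use a speaker-verification model;
# cosine similarity over fixed-length voice embeddings stands in here.
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical decision threshold

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_voice(voice, customer_prints, fraudster_prints) -> str:
    """Return 'fraudulent', 'authenticated', or 'unknown' (hand off to 2FA)."""
    if any(cosine(voice, p) >= SIMILARITY_THRESHOLD for p in fraudster_prints):
        return "fraudulent"      # matches a known fraudster voiceprint
    if any(cosine(voice, p) >= SIMILARITY_THRESHOLD for p in customer_prints):
        return "authenticated"   # matches one of the customer's voiceprints
    return "unknown"             # no match: hand off to the authentication module

enrolled = [np.array([0.1, 0.9, 0.2])]       # customer voiceprints (toy values)
fraudsters = [np.array([0.9, 0.1, 0.3])]     # fraudster voiceprints (toy values)
probe = np.array([0.11, 0.88, 0.22])         # voice data from the current call
print(check_voice(probe, enrolled, fraudsters))   # "authenticated"
```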
The authentication module 130 may authenticate the speaker on the call using what is known as two factor authentication. In one implementation, the authentication module 130 may perform the two factor authentication by first determining a second channel 108 associated with the user 102. The second channel 108 may be different than the first channel 108 being used and may include mobile phones (i.e., text or SMS), email, etc. The second channel 108 may be determined from a profile associated with the user 102 (i.e., customer) or user account.
After determining the second channel 108, the authentication module 130 may generate a code 131 and may send the code 131 to the user 102 via the second channel 108. For example, if the second channel 108 is email, the authentication module 130 may send the code 131 to the user 102 at the email address associated with the user 102.
The authentication module 130 may then later receive a code 131 from the user 102 via the first channel 108 (i.e., the call). For example, the agent 152 may ask the user 102 to repeat the code 131 included in the email received via the second channel 108. Depending on the embodiment, the user 102 may speak the code 131 to the agent 152 or may provide the code 131 directly to the authentication module 130 via the internet. If the received code 131 matches the sent code 131 then the authentication module 130 may flag the call and user 102 as authenticated. After being authenticated, the agent 152 may proceed with the business or purpose of the call without further authenticating the user. If the received code 131 does not match the sent code 131, the authentication module 130 may flag the call as fraudulent.
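The two-factor exchange described above might be sketched as follows; the code length, the delivery stub, and the function names are hypothetical.

```python
# Hypothetical two factor authentication sketch: generate a code 131, deliver
# it over the second channel 108, and compare it with the code received back
# over the call. Delivery is a stub; a real system would send an SMS or email.
import secrets

def generate_code(digits: int = 6) -> str:
    """Generate a random numeric code; the length is an assumption."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

def send_via_second_channel(code: str, profile: dict) -> None:
    # Stub standing in for SMS/email delivery per the customer profile.
    print(f"sending {code} via {profile['second_channel']} to {profile['address']}")

def verify(sent_code: str, received_code: str) -> str:
    # Constant-time comparison; a mismatch flags the call as fraudulent.
    if secrets.compare_digest(sent_code, received_code.strip()):
        return "authenticated"
    return "fraudulent"

profile = {"second_channel": "sms", "address": "+15550100"}   # from the profile
code = generate_code()
send_via_second_channel(code, profile)
print(verify(code, code))   # "authenticated"
```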
Furthermore, after the authentication module 130 authenticates the user, additional fraud and authentication related steps may be performed by the computing device 110. For example, in scenarios where the biometrics module 120 was unable to authenticate the user 102 due to incomplete or non-existing voiceprints 121, the biometrics module 120 may attempt to generate one or more voiceprints 121 for the user 102. In some implementations, the biometrics module 120 may generate the one or more voiceprints 121 using the voice data provided by the user 102 at the beginning of the call. In other implementations, after the call has been completed, the biometrics module 120 may use voice data extracted from a recording of the call and may generate the voiceprints 121 from the voice data.
The fraud module 140 may process information related to calls marked or flagged as fraudulent to investigate whether the calls are fraudulent calls and to learn rules or other information that can be later used to identify fraudulent calls or users 102. Initially, when a call is marked or flagged as fraudulent, an indicator or other information may be displayed to the agent 152 to let them know that the call may be fraudulent. In addition, in response to the call being flagged as fraudulent, the agent 152 may be provided with questions to ask the speaker during the call. These questions may be selected to keep the speaker participating on the call and to collect additional information about the speaker that can be used by the fraud module 140 to both better determine if the call is fraudulent, and to update rules or characteristics that can be later used to identify fraudulent calls. Alternatively, or additionally, in response to the call being flagged as fraudulent, the call may be transferred to an agent 152 specially trained on how to deal with fraudulent calls.
After a call has been completed (or while the call is in progress), the fraud module 140 may be provided with information about the call (i.e., a case) such as a recording of the call, any characteristics 117 of the call, and any information collected for the call by the IVR system. The fraud module 140 may then process the information and call recording to determine rules or other characteristics that are indicative of a fraudulent call. For example, the fraud module 140 may determine additional call characteristics 117 that may be used by the analytics module 115 to generate a score 118 for a call.
In some implementations the fraud module 140 may be associated with a fraud team. The fraud team may track fraudulent calls across multiple departments of a company or organization. This may help the organization identify fraudulent call trends in the organization and to possibly get ahead of a fraudulent caller that is calling multiple departments. The fraud team may ultimately determine whether or not a call was properly tagged as fraudulent.
The fraud module 140 may further generate one or more reports 141 based on the fraudulent calls identified by the fraud module 140. The reports 141 can be used to track, manage, and improve fraud practice for an organization. In some implementations, the reports 141 may indicate calls that were successfully identified as fraud, calls that were not successfully identified as fraud, as well as the number of voiceprints 121 (for users 102 or fraudsters) that have been collected by the system. The reports can be used for training and/or coaching agents 152 to reduce the overall training time.
Depending on the embodiment, a report 141 may include an alert for each call that is flagged or determined to be fraudulent. The reports 141 may include for each alert: 1) alert summary: score and threat level; channel information (phone number, carrier line type, chat, web, email, etc.); line risk indicator (carrier/line correlation with fraudulent activity); events (number of times the application has been accessed via the specified channel identifier); accounts (different accounts accessed via the specified channel identifier); user active duration (days the user has accessed the system via the indicated channel); spoof risk (level of threat that the channel identifier has been spoofed); and access to detailed information about the alert; and 2) alert details: score and threat level; channel information; score report; state management (status); score history (a chart showing the risk score for the channel identifier over time); events (additional details for each event where the channel identifier accessed the application); and a link back to the call recording. Other information may be included for each alert in a report 141.
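One hypothetical way to structure such an alert record within a report 141 is sketched below; the field names and types are illustrative, not a disclosed schema.

```python
# Illustrative alert record for a report 141; the fields mirror the alert
# summary and alert details described above, but names and types are assumed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlertSummary:
    score: int                                   # risk score 118
    threat_level: str                            # e.g., "high"
    channel_info: str                            # phone number, line type, etc.
    line_risk: float                             # carrier/line fraud correlation
    events: int                                  # accesses via this identifier
    accounts: List[str] = field(default_factory=list)       # accounts accessed
    active_days: int = 0                         # user active duration
    spoof_risk: float = 0.0                      # spoofing threat level

@dataclass
class AlertDetails:
    status: str                                  # state management
    score_history: List[int] = field(default_factory=list)  # score over time
    event_details: List[str] = field(default_factory=list)  # per-event details
    recording_url: str = ""                      # link back to the call recording
```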
The user-interface 200 further includes windows 203, 205, 207, and 209 that each correspond to another method or means of verifying or authenticating the speaker or customer. The window 203 includes verification based on call characteristics 117 such as ANI. The window 205 includes verification based on biometrics such as voiceprints 121. The window 207 includes verification using two factor authentication. The window 209 includes verification based on security questions associated with the user. Depending on the embodiment, the user-interface 200 further includes a window 211 that displays chats or other messages received from other agents 152. The agents 152 may use the window 211 to share observations and trends they are observing while handling calls.
At 705, a call is received through a first channel. The call may be received by an agent computing device 155 associated with an agent 152. The call may be associated with a customer (i.e., a user 102) and a speaker. The speaker may be the person speaking on the call, and the customer may be the user 102 that the speaker purports to be. Depending on the embodiment, the speaker may have identified themselves as the customer to an IVR system, and/or may have identified themselves as the customer to the agent 152. The first channel 108 may be a telephone line such as a cellular phone, a VoIP phone, or a POTS phone. Other channels 108 may be used.
At 710, whether there are one or more voiceprints 121 associated with the customer is determined. The determination may be made by the biometrics module 120. Depending on the embodiment, the biometrics module 120 may store voiceprints 121 for customers as well as known fraudulent users. If there are one or more voiceprints 121 associated with the customer, then the method 700 may continue at 715. Else the method 700 may continue at 735.
At 715, the one or more voiceprints 121 are retrieved. The one or more voiceprints 121 may be retrieved by the biometrics module 120 from a profile associated with the customer or user, for example.
At 720, a determination is made as to whether any of the one or more voiceprints 121 match voice data associated with the call. The determination may be made by the biometrics module 120. The voice data may comprise various words and phrases spoken by the speaker to one or both of the IVR system or the agent 152. Any method for matching voiceprints 121 against voice data may be used. If any of the voiceprints 121 match the voice data, the method 700 may continue at 730. Else, the method 700 may continue at 725.
As may be appreciated, if the voice data matches any of the voiceprints 121 associated with the customer, then the biometrics module 120 can be assured that the speaker is the customer that they purport to be. Accordingly, no further authentication (such as two factor or security questions) may be necessary for the call. By reducing the amount of authentication that is required, the call experience of customers is improved, and the total amount of time spent per call by agents 152 is reduced.
At 725, the call is flagged as fraudulent. The call may be flagged as fraudulent by one or both of the biometrics module 120 or the authentication module 130. In response to flagging the call as fraudulent, the call may be transferred to an agent 152 that specializes in fraudulent calls. In addition, a recording of the call, and other information about the call such as characteristics 117 of the call, may be sent to the fraud module 140. The fraud module 140 may analyze the recording of the call and the information about the call to try to determine one or more rules that can be used to identify fraudulent calls or fraudulent users 102. In addition, the fraud module 140 may identify trends in fraudulent call activity and may share any information learned with other call centers or other company divisions or subsidiaries. Depending on the embodiment, one or more voiceprints 121 may be generated from the recorded call and may be added to a group of voiceprints 121 that are known to be associated with fraudulent users 102.
At 730, the call is flagged as authenticated. The call may be flagged as authenticated by one or both of the biometrics module 120 or the authentication module 130. After the call is authenticated, the agent 152 may proceed with addressing the reason that the customer initiated the call in the first place.
At 735, two factor authentication is performed. The two factor authentication may be performed by the authentication module 130. Because there was no voiceprint 121 associated with the customer, the authentication module 130 may authenticate the speaker using two factor authentication. Depending on the embodiment, the authentication module 130 may send a code 131 to the speaker using a second channel 108 that is indicated in a profile associated with the customer. Other methods for authentication such as security questions may be used if two factor authentication is not available. For example, a user 102 may not have set up two factor authentication yet.
At 740, whether the authentication was successful is determined. The determination may be made by the authentication module 130. For two factor authentication, the authentication is successful when the speaker speaks the correct code 131 back to the agent 152. For authentication based on security questions, the authentication is successful when the speaker provides the correct answers to the security questions. If the authentication was successful, then the method 700 may proceed to 730 where the call may be flagged as authenticated. If the authentication was not successful, the method 700 may proceed to 725 where the call may be flagged as fraudulent.
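A condensed sketch of the control flow of the method 700 (steps 705 through 740) follows; each helper function is a stub standing in for the corresponding module behavior, not the disclosed implementation.

```python
# Condensed sketch of method 700; every helper below is a stub, not the
# disclosed implementation of the biometrics or authentication modules.
def get_customer_voiceprints(customer_id: str) -> list:
    return []                    # step 710: look up stored voiceprints (stub)

def voice_matches(voice_data: bytes, voiceprints: list) -> bool:
    return False                 # step 720: voiceprint comparison (stub)

def two_factor_succeeds(call: dict) -> bool:
    return True                  # steps 735-740: send and verify code 131 (stub)

def method_700(call: dict) -> str:
    voiceprints = get_customer_voiceprints(call["customer_id"])   # step 710
    if voiceprints:                                               # step 715
        if voice_matches(call["voice_data"], voiceprints):        # step 720
            return "authenticated"                                # step 730
        return "fraudulent"                                       # step 725
    if two_factor_succeeds(call):                                 # steps 735-740
        return "authenticated"                                    # step 730
    return "fraudulent"                                           # step 725

print(method_700({"customer_id": "c123", "voice_data": b""}))    # "authenticated"
```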
At 805, a call is received through a first channel. The call may be received by an agent computing device 155 associated with an agent 152. The call may be associated with a customer (i.e., a user 102) and a speaker. The speaker may be the person speaking on the call, and the customer may be the user 102 that the speaker purports to be. Depending on the embodiment, the speaker may have identified themselves as the customer to an IVR system, and/or may have identified themselves as the customer to the agent 152. The first channel 108 may be a telephone line such as a cellular phone, a VoIP phone, or a POTS phone. Other channels 108 may be used.
At 810, a score is assigned to the call. The score 118 may be assigned to the call by the analytics module 115 based on one or more characteristics 117 of the call. Depending on the embodiment the characteristics 117 may include knowledge-based characteristics 117, reputation-based characteristics 117, and behavior-based characteristics 117. Other characteristics 117 may be considered. Example characteristics include the number of calls associated with the number, the number of calls associated with the customer or user 102, carrier information, the type of channel 108 used, call origin, ANI information, IVR information, PBX information, call velocity, and cross platform detection. Other characteristics 117 may be used.
At 815, a determination is made as to whether the score 118 satisfies a threshold. The determination may be made by the analytics module 115. If the score satisfies the threshold (e.g., is less than the threshold), the method 800 may continue at 840. Else, the method 800 may continue at 820.
At 820, a determination is made as to whether the call is associated with a fraudulent speaker. The determination may be made by the biometrics module 120 by comparing voice data associated with the call with voiceprints 121 known to be associated with fraudulent speakers. If any of the voiceprints 121 match, the method 800 may continue at 830. Else, the method 800 may continue at 825.
At 825, a determination is made as to whether the speaker is authenticated. The determination may be made by the biometrics module 120 and/or the authentication module 130. In some implementations, the biometrics module 120 may authenticate the speaker by comparing voice data associated with the call with voiceprints 121 known to be associated with the user 102 corresponding to the speaker. Alternatively or additionally, the authentication module 130 may authenticate the speaker using two factor authentication. If the speaker is authenticated the method 800 may continue at 840. Else, the method 800 may continue at 830.
At 830, the call is flagged as fraudulent. The call may be flagged as fraudulent by any of the analytics module 115, the biometrics module 120, or the authentication module 130.
At 835, the call is sent for fraud processing. The call, including any information about the call and a recording of the call, may be sent to the fraud module 140 for fraud processing. The fraud processing may include analyzing the recording of the call and the information about the call to try to determine one or more rules that can be used to identify fraudulent calls or fraudulent users 102. In addition, the fraud module 140 may identify trends in fraudulent call activity and may share any information learned with other call centers or other company divisions or subsidiaries. Depending on the embodiment, one or more voiceprints 121 may be generated from the recorded call and may be added to the voiceprints 121 that are known to be associated with fraudulent users 102.
At 840, the call is flagged as authenticated. The call may be flagged as authenticated by one or more of the analytics module 115, the biometrics module 120, or the authentication module 130. After the call is authenticated, the agent 152 may proceed with addressing the reason that the customer initiated the call in the first place.
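Similarly, a condensed sketch of the method 800 (steps 805 through 840) follows; the scoring, matching, and authentication helpers are stubs standing in for the modules 115, 120, and 130.

```python
# Condensed sketch of method 800; helpers are stubs, and the threshold
# direction (lower score is better) follows the example given at step 815.
def assign_score(call: dict) -> int:
    return 2                     # step 810: analytics module scoring (stub)

def matches_fraudster(call: dict) -> bool:
    return False                 # step 820: fraudster voiceprint check (stub)

def speaker_authenticated(call: dict) -> bool:
    return True                  # step 825: voiceprint match or 2FA (stub)

def send_for_fraud_processing(call: dict) -> None:
    pass                         # step 835: hand the case to the fraud module 140

def method_800(call: dict, threshold: int = 3) -> str:
    if assign_score(call) < threshold:       # step 815: score satisfies threshold
        return "authenticated"               # step 840
    if matches_fraudster(call):              # step 820
        send_for_fraud_processing(call)      # steps 830-835
        return "fraudulent"
    if speaker_authenticated(call):          # step 825
        return "authenticated"               # step 840
    send_for_fraud_processing(call)          # steps 830-835
    return "fraudulent"

print(method_800({}))                        # "authenticated"
```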
Numerous other general purpose or special purpose computing device environments or configurations may be used. Examples of well-known computing devices, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like.
Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 9, an example system for implementing aspects described herein includes a computing device, such as computing device 900. In its most basic configuration, computing device 900 typically includes at least one processing unit and memory 904. Depending on the exact configuration and type of computing device, memory 904 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two.
Computing device 900 may have additional features/functionality. For example, computing device 900 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 9 by removable storage 908 and non-removable storage 910.
Computing device 900 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computing device 900 and includes both volatile and non-volatile media, and removable and non-removable media.
Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 904, removable storage 908, and non-removable storage 910 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 900. Any such computer storage media may be part of computing device 900.
Computing device 900 may contain communication connection(s) 912 that allow the device to communicate with other devices. Computing device 900 may also have input device(s) 914 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 916 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.
It should be understood that the various techniques described herein may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter.
Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include personal computers, network servers, and handheld devices, for example.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application is a continuation of U.S. patent application Ser. No. 16/905,111, filed on Jun. 18, 2020, entitled "SYSTEMS AND METHODS FOR AUTHENTICATION AND FRAUD DETECTION," which claims the benefit of priority to U.S. Provisional Patent Application No. 62/864,169, filed on Jun. 20, 2019, entitled "SYSTEMS AND METHODS FOR AUTHENTICATION AND FRAUD DETECTION." The contents of both applications are hereby incorporated by reference in their entirety.