User interface for fraud alert management

Information

  • Patent Grant
  • Patent Number
    11,538,128
  • Date Filed
    Tuesday, May 14, 2019
  • Date Issued
    Tuesday, December 27, 2022
Abstract
A user interface for a fraud detection application includes a visual display; a plurality of panes displayed on the visual display, each pane including an identifier corresponding to a communication received; a graphical representation of a threat risk associated with the identifier; a numeric score associated with the threat risk, wherein the numeric score is a weighted score based on a plurality of predetermined factors updated substantially continuously. The graphical representation may include a status bar indicative of a threat risk associated with the identifier, the threat risk provided by the fraud detection algorithm and based on a weighted score. Each pane may include additional information about the identifier, such as a number of accounts accessed or attempted to be accessed associated with the identifier; a number of days the identifier has been active; a type of channel associated with the identifier; and a number of communications initiated by the identifier over a predetermined period of time. A user may access further information by activating a portion of the visual display to access additional information related to the threat risk. The communication may be a phone call, a chat, a web interaction or the like.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

Embodiments of the present invention relate to a graphical user interface, specifically a graphical user interface and information provided thereby for use in fraud alert management.


Background

According to the Identity Theft Resource Center, there were 781 tracked data breaches in 2015 where consumer data was stolen. There are many more breaches that go undetected or unreported. 338 of these breaches resulted in over 164 million social security numbers being stolen. Social security numbers are much more valuable to a fraudster than a credit card number. Credit card accounts can be closed, but social security numbers provide an ongoing opportunity for fraudulent activity. In 2016 the number of breaches increased to 1,091 and there have already been over 1,000 in 2017 including the Equifax breach where 143M social security numbers were compromised. According to Javelin, losses attributable to identity theft topped $16B.


Fraudsters take the stolen data and systematically attack consumers, enterprises, and government entities through the contact center, and particularly through the associated interactive voice response (IVR) system, the self-service channel. The IVR provides the means for a fraudster to access account information in anonymity, without facing any interrogation by an agent.


In a 2016 Aite Group study, 78% of financial services executives indicated that fraud in the contact center is on the increase. 17% of financial services executives indicated that they did not know, likely because they do not have the processes in place to identify the fraud in the call center, let alone prevent it. Account Takeover (ATO) fraud accounts for 28% of all identity theft fraud in financial services and has a 10% compound annual growth rate (CAGR). Fraudulent activity is so prevalent in the contact center and IVR that Aite says, “Account Takeover is so commonly enabled in the contact center that it should be renamed the cross-channel fraud enablement channel.”


Accordingly, there is a need for systems to help detect and prevent fraud, particularly fraud via IVR systems.


BRIEF SUMMARY OF THE INVENTION

Accordingly, the present invention is directed to a user interface for fraud alert management that obviates one or more of the problems due to limitations and disadvantages of the related art.


In accordance with the purpose(s) of this invention, as embodied and broadly described herein, this invention, in one aspect, relates to a user interface for a fraud detection application, the user interface comprising: a visual display; a plurality of panes displayed on the visual display, each pane including: an identifier corresponding to a communication received; a graphical representation of a threat risk associated with the identifier; a numeric score associated with the threat risk, wherein the numeric score is a weighted score based on a plurality of predetermined factors updated substantially continuously.


Additional advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.







BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying figures, which are incorporated herein and form part of the specification, illustrate examples of a user interface for fraud alert management. Together with the description, the figures further serve to explain the principles of the user interface for fraud alert management described herein and thereby enable a person skilled in the pertinent art to make and use the user interface for fraud alert management.



FIG. 1 is an exemplary initial view of a user interface for list management.



FIG. 2 is an exemplary initial view of a user interface according to principles described herein illustrating exemplary alerts.



FIG. 3 is an exemplary initial view of a user interface for sorting according to status of an alert.



FIG. 4 is an exemplary individual pane associated with a single caller or phone number.



FIG. 5 is an exemplary view of additional information available associated with a caller or phone number.



FIG. 6 further illustrates event details that may be available to an analyst.





DETAILED DESCRIPTION

Reference will now be made in detail to embodiments of the user interface for fraud alert management with reference to the accompanying figures, in which like reference numerals indicate like elements.


It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.


According to principles described herein, a user interface for fraud alert management provides a visual representation and integrated functionality to a user. Typically, the user would be a fraud analyst. Often, such analysts are tasked with evaluating the risk level of a caller or phone number.


In the process of identifying whether a particular caller or phone number (ANI) 18 represents a risk requiring further investigation, information about a call would be received from the IVR. The present system uses rules-based processing and a learning system (artificial intelligence) to process and continually update the information presented in a usable fashion to the analyst via the user interface.


As illustrated in FIG. 1, an initial view of the user interface shows information important to the analyst for determining whether the caller or phone number represents a threat. Each “pane” 10 in the window 14 corresponds to a caller or phone number 18 associated with recent calls to the IVR.



FIG. 2 is an exemplary initial view of a user interface according to principles described herein illustrating exemplary alerts. As illustrated, the information most important to a fraud analyst may be presented “by default” (e.g., new Fraud Alerts). As shown, the side bar 22 on the left provides quick navigation to the remaining options (e.g., menu buttons 26 reveal/hide the side bar 22 and further sorting or functionality). For example, selecting the arrows expands or hides sub-menus 30.


As shown in FIGS. 2 and 3, a menu of available screens, which may represent filters or sorting of the information, may be provided. In the example of FIG. 3, “Alerts” and “List Management” are provided as upper-level screens, but the upper-level screen access may be provided in another form that would be useful depending on the workflow of the analyst and the fraud review team. As further illustrated in this example, “Alerts” may be further broken down into new, under review, and reviewed. In this example, “new” refers to potential fraud that has been recently detected (or evaluated based on the caller or phone number); “under review” refers to alerts that have been seen by an analyst and for which a determination of whether fraudulent activity has occurred is in progress; and “reviewed” indicates that the analysis has been completed. A reviewed ANI can be placed on a blacklist if it is confirmed to be fraudulent, placed on a whitelist if the number is identified as a test number or an enterprise number, or kept in a monitoring state if no fraudulent activity was detected. The screens/sort may further indicate disposition or referral of the incident, caller, or phone number 18 to other fraud-related activities or downstream analysts or functions, such as customer alerts, or closing of the incident. The screen may indicate the age of the call, the analyst assigned, risk level, under review, reviewed, closed, disposition, etc. A single analyst may see only his or her own assigned incidents or may be authorized to see system-wide calls. In the example of FIG. 2, the side bar menu 22 may include a category “Monitoring”, which indicates an analyst has completed review and is monitoring for additional activity.
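The alert workflow described above (new, under review, reviewed, with a reviewed ANI dispositioned to a blacklist, a whitelist, or monitoring) could be sketched as a small state machine. The state names, transition table, and `Alert` class below are illustrative assumptions, not structures disclosed in the patent:

```python
# Illustrative sketch of the alert workflow described above.
# State and disposition names are assumptions for illustration only.

ALLOWED_TRANSITIONS = {
    "new": {"under_review"},
    "under_review": {"reviewed"},
    "reviewed": {"blacklist", "whitelist", "monitoring"},
}

class Alert:
    def __init__(self, ani):
        self.ani = ani          # caller or phone number (identifier)
        self.status = "new"     # every incoming alert starts as "new"

    def transition(self, new_status):
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.status = new_status

alert = Alert("555-0100")       # hypothetical ANI
alert.transition("under_review")
alert.transition("reviewed")
alert.transition("blacklist")   # e.g., ANI confirmed fraudulent
```

A transition table like this would keep an analyst from, say, blacklisting an ANI that has never been reviewed.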


As shown in FIG. 4, each caller or phone number (ANI) 18 is provided in a single “pane” 10 as an alert summary in the user interface, along with a plurality of associated information. As illustrated in FIG. 4, the pane 10 may include channel information 34 such as whether access to the IVR is made by phone or other means (e.g., chat, web, email, etc.). For the purposes of the illustration in FIG. 4, the description refers to phone calls. The alert summary may include a score 38 and threat level. For example, a numerical score 38 may be provided where the score is associated with the caller or phone number 18. As illustrated in FIG. 3 in particular, the exemplary score 38 is 65 on a scale of 1-100. The score 38 may take into account a variety of information, including, but not limited to, behavior data (e.g., what a caller does in a call; behavior patterns), telecommunications data (e.g., the carrier, whether the phone number is a working number, landline versus cellular, fixed or non-fixed VOIP, etc.), and reputation/history data (e.g., how long the number has been active and how its current behavior compares to past behavior; a blacklist entry would indicate that the number had previously been used in fraud). Spoof risk 42 is a separate score, visible in the UI as the last icon on each high-level tile.


Each of these information sources may be taken into account individually or in combination to provide a total weighted aggregate score 38. Thresholds may be set to indicate whether the numerical score is indicative of the caller or phone number 18 being a high fraud risk, an elevated fraud risk, a low level threat or not considered a threat.
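The weighted aggregation described above could be sketched as follows. The factor names and weights are assumptions for illustration; the patent describes a weighted combination of behavior, telecom, and reputation/history data without disclosing the actual weights:

```python
# Illustrative weighted aggregate score 38. The factor names and weights
# below are assumptions, not values disclosed in the patent.

FACTOR_WEIGHTS = {"behavior": 0.5, "telecom": 0.3, "reputation": 0.2}

def aggregate_score(factor_scores):
    """Combine per-factor sub-scores (each 0-100) into one weighted score."""
    return sum(FACTOR_WEIGHTS[name] * value
               for name, value in factor_scores.items())

score = aggregate_score({"behavior": 90, "telecom": 70, "reputation": 50})
# 0.5*90 + 0.3*70 + 0.2*50 = 76.0
```

Thresholds over this aggregate would then classify the caller as high risk, elevated risk, low-level threat, or no threat.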


Further, a graphical bar 46 may be provided to illustrate at a glance the level of risk associated with the caller or phone number 18. As illustrated in FIG. 4, an exemplary status bar 46 is broken into three sections, which may be selectively illuminated based on the level of risk as indicated by the associated score thresholds. For example, as illustrated in the first pane 10 of FIG. 1, the exemplary caller has a risk score of 96, which is considered “high risk”, so that all three portions of the status bar 46 are illuminated. Referring again to FIGS. 1 and 2, a score of 65 or 43 is considered elevated risk, such that only two portions of the status bar are illuminated; a score of 28 is considered a low-level threat, such that only one portion of the status bar is illuminated. Although not shown, a numeric score of less than, e.g., 20, would be indicative of no threat, and thus no portions of the status bar would be illuminated. That is, in this exemplary embodiment, the visual representation of the threat risk presented to the analyst is broken down as follows: High Threat: 3 bars; Elevated Threat: 2 bars; Low Threat: 1 bar; No Threat: 0 bars (if used).
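The score-to-segments mapping for the status bar 46 could be sketched as below. The band boundaries (20/40/80) are assumptions chosen only to be consistent with the examples in the text (96 is high, 65 and 43 are elevated, 28 is low, below 20 is no threat); the patent does not disclose the actual thresholds:

```python
# Map a numeric risk score to the number of illuminated status-bar 46
# segments. Band boundaries are assumed, consistent with the text's
# examples: 96 -> 3 bars, 65 and 43 -> 2 bars, 28 -> 1 bar, <20 -> 0 bars.

def bar_segments(score):
    if score >= 80:
        return 3  # high threat
    if score >= 40:
        return 2  # elevated threat
    if score >= 20:
        return 1  # low-level threat
    return 0      # no threat

for s in (96, 65, 43, 28, 12):
    print(s, "bars:", bar_segments(s))
```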


The single pane 10 may also include other information relevant to the fraud assessment for an analyst, including but not limited to channel information 34 such as, for phone calls, the phone number, carrier, and line type, and for other channel types, an indication of the channel by which the IVR is accessed, such as chat, web, email, etc. The pane 10 may further include a line risk indicator 50 (carrier/line correlation with fraudulent activity); events 54 (number of times the application has been accessed via the specified channel identifier); accounts 58 (different accounts accessed via the specified channel identifier); user active duration 62 (days the user has accessed the system via the indicated channel); spoof risk 42 (level of threat that the channel identifier has been spoofed); and a button 66 for access to detailed information about the alert, as shown, for example, in FIG. 5. FIG. 6 further illustrates event details such as score and threat level; channel information; a score report (which may be the same data as shown on the Summary card); state management (select a new status from the menu, then the Save button to update); and score history. The chart further illustrates a risk score for the channel identifier over time, and events with additional details for each event in which the channel identifier accessed the application. As shown in FIG. 6, the Alert Details may provide a hyperlink labeled Transfer Success. This hyperlink pulls up the voice recording of the agent/caller interaction, enabling the fraud analyst to easily combine activity and scoring from the IVR with the agent interaction to have a holistic view of the interaction.
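The fields of the alert-summary pane enumerated above could be gathered into a simple record. The field names below are illustrative assumptions that mirror the description and its reference numerals:

```python
# Illustrative data structure for the alert-summary pane 10 fields.
# Field names are assumptions mirroring the description, not the patent's.

from dataclasses import dataclass

@dataclass
class AlertPane:
    identifier: str   # caller or phone number (ANI) 18
    channel: str      # channel information 34: phone, chat, web, email
    score: int        # weighted numeric risk score 38
    line_risk: str    # line risk indicator 50
    events: int       # events 54: accesses via this channel identifier
    accounts: int     # accounts 58: distinct accounts accessed
    active_days: int  # user active duration 62, in days
    spoof_risk: str   # spoof risk 42
```

A pane renderer could then draw the status bar, score, and detail button 66 from one such record per caller.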


While the information is organized in a particular way in the embodiment of FIG. 4, the exact layout of the user interface is not so limited. The information provided to the analyst in the alert summary may include, but is not limited to, the number of accounts accessed or attempted to be accessed by the caller or phone number 18; active duration, such as the number of days a user has accessed the system via the channel (e.g., phone number 18); a risk of whether the phone number is a spoofed number; and the number of calls for the caller or phone number 18 in a defined time period. The time period may be a rolling time period or a specified fixed time period. The pane 10 may also include an alert, which may be an activatable virtual button that leads to further information or background information relevant to the fraud risk evaluation, as illustrated in FIG. 5. Such further information may include historical data or scores over a predetermined period of time, etc.


The system may include the capability to set up user profiles to define the scope of accessibility that a user is allowed. Example user profiles may include “super user” who can access any area for any customer; “admin” who has access to set up other users within a customer domain; “manager/supervisor” who has access to customer specific data; and “analyst” who has access to limited areas within the customer domain.
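The profile scopes described above could be sketched as a small permission table. The permission names and grant structure here are illustrative assumptions, not details from the patent:

```python
# Sketch of the user-profile scopes described above. Permission names
# and the grant structure are assumptions for illustration only.

PROFILES = {
    "super_user": {"*"},                # any area for any customer
    "admin":      {"manage_users"},     # set up users within a customer domain
    "manager":    {"customer_data"},    # customer-specific data
    "analyst":    {"assigned_alerts"},  # limited areas within the domain
}

def can(profile, permission):
    """True if the profile grants the permission ('*' grants everything)."""
    grants = PROFILES.get(profile, set())
    return "*" in grants or permission in grants
```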


A “management view” may also be provided, where the management view may provide access to KPIs (key performance indicators) related to the fraud detection, alerts, analyst performance, and overall system functionality. The management view provides additional insight into the workflow aspect of a fraud event, providing high-level and detailed information on when an alert was initiated, the current status of the alert, and its final disposition, including timelines and the analyst who worked the alert.


During an IVR call or communication, the associated caller or phone number is scored on a scale of 1 to 99, and if the score exceeds a certain level defined by business rules, it may trigger an alert and be referred to a fraud specialist to determine whether the caller presents a fraud risk, require additional stepped-up authentication, trigger automated changes to the IVR call flow to change the access allowed to the caller, or result in other dispositions based on business rules. Scores are determined by the analysis of behavioral data, telecom data, and the known history of a caller and account activity. Behavioral data may include: ANI velocity, account velocity, transfer velocity, call duration, goal attempt and completion, exit point, authentication methods and success/failure, application-specific data (card REPIN, PIN probing, payments, bank transfers, access of closed or blocked cards), and time of day. Telecom data may include: the line type (e.g., landline, cellular, fixed VOIP, or non-fixed VOIP); whether there is a caller ID associated with the phone number or whether it is anonymous; whether the ANI is actually in service; whether the cell phone is a prepaid cell phone; the date the number was last ported; spoof risk; and geolocation. The history component looks at the account being accessed and previous access to this account, and checks for the ANI on a blacklist and/or the account on a watchlist. This list is not exhaustive and may adapt over time.
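The business-rule trigger described above could be sketched as follows: each data source contributes a sub-score, the combined score is compared against a threshold, and a disposition is chosen. The threshold value, the equal weighting, and the disposition names are all assumptions for illustration:

```python
# Illustrative business-rule trigger for the 1-99 scoring described
# above. Threshold, weighting, and disposition names are assumptions.

ALERT_THRESHOLD = 60  # assumed business-rule threshold

def score_call(behavior, telecom, history):
    """Combine three 1-99 sub-scores; equal weighting is an assumption."""
    return round((behavior + telecom + history) / 3)

def disposition(score):
    """Pick a disposition once the score is known."""
    if score > ALERT_THRESHOLD:
        return "refer_to_fraud_specialist"
    return "no_action"

risky = score_call(behavior=90, telecom=80, history=70)   # -> 80
print(disposition(risky))
```

In practice the same score could instead drive stepped-up authentication or an automated IVR call-flow change, per the business rules.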


The present disclosure provides a user interface for the fraud analyst to manage numerous alerts, e.g., by being able to take in a significant amount of information visually, organize that information, and obtain additional details as needed via the user interface.


When a call is received by the IVR, it is designated “new” and a pane 10 is created on a fraud analyst's screen. The ANI (phone number) will only appear on the analyst's screen if it is scored at a level that exceeds a threshold. The analyst may see many panes related to multiple callers/phone numbers at one time. The status bar offers a visual cue for the analyst to designate callers/phone numbers with low fraud risk as closed or otherwise change their status, while devoting attention to callers/phone numbers with a higher or moderate risk. The analyst may click on any of the links provided in a pane 10 associated with a particular caller/phone number and assess the underlying data available to determine the disposition of the particular caller/phone number. Once the fraud analyst has determined the disposition of the caller/phone number, the incident may be passed to another analyst for more investigation or closed, such that the pane 10 can be removed from the view the analyst sees.


In addition to the features described above, the system may include authentication and authorization such that the system is secure and accessible only to registered users. The system can be configured so that each analyst sees only an application list specific to that analyst, e.g., only those incidents assigned to them or that they are allowed to work. A manager or administrator can modify these settings, e.g., granting or changing authorizations, assignments, or access.


While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the present invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments but should be defined only in accordance with the following claims and their equivalents.

Claims
  • 1. A system for fraud detection for a call center system, the system comprising: a processor; and a visual display in communication with the processor, the processor causing the visual display to present a plurality of panes displayed on the visual display, each pane including: an identifier corresponding to a communication received; a graphical representation of a threat risk associated with the identifier; a numeric score associated with the threat risk, wherein the numeric score is a weighted score based on a plurality of predetermined factors updated substantially continuously.
  • 2. The system of claim 1, wherein the graphical representation includes: a status bar indicative of a threat risk associated with the identifier, the threat risk provided by the fraud detection algorithm and based on a weighted score.
  • 3. The system of claim 2, wherein the communication is one of a phone call, a chat and a web interaction.
  • 4. The system of claim 1, wherein each pane includes additional information about the identifier.
  • 5. The system of claim 4, wherein the communication is one of a phone call, a chat and a web interaction.
  • 6. The system of claim 1, wherein the additional information includes: a number of accounts accessed or attempted to be accessed associated with the identifier; a number of days the identifier has been active; a type of channel associated with the identifier; and a number of communications initiated by the identifier over a predetermined period of time; wherein a user may access further information by activating a portion of the visual display to access additional information related to the threat risk.
  • 7. The system of claim 6, wherein the communication is one of a phone call, a chat and a web interaction.
  • 8. The system of claim 1, wherein the communication is one of a phone call, a chat and a web interaction.
  • 9. A non-transitory computer readable medium having stored thereon program instructions that, when executed by a processing system, direct the processing system to: display a plurality of graphical panes on a visual display, a plurality of panes displayed on the visual display; cause each pane to display an identifier corresponding to a communication received, a graphical representation of a threat risk associated with the identifier, and a numeric score associated with the threat risk, wherein the numeric score is a weighted score based on a plurality of predetermined factors; and update the graphic representation of the threat risk and the numeric scores displayed substantially continuously.
  • 10. The non-transitory computer readable medium of claim 9, wherein the program instructions, when executed by a processing system further direct the processing system to display a status bar indicative of a threat risk associated with the identifier, the threat risk provided by the fraud detection algorithm and based on a weighted score.
  • 11. The non-transitory computer readable medium of claim 9, wherein the program instructions, when executed by the processing system, further direct the processing system to cause each pane to display additional information about the identifier.
  • 12. The non-transitory computer readable medium of claim 11, wherein the additional information includes: a number of accounts accessed or attempted to be accessed associated with the identifier; a number of days the identifier has been active; a type of channel associated with the identifier; and a number of communications initiated by the identifier over a predetermined period of time; wherein a user may access further information by activating a portion of the visual display to access additional information related to the threat risk.
  • 13. The non-transitory computer readable medium of claim 9, wherein the communication is one of a phone call, a chat and a web interaction.
  • 14. A method of presenting information to a user of a fraud detection application in a call center, the method comprising: displaying a plurality of graphical panes on a visual display, a plurality of panes displayed on the visual display; causing each pane to display an identifier corresponding to a communication received, a graphical representation of a threat risk associated with the identifier, and a numeric score associated with the threat risk, wherein the numeric score is a weighted score based on a plurality of predetermined factors; and updating the graphic representation of the threat risk and the numeric scores displayed substantially continuously.
  • 15. The method of claim 14, further comprising displaying in each pane on the visual display a status bar indicative of a threat risk associated with the identifier, the threat risk provided by the fraud detection algorithm and based on a weighted score.
  • 16. The method of claim 14, further comprising causing each pane to display additional information about the identifier.
  • 17. The method of claim 16, wherein the additional information includes: a number of accounts accessed or attempted to be accessed associated with the identifier; a number of days the identifier has been active; a type of channel associated with the identifier; and a number of communications initiated by the identifier over a predetermined period of time; wherein a user may access further information by activating a portion of the visual display to access additional information related to the threat risk.
  • 18. The method of claim 14, wherein the communication is one of a phone call, a chat and a web interaction.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority to U.S. Provisional Application No. 62/671,046 filed May 14, 2018, the disclosure of which is incorporated herein by reference in its entirety.

US Referenced Citations (153)
Number Name Date Kind
4653097 Watanabe et al. Mar 1987 A
4823380 Kohen et al. Apr 1989 A
4864566 Chauveau Sep 1989 A
4930888 Freisleben et al. Jun 1990 A
5027407 Tsunoda Jun 1991 A
5222147 Koyama Jun 1993 A
5638430 Hogan et al. Jun 1997 A
5805674 Anderson, Jr. Sep 1998 A
5907602 Peel et al. May 1999 A
5946654 Newman et al. Aug 1999 A
5963908 Chadha Oct 1999 A
5999525 Krishnaswamy et al. Dec 1999 A
6044382 Martino Mar 2000 A
6145083 Shaffer et al. Nov 2000 A
6266640 Fromm Jul 2001 B1
6275806 Pertrushin Aug 2001 B1
6427137 Pertrushin Jul 2002 B2
6480825 Sharma et al. Nov 2002 B1
6510415 Talmor et al. Jan 2003 B1
6587552 Zimmerman Jul 2003 B1
6597775 Lawyer et al. Jul 2003 B2
6915259 Rigazio Jul 2005 B2
7006605 Morganstein et al. Feb 2006 B1
7039951 Chaudhari et al. May 2006 B1
7054811 Barzilay May 2006 B2
7106843 Gainsboro et al. Sep 2006 B1
7130800 Currey et al. Oct 2006 B1
7158622 Lawyer et al. Jan 2007 B2
7212613 Kim et al. May 2007 B2
7299177 Broman et al. Nov 2007 B2
7386105 Wasserblat Jun 2008 B2
7403922 Lewis et al. Jul 2008 B1
7539290 Ortel May 2009 B2
7657431 Hayakawa Feb 2010 B2
7660715 Thambiratnam Feb 2010 B1
7668769 Baker et al. Feb 2010 B2
7693965 Rhoads Apr 2010 B2
7778832 Broman et al. Aug 2010 B2
7822605 Zigel et al. Oct 2010 B2
7908645 Varghese et al. Mar 2011 B2
7940897 Khor et al. May 2011 B2
8036892 Broman et al. Oct 2011 B2
8073691 Rajakumar Dec 2011 B2
8112278 Burke Feb 2012 B2
8145562 Wasserblat et al. Mar 2012 B2
8253797 Maali et al. Aug 2012 B1
8311826 Rajakumar Nov 2012 B2
8510215 Gutierrez Aug 2013 B2
8537978 Jaiswal et al. Sep 2013 B2
9001976 Arrowood Apr 2015 B2
10477012 Rao et al. Nov 2019 B2
10484532 Newman et al. Nov 2019 B1
20010026632 Tamai Oct 2001 A1
20020022474 Blom et al. Feb 2002 A1
20020099649 Lee et al. Jul 2002 A1
20030009333 Sharma et al. Jan 2003 A1
20030050780 Rigazio Mar 2003 A1
20030050816 Givens et al. Mar 2003 A1
20030063133 Foote et al. Apr 2003 A1
20030097593 Sawa et al. May 2003 A1
20030147516 Lawyer et al. Aug 2003 A1
20030203730 Wan et al. Oct 2003 A1
20030208684 Camacho et al. Nov 2003 A1
20040029087 White Feb 2004 A1
20040105006 Lazo et al. Jun 2004 A1
20040111305 Gavan et al. Jun 2004 A1
20040131160 Mardirossian Jul 2004 A1
20040143635 Galea Jul 2004 A1
20040164858 Lin Aug 2004 A1
20040167964 Rounthwaite et al. Aug 2004 A1
20040169587 Washington Sep 2004 A1
20040203575 Chin et al. Oct 2004 A1
20040240631 Broman et al. Dec 2004 A1
20040257444 Maruya et al. Dec 2004 A1
20050010411 Rigazio Jan 2005 A1
20050043014 Hodge Feb 2005 A1
20050076084 Loughmiller et al. Apr 2005 A1
20050125226 Magee Jun 2005 A1
20050125339 Tidwell et al. Jun 2005 A1
20050185779 Toms Aug 2005 A1
20050273442 Bennett et al. Dec 2005 A1
20060013372 Russell Jan 2006 A1
20060106605 Saunders et al. May 2006 A1
20060107296 Mock et al. May 2006 A1
20060149558 Kahn Jul 2006 A1
20060161435 Atef et al. Jul 2006 A1
20060212407 Lyon Sep 2006 A1
20060212925 Shull et al. Sep 2006 A1
20060248019 Rajakumar Nov 2006 A1
20060251226 Hogan et al. Nov 2006 A1
20060282660 Varghese et al. Dec 2006 A1
20060285665 Wasserblat et al. Dec 2006 A1
20060289622 Khor et al. Dec 2006 A1
20060293891 Pathuel Dec 2006 A1
20070041517 Clarke et al. Feb 2007 A1
20070071206 Gainsboro et al. Mar 2007 A1
20070074021 Smithies et al. Mar 2007 A1
20070100608 Gable et al. May 2007 A1
20070124246 Lawyer et al. May 2007 A1
20070208569 Subramanian et al. Sep 2007 A1
20070244702 Kahn et al. Oct 2007 A1
20070280436 Rajakumar Dec 2007 A1
20070282605 Rajakumar Dec 2007 A1
20070288242 Spengler Dec 2007 A1
20080010066 Broman et al. Jan 2008 A1
20080114612 Needham et al. May 2008 A1
20080154609 Wasserblat et al. Jun 2008 A1
20080181417 Pereg et al. Jul 2008 A1
20080195387 Zigel et al. Aug 2008 A1
20080222734 Redlich et al. Sep 2008 A1
20090033519 Shi et al. Feb 2009 A1
20090046841 Hodge Feb 2009 A1
20090059007 Wagg et al. Mar 2009 A1
20090106846 Dupray et al. Apr 2009 A1
20090119106 Rajakumar et al. May 2009 A1
20090147939 Morganstein et al. Jun 2009 A1
20090247131 Champion et al. Oct 2009 A1
20090254971 Herz et al. Oct 2009 A1
20090304374 Fruehauf et al. Dec 2009 A1
20090319269 Aronowitz Dec 2009 A1
20100114744 Gonen May 2010 A1
20100228656 Wasserblat et al. Sep 2010 A1
20100303211 Hartig Dec 2010 A1
20100305946 Gutierrez Dec 2010 A1
20100305960 Gutierrez Dec 2010 A1
20100329546 Smith Dec 2010 A1
20110004472 Zlokarnik Jan 2011 A1
20110026689 Metz et al. Feb 2011 A1
20110069172 Hazzani Mar 2011 A1
20110191106 Khor et al. Aug 2011 A1
20110255676 Marchand et al. Oct 2011 A1
20110282661 Dobry et al. Nov 2011 A1
20110282778 Wright et al. Nov 2011 A1
20110320484 Smithies et al. Dec 2011 A1
20120053939 Gutierrez et al. Mar 2012 A9
20120054202 Rajakumar Mar 2012 A1
20120072453 Guerra et al. Mar 2012 A1
20120253805 Rajakumar et al. Oct 2012 A1
20120254243 Zeppenfeld et al. Oct 2012 A1
20120263285 Rajakumar et al. Oct 2012 A1
20120284026 Cardillo et al. Nov 2012 A1
20130016819 Cheethirala Jan 2013 A1
20130163737 Dement et al. Jun 2013 A1
20130197912 Hayakawa et al. Aug 2013 A1
20130253919 Gutierrez et al. Sep 2013 A1
20130283378 Costigan et al. Oct 2013 A1
20130300939 Chou et al. Nov 2013 A1
20150055763 Guerra et al. Feb 2015 A1
20150288791 Weiss Oct 2015 A1
20180075454 Claridge et al. Mar 2018 A1
20190020759 Kuang Jan 2019 A1
20190114649 Wang et al. Apr 2019 A1
20190268354 Zettel, II Aug 2019 A1
Foreign Referenced Citations (9)
Number Date Country
202008007520 Aug 2008 DE
0598469 May 1994 EP
2004-193942 Jul 2004 JP
2006-038955 Sep 2006 JP
2000077772 Dec 2000 WO
2004079501 Sep 2004 WO
2006013555 Feb 2006 WO
2007001452 Jan 2007 WO
2010116292 Oct 2010 WO
Non-Patent Literature Citations (15)
Entry
Evmorfia N. Argyriou et al., A Fraud Detection Visualization System Utilizing Radial Drawings and Heat-maps, Jan. 1, 2014, IEEE Xplore, pp. 1-8 (Year: 2014).
Roger A. Leite et al., EVA: Visual Analytics to Identify Fraudulent Events, Jan. 1, 2018, IEEE Transactions on Visualization and Computer Graphics, vol. 24, No. 1, pp. 330-339 (Year: 2018).
3GPP TS 24.008 v3.8.0, “3rd Generation Partnership Project; Technical Specification Group Core Network; Mobile radio interface layer 3 specification; Core Network Protocols—Stage 3,” Release 1999, (Jun. 2001), 442 pages.
Asokan, N., et al., “Man-in-the-Middle in Tunneled Authentication Protocols,” Draft version 1.3 (latest public version: http://eprint.iacr.org/2002/163/, Nov. 11, 2002, 15 pages.
ETSI TS 102 232-5 v2.1.1, “Lawful Interception (LI); Handover Interface and Service Specific Details (SSD) for IP delivery; Part 5: Service-specific details for IP Multimedia Services,” Feb. 2007, 25 pages.
ETSI TS 102 657 v1.4.1, “Lawful Interception (LI); Retained data handling; Handover interface for the request and delivery of retained data,” Dec. 2009, 92 pages.
Cohen, I., “Noise Spectrum Estimation in Adverse Environment: Improved Minima Controlled Recursive Averaging,” IEEE Transactions on Speech and Audio Processing, vol. 11, No. 5, 2003, pp. 466-475.
Cohen, I., et al., “Spectral Enhancement by Tracking Speech Presence Probability in Subbands,” Proc. International Workshop in Hand-Free Speech Communication (HSC'01), 2001, pp. 95-98.
Girardin, Fabien, et al., “Detecting air travel to survey passengers on a worldwide scale,” Journal of Location Based Services, 26 pages.
Hayes, M.H., “Statistical Digital Signal Processing and Modeling,” J. Wiley & Sons, Inc., New York, 1996, 200 pages.
Lailler, C., et al., “Semi-Supervised and Unsupervised Data Extraction Targeting Speakers: From Speaker Roles to Fame?” Proceedings of the First Workshop on Speech, Language and Audio in Multimedia (SLAM), Marseille, France, 2013, 6 pages.
Meyer, Ulrike, et al., “On the Impact of GSM Encryption and Man-in-the-Middle Attacks on the Security of Interoperating GSM/UMTS Networks,” IEEE, 2004, 8 pages.
Schmalenstroeer, J., et al., “Online Diarization of Streaming Audio-Visual Data for Smart Environments,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, No. 5, 2010, 12 pages.
Strobel, Daehyun, “IMSI Catcher,” Seminararbeit Ruhr-Universität Bochum, Chair for Communication Security, Prof. Dr.-Ing. Christof Paar, Jul. 13, 2007, 28 pages.
Vedaldi, Andrea, “An implementation of SIFT detector and descriptor,” University of California at Los Angeles, 7 pages.
Related Publications (1)
Number Date Country
20190347752 A1 Nov 2019 US
Provisional Applications (1)
Number Date Country
62671046 May 2018 US