This invention relates generally to communication systems, and more particularly to a method and system of providing caller information.
Caller identification information allows a call recipient to view information about a caller. Based on the caller identification information, the recipient can decide, for example, whether or not to answer a call.
Typically, caller identification information includes a number of the caller and a name of the caller. Caller identification information can be provided by a call routing engine—e.g., based on account provisioning information. Additionally or alternatively, in some cases, a user (caller) can provide their caller identification information. In these cases, a user can enter a name to be displayed on call recipients' devices.
When a language used by a caller differs from a language used by a call recipient, and particularly when an alphabet used by the caller's device (or, e.g., the caller's routing engine or the like) differs from an alphabet used by the recipient's device, the caller identification information may not be displayed in a manner that is discernible to the call recipient. The caller identification information may not properly display on the recipient's device and/or may be displayed using characters that are not familiar to the call recipient. As a result, the recipient may not be able to identify the caller and/or determine a context of a call.
Some systems may allow a user (recipient) to store information, such as a caller's name, associated with a phone number on a user's device. However, such systems cannot provide caller identification information for callers not associated with information already stored on the user/recipient device, and such systems require input by the recipient. Other systems do not allow a user/recipient to modify caller identification information. Accordingly, improved systems and methods that allow a recipient to identify a caller are desired.
Subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. A more complete understanding of the present disclosure, however, may best be obtained by referring to the detailed description and claims when considered in connection with the drawing figures.
It will be appreciated that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of illustrated embodiments of the present disclosure.
The description of various embodiments of the present disclosure provided below is merely exemplary and is intended for purposes of illustration only; the following description is not intended to limit the scope of an invention disclosed herein. Moreover, recitation of multiple embodiments having stated features is not intended to exclude other embodiments having additional features or other embodiments incorporating different combinations of the stated features.
Exemplary embodiments of the disclosure are described herein in terms of various functional components and various steps. It should be appreciated that such functional components may be realized by any number of hardware or structural components configured to perform the specified functions. Further, it should be noted that while various components may be suitably coupled or connected to other components within exemplary systems, such connections and couplings can be realized by direct connection between components, or by connection through other components and devices located therebetween. Similarly, unless otherwise noted, illustrative methods can include additional steps and/or steps that are performed in a different order.
Exemplary systems and methods are described below in connection with voice-over-Internet protocols (VoIP), and in some cases with session initiation protocol (SIP). However, unless otherwise noted, the disclosure is not limited to such examples.
In accordance with various embodiments of the disclosure, improved methods and systems for providing caller information on a call recipient's device are disclosed. As discussed in more detail below, exemplary methods and systems can be used to provide audio information and/or translated (literally or phonetically) caller information to a call recipient's device. In accordance with illustrative examples, the text can be displayed using the language and alphabet of the call recipient's device—even if the language and/or alphabet used by a caller's device differs from the language and/or alphabet used by a recipient's device.
Exemplary methods of providing caller information include initiating a call between a first device and a second device, providing file location information corresponding to audio information during a call setup between the first device and the second device, retrieving the audio information using the file location information, and performing one or more of playing the audio information and displaying information on the second device. The file location can be, for example, on the first device or on a server. The audio information can be pre-recorded using, e.g., the first device. The audio information can then be stored on, for example, the first device or a server. The audio information can be translated into text, such that the text is displayed on the second device using the alphabet used on the second device, which may be different from the alphabet used on the first device. Thus, a caller's name and/or other information can be displayed using the recipient's device alphabet, even if the recipient's device alphabet differs from the caller device's alphabet.
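The exemplary method steps above can be sketched as follows. This is a minimal illustration only; the function and field names (e.g., `setup_call`, `audio_file_location`) are assumptions for readability and are not prescribed by the disclosure.

```python
# Hypothetical sketch of the exemplary method: during call setup, the first
# device supplies a file location for pre-recorded audio caller information;
# the second device retrieves the audio and plays it and/or displays
# translated text. All names here are illustrative assumptions.

def setup_call(caller, recipient):
    """Initiate a call and pass the audio file location during setup."""
    return {
        "from": caller["number"],
        "to": recipient["number"],
        # The file location can point at the caller's device or at a server.
        "audio_file_location": caller["audio_url"],
    }

def handle_incoming(setup_message, fetch, translate):
    """On the recipient's device: retrieve the audio, produce display text."""
    audio = fetch(setup_message["audio_file_location"])
    # Translation/transcription may be literal or phonetic, and may run on
    # the recipient's device itself or on a server.
    text = translate(audio)
    return audio, text

msg = setup_call(
    {"number": "+15550100", "audio_url": "https://server.example/rec/100.wav"},
    {"number": "+15550199"},
)
audio, text = handle_incoming(
    msg,
    fetch=lambda url: b"...wav bytes...",
    translate=lambda a: "Alice (phonetic)",
)
```

In this sketch, `fetch` and `translate` are stand-ins for the retrieval and translation mechanisms discussed below, which may reside on the recipient's device or on separate servers.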
In accordance with yet further embodiments of the disclosure, methods of providing caller information include initiating a call between a first device and a second device, providing file location information corresponding to audio information recorded in a first language using the first device, translating the audio information to a second language, and providing translated information in the second language on the second device. The translated information can include text that is transcribed literally and/or phonetically. The file location information can include a URL address for the audio information. The URL information can correspond to the first device, to a server, or the like.
In accordance with yet additional exemplary embodiments of the disclosure, a system for providing caller information includes a communication network, a first device connected to the communication network, and a second device coupled to the communication network, wherein file location information regarding audio information is provided to the second device during a call setup between the first device and the second device, and wherein the second device performs one or more of displaying translated text corresponding to the audio information and playing the audio information. The system can additionally include a server or the like to store the audio information. Additionally or alternatively, the system can include a server to translate and/or transcribe audio information in a first language to text information (or an image including text) in a second language.
In accordance with further aspects, a computer readable medium is provided having computer readable code embodied therein for controlling a device to perform steps of providing caller information, the steps including initiating a call between a first device and a second device, providing file location information corresponding to audio information during a call setup between the first device and the second device, retrieving the audio information using the file location information, and performing one or more of playing the audio information and displaying information on the second device. In accordance with other aspects, a computer readable medium is provided having computer readable code embodied therein for controlling a device to perform steps including initiating a call between a first device and a second device, providing file location information corresponding to audio information recorded in a first language, translating the audio information to a second language, and providing translated information in the second language on the second device.
In accordance with various aspects of the exemplary embodiments described herein, the caller identification information can be provided as contextual information or as part of contextual information to a user's device, such that the contextual information can be provided (e.g., in text, image, or audio format) along with other caller identification information.
Turning to
Devices 102, 104 can include any suitable device with wired and/or wireless communication features. For example, user devices 102, 104 can include a wearable device, a tablet computer, a smart phone, a personal (e.g., laptop or desktop) computer, a streaming device (such as a game console or other media streaming device, e.g., Roku, Amazon Fire TV, or the like), a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a mobile device, a wireless device, a wireless communication device, an access terminal, a mobile terminal, a handset, a user agent, or other suitable device.
In accordance with various embodiments of the disclosure, one or more of devices 102, 104 include a client to, for example, perform various functions or steps as described herein. As used herein, client can refer to a downloadable OTT application, a native phone application, a soft phone or a browser-based client. The client can, for example, present caller information that has been, for example, transcribed and translated—e.g., from one language using a first alphabet to a second language using a second alphabet. Additionally or alternatively, the client can allow a user to record caller information, such as the caller's name and/or play audio information (e.g., the caller's recorded name) upon a call setup (e.g., in addition to or in lieu of a call ring). In accordance with further exemplary embodiments, the client can translate caller information—e.g., during a call setup. Exemplary clients can allow a user to select a preferred language and/or alphabet for display of caller information received, for example, during a call setup. The client can also provide a suitable graphical user interface (GUI) that can allow playing the audio information (on demand or automatically) and/or displaying the text associated with the caller information. Additional exemplary client functions are described below.
Servers 106, 108 can be any suitable device capable of receiving information from one or more of devices 102, 104, storing the information, and allowing access to the information from one or more of devices 102, 104 and/or other devices and/or transmitting the information to one or more devices. By way of particular examples, servers 106, 108 can be stand-alone servers, private branch exchange (PBX) servers, unified communications servers, part of a cloud service, or the like.
In accordance with exemplary embodiments of the disclosure, server 106 is configured to store audio information that corresponds to caller information (e.g., caller identification information). By way of example, server 106 can store recordings (e.g., in a user's/caller's voice) that can then be transmitted to a call recipient's device—e.g., during a call setup. Alternatively, as discussed in more detail below, such audio information can be stored on the caller's device.
In accordance with further illustrative examples, server 108 is configured to transcribe audio information into text. The transcribed text can be in a language, and use an alphabet, of the call recipient's device. As noted above, the translation and transcription of audio information can be literal or phonetic. Although illustrated separately, servers 106, 108 can form part of another system and/or the functions of servers 106/108 can be performed by a single server.
Network 110 can include or be a local area network (LAN), a wide area network, a personal area network, a campus area network, a metropolitan area network, a global area network, or the like. Network 110 can be coupled to other networks, such as a private branch exchange (PBX) network, to other devices typically coupled to a network, and/or to the Internet.
Step 202 includes initiating a call (e.g., a telephone call, a video call, or a collaboration call) between two or more devices. By way of example, step 202 can include initiating a VoIP call, such as a SIP call.
In accordance with some examples of the disclosure, prior to or at the beginning of step 202, a user (the caller) can use a device (e.g., a client on the caller's device or another device) to record audio caller information, such as the caller's name, the caller's company, information about the subject of the call, context information, other caller information, and the like. Examples of how to generate context-aware information are disclosed in U.S. application Ser. No. 15/009,187, entitled METHOD AND SYSTEM OF PROVIDING CONTEXT AWARE ANNOUNCEMENTS, and filed Jan. 28, 2016, the contents of which are hereby incorporated herein by reference—to the extent such contents do not conflict with the present disclosure. The audio information can be stored on the caller's device or elsewhere, such as on server 106. The audio recording can be a one-time or limited-time setup, wherein the caller information is stored. Additionally or alternatively, caller information can be recorded when a call is made. For example, a caller's name may need to be recorded only once or periodically, and the caller information could include the recorded name alone or with additional information that is provided, for example, on a per-call basis. Alternatively, a caller could use a client to record caller information when it is thought that there might be a translation issue, and not use such a service if a translation issue is not expected.
The audio information can be stored as an uncompressed (e.g., .wav) file or a compressed (e.g., MPEG-4, or MP3) file or any other suitable file format. The audio information can be recorded by the caller and/or provided by an operator. The information can be recorded and stored in one or more languages. When the information is stored on a remote device, such as a server, the information can be stored along with the corresponding directory number.
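One way the server-side storage described above might be organized, keyed by directory number and language, is sketched below. The store layout and names are assumptions for illustration, not part of the disclosure.

```python
# Illustrative in-memory store mapping a directory number to recorded
# caller-information files, possibly in several languages. A real server
# (e.g., server 106) would persist these and serve them to recipients.

audio_store = {}

def store_recording(directory_number, language, audio_bytes, fmt="wav"):
    """Store audio caller information along with its directory number."""
    audio_store.setdefault(directory_number, {})[language] = {
        "format": fmt,       # e.g., "wav" (uncompressed) or "mp3" (compressed)
        "data": audio_bytes,
    }

def lookup_recording(directory_number, language):
    """Retrieve a recording for a given number and language, if present."""
    return audio_store.get(directory_number, {}).get(language)

store_recording("+15550100", "en", b"RIFF...")
rec = lookup_recording("+15550100", "en")
```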
During step 204, file location information (e.g., an IP or URL address) corresponding to caller information is provided to a recipient's device. In accordance with various examples of the disclosure, the file location information is provided as part of a call setup. For example, the file location information can be provided as part of a SIP header. The location information can include a location of a server, such as server 106, a location of the caller's device, or another location of a suitable server or device.
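As one illustration of carrying the file location in a SIP header during step 204, an INVITE could include a custom header field. The header name `X-Caller-Info-URL` below is a hypothetical choice; the disclosure does not mandate any specific header.

```python
# Sketch of building and parsing a SIP INVITE that carries the audio file
# location in a custom header. The header name is an assumption for
# illustration; any suitable SIP header field could carry the location.

def build_invite(from_uri, to_uri, audio_url):
    """Build a minimal INVITE whose headers include the file location."""
    return (
        f"INVITE {to_uri} SIP/2.0\r\n"
        f"From: <{from_uri}>\r\n"
        f"To: <{to_uri}>\r\n"
        f"X-Caller-Info-URL: {audio_url}\r\n"
        "\r\n"
    )

def extract_audio_url(invite):
    """On the recipient side, pull the file location out of the headers."""
    for line in invite.split("\r\n"):
        if line.startswith("X-Caller-Info-URL:"):
            return line.split(":", 1)[1].strip()
    return None

invite = build_invite(
    "sip:alice@example.com", "sip:bob@example.net",
    "https://server.example/rec/100.wav",
)
url = extract_audio_url(invite)
```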
A second device (e.g., device 104) can retrieve the audio information during step 206—e.g., using file location information that is transmitted to recipient's device during a call setup. As noted above, the file location can be the caller's device, a server, or another device. When the audio file is on the caller's device, the audio information can directly stream from the caller's device to the recipient's device—e.g., using SIP. Alternatively, the audio file can be transmitted as part of a call setup. Although separately illustrated, steps 202-206 can be part of a call setup process.
During step 208, a call recipient can play the audio file (e.g., using a client), can be presented with text or an image corresponding to the caller information (e.g., translated and/or non-translated text corresponding to the audio information), or both. In accordance with some embodiments of the disclosure, the recipient's device can perform translation of the audio file into a language used by the recipient's device. In accordance with other examples, the translation can occur elsewhere, such as on a server (e.g., server 108).
Steps 302 and 304 can be the same or similar to steps 202 and 204, described above. Step 306 includes translating the audio information. The translation and/or transcription can be performed by, e.g., a recipient's device and/or elsewhere, such as a server—e.g., server 108.
Once the audio information is translated and transcribed, the translated/transcribed information can be displayed, during step 308, on a recipient's device. The information that is displayed can include caller identification (e.g., name, company, or the like) information. The information can be displayed as part of a call setup.
The methods and systems have been described above with reference to a number of exemplary embodiments and examples. It should be appreciated that the particular embodiments shown and described herein are illustrative of the invention and its best mode and are not intended to limit in any way the scope of the invention as set forth in the claims. It will be recognized that changes and modifications may be made to the exemplary embodiments without departing from the scope of the present invention. These and other changes or modifications are intended to be included within the scope of the present invention, as expressed in the following claims.
This application is a continuation of, and claims priority to, U.S. patent application Ser. No. 15/040,756, filed Feb. 10, 2016, and entitled “Method and System for Providing Caller Information,” the contents of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5329578 | Brennan et al. | Jul 1994 | A |
5590188 | Crockett | Dec 1996 | A |
5754627 | Butler et al. | May 1998 | A |
6301608 | Rochkind | Oct 2001 | B1 |
6301609 | Aravamundan et al. | Oct 2001 | B1 |
6421439 | Liffick | Jul 2002 | B1 |
6700967 | Kleinoder et al. | Mar 2004 | B2 |
7443964 | Urban | Oct 2008 | B2 |
8009812 | Bruce et al. | Aug 2011 | B2 |
8073121 | Urban | Dec 2011 | B2 |
8428227 | Angel | Apr 2013 | B2 |
8879701 | Phadnis | Nov 2014 | B2 |
9203954 | Van Rensburg | Dec 2015 | B1 |
9444946 | White | Sep 2016 | B2 |
9871930 | Laasik | Jan 2018 | B2 |
20020034286 | Crockett et al. | Mar 2002 | A1 |
20020094067 | August | Jul 2002 | A1 |
20020196914 | Ruckart | Dec 2002 | A1 |
20030037113 | Petrovykh | Feb 2003 | A1 |
20030110039 | Brown | Jun 2003 | A1 |
20030128821 | Luneau et al. | Jul 2003 | A1 |
20030215074 | Wrobel | Nov 2003 | A1 |
20040037403 | Koch | Feb 2004 | A1 |
20040073423 | Freedman | Apr 2004 | A1 |
20040203835 | Trottier et al. | Oct 2004 | A1 |
20040209605 | Urban | Oct 2004 | A1 |
20040261115 | Bartfeld | Dec 2004 | A1 |
20050063365 | Mathew et al. | Mar 2005 | A1 |
20050149335 | Mesbah et al. | Jul 2005 | A1 |
20050246628 | Peterson | Nov 2005 | A1 |
20050287997 | Fournier | Dec 2005 | A1 |
20070036313 | White et al. | Feb 2007 | A1 |
20070041540 | Shao | Feb 2007 | A1 |
20070260456 | Proux | Nov 2007 | A1 |
20070263791 | Alperin et al. | Nov 2007 | A1 |
20070275708 | Henderson | Nov 2007 | A1 |
20080037520 | Stein | Feb 2008 | A1 |
20080101588 | Bruce | May 2008 | A1 |
20080233980 | Englund | Sep 2008 | A1 |
20090117886 | Urban | May 2009 | A1 |
20090248421 | Michaelis | Oct 2009 | A1 |
20110286584 | Angel | Nov 2011 | A1 |
20120116766 | Wasserblat | May 2012 | A1 |
20130272513 | Phadnis | Oct 2013 | A1 |
20140003298 | Charugundla | Jan 2014 | A1 |
20150086000 | Goldstein | Mar 2015 | A1 |
20160072947 | Van Rensburg | Mar 2016 | A1 |
20160366093 | Narayanan | Dec 2016 | A1 |
Number | Date | Country |
---|---|---|
0 510 411 | Mar 1992 | EP |
1211875 | Jun 2002 | EP |
2107775 | Oct 2009 | EP |
2600597 | Jun 2013 | EP |
2 351 870 | Oct 2001 | GB |
2 369 529 | May 2002 | GB |
WO 9926424 | May 1999 | WO |
WO 2007072323 | Jun 2007 | WO |
Number | Date | Country | |
---|---|---|---|
20190253546 A1 | Aug 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15040756 | Feb 2016 | US |
Child | 16392133 | US |