The present disclosure relates generally to the fields of mobile communications and telephony.
Modern conferencing systems facilitate communications among multiple participants over telephone lines, Internet protocol (IP) networks, and other data networks. In a typical conferencing session, a participant enters the conference by using an access number. During the session, a mixer receives audio and/or video streams from the participants, determines the N loudest speakers, mixes the audio streams from those speakers, and sends the mixed media back to the participants.
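By way of illustration only, the selection-and-mix step described above might be sketched as follows in Python; the function name, the energy-based loudness measure, and the fixed value of N are assumptions made for the example and are not part of the disclosure.

```python
import numpy as np

def mix_loudest(frames, n_loudest=3):
    """Mix the N loudest participant frames (assumed equal-length int16 arrays).

    Loudness is approximated here by the RMS energy of the current frame;
    a real mixer would typically use smoothed speech-activity measures.
    """
    energies = [np.sqrt(np.mean(f.astype(np.float64) ** 2)) for f in frames]
    loudest = sorted(range(len(frames)), key=lambda i: energies[i],
                     reverse=True)[:n_loudest]

    # Sum the selected streams and clip back to the 16-bit sample range.
    mixed = np.sum([frames[i].astype(np.float64) for i in loudest], axis=0)
    return np.clip(mixed, -32768, 32767).astype(np.int16)
```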
One drawback of existing conferencing systems is that the participants often lack the ability to observe each other's body language. By its nature, body language communication is bidirectional; that is, it allows a listener to convey his feelings while he is listening to a speaker. For audio-only participants, this means that a speaker is unable to see the facial expressions, head nods, arm gesticulations, etc., of the other participants.
The present invention will be understood more fully from the detailed description that follows and from the accompanying drawings, which, however, should not be taken to limit the invention to the specific embodiments shown, but are for explanation and understanding only.
In the following description specific details are set forth, such as device types, components, communication methods, etc., in order to provide a thorough understanding of the disclosure herein. However, persons having ordinary skill in the relevant arts will appreciate that these specific details may not be needed to practice the embodiments described.
In the context of the present application, a communication network is a geographically distributed collection of interconnected subnetworks for transporting data between nodes, such as intermediate nodes and end nodes (also referred to as endpoints). A local area network (LAN) is an example of such a subnetwork; a plurality of LANs may be further interconnected by an intermediate network node, such as a router, bridge, or switch, to extend the effective “size” of the computer network and increase the number of communicating nodes. Examples of the endpoint devices or nodes may include servers, video conferencing units, video terminals, and personal computers. The nodes typically communicate by exchanging discrete frames or packets of data according to predefined protocols.
A conferencing system, as that term is used herein, comprises software and/or hardware (including firmware) components integrated so as to facilitate the live exchange of voice and, in certain implementations, other information (audio, video or data) among persons participating via endpoint devices at different remote locations. A conference session may involve point-to-point or multipoint calls over a variety of network connections and protocol types.
An endpoint is a device that provides media communications termination to an end user, client, or person who is capable of participating in an audio conference session via a conferencing system. Endpoint devices that may be used to initiate or participate in a conference session include a mobile phone; a personal digital assistant (PDA); a voice over Internet protocol (VoIP) phone; a personal computer (PC), such as a notebook, laptop, or desktop computer; an audio/video appliance; or any other device, component, element, or object capable of initiating or participating in exchanges with an audio (or audio/visual) conferencing system.
Overview
In one embodiment, a mobile phone (or an earpiece that communicates with a mobile phone or telephone base unit) incorporates a built-in motion detector. The motion detector is configured so as to capture head and/or hand gestures of the user and convey them to a remote speaker. For example, an individual equipped with such a mobile phone who participates in an audio conference session may move his head in a gesture of approval (an up/down nod) or disapproval (a side-to-side shake) of the statements he is hearing from the remote speaker. This gesticular information is captured and communicated back to the speaker so as to provide the speaker with communication feedback.
In accordance with a specific embodiment, a standard motion-activated dual switch motion detector is incorporated into a mobile phone (or detached earpiece unit) to detect both vertical and horizontal head shaking. In another embodiment, a gyroscopic device may be utilized to provide more precise detection of movements of the user's head, which may include differentiating between casual head movements not intended to convey information, a slight head nod, and more intense or vigorous head gestures intended to strongly convey agreement or disagreement with the speaker's message.
Head gestures of a user may be analyzed at the earpiece to determine whether the motion detected is affirmative (i.e., intended to communicate approval), negative (i.e., intended to communicate disapproval), or non-communicative (i.e., not intended to be a communicative gesture, such as a casual turning of the head). Alternatively, the sensed motion of the detector may be transmitted to another device, unit, or network node (e.g., server) for further analysis and subsequent transmission to the speaker and/or other participants to the discussion.
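As a rough illustration of such analysis, the following sketch classifies a short window of motion samples; the thresholds, the axis convention (vertical magnitude for a nod, horizontal magnitude for a shake), and the strength grading are assumptions, not the claimed implementation.

```python
def classify_head_gesture(samples, move_threshold=0.5, strong_threshold=1.5):
    """Classify a window of (vertical, horizontal) motion magnitudes from the
    earpiece's motion detector.

    Returns 'non-communicative', 'affirmative', or 'negative', with a
    'strong ' prefix for vigorous gestures. Thresholds are assumed values.
    """
    if not samples:
        return "non-communicative"
    vertical = sum(abs(v) for v, _ in samples) / len(samples)
    horizontal = sum(abs(h) for _, h in samples) / len(samples)
    peak = max(vertical, horizontal)

    if peak < move_threshold:          # small motion: casual head movement
        return "non-communicative"
    label = "affirmative" if vertical > horizontal else "negative"
    return ("strong " if peak > strong_threshold else "") + label
```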
While many mobile phones incorporate video cameras, they lack the ability to capture a listener's body language and convey that body language back to the speaker. As a result, a call over a mobile phone tends to be unidirectional, i.e., lacking in communication of gesticular information from the listener to the speaker.
Practitioners in the art will appreciate that instead of talking directly into the mobile phone, user 10 may utilize a headset or earpiece that communicates with the mobile phone or another telephone base unit via a wireless (e.g., Bluetooth) or wired connection. The information communicated by the headset or earpiece may include speech (voice) as well as the motion signals or gesture information detected by the motion detector incorporated into the headset or earpiece.
Additionally, CPU 21 may run software (or firmware) specifically aimed at recognizing alphanumeric characters, thereby permitting a user of the mobile phone to spell certain characters (e.g., the letter “A” or the number “8”) to facilitate man-machine interactions. For instance, in the case where an interactive voice response (IVR) system has difficulty understanding a user's voice command or input, the user may spell the misunderstood character or word through appropriate movement of his mobile phone (e.g., by moving the mobile phone in his hand in a manner so as to trace the character(s) in space).
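One simplified way to recognize a character traced in space is to quantize the motion path into coarse compass directions and match the resulting stroke sequence against a small template set. The sketch below is purely illustrative; the templates, the eight-direction quantization, and the exact-match rule are assumptions and cover only a handful of characters.

```python
import math

# Hypothetical stroke templates: each character is a coarse sequence of
# compass directions traced in the air (illustrative, not exhaustive).
TEMPLATES = {
    "L": ["S", "E"],    # down, then right
    "7": ["E", "SW"],   # right, then diagonally down-left
    "1": ["S"],         # straight down
}

def direction(dx, dy):
    """Quantize a movement vector into one of eight compass directions."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    names = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    return names[int((angle + 22.5) // 45) % 8]

def recognize(points):
    """Match a traced path (list of (x, y) points) against the templates."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = direction(x1 - x0, y1 - y0)
        if not dirs or dirs[-1] != d:   # collapse repeated directions
            dirs.append(d)
    for char, template in TEMPLATES.items():
        if dirs == template:
            return char
    return None
```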
In one embodiment, CPU 21 may establish via wireless transceiver 26 a data communication channel between mobile phone 20 and an endpoint device at the other end of the call. All gesture information is transmitted over the data channel to the remote speaker's endpoint. For example, upon receiving the listener's head gesture information, the speaker's endpoint may play, as a background “whisper” tone, a voice file which resembles the approval sound of “aha” or the disapproval sound of “mm mm”. The stronger and sharper the head gestures of the listener, the louder the approval and disapproval sounds that are played to the speaker.
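The volume scaling described above might be realized along the following lines; the gain range and the clip file names are assumed values used only for the example.

```python
def whisper_gain(gesture_strength, base_gain=0.1, max_gain=0.4):
    """Map a normalized gesture strength (0.0-1.0) to a playback gain for the
    background whisper tone; stronger gestures yield a louder tone."""
    strength = min(max(gesture_strength, 0.0), 1.0)
    return base_gain + strength * (max_gain - base_gain)

def feedback_clip(gesture):
    """Pick the pre-recorded whisper clip for an affirmative or negative gesture."""
    return "aha.wav" if gesture == "affirmative" else "mm_mm.wav"
```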
In another embodiment, the remote speaker's endpoint device may be configured so that the head gestures of the listener at the other end of the call may be conveyed to the speaker using visual indicators such as light-emitting diodes (e.g., green for approval, red for disapproval) or a written message on a display screen, e.g., stating that the listener is shaking his head in agreement or disagreement.
In the embodiment shown, conferencing server 37 includes a digital signal processor (DSP) or firmware/software-based system that mixes and/or switches audio signals received at its input ports under the control of server 37. The audio signals received at the conference server ports originate from each of the conference or meeting participants (e.g., individual conference participants using endpoint devices 31-33 and 35), and possibly from an interactive voice response (IVR) system (not shown).
Conferencing server 37 and gesture server 38 both include software (or firmware) plug-ins or modules that implement the various features and functions described herein. In one implementation, conferencing server 37 is responsible for establishing and maintaining control and media channels between the various participant endpoint devices, as well as managing the mixing of the participants' speech. Gesture server 38, on the other hand, is responsible for handling the data channel communications over which gesture information is transmitted between the current speaker and listeners. In addition, gesture server 38 may process the received gesture information to produce statistics and/or other types of gesture feedback information for the speaker.
By way of example, assume that a conference session is underway with participant 39 currently speaking to the participants who are using mobile phones 31-33. Further assume that phones 31 and 32 detect head gestures indicative of approval, with phone 33 detecting a disapproving head gesture. In such a scenario, gesture server 38 may produce output feedback information transmitted to PC 35 consisting of an “aha” followed by an “mm mm”, with the volume of the “aha” being twice as loud as that of the “mm mm” to reflect the 2:1 approval-to-disapproval ratio.
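Aggregation logic along the following lines would reproduce the 2:1 example above; the function and file names are assumptions, and the actual gesture server behaviour may differ.

```python
from collections import Counter

def aggregate_feedback(gestures):
    """Turn per-listener gesture labels into (clip, relative gain) pairs."""
    counts = Counter(gestures)
    total = counts["affirmative"] + counts["negative"]
    if total == 0:
        return []
    feedback = []
    if counts["affirmative"]:
        feedback.append(("aha.wav", counts["affirmative"] / total))
    if counts["negative"]:
        feedback.append(("mm_mm.wav", counts["negative"] / total))
    return feedback

# Two approvals and one disapproval, as in the example above:
print(aggregate_feedback(["affirmative", "affirmative", "negative"]))
# [('aha.wav', 0.666...), ('mm_mm.wav', 0.333...)] -> the "aha" is twice as loud
```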
It is appreciated that in different specific implementations the media path for the conference participants may include audio/video transmissions, e.g., Real-Time Transport Protocol (RTP) packets sent across a variety of different networks (e.g., Internet, intranet, PSTN, wireless, etc.), protocols (e.g., IP, Asynchronous Transfer Mode (ATM), Point-to-Point Protocol (PPP)), with connections that span across multiple services, systems, and devices. For instance, although not explicitly shown, the connection path to each of mobile phones 31-33 may comprise connections over wireless provider networks and through various gateway devices.
Practitioners in the art will understand that the window 40 may be generated by graphical user interface software (i.e., code) running on a user's PC or other endpoint device. In other cases, the GUI may comprise a collaborative web-based application that is accessed by the browser software running on an endpoint. In other instances, the GUI may comprise a downloaded application (e.g., downloaded from gesture server 38) or other forms of computer-executable code that may be loaded or accessed by a participant's PC, mobile phone, or other endpoint device. For instance, the software code for implementing the GUI may be executed on server 38 and accessed by users who want to utilize the features provided therein. In the case of a mobile phone with a display screen, the screen may display current approval/disapproval statistics while the user is talking via a headset or earpiece communicatively coupled with the mobile phone.
By regularly monitoring window 40 during the time that he is talking, speaker 39 can tailor his speech or discussion to his listeners. For instance, the speaker may decide to skip certain material, charge forward faster, slow down, or reiterate his last point in a different way, depending upon the gesture feedback information received from the listening audience. In other words, speaker 39 may tailor the delivery of his message, e.g., by altering the content and delivery speed, based on the body language (head gestures) communicated via interface window 40.
Note that the gesticular information collected and statistics generated by server 38 regarding the body language of the listeners may also be conveyed and shared among all of the participants. Also, the sharing of gesticular information is dynamic and bidirectional, meaning that when the user of mobile phone 31 asks a question or takes the floor (e.g., in a half duplex or push-to-talk (PTT) environment), the non-verbal gestures of user 39 and the users of mobile phones 32 & 33 may be captured and conveyed to mobile phone 31 in a manner as described above. In a PTT environment, the conferencing system may also be configured or adapted to capture and interpret brief verbal sounds or comments, such as “yes”, “no”, “aha”, “mm mm”, etc., and include them along with non-verbal gestures when generating statistical feedback to the speaker regarding approval/disapproval of the group of listeners. This affirmative/negative feedback information may be transmitted on a low bandwidth channel (e.g., control signal channel) separate from the voice path or main communication channel.
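A compact encoding suited to such a low-bandwidth channel might look like the following; the three-byte message layout is hypothetical and is shown only to illustrate how little bandwidth the affirmative/negative feedback requires.

```python
import struct

FEEDBACK_MSG = 0x01  # hypothetical message type identifier

def encode_feedback(approve_pct, disapprove_pct):
    """Pack approval/disapproval percentages (0-100) into a 3-byte message."""
    return struct.pack("!BBB", FEEDBACK_MSG, int(approve_pct), int(disapprove_pct))

def decode_feedback(payload):
    """Unpack the hypothetical 3-byte feedback message."""
    _msg_type, approve, disapprove = struct.unpack("!BBB", payload)
    return {"approve_pct": approve, "disapprove_pct": disapprove}
```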
It should be understood that conferencing server 37 and gesture server 38 do not have to be implemented as separate components or nodes on network 30. In other words, the functions of each may be integrated or incorporated into a single server or node residing on the network.
The statistics may include the respective percentages of the listeners who approve or disapprove of the speaker's current statements. In systems where the motion detectors are capable of distinguishing between slight head movements and more demonstrative or dramatic gestures, the system may collect statistics on a broader range of listener reactions. For example, the statistical categories that listeners may fall into may include “strongly approve”, “slightly approve”, “neutral”, “slightly disapprove”, and “strongly disapprove”.
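Computing such percentages is straightforward; the sketch below uses the five categories named above, while the computation itself is merely an illustrative assumption.

```python
def reaction_statistics(reactions):
    """Return the percentage of listeners falling into each reaction category."""
    categories = ["strongly approve", "slightly approve", "neutral",
                  "slightly disapprove", "strongly disapprove"]
    total = len(reactions) or 1  # avoid division by zero for an empty sample
    return {c: 100.0 * sum(r == c for r in reactions) / total for c in categories}
```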
Once statistics have been generated for a given sampling of listener reactions, the system outputs a feedback response (e.g., audible and/or visual) that is transmitted to the speaker. The specific type of feedback response may be controlled by the gesture server based on the available information about the specific kind of endpoint the speaker is utilizing. The feedback response may also be output to all of the participating listeners, with the type or nature of the response being tailored to the destination endpoint device. This is shown occurring at block 54. Since the system operates dynamically, sampling listener gestures over a predetermined sampling period (e.g., 2 seconds), the system then waits until additional gesture information is received from any of the participants (block 55). When new or additional gesture data is received, the process returns to block 52. A new sampling period is then initiated, followed by the processing and transmission steps discussed above.
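The sampling cycle could be summarized by a loop of the following form; the receive_gestures and send_feedback callables are assumed interfaces standing in for the data channel and feedback output, and the mapping of loop steps to the numbered blocks is approximate.

```python
import time
from collections import Counter

def feedback_loop(receive_gestures, send_feedback, sample_period=2.0):
    """Run the dynamic gesture-sampling cycle sketched above."""
    while True:
        pending = receive_gestures()
        if not pending:                   # wait for new gesture data (block 55)
            time.sleep(0.1)
            continue
        time.sleep(sample_period)         # new sampling period (block 52)
        pending += receive_gestures()     # include gestures from the window
        tallies = Counter(pending)        # process the sampled gestures
        send_feedback(tallies)            # transmit the feedback response (block 54)
```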
It should be understood that elements of the present invention may also be provided as a computer program product which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (e.g., a processor or other electronic device) to perform a sequence of operations. Alternatively, the operations may be performed by a combination of hardware and software. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable medium suitable for storing electronic instructions. For example, elements of the present invention may be downloaded as a computer program product, wherein the program may be transferred from a remote computer or telephonic device to a requesting process by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Additionally, although the present invention has been described in conjunction with specific embodiments, numerous modifications and alterations are well within the scope of the present invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.