The present invention relates to the representation of voices of participants attending a conference call, and more particularly to the recognition and distribution of voices of such participants in a planar or spatial representation of the phone conference.
Improvements in telecommunications have led to a great increase in tele-meetings between remote colleagues. These virtual meetings can use different media, such as the telephone or the Internet, and offer different means of interacting with the other parties, whether audio (for example a telephone set, fixed or cellular) or video. It is now common to have many people active in such virtual meetings, calling from different areas of the globe. Thus, in a phone conference, a participant has to interact with different people whom he or she often does not know beforehand, and sometimes in a language different from his or her mother tongue.
It may be difficult for a participant to tell the other participants apart: they may talk at any time without introducing themselves each time, they may have similar voices, and so on, making it hard to know who is actually speaking. A system that helps the participant distinguish between the different participants during a phone conference would therefore be very useful. Such a system would have to recognize the different participants and then build a representation of them that the participant can easily decipher and use to facilitate interaction with the other participants. To identify the other participants, the system can recognize either the calling device or the calling person, the two options offering different capabilities. A user-friendly representation of the conference call must then be built by the system and presented to the participant, in a text, audio or video format.
Different systems have been designed pertaining to the identification of callers and a representation of a conference call to a participant. U.S. Pat. No. 6,868,149 describes a system to display information about any one caller on a telephone or computer screen of the participant. Each caller is identified using a combination of different means, such as line sensing and voice identification.
While previous inventions offer various means for identifying callers, none of them provides the participant with a user-friendly representation of the conference call. In addition, they may require expensive display facilities (such as a personal computer screen), as all the caller identifications may not fit on a regular phone screen. Moreover, they are of no use to participants with viewing disabilities.
The present invention is defined by the system set out in the claims. It provides, for an end-user attending a conference call, a representation of the other callers in the conference call so as to enable the end-user to better recognize them. This is achieved by providing a unique position for each caller in such representation. A regular telephone line is used, and the system does not need any additional device, such as a central server hooked up to the telephone network.
More particularly, there is devised a system for facilitating to an end-user the recognition of other participants attending a conference call, comprising means attached to the end-user's telephone for receiving signals from the telephone line, means for analyzing the telephone line signals and associating a unique caller identification to each participant joining the conference call, means for associating with each such caller identification a unique position in a representation of the conference call, and means for representing to the end-user such unique positions for all participants in the conference call. In some embodiments, these functions may be performed by computer program instructions for execution by a computer that are stored on a storage medium.
For example, a storage medium may contain computer program instructions which, when executed by a computer, cause the computer to perform a method for facilitating, to an end-user, the recognition of participants attending a conference call.
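By way of illustration only, the following Python sketch shows how the claimed "means" could map onto program structure; it is not the claimed implementation, and all class and method names are hypothetical.

```python
# Illustrative only: a minimal sketch of the four claimed "means".
# All names are hypothetical and not taken from the claims.

class SpatializationSystem:
    def __init__(self, identifier, positioner, renderer):
        self.identifier = identifier   # means for analyzing telephone line signals
        self.positioner = positioner   # means for associating a unique position
        self.renderer = renderer       # means for representing positions to the end-user

    def on_line_signal(self, mono_frame):
        # Receive a signal frame from the telephone line and identify the caller.
        caller_id, voice_params = self.identifier.identify(mono_frame)
        # Associate a unique position with that caller identification.
        position = self.positioner.position_for(caller_id, voice_params)
        # Represent the position to the end-user (e.g., as spatialized audio).
        return self.renderer.render(mono_frame, position)
```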
In one embodiment, the system makes use of biometric analysis of the participants' voices.
In another embodiment, the representation of the conference call has a predetermined number of positions.
In a further embodiment, when a new participant joins in excess of a number of participants equal to the predetermined number of positions, his or her representation involves computing the difference between the biometric analysis of the new participant and that of each current participant, and associating with the new participant the position, in the representation, of the current participant showing the greatest difference in the biometric analysis.
In a yet further embodiment, the representation is obtained through filtering of the telephone line signals and rendering them into a stereo signal, amplified for the end-user, that reproduces a 2D or 3D mental representation of the conference call and of the positions of all participants.
The foregoing, together with other objects, features, and advantages of this invention can be better appreciated with reference to the following specification, claims and drawings.
The novel and inventive features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative detailed embodiment when read in conjunction with the accompanying drawings, wherein:
The following description is presented to enable one of ordinary skill in the art to make and use the invention.
According to FIG. 1, end-user 105, equipped with headset 106 and telephone 107, can administer the spatialization system 100 through features 101 to 104.
For the sake of clarity, this administration of the system is further detailed after each of the system's technical capabilities has been described below.
The system does not require any other technical equipment; in particular, it does not require a central server, hooked up to the telephone network 108, that the spatialization system 100 might otherwise use and query.
Turning to FIG. 2, the spatialization system comprises three processing means, namely Speaker Identification 201, Compute Spatialization Parameters 202 and Signal Filtering 203, and two main databases, namely the Speaker Characteristics Database 204 and the Spatialization Parameters Database 205.
A mono signal (210) coming from the telephone 107 line is transformed into an output stereo signal (216) for headset 106 of end-user 105, which provides a spatial representation of the voice of any currently speaking participant in the conference call.
Speaker Identification means 201 identify the conference call participant currently speaking. A unique caller identification (212) is associated with each identified participant at the time when he/she joins the conference. The identification itself involves techniques further described with respect to FIG. 4.
Analysis of a participant's voice, through the production of a set of relevant biometric parameters (211), enables the system to compare this voice against the other participants' voices. These voice parameters and the caller identification are stored in the Speaker Characteristics Database 204. This database is reset for each new conference call, whereas the Spatialization Parameters Database 205 is set once at system setup (at power-on, for example).
Once the currently speaking participant is identified, the system derives, in Compute Spatialization Parameters 202, a position based on the caller identification 212 and biometric parameters 211. The position details are then updated in the Speaker Characteristics Database 204. Right (214) and left (215) transfer functions, which relate to the speaking participant's voice and simulate the sounds that would be perceived by the right and left ears of the end-user, are then retrieved from the Spatialization Parameters Database 205.
The mono signal 210 that carries the currently speaking participant's voice is then filtered in Signal Filtering 203. Real-time filtering of a signal can be implemented by a person skilled in the art using known algorithms, some of which are presented, for example, in the book "Discrete-Time Signal Processing" by Alan V. Oppenheim, Ronald W. Schafer and John R. Buck, Prentice Hall, 2nd edition (Feb. 15, 1999), ISBN 0137549202. The resulting stereo signal 216 mimics the position of the currently speaking participant.
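By way of illustration only, the filtering step can be sketched as a convolution of the mono signal with the two retrieved transfer functions; the fragment below is a minimal sketch under that assumption, not the patented implementation, and its names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrtf_left, hrtf_right):
    """Convolve the mono voice signal with the left/right impulse
    responses retrieved for the speaker's position, truncate to the
    input length, and stack into a (samples, 2) stereo array."""
    left = fftconvolve(mono, hrtf_left)[:len(mono)]
    right = fftconvolve(mono, hrtf_right)[:len(mono)]
    return np.stack([left, right], axis=1)
```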
In the case where 2D sound rendering is activated by end-user 105 through feature 102, the elevation angle (307) is null, and the representation rendered of the call lies in the xy plane.
As persons skilled in the art will appreciate, the voice of the currently speaking participant comes over the public telephone network 108 in analog form and is conveyed to Speaker Identification 201 through signal 210.
Signal 210 is first sampled (401) to allow digital signal processing to be performed. A buffer is filled with the sampled data. The length of the buffer can be adjusted by a person skilled in the art based on the expected performance of the system, the tolerance for delays, etc.
The buffered samples are then analyzed (402) and biometric parameters are computed from the data. Different biometric parameters can be computed for voice identification. Persons skilled in the art often rely on cepstral coefficients to identify voices, based for example on the teaching of the 2002 IEEE publication "Speaker identification using cepstral analysis" by Muhammad Noman Nazar. Other parameters can be used as well, based for example on the teaching of the 2001 Proceedings of the 23rd Annual EMBS International Conference, "Comparative analysis of speech parameters for the design of speaker verification systems" by A. F. Souza and M. N. Souza.
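By way of illustration only, one common way to obtain cepstral coefficients is the real cepstrum, the inverse FFT of the log-magnitude spectrum; the cited works may use other variants (mel-frequency cepstra, for instance), so the sketch below is merely one plausible instance.

```python
import numpy as np

def cepstral_coefficients(frame, n_coeffs=13):
    """Real cepstrum of one buffered frame: inverse FFT of the
    log-magnitude spectrum. The first few coefficients form a compact
    voice signature suitable for comparison between speakers."""
    windowed = frame * np.hamming(len(frame))
    log_magnitude = np.log(np.abs(np.fft.rfft(windowed)) + 1e-10)  # avoid log(0)
    cepstrum = np.fft.irfft(log_magnitude)
    return cepstrum[:n_coeffs]
```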
The confidence level of the value of these biometric parameters is then estimated (403). If the confidence level is above a predetermined threshold then the system goes to the next step (406).
If the confidence level is below that threshold, then additional data is required to compute the biometric parameters with an adequate level of confidence. The system then evaluates (404) whether the additional computational delay introduced by the aggregation of data would exceed a predetermined maximum authorized delay.
If not, the system aggregates (405) the data and performs the biometric parameter analysis again (402 and 403).
If the additional computation would exceed the predetermined maximum authorized delay, then the system goes to the next step 406. In this situation, there is a higher risk of error (a discussion of FMR and FNMR is offered below in connection with FIG. 7).
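By way of illustration only, steps 402 to 405 can be sketched as the loop below; the `analyze` and `confidence_of` callables stand in for the analysis and confidence estimation of steps 402 and 403 and are hypothetical placeholders.

```python
def compute_with_delay_budget(frames, analyze, confidence_of,
                              threshold, max_delay, frame_time):
    """Steps 402-405 as a loop: analyze the buffer, estimate confidence,
    and aggregate more sampled data only while the delay budget allows."""
    buffered, elapsed = [], 0.0
    params, confidence = None, 0.0
    for frame in frames:
        buffered.extend(frame)                 # step 405: aggregate data
        elapsed += frame_time
        params = analyze(buffered)             # step 402: biometric parameters
        confidence = confidence_of(params)     # step 403: confidence estimate
        if confidence >= threshold:            # step 403: good enough, go to 406
            break
        if elapsed + frame_time > max_delay:   # step 404: budget exhausted
            break                              # proceed to 406 despite error risk
    return params, confidence
```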
Given the computed cepstral coefficients, the system then checks (406) a table for a matching set of parameters. This checking is more fully described in relation to FIG. 5.
If none is found, then the currently speaking participant is new to the conference and is added (407) to the system's representation of the conference by linking the speaker and his or her biometric parameters to a speaker identification.
If the currently speaking participant has previously been identified by the system, no action is taken.
In both cases, participant identification and associated biometric parameters are passed on to the next sequential tasks.
The system keeps track of the different participants in a table (501). This table is reset for each new conference call. It is typically stored in the Speaker Characteristics Database 204.
Speaker Identification is stored in column (502). Typically, the first speaking participant is given identification number 1, with each new joining participant having an identification incremented by one.
Biometric Parameters associated with each participant are stored in column (503).
Position References, more fully described in connection with FIG. 6, are stored in column (504).
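By way of illustration only, one row of table 501 and the matching check of step 406 could be sketched as follows; the field names, the `tolerance` parameter and the helper names are hypothetical, and the distance used is the Euclidean distance introduced below (509).

```python
import math
from dataclasses import dataclass

@dataclass
class SpeakerRecord:          # one row of table 501 (field names illustrative)
    speaker_id: int           # column 502: 1, 2, 3, ... in order of joining
    cepstral: list            # column 503: biometric parameters
    position_ref: int         # column 504: reference into position table 510

def find_match(table, cepstral, tolerance):
    """Step 406: return the stored record whose parameters lie within
    `tolerance` (Euclidean distance, cf. 509) of the computed ones,
    or None so that step 407 can add the new participant."""
    best = min(table, key=lambda r: math.dist(r.cepstral, cepstral), default=None)
    if best is not None and math.dist(best.cepstral, cepstral) <= tolerance:
        return best
    return None
```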
For each speaking participant, a determination is made, as shown with step 406 of FIG. 4, as to whether his or her biometric parameters match a set already stored in table 501.
In one embodiment of the invention, for each new participant after the nth one, the Position Reference is no longer predefined but is dynamically computed. The system sets Position Reference 504 to point towards the Position Identification i (511) in table (510) that maximizes the distance between the new participant's biometric parameters and those of the current participant holding position i.
A metric is associated with each set of biometric parameters as a measure of the difference between voices' characteristics. The distance between two cepstral vectors can be defined as the Euclidean distance (509).
Any participant added after n current participants is thus given the same position as the current participant yielding the highest value of the Euclidean distance 509.
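By way of illustration only, and reusing the hypothetical SpeakerRecord sketch above, this dynamic assignment reduces to an argmax over the Euclidean distance 509:

```python
import math

def reuse_position(table, new_cepstral):
    """For a participant beyond the n predefined positions: reuse the
    position of the current participant whose cepstral vector is farthest
    (largest Euclidean distance 509) from the newcomer's."""
    farthest = max(table, key=lambda r: math.dist(r.cepstral, new_cepstral))
    return farthest.position_ref
```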
The system is set to accommodate 2n participants.
Table 510 associates with each Position Identification 511 a position (512), made of two angles, the azimuth 306 and the elevation 307, and two Head Related Transfer Function (HRTF) filters (513), one for the left ear and one for the right ear of the headphone 106, computed for this position.
In case of 2D functioning, elevation 307 is set to 0.
Since each HRTF 513 is specific to a given position, HRTFs are computed in advance. Persons skilled in the art of 3D sound can use different mechanisms to compute them. Sets of HRTFs are also publicly available; an example may be found at http://sound.media.mit.edu/KEMAR.html ("KEMAR HRTF data", originally created May 24, 1995, last revised Jan. 27, 1997, Bill Gardner and Keith Martin, Perceptual Computing Group, MIT Media Lab, rm. E15-401, 20 Ames Street, Cambridge, Mass. 02139).
If additional HRTFs are needed, the system computes them, in particular through interpolation of the existing ones.
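By way of illustration only, a naive time-domain linear interpolation between the two nearest measured azimuths is sketched below; the table layout and names are assumptions, and skilled persons would often prefer frequency-domain or delay-aligned interpolation in practice.

```python
import numpy as np

def interpolate_hrtf(hrtf_table, azimuth):
    """Naive interpolation between the two measured azimuths bracketing
    the requested one. `hrtf_table` maps azimuth in degrees to a
    (left, right) pair of equal-length impulse responses; the requested
    azimuth is assumed to lie within the measured range."""
    angles = sorted(hrtf_table)
    lo = max(a for a in angles if a <= azimuth)
    hi = min(a for a in angles if a >= azimuth)
    if lo == hi:
        return hrtf_table[lo]
    w = (azimuth - lo) / (hi - lo)   # weight of the upper neighbor
    left = (1 - w) * np.asarray(hrtf_table[lo][0]) + w * np.asarray(hrtf_table[hi][0])
    right = (1 - w) * np.asarray(hrtf_table[lo][1]) + w * np.asarray(hrtf_table[hi][1])
    return left, right
```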
All participants up to the eighth are assigned a predefined position on the circle, the first one being "in front" (603) of end-user 105, the second one behind him, and so on up to the eighth.
The 9th caller is placed in the same position as participant number 6 (604), since these two have the greatest difference between their voices' characteristics.
Turning now to FIG. 7, any biometric identification system can make two types of errors:
1) mistaking biometric measurements from two different persons to be from the same person (called false match), and
2) mistaking two biometric measurements from the same person to be from two different persons (called false non-match).
These two types of errors are also often termed false accept and false reject, respectively. There is a tradeoff between the false match rate (FMR) and the false non-match rate (FNMR) in every biometric system. In fact, both FMR and FNMR are functions of the system threshold: if the threshold is decreased to make the system more tolerant to input variations and noise, then FMR increases; if it is raised to make the system more secure, then FNMR increases accordingly. This is easily seen from FIG. 7.
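By way of illustration only, the two rates can be estimated empirically from match scores at a given threshold; the sketch below assumes higher scores mean a closer biometric match, with acceptance when the score reaches the threshold, so lowering the threshold raises FMR and raising it raises FNMR, as described above.

```python
import numpy as np

def fmr_fnmr(impostor_scores, genuine_scores, threshold):
    """Error rates at a given decision threshold, assuming higher scores
    mean a closer biometric match (accept when score >= threshold)."""
    impostor = np.asarray(impostor_scores)
    genuine = np.asarray(genuine_scores)
    fmr = float(np.mean(impostor >= threshold))   # impostors wrongly accepted
    fnmr = float(np.mean(genuine < threshold))    # genuine speakers wrongly rejected
    return fmr, fnmr
```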
Administration of system 100 by end-user 105 can now be described in connection with FIG. 1.
With on/off feature 101, the system 100 can act as a regular phone, when the spatialization feature is off, or as the conference call spatialization apparatus, when this feature is on.
The end-user 105 can decide, with selection 102, to get a 2D representation of the call, meaning with voices coming from different directions but in the same horizontal plane, or a 3D representation, i.e., with the perceived direction of voice reception having a non-null elevation (angle 307 not null).
Based on the angular separation that the end-user 105 wants between the different participants' voices, he or she can then set, using means 103, a minimal azimuth angle that is then used to compute the different possible speaker positions in the plane. A minimum elevation angle can also be set in the case of 3D spatialization. One consequence of this setting is to fix the maximum number of participants that the system can handle: for example, a minimal azimuth of 45 degrees allows eight predefined positions on the circle. There is thus a trade-off between better participant discrimination and a greater number of represented participants.
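By way of illustration only, the relation between the minimal azimuth of means 103, the number of predefined positions, and the 2n participant capacity noted above can be sketched as follows; the function name is hypothetical.

```python
import math

def predefined_azimuths(min_azimuth_deg):
    """Evenly spaced speaker directions allowed by the minimal azimuth
    set through means 103; with n such positions, the system accommodates
    up to 2n participants (positions are reused after the nth)."""
    n = math.floor(360 / min_azimuth_deg)
    return [i * 360.0 / n for i in range(n)]

# e.g. predefined_azimuths(45) -> 8 positions, hence up to 16 participants
```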
The end-user can also set, with means 104, different biometric parameters for the analysis of the speaking participants' voices. The achieved result is improved identification of the participants.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
| Number | Date | Country | Kind |
|---|---|---|---|
| 06300241 | Mar 2006 | EP | regional |
| Number | Name | Date | Kind |
|---|---|---|---|
| 6192395 | Lerner et al. | Feb 2001 | B1 |
| 6262979 | Anderson et al. | Jul 2001 | B1 |
| 6850496 | Knappe et al. | Feb 2005 | B1 |
| 6865264 | Berstis | Mar 2005 | B2 |
| 6868149 | Berstis | Mar 2005 | B2 |
| 7386448 | Poss et al. | Jun 2008 | B1 |
| 20050206721 | Bushmitch et al. | Sep 2005 | A1 |
| 20090080623 | Creamer et al. | Mar 2009 | A1 |
| Number | Date | Country |
|---|---|---|
| 20070217590 A1 | Sep 2007 | US |