The disclosure is related to the field of computer audio, including applications in audio and video conferencing.
Computer audio applications employ automated audio detection for a variety of purposes. In one example, video conferencing systems use audio detection to identify an active speaker among conference participants, and the identification of an active speaker is used to provide a visual indication of the active speaker to other participants. Such an indication may take the form of a text-based message or notification, such as “Participant X is speaking”, and/or it may involve certain treatment of windows used for displaying camera views (“web cam” feeds) from the participants. For example, the window of the current speaker may be highlighted or enlarged relative to the windows of the other participants, helping to visually guide the participants' attention to the current speaker.
Computer audio applications can benefit from improvements in automated audio detection. In one example, video conferencing systems may apply level detection or other automated audio detection to the audio streams from participants in order to identify speakers: if the audio level is above a certain threshold, the participant is identified as a speaker, and otherwise the participant is identified as a non-speaker. This automated detection is used to drive the visual indications provided as part of video conference operation.
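The following is a minimal sketch of such threshold-based level detection, assuming frames of 16-bit PCM samples and an arbitrary example threshold (both the sample format and the threshold value are illustrative assumptions, not values taken from this disclosure):

```python
import math

# Hypothetical threshold; a real system would tune this empirically.
LEVEL_THRESHOLD = 500

def audio_detected(samples):
    """Level-only detection: audio is 'detected' whenever the RMS amplitude
    of a frame of 16-bit PCM samples exceeds a fixed threshold. Note that
    this test alone cannot distinguish speech from non-speech sound."""
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > LEVEL_THRESHOLD
```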
Existing automated audio detection may have limitations that cause certain issues in computer audio applications. In the case of video conferencing, for example, a participant in a video conference may be identified as a speaker even though the participant is not actually speaking. This can arise due to the presence of non-speech audio in the audio stream of the participant, resulting from non-speech sound in the participant's environment being picked up by the participant's microphone. In some cases this sound may be some type of background sound not directly controllable by the participant, such as crowd noise, vehicle noise, etc. In other cases it may result from an audible activity of the participant, such as shuffling papers adjacent to a speakerphone microphone. In either case, conventional speaker-detection mechanisms may not be able to accurately discriminate between such non-speech sound and actual speech, and to that extent may provide unreliable identifications of speakers.
A technique is disclosed for enabling more accurate discrimination between speech and non-speech audio in an audio stream in a computer audio application. In one example the technique is described in the context of video conferencing and applied to the audio streams of the conference participants. The improved discrimination is used as input to the user interface of the conference, for example to improve the accuracy of any graphical indications identifying speakers and non-speakers, improving user experience. Also, the discrimination may be used to initiate some type of remedial action, such as providing a notification to a participant whose audio stream has been identified as containing non-speech audio. Having been made aware, the participant can take steps to reduce non-speech audio under the participant's control. Thus, by accurately and explicitly identifying sources of non-speech audio, the system provides for better quality video conferences.
More particularly, a method of operating a video conferencing system is disclosed that includes applying audio detection and speech recognition to an input audio stream to generate respective audio detection and speech recognition signals, and applying a function to the audio detection and speech recognition signals to generate a non-speech audio detection signal identifying presence of non-speech audio in the input audio stream when the audio detection signal is asserted and the speech recognition signal is not asserted. The method further includes performing a control or indication action in the video conferencing system based on assertion of the non-speech audio detection signal.
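A minimal sketch of the combining function described above is shown below; the signal and function names are chosen here for illustration only and do not appear in this disclosure:

```python
def non_speech_audio_detected(audio_detection: bool, speech_recognition: bool) -> bool:
    """Assert the non-speech audio detection signal when audio is detected in
    the input stream but the speech recognizer has produced no recognized speech."""
    return audio_detection and not speech_recognition
```

A control or indication action (for example, a notification in the conference user interface) would then be conditioned on the value of this signal.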
In one example the technique is employed to discriminate between speech and non-speech audio in each of a set of audio streams from participants in a video conference. Non-speech audio is detected when no speech is recognized in an audio stream that has a non-zero level. A graphical user interface of the video conference is operated to reflect the discriminating between speech and non-speech audio in the audio streams. Operation includes (a) providing a first graphical identification of one or more first participants as speaking participants based on a discrimination of speech in the respective audio streams, and (b) providing a second graphical identification of one or more second participants as non-speaking participants based on a discrimination of non-speech audio in the respective audio streams.
In one embodiment, remedial action may also be taken such as sending a notification to one of the participants (e.g., to the conference organizer or to an offending participant directly), enabling an offending participant to make a change in activity or the environment to reduce the non-speech audio, further improving user experience in the video conference.
The foregoing and other objects, features and advantages will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views.
The system of
In operation, the conference clients 12 establish connections and conference sessions with the conference server 10. Each conference client 12 executes a client conference application that provides a graphical user interface to a local conference participant or “attendee”, as well as providing for transmission of local audio and video input to the conference server 10 and receiving conference audio and video streams or feeds from the conference server for rendering to the local attendee. The conference server performs merging or “mixing” of the audio and video streams from the conference clients 12 to create the conference feeds provided back to the conference clients 12. Audio is typically mixed into a single output channel distributed to all conference clients 12, enabling all participants to hear any participant who is speaking. Video streams such as from local cameras are individually copied to all participants, enabling each participant to see all the other participants. The system also enables documents or other application data to be shared among the conference clients, where the source of a shared item is referred to as a “presenter” 16. For such sharing, the contents of a window or similar user-interface element are sent from the presenter 16 to the conference server 10, where they are replicated and provided to the other conference clients 12 for local display.
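As one illustration of the mixing described above, the sketch below sums equal-length frames of 16-bit PCM samples from the conference clients into the single output channel; sample-wise summation with clipping is only one simple strategy, and an actual conference server may mix differently:

```python
def mix_frames(frames):
    """Mix equal-length frames of 16-bit PCM samples (lists of ints), one frame
    per conference client, into one output frame, clipping to the int16 range."""
    if not frames:
        return []
    mixed = []
    for samples in zip(*frames):
        total = sum(samples)
        mixed.append(max(-32768, min(32767, total)))  # clip to 16-bit range
    return mixed
```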
The graphical display can provide information about the operation of the conference in one or more ways. For example, the conference control window 32 may include a notification area (NOTIF) 38 used to display information. In the illustrated example, one notification is an identification of the current speaker as “CLT 1”. An identification of the speaker may also be made in other ways, such as by applying some manner of highlighting to the camera viewing window 34 of the current speaker. In the illustrated example, this highlighting is in the form of a bolded or otherwise enhanced border 40, while maintaining a regular or non-enhanced border for the camera viewing window(s) 34 of the non-speaking participants. Other forms of highlighting may be used, such as enlarging the speaker window 34 relative to the non-speaker windows 34, dynamically re-arranging the windows 34 to place the current speaker in some predetermined position (e.g., at the top), etc.
As outlined above, the conference system provides improved performance by improved discrimination between a true speaker and a participant generating non-speech audio. In contrast to prior systems, the presently disclosed system incorporates speech recognition along with audio detection and uses these to classify each audio stream as containing speech, silence, or non-speech audio. This classification is then used to more accurately identify speakers, and it may also be used to take some form of remedial action with respect to detected non-speech audio. Specifically, each audio stream is discriminated into one of the following three conditions; an illustrative classification sketch follows the list:
1. Silence (audio not detected, i.e., amplitude below threshold)
2. Speech (speech output from speech recognition)
3. Non-speech sound (audio detected, with no speech recognized)
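A compact sketch of this three-way classification, assuming a level-based audio detector (such as the one sketched earlier) and a speech recognizer that returns an empty transcript when no speech is recognized (the names are illustrative, not part of this disclosure):

```python
from enum import Enum

class StreamState(Enum):
    SILENCE = 1      # audio not detected (amplitude below threshold)
    SPEECH = 2       # speech output produced by the speech recognizer
    NON_SPEECH = 3   # audio detected, but no speech recognized

def classify(audio_is_detected: bool, recognized_text: str) -> StreamState:
    """Map the audio detection result and recognizer output for one audio
    stream onto the three conditions listed above."""
    if not audio_is_detected:
        return StreamState.SILENCE
    return StreamState.SPEECH if recognized_text.strip() else StreamState.NON_SPEECH
```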
At 52, the result of the discrimination in step 50 is used to operate the conference GUI. At a minimum, the discrimination can provide a more reliable identification of a speaker as distinct from non-speakers. In prior systems, a non-speaker generating some type of non-speech sound might be treated erroneously as a speaker. In the presently disclosed technique, only a participant whose audio is characterized as “speech” (#2 above) is identified as a speaker, whereas those who are silent (#1) or who are generating non-speech sound (#3) are identified as non-speakers. These more reliable identifications of speakers and non-speakers are provided to the GUI for presenting the corresponding graphical indications such as described above with reference to
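One way the conference GUI could consume these classifications is sketched below, reusing the StreamState values from the classification sketch above and assuming hypothetical highlight_window and clear_highlight methods on a GUI object (these method names are assumptions for illustration):

```python
def update_speaker_indications(states, gui):
    """states: mapping of participant id -> StreamState (see classification sketch).
    Only participants classified as SPEECH are indicated as speakers; those who
    are silent or generating non-speech sound are treated as non-speakers."""
    for participant, state in states.items():
        if state is StreamState.SPEECH:
            gui.highlight_window(participant)   # e.g., enhanced border or enlargement
        else:
            gui.clear_highlight(participant)    # regular, non-enhanced presentation
```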
As also indicated at 52, the system may also perform some type of remedial action with respect to detected non-speech audio. As an example, a notification may be sent to the participant whose audio is identified as non-speech audio, making the participant aware of this condition so that the participant can take further action to address it (for example, ceasing some non-speech activity or increasing the distance between a noise source and the local microphone). Either in addition or as an alternative, a notification may be sent to a participant serving as the conference organizer to enable that person to take action, such as somehow notifying the offending participant. More intrusive action may be taken, such as actively reducing the level or entirely muting the offending participant's audio as long as non-speech audio is being detected.
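A sketch of such remedial actions is shown below, again reusing the StreamState values from the earlier sketch and assuming hypothetical notify and set_gain interfaces on a conference object:

```python
def apply_remedial_action(participant, state, conference):
    """Notify and optionally attenuate a participant whose stream is classified
    as non-speech audio; restore normal handling otherwise."""
    if state is StreamState.NON_SPEECH:
        conference.notify(participant, "Background noise detected on your microphone")
        conference.notify(conference.organizer, "Non-speech audio detected from " + participant)
        conference.set_gain(participant, 0.0)   # mute while non-speech audio is detected
    else:
        conference.set_gain(participant, 1.0)   # restore the normal audio level
```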
Audio processing may also be split among different clients in the same conference. For example, the server may perform the audio processing for low-performance client devices, while higher-performance clients may perform it locally. Network connectivity may also be a factor: client-side detection may be preferable when the network is poor and server-side detection when the network is good, because a poor network degrades the quality of the audio stream reaching the server and thus reduces speech recognition accuracy if recognition is performed there.
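This placement decision might be sketched as follows; the two boolean inputs and the rule that device capability takes precedence over network quality are illustrative assumptions:

```python
def choose_processing_location(client_is_low_performance: bool, network_is_poor: bool) -> str:
    """Decide where to run audio detection and speech recognition for one client.
    A low-performance device may be unable to run recognition locally, while a
    poor network degrades the stream reaching the server and hurts server-side
    recognition; device capability is given precedence here."""
    if client_is_low_performance:
        return "server"
    if network_is_poor:
        return "client"
    return "server"
```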
One benefit of using speech recognition in the presently disclosed manner is that it may be included in the system for other unrelated uses, and thus an efficiency is gained by making dual use of it. For example, in an embodiment like
While various embodiments of the invention have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.