1. Field of Invention
The present invention relates to the field of multipoint audio/video conferences, and more particularly to improving the quality of the conference by reducing nuisance signals.
2. Description of Background Art
A multipoint conference of audio and/or video and/or multimedia is a communication among more than two participants. Commonly, conference calls may be established over a communications network, such as the Public Switched Telephone Network (“PSTN”), Integrated Services Digital Network (ISDN), Internet Protocol (IP) network, etc. The network contains Multipoint Control Units (MCUs) and/or audio bridges that route and compose the communications of the participants in the call. The operation of MCUs and audio bridges is well known to those skilled in the art. An exemplary audio bridge is depicted in U.S. patent application Ser. No. 10/072,081 or in U.S. patent application Ser. No. 10/144,561, the contents of which are incorporated herein by reference. It should be noted that the terms MCU and audio bridge may be used interchangeably herein.
A common MCU may receive audio signals from a plurality of conferees, analyze the signals and create control information such as, but not limited to, VAD (Voice Activity Detection), signal energy, and signal quality measures. Based on the control information, decisions may be made regarding whose signals will be mixed and distributed among the other conferees, or whose signal will be muted due to a conclusion that the signal is below acceptable quality. An Unacceptable Signal (UAS) is an objective criterion and may depend on the type of the conference. Exemplary criteria may be non-voice signals such as music, DTMF, background noise, etc. The terms noisy signal, nuisance and UAS may be used interchangeably herein, and the term nuisance may represent all of them.
There are known methods for generating information regarding signal energy, VAD and quality. Exemplary algorithms for creating this information are depicted in G.723.1 (with the VAD defined in Annex A of the same standard); G.729 Annex B; and GSM AMR (GSM 06.71), which uses the VAD algorithm of GSM 06.94. A simple Nuisance Detector (ND) algorithm may declare a nuisance when the signal energy (SE) is above a certain level while the VAD indicates that the signal is not voice, i.e., ND = SE AND (NOT VAD).
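Purely as an illustration of this simple criterion, the sketch below flags a frame as a nuisance when its energy is high while the VAD classifies it as non-voice; the function name, the energy units and the threshold value are assumptions for the example and are not taken from any of the cited standards.

```python
def is_nuisance(signal_energy_db, vad_is_voice, energy_threshold_db=-40.0):
    """Simple nuisance criterion: high energy while the VAD says 'not voice'.

    signal_energy_db    -- frame energy, e.g. in dBFS (illustrative units)
    vad_is_voice        -- boolean decision of any VAD algorithm
    energy_threshold_db -- level above which a non-voice frame is suspect
    """
    return signal_energy_db > energy_threshold_db and not vad_is_voice


# A loud frame classified as non-voice is flagged; speech and quiet noise are not.
print(is_nuisance(-25.0, vad_is_voice=False))  # True  (possible nuisance)
print(is_nuisance(-25.0, vad_is_voice=True))   # False (speech)
print(is_nuisance(-60.0, vad_is_voice=False))  # False (quiet background)
```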
The quality of a conference depends on the automatic decisions of those methods. For example, a sensitive ND algorithm may disconnect a valid participant, while a less sensitive algorithm may add the audio of a noisy connection to the conference mix. The conference mix is the composed audio signal of the selected conferees. The selection is based on the conference setup parameters and on automatic decisions made by analyzing the signals of the current speakers. There are cases where an automatic decision may frequently fail. For example, when a conferee places the conference call on hold and accepts another call, during the hold period a private branch exchange (PBX), through which the conferee is connected, may play “music on hold” over the connection to the conference, disturbing the rest of the participants. “Music on hold” may be music, broadcast radio, advertising or other signals played to a party waiting on hold. Generally, “music on hold” may have the same properties as speech; therefore it may pass the criteria of common VAD and/or ND algorithms and be transmitted to the other parties in the conference. On the other hand, a sensitive ND that is not tuned to the connection quality of a certain conferee may harm/disconnect a valid conferee. Therefore it is difficult to pre-tune the ND algorithm to different conferees' conditions.
Thus, it is evident that current technologies for automatic nuisance detection in audio/video conferencing may make wrong decisions that reduce the quality of the conference. Therefore, there is a need in the art for a new nuisance detection method that may overcome these deficiencies.
Systems according to the present invention solve the above-described problem by providing a conferee who has been identified as a source of nuisance by an automatic decision of the MCU with a path to respond to that decision and to correct or adjust/tune the ND according to his/her audio signal.
For example, in a telephone conference in which at least one of the conferees may be connected to the conference via a PBX, the conferee may place the conference on ‘HOLD,’ forcing a nuisance signal over the connection to the conference. The ND may then identify this connection as a nuisance connection and send this indication to an exemplary control unit in the MCU. Upon receiving an indication that the conferee channel is a nuisance, the control unit may mute the signal coming from this conferee. The exemplary controller may then place an Interactive Voice Response (IVR) message over the audio signal toward the conferee. An exemplary message may inform the conferee that he has been muted and request the conferee to press one of the touch-tone keys: for instance, ‘1’ if he has just returned from ‘HOLD,’ ‘3’ if the conferee is using a noisy line/environment, or ‘5’ to disable the ND algorithm, etc.
If a response is not received, the message may continue for a certain period or for the rest of the conference. If one of those keys has been pressed, a system according to the present invention may act as follows. If the key is ‘1’, an exemplary embodiment of the present invention cancels the mute condition and enables the conferee to be heard. If the key is ‘3’, the exemplary embodiment of the present invention may reduce the sensitivity of the ND algorithm. The control unit then enables the conferee to be heard in the conference while keeping a record of this adjustment. If the new sensitivity level is above a predefined tolerance, the exemplary controller may inform the conferee that he is too noisy and refuse to add the conferee's audio to the conference. If the key is ‘5’, the exemplary embodiment of the present invention may disable the ND algorithm and connect the nuisance conferee unconditionally.
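Merely as a non-authoritative sketch of the key handling just described, the following maps the three exemplary keys to control actions; the channel record, its field names and the adjustment factor are hypothetical and serve only to illustrate the idea.

```python
def handle_dtmf_response(key, channel):
    """Map a nuisance conferee's DTMF reply to a control action (illustrative).

    'channel' is assumed to be a dict holding the conferee's mute flag and
    ND settings; all field names here are hypothetical.
    """
    if key == '1':                          # conferee just returned from 'HOLD'
        channel['muted'] = False
    elif key == '3':                        # noisy line or noisy environment
        channel['nd_threshold'] *= 1.3      # reduce ND sensitivity (example factor)
        if channel['nd_threshold'] > channel['nd_max_threshold']:
            channel['muted'] = True         # too noisy: keep the conferee muted
        else:
            channel['muted'] = False
    elif key == '5':                        # disable the ND algorithm entirely
        channel['nd_enabled'] = False
        channel['muted'] = False
    return channel


# Example: a conferee on a noisy line presses '3' and is un-muted with a
# less sensitive detector.
state = {'muted': True, 'nd_threshold': 10.0,
         'nd_max_threshold': 30.0, 'nd_enabled': True}
print(handle_dtmf_response('3', state))
```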
Other exemplary embodiments may use an IVR message to request another conferee, for example the chairman of the conference, to decide whether or not to mute the noisy conferee.
Other exemplary embodiments may place a noisy conferee in push-to-talk (PTT) operation, instructing the noisy conferee, via an IVR message, to momentarily press any one of the keys each time he/she wishes to talk and to press again upon finishing his/her part.
In general, systems according to the present invention may use feedback means other than DTMF, such as, but not limited to: voice recognition, network control signals such as the ISDN ‘D’ channel, control packets over IP communication, etc.
Thus, systems according to the present invention advantageously offer an improved algorithm that handles nuisances in conferences by requesting feedback from the nuisance conferees. The feedback from the noisy conferee may correct the automatic decision and therefore improve the quality of the conference.
Other features and advantages of the present invention will become apparent upon reading the following detailed description of the embodiments with the accompanying drawings and appended claims.
Turning now to the figures, in which like numerals represent like elements throughout the several views, exemplary embodiments of the present invention are described. For convenience, only some elements of the same group may be labeled with numerals. The purpose of the drawings is to describe exemplary embodiments, not to serve as production drawings. Therefore features shown in the figures are chosen for convenience and clarity of presentation only.
The plurality of endpoints 1110aa-nk is connected via the plurality of networks 1130a-k to the MCCU 1140. The MCCU 1140 may be an MCU, or an audio-only multipoint control unit (an audio bridge), for example. The MCCU 1140 and/or some or all of its components are logical units that may be implemented by hardware and/or software. The MCS 1170 may be a control module and may be a logical unit that controls the operation of the MCCU 1140.
An endpoint is a terminal on a network, capable of providing one-way or two-way audio and/or visual communication with other terminals or with the MCCU 1140. The information communicated between the terminals and/or the MCCU 1140 may include control signals, indicators, audio information, video information, and data. A terminal may provide any combination of several different types of inputs and/or outputs, such as speech only, speech and data, a combination of speech and video, or a combination of speech, data, and video. In the case of an audio conference, the endpoint may be a common telephone, a cellular telephone, etc.
The NI 1142 receives multimedia communications 1122a-k via a plurality of networks 1130a-k and multimedia communications 1120aa-nk from the plurality of endpoints 1110aa-nk, and processes the media communications according to the communication standards used by each type of network, such as, but not limited to, H.323, H.321, H.324, H.324M, SIP, and/or H.320, ISDN, PSTN, etc. The NI 1142 then delivers compressed audio, compressed video, compressed data, and control streams to appropriate logical modules in the MCCU 1140. Some communication standards require that the processing of the NI 1142 include demultiplexing the incoming multimedia communication into compressed audio, compressed video, compressed data and control streams. In the opposite direction, the NI 1142 receives the separate streams from the various units (e.g., the MCS 1170, audio unit 1160, and/or video unit 1300) and processes the streams according to the appropriate communication standard. The NI 1142 then transmits the streams to the appropriate network 1130a-k.
The audio unit 1160 receives the compressed audio streams of the plurality of endpoints 1110aa-nk via the NI 1142 and CACI 110, processes the audio streams, mixes the relevant audio streams, and sends the compressed mixed signal via the Compressed Audio Common Interface (CACI) 110 and the NI 1142 to the endpoints 1110aa-nk. Audio unit 1160 may be a logical unit and is described below in conjunction with
The video unit 1300 may be a logical unit that receives and sends compressed video streams. The video unit 1300 includes at least one video input module that handles an input portion of a video stream 1302 from a participating endpoint and at least one video output module that generates a composed compressed video output stream that is sent via the Compressed Video Common Interface (CVCI) 1302 to the NI 1142 and from there to the designated endpoints 1110aa-nk. An exemplary operation of such a video unit is described in U.S. Pat. No. 6,300,973, which is incorporated herein by reference. The video unit is not mandatory for the operation of the present invention. The present invention may be used by an MCCU that does not have a video unit, such as an audio bridge.
Preferably, the host 1200 communicates with the operator 1115 of the MCCU 1140, where the operator 1115 may have an operator's station for communicating with the host 1200. The host 1200 controls the MCCU 1140 via the MCS 1170 according to instructions from the operator 1115. However, the operator 1115 is not mandatory. The MCCU may operate automatically without an operator.
Further, the CACI 110 carries signals to and from endpoints 1110aa-nk. For example, the compressed signal 115 from one of the endpoints 1110aa-nk is routed through the CACI 110 to the decoder 122 in the codec 120, which was previously allocated to that endpoint by the MCS 1170 via control bus 135. The decoder 122 may be a logical unit, software or hardware or a combination thereof, and may decode a compressed audio stream based on communication standards such as, but not limited to, G.711, G.723.1, G.728, G.729, or MPEG, or may relay uncompressed audio. The decoder 122 then decodes the compressed audio stream, such as compressed signal 115, and broadcasts the decoded signal 126 over the Decoded Audio Common Interface (DACI) 140. The DACI 140 is a bus that may have broadcasting capabilities. The DACI 140 may be implemented, for example, by any one of or any combination of Time Division Multiplexing (TDM), Asynchronous Transfer Mode (ATM), Local Area Network (LAN), wireless technology, or shared memory. An appropriate bridge 150 may then grab the decoded signal from the DACI 140 and may analyze, enhance, and/or mix the decoded signal and return the output 161 to the DACI 140.
The encoder 124 may be a logical unit. The encoder 124 may compress the output 128 of the appropriate bridge 150, forming a compressed audio stream, such as the compressed signal 117, based on a communication standard such as, but not limited to, G.711, G.723.1, G.728, G.729, and/or Moving Picture Experts Group (MPEG); alternatively, the output may be transferred as decoded audio to an endpoint that receives decoded audio.
The MCS 1170 may use a database that holds the connection parameters (e.g., codecs and bridges, etc.) and the connection status (e.g., normal, mute etc.) of each endpoint (participant) that is currently connected to the MCCU, in every conference that is currently managed by the MCCU. The Mute (M) connection status may mean that the participant cannot be heard in the conference. The Normal (N) connection status may mean that the participant can be heard and can listen to the conference etc. According to the database, the MCS 1170 programs one or more bridges 150 to grab from the DACI 140 the decoded signals of all the participants associated with a conference that is assigned to those bridges 150.
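Purely for illustration, a per-participant record in such a database might be sketched as below; the field names, the statuses and the lookup are assumptions and do not describe the actual database of the MCS 1170.

```python
from dataclasses import dataclass
from enum import Enum


class ConnectionStatus(Enum):
    NORMAL = "N"   # participant can be heard and can listen
    MUTE = "M"     # participant cannot be heard in the conference


@dataclass
class ParticipantRecord:
    endpoint_id: str
    codec_id: int                     # codec 120 allocated to this endpoint
    bridge_id: int                    # bridge 150 handling its conference
    status: ConnectionStatus = ConnectionStatus.NORMAL


# Example: find the participants of a conference whose streams may be heard.
conference = [
    ParticipantRecord("1110aa", codec_id=1, bridge_id=7),
    ParticipantRecord("1110ab", codec_id=2, bridge_id=7, status=ConnectionStatus.MUTE),
]
audible = [p for p in conference if p.status is ConnectionStatus.NORMAL]
print([p.endpoint_id for p in audible])   # ['1110aa']
```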
The decoded output 126 of any codec 120 can be grabbed by more than one bridge 150, allowing the participants to be associated with more than one conference. The decoded streams from the decoders 122 on the DACI 140 may be grabbed by the bridge 150 and then analyzed and enhanced by the analyze and enhance unit 152. The analyze and enhance unit 152 may be a logical unit, and may include a set of algorithms for analyzing an audio stream of a participant and/or enhancing its quality, such as, but not limited to, International Telecommunications Union (ITU) G.165 (echo canceling), Dual Tone Multi-Frequency (DTMF) detection, DTMF suppression, and/or Voice Activity Detector (VAD), signal energy, and nuisance analysis.
The bridge 150 may have one or more analyze and enhance units 152. Each analyze and enhance unit 152 is assigned to a single participant and is programmed according to the connection status of that participant in the conference. The control unit 154 controls a conference: it receives all signals from the analyze and enhance units 152 and selects the participants that will be routed via switch 156 to the mixer 160. The control unit 154 may implement an exemplary method of the present invention for handling nuisance by utilizing the analysis signals 153 coming from the analyze and enhance units 152 and controlling the IVR module 154a and switch 156. The exemplary method is described below in conjunction with
The mixer 160 receives the enhanced streams from all of the selected participants and/or the signal from IVR 154a and supplies each participant with an uncompressed mixed audio stream of the selected participants and/or the signal from IVR 154a. Mixer 160 may supply more than one stream 161, each stream having a different mix.
Signals 153 from the analyze and enhance unit 152 are sent to the control unit 154, and the enhanced decoded audio signals 155 are sent from the analyze and enhance units 152 to the switch unit 156. The switch unit 156 is a selector that receives the decoded streams from all the participants in a conference as well as from the IVR unit 154a and transfers the selected streams to mixer 160. The selection is based on the decisions of the control unit 154. The decisions of the control unit 154 are based on commands received from the MCS 1170, which define the connection status of the participants in the conference that is assigned to the bridge 150, and on the information signal 153 from the analyze and enhance unit 152. The control unit 154 controls, via control signals 157, the switch 156 and the mixer 160. For example, in a case where a participant's connection status is Normal (N), the analyze and enhance unit 152 that is associated with that participant may indicate that the voice signal meets a certain criterion such as that set forth by the VAD (e.g., the energy level being above a certain value). Then the control unit 154, via switch 156, selects the output 155 of the analyze and enhance unit 152, which is assigned to the participant, as one of the inputs to the mixer 160.
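The selection decision described above might look roughly like the sketch below, assuming each analyze and enhance unit reports a connection status, a VAD flag and an energy value; the field names and the threshold are illustrative only and do not reflect the actual control unit 154.

```python
def select_for_mix(participants, energy_threshold_db=-40.0):
    """Return the participants whose enhanced streams go to the mixer.

    Each participant is assumed to be a dict exposing 'status',
    'vad_is_voice' and 'energy_db' (hypothetical field names).
    """
    selected = []
    for p in participants:
        if p['status'] != 'N':                        # only Normal connections
            continue
        if p['vad_is_voice'] and p['energy_db'] > energy_threshold_db:
            selected.append(p)                        # meets the VAD/energy criterion
    return selected
```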
In another case, the analyze and enhance unit 152 that is associated with a participant may indicate to the control unit 154 that the participant's signal is a nuisance. The control unit 154 may then initiate an exemplary embodiment of the present invention, as illustrated in
In an alternate embodiment (not shown in the drawings) the output of the IVR unit may be delivered directly to the DACI, the encoder of the nuisance conferee having been instructed to grab the IVR's signal from the DACI instead of the output of the appropriate mixer.
The mixer 160 mixes the selected audio signals to form the mixed signals 161, and broadcasts the mixed signals 161 over the DACI 140. Some embodiments of the bridge 150 have the capability of eliminating the voice of a speaker from the mixed signal that is aimed at the endpoint of that speaker. Control unit 154 may update the MCS 1170 with the new situation of the nuisance conferee.
The MCS 1170, based on the connection status stored in the database, commands one or more codecs 120 to grab the mixed output 128 from the DACI 140 for listening to the conference. After grabbing the mixed output 128 from the DACI 140, the encoder 124 encodes the decoded signal from the appropriate bridge 150, and sends the compressed signal 117 via the CACI 110 to the appropriate participant.
The codecs 120 and the bridges 150 may be implemented by Digital Signal Processors (DSPs) such as, but not limited to, the Texas Instruments TMS320C31 DSP. One DSP can include more than one unit (e.g., more than one codec and/or bridge). In the above example, the codec 120 handles a single participant's audio signal, and the bridge 150 handles one conference or part of a conference.
Referring now to
Then the control unit instructs 224 the IVR 154a to send a “Mute” message to the nuisance conferee. In parallel, as long as the IVR message is active, the switch 156 is instructed to select the input of the IVR unit 154a as the only input to a mixer 160 that is associated with the nuisance conferee. An exemplary “Mute” message may be “Please be aware that you have been muted. Please press ‘1’ upon returning from ‘Hold’. Please press ‘3’ if you were not on hold.”
Other exemplary embodiments may offer other or additional options. For example, an additional option may be one such as, but not limited to, “Please press ‘5’ to disable the ND algorithm,” etc.
At the end of the message the control unit may wait 226 for a period ‘T1’. Period ‘T1’ may be in the range of a few hundred milliseconds up to several seconds. Exemplary values of ‘T1’ may be 800 milliseconds, 2 seconds, etc. At the end of the waiting period, the control unit verifies 230 whether a DTMF signal has been identified by the A&E unit 152 that is associated with the nuisance conferee. If no DTMF signal has been received, which may reflect that the nuisance conferee is not listening to the conference, then the control unit returns to step 224 and continues the mute decision. Such a case may happen if the nuisance conferee has put the conference call on hold or has been disconnected. In either of those cases no harm is done either to the nuisance conferee or to the rest of the participants. The loop comprising steps 224, 226 and 230 may continue until a DTMF signal is received. If no DTMF signal is received, or a DTMF signal other than ‘1’, ‘3’ or ‘5’ has been received, the loop may continue until the end of the conference. Other embodiments may add a counter that counts the number of cycles and may disconnect the nuisance conferee after a certain number of cycles.
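The loop of steps 224, 226 and 230 might be sketched roughly as follows; the IVR and DTMF interfaces, the cycle limit and the timing value stand in for the real modules and are assumptions for illustration only.

```python
import time

T1_SECONDS = 0.8       # one of the exemplary waiting periods mentioned above
MAX_CYCLES = 10        # illustrative cycle limit before optionally disconnecting


def mute_message_loop(ivr, dtmf_detector, t1=T1_SECONDS, max_cycles=MAX_CYCLES):
    """Repeat the 'Mute' IVR message until an expected DTMF key is detected.

    'ivr.play_mute_message()' and 'dtmf_detector.get_key()' are hypothetical
    stand-ins for the IVR module 154a and the A&E unit 152.
    Returns '1', '3' or '5', or None if the cycle limit is reached.
    """
    for _ in range(max_cycles):
        ivr.play_mute_message()         # step 224: send the "Mute" message
        time.sleep(t1)                  # step 226: wait for period T1
        key = dtmf_detector.get_key()   # step 230: check for a DTMF reply
        if key in ('1', '3', '5'):
            return key
        # no key, or an unexpected key: keep the conferee muted and repeat
    return None                         # an embodiment may disconnect here
```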
In case a DTMF signal has been received and has been identified as ‘1,’ which indicates that the nuisance conferee has returned to the conference and pushed the appropriate key, then control unit 154 (
In case a DTMF signal has been received and has been identified as the number indicating that the nuisance conferee is listening to the conference, the received ND indication may be due to a noisy connection or noisy environment. In the current example the number is ‘3’. Then, at step 238 the level of the threshold of the ND 2020 is increased. Increasing the threshold reduces the sensitivity to noise. The present invention may set the initial threshold level to a low level. Based on the feedback from a nuisance conferee, embodiments according to the present invention adjust the setup according to the current connection.
Different embodiments of the present invention may utilize different methods for increasing the threshold level. One exemplary method may increase the level by a certain percentage of the current level each time, for example 10%, 30%, or 50%. Other methods may use a fixed value each time; other methods may change the value according to the value of a counter (Cnt.), etc. The Cnt. is reset during the initiation of the conference, and the value of the Cnt. is increased by one 238 each time that the conferee has been identified as a nuisance.
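Two of the adjustment strategies mentioned above could be written as in the sketch below; the percentage, the step table and the way the counter Cnt. is used are examples only, not prescribed values.

```python
def increase_threshold_relative(current_level, pct=0.30):
    """Raise the ND threshold by a percentage of its current value (e.g. 30%)."""
    return current_level * (1.0 + pct)


def increase_threshold_by_counter(current_level, cnt, steps=(2.0, 4.0, 8.0)):
    """Raise the ND threshold by a step selected according to the counter Cnt."""
    step = steps[min(cnt, len(steps) - 1)]   # larger steps as Cnt. grows
    return current_level + step
```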
At step 240 a decision is made whether the nuisance conferee is a disturbing one. If the nuisance conferee is defined as a disturbing conferee, the Mute state of the conferee will be kept for the rest of the conference. If not, the Mute state may be canceled. An exemplary embodiment may compare 240 the new ND level to a predefined maximum level, Max. The Max value is a parameter that may be set during the setup of the conference, or may be a default value, above which the noise is disturbing and it is better to permanently mute the nuisance conferee. Other embodiments may make the decision based on the value of the Cnt.: if the value is above a certain number of interactions, it indicates that the connection is a disturbing connection and the conferee may be defined as a disturbing conferee.
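The decision at step 240 could be expressed as in the following sketch, assuming both a maximum ND level (Max) and a maximum number of interactions are available as conference setup parameters; both limits are illustrative assumptions.

```python
def is_disturbing(nd_level, cnt, max_level, max_interactions=3):
    """Decide whether the nuisance conferee should stay muted for the conference.

    nd_level         -- the ND threshold level after the latest increase
    cnt              -- times this conferee has been identified as a nuisance
    max_level        -- the Max parameter set during conference setup
    max_interactions -- illustrative limit on the number of interactions
    """
    return nd_level > max_level or cnt > max_interactions
```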
If the connection is not a disturbing connection 240, for example if the ND level is below Max, then the mute state is canceled 234 and the task is terminated 236. The task may be restarted if a new ND indication is received. The task will then start at step 210, but this time the value of the Cnt. will be other than zero.
If the connection is a disturbing connection 240, for example if the ND level is above Max, then the exemplary embodiment of the present invention may keep the mute state of the nuisance conferee, allowing the nuisance conferee only to listen to the conference. At step 242 the IVR module 154a (
Other embodiments may offer the conferee the option to disconnect the current connection and try again. Other embodiments may request human assistance to decide how to proceed with the nuisance conferee.
In case a DTMF signal has been received and has been identified as ‘5’, this indicates that the nuisance conferee has requested to disable the ND 2020. Such a request may be made when the involvement of this conferee is crucial to the conference and he/she must be heard even though the connection is noisy. The control unit then disables 232 the ND 2020 and cancels the mute condition of the nuisance conferee.
Alternate embodiments of the present invention may use feedback means other than DTMF, such as, but not limited to: voice recognition, network control signals such as the ISDN ‘D’ channel, control packets over IP communication, etc.
The present invention may handle more than one nuisance conferee. For each nuisance conferee a dedicated task may be initiated.
In this application the words “unit” and “module” are used interchangeably. Anything designated as a unit or module may be a stand-alone unit or a specialized module. A unit or a module may be modular or have modular aspects allowing it to be easily removed and replaced with another similar unit or module. Each unit or module may be any one of, or any combination of, software, hardware, and/or firmware.
Overall, embodiments according to this invention will improve the quality of a conference by handling a nuisance conferee automatically with feedback from the nuisance conferee that improves the automatic decision. The process is transparent to the rest of the conferees.
In the description and claims of the present application, each of the verbs “comprise,” “include” and “have,” and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements, or parts of the subject or subjects of the verb.
The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of embodiments of the present invention that are described and embodiments of the present invention comprising different combinations of features noted in the described embodiments will occur to persons of the art. The scope of the invention is limited only by the following claims.