Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences

Abstract
A mechanism is provided that allows participants on a conference call to identify, and then mute or filter, the participant(s) responsible for introducing noise, regardless of whether the noise is caused by transmission impairments or by the participant(s) being in a noisy location. For example, individual users could press a “test” button that blocks each of the participants one at a time. This would allow the source of the noise to be identified. This “test button” could be provided at the endpoint(s), enabled through a web interface, or provided, for example, through a dedicated conference call interface at the endpoint(s) or at the conference bridge. The blocking of each participant could occur through interaction with the main PBX using, for example, in-band signaling to the PBX. Once the source(s) of the noise is identified, noise mitigation can be applied as needed.
Description
FIELD OF THE INVENTION

An exemplary embodiment of this invention relates to communications devices, protocols and techniques. More specifically, an exemplary aspect of this invention relates to teleconferences, and the identification and reduction of noise therein.


BACKGROUND OF THE INVENTION

Traditionally, when unacceptable background noise levels have been experienced on voice calls, the party experiencing the noise has simply turned down the volume setting, which reduces the background noise level but at the expense of the user's ability to hear the voice of the other party. Alternatively, in a manned conference-bridge type environment, a conference bridge operator can manually check the various lines of the conference call and turn down the volume on noisy lines.


In situations where the background noise is caused by a party being in a noisy location, solutions such as local mute and far-end mute are known. The obvious disadvantage to these approaches is that they do not distinguish between noise and voice.


Solutions such as highly directional handset microphones and speakerphones can do a good job of filtering out background noises, but they require the user to be positioned precisely; otherwise the user's voice, too, is filtered out.


Prior to the development of electret microphones, telephone handsets used carbon microphones. Essentially, these were small canisters filled with powdered carbon. The top of the canister was covered with a thin, highly flexible diaphragm. When sound waves pressed on the diaphragm, the carbon powder was compressed, thereby reducing the electrical resistance of the canister. An interesting artifact of this design is that if sounds were not loud enough to compress the carbon, they were not transmitted by the microphone. For this reason, carbon microphones were quite good at filtering out the background noise at the user's location.


Electret microphones do not have this non-linear behavior. Because of their inherent ability to pick up low-amplitude sounds in addition to the user's voice, it became necessary to supplement the microphones with an expander circuit starting approximately 20 years ago. The expander circuit would measure the signal strength of the microphone and then, if the signal strength was below a predetermined threshold level, the transmitted signal would be attenuated electronically by an additional amount, perhaps 10 dB.
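
By way of a non-limiting illustration, the expander behavior described above could be sketched roughly as follows. The 10 dB attenuation figure comes from the description; the frame size, sample values and the -40 dBFS threshold are assumptions made only for this example.

```python
# Rough sketch of a handset expander: frames whose level falls below a
# threshold receive an extra attenuation (assumed values except the 10 dB).
import numpy as np

THRESHOLD_DBFS = -40.0        # assumed trigger level
EXTRA_ATTENUATION_DB = 10.0   # "perhaps 10 dB" per the description above


def frame_level_dbfs(frame: np.ndarray) -> float:
    """RMS level of a frame of samples, in dB relative to full scale (1.0)."""
    rms = np.sqrt(np.mean(np.square(frame))) + 1e-12
    return 20.0 * np.log10(rms)


def expand(frame: np.ndarray) -> np.ndarray:
    """Attenuate frames whose level is below the threshold; pass others through."""
    if frame_level_dbfs(frame) < THRESHOLD_DBFS:
        return frame * 10.0 ** (-EXTRA_ATTENUATION_DB / 20.0)
    return frame


# Quiet background noise is attenuated; a louder speech-like frame passes through.
noise = np.random.randn(160) * 0.001
speech = np.random.randn(160) * 0.1
print(frame_level_dbfs(expand(noise)), frame_level_dbfs(expand(speech)))
```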


When the background noise was at a level below the attenuator's threshold, the expander actually worked well. Needless to say, the expander was useless when the background noise was above the threshold, but the condition that was especially troubling was when the background noise was close to the threshold level, thereby causing the attenuator to kick in and out. For the listening party, the effect often sounded like heavy breathing.


SUMMARY OF THE INVENTION

This problem of an attenuator activating and deactivating inappropriately does not seem to occur with today's handsets, possibly because the location of the microphone is better than in early generation handsets. Nonetheless, the problem can still be heard when someone at the far end is using a speakerphone, especially when the background noise level is close to the threshold of the voice switch. Furthermore, there still exists a problem of undesirable background noises being transmitted when the noise is loud, regardless of whether the sender is using a handset or a speakerphone.


In accordance with a first embodiment of this invention, a mechanism is provided that allows participants on the conference call to identify a participant(s) responsible for introducing the noise, regardless of whether the noise is caused by transmission impairments or by the participant(s) being in a noisy location. For example, individual users could press a “test” button that would block each of the participants one at a time. This would allow the source of the noise to be identified. The “test button” could be one or more of: located at the endpoint(s), enabled through a web interface or, for example, provided through a dedicated conference call interface at the endpoint(s) or at the conference bridge. The blocking of each participant could occur through interaction with the main PBX using, for example, in-band signaling to the PBX. Alternatively, or in addition, in-band or out-of-band signaling could be used in an IP telephony environment.


Being able to block each participant one at a time allows the source of the noise to be identified. This is especially true when the noise is due to transmission impairments, where, for example, participant number one would sound noise-free to participant number two, but sound very noisy to participant number three. By allowing selective one-at-a-time blocking, it becomes easier to identify the source(s) of noise.


In accordance with a second exemplary embodiment, a mechanism is provided which allows individual users to be queried about how to handle the presence of a noise-introducing conference participant. After identifying the offending participant(s), several options could be presented. Illustratively, one option that could be offered is selective far-end mute, whereby each participant could selectively mute any other conference participant. (For example, in the scenario described in the previous paragraph, participant three could mute the transmissions from participant one to participant three, without affecting the transmissions of participant one to participant two.) If more than one party is introducing noise, individual far-end mute/unmute keys or buttons can be assigned on the listening party's telephone. In an exemplary embodiment, when speech is detected on a muted line, a light can flash or another indicator can be utilized, such as a message conveyed as a whisper page. As a result of the queries to the various users about noise-introducing conference participants, this information could also be assembled into a report-based format.
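
As a non-limiting sketch of the selective far-end mute described above, the following illustrates a per-listener mute table at a mixer; the names ConferenceMixer and is_speech are hypothetical, and the energy check merely stands in for a real voice activity detector.

```python
# Hypothetical sketch of per-listener ("selective far-end") mute at a mixer.
import numpy as np


def is_speech(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Crude energy-based speech check (placeholder for a real VAD)."""
    return float(np.sqrt(np.mean(np.square(frame)))) > threshold


class ConferenceMixer:
    def __init__(self, participants):
        self.participants = list(participants)
        # muted[(listener, talker)] is True when listener has far-end muted talker.
        self.muted = {}

    def mute(self, listener, talker):
        self.muted[(listener, talker)] = True

    def unmute(self, listener, talker):
        self.muted.pop((listener, talker), None)

    def mix_for(self, listener, frames):
        """Mix the other participants' frames for one listener, honoring that
        listener's mute choices and flagging speech detected on muted lines."""
        out = np.zeros_like(next(iter(frames.values())))
        speaking_while_muted = []
        for talker, frame in frames.items():
            if talker == listener:
                continue
            if self.muted.get((listener, talker)):
                if is_speech(frame):
                    speaking_while_muted.append(talker)  # e.g. flash a light
                continue
            out += frame
        return out, speaking_while_muted


# Participant three mutes participant one without affecting participant two.
mixer = ConferenceMixer(["p1", "p2", "p3"])
mixer.mute("p3", "p1")
frames = {p: np.random.randn(160) * 0.02 for p in mixer.participants}
audio_for_p3, alerts = mixer.mix_for("p3", frames)
print(alerts)  # likely ["p1"], since p1 is muted but active
```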


Other corrective measures may also be implemented at the user(s) node or the other node of the “bad” line, or at an intermediate node such as a conference call mixer. For example, the background noise on the “bad” line can be identified and characterized, thereby allowing the use of suitable filters to improve the signal-to-noise ratio. Alternatively, or in addition, an automatic mute may be performed in which the line is unmuted automatically when speech is detected. After speech ends, the line may again be muted automatically. The remote mute feature can be implemented for each channel from each person's perspective, recognizing that noise for one conference call participant may not be present for another conference call participant.
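
The automatic mute described above could, for example, be realized with a simple voice-activity gate such as the following sketch; the hangover interval and energy threshold are assumptions added only for the illustration.

```python
# Illustrative automatic mute: the line is silent until speech is detected and
# is re-muted shortly after speech ends. A real system would use a proper VAD.
import numpy as np

HANGOVER_FRAMES = 15          # ~300 ms at 20 ms frames, keeps word endings intact
SPEECH_RMS_THRESHOLD = 0.01   # crude stand-in for a voice activity detector


class AutoMute:
    def __init__(self):
        self.hangover = 0

    def process(self, frame: np.ndarray) -> np.ndarray:
        rms = float(np.sqrt(np.mean(np.square(frame))))
        if rms > SPEECH_RMS_THRESHOLD:
            self.hangover = HANGOVER_FRAMES   # speech present: open the line
        elif self.hangover > 0:
            self.hangover -= 1                # recent speech: keep it open briefly
        return frame if self.hangover > 0 else np.zeros_like(frame)


auto_mute = AutoMute()
quiet = np.random.randn(160) * 0.001
loud = np.random.randn(160) * 0.1
print(np.any(auto_mute.process(quiet)), np.any(auto_mute.process(loud)))  # False, True
```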


In accordance with another exemplary embodiment, control over the transmitted signal is provided to address why handset expanders and speakerphone voice switches are prone to failure. Specifically, the threshold level at which the attenuator and/or voice switch is triggered is not adjustable by the listener and does not allow different adjustments for individual listeners. In accordance with this exemplary embodiment, each listening party is capable of adjusting the transmitting parties' expanders and/or voice switches. This is different from what is commonly referred to as “squelch” in that the listener exercises control over the transmitter, as opposed to the listener performing amplitude-based filtering of the received signal. This functionality could be provided in one or more of a PBX, endpoint, conference call mixer, communications server or the like.
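
As a purely illustrative sketch of listener control over a transmitting party's expander, the following shows a control message and a remotely adjustable expander; the message fields, class names and default values are assumptions, and no particular control protocol is implied.

```python
# Purely illustrative sketch of a listener adjusting a remote party's expander.
from dataclasses import dataclass


@dataclass
class ExpanderControl:
    target_party: str              # which transmitter to adjust
    threshold_dbfs: float          # trigger level requested by the listener
    attenuation_db: float          # attenuation applied below the threshold
    scope: str = "listener-only"   # or "global" to affect all listeners


class RemoteExpander:
    """Expander whose parameters may be set by a remote listener."""

    def __init__(self, threshold_dbfs: float = -40.0, attenuation_db: float = 10.0):
        self.threshold_dbfs = threshold_dbfs
        self.attenuation_db = attenuation_db

    def apply_control(self, msg: ExpanderControl) -> None:
        self.threshold_dbfs = msg.threshold_dbfs
        self.attenuation_db = msg.attenuation_db


# A listener who still hears background noise raises the transmitter's threshold.
expander = RemoteExpander()
expander.apply_control(ExpanderControl("User 2", threshold_dbfs=-35.0, attenuation_db=12.0))
print(expander.threshold_dbfs, expander.attenuation_db)
```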


In some of the embodiments discussed above, adjustments made by participant one to the signal they receive from participant number two can be global, i.e., heard by all other participants, or the adjustments can affect only that specific person-to-person transmission path.


Within the prior art, when noise is coming from a source that a conference participant cannot identify, operators have to manually test each line. One exemplary advantage of the present invention is that participants can check the lines even while a conference is in progress, and the conference can continue even if there is a bad line, without interruption by an operator trying to determine the problem lines. Another exemplary advantage associated with the above inventions is that the listener can exercise control over the transmitter, as opposed to performing amplitude-based filtering of the received signal.


Exemplary aspects of this invention thus relate to communications management. More specifically, exemplary aspects of the invention relate to noise reduction. Still further aspects of the invention relate to noise reduction in a conference call environment.


Additional exemplary aspects of the invention relate to providing individual conference call listeners the ability to identify which transmitting party(s) sounds noisy to them, coupled with the ability of the listeners to adjust the noisy transmitter(s) in a way that is heard by all listeners or heard by only the person who is making the adjustment.


Still further aspects of the invention relate to blocking one or more of the participants in a conference call.


Still further aspects of the invention relate to providing selective far-end mute capability which may be manually implemented and/or automatic.


Still further aspects of the invention relate to providing suitable filters to remove noise associated with a conference call participant.


Still further aspects of the invention relate to providing the ability for each listening party to adjust each transmitting party's expander (background noise filter) and/or voice switch.


The present invention can provide a number of advantages depending on the particular configuration. These and other advantages will be apparent from the disclosure of the invention(s) contained herein.


The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.


The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably.


The term “automatic” and variations thereof, as used herein, refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic even if performance of the process or operation uses human input, whether material or immaterial, received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material”.


The term “computer-readable medium” as used herein refers to any tangible storage and/or transmission medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. When the computer-readable medium is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like.


While circuit-switched or packet-switched types of communications can be used with the present invention, the concepts and techniques disclosed herein are also applicable to other protocols such as the Session Initiation Protocol (SIP), which is a simple signaling/application-layer protocol for network multimedia conferencing and telephony, audio and video conferencing, and the like. For example, video noise can be a significant problem in video telephony, causing noticeable degradations in picture quality.


Accordingly, the invention is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present invention are stored.


The terms “determine”, “calculate” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.


The term “module” as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element. Also, while the invention is described in terms of exemplary embodiments, it should be appreciated that individual aspects of the invention can be separately claimed.


The preceding is a simplified summary of the invention to provide an understanding of some aspects of the invention. This summary is neither an extensive nor exhaustive overview of the invention and its various embodiments. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention but to present selected concepts of the invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary conference call environment according to this invention;



FIG. 2 illustrates an exemplary endpoint according to this invention;



FIG. 3 illustrates an exemplary interface associated with an endpoint according to this invention; and



FIG. 4 is a flow chart illustrating an exemplary method for identifying and reducing noise from noisy conference call participants.





DETAILED DESCRIPTION OF THE INVENTION

The invention will be described below in relation to a conference call environment. Although well suited for use with circuit-switched or packet-switched networks, the invention is not limited to use with any particular type of communication system or configuration of system elements, and those skilled in the art will recognize that the disclosed techniques may be used in any application in which it is desirable to provide noise reduction in a conference call. For example, the systems and methods of this invention will also work well with SIP-based communication systems and endpoints. Moreover, the various endpoints described herein can be any communications device such as a telephone, speakerphone, cellular phone, SIP-enabled endpoint, softphone, PDA, wired or wireless communication device, or in general any communications device that is capable of sending and/or receiving voice communications.


The exemplary systems and methods of this invention will also be described in relation to software, modules and associated hardware and network(s). However, to avoid unnecessarily obscuring the present invention, the following description omits well-known structures, components and devices, which may be shown in block diagram form or are otherwise summarized.


For purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. It should be appreciated, however, that the present invention may be practiced in a variety of ways beyond the specific details set forth herein.



FIG. 1 illustrates an exemplary communications system according to this invention. The communication system 1 includes one or more endpoints (10, 20, 30, 40, 50) interconnected via one or more networks 2 and links 5. The network 2, in addition to traditional telecommunications architectural components, can also include one or more PBXs, communications servers, manned or unmanned conference call mixers, or the like. The links 5 can be wired or wireless links or any combination thereof that are capable of exchanging information between the various endpoints.


As illustrated in the communications system 1, each of the five users (call participants) can be presented with a display, such as a graphical user interface, that shows the status of the other users participating in a conference call. For example, the interface 10 for User 1 shows that User 2 is in the block or “test” state, User 3 is muted, User 4 has been tuned and User 5 has been filtered.


For User 1, the user has blocked User 2 to, for example, attempt to identify the source of noise in a conference. As previously discussed, a user can select, for example, a button on their endpoint that corresponds to each user they want to block, thereby testing whether or not the other conference call participant is a source of the noise. A user can systematically test each other conference call participant and then, as discussed, one or more of mute, tune or filter the participants associated with the source of the noise.
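
A minimal sketch of this one-at-a-time test procedure is given below; the callables block, unblock and still_noisy are hypothetical placeholders for the endpoint or bridge actions and for the listener's judgment (or an automated noise measurement).

```python
# Hypothetical sketch of the systematic one-at-a-time test described above.
def find_noise_sources(participants, block, unblock, still_noisy):
    """Block each participant in turn; those whose blocking removes the noise
    are reported as likely noise sources."""
    sources = []
    for participant in participants:
        block(participant)
        if not still_noisy():
            sources.append(participant)
        unblock(participant)
    return sources
```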


As illustrated in FIG. 1, and the various interfaces for different users discussed hereinafter, each user can be provided with this functionality, appreciating that a first user may experience noise with another conference call participant while a second user may not have the same noisy experience.


For example, User 3 at endpoint 30 is not having a problem with conference call participant 1 or 2, but has blocked Users 4 and 5 in an attempt to locate the source of noise on the call. User 4 at endpoint 40 has established an initial configuration for the conference call indicating that there is no problem with User 1, has implemented a filter for User 2, has placed User 3 on manual mute and User 5 on auto-mute.



FIG. 2 illustrates in greater detail an exemplary endpoint 10 for User 1. Endpoint 10 includes the status display 12 as well as modules that provide the various functionalities discussed above. More specifically, the block module 14 allows each other conference call participant to test or block this specific endpoint 10, as well as allows the user associated with endpoint 10 to selectively block one or more of the other conference call participants. When a conference participant is blocked, no audio information from that user's communication channel is audible. This blocking can be user centric or applied globally to the conference for all conference participants. Blocking can be accomplished by muting all information from the blocked channel(s).


In a similar manner, the tune module 16, mute module 18 and auto-mute module 19 allow the tuning, muting and auto-muting functionality, respectively, to be applied by the user associated with this specific endpoint to other conference call participants, and also provide functionality for other conference call participants to manipulate this specific endpoint and thus, for example, adjust the conference call signal they receive.


As illustrated in the status display 12 of endpoint 10 in FIG. 2, the status of various users can optionally be displayed, as well as an indication provided to the user associated with endpoint 10 of actions taken by other conference call participants against this particular endpoint. In this illustrative example, a notice is provided to User 1 that their endpoint has been muted by User 5. The status display 12 could also be expanded to include all or a portion of this type of information relative to one or more of the other conference call participants.


The tune/filter module 16 allows a user to adjust one or more of the transmitting party's expander and voice switch. In a similar manner to the block module 14, if a user selects to tune another conference call participant, the other conference call participant is identified and the user is provided with, for example, an interface that allows the expander or voice switch to be adjusted either automatically or manually, for example, with slider bars or the like. The settings for one or more other “tuned” conference call participants can be shown in the status display 12 and, in a similar manner, the user associated with endpoint 10 can be provided with the tune settings that are being used by other conference call participants on the endpoint 10. In this manner, information can be shared between conference call participants (or with a manned conference call bridge) to assist with noise reduction in a conference call environment.


For filtering, the tune/filter module allows a user to filter one or more other conference call participants either at the near-end or at the call mixer to reduce, for example, noise. In addition to adjustments that may be made to the expander mechanisms (such as the threshold level at which the expander kicks in and the degree of attenuation that is added to the transmitted signal when the user is not speaking), many other types of filtering may be used in conjunction with this invention. Examples include spectral filtering, amplitude normalization, adjustments to the “comfort noise” that is provided in response to packet loss, and the automatic removal of clicks, pops, and other types of transient non-speech events.
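
By way of illustration only, one of the filtering techniques mentioned above, spectral filtering, could be sketched as a simple spectral subtraction; estimating the noise spectrum from frames assumed to contain only noise is a simplification made to keep the example short.

```python
# Illustrative spectral-subtraction filter of the kind mentioned above.
import numpy as np


def spectral_subtract(frame: np.ndarray, noise_mag: np.ndarray) -> np.ndarray:
    """Subtract an estimated noise magnitude spectrum from one audio frame."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    cleaned = np.maximum(magnitude - noise_mag, 0.0)   # floor at zero
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))


# Estimate the noise spectrum while the far-end party is silent, then apply it.
noise_frames = [np.random.randn(160) * 0.01 for _ in range(10)]
noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)
noisy_speech = np.random.randn(160) * 0.05
print(spectral_subtract(noisy_speech, noise_mag).shape)  # (160,)
```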


The mute module 18 allows the user associated with endpoint 10 to selectively mute one or more other conference call participants manually. As discussed, an indicator can be provided when voice communications are detected at one or more of the other muted endpoints, and this indicator provided to the user associated with endpoint 10 via, for example, the status display 12 or other comparable audio or visual cue.


The auto-mute module 19 allows the user associated with endpoint 10 to selectively and automatically mute one or more other conference call participants. Similar to the other modules discussed above, the auto-mute module 19 also provides the functionality to mute endpoint 10 at the request of one or more other conference call participants. If a user is auto-muted, signals from that user are not transmitted to one or more of the other conference call participants unless voice is detected.



FIG. 3 illustrates an exemplary interface associated with an endpoint. The interface 7 includes one or more buttons 22-28 as well as a status display 12. In this particular exemplary embodiment, a block button 22, tune button 24, mute button 26 and automatic mute button 28 are provided that allow the implementation of the functionality discussed above in relation to the block, tune, mute and auto-mute modules, respectively.


In this particular exemplary embodiment, a user has selected the block button 22 (highlighted by the bold text), at which point the status display 12 is updated to reflect the status of the other users and provide the ability for the user to select one or more of the other conference call participants that are to be blocked. In accordance with this particular exemplary embodiment, User 3 has been muted, User 4 has been tuned, User 5 has been filtered, and no particular action has been taken against User 2. User 1 could then opt to block User 2 in an attempt to identify the source of a noisy conference call participant.


In a similar manner, the various other buttons can be selected with the status display 12 being updated to one or more of allow the user associated with the endpoint to select the other conference call participant(s) on which the function should be implemented and/or adjust the parameters associated with the selected function. For example, on selection of the tune button 24, the status display 12 can be updated to show which, if any, other users have been tuned and by whom, and optionally show the parameters associated with each of the tuned users.


The various buttons can be one or more of physical buttons associated with an endpoint and soft buttons, such as those found in a user interface.



FIG. 4 illustrates an exemplary method for reducing noise in a conference call environment. In particular, control begins in step S400 and continues to step S410. In step S410, one or more conference call participants are selected. Next, in step S420, one or more participants can be selected and blocked to assist with the determination of the source of a problem, such as noise. Then, in step S430, a determination is made whether the one or more blocked participants are the source of the problem. If the one or more blocked participants are the source of the problem, control continues to step S440. Otherwise, control jumps to step S450.


In step S440, one or more of near/far-end mute, filtering and/or tuning are selectively applied to one or more of the conference call participants to assist with mitigating the problem, such as noise. In addition to the application of each of these functions, functions such as filter and tune can have their parameters adjusted to assist with fine-tuning that functionality.


In step S450, a determination is made whether another participant should be selected. If another conference call participant should be selected, control jumps back to step S410. Otherwise, control continues to step S460 where the conference continues. Control then continues to step S470 where the control sequence ends.
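
The control flow of FIG. 4 can be summarized in code form as the following sketch; the callable arguments are hypothetical placeholders for the endpoint and bridge actions described above.

```python
# Compact rendering of the flow of FIG. 4 (steps S400-S470).
def reduce_conference_noise(participants, block_test, is_problem_source,
                            apply_mitigation, more_to_check):
    for participant in participants:          # S410: select participant(s)
        block_test(participant)               # S420: block to test
        if is_problem_source(participant):    # S430: source of the problem?
            apply_mitigation(participant)     # S440: mute, filter and/or tune
        if not more_to_check():               # S450: select another participant?
            break
    # S460: the conference continues; S470: end of the control sequence
```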


In accordance with an additional embodiment, one or more of the endpoints could be equipped with a processor and memory (not shown), the memory storing a profile. The profile can be used to store preference information for certain conference call participants, such as tuning and filtering preferences, that could be used for future conference calls. Additionally, one or more of the profile and memory could store instructions that are used for adjusting one or more of a far-end device and functionality at a conference call bridge.


As an example, at some point during a conference call between three parties (Pat, Sam and Chris), Pat is experiencing noise from Sam. Pat determines this by using the block functionality. This can be implemented by having Pat's endpoint forward an instruction to one or more of Sam's endpoint and the conference call bridge to mute all communications on Sam's communication channel. The instruction can include information indicating which of the bridge and endpoint is to implement the blocking functionality, as well as an indication of which party is to be blocked. For example, in a SIP environment, this information could be included in a header associated with the instruction.
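
As a hedged illustration of how such an instruction might be carried in a SIP environment, the following constructs an INFO-style request with a custom header; the X-Conference-Control header and its fields are hypothetical, not standard SIP, and a real deployment would define its own signaling.

```python
# Hedged illustration of carrying the blocking instruction in a SIP message.
def build_block_request(target_party: str, enforce_at: str) -> str:
    """Build a SIP INFO-style request asking that a party be blocked.

    enforce_at is "bridge" or "endpoint", i.e. where the blocking is applied.
    """
    return "\r\n".join([
        "INFO sip:bridge@conference.example.com SIP/2.0",
        "From: <sip:pat@example.com>",
        "To: <sip:bridge@conference.example.com>",
        f"X-Conference-Control: action=block; target={target_party}; enforce-at={enforce_at}",
        "Content-Length: 0",
        "",
        "",
    ])


print(build_block_request("sip:sam@example.com", "bridge"))
```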


Having determined that one or more of the communication channel associated with Sam, Sam's endpoint or the environment that Sam is in is the source of the noise, Pat can use one or more of the tune, filter, mute and auto-mute functionality described herein. In a similar manner, each of these functions can have an associated instruction that can control the requested function at one or more of another endpoint, a bridge and a plurality of endpoints. These instructions can be provided in an in-band or out-of-band signal. The out-of-band signaling could be through the bridge, with the bridge acting as a proxy, or directly to one or more of the other endpoints. Additionally, VoiceXML can be used to implement this functionality.


In accordance with yet another exemplary embodiment, the system uses one or more of:


(a) Telecommunication network signaling protocols, to include traditional analog mechanisms, non-IP digital signaling, wireless protocols such as GSM, and VoIP methods such as H.323 and SIP, and the like;


(b) Audio encoding and transmission techniques, including but not limited to Mu-Law and A-Law Pulse Code Modulation, MPEG techniques, Linear Predictive Coding, Code-Excited Linear Prediction, the audio encoding standards recognized by the Global System for Mobile Communications Association (including, but not limited to, GSM, GPRS, EDGE, and 3GSM), and the audio standards recognized by the International Telecommunication Union (including, but not limited to, G.711, G.722, G.723, G.726, G.728, and G.729), and the like; and


(c) Video encoding and transmission techniques, including but not limited to the MPEG, AVI, WMA, ITU H.263, and ITU H.264 formats, and the like.


A number of variations and modifications of the invention can be used. It would be possible to provide for or claim some features of the invention without providing or claiming others.


The exemplary systems and methods of this invention have been described in relation to conference call noise reduction. However, to avoid unnecessarily obscuring the present invention, the description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claimed invention. Specific details are set forth to provide an understanding of the present invention. It should however be appreciated that the present invention may be practiced in a variety of ways beyond the specific detail set forth herein.


Furthermore, while the exemplary embodiments illustrated herein show various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN, cable network, and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a messaging system, or collocated on a particular node of a distributed network, such as an analog and/or digital communications network, a packet-switched network, a circuit-switched network or a cable network.


It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system. For example, the various components can be located in a switch such as a PBX and media server, gateway, a cable provider, enterprise system, in one or more communications devices, at one or more users' premises, or some combination thereof. Similarly, one or more functional portions of the system could be distributed between a communications device(s), such as a PDA, and an associated computing device.


Furthermore, it should be appreciated that the various links, such as link 5, connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.


Also, while the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the invention.


In yet another embodiment, the systems and methods of this invention can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA or PAL, a special purpose computer, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this invention.


Exemplary hardware that can be used for the present invention includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.


In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this invention is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.


In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this invention can be implemented as a program embedded on a personal computer such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.


Although the present invention describes components and functions implemented in the embodiments with reference to particular standards and protocols, the invention is not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present invention. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present invention.


The present invention, in various embodiments, configurations, and aspects, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, subcombinations, and subsets thereof. Those of skill in the art will understand how to make and use the present invention after understanding the present disclosure. The present invention, in various embodiments, configurations, and aspects, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments, configurations, or aspects hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease and/or reducing cost of implementation.


The foregoing discussion of the invention has been presented for purposes of illustration and description. The foregoing is not intended to limit the invention to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the invention are grouped together in one or more embodiments, configurations, or aspects for the purpose of streamlining the disclosure. The features of the embodiments, configurations, or aspects of the invention may be combined in alternate embodiments, configurations, or aspects other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment, configuration, or aspect. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the invention.


Moreover, though the description of the invention has included description of one or more embodiments, configurations, or aspects and certain variations and modifications, other variations, combinations, and modifications are within the scope of the invention, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments, configurations, or aspects to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims
  • 1. A conference call noise identification and reduction system comprising: a block module adapted to block audio from one or more conference call participants, the blocking occurring at one or more of a near-end and a conference bridge to allow a conference call participant to identify a source of noise; and one or more of a tune module, filter module and mute module selectively operable at the conference bridge for each conference call participant identified by the blocking to reduce the source of noise associated with the conference call participants identified by the blocking, wherein when a first conference call participant selectively operates one or more of the one or more of the tune module, filter module and mute module, the selective operation only affects the audio to the first conference call participant, with the audio to other conference call participants remaining unchanged, and, if the first conference call participant sounds acceptable to a second conference call participant, but does not sound acceptable to a third conference call participant, then the third conference call participant can adjust the first conference call participant to the third conference call participant transmission parameters without affecting the first conference call participant to the second conference call participant transmissions.
  • 2. The system of claim 1, further comprising a status display that displays noise reduction information associated with one or more of the conference call participants.
  • 3. The system of claim 1, further comprising instructions used to control one or more of one or more endpoints and the conference bridge.
  • 4. The system of claim 3, wherein the instructions are sent via one or more of in-band signaling and out-of-band signaling.
  • 5. The system of claim 3, further comprising parameters associated with the instructions.
  • 6. The system of claim 1, wherein one or more of the blocking, tuning, filtering and muting occur during a conference call.
  • 7. The system of claim 1, wherein the system uses one or more of telecommunication network signaling protocols, audio encoding and transmission techniques and video encoding and transmission techniques.
  • 8. The system of claim 1, further comprising one or more profiles that store information about the one or more conference call participants.
  • 9. A conference call noise identification and reduction method comprising: blocking audio from one or more conference call participants, the blocking occurring at one or more of a near-end and a conference bridge to allow a conference call participant to identify a source of noise; one or more of selectively tuning, filtering and muting at the conference bridge each conference call participant identified by the blocking to reduce the source of noise associated with the conference call participants identified by the blocking, wherein when a first conference call participant selectively operates one or more of the one or more of the tune module, filter module and mute module, the selective operation only affects the audio to the first conference call participant, with the audio to other conference call participants remaining unchanged, and, if the first conference call participant sounds acceptable to a second conference call participant, but does not sound acceptable to a third conference call participant, then the third conference call participant can adjust the first conference call participant to the third conference call participant transmission parameters without affecting the first conference call participant to the second conference call participant transmissions.
  • 10. The method of claim 9, further comprising displaying noise reduction information associated with one or more of the conference call participants.
  • 11. The method of claim 9, further comprising controlling one or more of one or more endpoints and the conference bridge.
  • 12. The method of claim 11, wherein instructions are sent via one or more of in-band signaling and out-of-band signaling.
  • 13. The method of claim 11, further comprising associating parameters with the instructions.
  • 14. The method of claim 9, wherein one or more of the blocking, tuning, filtering and muting occur during a conference call.
  • 15. The method of claim 9, wherein the system uses one or more of telecommunication network signaling protocols, audio encoding and transmission techniques and video encoding and transmission techniques.
  • 16. The method of claim 9, further comprising storing information in one or more profiles about the one or more conference call participants.
  • 17. A non-transitory computer-readable information storage medium having stored thereon instructions, that when executed by a processor, perform the steps of claim 9.
  • 18. The method of claim 10 wherein the selective operation occurs at a mixer.