The present invention relates to telephony, and in particular to an audio conferencing platform.
Audio conferencing platforms are well known. For example, see U.S. Pat. Nos. 5,483,588 and 5,495,522. Audio conferencing platforms allow conference participants to easily schedule and conduct audio conferences with a large number of users. In addition, audio conference platforms are generally capable of simultaneously supporting many conferences.
A problem with audio conference platforms has been their distributed-task system architectures. For example, the system disclosed in U.S. Pat. No. 5,495,522 employs a distributed conference-summing architecture, wherein each digital signal processor (DSP) generates a separate output signal (i.e., separately summed conference audio) for each of the phone channels that the DSP supports. This is an inefficient system architecture, since the same summing task is executed simultaneously by a number of DSP resources.
Therefore, there is a need for a system that centralizes the audio conference summing task and provides a scalable system architecture.
Briefly, according to the present invention, an audio conferencing platform includes a data bus (e.g., a time division multiplexed (TDM) bus), a controller, and an interface circuit that receives audio signals from a plurality of conference participants and provides digitized audio signals in assigned time slots over the data bus. The audio conferencing platform also includes a plurality of digital signal processors (DSPs) adapted to communicate with the interface circuit over the TDM bus. At least one of the DSPs sums a plurality of the digitized audio signals associated with conference participants who are speaking to provide a summed conference signal. This DSP provides the summed conference signal to at least one other of the plurality of DSPs, which removes from it the digitized audio signal associated with each speaker whose voice is included in the summed conference signal, thus providing a customized conference audio signal to each of the speakers.
In a preferred embodiment, the audio conferencing platform configures at least one of the DSPs as a centralized audio mixer and at least another one of the DSPs as an audio processor. Significantly, the centralized audio mixer performs the step of summing a plurality of the digitized audio signals associated with conference participants who are speaking, to provide the summed conference signal. The centralized audio mixer provides the summed conference signal to the audio processor(s) for post processing and routing to the conference participants. The post processing includes removing the audio associated with a speaker from the conference signal to be sent to the speaker. For example, if there are forty conference participants and three of the participants are speaking, then the summed conference signal will include the audio from the three speakers. The summed conference signal is made available on the data bus to the thirty-seven non-speaking conference participants. However, the three speakers each receive an audio signal that is equal to the summed conference signal less the digitized audio signal associated with the speaker. Removing the speaker's voice from the audio he hears reduces echoes.
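For illustration only, the following is a minimal C sketch of this centralized summing and per-speaker removal. The port count, the one-sample-per-port array layout, and the 16-bit linear sample format are assumptions of the example rather than details of the platform.

```c
#include <stdint.h>

#define NUM_PORTS 40  /* hypothetical conference size, matching the example above */

/*
 * Illustrative sketch: sum the audio of every port whose speech bit is set,
 * then give each speaker the sum minus its own contribution; non-speaking
 * ports receive the full summed conference signal.
 */
void mix_conference(const int16_t audio[NUM_PORTS],
                    const uint8_t speech_bit[NUM_PORTS],
                    int32_t out[NUM_PORTS])
{
    int32_t sum = 0;

    for (int p = 0; p < NUM_PORTS; p++) {
        if (speech_bit[p])
            sum += audio[p];                 /* only speaking ports are summed */
    }

    for (int p = 0; p < NUM_PORTS; p++) {
        /* a speaker hears the conference less his or her own voice */
        out[p] = speech_bit[p] ? sum - (int32_t)audio[p] : sum;
    }
}
```

In the forty-participant example, only the three speaking ports contribute to `sum`; the thirty-seven non-speaking ports receive `sum` unchanged, while each speaker receives `sum` minus its own sample.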
The centralized audio mixer also receives DTMF detect bits indicative of the digitized audio signals that include a DTMF tone. The DTMF detect bits may be provided by another of the DSPs that is programmed to detect DTMF tones. If a digitized audio signal is associated with a speaker but includes a DTMF tone, the centralized conference mixer will not include that digitized audio signal in the summed conference signal while the associated DTMF detect bit is active. This ensures that conference participants do not hear annoying DTMF tones in the conference audio. When the DTMF tone is no longer present in the digitized audio signal, the centralized conference mixer may again include the audio signal in the summed conference signal.
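A minimal sketch of the gating rule implied here, with the per-port speech and DTMF detect bits represented as simple byte flags (an assumed representation):

```c
#include <stdint.h>

/*
 * Illustrative only: a port contributes to the summed conference signal
 * only while its speech bit is set AND its DTMF detect bit is clear.
 */
static inline int contributes_to_sum(uint8_t speech_bit, uint8_t dtmf_detect_bit)
{
    return speech_bit && !dtmf_detect_bit;
}
```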
The audio conference platform is capable of supporting a number of simultaneous conferences (e.g., 384). Accordingly, the audio conference mixer provides a separate summed conference signal for each of the conferences.
Each of the digitized audio signals may be preprocessed. The preprocessing steps include decompressing the signal (e.g., μ-Law or A-Law compression), and determining if the magnitude of the decompressed audio signal is greater than a detection threshold. If it is, then a speech bit associated with the digitized audio signal is set. Otherwise, the speech bit is cleared.
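As an illustration, the following C sketch follows this preprocessing path for one port, using the standard G.711 μ-law expansion and averaging the magnitude over a frame of samples (the average magnitude, or AVM, described later for the audio processor). The linear threshold constant is a hypothetical stand-in for the detection threshold, which the platform may express differently (e.g., in dBm).

```c
#include <stdint.h>
#include <stdlib.h>

/* Standard G.711 mu-law expansion of one 8-bit sample to 16-bit linear PCM. */
static int16_t ulaw_expand(uint8_t u)
{
    u = (uint8_t)~u;                /* mu-law bytes are transmitted complemented */
    int sign     = u & 0x80;
    int exponent = (u >> 4) & 0x07;
    int mantissa = u & 0x0F;
    int sample   = (((mantissa << 3) + 0x84) << exponent) - 0x84;
    return (int16_t)(sign ? -sample : sample);
}

#define SPEECH_THRESHOLD 500  /* hypothetical linear stand-in for the detection threshold */

/* Set the speech bit when the average magnitude of the decompressed frame
 * exceeds the detection threshold; otherwise the bit is cleared. */
static uint8_t speech_bit_for_frame(const uint8_t *ulaw_frame, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += abs(ulaw_expand(ulaw_frame[i]));
    return (uint8_t)((acc / (int32_t)n) > SPEECH_THRESHOLD);
}
```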
Advantageously, the centralized conference mixer avoids distributing the same repetitive summing task across the plurality of DSPs. In addition, centralized conference mixing provides a system architecture that is scalable and thus easily expanded.
These and other objects, features and advantages of the present invention will become apparent in light of the following detailed description of preferred embodiments thereof, as illustrated in the accompanying drawings.
Each user site 21–23 preferably includes a telephone 28 and a computer/server 30. However, a user site may include only the telephone or only the computer/server. The computer/server 30 may be connected via an Internet/intranet backbone 32 to a server 34. The audio conferencing platform 26 and the server 34 are connected via a data link 36 (e.g., a 10/100 BaseT Ethernet link). The computer 30 allows the user to participate in a data conference simultaneously with the audio conference via the server 34. In addition, the user can use the computer 30 to interface (e.g., via a browser) with the server 34 to perform functions such as conference control, administration (e.g., system configuration, billing, reports, . . . ), scheduling and account maintenance. The telephone 28 and the computer 30 may cooperate to provide voice over the Internet/intranet 32 to the audio conferencing platform 26 via the data link 36.
The audio conferencing platform 26 also includes a plurality of processor boards 44–46 that receive data from and transmit data to the NICs 38–40 over the TDM bus 42. The NICs and the processor boards 44–46 also communicate with a controller/CPU board 48 over a system bus 50. The system bus 50 is preferably based upon the CompactPCI standard. The CPU/controller communicates with the server 34 (
Each DSP 60–65 also transmits data over and receives data from the TDM bus 42. The processor card 44 includes a TDM bus interface 78 that performs any necessary signal conditioning and transformation. For example, if the TDM bus is an H.110 bus, it includes thirty-two serial lines; as a result, the TDM bus interface may include a serial-to-parallel and a parallel-to-serial interface. An example of a serial-to-parallel and a parallel-to-serial interface is disclosed in commonly assigned United States Provisional Patent Application Ser. No. 60/105,369, filed Oct. 23, 1998 and entitled “Serial-to-Parallel/Parallel-to-Serial Conversion Engine”. This application is hereby incorporated by reference.
Each DSP 60–65 also includes an associated TDM dual port RAM 80–85 that buffers data for transmission between the TDM bus 42 and the associated DSP.
Each of the DSPs is preferably a general-purpose digital signal processor IC, such as the model TMS320C6201 processor available from Texas Instruments. The number of DSPs resident on the processor board 44 is a function of the size of the integrated circuits, their power consumption and the heat-dissipation ability of the processor board. For example, there may be between four and ten DSPs per processor board.
Executable software applications may be downloaded from the controller/CPU 48 (
Each audio processor 92, 94 is capable of supporting a certain number of user ports (i.e., conference participants). This number is based upon the operational speed of the various components within the processor board and the overall design of the system. Each audio processor 92, 94 receives compressed audio data 102 from the conference participants over the TDM bus 42.
The TDM bus 42 may support 4096 time slots, each having a bandwidth of 64 kbps. The time slots are generally dynamically assigned by the controller/CPU 48 (
For each of the active/assigned ports for the audio processor, step 502 reads the audio data for that port from the TDM dual port RAM associated with the audio processor. For example, if DSP2 61 (
Since each of the audio signals is compressed (e.g., μ-Law, A-Law, etc.), step 504 decompresses each of the 8-bit signals to a 16-bit word. Step 506 computes the average magnitude (AVM) for each of the decompressed signals associated with the ports assigned to the audio processor.
Step 508 is performed next to determine which of the ports are speaking. This step compares the average magnitude for the port, computed in step 506, against a predetermined magnitude value representative of speech (e.g., −35 dBm). If the average magnitude for the port exceeds the predetermined magnitude value representative of speech, a speech bit associated with the port is set. Otherwise, the associated speech bit is cleared. Each port has an associated speech bit. Step 510 outputs all of the speech bits (eight per time slot) onto the TDM bus. Step 512 is performed to calculate an automatic gain correction (AGC) factor for each port. To compute an AGC value for the port, the AVM value is converted to an index value associated with a table containing gain/attenuation factors. For example, there may be 256 index values, each uniquely associated with one of 256 gain/attenuation factors. The index value is used by the conference mixer 90 (
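As an illustration, the sketch below shows one way steps 510 and 512 could look in C: eight per-port speech bits packed into a single 8-bit time-slot value, and a port's AVM quantized into an index into a 256-entry gain/attenuation table. The quantization rule, the table contents, and the fixed-point format are assumptions; the text states only that the index and the table exist.

```c
#include <stdint.h>

/* Pack the speech bits of eight consecutive ports into one byte, so they can
 * be output in a single 8-bit TDM time-slot sample (step 510). */
static uint8_t pack_speech_bits(const uint8_t speech_bit[8])
{
    uint8_t packed = 0;
    for (int i = 0; i < 8; i++)
        packed |= (uint8_t)((speech_bit[i] & 1u) << i);
    return packed;
}

/* Hypothetical 256-entry gain/attenuation table indexed by a quantized AVM
 * (e.g., factors stored in Q8 fixed point). */
static const int16_t agc_table[256] = { 0 };

/* Convert a port's average magnitude into a table index (step 512). The
 * mapping below is a simple stand-in; the actual mapping is not specified. */
static int16_t agc_factor_for_avm(uint16_t avm)
{
    uint8_t index = (uint8_t)(avm >> 8);   /* quantize a 16-bit AVM to 8 bits */
    return agc_table[index];
}
```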
For each of an assigned number of the active/assigned ports of the conferencing system, step 602 reads the audio data for the port from the TDM dual port RAM associated with the DSP(s) configured to perform the DTMF tone detection function. Step 604 then expands the 8-bit signal to a 16-bit word. Next, step 606 tests each of the decompressed audio signals to determine whether it includes a DTMF tone. For any signal that does include a DTMF tone, step 606 sets a DTMF detect bit associated with the port. Otherwise, the DTMF detect bit is cleared. Each port has an associated DTMF detect bit. Step 608 informs the controller/CPU 48 (
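The text does not specify how the DTMF tones are detected. Purely as an illustration, the sketch below uses the Goertzel algorithm, a common choice for DTMF detection on DSPs; the 8 kHz sampling rate, the frame-based processing, and the energy-ratio acceptance test are assumptions of the example, not details from the text.

```c
#include <math.h>
#include <stdint.h>
#include <stddef.h>

/* Goertzel power of one frequency in a block of 16-bit samples. */
static double goertzel_power(const int16_t *x, size_t n, double hz, double fs)
{
    double k = 2.0 * cos(2.0 * 3.141592653589793 * hz / fs);
    double s1 = 0.0, s2 = 0.0;
    for (size_t i = 0; i < n; i++) {
        double s0 = (double)x[i] + k * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - k * s1 * s2;
}

/* Returns 1 if the frame appears to contain a DTMF tone: one row frequency
 * and one column frequency each carry a dominant share of the frame energy.
 * The 25% threshold is a tuning assumption. */
static int detect_dtmf_tone(const int16_t *x, size_t n)
{
    static const double row_hz[4] = { 697.0, 770.0, 852.0, 941.0 };
    static const double col_hz[4] = { 1209.0, 1336.0, 1477.0, 1633.0 };
    const double fs = 8000.0;            /* standard telephony sampling rate */

    double frame_energy = 1e-9;          /* avoid division by zero on silence */
    for (size_t i = 0; i < n; i++)
        frame_energy += (double)x[i] * (double)x[i];

    double best_row = 0.0, best_col = 0.0;
    for (int i = 0; i < 4; i++) {
        double pr = goertzel_power(x, n, row_hz[i], fs);
        double pc = goertzel_power(x, n, col_hz[i], fs);
        if (pr > best_row) best_row = pr;
        if (pc > best_col) best_col = pc;
    }
    /* The Goertzel output is divided by n/2 to put it on the same scale as
     * the frame energy before applying the hypothetical acceptance test. */
    double scale = (double)n / 2.0;
    return (best_row / scale) > 0.25 * frame_energy &&
           (best_col / scale) > 0.25 * frame_energy;
}
```

In terms of step 606, a port's DTMF detect bit would be set whenever a detector such as this returns nonzero for the port's expanded audio, and cleared otherwise.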
Referring to
Referring to
For each active/assigned port, step 802 retrieves the summed conference signal for the conference to which the port is assigned. Step 804 reads the conference bit associated with the port, and step 806 tests the bit to determine if audio from the port was used to create the summed conference signal. If it was, then step 808 removes the gain-compensated (e.g., AGC and gain/TLP) audio signal associated with the port from the summed audio signal. This step removes the speaker's own voice from the conference audio. If step 806 determines that audio from the port was not used to create the summed conference signal, then step 808 is bypassed. To prepare the signal to be output, step 810 applies a gain, and step 812 compresses the gain-corrected signal. Step 814 then outputs the compressed signal onto the TDM bus for routing to the conference participant associated with the port, via the NIC (
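For illustration only, a C sketch of steps 806 through 812 for one output sample of one port follows. The 32-bit intermediate arithmetic, the Q8 fixed-point gain format, and the saturation step are assumptions of the example; the μ-law compression shown is the standard G.711 algorithm.

```c
#include <stdint.h>

/* Standard G.711 mu-law compression of a 16-bit linear sample to 8 bits. */
static uint8_t ulaw_compress(int16_t pcm)
{
    const int BIAS = 0x84;
    int sign = (pcm < 0) ? 0x80 : 0x00;
    int mag  = (pcm < 0) ? -(int)pcm : (int)pcm;
    if (mag > 0x7FFF - BIAS)
        mag = 0x7FFF - BIAS;              /* clamp to the maximum code */
    mag += BIAS;
    int exponent = 7;
    for (int mask = 0x4000; (mag & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;
    int mantissa = (mag >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}

/* Saturate a 32-bit intermediate value to the 16-bit output range. */
static int16_t saturate16(int32_t v)
{
    if (v > 32767)  return 32767;
    if (v < -32768) return -32768;
    return (int16_t)v;
}

/* Build one port's output sample: if the port's audio was part of the summed
 * conference signal (conference bit set), remove its gain-compensated audio
 * (step 808); then apply the output gain (step 810) and compress (step 812). */
static uint8_t port_output_sample(int32_t summed_conference,
                                  int32_t own_gain_compensated_audio,
                                  uint8_t conference_bit,
                                  int32_t out_gain_q8)  /* assumed Q8 gain format */
{
    int32_t s = summed_conference;
    if (conference_bit)
        s -= own_gain_compensated_audio;   /* remove the speaker's own voice */
    s = (int32_t)(((int64_t)s * out_gain_q8) >> 8);  /* apply output gain */
    return ulaw_compress(saturate16(s));
}
```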
Notably, the audio conferencing platform 26 (
One of ordinary skill will appreciate that, as processor speeds continue to increase, the overall system design is a function of the processing ability of each DSP. For example, if a sufficiently fast DSP were available, then the audio conference mixer, the audio processor, the DTMF tone detection and the other DSP functions could all be performed by a single DSP.
Although the present invention has been shown and described with respect to several preferred embodiments thereof, various changes, omissions and additions to the form and detail thereof, may be made therein, without departing from the spirit and scope of the invention.
This application is a continuation of U.S. Non-Provisional patent application Ser. No. 09/532,602 filed Mar. 22, 2000, now U.S. Pat. No. 6,625,271, entitled “Scalable Audio Conference Platform” which non-provisional application claims the benefit of the following applications: 1) U.S. Provisional Application Ser. No. 60/148,975 filed Aug. 13, 1999, entitled “Scalable Audio Conference Platform with a Centralized Audio Mixer” and 2) U.S. Provisional Application Ser. No. 60/125,440 filed Mar. 22, 1999, entitled “Audio Conference Platform System and Method for Broadcasting a Real-Time Audio Conference Over the Internet”.
Number | Name | Date | Kind
---|---|---|---
3622708 | Guenther et al. | Nov 1971 | A
3692947 | Lewis | Sep 1972 | A
4109111 | Cook | Aug 1978 | A
4416007 | Huizinga et al. | Nov 1983 | A
4485469 | Witmore | Nov 1984 | A
4541087 | Comstock | Sep 1985 | A
4797876 | Ratcliff | Jan 1989 | A
4998243 | Kao | Mar 1991 | A
5029162 | Epps | Jul 1991 | A
5210794 | Brunjard | May 1993 | A
5483588 | Eaton et al. | Jan 1996 | A
5495522 | Allen et al. | Feb 1996 | A
5671287 | Gerzon | Sep 1997 | A
5793415 | Gregory, III et al. | Aug 1998 | A
5841763 | Leondires et al. | Nov 1998 | A
6049565 | Paradine et al. | Apr 2000 | A
6282278 | Doganata et al. | Aug 2001 | B1
6324265 | Christie, IV et al. | Nov 2001 | B1
6343313 | Salesky et al. | Jan 2002 | B1
6418214 | Smythe et al. | Jul 2002 | B1

Number | Date | Country
---|---|---
WO 9418779 | Aug 1994 | WO

Number | Date | Country
---|---|---
20040042602 A1 | Mar 2004 | US

Number | Date | Country
---|---|---
60148975 | Aug 1999 | US
60125440 | Mar 1999 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 09532602 | Mar 2000 | US
Child | 10613431 | | US