Parallel signal processing system and method

Information

  • Patent Grant
  • Patent Number: 10,869,108
  • Date Filed: Friday, December 27, 2019
  • Date Issued: Tuesday, December 15, 2020
  • Examiners: Nguyen, Duc; Mohammed, Assad
  • Agents: Hoffberg & Associates; Hoffberg, Steven M.
Abstract
A system and method for processing a plurality of channels, for example audio channels, in parallel is provided. For example, a plurality of telephony channels are processed in order to detect and respond to call progress tones. The channels may be processed according to a common transform algorithm. Advantageously, a massively parallel architecture is employed, in which operations on many channels are synchronized, to achieve a high efficiency parallel processing environment. The parallel processor may be situated on a data bus, separate from a main general-purpose processor, or integrated with the processor in a common board or integrated device. All, or a portion, of a speech processing algorithm may also be performed in a massively parallel manner.
Description
BACKGROUND
1. Field of the Invention

The invention relates to the field of real time digital signal processing, particularly in a context of a general-purpose computer executing a non-real-time operating system.


2. Background of the Invention

While modern general-purpose central processing units (CPUs) typically have sufficient processing capability to perform signal processing tasks to some degree, the operating systems commonly used with them, such as Windows XP, Windows Vista, Linux and other Unix derivatives, and the Macintosh operating systems, have difficulty supporting substantive real-time processing of complex signals representing large amounts of data, except perhaps for particular data types for which the processor has special-purpose instructions or hardware execution units. The various software processes handled by such processors compete for processing capability, making it difficult for a programmer or system designer to predict the real-time performance envelope of such a system with any degree of accuracy; the effective real-time performance is therefore well below the theoretical processing envelope in any real-world system not specifically dedicated to real-time functionality. As real-time processing demands increase, and processing latency becomes more critical, general-purpose computers with desktop or server operating systems are thus deemed less suitable for tasks that impose real-time requirements.


Typically, one of two strategies is implemented to improve the real-time performance of a system: providing a coprocessor which handles only the required real-time tasks, or using a so-called real-time operating system (RTOS) with restrictions on the other software which may execute in the same environment.


Existing telephone systems, such as the CallTrol Object Telephone Server (OTS™), tend to require and rely upon special purpose hardware to handle real-time signal processing tasks for large numbers of concurrent voice channels. More information about this system can be found at www.calltrol.com/CalltrolSDKWhitepaper6-02.pdf, which is expressly incorporated herein by reference in its entirety.


3. Call Progress Tone Analysis

In many traditional systems, a single dedicated analog and/or digital circuit is provided for each public switched telephone network (PSTN) line. See, e.g., Consumer Microcircuits Limited CMX673 datasheet; Clare M-985-01 datasheet. In other types of systems, a digital signal processor (coprocessor) is provided to handle signal processing tasks for multiple channels in parallel. Two particular tasks which require significant signal processing capability are call progress tone analysis and echo cancellation. See, e.g., Manish Marwah and Sharmistha Das, “UNICA—A Unified Classification Algorithm For Call Progress Tones” (Avaya Labs, University of Colorado); en.wikipedia.org/wiki/Echo_cancellation; and www.voip-info.org/wiki/view/Asterisk+echo+cancellation, each of which is expressly incorporated herein by reference.


Call progress tone signals provide information regarding the status or progress of a call to customers, operators, and connected equipment. In circuit-associated signaling, these audible tones are transmitted over the voice path within the frequency limits of the voice band. The four most common call progress tones are: Dial tone; Busy tone; Audible ringback; and Reorder tone. In addition to these, there are a number of other defined tones, including for example the 12 DTMF codes on a normal telephone keypad. There may be, for example, about 53 different tones supported by a system. A call progress tone detector may additionally respond to cues indicating Cessation of ringback; Presence/cessation of voice; Special Information Tones (SITs); and Pager cue tones. Collectively, call progress tones and these other audible signals are referred to as call progress events. Call progress tone generation/detection in the network is generally based on a Precise Tone Plan. In the plan, four distinctive tones are used singly or in combination to produce unique progress tone signals. These tones are 350 Hz, 440 Hz, 480 Hz, and 620 Hz. Each call progress tone is defined by the frequencies used and a specific on/off temporal pattern.


The ITU-T E.180 and E.182 recommendations define the technical characteristics and intended usage of some of these tones: busy tone or busy signal; call waiting tone; comfort tone; conference call tone; confirmation tone; congestion tone; dial tone; end of three-party service tone (three-way calling); executive override tone; holding tone; howler tone; intercept tone; intrusion tone; line lock-out tone; negative indication tone; notify tone; number unobtainable tone; pay tone; payphone recognition tone; permanent signal tone; preemption tone; queue tone; recall dial tone; record tone; ringback tone or ringing tone; ringtone or ringing signal; second dial tone; special dial tone; special information tone (SIT); waiting tone; warning tone; Acceptance tone; Audible ring tone; Busy override warning tone; Busy verification tone; Engaged tone; Facilities tone; Fast busy tone; Function acknowledge tone; Identification tone; Intercept tone; Permanent signal tone; Positive indication tone; Re-order tone; Refusal tone; Ringback tone; Route tone; Service activated tone; Special ringing tone; Stutter dial tone; Switching tone; Test number tone; Test tone; and Trunk offering tone. In addition, signals sent to the PSTN include Answer tone; Calling tone; Guard tone; Pulse (loop disconnect) dialing; Tone (DTMF) dialing, and other signals from the PSTN include Billing (metering) signal; DC conditions; and Ringing signal. The tones, cadence, and tone definitions, may differ between different countries, carriers, types of equipment, etc. See, e.g., Annex to ITU Operational Bulletin No. 781-1.11.2003. Various Tones Used In National Networks (According To ITU-T Recommendation E.180) (03/1998).


Characteristics for the call progress events are shown in Table 1.


TABLE 1
Call Progress Event Characteristics

Event Name                       | Frequencies (Hz) | Temporal Pattern                               | Event Reported After
---------------------------------|------------------|------------------------------------------------|------------------------------------------------------
Dial Tone                        | 350 + 440        | Steady tone                                    | Approximately 0.75 seconds
Busy Tone                        | 480 + 620        | 0.5 seconds on/0.5 seconds off                 | 2 cycles of precise, 3 cycles of nonprecise
Audible Ringback (detection)     | 440 + 480        | 2 seconds on/4 seconds off                     | 2 cycles of precise or nonprecise
Audible Ringback (cessation)     |                  |                                                | 3 to 6.5 seconds after ringback detected
Reorder                          | 480 + 620        | 0.25 seconds on/0.25 seconds off               | 2 cycles of precise, 3 cycles of nonprecise
Voice (detection)                | 200 to 3400      |                                                | Approximately 0.25 to 0.50 seconds
Voice (cessation)                |                  |                                                | Approximately 0.5 to 1.0 seconds after voice detected
Special Information Tones (SITs) | See Table 2.     | See Table 2.                                   | Approximately 0.25 to 0.75 seconds
Pager Cue Tones                  | 1400             | 3 to 4 tones at 0.1- to 0.125-second intervals | 2 cycles of precise or any pattern of 1400-Hz signals

Dial tone indicates that the CO is ready to accept digits from the subscriber. In the precise tone plan, dial tone consists of 350 Hz plus 440 Hz. The system reports the presence of precise dial tone after approximately 0.75 seconds of steady tone. Nonprecise dial tone is reported after the system detects a burst of raw energy lasting for approximately 3 seconds.


Busy tone indicates that the called line has been reached but it is engaged in another call. In the precise tone plan, busy tone consists of 480 Hz plus 620 Hz interrupted at 60 ipm (interruptions per minute) with a 0.5 seconds on/0.5 seconds off temporal pattern. The system reports the presence of precise busy tone after approximately two cycles of this pattern. Nonprecise busy tone is reported after three cycles.


Audible ringback (ring tone) is returned to the calling party to indicate that the called line has been reached and power ringing has started. In the precise tone plan, audible ringback consists of 440 Hz plus 480 Hz with a 2 seconds on/4 seconds off temporal pattern. The system reports the presence of precise audible ringback after two cycles of this pattern.


Outdated equipment in some areas may produce nonprecise, or “dirty,” ringback. Nonprecise ringback is reported after two cycles of a 1 to 2.5 seconds on, 2.5 to 4.5 seconds off pattern of raw energy. The system may report dirty ringback as voice detection, unless voice detection is specifically ignored during this period. The system reports ringback cessation after 3 to 6.5 seconds of silence once ringback has been detected (depending on the point in the ringback cycle at which the CPA starts listening).


Reorder (Fast Busy) tone indicates that the local switching paths to the calling office or equipment serving the customer are busy or that a toll circuit is not available. In the precise tone plan, reorder consists of 480 Hz plus 620 Hz interrupted at 120 ipm (interruptions per minute) with a 0.25 seconds on/0.25 seconds off temporal pattern. The system reports the presence of precise reorder tone after two cycles of this pattern. Nonprecise reorder tone is reported after three cycles.
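
The four precise-tone-plan signals described above are compact enough to tabulate directly. The following C-style sketch (valid as CUDA C; the type and field names are illustrative, not taken from the patent) shows how a detector might capture these definitions:

/* Precise tone plan, as described above; on/off of 0 denotes a steady tone. */
typedef struct {
    const char *name;
    float f1_hz, f2_hz;    /* the two component frequencies */
    float on_s, off_s;     /* cadence, in seconds */
    int   report_cycles;   /* cycles of precise tone before the event is reported */
} ToneDef;

static const ToneDef precise_tones[] = {
    { "dial",     350.0f, 440.0f, 0.0f,  0.0f,  0 }, /* steady; reported after ~0.75 s */
    { "busy",     480.0f, 620.0f, 0.5f,  0.5f,  2 }, /* 60 ipm; 3 cycles if nonprecise */
    { "ringback", 440.0f, 480.0f, 2.0f,  4.0f,  2 },
    { "reorder",  480.0f, 620.0f, 0.25f, 0.25f, 2 }, /* 120 ipm; 3 cycles if nonprecise */
};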


Voice detection has multiple uses: it can be used to detect voice as an answer condition, and also to detect machine-generated announcements that may indicate an error condition. Voice presence can be detected after approximately 0.25 to 0.5 seconds of continuous human speech falling within the 200-Hz to 3400-Hz voiceband (although the PSTN only guarantees voice performance between 300 Hz and 800 Hz). A voice cessation condition may be determined, for example, after approximately 0.5 to 1.0 seconds of silence once the presence of voice has been detected.


Special Information Tones (SITs) indicate network conditions encountered in both the Local Exchange Carrier (LEC) and Inter-Exchange Carrier (IXC) networks. The tones alert the caller that a machine-generated announcement follows (this announcement describes the network condition). Each SIT consists of a precise three-tone sequence: the first tone is either 913.8 Hz or 985.2 Hz, the second tone is either 1370.6 Hz or 1428.5 Hz, and the third is always 1776.7 Hz. The duration of the first and second tones can be either 274 ms or 380 ms, while the duration of the third remains a constant 380 ms. The names, descriptions and characteristics of the four most common SITs are summarized in Table 2.













TABLE 2
Special Information Tones (SITs)

Name | Description                           | First Tone (Hz, ms) | Second Tone (Hz, ms) | Third Tone (Hz, ms)
-----|---------------------------------------|---------------------|----------------------|--------------------
NC¹  | No circuit found                      | 985.2, 380          | 1428.5, 380          | 1776.7, 380
IC   | Operator intercept                    | 913.8, 274          | 1370.6, 274          | 1776.7, 380
VC   | Vacant circuit (nonregistered number) | 985.2, 380          | 1370.6, 274          | 1776.7, 380
RO¹  | Reorder (system busy)                 | 913.8, 274          | 1428.5, 380          | 1776.7, 380

¹Tone frequencies shown indicate conditions that are the responsibility of the BOC intra-LATA carrier. Conditions occurring on inter-LATA carriers generate SITs with different first and second tone frequencies.







Pager cue tones are used by pager terminal equipment to signal callers or connected equipment to enter the callback number (this number is then transmitted to the paged party). Most pager terminal equipment manufacturers use a 3- or 4-tone burst of 1400 Hz at 100- to 125-ms intervals. The system identifies three cycles of 1400 Hz at these approximate intervals as pager cue tones. To accommodate varying terminal equipment signals, tone bursts of 1400 Hz in a variety of patterns may also be reported as pager cue tones. Voice prompts sometimes accompany pager cue tones to provide instructions. Therefore, combinations of prompts and tones may be detected by configuring an answer supervision template to respond to both voice detection and pager cue tone detection.


A Goertzel filter algorithm may be used to detect the solid tones that begin fax or data-modem calls. If any of the following tones are detected, a “modem” (fax or data) state is indicated: 2100 Hz, 2225 Hz, 1800 Hz, 2250 Hz, 1300 Hz, 1400 Hz, 980 Hz, 1200 Hz, 600 Hz, or 3000 Hz. Fax detection relies on the 1.5 seconds of HDLC flags that precede the answering fax terminal's DIS frame; DIS is used by the answering terminal to declare its capabilities. After a solid tone is detected, a V.21 receiver is used to detect the HDLC flags (01111110) in the preamble of the DIS signal on the downstream side. If the required number of flags is detected, fax is reported; otherwise, upon expiration of a timer, the call may be determined to be a data modem communication. See, e.g., U.S. Pat. No. 7,003,093, the entirety of which is expressly incorporated herein by reference. See also, U.S. Pat. No. 7,043,006, expressly incorporated herein by reference.
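
Because the Goertzel recurrence is a fixed, data-independent loop, it also maps naturally onto the one-thread-per-channel style of parallelism described later in this disclosure. The kernel below is a minimal CUDA sketch (the names are hypothetical; this is not code from the patent) of single-tone energy detection across many channels:

/* One thread per channel: run the Goertzel recurrence over that channel's
 * samples and emit the squared magnitude at the target frequency.
 * coeff = 2*cos(2*pi*f_target/f_sample); 'samples' holds nchan contiguous
 * channels of nsamp samples each. */
__global__ void goertzel_energy(const float *samples, float *energy,
                                int nchan, int nsamp, float coeff)
{
    int ch = blockIdx.x * blockDim.x + threadIdx.x;
    if (ch >= nchan) return;
    const float *x = samples + (size_t)ch * nsamp;
    float s1 = 0.0f, s2 = 0.0f;
    for (int n = 0; n < nsamp; ++n) {
        float s0 = x[n] + coeff * s1 - s2;   /* second-order recurrence */
        s2 = s1;
        s1 = s0;
    }
    /* Squared magnitude at the target frequency. */
    energy[ch] = s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

A 2100 Hz detector at an 8 kHz sampling rate, for example, would pass coeff = 2·cos(2π·2100/8000) and compare energy[ch] against a threshold to flag the “modem” state for each channel.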


Therefore, a well-developed system exists for in-band signaling over audio channels, with a modest degree of complexity and some variability between standards, which themselves may change over time.


4. Graphics Processing Units

One known digital signal processor architecture, exemplified by the nVidia Tesla™ C870 GPU device, provides a massively multi-threaded architecture, providing over 500 gigaflops peak floating-point performance, and is typically interfaced with a general-purpose computer through a PCIe x16 slot on a motherboard. This device encompasses a 128-processor computing core, and is typically provided as a coprocessor on a high-speed bus for a standard personal computer platform. Similarly, the AMD/ATI Firestream 9170 also reports 500 gigaflops performance from a GPU-type device, with double-precision floating point capability. Likewise, newly described devices (e.g., AMD Fusion) integrate a CPU and GPU on a single die with shared external interfaces.


The nVidia Tesla™ GPU is supported by the Compute Unified Device Architecture (CUDA) software development environment, which provides C language support. Typical applications proposed for the nVidia Tesla™ GPU, supported by CUDA, are parallel bitonic sort; matrix multiplication; matrix transpose; performance profiling using timers; parallel prefix sum (scan) of large arrays; image convolution; 1D DWT using the Haar wavelet; OpenGL and Direct3D graphics interoperation examples; basic linear algebra subroutines; fast Fourier transform; binomial option pricing; Black-Scholes option pricing; Monte-Carlo option pricing; parallel Mersenne Twister (random number generation); parallel histogram; image denoising; and a Sobel edge detection filter. Therefore, the typical proposed applications are computer software profiling, matrix applications, image processing applications, financial applications, seismic simulations, computational biology, pattern recognition, signal processing, and physical simulation. CUDA technology offers the ability for threads to cooperate when solving a problem. nVidia Tesla™ GPUs featuring CUDA technology have an on-chip parallel data cache that can store information directly on the GPU, allowing computing threads to share information instantly rather than wait for data from much slower, off-chip DRAMs. Likewise, the compiler aspects of CUDA are able to partition code between the GPU and a host processor, for example to effect data transfers and to execute on the host processor algorithms and code which are incompatible with, or unsuitable for, efficient execution on the GPU itself.


GPU architectures are generally well-suited to address problems that can be expressed as data-parallel computations: the same program is executed on many data elements in parallel, with high arithmetic intensity, the ratio of arithmetic operations to memory operations. Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control; and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches. Thus, the GPU architecture typically provides a larger number of arithmetic logic units than independently and concurrently operable instruction decoders. Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets such as arrays can use a data-parallel programming model to speed up the computations. In 3D rendering large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.


While GPU devices speed up data processing for appropriately selected and defined tasks, typically they are controlled through a general-purpose operating system, and the offload of processed data from the GPU card back to the main processor is not treated as a real-time process. Thus, in a video environment, tasks are sent from the host processor to the GPU, and the system is treated as a real-time processing resource only where the usable output is fed directly from the GPU subsystem, e.g., through the video digital-to-analog converter (DAC).


The Tesla™ GPU device is implemented as a set of multiprocessors (e.g., 8 on the C870 device), each of which has a Single Instruction, Multiple Data architecture (SIMD): At any given clock cycle, each processor (16 per multiprocessor on the C870) of the multiprocessor executes the same instruction, but operates on different data. Each multiprocessor has on-chip memory of the four following types: One set of local 32-bit registers per processor, a parallel data cache or shared memory that is shared by all the processors and implements the shared memory space, a read-only constant cache that is shared by all the processors and speeds up reads from the constant memory space, which is implemented as a read-only region of device memory, and a read-only texture cache that is shared by all the processors and speeds up reads from the texture memory space, which is implemented as a read-only region of device memory. The local and global memory spaces are implemented as read-write regions of device memory and are not cached. Each multiprocessor accesses the texture cache via a texture unit. A grid of thread blocks is executed on the device by executing one or more blocks on each multiprocessor using time slicing: Each block is split into SIMD groups of threads called warps; each of these warps contains the same number of threads, called the warp size, and is executed by the multiprocessor in a SIMD fashion; a thread scheduler periodically switches from one warp to another to maximize the use of the multiprocessor's computational resources. A half-warp is either the first or second half of a warp. The way a block is split into warps is always the same; each warp contains threads of consecutive, increasing thread IDs with the first warp containing thread 0. A block is processed by only one multiprocessor, so that the shared memory space resides in the on-chip shared memory leading to very fast memory accesses. The multiprocessor's registers are allocated among the threads of the block. If the number of registers used per thread multiplied by the number of threads in the block is greater than the total number of registers per multiprocessor, the block cannot be executed and the corresponding kernel will fail to launch. Several blocks can be processed by the same multiprocessor concurrently by allocating the multiprocessor's registers and shared memory among the blocks. The issue order of the warps within a block is undefined, but their execution can be synchronized, to coordinate global or shared memory accesses. The issue order of the blocks within a grid of thread blocks is undefined and there is no synchronization mechanism between blocks, so threads from two different blocks of the same grid cannot safely communicate with each other through global memory during the execution of the grid.
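
A short kernel makes the block/warp model just described concrete: threads within a block share the on-chip memory and can synchronize with one another, while blocks execute independently and cannot safely communicate within a grid. This is a generic CUDA illustration with hypothetical names, not code from the patent:

/* Each block cooperatively sums its own segment of the input using on-chip
 * shared memory; blocks never communicate with one another.
 * Assumes blockDim.x is a power of two. */
__global__ void block_sum(const float *in, float *partial, int n)
{
    extern __shared__ float buf[];            /* per-block shared memory */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                          /* synchronize the block's warps */
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            buf[threadIdx.x] += buf[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        partial[blockIdx.x] = buf[0];         /* one result per independent block */
}

/* Launch: a grid of blocks, e.g., 128 threads (4 warps of 32) per block:
 *   block_sum<<<numBlocks, 128, 128 * sizeof(float)>>>(d_in, d_partial, n);  */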


A new trend seeks to integrate at least one GPU core and at least one CPU core in a single module, such as a single MCM or integrated circuit. This integration permits higher-speed intercommunication, lower power consumption, and sharing of higher-level resources, such as cache memory, external bus and memory driver circuitry, and other system elements. Such integration, which encompasses heterogeneous processing core aggregation, also permits parallel processing, speculative execution, and, effectively, races between different architectures and processing schemes.


5. Telephony Processing Platforms

Telephony control and switching applications have for many years employed general purpose computer operating systems, and indeed the UNIX system was originally developed by Bell Laboratories/AT&T. There are a number of available telephone switch platforms, especially private branch exchange implementations, which use an industry standard PC Server platform, typically with specialized telephony support hardware. These include, for example, Asterisk (from Digium) PBX platform, PBXtra (Fonality), Callweaver, Sangoma, etc. See also, e.g., www.voip-info.org/wiki/. Typically, these support voice over Internet protocol (VOIP) communications, in addition to switched circuit technologies.


As discussed above, typical automated telephone signaling provides in-band signaling which therefore employs acoustic signals. A switching system must respond to these signals, or it is deemed deficient. Typically, an analog or digital call progress tone detector is provided for each channel of a switched circuit system. For VOIP systems, this functionality may be provided in a gateway (media gateway), either as in traditional switched circuit systems, or as a software process within a digital signal processor.


Because of the computational complexity of the call progress tone analysis task, the density of digital signal processing systems for simultaneously handling a large number of voice communications has been limited. For example, 8-channel call progress tone detection may be supported in a single Texas Instruments TMS320C5510™ digital signal processor (DSP). See, IP PBX Chip from Adaptive Digital Technologies, Inc. (www.adaptivedigital.com/product/solution/ip_pbx.htm). The tone detection algorithms consume, for example, over 1 MIPS per channel for a full suite of detection functions, depending on algorithm, processor architecture, etc. Scaling to hundreds of channels per system is cumbersome, and typically requires special-purpose, dedicated, and often costly hardware which occupies a very limited number of expansion bus slots of a PBX system.


Echo cancellation is typically handled near the client (i.e., microphone and speaker); however, in conferencing systems a server-side echo canceller is usually required to obtain good sound quality. Echo cancellation is often discussed with reference to speech signal communication between a “near end” and a “far end”. A person speaking at the “far end” of a telephone connection has speech sent over the network to a person listening (and eventually speaking) at the “near end;” a portion of the speech signal received at the near end is retransmitted to the far end, with a delay, resulting in an audible echo.


A typical network echo canceller employs an adaptive digital transversal filter to model the impulse response of the unknown echo channel so that the echo signal can be cancelled. The echo impulse response coefficients used in the transversal filter are updated to track the characteristics of the unknown echo channel. Various algorithms are known, and some are explicitly suited for parallel processing environments. See, e.g., US 20070168408; US 20020064139; U.S. Pat. Nos. 7,155,018, 6,963,649, 6,430,287; PCT/US1998/005854; Gan, W. S., “Parallel Implementation of the Frequency Bin Adaptive Filter for Acoustical Echo Cancellation,” Proceedings of the 1997 International Conference on Information, Communications and Signal Processing (IEEE ICICS), Volume 2, 9-12 Sep. 1997, pages 754-757; David Qi, “Acoustic Echo Cancellation Algorithms and Implementation on the TMS320C8x”, Digital Signal Processing Solutions, Texas Instruments, SPRA063, May 1996, each of which is expressly incorporated herein in its entirety by reference thereto. It is noted that in a conferencing environment, client-side echoes and line echoes may each be relevant, and a system must be adapted to deal with each. Therefore, it may be desired to handle echoes in excess of 250 ms, for example 500 ms.
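
As one concrete form of the adaptive transversal filter described above, the sketch below applies a normalized LMS (NLMS) update, one thread per channel. NLMS is only one of the cited family of algorithms, and all names, layouts, and parameters here are illustrative assumptions:

/* One thread per channel: estimate the echo as an FIR (transversal) filter
 * of the far-end signal, subtract it from the near-end sample, and update
 * the coefficients with a normalized LMS step. 'ntaps' might be ~4200 for
 * a 500 ms tail at an 8.4 kHz sampling rate, as discussed elsewhere. */
__global__ void nlms_echo_cancel(const float *far,     /* [nchan][ntaps] far-end history, newest first */
                                 const float *nearend, /* [nchan] current near-end sample */
                                 float *h,             /* [nchan][ntaps] filter coefficients */
                                 float *out,           /* [nchan] echo-cancelled output */
                                 int nchan, int ntaps, float mu)
{
    int ch = blockIdx.x * blockDim.x + threadIdx.x;
    if (ch >= nchan) return;
    const float *x = far + (size_t)ch * ntaps;
    float *w = h + (size_t)ch * ntaps;
    float y = 0.0f, norm = 1e-6f;             /* small constant avoids divide-by-zero */
    for (int k = 0; k < ntaps; ++k) {
        y    += w[k] * x[k];                  /* echo estimate */
        norm += x[k] * x[k];
    }
    float e = nearend[ch] - y;                /* residual after echo removal */
    float g = mu * e / norm;
    for (int k = 0; k < ntaps; ++k)
        w[k] += g * x[k];                     /* NLMS coefficient update */
    out[ch] = e;
}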


SUMMARY OF THE INVENTION

The present system and method reduce the cost and improve the efficiency of real-time digital signal processing in a general-purpose computing environment. In particular, one suitable use for the system is performing telephony signal processing functions, in which, for example, a general-purpose computer supports a telephone switching system requiring real-time analysis of multiple voice channels in order to make switching decisions.


In one aspect of the invention, a massively parallel digital signal processor is employed to perform telephony in-band signaling detection and analysis and/or echo cancellation as a coprocessor in a telephony system. In another aspect, a massively parallel coprocessor card is added to a telephony server application which is executed on a standard processor to increase call progress tone detection and/or echo cancellation performance. Advantageously, the massively parallel processor may be adapted to execute standard software, such as C language, and therefore may perform both massively parallel tasks, and possibly serial execution tasks as well. Thus, a telephony system may be implemented on a single processor system, or within a distributed and/or processor/coprocessor architecture.


In a preferred embodiment exemplifying an aspect of the invention, which performs call progress tone analysis, data blocks, each including a time slice from a single audio channel, are fed to the massively parallel processor, which performs operations in parallel on a plurality of time slices, generally executing the same instruction on each of them. In this subsystem, real-time performance may be effectively achieved, with a predetermined maximum processing latency. Further, in a telephone switching environment, the call progress tone analysis task is a limiting factor in achieving acceptable performance, and therefore the telephone switch, including the parallel processor, achieves acceptable performance for the entire telephone switching task. In this case, “real-time” means that the system appropriately processes calls (e.g., inbound and outbound) and in-band call progress tones according to specification.


In some cases, it is not necessary to detect tones on each audio channel continuously, and therefore the system may sample each channel sequentially. In addition, if a Fast Fourier Transform-type (FFT) algorithm is employed, the real (I) and imaginary (Q) channels may each be presented with data from different sources, leading to a doubling of capacity, or even represent qualitatively different high-level processing tasks (which conform to the same FFT criteria). Thus, for example, using an nVidia Tesla™ C870 GPU, with 128 processors, each processor can handle 8 (real only) or 16 (real and imaginary) audio channels, leading to a density of 1024 or 2048 channel call progress tone detection. Practically, the normal operation of the system is below theoretical capacity, to provide “headroom” for other processing tasks and the like, and therefore up to about 800 voice channels may be processed, using a general purpose commercially available coprocessor card for a PC architecture.
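
The doubling noted above is the standard trick of packing two real sequences into one complex FFT: with z = x1 + j·x2, the individual spectra are recovered from the symmetry of the result as X1[k] = (Z[k] + conj(Z[N−k]))/2 and X2[k] = (Z[k] − conj(Z[N−k]))/2j. A minimal CUDA sketch of the unpacking step (hypothetical names) follows:

#include <cuComplex.h>

/* Given Z = FFT(x1 + j*x2) of length N, recover the two real channels' bins:
 *   X1[k] = (Z[k] + conj(Z[N-k])) / 2
 *   X2[k] = (Z[k] - conj(Z[N-k])) / (2j)
 * One thread per bin. */
__global__ void unpack_two_real(const cuFloatComplex *Z, cuFloatComplex *X1,
                                cuFloatComplex *X2, int N)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= N) return;
    cuFloatComplex a = Z[k];
    cuFloatComplex b = cuConjf(Z[(N - k) % N]);   /* index N-0 wraps to 0 */
    X1[k] = make_cuFloatComplex(0.5f * (cuCrealf(a) + cuCrealf(b)),
                                0.5f * (cuCimagf(a) + cuCimagf(b)));
    /* (a - b) / (2j) has real part imag(a-b)/2 and imaginary part -real(a-b)/2 */
    X2[k] = make_cuFloatComplex(0.5f * (cuCimagf(a) - cuCimagf(b)),
                               -0.5f * (cuCrealf(a) - cuCrealf(b)));
}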


For echo cancellation, with a 500 ms capacity and an 8.4 kHz sampling rate, about 4200 samples per channel are processed. The processing may, in some cases, be consolidated with the CPT analysis, though a complete separation of these functions is possible. For example, some PC motherboards can host 2 or more PCIe x16 cards, and therefore CPT can be implemented on one card, and echo cancellation (EC) on another. On the other hand, some of the processing, for example an FFT transform, is common to CPT and EC; therefore, the processing may also be combined. Likewise, two (or more) graphics processor boards may be linked through a so-called SLI interface, so that the power of two (or more) GPU devices may be employed in a single coordinated task.


The call progress tone detection coprocessor may, for example, be provided within a telephony server system, implementing a so-called private branch exchange (PBX) or the like.


For example, a PC architecture server may execute Asterisk PBX software under the Linux operating system. A software call is provided from the Asterisk PBX software to a dynamic linked library (DLL), which transfers data from a buffer in main memory containing time slices for the analog channels to be processed. For example, 2 ms of data for each of 800 channels, at an 8.4 kHz sampling rate, is provided in the buffer (132 kB). The buffer contents are transferred to the coprocessor through a PCIe x16 interface, along with a call to perform an FFT for each channel, with appropriate windowing, and/or using continuity from prior samples. The FFT may then be filtered on the coprocessor, with the results presented to the host processor, or the raw FFT data transferred to the host for filtering. Using a time-to-frequency domain transform, the signal energy at a specified frequency is converted to an amplitude peak in a specific frequency bin, which is readily extracted. Temporal analysis may also be performed in either the coprocessor or the processor, though preferably this is performed in the processor. The analysis and data transform may also be used for speech recognition primitives, and for other processes.
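
On the host side, the flow just described maps naturally onto a batched FFT library call. The routine below is a sketch of what such a DLL entry point might look like, assuming the cuFFT library; the channel count follows the example above, while the slice length, buffer names, and omitted error checking are illustrative assumptions:

#include <cufft.h>
#include <cuda_runtime.h>

#define NCHAN 800   /* voice channels per buffer, as in the example above */
#define NSAMP 256   /* samples per channel per time slice (assumed) */

/* Copy one time slice for all channels to the device, run a batched
 * real-to-complex FFT (one transform per channel), and copy the spectra
 * back so the host can look for energy peaks in the tone bins. */
void process_slice(const float *host_slice, cufftComplex *host_spectra)
{
    float *d_in;
    cufftComplex *d_out;
    cufftHandle plan;

    cudaMalloc((void **)&d_in, sizeof(float) * NCHAN * NSAMP);
    cudaMalloc((void **)&d_out, sizeof(cufftComplex) * NCHAN * (NSAMP / 2 + 1));
    cudaMemcpy(d_in, host_slice, sizeof(float) * NCHAN * NSAMP,
               cudaMemcpyHostToDevice);

    cufftPlan1d(&plan, NSAMP, CUFFT_R2C, NCHAN);   /* batch of NCHAN transforms */
    cufftExecR2C(plan, d_in, d_out);

    cudaMemcpy(host_spectra, d_out,
               sizeof(cufftComplex) * NCHAN * (NSAMP / 2 + 1),
               cudaMemcpyDeviceToHost);

    cufftDestroy(plan);
    cudaFree(d_in);
    cudaFree(d_out);
}

In practice the plan and device buffers would be allocated once and reused for every slice, and the tone-bin scan could equally run on the device; a peak at bin k corresponds to a frequency of roughly k·Fs/NSAMP.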


A particular advantage of this architecture arises from the suitability of the call progress tone analysis to be performed in parallel, since the algorithm is deterministic and has few or no branch points. Thus, the task is defined to efficiently exploit the processing power and parallelism of a massively parallel processor.


The use of the system and architecture is not limited to telephony. For example, the architecture may be used for music signal processing, such as equalization, mixing, companding, and the like. Various sensor array data, such as from sensors which detect fatigue and cracking in infrastructure, may be processed as well. In this latter application, a problem may arise in that the sensors are sensitive to dynamic and transient events, such as a truck rolling across a bridge, and it is that excitation which provides the signal for analysis. In that case, unless the system processes only a small portion of the available data, it is difficult to archive the unprocessed data, which may come from hundreds of sensors (e.g., 500 sensors), each having a frequency response of 1 kHz or more (and thus a sampling rate of 2 kHz or more) with a dynamic range of, for example, 16 bits. In this example, the data throughput is 500×2000×2 = 2 MB per second, or 7.2 GB per hour, making remote processing unwieldy. After processing, for example to produce a model of the structure, the daily data may be reduced to less than 1 MB. That is, the goal of the sensor array is to determine whether the structure is failing, and the raw data merely represents the excitation of the structure, which is used to extract model parameters describing the structure. Changes in the model can be interpreted as changes in the structure, which may be innocent, such as snow cover, or insidious, such as stress fracture. Of course, other types of sensors, sensor arrays, or signal sources may also produce massive amounts of data to be processed and reduced, which necessarily requires real-time throughput as available from the present invention. The architecture therefore advantageously provides a signal processor which handles raw signal processing, the results of which may be passed, for example, to a general-purpose processor which can perform high-level analysis (as required) and general computational tasks, such as communications, mass storage control, human interface functionality, and the like.


Another use of the technology is real time control of complex systems, preferably, but not necessarily those with an array of similar components to be controlled. Thus, for example, a plurality of mechanical or electronic elements may be controlled, and each may be represented with a common model (possibly with different model parameters for each). Likewise, a set of actuators together controlling an interactive system may be controlled. Further, systems with similarly configured control loops, but not necessarily with interactive response, may be implemented. Banks of digital filters, for example, finite impulse response or infinite impulse response, or those with arbitrary topology, may be implemented. In each case, it is preferred that processors within any bank of multiprocessors mostly execute the same operation on data in parallel, though in some cases, this is not a hard limit of operation, and the broad parallelism may be sacrificed to process data separately. In some cases, there may be interactivity between the processing by a plurality of processors within a multiprocessor, or between data processed by different multiprocessors.
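
A bank of FIR filters illustrates this pattern directly: every thread executes the identical instruction sequence, while per-channel coefficient sets carry the differing model parameters. A minimal CUDA sketch (hypothetical names) follows:

/* Each thread convolves its own channel's samples with that channel's FIR
 * coefficients; all threads execute the same instructions on different data. */
__global__ void fir_bank(const float *x,     /* [nchan][nsamp] input, oldest first */
                         const float *coef,  /* [nchan][ntaps] per-channel taps */
                         float *y,           /* [nchan][nsamp] filtered output */
                         int nchan, int nsamp, int ntaps)
{
    int ch = blockIdx.x * blockDim.x + threadIdx.x;
    if (ch >= nchan) return;
    const float *xi = x + (size_t)ch * nsamp;
    const float *h  = coef + (size_t)ch * ntaps;
    float *yo = y + (size_t)ch * nsamp;
    for (int n = 0; n < nsamp; ++n) {
        float acc = 0.0f;
        for (int k = 0; k < ntaps && k <= n; ++k)
            acc += h[k] * xi[n - k];          /* y[n] = sum_k h[k] * x[n-k] */
        yo[n] = acc;
    }
}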


In another embodiment, a real system is implemented which specifically exploits the architecture of the real-time parallel processor. Thus, for example, if there are 128 processors arranged in 8 sets of 16, then a rectangular actuator and/or sensor array of these dimensions is implemented, with each processor handling a single actuator and/or sensor of the 8×16 array. Likewise, if there are distortions or symmetries which make the anticipated processing for some sets of actuators and/or sensors more alike than others, these can be grouped together under a single multiprocessor. According to this same strategy, in some cases, control over an actuator and/or sensor may be dynamically assigned to different processors based on the similarity of the processing task. Further, the system may be implemented such that actuators and/or sensors are dynamically grouped based on the acceptability of identical algorithm execution (with possible exceptions), with or without post-correction of results. This may, in some cases, lead to dithering, that is, an oscillation about a desired response, which may be tolerated, or later filtered.


The system and method may be used for processing supporting spatial arrays, for example antenna arrays. One preferred embodiment provides a dynamically adaptive synthetic aperture antenna in which each element of an array has, for example, a dynamically controllable gain and delay. If the elements of such an array have a large near-field pattern, a predetermined set of control parameters would be suboptimal, since the antenna will respond to dielectric elements within its near field. Therefore, in accordance with the present invention, the sets of parameters may be adaptively controlled to account for distortion and the like. Further, in some cases, transmit and receive antennas may be interactive, and thus require appropriate processing. In other cases, the performance of the antenna may be sensitive to the data transmitted or other aspects of the waveform, and the processing array can be used to shape the transmitted signal to “predistort” the output for each element (or groups of elements), or to adapt the elements based on the transmitted or received signal characteristics.


In general, the processing architecture advantageously performs transforms on parallel data sets, which can then be filtered or simply processed as may be appropriate to yield a desired output. In some cases, the signals are transformed at least twice, for example by a transform and an inverse transform. In some cases, the transforms are Fourier and inverse Fourier transforms, though many other types of transformation are possible. A key feature of typical transforms is that the processing instructions and their sequence are not data-dependent, permitting a multiprocessor architecture to efficiently process many data streams in parallel. However, even in cases where there is a data dependency, such an architecture may provide advantages.


In cases where a set of heterogeneous cores are integrated, which access a common memory pool, a first type of processor may be employed to transform data in a data-dependent fashion, and a second processor may be employed to process the transformed data in a data-dependent fashion. For example, the data-dependent processor may be employed to make individual decisions regarding signal states, while the data-independent processor may be employed for filtering and orthogonalization of data representations.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a system for implementing the invention.



FIG. 2 is a flowchart of operations within a host processor.



FIG. 3 is a schematic diagram showing operations with respect to a massively parallel coprocessor.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

One embodiment of the present invention provides a system and method for analyzing call progress tones and performing other types of audio band processing on a plurality of voice channels, for example in a telephone system. Examples of call progress tone analysis can be found at:


www.commetrex.com/products/algorithms/CPA.html;


www.dialogic.com/network/csp/appnots/10117_CPA_SR6_HMP2.pdf;


whitepapers.zdnet.co.uk/0,1000000651,260123088p,00.htm; and


www.pikatechnologies.com/downloads/samples/readme/6.2%20-%20Call%20Progress%20Analysis%20-%20ReadMe.txt.


In a modest size system for analyzing call progress tones, there may be hundreds of voice channels to be handled simultaneously. Indeed, the availability of a general-purpose call progress tone processing system permits systems to define non-standard or additional signaling capabilities, thus reducing the need for out-of-band signaling. Voice processing systems generally require real-time performance; that is, connections must be maintained and packets or streams forwarded within narrow time windows, and call progress tones processed within tight specifications.


An emerging class of telephone communication processing system implements a private branch exchange (PBX) switch, which employs a standard personal computer (PC) as a system processor, and software which executes on a general-purpose operating system (OS). For example, the Asterisk system runs on the Linux OS. More information about Asterisk may be found at Digium/Asterisk, 445 Jan Davis Drive NW, Huntsville, Ala. 35806, 256.428.6000, asterisk.org/downloads. Another such system is “Yate” (Yet Another Telephony Engine), available from Bd. Nicolae Titulescu 10, Bl. 20, Sc. C, Ap. 128, Sector 1, Bucharest, Romania, yate.null.ro/pmwiki/index.php?n=Main.Download.


In such systems, scalability to desired levels, for example hundreds of simultaneous voice channels, requires that the host processor have sufficient headroom to perform all required tasks within the time allotted. Alternately stated, the tasks performed by the host processor should be limited to those it is capable of completing without contention or undue delay. Because digitized audio signal processing is resource intensive, PC-based systems have typically either not implemented functionality which requires per-channel signal processing, or have offloaded the processing to specialized digital signal processing (DSP) boards. Such DSP boards are themselves limited, for example to 8-16 processed voice channels per DSP core, with 4-32 cores per board, although higher-density boards are available. These boards are relatively expensive, as compared to the general-purpose PC, and occupy a limited number of bus expansion slots.


The present invention provides an alternative to the use of specialized DSP processors dedicated to voice channel processing. According to one embodiment, a massively parallel processor of the kind available in a modern video graphics processor (though not necessarily configured as such) is employed to perform certain audio channel processing tasks, providing substantial capacity and versatility. One example of such a video graphics processor is the nVidia Tesla™ GPU, using the CUDA software development platform (“GPU”). This system provides 8 banks of 16 processors (128 processors total), each processor capable of handling a real-time fast Fourier transform (FFT) on 8-16 channels. For example, the FFT algorithm facilitates subsequent processing to detect call progress tones, which may be detected in the massively parallel processor environment, or using the host processor after downloading the FFT data. One particularly advantageous characteristic of implementing a general-purpose FFT algorithm, rather than specific call progress tone detection algorithms, is that a number of different call tone standards (and extensions/variants thereof) may be supported, and the FFT data may be used for a number of different purposes, for example speech recognition, etc.


Likewise, the signal processing is not limited to FFT algorithms, and therefore other algorithms may also or alternately be performed. For example, wavelet-based algorithms may provide useful information.


The architecture of the system provides a dynamic link library (DLL) available for calls from the telephony control software, e.g., Asterisk. An application programming interface (API) provides communication between the telephony control software (TCS) and the DLL. This TCS is either unmodified or minimally modified to support the enhanced functionality, which is separately compartmentalized.


The TCS, for example, executes a process which calls the DLL, causing the DLL to transfer data from a buffer holding, e.g., 2 ms of voice data for each of, e.g., 800 voice channels, from the main system memory of the PC to the massively parallel coprocessor (MPC), which is, for example, an nVidia Tesla™ platform. The DLL has previously uploaded to the MPC the algorithm, which is, for example, a parallel FFT algorithm, which operates on all 800 channels simultaneously. It may, for example, also perform tone detection, and produce an output in the MPC memory of the FFT representation of the 800 voice channels, and possibly certain processed information and flags. The DLL then transfers the information from the MPC memory to PC main memory for access by the TCS, or other processes, after completion.


While the MPC has massive computational power, it has somewhat limited controllability. For example, a bank of 16 DSPs in the MPC is controlled by a single instruction pointer, meaning that the algorithms executing within the MPC generally cannot be data-dependent in execution, nor employ conditional-contingent branching, since this would require each thread to execute different instructions, dramatically reducing throughput. Therefore, the algorithms are preferably designed to avoid such processes, and should generally be deterministic and non-data-dependent. On the other hand, it is possible to perform contingent or data-dependent processing; the gains from the massively parallel architecture are then limited, but channel-specific processing remains possible. Advantageously, implementations of the FFT algorithm are employed which meet the requirements for massively parallel execution; for example, the CUDA™ technology environment from nVidia provides such algorithms. Likewise, post-processing of the FFT data to determine the presence of tones poses a limited burden on the processor(s), and need not be performed under massively parallel conditions. This tone extraction process may therefore be performed on the MPC or the host PC processor, depending on their respective processing loads and headroom.


In general, the FFT itself should be performed in a faster-than-real-time manner. For example, it may be desired to implement overlapping FFTs, e.g., examining 2 ms of data every 1 ms, including memory-to-memory transfers and associated processing. Thus, for example, it may be desired to complete the FFT of 2 ms of data on the MPC within 0.5 ms. Assuming, for example, a sampling rate of 8.4 kHz, and an upper frequency within a channel of 3.2-4 kHz, the 2 ms sample would generally imply a 256-point FFT, which can be performed efficiently and quickly on the nVidia Tesla™ platform, including any required windowing and post processing.
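
Overlapped analysis of this kind is normally paired with a window applied to each frame before the transform. The kernel below is a CUDA sketch (hypothetical names) of cutting 50%-overlapped, Hann-windowed frames from one channel's sample stream:

/* One block per frame, one thread per sample within the frame: copy the
 * frame from the input stream with hop-size overlap and apply a Hann window. */
__global__ void window_frames(const float *x, float *frames,
                              int nsamp_total, int N, int hop, int nframes)
{
    int f = blockIdx.x;    /* frame index */
    int n = threadIdx.x;   /* sample index within the frame */
    if (f >= nframes || n >= N) return;
    float w = 0.5f - 0.5f * __cosf(2.0f * 3.14159265f * n / (N - 1)); /* Hann */
    int src = f * hop + n;
    frames[(size_t)f * N + n] = (src < nsamp_total) ? w * x[src] : 0.0f;
}

/* For 50% overlap with 256-point frames:
 *   window_frames<<<nframes, 256>>>(d_x, d_frames, total, 256, 128, nframes);  */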


Therefore, the use of the present invention permits the addition of call progress tone processing and other per channel signal processing tasks to a PC based TCS platform without substantially increasing the processing burden on the host PC processor, and generally permits such a platform to add generic call progress tone processing features and other per channel signal processing features without substantially limiting scalability.


Other sorts of parallel real-time processing are also possible, for example analysis of distributed sensor signals such as “Motes” or the like. See en.wikipedia.org/wiki/Smartdust. The MPC may also be employed to perform other telephony tasks, such as echo cancellation, conferencing, tone generation, compression/decompression, caller ID, interactive voice response, voicemail, packet processing, and packet-loss recovery algorithms, etc.


Similarly, simultaneous voice recognition can be performed on hundreds of simultaneous channels, for instance in the context of directing incoming calls based on customer responses at a customer service center. Advantageously, in such an environment, the processing of particular channels may be switched between banks of multiprocessors, depending on the processing task required for the channel and the instructions being executed by the multiprocessor. Thus, to the extent that the processing of a channel is data-dependent, but the algorithm has a limited number of different paths based on the data, the MPC system may efficiently process the channels even where the processing sequence and instructions for each channel are not identical.



FIG. 1 shows a schematic of a system for implementing the invention.


Massively multiplexed voice data 101 is received at network interface 102. The network could be a LAN, Wide Area Network (WAN), Primary Rate ISDN (PRI), a traditional telephone network with Time Division Multiplexing (TDM), or any other suitable network. This data may typically include hundreds of channels, each carrying a separate conversation as well as routing information. The routing information may be in the form of in-band signaling, such as dual-tone multi-frequency (DTMF) audio tones received from a telephone keypad or DTMF generator. The channels may be encoded using digital sampling of the audio input prior to multiplexing. Typically, voice channels will arrive in 20 ms frames.


The system according to a preferred coprocessor embodiment includes at least one host processor 103, which may be programmed with telephony software such as Asterisk or Yate, cited above. The host processor may be of any suitable type, such as those found in PCs, for example an Intel Core 2 Duo or Core 2 Quad, or an AMD Athlon X2. The host processor communicates with the MPC 105 via shared memory 104, which is, for example, 2 GB or more of DDR2 or DDR3 memory.


Within the host processor, application programs 106 receive demultiplexed voice data from interface 102, and generate service requests for services that cannot or are desired not to be processed in real time within the host processor itself. These service requests are stored in a service request queue 107. A service calling module 108 organizes the service requests from the queue 107 for presentation to the MPC 105.


The module 108 also reports results back to the user applications 106, which in turn put processed voice data frames back on the channels in real time, such that the next set of frames coming in on the channels 101 can be processed as they arrive.



FIG. 2 shows a process within module 108. In this process, a timing module 201 keeps track of a predetermined real-time delay constraint. Since standard voice frames are 20 ms long, this constraint should be significantly less than that to allow operations to be completed in real time. A 5-10 ms delay would very likely be sufficient; a 2 ms delay would give a degree of comfort that real-time operation will be assured. Then, at 202, the blocks of data requesting service are organized into the queue or buffer. At 203, the service calling module examines the queue to see what services are currently required. Some MPCs, such as the nVidia Tesla™ C870 GPU, require that each processor within a multiprocessor of the MPC perform the same operations in lockstep. For such MPCs, it is necessary to choose all requests for the same service at the same time. For instance, all requests for an FFT should be grouped together and requested at once; then all requests for a Mix operation might be grouped together and requested after the FFTs are completed, and so forth. The MPC 105 will perform the services requested and return the results to shared memory 104. At 204, the service calling module will retrieve the results from shared memory, and at 205 it will report the results back to the application program. At 206, it is tested whether there is more time and whether more services are requested. If so, control returns to element 202. If not, at 207, the MPC is triggered to sleep (or be made available to other processes) until another time interval determined by the real-time delay constraint begins.

FIG. 3 shows an example of running several processes on data retrieved from the audio channels. The figure shows the shared memory 104 and one of the processors 302 from the MPC 105. The processor 302 first retrieves one or more blocks from the job queue or buffer 104 that are requesting an FFT, and performs the FFT on those blocks. The other processors within the same multiprocessor array of parallel processors are instructed to do the same thing at the same time (on different data). After completion of the FFT, more operations can be performed. For instance, at 304 and 305, the processor 302 checks shared memory 104 to see whether more services are needed. In the example given, mixing 304 and decoding 305 are requested by module 109, sequentially. Therefore, these operations are also performed on data blocks retrieved from the shared memory 104. The result or results of each operation are placed in shared memory upon completion of the operation, where those results are retrievable by the host processor.


In the case of call progress tones, these three operations together (FFT, mixing, and decoding) will determine the destination of a call associated with the block of audio data, for the purposes of telephone switching.


If module 108 sends more requests for a particular service than can be accommodated at once, some of the requests will be accumulated in a shared RAM 109 to be completed in a later processing cycle. The MPC will be able to perform multiple instances of the requested service within the time constraints imposed by the loop of FIG. 2. Various tasks may be assigned priorities or deadlines, and therefore the processing of different services may be selected based on these criteria, and need not be processed in strict order.


The following is some pseudo code illustrating embodiments of the invention as implemented in software. The disclosure of a software embodiment does not preclude the possibility that the invention might be implemented in hardware.


Embodiment #1

Data Structures to be Used by Module 108


RQueueType Structure // Job Request Queue


ServiceType


ChannelID // Channel Identifier


VoiceData // Input Data


Output // Output Data


End Structure


// This embodiment uses a separate queue for each type of service to be requested.


// The queues have 200 elements in them. This number is arbitrary and could be adjusted


// by the designer depending on anticipated call volumes and numbers of processors available


// on the MPC. Generally, the number does not have to be as large as the total of number


// of simultaneous calls anticipated, because not all of those calls will be requesting services


// at the same time.


RQueueType RQueueFFT[200] // Maximum of 200 Requests FFT


RQueueType RQueueMIX[200] // Maximum of 200 Requests MIX


RQueueType RQueueENC[200] // Maximum of 200 Requests ENC


RQueueType RQueueDEC[200] // Maximum of 200 Requests DEC


Procedures to be Used by Module 108


// Initialization Function


Init: Initialize Request Queue


Initialize Service Entry


Start Service Poll Loop


// Service Request Function


ReqS: Case ServiceType


FFT: Lock RQueueFFT

    • Insert Service Information into RQueueFFT
    • Unlock RQueueFFT


MIX: Lock RQueueMIX

    • Insert Service Information into RQueueMIX
    • Unlock RQueueMIX


ENC: Lock RQueueENC

    • Insert Service Information into RQueueENC
    • Unlock RQueueENC


DEC: Lock RQueueDEC

    • Insert Service Information into RQueueDEC
    • Unlock RQueueDEC


End Case


Wait for completion of Service


Return output


// Service Poll Loop


// This loop is not called by the other procedures. It runs independently. It will keep track of


// where the parallel processors are in their processing. The host will load all the requests for a


// particular service into the buffer. Then, it will keep track of when the services are completed


// and load new requests into the buffer.


//


SerPL: Get timestamp and store in St


// Let's do FFT/FHT


Submit RQueueFFT with FFT code to GPU


For all element in RQueueFFT

    • Signal Channel of completion of service


End For


// Let's do mixing


Submit RQueueMIX with MIXING code to GPU


For all element in RQueueMIX

    • Signal Channel of completion of service


End For


// Let's do encoding


Submit RQueueENC with ENCODING code to GPU


For all element in RQueueENC

    • Signal Channel of completion of service


End For


// Let's do decoding


Submit RQueueDEC with DECODING code to GPU


For all element in RQueueDEC

    • Signal Channel of completion of service


End For


// Make sure it takes the same amount of time for every pass


Compute time difference between now and St


Sleep that amount of time


Goto SerPL // second pass


Examples of Code in Application Programs 106 for Calling the Routines Above


Example for Calling “Init”


// we have to initialize PStar before we can use it


Call Init


Example for Requesting an FFT


// use FFT service for multitone detection


Allocate RD as RQueueType


RD.Service=FFT


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


Scan RD.Output for presence of our tones


Example for Requesting Encoding

// Use the Encoding service
Allocate RD as RQueueType
RD.ServiceType = ENC
RD.ChannelID = Current Channel ID
RD.VoiceData = Voice Data
Call ReqS(RD)
// RD.Output contains encoded/compressed data

Example for Requesting Decoding

// Use the Decoding service
Allocate RD as RQueueType
RD.ServiceType = DEC
RD.ChannelID = Current Channel ID
RD.VoiceData = Voice Data
Call ReqS(RD)
// RD.Output contains decoded data
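
Tying the sketches together, the application-side FFT sequence above might reduce to a single call against the assumed request_fft() and is_dial_tone() helpers:

// Sketch: application-side handling of one voice frame, mirroring the
// "Example for Requesting an FFT" above (both helpers are the assumed
// sketches given earlier, not disclosed routines).
void on_voice_frame(int channel_id, const float *frame)
{
    RQueueEntry *rd = request_fft(channel_id, frame);  /* blocks until done */
    if (rd && is_dial_tone(rd->output, FRAME_SAMPLES)) {
        /* dial tone detected on this channel; act accordingly */
    }
}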


Embodiment #2

// This embodiment is slower, but also uses less memory than embodiment #1 above


Data Structures to be Used by Module 108


RQueueType Structure    // Job Request Queue
    ServiceType         // Requested Service
    ChannelID           // Channel Identifier
    VoiceData           // Input Data
    Output              // Output Data
End Structure


// This embodiment uses a single queue, but stages requests in a temporary
// queue while the GPU works on each service. This is less memory intensive,
// but slower.

RQueueType RQueue[200]       // Maximum of 200 Requests
RQueueType TempRQueue[200]   // Scratch queue used by the Service Poll Loop


Procedures to be Used by Module 108

// Initialization Function

Init:  Initialize Request Queue
       Initialize Service Entry
       Start Service Poll Loop

// Service Request Function

ReqS:  Lock RQueue
       Insert Service Information into RQueue
       Unlock RQueue
       Wait for completion of Service
       Return Output


// Service Poll Loop (runs continuously)

SerPL: Get timestamp and store in St

       // Let's do FFT/FHT
       For each element in RQueue where ServiceType = FFT
           Copy Data to TempRQueue
       End For
       Submit TempRQueue with FFT code to GPU
       For each element in TempRQueue
           Move TempRQueue.Output to RQueue.Output
           Signal Channel of completion of service
       End For

       // Let's do mixing
       For each element in RQueue where ServiceType = MIX
           Copy Data to TempRQueue
       End For
       Submit TempRQueue with MIXING code to GPU
       For each element in TempRQueue
           Move TempRQueue.Output to RQueue.Output
           Signal Channel of completion of service
       End For

       // Let's do encoding
       For each element in RQueue where ServiceType = ENC
           Copy Data to TempRQueue
       End For
       Submit TempRQueue with ENCODING code to GPU
       For each element in TempRQueue
           Move TempRQueue.Output to RQueue.Output
           Signal Channel of completion of service
       End For

       // Let's do decoding
       For each element in RQueue where ServiceType = DEC
           Copy Data to TempRQueue
       End For
       Submit TempRQueue with DECODING code to GPU
       For each element in TempRQueue
           Move TempRQueue.Output to RQueue.Output
           Signal Channel of completion of service
       End For

       // Make sure every pass takes the same amount of time
       Compute time elapsed since St
       Sleep for the remainder of the fixed pass period
       Goto SerPL   // next pass
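
The per-service gather into TempRQueue and the scatter of results back into RQueue can be expressed compactly. The sketch below reuses the assumed C structures and the submit_fft_batch() sketch from Embodiment #1; the index bookkeeping is illustrative.

// Sketch: gather one service's pending requests into a scratch array,
// run the batched GPU pass, then scatter results back and signal.
#include <string.h>

void poll_service(RQueueEntry *rq, int rq_len, ServiceType svc)
{
    static RQueueEntry temp[QUEUE_SLOTS];   /* TempRQueue */
    int map[QUEUE_SLOTS];                   /* temp slot -> RQueue slot */
    int count = 0;

    for (int i = 0; i < rq_len; i++)        /* gather */
        if (rq[i].service == svc && !rq[i].done) {
            temp[count] = rq[i];
            map[count++] = i;
        }
    if (count == 0)
        return;

    submit_fft_batch(temp, count);          /* or the MIX/ENC/DEC program */

    for (int j = 0; j < count; j++) {       /* scatter */
        memcpy(rq[map[j]].output, temp[j].output, sizeof temp[j].output);
        rq[map[j]].done = 1;                /* signal completion */
    }
}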


Examples of Code in the Application Programs 106 for Calling the Routines Above

Example for Calling “Init”

// We have to initialize PStar before we can use it
Call Init

Example for Calling “FFT”

// Use the FFT service for multitone detection
Allocate RD as RQueueType
RD.ServiceType = FFT
RD.ChannelID = Current Channel ID
RD.VoiceData = Voice Data
Call ReqS(RD)
Scan RD.Output for presence of our tones

Example for Calling Encoding

// Use the Encoding service
Allocate RD as RQueueType
RD.ServiceType = ENC
RD.ChannelID = Current Channel ID
RD.VoiceData = Voice Data
Call ReqS(RD)
// RD.Output contains encoded/compressed data

Example for Calling Decoding

// Use the Decoding service
Allocate RD as RQueueType
RD.ServiceType = DEC
RD.ChannelID = Current Channel ID
RD.VoiceData = Voice Data
Call ReqS(RD)
// RD.Output contains decoded data


While the embodiments discussed above use a separate host and a massively parallel processing array, the processing array may also execute general-purpose code and support general-purpose or application-specific operating systems, albeit with reduced efficiency as compared to an unbranched signal processing algorithm. It is therefore possible to employ a single processor core and memory pool, reducing system cost and simplifying system architecture. Indeed, one or more multiprocessors may be dedicated to signal processing, and others to system control, coordination, and logical analysis and execution. In such a case, the functions identified above as being performed in the host processor would instead be performed in the array, and, of course, the transfers across the bus separating the two would not be required.


The present invention may be applied to various parallel data processing algorithms for independent or interrelated data streams, for example, telephone conversations, sensor arrays, communications from computer network components, image processing, tracking of multiple objects within a space, object recognition in complex media or multimedia, and the like.


One particular advantage of the present architecture is that it facilitates high-level interaction of multiple data streams and data fusion. Thus, for example, in a telephone environment, the extracted call progress tones may be used by a call center management system to control workflows, scheduling, pacing, monitoring, training, voice stress analysis, and the like, which involve an interaction of a large number of concurrent data streams that are each nominally independent. On the other hand, in a seismic data processor, there will typically be large noise signals imposed on many sensors, which must be both individually processed and processed together for correlations and significant events. Therefore, another advantage of integrating real-time parallel data processing and analysis within a computing platform that supports a general-purpose (typically non-real-time) operating system is that a high level of complex control may be provided based on the massive data flows through the real-time subsystem, within an integrated platform, often without large expense, using available computational capacity efficiently.


From a review of the present disclosure, other modifications will be apparent to persons skilled in the art. Such modifications may involve other features which are already known in the design, manufacture and use of telephony engines and parallel processing and which may be used instead of or in addition to features already described herein. Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure of the present application also includes any novel feature or novel combination of features disclosed herein either explicitly or implicitly or any generalization thereof, whether or not it mitigates any or all of the same technical problems as does the present invention. The applicants hereby give notice that new claims may be formulated to such features during the prosecution of the present application or any further application derived therefrom.


The words “comprising”, “comprise”, and “comprises” as used herein should not be viewed as excluding additional elements. The singular article “a” or “an” as used herein should not be viewed as excluding a plurality of elements. The word “or” should be construed as an inclusive or, in other words, as “and/or”.

Claims
  • 1. A parallel signal processing system, comprising: at least one input port configured to receive a plurality of streams of information over time; a memory configured to store data representing a time period of the plurality of streams of information over time; a single instruction, multiple data type parallel processor, configured to: receive the data representing the time period of the plurality of streams of information over time; process the received data for each respective stream of information over time to produce a result selectively dependent on at least one of a transform, a convolution, echo processing, and a transversal filtering; and process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising in-band signal analysis; and store the result in the memory.
  • 2. The parallel signal processing system according to claim 1, wherein the plurality of streams of information over time are independent of each other.
  • 3. The parallel signal processing system according to claim 1, wherein at least two of the plurality of streams of information over time are related to each other.
  • 4. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the received data representing the time period for a respective stream of information over time, dependent on at least one other respective stream of information of the plurality of streams of information over time.
  • 5. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time which is independent of others of the plurality of streams of information over time.
  • 6. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time which comprises a convolution.
  • 7. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising a transversal filtering.
  • 8. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising a time-frequency domain transform.
  • 9. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising a wavelet transform.
  • 10. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising echo processing.
  • 11. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising speech recognition processing.
  • 12. The parallel signal processing system according to claim 1, wherein the single instruction, multiple data type parallel processor is configured to concurrently execute a common set of instructions for each time period substantially without data-dependent conditional execution branch instructions.
  • 13. A nontransitory computer readable medium, storing therein instructions for controlling a programmable processor to perform a method comprising: a step for storing data representing a time period of a plurality of streams of information over time in a memory; a step for controlling a single instruction, multiple data type parallel processor to receive and process the data representing the time period of the plurality of streams of information over time; a step for controlling the single instruction, multiple data type parallel processor to produce a result selectively dependent thereon, wherein the process comprises at least one of a transform, a convolution, echo processing, and a transversal filtering; a step for controlling the single instruction, multiple data type parallel processor to process the data representing the time period of the plurality of streams of information over time to produce the result for a respective stream of information over time comprising in-band signal analysis; and a step for storing the result in the memory.
  • 14. The computer readable medium according to claim 13, wherein at least one of the plurality of streams of information and the result for each processed stream of information are independent of each other.
  • 15. The computer readable medium according to claim 13, wherein at least one of the plurality of streams of information over time or at least one of the results are dependent on at least one other of the plurality of streams of information over time or at least one of the results.
  • 16. The computer readable medium according to claim 13, wherein the result comprises speech recognition processing.
  • 17. A parallel signal processing method comprising: storing data representing a time period of a plurality of streams of information over time in a memory; receiving the data representing the time period of the plurality of streams of information over time from the memory; selectively processing the retrieved data representing the time period of the plurality of streams of information over time with a single instruction, multiple data type parallel processor comprising at least one of a transform, a convolution, echo processing, and a transversal filtering; selectively processing the retrieved data representing the time period of the plurality of streams of information over time with a single instruction, multiple data type parallel processor comprising in-band signal analysis; and storing the result in the memory.
  • 18. The parallel signal processing method according to claim 17, wherein said selectively processing comprises performing at least one speech recognition function.
  • 19. The parallel signal processing method according to claim 17, wherein said selectively processing comprises performing echo processing.
  • 20. The parallel signal processing method according to claim 17, wherein the in-band signal analysis comprises call progress tone detection.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a Continuation of U.S. patent application Ser. No. 15/823,430, filed Nov. 27, 2017, now U.S. Pat. No. 10,524,024, issued Dec. 31, 2019, which is a Continuation of U.S. patent application Ser. No. 14/305,432, filed Jun. 16, 2014, now U.S. Pat. No. 9,832,543, issued Nov. 28, 2017, which is a Division of U.S. patent application Ser. No. 12/569,456, filed Sep. 29, 2009, now U.S. Pat. No. 8,755,515, issued Jun. 17, 2014, which claims benefit of priority from U.S. Provisional Patent Application No. 61/101,050, filed Sep. 29, 2008, each of which is expressly incorporated herein by reference in its entirety.
