Parallel signal processing system and method

Information

  • Patent Grant Number: 11,322,171
  • Date Filed: Monday, October 12, 2020
  • Date Issued: Tuesday, May 3, 2022
  • Examiner: King; Simon
  • Agents: Hoffberg & Associates; Hoffberg; Steven M.
Abstract
A system and method for processing a plurality of channels, for example audio channels, in parallel is provided. For example, a plurality of telephony channels are processed in order to detect and respond to call progress tones. The channels may be processed according to a common transform algorithm. Advantageously, a massively parallel architecture is employed, in which operations on many channels are synchronized, to achieve a high-efficiency parallel processing environment. The parallel processor may be situated on a data bus, separate from a main general-purpose processor, or integrated with the processor on a common board or in an integrated device. All, or a portion, of a speech processing algorithm may also be performed in a massively parallel manner.
Description
BACKGROUND
Field of the Invention

The invention relates to the field of real time digital audio processing, particularly in a telephony switch context.


Background of the Invention

Existing telephone systems, such as the Calltrol Object Telephony Server (OTS™), tend to require relatively expensive special purpose hardware to process hundreds of voice channels simultaneously. More information about this system can be found at www.calltrol.com/newsolutionsforoldchallenges.pdf, www.calltrol.com/crmconvergence_saleslogix.pdf, and www.calltrol.com/CalltrolSDKWhitepaper6-02.pdf, each of which is expressly incorporated herein by reference in its entirety.


In many traditional systems, a single dedicated analog or digital circuit is provided for each public switched telephone network (PSTN) line. See, e.g., Consumer Microcircuits Limited CMX673 datasheet, Clare M-985-01 datasheet. In other types of systems, the call progress tone analyzer may be statistically shared between multiple channels, imposing certain limitations and detection latencies.


Digital signal processor algorithms are also known for analyzing call progress tones (CPT). See, e.g., Manish Marwah and Sharmistha Das, “UNICA—A Unified Classification Algorithm For Call Progress Tones” (Avaya Labs, University of Colorado), expressly incorporated herein by reference.


Call progress tone signals provide information regarding the status or progress of a call to customers, operators, and connected equipment. In circuit-associated signaling, these audible tones are transmitted over the voice path within the frequency limits of the voice band. The four most common call progress tones are: Dial tone; Busy tone; Audible ringback; and Reorder tone. In addition to these, there are a number of other defined tones, including for example the 12 DTMF codes on a normal telephone keypad. There may be, for example, about 53 different tones supported by a system. A call progress tone detector may additionally respond to cues indicating cessation of ringback; presence/cessation of voice; Special Information Tones (SITs); and pager cue tones. Collectively, call progress tones and these other audible signals are referred to as call progress events. Call progress tone generation/detection in the network is generally based on a Precise Tone Plan. In the plan, four distinctive tones are used singly or in combination to produce unique progress tone signals. These tones are 350 Hz, 440 Hz, 480 Hz and 620 Hz. Each call progress tone is defined by the frequencies used and a specific on/off temporal pattern.
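
As a concrete illustration, the precise tone plan reduces to a small descriptor table. The following C-style sketch shows one plausible encoding; the structure and field names are assumptions for illustration only, not part of any standard:

/* One entry per call progress tone in the precise tone plan.
   Frequencies are in Hz; cadence is in milliseconds (0/0 = steady tone). */
typedef struct {
    const char *name;
    double f1, f2;       /* the two component frequencies, Hz */
    int on_ms, off_ms;   /* on/off temporal pattern */
} ToneDef;

static const ToneDef PRECISE_TONES[] = {
    { "dial",     350.0, 440.0,    0,    0 },  /* steady tone           */
    { "busy",     480.0, 620.0,  500,  500 },  /* 60 interruptions/min  */
    { "ringback", 440.0, 480.0, 2000, 4000 },
    { "reorder",  480.0, 620.0,  250,  250 },  /* 120 interruptions/min */
};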


The ITU-T E.180 and E.182 recommendations define the technical characteristics and intended usage of some of these tones: busy tone or busy signal; call waiting tone; comfort tone; conference call tone; confirmation tone; congestion tone; dial tone; end of three-party service tone (three-way calling); executive override tone; holding tone; howler tone; intercept tone; intrusion tone; line lock-out tone; negative indication tone; notify tone; number unobtainable tone; pay tone; payphone recognition tone; permanent signal tone; preemption tone; queue tone; recall dial tone; record tone; ringback tone or ringing tone; ringtone or ringing signal; second dial tone; special dial tone; special information tone (SIT); waiting tone; warning tone; Acceptance tone; Audible ring tone; Busy override warning tone; Busy verification tone; Engaged tone; Facilities tone; Fast busy tone; Function acknowledge tone; Identification tone; Intercept tone; Permanent signal tone; Positive indication tone; Re-order tone; Refusal tone; Ringback tone; Route tone; Service activated tone; Special ringing tone; Stutter dial tone; Switching tone; Test number tone; Test tone; and Trunk offering tone. In addition, signals sent to the PSTN include Answer tone; Calling tone; Guard tone; Pulse (loop disconnect) dialing; Tone (DTMF) dialing, and other signals from the PSTN include Billing (metering) signal; DC conditions; and Ringing signal. The tones, cadence, and tone definitions, may differ between different countries, carriers, types of equipment, etc. See, e.g., Annex to ITU Operational Bulletin No. 781-1.11.2003. Various Tones Used In National Networks (According To ITU-T Recommendation E.180) (03/1998).


Characteristics for the call progress events are shown in Table 1.


TABLE 1

Call Progress Event              | Frequencies (Hz) | Temporal Pattern                               | Event Reported After
Dial Tone                        | 350 + 440        | Steady tone                                    | Approximately 0.75 seconds
Busy Tone                        | 480 + 620        | 0.5 seconds on/0.5 seconds off                 | 2 cycles of precise, 3 cycles of nonprecise
Audible Ringback (Detection)     | 440 + 480        | 2 seconds on/4 seconds off                     | 2 cycles of precise or nonprecise
Audible Ringback (Cessation)     |                  |                                                | 3 to 6.5 seconds after ringback detected
Reorder                          | 480 + 620        | 0.25 seconds on/0.25 seconds off               | 2 cycles of precise, 3 cycles of nonprecise
Voice (Detection)                | 200 to 3400      |                                                | Approximately 0.25 to 0.50 seconds
Voice (Cessation)                |                  |                                                | Approximately 0.5 to 1.0 seconds after voice detected
Special Information Tones (SITs) | See Table 2.     | See Table 2.                                   | Approximately 0.25 to 0.75 seconds
Pager Cue Tones                  | 1400             | 3 to 4 tones at 0.1- to 0.125-second intervals | 2 cycles of precise or any pattern of 1400-Hz signals

Dial tone indicates that the CO is ready to accept digits from the subscriber. In the precise tone plan, dial tone consists of 350 Hz plus 440 Hz. The system reports the presence of precise dial tone after approximately 0.75 seconds of steady tone. Nonprecise dial tone is reported after the system detects a burst of raw energy lasting for approximately 3 seconds.


Busy tone indicates that the called line has been reached but it is engaged in another call. In the precise tone plan, busy tone consists of 480 Hz plus 620 Hz interrupted at 60 ipm (interruptions per minute) with a 0.5 seconds on/0.5 seconds off temporal pattern. The system reports the presence of precise busy tone after approximately two cycles of this pattern. Nonprecise busy tone is reported after three cycles.


Audible ringback (ring tone) is returned to the calling party to indicate that the called line has been reached and power ringing has started. In the precise tone plan, audible ringback consists of 440 Hz plus 480 Hz with a 2 seconds on/4 seconds off temporal pattern. The system reports the presence of precise audible ringback after two cycles of this pattern.


Outdated equipment in some areas may produce nonprecise, or dirty ringback. Nonprecise ringback is reported after two cycles of a 1 to 2.5 seconds on, 2.5 to 4.5 seconds off pattern of raw energy. The system may report dirty ringback as voice detection, unless voice detection is specifically ignored during this period. The system reports ringback cessation after 3 to 6.5 seconds of silence once ringback has been detected (depending at what point in the ringback cycle the CPA starts listening).


Reorder (Fast Busy) tone indicates that the local switching paths to the calling office or equipment serving the customer are busy or that a toll circuit is not available. In the precise tone plan, reorder consists of 480 Hz plus 620 Hz interrupted at 120 ipm (interruptions per minute) with a 0.25 seconds on/0.25 seconds off temporal pattern. The system reports the presence of precise reorder tone after two cycles of this pattern. Nonprecise reorder tone is reported after three cycles.
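
Since dial, busy, ringback, and reorder tones differ only in their frequency pair and cadence, a single cadence-matching routine can serve all of them. The following C sketch is a minimal illustration; the 10% tolerance and the helper names are assumptions, not taken from any standard:

#include <stdbool.h>

/* True if a measured duration is within +/-10% of the nominal value. */
static bool near_nominal(int ms, int nominal_ms)
{
    return ms >= nominal_ms * 9 / 10 && ms <= nominal_ms * 11 / 10;
}

/* Count consecutive on/off cycles matching a nominal cadence. Precise
   busy tone, for example, is reported once this returns 2 for the
   0.5 s on/0.5 s off pattern; nonprecise busy requires 3 cycles. */
static int matching_cycles(const int *on_ms, const int *off_ms, int n,
                           int nom_on_ms, int nom_off_ms)
{
    int cycles = 0;
    for (int i = 0; i < n; i++) {
        if (!near_nominal(on_ms[i], nom_on_ms) ||
            !near_nominal(off_ms[i], nom_off_ms))
            break;                 /* cadence broken */
        cycles++;
    }
    return cycles;
}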


Voice detection has multiple uses, and can be used to detect voice as an answer condition, and also to detect machine-generated announcements that may indicate an error condition. Voice presence can be detected after approximately 0.25 to 0.5 seconds of continuous human speech falling within the 200-Hz to 3400-Hz voiceband (although the PSTN only guarantees voice performance between 300 Hz and 800 Hz). A voice cessation condition may be determined, for example, after approximately 0.5 to 1.0 seconds of silence once the presence of voice has been detected.


Special Information Tones (SITs) indicate network conditions encountered in both the Local Exchange Carrier (LEC) and Inter-Exchange Carrier (IXC) networks. The tones alert the caller that a machine-generated announcement follows (this announcement describes the network condition). Each SIT consists of a precise three-tone sequence: the first tone is either 913.8 Hz or 985.2 Hz, the second tone is either 1370.6 Hz or 1428.5 Hz, and the third is always 1776.7 Hz. The duration of the first and second tones can be either 274 ms or 380 ms, while the duration of the third remains a constant 380 ms. The names, descriptions and characteristics of the four most common SITs are summarized in Table 2.


TABLE 2

SIT Name | Description                            | First Tone Freq. (Hz) | First Tone Dur. (ms) | Second Tone Freq. (Hz) | Second Tone Dur. (ms) | Third Tone Freq. (Hz) | Third Tone Dur. (ms)
NC1      | No circuit found                       | 985.2                 | 380                  | 1428.5                 | 380                   | 1776.7                | 380
IC       | Operator intercept                     | 913.8                 | 274                  | 1370.6                 | 274                   | 1776.7                | 380
VC       | Vacant circuit (non-registered number) | 985.2                 | 380                  | 1370.6                 | 274                   | 1776.7                | 380
RO1      | Reorder (system busy)                  | 913.8                 | 274                  | 1428.5                 | 380                   | 1776.7                | 380

1 Tone frequencies shown indicate conditions that are the responsibility of the BOC intra-LATA carrier. Conditions occurring on inter-LATA carriers generate SITs with different first and second tone frequencies.
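
Because the first two tones of a SIT sequence each take one of only two frequencies, the four common SITs of Table 2 can be told apart by two comparisons. A minimal C sketch, assuming tone segmentation and frequency estimation are done elsewhere:

#include <math.h>
#include <stdbool.h>

typedef enum { SIT_NC, SIT_IC, SIT_VC, SIT_RO } Sit;

/* Classify a SIT from its first and second tone frequencies (Hz),
   choosing whichever nominal frequency is closer to the estimate. */
static Sit classify_sit(double f1, double f2)
{
    bool first_high  = fabs(f1 - 985.2)  < fabs(f1 - 913.8);
    bool second_high = fabs(f2 - 1428.5) < fabs(f2 - 1370.6);
    if (first_high && second_high)   return SIT_NC;  /* no circuit found   */
    if (!first_high && !second_high) return SIT_IC;  /* operator intercept */
    if (first_high)                  return SIT_VC;  /* vacant circuit     */
    return SIT_RO;                                   /* reorder            */
}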







Pager cue tones are used by pager terminal equipment to signal callers or connected equipment to enter the callback number (this number is then transmitted to the paged party). Most pager terminal equipment manufacturers use a 3- or 4-tone burst of 1400 Hz at 100- to 125-ms intervals. The system identifies three cycles of 1400 Hz at these approximate intervals as pager cue tones. To accommodate varying terminal equipment signals, tone bursts of 1400 Hz in a variety of patterns may also be reported as pager cue tones. Voice prompts sometimes accompany pager cue tones to provide instructions. Therefore, combinations of prompts and tones may be detected by configuring an answer supervision template to respond to both voice detection and pager cue tone detection.


A Goertzel filter algorithm may be used to detect the solid tones that begin fax or data-modem calls. If any of the following tones are detected, a “modem” (fax or data) state is indicated: 2100 Hz, 2225 Hz, 1800 Hz, 2250 Hz, 1300 Hz, 1400 Hz, 980 Hz, 1200 Hz, 600 Hz, or 3000 Hz. Fax detection relies on the 1.5 seconds of HDLC flags that precede the answering fax terminal's DIS frame. DIS is used by the answering terminal to declare its capabilities. After a solid tone is detected, a V.21 receiver is used to detect the HDLC flags (01111110) in the preamble of the DIS signal on the downstream side. If the required number of flags is detected, fax is reported. Otherwise, upon expiration of a timer, the call may be determined to be a data modem communication. See, e.g., U.S. Pat. No. 7,003,093, the entirety of which is expressly incorporated herein by reference. See also, U.S. Pat. No. 7,043,006, expressly incorporated herein by reference.
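
For reference, the Goertzel algorithm evaluates the energy at a single target frequency with one recurrence per sample, which is why it is a common choice for detecting a handful of fixed tones. A minimal C sketch (the sample format and scaling are assumptions):

#include <math.h>

/* Squared magnitude of the signal component at frequency f (Hz) in a
   block of n samples taken at sampling rate fs (Hz). Comparing this
   against total block energy gives a simple solid-tone test. */
double goertzel_power(const short *x, int n, double f, double fs)
{
    double coeff = 2.0 * cos(2.0 * 3.14159265358979323846 * f / fs);
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) {
        double s0 = x[i] + coeff * s1 - s2;   /* Goertzel recurrence */
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}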


Therefore, a well-developed system exists for in-band signaling over audio channels, with a modest degree of complexity and some variability between standards, which themselves may change over time.


One known digital signal processor architecture, exemplified by the nVidia Tesla™ C870 GPU device, provides a massively multi-threaded architecture, providing over 500 gigaflops peak floating-point performance. This device encompasses a 128-processor computing core, and is typically provided as a coprocessor on a high speed bus for a standard personal computer platform. Similarly, the AMD/ATI Firestream 9170 also reports 500 gigaflops performance from a GPU-type device with double precision floating point capability. Likewise, newly described devices (e.g., AMD Fusion) integrate a CPU and GPU on a single die with shared external interfaces. See, for example, www.nvidia.com/object/tesla_product_literature.html, S1070 1U System Specification Document (2.03 MB PDF), NVIDIA Tesla S1070 Datasheet (258 KB PDF), NVIDIA Tesla Personal Supercomputer Datasheet (517 KB PDF), C1060 Board Specification Document (514 KB PDF), NVIDIA Tesla C1060 Datasheet (153 KB PDF), NVIDIA Tesla 8 Series Product Overview (1.69 MB PDF), C870 Board Specification Document (478 KB PDF), D870 System Specification Document (630 KB PDF), 5870 1U Board Specification Document (13.3 MB PDF), NVIDIA Tesla 8 Series: GPU Computing Technical Brief (3.73 MB PDF), www.nvidia.com/object/cuda_programming_tools.html (PTX: Parallel Thread Execution ISA Version 1.2), developer.download.nvidia.com/compute/cuda/2_0/docs/NVIDIA_CUDA_Programming_Guide_2.0.pdf, developer.download.nvidia.com/compute/cuda/2_0/docs/CudaReferenceManual_2.0.pdf, developer.download.nvidia.com/compute/cuda/2_0/docs/CUBLAS_Library_2.0.pdf, developer.download.nvidia.com/compute/cuda/2_0/docs/CUFFT_Library_2.0.pdf, each of which is expressly incorporated herein by reference in its entirety.


The nVidia Tesla™ GPU is supported by the Compute Unified Device Architecture (CUDA) software development environment, which provides C language support. Typical applications proposed for the nVidia Tesla™ GPU, supported by CUDA, are parallel bitonic sort; matrix multiplication; matrix transpose; performance profiling using timers; parallel prefix sum (scan) of large arrays; image convolution; 1D DWT using the Haar wavelet; OpenGL and Direct3D graphics interoperation examples; basic linear algebra subroutines; fast Fourier transform; binomial option pricing; Black-Scholes option pricing; Monte-Carlo option pricing; parallel Mersenne Twister (random number generation); parallel histogram; image denoising; and a Sobel edge detection filter. The typical proposed applications are therefore computer software profiling, matrix applications, image processing applications, financial applications, seismic simulations, computational biology, pattern recognition, signal processing, and physical simulation. CUDA technology offers the ability for threads to cooperate when solving a problem. The nVidia Tesla™ GPUs featuring CUDA technology have an on-chip Parallel Data Cache that can store information directly on the GPU, allowing computing threads to instantly share information rather than wait for data from much slower, off-chip DRAMs. Likewise, the software compilation aspects of CUDA are able to partition code between the GPU and a host processor, for example to effect data transfers and to execute on the host processor algorithms and code which are incompatible or unsuitable for efficient execution on the GPU itself.


GPU architectures are generally well-suited to address problems that can be expressed as data-parallel computations: the same program is executed on many data elements in parallel, with high arithmetic intensity (the ratio of arithmetic operations to memory operations). Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control; and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches. Thus, the GPU architecture typically provides a larger number of arithmetic logic units than independently and concurrently operable instruction decoders. Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets such as arrays can use a data-parallel programming model to speed up the computations. In 3D rendering, large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.
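
As an illustration of the data-parallel model, the CUDA kernel below applies the same windowing operation to every sample of every channel, one thread per sample; it is a generic sketch rather than code from any cited SDK:

/* CUDA kernel: one block per channel, one thread per sample. Every
   thread executes the same instruction stream on different data. */
__global__ void window_frames(float *samples, const float *window,
                              int samples_per_channel, int channels)
{
    int ch = blockIdx.x;   /* channel index */
    int i  = threadIdx.x;  /* sample index  */
    if (ch < channels && i < samples_per_channel)
        samples[ch * samples_per_channel + i] *= window[i];
}

/* Launch example:
   window_frames<<<channels, samples_per_channel>>>(d_s, d_w, n, channels); */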


The Tesla™ GPU device is implemented as a set of multiprocessors (e.g., 8 on the C870 device), each of which has a Single Instruction, Multiple Data (SIMD) architecture: at any given clock cycle, each processor (16 per multiprocessor on the C870) of the multiprocessor executes the same instruction, but operates on different data. Each multiprocessor has on-chip memory of the four following types: one set of local 32-bit registers per processor; a parallel data cache or shared memory that is shared by all the processors and implements the shared memory space; a read-only constant cache that is shared by all the processors and speeds up reads from the constant memory space, which is implemented as a read-only region of device memory; and a read-only texture cache that is shared by all the processors and speeds up reads from the texture memory space, which is implemented as a read-only region of device memory. The local and global memory spaces are implemented as read-write regions of device memory and are not cached. Each multiprocessor accesses the texture cache via a texture unit.


A grid of thread blocks is executed on the device by executing one or more blocks on each multiprocessor using time slicing. Each block is split into SIMD groups of threads called warps; each of these warps contains the same number of threads, called the warp size, and is executed by the multiprocessor in a SIMD fashion; a thread scheduler periodically switches from one warp to another to maximize the use of the multiprocessor's computational resources. A half-warp is either the first or second half of a warp. The way a block is split into warps is always the same; each warp contains threads of consecutive, increasing thread IDs, with the first warp containing thread 0.


A block is processed by only one multiprocessor, so that the shared memory space resides in the on-chip shared memory, leading to very fast memory accesses. The multiprocessor's registers are allocated among the threads of the block. If the number of registers used per thread multiplied by the number of threads in the block is greater than the total number of registers per multiprocessor, the block cannot be executed and the corresponding kernel will fail to launch. Several blocks can be processed by the same multiprocessor concurrently by allocating the multiprocessor's registers and shared memory among the blocks. The issue order of the warps within a block is undefined, but their execution can be synchronized to coordinate global or shared memory accesses. The issue order of the blocks within a grid of thread blocks is undefined, and there is no synchronization mechanism between blocks, so threads from two different blocks of the same grid cannot safely communicate with each other through global memory during the execution of the grid.
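
The register constraint above is a simple product test; the sketch below spells it out. The 8192-register figure is the approximate per-multiprocessor limit of C870-class hardware and is given only as an example; real limits should come from a device query:

/* A block can launch only if its total register demand fits within
   the multiprocessor's register file. */
static int block_fits(int regs_per_thread, int threads_per_block,
                      int regs_per_multiprocessor)
{
    return regs_per_thread * threads_per_block <= regs_per_multiprocessor;
}

/* Example: 10 registers/thread * 256 threads = 2560 <= 8192, so the
   block launches; 40 registers/thread * 256 threads = 10240 would not. */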


Telephony control and switching applications have for many years employed general purpose computer operating systems, and indeed the UNIX system was originally developed by Bell Laboratories/AT&T. There are a number of available telephone switch platforms, especially private branch exchange implementations, which use an industry standard PC server platform, typically with specialized telephony support hardware. These include, for example, the Asterisk (Digium) PBX platform, PBXtra (Fonality), Callweaver, Sangoma, etc. See also, e.g., www.voip-info.org/wiki/. Typically, these support voice over Internet protocol (VOIP) communications, in addition to switched circuit technologies.


As discussed above, typical automated telephone signaling provides in-band signaling which therefore employs acoustic signals. A switching system must respond to these signals, or it is deemed deficient. Typically, an analog or digital call progress tone detector is provided for each channel of a switched circuit system. For VOIP systems, this functionality may be provided in a gateway (media gateway), either as in traditional switched circuit systems, or as a software process within a digital signal processor.


Because of the computational complexity of the call progress tone analysis task, the density of digital signal processing systems for simultaneously handling a large number of voice communications has been limited. For example, 8 channel call progress tone detection may be supported in a single Texas Instruments TMS320C5510™ digital signal processor (DSP). See, IP PBX Chip from Adaptive Digital Technologies, Inc. (www.adaptivedigital.com/product/solution/ip_pbx.htm). The tone detection algorithms consume, for example, over 1 MIPS per channel for a full suite of detection functions, depending on algorithm, processor architecture, etc. Scaling to hundreds of channels per system is cumbersome, and typically requires special purpose, dedicated, and often costly hardware which occupies a very limited number of expansion bus slots of a PBX system.


SUMMARY OF THE INVENTION

The present system and method improve the cost and efficiency of real time digital signal processing with respect to analog signals, and in particular, telephony signaling functions.


In one aspect of the invention, a massively parallel digital signal processor is employed to perform telephony in-band signaling detection and analysis. In another aspect, a massively parallel coprocessor card is added to a telephony server executing on a standard processor, to increase call progress tone detection performance. Advantageously, the massively parallel processor is adapted to execute standard software, such as C language programs, and therefore may perform both massively parallel tasks and, with likely lower efficiency, serial execution tasks as well. Thus, a telephony system may be implemented on a single processor system, or within a distributed and/or processor/coprocessor architecture.


Data blocks, each including a time slice from a single audio channel, are fed in parallel to the massively parallel processor, which performs operations in parallel on a plurality of time slices, generally executing the same instruction on the plurality of time slices. In this system, real time performance may be effectively achieved, with a predetermined maximum processing latency. In many cases, it is not necessary to detect tones on each audio channel continuously, and therefore the system may sample each channel sequentially. In addition, if a Fast Fourier Transform-type algorithm is employed, the real (I) and imaginary (Q) channels may each be presented with data from different sources, leading to a doubling of capacity. Thus, for example, using an nVidia Tesla™ C870 GPU, with 128 processors, each processor can handle 8 (real only) or 16 (real and imaginary) audio channels, leading to a density of 1024- or 2048-channel call progress tone detection. Practically, the system is not operated at capacity, and therefore up to about 800 voice channels may be processed, using a general purpose commercially available coprocessor card for a PC architecture.
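
The doubling of capacity mentioned above is the standard trick of packing two real channels into one complex FFT input, x = a + j*b, and separating the two spectra afterward by conjugate symmetry. A sketch of the unpacking step, assuming cuFFT's complex type; this packing scheme is a general FFT identity, not code from the cited SDK:

#include <cufft.h>

/* Recover bin k of the spectra A and B of two real channels a and b
   that were packed as x = a + j*b before a single complex FFT X:
     A[k] =      (X[k] + conj(X[N-k])) / 2
     B[k] = -j * (X[k] - conj(X[N-k])) / 2                          */
static void unpack_bin(const cufftComplex *X, int N, int k,
                       cufftComplex *A, cufftComplex *B)
{
    cufftComplex Xk = X[k], Xm = X[(N - k) % N];
    A->x = 0.5f * (Xk.x + Xm.x);   /* real part of A[k] */
    A->y = 0.5f * (Xk.y - Xm.y);   /* imag part of A[k] */
    B->x = 0.5f * (Xk.y + Xm.y);   /* real part of B[k] */
    B->y = 0.5f * (Xm.x - Xk.x);   /* imag part of B[k] */
}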


For example, a PC architecture server executes Asterisk PBX software under the Linux operating system. A call is provided from the Asterisk PBX software to a dynamic linked library (DLL), which transfers data from a buffer in main memory containing time slices for the analog channels to be processed. For example, 2 mS of data for each of 800 channels, at an 8.4 kHz sampling rate, is provided in the buffer (132 kB). The buffer contents are transferred to the coprocessor through a PCIe x16 interface, along with a call to perform an FFT for each channel, with appropriate windowing, and/or using continuity from prior samples. The FFT may then be filtered on the coprocessor, with the results presented to the host processor, or the raw FFT data transferred to the host for filtering. Using a time-to-frequency domain transform, the signal energy at a specified frequency is converted to an amplitude peak at a specific frequency bin, which is readily extracted. Temporal analysis may also be performed in either the coprocessor or processor, though preferably this is performed in the processor. The analysis and data transform may also be used for speech recognition primitives, and for other processes.
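
A minimal sketch of the per-channel FFT step using cuFFT's batched interface, assuming the multi-channel buffer has already been copied to device memory; the function and buffer names here are illustrative:

#include <cufft.h>

/* Run one 256-point real-to-complex FFT per voice channel in a single
   batched call. d_in holds channels*256 contiguous samples; d_out
   receives channels*(256/2+1) complex bins. Error checks omitted. */
static void fft_all_channels(cufftReal *d_in, cufftComplex *d_out,
                             int channels)
{
    const int n = 256;                           /* points per channel */
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_R2C, channels);  /* one plan, 'channels' batches */
    cufftExecR2C(plan, d_in, d_out);             /* all channels at once */
    cufftDestroy(plan);
}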


A particular advantage of this architecture arises from the suitability of the call progress tone analysis to be performed in parallel, since the algorithm is deterministic and has few or no branch points. Thus, the task is defined to efficiently exploit the processing power and parallelism of a massively parallel processor.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a system for implementing the invention.



FIG. 2 is a flowchart of operations within a host processor.



FIG. 3 is a schematic diagram showing operations with respect to a massively parallel co-processor.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

One embodiment of the present invention provides a system and method for analyzing call progress tones and performing other types of audio band processing on a plurality of voice channels, for example in a telephone system. Examples of call progress tone analysis can be found at: www.commetrex.com/products/algorithms/CPA.html; www.dialogic.com/network/csp/appnots/10117_CPA_SR6_HMP2.pdf; whitepapers.zdnet.co.uk/0,1000000651,260123088p,00.htm; and www.pikatechnologies.com/downloads/samples/readme/6.2%20-%20Call%20Progress%20Analysis%20-%20ReadMe.txt, each of which is expressly incorporated herein by reference.


In a modest size system for analyzing call progress tones, there may be hundreds of voice channels to be handled simultaneously. Indeed, the availability of a general-purpose call progress tone processing system permits systems to define non-standard or additional signaling capabilities, thus reducing the need for out-of-band signaling. Voice processing systems generally require real time performance; that is, connections must be maintained and packets or streams forwarded within narrow time windows, and call progress tones processed within tight specifications.


An emerging class of telephone communication processing system implements a private branch exchange (PBX) switch, which employs a standard personal computer (PC) as a system processor, and employs software which executes on a general purpose operating system (OS).


For example, the Asterisk system runs on the Linux OS. More information about Asterisk may be found at Digium/Asterisk, 445 Jan Davis Drive NW, Huntsville, Ala. 35806, 256.428.6000, asterisk.org/downloads. Another such system is “Yate” (Yet Another Telephony Engine), available from Bd. Nicolae Titulescu 10, Bl. 20, Sc. C, Ap. 128, Sector 1, Bucharest, Romania, yate.null.ro/pmwiki/index.php?n=Main.Download.


In such systems, scalability to desired levels, for example hundreds of simultaneous voice channels, requires that the host processor have sufficient headroom to perform all required tasks within the time allotted. Alternately stated, the tasks performed by the host processor should be limited to those it is capable of completing without contention or undue delay. Because digitized audio signal processing is resource intensive, PC-based systems have typically either not implemented functionality which requires per-channel signal processing, or have offloaded the processing to specialized digital signal processing (DSP) boards. Further, such DSP boards are themselves limited, for example to 8-16 voice processed channels per DSP core, with 4-32 cores per board, although higher density boards are available. These boards are relatively expensive, as compared to the general-purpose PC, and occupy a limited number of bus expansion slots.


The present invention provides an alternative to the use of specialized DSP processors dedicated to voice channel processing. According to one embodiment, a massively parallel processor as available in a modern video graphics processor (though not necessarily configured as such) is employed to perform certain audio channel processing tasks, providing substantial capacity and versatility. One example of such a video graphics processor is the nVidia Tesla™ GPU, using the CUDA software development platform (“GPU”). This system provides 8 banks of 16 processors (128 processors total), each processor capable of handling a real-time fast Fourier transform (FFT) on 8-16 channels. For example, the FFT algorithm facilitates subsequent processing to detect call progress tones, which may be detected in the massively parallel processor environment, or using the host processor after downloading the FFT data. One particularly advantageous characteristic of implementation of a general purpose FFT algorithm, rather than specific call tone detection algorithms, is that a number of different call tone standards (and extensions/variants thereof) may be supported, and the FFT data may be used for a number of different purposes, for example speech recognition, etc.


Likewise, the signal processing is not limited to FFT algorithms, and therefore other algorithms may also or alternately be performed. For example, wavelet-based algorithms may provide useful information.


The architecture of the system provides a dynamic link library (DLL) available for calls from the telephony control software, e.g., Asterisk. An application programming interface (API) provides communication between the telephony control software (TCS) and the DLL. This TCS is either unmodified or minimally modified to support the enhanced functionality, which is separately compartmentalized.


The TCS, for example, executes a process which calls the DLL, causing the DLL to transfer data from a buffer holding, e.g., 2 mS of voice data for, e.g., 800 voice channels, from main system memory of the PC to the massively parallel coprocessor (MPC), which is, for example, an nVidia Tesla™ platform. The DLL has previously uploaded to the MPC the algorithm, which is, for example, a parallel FFT algorithm, which operates on all 800 channels simultaneously. It may, for example, also perform tone detection, and produce an output in the MPC memory of the FFT representation of the 800 voice channels, and possibly certain processed information and flags. The DLL then transfers the information from the MPC memory to PC main memory for access by the TCS, or other processes, after completion.


While the MPC has massive computational power, it has somewhat limited controllability. For example, a bank of 16 DSPs in the MPC are controlled by a single instruction pointer, meaning that the algorithms executing within the MPC generally cannot be data-dependent in execution, nor have conditional-contingent branching, since this would require each thread to execute different instructions, and thus dramatically reduce throughput. Therefore, the algorithms are preferably designed to avoid such processes, and should generally be deterministic and non-data dependent algorithms. On the other hand, it is possible to perform contingent or data-dependent processing, though the gains from the massively parallel architecture are limited, and thus channel specific processing is possible. Advantageously, implementations of the FFT algorithm are employed which meet the requirements for massively parallel execution. For example, the CUDA™ technology environment from nVidia provides such algorithms. Likewise, post processing of the FFT data to determine the presence of tones poses a limited burden on the processor(s), and need not be performed under massively parallel conditions. This tone extraction process may therefore be performed on the MPC or the host PC processor, depending on respective processing loads and headroom.


In general, the FFT itself should be performed in a faster-than-real-time manner. For example, it may be desired to implement overlapping FFTs, e.g., examining 2 mS of data every 1 mS, including memory-to-memory transfers and associated processing. Thus, for example, it may be desired to complete the FFT of 2 mS of data on the MPC within 0.5 mS. Assuming, for example, a sampling rate of 8.4 kHz, and an upper frequency within a channel of 3.2-4 kHz, the 2 mS sample would generally imply a 256-point FFT, which can be performed efficiently and quickly on the nVidia Tesla™ platform, including any required windowing and post processing.
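
With these example figures, the bin spacing of a 256-point FFT at an 8.4 kHz sampling rate is 8400/256, roughly 32.8 Hz, so each call progress frequency maps to a distinct bin. A one-line helper, given only as a worked illustration:

/* Nearest FFT bin for a tone of frequency f (Hz): bin = round(f*N/fs).
   With N = 256 and fs = 8400 Hz: 350 Hz -> bin 11, 440 Hz -> bin 13,
   480 Hz -> bin 15, 620 Hz -> bin 19 (approximately). */
static int tone_bin(double f, int n_points, double fs)
{
    return (int)(f * n_points / fs + 0.5);
}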


Therefore, the use of the present invention permits the addition of call progress tone processing and other per channel signal processing tasks to a PC based TCS platform without substantially increasing the processing burden on the host PC processor, and generally permits such a platform to add generic call progress tone processing features and other per channel signal processing features without substantially limiting scalability.


Other sorts of parallel real time processing are also possible, for example analysis of distributed sensor signals such as “Motes” or the like. See, en.wikipedia.org/wiki/Smartdust. The MPC may also be employed to perform other telephony tasks, such as echo cancellation, conferencing, tone generation, compression/decompression, caller ID, interactive voice response, voicemail, packet processing and packet loss recovery algorithms, etc.


Similarly, simultaneous voice recognition can be performed on hundreds of simultaneous channels, for instance in the context of directing incoming calls based on customer responses at a customer service center. Advantageously, in such an environment, processing of particular channels may be switched between banks of multiprocessors, depending on the processing task required for the channel and the instructions being executed by the multiprocessor. Thus, to the extent that the processing of a channel is data dependent, but the algorithm has a limited number of different paths based on the data, the MPC system may efficiently process the channels even where the processing sequence and instructions for each channel are not identical.



FIG. 1 shows a schematic of a system for implementing the invention.


Massively multiplexed voice data 101 is received at network interface 102. The network could be a LAN, Wide Area Network (WAN), Primary Rate ISDN (PRI), a traditional telephone network with Time Division Multiplexing (TDM), or any other suitable network. This data may typically include hundreds of channels, each carrying a separate conversation and also routing information. The routing information may be in the form of in-band signaling of dual-tone multi-frequency (DTMF) audio tones received from a telephone keypad or DTMF generator. The channels may be encoded using digital sampling of the audio input prior to multiplexing. Typically, voice channels will come in 20 ms frames.


The system according to a preferred coprocessor embodiment includes at least one host processor 103, which may be programmed with telephony software such as Asterisk or Yate, cited above. The host processor may be of any suitable type, such as those found in PCs, for example an Intel Core 2 Duo or Quad, or AMD Athlon X2. The host processor communicates with MPC 105 via shared memory 104, which is, for example, 2 GB or more of DDR2 or DDR3 memory.


Within the host processor, application programs 106 receive demultiplexed voice data from interface 102, and generate service requests for services that cannot or are desired not to be processed in real time within the host processor itself. These service requests are stored in a service request queue 107. A service calling module 108 organizes the service requests from the queue 107 for presentation to the MPC 105.


The module 108 also reports results back to the user applications 106, which in turn put processed voice data frames back on the channels in real time, such that the next set of frames coming in on the channels 101 can be processed as they arrive.



FIG. 2 shows a process within module 108. In this process, a timing module 201 keeps track of a predetermined real time delay constraint. Since standard voice frames are 20 ms long, this constraint should be significantly less than that to allow operations to be completed in real time. A 5-10 ms delay would very likely be sufficient; however, a 2 ms delay would give a degree of comfort that real time operation will be assured. Then, at 202, blocks of data requesting service are organized into the queue or buffer. At 203, the service calling module examines the queue to see what services are currently required. Some MPCs, such as the nVidia Tesla™ C870 GPU, require that each processor within a multiprocessor of the MPC perform the same operations in lockstep. For such MPCs, it is necessary to choose all requests for the same service at the same time. For instance, all requests for an FFT should be grouped together and requested at once; then all requests for a Mix operation might be grouped together and requested after the FFTs are completed, and so forth. The MPC 105 will perform the services requested and return the results to shared memory 104. At 204, the service calling module will retrieve the results from shared memory, and at 205 will report the results back to the application program. At 206, it is tested whether there is more time and whether more services are requested. If so, control returns to element 202. If not, at 207, the MPC is triggered to sleep (or be available to other processes) until another time interval determined by the real time delay constraint is begun.


FIG. 3 shows an example of running several processes on data retrieved from the audio channels. The figure shows the shared memory 104 and one of the processors 302 from the MPC 105. The processor 302 first retrieves one or more blocks from the job queue or buffer 104 that are requesting an FFT and performs the FFT on those blocks. The other processors within the same multiprocessor array of parallel processors are instructed to do the same thing at the same time (on different data). After completion of the FFT, more operations can be performed. For instance, at 304 and 305, the processor 302 checks shared memory 104 to see whether more services are needed. In the examples given, mixing 304 and decoding 305 are requested by module 108, sequentially. Therefore, these operations are also performed on data blocks retrieved from the shared memory 104. The result or results of each operation are placed in shared memory upon completion of the operation, where those results are retrievable by the host processor.


In the case of call progress tones, these three operations together (FFT, mixing, and decoding) will determine the destination of a call associated with the block of audio data for the purposes of telephone switching.


If module 108 sends more requests for a particular service than can be accommodated at once, some of the requests will be accumulated in a shared RAM 109 to be completed in a later processing cycle. The MPC will be able to perform multiple instances of the requested service within the time constraints imposed by the loop of FIG. 2. Various tasks may be assigned priorities, or deadlines, and therefore the processing of different services may be selected for processing based on these criteria, and need not be processed in strict order.
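
The loop of FIG. 2 might be realized on a POSIX host along the following lines; submit_batch() stands in for the per-service MPC dispatch and is hypothetical, as is the 2 ms budget:

#include <time.h>

enum { SVC_FFT, SVC_MIX, SVC_ENC, SVC_DEC, N_SERVICES };

void submit_batch(int service);  /* hypothetical: dispatch one service's queue to the MPC */

/* One pass of the service loop: batch each service, then sleep out the
   remainder of the real time budget so every pass takes equal time. */
void service_pass(void)
{
    const long budget_ns = 2 * 1000 * 1000;  /* 2 ms, per the text */
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (int s = 0; s < N_SERVICES; s++)
        submit_batch(s);                     /* all requests of one kind at once */

    clock_gettime(CLOCK_MONOTONIC, &now);
    long spent_ns = (now.tv_sec - start.tv_sec) * 1000000000L
                  + (now.tv_nsec - start.tv_nsec);
    if (spent_ns < budget_ns) {
        struct timespec rest = { 0, budget_ns - spent_ns };
        nanosleep(&rest, NULL);
    }
}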


It is noted that the present invention is not limited to nVidia Tesla® parallel processing technology, and may make use of various other technologies. For example, the Intel Larrabee GPU technology, which parallelizes a number of P54C processors, may also be employed, as well as ATI CTM technology (ati.amd.com/technology/streamcomputing/index.html, ati.amd.com/technology/streamcomputing/resources.html, each of which, including linked resources, is expressly incorporated herein by reference), and other known technologies.


The following is some pseudocode illustrating embodiments of the invention as implemented in software. The disclosure of a software embodiment does not preclude the possibility that the invention might be implemented in hardware.


Embodiment 1

The present example provides computer executable code, which is stored in a computer readable medium, for execution on a programmable processor, to implement an embodiment of the invention. The computer is, for example, an Intel dual core processor-based machine, with one or more nVidia Tesla® compatible cards in PCIe x16 slots, for example, an nVidia C870 or C1060 processor. The system typically stores executable code on rotating magnetic storage media with a SATA-300 interface, i.e., a so-called hard disk drive, though other memory media, such as optical media, solid state storage, or other known computer readable media may be employed. Indeed, the instructions may be provided to the processors as electromagnetic signals communicated through a vacuum or a conductive or dielectric medium. The nVidia processor typically relies on DDR3 memory, while the main processor typically relies on DDR2 memory, though the type of random-access memory is non-critical. The telephony signals for processing may be received over a T1, T3, optical fiber, Ethernet, or other communications medium and/or protocol.


Data Structures to be Used by Module 108


RQueueType Structure // Job Request Queue


ServiceType


ChannelID // Channel Identifier


VoiceData // Input Data


Output // Output Data


End Structure


// This embodiment uses a separate queue for each type of service to be requested.


// The queues have 200 elements in them. This number is arbitrary and could be adjusted


// by the designer depending on anticipated call volumes and numbers of processors available


// on the MPC. Generally, the number does not have to be as large as the total of number


// of simultaneous calls anticipated, because not all of those calls will be requesting services


// at the same time.


RQueueType RQueueFFT[200] // Maximum of 200 Requests FFT


RQueueType RQueueMIX[200] // Maximum of 200 Requests MIX


RQueueType RQueueENC[200] // Maximum of 200 Requests ENC


RQueueType RQueueDEC[200] // Maximum of 200 Requests DEC


Procedures to be Used by Module 108


// Initialization Function


Init: Initialize Request Queue

    • Initialize Service Entry
    • Start Service Poll Loop


      // Service Request Function


ReqS: Case ServiceType

    • FFT: Lock RQueueFFT
      • Insert Service Information into RQueueFFT
      • Unlock RQueueFFT
    • MIX: Lock RQueueMIX
      • Insert Service Information into RQueueMIX
      • Unlock RQueueMIX
    • ENC: Lock RQueueENC
      • Insert Service Information into RQueueENC
      • Unlock RQueueENC
    • DEC: Lock RQueueDEC
      • Insert Service Information into RQueueDEC
      • Unlock RQueueDEC
    • End Case
    • Wait for completion of Service
    • Return output


      // Service Poll Loop


      // This loop is not called by the other procedures. It runs independently. It will keep track of


      // where the parallel processors are in their processing. The host will load all the requests for a


      // particular service into the buffer. Then it will keep track of when the services are completed


      // and load new requests into the buffer.


      SerPL:


      Get timestamp and store in St


// Let's do FFT/FHT


Submit RQueueFFT with FFT code to GPU


For all element in RQueueFFT

    • Signal Channel of completion of service


End For


// Let's do mixing


Submit RQueueMIX with MIXING code to GPU


For all element in RQueueMIX

    • Signal Channel of completion of service


End For


// Let's do encoding


Submit RQueueENC with ENCODING code to GPU


For all element in RQueueENC

    • Signal Channel of completion of service


End For


// Let's do decoding


Submit RQueueDEC with DECODING code to GPU


For all element in RQueueDEC

    • Signal Channel of completion of service


End For


// Make sure it takes the same amount of time for every pass


Compute time difference between now and St


Sleep that amount of time


Goto SerPL // second pass


Examples of Code in Application Programs 106 for Calling the Routines Above


Example for Calling “Init”


// we have to initialize PStar before we can use it


Call Init


Example for Requesting an FFT


// use FFT service for multitone detection


Allocate RD as RQueueType


RD.Service=FFT


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


Scan RD.Output for presence of our tones


Example for Requesting Encoding


// use Encoding service


Allocate RD as RQueueType


RD.Service=ENCODE


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


// RD.Output contains encoded/compressed data


Example for Requesting Decoding


// use Decoding service


Allocate RD as RQueueType


RD.Service=DECODE


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


// RD.Output contains decoded data
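
For concreteness, the RQueueType structure and the locking request path of Embodiment 1 might be rendered in C with POSIX threads roughly as follows; this is a sketch under stated assumptions (names, sizes, and the completion-wait mechanism are illustrative), not the patented implementation itself:

#include <pthread.h>

typedef enum { SVC_FFT, SVC_MIX, SVC_ENC, SVC_DEC } ServiceType;

typedef struct {               /* RQueueType */
    ServiceType service;
    int         channel_id;    /* ChannelID */
    float      *voice_data;    /* VoiceData: input time slice */
    float      *output;        /* Output: result from the MPC */
} RQueueEntry;

/* One queue per service, as in Embodiment 1; 200 is the arbitrary
   depth chosen in the pseudocode above. */
static RQueueEntry     rqueue_fft[200];
static int             rqueue_fft_len = 0;
static pthread_mutex_t rqueue_fft_lock = PTHREAD_MUTEX_INITIALIZER;

/* ReqS, FFT case: lock, insert, unlock, then wait for the poll loop
   to signal completion (wait mechanism omitted here). */
void request_fft(const RQueueEntry *e)
{
    pthread_mutex_lock(&rqueue_fft_lock);
    if (rqueue_fft_len < 200)
        rqueue_fft[rqueue_fft_len++] = *e;
    pthread_mutex_unlock(&rqueue_fft_lock);
    /* wait_for_completion(e->channel_id); -- hypothetical */
}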


Embodiment 2

The second embodiment may employ similar hardware to Embodiment 1.


// This embodiment is slower, but also uses less memory than Embodiment 1 above


Data Structures to be Used by Module 108






    • RQueueType Structure // Job Request Queue

    • ServiceType

    • ChannelID // Channel Identifier

    • VoiceData // Input Data

    • Output // Output Data





End Structure


// This embodiment uses a single queue, but stores other data in a temporary queue


// when the single queue is not available. This is less memory intensive, but slower.


RQueueType RQueue[200] // Maximum of 200 Requests


Procedures to be Used by Module 108


// Initialization Function


Init: Initialize Request Queue

    • Initialize Service Entry
    • Start Service Poll Loop


// Service Request Function


ReqS: Lock RQueue

    • Insert Service Information into RQueue
    • Unlock RQueue
    • Wait for completion of Service
    • Return output


// Service Poll Loop


// to run continuously


SerPL: Get timestamp and store in St

    • // Let's do FFT/FHT
    • For all element in RQueue where ServiceType=FFT
      • Copy Data To TempRQueue
    • End For
    • Submit TempRQueue with FFT code to GPU
    • For all element in TempRQueue
      • Move TempRQueue.output to RQueue.output
      • Signal Channel of completion of service
    • End For
    • // Let's do mixing
    • For all element in RQueue where ServiceType=MIXING
      • Copy Data To TempRQueue
    • End For
    • Submit TempRQueue with MIXING code to GPU
    • For all element in TempRQueue
      • Move TempRQueue.output to RQueue.output
      • Signal Channel of completion of service
    • End For
    • // Let's do encoding
    • For all element in RQueue where ServiceType=ENCODE
      • Copy Data To TempRQueue
    • End For
    • Submit TempRQueue with ENCODING code to GPU
    • For all element in TempRQueue
      • Move TempRQueue.output to RQueue.output
      • Signal Channel of completion of service
    • End For
    • // Let's do decoding
    • For all element in RQueue where ServiceType=DECODE
      • Copy Data To TempRQueue
    • End For
    • Submit TempRQueue with DECODING code to GPU
    • For all element in TempRQueue
      • Move TempRQueue.output to RQueue.output
      • Signal Channel of completion of service
    • End For
    • // Make sure it takes the same amount of time for every pass
    • Compute time difference between now and St
    • Sleep that amount of time
    • Goto SerPL // second pass


      Examples of Code in the Application Programs 106 for Calling the Routines Above


      Example for Calling “init”


// we have to initialize PStar before we can use it


Call Init


Example for Calling “FFT”


// use FFT service for multitone detection


Allocate RD as RQueueType


RD.Service=FFT


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


Scan RD.Output for presence of our tones


Example for Calling Encoding


// use Encoding service


Allocate RD as RQueueType


RD.Service=ENCODE


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


// RD.Output contains encoded/compressed data


Example for Calling Decoding


// use Decoding service


Allocate RD as RQueueType


RD.Service=DECODE


RD.ChannelID=Current Channel ID


RD.Input=Voice Data


Call ReqS(RD)


// RD.Output contains decoded data


While the embodiment discussed above uses a separate host and massively parallel processing array, it is clear that the processing array may also execute general purpose code and support general purpose or application-specific operating systems, albeit with reduced efficiency as compared to an unbranched signal processing algorithm. Therefore, it is possible to employ a single processor core and memory pool, thus reducing system cost and simplifying system architecture. Indeed, one or more multiprocessors may be dedicated to signal processing, and other(s) to system control, coordination, and logical analysis and execution. In such a case, the functions identified above as being performed in the host processor would be performed in the array, and, of course, the transfers across the bus separating the two would not be required.


From a review of the present disclosure, other modifications will be apparent to persons skilled in the art. Such modifications may involve other features which are already known in the design, manufacture and use of telephony engines and parallel processing and which may be used instead of or in addition to features already described herein. Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure of the present application also includes any novel feature or novel combination of features disclosed herein either explicitly or implicitly or any generalization thereof, whether or not it mitigates any or all of the same technical problems as does the present invention. The applicants hereby give notice that new claims may be formulated to such features during the prosecution of the present application or any further application derived therefrom.


The word “comprising”, “comprise”, or “comprises” as used herein should not be viewed as excluding additional elements. The singular article “a” or “an” as used herein should not be viewed as excluding a plurality of elements. The word “or” should be construed as an inclusive or, in other words as “and/or”.

Claims
  • 1. A method for processing signals, comprising: (a) receiving data representing a time slice of a stream of time-sequential information for each of a plurality of streams of time-sequential information;(b) automatically performing at least one transform process on the received time slice of the stream of time-sequential information for each of the plurality of streams of time-sequential information, to produce transformed data, with at least one single-instruction, multiple-data type parallel processor having a plurality of processing cores concurrently executing the same at least one transform process for each respective stream of time-sequential information, under a common set of instructions;(c) making at least one decision based on the transformed data of each time slice, with at least one single-instruction, multiple-data type parallel processor having a plurality of processing cores concurrently executing the same at least one transform process for each respective stream of time-sequential information, under a common set of instructions; and(d) communicating information representing the decision through a digital communication interface.
  • 2. The method according to claim 1, wherein each stream of time-sequential information comprises audio information, and the decision is dependent on audio information within the respective stream of time-sequential information.
  • 3. The method according to claim 1, wherein the at least one transform comprises a speech recognition primitive.
  • 4. The method according to claim 1, wherein the decision is made based on information in a single stream of time-sequential information, independent of information contained in the other streams of time-sequential information.
  • 5. The method according to claim 1, wherein the decision is made by a respective processing core of the at least one single-instruction, multiple-data type parallel processor having a plurality of processing cores for each respective time slice dependent solely on information in that respective time slice.
  • 6. The method according to claim 1, wherein the at least one transform process is selected from the group consisting of a time-to-frequency domain transform algorithm, a wavelet domain transform algorithm, and a Goertzel filter algorithm.
  • 7. The method according to claim 1, wherein the decision is made by the at least one single-instruction, multiple-data type parallel processor and represents a determination whether an in-band signal is present in a respective time slice.
  • 8. The method according to claim 1, wherein the plurality of streams of time-sequential information comprise a plurality of different streams of time-sequential information, each different stream of time-sequential information comprising a stream of audio information which is processed in parallel by the at least one single-instruction, multiple-data type parallel processor, and the decision is made based on the at least one transform process in parallel by the at least one single-instruction, multiple-data type parallel processor having the plurality of processing cores executing concurrently under the common set of instructions.
  • 9. The method according to claim 8, the common set of instructions controls the at least one single-instruction, multiple-data type parallel processor to perform at least a portion of a speech recognition process.
  • 10. The method according to claim 1, wherein the common set of instructions comprises program instructions to perform a telephony task.
  • 11. The method according to claim 1, wherein the at least one transform process comprises a Fourier transform.
  • 12. The method according to claim 1, wherein the at least one single-instruction, multiple-data type parallel processor comprises a multiprocessor having a common instruction decode unit for the plurality of processing cores, each processing core having a respective arithmetic logic unit, all arithmetic logic units within a respective multiprocessor being adapted to concurrently execute the instructions of the instruction sequence on the time slices of the plurality of streams of time-sequential information representing a plurality of digitized real-time analog channels.
  • 13. A non-transitory computer readable medium storing instructions for controlling a programmable processor to perform a method, comprising: (a) instructions for receiving data representing a plurality of respective time slices of a plurality of parallel streams of time-sequential information;(b) a common set of transform instructions for concurrently performing at least one transform process on the received plurality of respective time slices of the plurality of parallel streams of time-sequential information in parallel to produce respective transformed data for each respective time slice, with at least one single-instruction, multiple-data type parallel processor having a plurality of processing cores executing concurrently under the common set of transform instructions;(c) a common set of decisional instructions for concurrently making at least one decision based on the transformed data, with the at least one single-instruction, multiple-data type parallel processor having the plurality of processing cores executing concurrently under the common set of decisional instructions; and(d) instructions for communicating information representing the decision through a digital communication interface.
  • 14. The non-transitory computer readable medium according to claim 13, wherein the instructions for making the at least one decision based on the at least one transform process comprise a common set of decision instructions for the at least one single-instruction, multiple-data type parallel processor for concurrently making the at least one decision on the respective time slices of the plurality of parallel streams of time-sequential information in parallel under the common set of decision instructions.
  • 15. A system for processing streams of information, comprising: (a) an input port configured to receive data representing a plurality of time slices of a plurality of streams of time-sequential information; (b) at least one single-instruction, multiple-data type parallel processor having a plurality of processing cores synchronized to concurrently execute the same instruction, configured to: perform a transform process on the plurality of time slices to produce transformed data, the transform process being performed by concurrent execution of a common set of transform instructions on the plurality of processing cores; and make at least one decision based on the transformed data of the plurality of time slices, the decision being made by concurrent execution of a common set of decision instructions on the plurality of processing cores; and (c) an output port configured to communicate information representing the decision through a digital communication interface.
  • 16. The system according to claim 15, wherein the plurality of streams of time-sequential information comprise signals digitized at a sampling rate, and the decision is dependent on values of the signals digitized at the sampling rate.
  • 17. The system according to claim 16, wherein the plurality of streams of time-sequential information comprise a plurality of audio streams, and the at least one decision comprises a determination of whether an in-band audio signal is present in a respective time slice of a respective stream of time-sequential information.
  • 18. The system according to claim 15, wherein the common set of instructions is adapted to perform a plurality of concurrent tasks selected from the group consisting of an echo processing task, an audio compression task, an audio decompression task, a packet loss recovery task, a wavelet transform processing task, a combined time domain and frequency domain transform processing task, a speech recognition primitive task, and a stream combining task.
  • 19. The system according to claim 15, wherein the at least one single-instruction, multiple-data type parallel processor comprises a multiprocessor having a common instruction decode unit for the plurality of processing cores, each processing core having a respective arithmetic logic unit, all arithmetic logic units within a respective multiprocessor being adapted to concurrently execute the respective instructions of the common set of instructions.
  • 20. The system according to claim 15, wherein the single-instruction, multiple-data type parallel processor comprises a Peripheral Component Interconnect Express (PCIe) interface graphic processing unit of a computer system, which operates under control of a central processing unit and receives the plurality of time slices of a plurality of streams of time-sequential information by communication through the Peripheral Component Interconnect Express (PCIe) interface.
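
The Goertzel filter recited in claim 6 is a standard single-bin spectral estimator, and it maps naturally onto the one-core-per-channel execution model of claims 5, 7 and 12: every core runs the identical recurrence on its own channel's time slice and then makes a per-slice decision. The CUDA sketch below is illustrative only, not the patented implementation; the kernel name goertzelDecide, the 8 kHz sampling rate, the 205-sample slice length, the interleaved buffer layout, and the detection threshold are all assumptions made for the example.

    // goertzel_cpt.cu -- illustrative sketch only, not the patented implementation.
    // One thread per audio channel; every thread executes the same instruction
    // sequence (SIMT), mirroring the "common set of transform instructions".
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cmath>

    #define SAMPLE_RATE 8000.0f  // assumed PSTN sampling rate
    #define SLICE_LEN   205      // assumed samples per time slice

    __global__ void goertzelDecide(const float *slices,    // SLICE_LEN x numCh, interleaved
                                   unsigned char *present, // one decision per channel
                                   int numCh, float targetHz, float threshold)
    {
        int ch = blockIdx.x * blockDim.x + threadIdx.x;
        if (ch >= numCh) return;

        // Transform step: Goertzel recurrence for one spectral bin.
        int   k     = (int)(0.5f + SLICE_LEN * targetHz / SAMPLE_RATE);
        float w     = 2.0f * 3.14159265f * (float)k / SLICE_LEN;
        float coeff = 2.0f * __cosf(w);
        float s1 = 0.0f, s2 = 0.0f;
        for (int n = 0; n < SLICE_LEN; ++n) {
            // Interleaved layout keeps loads coalesced across the warp.
            float s0 = slices[n * numCh + ch] + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        float power = s1 * s1 + s2 * s2 - coeff * s1 * s2;

        // Decision step (cf. claims 5 and 7): flag in-band energy, using only
        // information contained in this respective time slice.
        present[ch] = (power > threshold) ? 1 : 0;
    }

Because every channel executes the identical instruction sequence with no data-dependent branching inside the loop, the warps never diverge, which is what allows a common instruction decode unit to serve all of the arithmetic logic units as described in claim 12.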
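Claim 20 places the parallel processor on a PCIe graphics card operating under the control of a host CPU. What follows is a minimal host-side sketch of that data path, assuming it is appended to the same .cu file as the kernel above and built with nvcc; the channel count, the 440 Hz target tone, and the threshold value are illustrative choices, not parameters taken from the patent.

    // Host-side driver: time slices cross the PCIe interface to the GPU,
    // per-channel decisions come back. Build with: nvcc goertzel_cpt.cu
    int main()
    {
        const int numCh = 1024;  // assumed channel count
        const size_t inBytes = (size_t)SLICE_LEN * numCh * sizeof(float);

        float *hSlices; unsigned char *hPresent;
        cudaMallocHost(&hSlices, inBytes);       // pinned memory speeds PCIe DMA
        cudaMallocHost(&hPresent, (size_t)numCh);

        // Stand-in input: a 440 Hz tone on every channel.
        for (int n = 0; n < SLICE_LEN; ++n)
            for (int ch = 0; ch < numCh; ++ch)
                hSlices[n * numCh + ch] =
                    sinf(2.0f * 3.14159265f * 440.0f * n / SAMPLE_RATE);

        float *dSlices; unsigned char *dPresent;
        cudaMalloc(&dSlices, inBytes);
        cudaMalloc(&dPresent, (size_t)numCh);

        cudaMemcpy(dSlices, hSlices, inBytes, cudaMemcpyHostToDevice); // over PCIe
        goertzelDecide<<<(numCh + 255) / 256, 256>>>(dSlices, dPresent,
                                                     numCh, 440.0f, 1000.0f);
        cudaMemcpy(hPresent, dPresent, (size_t)numCh, cudaMemcpyDeviceToHost);

        printf("channel 0 tone present: %d\n", hPresent[0]);
        cudaFree(dSlices); cudaFree(dPresent);
        cudaFreeHost(hSlices); cudaFreeHost(hPresent);
        return 0;
    }

Batching every channel into a single transfer and a single kernel launch amortizes the PCIe round trip, which is one practical motivation for operating on time slices of many streams at once rather than on one channel at a time.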
RELATED APPLICATIONS

The present application is a Division of U.S. patent application Ser. No. 16/186,252, filed Nov. 9, 2018, now U.S. Pat. No. 10,803,883, issued Oct. 13, 2020, which is a Continuation of U.S. patent application Ser. No. 15/633,211, filed Jun. 26, 2017, now U.S. Pat. No. 10,127,925, issued Nov. 13, 2018, which is a Continuation of U.S. patent application Ser. No. 14/744,377, filed Jun. 19, 2015, now U.S. Pat. No. 9,692,908, issued Jun. 27, 2017, which is a Division of U.S. patent application Ser. No. 13/968,522, filed Aug. 16, 2013, now U.S. Pat. No. 9,064,496, issued Jun. 23, 2015, which is a Division of U.S. patent application Ser. No. 12/337,236, filed Dec. 17, 2008, now U.S. Pat. No. 8,515,052, issued Aug. 20, 2013, which is a Nonprovisional of, and claims the benefit of priority from, U.S. Provisional Patent Application No. 61/014,106, filed Dec. 17, 2007, each of which is expressly incorporated herein by reference in its entirety.

US Referenced Citations (683)
Number Name Date Kind
5164990 Pazienti et al. Nov 1992 A
5611038 Shaw et al. Mar 1997 A
5729659 Potter Mar 1998 A
5754456 Eitan et al. May 1998 A
5774357 Hoffberg et al. Jun 1998 A
5875108 Hoffberg et al. Feb 1999 A
5901246 Hoffberg et al. May 1999 A
5968167 Whittaker et al. Oct 1999 A
5983161 Lemelson et al. Nov 1999 A
6055619 North et al. Apr 2000 A
6061711 Song May 2000 A
6081750 Hoffberg et al. Jun 2000 A
6094637 Hong Jul 2000 A
6121998 Voois et al. Sep 2000 A
6124882 Voois et al. Sep 2000 A
6226389 Lemelson et al. May 2001 B1
6275239 Ezer et al. Aug 2001 B1
6275773 Lemelson et al. Aug 2001 B1
6353843 Chehrazi et al. Mar 2002 B1
6400996 Hoffberg et al. Jun 2002 B1
6418424 Hoffberg et al. Jul 2002 B1
6487500 Lemelson et al. Nov 2002 B2
6493467 Okuda Dec 2002 B1
6507614 Li Jan 2003 B1
6553130 Lemelson et al. Apr 2003 B1
6630964 Burns et al. Oct 2003 B2
6636986 Norman Oct 2003 B2
6640145 Hoffberg et al. Oct 2003 B2
6654783 Hubbard Nov 2003 B1
6738358 Bist et al. May 2004 B2
6748020 Eifrig et al. Jun 2004 B1
6754279 Zhou et al. Jun 2004 B2
6847365 Miller et al. Jan 2005 B1
6850252 Hoffberg Feb 2005 B1
6889312 McGrath et al. May 2005 B1
6906639 Lemelson et al. Jun 2005 B2
6907518 Lohman et al. Jun 2005 B1
6931370 McDowell Aug 2005 B1
6948050 Gove Sep 2005 B1
6959372 Hobson et al. Oct 2005 B1
6981132 Christie et al. Dec 2005 B2
6981134 Yamamura Dec 2005 B2
7003093 Prabhu et al. Feb 2006 B2
7003450 Sadri et al. Feb 2006 B2
7043006 Chambers et al. May 2006 B1
7136710 Hoffberg et al. Nov 2006 B1
7158141 Chung et al. Jan 2007 B2
7210139 Hobson et al. Apr 2007 B2
7218645 Lotter et al. May 2007 B2
7219085 Buck et al. May 2007 B2
7234141 Coles et al. Jun 2007 B2
7242988 Hoffberg et al. Jul 2007 B1
7286380 Hsu et al. Oct 2007 B2
7317840 DeCegama Jan 2008 B2
7333036 Oh et al. Feb 2008 B2
7418008 Lotter et al. Aug 2008 B2
7430578 Debes et al. Sep 2008 B2
7451005 Hoffberg et al. Nov 2008 B2
7461426 Gould et al. Dec 2008 B2
7496917 Brokenshire et al. Feb 2009 B2
7506135 Mimar Mar 2009 B1
7539714 Macy, Jr. et al. May 2009 B2
7548586 Mimar Jun 2009 B1
7565287 Sadri et al. Jul 2009 B2
7602740 Master et al. Oct 2009 B2
7609297 Master et al. Oct 2009 B2
7630569 DeCegama Dec 2009 B2
7650319 Hoffberg et al. Jan 2010 B2
7657881 Nagendra et al. Feb 2010 B2
7665041 Wilson et al. Feb 2010 B2
7689935 Gould et al. Mar 2010 B2
7742531 Xue et al. Jun 2010 B2
7777749 Chung et al. Aug 2010 B2
7805477 Oh et al. Sep 2010 B2
7813822 Hoffberg Oct 2010 B1
7840778 Hobson et al. Nov 2010 B2
7890549 Elad et al. Feb 2011 B2
7890648 Gould et al. Feb 2011 B2
7904187 Hoffberg et al. Mar 2011 B2
7908244 Royo et al. Mar 2011 B2
7953021 Lotter et al. May 2011 B2
7953768 Gould et al. May 2011 B2
7966078 Hoffberg et al. Jun 2011 B2
7974297 Jing et al. Jul 2011 B2
7974714 Hoffberg Jul 2011 B2
7979574 Gillo et al. Jul 2011 B2
7987003 Hoffberg et al. Jul 2011 B2
8005147 Alvarez et al. Aug 2011 B2
8024549 Stewart Sep 2011 B2
8031060 Hoffberg et al. Oct 2011 B2
8032477 Hoffberg et al. Oct 2011 B1
8046313 Hoffberg et al. Oct 2011 B2
8064952 Rofougaran et al. Nov 2011 B2
8068683 DeCegama Nov 2011 B2
8069334 Mimar Nov 2011 B2
8073704 Suzuki Dec 2011 B2
8085834 Hanke et al. Dec 2011 B2
8095782 Danskin Jan 2012 B1
8117370 Rofougaran et al. Feb 2012 B2
8122143 Gould et al. Feb 2012 B2
8139608 Lotter et al. Mar 2012 B2
8190854 Codrescu et al. May 2012 B2
8194593 Jing et al. Jun 2012 B2
8200730 Oh et al. Jun 2012 B2
8214626 Macy, Jr. et al. Jul 2012 B2
8223786 Jing et al. Jul 2012 B2
8229134 Duraiswami et al. Jul 2012 B2
8253750 Huang et al. Aug 2012 B1
8280232 McCrossan et al. Oct 2012 B2
8306387 Yamashita et al. Nov 2012 B2
8340960 Sadri et al. Dec 2012 B2
8346838 Debes et al. Jan 2013 B2
8364136 Hoffberg et al. Jan 2013 B2
8369967 Hoffberg et al. Feb 2013 B2
8374242 Lewis et al. Feb 2013 B1
8407263 Elad et al. Mar 2013 B2
8412981 Munoz et al. Apr 2013 B2
8425322 Gillo et al. Apr 2013 B2
8429625 Liege Apr 2013 B2
8437407 Rosenzweig et al. May 2013 B2
8442829 Chen May 2013 B2
8479175 Heuler Jul 2013 B1
8484154 You et al. Jul 2013 B2
8488683 Xue et al. Jul 2013 B2
8502825 Zalewski et al. Aug 2013 B2
8504374 Potter Aug 2013 B2
8510707 Heuler Aug 2013 B1
8515052 Wu Aug 2013 B2
8526623 Franck et al. Sep 2013 B2
8539039 Sheu et al. Sep 2013 B2
8542732 Lewis et al. Sep 2013 B1
8549521 Brokenshire et al. Oct 2013 B2
8555239 Heuler Oct 2013 B1
8559400 Lotter et al. Oct 2013 B2
8565519 Weybrew Oct 2013 B2
8566259 Chong et al. Oct 2013 B2
8583263 Hoffberg et al. Nov 2013 B2
8605910 Franck et al. Dec 2013 B2
8620772 Owen Dec 2013 B2
8676574 Kalinli Mar 2014 B2
8688959 Macy, Jr. et al. Apr 2014 B2
8693534 Lewis et al. Apr 2014 B1
8700552 Yu et al. Apr 2014 B2
8713285 Rakib et al. Apr 2014 B2
8719437 Bazzarella, Jr. et al. May 2014 B1
8731945 Potter May 2014 B2
8745541 Wilson et al. Jun 2014 B2
8755515 Wu Jun 2014 B1
8756061 Kalinli et al. Jun 2014 B2
8759661 Van Buskirk et al. Jun 2014 B2
8762852 Davis et al. Jun 2014 B2
8768097 Wang et al. Jul 2014 B2
8768142 Ju et al. Jul 2014 B1
8788951 Zalewski et al. Jul 2014 B2
8789144 Mazzaferri et al. Jul 2014 B2
8811470 Kimura et al. Aug 2014 B2
8819172 Davis et al. Aug 2014 B2
8825482 Hernandez-Abrego et al. Sep 2014 B2
8831279 Rodriguez et al. Sep 2014 B2
8831760 Gupta et al. Sep 2014 B2
8849088 Sasaki et al. Sep 2014 B2
8861898 Candelore et al. Oct 2014 B2
8862909 Branover et al. Oct 2014 B2
8867731 Lum et al. Oct 2014 B2
8908631 Jing et al. Dec 2014 B2
8935468 Maydan et al. Jan 2015 B2
8949633 Belmont et al. Feb 2015 B2
8972984 Meisner et al. Mar 2015 B2
8988970 O'Donovan et al. Mar 2015 B2
9002998 Master et al. Apr 2015 B2
9015093 Commons Apr 2015 B1
9036902 Nathan et al. May 2015 B2
9047090 Kottilingal et al. Jun 2015 B2
9053562 Rabin et al. Jun 2015 B1
9064496 Wu Jun 2015 B1
9075697 Powell et al. Jul 2015 B2
9076449 Rathi Jul 2015 B2
9105083 Rhoads et al. Aug 2015 B2
9124798 Hanna Sep 2015 B2
9124850 Stevenson et al. Sep 2015 B1
9143780 Lewis et al. Sep 2015 B1
9148664 Lewis et al. Sep 2015 B1
9172923 Prins et al. Oct 2015 B1
9183580 Rhoads et al. Nov 2015 B2
9185379 Gould et al. Nov 2015 B2
9202254 Rodriguez et al. Dec 2015 B2
9210266 Lum et al. Dec 2015 B2
9218530 Davis et al. Dec 2015 B2
9225822 Davis et al. Dec 2015 B2
9229718 Macy, Jr. et al. Jan 2016 B2
9229719 Macy, Jr. et al. Jan 2016 B2
9239951 Hoffberg et al. Jan 2016 B2
9240021 Rodriguez Jan 2016 B2
9247226 Gould et al. Jan 2016 B2
9251115 Bursell Feb 2016 B2
9251783 Kalinli-Akbacak et al. Feb 2016 B2
9270678 Mazzaferri et al. Feb 2016 B2
9292895 Rodriguez et al. Mar 2016 B2
9293109 Duluk, Jr. et al. Mar 2016 B2
9324335 Rathi Apr 2016 B2
9330427 Conwell May 2016 B2
9354778 Cornaby et al. May 2016 B2
9361259 Kimura et al. Jun 2016 B2
9367886 Davis et al. Jun 2016 B2
9384009 Belmont et al. Jul 2016 B2
9405363 Hernandez-Abrego et al. Aug 2016 B2
9405501 Ahmed et al. Aug 2016 B2
9411983 Mangalampalli et al. Aug 2016 B2
9418616 Duluk, Jr. et al. Aug 2016 B2
9424618 Rodriguez Aug 2016 B2
9456131 Tran Sep 2016 B2
9477472 Macy, Jr. et al. Oct 2016 B2
9478256 Ju et al. Oct 2016 B1
9484046 Knudson et al. Nov 2016 B2
9495526 Hanna Nov 2016 B2
9501281 Gopal et al. Nov 2016 B2
9516022 Borzycki et al. Dec 2016 B2
9520128 Bauer et al. Dec 2016 B2
9535563 Hoffberg et al. Jan 2017 B2
9547873 Rhoads Jan 2017 B2
9552130 Momchilov Jan 2017 B2
RE46310 Hoffberg et al. Feb 2017 E
9569778 Hanna Feb 2017 B2
9575765 Forsyth et al. Feb 2017 B2
9600919 Imbruce et al. Mar 2017 B1
9632792 Forsyth et al. Apr 2017 B2
9648169 Lum et al. May 2017 B2
9667985 Prins et al. May 2017 B1
9672811 Kalinli-Akbacak Jun 2017 B2
9673985 Mangalampalli et al. Jun 2017 B2
9678753 Macy, Jr. et al. Jun 2017 B2
9692908 Wu Jun 2017 B1
9706292 Duraiswami et al. Jul 2017 B2
9720692 Julier et al. Aug 2017 B2
9727042 Hoffberg-Borghesani et al. Aug 2017 B2
9804848 Julier et al. Oct 2017 B2
9824668 Deering et al. Nov 2017 B2
9830950 Rodriguez et al. Nov 2017 B2
9832543 Wu Nov 2017 B1
9858076 Macy, Jr. et al. Jan 2018 B2
9883040 Strong et al. Jan 2018 B2
9888051 Rosenzweig et al. Feb 2018 B1
9891883 Sharma et al. Feb 2018 B2
9930186 Bandyopadhyay et al. Mar 2018 B2
9940922 Schissler et al. Apr 2018 B1
9977644 Schissler et al. May 2018 B2
9986324 Pergament et al. May 2018 B2
10003550 Babcock et al. Jun 2018 B1
10038783 Wilcox et al. Jul 2018 B2
10049657 Kalinli-Akbacak Aug 2018 B2
10051298 Bear et al. Aug 2018 B2
10055733 Hanna Aug 2018 B2
10083689 Bocklet et al. Sep 2018 B2
10127042 Yap et al. Nov 2018 B2
10127624 Lassahn et al. Nov 2018 B1
10127925 Wu Nov 2018 B1
10141009 Khoury et al. Nov 2018 B2
10141033 Hinton et al. Nov 2018 B2
10142463 Douglas Nov 2018 B2
10152822 Surti et al. Dec 2018 B2
10153011 Hinton et al. Dec 2018 B2
10157162 Chen Dec 2018 B2
10163468 Hinton et al. Dec 2018 B2
10166999 Weng Jan 2019 B1
10170115 Bocklet et al. Jan 2019 B2
10170165 Hinton et al. Jan 2019 B2
10181339 Rodriguez et al. Jan 2019 B2
10185670 Litichever et al. Jan 2019 B2
10223112 Abraham et al. Mar 2019 B2
10228909 Anderson et al. Mar 2019 B2
10229670 You et al. Mar 2019 B2
10255911 Malinowski et al. Apr 2019 B2
10263842 Bursell Apr 2019 B2
10275216 Anderson et al. Apr 2019 B2
10306249 Prins et al. May 2019 B2
10319374 Catanzaro et al. Jun 2019 B2
10325397 Imbruce et al. Jun 2019 B2
10331451 Yap et al. Jun 2019 B2
10332509 Catanzaro et al. Jun 2019 B2
10334348 Pergament et al. Jun 2019 B2
10361802 Hoffberg-Borghesani et al. Jul 2019 B1
10362172 Strong et al. Jul 2019 B2
10376785 Hernandez-Abrego et al. Aug 2019 B2
10382623 Lev-Tov et al. Aug 2019 B2
10387148 Ould-Ahmed-Vall et al. Aug 2019 B2
10387149 Ould-Ahmed-Vall et al. Aug 2019 B2
10388272 Thomson et al. Aug 2019 B1
10389982 Fu et al. Aug 2019 B1
10389983 Fu et al. Aug 2019 B1
10424048 Calhoun et al. Sep 2019 B1
10424289 Kalinli-Akbacak Sep 2019 B2
10425222 Gueron et al. Sep 2019 B2
10447468 Gueron et al. Oct 2019 B2
10452398 Hughes et al. Oct 2019 B2
10452555 Hughes Oct 2019 B2
10455088 Tapuhi et al. Oct 2019 B2
10459685 Sharma et al. Oct 2019 B2
10459877 Uliel et al. Oct 2019 B2
10467144 Hughes Nov 2019 B2
10469249 Gueron et al. Nov 2019 B2
10469664 Pirat et al. Nov 2019 B2
10474466 Macy, Jr. et al. Nov 2019 B2
10476667 Gueron et al. Nov 2019 B2
10482177 Hahn Nov 2019 B2
10510000 Commons Dec 2019 B1
10511708 Rangarajan et al. Dec 2019 B2
10517021 Feldman et al. Dec 2019 B2
10524024 Wu Dec 2019 B1
10536672 Fu et al. Jan 2020 B2
10536673 Noone Jan 2020 B2
10542135 Douglas Jan 2020 B2
10547497 Mostafa et al. Jan 2020 B1
10547811 Tran Jan 2020 B2
10547812 Tran Jan 2020 B2
10559307 Khaleghi Feb 2020 B1
10565354 Ray et al. Feb 2020 B2
10572251 Kapoor et al. Feb 2020 B2
10573312 Thomson et al. Feb 2020 B1
RE47908 Hoffberg et al. Mar 2020 E
10579219 Momchilov Mar 2020 B2
10581594 Wolrich et al. Mar 2020 B2
10587800 Boyce et al. Mar 2020 B2
10650807 Bocklet et al. May 2020 B2
10657779 Weber et al. May 2020 B2
10658007 Davis et al. May 2020 B2
RE48056 Hoffberg et al. Jun 2020 E
10672383 Thomson et al. Jun 2020 B1
10678851 Tcherechansky et al. Jun 2020 B2
10681313 Day Jun 2020 B1
10687145 Campbell Jun 2020 B1
10714077 Song et al. Jul 2020 B2
10715656 Douglas Jul 2020 B2
10715793 Rabin et al. Jul 2020 B1
10719433 Lassahn et al. Jul 2020 B2
10726792 Runyan et al. Jul 2020 B2
10732970 Abraham et al. Aug 2020 B2
10733116 Litichever et al. Aug 2020 B2
10735848 Pergament et al. Aug 2020 B2
10755718 Ge et al. Aug 2020 B2
10757161 Murgia et al. Aug 2020 B2
10777050 ap Dafydd et al. Sep 2020 B2
10803381 Rozen et al. Oct 2020 B2
10803883 Wu Oct 2020 B1
20020012398 Zhou et al. Jan 2002 A1
20020022927 Lemelson et al. Feb 2002 A1
20020064139 Bist et al. May 2002 A1
20020072898 Takamizawa Jun 2002 A1
20020085648 Burns et al. Jul 2002 A1
20020095617 Norman Jul 2002 A1
20020151992 Hoffberg et al. Oct 2002 A1
20020165709 Sadri et al. Nov 2002 A1
20030009656 Yamamura Jan 2003 A1
20030105788 Chatterjee Jun 2003 A1
20030115381 Coles et al. Jun 2003 A1
20030151608 Chung et al. Aug 2003 A1
20030179941 DeCegama Sep 2003 A1
20030219034 Lotter et al. Nov 2003 A1
20040001501 Delveaux Jan 2004 A1
20040001704 Chan et al. Jan 2004 A1
20040022416 Lemelson et al. Feb 2004 A1
20040054878 Debes Mar 2004 A1
20040189720 Wilson et al. Sep 2004 A1
20040233930 Colby Nov 2004 A1
20040267856 Macy, Jr. Dec 2004 A1
20050055208 Kibkalo et al. Mar 2005 A1
20050062746 Kataoka et al. Mar 2005 A1
20050071526 Brokenshire et al. Mar 2005 A1
20050125369 Buck et al. Jun 2005 A1
20050166227 Joshi Jul 2005 A1
20050222841 McDowell Oct 2005 A1
20050265577 DeCegama Dec 2005 A1
20060100865 Sadri et al. May 2006 A1
20060136712 Nagendra et al. Jun 2006 A1
20060140098 Champion et al. Jun 2006 A1
20060155398 Hoffberg et al. Jul 2006 A1
20060193383 Alvarez et al. Aug 2006 A1
20060200253 Hoffberg et al. Sep 2006 A1
20060200259 Hoffberg et al. Sep 2006 A1
20060212613 Stewart Sep 2006 A1
20060239471 Mao et al. Oct 2006 A1
20060253288 Chu et al. Nov 2006 A1
20070024472 Oh et al. Feb 2007 A1
20070027695 Oh et al. Feb 2007 A1
20070050834 Royo et al. Mar 2007 A1
20070053513 Hoffberg Mar 2007 A1
20070061022 Hoffberg-Borghesani et al. Mar 2007 A1
20070061023 Hoffberg et al. Mar 2007 A1
20070061142 Hernandez-Abrego et al. Mar 2007 A1
20070061735 Hoffberg et al. Mar 2007 A1
20070070038 Hoffberg et al. Mar 2007 A1
20070070079 Chung et al. Mar 2007 A1
20070070734 Hsu et al. Mar 2007 A1
20070106684 Gould et al. May 2007 A1
20070110053 Soni et al. May 2007 A1
20070113038 Hobson et al. May 2007 A1
20070147568 Harris et al. Jun 2007 A1
20070230586 Shen et al. Oct 2007 A1
20070250681 Horvath et al. Oct 2007 A1
20070286275 Kimura et al. Dec 2007 A1
20080059763 Bivolarski Mar 2008 A1
20080068389 Bakalash et al. Mar 2008 A1
20080089672 Gould et al. Apr 2008 A1
20080092049 Gould et al. Apr 2008 A1
20080133895 Sivtsov et al. Jun 2008 A1
20080163255 Munoz et al. Jul 2008 A1
20080168443 Brokenshire et al. Jul 2008 A1
20080193050 Weybrew Aug 2008 A1
20080214253 Gillo et al. Sep 2008 A1
20080215679 Gillo et al. Sep 2008 A1
20080215971 Gillo et al. Sep 2008 A1
20080215972 Zalewski et al. Sep 2008 A1
20080226119 Candelore et al. Sep 2008 A1
20080235582 Zalewski et al. Sep 2008 A1
20080281915 Elad et al. Nov 2008 A1
20090016691 Gould et al. Jan 2009 A1
20090028347 Duraiswami et al. Jan 2009 A1
20090055744 Sawada et al. Feb 2009 A1
20090119379 Read et al. May 2009 A1
20090132243 Suzuki May 2009 A1
20090154690 Wu Jun 2009 A1
20090160863 Frank Jun 2009 A1
20090196280 Rofougaran Aug 2009 A1
20090197642 Rofougaran et al. Aug 2009 A1
20090198855 Rofougaran et al. Aug 2009 A1
20090208189 Sasaki et al. Aug 2009 A1
20090216641 Hubbard Aug 2009 A1
20090238479 Jaggi et al. Sep 2009 A1
20090259463 Sadri et al. Oct 2009 A1
20090265523 Macy, Jr. et al. Oct 2009 A1
20090268945 Wilson et al. Oct 2009 A1
20090274202 Hanke et al. Nov 2009 A1
20090276606 Mimar Nov 2009 A1
20090316798 Mimar Dec 2009 A1
20090327661 Sperber et al. Dec 2009 A1
20100011042 Debes et al. Jan 2010 A1
20100054701 DeCegama Mar 2010 A1
20100070904 Zigon et al. Mar 2010 A1
20100076642 Hoffberg et al. Mar 2010 A1
20100092156 McCrossan et al. Apr 2010 A1
20100104263 McCrossan et al. Apr 2010 A1
20100111429 Wang et al. May 2010 A1
20100198592 Potter Aug 2010 A1
20100208905 Franck et al. Aug 2010 A1
20100211391 Chen Aug 2010 A1
20100217835 Rofougaran Aug 2010 A1
20100232370 Jing et al. Sep 2010 A1
20100232371 Jing et al. Sep 2010 A1
20100232396 Jing et al. Sep 2010 A1
20100232447 Jing et al. Sep 2010 A1
20100257089 Johnson Oct 2010 A1
20110043518 Von Borries et al. Feb 2011 A1
20110054915 Oh et al. Mar 2011 A1
20110066578 Chong et al. Mar 2011 A1
20110082877 Gupta et al. Apr 2011 A1
20110103488 Xue et al. May 2011 A1
20110145184 You et al. Jun 2011 A1
20110156896 Hoffberg et al. Jun 2011 A1
20110167110 Hoffberg et al. Jul 2011 A1
20110222372 O'Donovan et al. Sep 2011 A1
20110314093 Sheu et al. Dec 2011 A1
20120011170 Elad et al. Jan 2012 A1
20120028712 Zuili Feb 2012 A1
20120036016 Hoffberg et al. Feb 2012 A1
20120116559 Davis et al. May 2012 A1
20120134548 Rhoads et al. May 2012 A1
20120150651 Hoffberg et al. Jun 2012 A1
20120166187 Van Buskirk et al. Jun 2012 A1
20120208592 Davis et al. Aug 2012 A1
20120210233 Davis et al. Aug 2012 A1
20120224743 Rodriguez et al. Sep 2012 A1
20120253812 Kalinli et al. Oct 2012 A1
20120259638 Kalinli Oct 2012 A1
20120268241 Hanna et al. Oct 2012 A1
20120277893 Davis et al. Nov 2012 A1
20120280908 Rhoads et al. Nov 2012 A1
20120282905 Owen Nov 2012 A1
20120282911 Davis et al. Nov 2012 A1
20120284012 Rodriguez et al. Nov 2012 A1
20120284122 Brandis Nov 2012 A1
20120284339 Rodriguez Nov 2012 A1
20120284593 Rodriguez Nov 2012 A1
20120288114 Duraiswami et al. Nov 2012 A1
20120293643 Hanna Nov 2012 A1
20120297383 Meisner et al. Nov 2012 A1
20130006617 Sadri et al. Jan 2013 A1
20130018701 Dusig et al. Jan 2013 A1
20130031177 Willis et al. Jan 2013 A1
20130086185 Desmarais et al. Apr 2013 A1
20130138589 Yu et al. May 2013 A1
20130145180 Branover et al. Jun 2013 A1
20130152002 Menczel et al. Jun 2013 A1
20130162752 Herz et al. Jun 2013 A1
20130169838 Rodriguez et al. Jul 2013 A1
20130183952 Davis et al. Jul 2013 A1
20130243203 Franck et al. Sep 2013 A1
20130298033 Momchilov Nov 2013 A1
20130317816 Potter Nov 2013 A1
20140032624 Zohar et al. Jan 2014 A1
20140032881 Zohar et al. Jan 2014 A1
20140046673 Rathi Feb 2014 A1
20140047251 Kottilingal et al. Feb 2014 A1
20140053161 Sadowski Feb 2014 A1
20140055559 Huang et al. Feb 2014 A1
20140085501 Tran Mar 2014 A1
20140105022 Soni et al. Apr 2014 A1
20140109210 Borzycki et al. Apr 2014 A1
20140126715 Lum et al. May 2014 A1
20140149112 Kalinli-Akbacak May 2014 A1
20140156274 You et al. Jun 2014 A1
20140173452 Hoffberg et al. Jun 2014 A1
20140176588 Duluk, Jr. et al. Jun 2014 A1
20140176589 Duluk, Jr. et al. Jun 2014 A1
20140189231 Maydan et al. Jul 2014 A1
20140211718 Jing et al. Jul 2014 A1
20140258446 Bursell Sep 2014 A1
20140289816 Mazzaferri et al. Sep 2014 A1
20140300758 Tran Oct 2014 A1
20140310442 Kimura et al. Oct 2014 A1
20140320021 Conwell Oct 2014 A1
20140324596 Rodriguez Oct 2014 A1
20140324833 Davis et al. Oct 2014 A1
20140347272 Hernandez-Abrego et al. Nov 2014 A1
20140357312 Davis et al. Dec 2014 A1
20140369550 Davis et al. Dec 2014 A1
20150019530 Felch Jan 2015 A1
20150063557 Lum et al. Mar 2015 A1
20150072728 Rodriguez et al. Mar 2015 A1
20150073794 Kalinli-Akbacak et al. Mar 2015 A1
20150089197 Gopal et al. Mar 2015 A1
20150100809 Belmont et al. Apr 2015 A1
20150121039 Macy, Jr. et al. Apr 2015 A1
20150142618 Rhoads et al. May 2015 A1
20150154023 Macy, Jr. et al. Jun 2015 A1
20150163345 Cornaby et al. Jun 2015 A1
20150178081 Julier et al. Jun 2015 A1
20150178084 Julier et al. Jun 2015 A1
20150193194 Ahmed et al. Jul 2015 A1
20150256677 Konig et al. Sep 2015 A1
20150281853 Eisner et al. Oct 2015 A1
20150286873 Davis et al. Oct 2015 A1
20150310872 Rathi Oct 2015 A1
20160034248 Schissler et al. Feb 2016 A1
20160086600 Bauer et al. Mar 2016 A1
20160094491 Fedorov et al. Mar 2016 A1
20160103788 Forsyth et al. Apr 2016 A1
20160104165 Hanna Apr 2016 A1
20160110196 Forsyth et al. Apr 2016 A1
20160127184 Bursell May 2016 A1
20160165051 Lum et al. Jun 2016 A1
20160247160 Hanna Aug 2016 A1
20160254006 Rathi Sep 2016 A1
20160310847 Hernandez-Abrego et al. Oct 2016 A1
20160322082 Davis et al. Nov 2016 A1
20160337426 Shribman et al. Nov 2016 A1
20160378427 Sharma et al. Dec 2016 A1
20170019660 Deering et al. Jan 2017 A1
20170025119 Song et al. Jan 2017 A1
20170038929 Momchilov Feb 2017 A1
20170060857 Imbruce et al. Mar 2017 A1
20170109162 Yap et al. Apr 2017 A1
20170111506 Strong et al. Apr 2017 A1
20170111515 Bandyopadhyay et al. Apr 2017 A1
20170147343 Yap et al. May 2017 A1
20170148431 Catanzaro et al. May 2017 A1
20170148433 Catanzaro et al. May 2017 A1
20170192785 Uliel et al. Jul 2017 A1
20170193685 Imbruce et al. Jul 2017 A1
20170220929 Rozen et al. Aug 2017 A1
20170236006 Davis et al. Aug 2017 A1
20170238002 Prins et al. Aug 2017 A1
20170251295 Pergament et al. Aug 2017 A1
20170263240 Kalinli-Akbacak Sep 2017 A1
20170323638 Malinowski et al. Nov 2017 A1
20170351664 Hahn Dec 2017 A1
20170371829 Chen Dec 2017 A1
20180007587 Feldman et al. Jan 2018 A1
20180039497 Ould-Ahmed-Vall et al. Feb 2018 A1
20180041631 Douglas Feb 2018 A1
20180052686 Ould-Ahmed-Vall et al. Feb 2018 A1
20180063325 Wilcox et al. Mar 2018 A1
20180077380 Tran Mar 2018 A1
20180088943 Abraham et al. Mar 2018 A1
20180115751 Noone Apr 2018 A1
20180122429 Hinton et al. May 2018 A1
20180122430 Hinton et al. May 2018 A1
20180122432 Hinton et al. May 2018 A1
20180122433 Hinton et al. May 2018 A1
20180144435 Chen et al. May 2018 A1
20180152561 Strong et al. May 2018 A1
20180158463 Ge et al. Jun 2018 A1
20180174620 Davis et al. Jun 2018 A1
20180182388 Bocklet et al. Jun 2018 A1
20180198838 Murgia et al. Jul 2018 A1
20180225091 Anderson et al. Aug 2018 A1
20180225092 Anderson et al. Aug 2018 A1
20180225217 Hughes Aug 2018 A1
20180225218 Hughes Aug 2018 A1
20180225230 Litichever et al. Aug 2018 A1
20180246696 Sharma et al. Aug 2018 A1
20180261187 Barylski et al. Sep 2018 A1
20180270347 Rangarajan et al. Sep 2018 A1
20180279036 Pergament et al. Sep 2018 A1
20180286105 Surti et al. Oct 2018 A1
20180293362 Ray et al. Oct 2018 A1
20180295282 Boyce et al. Oct 2018 A1
20180299841 Appu et al. Oct 2018 A1
20180300617 McBride et al. Oct 2018 A1
20180301095 Runyan et al. Oct 2018 A1
20180309927 Tanner et al. Oct 2018 A1
20180322876 Bocklet et al. Nov 2018 A1
20180336464 Karras et al. Nov 2018 A1
20180353145 Simon et al. Dec 2018 A1
20190005943 Kalinli-Akbacak Jan 2019 A1
20190036684 Gueron et al. Jan 2019 A1
20190043488 Bocklet et al. Feb 2019 A1
20190065185 Kuo Feb 2019 A1
20190082990 Poltorak Mar 2019 A1
20190087359 Litichever et al. Mar 2019 A1
20190087929 Lassahn et al. Mar 2019 A1
20190102187 Abraham et al. Apr 2019 A1
20190108030 Corbal San Adrian et al. Apr 2019 A1
20190109703 Gueron et al. Apr 2019 A1
20190109704 Gueron et al. Apr 2019 A1
20190109705 Gueron et al. Apr 2019 A1
20190114176 Shifer et al. Apr 2019 A1
20190116025 Wolrich et al. Apr 2019 A1
20190121643 Hughes et al. Apr 2019 A1
20190130278 Karras et al. May 2019 A1
20190140978 Babcock et al. May 2019 A1
20190141184 Douglas May 2019 A1
20190147856 Price et al. May 2019 A1
20190147884 Hirani et al. May 2019 A1
20190199590 Bursell Jun 2019 A1
20190201691 Poltorak Jul 2019 A1
20190222619 Shribman et al. Jul 2019 A1
20190224441 Poltorak Jul 2019 A1
20190227765 Soifer et al. Jul 2019 A1
20190237108 Davis et al. Aug 2019 A1
20190238954 Dawson Aug 2019 A1
20190244225 Ravichandran Aug 2019 A1
20190244611 Godambe et al. Aug 2019 A1
20190244613 Jonas et al. Aug 2019 A1
20190247662 Poltorak Aug 2019 A1
20190286441 Abraham et al. Sep 2019 A1
20190286444 Kapoor et al. Sep 2019 A1
20190294972 Keller et al. Sep 2019 A1
20190324752 Julier et al. Oct 2019 A1
20190327449 Fu et al. Oct 2019 A1
20190332694 Tcherechansky et al. Oct 2019 A1
20190332869 Varerkar et al. Oct 2019 A1
20190342452 Strong et al. Nov 2019 A1
20190349472 Douglas Nov 2019 A1
20190362461 George et al. Nov 2019 A1
20190370644 Kenney et al. Dec 2019 A1
20190378383 Buttner et al. Dec 2019 A1
20190379342 Weber et al. Dec 2019 A1
20190379964 Pergament et al. Dec 2019 A1
20190379976 ap Dafydd et al. Dec 2019 A1
20190379977 Buttner et al. Dec 2019 A1
20200120307 Tran Apr 2020 A1
20200133625 Sharma et al. Apr 2020 A1
20200137635 Feldman et al. Apr 2020 A1
20200151559 Karras et al. May 2020 A1
20200175961 Thomson et al. Jun 2020 A1
20200175962 Thomson et al. Jun 2020 A1
20200175987 Thomson et al. Jun 2020 A1
20200210473 Tcherechansky et al. Jul 2020 A1
20200215433 Ahmed et al. Jul 2020 A1
20200220916 Ahmed et al. Jul 2020 A1
20200222010 Howard Jul 2020 A1
20200226451 Liu et al. Jul 2020 A1
20200243094 Thomson et al. Jul 2020 A1
20200258516 Khaleghi Aug 2020 A1
20200265859 LaBosco et al. Aug 2020 A1
20200272976 Murison et al. Aug 2020 A1
20200275201 LaBosco Aug 2020 A1
20200275202 LaBosco Aug 2020 A1
20200275203 LaBosco Aug 2020 A1
20200275204 LaBosco Aug 2020 A1
20200302612 Marrero et al. Sep 2020 A1
20200314569 Morgan et al. Oct 2020 A1
20200320023 Litichever et al. Oct 2020 A1
20200320177 Ray et al. Oct 2020 A1
Provisional Applications (1)
Number Date Country
61014106 Dec 2007 US
Divisions (3)
Number Date Country
Parent 16186252 Nov 2018 US
Child 17068219 US
Parent 13968522 Aug 2013 US
Child 14744377 US
Parent 12337236 Dec 2008 US
Child 13968522 US
Continuations (2)
Number Date Country
Parent 15633211 Jun 2017 US
Child 16186252 US
Parent 14744377 Jun 2015 US
Child 15633211 US