The present disclosure relates generally to microphones, and more particularly to a horn microphone utilizing beamforming signal processing.
A microphone converts air pressure variations of a sound wave into an electrical signal. A variety of methods may be used to convert a sound wave into an electrical signal, such as use of a coil of wire with a diaphragm suspended in a magnetic field, use of a vibrating diaphragm as a capacitor plate, use of a crystal of piezoelectric material, or use of a permanently charged material. Conventional microphones may sense sound waves from all directions (e.g., an omnidirectional microphone), in a 3D axisymmetric figure-of-eight pattern (e.g., a dipole microphone), or primarily in one direction with a fairly large pickup pattern (e.g., cardioid, supercardioid, and hypercardioid microphones).
In audio and video conferencing applications involving multiple participants at a given location, uni-directional microphones are undesirable. In addition, participants desire speech intelligibility and sound quality without requiring a multitude of microphones placed throughout a conference room. Placing a plurality of microphones in varying locations within a room requires, among other things, lengthy cables, cable management, and additional hardware.
Further, conventional microphone arrays require sophisticated and costly hardware, significant computing performance, and complex processing, and may nonetheless lack adequate sound quality when compared to use of multiple microphones placed throughout a room. Moreover, conventional microphone arrays may experience processing artifacts caused by high-frequency spatial aliasing.
The embodiments herein may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The detailed description set forth below is intended as a description of various configurations of embodiments and is not intended to represent the only configurations in which the subject matter of this disclosure can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject matter of this disclosure. However, it will be clear and apparent that the subject matter of this disclosure is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject matter of this disclosure.
Conventional microphones may sense sound waves from all directions (e.g., an omnidirectional microphone), in a 3D axisymmetric figure-of-eight pattern (e.g., a dipole microphone), or primarily in one direction with a fairly large pickup pattern (e.g., cardioid, supercardioid, and hypercardioid microphones). In applications where sensing of sound from various locations may be required, an array of microphones may be positioned in a central location, such as on the middle of a table in a room. Conventional microphone arrays require sophisticated and costly hardware, significant computing performance, and complex processing, and may lack adequate sound quality when compared to use of multiple microphones placed throughout a room or assigned to individual participants or users. In addition, conventional microphone arrays may have a shorter critical distance than the hybrid horn microphone of the subject technology. The critical distance is the distance from a directional source at which the sound pressure level of the direct sound equals that of the reverberant sound, and it limits the range over which a microphone array can adequately sense sound. Moreover, a conventional microphone array may experience processing artifacts caused by high-frequency spatial aliasing.
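The critical distance discussed above has a standard room-acoustics approximation, D_c ≈ 0.057·√(Q·V/RT60), with directivity factor Q, room volume V in cubic meters, and reverberation time RT60 in seconds. The sketch below is illustrative only; the function name and example values are assumptions, not details taken from this disclosure.

```python
import math

def critical_distance(directivity_q: float, volume_m3: float, rt60_s: float) -> float:
    """Approximate critical distance (meters): the range at which the direct and
    reverberant sound pressure levels are equal. Dc ~= 0.057 * sqrt(Q * V / RT60)."""
    return 0.057 * math.sqrt(directivity_q * volume_m3 / rt60_s)

# Example: a directional source (Q = 2) in a 100 m^3 room with RT60 = 0.5 s
print(round(critical_distance(2.0, 100.0, 0.5), 2))  # prints 1.14
```

A more directional pickup (larger Q) raises the critical distance, which is the basis for the longer critical distance claimed for the horn design relative to a conventional array.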
The disclosed technology addresses the need in the art for a highly sensitive, anti-aliasing microphone that combines horn technology with beamforming signal processing. In an array configuration, the hybrid horn microphone of the subject technology requires less processing power than conventional microphone arrays. In addition, the hybrid microphone of the subject technology has a higher signal-to-noise ratio and fewer high-frequency spatial aliasing issues than other implementations. The hybrid horn microphone array of the subject technology also has a longer critical distance and increased sound quality compared to conventional microphone arrays.
In addition, the hybrid horn microphone array of the subject technology does not require multiple arrays, may utilize a single output cable, and may be installed in a single location in a room, such as on or near the ceiling. There is no need for multiple microphones to be located, installed and wired throughout a room. Further, users do not need to reposition table microphones to improve sound quality as the subject technology is capable of processing audio signals to create high quality sound.
Various aspects of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
The plurality of planar surfaces 110 may be substantially planar and devoid of curvature such that a cross-sectional area of the horn portion from the proximal end to the distal end decreases at a constant rate. In some aspects, the planar surfaces may include curvature such that the cross-sectional area of the horn portion from the proximal end to the distal end decreases with varying rates.
The plurality of planar surfaces 110 may be made of polymer, composite, metal, alloys, or a combination thereof. It is understood that other materials may be used to form the horn portion without deviating from the scope of the subject technology.
Each planar surface 110 of the plurality of planar surfaces 110A-E may have substantially the same thickness. The thickness of each planar surface 110 may be 0.13″, 0.25″, 0.38″, or 0.5″. It is understood that the planar surfaces 110 may have other values for thickness without departing from the scope of the subject technology.
In some aspects, the length of the planar surface 110 may range from 4-6 inches, 6-8 inches, 8-10 inches, 10-12 inches or 12-14 inches. It is understood that the planar surface 110 may have a longer length without departing from the scope of the subject technology. In one aspect, a width of the planar surface is similar to the length of the planar surface.
In one aspect, the horn portion may be formed by a single component, folded, cast, or molded into the desired shape. For example, the horn portion may comprise sheet metal folded into a pentagonal pyramid having five planar surfaces 110A-E. In another aspect, the horn portion may be assembled from multiple components with each component comprising the planar surface 110.
Sound waves emitted by a source, such as a user speaking at a telephonic or video conference, are directed or reflected towards the horn portion 105 and are directed to the instrument 120 by the shape of the planar surfaces 110A-E. In one aspect, the size and shape of the horn portion 105 correlate to a frequency range or bandwidth of the sound waves desired for detection.
In another aspect, by utilizing the horn portion 105, the microphone 100 detects and senses sound waves directionally. That is, the microphone 100 is capable of detecting sound waves from a source located within a detection range 115, while minimizing detection of sound waves from other sources that may be located outside of the detection range 115. By utilizing the horn portion 105, the microphone 100 is also able to reduce detection of ambient noise coming from sources located outside of the detection range, typically by more than 10 dB. In one aspect, the horn portion 105 of the microphone 100 significantly reduces detection of sound waves coming from angles outside of the direction of the microphone 100, because those sound waves are reflected away from the instrument 120 by the horn portion 105. In another aspect, for sound waves coming from a source located within the detection range 115 of the microphone 100, a signal-to-noise ratio (SNR) of the sound wave is significantly higher (generally by 9 dB or more) than with conventional microphones, resulting in increased sound quality. In one aspect, for sound waves coming from a source within the detection range 115, the microphone 100 has very high directivity at frequencies above 2 kHz.
In some aspects, the horn portion 105 may have various shapes formed by the planar surfaces 110. For example, the shape of the horn portion 105 formed by the plurality of planar surfaces 110 may comprise a triangular pyramid having three interior faces, a square pyramid having four interior faces, a pentagonal pyramid having five interior faces, a hexagonal pyramid having six interior faces, a heptagonal pyramid having seven interior faces, or an octagonal pyramid having eight interior faces. It is further understood that other shapes may be formed by the plurality of planar surfaces 110 as desired by a person of ordinary skill in the art.
Each microphone 100 of the array 300 is pointed in a different direction, as shown in
The hybrid horn microphone array processing block diagram 400 comprises a beamforming signal processing circuit 405 for creating a high-sensitivity and anti-aliasing microphone array 300. The beamforming signal processing circuit 405 is electrically coupled to each microphone 100 and is configured to receive the electrical signals from each instrument 120. The beamforming signal processing circuit 405 is further configured to create beam signals corresponding to each microphone 100 based on the respective electrical signals. In some aspects, the beam signals are indicative of a location of a source of the sound waves detected by each microphone 100.
The beamforming signal processing circuit 405 comprises a crossover filter 410, a delaying circuit 420, a processor 430, and a mixer 440. Each electrical signal from the microphones 100A-N passes through a respective crossover filter 410A-N. Each crossover filter 410A-N is configured to convert the respective electrical signal from the microphone 100A-N into a first signal 412 and a second signal 414, with the first and second signals, 412 and 414 respectively, occupying different frequency ranges or sub-bands. For example, the frequency content of each respective first signal 412 may be below 2 kHz and the frequency content of each respective second signal 414 may be above 2 kHz. In one aspect, the crossover frequency can be adapted to the size of the horn portion 105 (as shown in
For example, with reference to a first microphone 100A, the electrical signal from the microphone 100A is received by the crossover filter 410A. The crossover filter 410A converts the electrical signal from the microphone 100A into a first signal 412A (Low Frequency or LF) and a second signal 414A (High Frequency or HF). With reference to a second microphone 100B, the electrical signal from the microphone 100B is received by the crossover filter 410B. The crossover filter 410B converts the electrical signal from the microphone 100B into a first signal 412B (Low Frequency or LF) and a second signal 414B (High Frequency or HF). With reference to a third microphone 100C, the electrical signal from the microphone 100C is received by the crossover filter 410C. The crossover filter 410C converts the electrical signal from the microphone 100C into a first signal 412C (Low Frequency or LF) and a second signal 414C (High Frequency or HF). With reference to a fourth microphone 100D, the electrical signal from the microphone 100D is received by the crossover filter 410D. The crossover filter 410D converts the electrical signal from the microphone 100D into a first signal 412D (Low Frequency or LF) and a second signal 414D (High Frequency or HF). With reference to a fifth microphone 100E, the electrical signal from the microphone 100E is received by the crossover filter 410E. The crossover filter 410E converts the electrical signal from the microphone 100E into a first signal 412E (Low Frequency or LF) and a second signal 414E (High Frequency or HF). In some aspects, any number of microphones 100N may be connected to the beamforming signal processing circuit 405, including the crossover filter 410N to convert the electrical signal from the microphone 100N into a first signal 412N and a second signal 414N, without departing from the scope of the subject technology.
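As a rough sketch of the crossover stage, the 2 kHz split can be modeled with a complementary low-pass/high-pass filter pair. The 4th-order Butterworth design, the 48 kHz sample rate, and the test tone below are illustrative assumptions, not details taken from this disclosure (production crossovers often use Linkwitz-Riley alignments for flatter summation).

```python
import numpy as np
from scipy.signal import butter, sosfilt

def crossover(signal: np.ndarray, fs: float = 48_000.0,
              fc: float = 2_000.0, order: int = 4):
    """Split one microphone signal into a low sub-band (first signal) and a
    high sub-band (second signal) at the crossover frequency fc."""
    sos_lo = butter(order, fc, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(order, fc, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos_lo, signal), sosfilt(sos_hi, signal)

# A 500 Hz + 6 kHz test tone: the low band keeps the 500 Hz component,
# the high band keeps the 6 kHz component.
t = np.arange(4800) / 48_000.0
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 6000 * t)
lf, hf = crossover(x)
```

The low band feeds the beamforming path (first signal 412) and the high band feeds the delay path (second signal 414).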
The delaying circuit 420 is configured to delay the second signal 414 from the crossover filter 410 to create a delayed second signal 422. In some aspects, the delaying circuit 420 is configured to delay the second signal 414 so that, upon mixing by the mixer 440 as discussed further below, the mixed signal is time-aligned. Each second signal 414A-N from the respective crossover filters 410A-N is received by a corresponding delaying circuit 420A-N to create a respective delayed second signal 422A-N.
For example, with reference to the first microphone 100A, the second signal 414A from the crossover filter 410A is received by the delaying circuit 420A. The delaying circuit 420A delays the second signal 414A to create a delayed second signal 422A. With reference to the second microphone 100B, the second signal 414B from the crossover filter 410B is received by the delaying circuit 420B. The delaying circuit 420B delays the second signal 414B to create a delayed second signal 422B. With reference to the third microphone 100C, the second signal 414C from the crossover filter 410C is received by the delaying circuit 420C. The delaying circuit 420C delays the second signal 414C to create a delayed second signal 422C. With reference to the fourth microphone 100D, the second signal 414D from the crossover filter 410D is received by the delaying circuit 420D. The delaying circuit 420D delays the second signal 414D to create a delayed second signal 422D. With reference to the fifth microphone 100E, the second signal 414E from the crossover filter 410E is received by the delaying circuit 420E. The delaying circuit 420E delays the second signal 414E to create a delayed second signal 422E. In some aspects, any number of microphones 100N may be connected to the beamforming signal processing circuit 405, including the delaying circuit 420N to delay the second signal 414N and create a delayed second signal 422N, without departing from the scope of the subject technology.
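Assuming the low-band beamforming path introduces a fixed latency, the delaying circuit can be approximated by a simple integer-sample delay matched to that latency. This is an illustrative sketch, not the disclosed implementation; real designs may need fractional-sample delays.

```python
import numpy as np

def delay_samples(x: np.ndarray, n: int) -> np.ndarray:
    """Delay a signal by n samples (zero-padded at the front, truncated at the
    end) so the high sub-band stays time-aligned with the slower low-band path."""
    return np.concatenate([np.zeros(n), x])[: len(x)]
```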
The processor 430 may be configured to downsample the first signal 412 from the crossover filter 410 to create a downsampled first signal, process the downsampled first signal to create a processed first signal that is indicative of the location of the source of the sound waves detected by the microphone 100, and upsample the processed first signal to create an upsampled first signal 432. Each first signal 412A-N from the respective crossover filters 410A-N is received by the processor 430 to create the respective upsampled first signal 432A-N.
In some aspects, the processor 430 utilizes beamforming signal processing techniques to process the first signals 412A-N. Beamforming signal processing may be used to extract sound sources in an area or room. This may be achieved by combining elements in a phased array in such a way that signals at particular angles experience constructive interference while others experience destructive interference.
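The constructive-interference idea can be shown with two channels receiving the same tone with a relative arrival delay: compensating that delay before summing doubles the amplitude, while a naive sum does not. The tone frequency, sample rate, and 12-sample delay below are arbitrary illustrative values.

```python
import numpy as np

fs, f, delay = 48_000, 1_000, 12   # sample rate, tone frequency, arrival delay
t = np.arange(480) / fs
x0 = np.sin(2 * np.pi * f * t)                  # element 0
x1 = np.sin(2 * np.pi * f * (t - delay / fs))   # element 1, delayed arrival
naive = x0 + x1                                 # no steering: partial cancellation
aligned = x0[:-delay] + x1[delay:]              # delay-and-sum: in phase
```

Here `aligned` peaks near 2.0 (fully constructive) while `naive` peaks near √2, since the 12-sample delay is a quarter period at 1 kHz.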
In one aspect, because the horn portion 105 (as shown in
The processor 430 may downsample each of the first signals 412A-N to a lower sampling rate, such as from 48 kHz to 4 kHz, which may reduce computational complexity by roughly 90%. The processor 430 may then filter and sum (or weight and sum in the frequency domain) each of the first signals 412A-N to create respective processed first signals representing acoustic beams pointing in the direction of each respective microphone. In another example, the processor 430 may use spherical harmonics theory or sound field models to create respective processed first signals representing acoustic beams pointing in the direction of each respective microphone. In one aspect, the processor 430 may measure the array response vectors for various sound arrival angles in an anechoic chamber. In another aspect, the processor 430 may implement various types of beam pattern synthesis/optimization or machine learning. The processor 430 may then upsample the processed first signals to obtain respective upsampled first signals 432 with a desired sampling rate.
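The downsample / weight-and-sum / upsample path described above can be sketched as follows. The decimation factor and the idea that one weight vector steers one beam are assumptions for illustration; the actual weights would come from the array geometry, measured response vectors, or the optimization methods mentioned above.

```python
import numpy as np
from scipy.signal import resample_poly

def lowband_beamform(low_signals: np.ndarray, weights: np.ndarray,
                     fs_in: int = 48_000, fs_lo: int = 4_000) -> np.ndarray:
    """Low-band path: decimate each mic's low sub-band from fs_in to fs_lo,
    apply a weight-and-sum beamformer across microphones, then interpolate
    the beam back up to fs_in."""
    q = fs_in // fs_lo                                   # 12x decimation
    down = np.stack([resample_poly(s, 1, q) for s in low_signals])
    beam = weights @ down                                # weight-and-sum
    return resample_poly(beam, q, 1)
```

Running the beamformer at 4 kHz instead of 48 kHz is where the large reduction in computational complexity comes from.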
For example, with reference to the first microphone 100A, the first signal 412A from the crossover filter 410A is received by the processor 430. The processor 430 may downsample the first signal 412A to create a first downsampled first signal. The processor 430 may then filter and sum (or weight and sum in the frequency domain) the first downsampled first signal to create a first processed first signal representing an acoustic beam pointing in the direction of microphone 100A. The first processed first signal is indicative of the location of the source of the sound waves detected by the microphone 100A. The processor 430 may then upsample the first processed first signal to obtain an upsampled first signal 432A. With respect to the second microphone 100B, the first signal 412B from the crossover filter 410B is received by the processor 430. The processor 430 may downsample the first signal 412B to create a second downsampled first signal. The processor 430 may then filter and sum (or weight and sum in the frequency domain) the second downsampled first signal to create a second processed first signal representing an acoustic beam pointing in the direction of microphone 100B. The second processed first signal is indicative of the location of the source of the sound waves detected by the microphone 100B. The processor 430 may then upsample the second processed first signal to obtain an upsampled first signal 432B. With respect to the third microphone 100C, the first signal 412C from the crossover filter 410C is received by the processor 430. The processor 430 may downsample the first signal 412C to create a third downsampled first signal. The processor 430 may then filter and sum (or weight and sum in the frequency domain) the third downsampled first signal to create a third processed first signal representing an acoustic beam pointing in the direction of microphone 100C.
The third processed first signal is indicative of the location of the source of the sound waves detected by the microphone 100C. The processor 430 may then upsample the third processed first signal to obtain an upsampled first signal 432C. With respect to the fourth microphone 100D, the first signal 412D from the crossover filter 410D is received by the processor 430. The processor 430 may downsample the first signal 412D to create a fourth downsampled first signal. The processor 430 may then filter and sum (or weight and sum in the frequency domain) the fourth downsampled first signal to create a fourth processed first signal representing an acoustic beam pointing in the direction of microphone 100D. The fourth processed first signal is indicative of the location of the source of the sound waves detected by the microphone 100D. The processor 430 may then upsample the fourth processed first signal to obtain an upsampled first signal 432D. With respect to the fifth microphone 100E, the first signal 412E from the crossover filter 410E is received by the processor 430. The processor 430 may downsample the first signal 412E to create a fifth downsampled first signal. The processor 430 may then filter and sum (or weight and sum in the frequency domain) the fifth downsampled first signal to create a fifth processed first signal representing an acoustic beam pointing in the direction of microphone 100E. The fifth processed first signal is indicative of the location of the source of the sound waves detected by the microphone 100E. The processor 430 may then upsample the fifth processed first signal to obtain an upsampled first signal 432E. In some aspects, any number of microphones 100N may be connected to the beamforming signal processing circuit 405, including the processor 430 to downsample, process, and upsample the first signal 412N and create an upsampled first signal 432N, without departing from the scope of the subject technology.
The mixer 440 is configured to combine the upsampled first signal 432 from the processor 430 and the delayed second signal 422 from the delaying circuit 420 to create a full-band beam signal 442. Each upsampled first signal 432A-N from the processor 430 and each delayed second signal 422A-N from the respective delaying circuits 420A-N are received by corresponding mixers 440A-N to create respective full-band beam signals 442A-N.
For example, with reference to the first microphone 100A, the upsampled first signal 432A from the processor 430 and the delayed second signal 422A from the delaying circuit 420A are received by the mixer 440A. The mixer 440A combines the upsampled first signal 432A and the delayed second signal 422A to create a beam signal 442A. With reference to the second microphone 100B, the upsampled first signal 432B from the processor 430 and the delayed second signal 422B from the delaying circuit 420B are received by the mixer 440B. The mixer 440B combines the upsampled first signal 432B and the delayed second signal 422B to create a beam signal 442B. With reference to the third microphone 100C, the upsampled first signal 432C from the processor 430 and the delayed second signal 422C from the delaying circuit 420C are received by the mixer 440C. The mixer 440C combines the upsampled first signal 432C and the delayed second signal 422C to create a beam signal 442C. With reference to the fourth microphone 100D, the upsampled first signal 432D from the processor 430 and the delayed second signal 422D from the delaying circuit 420D are received by the mixer 440D. The mixer 440D combines the upsampled first signal 432D and the delayed second signal 422D to create a beam signal 442D. With reference to the fifth microphone 100E, the upsampled first signal 432E from the processor 430 and the delayed second signal 422E from the delaying circuit 420E are received by the mixer 440E. The mixer 440E combines the upsampled first signal 432E and the delayed second signal 422E to create a beam signal 442E. In some aspects, any number of microphones 100N may be connected to the beamforming signal processing circuit 405, including the mixer 440N to combine the upsampled first signal 432N and delayed second signal 422N to create the beam signal 442N, without departing from the scope of the subject technology.
The hybrid horn microphone array processing block diagram 400 may further comprise an audio processing circuit 450. The audio processing circuit 450 may be configured to receive each of the beam signals 442A-N and apply at least one of an echo control filter, a reverberation filter, or a noise reduction filter to improve the quality of the beam signals 442A-N and create pre-mixed beam signals 452A-N.
For example, with reference to the first microphone 100A, the beam signal 442A from the mixer 440A is received by the audio processing circuit 450. The audio processing circuit 450 performs operations such as echo modification, reverberation adjustment, or noise reduction, to improve the quality of the beam signal 442A, and thereby create a pre-mixed beam signal 452A. With reference to the second microphone 100B, the beam signal 442B from the mixer 440B is received by the audio processing circuit 450. The audio processing circuit 450 performs operations such as echo modification, reverberation adjustment, or noise reduction, to improve the quality of the beam signal 442B, and thereby create a pre-mixed beam signal 452B. With reference to the third microphone 100C, the beam signal 442C from the mixer 440C is received by the audio processing circuit 450. The audio processing circuit 450 performs operations such as echo modification, reverberation adjustment, or noise reduction, to improve the quality of the beam signal 442C, and thereby create a pre-mixed beam signal 452C. With reference to the fourth microphone 100D, the beam signal 442D from the mixer 440D is received by the audio processing circuit 450. The audio processing circuit 450 performs operations such as echo modification, reverberation adjustment, or noise reduction, to improve the quality of the beam signal 442D, and thereby create a pre-mixed beam signal 452D. With reference to the fifth microphone 100E, the beam signal 442E from the mixer 440E is received by the audio processing circuit 450. The audio processing circuit 450 performs operations such as echo modification, reverberation adjustment, or noise reduction, to improve the quality of the beam signal 442E, and thereby create a pre-mixed beam signal 452E. 
In some aspects, any number of microphones 100N may be connected to the audio processing circuit 450 to improve the quality of the beam signal 442N and create pre-mixed beam signal 452N, without departing from the scope of the subject technology.
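As one toy stand-in for the noise reduction stage described above (real systems would use echo cancellation, dereverberation, or spectral subtraction, none of which is specified here), a frame-based noise gate zeroes frames whose level falls below a threshold. The frame size and threshold are illustrative assumptions.

```python
import numpy as np

def noise_gate(x: np.ndarray, threshold: float, frame: int = 256) -> np.ndarray:
    """Zero out frames whose RMS falls below threshold, passing speech-level
    frames through unchanged (a crude noise reduction filter)."""
    y = x.copy()
    for i in range(0, len(x) - frame + 1, frame):
        if np.sqrt(np.mean(x[i:i + frame] ** 2)) < threshold:
            y[i:i + frame] = 0.0
    return y
```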
The hybrid horn microphone array processing block diagram 400 may further comprise an automatic mixer 460. The automatic mixer 460 may be configured to receive the plurality of pre-mixed beam signals 452A-N and identify one or more beam signals from the plurality of beam signals 452A-N to output to an output device 470 based on a characteristic of the beam signals 452A-N. The characteristic of a beam signal 452A-N may include, for example, quality, level, clarity, strength, SNR, signal-to-reverberation ratio, amplitude, wavelength, frequency, or phase. In some aspects, the mixer 460 may be configured to review each incoming pre-mixed beam signal 452A-N, identify one or more beam signals 452A-N based on one or more characteristics of the beam signals 452A-N, select the one or more beam signals 452A-N, isolate signals representing speech, filter out low-level signals that may not represent speech, and transmit an output signal 462 to the output device 470. In one aspect, the mixer 460 may utilize audio selection techniques to generate the desired audio output signal 462 (e.g., mono, stereo, surround).
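A minimal sketch of the automatic mixer's selection step, using RMS level as the single characteristic (the disclosure lists several others, such as SNR and clarity, which would require more elaborate estimators):

```python
import numpy as np

def select_beams(beams: list, k: int = 1) -> list:
    """Return the indices of the k beam signals with the highest RMS level,
    ordered loudest first."""
    rms = [float(np.sqrt(np.mean(b ** 2))) for b in beams]
    return sorted(range(len(beams)), key=lambda i: rms[i], reverse=True)[:k]
```

The selected beams would then be mixed into the output signal 462 for the output device 470.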
The output device 470 is configured to receive the output signal 462 from the mixer and may comprise a set top box, console, visual output device (e.g., monitor, television, display), or audio output device (e.g., speaker).
At operation 510, a sound wave is received at an array of microphones. The array of microphones comprises a plurality of microphones arranged in a polyhedron shape, as shown, for example, in
At operation 520, a plurality of electrical signals are generated based on the received sound wave. The plurality of electrical signals comprises the electrical signal generated by each instrument of the plurality of microphones.
At operation 530, each electrical signal of the plurality of electrical signals is converted into a high sub-band signal and a low sub-band signal. The electrical signal generated by each instrument and microphone is thus converted into two signals: the high sub-band signal and the low sub-band signal. The low sub-band signals together comprise a plurality of low sub-band signals. Similarly, the high sub-band signals together comprise a plurality of high sub-band signals.
At operation 540, beamforming signal processing is performed on the plurality of low sub-band signals to create a plurality of low sub-band beam signals. Stated differently, each of the low-band signals undergoes beamforming signal processing to thereby create a low sub-band beam signal. As described above, beamforming signal processing may comprise use of spherical harmonics theory or sound field models, use of array response vectors for various sound arrival angles in an anechoic chamber, and/or use of various types of beam pattern synthesis/optimization or machine learning.
At operation 550, each low sub-band beam signal of the plurality of low sub-band beam signals is combined with the respective high sub-band signal of the plurality of high sub-band signals to create a plurality of beam signals. Each beam signal of the plurality of beam signals corresponds to a microphone of the plurality of microphones of the array.
At operation 560, one or more beam signals of the plurality of beam signals are selected for output to an output device.
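Operations 530 through 550 can be tied together in one sketch combining the crossover, low-band beamforming, delay, and mixing steps. All parameters below (filter order, 2 kHz crossover, 12x decimation, the 64-sample high-band delay, and per-beam weight vectors) are illustrative assumptions, not values specified by this disclosure.

```python
import numpy as np
from scipy.signal import butter, sosfilt, resample_poly

def array_beams(mics: np.ndarray, weights: np.ndarray, fs: int = 48_000,
                fc: float = 2_000.0, fs_lo: int = 4_000,
                hf_delay: int = 64) -> np.ndarray:
    """Operations 530-550: split each microphone signal into sub-bands,
    beamform the low bands at a reduced rate, and mix each low-band beam
    with that microphone's delayed high band. weights[m] is the low-band
    weight vector for the beam pointing in the direction of microphone m."""
    n, length = mics.shape
    q = fs // fs_lo
    sos_lo = butter(4, fc, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(4, fc, btype="highpass", fs=fs, output="sos")
    # Operation 530: crossover, then decimate the low bands for cheap processing.
    lo = np.stack([resample_poly(sosfilt(sos_lo, x), 1, q) for x in mics])
    beams = []
    for m in range(n):
        lo_beam = resample_poly(weights[m] @ lo, q, 1)[:length]   # operation 540
        hi = sosfilt(sos_hi, mics[m])
        hi_delayed = np.concatenate([np.zeros(hf_delay), hi])[:length]
        beams.append(lo_beam + hi_delayed)                        # operation 550
    return np.stack(beams)
```

Operation 560 would then pick among the returned beams, for instance by RMS level or an SNR estimate.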
The functions described above can be implemented using computer-executable instructions that are stored on or otherwise available from computer-readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing the functions and operations according to these disclosures may comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
4460807 | Kerr et al. | Jul 1984 | A |
4890257 | Anthias et al. | Dec 1989 | A |
4977605 | Fardeau et al. | Dec 1990 | A |
5293430 | Shiau et al. | Mar 1994 | A |
5694563 | Belfiore et al. | Dec 1997 | A |
5699082 | Marks et al. | Dec 1997 | A |
5745711 | Kitahara et al. | Apr 1998 | A |
5767897 | Howell | Jun 1998 | A |
5825858 | Shaffer et al. | Oct 1998 | A |
5874962 | de Judicibus et al. | Feb 1999 | A |
5889671 | Autermann et al. | Mar 1999 | A |
5917537 | Lightfoot et al. | Jun 1999 | A |
5995096 | Kitahara et al. | Nov 1999 | A |
6023606 | Monte et al. | Feb 2000 | A |
6040817 | Sumikawa | Mar 2000 | A |
6075531 | DeStefano | Jun 2000 | A |
6085166 | Beckhardt et al. | Jul 2000 | A |
6191807 | Hamada et al. | Feb 2001 | B1 |
6300951 | Filetto et al. | Oct 2001 | B1 |
6392674 | Hiraki et al. | May 2002 | B1 |
6424370 | Courtney | Jul 2002 | B1 |
6463473 | Gubbi | Oct 2002 | B1 |
6553363 | Hoffman | Apr 2003 | B1 |
6554433 | Holler | Apr 2003 | B1 |
6573913 | Butler et al. | Jun 2003 | B1 |
6646997 | Baxley et al. | Nov 2003 | B1 |
6665396 | Khouri et al. | Dec 2003 | B1 |
6700979 | Washiya | Mar 2004 | B1 |
6711419 | Mori | Mar 2004 | B1 |
6754321 | Innes et al. | Jun 2004 | B1 |
6754335 | Shaffer et al. | Jun 2004 | B1 |
RE38609 | Chen et al. | Oct 2004 | E |
6816464 | Scott et al. | Nov 2004 | B1 |
6865264 | Berstis | Mar 2005 | B2 |
6938208 | Reichardt | Aug 2005 | B2 |
6978499 | Gallant et al. | Dec 2005 | B2 |
7046134 | Hansen | May 2006 | B2 |
7046794 | Piket et al. | May 2006 | B2 |
7058164 | Chan et al. | Jun 2006 | B1 |
7058710 | McCall et al. | Jun 2006 | B2 |
7062532 | Sweat et al. | Jun 2006 | B1 |
7085367 | Lang | Aug 2006 | B1 |
7124164 | Chemtob | Oct 2006 | B1 |
7149499 | Oran et al. | Dec 2006 | B1 |
7180993 | Hamilton | Feb 2007 | B2 |
7209475 | Shaffer et al. | Apr 2007 | B1 |
7340151 | Taylor et al. | Mar 2008 | B2 |
7366310 | Stinson et al. | Apr 2008 | B2 |
7418664 | Ben-Shachar et al. | Aug 2008 | B2 |
7441198 | Dempski et al. | Oct 2008 | B2 |
7478339 | Pettiross et al. | Jan 2009 | B2 |
7500200 | Kelso et al. | Mar 2009 | B2 |
7530022 | Ben-Shachar et al. | May 2009 | B2 |
7552177 | Kessen et al. | Jun 2009 | B2 |
7577711 | McArdle | Aug 2009 | B2 |
7584258 | Maresh | Sep 2009 | B2 |
7587028 | Broerman et al. | Sep 2009 | B1 |
7606714 | Williams et al. | Oct 2009 | B2 |
7606862 | Swearingen et al. | Oct 2009 | B2 |
7620902 | Manion et al. | Nov 2009 | B2 |
7634533 | Rudolph et al. | Dec 2009 | B2 |
7774407 | Daly et al. | Aug 2010 | B2 |
7792277 | Shaffer et al. | Sep 2010 | B2 |
7830814 | Allen et al. | Nov 2010 | B1 |
7840013 | Dedieu et al. | Nov 2010 | B2 |
7840980 | Gutta | Nov 2010 | B2 |
7881450 | Gentle et al. | Feb 2011 | B1 |
7920160 | Tamaru et al. | Apr 2011 | B2 |
7956869 | Gilra | Jun 2011 | B1 |
7986372 | Ma et al. | Jul 2011 | B2 |
7995464 | Croak et al. | Aug 2011 | B1 |
8059557 | Sigg et al. | Nov 2011 | B1 |
8081205 | Baird et al. | Dec 2011 | B2 |
8140973 | Sandquist et al. | Mar 2012 | B2 |
8169463 | Enstad et al. | May 2012 | B2 |
8219624 | Haynes et al. | Jul 2012 | B2 |
8274893 | Bansal et al. | Sep 2012 | B2 |
8290998 | Stienhans et al. | Oct 2012 | B2 |
8301883 | Sundaram et al. | Oct 2012 | B2 |
8340268 | Knaz | Dec 2012 | B2 |
8358327 | Duddy | Jan 2013 | B2 |
8423615 | Hayes | Apr 2013 | B1 |
8428234 | Knaz | Apr 2013 | B2 |
8433061 | Cutler | Apr 2013 | B2 |
8434019 | Nelson | Apr 2013 | B2 |
8456507 | Mallappa et al. | Jun 2013 | B1 |
8462103 | Moscovitch et al. | Jun 2013 | B1 |
8478848 | Minert | Jul 2013 | B2 |
8520370 | Waitzman, III et al. | Aug 2013 | B2 |
8625749 | Jain et al. | Jan 2014 | B2 |
8630208 | Kjeldaas | Jan 2014 | B1 |
8638354 | Leow et al. | Jan 2014 | B2 |
8645464 | Zimmet et al. | Feb 2014 | B2 |
8675847 | Shaffer et al. | Mar 2014 | B2 |
8694587 | Chaturvedi et al. | Apr 2014 | B2 |
8694593 | Wren et al. | Apr 2014 | B1 |
8706539 | Mohler | Apr 2014 | B1 |
8732149 | Lida et al. | May 2014 | B2 |
8738080 | Nhiayi et al. | May 2014 | B2 |
8751572 | Behforooz et al. | Jun 2014 | B1 |
8831505 | Seshadri | Sep 2014 | B1 |
8850203 | Sundaram et al. | Sep 2014 | B2 |
8860774 | Sheeley et al. | Oct 2014 | B1 |
8874644 | Allen et al. | Oct 2014 | B2 |
8890924 | Wu | Nov 2014 | B2 |
8892646 | Chaturvedi et al. | Nov 2014 | B2 |
8914444 | Hladik, Jr. | Dec 2014 | B2 |
8914472 | Lee et al. | Dec 2014 | B1 |
8924862 | Luo | Dec 2014 | B1 |
8930840 | Riskó et al. | Jan 2015 | B1 |
8947493 | Lian et al. | Feb 2015 | B2 |
8972494 | Chen et al. | Mar 2015 | B2 |
9003445 | Rowe | Apr 2015 | B1 |
9031839 | Thorsen et al. | May 2015 | B2 |
9032028 | Davidson et al. | May 2015 | B2 |
9075572 | Ayoub et al. | Jul 2015 | B2 |
9118612 | Fish et al. | Aug 2015 | B2 |
9131017 | Kurupacheril et al. | Sep 2015 | B2 |
9137376 | Basart et al. | Sep 2015 | B1 |
9143729 | Anand et al. | Sep 2015 | B2 |
9165281 | Orsolini et al. | Oct 2015 | B2 |
9197701 | Petrov et al. | Nov 2015 | B1 |
9197848 | Felkai et al. | Nov 2015 | B2 |
9201527 | Kripalani et al. | Dec 2015 | B2 |
9203875 | Huang et al. | Dec 2015 | B2 |
9204099 | Brown | Dec 2015 | B2 |
9219735 | Hoard et al. | Dec 2015 | B2 |
9246855 | Maehiro | Jan 2016 | B2 |
9258033 | Showering | Feb 2016 | B2 |
9268398 | Tipirneni | Feb 2016 | B2 |
9298342 | Zhang et al. | Mar 2016 | B2 |
9323417 | Sun et al. | Apr 2016 | B2 |
9335892 | Ubillos | May 2016 | B2 |
9349119 | Desai et al. | May 2016 | B2 |
9367224 | Ananthakrishnan et al. | Jun 2016 | B2 |
9369673 | Ma et al. | Jun 2016 | B2 |
9407621 | Vakil et al. | Aug 2016 | B2 |
9432512 | You | Aug 2016 | B2 |
9449303 | Underhill et al. | Sep 2016 | B2 |
9495664 | Cole et al. | Nov 2016 | B2 |
9513861 | Lin et al. | Dec 2016 | B2 |
9516022 | Borzycki et al. | Dec 2016 | B2 |
9525711 | Ackerman et al. | Dec 2016 | B2 |
9553799 | Tarricone et al. | Jan 2017 | B2 |
9563480 | Messerli et al. | Feb 2017 | B2 |
9609030 | Sun et al. | Mar 2017 | B2 |
9609514 | Mistry et al. | Mar 2017 | B2 |
9614756 | Joshi | Apr 2017 | B2 |
9640194 | Nemala et al. | May 2017 | B1 |
9667799 | Olivier et al. | May 2017 | B2 |
9674625 | Armstrong-Mutner | Jun 2017 | B2 |
9762709 | Snyder et al. | Sep 2017 | B1 |
20010030661 | Reichardt | Oct 2001 | A1 |
20020018051 | Singh | Feb 2002 | A1 |
20020076003 | Zellner et al. | Jun 2002 | A1 |
20020078153 | Chung et al. | Jun 2002 | A1 |
20020140736 | Chen | Oct 2002 | A1 |
20020188522 | McCall et al. | Dec 2002 | A1 |
20030028647 | Grosu | Feb 2003 | A1 |
20030046421 | Horvitz et al. | Mar 2003 | A1 |
20030068087 | Wu et al. | Apr 2003 | A1 |
20030154250 | Miyashita | Aug 2003 | A1 |
20030174826 | Hesse | Sep 2003 | A1 |
20030187800 | Moore et al. | Oct 2003 | A1 |
20030197739 | Bauer | Oct 2003 | A1 |
20030227423 | Arai et al. | Dec 2003 | A1 |
20040039909 | Cheng | Feb 2004 | A1 |
20040054885 | Bartram et al. | Mar 2004 | A1 |
20040098456 | Krzyzanowski et al. | May 2004 | A1 |
20040210637 | Loveland | Oct 2004 | A1 |
20040253991 | Azuma | Dec 2004 | A1 |
20040267938 | Shoroff et al. | Dec 2004 | A1 |
20050014490 | Desai et al. | Jan 2005 | A1 |
20050031136 | Du | Feb 2005 | A1 |
20050048916 | Suh | Mar 2005 | A1 |
20050055405 | Kaminsky et al. | Mar 2005 | A1 |
20050055412 | Kaminsky et al. | Mar 2005 | A1 |
20050085243 | Boyer et al. | Apr 2005 | A1 |
20050099492 | Orr | May 2005 | A1 |
20050108328 | Berkeland et al. | May 2005 | A1 |
20050131774 | Huxter | Jun 2005 | A1 |
20050175208 | Shaw | Aug 2005 | A1 |
20050215229 | Cheng | Sep 2005 | A1 |
20050226511 | Short | Oct 2005 | A1 |
20050231588 | Yang et al. | Oct 2005 | A1 |
20050286711 | Lee et al. | Dec 2005 | A1 |
20060004911 | Becker et al. | Jan 2006 | A1 |
20060020697 | Kelso et al. | Jan 2006 | A1 |
20060026255 | Malamud et al. | Feb 2006 | A1 |
20060083305 | Dougherty et al. | Apr 2006 | A1 |
20060084471 | Walter | Apr 2006 | A1 |
20060164552 | Cutler | Jul 2006 | A1 |
20060224430 | Butt | Oct 2006 | A1 |
20060250987 | White et al. | Nov 2006 | A1 |
20060271624 | Lyle et al. | Nov 2006 | A1 |
20070005752 | Chawla et al. | Jan 2007 | A1 |
20070021973 | Stremler | Jan 2007 | A1 |
20070025576 | Wen | Feb 2007 | A1 |
20070041366 | Vugenfirer et al. | Feb 2007 | A1 |
20070047707 | Mayer et al. | Mar 2007 | A1 |
20070058842 | Vallone et al. | Mar 2007 | A1 |
20070067387 | Jain et al. | Mar 2007 | A1 |
20070091831 | Croy et al. | Apr 2007 | A1 |
20070100986 | Bagley et al. | May 2007 | A1 |
20070106747 | Singh et al. | May 2007 | A1 |
20070116225 | Zhao et al. | May 2007 | A1 |
20070139626 | Saleh et al. | Jun 2007 | A1 |
20070150453 | Morita | Jun 2007 | A1 |
20070168444 | Chen et al. | Jul 2007 | A1 |
20070198637 | Deboy et al. | Aug 2007 | A1 |
20070208590 | Dorricott et al. | Sep 2007 | A1 |
20070248244 | Sato et al. | Oct 2007 | A1 |
20070250567 | Graham et al. | Oct 2007 | A1 |
20080059986 | Kalinowski et al. | Mar 2008 | A1 |
20080068447 | Mattila et al. | Mar 2008 | A1 |
20080071868 | Arenburg et al. | Mar 2008 | A1 |
20080080532 | O'Sullivan et al. | Apr 2008 | A1 |
20080107255 | Geva et al. | May 2008 | A1 |
20080133663 | Lentz | Jun 2008 | A1 |
20080154863 | Goldstein | Jun 2008 | A1 |
20080209452 | Ebert et al. | Aug 2008 | A1 |
20080270211 | Vander Veen et al. | Oct 2008 | A1 |
20080278894 | Chen et al. | Nov 2008 | A1 |
20090012963 | Johnson et al. | Jan 2009 | A1 |
20090019374 | Logan et al. | Jan 2009 | A1 |
20090049151 | Pagan | Feb 2009 | A1 |
20090064245 | Facemire et al. | Mar 2009 | A1 |
20090075633 | Lee et al. | Mar 2009 | A1 |
20090089822 | Wada | Apr 2009 | A1 |
20090094088 | Chen et al. | Apr 2009 | A1 |
20090100142 | Stern et al. | Apr 2009 | A1 |
20090119373 | Denner et al. | May 2009 | A1 |
20090132949 | Bosarge | May 2009 | A1 |
20090193327 | Roychoudhuri et al. | Jul 2009 | A1 |
20090234667 | Thayne | Sep 2009 | A1 |
20090254619 | Kho et al. | Oct 2009 | A1 |
20090256901 | Mauchly et al. | Oct 2009 | A1 |
20090278851 | Ach et al. | Nov 2009 | A1 |
20090282104 | O'Sullivan et al. | Nov 2009 | A1 |
20090292999 | LaBine et al. | Nov 2009 | A1 |
20090296908 | Lee et al. | Dec 2009 | A1 |
20090306981 | Cromack et al. | Dec 2009 | A1 |
20090309846 | Trachtenberg et al. | Dec 2009 | A1 |
20090313334 | Seacat et al. | Dec 2009 | A1 |
20100005142 | Xiao et al. | Jan 2010 | A1 |
20100005402 | George et al. | Jan 2010 | A1 |
20100031192 | Kong | Feb 2010 | A1 |
20100061538 | Coleman et al. | Mar 2010 | A1 |
20100070640 | Allen, Jr. et al. | Mar 2010 | A1 |
20100073454 | Lovhaugen et al. | Mar 2010 | A1 |
20100077109 | Yan et al. | Mar 2010 | A1 |
20100094867 | Badros et al. | Apr 2010 | A1 |
20100095327 | Fujinaka et al. | Apr 2010 | A1 |
20100121959 | Lin et al. | May 2010 | A1 |
20100131856 | Kalbfleisch et al. | May 2010 | A1 |
20100157978 | Robbins et al. | Jun 2010 | A1 |
20100162170 | Johns et al. | Jun 2010 | A1 |
20100183179 | Griffin, Jr. et al. | Jul 2010 | A1 |
20100211872 | Rolston et al. | Aug 2010 | A1 |
20100215334 | Miyagi | Aug 2010 | A1 |
20100220615 | Enstrom et al. | Sep 2010 | A1 |
20100241691 | Savitzky et al. | Sep 2010 | A1 |
20100245535 | Mauchly | Sep 2010 | A1 |
20100250817 | Collopy et al. | Sep 2010 | A1 |
20100262266 | Chang et al. | Oct 2010 | A1 |
20100262925 | Liu et al. | Oct 2010 | A1 |
20100275164 | Morikawa | Oct 2010 | A1 |
20100302033 | Devenyi et al. | Dec 2010 | A1 |
20100303227 | Gupta | Dec 2010 | A1 |
20100316207 | Brunson | Dec 2010 | A1 |
20100318399 | Li et al. | Dec 2010 | A1 |
20110072037 | Lotzer | Mar 2011 | A1 |
20110075830 | Dreher et al. | Mar 2011 | A1 |
20110087745 | O'Sullivan et al. | Apr 2011 | A1 |
20110117535 | Benko et al. | May 2011 | A1 |
20110131498 | Chao et al. | Jun 2011 | A1 |
20110154427 | Wei | Jun 2011 | A1 |
20110230209 | Kilian | Sep 2011 | A1 |
20110264928 | Hinckley | Oct 2011 | A1 |
20110270609 | Jones et al. | Nov 2011 | A1 |
20110271211 | Jones et al. | Nov 2011 | A1 |
20110283226 | Basson et al. | Nov 2011 | A1 |
20110314139 | Song et al. | Dec 2011 | A1 |
20120009890 | Curcio et al. | Jan 2012 | A1 |
20120013704 | Sawayanagi et al. | Jan 2012 | A1 |
20120013768 | Zurek | Jan 2012 | A1 |
20120026279 | Kato | Feb 2012 | A1 |
20120054288 | Wiese et al. | Mar 2012 | A1 |
20120072364 | Ho | Mar 2012 | A1 |
20120084714 | Sirpal et al. | Apr 2012 | A1 |
20120092436 | Pahud et al. | Apr 2012 | A1 |
20120140970 | Kim et al. | Jun 2012 | A1 |
20120179502 | Farooq et al. | Jul 2012 | A1 |
20120190386 | Anderson | Jul 2012 | A1 |
20120192075 | Ebtekar et al. | Jul 2012 | A1 |
20120233020 | Eberstadt et al. | Sep 2012 | A1 |
20120246229 | Carr et al. | Sep 2012 | A1 |
20120246596 | Ording et al. | Sep 2012 | A1 |
20120284635 | Sitrick et al. | Nov 2012 | A1 |
20120296957 | Stinson et al. | Nov 2012 | A1 |
20120303476 | Krzyzanowski et al. | Nov 2012 | A1 |
20120306757 | Keist et al. | Dec 2012 | A1 |
20120306993 | Sellers-Blais | Dec 2012 | A1 |
20120308202 | Murata et al. | Dec 2012 | A1 |
20120313971 | Murata et al. | Dec 2012 | A1 |
20120315011 | Messmer et al. | Dec 2012 | A1 |
20120321058 | Eng et al. | Dec 2012 | A1 |
20120323645 | Spiegel et al. | Dec 2012 | A1 |
20120324512 | Cahnbley et al. | Dec 2012 | A1 |
20130027425 | Yuan | Jan 2013 | A1 |
20130038675 | Malik | Feb 2013 | A1 |
20130047093 | Reuschel et al. | Feb 2013 | A1 |
20130050398 | Krans et al. | Feb 2013 | A1 |
20130055112 | Joseph et al. | Feb 2013 | A1 |
20130061054 | Niccolai | Mar 2013 | A1 |
20130063542 | Bhat et al. | Mar 2013 | A1 |
20130086633 | Schultz | Apr 2013 | A1 |
20130090065 | Fisunenko et al. | Apr 2013 | A1 |
20130091205 | Kotler et al. | Apr 2013 | A1 |
20130091440 | Kotler et al. | Apr 2013 | A1 |
20130094647 | Mauro et al. | Apr 2013 | A1 |
20130113602 | Gilbertson et al. | May 2013 | A1 |
20130113827 | Forutanpour et al. | May 2013 | A1 |
20130120522 | Lian et al. | May 2013 | A1 |
20130124551 | Foo | May 2013 | A1 |
20130129252 | Lauper et al. | May 2013 | A1 |
20130135837 | Kemppinen | May 2013 | A1 |
20130141371 | Hallford et al. | Jun 2013 | A1 |
20130148789 | Hillier et al. | Jun 2013 | A1 |
20130182063 | Jaiswal et al. | Jul 2013 | A1 |
20130185672 | McCormick et al. | Jul 2013 | A1 |
20130198629 | Tandon et al. | Aug 2013 | A1 |
20130210496 | Zakarias et al. | Aug 2013 | A1 |
20130211826 | Mannby | Aug 2013 | A1 |
20130212202 | Lee | Aug 2013 | A1 |
20130215215 | Gage et al. | Aug 2013 | A1 |
20130219278 | Rosenberg | Aug 2013 | A1 |
20130222246 | Booms et al. | Aug 2013 | A1 |
20130225080 | Doss et al. | Aug 2013 | A1 |
20130227433 | Doray et al. | Aug 2013 | A1 |
20130235866 | Tian et al. | Sep 2013 | A1 |
20130242030 | Kato et al. | Sep 2013 | A1 |
20130243213 | Moquin | Sep 2013 | A1 |
20130252669 | Nhiayi | Sep 2013 | A1 |
20130263020 | Heiferman et al. | Oct 2013 | A1 |
20130290421 | Benson et al. | Oct 2013 | A1 |
20130297704 | Alberth, Jr. et al. | Nov 2013 | A1 |
20130300637 | Smits et al. | Nov 2013 | A1 |
20130325970 | Roberts et al. | Dec 2013 | A1 |
20130329865 | Ristock et al. | Dec 2013 | A1 |
20130335507 | Aarrestad et al. | Dec 2013 | A1 |
20140012990 | Ko | Jan 2014 | A1 |
20140028781 | MacDonald | Jan 2014 | A1 |
20140040404 | Pujare et al. | Feb 2014 | A1 |
20140040819 | Duffy | Feb 2014 | A1 |
20140063174 | Junuzovic et al. | Mar 2014 | A1 |
20140068452 | Joseph et al. | Mar 2014 | A1 |
20140068670 | Timmermann et al. | Mar 2014 | A1 |
20140078182 | Utsunomiya | Mar 2014 | A1 |
20140108486 | Borzycki et al. | Apr 2014 | A1 |
20140111597 | Anderson et al. | Apr 2014 | A1 |
20140136630 | Siegel et al. | May 2014 | A1 |
20140157338 | Pearce | Jun 2014 | A1 |
20140161243 | Contreras et al. | Jun 2014 | A1 |
20140195557 | Oztaskent et al. | Jul 2014 | A1 |
20140198175 | Shaffer et al. | Jul 2014 | A1 |
20140237371 | Klemm et al. | Aug 2014 | A1 |
20140253671 | Bentley et al. | Sep 2014 | A1 |
20140280595 | Mani et al. | Sep 2014 | A1 |
20140282213 | Musa et al. | Sep 2014 | A1 |
20140296112 | O'Driscoll et al. | Oct 2014 | A1 |
20140298210 | Park et al. | Oct 2014 | A1 |
20140317561 | Robinson et al. | Oct 2014 | A1 |
20140337840 | Hyde et al. | Nov 2014 | A1 |
20140358264 | Long et al. | Dec 2014 | A1 |
20140372908 | Kashi et al. | Dec 2014 | A1 |
20150004571 | Ironside et al. | Jan 2015 | A1 |
20150009278 | Modai et al. | Jan 2015 | A1 |
20150029301 | Nakatomi et al. | Jan 2015 | A1 |
20150067552 | Leorin et al. | Mar 2015 | A1 |
20150070835 | Mclean | Mar 2015 | A1 |
20150074189 | Cox et al. | Mar 2015 | A1 |
20150081885 | Thomas et al. | Mar 2015 | A1 |
20150082350 | Ogasawara et al. | Mar 2015 | A1 |
20150085060 | Fish et al. | Mar 2015 | A1 |
20150088575 | Asli et al. | Mar 2015 | A1 |
20150089393 | Zhang et al. | Mar 2015 | A1 |
20150089394 | Chen et al. | Mar 2015 | A1 |
20150113050 | Stahl | Apr 2015 | A1 |
20150113369 | Chan et al. | Apr 2015 | A1 |
20150128068 | Kim | May 2015 | A1 |
20150172120 | Dwarampudi et al. | Jun 2015 | A1 |
20150178626 | Pielot et al. | Jun 2015 | A1 |
20150215365 | Shaffer et al. | Jul 2015 | A1 |
20150254760 | Pepper | Sep 2015 | A1 |
20150288774 | Larabie-Belanger | Oct 2015 | A1 |
20150301691 | Qin | Oct 2015 | A1 |
20150304120 | Xiao et al. | Oct 2015 | A1 |
20150304366 | Bader-Natal et al. | Oct 2015 | A1 |
20150319113 | Gunderson et al. | Nov 2015 | A1 |
20150350126 | Xue | Dec 2015 | A1 |
20150373063 | Vashishtha et al. | Dec 2015 | A1 |
20150373414 | Kinoshita | Dec 2015 | A1 |
20160037304 | Dunkin et al. | Feb 2016 | A1 |
20160043986 | Ronkainen | Feb 2016 | A1 |
20160044159 | Wolff et al. | Feb 2016 | A1 |
20160044380 | Barrett | Feb 2016 | A1 |
20160050079 | Martin De Nicolas et al. | Feb 2016 | A1 |
20160050160 | Li et al. | Feb 2016 | A1 |
20160050175 | Chaudhry et al. | Feb 2016 | A1 |
20160070758 | Thomson et al. | Mar 2016 | A1 |
20160071056 | Ellison et al. | Mar 2016 | A1 |
20160072862 | Bader-Natal et al. | Mar 2016 | A1 |
20160094593 | Priya | Mar 2016 | A1 |
20160105345 | Kim et al. | Apr 2016 | A1 |
20160110056 | Hong et al. | Apr 2016 | A1 |
20160165056 | Bargetzi et al. | Jun 2016 | A1 |
20160173537 | Kumar et al. | Jun 2016 | A1 |
20160182580 | Nayak | Jun 2016 | A1 |
20160266609 | McCracken | Sep 2016 | A1 |
20160269411 | Malachi | Sep 2016 | A1 |
20160277461 | Sun et al. | Sep 2016 | A1 |
20160283909 | Adiga | Sep 2016 | A1 |
20160307165 | Grodum et al. | Oct 2016 | A1 |
20160309037 | Rosenberg et al. | Oct 2016 | A1 |
20160321347 | Zhou et al. | Nov 2016 | A1 |
20170006162 | Bargetzi et al. | Jan 2017 | A1 |
20170006446 | Harris et al. | Jan 2017 | A1 |
20170070706 | Ursin et al. | Mar 2017 | A1 |
20170093874 | Uthe | Mar 2017 | A1 |
20170104961 | Pan et al. | Apr 2017 | A1 |
20170171260 | Jerrard-Dunne et al. | Jun 2017 | A1 |
20170324850 | Snyder et al. | Nov 2017 | A1 |
Number | Date | Country |
---|---|---|
101055561 | Oct 2007 | CN |
101076060 | Nov 2007 | CN |
102572370 | Jul 2012 | CN |
102655583 | Sep 2012 | CN |
101729528 | Nov 2012 | CN |
102938834 | Feb 2013 | CN |
103141086 | Jun 2013 | CN |
204331453 | May 2015 | CN |
3843033 | Sep 1991 | DE |
959585 | Nov 1999 | EP |
2773131 | Sep 2014 | EP |
2341686 | Aug 2016 | EP |
WO 9855903 | Dec 1998 | WO |
2008139269 | Nov 2008 | WO |
WO 2012167262 | Dec 2012 | WO |
WO 2014118736 | Aug 2014 | WO |
Entry |
---|
mh acoustics, em32 Eigenmike® microphone array release notes (v15.0), Apr. 26, 2013. |
mh acoustics, em32 Eigenmike® microphone array release notes (v15.0), Apr. 27, 2013. |
Author Unknown, “A Primer on the H.323 Series Standard,” Version 2.0, available at http://www.packetizer.com/voip/h323/papers/primer/, retrieved on Dec. 20, 2006, 17 pages. |
Author Unknown, “'I can see the future' 10 predictions concerning cell-phones,” Surveillance Camera Players, http://www.notbored.org/cell-phones.html, Jun. 21, 2003, 2 pages. |
Author Unknown, “Active screen follows mouse and dual monitors,” KDE Community Forums, Apr. 13, 2010, 3 pages. |
Author Unknown, “Implementing Media Gateway Control Protocols” A RADVision White Paper, Jan. 27, 2002, 16 pages. |
Author Unknown, “Manage Meeting Rooms in Real Time,” Jan. 23, 2017, door-tablet.com, 7 pages. |
Averusa, “Interactive Video Conferencing K-12 applications,” copyright 2012, http://www.averusa.com/education/downloads/hvc brochure goved.pdf (last accessed Oct. 11, 2013). |
Choi, Jae Young, et al; “Towards an Automatic Face Indexing System for Actor-based Video Services in an IPTV Environment,” IEEE Transactions on 56, No. 1 (2010): 147-155. |
Cisco Systems, Inc. “Cisco webex: WebEx Meeting Center User Guide For Hosts, Presenters, and Participants” © 1997-2013, pp. 1-394 plus table of contents. |
Cisco Systems, Inc., “Cisco Webex Meetings for iPad and iPhone Release Notes,” Version 5.0, Oct. 2013, 5 pages. |
Cisco Systems, Inc., “Cisco WebEx Meetings Server System Requirements, Release 1.5,” Aug. 14, 2013, 30 pages. |
Cisco Systems, Inc., “Cisco Unified Personal Communicator 8.5”, 2011, 9 pages. |
Cisco White Paper, “Web Conferencing: Unleash the Power of Secure, Real-Time Collaboration,” pp. 1-8, 2014. |
Clarke, Brant, “Polycom Announces RealPresence Group Series,” dated Oct. 8, 2012, available at http://www.323.tv/news/polycom-realpresence-group-series (last accessed Oct. 11, 2013). |
Clauser, Grant, et al., “Is the Google Home the voice-controlled speaker for you?,” The Wire Cutter, Nov. 22, 2016, pp. 1-15. |
Cole, Camille, et al., “Videoconferencing for K-12 Classrooms, Second Edition (excerpt),” http://www.iste.org/docs/excerpts/VIDCO2-excerpt.pdf (last accessed Oct. 11, 2013), 2009. |
Eichen, Elliot, et al., “Smartphone Docking Stations and Strongly Converged VoIP Clients for Fixed-Mobile convergence,” IEEE Wireless Communications and Networking Conference: Services, Applications and Business, 2012, pp. 3140-3144. |
Epson, “BrightLink Pro Projector,” http://www.epson.com/cgi-bin/Store/jsp/Landing/brightlink-pro-interactive-projectors.do?ref=van brightlink-pro, dated 2013 (last accessed Oct. 11, 2013). |
Grothaus, Michael, “How Interactive Product Placements Could Save Television,” Jul. 25, 2013, 4 pages. |
Hannigan, Nancy Kruse, et al., “The IBM Lotus Sametime V8 Family: Extending the IBM Unified Communications and Collaboration Strategy” (2007), available at http://www.ibm.com/developerworks/lotus/library/sametime8-new/, 10 pages. |
Hirschmann, Kenny, “TWIDDLA: Smarter Than The Average Whiteboard,” Apr. 17, 2014, 2 pages. |
Infocus, “Mondopad,” http://www.infocus.com/sites/default/files/InFocus-Mondopad-INF5520a-INF7021-Datasheet-EN.pdf (last accessed Oct. 11, 2013), 2013. |
Maccormick, John, “Video Chat with Multiple Cameras,” CSCW '13, Proceedings of the 2013 conference on Computer supported cooperative work companion, pp. 195-198, ACM, New York, NY, USA, 2013. |
Microsoft, “Positioning Objects on Multiple Display Monitors,” Aug. 12, 2012, 2 pages. |
Mullins, Robert, “Polycom Adds Tablet Videoconferencing,” available at http://www.informationweek.com/telecom/unified-communications/polycom-adds-tablet-videoconferencing/231900680, dated Oct. 12, 2011 (last accessed Oct. 11, 2013). |
Nu-Star Technologies, “Interactive Whiteboard Conferencing,” http://www.nu-star.com/interactive-conf.php, dated 2013 (last accessed Oct. 11, 2013). |
Nyamgondalu, Nagendra, “Lotus Notes Calendar And Scheduling Explained!” IBM, Oct. 18, 2004, 10 pages. |
Polycom, “Polycom RealPresence Mobile: Mobile Telepresence & Video Conferencing,” http://www.polycom.com/products-services/hd-telepresence-video-conferencing/realpresence-mobile.html#stab1 (last accessed Oct. 11, 2013), 2013. |
Polycom, “Polycom Turns Video Display Screens into Virtual Whiteboards with First Integrated Whiteboard Solution for Video Collaboration,” http://www.polycom.com/company/news/press-releases/2011/20111027 2.html, dated Oct. 27, 2011. |
Polycom, “Polycom UC Board, Transforming ordinary surfaces into virtual Whiteboards” 2012, Polycom, Inc., San Jose, CA, http://www.uatg.com/pdf/polycom/polycom-uc-board-_datasheet.pdf, (last accessed Oct. 11, 2013). |
Schreiber, Danny, “The Missing Guide for Google Hangout Video Calls,” Jun. 5, 2014, 6 pages. |
Shervington, Martin, “Complete Guide to Google Hangouts for Businesses and Individuals,” Mar. 20, 2014, 15 pages. |
Shi, Saiqi, et al., “Notification That a Mobile Meeting Attendee Is Driving,” May 20, 2013, 13 pages. |
Stevenson, Nancy, “Webex Web Meetings for Dummies” 2005, Wiley Publishing Inc., Indianapolis, Indiana, USA, 339 pages. |
Stodle, Daniel, et al., “Gesture-Based, Touch-Free Multi-User Gaming on Wall-Sized, High-Resolution Tiled Displays,” 2008, 13 pages. |
Thompson, Phil, et al., “Agent Based Ontology Driven Virtual Meeting Assistant,” Future Generation Information Technology, Springer Berlin Heidelberg, 2010, 4 pages. |
TNO, “Multi-Touch Interaction Overview,” Dec. 1, 2009, 12 pages. |
Toga, James, et al., “Demystifying Multimedia Conferencing Over the Internet Using the H.323 Set of Standards,” Intel Technology Journal Q2, 1998, 11 pages. |
Ubuntu, “Force Unity to open new window on the screen where the cursor is?” Sep. 16, 2013, 1 page. |
VB Forums, “Pointapi,” Aug. 8, 2001, 3 pages. |
Vidyo, “VidyoPanorama,” http://www.vidyo.com/products/vidyopanorama/, dated 2013 (last accessed Oct. 11, 2013). |
Number | Date | Country | |
---|---|---|---|
20180359562 A1 | Dec 2018 | US |