This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2016-094625, filed on May 10, 2016, the entire contents of which are incorporated herein by reference.
The present invention relates to a band extension technique, and more particularly to an encoding device, a decoding device, and a communication system for extending a voice band.
In order to improve the quality of a voice signal in a communication system, a threshold frequency is defined, on a transmission side, within a passband that defines the maximum and minimum frequencies of the voice signal, and a voice signal component having a frequency lower than the threshold frequency is not compressed. On the other hand, a voice signal component having a frequency higher than the threshold frequency is compressed into the region between the threshold frequency and the maximum frequency of the passband and then transmitted. On a reception side, the compressed voice signal is extended, harmonic information is generated based on the non-compressed voice signal, and a suitable harmonic is added to the extended voice signal based on the harmonic information (see, for example, Patent Document 1).
When intense compression is applied to frequencies higher than the threshold frequency while lower frequencies are left substantially uncompressed, the quality and clarity of voice may decrease if the compressed frequencies are reproduced as they are, without being extended, on a reception side. In order to improve the quality and clarity of voice, it is necessary to perform, on the reception side, equalization processing tailored to each speaker or language, so that adjustment must be made every time. Further, when voice outside a band is reproduced, it is necessary to analyze the received voice, and hence a processing load may increase due to advanced voice signal processing, a speaker output may be delayed due to delay processing, or an uncomfortable voice may be reproduced due to the generation of an unnecessary signal.
In order to solve the above problems, an encoding device according to an aspect of the present embodiment comprises: an input unit that inputs a voice signal; a first encoding unit that generates a first encoded signal by encoding a component within a first band in the voice signal input in the input unit; a frequency shifting unit that shifts the frequency of a component within a second band in the voice signal input in the input unit, the second band having a frequency higher than that of the first band, to the frequency of a component within the first band; a second encoding unit that generates a second encoded signal by encoding the component whose frequency has been shifted in the frequency shifting unit; and an output unit that outputs both the first encoded signal generated in the first encoding unit and the second encoded signal generated in the second encoding unit.
Another aspect of the present embodiment is a decoding device. The device comprises: an input unit that inputs both a first encoded signal obtained by encoding a component within a first band in a voice signal and a second encoded signal obtained by shifting the frequency of a component within a second band in the voice signal, the second band having a frequency higher than that of the first band, to the frequency of a component within the first band and by encoding the latter component; a first decoding unit that generates a first voice component within the first band by decoding the first encoded signal input in the input unit; a second decoding unit that generates a second voice component within the first band by decoding the second encoded signal input in the input unit; a frequency shifting unit that shifts the frequency of the second voice component generated in the second decoding unit to the frequency of a component within the second band; and a combination unit that combines the first voice component generated in the first decoding unit and the second voice component whose frequency has been shifted in the frequency shifting unit and outputs the combined voice component.
Still another aspect of the present embodiment is a communication system. This communication system comprises an encoding device and a decoding device. The encoding device includes: an input unit that inputs a voice signal; a first encoding unit that generates a first encoded signal by encoding a component within a first band in the voice signal input in the input unit; a frequency shifting unit that shifts the frequency of a component within a second band in the voice signal input in the input unit, the second band having a frequency higher than that of the first band, to the frequency of a component within the first band; a second encoding unit that generates a second encoded signal by encoding the component whose frequency has been shifted in the frequency shifting unit; and an output unit that outputs both the first encoded signal generated in the first encoding unit and the second encoded signal generated in the second encoding unit. The decoding device includes: an input unit that inputs both the first encoded signal and the second encoded signal from the encoding device; a first decoding unit that generates a first voice component within the first band by decoding the first encoded signal input in the input unit; a second decoding unit that generates a second voice component within the first band by decoding the second encoded signal input in the input unit; a frequency shifting unit that shifts the frequency of the second voice component generated in the second decoding unit to the frequency of a component within the second band; and a combination unit that combines the first voice component generated in the first decoding unit and the second voice component whose frequency has been shifted in the frequency shifting unit and outputs the combined voice component.
It is to be noted that any optional combination of the above constituent elements, and any embodiment obtained by converting what is expressed by the present embodiment into a method, an apparatus, a system, a recording medium, a computer program, or the like, is also effective as an aspect of the present embodiment.
Embodiments will now be described, by way of example only, with reference to the accompanying drawings, which are meant to be exemplary, not limiting, and wherein like elements are numbered alike in several figures, in which:
The invention will now be described by reference to the preferred embodiments. This is not intended to limit the scope of the present invention, but to exemplify the invention.
Prior to the specific description of the present invention, an outline thereof will be given. First Embodiment relates to a communication system that transmits a voice signal from a transmitter to a receiver. When the communication system is a digital wireless communication system, a vocoder method is often used for transmitting a voice signal. The vocoder method is a voice compression technology for communication. The transmitter transmits parameterized voice signals without directly transmitting the voice waveform, and the receiver synthesizes the original voice from the received parameters.
In such a vocoder method, frequency components higher than the Nyquist frequency are removed. For example, “AMBE (registered trademark) +2” is used as a vocoder method in NXDN (registered trademark), which is a standard for digital professional-use wireless systems; the sampling frequency in AMBE+2 is set to 8 kHz, and hence band limitation is performed at 4 kHz. If voice having a frequency of 4 kHz or higher is lost, voice quality and clarity may decrease. In order to improve voice quality and clarity, it is necessary to perform equalization processing that emphasizes high frequencies in the receiver, or to perform band extension by advanced signal processing. In addition, when a voice having a frequency of 4 kHz or higher is reproduced, it is necessary to analyze the received voice, and hence a processing load may increase due to advanced voice signal processing, a speaker output may be delayed due to delay processing, or an uncomfortable voice may be reproduced due to the generation of an unnecessary signal, as described above.
In order to easily improve voice quality and clarity under such circumstances, the transmitter according to the present embodiment vocoder-encodes a component of 0 to 4 kHz in a voice signal, and vocoder-encodes a component of 4 to 8 kHz after shifting the frequency thereof to a frequency of 0 to 4 kHz. On the other hand, the receiver vocoder-decodes the latter vocoder-encoded signal and then shifts the frequency thereof to a frequency of 4 to 8 kHz, and reproduces a voice of 0 to 8 kHz by combining the result of the frequency shifting and the result of vocoder-decoding the vocoder-encoded signal of 0 to 4 kHz.
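The processing flow described above can be illustrated with a short sketch. The following is a minimal outline only, written in Python with NumPy/SciPy: vocoder_encode() and vocoder_decode() are hypothetical placeholders for an 8 kHz vocoder such as AMBE+2 (whose actual interface is not reproduced here), and the frequency shift is realized by spectral mirroring (multiplication by (−1)^n), which is merely one possible way of moving the 4 to 8 kHz component into 0 to 4 kHz.

```python
import numpy as np
from scipy.signal import resample_poly

def encode(voice_48k, vocoder_encode):
    """Transmitter side: split 0-8 kHz into two 4 kHz-wide encoded signals."""
    low = resample_poly(voice_48k, 1, 6)             # 48 kHz -> 8 kHz, keeps 0-4 kHz
    mid = resample_poly(voice_48k, 1, 3)             # 48 kHz -> 16 kHz, keeps 0-8 kHz
    shifted = mid * (-1.0) ** np.arange(len(mid))    # mirror 4-8 kHz down to 0-4 kHz
    high = resample_poly(shifted, 1, 2)              # 16 kHz -> 8 kHz, keeps shifted band
    return vocoder_encode(low), vocoder_encode(high)  # first and second encoded signals

def decode(first_enc, second_enc, vocoder_decode):
    """Receiver side: decode both signals, restore the 4-8 kHz band, and combine."""
    low = resample_poly(vocoder_decode(first_enc), 6, 1)    # 8 kHz -> 48 kHz
    high = resample_poly(vocoder_decode(second_enc), 2, 1)  # 8 kHz -> 16 kHz
    high = high * (-1.0) ** np.arange(len(high))            # mirror back to 4-8 kHz
    high = resample_poly(high, 3, 1)                        # 16 kHz -> 48 kHz
    n = min(len(low), len(high))                            # delay alignment omitted here
    return low[:n] + high[:n]                               # combined 0-8 kHz voice
```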
The microphone 20 inputs the voice produced by a speaker and converts it into an electric signal. The microphone 20 outputs the voice converted into an electric signal (hereinafter, referred to as a “voice signal”) to the IF unit 22. The IF unit 22 inputs the voice signal from the microphone 20, and outputs the voice signal to the encoding device 24. In this case, the IF unit 22 may perform any processing on the voice signal. The encoding device 24 generates both a first encoded signal and a second encoded signal by inputting the voice signal from the IF unit 22 and vocoder-encoding the voice signal. The details of the first encoded signal and the second encoded signal will be described later. The encoding device 24 outputs the first encoded signal and the second encoded signal to the transmission unit 26. The transmission unit 26 inputs the first encoded signal and the second encoded signal from the encoding device 24, and transmits a wireless signal including these signals. The transmission unit 26 conforms to a digital professional-use wireless system such as NXDN, for example.
The reception unit 30 receives the wireless signal from the transmission unit 26. The reception unit 30 acquires the first encoded signal and the second encoded signal from the wireless signal, and outputs them to the decoding device 32. The decoding device 32 generates a voice signal by vocoder-decoding the first encoded signal and the second encoded signal. The decoding device 32 outputs the voice signal to the IF unit 34. The IF unit 34 inputs the voice signal from the decoding device 32, and outputs it to the speaker 36. In this case, the IF unit 34 may perform processing corresponding to the processing in the IF unit 22 on the voice signal. The speaker 36 inputs the voice signal from the IF unit 34, and converts it into a voice to be output.
The input unit 40 inputs the voice signal from the non-illustrated IF unit 22. The sampling frequency for the input voice signal is, for example, 48 kHz. The input unit 40 outputs the voice signal to the first decimation unit 42a and the second decimation unit 42b.
The first decimation unit 42a inputs the voice signal from the input unit 40. The first decimation unit 42a downsamples the sampling frequency for the voice signal from 48 kHz to 8 kHz. The downsampled voice signal contains a voice component within a band of 0 to 4 kHz. Herein, the band of 0 to 4 kHz is also referred to as a first band, and hence the voice component within the band of 0 to 4 kHz can also be referred to as a component within the first band. The first decimation unit 42a outputs the downsampled voice signal (hereinafter, this is also referred to as the “voice signal”), i.e., the component within the first band to the first encoding unit 46a.
The first encoding unit 46a inputs the voice signal from the first decimation unit 42a, i.e., the component within the first band. The first encoding unit 46a vocoder-encodes the component within the first band. In this case, the sampling frequency of the vocoder processing is 8 kHz. The first encoding unit 46a outputs the vocoder-encoded component within the first band (hereinafter, referred to as the “first encoded signal”) to the output unit 48.
The second decimation unit 42b inputs the voice signal from the input unit 40. The second decimation unit 42b downsamples the sampling frequency for the voice signal from 48 kHz to 16 kHz. Herein, the sampling frequency is downsampled up to 16 kHz in order to handle a voice component outside a band of 8 kHz. The second decimation unit 42b outputs the downsampled voice signal (hereinafter, this is also referred to as the “voice signal”) to the frequency shifting unit 44.
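As a sketch of the two decimation branches (the sampling frequencies are those given in the description; the choice of an FIR anti-aliasing filter is an assumption of this sketch), scipy.signal.decimate applies a low-pass filter before discarding samples:

```python
import numpy as np
from scipy.signal import decimate

fs_in = 48000
voice_48k = np.random.randn(fs_in)  # stand-in for one second of the input voice signal

# First decimation unit 42a: 48 kHz -> 8 kHz. The anti-aliasing low-pass limits the
# result to roughly 0-4 kHz, i.e., the component within the first band.
voice_8k = decimate(voice_48k, 6, ftype='fir', zero_phase=True)

# Second decimation unit 42b: 48 kHz -> 16 kHz. Components up to 8 kHz survive, so
# the 4-8 kHz component to be frequency-shifted later is preserved.
voice_16k = decimate(voice_48k, 3, ftype='fir', zero_phase=True)
```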
The frequency shifting unit 44 inputs the voice signal from the second decimation unit 42b. The frequency shifting unit 44 shifts the frequency of a voice component of 4 to 8 kHz contained in the voice signal to the frequency of a component of 0 to 4 kHz. This is equivalent to the fact that the frequency of a voice component of 4 to 8 kHz, outside the band, is shifted to the frequency of a component within the band of 0 to 4 kHz. Herein, the band of 4 to 8 kHz is also referred to as a second band, and hence it can also be said that the frequency shifting unit 44 shifts the frequency of a component within the second band in the voice signal, the second band having a frequency higher than that of the first band, to the frequency of a component within the first band. This is done because the bandwidth that can be processed in the later-described second encoding unit 46b is limited to 4 kHz; by shifting its frequency, a high-frequency component outside the band can also be processed in the second encoding unit 46b. Herein, the bandwidth of the second band is the same as that of the first band. The frequency shifting unit 44 outputs the voice signal whose frequency has been shifted (hereinafter, this is also referred to as the “voice signal”), i.e., the component within the second band whose frequency has been shifted to that of a component within the first band (hereinafter, this is also referred to as the “component within the second band”) to the third decimation unit 42c.
The third decimation unit 42c inputs the voice signal from the frequency shifting unit 44. The third decimation unit 42c downsamples the sampling frequency for the voice signal from 16 kHz to 8 kHz. The third decimation unit 42c outputs the downsampled voice signal (hereinafter, this is also referred to as the “voice signal”) to the second encoding unit 46b. Herein, the output voice signal also contains the component within the second band.
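One simple realization of the frequency shifting unit 44 followed by the third decimation unit 42c is sketched below with an illustrative 6 kHz test tone; the mirroring method and the test frequency are assumptions of this sketch, not requirements of the description.

```python
import numpy as np
from scipy.signal import decimate

fs = 16000
t = np.arange(fs) / fs
tone_6k = np.sin(2 * np.pi * 6000 * t)        # test component within the second band

# Frequency shifting unit 44: multiplying by (-1)^n translates the spectrum by
# fs/2 = 8 kHz, so a component at f appears at 8 kHz - f (4-8 kHz -> 0-4 kHz).
shifted = tone_6k * (-1.0) ** np.arange(len(tone_6k))

# Third decimation unit 42c: 16 kHz -> 8 kHz; the anti-aliasing filter keeps only
# 0-4 kHz, which now carries the former 4-8 kHz content.
shifted_8k = decimate(shifted, 2, ftype='fir', zero_phase=True)

# The dominant spectral peak of shifted_8k now sits near 8 kHz - 6 kHz = 2 kHz.
f_peak = np.argmax(np.abs(np.fft.rfft(shifted_8k))) * 8000 / len(shifted_8k)
print(round(f_peak))   # approximately 2000
```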
The second encoding unit 46b inputs the voice signal from the third decimation unit 42c, i.e., the component within the second band. The second encoding unit 46b vocoder-encodes the component within the second band. In this case, the sampling frequency of the vocoder processing is 8 kHz. The second encoding unit 46b outputs the vocoder-encoded component within the second band (hereinafter, referred to as the “second encoded signal”) to the output unit 48.
The output unit 48 inputs both the first encoded signal from the first encoding unit 46a and the second encoded signal from the second encoding unit 46b, and outputs these signals. Specifically, the output unit 48 outputs the first encoded signal and the second encoded signal in an order adjusted to the order in which the non-illustrated transmission unit 26 transmits signals, i.e., the order determined by the frames of a wireless communication channel.
Herein, prior to the description of this, a frame format in a wireless communication channel (RTCH) used during voice communication in an NXDN 9600 bps (Half Rate) system will be described, as a comparison object, by referring to
Next, two types of formats for storing the second encoded signal will be described; either of them may be used. In order to conform to the first format, the output unit 48 alternately outputs the first encoded signal and the second encoded signal.
In this case, VCH, VCH (extension), VCH, and VCH (extension) are arranged in this order, and hence a combination of consecutive VCH and VCH (extension) forms a single voice signal. Therefore, the receiver 12 acquires voice components within the band by vocoder-decoding VCH, and then successively acquires voice components outside the band by vocoder-decoding VCH (extension). Further, the receiver 12 reproduces a voice by combining the result of vocoder-decoding VCH and that of vocoder-decoding VCH (extension), and hence adjustment of the order of the vocoder-decoding results becomes unnecessary, whereby processing becomes simple.
In order to conform to the second format, the output unit 48 continuously outputs a plurality of the first encoded signals, and then continuously outputs a plurality of the second encoded signals.
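A hedged sketch of the two output orderings handled by the output unit 48 is shown below; the slot labels follow the VCH/VCH (extension) naming used above, while the actual number of slots per RTCH frame follows the NXDN specification and is not reproduced here.

```python
from typing import List, Tuple

def order_alternating(first: List[bytes], second: List[bytes]) -> List[Tuple[str, bytes]]:
    """First format: VCH, VCH (extension), VCH, VCH (extension), ..."""
    slots = []
    for f, s in zip(first, second):
        slots.append(("VCH", f))               # first encoded signal
        slots.append(("VCH (extension)", s))   # second encoded signal
    return slots

def order_grouped(first: List[bytes], second: List[bytes]) -> List[Tuple[str, bytes]]:
    """Second format: all first encoded signals first, then all second encoded signals."""
    return [("VCH", f) for f in first] + [("VCH (extension)", s) for s in second]
```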
This configuration is implemented, in terms of hardware, by elements such as a computer CPU, memory, and other LSIs, and, in terms of software, by a program loaded into memory or the like. Herein, functional blocks implemented by the cooperation of hardware and software are depicted. Thus, it is to be understood by a person skilled in the art that these functional blocks can be implemented in various forms: solely in hardware, solely in software, or through a combination of hardware and software.
The input unit 60 inputs the first encoded signal and the second encoded signal from the non-illustrated reception unit 30. When the format of the signal received in the reception unit 30 corresponds to
The first decoding unit 62a inputs the first encoded signal from the input unit 60. The first decoding unit 62a vocoder-decodes the first encoded signal. In this case, the sampling frequency of the vocoder processing is 8 kHz. The first decoding unit 62a outputs the vocoder-decoded first encoded signal (hereinafter, referred to as a “first voice component”) to the first interpolation unit 64a. The first voice component is a voice component within a band of 0 to 4 kHz, and is a voice component within the first band. Because the first voice component is contained in a voice signal having a sampling frequency of 8 kHz, it can also be said that the voice signal is output to the first interpolation unit 64a.
The first interpolation unit 64a inputs the voice signal from the first decoding unit 62a, i.e., the first voice component. The first interpolation unit 64a upsamples the sampling frequency for the voice signal from 8 kHz to 48 kHz. The upsampled voice signal also contains the first voice component within the first band. The first interpolation unit 64a outputs the upsampled voice signal (hereinafter, this is also referred to as the “voice signal”), i.e., the first voice component to the delay unit 66.
The delay unit 66 inputs the voice signal from the first interpolation unit 64a, i.e., the first voice component. The delay unit 66 delays the voice signal only by a period in accordance with the format illustrated in
The second decoding unit 62b inputs the second encoded signal from the input unit 60. The second decoding unit 62b vocoder-decodes the second encoded signal. In this case, the sampling frequency of the vocoder processing is 8 kHz. The second decoding unit 62b outputs the vocoder-decoded second encoded signal (hereinafter, referred to as a “second voice component”) to the second interpolation unit 64b. The second voice component is originally a voice component of 4 to 8 kHz, outside the band, i.e., a voice component within the second band. Herein, the frequency of the second voice component has been shifted to that of a component within the band of 0 to 4 kHz, i.e., within the first band. Also, the second voice component is contained in a voice signal having a sampling frequency of 8 kHz, and hence it can also be said that the voice signal is output to the second interpolation unit 64b.
The second interpolation unit 64b inputs the voice signal from the second decoding unit 62b, i.e., the second voice component. The second interpolation unit 64b upsamples the sampling frequency for the voice signal from 8 kHz to 16 kHz. The upsampled voice signal also contains the second voice component within the first band. The second interpolation unit 64b outputs the upsampled voice signal (hereinafter, this is also referred to as the “voice signal”), i.e., the second voice component to the frequency shifting unit 68.
The frequency shifting unit 68 inputs the voice signal from the second interpolation unit 64b, i.e., the second voice component. The frequency shifting unit 68 shifts the frequency of the voice component of 0 to 4 kHz contained in the voice signal to that of a component of 4 to 8 kHz. This is equivalent to returning the out-of-band voice component, whose frequency had been shifted into the band of 0 to 4 kHz, to its original band of 4 to 8 kHz. Therefore, it is equivalent to shifting the frequency of the second voice component within the first band to that of a component within the second band. The frequency shifting unit 68 outputs the voice signal whose frequency has been shifted (hereinafter, this is also referred to as the “voice signal”), i.e., the second voice component whose frequency has been shifted to that of a component within the second band (hereinafter, this is also referred to as the “second voice component”) to the third interpolation unit 64c.
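Continuing the illustrative test tone from the encoding-side sketch (the variable shifted_8k produced there), the following assumes the same spectral-mirroring realization and shows the second interpolation unit 64b and the frequency shifting unit 68 returning the component to the second band:

```python
import numpy as np
from scipy.signal import resample_poly

# shifted_8k carries the former 6 kHz component at 2 kHz (see the encoder sketch).
up_16k = resample_poly(shifted_8k, 2, 1)               # second interpolation: 8 kHz -> 16 kHz
restored = up_16k * (-1.0) ** np.arange(len(up_16k))   # mirror 0-4 kHz back to 4-8 kHz

f_peak = np.argmax(np.abs(np.fft.rfft(restored))) * 16000 / len(restored)
print(round(f_peak))   # approximately 6000: the original out-of-band frequency
```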
The third interpolation unit 64c inputs the voice signal from the frequency shifting unit 68, i.e., the second voice component. The third interpolation unit 64c upsamples the sampling frequency for the voice signal from 16 kHz to 48 kHz. The upsampled voice signal also contains the second voice component within the second band. The third interpolation unit 64c outputs the upsampled voice signal (hereinafter, this is also referred to as the “voice signal”), i.e., the second voice component to the combination unit 70.
The combination unit 70 inputs the voice signal from the delay unit 66, i.e., the first voice component, and inputs the voice signal from the third interpolation unit 64c, i.e., the second voice component. The combination unit 70 combines the first voice component and the second voice component with addition processing. The voice component obtained by combining the first voice component and the second voice component is contained in the voice signal. The combination unit 70 outputs the voice signal to the non-illustrated IF unit 34.
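A minimal sketch of the delay unit 66 and the combination unit 70, assuming the delay length (in samples) is known from the frame format and filter delays; the value itself is treated as a given parameter here.

```python
import numpy as np

def delay_and_combine(first_voice_48k: np.ndarray,
                      second_voice_48k: np.ndarray,
                      delay_samples: int) -> np.ndarray:
    """Delay the first voice component so it aligns with the second, then add them."""
    delayed = np.concatenate([np.zeros(delay_samples), first_voice_48k])
    n = min(len(delayed), len(second_voice_48k))
    return delayed[:n] + second_voice_48k[:n]
```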
An operation of the communication system 100 configured as described above will be described.
Hereinafter, a configuration for further improving voice quality and clarity in each of the encoding device 24 and the decoding device 32 described above will be described. The encoding device 24 and the decoding device 32 described above do not include an equalizer, for reasons such as suppressing an increase in processing load. On the other hand, at least one of the encoding device 24 and the decoding device 32 herein includes an equalizer.
The first EQ unit 50a inputs the voice signal from the first decimation unit 42a, i.e., the component within the first band. The first EQ unit 50a executes equalization processing on the component within the first band. In the equalization processing, a formant corresponding to a vowel is emphasized, thereby improving the voice quality of the vowel. The equalization processing may be implemented by any publicly known technique, and thus description thereof will be omitted herein. The first EQ unit 50a outputs the equalization-processed component within the first band (hereinafter, this is also referred to as the “component within the first band”), i.e., the voice signal to the first encoding unit 46a.
The second EQ unit 50b inputs the voice signal from the third decimation unit 42c, i.e., the component within the second band. The second EQ unit 50b executes equalization processing on the component within the second band. In the equalization processing, a formant corresponding to a consonant is emphasized, thereby improving the voice quality of the consonant. The equalization processing may be implemented by any publicly known technique, and thus description thereof will be omitted herein. The second EQ unit 50b outputs the equalization-processed component within the second band (hereinafter, this is also referred to as the “component within the second band”), i.e., the voice signal to the second encoding unit 46b.
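A hedged sketch of one way the EQ units 50a and 50b could be realized: emphasis is obtained here by adding a scaled band-pass component back to the signal, which boosts a chosen formant region. The band limits and gain below are illustrative assumptions, and note that the second EQ unit operates on the already frequency-shifted component, so its band limits refer to the shifted (0 to 4 kHz) frequencies.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def emphasize_band(x, fs, f_low, f_high, gain_db):
    """Boost the band [f_low, f_high] by roughly gain_db using a parallel band-pass path."""
    sos = butter(2, [f_low, f_high], btype='bandpass', fs=fs, output='sos')
    boost = 10.0 ** (gain_db / 20.0) - 1.0
    return x + boost * sosfilt(sos, x)

fs = 8000
x = np.random.randn(fs)  # stand-in for one second of voice at the vocoder rate

# First EQ unit 50a: emphasize a vowel-formant region (illustrative band and gain).
vowel_eq = emphasize_band(x, fs, 300, 900, 3.0)
# Second EQ unit 50b: emphasize consonant energy in the shifted component
# (with the mirroring of the earlier sketches, 2-3.5 kHz here maps back to 4.5-6 kHz).
consonant_eq = emphasize_band(x, fs, 2000, 3500, 3.0)
```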
In such a configuration, the encoding device 24 of
According to the present embodiment, the first encoded signal is generated from the component within the first band and the second encoded signal from the component within the second band, and hence a component outside the band can also be encoded. Further, a component outside the band is encoded, and hence voice quality and clarity can be improved. Furthermore, a component within the second band is encoded after the frequency thereof is shifted to that of a component within the first band, and hence a second encoding unit corresponding to the first band can be used. Still furthermore, the first encoded signal is generated from a component within the first band and the second encoded signal from a component within the second band, and hence a voice of 0 to 8 kHz can be reproduced without performing advanced voice signal processing. Still furthermore, the second encoded signal is generated based on a component of 4 to 8 kHz, and hence the unnaturalness of the voice reproduced on the reception side can be reduced. Still furthermore, the first encoded signal and the second encoded signal are alternately output, and hence processing delay can be reduced.
Still furthermore, a plurality of the first encoded signals are continuously output and then a plurality of the second encoded signals are continuously output, and hence changing of the positions of VCH where the first encoded signals are to be stored can be made unnecessary. Still furthermore, changing of the positions of VCH where the first encoded signals are to be stored is made unnecessary, and hence the first encoded signal can be decoded even in a receiver that does not support decoding of the second encoded signal. Still furthermore, the first encoded signal can be decoded even in a receiver that does not support decoding of the second encoded signal, and hence compatibility can be maintained. Still furthermore, equalization processing is executed in the encoding device, and hence voice quality and clarity can be further improved. Still furthermore, equalization processing is executed in the decoding device, and hence voice quality and clarity can be further improved.
Second Embodiment will now be described. Second Embodiment relates to a communication system that transmits a voice signal from a transmitter to a receiver, similarly to First Embodiment. Until now, the NXDN 9600 bps (Half Rate) system has been described as an example of the communication system 100. Therefore, a voice signal with a bandwidth of 8 kHz is divided into a component of 0 to 4 kHz and that of 4 to 8 kHz. In Second Embodiment, a voice signal is equally divided into n components. A communication system 100 according to Second Embodiment is of a type similar to
The first decimation unit 42a and the first encoding unit 46a generate a first encoded signal by vocoder-encoding a component within a first band in a voice signal. The second decimation unit 42b, the first frequency shifting unit 44a, the third decimation unit 42c, and the second encoding unit 46b generate a second encoded signal by shifting the frequency of a component within a second band in the voice signal to that of a component within the first band and then by vocoder-encoding the component. These are the same processing as in First Embodiment. On the other hand, the fourth decimation unit 42d, the second frequency shifting unit 44b, the fifth decimation unit 42e, and the third encoding unit 46c generate a third encoded signal by shifting the frequency of a component within a third band in the voice signal to that of a component within the first band and then by vocoder-encoding the component. The sixth decimation unit 42f, the third frequency shifting unit 44c, the seventh decimation unit 42g, and the fourth encoding unit 46d generate a fourth encoded signal by shifting the frequency of a component within a fourth band in the voice signal to that of a component within the first band and then by vocoder-encoding the component.
That is, the additional frequency shifting unit 52 shifts the frequency of a component within an i-th (i>2) band in a voice signal, the i-th band having a frequency higher than that of the (i−1)-th band, to that of a component within the first band. The i-th encoding unit 54 generates an i-th encoded signal by vocoder-encoding the component whose frequency has been shifted in the additional frequency shifting unit 52. The bandwidths of the first band to the fourth band are the same as each other, and they need not be 4 kHz as in First Embodiment. Also, the voice signal is equally divided into “4” components, but the number of equal divisions is not limited to “4.” Also, the sampling frequency in the decimation unit 42 and the like may be set appropriately. Finally, the output unit 48 also outputs the i-th encoded signal generated in the i-th encoding unit 54.
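The generalization to n bands can be sketched as follows. This is only an outline under stated assumptions: vocoder_encode() is a hypothetical placeholder, and the shift is realized here by band-pass isolation followed by cosine mixing, which is one possible method and differs from the mirroring used in the First Embodiment sketches.

```python
import numpy as np
from math import gcd
from scipy.signal import butter, sosfilt, resample_poly

def encode_n_bands(voice, fs_in, band_width, n_bands, vocoder_encode):
    """Split [0, n_bands*band_width] Hz into equal sub-bands; encode each at 2*band_width Hz."""
    fs_band = 2 * band_width                          # sampling frequency of the vocoder
    g = gcd(fs_band, fs_in)
    t = np.arange(len(voice)) / fs_in
    encoded = []
    for i in range(n_bands):                          # i = 0 corresponds to the first band
        if i == 0:
            band = voice                              # first band: no shift is needed
        else:
            lo, hi = i * band_width, (i + 1) * band_width
            sos = butter(4, [lo, hi], btype='bandpass', fs=fs_in, output='sos')
            # Mixing the isolated band with cos(2*pi*lo*t) moves [lo, hi] down to
            # [0, band_width]; the image around 2*lo is removed by the decimation filter.
            band = sosfilt(sos, voice) * np.cos(2 * np.pi * lo * t) * 2
        sub = resample_poly(band, fs_band // g, fs_in // g)  # decimate to the vocoder rate
        encoded.append(vocoder_encode(sub))           # i-th encoded signal
    return encoded
```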
The first decoding unit 62a and the first interpolation unit 64a generate a first voice component by decoding the first encoded signal. The second decoding unit 62b, the second interpolation unit 64b, the first frequency shifting unit 68a, and the third interpolation unit 64c generate a second voice component obtained by decoding the second encoded signal, and then shift the frequency thereof to that of a component within the second band. These are the same processing as in First Embodiment. On the other hand, the third decoding unit 62c, the fourth interpolation unit 64d, the second frequency shifting unit 68b, and the fifth interpolation unit 64e generate a third voice component obtained by decoding the third encoded signal, and then shift the frequency thereof to that of a component within the third band. The fourth decoding unit 62d, the sixth interpolation unit 64f, the third frequency shifting unit 68c, and the seventh interpolation unit 64g generate a fourth voice component obtained by decoding the fourth encoded signal, and then shift the frequency thereof to that of a component within the fourth band.
That is, the i-th decoding unit 74 generates an i-th voice component within the first band by decoding the i-th encoded signal. The additional frequency shifting unit 76 shifts the frequency of the i-th voice component generated in the i-th decoding unit 74 to that of a component within the i-th band. Also, herein, the bandwidths of the first band to the fourth band are the same as each other, and they need not be 4 kHz as in First Embodiment. Also, the voice signal is equally divided into “4” components, but the number of equal divisions is not limited to “4.” Also, the sampling frequency in the interpolation unit 64 and the like may be set appropriately. Finally, the combination unit 70 also combines the i-th voice component whose frequency has been shifted in the additional frequency shifting unit 76 and outputs the result.
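The matching decoder-side sketch, under the same assumptions as the encoder sketch above (hypothetical vocoder_decode(), cosine mixing); per-branch delay compensation corresponding to the delay unit is omitted for brevity.

```python
import numpy as np
from math import gcd
from scipy.signal import butter, sosfilt, resample_poly

def decode_n_bands(encoded, fs_out, band_width, vocoder_decode):
    """Decode each sub-band, move it back to its original position, and sum all bands."""
    fs_band = 2 * band_width
    g = gcd(fs_band, fs_out)
    combined = None
    for i, enc in enumerate(encoded):
        sub = vocoder_decode(enc)                             # i-th voice component (first band)
        up = resample_poly(sub, fs_out // g, fs_band // g)    # interpolate to the output rate
        if i > 0:
            lo, hi = i * band_width, (i + 1) * band_width
            t = np.arange(len(up)) / fs_out
            up = up * np.cos(2 * np.pi * lo * t) * 2          # translate back toward [lo, hi]
            sos = butter(4, [lo, hi], btype='bandpass', fs=fs_out, output='sos')
            up = sosfilt(sos, up)                             # remove the lower mixing image
        if combined is None:
            combined = up
        else:
            n = min(len(combined), len(up))
            combined = combined[:n] + up[:n]
    return combined
```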
According to the present embodiment, a voice signal is equally divided into n components and encoding and decoding are executed on each of them, and hence voice quality and clarity can be further improved. Further, a voice signal is equally divided into n components and encoding and decoding are executed on each of them, and hence the flexibility of configuration can be improved.
The present invention has been described above based on embodiments. These embodiments are illustrative in nature, and it should be appreciated by a person skilled in the art that various modifications can be made to the combinations of the components and the processing processes and such modifications also fall within the scope of the present invention.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
2016-094625 | May 2016 | JP | national
References Cited: U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
5848164 | Levine | Dec 1998 | A
5974380 | Smyth | Oct 1999 | A
6925116 | Liljeryd | Aug 2005 | B2
6978236 | Liljeryd | Dec 2005 | B1
7181389 | Liljeryd | Feb 2007 | B2
7246065 | Tanaka | Jul 2007 | B2
7283955 | Liljeryd | Oct 2007 | B2
7328162 | Liljeryd | Feb 2008 | B2
7469206 | Kjorling | Dec 2008 | B2
7734473 | Schuijers | Jun 2010 | B2
20030187663 | Truman | Oct 2003 | A1
20050096917 | Kjorling | May 2005 | A1
20060031064 | Liljeryd | Feb 2006 | A1
20060053018 | Engdegard | Mar 2006 | A1
20060165237 | Villemoes | Jul 2006 | A1
20060256971 | Chong | Nov 2006 | A1
20070016411 | Kim | Jan 2007 | A1
20070129036 | Arora | Jun 2007 | A1
20080183466 | Nongpiur | Jul 2008 | A1
20080262835 | Oshikiri | Oct 2008 | A1
20090192804 | Schuijers | Jul 2009 | A1
20140200901 | Kawashima | Jul 2014 | A1
References Cited: Foreign Patent Documents

Number | Date | Country
---|---|---
2008-537174 | Sep 2008 | JP
Publication Data

Number | Date | Country
---|---|---
20170330584 A1 | Nov 2017 | US