The present invention relates to techniques for processing music pieces.
Disk jockeys (DJs), for example, reproduce a plurality of music pieces one after another while interconnecting the music pieces with no break therebetween. Japanese Patent Application Laid-open Publication No. 2003-108132 discloses a technique for realizing such music piece reproduction. The technique disclosed in the No. 2003-108132 publication allows a plurality of music pieces to be interconnected smoothly by controlling respective reproduction timing of the music pieces in such a manner that beat positions of successive ones of the music pieces agree with each other.
In order to organize a natural and refined music piece from a plurality of music pieces, selection of proper music pieces as well as adjustment of reproduction timing of the music pieces becomes an important factor. Namely, even where beat positions of individual music pieces are merely adjusted as with the technique disclosed in the No. 2003-108132 publication, it would not be possible to organize an auditorily-natural music piece if the music pieces greatly differ from each other in musical characteristic.
In view of the foregoing, it is an object of the present invention to produce, from a plurality of music pieces, an auditorily natural music piece that gives no uncomfortable feeling.
In order to accomplish the above-mentioned object, the present invention provides an improved music piece processing apparatus, which comprises: a storage section that stores music piece data sets of a plurality of music pieces, each of the music piece data sets comprising respective tone data of a plurality of fragments of the music piece and respective character values of the fragments, the character value of each of the fragments being indicative of a musical character of the fragment; a similarity index calculation section that selects, as a main fragment, one of a plurality of fragments of a main music piece selected from among the plurality of music pieces stored in the storage section; specifies, as a sub fragment, each one, other than the selected main fragment, of a plurality of fragments of two or more music pieces selected from among the plurality of music pieces stored in the storage section; and calculates a similarity index value indicative of a degree of similarity between the character value of the selected main fragment and the character value of the specified sub fragment, the similarity index calculation section selecting, as the main fragment, each of the plurality of fragments of the selected main music piece and calculating the similarity index value for each of the main fragments; a condition setting section that sets a selection condition; a selection section that selects, for each of the main fragments of the main music piece, a sub fragment presenting a similarity index value that satisfies the selection condition; and a processing section that processes the tone data of each of the main fragments of the main music piece on the basis of the tone data of the sub fragment selected by the selection section for the main fragment.
Namely, the sub fragment, selected in accordance with the calculated similarity index value with respect to the main fragment, is used for processing of the main fragment, and thus, even where the user is not sufficiently familiar with similarity and harmonizability among the music pieces, the present invention permits production or organization of an auditorily-natural music piece without substantially impairing the melodic sequence of the main music piece.
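By way of illustration only, the data organization and the main/sub fragment enumeration described above can be sketched as follows in Python; the type and function names (Fragment, MusicPiece, enumerate_pairs) are hypothetical and not taken from the embodiments.

```python
from dataclasses import dataclass

# Hypothetical data model: each music piece data set holds per-fragment
# tone data plus a per-fragment N-dimensional character value.
@dataclass
class Fragment:
    tone_data: list[float]        # audio samples of the fragment
    character: tuple[float, ...]  # musical character value of the fragment

@dataclass
class MusicPiece:
    name: str
    fragments: list[Fragment]

def enumerate_pairs(pieces, main_index):
    """Yield (main fragment, sub fragment) pairs: every fragment of the
    main piece against every other fragment of all object pieces."""
    main = pieces[main_index]
    for mi, m in enumerate(main.fragments):
        for pi, piece in enumerate(pieces):
            for fi, s in enumerate(piece.fragments):
                if pi == main_index and fi == mi:
                    continue  # a fragment is never its own sub fragment
                yield m, s
```

A similarity index value would then be computed for each yielded pair.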
As an example, the condition setting section sets the selection condition on the basis of user's input operation performed via an input device. Such an arrangement allows the user to process a music piece with an enhanced degree of freedom.
As an example, the condition setting section sets a plurality of the selection conditions, at least one of the plurality of the selection conditions being settable on the basis of user's input operation, and the selection section selects the sub fragment in accordance with a combination of the plurality of the selection conditions. Such an arrangement can significantly enhance a degree of freedom of music piece processing without requiring complicated operation of the user.
In a preferred implementation, each of the fragments is a section obtained by dividing the music piece at time points synchronous with beats. For example, fragments are sections obtained by dividing the music piece at every beat or every predetermined plurality of beats, or by dividing each interval between successive beats into a plurality of segments (e.g., segment of a time length corresponding to ½ or ¼ beat). Because sections obtained by dividing the music piece at time points synchronous with beats are set as the fragments, this inventive arrangement can produce a natural music piece while maintaining a rhythm feeling of the main music piece.
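A minimal sketch of such beat-synchronous division, assuming the beat positions are already known; the function name and its subdivision parameter are illustrative:

```python
def beat_fragments(beat_times, subdivisions=1):
    """Split a piece into fragments at beat-synchronous time points.

    beat_times   -- ascending beat positions in seconds (assumed known)
    subdivisions -- 1 gives one fragment per beat; 2 splits each beat
                    interval in half (1/2-beat fragments), and so on
    """
    points = []
    for t0, t1 in zip(beat_times, beat_times[1:]):
        step = (t1 - t0) / subdivisions
        points.extend(t0 + i * step for i in range(subdivisions))
    points.append(beat_times[-1])
    return list(zip(points, points[1:]))  # (start, end) sections
```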
Whereas any desired selection condition may be set by the condition setting section, the following examples may be advantageously employed. As a first example, the condition setting section sets, as the selection condition, a reference position in the order of similarity with the main fragment on the basis of user's input operation, and the selection section selects a sub fragment located at a position corresponding to the reference position in the order of similarity with the main fragment. As a second example, the condition setting section sets a random number range as the selection condition, and the selection section generates a random number within the random number range and selects a sub fragment located at a position corresponding to the random number in the order of similarity with the main fragment. As a third example, the condition setting section sets a total number of selection as the selection condition, and the selection section selects a given number of the sub fragments corresponding to the total number of selection. As a fourth example, the condition setting section sets a maximum number of selection as the selection condition, and the selection section selects, for each of the main fragments, a plurality of the sub fragments while limiting a maximum number of the sub fragments, selectable from one music piece, to the maximum number of selection.
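The four example conditions can be combined roughly as follows over a similarity-ranked candidate list; every parameter name here is illustrative, and an actual selection section may combine the conditions differently:

```python
import random

def select_subs(ranked, *, ref_position=0, rand_range=0,
                total=1, max_per_piece=None, rng=random):
    """Select sub fragments from `ranked`, a list of (piece_id, fragment)
    pairs sorted most- to least-similar.

    ref_position  -- start at this rank instead of the most similar
    rand_range    -- add a random offset in [0, rand_range] to the rank
    total         -- total number of sub fragments to select
    max_per_piece -- cap on selections taken from any one music piece
    """
    picked, per_piece = [], {}
    pos = ref_position + (rng.randrange(rand_range + 1) if rand_range else 0)
    for piece_id, frag in ranked[pos:]:
        if max_per_piece is not None and per_piece.get(piece_id, 0) >= max_per_piece:
            continue  # this piece already contributed its maximum
        picked.append((piece_id, frag))
        per_piece[piece_id] = per_piece.get(piece_id, 0) + 1
        if len(picked) == total:
            break
    return picked
```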
According to a preferred embodiment, the music piece processing apparatus further comprises a mixing section that mixes together the tone data having been processed by the processing section and original tone data of the main music piece and outputs the mixed tone data. A mixing ratio between the tone data having been processed by the processing section and the original tone data of the main music piece is set on the basis of user's input operation performed via the input device. Which one of the tone data having been processed by the processing section and the original tone data of the main music piece should be prioritized over the other can be changed as necessary on the basis of user's input operation performed via the input device. In another preferred implementation, the music piece processing apparatus further comprises a tone length adjustment section that processes each of the tone data, having been processed by the processing section, so that a predetermined portion of the tone data is made a silent portion. Further, the predetermined portion is a portion from a halfway time point to an end point of a tone generating section corresponding to the tone data, and a length of the predetermined portion is set on the basis of user's operation performed via the input device. According to the preferred implementation, it is possible to change as necessary the lengths of individual tones (i.e., rhythm feeling of the music piece) on the basis of user's input operation performed via the input device.
In a preferred embodiment, the music piece processing apparatus further comprises a pitch control section that controls, for each of the two or more music pieces, a pitch of a tone, represented by the tone data of each of the sub fragments selected by the selection section, on the basis of user's operation performed via an input device. Such an arrangement can organize a music piece having a feeling of unity, for example, in tone pitch by adjusting tone pitches per music piece. The music piece processing apparatus further comprises an effect impartment section that imparts an acoustic effect to the tone data of each of the sub fragments selected by the selection section, and, for each of the two or more music pieces, the effect impartment section controls the acoustic effect to be imparted, on the basis of user's operation performed via an input device. Such an arrangement can organize a music piece having a feeling of unity by adjusting the acoustic effect per music piece.
In a preferred embodiment, the similarity index calculation section includes: a similarity determination section that calculates, for each of the main fragments, a basic index value indicative of similarity/dissimilarity in character value between the main fragment and each of the sub fragments; and an adjustment section that determines a similarity index value on the basis of the basic index value calculated by the similarity determination section, wherein, of the basic index values calculated for individual ones of the sub fragments with respect to a given main fragment, the adjustment section adjusts the basic index values of one or more sub fragments, following one or more sub fragments selected by the selection section for the given main fragment, so as to increase a degree of similarity, to thereby determine the similarity index value. Such an arrangement can increase a possibility of sub fragments of the same music piece being selected in succession, and thus, it is possible to organize a music piece while maintaining a melodic sequence of a particular music piece.
In another embodiment, the similarity index calculation section includes: a similarity determination section that calculates, for each of the main fragments, a basic index value indicative of similarity/dissimilarity in character value between the main fragment and each of the sub fragments; a coefficient setting section that sets a coefficient separately for each of the music pieces on the basis of user's input operation performed via an input device; and an adjustment section that calculates the similarity index value by adjusting each of the basic index values, calculated by the similarity determination section, in accordance with the coefficient set by the coefficient setting section. Because the similarity index value is adjusted per music piece in accordance with the coefficient set by the coefficient setting section, a frequency with which sub fragments of each of the music pieces are used for processing of the main music piece can increase or decrease in response to an input to the input device. Thus, the inventive arrangement can organize a music piece agreeing with user's intention.
The aforementioned music piece processing apparatus of the present invention may be implemented not only by hardware (electronic circuitry), such as a DSP (Digital Signal Processor) dedicated to various processing of the invention, but also by cooperative operations between a general-purpose processor device, such as a CPU (Central Processing Unit), and software programs. Further, the present invention may be implemented as a computer-readable storage medium containing a program for causing a computer to perform the various steps of the aforementioned music piece processing method. Such a program may be supplied from a server apparatus through delivery over a communication network and then installed into the computer.
The following will describe embodiments of the present invention, but it should be appreciated that the present invention is not limited to the described embodiments and various modifications of the invention are possible without departing from the basic principles. The scope of the present invention is therefore to be determined solely by the appended claims.
For better understanding of the objects and other features of the present invention, its preferred embodiments will be described hereinbelow in greater detail with reference to the accompanying drawings, in which:
The control device 10 is a processing unit (CPU) that controls various components of the music piece processing apparatus 100 by executing software programs. The storage device 20 stores therein the programs to be executed by the control device 10 and various data to be processed by the control device 10. For example, any of a semiconductor storage device, magnetic storage device, etc. can be suitably used as the storage device 20. Further, the storage device 20 stores respective music data sets of a plurality of music pieces, as shown in
As further shown in
As shown in
The input device 40 is equipment, such as a mouse and keyboard, that includes a plurality of operation members operable by a user to give instructions to the music piece processing apparatus 100. For example, the user designates M (M is an integer greater than one) music pieces to be processed by the music piece processing apparatus 100 (these music pieces to be processed will hereinafter be referred to as “object music pieces”) from among a plurality of music pieces whose music piece data are stored in the storage device 20.
The control device 10 processes respective tone data A of a plurality of fragments S of a main music piece selected from among the M object music pieces (the fragments S of the selected main music piece will hereinafter be referred to as “main fragments Sm”) on the basis of one or more sub fragments Ss, selected from among all of the fragments of the M object music pieces other than the main fragments Sm, whose character values F are similar to those of the main fragments Sm. Then, the control device 10 sequentially outputs the processed tone data. Selection of the main music piece may be made either on the basis of user's operation performed via the input device 40, or automatically by the control device 10. The sounding device 30 produces an audible tone on the basis of a data train a1 of the tone data A output from the control device 10. For example, the sounding device 30 includes a D/A converter for generating an analog signal from the tone data A, an amplifier for amplifying the signal output from the D/A converter, and sounding equipment, such as a speaker or headphones, that outputs a sound wave corresponding to the signal output from the amplifier.
The display device 50 visually displays various images under control of the control device 10. For example, while the music piece processing apparatus is in operation, an operation screen 52 as shown in
Next, a description will be given about specific functions of the control device 10. As shown in
For each of a plurality of main fragments Sm of a main music piece, the similarity index calculation section 11 specifies all of the fragments, other than the main fragment Sm, as sub fragments Ss. Then, the similarity index calculation section 11 calculates, for each of the specified sub fragments Ss, a numerical value indicative of a degree of similarity R between the main fragment Sm and the sub fragment Ss (hereinafter referred to as “similarity index value”). The similarity index calculation section 11 in the instant embodiment includes a similarity determination section 12, a coefficient setting section 13 and an adjustment section 14.
The similarity determination section 12 calculates a value R0 serving as a basis for the similarity index value R (the value R0 will hereinafter be referred to as “basic index value”). Similarly to the similarity index value R, the basic index value R0 is a numerical value that serves as an index of similarity between the character values F of the main and sub fragments Sm and Ss. More specifically, the similarity determination section 12 sequentially acquires the character values F of the individual main fragments Sm from the storage device 20 and calculates, for each of the sub fragments Ss of the M object music pieces, a basic index value R0 corresponding to the character value F of one of the main fragments Sm and the character value F of the sub fragment Ss. Such a basic index value R0 between the main fragment Sm and the sub fragment Ss is calculated, for example, as the reciprocal of a Euclidean distance between coordinates in an N-dimensional space whose axes correspond to the N numerical values of the character values F. Therefore, it can be said that the main fragment Sm and the sub fragment Ss are more similar in musical character if the basic index value R0 calculated therebetween is greater.
The coefficient setting section 13 sets a coefficient K separately for each of the M object music pieces. In the instant embodiment, the coefficient setting section 13 controls the coefficient K individually for each of the object music pieces in response to user's operation performed on an area G1 of the operation screen 52 of
For each of the object music pieces, the adjustment section 14 adjusts the basic index value R0, calculated by the similarity determination section 12, in accordance with the coefficient K. More specifically, the adjustment section 14 calculates, as the similarity index value R, a product (i.e., result of multiplication) between the basic index value R0 calculated per sub fragment Ss of any one of the object music pieces and the coefficient K set by the coefficient setting section 13 for that object music piece.
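Under the stated definitions, the basic index value R0 and the coefficient-adjusted similarity index value R can be sketched as follows (the epsilon guard is an added assumption, protecting against identical character values):

```python
import math

def basic_index(fm, fs, eps=1e-9):
    """Basic index value R0: reciprocal of the Euclidean distance between
    two N-dimensional character values F (larger means more similar).
    `eps` avoids division by zero when the fragments are identical."""
    dist = math.dist(fm, fs)
    return 1.0 / (dist + eps)

def similarity_index(fm, fs, k):
    """Similarity index value R: the basic index value R0 scaled by the
    per-piece coefficient K set via the slider-like operation members."""
    return k * basic_index(fm, fs)
```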
The selection section 16 selects, for each of the plurality of main fragments Sm of the main music piece, a predetermined number of, i.e. one or more, sub fragments Ss whose similarity index values R calculated with respect to the main fragment Sm indicate relatively close similarity. The condition setting section 17 sets a condition of selection by the selection section 16, in accordance with an input to the input device 40. The processing section 18 replaces the tone data A of some of the main fragments Sm of the main music piece with the tone data A of the predetermined number of sub fragments Ss selected by the selection section 16 for those main fragments Sm and then sequentially outputs the replaced tone data A.
Area G2 of the operation screen 52 shown in
As seen from above, as the reference position C
Once the processing of
Then, at step S3, the selection section 16 selects, only within a range where the number of sub fragments Ss to be selected from one object music piece does not exceed the maximum number of selection C
Then, at step S4, the processing section 18 determines whether or not the minimum value Rmin of the similarity index values R of the sub fragments Ss selected by the selection section 16 at step S3 exceeds a threshold value TH. If answered in the negative at step S4 (namely, any sub fragment Ss that is not sufficiently similar to the selected main fragment Sm is included in the sub fragments Ss selected by the selection section 16), then the processing section 18 acquires the tone data A of the selected main fragment Sm from the storage device 20 and outputs the acquired tone data A to the sounding device 30, at step S5. Thus, for the current selected main fragment Sm, a tone of the main music piece is audibly reproduced via the sounding device 30.
On the other hand, if answered in the affirmative at step S4 (namely, all of the sub fragments Ss selected by the selection section 16 are sufficiently similar to the selected main fragment Sm), then the processing section 18 acquires the tone data A of each of the sub fragments Ss selected by the selection section 16, in place of the tone data A of the selected main fragment Sm, at step S6. Further, the processing section 18 processes the tone data acquired at step S6 to be equal in time length to the selected main fragment Sm, at step S7. At step S7, it is possible to make the time length of the tone data A, acquired at step S6, agree with the time length of the tone data A of the selected main fragment Sm while maintaining the original tone pitch, using a conventionally-known technique for adjusting a tempo without changing a tone pitch. Then, the processing section 18 adds together the tone data A of the individual sub fragments Ss, processed at step S7, and outputs the resultant added tone data A to the sounding device 30 at step S8. Thus, for the current selected main fragment Sm, a tone of another music piece similar to the selected main fragment Sm is audibly reproduced via the sounding device 30, instead of the tone of the main music piece.
Following step S5 or S8, the processing section 18 determines, at step S9, whether or not an instruction for ending the reproduction of the music piece has been given to the input device 40. With an affirmative (YES) determination at step S9, the processing section 18 ends the processing of
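The branch at steps S4 through S8 above can be summarized as a sketch; `stretch` stands in for a conventional pitch-preserving time-stretch algorithm (not implemented here), and the function name is illustrative:

```python
def render_fragment(main_tone, candidates, threshold, stretch):
    """One pass of the reproduction flow for a selected main fragment.

    main_tone  -- tone data A of the selected main fragment Sm
    candidates -- list of (R, tone_data) for the selected sub fragments Ss
    threshold  -- the threshold value TH of step S4
    stretch    -- callable adjusting tone data to a target length
                  without changing pitch (step S7)
    """
    if not candidates or min(r for r, _ in candidates) <= threshold:
        # Some selected sub fragment is not similar enough:
        # fall back to the main music piece's own tone (step S5).
        return main_tone
    # All selected sub fragments are similar enough: stretch each to the
    # main fragment's length and add them together (steps S6-S8).
    n = len(main_tone)
    stretched = [stretch(td, n) for _, td in candidates]
    return [sum(col) for col in zip(*stretched)]
```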
In the instant embodiment, as set forth above, the main fragments Sm of the main music piece are replaced with sub fragments Ss selected in accordance with the similarity index values R (typically, sub fragments Ss similar in musical character to the main fragments Sm). Thus, even where the user is not sufficiently familiar with similarity and harmonizability among the object music pieces, the instant embodiment permits production of an auditorily-natural music piece without substantially impairing the melodic sequence of the main music piece. Further, because each music piece is divided into fragments S on a beat-by-beat basis and sub fragments Ss, selected by the selection section 16, are used for processing of a main fragment Sm after being adjusted to the time length of the main fragment Sm (step S7), the rhythm feeling of the main music piece will not be impaired either.
Further, because the similarity index value R, serving as the index for the sub fragment selection by the selection section 16, is controlled in accordance with the coefficient K, sub fragments Ss of an object music piece for which the coefficient K is set at a greater value have a higher chance of being selected by the selection section 16, i.e. a higher frequency of selection by the selection section 16. As the coefficient K of an object music piece is increased or decreased through user's operation performed via the input device 40, the frequency with which the main fragment Sm is replaced with a sub fragment Ss of that object music piece increases or decreases. Thus, the instant embodiment permits organization of a variety of or diverse music pieces agreeing with user's preferences, as compared to the construction where the coefficients K are fixed (i.e., where the basic index value R0 calculated by the similarity determination section 12 is output to the selection section 16 as is). Further, with the instant embodiment, where the coefficients K of the object music pieces are adjusted by movement of the operation members 71 emulating actual slider operators, there can also be achieved the advantageous benefit that the user can intuitively grasp which object music piece is output on a preferential basis.
Further, in the instant embodiment, any of the conditions of the selection by the selection section 16 is variably controlled in accordance with an input to the input device 40. Thus, the instant embodiment permits production of diverse music pieces as compared to the construction where the conditions of the selections are fixed. For example, because the reference position C
Next, a description will be given about a second embodiment of the present invention. Elements similar in function and construction to those in the first embodiment are indicated by the same reference numerals and characters as in the first embodiment and will not be described here to avoid unnecessary duplication.
Because the mixing ratio between the data train a1 and the data train a2 (i.e., coefficient g) and the time length of the silent portion are variably controlled, the second embodiment can reproduce a music piece in a diverse manner as compared to the above-described first embodiment. For example, if the coefficient g is increased through user's operation of the operation member 75A, a tone having been processed by the processing section 18 is reproduced predominantly. Further, as the time length pT is increased through user's operation of the operation member 75B, a tone can be reproduced with an increased rhythm feeling (e.g., staccato feeling).
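A rough sketch of the mixing and tail-silencing just described, with illustrative names (g is the mixing coefficient; `silent_len` corresponds to the time length pT expressed in samples):

```python
def mix_and_gate(a1, a2, g, silent_len):
    """Mix the processed train a2 with the original train a1 at ratio g
    (0 = all original, 1 = all processed), then silence the last
    `silent_len` samples of the fragment to shorten the tone."""
    out = [(1.0 - g) * x1 + g * x2 for x1, x2 in zip(a1, a2)]
    if silent_len:
        out[-silent_len:] = [0.0] * silent_len  # the silent portion
    return out
```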
Whereas the tone length adjustment section 64 is provided at a stage following the mixing section 62 in the illustrated example of
The effect impartment section 68 imparts an acoustic effect to the tone data A of each of the sub fragments Ss selected by the selection section 16. The acoustic effect to be imparted to the tone data A of each of the sub fragments Ss selected from one object music piece is variably controlled in accordance with an operating angle of any one of the operation members 78 which is provided in the area G4 and corresponds to the object music piece. The effect impartment section 68 in the instant embodiment is, for example, in the form of a low-pass filter (resonance low-pass filter) that imparts a resonance effect to the tone data A, and it controls the resonance effect to be imparted to the tone data A by changing a cutoff frequency in accordance with an operating angle of the operation member 78.
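For illustration only, a simplified, non-resonant one-pole low-pass filter is sketched below; the embodiment's resonance low-pass would add a feedback (resonance) term, and the mapping from operating angle to cutoff frequency is not specified in the source, so the cutoff is taken here as a direct parameter:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Non-resonant one-pole low-pass sketch: y[n] = y[n-1] + a*(x[n]-y[n-1]),
    where the smoothing coefficient a is derived from the cutoff frequency."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)  # first-order smoothing toward the input
        out.append(y)
    return out
```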
The above-described third embodiment, where the tone pitch and acoustic effect of tone data A are individually controlled per object music piece in response to inputs to the input device 40, can flexibly produce a music piece agreeing with user's intention. For example, the third embodiment can organize a music piece which has a feeling of unity in its melodic sequence, by the user appropriately operating the operation members 77 and 78 so as to achieve approximation in pitch and acoustic characteristic among the tone data A of the plurality of object music pieces. Note that the type of the acoustic effect to be imparted by the effect impartment section 68 and the type of the characteristic to be controlled may be varied as desired. For example, the effect impartment section 68 may impart to the tone data A a reverberation effect whose reverberation time is set in accordance with an operating angle of the operation member 78.
The above-described embodiments may be modified variously as exemplified below. Note that two or more of the following modifications may be used in combination.
(1) Modification 1
Whereas each of the first to third embodiments has been described above as constructed to perform the processing on the entire loop of the main music piece, the object section to be processed (defined, for example, by the number of measures or beats) may be variably controlled in accordance with an input to the input device 40. When the processing of
(2) Modification 2
Each of the first to third embodiments has been described above in relation to the case where the user individually designates any one of the M object music pieces. Alternatively, respective attribute information (such as musical genres and times) of a plurality of music pieces may be prestored in the storage device 20 so that two or more of the music pieces corresponding to user-designated attribute information are automatically selected as object music pieces. Further, it is also advantageous to employ a construction where various settings at the time of reproduction of a music piece (such settings will hereinafter be referred to as “reproduction information”) are stored by the control device 10 into the storage device 20 or other storage device in response to user's operation of the input device 40. The reproduction information may include, for example, not only information designating a main music piece and M object music pieces but also variables set via the operation screen 52, such as selection conditions C
(3) Modification 3
Whereas each of the first to third embodiments has been described above as using four types of variables (C
(4) Modification 4
There may also be employed a construction for enhancing a possibility or chance of the selection section 16 selecting, from among a plurality of sub fragments Ss, one that follows the sub fragment Ss selected for the last main fragment Sm in a music piece, i.e. a possibility of sub fragments Ss of the same music piece being selected in succession.
Once the similarity determination section 12 calculates a basic index value R0 between one main fragment Sm and each individual one of the sub fragments Ss, the adjustment section 14 calculates a similarity index value R by adjusting the basic index value R0 in accordance with the coefficient K, in generally the same manner as in the first embodiment. In this case, however, the adjustment section 14 adds an adjustment, corresponding to the degree of sequency SQ, to the basic index value R0 of the sub fragment Ss that follows, in the same object music piece, the sub fragment Ss selected for the last main fragment Sm (i.e., the “following sub fragment”), so as to enhance the degree of similarity, and thereby calculates the similarity index value R. For example, the adjustment section 14 calculates, as the similarity index value R, the sum of the basic index value R0 of the following sub fragment Ss adjusted in accordance with the coefficient K and a value corresponding to the degree of sequency SQ. Thus, at step S3 of
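The adjustment of this modification can be sketched as follows, with illustrative names (SQ is the degree of sequency; `follows_last_pick` marks a sub fragment that immediately follows the one selected for the previous main fragment in the same piece):

```python
def adjusted_similarity(r0, k, follows_last_pick, sq):
    """Similarity index value R with the sequency adjustment: the
    coefficient-scaled basic index value K*R0, plus a bonus SQ when the
    sub fragment follows the previously picked one in the same piece."""
    r = k * r0
    if follows_last_pick:
        r += sq  # raise the chance of same-piece succession
    return r
```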
When the degree of sequency SQ is set at a minimum value (e.g., zero), the adjustment section 14 adjusts all of the basic index values R0 on the basis of only the coefficient K. Thus, the object of the selection at step S3 of
(5) Modification 5
In each of the above-described embodiments, the selection section is arranged to select a given number of sub fragments Ss corresponding to the total number of selection C
(6) Modification 6
Each of the above-described embodiments outputs the tone data A of the selected main fragment Sm to the sounding device 30 when the minimum value Rmin of the similarity index values R of the individual sub fragments Ss is smaller than the threshold value TH (steps S4 and S5 of
(7) Modification 7
In each of the above-described embodiments, the fragments S other than the main fragment Sm of the main music piece are made sub fragments Ss as candidates for selection by the selection section 16. However, it is also advantageous to employ a modified construction where only the individual fragments S of the (M−1) object music pieces, excluding the main music piece, are made sub fragments Ss. Because the individual fragments S in the same music piece are often similar to one another in acoustic feature, it is highly possible that, in the above-described first embodiment, the fragments S of the main music piece will be selected as sub fragments Ss similar to the main fragment Sm. With the construction where the fragments S of the main music piece are excluded from the candidates for selection by the selection section 16, on the other hand, it is possible to produce diverse music pieces using the fragments S of the object music pieces other than the main music piece.
(8) Modification 8
Whereas each of the first to third embodiments has been described above as replacing the tone data of the main fragment Sm with the tone data of a sub fragment Ss, the scheme for processing the main fragment Sm on the basis of the sub fragment Ss is not necessarily limited to such replacement of the tone data A. For example, the tone data A of the main fragment Sm and the tone data A of a predetermined number of sub fragments Ss may be mixed at a predetermined mixing ratio so that the mixed results are output. However, with the construction where the main fragment Sm is merely replaced with a sub fragment Ss as described above in relation to the first to third embodiments, there can be achieved the benefit that processing loads on the control device 10 can be significantly reduced.
(9) Modification 9
The scheme for calculating a similarity index value R on the basis of respective character values F of a main fragment Sm and sub fragment Ss may be modified as desired. For example, whereas each of the first to third embodiments has been described above in relation to the case where the similarity index value R increases as the degree of similarity between the main fragment Sm and sub fragment Ss increases, the similarity index value R may be a numerical value (e.g., distance between the character values F) that decreases as the degree of similarity between the main fragment Sm and sub fragment Ss increases.
(10) Modification 10
Furthermore, each of the first to third embodiments has been described above in relation to the case where the operation screen 52 operable by the user to manipulate the music piece processing apparatus 100 is displayed as a screen image on the display device 50. Alternatively, input equipment having actual hardware operation members, corresponding to the various operation members illustratively shown as images in
This application is based on, and claims priority to, Japanese Patent Application No. 2007-186149 filed on 17 Jul. 2007. The disclosure of the priority application, in its entirety, including the drawings, claims, and the specification thereof, is incorporated herein by reference.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2007-186149 | Jul 2007 | JP | national |
| Number | Date | Country |
| --- | --- | --- |
| 2003108132 | Apr 2003 | JP |
| 2006106754 | Apr 2006 | JP |
| WO 2006079813 | Aug 2006 | WO |
| Number | Date | Country |
| --- | --- | --- |
| 20090019996 A1 | Jan 2009 | US |