1. Field of the Invention
The present invention relates to a content data reproduction apparatus for reproducing content data, and a sound processing system which uses the content data reproduction apparatus.
2. Description of the Related Art
Conventionally, there has been a system in which song information is superimposed on audio data as an electronic watermark so that an apparatus which receives the audio data can identify the content of the audio data (see Japanese Unexamined Patent Publication No. 2002-314980).
However, the conventional apparatus which receives the audio data merely identifies the content, without any collaborative operation with other apparatuses.
Therefore, the present invention provides a content data reproduction apparatus which identifies content and allows collaborative operation with other apparatuses.
The content data reproduction apparatus of the present invention includes an input portion for inputting a sound signal on which first information for identifying content data which is to be reproduced is superimposed; a content data identification portion for identifying, on the basis of the first information superimposed on the sound signal, the content data which is to be reproduced; and a reproduction portion for reproducing the identified content data. The input portion may input the sound signal by collecting a sound with a microphone or by line transmission through an audio cable. Furthermore, the concept of reproduction of the content data encompasses not only emission of sounds but also display of images.
In the case of the sound signal on which the first information for identifying the content data is superimposed, as described above, content data which is reproduced by a different apparatus can be identified by demodulation of the first information. In a case where phase modulation is employed as the means for superimposition of data, for example, an apparatus which performs the modulation uses bit data (0, 1) of the information for identifying content data (content identification information) to phase-modulate (invert or leave in phase) spread codes, while the apparatus which performs demodulation determines whether the peaks of the correlation values are positive or negative to decode the bit data.
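The phase modulation and peak-sign decoding described above can be sketched as follows. This is a minimal illustration only, not the claimed apparatus; the 7-chip m-sequence and the one-bit-per-code-period framing are assumptions chosen for the example.

```python
# Illustrative sketch (assumed parameters, not from the specification):
# each bit of the content identification information phase-modulates one
# period of a spread code; the sign of the correlation peak decodes it.

M_SEQ = [1, 1, 1, -1, -1, 1, -1]  # example 7-chip m-sequence (+/-1 chips)

def modulate(bits):
    """Bit 1 -> spread code in phase; bit 0 -> spread code inverted."""
    out = []
    for b in bits:
        sign = 1 if b == 1 else -1
        out.extend(sign * c for c in M_SEQ)
    return out

def demodulate(samples):
    """Correlate each code-length frame with the code; peak sign gives the bit."""
    n = len(M_SEQ)
    bits = []
    for i in range(0, len(samples), n):
        corr = sum(s * c for s, c in zip(samples[i:i + n], M_SEQ))
        bits.append(1 if corr > 0 else 0)
    return bits
```

A real system would superimpose the modulated chips on the sound signal at a weak level rather than transmit them alone; the sketch omits that step.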
The content data reproduction apparatus identifies the content data in accordance with the decoded bit data to further identify content data which is correlated with the identified content data and is necessary for the content data reproduction apparatus to reproduce the identified content data. The content necessary for the content data reproduction apparatus may be the same content data as that reproduced by the different apparatus or different content (e.g., the necessary content can be musical score data in a case where the different apparatus is reproducing audio data). In a case, for example, where the different apparatus is emitting sounds played by a piano (piano sounds on which content identification information is superimposed), the content data reproduction apparatus of the present invention can collect the piano sounds to reproduce accompaniment sounds as content data correlated with the piano sounds.
Furthermore, the content identification portion may identify necessary content data in accordance with content data which the reproduction portion is able to reproduce. In a case, for example, where the content data reproduction apparatus is a digital audio player which is able to reproduce audio data, the content identification portion identifies audio data as content data. In a case where the content data reproduction apparatus is an automatic performance apparatus, the content identification portion can identify musical instrument digital interface (MIDI) data as necessary content data. In a case where the content data reproduction apparatus is an apparatus having a display portion, the content identification portion can identify musical score data as necessary content data.
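The selection of necessary content data according to the apparatus's reproduction capabilities, as in the examples above, can be sketched as a simple mapping. The capability and content-type names below are assumptions for illustration, not terms defined in the specification.

```python
# Hedged sketch: map an apparatus's reproduction capabilities to the kinds
# of correlated content data it should request (names are illustrative).

CONTENT_FOR_CAPABILITY = {
    "audio_playback": "audio_data",          # digital audio player
    "automatic_performance": "midi_data",    # automatic performance apparatus
    "display": "musical_score_data",         # apparatus with a display portion
}

def necessary_content(capabilities):
    """Return the content-data types needed for the given capabilities."""
    return [CONTENT_FOR_CAPABILITY[c] for c in capabilities
            if c in CONTENT_FOR_CAPABILITY]
```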
Furthermore, the information superimposed on the sound signal may include synchronization information indicative of timing at which the content data is reproduced. In a case, for example, where an apparatus sends sound signals on which a MIDI-compliant reference clock is superimposed as synchronization information, an apparatus which receives the sound signals is able to conduct automatic performance by operating a sequencer by use of the decoded reference clock. Therefore, the reproduction of content data by the different apparatus can be synchronized with the reproduction of content data by the content data reproduction apparatus. In a case where the different apparatus is an automatically played musical instrument (e.g., an automatically played piano) which reproduces MIDI data, when the content data reproduction apparatus of the present invention is placed near the different apparatus to collect piano sounds, the content data reproduction apparatus reproduces accompaniment sounds synchronized with the piano sounds. In a case where the content data reproduction apparatus of the present invention has a display portion, the content data reproduction apparatus is also able to display a musical score to indicate the currently reproduced position in synchronization with the piano sounds, even refreshing the displayed musical score with the progression of the reproduced song.
Furthermore, the content data reproduction apparatus may include a superimposition portion for superimposing, on a sound signal of the content data which the reproduction apparatus is to reproduce, second information which identifies the content data which is to be reproduced and is different from the first information. In this case, collaborative reproduction of content data among a plurality of content data reproduction apparatuses can be achieved. For example, the content data reproduction apparatus can retrieve content data from a different apparatus at a shop away from home, and can then superimpose information on the content data again at home, achieving collaborative operation among apparatuses regardless of differences in time and location.
The content data reproduction apparatus may also be configured to transmit demodulated information and information indicative of content data which the content data reproduction apparatus is able to reproduce (e.g., information indicative of the function of the apparatus such as whether or not the apparatus has a display portion) to a server so that the server will extract correlated content data.
According to the present invention, the content data reproduction apparatus retrieves correlated content data which is correlated with identified content data and is necessary for the content data reproduction apparatus, and then reproduces the retrieved content data, allowing collaborative operation with other apparatuses.
The microphone 20 and the microphone 30 may be either integrated into their respective reproduction apparatuses 2, 3 or provided externally via line terminals for the respective reproduction apparatuses 2, 3. As indicated in
To the control portion 11, content data is input from the content data storage portion 12. The content data, which includes song information such as the title of a song and the name of a singer, is formed of compressed data such as MPEG-1 Audio Layer 3 (MP3) data, Musical Instrument Digital Interface (MIDI) data or the like. In this example, the content data is stored in the apparatus. However, the apparatus may externally receive content data over the airwaves or through a network (from the server 9, for example). Alternatively, the apparatus may receive PCM data and a Table of Contents (TOC) from a medium such as a CD.
The control portion 11 inputs the read content data into the reproduction portion 13. The reproduction portion 13 reproduces the input content data to generate sound signals. In a case where the content data is compressed audio data, the reproduction portion 13 decodes the data to output the decoded sound signals to the superimposition portion 14. In a case where the content data is MIDI data, the reproduction portion 13 controls a tone generator (not shown) to generate musical tones (sound signals) and outputs the musical tones to the superimposition portion 14.
In addition, the control portion 11 inputs information for identifying the content which is to be reproduced (content identification information) to the superimposition portion 14 so that the content identification information will be superimposed on the sound signals which will be output. The content identification information includes the above-described song information, a song ID and the like. A song ID is provided for each song as a unique ID. By sending the song information retrieved from the content data to the server 9 through the communication portion 15, the control portion 11 retrieves a song ID of the content. More specifically, the server 9, which has the song information storage portion 93 storing a database (see
The superimposition portion 14 modulates, for example, the content identification information input from the control portion 11 to superimpose the modulated component on the sound signals input from the reproduction portion 13. The sound signals on which the modulated component has been superimposed are supplied to the speaker 10 to be emitted.
The superimposition scheme of the superimposition portion 14 may be any scheme, but is preferably a scheme by which modulated components are inaudible to humans. For example, spread codes (pseudo-noise codes (PN codes)) such as a maximum length sequence (m-sequence) or a Gold sequence are superimposed in high frequencies at a very weak level which is not acoustically odd. In a case where information is previously superimposed on content data by use of spread codes or the like, the superimposition portion 14 is unnecessary for the reproduction apparatus 1.
The spread code generation portion 142 generates spread codes of m-sequence or the like at regular intervals in accordance with instructions made by the control portion 11. The spread code generated by the spread code generation portion 142 and the content identification information (binarized code sequence using −1 and 1) are multiplied by the multiplier 144. As a result, the spread code is phase-modulated. In a case of bit data of “1”, more specifically, the spread code is in phase. In a case of bit data of “0”, the spread code is phase-inverted.
The phase-modulated spread code is input to the XOR circuit 145. The XOR circuit 145 outputs an exclusive OR of the code input from the multiplier 144 and the output code of the immediately preceding sample input through the delay device 146. The signal after the differential coding is binarized to −1 or 1. Because the differential code is binarized to −1 or 1, the apparatus which performs demodulation is able to extract the pre-differential-coding spread code by multiplying the differential codes of two successive samples.
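The differential coding step and its inverse can be sketched as follows. This is an illustration only; in the ±1 representation used here, the exclusive OR of the XOR circuit 145 corresponds to multiplication, and the initial state of the delay device is assumed to be 1.

```python
# Hedged sketch: differential coding of +/-1 chips (XOR in {0,1} is
# multiplication in {-1,+1}), and decoding by multiplying two
# successive samples, as described for the demodulating apparatus.

def diff_encode(chips, initial=1):
    """Each output is the product of the current chip and the previous output."""
    out = []
    prev = initial
    for c in chips:
        prev = prev * c
        out.append(prev)
    return out

def diff_decode(coded, initial=1):
    """Recover each chip as the product of two successive coded samples."""
    out = []
    prev = initial
    for d in coded:
        out.append(prev * d)
        prev = d
    return out
```

Because the decoder only compares successive samples, it does not need to know the absolute phase of the received signal, which is the stated benefit of the differential coding.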
The differential coded spread code is band-limited within a base band by the LPF 147 to be input to the multiplier 148. The multiplier 148 multiplies a carrier signal (a carrier signal which is higher than the audible band of the sound signals of the content data) output from the carrier signal generator 149 by the signal output from the LPF 147 to shift the frequency of the differential coded spread code to a passband. The differential coded spread code may be upsampled before the frequency shifting. The frequency-shifted spread code is combined with the sound signals of the content data by the adder 141. The sound signals of the content data are limited by the LPF 140 to a band different from that of the frequency components of the spread codes.
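The frequency shift performed by the multiplier 148 and the carrier signal generator 149 amounts to multiplying the baseband signal by a carrier above the audible band. The following sketch assumes a 44.1 kHz sample rate and a 22.05 kHz-class carrier purely for illustration; the specification does not fix these values.

```python
import math

def shift_to_passband(baseband, carrier_hz, sample_rate):
    """Multiply each baseband sample by a cosine carrier (frequency shift).

    Illustrative only: carrier_hz is assumed to lie above the audible
    band of the content-data sound signals, as in the description.
    """
    return [s * math.cos(2 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(baseband)]
```

The demodulating apparatus reverses this shift with the HPF 241 and the delay-detection stage, so no carrier-phase recovery is needed.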
It is preferable that the frequency band in which the spread codes are superimposed is an inaudible band of 20 kHz or more. In a case where the inaudible band is not available due to D/A conversion, encoding of compressed audio or the like, however, a frequency band of the order of 10 to 15 kHz can also reduce auditory effects. In this example, furthermore, the frequency band of the sound signals of the content data is completely separated from the frequency band of the spread codes by the LPF 140. Even if the frequency band of the sound signals of the content data slightly overlaps with that of the frequency components of the pseudo noise, however, it is possible to make it difficult for the audience to hear the modulated signals and to secure an S/N ratio which enables demodulation of the modulated components by the apparatus which collects the sounds.
The reproduction apparatus 2 and the reproduction apparatus 3 collect sounds on which spread codes are superimposed with the microphone 20 and the microphone 30, respectively, and calculate correlation values between the collected sound signals and the same spread codes as those of the superimposition portion 14 to decode the content identification information (the song information such as the title of a song, the name of a singer and the name of a performer, and the song ID) in accordance with the peaks of the calculated correlation values.
The amount of delay of the delay device 242 is set at the time equivalent to one sample of the differential codes. In a case of upsampled differential codes, the amount of delay is set at the time equivalent to one upsampled sample. The multiplier 243 multiplies a signal input from the HPF 241 by the signal of the immediately preceding sample output from the delay device 242 to carry out delay detection processing. Because the differential coded signals are binarized to −1 or 1 to indicate the phase shift from the code of the immediately preceding sample, the spread code before the differential coding can be extracted by multiplication by the signal of the immediately preceding sample.
The signal output from the multiplier 243 is extracted as a base band signal through the LPF 244 to be input to the correlator 245. The correlator 245, which is formed of a finite impulse response (FIR) filter (matched filter) in which the spread codes generated by the spread code generation portion 142 are set as filter coefficients, obtains correlation values between the input sound signals and the spread codes. Because the spread codes, such as an m-sequence or a Gold sequence, have high autocorrelation, positive and negative peak components of the correlation values output by the correlator 245 are extracted at the intervals of the spread codes (at the intervals of the data codes) by the peak detector 246. The code determination portion 247 decodes the respective peak components as data codes (a positive peak is 1, while a negative peak is 0) of the content identification information.
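The matched-filter correlation and the sign-based decision of the code determination portion 247 can be sketched as follows. The 7-chip code and the assumption that peaks fall exactly at code-period boundaries are simplifications for the example; a real receiver must search for the peak positions.

```python
# Hedged sketch of the correlator 245 and code determination portion 247:
# an FIR matched filter with the spread code as its coefficients, and
# decoding of the peak signs (positive peak -> 1, negative peak -> 0).

def matched_filter(signal, code):
    """Correlation of the signal with the spread code at every offset."""
    n = len(code)
    return [sum(signal[i + j] * code[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def peaks_to_bits(corr, code_len):
    """Sample the correlation once per code period; the sign carries the bit.

    Assumes the first peak is at offset 0 (no timing search), which is a
    simplification compared with the peak detector 246.
    """
    return [1 if corr[i] > 0 else 0 for i in range(0, len(corr), code_len)]
```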
Among the pieces of information included in the decoded content identification information, the song information such as the title of a song, the name of a composer and the name of a player is input to the display portion 23 to be displayed on a screen. The song ID is input to the control portion 22 to be used for identifying the content. The control portion 22 identifies the content in accordance with the input song ID to further identify content data (the same content data as that of the reproduction apparatus 1, content data specific to the apparatus itself, or content data such as musical score data) which is correlated with the identified content and is necessary for the apparatus itself. In a case of the reproduction apparatus of
More specifically, the control portion 22 searches the content data storage portion 25 for the correlated content data (audio data and musical score data) corresponding to the song ID. In the content data storage portion 25, as indicated in
In a case where correlated content data corresponding to the song ID is not stored in the content data storage portion 25, the control portion 22 sends the decoded song ID and the reproducible content information indicative of the reproduction functions of the apparatus (audio reproduction function, musical score display function, automatic performance function, etc. (including information on musical instrument part reproducible by the apparatus)) to the server 9 connected through the communication portion 21. In the case of the configuration indicated in
The control portion 92 of the server 9 refers to the song information storage portion 93 on the basis of various kinds of information received through the communication portion 91 to extract correlated content data equivalent to the content data necessary for the reproduction apparatus 2. As indicated in
The control portion 92 retrieves the correlated content data on the basis of the received song ID and reproducible content information. In the example of
The control portion 22 outputs the musical score data transmitted from the server 9 to the display portion 23, and outputs the audio data to the reproduction portion 26. The display portion 23 displays a musical score in accordance with the input musical score data, while the reproduction portion 26 reproduces the input audio data to generate sound signals. The generated sound signals are output to the speaker 27 to be emitted as sounds.
In a case where the apparatus has the automatic performance function as the case of the reproduction apparatus 3 indicated in
In this case, the control portion 92 of the server 9, which has received from the reproduction apparatus 3 the reproducible content information indicating that the apparatus is capable of automatic performance, retrieves the song data (MIDI data) corresponding to the identified song ID. The retrieved song data (MIDI data) is transmitted to the control portion 22 of the reproduction apparatus 3 to be stored in the content data storage portion 25. The control portion 22 of the reproduction apparatus 3 outputs the song data (MIDI data) sent from the server 9 to the automatic performance portion 36. The automatic performance portion 36 generates musical tone signals (sound signals) with the passage of time in accordance with the input song data (MIDI data). The generated musical tone signals (sound signals) are output to the speaker 27 to be emitted as musical tones (sounds).
The respective functions and workings of the reproduction apparatuses 1, 2, 3 of the sound processing system have been described above with reference to the functional block diagrams. Actually, however, these reproduction apparatuses 1, 2, 3 are each configured as an apparatus having a microcomputer so that many of the workings are done by program processing. Therefore, a concrete example of such program processing will be explained briefly. The reproduction apparatuses 1, 2, 3 are provided with a hardware configuration as indicated in a block diagram of
Each of the reproduction apparatuses 1, 2, 3 has input operating elements 71, an input circuit 72, a demodulation circuit 73, a display unit 74, a reproduction circuit 75, a superimposition circuit 76, an output circuit 77, a computer portion 78, a flash memory 79 and a communication interface circuit 80. These circuits and devices 71 to 80 are connected to a bus 81. The input operating elements 71 are manipulated by a user in order to instruct operation of the reproduction apparatus 1, 2, 3. The input operating elements 71 are connected to the bus 81 through a detection circuit 82 which detects manipulations of the input operating elements 71. The input circuit 72 inputs sound signals (sound signals on which song information, a song ID and the like are superimposed) from a microphone 83 and an input terminal 84, converts the input sound signals from analog signals to digital signals, and supplies the converted sound signals to the bus 81 and the demodulation circuit 73. The demodulation circuit 73 is configured similarly to the above-described demodulation portion 24 of
The display unit 74 displays letters, numerals, musical scores, images and the like. The display unit 74 is connected to the bus 81 through a display circuit 85. The display circuit 85 controls the display of letters, numerals, musical scores and the like on the display unit 74 under the control of the computer portion 78. The reproduction circuit 75 generates digital musical tone signals in accordance with musical tone control signals such as key codes, key-on signals and key-off signals to output the generated digital musical tone signals to the superimposition circuit 76. In a case where audio signals are supplied to the reproduction circuit 75, because the audio signals are compressed, the reproduction circuit 75 decodes the input audio signals to output the decoded signals to the superimposition circuit 76. The superimposition circuit 76 is configured similarly to the above-described superimposition portion 14 of
The computer portion 78, which has a CPU 78a, a ROM 78b, a RAM 78c and a timer 78d, controls the reproduction apparatus by carrying out programs which will be described later. The flash memory 79 serves as a large-capacity nonvolatile memory. In the flash memory 79, an encoding program indicated in
Next, cases in which the reproduction apparatuses 1, 2, 3 operate under the program control by use of the hardware circuits configured as indicated in
In step S12, the computer portion 78 supplies the set of input content data to the reproduction circuit 75, and instructs the reproduction circuit 75 to reproduce the content data to generate sound signals. In this case, if the content data is audio data, the computer portion 78 carries out a separately provided audio reproduction program (not shown) to successively supply the audio data input with the passage of time to the reproduction circuit 75. The reproduction circuit 75 decodes the supplied audio data to output the decoded data to the superimposition circuit 76 one after another. If the content data is MIDI data, the computer portion 78 carries out a separately provided MIDI data reproduction program (e.g., an automatic performance program) to supply the MIDI data (musical tone control data such as key codes, key-on signals and key-off signals) input with the passage of time to the reproduction circuit 75 one after another. The reproduction circuit 75 allows a tone generator to generate musical tone signals (sound signals) by use of the supplied MIDI data to output the generated musical tone signals to the superimposition circuit 76 one after another.
After step S12, the computer portion 78 transmits, in step S13, song information (the title of a song, the name of a singer and the name of a performer) accompanying the content data to the server 9 through the communication interface circuit 80 to retrieve content identification information (a song ID, in this case). If the computer portion 78 has already obtained the song ID, step S13 is not necessary. Then, the computer portion 78 proceeds to step S14 to output the content identification information (song information such as the title of a song, the name of a singer and the name of a player, and the song ID) to the superimposition circuit 76, and to instruct the superimposition circuit 76 to superimpose the content identification information on the sound signals. The superimposition circuit 76 superimposes the content identification information on the sound signals input from the reproduction circuit 75 to output the signals to the output circuit 77. The computer portion 78 then proceeds to step S15 to instruct the output circuit 77 to start outputting the sound signals. In step S16, the computer portion 78 terminates the encoding program. The output circuit 77 converts the input sound signals from digital signals to analog signals to output the converted signals to the speaker 86. As a result, the sound signals on which the content identification information is superimposed are emitted as musical tones from the speaker 86.
Next, the same working as that of the above-described reproduction apparatus 2 will be described. The user manipulates the input operating elements 71 to make the computer portion 78 carry out the decoding program. As indicated in
After step S23, the computer portion 78 proceeds to step S24 to identify the content in accordance with the song ID included in the input content identification information. In step S25, the computer portion 78 identifies the content data necessary for the apparatus. In this case, content data is identified in accordance with the functions which the reproduction apparatus has. However, content data may be identified as correlated content data in accordance with user's manipulation of the input operating elements 71. Alternatively, predetermined content data may be identified as correlated content data. The computer portion 78 then proceeds to step S26 to refer to the content data stored in the flash memory 79 to search for the identified correlated content data. In step S27, the computer portion 78 determines whether all the identified sets of correlated content data are stored in the flash memory 79.
When all the identified sets of correlated content data are stored in the flash memory 79, the computer portion 78 gives “yes” in step S27 to proceed to step S28 to read out all the identified sets of correlated content data from the flash memory 79. When any of the identified sets of correlated content data is not stored in the flash memory 79, the computer portion 78 gives “no” in step S27 to proceed to step S29 to transmit the song ID and the reproducible content information to the server 9 through the communication interface circuit 80. In this case as well, the reproducible content information may be input in accordance with the user's manipulation of the input operating elements 71, or predetermined reproducible content information may be used. In response to the transmission of the song ID and the reproducible content information, the server 9 transmits the correlated content data corresponding to the song ID and the reproducible content information to the reproduction apparatus. In step S30 which follows the above-described step S29, therefore, the computer portion 78 retrieves the correlated content data transmitted from the server 9 through the communication interface circuit 80.
After the above-described step S28 or step S30, the computer portion 78 reproduces the retrieved correlated content data in step S31, and then proceeds to step S32 to terminate the decoding program. In this case, if the correlated content data is musical score data, the musical score data is output to the display circuit 85. In accordance with the musical score data, therefore, the display circuit 85 displays a musical score on the display unit 74.
If the correlated content data is audio data, the computer portion 78 supplies, in step S31 which is similar to the above-described step S12, the input correlated content data (audio data) to the reproduction circuit 75, and instructs the reproduction circuit 75 to reproduce the content data to generate sound signals. If the correlated content data is MIDI data such as automatic performance data and automatic accompaniment data, the computer portion 78 supplies, in step S31 which is similar to the above-described step S12, the MIDI data (musical tone control data such as key codes, key-on signals and key-off signals) to the reproduction circuit 75 one after another to make a tone generator of the reproduction circuit 75 reproduce musical tone signals (sound signals) corresponding to the MIDI data. The reproduced musical tone signals (sound signals) are output to the superimposition circuit 76 one after another. In this case, the sound signals (musical tone signals) supplied from the reproduction circuit 75 to the superimposition circuit 76 simply pass through the superimposition circuit 76 under the control of the computer portion 78. Under the control of the computer portion 78, furthermore, the output circuit 77 outputs the sound signals (musical tone signals) to the speaker 86 to emit musical tones from the speaker 86.
According to the sound processing system of this embodiment, as described above, when the reproduction apparatus 1 which reproduces content data emits sounds in which modulated components are included, the other reproduction apparatuses which have collected the sounds reproduce their respective correlated content data, resulting in collaborative operations among the apparatuses. In a case where the reproduction apparatus 1 emits sounds played by a piano, for example, when the reproduction apparatus 2 collects the piano sounds, the reproduction apparatus 2 allows the display portion 23 to display the song information of the piano sounds as well as musical score data as the content correlated with the piano sounds, and also reproduces accompaniment tones (tones of strings, rhythm tones and the like). In addition, the reproduction apparatus 3 emits accompaniment tones by automatic performance.
The superimposition portion 14 of the reproduction apparatus 1 may superimpose not only content identification information but also other information. For example, the superimposition portion 14 may superimpose synchronization information indicative of the timing at which content data is to be reproduced so that the reproduction apparatus 1 can reproduce the content data in synchronization with the reproduction apparatus 2 and the reproduction apparatus 3.
In the case of synchronized reproduction, the sound processing system may be configured such that the reproduction apparatus 1 superimposes information indicative of time elapsed from the start of reproduction as synchronization information whereas the reproduction apparatus 2 and the reproduction apparatus 3 reproduce their respective content data in accordance with the information indicative of the elapsed time. Alternatively, in a case where the content data conforms to MIDI, the timing at which peaks of spread codes are extracted may be used as a reference clock to achieve the synchronized reproduction. In this case, the control portion 11 controls the superimposition portion 14 so that the spread codes will be output at time intervals of the reference clock. The reproduction portion 26 of the reproduction apparatus 2 and the automatic performance portion 36 of the reproduction apparatus 3 can allow their respective sequencers to operate with the peaks of the correlation values calculated at regular intervals being defined as the reference clock to conduct automatic performance.
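Driving a sequencer from the regularly recurring correlation peaks can be sketched as follows. The MIDI convention of 24 timing clocks per quarter note is standard; the simple tick-counting model is an assumption for illustration and ignores jitter and missed peaks, which a real receiver would have to handle.

```python
# Hedged sketch: treating each detected correlation peak as one tick of a
# MIDI-style reference clock, so a sequencer can track song position.

TICKS_PER_QUARTER = 24  # MIDI timing-clock convention

def beats_from_peaks(peak_times):
    """Each detected peak advances the sequencer by one clock tick;
    the current song position in quarter notes is ticks / 24."""
    return len(peak_times) / TICKS_PER_QUARTER
```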
By superimposition of information about the time difference between the timing at which musical tones are generated and the reference clock, furthermore, the apparatus which demodulates the information can achieve synchronized reproduction with even greater precision. By additional superimposition of performance information such as note numbers, velocities and the like, the apparatus which demodulates the information is able to carry out automatic performance even if the apparatus does not store MIDI data.
In the case of synchronized reproduction, for example, collaborative operations are available among the apparatuses, such as the reproduction apparatus 1 generating sounds of a piano part, the reproduction apparatus 2 generating sounds of a string part while displaying a musical score, and the reproduction apparatus 3 generating sounds of a drum part. As for the display of a musical score, furthermore, the reproduction apparatus is also able to indicate the current reproduction timing on the musical score (indicating the progression of a song).
Although the above-described example is configured such that the reproduction apparatus 1 reproduces content data to emit sounds from the speaker 10, the reproduction apparatus 1 may be connected to an automatic performance musical instrument (e.g., an automatically played piano) to generate musical tones from the automatic performance musical instrument. The automatically played piano is an acoustic musical instrument in which solenoids provided on key actions of the piano operate in accordance with input MIDI data to depress keys. In this case, the reproduction apparatus 1 outputs only modulated sounds from the speaker 10.
Although the above-described embodiment is configured such that the respective apparatuses store content data, the respective apparatuses may receive content data externally over the airwaves, or may receive content data through the network (from the server 9, for example). Alternatively, the respective apparatuses may retrieve PCM data from a medium such as a CD.
In the case where the apparatuses receive content data through the network, particularly, the sound processing system may be configured such that the identified content data is charged for, so that downloading of the identified content data is allowed only after payment. In a case where the server 9 not only stores content data but also charges for the content data, the server 9 is able to conduct both the identification of content data and the charging for the identified content data in response to reception of a song ID. In a case where a server which charges for content data is provided separately from the server 9 which identifies content data, the sound processing system may be configured such that the server 9 which receives a song ID converts the song ID into a unique ID used specifically for the charging, so as to conduct downloading and charging separately.
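The separated identification/charging flow described above can be sketched as follows. All IDs, payloads and data structures here are hypothetical placeholders; the text does not specify how the servers actually record payment.

```python
# Hypothetical server-side tables: the identification server converts a
# song ID into a unique charging ID, and the charging server records
# payment against that charging ID.
CHARGE_IDS = {"song-001": "charge-xyz"}   # song ID -> unique charging ID
PAID = set()                              # charging IDs already paid for
CONTENT = {"song-001": b"...content data..."}

def pay(song_id):
    """The charging server records payment against the unique charging ID."""
    PAID.add(CHARGE_IDS[song_id])

def download(song_id):
    """Downloading of the identified content data is allowed only after
    payment has been recorded for the corresponding charging ID."""
    if CHARGE_IDS[song_id] not in PAID:
        raise PermissionError("content not yet paid for")
    return CONTENT[song_id]

pay("song-001")
data = download("song-001")
```

The point of the unique charging ID is that the charging server never needs to know the song ID itself, so downloading and charging can be conducted by separate servers.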
The reproduction apparatus 1, the reproduction apparatus 2 and the reproduction apparatus 5 may be placed at the same location at the same time. In this example, however, the reproduction apparatus 5, which is a portable instrument, is first placed at a location A where the reproduction apparatus 1 is placed, so that sound signals are input to the reproduction apparatus 5. After the input of the sound signals, the reproduction apparatus 5 is transferred to a location B where the reproduction apparatus 2 is placed, so that the sound signals are input to the reproduction apparatus 2.
The superimposition portion 28 is configured similarly to the superimposition portion 14 indicated in
The spread codes which the reproduction apparatus 5 superimposes may be either the same spread codes as those which the reproduction apparatus 1 superimposes or different spread codes. In a case where spread codes different from those of the reproduction apparatus 1 are used, however, the apparatus which is to demodulate the codes (the reproduction apparatus 2) must store the spread codes which are to be superimposed by the reproduction apparatus 5.
Furthermore, the control portion 22 of the reproduction apparatus 5 is also able to extract, through the LPF, only the sound signals of the content data from the sound signals collected by the microphone 50, and to store the extracted sound signals in the content data storage portion 25 as recorded data. In this case, the reproduction apparatus 5 is able to superimpose the content identification information on the recorded data again to emit sounds in which the modulated components are contained. The LPF is not an absolute necessity, however. More specifically, the sound signals in which the spread codes are contained may be recorded directly in the content data storage portion 25 so that the recorded sound signals will be output for reproduction later. In this case, the superimposition portion 28 is not necessary.
The sound processing system of the example application 2 is configured such that the reproduction apparatus 6 transmits collected sound signals to an analysis server (either identical to the server 9 or different from the server 9) so that the server will demodulate the sound signals to identify content. That is, the example application 2 is an example in which the server 9 is additionally provided with an analysis function.
In this case, the control portion 22 transmits the sound signals collected by the microphone 60 (or encoded data) to the server 9 through the communication portion 21. Furthermore, the control portion 22 also transmits information indicative of the types of content that the apparatus is able to reproduce (reproducible content information).
The server 9 is provided with a demodulation portion 94 having the same configuration and function as those of the demodulation portion 24. The control portion 92 inputs the received sound signals to the demodulation portion 94 to decode the content identification information. The control portion 92 then transmits the decoded content identification information to the control portion 22 of the reproduction apparatus 6. In addition, the control portion 92 extracts correlated content information (content ID) in accordance with the song ID included in the decoded content identification information and the reproducible content information received from the reproduction apparatus 6 to transmit the extracted correlated content information to the control portion 22 of the reproduction apparatus 6.
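The extraction of correlated content information performed by the control portion 92 can be sketched as a filtered catalog lookup. The catalog contents, content types and ID formats below are invented for illustration; the text does not specify how the server's correlation data is organized.

```python
# Hypothetical catalog on the server: each song ID maps to content IDs
# keyed by content type.
CATALOG = {
    "song-001": {"audio": "cnt-a1", "score": "cnt-s1", "midi": "cnt-m1"},
}

def correlated_content(song_id, reproducible_types):
    """Return only the content IDs that the requesting apparatus is able
    to reproduce, combining the demodulated song ID with the reproducible
    content information received from the apparatus."""
    entries = CATALOG.get(song_id, {})
    return {t: cid for t, cid in entries.items() if t in reproducible_types}

# An apparatus that reported it can reproduce audio and musical scores
# receives only those two content IDs, not the MIDI one.
ids = correlated_content("song-001", {"audio", "score"})
```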
The control portion 22 displays song information included in the received content identification information on the display portion 23. In addition, the control portion 22 searches the content data storage portion 25 for the content data (audio data and musical score data) corresponding to the received content ID. In a case where the content data is found, the control portion 22 reads out the found content data to output the content data to the reproduction portion 26. In a case where the content data is not found in the content data storage portion 25, the control portion 22 downloads correlated content data from the server 9. The display portion 23 displays a musical score in accordance with the input musical score data, while the reproduction portion 26 reproduces the input content data to generate sound signals. The generated sound signals are output to the speaker 61 to be emitted as sounds.
Furthermore, the control portion 22 of the reproduction apparatus 6 records the sound signals collected by the microphone 60 as recorded data in the content data storage portion 25. In this case, the sound signals in which spread codes are included are directly recorded in the content data storage portion 25. When the recorded data is reproduced at the location B (e.g., home), the correlated content data is reproduced by the reproduction apparatus 2.
As described above, the reproduction apparatus 6 is configured to transmit the collected sound signals and to operate in accordance with the received analysis results. In the sound processing system of the example application 2, therefore, a typical digital audio player can serve as the content data reproduction apparatus of the present invention.
The above-described examples are configured such that each reproduction apparatus emits sounds by use of a speaker, generates sound signals by use of a microphone, and transmits sounds by air. However, audio signals may be transmitted by line connections.
Although the above-described embodiment employs the examples in which the reproduction apparatus sends the server information indicative of the types of content which the reproduction apparatus is able to reproduce, the embodiment may be modified such that the reproduction apparatus sends the server information for identifying the reproduction apparatus (e.g., an ID unique to the reproduction apparatus). In this modification, the server stores, as a database, information indicative of the types of content reproducible on each reproduction apparatus, so that by referring to the database on the basis of the received information for identifying a reproduction apparatus, the server can identify the content data which can be reproduced by that reproduction apparatus.
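The modification above reduces to a database lookup keyed by the device-unique ID. The device IDs and content types below are invented placeholders used only to illustrate the idea.

```python
# Hypothetical server-side database mapping each reproduction apparatus,
# identified by its unique ID, to the content types it can reproduce.
DEVICE_DB = {
    "dev-0001": {"audio", "score"},   # e.g. a player with a display portion
    "dev-0002": {"midi"},             # e.g. an automatic performance instrument
}

def reproducible_types(device_id):
    """Look up what the identified apparatus can reproduce; an unknown
    apparatus yields the empty set."""
    return DEVICE_DB.get(device_id, set())
```

With this table in place, the apparatus only ever transmits its ID, and the server derives the reproducible content information itself.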
Furthermore, the embodiment may be modified such that the reproduction apparatus sends the server both the information indicative of the types of content which the reproduction apparatus can reproduce (reproducible content information) and the information for identifying the reproduction apparatus. In this modification, even in a situation where the types of reproducible content vary depending on the operating state of the reproduction apparatus, the server is able to identify content data suitable for the reproduction apparatus. In the above-described example applications 1, 2 as well, furthermore, each of the reproduction apparatuses 1, 2, 5, 6 can be configured by the apparatus indicated in
Number | Date | Country | Kind |
---|---|---|---|
2010-80436 | Mar 2010 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
4680740 | Treptow | Jul 1987 | A |
4964000 | Kanota et al. | Oct 1990 | A |
5025702 | Oya | Jun 1991 | A |
5056402 | Hikawa et al. | Oct 1991 | A |
5212551 | Conanan | May 1993 | A |
5414567 | Amada et al. | May 1995 | A |
5423073 | Ogawa | Jun 1995 | A |
5428183 | Matsuda et al. | Jun 1995 | A |
5608807 | Brunelle | Mar 1997 | A |
5612943 | Moses et al. | Mar 1997 | A |
5637821 | Izumisawa et al. | Jun 1997 | A |
5637822 | Utsumi et al. | Jun 1997 | A |
5670732 | Utsumi et al. | Sep 1997 | A |
5684261 | Luo | Nov 1997 | A |
5857171 | Kageyama et al. | Jan 1999 | A |
5886275 | Kato et al. | Mar 1999 | A |
5889223 | Matsumoto | Mar 1999 | A |
6141032 | Priest | Oct 2000 | A |
6248945 | Sasaki | Jun 2001 | B1 |
6266430 | Rhoads | Jul 2001 | B1 |
6304523 | Jones et al. | Oct 2001 | B1 |
6326537 | Tamura | Dec 2001 | B1 |
6429365 | Ide et al. | Aug 2002 | B1 |
6462264 | Elam | Oct 2002 | B1 |
6621881 | Srinivasan | Sep 2003 | B2 |
6965682 | Davis et al. | Nov 2005 | B1 |
6970517 | Ishii | Nov 2005 | B2 |
7009942 | Fujimori et al. | Mar 2006 | B2 |
7026537 | Ishii | Apr 2006 | B2 |
7161079 | Nishitani et al. | Jan 2007 | B2 |
7164076 | McHale et al. | Jan 2007 | B2 |
7181022 | Rhoads | Feb 2007 | B2 |
7375275 | Matsuoka et al. | May 2008 | B2 |
7415129 | Rhoads | Aug 2008 | B2 |
7505605 | Rhoads et al. | Mar 2009 | B2 |
7507894 | Matsuura et al. | Mar 2009 | B2 |
7531736 | Honeywell | May 2009 | B2 |
7546173 | Waserblat et al. | Jun 2009 | B2 |
7554027 | Moffatt | Jun 2009 | B2 |
7562392 | Rhoads et al. | Jul 2009 | B1 |
7572968 | Komano | Aug 2009 | B2 |
7620468 | Shimizu | Nov 2009 | B2 |
7630282 | Tanaka et al. | Dec 2009 | B2 |
7750228 | Fujishima et al. | Jul 2010 | B2 |
7799985 | Yanase et al. | Sep 2010 | B2 |
8023692 | Rhoads | Sep 2011 | B2 |
8103542 | Davis et al. | Jan 2012 | B1 |
8116514 | Radzishevsky | Feb 2012 | B2 |
8126200 | Rhoads | Feb 2012 | B2 |
8165341 | Rhoads | Apr 2012 | B2 |
8180844 | Rhoads | May 2012 | B1 |
8204222 | Rhoads | Jun 2012 | B2 |
8688250 | Iwase et al. | Apr 2014 | B2 |
8697975 | Iwase et al. | Apr 2014 | B2 |
8737638 | Sakurada et al. | May 2014 | B2 |
20010021188 | Fujimori et al. | Sep 2001 | A1 |
20010037721 | Hasegawa et al. | Nov 2001 | A1 |
20010053190 | Srinivasan | Dec 2001 | A1 |
20010055464 | Miyaki et al. | Dec 2001 | A1 |
20020026867 | Hasegawa et al. | Mar 2002 | A1 |
20020048224 | Dygert et al. | Apr 2002 | A1 |
20020078146 | Rhoads | Jun 2002 | A1 |
20020156547 | Suyama et al. | Oct 2002 | A1 |
20020166439 | Nishitani et al. | Nov 2002 | A1 |
20030161425 | Kikuchi | Aug 2003 | A1 |
20030190155 | Tsutsui et al. | Oct 2003 | A1 |
20030195851 | Ong | Oct 2003 | A1 |
20030196540 | Ishii | Oct 2003 | A1 |
20030200859 | Futamase et al. | Oct 2003 | A1 |
20040094020 | Wang et al. | May 2004 | A1 |
20040137929 | Jones et al. | Jul 2004 | A1 |
20040159218 | Aiso et al. | Aug 2004 | A1 |
20050071763 | Hart et al. | Mar 2005 | A1 |
20050154908 | Okamoto | Jul 2005 | A1 |
20050211068 | Zar | Sep 2005 | A1 |
20050255914 | McHale et al. | Nov 2005 | A1 |
20060009979 | McHale et al. | Jan 2006 | A1 |
20060054008 | Yanase et al. | Mar 2006 | A1 |
20060078305 | Arora et al. | Apr 2006 | A1 |
20060133624 | Waserblat et al. | Jun 2006 | A1 |
20060219090 | Komano | Oct 2006 | A1 |
20060239503 | Petrovic et al. | Oct 2006 | A1 |
20060248173 | Shimizu | Nov 2006 | A1 |
20070074622 | Honeywell | Apr 2007 | A1 |
20070149114 | Danilenko | Jun 2007 | A1 |
20070169615 | Chidlaw et al. | Jul 2007 | A1 |
20070209498 | Lindgren et al. | Sep 2007 | A1 |
20070256545 | Lee et al. | Nov 2007 | A1 |
20070286423 | Soda et al. | Dec 2007 | A1 |
20080053293 | Georges et al. | Mar 2008 | A1 |
20080063226 | Koyama et al. | Mar 2008 | A1 |
20080101635 | Dijkstra et al. | May 2008 | A1 |
20080105110 | Pietrusko et al. | May 2008 | A1 |
20080119953 | Reed et al. | May 2008 | A1 |
20080161956 | Tohgi et al. | Jul 2008 | A1 |
20080178726 | Honeywell | Jul 2008 | A1 |
20080243491 | Matsuoka | Oct 2008 | A1 |
20090070114 | Staszak | Mar 2009 | A1 |
20090132077 | Fujihara et al. | May 2009 | A1 |
20100023322 | Schnell et al. | Jan 2010 | A1 |
20100045681 | Weissmueller, Jr. et al. | Feb 2010 | A1 |
20100132536 | O'Dwyer | Jun 2010 | A1 |
20100208905 | Franck et al. | Aug 2010 | A1 |
20100251876 | Wilder | Oct 2010 | A1 |
20100280907 | Wolinsky et al. | Nov 2010 | A1 |
20110023691 | Iwase et al. | Feb 2011 | A1 |
20110028160 | Roeding et al. | Feb 2011 | A1 |
20110029359 | Roeding et al. | Feb 2011 | A1 |
20110029362 | Roeding et al. | Feb 2011 | A1 |
20110029364 | Roeding et al. | Feb 2011 | A1 |
20110029370 | Roeding et al. | Feb 2011 | A1 |
20110066437 | Luff | Mar 2011 | A1 |
20110103591 | Ojala | May 2011 | A1 |
20110150240 | Akiyama et al. | Jun 2011 | A1 |
20110167390 | Reed et al. | Jul 2011 | A1 |
20110174137 | Okuyama et al. | Jul 2011 | A1 |
20110209171 | Weissmueller, Jr. et al. | Aug 2011 | A1 |
20110290098 | Thuillier | Dec 2011 | A1 |
20110319160 | Arn et al. | Dec 2011 | A1 |
20120064870 | Chen et al. | Mar 2012 | A1 |
20120065750 | Tissier et al. | Mar 2012 | A1 |
20130077447 | Hiratsuka | Mar 2013 | A1 |
20130079910 | Hiratsuka et al. | Mar 2013 | A1 |
20130179175 | Biswas et al. | Jul 2013 | A1 |
20130243203 | Franck et al. | Sep 2013 | A1 |
20130282368 | Choo et al. | Oct 2013 | A1 |
20130305908 | Iwase et al. | Nov 2013 | A1 |
20140040088 | King et al. | Feb 2014 | A1 |
Number | Date | Country |
---|---|---|
1422689 | May 2004 | EP |
1505476 | Feb 2005 | EP |
2312763 | Apr 2011 | EP |
63128810 | Jun 1988 | JP |
5091063 | Apr 1993 | JP |
07240763 | Sep 1995 | JP |
08286687 | Nov 1996 | JP |
10093449 | Apr 1998 | JP |
2000020054 | Jan 2000 | JP |
2000056872 | Feb 2000 | JP |
2000181447 | Jun 2000 | JP |
2001008177 | Jan 2001 | JP |
2001203732 | Jul 2001 | JP |
2001265325 | Sep 2001 | JP |
2001356767 | Dec 2001 | JP |
2002175089 | Jun 2002 | JP |
2002229576 | Aug 2002 | JP |
2002314980 | Oct 2002 | JP |
2002341865 | Nov 2002 | JP |
2003280664 | Oct 2003 | JP |
2003295894 | Oct 2003 | JP |
2003316356 | Nov 2003 | JP |
2004126214 | Apr 2004 | JP |
2004341066 | Dec 2004 | JP |
2005-122709 | May 2005 | JP |
2005274851 | Oct 2005 | JP |
2006100945 | Apr 2006 | JP |
2006163435 | Jun 2006 | JP |
2006251676 | Sep 2006 | JP |
2006287301 | Oct 2006 | JP |
2006287730 | Oct 2006 | JP |
2006323161 | Nov 2006 | JP |
2006337483 | Dec 2006 | JP |
2007104598 | Apr 2007 | JP |
2007249033 | Sep 2007 | JP |
4030036 | Jan 2008 | JP |
2008072399 | Mar 2008 | JP |
2008-224707 | Sep 2008 | JP |
2008216889 | Sep 2008 | JP |
2008228133 | Sep 2008 | JP |
2008-286946 | Nov 2008 | JP |
2010-206568 | Sep 2010 | JP |
2010276950 | Dec 2010 | JP |
0008909 | Feb 2000 | WO |
2005018097 | Feb 2005 | WO |
2005055194 | Jun 2005 | WO |
2010127268 | Nov 2010 | WO |
2011014292 | Feb 2011 | WO |
Entry |
---|
Extended European Search Report issued Jul. 27, 2011 for corresponding EP Patent Application No. 11160194.4. |
Japanese Office Action for corresponding JP 2010-080436, mailed Apr. 1, 2014. English translation provided. |
Japanese Office Action cited in Japanese counterpart application No. JP2011-209117, dated Sep. 10, 2013. English translation provided. Cited in pending related U.S. Appl. No. 13/626,018. |
Japanese Office Action cited in Japanese counterpart application No. JP2011-208307, dated Sep. 24, 2013. English translation provided. Cited in pending U.S. Appl. No. 13/626,018. |
Notice of Allowance issued in related pending U.S. Appl. No. 13/955,451. Mailed Dec. 10, 2014. |
Non Final Office Action issued in related pending U.S. Appl. No. 13/626,018. Mailed Sep. 24, 2014. |
Notification of Reasons for Refusal for JP2008-331081, mailed Mar. 5, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
International Search Report for PCT/JP2009/063513, mailed Nov. 2, 2009. Cited in related co-pending U.S. Appl. No. 13/955,451. |
International Search Report for PCT/JP2009/063510, mailed Sep. 8, 2009. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Yamaha Digital Mixing Console Owner's Manual 2006. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Notification of Reasons for Refusal for corresponding JP2008-196492 mailed Oct. 1, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Notification of Reasons for Refusal for corresponding JP2008-253532 mailed Oct. 1, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Extended European Search Report for EP09802994.5 mailed Oct. 17, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Ryuki Tachibana, “Audio Watermarking for Live Performance”; Proceedings of SPIE-IS&T Electronic Imaging, SPIE vol. 5020; pp. 32-43, 2003. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Extended European Search Report for EP09802996.0 mailed Mar. 27, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in JP2008-331081 mailed Jun. 18, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in JP2009-171319 mailed Sep. 17, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in JP2009-171320 mailed Sep. 17, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in JP2009-171321 mailed Sep. 10, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Extended European Search Report issued in EP14169714.4 mailed Aug. 18, 2014. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in U.S. Appl. No. 13/071,821 mailed Jan. 28, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Notice of Allowance issued in U.S. Appl. No. 13/071,821 mailed Jun. 25, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Notice of Allowance issued in U.S. Appl. No. 13/071,821 mailed Nov. 22, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in U.S. Appl. No. 13/733,950 mailed Dec. 24, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in U.S. Appl. No. 13/733,950 mailed Aug. 7, 2014. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Japanese Office Action cited in JP2012000911, dated Nov. 26, 2013. English translation of relevant portion provided. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Jonti Olds, “J-QAM—A QAM Soundcard Modem”, Archive.org, Nov. 26, 2011, XP055090998. Retrieved on Dec. 2, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
“Quadrature Amplitude Modulation”, Wikipedia, Jan. 5, 2012, XP055091051, Retrieved on Dec. 2, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
European Search Report issued in EP12197637.7 mailed Dec. 10, 2013. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in JP2009-171319 mailed May 7, 2014. English translation provided. Cited in related co-pending U.S. Appl. No. 13/955,451. |
Office Action issued in U.S. Appl. No. 13/955,451 mailed Dec. 18, 2013. |
Office Action issued in U.S. Appl. No. 13/955,451 mailed Jun. 6, 2014. |
Number | Date | Country | |
---|---|---|---|
20140013928 A1 | Jan 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13071821 | Mar 2011 | US |
Child | 14035081 | US |