The present invention relates to a music editing apparatus and method that edit, as materials, music content or the like obtained from the song parts, bars, and so forth into which songs are divided, and that create new songs (music content).
As memory type audio players have become common, users daily carry a large number of songs with them. With memory type audio players, which allow their users to listen to a large number of songs continuously without the need to change mediums, a style has become common in which users not only listen to the songs of each CD album in sequence, but also listen to songs shuffled across all CD albums and to their favorite songs arranged as play lists. It can be said that the degree of freedom with which users listen to their favorite songs in their favorite orders, rather than in the orders in which the songs are recorded in albums, has increased.
However, it is assumed that the user listens to each song from the beginning to the end. In addition, there is a non-sound region between songs. In other words, one song is the minimum unit, and the user has to listen to music on the basis of one song. The user's concentration, namely his or her enthusiasm, breaks at such a non-sound region between songs. As a result, it is difficult for the user to continuously listen to songs with his or her tension kept.
Thus a technique of combining a plurality of music materials and editing them was developed. Japanese Patent Application Laid-Open No. 2003-44046 discloses a technique of combining a plurality of music materials that are input through a user's keyboard and mouse, allowing him or her to interactively edit them and enjoy the reproduced songs. In this technology, tips composed of sound pieces such as sound patterns and one-shots and of effects applicable to the sound pieces are assigned to individual keys of the keyboard. When the user presses a key, the tip corresponding thereto is output. When a plurality of keys are pressed at the same time, the corresponding tips are combined and output. In addition, when keys are successively pressed, the corresponding tips are chronologically combined and thereby a song is composed.
However, with the foregoing technique disclosed in Japanese Patent Application Laid-Open No. 2003-44046, it was difficult to recompose a song by connecting parts of a plurality of songs in such a manner that their beats are matched, like a remix created, for example, by a disc jockey (DJ). A DJ is currently a common occupation. There are label companies that dedicatedly deal with remix music. The number of users who enjoy remix music is increasing.
Taking the rhythm of music into account, it is preferred that a music editing apparatus follow the rhythm of the song currently being reproduced in real time and reproduce a plurality of songs successively or simultaneously in such a manner that the rhythm of each song is properly matched, as a professional DJ does.
Persons having the occupation called DJ can successively connect a plurality of songs and reproduce them as if they were a series of songs, using a record turntable and controlling the reproduction start timings and tempos of these songs. This technique is generally called a remix. Such rhythm-synchronized reproduction provides the listener with new sensation and excitement as an advanced music listening style, one step beyond the conventional simple listening manner. Thus, in recent years, the DJ remix technique has become common as a new music expression technique.
However, currently, it is very difficult to mechanically automate the DJ reproducing technique. This is because although the senses of rhythm, beat, and tempo of music are sensory quantities that humans can easily perceive, they are very difficult for machines to recognize. Even if an automatic recognition technology is used, its recognition accuracy does not reach 100%. Thus machines are insufficient for practical use in music arrangement, which necessitates perfection. Since machines cannot understand the structure of music, they cannot automatically perform flexible arrangement reproduction based on a music structure as in DJ play.
From the foregoing point of view, the present invention solves the following problems. DJs can seamlessly start reproducing the next song such that its beats and bars are matched with those of the song that is being reproduced. In contrast, since machines cannot recognize beats and bars, they cannot automatically perform such a process.
DJs can seamlessly start reproducing the next song such that its tempo is matched with that of the song that is currently being reproduced, even if the tempo of the current song changes. In contrast, since machines cannot recognize the tempo, they cannot automatically perform such a process.
DJs can seamlessly start reproducing the next song such that its melody is matched with that of the song that is being reproduced. In contrast, since machines cannot recognize the melody, they cannot automatically perform such a process.
The present invention was created from the foregoing point of view. An object of the present invention is to provide a music editing apparatus that uses so-called time line meta data, which describe the beat and melody structure of a song, to accomplish automatic DJ remix play with a low processing load.
To solve the foregoing problem, a music editing apparatus of the present invention includes a remix processing section which performs a remix process based on both meta data pre-generated corresponding to song data and containing at least time-based beat positions and a remix pattern file; a synchronous reproduction controlling section which generates a master beat, sends the master beat to the remix processing section, and reproduces a plurality of songs with individual entries of the meta data corresponding to a remix process which the remix processing section designates corresponding to the remix pattern file and the meta data; and a mixing section which mixes the songs reproduced by the synchronous reproduction controlling section. The synchronous reproduction controlling section also includes a phase comparator which compares a phase of a beat synchronous signal generated from the meta data with that of the master beat signal; an integrating circuit which integrates a phase compared output of the phase comparator; and a correcting section which corrects a tempo based on an integrated output of the integrating circuit.
A music editing method of the present invention includes a remix processing step of performing a remix process based on both meta data pre-generated corresponding to song data and containing at least time-based beat positions and a remix pattern file; a synchronous reproduction controlling step of generating a master beat, sending the master beat to the remix processing section, and reproducing a plurality of songs corresponding to a remix process which the remix processing section designates corresponding to the remix pattern file and the meta data; and a mixing step of mixing the songs reproduced by the synchronous reproduction controlling section. The synchronous reproduction controlling step further includes a phase comparing step of comparing a phase of a beat synchronous signal generated from the meta data supplied from the remix processing step with that of the master beat signal; an integrating step of integrating a phase compared output of the phase comparator; and a correcting step of correcting a tempo based on an integrated output of the integrating step.
According to the present invention, when songs having different tempos and rhythms are reproduced as a remix, even if the beats of original music change, beat synchronous reproduction can be automatically performed in real time for the songs such that their tempos and rhythms are kept constant.
Thus, according to the present invention, since music can be automatically reproduced as the DJ remix play in real time, a music reproducing apparatus can provide a new music listening style. In addition, as an application of the present invention, since the beats of music can be easily synchronized with those of another medium, a new entertainment can be created.
Next, with reference to the accompanying drawings, best mode embodiments for carrying out the present invention will be described.
The music editing apparatus 1 is an automatic DJ reproducing apparatus that reproduces song A and song B, which have different tempos, as music content data (original data) of songs provided through various types of record mediums such as a hard disk (HD), a compact disc (CD), a mini disc (MD), a digital versatile disc (DVD), and so forth, and through networks such as the Internet, with meta data described later, in such a manner that their beats are matched or the songs are overlaid.
As shown in
The CPU 2 decides a connecting method for songs in real time and supplies necessary song materials to the synchronously reproducing section 8 at a necessary timing. In addition, corresponding to a user's operation, the CPU 2 designates a tempo and beat synchronization/asynchronization for the synchronously reproducing section 8.
The storage section 4 is composed of a song storage section 5, a song meta data storage section 6, and a remix pattern storage section 7. The song storage section 5 is a storage section that stores data of a plurality of songs. A storage section such as a flash memory or a hard disk that a console type music reproducing apparatus or a portable music reproducing apparatus has may be connected as an external device. Song data stored in the song storage section 5 may be either compressed music data or non-compressed data.
The song meta data storage section 6 is a storage section such as a flash memory or a hard disk and stores time-based meta data added to songs. As will be described later, meta data are time-based auxiliary data added to songs and describe position information of beats as well as tempos, position information of the beginnings of bars (simply referred to as beginnings), and melody information such as introductions and themes (catchy parts).
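For illustration, time-based meta data of this kind can be sketched as a simple per-song record; the field names below are assumptions introduced for this example, not a format prescribed by the song meta data storage section 6.

```python
from dataclasses import dataclass, field

@dataclass
class SongMetaData:
    # All field names are illustrative assumptions, not a prescribed format.
    bpm: float                   # original tempo of the song
    beat_positions: list         # beat positions, in samples from the song start
    bar_beginnings: list         # sample positions where bars begin
    melody_sections: list = field(default_factory=list)  # e.g. ("intro", start, end)

# Example: a 120 BPM song sampled at 44.1 kHz has one beat every 22050 samples.
SAMPLE_RATE = 44100
beat = int(SAMPLE_RATE * 60 / 120)   # 22050 samples per beat
meta = SongMetaData(
    bpm=120.0,
    beat_positions=[i * beat for i in range(16)],
    bar_beginnings=[i * 4 * beat for i in range(4)],  # four-four rhythm
    melody_sections=[("intro", 0, 8 * beat)],
)
```

Representing beats and bar beginnings as sample positions lets the reproduction position be checked directly against the audio stream.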
Like the song storage section 5, the remix pattern storage section 7 is not limited to a particular type as long as it is a storage device. The remix pattern storage section 7 is a storage section that stores a remix pattern file that designates a remix pattern. As will be described later, a remix pattern file is not only a file that describes the order of songs, but a file that describes the song order plus the manner of combining song A and song B, or the manner of what parts of song A and song B are used and where they are combined.
The synchronously reproducing section 8 is a signal processing block with which the music editing apparatus 1 automatically performs the DJ reproduction. The synchronously reproducing section 8 reproduces song materials designated by a remix control function of the CPU 2 in synchronization with a reference beat. The synchronously reproducing section 8 is composed of a synchronous reproduction controlling section 9, an audio mixing section 10, a digital/analog converting section (D/A) 11, and an audio output section 12.
The synchronous reproduction controlling section 9 has a plurality of audio signal generating sections and reproduces a plurality of audio signals in synchronization with a locally generated clock signal. In addition, the synchronous reproduction controlling section 9 always monitors the position of a song that is being currently reproduced on the basis of meta data supplied from the meta data storage section 6 and sends the current reproduction position (such as a reproduction position as a sample count in the song) and a reproduction position as a bar count and a beat count back to the remix process function section of the CPU 2.
The audio mixing section 10 mixes a plurality of audio signals generated by the plurality of audio signal generating sections of the synchronous reproduction controlling section 9 and outputs the mixed signal. The D/A 11 converts the digital signal reproduced by the audio mixing section 10 into an analog signal. The audio output section 12 amplifies the analog audio signal supplied from the D/A 11 and outputs the amplified signal to a speaker, a headphone, or the like.
The ROM 13 stores a music editing program composed of processes based on the music editing method according to the present invention. In addition, the ROM 13 stores various types of default data. The RAM 14 serves as a work area with which the CPU 2 executes the music editing program. In addition, the RAM 14 stores various types of update data used when the CPU 2 executes the music editing program.
The user operation I/F section 15 is, for example, a keyboard, a mouse, a touch panel, and/or the like that accepts user's operations. The U/I display section 16 is a display section including a touch panel that displays the current operation state and the music editing process state and accepts user's operations. The U/I display section 16 is, for example, a liquid crystal display device. Of course, the U/I display section 16 may be a CRT.
The audio signal generating sections have three tracks so as to output a plurality of stereo sounds in real time; as a typical example, these correspond to song A, song B, and a sound effect (SE). Of course, the audio signal generating section may have four tracks, five tracks, or more corresponding to the number of songs. The audio signal generating sections synchronously reproduce the bar/beat positions of songs in correspondence with the bar/beat positions of the master beat, based on the synchronous signal (clock or beat) generated by the master beat generating section 90.
For the individual tracks, decoders 91a, 91b, and 91c and time stretch sections 92a, 92b, and 92c are provided. The decoders 91a, 91b, and 91c decode compressed sounds that have been compressed according to MP3, ATRAC, or the like and output the decoded sounds as PCM data. In this case, since an SE is short and its data size is small, it does not always need to have been compressed; thus the decoder for the SE track may be omitted. The time stretch sections 92a, 92b, and 92c convert the reproduction speed while the interval (pitch) of each song is kept constant. The time stretch sections 92a, 92b, and 92c cause the tempos of song materials to match the tempo of the reference beat based on meta data supplied from the song meta data storage section 6. The time stretch sections 92a, 92b, and 92c change the reproduction speed in real time based on the ratio between the original tempo of a song and the tempo of the master beat. Thus the original tempos of songs can be matched with the tempo of the master beat. Of course, as described above, the intervals of songs are not changed.
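The reproduction-speed change performed by the time stretch sections reduces to a ratio of tempos. The helper below is an illustrative sketch under the assumption that both tempos are expressed in BPM; the function name is an assumption introduced for this example.

```python
def time_stretch_ratio(original_bpm: float, master_bpm: float) -> float:
    """Speed factor applied by a time stretch section so that the song's
    tempo matches the master beat (the pitch is preserved separately)."""
    if original_bpm <= 0 or master_bpm <= 0:
        raise ValueError("tempos must be positive")
    return master_bpm / original_bpm

# A 100 BPM song reproduced against a 120 BPM master beat runs 1.2x faster.
ratio = time_stretch_ratio(100.0, 120.0)
```

A ratio of 1.0 means the song is already at the master tempo and no stretching is needed.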
The audio signal generating sections may have a pitch shifter function. The pitch shifter function changes the interval of a song while the reproduction speed is kept constant. The pitch shifter function is used to musically harmonize song materials having different intervals and pitches. However, the pitch shifter function is not an essential function, but an additional function.
The audio mixing section 10 has three effect processing sections 100a, 100b, and 100c and three sound volume adjusting sections 101a, 101b, and 101c corresponding to the three tracks of the audio signal generating sections. The outputs of these three systems are mixed by an audio mixing section 102, amplified by an audio output section 103, and then output to the external speaker, headphone, or the like. The audio mixing section 10 can perform an effect process and a sound volume adjustment process for the individual tracks of the output audio signals supplied from the audio signal generating sections.
In the remix processing section 20, the meta data processing section 21 processes meta data stored in the song meta data storage section 6. As described above, time-based meta data have been added to songs. Meta data contain not only information about a tempo, but position information of beats, bar beginning position information, and melody information of an introduction, a catchy part, and so forth.
The meta data processing section 21 reads time-based meta data corresponding to the current song and checks the position information of the song according to designated remix pattern information that has been read by the remix pattern reading section 22. For example, knowing the current beat position and the beat positions of the songs to be combined, the meta data processing section 21 decides in what manner and at what timings to reproduce the songs and a sound effect.
Moreover, in the remix processing section 20, the remix pattern reading section 22 reads a remix pattern file 7a stored in the remix pattern storage section 7. The remix pattern file 7a is a file that designates whether or not to perform fade-out, whether or not to perform cut-out, and what SE to use. A remix pattern may be a data sequence designated by the user or a third party (commanding in what manner to connect this song) or created according to an automatically creating algorithm (namely, a remix decided on the machine side).
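For illustration, a remix pattern file of this kind could be sketched as follows; the JSON syntax, keys, and values are assumptions introduced for the example, since no concrete file format is prescribed here.

```python
import json

# Hypothetical remix pattern file content; keys and values are illustrative
# only, since no concrete file syntax is prescribed.
remix_pattern_json = """
{
  "entries": [
    {"song": "song_A", "part": "chorus", "connection": "cross_fade", "se": null},
    {"song": "song_B", "part": "intro",  "connection": "cut_in",     "se": "riser"}
  ]
}
"""

pattern = json.loads(remix_pattern_json)
# The remix pattern reading section would hand each entry, in order,
# to the remix processing section.
connections = [e["connection"] for e in pattern["entries"]]
```

Each entry captures the song order plus the manner of connection (fade, cut, SE), matching the description above that a remix pattern is more than a simple play list.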
In the synchronous reproduction controlling section 9, the master beat generating section 90 generates a master beat, and sends the master beat to the remix processing section 20, and reproduces a plurality of songs based on individual entries of meta data supplied from the meta data storage section 6 according to the remix process designated by the remix processing section 20 on the basis of a remix pattern and meta data.
First, in the remix processing section 20 of the CPU 2, the remix pattern reading section 22 reads the remix pattern file 7a from the remix pattern storage section 7 and obtains it (at step S1). Thereafter, the synchronous reproduction controlling section 9 is caused to obtain a song, for example, the first song (at step S2). When there is another song (YES at step S3), the tempo of the master beat generating section 90 of the synchronous reproduction controlling section 9 is decided (at step S4). The tempo may be fixed at, for example, 140, or the user may designate the tempo. Thereafter, a connection pattern (described in the pattern file) is obtained (at step S5).
Since the pattern that connects songs has been decided, meta data of the songs are obtained (at step S6). For example, meta data of song B are obtained. Thereafter, it is determined whether or not the effect process is necessary on the basis of the remix pattern file (at step S7). When necessary (YES), the effect processing section 100 is activated to perform a predetermined effect (at step S8).
Thereafter, it is determined whether or not the sound volume fading process is necessary on the basis of the remix pattern file (at step S9). For example, it is selected whether or not a sound volume fade, which increases or decreases the sound volumes of song A and song B that are edited and overlaid, is necessary. When necessary (YES), a fade parameter is set (at step S10). Here, it is assumed that the sound volumes are automatically increased or decreased, and their fade parameters are set accordingly.
Thereafter, the original tempos of the songs are set to the master beat generating section 90 of the synchronous reproduction controlling section 9 (at step S11). The original tempos of the songs have been added to the meta data of the songs. Thereafter, a free audio signal generating section of the synchronous reproduction controlling section 9 is obtained. Although an example of three channels was described in the foregoing specific example, a free audio signal generating section is obtained and the song to be reproduced is set to the obtained audio signal generating section (at step S13).
After the current reproduction position of each song has been obtained (at step S14), it is determined whether or not a point at which the next song is to be prepared has occurred (at step S15). When the connection is, for example, a cross-fade, it ends several bars before the relevant bar. When the connection is, for example, a cut-in, since it immediately starts, the song can be prepared one bar before it. Of course, simultaneous reproduction is a reproduction in which songs are simultaneously reproduced. It is determined whether or not such a point has occurred. When a point at which a song is to be prepared has not occurred, the flow of the processes returns to step S14 and the determination is repeated. When a point at which the next song is to be prepared has occurred (YES), the flow of the processes returns to step S2.
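The control flow of steps S1 through S15 above can be sketched as the following simplified loop; all method and field names are placeholder assumptions introduced for illustration, not the actual implementation of the remix processing section.

```python
class _LogEngine:
    """Minimal stand-in for the synchronous reproducing section; it only
    records the calls made to it so the control flow can be exercised."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        def record(*args):
            self.log.append((name,) + args)
        return record

def run_remix(pattern_entries, get_metadata, engine, master_bpm=140):
    # Sketch of steps S1-S15; all method names are assumptions.
    engine.set_master_tempo(master_bpm)                       # step S4
    for entry in pattern_entries:                             # steps S2/S3
        connection = entry["connection"]                      # step S5
        meta = get_metadata(entry["song"])                    # step S6
        if entry.get("effect"):                               # steps S7/S8
            engine.apply_effect(entry["song"], entry["effect"])
        if entry.get("fade"):                                 # steps S9/S10
            engine.set_fade(entry["song"], entry["fade"])
        engine.set_original_tempo(entry["song"], meta["bpm"])   # step S11
        engine.assign_free_track(entry["song"])               # step S13
        engine.wait_for_preparation_point(connection)         # steps S14/S15

entries = [{"song": "song_A", "connection": "cross_fade", "fade": "out"},
           {"song": "song_B", "connection": "cut_in"}]
metas = {"song_A": {"bpm": 100.0}, "song_B": {"bpm": 128.0}}
engine = _LogEngine()
run_remix(entries, metas.get, engine)
```

The recording engine makes the loop inspectable: the order of recorded calls mirrors the order of the steps described above.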
In the music editing apparatus 1 having the structure shown in
When the pattern shown in
Thus it can be said that a remix pattern is a conceptual entity in which a designation of what parts of songs are connected in what manner is added to a conventional play list.
Next, with reference to
With the music editing apparatus 1 having the structure shown in
As shown in
In addition, in
It should be noted that the present invention is not limited to the composition of only song A, song B, and SE. Of course, song C and another SE may be added to such a composition. However, for simplicity of the description, the composition of only song A, song B, and SE will be described in the following.
To do that, as described above, the music editing apparatus 1 reproduces song A, song B, and song C in such a manner that their beats are synchronized. To reproduce songs having different tempos in such a manner that their beats are synchronized, it is necessary to know their original tempos and their beat positions. In addition, to match the tempos and beat positions of the songs to be reproduced, it is necessary to change the reproduction speeds of the songs.
To know the original tempos and beat positions of the songs, the music editing apparatus 1 uses time-based meta data paired with the songs. When the songs are reproduced, their reproduction speeds are changed on the basis of the ratios of their original tempos and the tempo of the current master beat and the beginning positions of the bars of the songs are matched.
As described above, by accurately handling the tempos and beat positions of the plurality of songs and controlling their reproduction positions in real time, their synchronous reproduction is performed. The technique of seamlessly connecting songs is assumed to be based on this method of beat synchronous reproduction.
The music editing apparatus 1 can perform a cut-in process that is outlined in
The music editing apparatus 1 connects songs with a sound effect (SE) as described above. In other words, a sound effect (SE) can be interposed not only between songs, but at a connecting portion of songs and at any portion of a song.
In addition, the music editing apparatus 1 can perform the simultaneous reproduction. In this case, different songs are not only simultaneously reproduced, but their beats are synchronized and the songs are reproduced as shown in
In addition, as shown in
When reverb is applied as an effect to song A and a low-cut filter as an effect to song B, and these songs are simultaneously reproduced or connected in various manners, more natural and attractive connecting methods can be accomplished.
In addition, the music editing apparatus 1 may apply the foregoing connecting methods not only to a whole song, but also to a catchy part or an introduction of each song as shown in
As described above, the music editing apparatus 1 can seamlessly remix songs based on a remix pattern file that describes a remix pattern. In this case, the music editing apparatus 1 may decide the tempo of the master beat that the user has designated or match the tempos of songs with the tempo of a main song. Instead, with an acceleration sensor or the like, the music editing apparatus 1 may measure the walking tempo or jogging tempo of the user and decide the tempo of the master beat corresponding to the measured tempo.
Thus, with the music editing apparatus 1, the user can enjoy music not only in the conventional manner, but also in the following new manner. Unlike the conventional manner, this new manner cannot be achieved only by passively listening to music. This new manner allows the user to actively involve himself or herself in music, create a music listening method corresponding to his or her lifestyle, and satisfy his or her desire for self-expression.
In other words, the user can stylishly connect his or her favorite parts of songs and listen to them as seamless remix music. In addition, with the music editing apparatus 1, the user can satisfy his or her creative desire by selecting a connecting method for songs on a trial-and-error basis and stylishly creating a remix.
These tracks correspond to song A, song B, and SE. The number of tracks depends on the number of songs composed. For these tracks, the functions of decoders 91a and 91b, time stretch sections 92a, 92b, and 92c, and pitch shift sections 94a, 94b, and 94c are provided. The decoders decode compressed audio that has been compressed according to MP3, ATRAC, or the like and output the decompressed audio as PCM data. Since the length of an SE is short and its data size is small, it does not always need to be compressed. Thus the decoder for the SE track is omitted and a PCM out 91d is used instead.
The time stretch sections convert the reproduction speed of the corresponding song or SE with the interval kept constant. The time stretch sections are used to match the different tempos of the song materials with the tempo of the reference beat. The pitch shift sections change the interval of the corresponding song or SE with the reproduction speed kept constant. The pitch shift sections are used to musically harmonize song materials having different keys and pitches. However, the pitch shift sections are not essential functions, but additional functions.
The synchronous reproduction controlling section 9 reproduces the song material selected by the remix processing section 20 in synchronization with the reference beat. The synchronous reproduction controlling section 9 reproduces a music signal in synchronization with a self-generated clock signal. The clock signal is a clock corresponding to a beat generated in the synchronous reproduction controlling section 9. The synchronous reproduction controlling section 9 supplies an interrupt at the clock's leading edge to the remix processing section 20. Using the interrupts as triggers, the remix processing section 20 counts them and obtains a bar count and a beat count starting from the beginning. Since the remix processing section 20 continuously counts up the clock through the interrupts, it can know the timings at which to supply song A and song B. Thus the remix processing section 20 causes the synchronous reproduction controlling section 9 to reproduce them at those timings.
Unlike in a conventional music editing apparatus, the synchronous reproduction controlling section 9 is not a simple audio signal reproducing section. Instead, the synchronous reproduction controlling section 9 also generates a clock corresponding to a beat and supplies the clock to the remix processing section 20 as an interrupt.
Thus the reference beat is composed of a trigger signal that represents the beginning of a bar and a beat signal that represents a beat. The number of beats per bar depends on a rhythm (for example, four-four rhythm). The reference beat is supplied as an interrupt signal to the remix processing section 20. The remix processing section 20 counts the clock and obtains a bar count and a beat count as the current reproduction position. In addition, the reference beat can be used as a timing signal for supplying a song material.
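For illustration, counting the reference-beat interrupts into the bar count and beat count can be sketched as follows, assuming a fixed number of beats per bar (for example, a four-four rhythm); the class and method names are assumptions introduced for this example.

```python
class BeatCounter:
    """Tracks the current reproduction position (bar count, beat count)
    from reference-beat interrupts; a sketch assuming a fixed rhythm."""
    def __init__(self, beats_per_bar: int = 4):
        self.beats_per_bar = beats_per_bar
        self.total_beats = 0

    def on_beat_interrupt(self):
        # Called once per reference-beat interrupt.
        self.total_beats += 1

    @property
    def position(self):
        """1-based (bar, beat) position, counted from the beginning."""
        beats = self.total_beats - 1
        return beats // self.beats_per_bar + 1, beats % self.beats_per_bar + 1

counter = BeatCounter(beats_per_bar=4)   # four-four rhythm
for _ in range(6):                        # six interrupts received
    counter.on_beat_interrupt()
# After six beats in a four-four rhythm, the position is bar 2, beat 2.
```

The beat at which a bar begins (beat 1) can serve as the trigger for supplying the next song material.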
The remix processing section 20 pre-obtains the timings of the first beats of bars as the reference beats that are interrupt signals. The remix processing section 20 pre-selects a song material to be reproduced next and supplies the selected song material to the synchronous reproduction controlling section 9 at a timing immediately before the first beat of the next bar from which the song material is to be reproduced.
Thereafter, the synchronous reproduction controlling section 9 starts reproducing the song material at the timing of the first beat of the next bar. Likewise, the reproduction of all song materials is started in synchronization with the reference beat. Thus, for example, song A, song B, and an SE can be synchronously reproduced.
Specifically, in
In other words, although the remix processing section 20 controls processes at rough timings, the synchronous reproduction controlling section 9 performs processes corresponding to commands at very fine timings.
Thus, in the music editing apparatus 1, the synchronous reproduction controlling section 9 generates a clock by itself and reproduces song materials at timings exactly in synchronization with the clock. The remix processing section 20 functions as a system controlling section that controls the whole system. The remix processing section 20 controls the system at rough timings. In other words, while the remix processing section 20 issues commands to the synchronous reproduction controlling section 9 at rough timings, the synchronous reproduction controlling section 9 exactly performs processes corresponding to the commands.
The synchronous reproduction controlling section 9 executes all commands supplied from the remix processing section 20 in synchronization with the reference beat. In this example, the reference beat as a synchronous mode is the first beat of a bar. Instead, commands may be synchronized with a regular beat or a fine beat such as a beat of a 16-beat rhythm. When the synchronous mode is OFF, the synchronous reproduction controlling section 9 immediately starts reproducing song materials corresponding to a command supplied from the remix processing section 20. The synchronous mode is commanded to the synchronous reproduction controlling section 9 by the remix processing section 20.
As exemplified in
By spectrum-analyzing (
It is assumed that three types of music, song A, song B, and SE, are simultaneously reproduced. To lock their frequencies and phases, a master clock is prepared. The master clock is generated by the master beat generating section 90, which is disposed in the synchronous signal generating section 210 and is a simple oscillating circuit. The synchronous reproduction controlling section 9 of the synchronous reproducing section 8 follows the beats of the music in synchronization with the master clock. The master beat is, for example, 120 BPM. The three reproducing sections control feedback such that their phases are synchronized with the phase of the clock and the error of the reproduction tempos does not accumulate. The music rhythm obtaining section 206 generates a beat synchronous signal from the time line meta data. The phase comparator and integrating circuit 204 compares the phase and frequency of the generated beats with those of the master clock. Negative feedback is applied to prevent the error from accumulating. The tempo of the time stretch is finely adjusted such that when the tempo becomes slower than the master clock, the tempo is increased, and when the tempo becomes faster than the master clock, the tempo is decreased. In other words, the reproduction of the music is synchronized with the master clock.
To put it in more detail, with the time line meta data, a beat synchronous signal corresponding to the beats of the current music is generated, compared with the master clock, and fed back such that the phase of the current music does not deviate from that of the master clock. Thus songs can be reproduced for a long time without their phases drifting.
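For illustration, the cooperation of the phase comparator, the integrating circuit, and the tempo correcting section can be modeled as the following feedback-loop simulation; the gains, tick size, and step count are illustrative assumptions, not parameters of the actual circuit.

```python
def simulate_tempo_lock(master_bpm, original_bpm, kp=30.0, ki=15.0,
                        dt=0.05, steps=400):
    """Sketch of the negative-feedback loop: the phase comparator measures
    the offset between the song's beat phase and the master clock, the
    integrating circuit accumulates it, and the time-stretch tempo is
    corrected accordingly. Gains and tick size are illustrative."""
    master_phase = song_phase = 0.0   # phases, in beats
    integral = 0.0                    # integrating circuit output
    bpm = original_bpm
    for _ in range(steps):
        master_phase += master_bpm / 60.0 * dt
        song_phase += bpm / 60.0 * dt
        error = master_phase - song_phase             # phase comparator output
        integral += ki * error * dt                   # integrating circuit
        bpm = original_bpm + kp * error + integral    # tempo correcting section
    return bpm, error

# A 100 BPM song gradually locks onto a 120 BPM master clock.
final_bpm, final_error = simulate_tempo_lock(120.0, 100.0)
```

Because the correction is driven by the phase error rather than the tempo alone, the loop removes accumulated drift: at equilibrium the integrator carries the whole tempo offset and the phase error returns to zero.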
Thus, when the music editing apparatus 1 reproduces songs having different tempos and rhythms as a remix, even if the beats of the original music change, the beat synchronous reproduction can be automatically performed for them in real time such that their tempos and rhythms are kept constant.
Since the music editing apparatus 80 has the network communicating section 81, the user can exchange and/or share a remix pattern file through a network such as the Internet 82, and a third party can download a remix created by another user and enjoy it. Instead, a remix pattern provided by a content service side rather than a private user may be used.
With the music editing apparatus 80, a new music-based communication can be accomplished in which connecting methods for songs are published on the Internet or the like, shared by other persons, and remixes are created and evaluated by a plurality of persons.
As in an invention disclosed, for example, as “Reproduction State Controlling Apparatus And Reproduction State Controlling Method” (Japanese Patent Application Laid-Open No. 2005-156641), the sensor value obtaining section 113 is based on a technique of detecting a walking tempo using an acceleration sensor and changing the tempo of a song corresponding to the walking tempo. When this example is applied, the tempo of the master beat generating section can be changed corresponding to the walking tempo, and all songs can be reproduced as remixes corresponding to the walking tempo and jogging tempo. In addition, as in “Reproducing Apparatus And Reproducing Method For Audio Signal” (Japanese Patent Application No. 2005-363094), an invention of selecting songs corresponding to the walking tempo or jogging tempo may be applied.
To apply these techniques, a sensor is essential. When the sensor and the algorithms of these inventions are applied to the system of the present invention, songs can be selected corresponding to the state of the user, the selected songs can be remixed, and they can be seamlessly reproduced.
First, it is determined whether the mode of the music editing apparatus 110 is the sensor priority mode or the pattern priority mode (at step S311). In this case, the sensor priority mode is based on the technique of selecting songs depending on whether the user is walking or jogging. The sensor decides the order of songs and selects them, so songs may not be decided corresponding to a pattern; rather, they are dynamically changed. In other words, as a concept, the music editing apparatus 110 dynamically creates a pattern file with values supplied from the sensor instead of reading a prefixed pattern file.
When the pattern priority mode rather than the sensor priority mode has been selected at step S311, the same processes shown in
Next, the case in which the sensor priority mode has been selected at step S311 or step S312 will be described. In this case, songs are automatically selected corresponding to jogging, and the selected songs are automatically connected corresponding thereto.
As a subroutine, songs and their tempos are decided corresponding to the input from the sensor (at step S313). When the determined result at step S314 denotes that there is another song, the flow advances to step S315. At step S315, the tempo of the master beat is set. In this case, the tempo of the master beat has been detected and set corresponding to the walking tempo. Since the song connecting method has not always been decided, it is decided automatically (at step S316). For example, in the jogging mode, all the songs may be simply connected with cross-fade. Instead, when the meta data of the next song denotes that it is faded in, the current song may be simply overlaid. Since the processes after step S317 are the same as those from step S6 to step S15 of
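The sensor-priority decisions described above (steps S313 through S316) can be sketched as follows. The song list, its BPM values, and the fade-in flag are hypothetical illustrative data; the apparatus's actual meta data format is not shown here.

```python
# Sketch of the sensor-priority flow: pick the song whose tempo is
# closest to the detected walking tempo (S313), set the master beat to
# that tempo (S315), and decide the connecting method from the next
# song's metadata (S316). All data below is illustrative.

SONGS = [
    {"title": "A", "bpm": 90,  "fade_in": False},
    {"title": "B", "bpm": 122, "fade_in": True},
    {"title": "C", "bpm": 140, "fade_in": False},
]

def select_song(walking_bpm, songs=SONGS):
    # S313: decide the song and tempo from the sensor input.
    return min(songs, key=lambda s: abs(s["bpm"] - walking_bpm))

def connection_method(next_song):
    # S316: when the next song's metadata denotes a fade-in, simply
    # overlay the current song; otherwise connect with cross-fade.
    return "overlay" if next_song["fade_in"] else "cross-fade"

song = select_song(walking_bpm=120)        # closest tempo: song "B"
master_tempo = 120                         # S315: master beat follows the walking tempo
method = connection_method(song)           # "overlay", since B is marked fade-in
```

A real implementation would smooth the sensor's tempo estimate over several strides before selecting, so a momentary stumble does not trigger a song change.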
Thus, with the music editing apparatus 110, songs are selected corresponding to the jogging tempo and seamlessly connected. Since the songs do not disturb the tension and tempo of jogging, the user can comfortably enjoy jogging. With a particular type of sensor, a remix corresponding to another rhythmical motion (for example, dancing) or to the heart rate of the user, instead of jogging, can be created.
Songs are selected corresponding to the input from the sensor, the tempos of the selected songs are matched with the tempo of jogging, and the songs are remixed at the matched tempo. Thus the user can enjoy his or her favorite songs corresponding to his or her own tempo.
In addition, timer remix reproduction can be performed. It is assumed that the user wants to listen to songs of his or her favorite album or those of a play list in the 30 minutes of his or her commute. Conventionally, if the duration of the album is 60 minutes, he or she has to stop the reproduction after 30 minutes. In contrast, with the system according to the present invention, a remix can be created by selecting tempos and parts of songs so that the reproduction duration becomes 30 minutes. The timer remix reproduction can also be applied to jogging. When the user wants to jog for only 30 minutes, a remix of songs for 30 minutes corresponding to the tempo of jogging can be created.
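One simple way to realize such a timer remix is to pick song parts greedily until the target duration is filled. The part names and durations below are illustrative assumptions; the apparatus could equally adjust tempos to stretch the total toward the target.

```python
# Minimal sketch of timer remix reproduction: select song parts in
# order until adding another part would exceed the target duration
# (e.g. a 30-minute commute). Durations are in seconds.

def build_timed_remix(parts, target_sec):
    """Greedily select parts that fit within target_sec."""
    remix, total = [], 0
    for part in parts:
        if total + part["sec"] <= target_sec:
            remix.append(part["name"])
            total += part["sec"]
    return remix, total

parts = [{"name": "A-chorus", "sec": 600},
         {"name": "B-chorus", "sec": 700},
         {"name": "C-chorus", "sec": 650},
         {"name": "D-chorus", "sec": 500}]
remix, total = build_timed_remix(parts, target_sec=1800)
# remix == ["A-chorus", "B-chorus", "D-chorus"], total == 1800 (exactly 30 min)
```

Since cross-fades overlap adjacent parts, a fuller version would subtract the overlap time of each transition from the running total.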
A remix of only catchy parts of songs for 30 minutes of commuting can be created so that the remix is complete when the user arrives at the destination. In addition, as the current time approaches the arrival time, an atmosphere that allows the user to feel that he or she is approaching the destination can be created in such a manner that the tempo of songs is gradually decreased or particular SE is overlaid.
As described above, with the music editing apparatuses 80 and 110, a new music-based communication can be achieved in which a connecting method for songs is published, for example, on the Internet, shared by other persons, and remixes are created and evaluated by a plurality of persons. In addition, with a sensor, the user can listen to music that well fits his or her situation and state.
These features will make a new wave against the conventional world, in which the user merely enjoys works that professional creators and musicians have created, and will allow the user to experience a new world in which he or she reuses and recomposes their works as materials for expressing himself or herself and enjoys the result.
When the number of users who find amusement in such a new type of music increases, the entire music industry will become active. In addition, from the viewpoint of the music industry side, there is a possibility that there will be music works that the user side can recompose and new creators who create such works. Thus there is a possibility that the industry and users can establish new music cultures.
Last, characteristics that can be used in the music editing apparatus of the present invention will be described. The music editing apparatus is characterized in that it has means for tracking the beats of a rhythm of music in real time. In addition, the music editing apparatus is characterized in that it has means for correctly and accurately tracking the beats of a rhythm of music in real time with time line meta data (time information, such as time stamp information, that represents beat positions and the like) that have been provided. In addition, the music editing apparatus is characterized in that it has means for generating a beat synchronous signal in synchronization with a song to be reproduced from time line meta data (time information, such as time stamp information, that represents beat positions and the like).
In addition, the music editing apparatus is characterized in that it has means for tracking the beats of a rhythm of music from time line meta data in real time even when the tempo and rhythm of a song change. In addition, the music editing apparatus is characterized in that it has a plurality of reproduction tracks for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms.
In addition, the music editing apparatus is characterized in that it has a plurality of reproduction tracks for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms and means for synchronously reproducing songs that overlap in a transition period in which one song is switched to another song. In addition, the music editing apparatus is characterized in that it has a plurality of tracks for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms and means for selecting songs to be synchronously reproduced such that the songs overlap in a transition state in which one song is switched to another song.
In addition, the music editing apparatus is characterized in that it has means for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms in synchronization with a reference beat without changes of their tempos and rhythms. In addition, the music editing apparatus is characterized in that it has PLL circuits corresponding to tracks as means for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms in synchronization with a reference beat without changes of their tempos and rhythms.
In addition, the music editing apparatus is characterized in that a reference beat is input to each track as means for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms in synchronization with the reference beat without changes of their tempos and rhythms. In addition, the music editing apparatus is characterized in that it has means for generating a beat synchronous signal for each track from time line meta data as means for successively or simultaneously reproducing a plurality of songs having different tempos and rhythms in synchronization with a reference beat without changes of their tempos and rhythms.
In addition, the music editing apparatus is characterized in that it has means for deciding a tempo of a reference beat as a master for synchronous reproduction. In addition, the music editing apparatus is characterized in that it has means for tracking beats of a rhythm of music in real time and generating a current beat count value. In addition, the music editing apparatus is characterized in that it has means for tracking beats of a rhythm of music in real time and generating a current bar count value.
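The last two characteristics, generating current beat and bar count values from time line meta data, can be sketched as a lookup over the beat time stamps. The stamp list, field layout, and the 4-beats-per-bar assumption below are illustrative, not taken from the apparatus.

```python
# Sketch of deriving current beat and bar counts from time line meta
# data given as a sorted list of beat time stamps (seconds), assuming
# 4 beats per bar. Counts are 1-based; 0 means playback has not
# reached the first beat yet.
import bisect

def beat_and_bar(beat_stamps, now, beats_per_bar=4):
    """Return (beat count, bar count) at playback time `now`."""
    beat = bisect.bisect_right(beat_stamps, now)   # beats elapsed so far
    bar = (beat - 1) // beats_per_bar + 1 if beat else 0
    return beat, bar

stamps = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]  # 120 BPM
beat, bar = beat_and_bar(stamps, now=2.2)
# beat == 5, bar == 2: five beats have passed, so we are in the second bar
```

Because the counts come from the time stamps rather than from a fixed-rate counter, they stay correct even when the song's tempo drifts, which is exactly why the time line meta data is needed for real-time beat tracking.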
Number | Date | Country | Kind |
---|---|---|---|
2005-356830 | Dec 2005 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2006/324889 | 12/7/2006 | WO | 00 | 6/2/2008 |