1. Field of the Disclosure
The present disclosure relates to a process of reproducing sound data and, more particularly, to a sound processing device for a fast-forwarding operation of sound data being reproduced and a user interface thereof.
2. Description of the Related Art
Recently, mobile telephone terminals, PCs or portable music players having a music reproduction function have become widespread, and music content to be reproduced is also downloaded from a distribution site through a network so as to be easily available.
In addition, small mobile recorders (so-called IC recorders) have also become widespread, and sound of meetings or the like can easily be recorded, without being limited to music content.
Such sound data content is compressed and encoded into a predetermined format so as to be held as digital data, and is decoded and converted into an analog signal so as to be acoustically output from a speaker or the like during reproduction.
If a fast-forwarding operation is performed while sound data is reproduced (general reproduction), a plurality of sound data parts distributed beyond the current reproduction position is read and the plurality of sound data parts is sequentially reproduced instead of the general reproduction. If the fast-forwarding operation is finished, the general reproduction is resumed from the sound data part which is currently being reproduced.
In Japanese Unexamined Patent Application Publication Nos. 2008-135891 and 2008-135892, technology is disclosed for simultaneously reproducing a plurality of pieces of music data while performing a predetermined process on the plurality of sound signals such that the individual music signals remain separately audible to the user.
In the related art, arbitrary sound data may be fast-forwarded using the fast-forwarding operation during reproduction (during listening) so that subsequent content can be audibly checked.
In this case, when it is desired to resume the original general reproduction after the fast-forwarding reproduction has been performed by the fast-forwarding operation, it is necessary to stop the general reproduction and then restart reproduction from the beginning, rewind from the current position, or the like. Consequently, the original general reproduction may not be immediately resumed.
In addition, in fast-forwarding reproduction by the digital sound reproduction technology of the related art, a reproduction speed at which the content can still be heard to some extent is preferable. However, at such a reproduction speed, if a desired point is distant from the current reproduction position, a correspondingly long time is needed to reach that point. Conversely, if the reproduction speed is increased, the content may not be recognized.
It is desirable to provide a function of a fast-forwarding operation of new sound data using a technology of separating a plurality of pieces of sound data so as to be simultaneously audible to the sense of hearing.
In addition, the present disclosure provides a sound data selection program executed on a computer, or a computer-readable storage medium storing such a program, in order to realize a step of providing, as display information, an option among a plurality of pieces of selectively listenable sound data, as well as such a method.
According to the present disclosure, it is possible to reproduce sound of the fast-forwarding reproduction of the same sound data in parallel, partway through the reproduction of the sound data and without stopping that reproduction, and to listen to both pieces of sound data separately. Accordingly, it is possible to return immediately from the fast-forwarding reproduction to the original general reproduction.
According to the present disclosure, an information processing apparatus stores audio data, receives an input instruction of a user, reproduces a first channel of the audio data, which corresponds to normal playback of the audio data, and a second channel of the audio data, which corresponds to a fast-forwarding playback of the audio data based on an instruction received at the input unit, processes the first and second channels of the audio data such that the first and second channels of the audio data are separately audible by the user, and outputs the processed first and second channels of the audio data.
Hereinafter, the embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
This sound processing system provides a user interface for performing various operations for reproducing a plurality of pieces of sound data stored in a storage device or a recording medium. Hereinafter, as one of the operations of this user interface, a fast-forwarding operation performed during sound data reproduction will be described. Two such fast-forwarding operations will mainly be described, as first and second embodiments, respectively.
In the fast-forwarding operation of the first embodiment, general reproduction of one piece of sound data among a plurality of pieces of sound data begins according to a reproduction start operation of the user and then, according to a fast-forwarding operation of the user, a sound data part at a position beyond the current reproduction position of the same sound data is read and fast-forwarding reproduced in parallel with the general reproduction. At this time, the generally reproduced sound data and the fast-forwarding reproduced sound data are processed by the below-described sound processing unit so as to be separately audible to the user, and output sound data in which both pieces of sound data are mixed is output.
The fast-forwarding operation of the second embodiment is the same as that of the first embodiment in that the general reproduction of one piece of sound data among the plurality of pieces of sound data begins according to the reproduction start operation of the user and the fast-forwarding operation of the user is then received. It differs from the first embodiment in that the sound data parts beyond the current reproduction position of the same sound data are divided into a plurality of sections, and the sections are sequentially read and reproduced in a multiple manner with a predetermined time difference. At this time, the sound data parts reproduced in the multiple manner are processed by the sound processing unit so as to be separately audible to the user, and output sound data in which the plurality of sound data parts is mixed is output.
In a sound separation process, a plurality of pieces of input sound data is simultaneously reproduced, and a specific filter process is performed such that the plurality of reproduced sound signals can be separated and heard. The sound signals are then mixed into output sound data having a desired number of channels and acoustically output from an output device such as stereo speakers or earphones.
In the present specification, music data is used as an example of sound data. However, the sound data of the present disclosure is not limited to music data and may be applied to data representing any sound such as a spoken word recording, comic storytelling, meetings or the like, environmental sounds, speech sounds or ringtones (melody) of a telephone, a television broadcast or the like, or sound data included in image data recorded on a DVD.
The sound processing system 10 shown in
The storage device 12 may include a small storage device mounted in an apparatus, such as a hard disk, and a small-sized storage medium which is detachably mounted in an apparatus, such as a flash memory. The storage device 12 may include a storage device such as a hard disk in a server connected to the sound processing device 16 through a network.
The sound processing device 16 includes a plurality of reproduction devices 14, a user input unit 18, a display unit 19, a control unit 20, a storage unit 22, a sound processing unit 24, a down-mixer 26, and an output unit 27.
The reproduction devices 14 reproduce music data (in the present example, a song) selected by the user and output it as a sound signal; each device appropriately decodes one piece of the music data stored in the storage device 12 so as to generate the sound signal. Although four pieces of music data are simultaneously reproduced and four reproduction devices 14 are shown in
If the reproduction process is performed in parallel by a multi-processor or the like, then, in appearance, one reproduction device 14 including a plurality of processing units, each of which reproduces one piece of music data and generates the respective sound signal, may be used.
The user input unit 18 is a portion which receives an input operation of the user and, in the present embodiment, has an input area overlapping the display screen of the display unit 19 and includes a touch panel (touch screen) for detecting the position of a user touch. The user input unit 18 used in the present disclosure is not limited to such a configuration and may be an input unit with hardware keys or buttons. Instead thereof or in addition thereto, for example, at least one input device such as a mouse, a keyboard, a trackball, a joystick or a touch pen may be used.
The display unit 19 displays characters or images on the display screen and includes a display device, such as an LCD or an organic EL display, and a display controller.
The control unit 20 performs switching of the display of the display unit 19, switching of the music data reproduced by the reproduction device 14 and various other operations, and control of the operation of the reproduction device 14 and the sound processing unit 24, according to instructions input from the user input unit 18.
The control unit 20 includes a CPU, various processors, and program memory. The control unit 20 executes the fast-forwarding process which is characteristic in the present embodiment.
The storage unit 22 includes a storage medium, such as a memory or a hard disk, for storing music data, information corresponding to each piece of music data, image data, various types of control data, and the like. The storage unit 22 also stores tables necessary for control by the control unit 20, that is, information such as predetermined parameters.
The sound processing unit 24 performs a predetermined process on a plurality of pieces of input sound data such that the pieces of input sound data are separately audible to the user. In the present embodiment, the plurality of pieces of input sound data are different parts of the same input sound data. In more detail, the sound processing unit 24 performs a predetermined filter process on each of the plurality of pieces of input sound data so as to generate a plurality of sound signals (output sound data) that can be separately recognized by the sense of hearing. The details of the operation of the sound processing unit 24 will be described later.
The down-mixer 26 mixes the plurality of sound signals subjected to the predetermined filter process so as to generate an output signal having a desired number of channels.
The output unit 27 includes a D/A converter for converting digital sound data into an analog sound signal, an amplifier for amplifying the output signal, and an output terminal.
The output device 30 includes an electro-acoustic conversion unit for acoustically outputting the mixed sound signal; in detail, it includes an (internal or external) speaker, headphones, or earphones. In the present specification, the term “speaker” is not limited to a loudspeaker and may refer to any electro-acoustic conversion unit. Although the output device 30 is shown as an external device of the sound processing device 16, it may be mounted in the sound processing device 16.
The sound processing system 10 corresponds to a personal computer, a music reproduction apparatus such as a mobile music player, or an IC recorder. This system may be integrally configured or may be configured by the local connection of a plurality of units.
In addition, the format of the music data stored in the storage device 12 is unimportant. The music data may be encoded by a general encoding format such as MP3.
The down-mixer 26 mixes the plurality of input sound signals after performing various adjustments as necessary and outputs an output signal having a predetermined number of channels, such as monaural, stereo or 5.1 channels. The number of channels may be fixed or may be switched by the user using hardware or software.
In the auxiliary information of the music data stored in the storage unit 22, any general information such as song title, artist name, an icon or a genre of song corresponding to a piece of music data may be included. Further, some of the parameters necessary for the sound processing unit 24 may be included. The information about the music data may be read and stored in the storage unit 22 when the music data is stored in the storage device 12. Alternatively, the information may be read from the storage device 12 and stored in the storage unit 22 whenever the sound processing device 16 is operated.
Now, the sound separation process that allows a single user to separately listen to a plurality of pieces of music data, or music data parts, which are reproduced simultaneously will be described.
If a plurality of sounds is mixed and heard using a single set of speakers or earphones, fundamentally, since separation information is not obtained at the inner ear level, different sounds are recognized by the brain depending on differences in the auditory stream, tone, or the like. However, the sounds that can be distinguished in this way are limited. Accordingly, it is difficult to apply this ability to arbitrary sounds.
If the methods proposed in Japanese Unexamined Patent Application Publication Nos. 2008-135891 and 2008-135892 are used, separation information directed at the inner ear or the brain is artificially added to the sound signals, so that the finally generated sound signals can be recognized and separated even when mixed.
That is, if the sound processing unit 24 is configured as follows, it is possible to separate and listen to a plurality of pieces of sound data.
In the sound separation process, a predetermined filter process is performed on each sound signal so that the pieces of music data can be separately heard when the plurality of pieces of music data is simultaneously reproduced, mixed and output. In detail, separation information at the inner ear level is provided by distributing frequency bands or time slots among the sound signals obtained by reproducing each piece of music data, and separation information at the brain level is provided by applying a periodic change, performing acoustic processing, or providing different localization to some or all of the sound signals. In this way, when the sound signals are mixed, the separation information at the inner ear level and the brain level is available and, finally, the separation and recognition of the plurality of pieces of sound data is facilitated. As a result, it is possible to survey sounds simultaneously, in a manner similar to viewing a thumbnail display on a display screen, and to check a plurality of pieces of music content readily without spending much time.
The sound processing unit 24 of the sound processing device 16 of the present embodiment processes each of the plurality of pieces of sound data (sound signals) so that they can be separately recognized by the sense of hearing when mixed.
The preprocessing unit 40 may be a general automatic gain controller or the like and controls gain such that the volumes of the plurality of sound signals received from the reproduction devices 14 become approximately uniform.
The frequency band division filter 42 allocates to each sound signal a block obtained by dividing the audible band and extracts, from each sound signal, the frequency components belonging to the allocated block. For example, the frequency band division filter 42 may extract the frequency components by configuring a band pass filter (not shown) for every block with respect to each channel of the sound signal. The division pattern that decides how the blocks are divided, or the allocation pattern that decides how the blocks are allocated to the sound signals, may be changed by enabling the control unit 20 to control each band pass filter or the like so as to set its frequency band or to select the valid band pass filters.
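As a rough illustration only (the disclosure does not specify an implementation), the following Python sketch assigns alternate frequency blocks to two reproduced signals using band-pass filters; the block edges, filter order, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_split_filter(signal, fs, band_edges, keep_blocks, order=4):
    """Keep only the frequency blocks listed in keep_blocks (indices into the
    adjacent bands defined by band_edges) and sum them."""
    out = np.zeros_like(signal)
    for i in keep_blocks:
        low, high = band_edges[i], band_edges[i + 1]
        sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
        out += sosfilt(sos, signal)
    return out

fs = 44100
song_part_a = np.random.randn(fs)          # stand-ins for two reproduced sound signals
song_part_b = np.random.randn(fs)

# Example block boundaries (Hz); a real device could use the Bark threshold bands instead.
edges = [50, 200, 500, 1000, 2000, 4000, 8000, 16000]
blocks = range(len(edges) - 1)

# Allocate even-numbered blocks to part A and odd-numbered blocks to part B,
# so that their frequency components hardly overlap when mixed.
part_a_filtered = band_split_filter(song_part_a, fs, edges, [i for i in blocks if i % 2 == 0])
part_b_filtered = band_split_filter(song_part_b, fs, edges, [i for i in blocks if i % 2 == 1])
mixed = part_a_filtered + part_b_filtered
```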
The time division filter 44 performs time division of the sound signals, time-modulating the amplitude of each sound signal with a period of several tens of milliseconds to several hundreds of milliseconds while changing the phase between the signals. The time division filter 44 is realized by, for example, controlling a gain controller on the time axis.
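A minimal sketch of such time-division amplitude modulation, assuming a raised-sine gain envelope with a period of about 100 ms and opposite phases for the two signals (the envelope shape, period, and names are illustrative, not taken from the disclosure):

```python
import numpy as np

def time_division_gain(signal, fs, period_s=0.1, phase=0.0, depth=0.8):
    """Modulate the amplitude with a slow raised-sine envelope so that the
    signal is emphasized in time slots offset by `phase` (radians)."""
    n = np.arange(len(signal))
    envelope = 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * n / (period_s * fs) + phase))
    return signal * envelope

fs = 44100
a = np.random.randn(fs)
b = np.random.randn(fs)

# Opposite phases: when part A is emphasized, part B is attenuated, and vice versa.
a_mod = time_division_gain(a, fs, phase=0.0)
b_mod = time_division_gain(b, fs, phase=np.pi)
mixed = a_mod + b_mod
```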
The modulation filter 46 performs the method of periodically providing a specific change to the sound signal and is realized by, for example, controlling a gain controller, an equalizer, an audio filter, or the like on the time axis.
The processing filter 48 performs the method of constantly applying a special effect (hereinafter referred to as a processing treatment) to the sound signal and is realized by an effector or the like.
The localization setting filter 50 performs the method of changing the localization, that is, the position of a virtual sound source, and is realized by a panpot, a three-dimensional localization process such as virtual surround, or the like.
In the present embodiment, as described above, the plurality of mixed sound signals is processed so as to be separately recognized by the user by the sense of hearing. In this sound separation process, it is possible to separate and distinguish the plurality of pieces of music data (or parts thereof) that is simultaneously output according to the separation listening method, by changing the parameters of the filters provided in the sound processing unit 24. The change patterns of the provided parameters are stored in the storage unit 22 in advance. In addition, such change patterns may be held as internal parameters or a plurality of tables in the sound processing unit 24 in order to perform an optimal process.
In more detail, as separation listening methods usable in the sound separation process, a plurality of methods has been proposed in the above-described related art, as follows.
(1) Frequency Band Division Method
First, as methods of providing the separation information at the inner ear level, frequency band division of the sound signals and time division of the sound signals will be described.
In the example shown in
The threshold band refers to a frequency band within which, even when a sound occupying a certain frequency band is widened to a larger bandwidth, the masking amount with respect to other sounds does not increase. Masking is a phenomenon in which the minimum audible value of one sound is increased by the presence of another sound, that is, a phenomenon in which the masked sound becomes difficult to hear. The masking amount is the amount of increase of the minimum audible value. Sounds in different threshold bands hardly mask each other. By dividing the frequency band using the 24 Bark threshold bands established by experiment, it is possible to suppress the influence in which, for example, the frequency component of the song part A belonging to the block of frequencies f1 to f2 masks the frequency component of the song part B belonging to the block of frequencies f2 to f3. The same is true for the other blocks and, as a result, the song part A and the song part B become sound signals which hardly interfere with each other.
In addition, the division of the entire frequency region into the plurality of blocks does not have to be performed using the threshold bands. In either case, it is possible to provide the separation information using the frequency resolution of the inner ear by reducing the overlap of the frequency bands.
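For concreteness, the commonly cited Zwicker edge frequencies of the 24 critical (Bark) bands could serve as such block boundaries; the disclosure itself does not list numerical values, so the figures in the sketch below are an external reference, not part of the disclosure.

```python
# Approximate edge frequencies (Hz) of the 24 Bark critical bands (commonly cited
# Zwicker values; the disclosure does not specify concrete figures).
BARK_EDGES_HZ = [
    20, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270, 1480,
    1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300, 6400, 7700,
    9500, 12000, 15500,
]

def bark_band_index(freq_hz):
    """Return the index (0-23) of the Bark band containing freq_hz, or None."""
    for i in range(len(BARK_EDGES_HZ) - 1):
        if BARK_EDGES_HZ[i] <= freq_hz < BARK_EDGES_HZ[i + 1]:
            return i
    return None

# Frequencies falling in different Bark bands mask each other only weakly, so
# allocating alternate bands to the two song parts reduces mutual interference.
print(bark_band_index(1000), bark_band_index(1200))  # -> 8 9 (different bands)
```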
Although, in the example shown in
In addition, although, in the example shown in
(2) Time Division Method
In the example of
On the other hand, modulation by a sine wave, in which the amplitude peak has no time width, may be performed. In this case, only the phase is shifted between the signals so that the timings at which the amplitudes reach their peaks differ. In either case, it is possible to provide the separation information using the time resolution of the inner ear.
(3) Method of Providing Separation Information at Brain Level
Next, methods of providing separation information at the brain level will be described. The separation information provided at the brain level gives clues for recognizing the auditory stream of each sound when the brain analyzes the sounds. In the present embodiment, a method of periodically providing a specific change to the sound signals, a method of constantly performing a processing treatment on the sound signals, and a method of changing the localization are introduced.
(3-1) In the method of periodically providing a specific change to the sound signals, the amplitudes or the frequency characteristics of all or a part of the mixed sound signals are modulated. The modulation may be performed in a pulse shape for a short period of time or may vary slowly over a long period of time. If a common modulation is applied to a plurality of sound signals, the timings of the peaks are made different between the sound signals.
Alternatively, noise such as a click sound may be added periodically, a processing treatment realized by a general sound filter may be performed periodically, or the localization may be swung to the left or the right. By combining such modulations, applying different modulations to different sound signals, or shifting their timings, it is possible to provide clues for recognizing the auditory streams of the sound signals.
(3-2) In the method of constantly performing the processing treatment on the sound signals, one or a combination of various acoustic processes that can be realized using a general effector, such as echo, reverb or pitch shift, is applied to all or a part of the mixed sound signals. The frequency characteristics may also be constantly made different from those of the original sound signals. For example, even between different song parts of the same song, a part subjected to echo processing is easily recognized as a distinct song part. If the processing treatment is performed on a plurality of sound signals, the processing content or processing strength is made different for each sound signal.
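As an illustration of such a constantly applied processing treatment, the sketch below adds a simple feedback echo to one of two otherwise similar signals; the effect parameters and names are illustrative assumptions rather than the specific effector of the disclosure.

```python
import numpy as np

def simple_echo(signal, fs, delay_s=0.25, feedback=0.4, wet=0.5):
    """Feedback delay line: a basic echo effect applied constantly to the signal
    (naive per-sample loop kept for clarity)."""
    delay = int(delay_s * fs)
    out = np.copy(signal)
    for n in range(delay, len(signal)):
        out[n] += feedback * out[n - delay]
    return (1.0 - wet) * signal + wet * out

fs = 44100
part_a = np.random.randn(2 * fs)   # stand-ins for two song parts of the same song
part_b = np.random.randn(2 * fs)

# Only part B is echoed, so the brain tends to perceive it as a separate stream
# even though both parts come from the same piece of music.
mixed = part_a + simple_echo(part_b, fs)
```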
(3-3) In the method of changing the localization, different localizations are provided to all of the mixed sound signals. Since the brain analyzes acoustic spatial information in cooperation with the inner ear, this makes it easy to separate the sound signals. Because the sound separation process using the change of localization separates the positions of the virtual sound sources, it may also be called a sound source separation process.
As shown in
As shown in
Although the localization angles of the general reproduction and the fast-forwarding reproduction of the two song parts shown in
Now, it is assumed that the generally reproduced song part and the fast-forwarding reproduced song part are reproduced in parallel. If the sound signal of the generally reproduced song part obtained from one of the reproduction devices 14 includes a digital L channel signal and a digital R channel signal, a monaural signal (L+R)/2 obtained by synthesizing both signals is input to a filter unit 50a. The filter unit 50a is formed of Finite Impulse Response (FIR) filters for the two L and R channels, as a portion of the localization setting filter 50. If the sound signal of the generally reproduced song part is originally a monaural signal, it may be input to the filter unit 50a without change.
Similarly, if the sound signal of the fast-forwarding reproduced song part obtained from another of the reproduction devices 14 includes a digital L channel signal and a digital R channel signal, a monaural signal (L+R)/2 obtained by synthesizing both signals is input to another filter unit 50b. The filter unit 50b is formed of FIR filters for the two L and R channels, as a portion of the localization setting filter 50.
The filter units 50a and 50b receive control parameters from the control unit 20 and generate L and R channel output sound data for realizing the predetermined localization. The control parameters are stored in advance in the storage unit 22 as a coefficient table 23. In this example, the coefficient table 23 stores parameters of a Head Related Transfer Function (HRTF). The HRTF is a function indicating the transfer characteristics of sound transferred from a sound source to the human ears. Its value changes with the shape of the head and ears and with the position of the sound source. Conversely, by using this function, it is possible to virtually change the position of the sound source.
In the above-described example of
The L channel output signals of the filter units 50a and 50b are superposed in a down-mixer 26, are converted into an analog signal by a D/A converter 28L of an output unit 27, are amplified by an amplifier 29L, and are output as sound from an L channel speaker 30L of an output device 30. Similarly, the R channel output signals of the filter units 50a and 50b are superposed in the down-mixer 26, are converted into an analog signal by a D/A converter 28R of the output unit 27, are amplified by an amplifier 29R, and are output as sound from an R channel speaker 30R of the output device 30.
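The signal path just described, from the (L+R)/2 monaural inputs through the filter units 50a and 50b to the down-mixed L and R outputs, might be sketched as follows; the short random impulse responses merely stand in for the HRTF coefficients of the coefficient table 23, which the disclosure does not specify numerically, and the function names are illustrative.

```python
import numpy as np

def localize_mono(mono, hrir_l, hrir_r):
    """Filter-unit sketch: convolve a monaural signal with a left/right HRTF
    impulse-response pair to place it at a virtual position."""
    return (np.convolve(mono, hrir_l)[: len(mono)],
            np.convolve(mono, hrir_r)[: len(mono)])

fs = 44100
general_mono = np.random.randn(fs)   # (L+R)/2 input to filter unit 50a
fastfwd_mono = np.random.randn(fs)   # (L+R)/2 input to filter unit 50b

# Placeholder HRIR pairs; a real coefficient table would hold measured responses
# for the desired localization angles of the two virtual sound sources.
hrir_a = (np.random.randn(128) * 0.05, np.random.randn(128) * 0.02)
hrir_b = (np.random.randn(128) * 0.02, np.random.randn(128) * 0.05)

a_l, a_r = localize_mono(general_mono, *hrir_a)
b_l, b_r = localize_mono(fastfwd_mono, *hrir_b)

# Down-mixer: superpose the per-source outputs into one stereo pair, which would
# then go to the D/A converters, amplifiers, and L/R speakers.
out_l = a_l + b_l
out_r = a_r + b_r
stereo = np.stack([out_l, out_r], axis=1)
```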
Hereinafter, a detailed example of a user interface of a fast-forwarding operation using the separation listening method as described above in the first embodiment will be described.
As shown in
In contrast, as shown in
As shown in
In
The input unit 80 shown in
The input unit 81 shown in
The input unit 82 shown in
The present disclosure is not limited to the shown examples and, among the software keys, only the necessary keys may be displayed.
The input unit 83 shown in
In the state of
The screen of
In the user interface shown in
The executed process may be switched depending on whether the operation is the long press touch operation or a simple touch operation (that is, a tap operation) shown in
In
In the state of
In
In addition, the transition to the current position of the fast-forwarding reproduction when returning to the general reproduction may correspond to either ending the long press operation with a flick operation or simply ending the long press operation. That is, if the flick operation is performed when returning to the general reproduction, the transition to the current position of the fast-forwarding reproduction may be performed, and vice versa. This correspondence is set initially and may be arbitrarily selected by the user.
In the above first embodiment, the targets of the fast-forwarding reproduction were sound data parts (Q1, Q2, Q3, . . . ) of duration T2 at a plurality of positions distributed every period T1 beyond the current reproduction position of the input sound data. Instead of the period T1, other information may be used. For example, depending on the sound data, a plurality of parts may be partitioned in advance by metadata attached thereto. For example, in music content such as pop songs, metadata shown in
This process is executed from the start of the general reproduction of the sound data (S11).
During the general reproduction of the sound data, whether or not the fast-forwarding operation is performed on the user input unit 18 (S12) is monitored. If the fast-forwarding operation is detected, as described above, the distributed sound data parts are extracted from the non-reproduced part of the sound data which is being reproduced and the fast-forwarding reproduction is additionally started in parallel with the general reproduction (S13). The sound separation process is simultaneously started with respect to the general reproduction and the fast-forwarding reproduction (S14). Thereafter, the process progresses to step S15.
If the general reproduction of the sound data is finished (S15), the present process is finished (S22). When the fast-forwarding operation is finished (S16), the sound separation process is stopped (S17) and the fast-forwarding reproduction is finished (S18). The case where the fast-forwarding reproduction finishes earlier than the general reproduction is treated the same as the case where the fast-forwarding operation is finished.
Depending on whether or not the operation for instructing the end of the fast-forwarding operation in step S16 is a first end operation, the process branches into step S20 or step S21. If it is not the first end operation, it is determined to be a second end operation. One of the first and second end operations is the operation for simply finishing the above-described fast-forwarding operation and the other is a specific operation such as the flick operation. When the above-described fast forward button is used together with another button, the presence or absence of that parallel usage corresponds to the first and second end operations, respectively. Even when the above-described plurality of fast forward buttons is used, which of the first and second end operations applies is determined depending on which fast forward button is used.
In step S21, the original general reproduction continues. In step S20, the general reproduction is resumed from the current position (or a predetermined position) of the fast-forwarding reproduction.
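For readers who prefer pseudocode, the overall flow of steps S11 through S18 and the end-operation branch might look roughly like the following sketch; the `player` and `ui` objects and their method names are hypothetical stand-ins for the reproduction devices, sound processing unit, and user input unit, not the actual control program of the device.

```python
def distributed_parts(current_pos, total_len, t1, t2):
    """Start positions and durations of parts Q1, Q2, ... of duration T2 taken
    every period T1 beyond the current reproduction position (first embodiment)."""
    pos = current_pos + t1
    while pos < total_len:
        yield pos, min(t2, total_len - pos)
        pos += t1

def run_fast_forward_session(player, ui, t1=10.0, t2=1.0):
    """Schematic outline of the flow; `player` and `ui` are hypothetical objects."""
    player.start_general_reproduction()                        # S11
    while not player.general_reproduction_finished():          # S15
        if ui.fast_forward_operation_detected():               # S12
            parts = list(distributed_parts(player.position(), player.length(), t1, t2))
            player.start_parallel_fast_forward(parts)          # S13
            player.enable_sound_separation()                   # S14
            end_op = ui.wait_for_fast_forward_end()            # S16
            player.disable_sound_separation()                  # S17
            player.stop_fast_forward()                         # S18
            if end_op == "specific_end":                       # e.g. a flick operation
                player.seek(player.last_fast_forward_position())  # resume from FF position
            # otherwise the original general reproduction simply continues
```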
This process is executed from the start of the general reproduction of the sound data (S31).
During the general reproduction, whether or not a touch operation of the user on the bar 83 is performed (S32) is monitored. If the touch operation is detected, the process waits for a predetermined time (S33). This predetermined time is a threshold for determining whether or not the touch is a long press and may be set to, for example, about 0.5 seconds. This predetermined time may be adjusted by the user. If the touch does not continue after the predetermined time, it is determined that the user's operation is a so-called tap operation, and the current position of the sound data which is being generally reproduced is changed to the time point of the touch position (S35). The reproduction position of the general reproduction is thereby changed. Thereafter, the process returns to step S32.
If the touch continues after the predetermined time, it is determined that the user's operation is a long press touch (that is, a fast-forwarding operation), and the process transitions to the fast-forwarding operation. That is, in parallel with the general reproduction, the fast-forwarding reproduction is additionally started from the position of the sound data corresponding to the touch position (S36). In addition, the sound separation process is started with respect to the general reproduction and the fast-forwarding reproduction (S37). Thereafter, the process progresses to step S38.
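The tap/long-press discrimination of steps S33 to S37 could be sketched as follows, again with hypothetical `ui` and `player` objects (not part of the disclosure) and the roughly 0.5-second threshold mentioned above.

```python
import time

LONG_PRESS_THRESHOLD_S = 0.5   # the approximately 0.5 s threshold above (user adjustable)

def handle_bar_touch(ui, player, touched_time_point):
    """Tap vs. long press on the bar 83; `ui` and `player` are hypothetical objects."""
    time.sleep(LONG_PRESS_THRESHOLD_S)                     # S33: wait the predetermined time
    if not ui.touch_still_held():
        # Tap: move the position of the general reproduction itself (S35).
        player.seek(touched_time_point)
    else:
        # Long press: start fast-forwarding reproduction in parallel (S36)
        # and enable the sound separation process (S37).
        player.start_parallel_fast_forward_from(touched_time_point)
        player.enable_sound_separation()
```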
If the general reproduction of the sound data is finished (S38), the present process is finished (S45). When the long press touch is finished (S39), the sound separation process is stopped (S40) and the fast-forwarding reproduction is also finished (S41). The case where the fast-forwarding reproduction finishes earlier than the general reproduction is treated the same as the case where the long press touch is finished.
Depending on whether or not the end of the long press touch of step S39 is a first end operation, the process branches into step S43 or S44. If it is not the first end operation, it is determined that it is a second end operation. One of the first and second end operations is the operation for simply finishing the above-described fast-forwarding operation and the other is a specific operation such as the flick operation.
In step S44, the original general reproduction independently continues. In step S43, the general reproduction is resumed from the current position (or a predetermined position) of the fast-forwarding reproduction.
Although, in the above description, the touch position on the bar is assumed to be a position after (a future position relative to) the current reproduction position during the general reproduction, the same process as described above may be performed even when a previous (past) position is designated.
Next, the fast-forwarding operation of a second embodiment of the present disclosure will be described.
The configuration of the sound processing system 10 shown in
During the general reproduction of the input sound data, at the time point t1 when the fast-forwarding operation is instructed by the user, the fast-forwarding reproduction of the same input sound data is started. That is, the sound data beyond the current reproduction position is divided into a plurality of consecutively arranged sections of constant length (period T3), and the sections are sequentially read and reproduced and output in a multiple and cyclic manner with a predetermined time difference T4. At this time, the sound data parts reproduced in the multiple manner are processed by the sound processing unit so as to be separately audible, and the mixed output sound data is output. At the time point when the fast-forwarding operation is finished, the multiple reproduction finishes and the general reproduction of the input sound data is resumed from the last sound data part (in the example of
According to the fast-forwarding operation of the second embodiment, compared with the fast-forwarding operation of the related art shown in
In the second embodiment, during the fast-forwarding reproduction, whether or not the general reproduction is continuously executed is not regarded as important. In the example of
Next, the separation listening suitable for the multiple reproduction of three or more sound data parts of the second embodiment will be described.
When the fast-forwarding operation is finished, the general reproduction is resumed. In the configuration shown in
This process is executed from the start of the general reproduction of the sound data (S51).
During the general reproduction, whether or not the fast-forwarding operation is performed by the user (S52) is monitored. If the fast-forwarding operation is detected, the fast-forwarding reproduction operation of the present embodiment is started (S53). That is, the non-reproduced part of the sound data which is being reproduced is divided into a plurality of sections and these sections are reproduced in the multiple and cyclic manner with a predetermined time difference. In addition, the sound separation process is started (S54). Thereafter, the process progresses to step S55.
If the general reproduction of the sound data is finished (S55), the present process finishes (S60). When the fast-forwarding operation finishes (S56), the sound separation process is paused (S57) and the multiple reproduction finishes (S58). Next, the general reproduction is resumed from the last position (or a predetermined position) of the fast-forwarding (S59). Thereafter, the process returns to step S52.
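Because the second embodiment launches consecutive sections of constant length T3 at intervals of the predetermined time difference T4, the reproduction schedule can be illustrated with a small sketch such as the following; the concrete values of T3 and T4 and the function name are illustrative assumptions.

```python
def cyclic_schedule(current_pos, total_len, t3=4.0, t4=1.0):
    """Return (launch_time, section_start, section_end) triples for the sections
    of length T3 beyond current_pos, started every T4 seconds (second embodiment)."""
    schedule = []
    start = current_pos
    launch = 0.0
    while start < total_len:
        schedule.append((launch, start, min(start + t3, total_len)))
        start += t3
        launch += t4
    return schedule

# With T3 = 4 s and T4 = 1 s, roughly T3/T4 (here four) sections are reproduced in a
# multiple manner at any time, each placed at a different virtual sound source.
for launch, begin, end in cyclic_schedule(current_pos=30.0, total_len=50.0):
    print(f"at +{launch:.0f}s play [{begin:.0f}s, {end:.0f}s)")
```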
In addition, although this process shows an example in which the general reproduction is not performed in parallel during the multiple reproduction, the general reproduction may be performed in parallel as in the first embodiment. In this case, the virtual sound source position of the general reproduction is preferably the position of the head portion H.
Although suitable embodiments of the present disclosure have been described, various modifications and changes may be made in addition to the above description. That is, it is apparent to those skilled in the art that the above embodiments are exemplary, that various modified examples of combinations of components and processes may be made, and that such modified examples fall within the scope of the present disclosure.
For example, although only the fast-forwarding reproduction is described, the present disclosure is also applicable to rewinding reproduction in the same manner. In the rewinding reproduction, the reproduction order of the above-described blocks or sections may be reversed.
Although the position of the virtual sound source is in the horizontal plane in the present embodiment, it may be set in a three-dimensional space centered on the head portion H.
A computer program for realizing the functions described in the above embodiments on a computer and a computer-readable storage medium for storing the program are included in the present disclosure. Examples of the “storage medium” for supplying the program include a magnetic storage medium (a flexible disk, a hard disk, a magnetic tape, or the like), an optical disc (a magneto-optical disc, a CD, a DVD, or the like), or a semiconductor storage.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
The present application claims the benefit of the earlier filing date of U.S. Provisional Patent Application Ser. No. 61/387,811 filed on Sep. 29, 2010, the entire contents of which is incorporated herein by reference.