The present invention relates generally to audio systems and methods and, more particularly, to systems and methods for digitally processing audio recordings to provide a measurable enhanced effect for end users.
Different types of sounds may be used to entrain the human brain to reach a certain mental state. For example, when two tones having slightly different frequencies are played in the left and right ears simultaneously, the human brain perceives the creation of a third tone having a frequency equivalent to the difference between the two tones being played. This auditory illusion is called a binaural beat and, depending on its frequency, may be used to reduce stress and anxiety, increase focus and concentration, improve sleep quality, or provide other benefits. In some cases, an individual may choose to listen to binaural beats with music in order to improve the listening experience. However, the binaural beats are simply played as background to the music (or vice-versa)—i.e., the music material itself is not used for purposes of brainwave entrainment. Thus, there remains a need in the art for an improved method of delivering sounds with neuropsychological impact that overcomes drawbacks associated with existing methods and/or that offers other advantages compared to existing methods.
The present invention is directed to methods and systems for infusing sounds into source audio tracks without disruption in the musical quality or perceived composition in order to provide desired psychological, neurological and/or physiological outcomes.
In one embodiment, a source audio track is separated into a plurality of audio tracks, such as multiple music stems, and each of the separated audio tracks is individually processed to generate a binaural beat track. Each binaural beat track is generated from an audio track by (a) transcribing the audio track to provide a transcription that includes an estimated fundamental frequency and/or an estimated amplitude envelope for each of a plurality of notes and (b) using the transcription to generate the binaural beat track. This embodiment provides multiple binaural beat tracks, one for each of the separated audio tracks, which provides multiple layers of stimulation. Because the binaural beat tracks are derived from the separated audio tracks, the perception of certain instruments and/or vocals in the audio tracks may be enhanced. Further, the delivery of multilayered binaural beats that are dynamically matched to the source audio track results in a polyphonic binaural beat experience that implements frequencies compatible with the tonal center of the source audio track without distorting the acoustic experience.
While existing methods of binaural beat implementation typically involve the delivery of one drone tone frequency in each of the left and right speaker channels, drawing from a limited frequency spectrum, the binaural beat generation method of the present invention draws from a series of pitches that are harmonically and melodically compatible with the source audio track. Since it is an established hypothesis that individuals respond to certain binaural “carrier” and “offset” frequencies with greater success than other frequencies on a subjective level, this method increases the probability of successful brainwave entrainment via the greater number and variety of source frequencies creating the binaural beat effect.
In another embodiment, the source audio track is processed to generate one or more spatialized tracks. Each spatialized track is generated from one or more spatialization trajectories that are derived from certain audio feature(s) of the source audio track (e.g., tempo, musical event density, harmonic mode, loudness, percussiveness, etc.) and a target end-state effect. Each spatialized track is configured so that the sound source is perceived to move in time with the music—i.e., each spatialized track provides tempo-based spatialization. Spatialization creates sustained attention by directing the focus to a moving sound source that enhances the neurological effect. Furthermore, tracking a moving sound source activates spatial awareness.
In another embodiment, the source audio track is processed to generate both binaural beat tracks and one or more spatialized tracks, as described above. Additional tracks may also be generated, such as one or more infrasonic tracks, ultrasonic tracks, enhanced bass tracks, and/or subharmonic tracks. The various tracks may be played simultaneously or mixed for delivery of a single enhanced audio track to an end user device. It should be noted that the binaural beat stimulation and tempo-based spatialization are additive to the infrasonic and ultrasonic track synthesis and enhanced bass and subharmonic processing techniques so as to provide synergies in driving neurological responses.
Various embodiments of the present invention are described in detail below, or will be apparent to one skilled in the art based on the disclosure provided herein, or may be learned from the practice of the invention. It should be understood that the above summary of the invention is not intended to identify key features or essential components of the embodiments of the present invention, nor is it intended to be used as an aid in determining the scope of the claimed subject matter as set forth below.
Various exemplary embodiments of the present invention are described in detail below with reference to the attached drawing figures, wherein:
The present invention is directed to methods and systems for infusing sounds into source audio tracks in order to provide desired psychological, neurological and/or physiological outcomes. While the invention will be described in detail below with reference to various exemplary embodiments, it should be understood that the invention is not limited to the specific configurations or methodologies of these embodiments. In addition, although the exemplary embodiments are described as embodying several different inventive features, one skilled in the art will appreciate that any one of these features could be implemented without the others in accordance with the present invention.
In the present disclosure, references to “one embodiment,” “an embodiment,” “an exemplary embodiment,” or “embodiments” mean that the feature or features being described are included in at least one embodiment of the invention. Separate references to “one embodiment,” “an embodiment,” “an exemplary embodiment,” or “embodiments” in this disclosure do not necessarily refer to the same embodiment and are also not mutually exclusive unless so stated and/or except as will be readily apparent to one skilled in the art from the description. For example, a feature, structure, function, etc. described in one embodiment may also be included in other embodiments, but is not necessarily included. Thus, the present invention can include a variety of combinations and/or integrations of the embodiments described herein.
In general terms, the methods of the present invention include various digital signal processing steps that are performed to infuse sounds into a source audio track and thereby produce an enhanced audio track that is configured to achieve a desired psychological, neurological and/or physiological outcome.
The source audio track may comprise any type of audio recording that includes vocals, instrumental music, and/or other types of sounds, including one-channel music tracks (mono), two-channel music tracks (stereo), or multiple-channel music tracks (surround sound, spatial audio, etc.). In some embodiments, the source audio track is a stereo music track that has been professionally mixed and mastered. Of course, the present invention is not limited to the enhancement of commercial audio recordings, i.e., any type of audio recording may be enhanced in accordance with the teachings described herein.
Various types of sounds may be infused into the source audio track, such as binaural beats, tempo-based spatialization, infrasonic signals, ultrasonic signals, enhanced bass, subharmonics, and the like. It should be understood that any combination of these sounds may be used in a particular implementation, including (a) binaural beats only; (b) tempo-based spatialization only; (c) binaural beats and tempo-based spatialization; (d) binaural beats and infrasonic and/or ultrasonic signals; (e) tempo-based spatialization and infrasonic and/or ultrasonic signals; (f) binaural beats, tempo-based spatialization, and infrasonic and/or ultrasonic signals; and (g) any of the foregoing combinations with enhanced bass and/or subharmonics. Of course, other sound combinations and additional sounds may also be used within the scope of the present invention.
Sounds may be infused into the source audio track in different ways. One infusion method involves processing the source audio track in such a manner as to produce one or more audio tracks that are derived from the source audio track. Examples of audio tracks that may be produced using this infusion method include binaural beat track(s), spatialized track(s), enhanced bass track(s) and/or subharmonic track(s). Another infusion method involves generating one or more synthetic audio tracks. Examples of audio tracks that may be produced using this infusion method include infrasonic track(s) and/or ultrasonic track(s). Of course, both of the above infusion methods may be used, and all of the audio tracks may optionally be mixed to provide a single enhanced audio track. Thus, the “infusion” of sound into a source audio track as used herein means processing the source audio track to produce one or more audio tracks that are derived from the source audio track and/or generating one or more synthetic audio tracks, and optionally mixing the audio tracks to provide a single enhanced audio track, all for the purpose of obtaining a desired psychological, neurological and/or physiological outcome.
An enhanced audio track with the infused sounds may be delivered to an end user device via any known delivery method, such as streaming or downloading of an audio file to an end user device. The enhanced audio track may comprise a mixed audio track that may be played through stereo headphones, earphones, earbuds, one or more speakers, and the like. Alternatively, the enhanced audio track may comprise individual audio tracks that may be played simultaneously through multiple speakers or multiple headphone drivers.
The methods of the present invention may be performed in conjunction with additional pre-processing steps that occur before the steps described above. For example, a healthcare professional or other operator may determine the settings of an input configuration that is used to produce the infused sound. These settings may be chosen so that the enhanced audio track provides a desired psychological, neurological and/or physiological outcome and, in that respect, the settings will vary between different end users. As another example, the source audio track may be separated from other sound elements prior to the steps described above—e.g., a movie track may include music, speech, and/or sound effects, wherein the music is separated from the speech and sound effects for enhancement as described herein. Other pre-processing steps will be apparent to one skilled in the art.
The methods of the present invention may also be performed in conjunction with additional post-processing steps that occur after the steps described above. For example, the enhanced audio track may be delivered to an end user via a production speaker system that is used for health or medical management. The end user's response to the enhanced audio track may then be measured for effectiveness by a healthcare professional. Other post-processing steps will be apparent to one skilled in the art.
Referring to
File server 110 may comprise any suitable computer hardware and software known in the art that is configured for the storage of source audio files that are available for enhancement by application server 120. The source audio files may be provided in any desired audio format. Examples of common audio formats include Advanced Audio Coding (AAC), Free Lossless Audio Codec (FLAC), Windows Media Video (WMV), Waveform Audio File Format (WAV), MPEG-1 Audio Layer 3 (MP3), and Pulse-Code Modulation (PCM). Of course, other audio formats known in the art may also be used.
Application server 120 may comprise any suitable computer hardware and software known in the art that is configured to receive a source audio file from file server 110, receive an input configuration from database server 130 and/or machine learning server 140, execute a sound infusion application to infuse sound into a source audio track in order to produce an enhanced audio track, and store an enhanced audio file on file server 150. The steps performed by the sound infusion application will be described in greater detail below in connection with
Database server 130 may comprise any suitable computer hardware and software known in the art that is configured to maintain a database for storing a plurality of input configurations that have been manually entered by an operator. The data elements for each input configuration may include a variety of different types of settings, such as: (a) the frequency of binaural beats (in Hz); (b) the volume of binaural beats (in Loudness Units Full Scale (LUFS) units); (c) the frequency of infrasound (in Hz); (d) the volume of infrasound (in LUFS units); (e) the frequency of ultrasound (in Hz); (f) the volume of ultrasound (in LUFS units); (g) the subharmonic volume (in LUFS units); (h) the bass intensity (in LUFS units); and (i) the bass volume (in LUFS units). Of course, other settings may also be stored in database 130 in accordance with the invention. In some embodiments, the operator is prompted to enter values for the settings via a configuration file, although any data input method known in the art may be used.
Machine learning server 140 may comprise any suitable computer hardware and software known in the art that is configured to maintain and provide a set of machine learning models that are used to generate input configurations. It should be understood that any type of machine learning approach may be used, such as a recommendation engine that uses supervised learning, reinforcement learning, and the like. In general, machine learning server 140 monitors the quality of the output from application server 120 and refines the configuration settings for optimal performance of the sound infusion application.
It should be understood that the input configuration settings for a particular end user or end user group may be manually input by an operator, may be determined through machine learning, or may be a combination of both. The recommended values for these settings may vary between different end users—i.e., the setting values recommended for one individual may be different than the setting values recommended for another individual. These setting values may be determined by experimentation based on various factors, such as the target end-state effect to be achieved, the intended duration of exposure to the enhanced audio track, whether a treatment plan requires the same or different stimuli to be delivered in different sessions, etc. Thus, it can be appreciated that the selection of the setting values enables the generation of an enhanced audio track that has been personalized or customized either manually or by a machine learning process for an end user or end user group.
File server 150 may comprise any suitable computer hardware and software known in the art that is configured for storage of the enhanced audio files generated by application server 120. The enhanced audio files may be provided in any desired audio format, such as AAC, FLAC, WMV, WAV, MP3, and PCM. Of course, other audio formats known in the art may also be used.
Web server 160 may comprise any suitable computer hardware and software known in the art that is configured to process requests to access (stream, download, etc.) the enhanced audio files stored on file server 150. As shown in
End user devices 170₁-170ₙ may each comprise a mobile phone, a personal computing tablet, a personal computer, a laptop computer, or any other type of personal computing device that is capable of communication (wireless or wired) with web server 160 via communication network 180. In this embodiment, each of end user devices 170₁-170ₙ utilizes an Internet-enabled application (e.g., a web browser or installed application) to communicate with web server 160. It should be understood that end user devices 170₁-170ₙ may be used to play the enhanced audio tracks through stereo headphones, earphones, earbuds, one or more speakers, and the like. Alternatively, end user devices 170₁-170ₙ may be used to store the enhanced audio tracks in memory (e.g., file copy, thumb drive, etc.) for later playback through another device (e.g., MP3 player, iPod®, etc.).
Communication network 180 may comprise any network or combination of networks capable of facilitating the exchange of data between web server 160 and end user devices 170₁-170ₙ. In some embodiments, communication network 180 enables communication in accordance with one or more cellular standards, such as the Long-Term Evolution (LTE) standard, the Universal Mobile Telecommunications System (UMTS) standard, and the like. In other embodiments, communication network 180 enables communication in accordance with the IEEE 802.3 protocol (e.g., Ethernet) and/or the IEEE 802.11 protocol (e.g., Wi-Fi). Of course, other types of networks may also be used within the scope of the present invention.
It should be understood that system 100 is an exemplary embodiment and that other embodiments may not include all of the servers shown in
Further, in another embodiment, the sound infusion process that is used to create enhanced audio tracks in accordance with the present invention is performed on a single computing device—e.g., a mobile phone, a personal computing tablet, a personal computer, a laptop computer, or any other type of personal computing device. In this case, the various servers of the system 100 shown in
Referring to
In step 302, a source audio track is provided as a digital audio track having one or multiple channels (e.g., mono, stereo, or surround sound). In some embodiments, a compressed or uncompressed audio file (e.g., an audio file provided in an AAC, FLAC, WMV, WAV, MP3, or PCM audio format) is read and decoded into a two-channel digital audio track. For example, application server 120 may read and decode an audio file retrieved from file server 110 to provide the two-channel digital audio track. In other embodiments, analog audio signals are passed through an analog to digital converter to provide the digital audio track.
In step 304, the source audio track is separated into a plurality of audio tracks, commonly termed stem tracks. Each stem track may comprise a vocals track or an instrumental track, which may include either one instrument or a combination of individual instruments that have been originally mixed together to produce the stem track. In the exemplary embodiment shown in
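By way of illustration, a minimal sketch of this source separation step is provided below, using the Spleeter separation engine (one possible source-separation tool); the four-stem configuration and file paths shown here are assumptions rather than requirements of the exemplary embodiment.

```python
# Illustrative sketch of step 304 using Spleeter; paths and the 4-stem
# configuration are assumptions, not requirements.
from spleeter.separator import Separator

def separate_into_stems(source_path: str, output_dir: str) -> None:
    # "spleeter:4stems" yields vocals, drums, bass, and "other" stem tracks.
    separator = Separator("spleeter:4stems")
    # Writes one audio file per stem into output_dir/<track name>/.
    separator.separate_to_file(source_path, output_dir)

if __name__ == "__main__":
    separate_into_stems("source_track.wav", "stems/")
```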
Once the source audio track is provided as multiple audio tracks, subsequent processing of each of the audio tracks—including transcription (step 306), note filtering (step 308), and binaural beat synthesis (step 310)—can be performed in parallel on a multi-processor computer or across multiple computers. Steps 306, 308 and 310 will be described below in relation to the processing of one stem track; however, it should be understood that these steps will be performed in connection with each of the stem tracks that are separated in step 304 or provided as individual stem tracks in step 302.
In step 306, each stem track is transcribed to provide information on the various musical notes contained in the stem track. As described below, the transcription may include, for example, an estimated fundamental frequency and/or an estimated amplitude envelope for each musical note in the stem track. If the source audio track comprises two or more channels (e.g., a stereo music track), the channels will be mixed to mono during the transcription process. One skilled in the art will appreciate that this mixing may be performed in different ways, depending on the transcription tool used to perform the transcription. In some cases, the channels are mixed to mono and then a transcription of the single mixed channel is generated. In other cases, each channel is transcribed and then the channel transcriptions are combined. Of course, the channels of the source audio track may alternatively be mixed to mono prior to source separation in step 304, although this approach is generally not as efficient as using the transcription tool to perform the mixing during the transcription process of step 306.
Depending on the source, either monophonic or polyphonic pitch transcription may be used to estimate the fundamental frequency of each musical note in the stem track—e.g., monophonic pitch transcription may be used for inherently monophonic sources and polyphonic pitch transcription may be used for inherently polyphonic sources. For example, in the exemplary embodiment shown in
The pitch transcription may be represented in different formats. For example, the pitch transcription may be in the form of a musical note event list, such as a standard Musical Instrument Digital Interface (MIDI) file. In this case, each musical note is represented as a MIDI note number (i.e., a pitch quantized into a 12 tone equal-tempered note number) in relation to the time position of such note. An example of a MIDI file is shown in
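When the transcription is represented as a MIDI note event list, the estimated fundamental frequency of each note may be recovered from its MIDI note number using the standard 12-tone equal-tempered relation f = 440 · 2^((n − 69)/12), where note number 69 corresponds to A4 at 440 Hz. By way of illustration, a minimal sketch (with hypothetical note events) is provided below.

```python
# Minimal sketch: converting a MIDI-style note event list into fundamental
# frequencies (Hz) using the equal-tempered mapping (MIDI note 69 = A4 = 440 Hz).
def midi_note_to_hz(note_number: int) -> float:
    return 440.0 * 2.0 ** ((note_number - 69) / 12.0)

# (note_number, onset_seconds, duration_seconds) -- illustrative events only.
note_events = [(69, 0.0, 0.5), (73, 0.5, 0.5), (76, 1.0, 1.0)]  # A4, C#5, E5
for note, onset, duration in note_events:
    print(f"note {note}: {midi_note_to_hz(note):.2f} Hz at t={onset}s for {duration}s")
```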
The transcription may also include an estimated amplitude envelope for each musical note in the stem track. As known in the art, an amplitude envelope for a musical note may be defined by four parameters: an attack time, a decay time, a sustain level, and a release time. During the transcription process, estimated values for these four parameters may be obtained for each musical note in the stem track. Alternatively, it is possible to use a fixed amplitude in which a single amplitude value averaged across time is provided in the settings of the input configuration. In this case, the transcription will only include frequency information—not amplitude information.
Each stem track may be transcribed using any transcription tool known in the art, such as the Melodyne transcription tool available from Celemony Software GmbH of Munich, Germany. Of course, it should be understood that other transcription tools may also be used in accordance with the present invention.
In step 308, the transcribed notes for each stem track are filtered in order to remove any outlier notes caused by errors in the pitch transcription process of step 306. One or more different note filtering methods may be used in accordance with the present invention.
One note filtering method involves identifying the outlier notes based on a statistical analysis of the pitch transcription for the stem track. Specifically, the pitch transcription is analyzed to calculate the mean and standard deviation of the estimated fundamental frequencies of the transcribed notes. Then, the outlier notes are identified as those having an estimated fundamental frequency that is either greater than an upper frequency limit or less than a lower frequency limit. The upper frequency limit has a value comprising the mean plus a first multiplier of the standard deviation, and the lower frequency limit has a value comprising the mean minus a second multiplier of the standard deviation. If the pitch transcription is represented as a MIDI file, the outlier notes are deleted from the musical note event list. On the other hand, if the pitch transcription is represented as a time-frequency matrix, the energy of the outlier notes is set to zero.
The first and second multipliers used to determine the upper and lower frequency limits, respectively, may be the same or different.
In some embodiments, the first and second multipliers are determined by experimentation using multiple datasets of music selected to represent those typical for processing. Typically, the first and second multipliers will have a value in the range of 1.25 (in cases where more filtering is required) to 2.5 (in cases where less filtering is required). In most cases, the first and second multipliers are pre-set and do not need to be adjusted when processing different music tracks. Of course, in other embodiments, the first and second multipliers may be part of the input configuration and, as such, would be entered by an operator or determined through machine learning.
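By way of illustration, a minimal sketch of this statistical note filtering is provided below, assuming the transcription is available as a list of (estimated fundamental frequency, note data) pairs; the multiplier values follow the 1.25 to 2.5 range discussed above.

```python
import numpy as np

# Minimal sketch of the statistical note-filtering method of step 308. Notes are
# assumed to be (frequency_hz, note_data) pairs taken from the transcription.
def filter_outlier_notes(notes, first_multiplier=2.0, second_multiplier=2.0):
    freqs = np.array([f for f, _ in notes])
    mean, std = freqs.mean(), freqs.std()
    upper = mean + first_multiplier * std   # upper frequency limit
    lower = mean - second_multiplier * std  # lower frequency limit
    # Keep only notes whose estimated fundamental frequency lies within the limits.
    return [(f, d) for f, d in notes if lower <= f <= upper]
```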
Another note filtering method involves estimating the musical key of the stem track and then filtering out all transcribed notes with a fundamental frequency falling outside of that musical key. The musical key may be estimated using any key estimation algorithm known in the art, such as those described in C. L. Krumhansl and E. J. Kessler, Tracing the Dynamic Changes in Perceived Tonal Organization in a Spatial Representation of Musical Keys, Psychological Review, 89(4):334-368, 1982.
For example, if the musical key of the stem track is estimated to be A major and the pitch transcription process of step 306 generates a note list of (A3, C3, C#3, D3, E3, F3, G#3), the note filtering process of step 308 may filter that note list to (A3, C#3, D3, E3, G#3). In this case, certain notes are filtered out so that the remaining notes match the tonal center of the stem track—e.g., the A3, C#3, D3, E3, and G#3 pitches are compatible with the musical key of A major, while the C3 and F3 pitches are not compatible with the musical key of A major.
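By way of illustration, a minimal sketch of this key-based filtering is provided below, assuming the musical key has already been estimated (e.g., using a key estimation algorithm as cited above) and that the transcribed notes are represented as MIDI note numbers; it reproduces the A major example above.

```python
# Minimal sketch of the key-based note filter for an estimated major key.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]
PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                 "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def in_major_key(midi_note: int, tonic: str) -> bool:
    # A note is kept if its pitch class belongs to the key's diatonic scale.
    relative = (midi_note - PITCH_CLASSES[tonic]) % 12
    return relative in MAJOR_SCALE_STEPS

# Example from the text: filtering (A3, C3, C#3, D3, E3, F3, G#3) against A major.
notes = {"A3": 57, "C3": 48, "C#3": 49, "D3": 50, "E3": 52, "F3": 53, "G#3": 56}
kept = [name for name, num in notes.items() if in_major_key(num, "A")]
print(kept)  # ['A3', 'C#3', 'D3', 'E3', 'G#3']
```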
The musical key estimation and filtering method may be used by itself or in combination with the statistical analysis method described above. Of course, other note filtering methods known in the art may also be used within the scope of the present invention.
It should be understood that the note filtering methods described above may not be required if the errors in the pitch transcription process of step 306 are sufficiently low so as to fall within acceptable limits. In this regard, step 308 is optional and may not be performed in all implementations.
In step 310, the filtered transcription of each stem track is used to synthesize a corresponding binaural beat track. The binaural beat track is comprised of first and second binaural channels, each of which includes one or more sinusoidal signals that are derived from the transcribed notes in the filtered transcription. Different methods may be used to generate these sinusoidal signals.
In the exemplary embodiment shown in
The binaural beat frequency parameter is included in the input configuration described above (i.e., the frequency of binaural beats (in Hz)), and is representative of a magnitude of frequency deviation from the estimated fundamental frequency of each transcribed note. Specifically, the binaural beat frequency parameter is added to the estimated fundamental frequency to determine the corresponding frequency of a sinusoidal signal provided in the first binaural channel and, conversely, the binaural beat frequency parameter is subtracted from the estimated fundamental frequency to determine the corresponding frequency of a sinusoidal signal provided in the second binaural channel. For example, if the binaural beat frequency parameter is 2 Hz and the estimated fundamental frequency of a transcribed note is 440 Hz, then the corresponding frequencies of the sinusoidal signals provided on the first and second binaural channels are 442 Hz and 438 Hz, respectively. This same approach is used for all transcribed notes. Thus, the binaural effect is synthesized by a low frequency difference between the sinusoidal signals provided on the first and second binaural channels.
For stems that are processed polyphonically, the filtered transcription may include multiple transcribed notes that occur at the same time—e.g., the “other” pitched instruments stem track may include musical notes associated with two or more instruments. The estimated fundamental frequencies of these transcribed notes are used to synthesize multiple (i.e., simultaneously sounding) sinusoidal signals. For stems that are processed monophonically, only one sinusoidal signal will occur at a time. Thus, it can be appreciated that each of the first and second binaural tracks may include one to multiple sinusoidal signals within the scope of the present invention.
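By way of illustration, a minimal sketch of the synthesis of a single transcribed note is provided below, assuming a fixed amplitude rather than a full attack/decay/sustain/release envelope; overlapping notes from a polyphonic transcription would simply be summed.

```python
import numpy as np

# Minimal sketch of step 310 for one transcribed note. "delta_hz" is the binaural
# beat frequency parameter from the input configuration; a fixed amplitude is
# assumed (no ADSR envelope).
def render_note(f0_hz, onset_s, duration_s, delta_hz, sample_rate=44100, amp=0.1):
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    left = amp * np.sin(2 * np.pi * (f0_hz + delta_hz) * t)   # first binaural channel
    right = amp * np.sin(2 * np.pi * (f0_hz - delta_hz) * t)  # second binaural channel
    return onset_s, np.stack([left, right], axis=1)

# Example from the text: a 440 Hz note with a 2 Hz parameter yields 442/438 Hz tones.
onset, stereo = render_note(440.0, onset_s=0.0, duration_s=1.0, delta_hz=2.0)
print(stereo.shape)  # (44100, 2)
```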
In the exemplary embodiment shown in
In step 312, the gains of the binaural beat tracks (i.e., one binaural beat track corresponding to each music stem track) are adjusted using the gain setting process described below in connection with
In step 314, one or more infrasonic tracks and/or one or more ultrasonic tracks are synthesized, each of which comprises a sinusoidal signal having a frequency and an amplitude that are fixed across the track. Unlike the other tracks described herein, the infrasonic and/or ultrasonic tracks are not derived from the source audio track.
For each infrasonic track, the frequency of the sinusoidal signal is determined from an infrasound frequency parameter included in the input configuration described above (i.e., the frequency of infrasound (in Hz)) and the amplitude of the sinusoidal signal is determined from an infrasound volume parameter included in the input configuration described above (i.e., the volume of infrasound (in LUFS units)). Typically, the frequency of the infrasound has a value in the range of 0.25 Hz to 20 Hz. The selected frequency is chosen to be complementary to the frequencies of the source audio track, either by manual selection, or by use of machine learning. The frequency chosen for each infrasonic track can be selected as complementary by combined reference to the neurological and physiological sensitivity of the listener to probe frequencies, and by reference to subharmonics of identified prominent frequencies in the source audio track. The balance between these contributions is determined either by manual selection, or by use of machine learning.
For each ultrasonic track, the frequency of the sinusoidal signal is determined from an ultrasound frequency parameter included in the input configuration described above (i.e., the frequency of ultrasound (in Hz)) and the amplitude of the sinusoidal signal is determined from an ultrasound volume parameter included in the input configuration described above (i.e., the volume of ultrasound (in LUFS units)). Typically, the frequency of the ultrasound has a value in the range of 20 kHz to the Nyquist frequency (one-half the audio sample rate). The selected frequency is chosen to be complementary to the frequencies of the source audio track, either by manual selection, or by use of machine learning. The frequency chosen for each ultrasonic track can be selected as complementary by combined reference to the neurological and physiological sensitivity of the listener to probe frequencies, and by reference to high frequency harmonics of identified prominent frequencies in the source audio track. The balance between these contributions is determined either by manual selection, or by use of machine learning.
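By way of illustration, a minimal sketch of the synthesis of a fixed-frequency infrasonic or ultrasonic tone is provided below; treating the configured volume as a plain decibel value that is converted to a linear gain is a simplification of the LUFS-based volume settings described above, and the example frequencies are assumptions.

```python
import numpy as np

# Minimal sketch of step 314: a constant-frequency, constant-amplitude sinusoid
# for an infrasonic or ultrasonic track. Converting the configured volume with a
# simple dB-to-linear mapping is a simplification; a real implementation would
# meter the result in LUFS.
def synthesize_tone(freq_hz, volume_db, duration_s, sample_rate=96000):
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    amplitude = 10.0 ** (volume_db / 20.0)
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

infrasonic = synthesize_tone(10.0, volume_db=-40.0, duration_s=180.0)     # within 0.25-20 Hz
ultrasonic = synthesize_tone(24000.0, volume_db=-40.0, duration_s=180.0)  # above 20 kHz, below Nyquist
```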
It should be understood that the number of infrasonic tracks and/or ultrasonic tracks will vary between different contexts—i.e., some implementations will synthesize one or more infrasonic tracks, some implementations will synthesize one or more ultrasonic tracks, and some implementations will synthesize a combination of one or more infrasonic tracks and one or more ultrasonic tracks. It can be appreciated that the input configuration will include a frequency parameter and a volume parameter for each of these tracks.
In step 316, the gains of the one or more infrasonic tracks and/or the one or more ultrasonic tracks are adjusted using the gain setting process described below in connection with
In step 318, the source audio track is filtered to remove non-bass frequencies. In the exemplary embodiment shown in
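By way of illustration, a minimal sketch of the bass/non-bass filtering used in steps 318 and 328 is provided below; the Butterworth filter design and the 120 Hz corner frequency are assumptions rather than requirements of the exemplary embodiment.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Minimal sketch of the bass/non-bass split. The filter order and the 120 Hz
# corner frequency are assumptions for illustration only.
def split_bass(audio: np.ndarray, sample_rate: int, cutoff_hz: float = 120.0):
    sos_low = butter(4, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    sos_high = butter(4, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    bass_only = sosfiltfilt(sos_low, audio, axis=0)      # bass content for steps 320 and 324
    bass_removed = sosfiltfilt(sos_high, audio, axis=0)  # non-bass content for step 328
    return bass_only, bass_removed
```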
In step 320, a subharmonic track is generated by synthesizing subharmonics of the bass frequency content of the music audio using a subharmonic generator. Any subharmonic generator known in the art may be used to create subharmonic frequencies below the fundamental frequencies of the bass frequency content, typically using harmonic subdivision methods. The subharmonic track may comprise non-sinusoidal signals derived from a combination of frequencies drawn from the source audio track using spectral analysis, such as a fast Fourier transform (FFT). The subharmonic volume (in LUFS units) provided in the input configuration described above is used to set an amplitude for the subharmonic track that is at a desired balance with the other musical elements within the final mixed track (e.g., not too overpowering in intensity, not inaudible, etc.).
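By way of illustration, a minimal sketch of one possible subharmonic generator is provided below; it locates prominent bass frequencies with an FFT and synthesizes tones one octave below them, which is only one of many harmonic subdivision methods and is not the particular generator of the exemplary embodiment.

```python
import numpy as np

# Illustrative subharmonic generator for step 320: find the most prominent bass
# frequencies by spectral analysis (FFT) and synthesize tones one octave below.
def subharmonic_track(bass_audio, sample_rate, num_peaks=3, volume_db=-30.0):
    mono = bass_audio.mean(axis=1) if bass_audio.ndim > 1 else bass_audio
    spectrum = np.abs(np.fft.rfft(mono))
    freqs = np.fft.rfftfreq(mono.size, d=1.0 / sample_rate)
    spectrum[freqs < 20.0] = 0.0  # ignore DC and sub-audio bins
    peak_freqs = freqs[np.argsort(spectrum)[-num_peaks:]]
    t = np.arange(mono.size) / sample_rate
    gain = 10.0 ** (volume_db / 20.0)
    # Sum of tones at one-half of each prominent frequency (one octave down).
    return gain * sum(np.sin(2 * np.pi * (f / 2.0) * t) for f in peak_freqs)
```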
In step 322, the gain of the subharmonic track is adjusted using the gain setting process described below in connection with
In step 324, an enhanced bass track is generated by enhancing the bass frequency content of the music audio using the process described in U.S. Pat. No. 5,930,373, which is incorporated herein by reference. In this process, a bass intensity parameter included in the input configuration described above (i.e., the bass intensity (in LUFS units)) is used to determine the degree of bass enhancement of the processed track, while a bass volume parameter included in the input configuration described above (i.e., the bass volume (in LUFS units)) is used to determine the loudness of the processed track. One skilled in the art will appreciate that bass enhancements are desired for maximum effect (e.g., power, warmth, clarity, etc.).
In step 326, the gain of the enhanced bass track is adjusted using the gain setting process described below in connection with
In step 328, the source audio track is filtered to remove bass frequencies. In the exemplary embodiment shown in
In step 332, the gain of the spatialized track is adjusted using the gain setting process described below in connection with
In step 334, the various audio tracks described above—i.e., binaural beat tracks, infrasonic and/or ultrasonic tracks, subharmonic track, enhanced bass track, and spatialized track—are mixed to produce an enhanced audio track. In the exemplary embodiment, the mixing is performed by summation and averaging to produce a two-channel digital audio track. Then, in step 336, the enhanced audio track is stored as a compressed or uncompressed audio file (e.g., an audio file provided in an AAC, FLAC, WMV, WAV, MP3, or PCM audio format). For example, with reference to
Referring to
In step 502, the source audio track is obtained. In step 504, one or more features are extracted from the source audio track, such as (a) a tempo of the source audio track (in beats per minute (BPM)), (b) a musical event density of the source audio track (in events per second), (c) a harmonic mode of the source audio track (major, minor, etc.), (d) a loudness of the source audio track (in LUFS units), and/or (e) a percussiveness of the source audio track (using leading edge transient detection). These features may be extracted using any feature extraction algorithms known in the art, and are then fed to a trajectory planner for processing in step 506, described below.
Examples of tempo estimation software that may be used to estimate the tempo of the source audio track include the Traktor software available from Native Instruments GmbH of Berlin, Germany, the Cubase software available from Steinberg Media Technologies GmbH of Hamburg, Germany, the Ableton Live software available from Ableton AG of Berlin, Germany, and the Hornet Songkey MK4 software available from Hornet SRL of Italy. The tempo may also be estimated using the approaches described in D. Bogdanov et al., Essentia: An audio Analysis Library for Music Information Retrieval, 14th Conference of the International Society for Music Information Retrieval (ISMIR), pages 493-498, 2013.
The musical event density of the source audio track may be estimated using the approaches described in D. Bogdanov et al., Essentia: An audio Analysis Library for Music Information Retrieval, 14th Conference of the International Society for Music Information Retrieval (ISMIR), pages 493-498, 2013.
Examples of harmonic mode estimation software that may be used to estimate the harmonic mode of the source audio track include the Traktor software available from Native Instruments GmbH of Berlin, Germany, the Cubase software available from Steinberg Media Technologies GmbH of Hamburg, Germany, the Ableton Live software available from Ableton AG of Berlin, Germany, the Hornet Songkey MK4 software available from Hornet SRL of Italy, and the Pro Tools software available from Avid Technology, Inc. of Burlington, Massachusetts.
An example of loudness estimation software that may be used to estimate the loudness of the source audio track includes the WLM Loudness Meter software available from Waves Audio Ltd. of Tel Aviv, Israel.
The percussiveness of the source audio track may be estimated using the approaches described in F. Gouyon et al., Evaluating Rhythmic Descriptors for Musical Genre Classification, Proceedings of the AES 25th International Conference, pages 196-204, 2004.
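By way of illustration, a minimal sketch of the feature extraction of step 504 using the Essentia library cited above is provided below; the specific algorithms and parameters shown are assumptions, and the remaining features (musical event density, loudness, percussiveness) would be extracted with analogous tools.

```python
# Illustrative sketch of step 504 with Essentia; algorithm choices are assumptions.
import essentia.standard as es

def extract_features(audio_path: str) -> dict:
    audio = es.MonoLoader(filename=audio_path)()
    # Tempo (feature (a)) via the multifeature beat tracker.
    bpm, _, beat_confidence, _, _ = es.RhythmExtractor2013(method="multifeature")(audio)
    # Harmonic mode (feature (c)): "major" or "minor", plus the estimated key.
    key, scale, key_strength = es.KeyExtractor()(audio)
    return {
        "tempo_bpm": bpm,
        "harmonic_mode": scale,
        "key": key,
        "beat_confidence": beat_confidence,
    }
```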
In step 508, an end-state effect intended to be induced in the end user is obtained. Examples of end-state effects include, but are not limited to, calm-grounded-controlled, motivated-inspired-energized, relaxed-tranquil-sleepy, undistracted-meditative-focused, comforted-relieved-soothed, recovered-restored, and driven-enhanced-in flow state. The end-state effect is then fed to the trajectory planner for processing in step 506, described below.
In step 506, the trajectory planner generates a spatialization trajectory that is derived from the audio feature(s) extracted from the source audio track and the target end-state effect. The spatialization trajectory may be a two-dimensional (2D) trajectory or a three-dimensional (3D) trajectory, depending on the implementation and desired effect. In particular, a 2D trajectory enables control of the perceived distance and lateral excursion of a sound; in a 3D trajectory, it is also possible to control the elevation of the trajectory. A number of different approaches may be used to generate the spatialization trajectory.
In some embodiments, the audio feature(s) extracted from the source audio track and the target end-state effect are used to index into a preset spatialization trajectory. Examples of four preset spatialization trajectories—each of which is associated with a tempo of the source audio track and a target end-state effect—are provided in Table 1 below:
It should be understood that the preset spatialization trajectories provided in Table 1 are merely examples. Any number of preset spatialization trajectories may be used—each of which is associated with one or more audio features extracted from the source audio track and a target end-state effect.
In other embodiments, the spatialization trajectory is determined using a combination approach that includes lookup from a set of preset spatialization trajectories and parameter modification of the selected trajectory using one or more of the audio feature(s) extracted from the source audio track. An example algorithm that uses this approach is provided below:
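As an illustration only, a minimal sketch of this combination approach is provided below; the preset table, parameter names, and scaling rules in the sketch are assumptions introduced for illustration and are not the algorithm of the exemplary embodiment.

```python
import numpy as np

# Illustrative sketch: preset lookup followed by parameter modification using
# extracted audio features. All presets and scaling rules here are hypothetical.
PRESETS = {
    # target end-state effect -> base trajectory parameters
    "relaxed-tranquil-sleepy": {"radius_m": 1.0, "revolutions_per_bar": 0.25},
    "motivated-inspired-energized": {"radius_m": 2.0, "revolutions_per_bar": 1.0},
}

def plan_trajectory(end_state, tempo_bpm, loudness_lufs, duration_s, fps=50):
    params = dict(PRESETS[end_state])
    bar_s = 4 * 60.0 / tempo_bpm                       # assume 4 beats per bar
    rev_per_s = params["revolutions_per_bar"] / bar_s  # rotation rate locked to tempo
    # Louder material widens the circular path (hypothetical scaling rule).
    radius = params["radius_m"] * (1.0 + max(0.0, (loudness_lufs + 23.0) / 23.0))
    t = np.arange(int(duration_s * fps)) / fps
    angle = 2 * np.pi * rev_per_s * t
    # 2D trajectory: position around the listener at a tempo-locked rate.
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)
```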
An example trajectory loop that may be generated using the above algorithm is shown in
In step 510, the source audio track is filtered to remove bass frequencies. It should be understood that this step uses the same process described above in connection with step 328 of the method shown in
In some embodiments, the process may use more than one filter to provide multiple tracks that are spatialized simultaneously. For example, a first filter may remove frequencies below 120 Hz to provide a first track, a second filter may pass frequencies of 20 Hz to 120 Hz to provide a second track, a third filter may remove frequencies outside of 400 Hz to 1,000 Hz to provide a third track, and a fourth filter may remove frequencies outside of 1,000 Hz to 4,000 Hz to provide a fourth track. A different spatialization trajectory may be used to spatialize each of these tracks to thereby generate multiple spatialized tracks. Other examples will be apparent to one skilled in the art.
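As a simplified illustration of how a planned trajectory may be applied to the filtered audio, the sketch below drives an equal-power stereo pan from the trajectory; a full implementation would instead render the moving source with a dedicated binaural or object-based spatialization renderer.

```python
import numpy as np

# Simplified stand-in for the spatialization step: a time-varying equal-power
# stereo pan driven by the planned 2D trajectory (lateral excursion only).
def spatialize(mono_audio, trajectory_xy, sample_rate, fps=50):
    azimuth = np.arctan2(trajectory_xy[:, 1], trajectory_xy[:, 0])
    pan = np.interp(np.arange(mono_audio.size) / sample_rate,
                    np.arange(trajectory_xy.shape[0]) / fps,
                    np.sin(azimuth))                   # pan position in [-1, 1]
    theta = (pan + 1.0) * np.pi / 4.0                  # equal-power pan law
    left = np.cos(theta) * mono_audio
    right = np.sin(theta) * mono_audio
    return np.stack([left, right], axis=1)
```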
Referring to
As an initial matter, the spatialized track is obtained in step 702, the enhanced bass track is obtained in step 704, the subharmonic track is obtained in step 706, the binaural beat tracks are obtained in step 708, and the infrasonic and/or ultrasonic tracks are received in step 710.
In step 712, the spatialized track, enhanced bass track, and subharmonic track are mixed, with a gain applied to each track. In step 714, the loudness of the mix is estimated using any loudness estimation algorithm known in the art (e.g., the WLM Loudness Meter software available from Waves Audio Ltd. of Tel Aviv, Israel). In step 716, it is determined whether the loudness of the mix matches the loudness of the source audio track. If not, in step 718, the gains of the spatialized track, enhanced bass track, and subharmonic track are iteratively adjusted until the loudness of the mix substantially matches the loudness of the source audio track. Any multi-parameter estimation method may be used to determine the gain adjustment in each iteration, including machine learning approaches such as stochastic gradient descent of multilayer perceptrons. Of course, it is also possible to use more simplified methods, such as a hill-climbing estimate, to adjust the gains.
When the loudness of the mix matches the loudness of the source audio track, in step 720, the final gain of the mix is used to set the gains of the binaural beat tracks and the infrasonic and/or ultrasonic tracks. In the exemplary embodiment, the gains of the binaural beat tracks and the infrasonic and/or ultrasonic tracks are determined by multiplying the final gain of the mix by a constant, i.e., a fixed loudness ratio that may be derived by experimentation with existing datasets or by data analysis. For example, if the final gain of the mix is −6 dB and the constant is 9, the gains of the binaural beat tracks and the infrasonic and/or ultrasonic tracks will be −54 dB.
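By way of illustration, a minimal sketch of the gain setting process of steps 712 through 720 is provided below; the integrated loudness meter (the pyloudnorm package here) and the single shared gain with a simple proportional correction are assumptions standing in for any LUFS meter and any multi-parameter estimation method.

```python
import numpy as np
import pyloudnorm as pyln

# Minimal sketch of steps 712-720. A single shared gain and a proportional
# correction replace the per-track, multi-parameter adjustment described above.
def set_gains(source, spatial, bass, subharmonic, binaural, infra_ultra,
              sample_rate, ratio_constant=9.0, tolerance_db=0.5, max_iters=20):
    meter = pyln.Meter(sample_rate)
    target = meter.integrated_loudness(source)
    gain_db = 0.0
    for _ in range(max_iters):
        g = 10.0 ** (gain_db / 20.0)
        mix = g * (spatial + bass + subharmonic)          # steps 712-714
        error = target - meter.integrated_loudness(mix)   # step 716
        if abs(error) < tolerance_db:
            break
        gain_db += error                                  # step 718 (simple correction)
    side_gain = 10.0 ** ((gain_db * ratio_constant) / 20.0)  # step 720: e.g., -6 dB * 9 = -54 dB
    return mix + side_gain * (binaural + infra_ultra)     # summed tracks prior to final mixdown
```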
In step 722, the various audio tracks—i.e., binaural beat tracks, infrasonic and/or ultrasonic tracks, subharmonic track, enhanced bass track, and spatialized track—are mixed to produce an enhanced audio track. Then, in step 724, the enhanced audio track is stored as an audio file. It should be understood that steps 722 and 724 use the same processes described above in connection with steps 334 and 336 of the method shown in
The description set forth above provides several exemplary embodiments of the inventive subject matter. Although each exemplary embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus, if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed.
The use of any and all examples or exemplary language (e.g., “such as”) provided with respect to certain embodiments is intended merely to better describe the invention and does not pose a limitation on the scope of the invention. Also, relative relational terms, such as “first” and “second,” are used solely to distinguish one unit or action from another unit or action without necessarily requiring or implying any actual such relationship or order between such units or actions. No language in the description should be construed as indicating any non-claimed element essential to the practice of the invention.
The use of the terms “comprises,” “comprising,” or any other variation thereof, is intended to cover a non-exclusive inclusion, such that a process, method, or system that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, or system.
While the present invention has been described and illustrated hereinabove with reference to several exemplary embodiments, it should be understood that various modifications could be made to these embodiments without departing from the scope of the invention. Therefore, the present invention is not to be limited to the specific configurations or methodologies of the exemplary embodiments, except insofar as such limitations are included in the following claims.
Number | Name | Date | Kind |
---|---|---|---|
9712916 | Katsianos et al. | Jul 2017 | B2 |
10014002 | Koretzky | Jul 2018 | B2 |
10838684 | Tsingos et al. | Nov 2020 | B2 |
11216244 | Morsy et al. | Jan 2022 | B2 |
20110305345 | Bouchard | Dec 2011 | A1 |
20130177883 | Barnehama | Jul 2013 | A1 |
20140355766 | Morrell | Dec 2014 | A1 |
20200065055 | Tsingos | Feb 2020 | A1 |
20200128346 | Noh | Apr 2020 | A1 |
20200221220 | Benattar | Jul 2020 | A1 |
20210279030 | Morsy et al. | Sep 2021 | A1 |
20220060269 | Kiely et al. | Feb 2022 | A1 |
20220156036 | Liu et al. | May 2022 | A1 |
20230128812 | Quinton | Apr 2023 | A1 |
20230379638 | Wagner | Nov 2023 | A1 |
Number | Date | Country |
---|---|---|
2824663 | Aug 2021 | EP |
2014515129 | Jun 2014 | JP |
2022531432 | Jul 2022 | JP |
WO-2006108456 | Oct 2006 | WO |
WO 2020220140 | Nov 2020 | WO |
WO 2022126271 | Jun 2022 | WO |
WO-2022200136 | Sep 2022 | WO |
Entry |
---|
Koretzky, Online Article, “Audio AI—isolating instruments from stereo music using Convolutional Neural Networks”, 2019 (28 pgs). |
Hacker News, Online Communication, “Manipulate audio with a simple Python library”, Oct. 12, 2014 (4 pgs). |
Bogdanov et al., Article, “Essentia: An Audio Analysis Library for Music Information Retrieval”, Proceedings—14th International Society for Music Information Retrieval Conference, Nov. 1, 2013 (6 pgs). |
Gouyon et al., Article, “Evaluating rhythmic descriptors for musical genre classification”, AES 25th International Conference, Jun. 17-19, 2004 (9 pgs). |
Krumhansl, et al., Article, “Tracing the Dynamic Changes in Perceived Tonal Organization in a Spatial Representation of Musical Keys”, Psychological Review, vol. 89, No. 4, 334-368, 1982 (35 pgs). |
Moussallam, Online Article, “Releasing Spleeter: Deezer Research source separation engine. Spleeter by deezer”, Published in Deezer I/O, Nov. 4, 2019 (5 pgs). |
Website Information, celemony, “What can Melodyne do?”, downloaded from the internet at https://celemony.com/en/melodyne/what-can-melodyne-do on Jan. 27, 2023 (6 pgs). |
Website Information, Native Instruments GmbH, Overview All Products, downloaded from the internet at https://www.native-instruments.com/en/catalog/?searching=traktor on Jan. 27, 2023 (11 pgs). |
Website Information, Steinberg Media Technologies GmbH, “Cubase”, downloaded from the internet at https://www.steinberg.net/cubase/ on Jan. 27, 2023 (13 pgs). |
Website Information, Ableton, “What is Live?”, downloaded from the internet at https://www.ableton.com/en/live/what-is-live/ on Jan. 27, 2023 (27 pgs). |
Website Information, Hornet Plugins, “HoRNet SongKey MK4”, downloaded from the internet at https://www.hornetplugins.com/plugins/hornet-songkey-mk4/ on Jan. 27, 2023 (8 pgs). |
Website Information, Avid Technology, Inc., “Pro Tools. Music Production.”, downloaded from the internet at https://www.avid.com/pro-tools on Jan. 27, 2023 (14 pgs). |
Website Information, Waves Audio Ltd, “WLM Plus Loudness Meter. How to Set Loudness Levels for Streaming.”, downloaded from the internet at https://www.waves.com/plugins/wlm-loudness-meter#how-to-set-loudness-levels-for-streaming-wlm on Jan. 27, 2023 (3 pgs). |
Website Information, Waves Audio Ltd, “Loudness Metering Explained.”, downloaded from the internet at https://www.waves.com/loudness-metering-explained on Jan. 27, 2023 (5 pgs). |
Website Information, Dolby Professional, “Dolby Atmos Personalized Rendering (beta)”, downloaded from the internet at https://professionaldolby.com/phrtf on Jan. 27, 2023 (14 pgs). |
Website Information, Dear Reality, “dearVR PRO”, downloaded from the internet at https://www.dear-reality.com/products/dearvr-pro#:˜:text=dearVR%20PRO%20is%20the%20all,output%20formats%20up%20to%209.1. on Jan. 27, 2023 (9 pgs). |