The present invention is generally related to audio-based computer technologies. More particularly, example embodiments of the present invention are related to methods of providing a customized audio-stream through intelligent comparison of environmental factors.
Conventionally, audio-streams and audio-stream technology depend upon existing audio data files stored on a computer. These audio data files are played back individually using a computer apparatus, for example, as a single stream of music. Mixing or blending of several audio files may be accomplished; however, the mixing or blending is conventionally performed by a user picking and choosing files to produce a desired effect. Generally, such a user is skilled in audio-mixing. It follows that individuals not skilled in audio-mixing may have difficulty producing desired, blended audio streams.
According to an example embodiment of the present invention, a method of real-time customization of an audio stream includes retrieving a set of parameters related to a current state of a device, determining a pattern, tempo, background loop, pitch, and number of foreground notes for the audio stream based upon the set of parameters, and creating an audio stream based upon the determination.
According to another example embodiment of the present invention, a system of real-time customization of an audio stream is provided. The system includes a service provider, the service provider storing a plurality of information related to a state of a geographic location. The system further includes a device in communication with the service provider, the device configured and disposed to retrieve information related to a current state of the device from the service provider based on a geographic location of the device, and the device further configured and disposed to customize an audio-stream based on the retrieved information.
According to another example embodiment of the present invention, a computer-implemented user-interface rendered on a display portion of a portable computer apparatus includes a plurality of controls, each control of the plurality of controls including user-configurable and pre-existing states of the portable computer apparatus. A processor of the portable computer apparatus is configured and disposed to perform a method of real-time customization of an audio stream. The method includes retrieving a set of parameters based on user-manipulation of the user-interface and the plurality of controls, determining a pattern, tempo, background loop, pitch, and number of foreground notes for a customized audio stream based upon the set of parameters, and creating an audio stream based upon the determination.
According to another example embodiment of the present invention, a computer program product includes a computer readable medium containing computer executable code thereon; the computer executable code, when processed by a processor of a computer, directs the processor to perform a method of real-time customization of an audio stream. The method includes retrieving a set of parameters related to a current state of a device, determining a pattern, tempo, background loop, pitch, and number of foreground notes for the audio stream based upon the set of parameters, and creating an audio stream based upon the determination.
Many aspects of the invention can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Further to the brief description provided above and associated textual detail of each of the figures, the following description provides additional details of example embodiments of the present invention. It should be understood, however, that there is no intent to limit example embodiments to the particular forms and particular details disclosed, but to the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of example embodiments and claims. Like numbers refer to like elements throughout the description of the figures.
It will be understood that, although the terms first, second, etc. may be used herein to describe various steps or calculations, these steps or calculations should not be limited by these terms. These terms are only used to distinguish one step or calculation from another. For example, a first calculation could be termed a second calculation, and, similarly, a second step could be termed a first step, without departing from the scope of this disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes”, and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Hereinafter, example embodiments of the present invention are described in detail.
Example embodiments of the present invention may generate a music stream (or streams) that is influenced by a plurality of parameters. These parameters may include geographical location, movement speed/velocity, time of day, weather conditions, ambient temperature, and/or any other suitable parameters.
Generally, example embodiments may include a user interface and application on a mobile device/computer apparatus, for example, to determine geographic location and velocity. Further, the application may include code-portions configured to blend/mix existing audio files into a configurable audio stream. The blend/mix audio stream may be tailored based upon the parameters.
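The parameter set influencing the blend/mix may be sketched as a simple structure. The field names below are illustrative assumptions for exposition, not terms taken from the embodiments:

```python
from dataclasses import dataclass

# Hypothetical parameter set; field names are illustrative only.
@dataclass
class StreamParameters:
    latitude: float       # geographic location
    longitude: float
    speed_kmh: float      # movement speed/velocity
    hour_of_day: int      # time of day, 0-23
    weather: str          # e.g. "sunny", "rain"
    temperature_c: float  # ambient temperature

# Example instance representing a current device state.
params = StreamParameters(
    latitude=40.7, longitude=-74.0,
    speed_kmh=5.0, hour_of_day=21,
    weather="sunny", temperature_c=18.0,
)
```

Such a structure could then be passed to the mixing code-portions to tailor the audio stream.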
A user interface of example embodiments may include icon buttons or other graphical elements for easy manipulation by a user of a computer device (e.g., mobile device). The graphical elements may allow control or revision of desired audio-stream mixing through manipulation of the above-described parameters.
The interface 100 may further include data connection control 104. The data connection control 104 may turn on/off a default data connection of a device presenting the interface 100, or alternatively, a number of devices presenting the interface 100. In other embodiments, the data connection control 104 may direct rendering of a data connection interface for selection of a plurality of parameters by a user (see
The interface 100 may further include audio stream control 105. The audio stream control 105 may direct rendering of an audio stream interface for selection of a plurality of parameters by a user (see
The interface 100 may further include geographical rendering 110. The geographical rendering 110 may include a plurality of elements for viewing by a user. For example, element 111 depicts a current or selected geographical location. The element 111 may be controlled through a location interface (see
Hereinafter, the several example user interfaces mentioned above are described in detail.
Although described above as individual interfaces, it should be understood that any or all of the interfaces 200, 300, 400, 500, 600, and/or 700 may be rendered upon other interfaces, or may be rendered in combination with other interfaces. The particular forms described and illustrated are for the purpose of understanding of example embodiments only, and should not be construed as limiting. Furthermore, in addition to those interfaces presented and described above, it is noted that example embodiments may further provide visual display or representation of an audio stream rendered upon a user interface, including any user interface described herein.
The visual rendering 801 may be a representation of the custom audio-stream of the device. A plurality of visual representations are possible, and thus example embodiments should not be limited to only the example illustrated, but should be applicable to any desired visual rendering representative of an audio stream. In the example provided, visual rendering 801 includes a plurality of dots/elements representing portions of the audio-stream. The dots/elements may move erratically for speedier compositions, or may remain fixed. The dots/elements may be colored or shaded based on parameters of the audio-stream. For example, different colors or shades representing speed/weather/location (sunny, fast, slow, beach, city, etc) may be presented dynamically at any or all of the dots/elements.
Additional user interface elements may include an audio wave animation configured to display audio information. For example, sinusoidal or linear waves may be presented. Furthermore, bar-graph-like equalizer elements or other such elements may be rendered on the visual rendering 801. The animated elements may be configured to allow a user to select portions of the audio wave, fast-forward, rewind, etc. Additionally, selecting the audio wave may enable a video selection screen (not illustrated). Upon selection, the current sound mix may be faded out and another background loop may be initiated. If the user wishes to return to the previous audio stream, the previous stream may be faded back in.
Within the video selection view noted above, a user may select between different video clips or possible video renderings. Touching/selecting a video thumbnail (e.g., static image) may initiate a full screen video view (or a rendering on a portion of a display or interface) according to the selected visual representation.
As described above, example embodiments provide a plurality of interfaces by which a user may select, adjust, and/or override parameters representative of a current state of a device (location, speed, weather conditions near the device, etc). Using these parameters, example embodiments may provide customization of an audio stream as described in detail below.
According to example embodiments, a method 900 includes retrieving parameters at block 901. The parameters may be retrieved by a device from a plurality of sources. For example, a device may retrieve pre-selected parameters, dynamically updated parameters, or any other suitable parameters associated with the device. The parameters may be fixed and stored on the device for a continuous audio-loop, or may be updated at any desired or predetermined frequency for dynamic changes to an audio stream. Therefore, although block 901 is presented in a flow-chart, it should be understood that block 901 and associated actions may be repeated throughout implementation of the method 900.
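The retrieval at block 901, including the fall-back to fixed parameters stored on the device, can be sketched as follows. The source function and stored defaults are hypothetical names introduced for illustration:

```python
# Illustrative defaults stored on the device for a continuous audio-loop.
STORED_DEFAULTS = {"location": "city", "speed_kmh": 0.0, "weather": "clear", "hour": 12}

def fetch_live_parameters():
    # Placeholder for GPS / weather-service queries; assumed to raise when
    # no data connection is available.
    raise ConnectionError("no data connection")

def retrieve_parameters(online=True):
    """Return dynamically updated parameters when possible, else stored ones."""
    if not online:
        return dict(STORED_DEFAULTS)
    try:
        return fetch_live_parameters()
    except ConnectionError:
        # Device is offline or the user has disabled the data connection.
        return dict(STORED_DEFAULTS)
```

Calling `retrieve_parameters` at a desired or predetermined frequency would then repeat block 901 throughout the method 900.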
The method 900 further includes determining audio properties based on the retrieved parameters at block 902. For example, audio properties may be properties used to produce an audio stream. The properties may include tempo, octaves, audio ranges, background patterns, or any other suitable properties. These properties may be based on the retrieved parameters.
For example, geographic location may affect the mixing of a pattern of audio sounds. The geographic location may be retrieved automatically through a GPS chip (if one exists), or may be chosen as described above. There may be a plurality of audio patterns stored on a computer readable medium which may be accessed through computer instructions embodying the present invention. The geographic location may be used to determine a particular pattern meaningful to a particular location. For example, if the device is located near a beach, a different pattern may be used than that which may be appropriate for a city.
Further, speed/velocity of a device may affect playback speed of the pattern noted above. For example, a delay effect may be introduced if a device is moving more slowly compared to a predetermined or desired velocity. For example, the desired velocity may be set using a speed interface, or a change of speed/tempo may be selected through an interface as well.
Further, weather conditions may affect selection of a background loop. For example, the number of notes played in a pattern may be increased in clear/sunny weather, decreased in inclement weather, etc.
Further, ambient temperature may affect a pitch of pattern notes in the audio stream.
Further, time of day may affect a number of notes played in a pattern. For example, a number of notes played in a pattern may be decreased during the evening, increased in daylight, increased in the evening based on location (nightclub, music venue, etc), decreased in daylight based on weather patterns, etc.
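Taken together, the parameter-to-property mappings above can be sketched as a single function. The thresholds, pattern names, and exact formulas below are illustrative assumptions, not values from the embodiments:

```python
# Illustrative mapping of device-state parameters to audio properties.
def determine_audio_properties(location, speed_kmh, weather, temp_c, hour):
    # Geographic location selects a stored pattern (names are hypothetical).
    pattern = "beach_pattern" if location == "beach" else "city_pattern"
    # Faster movement raises playback tempo (beats per minute, capped).
    tempo = 90 + min(speed_kmh, 60)
    # Weather and time of day adjust the number of notes in the pattern.
    notes = 8
    if weather == "sunny":
        notes += 4            # more notes in clear weather
    elif weather in ("rain", "snow"):
        notes -= 4            # fewer notes in inclement weather
    if hour >= 20 or hour < 6:
        notes = max(notes - 2, 1)   # fewer notes in the evening
    # Ambient temperature shifts the pitch of pattern notes.
    pitch_shift = (temp_c - 20) / 10
    return {"pattern": pattern, "tempo": tempo, "notes": notes,
            "pitch_shift": pitch_shift}

props = determine_audio_properties("beach", 30, "sunny", 25, 22)
```

A real implementation would of course combine these signals more subtly; the sketch only shows each parameter influencing the property named for it above.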
Furthermore, according to some example embodiments, a random element may be introduced to modify the mixing/audio pattern over time. Additionally, after a predetermined or desired time, the audio pattern may fade-out and after some time of background loop only, the pattern may fade back in as a variation depending upon the random element. This may be beneficial in that the audio pattern of the mixed audio stream is in constant variation thereby maintaining and/or increasing interest in the audio pattern.
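One way the random element could vary the pattern when it fades back in is sketched below; the per-note transposition rule is an illustrative assumption:

```python
import random

# Illustrative random variation: each note of the faded-out pattern is
# transposed by -1, 0, or +1 steps before the pattern fades back in.
def vary_pattern(pattern, rng):
    return [note + rng.choice([-1, 0, 1]) for note in pattern]

rng = random.Random(42)        # seeded here only for reproducibility
base = [60, 62, 64, 65]        # notes as MIDI numbers (an assumption)
varied = vary_pattern(base, rng)
assert all(abs(v - b) <= 1 for v, b in zip(varied, base))
```

Re-applying the variation at each fade cycle keeps the mixed audio stream in constant variation, as described above.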
The method 900 further includes producing the audio stream based on the determined audio properties at block 903. As described above, parameters may be retrieved periodically, based on any desired frequency, and thus audio properties may be adjusted over time as well. It follows that a new or altered audio stream may be produced constantly. For example, as a speed of a device changes, so may the tempo of the audio stream. Further, as weather changes, so may the tone of the audio stream.
Finally, the method 900 includes audio playback/visualization of the audio stream. The playback may be constant and may be dynamically adjusted based on retrieved parameters. The visualization may also be constant and may be dynamically adjusted based on the retrieved parameters. Further, as described above, the audio playback/visualization may be paused, rewound, moved forward, or ceased by a user through manipulation of an interface as described above.
The system 1000 further includes a service provider 1003 in communication with the server 1001 over a network 1002. It is noted that although illustrated as separate, the service provider 1003 may include a server substantially similar to server 1001. The service provider may be a data service provider, for example, a cellular service provider, a weather information provider, a positioning service provider (satellite information, WiFi network position information, etc), or any other suitable provider. The service provider 1003 may also be an application server providing applications and/or computer executable code implementing any of the interfaces/methodologies described herein. The service provider 1003 may present a plurality of application defaults, choices, set-ups, and/or configurations such that a device may receive and process the application accordingly. The service provider 1003 may present any application on a user interface or web-browser of a device for relatively easy selection by a user of the device. The user interface or web-page rendered for application selection may be in the form of an application store and/or application marketplace.
The network 1002 may be any suitable network, including the Internet, wide area network, and/or a local network. The server 1001 and the service provider 1003 may be in communication with the network 1002 over communication channels 1010, 1011. The communication channels 1010, 1011 may be any suitable communication channels including wireless, satellite, wired, or otherwise.
The system 1000 further includes computer apparatus 1005 in communication with the network 1002, over communication channel 1012. The computer apparatus 1005 may be any suitable computer apparatus including a personal computer (fixed location), a laptop or portable computer, a personal digital assistant, a cellular telephone, a portable tablet computer, a portable audio player, or otherwise. For example, the system 1000 may include computer apparatuses 1004 and 1006, which are embodied as portable music players and/or cellular telephones with portable music players or music playing capabilities thereon. The apparatuses 1004 and 1006 may include display means 1041, 1061, and/or buttons/controls 1042, 1062. The controls 1042, 1062 may operate independently or in combination with any of the controls noted above. For example, the controls 1042, 1062 may be controls directed to cellular operation or default music player operations.
Further, the apparatuses 1004, 1005, and 1006 may be in communication with each other over communication channels 1115, 1116 (for example, wired, wireless, Bluetooth channels, etc); and may further be in communication with the network 1002 over communication channels 1012, 1013, and 1014.
Therefore, the apparatuses 1004, 1005, and 1006 may all be in communication with one or both of the server 1001 and the service provider 1003, as well as each other. Each of the apparatuses may be in severable communication with the network 1002 and each other, such that the apparatuses 1004, 1005, and 1006 may be operated without constant communication with the network 1002 (e.g., using data connection controls of an interface). For example, if there is no data availability or if a user directs an apparatus to work offline, the customized audio produced at any of the apparatuses 1004, 1005, and 1006 may be based on stored information/parameters. It follows that each of the apparatuses 1004, 1005, and 1006 may be configured to perform the methodologies described above; thereby producing real-time customized audio streams to a user of any of the apparatuses.
Furthermore, using any of the illustrated communication mediums, the apparatuses 1004, 1005, and 1006 may share, transmit, and/or receive different audio-streams previously or currently produced at any one of the illustrated elements of the system 1000. For example, a stored plurality of audio streams may be available on the server 1001 and/or the service provider 1003. Moreover, users of any of the devices 1004, 1005, and 1006 may transmit/share audio streams with other users. Additionally, a personalized bank of audio streams may be stored at the server 1001 and/or the service provider 1003.
As described above, features of example embodiments include listening to uniquely and/or real-time generated music/audio streams, sharing music moods with friends/users, mobile platform integration, and other unique features not found in the conventional art. For example, while typical generative music systems utilize fixed rules and algorithms of a pre-defined framework or database in order to create sound, audio generation of example embodiments is achieved through ongoing real-time transformation of online and offline data that triggers a subsequent sound creation process. Example embodiments also use algorithmic routines to render the real-time data in such a manner that the musical result sounds meaningful.
Example embodiments may begin a new generative process upon initiation and continue to create sound until a request to terminate is received. Users can manually adjust values (e.g., mood, tempo, structure complexity, position) in order to manipulate the musical result according to their preferences and musical taste, in addition to manipulating any of the parameters described above. For example, a user may choose whether or not to include weather information or any other parameter on which to base the customized audio stream.
Example embodiments may be configured to adjust/learn through explicit user feedback (e.g., ‘Do you like your audio stream?’ presented to a user for feedback on an interface) as well as through implicit user feedback (e.g., if an audio stream generation application is repeatedly set to a positive mood at a certain time of day, output may be less melancholic because minor notes would be eliminated from the sound generation process, and vice versa).
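The implicit-feedback rule may be sketched as follows, assuming notes are represented as MIDI numbers and "minor" notes are those whose pitch class falls outside a major scale. Both representations are illustrative assumptions:

```python
# Pitch classes of the C major scale (illustrative choice of scale).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def filter_notes(candidate_notes, positive_mood):
    """Drop minor-colored notes from generation when mood is positive."""
    if not positive_mood:
        return candidate_notes
    return [n for n in candidate_notes if n % 12 in C_MAJOR]

notes = [60, 63, 64, 67, 70]   # C, E-flat, E, G, B-flat (MIDI numbers)
print(filter_notes(notes, positive_mood=True))   # E-flat and B-flat removed
```

A learning component would toggle `positive_mood` (or a weighting) from the observed history of user settings at that time of day.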
Online data may also be regularly retrieved through the methods described herein, and may constantly influence the sound/melody generation, while offline data may be used to add specific characteristics and/or replace online data if a device is offline (e.g., through a severable connection).
Example embodiments may be configured to utilize different types of samples and sounds (e.g., by famous artists and musicians), offering the possibility of creating a unique long-form application, each with a very characteristic and specific musical bias.
Additionally and as described above, example embodiments of the invention may be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. Therefore, according to an example embodiment, the methodologies described hereinbefore may be implemented by a computer system or apparatus. A computer system or apparatus may be somewhat similar to the mobile devices and computer apparatuses described above, which may include elements as described below.
Therefore, embodiments can be embodied in the form of computer-implemented processes and apparatuses for practicing those processes, as well as in the form of a computer program product. Embodiments include the computer program product 1200 as depicted in
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
It should be emphasized that the above-described embodiments of the present invention, particularly, any detailed discussion of particular examples, are merely possible examples of implementations, and are set forth for a clear understanding of the principles of the invention. Many variations and modifications may be made to the above-described embodiment(s) of the invention without departing from the spirit and scope of the invention. All such modifications and variations are intended to be included herein within the scope of this disclosure and the present invention and protected by the following claims.
This application claims priority under 35 U.S.C. §119 to Provisional Patent Application Ser. No. 61/231,423, filed Aug. 5, 2009, entitled “MOBILE MOOD MACHINE,” the entire contents of which are hereby incorporated by reference herein.