Embodiments of the present invention relate to an adaptive user interface. In particular, some embodiments relate to methods, systems, devices and computer programs for changing an appearance of a graphical user interface in response to music.
It is now common for people to listen to music using digital electronic devices such as dedicated music players or multi-functional devices that have music playing as an available function.
Such devices typically have a user interface that enables a user of the device to control the device. Some devices have a graphical user interface (GUI).
Digital music is a growth business, but it is extremely competitive. It would therefore be desirable to increase the value associated with digital music and/or digital music players so that they are more desirable and consequently more valuable.
According to an embodiment of the invention there is provided a method comprising: obtaining music information that defines at least one characteristic of audible music; and controlling changes to an appearance of a graphical user interface using the music information.
According to another embodiment of the invention there is provided a system comprising: a display for providing a graphical user interface; and a processor operable to obtain music information that defines at least one characteristic of audible music and operable to control changes to an appearance of the graphical user interface using the music information while the music is audible.
According to a further embodiment of the invention there is provided a computer program for obtaining music information that defines at least one characteristic of audible music; and controlling changes to an appearance of a graphical user interface using the music information.
According to another embodiment of the invention there is provided a method comprising: storing a data structure that defines at least how a graphical user interface changes; and changing, with successive beats of audible music, the appearance of the graphical user interface using the data structure.
For a better understanding of the present invention reference will now be made by way of example only to the accompanying drawings in which:
The processor 2 is arranged to write to and read from the memory 4 and to control the output of the display 8. It receives user input commands from the user input device 6.
The computer program instructions 6 define a graphical user interface software application. The computer program instructions 6, when loaded into the processor 2, provide the logic and routines that enable the system 10 to perform the method illustrated in
The computer program instructions 6 may arrive at the electronic device via an electromagnetic carrier signal or be copied from a physical entity 1 such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD.
The system 10 will typically be part of an electronic device such as a personal digital assistant, a personal computer, a mobile cellular telephone, a personal music player etc.
The system 10 may also be used as a music player. In this embodiment, a music track may be stored in the memory 4. Computer program instructions, when loaded into the processor 2, enable the functionality of a music player as is well known in the art. The music player processes the music track and produces an audio control signal which is provided to an audio output device 8 to play the music. The audio output device may be, for example, a loudspeaker or a jack for headphones. The music player is responsible for the audio playback, i.e., it reads the music track and renders it to audio.
The GUI database 12 stores a plurality of independent GUI models as independent data structures 13.
A GUI model defines a particular GUI 20 and, if the GUI 20 adapts automatically to audible music, it defines how the GUI adapts with musical time.
For example, the adaptable GUI illustrated in
A GUI model for an automatically adaptable GUI consequently defines an ordered sequence of GUI configurations that are adopted at a rate determined by the beat of the music. A configuration is the collection of the graphical items forming the GUI 20 and their visual attributes. Thus, the GUI model defines how the graphical items and their visual attributes change with musical time.
The graphical items will be different for each GUI 20, but may include, for example, indicators (e.g. battery life remaining, received signal strength, volume, etc), items (such as menu entries, icons or buttons) for selection by a user, a background and images.
The visual attributes may include one or more of: the position(s) of one or more graphical items; the size(s) of one or more graphical items; the shape(s) of one or more graphical items; the color of one or more graphical items; a color palette; the animation of one or more graphical items such as the fluttering of a graphical menu item like a flag in time with the music.
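By way of illustration only, a GUI model data structure 13 might be represented as sketched below in Python. The field names (e.g. "configurations", "background", "menu_item") are assumptions of this sketch and are not taken from the specification; each entry in the ordered sequence is one GUI configuration adopted on the next beat of the music.

```python
# Illustrative sketch of one GUI model data structure 13 (field names assumed).
# The "configurations" list is the ordered sequence of GUI configurations that
# are adopted, one per beat, at a rate determined by the tempo of the music.
GUI_MODEL = {
    "name": "pulse",
    "configurations": [
        {   # configuration adopted on beat 1
            "background": {"color": (10, 10, 40), "opacity": 1.0},
            "menu_item": {"position": (20, 40), "scale": 1.0},
        },
        {   # configuration adopted on beat 2
            "background": {"color": (10, 10, 40), "opacity": 0.6},
            "menu_item": {"position": (20, 40), "scale": 1.15},
        },
    ],
}
```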
Consequently, it will be appreciated that
For example, the background may fade in and out with the tempo of the music and/or the color palette used for the graphical user interface may vary with the tempo of the music.
The selection of the current GUI model is schematically illustrated at block 50 in
The context information may be, for example, a user input command 62 that selects or specifies the current GUI model.
The selection may alternatively be automatic, that is, made without user intervention.
The context information may be, for example, music information such as metadata 64 provided with the music track that is being played or derived by processing the audible music. This metadata may indicate characteristics of the music such as, for example, the music genre, keywords from the lyrics, time signature, mood (danceable, romantic) etc. The automatic selection of the current GUI model may be based on the metadata.
The context information may be, for example, environmental music information that is detected from radio or sound waves in the environment of the system 10. For example, it may be metadata derived by processing ambient audible music detected via a microphone 66. This metadata may indicate characteristics of the music such as, for example, the music genre, keywords from the lyrics detected using voice recognition, time signature etc. The automatic selection of the current GUI model may be based on the metadata.
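A minimal sketch of the selection at block 50, assuming the context information is available as a metadata dictionary and the GUI model database 12 is a mapping from model names to data structures 13 (the genre-to-model mapping shown is purely illustrative):

```python
def select_gui_model(database, metadata=None, user_choice=None):
    """Select the current GUI model (block 50) from context information."""
    if user_choice is not None:                          # user input command 62
        return database.get(user_choice)
    genre = (metadata or {}).get("genre", "").lower()    # metadata 64 or derived via microphone 66
    genre_to_model = {"dance": "pulse", "classical": "calm"}   # assumed mapping
    return database.get(genre_to_model.get(genre, "default"))
```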
At step 52, music information that is dependent upon a characteristic of the music, such as the tempo of the music track, is obtained. The tempo is typically in the form of beats per minute. The music tempo may be provided with the music track as metadata, derived from the music or input by the user. Derivation of the music tempo is suitable when the music is produced from a stored music track and also when the music is ambient music produced by a third party.
The tempo information can be derived automatically using digital signal processing techniques. There are known solutions for extracting beat information from an acoustic signal, e.g.
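As one illustration of such a technique (and not the specific solution relied upon by the embodiments), the publicly available librosa library provides a beat tracker; a sketch of its use, assuming the music track is available as an audio file, is:

```python
import librosa

# Load the stored music track and estimate its tempo and beat positions.
y, sr = librosa.load("track.wav")                        # assumed file name
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print(f"Estimated tempo: {float(tempo):.1f} beats per minute")
```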
At step 54, the processor 2 uses the music tempo obtained in step 52 and the current GUI model to control the GUI 20 displayed on the display 8. The GUI 20 changes its appearance in time with the audible music. The appearance of the GUI may be changed with successive beats of the audible music in a manner defined by the current GUI model.
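A minimal sketch of step 54, assuming the current GUI model provides its ordered sequence of configurations and a hypothetical apply_configuration callback redraws the GUI 20:

```python
import itertools
import time

def run_beat_synchronised_gui(configurations, tempo_bpm, apply_configuration, is_playing):
    """Step through the GUI configurations once per beat (sketch of step 54)."""
    beat_period = 60.0 / tempo_bpm                    # seconds per beat
    for configuration in itertools.cycle(configurations):
        if not is_playing():                          # stop when the music stops
            break
        apply_configuration(configuration)            # redraw the GUI 20 with this configuration
        time.sleep(beat_period)                       # wait for the next beat
```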
Each GUI model data structure 13 may be independently transferable into and out of the database 12. A data structure 13 can, for example, be downloaded from a website, uploaded to a website, transferred from one device or storage device to another, etc. Each GUI model data structure 13, and therefore each GUI model, is independently portable. A common standard model may be used as a basis for each GUI model; that is, there is a semantic convention for specifying the GUI attributes.
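As a sketch of that portability, assuming the common standard model is a JSON-serialisable structure such as the dictionary shown earlier, a data structure 13 could be exported and imported as follows (the function names are illustrative):

```python
import json

def export_gui_model(model, path):
    """Write a GUI model data structure 13 to a portable file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(model, f, indent=2)

def import_gui_model(path):
    """Read a GUI model data structure 13 back, e.g. into the database 12."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```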
A new GUI model can be created by a user by creating a new GUI model data structure 13 and storing it in the GUI model database 12.
Also, an existing GUI model may be varied by editing the existing GUI model data structure 13 for that GUI model and saving the new data structure in the GUI model database 12.
A GUI model data structure 13 for use with a music track may be provided with that music track.
Optionally, at step 52, information other than the tempo of the music track can be obtained. This may include, for example, the pitch, which can be estimated using methods presented in the literature, e.g. A. de Cheveigne and H. Kawahara, "YIN, a fundamental frequency estimator for speech and music", J. Acoust. Soc. Am., vol. 111, pp. 1917-1930, April 2002, or Matti P. Ryynanen and Anssi Klapuri, "Polyphonic Music Transcription Using Note Event Modeling", Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 16-19, 2005, New Paltz, N.Y. For example, the color of a GUI element may be adapted according to the pitch, e.g. such that the color changes from blue to red when the pitch of the music increases.
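As an illustration only, pitch could be estimated with the YIN implementation available in the librosa library and mapped to a blue-to-red color; the frequency range and the mapping below are assumptions of this sketch rather than part of the specification:

```python
import librosa
import numpy as np

y, sr = librosa.load("track.wav")                     # assumed file name
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)

def pitch_to_rgb(f0_hz, lo=65.4, hi=2093.0):
    """Map a pitch in Hz to a color between blue (low) and red (high)."""
    t = np.clip((np.log2(f0_hz) - np.log2(lo)) / (np.log2(hi) - np.log2(lo)), 0.0, 1.0)
    return (int(255 * t), 0, int(255 * (1 - t)))      # (red, green, blue)

element_color = pitch_to_rgb(np.median(f0))           # color for the GUI element
```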
A filter bank may be used to divide the music spectrum into N bands and to analyze the energy in each band. As an example, the energies and energy changes in different bands can be detected and produced as musical information for use at step 54. For example, the spectrum can be divided into three bands and the energies in each can be used to control the amount of red, blue, and green color in a GUI element or background.
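A sketch of such an analysis for a single audio frame, assuming three bands with illustrative band edges and a simple energy-to-color mapping:

```python
import numpy as np

def band_energies_to_rgb(frame, sr):
    """Map the energies of three frequency bands of one audio frame to (R, G, B)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    bands = [(0, 200), (200, 2000), (2000, sr / 2)]       # assumed low/mid/high split in Hz
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    total = sum(energies) or 1.0                          # avoid division by zero in silence
    return tuple(int(255 * e / total) for e in energies)  # relative band energy -> color amount
```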
The musical information may identify different instruments. Essid, Richard and David, "Instrument Recognition in Polyphonic Music", in Proc. IEEE Int. Conference on Acoustics, Speech, and Signal Processing, 2005, provide a method for recognizing the presence of different musical instruments. For example, detecting the presence of an electric guitar may make a UI element ripple, creating the illusion that the distortion of the guitar sound distorts the graphical element.
The musical information may identify music harmony and tonality. Gomez and Herrera, "Automatic Extraction of Tonal Metadata from Polyphonic Audio Recordings", AES 25th International Conference, London, United Kingdom, 17-19 Jun. 2004, provide a method for identifying music harmony and tonality. For example, the GUI model might define that certain chords of the music are mapped to different colors.
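A sketch of such a mapping, with purely illustrative chord labels and colors that in practice would form part of the GUI model data structure 13:

```python
# Hypothetical chord-to-color mapping; labels and colors are illustrative only.
CHORD_COLORS = {
    "C:maj": (255, 200, 0),
    "A:min": (80, 80, 200),
    "G:maj": (0, 180, 90),
}

def color_for_chord(chord_label, default=(128, 128, 128)):
    """Return the GUI color associated with a detected chord."""
    return CHORD_COLORS.get(chord_label, default)
```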
The GUI could also be adapted according to the characteristics of the sound captured by the microphone. For example, the GUI elements can be made to ripple according to the volume of the sound recorded with the microphone. Thus, loud noises in the environment of the device can, for example, cause the GUI elements to ripple. In this case the music player of the device is not playing anything; the device simply analyzes the incoming audio recorded with the microphone and uses its characteristics to control the appearance of the GUI items.
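A minimal sketch of that behavior, assuming the microphone 66 delivers frames of normalized floating-point samples and a hypothetical ripple amplitude in pixels drives the GUI animation:

```python
import numpy as np

def ripple_amplitude(mic_frame, max_pixels=20):
    """Scale the ripple of GUI elements by the loudness of the ambient sound."""
    rms = np.sqrt(np.mean(np.square(mic_frame.astype(np.float64))))  # loudness of this frame
    return min(max_pixels, int(max_pixels * rms))                    # louder sound -> larger ripple
```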
Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.
Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.