METHOD FOR GENERATING A VISUALIZING MAP OF MUSIC

Information

  • Patent Application
  • Publication Number
    20070157795
  • Date Filed
    December 28, 2006
  • Date Published
    July 12, 2007
Abstract
The present invention provides a method for generating a visualizing map of music in accordance with identifiable features of the music. First, the music is divided into plural segments, preferably of identical length. An audio analysis is then executed to determine the mood type of each segment. Each mood type may be determined by certain parameters, such as the tempo value and the articulation type. In addition, every mood type corresponds to a certain visualizing expression, and the correspondence can be defined in advance, for example in a lookup table. Finally, the visualizing map of the music is generated according to the mood types and the distribution of the visualizing expressions.
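The workflow summarized above can be pictured with the short Python sketch below. It is only an illustration under stated assumptions, not the patented implementation: the 10-second segment length, the tempo and articulation thresholds, and the mood-to-color table are choices made here for readability.

```python
# Illustrative sketch of the abstract's workflow: divide the music into segments,
# determine a mood type per segment, and map each mood type to a visualizing
# expression through a predefined table.
# Segment length, mood rules, and the mood-to-color table are assumptions.

from dataclasses import dataclass
from typing import List, Tuple

# Assumed lookup table: each mood type corresponds to a visualizing expression
# (here simply a color name).
MOOD_TO_EXPRESSION = {
    "calm": "blue",
    "happy": "yellow",
    "energetic": "red",
    "sad": "gray",
}

@dataclass
class Segment:
    start: float        # seconds
    end: float          # seconds
    tempo_bpm: float    # tempo value from the audio analysis
    articulation: str   # articulation type, e.g. "legato" or "staccato"

def divide(duration_s: float, seg_len_s: float = 10.0) -> List[Tuple[float, float]]:
    """Divide the music into segments of (preferably) identical length."""
    edges, t = [], 0.0
    while t < duration_s:
        edges.append((t, min(t + seg_len_s, duration_s)))
        t += seg_len_s
    return edges

def classify_mood(seg: Segment) -> str:
    """Assumed rules that turn tempo value and articulation type into a mood type."""
    if seg.tempo_bpm < 80:
        return "sad" if seg.articulation == "legato" else "calm"
    return "happy" if seg.articulation == "legato" else "energetic"

def generate_visualizing_map(segments: List[Segment]) -> List[str]:
    """One visualizing expression per segment, in playback order."""
    return [MOOD_TO_EXPRESSION[classify_mood(seg)] for seg in segments]

if __name__ == "__main__":
    # Two hypothetical analyzed segments of a 20-second clip.
    song = [
        Segment(0.0, 10.0, tempo_bpm=72, articulation="legato"),
        Segment(10.0, 20.0, tempo_bpm=132, articulation="staccato"),
    ]
    print(divide(20.0))                     # [(0.0, 10.0), (10.0, 20.0)]
    print(generate_visualizing_map(song))   # ['gray', 'red']
```

Under these assumed rules a slow legato segment renders as gray and a fast staccato segment as red; the actual correspondence is whatever the predefined table specifies.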
Description

BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flow chart showing a method of generating the visualizing map of music according to the preferred embodiment of the present invention.

FIG. 2 is a flow chart showing the procedure of the audio analysis according to the preferred embodiment of the present invention.

FIG. 3 shows five examples of visualizing maps of music according to the present invention.

FIG. 4 is a flow chart showing a method of visualizing music according to another embodiment of the present invention.


Claims
  • 1. A method for generating a visualizing map of music comprises the steps of: dividing said music into plural segments; executing an audio analysis for determining mood types of said segments; and generating said visualizing map of said music according to said mood types.
  • 2. The method as claimed in claim 1, wherein said method further comprises the step of: processing low-level features of said segments for determining said mood types, wherein said low-level features are obtained by said audio analysis.
  • 3. The method as claimed in claim 1, wherein said method further comprises the step of: designating a mood type to each visualizing expression, and allocating each said visualizing expression to one of said segments according to said mood types of said plural segments.
  • 4. The method as claimed in claim 3, wherein said visualizing map can comprise plural visualizing expressions of said segments.
  • 5. The method as claimed in claim 3, wherein said visualizing expression comprises color, texture pattern, emotion symbol or value of brightness.
  • 6. The method as claimed in claim 3, wherein said method further comprises the step of: determining a visualization summary according to the distribution of said visualizing expression; and generating a summarized visualizing map according to said visualization summary.
  • 7. The method as claimed in claim 6, wherein said visualizing map comprises said distribution, and said distribution is summarized to determine said visualization summary.
  • 8. The method as claimed in claim 1, wherein the lengths of said segments are substantially identical.
  • 9. The method as claimed in claim 1, wherein said audio analysis comprises: transferring the wave feature of a time domain to the energy feature of a frequency domain for obtaining an energy value; dividing said energy value into plural sub-bands; calculating a chord change probability of each period according to a dominant frequency of an adjacent period, wherein the length of said period is predetermined; obtaining beat points according to said chord change probability; and obtaining a tempo value according to a density of said beat points. (An illustrative code sketch of these analysis steps follows the claims.)
  • 10. The method as claimed in claim 9, wherein said dominant frequency is determined according to the energy value of every said sub-band.
  • 11. The method as claimed in claim 9, wherein said mood types are determined according to the distribution of said beat points in said segments.
  • 12. The method as claimed in claim 9, wherein said mood types are determined according to said tempo value of said segments.
  • 13. The method as claimed in claim 1, wherein said mood types are determined according to articulation types of said segments, and said articulation types are detected in said audio analysis.
  • 14. The method as claimed in claim 13, wherein said articulation types are determined by detecting a relative silence of said music.
  • 15. A method for visualizing music, comprising the steps of: dividing said music into plural segments; analyzing said segments to obtain identifiable features; determining the visualizing expressions of said segments according to said identifiable features; and presenting said visualizing expressions in order while said music is played.
  • 16. The method as claimed in claim 15, which further comprises: executing an audio analysis for obtaining low-level features, and processing said low-level features for obtaining said identifiable features.
  • 17. The method as claimed in claim 15, which further comprises: designating each of said identifiable features to a visualizing expression, and allocating said visualizing expression to each of said segments according to said identifiable features of said segments.
  • 18. The method as claimed in claim 15, wherein said music is analyzed by steps comprising: transferring wave features of a time domain to energy features of a frequency domain for obtaining an energy value; dividing said energy value into plural sub-bands; calculating a chord change probability of each period according to a dominant frequency of an adjacent period, wherein the length of said period is predetermined; obtaining beat points according to said chord change probability; and obtaining a tempo value according to a density of said beat points.
  • 19. The method as claimed in claim 18, wherein said dominant frequency is determined according to energy value of every said sub-band.
  • 20. The method as claimed in claim 15, wherein said identifiable features are determined according to the distribution of said beat points, an articulation type or a tempo value.
  • 21. The method as claimed in claim 20, wherein said articulation type is determined by detecting a relative silence of said music.
  • 22. The method as claimed in claim 15, wherein said visualizing expressions include a color, a texture pattern, an emotion symbol or a value of brightness.
  • 23. The method as claimed in claim 15, wherein said music is played by a computer or player and said visualizing expressions are presented on a display of said computer or player.
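The audio-analysis steps recited in claims 9 and 18 can be pictured with the minimal Python sketch below. It is an illustration under stated assumptions rather than the claimed implementation: the period length, the number of sub-bands, the use of a dominant sub-band comparison as the chord change probability, and the beat threshold are all choices made here for readability; the claims leave these parameters open.

```python
# Minimal sketch of the audio analysis recited in claims 9 and 18.
# Window length, sub-band count, the chord-change score, and the beat threshold
# are assumptions; the claims do not fix these parameters.

import numpy as np

def analyze(samples: np.ndarray, sample_rate: int,
            period_seconds: float = 0.1,
            n_subbands: int = 8,
            beat_threshold: float = 0.5):
    hop = int(period_seconds * sample_rate)
    n_periods = len(samples) // hop

    # 1. Transfer the time-domain wave to frequency-domain energy, period by period.
    dominant = []
    for i in range(n_periods):
        frame = samples[i * hop:(i + 1) * hop]
        energy = np.abs(np.fft.rfft(frame)) ** 2

        # 2. Divide the energy values into sub-bands and note the dominant
        #    sub-band (the one holding the most energy).
        bands = np.array_split(energy, n_subbands)
        band_energy = np.array([b.sum() for b in bands])
        dominant.append(int(band_energy.argmax()))

    # 3. Chord-change probability of each period, scored here simply as whether
    #    the dominant sub-band differs from that of the adjacent (previous) period.
    chord_change = np.zeros(n_periods)
    chord_change[1:] = [1.0 if dominant[i] != dominant[i - 1] else 0.0
                        for i in range(1, n_periods)]

    # 4. Beat points: periods whose chord-change probability exceeds the threshold.
    beat_points = np.flatnonzero(chord_change > beat_threshold)

    # 5. Tempo value from the density of beat points (beats per minute).
    duration_min = (n_periods * period_seconds) / 60.0
    tempo_bpm = len(beat_points) / duration_min if duration_min > 0 else 0.0
    return beat_points, tempo_bpm

if __name__ == "__main__":
    # Hypothetical 4-second test tone that jumps between two pitches twice a second.
    sr = 22050
    t = np.arange(sr * 4) / sr
    freq = np.where((t * 2).astype(int) % 2 == 0, 500.0, 3000.0)
    beats, bpm = analyze(np.sin(2 * np.pi * freq * t), sr)
    print(len(beats), round(bpm, 1))
```

The returned beat points and tempo value are the quantities from which claims 11 to 13 derive the mood types, for instance with threshold rules like those sketched after the abstract.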
Priority Claims (1)
  • Number: 095100816
  • Date: Jan 2006
  • Country: TW
  • Kind: national