The present disclosure relates generally to language instruction and translation and, more specifically, to a system and method for foreign language instruction and translation using analysis of tonal and rhythmic structures.
Learning a foreign language outside of a classroom environment is generally accomplished by listening to, and then repeating, prerecorded words and phrases. Often, no feedback or critique is involved, so the person attempting to learn the language has no idea whether they are pronouncing words correctly. Without some form of instruction that verifies the accuracy of the spoken words, learning a foreign language is a hit-or-miss proposition. The process becomes even more difficult for individuals wishing to perfect regional or country-specific accents or dialects. Proper vocal techniques to learn and maintain an accent or dialect are difficult to master, and it is often hard to progress in any significant way outside of the actual time spent with an instructor.
Some translation devices are available that employ the use of a language-to-language dictionary or phrasebook, but they generally require that the word in one language be typed via a small keyboard or keypad located on the device, with the corresponding foreign language word thereafter appearing. Some translators will generate a synthesized voice that gives a generic pronunciation of the word, but the translator does not provide a genuine pronunciation as one would expect from a person whose native language is that to which words and phrases are being translated. Methods are needed which improve the accuracy and efficiency of foreign language processing.
Accordingly, in one aspect, a foreign language processing system is disclosed, comprising a user input device, a processing device, and a display; wherein said processing device executes computer readable code to select a foreign language word which corresponds to the meaning of a native language word entered by a user using said user input device; wherein said processing device executes computer readable code to create a first visual representation of said foreign language word for output on said display; and wherein said first visual representation is generated according to a method comprising the steps of: (a) labeling the perimeter of a circle with a plurality of labels corresponding to a plurality of equally spaced frequency intervals in an octave, such that moving clockwise or counter-clockwise between adjacent ones of said labels represents a first frequency interval; (b) identifying an occurrence of a first frequency within the foreign language word; (c) identifying an occurrence of a second frequency within the foreign language word; (d) identifying a first label corresponding to the first frequency; (e) identifying a second label corresponding to the second frequency; and (f) creating a first line connecting the first label and the second label.
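By way of illustration only, steps (a)-(f) may be sketched in Python as follows; the twelve equal-tempered semitone labels, the unit-circle geometry, and the A4 = 440 Hz reference are assumptions for the sketch, not limitations of the claimed method:

```python
import math

# (a) Twelve equally spaced labels around the circle perimeter; moving
# between adjacent labels represents one semitone (the first interval).
LABELS = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def label_position(index, radius=1.0):
    """Perimeter coordinates of label `index` on the circle."""
    angle = 2 * math.pi * index / len(LABELS)
    return (radius * math.sin(angle), radius * math.cos(angle))

def label_for_frequency(freq, ref=440.0):
    """(b)-(e) Identify the label nearest in pitch class to `freq`
    (ref = A4 at 440 Hz, an assumed tuning)."""
    semitones = round(12 * math.log2(freq / ref))
    return LABELS[semitones % 12]

def line_between(freq1, freq2):
    """(f) Endpoints of the first line connecting the two labels
    identified for the two frequencies."""
    i1 = LABELS.index(label_for_frequency(freq1))
    i2 = LABELS.index(label_for_frequency(freq2))
    return label_position(i1), label_position(i2)
```

A word containing occurrences of 440 Hz and 523.25 Hz, for example, would produce a line from the "A" label to the "C" label.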
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, and alterations and modifications in the illustrated device, and further applications of the principles of the invention as illustrated therein are herein contemplated as would normally occur to one skilled in the art to which the invention relates.
Before describing the system and method of foreign language processing, a summary of the above-referenced music tonal and rhythmic visualization methods will be presented. The tonal visualization methods are described in U.S. patent application Ser. No. 11/827,264 filed Jul. 11, 2007 entitled “Apparatus and Method for Visualizing Music and Other Sounds” which is hereby incorporated by reference.
There are three traditional scales or ‘patterns’ of musical tone that have developed over the centuries. These three scales, each made up of seven notes, have become the foundation for virtually all musical education in the modern world. There are, of course, other scales, and it is possible to create any arbitrary pattern of notes that one may desire; but the vast majority of musical sound can still be traced back to these three primary scales.
Each of the three main scales is a lopsided conglomeration of seven intervals: the Major Scale (whole, whole, half, whole, whole, whole, half), the Harmonic-Minor Scale (whole, half, whole, whole, half, augmented second, half), and the Melodic-Minor Scale in its ascending form (whole, half, whole, whole, whole, whole, half).
Unfortunately, our traditional musical notation system has also been based upon the use of seven letters (or note names) to correspond with the seven notes of the scale: A, B, C, D, E, F and G. The problem is that, depending on which of the three scales one is using, there are actually twelve possible tones to choose from in the ‘pool’ of notes used by the three scales. Because of this discrepancy, the traditional system of musical notation has been inherently lopsided at its root.
With a circle of twelve tones and only seven note names, there are (of course) five missing note names. To compensate, the traditional system of music notation uses a somewhat arbitrary system of ‘sharps’ (#'s) and ‘flats’ (b's) to cover the remaining five tones so that a single notation system can be used to encompass all three scales. For example, depending on the key signature, the seven tones of a scale will mix ‘pure letter’ tones (like ‘A’) with sharp or flat tones (like C# or Gb). This leads to a complex system of reading and writing notes on a staff, where one has to mentally juggle a key signature with various accidentals (sharps and flats) that are then added one note at a time. The result is that the seven-note scale, which is a lopsided entity, is presented as a straight line on the traditional musical notation staff. On the other hand, truly symmetrical patterns (such as the chromatic scale) are represented in a lopsided manner on the traditional musical staff. All of this inefficiency stems from the inherent flaw of the traditional written system being based upon the seven-note scales instead of the twelve-tone circle.
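The discrepancy can be made concrete in a short sketch (Python); the sharp-only spellings are illustrative, since the enharmonic spelling (C# versus Db) of each borrowed tone depends on the key signature:

```python
# The seven 'pure letter' note names versus the twelve chromatic tones.
NATURALS = ["A", "B", "C", "D", "E", "F", "G"]
CHROMATIC = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

# Five of the twelve tones have no pure letter name of their own and
# must borrow a sharp (or, equivalently, flat) spelling:
MISSING = [tone for tone in CHROMATIC if tone not in NATURALS]
```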
To overcome this inefficiency, a set of mathematically based, color-coded MASTER KEY™ diagrams is presented to better explain the theory and structures of music using geometric form and the color spectrum. As shown in
The next ‘generation’ of the MASTER KEY™ diagrams involves thinking in terms of two note ‘intervals.’ The Interval diagram, shown in
Another important aspect of the MASTER KEY™ diagrams is the use of color. Because there are six basic music intervals, the six basic colors of the rainbow can be used to provide another way to comprehend the basic structures of music. In a preferred embodiment, the interval line 12 for a half step is colored red, the interval line 14 for a whole step is colored orange, the interval line 16 for a minor third is colored yellow, the interval line 18 for a major third is colored green, the interval line 20 for a perfect fourth is colored blue, and the interval line 22 for a tri-tone is colored purple. In other embodiments, different color schemes may be employed. What is desirable is that there is a gradated color spectrum assigned to the intervals so that they may be distinguished from one another by the use of color, which the human eye can detect and process very quickly.
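A minimal sketch of the preferred color assignment follows (Python); the reference numerals from the diagrams are carried as comments, and folding intervals wider than a tri-tone to their octave complement is an assumption for illustration:

```python
# Color coding of the six basic intervals (sizes in half steps), per
# the preferred embodiment described above.
INTERVAL_COLORS = {
    1: "red",     # half step      (interval line 12)
    2: "orange",  # whole step     (interval line 14)
    3: "yellow",  # minor third    (interval line 16)
    4: "green",   # major third    (interval line 18)
    5: "blue",    # perfect fourth (interval line 20)
    6: "purple",  # tri-tone       (interval line 22)
}

def interval_color(note_a, note_b):
    """Color for the interval between two pitch classes (0-11).
    Wider intervals fold to their complement within the octave; a
    unison or octave (distance 0) has no interval line."""
    d = abs(note_a - note_b) % 12
    return INTERVAL_COLORS.get(min(d, 12 - d))
```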
The next group of MASTER KEY™ diagrams pertains to extending the various intervals 12-22 to their completion around the twelve-tone circle 10. This concept is illustrated in
The next generation of MASTER KEY™ diagrams is based upon musical shapes that are built with three notes. In musical terms, three note structures are referred to as triads. There are only four triads in all of diatonic music, and they have the respective names of major, minor, diminished, and augmented. These four, three-note shapes are represented in the MASTER KEY™ diagrams as different sized triangles, each built with various color coded intervals. As shown in
The next group of MASTER KEY™ diagrams are developed from four notes at a time. Four note chords, in music, are referred to as seventh chords, and there are nine types of seventh chords.
Every musical structure that has been presented thus far in the MASTER KEY™ system, aside from the six basic intervals, has come directly out of three main scales. Again, the three main scales are as follows: the Major Scale, the Harmonic-Minor Scale, and the Melodic-Minor Scale. The major scale is the most common of the three main scales and is heard virtually every time music is played or listened to in the western world. As shown in
The previously described diagrams have been shown in two dimensions; however, music is not so much a circle as it is a helix. Every twelfth note (an octave) is one helix turn higher or lower than the preceding level. What this means is that music can be viewed not only as a circle but as something that will look very much like a DNA helix, specifically, a helix of approximately ten and one-half turns (i.e., octaves). There are only a small number of helix turns in the complete spectrum of audible sound, from the lowest to the highest audible pitch. By using a helix instead of a circle, not only can the relative pitch difference between the notes be discerned, but the absolute pitch of the notes can be seen as well. For example,
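One possible mapping of a frequency onto such a helix is sketched below (Python); the 20 Hz reference for the bottom of the helix and the unit radius are assumptions for illustration:

```python
import math

def helix_point(freq, f0=20.0, radius=1.0):
    """Map a frequency onto the pitch helix: the angle encodes pitch
    class (relative pitch), the height encodes octaves above `f0`
    (absolute pitch); each octave is one full turn."""
    turns = math.log2(freq / f0)
    angle = 2 * math.pi * turns
    return (radius * math.cos(angle), radius * math.sin(angle), turns)

# Notes an octave apart land at the same angle, one turn higher:
p1 = helix_point(440.0)
p2 = helix_point(880.0)
```

With these assumptions the audible range of roughly 20 Hz to 20 kHz spans about ten turns, consistent with the helix described above.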
The use of the helix becomes even more powerful when a single chord is repeated over multiple octaves. For example,
The above described MASTER KEY™ system provides a method for understanding the tonal information within musical compositions. Another method, however, is needed to deal with the rhythmic information, that is, the duration of each of the notes and relative time therebetween. Such rhythmic visualization methods are described in U.S. Utility patent application Ser. No. 12/023,375 filed Jan. 31, 2008 entitled “Device and Method for Visualizing Musical Rhythmic Structures” which is also hereby incorporated by reference.
In addition to being flawed in relation to tonal expression, traditional sheet music also has shortcomings with regard to rhythmic information. This becomes especially problematic for percussion instruments that, while tuned to a general frequency range, primarily contribute to the rhythmic structure of music. For example, traditional staff notation 1250, as shown in the upper portion of
The lower portion of
Because cymbals have a higher auditory frequency than drums, cymbal toroids have a correspondingly larger diameter than any of the drums. Furthermore, the amorphous sound of a cymbal will, as opposed to the crisp sound of a snare, be visualized as a ring of varying thickness, much like the rings of a planet or a moon. The “splash” of the cymbal can then be animated as a shimmering effect within this toroid. In one embodiment, the shimmering effect can be achieved by randomly varying the thickness of the toroid at different points over the circumference of the toroid during the time period in which the cymbal is being sounded as shown by toroid 1204 and ring 1306 in
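The shimmering effect described above can be sketched as a per-frame thickness function (Python); the base thickness, jitter depth, and number of circumference samples are illustrative values:

```python
import math
import random

def shimmer_ring(base=0.1, jitter=0.5, points=64, rng=None):
    """One animation frame of the cymbal toroid: (angle, thickness)
    samples whose thickness varies randomly around the circumference,
    producing the shimmering effect while the cymbal sounds."""
    rng = rng or random.Random()
    return [
        (2 * math.pi * i / points, base * (1 + jitter * rng.random()))
        for i in range(points)
    ]
```

Regenerating the ring each frame for the duration of the cymbal's sound yields the shimmer; a drum, by contrast, would keep a constant thickness.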
The spatial layout of the two dimensional side view shown in
The 3-D visualization of this Rhythmical Component as shown, for example, in
The two-dimensional view of
In other embodiments, each spheroid (whether it appears as such or as a circle or line) and each toroid (whether it appears as such or as a ring, line or bar) representing a beat when displayed on the graphical user interface will have an associated small “flag” or access control button. By mouse-clicking on one of these access controls, or by click-dragging a group of controls, a user will be able to highlight and access a chosen beat or series of beats. With a similar attachment to the MASTER KEY™ music visualization software (available from Musical DNA LLC, Indianapolis, Ind.), it will become very easy for a user to link chosen notes and musical chords with certain beats and create entire musical compositions without the need to write music using standard notation. This will allow access to advanced forms of musical composition and musical interaction for musical amateurs around the world.
The present disclosure utilizes the previously described visualization methods as the basis for a system of foreign language processing and instruction. The easily visualized tonal and rhythmic shapes allow a person to “see” their sounds or words as they attempt to pronounce them correctly and clearly. The system can benefit individuals wishing to learn a foreign language outside of a traditional classroom setting. Even students having the benefit of a live instructor will improve their comprehension due to the intuitive feedback provided by the disclosed system. Actors who normally employ voice coaches to help them master a specific dialect for a particular role can also utilize the system to perfect their reproduction of the required accent.
The digital audio input device 1502 may include a digital music player such as an MP3 device or CD player, an analog music player, instrument or device with appropriate interface, transponder and analog-to-digital converter, or a digital music file, as well as other input devices and systems. The digital audio information may contain prerecorded voice recordings that can be visualized to serve as an example for the user when comparing visualizations of their own voice.
The processing device 1508 may be implemented on a personal computer, a workstation computer, a laptop computer, a palmtop computer, a wireless terminal having computing capabilities (such as a cell phone having a Windows CE or Palm operating system), a game terminal, or the like. It will be apparent to those of ordinary skill in the art that other computer system architectures may also be employed.
In general, such a processing device 1508, when implemented using a computer, comprises a bus for communicating information, a processor coupled with the bus for processing information, a main memory coupled to the bus for storing information and instructions for the processor, a read-only memory coupled to the bus for storing static information and instructions for the processor. The display 1510 is coupled to the bus for displaying information for a computer user and the input devices 1512, 1514 are coupled to the bus for communicating information and command selections to the processor. A mass storage interface for communicating with data storage device 1509 containing digital information may also be included in processing device 1508 as well as a network interface for communicating with a network.
The processor may be any of a wide variety of general purpose processors or microprocessors such as the PENTIUM microprocessor manufactured by Intel Corporation, a POWER PC manufactured by IBM Corporation, a SPARC processor manufactured by Sun Microsystems, or the like. It will be apparent to those of ordinary skill in the art, however, that other varieties of processors may also be used in a particular computer system. Display 1510 may be a liquid crystal device (LCD), a cathode ray tube (CRT), a plasma monitor, a holographic display, or other suitable display device. The mass storage interface may allow the processor access to the digital information in the data storage devices via the bus. The mass storage interface may be a universal serial bus (USB) interface, an integrated drive electronics (IDE) interface, a serial advanced technology attachment (SATA) interface or the like, coupled to the bus for transferring information and instructions. The data storage device 1509 may be a conventional hard disk drive, a floppy disk drive, a flash device (such as a jump drive or SD card), an optical drive such as a compact disc (CD) drive, digital versatile disc (DVD) drive, HD DVD drive, BLU-RAY drive, or another magnetic, solid state, or optical data storage device, along with the associated medium (a floppy disk, a CD-ROM, a DVD, etc.).
In general, the processor retrieves processing instructions and data from the data storage device 1509 using the mass storage interface and downloads this information into random access memory for execution. The processor then executes an instruction stream from random access memory or read-only memory. Command selections and information that is input at input devices 1512, 1514 are used to direct the flow of instructions executed by the processor. Input device 1514 may equivalently be a pointing device such as a conventional trackball. The results of this processing execution are then displayed on display device 1510.
The processing device 1508 is configured to generate an output for viewing on the display 1510 and/or for driving the printer 1516 to print a hardcopy. Preferably, the video output to display 1510 is also a graphical user interface, allowing the user to interact with the displayed information.
The system 1500 may optionally include one or more subsystems 1551 substantially similar to subsystem 1501 and communicating with subsystem 1501 via a network 1550, such as a LAN, WAN or the internet. Subsystems 1501 and 1551 may be configured to act as a web server, a client or both and will preferably be browser enabled. Thus, with system 1500, remote teaching is made possible when an instructor is not able to be physically present. System 1500 may also be configured to be portable so an individual can continue to practice and improve their foreign language or dialect ability even after the sessions with a live instructor have ended, thereby avoiding the potential problem of stagnation or regression due to the lack of continuing instruction.
In operation, microphone 1503 is operative to receive or pick up words or phrases of a person attempting to speak a foreign language. In this context, foreign language is contemplated to mean a language other than the natural or familiar language of an individual. Microphone 1503 creates signals representative of the words or phrases that have been spoken and applies them to processor 1508. Processor 1508 creates tonal and rhythm visualization components from the microphone signals and displays them on display 1510. The material that is spoken may be reproduced via speaker 1520, if desired. The processing device 1508 may receive stored or archived visualization components, preferably in an encoded or digital format, from the data storage device 1509. Reference audio signals may also be retrieved from data storage device 1509 or digital audio input device 1502.
The visualization components, whether in encoded or unencoded form, contain information relating to the particular words or phrase of interest, including pitch, timbre, and volume, as non-limiting examples. The visualization components of the words or phrases spoken by a person may be stored in data storage device 1509, or recorded on removable or portable media, as may the visualization components of the same words or phrases spoken by an individual whose native language is that which the person is attempting to learn. The visualization components of the spoken words or phrases may be used to create a visual side-by-side comparison to aid the person in gauging his or her progress over time in learning the foreign language. By viewing the specific characteristics of the tonal and rhythm visualization components, e.g., shapes and colors, of the words or phrases spoken by the person, as compared to the same words or phrases spoken by the individual speaking in their native language, the person can immediately see the areas where their vocal technique is deficient, e.g., maintaining correct inflection, controlling voice modulation, etc. The visualization components allow the person to practice fundamentals, e.g., matching the pronunciation of a single word, as well as more complex techniques, e.g., reciting complete sentences with proper grammar. The person can then concentrate or focus on specific areas that require improvement or are most relevant for the needs of the person, with any improvement or regression being immediately visible on display 1510.
In order to visualize the individual components of a person's speech, the system 1500 can implement software operating as an audio signal or note extractor. The audio extractor examines the voice signals received by the microphone 1503 and determines which frequencies are most important to the sounding of a given syllable or word. The frequency content is then mapped to certain colors within a tonal circle or helix and displayed to the user. Various audio frequency extraction methods are described in U.S. Patent Application Ser. No. 61/025,374 filed Feb. 1, 2008 entitled “Apparatus and Method for Visualization of Music Using Note Extraction” which is hereby incorporated by reference.
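The extraction methods themselves are described in the referenced application; purely for illustration, a naive discrete Fourier transform can find the strongest frequency in a short voice frame and reduce it to a position (and hence a color) on the twelve-tone circle. The A = 440 Hz reference is an assumption of this sketch:

```python
import math

def dominant_frequency(samples, rate):
    """Return the frequency (Hz) of the strongest DFT bin in a frame.
    A naive O(n^2) DFT, adequate only for illustration."""
    n = len(samples)
    best_bin, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * rate / n

def pitch_class(freq, ref=440.0):
    """Position on the twelve-tone circle, with A (440 Hz) at 0."""
    return round(12 * math.log2(freq / ref)) % 12
```

The resulting pitch class would then index into a color map such as the interval spectrum described earlier.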
It shall be understood that while the tonal circle and helix have been described above as having twelve subdivisions corresponding to twelve notes in a musical scale, a higher or lower number of subdivisions can be used to represent the complex nature of the human voice. This is particularly important when visualizing speech sounds or patterns.
In addition to tonal frequency changes, the system 1500 can provide visualization of the rhythm components of a person's voice. Because the proper pronunciation of letters or words is often dependent on rhythm or cadence, these visualizations will help students of a foreign language understand whether they are correctly enunciating a particular word. For example, a “B” sound has a short attack time and a relatively longer release. This rhythm could be visualized as a small diameter spheroid, much like a bass drum as described hereinabove. On the other hand, an “S” sound is a more open-ended sound which could be visualized much like a cymbal in the preceding examples.
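This distinction can be sketched by measuring how quickly an amplitude envelope reaches its peak (Python); the 10% attack-fraction threshold separating the two classes is an illustrative assumption, not a value from the disclosure:

```python
def attack_fraction(envelope):
    """Fraction of the sound's duration spent reaching peak amplitude."""
    return envelope.index(max(envelope)) / len(envelope)

def classify_attack(envelope, threshold=0.1):
    """A "B"-like sound peaks almost immediately (percussive, drawn as
    a spheroid); an "S"-like sound builds gradually (sustained, drawn
    as a cymbal-like toroid)."""
    return "percussive" if attack_fraction(envelope) < threshold else "sustained"
```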
The present disclosure contemplates that a user may purchase foreign language instruction or lessons pre-programmed on electronic storage media, with or without printed materials. The program or software, accessed by processing device 1508 via data storage device 1509, for example, will then provide a step-by-step process for learning various words or vocal techniques using the previously described understandable tonal and rhythm visualization systems. The program or software will provide speech visualization for the student, and the program can be configured to provide both the visualization of the word or phrase that the student was supposed to vocalize, as well as the word or phrase that was actually vocalized by the student. Through the use of this real-time visual feedback, a student can immediately determine both visually, and aurally, that a mistake was made. The correction that is required to be made in order to speak the passage properly is also evident from the visualization system, either by merely viewing the correct visualization shape, color, or pattern, or by hints or specific instruction to the student, e.g., “you need to roll your tongue more during ‘R’ sounds.”
The present disclosure also contemplates that system 1500 may incorporate a “shape filter,” which will show a particular shape or visualization using gray or dimly-lit lines. When the user speaks words that correspond to the desired visualization, the points and lines representing those sounds will change color or become otherwise accentuated to provide further visual reinforcement to the user that the correct sounds are being vocalized. The system 1500 can also be configured to only output “correct” sounds. That is, sounds or notes vocalized by the user that are not part of the word or shape being taught will not be sounded or visually displayed. This prevents the user from being discouraged during the learning process and helps the user focus on the desired sounds.
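A minimal sketch of such a filter follows (Python); pitch classes 0-11 stand in for the points of the target visualization shape:

```python
def shape_filter(detected, target_shape):
    """Keep only detected pitch classes that belong to the target
    shape; these would be accentuated on the display, while the rest
    are neither drawn nor sounded."""
    allowed = set(target_shape)
    return [p for p in detected if p in allowed]
```

For example, a user vocalizing pitch classes [0, 3, 5, 7] against a target shape of [0, 4, 7] would see and hear only 0 and 7.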
The program or software will also be able to maintain statistics relating to the user's accuracy rate and provide rewards for improved performance. The student's accuracy can be measured both in terms of actual correct sounds in addition to rhythm accuracy. In certain embodiments, the system will keep track of which words or phrases the user has mastered so that the user can make more efficient use of practice time, concentrating on areas of difficulty. The program or software can also be configured to require a certain level of skill or mastery of a set of words or phrases before allowing the student to continue to the next level or stage of instruction. The mastery level and statistical data for each user can be managed using unique user login information. When a user logs in, the system will be able to retrieve all of the data associated with that user. This will allow multiple users to utilize a single system as in a multi-use classroom environment or by accessing the software from an application service provider using the internet or other appropriate communications link. In addition, data storage device 1509 can be used to save the current training or performance session, along with all associated audio and visualization information, for later retrieval and analysis.
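One possible arrangement of the per-user statistics and mastery gating is sketched below (Python); the field layout and the 90% mastery threshold are assumptions for illustration:

```python
class ProgressTracker:
    """Per-user accuracy statistics keyed by login, with a mastery
    threshold gating advancement to the next stage of instruction."""

    def __init__(self, mastery_threshold=0.9):
        self.threshold = mastery_threshold
        self.stats = {}  # user -> {phrase: [correct, attempts]}

    def record(self, user, phrase, correct):
        entry = self.stats.setdefault(user, {}).setdefault(phrase, [0, 0])
        entry[0] += 1 if correct else 0
        entry[1] += 1

    def mastered(self, user, phrase):
        correct, attempts = self.stats.get(user, {}).get(phrase, [0, 0])
        return attempts > 0 and correct / attempts >= self.threshold

    def may_advance(self, user, phrases):
        """Advance only when every phrase in the stage is mastered."""
        return all(self.mastered(user, p) for p in phrases)
```

On login, the stored statistics for that user would be retrieved, allowing a single system to serve multiple students.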
The instruction software or program can be configured to include a complete instructional regimen, or be sold as individual programs that require the purchase of successive modules as the student progresses in expertise. Sales of the instructional modules can be through stores, by on-line sales, or through direct downloads of software, and proof of prior accomplishment can be required to purchase the next module in an instructional series, if such control is desired.
Remote access to subsystem 1501 via network 1550 can provide help from an actual foreign language instructor or voice coach if a student needs additional help, or to demonstrate a level of accomplishment to enable advancement, for example. Access to an instructor may entail extra cost, or a certain amount of instructor time may be included in the cost of the instruction programs or modules. Subsystem 1551, connected via network 1550, may also provide a source of instruction that can supplement or take the place of the previously described pre-programmed cards or modules, as well as a source of additional information, “extra credit” exercises or practice pieces, or the ability to purchase added components for system 1500 or other items. Downloads of the instructional software can also be available via subsystem 1551 and network 1550. In certain embodiments, a “virtual” instructor can be provided, such as a computer-generated voice, with or without a graphical human representation, which prompts the user through the various exercises.
The system 1500 can also be implemented as a video gaming system in which foreign language instruction can be combined with video games to provide additional interest and enjoyment to learning language, through the use of the tonal and rhythm visualization systems, of course. Games and interactive exercises can be included in the previously described pre-programmed modules as well. The games can award points based on performance of certain words or musical scales, and allow users to collaborate and play against each other remotely over a network. The use of games in connection with the visualization systems can be especially interesting for younger students.
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiments have been shown and described and that all changes, modifications and equivalents that come within the spirit of the disclosure provided herein are desired to be protected. The articles “a,” “an,” “said,” and “the” are not limited to a singular element, and may include one or more such elements.
The present application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/913,017, filed Apr. 20, 2007, entitled “Method and Apparatus for Accent or Dialect Instruction”, U.S. Provisional Patent Application Ser. No. 60/913,015 filed Apr. 20, 2007 entitled “Method and Apparatus for Foreign Language Instruction”, and U.S. Provisional Patent Application Ser. No. 60/913,010, filed Apr. 20, 2007, entitled “Method and Apparatus for Foreign Language Translation.” This application also relates to U.S. Provisional Patent Application Ser. No. 60/830,386 filed Jul. 12, 2006 entitled “Apparatus and Method for Visualizing Musical Notation”, U.S. Utility patent application Ser. No. 11/827,264 filed Jul. 11, 2007 entitled “Apparatus and Method for Visualizing Music and Other Sounds”, U.S. Provisional Patent Application Ser. No. 60/921,578, filed Apr. 3, 2007, entitled “Device and Method for Visualizing Musical Rhythmic Structures”, and U.S. Utility patent application Ser. No. 12/023,375 filed Jan. 31, 2008 entitled “Device and Method for Visualizing Musical Rhythmic Structures”. All of these applications are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
347686 | Carpenter et al. | Aug 1886 | A |
2804500 | Giacoletto | Aug 1957 | A |
3698277 | Barra | Oct 1972 | A |
3969972 | Bryant | Jul 1976 | A |
4128846 | Robinson, Jr. | Dec 1978 | A |
4172406 | Martinez | Oct 1979 | A |
4257062 | Meredith | Mar 1981 | A |
4378466 | Esser | Mar 1983 | A |
4526168 | Hassler et al. | Jul 1985 | A |
4887507 | Shaw | Dec 1989 | A |
4907573 | Nagasaki | Mar 1990 | A |
5048390 | Adachi et al. | Sep 1991 | A |
5207214 | Romano | May 1993 | A |
5370539 | Dillard | Dec 1994 | A |
5415071 | Davies | May 1995 | A |
5563358 | Zimmerman | Oct 1996 | A |
5741990 | Davies | Apr 1998 | A |
5784096 | Paist | Jul 1998 | A |
6031172 | Papadopoulos | Feb 2000 | A |
6111755 | Park | Aug 2000 | A |
6127616 | Yu | Oct 2000 | A |
6137041 | Nakano | Oct 2000 | A |
6201769 | Lewis | Mar 2001 | B1 |
6245981 | Smith | Jun 2001 | B1 |
6265651 | Landtroop | Jul 2001 | B1 |
6350942 | Thomson | Feb 2002 | B1 |
6390923 | Yoshitomi et al. | May 2002 | B1 |
6392131 | Boyer | May 2002 | B2 |
6407323 | Karapetian | Jun 2002 | B1 |
6411289 | Zimmerman | Jun 2002 | B1 |
6414230 | Randall | Jul 2002 | B2 |
6448487 | Smith | Sep 2002 | B1 |
6544123 | Tanaka et al. | Apr 2003 | B1 |
6686529 | Kim | Feb 2004 | B2 |
6750386 | King | Jun 2004 | B2 |
6791568 | Steinberg et al. | Sep 2004 | B2 |
6841724 | George | Jan 2005 | B2 |
6856329 | Peevers et al. | Feb 2005 | B1 |
6927331 | Haase | Aug 2005 | B2 |
6930235 | Sandborn et al. | Aug 2005 | B2 |
6987220 | Holcombe | Jan 2006 | B2 |
7030307 | Wedel | Apr 2006 | B2 |
7096154 | Andrade-Cetto | Aug 2006 | B1 |
7153139 | Wen et al. | Dec 2006 | B2 |
7182601 | Donnan | Feb 2007 | B2 |
7202406 | Coleman | Apr 2007 | B2 |
7212213 | Steinberg et al. | May 2007 | B2 |
7271328 | Pangrie | Sep 2007 | B2 |
7271329 | Franzblau | Sep 2007 | B2 |
7400361 | Noske et al. | Jul 2008 | B2 |
7439438 | Hao | Oct 2008 | B2 |
7521619 | Salter | Apr 2009 | B2 |
7538265 | Lemons | May 2009 | B2 |
7548791 | Johnston | Jun 2009 | B1 |
7589269 | Lemons | Sep 2009 | B2 |
7634405 | Basu et al. | Dec 2009 | B2 |
7663043 | Park | Feb 2010 | B2 |
7667125 | Taub et al. | Feb 2010 | B2 |
7671266 | Lemons | Mar 2010 | B2 |
7714222 | Taub et al. | May 2010 | B2 |
8073701 | Lemons | Dec 2011 | B2 |
8676795 | Durgin et al. | Mar 2014 | B1 |
20020050206 | MacCutcheon | May 2002 | A1 |
20020176591 | Sandborn et al. | Nov 2002 | A1 |
20030205124 | Foote et al. | Nov 2003 | A1 |
20040089132 | Georges et al. | May 2004 | A1 |
20040148575 | Haase | Jul 2004 | A1 |
20040206225 | Wedel | Oct 2004 | A1 |
20050190199 | Brown et al. | Sep 2005 | A1 |
20050241465 | Goto | Nov 2005 | A1 |
20060107819 | Salter | May 2006 | A1 |
20060132714 | Nease et al. | Jun 2006 | A1 |
20070044639 | Farbood et al. | Mar 2007 | A1 |
20070157795 | Hung | Jul 2007 | A1 |
20070180979 | Rosenberg | Aug 2007 | A1 |
20080022842 | Lemons | Jan 2008 | A1 |
20080034947 | Sumita | Feb 2008 | A1 |
20080115656 | Sumita | May 2008 | A1 |
20080190271 | Taub et al. | Aug 2008 | A1 |
20080245212 | Lemons | Oct 2008 | A1 |
20080264239 | Lemons et al. | Oct 2008 | A1 |
20080271589 | Lemons | Nov 2008 | A1 |
20080271590 | Lemons | Nov 2008 | A1 |
20080271591 | Lemons | Nov 2008 | A1 |
20080276790 | Lemons | Nov 2008 | A1 |
20080276791 | Lemons | Nov 2008 | A1 |
20080276793 | Yamashita et al. | Nov 2008 | A1 |
20080314228 | Dreyfuss et al. | Dec 2008 | A1 |
20090223348 | Lemons | Sep 2009 | A1 |
20100154619 | Taub et al. | Jun 2010 | A1 |
Number | Date | Country |
---|---|---|
0349686 | Jan 1990 | EP |
456 860 | Nov 1991 | EP |
1354561 | Oct 2003 | EP |
05-232856 | Sep 1993 | JP |
2004-226556 | Aug 2004 | JP |
10-2006-0110988 | Oct 2006 | KR |
Entry |
---|
Patent Application Search Report mailed on Aug. 1, 2008 for PCT/US2008/59126. |
Patent Application Search Report mailed on Aug. 14, 2008 for PCT/US2008/004989. |
Patent Application Search Report mailed on Aug. 18, 2008 for PCT/US2008/005069. |
Patent Application Search Report mailed on Aug. 18, 2008 for PCT/US2008/005073. |
Patent Application Search Report mailed on Aug. 18, 2008 for PCT/US2008/005126. |
Patent Application Search Report mailed on Aug. 21, 2008 for PCT/US2008/005076. |
Patent Application Search Report mailed on Aug. 27, 2008 for PCT/US2008/005075. |
Patent Application Search Report mailed on Aug. 28, 2008 for PCT/US2008/005077. |
Patent Application Search Report mailed on Jul. 31, 2008 for PCT/US2008/005070. |
Patent Application Search Report mailed on Sep. 18, 2008 for PCT/US2008/005072. |
Patent Application Search Report mailed on Sep. 18, 2008 for PCT/US2008/05124. |
Patent Application Search Report mailed on Sep. 24, 2008 for PCT/US2008/005125. |
Patent Application Search Report mailed on Sep. 29, 2008 for PCT/US2008/005074. |
Rabiner, Huang “Fundamentals of Speech Recognition,” PTR Prentice-Hall, Inc., 1993, ISBN 0-13-285826-6, pp. 21-31, 42-68; Fig. 2.17, 2.32. |
International Preliminary Search Report dated Jul. 19, 2009 for PCT/US08/05075. |
Patent Application Search Report mailed on Aug. 25, 2009 for PCT/US2009/000684. |
Written Opinion mailed on Aug. 25, 2009 for PCT/US2009/00684. |
“Time-line of the Music Animation Machine (and related experiments)”, Music Animation Machine: History, http://www.musanim.com/mam/mamhist.htm, pp. 1-5, p. 1, pp. 1-2, pp. 1-2 & p. 1, printed Aug. 30, 2007. |
Ashton, Anthony, “Harmonograph: A Visual Guide to the Mathematics of Music,” ISBN 0-8027-1409-9, Walker Publishing Company, 2003, pp. 1-58. |
Bourke, Paul, “Harmonograph,” Aug. 1999, http://local.wasp.uwa.edu.au/˜pbourke/surfaces—curves/harmonograph/, pp. 1-6, printed Aug. 30, 2007. |
Dunne, Gabriel, “Color/Shape/Sound Ratio & Symmetry Calculator,” Quilime.com—Symmetry Calculator, https://www.quilime.com/content/colorcalc/, pp. 1-6, printed Jul. 3, 2007. |
Number | Date | Country | |
---|---|---|---|
20080274443 A1 | Nov 2008 | US |
Number | Date | Country | |
---|---|---|---|
60913017 | Apr 2007 | US | |
60913015 | Apr 2007 | US | |
60913010 | Apr 2007 | US | |
60830386 | Jul 2006 | US | |
60921578 | Apr 2007 | US |