Enhanced system, method, and devices for capturing inaudible tones associated with music

Information

  • Patent Grant
  • Patent Number
    10,878,788
  • Date Filed
    Tuesday, July 9, 2019
  • Date Issued
    Tuesday, December 29, 2020
Abstract
One embodiment provides a system, method, and device for capturing inaudible tones from music. A song is received. Inaudible tones are detected in the song. Information associated with the inaudible tones is extracted from the song. The information associated with the inaudible tones is communicated to a user.
Description
BACKGROUND
I. Field of the Disclosure

The illustrative embodiments relate to music. More specifically, but not exclusively, the illustrative embodiments relate to enhancing music through associating available information.


II. Description of the Art

Teaching, learning, and playing music may be very challenging for individuals. It may be even more difficult for students and others with limited exposure to music notes, theory, or instruments. Unfortunately, advancement in music has not kept pace with advancements in technology and resources that could make it easier to create, teach, learn, and play music and that could increase accessibility for individuals of all skill levels, cognition, and abilities.


SUMMARY OF THE DISCLOSURE

The illustrative embodiments provide a system, method, and device for capturing inaudible tones from music. A song is received. Inaudible tones are detected in the song. Information associated with the inaudible tones is extracted from the song. The information associated with the inaudible tones is communicated to a user. Another embodiment provides a device including a processor for executing a set of instructions and a memory for storing the set of instructions. The set of instructions is executed to perform the method(s) herein described.


Another embodiment provides a method for utilizing the inaudible tones with music. Music is received utilizing an electronic device including at least a display. Inaudible tones in the music are detected. Information associated with the inaudible tones of the music is extracted. The information associated with the inaudible tones is communicated to a user utilizing at least the display of the electronic device.


Yet another embodiment provides a system for utilizing inaudible tones in music. A transmitting device is configured to broadcast music including one or more inaudible tones. A receiving device receives the music, detects inaudible tones in the music, extracts information associated with the inaudible tones of the music, and communicates information associated with the inaudible tones to a user through the receiving device, wherein the information includes at least notes associated with the music.


Yet another illustrative embodiment provides a system, method, and device for utilizing inaudible tones for music. A song is initiated with enhanced features. A determination is made whether inaudible tones including information or data are associated with a portion of the song. The associated inaudible tone is played. Playback of the song is continued. Another embodiment provides a device including a processor for executing a set of instructions and a memory for storing the set of instructions. The instructions are executed to perform the method described above.


Yet another embodiment provides a method for utilizing inaudible tones for music. Music and inaudible tones associated with the music are received utilizing an electronic device including at least a display. Information associated with the inaudible tones is extracted. The information associated with the inaudible tones is communicated. Another embodiment provides a receiving device including a processor for executing a set of instructions and a memory for storing the set of instructions. The instructions are executed to perform the method described above.


Yet another embodiment provides a system for utilizing inaudible tones in music. The system includes a transmitting device that broadcasts music synchronized with one or more inaudible tones. The system includes a receiving device that receives the inaudible tones, extracts information associated with the inaudible tones, and communicates the information associated with the inaudible tones.





BRIEF DESCRIPTION OF THE DRAWINGS

Illustrated embodiments are described in detail below with reference to the attached drawing figures, which are incorporated by reference herein, and where:



FIG. 1 is a pictorial representation of a system for utilizing inaudible tones in accordance with an illustrative embodiment;



FIG. 2 is a flowchart of a process for utilizing inaudible tones in accordance with an illustrative embodiment;



FIG. 3 is a flowchart of a process for processing inaudible tones in accordance with an illustrative embodiment;



FIGS. 4 and 5 are a first embodiment of sheet music including notations for utilizing a system in accordance with illustrative embodiments;



FIGS. 6 and 7 are a second embodiment of sheet music including notations for utilizing an inaudible system in accordance with illustrative embodiments; and



FIG. 8 depicts a computing system in accordance with an illustrative embodiment.





DETAILED DESCRIPTION OF THE DISCLOSURE

The illustrative embodiments provide a system and method for utilizing inaudible tone integration with visual sheet music, inaudible time codes, musical piece displays, live music capture, execution, and marking, and musical accompaniment suggestions. The illustrative embodiments may be implemented utilizing any number of musical instruments, wireless devices, computing devices, or so forth. For example, an electronic piano may communicate with a smart phone to perform the processes and embodiments herein described. The illustrative embodiments may be utilized to create, learn, play, observe, or teach music.


The illustrative embodiments may utilize inaudible tones to communicate music information, such as notes being played. A visual and text representation of the note, notes, or chords may be communicated. The illustrative embodiments may be utilized for recorded or live music or any combination thereof. The inaudible tones may be received and processed by any number of devices to display or communicate applicable information.



FIG. 1 is a pictorial representation of a system 100 for utilizing inaudible tones in accordance with an illustrative embodiment. In one embodiment, the system 100 of FIG. 1 may include any number of devices 101, networks, components, software, hardware, and so forth. In one example, the system 100 may include a wireless device 102, a tablet 104 utilizing a graphical user interface 105, a laptop 106 (collectively, devices 101), a network 110, a network 112, a cloud network 114, servers 116, databases 118, and a music platform 120 including at least a logic engine 122 and memory 124. The cloud network 114 may further communicate with third-party resources 130.


In one embodiment, the system 100 may be utilized by any number of users to learn, play, teach, observe, or review music. For example, the system 100 may be utilized with musical instruments 132. The musical instruments 132 may represent any number of acoustic, electronic, networked, percussion, wind, string, or other instruments of any type. In one embodiment, the wireless device 102, tablet 104, or laptop 106 may be utilized to display information to a user; receive user input, feedback, commands, and/or instructions; record music; store data and information; play inaudible tones associated with music; and so forth.


The system 100 may be utilized by one or more users at a time. In one embodiment, an entire band, class, orchestra, or so forth may utilize the system 100 at one time utilizing their own electronic devices or assigned or otherwise provided devices. The devices 101 may communicate utilizing one or more of the networks 110, 112 and the cloud network 114 to synchronize playback, inaudible tones, and the playback process. In one embodiment, software operated by the devices of the system 100 may synchronize the playback and learning process. For example, mobile applications executed by the devices 101 may perform synchronization, communications, displays, and the processes herein described. The devices 101 may play inaudible tones as well as detect music, tones, inaudible tones, and input received from the instruments 132.


The inaudible tones discussed in the illustrative embodiments may be produced from the known tone spectrum in an audio range that is undetectable to human ears. The inaudible tone range is used to carry data transmissions to implement processes, perform synchronization, communicate/display information, and so forth. Any number of standard or specialized devices may perform data recognition, decoding, encoding, transmission, and differentiation via the inaudible tone data embedded in the inaudible tones.


The inaudible tones may be combined in various inaudible tone ranges that are undetectable to human ears. The known human range of tone detection varies from 20 Hz to 20,000 Hz. The illustrative embodiments utilize the inaudible tone spectrum in the ranges of 18 Hz to 20 Hz and 8 kHz to 22 kHz, which both fall under the category of inaudible frequencies. The inaudible tones at 8 kHz, 10 kHz, 12 kHz, 14 kHz, 15 kHz, 16 kHz, 17 kHz, 17.4 kHz, 18 kHz, 19 kHz, 20 kHz, 21 kHz, and 22 kHz may be particularly useful. The illustrative embodiments may also utilize Alpha and Beta tones, which use varied rates of inaudible tone frequency modulation and sequencing to ensure a broader range of the inaudible tone frequency spectrum is available from each singular inaudible tone range. The illustrative embodiments may also utilize audible tones to perform the processes, steps, and methods herein described.
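
By way of example only, the following sketch shows how a single carrier tone in one of the ranges noted above (here, 19 kHz) might be synthesized and mixed into an audio signal. The sample rate, amplitude, and use of Python/NumPy are illustrative assumptions, not requirements of the embodiments.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz (assumed); high enough to represent tones up to 22.05 kHz

def inaudible_tone(freq_hz: float, duration_s: float, amplitude: float = 0.05) -> np.ndarray:
    """Synthesize a sine tone at a single inaudible frequency (e.g., 19 kHz)."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Mix a 19 kHz marker tone into a one-second slice of music (silence here).
music = np.zeros(SAMPLE_RATE)
marker = inaudible_tone(19_000, 1.0)
mixed = np.clip(music + marker, -1.0, 1.0)
```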


The inaudible tones carry data that is processed and decoded via microphones, receivers, sensors, or tone processors. The microphones and logic that perform inaudible tone processing may be pre-installed on a single-purpose listening device or installed in application format on any standard fixed or mobile device with a built-in microphone and processor. The inaudible tones include broadcast data from various chips or tone transmission beacons, which are recognized and decoded by the microphone and logic.
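
As a non-limiting illustration of such tone processing, the sketch below checks a block of microphone samples for the presence of a single target frequency using the Goertzel algorithm; the detection threshold is an assumed value, and the embodiments do not prescribe this particular detector.

```python
import numpy as np

def goertzel_power(samples: np.ndarray, target_hz: float, sample_rate: int) -> float:
    """Energy at one target frequency in a sample block (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)      # nearest DFT bin for the target
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:                           # second-order recursive filter
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def tone_present(block: np.ndarray, target_hz: float,
                 sample_rate: int = 44100, threshold: float = 1e-3) -> bool:
    """Decide whether the target inaudible tone is present in the block."""
    return goertzel_power(block, target_hz, sample_rate) > threshold
```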


The devices 101 are equipped to detect and decode data contained in the inaudible signals sent from any number of other sources. The devices 101, as well as the associated inaudible tone applications or features, may be programmed in an always-on, passive listening, or scheduled listening mode, or based on environmental conditions, location (e.g., school, classroom, field, venue, etc.), or other conditions, settings, and/or parameters. In one embodiment, the music-based data and information may also be associated with the inaudible tones so that it does not have to be encoded or decoded.


The devices 101 may be portable or fixed to a location (e.g., teaching equipment for a classroom). In one embodiment, the devices 101 may be programmed to only decode tones and data specific to each system utilization. The devices 101 may also be equipped to listen for the presence or absence of specific tones and recognize the presence of each specific tone throughout a location or environment. The devices 101 may also be utilized to grant, limit, or deny access to the system or system data based on the specific tone.


In one embodiment, the inaudible tones associated with a particular piece of music, data, or information may be stored in the memories of the devices 101 of the system 100, in the databases 118, in the memory 124 of the music platform 120, or in other memories, storage, hardware, or software. Similarly, the devices 101 of the system 100 may execute software that coordinates the processes of the system 100 as well as the playback of the inaudible tones.


In one embodiment, the cloud network 114 or the music platform 120 may coordinate the methods and processes described herein as well as software synchronization, communication, and processes. The software may utilize any number of speakers, microphones, tactile components (e.g., vibration components, etc.), and graphical user interfaces, such as the graphical user interface 105, to communicate and receive indicators, inaudible tones, and so forth.


The system 100 and devices may utilize speakers and microphones as inaudible tone generators and inaudible tone receivers to link music 107, such as sheet music notation or tablature-based notes, to the tempo of a song, creating a visual musical score. The process utilizes sound analysis tools on live and pre-produced musical pieces 107 or may be used with other tablature, standard sheet music, and sheet music creation tools (music 107).


The inaudible tone recognition tool ties sheet music 107 to the actual audio version of a song and, in real-time, visually broadcasts each note 109 (note, notes, or chord) that each instrument or voice produces during the progression of a song, visually displaying the note in conjunction with the rhythm of the song through an inaudible tone. The note 109 may represent a single note, multiple notes, groups or sets of notes, or a chord. As shown, the note 109 may be displayed by the graphical user interface 105 by an application executed by the tablet 104. The note 109 may be displayed graphically as a music note as well as the associated text or description, such as "a". The note 109 may also indicate other information, such as treble clef or bass clef.


In another embodiment, primary or key notes 109 of the music 107 may be displayed to the devices 101 based on information from the inaudible tones. Alternatively, a user (e.g., teacher, student, administrator, etc.) may preselect or indicate in real-time the notes 109 from the music 107 to be displayed. The note 109 may be displayed individually or as part of the music 107. For example, the note 109 may light up, move, shake, or otherwise be animated when played.


As noted, any number of devices 101 may be utilized to display the associated music 107, notes 109, and content. In addition, one of the devices 101, such as a cell phone, tablet, server, personal computer, gaming device, or so forth, may coordinate the display and playback of information.


Any number of flags, instructions, codes, inaudible tones, or other indicators may be associated with the notes 109, information, instructions, commands, or data associated with the music 107. As a result, the indicators may show the portion of the music being played. The indicators may also provide instructions or commands or be utilized to automatically implement an action, program, script, activity, prompt, display message, or so forth. The indicators may also include inaudible codes that may be embedded within music to perform any number of features or functions.


Inaudible time codes are placed within the piece of music 107 indicating the title and artist, the availability of related sheet music for the song, the start and finish of each measure, the vocal and instrumental notes or song tablature for each measure, and the timing and tempo fluctuations within a measure. The system 100 may also visually pre-indicate when a specific instrument or group of instruments will enter in the piece of music 107. Through the utilization of inaudible time codes embedded in the song and its measures, the system 100 can adjust the notes to the tempo and rhythm of music 107 that has numerous or varied tempo changes.
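
For illustration only, the per-measure metadata described above might be modeled as in the following sketch; the field names and sample values are hypothetical, and the embodiments do not prescribe a particular data layout.

```python
from dataclasses import dataclass, field

@dataclass
class MeasureTimeCode:
    start_s: float                  # start of the measure in the audio timeline
    end_s: float                    # finish of the measure
    tempo_bpm: float                # tempo (including fluctuations) within the measure
    notes: list[str] = field(default_factory=list)  # vocal/instrumental notes or tablature

@dataclass
class SongTimeCodes:
    title: str
    artist: str
    has_sheet_music: bool           # availability of related sheet music
    measures: list[MeasureTimeCode] = field(default_factory=list)

# Hypothetical example for the first measure of a song.
song = SongTimeCodes("Amazing Grace", "Traditional", True,
                     [MeasureTimeCode(0.0, 2.4, 90.0, ["g", "c", "e"])])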


Multiple different inaudible tones may be associated with the different information outlined herein. The inaudible tones may facilitate teaching, learning, playing, or otherwise being involved with music playing, practice, or theory. For example, the inaudible tones may be embedded in the soundtrack of a broadcast. The inaudible tones may be delivered through any number of transmissions utilizing digital or analog communications standards, protocols, or signals. For example, the inaudible tones may represent markers resulting in the ability to play back and display sheet music notes 109 on time and synchronized with the music.


Song data for the music 107, including the artist, title, song notes, tablature, and other information for a specific piece of music, may be transmitted in the inaudible tones via a network broadcast, wireless signal, satellite signal, terrestrial signal, direct connection, peer-to-peer connection, or software-based communication, via a music player, to a device, mobile device, wearable, e-display, electronic equalizer, holographic display, or projection, or streamed to a digital sheet music stand or other implementation that visually displays the notes 109 and tempo that each specific instrument will play.


Through the graphical user interface 105, a digital display, or a visually projected musical representation, each instrument and its associated notes 109 may be displayed in unison as the piece of music 107 plays. In one embodiment, each instrument in a musical piece 107 may be assigned a color indicator or other visual representation. The display may also be selectively activated to highlight specific instrumental musical pieces. The instrument and its representative color are visually displayed in a musical staff in standard musical notation format or in single or grouped notes 109 format that represent one or a chorded group of the 12 known musical notes A-G#, or may be visually displayed as a standard tablature line that displays the musical notes 109 in a number-based tablature format.
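
A minimal sketch of this per-instrument display logic follows, assuming a simple lookup table keyed on instrument name; the particular color assignments are hypothetical.

```python
# The 12 known musical notes A-G# referenced above.
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

# Hypothetical color assignments; the embodiments leave the palette open.
INSTRUMENT_COLORS = {
    "violin": "#1f77b4",
    "trumpet": "#d62728",
    "piano": "#2ca02c",
}

def render_note(instrument: str, note: str) -> str:
    """Pair a note name with the display color assigned to its instrument."""
    if note not in NOTE_NAMES:
        raise ValueError(f"unknown note: {note}")
    color = INSTRUMENT_COLORS.get(instrument, "#000000")  # default: black
    return f"{instrument}: note {note} displayed in {color}"

print(render_note("violin", "E"))
```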


In one embodiment, one of the devices 101 may be a car radio. The car radio may display the notes 109 of the music 107. The system 100 may be effective in communicating the inaudible tones to any device within range to receive the inaudible tones. For example, the range of the inaudible tones may only be limited by the acoustic and communications properties of the environment.


Live Music Capture, Execution, and Marking: In one embodiment, the system 100 utilizes a software-based sound capture process that is compatible with the devices 101 used to capture the inaudible tone song data. The devices 101 may capture the inaudible tone song data and, in real-time, capture, produce, and analyze a real-time progression of the actual visual musical piece 107 in conjunction with the piece 107 being played by a live band, live orchestra, live ensemble performance, or other live music environment. The sound capture devices 101 that capture the inaudible song data may also capture each live instrumental note as it is played by a single instrument or group of performers and indicate with a visual representation whether a played note 109 is on time with the software-based internal metronome marking the time in a musical piece 107.


The system 100 may indicate whether each note 109 is played correctly: a correctly executed note is displayed in green, while a note 109 that is off beat or incorrect is displayed in red on the metronome tick as an incorrectly executed note. The metronome may also indicate if a specific instrument's note was played too fast or too slow. The system 100 may also generate a report for each instrument and each instrumentalist's overall success rate for each note, timing, and other performance characteristics as played in a musical score. The report may be saved or distributed as needed or authorized.
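
By way of example, the green/red timing classification might be implemented as below, where a captured note onset is compared against the nearest metronome tick; the timing tolerance is an assumed value.

```python
def classify_onset(onset_s: float, tempo_bpm: float, tolerance_s: float = 0.05) -> str:
    """Return 'green' if the onset lands on a metronome tick, else 'red'."""
    beat_s = 60.0 / tempo_bpm                   # seconds between metronome ticks
    nearest_tick = round(onset_s / beat_s) * beat_s
    error = onset_s - nearest_tick
    if abs(error) <= tolerance_s:
        return "green"                          # correctly executed note
    return "red (too fast)" if error < 0 else "red (too slow)"

print(classify_onset(2.02, 120))  # tick at 2.0 s, 20 ms early -> 'green'
```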


Musical Accompaniment Suggestions: The system 100 may also make rhythmic or tempo-based suggestions in addition to suggesting new musical accompaniment that is not included or heard in the original music piece 107. For example, the suggestions may be utilized to teach individuals how to perform improvisation and accompaniment. The system 100 may group specific instruments and may also indicate where other instruments may be added to fit into a piece of music 107. The system 100 may also make recommendations where new musical instrumental elements might fit into an existing piece of music 107, including suggested instrumental or vocal elements, computer-generated sounds, or other musical samples. The system 100 may indicate where groups of instruments share the same notes and rhythm pattern in the music 107. The system 100 may allow conductors or music composers to create and modify music 107 in real-time as it is being played or created.



FIG. 2 is a flowchart of a process for utilizing inaudible tones in accordance with an illustrative embodiment. In one embodiment, a song may represent electronic sheet music, songs, teaching aids, digital music content, or any type of musical content. The process of FIG. 2 may be performed by an electronic device, system, or component. For example, a personal computer (e.g., desktop, laptop, tablet, etc.), wireless device, DJ system, or other device may be utilized. The process of FIG. 2 may begin by initiating a song with enhanced features (step 202). The song may be initiated for audio or visual playback, display, communication, review, teaching, projection, or so forth. In one example, the song may be initiated to teach the song to a middle school orchestral group. The song may include a number of parts, notes, and musical combinations for each of the different participants. The song may also represent a song played for recreation by a user traveling in a vehicle (e.g., car, train, plane, boat, etc.).


Next, the device determines whether there are inaudible tones including information or data associated with a portion of the song (step 204). Step 204 may be performed repeatedly for different portions or parts of the song corresponding to lines, measures, notes, flats, bars, transitions, verse, chorus, bridge, intro, scale, coda, notations, lyrics, melody, solo, and so forth. In one embodiment, each different portion of the song may be associated with inaudible information and data.


Next, the device plays the associated inaudible tone (step 206). The inaudible tone may be communicated through any number of speakers, transmitters, emitters, or other output devices of the device or in communication with the device. In one embodiment, the inaudible tone is simultaneously broadcast as part of the song. The inaudible tones represent a portion of the song that cannot be heard by listeners.


Next, the device continues playback of the song (step 208). Playback is continued until the song has been completed, the user selects to end the process, or so forth. In one embodiment, during step 208, the device may move from one portion of the song to the next portion of the song (e.g., moving from a first note to a second note). As noted, the playback may include real-time or recorded content. In one example, the content is a song played by a band at a concert. In another example, the content may represent a classical orchestral piece played from a digital file.


Next, the device returns to determine whether there is inaudible information or data associated with a portion of the song (step 204). As noted, the process of FIG. 2 is performed repeatedly until the song is completed.
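
A minimal sketch of the FIG. 2 loop follows, with the song modeled as a list of portions and stub output functions standing in for the device's playback and tone-emission hardware; these names and structures are assumptions for illustration only.

```python
from typing import Optional

def play_tone(freq_hz: float) -> None:
    print(f"emitting inaudible tone at {freq_hz} Hz")  # stub for a speaker/emitter

def play_audio(name: str) -> None:
    print(f"playing portion: {name}")                  # stub for audible playback

def play_song(portions: list[tuple[str, Optional[float]]]) -> None:
    for name, tone_hz in portions:   # step 208: advance portion by portion
        if tone_hz is not None:      # step 204: inaudible data associated with this portion?
            play_tone(tone_hz)       # step 206: play the associated inaudible tone
        play_audio(name)             # playback continues until the song is completed

play_song([("verse 1", 19_000.0), ("chorus", None), ("bridge", 20_000.0)])
```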



FIG. 3 is a flowchart of a process for processing inaudible tones in accordance with an illustrative embodiment. The process of FIG. 3 may be performed by any number of receiving devices. In one embodiment, the process may begin by detecting an inaudible tone in a song (step 302). The number and types of devices that may detect the inaudible tones is broad and diverse. The devices may be utilized for learning, teaching, entertainment, collaboration, development, or so forth.


Next, the device extracts information associated with the inaudible tones (step 304). The data and information may be encoded in the inaudible tones in any number of analog or digital packets, protocols, formats, or signals (e.g., data encryption standard (DES), triple data encryption standard, Blowfish, RC4, RC2, RC6, advanced encryption standard). Any number of ultrasonic frequencies and modulation/demodulation may be utilized for data decoding, such as chirp technology. The device may utilize any number of decryption schemes, processes, or so forth. The information may be decoded as the song is played. As previously noted, the information may be synchronized with the playback of the song. In some embodiments, network, processing, and other delays may be factored in to retrieve the information in a timely manner for synchronization. For example, the inaudible tones may be sent slightly before a note is actually played so that step 306 is being performed as the associated note is played.
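
As one non-limiting illustration of such extraction, the sketch below recovers a bit stream from captured audio using binary frequency-shift keying between two ultrasonic carriers; the carrier frequencies, window size, and framing are assumptions, as the embodiments do not fix a particular modulation scheme.

```python
import numpy as np

F0, F1 = 18_000, 19_000   # hypothetical "0" and "1" carrier frequencies (Hz)
SAMPLE_RATE = 44100       # assumed capture rate (Hz)
SYMBOL_LEN = 2048         # samples per bit window (assumed framing)

def band_power(window: np.ndarray, freq: float) -> float:
    """Magnitude of the FFT bin nearest the given frequency."""
    spectrum = np.abs(np.fft.rfft(window))
    bin_idx = int(round(freq * len(window) / SAMPLE_RATE))
    return float(spectrum[bin_idx])

def decode_bits(audio: np.ndarray) -> list[int]:
    """Decide each bit by which of the two carriers dominates its window."""
    bits = []
    for start in range(0, len(audio) - SYMBOL_LEN + 1, SYMBOL_LEN):
        window = audio[start:start + SYMBOL_LEN]
        bits.append(1 if band_power(window, F1) > band_power(window, F0) else 0)
    return bits
```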


Next, the device communicates information associated with the inaudible tones (step 306). In one embodiment, the device may display each note/chord of the song as it is played. For example, a zoomed visual view of the note and the text description may be provided (e.g., see for example note 109 of FIG. 1). The information may also be displayed utilizing tactile input, graphics, or other content that facilitate learning, understanding, and visualization of the song. The communication of the information may help people learn and understand notes, tempo, and other information associated with the song. During step 306, the device may also perform any number of actions associated with the inaudible tones.


In one embodiment, the device may share the information with any number of other devices proximate the device. For example, the information may be shared through a direct connection, network, or so forth.



FIGS. 4 and 5 are a first embodiment of sheet music 400 including notations for utilizing a system in accordance with illustrative embodiments. FIGS. 6 and 7 are a second embodiment of sheet music 600 including notations for utilizing an inaudible system in accordance with illustrative embodiments. The embodiments shown in FIGS. 4-7 represent various versions of Amazing Grace. In one embodiment, time codes 402 of the measures (bars) and tempo show how the illustrative embodiments utilize indicators to display music. In one embodiment, the indicators may each be associated with inaudible tones. For example, at time code 10.74 the inaudible tone may communicate content to display the note "e" visually as well as textually. As shown by the time codes 402, any number of note/chord combinations may also be displayed. In addition, the time codes 402 may be applicable to different verses of the song.
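
For illustration, the mapping from time codes 402 to displayed notes might be represented as a simple lookup, as in the sketch below; aside from the 10.74/"e" pair noted above, the entries are hypothetical.

```python
# Time code (seconds) -> note/chord combination to display at that moment.
TIME_CODES = {
    10.74: ["e"],        # from the example above
    11.90: ["g", "c"],   # hypothetical note/chord combination
}

def notes_at(time_code: float) -> list[str]:
    """Look up the notes to display for a given time code, if any."""
    return TIME_CODES.get(time_code, [])

print(notes_at(10.74))   # -> ['e']
```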


The illustrative embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments of the inventive subject matter may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium. The described embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computing system (or other electronic device(s)) to perform a process according to embodiments, whether presently described or not, since every conceivable variation is not enumerated herein. A machine-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions. In addition, embodiments may be embodied in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.), or wireline, wireless, or other communications medium.


Computer program code for carrying out operations of the embodiments may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN), a wireless personal area network (WPAN), or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider).



FIG. 8 depicts a computing system 800 in accordance with an illustrative embodiment. For example, the computing system 800 may represent a device, such as the wireless device 102 of FIG. 1. The computing system 800 includes a processor unit 801 (possibly including multiple processors, multiple cores, multiple nodes, and/or implementing multi-threading, etc.). The computing system includes memory 807. The memory 807 may be system memory (e.g., one or more of cache, SRAM, DRAM, zero capacitor RAM, Twin Transistor RAM, eDRAM, EDO RAM, DDR RAM, EEPROM, NRAM, RRAM, SONOS, PRAM, etc.) or any one or more of the above already described possible realizations of machine-readable media. The computing system also includes a bus 803 (e.g., PCI, ISA, PCI-Express, HyperTransport®, InfiniBand®, NuBus, etc.), a network interface 806 (e.g., an ATM interface, an Ethernet interface, a Frame Relay interface, SONET interface, wireless interface, etc.), and a storage device(s) 809 (e.g., optical storage, magnetic storage, etc.).


The system memory 807 embodies functionality to implement all or portions of the embodiments described above. The system memory 807 may include one or more applications or sets of instructions for implementing a communications engine to communicate with one or more electronic devices or networks. The communications engine may be stored in the system memory 807 and executed by the processor unit 801. As noted, the communications engine may be similar to or distinct from a communications engine utilized by the electronic devices (e.g., a personal area communications application). Code may be implemented in any of the other devices of the computing system 800. Any one of these functionalities may be partially (or entirely) implemented in hardware and/or on the processor unit 801. For example, the functionality may be implemented with an application specific integrated circuit, in logic implemented in the processor unit 801, in a co-processor on a peripheral device or card, etc. Further, realizations may include fewer or additional components not illustrated in FIG. 8 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, etc.). The processor unit 801, the storage device(s) 809, and the network interface 806 are coupled to the bus 803. Although illustrated as being coupled to the bus 803, the memory 807 may be coupled to the processor unit 801. The computing system 800 may further include any number of optical sensors, accelerometers, magnetometers, microphones, gyroscopes, temperature sensors, and so forth for verifying user biometrics or environmental conditions, such as motion, light, or other events that may be associated with the computing system 800 or its environment.


The illustrative embodiments are not to be limited to the particular embodiments and examples described herein. In particular, the illustrative embodiments contemplate numerous variations in the ways in which embodiments of the invention may be applied to music teaching, playback, and communication utilizing inaudible tones. The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. It is contemplated that other alternatives or exemplary aspects are considered included in the disclosure. The description is merely an example of embodiments, processes, or methods of the invention. It is understood that any other modifications, substitutions, and/or additions may be made within the intended spirit and scope of the disclosure. From the foregoing, it can be seen that the disclosure accomplishes at least all of the intended objectives.


The previous detailed description is of a small number of embodiments for implementing the invention and is not intended to be limiting in scope. The following claims set forth a number of the embodiments disclosed with greater particularity.

Claims
  • 1. A method for capturing inaudible tones from a song, comprising: receiving the song; detecting inaudible tones in the song; extracting information associated with the inaudible tones of the song; communicating the information associated with the inaudible tones to a user in real-time including at least notes and information associated with the song.
  • 2. The method of claim 1, wherein the detecting further comprises: determining whether the inaudible tones are associated with a portion of the song.
  • 3. The method of claim 1, wherein the song is music with enhanced features, and wherein the inaudible tones are frequencies that are not discernable by humans.
  • 4. The method of claim 1, wherein communicating the information includes displaying the notes associated with the song as played.
  • 5. The method of claim 1, wherein the song represents live music or recorded music.
  • 6. The method of claim 1, wherein the song and the inaudible tones are received by a microphone of an electronic device.
  • 7. The method of claim 1, wherein the electronic device executes an application for detecting the inaudible tones.
  • 8. The method of claim 1, wherein the information includes artist, title, notes, chords, and tablature for one or more instruments associated with the song.
  • 9. The method of claim 1, wherein the detecting further comprises: generating the inaudible tones in real-time in response to receiving the song.
  • 10. The method of claim 1, wherein the notes include each instrumental, voice, and note part associated with the song.
  • 11. The method of claim 2, wherein the communicating further comprises displaying and moving the sheet music notes, tablatures, measures, and instructions associated with the music in synchronization with each musical or tempo change in the music.
  • 12. The method of claim 6, wherein a plurality of different portions of the song are associated with a plurality of inaudible tones and associated information.
  • 13. The method of claim 1, wherein the information included in the inaudible tones represents sheet music, notes, tablatures, measures, or musical instructions.
  • 14. A method for utilizing inaudible tones for music, comprising: receiving music utilizing an electronic device including at least a display; detecting inaudible tones in the music; extracting information associated with the inaudible tones of the music; and communicating the information associated with the inaudible tones to a user utilizing at least the display of the electronic device in real-time including at least notes and information associated with the song.
  • 15. The method of claim 14, wherein the inaudible tones are audio frequencies that are not discernable by humans, wherein the inaudible tones are embedded in the music.
  • 16. The method of claim 14, further comprising: implementing one or more actions associated with the information extracted from inaudible tones utilizing the electronic device.
  • 17. The method of claim 14, wherein the information includes one or more of notes, tablatures, measures, and instructions.
  • 18. The system of claim 16, wherein the transmitting device utilizes one or more speakers to broadcast the music and the inaudible tones, wherein the receiving device utilizes one or more microphones or sensors to receive the inaudible tones, and wherein the inaudible tones are audio frequencies that are not discernable by humans.
  • 19. The system of claim 16, wherein the inaudible tones are received by a plurality of devices simultaneously, wherein the information is uniquely associated with a user profile available on each of the plurality of devices.
  • 20. A system for utilizing inaudible tones in music, comprising: a transmitting device configured to broadcast music including one or more inaudible tones; a receiving device that receives the music, detects inaudible tones in the music, extracts information associated with the inaudible tones of the music, and communicates the information associated with the inaudible tones to a user through the receiving device in real-time, wherein the information includes at least notes associated with the music.
PRIORITY

This application claims priority to U.S. Provisional Patent Application Ser. No. 62/524,835 entitled ENHANCED SYSTEM, METHOD, AND DEVICES FOR UTILIZING INAUDIBLE TONES WITH MUSIC filed on Jun. 26, 2017, and is a continuation of U.S. Utility patent application Ser. No. 16/019,257 entitled ENHANCED SYSTEM, METHOD, AND DEVICES FOR UTILIZING INAUDIBLE TONES WITH MUSIC, the entirety of each of which is incorporated by reference herein.

Related Publications (1)
Number Date Country
20190333486 A1 Oct 2019 US
Provisional Applications (1)
Number Date Country
62524835 Jun 2017 US
Continuations (1)
Number Date Country
Parent 16019257 Jun 2018 US
Child 16506670 US