The present embodiments relate to audio signal processing, and more particularly to a method and apparatus for estimating spatial content of a soundfield at a desired location, including a location that has actual sound content obstructed or distorted.
The spatial content of the soundfield provides an important component of one's situational awareness. However, when wearing a helmet, such as when playing football or hockey, or when riding a bicycle or motorcycle, sounds are muffled and spatial cues altered. As a result, a quarterback might not hear a lineman rushing from his “blind side,” or a bike rider might not hear an approaching car.
Accordingly, a need remains in the art for a solution to these problems, among others.
The present embodiments relate to a method and apparatus for estimating spatial content of a soundfield at a desired location, including a location that has actual sound content obstructed or distorted. According to certain aspects, the present embodiments aim at presenting a more natural, spatially accurate sound, for example to a user at the desired location who is wearing a helmet, mimicking the sound the user would experience if they were not wearing any headgear. Modes for enhanced spatial hearing may be applied, including situation-dependent processing for augmented hearing. According to other aspects, the present embodiments aim at remotely reproducing the soundfield at a desired location with faithful reproduction of the spatial content of the soundfield, for entertainment purposes among other things.
These and other embodiments of the methods and systems disclosed herein enhance the experience of a professional football game for the players, coaches, referees, and fans, both onsite and away from the venue, and both live and offline. An aspect is to record or capture the sound experienced by a number of the participants and devices on and near the field of play, analyze the captured sound for its various components and their associated spatial content, and make those components available to participants and spectators.
This way, for instance, a fan listening from the perspective of the quarterback on a pass play would be enveloped by the sounds of linemen blocking while trying to form a pocket around the quarterback, players signaling each other as the play develops, coaches yelling from their sidelines, and fans and the public address system making noise. The spatial content of the soundfield presented to the listener would develop with the play, for example becoming more concentrated in the direction of the center of the field were the quarterback to be chased from the pocket toward the sideline. Note that players and on-field referees could benefit from the safety aspects of hearing spatialized sound, while fans could benefit from having an immersive experience. In a virtual reality setting, a listener, at their option, may locate themselves anywhere in the field of play using sound and positioning information collected from microphones worn on helmets and hats, etc., and other sensors and video analysis.
These and other aspects and features of the present embodiments will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments in conjunction with the accompanying figures, wherein:
The present invention will now be described in detail with reference to the drawings, which are provided as illustrative examples of embodiments so as to enable those skilled in the art to practice these and other embodiments. Notably, the figures and examples below are not meant to limit the scope of the present invention to a single embodiment, but other embodiments are possible by way of interchange of some or all of the described or illustrated elements. Moreover, where certain elements of the present invention can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present invention will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the embodiments. In the present specification, an embodiment showing a singular component should not be considered limiting; rather, the present disclosure is intended to encompass other embodiments including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, the present invention encompasses present and future known equivalents to the known components referred to herein by way of illustration.
In some general aspects, the present invention recognizes that spatial content of a soundfield at a given location can become distorted and/or degraded, for example by headgear worn by a user at that location. This is illustrated in
To avoid these situations, the present invention incorporates microphones into helmets and hats (and even clothing, gear, balls, etc.) worn by sports participants and riders. The soundfield and its spatial character may then be captured, processed, and passed on to participants and perhaps also to fans. Restoring a player's or rider's natural spatial hearing cues enhances safety; providing spatialized communications among players augments gameplay; rendering a player's, referee's, or other participant's soundfield for fans provides an immersive entertainment experience.
According to some aspects, the present embodiments aim at presenting a more natural, spatially accurate sound to a user wearing a helmet, mimicking the sound the user would experience if they were not wearing any headgear. Modes for enhanced spatial hearing may be applied, including situation-dependent processing for augmented hearing.
In one embodiment shown in
An example methodology according to certain safety aspects of embodiments is illustrated in
As shown in
In step S302, the sound is processed (if necessary) to remove the effects of the headgear filter. Those skilled in the art will be able to understand how to implement an inverse filter based on a characterized filter such as the filter causing the distortion in
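By way of non-limiting illustration of step S302, assuming the headgear filter has been characterized in advance as an impulse response h (for example by dummy-head measurement), a regularized frequency-domain inverse can be computed as in the following sketch (which ignores practical matters such as causality and delay):

```python
import numpy as np

def inverse_filter(h, n_fft=1024, eps=1e-3):
    """Design a Tikhonov-regularized inverse of a measured headgear response h."""
    H = np.fft.rfft(h, n_fft)
    G = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularization avoids blowing up spectral notches
    return np.fft.irfft(G, n_fft)

def compensate(x, h):
    """Apply the inverse filter to a captured signal x to undo the headgear coloration."""
    g = inverse_filter(h)
    return np.convolve(x, g)[: len(x)]
```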
In step S304, the compensated (inverse-filtered) sound and/or positioning input(s) are further processed to extract the direction of arrival of sound source(s) in the inputs. There are many ways that this processing can be performed. For example, one or more techniques can be used as described in Y. Hur et al., “Microphone Array Synthetic Reconfiguration,” AES Convention Paper presented at the 127th Convention, Oct. 9-12, 2009, the contents of which are incorporated by reference herein.
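By way of non-limiting illustration (the Hur et al. techniques being one option among many), the classic GCC-PHAT estimate of the time difference of arrival at one microphone pair, and its conversion to a far-field azimuth, can be sketched as follows, assuming the microphone spacing and sample rate are known:

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs):
    """Estimate the time difference of arrival (seconds) between two mic signals."""
    n = len(x1) + len(x2)
    R = np.fft.rfft(x1, n) * np.conj(np.fft.rfft(x2, n))
    R /= np.abs(R) + 1e-12                        # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def tdoa_to_azimuth(tau, mic_spacing_m, c=343.0):
    """Far-field azimuth (degrees) for a two-microphone pair; clip guards against noise."""
    return np.degrees(np.arcsin(np.clip(tau * c / mic_spacing_m, -1.0, 1.0)))
```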
In step S306, virtual speakers are placed at the determined position(s) of the identified source(s), and in step S308, sound is output from the virtual speakers. The output can be a conventional stereo (L/R) output, for example to be played back into real speakers on a helmet such as that shown in
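By way of non-limiting illustration, a minimal “virtual speaker” for a conventional stereo output is constant-power amplitude panning at the estimated azimuth (an HRTF-based binaural renderer would be the natural substitute for headphone playback in a helmet):

```python
import numpy as np

def virtual_speaker_stereo(mono, azimuth_deg):
    """Constant-power pan of a mono source: -90 deg = hard left, +90 deg = hard right."""
    theta = np.clip(azimuth_deg, -90.0, 90.0)
    p = (theta + 90.0) / 180.0                   # map azimuth to pan position in [0, 1]
    left = mono * np.cos(p * np.pi / 2.0)
    right = mono * np.sin(p * np.pi / 2.0)
    return np.stack([left, right], axis=-1)      # (samples, 2) stereo buffer
```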
An example methodology according to certain entertainment aspects of embodiments is illustrated in
As shown in
In step S402, the sound is processed to extract the direction of arrival of sound source(s) in the inputs. There are many ways that this processing can be performed. For example, one or more techniques can be used as described in Y. Hur et al., “Microphone Array Synthetic Reconfiguration,” AES Convention Paper presented at the 127th Convention, Oct. 9-12, 2009, the contents of which are incorporated by reference herein.
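By way of non-limiting illustration, time differences from two orthogonal microphone pairs (as might be mounted around a helmet or hat) can be combined into a full 360-degree azimuth estimate, assuming a far-field source and equal pair spacing:

```python
import numpy as np

def azimuth_from_two_pairs(tau_x, tau_y):
    """Combine TDOAs from two orthogonal, equally spaced mic pairs into an azimuth.

    Far-field model: tau_x ~ (d/c) * cos(theta), tau_y ~ (d/c) * sin(theta),
    so the unknown d/c factor cancels in the ratio.
    """
    return np.degrees(np.arctan2(tau_y, tau_x))
```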
In one example implementation, the sound signal(s) received by the microphones are transmitted (e.g. via WiFi, RF, Bluetooth or other means) to a remotely located processor and further processing is performed remotely (e.g. in a gameday television or radio broadcast studio).
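By way of non-limiting illustration only, one bare-bones transport for such transmission is a sequence-numbered UDP stream; the host name and port below are hypothetical placeholders, and a deployed system would likely use an established streaming protocol:

```python
import socket
import struct
import numpy as np

def stream_pcm(pcm, frame=480, addr=("studio.example.com", 9000)):
    """Send int16 PCM in 10 ms frames (at 48 kHz) with a sequence-number header."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, start in enumerate(range(0, len(pcm) - frame + 1, frame)):
        chunk = pcm[start:start + frame].astype(np.int16).tobytes()
        sock.sendto(struct.pack("!I", seq) + chunk, addr)   # 4-byte big-endian counter
```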
In step S404, the processed sound signal is rendered to a surround sound (e.g. 5.1, etc.) or other spatial audio display format, using techniques such as those described in U.S. Pat. No. 6,507,658, the contents of which are incorporated by reference herein.
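The cited patent provides one family of rendering techniques; by way of a separate non-limiting illustration, a mono sound component with a known azimuth can be panned pairwise over a standard 5.1 loudspeaker layout (LFE omitted), assuming nominal ITU-R BS.775 speaker angles:

```python
import numpy as np

# Nominal ITU-R BS.775 azimuths for 5.1, degrees counterclockwise from front center.
SPEAKER_AZ = {"C": 0.0, "L": 30.0, "Ls": 110.0, "Rs": -110.0, "R": -30.0}

def pan_5_1(mono, azimuth_deg):
    """Constant-power pairwise pan of a mono source over the five main channels."""
    names = sorted(SPEAKER_AZ, key=SPEAKER_AZ.get)        # speakers ordered by angle
    az = [SPEAKER_AZ[n] for n in names]
    gains = dict.fromkeys(names, 0.0)
    for i in range(len(names)):                           # find the bracketing pair
        a0, a1 = az[i], az[(i + 1) % len(az)]
        span = (a1 - a0) % 360.0
        off = (azimuth_deg - a0) % 360.0
        if off <= span:
            frac = off / span
            gains[names[i]] = np.cos(frac * np.pi / 2.0)
            gains[names[(i + 1) % len(az)]] = np.sin(frac * np.pi / 2.0)
            break
    return {n: mono * g for n, g in gains.items()}
```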
It should be apparent that other processing can be performed before output, such as noise cancellation, and separating, selecting, and/or eliminating different sound sources (e.g., crowd noise).
In step S406, the rendered sound signal is broadcast (e.g. RF, TV, radio, satellite) for normal playback through any compatible surround sound system.
Embodiments described herein can find many useful applications.
In Entertainment applications, for example, embodiments include: referee hats, player helmets, clothing, uniforms, gear, balls, and “flying” and other cameras outfitted with single or multiple microphones; in-ear, in-ear with hat, and helmet-mounted microphones combined with stadium and arena microphones (on down markers, goal posts, etc.); directional microphones, directional processing, and raw signals; translation to specific playback systems and formats, e.g., broadcast surround formats, stereo speakers, and (binaural) headphones; in-stadium fan and coaches' displays; position and head-orientation tracking; helmet modifications to enhance or restore altered spatial cues; and wind and clothing noise suppression.
In Gameplay applications, for example, embodiments include: wind and clothing noise suppression; communications between players with position encoded; stereo earphones with at least one microphone or synthesized signal; reverberation to cue distance rather than amplitude reduction (see the sketch following this paragraph); spatialized sonic icons and sonification indicating the arrangement of certain own-team players or certain opponent players (possibly derived from video signals); referee signals for improved foul calls (e.g., hearing a punt, a pass being released, or a player crossing a boundary such as the line of scrimmage, or an offside in hockey); enhanced amplification for the quarterback (via a microphone array or advanced helmet) of sounds arising from the rear; suppressed out-of-plane sounds and enhanced in-plane signals (reduced crowd noise, noise suppression); and player positioning, i.e., where you are on the field (e.g., “hearing” the sidelines, an auditory display for the line of scrimmage). Example applications: football, hockey.
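Regarding reverberation as a distance cue, by way of non-limiting illustration, and assuming a measured or synthetic room impulse response ir is available, the direct-to-reverberant ratio can be varied with distance while holding overall level roughly constant:

```python
import numpy as np
from scipy.signal import fftconvolve

def distance_cue(dry, ir, distance_m, ref_m=1.0):
    """Cue source distance by direct-to-reverb ratio instead of simple attenuation."""
    wet = fftconvolve(dry, ir)[: len(dry)]          # reverberant copy of the signal
    direct = ref_m / max(distance_m, ref_m)         # farther source -> weaker direct path
    mix = direct * dry + (1.0 - direct) * wet
    return mix / (np.max(np.abs(mix)) + 1e-12)      # rough level normalization
```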
In Safety applications, for example, embodiments include: bicycle, motorcycle, and sports helmets, hats, clothing, and vehicle exteriors; enhanced volume and sonic icons from the rear and sides; amplification of the actual soundfield, or synthesized sounds based on detecting the presence of an object via other means; and arrival-angle tracking for collision detection. Example applications: bike, snowboard, ski, and skateboard helmets.
As set forth above, embodiments of the methods and systems disclosed herein enhance the experience of a professional football game for the players, coaches, referees, and fans, both onsite and away from the venue, and both live and offline. As set forth now in more detail, one aspect is to record or capture the sound experienced by a number of the participants and devices on and near the field of play, analyze the captured sound for its various components and their associated spatial content, and make those components available to participants and spectators.
This way, for instance, a fan listening from the perspective of the quarterback on a pass play would be enveloped by the sounds of linemen blocking while trying to form a pocket around the quarterback, players signaling each other as the play develops, coaches yelling from their sidelines, and fans and the public address system making noise. The spatial content of the soundfield presented to the listener would develop with the play, for example becoming more concentrated in the direction of the center of the field were the quarterback to be chased from the pocket toward the sideline. Note that players and on-field referees could benefit from the safety aspects of hearing spatialized sound, while fans could benefit from having an immersive experience. In a virtual reality setting, a listener, at their option, may locate themselves anywhere in the field of play using sound and positioning information collected from microphones worn on helmets and hats, etc., and other sensors and video analysis.
Referring to
The processor 1025 processes the input signals, applying audio effects such as equalization and compression for corrective or artistic purposes, and combining the signals into different mixes such as stereo, 5.1 or Atmos surround, binaural, or Ambisonic mixes, or into a component sound or sounds with associated spatial information. Separate mixes could be made with any number of listeners in mind, for instance players, coaches, referees, and stadium fans (boxes and seats, according to their section). These mixes are encoded for transmission (e.g., broadcast, streaming, and the like) or storage (e.g., cloud) to produce encoded signals and associated spatial information 1026, 1028, and 1030, in any number of formats such as stereo, binaural, 5.1, 7.1, 5.1.4, and 10.2 surround, Ambisonics, and position-tagged audio signal sets. The mixes are transmitted and/or stored using processor 1035, and may be received or retrieved using processor 1045.
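By way of non-limiting illustration, and with hypothetical names that are not part of any standard, a position-tagged audio signal set as described above might be laid out as mono stems paired with time-varying field coordinates:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class PositionedStem:
    name: str                  # e.g. "QB close mic", "referee hat" (illustrative labels)
    samples: np.ndarray        # mono PCM at rate fs
    positions: np.ndarray      # (n, 3) x/y/z field coordinates in meters, sampled at pos_fs
    fs: int = 48000
    pos_fs: float = 100.0      # position updates per second

    def position_at(self, t_sec: float) -> np.ndarray:
        """Look up the stem's field position at time t_sec."""
        idx = min(int(t_sec * self.pos_fs), len(self.positions) - 1)
        return self.positions[idx]

@dataclass
class SpatialMix:
    stems: List[PositionedStem] = field(default_factory=list)
```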
Received or retrieved mixes 1034, 1036, 1038 are then rendered using rendering processors 1055, 1065, and 1075 to produce sound 1042, 1044, and 1046 for listeners in any number of settings. In one setting, a listener wearing headphones with a head-orientation tracking device will have the mix of positioned sources rendered locally by processor 1055 using a binaural format, taking into account their look direction. This rendering could be used to provide enhanced audio to fans at the stadium. It could also be used to provide spatialized audio to people watching the gameplay on a device, for which head tracking would not be needed, as the listener could be presumed to be looking at the device screen. In another setting, viewers could be watching the game in a living room with surround audio, with the audio rendered locally by processor 1065 according to received audio component position information, or rendered in the mixing/encoding processor 1025, in which case the camera angle and other video context would be used to control aspects of the sound's spatial character, such as the viewpoint. In yet another setting, a room could be outfitted with a number of loudspeakers, and one of the surround mixes repurposed to the room configuration, or an Ambisonics or similar technique used by processor 1075 to pan sound components to their appropriate spatial locations.
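By way of non-limiting illustration, the essential head-tracking operation in the first setting is to rotate each source's world-frame azimuth by the listener's look direction before binaural rendering, so that sources stay anchored in the world as the head turns:

```python
def head_relative_azimuth(source_az_deg: float, head_yaw_deg: float) -> float:
    """Rotate a world-frame source azimuth into the listener's head frame,
    wrapped to (-180, 180] so it can feed a binaural or panning renderer."""
    az = (source_az_deg - head_yaw_deg) % 360.0
    return az - 360.0 if az > 180.0 else az
```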
It should be noted that those skilled in the art will understand how to adapt the embodiment described above to other sports and activities such as hockey, basketball, baseball, soccer, tennis, golf, short-track ice skating, road and track bicycle racing, skiing, and boxing, among many others. In these activities, microphones are worn by participants and referees and mounted on equipment and the venue, and the signals are captured and processed to present spatialized sound to listeners. In hockey, the players' and referees' helmets are natural mounting places, as are the goals. The home plate umpire would be a good perspective from which to present a baseball game, and the net posts and umpire's chair good places for tennis. Yet another possible scenario arises when a player, such as a quarterback, wears a close microphone used to talk with the coaches; if the player's position is known relative to a listener's viewpoint, for example through video analysis or a worn tracking device, then the close-microphone sound can be rendered as coming from the desired location for the listener.
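By way of non-limiting illustration of the close-microphone scenario, placing the sound at the player's location reduces to geometry: given tracked 2D positions for the player and for the listener's viewpoint, the relative azimuth and distance can be computed and handed to any of the renderers sketched above:

```python
import numpy as np

def relative_azimuth_distance(listener_xy, listener_yaw_deg, source_xy):
    """Return (azimuth in degrees, wrapped to [-180, 180); distance in meters)."""
    d = np.asarray(source_xy, dtype=float) - np.asarray(listener_xy, dtype=float)
    az = np.degrees(np.arctan2(d[1], d[0])) - listener_yaw_deg
    az = (az + 180.0) % 360.0 - 180.0            # wrap into the listener's frame
    return az, float(np.hypot(d[0], d[1]))
```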
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are illustrative, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably coupleable,” to each other to achieve the desired functionality. Specific examples of operably coupleable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
With respect to the use of plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.).
Although the figures and description may illustrate a specific order of method steps, the order of such steps may differ from what is depicted and described, unless specified differently above. Also, two or more steps may be performed concurrently or with partial concurrence, unless specified differently above. Such variation may depend, for example, on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation, no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general, such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
Further, unless otherwise noted, the use of the words “approximate,” “about,” “around,” “substantially,” etc., mean plus or minus ten percent.
Although the present embodiments have been particularly described with reference to the preferred embodiments thereof, it should be readily apparent to those of ordinary skill in the art that changes and modifications in the form and details may be made without departing from the spirit and scope of the embodiments.
The present application is a continuation-in-part of U.S. patent application Ser. No. 17/164,443 filed Feb. 1, 2021, which application is a continuation of U.S. patent application Ser. No. 15/435,211, filed Feb. 16, 2017, now U.S. Pat. No. 10,911,871, which is a divisional of U.S. patent application Ser. No. 13/224,256, filed Sep. 1, 2011, now U.S. Pat. No. 9,578,419, which claims priority to U.S. Provisional Application No. 61/379,332, the contents of all such applications being incorporated herein by reference in their entirety.
Related U.S. Application Data:

| Number | Date | Country | Relationship |
|---|---|---|---|
| 61/379,332 | Sep. 2010 | US | Provisional |
| 13/224,256 | Sep. 2011 | US | Parent of 15/435,211 |
| 15/435,211 | Feb. 2017 | US | Parent of 17/164,443 |
| 17/164,443 | Feb. 2021 | US | Parent of 17/721,284 (the present application) |