The present disclosure is generally related to audio processing, and in particular to generation of spatial audio data.
Advances in technology have resulted in smaller and more powerful computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless telephones such as mobile and smart phones, tablets and laptop computers that are small, lightweight, and easily carried by users. These devices can communicate voice and data packets over wireless networks. Further, many such devices incorporate additional functionality such as a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such devices can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these devices can include significant computing capabilities.
Such computing devices often incorporate functionality associated with communications and media generation. For example, such computing devices often support features such as voice calls, video calls, multi-party calls (e.g., teleconferencing and videoconferencing), gaming, multimedia capture, or combinations thereof.
During use of many of these features, audio is captured locally and saved or sent to another device, and a richer audio experience can be provided using spatial audio. Commonly, audio data is generated, stored, and/or rendered as channel-based audio (such as mono-channel audio, stereo-channel audio, 5.1 channel audio, etc.), in which each channel represents an output stream. In contrast, spatial audio data includes data representing objects or sound sources and output channels are generated as part of rendering the spatial audio data. For example, spatial audio can be rendered so that speech represented in the output audio sounds like it is coming from the direction of a person who is speaking. Unfortunately, capturing spatial audio data generally requires the use of complicated, special-purpose microphone arrays. Because of the complexity and expense of such microphone arrays, it is challenging and cost prohibitive to generate spatial audio for everyday use.
In a particular aspect, a device includes a memory configured to store audio data and one or more processors configured to obtain the audio data captured by a microphone of a wearable device. The one or more processors are further configured to determine, based on one or more signals exchanged between the wearable device and a reference device, directionality information indicative of a direction of the microphone relative to the reference device. The one or more processors are also configured to process the audio data based on the directionality information to generate spatial audio data.
In a particular aspect, a method includes obtaining, at one or more processors, audio data captured by a microphone of a wearable device. The method also includes determining, by the one or more processors, directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device. The method further includes generating, at the one or more processors, spatial audio data based on the audio data and the directionality information.
In a particular aspect, a non-transitory computer-readable device stores instructions that are executable by one or more processors to cause the one or more processors to obtain audio data captured by a microphone of a wearable device. The instructions further cause the one or more processors to determine directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device. The instructions also cause the one or more processors to generate spatial audio data based on the audio data and the directionality information.
In a particular aspect, an apparatus includes means for obtaining audio data captured by a microphone of a wearable device. The apparatus also includes means for determining directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device. The apparatus further includes means for generating spatial audio data based on the audio data and the directionality information.
Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.
Complicated, special-purpose microphone arrays are usually used to capture spatial audio. For example, the typical setup to capture sound to generate a first-order ambisonics representation of the sound includes at least four microphones including one omnidirectional microphone and three directional (e.g., figure-eight) microphones oriented along orthogonal axes.
Although directional microphone arrays are complex and not in common use, wearable devices that include a microphone (or perhaps a few microphones) arranged to capture speech of a user are quite common and inexpensive. For example, wireless headphones, earbuds, headsets, extended reality (XR) devices (e.g., XR headsets and XR glasses), etc. are increasingly common and often used for audio capture, among other things. Aspects disclosed herein enable use of such devices (e.g., wireless wearable devices that include one or more microphones) to generate spatial audio data without the use of special-purpose directional microphone arrays.
For example, a wearable device with a microphone can be used in conjunction with a reference device. The wearable device can be configured such that the microphone is disposed at a fixed position relative to the user's mouth. To illustrate, the wearable device can be head mounted, such that when the user's head moves, the microphone also moves and remains stationary relative to the user's mouth. Alternatively, the wearable device can be configured such that movement of the user's mouth relative to the microphone is constrained. To illustrate, the wearable device can be chest mounted or coupled to a limb of the user, such that when the user's head moves, the microphone moves a small amount relative to the user's mouth.
The reference device and the wearable device can exchange signals, such as advertisement packets, beacon signals, or signals encoding audio data captured by the wearable device, and the reference device can use the signals to determine directionality information indicating a direction from the reference device to the wearable device. For example, the reference device can determine the angle of arrival of signals from the wearable device. As another example, the reference device can determine range information indicating a distance between the wearable device and the reference device.
The directionality information and the audio data captured by the wearable device can be used to generate spatial audio data, such as first-order ambisonics data. For example, the spatial audio data may represent the audio data as coming from a sound source that is at the location of the wearable device relative to the reference device. As the user moves relative to the reference device, a sound source represented in the spatial audio data moves due to the changing position of the wearable device relative to the reference device.
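As a non-limiting illustration of one way such encoding could be performed (not a required implementation), the following Python sketch encodes a mono signal into first-order ambisonics channels given a direction of arrival; the function name, the ACN/SN3D conventions, and the optional 1/r distance gain are assumptions made for illustration only.

```python
from typing import Optional
import numpy as np

def encode_foa(mono: np.ndarray, azimuth: float, elevation: float,
               distance: Optional[float] = None) -> np.ndarray:
    """Return a (4, N) array of first-order ambisonic channels (W, Y, Z, X)."""
    gain = 1.0 if distance is None else 1.0 / max(distance, 1.0)  # assumed 1/r rolloff, clamped
    s = gain * np.asarray(mono, dtype=float)
    w = s                                            # omnidirectional component (SN3D)
    y = s * np.sin(azimuth) * np.cos(elevation)      # left-right component
    z = s * np.sin(elevation)                        # up-down component
    x = s * np.cos(azimuth) * np.cos(elevation)      # front-back component
    return np.stack([w, y, z, x])
```

In such a sketch, the azimuth, elevation, and distance would correspond to the directionality information, and the mono input would correspond to the audio data captured at the wearable device.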
In some embodiments, the audio data can be sent from the wearable device to the reference device (or to another device) via the same signals that are used to determine the position of the wearable device relative to the reference device. In such examples, no additional processing burden is placed on the wearable device to enable generation of spatial audio data, thereby conserving resources (e.g., battery power) of the wearable device and enabling the use of commonly available wearable devices, rather than special-purpose microphone arrays, to generate spatial audio data.
One problem with widespread use of 3D audio capture is the complexity and expense of 3D audio capture equipment. For example, many users capture audio using portable computing devices, such as smartphones, which are often too small to house a 3D audio capture microphone array. Further, many users use wireless headphones, earbuds, or other similar devices to capture audio data. Such devices are generally configured specifically for capturing speech of the user; accordingly, the microphones of such devices are usually configured to remain at a fixed position relative to the user's mouth to improve the quality of captured speech audio. One benefit of such devices is that they allow the user to move about the environment while capturing high-quality audio. A user's movement about the environment represents a circumstance in which 3D audio capture would be useful (e.g., to adapt audio output to represent the user's movement); however, it is problematic to capture 3D audio in this situation because of the fixed microphone position of a wireless device worn by the user, the limited microphones available on a smartphone, and the cost and complexity of special-purpose 3D audio capture equipment.
Disclosed embodiments provide a solution to these and other problems by generating spatial audio data using audio captured at a wearable device and directionality information based on signals exchanged between the wearable device and a reference device. For example, the user can move around the environment while speaking, while a reference device (such as a smartphone) exchanges signals with the wearable device. The distance, direction, or both, between the reference device and the wearable device are determined based on the signal exchange, and the audio data captured at the wearable device is modified based on the distance, direction, or both, to generate spatial audio data. For example, direction information can be determined based on the angle of arrival of the exchanged signals, distance information can be determined based on signal strength indicators associated with the exchanged signals, or both. Thus, one technical advantage of the disclosed embodiments is that low-cost and readily available audio capture equipment can be used to capture spatial audio data (e.g., 3D audio data). For example, the audio data can be processed to generate ambisonics audio data (such as a first-order or second-order ambisonics representation of the audio captured at the wearable device).
Particular aspects of the present disclosure are described below with reference to the drawings. In the description, common features are designated by common reference numbers. As used herein, various terminology is used for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments. For example, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Further, some features described herein are singular in some embodiments and plural in other embodiments. To illustrate,
In some drawings, multiple instances of a particular type of feature are used. Although these features are physically and/or logically distinct, the same reference number is used for each, and the different instances are distinguished by addition of a letter to the reference number. When the features as a group or a type are referred to herein, e.g., when no particular one of the features is being referenced, the reference number is used without a distinguishing letter. However, when one particular feature of multiple features of the same type is referred to herein, the reference number is used with the distinguishing letter. For example, referring to
As used herein, the terms “comprise,” “comprises,” and “comprising” may be used interchangeably with “include,” “includes,” or “including.” Additionally, the term “wherein” may be used interchangeably with “where.” As used herein, “exemplary” indicates an example, an embodiment, and/or an aspect, and should not be construed as limiting or as indicating a preference or a preferred embodiment. As used herein, an ordinal term (e.g., “first,” “second,” “third,” etc.) used to modify an element, such as a structure, a component, an operation, etc., does not by itself indicate any priority or order of the element with respect to another element, but rather merely distinguishes the element from another element having a same name (but for use of the ordinal term). As used herein, the term “set” refers to one or more of a particular element, and the term “plurality” refers to multiple (e.g., two or more) of a particular element.
As used herein, “coupled” may include “communicatively coupled,” “electrically coupled,” or “physically coupled,” and may also (or alternatively) include any combinations thereof. Two devices (or components) may be coupled (e.g., communicatively coupled, electrically coupled, or physically coupled) directly or indirectly via one or more other devices, components, wires, buses, networks (e.g., a wired network, a wireless network, or a combination thereof), etc. Two devices (or components) that are electrically coupled may be included in the same device or in different devices and may be connected via electronics, one or more connectors, or inductive coupling, as illustrative, non-limiting examples. In some embodiments, two devices (or components) that are communicatively coupled, such as in electrical communication, may send and receive signals (e.g., digital signals or analog signals) directly or indirectly, via one or more wires, buses, networks, etc. As used herein, “directly coupled” may include two devices that are coupled (e.g., communicatively coupled, electrically coupled, or physically coupled) without intervening components.
In the present disclosure, terms such as “determining,” “calculating,” “estimating,” “shifting,” “adjusting,” etc. may be used to describe how one or more operations are performed. It should be noted that such terms are not to be construed as limiting and other techniques may be utilized to perform similar operations. Additionally, as referred to herein, “generating,” “calculating,” “estimating,” “using,” “selecting,” “accessing,” and “determining” may be used interchangeably. For example, “generating,” “calculating,” “estimating,” or “determining” a parameter (or a signal) may refer to actively generating, estimating, calculating, or determining the parameter (or the signal) or may refer to using, selecting, or accessing the parameter (or signal) that is already generated, such as by another component or device.
The wearable device 110 is configured to be worn by the user 112 to capture sound 182 to generate the audio data 162. For example, the wearable device 110 can include a head-mounted device (e.g., a headset, earbuds, glasses, etc.) that includes the microphone(s) 120 configured to capture the sound 182 associated with speech of the user 112. In other examples, the wearable device 110 is configured to be worn on another portion of the user's body, such as on the user's chest or arm. In some embodiments, at least one of the microphone(s) 120 is operable to capture sound 182 generated by the user 112 while the user 112 moves relative to the reference device 150. As compared to movement of the user 112 relative to the reference device 150, the microphone(s) 120 remain stationary (or nearly stationary) relative to a source of sound 182 generated by the user 112. For example, when the wearable device 110 is a head-mounted device, the microphone(s) 120 can be disposed at a fixed location (during use) relative to the user's mouth to capture speech of the user 112. As another example, when the wearable device 110 is worn on the chest or a limb of the user 112, the microphone(s) 120 can move during audio capture (e.g., due to turning of the user's head); however, such motion results in a relatively small position change as compared to movement of the user 112 relative to the reference device 150, and as such, can be ignored in some embodiments.
The reference device 150 is configured to exchange one or more signals 184 with the wearable device 110 and to generate directionality information 160 based on the signal(s) 184. For example, in
In some embodiments, the reference device 150 may be integrated within a wireless access point configured to support a wireless local area network (WLAN), such as a WIFI® network (WI-FI® is a registered trademark of the Wi-Fi Alliance Corp., a California corporation). In other embodiments, the reference device 150 is integrated within a portable communication device, such as a smart phone. To illustrate, the portable communication device may be configured to support a WLAN or a personal area network (PAN), such as one or more Bluetooth communication links (BLUETOOTH® is a registered trademark of Bluetooth SIG, Inc., a Delaware Corporation).
In some embodiments, ranging information, angle of arrival information, or both, can be determined based on a received signal strength of one or more signals 184 at particular antennas 154. Additionally, or alternatively, the reference device 150 may be configured to determine a direction (e.g., an angle of arrival) using beamforming techniques, enabling the reference device 150 to use multilateration techniques to determine location coordinates of the wearable device 110. Further, in some embodiments, the reference device 150, the wearable device 110, or both, are within a coverage area of several access points of a WLAN (e.g., part of a mesh network), and one or more access points may determine location information associated with the reference device 150, the wearable device 110, or both, and exchange the location information with the reference device 150 to enable the reference device 150 to determine more accurate and/or more precise directionality information 160.
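As a non-limiting illustration of the multilateration techniques mentioned above, the following Python sketch shows a conventional linearized least-squares estimate of a 2-D position from range estimates to anchors (e.g., access points or spatially diverse antennas) at known coordinates; the function name and formulation are assumptions, and at least three non-collinear anchors are needed.

```python
import numpy as np

def multilaterate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Estimate a 2-D position from ranges to anchors at known coordinates.

    anchors: (M, 2) anchor coordinates; distances: (M,) measured ranges.
    Uses the standard linearization relative to the first anchor.
    """
    x0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position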
In some implementations, the reference device 150 can use phase-based ranging (based on the signals 184) to estimate the directionality information 160. One example of a phase-based ranging technique is high-accuracy distance measurement (HADM) based on Bluetooth Low-Energy (BLE) transmissions. Additionally, or alternatively, the reference device 150 can use signal strength-based techniques to estimate the directionality information 160. To illustrate, the signals 184 can include a transmission power indicator, and the reference device 150 can compare the transmission power indicator to a signal strength of the signal 184 as received at the reference device 150 to estimate ranging information. Multilateration based on the received signal strength or received signal strength fingerprinting for spatially diverse antennas can be used to estimate the directionality information 160.
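As a non-limiting illustration of the signal strength-based technique described above, the following Python sketch estimates distance from a transmission power indicator and the received signal strength using a log-distance path-loss model; the parameter values are assumptions, and real deployments typically calibrate them per environment.

```python
def estimate_range_from_rssi(tx_power_dbm: float, rssi_dbm: float,
                             path_loss_exponent: float = 2.0) -> float:
    """Estimate distance (meters) from a transmit-power indicator and RSSI.

    tx_power_dbm is assumed to be the expected RSSI at 1 m; a path-loss
    exponent of 2.0 corresponds to free space and is an assumption.
    """
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```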
In some embodiments, the wearable device 110 sends the signals 184 periodically or occasionally, and the reference device 150 determines the directionality information 160 based on an angle of arrival of the signals 184, ranging information associated with the signals 184, or both. For example, the reference device 150 can include two or more antennas 154, and the angle of arrival of the signals 184 can be determined based on phases of waveforms of the signals 184 as received at the two or more antennas 154. In some such embodiments, the reference device 150 can determine a received signal strength of the signals 184 and compare the received signal strength to a transmitted signal strength of the signals 184 to determine ranging information. In such embodiments, the transmitted signal strength of the signals 184 can be indicated in a data field of the signals 184 or can be determined based on prior agreement (e.g., based on a communication protocol specification or based on communication link set up data exchange between the wearable device 110 and the reference device 150). In some embodiments, the ranging information can be determined based on multiple factors, such as differences in angle of arrival of the signals 184 at multiple spatially diverse antennas 154 and signal strength information.
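As a non-limiting illustration of estimating an angle of arrival from the phases of waveforms received at two antennas, the following Python sketch applies the standard two-element relationship; it assumes a far-field plane wave, antenna spacing of no more than half a wavelength (so the phase difference is unambiguous), and placeholder names.

```python
import numpy as np

def angle_of_arrival(phase_a: float, phase_b: float,
                     antenna_spacing: float, wavelength: float) -> float:
    """Estimate the angle of arrival (radians, relative to broadside)."""
    delta_phi = np.angle(np.exp(1j * (phase_b - phase_a)))        # wrap to (-pi, pi]
    sin_theta = delta_phi * wavelength / (2.0 * np.pi * antenna_spacing)
    return float(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
```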
In some embodiments, the reference device 150 sends the signals 184 periodically or occasionally, and the wearable device 110 determines the directionality information 160 based on an angle of arrival of the signals 184, ranging information associated with the signals 184, or both, using similar techniques to those described above. In such embodiments, the wearable device 110 can send the directionality information 160 to the reference device 150.
In some embodiments, the wearable device 110 encodes the audio data 162 to generate a stream of data packets, and transmits the data packets via the signals 184. In such embodiments, the reference device 150 can determine the directionality information 160 based on signals (e.g., the signals 184) that encode the audio data 162. A device 192, the reference device 150, or both, may be configured to receive encoded data representing the audio data via the signals 184 and to decode the encoded data to generate the audio data 162. For example, the wearable device 110 can generate data packets that include data representing the audio data 162 and can transmit the data packets via the signals 184 using a wireless protocol, such as a BLUETOOTH® protocol, a WIFI® protocol, or another local area or personal area wireless communication protocol. In this example, the reference device 150 can receive the signals 184 and determine the directionality information based on the signals 184. The reference device 150, the device 192, or both, can also process the data packets to reconstruct the audio data 162.
In
During operation of the system 100 in one example, the user 112 can walk about in a room or other space in which the reference device 150 is located while speaking. In this example, at a first time (T0), the user 112 is at a location 130A. As the user 112 speaks, the microphone(s) 120 of the wearable device 110 capture sound 182A representing the user's speech. The wearable device 110 processes the sound 182A to generate a portion of the audio data 162 that represents the sound 182A. The wearable device 110 transmits signals 184A representing the portion of the audio data 162 representing the sound 182A from the location 130A.
The reference device 150 receives the signals 184A and determines an angle of arrival of the signals 184A to determine a direction to the location 130A. Directionality information 160 for time T0 includes an indication of the direction to the location 130A. Optionally, in some embodiments, the reference device 150 also determines range information based on a signal strength of the signals 184A (e.g., based on a received signal strength and a transmitted signal strength indicator). The range information indicates a distance to the location 130A. In such embodiments, directionality information 160 for time T0 also includes the range information.
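As a non-limiting illustration of how the direction and range components of such directionality information could be combined, the following Python sketch converts an azimuth, an elevation, and a distance into Cartesian coordinates of the wearable device relative to the reference device; the x-forward, y-left, z-up axis convention is an assumption.

```python
import numpy as np

def direction_to_position(azimuth: float, elevation: float,
                          distance: float) -> np.ndarray:
    """Convert directionality information (radians, meters) to coordinates."""
    return distance * np.array([np.cos(elevation) * np.cos(azimuth),   # forward
                                np.cos(elevation) * np.sin(azimuth),   # left
                                np.sin(elevation)])                    # up
```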
The spatial audio generator 140 receives the portion of the audio data 162 representing the sound 182A and the directionality information 160 for time T0, and determines a portion of the spatial audio data 180 for the time T0. For example, the spatial audio generator 140 can treat the portion of the audio data 162 representing the sound 182A as though it were captured by an ambisonics microphone array positioned at the reference device 150 from a sound source at the location 130A. In some embodiments, the spatial audio generator 140 receives the portion of the audio data 162 representing the sound 182A from the wearable device 110 (e.g., both the reference device 150 and the device 192 receive the signals 184A). In some embodiments, the spatial audio generator 140 receives the portion of the audio data 162 representing the sound 182A from the reference device 150.
At a time T1 (after time T0), the user 112 has moved to a location 130B. As the user 112 speaks at the location 130B, the microphone(s) 120 of the wearable device 110 capture sound 182B representing the user's speech. The wearable device 110 sends a portion of the audio data 162 representing the sound 182B via signals 184B from the location 130B.
The reference device 150 receives the signals 184B and determines the directionality information 160 for time T1 based on the angle of arrival of the signals 184B, signal strength information associated with the signals 184B, or both. The spatial audio generator 140 receives the portion of the audio data 162 representing the sound 182B and the directionality information 160 for time T1, and determines a portion of the spatial audio data 180 for the time T1. For example, the spatial audio generator 140 can treat the portion of the audio data 162 representing the sound 182B as though it were captured by an ambisonics microphone array positioned at the reference device 150 from a sound source at the location 130B.
Likewise, at a time T2 (after time T1), the user 112 has moved to a location 130C. As the user 112 speaks at the location 130C, the microphone(s) 120 of the wearable device 110 capture sound 182C representing the user's speech. The wearable device 110 sends a portion of the audio data 162 representing the sound 182C via signals 184C from the location 130C.
The reference device 150 receives the signals 184C and determines the directionality information 160 for time T2 based on the angle of arrival of the signals 184C, signal strength information associated with the signals 184C, or both. The spatial audio generator 140 receives the portion of the audio data 162 representing the sound 182C and the directionality information 160 for time T2, and determines a portion of the spatial audio data 180 for the time T2. For example, the spatial audio generator 140 can treat the portion of the audio data 162 representing the sound 182C as though it were captured by an ambisonics microphone array positioned at the reference device 150 from a sound source at the location 130C.
As the example above illustrates, the system 100 is able to generate spatial audio data 180 (such as ambisonics data) using a less complex microphone arrangement than is conventional for spatial audio capture. To illustrate, first-order ambisonics audio capture typically uses a multi-microphone array that includes special purpose microphones. As one example, an ambisonics microphone array can include an omnidirectional microphone and three directional (e.g., figure-eight) microphones.
Although particular communication protocols and corresponding networks are described with reference to
Although
In various embodiments, the reference device 150, the device 192, or both, include, correspond to, or are included within a smart speaker, a speaker bar, a smart phone, a cellular phone, a laptop computer, a computer, a tablet, a personal digital assistant, a display device, a television, a gaming console, a music player, a radio, a digital video player, a tuner, a camera, a navigation device, a home automation system, a voice-activated device, a wireless speaker and voice activated device, a portable electronic device, a communication device, an internet-of-things (IoT) device, an extended reality (XR) device, a base station, or a mobile device. Additionally, or alternatively, in various embodiments, the wearable device 110 includes, corresponds to, or is included within a headset, an augmented reality headset, a mixed reality headset, a virtual reality headset, extended reality glasses, one or more earbuds, a wireless microphone device (e.g., a lapel microphone), etc.
In
In
As described with reference to
Optionally, the communication system 152 can also include a range detector 244. The range detector 244 is configured to determine a distance from the reference device 150 to the wearable device 110 based on the signal(s) 184. To illustrate, information indicative of attenuation of the signal(s) 184 can be used to determine the distance. As one example, the attenuation of the signal(s) 184 can be determined based on a received signal strength of the signal(s) 184 and a transmitted signal strength of the signal(s) 184. As another example, the communication system 152 can determine the distance and direction to the wearable device 110 based on phase and signal strength of the signal(s) 184 as received at spatially diverse antennas of the antenna(s) 154.
As another example, the communication system 152 can optionally include a beamformer 202. The beamformer 202 can include, be included within, or correspond to, or be coupled to the direction detector 242, the range detector 244, or both. In this example, the beamformer 202 may be configured to sweep one or more beams about an area around the reference device 150 to determine a direction from the reference device 150 to the wearable device 110, a distance from the reference device 150 to the wearable device 110, or both.
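As a non-limiting illustration of one way such a beam sweep could be performed, the following Python sketch runs a narrowband delay-and-sum sweep over candidate angles for a uniform linear array and returns the angle with maximum output power; the array geometry, narrowband assumption, and names are assumptions.

```python
import numpy as np

def sweep_beams(snapshots: np.ndarray, spacing: float, wavelength: float,
                num_angles: int = 181) -> float:
    """Return the steering angle (radians) with maximum delay-and-sum power.

    snapshots: (num_antennas, num_samples) complex baseband samples from a
    uniform linear array.
    """
    num_antennas = snapshots.shape[0]
    angles = np.linspace(-np.pi / 2, np.pi / 2, num_angles)
    n = np.arange(num_antennas)
    powers = []
    for theta in angles:
        # Model of the per-antenna phase for a plane wave from angle theta.
        steering = np.exp(1j * 2 * np.pi * spacing / wavelength * n * np.sin(theta))
        beam = steering.conj() @ snapshots            # coherently combine antennas
        powers.append(np.mean(np.abs(beam) ** 2))
    return float(angles[int(np.argmax(powers))])
```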
In
In some embodiments, the audio data 162 extracted from (or determined based on) the signal(s) 184 is stored at the memory 210 for processing by the processor(s) 190. In such embodiments, the directionality information 160 may also be stored at the memory 210. The directionality information 160 can include or correspond to direction information determined by the direction detector 242, range information determined by the range detector 244, or both.
The spatial audio generator 140 is configured to generate the spatial audio data 180 based on the audio data 162 and directionality information 160 associated with the audio data 162. In some embodiments, in addition to the spatial audio generator 140, the processor(s) 190 can include a pre-processor 240 that is configured to modify the audio data 162 and provide the modified audio data to the spatial audio generator 140. For example, the pre-processor 240 can perform operations to emphasize some audio components of the audio data 162 (e.g., target audio components), can perform operations to de-emphasize other audio components of the audio data 162 (e.g., non-target audio components), or both.
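As a non-limiting illustration of one way such a pre-processor could de-emphasize non-target components, the following Python sketch applies a crude magnitude-domain subtraction of components that also appear in a time-aligned reference signal (e.g., audio captured by a microphone of the reference device 150); the framing, the subtraction factor, and the function name are assumptions.

```python
import numpy as np

def de_emphasize_noise(wearable_audio: np.ndarray, reference_audio: np.ndarray,
                       frame: int = 512, alpha: float = 1.0) -> np.ndarray:
    """Attenuate components of the wearable audio that also appear in the
    reference audio. Inputs are assumed time-aligned, mono, equal length;
    alpha controls how aggressively shared components are removed."""
    out = np.zeros(len(wearable_audio), dtype=float)
    for start in range(0, len(wearable_audio) - frame + 1, frame):
        w = np.fft.rfft(wearable_audio[start:start + frame])
        r = np.fft.rfft(reference_audio[start:start + frame])
        mag = np.maximum(np.abs(w) - alpha * np.abs(r), 0.0)   # floor at zero
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(w)), n=frame)
    return out  # samples past the last full frame are left as zeros in this sketch
```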
As an example, in
Optionally, the reference device 150 can include one or more sensors and the spatial audio data 180 can be processed in conjunction with data from the sensor(s) to form a multimedia stream that includes spatial audio. For example, in
In the description above, the audio data 162 is described as being transmitted via the same signal(s) 184 that are used to determine the directionality information 160; however, in other embodiments, different signals are used to determine the directionality information 160 than are used to transmit the audio data 162. For example, the reference device 150 can transmit signals that are received by the wearable device 110. In this example, the wearable device 110 can send information descriptive of the signals received from the reference device 150 to the reference device 150 for processing to determine the directionality information 160. To illustrate, the reference device 150 can periodically or occasionally transmit beacon signals or advertisement packets that are received by the wearable device 110. In this illustrative example, the wearable device 110 can send information such as angle of arrival, time of receipt, signal strength, etc., characteristic of the signals transmitted by the reference device 150 to the reference device 150, and the reference device 150 can determine the directionality information 160 based on the information. As another example, the wearable device 110 can transmit the audio data 162 via the data packets 284 and can also transmit other signals (e.g., advertisement packets, beacon signals, ranging and direction signals, etc.). In this example, the reference device 150 determines the directionality information 160 based on the other signals.
In
In
In
As one example, during a call, the wearable device 110 can capture the sound 182 representing speech of a user and generate the audio data 162. The audio data 162 and associated directionality information 160 can be sent to the device 192 (e.g., a server) to generate the spatial audio data 180. The spatial audio data 180 can be sent back to the wearable device 110 or the reference device 150 for transmission to one or more other participants in the call. In this example, the spatial audio data 180 can be rendered by devices associated with the other participant(s) of the call such that movement of the user of the wearable device 110 within a space that includes the reference device 150 is rendered as movement of a sound source in the spatial audio data 180. To illustrate, if the user of the wearable device 110 walks around a room that includes the reference device 150 while speaking, the spatial audio data 180 provided to the other participant(s) of the call reflects such movement.
In the example illustrated in
The communication system 152 is configured to determine information indicative of the position of the headset device 502 relative to the laptop computing device 504. For example, the communication system 152 can determine the directionality information 160. In some embodiments, the communication system 152 determines characteristics of the signals 184 that are related to an angle of arrival of the signals 184, characteristics of the signals 184 that are related to a range to the headset device 502, or both, and one or more other components of the laptop computing device 504 determine the directionality information. For example, in such embodiments the laptop computing device 504 may also include the direction detector 242, the range detector 244, or both, of
Although the laptop computing device 504 of
As one example, during use, a user of the headset device 502 can speak while moving relative to the laptop computing device 504. To illustrate, while on a call or a video conference, the user can move around a room. In this example, the spatial audio data can be generated such that a sound source corresponding to the user moves around in the spatial audio data, even though the user is not moving relative to the microphone(s) 120. Thus, the system 500 enables generation of spatial audio data without complicated, special-purpose microphone arrays.
Optionally, the laptop computing device 504 can include a camera 522. In embodiments in which the laptop computing device 504 includes the camera 522, the spatial audio data from the spatial audio generator 140 can be combined with video data from the camera 522 to generate a multimedia stream that includes the spatial audio data.
Optionally, the laptop computing device 504 can include one or more additional microphones 524. In embodiments in which the laptop computing device 504 includes the microphone(s) 524, audio data captured by the microphone(s) 524 can be used to modify the audio data captured by the microphone(s) 120, such as to de-emphasize noise components in the spatial audio data.
The game console 604 is configured to communicate with a game controller 606 and with the XR headset device 602. To illustrate, in
In some embodiments, multiple types of signals 184 can be exchanged by the XR headset device 602 and the game console 604. For example, in addition to game information and audio data, the signals 184 exchanged by the XR headset device 602 and the game console 604 can include ranging and/or positioning signals used by the communication system 152 and/or the spatial audio generator 140 to determine a direction and/or distance (e.g., the directionality information 160 of
In some embodiments, the communication system 152 of the game console 604 is configured to determine information indicative of the position of the XR headset device 602 relative to the game console 604. For example, the communication system 152 can determine the directionality information 160. In some embodiments, the communication system 152 determines characteristics of the signals 184 that are related to an angle of arrival of the signals 184, characteristics of the signals 184 that are related to a range to the XR headset device 602, or both, and one or more other components of the game console 604 determine the directionality information. For example, in such embodiments the game console 604 may also include the direction detector 242, the range detector 244, or both, of
Although the game console 604 of
As one example, during use, a user of the XR headset device 602 can speak while moving relative to the game console 604. To illustrate, while playing a multiplayer game, the user can move around a room and speak to other players. In this example, the other players can receive spatial audio data in which a sound source corresponding to the user moves around in the in-game audio, even though the user is not moving relative to the microphone(s) 120. Thus, the system 600 enables generation of spatial audio data without complicated, special-purpose microphone arrays.
Optionally, the system 600 can include other features of any of
In the example illustrated in
In the example illustrated in
The communication system 152 is configured to determine information indicative of the position of at least one of the earbuds 702 relative to the mobile communication device 704. For example, the communication system 152 can determine the directionality information 160 of
Although the mobile communication device 704 of
As one example, during use, a user of the earbuds 702 can speak while moving relative to the mobile communication device 704. To illustrate, while on a call or a video conference, the user can move around a room. In this example, the spatial audio data can be generated such that a sound source corresponding to the user moves around in the spatial audio data, even though the user is not moving relative to the microphone(s) 120. Thus, the system 700 enables generation of spatial audio data without complicated, special-purpose microphone arrays.
Optionally, the mobile communication device 704 can include a camera 722. In embodiments in which the mobile communication device 704 includes the camera 722, the spatial audio data from the spatial audio generator 140 can be combined with video data from the camera 722 to generate a multimedia stream that includes spatial audio data.
Optionally, the mobile communication device 704 can include one or more additional microphones 720. In embodiments in which the mobile communication device 704 includes the microphone(s) 720, audio data captured by the microphone(s) 720 can be used to modify the audio data captured by the microphone(s) 120, such as to de-emphasize noise components in the spatial audio data.
Although the system 700 illustrates two earbuds 702 of a pair, in other embodiments, the system 700 can include a single earbud (e.g., the earbud 702A) that is configured to exchange the signals 184 with the mobile communication device 704. For example, the user can wear both earbuds 702, but only one of the earbuds 702 includes the communication system 252. Further, although
In the example illustrated in
The communication system 152 is configured to determine information indicative of the position of the XR glasses 802 relative to the vehicle 804. For example, the communication system 152 can determine the directionality information 160. In some embodiments, the communication system 152 determines characteristics of the signals 184 that are related to an angle of arrival of the signals 184, characteristics of the signals 184 that are related to a range to the XR glasses 802, or both, and one or more other components of the vehicle 804 determine the directionality information. For example, in such embodiments the vehicle 804 may also include the direction detector 242, the range detector 244, or both, of
Although the vehicle 804 of
As one example, during use, a user of the XR glasses 802 can speak while moving relative to the vehicle 804. To illustrate, while on a call or a video conference, the user can move around a scene. In this example, the spatial audio data can be generated such that a sound source corresponding to the user moves around in the spatial audio data, even though the user is not moving relative to the microphone(s) 120. Thus, the system 800 enables generation of spatial audio data without complicated, special-purpose microphone arrays.
In some embodiments, the vehicle 804 and the user of the XR glasses 802 move together. For example, the user can be a passenger of the vehicle 804. In such embodiments, the communication system 152 can be disposed on or in a portion of the vehicle 804 that the user moves with respect to. To illustrate, when the vehicle 804 is an aircraft, the user can move along an aisle of the aircraft. In some embodiments, the user of the XR glasses 802 can remain stationary and the vehicle 804 can move relative to the user. For example, the vehicle 804 can include an unmanned aerial vehicle (UAV) that is configured to move around a scene or around the user. In this example, the movement of the vehicle 804 relative to the user can be reflected in directionality information indicating the distance and/or direction from the vehicle 804 to the XR glasses 802. In this example, the spatial audio data represents a sound source associated with the user moving as the vehicle 804 moves. Alternatively, in some embodiments, the vehicle 804 includes an onboard positioning system (e.g., a local positioning system, a global positioning system, or both) configured to detect movement of the vehicle 804. In such embodiments, the movement of the vehicle 804 can be removed from the directionality information such that a sound source corresponding to the user is stationary in the spatial audio data when the vehicle 804 moves and the sound source moves when the user moves in a manner that changes the distance and/or direction from the vehicle 804 to the XR glasses 802.
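As a non-limiting illustration of removing the vehicle's own movement from the directionality information, the following Python sketch adds the vehicle displacement reported by an onboard positioning system back into the relative position of the wearable device so that a stationary user does not appear to move in the spatial audio data; it assumes both vectors are expressed in the same world-aligned frame and ignores vehicle rotation, which a fuller implementation would also account for.

```python
import numpy as np

def compensate_vehicle_motion(relative_position: np.ndarray,
                              vehicle_displacement: np.ndarray) -> np.ndarray:
    """Remove the vehicle's own displacement from the wearable device's
    position relative to the vehicle (both 3-D vectors, same frame)."""
    return relative_position + vehicle_displacement
```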
Optionally, the vehicle 804 can include a camera 822. In embodiments in which the vehicle 804 includes the camera 822, the spatial audio data from the spatial audio generator 140 can be combined with video data from the camera 822 to generate a multimedia stream that includes spatial audio data.
Optionally, the vehicle 804 can include one or more additional microphones 820. In embodiments in which the vehicle 804 includes the microphone(s) 820, audio data captured by the microphone(s) 820 can be used to modify the audio data captured by the microphone(s) 120, such as to de-emphasize noise components in the spatial audio data.
The camera 904 includes the communication system 152 and the spatial audio generator 140. The system 900 may optionally include other features of any of
In the example illustrated in
The communication system 152 is configured to determine information indicative of the position of the portable microphone device 902 relative to the camera 904. For example, the communication system 152 can determine the directionality information 160. In some embodiments, the communication system 152 determines characteristics of the signals 184 that are related to an angle of arrival of the signals 184, characteristics of the signals 184 that are related to a range to the portable microphone device 902, or both, and one or more other components of the camera 904 determine the directionality information. For example, in such embodiments the camera 904 may also include the direction detector 242, the range detector 244, or both, of
Although the camera 904 of
As one example, during use, a user of the portable microphone device 902 can speak while moving relative to the camera 904. To illustrate, while recording a video segment, the user can move around a scene. In this example, the spatial audio data can be generated such that a sound source corresponding to the user 112 moves around in the spatial audio data, even though the user 112 is not moving relative to the microphone(s) 120. Thus, the system 900 enables generation of spatial audio data without complicated, special-purpose microphone arrays. Optionally, the spatial audio data from the spatial audio generator 140 can be combined with video data from the camera 904 to generate a multimedia stream that includes spatial audio data. In embodiments in which the camera 904 includes the optional microphone(s) 920, audio data captured by the microphone(s) 920 can be used to modify the audio data captured by the microphone(s) 120, such as to de-emphasize noise components in the spatial audio data.
The examples illustrated in
Referring to
In a particular aspect, the method 1000 includes, at block 1002, obtaining, at one or more processors, audio data captured by a microphone of a wearable device. For example, a user can wear the wearable device in a manner that positions the microphone to capture speech or other sounds produced by the user. In this example, a communication system of the wearable device can send audio data representing the sound captured by the microphone to the reference device or to another device. To illustrate, the wearable device 110 of
The method 1000 also includes, at block 1004, determining, by the one or more processors, directionality information indicative of a direction between the microphone and the reference device based on one or more signals exchanged between the wearable device and the reference device. For example, the communication system 152 of the reference device 150 of
The method 1000 includes, at block 1006, generating, at the one or more processors, spatial audio data (e.g., ambisonics data, such as first-order or higher-order ambisonics data) based on the audio data and the directionality information. For example, the directionality information can be used to specify a location of a sound source corresponding to the audio data. In this example, the sound source moves within the spatial audio data as the wearable device moves relative to the reference device. To illustrate, the microphone can be configured to capture sound corresponding to the audio data at a fixed location relative to a source of sound (e.g., the user's mouth). In this illustrative example, at a second time, the user can move to a different location, resulting in movement of the wearable device relative to the reference device. In this situation, the method 1000 can include determining updated directionality information based on one or more additional signals exchanged between the wearable device and the reference device. The method 1000 can also include generating updated spatial audio data based on the updated directionality information. In the updated spatial audio data movement over time of the wearable device relative to the reference device is represented as movement of the source of the sound.
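As a non-limiting illustration tying the blocks of the method 1000 together, the following Python sketch consumes per-frame audio data and the most recent directionality estimates and yields spatial audio frames; the names and frame alignment are assumptions, and encode_foa refers to the earlier illustrative sketch.

```python
def generate_spatial_stream(audio_frames, direction_estimates):
    """Yield a spatial audio frame for each (audio frame, directionality) pair.

    audio_frames: iterable of mono sample arrays; direction_estimates:
    iterable of (azimuth, elevation, distance) tuples updated over time.
    """
    for mono_frame, (azimuth, elevation, distance) in zip(audio_frames,
                                                          direction_estimates):
        yield encode_foa(mono_frame, azimuth, elevation, distance)
```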
In some embodiments, the one or more signals exchanged between the wearable device and the reference device include signals encoding the audio data. For example, the signal(s) can include encoded data representing the audio data (e.g., in a set of data packets), and the method 1000 can include decoding the encoded data to generate the audio data.
Determining the directionality information can include, for example, determining an angle of arrival of the one or more signals. In this example, the angle of arrival corresponds to or indicates a direction from the reference device to the wearable device. Determining the directionality information can also, or alternatively, include determining range information indicative of a distance (or a change of distance) between the microphone and the reference device. To illustrate, the range information can be determined based on a signal strength indicator associated with the signals. For example, a transmitted signal strength of the signals can be compared to a received signal strength of the signals to estimate a distance traversed by the signals.
In some embodiments, the method 1000 can also include obtaining video data associated with the audio data, processing the video data in conjunction with the spatial audio data, and encoding the video data and the spatial audio data for transmission or storage. For example, the reference device 150 of
In some embodiments, the audio data can be modified before the spatial audio data is generated. For example, the audio data can be modified to emphasize target audio components (e.g., speech), to de-emphasize non-target audio components (e.g., noise), or both. As one example, the reference device 150 of
The method 1000 of
Referring to
In a particular embodiment, the device 1100 includes a processor 1106 (e.g., a CPU). The device 1100 may include one or more additional processors 1110 (e.g., one or more DSPs). In a particular aspect, the processor(s) 190 of
The device 1100 may include the memory 210 and a CODEC 1134. The memory 210 may include the instructions 212, that are executable by the one or more additional processors 1110 (or the processor 1106) to implement the functionality described with reference to the spatial audio generator 140. The device 1100 may include the modem 206 coupled, via a transceiver 1150, to the antenna(s) 154.
The device 1100 may include a display 1128 coupled to a display controller 1126. One or more speakers 1192, one or more microphones 220, or a combination thereof may be coupled to the CODEC 1134. The CODEC 1134 may include a digital-to-analog converter (DAC) 1102, an analog-to-digital converter (ADC) 1104, or both. In a particular embodiment, the CODEC 1134 may receive analog signals from the microphone(s) 220, convert the analog signals to digital signals using the analog-to-digital converter 1104, and provide the digital signals to the speech and music codec 1108. The speech and music codec 1108 may process the digital signals, and the digital signals may further be processed by the spatial audio generator 140. For example, audio components present in the digital signals can be subtracted from audio data received from the wearable device 110 to de-emphasize such audio components from spatial audio data generated by the spatial audio generator 140. In a particular embodiment, the speech and music codec 1108 may provide digital signals to the CODEC 1134. The CODEC 1134 may convert the digital signals to analog signals using the digital-to-analog converter 1102 and may provide the analog signals to the speaker(s) 1192.
In a particular embodiment, the device 1100 may be included in a system-in-package or system-on-chip device 1122. In a particular embodiment, the memory 210, the processor 1106, the processors 1110, the display controller 1126, the CODEC 1134, and the modem 206 are included in the system-in-package or system-on-chip device 1122. In a particular embodiment, an input device 1130 and a power supply 1144 are coupled to the system-in-package or the system-on-chip device 1122. Moreover, in a particular embodiment, as illustrated in
In a particular aspect, the system-in-package or system-on-chip device 1122 also includes a transceiver 1150 coupled to the modem 206 and the antenna(s) 154. The transceiver 1150, the modem 206, and possibly other components correspond to the communication system 152 of
The device 1100 may include a smart speaker, a speaker bar, a mobile communication device, a smart phone, a cellular phone, a laptop computer, a computer, a tablet, a personal digital assistant, a display device, a television, a gaming console, a music player, a radio, a digital video player, a digital video disc (DVD) player, a tuner, a camera, a navigation device, a vehicle, a headset, an augmented reality headset, a mixed reality headset, a virtual reality headset, an aerial vehicle, a home automation system, a voice-activated device, a wireless speaker and voice activated device, a portable electronic device, a car, a computing device, a communication device, an internet-of-things (IoT) device, a virtual reality (VR) device, a base station, a mobile device, or any combination thereof.
In conjunction with the described embodiments, an apparatus includes means for obtaining audio data captured by a microphone of a wearable device. For example, the means for obtaining audio data captured by the microphone of the wearable device can include or correspond to the antenna(s) 154, the communication system 152, the reference device 150, the spatial audio generator 140, the processor(s) 190, the device 192, the pre-processor 240, the receiver 204, the modem 206, the beamformer 202, the direction detector 242, the range detector 244, one or more other circuits or components configured to obtain audio data captured by a microphone of a wearable device, or any combination thereof.
The apparatus also includes means for determining directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device. For example, the means for determining directionality information can include or correspond to the communication system 152, the reference device 150, the spatial audio generator 140, the processor(s) 190, the device 192, the receiver 204, the beamformer 202, the direction detector 242, the range detector 244, one or more other circuits or components configured to determine directionality information based on exchanged signals, or any combination thereof.
The apparatus also includes means for generating spatial audio data based on the audio data and the directionality information. For example, the means for generating the spatial audio data can include or correspond to the reference device 150, the spatial audio generator 140, the processor(s) 190, the device 192, one or more other circuits or components configured to generate the spatial audio data, or any combination thereof.
In some embodiments, a non-transitory computer-readable medium (e.g., a computer-readable storage device, such as the memory 210) includes instructions (e.g., the instructions 212) that, when executed by one or more processors (e.g., the processor(s) 190, the processor(s) 1110, or the processor 1106), cause the one or more processors to obtain audio data (e.g., the audio data 162) captured by a microphone (e.g., the microphone(s) 120) of a wearable device (e.g., the wearable device 110), determine directionality information (e.g., the directionality information 160) indicative of a direction between the microphone and a reference device (e.g., the reference device 150) based on one or more signals (e.g., the signals 184) exchanged between the wearable device and the reference device, and generate spatial audio data (e.g., the spatial audio data 180) based on the audio data and the directionality information.
Particular aspects of the disclosure are described below in sets of interrelated Examples:
According to Example 1, a device includes a memory configured to store audio data and one or more processors configured to: obtain the audio data captured by a microphone of a wearable device; determine, based on one or more signals exchanged between the wearable device and a reference device, directionality information indicative of a direction of the microphone relative to the reference device; and process the audio data based on the directionality information to generate spatial audio data.
Example 2 includes the device of Example 1, wherein the microphone captures the audio data at a fixed location relative to a source of sound, and wherein the one or more processors are configured to update the spatial audio data over time to represent movement of the wearable device relative to the reference device as movement of the source of the sound.
Example 3 includes the device of Example 1 or Example 2, wherein the one or more signals include encoded data, wherein the one or more processors are configured to receive the one or more signals and decode the encoded data to generate the audio data.
Example 4 includes the device of any of Examples 1 to 3, wherein the one or more processors are configured to obtain the audio data from one or more data packets of the one or more signals.
Example 5 includes the device of any of Examples 1 to 4, wherein the one or more processors are further configured to determine, based on a signal strength indicator associated with the one or more signals, range information indicative of a distance between the microphone and the reference device.
Example 6 includes the device of any of Examples 1 to 5, wherein the one or more processors are further configured to determine, based on a signal strength indicator associated with the one or more signals, a change of distance between the microphone and the reference device.
Example 7 includes the device of any of Examples 1 to 6, wherein the one or more processors are configured to determine the directionality information based on an angle of arrival of the one or more signals.
Example 8 includes the device of any of Examples 1 to 7 and further includes one or more antennas configured to transmit a signal of the one or more signals, to receive a signal of the one or more signals, or both.
Example 9 includes the device of any of Examples 1 to 8, wherein the one or more processors are further configured to determine, based on a received signal strength of the one or more signals, range information associated with a distance between the microphone and the reference device, wherein the audio data is processed further based on the range information to generate the spatial audio data.
Example 10 includes the device of any of Examples 1 to 9, wherein the spatial audio data includes ambisonics data.
Example 11 includes the device of any of Examples 1 to 10 and further includes a camera coupled to the one or more processors and configured to capture video data, wherein the one or more processors are configured to process the video data in conjunction with the spatial audio data and to encode the video data and the spatial audio data for communication to another device.
Example 12 includes the device of any of Examples 1 to 11 and further includes a second microphone coupled to the one or more processors, wherein the one or more processors are configured to modify the audio data based on sound captured at the second microphone.
Example 13 includes the device of Example 12, wherein the one or more processors are configured to modify the audio data to de-emphasize, in the spatial audio data, audio components that are present in both the audio data and in the sound captured at the second microphone.
Example 14 includes the device of any of Examples 1 to 13, wherein the one or more processors and the memory are integrated within the reference device.
Example 15 includes the device of any of Examples 1 to 13, wherein the one or more processors and the memory are integrated within the wearable device.
Example 16 includes the device of any of Examples 1 to 15, wherein the one or more processors and the memory are integrated into at least one of a smart speaker, a speaker bar, a smart phone, a cellular phone, a laptop computer, a computer, a tablet, a personal digital assistant, a display device, a television, a gaming console, a music player, a radio, a digital video player, a tuner, a camera, a navigation device, a headset, an augmented reality headset, a mixed reality headset, a virtual reality headset, a home automation system, a voice-activated device, a wireless speaker and voice activated device, a portable electronic device, a communication device, an internet-of-things (IoT) device, an extended reality (XR) device, a base station, or a mobile device.
Example 17 includes the device of any of Examples 1 to 16, wherein the wearable device corresponds to or includes a headset device or one or more earbuds.
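For context only, and not as a limitation of any of the foregoing Examples, the following sketch shows one conventional way that the angle-of-arrival determination recited in Example 7 could be approximated from the phase difference of a signal received at two antennas (as in Example 8). The narrowband plane-wave, two-antenna model, the function name, and the numeric values in the usage line are assumptions of this sketch.

import numpy as np

def angle_of_arrival(phase_delta_rad: float, antenna_spacing_m: float,
                     carrier_freq_hz: float) -> float:
    """Estimate an angle of arrival (radians from broadside) from the phase
    difference measured between two antennas, assuming a narrowband plane
    wave: delta_phi = 2 * pi * spacing * sin(theta) / wavelength."""
    wavelength_m = 3.0e8 / carrier_freq_hz
    sin_theta = phase_delta_rad * wavelength_m / (2.0 * np.pi * antenna_spacing_m)
    return float(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

# Hypothetical usage: a 0.8 rad phase difference across antennas 3 cm apart at 2.4 GHz.
theta_rad = angle_of_arrival(0.8, 0.03, 2.4e9)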
According to Example 18, a method includes obtaining, at one or more processors, audio data captured by a microphone of a wearable device; determining, by the one or more processors, directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device; and generating, at the one or more processors, spatial audio data based on the audio data and the directionality information.
Example 19 includes the method of Example 18, wherein the microphone captures the audio data at a fixed location relative to a source of sound, and further includes, after determining the directionality information, determining updated directionality information; and generating updated spatial audio data, wherein the updated spatial audio data represents movement over time of the wearable device relative to the reference device as movement of the source of the sound.
Example 20 includes the method of Example 18 or Example 19, wherein the one or more signals include signals encoding the audio data.
Example 21 includes the method of any of Examples 18 to 20, wherein the one or more signals include encoded data, and further comprising receiving the one or more signals and decoding the encoded data to generate the audio data.
Example 22 includes the method of any of Examples 18 to 21 and further includes determining, based on a signal strength indicator, range information indicative of a distance between the microphone and the reference device.
Example 23 includes the method of any of Examples 18 to 22 and further includes determining, based on a signal strength indicator, a change of distance between the microphone and the reference device.
Example 24 includes the method of any of Examples 18 to 23, wherein the directionality information is based on an angle of arrival of the one or more signals.
Example 25 includes the method of any of Examples 18 to 24, wherein the one or more signals are transmitted in accordance with a BLUETOOTH® communication protocol.
Example 26 includes the method of any of Examples 18 to 25 and further includes determining, based on a received signal strength of the one or more signals, range information associated with a distance between the microphone and the reference device, wherein the audio data is processed further based on the range information to generate the spatial audio data.
Example 27 includes the method of any of Examples 18 to 26, wherein the spatial audio data includes ambisonics data.
Example 28 includes the method of any of Examples 18 to 27 and further includes obtaining video data associated with the audio data; processing the video data in conjunction with the spatial audio data; and encoding the video data and the spatial audio data for transmission or storage.
Example 29 includes the method of any of Examples 18 to 28 and further includes modifying the audio data based on sound captured at a second microphone.
Example 30 includes the method of Example 29 and further includes modifying the audio data to de-emphasize, in the spatial audio data, audio components that are present in the audio data and in the sound captured at a microphone of the reference device.
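For illustration of the signal-strength-based range determinations recited in Examples 22, 23, and 26, the following sketch applies a log-distance path loss model. The model, its default parameters, and the function names are assumptions of this sketch; the disclosure does not require any particular propagation model.

def range_from_rssi(rssi_dbm: float, rssi_at_1m_dbm: float = -45.0,
                    path_loss_exponent: float = 2.0) -> float:
    """Estimate a distance in meters from a received signal strength indicator
    using a log-distance path loss model: rssi = rssi_at_1m - 10 * n * log10(d)."""
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def change_of_distance(prev_rssi_dbm: float, curr_rssi_dbm: float) -> float:
    """Change of distance between two measurements (positive means moving apart)."""
    return range_from_rssi(curr_rssi_dbm) - range_from_rssi(prev_rssi_dbm)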
According to Example 31, a non-transitory computer-readable device stores instructions that are executable by one or more processors to cause the one or more processors to obtain audio data captured by a microphone of a wearable device; determine directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device; and generate spatial audio data based on the audio data and the directionality information.
Example 32 includes the non-transitory computer-readable device of Example 31, wherein the microphone captures the audio data at a fixed location relative to a source of sound, and wherein the instructions are further executable to: after determining the directionality information, determine updated directionality information; and generate updated spatial audio data based on the updated directionality information, wherein the updated spatial audio data represents movement over time of the wearable device relative to the reference device as movement of the source of the sound.
Example 33 includes the non-transitory computer-readable device of Example 31 or Example 32, wherein the one or more signals include encoded data, wherein the instructions are further executable to decode the encoded data to generate the audio data.
Example 34 includes the non-transitory computer-readable device of any of Examples 31 to 33, wherein the audio data is obtained from one or more data packets of the one or more signals.
Example 35 includes the non-transitory computer-readable device of any of Examples 31 to 34, wherein the instructions are further executable to determine, based on a signal strength indicator, range information indicative of a distance between the microphone and the reference device.
Example 36 includes the non-transitory computer-readable device of any of Examples 31 to 35, wherein the instructions are further executable to determine, based on a signal strength indicator, a change of distance between the microphone and the reference device.
Example 37 includes the non-transitory computer-readable device of any of Examples 31 to 36, wherein the directionality information is based on an angle of arrival of the one or more signals.
Example 38 includes the non-transitory computer-readable device of any of Examples 31 to 37, wherein the one or more signals are transmitted in accordance with a BLUETOOTH® communication protocol.
Example 39 includes the non-transitory computer-readable device of any of Examples 31 to 38, wherein the instructions are further executable to determine, based on a received signal strength of the one or more signals, range information associated with a distance between the microphone and the reference device, wherein the audio data is processed further based on the range information to generate the spatial audio data.
Example 40 includes the non-transitory computer-readable device of any of Examples 31 to 39, wherein the spatial audio data includes ambisonics data.
Example 41 includes the non-transitory computer-readable device of any of Examples 31 to 40, wherein the instructions are further executable to obtain video data associated with the audio data; process the video data in conjunction with the spatial audio data; and encode the video data and the spatial audio data for transmission or storage.
Example 42 includes the non-transitory computer-readable device of any of Examples 31 to 41, wherein the instructions are further executable to modify the audio data based on sound captured at a second microphone.
Example 43 includes the non-transitory computer-readable device of Example 42, wherein the instructions are further executable to modify the audio data to de-emphasize, in the spatial audio data, audio components that are present in the audio data and in the sound captured at a microphone of the reference device.
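As illustrative context for the ambisonics data referenced in Example 40, the following sketch encodes a mono capture into first-order ambisonics channels for a direction derived from the directionality information. The ACN channel ordering and SN3D normalization used here are assumptions of this sketch; other ambisonics orders and conventions could equally be used.

import numpy as np

def encode_first_order_ambisonics(mono: np.ndarray, azimuth_rad: float,
                                  elevation_rad: float) -> np.ndarray:
    """Encode a mono signal as first-order ambisonics (ACN order W, Y, Z, X
    with SN3D normalization) for a source at the given direction."""
    w = mono                                                  # omnidirectional component
    y = mono * np.sin(azimuth_rad) * np.cos(elevation_rad)    # left-right component
    z = mono * np.sin(elevation_rad)                          # up-down component
    x = mono * np.cos(azimuth_rad) * np.cos(elevation_rad)    # front-back component
    return np.stack([w, y, z, x])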
According to Example 44, an apparatus includes means for obtaining audio data captured by a microphone of a wearable device; means for determining directionality information indicative of a direction between the microphone and a reference device based on one or more signals exchanged between the wearable device and the reference device; and means for generating spatial audio data based on the audio data and the directionality information.
Example 45 includes the apparatus of Example 44, wherein the microphone captures the audio data at a fixed location relative to a source of sound, and further includes means for determining updated directionality information after determining the directionality information; and means for generating updated spatial audio data based on the updated directionality information, wherein the updated spatial audio data represents movement over time of the wearable device relative to the reference device as movement of the source of the sound.
Example 46 includes the apparatus of Example 44 or Example 45, wherein the one or more signals include an encoded version of the audio data.
Example 47 includes the apparatus of any of Examples 44 to 46, wherein the audio data is obtained from one or more data packets of the one or more signals.
Example 48 includes the apparatus of any of Examples 44 to 47 and further includes means for determining, based on a signal strength indicator, range information indicative of a distance between the microphone and the reference device.
Example 49 includes the apparatus of any of Examples 44 to 48 and further includes means for determining, based on a signal strength indicator, a change of distance between the microphone and the reference device.
Example 50 includes the apparatus of any of Examples 44 to 49, wherein the directionality information is based on an angle of arrival of the one or more signals.
Example 51 includes the apparatus of any of Examples 44 to 50, wherein the one or more signals are transmitted in accordance with a BLUETOOTH® communication protocol.
Example 52 includes the apparatus of any of Examples 44 to 51 and further includes means for determining, based on a received signal strength of the one or more signals, range information associated with a distance between the microphone and the reference device, wherein the audio data is processed further based on the range information to generate the spatial audio data.
Example 53 includes the apparatus of any of Examples 44 to 52, wherein the spatial audio data includes ambisonics data.
Example 54 includes the apparatus of any of Examples 44 to 53 and further includes: means for obtaining video data associated with the audio data; means for processing the video data in conjunction with the spatial audio data; and means for encoding the video data and the spatial audio data for transmission or storage.
Example 55 includes the apparatus of any of Examples 44 to 54 and further includes means for modifying the audio data based on sound captured at a second microphone.
Example 56 includes the apparatus of Example 55 and further includes means for modifying the audio data to de-emphasize, in the spatial audio data, audio components that are present in the audio data and in the sound captured at a microphone of the reference device.
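For illustration of the de-emphasis recited in Examples 55 and 56 (and similarly in Examples 13, 30, and 43), the following sketch attenuates, in the primary (wearable-microphone) capture, magnitude-spectrum components that also appear in a secondary capture. Magnitude spectral subtraction is only one possible technique; the function name and parameters are assumptions of this sketch.

import numpy as np

def de_emphasize_shared_components(primary: np.ndarray, secondary: np.ndarray,
                                   alpha: float = 1.0, floor: float = 0.1) -> np.ndarray:
    """Reduce, in the primary capture, spectral components that are also present
    in the secondary capture, using a crude magnitude spectral subtraction."""
    n = min(len(primary), len(secondary))
    primary_spec = np.fft.rfft(primary[:n])
    secondary_spec = np.fft.rfft(secondary[:n])
    # Subtract the secondary magnitude, keeping at least a fraction of the original.
    magnitude = np.maximum(np.abs(primary_spec) - alpha * np.abs(secondary_spec),
                           floor * np.abs(primary_spec))
    return np.fft.irfft(magnitude * np.exp(1j * np.angle(primary_spec)), n=n)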
Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software executed by a processor, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or as processor-executable instructions depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of non-transitory storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.
The previous description of the disclosed aspects is provided to enable a person skilled in the art to make or use the disclosed aspects. Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.