Aspects of the disclosure relate to audio signal processing.
Computer-mediated reality systems are being developed to allow computing devices to augment or add to, remove or subtract from, substitute or replace, or generally modify existing reality as experienced by a user. Computer-mediated reality systems may include, as a few examples, virtual reality (VR) systems, augmented reality (AR) systems, and mixed reality (MR) systems. The perceived success of a computer-mediated reality system is generally related to the ability of such a system to provide a realistically immersive experience in terms of both video and audio, such that the video and audio experiences align in a manner that is perceived as natural and expected by the user. Although the human visual system is more sensitive than the human auditory system (e.g., in terms of perceived localization of various objects within the scene), ensuring an adequate auditory experience is an increasingly important factor in providing a realistically immersive experience, particularly as the video experience improves to permit better localization of video objects that enable the user to better identify sources of audio content.
In VR technologies, virtual information may be presented to a user using a head-mounted display such that the user may visually experience an artificial world on a screen in front of their eyes. In AR technologies, the real world is augmented by visual objects that may be superimposed (e.g., overlaid) on physical objects in the real world. The augmentation may insert new visual objects and/or mask visual objects in the real-world environment. In MR technologies, the boundary between what is real or synthetic/virtual and visually experienced by a user is becoming difficult to discern.
Hardware for VR, AR, and/or MR may include one or more screens to present a visual scene to a user and one or more sound-emitting transducers (e.g., loudspeakers) to provide a corresponding audio environment. Such hardware may also include one or more microphones to capture an acoustic environment of the user and/or speech of the user, and/or may include one or more sensors to determine a position, orientation, and/or movement of the user.
A method of audio signal processing according to a general configuration includes determining that first audio activity in at least one microphone signal is voice activity; determining whether the voice activity is voice activity of a participant in an application session active on a device; based at least on a result of the determining whether the voice activity is voice activity of a participant in an application session, generating an antinoise signal to cancel the first audio activity; and, by a loudspeaker, producing an acoustic signal that is based on the antinoise signal. Computer-readable storage media comprising code which, when executed by at least one processor, causes the at least one processor to perform such a method are also disclosed.
An apparatus according to a general configuration includes a memory configured to store at least one microphone signal; and a processor coupled to the memory. The processor is configured to retrieve the at least one microphone signal and to execute computer-executable instructions to determine that first audio activity in the at least one microphone signal is voice activity; to determine whether the voice activity is voice activity of a participant in an application session active on a device; to generate, based at least on a result of the determining whether the voice activity is voice activity of a participant in an application session, an antinoise signal to cancel the first audio activity; and to cause a loudspeaker to produce an acoustic signal that is based on the antinoise signal.
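By way of illustration, the following sketch shows one way such a processing flow might be organized for a single block of microphone samples. The energy-based detector, the `recognize` callback, the registered-identifier set, and the phase-inverted "antinoise" are assumptions of this sketch rather than features of any particular implementation described herein.

```python
import numpy as np

def detect_voice_activity(frame: np.ndarray, threshold: float = 1e-3) -> bool:
    """Crude energy-based VAD stand-in; a deployed detector would be more robust."""
    return float(np.mean(frame ** 2)) > threshold

def is_session_participant(frame: np.ndarray, registered_ids: set, recognize) -> bool:
    """True if the detected voice matches a voice registered with the active session."""
    return recognize(frame) in registered_ids

def process_frame(frame: np.ndarray, registered_ids: set, recognize) -> np.ndarray:
    """VAD -> participant determination -> conditional antinoise for one audio block."""
    if detect_voice_activity(frame) and not is_session_participant(frame, registered_ids, recognize):
        # Idealized antinoise: a phase-inverted copy. A practical ANC system instead
        # adapts a filter to the acoustic path between loudspeaker and ear.
        return -frame
    # Participant voice (or no voice activity): produce no antinoise for this block.
    return np.zeros_like(frame)
```

A caller might run `process_frame` once per audio block and mix the returned signal into the loudspeaker feed.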
Aspects of the disclosure are illustrated by way of example. In the accompanying figures, like reference numbers indicate similar elements.
The term “extended reality” (or XR) is a general term that encompasses real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables and includes such representative forms as augmented reality (AR), mixed reality (MR), and virtual reality (VR).
An XR experience may be shared among multiple participants by interaction among applications executing on devices of the participants (e.g., wearable devices, such as one or more of the examples described herein). Such an XR experience may include a shared space within which participants may communicate verbally (and possibly visually) with one another as if they are spatially close to one another, even though they may be far from each other in the real world. On each participant's device, an active session of an application receives audio content (and possibly visual content) of the shared space and presents it to the participant in accordance with the participant's perspective within the shared space (e.g., volume and/or direction of arrival of a sound, location of a visual element, etc.). Examples of XR experiences that may be shared in such fashion include gaming experiences and video telephony experiences (e.g., a virtual conference room or other meeting space).
A participant in an XR shared space may be located in a physical space that is shared with persons who are not participants in the XR shared space. Participants in an XR shared space (e.g., a shared virtual space) may wish to communicate verbally with one another without being distracted by voices of non-participants who may be nearby. For example, a participant may be in a coffee shop or shared office; in an airport or other enclosed public space; or on an airplane, bus, train, or other form of public transportation. When an attendee is engaged in an XR conference meeting, or a player is engaged in an XR game, the voice of a non-participant who is nearby may be distracting. It may be desired to reduce this distraction by screening out the voices of non-participants. One approach to such screening is to provide active noise cancellation (ANC) at each participant's ears to cancel ambient sound, including the non-participant voice(s). In order for the participants to be able to hear one another, microphones may be used to capture the participants' voices, and wireless transmission may be used to share the captured voices among the participants.
Indiscriminate cancellation of ambient sound may acoustically isolate a participant of an XR shared space from her actual surroundings, however, which may not be desired. Such an approach may also prevent participants who are physically situated near one another from hearing each other's voices acoustically, rather than only electronically, which may not be desired. It may be desired to provide cancellation of non-participant voices without canceling all ambient sound and/or while permitting nearby participants to hear one another. It may be desired to provide for exceptions to such cancellation, such as, for example, when it is desired for a participant of an XR shared space to talk with a non-participant.
Several illustrative configurations will now be described with respect to the accompanying drawings, which form a part hereof. While particular configurations, in which one or more aspects of the disclosure may be implemented, are described below, other configurations may be used and various modifications may be made without departing from the scope of the disclosure or the spirit of the appended claims. Although the particular examples discussed herein relate primarily to gaming applications, it will be understood that the principles, methods, and apparatuses disclosed relate more generally to shared virtual spaces in which the participants may be physically local and/or remote to one another, such as conferees in a virtual conference room, members of a tour group sharing an augmented reality experience in a museum or on a city street, instructors and trainees of a virtual training group on a factory floor, etc., and that use of these principles in such contexts is specifically contemplated and hereby disclosed.
In a first example as shown in
Each of the devices D10-1, D10-2, and D10-3 may be implemented as a hearable device or “hearable” (also known as “smart headphones,” “smart earphones,” or “smart earpieces”). Such devices, which are designed to be worn over the ear or in the ear, are becoming increasingly popular and have been used for multiple purposes, including wireless transmission and fitness tracking. As shown in
An HMD may include multiple microphones for better noise cancellation (e.g., to allow ambient sound to be detected from multiple locations). An array of multiple microphones may also include microphones from more than one device that is configured for wireless communication: for example, on an HMD and a smartphone; on an HMD (e.g., glasses) and a wearable (e.g., a watch, an earbud, a fitness tracker, smart clothing, smart jewelry, etc.); on earbuds worn at a participant's left and right ears, etc. Additionally or alternatively, signals from several microphones located on an HMD close to the user's ears may be used to estimate the acoustic signals that the user is likely hearing (e.g., the proportion of ambient sound to augmented sound, the qualities of each type of incoming sound), and then adjust specific frequencies or balance as appropriate to enhance hearability of augmented sound over the ambient sound (e.g., boost low frequencies of game sounds on the right to compensate for the masking effect of a detected ambient sound of a truck driving by on the right).
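As a rough sketch of such a compensation, the following code boosts bands of the augmented (game) audio in which an estimate of the ambient sound is stronger. The band edges, maximum boost, and sample rate are assumed values, and a practical system would apply smoothing over time rather than adjusting each block independently.

```python
import numpy as np

def compensate_masking(game_audio, ambient_estimate, sample_rate=48000,
                       bands=((0, 300), (300, 3000), (3000, 8000)), max_boost_db=6.0):
    """Boost bands of the augmented audio where the estimated ambient sound is stronger,
    so that the augmented sound remains audible over the ambient sound."""
    spectrum = np.fft.rfft(game_audio)
    ambient = np.fft.rfft(ambient_estimate, n=len(game_audio))
    freqs = np.fft.rfftfreq(len(game_audio), 1.0 / sample_rate)
    for lo, hi in bands:
        idx = (freqs >= lo) & (freqs < hi)
        ambient_power = np.sum(np.abs(ambient[idx]) ** 2) + 1e-12
        game_power = np.sum(np.abs(spectrum[idx]) ** 2) + 1e-12
        # Boost only where the ambient sound dominates, and cap the boost.
        boost_db = min(max_boost_db, max(0.0, 10.0 * np.log10(ambient_power / game_power)))
        spectrum[idx] *= 10.0 ** (boost_db / 20.0)
    return np.fft.irfft(spectrum, n=len(game_audio))
```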
In a second example as shown in
This example may also be extended to include participation in the XR shared space by remote participants.
As described above, a participant's device (e.g., self-voice detector SV10) may be configured to detect that the participant is speaking based on, for example, volume level and/or directional sound processing. Additionally or alternatively, the voice of a participant may be registered with the participant's own corresponding device (e.g., as an access control security measure), such that the device (e.g., participant determination logic PD20, task T50) may be implemented to detect that the participant is speaking by recognizing her voice.
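A minimal sketch of a level-based self-voice decision follows; it assumes two microphone signals of equal length (one near the mouth, one farther away) and an illustrative 6 dB margin. A deployed detector might also use directional processing or a registered voiceprint, as described above.

```python
import numpy as np

def is_self_voice(near_mouth_mic: np.ndarray, reference_mic: np.ndarray,
                  level_margin_db: float = 6.0) -> bool:
    """Guess whether detected speech is the wearer's own voice, using the assumption
    that the wearer's voice is markedly louder at a microphone near the mouth than at
    a reference microphone farther away."""
    near_power = np.mean(near_mouth_mic ** 2) + 1e-12
    ref_power = np.mean(reference_mic ** 2) + 1e-12
    return 10.0 * np.log10(near_power / ref_power) > level_margin_db
```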
In a third example as shown in
When one of the players speaks, the players' devices detect the voice activity, and one or more of the devices transmits the voice activity to the server (e.g., via a WiFi or a cellular data network). For example, a device may be configured to transmit the voice activity to the server upon detecting that the wearer of the device is speaking (e.g., based on volume level and/or directional sound processing). The transmission may include the captured sound or, alternatively, the transmission may include values of recognition parameters that are extracted from the captured sound. In response to the transmitted voice activity, the server wirelessly transmits an indication to the devices that the voice activity is recognized as speech of a player (e.g., that the voice activity is matched to one of the voices that has been registered with the game). Because the voice belongs to one of the players, no ANC is activated by the devices in response to the detected voice activity.
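A sketch of such an exchange is given below, with assumed message shapes and an assumed `match_score` scoring function; neither is part of any described implementation.

```python
# Device side: on detecting that the wearer is speaking, send either the captured
# sound or extracted recognition parameters, then act on the server's reply.
def report_voice_activity(frame, extract_features, send_to_server):
    message = {"type": "voice_activity", "features": extract_features(frame)}
    reply = send_to_server(message)            # e.g., over a WiFi or cellular data network
    return bool(reply.get("is_participant"))   # True: player speech, so do not activate ANC

# Server side: match the reported voice activity against voices registered with the game.
def handle_voice_activity(message, registered_voiceprints, match_score, threshold=0.8):
    scores = {player: match_score(message["features"], voiceprint)
              for player, voiceprint in registered_voiceprints.items()}
    best = max(scores, key=scores.get, default=None)
    return {"is_participant": best is not None and scores[best] >= threshold,
            "player": best}
```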
As an alternative to speaker recognition by the server, one or more of the devices may be configured to perform the speaker recognition locally, and to wirelessly transmit a corresponding indication of the speaker recognition to any other players' devices that do not perform the speaker recognition. For example, a device may perform the speaker recognition upon detecting that the wearer of the device is speaking (e.g., based on volume level and/or directional sound processing) and to wirelessly transmit an indication to the other devices upon recognizing that the voice activity is speech of a registered player. In this event, because the voice belongs to one of the players, no ANC is activated by the devices in response to the detected voice activity.
As the players who are physically present speak, voice activity detection (VAD) is triggered and their voices are matched to voices registered with the game, allowing the other registered users (both local and remote) to hear them. When a remote player speaks, VAD is likewise triggered, her voice is matched to a registered voice, and her voice is played through the devices of the other players. When a non-player speaks, the detected voice activity is not speech of any player and is therefore not transmitted to the remote players.
For an implementation in which the players' voices are recognized, it may happen that a non-player would like to see and hear what is going on in the game. In this case, it may be possible for the non-player to pick up another headset, put it on, and view what is going on in the game. But when the non-player converses with a person next to her, the registered players do not hear the conversation, because the voice of the non-player is not registered with the application (e.g., the game). In response to detecting the voice activity of non-players, the players' devices continue to activate ANC to cancel that voice activity, because the non-players' voices are not recognized by the devices and/or by the game server.
Alternatively or additionally, the system may be configured to recognize each of the participants' faces and to use this information to distinguish speech by participants from speech by non-participants. For example, each player may have registered her face with a game server (for example, by submitting a self-photo before the game begins in a registration step), and each device (e.g., participant determination logic PD20, task T50) may be implemented to recognize the face of each other player (e.g., using eigenfaces, HMMs, the Fisherface algorithm, and/or one or more other known methods). The same registration procedure may be applied to other uses, such as a conferencing server. Each device may be configured to reject voice activity coming from a direction in which no recognized participant is present and/or to reject voice activity coming from a detected face that is not recognized.
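One possible decision rule combining a sound's direction of arrival with the directions of recognized faces is sketched below; the angular tolerance is an illustrative value, and the rule is an assumption of the sketch rather than a required behavior.

```python
def should_cancel(voice_direction_deg, recognized_participant_directions, tolerance_deg=20.0):
    """Cancel detected voice activity unless its direction of arrival lines up with a
    face recognized as a registered participant.

    recognized_participant_directions: directions (from the device camera) of faces
    that were matched to participants registered with the session."""
    for face_direction in recognized_participant_directions:
        diff = abs((voice_direction_deg - face_direction + 180.0) % 360.0 - 180.0)
        if diff <= tolerance_deg:
            return False   # participant speech: pass it through
    return True            # unrecognized face, or no recognized face in that direction: cancel
```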
Any of the use cases described above may be implemented to distinguish between speech by a participant and speech by a non-participant that occurs at the same time. For example, a participant's device may be implemented to include an array of two or more microphones to allow incoming acoustic signals from multiple sources to be distinguished and individually accepted or canceled according to direction of arrival (e.g., by using beamforming and null beamforming to direct and steer beams and nulls).
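For illustration, a basic delay-and-sum beamformer for a linear array is sketched below; the geometry, integer-sample delays, and far-field assumption are simplifications of what a deployed beamformer and null beamformer would use.

```python
import numpy as np

def delay_and_sum(frames, mic_positions_m, steer_deg, sample_rate=48000, speed_of_sound=343.0):
    """Steer a simple delay-and-sum beam toward steer_deg for a linear microphone array.
    frames: array of shape (num_mics, num_samples); mic_positions_m: positions of the
    microphones along the array axis in meters."""
    theta = np.deg2rad(steer_deg)
    reference = mic_positions_m[0]
    output = np.zeros(frames.shape[1])
    for mic_signal, position in zip(frames, mic_positions_m):
        delay_s = (position - reference) * np.cos(theta) / speed_of_sound
        shift = int(round(delay_s * sample_rate))
        output += np.roll(mic_signal, -shift)   # crude alignment; a real design interpolates
    return output / len(mic_positions_m)
```

One such beam could be steered per detected source direction, with each beam output then accepted or canceled independently depending on whether it corresponds to a participant.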
A device and/or an application may also be configured to allow a user to select which voices to hear and/or which voices to block. For example, a user may choose manually to block one or more selected participants, or to hear only one or more participants, or to block all participants. Such a configuration may be provided in settings of the device and/or in settings of the application (e.g., a team configuration).
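A sketch of how such settings might be applied per detected speaker follows; the settings schema shown here is assumed for illustration and is not part of any described implementation.

```python
def voice_allowed(speaker_id, settings):
    """Apply per-user voice selection settings.
    Assumed schema: {"mode": "block_selected" | "hear_only_selected" | "block_all",
                     "selected": set of participant identifiers}."""
    mode = settings.get("mode", "block_selected")
    selected = settings.get("selected", set())
    if mode == "block_all":
        return False
    if mode == "hear_only_selected":
        return speaker_id in selected
    return speaker_id not in selected   # default: block only the selected participants
```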
An application session may have a default context as described above, in which voices of non-participants are blocked using ANC but voices of participants are not blocked. It may be desired to provide for other contexts of an application session as well. For example, it may be desired to provide for contexts in which one or more participant voices may also be blocked using ANC. Several examples of such contexts (which may be indicated in session settings of the application) are described below.
In some contexts, a participant's voice may be disabled. A participant may desire to step out of the XR shared space for a short time, such that one or more external sounds which would have been blocked are now audible to the participant. On such an occasion, it may be desired for the participant to be able to hear the voice of a non-participant, but for the non-participant's voice to continue to be blocked for the participants who remain in the XR shared space. For example, it may be desired for a player to be able to engage in a conversation with a non-player (e.g., as shown in
One approach for switching between operating modes is to implement keyword detection on the at least one microphone signal. In this approach, a player says a keyword or keyphrase (e.g., “pause,” “let me hear”) to leave the shared-space mode and enter a step-out mode, and the player says a corresponding different keyword or keyphrase (e.g., “play,” “resume,” “quiet”) to leave the step-out mode and reenter the shared-space mode. In one such example, voice activity detector VAD10 is implemented to include a keyword detector that is configured to detect the designated keywords or keyphrases and to control ANC operation in accordance with the corresponding indicated mode. When the step-out mode is indicated, the keyword detector may cause participant determination logic PD10 to prevent the loudspeaker from producing an acoustic ANC signal (e.g., by blocking activation of the ANC system in response to voice activity detection, or by otherwise disabling the ANC system). (It may also be desired, during the step-out mode, for the participant's device to reduce the volume level of audio that is related to the XR shared space, such as game sounds and/or the voice of remote participants.) When the shared-space mode is indicated, the keyword detector may cause participant determination logic PD10 to enable the loudspeaker to produce an acoustic ANC signal (e.g., by allowing activation of the ANC system in response to voice activity detection, or by otherwise reenabling the ANC system). The keyword detector may also be implemented to cause participant determination logic PD10 to transmit an indication of a change in the device's operating mode to the other players' devices (e.g., via transceiver TX10) so that the other players' devices may allow or block voice activity by the player according to the operating mode indicated by the player's device.
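A minimal sketch of such a keyword-driven mode controller follows; the keyphrases, mode names, and `notify_peers` callback are assumptions of the sketch.

```python
STEP_OUT_KEYWORDS = {"pause", "let me hear"}     # assumed keyphrases
RESUME_KEYWORDS = {"play", "resume", "quiet"}

class ModeController:
    """Toggle between shared-space mode (ANC of non-participant voices enabled)
    and step-out mode (ANC suppressed) on detected keywords, and notify peers."""

    def __init__(self, notify_peers):
        self.mode = "shared_space"
        self.notify_peers = notify_peers

    def on_keyword(self, phrase: str) -> str:
        if phrase in STEP_OUT_KEYWORDS and self.mode == "shared_space":
            self.mode = "step_out"
        elif phrase in RESUME_KEYWORDS and self.mode == "step_out":
            self.mode = "shared_space"
        else:
            return self.mode
        self.notify_peers({"mode": self.mode})   # so peers allow or block this user's voice
        return self.mode

    def anc_enabled_for_nonparticipant_voice(self) -> bool:
        return self.mode == "shared_space"
```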
Another approach for switching between operating modes is to implement a change of operating mode in response to user movement (e.g., changes in body position). For players seated in a circle around a game board, for example, a player may switch from play mode to a step-out mode by moving or leaning out of the circle shared by the players, and may leave the step-out mode and reenter play mode by moving back into the circle (e.g., allowing VAD/ANC to resume). In one example, a player's device includes a Bluetooth module (or is associated with such a module, such as in a smartphone of the player) that is configured to indicate a measure of proximity to devices of nearby players that also include (or are associated with) Bluetooth modules. The player's device may also be implemented to transmit an indication of a change in the device's operating mode to the other players' devices (e.g., via transceiver TX10) so that the other players' devices may allow or block voice activity by the player according to the operating mode indicated by the player's device.
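A simple proximity rule of this kind might be expressed as follows; the radius and the distance estimate (e.g., derived from Bluetooth signal strength) are assumptions of the sketch.

```python
def mode_from_proximity(distances_to_players_m, leave_radius_m=2.0):
    """Hypothetical rule: treat the wearer as having stepped out of the shared circle
    when the nearest other player's device is farther than leave_radius_m, and as
    back in play mode otherwise."""
    nearest = min(distances_to_players_m, default=float("inf"))
    return "step_out" if nearest > leave_radius_m else "play"
```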
In another example, a participant's device includes an inertial measurement unit (IMU), which may include one or more accelerometers, gyroscopes, and/or magnetometers. Such a unit may be used to track changes in the orientation of the user's head relative to, for example, a direction that corresponds to the shared virtual space. For a scenario as in
In order to support an immersive XR experience, it may be desired for the IMU to detect movement in three degrees of freedom (3DOF) or in six degrees of freedom (6DOF). As shown in
A further approach for switching between operating modes is based on information from video captured by a camera (e.g., a forward-facing camera of a player's device). In one example, a participant's device is implemented to determine, from video captured by a camera (e.g., a camera of the device), the identity and/or the relative direction of a person who is speaking. A face detected in a video capture may be associated with detected voice activity by a correlation in time and/or direction between the voice activity and movement of the face (e.g., mouth movement, such as a motion of the lips). As described above, the system may be configured to recognize each of the participants' faces and to use this information to distinguish speech by participants from speech by non-participants.
A device may be configured to analyze video from a camera that faces in the same direction as the user and to determine, from a gaze direction of a person who is speaking, whether the person is speaking to the user.
The player's device may be configured to switch from the step-out mode back to play mode in response to the player looking back toward the game or at another player, or in response to a determination that the gaze of the speaking non-player is no longer detected. The player's device may also be configured to transmit an indication of the mode change to the devices of other players, so that the voice of the player is no longer cancelled.
It may be desired to implement a mode change detection as described herein (e.g., by keyword detection, user movement detection, and/or gaze detection as described above) to include hysteresis and/or time windows. Before a change from one mode to another is indicated, for example, it may be desired to confirm that the mode change condition persists over a certain time interval (e.g., one-half second, one second, or two seconds). Additionally or alternatively, it may be desired to use a higher mode change threshold value (e.g., on a user orientation parameter, such as the angle between the user's facing direction and the center of the shared virtual space) for indicating an exit from play mode than for indicating a return to play mode. To ensure robust operation, a mode change detection may be implemented to require a contemporaneous occurrence of two or more trigger conditions (e.g., keyword, user movement, non-player face recognized, etc.) to change mode.
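The following sketch combines both ideas for an orientation-based mode change: different angle thresholds for exiting and re-entering play mode (hysteresis), plus a dwell-time requirement; all values are illustrative.

```python
import time

class OrientationModeDetector:
    """Switch between play mode and step-out mode based on the angle between the
    user's facing direction and the center of the shared virtual space (e.g., from
    an IMU), requiring the condition to persist for dwell_s seconds and using a
    higher threshold to exit play mode than to re-enter it."""

    def __init__(self, exit_deg=60.0, enter_deg=30.0, dwell_s=1.0):
        self.exit_deg, self.enter_deg, self.dwell_s = exit_deg, enter_deg, dwell_s
        self.mode = "play"
        self._since = None   # when the pending mode-change condition was first seen

    def update(self, angle_deg, now=None):
        now = time.monotonic() if now is None else now
        condition = (angle_deg > self.exit_deg) if self.mode == "play" else (angle_deg < self.enter_deg)
        if condition:
            if self._since is None:
                self._since = now
            if now - self._since >= self.dwell_s:
                self.mode = "step_out" if self.mode == "play" else "play"
                self._since = None
        else:
            self._since = None   # condition did not persist; reset the timer
        return self.mode
```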
In traditional gameplay, teammates have no way to secretly share information except to come within close proximity to each other and whisper. It may be desired to support a mode of operation in which two or more teammates (e.g., whether nearby or remote) may privately discuss virtual strategy without being overheard by members of an opposing team. It may be desired, for example, to use facial recognition and ANC within an AR game environment to support team privacy and/or to enhance team vocalizations (e.g., by amplifying a teammate's whisper to a player's ears). Such a mode may also be extended so that the teammates may privately share virtual strategy plans without members of an opposing team being able to see the plans. (The same example may be applied to, for example, members of a subgroup during another XR shared-space experience as described herein, such as members of a subcommittee during a virtual meeting of a larger committee.)
In response to the mode change condition, the system transmits an indication of a change in the device's operating mode to the other players' devices. For example, in this case the device of player 1 and/or the device of player 3 may be implemented to transmit, in response to the mode change condition, an indication of a change in the device's operating mode to the other players' devices (e.g., via transceiver TX10). In response to the mode change indication, the non-teammates' devices block voice activity by players 1 and 3 (and possibly by other players who are identified as their teammates) in accordance with the indicated operating mode. One teammate can now privately discuss (or even whisper) and visually share strategy plans/data with other teammates without members of the opposing team hearing/seeing them, because the devices of opposing team members activate ANC to cancel the voice activity. Among the devices of the teammates, the mode change indication may cause the devices to amplify teammate voice activity (e.g., to amplify teammate whispers). Looking away from a teammate resumes normal play operation, in which all player vocalizations can be heard by all players. In a related context, the voice of a particular participant (e.g., a coach) is audible only to one or more selected other participants and is blocked for the other participants.
The XR shared space need not be an open space, such as a meeting room. For example, it may include virtual walls or other virtual acoustic barriers that would reduce or prevent one participant from hearing another if they were real. In such instances, the application may be configured to track the participant's movement (e.g., using data from an IMU and a simultaneous localization and mapping (SLAM) algorithm) and to update the participant's location within the XR shared space accordingly. The application may be further configured to modify the participant's audio experience according to features of the XR shared space, such as structures or surfaces that would block or otherwise modify sound (e.g., muffle it, cause reverberation, etc.) if physical.
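As a sketch of one such modification, the following applies a fixed attenuation to a participant's voice whenever a virtual wall segment lies between listener and source in a two-dimensional layout of the shared space; the geometry test and the attenuation value are assumptions of the sketch.

```python
def apply_virtual_occlusion(gain, listener_pos, source_pos, walls, occluded_gain_db=-18.0):
    """Attenuate a source when a virtual wall lies between listener and source.
    listener_pos, source_pos: (x, y) tuples; walls: list of ((x1, y1), (x2, y2)) segments."""

    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    def segments_intersect(p1, p2, q1, q2):
        return ccw(p1, q1, q2) != ccw(p2, q1, q2) and ccw(p1, p2, q1) != ccw(p1, p2, q2)

    for wall_start, wall_end in walls:
        if segments_intersect(listener_pos, source_pos, wall_start, wall_end):
            return gain * (10 ** (occluded_gain_db / 20))   # muffle rather than silence
    return gain
```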
Unless expressly limited by its context, the term “signal” is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term “generating” is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term “calculating” is used herein to indicate any of its ordinary meanings, such as computing, evaluating, estimating, and/or selecting from a plurality of values. Unless expressly limited by its context, the term “obtaining” is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device), and/or retrieving (e.g., from an array of storage elements). Unless expressly limited by its context, the term “selecting” is used to indicate any of its ordinary meanings, such as identifying, indicating, applying, and/or using at least one, and fewer than all, of a set of two or more. Unless expressly limited by its context, the term “determining” is used to indicate any of its ordinary meanings, such as deciding, establishing, concluding, calculating, selecting, and/or evaluating. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “based on” (as in “A is based on B”) is used to indicate any of its ordinary meanings, including the cases (i) “derived from” (e.g., “B is a precursor of A”), (ii) “based on at least” (e.g., “A is based on at least B”) and, if appropriate in the particular context, (iii) “equal to” (e.g., “A is equal to B”). Similarly, the term “in response to” is used to indicate any of its ordinary meanings, including “in response to at least.” Unless otherwise indicated, the terms “at least one of A, B, and C,” “one or more of A, B, and C,” “at least one among A, B, and C,” and “one or more among A, B, and C” indicate “A and/or B and/or C.” Unless otherwise indicated, the terms “each of A, B, and C” and “each among A, B, and C” indicate “A and B and C.”
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term “configuration” may be used in reference to a method, apparatus, and/or system as indicated by its particular context. The terms “method,” “process,” “procedure,” and “technique” are used generically and interchangeably unless otherwise indicated by the particular context. A “task” having multiple subtasks is also a method. The terms “apparatus” and “device” are also used generically and interchangeably unless otherwise indicated by the particular context. The terms “element” and “module” are typically used to indicate a portion of a greater configuration. Unless expressly limited by its context, the term “system” is used herein to indicate any of its ordinary meanings, including “a group of elements that interact to serve a common purpose.”
Unless initially introduced by a definite article, an ordinal term (e.g., “first,” “second,” “third,” etc.) used to modify a claim element does not by itself indicate any priority or order of the claim element with respect to another, but rather merely distinguishes the claim element from another claim element having a same name (but for use of the ordinal term). Unless expressly limited by its context, each of the terms “plurality” and “set” is used herein to indicate an integer quantity that is greater than one.
The various elements of an implementation of an apparatus or system as disclosed herein may be embodied in any combination of hardware with software and/or with firmware that is deemed suitable for the intended application. For example, such elements may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Any two or more, or even all, of these elements may be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips).
A processor or other means for processing as disclosed herein may be fabricated as one or more electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs (digital signal processors), FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products), and ASICs (application-specific integrated circuits). A processor or other means for processing as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of an implementation of method M100 (or another method as disclosed with reference to operation of an apparatus or system described herein), such as a task relating to another operation of a device or system in which the processor is embedded (e.g., a voice communications device, such as a smartphone, or a smart speaker). It is also possible for part of a method as disclosed herein to be performed under the control of one or more other processors.
Each of the tasks of the methods disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. In a typical application of an implementation of a method as disclosed herein, an array of logic elements (e.g., logic gates) is configured to perform one, more than one, or even all of the various tasks of the method. One or more (possibly all) of the tasks may also be implemented as code (e.g., one or more sets of instructions), embodied in a computer program product (e.g., one or more data storage media such as disks, flash or other nonvolatile memory cards, semiconductor memory chips, etc.), that is readable and/or executable by a machine (e.g., a computer) including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The tasks of an implementation of a method as disclosed herein may also be performed by more than one such array or machine. In these or other implementations, the tasks may be performed within a device for wireless communications such as a cellular telephone or other device having such communications capability. Such a device may be configured to communicate with circuit-switched and/or packet-switched networks (e.g., using one or more protocols such as VoIP). For example, such a device may include RF circuitry configured to receive and/or transmit encoded frames.
In one or more exemplary embodiments, the operations described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, such operations may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The term “computer-readable media” includes both computer-readable storage media and communication (e.g., transmission) media. By way of example, and not limitation, computer-readable storage media can comprise an array of storage elements, such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, EEPROM, and/or flash RAM), or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; CD-ROM or other optical disk storage; and/or magnetic disk storage or other magnetic storage devices. Such storage media may store information in the form of instructions or data structures that can be accessed by a computer. Communication media can comprise any medium that can be used to carry desired program code in the form of instructions or data structures and that can be accessed by a computer, including any medium that facilitates transfer of a computer program from one place to another. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and/or microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology such as infrared, radio, and/or microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray Disc™ (Blu-Ray Disc Association, Universal City, Calif.), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
In one example, a non-transitory computer-readable storage medium comprises code which, when executed by at least one processor, causes the at least one processor to perform a method of audio signal processing as described herein.
The previous description is provided to enable a person skilled in the art to make or use the disclosed implementations. Various modifications to these implementations will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other implementations without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the implementations shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.