This disclosure relates in general to systems and methods for processing speech signals, and in particular to systems and methods for processing a speech signal to determine the onset of voice activity.
Systems for speech recognition are tasked with receiving audio input representing human speech, typically via one or more microphones, and processing the audio input to determine words, logical structures, or other outputs corresponding to that audio input. For example, automatic speech recognition (ASR) systems may generate a text output based on the human speech corresponding to an audio input signal; and natural language processing (NLP) tools may generate logical structures, or computer data, corresponding to the meaning of that human speech. While such systems may contain any number of components, at the heart of such systems is a speech processing engine, which is a component that accepts an audio signal as input, performs some recognition logic on the input, and outputs text or other data corresponding to that input.
Historically, audio input was provided to speech processing engines in a structured, predictable manner. For example, a user might speak directly into a microphone of a desktop computer in response to a first prompt (e.g., “Begin Speaking Now”); immediately after pressing a first button input (e.g., a “start” or “record” button, or a microphone icon in a software interface); or after a significant period of silence. Similarly, a user might stop providing microphone input in response to a second prompt (e.g., “Stop Speaking”); immediately before pressing a second button input (e.g., a “stop” or “pause” button); or by remaining silent for a period of time. Such structured input sequences left little doubt as to when the user was providing input to a speech processing engine (e.g., between a first prompt and a second prompt, or between pressing a start button and pressing a stop button). Moreover, because such systems typically required deliberate action on the part of the user, it could generally be assumed that a user's speech input was directed to the speech processing engine, and not to some other listener (e.g., a person in an adjacent room). Accordingly, many speech processing engines of the time may not have had any particular need to identify, from microphone input, which portions of the input were directed to the speech processing engine and were intended to provide speech recognition input, and conversely, which portions were not.
The ways in which users provide speech recognition input have changed as speech processing engines have become more pervasive and more fully integrated into users' everyday lives. For example, some automated voice assistants are now housed in or otherwise integrated with household appliances, automotive dashboards, smart phones, wearable devices, “living room” devices (e.g., devices with integrated “smart” voice assistants), and other environments far removed from the conventional desktop computer. In many cases, speech processing engines are made more broadly usable by this level of integration into everyday life. However, these systems would be made cumbersome by system prompts, button inputs, and other conventional mechanisms for demarcating microphone input to the speech processing engine. Instead, some such systems place one or more microphones in an “always on” state, in which the microphones listen for a “wake-up word” (e.g., the “name” of the device or any other predetermined word or phrase) that denotes the beginning of a speech recognition input sequence. Upon detecting the wake-up word, the speech processing engine can process the following sequence of microphone input as input to the speech processing engine.
While the wake-up word system replaces the need for discrete prompts or button inputs for speech processing engines, it can be desirable to minimize the amount of time the wake-up word system is required to be active. For example, mobile devices operating on battery power benefit from both power efficiency and the ability to invoke a speech processing engine (e.g., invoking a smart voice assistant via a wake-up word). For mobile devices, constantly running the wake-up word system to detect the wake-up word may undesirably reduce the device's power efficiency. Ambient noises or speech other than the wake-up word may be continually processed and transcribed, thereby continually consuming power. However, processing and transcribing ambient noises or speech other than the wake-up word may not justify the required power consumption. It therefore can be desirable to minimize the amount of time the wake-up word system is required to be active without compromising the device's ability to invoke a speech processing engine.
In addition to reducing power consumption, it is also desirable to improve the accuracy of speech recognition systems. For example, a user who wishes to invoke a smart voice assistant may become frustrated if the smart voice assistant does not accurately respond to the wake-up word. The smart voice assistant may respond to an acoustic event that is not the wake-up word (i.e., false positives), the assistant may fail to respond to the wake-up word (i.e., false negatives), or the assistant may respond too slowly to the wake-up word (i.e., lag). Inaccurate responses to the wake-up word like the above examples may frustrate the user, leading to a degraded user experience. The user may further lose trust in the reliability of the product's speech processing engine interface. It therefore can be desirable to develop a speech recognition system that accurately responds to user input.
Examples of the disclosure describe systems and methods for determining a voice onset. According to an example method, a first audio signal is received via a first microphone, and a first probability of voice activity is determined based on the first audio signal. A second audio signal is received via a second microphone, and a second probability of voice activity is determined based on the first and second audio signals. Whether a first threshold of voice activity is met is determined based on the first and second probabilities of voice activity. In accordance with a determination that a first threshold of voice activity is met, it is determined that a voice onset has occurred and an alert is transmitted to a processor based on the determination that the voice onset has occurred. In accordance with a determination that a first threshold of voice activity is not met, it is not determined that a voice onset has occurred.
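For illustration, the decision flow of the example method can be sketched as follows. The multiplicative fusion of the two probabilities, the default threshold value, and the `alert` callback standing in for the notification to a processor are illustrative assumptions, not the disclosure's exact method:

```python
def detect_voice_onset(p_single, p_beam, threshold=0.5, alert=None):
    """Fuse a first probability of voice activity (from the first audio
    signal alone) with a second probability (from both audio signals),
    and decide whether a voice onset has occurred.

    `alert` is an optional callback representing the notification
    transmitted to a processor when an onset is determined.
    """
    combined = p_single * p_beam  # one simple way to fuse the two
    if combined >= threshold:     # first threshold of voice activity met
        if alert is not None:
            alert()               # e.g., wake a speech processing engine
        return True
    return False
```

In a fuller implementation, `p_single` and `p_beam` would themselves be computed per frame from the microphone signals, as described in the Voice Activity Detection section.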
In the following description of examples, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the disclosed examples.
Example Wearable System
In some examples involving augmented reality or mixed reality applications, it may be desirable to transform coordinates from a local coordinate space (e.g., a coordinate space fixed relative to headgear device 400A) to an inertial coordinate space, or to an environmental coordinate space. For instance, such transformations may be necessary for a display of headgear device 400A to present a virtual object at an expected position and orientation relative to the real environment (e.g., a virtual person sitting in a real chair, facing forward, regardless of the position and orientation of headgear device 400A), rather than at a fixed position and orientation on the display (e.g., at the same position in the display of headgear device 400A). This can maintain an illusion that the virtual object exists in the real environment (and does not, for example, appear positioned unnaturally in the real environment as the headgear device 400A shifts and rotates). In some examples, a compensatory transformation between coordinate spaces can be determined by processing imagery from the depth cameras 444 (e.g., using a Simultaneous Localization and Mapping (SLAM) and/or visual odometry procedure) in order to determine the transformation of the headgear device 400A relative to an inertial or environmental coordinate system. In the example shown in
In some examples, the depth cameras 444 can supply 3D imagery to a hand gesture tracker 411, which may be implemented in a processor of headgear device 400A. The hand gesture tracker 411 can identify a user's hand gestures, for example by matching 3D imagery received from the depth cameras 444 to stored patterns representing hand gestures. Other suitable techniques of identifying a user's hand gestures will be apparent.
In some examples, one or more processors 416 may be configured to receive data from headgear subsystem 404B, the IMU 409, the SLAM/visual odometry block 406, depth cameras 444, microphones 450, and/or the hand gesture tracker 411. The processor 416 can also send control signals to, and receive control signals from, the 6DOF totem system 404A. The processor 416 may be coupled to the 6DOF totem system 404A wirelessly, such as in examples where the handheld controller 400B is untethered. Processor 416 may further communicate with additional components, such as an audio-visual content memory 418, a Graphical Processing Unit (GPU) 420, and/or a Digital Signal Processor (DSP) audio spatializer 422. The DSP audio spatializer 422 may be coupled to a Head Related Transfer Function (HRTF) memory 425. The GPU 420 can include a left channel output coupled to the left source of imagewise modulated light 424 and a right channel output coupled to the right source of imagewise modulated light 426. GPU 420 can output stereoscopic image data to the sources of imagewise modulated light 424, 426. The DSP audio spatializer 422 can output audio to a left speaker 412 and/or a right speaker 414. The DSP audio spatializer 422 can receive input from processor 416 indicating a direction vector from a user to a virtual sound source (which may be moved by the user, e.g., via the handheld controller 400B). Based on the direction vector, the DSP audio spatializer 422 can determine a corresponding HRTF (e.g., by accessing an HRTF, or by interpolating multiple HRTFs). The DSP audio spatializer 422 can then apply the determined HRTF to an audio signal, such as an audio signal corresponding to a virtual sound generated by a virtual object.
This can enhance the believability and realism of the virtual sound, by incorporating the relative position and orientation of the user relative to the virtual sound in the mixed reality environment—that is, by presenting a virtual sound that matches a user's expectations of what that virtual sound would sound like if it were a real sound in a real environment.
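As a rough illustration of the HRTF selection step, the sketch below linearly interpolates between the two stored impulse responses nearest a requested azimuth. The single-angle index and the linear interpolation scheme are simplifying assumptions; a real HRTF memory (such as HRTF memory 425) would typically be indexed by elevation and distance as well:

```python
import numpy as np

def interpolate_hrtf(hrtf_db, azimuth_deg):
    """Linearly interpolate between the two stored HRTFs nearest to the
    requested azimuth. `hrtf_db` is a hypothetical mapping from azimuth
    (degrees) to an impulse response (sequence of filter taps)."""
    angles = sorted(hrtf_db)
    lo = max(a for a in angles if a <= azimuth_deg)
    hi = min(a for a in angles if a >= azimuth_deg)
    if lo == hi:
        return np.asarray(hrtf_db[lo])     # exact match; no interpolation
    w = (azimuth_deg - lo) / (hi - lo)     # interpolation weight in [0, 1]
    return (1 - w) * np.asarray(hrtf_db[lo]) + w * np.asarray(hrtf_db[hi])
```

The interpolated response would then be convolved with the virtual sound's audio signal to spatialize it.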
In some examples, such as shown in
While
Speech Processing Engines
Speech recognition systems in general include a speech processing engine that can accept an input audio signal corresponding to human speech (a source signal); process and analyze the input audio signal; and produce, as a result of the analysis, an output corresponding to the human speech. In the case of automatic speech recognition (ASR) systems, for example, the output of a speech processing engine may be a text transcription of the human speech. In the case of natural language processing systems, the output may be one or more commands or instructions indicated by the human speech; or some representation (e.g., a logical expression or a data structure) of the semantic meaning of the human speech. While reference is made herein to speech processing engines, other forms of speech processing besides speech recognition should also be considered within the scope of the disclosure. Other types of speech processing systems (e.g., automatic translation systems), including those that do not necessarily “recognize” speech, are contemplated and are within the scope of the disclosure.
Speech recognition systems are found in a diverse array of products and applications: conventional telephone systems; automated voice messaging systems; voice assistants (including standalone and smartphone-based voice assistants); vehicles and aircraft; desktop and document processing software; data entry; home appliances; medical devices; language translation software; closed captioning systems; and others. An advantage of speech recognition systems is that they may allow users to provide input to a computer system using natural spoken language, such as presented to one or more microphones, instead of conventional computer input devices such as keyboards or touch panels; accordingly, speech recognition systems may be particularly useful in environments where conventional input devices (e.g., keyboards) may be unavailable or impractical. Further, by permitting users to provide intuitive voice-based input, speech processing engines can heighten feelings of immersion. As such, speech recognition can be a natural fit for wearable systems, and in particular, for virtual reality, augmented reality, and/or mixed reality applications of wearable systems, in which user immersion is a primary goal; and in which it may be desirable to limit the use of conventional computer input devices, whose presence may detract from feelings of immersion.
Although speech processing engines allow users to naturally interface with a computer system through spoken language, constantly running the speech processing engine can pose problems. For example, one problem is that the user experience may be degraded if the speech processing engine responds to noise, or other sounds, that are not intended to be speech input. Background speech can be particularly problematic, as it could cause the computer system to execute unintended commands if the speech processing engine hears and interprets the speech. Because it can be difficult, if not impossible, to eliminate the presence of background speech in a user's environment (particularly for mobile devices), speech processing engines can benefit from a system that can ensure that the speech processing engine only responds to audio signals intended to be speech input for the computer system.
Such a system can also alleviate a second problem of continually running the speech processing engine: power efficiency. A continually running speech processing engine requires power to process a continuous stream of audio signals. Because automatic speech recognition and natural language processing can be computationally expensive tasks, the speech processing engine can be power hungry. Power constraints can be particularly acute for battery powered mobile devices, as continually running the speech processing engine can undesirably reduce the operating time of the mobile device. One way a system can alleviate this problem is by activating the speech processing engine only when the system has determined there is a high likelihood that the audio signal is intended as input for the speech processing engine and the computer system. By initially screening the incoming audio signal to determine if it is likely to be intended speech input, the system can ensure that the speech recognition system accurately responds to speech input while disregarding non-speech input. The system may also increase the power efficiency of the speech recognition system by reducing the amount of time the speech processing engine is required to be active.
One part of such a system can be a wake-up word system. A wake-up word system can rely upon a specific word or phrase to be at the beginning of any intended speech input. The wake-up word system can therefore require that the user first say the specific wake-up word or phrase and then follow the wake-up word or phrase with the intended speech input. Once the wake-up word system detects that the wake-up word has been spoken, the associated audio signal (that may or may not include the wake-up word) can be processed by the speech processing engine or passed to the computer system. Wake-up word systems with a well-selected wake-up word or phrase can reduce or eliminate unintended commands to the computer system from audio signals that are not intended as speech input. If the wake-up word or phrase is not typically uttered during normal conversation, the wake-up word or phrase may serve as a reliable marker that indicates the beginning of intended speech input. However, a wake-up word system still requires a speech processing engine to actively process audio signals to determine if any given audio signal includes the wake-up word.
It therefore can be desirable to create an efficient system that first determines if an audio signal is likely to be a wake-up word. In some embodiments, the system can first determine that an audio signal is likely to include a wake-up word. The system can then wake the speech processing engine and pass the audio signal to the speech processing engine. In some embodiments, the system comprises a voice activity detection system and further comprises a voice onset detection system.
The present disclosure is directed to systems and methods for improving the accuracy and power efficiency of a speech recognition system by filtering out audio signals that are not likely to be intended speech input. As described herein, such audio signals can first be identified (e.g., classified) by a voice activity detection system (e.g., as voice activity or non-voice activity). A voice onset detection system can then determine that an onset has occurred (e.g., of a voice activity event). The determination of an onset can then trigger subsequent events (e.g., activating a speech processing engine to determine if a wake-up word was spoken). “Gatekeeping” audio signals that the speech processing engine is required to process allows the speech processing engine to remain inactive when non-input audio signals are received. In some embodiments, the voice activity detection system and the voice onset detection system are configured to run on a low power, always-on processor.
Such capabilities may be especially important in mobile applications of speech processing, even more particularly for wearable applications, such as virtual reality or augmented reality applications. In such wearable applications, the user may often speak without directing input speech to the wearable system. The user may also be in locations where significant amounts of background speech exists. Further, the wearable system may be battery-operated and have a limited operation time. Sensors of wearable systems (such as those described in this disclosure) are well suited to solving this problem, as described herein. However, it is also contemplated that systems and methods described herein can also be applied in non-mobile applications (e.g., a voice assistant running on a device connected to a power outlet or a voice assistant in a vehicle).
Voice Activity Detection
In some embodiments, input audio signals can be summed together at step 605. For microphone configurations that are symmetric relative to a signal source (e.g., a user's mouth), a summed input signal can serve to reinforce an information signal (e.g., a speech signal) because the information signal can be present in both individual input signals, and each microphone can receive the information signal at the same time. In some embodiments, the noise signal in the individual input audio signals is generally not reinforced because of the random nature of the noise signal; summation can therefore increase a signal-to-noise ratio (e.g., by reinforcing a speech signal without reinforcing a noise signal). For microphone configurations that are not symmetric relative to a signal source, a filter or delay process can be used. A filter or delay process can align input audio signals to simulate a symmetric microphone configuration by compensating for a longer or shorter path from a signal source to a microphone. Although the depicted embodiment illustrates two input audio signals summed together, it is also contemplated that a single input audio signal can be used, or that more than two input audio signals can be summed together. It is also contemplated that signal processing steps 603 and/or 604 can occur after summation step 605, on a summed input signal.
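A minimal sketch of the alignment-and-summation idea follows; modeling the compensation as an integer-sample delay applied to the nearer microphone's signal is an assumption (a fractional-delay filter could be used instead):

```python
import numpy as np

def align_and_sum(sig_near, sig_far, delay_samples=0):
    """Sum two microphone signals. For an asymmetric configuration,
    delay the nearer microphone's signal by `delay_samples` so the
    speech component lines up in both channels before summation,
    reinforcing the speech signal but not the (uncorrelated) noise."""
    if delay_samples > 0:
        sig_near = np.concatenate(
            [np.zeros(delay_samples), sig_near[:-delay_samples]])
    return sig_near + sig_far
```

For a symmetric configuration, `delay_samples=0` reduces this to a plain sum of the two channels.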
In some embodiments, input power can be estimated at step 606. In some embodiments, input power can be determined on a per-frame basis based on a windowing function applied at steps 603 and 604. At step 608, the audio signal can optionally be smoothed to produce a smoothed input power. In some embodiments, the smoothing process occurs over the frames provided by the windowing function. Although the depicted embodiment shows signal processing and smoothing steps 603, 604, and 608, it is also contemplated that the input audio signal can proceed to step 610 without one or more of these steps.
At step 610, a ratio of the smoothed input power to the noise power estimate is calculated. In some embodiments, the noise power estimate is used to determine voice activity; however, the noise power estimate may itself rely on information as to when speech is present or absent. Because of this interdependence between inputs and outputs, methods like minima controlled recursive averaging (MCRA) can be used to determine the noise power estimate (although other methods may be used).
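The per-frame power, smoothing, and power-ratio steps might be sketched as below. The running-minimum noise tracker is a deliberately simplified stand-in for MCRA, and the frame layout, smoothing constant, and minimum-tracking window are assumptions:

```python
import numpy as np

def frame_power(frames):
    """Mean power per windowed frame (frames: shape [n_frames, frame_len])."""
    return np.mean(frames ** 2, axis=1)

def smooth(power, alpha=0.9):
    """First-order recursive smoothing of per-frame power."""
    out = np.empty_like(power)
    acc = power[0]
    for i, p in enumerate(power):
        acc = alpha * acc + (1 - alpha) * p
        out[i] = acc
    return out

def power_ratio(smoothed_power, window=8):
    """Ratio of smoothed input power to a running-minimum noise estimate,
    a crude stand-in for an MCRA-style noise power tracker."""
    ratios = np.empty_like(smoothed_power)
    for i in range(len(smoothed_power)):
        lo = max(0, i - window + 1)
        noise = smoothed_power[lo:i + 1].min()
        ratios[i] = smoothed_power[i] / max(noise, 1e-12)
    return ratios
```

During speech, the smoothed power rises well above the tracked minimum, so the ratio grows; during noise-only segments it stays near one.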
Referring back to
Referring now to
In
In another example, one or more microphones can be placed in a location that is generally but not completely fixed with respect to a user. In some embodiments, one or more microphones may be placed in a car (e.g., two microphones equally spaced relative to a driver's seat). In some embodiments, one or more microphones may be communicatively coupled to a processor. In some embodiments, a generally expected location of a user may be used in conjunction with a known location of one or more microphones for subsequent processing or calibration.
Referring back to the example shown in
At step 706, the two or more audio signals are summed to produce a summation signal, as shown in more detail in
Referring back to the example shown in
Referring back to the example shown in
In some embodiments, a baseline for a difference signal can be normalized to a baseline for a summation signal by using an equalization filter, which can be an FIR filter. A ratio of a power spectral density of a noise signal in a difference signal and a noise signal in a summation signal can be given as equation (1), where ΓN12(ω) represents the coherence of a signal N1 (which can correspond to a noise signal from a first microphone) and a signal N2 (which can correspond to a noise signal from a second microphone), and where Re(*) can represent the real portion of a complex number:
Accordingly, a desired frequency response of an equalization filter can be represented as equation (2):
Determining ΓN12(ω) can be difficult because it can require knowledge about which segments of a signal comprise voice activity. This can present a circular issue where voice activity information is required in part to determine voice activity information. One solution can be to model a noise signal as a diffuse field sound as equation (3), where d can represent a spacing between microphones, where c can represent the speed of sound, and ω can represent a normalized frequency:
Accordingly, a magnitude response using a diffuse field model for noise can be as equation (4):
In some embodiments, ΓN12(ω) can then be estimated using an FIR filter to approximate a magnitude response using a diffuse field model.
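Since the bodies of equations (1)-(4) are not reproduced in this text, the sketch below follows the common diffuse-field results and should be read as an assumption: a sin(x)/x coherence for microphone spacing d (equation (3) in spirit), a noise PSD ratio of (1 − Re Γ)/(1 + Re Γ) between difference and summation signals (equation (1) in spirit), and the corresponding equalization magnitude:

```python
import numpy as np

def diffuse_coherence(omega, d, c=343.0):
    """Diffuse-field coherence of two microphones spaced d meters apart:
    sin(omega*d/c) / (omega*d/c). np.sinc(t) = sin(pi*t)/(pi*t), so we
    divide the argument by pi."""
    x = omega * d / c
    return np.sinc(x / np.pi)

def eq_filter_magnitude(omega, d, c=343.0):
    """Desired equalization magnitude that scales the difference-signal
    noise floor up to the summation-signal noise floor, assuming the
    PSD ratio (1 - Re G)/(1 + Re G) for coherence G."""
    g = np.real(diffuse_coherence(omega, d, c))
    return np.sqrt((1.0 + g) / (1.0 - g + 1e-12))  # epsilon avoids /0 at DC
```

At low frequencies the coherence approaches 1, so the difference signal contains very little diffuse noise and the equalization gain becomes large; an FIR filter fit to this magnitude response can then serve as the equalization filter.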
In some embodiments, input power can be estimated at steps 710 and 711. In some embodiments, input power can be determined on a per-frame basis based on a windowing function applied at steps 704 and 705. At steps 712 and 713, the summation signal and the normalized difference signal can optionally be smoothed. In some embodiments, the smoothing process occurs over the frames provided by the windowing function.
In the depicted embodiment, the probability of voice activity in the beamforming signal is determined at step 715 from the ratio of the normalized difference signal to the summation signal. In some embodiments, the presence of voice activity is determined by mapping the ratio of the normalized difference signal to the summation signal into probability space, as shown in
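One plausible form for mapping the ratio into probability space is a logistic (sigmoid) function; the midpoint and slope below are hypothetical tuning parameters, not values from the disclosure:

```python
import math

def ratio_to_probability(ratio, midpoint=2.0, slope=1.5):
    """Map a power ratio into [0, 1]: ratios near the midpoint map to
    0.5, larger ratios asymptotically approach 1 (likely voice)."""
    return 1.0 / (1.0 + math.exp(-slope * (ratio - midpoint)))
```

Such a mapping could be applied both to the single-channel power ratio (step 610) and to the beamforming ratio (step 715), yielding the two probabilities that are later combined.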
Referring back to
ψVAD(l) = pBF(l)^αBF · pOD(l)^αOD (5)
Based on the combined probability for a given time, the input signal can then be classified in some embodiments as voice activity or non-voice activity as equation (6), where δVAD represents a threshold:
In some embodiments, δVAD is a tunable parameter that can be tuned by any suitable means (e.g., manually, semi-automatically, and/or automatically, for example, through machine learning). The binary classification of the input signal into voice activity or non-voice activity can be the voice activity detection (VAD) output.
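Equations (5) and (6) can be expressed directly as below, reading αBF and αOD as weighting exponents on the two probabilities; the default exponent and threshold values are placeholders for tuned parameters:

```python
def combined_probability(p_bf, p_od, alpha_bf=1.0, alpha_od=1.0):
    """Equation (5): weighted geometric combination of the beamforming
    probability p_bf and the other per-frame probability p_od."""
    return (p_bf ** alpha_bf) * (p_od ** alpha_od)

def classify_vad(psi, delta_vad=0.5):
    """Equation (6): binary voice-activity decision for a frame, where
    delta_vad is the tunable classification threshold."""
    return 1 if psi >= delta_vad else 0
```

The sequence of per-frame 0/1 decisions produced by `classify_vad` is the VAD output consumed by the onset detector.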
Voice Onset Detection
Referring back to
In some embodiments, an onset can be determined using parameters that can be tuned via any suitable means (e.g., manually, semi-automatically, and/or automatically, for example, through machine learning). For example, parameters can be tuned such that the voice onset detection system is sensitive to particular speech signals (e.g., a wake-up word). In some embodiments, a typical duration of a wake-up word is known (or can be determined for or by a user) and the voice onset detection parameters can be tuned accordingly (e.g., the THOLD parameter can be set to approximately the typical duration of the wake-up word) and, in some embodiments, may include padding. Although the embodiments discussed assume the unit of utterance to be detected by the voice onset detection system is a word (or one or more words), it is also contemplated that the target unit of utterance can be other suitable units, such as phonemes or phrases. In some embodiments, the TLOOKBACK buffer window can be tuned to optimize for lag and accuracy. In some embodiments, the TLOOKBACK buffer window can be tuned for or by a user. For example, a longer TLOOKBACK buffer window can increase the system's sensitivity to onsets because the system can evaluate a larger window where the TVA_ACCUM threshold can be met. However, in some embodiments, a longer TLOOKBACK window can increase lag because the system may have to wait longer to determine if an onset has occurred.
In some embodiments, the TLOOKBACK buffer window size and the TVA_ACCUM threshold can be tuned to yield the least amount of false negatives and/or false positives. For example, a longer buffer window size with the same threshold can make the system less likely to produce false negatives but more likely to produce false positives. In some embodiments, a larger threshold with the same buffer window size can make the system less likely to produce false positives but more likely to produce false negatives. In some embodiments, the onset marker can be determined at the moment the TVA_ACCUM threshold is met. Accordingly, in some embodiments, the onset marker can be offset from the beginning of the detected voice activity by the duration TVA_ACCUM. In some embodiments, it is desirable to introduce an offset to remove undesired speech signals that can precede desired speech signals (e.g., “uh” or “um” preceding a command). In some embodiments, once the TVA_ACCUM threshold is met, the onset marker can be “back-dated” using suitable methods to the beginning of the detected voice activity such that there may be no offset. For example, the onset marker can be back-dated to the most recent beginning of detected voice activity. In some embodiments, the onset marker can be back-dated using one or more of onset detection parameters (e.g., TLOOKBACK and TVA_ACCUM).
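The lookback accumulation and back-dating logic might look like the following sketch, where `lookback` and `accum_threshold` play the roles of the TLOOKBACK window and the TVA_ACCUM threshold. Expressing both as frame counts (rather than durations) is an assumption:

```python
from collections import deque

def detect_onset(vad_frames, lookback=10, accum_threshold=6):
    """Scan a stream of binary VAD decisions. Declare an onset when the
    number of voice-active frames within the most recent `lookback`
    frames reaches `accum_threshold`. The returned onset marker is
    back-dated to the start of the most recent run of voice activity.
    Returns the onset frame index, or None if no onset occurs."""
    window = deque(maxlen=lookback)  # sliding lookback buffer
    run_start = None                 # start of current voice-activity run
    for i, active in enumerate(vad_frames):
        if active and run_start is None:
            run_start = i
        elif not active:
            run_start = None
        window.append(active)
        if sum(window) >= accum_threshold:
            return run_start if run_start is not None else i
    return None
```

Returning `i` instead of `run_start` would give the non-back-dated variant, where the marker is offset from the beginning of voice activity by the accumulation amount.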
In some embodiments, onset detection parameters can be determined at least in part based on previous interactions. For example, the THOLD duration can be adjusted based on a determination of how long the user has previously taken to speak the wake-up word. In some embodiments, TLOOKBACK or TVA_ACCUM can be adjusted based on a likelihood of false positives or false negatives from a user or a user's environment. In some embodiments, signal processing steps 604 (in
In some embodiments, voice onset detection can be used to trigger subsequent events. For example, the voice onset detection system can run on an always-on, lower-power processor (e.g., a dedicated processor or a DSP) rather than on a main processor. In some embodiments, the detection of an onset can wake a neighboring processor and prompt the neighboring processor to begin speech recognition. In some embodiments, the voice onset detection system can pass information to subsequent systems (e.g., the voice onset detection system can pass a timestamp of a detected onset to a speech processing engine running on a neighboring processor). In some embodiments, the voice onset detection system can use voice activity detection information to accurately determine the onset of speech without the aid of a speech processing engine. In some embodiments, the detection of an onset can serve as a trigger for a speech processing engine to activate; the speech processing engine therefore can remain inactive (reducing power consumption) until an onset has been detected. In some embodiments, a voice onset detector requires less processing (and therefore less power) than a speech processing engine because a voice onset detector analyzes input signal energy, instead of analyzing the content of the speech.
In some embodiments, sensors on a wearable head device can determine (at least in part) parameters for onset detection. For example, one or more sensors on a wearable head device may monitor a user's mouth movements in determining an onset event. In some embodiments, a user moving his or her mouth may indicate that an onset event is likely to occur. In some embodiments, one or more sensors on a wearable head device may monitor a user's eye movements in determining an onset event. For example, certain eye movements or patterns may be associated with preceding an onset event. In some embodiments, sensors on a wearable head device may monitor a user's vital signs to determine an onset event. For example, an elevated heartrate may be associated with preceding an onset event. It is also contemplated that sensors on a wearable head device may monitor a user's behavior in ways other than those described herein (e.g., head movement, hand movement).
In some embodiments, sensor data (e.g., mouth movement data, eye movement data, vital sign data) can be used as an additional parameter to determine an onset event (e.g., determination of whether a threshold of voice activity is met), or sensor data can be used exclusively to determine an onset event. In some embodiments, sensor data can be used to adjust other onset detection parameters. For example, mouth movement data can be used to determine how long a particular user takes to speak a wake-up word. In some embodiments, mouth movement data can be used to adjust a THOLD parameter accordingly. In some embodiments, a wearable head device with one or more sensors can be pre-loaded with instructions on how to utilize sensor data for determining an onset event. In some embodiments, a wearable head device with one or more sensors can also learn how to utilize sensor data for predicting an onset event based on previous interactions. For example, it may be determined that, for a particular user, heartrate data is not meaningfully correlated with an onset event, but eye patterns are meaningfully correlated with an onset event. Heartrate data may therefore not be used to determine onset events, or a lower weight may be assigned to heartrate data. A higher weight may also be assigned to eye pattern data.
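One simple way to fold sensor evidence into the onset decision is a weighted average of per-modality probabilities; the modality names and weights below are hypothetical, and (as described above) the weights could be learned per user from previous interactions:

```python
def fuse_onset_evidence(audio_prob, sensor_probs, weights):
    """Weighted average of an audio-derived onset probability with
    sensor-derived probabilities (e.g., mouth movement, eye patterns).
    `sensor_probs` maps a modality name to a probability in [0, 1];
    `weights` maps modality names (plus "audio") to non-negative weights.
    A modality absent from `weights` contributes nothing."""
    total_weight = weights.get("audio", 1.0)
    weighted_sum = audio_prob * total_weight
    for name, p in sensor_probs.items():
        w = weights.get(name, 0.0)
        weighted_sum += p * w
        total_weight += w
    return weighted_sum / total_weight
```

Setting a modality's weight to zero (e.g., heartrate for a user whose heartrate is uncorrelated with onsets) removes it from the decision without changing the code path.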
In some embodiments, the voice onset detection system functions as a wrapper around the voice activity detection system. In some embodiments, it is desirable to produce onset information because onset information may be more accurate than voice activity information. For example, onset information may be more robust against false positives than voice activity information (e.g., if a speaker briefly pauses during a single utterance, voice activity detection may show two instances of voice activity when one onset is desired). In some embodiments, it is desirable to produce onset information because it requires less processing in subsequent steps than voice activity information. For example, a cluster of multiple detected voice activity events may require a further determination of whether the cluster should be treated as a single instance of voice activity or as multiple instances.
Asymmetrical Microphone Placement
Symmetrical microphone configurations (such as the configuration shown in
In some embodiments, asymmetrical microphone configurations may be used because an asymmetrical configuration may be better suited to distinguishing a user's voice from other audio signals. In
In some embodiments, an asymmetrical microphone configuration (e.g., the microphone configuration shown in
Although asymmetrical microphone configurations may provide additional information about a sound source (e.g., an approximate height of the sound source), a sound delay may complicate subsequent calculations. In some embodiments, adding and/or subtracting audio signals that are offset (e.g., in time) from each other may decrease a signal-to-noise ratio (“SNR”), rather than increasing the SNR (which may happen when the audio signals are not offset from each other). It can therefore be desirable to process audio signals received from an asymmetrical microphone configuration such that a beamforming analysis (e.g., noise cancellation) may still be performed to determine voice activity. In some embodiments, a voice onset event can be determined based on a beamforming analysis and/or single channel analysis. A notification may be transmitted to a processor (e.g., a DSP or x86 processor) in response to determining that a voice onset event has occurred. The notification may include information such as a timestamp of the voice onset event and/or a request that the processor begin speech recognition.
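The SNR concern above can be illustrated with a simple delay-compensated sum: if the far microphone's channel is advanced by the known inter-microphone delay before the channels are combined, the user's voice adds coherently instead of partially cancelling. This is a minimal sketch assuming an integer-sample delay; the function name and a simple sum (rather than a full beamformer) are illustrative:

```python
def align_and_sum(near, far, delay_samples):
    """Time-align the far (delayed) microphone channel with the near one
    before summing.

    near: samples from the microphone closer to the user's mouth.
    far: samples from the farther microphone; the voice arrives
        delay_samples later here (e.g., ~3-4 samples at 48 kHz).
    Returns the coherent sum over the overlapping region.
    """
    aligned = far[delay_samples:]  # advance the delayed channel
    n = min(len(near), len(aligned))
    return [near[i] + aligned[i] for i in range(n)]
```

Without the alignment step, summing the raw channels would partially cancel the voice component, reducing rather than increasing the SNR.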
In some embodiments, an audio signal received at microphone 1108 may be processed at steps 1110 and/or 1112. In some embodiments, steps 1110 and 1112 together may correspond to processing step 705 and/or step 604. For example, microphone 1108 may be placed at position 1002. In some embodiments, a window function may be applied at step 1110 to a second audio signal received by microphone 1108. In some embodiments, the window function applied at step 1110 can be the same window function applied at step 1104. In some embodiments, a second filter (e.g., a bandpass filter) may be applied to the second audio signal at step 1112. In some embodiments, the second filter may be different from the first filter because the second filter may account for a time-delay between an audio signal received at microphone 1108 and an audio signal received at microphone 1102. For example, a user may speak while wearing MR system 1000, and the user's voice may be picked up by microphone 1108 at a later time than by microphone 1102 (e.g., because microphone 1108 may be further away from a user's mouth than microphone 1102). In some embodiments, a bandpass filter applied at step 1112 can be implemented in the time domain, and the bandpass filter can be shifted (as compared to a bandpass filter applied at step 1106) by a delay time, which may correspond to the additional time for sound to travel from position 1006 to position 1002, as compared to the travel time from position 1006 to position 1004. In some embodiments, a delay time may be approximately 3-4 samples at a 48 kHz sampling rate, although a delay time can vary depending on a particular microphone (and user) configuration. A delay time can be predetermined (e.g., using measuring equipment) and may be fixed across different MR systems (e.g., because the microphone configurations may not vary across different systems). In some embodiments, a delay time can be dynamically measured locally by individual MR systems.
For example, a user may be prompted to generate an impulse (e.g., a sharp, short noise) with their mouth, and a delay time may be recorded as the impulse reaches the asymmetrically positioned microphones. In some embodiments, a bandpass filter can be implemented in the frequency domain, and one or more delay times may be applied to different frequency bands (e.g., a frequency band including human voices may be delayed by a first delay time, and all other frequency bands may be delayed by a second delay time).
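The dynamic, per-device delay measurement described above can be sketched as a cross-correlation search: record the user-generated impulse on both microphones and pick the lag that maximizes their correlation. The function name and the brute-force search are illustrative assumptions:

```python
def estimate_delay(near, far, max_lag):
    """Estimate the inter-microphone delay, in samples, of a user-generated
    impulse.

    near: recording from the microphone closer to the user's mouth.
    far: recording from the farther microphone.
    max_lag: largest plausible delay to search (e.g., a few samples
        at 48 kHz for a head-worn configuration).
    Returns the lag (in samples) maximizing the cross-correlation.
    """
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        corr = sum(near[i] * far[i + lag]
                   for i in range(len(near) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag
```

The measured lag could then be used to shift the bandpass filter (or a per-band delay) applied to the farther microphone's channel.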
In some embodiments, an audio signal received at microphone 1120 may be processed at steps 1122 and/or 1124. In some embodiments, steps 1122 and 1124 together may correspond to processing step 705 and/or step 604. For example, microphone 1120 may be placed at position 1002. In some embodiments, a window function may be applied at step 1122 to a second audio signal received by microphone 1120. In some embodiments, the window function applied at step 1122 can be the same window function applied at step 1116. In some embodiments, a second filter (e.g., a bandpass filter) may be applied to the second audio signal at step 1124. In some embodiments, the second filter may be different from the first filter because the second filter may account for a time-delay between an audio signal received at microphone 1120 and an audio signal received at microphone 1114. In some embodiments, the second filter may have the same taps as the filter applied at step 1118. In some embodiments, the second filter may be configured to account for additional variations. For example, an audio signal originating from a user's mouth may be distorted as a result of, for example, additional travel time, reflections from additional material traversed (e.g., parts of MR system 1000), reverberations from additional material traversed, and/or occlusion from parts of MR system 1000. In some embodiments, the second filter may be configured to remove and/or mitigate distortions that may result from an asymmetrical microphone configuration.
In some embodiments, an audio signal received at microphone 1132 may be processed at steps 1134, 1136, and/or 1138. In some embodiments, steps 1134, 1136, and 1138 together may correspond to processing step 705 and/or step 604. For example, microphone 1132 may be placed at position 1002. In some embodiments, an FIR filter can be applied to a second audio signal received by microphone 1132. In some embodiments, an FIR filter can be configured to filter out non-impulse responses. An impulse response can be pre-determined (and may not vary across MR systems with the same microphone configurations), or an impulse response can be dynamically determined at individual MR systems (e.g., by having the user utter an impulse and recording the response). In some embodiments, an FIR filter can provide better control when designing a frequency-dependent delay than an infinite impulse response (IIR) filter. In some embodiments, an FIR filter can guarantee a stable output. In some embodiments, an FIR filter can be configured to compensate for a time delay. In some embodiments, an FIR filter can be configured to remove distortions that may result from a longer and/or different travel path for an audio signal. In some embodiments, a window function may be applied at step 1136 to a second audio signal received by microphone 1132. In some embodiments, the window function applied at step 1136 can be the same window function applied at step 1128. In some embodiments, a second filter (e.g., a bandpass filter) may be applied to the second audio signal at step 1138. In some embodiments, the second filter may be the same as the filter applied at step 1130.
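One common way to build an FIR filter that compensates a possibly fractional sample delay is a windowed-sinc design; it is sketched below as an illustration of the frequency-dependent delay control mentioned above. The function name, tap count, and choice of a Hamming window are assumptions, not the disclosed design:

```python
import math

def fractional_delay_fir(delay, num_taps=21):
    """Windowed-sinc FIR approximating a (possibly fractional) sample
    delay.

    delay: desired delay in samples (may be non-integer, e.g. 3.5).
    num_taps: filter length (odd, so the filter has a center tap).
    Returns the list of FIR coefficients. Being FIR, the filter is
    guaranteed stable regardless of the delay chosen.
    """
    center = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - center - delay
        # Ideal fractional-delay response is a shifted sinc...
        sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
        # ...tapered by a Hamming window to limit truncation ripple.
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(sinc * window)
    return taps
```

For an integer delay of zero the design degenerates to a unit impulse at the center tap; fractional delays spread energy across neighboring taps, which is what allows sub-sample alignment of the asymmetric microphone channels.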
With respect to the systems and methods described above, elements of the systems and methods can be implemented by one or more computer processors (e.g., CPUs or DSPs) as appropriate. The disclosure is not limited to any particular configuration of computer hardware, including computer processors, used to implement these elements. In some cases, multiple computer systems can be employed to implement the systems and methods described above. For example, a first computer processor (e.g., a processor of a wearable device coupled to one or more microphones) can be utilized to receive input microphone signals, and perform initial processing of those signals (e.g., signal conditioning and/or segmentation, such as described above). A second (and perhaps more computationally powerful) processor can then be utilized to perform more computationally intensive processing, such as determining probability values associated with speech segments of those signals. Another computer device, such as a cloud server, can host a speech processing engine, to which input signals are ultimately provided. Other suitable configurations will be apparent and are within the scope of the disclosure.
Although the disclosed examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. For example, elements of one or more implementations may be combined, deleted, modified, or supplemented to form further implementations. Such changes and modifications are to be understood as being included within the scope of the disclosed examples as defined by the appended claims.
This patent application is a Continuation of U.S. patent application Ser. No. 17/714,708, filed Apr. 6, 2022, which is a Continuation of U.S. patent application Ser. No. 16/987,267, filed Aug. 6, 2020, now U.S. Pat. No. 11,328,740, which claims priority to U.S. Provisional Patent Application No. 62/884,143, filed on Aug. 7, 2019, and U.S. Provisional Patent Application No. 63/001,118, filed Mar. 27, 2020, all of which are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4158750 | Sakoe | Jun 1979 | A |
4852988 | Velez | Aug 1989 | A |
6433760 | Vaissie | Aug 2002 | B1 |
6491391 | Blum et al. | Dec 2002 | B1 |
6496799 | Pickering | Dec 2002 | B1 |
6847336 | Lemelson | Jan 2005 | B1 |
6943754 | Aughey | Sep 2005 | B2 |
6977776 | Volkenandt et al. | Dec 2005 | B2 |
7346654 | Weiss | Mar 2008 | B1 |
7347551 | Fergason et al. | Mar 2008 | B2 |
7488294 | Torch | Feb 2009 | B2 |
7587319 | Catchpole | Sep 2009 | B2 |
7979277 | Larri et al. | Jul 2011 | B2 |
8154588 | Burns | Apr 2012 | B2 |
8235529 | Raffle | Aug 2012 | B1 |
8611015 | Wheeler | Dec 2013 | B2 |
8638498 | Bohn et al. | Jan 2014 | B2 |
8696113 | Lewis | Apr 2014 | B2 |
8929589 | Publicover et al. | Jan 2015 | B2 |
9010929 | Lewis | Apr 2015 | B2 |
9274338 | Robbins et al. | Mar 2016 | B2 |
9292973 | Bar-zeev et al. | Mar 2016 | B2 |
9294860 | Carlson | Mar 2016 | B1 |
9323325 | Perez et al. | Apr 2016 | B2 |
9715875 | Piernot | Jul 2017 | B2 |
9720505 | Gribetz et al. | Aug 2017 | B2 |
10013053 | Cederlund et al. | Jul 2018 | B2 |
10025379 | Drake et al. | Jul 2018 | B2 |
10062377 | Larri et al. | Aug 2018 | B2 |
10134425 | Johnson, Jr. | Nov 2018 | B1 |
10289205 | Sumter | May 2019 | B1 |
10839789 | Larri et al. | Nov 2020 | B2 |
10971140 | Catchpole | Apr 2021 | B2 |
11328740 | Lee et al. | May 2022 | B2 |
11587563 | Sheeder et al. | Feb 2023 | B2 |
11790935 | Lee et al. | Oct 2023 | B2 |
11854550 | Sheeder et al. | Dec 2023 | B2 |
11854566 | Leider | Dec 2023 | B2 |
11917384 | Roach | Feb 2024 | B2 |
20010055985 | Matt et al. | Dec 2001 | A1 |
20030030597 | Geist | Feb 2003 | A1 |
20050033571 | Huang | Feb 2005 | A1 |
20050069852 | Janakiraman et al. | Mar 2005 | A1 |
20060023158 | Howell et al. | Feb 2006 | A1 |
20060072767 | Zhang et al. | Apr 2006 | A1 |
20060098827 | Paddock et al. | May 2006 | A1 |
20060178876 | Sato et al. | Aug 2006 | A1 |
20070225982 | Washio | Sep 2007 | A1 |
20080124690 | Redlich | May 2008 | A1 |
20080201138 | Visser et al. | Aug 2008 | A1 |
20090180626 | Nakano | Jul 2009 | A1 |
20100245585 | Fisher et al. | Sep 2010 | A1 |
20100323652 | Visser et al. | Dec 2010 | A1 |
20110211056 | Publicover et al. | Sep 2011 | A1 |
20110213664 | Osterhout | Sep 2011 | A1 |
20110238407 | Kent | Sep 2011 | A1 |
20110288860 | Schevciw | Nov 2011 | A1 |
20120021806 | Maltz | Jan 2012 | A1 |
20120130713 | Shin et al. | May 2012 | A1 |
20120209601 | Jing | Aug 2012 | A1 |
20130077147 | Efimov | Mar 2013 | A1 |
20130204607 | Baker, IV | Aug 2013 | A1 |
20130339028 | Rosner et al. | Dec 2013 | A1 |
20140016793 | Gardner | Jan 2014 | A1 |
20140194702 | Tran | Jul 2014 | A1 |
20140195918 | Friedlander | Jul 2014 | A1 |
20140200887 | Nakadai et al. | Jul 2014 | A1 |
20140222430 | Rao | Aug 2014 | A1 |
20140270202 | Ivanov et al. | Sep 2014 | A1 |
20140270244 | Fan | Sep 2014 | A1 |
20140337023 | Mcculloch et al. | Nov 2014 | A1 |
20140379336 | Bhatnagar | Dec 2014 | A1 |
20150006181 | Fan et al. | Jan 2015 | A1 |
20150168731 | Robbins | Jun 2015 | A1 |
20150310857 | Habets et al. | Oct 2015 | A1 |
20150348572 | Thornburg | Dec 2015 | A1 |
20160019910 | Faubel et al. | Jan 2016 | A1 |
20160066113 | Elkhatib et al. | Mar 2016 | A1 |
20160112817 | Fan | Apr 2016 | A1 |
20160142830 | Hu | May 2016 | A1 |
20160165340 | Benattar | Jun 2016 | A1 |
20160180837 | Gustavsson | Jun 2016 | A1 |
20160216130 | Abramson et al. | Jul 2016 | A1 |
20160217781 | Zhong et al. | Jul 2016 | A1 |
20160284350 | Yun et al. | Sep 2016 | A1 |
20160358598 | Williams et al. | Dec 2016 | A1 |
20160379629 | Hofer et al. | Dec 2016 | A1 |
20160379632 | Hoffmeister et al. | Dec 2016 | A1 |
20160379638 | Basye et al. | Dec 2016 | A1 |
20170078819 | Habets | Mar 2017 | A1 |
20170091169 | Bellegarda | Mar 2017 | A1 |
20170092276 | Sun et al. | Mar 2017 | A1 |
20170110116 | Tadpatrikar et al. | Apr 2017 | A1 |
20170148429 | Hayakawa | May 2017 | A1 |
20170270919 | Parthasarathi et al. | Sep 2017 | A1 |
20170280239 | Sekiya | Sep 2017 | A1 |
20170316780 | Lovitt | Nov 2017 | A1 |
20170330555 | Kawano | Nov 2017 | A1 |
20170332187 | Lin | Nov 2017 | A1 |
20180011534 | Poulos et al. | Jan 2018 | A1 |
20180053284 | Rodriguez et al. | Feb 2018 | A1 |
20180077095 | Deyle et al. | Mar 2018 | A1 |
20180129469 | Vennström et al. | May 2018 | A1 |
20180227665 | Elko et al. | Aug 2018 | A1 |
20180316939 | Todd | Nov 2018 | A1 |
20180336902 | Cartwright et al. | Nov 2018 | A1 |
20180358021 | Mistica et al. | Dec 2018 | A1 |
20180366114 | Anbazhagan et al. | Dec 2018 | A1 |
20190129944 | Kawano | May 2019 | A1 |
20190362741 | Li et al. | Nov 2019 | A1 |
20190373362 | Ansai et al. | Dec 2019 | A1 |
20190392641 | Taylor | Dec 2019 | A1 |
20200027455 | Sugiyama et al. | Jan 2020 | A1 |
20200064921 | Kang et al. | Feb 2020 | A1 |
20200194028 | Lipman | Jun 2020 | A1 |
20200213729 | Soto | Jul 2020 | A1 |
20200279552 | Piersol et al. | Sep 2020 | A1 |
20200286465 | Wang et al. | Sep 2020 | A1 |
20200296521 | Wexler et al. | Sep 2020 | A1 |
20200335128 | Sheeder et al. | Oct 2020 | A1 |
20210056966 | Bilac et al. | Feb 2021 | A1 |
20210125609 | Dusan et al. | Apr 2021 | A1 |
20210264931 | Leider | Aug 2021 | A1 |
20210306751 | Roach et al. | Sep 2021 | A1 |
20220230658 | Lee et al. | Jul 2022 | A1 |
20230135768 | Sheeder et al. | May 2023 | A1 |
20230386461 | Leider | Nov 2023 | A1 |
20240087565 | Sheeder | Mar 2024 | A1 |
20240087587 | Leider | Mar 2024 | A1 |
20240163612 | Roach | May 2024 | A1 |
Number | Date | Country |
---|---|---|
2316473 | Jan 2001 | CA |
2362895 | Dec 2002 | CA |
2388766 | Dec 2003 | CA |
105529033 | Apr 2016 | CN |
2950307 | Dec 2015 | EP |
3211918 | Aug 2017 | EP |
S52144205 | Dec 1977 | JP |
2000148184 | May 2000 | JP |
2002135173 | May 2002 | JP |
2005196134 | Jul 2005 | JP |
2014137405 | Jul 2014 | JP |
2014178339 | Sep 2014 | JP |
2016004270 | Jan 2016 | JP |
2017211596 | Nov 2017 | JP |
2018523156 | Aug 2018 | JP |
2018179954 | Nov 2018 | JP |
2014113891 | Jul 2014 | WO |
2014159581 | Oct 2014 | WO |
2015169618 | Nov 2015 | WO |
2016063587 | Apr 2016 | WO |
2016151956 | Sep 2016 | WO |
2016153712 | Sep 2016 | WO |
2017003903 | Jan 2017 | WO |
2017017591 | Feb 2017 | WO |
2017191711 | Nov 2017 | WO |
2019224292 | Nov 2019 | WO |
2020180719 | Sep 2020 | WO |
2020214844 | Oct 2020 | WO |
2022072752 | Apr 2022 | WO |
2023064875 | Apr 2023 | WO |
Entry |
---|
Final Office Action mailed Sep. 7, 2023, for U.S. Appl. No. 17/214,446, filed Mar. 26, 2021, nineteen pages. |
Non-Final Office Action mailed Sep. 15, 2023, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, fourteen pages. |
Notice of Allowance mailed Oct. 12, 2023, for U.S. Appl. No. 18/148,221, filed Dec. 29, 2022, five pages. |
Notice of Allowance mailed Oct. 17, 2023, for U.S. Appl. No. 17/254,832, filed Dec. 21, 2020, sixteen pages. |
European Office Action dated Dec. 12, 2023, for EP Application No. 20766540.7, four pages. |
Non-Final Office Action mailed Mar. 27, 2024, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, sixteen pages. |
Backstrom, T. (Oct. 2015). “Voice Activity Detection Speech Processing,” Aalto University, vol. 58, No. 10; Publication [online], retrieved Apr. 19, 2020, retrieved from the Internet: URL: https://mycourses.aalto.fi/pluginfile.php/146209/mod_resource/content/1/slides_07_vad.pdf, pp. 1-36. |
Bilac, M. et al. (Nov. 15, 2017). Gaze and Filled Pause Detection for Smooth Human-Robot Conversations. www.angelicalim.com, retrieved on Jun. 17, 2020, Retrieved from the internet URL: http://www.angelicalim.com/papers/humanoids2017_paper.pdf entire document, 8 pages. |
Chinese Office Action dated Jun. 2, 2023, for CN Application No. 2020 571488, with English translation, 9 pages. |
European Office Action dated Jun. 1, 2023, for EP Application No. 19822754.8, six pages. |
European Search Report dated Nov. 12, 2021, for EP Application No. 19822754.8, ten pages. |
European Search Report dated Nov. 21, 2022, for EP Application No. 20791183.5 nine pages. |
European Search Report dated Oct. 6, 2022, for EP Application No. 20766540.7 nine pages. |
Final Office Action mailed Apr. 10, 2023, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, sixteen pages. |
Final Office Action mailed Apr. 15, 2022, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, fourteen pages. |
Final Office Action mailed Aug. 4, 2023, for U.S. Appl. No. 17/254,832, filed Dec. 21, 2020, seventeen pages. |
Final Office Action mailed Aug. 5, 2022, for U.S. Appl. No. 16/805,337, filed Feb. 28, 2020, eighteen pages. |
Final Office Action mailed Jan. 11, 2023, for U.S. Appl. No. 17/214,446, filed Mar. 26, 2021, sixteen pages. |
Final Office Action mailed Oct. 6, 2021, for U.S. Appl. No. 16/805,337, filed Feb. 28, 2020, fourteen pages. |
Harma, A. et al. (Jun. 2004). “Augmented Reality Audio for Mobile and Wearable Appliances,” J. Audio Eng. Soc., vol. 52, No. 6, retrieved on Aug. 20, 2019, Retrieved from the Internet: URL:https://pdfs.semanticscholar.org/ae54/82c6a8d4add3e9707d780dfb5ce03d8e0120.pdf, 22 pages. |
International Preliminary Report on Patentability mailed Dec. 22, 2020, for PCT Application No. PCT/US2019/038546, 13 pages. |
International Preliminary Report and Written Opinion mailed Apr. 13, 2023, for PCT Application No. PCT/US2021/53046, filed Sep. 30, 2021, nine pages. |
International Preliminary Report and Written Opinion mailed Oct. 28, 2021, for PCT Application No. PCT/US2020/028570, filed Apr. 16, 2020, 17 pages. |
International Preliminary Report and Written Opinion mailed Sep. 16, 2021, for PCT Application No. PCT/US20/20469, filed Feb. 28, 2020, nine pages. |
International Search Report and Written Opinion mailed Jan. 17, 2023, for PCT Application No. PCT/US22/78073, thirteen pages. |
International Search Report and Written Opinion mailed Jan. 24, 2022, for PCT Application No. PCT/US2021/53046, filed Sep. 30, 2021, 15 pages. |
International Search Report and Written Opinion mailed Jul. 2, 2020, for PCT Application No. PCT/US2020/028570, filed Apr. 16, 2020, nineteen pages. |
International Search Report and Written Opinion mailed May 18, 2020, for PCT Application No. PCT/US20/20469, filed Feb. 28, 2020, twenty pages. |
International Search Report and Written Opinion mailed Sep. 17, 2019, for PCT Application No. PCT/US2019/038546, sixteen pages. |
Jacob, R. “Eye Tracking in Advanced Interface Design”, Virtual Environments and Advanced Interface Design, Oxford University Press, Inc. (Jun. 1995). |
Kitayama, K. et al. (Sep. 30, 2003). “Speech Starter: Noise-Robust Endpoint Detection by Using Filled Pauses.” Eurospeech 2003, retrieved on Jun. 17, 2020, retrieved from the internet URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.141.1472&rep=rep1&type=pdf entire document, pp. 1237-1240. |
Liu, Baiyang, et al.: (Sep. 6, 2015). “Accurate Endpointing with Expected Pause Duration,” Interspeech 2015, pp. 2912-2916, retrieved from: https://scholar.google.com/scholar?q=BAIYANG,+Liu+et+al.:+(September+6,+2015).+Accurate+endpointing+with+expected+pause+duration,&hl=en&as_sdt=0&as_vis=1&oi=scholart. |
Non-Final Office Action mailed Apr. 12, 2023, for U.S. Appl. No. 17/214,446, filed Mar. 26, 2021, seventeen pages. |
Non-Final Office Action mailed Apr. 13, 2023, for U.S. Appl. No. 17/714,708, filed Apr. 6, 2022, sixteen pages. |
Non-Final Office Action mailed Apr. 27, 2023, for U.S. Appl. No. 17/254,832, filed Dec. 21, 2020, fourteen pages. |
Non-Final Office Action mailed Aug. 10, 2022, for U.S. Appl. No. 17/214,446, filed Mar. 26, 2021, fifteen pages. |
Non-Final Office Action mailed Jun. 23, 2023, for U.S. Appl. No. 18/148,221, filed Dec. 29, 2022, thirteen pages. |
Non-Final Office Action mailed Jun. 24, 2021, for U.S. Appl. No. 16/805,337, filed Feb. 28, 2020, fourteen pages. |
Non-Final Office Action mailed Mar. 17, 2022, for U.S. Appl. No. 16/805,337, filed Feb. 28, 2020, sixteen pages. |
Non-Final Office Action mailed Nov. 17, 2021, for U.S. Appl. No. 16/987,267, filed Aug. 6, 2020, 21 pages. |
Non-Final Office Action mailed Oct. 4, 2021, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, twelve pages. |
Non-Final Office Action mailed Sep. 29, 2022, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, fifteen pages. |
Notice of Allowance mailed Jul. 31, 2023, for U.S. Appl. No. 17/714,708, filed Apr. 6, 2022, eight pages. |
Notice of Allowance mailed Mar. 3, 2022, for U.S. Appl. No. 16/987,267, filed Aug. 6, 2020, nine pages. |
Notice of Allowance mailed Nov. 30, 2022, for U.S. Appl. No. 16/805,337, filed Feb. 28, 2020, six pages. |
Rolland, J. et al., “High-resolution inset head-mounted display”, Optical Society of America, vol. 37, No. 19, Applied Optics, (Jul. 1, 1998). |
Shannon, Matt et al. (Aug. 20-24, 2017). “Improved End-of-Query Detection for Streaming Speech Recognition”, Interspeech 2017, Stockholm, Sweden, pp. 1909-1913. |
Tanriverdi, V. et al. (Apr. 2000). “Interacting With Eye Movements In Virtual Environments,” Department of Electrical Engineering and Computer Science, Tufts University, Medford, MA 02155, USA, Proceedings of the SIGCHI conference on Human Factors in Computing Systems, eight pages. |
Tonges, R. (Dec. 2015). “An augmented Acoustics Demonstrator with Realtime stereo up-mixing and Binaural Auralization,” Technische University Berlin, Audio Communication Group, retrieved on Aug. 22, 2019, Retrieved from the Internet: URL: https://www2.ak.tu-berlin.de/˜akgroup/ak_pub/abschlussarbeiten/2015/ToengesRaffael_MasA.pdf 100 pages. |
Yoshida, A. et al., “Design and Applications of a High Resolution Insert Head Mounted Display”, (Jun. 1994). |
Chinese Office Action dated Dec. 21, 2023, for CN Application No. 201980050714.4, with English translation, eighteen pages. |
Final Office Action mailed Jan. 23, 2024, for U.S. Appl. No. 16/850,965, filed Apr. 16, 2020, fifteen pages. |
Japanese Notice of Allowance mailed Dec. 15, 2023, for JP Application No. 2020-571488, with English translation, eight pages. |
Japanese Office Action mailed Jan. 30, 2024, for JP Application No. 2021-551538, with English translation, sixteen pages. |
Notice of Allowance mailed Dec. 15, 2023, for U.S. Appl. No. 17/214,446, filed Mar. 26, 2021, seven pages. |
International Preliminary Report and Written Opinion mailed Apr. 25, 2024, for PCT Application No. PCT/US2022/078063, seven pages. |
International Preliminary Report on Patentability and Written Opinion mailed Apr. 25, 2024, for PCT Application No. PCT/US2022/078073, seven pages. |
International Preliminary Report on Patentability and Written Opinion mailed May 2, 2024, for PCT Application No. PCT/US2022/078298, twelve pages. |
International Search Report and Written Opinion mailed Jan. 11, 2023, for PCT Application No. PCT/US2022/078298, seventeen pages. |
International Search Report and Written Opinion mailed Jan. 25, 2023, for PCT Application No. PCT/US2022/078063, nineteen pages. |
Japanese Office Action mailed May 2, 2024, for JP Application No. 2021-562002, with English translation, sixteen pages. |
Number | Date | Country | |
---|---|---|---|
20230410835 A1 | Dec 2023 | US |
Number | Date | Country | |
---|---|---|---|
63001118 | Mar 2020 | US | |
62884143 | Aug 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17714708 | Apr 2022 | US |
Child | 18459342 | US | |
Parent | 16987267 | Aug 2020 | US |
Child | 17714708 | US |