Dilated convolutions and gating for efficient keyword spotting

Information

  • Patent Grant
  • Patent Number
    12,159,626
  • Date Filed
    Monday, August 28, 2023
  • Date Issued
    Tuesday, December 3, 2024
Abstract
A method for detection of a keyword in a continuous stream of audio signal, by using a dilated convolutional neural network, implemented by one or more computers embedded on a device, the dilated convolutional network comprising a plurality of dilation layers, including an input layer and an output layer, each layer of the plurality of dilation layers comprising gated activation units, and skip-connections to the output layer, the dilated convolutional network being configured to generate an output detection signal when a predetermined keyword is present in the continuous stream of audio signal, the generation of the output detection signal being based on a sequence of successive measurements provided to the input layer, each successive measurement of the sequence being measured on a corresponding frame from a sequence of successive frames extracted from the continuous stream of audio signal, at a plurality of successive time steps.
Description
TECHNICAL FIELD OF THE INVENTION

This invention relates to the field of using neural networks to automatically recognize speech, and, more precisely, to automatically detect pre-defined keywords in a continuous stream of audio signal.


BACKGROUND

Traditional approaches to keyword spotting either require significant memory resources and fail at capturing large patterns with reasonably small models, or require such significant computational resources that they cannot be implemented on a low-resource device.


Therefore, there is a need for an effective on-device keyword spotting method, providing real-time response and high accuracy for good user experience, while limiting memory footprint and computational cost.


SUMMARY OF THE INVENTION

The present invention provides a method for detection of a keyword in a continuous stream of audio signal, by using a dilated convolutional neural network, implemented by one or more computers embedded on a device, the dilated convolutional network comprising a plurality of dilation layers, including an input layer and an output layer, each layer of the plurality of dilation layers comprising gated activation units, and skip-connections to the output layer, the dilated convolutional network being configured to generate an output detection signal when a predetermined keyword is present in the continuous stream of audio signal, the generation of the output detection signal being based on a sequence of successive measurements provided to the input layer, each successive measurement of the sequence being measured on a corresponding frame from a sequence of successive frames extracted from the continuous stream of audio signal, at a plurality of successive time steps.


According to these provisions, the computation and memory resources necessary to implement a dilated convolutional network can be embedded on a low-power, performance-limited device and used for keyword detection applications.


According to an embodiment, the invention comprises one or more of the following features, alone or in combination.


According to an embodiment, the dilated convolutional neural network comprises 24 layers.


According to an embodiment, the successive measurements are acoustic features measured on successive frames extracted from the audio stream every 10 ms, each frame having a 25 ms duration.


According to an embodiment, the acoustic features measured on successive frames are 20 dimensional log-Mel filterbank energies.


According to an embodiment, the dilated convolutional neural network is configured to compute, at a time step, a dilated convolution based on a convolution kernel for each dilation layer, and to put in a cache memory the result of the computation at the time step, so that, at a next time step, the result of the computation is used to compute a new dilated convolution based on a shifted convolution kernel for each dilation layer.


According to these provisions, reusing the result of the computation at one time step to compute the dilated convolution at the next time step reduces the number of floating point operations per second to a level compatible with embedding the computer implemented dilated convolutional neural network on a small device.


According to another aspect, the invention provides a computer implemented method for training a dilated convolutional neural network, the dilated convolutional neural network being implemented by one or more computers embedded on a device, for keyword detection in a continuous stream of audio signal, the method comprising a data set preparation phase followed by a training phase based on the result of the data set preparation phase, the data set preparation phase comprising a labelling step which comprises associating a first label to successive frames which occur inside a predetermined time period centred on a time step at which an end of the keyword occurs, and associating a second label to frames occurring outside the predetermined time period and inside a positive audio sample containing a formulation of the keyword, the positive audio sample comprising a first sequence of frames, the frames of the first sequence of frames occurring at successive time steps in between the beginning of the positive audio sample and the end of the positive audio sample.


According to an embodiment, the invention comprises one or more of the following features, alone or in combination.


According to an embodiment, the labelling step further comprises a step of associating the second label to frames inside a negative audio sample not containing a formulation of the keyword, the negative audio sample comprising a second sequence of frames, the frames of the second sequence of frames occurring at successive time steps in between a beginning time step of the negative audio sample and an ending time step of the negative audio sample.


According to these provisions, it is possible to train a more accurate model, and therefore to obtain more accurate detection results when using the computer implemented dilated convolutional network (DCNN) for keyword detection.


According to an embodiment, the first label is a 1, and the second label is a 0.


According to an embodiment, the end of the keyword is detected using a voice activity detection computer implemented algorithm.


According to an embodiment, a width of the predetermined time period is optimised during a further step of validation based on a set of validation data.


According to an embodiment, during the training phase, the dilated convolutional neural network is configured to learn only from the frames included in the second sequence of frames and from the frames which are included in the first sequence of frames and which are associated to the first label, and not to learn from the frames which are included in the first sequence of frames and which are associated to the second label.


According to these provisions, the efficiency of the method is further improved, allowing even better accuracy in the model, and better accuracy in the detection results when using the computer implemented dilated convolutional network (DCNN) for keyword detection.


According to another aspect, the invention provides a method for detection of a keyword in a continuous stream of audio signal, by using a dilated convolutional neural network, implemented by one or more computers embedded on a device, the dilated convolutional network comprising a plurality of dilation layers, including an input layer and an output layer, each layer of the plurality of dilation layers comprising gated activation units, and skip-connections to the output layer, the dilated convolutional network being configured to generate an output detection signal when a predetermined keyword is present in the continuous stream of audio signal, the generation of the output detection signal being based on a sequence of successive measurements provided to the input layer, each successive measurement of the sequence being measured on a corresponding frame from a sequence of successive frames extracted from the continuous stream of audio signal, at a plurality of successive time steps, wherein the dilated convolutional network is trained according to the computer implemented method for training a dilated convolutional neural network, of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other purposes, features, aspects and advantages of the invention will become apparent from the following detailed description of embodiments, given by way of illustration and not limitation with reference to the accompanying drawings, in which the same references refer to similar elements or to elements having similar functions, and in which:



FIG. 1 schematically represents a view of an embodiment of a dilated convolutional neural network at a given time step;



FIG. 2 schematically represents a view of an embodiment of activation gated units and skip-connections for a dilated convolutional neural network;



FIGS. 3a and 3b illustrate an embodiment of a labelling method to prepare training data sets.





DETAILED DESCRIPTION OF THE INVENTION ACCORDING TO AN EMBODIMENT

An embodiment of a computer implemented method for keyword detection in a continuous stream of audio signal, using a computer implemented dilated convolutional network, will be described in reference to FIGS. 1 and 2.


According to an embodiment illustrated in FIG. 1, a continuous audio-stream is fragmented into a sequence SSM of successive measurements SM, each successive measurement SM resulting from the measurement of one or more acoustic features on a frame extracted from the continuous audio stream. According to an embodiment, the acoustic features are 20 dimensional log-Mel filterbank energies measured on successive frames extracted from the audio stream every 10 ms, each frame having a 25 ms duration.
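
As an illustration only, the following sketch computes such features with librosa; the 16 kHz sample rate, FFT size and the use of librosa are assumptions, not details fixed by the embodiment.

```python
import numpy as np
import librosa


def log_mel_features(audio, sample_rate=16000):
    """Compute 20-dimensional log-Mel filterbank energies on 25 ms frames
    extracted every 10 ms, as in the embodiment described above.

    The sample rate, FFT parameters and log floor are illustrative
    assumptions; only the frame timing and feature dimension come from
    the description.
    """
    frame_length = int(0.025 * sample_rate)   # 25 ms frame duration
    hop_length = int(0.010 * sample_rate)     # one frame every 10 ms
    mel_energies = librosa.feature.melspectrogram(
        y=audio,
        sr=sample_rate,
        n_fft=frame_length,
        hop_length=hop_length,
        win_length=frame_length,
        n_mels=20,                             # 20-dimensional features
    )
    # Log-compress the filterbank energies; each row is one measurement SM.
    return np.log(mel_energies + 1e-6).T       # shape: (num_frames, 20)
```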


As illustrated in FIG. 1, the sequence SSM of successive measurements SM is provided as an input to a computer implemented dilated convolutional neural network DCNN.



FIG. 1 illustrates the configuration at a given time step of the inference process for keyword detection, with a given number of successive measurements SM being provided as input to the dilated convolutional neural network DCNN. At a next time step of the inference process for keyword detection, a new successive measurement SM is introduced in the sequence provided as input, pushing the sequence in the direction opposite to the time direction T represented on FIG. 1, the time direction T being directed towards the future.


According to an embodiment illustrated in FIG. 1, as is well known to one skilled in the art, the computer implemented dilated convolutional network comprises a plurality of dilation layers DL, including an input layer IL and an output layer OL, and it is configured to implement, at each successive time step of the process, a dilated convolution of the sequence SSM of successive measurements SM which is provided, at each given time step of the process, to the input layer IL of the computer implemented dilated convolutional network.
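
Stacking dilated convolutions lets the receptive field grow much faster than the number of layers. The small sketch below illustrates this arithmetic for a stack whose dilation rate doubles at each layer, a common configuration that is assumed here purely for illustration; the patent itself only states that the network comprises a plurality of dilation layers.

```python
def receptive_field(num_layers, kernel_size=3, base_dilation=1):
    """Receptive field (in input frames) of a stack of 1-D dilated convolutions.

    Assumes the dilation rate doubles at every layer (1, 2, 4, 8, ...),
    which is one common choice, not a configuration mandated by the patent.
    """
    field = 1
    for layer in range(num_layers):
        dilation = base_dilation * (2 ** layer)
        field += (kernel_size - 1) * dilation
    return field


# 6 doubling layers with kernel size 3 already cover 127 input frames.
print(receptive_field(num_layers=6))
```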


According to an embodiment illustrated in FIG. 2, as is well known to one skilled in the art, each layer of the plurality of dilation layers DL further comprises gated activation units GAU, and skip-connections SC to the output layer OL.


According to an embodiment, the dilated convolutional network DCNN is configured to generate an output detection signal when a predetermined keyword is present in the continuous stream of audio signal, the generation of the output detection signal being based on the result of the dilated convolution of the sequence SSM of successive measurements SM provided to the input layer IL, the result of the dilated convolution being transformed by operation of the gated activation units GAU and of the skip-connections SC to contribute to the generation of the output detection signal. Skip-connections are introduced to speed up convergence and to address the issue of vanishing gradients posed by the training of models of higher depth. Each layer yields two outputs: one is fed directly to the next layer as usual, while the second one skips it. All skip-connection outputs are then summed into the final output of the network. Without these bypassing strategies, one could not train the deeper architectures required by the keyword detection application.


The gated activation units are a combination of tanh and sigmoid activations. The sigmoid activation acts as a gate for the tanh activation, controlling how much of the tanh filter's output is passed on according to how important that output is.
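
A minimal PyTorch-style sketch of one dilation layer combining a dilated convolution, the tanh/sigmoid gated activation and a skip-connection output is shown below. The channel counts, kernel size and the use of 1x1 projection convolutions for the residual and skip paths are assumptions; the patent only specifies gated activation units and skip-connections to the output layer.

```python
import torch
import torch.nn as nn


class GatedDilationLayer(nn.Module):
    """One dilation layer: dilated convolution, gated activation, skip output.

    Layer widths and projections are illustrative assumptions, not details
    fixed by the description.
    """

    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        # Two dilated convolutions: one for the filter path, one for the gate.
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size,
                                     dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size,
                                   dilation=dilation)
        # 1x1 convolutions for the residual and skip-connection outputs.
        self.residual_proj = nn.Conv1d(channels, channels, 1)
        self.skip_proj = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        # Gated activation: tanh output modulated ("gated") by a sigmoid.
        gated = torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))
        skip = self.skip_proj(gated)            # summed into the output layer
        # Residual path: align the (shorter) convolved sequence with the input.
        residual = self.residual_proj(gated) + x[..., -gated.size(-1):]
        return residual, skip
```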


The computer implemented dilated convolutional network DCNN is configured to run in a streaming fashion during the inference process for keyword detection. When receiving a new input frame at a next time step, the result of the dilated convolution computation at the previous time step is used to compute a new dilated convolution based on a shifted convolution kernel for each dilation layer. This is possible because the convolution kernels of each dilation layer are shifted one time step at a time, or a few time steps at a time, but in any case the “stride”, or the number of time steps the kernel is shifted at a time, is usually smaller than the kernel size, so that two subsequent convolution kernels overlap. This cached implementation reduces the number of Floating Point Operations per Second (FLOPS), so that the level of computing resources required by the inference process for the keyword detection task is compatible with the technical constraints imposed by embedding the computer implemented dilated convolutional network DCNN on a low-power, performance-limited device. Indeed, using a dilated convolutional network architecture for keyword detection implies dealing with a deeper model with a larger number of parameters, so it is important for this specific application to be able to reduce the amount of FLOPS as indicated.
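
Conceptually, the cached streaming computation can be pictured as a rolling buffer per dilation layer: at each new time step only the newest output needs fresh computation, while previously computed values are reused. The sketch below illustrates this idea for a single-channel dilated convolution with a plain ring buffer; the buffer layout, weights and single-channel setting are assumptions made for illustration, not the embedded implementation itself.

```python
import collections

import numpy as np


class StreamingDilatedConv:
    """Streaming 1-D dilated convolution over a single channel.

    Keeps a cache spanning the layer's receptive field so that, at each new
    time step, only one output value has to be computed instead of
    recomputing the whole sequence.
    """

    def __init__(self, kernel, dilation):
        self.kernel = np.asarray(kernel, dtype=np.float64)
        self.dilation = dilation
        # The cache only needs to cover the receptive field of this layer.
        span = (len(kernel) - 1) * dilation + 1
        self.cache = collections.deque([0.0] * span, maxlen=span)

    def step(self, new_frame_value):
        """Consume one new input frame and return one new output value."""
        self.cache.append(float(new_frame_value))
        window = list(self.cache)
        # Dilated taps: every `dilation`-th cached value, newest included.
        taps = window[::self.dilation] if self.dilation > 1 else window
        return float(np.dot(self.kernel, taps))


# Usage: feed acoustic-feature frames one at a time, as in streaming inference.
layer = StreamingDilatedConv(kernel=[0.2, 0.3, 0.5], dilation=2)
for value in [1.0, 2.0, 3.0, 4.0]:
    print(layer.step(value))
```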


Before using the computer implemented dilated convolutional network DCNN in an inference mode for keyword detection, it is necessary to train the dilated convolutional network DCNN so that it builds an internal model adapted to the keyword(s) to be detected during the inference process.


According to an aspect, the invention also relates to a computer implemented method for training a dilated convolutional neural network (DCNN). The method comprises a data set preparation phase, followed by a training phase based on the result of the data set preparation phase, the data set preparation phase comprising the following steps:

    • collect two sets of training data comprising respectively two types of audio samples of varying duration: a first type of audio samples, that will be denoted positive audio samples, the positive audio samples corresponding to the utterance by someone of the predetermined keyword(s); for example, an audio sample corresponding to someone saying the keyword to be detected, “Hey SNIPS” for example, as illustrated in FIG. 3a, with silence at the beginning and the end, will be denoted a “positive sample”; and a second type of audio samples, that will be denoted negative audio samples, the negative audio samples corresponding to the utterance by someone of a random sentence, “Hello world” for example, as illustrated in FIG. 3b.
    • to be processed by the computer implemented dilated convolutional network, the audio samples are respectively divided into sequences of successive frames; according to an embodiment, the frames are of 25 ms duration and are extracted every 10 ms, so that successive frames overlap. Each successive frame corresponds to a portion of the audio sample occurring respectively at one of a sequence of successive time steps.
    • in the sequence of successive frames corresponding to positive audio samples, automatically detect a frame, for example by using a voice activity detection algorithm, the detected frame corresponding to an end EK of the keyword, and associate a first label, 1 for example as illustrated on FIG. 3a, to all successive frames which occur, in the sequence of successive frames, inside a predetermined time period starting before and ending after the occurrence time step of the detected frame, and associate a second label, 0 for example, to each other frame of the sequence of successive frames corresponding to positive audio samples which occurs outside the predetermined time period.
    • associate the second label, 0 for example, to each frame of the sequences of successive frames corresponding to negative audio samples, as illustrated on FIG. 3b.


According to these provisions, instead of using an alignment algorithm to find the keyword window that is aligned with the spoken keyword, labelling 1, for example, the frames inside the window and 0 the frames outside the window, the method of the invention labels 1 only the frames close to the end of the keyword. The end of the keyword can easily be detected, for example by a voice activity detection algorithm. Thus, it is possible to train a more accurate model, and therefore to obtain more accurate detection results when using the computer implemented dilated convolutional network DCNN for keyword detection.
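
As an illustration of this labelling scheme, the sketch below assigns the first label (1) to frames inside a window of configurable width centred on the detected end-of-keyword frame EK, and the second label (0) everywhere else. The window width, frame indexing and function names are assumptions introduced here for illustration.

```python
def label_positive_sample(num_frames, end_of_keyword_frame, window_width):
    """Label frames of a positive audio sample.

    Frames within a window of `window_width` frames centred on the detected
    end-of-keyword frame EK receive label 1; all other frames receive 0.
    The window width is a hyperparameter to be tuned on validation data.
    """
    half = window_width // 2
    labels = []
    for t in range(num_frames):
        inside_window = (end_of_keyword_frame - half) <= t <= (end_of_keyword_frame + half)
        labels.append(1 if inside_window else 0)
    return labels


def label_negative_sample(num_frames):
    """Every frame of a negative audio sample receives the second label, 0."""
    return [0] * num_frames


# Example: a positive sample of 120 frames whose keyword ends at frame 80.
print(label_positive_sample(num_frames=120, end_of_keyword_frame=80, window_width=10))
```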


In the traditional approach, the model has a tendency to trigger as soon as the keyword starts, even if the sample contains only a fraction of the keyword. One advantage of our approach is that the network will trigger near the end EK of the keyword, once it has seen enough context.


According to an embodiment of the method, the predetermined time period is centered on the frame corresponding to the end EK of the keyword, the width of the predetermined time period being optimised during a further step of validation tests based on a set of validation data.


According to an embodiment of the method, during the training of the dilated convolutional neural network DCNN, the dilated convolutional neural network DCNN is configured to learn only from the successive frames of the negative audio samples, and from the successive frames of the positive audio samples which are associated to the first label, 1 for example, and not to learn from successive frames of the positive audio samples which are associated to the second label, 0 for example.
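
One way to implement this selective learning is to mask the contribution of the 0-labelled frames of positive samples in the training loss. The sketch below shows a per-frame binary cross-entropy with such a mask; the choice of loss function and the masking mechanism are assumptions, since the description only states which frames the network learns from.

```python
import torch
import torch.nn.functional as F


def masked_frame_loss(logits, labels, is_positive_sample):
    """Per-frame binary cross-entropy that ignores 0-labelled frames of
    positive samples.

    logits, labels, is_positive_sample: tensors of shape (batch, num_frames).
    """
    # Learn from all frames of negative samples and from the 1-labelled
    # frames of positive samples; ignore 0-labelled frames of positive samples.
    mask = ((~is_positive_sample.bool()) | (labels == 1)).float()
    per_frame = F.binary_cross_entropy_with_logits(
        logits, labels.float(), reduction="none")
    # Average only over the frames that are allowed to contribute.
    return (per_frame * mask).sum() / mask.sum().clamp(min=1.0)
```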


According to these provisions, the efficiency of the method is further improved, allowing even better accuracy in the model, and better accuracy in the detection results when using the computer implemented dilated convolutional network DCNN for keyword detection.

Claims
  • 1. A computing device comprising: at least one processor; at least one non-transitory computer-readable medium; and program instructions stored on the at least one non-transitory computer-readable medium that are executable by the at least one processor such that the computing device is configured to: extract, at a plurality of successive time steps, a sequence of successive frames from a stream of audio signal; measure a sequence of successive measurements, each successive measurement of the sequence measured on a corresponding frame from the sequence of successive frames; provide the sequence of successive measurements to an input layer of a dilated convolutional neural network (DCNN), wherein the DCNN comprises a plurality of dilation layers including the input layer and an output layer; based on the sequence of successive measurements provided to the input layer, detect a presence of a predetermined keyword in the stream of audio signal; and based on detecting the presence of the predetermined keyword, generate an output detection signal indicating the presence of the predetermined keyword.
  • 2. The computing device of claim 1, further comprising program instructions that are executable by the at least one processor such that the computing device is configured to: compute, via the DCNN at a time step, a dilated convolution based on a respective convolution kernel for each dilation layer of the plurality of dilation layers; store a result of the computation at the time step; and at a next time step, use the result of the computation to compute, via the DCNN, a new dilated convolution based on a respective shifted convolution kernel for each dilation layer of the plurality of dilation layers.
  • 3. The computing device of claim 1, wherein each frame in the sequence of successive frames has a 25 millisecond duration.
  • 4. The computing device of claim 1, wherein the program instructions that are executable by the at least one processor such that the computing device is configured to measure the sequence of successive measurements comprise program instructions that are executable by the at least one processor such that the computing device is configured to measure acoustic features from the sequence of successive frames extracted from the stream of audio signal.
  • 5. The computing device of claim 4, wherein the program instructions that are executable by the at least one processor such that the computing device is configured to measure the acoustic features from the sequence of successive frames extracted from the stream of audio signal comprise program instructions that are executable by the at least one processor such that the computing device is configured to measure the acoustic features from the sequence of successive frames every 10 milliseconds.
  • 6. The computing device of claim 4, wherein the acoustic features measured from the sequence of successive frames are 20 dimensional log-Mel filterbank energies.
  • 7. The computing device of claim 1, wherein the DCNN comprises 24 dilation layers.
  • 8. A non-transitory computer-readable medium, wherein the non-transitory computer-readable medium is provisioned with program instructions that, when executed by at least one processor, cause a computing device to: extract, at a plurality of successive time steps, a sequence of successive frames from a stream of audio signal; measure a sequence of successive measurements, each successive measurement of the sequence measured on a corresponding frame from the sequence of successive frames; provide the sequence of successive measurements to an input layer of a dilated convolutional neural network (DCNN), wherein the DCNN comprises a plurality of dilation layers including the input layer and an output layer; based on the sequence of successive measurements provided to the input layer, detect a presence of a predetermined keyword in the stream of audio signal; and based on detecting the presence of the predetermined keyword, generate an output detection signal indicating the presence of the predetermined keyword.
  • 9. The non-transitory computer-readable medium of claim 8, wherein the non-transitory computer-readable medium is also provisioned with program instructions that, when executed by at least one processor, cause the computing device to: compute, via the DCNN at a time step, a dilated convolution based on a respective convolution kernel for each dilation layer of the plurality of dilation layers; store a result of the computation at the time step; and at a next time step, use the result of the computation to compute, via the DCNN, a new dilated convolution based on a respective shifted convolution kernel for each dilation layer of the plurality of dilation layers.
  • 10. The non-transitory computer-readable medium of claim 8, wherein each frame in the sequence of successive frames has a 25 millisecond duration.
  • 11. The non-transitory computer-readable medium of claim 8, wherein the program instructions that, when executed by at least one processor, cause the computing device to measure the sequence of successive measurements comprise program instructions that, when executed by at least one processor, cause the computing device to measure acoustic features from the sequence of successive frames extracted from the stream of audio signal.
  • 12. The non-transitory computer-readable medium of claim 11, wherein the program instructions that, when executed by at least one processor, cause the computing device to measure the acoustic features from the sequence of successive frames extracted from the stream of audio signal comprise program instructions that, when executed by at least one processor, cause the computing device to measure the acoustic features from the sequence of successive frames every 10 milliseconds.
  • 13. The non-transitory computer-readable medium of claim 11, wherein the acoustic features measured from the sequence of successive frames are 20 dimensional log-Mel filterbank energies.
  • 14. The non-transitory computer-readable medium of claim 8, wherein the DCNN comprises 24 dilation layers.
  • 15. A method implemented by a computing device, the method comprising: extracting, at a plurality of successive time steps, a sequence of successive frames from a stream of audio signal; measuring a sequence of successive measurements, each successive measurement of the sequence measured on a corresponding frame from the sequence of successive frames; providing the sequence of successive measurements to an input layer of a dilated convolutional neural network (DCNN), wherein the DCNN comprises a plurality of dilation layers including the input layer and an output layer; based on the sequence of successive measurements provided to the input layer, detecting a presence of a predetermined keyword in the stream of audio signal content; and based on detecting the presence of the predetermined keyword, generating an output detection signal indicating the presence of the predetermined keyword.
  • 16. The method of claim 15, further comprising: computing, via the DCNN at a time step, a dilated convolution based on a respective convolution kernel for each dilation layer of the plurality of dilation layers; storing a result of the computation at the time step; and at a next time step, using the result of the computation to compute, via the DCNN, a new dilated convolution based on a respective shifted convolution kernel for each dilation layer of the plurality of dilation layers.
  • 17. The method of claim 16, wherein each frame in the sequence of successive frames has a 25 millisecond duration.
  • 18. The method of claim 15, wherein measuring the sequence of successive measurements comprises measuring acoustic features from the sequence of successive frames extracted from the stream of audio signal.
  • 19. The method of claim 18, wherein measuring the acoustic features from the sequence of successive frames extracted from the stream of audio signal comprises measuring the acoustic features from the sequence of successive frames every 10 milliseconds.
  • 20. The method of claim 18, wherein the acoustic features measured from the sequence of successive frames are 20 dimensional log-Mel filterbank energies.
Priority Claims (1)
Number Date Country Kind
18306501 Nov 2018 EP regional
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority as a continuation under 35 U.S.C. § 120 to U.S. application Ser. No. 17/549,253, filed on Dec. 13, 2021, which is a continuation of U.S. application Ser. No. 16/685,135, filed on Nov. 15, 2019, which claims priority under 35 U.S.C. § 119 to European Patent Application No. 18306501.0, filed on Nov. 15, 2018, the contents of each of which are incorporated herein by reference in their entireties.

Google LLC v. Sonos, Inc., International Trade Commission Case No. 337-TA-1330, Respondent Sonos, Inc.'s Motion in Limine No. 4. Motion to Exclude Untimely Validity Arguments Regarding Claim 11 of U.S. Pat. No. 11,024,311; dated Jun. 13, 2023, 34 pages.
Google LLC v. Sonos, Inc., International Trade Commission Case No. 337-TA-1330, Respondent Sonos, Inc.'s Response to Google's Motion in Limine No. 3 Preclude Sonos from Presenting Evidence or Argument that Claim 3 of the '748 Patent is Indefinite for Lack of Antecedent Basis; dated Jun. 12, 2023, 26 pages.
Helwani et al. Source-domain adaptive filtering for MIMO systems with application to acoustic echo cancellation. In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Jun. 28, 2010, 4 pages. [retrieved on Feb. 23, 2023], Retrieved from the Internet: URL: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C14&q=SOURCE-DOMAIN+ADAPTIVE+FILTERING+FOR+MIMO+SYSTEMS+WITH+APPLICATION+TO+ACOUSTIC+ECHO+CANCELLATION&btnG=.
International Bureau, International Preliminary Report on Patentability, mailed on Jul. 21, 2022, issued in connection with International Application No. PCT/US2021/070007, filed on Jan. 6, 2021, 8 pages.
International Bureau, International Search Report and Written Opinion mailed on Mar. 20, 2023, issued in connection with International Application No. PCT/US2022/045399, filed on Sep. 30, 2022, 25 pages.
International Searching Authority, Invitation to Pay Additional Fees on Jan. 27, 2023, issued in connection with International Application No. PCT/US2022/045399, filed on Sep. 30, 2022, 19 pages.
Japanese Patent Office, Decision of Refusal and Translation mailed on Oct. 4, 2022, issued in connection with Japanese Patent Application No. 2021-535871, 6 pages.
Japanese Patent Office, Decision of Refusal and Translation mailed on May 23, 2023, issued in connection with Japanese Patent Application No. 2021-163622, 13 pages.
Japanese Patent Office, Decision of Refusal and Translation mailed on Jul. 26, 2022, issued in connection with Japanese Patent Application No. 2020-513852, 10 pages.
Japanese Patent Office, Non-Final Office Action mailed on Apr. 4, 2023, issued in connection with Japanese Patent Application No. 2021-573944, 5 pages.
Japanese Patent Office, Notice of Reasons for Refusal and Translation mailed on Sep. 13, 2022, issued in connection with Japanese Patent Application No. 2021-163622, 12 pages.
Japanese Patent Office, Notice of Reasons for Refusal and Translation mailed on Aug. 8, 2023, issued in connection with Japanese Patent Application No. 2022-101346, 6 pages.
Japanese Patent Office, Office Action and Translation mailed on Nov. 15, 2022, issued in connection with Japanese Patent Application No. 2021-146144, 9 pages.
Japanese Patent Office, Office Action mailed on Nov. 29, 2022, issued in connection with Japanese Patent Application No. 2021-181224, 6 pages.
Katsamanis et al. Robust far-field spoken command recognition for home automation combining adaptation and multichannel processing. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing—Proceedings, May 2014, pp. 5547-5551.
Korean Patent Office, Korean Examination Report and Translation mailed on Apr. 10, 2023, issued in connection with Korean Application No. 10-2022-7024007, 8 pages.
Korean Patent Office, Korean Examination Report and Translation mailed on Oct. 13, 2022, issued in connection with Korean Application No. 10-2021-7030939, 4 pages.
Korean Patent Office, Korean Examination Report and Translation mailed on Jul. 19, 2023, issued in connection with Korean Application No. 10-2022-7024007, 9 pages.
Korean Patent Office, Korean Examination Report and Translation mailed on Jul. 26, 2022, issued in connection with Korean Application No. 10-2022-7016656, 17 pages.
Korean Patent Office, Korean Examination Report and Translation mailed on Mar. 31, 2023, issued in connection with Korean Application No. 10-2022-7016656, 7 pages.
Korean Patent Office, Korean Examination Report and Translation mailed on Oct. 31, 2021, issued in connection with Korean Application No. 10-2022-7024007, 10 pages.
Korean Patent Office, Office Action and Translation mailed on Feb. 27, 2023, issued in connection with Korean Application No. 10-2022-7021879, 5 pages.
Mathias Wolfel. Channel Selection by Class Separability Measures for Automatic Transcriptions on Distant Microphones, Interspeech 2007 10.21437/Interspeech.2007-255, 4 pages.
Non-Final Office Action mailed on Feb. 2, 2023, issued in connection with U.S. Appl. No. 17/305,698, filed Jul. 13, 2021, 16 pages.
Non-Final Office Action mailed on Dec. 5, 2022, issued in connection with U.S. Appl. No. 17/662,302, filed May 6, 2022, 12 pages.
Non-Final Office Action mailed on Oct. 5, 2022, issued in connection with U.S. Appl. No. 17/449,926, filed Oct. 4, 2021, 11 pages.
Non-Final Office Action mailed on Aug. 10, 2023, issued in connection with U.S. Appl. No. 18/070,024, filed Nov. 28, 2022, 4 pages.
Non-Final Office Action mailed on Apr. 12, 2023, issued in connection with U.S. Appl. No. 17/878,649, filed Aug. 1, 2022, 16 pages.
Non-Final Office Action mailed on Nov. 14, 2022, issued in connection with U.S. Appl. No. 17/077,974, filed Oct. 22, 2020, 6 pages.
Non-Final Office Action mailed on Sep. 14, 2022, issued in connection with U.S. Appl. No. 17/446,690, filed Sep. 1, 2021, 10 pages.
Non-Final Office Action mailed on Sep. 14, 2023, issued in connection with U.S. Appl. No. 17/528,843, filed Nov. 17, 2021, 20 pages.
Non-Final Office Action mailed on Aug. 15, 2022, issued in connection with U.S. Appl. No. 17/448,015, filed Sep. 17, 2021, 12 pages.
Non-Final Office Action mailed on Dec. 15, 2022, issued in connection with U.S. Appl. No. 17/549,253, filed Dec. 13, 2021, 10 pages.
Non-Final Office Action mailed on Feb. 15, 2023, issued in connection with U.S. Appl. No. 17/453,632, filed Nov. 4, 2021, 12 pages.
Non-Final Office Action mailed on Sep. 15, 2022, issued in connection with U.S. Appl. No. 17/247,507, filed Dec. 14, 2020, 9 pages.
Non-Final Office Action mailed on Sep. 15, 2022, issued in connection with U.S. Appl. No. 17/327,911, filed May 24, 2021, 44 pages.
Non-Final Office Action mailed on Feb. 16, 2023, issued in connection with U.S. Appl. No. 17/305,920, filed Jul. 16, 2021, 12 pages.
Non-Final Office Action mailed on Jul. 18, 2023, issued in connection with U.S. Appl. No. 18/066,093, filed Dec. 14, 2022, 12 pages.
Non-Final Office Action mailed on Oct. 18, 2022, issued in connection with U.S. Appl. No. 16/949,973, filed Nov. 23, 2020, 31 pages.
Non-Final Office Action mailed on Sep. 19, 2022, issued in connection with U.S. Appl. No. 17/385,542, filed Jul. 26, 2021, 9 pages.
Non-Final Office Action mailed on Apr. 20, 2023, issued in connection with U.S. Appl. No. 18/061,570, filed Dec. 5, 2022, 12 pages.
Related Publications (1)
Number: US 2024/0062747 A1; Date: Feb. 2024; Country: US

Continuations (2)
Parent: U.S. Appl. No. 17/549,253, filed Dec. 2021 (US); Child: U.S. Appl. No. 18/456,941 (US)
Parent: U.S. Appl. No. 16/685,135, filed Nov. 2019 (US); Child: U.S. Appl. No. 17/549,253 (US)