The present disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.
Options for accessing and listening to digital audio in an out-loud setting were limited until 2002, when SONOS, Inc. began development of a new type of playback system. Sonos then filed one of its first patent applications in 2003, entitled “Method for Synchronizing Audio Playback between Multiple Networked Devices,” and began offering its first media playback systems for sale in 2005. The Sonos Wireless Home Sound System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a controller (e.g., smartphone, tablet, computer, voice input device), one can play what she wants in any room having a networked playback device. Media content (e.g., songs, podcasts, video sound) can be streamed to playback devices such that each room with a playback device can play back different media content. In addition, rooms can be grouped together for synchronous playback of the same media content, such that the same media content can be heard in all rooms simultaneously.
Features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings, as listed below. A person skilled in the relevant art will understand that the features shown in the drawings are for purposes of illustration, and variations, including different and/or additional features and arrangements thereof, are possible.
The drawings are for the purpose of illustrating example embodiments, but those of ordinary skill in the art will understand that the technology disclosed herein is not limited to the arrangements and/or instrumentality shown in the drawings.
I. Overview
Voice control can be beneficial for a “smart” home having smart appliances and related devices, such as wireless illumination devices, home-automation devices (e.g., thermostats, door locks, etc.), and audio playback devices. In some implementations, networked microphone devices (which may be a component of a playback device) may be used to control smart home devices. A network microphone device will typically include a microphone for receiving voice inputs. The network microphone device can forward voice inputs to a voice assistant service (VAS), such as AMAZON's ALEXA®, APPLE's SIRI®, MICROSOFT's CORTANA®, GOOGLE's Assistant, etc. A VAS may be a remote service implemented by cloud servers to process voice inputs. A VAS may process a voice input to determine an intent of the voice input and return a corresponding response. Based on the response, the network microphone device may cause one or more smart devices to perform an action. For example, the network microphone device may instruct an illumination device to turn on/off based on the response from the VAS to a voice instruction.
A voice input detected by a network microphone device will typically include an activation word followed by an utterance containing a user request. The activation word is typically a predetermined word or phrase used to “wake up” and invoke the VAS for interpreting the intent of the voice input. For instance, in querying AMAZON's ALEXA, a user might speak the activation word “Alexa.” Other examples include “Ok, Google” for invoking GOOGLE's Assistant, “Hey, Siri” for invoking APPLE's SIRI, and “Hey, Sonos” for a VAS offered by SONOS. In various embodiments, an activation word may also be referred to as, e.g., a wake word, trigger word, or wakeup word or phrase, and may take the form of any suitable word; combination of words, such as a phrase; and/or other audio cue indicating that the network microphone device and/or an associated VAS is to invoke an action.
It can be difficult to manage the associations between various playback devices and two or more corresponding VASes. For example, although a user may wish to utilize multiple VASes within her home, a response received from one VAS may interrupt a response or other content received from a second VAS. Such interruptions can be synchronous, for example when a response from a second VAS interrupts a response from a first VAS. Additionally, such interruptions can be asynchronous, for example when a response from a second VAS interrupts a pre-scheduled event (e.g., an alarm) from the first VAS.
The systems and methods detailed herein address the above-mentioned challenges of managing associations between one or more playback devices and two or more VASes. In particular, systems and methods are provided for managing the communications and output between a playback device and two or more VASes to enhance the user experience. Although several examples are provided below with respect to managing interactions with two VASes, in various embodiments there may be additional VASes (e.g., three, four, five, six, or more VASes).
As described in more detail below, in some instances a playback device can manage multiple VASes by arbitrating playback of content received from different VASes. For example, a playback device can detect an activation word in audio input, and then transmit a voice utterance of the audio input to a first VAS. The first VAS may then respond with content (e.g., a text-to-speech response) to be played back via the playback device, after which the playback device may then play back the content. At any point in this process, the playback device may concurrently receive second content from a second VAS, for example a pre-scheduled alarm, a user broadcast, a text-to-speech response, or any other content. In response to receiving this second content, the playback device can dynamically determine how to handle playback. As one option, the playback device may suppress the second content from the second VAS to avoid unduly interrupting the response played back from the first VAS. Such suppression can take the form of delaying playback of the second content or canceling playback of the second content. Alternatively, the playback device may allow the second content to interrupt the first content, for example by suppressing playback of the first content while allowing the second content to be played back. In some embodiments, the playback device determines which content to play and which to suppress based on the characteristics of the respective content—for example allowing a scheduled alarm from a second VAS to interrupt a podcast from a first VAS, but suppressing a user broadcast from a second VAS during output of a text-to-speech response from a first VAS.
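For purposes of illustration only, the following Python sketch models the arbitration just described. It is a non-limiting example: the names ContentType, Decision, and arbitrate are hypothetical and do not correspond to any actual Sonos or VAS interface, and the rules encoded mirror only the two examples given above.

```python
from enum import Enum, auto

class ContentType(Enum):
    TTS = auto()        # text-to-speech response
    ALARM = auto()      # pre-scheduled alarm or timer
    BROADCAST = auto()  # user broadcast
    MEDIA = auto()      # podcast, streaming music, etc.

class Decision(Enum):
    PLAY_FIRST_SUPPRESS_SECOND = auto()  # delay or cancel second content
    PLAY_SECOND_SUPPRESS_FIRST = auto()  # second content interrupts first

def arbitrate(first: ContentType, second: ContentType) -> Decision:
    """Decide how concurrently received content should be handled."""
    # A scheduled alarm from a second VAS may interrupt a podcast
    # from a first VAS...
    if second is ContentType.ALARM and first is ContentType.MEDIA:
        return Decision.PLAY_SECOND_SUPPRESS_FIRST
    # ...but a user broadcast should not cut off an in-progress
    # text-to-speech response.
    if second is ContentType.BROADCAST and first is ContentType.TTS:
        return Decision.PLAY_FIRST_SUPPRESS_SECOND
    # Default: avoid unduly interrupting the content already playing.
    return Decision.PLAY_FIRST_SUPPRESS_SECOND
```

A fuller mapping across all four content categories is sketched below in connection with the arbitration examples.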
As described in more detail below, in some instances a playback device can manage multiple VASes by arbitrating activation-word detection associated with different VASes. For example, the playback device may selectively disable activation-word detection for a second VAS while a user is actively engaging with a first VAS. This reduces the risk of the second VAS erroneously interrupting the user's dialogue with the first VAS upon detecting its own activation word. This also preserves user privacy by eliminating the possibility of a user's voice input intended for one VAS being transmitted to a different VAS. Once the user has concluded her dialogue session with the first VAS, the playback device may re-enable activation-word detection for the second VAS. These and other rules allow playback devices to manage playback of content from multiple different VASes without compromising the user experience.
While some examples described herein may refer to functions performed by given actors such as “users,” “listeners,” and/or other entities, it should be understood that this is for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves.
In the Figures, identical reference numbers identify generally similar, and/or identical, elements. To facilitate the discussion of any particular element, the most significant digit or digits of a reference number refers to the Figure in which that element is first introduced. For example, element 110a is first introduced and discussed with reference to
II. Suitable Operating Environment
As used herein the term “playback device” can generally refer to a network device configured to receive, process, and output data of a media playback system. For example, a playback device can be a network device that receives and processes audio content. In some embodiments, a playback device includes one or more transducers or speakers powered by one or more amplifiers. In other embodiments, however, a playback device includes one of (or neither of) the speaker and the amplifier. For instance, a playback device can comprise one or more amplifiers configured to drive one or more speakers external to the playback device via a corresponding wire or cable.
Moreover, as used herein the term NMD (i.e., a “network microphone device”) can generally refer to a network device that is configured for audio detection. In some embodiments, an NMD is a stand-alone device configured primarily for audio detection. In other embodiments, an NMD is incorporated into a playback device (or vice versa).
The term “control device” can generally refer to a network device configured to perform functions relevant to facilitating user access, control, and/or configuration of the media playback system 100.
Each of the playback devices 110 is configured to receive audio signals or data from one or more media sources (e.g., one or more remote servers, one or more local devices) and play back the received audio signals or data as sound. The one or more NMDs 120 are configured to receive spoken word commands, and the one or more control devices 130 are configured to receive user input. In response to the received spoken word commands and/or user input, the media playback system 100 can play back audio via one or more of the playback devices 110. In certain embodiments, the playback devices 110 are configured to commence playback of media content in response to a trigger. For instance, one or more of the playback devices 110 can be configured to play back a morning playlist upon detection of an associated trigger condition (e.g., presence of a user in a kitchen, detection of a coffee machine operation). In some embodiments, for example, the media playback system 100 is configured to play back audio from a first playback device (e.g., the playback device 110a) in synchrony with a second playback device (e.g., the playback device 110b). Interactions between the playback devices 110, NMDs 120, and/or control devices 130 of the media playback system 100 configured in accordance with the various embodiments of the disclosure are described in greater detail below with respect to
In the illustrated embodiment of
The media playback system 100 can comprise one or more playback zones, some of which may correspond to the rooms in the environment 101. The media playback system 100 can be established with one or more playback zones, after which additional zones may be added or removed to form, for example, the configuration shown in
In the illustrated embodiment of
In some aspects, one or more of the playback zones in the environment 101 may each be playing different audio content. For instance, a user may be grilling on the patio 101i and listening to hip hop music being played by the playback device 110c while another user is preparing food in the kitchen 101h and listening to classical music played by the playback device 110b. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office 101e listening to the playback device 110f playing back the same hip hop music being played back by playback device 110c on the patio 101i. In some aspects, the playback devices 110c and 110f play back the hip hop music in synchrony such that the user perceives that the audio content is being played seamlessly (or at least substantially seamlessly) while moving between different playback zones. Additional details regarding audio playback synchronization among playback devices and/or zones can be found, for example, in U.S. Pat. No. 8,234,395 entitled, “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices,” which is incorporated herein by reference in its entirety.
a. Suitable Media Playback System
In addition to the playback, network microphone, and controller devices 110, 120, and 130, the home environment 101 may include additional and/or other computing devices, including local network devices, such as one or more smart illumination devices 108 (
As further shown in
In some implementations, the various playback devices, NMDs, and/or controller devices 110, 120, 130 may be communicatively coupled to remote computing devices associated with one or more VASes and at least one remote computing device associated with a media content service (“MCS”). For instance, in the illustrated example of
The remote computing devices 106 further include remote computing devices configured to perform certain operations, such as remotely facilitating media playback functions, managing device and system status information, directing communications between the devices of the MPS 100 and one or multiple VASes and/or MCSes, among other operations. In one example, the additional remote computing devices provide cloud servers for one or more SONOS Wireless HiFi Systems.
In various implementations, one or more of the playback devices 110 may take the form of or include an on-board (e.g., integrated) network microphone device. For example, the playback devices 110k, 110h, 110c, 110e, and 110g include or are otherwise equipped with corresponding NMDs 120e-i, respectively. A playback device that includes or is equipped with an NMD may be referred to herein interchangeably as a playback device or an NMD unless indicated otherwise in the description. In some cases, one or more of the NMDs 120 may be a stand-alone device. For example, the NMDs 120a and 120b may be stand-alone devices. A stand-alone NMD may omit components and/or functionality that is typically included in a playback device, such as a speaker or related electronics. For instance, in such cases, a stand-alone NMD may not produce audio output or may produce limited audio output (e.g., relatively low-quality audio output).
The various playback and network microphone devices 110 and 120 of the MPS 100 may each be associated with a unique name, which may be assigned to the respective devices by a user, such as during setup of one or more of these devices. For instance, as shown in the illustrated example of
As discussed above, an NMD may detect and process sound from its environment, such as sound that includes background noise mixed with speech spoken by a person in the NMD's vicinity. For example, as sounds are detected by the NMD in the environment, the NMD may process the detected sound to determine if the sound includes speech that contains voice input intended for the NMD and ultimately a particular VAS. For example, the NMD may identify whether speech includes a wake word associated with a particular VAS.
In the illustrated example of
Upon receiving the stream of sound data, the first VAS 190 determines if there is voice input in the streamed data from the NMD, and if so the first VAS 190 will also determine an underlying intent in the voice input. The first VAS 190 may next transmit a response back to the MPS 100, which can include transmitting the response directly to the NMD that caused the wake-word event. The response is typically based on the intent that the first VAS 190 determined was present in the voice input. As an example, in response to the first VAS 190 receiving a voice input with an utterance to “Play Hey Jude by The Beatles,” the first VAS 190 may determine that the underlying intent of the voice input is to initiate playback and further determine that the intent of the voice input is to play the particular song “Hey Jude.” After these determinations, the first VAS 190 may transmit a command to a particular MCS 192 to retrieve content (i.e., the song “Hey Jude”), and that MCS 192, in turn, provides (e.g., streams) this content directly to the MPS 100 or indirectly via the first VAS 190. In some implementations, the first VAS 190 may transmit to the MPS 100 a command that causes the MPS 100 itself to retrieve the content from the MCS 192. The second VAS 191 may operate similarly to the first VAS 190 when receiving a stream of sound data.
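For illustration only, a response of the kind described might carry a payload shaped like the following. This is a purely hypothetical structure; actual VAS response formats are proprietary and are not specified by this disclosure.

```python
# Hypothetical payload for the "Play Hey Jude by The Beatles" example.
response = {
    "intent": "initiate_playback",
    "media": {"title": "Hey Jude", "artist": "The Beatles"},
    # Either the VAS commands the MCS directly, or (as in some
    # implementations described above) the MPS retrieves the
    # content from the MCS itself.
    "action": {"type": "retrieve_from_mcs", "mcs": "MCS 192"},
}
```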
In certain implementations, NMDs may facilitate arbitration amongst one another when voice input is identified in speech detected by two or more NMDs located within proximity of one another. For example, the NMD-equipped Bookcase playback device 110e in the environment 101 (
In certain implementations, an NMD may be assigned to, or otherwise associated with, a designated or default playback device that may not include an NMD. For example, the NMD 120a in the Dining Room 101g (
Further aspects relating to the different components of the example MPS 100 and how the different components may interact to provide a user with a media experience may be found in the following sections. While discussions herein may generally refer to the example MPS 100, technologies described herein are not limited to applications within, among other things, the home environment described above. For instance, the technologies described herein may be useful in other home environment configurations comprising more or fewer of any of the playback, network microphone, and/or controller devices 110, 120, 130. For example, the technologies herein may be utilized within an environment having a single playback device 110 and/or a single NMD 120. In some examples of such cases, the LAN 111 (
b. Suitable Playback Devices
The playback device 110a, for example, can receive media content (e.g., audio content comprising music and/or other sounds) from a local audio source 105 via the input/output 111 (e.g., a cable, a wire, a PAN, a Bluetooth connection, an ad hoc wired or wireless communication network, and/or another suitable communication link). The local audio source 105 can comprise, for example, a mobile device (e.g., a smartphone, a tablet, a laptop computer) or another suitable audio component (e.g., a television, a desktop computer, an amplifier, a phonograph, a Blu-ray player, a memory storing digital media files). In some aspects, the local audio source 105 includes local music libraries on a smartphone, a computer, a network-attached storage (NAS), and/or another suitable device configured to store media files. In certain embodiments, one or more of the playback devices 110, NMDs 120, and/or control devices 130 comprise the local audio source 105. In other embodiments, however, the media playback system omits the local audio source 105 altogether. In some embodiments, the playback device 110a does not include an input/output 111 and receives all audio content via the network 104.
The playback device 110a further comprises electronics 112, a user interface 113 (e.g., one or more buttons, knobs, dials, touch-sensitive surfaces, displays, touchscreens), and one or more transducers 114 (referred to hereinafter as “the transducers 114”). The electronics 112 is configured to receive audio from an audio source (e.g., the local audio source 105) via the input/output 111, one or more of the computing devices 106a-c via the network 104 (
In the illustrated embodiment of
The processors 112a can comprise clock-driven computing component(s) configured to process data, and the memory 112b can comprise a computer-readable medium (e.g., a tangible, non-transitory computer-readable medium, data storage loaded with one or more of the software components 112c) configured to store instructions for performing various operations and/or functions. The processors 112a are configured to execute the instructions stored on the memory 112b to perform one or more of the operations. The operations can include, for example, causing the playback device 110a to retrieve audio data from an audio source (e.g., one or more of the computing devices 106a-c (
The processors 112a can be further configured to perform operations causing the playback device 110a to synchronize playback of audio content with another of the one or more playback devices 110. As those of ordinary skill in the art will appreciate, during synchronous playback of audio content on a plurality of playback devices, a listener will preferably be unable to perceive time-delay differences between playback of the audio content by the playback device 110a and the other one or more other playback devices 110. Additional details regarding audio playback synchronization among playback devices can be found, for example, in U.S. Pat. No. 8,234,395, which was incorporated by reference above.
In some embodiments, the memory 112b is further configured to store data associated with the playback device 110a, such as one or more zones and/or zone groups of which the playback device 110a is a member, audio sources accessible to the playback device 110a, and/or a playback queue that the playback device 110a (and/or another of the one or more playback devices) can be associated with. The stored data can comprise one or more state variables that are periodically updated and used to describe a state of the playback device 110a. The memory 112b can also include data associated with a state of one or more of the other devices (e.g., the playback devices 110, NMDs 120, control devices 130) of the media playback system 100. In some aspects, for example, the state data is shared during predetermined intervals of time (e.g., every 5 seconds, every 10 seconds, every 60 seconds) among at least a portion of the devices of the media playback system 100, so that one or more of the devices have the most recent data associated with the media playback system 100.
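As a minimal sketch of the state-sharing behavior described above (assuming a simple push model and a peer object with a send() method, both hypothetical):

```python
import json
import time

# Hypothetical shape of the stored state variables described above.
state = {
    "device_id": "110a",
    "zone": "Master Bathroom",
    "zone_group": None,
    "audio_sources": ["line-in", "network"],
    "playback_queue_id": "q-110a",
}

def share_state(peers):
    """Push this device's current state to the other devices of the
    media playback system; called at a predetermined interval (e.g.,
    every 5, 10, or 60 seconds) so peers hold the most recent data."""
    payload = json.dumps({**state, "updated_at": time.time()})
    for peer in peers:
        peer.send(payload)  # assumed transport primitive
```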
The network interface 112d is configured to facilitate a transmission of data between the playback device 110a and one or more other devices on a data network such as, for example, the links 103 and/or the network 104 (
In the illustrated embodiment of
The audio processing components 112g are configured to process and/or filter data comprising media content received by the electronics 112 (e.g., via the input/output 111 and/or the network interface 112d) to produce output audio signals. In some embodiments, the audio processing components 112g comprise, for example, one or more digital-to-analog converters (DACs), audio preprocessing components, audio enhancement components, one or more digital signal processors (DSPs), and/or other suitable audio processing components, modules, circuits, etc. In certain embodiments, one or more of the audio processing components 112g can comprise one or more subcomponents of the processors 112a. In some embodiments, the electronics 112 omits the audio processing components 112g. In some aspects, for example, the processors 112a execute instructions stored on the memory 112b to perform audio processing operations to produce the output audio signals.
The amplifiers 112h are configured to receive and amplify the audio output signals produced by the audio processing components 112g and/or the processors 112a. The amplifiers 112h can comprise electronic devices and/or components configured to amplify audio signals to levels sufficient for driving one or more of the transducers 114. In some embodiments, for example, the amplifiers 112h include one or more switching or class-D power amplifiers. In other embodiments, however, the amplifiers include one or more other types of power amplifiers (e.g., linear gain power amplifiers, class-A amplifiers, class-B amplifiers, class-AB amplifiers, class-C amplifiers, class-E amplifiers, class-F amplifiers, class-G and/or class H amplifiers, and/or another suitable type of power amplifier). In certain embodiments, the amplifiers 112h comprise a suitable combination of two or more of the foregoing types of power amplifiers. Moreover, in some embodiments, individual ones of the amplifiers 112h correspond to individual ones of the transducers 114. In other embodiments, however, the electronics 112 includes a single one of the amplifiers 112h configured to output amplified audio signals to a plurality of the transducers 114. In some other embodiments, the electronics 112 omits the amplifiers 112h.
The transducers 114 (e.g., one or more speakers and/or speaker drivers) receive the amplified audio signals from the amplifiers 112h and render or output the amplified audio signals as sound (e.g., audible sound waves having a frequency between about 20 Hertz (Hz) and 20 kilohertz (kHz)). In some embodiments, the transducers 114 can comprise a single transducer. In other embodiments, however, the transducers 114 comprise a plurality of audio transducers. In some embodiments, the transducers 114 comprise more than one type of transducer. For example, the transducers 114 can include one or more low frequency transducers (e.g., subwoofers, woofers), mid-range frequency transducers (e.g., mid-range transducers, mid-woofers), and one or more high frequency transducers (e.g., one or more tweeters). As used herein, “low frequency” can generally refer to audible frequencies below about 500 Hz, “mid-range frequency” can generally refer to audible frequencies between about 500 Hz and about 2 kHz, and “high frequency” can generally refer to audible frequencies above 2 kHz. In certain embodiments, however, one or more of the transducers 114 comprise transducers that do not adhere to the foregoing frequency ranges. For example, one of the transducers 114 may comprise a mid-woofer transducer configured to output sound at frequencies between about 200 Hz and about 5 kHz.
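The nominal frequency bands above can be summarized in a short helper. This is illustrative only; the boundaries are approximate, and as noted, some transducers intentionally span them.

```python
def classify_frequency(hz: float) -> str:
    """Nominal bands per the description above; boundaries are 'about'."""
    if hz < 500.0:
        return "low"        # e.g., subwoofers, woofers
    if hz <= 2000.0:
        return "mid-range"  # e.g., mid-range drivers, mid-woofers
    return "high"           # e.g., tweeters
```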
By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices including, for example, a “SONOS ONE,” “PLAY:1,” “PLAY:3,” “PLAY:5,” “PLAYBAR,” “BEAM,” “PLAYBASE,” “CONNECT:AMP,” “CONNECT,” and “SUB.” Other suitable playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein. Additionally, one of ordinary skill in the art will appreciate that a playback device is not limited to the examples described herein or to SONOS product offerings. In some embodiments, for example, one or more playback devices 110 comprises wired or wireless headphones (e.g., over-the-ear headphones, on-ear headphones, in-ear earphones). In other embodiments, one or more of the playback devices 110 comprise a docking station and/or an interface configured to interact with a docking station for personal mobile media playback devices. In certain embodiments, a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use. In some embodiments, a playback device omits a user interface and/or one or more transducers. For example,
c. Suitable Network Microphone Devices (NMDs)
In some embodiments, an NMD can be integrated into a playback device.
Referring again to
After detecting the activation word, voice processing 124 monitors the microphone data for an accompanying user request in the voice input. The user request may include, for example, a command to control a third-party device, such as a thermostat (e.g., NEST® thermostat), an illumination device (e.g., a PHILIPS HUE® lighting device), or a media playback device (e.g., a Sonos® playback device). For example, a user might speak the activation word “Alexa” followed by the utterance “set the thermostat to 68 degrees” to set a temperature in a home (e.g., the environment 101 of
d. Suitable Control Devices
The control device 130a includes electronics 132, a user interface 133, one or more speakers 134, and one or more microphones 135. The electronics 132 comprise one or more processors 132a (referred to hereinafter as “the processors 132a”), a memory 132b, software components 132c, and a network interface 132d. The processors 132a can be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system 100. The memory 132b can comprise data storage that can be loaded with one or more of the software components executable by the processors 132a to perform those functions. The software components 132c can comprise applications and/or other executable software configured to facilitate control of the media playback system 100. The memory 132b can be configured to store, for example, the software components 132c, media playback system controller application software, and/or other data associated with the media playback system 100 and the user.
The network interface 132d is configured to facilitate network communications between the control device 130a and one or more other devices in the media playback system 100, and/or one or more remote devices. In some embodiments, the network interface 132d is configured to operate according to one or more suitable communication industry standards (e.g., infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G, LTE). The network interface 132d can be configured, for example, to transmit data to and/or receive data from the playback devices 110, the NMDs 120, other ones of the control devices 130, one of the computing devices 106 of
The user interface 133 is configured to receive user input and can facilitate control of the media playback system 100. The user interface 133 includes media content art 133a (e.g., album art, lyrics, videos), a playback status indicator 133b (e.g., an elapsed and/or remaining time indicator), media content information region 133c, a playback control region 133d, and a zone indicator 133e. The media content information region 133c can include a display of relevant information (e.g., title, artist, album, genre, release year) about media content currently playing and/or media content in a queue or playlist. The playback control region 133d can include selectable (e.g., via touch input and/or via a cursor or another suitable selector) icons to cause one or more playback devices in a selected playback zone or zone group to perform playback actions such as, for example, play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross fade mode, etc. The playback control region 133d may also include selectable icons to modify equalization settings, playback volume, and/or other suitable playback actions. In the illustrated embodiment, the user interface 133 comprises a display presented on a touch screen interface of a smartphone (e.g., an iPhone™, an Android phone). In some embodiments, however, user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.
The one or more speakers 134 (e.g., one or more transducers) can be configured to output sound to the user of the control device 130a. In some embodiments, the one or more speakers comprise individual transducers configured to correspondingly output low frequencies, mid-range frequencies, and/or high frequencies. In some aspects, for example, the control device 130a is configured as a playback device (e.g., one of the playback devices 110). Similarly, in some embodiments the control device 130a is configured as an NMD (e.g., one of the NMDs 120), receiving voice commands and other sounds via the one or more microphones 135.
The one or more microphones 135 can comprise, for example, one or more condenser microphones, electret condenser microphones, dynamic microphones, and/or other suitable types of microphones or transducers. In some embodiments, two or more of the microphones 135 are arranged to capture location information of an audio source (e.g., voice, audible sound) and/or configured to facilitate filtering of background noise. Moreover, in certain embodiments, the control device 130a is configured to operate as a playback device and an NMD. In other embodiments, however, the control device 130a omits the one or more speakers 134 and/or the one or more microphones 135. For instance, the control device 130a may comprise a device (e.g., a thermostat, an IoT device, a network device) comprising a portion of the electronics 132 and the user interface 133 (e.g., a touch screen) without any speakers or microphones. Additional control device embodiments are described in further detail below with respect to
e. Suitable Playback Device Configurations
Each zone in the media playback system 100 may be provided for control as a single user interface (UI) entity. For example, Zone A may be provided as a single entity named Master Bathroom. Zone B may be provided as a single entity named Master Bedroom. Zone C may be provided as a single entity named Second Bedroom.
Playback devices that are bonded may have different playback responsibilities, such as responsibilities for certain audio channels. For example, as shown in
Additionally, bonded playback devices may have additional and/or different respective speaker drivers. As shown in
Playback devices that are merged may not have assigned playback responsibilities, and may each render the full range of audio content the respective playback device is capable of. Nevertheless, merged devices may be represented as a single UI entity (i.e., a zone, as discussed above). For instance, the playback devices 110a and 110n in the master bathroom have the single UI entity of Zone A. In one embodiment, the playback devices 110a and 110n may each output, in synchrony, the full range of audio content each respective playback device 110a and 110n is capable of.
In some embodiments, an NMD is bonded or merged with another device so as to form a zone. For example, the NMD 120b may be bonded with the playback device 110e, which together form Zone F, named Living Room. In other embodiments, a stand-alone network microphone device may be in a zone by itself. In other embodiments, however, a stand-alone network microphone device may not be associated with a zone. Additional details regarding associating network microphone devices and playback devices as designated or default devices may be found, for example, in previously referenced U.S. patent application Ser. No. 15/438,749.
Zones of individual, bonded, and/or merged devices may be grouped to form a zone group. For example, referring to
In various implementations, a zone group in an environment may be named by the default name of a zone within the group or by a combination of the names of the zones within the zone group. For example, Zone Group 108b can be assigned a name such as “Dining+Kitchen”, as shown in
Certain data may be stored in a memory of a playback device (e.g., the memory 112b of
In some embodiments, the memory may store instances of various variable types associated with the states. Variable instances may be stored with identifiers (e.g., tags) corresponding to type. For example, certain identifiers may be a first type “a1” to identify playback device(s) of a zone, a second type “b1” to identify playback device(s) that may be bonded in the zone, and a third type “c1” to identify a zone group to which the zone may belong. As a related example, identifiers associated with the second bedroom 101c may indicate that the playback device is the only playback device of the Zone C and not in a zone group. Identifiers associated with the Den may indicate that the Den is not grouped with other zones but includes bonded playback devices 110h-110k. Identifiers associated with the Dining Room may indicate that the Dining Room is part of the Dining+Kitchen zone group 108b and that devices 110b and 110d are grouped (
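A hypothetical encoding of these identifier types might look like the following. The keys “a1”, “b1”, and “c1” follow the example types in the text; the actual stored representation is not specified by this disclosure, and the Second Bedroom device identifier is a placeholder since the text does not name it.

```python
# Illustrative only; mirrors the examples given above.
zone_state = {
    "Second Bedroom": {
        "a1": ["<playback device id>"],  # only device of Zone C
        "b1": [],                        # no bonded devices
        "c1": None,                      # not in a zone group
    },
    "Den": {
        "a1": ["110h", "110i", "110j", "110k"],
        "b1": ["110h", "110i", "110j", "110k"],  # bonded devices
        "c1": None,                      # not grouped with other zones
    },
    "Dining Room": {
        "a1": ["110b"],
        "b1": [],
        "c1": "Dining+Kitchen",          # zone group 108b
    },
    "Kitchen": {
        "a1": ["110d"],
        "b1": [],
        "c1": "Dining+Kitchen",
    },
}
```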
In yet another example, the media playback system 100 may store variables or identifiers representing other associations of zones and zone groups, such as identifiers associated with Areas, as shown in
III. Example Systems and Devices
The transducers 214 are configured to receive the electrical signals from the electronics 112, and further configured to convert the received electrical signals into audible sound during playback. For instance, the transducers 214a-c (e.g., tweeters) can be configured to output high frequency sound (e.g., sound waves having a frequency greater than about 2 kHz). The transducers 214d-f (e.g., mid-woofers, woofers, midrange speakers) can be configured to output sound at frequencies lower than the transducers 214a-c (e.g., sound waves having a frequency lower than about 2 kHz). In some embodiments, the playback device 210 includes a number of transducers different than those illustrated in
In the illustrated embodiment of
Electronics 312 (
Referring to
Referring to
The beamforming and self-sound suppression components 312l and 312m are configured to detect an audio signal and determine aspects of voice input represented in the detected audio signal, such as the direction, amplitude, frequency spectrum, etc. The voice activity detector components 312k are operably coupled with the beamforming and AEC components 312l and 312m and are configured to determine a direction and/or directions from which voice activity is likely to have occurred in the detected audio signal. Potential speech directions can be identified by monitoring metrics which distinguish speech from other sounds. Such metrics can include, for example, energy within the speech band relative to background noise and entropy within the speech band, which is a measure of spectral structure. As those of ordinary skill in the art will appreciate, speech typically has a lower entropy than most common background noise.
The activation word detector components 312n are configured to monitor and analyze received audio to determine if any activation words (e.g., wake words) are present in the received audio. The activation word detector components 312n may analyze the received audio using an activation word detection algorithm. If the activation word detector 312n detects an activation word, the NMD 320 may process voice input contained in the received audio. Example activation word detection algorithms accept audio as input and provide an indication of whether an activation word is present in the audio. Many first- and third-party activation word detection algorithms are known and commercially available. For instance, operators of a voice service may make their algorithm available for use in third-party devices. Alternatively, an algorithm may be trained to detect certain activation words. In some embodiments, the activation word detector 312n runs multiple activation word detection algorithms on the received audio simultaneously (or substantially simultaneously). As noted above, different voice services (e.g. AMAZON's ALEXA®, APPLE's SIRI®, or MICROSOFT's CORTANA®) can each use a different activation word for invoking their respective voice service. To support multiple services, the activation word detector 312n may run the received audio through the activation word detection algorithm for each supported voice service in parallel.
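As a sketch of running the supported services' detection algorithms in parallel over the same received audio, consider the following. The detector interface here is hypothetical; real first- and third-party detection algorithms each have their own APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_activation_words(audio_frame, detectors):
    """Run one activation-word detection algorithm per supported voice
    service in parallel, as described above.

    `detectors` maps a VAS name to a callable returning True when that
    service's activation word is present (assumed interface)."""
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        futures = {
            vas: pool.submit(algorithm, audio_frame)
            for vas, algorithm in detectors.items()
        }
    # Report every service whose activation word was detected.
    return [vas for vas, future in futures.items() if future.result()]
```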
The speech/text conversion components 312o may facilitate processing by converting speech in the voice input to text. In some embodiments, the electronics 312 can include voice recognition software that is trained to a particular user or a particular set of users associated with a household. Such voice recognition software may implement voice-processing algorithms that are tuned to specific voice profile(s). Tuning to specific voice profiles may require less computationally intensive algorithms than traditional voice activity services, which typically sample from a broad base of users and diverse requests that are not targeted to media playback systems.
The voice utterance portion 328b may include, for example, one or more spoken commands (identified individually as a first command 328c and a second command 328e) and one or more spoken keywords (identified individually as a first keyword 328d and a second keyword 328f). In one example, the first command 328c can be a command to play music, such as a specific song, album, playlist, etc. In this example, the keywords may be one or more words identifying one or more zones in which the music is to be played, such as the Living Room and the Dining Room shown in
In some embodiments, the media playback system 100 is configured to temporarily reduce the volume of audio content that it is playing while detecting the activation word portion 328a. The media playback system 100 may restore the volume after processing the voice input 328, as shown in
The playback zone region 533b can include representations of playback zones within the media playback system 100 (
The playback status region 533c includes graphical representations of audio content that is presently being played, previously played, or scheduled to play next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the user interface, such as within the playback zone region 533b and/or the playback queue region 533d. The graphical representations may include track title, artist name, album name, album year, track length, and other relevant information that may be useful for the user to know when controlling the media playback system 100 via the user interface 531.
The playback queue region 533d includes graphical representations of audio content in a playback queue associated with the selected playback zone or zone group. In some embodiments, each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group. For instance, each audio item in the playback queue may comprise a uniform resource identifier (URI), a uniform resource locator (URL) or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source, possibly for playback by the playback device. In some embodiments, for example, a playlist can be added to a playback queue, in which information corresponding to each audio item in the playlist may be added to the playback queue. In some embodiments, audio items in a playback queue may be saved as a playlist. In certain embodiments, a playback queue may be empty, or populated but “not in use” when the playback zone or zone group is playing continuously streaming audio content, such as Internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations. In some embodiments, a playback queue can include Internet radio and/or other streaming audio content items and be “in use” when the playback zone or zone group is playing those items.
When playback zones or zone groups are “grouped” or “ungrouped,” playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or a combination of audio items from both the first and second playback queues. Subsequently, if the established zone group is ungrouped, the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped. Similarly, the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty, or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped.
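The grouping and ungrouping behaviors described above might be sketched as follows, with the mode and restore parameters standing in for whichever policy a given implementation selects (all names are hypothetical):

```python
def group_queues(first_queue, second_queue, mode="first_wins"):
    """Form the new zone group's playback queue: initially empty, a
    copy of one zone's queue, or a combination of both."""
    if mode == "empty":
        return []
    if mode == "first_wins":   # second zone was added to the first
        return list(first_queue)
    if mode == "second_wins":  # first zone was added to the second
        return list(second_queue)
    return list(first_queue) + list(second_queue)  # combination

def ungroup_queue(previous_queue, group_queue, restore=True):
    """On ungrouping, re-associate a zone with its previous queue, or
    give it a new queue seeded from the group's queue (or empty)."""
    return list(previous_queue) if restore else list(group_queue)
```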
At step 650a, the media playback system 100 receives an indication of selected media content (e.g., one or more songs, albums, playlists, podcasts, videos, stations) via the control device 130a. The selected media content can comprise, for example, media items stored locally on one or more devices (e.g., the audio source 105 of
At step 650b, the playback device 110a receives the message 651a and adds the selected media content to the playback queue for playback.
At step 650c, the control device 130a receives input corresponding to a command to play back the selected media content. In response to receiving the input corresponding to the command to play back the selected media content, the control device 130a transmits a message 651b to the playback device 110a causing the playback device 110a to play back the selected media content. In response to receiving the message 651b, the playback device 110a transmits a message 651c to the computing device 106a requesting the selected media content. The computing device 106a, in response to receiving the message 651c, transmits a message 651d comprising data (e.g., audio data, video data, a URL, a URI) corresponding to the requested media content.
At step 650d, the playback device 110a receives the message 651d with the data corresponding to the requested media content and plays back the associated media content.
At step 650e, the playback device 110a optionally causes one or more other devices to play back the selected media content. In one example, the playback device 110a is one of a bonded zone of two or more players (
IV. Example Systems and Methods for Managing Multiple VASes
As discussed above, the MPS 100 may be configured to communicate with remote computing devices (e.g., cloud servers) associated with multiple different VASes. Although several examples are provided below with respect to managing interactions between two VASes, in various embodiments there may be additional VASes (e.g., three, four, five, six, or more VASes), and the interactions between these VASes can be managed using the approaches described herein. In various embodiments, in response to detecting a particular activation word, the NMDs 120 may send voice inputs over a network 102 to the remote computing device(s) associated with the first VAS 190 or the second VAS 191 (
In some embodiments, suppressing operation of the second activation-word detector involves ceasing providing voice input to the second activation-word detector for a predetermined time, or until a user interaction with the first VAS is deemed to be completed (e.g., after a predetermined time has elapsed since the last interaction—either a text-to-speech output from the first VAS or a user voice input to the first VAS). In some embodiments, suppression of the second activation-word detector can involve powering down the second activation-word detector to a low-power or no-power state for a predetermined time or until the user interaction with the first VAS is deemed complete.
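A minimal sketch of this suppression follows, assuming a monotonic-clock timeout as the “predetermined time” and a callable detector interface (both assumptions, not the actual implementation):

```python
import time

class DetectorGate:
    """Gate the second VAS's activation-word detector while a user is
    interacting with the first VAS. Names are hypothetical."""

    def __init__(self, idle_timeout_s=15.0):
        self.idle_timeout_s = idle_timeout_s
        self.suppressed = False
        self.last_interaction = 0.0

    def note_interaction(self):
        # Call on each first-VAS interaction: a text-to-speech output
        # from the first VAS or a user voice input to the first VAS.
        self.suppressed = True
        self.last_interaction = time.monotonic()

    def feed(self, detector, audio_frame):
        # While suppressed, cease providing voice input to the second
        # VAS's activation-word detector.
        if self.suppressed:
            if time.monotonic() - self.last_interaction < self.idle_timeout_s:
                return None
            self.suppressed = False  # interaction deemed complete
        return detector(audio_frame)
```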
In some embodiments, the first activation-word detector can remain active even after the first activation word has been detected and the voice utterance has been transmitted to the first VAS, such that a user may utter the first activation word to interrupt a current output or other activity being performed by the first VAS. For example, if a user asks Alexa to read a news flash briefing, and the playback device begins to play back the text-to-speech (TTS) response from Alexa, a user may interrupt by speaking the activation word followed by a new command. Additional details regarding arbitrating between activation-word detection and playback of content from a VAS are provided below with respect to
With continued reference to
The first VAS 190 may process the voice input in the message(s) 709 to determine intent (block 711). Based on the intent, the first VAS 190 may send content 713 via messages (e.g., packets) to the media playback system 100. In some instances, the response message(s) 713 may include a payload that directs one or more of the devices of the media playback system 100 to execute instructions. For example, the instructions may direct the media playback system 100 to play back media content, group devices, and/or perform other functions. In addition or alternatively, the first content 713 from the first VAS 190 may include a payload with a request for more information, such as in the case of multi-turn commands.
In some embodiments, the first content 713 can be assigned to different categories that are treated differently when arbitrating between content received from different VASes. Examples of the first content 713 include (i) text-to-speech (TTS) responses (e.g., “it is currently 73 degrees” in response to a user's query regarding the temperature outside), (ii) alarms and timers (e.g., timers set by a user, calendar reminders, etc.), (iii) user broadcasts (e.g., in response to a user instructing Alexa to “tell everyone that dinner is ready,” all playback devices in a household are instructed to play back “dinner is ready”), and (iv) other media content (e.g., news briefings, podcasts, streaming music, etc.). As used herein a TTS response can include instances in which a VAS provides a verbal response to a user input, query, request, etc. to be played back via a playback device. In some embodiments, the first content 713 received from the first VAS 190 can include metadata, tags, or other identifiers regarding the type of content (e.g., a tag identifying the first content 713 as TTS, as an alarm or timer, etc.). In other embodiments, the MPS 100 may inspect the first content 713 to otherwise determine to which category the first content 713 belongs.
At any point along this process, the second VAS 191 may transmit second content 715 via messages (e.g., packets) to the media playback system 100. This second content 715 may likewise include a payload that directs one or more of the devices of the media playback system 100 to execute instructions such as playing back media content or performing other functions. The second content 715, like the first content 713, can take a variety of forms including a TTS output, an alarm or timer, a user broadcast, or other media content. Although the second content 715 here is illustrated as being transmitted at a particular time in the flow, in various embodiments the second content may be transmitted earlier (e.g., prior to transmission of the first content 713 from the first VAS 190 to the MPS 100) or later (e.g., after the MPS 100 has output a response in block 719, for example by playing back the first content 713). In at least some embodiments, the second content 715 is received during playback of the first content 713.
In block 717, the MPS 100 arbitrates between the first content 713 received from the first VAS 190 and the second content 715 received from the second VAS 191. Following arbitration, the MPS 100 may output a response in block 719. The particular operations performed during arbitration between the first and second content may depend on the characteristics of the first and second content, on the particular VASes selected, the relative times at which the first and second content are received, and other factors. For example, in some cases, the MPS 100 may suppress the second content while playing back the first content. As used herein, suppressing the second content can include delaying playback of the second content, pausing playback of the second content (if playback is already in progress), and/or canceling or ceasing playback of the second content altogether. In some cases, the MPS 100 may suppress the first content while playing back the second content. In some embodiments, suppressing playback of the first content can include “ducking” the first content while the second content is played back concurrently with the first content.
When arbitrating between the first and second content in block 717, the MPS 100 may rely at least in part on the category of content (e.g., a TTS output, an alarm or timer, a user broadcast, or other media content) received from each VAS to determine how playback should be handled. Various examples are provided below, in which the MPS 100 arbitrates between the first content 713 and the second content 715, for example by determining which content to play back and which to suppress, as well as whether to queue, duck, or cancel the suppressed content, etc.
In one example, the first content 713 is a TTS response, an alarm or timer, or a user broadcast, and the second content 715 is a timer or alarm. In this instance, the second content 715 (timer or alarm) may interrupt and cancel or queue the first content 713. This permits a user's pre-set alarms or timers to be honored for their assigned times, regardless of the content currently being played back.
In another example, the first content 713 is a TTS response, an alarm or timer, or a user broadcast, and the second content 715 is a user broadcast. In this instance, the second content 715 (user broadcast) is queued until after the first content is played back, without suppressing or otherwise interrupting the first content. This reflects the determination that, within a single household, it may be undesirable for one user's broadcast to interrupt playback of other content, such as another user's active dialogue session with a VAS.
In an additional example, the first content 713 can be streaming media (e.g., music, a podcast, etc.), and the second content 715 can be a TTS response, a timer or alarm, or a user broadcast. In this case, the first content 713 can be paused or “ducked” while the second content 715 is played back. After playback of the second content 715 is complete, the first content 713 can be unducked or unpaused and playback can continue as normal.
In yet another example, the first content 713 is other media such as a podcast, streaming music, etc., and the second content 715 is also of the same category, for example another podcast. In this case, the second content 715 may replace the first content 713, and the first content 713 can be deleted or canceled entirely. This reflects the assumption that a user wishes to override her previous selection of streaming content with the new selection via the second VAS 191.
In still another example, the first content 713 is an alarm or timer, and the second content 715 is a TTS response that is received during playback of the alarm or timer. Here, the first content 713 (alarm or timer) can be suppressed and the second content can be played back. In this instance, a user who has heard a portion of a timer or alarm likely does not wish the alarm or timer to resume after an intervening dialogue session with a VAS has ended.
As a further example, the first content 713 can be a user broadcast, and the second content 715 can be a TTS output, another user broadcast, or an alarm or timer. Here, the first content 713 can be suppressed (e.g., queued or canceled) while the second content 715 (the TTS output, the alarm or timer, or other user broadcast) is played back.
Although the above examples describe optional arbitration determinations made by the MPS 100, various other configurations and determinations are possible depending on the desired operation of the MPS 100. For example, in some embodiments the MPS 100 may allow play back of any user broadcasts over any other currently played back content, while in another embodiment the MPS 100 may suppress playback of user broadcasts until playback of other media has completed. In various embodiments, the MPS 100 may suppress playback of the second content while allowing playback of the first content (or vice versa) based on the type of content, other content characteristics (e.g., playback length), the time at which the respective content is received at the MPS 100, particular user settings or preferences, or any other factor.
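Consolidating the examples above into a single rule table, a sketch might look like the following. Where the prose examples overlap (e.g., a broadcast arriving during another broadcast), this table picks one reading; as just noted, actual behavior may vary with user settings and other factors, and all names here are illustrative only.

```python
from enum import Enum, auto

class Cat(Enum):
    TTS = auto()        # text-to-speech response
    ALARM = auto()      # alarm or timer
    BROADCAST = auto()  # user broadcast
    MEDIA = auto()      # podcast, streaming music, etc.

class Action(Enum):
    INTERRUPT_FIRST = auto()      # play second now; cancel or queue first
    QUEUE_SECOND = auto()         # play second after first completes
    DUCK_OR_PAUSE_FIRST = auto()  # play second; unduck/unpause first after
    REPLACE_FIRST = auto()        # cancel first entirely; play second
    SUPPRESS_FIRST = auto()       # queue or cancel first; play second

RULES = {
    # (category of first content, category of second content): action
    (Cat.TTS, Cat.ALARM): Action.INTERRUPT_FIRST,
    (Cat.ALARM, Cat.ALARM): Action.INTERRUPT_FIRST,
    (Cat.BROADCAST, Cat.ALARM): Action.INTERRUPT_FIRST,
    (Cat.TTS, Cat.BROADCAST): Action.QUEUE_SECOND,
    (Cat.ALARM, Cat.BROADCAST): Action.QUEUE_SECOND,
    (Cat.MEDIA, Cat.TTS): Action.DUCK_OR_PAUSE_FIRST,
    (Cat.MEDIA, Cat.ALARM): Action.DUCK_OR_PAUSE_FIRST,
    (Cat.MEDIA, Cat.BROADCAST): Action.DUCK_OR_PAUSE_FIRST,
    (Cat.MEDIA, Cat.MEDIA): Action.REPLACE_FIRST,
    (Cat.ALARM, Cat.TTS): Action.SUPPRESS_FIRST,  # no resume afterwards
    (Cat.BROADCAST, Cat.TTS): Action.SUPPRESS_FIRST,
    (Cat.BROADCAST, Cat.BROADCAST): Action.SUPPRESS_FIRST,
}

def arbitrate(first: Cat, second: Cat) -> Action:
    # Default to not interrupting the content already playing.
    return RULES.get((first, second), Action.QUEUE_SECOND)
```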
In block 719, the MPS 100 outputs a response, for example by playing back the selected content as determined via the arbitration in block 717. As noted above, this can include playing back the first content 713 while suppressing (e.g., canceling or queuing) playback of the second content 715, or alternatively this can include playing back the second content 715 while suppressing (e.g., canceling, queuing, or ducking) playback of the first content 713. In some embodiments, the first content 713 sent from the first VAS 190 may direct the media playback system 100 to request media content, such as audio content, from the media service(s) 192. In other embodiments, the MPS 100 may request content independently from the first VAS 190. In either case, the MPS 100 may exchange messages for receiving content, such as via a media stream 721 comprising, e.g., audio content.
In block 723, the other activation-word detector(s) can be re-enabled. For example, the MPS 100 may resume providing voice input to the other activation-word detector(s) after a predetermined time or after the user's interaction with the first VAS 190 is deemed to be complete (e.g., after a predetermined time has elapsed since the last interaction, whether a text-to-speech output from the first VAS or a user voice input to the first VAS). Once the other activation-word detector(s) have been re-enabled, a user may initiate interaction with any available VAS by speaking the appropriate activation word or phrase.
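A minimal sketch of the re-enabling behavior of block 723 follows; the DetectorManager name and the ten-second quiet period are assumptions for illustration, not values drawn from the disclosure:

    import time

    class DetectorManager:
        """Suppress non-active detectors, then re-enable them after a quiet period."""

        def __init__(self, reenable_after_s: float = 10.0):  # assumed timeout
            self.reenable_after_s = reenable_after_s
            self.last_interaction = time.monotonic()
            self.suppressed: set[str] = set()

        def note_interaction(self) -> None:
            # Call on each TTS output from the active VAS or user input to it.
            self.last_interaction = time.monotonic()

        def suppress_others(self, active_vas: str, all_vases: set[str]) -> None:
            # Disable every detector except the one for the active VAS.
            self.suppressed = all_vases - {active_vas}

        def poll(self) -> None:
            # Re-enable once the interaction is deemed complete.
            if self.suppressed and time.monotonic() - self.last_interaction >= self.reenable_after_s:
                self.suppressed.clear()

        def is_enabled(self, vas: str) -> bool:
            return vas not in self.suppressed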
Method 800 begins at block 802, which involves the playback device capturing audio input via one or more microphones as described above. The audio input can include a voice input, such as the voice input 328 described above.
At block 804, method 800 involves the playback device using a first activation-word detector (e.g., the activation word detector components 312n described above) to detect a first activation word in the audio input.
Responsive to detecting the first activation word in the audio input in block 804, the playback device transmits a voice utterance of the audio input to a first VAS associated with the first activation word in block 806. For example, if the detected activation word in block 804 is “Alexa,” then in block 806 the playback device transmits the voice utterance to one or more remote computing devices associated with AMAZON voice services. As noted previously, in some embodiments, the playback device only transmits the voice utterance portion 328b of the voice input.
In block 808, the playback device receives first content from the first VAS, and in block 810, the playback device receives second content from a second, different VAS. In block 812, the playback device arbitrates between the first content and the second content. As described above, this arbitration can depend on the respective content types, the identities of the VASes involved, user settings or preferences, and other factors.
In one outcome of the arbitration in block 812, the method 800 continues in block 814 with playing back the first content while suppressing the second content. Such suppression can take the form of delaying playback of the second content until after the first content has been played back or canceling playback of the second content altogether.
In an alternative outcome of the arbitration in block 812, the method continues in block 816 with interrupting playback of the first content with playback of the second content. The first content, which is interrupted, can either be canceled altogether or queued for later playback after the second content has been played back in its entirety. In some embodiments, the first content is “ducked” while the second content is played back. After the second content has been played back completely, the first content can be “unducked.”
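The outcomes of blocks 814 and 816 can be summarized in a short Python sketch; the player interface (play, stop, enqueue, duck, unduck, on_complete) is hypothetical and merely stands in for whatever playback controls a device exposes:

    def apply_outcome(player, first, second, outcome: str) -> None:
        """Apply one arbitration outcome from blocks 814/816 (illustrative only)."""
        if outcome == "first_wins_queue_second":           # block 814, delayed playback
            player.play(first)
            player.enqueue(second)
        elif outcome == "first_wins_cancel_second":        # block 814, cancellation
            player.play(first)
        elif outcome == "second_interrupts_cancel_first":  # block 816, cancel first
            player.stop(first)
            player.play(second)
        elif outcome == "second_interrupts_queue_first":   # block 816, queue first
            player.stop(first)
            player.play(second)
            player.enqueue(first)
        elif outcome == "second_interrupts_duck_first":    # block 816, ducking
            player.duck(first)                             # attenuate, don't stop
            player.play(second)
            player.on_complete(second, lambda: player.unduck(first))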
Method 900 begins at block 902, with receiving first content from a first VAS, and in block 904 the playback device plays back the first content. In various embodiments, the first content can be an alarm or timer, a user broadcast, a TTS output, or other media content.
At block 906, the playback device captures audio input via one or more microphones as described above. The audio input can include a voice input, such as the voice input 328 described above.
At block 908, the playback device arbitrates between the captured audio input and the playback of the first content from the first VAS. For example, the playback device may permit a detected activation word in the voice input to interrupt playback of the first content, or the playback device may suppress operation of the activation-word detector so as not to interrupt playback of the first content. This arbitration can depend on the identity of the VAS that provides the first content, as well as the VAS associated with the potential activation word. It can also depend on the category of content being played back, for example an alarm/timer, a user broadcast, a TTS output, or other media content.
In one example, if the first content is a TTS output from a first VAS, the playback device may suppress operation of any activation-word detectors associated with any other VASes, while still permitting operation of the activation-word detector associated with the first VAS. As a result, a user receiving a TTS output from Alexa may interrupt the output by speaking the “Alexa” activation word, but speaking the “OK Google” activation word would not interrupt playback of the TTS output from Alexa.
In another example, if the first content is a user broadcast, the playback device may continue to monitor audio input for activation word(s) during playback. If an activation word is detected for any VAS, then the user broadcast can be canceled or queued while the user interacts with the selected VAS. In some embodiments, this interruption of a user broadcast is permitted regardless of which VAS directed the broadcast and which VAS is associated with the detected activation word.
In yet another example, if the first content is an alarm or timer, the playback device may continue to monitor audio input for activation word(s) during playback. If an activation word is detected, then the timer or alarm can be canceled or queued while the user interacts with the selected VAS. In some embodiments, this interruption of a timer or alarm is permitted regardless of which VAS directed the timer or alarm and which VAS is associated with the detected activation word.
Various other rules and configurations are possible for arbitrating between playback of content from a first VAS and monitoring captured audio for potential activation word(s) of the first VAS and/or any additional VASes. For example, the playback device might permit a user to interrupt any content whatsoever if an activation word associated with a preferred VAS is spoken, while speaking an activation word associated with a non-preferred VAS may interrupt only certain content.
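Under those examples, the decision of which activation-word detectors remain active during playback can be sketched as follows; the content-category strings and the preferred-VAS parameter are illustrative assumptions:

    def detectors_to_enable(playing_type: str, source_vas: str,
                            all_vases: set[str],
                            preferred_vas: str | None = None) -> set[str]:
        """Select which activation-word detectors may run during playback (block 908)."""
        enabled: set[str] = set()
        if playing_type == "tts":
            enabled.add(source_vas)        # TTS is interruptible only by its own VAS
        elif playing_type in ("user_broadcast", "alarm_or_timer"):
            enabled |= all_vases           # any VAS may interrupt
        if preferred_vas is not None:
            enabled.add(preferred_vas)     # a preferred VAS may interrupt anything
        return enabled

For example, detectors_to_enable("tts", "alexa", {"alexa", "google"}) yields {"alexa"}, matching the example above in which speaking “OK Google” does not interrupt an Alexa TTS output.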
As one outcome following the arbitration in block 908, in block 910 the playback device suppresses the activation-word detector during playback of the first content. The activation-word detector can be suppressed by ceasing to provide captured audio input to the activation-word detector or by otherwise causing the activation-word detector to pause evaluation of audio input for a potential activation word. In this instance, the user is not permitted to interrupt the playback of the first content, even using an activation word.
In the alternative outcome following the arbitration in block 908, in block 912 the playback device enables the activation-word detector, for example by providing the audio input to the activation-word detector of the playback device. At block 914, method 900 involves the playback device using an activation-word detector (e.g., the activation word detector components 312n described above) to detect an activation word in the audio input.
Responsive to detecting an activation word in the audio input in block 914, the playback device interrupts playback of the first content in block 916. In place of the first content, an active dialogue or other interaction can proceed between the user and the VAS associated with the activation word detected in block 914. In some embodiments, the interruption can include canceling or queuing playback of the first content. In some embodiments, interruption of the first content can include “ducking” the first content while the user interacts with the VAS associated with the detected activation word.
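Blocks 910 through 916 together amount to gating the audio fed to each detector and reacting to a detection; a compact sketch follows, again with hypothetical detector and player interfaces:

    def process_audio(frames, detectors: dict, enabled: set[str], player, first) -> None:
        """Feed audio only to enabled detectors (blocks 910/912); on a detection,
        interrupt the first content (block 916). Illustrative interfaces only."""
        for vas, detector in detectors.items():
            if vas not in enabled:
                continue                   # suppressed detectors receive no audio
            if detector.process(frames):   # assumed to return True on its activation word
                player.duck(first)         # or cancel/queue, per configuration
                # a dialogue session with the detected VAS would proceed here
                break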
IV. Conclusion
The above discussions relating to playback devices, controller devices, playback zone configurations, voice assistant services, and media content sources provide only some examples of operating environments within which the functions and methods described above may be implemented. Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein may also be applicable and suitable for implementation of the functions and methods.
The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only ways to implement such systems, methods, apparatus, and/or articles of manufacture.
Additionally, references herein to “embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one example embodiment of an invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As such, the embodiments described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other embodiments.
The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain embodiments of the present disclosure can be practiced without certain specific details. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments.
When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
This application is a continuation of U.S. patent application Ser. No. 17/454,676, filed Nov. 12, 2021, which is a continuation of U.S. patent application Ser. No. 16/213,570, filed Dec. 7, 2018, now U.S. Pat. No. 11,183,183, each of which is incorporated herein by reference in its entirety.