With the advancement of technology, the use and popularity of electronic devices have increased considerably. Electronic devices are commonly used to capture and process audio data.
For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.
Electronic devices may be used to capture input audio and process input audio data. The input audio data may be used for voice commands and/or sent to a remote device as part of a communication session. If the device generates playback audio while capturing the input audio, the input audio data may include an echo signal representing a portion of the playback audio recaptured by the device.
To remove the echo signal, the device may perform acoustic echo cancellation (AEC) processing, but in some circumstances the AEC processing may not fully cancel the echo signal and an output of the echo cancellation may include residual echo. For example, due to mechanical noise and/or continuous echo path changes caused by movement of the device, the echo signal may be nonlinear and time-varying and linear AEC processing may be unable to fully cancel the echo signal.
To improve echo cancellation, devices, systems and methods are disclosed that perform deep adaptive AEC processing. For example, the deep adaptive AEC processing integrates a deep neural network (DNN) and linear adaptive filtering to perform either (i) echo removal or (ii) joint echo and noise removal. The DNN is configured to generate a nonlinear reference signal and step-size data, which the linear adaptive filtering uses to generate estimated echo data that accurately models the echo signal. For example, the step-size data may increase a rate of adaptation for an adaptive filter when local speech is not detected and may freeze adaptation of the adaptive filter when local speech is detected, causing the estimated echo data generated by the adaptive filter to correspond to the echo signal but not the local speech. By canceling the estimated echo data from a microphone signal, the deep adaptive AEC processing may generate output audio data representing the local speech. The DNN may generate the nonlinear reference signal by generating mask data that is applied to the microphone signal, such that the nonlinear reference signal corresponds to a portion of the microphone signal that does not include near-end speech.
The device 110 may be an electronic device configured to capture and/or receive audio data. For example, the device 110 may include a microphone array configured to generate microphone audio data that captures input audio, although the disclosure is not limited thereto and the device 110 may include multiple microphones without departing from the disclosure. As is known and used herein, “capturing” an audio signal and/or generating audio data includes a microphone transducing audio waves (e.g., sound waves) of captured sound to an electrical signal and a codec digitizing the signal to generate the microphone audio data. In addition to capturing the microphone audio data, the device 110 may be configured to receive playback audio data and generate output audio using one or more loudspeakers of the device 110. For example, the device 110 may generate output audio corresponding to media content, such as music, a movie, and/or the like.
If the device 110 generates playback audio while capturing the input audio, the microphone audio data may include an echo signal representing a portion of the playback audio recaptured by the device. In addition, the microphone audio data may include a speech signal corresponding to local speech, as well as acoustic noise in the environment, as shown below:
Yk,m=Sk,m+Dk,m+Nk,m [1]
where Yk,m denotes the microphone signal, Sk,m denotes a speech signal (e.g., representation of local speech), Dk,m denotes an echo signal (e.g., representation of the playback audio recaptured by the device 110), and Nk,m denotes a noise signal (e.g., representation of acoustic noise captured by the device 110).
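As a minimal illustration of this signal model, the following sketch composes a microphone signal from separate speech, echo, and noise components in the STFT domain; the array shapes, scaling, and random placeholder data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
frames, bins = 100, 257                               # illustrative STFT dimensions

def random_spectrogram(scale=1.0):
    # Placeholder complex STFT-domain component (illustration only).
    return scale * (rng.standard_normal((frames, bins))
                    + 1j * rng.standard_normal((frames, bins)))

speech = random_spectrogram()        # S_k,m: local speech
echo = random_spectrogram()          # D_k,m: recaptured playback audio
noise = random_spectrogram(0.1)      # N_k,m: acoustic noise

mic = speech + echo + noise          # Y_k,m = S_k,m + D_k,m + N_k,m  (equation [1])
```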
The device 110 may perform deep adaptive AEC processing to reduce or remove the echo signal Dk,m and/or the noise signal Nk,m. For example, the device 110 may receive (130) playback audio data, may receive (132) microphone audio data, and may process (134) the playback audio data and the microphone audio data using a first model to determine step-size data and mask data. For example, the device 110 may include a deep neural network (DNN) configured to process the playback audio data and the microphone audio data to generate the step-size data and the mask data, as described in greater detail below.
The device 110 may then generate (136) reference audio data using the microphone audio data and the mask data. For example, the mask data may indicate portions of the microphone audio data that do not include the speech signal, such that the reference audio data corresponds to portions of the microphone audio data that represent the echo signal and/or the noise signal. The device 110 may generate (138) estimated echo data using the reference audio data, the step-size data, and an adaptive filter. For example, the device 110 may adapt the adaptive filter based on the step-size data, then use the adaptive filter to process the reference audio data and generate the estimated echo data. The estimated echo data may correspond to the echo signal and/or the noise signal without departing from the disclosure. In some examples, the step-size data may cause increased adaptation of the adaptive filter when local speech is not detected and may freeze adaptation of the adaptive filter when local speech is detected, although the disclosure is not limited thereto.
The device 110 may generate (140) output audio data based on the microphone audio data and the estimated echo data. For example, the device 110 may subtract the estimated echo data from the microphone audio data to generate the output audio data. In some examples, the device 110 may detect (142) a wakeword represented in a portion of the output audio data and may cause (144) speech processing to be performed using the portion of the output audio data. However, the disclosure is not limited thereto, and in other examples the device 110 may perform deep adaptive AEC processing during a communication session or the like, without detecting a wakeword or performing speech processing.
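As a rough sketch of steps 134-140 on a per-frame basis, the following outline may be helpful; the helper objects dnn and adaptive_filter and their interfaces are hypothetical stand-ins, not components defined by the disclosure.

```python
def deep_adaptive_aec_frame(playback_frame, mic_frame, dnn, adaptive_filter):
    """Hypothetical per-frame pipeline mirroring steps 134-140."""
    # Step 134: the first model produces step-size data and mask data.
    step_size, mask = dnn(playback_frame, mic_frame)
    # Step 136: reference audio data is the mask applied to the microphone data.
    reference = mask * mic_frame
    # Step 138: the adaptive filter, adapted using the step-size data,
    # generates the estimated echo data from the reference audio data.
    estimated_echo = adaptive_filter.update(reference, mic_frame, step_size)
    # Step 140: subtract the estimated echo data to obtain the output audio data.
    return mic_frame - estimated_echo
```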
An audio signal is a representation of sound and an electronic representation of an audio signal may be referred to as audio data, which may be analog and/or digital without departing from the disclosure. For ease of illustration, the disclosure may refer to either audio data (e.g., microphone audio data, input audio data, etc.) or audio signals (e.g., microphone audio signal, input audio signal, etc.) without departing from the disclosure. Additionally or alternatively, portions of a signal may be referenced as a portion of the signal or as a separate signal and/or portions of audio data may be referenced as a portion of the audio data or as separate audio data. For example, a first audio signal may correspond to a first period of time (e.g., 30 seconds) and a portion of the first audio signal corresponding to a second period of time (e.g., 1 second) may be referred to as a first portion of the first audio signal or as a second audio signal without departing from the disclosure. Similarly, first audio data may correspond to the first period of time (e.g., 30 seconds) and a portion of the first audio data corresponding to the second period of time (e.g., 1 second) may be referred to as a first portion of the first audio data or second audio data without departing from the disclosure. Audio signals and audio data may be used interchangeably, as well; a first audio signal may correspond to the first period of time (e.g., 30 seconds) and a portion of the first audio signal corresponding to a second period of time (e.g., 1 second) may be referred to as first audio data without departing from the disclosure.
In some examples, the audio data may correspond to audio signals in a time-domain. However, the disclosure is not limited thereto and the device 110 may convert these signals to a subband-domain or a frequency-domain prior to performing additional processing, such as adaptive feedback reduction (AFR) processing, acoustic echo cancellation (AEC), adaptive interference cancellation (AIC), noise reduction (NR) processing, tap detection, and/or the like. For example, the device 110 may convert the time-domain signal to the subband-domain by applying a bandpass filter or other filtering to select a portion of the time-domain signal within a desired frequency range. Additionally or alternatively, the device 110 may convert the time-domain signal to the frequency-domain using a Fast Fourier Transform (FFT) and/or the like.
As used herein, audio signals or audio data (e.g., microphone audio data, or the like) may correspond to a specific range of frequency bands. For example, the audio data may correspond to a human hearing range (e.g., 20 Hz-20 kHz), although the disclosure is not limited thereto.
As used herein, a frequency band (e.g., frequency bin) corresponds to a frequency range having a starting frequency and an ending frequency. Thus, the total frequency range may be divided into a fixed number (e.g., 256, 512, etc.) of frequency ranges, with each frequency range referred to as a frequency band and corresponding to a uniform size. However, the disclosure is not limited thereto and the size of the frequency band may vary without departing from the disclosure.
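As a brief worked example of uniform frequency bands, assuming a 16 kHz sampling rate and a 512-point FFT (both values are illustrative, not required by the disclosure):

```python
# Worked example of uniform frequency bands.
sample_rate = 16000
fft_size = 512
num_bins = fft_size // 2 + 1          # 257 one-sided frequency bins
bin_width = sample_rate / fft_size    # 31.25 Hz per uniform frequency band
print(num_bins, bin_width)
```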
While the microphone audio data z(t) 210 comprises a plurality of samples, in some examples the device 110 may group a plurality of samples and process them together.
In some examples, the device 110 may convert microphone audio data z(t) 210 from the time-domain to the subband-domain. For example, the device 110 may use a plurality of bandpass filters to generate microphone audio data z(t, k) in the subband-domain, with an individual bandpass filter centered on a narrow frequency range. Thus, a first bandpass filter may output a first portion of the microphone audio data z(t) 210 as a first time-domain signal associated with a first subband (e.g., first frequency range), a second bandpass filter may output a second portion of the microphone audio data z(t) 210 as a time-domain signal associated with a second subband (e.g., second frequency range), and so on, such that the microphone audio data z(t, k) comprises a plurality of individual subband signals (e.g., subbands). As used herein, a variable z(t, k) corresponds to the subband-domain signal and identifies an individual sample associated with a particular time t and tone index k.
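A minimal sketch of such a subband decomposition, assuming SciPy Butterworth bandpass filters and illustrative band edges (the disclosure does not specify a particular filter design):

```python
import numpy as np
from scipy.signal import butter, sosfilt

sample_rate = 16000
mic_time = np.random.randn(sample_rate)            # z(t): one second of audio
edges = np.linspace(100, 7900, 9)                  # 8 illustrative subbands

subbands = []
for lo, hi in zip(edges[:-1], edges[1:]):
    # One bandpass filter per subband, centered on a narrow frequency range.
    sos = butter(4, [lo, hi], btype="bandpass", fs=sample_rate, output="sos")
    subbands.append(sosfilt(sos, mic_time))        # z(t, k) for subband k

subband_signals = np.stack(subbands)               # shape: (num_subbands, samples)
```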
For ease of illustration, the previous description illustrates an example of converting microphone audio data z(t) 210 in the time-domain to microphone audio data z(t, k) in the subband-domain. However, the disclosure is not limited thereto, and the device 110 may convert microphone audio data z(n) 212 in the time-domain to microphone audio data z(n, k) in the subband-domain without departing from the disclosure.
Additionally or alternatively, the device 110 may convert microphone audio data z(n) 212 from the time-domain to a frequency-domain. For example, the device 110 may perform Discrete Fourier Transforms (DFTs) (e.g., Fast Fourier transforms (FFTs), short-time Fourier Transforms (STFTs), and/or the like) to generate microphone audio data Z(n, k) 214 in the frequency-domain. As used herein, a variable Z(n, k) corresponds to the frequency-domain signal and identifies an individual frame associated with frame index n and tone index k.
A Fast Fourier Transform (FFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of a signal, and performing the FFT produces a one-dimensional vector of complex numbers. This vector can be used to determine a frequency magnitude for each frequency bin. In some examples, the system 100 may perform FFT on individual frames of audio data and generate a one-dimensional and/or a two-dimensional matrix corresponding to the microphone audio data Z(n). However, the disclosure is not limited thereto and the system 100 may instead perform short-time Fourier transform (STFT) operations without departing from the disclosure. A short-time Fourier transform is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time.
Using a Fourier transform, a sound wave such as music or human speech can be broken down into its component “tones” of different frequencies, each tone represented by a sine wave of a different amplitude and phase. Whereas a time-domain sound wave (e.g., a sinusoid) would ordinarily be represented by the amplitude of the wave over time, a frequency-domain representation of that same waveform comprises a plurality of discrete amplitude values, where each amplitude value is for a different tone or “bin.” So, for example, if the sound wave consisted solely of a pure sinusoidal 1 kHz tone, then the frequency-domain representation would consist of a discrete amplitude spike in the bin containing 1 kHz, with the other bins at zero. In other words, each tone “k” is a frequency index (e.g., frequency bin).
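The 1 kHz example can be sketched as follows, assuming a 16 kHz sampling rate and a 512-point FFT so that the tone falls exactly on a bin boundary:

```python
import numpy as np

sample_rate, fft_size = 16000, 512
t = np.arange(fft_size) / sample_rate
tone = np.sin(2 * np.pi * 1000 * t)                  # pure 1 kHz sinusoid

spectrum = np.abs(np.fft.rfft(tone))                 # one-sided magnitude spectrum
peak_bin = int(np.argmax(spectrum))
print(peak_bin * sample_rate / fft_size)             # ~1000.0 Hz: the "1 kHz bin"
```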
The system 100 may include multiple microphones, with a first channel m corresponding to a first microphone (e.g., m=1), a second channel (m+1) corresponding to a second microphone (e.g., m=2), and so on until a final channel (M) that corresponds to the final microphone (e.g., m=M).
Prior to converting the microphone audio data z(n) and the playback audio data x(n) to the frequency-domain, the device 110 may first perform time-alignment to align the playback audio data x(n) with the microphone audio data z(n). For example, due to nonlinearities and variable delays associated with sending the playback audio data x(n) to loudspeaker(s) using a wired and/or wireless connection, the playback audio data x(n) may not be synchronized with the microphone audio data z(n). This lack of synchronization may be due to a propagation delay (e.g., fixed time delay) between the playback audio data x(n) and the microphone audio data z(n), clock jitter and/or clock skew (e.g., difference in sampling frequencies between the device 110 and the loudspeaker(s)), dropped packets (e.g., missing samples), and/or other variable delays.
To perform the time alignment, the device 110 may adjust the playback audio data x(n) to match the microphone audio data z(n). For example, the device 110 may adjust an offset between the playback audio data x(n) and the microphone audio data z(n) (e.g., adjust for propagation delay), may add/subtract samples and/or frames from the playback audio data x(n) (e.g., adjust for drift), and/or the like. In some examples, the device 110 may modify both the microphone audio data z(n) and the playback audio data x(n) in order to synchronize the microphone audio data z(n) and the playback audio data x(n). However, performing nonlinear modifications to the microphone audio data z(n) would result in first microphone audio data z1(n) associated with a first microphone no longer being synchronized with second microphone audio data z2(n) associated with a second microphone. Thus, the device 110 may instead modify only the playback audio data x(n) so that the playback audio data x(n) is synchronized with the first microphone audio data z1(n).
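A simplified sketch of estimating a fixed propagation delay by cross-correlation and shifting the playback audio data accordingly; real systems may also compensate for drift and dropped packets, which this sketch omits, and the function name is illustrative.

```python
import numpy as np

def align_playback(playback, mic, max_delay):
    # Assumes playback and mic are equal-length 1-D arrays and that the echo in
    # the microphone signal lags the playback by at most max_delay samples.
    seg_len = len(playback) - max_delay
    scores = [np.dot(mic[lag:lag + seg_len], playback[:seg_len])
              for lag in range(max_delay)]
    delay = int(np.argmax(scores))                     # estimated propagation delay
    # Shift the playback by the estimated delay so it lines up with the mic.
    aligned = np.concatenate([np.zeros(delay), playback])[:len(mic)]
    return aligned, delay
```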
Additionally or alternatively, the first audio frame and the second audio frame may be distinct without overlapping, but the device 110 may determine power value calculations using overlapping audio frames. For example, a first power value calculation associated with the first audio frame may be calculated using a first portion of audio data (e.g., first audio frame and n previous audio frames) corresponding to a fixed time window, while a second power calculation associated with the second audio frame may be calculated using a second portion of the audio data (e.g., second audio frame, first audio frame, and n-1 previous audio frames) corresponding to the fixed time window. Thus, subsequent power calculations include n overlapping audio frames.
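A minimal sketch of computing power values over a fixed window of the current frame plus the n previous frames, so that consecutive power values share n overlapping frames (the frame layout is illustrative):

```python
import numpy as np

def windowed_power(frames, n):
    # frames: per-frame audio samples, shape (num_frames, frame_len)
    powers = []
    for i in range(len(frames)):
        start = max(0, i - n)                 # current frame and up to n previous
        window = frames[start:i + 1]
        powers.append(np.mean(window ** 2))   # average power over the fixed window
    return np.array(powers)
```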
In some examples, the adaptive filtering algorithm may be represented as a differentiable layer within a DNN framework, enabling the gradients to flow through the adaptive layer during back propagation. Thus, inner layers of the DNN may be trained to estimate a playback reference signal and time-varying learning factors (e.g., step-size data) using a target signal as a ground truth.
The reference signal X′k,m may be generated by applying the mask data Mk,m generated by the DNN 320 to the microphone signal Yk,m, as shown below:
X′k,m=Mk,m·|Yk,m|·e^(jθYk,m) [2]
where X′k,m denotes the reference signal, Mk,m denotes the mask data, and |Yk,m| and θYk,m denote the magnitude and the phase of the microphone signal Yk,m, respectively.
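A minimal sketch of this masking operation, assuming a real-valued mask in [0, 1] and a complex STFT-domain microphone signal:

```python
import numpy as np

def apply_mask(mask, mic_stft):
    # Equation [2]: scale the microphone magnitude by the mask and keep the
    # microphone phase, yielding the nonlinear reference signal X'_k,m.
    return mask * np.abs(mic_stft) * np.exp(1j * np.angle(mic_stft))
```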
In some examples, the reference signal X′k,m corresponds to only the echo components (e.g., Dk,m) and does not include near-end content (e.g., local speech and/or noise). However, the disclosure is not limited thereto, and in other examples the reference signal X′k,m may correspond to both the echo components (e.g., Dk,m) and the noise components (e.g., Nk,m) without departing from the disclosure.
As described above, values of the mask data Mk,m may range from a first value (e.g., 0) to a second value (e.g., 1), such that the mask data Mk,m has a value range of [0, 1]. For example, the first value (e.g., 0) may indicate that a corresponding portion of the microphone signal Yk,m will be completely attenuated or ignored (e.g., masked), while the second value (e.g., 1) may indicate that a corresponding portion of the microphone signal Yk,m will be passed completely without attenuation. Thus, applying the mask data Mk,m to the microphone signal Yk,m may remove at least a portion of the speech components (e.g., Sk,m) while leaving a majority of the echo components (e.g., Dk,m) in the reference signal X′k,m.
Similarly, values of the step-size data μk,m may range from the first value (e.g., 0) to the second value (e.g., 1), such that the step-size data μk,m has a value range of [0, 1]. However, while the mask data Mk,m corresponds to an intensity of the mask (e.g., mask value indicates an amount of attenuation to apply to the microphone signal Yk,m), the step-size data μk,m corresponds to an amount of adaptation to perform by the adaptive filter (e.g., how quickly the adaptive filter modifies adaptive filter coefficients). For example, the first value (e.g., 0) may correspond to performing a small amount of adaptation and/or freezing the adaptive coefficient values of the adaptive filter, whereas the second value (e.g., 1) may correspond to a large amount of adaptation and/or rapidly modifying the adaptive coefficient values.
Referring back to the deep adaptive AEC processing 300, the DNN 320 may process the microphone signal Yk,m and the playback signal Xk,m to estimate the step-size data μk,m and the reference signal X′k,m, as shown below:
μk,m=ƒ(Yk,m, Xk,m) [3]
X′k,m=g(Yk,m, Xk,m) [4]
where ƒ(·) and g(·) represent the nonlinear transform functions learned by the DNN 320 for estimating the step-size data μk,m and the reference signal X′k,m, respectively. The DNN 320 may output the step-size data μk,m and the reference signal X′k,m to the linear AEC component 330.
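One possible sketch of such a model, assuming a recurrent network over log-magnitude features with sigmoid output heads; the layer sizes and topology are assumptions and are not specified by the disclosure:

```python
import torch
import torch.nn as nn

class StepSizeMaskDNN(nn.Module):
    def __init__(self, num_bins=257, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=2 * num_bins, hidden_size=hidden, batch_first=True)
        self.mask_head = nn.Linear(hidden, num_bins)       # -> mask data M_k,m
        self.step_head = nn.Linear(hidden, num_bins)       # -> step-size data mu_k,m

    def forward(self, mic_mag, play_mag):
        # mic_mag, play_mag: (batch, frames, bins) log-magnitude spectrograms
        feats, _ = self.rnn(torch.cat([mic_mag, play_mag], dim=-1))
        mask = torch.sigmoid(self.mask_head(feats))        # values in [0, 1]
        step = torch.sigmoid(self.step_head(feats))        # values in [0, 1]
        return step, mask
```

The reference signal X′k,m may then be formed by applying the mask output to the microphone signal as in equation [2].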
In certain aspects, an AEC component may be configured to receive the playback signal Xk,m and generate an estimated echo signal based on the playback signal Xk,m itself (e.g., by applying adaptive filters to the playback signal Xk,m to model the acoustic echo path). However, this models the estimated echo signal using a linear system, which suffers from degraded performance when nonlinear and time-varying echo signals and/or noise signals are present. For example, the linear system may be unable to model echo signals that vary based on how the echo signals reflect from walls and other acoustically reflective surfaces in the environment as the device 110 is moving.
To improve performance even when nonlinear and time-varying echo signals and/or noise signals are present, the linear AEC component 330 performs echo cancellation using the nonlinear reference signal X′k,m generated by the DNN 320. Thus, instead of estimating the real acoustic echo path, the linear AEC component 330 may be configured to estimate a transfer function between the estimated nonlinear reference signal X′k,m and the echo signal Dk,m.
The linear AEC component 330 may generate the estimated echo signal D̂k,m and the error signal Ek,m, and may update the adaptive filter using the step-size data μk,m, as shown below:
Ek,m=Yk,m−D̂k,m=Yk,m−Ŵk,m^H·X′k,m [5]
Ŵk+1,m=Ŵk,m+μk,m·X′k,m·Ek,m*/(X′k,m^H·X′k,m+ϵ) [6]
where Ek,m denotes the error signal, Yk,m denotes the microphone signal, D̂k,m denotes the estimated echo signal, X′k,m denotes the reference signal, Ŵk,m denotes an adaptive filter of length L, μk,m denotes the step-size, ϵ denotes a regularization parameter, the superscript H represents a conjugate transpose, and the superscript * represents a complex conjugate. In some examples, the linear AEC component 330 may be implemented as a differentiable layer with no trainable parameters, enabling gradients to flow through it and train the DNN parameters associated with the DNN 320.
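A per-frequency sketch of this update, assuming an adaptive filter over the last L reference frames and plain NumPy arrays (the function name and argument layout are illustrative):

```python
import numpy as np

def adaptive_filter_step(w, x_ref_taps, y, step_size, eps=1e-6):
    """w: (L,) complex filter; x_ref_taps: (L,) last L reference frames X'_k,m;
    y: complex microphone value Y_k,m; step_size: mu_k,m from the DNN."""
    d_hat = np.vdot(w, x_ref_taps)                 # estimated echo, W^H X'
    e = y - d_hat                                  # error signal E_k,m
    norm = np.vdot(x_ref_taps, x_ref_taps).real + eps
    w_new = w + step_size * x_ref_taps * np.conj(e) / norm
    return w_new, d_hat, e
```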
As described above, the step-size data μk,m determines the learning rate of the adaptive filter and therefore needs to be chosen carefully to guarantee the convergence of the system and achieve acceptable echo removal. The deep adaptive AEC processing 300 improves echo removal by training the DNN 320 to generate the step-size data μk,m based on both the reference signal X′k,m and the microphone signal Yk,m, such that the step-size data μk,m (i) increases adaptation when the speech components are not present in the microphone signal Yk,m and (ii) freezes and/or slows adaptation when speech components are present in the microphone signal Yk,m. In addition, the deep adaptive AEC processing 300 improves echo removal by training the DNN 320 to generate the nonlinear reference signal X′k,m.
After the adaptive filter 335 uses the step-size data μk,m and the reference signal X′k,m to generate the estimated echo signal D̂k,m, the canceler component 340 may subtract the estimated echo signal D̂k,m from the microphone signal Yk,m to generate the error signal Ek,m.
Using the adaptive filter 335 and/or the canceler component 340, the linear AEC component 330 may generate the estimated echo signal D̂k,m and remove the estimated echo signal D̂k,m from the microphone signal Yk,m to generate the error signal Ek,m. Thus, if the estimated echo signal D̂k,m corresponds to a representation of the echo signal Dk,m, the device 110 effectively cancels the echo signal Dk,m, such that the error signal Ek,m includes a representation of the speech signal Sk,m without residual echo. However, if the estimated echo signal D̂k,m does not accurately correspond to a representation of the echo signal Dk,m, the device 110 may only cancel a portion of the echo signal Dk,m, such that the error signal Ek,m includes a representation of the speech signal Sk,m along with a varying amount of residual echo. The residual echo may depend on several factors, such as distance(s) between loudspeaker(s) and microphone(s), a Signal to Echo Ratio (SER) value of the input to the AFE component, loudspeaker distortions, echo path changes, convergence/tracking speed, and/or the like, although the disclosure is not limited thereto.
The deep adaptive AEC processing 300 may train the DNN 320 using a loss function 350 that compares the error signal Ek,m to a target signal Tk,m 355, as shown below:
Loss=MSE(Ek,m, Tk,m) [7]
where Loss denotes the loss function 350, Ek,m denotes the error signal, Tk,m denotes the target signal 355, and MSE denotes the mean squared error between the error signal Ek,m and the target signal Tk,m. In some examples, the device 110 may perform echo removal, such that the estimated echo signal D̂k,m corresponds to the echo components (e.g., Dk,m). However, the disclosure is not limited thereto, and in other examples the device 110 may perform joint echo and noise removal, such that the estimated echo signal D̂k,m corresponds to both the echo components (e.g., Dk,m) and the noise components (e.g., Nk,m). For example, during echo removal 610 the target signal Tk,m 355 may correspond to both the speech components (e.g., Sk,m) and the noise components (e.g., Nk,m), as shown below:
Tk,m=Sk,m+Nk,m [8a]
D̂k,m≈Dk,m [8b]
where Tk,m denotes the target signal, Sk,m denotes the speech signal (e.g., representation of local speech), Nk,m denotes the noise signal (e.g., representation of acoustic noise captured by the device 110), D̂k,m denotes the estimated echo signal generated by the linear AEC component 330, and Dk,m denotes the echo signal (e.g., representation of the playback audio recaptured by the device 110). Training the model using this target signal Tk,m focuses on echo removal without performing noise reduction, and the estimated echo signal D̂k,m approximates the echo signal Dk,m.
In contrast, in other examples the device 110 may perform joint echo and noise removal 620, such that the target signal Tk,m 355 corresponds to only the speech components (e.g., Sk,m). As a result, the estimated echo signal D̂k,m corresponds to both the echo components (e.g., Dk,m) and the noise components (e.g., Nk,m), as shown below:
Tk,m=Sk,m [9a]
D̂k,m≈Dk,m+Nk,m [9b]
Thus, the estimated echo signal D̂k,m may correspond to (i) the echo signal Dk,m during echo removal 610 or (ii) a combination of the echo signal Dk,m and the noise signal Nk,m during joint echo and noise removal 620. Training the model using this target signal Tk,m achieves joint echo and noise removal, and the estimated echo signal D̂k,m approximates a combination of the echo signal Dk,m and the noise signal Nk,m (e.g., background noise). Therefore, the error signal Ek,m corresponds to an estimate of the speech signal Sk,m (e.g., near-end speech) with the echo and noise jointly removed from the microphone signal Yk,m.
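A small sketch of the two target choices and the loss of equation [7], assuming STFT-domain arrays; the function names are illustrative:

```python
import numpy as np

def training_target(speech, noise, joint_echo_and_noise_removal):
    # Echo removal only: target keeps speech + noise (equation [8a]);
    # joint echo and noise removal: target is speech only (equation [9a]).
    return speech if joint_echo_and_noise_removal else speech + noise

def mse_loss(error, target):
    # Equation [7]: mean squared error between the error signal and the target.
    return np.mean(np.abs(error - target) ** 2)
```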
While the linear AEC component 330 corresponds to a differentiable signal processing layer, enabling back propagation from the loss function 350 to the DNN 320, the deep adaptive AEC processing 300 does not train the DNN 320 using ground truths for the step-size data μk,m or the reference signal X′k,m. For example, in a simple system including only the DNN 320, the DNN 320 may be trained by inputting a first portion of training data (e.g., a training playback signal and a training microphone signal) to the DNN 320 to generate the step-size data μk,m and the reference signal X′k,m, and then comparing the step-size data μk,m and the reference signal X′k,m output by the DNN 320 to a second portion of the training data (e.g., known values for the step-size and the reference signal). Thus, the second portion of the training data would correspond to step-size values and reference signal values that act as a ground truth by which to train the DNN 320.
In contrast, the deep adaptive AEC processing 300 trains the DNN 320 using the loss function 350 with the target signal Tk,m 355 as a ground truth. For example, the device 110 may train the DNN 320 by inputting a first portion of training data (e.g., a training playback signal and a training microphone signal) to the DNN 320 to generate the step-size data μk,m and the reference signal X′k,m, processing the step-size data μk,m and the reference signal X′k,m to generate the error signal Ek,m, and then comparing the error signal Ek,m to a second portion of the training data (e.g., known values for the target signal Tk,m 355). Thus, the second portion of the training data would correspond to the target signal Tk,m 355 that acts as a ground truth by which to train the DNN 320. During the inference stage, the parameters of the DNN 320 are fixed while the linear AEC component 330 updates its filter coefficients adaptively using the step-size data μk,m and the reference signal X′k,m.
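The following self-contained sketch illustrates one training step under stated assumptions: a tiny stand-in network replaces the DNN 320, a single frequency bin and a filter length of one are used for brevity, and the adaptive update is written with plain tensor operations so that gradients can propagate back to the network parameters.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
frames = 50
mic = torch.randn(frames, 2)          # real/imag parts of Y_k,m for one bin
play = torch.randn(frames, 2)         # real/imag parts of X_k,m for one bin
target = torch.randn(frames, 2)       # target signal T_k,m (ground truth placeholder)

dnn = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 3))  # stand-in for the DNN 320
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)

w = torch.zeros(2)                    # adaptive filter (real/imag), no trainable parameters
errors = []
for m in range(frames):
    out = dnn(torch.cat([mic[m], play[m]]))
    step = torch.sigmoid(out[0])                      # step-size mu_k,m in [0, 1]
    ref = torch.tanh(out[1:])                         # stand-in reference X'_k,m (real/imag)
    # Estimated echo: conj(w) * ref, written out in real/imag form (filter length 1)
    d_hat = torch.stack([w[0] * ref[0] + w[1] * ref[1],
                         w[0] * ref[1] - w[1] * ref[0]])
    e = mic[m] - d_hat                                # error signal E_k,m
    norm = (ref * ref).sum() + 1e-6                   # |X'|^2 plus regularization
    # NLMS-style update w <- w + mu * ref * conj(e) / norm, kept in the autograd graph
    w = w + step * torch.stack([ref[0] * e[0] + ref[1] * e[1],
                                ref[1] * e[0] - ref[0] * e[1]]) / norm
    errors.append(e)

loss = torch.mean((torch.stack(errors) - target) ** 2)   # MSE against the target signal
loss.backward()                                           # gradients flow through the adaptive layer
opt.step()                                                # update only the stand-in DNN parameters
```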
The combination of the DNN 320 and the linear AEC component 330 improves the deep adaptive AEC processing 300 in multiple ways. For example, the DNN 320 may compensate for nonlinear and time-varying distortions and generate a nonlinear reference signal X′k,m. With the nonlinear reference signal X′k,m and the time-frequency dependent step-size values that the DNN 320 is trained to produce, the linear AEC component 330 is equipped to model echo path variations. Thus, from a signal processing perspective, the deep adaptive AEC processing 300 can be interpreted as an adaptive AEC with its reference signal and step-size estimated by the DNN 320. From a deep learning perspective, the linear AEC component 330 can be interpreted as a non-trainable layer within a DNN framework. Integrating these interpretable and more constrained linear AEC elements into the more general and expressive DNN framework encodes structural knowledge in the model and makes model training easier.
In some examples, the device 110 may generate the training data used to train the DNN 320 by separately generating a speech signal (e.g., Sk,m), an echo signal (e.g., Dk,m), and a noise signal (e.g., Nk,m). For example, the echo signal may be generated by outputting playback audio on a mobile platform and recording the actual echoes of the playback audio as first audio data. This echo signal may be combined with second audio data representing speech (e.g., an utterance) and third audio data representing noise to generate the microphone signal Yk,m. Thus, the microphone signal Yk,m corresponds to a digital combination of the first audio data, the second audio data, and the third audio data, and the device 110 may select the target signal Tk,m 355 as either a combination of the second audio data and the third audio data (e.g., for echo removal) or only the second audio data (e.g., for joint echo and noise removal), although the disclosure is not limited thereto.
In some examples, instead of outputting the mask data Mk,m, the DNN 320 may output the reference signal X′k,m directly.
In some examples, the DNN framework 710 may not explicitly generate the reference signal X′k,m.
Multiple systems (120/125) may be included in the system 100 of the present disclosure, such as one or more remote systems 120 for performing ASR processing, one or more remote systems 120 for performing NLU processing, and one or more skill components 125, etc. In operation, each of these systems may include computer-readable and computer-executable instructions that reside on the respective device (120/125), as will be discussed further below.
Each of these devices (110/120/125) may include one or more controllers/processors (904/1004), which may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory (906/1006) for storing data and instructions of the respective device. The memories (906/1006) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive memory (MRAM), and/or other types of memory. Each device (110/120/125) may also include a data storage component (908/1008) for storing data and controller/processor-executable instructions. Each data storage component (908/1008) may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each device (110/120/125) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces (902/1002).
Computer instructions for operating each device (110/120/125) and its various components may be executed by the respective device's controller(s)/processor(s) (904/1004), using the memory (906/1006) as temporary “working” storage at runtime. A device's computer instructions may be stored in a non-transitory manner in non-volatile memory (906/1006), storage (908/1008), or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software.
Each device (110/120/125) includes input/output device interfaces (902/1002). A variety of components may be connected through the input/output device interfaces (902/1002), as will be discussed further below. Additionally, each device (110/120/125) may include an address/data bus (924/1024) for conveying data among components of the respective device. Each component within a device (110/120/125) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus (924/1024).
Via antenna(s) 914, the input/output device interfaces 902 may connect to one or more networks 199 via a wireless local area network (WLAN) (such as Wi-Fi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, 4G network, 5G network, etc. A wired connection such as Ethernet may also be supported. Through the network(s) 199, the system may be distributed across a networked environment. The I/O device interface (902/1002) may also include communication components that allow data to be exchanged between devices such as different physical servers in a collection of servers or other components.
The components of the device 110, the remote system 120, and/or a skill component 125 may include their own dedicated processors, memory, and/or storage. Alternatively, one or more of the components of the device 110, the remote system 120, and/or a skill component 125 may utilize the I/O interfaces (902/1002), processor(s) (904/1004), memory (906/1006), and/or storage (908/1008) of the device(s) 110, system 120, or the skill component 125, respectively.
As noted above, multiple devices may be employed in a single system. In such a multi-device system, each of the devices may include different components for performing different aspects of the system's processing. The multiple devices may include overlapping components. The components of the device 110, the remote system 120, and a skill component 125, as described herein, are illustrative, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system.
Other devices are included as network-connected support devices, such as the remote system 120 and/or other devices (not illustrated). The support devices may connect to the network(s) 199 through a wired connection or wireless connection. The devices 110 may capture audio using one or more built-in or connected microphones or other audio capture devices, with processing performed by ASR components, NLU components, or other components of the same device or another device connected via the network(s) 199, such as an ASR component, NLU component, etc. of the remote system 120.
The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, and distributed computing environments.
The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers and speech processing should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art, that the disclosure may be practiced without some or all of the specific details and steps disclosed herein.
Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, components of the system may be implemented in firmware or hardware, such as an Audio Front End (AFE), which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)).
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.