The invention relates to audio devices, and more particularly, to an audio device with an end-to-end neural network for suppressing distractor speech.
No matter how good a hearing aid is, it always sounds like a hearing aid. A significant cause of this is the “comb-filter effect,” which arises because the digital signal processing in the hearing aid delays the amplified sound relative to the leak-path/direct sound that enters the ear through venting in the ear tip and any leakage around it. As is well known in the art, the sound through the leak path (i.e., the direct sound) can be removed by introducing Active Noise Cancellation (ANC). Once the direct sound is cancelled, the comb-filter effect is mitigated. Theoretically, the ANC circuit may operate in the time domain or the frequency domain. Normally, the ANC circuit in a hearing aid includes one or more time-domain filters because the signal processing delay of the ANC circuit is typically required to be less than 50 μs. For an ANC circuit operating in the frequency domain, the short-time Fourier transform (STFT) and inverse STFT processes, together with the ANC processing itself, contribute signal processing delays ranging from 5 to 50 milliseconds (ms), far exceeding that requirement. However, most state-of-the-art audio algorithms manipulate audio signals in the frequency domain for advanced audio signal processing.
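By way of illustration only, the comb-filter effect described above can be reproduced numerically: summing the leak-path (direct) sound with a copy delayed by the processing latency produces spectral notches near odd multiples of 1/(2·delay). The short Python sketch below assumes a 5 ms delay (a value within the 5–50 ms range mentioned above) and equal levels for the two paths; these values are illustrative assumptions, not parameters of the invention.

```python
import numpy as np

# Sum of the leak-path (direct) sound and a copy delayed by the processing
# latency: |1 + exp(-j*2*pi*f*d)| has notches near odd multiples of 1/(2*d).
delay_s = 5e-3                                   # assumed 5 ms processing delay (illustrative)
f = np.linspace(0.0, 4000.0, 2001)               # frequency axis in Hz
response_db = 20 * np.log10(np.abs(1 + np.exp(-1j * 2 * np.pi * f * delay_s)) + 1e-12)
notches_hz = (2 * np.arange(5) + 1) / (2 * delay_s)
print("first comb-filter notches (Hz):", notches_hz)   # 100, 300, 500, 700, 900 Hz
```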
On the other hand, although a conventional artificial intelligence (AI) noise suppressor can suppress non-voice noise, such as traffic and environmental noise, it has difficulty suppressing distractor speech. The most critical case is that a speech distractor 230 is located at 0 degrees relative to a user 210 carrying a smart phone 220 and wearing a pair of wireless earbuds 240, as shown in
What is needed is an audio device that integrates time-domain and frequency-domain audio signal processing, performs ANC, advanced audio signal processing, acoustic echo cancellation and distractor suppression, and improves audio quality.
In view of the above-mentioned problems, an object of the invention is to provide an audio device capable of suppressing distractor speech, cancelling acoustic echo and improving audio quality.
One embodiment of the invention provides an audio device. The audio device comprises: multiple microphones and an audio module. The multiple microphones generate multiple audio signals. The audio module coupled to the multiple microphones comprises at least one processor, at least one storage media and a post-processing circuit. The at least one storage media includes instructions operable to be executed by the at least one processor to perform operations comprising: producing multiple instantaneous relative transfer functions (IRTFs) using a known adaptive algorithm according to multiple mic spectral representations for multiple first sample values in current frames of the multiple audio signals; and, performing distractor suppression over the multiple mic spectral representations and the multiple IRTFs using an end-to-end neural network to generate a compensation mask. The post-processing circuit generates an audio output signal according to the compensation mask. Each IRTF represents a difference in sound propagation between each predefined microphone and a reference microphone of the multiple microphones relative to at least one sound source. Each predefined microphone is different from the reference microphone.
Another embodiment of the invention provides an audio apparatus. The audio apparatus comprises: two audio devices that are arranged at two different source devices. The two output audio signals from the two audio devices are respectively sent to a sink device over a first connection link and a second connection link.
Another embodiment of the invention provides an audio processing method. The audio processing method comprises: producing multiple instantaneous relative transfer functions (IRTFs) using a first known adaptive algorithm according to multiple mic spectral representations for multiple first sample values in current frames of multiple audio signals from multiple microphones; performing distractor suppression over the multiple mic spectral representations and the multiple IRTFs using an end-to-end neural network to generate a compensation mask; and, obtaining an audio output signal according to the compensation mask; wherein each IRTF represents a difference in sound propagation between each predefined microphone and a reference microphone of the multiple microphones relative to at least one sound source. Each predefined microphone is different from the reference microphone.
Further scope of the applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present invention, and wherein:
As used herein and in the claims, the term “and/or” includes any and all combinations of one or more of the associated listed items. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Throughout the specification, the same components with the same function are designated with the same reference numerals.
As used herein and in the claims, the term “sink device” refers to a device implemented to establish a first connection link with one or two source devices so as to receive audio data from the one or two source devices, and implemented to establish a second connection link with another sink device so as to transmit audio data to the another sink device. Examples of the sink device include, but are not limited to, a personal computer, a laptop computer, a mobile device, a wearable device, an Internet of Things (IoT) device/hub and an Internet of Everything (IoE) device/hub. The term “source device” refers to a device having an embedded microphone and implemented to originate, transmit and/or receive audio data over connection links with the other source device or the sink device. Examples of the source device include, but are not limited to, a headphone, an earbud and one side of a headset. The types of headphones and headsets include, but are not limited to, over-ear, on-ear, clip-on and in-ear-monitor. The source device, the sink device and the connection links can be either wired or wireless. A wired connection link is made using a transmission line or cable. A wireless connection link can occur over any suitable communication link/network that enables the source devices and the sink device to communicate with each other over a communication medium. Examples of protocols that can be used to form communication links/networks include, but are not limited to, near-field communication (NFC) technology, radio-frequency identification (RFID) technology, Bluetooth, Bluetooth Low Energy (BLE), Wi-Fi technology, the Internet Protocol (“IP”) and Transmission Control Protocol (“TCP”).
A feature of the invention is to use an end-to-end neural network to simultaneously perform ANC functions and advanced audio signal processing, e.g., noise suppression, acoustic feedback cancellation (AFC), sound amplification, distractor suppression, acoustic echo cancellation (AEC) and so on. Another feature of the invention is that the end-to-end neural network receives a time-domain audio signal and a frequency-domain audio signal for each microphone so as to gain the benefits of both time-domain signal processing (e.g., extremely low system latency) and frequency-domain signal processing (e.g., better frequency analysis). In comparison with conventional ANC technology, which is most effective on lower frequencies of sound, e.g., between 50 and 1000 Hz, the end-to-end neural network of the invention can reduce both high-frequency noise and low-frequency noise. Another feature of the invention is to use multiple microphone signals from one or two source devices and/or a sink device and multiple IRTFs (described below) to suppress the distractor speech 230 in
In an embodiment, the audio device 10/60/70 may be a hearing aid, e.g., of the behind-the-ear (BTE) type, in-the-ear (ITE) type, in-the-canal (ITC) type, or completely-in-the-canal (CIC) type. The microphones 11˜1Q are used to collect ambient sound to generate Q audio signals au-1˜au-Q. The pre-processing unit 120 is configured to receive the Q audio signals au-1˜au-Q and generate the audio data of the current frames i of Q time-domain digital audio signals s1[n]˜sQ[n] and Q current spectral representations F1(i)˜FQ(i) corresponding to the audio data of the current frames i of the time-domain digital audio signals s1[n]˜sQ[n], where n denotes the discrete time index and i denotes the frame index of the time-domain digital audio signals s1[n]˜sQ[n]. The end-to-end neural network 130 receives input parameters, the Q current spectral representations F1(i)˜FQ(i), and the audio data for the current frames i of the Q time-domain signals s1[n]˜sQ[n], performs ANC and AFC functions, noise suppression and sound amplification to generate a frequency-domain compensation mask stream G1(i)˜GN(i) and the audio data of the current frame i of a time-domain digital data stream u[n]. The post-processing unit 150 receives the frequency-domain compensation mask stream G1(i)˜GN(i) and the audio data of the current frame i of the time-domain data stream u[n] to generate the audio data for the current frame i of a time-domain digital audio signal y[n], where N denotes the Fast Fourier transform (FFT) size. The output terminal of the post-processing unit 150 is coupled to the audio output circuit 160 via a second connection link 172, such as a transmission line or a Bluetooth/WiFi communication link. Finally, the audio output circuit 160 placed at a sink device or a source device converts the digital audio signal y[n] from the second connection link 172 into a sound pressure signal. Please note that the first connection links 171 and the second connection link 172 are not necessarily the same, and the audio output circuit 160 is optional.
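By way of illustration only, the following Python sketch shows one plausible way the framing and STFT step performed by the pre-processing unit 120 could be realized. The frame length, hop size and Hann window are illustrative assumptions; the text above does not specify them.

```python
import numpy as np

def preprocess_frame(s_q, i, N=256, hop=128):
    """Sketch of one framing + STFT step for a microphone signal s_q[n]:
    returns the audio data of current frame i and its spectral
    representation F_q(i). N, hop and the Hann window are assumptions."""
    frame = np.asarray(s_q[i * hop : i * hop + N], dtype=float)
    if len(frame) < N:                          # zero-pad a short final frame
        frame = np.pad(frame, (0, N - len(frame)))
    F_q_i = np.fft.fft(frame * np.hanning(N))   # N complex values for N frequency bands
    return frame, F_q_i

# Example: Q microphone signals -> Q current frames and Q spectral representations
Q, fs = 3, 16000
mic_signals = [np.random.randn(fs) for _ in range(Q)]   # stand-ins for s1[n]..sQ[n]
frames, spectra = zip(*(preprocess_frame(s, i=10) for s in mic_signals))
```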
The end-to-end neural network 130/630/730 may be implemented by a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a time delay neural network (TDNN) or any combination thereof. Various machine learning techniques associated with supervised learning may be used to train a model of the end-to-end neural network 130/630/730 (hereinafter called “model 130/630/730” for short). Example supervised learning techniques to train the end-to-end neural network 130/630/730 include, without limitation, stochastic gradient descent (SGD). In supervised learning, a function ƒ (i.e., the model 130) is created by using four sets of labeled training examples (described below), each of which consists of an input feature vector and a labeled output. The end-to-end neural network 130 is configured to use the four sets of labeled training examples to learn or estimate the function ƒ (i.e., the model 130), and then to update the model weights using the backpropagation algorithm in combination with a cost function. Backpropagation iteratively computes the gradient of the cost function with respect to each weight and bias, and then updates the weights and biases in the direction opposite to the gradient to find a local minimum. The goal of learning in the end-to-end neural network 130 is to minimize the cost function given the four sets of labeled training examples.
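By way of illustration only, the following Python (PyTorch) sketch shows a single supervised training step of the kind described above: a forward pass, a cost evaluated against the labeled output, backpropagation of the gradient, and an SGD weight update. The tiny placeholder model, the mean-squared-error cost and the learning rate are illustrative assumptions and do not represent the architecture or cost function of the end-to-end neural network 130.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)                  # SGD, illustrative lr
cost_fn = nn.MSELoss()                                                    # illustrative cost function

def train_step(input_feature_vector, labeled_output):
    """One supervised step: forward pass, cost, backpropagation, SGD update."""
    optimizer.zero_grad()
    prediction = model(input_feature_vector)
    cost = cost_fn(prediction, labeled_output)
    cost.backward()        # gradient of the cost w.r.t. each weight and bias
    optimizer.step()       # move weights/biases opposite to the gradient
    return cost.item()

# Example call with random stand-in data
loss = train_step(torch.randn(8, 64), torch.randn(8, 64))
```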
According to the input parameters, the end-to-end neural network 130 receives the Q current spectral representations F1(i)˜FQ(i) and the audio data of the current frames i of the Q time-domain input streams s1[n]˜sQ[n] in parallel, performs the ANC function and advanced audio signal processing, and generates one frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)) corresponding to N frequency bands and the audio data of the current frame i of one time-domain output sample stream u[n]. Here, the advanced audio signal processing includes, without limitation, noise suppression, AFC, sound amplification, alarm-preserving, environmental classification, direction of arrival (DOA) estimation and beamforming, speech separation and wearing detection. For purposes of clarity and ease of description, the following embodiments are described with the advanced audio signal processing including only noise suppression, AFC and sound amplification. However, it should be understood that the embodiments of the end-to-end neural network 130 are not so limited, but are generally applicable to other types of audio signal processing, such as environmental classification, direction of arrival (DOA) estimation and beamforming, speech separation and wearing detection.
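By way of illustration only, the sketch below captures the input/output contract just described, i.e., Q spectral representations plus Q time-domain frames in, an N-value compensation mask stream and one time-domain frame of u[n] out. The layer sizes, the sigmoid-bounded mask head and the simple fully connected structure are illustrative assumptions, not the actual structure of the end-to-end neural network 130.

```python
import torch
import torch.nn as nn

class EndToEndSketch(nn.Module):
    """Sketch of the I/O contract only: Q complex spectral representations
    and Q time-domain frames in; an N-value mask stream G1(i)..GN(i) and a
    time-domain frame of u[n] out. Sizes and layers are assumptions."""
    def __init__(self, Q=3, N=256, hidden=256):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(Q * 3 * N, hidden), nn.ReLU())
        self.mask_head = nn.Sequential(nn.Linear(hidden, N), nn.Sigmoid())  # N band gains
        self.time_head = nn.Linear(hidden, N)                               # N samples of u[n]

    def forward(self, spectra, time_frames):
        # spectra: (batch, Q, N) complex; time_frames: (batch, Q, N) real
        x = torch.cat([spectra.real.flatten(1), spectra.imag.flatten(1),
                       time_frames.flatten(1)], dim=1)
        h = self.shared(x)
        return self.mask_head(h), self.time_head(h)

net = EndToEndSketch()
mask, u_frame = net(torch.randn(2, 3, 256, dtype=torch.complex64), torch.randn(2, 3, 256))
```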
For the sound amplification function, the input parameters for the end-to-end neural network 130 include, without limitation, magnitude gains, a maximum output power value of the signal z[n] (i.e., the output of the inverse STFT 154) and a set of N modification gains g1˜gN corresponding to the N mask values G1(i)˜GN(i), where the N modification gains g1˜gN are used to modify the waveform of the N mask values G1(i)˜GN(i). For the noise suppression, AFC and ANC functions, the input parameters for the end-to-end neural network 130 include, without limitation, a level or strength of suppression. For the noise suppression function, the input data for a first set of labeled training examples are constructed artificially by adding various noise to clean speech data, and the ground truth (or labeled output) for each example in the first set of labeled training examples is a frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)) for the corresponding clean speech data. For the sound amplification function, the input data for a second set of labeled training examples are weak speech data, and the ground truth for each example in the second set of labeled training examples is a frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)) for the corresponding amplified speech data based on the corresponding input parameters (e.g., including a corresponding magnitude gain, a corresponding maximum output power value of the signal z[n] and a corresponding set of N modification gains g1˜gN). For the AFC function, the input data for a third set of labeled training examples are constructed artificially by adding various feedback interference data to clean speech data, and the ground truth for each example in the third set of labeled training examples is a frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)) for the corresponding clean speech data. For the ANC function, the input data for a fourth set of labeled training examples are constructed artificially by adding direct sound data to clean speech data, and the ground truth for each example in the fourth set of labeled training examples is N sample values of the time-domain denoised audio data u[n] for the corresponding clean speech data. For the speech data, speech from a wide range of people is collected, such as people of different genders, ages, races and language families. For the noise data, various sources of noise are used, including markets, computer fans, crowds, cars, airplanes, construction, etc. For the feedback interference data, interference data at various coupling levels between the loudspeaker 163 and the microphones 11˜1Q are collected. For the direct sound data, the sound from the inputs of the audio devices to the eardrums of a wide range of users is collected. During the process of artificially constructing the input data, each of the noise data, the feedback interference data and the direct sound data is mixed at different levels with the clean speech data to produce a wide range of SNRs for the four sets of labeled training examples.
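By way of illustration only, the following sketch shows the standard way of artificially constructing such training inputs: an interference signal (noise, feedback interference, direct sound, etc.) is scaled and added to clean speech so that a target SNR is obtained, and the target SNR is swept over a wide range. The helper name and the example SNR values are illustrative assumptions.

```python
import numpy as np

def mix_at_snr(clean_speech, interference, snr_db):
    """Scale the interference so that the mixture has the target SNR
    (power ratio of clean speech to scaled interference), then add it."""
    interference = interference[: len(clean_speech)]
    p_speech = np.mean(clean_speech ** 2)
    p_interf = np.mean(interference ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_interf * 10 ** (snr_db / 10.0)))
    return clean_speech + scale * interference

# Sweep a wide range of SNRs for one clean/interference pair (stand-in data)
clean = np.random.randn(16000)
noise = np.random.randn(16000)
training_inputs = {snr: mix_at_snr(clean, noise, snr) for snr in (-5, 0, 5, 10, 20)}
```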
Regarding the end-to-end neural network 130, in a training phase, the TDNN 131 and the FD-LSTM network 132 are jointly trained with the first, the second and the third sets of labeled training examples, each labeled as a corresponding frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)); the TDNN 131 and the TD-LSTM network 133 are jointly trained with the fourth set of labeled training examples, each labeled as N corresponding time-domain audio sample values. When trained, the TDNN 131 and the FD-LSTM network 132 can process new unlabeled audio data, for example audio feature vectors, to generate N corresponding frequency-domain mask values G1(i)˜GN(i) for the N frequency bands while the TDNN 131 and the TD-LSTM network 133 can process new unlabeled audio data, for example audio feature vectors, to generate N corresponding time-domain audio sample values for the current frame i of the signal u[n]. In one embodiment, the N mask values G1(i)˜GN(i) are N band gains (being bounded between Th1 and Th2; Th1<Th2) corresponding to the N frequency bands in the current spectral representations F1(i)˜FQ(i). Thus, if any band gain value Gk(i) gets close to Th1, it indicates the signal on the corresponding frequency band k is noise-dominant; if any band gain value Gk(i) gets close to Th2, it indicates the signal on the corresponding frequency band k is speech-dominant. When the end-to-end neural network 130 is trained, the higher the SNR value in a frequency band k is, the higher the band gain value Gk(i) in the frequency-domain compensation mask stream becomes.
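By way of illustration only, the following sketch shows how such a band-gain mask could be applied: each mask value Gk(i), bounded between Th1 and Th2, scales the corresponding frequency band of a spectral representation before an inverse transform (standing in for the inverse STFT 154) returns a time-domain frame. The choice of reference spectrum, Th1=0, Th2=1 and the omission of overlap-add are simplifying assumptions, not the exact post-processing of the invention.

```python
import numpy as np

def apply_compensation_mask(F_ref, G, Th1=0.0, Th2=1.0):
    """Apply the N band gains G1(i)..GN(i) (bounded between Th1 and Th2) to
    a spectral representation and return a time-domain frame via an
    inverse FFT standing in for the inverse STFT 154."""
    G = np.clip(G, Th1, Th2)      # near Th1: noise-dominant band; near Th2: speech-dominant band
    return np.fft.ifft(F_ref * G).real

# Example with an assumed FFT size N
N = 256
F_ref = np.fft.fft(np.random.randn(N))
G = np.random.rand(N)
z_frame = apply_compensation_mask(F_ref, G)
```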
In brief, the low latency of the end-to-end neural network 130 between the time-domain input signals s1[n]˜sQ[n] and the responsive time-domain output signal u[n] fully satisfies the ANC requirement (i.e., less than 50 μs). In addition, the end-to-end neural network 130 manipulates the input current spectral representations F1(i)˜FQ(i) in the frequency domain to achieve the goals of noise suppression, AFC and sound amplification, thus greatly improving the audio quality. In this way, the framework of the end-to-end neural network 130 integrates and exploits cross-domain audio features by leveraging audio signals in both the time domain and the frequency domain to improve hearing aid performance.
In the embodiment of
An RTF represents the correlation (or the differences in magnitude and in phase) between any two microphones in response to the same sound source. Multiple sound sources can be distinguished by utilizing their RTFs, which describe differences in sound propagation between sound sources and microphones and are generally different for sound sources at different locations. Different sound sources, such as user speech, distractor speech and background noise, bring about different RTFs. Generally, RTFs are used in sound source localization, speech enhancement and beamforming, such as direction-of-arrival (DOA) estimation and the generalized sidelobe canceller (GSC) algorithm.
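By way of illustration only, a common non-adaptive way to estimate an RTF between a microphone pair is the per-band ratio of the cross-power spectral density to the reference auto-power spectral density, averaged over several STFT frames; the sketch below shows this classical estimator for context. It is not the adaptive, per-frame IRTF estimation described later in this text.

```python
import numpy as np

def estimate_rtf(frames_u, frames_v, eps=1e-12):
    """Classical RTF estimate between microphone u and reference microphone v:
    per-band ratio of the cross-PSD E[F_u * conj(F_v)] to the reference
    auto-PSD E[|F_v|^2], averaged over STFT frames of shape (num_frames, N)."""
    cross_psd = np.mean(frames_u * np.conj(frames_v), axis=0)
    auto_psd = np.mean(np.abs(frames_v) ** 2, axis=0) + eps
    return cross_psd / auto_psd   # complex: per-band magnitude and phase differences
```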
Each RTF is defined/computed for each predefined microphone 1u relative to a reference microphone 1v, where 1≤u, v≤Q and u≠v. Properly selecting the reference microphone is important, as all RTFs are relative to this reference microphone. In a preferred embodiment, a microphone with a higher signal-to-noise ratio (SNR), such as a feedback microphone 12 of a TWS earbud 620 in
Hu,v(i) denotes an IRTF from the predefined microphone 1u to the reference microphone 1v and is obtained based on the audio data in the current frames i of the audio signals su[n] and sv[n]. Each IRTF (Hu,v(i)) represents a difference in sound propagation between the predefined microphone 1u and the reference microphone 1v relative to at least one sound source. Each IRTF (Hu,v(i)) is a vector including an array of N complex-valued elements [H1,u,v(i), H2,u,v(i), . . . , HN,u,v(i)], respectively corresponding to the N frequency bands for the audio data of the current frames i of the audio signals su[n] and sv[n]. Each IRTF element Hk,u,v(i) is a complex number that can be expressed in terms of a magnitude and a phase/angle, where 1≤k≤N. Assuming that a microphone 12 is selected as the reference microphone in the illustrated embodiment, an estimated sample F̂k,v(i) is computed for the kth frequency band based on a previous estimated IRTF (Ĥk,u,v(i)=Hk,u,v(i)) from the adaptive algorithm block 615, where F̂k,v(i)=Hk,u,v(i)×Fk,u(i). Then, the known adaptive algorithm block 615 updates the complex value of the current estimated IRTF (Ĥk,u,v(i)=Hk,u,v(i)) for the kth frequency band according to the input sample Fk,u(i) and the error signal e(i) so as to minimize the error signal e(i) between the input sample Fk,v(i) and the estimated sample F̂k,v(i) for a given environment. In one embodiment, the known adaptive algorithm block 615 is implemented with a least mean square (LMS) algorithm to produce the current complex value of the current estimated IRTF (Ĥk,u,v(i)=Hk,u,v(i)). However, the LMS algorithm is provided by way of example and not limitation of the invention.
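By way of illustration only, the per-band adaptive update described above can be sketched as a one-tap complex (N)LMS recursion: the previous IRTF estimate predicts the reference-microphone spectrum from the predefined-microphone spectrum, and the prediction error drives the update. The step size, the normalization term and the toy data are illustrative assumptions; the invention is not limited to this particular LMS variant.

```python
import numpy as np

def lms_irtf_update(H_prev, F_u, F_v, mu=0.1):
    """One per-band complex (N)LMS update: predict the reference-mic spectrum
    F_v(i) from the predefined-mic spectrum F_u(i) with the previous estimate,
    then correct the estimate with the error e(i). All arrays are length N."""
    F_v_hat = H_prev * F_u                                    # estimated sample per band
    e = F_v - F_v_hat                                         # error signal e(i) per band
    H_new = H_prev + mu * np.conj(F_u) * e / (np.abs(F_u) ** 2 + 1e-12)
    return H_new, e

# Toy example: the estimate converges toward the true per-band relation
N = 256
H = np.zeros(N, dtype=complex)
for i in range(200):
    F_u = np.fft.fft(np.random.randn(N))
    F_v = (0.8 * np.exp(1j * 0.3)) * F_u          # assumed "true" relative transfer
    H, e = lms_irtf_update(H, F_u, F_v)
```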
In comparison with the neural network 130, the end-to-end neural network 630 (or the TDNN 631) additionally receives (Q−1) estimated IRTFs (Hu,v(i)) and one more input parameter, as shown in
For the distractor suppression function, the input data for a fifth set of labeled training examples are constructed artificially by adding various distractor speech data to clean speech data, and the ground truth (or labeled output) for each example in the fifth set of labeled training examples is a frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)) for the corresponding clean speech data. For the distractor speech data, distractor speech from various directions, at different distances and with different numbers of people is collected. During the process of artificially constructing the input data, the distractor speech data is mixed at different levels with the clean speech data to produce a wide range of SNRs for the fifth set of labeled training examples. The end-to-end neural network 630 is configured to use the above-mentioned five sets (i.e., the first to the fifth sets) of labeled training examples to learn or estimate the function ƒ (i.e., the model 630), and then to update the model weights using the backpropagation algorithm in combination with a cost function. In addition, in the training phase, the TDNN 631 and the FD-LSTM network 132 are jointly trained with the first, the second, the third and the fifth sets of labeled training examples, each labeled as a corresponding frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)); the TDNN 631 and the TD-LSTM network 133 are jointly trained with the fourth set of labeled training examples, each labeled as N corresponding time-domain audio sample values. When trained, the TDNN 631 and the FD-LSTM network 132 can process new unlabeled audio data, for example audio feature vectors, to generate N corresponding frequency-domain mask values G1(i)˜GN(i) for the N frequency bands, while the TDNN 631 and the TD-LSTM network 133 can process new unlabeled audio data, for example audio feature vectors, to generate N corresponding time-domain audio sample values for the current frame i of the signal u[n].
The playback audio signal r[n] played by a loudspeaker 66 can be modeled by PTFs relative to each of the microphones 11˜1Q at the source device, i.e., at the TWS earbud 620 in
Assuming that a microphone 12 is selected as the reference microphone in the illustrated embodiment, an estimated sample F̂k,j(i) is computed for the kth frequency band based on a previous estimated PTF (P̂k,j(i)=Pk,j(i)) from the adaptive algorithm block 715, so that F̂k,j(i)=Pk,j(i)×Rk(i). Then, the adaptive algorithm block 715 updates the complex value of the current estimated PTF (i.e., P̂k,j(i)=Pk,j(i)) for the kth frequency band according to the input sample Rk(i) and the error signal e(i) so as to minimize the error signal e(i) between the sample Fk,j(i) and the estimated sample F̂k,j(i) for a given environment. In one embodiment, the known adaptive algorithm block 715 is implemented with the LMS algorithm to produce the complex value of the current estimated PTF block 711. However, the LMS algorithm is provided by way of example and not limitation of the invention.
In comparison with the neural network 630, the end-to-end neural network 730 (or the TDNN 731) additionally receives a number Q of PTFs (P1(i)˜PQ(i)) and one more input parameter, as shown in
For the AEC function, the input data for a sixth set of labeled training examples are constructed artificially by adding various playback audio data to clean speech data, and the ground truth (or labeled output) for each example in the sixth set of labeled training examples is a frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)) for the corresponding clean speech data. For the playback audio data, various playback audio data played by different loudspeakers at the source devices or the sink device at different locations are collected. During the process of artificially constructing the input data, the playback audio data is mixed at different levels with the clean speech data to produce a wide range of SNRs for the sixth set of labeled training examples. The end-to-end neural network 730 is configured to use the above-mentioned six sets (i.e., the first to the sixth sets) of labeled training examples to learn or estimate the function ƒ (i.e., the model 730), and then to update the model weights using the backpropagation algorithm in combination with a cost function. In addition, in the training phase, the TDNN 731 and the FD-LSTM network 132 are jointly trained with the first, the second, the third, the fifth and the sixth sets of labeled training examples, each labeled as a corresponding frequency-domain compensation mask stream (including N mask values G1(i)˜GN(i)); the TDNN 731 and the TD-LSTM network 133 are jointly trained with the fourth set of labeled training examples, each labeled as N corresponding time-domain audio sample values. When trained, the TDNN 731 and the FD-LSTM network 132 can process new unlabeled audio data, for example audio feature vectors, to generate N corresponding frequency-domain mask values G1(i)˜GN(i) for the N frequency bands, while the TDNN 731 and the TD-LSTM network 133 can process new unlabeled audio data, for example audio feature vectors, to generate N corresponding time-domain audio sample values for the current frame i of the signal u[n].
Each of the pre-processing unit 120, the IRTF estimator 610, the PTF estimator 710, the STFT 720, the end-to-end neural network 130/630/730 and the post-processing unit 150 may be implemented by software, hardware, firmware, or a combination thereof. In one embodiment, the pre-processing unit 120, the IRTF estimator 610, the PTF estimator 710, the STFT 720, the end-to-end neural network 130/630/730 and the post-processing unit 150 are implemented by at least one first processor and at least one first storage media (not shown). The at least one first storage media stores instructions/program codes operable to be executed by the at least one first processor to cause the at least one first processor to function as: the pre-processing unit 120, the IRTF estimator 610, the PTF estimator 710, the STFT 720, the end-to-end neural network 130/630/730 and the post-processing unit 150. In an alternative embodiment, the IRTF estimator 610, the PTF estimator 710, and the end-to-end neural network 130/630/730 are implemented by at least one second processor and at least one second storage media (not shown). The at least one second storage media stores instructions/program codes operable to be executed by the at least one second processor to cause the at least one second processor to function as: the IRTF estimator 610, the PTF estimator 710 and the end-to-end neural network 130/630/730.
Each of the audio modules 81/82 receives three audio signals from three microphones and a playback audio signal for one loudspeaker at the same TWS earbud, performs ANC function and the advanced audio signal processing including distractor suppression and AEC, and generates a time-domain digital audio signal yR[n]/yL[n]. Next, the TWS earbuds 810 and 820 respectively deliver their outputs (yR[n] and yL[n]) to the mobile phone 880 over two separate Bluetooth communication links. Finally, after receiving the two digital audio signals yR[n] and yL[n], the mobile phone 880 may deliver them to the stereo output circuit 160 for audio play, store them in a storage media, or deliver them to another sink device for audio communication via another communication link, such as WiFi.
At first, the TWS right earbud 840 delivers three audio signals s1[n]˜s3[n] from three microphones 11˜13 to the TWS left earbud 830 over a Bluetooth communication link. Then, the TWS left earbud 830 feeds the playback audio signal r[n], three audio signals s4[n]˜s6[n] from three microphones 14˜16 and the three audio signals s1[n]˜s3[n] to the audio module 83. The audio module 83 receives the six audio signals s1[n]˜s6[n] and the playback audio signal r[n], performs the ANC function and the advanced audio signal processing including distractor suppression and AEC, and generates a time-domain digital audio signal y[n]. Next, the TWS left earbud 830 delivers the digital audio signal y[n] to the mobile phone 880 over another Bluetooth communication link. Finally, after receiving the digital audio signal y[n], the mobile phone 880 may deliver it to the stereo output circuit 160 for audio play, store it in a storage media, or deliver it to another sink device for audio communication via another communication link, such as WiFi.
At first, the TWS earbuds 840 and 850 respectively deliver six audio signals s1[n]˜s6[n] from six microphones 11˜16 to the mobile phone 890 over two separate Bluetooth communication links. Then, the mobile phone 890 feeds the six audio signals s1[n]˜s6[n] to the audio module 84. The audio module 84 receives the six audio signals s1[n]˜s6[n] and the playback audio signal r[n], performs the ANC function and the advanced audio signal processing including distractor suppression and AEC, and generates a time-domain digital audio signal y[n]. Finally, the audio module 84 may deliver the signal y[n] to the stereo output circuit 160 for audio play; otherwise, the mobile phone 890 may store it in a storage media or deliver it to another sink device for audio communication via another communication link, such as WiFi.
In brief, the audio devices 800A˜D including one of the audio modules 600 and 700 of the invention can suppress the distractor speech 230 as shown in
As clearly shown in Table 1, the headset 900 passes the test because the speech-to-distractor ratios (SDRs) of the headset 900 are higher than the attenuation requirements, where the SDR describes the level ratio of the near-end speech to the nearby distractor speech. The above test results prove that the audio module 600/700 of the invention is capable of suppressing audio signals other than the (headset) user's speech.
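By way of illustration only, the level ratio underlying the SDR can be expressed as the power of the near-end speech over the power of the distractor speech in decibels, as sketched below; the actual measurement procedure of the test in Table 1 is not reproduced here.

```python
import numpy as np

def speech_to_distractor_ratio_db(near_end_speech, distractor_speech, eps=1e-12):
    """SDR in dB: power of the near-end speech over power of the distractor speech."""
    p_speech = np.mean(np.asarray(near_end_speech, dtype=float) ** 2)
    p_distractor = np.mean(np.asarray(distractor_speech, dtype=float) ** 2) + eps
    return 10.0 * np.log10(p_speech / p_distractor + eps)
```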
The above embodiments and functional operations can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. The operations and logic flows described in
While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention should not be limited to the specific construction and arrangement shown and described, since various other modifications may occur to those ordinarily skilled in the art.