With the advancement of technology, the use and popularity of electronic devices have increased considerably. Electronic devices are commonly used to capture and process audio data.
For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.
Electronic devices may be used to capture and process audio data. The audio data may be used for voice commands and/or may be output by speakers. An output loudness of the audio data may vary, both between different devices and within audio data. For example, a first portion of audio data may correspond to a user being close to a microphone, whereas a second portion of audio data may correspond to the user being further away from the microphone, resulting in a decreased loudness for the second portion relative to the first portion. In addition, speech may be difficult to understand due to variations in loudness of different words or sentences.
To perform voice enhancement, devices, systems and methods are disclosed that perform power normalization and selectively amplify voice data. For example, the system may identify active intervals in audio data that correspond to voice activity and may selectively amplify the active intervals in order to generate output audio data at a near uniform loudness. The system may determine a variable gain for each of the active intervals based on a desired output loudness and a flatness value, which indicates how much a signal envelope is to be modified. For example, a low flatness value corresponds to no modification, with peak active interval values corresponding to the desired output loudness and lower active interval values remaining below the desired output loudness. In contrast, a high flatness value corresponds to extensive modification, with peak active interval values and lower active interval values both corresponding to the desired output loudness. Thus, individual words may share the same peak power level.
The VoIP device 30, the PSTN telephone 20, the first device 110a and/or the second device 110b may communicate with the server(s) 120 via the network(s) 10. For example, one or more of the VoIP device 30, the PSTN telephone 20, the first device 110a and the second device 110b may send audio data to the server(s) 120 via the network(s) 10, such as a voice message. While the server(s) 120 may receive audio data from multiple devices, for ease of explanation the disclosure illustrates the server(s) 120 receiving audio data from a single device at a time. The server(s) 120 may be configured to receive the audio data and perform voice enhancement on the audio data, as will be discussed in greater detail below.
The VoIP device 30 may be an electronic device configured to connect to the network(s) 10 and to send and receive data via the network(s) 10, such as a smart phone, tablet or the like. Thus, the VoIP device 30 may send audio data to and/or receive audio data from the server(s) 120, either during a VoIP communication session or as a voice message. In contrast, the PSTN telephone 20 may be a landline telephone (e.g., wired telephone, wireless telephone or the like) connected to the PSTN (not illustrated), which is a landline telephone network that may be used to communicate over telephone wires, and the PSTN telephone 20 may not be configured to directly connect to the network(s) 10. Instead, the PSTN telephone 20 may be connected to the adapter 22, which may be configured both to connect to the PSTN to transmit and/or receive audio data using the PSTN, and to connect to the network(s) 10 (using an Ethernet or wireless network adapter) to transmit and/or receive data using the network(s) 10. Thus, the PSTN telephone 20 may use the adapter 22 to send audio data to and/or receive audio data from the second device 110b during either a VoIP communication session or as a voice message.
The first device 110a and the second device 110b may be electronic devices configured to send audio data to and/or receive audio data from the server(s) 120. The device(s) 110 may include microphone(s) 112, speakers 114, and/or a display 116. For example,
In some examples, the devices 110 may send the audio data to the server(s) 120 in order for the server(s) 120 to determine a voice command. For example, the first device 110a may send first audio data to the server(s) 120, the server(s) 120 may determine a first voice command represented in the first audio data and may perform a first action corresponding to the first voice command (e.g., execute a first command, send an instruction to the first device 110a and/or other devices to execute the first command, etc.). Similarly, the second device 110b may send second audio data to the server(s) 120, the server(s) 120 may determine a second voice command represented in the second audio data and may perform a second action corresponding to the second voice command (e.g., execute a second command, send an instruction to the second device 110b and/or other devices to execute the second command, etc.).
In some examples, the server(s) 120 may perform Automatic Speech Recognition (ASR) processing, Natural Language Understanding (NLU) processing and/or command processing to determine the voice command. The voice commands may control the device(s) 110, audio devices (e.g., play music over speakers, capture audio using microphones, or the like), multimedia devices (e.g., play videos using a display, such as a television, computer, tablet or the like), smart home devices (e.g., change temperature controls, turn on/off lights, lock/unlock doors, etc.) or the like.
While the above examples illustrate the server(s) 120 determining a voice command represented in the audio data, the disclosure is not limited thereto and the server(s) 120 may perform voice enhancement on the audio data without determining a voice command. For example, the server(s) 120 may perform the voice enhancement and a separate device may determine the voice command. Additionally or alternatively, the server(s) 120 may perform voice enhancement on the audio data separate from any device determining a voice command without departing from the disclosure.
The signal processor 210 may modify the audio data 111 and estimate a background noise power associated with the audio data 111. For example, the signal processor 210 may perform (212) low pass filtering to clean the audio data 111 and may perform (214) background noise power estimation to estimate the background noise power at various points in the audio data 111, as will be discussed in greater detail below with regard to
The VAD 220 may detect voice activity by performing (222) initial voice activity detection (VAD), performing (224) interval merging and performing (226) interval verification, as will be discussed in greater detail below with regard to
The gain estimator 230 may estimate the gain by performing (232) gain estimation, performing (234) gain extending, performing (236) gain limiting and performing (238) filtering of the gain values, as will be discussed in greater detail below with regard to
The output generator 240 may generate output audio data by applying the determined gain to the audio data, as will be discussed in greater detail below with regard to
As illustrated in
As used herein, an audio sample may refer to a single data point while an audio frame may refer to a series of consecutive audio samples (e.g., 128 audio samples included in a single audio frame). For ease of explanation, the drawings and corresponding description illustrate examples that refer to performing steps associated with an audio frame instead of an audio sample or vice versa. However, the disclosure is not limited thereto and the steps may be performed based on the audio frame and/or the audio sample without departing from the disclosure.
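The relationship between audio samples and audio frames described above might be sketched as follows (a minimal illustration, not the disclosed implementation; the function names are hypothetical, and the 128-sample frame size mirrors the example given above):

```python
import numpy as np

def frame_audio(samples, frame_size=128):
    """Group a 1-D array of audio samples into consecutive, non-overlapping
    audio frames. Trailing samples that do not fill a complete frame are
    discarded in this sketch."""
    num_frames = len(samples) // frame_size
    return samples[:num_frames * frame_size].reshape(num_frames, frame_size)

def frame_power(frames):
    """Mean-square power of each audio frame."""
    return np.mean(frames.astype(np.float64) ** 2, axis=1)
```

Subsequent per-frame steps (thresholding, gain estimation) could then operate on the `frame_power` output rather than on individual samples.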
After generating the filtered audio data, the server(s) 120 may estimate (134) background noise power using the filtered audio data. For example,
Using the filtered audio data, the server(s) 120 may perform (136) Voice Activity Detection (VAD) to identify intervals of speech. For example, the server(s) 120 may determine a peak power among all audio frames and may determine a threshold value by multiplying the peak power by a power factor value. The power factor value may vary based on user preferences and/or other settings, with a higher power factor value increasing the threshold value (e.g., associating less audio data with voice activity) and a lower power factor decreasing the threshold value (e.g., associating more audio data with voice activity).
After determining the threshold value, the server(s) 120 may use the threshold value to determine active audio frames (e.g., audio frames corresponding to voice activity) and inactive audio frames (e.g., audio frames that do not correspond to voice activity). For example, the server(s) 120 may determine that first audio frames in the filtered audio data that have power values below the threshold value are inactive and that second audio frames in the filtered audio data that have power values above the threshold value are active. However, the disclosure is not limited thereto and the server(s) 120 may not determine that all of the second audio frames are active. Instead, the server(s) 120 may further compare the second audio frames to the noise adaptation factor multiplied by the background noise power estimate. The server(s) 120 may determine that a first portion of the second audio frames, which have power values smaller than the noise adaptation factor multiplied by the background noise power estimate, are inactive, and that a second portion of the second audio frames, which have power values larger than the noise adaptation factor multiplied by the background noise power estimate, are active. The threshold value avoids classifying low power frames as active, especially at the beginning of the audio data, where the recursive background noise power estimate is less accurate.
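The two-stage classification described above can be sketched as follows (an illustrative sketch only; the function name is hypothetical, the power factor mirrors the 0.0005 example value given with the equations below, and the noise multiple mirrors the 4× example discussed later in the disclosure):

```python
import numpy as np

POWER_FACTOR = 0.0005   # example power factor value (assumption)
NOISE_MULTIPLE = 4.0    # example noise multiple (assumption)

def classify_frames(frame_power, noise_power):
    """Return a boolean array marking active audio frames.

    A frame is active only if its power exceeds BOTH the global threshold
    (peak power times the power factor) and the noise multiple times the
    per-frame background noise power estimate."""
    threshold = np.max(frame_power) * POWER_FACTOR
    above_threshold = frame_power > threshold
    above_noise = frame_power > NOISE_MULTIPLE * noise_power
    return above_threshold & above_noise
```

Raising `POWER_FACTOR` increases the threshold and associates less audio data with voice activity, consistent with the description above.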
For ease of illustration,
In order to perform power normalization, the server(s) 120 may determine (138) a gain for each interval. In some examples, the server(s) 120 may determine a minimum gain for all of the active intervals, resulting in output audio data that preserves an original signal envelope of the input audio data. For example, active intervals with higher peak power values may be closer to the desired output level while active intervals with lower peak power values may be further from the desired output level in the output audio data.
As illustrated in
In other examples, each of the intervals may be modified using an individual gain determined based on a peak power value associated with the interval. For example, the server(s) 120 may determine a first peak value in the first interval (e.g., audio samples 200-400) and may determine a first gain based on the first peak value, may determine a second peak value in the second interval (e.g., audio samples 450-800) and may determine a second gain based on the second peak value, and so on for each of the 12 active intervals. Using the individual gains results in output audio data that modifies the original signal envelope of the input audio data. For example, all active intervals in the output audio data have the same power level (e.g., desired power level) regardless of a corresponding peak power value in the input audio data.
In some examples, the server(s) 120 may determine a gain for each interval based on a flatness value between zero and one. For example, the server(s) 120 may determine the gain for each interval based on the following equation:
g′ = (g − min(g)) * flatness + min(g)  (1)
where g′ is the output gain, g is the individual gain for the interval (e.g., uniform power gains 732 illustrated in
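Equation (1) can be illustrated with a short sketch (the function name is hypothetical; a flatness of zero assigns every interval the minimum gain, preserving the original envelope, while a flatness of one assigns each interval its individual gain):

```python
import numpy as np

def apply_flatness(gains, flatness):
    """Blend per-interval gains toward the minimum gain.

    gains    : individual (uniform-power) gain per active interval
    flatness : value in [0, 1]
    Implements g' = (g - min(g)) * flatness + min(g)."""
    gains = np.asarray(gains, dtype=float)
    return (gains - gains.min()) * flatness + gains.min()
```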
After determining the gain for each interval, the server(s) 120 may generate (140) output audio data.
In some examples, a power level of output audio data may exceed a desired output loudness. For example, the server(s) 120 may determine a gain based on an average power in an active interval instead of a peak power. Thus, instead of only the peak power reaching the desired output loudness, a larger portion of the active interval reaches the desired output loudness, with the peak power exceeding the desired output loudness.
To prevent the output audio data from exceeding the desired output loudness, the server(s) 120 may include a limiter that reduces the gain for portions of the output audio data that would otherwise exceed the desired output loudness. For example, the server(s) 120 may determine a power level of the output audio data by multiplying the gain by the peak power. If the power level of the output audio data exceeds the desired output loudness, the server(s) 120 may determine a new gain based on the desired output loudness and the peak power. For example, the server(s) 120 may divide the desired output loudness by the peak power to determine the new gain. In addition to determining the new gain, the server(s) 120 may determine a gain drop from the previous gain to the new gain.
To avoid an abrupt drop in gain, the server(s) 120 may use the gain drop to lower the gain in neighboring audio frames surrounding the peak. For example, the server(s) 120 may determine an incremental gain by dividing the gain drop by a number of audio frames and may use the incremental gain to transition from the overall gain to the gain drop over the course of the number of audio frames. To illustrate an example, if the number of audio frames is equal to twenty, the server(s) 120 may transition from the overall gain to the gain drop over twenty audio frames, with a gain of each audio frame decreasing by the incremental gain. After the peak, the server(s) 120 may transition from the gain drop to the overall gain over twenty audio frames, with a gain of each audio frame increasing by the incremental gain. Thus, the server(s) 120 may slowly transition to the gain drop without an abrupt drop in the gain. The server(s) 120 may then determine the final gain sequence by adding the overall gain to the gain drop.
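The limiter behavior described above might be sketched as follows (an illustrative sketch, not the disclosed implementation; the function and variable names are assumptions, and the twenty-frame ramp mirrors the example above):

```python
import numpy as np

def limit_gain(overall_gain, frame_power, desired_power, ramp_frames=20):
    """Reduce the gain wherever overall_gain * frame_power would exceed
    desired_power, ramping the reduction linearly into and out of the
    offending frame to avoid an abrupt drop in gain."""
    n = len(frame_power)
    gain = np.full(n, overall_gain, dtype=float)
    drop = np.zeros(n)  # per-frame gain reduction (zero or negative)
    for m, p in enumerate(frame_power):
        if overall_gain * p > desired_power:
            new_gain = desired_power / p          # gain that hits the target
            gain_drop = new_gain - overall_gain   # negative value
            # transition to/from the drop over ramp_frames neighboring frames
            for k in range(-ramp_frames, ramp_frames + 1):
                i = m + k
                if 0 <= i < n:
                    ramped = gain_drop * (1 - abs(k) / ramp_frames)
                    drop[i] = min(drop[i], ramped)
    return gain + drop  # final gain sequence: overall gain plus gain drop
```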
In some examples, the server(s) 120 may determine the gain for an active interval based on a peak power included in the active interval, such that the server(s) 120 only calculate the limited gain 1116 when the peak power exceeds the desired output loudness (e.g., only in small portions of the active interval). However, the disclosure is not limited thereto and in some examples, the server(s) 120 may calculate the gain for the active interval based on an average power, a second or third peak power, or the like. For example, the server(s) 120 may have determined the gain 1112 based on the second peak in the power 1114, intentionally increasing the second peak to the desired output loudness and then determining the limited gain 1116 so that the first peak and the third peak were also output at the desired output loudness. Using this technique, the server(s) 120 may increase a percentage of the active interval that is at the desired output loudness.
In some examples, the server(s) 120 may determine a gain for an active interval and then may average the gain using a lowpass filter to smooth the output gain. However, the smoothed gains may be significantly lower than the intended gains. In addition, in many instances the strong voiced portion of a word is surrounded by weaker unvoiced waveforms that carry important information, such as consonants. Smoothing the gain using the lowpass filter may result in a de-emphasis at the borders of the word, which may cause undesired suppression of the consonants. To avoid these issues, the server(s) 120 may extend the gain in either direction based on a gain drop rate, as illustrated in
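Extending the gain based on a drop rate, as described above, could be sketched as follows (the function name and the decay value are illustrative assumptions; each frame outside an active interval keeps at least the neighboring gain minus the per-frame drop, so word borders are not suppressed the way lowpass smoothing would suppress them):

```python
import numpy as np

def extend_gains(gain, active, drop_per_frame=0.05):
    """Extend gains outward from active intervals at a fixed drop rate."""
    out = np.where(active, gain, 0.0).astype(float)
    # forward pass: decay gains rightward past interval ends
    for m in range(1, len(out)):
        out[m] = max(out[m], out[m - 1] - drop_per_frame)
    # backward pass: decay gains leftward past interval starts
    for m in range(len(out) - 2, -1, -1):
        out[m] = max(out[m], out[m + 1] - drop_per_frame)
    return out
```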
The server(s) 120 may perform voice activity detection to identify active intervals (e.g., audio frames having a power level above a threshold value, which correspond to a signal) and inactive intervals (e.g., audio frames having a power level below the threshold value, which correspond to noise). The server(s) 120 may apply individual gains to the active intervals and may apply a minimum gain to the inactive intervals. When the inactive intervals are short, however, transitioning between a high gain and the minimum gain may result in distortion or the amplification of low level noise. To avoid issues caused by short inactive intervals, the server(s) 120 may identify these short inactive intervals and merge them with surrounding active intervals. For example, the server(s) 120 may set audio frames included in the short inactive intervals as active.
Similarly, when active intervals are short, transitioning between the minimum gain and a high gain may result in distortion or the amplification of low level noise. To avoid issues caused by short active intervals, the server(s) 120 may identify these short active intervals and merge them with surrounding inactive intervals. For example, the server(s) 120 may set audio frames included in the short active intervals as inactive.
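The merging of short intervals described in the two paragraphs above might be sketched as follows (an illustrative sketch; the function name is hypothetical, and the 16-frame minimum mirrors the time threshold example given later in the disclosure):

```python
import numpy as np

def merge_short_intervals(active, min_length=16):
    """Merge short runs into their surroundings: first set frames in short
    inactive intervals as active, then set frames in short active intervals
    as inactive, avoiding rapid transitions between high and minimum gain."""
    active = np.asarray(active, bool).copy()
    for target in (False, True):  # short inactive runs, then short active runs
        start = 0
        while start < len(active):
            if active[start] == target:
                end = start
                while end < len(active) and active[end] == target:
                    end += 1
                if end - start < min_length:
                    active[start:end] = not target
                start = end
            else:
                start += 1
    return active
```

Merging short inactive runs first lets the surrounding active intervals join into one longer interval before short active runs are evaluated.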
In some examples, the server(s) 120 may set active intervals as inactive. For example, if a noise floor increases, an active interval may correspond to noise and not to an actual signal. The server(s) 120 may determine whether an active interval corresponds to noise instead of a signal based on an average zero crossing rate. For example, an audio frame having a strong signal and limited noise may have a relatively low zero crossing rate (ZCR), whereas an audio frame having a weak signal and a lot of noise may have a relatively high ZCR. Thus, the server(s) 120 may determine average zero crossing rates (ZCRs) for active intervals, may determine intervals with average ZCR above a ZCR threshold and may set audio frames included in those intervals as inactive.
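The ZCR test described above can be sketched as follows (an illustrative sketch; the function names and the threshold value are assumptions, since the disclosure does not specify an exact ZCR threshold here):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.signbit(frame)
    return np.mean(signs[1:] != signs[:-1])

def interval_is_noise(frames, zcr_threshold=0.4):
    """Flag an active interval as noise when its average ZCR is high.
    zcr_threshold is an assumed illustrative value."""
    avg = np.mean([zero_crossing_rate(f) for f in frames])
    return avg > zcr_threshold
```

A strong voiced signal crosses zero only a few times per pitch period, while broadband noise changes sign much more often, which is why a high average ZCR suggests noise.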
By substituting the threshold value 1514 for the estimated noise floor, the server(s) 120 may determine noise power 1516, which is an accurate estimate of the noise. In some examples, the server(s) 120 may multiply the noise power 1516 by a noise multiple (e.g., 4×) to determine 4× noise power 1518. The server(s) 120 may use 4× noise power 1518 to determine which audio frames are active or inactive. For example, the server(s) 120 may compare a power level of each audio frame to the 4× noise power 1518 and may set audio frames below the 4× noise power 1518 as inactive.
In some examples, the server(s) 120 may determine the threshold value 1514, the noise power 1516 and the 4× noise power 1518 using the following equations:
peakPower = max(power[m])  (1)

pThreshold = peakPower * POWER_TH_FACTOR  (2)

noiseIncMin = NOISE_MIN_FACTOR * pThreshold  (3)

smoothPower = powerAdaptationFactor * smoothPowerPast + (1 − powerAdaptationFactor) * power  (4)

noisePower = max(min(smoothPower, max(noiseAdaptationFactor * noisePowerPast, noisePowerPast + noiseIncMin)), pThreshold)  (5)
where power[m] corresponds to input audio data (e.g., the signal power 1512 prior to filtering), peakPower is a maximum power of the input audio data, pThreshold is the threshold value 1514, POWER_TH_FACTOR is a power threshold factor (e.g., 0.0005), noiseIncMin is a noise increment value, NOISE_MIN_FACTOR is a noise factor (e.g., 0.02), smoothPower corresponds to filtered audio data (e.g., the signal power 1512), powerAdaptationFactor is a power adaptation factor (e.g., 0.9), smoothPowerPast is a smoothed power value of a previous audio frame (e.g., m−1 of the filtered audio data), power is a power value of the current audio frame (e.g., m of the input audio data), noisePower is a power value of a current audio frame in the noise power 1516, noiseAdaptationFactor is a noise adaptation factor (e.g., 1.0005), and noisePowerPast is a power value of the noise power 1516 in a previous audio frame (e.g., m−1 of the noise power 1516).
Thus, the server(s) 120 may calculate the noise power 1516 and the 4× noise power 1518 recursively (e.g., frame by frame) beginning with a first audio frame, with an initial smoothPowerPast initialized to a value of zero. While typical parameters of the variables mentioned above are listed, the disclosure is not limited thereto and the exact values used may vary without departing from the disclosure. For example, the server(s) 120 may modify the parameters between different audio data, with different parameters resulting in changes to the noise power 1516 and the 4× noise power 1518. As an example, the power adaptation factor (e.g., powerAdaptationFactor) controls how heavily to weight the smoothed power value of the previous audio frame (e.g., smoothPowerPast), with a value of 0.9 corresponding to a 90%/10% weighting between the previous audio frame and the current audio frame (e.g., 90% of the estimate comes from the previous estimate, with only 10% coming from the current power value of the input audio data). Similarly, a noise adaptation factor (e.g., noiseAdaptationFactor) value closer to one results in more smoothing, whereas a value closer to zero results in less smoothing.
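Equations (1) through (5) can be sketched as follows (an illustrative sketch using the example parameter values listed above; note that initializing noisePowerPast to pThreshold is an assumption, as the disclosure only specifies that smoothPowerPast starts at zero):

```python
import numpy as np

# Example parameter values from the description above.
POWER_TH_FACTOR = 0.0005
NOISE_MIN_FACTOR = 0.02
POWER_ADAPTATION_FACTOR = 0.9
NOISE_ADAPTATION_FACTOR = 1.0005

def estimate_noise_power(power):
    """Recursive background noise power estimate per equations (1)-(5)."""
    power = np.asarray(power, dtype=float)
    peak_power = power.max()                          # (1)
    p_threshold = peak_power * POWER_TH_FACTOR        # (2)
    noise_inc_min = NOISE_MIN_FACTOR * p_threshold    # (3)
    smooth_past = 0.0          # initial smoothPowerPast
    noise_past = p_threshold   # assumed initial noisePowerPast
    noise_power = np.empty_like(power)
    for m, p in enumerate(power):
        smooth = (POWER_ADAPTATION_FACTOR * smooth_past
                  + (1 - POWER_ADAPTATION_FACTOR) * p)          # (4)
        rise_limit = max(NOISE_ADAPTATION_FACTOR * noise_past,
                         noise_past + noise_inc_min)
        noise = max(min(smooth, rise_limit), p_threshold)       # (5)
        noise_power[m] = noise
        smooth_past, noise_past = smooth, noise
    return noise_power
```

The inner `min` prevents the noise estimate from exceeding the smoothed power, the inner `max` caps how quickly the estimate may rise per frame, and the outer `max` enforces the threshold value as a noise floor.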
The server(s) 120 may select (1616) a data point, may determine (1618) a power value of the data point in the filtered audio data, may determine (1620) a first estimate of noise using a noise adaptation factor and may determine (1622) a second estimate of noise using the noise increment value. The power value of the data point in the filtered audio data corresponds to an actual data point, whereas the first estimate and the second estimate correspond to estimates based on noise characteristics. For example, the first estimate may be determined by multiplying the noise adaptation factor by a previous noise power value (e.g., power value of a data point prior to the current data point), thus limiting the noise floor to being within a percentage increase of the previous data point. Similarly, the second estimate may be determined by adding the noise increment value to the previous noise power value, thus limiting the noise floor to being within the noise increment value above the previous data point.
The server(s) 120 may determine (1624) if the first estimate is larger than the second estimate. If the first estimate is larger, the server(s) 120 may select (1626) the first estimate as a first value and proceed to step 1630. If the second estimate is larger, the server(s) 120 may select (1628) the second estimate as the first value and proceed to step 1630. Therefore, the server(s) 120 may select the larger of the first estimate and the second estimate for further processing.
The server(s) 120 may determine (1630) if the power value (e.g., determined in step 1618) is larger than the first value (e.g., determined in 1626 or 1628). If the power value is not larger than the first value, the server(s) 120 may select (1632) the power value as a second value and proceed to step 1636. If the power value is larger, the server(s) 120 may select (1634) the first value as the second value and proceed to step 1636. Therefore, the server(s) 120 may select the smaller of the power value and the first value (e.g., larger of the first estimate and the second estimate) for further processing.
The server(s) 120 may determine (1636) if the second value is greater than the threshold value determined in step 1612. If the second value is larger than the threshold value, the server(s) 120 may select (1638) the second value as a value for the data point and proceed to step 1642. If the second value is not larger than the threshold value, the server(s) 120 may select (1640) the threshold value as a value of the data point and proceed to step 1642. Thus, the server(s) 120 may use the threshold value as a minimum noise level when the first estimate, second estimate and the actual power value of the data point are below the threshold value. For example, the threshold value may be used at the beginning and end of a signal when the power level drops near zero, increasing an accuracy of the noise prediction.
The server(s) 120 may determine (1642) if there are additional data points and, if there are additional data points, may loop (1644) to step 1616 and select an additional data point. If there are no additional data points, the server(s) 120 may generate (1646) a background noise power estimate based on the values selected for each of the data points. An example of the background noise power estimate is illustrated in
The server(s) 120 may determine (1714) active audio frames based on the power threshold value. For example, audio frames with power lower than the power threshold value may be considered inactive. However, the disclosure is not limited thereto and in some examples audio frames with power below a noise multiple (e.g., 4×) multiplied by the noise power may be considered to be inactive (e.g., 4× noise power 1518) without departing from the disclosure, as discussed above.
The server(s) 120 may determine (1716) intervals of active audio frames. For example, a series of active audio frames may be combined in an active interval, while a series of inactive frames may be combined in an inactive interval. Thus, determining the intervals of active audio frames may comprise identifying series of data points that exceed the power threshold value, the active intervals separated by series of data points that are below the power threshold value. In some examples, this step completes VAD, with the server(s) 120 outputting values of 0 for inactive audio frames and values of 1 for active audio frames. However, the disclosure is not limited thereto and optional further steps may enhance the VAD output.
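Determining intervals from the per-frame VAD decisions described above can be sketched as follows (the function name is hypothetical; each contiguous run of active frames becomes one active interval, expressed here as a start/end index pair with the end exclusive):

```python
def find_active_intervals(active):
    """Convert a per-frame boolean VAD decision into (start, end) pairs
    for each contiguous run of active audio frames."""
    intervals = []
    start = None
    for m, is_active in enumerate(active):
        if is_active and start is None:
            start = m            # new active interval begins
        elif not is_active and start is not None:
            intervals.append((start, m))
            start = None         # active interval ends
    if start is not None:
        intervals.append((start, len(active)))
    return intervals
```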
In some examples, the server(s) 120 may optionally determine (1718) a length of each interval, determine (1720) first inactive intervals with a length below a time threshold value (e.g., 16 audio frames), and may set (1722) audio frames included in the first inactive intervals to be active, as discussed above with regard to
In some examples, the server(s) 120 may optionally determine (1728) average zero crossing rates (ZCRs) for active intervals, may determine (1730) second intervals with average ZCR above a ZCR threshold and may set (1732) audio frames included in the second intervals to be inactive, as discussed above with regard to
The server(s) 120 may determine (1816) a maximum gain at each active interval, as discussed above with regard to
As discussed in greater detail above with regard to
The server(s) 120 may extend (1832) gains, as described above with regard to
As illustrated in
The server(s) 120 may include one or more controllers/processors 1904, that may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory 1906 for storing data and instructions. The memory 1906 may include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive memory (MRAM) and/or other types of memory. The server(s) 120 may also include a data storage component 1908, for storing data and controller/processor-executable instructions (e.g., instructions to perform the algorithm illustrated in
The server(s) 120 includes input/output device interfaces 1910. A variety of components may be connected through the input/output device interfaces 1910.
The input/output device interfaces 1910 may be configured to operate with network(s) 10, for example a wireless local area network (WLAN) (such as WiFi), Bluetooth, ZigBee and/or wireless networks, such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc. The network(s) 10 may include a local or private network or may include a wide network such as the internet. Devices may be connected to the network(s) 10 through either wired or wireless connections.
The input/output device interfaces 1910 may also include an interface for an external peripheral device connection such as universal serial bus (USB), FireWire, Thunderbolt, Ethernet port or other connection protocol that may connect to network(s) 10. The input/output device interfaces 1910 may also include a connection to an antenna (not shown) to connect one or more network(s) 10 via an Ethernet port, a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc.
The server(s) 120 may include a signal processor 210, a voice activity detector (VAD) 220, a gain estimator 230 and/or output generator 240, which may comprise processor-executable instructions stored in storage 1908 to be executed by controller(s)/processor(s) 1904 (e.g., software, firmware, hardware, or some combination thereof). For example, components of the signal processor 210, the VAD 220, the gain estimator 230 and/or the output generator 240 may be part of a software application running in the foreground and/or background on the server(s) 120. The remote control component 1924 may control the server(s) 120 as discussed above, for example with regard to
Executable computer instructions for operating the server(s) 120 and its various components may be executed by the controller(s)/processor(s) 1904, using the memory 1906 as temporary “working” storage at runtime. The executable instructions may be stored in a non-transitory manner in non-volatile memory 1906, storage 1908, or an external device. Alternatively, some or all of the executable instructions may be embedded in hardware or firmware in addition to or instead of software.
The components of the server(s) 120, as illustrated in
The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, server-client computing systems, mainframe computing systems, telephone computing systems, laptop computers, cellular phones, personal digital assistants (PDAs), tablet computers, video capturing devices, video game consoles, speech processing systems, distributed computing environments, etc. Thus the components and/or processes described above may be combined or rearranged without departing from the scope of the present disclosure. The functionality of any component described above may be allocated among multiple components, or combined with a different component. As discussed above, any or all of the components may be embodied in one or more general-purpose microprocessors, or in one or more special-purpose digital signal processors or other dedicated microprocessing hardware. One or more components may also be embodied in software implemented by a processing unit. Further, one or more of the components may be omitted from the processes entirely.
The above embodiments of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed embodiments may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers and/or speech processing should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art, that the disclosure may be practiced without some or all of the specific details and steps disclosed herein.
Embodiments of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk and/or other media.
Embodiments of the present disclosure may be performed in different forms of software, firmware and/or hardware. Further, the teachings of the disclosure may be performed by an application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other component, for example.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Conjunctive language such as the phrase "at least one of X, Y and Z," unless specifically stated otherwise, is to be understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z, or a combination thereof. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present.
As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.