The present disclosure relates to speech enhancement techniques and more particularly to improving mask-based speech enhancement methods and devices.
The performance of most speech enhancement algorithms heavily depends on the SNR of the input audio signal. Therefore, enhancing speech in an input signal with a low SNR poses a challenging problem. Many speech enhancement algorithms rely on a mask-based approach, such as a binary mask. By applying the mask to the input audio signal, a denoised audio signal can be generated.
For low-SNR parts of the audio signal, however, pumping/gating issues are commonly observable in the denoised audio signal output, because removing the noise under these conditions is challenging.
In particular, most of the mask-based algorithms will try to remove all the noise within non-dialog segments, but within the dialog segments the noise cannot be filtered out completely. This behavior generates inconsistencies between the dialog and non-dialog segments in the denoised audio output signal. These inconsistencies may be audible as noise pumping/gating and may annoy listeners of the denoised output audio signal.
Thus, there is a need for improved mask-based speech enhancement techniques that, in particular, reduce or remove inconsistencies between the dialog and non-dialog segments in the denoised audio output signal.
In view of the above, the present disclosure provides methods, apparatus, and programs, as well as computer-readable storage media for improving noise compensation in mask-based speech enhancement, having the features of the respective independent claims.
According to an aspect of the disclosure, a method of processing an audio signal is provided. The audio signal may include one or more speech segments. In this method a mask for mask-based speech enhancement of the audio signal may be obtained (e.g., received or generated). A magnitude of the audio signal may be obtained (e.g., received or determined). An estimate of residual noise in the audio signal after mask-based speech enhancement may be determined (e.g., calculated), based on the mask and the magnitude of the audio signal. A modified mask may be determined (e.g., calculated) based on the estimate of the residual noise.
By modifying the mask based on an estimate of the residual noise, inconsistencies between the noise-compensated dialog and non-dialog segments can be reduced or completely removed. As a consequence, an output audio signal generated by applying the modified mask will not contain perceivable effects such as noise pumping/gating, or such effects will at least be reduced. Overall, the listening experience can be improved compared to conventional mask-based approaches.
Accordingly, the method may further include applying the modified mask to the audio signal to obtain a denoised audio signal. Therein, applying the modified mask may reduce or remove perceivable effects in the denoised audio signal, including at least one of noise pumping or gating.
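To illustrate this application step, the following is a minimal sketch of applying a per-bin mask to a noisy signal in the STFT domain and resynthesizing the denoised signal. It is only a sketch under the assumption that the (modified) mask is defined on the same time-frequency grid as the STFT; the function name, sampling rate, and frame size are illustrative and not prescribed by this disclosure.

```python
# Minimal sketch (not the claimed method itself): applying a per-bin mask in the
# STFT domain. Assumes mask.shape matches the STFT grid (freq_bins, frames).
import numpy as np
from scipy.signal import stft, istft

def apply_mask(noisy: np.ndarray, mask: np.ndarray, fs: int = 16000,
               nperseg: int = 512) -> np.ndarray:
    """Multiply the noisy STFT by the mask and resynthesize a denoised time signal."""
    _, _, spec = stft(noisy, fs=fs, nperseg=nperseg)  # complex STFT of the noisy input
    denoised_spec = mask * spec                       # masking leaves the noisy phase unchanged
    _, denoised = istft(denoised_spec, fs=fs, nperseg=nperseg)
    return denoised[: len(noisy)]
```

The same routine can be used with either the original or the modified mask; the difference between the two outputs is where a reduction of pumping/gating would become audible.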
In some embodiments, the modified mask may be provided to a downstream device for storage, rendering, or additional processing.
In some embodiments, the mask may have values between 0 and 1 or values of the mask may be compressed to values between 0 and 1.
In some embodiments, the audio signal may include the speech segments and non-speech segments.
In some embodiments, the estimate of the residual noise may be determined based on a difference between the mask and a function of the mask.
In some embodiments, the function of the mask may be a convex function.
In some embodiments, the function of the mask may be given by F(x), where F(0)=0 and F(1)=1, for mask values limited or compressed to the range from 0 to 1.
In some embodiments, the function of the mask may be a power function with an exponent larger than 1.
In some embodiments, the mask may be defined for each of a plurality of time-frequency bins or time bins and frequency bands.
In some embodiments, the modified mask may be determined such that the modified mask is a stable mask, or residual noise is stable when the modified mask is applied to the audio signal.
In some embodiments, the one or more speech segments may be determined in the audio signal.
Then, determining the modified mask may be based on the one or more speech segments in the audio signal.
In some embodiments, determining the one or more speech segments in the audio signal may be based on a voice activity detector, VAD.
In some embodiments, determining the modified mask may include determining an averaged residual mask based on the estimate of the residual noise, by applying an average over time. The method may further include selecting, for each time-frequency bin or time bin and frequency band, one of the mask and the averaged residual mask as the modified mask.
In some embodiments, the selection may be based on a comparison of the mask to the averaged residual mask. The averaged residual mask may be determined by averaging a residual mask over time, the residual mask relating to the estimate of the residual noise.
In some embodiments, the averaged residual mask may only be determined for the one or more speech segments. That is, the averaged residual mask may not be determined for the non-speech segments.
In some embodiments, selecting one of the mask and an averaged residual mask may include, for each time-frequency bin or time bin and frequency band, setting the modified mask to the mask, if the mask is larger than or equal to the averaged residual mask. Otherwise, the modified mask may be set to the averaged residual mask.
In some embodiments, the residual mask may be determined based on a difference between the mask and a function of the mask.
In some embodiments, the estimate of the residual noise may be determined by multiplying the residual mask and the magnitude of the audio signal.
In some embodiments, the residual mask may be calculated in accordance with Maskres(τ, f)=Mask(τ, f)−Mask(τ, f)^α, where Maskres(τ, f) denotes the residual mask, α is an exponent larger than 1, τ is the time index, and f is the frequency or frequency band index.
In some embodiments, the averaged residual mask may be calculated in accordance with
Maskresave(f)=(1/T)*Σ_{τ=1}^{T} Maskres(τ, f),
where Maskresave(f) is the averaged residual mask, Maskres(τ, f) denotes the residual mask, and T is a number larger than or equal to 1.
In some embodiments, the averaged residual mask may be calculated in accordance with
Maskresave(f)=(1/T′)*Σ_{τ∈S} Maskres(τ, f),
where Maskresave(f) is the averaged residual mask, Maskres(τ, f) denotes the residual mask, S denotes the set of speech segments, and T′ denotes the total number of frames of speech segments in S.
In some embodiments, the modified mask may be calculated in accordance with
Maskmod(τ, f)=Mask(τ, f) if Mask(τ, f)≥Maskresave(f), and Maskmod(τ, f)=Maskresave(f) otherwise,
where Maskmod(τ, f) is the modified mask and Maskresave(f) is the averaged residual mask.
In some embodiments, the selection may be based on a comparison of an estimate of a residual speech signal and an average of residual noise over time. Therein, the estimate of the residual speech signal may be obtained based on the mask and the magnitude of the audio signal. Further, the averaged residual mask may be obtained based on the average of the residual noise and the magnitude of the audio signal.
In some embodiments, the average of the residual noise may only be determined for the one or more speech segments. That is, the average of the residual noise may not be determined for the non-speech segments.
In some embodiments, selecting one of the mask and an averaged residual mask may include, for each time-frequency bin or time bin and frequency band, setting the modified mask to the mask, if the estimate of the residual speech signal is larger than or equal to the average of the residual noise. Otherwise, the modified mask may be set to the averaged residual mask.
In some embodiments, the estimate of residual noise may be calculated in accordance with Noiseres(τ, f)=(Mask(τ, f)−Mask(τ, f)^α)*Magnoisy(τ, f), where Noiseres(τ, f) is the estimate of the residual noise, Mask(τ, f) denotes the mask, α is an exponent larger than 1, Magnoisy(τ, f) is the magnitude of the audio signal, τ is the time index, and f is the frequency or frequency band index.
In some embodiments, the average of the residual noise may be calculated in accordance with
Noiseresave(f)=(1/T)*Σ_{τ=1}^{T} Noiseres(τ, f),
where Noiseresave(f) is the average of the residual noise, Noiseres(τ, f) is the estimate of the residual noise, and T is a number larger than or equal to 1.
In some embodiments, the average of the residual noise may be calculated in accordance with
Noiseresave(f)=(1/T′)*Σ_{τ∈S} Noiseres(τ, f),
where Noiseresave(f) is the average of the residual noise, S denotes the set of speech segments, and T′ denotes the total number of frames of speech segments in S.
In some embodiments, the averaged residual mask may be calculated in accordance with
Maskresave(τ, f)=Noiseresave(f)/(Magnoisy(τ, f)+ε),
where Maskresave(τ, f) denotes the averaged residual mask and ε is a positive value close to zero. In general, ε may be a small positive constant for avoiding a singularity (division by zero) at Magnoisy(τ, f)=0.
In some embodiments, the modified mask may be calculated in accordance with
Maskmod(τ, f)=Mask(τ, f) if Magspeechest(τ, f)≥Noiseresave(f), and Maskmod(τ, f)=Maskresave(τ, f) otherwise,
where Magspeechest(τ, f)=Mask(τ, f)*Magnoisy(τ, f) denotes the estimate of the residual speech signal.
According to another aspect of the disclosure, a method for improving mask-based speech enhancement is provided. An estimated mask and magnitude of a noisy input may be received. Possible residual noise in de-noised speech may be estimated. Noise compensation, including modifying the estimated mask based on the estimated possible residual noise to produce a compensated mask may be performed. The compensated mask may be provided to a downstream device for storage, rendering, or additional processing.
Aspects of the present disclosure may be implemented via an apparatus. The apparatus may include a processor and a memory coupled to the processor. The processor may be adapted to carry out the method according to aspects and embodiments of the present disclosure.
Aspects of the present disclosure may be implemented via a program. When instructions of the program are executed by a processor, the processor may carry out aspects and embodiments of the present disclosure. A computer-readable storage medium may store the program. Such computer-readable storage media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, some innovative aspects of the subject matter described in this disclosure can be implemented via one or more computer-readable storage media having software stored thereon.
It will be appreciated that apparatus features and method steps may be interchanged in many ways. In particular, the details of the disclosed method(s) can be realized by the corresponding apparatus (or system), and vice versa, as the skilled person will appreciate. Moreover, any of the above statements made with respect to the method(s) are understood to likewise apply to the corresponding apparatus (or system), and vice versa.
Example embodiments of the disclosure are explained below with reference to the accompanying drawings.
The Figures (FIGS.) and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
In conventional speech enhancement algorithms, a mask is determined based on the SNR of the audio input signal. For example, a binary mask is set to either 0 or 1 depending on the (estimated) SNR of the time-frequency bin of the audio input signal. As a result, noise is suppressed in non-dialog parts of the audio input signal, while noise in the dialog parts of the audio input signal is suppressed to a lesser extent. The difference in noise suppression between dialog and non-dialog sections may be audible in the audio output signal as so-called noise pumping/gating. This effect may annoy listeners of the speech-enhanced audio signal.
To handle the above issues in conventional speech enhancement algorithms, a noise suppression method, system, and device are provided to reduce or remove the pumping/gating issues and improve the general perceptual quality of the denoised output audio signal, especially under low-SNR conditions.
An example of an improved speech enhancement system/framework is depicted in the accompanying drawings. In this framework, a mask for mask-based speech enhancement of an audio signal and a magnitude of the audio signal are obtained (e.g., received from a conventional speech enhancement algorithm).
The mask may be constrained to values between 0 and 1 or, alternatively, may be compressed (e.g., scaled or otherwise mapped) to values between 0 and 1 if the mask obtained by, for example, the conventional speech enhancement algorithm includes values outside of the range from 0 to 1.
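As a simple illustration, the following sketch constrains an arbitrary estimated mask to the range from 0 to 1. Plain clipping is only one possible compression; the disclosure does not prescribe a particular mapping, and the function name is illustrative.

```python
import numpy as np

def compress_mask(mask: np.ndarray) -> np.ndarray:
    """Constrain mask values to [0, 1]; clipping is used here, other monotonic mappings work too."""
    return np.clip(mask, 0.0, 1.0)
```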
Further, the residual noise is estimated in the (would-be) audio signal after mask-based speech enhancement has been applied. The estimate of the residual noise is based on the mask (e.g., input mask) and the magnitude of the audio input signal. There may exist multiple ways to estimate the residual noise.
In the following, examples for estimating the residual noise are provided. The disclosure should however not be construed as to be limited by these specific examples.
The estimate of the residual noise may be determined based on a difference between the mask (e.g., input mask) and a function of the same mask. The mask and the function of the mask may be defined for each of a plurality of time-frequency bins or time bins and frequency bands. The function of the mask may be a convex function and may be limited to values between 0 and 1. Additionally, the function may satisfy F(0)=0 and F(1)=1. A specific example of a convex function is a power function with an exponent larger than 1. Even more specifically, the estimate of the residual noise may be based on the difference between the mask and the power function of the mask. This difference will be referred to as the residual mask in the following.
The residual mask can be generally defined as
Maskres(τ, f)=Mask(τ, f)−F(Mask(τ, f)),   (1)
where Maskres(τ, f) denotes the residual mask, Mask(τ, f) denotes the mask, F(.) denotes the function of the mask, τ is the time index, and f is the frequency bin or frequency band index.
Specifically, F(.) may be a convex function, i.e., d²F(x)/dx²≥0 for all x in [0, 1], if F(.) is twice differentiable.
More specifically, F(.) may be a power function. The residual mask can then be defined as
Maskres(τ, f)=Mask(τ, f)−Mask(τ, f)^α,
where α is an exponent larger than 1.
Examples of the power function of the mask for different exponents α are depicted in the accompanying drawings.
The residual noise may be estimated based on the residual mask. For example, the residual noise can be determined or estimated by multiplying the residual mask with the magnitude of the audio input signal. The residual noise can then be expressed as
Noiseres(τ, f)=Maskres(τ, f)*Magnoisy(τ, f)=(Mask(τ, f)−Mask(τ, f)^α)*Magnoisy(τ, f),   (2)
wherein Noiseres(τ, f) is the estimate of the residual noise and Magnoisy(τ, f) is the magnitude of the audio signal. Thereby, the residual noise may be estimated based on the mask and the magnitude of the audio signal.
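The residual-mask and residual-noise computations above can be sketched as follows, assuming per-bin arrays for the mask and the noisy magnitude and a power function with exponent α > 1; the default value of α is only an example.

```python
import numpy as np

def residual_mask(mask: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Mask_res = Mask - Mask**alpha, i.e., Eq (1) with the convex power function F(x) = x**alpha."""
    return mask - mask ** alpha

def residual_noise(mask: np.ndarray, mag_noisy: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Noise_res = Mask_res * Mag_noisy, i.e., Eq (2): noise expected to remain after masking."""
    return residual_mask(mask, alpha) * mag_noisy
```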
In a next step of the proposed speech enhancement method, a modified mask is determined based on the estimate of the residual noise. The modified mask may be understood as a modification of the mask (e.g., input mask). In a first example, the modified mask may be determined in such a way that the modified mask is stable over time. In other words, the modified mask may be determined in order to reduce inconsistencies between the dialog and non-dialog sections in the denoised output audio signal. In a second example, the modified mask may be determined such that the residual noise is stable when the modified mask is applied to the audio signal. In both cases, noise pumping/gating issues will be reduced or completely removed in the denoised audio output signal.
In both the first and second example for determining the modified mask, an averaged residual mask may be determined based on the estimate of the residual noise, applying an average over time. Further, for each time-frequency bin or for each time bin and frequency band, one of the mask and the averaged residual mask may be selected as the modified mask.
In the following, the first example for determining the modified mask will be explained in more detail. In this case, the averaged residual mask is determined by averaging the residual mask over time. For example, the averaged residual mask can be expressed as
Maskresave(f)=(1/T)*Σ_{τ=1}^{T} Maskres(τ, f),   (3)
where Maskresave(f) is the averaged residual mask and T is a number larger than or equal to 1, indicating a number of time slots. T can be chosen depending on characteristics of the input audio signal or alternatively can be a fixed value. By repeating Maskresave(f) for T times, Maskresave(τ, f) can be obtained, i.e., Maskresave(τ, f)=Maskresave(f) for all τ∈[1, T], for example.
Further, the selection of one of the mask and the averaged residual mask may be based on a comparison of the mask to the averaged residual mask. In particular, for each time-frequency bin or for each time bin and frequency band, the modified mask may be determined to be the mask (e.g., input mask), if the mask is larger than or equal to the averaged residual mask. In the opposite case, i.e., if the mask is smaller than the averaged residual mask, the modified mask may be determined to be the averaged residual mask. For example, the modified mask can be expressed as
Maskmod(τ, f)=Mask(τ, f) if Mask(τ, f)≥Maskresave(τ, f), and Maskmod(τ, f)=Maskresave(τ, f) otherwise,   (4)
where Maskmod(τ, f) is the modified mask. By setting the modified mask to the averaged residual mask when the mask is smaller than the averaged residual mask, the modified mask is more consistent over dialog and non-dialog sections. Therefore, audible noise pumping/gating issues in the denoised audio output signal can be reduced or completely avoided.
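A compact sketch of this first example is given below: the residual mask is averaged over the T frames of the current block and, per time-frequency bin, the larger of the input mask and the averaged residual mask is kept (Eqs (3) and (4)). The array shape and the choice of T (here, the whole block) are illustrative assumptions.

```python
import numpy as np

def modified_mask_first_example(mask: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """mask: array of shape (freq_bins, T) for a block of T frames; returns the modified mask."""
    mask_res = mask - mask ** alpha                      # residual mask per time-frequency bin
    mask_res_ave = mask_res.mean(axis=1, keepdims=True)  # Eq (3): average over the T frames
    # Eq (4): keep the input mask where it already dominates; otherwise use the averaged
    # residual mask so the mask floor stays consistent across dialog and non-dialog frames.
    return np.maximum(mask, mask_res_ave)
```

np.maximum implements the per-bin selection of Eq (4) directly, since the rule simply keeps the larger of the two candidate values.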
In the following, the second example for determining the modified mask will be explained in more detail. In this case, the averaged residual mask is determined based on the estimate of the residual noise. The estimate of the residual noise may be calculated as in Eq (2), for example. In a next step, the estimate of the residual noise may be averaged over time to determine an average of the residual noise. For example, the average of the residual noise can then be expressed as
Noiseresave(f)=(1/T)*Σ_{τ=1}^{T} Noiseres(τ, f),   (5)
where Noiseresave(f) is the average of the residual noise. Again, T is larger than or equal to 1 and can be chosen depending on characteristics of the input audio signal or alternatively can be a fixed value. By repeating Noiseresave(f) for T times, Noiseresave(τ, f) can be obtained, i.e., Noiseresave(τ, f)=Noiseresave(f) for all τ∈[1, T], for example.
Based on the averaged residual noise and the magnitude of the audio signal, the averaged residual mask can be determined. The averaged residual mask can then be expressed, for example, as
Maskresave(τ, f)=Noiseresave(τ, f)/(Magnoisy(τ, f)+ε),   (6)
where Maskresave(τ, f) denotes the averaged residual mask and ε is a positive value close to zero to avoid a division by zero.
Further, the selection of one of the mask and the averaged residual mask in this example may be based on a comparison of an estimate of a residual speech signal and the average of the residual noise. To do so, the estimate of the residual speech signal may be obtained based on the mask and the magnitude of the audio signal. For example, the residual speech may be obtained by multiplying the mask with the magnitude of the audio signal.
As to the selection between the mask and the averaged residual mask, for each time-frequency bin or for each time bin and frequency band, the modified mask may be determined to be the mask (e.g., input mask), if the estimate of the residual speech signal is larger than or equal to the average of the residual noise. In the opposite case, i.e., if the estimate of the residual speech signal is smaller than the average of the residual noise, the modified mask may be determined to be the averaged residual mask. For example, the modified mask can then be expressed as
Maskmod(τ, f)=Mask(τ, f) if Magspeechest(τ, f)≥Noiseresave(τ, f), and Maskmod(τ, f)=Maskresave(τ, f) otherwise,   (7)
where Magspeechest(τ, f)=Mask(τ, f)*Magnoisy(τ, f) denotes the estimate of the residual speech signal. By setting the modified mask to the averaged residual mask when the estimate of the residual speech is smaller than the averaged residual noise, the residual noise is more consistent over dialog and non-dialog sections. Therefore, audible noise pumping/gating issues in the denoised audio output signal can be reduced or completely avoided.
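The second example can be sketched analogously: the residual-noise estimate is averaged over time, converted back into a mask by dividing by the noisy magnitude (regularized by ε), and the per-bin selection compares the speech estimate with the averaged residual noise (Eqs (5) to (7)). Array shapes, the averaging window, and the value of ε are illustrative assumptions.

```python
import numpy as np

def modified_mask_second_example(mask: np.ndarray, mag_noisy: np.ndarray,
                                 alpha: float = 2.0, eps: float = 1e-8) -> np.ndarray:
    """mask, mag_noisy: arrays of shape (freq_bins, T); returns the modified mask."""
    noise_res = (mask - mask ** alpha) * mag_noisy          # Eq (2): residual-noise estimate
    noise_res_ave = noise_res.mean(axis=1, keepdims=True)   # Eq (5): average over the T frames
    mask_res_ave = noise_res_ave / (mag_noisy + eps)        # Eq (6): back to a mask, eps avoids /0
    speech_est = mask * mag_noisy                           # estimate of the residual speech signal
    # Eq (7): keep the input mask where the speech estimate dominates the averaged residual
    # noise; otherwise use the averaged residual mask so the residual noise stays stable.
    return np.where(speech_est >= noise_res_ave, mask, mask_res_ave)
```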
Optionally, after the modified mask is determined, the modified mask may be provided to a downstream device for storage, rendering, or additional processing.
In a further optional step, the method may determine (e.g., identify, detect) the one or more speech segments in the audio signal. For example, the one or more speech segments in the audio signal can be identified by using a voice activity detector, VAD. When the one or more speech segments have been determined in the audio signal, the determination of the modified mask can additionally be based on the one or more speech segments. In particular, computing the average of the residual mask or computing the average of the residual noise can be based on the one or more speech segments. Instead of averaging over some time interval T as in the first and second example to determine the modified mask, the average may then only be computed for frames including speech, i.e., for the identified speech segments.
With the speech segments identified, in the case of the first example for determining the modified mask, instead of Eq (3), the averaged residual mask can be expressed as
Maskresave(f)=(1/T′)*Σ_{τ∈S} Maskres(τ, f),   (8)
where S denotes the set of speech segments, and T′ denotes the total number of frames of speech segments in S.
Further, in the case of the second example for determining the modified mask, instead of Eq (5), the average of the residual noise can be expressed as
Noiseresave(f)=(1/T′)*Σ_{τ∈S} Noiseres(τ, f),   (9)
where S and T′ are defined as above.
The remaining steps for determining the modified mask in the first and second examples may not need to be altered.
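The speech-segment variant can be sketched as below for the first example; a per-frame boolean speech flag (e.g., produced by a VAD) is assumed to be available, and the fallback when no speech is detected is an assumption of this sketch rather than something mandated by the disclosure.

```python
import numpy as np

def modified_mask_speech_frames(mask: np.ndarray, speech_flags: np.ndarray,
                                alpha: float = 2.0) -> np.ndarray:
    """mask: (freq_bins, frames); speech_flags: boolean array of length frames (e.g., from a VAD)."""
    mask_res = mask - mask ** alpha
    if np.any(speech_flags):
        # Eq (8): average the residual mask only over the T' frames that contain speech.
        mask_res_ave = mask_res[:, speech_flags].mean(axis=1, keepdims=True)
    else:
        # Fallback (assumption of this sketch): no speech detected, average over all frames as in Eq (3).
        mask_res_ave = mask_res.mean(axis=1, keepdims=True)
    return np.maximum(mask, mask_res_ave)
```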
In line with the above, a method 100 is provided to improve speech enhancement algorithms, as depicted in the flowchart of the accompanying drawings.
In step S101, a mask for mask-based speech enhancement of an audio signal is obtained. In line with the above, this may include receiving or generating the mask.
In step S102, a magnitude of the audio signal is obtained. In line with the above, this may include receiving or determining the magnitude of the audio signal.
In step S103, an estimate of residual noise in the audio signal after mask-based speech enhancement is determined based on the mask and the magnitude of the audio signal. In line with the above, this may include determining a function of the mask. Specifically, the function of the mask may be a convex function. More specifically, the function may be a power function of the mask.
In step S104, a modified mask is determined based on the estimate of the residual noise. In line with the above, this may include determining an average of a residual mask. For example, this average is determined as in the first and second examples. Further, this step may include a selection between the mask and the averaged residual mask.
In step S201, an averaged residual mask is determined based on the estimate of the residual noise, applying an average over time. In line with the above, this may include determining the averaged residual mask based on averaging the residual mask or based on averaging the residual noise.
In step S202, one of the mask and the averaged residual mask is selected as the modified mask for each time-frequency bin or time bin and frequency band. In line with the above, this may include selection based on comparison of the mask and the averaged residual mask or a comparison of the averaged residual noise and the estimate of the residual speech signal.
In step S301, one or more speech segments in the audio signal are determined (e.g., identified, detected). In line with the above, this may include using a voice activity detector.
In step S302, the modified mask is additionally determined based on the one or more speech segments identified in the audio signal. In line with the above, this may include determining the averaged residual mask or the averaged residual noise only for the segments of the audio signal that have been determined to contain speech.
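For orientation, the following sketch strings steps S101 to S104 together for one block of audio, using the second example in step S104 and ending with the optional application of the modified mask. The upstream speech-enhancement model that produces the input mask is assumed and simply passed in; the sampling rate, frame size, α, and ε are illustrative, and the mask is assumed to match the STFT grid.

```python
import numpy as np
from scipy.signal import stft, istft

def method_100(noisy: np.ndarray, mask: np.ndarray, fs: int = 16000,
               nperseg: int = 512, alpha: float = 2.0, eps: float = 1e-8) -> np.ndarray:
    """S101: mask obtained from an assumed upstream enhancer (shape (freq_bins, frames))."""
    _, _, spec = stft(noisy, fs=fs, nperseg=nperseg)
    mag_noisy = np.abs(spec)                                # S102: magnitude of the audio signal
    noise_res = (mask - mask ** alpha) * mag_noisy          # S103: estimate of the residual noise
    noise_res_ave = noise_res.mean(axis=1, keepdims=True)   # S104/S201: average over time
    mask_res_ave = noise_res_ave / (mag_noisy + eps)
    mask_mod = np.where(mask * mag_noisy >= noise_res_ave,  # S104/S202: per-bin selection
                        mask, mask_res_ave)
    _, denoised = istft(mask_mod * spec, fs=fs, nperseg=nperseg)  # optional application step
    return denoised[: len(noisy)]
```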
While a method of processing an audio signal has been described above, the disclosure likewise relates to corresponding apparatus, and the like. An embodiment providing such apparatus will be described next.
As shown in the accompanying drawings, such an apparatus may comprise a processor and a memory coupled to the processor, the processor being adapted to carry out the methods described throughout this disclosure.
Aspects of the systems described herein may be implemented in an appropriate computer-based sound processing network environment for processing digital or digitized audio files. Portions of the adaptive audio system may include one or more networks that comprise any desired number of individual machines, including one or more routers (not shown) that serve to buffer and route the data transmitted among the computers. Such a network may be built on various different network protocols, and may be the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof.
One or more of the components, blocks, processes or other functional components may be implemented through a computer program that controls execution of a processor-based computing device of the system. It should also be noted that the various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, physical (non-transitory), non-volatile storage media in various forms, such as optical, magnetic or semiconductor storage media.
While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
A computing device implementing the techniques described above can have the following example architecture. Other architectures are possible, including architectures with more or fewer components. In some implementations, the example architecture includes one or more processors (e.g., dual-core Intel® Xeon® Processors), one or more output devices (e.g., LCD), one or more network interfaces, one or more input devices (e.g., mouse, keyboard, touch-sensitive display) and one or more computer-readable mediums (e.g., RAM, ROM, SDRAM, hard disk, optical disk, flash memory, etc.). These components can exchange communications and data over one or more communication channels (e.g., buses), which can utilize various hardware and software for facilitating the transfer of data and control signals between components.
The term “computer-readable medium” refers to a medium that participates in providing instructions to processor for execution, including without limitation, non-volatile media (e.g., optical or magnetic disks), volatile media (e.g., memory) and transmission media. Transmission media includes, without limitation, coaxial cables, copper wire and fiber optics.
Computer-readable medium can further include operating system (e.g., a Linux® operating system), network communication module, audio interface manager, audio processing manager and live content distributor. Operating system can be multi-user, multiprocessing, multitasking, multithreading, real time, etc. Operating system performs basic tasks, including but not limited to: recognizing input from and providing output to network interfaces and/or devices; keeping track and managing files and directories on computer-readable mediums (e.g., memory or a storage device); controlling peripheral devices; and managing traffic on the one or more communication channels. Network communications module includes various components for establishing and maintaining network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, etc.).
Architecture can be implemented in a parallel processing or peer-to-peer infrastructure or on a single device with one or more processors. Software can include multiple software components or can be a single body of code.
The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, a browser-based web application, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor or a retina display device for displaying information to the user. The computer can have a touch surface input device (e.g., a touch screen) or a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The computer can have a voice input device for receiving voice commands from the user.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
A system of one or more computers can be configured to perform particular actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the present invention discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “analyzing” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing devices, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
Reference throughout this invention to “one example embodiment”, “some example embodiments” or “an example embodiment” means that a particular feature, structure or characteristic described in connection with the example embodiment is included in at least one example embodiment of the present invention. Thus, appearances of the phrases “in one example embodiment”, “in some example embodiments” or “in an example embodiment” in various places throughout this invention are not necessarily all referring to the same example embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this invention, in one or more example embodiments.
As used herein, unless otherwise specified the use of the ordinal adjectives “first”, “second”, “third”, etc., to describe a common object, merely indicate that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof are meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless specified or limited otherwise, the terms “mounted”, “connected”, “supported”, and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings.
In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
It should be appreciated that in the above description of example embodiments of the present invention, various features of the present invention are sometimes grouped together in a single example embodiment, FIG., or description thereof for the purpose of streamlining the present invention and aiding in the understanding of one or more of the various inventive aspects. This method of invention, however, is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed example embodiment. Thus, the claims following the Description are hereby expressly incorporated into this Description, with each claim standing on its own as a separate example embodiment of this invention.
Furthermore, while some example embodiments described herein include some but not other features included in other example embodiments, combinations of features of different example embodiments are meant to be within the scope of the present invention, and form different example embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed example embodiments can be used in any combination.
In the description provided herein, numerous specific details are set forth. However, it is understood that example embodiments of the present invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Thus, while there has been described what are believed to be the best modes of the present invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the present invention, and it is intended to claim all such changes and modifications as fall within the scope of the present invention. For example, any formulas given above are merely representative of procedures that may be used.
Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present disclosure.
Various aspects and implementations of the present disclosure may also be appreciated from the following enumerated example embodiments (EEEs), which are not claims.
This application claims priority to PCT Patent Application No. PCT/CN2021/129565, filed 9 Nov. 2021 and U.S. provisional application 63/286,703, filed 7 Dec. 2021, all of which are incorporated herein by reference in their entirety.