This application claims the benefit under 35 U.S.C. §119 of the filing date of Australian Patent Application No. 2014202322, filed Apr. 29, 2014, hereby incorporated by reference in its entirety as if fully set forth herein.
The present invention relates to the suppression of noise in a captured fringe image produced with an energy source, such as in an interferometer.
Image capture devices project a scene, natural or artificial, onto a two-dimensional surface. This two-dimensional projection provides an easy approach to transmitting and analysing information about the original scene. However, the image capture process inevitably introduces blurring, noise and many other kinds of degradation.
In the case of noise, it is very important to understand the nature of the noise, so as to be able to remove or suppress the noise efficiently. For example, additive noise can be effectively removed by many linear or non-linear filtering or smoothing techniques, such as bilateral filtering or the non-local means method. For noise such as Poisson noise (shot noise), one of the most effective denoising approaches is to physically obtain more captures of the same scene and to average across all the captures. This is due to the nature of Poisson noise, where the signal to noise ratio increases with the number of photons.
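The averaging effect for Poisson noise can be illustrated with a small simulation. This is an illustrative sketch only: the mean photon count of 100 and the choice of 16 captures are arbitrary values, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0  # assumed mean photon count per pixel (illustrative)

# A single capture: for Poisson noise the SNR is mean/std = sqrt(N).
one = rng.poisson(signal, size=100_000)

# Averaging 16 captures multiplies the effective photon count by 16,
# which improves the SNR by a factor of sqrt(16) = 4.
many = rng.poisson(signal, size=(16, 100_000)).mean(axis=0)

snr_one = one.mean() / one.std()    # approximately sqrt(100) = 10
snr_many = many.mean() / many.std() # approximately sqrt(1600) = 40
```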
As a tool for analysing and compressing images, wavelet theory is widely used for image enhancement. Due to its flexibility in trading off spatial resolution against frequency resolution, wavelet theory has been utilized in a ‘wavelet shrinkage’ algorithm for denoising. Wavelet shrinkage does not assume a smooth and continuous signal, which makes wavelet shrinkage viable for signals with discontinuities. Furthermore, because the wavelet transform maps white noise in the signal domain to white noise in the transform domain, wavelet transformation spreads the noise energy out while keeping the signal energy in a few wavelet coefficients, and thus separates noise and signal well. In other words, a simple thresholding in the wavelet domain can remove or suppress the noise and maintain the signal.
A standard wavelet shrinkage process involves three steps: (i) applying a wavelet transform to the noisy signal to obtain wavelet coefficients; (ii) thresholding the wavelet coefficients; and (iii) applying the inverse wavelet transform to the thresholded coefficients to form the denoised signal.
Among the three steps of wavelet shrinkage, the second step, thresholding the wavelet coefficients, is the most important. Currently there are two popular thresholding functions in the art: hard threshold and soft threshold.
Both the hard and the soft shrinkages have advantages and disadvantages. The soft shrinkage estimates tend to have a bigger bias due to the fact that all coefficients are either set to zero or shrunk by T, while the hard shrinkage estimates tend to have bigger variance and are sensitive to small changes in the data.
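The hard and soft thresholding functions discussed above can be sketched as follows. This is a minimal NumPy illustration; the function names are illustrative only.

```python
import numpy as np

def hard_threshold(y, T):
    """Hard threshold: set coefficients inside the dead zone [-T, T]
    to zero and keep all other coefficients unchanged."""
    return np.where(np.abs(y) > T, y, 0.0)

def soft_threshold(y, T):
    """Soft threshold: set coefficients inside the dead zone to zero
    and shrink every remaining coefficient towards zero by T."""
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)

y = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
hard = hard_threshold(y, 1.0)
soft = soft_threshold(y, 1.0)
```

The sketch makes the trade-off concrete: the soft estimate of every surviving coefficient is biased towards zero by T, while the hard estimate jumps discontinuously at the dead zone boundary, which is the source of its higher variance.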
Since the introduction of the wavelet shrinkage algorithm, many improvements have been suggested by various researchers. Most improvements focus on subband-specific thresholding, exploring inter-scale dependency, or a better estimation of the noise strength. That is, the parameters of the wavelet thresholding function are chosen based on an estimate of the noise strength, and the parameters vary from sub-band to sub-band. However, it is always possible that some wavelet coefficients outside the dead zone are actually noise and some wavelet coefficients inside the dead zone are in fact useful signal. For example, at a particular subband, frequency components from the original noisy image that match the resolution of this subband will be represented with a few strong coefficients while higher frequency components are represented with small coefficients. When thresholding in this subband, small coefficients representing useful signals with these higher frequency components will be treated as noise and set to zero. This will result in an uneven representation of the useful signal in the denoised image, even when the dead zone size T is carefully chosen to reflect the noise strength in the original image.
For images such as portrait photos and photos of nature, this uneven representation tends to have limited impact as these images tend to have a relatively flat spectral representation. That is, they have a wide range of frequency components with similar energy in each frequency component. It is, however, a much more prominent problem for fringe images captured in an interferometer. In such a fringe image, accidentally attenuating the frequency components that represent the carrier fringe will lead to disastrous denoised results, which causes large errors in the later demodulation step.
There is a need for an algorithm that is able to remove noise in a fringe image without causing substantial errors in further image processing.
According to one aspect of the present disclosure there is provided an image processing method, for example a de-noising method, the method comprising:
capturing a fringe pattern from an energy source, the captured fringe pattern having a carrier frequency component dependent on settings of the energy source;
obtaining wavelet coefficients for the captured fringe pattern by applying a wavelet transform to the captured fringe pattern;
establishing a wavelet coefficients mapping function having a rate of change that varies depending at least on the carrier frequency component of the captured fringe pattern;
transforming the obtained wavelet coefficients using the established wavelet coefficients mapping function; and
processing the captured fringe pattern by applying an inverse wavelet transform to the transformed wavelet coefficients to form a denoised fringe pattern.
Preferably the method further comprises demodulating the captured fringe pattern to determine at least the carrier frequency component and a modulation strength to which the fringe pattern is modulated by an object being imaged using the energy source. Typically the established wavelet coefficients mapping function is further based on the determined modulation strength. The method may further comprise demodulating the denoised fringe pattern using the determined carrier frequency.
In a specific implementation, establishing the wavelet coefficients mapping function further comprises determining a range of magnitude values of the wavelet coefficients, where wavelet coefficients with a magnitude within said range are set to zero, the range being determined based on estimated noise variance in the captured fringe pattern.
Desirably the rate of change of the established wavelet coefficients mapping function is non-linearly dependent on the carrier frequency component. Preferably the rate of change of the established wavelet coefficients mapping function determines a suppressing rate for the wavelet coefficients with a magnitude outside the predetermined range.
In another implementation, transforming the obtained wavelet coefficients further comprises:
setting the magnitude of wavelet coefficients with magnitude within a predetermined range to zero; and
suppressing the magnitude of at least one wavelet coefficient with magnitude outside the predetermined range in accordance with the suppressing rate determined based on the carrier frequency component, the suppressing rate being established by the wavelet coefficients mapping function.
Further, the wavelet coefficients mapping function may be further determined using a modulation strength to which the fringe pattern is modulated by an object being imaged using the energy source.
According to another aspect of the present disclosure there is provided an image processing method, the method comprising:
capturing a fringe pattern from an energy source, the captured fringe pattern having a carrier frequency component dependent on settings of the energy source;
obtaining wavelet coefficients for the captured fringe pattern by applying a wavelet transform to the captured fringe pattern;
establishing a wavelet coefficients mapping function that varies depending on the carrier frequency component of the captured fringe pattern;
transforming the obtained wavelet coefficients using the established wavelet coefficients mapping function; and
processing the captured fringe pattern by applying an inverse wavelet transform to the transformed wavelet coefficients to form a denoised fringe pattern.
In another aspect, disclosed is an image processing method, the method comprising:
capturing a fringe pattern from an energy source, the captured fringe pattern having a carrier frequency component dependent on settings of the energy source;
obtaining wavelet coefficients for the captured fringe pattern by applying a wavelet transform to the captured fringe pattern;
suppressing the obtained wavelet coefficients in accordance with the carrier frequency component of the captured fringe pattern, wherein the suppressing rate increases with increasing carrier frequency of the captured fringe pattern; and
processing the captured fringe pattern by applying an inverse wavelet transform to the suppressed wavelet coefficients to form a denoised fringe pattern.
In these methods, the wavelet coefficients mapping function desirably comprises a thresholding function expressed by:
where T is the dead zone size and F is a curvature parameter that changes the shape of the function.
Most preferably, the thresholding function is interpreted using the expression:
where ‘*’ indicates multiplication. Solving Equation (b) for ω results in Equation (a). Equation (b) defines a mapping from noisy wavelet coefficients y to denoised wavelet coefficients ω, such that in Equations (a) and (b), sgn(x)=1 where x>0, and sgn(x)=−1 where x<0.
Other aspects are also disclosed.
At least one embodiment of the invention will now be described with reference to the following drawings, in which:
A ‘fringe image’ refers to an image with at least one dominant one-dimensional or two-dimensional periodical signal.
Many imaging systems can produce fringe images such as the one in
The gratings in
XT images are inherently noisy due to the photon process (Poisson noise) as well as thermal noise, readout noise and background noise. The latter three noise sources can all be modelled as additive Gaussian noise.
The fringe pattern generated in the XT imaging system 100 and detected by the image sensor 140 can be approximately expressed as:
z(r)=¼a(r)(1+mx(r)cos(φx(r)))(1+my(r)cos(φy(r))) (1)
where z(r) is the intensity value at position r in the captured XT image;
Fringe images z(r) captured in the XT system 100 need to be demodulated to recover the x and y phases ξx(r) and ξy(r) of the object 102. A demodulation process 200 is explained with reference to the flowchart of
φx(r)=2πxfx+ξx(r) and φy(r)=2πyfy+ξy(r) (2)
where fx and fy are the horizontal and vertical fringe frequency components. The object phases ξx and ξy are the x and y derivatives of the optical path length of the object.
The values of mx(r) and my(r) fall between 0 and 1, and the values of ξx(r) and ξy(r) are between 0 and 2π.
As illustrated in
In order to achieve a better image quality for the demodulated images, a ‘denoising’ step can be performed before the demodulation step 220. The new processing flow is shown in a method 300 in
However, one potential problem with performing denoising before demodulation, as shown in
In this specification, the term ‘fringe image’ refers to captured images in an imaging system such as an interferometer that has at least one periodical signal and may or may not have the object information. In other words, the examples shown in
As seen in
The computer module 1901 typically includes at least one processor unit 1905, and a memory unit 1906. For example, the memory unit 1906 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1901 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1907 that couples to the video display 1914, loudspeakers 1917 and microphone 1980; an I/O interface 1913 that couples to the keyboard 1902, mouse 1903, scanner 1926, XT microscope 1927 and optionally a joystick or other human interface device (not illustrated); and an interface 1908 for the external modem 1916 and printer 1915. In some implementations, the modem 1916 may be incorporated within the computer module 1901, for example within the interface 1908. The computer module 1901 also has a local network interface 1911, which permits coupling of the computer system 1900 via a connection 1923 to a local-area communications network 1922, known as a Local Area Network (LAN). As illustrated in
The I/O interfaces 1908 and 1913 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1909 are provided and typically include a hard disk drive (HDD) 1910. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1912 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1900.
The components 1905 to 1913 of the computer module 1901 typically communicate via an interconnected bus 1904 and in a manner that results in a conventional mode of operation of the computer system 1900 known to those in the relevant art. For example, the processor 1905 is coupled to the system bus 1904 using a connection 1918. Likewise, the memory 1906 and optical disk drive 1912 are coupled to the system bus 1904 by connections 1919. Examples of computers on which the described arrangements can be practised include IBM-PC's and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
The methods of fringe image processing including denoising may be implemented using the computer system 1900 wherein the processes of
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1900 from the computer readable medium, and then executed by the computer system 1900. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1900 preferably effects an advantageous apparatus for processing of fringe images.
The software 1933 is typically stored in the HDD 1910 or the memory 1906. The software is loaded into the computer system 1900 from a computer readable medium, and executed by the computer system 1900. Thus, for example, the software 1933 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1925 that is read by the optical disk drive 1912. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1900 preferably effects an apparatus for processing fringe images.
In some instances, the application programs 1933 may be supplied to the user encoded on one or more CD-ROMs 1925 and read via the corresponding drive 1912, or alternatively may be read by the user from the networks 1920 or 1922. Still further, the software can also be loaded into the computer system 1900 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1900 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc™, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1901. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1901 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the application programs 1933 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1914. Through manipulation of typically the keyboard 1902 and the mouse 1903, a user of the computer system 1900 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1917 and user voice commands input via the microphone 1980.
When the computer module 1901 is initially powered up, a power-on self-test (POST) program 1950 executes. The POST program 1950 is typically stored in a ROM 1949 of the semiconductor memory 1906 of
The operating system 1953 manages the memory 1934 (1909, 1906) to ensure that each process or application running on the computer module 1901 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1900 of
As shown in
The application program 1933 includes a sequence of instructions 1931 that may include conditional branch and loop instructions. The program 1933 may also include data 1932 which is used in execution of the program 1933. The instructions 1931 and the data 1932 are stored in memory locations 1928, 1929, 1930 and 1935, 1936, 1937, respectively. Depending upon the relative size of the instructions 1931 and the memory locations 1928-1930, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1930. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1928 and 1929.
In general, the processor 1905 is given a set of instructions which are executed therein. The processor 1905 waits for a subsequent input, to which the processor 1905 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1902, 1903, data received from an external source across one of the networks 1920, 1922, data retrieved from one of the storage devices 1906, 1909 or data retrieved from a storage medium 1925 inserted into the corresponding reader 1912, all depicted in
The disclosed image processing arrangements use input variables 1954, which are stored in the memory 1934 in corresponding memory locations 1955, 1956, 1957. The image processing arrangements produce output variables 1961, which are stored in the memory 1934 in corresponding memory locations 1962, 1963, 1964. Intermediate variables 1958 may be stored in memory locations 1959, 1960, 1966 and 1967.
Referring to the processor 1905 of
Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1939 stores or writes a value to a memory location 1932.
Each step or sub-process in the processes of
In order to achieve different denoising results for different noise strengths and different underlying modulated fringe images, presently disclosed is a wavelet denoising method that uses an adaptive wavelet thresholding function that is parameterized by the fringe frequency, the modulation strength of the fringes, and noise strength.
Step 430, being the applying of wavelet denoising with a new adaptive thresholding function, can be explained with reference to
A wavelet thresholding function is a mapping relationship between noisy wavelet coefficients and the denoised wavelet coefficients. The wavelet coefficients mapping function has a rate of change that varies depending at least on the carrier frequency component of the captured fringe pattern.
The thresholding function (e.g. 1610, 1620) depicted in
where T is the dead zone size and F is a curvature parameter that changes the shape of the function. The function in Equation (3) can be interpreted using an expression:
where ‘*’ indicates multiplication. Note that solving Equation (4) for ω will result in Equation (3). Equation (4) provides a simpler and more convenient way of describing the mapping from the noisy wavelet coefficients y to the denoised wavelet coefficients ω. In Equations (3) and (4), sgn(x)=1 where x>0 and sgn(x)=−1 where x<0. The value of sgn(0) is left undefined as it is not needed. Note that Equation (4) corresponds to the mapping from the noisy wavelet coefficients y to the denoised wavelet coefficients ω even though the equation format suggests a mapping from ω to y. Therefore, although Equation (4) cannot formally be called “a function”, the mapping from the noisy wavelet coefficients y to the denoised wavelet coefficients ω performed by solving Equation (4) can be considered a mapping function for the purposes of the present disclosure. By carefully choosing the curvature parameter F and the dead zone size T, the thresholding function in Equation (3) provides a flexible mapping that adapts to fringe images with different fringe frequencies, modulation strengths, and noise strengths. As will be appreciated from Equations (3) and (4), the rate of change of the established wavelet coefficients mapping function 1610, 1620 is non-linearly dependent on the carrier frequency component of the fringe pattern. Moreover, the rate of change of the established wavelet coefficients mapping function determines a suppressing rate for the wavelet coefficients with a magnitude outside the predetermined range. These attributes will be appreciated, at least qualitatively, from
In
Meanwhile, the dotted line 1610 in
These presently described arrangements provide an adaptive wavelet denoising framework that relies on good choices of dead zone size and curvature parameter that fit the signal to noise (SNR) level, fringe frequency, and modulation strength of an input fringe image.
Referring back to
T=σn√(2 log L)
where σn is the standard deviation of the additive white Gaussian noise, and L is the total number of pixels in the noisy captured fringe pattern image. The standard deviation of the noise σn can be estimated using the median value of pixels in one part of the image. As such, the range of the magnitude value T is determined based on estimated noise variance in the captured fringe pattern.
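This dead zone computation can be sketched as follows. The source only states that a median over part of the image is used to estimate σn, so the median-absolute-deviation estimator below (with the Gaussian scale factor 0.6745) is an assumed, commonly used choice rather than the method claimed here.

```python
import numpy as np

def universal_threshold(noisy, sigma_n=None):
    """Dead zone size T = sigma_n * sqrt(2 * log(L)), where L is the
    total number of pixels. If sigma_n is not supplied, estimate it
    with the median absolute deviation (MAD); the factor 0.6745
    converts MAD to standard deviation under Gaussian noise."""
    L = noisy.size
    if sigma_n is None:
        deviations = np.abs(noisy - np.median(noisy))
        sigma_n = np.median(deviations) / 0.6745
    return sigma_n * np.sqrt(2.0 * np.log(L))
```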
In step 420, the processor 1905 calculates a curvature parameter F that varies the shape of the thresholding function described in Equation (3). A smaller curvature parameter gives a thresholding function such as the function 1610, which shrinks the signals less, while a larger curvature parameter gives a thresholding function 1620, which shrinks the signals more. It should be noted that the dead zone size T and the curvature parameter F are determined independently and separately.
Details of Step 420 are now discussed with reference to
The process 510 for determining the fringe frequency is briefly described in
In this first implementation, the horizontal and vertical fringe frequency components are assumed to be the same, that is, fx=fy, due to the use of regular gratings, such as those illustrated in
Once the fringe frequency value f is determined, the process in
Once the fringe frequency f and the modulation strength m are calculated, the curvature parameter F can be determined in step 530. Because the captured fringe image is dominated by a one-dimensional or two-dimensional fringe, there is significant energy concentrated at the location of the fringe frequency in the Fourier domain. Naturally, the denoising process needs to handle the fringe frequency carefully to avoid visible damage to the fringe. It is therefore understandable that the fringe frequency f plays a role in how differently noise and signal are treated. In other words, the fringe frequency value f will have an impact on the curvature parameter F, as F determines how differently noise and signal are treated in the wavelet domain. Generally, when the fringe frequency f is low (i.e. a wide fringe), signal and noise should be shrunk at very different levels because signal and noise in this case are well separated in frequency domain. There is little risk of accidentally removing a considerable amount of signal while removing noise. If, however, the fringe frequency f is high (i.e. a narrow fringe), the boundary of signal and noise in frequency domain is not very clear, which makes it more risky for the thresholding function to suppress certain wavelet coefficients, even if the dead zone size T is carefully selected.
In summary, a captured image with low fringe frequency (wide fringe) should be denoised with a thresholding function with a small F like the function 1610 due to high confidence in the dead zone size T. A captured image with high fringe frequency (narrow fringe) should be denoised with a thresholding function with large F like the function 1620 to control the damage in the clean signal or to avoid boosting noise in case the dead zone size T is not accurate enough. That is, the curvature parameter F should be proportional to the fringe frequency: a low fringe frequency requires a small curvature parameter and a high fringe frequency requires a large curvature parameter.
Meanwhile, the modulation strength m determines the percentage of energy in the captured image that comes from the fringe and from the object. Therefore, the modulation strength also has an impact on the choice of the curvature parameter F. If the fringe is strongly modulated by the object information, only a small part of the signal energy comes from the fringe and the rest is from the object information. This means the peaks representing the fringe in Fourier domain will have small amplitudes and consequently small impact on the quality of the image denoising results. If, however, the modulation strength is relatively weak, the fringe dominates the whole image, more care needs to be taken towards shrinking the coefficients. That is, with a strong modulation, a small curvature parameter F can be used while a large curvature parameter is more suitable for captured images with weak modulation.
Step 530 calculates a curvature parameter F according to the fringe frequency f and the modulation strength m. The curvature parameter is calculated by interpolating values in a pre-generated look-up table. Details of the interpolation process will be explained with an example later. Generally, any linear or higher order interpolation is feasible.
The look-up table used in step 530 is generated using at least two standard or customized training images. Since a general correlation between the fringe frequency f, the modulation strength m, and the curvature parameter F is assumed for the system 100 with the gratings G0, G1 and G2, one only needs to generate the look-up table once. Whenever a new curvature parameter F is required, a simple linear or higher order interpolation can be performed.
The training images in step 810 are optical path length (OPL) images that represent the phase change of the optical wave along its propagation path. For example, in a medium with constant refractive index, the optical path length can be expressed as OPL=d*n, where d is the physical distance the ray travelled and n is the refractive index of the medium. If the wavelength λ of the optical wave is known, the phase change of this wave while travelling in the medium can be calculated as φ=2π·OPL/λ.
These optical path length images can be either artificially created or simulated from a real X-ray absorption image. For example, an optical path length image can be created by taking the logarithm of a regular X-ray absorption image.
In
Similarly, step 960 checks if the current modulation strength m is greater than a predetermined maximum value, 0.8. This maximum is an empirical value determined from the real system. If not, the process goes to step 965 to increase the modulation strength by 0.1. For example, if the current modulation strength is 0.4, the new modulation strength after step 965 will be 0.5. If both the maximum fringe frequency and the maximum modulation strength are reached, the look-up table for the current training image is complete. The maximum fringe frequency rarely goes over 8 pixels per fringe and the maximum modulation strength almost never goes over 0.8. This is certainly true for XT systems, but also true for many modulation systems. Because the simulations discussed in the preferred implementations aim to cover situations arising in real experiments, it is safe to assume these upper limits.
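The parameter grid swept by this loop can be sketched as follows. The source gives the initial values, the 0.1 modulation step, and the two maxima; the halving of the fringe period from 64 down to 8 pixels per fringe is an assumption made for illustration.

```python
# Fringe periods in pixels per fringe, from the initial value f0 = 64
# down to the stated maximum frequency of 8 pixels per fringe
# (the halving step between entries is an assumption).
freqs = [64, 32, 16, 8]

# Modulation strengths from the initial m0 = 0.4 up to the stated
# empirical maximum of 0.8, in steps of 0.1.
mods = [round(0.4 + 0.1 * i, 1) for i in range(5)]  # 0.4 .. 0.8

# Every (frequency, modulation) pair for which a curvature parameter F
# is recorded in the look-up table.
grid = [(f, m) for f in freqs for m in mods]
```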
The details of the noisy fringe image simulation performed in step 930 are explained with reference to
z(r)=¼a(r)(1+mx(r)cos(φx(r)))(1+my(r)cos(φy(r))) (1)
where
In Equation (2), ψ represents the training OPL image, which varies as a function of position r. Since the x fringe frequency fx and the y fringe frequency fy are assumed to be the same, and the x modulation strength mx and the y modulation strength my are also assumed to be the same, the simulation of the clean fringe image is implemented as:
z(r)=¼a(1+m cos(φx(r)))(1+m cos(φy(r))) (5)
where
The f and m in Equation (5) are the current fringe frequency and the current modulation strength respectively. For example, the fringe frequency can be the initial frequency f0=64 pixels per fringe and the modulation strength can be the initial modulation strength m0=0.4. Notice that in Equation (5), the absorption factor a has been simplified to be a constant across the image, since the value of the absorption does not affect the RMSE, thus does not affect the choice of the curvature parameter F. For example, a can be set to 1.
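The clean fringe simulation of Equation (5) can be sketched as follows. The simplifications a=1 and zero object phases are those noted above; the function name and the interpretation of f as the fringe period in pixels per fringe are illustrative assumptions.

```python
import numpy as np

def simulate_clean_fringe(shape, f, m, a=1.0):
    """Clean fringe image per Equation (5):
        z(r) = (a/4) * (1 + m*cos(phi_x)) * (1 + m*cos(phi_y)),
    with phi_x = 2*pi*x/f and phi_y = 2*pi*y/f, where f is the fringe
    period in pixels per fringe. Object phases are taken as zero so
    the image contains only the plain carrier fringe."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    phi_x = 2.0 * np.pi * x / f
    phi_y = 2.0 * np.pi * y / f
    return 0.25 * a * (1.0 + m * np.cos(phi_x)) * (1.0 + m * np.cos(phi_y))

# Initial simulation parameters quoted in the text: f0 = 64 pixels per
# fringe and m0 = 0.4, with the absorption factor a fixed at 1.
z = simulate_clean_fringe((128, 128), f=64.0, m=0.4)
```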
In order to assess the denoising results with a particular curvature parameter F, certain noise is added to the generated fringe image in each of steps 1120 and 1130. Step 1120 is configured to add Poisson noise, and step 1130 to add Gaussian noise. Because the noise level of the simulated image will not directly affect the choice of the curvature parameter F, the Poisson and Gaussian noise added in
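Steps 1120 and 1130 can be sketched as follows. Since, as noted, the simulated noise level does not need to match the real experiment, the photon count and Gaussian standard deviation below are arbitrary illustrative values.

```python
import numpy as np

def add_noise(clean, photons=1000.0, sigma_g=0.01, rng=None):
    """Add capture noise to a clean fringe image with values in [0, 1]:
    scale to an assumed mean photon count and draw Poisson (shot)
    noise (step 1120), rescale, then add zero-mean Gaussian noise
    representing thermal/readout/background noise (step 1130)."""
    rng = np.random.default_rng() if rng is None else rng
    shot = rng.poisson(clean * photons) / photons          # Poisson noise
    return shot + rng.normal(0.0, sigma_g, clean.shape)    # Gaussian noise
```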
Details of a preferred implementation of step 940 are discussed with reference to
Note that when the value of F is large, the desired thresholding function in
There is a reason for starting the search with a relatively large value for F. By starting with a soft-thresholding-like curve, the algorithm is more likely to produce a tolerant, if less accurate, result. This will ensure that the fringe frequency, however high or low it is, will not suffer unstable damage. Ultimately, this will help avoid errors caused by incorrect estimation of the fringe frequency in later demodulation processes to recover the object information from a fringe image.
After the look-up table 825 is generated for the current training image, the process 800 in
The look-up table for another training image I2 might look like Table 2:
The combined look-up table, assuming only two training images are used, will be as shown in Table 3:
Once the combined look-up table 845 is generated in step 840, any combination of the fringe frequency f and the modulation strength m can be used to calculate a curvature parameter value F suitable for particular fringe frequency and modulation strength. Using Table 3 as the example combined look-up table 845, when the input fringe image has a fringe frequency of 24 pixels per fringe (as calculated in step 510) and a modulation strength of 0.66 (as calculated in step 520), the curvature parameter of interest can be interpolated by the processor 1905 using the values from Table 3: F(16, 0.6)=3.7, F(16, 0.7)=3.7, F(32, 0.6)=3.55, F(32, 0.7)=3.45, where F(f, m) is the value of F with fringe frequency f and modulation strength m. A bilinear interpolation method can be used to calculate the appropriate value of the curvature parameter F. In this example, the interpolated curvature parameter F=3.595.
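The bilinear interpolation in this example can be sketched as follows. The helper name is hypothetical; the table entries are the four values quoted above, and the sketch reproduces the interpolated result F=3.595 for f=24 and m=0.66.

```python
def bilinear_F(f, m, table):
    """Bilinearly interpolate the curvature parameter F from a look-up
    table mapping (fringe frequency, modulation strength) -> F.
    `table` is a dict {(f_i, m_j): F_ij} on a rectangular grid."""
    fs = sorted({k[0] for k in table})
    ms = sorted({k[1] for k in table})
    # Locate the grid cell bracketing the query point.
    f0 = max(v for v in fs if v <= f); f1 = min(v for v in fs if v >= f)
    m0 = max(v for v in ms if v <= m); m1 = min(v for v in ms if v >= m)
    tf = 0.0 if f1 == f0 else (f - f0) / (f1 - f0)
    tm = 0.0 if m1 == m0 else (m - m0) / (m1 - m0)
    # Interpolate along f at each modulation row, then along m.
    row0 = (1 - tf) * table[(f0, m0)] + tf * table[(f1, m0)]
    row1 = (1 - tf) * table[(f0, m1)] + tf * table[(f1, m1)]
    return (1 - tm) * row0 + tm * row1

# The four values quoted from the combined look-up table (Table 3):
table = {(16, 0.6): 3.7, (16, 0.7): 3.7, (32, 0.6): 3.55, (32, 0.7): 3.45}
F = bilinear_F(24, 0.66, table)  # approximately 3.595, as in the example
```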
From Table 3, it will be appreciated that the wavelet coefficient mapping function varies depending on the carrier frequency component of the captured fringe pattern. Further, it will be appreciated that the mapping function operates to suppress the wavelet coefficients in accordance with the carrier frequency component of the captured fringe pattern, and in particular that the suppression rate increases with increasing carrier frequency of the captured fringe pattern: the values of F are larger for higher frequencies.
Once the dead zone size T (as calculated in step 410) and the curvature parameter F are determined, the proposed wavelet denoising method with the new adaptive thresholding function as described in Equation (3) is applied by the processor 1905 in step 430 and the noisy image is denoised.
In some implementations, the adaptive thresholding function can be expressed as:
In Equation (6), T is the dead zone size and F is the curvature parameter. The function f(y, F) is a smooth non-linear function with T ≥ f(y, F) ≥ 0: when y approaches T, f(y, F) approaches 0, and when y approaches infinity, f(y, F) approaches T. Therefore, the non-linear thresholding function of Equation (6) is bounded by the soft and hard thresholding functions and varies based on the determined carrier frequency f and the modulation strength m, provided that the curvature parameter F is non-linearly dependent on the fringe (carrier) frequency f and the modulation strength m of the system. The adaptive thresholding function of Equation (6) behaves similarly to either a soft or a hard thresholding function depending on the value of F, the choice of which is non-linearly dependent on the fringe frequency and the modulation strength of the system.
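Equation (6) itself is not reproduced here, but a function with the stated properties can be sketched. The exponential form of f(y, F) below is an illustrative assumption, not the actual Equation (6); it is chosen only because it satisfies the stated limits (f approaches 0 as |y| approaches T, and approaches T as |y| approaches infinity) and moves between hard-thresholding-like behaviour (small F) and soft-thresholding-like behaviour (large F).

```python
import numpy as np

def adaptive_threshold(y, T, F):
    """Smooth thresholding bounded by soft and hard thresholding (sketch).

    T is the dead zone size; F is the curvature parameter. Uses the
    illustrative form f(y, F) = T * (1 - exp(-(|y| - T) / F)), which
    satisfies f -> 0 as |y| -> T and f -> T as |y| -> infinity.
    Output is sign(y) * (|y| - T + f(|y|, F)) outside the dead zone,
    so small F approximates hard thresholding (output ~ y) and large F
    approximates soft thresholding (output ~ sign(y) * (|y| - T)).
    """
    y = np.asarray(y, dtype=float)
    a = np.abs(y)
    out = np.zeros_like(y)          # coefficients inside the dead zone -> 0
    keep = a > T
    f = T * (1.0 - np.exp(-(a[keep] - T) / F))
    out[keep] = np.sign(y[keep]) * (a[keep] - T + f)
    return out
```

For example, with T=1, a coefficient y=5 maps to approximately 5 when F is very small (hard-like) and to approximately 4 when F is very large (soft-like), consistent with the behaviour described for F above.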
This implementation describes a denoising method using the same wavelet shrinkage framework as described above in the first implementation, but where the input fringe image does not have an isotropic fringe frequency or an isotropic modulation strength. In other words, fx≠fy and mx≠my. In this case, a single fringe frequency f and a single modulation strength m cannot be used to construct the look-up table 845 for the curvature parameter F.
This means that each curvature parameter F in the look-up table is a function of four values, fx, fy, mx and my, instead of two. Therefore, when generating the look-up table, different combinations of these four values need to be considered and the look-up table will be a four-dimensional table.
This will mainly affect the look-up table generation step in
The calculation of the curvature parameter for a particular noisy input image, as performed at step 530, is also affected due to the interpolation dimension change. That is, a quadrilinear interpolation is needed.
Once this four-dimensional look-up table is generated, the noise strength of an input fringe image can be analysed to generate an appropriate dead zone size T, and the x and y fringe frequencies fx, fy and the x and y modulation strengths mx, my can be used to find an appropriate curvature parameter F by interpolating the values in the look-up table. Then the wavelet shrinkage process 430, seen in
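The quadrilinear interpolation over the four-dimensional look-up table can be sketched as a weighted sum over the 16 corners of the enclosing 4-D cell, generalising the bilinear case. The data layout (a dict keyed by corner tuples) is an assumption for illustration; the method only requires that the 16 bracketing F values be retrievable.

```python
from itertools import product

def quadrilinear(grid, axes, point):
    """Quadrilinear interpolation in a 4-D look-up table (sketch).

    grid: dict mapping (fx, fy, mx, my) corner tuples to F values.
    axes: four (lo, hi) pairs bracketing the query along each axis.
    point: the query point (fx, fy, mx, my).
    """
    # Fractional position along each of the four axes.
    t = [(p - lo) / (hi - lo) for p, (lo, hi) in zip(point, axes)]
    result = 0.0
    # Weighted sum over the 16 corners of the 4-D cell.
    for corner in product((0, 1), repeat=4):
        w = 1.0
        key = []
        for c, ti, (lo, hi) in zip(corner, t, axes):
            w *= ti if c else (1.0 - ti)
            key.append(hi if c else lo)
        result += w * grid[tuple(key)]
    return result
```

When the table values are constant along the fy and my axes, this reduces to the bilinear case: with the Table 3 values replicated along those axes, the query (24, ·, 0.66, ·) again yields F=3.595.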
The arrangements described are applicable to the computer and data processing industries and particularly for the processing of images obtained from x-ray Talbot interferometry.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
Number | Date | Country | Kind |
---|---|---|---|
2014202322 | Apr 2014 | AU | national |