Embodiments of the present disclosure relate generally to the field of fluorescence imaging. In particular, embodiments of the disclosure relate to a method, a data acquisition and image processing system, and a non-transitory machine-readable medium for obtaining a super-resolved image of an object.
In the field of imaging using structured light, a plurality of raw structured images (also referred to as modulated images), captured by illuminating an object with different patterns, are used to recover a super-resolved image. For example, at least 9 raw structured images are used in conventional methods. However, the more raw structured images are used, the slower the data acquisition speed of the imaging system becomes. Given the demand for increasing the data acquisition speed of the imaging system, it is desirable to develop a method capable of improving the spatial resolution of the reconstructed image with fewer raw structured images.
An aspect of the present invention provides a method for obtaining a super-resolved image of an object, the method comprises: acquiring a plurality of structured images of the object by structured light; determining, from the structured images, modulation information of each structured light that comprises spatial frequency, phase shift and modulation factor; initializing a sample image of the object according to the structured images and initializing a structured pattern of each structured light by the corresponding modulation information; and obtaining the super-resolved image by a plurality of epochs, wherein each epoch comprises an iteration for each structured light, in which a target image is determined from the sample image and the corresponding structured pattern, frequency content of the target image is updated according to the corresponding structured image, the sample image is modified according to the updated target image, and the structured pattern is adjusted according to the updated target image and the modified sample image.
Another aspect of the present invention provides a non-transitory machine-readable medium having instructions stored therein, which when executed by a processor, cause the processor to perform operations for acquiring a super-resolved image of an object, the operations comprising: obtaining a plurality of structured images of the object by structured light; determining, from the structured images, modulation information of each structured light that comprises spatial frequency, phase shift and modulation factor; initializing a sample image of the object according to the structured images and initializing a structured pattern of each structured light by the corresponding modulation information; and obtaining the super-resolved image by a plurality of epochs, wherein each epoch comprises an iteration for each structured light, in which a target image is determined from the sample image and the corresponding structured pattern, frequency content of the target image is updated according to the corresponding structured image, the sample image is modified according to the updated target image, and the structured pattern is adjusted according to the updated target image and the modified sample image.
Yet another aspect of the present invention provides a data processing system, comprising: a processor; and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for obtaining a super-resolved image of an object, the operations comprising: acquiring a plurality of structured images of the object by structured light; determining, from the structured images, modulation information of each structured light that comprises spatial frequency, phase shift and modulation factor; initializing a sample image of the object according to the structured images and initializing a structured pattern of each structured light by the corresponding modulation information; and obtaining the super-resolved image by a plurality of epochs, wherein each epoch comprises an iteration for each structured light, in which a target image is determined from the sample image and the corresponding structured pattern, frequency content of the target image is updated according to the corresponding structured image, the sample image is modified according to the updated target image, and the structured pattern is adjusted according to the updated target image and the modified sample image.
Yet another aspect of the present invention provides a system for data acquisition and image processing, comprising: a data acquisition device for acquiring a plurality of structured images of the object by structured light; a storage device for storing the plurality of structured images; and a data processing device comprising: a processor; and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for obtaining a super-resolved image of an object, the operations comprising: determining, from the structured images, modulation information of each structured light that comprises spatial frequency, phase shift and modulation factor; initializing a sample image of the object according to the structured images and initializing a structured pattern of each structured light by the corresponding modulation information; and obtaining the super-resolved image by a plurality of epochs, wherein each epoch comprises an iteration for each structured light, in which a target image is determined from the sample image and the corresponding structured pattern, frequency content of the target image is updated according to the corresponding structured image, the sample image is modified according to the updated target image, and the structured pattern is adjusted according to the updated target image and the modified sample image,
wherein the plurality of epochs are continued until a difference between the updated sample images in the last two epochs is smaller than a predetermined value.
The above and other features of the present invention will become more apparent by describing exemplary embodiments thereof in detail with reference to the accompanying drawings.
Various embodiments and aspects of the disclosure will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment.
The above steps S102-S104 will be described more fully hereinafter with reference to the accompanying drawings.
At step S102, the spatial frequency of the structured light may be the spatial frequency vector, and the spatial frequency may be determined by localizing the corresponding spatial frequency vector. In the following description, the spatial frequency vector of the nth structured light is presented as {right arrow over (p)}θn.
In the embodiment, the spatial frequency vector {right arrow over (p)}θn may be determined by localizing the peak intensity position (pun, pvn) in the frequency content of the corresponding edge tapered structured image, with a localization precision of one frequency pixel (pixel−1) in the u and v directions, respectively.
In some embodiments, the determination of the spatial frequency vectors may further comprise an upsampling step. Specifically, the upsampled frequency content {tilde over (D)}n′({right arrow over (k)}) may be computed around the localized peak, wherein {right arrow over (k)}:=({circumflex over (k)}u, {circumflex over (k)}v) is the frequency space coordinate; {circumflex over (k)}u and {circumflex over (k)}v are unit vectors in the u and v directions, respectively; X=1, 2, . . . , M0 and Y=1, 2, . . . , N0 are the vectors that span the entire image coordinates in the x and y directions, respectively; {right arrow over (U)} and {right arrow over (V)} are vectors that span the required sub-pixel spatial frequency locations centered at (pun, pvn); α is the upsampling factor; and wx and wy determine the pixel length around (pun, pvn) over which the upsampling is performed.
Next, the spatial frequency vector can be determined by finding the maximum position, i.e., (pun′, pvn′), in the upsampled frequency content {tilde over (D)}n′({right arrow over (k)}), with a localization precision that is α times finer than the native frequency sampling interval in the u and v directions, respectively. Therefore, the peak frequency vector localization precision is improved by a factor of α; that is, the precision of the spatial frequency vector is improved by a factor of α.
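By way of illustration, the sub-pixel peak localization described above may be sketched in Python (using numpy) as follows. The window size w, the suppression of the zero-order region and the matrix-multiply local DFT are illustrative assumptions; the expressions of the embodiment define the exact upsampling that is used.

    import numpy as np

    def localize_peak_subpixel(D, alpha=10, w=3):
        """Locate the dominant sideband peak of the structured image D and refine
        it on a grid that is alpha times finer (illustrative sketch)."""
        M0, N0 = D.shape
        F = np.fft.fftshift(np.fft.fft2(D))
        mag = np.abs(F)
        # Suppress the zero-order (DC) neighborhood; the embodiment uses edge
        # tapering and background rejection for the same purpose.
        cu, cv = M0 // 2, N0 // 2
        mag[cu - 2:cu + 3, cv - 2:cv + 3] = 0.0
        iu, iv = np.unravel_index(np.argmax(mag), mag.shape)
        pu, pv = iu - cu, iv - cv                          # coarse peak, in frequency pixels

        # Local matrix-multiply DFT around (pu, pv) with spacing 1/alpha pixel.
        x = np.arange(M0)
        y = np.arange(N0)
        U = pu + np.arange(-w, w, 1.0 / alpha)
        V = pv + np.arange(-w, w, 1.0 / alpha)
        EU = np.exp(-2j * np.pi * np.outer(U, x) / M0)     # shape (len(U), M0)
        EV = np.exp(-2j * np.pi * np.outer(y, V) / N0)     # shape (N0, len(V))
        local = EU @ D.astype(complex) @ EV                # upsampled local spectrum
        ju, jv = np.unravel_index(np.argmax(np.abs(local)), local.shape)
        # Divide by (M0, N0) to convert frequency-pixel indices to cycles per pixel.
        return U[ju], V[jv]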
The phase shift of each structured light may be obtained by: constructing a structured pattern for each structured light by an initialized phase shift and the corresponding spatial frequency; and adjusting each initialized phase shift by minimizing difference between the structured pattern for each structured light and the corresponding structured images.
As an example, a structured pattern Tn({right arrow over (r)}) of the nth structured light may be constructed according to the localized subpixel-precise frequency vector {right arrow over (p)}θn and an initialized phase shift φn,ini. A correlation Cn between the structured pattern and the nth raw structured image Dn({right arrow over (r)}) may then be evaluated, where the summation defining Cn is carried over the entire range of the spatial coordinates {right arrow over (r)}. The difference between the structured pattern Tn({right arrow over (r)}) and the nth structured image Dn({right arrow over (r)}) may be minimized by iteratively optimizing the parameter φn,ini until the correlation Cn reaches a minimum. When the correlation Cn reaches the minimum, the initial value φn,ini is optimized to φn,opt′, which may be used as the phase shift of the nth structured light. It should be noted that there exists a bias of π between φn,opt′ and φn. To ensure that φn ranges between 0 and 2π, φn and φn,opt′ are constrained by the relation φn=(φn,opt′−π) mod 2π, where mod represents the modulo operation.
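A minimal Python sketch of this phase retrieval is given below. The sinusoidal form of the constructed pattern and the use of scipy's bounded scalar minimizer are assumptions made for illustration; the embodiment only requires that the correlation Cn be minimized over the initial phase shift.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def estimate_phase(Dn, p_u, p_v):
        """Estimate the phase shift of the nth structured light from the raw image Dn
        and its localized frequency vector (p_u, p_v) in cycles per pixel (sketch)."""
        M0, N0 = Dn.shape
        u, v = np.meshgrid(np.arange(M0), np.arange(N0), indexing="ij")
        Dc = Dn - Dn.mean()                                # remove the mean intensity

        def correlation(phi):
            # Assumed sinusoidal structured pattern T_n with unit modulation depth.
            Tn = 1.0 + np.cos(2.0 * np.pi * (p_u * u + p_v * v) + phi)
            return np.sum(Tn * Dc)                         # C_n, summed over all pixels

        res = minimize_scalar(correlation, bounds=(0.0, 2.0 * np.pi), method="bounded")
        # Remove the pi bias between the optimized value and the true phase shift.
        return (res.x - np.pi) % (2.0 * np.pi)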
The modulation factor of each structured light may be obtained from the raw structured image, Dn({right arrow over (r)}), by analyzing the power spectrum for the duplex sideband frequencies in the background rejected frequency contents.
The effective regions for the duplex sideband frequencies may first be identified according to the optical transfer function (OTF) {tilde over (H)}({right arrow over (k)}) of the imaging system.
To this end, the OTF may first be binarized to approximately determine the regions inside the OTF support from {tilde over (H)}({right arrow over (k)}). The binarization yields a binary tag image L0 (Eq. (4)), in which the pixels inside the OTF support are labelled by “1” and the remaining pixels are labelled by “0”.
Next, the OTF may be shifted by the spatial frequency vectors of the structured light having different frequencies. For example, the first, fourth and fifth structured light have different frequencies, and their frequency vectors, i.e., {right arrow over (p)}θ1, {right arrow over (p)}θ4 and {right arrow over (p)}θ5, may be used to shift the OTF. The regions inside the support of the shifted OTFs may then be determined by binarization: the pixels whose normalized grayscale intensities are greater than or equal to the gray threshold ε0 are classified as inside the shifted OTF support and labelled by “1”; otherwise, the pixels are classified as outside the shifted OTF support and labelled by “0”.
The dilated intersection regions between the resulting binary supports may then be computed as a further binary tag image, wherein “∩” is the intersection operator, “⊕” is the dilation operator, and SE1 is a disk structure element object (e.g., a 20-pixel disk structure element object). The morphological dilation operation is applied to enlarge the intersection regions, the pixels of which lie inside both of the intersected shifted OTF supports.
The aggregated regions of the dilated intersection regions may then be obtained as another binary tag image using the union operator “∪”. The pixels of this image are labelled by “1” when they are inside either of the shifted OTF supports being aggregated.
The set difference of the foregoing binary tag images may then be computed, wherein “−” is the set difference operator. The pixels of the resulting image are labelled by “1” when they are labelled by “1” in the minuend image while not labelled by “1” in the subtrahend image.
Since only the spatial frequencies within the OTF support can pass through the imaging system, the regions for the duplex sideband frequencies can be approximately determined by multiplying the set-difference result with L0 and applying a morphological erosion, wherein the result is a further binary tag image, L0 is given by Eq. (4), “⊖” is the erosion operator, and SE2 is a disk structure element object (e.g., a 10-pixel disk structure element object). The morphological erosion operation is applied to diminish the interruption of the zero-order frequency contents. The pixels of the resulting binary tag image are labelled by “1” when they are classified as belonging to the regions for the duplex sideband frequencies. For the nth structured light, this binary tag image may represent the transfer property of the imaging system.
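The construction of these binary region masks may be sketched with scipy.ndimage as follows. The gray-threshold value and the exact composition of the intersection, union, set-difference and erosion steps are illustrative assumptions; only the building blocks (binarization with ε0, OTF shifting, and morphological operations with 20-pixel and 10-pixel disk structure elements) are taken from the embodiment.

    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion, shift as nd_shift

    def disk(radius):
        """Disk-shaped structure element of the given pixel radius."""
        r = np.arange(-radius, radius + 1)
        X, Y = np.meshgrid(r, r, indexing="ij")
        return (X**2 + Y**2) <= radius**2

    def binarize_otf(H, eps0=0.01):
        """Label by 1 the pixels whose normalized magnitude is >= the threshold eps0."""
        Hn = np.abs(H) / np.abs(H).max()
        return Hn >= eps0

    def sideband_regions(H, p_bins, eps0=0.01, se1=20, se2=10):
        """Assumed composition of the mask operations for one frequency vector
        p_bins (given in frequency pixels, centered OTF layout)."""
        L0 = binarize_otf(H, eps0)                                   # OTF support, Eq. (4)
        Lp = binarize_otf(nd_shift(np.abs(H), p_bins,
                                   order=1, mode="constant", cval=0.0), eps0)
        Lm = binarize_otf(nd_shift(np.abs(H), [-p_bins[0], -p_bins[1]],
                                   order=1, mode="constant", cval=0.0), eps0)
        inter = binary_dilation(Lp & Lm, structure=disk(se1))        # dilated intersection
        aggregated = Lp | Lm                                         # union of shifted supports
        diff = aggregated & ~inter                                   # set difference
        return binary_erosion(diff & L0, structure=disk(se2))        # restrict to OTF support, erode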
To determine the duplex sideband frequency contents, frequency contents for the widefield image, i.e., {tilde over (D)}w({right arrow over (k)}), may be firstly determined from the frequency contents of the raw structured images with identical illumination frequency vector. In the embodiment, the first to third structured images are the structured images with identical illumination frequency vector.
The frequency contents for the widefield image may be obtained by averaging the Fourier transforms of the structured images having the identical illumination frequency vector, where F{⋅} represents the Fourier transform and {tilde over (D)}p({right arrow over (k)}) denotes the frequency contents of the pth such raw structured image.
The unwanted out-of-focus frequency components may be removed by subtracting {tilde over (D)}w({right arrow over (k)}) from {tilde over (D)}n({right arrow over (k)}). The result of the subtraction is the duplex sideband frequency contents, which contain the emission frequency contents shifted by the illumination frequency vector, i.e., terms of the form {tilde over (f)}({right arrow over (k)}−{right arrow over (p)}θn) and {tilde over (f)}({right arrow over (k)}+{right arrow over (p)}θn) weighted by the intensity distribution I0 of the illumination light and the modulation of the structured pattern.
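A short Python sketch of the widefield averaging and sideband subtraction described above follows; the index set of the equal-frequency images (the first to third structured images in the embodiment) is passed explicitly, and the variable names are illustrative.

    import numpy as np

    def widefield_and_sidebands(D_stack, equal_freq_idx=(0, 1, 2)):
        """D_stack: raw structured images with shape (N, M0, N0). Returns the
        widefield frequency contents D_w(k) and the duplex sideband frequency
        contents D_n(k) - D_w(k) for every raw image."""
        D_stack = np.asarray(D_stack, dtype=float)
        Dw = np.mean(D_stack[list(equal_freq_idx)], axis=0)      # widefield image
        Dw_k = np.fft.fft2(Dw)                                   # widefield frequency contents
        sidebands = [np.fft.fft2(Dn) - Dw_k for Dn in D_stack]   # background-rejected contents
        return Dw_k, sidebands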
Then, effective duplex sideband frequency contents (i.e., the first frequency content) may be extracted, according to the transfer property of the imaging system for each structured light, from the duplex sideband frequency contents (i.e., the second frequency content) of each structured image. Specifically, the effective duplex sideband frequency contents can be extracted from the duplex sideband frequency contents by multiplying the latter with the binary tag image that represents the transfer property of the imaging system for the corresponding structured light.
Next, effective target duplex shifted frequency contents (i.e., the third frequency content) of the structured images may be extracted based on the power spectra of the widefield image.
Specifically, since both signals and noises contribute to power spectrum, the power spectra of the widefield image, i.e., ξw, can be expressed as:
ξw=ξw,sig+M0N0·ξw,N, (14)
wherein M0 and N0 are the image pixels in the length and width dimensions, respectively, and ξw,N represents the average noise power. The ξw,N may be initialized by averaging the power spectra over the frequencies that are greater than a predetermined frequency (e.g., 1.25 times the cut-off frequency, i.e., |{right arrow over (k)}|>1.25kcut), where conj(⋅) represents the complex conjugate, Mask({right arrow over (k)}) is a 2D binary mask defined with respect to the predetermined frequency, and A0 represents the image area of the frequencies that are greater than the predetermined frequency, i.e., the total number of pixels of Mask({right arrow over (k)}) with zero intensity value. The signal power may be assumed to be zero in the regions outside of the OTF support.
The widefield image signal power spectra, i.e., ξw,sig, may then be obtained from Eq. (14) by removing the noise contribution from ξw. An approximation of the signal power spectra, denoted ξw,sig′, may be constructed from the system OTF {tilde over (H)}({right arrow over (k)}) with two scale factors λw and γw, with the summation carried over the entire range of frequency coordinates {right arrow over (k)}. The difference between ξw,sig and ξw,sig′ may be represented by the mean-square error (MSE) thereof, and a gradient descent algorithm may be applied to minimize the MSE by adjusting the scale factors λw and γw.
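The noise-power initialization and the fit of the approximated signal power spectrum may be sketched as follows. The parametric model ξ′ = λw·|H̃|^γw, the learning rate and the normalization are assumptions made for illustration, since the embodiment's exact expressions are not reproduced here; only the 1.25·kcut noise region and the gradient-descent minimization of the MSE over λw and γw follow the description above.

    import numpy as np

    def estimate_noise_power(Dw_k, k_radius, k_cut):
        """Average power of the frequencies with |k| > 1.25*k_cut (assumed noise-only)."""
        region = k_radius > 1.25 * k_cut
        power = np.abs(Dw_k) ** 2
        return power[region].mean()

    def fit_signal_spectrum(xi_sig, H, steps=5000, lr=1e-2):
        """Fit an assumed model xi' = lam * |H|**gam to the extracted signal power
        spectrum by plain gradient descent on the mean-square error."""
        Habs = np.abs(H) / np.abs(H).max()
        m = Habs > 1e-6                              # restrict the fit to the OTF support
        h = Habs[m]
        scale = xi_sig[m].max()
        x = xi_sig[m] / scale                        # normalize for a stable step size
        lam, gam = 1.0, 1.0                          # rough initial guesses (assumed)
        for _ in range(steps):
            model = lam * h**gam
            err = model - x
            lam -= lr * 2.0 * np.mean(err * h**gam)                      # d(MSE)/d(lam)
            gam -= lr * 2.0 * np.mean(err * lam * h**gam * np.log(h))    # d(MSE)/d(gam)
        return lam * scale, gam                      # lam rescaled back to the data units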
Next, the low-grade estimated emission frequency contents, represented as {tilde over (S)}o({right arrow over (k)}), may be obtained by suppressing the noise outside of the OTF support in {tilde over (D)}w({right arrow over (k)}), using the fitted signal power spectra and the estimated noise power.
Then the low-grade estimated emission frequency contents may be shifted with the frequency vectors, so as to determine the duplex shifted frequency contents. The shift may be implemented by transforming {tilde over (S)}o({right arrow over (k)}) to the spatial domain, multiplying by complex exponentials of the form exp(±j2π{right arrow over (p)}θn·{right arrow over (r)}), and transforming back to the frequency domain, where F{⋅} and F−1{⋅} represent the Fourier transform and inverse Fourier transform, respectively, j is the imaginary unit, and {right arrow over (p)}θn is the spatial frequency vector of the nth structured light.
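The frequency shifting may be sketched as follows, using the Fourier shift theorem alluded to above (multiplication by exp(±j2π p·r) in the spatial domain); variable names are illustrative.

    import numpy as np

    def shift_frequency_contents(S_o_k, p_u, p_v):
        """Shift the frequency contents S_o(k) by the frequency vector (p_u, p_v),
        given in cycles per pixel, using the Fourier shift theorem."""
        M0, N0 = S_o_k.shape
        u, v = np.meshgrid(np.arange(M0), np.arange(N0), indexing="ij")
        s_o = np.fft.ifft2(S_o_k)                        # back to the spatial domain
        phase = 2j * np.pi * (p_u * u + p_v * v)
        S_km = np.fft.fft2(s_o * np.exp(+phase))         # equals S_o(k - p)
        S_kp = np.fft.fft2(s_o * np.exp(-phase))         # equals S_o(k + p)
        return S_km, S_kp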
After determining the duplex shifted frequency contents, the target duplex shifted frequency contents (i.e., the fourth frequency content) may be determined by filtering the shifted low-grade estimated emission frequency contents according to the transfer property of the imaging system for the corresponding structured light.
Next, the effective target duplex shifted frequency contents may be extracted, according to the transfer property of the imaging system, from the target duplex shifted frequency contents, for example by multiplying the target duplex shifted frequency contents with the binary tag image representing the transfer property of the imaging system for the corresponding structured light.
Finally, the modulation factor for each structured light may be determined based on the above-mentioned first and third frequency contents of each structured image. For example, the modulation factor may be represented as mn and obtained from Eq. (24) as a ratio involving the L2-norms of the first and third frequency contents, wherein the summation is carried over the entire range of frequency coordinates {right arrow over (k)}, and ∥⋅∥2 is the L2-norm operator. It should be noted that mn in Eq. (24) is a positive real number, even though the frequency contents entering Eq. (24) are complex numbers.
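A minimal sketch of this step is given below, assuming that the modulation factor is the ratio of the L2-norms of the first and third frequency contents restricted to the region of the duplex sideband frequencies; the exact expression is given by Eq. (24) of the embodiment.

    import numpy as np

    def modulation_factor(first_content, third_content, Ln):
        """Assumed form of m_n: ratio of the L2-norms of the effective duplex
        sideband contents and the effective target duplex shifted contents
        within the binary region Ln."""
        num = np.linalg.norm(first_content[Ln])
        den = np.linalg.norm(third_content[Ln])
        return num / den        # positive real number, although the contents are complex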
After determining the modulation information (i.e., the spatial frequency, phase shift and modulation factor) of each structured light, the structured pattern of each structured light may be initialized from the corresponding spatial frequency vector {right arrow over (p)}θn, phase shift φn and modulation factor mn.
In addition, a sample image used to obtain the super-resolved image may be initialized based on the low-grade estimated emission frequency contents. Specifically, the initialized sample image may be determined by applying the inverse Fourier transform to the low-grade estimated emission frequency contents, i.e., S(0)({right arrow over (r)})=F−1{{tilde over (S)}o({right arrow over (k)})}, wherein S(0)({right arrow over (r)}) represents the initialized sample image.
To rebuild the fine structure based on the initialized sample image, an iterative optimizing process may be performed. During the iterative optimizing process, the sample image is updated for each structured light until the difference between the updated sample images in the last two iterations is smaller than a predetermined value. The iterative optimizing process may comprise a plurality of epochs, and each of the epochs is a loop. In other words, the iterative optimizing process may comprise a plurality of loops, in each of which all the structured patterns are adjusted in an iterative manner (e.g., in each epoch, an iteration is performed for each structured pattern).
The detailed iterative optimizing process will be described below with reference to the accompanying drawings.
In each iteration, a target image may first be determined from the sample image and the corresponding structured pattern as:
Stg(τ−1)({right arrow over (r)})=S(τ−1)({right arrow over (r)})·Pn(τ−1)({right arrow over (r)})  (26)
where S(τ−1)({right arrow over (r)}) is the sample image, Pn(τ−1)({right arrow over (r)}) is the structured pattern of the nth structured light, and τ describes the iteration order number.
Next, the frequency content of the determined target image may be updated according to the difference between the frequency content of the structured image and the frequency content of the target image that is measurable by the sampling system used to capture the structured images. Specifically, the frequency content of the target image may be updated by calculating the frequency differential, which is expressed as:
{tilde over (S)}tg(τ)({right arrow over (k)}):={tilde over (S)}tg(τ−1)({right arrow over (k)})+{tilde over (H)}(τ−1)({right arrow over (k)})·{F{Dn({right arrow over (r)})}−{tilde over (H)}(τ−1)({right arrow over (k)})·{tilde over (S)}tg(τ−1)({right arrow over (k)})}  (27)
where {tilde over (S)}tg(τ)({right arrow over (k)}) is the updated frequency contents, := is the updating operator, {tilde over (S)}tg(τ−1)({right arrow over (k)}) is the Fourier transform of Stg(τ−1)({right arrow over (r)}), i.e., {tilde over (S)}tg(τ−1)({right arrow over (k)})=F{Stg(τ−1)({right arrow over (r)})}, F{⋅} represents the Fourier transform, Dn({right arrow over (r)}) is the nth raw structured image, {tilde over (H)}(τ−1)({right arrow over (k)}) is the system OTF, and τ describes the iteration order number. When the superscript τ−1 in the parentheses in Eq. (27) is equal to 0, the initial value of the associated term is used. After obtaining the updated frequency contents, the updated target image may be determined by: Stg(τ)({right arrow over (r)})=F−1{{tilde over (S)}tg(τ)({right arrow over (k)})}.
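Eq. (27) translates directly into numpy as shown below; here H_k stands for the system OTF {tilde over (H)}(τ−1)({right arrow over (k)}), laid out in the same (unshifted) frequency ordering as numpy's fft2, and the function name is illustrative.

    import numpy as np

    def update_target_frequency(S_tg, Dn, H_k):
        """Eq. (27): update the frequency contents of the target image using the
        part of the residual that is measurable through the system OTF."""
        S_tg_k = np.fft.fft2(S_tg)                        # F{S_tg^(tau-1)}
        residual = np.fft.fft2(Dn) - H_k * S_tg_k         # F{D_n} - H * S_tg
        S_tg_k_new = S_tg_k + H_k * residual              # Eq. (27)
        S_tg_new = np.fft.ifft2(S_tg_k_new).real          # updated target image
        return S_tg_new, S_tg_k_new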
Then, the sample image is modified according to the difference between the target image and the target image obtained by the structured pattern adjusted in the last iteration. At this step, the fine structure in the sample may be recovered as the number of iterations increases. The sample image may be updated by calculating the image differential. The updated sample image in the τth iteration may be represented as S′(τ)({right arrow over (r)}), which is determined by Eq. (28), wherein max{Pn(τ−1)({right arrow over (r)})} represents the maximal intensity value in the structured pattern Pn(τ−1)({right arrow over (r)}).
In some embodiments, the frequency components beyond the extension OTF support for the frequency contents of the updated sample image may be blocked. The blocking process may be expressed as:
S(τ)({right arrow over (r)}):=F−1{{tilde over (S)}′(τ)({right arrow over (k)})·LExt}  (29)
wherein {tilde over (S)}′(τ)({right arrow over (k)})=F{S′(τ)({right arrow over (r)})}, F−1{⋅} represents the inverse Fourier transform, and LExt is a custom-built binary mask constructed from the binary tag images found by Eq. (7), wherein “⊕” is the dilation operator and SE3 is a disk structure element object.
During the iterative optimization process, the structured pattern may also be adjusted. For example, the structured pattern may be adjusted by the updated target image and the modified sample image, wherein Pn(τ)({right arrow over (r)}) represents the structured pattern of the nth structured light adjusted in the τth iteration.
In some embodiments, another OTF updating step may be included to reduce aberration effects. In this step, the OTF is updated based on the updated frequency contents of the target image, wherein {tilde over (H)}(τ)({right arrow over (k)}) represents the updated OTF in the τth iteration, |⋅| is the modulus operator, max{|{tilde over (S)}tg(τ)({right arrow over (k)})|} represents the maximal intensity value in |{tilde over (S)}tg(τ)({right arrow over (k)})|, conj{{tilde over (S)}tg(τ)({right arrow over (k)})} represents the complex conjugate of {tilde over (S)}tg(τ)({right arrow over (k)}), and δr is a positive regularization parameter that keeps the denominator of the updating equation from vanishing.
The above iterative optimizing process is sequentially implemented on the entire set of structured patterns, i.e., on Pn(τ)({right arrow over (r)}) for each structured light, within every epoch.
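The control flow of the epochs and iterations described above may be sketched as follows; the per-iteration update (Eqs. (26)-(32)) is left as a user-supplied callable, and the relative-change convergence test is an illustrative stand-in for the predetermined-difference criterion.

    import numpy as np

    def reconstruct(S_init, D_stack, patterns, H_k, update_iteration,
                    tol=1e-4, max_epochs=100):
        """Run the epoch/iteration structure of the iterative optimizing process.
        update_iteration(S, P, Dn, H_k) stands in for Eqs. (26)-(32) and returns
        the modified sample image and the adjusted structured pattern."""
        S = S_init.copy()
        patterns = [P.copy() for P in patterns]
        for _ in range(max_epochs):
            S_prev = S.copy()
            for n, Dn in enumerate(D_stack):              # one iteration per structured light
                S, patterns[n] = update_iteration(S, patterns[n], Dn, H_k)
            change = np.linalg.norm(S - S_prev) / max(np.linalg.norm(S_prev), 1e-12)
            if change < tol:                              # last two epochs agree closely
                break
        return S, patterns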
By the above iterative optimizing process, fine modulated structures may be recovered in the initial sample image, and thus the finally obtained sample image may have a higher resolution, beyond the resolution limit of the imaging system.
Optionally, a periodic artifact reducing process may be performed by using frequency filtering to suppress the residue peaks after the iterative optimizing process is terminated. Specifically, the periodic artifacts reducing process may be expressed as:
Ssr({right arrow over (r)})=F−1{{tilde over (S)}sr({right arrow over (k)})}=F−1{{tilde over (S)}rc({right arrow over (k)})·{tilde over (Ω)}({right arrow over (k)})}  (33)
where F−1{⋅} represents the inverse Fourier transform, {tilde over (S)}rc({right arrow over (k)}) is given by {tilde over (S)}rc({right arrow over (k)})=F{Src({right arrow over (r)})} with Src({right arrow over (r)}) being the sample image reconstructed by the iterative optimizing process, {tilde over (Ω)}({right arrow over (k)}) is a custom-designed filter, |⋅| is the modulus operator, and α0 and β0 are manually set parameters of the filter.
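Because the filter {tilde over (Ω)}({right arrow over (k)}) is not reproduced above, the sketch below applies Eq. (33) with an assumed notch-type filter that attenuates narrow neighborhoods around the residual peaks at ±{right arrow over (p)}θn; the roles of α0 (notch depth) and β0 (notch width) are likewise assumptions made for illustration.

    import numpy as np

    def suppress_periodic_artifacts(S_rc, peak_vectors, alpha0=0.9, beta0=3.0):
        """Eq. (33) with an assumed Gaussian notch filter Omega(k). peak_vectors
        holds the residual peak positions in frequency pixels."""
        M0, N0 = S_rc.shape
        KU, KV = np.meshgrid(np.fft.fftfreq(M0) * M0, np.fft.fftfreq(N0) * N0,
                             indexing="ij")
        omega = np.ones((M0, N0))
        for pu, pv in peak_vectors:
            for s in (+1, -1):                            # notch both +p and -p peaks
                d2 = (KU - s * pu) ** 2 + (KV - s * pv) ** 2
                omega -= alpha0 * np.exp(-d2 / (2.0 * beta0 ** 2))
        omega = np.clip(omega, 0.0, 1.0)
        return np.fft.ifft2(np.fft.fft2(S_rc) * omega).real   # S_sr = F^-1{S_rc(k)*Omega(k)}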
The present application also provides a system for data acquisition and image processing. The system may comprise a data acquisition device for obtaining a plurality of structured images of the object by structured light, a storage device for storing the plurality of structured images, and a data processing device. The storage device may be, but is not limited to, a hard disk. The data processing device may comprise a processor and a memory coupled to the processor to store instructions, wherein the instructions, when executed by the processor, cause the processor to perform operations for obtaining a super-resolved image of an object. The operations may be the operations for performing the method for reconstructing a super-resolved image of an object described above.
The present application further provides a non-transitory machine-readable medium having instructions stored therein, which when executed by a processor, cause the processor to perform operations for obtaining a super-resolved image of an object, the operations comprising: obtaining a plurality of structured images of the object by structured light; determining, from the structured images, modulation information of each structured light that comprises spatial frequency, phase shift and modulation factor; initializing a sample image of the object according to the structured images and initializing a structured pattern of each structured light by the corresponding modulation information; and obtaining the super-resolved image by a plurality of epochs, wherein each epoch comprises an iteration for each structured light, in which a target image is determined from the sample image and the corresponding structured pattern, frequency content of the target image is updated according to the corresponding structured image, the sample image is modified according to the updated target image, and the structured pattern is adjusted according to the updated target image and the modified sample image.
In one embodiment, the spatial frequency is obtained by: exploring image coordinates of a peak intensity position for the frequency content of each edge tapered structured image by applying local maxima detection; and determining the spatial frequency vector of each structured light as the spatial frequency, based on the image coordinates.
In one embodiment, the obtaining of the spatial frequency further comprises: upsampling, around location of the determined spatial frequency vector, the frequency content of each edge tapered structured image; and adjusting the spatial frequency of each structured light to be maximum position in the upsampled frequency content of the corresponding edge tapered structured image.
In one embodiment, the phase shift is obtained by: constructing a structured pattern for each structured light by an initialized phase shift and the corresponding spatial frequency; and adjusting each initialized phase shift by minimizing difference between the structured pattern for each structured light and the corresponding structured images.
In one embodiment, the modulation factor is obtained by: extracting, according to transfer property of the imaging system for the each structured light, first frequency content from second frequency content of each structured image, wherein the second frequency content is determined by removing first frequency component from each structured image, and the first frequency component is out-of-focus frequency component; extracting, according to the transfer property of the imaging system for each structured light, third frequency content of each structured image from fourth frequency content of the corresponding structured image, wherein the fourth frequency content is determined by shifting low-grade estimated emission frequency content of each structured image with vector corresponding to the spatial frequency of the corresponding structured light and then filtering the shifted low-grade estimated emission frequency content according to the transfer property of the imaging system for the corresponding structured light; and determining modulation factor for each structured light based on the first and third frequency contents of each structured image.
In one embodiment, the transfer property of the imaging system for each structured light is determined by the optical transfer function of the imaging system and the shifted optical transfer functions of the imaging system, wherein the shifted optical transfer functions are obtained by shifting the optical transfer function of the sampling system by spatial frequency vectors of the structured light having different modulated frequencies.
In one embodiment, the out-of-focus frequency component is determined based on a widefield image that is obtained by averaging the structured images corresponding to the structured light having an identical spatial frequency vector.
In one embodiment, the low-grade estimated emission frequency content is determined by: obtaining a widefield image by averaging the structured images corresponding to the structured light having an identical spatial frequency vector; extracting a signal power spectrum of the widefield image by removing a noise power spectrum from the power spectrum of the widefield image, wherein the noise power spectrum is determined by averaging the power spectrum over frequencies that are greater than a frequency threshold; initializing an estimated signal power spectrum by the optical transfer function of the sampling system, a first scale factor and a second scale factor; adjusting the first and second scale factors by minimizing the difference between the estimated signal power spectrum and the extracted signal power spectrum; and suppressing noise outside of the optical transfer function support for the frequency content of the widefield image.
In one embodiment, the sample image is initialized according to low-grade estimated emission frequency content of the corresponding structured image.
In one embodiment, the iteration further comprises: blocking frequency components beyond extension optical transfer function support for frequency contents of the updated sample image before the adjustment of the structured pattern.
In one embodiment, the iteration further comprises: updating the optical transfer function of the sampling system for each structured light according to the difference between the optical transfer functions within two last iterations.
In one embodiment, the operations further comprise: reducing periodic artifacts in the super-resolved image after the epochs are terminated.
In one embodiment, the obtaining of the plurality of structured images comprises: obtaining a plurality of structured images of the object by structured light; and storing the plurality of structured images in a permanent storage medium. For example, the plurality of structured images may be stored in a hard disk and processed later in an offline processing situation.
In one embodiment, the operations further comprise: before the determining of the modulation information of each structured light, reading the plurality of structured images from the permanent storage medium. For example, the plurality of structured images to be processed may be obtained by reading them from a hard disk in which they are stored. In some embodiments, the plurality of structured images may be read directly from RAM.
The system 600 may be a mobile terminal, a personal computer (PC), a tablet computer, a server, etc.
In addition, in the RAM 603, various programs and data required by operation of the apparatus may also be stored. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through the bus 604. Where RAM 603 exists, the ROM 602 is an optional module. The RAM 603 stores executable instructions or writes executable instructions to the ROM 602 during operation, and the executable instructions cause the central processing unit 601 to perform the steps included in the method of any of the embodiments of the present application. The input/output (I/O) interface 605 is also connected to the bus 604. The communication portion 612 may be integrated, and may also be provided with a plurality of sub-modules (e.g., a plurality of D3 network cards) and connected to the bus 604, respectively.
The following components are connected to the I/O interface 605: an input unit 606 including a keyboard, a mouse, and the like; an output unit 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage unit 608 including a hard disk, and the like; and a communication unit 609 including a network interface card such as a LAN card, a modem, and the like. The communication unit 609 performs communication processing via a network such as the Internet and/or a USB interface and/or a PCIE interface. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is installed on the driver 610 as needed, so that the computer programs read therefrom are installed into the storage unit 608 as needed.
It should be noted that the architecture shown in the accompanying drawings is merely an example and is not intended to limit the structure of the system 600.
In particular, according to the embodiments of the present application, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present application include a computer program product, which includes a computer program tangibly included in a machine-readable medium. The computer program includes program code for performing the steps shown in the flowchart. The program code may include corresponding instructions to perform the steps in the method provided by any of the embodiments of the present application, including: obtaining a plurality of structured images of the object by structured light; determining, from the structured images, modulation information of each structured light that comprises spatial frequency, phase shift and modulation factor; initializing a sample image of the object according to the structured images and initializing a structured pattern of each structured light by the corresponding modulation information; and obtaining the super-resolved image by a plurality of epochs, wherein each epoch comprises an iteration for each structured light, in which a target image is determined from the sample image and the corresponding structured pattern, frequency content of the target image is updated according to the corresponding structured image, the sample image is modified according to the updated target image, and the structured pattern is adjusted according to the updated target image and the modified sample image.
In such embodiments, the computer program may be downloaded and installed from the network through the communication unit 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601 and/or the GPU 613 and/or an XPU, the above-described instructions of the present application are executed.
As will be appreciated by one skilled in the art, the disclosure may be embodied as a system, a method, or an apparatus with domain-specific hardware and a computer program product. Accordingly, the disclosure may take the form of an entirely hardware embodiment with hardware aspects that may all generally be referred to herein as a “unit”, “circuit”, “module” or “system.” Much of the inventive functionality and many of the inventive principles, when implemented, are best supported with or in integrated circuits (ICs), such as a digital signal processor, a graphics processor and software therefor, or application-specific ICs. It is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such ICs with minimal experimentation. Therefore, in the interest of brevity and minimization of any risk of obscuring the principles and concepts according to the disclosure, further discussion of such software and ICs, if any, will be limited to the essentials with respect to the principles and concepts used by the preferred embodiments. In addition, the present invention may take the form of an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects. For example, the system may comprise a memory that stores executable components and a processor, electrically coupled to the memory, to execute the executable components to perform the operations of the system, as discussed above.
Although the preferred examples of the disclosure have been described, those skilled in the art can make variations or modifications to these examples upon learning the basic inventive concept. The appended claims are intended to be construed as comprising the preferred examples and all the variations or modifications falling within the scope of the disclosure.
Obviously, those skilled in the art can make variations or modifications to the disclosure without departing from the spirit and scope of the disclosure. As such, if these variations or modifications belong to the scope of the claims and their equivalents, they may also fall within the scope of the disclosure.