This application presents a new method for performing image interpolation/reconstruction from freehand ultrasound data.
Medical imaging is utilized to provide images of internal patient structure for diagnostic purposes as well as for interventional procedures. Often, it is desirable to utilize multiple two-dimensional (i.e. 2-D) images to generate (e.g., reconstruct) a three-dimensional (i.e., 3-D) image of an internal structure of interest.
2-D image to 3-D image reconstruction has been used for a number of image acquisition modalities (e.g., ultrasound) and image-based/guided procedures. These images may be acquired as a number of parallel 2-D image slices/planes or rotational slices/planes, which are then combined to reconstruct a 3-D image volume. Generally, the movement of the imaging device has to be constrained such that only a single degree of freedom is allowed (e.g., rotational movement). This single degree of freedom may be rotation of the imaging device or a linear motion of the imaging device.
For rotary scanning, if the angle between two scans is small enough, the object will be fully scanned without any holes. This means that at each voxel there exists a measured position. For freehand scanning, there are normally holes in the scanned data; that is, there are some voxels in the space where little or no scanned data is available. The difficulty of interpolation for freehand scanning, compared to rotary scanning, is that data must be filled into the above-described holes.
When an object is freehand scanned using an ultrasound imaging system, a group of values and positions is measured, which forms the input data of this ultrasound system. Interpolation methods are then utilized to reconstruct the original object from this input data. Normally this input data is noisy, and the positions of the measurements are sparse or irregularly distributed in space. There are holes (grid points in space without data) in the freehand ultrasound data.
One object of the present disclosure is to provide an interpolation method for reconstructing an imaged object with the image edge as sharp as possible and the image itself as smooth as possible.
The EM (Expectation Maximization) image reconstruction is suitable for freehand and rotary ultrasound systems. The EM method recovers the image based on a statistical model. It not only interpolates the measured ultrasound data but also reduces noise during the reconstruction process. The EM method reconstructs the image iteratively. In each loop of the iteration, every voxel of the image is updated from two images. The first is an image created directly from the input ultrasound data. The second is the filtered output image from the prior loop of the iteration. The filter in the EM method utilizes the neighboring points in a 3*3*3 cube in the 3D situation. This kind of filter is referred to as the cubic filter. In this invention the cubic filter in the EM method is replaced with an anisotropic diffusion filter. The diffusion filter can further reduce speckle noise while simultaneously preserving image edge information. Notably, the diffusion filter used here is not just a post-filter. It is used inside the loop of the iteration and is hence a component of the modified EM method. The quality of the reconstructed image is controlled through the combination of two goals: (1) the smoothness of the image; (2) the sharpness of the image edges. These two goals are further controlled through a simulation which optimizes the error functions. Here the error is the difference between the reconstructed image and the original object. Two error functions, (A) the absolute error and (B) the square error, are combined together. There are two parameters in this invention which are optimized with respect to the combination of the square error and the absolute error in the simulation. One parameter is used for the EM method itself and the other parameter is used in the anisotropic diffusion filter. The algorithm is sped up using the compute unified device architecture (CUDA) and a graphics processing unit (GPU). The algorithm in this invention is tested with simulated and measured freehand 2D frame images.
Since the measured ultrasound data is noisy, the interpolation method is also required to recover the image and reduce the noise simultaneously. There are three major kinds of interpolation methods available: voxel-based methods, pixel-based methods and function-based methods.
Voxel-Based Methods (VBM) traverse each voxel in the target voxel grid and gather information from the input 2D ultrasound data. In different algorithms, one or several pixels may contribute to the value of each voxel. Pixel-Based Methods (PBM) traverse each pixel in the input image and assign the pixel value to one or several voxels. A PBM may consist of two steps: a distribution step (DS) and a hole-filling step (HFS). In the DS, the input pixels are traversed and the pixel value is applied to one or several voxels, often stored together with a weight value. In the HFS, the voxels are traversed and empty voxels are filled. Function-Based Methods (FBM) are methods in which goal functions are utilized. The coefficients and parameters of the algorithms are optimized with the goal function. There are two major FBMs. In the first, spline functions are utilized for the interpolation. The coefficients of the interpolation are calculated by balancing the errors of the reconstructed image with respect to the measured ultrasound data against the smoothness of the reconstructed image. A group of linear equations is solved in this algorithm. In order to reduce the size of the linear equations, the 3D space is divided into sub-regions. The linear equations are solved on each sub-region with an additional boundary region. The second is the EM (Expectation Maximization) method. In this algorithm, a Bayesian framework estimates a function for the tissue by statistical methods. The referenced algorithm can be summarized as an iterative filter. Inside the loop of the iteration, the updated image voxel is obtained from two parts: the first is the measured ultrasound data (if it exists inside this voxel), and the second is the average of the image voxel and all its surrounding image voxels. This method achieved good reconstruction results. A diffusion technique of the EM method may be utilized for rotary-scanning ultrasound. This requires that the measured data not be sparse (without holes), which makes it suitable only for rotary scanning rather than freehand scanning.
The method in this invention is an interpolation method for B-scans of freehand ultrasound. It is a modified EM method replacing the cubic average filter with the anisotropic diffusion filter. The presented invention is aimed at providing a 3D image interpolation/reconstruction algorithm for freehand ultrasound B-scan data which is measured on a group of randomly distributed planes. This measured ultrasound data is noisy and sparse (with holes) in 3D space. The algorithm reconstructs the image with sharp image edges while simultaneously smoothing the image.
The algorithm presented in this disclosure is a modified EM algorithm. In this disclosure the anisotropic diffusion filter is utilized instead of the cubic filter. The anisotropic diffusion filter adds a new parameter to the algorithm which can be optimized in order to further reduce the errors. Here the error means the difference between the reconstructed image from simulated freehand-scanning ultrasound data and the original object used to create the ultrasound data.
The algorithm in this presented disclosure is an iterative algorithm. Inside each loop of the iteration the updated image is calculated from two component images. The first is the image directly obtained from the acquired data. The second is the output image from the prior loop of the iteration. The second image is filtered through an anisotropic diffusion filter.
The presented algorithm has two parameters: the first is used to balance the two kinds of images described in the last paragraph; the second is used to adjust the strength of the anisotropic diffusion filter. The present disclosure utilizes an additional parameter in comparison to prior techniques. The additional parameter enhances the optimization process. The two parameters are optimized to minimize the error between the reconstructed image from the simulated freehand ultrasound data and the original object used to create the ultrasound data. The error is defined as a combination of the absolute error and the square error. Combining the two kinds of errors can better balance the sharpness of the image edges and the smoothness of the image itself. The optimized parameters are utilized in the reconstruction from measured freehand-scanning ultrasound data.
Since the iteration with the anisotropic diffusion filter is very time consuming, the GPU CUDA technique is utilized to speed up the algorithm.
Reference will now be made to the accompanying drawings, which assist in illustrating the various pertinent features of the various novel aspects of the present disclosure. Although the invention is described primarily with respect to an ultrasound imaging embodiment, aspects of the invention may be applicable to a broad range of imaging modalities, including MRI, CT, and PET, which are applicable to organs and/or internal body parts of humans and animals. In this regard, the following description is presented for purposes of illustration and description. Furthermore, the description is not intended to limit the invention to the form disclosed herein. Consequently, variations and modifications commensurate with the following teachings, and skill and knowledge of the relevant art, are within the scope of the present invention.
An exemplary embodiment of the invention will be described in relation to performing prostate scanning using freehand transrectal ultrasound (TRUS). As shown in
With the dimensions of the probe 10 and needle assembly 12 taken into the calculations, the 3D position of the needle tip and its orientation are known. The ultrasound probe 10 sends signals to the ultrasound system 3, which may be connected to the same computer (e.g., via a video image grabber) as the output of the position sensors 14. In the present embodiment, this computer is integrated into the imaging system 3. The computer 2 therefore has real-time 2D and/or 3D images of the scanning area in memory. The image coordinate system and the robotic arm coordinate system are unified by a transformation. Using the acquired 2D images, a prostate surface 5 (e.g., a 3D model of the organ) and the biopsy needle are simulated and displayed on a display screen 8. A biopsy needle may also be modeled on the display, which has a coordinate system, so the doctor knows the exact locations of the needle and the prostate.
The computer system runs application software and computer programs which can be used to control the system components, provide user interface, and provide the features of the imaging system. The software may be originally provided on computer-readable media, such as compact disks (CDs), magnetic tape, or other mass storage medium. Alternatively, the software may be downloaded from electronic links such as a host or vendor website. The software is installed onto the computer system hard drive and/or electronic memory, and is accessed and controlled by the computer's operating system. Software updates are also electronically available on mass storage media or downloadable from the host or vendor website. The software, as provided on the computer-readable media or downloaded from electronic links, represents a computer program product usable with a programmable computer processor having computer-readable program code embodied therein. The software contains one or more programming modules, subroutines, computer links, and compilations of executable code, which perform the functions of the imaging system. The user interacts with the software via keyboard, mouse, voice recognition, and other user-interface devices (e.g., user I/O devices) connected to the computer system. Aspects of the present invention may be embodied as software and/or hardware.
I. Creating the 3D Freehand Simulated Ultrasound Data
The free hand ultrasound scanning planes are shown in
The measured signal yi and its position in video frame coordinates, Xivideo=(xi1video, xi2video, 0), are obtained. The frame plane position is measured by the tracker system. The position (angles θ, φ) of the tracker probe is measured in the tracker coordinates as seen in
There are a few coordinate systems: (1) the volume coordinates X, which are the coordinates in which the 3D reconstructed image is shown; (2) the tracker coordinates Xtr, which are shown in
Assume it is known that the original object of the ultrasound image is ƒ(X). Examples of ƒ(X) are a cube or a sphere. Here X is in 3D space. The task of this section is to create a set of ultrasound measured points Xi and to produce the simulated values of the ultrasound signal Yi according to the original object ƒ(X) and the noise model. Here Xi=(xi1, xi2, xi3) is an ultrasound measured point, and xi1, xi2 and xi3 are its 3D coordinates. i=0, 1, 2, 3 . . . I−1 is the index of the measured points and I is the number of measured points. Xi lies on a set of planes which simulate the 2D ultrasound B-scans. Yi is used to express the measured value of the ultrasound at the point Xi. If ƒ(X) is known only on the grid, ƒ(Xi) can be obtained from interpolation.
ƒ(Xi)=Interpolation ƒ(X) (1)
The interpolation can be the nearest interpolation, which is defined as follows,
Here ri is the distance from the point Xi to the 8 nearest neighbor points on the regular grid (X(1), X(2), . . . , X(8)). Normally ƒ(X) is taken as an ideal shape, for example a sphere or a cube. In this situation the value ƒ(X) is known exactly everywhere in the 3D space, including ƒ(Xi), and the above interpolation can be avoided. In the following section we assume the object ƒ(X) is a cube.
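Since Eq. (2) is not reproduced above, the following C++ sketch shows only one common inverse-distance-weighted form consistent with the description of ri as the distances from Xi to its 8 surrounding grid points; the function and variable names are illustrative.

```cpp
// Hedged sketch: one plausible inverse-distance-weighted interpolation over the
// 8 grid points surrounding Xi. f_grid(i, j, k) returns the object value at a
// regular grid point (assumed to be available from the sampled object).
#include <array>
#include <cmath>

double interpolate8(const std::array<double, 3>& Xi,
                    double (*f_grid)(int, int, int)) {
    int base[3];
    for (int k = 0; k < 3; ++k) base[k] = static_cast<int>(std::floor(Xi[k]));

    double num = 0.0, den = 0.0;
    for (int c = 0; c < 8; ++c) {                  // the 8 corners of the cell
        int gx = base[0] + (c & 1);
        int gy = base[1] + ((c >> 1) & 1);
        int gz = base[2] + ((c >> 2) & 1);
        double dx = Xi[0] - gx, dy = Xi[1] - gy, dz = Xi[2] - gz;
        double r = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (r < 1e-12) return f_grid(gx, gy, gz);  // Xi lies exactly on a grid point
        double w = 1.0 / r;                        // inverse-distance weight
        num += w * f_grid(gx, gy, gz);
        den += w;
    }
    return num / den;
}
```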
Assume the object function ƒ(X) has zero value outside of a sufficiently large radius RL. The measured points Xi are on a set of planes. The plane equation is written as
n1x1 + n2x2 + n3x3 = r (3)
Here {circumflex over (n)}=(n1, n2, n3) is the perpendicular normal vector of the plane and
n1=sin θ cos φ, n2=sin θ sin φ, n3=cos θ (4)
where θ is the angle between the normal vector {circumflex over (n)} and the coordinate axis x3, φ is the angle between the projection of the normal vector {circumflex over (n)} onto the (x1, x2) plane and the coordinate axis x1, and r is the distance from the plane to the origin of the 3D space. Assume there are 2D rectangular coordinates (ξ, η) on the plane expressed by equation (3) above.
In the practical situation the measured points are in a fan region with polar coordinates. However, since the only requirement is that the measurements be distributed on a grid without holes, we use rectangular coordinates in the 2D plane for simplicity. The directions of the coordinates (ξ, η) are defined as follows,
The direction of the vectors {circumflex over (ξ)}, {circumflex over (η)}, {circumflex over (n)} can be seen in
All measurement points X are on the plane (ξ, η). Assume d is the projection of the vector X onto the plane (ξ, η), which is known from the measurement.
d=ξ{circumflex over (ξ)}+η{circumflex over (η)} (10)
The coordinate X on the 3D space can be found
X = X0 + d (11)
where X0=r{circumflex over (n)} is the origin of the coordinates (ξ, η). Here X=(x1, x2, x3). The above equation can be rewritten in components,
x1 = rn1 + ξξ1 + ηη1
x2 = rn2 + ξξ2 + ηη2
x3 = rn3 + ξξ3 + ηη3 (12)
Based on the above, the simulated ultrasound data can be created as described in the following sub-section.
The creation of the ultrasound freehand scanning data is summarized as follows,
Step 1. Create a set of random values X0=(x01, x02, x03) satisfying a Gaussian distribution.
where p(x01, x02, x03) is the probability distribution function, Z is the normalization factor, and σ is taken, for example, as 0.3R. Here 0.3 is chosen through experiment. If this value is taken too big, most of the points (x01, x02, x03) will be distributed close to the boundary of the 3D space. If it is taken too small, most of the points (x01, x02, x03) will be distributed close to the center of the 3D space.
Step 2. The measurement plane is calculated through
Step 3. Calculate (θ, φ) from the above direction {circumflex over (n)}.
θ=arccos(n3)
φ=arctan2(n2,n1) (16)
where arctan2 is the atan2 function defined in Matlab or in the C++ header file math.h.
Step 4. Calculate {circumflex over (ξ)}, {circumflex over (η)} according to Eqs. (5) and (6).
Step 5. Calculate the measured points Xi=(xi1, xi2, xi3) from Eq. (12). Remove all the points outside the region [−R, R]3.
Step 6. Assume the original function ƒ(X) is a cube.
The constant const can be taken as, for example, const=10.
Step 7. Add object-related random noise to the original function as follows
Yi = ƒ(Xi) + ƒ(Xi)*Noise (18)
where the variable Noise is a random variable satisfying a uniform distribution on the region [−0.5, 0.5]. Here 0.5 is taken so that the noise Noise and the signal ƒ(Xi) have comparable amplitude. Placing the Yi at the coordinates Xi in the 3D volume image space, the result can be shown in
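The seven steps above can be summarized in code. The following C++ sketch is illustrative only: Eqs. (5), (6), (13)-(15) and (17) are not reproduced above, so the plane construction through X0, the in-plane axes {circumflex over (ξ)} and {circumflex over (η)}, the cube half-width, the sampling grid on (ξ, η) and the random seed are assumptions rather than the exact formulas of this disclosure.

```cpp
// Hedged sketch of Section I, Steps 1-7, for creating simulated freehand data.
#include <cmath>
#include <random>
#include <vector>

struct Sample { double x[3]; double y; };   // a measured point (Xi, Yi)

std::vector<Sample> simulateFreehandData(int numPlanes, double R) {
    std::mt19937 rng(12345);                                    // illustrative seed
    std::normal_distribution<double> gauss(0.0, 0.3 * R);       // Step 1: sigma = 0.3R
    std::uniform_real_distribution<double> noise(-0.5, 0.5);    // Step 7 noise
    const double side = 0.5 * R;      // assumed half-width of the cube object
    const double objectValue = 10.0;  // "const" in Eq. (17), taken as 10

    std::vector<Sample> data;
    for (int m = 0; m < numPlanes; ++m) {
        double X0[3] = { gauss(rng), gauss(rng), gauss(rng) };  // Step 1
        double r = std::sqrt(X0[0]*X0[0] + X0[1]*X0[1] + X0[2]*X0[2]);
        if (r < 1e-9) continue;
        // Step 2 (assumed): the plane passes through X0 with normal X0/|X0|.
        double n[3] = { X0[0]/r, X0[1]/r, X0[2]/r };
        double theta = std::acos(n[2]);                          // Step 3, Eq. (16)
        double phi   = std::atan2(n[1], n[0]);
        // Step 4 (assumed orthonormal in-plane axes; Eqs. (5), (6) are not shown above).
        double xiHat[3]  = { std::cos(theta)*std::cos(phi), std::cos(theta)*std::sin(phi), -std::sin(theta) };
        double etaHat[3] = { -std::sin(phi), std::cos(phi), 0.0 };

        for (double xi = -R; xi <= R; xi += 0.02 * R) {          // Step 5: sample the plane
            for (double eta = -R; eta <= R; eta += 0.02 * R) {
                Sample s;
                bool inside = true;
                for (int k = 0; k < 3; ++k) {
                    s.x[k] = r*n[k] + xi*xiHat[k] + eta*etaHat[k];     // Eq. (12)
                    if (s.x[k] < -R || s.x[k] > R) inside = false;     // drop points outside [-R, R]^3
                }
                if (!inside) continue;
                // Step 6: cube object f(X) (assumed centered at the origin).
                bool inCube = std::fabs(s.x[0]) < side && std::fabs(s.x[1]) < side && std::fabs(s.x[2]) < side;
                double f = inCube ? objectValue : 0.0;
                s.y = f + f * noise(rng);                              // Step 7, Eq. (18)
                data.push_back(s);
            }
        }
    }
    return data;
}
```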
II. The EM Method with the Cubic Filter
The EM method with the cubic filter is an iterative algorithm. The updated image n+1Up is obtained from a combination of two parts. The first is the first-step reconstructed image before any iteration, UpML. UpML is obtained directly from the freehand ultrasound input data. The second is the neighborhood average nŪp of the image from the last iteration,
n+1Up = (1 − kp)UpML + kp nŪp (19)
where n+1Up is the output image Up after (n+1) loops of the iteration. UpML is defined as
where Ω=[p1−Δ, p1+Δ]×[p2−Δ, p2+Δ]×[p3−Δ, p3+Δ], Xi is the i-th position of the ultrasound measurement and Yi is the i-th measured value. ∃Xi means there exists an Xi. ∀Xi means for all Xi. p=(p1, p2, p3) is the position of an ultrasound image voxel. Δ is the spacing between two pixels; in this article Δ=1. p1, p2, p3 are discrete rectangular coordinates which are integers 0, 1, . . . , N−1.
Here xk is x1, x2 or x3, and δ=[−Δ, Δ]3. In this simulation Δ is taken as 1. kp in Eq. (19) is defined as follows,
where K can be a constant parameter. ∃Xi means there exists an Xi. ∀Xi means for all Xi. In a prior method, K is taken as a variable parameter. It depends on σ(X)
Here σ(X) is the variance of the measured data. However, this value cannot be obtained at an arbitrary single point X; it is defined on a small region close to X. Kc is a constant parameter. The average in Eq. (19) is defined as
where p=(i, j, k). The average can be seen as a 27-element cubic filter.
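As a concrete illustration of Eqs. (19)-(26), the following C++ sketch performs one loop of the EM iteration with the cubic filter. Eqs. (20)-(24) are not reproduced above, so UpML is taken as a precomputed input volume and kp follows one plausible reading: kp equals the constant K where measurements exist near voxel p and 1 where none exist, so that such a voxel is filled purely from the filtered image. All class and function names are illustrative.

```cpp
// Hedged sketch of one EM iteration with the 3x3x3 cubic filter (Eq. (19)).
#include <vector>

struct Volume {
    int N;                      // image is N x N x N
    std::vector<double> v;      // voxel values, flat storage
    explicit Volume(int n) : N(n), v(static_cast<size_t>(n) * n * n, 0.0) {}
    double& at(int i, int j, int k) { return v[(static_cast<size_t>(i) * N + j) * N + k]; }
    double at(int i, int j, int k) const { return v[(static_cast<size_t>(i) * N + j) * N + k]; }
};

// One loop of Eq. (19): n+1Up = (1 - kp) UpML + kp * nUbar_p,
// where nUbar_p is the 27-point neighborhood average of the previous image.
Volume emCubicStep(const Volume& Uprev, const Volume& UpML,
                   const std::vector<bool>& hasData, double K) {
    int N = Uprev.N;
    Volume Unext(N);
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k) {
                double sum = 0.0; int cnt = 0;
                for (int di = -1; di <= 1; ++di)          // 3x3x3 cubic filter
                    for (int dj = -1; dj <= 1; ++dj)
                        for (int dk = -1; dk <= 1; ++dk) {
                            int ii = i + di, jj = j + dj, kk = k + dk;
                            if (ii < 0 || jj < 0 || kk < 0 || ii >= N || jj >= N || kk >= N) continue;
                            sum += Uprev.at(ii, jj, kk);
                            ++cnt;
                        }
                double avg = sum / cnt;
                size_t idx = (static_cast<size_t>(i) * N + j) * N + k;
                double kp = hasData[idx] ? K : 1.0;        // assumed form of Eqs. (23)-(24)
                Unext.at(i, j, k) = (1.0 - kp) * UpML.at(i, j, k) + kp * avg;
            }
    return Unext;
}
```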
III. The Modified EM Method with the Diffusion Filter
The method described in the last section is referred to as the EM method with the cubic filter. In this invention a new method is proposed in which the cubic filter Ūp is replaced with the diffusion filter DUp. The presented method is referred to as the modified EM method with the diffusion filter. The iteration algorithm of Eq. (19) is modified as follows,
n+1Up = (1 − kp)UpML + kp DnUp (27)
where n+1Up is the (n+1)-th iteration of Up. The definitions of kp and UpML are the same as in Eq. (19). kp is calculated from Eqs. (23) and (24). Note that in this invention K in Eq. (24) is taken as an independent constant parameter, K=const, and it is not adapted to σ(X).
The anisotropic diffusion filter used here is defined as follows,
DUp = Up + tcΔUp (28)
where t is a constant and c is defined as
where m is a normalization factor. m is the maximum value of the original object (this value does not require high accuracy and can be approximately evaluated).
m=max(ƒ(X)) (30)
where g(x) has two forms which are shown in the following
The 27-point neighborhood average can be replaced with a 7-point neighborhood average:
The above formula can be rewritten as
where ΔUp is defined as follows,
ΔUp=∇EU+∇WU+∇NU+∇SU+∇uU+∇dU (35)
where
∇WU=U(i−1,j,k)−U(i,j,k)
∇EU=U(i+1,j,k)−U(i,j,k)
∇SU=U(i,j−1,k)−U(i,j,k)
∇NU=U(i,j+1,k)−U(i,j,k)
∇dU=U(i,j,k−1)−U(i,j,k)
∇uU=U(i,j,k+1)−U(i,j,k) (36)
In Eq. (28), cΔUp is implemented as follows,
Kd in Eq. (32) is the diffusion constant. If Kd→∞, then c=g(x)→1 and the anisotropic effect of the diffusion is eliminated.
DUp = Up + tΔUp (39)
Comparing Eq. (39) with Eq. (34), if t=1.0/7 and Kd→∞, then DUp→Ūp. Hence t can be chosen around 1/7. In general, if t>1/7, it can reduce the number of iterations. However, if it is too large, the quality of the reconstruction will be reduced.
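The following C++ sketch shows one anisotropic diffusion step DUp = Up + tcΔUp as described in Eqs. (27)-(39). Eqs. (29), (31), (32), (37) and (38) are not reproduced above, so the conductance uses the two standard Perona-Malik functions as assumed forms of g(x), applied per direction in the usual 6-neighbor discretization; m is the approximate maximum of the original object from Eq. (30).

```cpp
// Hedged sketch of one anisotropic diffusion update over an N x N x N image.
#include <cmath>
#include <vector>

// Assumed conductance functions; Kd is the diffusion constant of Eq. (32).
inline double gExp(double x, double Kd)  { return std::exp(-(x * x) / (Kd * Kd)); }
inline double gFrac(double x, double Kd) { return 1.0 / (1.0 + (x * x) / (Kd * Kd)); }

void diffuseStep(std::vector<double>& U, int N, double t, double Kd, double m) {
    auto idx = [N](int i, int j, int k) { return (static_cast<size_t>(i) * N + j) * N + k; };
    std::vector<double> out(U.size());
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k) {
                double u = U[idx(i, j, k)];
                // 6 one-sided gradients, Eq. (36); out-of-range neighbors use u (zero gradient).
                double grads[6] = {
                    (i > 0     ? U[idx(i - 1, j, k)] : u) - u,   // W
                    (i < N - 1 ? U[idx(i + 1, j, k)] : u) - u,   // E
                    (j > 0     ? U[idx(i, j - 1, k)] : u) - u,   // S
                    (j < N - 1 ? U[idx(i, j + 1, k)] : u) - u,   // N
                    (k > 0     ? U[idx(i, j, k - 1)] : u) - u,   // down
                    (k < N - 1 ? U[idx(i, j, k + 1)] : u) - u }; // up
                double cLap = 0.0;
                for (double d : grads)
                    cLap += gExp(d / m, Kd) * d;   // assumed per-direction conductance
                out[idx(i, j, k)] = u + t * cLap;  // Eq. (28) / (34), with t around 1/7
            }
    U.swap(out);
}
```

Either conductance function (gExp or gFrac) can be substituted; choosing t around 1/7 follows the discussion of Eq. (39) above.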
IV. Optimization of the Parameters
The parameters used in the two algorithms are optimized according to the absolute error and the square error. Here the errors are the differences between the reconstructed image and the original object. Define the absolute error function,
Define the square error function,
where Uo(i, j, k)=ƒ(iΔ, jΔ, kΔ) is the original object on the rectangular coordinates and Ur(i, j, k) is the reconstructed image. For this invention there are two parameters, K in Eq. (24) and Kd in Eqs. (31) and (32), which are required to be optimized. The optimization can be written as
where q is a constant to balance the two errors Err1 and Err2. q=0.5 is used here.
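A minimal C++ sketch of this optimization is shown below. The exact forms of Eqs. (40)-(43) are not reproduced above, so per-voxel mean errors and the combined error q·Err1 + (1−q)·Err2 are assumptions, the `reconstruct` callback stands in for the full modified EM reconstruction, and the candidate grids for K and Kd are illustrative.

```cpp
// Hedged sketch of Section IV: error functions and a grid search over (K, Kd).
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

// Mean absolute error (Err1) and mean square error (Err2) between the
// reconstructed image Ur and the original object Uo (assumed per-voxel means).
double err1(const std::vector<double>& Uo, const std::vector<double>& Ur) {
    double s = 0.0;
    for (size_t i = 0; i < Uo.size(); ++i) s += std::fabs(Ur[i] - Uo[i]);
    return s / Uo.size();
}
double err2(const std::vector<double>& Uo, const std::vector<double>& Ur) {
    double s = 0.0;
    for (size_t i = 0; i < Uo.size(); ++i) s += (Ur[i] - Uo[i]) * (Ur[i] - Uo[i]);
    return s / Uo.size();
}

// Grid search over (K, Kd). `reconstruct` runs the modified EM reconstruction for a
// given parameter pair and returns the reconstructed volume (an assumed interface).
std::pair<double, double> optimizeParams(
    const std::vector<double>& Uo,
    const std::function<std::vector<double>(double K, double Kd)>& reconstruct,
    double q = 0.5) {
    std::vector<double> candidates = {0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4};  // illustrative
    double bestErr = 1e300;
    std::pair<double, double> best{candidates[0], candidates[0]};
    for (double K : candidates)
        for (double Kd : candidates) {
            std::vector<double> Ur = reconstruct(K, Kd);
            double e = q * err1(Uo, Ur) + (1.0 - q) * err2(Uo, Ur);  // assumed combined error
            if (e < bestErr) { bestErr = e; best = {K, Kd}; }
        }
    return best;
}
```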
V. The Working Flow of the Method
The Freehand Reconstruction System. The working flow of the above-described freehand reconstruction system can be seen in
The Free-Hand Data Acquisition Sub-System.
The freehand acquisition sub-system described in
After the scanning, the ultrasound signal generates an image 140. The tracker position (measured angle information) 120 is used to calculate a transform matrix related to the rotation (angle ρ) and translation (displacement δ) matrix 150 that allows for aligning the image in a common frame of reference. This typically requires further calculating 160 the orientation matrix related to the tracker position (angles θ, φ). Another output is a frame image index that offers the matrix from the 3D volume space to the tracker home coordinates. The frame index matrix, reference point and orientation matrix of the frame are utilized to generate a whole transform matrix. The point position in the frame image 130 is combined with the transform matrix to produce the point position Xi in the 3D volume space 200. Another output of the tracker probe is the measured ultrasound frame signal 140, which is utilized by the signal generation 190 to separate the frame signal into individual signals related to video point positions Xi. The output ultrasound signal is saved as yi in 210.
Position Generation.
The details of the position generation sub-system 180 of
The freehand reconstruction sub-system 60 in
The initialization procedure 310 of
The diffusion procedure 350 of
The simulation procedure compound 720 in
The data acquisition procedure of
The position generation procedure of
The offset calculation compound of
of the input vector Xo, which is the origin point of the frame. The angles (θ, φ) 1240 corresponding to the direction {circumflex over (n)} are then calculated 1230. A subsequent process 1250 calculates the direction of the frame axes ({circumflex over (ξ)}, {circumflex over (η)}) 1260. The values are utilized to calculate 1270 the offset vector d in the tracker coordinates and generate an offset output 1290.
The signal generation compound of
The compound 0150 in
VI. The Implementation of the Reconstruction Using GPU.
A GPU-based speed enhancement technique is applied to the implementation of this invention. A Visual C++ compiler is combined with NVIDIA's CUDA compiler. Here the example GPU is an NVIDIA GeForce 8800 GTS graphics card. The following describes the working principle of this invention using a GPU. See generally
The input image UpML (created through the compound 0090 in Figure) and kp (created through the compound 0110 in
Create a variable Up in the device memory of the GPU. Up is used to save the output image of a loop of the iteration. Up is also used as the input image of the loop of the iteration. Up is initialized to zero at the beginning of the iteration. The count of the iteration is initialized as 0.
In order to process the 3D image using the GPU effectively, the 3D image is divided into a number of blocks or sub-regions, see
Create a variable Upsh in GPU shared memory. Upsh is a 3-dimensional image, see the compound 0290 in
Upsh is inputted to the compound 0300 in the
c is shown through the compound 0220 in
The image Up is calculated according to Eq. (27). This is done through the compounds 0120, 0160 and 0170 in
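A hedged CUDA C++ sketch of this shared-memory scheme is shown below: each block copies its sub-region of the previous image (plus a one-voxel halo) from device memory into shared memory, applies the diffusion filter and the EM update of Eq. (27) to its voxels, and writes the results back to device memory. The tile size, the double-buffered output, the per-direction conductance and the assumed form of kp are illustrative choices rather than the exact implementation of this invention.

```cuda
#include <cuda_runtime.h>
#include <math.h>
#include <stddef.h>

#define TILE 8                       // voxels per block edge
#define SH   (TILE + 2)              // tile plus a one-voxel halo

__device__ float gExp(float x, float Kd) { return expf(-(x * x) / (Kd * Kd)); }

__global__ void emDiffusionStep(const float* Uprev, const float* UpML,
                                const unsigned char* hasData, float* Unext,
                                int N, float K, float Kd, float t, float m) {
    __shared__ float Upsh[SH][SH][SH];

    // Global coordinates of this thread's voxel.
    int gi = blockIdx.x * TILE + threadIdx.x;
    int gj = blockIdx.y * TILE + threadIdx.y;
    int gk = blockIdx.z * TILE + threadIdx.z;

    // Cooperative copy of the (TILE+2)^3 sub-region (with halo) into shared memory.
    int tid = (threadIdx.x * TILE + threadIdx.y) * TILE + threadIdx.z;
    for (int s = tid; s < SH * SH * SH; s += TILE * TILE * TILE) {
        int si = s / (SH * SH), sj = (s / SH) % SH, sk = s % SH;
        int ii = min(max((int)blockIdx.x * TILE + si - 1, 0), N - 1);  // clamp at boundary
        int jj = min(max((int)blockIdx.y * TILE + sj - 1, 0), N - 1);
        int kk = min(max((int)blockIdx.z * TILE + sk - 1, 0), N - 1);
        Upsh[si][sj][sk] = Uprev[((size_t)ii * N + jj) * N + kk];
    }
    __syncthreads();
    if (gi >= N || gj >= N || gk >= N) return;

    // Diffusion-filtered value DUp computed from the shared-memory copy (Eq. (28)).
    int li = threadIdx.x + 1, lj = threadIdx.y + 1, lk = threadIdx.z + 1;
    float u = Upsh[li][lj][lk];
    float grads[6] = {
        Upsh[li - 1][lj][lk] - u, Upsh[li + 1][lj][lk] - u,
        Upsh[li][lj - 1][lk] - u, Upsh[li][lj + 1][lk] - u,
        Upsh[li][lj][lk - 1] - u, Upsh[li][lj][lk + 1] - u };
    float cLap = 0.0f;
    for (int d = 0; d < 6; ++d) cLap += gExp(grads[d] / m, Kd) * grads[d];
    float DU = u + t * cLap;

    // EM update of Eq. (27); kp follows the assumed form used earlier.
    size_t idx = ((size_t)gi * N + gj) * N + gk;
    float kp = hasData[idx] ? K : 1.0f;
    Unext[idx] = (1.0f - kp) * UpML[idx] + kp * DU;
}

// Host-side launch sketch: one thread per voxel, one block per TILE^3 sub-region.
// dim3 threads(TILE, TILE, TILE);
// dim3 blocks((N + TILE - 1) / TILE, (N + TILE - 1) / TILE, (N + TILE - 1) / TILE);
// emDiffusionStep<<<blocks, threads>>>(dUprev, dUpML, dHasData, dUnext, N, K, Kd, t, m);
```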
VII. An Example of Optimization of the Parameters
An example of the original object ƒ(X) is a cube, which can be used to produce the simulated ultrasound data. Assume the simulated data is produced according to section I. Assume the number of planes used to cut the 3D object is M=100. The size of one dimension of the image is N=200. The simulated ultrasound data (Xi, Yi) is produced with Eq. (18) or the work flow
Assuming that t=1.0/7 is taken and the number of iterations is sufficiently large, for example 1000, the parameters K and Kd used in this invention are optimized according to section IV with q=0.5 in Eq. (43), where Err is Err1, Err2 or Err1+Err2. The parameters can be found from Eq. (42). The cubic object is used to produce the simulated ultrasound data. The number of iterations is taken as 1000. In the beginning K=0.4 is fixed. The calculated result is shown in the following:
The smallest error Err1 is 0.335985. The corresponding value of parameter Kd is 0.25.
Now we fix the value of Kd at 0.25. The following table shows that the smallest error Err1 is 0.335985 and the corresponding value of the parameter K is 0.4. However, the smallest error of Err1+Err2 is 1.191522 and the corresponding value of the parameter K is 0.25. A smoother reconstructed image corresponds to a smaller K. Since Err1+Err2 combines the absolute error and the square error together, it offers better control of the errors of the image and the smoothness of the reconstructed image. Hence in the implementation we use the parameter K=0.25 instead of K=0.4.
After the optimized parameters K=0.25 and Kd=0.25 are found, groups of parameters which are very close to the above parameters are checked. For example, Err1=0.427283 and Err2=0.90568 if K=0.25 and Kd=0.275. These parameters are worse than the parameters K=0.25 and Kd=0.25.
In the above, the parameter optimization is done only for one sample of simulated data. This alone is not sufficient for the simulation. Different samples of simulated data should be checked. Different object functions should be checked. Different values of M (the number of planes) should be used. After a number of tests we have found that the parameters are not sensitive to different samples of data; they are also not sensitive to the object function and M. Hence the above optimized parameters can be used generally. Note that if the noise model changes, the parameters should be re-adjusted.
VIII. Example of Reconstructions
The reconstructed image which is the output of the compound 0190 in
Compared to voxel-based and pixel-based methods, this invention is based on a function-based method, i.e., the EM method, which has a clear goal function for the minimization. The smoothness of the image and the sharpness of the image edges can be combined in this goal function. This invention not only rigorously interpolates the data but also filters out noise simultaneously during the iteration of the interpolation.
The image reconstruction method in this invention is ideal for ultrasound data acquired with freehand scanning. It is especially suitable for sparse data measured on irregularly positioned B-scan slices.
In this invention, the combination of the two error functions, the absolute error and the square error, is used as the goal function for the parameter optimization. The combined error function yields parameters with a good balance between the sharpness of the image edges and the smoothness of the image itself.
In this invention there are two parameters, K and Kd in Eqs. (24), (31) and (32), compared to the EM method which has only one parameter, Kc in Eq. (25). The additional parameter makes it easier for the optimization to meet the two requirements, the sharpness of the image edges and the smoothness of the image itself, simultaneously.
In the EM method in the reference, the sharpness of the image edges and the smoothness of the image are controlled through adapting the variance σ(X), see Eq. (25). In this invention, the sharpness of the image edges and the smoothness of the image are controlled through the gradient of the image from the last loop of the iteration (∇(Up)). Since the variance σ(X) cannot be obtained at a rigorous point, it has to be averaged over a small region, for example within a sphere of radius 5 voxels (around 3×52 voxels). After this averaging the variance becomes insensitive to the image edges and hence cannot offer much help for the sharpness of the image edges. On the other hand, the gradient of the image from the last iteration (∇(Up)) is rigorously defined at the voxel and its 6 closest neighbor voxels. This information can be utilized to sharpen the image edges. Hence this invention obtains a better reconstructed image compared to prior EM methods.
a) shows the absolute errors Err1 of the reconstructed image using different samples of the simulated data. This data is created through Eq. (18). Since the initial seed used to create the pseudo-random variables (x01, x02, x03) in Eq. (13) is different for different samples, the simulated data for different samples is also not the same. 72 samples of the simulated data are produced. Here the initial seed used in a sample is the last seed of the random number generator for the prior sample. The number of planes (B-scans) used to cut the object is N=100. The object is a rigorous cube which is not shown here. The object can be estimated from the reconstructed image in
In a previous EM method, the sharpness of the image edges and the smoothness of the image are controlled through adapting K through ∇(Y(X)). Here X is the position and Y is the measured ultrasound value at the position X. ∇(Y(X)) is well defined for rotary B-scans. Since in freehand B-scans the data X is irregular (sparse or with holes in space), ∇(Y(X)) is not well defined. That prior method also still uses the cubic average, which partly reduces the sharpness of the image compared to the anisotropic diffusion filter used in this invention. Hence that method is more restricted to rotary B-scans. In the method of this invention, however, the sharpness of the image edges and the smoothness of the image are controlled through ∇(Up), where Up is the output image of the last iteration, which is well defined at all voxels. Hence ∇(Up) is also well defined.
The implementation of the algorithm is based on the CUDA GPU technique, which utilizes thousands of hardware threads on the GPU. Hence the speed of the calculation is greatly increased. In the implementation, each output image voxel depends on its neighboring voxels. In the GPU there are device memory and shared memory. Shared memory is much faster than device memory; however, it can only be shared inside a block. In this invention the image in device memory is copied to shared memory. The image is processed in the shared memory. After the processing finishes, it is copied back to device memory. This greatly reduces the computing time.
This application claims the benefit of the filing date of U.S. Provisional Application No. 61/147,932 entitled: “Apparatus for 3-D Free Hand Reconstruction” and having a filing date of Jan. 28, 2009, the entire contents of which are incorporated by reference.