Medical image processing apparatus and medical image processing method which are for medical navigation device

Information

  • Patent Grant
  • 11676706
  • Patent Number
    11,676,706
  • Date Filed
    Friday, October 29, 2021
  • Date Issued
    Tuesday, June 13, 2023
  • Inventors
  • Original Assignees
    • GMEDITEC CO., LTD.
  • Examiners
    • Beard; Charles L
  • Agents
    • Ladas & Parry, LLP
Abstract
The present invention relates to a medical image processing apparatus and a medical image processing method for a medical navigation device, and more particularly, to an apparatus and method for processing an image provided when using the medical navigation device. To this end, the present invention provides a medical image processing apparatus for a medical navigation device, including: a position tracking unit configured to obtain position information of the medical navigation device within an object; a memory configured to store medical image data generated based on a medical image of the object; and a processor configured to set a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data, and generate partial medical image data corresponding to the ROI, and a medical image processing method using the same.
Description
TECHNICAL FIELD

The present invention relates to a medical image processing apparatus and a medical image processing method for a medical navigation device, and more particularly, to an apparatus and method for processing an image provided when using the medical navigation device.


BACKGROUND ART

Minimally invasive surgery, which minimizes the patient's incision site during surgery, is widely used. Minimally invasive surgery has the advantage of minimizing the incision and thus minimizing blood loss and recovery time, but it restricts the doctor's field of view and therefore carries risk factors such as meninx damage and eye damage in some surgeries. As a tool for overcoming this disadvantage of minimally invasive surgery, in which the doctor's field of view is restricted, a medical navigation device (or surgical navigation device) is used. The medical navigation device tracks, in real time, the position of the instrument in the surgical site in reference to a previously obtained medical image of the patient. In addition, such a medical navigation device may be used in combination with an endoscope.


An optical or electromagnetic position tracking device may be used for real-time position tracking of the inserted surgical instrument in the medical navigation device. As an example of tracking the position of the surgical instrument, an optical position tracking device including an infrared emitting device and a passive image sensor may be used. The optical position tracking device emits reference light through the infrared emitting device and collects the images reflected by plural markers through the image sensor. The position tracking device may obtain position information of the surgical instrument based on the positions of the markers. Meanwhile, as another example of tracking the position of the surgical instrument, an electromagnetic position tracking device including a magnetic field generator and a conductive metal object may be used. The electromagnetic position tracking device may obtain the position information of the surgical instrument by measuring the eddy current that occurs in the conductive metal object within the magnetic field generated by the magnetic field generator. In order to accurately indicate the positional relationship between the surgical instrument and the body part in the position tracking device, a registration process may be required that defines the initial positional relationship between the medical data for the patient's body part and the surgical instrument.



FIG. 1 illustrates an embodiment of an output image of a medical navigation device. The medical navigation device may display at least one of horizontal, sagittal, and coronal images of a body part. The operator (or doctor) interprets each image to determine the three-dimensional position of the surgical instrument and to identify adjacent risk factors. However, these cross-sectional images are not an intuitive representation of the position of the surgical instrument in the surgical site. Therefore, in order to identify the exact position of the surgical instrument from cross-sectional images, the operator may need considerable time as well as skill. In addition, when the time spent looking at the monitor of the medical navigation device to determine the position of the surgical instrument is prolonged, the overall surgery time becomes longer, which may increase the fatigue of both the operator and the patient.


DISCLOSURE
Technical Problem

The present invention has an object to provide a medical image processing method for helping the operator to intuitively identify information on the surgical site and adjacent elements (e.g., organs) in a patient's body.


In addition, the present invention has an object to effectively render a previously obtained medical image of the patient together with an intraoperative image of the surgical site.


In addition, the present invention has an object to provide a medical navigation device with which the patient's anatomical structure can be easily identified.


Technical Solution

In order to solve the above problems, the present invention provides a medical image processing apparatus and a medical image processing method as follows.


First, an exemplary embodiment of the present invention provides a medical image processing apparatus using an augmented reality, including: an endoscopic image obtaining unit which obtains an endoscopic image of an object; a memory which stores medical image data generated based on a medical image of the object; and a processor which obtains position and direction information of the endoscope in reference to the medical image data, determines a target area to be displayed in augmented reality among the medical image data based on the obtained position and direction information, and renders partial medical image data corresponding to the target area as an augmented reality image on the endoscopic image.


In addition, an exemplary embodiment of the present invention provides a medical image processing method using an augmented reality, including: obtaining an endoscopic image of an object; obtaining position and direction information of the endoscope in reference to medical image data of the object, wherein the medical image data of the object is generated based on a medical image of the object; determining a target area to be displayed in augmented reality among the medical image data based on the obtained position and direction information; and rendering partial medical image data corresponding to the target area as an augmented reality image on the endoscopic image.


According to an embodiment, the medical image data may include data obtained by synthesizing the medical image of the object and user defined auxiliary data and performing volume rendering on the synthesized data.


In this case, the auxiliary data may be represented as a voxel having a value outside a pre-defined Hounsfield Unit (HU) range.


In addition, the range outside the pre-defined HU range may include a first HU range exceeding a first threshold and a second HU range below a second threshold, and a value of the first HU range and a value of the second HU range may represent different types of auxiliary data.


According to an embodiment, the pre-defined HU range may range from −1000 HU to +1000 HU.


According to another embodiment of the present invention, the processor may generate a first normal map using the endoscopic image, and obtain the position and direction information of the endoscope in reference to the medical image data based on a result of determining a similarity between the first normal map and a plurality of second normal maps obtained from the medical image data.


In this case, the processor may compare the first normal map with second normal maps within a preset range from a position and a direction of the endoscope at a previous time point.


According to an embodiment, the first normal map may be obtained based on reflection information of structured light with respect to a search area of the object.


In addition, the second normal map may be obtained from the medical image data based on position and direction information of a virtual endoscope for the object.


In addition, the direction information of the virtual endoscope may be determined based on a straight line connecting a start point (or a previous position) of a path of the virtual endoscope and a current position of the virtual endoscope.


Next, another exemplary embodiment of the present invention provides a medical image processing apparatus for a medical navigation device, including: a position tracking unit which obtains position information of the medical navigation device within an object; a memory which stores medical image data generated based on a medical image of the object; and a processor which sets a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data, and generates partial medical image data corresponding to the ROI.


In addition, another exemplary embodiment of the present invention provides a medical image processing method for a medical navigation device, including: obtaining position information of the medical navigation device within an object; storing medical image data generated based on a medical image of the object; setting a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data; and generating partial medical image data corresponding to the ROI.


In this case, the ROI may be set based on an area within a preset distance from a position of the medical navigation device in reference to at least one of a horizontal plane, a sagittal plane, and a coronal plane of the medical image data.


In addition, the preset distance in reference to each of the horizontal plane, the sagittal plane, and the coronal plane may be determined by a user input.


According to an embodiment, the partial medical image data may be generated by rendering voxels having a value within a pre-defined Hounsfield Unit (HU) range in the ROI.


In addition, the pre-defined HU range may be determined based on a CT value of a specific tissue of the object.


In addition, the specific tissue may be arbitrarily determined by a user.


According to a further embodiment of the present invention, the partial medical image data may be generated by rendering voxels in the ROI with a light from a virtual light source at a predetermined point based on a position of the medical navigation device.


In this case, each pixel value I(S0,Sn) of the partial medical image data may be determined based on the following equation.







I(S_0, S_n) = \int_{S_0}^{S_n} \left\{ I_\lambda(x)\, e^{-\int_{S_0}^{x} \tau(t)\, dt} + K_{ref} \cdot L \cdot e^{-\int_{P_0}^{x} \tau(t)\, dt} \right\} dx






Herein, S0 is a first voxel sampled by ray casting, Sn is a last voxel sampled by ray casting, Iλ(x) is a value of voxel x, τ(t) is an attenuation coefficient of voxel t, Kref is a reflection coefficient, P0 is a position of the virtual light source, and L is a brightness value of the virtual light source at P0.


In this case, the Kref may be determined based on the following equation.

Kref=max(G(x)*Vp0→x,0)


Herein, G(x) is a gradient vector at voxel x, and Vp0->x is a direction vector from a position P0 of the virtual light source to voxel x.


In addition, the medical image data may be a set of voxels generated using the medical image of the object, and the partial medical image data may be volume rendering data obtained by applying ray casting to the voxels in the ROI.


Advantageous Effects

According to an embodiment of the present invention, the medical image and the surgical site image of the patient may be effectively rendered to provide convenience of surgery and medical diagnosis.


In addition, according to an embodiment of the present invention, it is possible to minimize the amount of computation required to render additional data included in the medical image.


In addition, according to an embodiment of the present invention, the operator can easily identify the anatomical structure of the patient, thereby improving convenience and concentration during the surgery.





DESCRIPTION OF DRAWINGS


FIG. 1 illustrates an embodiment of an output image of a medical navigation device.



FIG. 2 is a block diagram of a medical image processing apparatus according to an embodiment of the present invention.



FIG. 3 is a more detailed block diagram of a processor of the medical image processing apparatus according to an embodiment of the present invention.



FIG. 4 is a block diagram of an endoscope tracking unit according to an embodiment of the present invention.



FIG. 5 illustrates an endoscopic image and a normal map generated using the same.



FIG. 6 illustrates an embodiment that medical image data is provided as an input of augmented reality image of endoscopic image.



FIGS. 7 and 8 illustrate a volume rendering technique according to an embodiment of the present invention.



FIG. 9 is a block diagram of a medical image data generator according to an embodiment of the present invention.



FIG. 10 is a block diagram of a medical image processing apparatus according to another exemplary embodiment of the present invention.



FIG. 11 is a more detailed block diagram of a processor of the medical image processing apparatus according to another embodiment of the present invention.



FIG. 12 illustrates an example of defining region of interest with respect to an object.



FIG. 13 illustrates partial medical image data corresponding to the region of interest defined in an embodiment of FIG. 12.



FIG. 14 illustrates an embodiment of user interface for defining a region of interest with respect to an object.



FIGS. 15 and 16 illustrate partial medical image data corresponding to various regions of interest.



FIG. 17 illustrates a method of generating partial medical image data according to an additional embodiment of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

In the specification, general terms that are currently widely used are selected in consideration of their functions in the present invention, but they may be changed depending on the intentions of those skilled in the art, customs, or the emergence of new technology. Further, in specific cases, there are terms arbitrarily selected by the applicant, and in those cases their meanings will be described in the corresponding description of the invention. Accordingly, a term used in the specification should be understood based not simply on its name but on its substantial meaning and the contents throughout the specification.


Throughout this specification and the claims that follow, when it is described that an element is “coupled” to another element, the element may be “directly coupled” to the other element or “electrically coupled” to the other element through a third element. Further, unless explicitly described to the contrary, the word “comprise” and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of stated elements but not the exclusion of any other elements. Moreover, limitations such as “or more” or “or less” based on a specific threshold may be appropriately substituted with “more than” or “less than”, respectively.


Hereinafter, a medical image processing apparatus and a medical image processing method according to an exemplary embodiment of the present invention will be described with reference to the drawings. The image processing apparatus and the image processing method according to the embodiment of the present invention may be applied to a medical image of an object including a human body and an animal body. The medical image includes an X-ray image, a computed tomography (CT) image, a positron emission tomography (PET) image, an ultrasound image, and a magnetic resonance imaging (MRI) image, but the present invention is not limited thereto. In addition, in the present description, the term medical image data is used in a broad sense including not only the medical image itself but also various types of data generated by rendering the medical image. According to an embodiment, the medical image data may refer to data obtained by performing volume rendering on the medical image. In addition, the medical image data may refer to a three-dimensional data set composed of a group of two-dimensional medical images. The value on a regular grid in the three-dimensional data set configured as described above is called a voxel. The medical image processing apparatus and the medical image processing method according to an embodiment of the present invention may generate or process an image provided by an endoscope and/or a medical navigation device.
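For illustration only, and not as part of the original disclosure, the following minimal sketch shows how such a three-dimensional voxel data set might be assembled from a stack of two-dimensional slices; the use of pydicom, the DICOM tags referenced, and the directory path are all assumptions.

```python
import numpy as np
import pydicom  # assumed available; any DICOM reader would work similarly
from pathlib import Path

def load_volume(slice_dir: str) -> np.ndarray:
    """Stack 2-D CT slices into a 3-D voxel array of Hounsfield Unit values."""
    files = sorted(Path(slice_dir).glob("*.dcm"),
                   key=lambda f: float(pydicom.dcmread(f).ImagePositionPatient[2]))
    slices = [pydicom.dcmread(f) for f in files]
    # Convert stored pixel values to HU using the DICOM rescale tags (assumed present).
    volume = np.stack([s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
                       for s in slices]).astype(np.float32)
    return volume  # shape: (num_slices, rows, cols); each element is one voxel

# volume = load_volume("/path/to/ct_series")  # hypothetical directory
```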



FIG. 2 is a block diagram of the medical image processing apparatus 10 according to an embodiment of the present invention. As illustrated, the medical image processing apparatus 10 according to an embodiment of the present invention may include a processor 11, a communication unit 12, an input unit 13, a memory 14, an endoscopic image obtaining unit 15, and a display output unit 16.


First, the communication unit 12 includes a wired/wireless communication module of various protocols for communicating with an external device. The input unit 13 includes various types of interfaces for receiving a user input for the medical image processing apparatus 10. According to an embodiment, the input unit 13 may include a keyboard, a mouse, a camera, a microphone, a pointer, a USB, a connection port with an external device, and the like, but the present invention is not limited thereto. The medical image processing apparatus may obtain a medical image of the object through the communication unit 12 and/or the input unit 13 in advance. The memory 14 stores a control program used in the medical image processing apparatus 10 and various data related thereto. For example, the memory 14 may store a previously obtained medical image of an object. In addition, the memory 14 may store medical image data generated by rendering the medical image of the object.


The endoscopic image obtaining unit 15 obtains an endoscopic image of a search area of an object captured by the endoscope 50. The endoscopic image obtaining unit 15 may be connected with the endoscope 50 by wire or wireless to receive an image from the endoscope 50.


The display output unit 16 outputs an image generated according to an embodiment of the present invention. That is, the display output unit 16 may output an augmented reality image together with the endoscopic image of the object as described below. In this case, the augmented reality image may include partial medical image data corresponding to the endoscopic image. The image output by the display output unit 16 may be displayed by the monitor 60 connected to the medical image processing apparatus 10.


The processor 11 of the present invention may execute various commands and programs and process data in the medical image processing apparatus 10. In addition, the processor 11 may control each unit of the preceding medical image processing apparatus 10 and control data transmission and reception between the units.


The medical image processing apparatus 10 illustrated in FIG. 2 is a block diagram according to an exemplary embodiment of the present invention, in which the separately displayed blocks logically distinguish elements of the apparatus. Therefore, the preceding elements of the medical image processing apparatus 10 may be mounted on one chip or on plural chips according to the design of the corresponding apparatus. In addition, some of the components of the medical image processing apparatus 10 illustrated in FIG. 2 may be omitted, and additional components may be included in the medical image processing apparatus 10.



FIG. 3 is a more detailed block diagram of the processor 11 of the medical image processing apparatus 10 according to an embodiment of the present invention. As illustrated, the processor 11 of the medical image processing apparatus 10 according to an embodiment of the present invention may include an endoscope tracking unit 110, a medical image data generator 120, a partial medical image data extractor 130 and an augmented reality data generator 140.


The endoscope tracking unit 110 obtains position and direction information of the endoscope 50 that provides an endoscopic image to the medical image processing apparatus 10. More specifically, the endoscope tracking unit 110 obtains the position and direction information of the endoscope 50 (e.g., position and direction information of the endoscope camera) based on a medical image data coordinate system of the object. According to an embodiment of the present invention, the endoscope tracking unit 110 may track the position and direction of the endoscope 50 by analyzing the endoscopic image obtained through the endoscopic image obtaining unit 15. Specific embodiments thereof will be described later. Meanwhile, according to another embodiment of the present invention, the endoscope tracking unit 110 may include a separate endoscope tracking device to track the position and direction of the endoscope 50. When a 6 degree of freedom (DOF) tracking device is coupled to the endoscope 50, the endoscope tracking unit 110 may obtain the position and direction information of the endoscope 50 from the tracking device. When a registration process is performed on the position and direction information obtained from the tracking device, the position and direction information of the endoscope 50 in reference to the medical image data coordinate system may be obtained.


Next, the medical image data generator 120 renders a medical image of the object to generate medical image data. As described above, the medical image includes at least one of an X-ray image, a CT image, a PET image, an ultrasound image, and an MRI. According to an embodiment, the medical image data generator 120 may generate medical image data by performing volume rendering on the medical image of the object. In addition, the medical image data generator 120 may generate medical image data by synthesizing the medical image of the object and the user defined auxiliary data and performing volume rendering on the synthesized data. Specific embodiments thereof will be described later.


Next, the partial medical image extractor 130 extracts the partial medical image data to be displayed in augmented reality among the medical image data based on the computed position and direction information of the endoscope 50. More specifically, the partial medical image extractor 130 determines a target area (i.e., field of view) to be displayed in augmented reality among the medical image data based on the position and direction information of the endoscope 50. According to an embodiment of the present invention, the target area may be determined as a view frustum based on a specific focal length, a viewing angle, and a depth of the endoscope 50. Therefore, when the position and direction information of the endoscope 50 in reference to the medical image data coordinate system is obtained by the endoscope tracking unit 110, a target area to be represented in augmented reality within the medical image data may be determined. The partial medical image extractor 130 extracts partial medical image data corresponding to the target area determined as described above.
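Purely as an illustrative sketch (the function name, square frustum shape, and parameter handling are assumptions rather than the patent's implementation), testing which voxel centers fall inside a view frustum defined by the endoscope pose, viewing angle, and depth range could look like this:

```python
import numpy as np

def in_view_frustum(points, cam_pos, v_view, v_up, fov_deg, near, far):
    """Return a boolean mask of voxel-center points inside a symmetric view frustum.

    points : (N, 3) voxel centers in the medical image data coordinate system
    cam_pos: endoscope position Pf; v_view, v_up: direction vectors of the endoscope
    fov_deg: viewing angle; near, far: depth limits (all assumed example parameters)
    """
    v_view = v_view / np.linalg.norm(v_view)
    v_up = v_up / np.linalg.norm(v_up)
    v_right = np.cross(v_view, v_up)

    rel = points - cam_pos
    depth = rel @ v_view                              # distance along the viewing axis
    half_extent = np.tan(np.radians(fov_deg) / 2.0) * depth
    inside = (depth >= near) & (depth <= far)
    inside &= np.abs(rel @ v_right) <= half_extent    # horizontal bound
    inside &= np.abs(rel @ v_up) <= half_extent       # vertical bound (square frustum assumed)
    return inside
```

The voxels selected by such a mask would then form the partial medical image data to be rendered as the augmented reality overlay.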


Next, the augmented reality data generator 140 renders the augmented reality image from the extracted partial medical image data. That is, the augmented reality data generator 140 may compose the partial medical image data with the endoscopic image and provide the partial medical image data as an augmented reality image for the endoscopic image.



FIG. 4 is a block diagram of the endoscope tracking unit 110 according to an embodiment of the present invention. In order to match the endoscopic image and the augmented reality image in a meaningful form, the following information is required.

    • The position Pf(Xf, Yf, Zf) of the endoscope 50 in reference to the medical image data coordinate system
    • Direction vectors Vview(Xv, Yv, Zv), Vup(Xu, Yu, Zu) and Vright(Xr, Yr, Zr) of the endoscope 50 in reference to the medical image data coordinate system
    • Field of view (FOV) of the endoscope 50
    • Focal length of the endoscope 50
    • Depth of field (DOF) of the endoscope 50


Among the information, the viewing angle, focal length, and depth of field follow the fixed specifications of the endoscope lens (i.e., endoscope camera). Therefore, in order to determine a target area to be represented in augmented reality within the medical image data, position and direction information of the endoscope 50 should be obtained in real time.


According to an embodiment of the present invention, the endoscope tracking unit 110 may track the position and direction of the endoscope 50 via a separate endoscope tracking device. However, according to another embodiment of the present invention, the endoscope tracking unit 110 may track the position and direction of the endoscope 50 by comparing the endoscopic image and the medical image data. More specifically, the endoscope tracking unit 110 tracks the position and direction of the endoscope 50 by comparing a normal map based on the endoscopic image with plural candidate normal maps based on the medical image data. The normal map may be represented as two-dimensional projection data of surface information of the search area.


Referring to FIG. 4, the endoscope tracking unit 110 may include a first normal map generator 112, a second normal map generator 114, and a normal map comparator 116. First, the first normal map generator 112 obtains an endoscopic image and generates the first normal map Mreal using the obtained endoscopic image. In general, an endoscopic image is captured with a spotlight-type light source, which makes the direction of the light easy to determine. In addition, the inside of the human body, which is observed by the endoscope, does not have a separate light source and contains a lot of reflective saliva on its surfaces. As a result, the effects of highlight and shade are maximized in the endoscopic image. Therefore, according to an exemplary embodiment of the present invention, the first normal map generator 112 may analyze the intensity of light in the obtained endoscopic image to generate the first normal map Mreal, in which the surface information of the three-dimensional search area is projected into two-dimensional form. The first normal map Mreal generated by the first normal map generator 112 is transferred to the normal map comparator 116.
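The patent does not specify an algorithm for this step. Purely as a hedged illustration, one simple way to approximate a normal map from a single intensity image is to treat the intensity as a rough height field and derive normals from its gradients; the function name, smoothing parameter, and this shape-from-shading-style assumption are not part of the original disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normal_map_from_intensity(gray, strength=1.0, sigma=2.0):
    """Approximate a normal map from an endoscopic intensity image.

    gray: 2-D float array in [0, 1]. Brighter (highlighted) regions are assumed
    to face the spotlight, so intensity is treated as a rough height field.
    Returns an (H, W, 3) array of unit normals.
    """
    height = gaussian_filter(gray, sigma)      # suppress specular noise
    dz_dy, dz_dx = np.gradient(height)         # surface slopes along rows and columns
    normals = np.dstack((-strength * dz_dx,
                         -strength * dz_dy,
                         np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals
```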



FIG. 5 illustrates an endoscopic image and a normal map generated using the same. FIG. 5(a) shows an endoscopic image obtained from the endoscope 50, and FIG. 5(b) shows a normal map generated using the endoscopic image. As shown in FIG. 5(a), the endoscopic image may clearly show highlights and shades on the curved surface. Accordingly, the medical image processing apparatus 10 of the present invention may generate a normal map as shown in FIG. 5(b) by analyzing the endoscopic image.


According to a further embodiment of the present invention, an endoscopic image to which structured light or patterned light is applied may be used to generate a more accurate first normal map Mreal. In this case, the first normal map Mreal is obtained based on the reflection information of the structured light or the patterned light with respect to the search area of the object.


Returning to FIG. 4, the second normal map generator 114 obtains plural second normal maps Mvirtual from the medical image data. The user may determine the searching path of the endoscope for the medical image data in advance and store the information. According to an embodiment of the present invention, the second normal map generator 114 may divide the predetermined endoscope searching path at predetermined intervals and generate a virtual endoscopic image corresponding to each divided point. The second normal map Mvirtual is obtained by using the virtual endoscopic image. The second normal map Mvirtual may be obtained from the medical image data based on the position and direction information of the virtual endoscope (e.g., position and direction information of the virtual endoscope camera) with regard to the object. In this case, the direction information of the virtual endoscope may be determined based on a straight line connecting the start point (or a previous position) of the path of the virtual endoscope with the current position of the virtual endoscope. However, even if the virtual endoscope has the same position and the same direction vector Vview, plural second normal maps Mvirtual are required in consideration of the rotation of the virtual endoscope about the direction vector Vview. Therefore, according to an embodiment of the present invention, plural second normal maps Mvirtual may be generated at predetermined angular intervals with respect to one point. The second normal maps Mvirtual generated by the second normal map generator 114 are transferred to the normal map comparator 116.


The normal map comparator 116 compares the first normal map Mreal obtained from the endoscopic image with the plural second normal maps Mvirtual obtained from the medical image data to determine similarity. The position and direction information of the endoscope 50 in reference to the medical image data may be obtained based on the second normal map Mvirtual with the highest similarity as a result of the similarity determination. According to a further embodiment of the present invention, in order to reduce the computational complexity of the normal map similarity measurement, the normal map comparator 116 may preferentially compare the first normal map Mreal with the second normal maps Mvirtual within a preset range from the position and the direction of the endoscope 50 at a previous time point.
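As an illustration only, the comparison might be sketched as follows; the similarity metric (mean per-pixel cosine similarity), the pose representation, and the fallback behavior are assumptions, since the patent does not fix these details.

```python
import numpy as np

def normal_map_similarity(m_real, m_virtual):
    """Mean per-pixel cosine similarity between two (H, W, 3) unit-normal maps."""
    return float(np.mean(np.sum(m_real * m_virtual, axis=2)))

def estimate_pose(m_real, candidates, prev_pose=None, search_radius=None):
    """Pick the candidate pose whose second normal map best matches m_real.

    candidates: list of (pose, m_virtual) pairs precomputed along the virtual
    endoscope path, where pose is assumed to store position in its first three
    entries. prev_pose and search_radius optionally restrict the search to
    candidates near the pose at the previous time point.
    """
    if prev_pose is not None and search_radius is not None:
        near = [(p, m) for p, m in candidates
                if np.linalg.norm(np.asarray(p)[:3] - np.asarray(prev_pose)[:3]) <= search_radius]
        candidates = near or candidates   # fall back to the full set if nothing is nearby
    return max(candidates, key=lambda pm: normal_map_similarity(m_real, pm[1]))[0]
```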


When the position and direction information of the endoscope 50 is obtained, the medical image processing apparatus 10 may extract partial medical image data to be displayed in augmented reality as described above, and render the extracted partial medical image data as an augmented reality image.



FIG. 6 illustrates an embodiment that medical image data is provided as an augmented reality image for an endoscopic image. More specifically, FIG. 6(a) shows an endoscopic image, FIG. 6(b) shows partial medical image data, and FIG. 6(c) shows that partial medical image data is provided as an augmented reality image for the endoscopic image. When the position and direction information of the endoscope 50 in reference to the medical image data coordinate system is obtained according to the preceding embodiment of the present invention, the partial medical image data and the endoscopic image may be efficiently matched. Accordingly, information about the surgical site and the adjacent elements of the object may be intuitively identified by the operator.


Meanwhile, the medical image data to be represented in augmented reality may include various types of data. As described above, the medical image data may be data obtained by performing volume rendering on a medical image such as an X-ray image, a CT image, a PET image, an ultrasound image, or an MRI. According to a further embodiment of the present invention, the medical image data may include an image of a target organ (e.g., brain, eye, lung, heart, etc.) represented in a mesh form after segmentation in a medical image of the object. In addition, the medical image data may further include user defined auxiliary data. The auxiliary data includes planning information, such as markers and paths inserted into the medical image before the surgery, represented as a mesh. According to an embodiment of the present invention, the medical image processing apparatus 10 may perform volume rendering on the auxiliary data represented in the mesh form together with the medical image, without performing surface rendering on it. More specifically, the medical image processing apparatus 10 may generate medical image data for augmented reality by synthesizing the auxiliary data with the medical image and performing volume rendering on the synthesized data.



FIGS. 7 and 8 illustrate a volume rendering technique according to an embodiment of the present invention. Volume rendering is a technique for displaying two-dimensional projection images of a three-dimensional sample data set. A general three-dimensional data set may be composed of a group of two-dimensional tomographic images collected from the preceding medical images. The images of the group may have a regular pattern and the same number of pixels. The value on a regular grid in the three-dimensional data set configured as described above is called a voxel.



FIG. 7 illustrates a ray-casting technique that may be used in volume rendering. The ray casting method assumes that the voxels constituting the volume are translucent and emit light by themselves. The ray casting method accumulates the voxel values sampled along each ray r0, r1, . . . , r4 determined according to the line of sight of the user (or the position and direction of the camera) to obtain a rendering value (i.e., a pixel value). In this case, the number of rays is determined according to the resolution of the resultant image. A color cube technique can be used to properly render three-dimensional volume data according to the line of sight of the user.



FIG. 8 illustrates a color cube used in volume rendering. As shown in FIG. 8, the color cube assigns black to the origin (0, 0, 0), assigns white to the vertex (1, 1, 1) diagonally opposite to the origin, and increases the intensity of the corresponding RGB value as the value of each coordinate increases within the cube. The RGB value for each coordinate is used as a normalized texture sampling coordinate value.


In order to define the start point and the end point of each ray in the volume rendering, front and rear face color cube images of the same size (that is, the same pixel size) may be generated. The values obtained at the same position in each of the two generated images become the start point and the end point of the ray corresponding to that position. When the values obtained by three-dimensional texture sampling of the medical image at a predetermined interval along the ray from the start point to the end point are accumulated, the intended volume rendering result may be obtained. The medical image data generator 120 of the medical image processing apparatus 10 according to an embodiment of the present invention may perform volume rendering and generate medical image data using the preceding method.
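The following sketch is illustrative only; the sampling count, nearest-neighbour lookup, and simple transfer function are assumptions rather than the patent's exact method. It accumulates samples along each ray between start and end points such as those taken from the front and rear color cube faces.

```python
import numpy as np

def cast_rays(volume, ray_starts, ray_ends, num_samples=256):
    """Accumulate translucent voxel samples along each ray (front-to-back).

    volume    : (D, H, W) array of voxel intensities in [0, 1]
    ray_starts, ray_ends : (N, 3) start/end points in voxel coordinates,
        e.g. read from the front and rear faces of the color cube.
    """
    t = np.linspace(0.0, 1.0, num_samples)[:, None, None]
    samples = ray_starts[None] * (1 - t) + ray_ends[None] * t        # (S, N, 3)
    idx = np.clip(np.round(samples).astype(int), 0,
                  np.array(volume.shape) - 1)                        # nearest-neighbour sampling
    values = volume[idx[..., 0], idx[..., 1], idx[..., 2]]           # (S, N)

    alpha = values * 0.05                      # assumed simple transfer function
    pixels = np.zeros(values.shape[1])
    transmittance = np.ones_like(pixels)
    for s in range(num_samples):               # front-to-back compositing
        pixels += transmittance * alpha[s] * values[s]
        transmittance *= (1.0 - alpha[s])
    return pixels
```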



FIG. 9 is a block diagram of the medical image data generator 120 according to an embodiment of the present invention. Referring to FIG. 9, the medical image data generator 120 according to an embodiment of the present invention may include a HU setting unit 122 and a volume renderer 124.


The volume renderer 124 receives the medical image of the object and performs volume rendering on the received medical image to generate medical image data. As described above, the medical image may include at least one of an X-ray image, a CT image, a PET image, an ultrasound image, and an MRI, but the present invention is not limited thereto. According to a further embodiment of the present invention, the volume renderer 124 may perform volume rendering on the user defined auxiliary data as well as the medical image of the object. The auxiliary data may represent arbitrary information, such as a path or a critical zone previously prepared by the user, whose size and position are defined in reference to the medical image coordinate system.


In general, the auxiliary data may be defined in the form of a triangle mesh and drawn separately from the medical image and then synthesized. However, according to an exemplary embodiment of the present invention, the volume rendering may be performed after synthesizing previously prepared auxiliary data with the medical image. In order to perform the volume rendering of the auxiliary data together with the medical image, the auxiliary data may be represented as voxels having a predetermined range of values.


In the case of CT, which is the most widely used medical image, the CT values of each component of the human body are shown in Table 1 below. In this case, the unit of each value is Hounsfield Unit (HU).












TABLE 1

Tissue                          CT Number (HU)
Bone                            +1000
Liver                           40~60
White matter                    −20~−30
Gray matter                     −37~−45
Blood                           40
Muscle                          10~40
Kidney                          30
Cerebrospinal Fluid (CSF)       15
Water                           0
Fat                             −50~−100
Air                             −1000










Data according to Digital Imaging and Communications in Medicine (DICOM), the medical imaging standard, uses 2 bytes per pixel. Thus, each pixel can take one of 2^16 values, ranging from −32768 to 32767. Foreign substances such as implants may be inserted into the human body, but they are substituted with appropriate values during the reconstruction process, so that values outside the range of +/−1000 HU are not used in CT.


Therefore, according to an embodiment of the present invention, the auxiliary data may be represented as a voxel having a value outside the pre-defined HU range. In this case, the pre-defined HU range may be from −1000 HU to +1000 HU, but the present invention is not limited thereto. The HU setting unit 122 may substitute the voxel value corresponding to the position occupied by the auxiliary data in the medical image data with a value outside the range of +/−1000 HU. The volume renderer 124 obtains the voxel data substituted by the value outside the pre-defined HU range from the HU setting unit 122 and performs volume rendering on it together with the medical image. When performing volume rendering of the auxiliary data together with the medical image, the amount of computation required to render additional data included in the medical image may be minimized.


According to a further embodiment of the present invention, the range outside the pre-defined HU range may include the first HU range exceeding the first threshold and the second HU range below the second threshold. In this case, the first threshold may be +1000 HU, and the second threshold may be −1000 HU. The HU setting unit 122 may set a value of the first HU range and a value of the second HU range to represent different types of auxiliary data. For example, the value of the first HU range may represent marker information set by the user, and the value of the second HU range may represent path information. According to another embodiment, the value of the first HU range may represent path information, and the value of the second HU range may represent critical zone information. By representing the auxiliary data as voxels having different ranges of values, the user may easily identify different types of auxiliary data. The above-mentioned classification criteria of the auxiliary data type and the HU range allocation method are illustrative of the present invention, and the present invention is not limited thereto.
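As a minimal sketch only, auxiliary data could be stamped into the volume as follows; the specific encoding values and mask inputs are assumptions, since the patent only requires that the values fall outside the pre-defined HU range.

```python
import numpy as np

# Assumed encoding: values above +1000 HU mark one auxiliary type (e.g. markers),
# values below -1000 HU mark another (e.g. a planned path).
MARKER_HU = 5000.0
PATH_HU = -5000.0

def embed_auxiliary(volume, marker_mask, path_mask):
    """Substitute voxels occupied by user-defined auxiliary data with
    out-of-range HU values so they can be volume rendered together with
    the medical image instead of being surface rendered separately."""
    fused = volume.astype(np.float32).copy()
    fused[marker_mask] = MARKER_HU   # first HU range: exceeds the first threshold (+1000 HU)
    fused[path_mask] = PATH_HU       # second HU range: below the second threshold (-1000 HU)
    return fused
```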



FIG. 10 is a block diagram of a medical image processing apparatus 20 according to another embodiment of the present invention. As illustrated, the medical image processing apparatus 20 according to an embodiment of the present invention includes a processor 21, a communication unit 22, an input unit 23, a memory 24, a position tracking unit 25, and a display output unit 26.


First, the communication unit 22 includes a wired/wireless communication module of various protocols for communicating with an external device. The input unit 23 includes various types of interfaces for receiving a user input for the medical image processing apparatus 20. According to an embodiment, the input unit 23 may include a keyboard, a mouse, a camera, a microphone, a pointer, a USB, a connection port with an external device, and the like, but the present invention is not limited thereto. The medical image processing apparatus may obtain a medical image of the object through the communication unit 22 and/or the input unit 23 in advance. The memory 24 stores a control program used in the medical image processing apparatus 20 and various data related thereto. For example, the memory 24 may store a previously obtained medical image of an object. In addition, the memory 24 may store medical image data generated by rendering a medical image of the object.


The position tracking unit 25 obtains position information of the medical navigation device 55 in the object. In the embodiment of the present invention, the medical navigation device 55 may include various kinds of surgical navigation devices. An optical position tracking method or an electromagnetic position tracking method may be used for position tracking of a medical navigation device, but the present invention is not limited thereto. If the position information obtained from the medical navigation device 55 is not matched with the medical image data of the object, the position tracking unit 25 may perform a matching process to generate position information of the medical navigation device 55 in reference to the medical image data. The position tracking unit 25 may be connected to the medical navigation device 55 by wire or wireless to receive position information from the medical navigation device 55.


The display output unit 26 outputs an image generated according to an embodiment of the present invention. That is, the display output unit 26 may output medical image data corresponding to the region of interest with respect to the object as described below. The image output by the display output unit 26 may be displayed by the monitor 65 connected to the medical image processing apparatus 20.


The processor 21 of the present invention may execute various commands or programs and process data in the medical image processing apparatus 20. In addition, the processor 21 may control each unit of the preceding medical image processing apparatus 20 and control data transmission and reception between the units.


The medical image processing apparatus 20 illustrated in FIG. 10 is a block diagram according to an exemplary embodiment of the present invention, in which the separately displayed blocks logically distinguish elements of the apparatus. Therefore, the preceding elements of the medical image processing apparatus 20 may be mounted on one chip or on plural chips according to the design of the corresponding apparatus. In addition, some of the components of the medical image processing apparatus 20 illustrated in FIG. 10 may be omitted, and additional components may be included in the medical image processing apparatus 20.



FIG. 11 is a more detailed block diagram of a processor 21 of the medical image processing apparatus 20 according to another embodiment of the present invention. As illustrated, the processor 21 of the medical image processing apparatus 20 according to another embodiment of the present invention may include a region of interest (ROI) setting unit 210, a medical image data generator 220, and a partial medical image data generator 230.


The ROI setting unit 210 sets the ROI of the user with respect to the object. More specifically, the ROI setting unit 210 receives the position information of the medical navigation device 55 from the position tracking unit 25 and sets the ROI based on the position information. According to an embodiment of the present invention, the ROI is set based on an area within a preset distance from the position of the medical navigation device 55 in reference to at least one of the horizontal plane, the sagittal plane, and the coronal plane of the medical image data. As such, the ROI may be set as a three-dimensional region including an area within a preset distance from a plane based on the position of the medical navigation device 55. Therefore, the ROI may be set as a slab having a thickness based on the preset distance. According to an embodiment, the ROI may be set in reference to at least one of the horizontal plane, the sagittal plane, and the coronal plane of the medical image data. To this end, the ROI setting unit 210 may receive in advance, as a user input, information on a preset distance (i.e., area setting information) in reference to each of the horizontal plane, the sagittal plane, and the coronal plane. The ROI setting unit 210 sets an ROI by cropping an area included, from the position of the medical navigation device 55, within the first distance in reference to the horizontal plane, within the second distance in reference to the sagittal plane, and/or within the third distance in reference to the coronal plane. If the user does not input area setting information on at least one reference plane among the horizontal plane, the sagittal plane and the coronal plane, the ROI setting unit 210 may not perform cropping in reference to the plane. The ROI information obtained by the ROI setting unit 210 is transferred to the partial medical image data generator 230.
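For illustration only (the axis ordering, voxel-unit distances, and NaN masking convention below are assumptions, not the patent's implementation), cropping the ROI as a slab around the navigation device position might be sketched as:

```python
import numpy as np

def crop_roi(volume, device_pos, dist_axial=None, dist_coronal=None, dist_sagittal=None):
    """Mask out voxels outside the ROI slab(s) around the device position.

    volume     : (D, H, W) voxel array; axes 0/1/2 are assumed to correspond to the
                 horizontal (axial), coronal, and sagittal reference planes.
    device_pos : (z, y, x) voxel index of the medical navigation device.
    dist_*     : preset distances in voxels; None means no cropping on that plane,
                 mirroring the case where the user enters no area setting information.
    """
    mask = np.ones(volume.shape, dtype=bool)
    grids = np.indices(volume.shape)
    for axis, dist in enumerate((dist_axial, dist_coronal, dist_sagittal)):
        if dist is not None:
            mask &= np.abs(grids[axis] - device_pos[axis]) <= dist
    roi = np.where(mask, volume, np.nan)   # NaN marks voxels excluded from rendering
    return roi, mask
```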


Next, the medical image data generator 220 renders a medical image of the object to generate medical image data. As described above, the medical image includes at least one of an X-ray image, a CT image, a PET image, an ultrasound image, and an MRI. The medical image data may refer to voxels generated using the medical image of the object. However, the present invention is not limited thereto, and the medical image data may refer to data obtained by volume rendering the medical image of an object.


Next, the partial medical image data generator 230 extracts and renders the medical image data of the portion corresponding to the ROI among the medical image data of the object. More specifically, the partial medical image data generator 230 may perform volume rendering by selectively ray casting the voxels of the ROI. Accordingly, by preventing structures outside the ROI from overlapping with structures inside the ROI, the operator can easily identify the anatomical structure of the object.


According to an embodiment of the present invention, the partial medical image data generator 230 may generate volume rendering data in various ways. According to an embodiment, the partial medical image data generator 230 may generate volume rendering data by performing ray casting on all of the voxels included in the ROI.


According to another embodiment of the present invention, the partial medical image data generator 230 may generate volume rendering data by selectively performing ray casting on voxels having a value within a pre-defined HU range in the ROI. In this case, the pre-defined HU range may be determined based on the CT value of a specific tissue of the object. In addition, the specific tissue for performing the volume rendering may be selected by the user. In this way, volume rendering data may be generated by selectively performing ray casting only on voxels corresponding to a specific tissue selected by a user setting in the ROI.


As explained through Table 1, the range of CT values of each component of a human body is predetermined. If the user wants to selectively check only gray matter in the ROI with respect to the object, the user may input an arbitrary value within the CT value range of −37 to −45 or input a CT value range including the corresponding range through the input unit 23. In addition, the user may input a selection for gray matter among predetermined tissues of the object through the input unit 23. The partial medical image data generator 230 may generate volume rendering data by selectively performing ray casting only on voxels corresponding to the gray matter within the ROI of the object based on the user input.
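Continuing the earlier sketches, and purely as an illustration, selecting voxels by a pre-defined HU range before ray casting could look like the following; the function name and the convention of zeroing excluded voxels are assumptions, and the HU window shown is simply the gray matter range from Table 1.

```python
import numpy as np

def select_tissue(roi_volume, hu_min, hu_max):
    """Keep only voxels whose CT value falls inside the pre-defined HU range.

    Example: gray matter from Table 1 uses hu_min=-45 and hu_max=-37, so ray
    casting is then applied only to voxels in that window inside the ROI.
    """
    mask = (roi_volume >= hu_min) & (roi_volume <= hu_max)
    return np.where(mask, roi_volume, 0.0)   # excluded voxels contribute nothing

# gray_matter = select_tissue(roi, hu_min=-45, hu_max=-37)
```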


According to another exemplary embodiment of the present invention, the partial medical image data generator 230 may generate partial medical image data by rendering voxel values of an ROI when light is emitted from a virtual light source at a predetermined point based on the position of the medical navigation device 55. More specifically, the partial medical image data generator 230 may assume that a virtual light source exists at a predetermined point based on the position of the medical navigation device 55, and set voxel values of an ROI when light is emitted from the virtual light source. The partial medical image data generator 230 may generate volume rendering data by performing ray casting on the voxels set as described above. Specific embodiments thereof will be described later.


The partial medical image data generated by the partial medical image data generator 230 may be provided as an output image of the display output unit 26.



FIG. 12 illustrates an example of defining an ROI with respect to an object. Referring to FIG. 12, in the color cube for performing the volume rendering, an area included within a specific distance in reference to the sagittal plane is set as the ROI. FIG. 13 illustrates partial medical image data corresponding to the ROI defined as described above. More specifically, FIG. 13(a) illustrates volume rendering data of an object included in a cube, and FIG. 13(b) illustrates volume rendering data corresponding to the ROI defined in an embodiment of FIG. 12. As shown in FIG. 13(b), according to the ROI setting of the user, only an area included within a specific distance in reference to the sagittal plane of the object may be volume-rendered and displayed.



FIG. 14 illustrates an embodiment of a user interface for defining an ROI of an object. Referring to FIG. 14, the user interface may receive information (i.e., area setting information) on a preset distance in reference to each of the horizontal plane, the sagittal plane, and the coronal plane of the object from the user. The ROI is set based on an area within a preset distance from the position of the medical navigation device 55 in reference to at least one of the horizontal plane, the sagittal plane, and the coronal plane of the medical image data. According to an embodiment of the present invention, the position information of the medical navigation device 55 obtained from the position tracking unit 25 may be displayed in a specific coordinate on the slide bar for each reference plane in the user interface. The user may set a reference distance to be included in the ROI based on the coordinates displayed on each slide bar. The ROI setting unit 210 receives at least one of first distance information in reference to the horizontal plane, second distance information in reference to the sagittal plane, and third distance information in reference to the coronal plane through the user interface. The ROI setting unit 210 sets an ROI by cropping an area included, from the position of the medical navigation device 55, within the first distance in reference to the horizontal plane, within the second distance in reference to the sagittal plane, and/or within the third distance in reference to the coronal plane.



FIGS. 15 and 16 illustrate partial medical image data corresponding to various regions of interest. FIG. 15(a) illustrates a case where the ROI is set based on the sagittal plane of the object, FIG. 15(b) illustrates a case where the ROI is set based on the horizontal plane of the object, and FIG. 15(c) illustrates a case where the ROI is set based on the coronal plane of the object, respectively. In addition, FIG. 16 illustrates a case where the ROI is set based on all of the sagittal plane, the horizontal plane, and the coronal plane of the object. As described above, according to the embodiment of the present invention, the ROI may be set in various forms according to the user setting, and only the medical image corresponding to the ROI may be selectively rendered and provided to the user.


Meanwhile, in the above embodiments, it is illustrated that the ROI is set based on the horizontal plane, sagittal plane, and coronal plane of the medical image data, but the present invention is not limited thereto. In the present invention, the ROI may be set with respect to any reference axis or reference plane of the medical image data.



FIG. 17 illustrates a method of generating partial medical image data according to an additional embodiment of the present invention. As described above, the partial medical image data generator 230 may generate the partial medical image data by rendering voxel values of the ROI under an assumption that light is emitted from a virtual light source at a predetermined point for a special effect on the ROI 30.


First, each pixel value I(S0,Sn) of the volume rendering data using the general ray casting method may be expressed by Equation 1 below.










I(S_0, S_n) = \int_{S_0}^{S_n} I_\lambda(x)\, e^{-\int_{S_0}^{x} \tau(t)\, dt}\, dx   [Equation 1]







Here, S0 denotes the first voxel sampled by ray casting, and Sn denotes the last voxel sampled by ray casting. In addition, Iλ(x) denotes a value of the voxel x (or an intensity of the voxel x), and






e^{-\int_{S_0}^{x} \tau(t)\, dt}








denotes a transparency accumulated from the first voxel S0 to the current voxel x. τ(t) denotes an attenuation coefficient of the voxel t.


According to an embodiment of the present invention, it is assumed that a virtual light source exists at a predetermined point P0 based on the position of the medical navigation device 55, and the voxel values of the ROI 30 may be set when the light is emitted from the virtual light source. In this case, the value I′λ(x) of the voxel x may be expressed by Equation 2 below.

I′λ(x)=Iλ(x)+R(x)  [Equation 2]


Here, R(x) is a voxel value adjusted by the light emitted from the virtual light source and may be defined as in Equation 3 below.










R(x) = K_{ref} \cdot L \cdot e^{-\int_{P_0}^{x} \tau(t)\, dt}   [Equation 3]







Here, Kref denotes a reflection coefficient, P0 denotes a position of the virtual light source, and L denotes a brightness value of the virtual light source at P0, respectively. According to an embodiment of the present invention, the reflection coefficient Kref may be determined as in Equation 4 below.

Kref=max(G(x)*Vp0→x,0)  [Equation 4]


Here, G(x) denotes a gradient vector in the voxel x, and Vp0->x denotes the direction vector from the position P0 of the virtual light source to the voxel x, respectively. According to an embodiment, the gradient vector G(x) may be defined as a normal to a reference plane in which peripheral voxel values change most with respect to the voxel x.


Therefore, according to an embodiment of the present invention, each pixel value I(S0,Sn) of the partial medical image data may be determined based on Equation 5 below.










I(S_0, S_n) = \int_{S_0}^{S_n} \left\{ I_\lambda(x)\, e^{-\int_{S_0}^{x} \tau(t)\, dt} + K_{ref} \cdot L \cdot e^{-\int_{P_0}^{x} \tau(t)\, dt} \right\} dx   [Equation 5]







The partial medical image data generator 230 sets the volume rendering data generated as above as the partial medical image data. According to this additional embodiment of the present invention, the effect is as if an illumination were inserted into or around the ROI. When an illumination is inserted into or around the ROI, the stereoscopic effect of the ROI may be maximized due to the shadow effect on the ROI, and the user's ability to identify the ROI may be increased.
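Purely as an illustrative sketch of Equations 2 through 5 (the sampling step, gradient computation, and per-pixel loop structure are assumptions, not the patent's exact implementation), the virtual-light-source rendering of one pixel could be approximated as:

```python
import numpy as np

def attenuation_sum(tau, p_from, p_to, num_samples=32):
    """Approximate the line integral of the attenuation coefficient tau between two points."""
    t = np.linspace(0.0, 1.0, num_samples)[:, None]
    pts = np.clip(np.round(p_from * (1 - t) + p_to * t).astype(int),
                  0, np.array(tau.shape) - 1)
    step = np.linalg.norm(p_to - p_from) / num_samples
    return tau[pts[:, 0], pts[:, 1], pts[:, 2]].sum() * step

def render_pixel(volume, tau, sample_points, light_pos, light_brightness):
    """Accumulate one pixel value I(S0, Sn) along a ray through the ROI (Equation 5).

    volume: voxel intensities I_lambda; tau: attenuation coefficients;
    sample_points: (n, 3) voxel coordinates S0..Sn sampled by ray casting;
    light_pos: virtual light source position P0; light_brightness: L.
    """
    grad = np.stack(np.gradient(volume), axis=-1)   # gradient vector G(x) per voxel
    s0 = sample_points[0]
    pixel = 0.0
    for x in sample_points:
        xi = tuple(np.clip(np.round(x).astype(int), 0, np.array(volume.shape) - 1))
        v_light = x - light_pos
        v_light = v_light / (np.linalg.norm(v_light) + 1e-9)   # direction P0 -> x
        k_ref = max(float(grad[xi] @ v_light), 0.0)            # Equation 4
        reflection = k_ref * light_brightness * np.exp(-attenuation_sum(tau, light_pos, x))  # Equation 3
        emission = volume[xi] * np.exp(-attenuation_sum(tau, s0, x))
        pixel += emission + reflection                         # integrand of Equation 5
    return pixel
```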


The description of the present invention is used for exemplification and those skilled in the art will be able to understand that the present invention can be easily modified to other detailed forms without changing the technical idea or an essential feature thereof. Thus, it is to be appreciated that the embodiments described above are intended to be illustrative in every sense, and not restrictive. For example, each component described as a single type may be implemented to be distributed and similarly, components described to be distributed may also be implemented in an associated form.


The scope of the present invention is defined by the following claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

Claims
  • 1. A medical image processing apparatus for a medical navigation device, comprising: a position tracker configured to obtain position information of the medical navigation device within an object; a memory configured to store medical image data generated based on a medical image of the object; and a processor configured to set a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data, and generate partial medical image data corresponding to the ROI, wherein the ROI is set to a three-dimensional region including a region within a preset distance from a reference plane based on a position of the medical navigation device, wherein the reference plane is set based on at least one of a horizontal plane, a sagittal plane, and a coronal plane of the medical image data, wherein the partial medical image data is generated by rendering voxels in the ROI with a light from a virtual light source at a predetermined point based on a position of the medical navigation device, and wherein each pixel value I(S0,Sn) of the partial medical image data is determined based on the following equation: I(S0,Sn) = ∫_{S0}^{Sn} {Iλ(x)·e^(−∫_{S0}^{x} τ(t)dt) + Kref·L·e^(−∫_{P0}^{x} τ(t)dt)} dx.
  • 2. The apparatus of claim 1, wherein the preset distance in reference to each of the horizontal plane, the sagittal plane, and the coronal plane is determined by a user input.
  • 3. The apparatus of claim 1, wherein the partial medical image data is generated by rendering voxels in the ROI having a value within a pre-defined Hounsfield Unit (HU) range.
  • 4. The apparatus of claim 3, wherein the pre-defined HU range is determined based on a CT value of a specific tissue of the object.
  • 5. The apparatus of claim 4, wherein the specific tissue is determined by a selection of a user.
  • 6. The apparatus of claim 1, wherein the Kref is determined based on the following equation: Kref=max(G(x)*Vp0→x, 0), wherein G(x) is a gradient vector at voxel x, and Vp0→x is a direction vector from a position P0 of the virtual light source to voxel x.
  • 7. The apparatus of claim 1, wherein the medical image data is a set of voxels generated using the medical image of the object, and the partial medical image data is volume rendering data obtained by applying ray casting to voxels in the ROI.
  • 8. A medical image processing method for a medical navigation device, comprising: obtaining position information of the medical navigation device within an object; storing medical image data generated based on a medical image of the object; setting a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data; and generating partial medical image data corresponding to the ROI, wherein the ROI is set to a three-dimensional region including a region within a preset distance from a reference plane based on a position of the medical navigation device, wherein the reference plane is set based on at least one of a horizontal plane, a sagittal plane, and a coronal plane of the medical image data, wherein the partial medical image data is generated by rendering voxels in the ROI with a light from a virtual light source at a predetermined point based on a position of the medical navigation device, and wherein each pixel value I(S0,Sn) of the partial medical image data is determined based on the following equation: I(S0,Sn) = ∫_{S0}^{Sn} {Iλ(x)·e^(−∫_{S0}^{x} τ(t)dt) + Kref·L·e^(−∫_{P0}^{x} τ(t)dt)} dx.
  • 9. The method of claim 8, wherein the preset distance in reference to each of the horizontal plane, the sagittal plane, and the coronal plane is determined by a user input.
  • 10. The method of claim 8, wherein the partial medical image data is generated by rendering voxels in the ROI having a value within a pre-defined Hounsfield Unit (HU) range.
  • 11. The method of claim 10, wherein the pre-defined HU range is determined based on a CT value of a specific tissue of the object.
  • 12. The method of claim 11, wherein the specific tissue is determined by a selection of a user.
  • 13. The method of claim 8, wherein the Kref is determined based on the following equation: Kref=max(G(x)*Vp0→x, 0), wherein G(x) is a gradient vector at voxel x, and Vp0→x is a direction vector from a position P0 of the virtual light source to voxel x.
  • 14. The method of claim 8, wherein the medical image data is a set of voxels generated using the medical image of the object, and the partial medical image data is volume rendering data obtained by applying ray casting to voxels in the ROI.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/807,139 filed on Mar. 2, 2020, which is a continuation of International Patent Application No. PCT/KR2017/009541 filed on Aug. 31, 2017, the entire contents of which are incorporated herein by reference.

Related Publications (1)
Number Date Country
20220051786 A1 Feb 2022 US
Continuations (2)
Number Date Country
Parent 16807139 Mar 2020 US
Child 17513884 US
Parent PCT/KR2017/009541 Aug 2017 US
Child 16807139 US