The present invention relates to an image processing apparatus and an image processing method, and, more particularly, the present invention relates to an image processing apparatus and image processing method which perform registration among a plurality of images.
A technique of registering a plurality of two-dimensional or three-dimensional images (hereinafter, a plurality of images is also referred to as a plurality of frames) is used in various fields, and is an important technique. For example, in the field of medical images, various types of three-dimensional images such as a CT (Computed Tomography) image, an MR (Magnetic Resonance) image, a PET (Positron Emission Tomography) image, and an ultrasonic image are acquired. For these various types of acquired three-dimensional images, an image registration technique is used in order to register and superimpose the images for display. Such a display method is called fusion image display, and it enables display that captures the features of the respective images. For example, the CT image is suitable to display detailed shapes, and the PET image is suitable to display human body functions such as metabolism and blood flow.
In addition, in the medical field, registration among a plurality of frames of medical images acquired in time series from the same patient allows the state of a lesion to be observed over time, so that the presence/absence of a disease or its progression can be diagnosed. In the registration among the plurality of images, a fixed image is called a referring image, and an image whose coordinates are converted for the registration is called a moving image.
The techniques for the registration among the plurality of images can be classified into a rigid registration method and a non-rigid registration method. In the rigid registration method, the images are registered by parallel movement and rotation of the images. This method is suitable for an image of a region which does not easily deform, such as a bone. On the other hand, in the non-rigid registration method, the correspondence relationship between images is obtained by applying complicated deformation, including local deformation, to the images. Therefore, this method is applied to the registration of a plurality of frames of medical images acquired in treatment planning and/or follow-up, or to the registration between a standard human body/organ model and an individual model, and therefore has a wide range of applications.
In a generally-known non-rigid registration method, the moving image is deformed by arranging a control grid on the moving image and moving control points on the control grid. An image similarity is obtained between the deformed moving image and a referring image, optimization calculation based on the obtained image similarity is performed, and a movement amount (deformation amount) of each control point on the control grid is obtained. In this case, the movement amount of a pixel between the control points on the control grid is calculated by interpolation based on the movement amounts of the control points arranged in the periphery of the pixel. The coordinates of the moving image are converted by using the obtained movement amount of each pixel, so that registration that locally deforms an image is executed. In addition, multiresolution deformation can be executed by changing the interval between the control points, i.e., the number of grid points.
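As an illustration of this interpolation (not part of the original disclosure; the function name, grid size, and the use of scipy are assumptions), the following Python sketch upsamples a coarse grid of control-point displacements to a per-pixel displacement field with cubic spline interpolation, which plays the role of the B-spline interpolation between control points:

```python
import numpy as np
from scipy import ndimage

def dense_displacement(ctrl_drow, ctrl_dcol, image_shape):
    """Interpolate per-pixel displacements from control-point displacements.

    ctrl_drow, ctrl_dcol: 2-D arrays of row/column displacements defined at
    the control points of the control grid. image_shape: shape of the
    moving image. order=3 requests cubic spline interpolation.
    """
    zoom = (image_shape[0] / ctrl_drow.shape[0],
            image_shape[1] / ctrl_drow.shape[1])
    d_row = ndimage.zoom(ctrl_drow, zoom, order=3)
    d_col = ndimage.zoom(ctrl_dcol, zoom, order=3)
    return d_row, d_col

# Example: a 5x5 control grid on a 256x256 moving image.
rng = np.random.default_rng(0)
d_row, d_col = dense_displacement(rng.normal(0, 2, (5, 5)),
                                  rng.normal(0, 2, (5, 5)), (256, 256))
```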
Patent Document 1 describes that, on a moving image, landmarks corresponding to regions similar to those on a referring image are used as the control points instead of grid control points, and that the image is subjected to tile division (segmentation) and deformed by using those control points. When local deformation is desired, a landmark is added into the divided tiles and the image is further subjected to the tile division, so that the registration is executed.
Patent Document 1: Japanese Patent Application Laid-Open Publication (Translation of PCT Application) No. 2007-516744
In the above-described registration using the control grid, the number of control points on the control grid reaches the order of several thousand to several tens of thousands. Therefore, the optimization calculation for obtaining the movement amount of each control point is complicated, and the registration accuracy depends on the initial positions of the control points on the control grid. By using the above-described rigid registration method, the initial position of each of the control points can be roughly set. However, when complicated deformation occurs due to temporal changes in soft tissues and organs, the rigid registration method itself may not be applicable. Therefore, it is difficult to obtain correct initial positions.
In addition, when a registration result is corrected, it is required to move a plurality of control points on the control grid to their corresponding positions one by one. This operation is very laborious.
On the other hand, in the technique described in Patent Document 1, when complicated local deformation is desired, a processing of sequentially adding landmarks and dividing tiles is required. However, when the areas of the tile regions are reduced by the division processing, it becomes difficult to accurately search for the corresponding points of an anatomical region within the existing tiles. In addition, in the processing of the sequential addition of the landmarks, a robust erroneous-correspondence exclusion processing using the matching degree of all the landmarks is difficult.
An object of the present invention is to provide an image processing apparatus and an image processing method which have high registration processing accuracy.
The above and other objects and novel characteristics of the present invention will be apparent from the description of the present specification and the accompanying drawings.
The summary of a typical one of the inventions disclosed in the present application will be briefly described as follows.
That is, in order to deform the moving image, the control grid is set on the moving image. In addition, a feature point (hereinafter, also referred to as a landmark) is extracted from each of the moving image and the referring image. Points at positions corresponding to the extracted feature points are searched out from the referring image and the moving image, and the initial positions of the control points on the control grid set on the moving image are set by using the searched-out points. The extracted feature points on the referring image and the moving image correspond to each other (are paired), and are feature parts on the respective images. In this manner, the positions of the feature points corresponding to each other (positions on the referring image and the moving image) are reflected in the initial positions of the control points. Since the control points can be arranged at more correct positions before the moving image is deformed for the registration, the registration accuracy can be improved.
In addition, according to an embodiment, feature points can be manually inputted (edited). Based on this input, the registration result can be corrected by deforming the control grid, so that the correction is facilitated.
According to an embodiment, an image processing apparatus and an image processing method which have high registration processing accuracy can be provided.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that the same components are denoted by the same reference symbols throughout all the drawings for describing the embodiments, and the repetitive description thereof will be omitted.
<Outline>
From a referring image and a moving image, feature points corresponding to each other are extracted as pairs. The position information of each feature-point pair is extracted from the referring image and the moving image, and the initial position of each control point on a control grid used for the registration processing is determined by using the extracted position information. In this manner, the optimization calculation for obtaining the movement amount of each control point can be executed more accurately. As a result, stable and accurate registration processing can be achieved.
<Configuration and Operation>
The image processing apparatus includes an image sampling unit 13, a feature-point detection/correspondence unit 14, a control grid deforming unit 16, a registering unit 10, and a moving-image deforming unit 17. In this drawing, note that a reference symbol “18” denotes a moving image whose registration has been completed by the image processing apparatus.
The feature-point detection/correspondence unit 14 receives the referring image 11 and the moving image 12, extracts feature points from each of the images, and extracts a position corresponding to each of the extracted feature points from the referring image 11 and the moving image 12. The information of the extracted positions is outputted as position information (hereinafter, also referred to as corresponding-point position information) 15 of the points corresponding to the extracted feature points. The control grid deforming unit 16 determines the initial positions of the control points on the control grid by deforming the control grid using the corresponding-point position information 15 outputted from the feature-point detection/correspondence unit 14. The determined initial positions of the control points are fed to the registering unit 10.
The image sampling unit 13 receives the referring image 11, extracts image sampling points and sampling data of the referring image 11 used for the calculation of the image similarity, and feeds them to the registering unit 10. The registering unit 10 executes the registration in accordance with the image data and the control grid received from each unit, and feeds the registration result to the moving-image deforming unit 17. The moving-image deforming unit 17 deforms the moving image 12 in accordance with the fed registration result, and outputs the deformation result as the registered moving image 18. These operations will be described in detail after a description of a hardware configuration of the image processing apparatus.
In addition, the image processing apparatus according to an embodiment can be implemented on a general computer, and may be placed in a medical facility or the like. Alternatively, the image processing apparatus may be placed in a data center, and a result of the image registration may be transmitted to a client terminal via a network. In this case, an image to be registered may be fed from the client terminal to the image processing apparatus in the data center via the network. The following description exemplifies a case in which the image processing apparatus is implemented on a computer placed in a medical facility.
In a hardware configuration example, the computer which achieves the image processing apparatus includes a CPU 40, a ROM 41, a RAM 42, a storage device 43, an image input unit 44, a medium input unit 45, an input control unit 46, and an image generating unit 47.
The ROM 41 and the RAM 42 store a program and data which are required to achieve the image processing apparatus by the computer. The CPU 40 executes the program stored in the ROM 41 and the RAM 42, so that the various types of processing in the image processing apparatus are achieved. The storage device 43 described above is a magnetic storage device which stores the input images and others. The storage device 43 may include a nonvolatile semiconductor storage medium (e.g., a flash memory). In addition, an external storage device connected via a network may be used.
The program to be executed by the CPU 40 may be stored in a storage medium 50 (e.g., an optical disk), and the medium input unit 45 (e.g., an optical disk drive) may read the program and store it in the RAM 42. Alternatively, the program may be stored in the storage device 43 and loaded from the storage device 43 into the RAM 42. Alternatively, the program may be previously stored in the ROM 41.
The image input unit 44 is an interface to which images captured by an image capturing device 49 are inputted. The CPU 40 executes various types of processing by using the images inputted from the image capturing device 49. The medium input unit 45 reads out data and a program stored in the storage medium 50. The data and the program read out from the storage medium 50 are stored in the RAM 42 or the storage device 43.
The input control unit 46 is an interface which receives an operation input inputted by a user from an input device 51 (e.g., a keyboard). The operation input received by the input control unit 46 is processed by the CPU 40. For example, the image generating unit 47 generates image data for display from the moving image 12 deformed by the moving-image deforming unit 17 described above.
Next, the operation of the image processing apparatus according to the first embodiment will be described with reference to the configuration described above.
The processing starts (“START” in the flowchart). First, the feature-point detection/correspondence unit 14 extracts feature points from each of the referring image 11 and the moving image 12 (step S101).
The feature point is provided to a feature image part on the image. Although the feature point itself will be described in detail later, the extracted feature points on the referring image 11 and the moving image 12 are made to correspond to each other, so that feature-point pairs are extracted.
In step S102, a plurality of the feature-point pairs are extracted as described above. That is, a plurality of corresponding-point pairs are extracted. The extracted feature-point pairs may include a pair whose inter-feature-amount distance is relatively large. Such a feature-point pair has low reliability, and is therefore removed as an erroneous corresponding-point pair in step S103. The corresponding-point position information 15, from which the erroneous corresponding-point pairs have been removed, is created in step S103.
The control grid deforming unit 16 determines the initial position of each control point on the control grid by deforming the control grid using the corresponding-point position information 15 (step S104). The determined initial positions are fed to the registering unit 10 as control-point moving amount information 1001.
As described later, the registering unit 10 includes a coordinate geometric transforming unit 1002, an image similarity calculating unit 1003, and an image similarity maximizing unit 1004. The coordinate geometric transforming unit 1002 deforms the moving image 12 in accordance with the control-point moving amount information 1001.
To the image similarity calculating unit 1003, the sampling data of the referring image 11 and the luminance values at the corresponding sampling points on the deformed moving image 12 are fed, and the image similarity between the referring image 11 and the deformed moving image 12 is calculated.
The image similarity maximizing unit 1004 determines whether the image similarity is maximized. If it is determined that the image similarity is not maximized, the control-point moving amount information 1001 is updated, and the deformation of the moving image 12 and the calculation of the image similarity are repeated.
On the other hand, if it is determined that the image similarity is maximized, the registering unit 10 outputs the control-point moving amount information 1001 obtained when the image similarity is maximized to the moving-image deforming unit 17. The moving-image deforming unit 17 executes geometric transformation of the moving image 12 by using the control-point moving amount information 1001, and generates and outputs the registered moving image 18 (step S110).
Each of these units will be described in more detail below.
<Feature-Point Detection/Correspondence Unit>
The feature-point detection/correspondence (corresponding-point setting) unit 14 detects the image feature points on each of the referring image 11 and the moving image 12, and records the feature amount of each feature point.
As a method of detecting the image feature point and a method of describing the feature amount, a publicly-known method can be used. For example, SIFT (Scale-Invariant Feature Transform) feature point detection and SIFT feature amount description can be used. In this embodiment, since the images to be registered are three-dimensional images, the feature point detection and feature amount description methods are extended from two dimensions to three dimensions.
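For the two-dimensional case, the publicly-known detection and description can be exercised with OpenCV as below; this is an illustration only (the file names are hypothetical, and the three-dimensional extension used in the embodiment is not provided by OpenCV):

```python
import cv2

# Hypothetical input slices; any 8-bit grayscale images work.
img_ref = cv2.imread("referring.png", cv2.IMREAD_GRAYSCALE)
img_mov = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
# Each call returns keypoints and their 128-dimensional SIFT descriptors.
kp_ref, des_ref = sift.detectAndCompute(img_ref, None)
kp_mov, des_mov = sift.detectAndCompute(img_mov, None)
```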
Next, the feature-point detection/correspondence unit 14 searches for the feature point on the moving image 12 which corresponds to each feature point on the referring image 11. Specifically, when the feature amounts (feature-amount vectors) of a feature point Pr on the referring image 11 and a feature point Pf on the moving image 12 are denoted by “Vr” and “Vf”, respectively, an inter-feature-amount Euclidean distance “d” is calculated by Expression (1). Here, “M” represents the dimension of the feature amount.
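Expression (1) itself is not reproduced in the text above; given the definitions of Vr, Vf, and M, the standard Euclidean distance takes the following form (a reconstruction consistent with those definitions):

[Expression 1]

d = √( Σ_{i=1}^{M} (Vr_i − Vf_i)² ) Expression (1)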
The feature-point detection/correspondence unit 14 calculates the distances “d” between the feature amount of a certain feature point in the referring image 11 and the feature amounts of all the feature points included in the moving image 12, and detects the feature point having the smallest distance “d” as the (paired) point corresponding to the certain feature point.
It can be determined that a pair of feature points having a large inter-feature-amount distance “d” therebetween has low reliability. Therefore, the feature-point detection/correspondence unit 14 performs the processing of removing such a low-reliability feature-point pair as an erroneous corresponding-point pair in step S103.
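A minimal brute-force version of this correspondence search and reliability filtering can be sketched as follows (the function name and the distance threshold are assumptions, not part of the original disclosure):

```python
import numpy as np

def match_features(des_ref, des_mov, max_distance):
    """Pair each referring-image descriptor with its nearest moving-image
    descriptor by Euclidean distance (Expression (1)); pairs whose distance
    exceeds max_distance are dropped as erroneous corresponding-point pairs,
    as in step S103."""
    pairs = []
    for i, v_r in enumerate(des_ref):
        d = np.linalg.norm(des_mov - v_r, axis=1)  # distances to all candidates
        j = int(np.argmin(d))
        if d[j] <= max_distance:
            pairs.append((i, j, float(d[j])))
    return pairs
```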
The corresponding-point position information 15 outputted to the control grid deforming unit 16 includes the information of the corresponding-point (feature-point) pairs described above.
In the feature-point detection/correspondence unit 14, edit (including addition and deletion) of the corresponding-point pairs is possible. For example, the information of the corresponding-point pairs described above can be edited manually.
<Control Grid Deforming Unit>
The control grid deforming unit 16 deforms the control grid used for the registration processing by using the corresponding-point position information 15 (initial position setting). Although not particularly limited, the control grid used for the deformation of the moving image 12 is arranged on the moving image 12 (control point setting). While the grid-pattern control points on the control grid arranged on the moving image 12 are regarded as vertices of a three-dimensional mesh, the control point mesh is deformed by using the geometrical distances between the above-described corresponding points. Here, a publicly-known method such as the MLS (Moving Least Squares) method can be used. In the MLS method, each control point, regarded as a vertex of the control mesh, is moved so as to simulate, as faithfully as possible, the movement of the nearby feature points on the moving image 12 (i.e., their shift toward the corresponding points on the referring image 11). Therefore, the control grid deforming unit 16 obtains a non-rigid deformation of the control mesh which flexibly matches the movement of the surrounding corresponding points (step S104). The control grid deforming unit 16 outputs the deformed control grid to the registering unit 10 as the control-point moving amount information 1001.
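A compact two-dimensional sketch of the affine variant of MLS is shown below (the embodiment would apply this to the three-dimensional control mesh; the function name, weighting exponent, and regularization constant are assumptions):

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0, eps=1e-9):
    """Move point v so that it follows the displacements of nearby landmarks,
    where p holds the landmark positions on the moving image and q the
    corresponding positions on the referring image (shape (n, 2) each).
    A weighted least-squares affine map is fitted per evaluation point."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)  # inverse-distance weights
    p_star = (w[:, None] * p).sum(0) / w.sum()
    q_star = (w[:, None] * q).sum(0) / w.sum()
    p_hat, q_hat = p - p_star, q - q_star
    A = (w[:, None, None] * (p_hat[:, :, None] * p_hat[:, None, :])).sum(0)
    B = (w[:, None, None] * (p_hat[:, :, None] * q_hat[:, None, :])).sum(0)
    return (v - p_star) @ np.linalg.solve(A, B) + q_star

# Deforming every control point of a grid toward the landmark motion:
# new_grid = np.array([mls_affine(c, src_pts, dst_pts) for c in grid_pts])
```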
<Image Sampling Unit>
The image sampling unit 13 extracts, from the referring image 11, the sampling points and the sampling data (e.g., luminance values) which are used for the calculation of the image similarity, and feeds them to the registering unit 10.
The sampling may be performed while taking all the pixels in the image region which is the target of the registration processing as the sampling points. However, in order to increase the speed of the registration processing, a grid may be placed on the image, and only the pixels at the nodes of the grid may be used as the sampling points. Alternatively, in a sampling target region, a predetermined number of coordinates may be randomly generated, and the luminance values at the obtained coordinates may be used as the luminance values at the sampling points. In a medical image processing apparatus, it is desirable to use the luminance values as the sampling data in order to improve the processing speed. However, the sampling data may be color information in accordance with the intended use of the image processing apparatus.
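The random variant can be pictured as follows (a sketch; the function name and the use of a boolean mask to delimit the target region are assumptions):

```python
import numpy as np

def random_sampling(image, region_mask, n_points, seed=0):
    """Randomly draw sampling coordinates inside the target region and
    return them together with the luminance values at those coordinates."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(region_mask)  # candidate pixel coordinates
    pts = coords[rng.choice(len(coords), size=n_points, replace=False)]
    return pts, image[tuple(pts.T)]
```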
<Registering Unit>
As described above, the registering unit 10 includes the coordinate geometric transforming unit 1002, the image similarity calculating unit 1003, and the image similarity maximizing unit 1004, and executes the registration processing by using the control-point moving amount information 1001 and the sampling data fed from the image sampling unit 13.
The coordinate geometric transforming unit 1002 receives the control-point moving amount information 1001 determined by the control grid deforming unit 16, together with the sampling points and the sampling data fed from the image sampling unit 13.
In addition, the coordinate geometric transforming unit 1002 executes the coordinate transformation of the coordinates of the sampling points on the referring image 11 by using the control-point moving amount information 1001 (step S204). This step aims at calculating the coordinates of the image data on the moving image 12 which correspond to the coordinates of the sampling points on the referring image 11. Here, based on the positions of the control points in the periphery of the coordinates of a certain sampling point, the coordinates of the sampling point are interpolated by using, for example, a publicly-known B-spline function, so that the coordinates of the corresponding sampling point on the moving image 12 are calculated.
Next, the coordinate geometric transforming unit 1002 calculates a luminance value at each corresponding sampling point on the moving image 12 (the sampling point corresponding to each sampling point on the referring image 11) by, for example, linear interpolation computation (step S205: extraction). In this manner, the moving-image coordinates (sampling points) changed by the movement of the control points are obtained, together with the luminance values at those coordinates. That is, the moving image is deformed by the movement of the control points in the coordinate geometric transforming unit 1002.
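Steps S204 and S205 can be pictured together as follows (a sketch; d_row and d_col are a dense displacement field interpolated from the control grid, as in the earlier snippet, and the function name is an assumption):

```python
import numpy as np
from scipy import ndimage

def sample_moving_image(moving, pts_ref, d_row, d_col):
    """Transform the referring-image sampling coordinates (integer (row, col)
    pairs in pts_ref) by the control-grid displacement field (step S204) and
    read the moving-image luminance there with linear interpolation,
    order=1 (step S205)."""
    rows = pts_ref[:, 0] + d_row[pts_ref[:, 0], pts_ref[:, 1]]
    cols = pts_ref[:, 1] + d_col[pts_ref[:, 0], pts_ref[:, 1]]
    return ndimage.map_coordinates(moving, [rows, cols], order=1, mode="nearest")
```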
The image similarity calculating unit 1003 calculates the image similarity (e.g., a mutual information content) between the luminance values at the sampling points on the referring image 11 and the luminance values at the corresponding sampling points on the deformed moving image 12 (step S206).
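A histogram-based computation of such a mutual information content is sketched below (the bin count is an assumed parameter):

```python
import numpy as np

def mutual_information(ref_vals, mov_vals, bins=32):
    """Mutual information between the luminance samples of the referring
    image and the corresponding samples of the deformed moving image."""
    joint, _, _ = np.histogram2d(ref_vals, mov_vals, bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal distributions
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```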
The image similarity maximizing unit 1004 determines whether the image similarity converges (step S207). If the image similarity does not converge, the image similarity maximizing unit 1004 updates the control-point moving amount information 1001 (step S208), and the processing returns to step S204.
On the other hand, if the image similarity converges in step S207, the registering unit 10 outputs the obtained control-point moving amount information 1001 to the moving-image deforming unit 17 (step S209). Through the above-described processing, the processing performed by the registering unit 10 is completed.
<Moving-Image Deforming Unit>
The moving-image deforming unit 17 receives the control-point moving amount information 1001 outputted from the registering unit 10, executes the geometric transformation of the moving image 12 by using the received information, and generates and outputs the registered moving image 18.
According to this embodiment, the respective positions on the referring image and the moving image are obtained from the feature-point pairs (corresponding-point pairs) corresponding to each other. By using the obtained positions, the initial values (positions) of the control points to be used for the registration between the referring image and the moving image are set. In this manner, the initial values of the control grid can be set to more appropriate values, so that the registration accuracy can be improved. In addition, the time required for the registration can be shortened.
<Application Example>
Next, an example of application to a medical image will be described.
Although not particularly limited, transverse plane slices of an abdominal region are used as the referring image and the moving image in this application example. Feature regions TA and TB are present in the respective slices.
The control grid 1201 is the control grid described in the description of the control grid deforming unit 16, and the moving image can be deformed by deforming the control grid. That is, in this embodiment, the control grid 1201 is arranged on the moving image, and the initial setting of the control grid 1201 is performed by using the corresponding-point position information 15.
After the initial setting of the control grid 1201, the control grid 1201 is further deformed so as to maximize the image similarity between the referring image and the moving image.
In the process of the similarity maximization, the coordinate geometric transforming unit 1002 transforms the coordinates of the sampling points in accordance with the movement of the control points on the control grid 1201.
The feature points P2 to P5 correspond to feature points P2′ to P5′, respectively. The corresponding-point position information 15 is obtained from the above-described corresponding-point pairs. The control grid deforming unit 16 deforms the control grid 1201 based on the above-described corresponding-point position information 15. The control grid 1201 deformed in this manner provides the initial positions of the control points.
Even after the execution of the initial setting for the control grid 1201, the control grid 1201 is further deformed in the registering unit 10 so that the image similarity is maximized.
<Outline>
The regions to be registered are extracted from the referring image 11 and the moving image 12, respectively. In the extracted regions, the feature points and the corresponding-point pairs are extracted. By using the position information of the corresponding-point pairs, the control grid used for the registration processing is deformed. In this manner, the registration can be performed at a high speed in a region (interest region) in which the user of the image processing apparatus is interested. In addition, the position information of the corresponding points extracted from the above-described region is also used for the optimization calculation in the registration processing. In this manner, the optimization calculation can converge more accurately at a higher speed.
<Configuration and Operation>
In the second embodiment, the control grid is deformed by using the corresponding-point pairs extracted from a predetermined region which is the registration target, and the deformed control grid is used for the registration processing. The above-described predetermined region is designated as, for example, the region (interest region) in which the user of the image processing apparatus is interested. In addition, the image sampling points used for the registration processing are also extracted from the interest region. Furthermore, the position information of the extracted corresponding-point pairs is used for the calculation of the image similarity. In this manner, the accuracy and robustness of the registration in the interest region can be further improved. The following description is mainly about the differences from the first embodiment. The same reference symbols as those in the first embodiment basically denote the same components, and detailed descriptions thereof will be omitted.
From each of the referring image 11 and the moving image 12, each of the interest region extracting units 19 and 20 extracts a region to be the registration target such as an image region corresponding to an organ or a tubular region included in the organ. The target region is specified by, for example, a user who uses the image processing apparatus.
As a method of extracting the organ region from each of the referring image 11 and the moving image 12, for example, a publicly-known graph cut method can be used. In the graph cut method, a region division problem is regarded as energy minimization, and a region boundary is obtained by using an algorithm which cuts a graph created from an image so that the energy defined on the graph is minimized. In addition to the graph cut method, a region growing method, a threshold processing, or the like can also be used.
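Of the alternatives named above, the region growing method is the simplest to sketch (the seed point and the tolerance are assumed inputs; this is not the graph cut formulation itself):

```python
import numpy as np
from scipy import ndimage

def region_growing(image, seed, tolerance):
    """Grow a region from `seed` (a (row, col) tuple): keep the connected
    component containing the seed among pixels whose luminance lies within
    `tolerance` of the seed luminance."""
    candidate = np.abs(image - image[seed]) <= tolerance
    labels, _ = ndimage.label(candidate)  # label connected components
    return labels == labels[seed]         # the component containing the seed
```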
The interest region extracting units 19 and 20 can also extract not the overall organ but a tubular region from the extracted organ regions. The tubular region is a region corresponding to, for example, a blood vessel portion when the organ is a liver, or a bronchial portion when the organ is a lung. The following is an explanation of a processing of an image region having the liver as the region of the registration target. That is, the interest region extracting units 19 and 20 divide the liver region from each of the referring image 11 and the moving image 12, and extract the image region including the liver blood vessel.
It is desirable to use anatomically characteristic image data for the region of the registration target. As the image region having such characteristic image data in the liver region, an image region including the liver blood vessel and its surrounding region (a hepatic parenchymal region adjacent to the blood vessel) is conceivable. That is, the interest region extracting units 19 and 20 do not extract only the liver blood vessel region but simultaneously extract the liver blood vessel and the hepatic parenchymal region adjacent to the blood vessel. Therefore, a processing such as accurate region division is not required.
The interest region extracting units 19 and 20 extract the image regions including the liver region from the referring image 11 and the moving image 12, respectively (step S301). The pixel values of the extracted liver region image are converted within a predetermined range in accordance with Expression (2) (step S302). For example, the pixel values are converted within a range of 0 to 200 HU (Hounsfield Unit: the unit for a CT value). Here, I(x) and I′(x) in Expression (2) represent the pixel values obtained before and after the conversion, respectively, and Imin and Imax represent the minimum value (e.g., 0 HU) and the maximum value (e.g., 200 HU), respectively, of the conversion range.
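Expression (2) itself is not reproduced in the text above. A form consistent with the stated definitions, in which each pixel value is clamped into the conversion range [Imin, Imax], is the following (the clamping form is an assumption):

[Expression 2]

I′(x) = Imin (when I(x) < Imin); I′(x) = I(x) (when Imin ≤ I(x) ≤ Imax); I′(x) = Imax (when I(x) > Imax) Expression (2)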
Next, a smoothing processing is performed for the liver region image by using, for example, a Gaussian filter (step S303). Subsequently, an average value “μ” and a standard deviation “σ” of the pixel values of the smoothed liver region image are calculated (step S304). Next, in step S305, a threshold for the division processing is calculated. The threshold “T” is calculated by using, for example, Expression (3).
[Expression 3]
T = μ + 1.0 × σ Expression (3)
A threshold processing is performed for the pixel values of the data representing the liver region image by using the acquired threshold T (step S306). That is, the pixel value of each pixel is compared with the threshold T, and a pixel having a pixel value larger than the threshold T is extracted as a pixel in an image region which is a blood vessel region candidate. Lastly, in step S307, a morphology computation processing such as a dilation processing or an erosion processing is performed for the obtained image region. By this computation processing, removal of isolated pixels, connection between discontinuous pixels, and the like are performed. By the processing described above, the liver blood vessel region to be a candidate region (target region) for the registration sampling processing and the feature-point extraction processing is extracted. The liver blood vessel region extracted from each of the referring image 11 and the moving image 12 is outputted to the image sampling unit 13 and the feature-point detection/correspondence unit 14.
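Steps S302 to S307 amount to the following pipeline (a sketch; the function name, the Gaussian sigma, and the use of scipy morphology operators are assumptions):

```python
import numpy as np
from scipy import ndimage

def extract_vessel_region(liver, i_min=0.0, i_max=200.0, sigma=1.5):
    """Extract the blood vessel candidate region from a liver region image."""
    clipped = np.clip(liver, i_min, i_max)            # range conversion (step S302)
    smooth = ndimage.gaussian_filter(clipped, sigma)  # smoothing (step S303)
    t = smooth.mean() + 1.0 * smooth.std()            # threshold T, Expression (3) (steps S304-S305)
    candidate = smooth > t                            # threshold processing (step S306)
    candidate = ndimage.binary_opening(candidate)     # remove isolated pixels (step S307)
    return ndimage.binary_closing(candidate)          # connect discontinuous pixels
```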
The image sampling unit 13 acquires the image region corresponding to the organ region or the tubular region from the interest region extracting unit 19, and executes the sampling processing.
On the other hand, the feature-point detection/correspondence unit 14 executes the feature-point extraction/correspondence processing for the image region corresponding to the organ region (the liver region in this case) and/or the tubular region acquired from each of the interest region extracting units 19 and 20. As a result, the corresponding-point position information 15 is generated, and is outputted to the control grid deforming unit 16 and the registering unit 10. Since the generation of the corresponding-point position information 15 has been described in detail in the first embodiment, a description thereof will be omitted.
Each processing performed by the control grid deforming unit 16 and the registering unit 10 in the second embodiment is basically the same as that in the first embodiment. However, as a difference from the first embodiment, the corresponding-point position information 15 is also used in the image similarity calculating unit 1003 in the registering unit 10. That is, in the second embodiment, in order to improve the registration processing accuracy, the corresponding-point position information 15 acquired from the feature-point detection/correspondence unit 14 is also used for the optimization calculation for maximizing the image similarity between the referring image 11 and the moving image 12.
For example, simultaneously with the maximization of a mutual information content which is the image similarity, the coordinates of the feature points on the moving image 12 are transformed based on the corresponding-point position information 15 so that the geometrical distance between the transformed coordinates and the corresponding-point coordinates on the referring image 11 is minimized. In the above-described optimization calculation, for example, a cost function C(R, F, U(x)) expressed by Expression (4) is minimized.
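Expression (4) itself is not reproduced in the text above. A form consistent with the term definitions that follow is given below; the negative sign on S is an assumption, since the similarity is to be maximized while the cost C is minimized:

[Expression 4]

C(R, F, U(x)) = −S(R, F, U(x)) + μ × Σ_{x∈P} ‖U(x) − V(x)‖² Expression (4)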
Here, reference symbols “R” and “F” are the referring image 11 and the moving image 12, respectively, and reference symbol “U(x)” is the movement amount of each pixel obtained by the optimization calculation. Reference symbol “S(R, F, U(x))” represents the image similarity between the referring image 11 and the transformed moving image 12. Reference symbol “P” is the set of feature points obtained by the feature-point detection/correspondence unit 14, and reference symbol “V(x)” is the movement amount of each corresponding point obtained by the feature-point detection/correspondence unit 14. The term “Σx∈P‖U(x)−V(x)‖²” represents the geometrical distance between the movement amount of each feature point obtained by the optimization calculation and the movement amount of the same point obtained by the feature-point detection/correspondence unit 14. Further, reference symbol “μ” is a weight to be experimentally determined.
By the minimization of the cost function C, the optimization calculation for the registration processing can converge more accurately at a higher speed. By using the information related to the feature-point positions in the cost function C to be minimized, the control grid set in the initial setting is also reflected in the optimization calculation; therefore, a large shift from the positions of the feature points provided in the initial setting can be suppressed in the optimization calculation processing. That is, the feature regions (the feature regions on the image) set in the initial setting can also be considered in the optimization calculation processing.
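Numerically, the cost of Expression (4) reduces to a few lines (a sketch; the function name is an assumption, and `similarity` would come from a routine such as the mutual information computation shown earlier):

```python
import numpy as np

def registration_cost(similarity, u, v, mu):
    """Cost of Expression (4): negated image similarity plus the weighted
    squared distance between the optimized displacements u and the landmark
    displacements v (both of shape (n_points, dim))."""
    return -similarity + mu * float(np.sum((u - v) ** 2))
```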
As described above, the image processing apparatus according to the second embodiment extracts the interest regions which are the registration target from the referring image 11 and the moving image 12, extracts and corresponds the feature points in these interest regions, and deforms the control grid in the registration processing by using the position information of the corresponding points. In this manner, the regions from which the feature points are extracted and corresponded are limited, so that the processing speed and accuracy can be increased. In addition, the position information of the corresponding points extracted from the interest regions is also used for the optimization calculation in the registration processing. In this manner, the optimization calculation can converge more accurately at a higher speed.
<Outline>
The referring image 11, the interest region on the referring image 11, the registered moving image 18, and the interest region on the registered moving image are superimposed and displayed on a screen. The user who uses the image processing apparatus can perform the edit while checking the display. In the present specification, note that the edit includes addition, correction, and deletion unless particularly limited.
<Configuration and Operation>
In the third embodiment, the registration result and the extraction result of the interest region are superimposed and displayed on the screen. Through the screen, the user visually checks each result, and manually edits the corresponding landmarks (feature points) on the referring image 11 and the moving image 12. In this manner, the registration result can be edited.
Configurations other than the processing of editing the registration result are the same as those of the above-described first and second embodiments, and therefore, the following description is mainly about the differences. For descriptive convenience, note that the following exemplifies a configuration obtained by adding the function of editing the registration result to the configuration described as the second embodiment. Obviously, the function can be similarly added to the configuration described as the first embodiment.
The referring image 11, the moving image 12, the corresponding-point position information 15, and the registered moving image 18 are fed to the image display unit 21. In addition, information related to the interest regions is fed to the image display unit 21 from the interest region extracting units 19 and 20. The image display unit 21 superimposes and displays the referring image 11 and the registered moving image 18 in accordance with the fed referring image 11 and the fed registered moving image 18. At this time, the image display unit 21 transparently superimposes the interest region extracted from the referring image 11 on the referring image 11 while changing its color. In addition, the image display unit 21 performs the coordinate transformation of the interest region of the moving image 12 by using the registration result in accordance with the fed moving image 12, the corresponding-point position information 15, and the registered moving image 18, and transparently superimposes the interest region of the moving image 12 on the registered moving image 18 while changing its color. These displays can be combined with each other.
In addition, the image display unit 21 transparently superimposes and displays the referring image 11, its interest region, and the feature point in the interest region. The image display unit 21 also transparently superimposes and displays the moving image 12, its interest region, and the feature point in the interest region. In the transparent superimposing and displaying, the display is performed while changing the colors. As described above, the feature point is superimposed and displayed, so that the results of the feature point extraction and correspondence can be visually checked.
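The transparent superimposition with a color change can be pictured as an alpha blend (a sketch; the color and alpha values are assumptions):

```python
import numpy as np

def overlay(gray, mask, color=(255, 0, 0), alpha=0.4):
    """Transparently superimpose a colored interest region (boolean mask)
    on a grayscale slice, as performed by the image display unit 21."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * np.array(color, np.float32)
    return rgb.astype(np.uint8)
```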
A user such as a doctor checks whether the registration processing has been accurately performed while checking the result displayed on the image display unit 21. If it is determined that the registration processing has not been accurately executed, the user manually edits, for example, the landmark determined as not being accurate by using the landmark manual correction/input unit 22. The corresponding-point position information 15 after the edit, which is obtained as the manual editing result, is outputted to the registering unit 10. The registering unit 10 further deforms the deformed control grid by using the corresponding-point position information 15 acquired after the edit, and updates the control-point moving amount information 1001.
By the manual edit, for example, the feature-point coordinates on the referring image and/or the feature-point coordinates on the moving image in a corresponding-point pair are edited.
If the user determines that the registration processing has not been accurately executed even by the above-described manual correction, the user corrects the initial positions of the control points in the registration processing by using the corresponding-point position information 15 obtained by the manual edit, and executes the registration processing again in the same manner as steps S104 to S110.
The image display unit 21 is configured by using, for example, the image generating unit 47 described above.
As described above, in the third embodiment, the referring image 11 and its interest region, and the registered moving image 18 and the interest region of the registered moving image, are superimposed and displayed on the screen. The user manually edits the landmarks while checking the display result and adjusts the control-point moving amount information 1001, so that the registration result can be manually corrected. In addition, when it is determined that the registration processing has not been accurately executed even by the manual edit, the user can correct the initial positions of the control points in the registration processing by using the corresponding-point position information 15 obtained by the manual edit, and can execute the registration processing again.
The present invention is not limited to the above-described embodiments, and incorporates various modification examples. The above-described first to third embodiments have been described in detail in order to clearly explain the present invention, and the present invention is not necessarily limited to an embodiment including all the configurations described above. Also, a part of the configuration of one embodiment can be replaced with the configuration of another embodiment. Further, the configuration of another embodiment can be added to the configuration of one embodiment. Still further, another configuration can be added to, eliminated from, or replaced with a part of the configuration of each embodiment.
Each configuration, function, processing unit, processing means, and the like described above may be partly or entirely achieved by hardware by, for example, designing them in an integrated circuit. In addition, each configuration, function, and the like described above may be achieved by software by a processor interpreting and executing a program which achieves each function. The information such as a program, a table, and a file achieving each function can be stored in a recording medium such as a memory, a hard disk, an SSD (Solid State Drive), an IC card, an SD card, or a DVD.
10 registering unit
11 referring image
12 moving image
13 image sampling unit
14 feature-point detection/correspondence unit
15 corresponding-point position information
16 control grid deforming unit
17 moving-image deforming unit
18 registered moving image
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2013/065737 | 6/6/2013 | WO | 00 |