The present application claims priority from Chinese patent application 201010163949.5 filed on Apr. 16, 2010, the content of which is hereby incorporated by reference into this application.
The present invention relates to the field of three-dimensional image display, and more particularly to a method and apparatus for displaying three-dimensional data, which select an object of interest in a three-dimensional scene by using information on a section parallel to the sight line and render a two-dimensional image of the selected object along the sight line.
With the rapid development of information technology, the amount of data obtained from computation and measurement is increasing at an incredible speed. In the next few years, the amount of information produced and collected by human beings will exceed the total amount of information obtained so far. This makes it more and more difficult to extract meaningful information quickly and efficiently from such a large amount of information. In order to solve this problem, scientists have proposed a variety of models and methods, one of which is visualization technology. Visualization technology is used to extract meaningful information from a large amount of basic data and present it to the user by means of interactive computer graphics techniques, for the purpose of better understanding of the information and quick decision-making. Visualization is mainly classified into two types: scientific computing visualization and information visualization. Scientific computing visualization deals with physical data, such as the human body, the earth and molecules, while information visualization deals with abstract non-physical data, such as text and statistical data. Here, attention is mainly focused on scientific computing visualization, a technology in which the data produced in the process of scientific computation is converted, by means of computer graphics and image processing techniques, into graphics and images which are shown to the user through a display device, so as to enable the user to perform interactive processing of the data. The application field of scientific computing visualization is very wide, mainly covering medicine, geological exploration, meteorology, molecular modeling, computational fluid dynamics, finite element analysis, and so on. Among these, medical data visualization is a very important application; medical data is mainly obtained from medical imaging devices that measure the structure and function of human tissues, such as computed tomography (CT) scan data and nuclear magnetic resonance (NMR) data.
At present, the core of scientific computing visualization technology is the visualization of three-dimensional space data fields. Medical imaging data, such as CT data, are regularized three-dimensional space grid data; the data distributed on the discrete grid points in three-dimensional space are obtained by interpolation after CT scanning or random sampling of a three-dimensional continuous data field. The function of three-dimensional space data field visualization is to convert the discrete three-dimensional grid data field into a two-dimensional discrete signal in a frame buffer of a graphics display device according to a certain rule, i.e., to generate the color values (R, G, B values) of each pixel. A two-dimensional image reconstructed from the three-dimensional scene represents a complex three-dimensional scene from a specific visual angle; the user can change the position of the view point by using interactive computer graphics techniques to reconstruct the three-dimensional scene from different visual angles, thereby gaining knowledge and understanding of complex three-dimensional scenes. A typical application of three-dimensional space data field visualization is the visualization of CT data. A doctor can obtain scan data of a patient's specific part from a CT device, import it into a three-dimensional visualization device, observe the specific part from different view points through the use of interactive techniques, and thereby obtain the structure and shape of specific human tissues, locate lesions and achieve a rapid diagnosis for the patient. With the development of medical imaging devices, the amount of medical data has increased several times over, and three-dimensional data field visualization technology greatly increases the working efficiency of radiologists, making it possible to locate and diagnose lesions more rapidly. In addition, computer-simulated surgery and planning for orthopedic surgery, radiotherapy and the like can also be implemented through interactive data operations based on this technique.
Volume rendering is a very important three-dimensional display technique in scientific computing visualization, and it is widely used in the field of medical image display owing to its fine display accuracy. The data generated by modern computed tomography devices are discrete data distributed on a three-dimensional space grid (a point on the grid is called a 'voxel'). The function of a volume rendering algorithm is to convert the discrete three-dimensional data into a two-dimensional discrete signal in a frame buffer of a graphics display device according to a certain rule, i.e., to generate the color values (R, G, B values) of each pixel. The most commonly used method in volume rendering is the ray casting method, which mainly comprises three steps. First, the data is classified according to the values of voxels, and different color and opacity values are assigned to each kind of data so as to correctly indicate the different attributes of various matters; this process can be completed through a transfer function which maps the value of a voxel to the color and opacity values of the voxel. Second, the three-dimensional data is re-sampled: a light ray passing through the three-dimensional data is emitted from each pixel on the screen in the direction of the sight line, equally-spaced sampling points are selected from the three-dimensional data along the light ray, and the color and opacity values of each sampling point are obtained by interpolation from the eight voxels surrounding the sampling point. Finally, image synthesis is performed to composite the color and opacity values of the sampling points on each light ray in order from front to back or from back to front, thus obtaining the color value of the pixel corresponding to the light ray; the compositing method is established by the synthetic function. Volume rendering can produce finer and richer effects by establishing different transfer functions, and this greatly increases the understanding of volume data.
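For illustration only, the following is a minimal sketch (in Python with NumPy) of the front-to-back synthesis of one light ray described above; the sampling and the transfer function are assumed to have been applied already, and all names are illustrative and do not limit the invention.

```python
import numpy as np

def composite_front_to_back(colors, opacities):
    """Front-to-back synthesis of the sampling points of one light ray.

    colors:    (N, 3) array of R, G, B values of the sampling points,
               ordered from the point nearest the view point to the farthest.
    opacities: (N,) array of opacity values in [0, 1].
    Returns the accumulated R, G, B color of the pixel emitting the ray.
    """
    accum_color = np.zeros(3)
    accum_alpha = 0.0
    for c, a in zip(colors, opacities):
        # standard front-to-back "over" operator
        accum_color += (1.0 - accum_alpha) * a * np.asarray(c, dtype=float)
        accum_alpha += (1.0 - accum_alpha) * a
        if accum_alpha >= 0.99:      # early termination once nearly opaque
            break
    return accum_color
```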
In the field of medical imaging, the images obtained from CT or MRI equipment are grayscale images; however, the grayscale values of different tissues inside the human body overlap, and the spatial distribution of the tissues is extremely complex. As a result, the three-dimensional reconstruction of volume data obtained through volume rendering usually contains plural tissues, and many tissues or specific parts thereof are obstructed by other tissues or by the tissue itself. This makes it impossible for a doctor to carry out a diagnosis by means of volume rendering alone, and has hindered the development of volume rendering technology in the medical field.
A common way to address this problem is to assign different opacity values and colors to different tissues by establishing a transfer function. The assignment of opacity and color depends on the grayscale information of the tissues; however, the grayscale ranges of different tissues often partially overlap. For example, in a CT image, fat and soft tissue have similar grayscale ranges, and blood and cartilage have similar grayscale ranges; although bone has a high density and presents a high grayscale value in the CT image, the grayscale of its edges spans a very wide range and covers the grayscale ranges of blood and soft tissue. This makes it difficult to show the tissues of interest emphatically. Although a multi-dimensional transfer function may use other information such as gradients, such multi-dimensional information still cannot accurately differentiate tissues.
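As a sketch of such a one-dimensional transfer function (illustrative only, assuming NumPy; the control points are hypothetical), a piecewise-linear mapping from grayscale to opacity could look as follows; because tissues share grayscale ranges, any such single curve inevitably affects several tissues at once.

```python
import numpy as np

def opacity_transfer(values, control_points):
    """Piecewise-linear opacity transfer function.

    control_points: (grayscale, opacity) pairs sorted by grayscale, e.g.
                    [(0, 0.0), (150, 0.0), (300, 0.4), (1200, 0.9)].
    Because fat/soft tissue or blood/cartilage occupy overlapping grayscale
    ranges, one curve like this cannot emphasize only one of those tissues.
    """
    xs = np.array([p[0] for p in control_points], dtype=float)
    ys = np.array([p[1] for p in control_points], dtype=float)
    return np.interp(values, xs, ys)
```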
Another common method to address this problem is to extract the tissues of interest from CT or MRI images by using a segmentation technique. In this way, the rendering of different tissues in the rendering result can be controlled by establishing different transfer functions for different tissues; however, this cannot handle the parts occluded by the object itself. Many tissues have a complex spatial structure in medical images, and different parts within one tissue may obstruct each other. Since the segmentation method usually performs an overall segmentation of a tissue, it is impossible to identify the different parts of a single tissue, and therefore the specific part cannot be observed.
WO2006/099490 proposes a method of displaying an object of interest through an opaque object, in which the region of the opaque object is determined by using a fixed threshold value (grayscale or gradient) so as to control the compositing of the sampling points on the light ray, thereby rendering the object of interest through the opaque area. However, a method using a fixed threshold cannot correctly judge the extent of a complex opaque object.
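The fixed-threshold idea, as characterized above, can be sketched roughly as follows (illustrative Python only, not the implementation of that publication): samples are discarded until the ray has entered and left a region whose grayscale exceeds the threshold, and compositing begins only behind that region.

```python
import numpy as np

def composite_behind_opaque(values, colors, opacities, threshold):
    """Composite only the samples lying behind the first region whose
    grayscale exceeds a fixed threshold (a rough sketch; a single global
    threshold fails for complex opaque objects)."""
    accum_color, accum_alpha = np.zeros(3), 0.0
    entered = exited = False
    for v, c, a in zip(values, colors, opacities):
        if not exited:
            if v >= threshold:
                entered = True       # inside the opaque region
                continue
            if not entered:
                continue             # still in front of the opaque region
            exited = True            # first sample behind the opaque region
        accum_color += (1.0 - accum_alpha) * a * np.asarray(c, dtype=float)
        accum_alpha += (1.0 - accum_alpha) * a
    return accum_color
```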
Japanese Patent Application Laid-Open Publication No. 2003-91735 proposes a method in which the three-dimensional data is divided into several groups in a certain direction and each group of data generates a two-dimensional image in a particular manner (such as an average value algorithm or a maximum intensity projection algorithm), and the object of interest to the user is designated in such a group of two-dimensional images; then, the distance from each of the other voxels in the three-dimensional data to the designated object is calculated and used as a weighting factor in the synthetic function. For example, a voxel near the object of interest is given a higher weight and a far voxel a smaller weight, so that the user-designated object is highlighted by blurring its surrounding area. However, this method requires that the designated object be wholly segmented first, and it still cannot display the parts occluded by other parts of the designated object itself.
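The distance-weighting idea described in that publication can be sketched as follows (illustrative Python only, with a hypothetical exponential weight; the actual weighting function of that publication may differ): the opacity of each sample is scaled down as its distance to the pre-segmented object of interest grows.

```python
import numpy as np

def distance_weighted_opacities(opacities, distances, falloff=20.0):
    """Scale per-sample opacity by proximity to the designated object.

    distances: each sample's distance to the pre-segmented object of
               interest; falloff controls how quickly far samples fade.
    """
    weights = np.exp(-np.asarray(distances, dtype=float) / falloff)
    return np.asarray(opacities, dtype=float) * weights
```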
In view of the above background, the present invention proposes an image processing method in which an object of interest is selected in a three-dimensional scene by using information on a section parallel to the sight line, and a surface is generated to divide each line of sight passing through the object into two parts, so as to display the object of interest to the user through an opaque area by establishing different rendering parameters for the two parts of the sight line.
The present invention proposes a solution in order to solve the problem that an object of interest occluded by another opaque object cannot be rendered in volume rendering. The object to be rendered is selected by using information on a section parallel to the sight line, and a two-dimensional segmentation curved surface is generated to separate the selected object from neighboring objects in the direction of the sight line, so as to control the rendering process along the sight line, thereby rendering the selected object individually.
According to the first aspect of the present invention, an image processing apparatus is proposed, which comprises: a segmentation curved surface generation unit for generating a segmentation curved surface passing through a designated control point and intersecting with a first predetermined direction in accordance with three-dimensional image data; a first two-dimensional image generation unit for generating a first two-dimensional image in accordance with the projection of the three-dimensional image data on the segmentation curved surface; and a display unit for displaying the first two-dimensional image generated by the first two-dimensional image generation unit.
Preferably, the segmentation curved surface generated by the segmentation curved surface generation unit is substantially perpendicular to the first predetermined direction.
Preferably, the first two-dimensional image generation unit generates the first two-dimensional image in accordance with the projection of the three-dimensional image data on the segmentation curved surface along the first predetermined direction.
Preferably, the first two-dimensional image generation unit generates the first two-dimensional image in accordance with the projection of the three-dimensional image data on the segmentation curved surface along a direction opposite to the first predetermined direction.
Preferably, the image processing apparatus further comprises: a second two-dimensional image generation unit for generating a second two-dimensional image in accordance with the projection of the three-dimensional image data on the segmentation curved surface along the first predetermined direction; a third two-dimensional image generation unit for generating a third two-dimensional image in accordance with the projection of the three-dimensional image data along a direction perpendicular to the first predetermined direction; and a control point designation unit for designating the designated control point in the third two-dimensional image, wherein the display unit is further used for displaying the second two-dimensional image and the third two-dimensional image, and the display unit also displays the first two-dimensional image at a corresponding position in the second two-dimensional image in the form of a window so as to cover the corresponding part of the second two-dimensional image.
Preferably, the segmentation curved surface generation unit generates the segmentation curved surface from points having the same attribute as that of the designated control point, in accordance with the attribute of the designated control point. More preferably, the attribute is at least one selected from the group consisting of: the grayscale value of the designated control point, the color value of the designated control point, and the gradient value and gradient direction of the designated control point. More preferably, the segmentation curved surface generation unit generates the segmentation curved surface through the use of a local segmentation method by taking the designated control point as a seed.
According to the second aspect of the present invention, an image processing method is proposed, which comprises: generating a segmentation curved surface passing through a designated control point and intersecting with a first predetermined direction in accordance with the three-dimensional image data; and generating a first two-dimensional image in accordance with the projection of the three-dimensional image data on the segmentation curved surface.
Preferably, the segmentation curved surface is substantially perpendicular to the first predetermined direction. Preferably, the first two-dimensional image is generated in accordance with the projection of the three-dimensional image data on the segmentation curved surface along the first predetermined direction.
Preferably, the first two-dimensional image is generated in accordance with the projection of the three-dimensional image data on the segmentation curved surface along a direction opposite to the first predetermined direction.
Preferably, the image processing method further comprises: generating a second two-dimensional image in accordance with the projection of the three-dimensional image data on the segmentation curved surface along the first predetermined direction; generating a third two-dimensional image in accordance with the projection of the three-dimensional image data along a direction perpendicular to the first predetermined direction, wherein the designated control point is designated in the third two-dimensional image; and displaying the first two-dimensional image at a corresponding position in the second two-dimensional image in the form of a window so as to cover the corresponding part of the second two-dimensional image.
Preferably, each point on the segmentation curved surface has the same attribute as that of the designated control point. More preferably, the attribute is at least one selected from the group consisting of: the grayscale value of the designated control point, the color value of the designated control point, and the gradient value and gradient direction of the designated control point. More preferably, the segmentation curved surface is generated through the use of a local segmentation method by taking the designated control point as a seed.
According to the present invention, a user can select, from a rendering window of a three-dimensional scene, a sub-window which is used for rendering, along the sight line, an object or a specific part thereof that is occluded by an opaque object in the three-dimensional scene.
According to the present invention, the sub-window selected by the user from the volume rendering window is called the focus window; the user can change its shape and size, and move it within the volume rendering window.
According to the present invention, an object to be rendered is selected by the user on a plane orthogonal to the focus window. The orthogonal plane is parallel to the sight line, passes through the object to be rendered or its specific part, and shows the profile information of the three-dimensional scene through which the orthogonal plane passes. This information may be obtained by sampling the three-dimensional data, or may be the result of a common rendering technique, such as volume rendering, in which the plane is taken as the projection plane.
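As one possible way of obtaining such profile information by sampling (a sketch only; the names, parameters and plane parameterization are illustrative, and nearest-neighbour sampling is used for brevity), the orthogonal plane can be sampled on a regular grid spanned by two direction vectors:

```python
import numpy as np

def sample_orthogonal_plane(volume, origin, u_dir, v_dir, shape, step=1.0):
    """Sample a grayscale cross-section of 'volume' on the plane spanned by
    the unit vectors u_dir (parallel to the sight line) and v_dir, starting
    at 'origin' (all in voxel coordinates, z/y/x order).  Nearest-neighbour
    sampling keeps the sketch short; a real implementation would use
    trilinear interpolation."""
    h, w = shape
    u = np.asarray(u_dir, dtype=float)
    v = np.asarray(v_dir, dtype=float)
    section = np.zeros((h, w), dtype=volume.dtype)
    for i in range(h):
        for j in range(w):
            p = np.asarray(origin, dtype=float) + step * (i * v + j * u)
            z, y, x = np.round(p).astype(int)
            if (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                    and 0 <= x < volume.shape[2]):
                section[i, j] = volume[z, y, x]
    return section
```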
According to the present invention, the intersection line of the orthogonal plane and the projection plane is located in the sub-window selected by the user, and the user can adjust its position within the focus window, so that the position of the object of interest can be rapidly located by adjusting the position of the orthogonal plane within the volume data.
According to the present invention, the orthogonal plane provides a control point for selecting the object of interest; the user can move the control point near the edge of the object of interest, and the system automatically generates, according to the control point, a two-dimensional surface which separates the object of interest from the other objects in the direction of the light ray. The range of the segmentation curved surface is limited to a focus space whose bottom is the focus window and whose height is parallel to the sight line.
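One possible realization of this automatic surface generation (a sketch only; the invention is not limited to it, and all names and parameters are illustrative) represents the segmentation curved surface as a depth map over the focus window and grows it outward from the control point, searching along each ray, near the depth found for neighbouring rays, for the first sample whose grayscale matches the seed:

```python
import numpy as np
from collections import deque

def segmentation_depth_map(focus_rays, seed_pixel, seed_depth, tol=50.0, search=10):
    """Grow a segmentation surface (stored as one depth per focus-window pixel)
    from the designated control point.

    focus_rays: (H, W, D) grayscale samples along each ray of the focus window.
    seed_pixel, seed_depth: position of the control point in that array.
    tol, search: grayscale tolerance and depth search radius (illustrative).
    """
    H, W, D = focus_rays.shape
    seed_val = float(focus_rays[seed_pixel[0], seed_pixel[1], seed_depth])
    depth = np.full((H, W), -1, dtype=int)
    depth[seed_pixel] = seed_depth
    visited = np.zeros((H, W), dtype=bool)
    visited[seed_pixel] = True
    queue = deque([seed_pixel])
    while queue:                      # breadth-first growth over the window
        y, x = queue.popleft()
        d0 = depth[y, x]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and not visited[ny, nx]:
                visited[ny, nx] = True
                lo, hi = max(0, d0 - search), min(D, d0 + search + 1)
                window = focus_rays[ny, nx, lo:hi].astype(float)
                hits = np.nonzero(np.abs(window - seed_val) <= tol)[0]
                if hits.size:         # keeps the surface continuous in depth
                    depth[ny, nx] = lo + int(hits[0])
                    queue.append((ny, nx))
    return depth                      # -1 where no matching depth was found
```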
According to the present invention, the segmentation curved surface divides each light ray emitted from the focus window into two parts: one part passes through the opaque area in front of the object of interest, and the other part irradiates the object of interest directly. The object of interest can thus be shown through the opaque area by establishing different transfer functions for the two parts of the light rays.
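A minimal sketch of such two-part compositing of a single focus-window ray is given below (illustrative Python; tf_front and tf_back are hypothetical user-defined transfer functions mapping a grayscale value to ((r, g, b), opacity)). For example, tf_front may simply reuse tf_back while scaling the returned opacity by a small factor so that the occluding area becomes semi-transparent.

```python
import numpy as np

def composite_two_parts(values, split_index, tf_front, tf_back):
    """Composite one ray of the focus window with two transfer functions:
    samples in front of the segmentation surface (index < split_index) use
    tf_front; samples behind it use tf_back to render the object of
    interest normally."""
    accum_color, accum_alpha = np.zeros(3), 0.0
    for i, v in enumerate(values):
        rgb, alpha = (tf_front(v) if i < split_index else tf_back(v))
        accum_color += (1.0 - accum_alpha) * alpha * np.asarray(rgb, dtype=float)
        accum_alpha += (1.0 - accum_alpha) * alpha
        if accum_alpha >= 0.99:
            break
    return accum_color
```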
According to the present invention, the back of another object of interest can also be rendered by taking the segmentation curved surface as a starting point and sampling and compositing along the direction opposite to the light ray within the focus space.
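A corresponding sketch (under the same illustrative assumptions as the preceding one) starts at the segmentation surface and composites toward the view point, so that the back face of the structure lying in front of the surface is shown:

```python
import numpy as np

def composite_backward_from_surface(values, split_index, tf_back):
    """Composite one focus-window ray from the segmentation surface back
    toward the view point, i.e. along the direction opposite to the light
    ray, within the focus space."""
    accum_color, accum_alpha = np.zeros(3), 0.0
    for v in reversed(values[:split_index]):   # surface -> view point
        rgb, alpha = tf_back(v)
        accum_color += (1.0 - accum_alpha) * alpha * np.asarray(rgb, dtype=float)
        accum_alpha += (1.0 - accum_alpha) * alpha
        if accum_alpha >= 0.99:
            break
    return accum_color
```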
Embodiments of this invention are described hereafter with reference to the drawings, wherein some details and functions unnecessary for the present invention are omitted so as not to obscure the understanding of the present invention.
The present invention solves the problem that an object of interest occluded by other opaque objects cannot be rendered in volume rendering.
The size of the focus window 905 can be selected freely by the user, and this free adjustment of the focus window size provides the user with a more flexible and controllable display mode, since the shape and distribution of the objects in three-dimensional data are usually quite complex.
As shown in
in step S1203, the user selects a focus window from the operational interface and drags it into the main window;
afterward, in step S1204, the system automatically generates a section perpendicular to the focus window and displays it in an object selection window;
in step S1205, the user can see the three-dimensional data in the direction of the sight line, so as to select an object of interest in this direction; there is a control point in the object selection window for selecting the object of interest, and the user can move the control point near the edge of the object of interest in the object selection window;
in step S1206, the system automatically generates, on the basis of the control point, a surface that separates the object of interest from neighboring objects; the produced segmentation curved surface divides the light ray emitted from each pixel in the focus window into two parts, one part passing through the portion of the object obstructing the object of interest, the other directly irradiating the surface of the object of interest;
in step S1207, the system can carry out sampling and compositing individually on the second part of the light ray to show the object of interest directly, or can design different transfer functions for the two parts of the light ray so as to render the area standing in front of the object of interest semi-transparent (an illustrative sketch combining steps S1206 and S1207 is given after step S1209);
in step S1208, the user may continue to move the control point in order to select another object;
in step S1209, the user can also locate the object of interest by adjusting the position and size of the focus window, and at the same time can adjust the spatial position of the object selection surface by controlling the projection segment of the object selection surface in the focus window; the content of the object selection surface is updated constantly according to its position within the volume data.
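The following usage example (illustrative only) ties steps S1205 to S1207 together; it builds on the sketches segmentation_depth_map and composite_two_parts given earlier in this description, and assumes that a resampling step has already produced the (H, W, D) array of grayscale samples along the focus-window rays.

```python
import numpy as np

def render_focus_window(focus_rays, control_pixel, control_depth, tf_front, tf_back):
    """focus_rays: (H, W, D) grayscale samples along the focus-window rays;
    control_pixel/control_depth: the control point chosen on the object
    selection surface (step S1205)."""
    # step S1206: grow the segmentation surface from the control point
    depth = segmentation_depth_map(focus_rays, control_pixel, control_depth)
    # step S1207: composite each ray with a different transfer function on
    # each side of the surface
    H, W, _ = focus_rays.shape
    image = np.zeros((H, W, 3))
    for y in range(H):
        for x in range(W):
            split = depth[y, x] if depth[y, x] >= 0 else 0
            image[y, x] = composite_two_parts(focus_rays[y, x], split,
                                              tf_front, tf_back)
    return image
```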
A computer 1302 is a general-purpose computer mainly comprising a processor unit 1303, a memory unit 1304 and a data storage unit 1305. A user input device 1301 and a display unit 1306 together implement the interaction between the user and the computer. The processor unit 1303 and the memory unit 1304 carry out the data processing required by the user in accordance with the user interaction.
A data acquisition unit 1401 is used for collecting three-dimensional data, such as regular three-dimensional CT scan data. A main window rendering unit 1402 (the second two-dimensional image generation unit) accomplishes three-dimensional rendering from a certain view point. A three-dimensional data interaction unit 1403 enables the user to select a specific view point from which to observe the three-dimensional object. A focus window selection and adjustment unit 1404 allows the user to select different shapes of the focus window, and to adjust its size and position in the main window. An object selection surface generation and update unit 1407 (the third two-dimensional image generation unit) updates the displayed content according to the position and shape of the focus window. An object-of-interest selection unit 1408 (a control point designation unit) provides a function for selecting the object of interest on the object selection surface. A segmentation curved surface generation unit 1409 automatically generates a segmentation curved surface based on the position of the object selection control point selected by the user. A transfer function generation unit 1410 divides the light rays emitted from the focus window into two parts according to the segmentation curved surface generated by the unit 1409 and establishes different transfer functions, that is, it sets the color and opacity values of the three-dimensional data voxels through which the light rays pass. A focus window rendering unit 1405 (the first two-dimensional image generation unit) renders the three-dimensional data contained in the focus space by using the synthetic function generated by a synthetic function generation unit 1411, and displays the result in the focus window.
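Purely as an illustration of how these units might be wired together in software (a sketch only; the class, attribute and method names are hypothetical and do not correspond to any actual implementation of the apparatus):

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple
import numpy as np

@dataclass
class FocusWindow:                       # managed by selection/adjustment unit 1404
    origin: Tuple[int, int]              # position inside the main window
    size: Tuple[int, int]                # width and height in pixels

@dataclass
class ImageProcessingApparatus:
    volume: np.ndarray                                    # data acquisition unit 1401
    view_point: Tuple[float, float, float]                # 3-D data interaction unit 1403
    focus_window: Optional[FocusWindow] = None
    control_point: Optional[Tuple[int, int, int]] = None  # object selection unit 1408
    tf_front: Optional[Callable] = None                   # transfer function unit 1410
    tf_back: Optional[Callable] = None

    def render_main_window(self) -> np.ndarray:           # main window rendering unit 1402
        raise NotImplementedError

    def update_object_selection_surface(self) -> np.ndarray:  # generation/update unit 1407
        raise NotImplementedError

    def generate_segmentation_surface(self) -> np.ndarray:    # generation unit 1409
        raise NotImplementedError

    def render_focus_window(self) -> np.ndarray:          # focus window rendering unit 1405
        raise NotImplementedError
```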
In the above description, plural examples are given with regard to each step. Although the inventors have tried to indicate which examples are interconnected as much as possible, this does not mean that these examples must correspond to one another according to their respective reference numbers. As long as there is no contradiction among the conditions given by the selected examples, examples with non-corresponding numbers in different steps may be selected to constitute a technical solution, and such a technical solution should also be regarded as falling within the scope of the present invention.
It is noteworthy that the technical solution of the present invention is illustrated above only by way of example; this does not mean that the invention is limited to the above steps and unit structures. Where possible, the steps and unit structures may be adjusted and selected. Accordingly, some steps and unit structures are not essential elements for implementing the general concept of the invention. Therefore, the necessary technical features of the present invention are limited only by the minimum requirements for implementing the general concept of the invention, and not by the above specific examples.
Other arrangements disclosed herein in the embodiments of the present invention include software programs that execute and carry out the steps of the embodiments outlined first and elaborated later. More concretely, a computer program product is one such embodiment, comprising a computer-readable medium having computer program logic encoded thereon; when executed on a computing device, the computer program logic provides the relevant operations, thus embodying the image processing scheme described above. When executed on at least one processor of a computing system, the computer program logic causes the processor to perform the operations (methods) described in the embodiments of the invention. Such arrangements of the present invention can typically be provided as software, code and/or other data structures arranged or encoded on a computer-readable medium, such as an optical medium (e.g. CD-ROM), a floppy disk or a hard disk, or as firmware or microcode on one or more ROM, RAM or PROM chips, or as downloadable software images or shared databases in an application-specific integrated circuit (ASIC) or in one or more modules. The software, firmware or such a configuration can be installed on a computing device so as to make one or more processors of the computing device implement the technology described in the embodiments of the present invention. The system according to the present invention can also be provided by software processes operating in combination with a set of data communication devices or computing devices in other entities. The system according to the present invention can also be distributed among plural software processes on plural data communication devices, or run as software processes on a set of dedicated small computers, or as software processes running on a single computer.
It is to be understood that, strictly speaking, the embodiments of the present invention can be realized as a software program, as software plus hardware, or as separate software and/or separate circuits on data communication devices.
The present invention has been described in combination with the preferred embodiments thereof. It is to be understood that various other modifications, replacements and additions can be made herein by those skilled in the art without departing from the spirit and scope of the invention. Therefore, the scope of the present invention is not limited by the specific embodiments described above, but is defined only by the appended claims.