Embodiments described herein relate generally to an image processing device, a stereoscopic image display device, and an image processing method.
In recent years, glasses-free 3D displays, in which a light beam control element such as a lenticular lens enables multiple view images captured from a plurality of camera viewpoints to be viewed stereoscopically with the unaided eye, have been put to practical use. In such a glasses-free 3D display, the pop-out amount of stereoscopic images can be changed by adjusting the camera intervals or the camera angles. Moreover, in a glasses-free 3D display, images displayed on the display surface, that is, the surface that neither pops out toward the near side nor recedes toward the far side during stereoscopic viewing, are displayed in the highest definition. Hence, as the pop-out amount increases or decreases, the definition declines. Furthermore, the range within which high-definition stereoscopic display is possible is limited. Hence, if a pop-out amount equal to or greater than a certain value is set, double images or blurred images are formed.
Meanwhile, among medical diagnostic imaging devices such as X-ray computed tomography (CT) devices, magnetic resonance imaging (MRI) devices, and ultrasound diagnostic devices, some devices capable of generating three-dimensional medical images (hereinafter, called "volume data") have been put to practical use. From the volume data generated by a medical diagnostic imaging device, it is possible to generate a volume rendering image (a parallax image) at an arbitrary parallax angle and with an arbitrary number of parallaxes. In that regard, stereoscopically displaying, on a glasses-free 3D display, a two-dimensional volume rendering image generated from the volume data is under study.
However, in the conventional technology, the stereoscopic image of a region of interest, on which the user should focus in the volume data, cannot be viewed in a satisfactory manner.
According to an embodiment, an image processing device includes an obtainer, a determiner, a controller, and a generator. The obtainer obtains a position of an object to be observed in volume data of a medical image. The determiner determines a region of interest by using the position of the object and an instructed region input by a user in the volume data, so that the region of interest includes at least part of the object. The controller controls a relation between the region of interest and a display range that indicates a range allowed to be displayed stereoscopically on a display. The generator generates a stereoscopic image of the volume data according to the relation between the region of interest and the display range.
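For illustration, this pipeline can be sketched as follows in Python; all class and method names are hypothetical assumptions for explanation and do not prescribe any particular implementation of the embodiment:

```python
# Hypothetical skeleton of the obtainer/determiner/controller/generator
# pipeline described above; all names are illustrative assumptions.
class ImageProcessingDevice:
    def __init__(self, obtainer, determiner, controller, generator):
        self.obtainer = obtainer
        self.determiner = determiner
        self.controller = controller
        self.generator = generator

    def run(self, volume, user_instructed_region, display):
        position = self.obtainer.obtain(volume)             # object position
        roi = self.determiner.determine(position, user_instructed_region)
        relation = self.controller.control(roi, display.display_range())
        return self.generator.generate(volume, relation)    # stereoscopic image
```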
An embodiment of an image processing device, a stereoscopic image display device, and an image processing method according to the invention is described below in detail with reference to the accompanying drawings.
The image display system 1 generates stereoscopic images from volume data, which is generated by the medical diagnostic imaging device 10. Then, the stereoscopic images are displayed on a display with the aim of providing stereoscopically viewable medical images to doctors and laboratory personnel working in a hospital. Herein, a stereoscopic image includes a plurality of parallax images having mutually different parallaxes. Given below is the explanation of each device in turn.
The medical diagnostic imaging device 10 is capable of generating three-dimensional volume data related to medical images. Examples of the medical diagnostic imaging device 10 include an X-ray CT device, an MRI device, an ultrasound diagnostic device, a single photon emission computed tomography (SPECT) device, a positron emission computed tomography (PET) device, a SPECT-CT device configured by integrating a SPECT device and an X-ray CT device, a PET-CT device configured by integrating a PET device and an X-ray CT device, and a group of these devices.
The medical diagnostic imaging device 10 captures images of a subject being tested, and generates volume data. For example, the medical diagnostic imaging device 10 captures images of a subject being tested; collects data such as projection data or MR signals; reconstructs a plurality of (for example, 300 to 500) slice images (cross-sectional images) along the body axis direction of the subject; and generates volume data. Thus, as illustrated in
The volume data generated by the medical diagnostic imaging device 10 contains images of target objects for observation at the medical site (hereinafter, called “objects”) such as bones, blood vessels, nerves, tumors, and the like. According to the embodiment, the medical diagnostic imaging device 10 analyzes the generated volume data, and generates specifying information that enables identification of the position of each object in the volume data. The specifying information can contain arbitrary details. For example, as the specifying information, it is possible to use groups of information in each of which identification information enabling identification of an object is held in a corresponding manner to a group of voxels included in the object. Alternatively, as the specifying information, it is possible to use groups of information obtained by appending, to each voxel included in the volume data, identification information that enables identification of the object to which that voxel belongs. Besides, the medical diagnostic imaging device 10 can analyze the generated volume data and identify the position of the center of gravity of each object. Herein, the information indicating the position of the center of gravity of each object can also be included in the specifying information. Meanwhile, the user can refer to the specifying information that is automatically created by the medical diagnostic imaging device 10, and can correct the details of the specifying information. That is, the specifying information can be generated in a semi-automatic manner. Then, the medical diagnostic imaging device 10 sends the generated volume data and the specifying information to the image archiving device 20.
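As a sketch of one possible encoding of this specifying information, the identification information can be held as a label volume in which every voxel carries the ID of the object it belongs to, with the centers of gravity computed from the labels. The code below is an illustrative assumption using NumPy, not the actual format used by the device:

```python
import numpy as np

def centers_of_gravity(labels: np.ndarray) -> dict:
    """Mean voxel coordinate of each labeled object (0 = background)."""
    centers = {}
    for obj_id in np.unique(labels):
        if obj_id == 0:
            continue                                  # skip background
        coords = np.argwhere(labels == obj_id)        # (N, 3) voxel indices
        centers[int(obj_id)] = coords.mean(axis=0)    # center of gravity
    return centers
```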
The image archiving device 20 represents a database for archiving the medical images. More particularly, the image archiving device 20 is used to store and archive the volume data and the specifying information sent by the medical diagnostic imaging device 10.
The stereoscopic image display device 30 is capable of displaying a plurality of parallax images having mutually different parallaxes, and thus enabling a viewer to view a stereoscopic image. The stereoscopic image display device 30 can be configured to implement, for example, the integral imaging method (II method) or the 3D display method in the multi-eye mode. Examples of the stereoscopic image display device 30 include a television (TV) or a personal computer (PC) that enables viewers to view stereoscopic images with the unaided eye. In the embodiment, the stereoscopic image display device 30 performs volume rendering with respect to the volume data that is obtained from the image archiving device 20, and generates and displays a group of parallax images. Herein, the group of parallax images is a group of images generated by performing a volume rendering operation in which the viewpoint position is shifted in increments of a predetermined parallax angle with respect to the volume data. Thus, the group of parallax images includes a plurality of parallax images having different viewpoint positions.
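A minimal sketch of such parallax-image generation is given below; `volume_render` stands in for an arbitrary volume rendering routine and is an assumption, not a real API:

```python
def generate_parallax_images(volume, num_parallaxes, parallax_angle_deg,
                             volume_render):
    """Render one image per viewpoint, shifting the viewing angle in
    increments of the predetermined parallax angle."""
    start = -(num_parallaxes - 1) / 2.0   # center the fan of viewpoints
    return [volume_render(volume,
                          view_angle_deg=(start + i) * parallax_angle_deg)
            for i in range(num_parallaxes)]
```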
In the embodiment, while confirming the stereoscopic image of a medical image displayed on the stereoscopic image display device 30, the user can perform operations so that an area on which the user desires to focus (a region of interest) is displayed in a satisfactory manner.
The display 50 displays a stereoscopic image that is generated by the image processor 40. As illustrated in
As the display panel 52, it is possible to use a direct-view two-dimensional display such as an organic electroluminescence (organic EL) display, a liquid crystal display (LCD), or a plasma display panel (PDP), or to use a projection-type display. Moreover, the display panel 52 can also have a backlight.
The light beam controller 54 is disposed opposite to the display panel 52 with a clearance gap maintained therebetween. The light beam controller 54 controls the direction of emission of the light beam that is emitted from each sub-pixel of the display panel 52. The light beam controller 54 has a plurality of linearly-extending optical apertures, arranged in the first direction, for emitting the light beams. For example, the light beam controller 54 can be a lenticular sheet having a plurality of cylindrical lenses arranged thereon, or can be a parallax barrier having a plurality of slits arranged thereon. The optical apertures are arranged corresponding to the element images of the display panel 52.
In the embodiment, in the stereoscopic image display device 30, the sub-pixels of each color component are arranged in the second direction, while the color components are repeatedly arranged in the first direction, thereby forming a "longitudinal stripe arrangement". However, that is not the only possible case. Moreover, in the embodiment, the light beam controller 54 is disposed in such a way that the extending direction of the optical apertures thereof coincides with the second direction of the display panel 52. However, that is not the only possible case. Alternatively, the light beam controller 54 may be disposed in such a way that the extending direction of the optical apertures thereof has a predetermined tilt with respect to the second direction of the display panel 52.
As illustrated in
In each element image 24, the light beams emitted from the pixels (the pixel 241 to the pixel 243) of the parallax images reach the light beam controller 54. Then, the light beam controller 54 controls the travelling direction and the scattering of each light beam, and emits the light beams toward the whole surface of the display 50. For example, in each element image 24, the light emitted from the pixel 241 of the parallax image 1 travels in the direction of an arrow Z1; the light emitted from the pixel 242 of the parallax image 2 travels in the direction of an arrow Z2; and the light emitted from the pixel 243 of the parallax image 3 travels in the direction of an arrow Z3. In this way, in the display 50, the direction of emission of the light emitted from each pixel in each element image is regulated by the light beam controller 54.
Given below is the detailed explanation of the image processor 40.
The setter 41 sets a region of interest, on which the user should focus, in the volume data (in this example, in the volume data of the brain illustrated in
As illustrated in
The sensor 45 detects the coordinate value of the input device (such as a pen) in the three-dimensional space of the display 50 in which the stereoscopic image is displayed. The input device corresponds to an indicator that is used by the user 4 to indicate a three-dimensional position.
Meanwhile, the sensor 45 is not limited to the explanation given above. In essence, as long as the sensor 45 is able to detect the coordinate value of the input device in the three-dimensional space of the display 50, it serves the purpose. Besides, the type of the input device is also not limited to a pen. For example, a finger of the viewer, a surgical knife, or scissors can serve as the input device. In the embodiment, when the user confirms the default stereoscopic image and specifies a predetermined position in the three-dimensional space of the display 50 using the input device, the sensor 45 detects the three-dimensional coordinate value of the input device at that point of time.
The receiver 46 receives input of the three-dimensional coordinate value detected by the sensor 45 (that is, receives an input from the user). In response to the input from the user, the specifier 47 specifies an area in the volume data (hereinafter, called an “instructed region”). Herein, the instructed region can be a point present in the volume data or can be a surface having a certain amount of spread.
In the embodiment, the specifier 47 specifies, as the instructed region, a normalized value that is obtained by normalizing the three-dimensional coordinate value, which is detected by the sensor 45, in a corresponding manner to the coordinates in the volume data. For example, assume that the range of coordinates in the volume data is 0 to 512 in the X-direction, 0 to 512 in the Y-direction, and 0 to 256 in the Z-direction. Moreover, assume that the range in the three-dimensional space of the display 50 that is detectable by the sensor 45 (i.e., the range of spatial coordinates in a stereoscopically-displayed medical image) is 0 to 1200 in the X-direction, 0 to 1200 in the Y-direction, and 0 to 1200 in the Z-direction. If (x1, y1, z1) represents the three-dimensional coordinate value detected by the sensor 45, then the instructed region is equal to (x1×(512/1200), y1×(512/1200), z1×(256/1200)). Meanwhile, the stereoscopically-displayed medical image and the leading end of the input device need not appear to be coincident with each other. As illustrated in
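This normalization can be sketched as follows, using the example ranges from the text (sensor space 0 to 1200 on each axis, volume space 512 × 512 × 256); the function name is an illustrative assumption:

```python
def to_volume_coords(p, sensor_max=(1200.0, 1200.0, 1200.0),
                     volume_max=(512.0, 512.0, 256.0)):
    """Normalize a sensor coordinate into volume-data coordinates."""
    return tuple(v * m / s for v, m, s in zip(p, volume_max, sensor_max))

# For example, to_volume_coords((600, 600, 600)) gives (256.0, 256.0, 128.0).
```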
Moreover, the method of specifying the instructed region is not limited to the method explained above. Alternatively, for example, as illustrated in
Meanwhile, for example, the user can operate a keyboard and directly input a three-dimensional coordinate value within the volume data. Alternatively, for example, as illustrated in
Returning to the explanation with reference to
Herein, if d2 represents the distance between the three-dimensional coordinate value (x1, y1, z1), which is specified by the specifier 47, and the coordinate value (x2, y2, z2), which indicates the position of the center of gravity of the first object; then d2 can be obtained using Equation 1 given below.
d2 = √((x1−x2)² + (y1−y2)² + (z1−z2)²)   (1)
Similarly, if d3 represents the distance between the three-dimensional coordinate value (x1, y1, z1), which is specified by the specifier 47, and the coordinate value (x3, y3, z3), which indicates the position of the center of gravity of the second object; then d3 can be obtained using Equation 2 given below.
d3 = √((x1−x3)² + (y1−y3)² + (z1−z3)²)   (2)
Moreover, if d4 represents the distance between the three-dimensional coordinate value (x1, y1, z1), which is specified by the specifier 47, and the coordinate value (x4, y4, z4), which indicates the position of the center of gravity of the third object; then d4 can be obtained using Equation 3 given below.
d4 = √((x1−x4)² + (y1−y4)² + (z1−z4)²)   (3)
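A minimal sketch of Equations (1) to (3), together with the selection of the nearest object described next, might look as follows; the function and variable names are illustrative assumptions:

```python
import math

def nearest_object(instructed, centers):
    """instructed: (x1, y1, z1); centers: {object_id: center of gravity}.
    Returns the ID of the object whose center of gravity is nearest."""
    def dist(center):
        # Euclidean distance, as in Equations (1) to (3).
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(instructed, center)))
    return min(centers, key=lambda obj_id: dist(centers[obj_id]))
```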
Then, the determiner 48 determines the object having the smallest calculated distance to be the region of interest. However, the method of determining the region of interest is not limited to this method. Alternatively, for example, the object having the smallest distance in the X-Y plane, excluding the Z-direction (the depth direction), can be determined to be the region of interest. Still alternatively, for each voxel coordinate included in each object, the distance to the instructed region can be calculated, and the object that includes the voxel coordinate having the smallest distance can be determined to be the region of interest. Still alternatively, for example, as illustrated in
Still alternatively, instead of determining an object that is present in the volume data to be the region of interest, a cuboid or spherical area that has an arbitrary size with the instructed region serving as the base point can also be determined to be the region of interest. Still alternatively, if an object is present at a distance equal to or smaller than a predetermined threshold distance from the instructed region, then that object can be determined to be the region of interest. Still alternatively, if an object is present at a distance equal to or smaller than a predetermined threshold distance from the instructed region, then a cuboid or spherical area that has an arbitrary size with the instructed region serving as the base point can be determined to be the region of interest. Still alternatively, for example, as illustrated in
Meanwhile, as illustrated in
Meanwhile, if an object is present on the periphery of the instructed region specified by the specifier 47, then the determiner 48 can determine, as the region of interest, an expanded area that includes the instructed region and at least some portion of the object present on the periphery of the instructed region. For example, as illustrated in
In essence, the determiner 48 can determine, as the region of interest, an expanded area that includes the instructed region and at least some portion of an object present on the periphery of the instructed region. For example, from among the objects included in the volume data, when the target object for operation (for example, a tumor) is specified as the instructed region, an area including the target object for operation and other objects (for example, blood vessels or nerves) present on its periphery is set as the region of interest. That makes it possible for the doctor to accurately understand the positional relationship between the target object for operation and the objects on its periphery. As a result, an appropriate diagnosis can be made before performing the operation.
Given below is the explanation of the details of the controller 42 illustrated in
In the embodiment, the controller 42 sets the depth range in such a way that the width in the depth direction (the Z-direction) of the region of interest in the volume data coincides with the width of the stereoscopic display allowable range. For example, as illustrated in
Meanwhile, if the region of interest 1001 is stereoscopically displayed in a rotatable manner, then the depth control can be performed in such a way that a maximum length 1003 of the region of interest 1001 coincides with the width of the stereoscopic display allowable range. Thus, even when the region of interest 1001 is stereoscopically displayed in a rotatable manner, it becomes possible to fit the region of interest 1001 within the stereoscopic display allowable range. As a result, a high-definition stereoscopic display can be achieved while achieving abundant expression of the stereoscopic effect. Meanwhile, for example, as illustrated in
Meanwhile, it is also possible to perform the depth control in such a way that the ratio between the depth direction of the stereoscopically-displayed region of interest and a direction perpendicular to the depth direction (i.e., the X-direction or the Y-direction) is close to the ratio in the real world. More particularly, the controller 42 can set the depth range of the region of interest in such a way that the ratio of the X-direction, the Y-direction, and the Z-direction of the stereoscopically-displayed region of interest is close to the ratio in the real world. Moreover, for example, while the default stereoscopic image is being displayed, if the ratio of the X-direction, the Y-direction, and the Z-direction of the region of interest is already close to the ratio of the object in the real world, then the controller 42 need not perform the depth control. In this way, it becomes possible to prevent a situation in which the shape of the stereoscopically-displayed region of interest differs from the shape in the real world.
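As a sketch of the depth control described above, assuming the depth extent of the region of interest and the allowable range are both known (names are illustrative):

```python
def depth_scale_factor(roi_depth_width, allowable_depth_width):
    """Factor by which the displayed depth of the region of interest is
    stretched or shrunk so that it coincides with the allowable range."""
    return allowable_depth_width / roi_depth_width

# For example, an ROI spanning 20 depth units on a display whose
# stereoscopic display allowable range spans 28 units would be scaled
# by 28 / 20 = 1.4 in the Z-direction.
```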
Given below is the explanation about performing the position control. Since the region of interest set by the setter 41 represents the area on which the user wishes to focus, it is preferable to display the region of interest in high definition. In that regard, in the embodiment, the controller 42 performs the position control so as to set the display position of the region of interest, which is set by the setter 41, close to the display surface. As described earlier, since an image displayed on the display surface of the display 50 is displayed in the highest definition, bringing the display position of the region of interest close to the display surface makes it possible to display the region of interest in high definition. In the embodiment, the controller 42 performs the position control in such a way that the stereoscopically-displayed region of interest fits within the stereoscopic display allowable range.
Herein, for example, the explanation is given under the assumption that the cuboid area 1001 illustrated in
Meanwhile, the method for performing the position control is not limited to the example explained above. Alternatively, for example, the display position of the region of interest can be set in such a way that the position of the center of gravity of the region of interest matches the center position of the display surface. Still alternatively, the display position of the region of interest can be set in such a way that the midpoint of the greatest length of the region of interest matches the center position of the display surface. When at least a single object is present in the region of interest, the display position of the region of interest can be set in such a way that the position of the center of gravity of any one object matches the center position of the display surface. However, for example, as illustrated in
By performing the depth control and the position control as described above, the controller 42 sets various parameters, such as camera intervals, camera angles, and camera positions, to be used at the time of creating stereoscopic images; and sends the set parameters to the image generator 43. Meanwhile, in the embodiment, although the controller 42 performs both the depth control and the position control, that is not the only possible case. In essence, as long as the controller 42 performs at least one of the depth control or the position control, it serves the purpose.
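A sketch of the position control under the same illustrative assumptions: translate the volume along the depth axis so that the center of the region of interest lies on the display surface (depth 0), then keep the region inside the allowable range. This is one possible realization, not the prescribed one:

```python
def position_offset(roi_center_z, roi_half_depth,
                    allowable_near, allowable_far):
    """Depth translation bringing the ROI center onto the display
    surface (z = 0) while keeping the ROI inside the allowable range;
    assumes the depth control has already made the ROI fit."""
    offset = -roi_center_z
    near = -roi_half_depth            # ROI front face after centering
    far = roi_half_depth              # ROI back face after centering
    if near < allowable_near:
        offset += allowable_near - near
    if far > allowable_far:
        offset -= far - allowable_far
    return offset
```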
Explained below are the details of the image generator 43 illustrated in
The image generator 43 generates the stereoscopic image of the volume data in such a way that, of the volume data, a region which does not overlap with the region of interest is hidden. For example, the image generator 43 can generate a stereoscopic image of the volume data in such a way that, of the volume data, the image portion other than the region of interest is hidden. That is, regarding the image portion other than the region of interest in the volume data, the image generator 43 can set the pixel values to a value representing hiding. Alternatively, the configuration can be such that the image portion other than the region of interest is not generated in the first place. Still alternatively, the image generator 43 can generate a stereoscopic image of the volume data in such a way that the image portion other than the region of interest is closer to being transparent than the region of interest. That is, regarding the image portion other than the region of interest in the volume data, the image generator 43 can set the pixel values to a value closer to transparency than the pixel values of the region of interest.
Still alternatively, the image generator 43 can generate the stereoscopic image of the volume data in such a way that, of the volume data, a region which does not overlap with the region of interest and which is positioned on the outside of the display range is translucent. Still alternatively, the image generator 43 can generate a stereoscopic image of the volume data in such a way that, during the stereoscopic display, the image portion of the volume data which is on the outside of the stereoscopic display allowable range is hidden. Alternatively, the image generator 43 can generate a stereoscopic image of the volume data in such a way that, during the stereoscopic display, the image portion of the volume data which is on the outside of the stereoscopic display allowable range is closer to being transparent than the image portion present inside the stereoscopic display allowable range.
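One way to sketch this hiding or fading, assuming the volume's opacities are held in a NumPy array and a boolean mask of the region of interest is available (both illustrative assumptions):

```python
import numpy as np

def fade_outside_roi(opacity: np.ndarray, roi_mask: np.ndarray,
                     outside_opacity: float = 0.0) -> np.ndarray:
    """outside_opacity = 0.0 hides everything outside the region of
    interest; a small positive value (e.g. 0.1) renders it nearly
    transparent instead."""
    out = opacity.copy()
    out[~roi_mask] *= outside_opacity   # attenuate voxels outside the ROI
    return out
```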
Still alternatively, as illustrated in
Explained below with reference to
Meanwhile, at Step S1401, if it is determined that an input is received from a user (YES at Step S1401), then the specifier 47 specifies the instructed region according to the input from the user (Step S1403). Subsequently, the determiner 48 determines the region of interest by using the specifying information and the instructed region (Step S1404). Moreover, the controller 42 obtains the stereoscopic display allowable range (Step S1405). For example, the controller 42 can access a memory (not illustrated) and obtain the stereoscopic display allowable range that has been set in advance. Then, the controller 42 performs the depth control and the position control using the stereoscopic display allowable range and the region of interest (Step S1406). Subsequently, according to the result of the control performed by the controller 42, the image generator 43 generates a stereoscopic image of the volume data (Step S1407). Then, the image generator 43 sends the stereoscopic image of the volume data to the display 50, and the display 50 displays the stereoscopic image of the volume data received from the image generator 43 (Step S1408). These operations are performed in a repeated manner at predetermined intervals.
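The flow of Steps S1401 to S1408 can be sketched as one cycle of a polling loop; the component interfaces below are hypothetical stand-ins for the units described above:

```python
def processing_cycle(receiver, specifier, determiner, controller,
                     image_generator, display, volume, specifying_info):
    user_input = receiver.poll()                              # Step S1401
    if user_input is None:
        return                                                # no input this cycle
    instructed = specifier.specify(user_input)                # Step S1403
    roi = determiner.determine(specifying_info, instructed)   # Step S1404
    allowable = controller.allowable_range()                  # Step S1405
    params = controller.control(roi, allowable)               # Step S1406
    image = image_generator.generate(volume, params)          # Step S1407
    display.show(image)                                       # Step S1408
```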
As described above, in the embodiment, when a region of interest, on which the user should focus, is set in the volume data; the controller 42 performs at least either the depth control, in which the depth range of the region of interest that is stereoscopically displayed on the display 50 is set to a value closer to the stereoscopic display allowable range as compared to the state prior to setting the region of interest, or the position control, in which the display position of the region of interest is set close to the display surface. As a result, it becomes possible to enhance the visibility of the stereoscopic image of the region of interest.
The adjuster 70 adjusts the range of the region of interest, which is set by the setter 41, according to the input from the user. For example, as illustrated in
For example, according to the depth range of the region of interest, the controller 42 can control the size of the region of interest to be displayed in a plane perpendicular to the depth direction. As an example of this control method, when a standard value of the depth range (i.e., the depth range before performing the depth control) is set to "1" and, as a result of performing the depth control, the depth range is set to "1.4"; one possible method is to set the enlargement factor in the X-direction and the Y-direction of the region of interest to "1.4" as well. As a result, the depth range of the region of interest that is stereoscopically displayed on the display 50 is enlarged by 1.4 times, and the size of the region of interest displayed in a plane perpendicular to the depth direction is also enlarged by 1.4 times from the standard size.
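A sketch of this method with the 1.4× figures from the example above (the function name is illustrative):

```python
def xy_enlargement_factor(depth_before: float, depth_after: float) -> float:
    """Match the in-plane (X, Y) enlargement to the change in depth
    range produced by the depth control."""
    return depth_after / depth_before

# With the example above: xy_enlargement_factor(1.0, 1.4) == 1.4, so the
# region of interest is also enlarged 1.4 times in the plane
# perpendicular to the depth direction.
```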
The image generator 43 generates a stereoscopic image of the volume data according to the depth range set by the controller 42 and according to the enlargement factor in the X-direction and the Y-direction. Depending on the enlargement factor in the X-direction and the Y-direction, the region of interest may not fit within the display surface. In such a case, either a stereoscopic image can be generated only for the portion of the region of interest that fits within the display surface, or a stereoscopic image for the portion not fitting within the display surface can be generated at the same time. Besides, the stereoscopic image can be generated by matching the enlargement factor in the X-direction and the Y-direction of the volume data other than the region of interest to the enlargement factor of the region of interest.
For example, as illustrated in
In the embodiment described above, the medical diagnostic imaging device 10 analyzes the volume data generated therein and generates the specifying information. However, that is not the only possible case. Alternatively, for example, the stereoscopic image display device 30 can be configured to analyze the volume data. In that case, for example, the medical diagnostic imaging device 10 sends only the generated volume data to the image archiving device 20, and the stereoscopic image display device 30 obtains the volume data stored in the image archiving device 20. Meanwhile, for example, instead of using the image archiving device 20, a memory for storing the generated volume data can be disposed in the medical diagnostic imaging device 10. In this case, the stereoscopic image display device 30 obtains the volume data from the medical diagnostic imaging device 10.
Then, the stereoscopic image display device 30 analyzes the obtained volume data and generates the specifying information. Herein, the specifying information generated by the stereoscopic image display device 30 either can be stored in a memory in the stereoscopic image display device 30 along with the volume data obtained from the medical diagnostic imaging device 10 or obtained from the image archiving device 20; or can be stored in the image archiving device 20.
As illustrated in
The computer program, which is executed in the image processor 40 according to the embodiment described above, can be stored in a downloadable manner in a computer connected to a network such as the Internet or can be made available for distribution through a network such as the Internet. Alternatively, the computer program, which is executed in the image processor 40 according to the embodiment described above, can be stored in advance in a ROM or the like.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
This application is a continuation of PCT international application Ser. No. PCT/JP2012/051124 filed on Jan. 19, 2012 which designates the United States, the entire contents of which are incorporated herein by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2012/051124 | Jan 2012 | US
Child | 14335432 | | US