This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2012-038584, filed on Feb. 24, 2012; the entire contents of which are incorporated herein by reference.
The present embodiment relates to a medical image processing apparatus for generating medical images.
Medical image processing apparatuses exist for displaying three-dimensional image data collected by medical image diagnostic apparatuses. Such medical image diagnostic apparatuses include an X-ray computed tomography (CT) apparatus, a magnetic resonance imaging (MRI) apparatus, an X-ray diagnostic apparatus, an ultrasound diagnostic apparatus, etc.
In addition, such medical image diagnostic apparatuses include an apparatus such as a multi-slice X-ray CT system that can carry out high-definition (high-resolution) imaging over a wide range per unit time. This multi-slice X-ray CT system uses a two-dimensional detector having detector elements of m channels × n rows (m, n are positive integers) in total, in which a plurality of rows (for example, 4 rows, 8 rows, etc.) of the one-row detector used in a single-slice X-ray CT system are arranged in the direction orthogonal to the rows.
With such a multi-slice X-ray CT system, the larger the detector (that is, the greater the number of detector elements configuring the detector), the wider the region over which projection data can be acquired in a single scan. In other words, by imaging over time using a multi-slice X-ray CT system provided with such a detector, it is possible to generate volume data of a specific site at a high frame rate (hereinafter sometimes referred to as a "Dynamic Volume scan"). This makes it possible for an operator to assess the movement of the specific site within a unit of time by means of three-dimensional images.
In addition, a medical image processing apparatus exists that generates medical images based on image data obtained by such a medical image diagnostic apparatus (for example, volume data reconstructed by an X-ray CT system).
The purpose of the present embodiment is to provide a medical image processing apparatus capable of easily assessing the motion of other sites with respect to a specific site when assessing the motion of a flexible site configured by a plurality of sites.
The present embodiment pertains to a medical image processing apparatus comprising storage, a reconstruction processor, an extracting part, an analyzing part, an image processor, a display, and a display controller. The storage stores three-dimensional image data at a plurality of timing points indicating a flexible site constructed by a plurality of sites of a biological body. The reconstruction processor subjects projection data to reconstruction processing to generate three-dimensional image data regarding the flexible site for each of the plurality of timing points. The extracting part extracts a plurality of construction sites constructing the flexible site from the image data. The analyzing part calculates positional information indicating the position of a first site among the plurality of construction sites extracted from the image data at a first timing point, as well as the position of the first site extracted from the image data at a second timing point. The image processor generates, based on the positional information, a plurality of medical images indicating changes over time in the position of a second site among the plurality of construction sites relative to the first site. The display controller causes the display to display the plurality of medical images along the time sequence.
The medical image processing apparatus according to the first embodiment generates medical images based on the image data (for example, volume data) obtained by a medical image diagnostic apparatus such as an X-ray CT system. Hereinafter, the configuration of the medical image processing apparatus according to the present embodiment will be described with reference to
The image data storage 10 is storage for storing three-dimensional image data (for example, volume data) at a plurality of timing points obtained by imaging a subject in each examination with an imaging part 500. The imaging part 500 is a medical imaging apparatus capable of obtaining three-dimensional image data, such as a CT apparatus, an MRI apparatus, an ultrasound diagnostic apparatus, etc. It should be noted that, hereinafter, the three-dimensional image data is referred to as "image data." Furthermore, hereinafter, the image data is described as volume data obtained by a CT apparatus. In addition, according to the present embodiment, the image data is constructed so as to allow extraction of bones. The flexible site is explained exemplifying a part configured by two bones as well as a joint connecting these bones. The joint connects the bones and includes joint fluid, a synovial membrane, and a joint capsule. Further, the ends of the bones connected through the joint have cartilage, and by means of this cartilage, the flexible site can move smoothly. In other words, each bone also includes cartilage. In addition, this flexible site comprises a plurality of construction sites; in the above case, these construction sites are the two bones connected by the joint.
Here,
As illustrated in
First, the operation of the respective components when the standard for alignment is specified will be described.
The image processing unit 20 includes a configuration extracting part 21, an image processor 22, and image storage 23.
The configuration extracting part 21 includes an object extracting part 211 and a position analyzing part 212. First, the configuration extracting part 21 reads the image data of each timing point from the image data storage 10. The configuration extracting part 21 then outputs all of the read image data of each timing point to the object extracting part 211 and instructs it to extract objects.
The object extracting part 211 receives the image data of each timing point from the configuration extracting part 21. According to the present embodiment, the object extracting part 211 extracts the bone parts as objects based on the voxel data in this image data. Here,
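For illustration only, the following is a minimal sketch of how such bone-object extraction might be implemented, assuming the image data is available as a NumPy array of CT values in Hounsfield units (HU); the threshold and minimum object size below are illustrative assumptions, not values prescribed by the embodiment.

```python
# Hypothetical sketch of bone-object extraction from CT volume data.
# Assumes `volume` is a NumPy array of CT values in Hounsfield units (HU);
# the 200 HU threshold and the minimum object size are illustrative only.
import numpy as np
from scipy import ndimage

def extract_bone_objects(volume, threshold_hu=200, min_voxels=500):
    """Binarize the volume at a bone-like CT value and split it into
    connected components, one mask per candidate bone object."""
    bone_mask = volume >= threshold_hu
    labels, n = ndimage.label(bone_mask)       # connected-component labeling
    objects = []
    for i in range(1, n + 1):
        obj = labels == i
        if obj.sum() >= min_voxels:            # discard small fragments
            objects.append(obj)
    return objects                             # e.g., masks for M11, M12, M13
```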
In addition, the object extracting part 211 outputs the image data corresponding to predetermined timing points to the image processor 22 together with information indicating the bone objects extracted from this image data. Based on this information indicating the bone objects, the image processor 22 is capable of generating medical images in which the respective bones in the image data are displayed so as to be identifiable. Further, as long as the respective bones can be identified, the information to be output to the image processor 22 together with the image data is not limited to this information indicating the bone objects. For example, supplementary information for identifying the bones may be associated with the position corresponding to each bone in the image data.
The image processor 22 receives, from the object extracting part 211, the image data corresponding to specific timing points and the information indicating the bone objects extracted from this image data. The image processor 22 generates medical images by subjecting the image data to image processing based on predetermined image processing conditions. When generating the medical images, the image processor 22 identifies the positions, directions, and sizes of the respective bones based on the information indicating the bone objects, and identifies the regions of the respective bones in the generated medical images. The image processor 22 relates the information indicating each identified region with the information indicating the bone object corresponding to that region. Thereby, by specifying a region in the medical images, the bone object corresponding to that region can be identified. The image processor 22 outputs the medical images, in which the regions of the respective bone objects are identified, to the display controller 30 together with the information indicating these regions.
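As a hedged illustration of relating regions in the generated medical images with the bone objects they came from, the sketch below assumes a simple orthographic projection along one volume axis; an actual implementation would use the renderer's own camera geometry, and the function name is hypothetical.

```python
# Hypothetical sketch: map each pixel of a rendered image to the bone
# object displayed there. Assumes an orthographic projection along the
# volume's third (depth) axis, which is an illustrative simplification.
import numpy as np

def object_regions(object_masks):
    """Project each 3-D object mask to the image plane; the result maps
    every pixel to the index of the bone object shown there (-1 = none)."""
    region_map = None
    for idx, mask in enumerate(object_masks):
        footprint = mask.any(axis=2)           # collapse the depth axis
        if region_map is None:
            region_map = np.full(footprint.shape, -1, dtype=int)
        region_map[footprint] = idx            # later objects overwrite earlier
    return region_map
```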
The display controller 30 receives, from the image processor 22, the medical images and the information indicating the regions of the bone objects included in these medical images. The display controller 30 causes a display 401 to display the respective regions included in the medical images such that they can be specified. Thereby, by specifying a desired region in the medical images through an operation part 402, the operator can specify the bone object corresponding to that region as the object serving as the standard for alignment. Upon receiving the specification of the region in the medical images from the operator, the operation part 402 notifies the position analyzing part 212 of the information indicating the bone object related to this region.
Next, the operation of the respective components when alignment is carried out will be described.
The position analyzing part 212 receives, from the object extracting part 211, image data of each timing point with the information indicating the bone objects related. In addition, the position analyzing part 212 receives information indicating the bone objects specified by the user from the operation part 402.
First, the position analyzing part 212 identifies the bone objects notified from the operation part 402 among bone objects M11, M12, and M13 illustrated in
If the standard object M11 is identified, the position analyzing part 212 extracts at least three portions having characteristics in its shape (hereinafter, referred to as “shape characteristics”) from this standard object M11. For example, as illustrated in
Next, the position analyzing part 212 forms a plane that represents the position and direction of each object in a simulated manner from the portions (namely, points) indicating the three extracted shape characteristics, and relates the plane with the object from which the shape characteristics were extracted. Here,
Here,
When the joint moves, the position and direction of each of the plurality of bones constructing the joint, as well as their relative positional relations (hereinafter sometimes simply referred to as "positional relations"), change; however, the shape and size of each bone do not change. In other words, the objects M11 and M13 extracted at each timing point change in their positional relation along the time sequence, while the shape and size of each object do not change. The same applies to the planes P11 and P13 extracted based on the shape characteristics of each object. According to the present embodiment, using this characteristic, the position analyzing part 212 identifies the position and direction of the standard object M11, which is the standard for alignment, based on the position and direction of the plane P11. Thus, by forming a plane from the object, there is no need to analyze the complete shape in order to grasp the position and direction of the object, making it possible to reduce the processing load.
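A minimal sketch of forming such a plane from three shape-characteristic points follows; the points are assumed to be given as three-dimensional coordinates (for example, from the extraction step above), and the function name is hypothetical.

```python
# Minimal sketch of forming the plane P11 from three shape-characteristic
# points of the standard object. The returned triple stands in for the
# object's position and direction in a simulated manner.
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (origin, normal, in_plane_axis): the centroid fixes the
    position; the unit normal and one in-plane axis fix the direction."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    origin = (p1 + p2 + p3) / 3.0
    u = p2 - p1
    normal = np.cross(u, p3 - p1)
    normal /= np.linalg.norm(normal)           # unit normal of the plane
    axis = u / np.linalg.norm(u)               # reference direction in the plane
    return origin, normal, axis
```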
Hereinafter, in order to clearly explain changes in the relative position of the other object M13 associated with alignment based on the standard object M11, as illustrated in
Thus, based on the plane P11 extracted at each timing point, the position analyzing part 212 identifies the position and direction of the object M11 at each timing point. Here,
As illustrated in
Therefore, the position analyzing part 212 calculates positional information for alignment among the image data of the respective timing points such that the plane P11 corresponding to the standard object M11 is located at the same position at each timing point (that is, such that the relative positions coincide with each other). As a specific example, the position analyzing part 212 calculates a relative coordinate system for each set of image data based on the position and direction of the plane P11. Thereby, for example, by carrying out alignment such that the axes of this relative coordinate system are identical among the respective image data, the position and direction of the plane P11, namely, the position and direction of the bone object M11, are always constant. In other words, by carrying out alignment and generating medical images for each timing point based on this calculated positional information, when the respective medical images are displayed along the time sequence, it is possible to display the relative motion of the other site with respect to the site corresponding to the object M11.
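The following sketch illustrates one way such a relative coordinate system might be computed and applied, assuming the plane P11 is represented by an origin, a unit normal, and one in-plane axis as in the earlier sketch; it is an illustrative interpretation, not a definitive implementation of the embodiment.

```python
# Hypothetical sketch of the relative coordinate system: an orthonormal
# frame built from the plane P11, and the world-to-plane transform that
# keeps P11 at the same position and direction at every timing point.
import numpy as np

def relative_frame(origin, normal, axis):
    """4x4 world->plane transform; the plane spans the local x-y plane."""
    x = axis
    z = normal
    y = np.cross(z, x)                         # completes a right-handed frame
    R = np.vstack([x, y, z])                   # rows are the frame axes
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ origin                     # move the plane origin to (0,0,0)
    return T

def to_relative(points, T):
    """Express 3-D points in the relative coordinate system."""
    points = np.asarray(points, dtype=float)
    homog = np.c_[points, np.ones(len(points))]
    return (T @ homog.T).T[:, :3]
```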
In addition, the position and direction of the other object M13 relative to the standard object M11 can be identified as coordinates in the calculated relative coordinate system. For example,
Further, if alignment can be carried out such that the position and direction of the standard object M11 are always constant among respective image data, the alignment method is not limited to the above-described method for calculating the relative coordinate system. For example, alignment may be carried out by calculating displacement in the position and direction of the standard object M11 in the absolute coordinate system among respective image data, and carrying out coordinate conversion based on this displacement. Hereinafter, the embodiment will be described assuming that alignment is carried out among respective image data by calculating the relative coordinate system.
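As an illustration of this displacement-based alternative, the sketch below fits a rigid rotation and translation to the three corresponding landmark points of the standard object at two timing points (a Kabsch-style fit); the function name and its inputs are assumptions for illustration.

```python
# Alternative sketch: compute the displacement (rotation + translation)
# of the standard object between two timing points from its three
# landmark points, then use it for coordinate conversion.
import numpy as np

def displacement(points_t1, points_t2):
    """Rigid transform carrying the landmarks at timing 2 onto timing 1."""
    P, Q = np.asarray(points_t1, float), np.asarray(points_t2, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - cQ).T @ (P - cP)                  # cross-covariance of landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # guard against reflection
    t = cP - R @ cQ
    return R, t                                # apply as: x_aligned = R @ x + t
```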
In addition, the position analyzing part 212 is not limited to the above-described method based on the plane P11, provided that the position and direction of the standard object M11 can be identified. For example, the position and direction of the standard object M11 may be identified based on the outline of the standard object M11. In this case, the position analyzing part 212 identifies a three-dimensional positional relation. In addition, for two-dimensional alignment, it is possible to extract from the standard object M11 a line connecting at least two shape characteristics, and to identify the position and direction of the standard object M11 based on the extracted line. For example, as illustrated in
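A short sketch of such line-based two-dimensional alignment is given below, assuming the two shape characteristics are available as 2-D coordinates in a slice; it is illustrative only, and the helper names are hypothetical.

```python
# Sketch of two-dimensional alignment from a line through two
# shape-characteristic points: the line's midpoint and angle stand in
# for the position and direction of the standard object in the slice.
import numpy as np

def line_pose_2d(a, b):
    """Return (midpoint, angle) of the line a-b in a 2-D slice."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mid = (a + b) / 2.0
    angle = np.arctan2(b[1] - a[1], b[0] - a[0])
    return mid, angle

def align_2d(points, mid, angle):
    """Rotate/translate 2-D points so the line sits at the origin, axis-aligned."""
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])
    return (np.asarray(points, float) - mid) @ R.T
```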
In addition, an example in which a bone object specified by the operator is defined as the standard object has been described above; however, the position analyzing part 212 may be operated so as to determine the standard object automatically. In this case, the position analyzing part 212 stores biological information on the respective parts constructing the biological body (for example, information indicating the positional relation of the bones constructing the brachial region and the antebrachial region), and may identify the standard object based on this biological information. Further, as another method, the position analyzing part 212 may store information indicating the shape of the standard object in advance and identify an object that coincides with this shape as the standard object, as in the sketch below.
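For illustration, the following shows one conceivable way to match an extracted object against stored shape information, using a simple descriptor (voxel count plus principal-axis spreads from a PCA of the voxel coordinates); the descriptor and tolerance are assumptions for illustration and are not specified by the embodiment.

```python
# Hedged sketch of automatic standard-object selection by comparing a
# simple shape descriptor of each extracted object with a stored template.
import numpy as np

def shape_descriptor(mask, spacing=(1.0, 1.0, 1.0)):
    """Voxel count plus principal-axis spreads of the object's voxels."""
    coords = np.argwhere(mask) * np.asarray(spacing)
    centered = coords - coords.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # principal-axis variance
    return np.r_[mask.sum(), np.sqrt(np.maximum(eigvals, 0.0))]

def pick_standard_object(object_masks, template_descriptor):
    """Return the index of the object whose descriptor best matches."""
    scores = [np.linalg.norm(shape_descriptor(m) - template_descriptor)
              for m in object_masks]
    return int(np.argmin(scores))
```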
In addition, as long as the positional relation of the bones can be analyzed, it is not always necessary to image the whole of each bone, such as the whole image of the brachial region and the whole image of the antebrachial region, as illustrated in
As described above, the position analyzing part 212 calculates a relative coordinate system regarding each set of image data for each timing point based on the position and direction of the plane P11. When the relative coordinate system is calculated with respect to a series of timing points, the position analyzing part 212 attaches the information indicating the calculated relative coordinate system (hereinafter, referred to as “positional information”) to the image data corresponding to the standard object M11 (namely, plane P11) that is a calculation origin, and outputs this to the image processor 22.
The image processor 22 receives from the position analyzing part 212 a series of image data reconstructed for the specific timing points, with the positional information attached. The image processor 22 extracts the positional information attached to each set of image data and carries out alignment among the respective image data based on this positional information. In other words, the image processor 22 carries out alignment such that the axes of the relative coordinate systems coincide with each other among the respective image data. After the alignment among the image data, the image processor 22 generates a medical image from each set of image data by subjecting it to image processing based on the predetermined image processing conditions. The image processor 22 causes the image storage 23 to store the generated medical images in association with the information indicating the timing points corresponding to the image data from which they were generated. The image storage 23 is storage that stores the medical images.
When medical images have been generated for the series of timing points, the display controller 30 reads the series of medical images stored in the image storage 23. With reference to the information indicating the timing point attached to each read medical image, the display controller 30 arranges this series of medical images along the time sequence to generate a motion image. The display controller 30 causes the display 401 to display the generated motion image. Here, as a manner of displaying the respective medical images along the time sequence, the respective medical images can be displayed as a motion image. Alternatively, the medical images of the respective timing points may be superimposed and displayed as a single static image.
With reference to
In the motion image display, with the object M11 fixed at the position of the plane P11, the motion image of the object M13 is displayed in the order of P13a, P13b, P13c, and P13d. In another example of displaying the time sequence, the object M13 at the positions P13a, P13b, P13c, and P13d is displayed superimposed on the object M11 fixed at the position of P11. Since the object M13 is obtained at a different time for each of P13a, P13b, P13c, and P13d, such superimposed display is also included in display along the time sequence.
Further, in the above-described embodiment, an example is provided in which the medical images (images of bones) are displayed after image processing is carried out on the image data; however, for example, the planes extracted from the respective bone objects as illustrated in
Next, with reference to
The object extracting part 211 receives the image data for each timing point from the configuration extracting part 21. According to the present embodiment, the object extracting part 211 extracts the bones as the object based on the voxel data in this image data. Here,
In addition, the object extracting part 211 outputs the image data corresponding to a predetermined timing point to the image processor 22 together with the information indicating the bone objects extracted from this image data. Due to this information indicating the bone objects, the image processor 22 can generate medical images displaying respective bones in the image data so as to be capable of being identified. Further, if respective bones can be identified, the information to be output to the image processor 22 together with the image data is not limited to this information indicating the bone objects. For example, the associated information for identifying the bones may be related to the positions corresponding to respective bones in the image data.
The image processor 22 receives image data corresponding to specific timing points and the information indicating the bone objects extracted from this image data from the object extracting part 211. The image processor 22 generates medical images by subjecting the image data to image processing based on predetermined image processing conditions. If medical images are generated, based on the information indicating the bone objects, the image processor 22 identifies the positions, directions, and sizes of respective bones, and identifies the regions of respective bones in the generated medical images. The image processor 22 relates the information indicating the identified respective regions with the information indicating the bone objects corresponding to this region. Thereby, by specifying the region in the medical images, it is possible to identify the bone objects corresponding to this region. The image processor 22 outputs medical images, with the regions of respective bone objects identified, to the display controller 30 together with the information indicating this region.
The display controller 30 receives medical images and information indicating the regions of the bone objects included in these medical images from the image processor 22. The display controller 30 causes a display 401 to display respective regions included in the medical images so as to be capable of being specified. Thereby, the operator can specify the bone objects corresponding to the region as objects that are the standard for alignment by specifying a desired region in the medical images through an operation part 402. Upon receiving the specifications of the region in the medical images from the operator, the operation part 402 notifies the position analyzing part 212 of information indicating the bone objects related to this region.
For each timing point, the position analyzing part 212 receives, from the object extracting part 211, image data with the information indicating the bone objects related. In addition, the position analyzing part 212 receives, from the operation part 402, information indicating the bone objects specified by the user.
First, the position analyzing part 212 identifies the bone objects notified from the operation part 402 among bone objects M11, M12, and M13 illustrated in
If the standard object M11 is identified, the position analyzing part 212 extracts at least three portions having characteristics in its shape (hereinafter, referred to as “shape characteristics”) from this standard object M11. For example, as illustrated in
Next, the position analyzing part 212 forms a plane that represents the position and direction of each object in a simulated manner from the portions (namely, points) indicating the three extracted shape characteristics, and relates the plane with the object from which the shape characteristics were extracted. Here,
Thus, based on the plane P11 extracted at each timing point, the position analyzing part 212 identifies the position and direction of the object M11 at each timing point.
Here,
Therefore, the position analyzing part 212 calculates positional information for alignment among the image data of the respective timing points such that the plane P11 corresponding to the standard object M11 is located at the same position at each timing point (that is, such that the relative positions coincide with each other). As a specific example, the position analyzing part 212 calculates a relative coordinate system for each set of image data based on the position and direction of the plane P11. Thereby, for example, by carrying out alignment such that the axes of this relative coordinate system are identical among the respective image data, the position and direction of the plane P11, namely, the position and direction of the bone object M11, are always constant. In other words, by carrying out alignment and generating medical images for each timing point based on this calculated positional information, when the respective medical images are displayed along the time sequence, it is possible to display the relative motion of the other site with respect to the site corresponding to the object M11.
As described above, the position analyzing part 212 calculates a relative coordinate system regarding each set of image data for each timing point based on the position and direction of the plane P11. When the relative coordinate system is calculated with respect to a series of timing points, the position analyzing part 212 attaches the information indicating the calculated relative coordinate system (hereinafter, referred to as “positional information”) to the image data corresponding to the standard object M11 (namely, plane P11) that is a calculation origin and outputs this to the image processor 22.
The image processor 22 receives, from the position analyzing part 212, the series of image data reconstructed for the specific timing points, with the positional information attached. The image processor 22 extracts the positional information attached to each set of image data and carries out alignment among the respective image data based on this positional information. In other words, the image processor 22 carries out alignment such that the axes of the relative coordinate systems coincide with each other among the respective image data. When the alignment among the image data has been carried out, the image processor 22 generates a medical image from each set of image data by subjecting it to image processing based on the predetermined image processing conditions. The image processor 22 causes the image storage 23 to store the generated medical images in association with the information indicating the timing points corresponding to the image data from which they were generated.
When medical images have been generated for the series of timing points, the display controller 30 reads the series of medical images stored in the image storage 23. With reference to the information indicating the timing point attached to each read medical image, the display controller 30 arranges this series of medical images along the time sequence to generate a motion image. The display controller 30 causes the display 401 to display the generated motion image. Here, as a manner of displaying the respective medical images along the time sequence, the respective medical images can be displayed as a motion image. Alternatively, the medical images of the respective timing points may be superimposed and displayed as a single static image.
As described above, according to the present embodiment, the medical image processing apparatus analyzes changes in the positional relation of at least two sites that work in conjunction with each other, such as a joint, by means of the bone objects corresponding to these sites. In addition, the medical image processing apparatus carries out alignment such that the position and direction of one bone object (namely, the standard object) among the bone objects corresponding to the plurality of sites coincide with each other at every timing point. Thereby, when assessing the movements of a plurality of observed sites that work in conjunction with each other, it becomes possible to easily assess the movement of the other sites with respect to the specific site.
The above description has set forth an example in which bones constitute the flexible site; however, the same processing may be applied to muscles or tendons. In this case, objects are formed for each muscle tissue to determine the positional relations between the objects over the time series, just as set forth above.
The case of tendons is the same as that of muscles. The positional relation of tendons may be determined by forming objects, just as in the case of muscles. In particular, among the tendons, tissues close to bones, such as ligaments connecting bones, may be transformed into objects so as to determine the positional relation between the objects of tendons and bones over the time series.
Furthermore, the positional relation of the components of the flexible site, such as bones, was described above using the two-dimensional positional relation between two bones as an example; however, the positional relation may be three-dimensional in some cases. The example described a case in which the first bone points up and the second bone points to the right, and a case in which the second bone points to the upper right with respect to the first bone. However, a case may also be considered in which the movement of the bone shifts in the rotational direction through the addition of a twist, etc., in addition to the movement in the two-dimensional direction. A case may also be considered in which the position of the second bone does not move with respect to the first bone regardless of the rotation of the second bone. Accordingly, the positional relation of the components of the flexible site may be comprehended three-dimensionally: the movement in the three-dimensional rotational direction may be obtained from the changes in the three shape-characteristic points (rather than the two points used in the two-dimensional case), whereby the amount of change in the positional relation is obtained with regard to the twisting as well, and the determination process with respect to the amount of change may be carried out. The determination process itself with respect to the amount of change is the same as in the case of the two-dimensional positional relation.
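As an illustrative sketch of obtaining the twisting component, the code below measures the rotation of a bone's plane normal about the bone's long axis between two timing points; the frame representation follows the earlier sketches, and the helper names are hypothetical.

```python
# Sketch of measuring twist: the rotation of the second bone about its
# own long axis between two timing points, recovered from the plane
# frames (origin, normal, axis) built from the three landmark points.
import numpy as np

def twist_angle(frame_t1, frame_t2, bone_axis):
    """Signed angle (radians) by which the plane normal rotates about bone_axis."""
    _, n1, _ = frame_t1
    _, n2, _ = frame_t2
    k = np.asarray(bone_axis, float)
    k /= np.linalg.norm(k)
    # project the normals onto the plane perpendicular to the bone axis
    v1 = n1 - np.dot(n1, k) * k
    v2 = n2 - np.dot(n2, k) * k
    v1 /= np.linalg.norm(v1)
    v2 /= np.linalg.norm(v2)
    return np.arctan2(np.dot(np.cross(v1, v2), k), np.dot(v1, v2))
```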
In the above-described embodiments, bones and joints are exemplified as the flexible site; however, it is also possible to focus on cartilage as the flexible site. For example, the above-mentioned processing may be carried out by identifying three shape-characteristic points (or two, in the two-dimensional case) on the cartilage instead of identifying three shape-characteristic points on the bones. As a merit of analyzing cartilage as the flexible site in place of a bone, improved diagnostic accuracy for disc hernias can be cited. Disc hernias occur due to the protrusion of cartilage in the joints.
Image data of the cartilage is acquired by means of a medical imaging apparatus, and the positional relation of the cartilage is analyzed in the same manner as the above-described positional relation of the bones. Disc herniation is present if there is protrusion of cartilage in the joint; therefore, the diagnosis result may be obtained without having to wait for an analysis of the bones. This analysis processing can be carried out in place of the analysis processing regarding the bones, or it can be carried out together with the analysis processing regarding the bones. When acquisition and analysis of the cartilage images are carried out in parallel with the processing of the bones and it is found from the analysis results regarding the cartilage images that a disc hernia has occurred, completing the analysis without waiting for the analysis of the bones makes it possible to obtain an accurate diagnosis at an earlier stage. Further, other than the case in which the cartilage protrudes, the case in which the cartilage is crushed by other sites such as bones is also conceivable; in this case as well, when the cartilage is crushed more than a certain extent, the crushing is output as an analysis result, and the determination process may be carried out based on this result.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.