The configuration of an ultrasonic imaging apparatus according to an embodiment of the present invention is described with reference to
The ultrasonic imaging apparatus 1 according to the present embodiment is configured to comprise an ultrasonic probe 2, a transmitter/receiver 3, an image processor 4, and a display 11.
For the ultrasonic probe 2, either a two-dimensional array probe, on which a plurality of ultrasonic transducers are two-dimensionally arranged, or a one-dimensional array probe, on which a plurality of ultrasonic transducers are arranged in a predetermined direction (scanning direction), is employed. Because its ultrasonic transducers are two-dimensionally arranged, the two-dimensional array probe can transmit ultrasonic waves three-dimensionally and receive three-dimensional data as an echo signal. The one-dimensional array probe can also receive three-dimensional data as an echo signal by mechanically swinging its ultrasonic transducers in the direction perpendicular to the scanning direction. In the present embodiment, either a one-dimensional array probe or a two-dimensional array probe may be employed.
Herein, the appearance of the ultrasonic probe 2 is described with reference to
As shown in
A transmitting/receiving surface 22 is in contact with the body surface of a subject to be examined. A plurality of ultrasonic transducers is provided inside the case 21. The plurality of ultrasonic transducers is arranged in a line in the scanning direction on the one-dimensional array probe.
As shown in
For example, when transmitting/receiving ultrasonic waves while swinging ultrasonic transducers in the direction perpendicular to the scanning direction (hereinafter, may be referred to as the “swing direction”), the first physical mark 23 is formed at the center of the swing direction. In addition, the second physical mark 24 is formed at the center of the scanning direction.
Incidentally, in the present embodiment, the first physical mark 23 is provided in the center of the first side surface 21a. As another example, the first physical mark 23 may be provided on the end part of the first side surface 21a; in that case, the first physical mark 23 is provided on the end part in the swing direction. Likewise, in the present embodiment, the second physical mark 24 is provided in the center of the second side surface 21b. As another example, the second physical mark 24 may be provided on the end part of the second side surface 21b; in that case, the second physical mark 24 is provided on the end part in the scanning direction. Moreover, the first physical mark 23 and the second physical mark 24 may each be provided on a part other than the center or the end part.
In the present embodiment, the case of employing a one-dimensional array probe as the ultrasonic probe 2 and swinging the ultrasonic transducers in the direction perpendicular to the scanning direction (swing direction) to scan a three-dimensional region is described. A plurality of tomographic image data along the swing direction is obtained by transmitting/receiving ultrasonic waves while swinging the ultrasonic transducers in this way.
The transmitter/receiver 3 is provided with a transmitting part and a receiving part. The transmitting part generates ultrasonic waves by supplying electrical signals to the ultrasonic probe 2. The receiving part receives echo signals received by the ultrasonic probe 2. The signals received by the transmitter/receiver 3 are output to the signal processor 5 of the image processor 4.
The signal processor 5 is configured to comprise a B-mode processor 51 and a CFM processor 52.
The B-mode processor 51 converts the amplitude information of the echo to an image and generates B-mode ultrasonic raster data from the echo signals. The CFM processor 52 converts the moving bloodstream information to an image and generates color ultrasonic raster data. The storage 6 temporarily stores the ultrasonic raster data generated by the signal processor 5.
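The B-mode processing described above can be illustrated with a minimal sketch. The function below is a hypothetical stand-in for the B-mode processor 51, assuming the echo arrives as 2D RF data; it approximates envelope detection with rectification and smoothing (a real system would typically use the Hilbert transform) and then applies log compression to produce displayable raster data.

```python
import numpy as np

def bmode_raster(echo_rf, dynamic_range_db=60.0):
    """Convert raw RF echo lines to B-mode raster data (illustrative sketch).

    echo_rf: 2D array (scan lines x depth samples) of RF echo signals.
    Returns an 8-bit amplitude image after envelope detection and
    log compression.
    """
    # Envelope detection: full-wave rectify, then smooth each line with a
    # short moving average (a crude substitute for the Hilbert transform).
    env = np.abs(echo_rf)
    kernel = np.ones(5) / 5.0
    env = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, env)

    # Log compression maps the wide dynamic range of echo amplitudes
    # onto displayable grey levels.
    env = np.maximum(env, 1e-12)
    db = 20.0 * np.log10(env / env.max())
    img = np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
    return (img * 255).astype(np.uint8)
```

The CFM processor 52 would follow an analogous path, estimating flow velocity (e.g. by autocorrelation) rather than amplitude before conversion to color raster data.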
A DSC (Digital Scan Converter) 7 converts the ultrasonic raster data into image data represented by Cartesian coordinates in order to obtain an image represented by a Cartesian coordinate system (scan conversion processing). Then, the image data is output from the DSC 7 to the display 11, and an image based on the image data is displayed on the display 11. For example, the DSC 7 generates tomographic image data as two-dimensional information based on the B-mode ultrasonic raster data, and outputs the tomographic image data to the display 11. The display 11 displays a tomographic image based on the tomographic image data. Incidentally, the signal processor 5 and the DSC 7 are one example of the “tomographic image data generator” of the present invention.
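The scan conversion performed by the DSC 7 can be sketched as resampling sector-shaped (angle, depth) raster data onto a Cartesian pixel grid. The function below is an assumed nearest-neighbour implementation for a sector scan; a real DSC would use interpolation and run in hardware or optimized code.

```python
import numpy as np

def scan_convert(raster, depth, angle_span, out_size=64):
    """Nearest-neighbour scan conversion (illustrative sketch).

    raster: 2D ultrasonic raster data indexed by (beam angle, depth sample).
    depth: imaging depth; angle_span: total sector angle in radians.
    Returns a Cartesian image; pixels outside the sector remain 0.
    """
    n_beams, n_samples = raster.shape
    out = np.zeros((out_size, out_size), dtype=raster.dtype)
    xs = np.linspace(-depth, depth, out_size)   # lateral position
    zs = np.linspace(0.0, depth, out_size)      # axial position (depth)
    for i, z in enumerate(zs):
        for j, x in enumerate(xs):
            r = np.hypot(x, z)                  # range from the probe face
            theta = np.arctan2(x, z)            # angle from the beam axis
            if r <= depth and abs(theta) <= angle_span / 2:
                bi = int((theta + angle_span / 2) / angle_span * (n_beams - 1))
                ri = int(r / depth * (n_samples - 1))
                out[i, j] = raster[bi, ri]
    return out
```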
In the present embodiment, image data such as the tomographic image data output from the DSC 7 is output to and stored on the storage 8. In the present embodiment, a plurality of tomographic image data along the swing direction is obtained and is stored on the storage 8.
A calculator 9 reads image data from the storage 8, and generates three-dimensional image data based on the image data. In the present embodiment, the calculator 9 reads a plurality of tomographic image data along the swing direction from the storage 8, and generates three-dimensional image data based on the plurality of tomographic image data. Moreover, the calculator 9 writes a mark for indicating the orientation of the ultrasonic probe 2 into a predetermined position in the three-dimensional image. Hereinafter, the configuration and processing content of this calculator 9 are described. Incidentally, although a fetus is described as the imaging subject in the present embodiment, an organ such as the heart may be the imaging subject.
When a plurality of tomographic image data along the swing direction is obtained by the ultrasonic probe 2 and is stored on the storage 8, the calculator 9 reads the plurality of tomographic image data from the storage 8.
The mark forming part 91 selects tomographic image data obtained at a predetermined position in the swing direction among a plurality of tomographic image data along the swing direction, and writes a predetermined mark into the selected tomographic image data. This predetermined position is predefined by, and therefore known to, the operator. For example, the mark forming part 91 selects the tomographic image data obtained at the center of the swing direction among a plurality of tomographic image data obtained along the swing direction, and writes a predetermined mark into that tomographic image data. Information indicating the position at which the mark forming part 91 selects tomographic image data, information indicating the position into which a mark is written, and information regarding the mark are pre-stored in a condition storage 10. In addition, the operator can use an operating part (not shown) to optionally change the position at which tomographic image data is selected or the position into which a mark is written. For example, a position at the end part in the swing direction, as well as the center of the swing direction, may be optionally designated.
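The selection-and-marking step above can be sketched as follows. This is a minimal, assumed illustration of the mark forming part 91's role: the centre frame of the swing stands in for the "predetermined position", and a small square burned into that frame stands in for the mark whose shape and position would actually come from the condition storage 10.

```python
import numpy as np

def write_swing_center_mark(frames, mark_value=255, mark_size=8):
    """Select the frame at the centre of the swing direction and write
    a mark into it (illustrative sketch).

    frames: list of 2D tomographic images ordered along the swing
    direction. Returns marked copies and the index of the chosen frame;
    the originals are left untouched.
    """
    center = len(frames) // 2            # predetermined position: swing centre
    marked = [f.copy() for f in frames]
    # Burn a small square mark into the selected frame's corner.
    marked[center][:mark_size, :mark_size] = mark_value
    return marked, center
```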
For example, the first physical mark 23 is provided in the center of the swing direction of the case 21 and a mark is written into tomographic image data obtained at the center of the swing direction by the mark forming part 91. As a result, the position of the first physical mark 23 and the position in the swing direction of the tomographic image data into which a mark has been written correspond with each other.
Herein, processing for forming a mark by the mark forming part 91 is described with reference to
Then, the mark forming part 91 outputs a plurality of tomographic image data read from the storage 8 along with the colored tomographic image data to a VR processor 92. In the present embodiment, it is intended to obtain an image of a fetus; therefore, the ROI 101 is set so as to include the image of the fetus. The operator can optionally set this ROI 101.
The VR processor 92 receives a plurality of tomographic image data from the mark forming part 91, and generates volume data based on the plurality of tomographic image data. Then, the VR processor 92 applies volume rendering on the volume data to generate image data as three-dimensional information (hereinafter, may be referred to as “VR image data”). The VR processor 92 outputs the VR image data to the display 11. The display 11 displays a VR image based on the VR image data (three-dimensional image) on the screen. Incidentally, the VR processor 92 is one example of the “three-dimensional image data generator” of the present invention.
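The volume rendering performed by the VR processor 92 can be sketched with simple front-to-back alpha compositing. This is an assumed, simplified model: the eye-gaze direction is fixed to the swing axis and every voxel receives a uniform opacity, whereas a real renderer would use per-voxel opacity transfer functions and arbitrary view directions. Note that a bright mark written into even a single frame contributes to the composited result, which is why the display mark survives into the VR image.

```python
import numpy as np

def volume_render(frames, alpha=0.1):
    """Front-to-back alpha compositing along the swing axis (sketch).

    frames: 3D array (swing, height, width) of intensities in [0, 1].
    Returns the rendered 2D image in [0, 1].
    """
    vol = np.asarray(frames, dtype=float)
    color = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for sl in vol:                       # march along each ray, front to back
        color += transmittance * alpha * sl
        transmittance *= (1.0 - alpha)   # light absorbed by this slice
    return np.clip(color, 0.0, 1.0)
```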
As described above, a mark is written into predetermined tomographic image data. Furthermore, three-dimensional image data is generated based on a plurality of tomographic image data including the tomographic image data. As a result, a display mark corresponding to the mark written into the predetermined tomographic image data is displayed on the VR image displayed on the display 11.
A mark is written into the tomographic image data obtained at the center of the swing direction, and the first physical mark 23 is provided in the center of the swing direction of the case 21. Therefore, the display mark on the VR image displayed on the display 11 corresponds with the first physical mark 23 provided on the case 21 of the ultrasonic probe 2. Referencing the orientation of the display mark on the VR image and the orientation of the first physical mark 23 thus enables the operator to easily determine in which direction the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image. That is, the relative positional relationship between the ultrasonic probe 2 and the three-dimensional image can be easily ascertained.
Incidentally, when the first physical mark 23 is provided on the end part in the swing direction of the case 21, the mark forming part 91 writes a mark into tomographic image data obtained at the end part in the swing direction so that it corresponds with the position of the first physical mark 23. Consequently, the display mark on the VR image and the first physical mark 23 provided on the case 21 of the ultrasonic probe 2 correspond with each other. This enables the relative positional relationship between the ultrasonic probe 2 and the three-dimensional image to be easily ascertained.
In addition, marks formed by the mark forming part 91 are not limited to the examples shown in
First, Example of Modification 1 is described with reference to
The VR processor 92 receives the plurality of tomographic image data from the mark forming part 91, and applies volume rendering to generate the VR image data. The display 11 displays a VR image based on the VR image data (three-dimensional image) on the screen.
As described above, a mark is written into predetermined tomographic image data, and three-dimensional image data is generated based on a plurality of tomographic image data including that tomographic image data. As a result, a display mark corresponding to the mark written into the predetermined tomographic image data is displayed on the VR image displayed on the display 11.
Since the frame (mark) 112 is written into the tomographic image data obtained at the center of the swing direction and the first physical mark 23 is provided at the center of the swing direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the first physical mark 23. Consequently, referencing the orientation of the display mark on the VR image and the orientation of the first physical mark 23 enables the operator to easily ascertain the relative positional relationship between the ultrasonic probe 2 and the three-dimensional image.
Incidentally, when the first physical mark 23 is provided on the end part in the swing direction of the case 21, the mark forming part 91 writes the frame (mark) 112 into tomographic image data obtained at the end part in the swing direction so that it corresponds with the position of the first physical mark 23. Consequently, the display mark on the VR image and the first physical mark 23 provided on the case 21 of the ultrasonic probe 2 correspond with each other. This enables the relative positional relationship between the ultrasonic probe 2 and the three-dimensional image to be easily ascertained.
In this Example of Modification 1, the mark forming part 91 writes the frame 112 surrounding the ROI 111 as a mark into the tomographic image data 110 so that a display mark is displayed on the VR image.
Besides the manner of writing the mark into tomographic image data as described above, the calculator 9 may detect an outline of the ROI 111 from the tomographic image data 110, and display, on the display 11, a display mark representing the outline of the ROI 111 overlapping the VR image.
For example, the calculator 9 selects tomographic image data 110 obtained at the center of the swing direction among a plurality of tomographic image data obtained along the swing direction. Then, the calculator 9 detects an outline of the ROI 111 from the tomographic image data 110, and generates a display mark representing the outline. Moreover, the calculator 9 reads a plurality of tomographic image data from the storage 8, and applies volume rendering to generate the VR image data. Unlike the above processing, no mark is written into this VR image data.
Then, the calculator 9 displays a VR image based on the VR image data on the display 11. Moreover, the calculator 9 displays, on the display 11, a display mark representing the outline of the ROI 111 overlapping the position (coordinates) at which the ROI 111 has been detected in the VR image.
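The outline-overlay variant above can be sketched as follows. This is an assumed illustration: the ROI is given as a boolean mask, the outline is taken as mask pixels with at least one non-mask 4-neighbour, and the display mark is burned into a copy of the rendered VR image rather than into the volume itself, matching the description that no mark is written into the VR image data.

```python
import numpy as np

def roi_outline_overlay(vr_image, roi_mask, mark_value=255):
    """Overlay an outline display mark on a rendered VR image (sketch).

    roi_mask: 2D boolean mask of the ROI in the selected frame,
    aligned with the VR image coordinates. Returns a copy of the VR
    image with the ROI outline drawn in as the display mark.
    """
    m = roi_mask.astype(bool)
    # Interior pixels: mask pixels whose four neighbours are all in the mask.
    interior = np.zeros_like(m)
    interior[1:-1, 1:-1] = (m[1:-1, 1:-1]
                            & m[:-2, 1:-1] & m[2:, 1:-1]
                            & m[1:-1, :-2] & m[1:-1, 2:])
    outline = m & ~interior              # boundary = mask minus interior
    out = vr_image.copy()
    out[outline] = mark_value            # burn the display mark into the copy
    return out
```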
The display mark that is displayed overlapping the VR image corresponds with the first physical mark 23, and thus, referencing the orientation of the display mark on the VR image and the orientation of the first physical mark 23 enables the operator to easily ascertain the relative positional relationship between the ultrasonic probe 2 and the three-dimensional image.
Next, Example of Modification 2 is described with reference to
In addition, as shown in
The VR processor 92 receives the plurality of tomographic image data from the mark forming part 91, and applies volume rendering to generate the VR image data. The display 11 displays a VR image based on the VR image data (three-dimensional image) on the screen.
As described above, a mark is written into a plurality of tomographic image data and VR image data is generated based on the plurality of tomographic image data. As a result, a display mark corresponding to the mark is displayed on the VR image displayed on the display 11.
Since the straight mark 123 is written into the center of the scanning direction and the second physical mark 24 is provided in the center of the scanning direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the second physical mark 24. Since the display mark on the VR image displayed on the display 11 and the second physical mark 24 provided on the case 21 of the ultrasonic probe 2 correspond with each other, referencing the orientation of the display mark on the VR image and the orientation of the second physical mark 24 enables the operator to easily determine a direction in which the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image.
Incidentally, although the mark forming part 91 writes the straight mark 122 or the straight mark 123 into all the tomographic image data in this Example of Modification 2, the same effect and result can be achieved even when the mark is written into only a portion of the tomographic image data.
In addition, the mark forming part 91 may write both the straight mark 122 and the straight mark 123 into the tomographic image data.
Next, Example of Modification 3 is described with reference to
In addition, as shown in
The VR processor 92 receives the plurality of tomographic image data from the mark forming part 91, and applies volume rendering to generate the VR image data. The display 11 displays a VR image based on the VR image data (three-dimensional image) on the screen.
As described above, a mark is written into a plurality of tomographic image data and three-dimensional image data is generated based on the plurality of tomographic image data. As a result, a display mark corresponding to the mark is displayed on the VR image displayed on the display 11.
Since the mark 133 is written into the center of the scanning direction and the second physical mark 24 is provided at the center of the scanning direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the second physical mark 24. Since the display mark on the VR image displayed on the display 11 and the second physical mark 24 provided on the case 21 of the ultrasonic probe 2 correspond with each other, referencing the orientation of the display mark on the VR image and the orientation of the second physical mark 24 enables the operator to easily determine in which direction the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image.
Incidentally, although the mark forming part 91 writes the mark 132 or the mark 133 into all the tomographic image data in this Example of Modification 3, the same effect and result can be achieved even when the mark is written into only a portion of the tomographic image data. For example, as in the above embodiment, the mark may be written into a single piece of tomographic image data.
The mark forming part 91 and the VR processor 92 described above may be implemented by hardware or software. For example, the calculator 9 may be implemented by a processing device such as a CPU (Central Processing Unit) and a storage device such as a ROM (Read Only Memory) or a RAM (Random Access Memory). An image-processing program for performing the functions of the calculator 9 is stored on the storage device. This image-processing program includes a mark-forming program for performing the functions of the mark forming part 91 and a VR processing program for performing the functions of the VR processor 92. The CPU writes a mark into tomographic image data by executing the mark-forming program, and performs volume rendering by executing the VR processing program.
Next, a series of operations by the ultrasonic imaging apparatus 1 according to an embodiment of the present invention is described with reference to
First, ultrasonic waves are transmitted to a subject to be examined using an ultrasonic probe 2, and a plurality of tomographic image data is obtained based on reflected waves from the subject to be examined. Herein, by employing a one-dimensional array probe as the ultrasonic probe 2 and transmitting/receiving ultrasonic waves while swinging the ultrasonic transducers in the direction perpendicular to the scanning direction (swing direction), a plurality of tomographic image data along the swing direction is obtained. The plurality of tomographic image data is stored on the storage 8.
Next, the calculator 9 reads the plurality of tomographic image data along the swing direction from the storage 8. Then, the mark forming part 91 selects tomographic image data at a predefined position among the plurality of tomographic image data, and writes a predetermined mark into the tomographic image data. For example, as shown in
The center of the swing direction corresponds with the position of the first physical mark 23 provided at the center of the swing direction of the case 21. That is, since the mark is written into the tomographic image data that has been acquired at the center of the swing direction and the first physical mark 23 is provided at the center of the swing direction of the case 21, the position of the mark written into the tomographic image data and the position of the first physical mark 23 correspond with each other.
Next, the VR processor 92 generates volume data by means of a known method, based on the plurality of tomographic image data, and applies volume rendering to the volume data to generate three-dimensional image data (VR image data). At this time, the VR processor 92 generates VR image data seen from a predetermined direction by performing volume rendering along a preset eye-gaze direction. The VR processor 92 outputs the VR image data to the display 11.
Upon receiving the VR image data from the VR processor 92, the display 11 displays a VR image based on the VR image data on the screen. A display mark, which corresponds to the mark written into the tomographic image data at Step S02, is displayed on the VR image displayed on the display 11.
Since a mark is written into the tomographic image data obtained at the center of the swing direction and the first physical mark 23 is provided at the center of the swing direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the first physical mark 23. Consequently, referencing the orientation of the display mark on the VR image and the orientation of the first physical mark 23 enables the operator to easily determine a direction in which the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image.
Incidentally, at Step S02, any mark of Examples of Modification 1 through 3 described above may be formed instead of the marks shown in
Since the frame (mark) 112 is written into the tomographic image data obtained at the center of the swing direction and the first physical mark 23 is provided at the center of the swing direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the first physical mark 23. Consequently, referencing the orientation of the display mark on the VR image and the orientation of the first physical mark 23 enables the operator to easily determine a direction in which the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image.
In addition, as in Example of Modification 2 shown in
Since the straight mark 123 is written into the center of the scanning direction and the second physical mark 24 is provided at the center of the scanning direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the second physical mark 24. Consequently, referencing the orientation of the display mark on the VR image and the orientation of the second physical mark 24 enables the operator to easily determine a direction in which the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image.
In addition, as in Example of Modification 3 shown in
Since the mark 133 is written into the center of the scanning direction and the second physical mark 24 is provided at the center of the scanning direction of the case 21, the display mark on the VR image displayed on the display 11 corresponds with the second physical mark 24. Consequently, referencing the orientation of the display mark on the VR image and the orientation of the second physical mark 24 enables the operator to easily determine a direction in which the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image.
Herein, examples of the display of the VR image and the display mark are shown in
As shown in
The positional relationship is clear between the display mark 30A or the display mark 30B displayed on the VR image and the first physical mark 23 or the second physical mark 24 provided on the ultrasonic probe 2. This enables the direction in which the ultrasonic probe 2 should be moved or rotated in order to obtain the desired image to be easily determined, thereby making it possible to improve operability of the ultrasonic probe 2.
In addition, the display mark may be capable of switching between display/hide. For example, when the first physical mark 23 or the second physical mark 24 provided on the ultrasonic probe 2 is used as a changeover switch, and the first physical mark 23 or the second physical mark 24 is pressed, the mark forming part 91 writes the mark into a predetermined tomographic image data so as to display the mark on the VR image. In addition, when the first physical mark 23 or the second physical mark 24 is pressed while the mark is displayed on the VR image, the mark forming part 91 ceases writing the mark into the tomographic image data so as not to display the mark on the VR image.
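The changeover-switch behaviour described above can be sketched with a small state holder. This is an assumed illustration only: press events on the first or second physical mark are modelled as method calls, and the visibility flag stands in for the mark forming part 91 starting or ceasing to write the mark.

```python
class MarkToggle:
    """Toggle display/hide of the display mark when a physical mark on
    the probe case is pressed (illustrative sketch of the
    changeover-switch behaviour)."""

    def __init__(self):
        self.visible = False    # initially no display mark on the VR image

    def press(self):
        # Each press flips the state: when it becomes True, the mark
        # forming part would write the mark so it appears on the VR
        # image; when False, it would cease writing the mark.
        self.visible = not self.visible
        return self.visible
```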
For example, the display mark is displayed on the VR image when moving or rotating the ultrasonic probe 2. Referencing the display mark and the first physical mark 23 or the second physical mark 24 provided on the ultrasonic probe 2 enables the operator to easily determine the direction in which the ultrasonic probe 2 should be moved or rotated. On the other hand, when there is no need to move or rotate the ultrasonic probe 2, it is possible to display only the VR image, without the display mark, in order to observe the VR image in detail.
Incidentally, in the embodiments and examples of modification described above, a one-dimensional array probe has been employed as the ultrasonic probe 2. A two-dimensional array probe may also be employed instead of the one-dimensional array probe. In this case, it is possible to achieve the same effect and result as the embodiments and examples of modification described above by forming a mark on the three-dimensional image data obtained by the two-dimensional array probe and displaying the three-dimensional image.
For example, when obtaining three-dimensional volume data by employing the two-dimensional array probe, the mark forming part 91 writes a mark for indicating the positional relationship with the ultrasonic probe into a predetermined position on the volume data. Then, the VR processor 92 applies volume rendering to the volume data to generate the VR image data. Writing a mark into a predetermined position on volume data and generating three-dimensional image data based on the volume data results in a display mark corresponding to the mark written into the predetermined position being displayed on the VR image displayed on the display 11. Referencing this mark enables the relative positional relationship between the ultrasonic probe 2 and the three-dimensional image to be easily ascertained.
Number | Date | Country | Kind
---|---|---|---
2006-130651 | May 2006 | JP | national