The present invention relates to an image processing apparatus and an image processing method for generating an image for providing guidance in moving an instrument to a target part of a subject.
Image diagnostic apparatuses for a living body, such as X-ray diagnostic apparatuses, MR (magnetic resonance) diagnostic apparatuses, and ultrasound diagnostic apparatuses, are in widespread use. In particular, ultrasound diagnostic apparatuses have advantages such as noninvasiveness and real-time performance, and are widely used for diagnosis and medical checkups. Ultrasound diagnostic apparatuses are used for diagnosis of a wide variety of body parts, such as the heart, blood vessels, the liver, and the breasts. In recent years, attention has been given to diagnosis of blood vessels, such as the carotid artery, for assessing the risk of arteriosclerosis. However, vascular diagnosis requires much skill. Accordingly, ultrasound diagnostic apparatuses that display images providing guidance to examiners have been proposed. One example of such an ultrasound diagnostic apparatus is described in Patent Document 1.
Further, in recent years, intra-surgery navigation systems that display the positional relationship between a part of a patient's body and a surgical instrument during surgery have been proposed. Such intra-surgery navigation systems are used, for example, to improve visual perceptibility of where a tumor or a blood vessel is located, and to improve surgical safety by displaying the position of a surgical instrument with respect to the part of the patient's body that is the surgical target, such as a bone or an organ.
However, ultrasound diagnostic apparatuses and intra-surgery navigation systems as described above pose a problem in that the images displayed to users, such as examiners and operators, do not have high visual perceptibility.
The present invention, therefore, provides an image processing apparatus that is capable of displaying, to a user with high visual perceptibility, an image for providing guidance in moving an instrument to a target part of a subject.
One aspect of the present invention is an image processing apparatus for generating an assist image that is an image providing guidance in moving an instrument to a target part of a subject, including: a three-dimensional image analyzer determining, as target position information, a three-dimensional position of the target part based on a three-dimensional image including the target part; a position information acquirer acquiring instrument position information indicating a three-dimensional position of the instrument; a display state determiner selecting one display state from at least two display states based on a positional relationship between the target part and the instrument; an assist image generator generating an assist image for the selected display state by using the target position information and the instrument position information; and a display controller performing control for outputting the assist image generated by the assist image generator to a display device.
Such aspects of the present invention, including those that are general and those that are specific, may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or by any combination of systems, methods, integrated circuits, computer programs, and computer-readable recording media.
The present invention enables displaying, to a user with high visual perceptibility, an image for providing guidance in moving an instrument to a target part of a subject.
The inventors of the present invention found that the following problems arise in image processing apparatuses, such as the ultrasound diagnostic apparatuses and the intra-surgery navigation systems described in the “Background Art” section of the present disclosure.
First, description will be given of ultrasound diagnosis of the carotid artery.
The probe 10 includes ultrasound transducers (not illustrated). For example, when the ultrasound transducers are one-dimensionally arranged, an ultrasound image is obtained with respect to a two-dimensional scan plane 11 immediately below the ultrasound transducers, as illustrated in
With reference to
As illustrated in
Treatment such as medication or surgical separation of the plaque 27 is required depending on the thickness, the size, etc., of the plaque 27. Therefore, correctly measuring the thickness of the intima-media complex is key to the diagnosis. However, the thickness of the intima-media complex changes depending on the region that is measured. Further, an examiner cannot easily grasp the three-dimensional shape of the carotid artery, which runs inside the neck. Therefore, diagnosis of the carotid artery requires skill and experience. Further, when medicinal treatment is applied, a specific position of the plaque 27 is measured periodically in order to confirm the effect of the treatment. That is, a diagnosis is made of whether the thickness, the area, the volume, etc., of the plaque 27 are being effectively reduced by the treatment. Here, it is important that the plaque 27 be measured at the same position and in the same orientation each time. This measurement requires skill and experience.
Hence, an ultrasound diagnostic apparatus 30 has been proposed that provides guidance to an examiner by displaying an ultrasound live image (i.e., a real-time ultrasound image acquired by a probe) together with an indication of how the probe is to be moved in order to acquire an ultrasound image at the position and orientation to be measured.
As illustrated in
The three-dimensional image analysis unit 31 analyzes a three-dimensional image (hereinafter, referred to as a 3D image) acquired in advance. Further, the three-dimensional image analysis unit 31 determines target position information tgtInf including a three-dimensional position (hereinafter, also simply referred to as a position) and an orientation of a measurement target part of a subject (hereinafter, also referred to as a measurement target). Further, the three-dimensional image analysis unit 31 outputs the target position information tgtInf so determined to the assist image generation unit 33.
The position information acquisition unit 32 acquires instrument position information indicating a current scan position and a current orientation of the probe 10, by use of, for example, a magnetic sensor or an optical camera.
The assist image generation unit 33 generates an assist image asis0, in which the measurement plane of the measurement target and information concerning the position and the orientation of the current scan plane are superimposed on the 3D image, based on the 3D image, the target position information tgtInf, and the instrument position information.
The display control unit 35 causes a display device 150 to display the assist image, along with a live image (ultrasound image) at the current scan position.
First, the three-dimensional image analysis unit 31 analyzes the 3D image to determine the target position information including the position and the orientation of the measurement target (Step S001). Next, the position information acquisition unit 32 acquires the instrument position information indicating the current scan position and the current orientation of the probe 10 (Step S002). Next, the assist image generation unit 33 calculates a difference between the position of the measurement target and the current scan position to generate route information Z for changing the color or the shape of the image to be displayed in accordance with the difference (Step S003). Then the assist image generation unit 33 generates an assist image containing the route information Z in addition to the 3D image, the position of the measurement target, and the current scan position (Step S004). The display control unit 35 causes the display device 150 to display a screen 40 obtained by combining an assist image 41 with a live image 48 (ultrasound image) at the current scan position, as illustrated in, for example,
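By way of illustration only, the flow of Steps S001 to S004 can be sketched in Python as follows; the function names, the 10 mm proximity value, and the color-based cue are assumptions made for the sketch, not the apparatus's actual implementation.

```python
import numpy as np

# Minimal sketch of the guidance flow (Steps S001-S004). All names and the
# 10 mm proximity value are illustrative; positions are 3-vectors in mm.
def route_info(target_pos, scan_pos, near_mm=10.0):
    """Route information Z: a display cue that changes with the difference."""
    diff = float(np.linalg.norm(np.asarray(target_pos) - np.asarray(scan_pos)))
    color = "green" if diff <= near_mm else "red"  # change color with distance
    return {"distance_mm": diff, "color": color}

def build_assist_image_data(target_pos, scan_pos):
    """Data that would be overlaid on the 3D image to form the assist image."""
    return {
        "target": np.asarray(target_pos),   # measurement plane position
        "scan": np.asarray(scan_pos),       # current scan position
        "route": route_info(target_pos, scan_pos),
    }
```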
Typically, an examiner positions the scan plane at the measurement target while moving the probe by first performing rough alignment, and then performing fine adjustment. The examiner mainly refers to the assist image when performing the rough alignment, and mainly refers to the live image when performing the fine adjustment. Thus, the examiner is able to position the scan plane at the measurement target smoothly. However, a problem arises in that, when the assist image 41 and the live image 48 are always displayed together on the same screen, neither image can be displayed large, and the images therefore do not have high visual perceptibility, as illustrated in
In view of this problem, one aspect of the present invention is an image processing apparatus for generating an assist image that is an image providing guidance in moving an instrument to a target part of a subject, including: a three-dimensional image analyzer determining, as target position information, a three-dimensional position of the target part based on a three-dimensional image including the target part; a position information acquirer acquiring instrument position information indicating a three-dimensional position of the instrument; a display state determiner selecting one display state from at least two display states based on a positional relationship between the target part and the instrument; an assist image generator generating an assist image for the selected display state by using the target position information and the instrument position information; and a display controller performing control for outputting the assist image generated by the assist image generator to a display device.
This achieves displaying, to a user with high visual perceptibility, the image providing guidance in moving the instrument to the target part of the subject.
Further, the at least two display states may include a first display state where the assist image generated by the assist image generator is displayed at a first magnification ratio, and a second display state where the assist image generated by the assist image generator is displayed at a second magnification ratio greater than the first magnification ratio, and the display state determiner may select the first display state when the positional relationship does not fulfill a first predetermined condition, and select the second display state when the positional relationship fulfills the first predetermined condition.
This achieves switching to displaying the assist image in an enlarged state when the positional relationship fulfills the first predetermined condition. Accordingly, the assist image is displayed to a user with high visual perceptibility.
Further, the three-dimensional image analyzer may determine, as the target position information, an orientation of the target part based on the three-dimensional image, in addition to determining the three-dimensional position of the target part as the target position information, and the position information acquirer may acquire, as the instrument position information, an orientation of the instrument, in addition to the three-dimensional position of the instrument.
This achieves selecting the display state according to not only the positions of the target part and the instrument, but also the orientations of the target part and the instrument.
Further, the instrument may be a probe in an ultrasound diagnostic device, the probe usable for acquiring an ultrasound image of the subject, the position information acquirer may acquire, as the instrument position information, a scan position and an orientation of the probe, and the assist image generated by the assist image generator may be an image providing guidance in moving the probe to the target part.
This achieves displaying the assist image, which is an image providing guidance in moving the probe to the target part, to a user with high visual perceptibility.
Further, the image processing apparatus may further include a live image acquirer acquiring, from the probe, the ultrasound image of the subject as a live image, and the display controller may output the assist image generated by the assist image generator and the live image to the display device.
This achieves displaying, to a user with high visual perceptibility, both the live image and the assist image.
Further, the at least two display states may include a third display state where on the display device, the assist image generated by the assist image generator is displayed as a main image and the live image is displayed as a sub image, the sub image smaller than the main image, and a fourth display state where on the display device, the live image is displayed as the main image and the assist image generated by the assist image generator is displayed as the sub image, the display state determiner may select the third display state when the positional relationship does not fulfill a second predetermined condition, and select the fourth display state when the positional relationship fulfills the second predetermined condition, and the display controller may output the assist image generated by the assist image generator and the live image to the display device so as to be displayed in the selected display state.
This achieves changing how the live image and the assist image are displayed, so that the live image and the assist image are displayed to a user with high visual perceptibility.
Further, the display controller may output the assist image generated by the assist image generator and the live image to the display device while, based on the selected display state, changing relative sizes at which the assist image generated by the assist image generator and the live image are to be displayed and thereby exchanging the main image and the sub image.
This achieves changing how the live image and the assist image are displayed, so that the live image and the assist image are displayed to a user with high visual perceptibility.
Further, when the third display state is currently selected, the display state determiner may select the display state based on whether the positional relationship fulfills a third predetermined condition, and when the fourth display state is currently selected, the display state determiner may select the display state based on whether the positional relationship fulfills a fourth predetermined condition.
This achieves stable switching between display states.
Further, the target part may be a blood vessel, and the display state determiner may determine the positional relationship according to whether the live image includes a cross section substantially parallel with a direction in which the blood vessel runs, and select one of the at least two display states based on the positional relationship so determined.
This achieves displaying the assist image, which is an image providing guidance in moving the probe to the target part, to a user with high visual perceptibility.
Further, the image processing apparatus may further include a three-dimensional image generator generating the three-dimensional image from data acquired in advance, and the data acquired in advance may be the ultrasound image, which is obtained by the probe scanning a region including the target part, and the three-dimensional image generator may extract a contour of an organ including the target part from the ultrasound image so as to generate the three-dimensional image, and the three-dimensional image generator may associate a position and an orientation of the three-dimensional image in a three-dimensional space with the scan position and the orientation of the probe acquired by the position information acquirer.
This achieves associating the position and the orientation of the 3D image in the three-dimensional space with the scan position and the orientation of the probe, respectively.
Further, the assist image generator may generate navigation information based on a relative relationship between a current scan position of the probe and the position of the target part, and a relative relationship between a current orientation of the probe and the orientation of the target part, and generate, as the assist image, an image in which the navigation information and a probe image indicating the current scan position and the current orientation of the probe are superimposed on the three-dimensional image.
This achieves displaying the assist image, which is an image providing guidance in moving the probe to the target part, to a user with higher visual perceptibility.
Further, when the fourth display state is selected, the assist image generator may generate a plurality of cross-sectional images each indicating a cross-sectional shape of the target part from one of a plurality of directions, and generate, as the assist image, an image in which a probe image indicating a current scan position and a current orientation of the probe is superimposed on each of the cross-sectional images.
Further, the target part may be a blood vessel, the plurality of cross-sectional images may include two cross-sectional images, one of the two cross-sectional images indicating a cross-sectional shape of the blood vessel from a long axis direction being a direction in which the blood vessel runs, and the other one of the two cross-sectional images indicating a cross-sectional shape of the blood vessel from a short axis direction being substantially perpendicular to the long axis direction, and the assist image generator may generate, as the assist image, an image in which a straight line or a rectangle providing guidance in moving the probe to the target part is superimposed on each of the two cross-sectional images, based on a relative relationship between the current scan position of the probe and the position of the target part and a relative relationship between the current orientation of the probe and the orientation of the target part.
This achieves displaying the assist image, which is an image providing guidance in moving the probe to the target part, to a user with higher visual perceptibility.
Further, the display state determiner may calculate, as the positional relationship, a difference between the position of the target part and the position of the instrument, and a difference between the orientation of the target part and the orientation of the instrument by using the target position information and the instrument position information, and select one of the at least two display states according to the differences so calculated.
Further, the display state determiner may calculate a difference between the position of the target part and the position of the instrument, and a difference between the orientation of the target part and the orientation of the instrument by using the target position information and the instrument position information, and hold the differences so calculated, so as to calculate, as the positional relationship, changes occurring in the differences as time elapses and to select one of the at least two display states according to the changes in the differences so calculated.
This achieves accurate selection of the display state.
Further, the target part may be a part of the subject that is a target of surgery, the instrument may be a surgical instrument used in the surgery, and the assist image generated by the assist image generator may be an image providing guidance in moving the surgical instrument to the part of the subject that is the target of surgery.
This achieves allowing a practitioner to confirm the movement of the surgical instrument that he/she has operated, and to adjust with ease the distance of the surgical instrument from the target part and the direction in which he/she performs removal or cutting.
Further, the image processing apparatus may further include a three-dimensional image generator generating the three-dimensional image from data acquired in advance.
This achieves generating a 3D image from data acquired in advance.
Further, the display state determiner may calculate, as the positional relationship, a difference between the position of the target part and the position of the instrument by using the target position information and the instrument position information, and select one of the at least two display states according to the difference so calculated.
Further, the display state determiner may calculate a difference between the position of the target part and the position of the instrument by using the target position information and the instrument position information, and hold the difference so calculated, so as to calculate, as the positional relationship, a change occurring in the difference as time elapses, and select one of the at least two display states according to the change in the difference so calculated.
This achieves accurate selection of the display state.
Further, the at least two display states may include two or more display states differing from one another in terms of at least one of a magnification ratio and a viewpoint of the assist image, and the display state determiner may select one of the two or more display states based on the positional relationship.
This achieves generating assist images that are in accordance with various forms of display, and displaying assist images to a user with high visual perceptibility.
Such aspects of the present invention, including those that are general and those that are specific, may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or may be realized by any combination of a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM.
Embodiments of the present invention will be described below with reference to the drawings.
The examples described in the embodiments below may either be general or specific. The numerical values, the shapes, the materials, the constituent elements, how the constituent elements are arranged in terms of position and connected with one another, and the order of the steps described in the embodiments are mere examples, and thus do not limit the present invention. Further, among the constituent elements described in the embodiments, those not introduced in the independent claims, which represent the present invention in the most general and abstract manner, should be construed as constituent elements that may or may not be included in the present invention.
This embodiment will describe a case where the image processing apparatus pertaining to one aspect of the present invention is implemented as an ultrasound diagnostic apparatus, with reference to the drawings. Note that in the following, a measurement target may be any organ whose image can be captured by ultrasound, and thus, may for example be a blood vessel, the heart, the liver, or the breasts. In the following, description is provided of a case where the measurement target is the carotid artery.
The structure of the ultrasound diagnostic apparatus will be first described.
The ultrasound diagnostic apparatus 100 includes, as shown in
Further, the ultrasound diagnostic apparatus 100 is configured so as to be connectable to a probe 10, a display device 150, and an input device 160.
The probe 10 has a plurality of transducers (not shown) which, for example, are arranged one-dimensionally (the direction of this arrangement is hereinafter referred to as the transducer array direction). The probe 10 converts a pulse electric signal or a continuous wave electric signal (hereinafter, an electric transmission signal) supplied from the transmission/reception unit 105 into a pulse ultrasound wave or a continuous ultrasound wave. The probe 10 transmits an ultrasound beam composed of a plurality of ultrasound waves generated by the plurality of transducers to the measurement-target organ (i.e., the carotid artery) with the probe 10 in contact with the surface of the subject's skin. In order to acquire a tomographic image of a long-axis cross-section of the carotid artery, the probe 10 should be arranged on the surface of the subject's skin so that the transducer array direction of the probe 10 is along the long-axis direction of the carotid artery. The probe 10 receives a plurality of ultrasound waves reflected from the subject; the plurality of transducers convert the reflected ultrasound waves into electric signals (hereinafter, electric reception signals), and the probe 10 supplies the electric reception signals to the transmission/reception unit 105.
Although this embodiment illustrates an example of the probe 10 having a plurality of transducers arrayed one-dimensionally, the probe 10 is not limited to this. For example, the probe 10 may have an array of transducers arranged two-dimensionally, or may be an oscillating ultrasound probe that mechanically oscillates a plurality of transducers arrayed one-dimensionally so as to compose a three-dimensional tomographic image. Different probes may be used depending on the measurement to be performed.
Further, the ultrasound probe 10 may be configured to be provided with some of the functions of the transmission/reception unit 105. One example of such a structure is a structure where the probe 10 generates an electric transmission signal based on a control signal (hereinafter, a transmission control signal) which is output from the transmission/reception unit 105 and is for generating the electric transmission signal, and converts this electric transmission signal into an ultrasound wave, and further, generates a reception signal (described later in the present disclosure) based on an electric signal converted from a reflected ultrasound wave that the probe 10 receives.
The display device 150 is a so-called monitor, and displays the output from the display control unit 107 in the form of a displayed screen.
The input device 160 has various input keys, and is used by an operator to make various settings to the ultrasound diagnostic apparatus 100.
The three-dimensional image analysis unit 101 analyzes a 3D image that has been acquired in advance through a short-axis scan of the measurement target, and determines position information (target position information) tgtInf1 including a three-dimensional position and an orientation of the measurement target. Further, the three-dimensional image analysis unit 101 outputs the target position information tgtInf1 so determined to the display state determination unit 103.
The position information acquisition unit 102 acquires position information (instrument position information) indicating a current scan position and a current orientation of the probe 10 by using, for example, a magnetic sensor or an optical camera.
The display state determination unit 103 selects one display state from two display states, based on the positional relationship between the measurement target and the probe 10. Specifically, the display state determination unit 103 selects either a first display state or a second display state, based on the difference between the position of the measurement target and the current scan position, and the difference between the orientation of the measurement target and the current scan orientation. Further, the display state determination unit 103 outputs the display state so selected as mode information mode.
The assist image generation unit 104 acquires, from the three-dimensional image analysis unit 101, assist image generation information tgtInf2 including data of the 3D image and the target position information of the measurement target, and generates an assist image for the display state indicated by the mode information mode. An assist image is an image for providing guidance in moving the probe 10 to the measurement target, and is an image in which information indicating a measurement plane of the measurement target and a position and an orientation of a current scan plane are superimposed on a 3D image. Note that when a magnification ratio, a viewpoint direction, or the like, and not screen structure, is to be switched in the switching of display state, such information related to the magnification ratio, the viewpoint direction, or the like is to be included in the mode information mode. Further, when changing both the magnification ratio and the viewpoint direction in the switching of display state, information related to both the magnification ratio and the viewpoint direction is to be included in the mode information mode.
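By way of illustration, the mode information mode described above might be organized as in the following sketch; the structure and its field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structure for the mode information `mode`: besides the screen
# structure, it may carry the magnification ratio and/or viewpoint direction
# when those are what the switching of display state changes.
@dataclass
class ModeInfo:
    screen: str                            # e.g., "assist_main" or "live_main"
    magnification: Optional[float] = None  # set when the zoom is switched
    viewpoint: Optional[str] = None        # e.g., "long_axis" or "short_axis"
```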
The transmission/reception unit 105 is connected to the probe 10, and performs a transmission process. The transmission process includes generating a transmission control signal pertaining to ultrasound beam transmission control by the probe 10, and supplying a pulse electric transmission signal or a continuous wave electric transmission signal generated based on the transmission control signal to the probe 10. Note that the transmission process at least includes generating the transmission control signal and causing the probe 10 to transmit an ultrasound wave (beam).
Meanwhile, the transmission/reception unit 105 also executes a reception process. The reception process includes generating a reception signal by amplifying and A/D-converting an electric reception signal received from the probe 10. The transmission/reception unit 105 supplies the reception signal to the live image acquisition unit 106. The reception signal is composed of, for example, a plurality of signals in the transducer array direction and in an ultrasound transmission direction (depth direction), which is perpendicular to the transducer array direction. Each of the signals is a digital signal obtained by A/D-converting an electric signal representing the amplitude of a corresponding reflected ultrasound wave. The transmission/reception unit 105 repeatedly performs the transmission process and the reception process, to compose a plurality of frames each composed of a plurality of reception signals. The reception process at least includes acquiring reception signals based on reflected ultrasound waves.
Here, a frame refers to one set of reception signals required for composing one tomographic image, to a signal obtained by processing that set of reception signals into tomographic image data, or to the data corresponding to one tomographic image, or the tomographic image itself, composed based on that set of reception signals.
The live image acquisition unit 106 generates data of a tomographic image by converting each reception signal in a frame into a luminance signal corresponding to the intensity of the reception signal, and performing coordinate conversion on the luminance signal to convert the luminance signal into coordinates of an orthogonal coordinate system. The live image acquisition unit 106 executes this process successively for each frame, and outputs the tomographic image data so generated to the display control unit 107.
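A minimal sketch of this luminance conversion follows, assuming a linear probe whose frame of reception signals is held as a 2-D array (transducers × depth samples) and an illustrative 60 dB display dynamic range.

```python
import numpy as np

# Sketch of the luminance conversion, assuming `rf` is one frame of reception
# signals (transducers x depth samples); the dynamic range is illustrative.
def to_luminance(rf, dynamic_range_db=60.0):
    env = np.abs(rf)                             # amplitude of each sample
    env = env / max(env.max(), 1e-12)            # normalize to [0, 1]
    db = 20.0 * np.log10(np.maximum(env, 1e-6))  # log compression
    img = np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
    return (img * 255).astype(np.uint8)          # 8-bit luminance image

# For a linear array the (transducer, depth) grid maps to orthogonal
# coordinates by simple scaling; a sector probe would additionally need a
# polar-to-Cartesian coordinate conversion here.
```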
The display control unit 107 causes the display device 150 to display the assist image and a live image, in accordance with the screen structure specified in the mode information mode. In displaying the assist image and the live image, the display control unit 107 respectively uses the assist image generated by the assist image generation unit 104 and an ultrasound live image (tomographic image data) at the current scan position, which is obtained by the live image acquisition unit 106.
The control unit 108 controls the respective units in the ultrasound diagnostic apparatus 100, based on instructions from the input device 160.
The operation of the ultrasound diagnostic apparatus 100 having the above-described structure will be described below.
First, the three-dimensional image analysis unit 101 analyzes the 3D image acquired in advance, and thereby determines the target position information including the position and the orientation of a cross-section that is the measurement target and sets, as a measurement range, a range of positions or orientations differing from the position or the orientation of the measurement target by respective threshold values or less (Step S101).
The following describes how the 3D image is generated and how the target position information of a measurement target is determined, with reference to
First, for example, scanning of the entire carotid artery is performed by using the probe 10 to acquire tomographic image data of short-axis images corresponding to a plurality of frames 51, as shown in
Further, the probe 10 need not be a probe acquiring two-dimensional images, and may be a probe capable of acquiring three-dimensional images without being moved. Examples of such a probe include a mechanically-swinging probe whose scan plane mechanically swings, and a matrix probe in which ultrasound transducers are disposed two-dimensionally on a probe surface.
Further, the 3D image, besides being acquired by using ultrasound, may be acquired through a method such as CT (computed tomography) or MRI (magnetic resonance imaging).
Further, in this embodiment, the 3D image is acquired in advance. However, the present invention is not limited to this, and for example, the ultrasound diagnostic apparatus 100 may be provided with a structure for generating the 3D image.
The position and the orientation of the measurement target vary according to the purpose of diagnosis of the measurement-target organ. For example, when the measurement-target organ is the carotid artery, typically, the position and the orientation of a measurement target in a 3D image 53 are as shown in
Further, the three-dimensional image analysis unit 101 determines, as the position of the measurement target 63, a short-axis direction plane corresponding to a plane (hereinafter, maximum active plane) 66 including a line (hereinafter, a center line) 65 connecting the centers of contours 64 in the short-axis images of the frames composing the 3D image. The three-dimensional image analysis unit 101 determines the position of the measurement target 63 so that the maximum active plane 66 is a plane including the line connecting the centers of the contours around the branch portion of the carotid artery, or a plane tilted by a predetermined angle with respect to such a plane. For example, when the probe can be put in contact along a reference plane passing through the centers of the contours around the branch portion, measurement is conducted at the reference plane. Meanwhile, depending upon the direction in which the carotid artery runs, there are cases where the probe cannot be put in contact along the reference plane. In such a case, it is reasonable to select one of two planes that are tilted with respect to the reference plane by ±45°. In medical checkups, it is reasonable to conduct measurement of a part that is specified by diagnosis guidelines. Meanwhile, in assessing the effect of plaque treatment, it is important to conduct measurement under the same conditions (position and orientation) every time, as described above. Therefore, a configuration may be made such that the three-dimensional image analysis unit 101 stores position information of the measurement target acquired in a given diagnostic session, and in the subsequent diagnostic session, determines the measurement target 63 so that measurement can be conducted at the same position and from the same orientation as in the given diagnostic session. Further, the three-dimensional image analysis unit 101 is capable of calculating the thickness of the intima-media complex by extracting the tunica intima boundary and the tunica adventitia boundary from the short-axis images and the like acquired in the generation of the 3D image, and further, of detecting a part having a thickness equal to or greater than a threshold value as a plaque. A configuration may be made such that the three-dimensional image analysis unit 101 determines, as the measurement target 63, a long-axis direction cross-section of the plaque so detected where the thickness is greatest. Further, in this embodiment, the three-dimensional image analysis unit 101 determines the measurement target 63. Alternatively, an examiner may manually set the measurement target 63.
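The center-line and plaque analysis described above can be sketched as follows; the contour-center computation, the 1.1 mm thickness threshold, and the function names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: contour centers per short-axis frame give the center
# line 65; a plaque is a part whose intima-media thickness (IMT) meets a
# threshold (1.1 mm here is only an example value).
def center_line(contours):
    """contours: list of (N_i, 3) arrays of lumen contour points per frame."""
    return np.array([c.mean(axis=0) for c in contours])  # one center per frame

def plaque_frames(imt_per_frame_mm, thresh_mm=1.1):
    imt = np.asarray(imt_per_frame_mm)
    return np.nonzero(imt >= thresh_mm)[0]                # frames with plaque

def thickest_cross_section(imt_per_frame_mm):
    return int(np.argmax(imt_per_frame_mm))               # where IMT is greatest
```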
Subsequently, the position information acquisition unit 102 acquires the position information (instrument position information) indicating the current scan position and the current orientation of the probe 10 (Step S102). Here, the position information acquisition unit 102 acquires this position information by using various sensors, such as a camera and a magnetic sensor, as described above. In an exemplary configuration involving the use of a camera, an optical marker composed of four markers is attached to the probe 10, and the position information acquisition unit 102 estimates the current scan position and the current orientation of the probe 10 by estimating the position and the orientation of the optical marker based on the center coordinates and the size of the area defined by the four markers in the images acquired by the camera.
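As one way to realize this camera-based estimation, the pose of the optical marker can be recovered with a perspective-n-point solver. The sketch below uses OpenCV's solvePnP; the marker layout and the camera intrinsics are illustrative assumptions.

```python
import numpy as np
import cv2

# Hedged sketch of estimating the probe pose from the four optical markers.
marker_3d = np.array([[0, 0, 0], [30, 0, 0], [30, 30, 0], [0, 30, 0]],
                     dtype=np.float64)  # assumed marker layout on the probe, mm
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float64)  # illustrative camera intrinsics

def probe_pose(marker_2d):
    """marker_2d: (4, 2) pixel coordinates of the four markers in one image."""
    pts = np.asarray(marker_2d, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(marker_3d, pts, camera_matrix, None)
    if not ok:
        return None  # e.g., occlusion: treat as "position not acquired"
    R, _ = cv2.Rodrigues(rvec)  # 3x3 orientation of the marker (hence probe)
    return R, tvec.ravel()      # orientation and position in the camera frame
```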
The display state determination unit 103 determines whether the current scan position is within the measurement range from the measurement target (Step S103). When the current scan position is within the measurement range (Yes in Step S103), the display state determination unit 103 selects the first display state (Step S104). Subsequently, the assist image generation unit 104 generates an assist image for the first display state using the assist image generation information tgtInf2, which includes the data of the 3D image and the target position information of the measurement target (Step S105). Then, the display control unit 107 displays the assist image and the live image, which is an ultrasound image at the current scan position and is acquired by the live image acquisition unit 106, in the first display state on the display device 150 (Step S106).
Meanwhile, when the current scan position is not within the measurement range (No in Step S103), the display state determination unit 103 selects the second display state (Step S107). Subsequently, the assist image generation unit 104 generates an assist image for the second display state using the assist image generation information tgtInf2, which includes the data of the 3D image and the target position information of the measurement target (Step S108). Then, the display control unit 107 displays the assist image and the live image in the second display state on the display device 150 (Step S109).
Following this, a determination is made of whether the process is to be terminated (Step S110), and when the process is not to be terminated (No in Step S110), the process is repeated starting from the acquisition of the current position information (Step S102).
The following describes a specific example of a flow for determining display state illustrated in Steps S103 to S109 in
The display state determination unit 103 first calculates the difference between the positions of the measurement target and the current scan position and the difference between the orientations of the measurement target and the current scan position (Step S1101). Subsequently, the display state determination unit 103 determines whether the differences, in a specific direction in the 3D image, are equal to or smaller than threshold values (Step S1102).
Here, the specific direction may be the directions of the mutually orthogonal three axes of the three-dimensional coordinate system, or may be a direction set based on the shape of the measurement-target organ. For example, a configuration may be made such that when the measurement target is parallel with the center line of the blood vessel, a determination is made that the differences between the positions and the orientations are equal to or smaller than threshold values when the distance between the center of the measurement target and the center of the scan plane at the current scan position is equal to or smaller than a threshold value and the scan plane at the current scan position is close to parallel with the center line.
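By way of illustration, such a determination might look like the following sketch, where positions are taken to be plane centers and orientations unit plane normals; both threshold values are placeholders.

```python
import numpy as np

# Sketch of the Step S1102 test, assuming positions are plane centers (mm)
# and orientations are unit normals; both thresholds are placeholders.
def within_measurement_range(target_center, target_normal,
                             scan_center, scan_normal,
                             pos_thresh_mm=10.0, angle_thresh_deg=10.0):
    pos_diff = np.linalg.norm(np.asarray(target_center)
                              - np.asarray(scan_center))
    # |cos| -> 1 when the scan plane is close to parallel with the target.
    cos_angle = abs(float(np.dot(target_normal, scan_normal)))
    angle_diff = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return pos_diff <= pos_thresh_mm and angle_diff <= angle_thresh_deg
```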
When the differences are equal to or smaller than the threshold values (Yes in Step S1102), the display state determination unit 103 selects the first display state (Step S104). Subsequently, the assist image generation unit 104 generates an assist image for the first display state using the assist image generation information tgtInf2 (Step S105). Then, the display control unit 107 causes the display device 150 to perform display in the first display state (the fourth display state), where the ultrasound live image is used as the main image and the assist image is used as the sub image (Step S1103).
Meanwhile, when the differences are greater than the threshold values (No in Step S1102), the display state determination unit 103 selects the second display state (Step S107). Subsequently, the assist image generation unit 104 generates an assist image for the second display state using the assist image generation information tgtInf2 (Step S108). Then, the display control unit 107 causes the display device 150 to perform display in the second display state (the third display state), where the assist image is used as the main image and the live image is used as the sub image (Step S109).
Here, among the information displayed on the screen of the display device 150 on which ultrasound images are displayed, the main image is the image displayed at the center of the screen or the image that occupies the largest area of the screen, and the sub image is an image displayed in an area other than the area occupied by the main image.
The following describes an example of the switching of display states, conducted when scanning of the measurement target is performed in long-axis images of the carotid artery, with reference to
For measurement of the hypertrophy of the intima-media complex in long-axis images, the probe is first moved close to the measurement target while scanning short-axis images, and then the probe is rotated in order to draw long-axis images. Accordingly, when the short-axis cross-section of the carotid artery is parallel with the x-z plane and the carotid artery runs parallel to the y axis as shown in
When the screen is switched from the second display state shown in
In Step S1102 in, if the same threshold values are always applied, the display state may be switched frequently when the differences fluctuate near the threshold values. Accordingly, an operation for switching the display state stably will be described below.
The display state determination unit 103 determines whether the current display state is the second display state (Step S1105). When the current display state is the second display state (Yes in Step S1105), the display state determination unit 103 sets T1 as the threshold value to be used for determining whether or not to switch the display state (Step S1106). Here, a different threshold value T1 is set for each of the difference between positions and the difference between orientations. Meanwhile, when the current display state is not the second display state (No in Step S1105), the display state determination unit 103 sets T2 as the threshold value to be used for determining whether or not to switch the display state (Step S1107). Here, T2 is a value differing from T1 set in Step S1106. For example, 8 mm is applied as the threshold value T1 for position when the current display state is the second display state, and 10 mm is applied as the threshold value T2 for position when the current display state is the first display state. When the current display state is initially the second display state, the display state is switched to the first display state when the difference between positions becomes 8 mm or less. Further, since the threshold value applied when the current display state is the first display state is 10 mm, the first display state remains the current display state as long as the difference between positions is equal to or less than 10 mm. With such a configuration, even when the probe moves by about 2 mm while the positional difference is near the threshold value of 8 mm, the display state does not change frequently and remains stable.
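A minimal sketch of this hysteresis, using the example values above (T1 = 8 mm while in the second display state, T2 = 10 mm while in the first display state); the state labels are assumptions.

```python
# Illustrative sketch of the hysteresis in Steps S1105-S1107.
T1_ENTER_FIRST_MM = 8.0   # applied while in the second display state
T2_LEAVE_FIRST_MM = 10.0  # applied while in the first display state

def next_display_state(current, pos_diff_mm):
    if current == "second":
        # Switch to the first display state only once close enough.
        return "first" if pos_diff_mm <= T1_ENTER_FIRST_MM else "second"
    # In the first display state, stay there until clearly far away again.
    return "first" if pos_diff_mm <= T2_LEAVE_FIRST_MM else "second"

# A ~2 mm jitter around 8 mm no longer toggles the screen: after entering
# the first display state, the difference must exceed 10 mm to switch back.
```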
Note that the switching performed based on the difference between the position of the measurement target and the current scan position is not limited to switching of the screen structure, e.g., exchanging the main image and the sub image. For example, switching may be performed with respect to parameters affecting the appearance of the assist image itself, such as the viewpoint direction and the magnification ratio of the assist image.
An example of switching the viewpoint direction of the assist image in diagnosis of the carotid artery will be described below with reference to
Here, it is assumed that the carotid artery has a three-dimensional shape as shown in
Here, as shown in
When the current scan position is not within the measurement range, an assist image 85 with the viewpoint set to the long-axis direction, as shown in
Further, switching of magnification ratio may be performed.
When the current scan position is not within the measurement range and the distance between the scan position and the measurement target is long, the entire image should be viewable. Thus, display is performed with low magnification ratio, as shown in
The assist image generation unit 104 switches settings of parameters of the assist image, such as the viewpoint direction and the magnification ratio (Step S204). Further, the assist image generation unit 104 generates an assist image in which the switching is reflected (Step S205). The switching of parameters such as the viewpoint direction and the magnification ratio may be performed in addition to the switching of the screen structure.
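By way of illustration, the parameter switching of Steps S204 and S205 might be sketched as follows; the viewpoint labels and magnification values are assumptions (the text specifies only that a long-axis viewpoint and a low magnification are used when the scan position is far from the measurement range).

```python
# Illustrative sketch of choosing assist image parameters from the
# positional relationship; labels and zoom factors are assumed.
def assist_view_params(in_measurement_range):
    if not in_measurement_range:
        # Far from the target: long-axis viewpoint at low magnification so
        # that the entire carotid artery remains visible.
        return {"viewpoint": "long_axis", "magnification": 1.0}
    # Within the measurement range: switch the viewpoint and zoom in so the
    # fine positional relationship near the target is easy to see.
    return {"viewpoint": "short_axis", "magnification": 2.0}
```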
In the ultrasound diagnostic apparatus 100, the screen display is dynamically switched based on whether the current scan position is within the measurement range of the measurement target. As a result, guidance for moving the probe is provided to the examiner with high visual perceptibility. Further, by making a configuration such that the viewpoint direction of the assist image in the 3D space is changed according to the current scan position and the current orientation, guidance is provided so that the examiner can align the measurement target with the scan position with ease.
Screen structures other than the screen structures illustrated in
Further, in this embodiment, the display state determination unit 103 selects the first display state or the second display state, based on the difference between the position of the measurement target and the current scan position and the difference between the orientation of the measurement target and the current scan position. However, the present invention is not limited to this. For example, the display state determination unit 103 may select the first display state or the second display state based on the difference between the position of the measurement target and the current scan position. Further, the display state determination unit 103 may retain the difference between the position of the measurement target and the current scan position and the difference between the orientation of the measurement target and the current scan position (or only the difference between the positions), and select the first display state or the second display state, based on a change in the differences taking place as time elapses.
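One possible sketch of the time-based variant just described keeps a short history of the positional difference and selects the display state from its trend; the window size and state labels are assumptions.

```python
from collections import deque

# Sketch of selecting the display state from the change in the positional
# difference over time (approaching vs. moving away).
class TrendBasedSelector:
    def __init__(self, window=8):
        self.history = deque(maxlen=window)

    def update(self, pos_diff_mm):
        self.history.append(pos_diff_mm)
        if len(self.history) < self.history.maxlen:
            return "second"                  # assist main until trend is known
        approaching = self.history[-1] < self.history[0]
        # Approaching the target -> live image main (fine adjustment phase);
        # moving away -> assist image main (rough alignment phase).
        return "first" if approaching else "second"
```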
The second embodiment differs from the first embodiment in that the position information acquisition unit 102 of the ultrasound diagnostic apparatus 100 determines whether position information of the probe is acquired. Since the ultrasound diagnostic apparatus 100 in the present embodiment has the same structure as shown in the first embodiment in
For example, when position information is acquired by capturing images of an optical marker attached to the probe with a camera, the position information cannot be correctly acquired when the probe leaves the visual field of the camera, or when the optical marker is hidden by the probe cable or an examiner's hand and thus is not captured by the camera (occlusion). Further, also in a case where, for example, a magnetic sensor is used to acquire the position information, the position information of the probe cannot be correctly acquired when the probe leaves the magnetic field range or approaches an instrument made of metal or the like that disturbs the magnetic field.
In the second embodiment, the position information acquisition unit 102 determines whether position information of the probe 10 is acquired.
The position information acquisition unit 102 determines in Step S111 whether the position information of the probe 10 is acquired. When the position information is acquired (Yes in Step S111), the process proceeds to Step S113.
Meanwhile, when the position information is not acquired (No in Step S111), the position information acquisition unit 102 instructs the display control unit 107 to display warning information indicating that the position information is not acquired, and the display control unit 107 displays the warning information on the display device 150 (Step S112).
In this embodiment, when the position information is not acquired, warning information indicating that the position information is not acquired is displayed. However, when the position information is acquired, information indicating that the position information is acquired may be displayed in or following Step S103. Further, in addition to whether or not the position information of the probe can be acquired, display based on the reliability of the position information may be performed. For example, when the gain, exposure, and/or white balance of the camera are not proper, the accuracy in detecting the position of the optical marker in images captured by the camera is low, and accordingly, the reliability of the position information is low. In this case, a numerical value based on the reliability, or a graphic or the like whose shape, design, color, or other form changes, may be displayed in Step S112 or in and following Step S103.
For example, in this system, the optical marker is composed of four markers 15a to 15d as shown in
For example, when the position information is not acquired because the marker 15c of the probe 10 is hidden, as shown in
For example, when the position information is not acquired because the probe 10 is not within the visual field of the camera 90 as shown in
The following describes a modified example of an assist image.
For example, information associating the 3D image of the carotid artery with the orientation of the subject's body may be included in the assist image.
For example, an assist image may indicate the orientation of the subject's head as shown in the display example 1 of
Further, for example, the viewpoint direction need not be switched when switching the main image and the sub image, and the assist image may always include information from a plurality of viewpoint directions.
In this example, an assist image 71 always has two viewpoint directions, namely the long-axis direction and the short-axis direction. The assist image 71 includes an image 78 from the long-axis direction and an image 79 from the short-axis direction. Further, in the example shown in
Further, particularly since a person with skill can draw long-axis images with ease, a configuration may be made such that switching of screen structure is not performed, and a live image is always used as the main image and the assist image is always used as the sub image. Further, a configuration may be made such that information indicating the current scan position is superimposed on the assist image only when the current scan position is within the measurement range. Further, a configuration may be made such that information indicating whether the current scan position is within the measurement range is displayed.
The third embodiment differs from the first embodiment in that the display state determination unit 103 of the ultrasound diagnostic apparatus 100 switches the display state according to whether an ultrasound image includes a long-axis image. Since the ultrasound diagnostic apparatus 100 in the present embodiment has the same structure as shown in the first embodiment in
In the third embodiment, the display state determination unit 103 determines whether an ultrasound image at a current scan position acquired by a live image acquisition unit 106 includes a long-axis image. When the ultrasound image includes a long-axis image, the display state determination unit 103 selects the first display state, where the main image is an ultrasound live image and the sub image is an assist image. Meanwhile, when the ultrasound image does not include a long-axis image, the display state determination unit 103 selects the second display state, where the main image is the assist image and the sub image is the live image.
The tunica intima boundary and the tunica adventitia boundary in a long-axis image of a blood vessel can be extracted based on an ultrasound B-mode image, a color flow image, or a power Doppler image. For example, in order to extract the tunica intima boundary and the tunica adventitia boundary based on a B-mode image, it suffices to search for edges near the boundaries based on brightness values. Further, in order to extract the tunica intima boundary and the tunica adventitia boundary based on a color flow image or a power Doppler image, it suffices to extract the vascular contour under the presumption that a blood flow region corresponds to the lumen of the blood vessel. Further, when the direction in which the blood vessel runs and the scan plane of the probe are nearly parallel, an ultrasound image includes a long-axis image from one end to the other. However, the further the scan plane deviates from being parallel to the direction in which the blood vessel runs, the smaller the part of the ultrasound image in which the long-axis image is included. Accordingly, when an ultrasound image includes a long-axis contour of a predetermined length or more, detected based on a B-mode image or the like, the display state can be switched on the assumption that the scan plane of the probe is parallel to the direction in which the blood vessel runs. Further, in order to enable the examiner to manually switch the display state upon determining that a long-axis image is included in an ultrasound image, a UI (user interface) that facilitates the switching operation may be provided such that the display state can be switched by a single touch of a button.
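By way of illustration, the decision described above might be sketched as follows; detect_vessel_wall_length stands in for the B-mode edge search or blood-flow-region contour extraction mentioned in the text, and the length threshold is a placeholder.

```python
# Sketch of the third embodiment's decision. `detect_vessel_wall_length` is
# a hypothetical stand-in for the B-mode edge search or blood-flow-region
# contour extraction; the length threshold is illustrative.
def includes_long_axis_image(frame, detect_vessel_wall_length, min_len_px=200):
    return detect_vessel_wall_length(frame) >= min_len_px

def select_display_state(frame, detect_vessel_wall_length):
    # First display state: live image main; second: assist image main.
    if includes_long_axis_image(frame, detect_vessel_wall_length):
        return "first"
    return "second"
```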
The operation of the ultrasound diagnostic apparatus 100 according to the third embodiment will be described below.
The display state determination unit 103 determines whether a long-axis image is included in the ultrasound image acquired by the live image acquisition unit 106 at the current scan position (Step S301). When a long-axis image is included (Yes in Step S301), the display state determination unit 103 selects the first display state, where the main image is an ultrasound live image and the sub image is an assist image (Step S104). Meanwhile, when a long-axis image is not included (No in Step S301), the display state determination unit 103 selects the second display state, where the main image is an assist image and the sub image is a live image (Step S107).
In this embodiment, the display state on the screen is dynamically switched based on whether a long-axis image is included in an ultrasound image. Accordingly, guidance for moving the probe is provided to the examiner with high visual perceptibility.
The first to third embodiments described above mainly describe operations in the diagnosis of a plaque in the carotid artery. However, assist images are effective not only in plaque diagnosis, but also in Doppler measurement that is important for vascular diagnosis. When applying assist images to Doppler measurement, position information of a sample gate of the Doppler measurement, instead of the position information of a plaque, is determined by the three-dimensional image analysis unit 101 or is set manually. As such, guidance is provided to an examiner so that the examiner can scan a set position of the sample gate. The sample gate can be set at the boundary between the common carotid artery and the carotid sinus, at a predetermined distance from the branching portion of the carotid artery, or near a plaque part. Further, application for observing blood vessels other than the carotid artery, such as the abdominal aorta and the subclavian artery, or for observing tumors in the liver and the breasts is also possible.
This embodiment will describe a case where the image processing apparatus according to one aspect of the present invention is applied to an intra-surgery navigation system, with reference to the drawings. An intra-surgery navigation system is a system for displaying the positional relationship between a surgical target part of a patient undergoing surgery and a surgical instrument. Such an intra-surgery navigation system is used, for example, for improving visual perceptibility of the position of a tumor or a blood vessel, and for improving surgical safety by displaying the position of a surgical instrument with respect to a surgical target part such as a bone or an organ.
In a surgical operation, for example, a surgical instrument 203 such as an endoscope may be inserted into an incisional site 202 of a patient 201 (surgical subject), and removal or cutting of a desired part may be performed, as shown in
In recent years, a simulation is conducted before surgery to confirm the three-dimensional shape, size, and the like of a surgical target part of a patient. Further, before surgery, a region of the patient (surgical subject) that is to be removed or cut is determined by using three-dimensional volume data 510 of the surgical target part (target part) acquired by a modality such as a CT, an MRI, a PET, or an ultrasound diagnostic apparatus. Further, when performing intra-surgery navigation, it is necessary to accurately reproduce the actual positional relationship between the patient 201 (surgical subject) and the three-dimensional volume data 510 in a virtual three-dimensional space 520 in the tracking system. As such, it is necessary to measure information 221 indicating the size of the surgical target part, and the position and the orientation of the surgical target part with respect to the tracking system. The actual alignment 222 between the surgical target part and the three-dimensional volume data 510 is carried out before the surgery, once the patient is fixed to the bed 204. That is to say, the position, the orientation, and the size of the surgical target part are imported into the tracking system under the condition that the positional relationship between the imaging apparatus 511 and the patient 201 (surgical subject), or the bed 204 to which the patient 201 is fixed, does not change. This process is executed by attaching optical markers 214 and 211 to predetermined positions (for example, the bed and a characteristic part of the patient such as a bone) and measuring information indicating the spatial position and orientation of each optical marker by using the tracking system. The measurement of information indicating the position and the orientation of the surgical instrument 203 is performed similarly.
In such a manner, the information indicating the position, the orientation, and the size of the surgical target part and the information indicating the position and the orientation of the surgical instrument are imported into the virtual three-dimensional space in the tracking system.
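Although the embodiment does not prescribe a specific algorithm for this import, the following Python sketch illustrates one common way to realize the alignment: a least-squares rigid registration (the Kabsch/SVD method) that maps marker positions known in volume-data coordinates onto the same markers as measured by the tracking system. The function name and the choice of this particular method are assumptions for illustration.

```python
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (rotation R, translation t) mapping
    marker positions in volume-data coordinates (`src`, shape (N, 3)) onto
    the same markers measured by the tracking system (`dst`, shape (N, 3)).
    A sketch of the alignment, not the patent's specific procedure."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Any point p in volume coordinates then maps into the tracking system's
# virtual three-dimensional space as R @ p + t.
```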
Setting a given viewpoint position in the virtual three-dimensional space 520 enables generation of an image with which the entirety of the positional relationship between the surgical target part and the surgical instrument can be observed. Further, such an image can be displayed as navigation information (assist image) on the display device 250.
The image processing apparatus 500 includes, as shown in
The imaging apparatus 511 is an imaging unit such as a CCD camera, and acquires images of the patient (surgical subject) and the surgical instrument. Optical markers appear in the images acquired by the imaging apparatus 511.
The volume data 510 is three-dimensional image data of the surgical target part and is typically acquired by a modality such as a CT or an MRI before surgery. Alternatively, navigation may be performed while the volume data is updated as necessary, by using an ultrasound diagnostic apparatus to acquire data in real time.
The display device 250 is a so-called monitor, and displays the output from the display control unit 505 in the form of a displayed screen.
The three-dimensional image generation unit 501 renders the volume data 510 and generates a 3D image of the surgical target part. Here, the three-dimensional image generation unit 501 may determine a region to be removed or cut, and incorporate information on such a region and the like into the 3D image.
The position information acquisition unit 502 acquires position information (target position information) including a three-dimensional position and an orientation of the surgical target part, and position information (instrument position information) indicating a three-dimensional position and an orientation of the surgical instrument. The position information acquisition unit 502 acquires such information based on the images acquired by the imaging apparatus 511, in which the optical markers attached to the surgical instrument, to the patient (surgical subject), or to the bed to which the patient is fixed, etc., appear.
The display state determination unit 503 selects one of two display states, based on the positional relationship between the surgical target part (target part) and the surgical instrument. Specifically, the display state determination unit 503 selects either the first display state or the second display state, based on the difference (distance) between the positions of the surgical target part and the surgical instrument. Here, the display state determination unit 503 calculates the distance between the surgical target part and the surgical instrument based on the positions of the surgical target part and the surgical instrument in the virtual three-dimensional space.
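As a minimal sketch of this selection, the following Python function chooses between the two display states from the distance computed in the virtual three-dimensional space; the threshold value is an assumption, since the embodiments leave the predetermined range unspecified.

```python
import numpy as np

# Assumed threshold for "close enough" in virtual-space units.
NEAR_THRESHOLD_MM = 50.0

def select_display_state(target_pos: np.ndarray, tip_pos: np.ndarray) -> str:
    """Choose the overview (first) or close-up (second) display state from
    the distance between the surgical target part and the instrument tip."""
    distance = float(np.linalg.norm(target_pos - tip_pos))
    return "second" if distance <= NEAR_THRESHOLD_MM else "first"
```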
The assist image generation unit 504 generates an assist image for the display state selected by the display state determination unit 503.
The display control unit 505 displays the assist image on the display device 250 while controlling the position and the size of the assist image.
The operation of the image processing apparatus 500 having the above structure will be described below.
The three-dimensional image generation unit 501 acquires pre-acquired 3D volume data that includes the surgical target part of the patient and renders the 3D volume data, so as to generate a 3D image to be included in an assist image (Step S501). Here, the three-dimensional image generation unit 501 may additionally perform a process equivalent to pre-surgery simulation and specify a part to be removed or cut. (Note that typically, the part to be removed or cut is set through a separate process conducted before surgery.)
Subsequently, the position information acquisition unit 502 acquires target position information indicating the three-dimensional position, the orientation, the size, etc., of the surgical target part based on images acquired by the imaging apparatus 511. The imaging apparatus 511 acquires images in an environment where the geometric positional relationship between the imaging apparatus 511 and the bed in the operation room or the surgical subject patient is fixed (Step S502). The three-dimensional image generation unit 501 performs alignment of positions by calibrating the three-dimensional position, the orientation, the size, etc., of the surgical target part with respect to the 3D image (Step S503).
Subsequently, the position information acquisition unit 502 acquires information indicating the position and the orientation of the surgical instrument, based on the images acquired by the imaging apparatus 511. Further, the position information acquisition unit 502 converts this information into the position and the orientation of the tip of the surgical instrument (Step S504).
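The conversion in Step S504 amounts to composing the measured marker pose with a fixed, pre-calibrated offset of the tip in the marker's local coordinate frame. The following Python sketch shows this composition; the names, and the simplification that the tip shares the marker's orientation, are illustrative assumptions.

```python
import numpy as np

def tip_pose(marker_R: np.ndarray, marker_t: np.ndarray,
             tip_offset: np.ndarray):
    """Convert the measured pose of the optical marker on the instrument
    into the pose of the instrument tip. `tip_offset` is the fixed tip
    position in the marker's local frame, obtained by prior calibration."""
    tip_position = marker_R @ tip_offset + marker_t
    tip_orientation = marker_R  # tip assumed to share the marker orientation
    return tip_position, tip_orientation
```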
The three-dimensional image generation unit 501 arranges the surgical target part and the surgical instrument in the virtual three-dimensional space based on information indicating the position and the orientation of the surgical target part, information indicating the position and the orientation of the surgical instrument, information indicating the position and the orientation of the tip of the surgical instrument, and the like (Step S505).
Subsequently, the display state determination unit 503 calculates the distance between the surgical target part and the surgical instrument in the virtual three-dimensional space (Step S506). Then, the display state determination unit 503 determines whether the distance between the surgical target part and the surgical instrument in the virtual three-dimensional space is within a predetermined range (Step S507). When the distance is within the predetermined range (Yes in Step S507), the display state determination unit 503 selects the second display state (Step S508). Further, the display state determination unit 503 changes settings of the assist image, such as the magnification ratio and the view direction (Step S509).
Meanwhile, when the distance is not within the predetermined range (No in Step S507), the display state determination unit 503 selects the first display state (Step S510).
Subsequently, the assist image generation unit 504 generates an assist image for the display state selected by the display state determination unit 503; i.e., the first display state or the second display state (Step S511). Then, the display control unit 505 displays the assist image on the display device 250 (Step S512).
The assist images for the first display state and the second display state will be described.
The assist image for the first display state is generated when the surgical target part and the surgical instrument are not within the predetermined range (separated by a predetermined distance or more). In the assist image for the first display state, the viewpoint position is set to be distant from the 3D volume data (i.e., a wide field angle is applied in cutting out the image), so that the entirety of the positional relationship between the surgical target part and the surgical instrument can be observed, as shown in
Meanwhile, the assist image for the second display state is generated when the surgical target part and the surgical instrument are within the predetermined range (i.e., are not separated by the predetermined distance or more). In the assist image for the second display state, the viewpoint position is set to be close to the 3D volume data (i.e., a narrow field angle is applied in cutting out the image), so that the positional relationship between the surgical target part and the surgical instrument, and the movement of the surgical instrument, can be observed in more detail, as shown in
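Although the embodiments do not specify how the viewpoint is set, the following Python sketch illustrates one way to realize the two settings: the virtual camera is placed on a line through the surgical target part, far away for the first display state and close for the second. The distances and names are assumptions.

```python
import numpy as np

# Illustrative viewpoint distances: far (wide effective field angle) for the
# first display state, near for the second. Values are assumptions.
VIEW_DISTANCE_MM = {"first": 400.0, "second": 80.0}

def camera_position(state: str, target_pos: np.ndarray,
                    view_dir: np.ndarray) -> np.ndarray:
    """Place the virtual camera on the line through the surgical target
    part along `view_dir`, at the distance for the selected state."""
    d = view_dir / np.linalg.norm(view_dir)  # normalize the view direction
    return target_pos - VIEW_DISTANCE_MM[state] * d
```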
Returning to the description of the flowchart in
Meanwhile, when the process is not to be terminated (No in Step S513), an assist image including information indicating the latest positional relationship between the surgical target part and the surgical instrument is to be generated. As such, the position information acquisition unit 502 acquires information indicating the position and the orientation of the surgical instrument, based on the images acquired by the imaging apparatus 511 (Step S514). Further, a determination is made of whether the position or the orientation of the surgical instrument has changed (Step S515). When the position or the orientation of the surgical instrument has changed (Yes in Step S515), the process starting from Step S506 is repeated.
Meanwhile, when the position or the orientation of the surgical instrument has not changed (No in Step S515), the process starting from Step S513 is repeated. Here, a procedure for updating only the instrument position information of the surgical instrument is described. However, a configuration may be made such that the target position information of the surgical target part is also updated as necessary. With such a configuration, when the positional relationship between the surgical target part and the surgical instrument has changed, the process starting from Step S506 is repeated.
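The following Python sketch summarizes this update loop (Steps S513 to S515) under hypothetical `tracker`, `renderer`, and `terminate` interfaces; the assist image is regenerated only when the instrument pose has changed.

```python
import numpy as np

def navigation_loop(tracker, renderer, target_pos, terminate):
    """Sketch of the update loop: poll the instrument pose and regenerate
    the assist image only when the pose has changed. All interfaces here
    are hypothetical stand-ins for the units described above."""
    last_pose = None
    while not terminate():                            # Step S513
        pose = tracker.read_instrument_pose()         # Step S514
        if last_pose is None or not np.allclose(pose, last_pose):  # S515
            renderer.update_assist_image(target_pos, pose)  # from Step S506
            last_pose = pose
```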
As such, the instrument position information of the surgical instrument is updated in real time, and accordingly, the assist image displayed on the display device 250 is also updated. Due to this, the practitioner can confirm the movement of the surgical instrument that he/she has manipulated on the display device 250, and thus, is able to adjust with ease the distance between the surgical instrument and the target part, and the direction in which he/she performs the removal or cutting.
In the present embodiment, the assist image generation unit 504 first generates an assist image for the first display state, which provides a bird's-eye view, based on the presumption that initially the surgical target part and the surgical instrument are distant from one another. Subsequently, when determining that the calculated distance is smaller than a predetermined value, that is, when determining that the surgical target part and the surgical instrument are very close to each other, the display state determination unit 503 changes the settings of the assist image, such as the magnification ratio and the view direction, from the initial settings in Step S509, in order to change the settings of the assist image from those for the first display state to those for the second display state. Further, although not shown in the flowchart of
Further, description has been provided that the three-dimensional image generation unit 501 may perform a process corresponding to pre-surgery simulation and specify the part to be removed or cut in Step S501. Based on this, a configuration may be made such that the result of the simulation (the part to be removed or cut) is superimposed on the 3D image in Step S511. Further, a configuration may be made of additionally providing steps or units for determining whether the surgical instrument has reached the part to be removed or cut, and for updating the display by regenerating a 3D image that does not include the part to be removed or cut once the surgical instrument has reached that part. This enables the practitioner to grasp the progress of the surgery with more ease.
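As one possible realization of this progress update, the following Python sketch clears voxels of the part to be removed once the instrument tip comes within a small distance of them, so that a regenerated 3D image no longer shows that part. All names and the threshold are assumptions, not part of the embodiment.

```python
import numpy as np

def update_removal_display(tip_pos: np.ndarray,
                           removal_mask: np.ndarray,
                           voxel_coords: np.ndarray,
                           reach_mm: float = 2.0) -> np.ndarray:
    """Clear removal-region voxels that the instrument tip has reached.
    `voxel_coords` is an (N, 3) array of voxel positions and `removal_mask`
    an (N,) boolean array marking voxels still displayed as to-be-removed."""
    reached = np.linalg.norm(voxel_coords - tip_pos, axis=1) <= reach_mm
    removal_mask = removal_mask.copy()
    removal_mask[reached] = False   # these voxels are treated as removed
    return removal_mask
```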
Further, the present embodiment describes acquiring position information by image-capturing optical markers by using a camera. However, position information may be acquired by using a magnetic sensor, a multi-joint arm, or the like.
Further, the settings of the assist image are switched between two different settings based on information related to distance in Steps S507 to S510. However, the present invention is not limited to this. For example, a modification may be made such that m display states (where m is a natural number) are prepared in advance, and with the nth display state selected (where n is a natural number smaller than m), a determination is made in Step S507 of whether the absolute value of the difference between the distance at time point t and the distance at time point t−1 is equal to or greater than a predetermined value, and of whether the difference is positive or negative; the display state is then switched to either the (n+1)th display state or the (n−1)th display state based on the determination results. Such a modification achieves an effect whereby the part to be removed or cut is displayed at increasing magnification as the surgical instrument approaches the surgical target part. That is, images achieving a smooth transition from
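The following Python sketch illustrates this modification: the display state index n is stepped up or down depending on the sign of the change in distance between time points t−1 and t, provided the absolute change is at least a predetermined value. The parameter values are assumptions.

```python
def update_state_index(n: int, dist_t: float, dist_t_minus_1: float,
                       m: int, delta_min: float = 5.0) -> int:
    """Step the display state index n (1..m) based on the change in the
    distance between the surgical target part and the instrument."""
    diff = dist_t - dist_t_minus_1
    if abs(diff) < delta_min:
        return n                 # change too small; keep the nth state
    if diff < 0:                 # instrument approaching the target
        return min(n + 1, m)     # switch toward higher magnification
    return max(n - 1, 1)         # receding; switch back toward the overview
```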
By recording, on a recording medium such as a flexible disk, a program for implementing the image processing methods described in the above embodiments, an independent computer system can easily execute processing described in the above embodiments.
In the above explanation, a flexible disk is taken as an example of a recording medium. However, similar implementation is possible by using an optical disc. Further, the recording medium is not limited to a flexible disk or an optical disc; any medium on which the program can be recorded, such as an IC (Integrated Circuit) card or a ROM cassette, can be used for implementation.
Note that functional blocks of the ultrasound diagnostic apparatus illustrated in
Although referred to here as an LSI, depending on the degree of integration, the terms IC, system LSI, super LSI, or ultra LSI are also used.
In addition, the method for circuit integration is not limited to the above-described method utilizing LSIs, and a dedicated circuit or a general-purpose processor may be used. For example, a dedicated circuit for graphics processing, such as a graphics processing unit (GPU), may be used. A field programmable gate array (FPGA), which is programmable after the LSI is manufactured, or a reconfigurable processor, which allows for reconfiguration of the connection and setting of circuit cells inside the LSI, may alternatively be used.
Furthermore, if technology for forming integrated circuits that replaces LSI were to emerge, owing to advances in semiconductor technology or to another derivative technology, the integration of functional blocks may naturally be accomplished using such technology. The application of biotechnology or the like is possible.
Further, the units of the ultrasound diagnostic apparatus illustrated in
The image processing apparatus and the image processing method pertaining to the present invention achieve a reduction in the time required to position the scan position at a target. Thus, the image processing apparatus and the image processing method pertaining to the present invention are expected to improve examination efficiency in screening for arterial sclerosis and the like, and are highly usable in the field of medical diagnostic devices.
Number | Date | Country | Kind
--- | --- | --- | ---
2012-251583 | Nov 2012 | JP | national
Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/JP2013/006625 | 11/11/2013 | WO | 00