This application claims the benefit of Korean Patent Application No. 10-2012-0102430, filed on Sep. 14, 2012 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
1. Field
Exemplary embodiments of the present disclosure relate to an ultrasound imaging apparatus that outputs a 2-Dimensional (2D) ultrasound image and a 3-Dimensional (3D) ultrasound image of a subject, and a control method for the same.
2. Description of the Related Art
Ultrasound imaging apparatuses have non-invasive and non-destructive characteristics and are widely used in the field of medicine to acquire data regarding a subject. Recently developed ultrasound imaging apparatuses provide a 3D ultrasound image containing spatial data and clinical data regarding a subject, such as an anatomical shape, which are not provided by a 2D ultrasound image.
However, current ultrasound imaging apparatuses display a 3D ultrasound image on a 2D display unit, or display each cross-section of the 3D ultrasound image on a 2D display unit, which may make it difficult for an inspector to utilize substantial 3D effects of the 3D ultrasound image for diagnosis of diseases.
It is an aspect of the exemplary embodiments to provide an ultrasound imaging apparatus which includes a 3D display unit to display a 3D ultrasound image as well as a 2D display unit to display a 2D ultrasound image, thereby displaying both the anatomical shape of a diagnosis part and a high-resolution image, and a control method thereof.
Additional aspects of the exemplary embodiments will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the exemplary embodiments.
In accordance with an aspect of the exemplary embodiments, an ultrasound imaging apparatus includes an ultrasound data acquirer configured to acquire ultrasound data, a volume data generator configured to generate volume data from the ultrasound data, a 3-Dimensional (3D) display image generator configured to generate a 3D ultrasound image based on the volume data, a cross-sectional image acquirer configured to acquire a cross-sectional image based on the volume data, a 3D display configured to display the 3D ultrasound image, and a 2D display configured to display the cross-sectional image.
In accordance with another aspect of the exemplary embodiments, a control method for an ultrasound imaging apparatus includes acquiring ultrasound data regarding a subject, generating volume data regarding the subject based on the ultrasound data, generating a 2D cross-sectional image of the subject and a 3D ultrasound image of the subject based on the volume data, and displaying the 2D cross-sectional image of the subject on a 2D display and displaying the 3D ultrasound image of the subject on a 3D display.
These and/or other aspects of the exemplary embodiments will become apparent and more readily appreciated from the following description of the exemplary embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to an ultrasound imaging apparatus and a control method for the same according to the exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
Referring to
Referring to
Alternatively, as illustrated in
The ultrasound imaging apparatus 100 displays a 3D ultrasound image of the subject on the 3D display unit 140 and a 2D ultrasound cross-sectional image regarding, for example, a diseased part of the subject on the 2D display unit 160, thereby simultaneously providing the anatomical shape of the subject and a high-resolution cross-sectional image for easy diagnosis of diseases.
The ultrasound imaging apparatus 100 includes an input unit 180 that receives an instruction from a user, such as, for example, an instruction based on a motion of the user. The user, e.g., an inspector such as a medical professional, may input an instruction for selection of a cross-sectional image or a variety of setting values with regard to generation of a 3D display image via the input unit.
Hereinafter, an operation of each constituent element of the ultrasound imaging apparatus according to exemplary embodiments will be described in detail.
Referring to
The ultrasound probe 112 includes a plurality of transducer elements for converting between ultrasonic signals and electric signals. To generate a 3D ultrasound image, the plurality of transducer elements may be arranged in a 2D array, or a plurality of transducer elements arranged in a 1D array may be swung in an elevation direction. Many different kinds of ultrasound probes may be implemented as the ultrasound probe 112 employed in the present exemplary embodiment, so long as the ultrasound probe 112 may acquire a 3D ultrasound image.
Upon receiving the transmission signal from the transmission signal generator 111, the plurality of transducer elements converts the transmission signal into ultrasonic signals and transmits the ultrasonic signals to the subject. Then, the transducer elements generate reception signals upon receiving the ultrasonic echo-signals reflected from the subject. According to exemplary embodiments, the reception signals are analog signals.
More specifically, the ultrasound probe 112 appropriately delays an input time of pulses input to the respective transducer elements, thereby transmitting a focused ultrasonic beam to the subject along a scan line. Meanwhile, the ultrasonic echo-signals reflected from the subject are input to the respective transducer elements at different reception times, and the respective transducer elements output the input ultrasonic echo-signals.
To generate a 3D ultrasound image, signal generation in the transmission signal generator 111 and transmission and reception of the ultrasonic signals in the ultrasound probe 112 may be sequentially and iteratively performed, which enables sequential and iterative generation of reception signals.
The beam former 113 converts the analog reception signals transmitted from the ultrasound probe 112 into digital signals. Then, the beam former 113 focuses the digital signals in consideration of the positions and focusing points of the transducer elements, thereby generating focused reception signals. In addition, to generate a 3D ultrasound image, the beam former 113 sequentially and iteratively performs analog-to-digital conversion and reception-focusing according to the reception signals sequentially provided from the ultrasound probe 112, thereby generating a plurality of focused reception signals.
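The reception-focusing performed by the beam former may be illustrated with a minimal delay-and-sum sketch. This is a simplification assuming the per-element focusing delays are already known in whole samples; an actual beam former applies dynamic, fractional delays and apodization windows.

```python
import numpy as np

def delay_and_sum(rx, delays, apod=None):
    """Align each element's digitized trace by its focusing delay and sum.

    rx: (n_elements, n_samples) reception signals after A/D conversion
    delays: per-element delay in whole samples (assumed precomputed)
    apod: optional per-element apodization weights
    """
    n_elem, n_samp = rx.shape
    if apod is None:
        apod = np.ones(n_elem)  # uniform apodization
    focused = np.zeros(n_samp)
    for e in range(n_elem):
        # shift the trace so echoes from the focal point line up in time
        focused += apod[e] * np.roll(rx[e], -delays[e])
    return focused
```

Echoes from the focal point, which arrive at each element at a different time, add coherently after the shifts, while off-axis echoes do not.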
The signal processor 114, which may be implemented, for example, as a Digital Signal Processor (DSP), performs envelope detection processing to detect the strengths of the ultrasonic echo-signals, based on the reception signals focused by the beam former 113, thereby generating ultrasound image data. That is, the signal processor 114 generates ultrasound image data based on position data of a plurality of points present on each scan line and data acquired at the respective points. The ultrasound image data includes cross-sectional image data on a per scan line basis.
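Envelope detection may be sketched as follows, here via an FFT-based analytic signal. This is one common approach; the implementation of the detector is not specified by the apparatus and is assumed here for illustration.

```python
import numpy as np

def envelope(rf_line):
    """Envelope of one focused RF scan line via the analytic signal
    (FFT-based Hilbert transform)."""
    n = len(rf_line)
    spec = np.fft.fft(rf_line)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)  # negative frequencies suppressed
    return np.abs(analytic)           # echo strength at each depth sample
```

The magnitude of the analytic signal gives the echo strength at each depth, which becomes one line of the cross-sectional image data.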
Referring again to
In consideration of the fact that volume data is generated by ultrasonic signals reflected from the subject that is present in a 3D space, the volume data according to exemplary embodiments is defined on a torus coordinate system. Accordingly, for rendering volume data via a display device having a Cartesian coordinate system such as a monitor, a scan conversion operation to convert coordinates of the volume data so as to conform to the Cartesian coordinate system may be performed. Accordingly, the volume data generation unit 120 may include a scan converter to convert coordinates of volume data.
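For the 2D case, the scan conversion operation may be sketched as a coordinate lookup from a (range, angle) sample grid onto a Cartesian pixel grid. Nearest-neighbour lookup is used here for brevity; actual scan converters interpolate between samples.

```python
import numpy as np

def scan_convert(polar, r_max, n_x, n_z, angle_span):
    """Map a (range, angle) sample grid onto a Cartesian (z, x) pixel grid.

    polar: (n_r, n_a) samples over range [0, r_max] and angle
           [-angle_span/2, +angle_span/2]
    """
    n_r, n_a = polar.shape
    xs = np.linspace(-r_max * np.sin(angle_span / 2),
                     r_max * np.sin(angle_span / 2), n_x)
    zs = np.linspace(0, r_max, n_z)
    out = np.zeros((n_z, n_x))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            r = np.hypot(x, z)                  # Cartesian -> range
            th = np.arctan2(x, z)               # Cartesian -> angle
            ir = int(round(r / r_max * (n_r - 1)))
            ia = int(round((th + angle_span / 2) / angle_span * (n_a - 1)))
            if 0 <= ir < n_r and 0 <= ia < n_a:
                out[iz, ix] = polar[ir, ia]     # pixels outside the sector stay 0
    return out
```

Pixels outside the scanned sector remain blank, producing the familiar fan-shaped image.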
The 3D display image generation unit 130 generates an image to be displayed on the 3D display unit 140 using a volume image of the subject. The 2D cross-sectional image acquisition unit 150 acquires a cross-sectional image of the subject from a volume image of the subject.
The 2D cross-sectional image acquisition unit 150 acquires a cross-sectional image to be displayed on the 2D display unit 160 from the volume image of the subject. The acquired cross-sectional image may be a cross-sectional image corresponding to the XY plane, the YZ plane, or the XZ plane, and may be an arbitrary cross-sectional image defined by the user. In addition, the cross-sectional image may be arbitrarily selected by the 2D cross-sectional image acquisition unit 150, or may be acquired in response to a cross-sectional image selection instruction input by the user via the input unit 180. A detailed exemplary embodiment with regard to selection of the cross-sectional image will hereinafter be described.
The 3D display image generation unit 130 generates a 3D image conforming to an output format of the 3D display unit 140 such that the 3D image is displayed via the 3D display unit 140. Accordingly, the 3D image generated by the 3D display image generation unit 130 may be determined according to the output format of the 3D display unit 140.
The output format of the 3D display unit 140 may be one of various types, including, for example, a stereoscopic type, a volumetric type, a holographic type, an integral image type, or the like. The stereoscopic type is classified into a glasses type, which uses special glasses, and a glasses-free auto-stereoscopic type.
Various exemplary embodiments with regard to generation of a 3D display image will hereinafter be described in detail. Ultrasound imaging apparatuses 200, 300, 400 and 500 of the exemplary embodiments that will be described hereinafter correspond to the ultrasound imaging apparatus 100 of the above-described exemplary embodiment, and the above description of the ultrasound imaging apparatus 100 may be applied to the ultrasound imaging apparatuses 200, 300, 400 and 500.
An ultrasound data acquisition unit 210, a volume data generation unit 220, a 2D cross-sectional image acquisition unit 250, and a 2D display unit 260 may be substantially the same as the ultrasound data acquisition unit 110, the volume data generation unit 120, the 2D cross-sectional image acquisition unit 150 and the 2D display unit 160 described above with reference to
A 3D display image generation unit 230 according to the present exemplary embodiment generates an autostereoscopic multi-view image.
Referring to
The parameter setter 231 sets view parameters used to acquire a plurality of view images. In general, a multi-view image is generated by synthesizing images captured via a plurality of cameras. In the present exemplary embodiment, however, a program that renders virtual view images from a 3D volume image may be used, such that view images are obtained as if virtual cameras captured, from different views, the 3D volume data (3D volume images) generated by the volume data generation unit 220. In this case, volume rendering may be used. According to exemplary embodiments, volume rendering may be performed by any one of various rendering methods, such as Ray-Casting, Ray-Tracing, etc.
The view parameters used for generation of view images may include at least one of the number of views, view disparity, and a focal position. For example, the number of views may be determined according to characteristics of the 3D display unit 240, and view disparity and the focal position may be arbitrarily set by the parameter setter 231. Alternatively, the user may set setting values thereof via the input unit 180 illustrated in
The view image generator 232 generates a plurality of view images having different views, according to the set number of views, view disparity, and focal position.
Referring to
More specifically, the view image generator 232 may acquire, using the 3D volume data regarding the subject, a view-image 1, which would be acquired if the subject were captured by a camera located at the position of view 1, through a view-image 9, which would be acquired if the subject were captured by a camera located at the position of view 9.
The multi-view image generator 233 generates a multi-view image by synthesizing the plurality of view images generated by the view image generator 232. According to exemplary embodiments, synthesizing a plurality of view images is referred to as weaving. Weaving interleaves the plurality of view images into a single multi-view image according to the pixel arrangement of the 3D display unit 240. When the generated multi-view image is displayed, a viewer perceives different 3D effects according to the position from which the viewer views the image. A detailed description of weaving is omitted.
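Weaving may be sketched as a simple column interleave, assuming a display whose pixel mapping cycles through the views column by column; the actual mapping is panel-specific and often operates at subpixel granularity.

```python
import numpy as np

def weave(views):
    """Interleave N view images into one multi-view image by taking
    every N-th pixel column from each view (panel mapping assumed)."""
    n = len(views)
    out = np.empty_like(views[0])
    for k, view in enumerate(views):
        out[:, k::n] = view[:, k::n]  # view k supplies columns k, k+n, ...
    return out
```

Each eye of a viewer then sees a different subset of columns through the display's optical layer, producing the parallax that yields the 3D effect.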
When view disparity is set to a small value, a multi-view image having a small depth is generated. When view disparity is set to a large value, a multi-view image having a large depth is generated. The focal position may be set to a position behind the display unit 240, a position on the display unit 240, or a position in front of the display unit 240. As the focal position is displaced forward of the display unit 240, a multi-view image seems to protrude outward.
The generated multi-view image is displayed on the 3D display unit 240. When the multi-view image of the subject is displayed on the 3D display unit 240, the user may obtain clinical data, such as the anatomical shape of the subject, from various views, which enables more accurate diagnosis.
An ultrasound data acquisition unit 310, a volume data generation unit 320, a 2D cross-sectional image acquisition unit 350, and a 2D display unit 360 may be substantially the same as the ultrasound data acquisition unit 110, the volume data generation unit 120, the 2D cross-sectional image acquisition unit 150 and the 2D display unit 160 described above with reference to
The ultrasound imaging apparatus 300 according to the present exemplary embodiment generates an integral image of the subject, and displays the integral image on the 3D display unit 340. The integral image is acquired by storing 3D data of the subject in the form of elemental images using a lens array consisting of a plurality of elemental lenses, and integrating the elemental images into a 3D image via the lens array.
The integral image is an image having successive views in a left-and-right direction (horizontal direction) as well as in an up-and-down direction (vertical direction) within a view angle range, and may effectively transmit stereoscopic data regarding the subject to the user without requiring special glasses.
To generate the integral image, a pickup part to acquire elemental images of the subject and a display part to regenerate a 3D image from the acquired elemental images may be employed.
To this end, a 3D display image generation unit 330, as illustrated in
Although the pickup part to acquire elemental images is generally constructed from a lens array and a plurality of cameras corresponding to the lens array, Computer Generated Integral Imaging (CGII), which calculates elemental images from 3D data regarding the subject using a computer program rather than actually capturing them, has been proposed. Accordingly, the elemental image acquirer 331 may receive 3D volume data regarding the subject from the volume data generation unit 320, and may acquire elemental images of the subject under given conditions via imitation of the lens array based on a CGII program. The number of acquired elemental images and the views of the elemental images may be determined according to the lens array of the 3D display unit 340.
Referring to
The integral image output 332 matches the elemental images acquired by the elemental image acquirer 331 with corresponding positions on the display device 341, thereby allowing the elemental images output via the display device 341 to be integrated by the lens array. As such, a 3D integral image of the subject may be generated.
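The matching of elemental images to display positions may be sketched as a simple tiling, one elemental image placed behind each elemental lens. The row-major grid geometry assumed here is illustrative; the actual lens pitch and pixel mapping depend on the hardware.

```python
import numpy as np

def tile_elemental_images(elemental, grid):
    """Place each elemental image at the display region behind its lens.

    elemental: list of equally sized 2D arrays, in row-major lens order
    grid: (rows, cols) of the lens array
    """
    rows, cols = grid
    h, w = elemental[0].shape
    display = np.zeros((rows * h, cols * w))
    for k, img in enumerate(elemental):
        r, c = divmod(k, cols)  # lens index -> grid position
        display[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return display
```

When the tiled pattern is shown on the display device, the lens array integrates the elemental images into a single 3D integral image.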
An ultrasound data acquisition unit 410, a volume data generation unit 420, a 2D cross-sectional image acquisition unit 450, and a 2D display unit 460 may be substantially the same as the ultrasound data acquisition unit 110, the volume data generation unit 120, the 2D cross-sectional image acquisition unit 150 and the 2D display unit 160 described above with reference to
The ultrasound imaging apparatus 400 of the present exemplary embodiment displays a 3D ultrasound image of the subject in a holographic manner, and a hologram generated in the holographic manner is referred to as a complete stereoscopic image. When interference between object waves, i.e., laser light reflected from an object, and reference laser light traveling in another direction is recorded, an interference pattern depending on the phase difference of the object waves reflected from respective portions of the object is generated. Both an amplitude and a phase are recorded in the interference pattern. An image in which the shape of the object is recorded in such an interference pattern is referred to as a hologram.
A 3D display image generation unit 430 may generate a hologram of the subject based on Computer Generated Holography (CGH). CGH is technology in which an interference pattern with respect to appropriate reference waves, e.g., a hologram, is calculated and generated using data of an object stored in a computer. CGH includes point-based CGH, convolution-based CGH, and Fourier-based CGH, for example. The 3D display image generation unit 430 may implement many different kinds of CGH to calculate and generate holograms.
Referring to
The 2D image acquirer 431 acquires a 2D image of the subject from a 3D volume image of the subject, and the depth-image acquirer 432 acquires a depth image of the subject from the 3D volume image of the subject. The 2D image of the subject may include color data regarding the subject.
The hologram pattern generator 433 generates a hologram pattern using the 2D image and the depth image regarding the subject. In an exemplary embodiment, the hologram pattern generator 433 may generate a single criterion elemental fringe pattern with respect to the respective points of the subject that are equally spaced apart from a criterion point in a hologram plane. The criterion elemental fringe pattern may be pre-stored in a lookup table according to the distances between the criterion point and the respective points of the subject. Alternatively, a criterion elemental fringe pattern may be pre-stored on a per depth basis.
For the respective points of the subject located in the same plane, the criterion elemental fringe pattern is shifted by a distance corresponding to each point, and the shifted patterns are accumulated to form a hologram pattern.
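The lookup-table scheme may be sketched as shifting a pre-stored criterion fringe pattern to each object point and accumulating the results. The function name and the contents of the lookup table below are illustrative placeholders; real CGH kernels are zone-plate-like fringe patterns computed per depth.

```python
import numpy as np

def hologram_from_lut(points, fringe_lut, shape):
    """Accumulate a hologram pattern by shifting a pre-stored criterion
    elemental fringe pattern (one per depth) to each object point.

    points: iterable of (x, y, depth_index) object points
    fringe_lut: depth_index -> fringe pattern centred in an array of `shape`
    """
    holo = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    for x, y, d in points:
        # shift the criterion pattern so its centre lands on (y, x)
        shifted = np.roll(fringe_lut[d], (y - cy, x - cx), axis=(0, 1))
        holo += shifted
    return holo
```

Because points at the same depth share one pre-computed pattern, only a shift and an addition are needed per point, which is the main saving over point-based CGH.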
A 3D display unit 440 displays the generated hologram pattern to enable the user to view a 3D hologram of the subject.
The above-described exemplary embodiment of
Operations of the ultrasound imaging apparatus for generation of the 3D ultrasound image of the subject and display of the 3D ultrasound image via the 3D display unit have been described above, and selection or control of an image to be displayed on each display unit will hereinafter be described.
As described above in
The user may adjust a depth of a 3D ultrasound image via the depth manipulator 180f, and may adjust 3D effects of the 3D ultrasound image, e.g., the degree to which the image protrudes relative to the display unit 140, via the focus manipulator 180e. As the 3D ultrasound image is controlled to project farther outward from the screen, 3D effects increase, but viewer eye fatigue may occur. In contrast, if the 3D ultrasound image is controlled to appear recessed into the display unit 140, 3D effects of the image are reduced, but the image is easier to use for diagnosis because extended viewing does not cause eye fatigue. Accordingly, a 3D ultrasound image of the subject may be controlled and displayed in an easily diagnosable form using each manipulator.
As described above, the ultrasound imaging apparatus 100 according to the exemplary embodiment may acquire a cross-sectional image from a 3D volume image of the subject and display the acquired image on the 2D display unit 160. In this case, although the cross-sectional image may be arbitrarily selected by the cross-sectional image acquisition unit 150, an instruction for selection of a cross-sectional image may also be input by the user via the input unit 180. In an exemplary embodiment, the cross-section manipulators 180a to 180d illustrated in
ax+by+cz+d=0    [Equation 1]
Here, the normal of the plane is represented by n=(a, b, c), and "d" represents a distance between the plane and the origin. Accordingly, once the values a, b, c, and d are set, a single plane is defined. The ultrasound imaging apparatus receives values of the parameters a, b, c, and d that define a cross-section from the user via the cross-section manipulators 180a to 180d, and acquires and displays the cross-sectional image corresponding to the values.
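The relation between the four parameters and the extracted cross-section may be sketched as follows, using nearest-neighbour sampling on a voxel grid; interpolation and anisotropic voxel spacing are ignored in this simplification.

```python
import numpy as np

def plane_slice(volume, a, b, c, d, size=16):
    """Sample the cross-section of `volume` on the plane ax+by+cz+d=0."""
    norm = np.linalg.norm([a, b, c])
    n = np.array([a, b, c], dtype=float) / norm
    # build two in-plane unit vectors orthogonal to the normal
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:       # normal parallel to the z axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    origin = -(d / norm) * n           # plane point closest to the origin
    img = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = origin + (i - size // 2) * v + (j - size // 2) * u
            idx = np.round(p).astype(int)
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                img[i, j] = volume[tuple(idx)]
    return img
```

Each (a, b, c, d) combination entered via the manipulators selects one such plane, and the sampled image is what would appear on the 2D display unit.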
Alternatively, the 2D display unit 160 may take the form of a touchscreen, such that a portion of the touchscreen serves as an input unit. If the user, for example, drags a touch (e.g., user drags a finger contacting the touchscreen) from one point to another point, a cross-sectional image taken along the line connecting the two points to each other may be acquired.
If the user inputs an instruction to select a cross-sectional image, a 3D ultrasound image of the subject may be displayed on the 3D display unit 140, or an image acquired via rendering of the volume data may be displayed on the 2D display unit 160. The user may refer to the displayed image when selecting the cross-sectional image.
Referring to
The image capture unit 571 may be implemented as a camera, and may be mounted to a 2D display unit 560 or a 3D display unit 540. The image capture unit 571 captures an image of the user and transmits the image to the motion recognition unit 572. The motion recognition unit 572 recognizes a user motion by analyzing the captured image. The motion recognition unit 572 may be realized by any one of various motion recognition technologies. A detailed description of such motion recognition technologies is omitted herein.
In an exemplary embodiment, the motion recognition unit 572 may recognize the shape and motion of the user's hand. Instructions corresponding to the shape and motion of the user's hand may be preset. If the motion recognition unit 572 recognizes the preset shape and motion of the hand, a corresponding instruction may be transmitted to a 3D display image generation unit 530 or a 2D cross-sectional image acquisition unit 550.
For example, if the user rotates a clenched hand leftward or rightward as illustrated in
Referring to
Referring to
Motions illustrated in
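The preset correspondence between recognized motions and instructions may be represented as a simple lookup table. The gesture labels and instruction names below are illustrative placeholders, not identifiers used by the apparatus, and the table could equally be re-preset by the user.

```python
# Hypothetical mapping from recognized (hand shape, motion) pairs
# to apparatus instructions; entries may be re-preset by the user.
GESTURE_TO_INSTRUCTION = {
    ("clenched_hand", "rotate_left"):  "rotate_3d_image_left",
    ("clenched_hand", "rotate_right"): "rotate_3d_image_right",
    ("open_hand", "swipe_horizontal"): "select_yz_cross_section",
    ("open_hand", "swipe_vertical"):   "select_xz_cross_section",
}

def instruction_for(hand_shape, motion):
    """Return the preset instruction for a recognized gesture, or None."""
    return GESTURE_TO_INSTRUCTION.get((hand_shape, motion))
```

When the motion recognition unit reports a pair found in the table, the corresponding instruction is forwarded to the 3D display image generation unit or the 2D cross-sectional image acquisition unit.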
Hereinafter, an exemplary embodiment with regard to a control method of the ultrasound imaging apparatus will be described.
Referring to
Next, volume data regarding the subject is generated from the acquired ultrasound data at operation 611. The volume data may be generated via 3D reconstruction of a plurality of pieces of cross-sectional image data regarding the subject.
A 3D display ultrasound image is generated using the volume data regarding the subject at operation 612. The 3D display ultrasound image is obtained by processing the volume data regarding the subject to conform to an output format of the 3D display unit.
In an exemplary embodiment, if the 3D display unit is configured to output a 3D multi-view image, a plurality of view images having different views is acquired from the volume data regarding the subject, and is synthesized to generate a multi-view image. In this case, the multi-view image may be generated by weaving, and the view disparity or the focal position, as parameters for acquisition of the view images, may be set by the user.
In another exemplary embodiment, if the 3D display unit is configured to output an integral image, a plurality of elemental images having different horizontal parallaxes and vertical parallaxes is acquired from the volume data regarding the subject, and is matched with positions corresponding to a lens array of the 3D display unit.
In a further exemplary embodiment, if the 3D display unit is configured to output a hologram, a 2D image and a depth image are acquired from the volume data regarding the subject, and a hologram pattern is generated using the 2D image and the depth image.
Next, the generated 3D ultrasound image is displayed on the 3D display unit at operation 613. The 3D display unit may be of a stereoscopic type, in which the viewer views a 3D image using special glasses, or of an auto-stereoscopic type, in which the viewer views a 3D image without wearing special glasses. All of the above methods of generating the 3D display image, described by way of example with respect to operation 612, may be applied to the auto-stereoscopic type 3D display unit. In particular, if the 3D display unit is configured to output an integral image, the 3D display unit may include a display device, such as an LCD, an LED display, a PDP, etc., and a lens array. As the elemental images matched with the lens array are integrated by the lens array, a single 3D integral image is output.
A 2D cross-sectional image of the subject is displayed on the 2D display unit. To this end, a 2D cross-sectional image is acquired from the volume data regarding the subject at operation 614, and the acquired 2D cross-sectional image is displayed on the 2D display unit at operation 615. The acquired cross-sectional image may be a cross-sectional image corresponding to the XY plane, the YZ plane, or the XZ plane, or any other arbitrary cross-section. Acquisition of the cross-sectional image may be performed by the ultrasound imaging apparatus itself, or may be performed in response to a selection instruction from the user. If a selection instruction is input by the user, a 3D ultrasound image of the subject may be displayed on the 3D display unit, or a volume image subjected to volume rendering may be displayed on the 2D display unit, so as to enable the user to select a 2D cross-sectional image based on the displayed image.
Although
When receiving the instruction for selection of the 2D cross-sectional image from the user, the instruction may be input via the input unit of the ultrasound imaging apparatus, or may be input via recognition of a user motion.
Referring to
Next, an image of the user is captured using an image capture unit, such as a camera, etc., at operation 622. Motion recognition is performed based on the captured image at operation 623, and a cross-sectional image corresponding to the recognized motion is acquired from the volume data regarding the subject at operation 624. To this end, a particular motion and a cross-sectional image corresponding to the particular motion may be preset to correspond to each other. If a motion recognized from the captured image conforms to the preset particular motion, a corresponding cross-sectional image is acquired and displayed on the 2D display unit at operation 625.
As is apparent from the above description of exemplary embodiments, according to an ultrasound imaging apparatus and a control method for the same, both a 2D ultrasound image and a 3D ultrasound image are displayed, which may provide not only clinical data, such as the anatomical shape of a subject, but also a high-resolution image for diagnosis of diseases.
Although the exemplary embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these exemplary embodiments without departing from the principles and spirit of the present disclosure, the scope of which is defined in the claims and their equivalents.