System and method for three-dimensional (3D) reconstruction from ultrasound images

Abstract
A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images includes deriving a 2D image of an object; defining a target region within said 2D image; defining a volume scan period; during the volume scan period, deriving further 2D images of the target region and storing respective pose information for the further 2D images; and reconstructing a 3D image representation for the target region by utilizing the 2D images and the respective pose information.
Description


[0012] The present invention relates to ultrasound imaging, such as for medical imaging purposes, and, more particularly, to local three-dimensional (3D) reconstruction from two-dimensional (2D) ultrasound images.


[0013] Augmented Reality visualization of ultrasound images has been proposed in the literature; see, for example, M. Bajura, H. Fuchs, and R. Ohbuchi, “Merging Virtual Objects with the Real World: Seeing Ultrasound Imagery within the Patient,” Proceedings of SIGGRAPH 92 (Chicago, Ill., Jul. 26-31, 1992), in Computer Graphics 26, #2 (July 1992): 20


[0014] Helpful background material on augmented reality and related topics can be found in Proceedings of the IEEE and ACM International Symposium on Augmented Reality 2000, dated Oct. 5-6, 2000, Munich, Germany; IEEE Computer Society, Los Alamitos, Calif., U.S.A. In the above-cited Proceedings, an article of particular interest, entitled AUGMENTED WORKSPACE: DESIGNING AN AR TESTBED, is published on pages 47-53 and is authored by Frank Sauer, an inventor in the present application, et al.


[0015] See also the review article by R. T. Azuma: “A Survey of Augmented Reality”, Presence: Teleoperators and Virtual Environments, 6(4), 355-386, (1997).


[0016]
FIG. 1 shows a schematic block diagram of an augmented reality system as may be utilized in conjunction with features of the invention. A tracker camera 10 is coupled by way of an A/D (analog-to-digital) converter 12 to a programmable digital computer 14. Two scene cameras 16 are coupled to computer 14. An ultrasound scanner 16, having a transducer 18, is coupled by way of an A/D converter 20 to computer 14. A head-mounted display (HMD) control unit 22 is coupled for signal interchange with computer 14 and to an HMD display 24.


[0017] Ultrasound scanners capture live 2D images (B-scans) from within objects or patients. 3D ultrasound imaging is attractive as it facilitates the diagnosis and identification of scanned structures, and provides a better overall understanding of the shape and topology of such structures. To assemble a set of 2D images into a 3D representation, one needs to track the pose of the individual 2D images, for example, using a commercial magnetic or optical tracking system.
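
By way of illustration and not limitation, the following Python sketch shows the kind of per-slice record such a tracked freehand system might keep; the class name, the field names, and the convention of a 4x4 homogeneous image-plane-to-world transform are assumptions made for the example, not particulars of any commercial tracking system.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrackedFrame:
    """One live 2D B-scan paired with the tracker pose at acquisition time.

    All names and conventions here are illustrative assumptions.
    """
    image: np.ndarray        # 2D B-scan pixels, shape (rows, cols)
    pose: np.ndarray         # 4x4 homogeneous transform: image plane -> world frame
    pixel_spacing_mm: float  # scanner calibration: size of one pixel in millimeters
    timestamp_s: float       # acquisition time, used to pair image and pose streams
```

A set of such records, accumulated while the transducer sweeps over the anatomy, is the input to the volume reconstruction described further below.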


[0018] Commercial systems are available, such as those from TomTec (see http://www.tomtec.de/). Free software to perform a 3D reconstruction from 2D ultrasound slices with known poses is also available; see, for example, http://svr-www.eng.cam.ac.uk/~rwp/stradx/. Nevertheless, it is difficult to achieve real-time performance with existing 3D ultrasound systems.


[0019] An object of the present invention is to perform local 3D reconstruction from 2D ultrasound images.


[0020] In accordance with another aspect of the invention, a method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images includes deriving a 2D image of an object; defining a target region within said 2D image; defining a volume scan period; during the volume scan period, deriving further 2D images of the target region and storing respective pose information for the further 2D images; and reconstructing a 3D image representation for the target region by utilizing the 2D images and the respective pose information.






[0021] The invention will be more fully understood from the following detailed description of preferred embodiments, in conjunction with the Drawing, in which


[0022]
FIG. 1 shows a block diagram of an augmented reality system.






[0023] In accordance with an aspect of the invention, a system for local 3D reconstruction allows a user to identify a target in an ultrasound image, and then to scan a volume by moving the transducer. Alternatively, the system allows a user to identify a region of interest in an ultrasound image, and then to scan a volume by moving the transducer.


[0024] The system will perform a 3D reconstruction of the local volume that is centered around the target or, in the alternative, bounded by the 2D region of interest and the scan motion. The local volume is smaller than the whole volume scanned by the ultrasound slice, and the 3D reconstruction can accordingly be performed in real time.


[0025] Preferably, the 3D information is visualized in an augmented reality fashion. The ultrasound images are overlaid onto a view of the patient or the object in a registered way. Structures visible in the ultrasound images appear in the location of the corresponding physical structures. Preferably, the augmented view is stereoscopic to provide the user with depth perception. For the augmented reality visualization, the system includes means to track the ultrasound transducer's pose from the user's viewpoint.


[0026] A preferred embodiment in accordance with the principles of the present invention comprises an ultrasound scanner, including a transducer, and tracking apparatus to track the transducer. For the option with augmented reality (AR) applications, tracking apparatus is also utilized to track the user's viewpoint. A calibration procedure is established, and a computer is utilized for generating graphics.


[0027] As concerns visualization, the volume can be shown as a 3D texture map. Alternatively, the system can process the volume information and segment out a target structure. A segmented structure is then displayed in a 3D surface representation.


[0028] With regard to the user interface, there are various ways for the user to input the location of the target structure of interest, including those described in the aforementioned patent application entitled “MARKING 3D LOCATIONS FROM ULTRASOUND IMAGES”.


[0029] Target localization and marking can be done by using a pointing device, such as a computer-type mouse, to mark a 2D location in the image. With knowledge of the 3D pose of that image, the system can calculate the 3D position of the marker. The user's input may be in an “on-line” mode, where he holds the transducer still in the desired pose, or in an “off-line” mode, where he first freezes (that is, records) the image together with its pose information, and then places the marker in the recorded still image at his leisure. For the on-line mode, the pointing device is preferably attached to the transducer, where it can be operated with the same hand that is holding the transducer, or, alternatively, is placed on the floor and foot-actuated by the user.
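
As a non-limiting illustration of the marker calculation described above, the sketch below maps a marked pixel into 3D using the recorded pose of its image; the assumed conventions (pixel spacing in millimeters, pose given as a 4x4 image-plane-to-world transform, the B-scan lying in the z = 0 plane of its own frame) are choices made for this example.

```python
import numpy as np

def marker_to_3d(row: float, col: float,
                 pose: np.ndarray,          # 4x4 image-plane -> world transform (assumed)
                 pixel_spacing_mm: float) -> np.ndarray:
    """Return the 3D world position of a marker placed at pixel (row, col).

    The B-scan is modeled as the z = 0 plane of its own coordinate frame,
    with pixel indices scaled to millimeters by the scanner calibration.
    """
    p_image = np.array([col * pixel_spacing_mm, row * pixel_spacing_mm, 0.0, 1.0])
    return (pose @ p_image)[:3]

# Example: a pose that merely lifts the image plane 50 mm along world z.
pose = np.eye(4)
pose[2, 3] = 50.0
print(marker_to_3d(120, 320, pose, 0.2))    # -> [64. 24. 50.]
```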


[0030] Preferably, the system performs image processing on an ultrasound image and automatically or semi-automatically locates a target. The type of target may optionally have been predetermined by the user. Hence, the input required of the user is simplified. For example, without an extra pointing device, the user may place the transducer in a pose where the target structure appears on the vertical centerline of the ultrasound image. The user then simply triggers the system to locate the target along the image's centerline, for example with a button on the transducer, with a foot switch, or by voice control, and so forth.


[0031] The system includes a processor that searches the image along its centerline, which makes locating the target easier than if the search had to be conducted over the whole image. A preferred search algorithm is to de-noise the image around the centerline, e.g. with a median filter, identify potential target locations in a line scan along the centerline, and verify the existence of a target with a Hough transform. The Hough transform is known and may be found in various textbooks, such as, for example, “Fundamentals of Electronic Image Processing” by Arthur R. Weeks, Jr., IEEE Press, New York, N.Y., 1996. When the system proposes a target location, the user then accepts or rejects it with another simple trigger input (such as a button on the transducer, a footswitch, voice, and so forth).
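
The following sketch illustrates one way the described search might be realized in Python; the strip width, the peak-prominence criterion, and the use of a circular Hough transform (which presumes roughly circular targets) are all assumptions made for the example rather than requirements of the invention.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def find_target_on_centerline(image, strip_halfwidth=20,
                              expected_radii=np.arange(5, 25)):
    """De-noise a strip around the vertical centerline, scan the centerline
    for candidate rows, then verify candidates with a Hough transform."""
    center = image.shape[1] // 2
    strip = image[:, center - strip_halfwidth: center + strip_halfwidth]

    # 1. De-noise around the centerline, e.g. with a median filter.
    smooth = median_filter(strip, size=5)

    # 2. Line scan along the centerline: prominent maxima are candidates.
    profile = smooth[:, strip_halfwidth].astype(float)
    candidate_rows, _ = find_peaks(profile, prominence=profile.std())

    # 3. Verify with a circular Hough transform on the strip's edge map
    #    (assuming approximately circular targets for this sketch).
    edges = canny(smooth.astype(float))
    accumulator = hough_circle(edges, expected_radii)
    _, cx, cy, radii = hough_circle_peaks(accumulator, expected_radii,
                                          total_num_peaks=5)

    # Keep candidate rows lying near a strong Hough circle center.
    verified = [r for r in candidate_rows
                if any(abs(r - y) <= rad for y, rad in zip(cy, radii))]
    return [(row, center) for row in verified]   # full-image (row, col) pairs
```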


[0032] Alternatively, a line on the ultrasound image may be utilized as a pointer. A line other than the vertical centerline can be used, such as a vertical off-center line, or a line that is tilted at an angle with respect to the vertical direction. Also, the target position along the line can be input by the user via a thumbwheel at the transducer or a similar 1D pointing device (no image processing being necessary).


[0033] Alternatively, the user can use a line on the ultrasound image to point to a target from two different transducer poses, so that the system can calculate the target location as the intersection of the two lines in 3D space. Two different lines can be used to require less movement between the two pointing transducer poses, e.g. two lines that intersect in the image.
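
In practice, two tracked pointing lines will rarely intersect exactly, so a natural implementation, sketched below under that assumption, takes the target to be the midpoint of the shortest segment joining the two lines; each line is given by a point and a direction derived from the image line and the transducer pose.

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Best-fit 'intersection' of two 3D lines p_i + t_i * d_i.

    Returns the midpoint of the shortest segment joining the lines.
    (Parallel lines make the system singular; a full implementation
    would guard against that case.)
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)

    # Orthogonality conditions for the mutually closest points:
    #   t1*(d1.d1) - t2*(d1.d2) = d1.(p2 - p1)
    #   t1*(d1.d2) - t2*(d2.d2) = d2.(p2 - p1)
    b = p2 - p1
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(a, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Example: skew lines along x and y, offset 1 unit in z; target at z = 0.5.
print(closest_point_between_lines([0, 0, 0], [1, 0, 0],
                                  [0, 0, 1], [0, 1, 0]))   # -> [0.  0.  0.5]
```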


[0034] In a further embodiment, the image processing is powerful enough to find a target anywhere in the 2D image. The user then need only position the transducer so that the target is visible in the image, initiate the automatic target search, and then confirm or dismiss a target proposed by the system.


[0035] Several targets can be found in the same image. This embodiment is attractive from a user's point of view as it requires the least input, but robust target detection in ultrasound images is generally very difficult. Therefore, it is preferable to provide the user with a 1D or 2D pointing interface as described above.


[0036] The size of a region of interest centered at the target can be fixed, but is preferably user-adjustable. Once the size and location of a region of interest are determined in a 2D ultrasound slice, the user triggers the start and the end of the volume scan, e.g. by using a pushbutton at the transducer, with a foot switch, or via voice command, etc. The system records 2D images together with the pose information from the tracking system and builds the volume according to the prior art.
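
By way of a non-limiting sketch, the nearest-neighbor compositing below shows how the recorded slices and poses might be assembled into a small voxel volume bounded by the region of interest; the argument names, the image-plane-to-world pose convention, and the simple averaging of overlapping slices are assumptions chosen for the example.

```python
import numpy as np

def build_local_volume(frames, roi_origin, roi_size_mm, voxel_mm=0.5):
    """Composite tracked B-scans into a local volume by nearest-neighbor fill.

    `frames` holds (image, pose, pixel_spacing_mm) triples recorded between
    the start and end triggers of the volume scan; `pose` is assumed to be a
    4x4 image-plane-to-world transform, and `roi_origin`/`roi_size_mm` (in
    world millimeters) bound the region of interest around the target.
    """
    roi_origin = np.asarray(roi_origin, float)
    dims = (np.asarray(roi_size_mm, float) / voxel_mm).astype(int)
    volume = np.zeros(dims, dtype=np.float32)
    counts = np.zeros(dims, dtype=np.uint32)       # for averaging overlaps

    for image, pose, spacing in frames:
        rows, cols = np.indices(image.shape)
        # Pixel centers of this slice, expressed in world coordinates.
        pts = np.stack([cols.ravel() * spacing, rows.ravel() * spacing,
                        np.zeros(image.size), np.ones(image.size)])
        world = (pose @ pts)[:3].T
        idx = np.round((world - roi_origin) / voxel_mm).astype(int)

        # Keep only the pixels that fall inside the local volume.
        inside = np.all((idx >= 0) & (idx < dims), axis=1)
        i, j, k = idx[inside].T
        np.add.at(volume, (i, j, k), image.ravel()[inside])
        np.add.at(counts, (i, j, k), 1)

    return volume / np.maximum(counts, 1)          # mean where slices overlap
```

Because the volume covers only the region of interest rather than the whole scanned field, the voxel count, and hence the reconstruction cost, remains small enough for real-time operation.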


[0037] In an alternate embodiment, the system automatically selects target structures for local reconstruction. An important example is local 3D reconstruction from ultrasound Doppler images. The system automatically detects the regions with flow and performs local volume reconstruction or segmentation for these regions.
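
A minimal sketch of such automatic detection, assuming the Doppler data are available as a per-pixel flow-magnitude image and that a fixed threshold with a minimum region size suffices, might look as follows; the bounding box of each detected region can then serve as the target region for local reconstruction.

```python
import numpy as np
from scipy.ndimage import label, find_objects

def detect_flow_regions(flow_magnitude: np.ndarray,
                        flow_threshold: float,
                        min_pixels: int = 50):
    """Find connected regions of significant flow in a 2D Doppler frame.

    Pixels above `flow_threshold` are grouped into connected components,
    and components smaller than `min_pixels` are discarded as noise.
    """
    mask = flow_magnitude > flow_threshold
    labeled, _ = label(mask)           # 4-connected components by default
    boxes = []
    for lab, box in enumerate(find_objects(labeled), start=1):
        if box is not None and (labeled[box] == lab).sum() >= min_pixels:
            boxes.append(box)          # (slice over rows, slice over cols)
    return boxes
```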


[0038] While the invention has been explained by way of exemplary embodiments, it will be understood by one of skill in the art to which it pertains that various modifications and changes may be readily made without departing from the spirit of the invention, which is defined by the claims that follow.

Claims
  • 1. A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images, comprising: deriving a 2D image of an object; defining a target region within said 2D image; deriving further 2D images having respective poses and including said target region; and reconstructing a 3D image representation for said target region from said 2D images and said respective poses.
  • 2. A method for local 3D reconstruction as recited in claim 1, wherein said step of defining a target region comprises a step of searching said image along its centerline for identifying a potential target region.
  • 3. A method for local 3D reconstruction as recited in claim 2, wherein said step of searching said image comprises a step of utilizing a search algorithm for searching said image along its centerline for identifying a potential target region.
  • 4. A method for local 3D reconstruction as recited in claim 3, wherein said step of utilizing a search algorithm comprises a step of de-noising said image around its centerline for identifying a potential target region.
  • 5. A method for local 3D reconstruction as recited in claim 4, wherein said step of de-noising comprises a step of median filtering for identifying a potential target region.
  • 6. A method for local 3D reconstruction as recited in claim 1, wherein said step of de-noising comprises a step of median filtering for identifying a potential target region.
  • 7. A method for local 3D reconstruction as recited in claim 4, wherein said step of searching said image comprises a step of utilizing a Hough transform for verifying a potential target region.
  • 8. A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images, comprising: deriving a 2D image of an object; defining a target region within said 2D image, defining a volume scan period; during said volume scan period, deriving further 2D images of said target region and storing respective pose information for said further 2D images; and reconstructing a 3D image representation for said target region by utilizing said 2D images and said respective pose information.
  • 9. A method for local 3D reconstruction as recited in claim 8, wherein said step of defining a target region comprises semi-automatic steps.
  • 10. A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images, comprising: deriving a 2D image of an object; defining a target region within said 2D image, said target region being less than the whole of said 2D image; defining the start and end of a volume scan; deriving further 2D images of said target region during a period between said start and end of said volume scan, said further 2D images having respective poses; storing pose information for said respective poses; and reconstructing a 3D image representation for said target region by utilizing said 2D images and said respective pose information.
  • 11. A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound Doppler images, comprising: deriving a 2D Doppler image of an object; detecting flow regions exhibiting predetermined flow characteristics; defining a target region within said 2D image in correspondence with said flow regions; defining a volume scan period; during said volume scan period, deriving further 2D images of said target region and storing respective pose information for said further 2D images; and reconstructing a 3D image representation for said target region by utilizing said 2D images and said respective pose information.
  • 12. A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound Doppler images, comprising: deriving a 2D Doppler image of an object; automatically detecting flow regions exhibiting predetermined flow characteristics; automatically defining a target region within said 2D image in correspondence with said flow regions; defining a volume scan period; during said volume scan period, deriving further 2D images of said target region and storing respective pose information for said further 2D images; and reconstructing a 3D image representation for said target region by utilizing said 2D images and said respective pose information.
  • 13. A method for local 3D reconstruction as recited in claim 12, wherein said step of automatically defining a target region comprises a step of automatically searching said image along its centerline for identifying a potential target region.
  • 14. A method for local 3D reconstruction as recited in claim 13, wherein said step of automatically searching said image comprises a step of utilizing a search algorithm for searching said image along its centerline for identifying a potential target region.
  • 15. A method for local 3D reconstruction as recited in claim 14, wherein said step of utilizing a search algorithm comprises a step of de-noising said image around its centerline for identifying a potential target region.
  • 16. A method for local 3D reconstruction as recited in claim 15, wherein said step of de-noising comprises a step of median filtering for identifying a potential target region.
  • 17. A method for local 3D reconstruction as recited in claim 15, wherein said step of de-noising comprises a step of median filtering for identifying a potential target region.
  • 18. A method for local 3D reconstruction as recited in claim 13, wherein said step of searching said image comprises a step of utilizing a Hough transform for verifying a potential target region.
  • 19. Apparatus for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images, comprising: means for deriving a 2D image of an object; means for determining and storing respective pose information for a 2D image derived by said means for deriving a 2D image; means for defining the start and end of a volume scan, said means for defining being coupled to said means for deriving a 2D image; means for defining a target region within said 2D image, said target region being less than the whole of said 2D image, wherein said means for deriving a 2D image is coupled to said means for defining a target region, and derives further 2D images of said target region between said start and said end of said volume scan, said 2D images having respective poses; and means for reconstructing a 3D image representation for said target region by utilizing said 2D images and said respective pose information.
  • 20. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 19, wherein said means for defining a target region comprises a pointer.
  • 21. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 19, wherein said means for defining a target region comprises an image line.
  • 22. Apparatus for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images, comprising: means for deriving a 2D image of an object; means for defining a target region within said 2D image; means for defining a volume scan period; means for storing respective pose information for further 2D images of said target region derived during said volume scan period by said means for deriving a 2D image; and means for reconstructing a 3D image representation for said target region by utilizing said 2D images and said respective pose information.
  • 23. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 22, wherein said means for defining a target region comprises processor means for searching said image along its centerline for identifying a potential target region.
  • 24. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 23, wherein processor means for searching utilizes a search algorithm for searching said image along its centerline for identifying a potential target region.
  • 25. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 23, wherein processor means for searching utilizes a search algorithm for de-noising said image around its centerline for identifying a potential target region.
  • 26. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 23, wherein processor means for searching utilizes a search algorithm for de-noising said image around its centerline, by using a median filter, for identifying a potential target region.
  • 27. Apparatus for local 3-dimensional (3D) reconstruction as recited in claim 23, wherein processor means for searching utilizes a Hough transform for verifying a potential target region.
Parent Case Info

[0001] Reference is hereby made to the following U.S. Provisional patent applications whereof the benefit is hereby claimed and whereof the disclosures are hereby incorporated by reference:

[0002] U.S. Provisional patent application No. 60/312,872, entitled MARKING 3D LOCATIONS FROM ULTRASOUND IMAGES and filed Aug. 16, 2001 in the names of Frank Sauer, Ali Khamene, Benedicte Bascle;

[0003] U.S. Provisional patent application No. 60/312,876, entitled LOCAL 3D RECONSTRUCTION FROM ULTRASOUND IMAGES and filed Aug. 16, 2001 in the names of Frank Sauer, Ali Khamene, Benedicte Bascle;

[0004] U.S. Provisional patent application No. 60/312,871, entitled SPATIOTEMPORAL FREEZING OF ULTRASOUND IMAGES IN AUGMENTED REALITY VISUALIZATION and filed Aug. 16, 2001 in the names of Frank Sauer, Ali Khamene, Benedicte Bascle;

[0005] U.S. Provisional patent application No. 60/312,875, entitled USER INTERFACE FOR AUGMENTED AND VIRTUAL REALITY SYSTEMS and filed Aug. 16, 2001 in the names of Frank Sauer, Lars Schimmang, Ali Khamene; and

[0006] U.S. Provisional patent application No. 60/312,873, entitled VIDEO-ASSISTANCE FOR ULTRASOUND GUIDED NEEDLE BIOPSY and filed Aug. 16, 2001 in the names of Frank Sauer and Ali Khamene.

[0007] Reference is hereby made to the following copending U.S. patent applications being filed on even date herewith:

[0008] U.S. patent application entitled MARKING 3D LOCATIONS FROM ULTRASOUND IMAGES, filed in the names of Frank Sauer, Ali Khamene, Benedicte Bascle;

[0009] U.S. patent application entitled SPATIOTEMPORAL FREEZING OF ULTRASOUND IMAGES IN AUGMENTED REALITY VISUALIZATION, filed in the names of Frank Sauer, Ali Khamene, Benedicte Bascle;

[0010] U.S. patent application entitled USER INTERFACE FOR AUGMENTED AND VIRTUAL REALITY SYSTEMS, filed in the names of Frank Sauer, Lars Schimmang, Ali Khamene; and

[0011] U.S. patent application entitled VIDEO-ASSISTANCE FOR ULTRASOUND GUIDED NEEDLE BIOPSY, filed in the names of Frank Sauer and Ali Khamene.

Provisional Applications (5)
Number Date Country
60312872 Aug 2001 US
60312876 Aug 2001 US
60312871 Aug 2001 US
60312875 Aug 2001 US
60312873 Aug 2001 US