Claims
- 1. A system for generating simulated data, comprising:
a medical imaging camera for generating images, said images including distorted images; a registration device for registering data to a physical space, and to said medical imaging camera; and a fusion mechanism for fusing said data and said distorted images, to generate simulated data.
- 2. The system according to claim 1, further comprising:
another medical imaging camera for collecting said data, and wherein said medical imaging camera collects said images.
- 3. The system according to claim 1, wherein said data comprises pre-operative images and said images comprise intra-operative images.
- 4. The system according to claim 1, wherein said data comprises a two-dimensional image.
- 5. The system according to claim 1, wherein said simulated data comprises simulated post-operative images.
- 6. The system according to claim 1, wherein said fusion mechanism generates said simulated data while a surgery is being performed, and without correcting for distortion of said distorted images.
- 7. The system as in claim 3, wherein the pre-operative data comprises data of a surgical plan including a position of a component, and a three-dimensional shape of said component.
- 8. The system as in claim 7, wherein said component comprises an implant for a patient.
- 9. The system as in claim 2, wherein said another medical imaging camera comprises an X-ray computed tomography (CT) scanner.
- 10. The system as in claim 1, wherein said medical imaging camera comprises a two-dimensional (2D) X-ray camera.
- 11. The system as in claim 1, wherein said fusion mechanism comprises a data processor.
- 12. A system for fusing three-dimensional shape data on distorted images, comprising:
an imaging camera; a registration device for registering data to a physical space, and to the imaging camera; and a fusion mechanism for fusing said data and distorted intra-operative images to generate simulated data without correcting for distortion.
- 13. A system for providing intra-operative visual evaluations of potential surgical outcomes, using medical images, comprising:
a first medical imaging camera for collecting pre-operative images; a second medical imaging camera for collecting distorted intra-operative images, while a surgery is being performed; a registration mechanism for registering said pre-operative images and other pre-operative data to a physical space, and to said second medical imaging camera; and a fusion mechanism for fusing said pre-operative data and said distorted intra-operative images, without correcting for distortion, to generate simulated post-operative images.
- 14. A method of fusing three-dimensional image data on an image, comprising:
receiving a potentially distorted image; computing an apparent contour of said three-dimensional image data on said potentially distorted image; for each pixel of the image, determining a ray in three-dimensional space and computing a distance from the ray to the apparent contour; and selectively adjusting a pixel value of said potentially distorted image based on said distance.
- 15. The method according to claim 14, further comprising:
calibrating said potentially distorted image, wherein said computing is based on said potentially distorted image having been calibrated.
- 16. The method according to claim 15, wherein said calibrating comprises:
associating a center of perspective with the image and a ray destination with each pixel of the potentially distorted image.
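The calibration of claim 16 associates a center of perspective with the image and a ray destination with each pixel. As an illustration only, the sketch below derives the per-pixel ray from an ideal pinhole intrinsic matrix `K`; this matrix is an assumption, and a real calibration of a distorted image would more likely use a per-pixel lookup table, since the claims deliberately avoid undistorting the image:

```python
import numpy as np

def pixel_ray(u, v, K, cop):
    """Ray for pixel (u, v) under an assumed ideal pinhole model.

    Returns the ray origin (the center of perspective) and a unit
    direction toward the pixel's ray destination.
    """
    K = np.asarray(K, dtype=float)
    direction = np.linalg.inv(K) @ np.array([u, v, 1.0])
    direction /= np.linalg.norm(direction)
    return np.asarray(cop, dtype=float), direction
```

With `K` set to the identity, pixel (0, 0) maps to the optical axis, i.e. the unit direction (0, 0, 1).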
- 17. The method according to claim 16, wherein said computing comprises:
decomposing said three-dimensional image data into predetermined sub-shapes; and computing a set of three-dimensional apparent contours based on said center of perspective and the three-dimensional image data being decomposed into said predetermined sub-shapes.
- 18. The method according to claim 17, wherein said sub-shapes comprise sub-shapes having one of a triangular shape and a polygonal shape.
- 19. The method according to claim 16, wherein said computing further comprises:
based on said center of perspective, defining and extracting a three-dimensional apparent contour.
- 20. The method according to claim 19, wherein said defining and extracting comprises:
for each surface sub-shape, defining a viewing direction as a vector originating from the center of perspective to a centroid of the sub-shape.
- 21. The method according to claim 20, wherein if the sub-shape normal, as defined by a cross product of ordered oriented sub-shape edges, makes an obtuse angle with the viewing direction, the sub-shape is considered visible,
wherein a surface apparent contour is a subset of surface edges, such that a sub-shape on one side of the edge is visible and a sub-shape on another side of the edge is invisible, said apparent contour having edges linked to form non-planar polygonal curves in three dimensions.
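The visibility test of claims 20 and 21 can be sketched as follows. This is a minimal illustration assuming triangular sub-shapes indexed into a vertex array, and a hypothetical `edge_to_faces` map from each edge to its two incident faces; neither data structure is prescribed by the claims:

```python
import numpy as np

def is_visible(vertices, face, cop):
    """Back-face test for one triangular sub-shape (claims 20-21 sketch).

    The sub-shape normal is the cross product of ordered, oriented edges;
    the sub-shape is considered visible when that normal makes an obtuse
    angle with the viewing direction (the vector from the center of
    perspective to the centroid), i.e. when their dot product is negative.
    """
    a, b, c = (np.asarray(vertices[i], dtype=float) for i in face)
    normal = np.cross(b - a, c - a)
    centroid = (a + b + c) / 3.0
    view_dir = centroid - np.asarray(cop, dtype=float)
    return bool(normal @ view_dir < 0.0)

def apparent_contour_edges(vertices, faces, edge_to_faces, cop):
    """Edges with one visible and one invisible incident sub-shape."""
    vis = [is_visible(vertices, f, cop) for f in faces]
    return [e for e, (f1, f2) in edge_to_faces.items() if vis[f1] != vis[f2]]
```

This is the standard back-face criterion: selecting edges shared by one visible and one invisible face yields exactly the apparent-contour subset described in claim 21.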
- 22. The method according to claim 21, wherein said forming of said apparent contour comprises:
identifying edges belonging to any apparent contour, and adding said edges to a list, wherein said edges are oriented such that a visible sub-shape is on a predetermined side of the edge, thus defining an edge origin and an edge destination.
- 23. The method according to claim 22, wherein said forming of said apparent contour further comprises:
based on a first edge in the list, creating a new apparent contour starting with said first edge; and completing said apparent contour, containing said first edge.
- 24. The method according to claim 23, wherein said completing said apparent contour comprises:
starting from the destination of the current edge, determining a next edge, wherein sub-shapes incident to a destination vertex are visited in a counter-clockwise fashion and the first edge belonging to the list of apparent contour edges is determined; and reapplying said determining until a next edge is the same as the first edge that was processed previously.
- 25. The method according to claim 24, wherein said forming said apparent contour further comprises:
removing all the edges forming that contour from the list of apparent contour edges.
- 26. The method according to claim 25, wherein said forming said apparent contour further comprises:
reapplying said creating a new apparent contour, completing the apparent contour, and said removing until the list of apparent contour edges is empty.
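The contour-forming loop of claims 22 through 26 can be sketched as follows. This simplification assumes each contour vertex has exactly one outgoing contour edge, so the counter-clockwise visit of sub-shapes incident to the destination vertex (claim 24) collapses to a dictionary lookup; a full mesh traversal would be needed when contours share vertices:

```python
def link_contours(edges):
    """Link oriented contour edges (origin, destination) into closed loops.

    Sketch of claims 22-26: start a new contour at any listed edge,
    follow edges destination-to-origin until the first edge recurs,
    remove the finished contour's edges, and repeat until the list
    of apparent-contour edges is empty.
    """
    by_origin = {origin: (origin, dest) for origin, dest in edges}
    remaining = set(edges)
    contours = []
    while remaining:                    # claim 26: until the list is empty
        first = next(iter(remaining))   # claim 23: start a new contour
        contour = [first]
        edge = by_origin[first[1]]      # next edge starts at the destination
        while edge != first:            # claim 24: stop at the first edge
            contour.append(edge)
            edge = by_origin[edge[1]]
        remaining -= set(contour)       # claim 25: remove the contour's edges
        contours.append(contour)
    return contours
```

For example, the edge list `[(0, 1), (1, 2), (2, 0), (5, 6), (6, 5)]` links into two closed contours, of three and two edges respectively.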
- 27. The method according to claim 16, wherein said determining comprises:
for each pixel of the potentially distorted image, determining the corresponding ray from the center of perspective, and computing the distance to the apparent contour.
- 28. The method according to claim 27, wherein computing said distance from a given line in three dimensions to an apparent contour comprises:
computing the distance from the line in three dimensions to a line segment in three dimensions.
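The line-to-segment distance of claim 28 reduces to minimizing the distance between a point on the pixel's ray and a point on one contour edge, clamping the segment parameter to [0, 1]. A minimal sketch, with names chosen here for illustration:

```python
import numpy as np

def line_segment_distance(p, d, a, b):
    """Distance between the line p + t*d and the segment [a, b] in 3D.

    Minimizes |p + t*d - (a + s*(b - a))| over t in R and s in [0, 1],
    by solving the 2x2 normal equations and clamping s to the segment.
    """
    p, d, a, b = (np.asarray(v, dtype=float) for v in (p, d, a, b))
    u = b - a
    w = p - a
    dd, du, uu = d @ d, d @ u, u @ u
    dw, uw = d @ w, u @ w
    denom = dd * uu - du * du          # zero when line and segment are parallel
    s = 0.0 if abs(denom) < 1e-12 else (dd * uw - du * dw) / denom
    s = min(1.0, max(0.0, s))          # clamp to the segment
    t = (du * s - dw) / dd             # re-optimize t for the clamped s
    return float(np.linalg.norm(p + t * d - (a + s * u)))
```

For instance, the x-axis through the origin is at distance 1 from the segment joining (0, 0, 1) and (0, 0, 2), and at distance 0 from any segment it crosses.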
- 29. The method according to claim 16, wherein said adjusting said pixel value comprises:
updating the pixel value based on the ray-to-shape distance that was determined.
- 30. The method according to claim 15, further comprising:
fusing said three-dimensional image data with said potentially distorted image by integrating a two-dimensional projection of a silhouette of a three-dimensional implant model in an X-ray image.
- 31. The method according to claim 30, wherein said fusing uses a calibration of the X-ray image, to determine a center of perspective whose location represents an estimate of a location of an X-ray source,
said center of perspective being used to compute silhouette curves on the three-dimensional implant model.
- 32. The method according to claim 31, wherein said silhouette curves are such that rays emanating from the center of perspective and tangent to the three-dimensional model meet the three-dimensional implant model on a silhouette curve.
- 33. The method according to claim 16, further comprising:
fusing by projecting a silhouette curve of said data by considering each pixel in turn, determining a line in three dimensions corresponding to that pixel by image calibration, computing a distance from the line to the silhouette curve, and assigning a pixel gray-scale value depending on the distance.
- 34. The method according to claim 33, wherein said fusing comprises assigning pixel gray-scale values corresponding to a distance,
wherein if the distance is less than a first predetermined value, then the gray-scale value is set to a first predetermined number.
- 35. The method according to claim 34, wherein if the distance is less than a second predetermined value, then the gray-scale value is set to a second predetermined number larger than said first predetermined number.
- 36. The method according to claim 35, wherein if the distance is less than a third predetermined value greater than said first and second predetermined values, then the gray-scale value is set to a third predetermined number larger than said first and second predetermined numbers.
- 37. The method according to claim 36, wherein if the distance is greater than or equal to said third predetermined value, then the gray-scale value is not modified for projecting the silhouette curves.
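Claims 34 through 37 describe a banded gray-scale assignment driven by the ray-to-silhouette distance. A minimal sketch follows; the specific threshold and level values are illustrative assumptions, since the claims require only three increasing thresholds, three increasing gray levels, and an unmodified pixel beyond the third threshold:

```python
def silhouette_pixel_value(distance, current_value,
                           thresholds=(0.5, 1.0, 1.5),
                           levels=(0, 128, 192)):
    """Banded gray-scale assignment of claims 34-37 (values are assumed).

    Pixels closest to the silhouette get the darkest level; pixels
    beyond the outermost band keep their current value (claim 37).
    """
    v1, v2, v3 = thresholds   # increasing predetermined values
    n1, n2, n3 = levels       # increasing predetermined gray levels
    if distance < v1:
        return n1
    if distance < v2:
        return n2
    if distance < v3:
        return n3
    return current_value      # claim 37: pixel not modified
```

Applied per pixel after the distance computation of claim 33, this draws the projected silhouette as a dark curve with progressively lighter fringes, leaving the rest of the X-ray image untouched.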
- 38. A method of generating simulated post-operative data, comprising:
collecting pre-operative data and collecting potentially distorted intra-operative images; registering said pre-operative data to a physical space, and to a medical imaging camera; and fusing said pre-operative data and said intra-operative images to generate simulated post-operative data, without correcting for distortion.
- 39. The method of claim 38, further comprising:
calibrating the medical imaging camera, wherein the pre-operative data comprises data of a surgical plan including a position of a component, and a three-dimensional shape of said component.
- 40. The method of claim 39, wherein said component of said surgical plan includes an implant position.
- 41. An apparatus for fusing three-dimensional image data on an image, comprising:
means for receiving a potentially distorted image; a processor for computing an apparent contour of said three-dimensional image data on said potentially distorted image; means for determining, for each pixel of the image, a ray in three-dimensional space and for computing a distance from the ray to the apparent contour; and means for selectively adjusting a pixel value of said potentially distorted image based on said distance.
- 42. A signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method for computer-implemented fusing of three-dimensional image data on a distorted image without correcting for distortion, comprising:
computing an apparent contour of three-dimensional image data on a potentially distorted image; for each pixel of the image, determining a ray in three-dimensional space and computing a distance from the ray to the apparent contour; and selectively adjusting a pixel value of said potentially distorted image based on said distance.
- 43. A signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method for computer-implemented generating of simulated post-operative data, comprising:
collecting pre-operative data and collecting potentially distorted intra-operative images; registering said pre-operative data to a physical space, and to a medical imaging camera; and fusing said pre-operative data and said intra-operative images to generate simulated post-operative data, without correcting for distortion.
CROSS-REFERENCE TO RELATED APPLICATION
[0001] The present application is related to U.S. patent application Ser. No. 09/299,643, filed on Apr. 27, 1999, to Gueziec et al., entitled “SYSTEM AND METHOD FOR INTRA-OPERATIVE, IMAGE-BASED, INTERACTIVE VERIFICATION OF A PRE-OPERATIVE SURGICAL PLAN” having IBM Docket No. YO999-095, assigned to the present assignee, and incorporated herein by reference.
Divisions (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09354800 | Jul 1999 | US |
| Child | 09852739 | May 2001 | US |