System and method of providing real-time dynamic imagery of a medical procedure site using multiple modalities

Information

  • Patent Grant
  • Patent Number
    11,481,868
  • Date Filed
    Friday, July 3, 2020
  • Date Issued
    Tuesday, October 25, 2022
Abstract
A system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency is disclosed. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site at multiple depths. A user may thereby view a single, accurate, and current composite real-time dynamic imagery of a region of interest at the medical procedure site as the user performs a medical procedure.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention is directed to a system and method of providing composite real-time dynamic imagery of a medical procedure site using multiple modalities. One or more of the modalities may provide two-dimensional or three-dimensional imagery.


Description of the Related Art

It is well established that minimally-invasive surgery (MIS) techniques offer significant health benefits over their analogous laparotomic (or “open”) counterparts. Among these benefits are reduced trauma, rapid recovery time, and shortened hospital stays, resulting in greatly reduced care needs and costs. However, because of limited visibility of certain internal organs, some surgical procedures are at present difficult to perform using MIS. With conventional technology, a surgeon operates through small incisions using special instruments while viewing internal anatomy and the operating field through a two-dimensional monitor. Operating below while seeing a separate image above can give rise to a number of problems. These include the issue of parallax, a spatial coordination problem, and a lack of depth perception. Thus, the surgeon bears a higher cognitive load when employing MIS techniques than with conventional open surgery because the surgeon has to work with less natural hand-instrument-image coordination.


These problems may be exacerbated when the surgeon wishes to employ other modalities to view the procedure. A modality may be any method and/or technique for visually representing a scene. Such modalities, such as intraoperative laparoscopic ultrasound, would benefit the procedure by providing complementary information regarding the anatomy of the surgical site, and, in some cases, allowing the surgeon to see inside of an organ before making an incision or performing any other treatment and/or procedure. But employing more than one modality is often prohibitively difficult. This is particularly the case when the modalities are video streams displayed separately on separate monitors. Even if the different modalities are presented in a picture-in-picture or side-by-side arrangement on the same monitor, it would not be obvious to the surgeon, or any other viewer, how the anatomical features in each video stream correspond. This is so because the spatial relationships between the areas of interest at the surgical site (for example, surfaces, tissues, organs, and/or other objects imaged by the different modalities) are not aligned to the same view perspective. As such, the same areas of interest may be positioned and oriented differently between the different modalities. This is a particular problem for modalities like ultrasound, wherein anatomical features do not obviously correspond to the same features in optical (or white-light) video.


The problems may be further exacerbated in that the surgical site is not static but dynamic, continually changing during the surgery. For example, in laparoscopic surgery, the organs in the abdomen continually move and reshape as the surgeon explores, cuts, stitches, removes and otherwise manipulates organs and tissues inside the body cavity. Even the amount of gas inside the body cavity (used to make space for the surgical instruments) changes during the surgery, and this affects the shape or position of everything within the surgical site. Therefore, if the views from the modalities are not continuous and immediate, they may not accurately and effectively depict the current state and/or conditions of the surgical site.


While there is current medical imaging technology that superimposes a video stream using one modality on an image dataset from another modality, the image dataset is static and, therefore, not continuous or immediate. As such, the image dataset must be periodically updated based on the position of the subject, for example the patient, and/or anatomical or other features and/or landmarks. Periodically updating and/or modifying the image dataset may introduce undue latency in the system, which may be unacceptable from a medical procedure standpoint. The undue latency may cause the image being viewed on the display by the surgeon to be continually obsolete. Additionally, relying on the positions of the subject and/or anatomical or other features and/or landmarks to update and/or modify the image being viewed may cause the images from the different modalities to be not only obsolete but also non-synchronous when viewed.


Accordingly, there currently is no medical imaging technology directed to providing composite real-time dynamic imagery from multiple modalities using two or more video streams, wherein each video stream from each modality may provide a real-time view of the medical procedure site to provide a continuous and immediate view of the current state and condition of the medical procedure site. Also, there currently is no medical imaging technology directed to providing composite imagery from multiple modalities using two or more video streams, wherein each video stream may be dynamic in that each may be synchronized to the other, and not separately to the position of the subject, and/or anatomical or other features and/or landmarks. As such, there is currently no medical imaging technology that provides composite real-time, dynamic imagery of the medical procedure site from multiple modalities.


Therefore, there is a need for a system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple medical modalities, which continuously and immediately depicts the current state and condition of the medical procedure site and does so synchronously with respect to each of the modalities and without undue latency.


SUMMARY OF THE INVENTION

The present invention is directed to a system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site. As such, a surgeon, or other medical practitioner, may view a single, accurate, and current composite real-time dynamic imagery of a region of interest at the medical procedure site as he/she performs a medical procedure, and thereby, may properly and effectively implement the medical procedure.


In this regard, a first real-time dynamic video stream of a scene based on a first modality may be received. A second real-time dynamic video stream of the scene based on a second modality may also be received. The scene may comprise tissues, bones, instruments, and/or other surfaces or objects at a medical procedure site and at multiple depths. The first real-time dynamic video stream and the second real-time dynamic video stream may be spatially registered to each other. Spatially registering the first real-time dynamic video stream and the second real-time dynamic video stream to each other may form a composite representation of the scene. A composite real-time dynamic video stream of the scene may be generated from the composite representation. The composite real-time dynamic video stream may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view at multiple depths of a region of interest at the medical procedure site. The composite real-time dynamic video stream may be sent to a display.


The first real-time dynamic video stream may depict the scene from a perspective based on a first spatial state of a first video source. Also, the second real-time dynamic video stream may depict the scene from a perspective based on a second spatial state of a second video source. The first spatial state may comprise a displacement and an orientation of the first video source, while the second spatial state may comprise a displacement and an orientation of the second video source. The first spatial state and the second spatial state may be used to synchronously align a frame of the second real-time dynamic video stream depicting a current perspective of the scene with a frame of the first real-time dynamic video stream depicting a current perspective of the scene. In this manner, the displacement and orientation of the first video source and the displacement and orientation of the second video source may be used to accurately depict the displacement and orientation of the surfaces and objects in the scene from both of the current perspectives in the composite representation.


The first modality may be two-dimensional or three-dimensional. Additionally, the first modality may comprise endoscopy, and may be selected from a group comprising laparoscopy, hysteroscopy, thoracoscopy, arthroscopy, colonoscopy, bronchoscopy, cystoscopy, proctosigmoidoscopy, esophagogastroduodenoscopy, and colposcopy. The second modality may be two-dimensional or three-dimensional. Additionally, the second modality may comprise one or more modalities selected from a group comprising medical ultrasonography, magnetic resonance, x-ray imaging, computed tomography, and optical wavefront imaging. As such, a plurality, comprising any number, of video sources, modalities, and real-time dynamic video streams is encompassed by the present invention.


Those skilled in the art will appreciate the scope of the present invention and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the invention, and together with the description serve to explain the principles of the invention.



FIG. 1 is a schematic diagram illustrating an exemplary real-time dynamic imaging system, wherein a first real-time dynamic video stream of a scene may be received from a first video source, and a second real-time dynamic video stream of the scene may be received from a second video source, and wherein the first real-time dynamic video stream and the second real-time dynamic video stream may be spatially registered to each other, according to an embodiment of the present invention;



FIG. 2 is a flow chart illustrating a process for generating a composite real-time dynamic video stream of the scene by spatially registering the first real-time dynamic video stream and the second real-time dynamic video stream according to an embodiment of the present invention;



FIGS. 3A, 3B, and 3C are graphical representations of the spatial registering of a frame of the first real-time dynamic video stream and a frame of the second real-time dynamic video stream to form a composite representation of the scene, according to an embodiment of the present invention;



FIGS. 4A and 4B illustrate exemplary arrangements, which may be used to determine the spatial relationship between the first video source and the second video source using the first spatial state and the second spatial state, according to an embodiment of the present invention;



FIG. 5 is a schematic diagram illustrating an exemplary real-time dynamic imaging system at a medical procedure site, wherein the first video source and the second video source are co-located, and wherein the first video source may comprise an endoscope, and wherein the second video source may comprise an ultrasound transducer, according to an embodiment of the present invention;



FIG. 6 is a schematic diagram illustrating an exemplary real-time dynamic imaging system at a medical procedure site wherein the first video source and the second video source are separately located and wherein an infrared detection system to determine the first spatial state and the second spatial state may be included, according to an embodiment of the present invention;



FIGS. 7A, 7B, and 7C are photographic representations of a frame from a laparoscopy-based real-time dynamic video stream, a frame of a two-dimensional medical ultrasonography-based real-time dynamic video stream, and a frame of a composite real-time dynamic video stream resulting from spatially registering the laparoscopy-based real-time dynamic video stream and the two-dimensional medical ultrasonography-based real-time dynamic video stream, according to an embodiment of the present invention; and



FIG. 8 illustrates a diagrammatic representation of a controller in the exemplary form of a computer system adapted to execute instructions from a computer-readable medium to perform the functions for spatially registering the first real-time dynamic video stream and the second real-time dynamic video stream for generating the composite real-time dynamic video stream according to an embodiment of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the invention and illustrate the best mode of practicing the invention. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the invention and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.


The present invention is directed to a system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site. As such, a surgeon, or other medical practitioner, may view a single, accurate, and current composite real-time dynamic imagery of a region of interest at the medical procedure site as he/she performs a medical procedure, and thereby, may properly and effectively implement the medical procedure.


In this regard, a first real-time dynamic video stream of a scene based on a first modality may be received. A second real-time dynamic video stream of the scene based on a second modality may also be received. The scene may comprise tissues, bones, instruments, and/or other surfaces or objects at a medical procedure site and at multiple depths. The first real-time dynamic video stream and the second real-time dynamic video stream may be spatially registered to each other. Spatially registering the first real-time dynamic video stream and the second real-time dynamic video stream to each other may form a composite representation of the scene. A composite real-time dynamic video stream of the scene may be generated from the composite representation. The composite real-time dynamic video stream may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view at multiple depths of a region of interest at the medical procedure site. The composite real-time dynamic video stream may be sent to a display.


The first real-time dynamic video stream may depict the scene from a perspective based on a first spatial state of a first video source. Also, the second real-time dynamic video stream may depict the scene from a perspective based on a second spatial state of a second video source. The first spatial state may comprise a displacement and an orientation of the first video source, while the second spatial state may comprise a displacement and an orientation of the second video source. The first spatial state and the second spatial state may be used to synchronously align a frame of the second real-time dynamic video stream depicting a current perspective of the scene with a frame of the first real-time dynamic video stream depicting a current perspective of the scene. In this manner, the displacement and orientation of the first video source and the displacement and orientation of the second video source may be used to accurately depict the displacement and orientation of the surfaces and objects from both of the current perspectives in the composite representation.


The first modality may be two-dimensional or three-dimensional. Additionally, the first modality may comprise endoscopy, and may be selected from a group comprising laparoscopy, hysteroscopy, thoracoscopy, arthroscopy, colonoscopy, bronchoscopy, cystoscopy, proctosigmoidoscopy, esophagogastroduodenoscopy, and colposcopy. The second modality may be two-dimensional or three-dimensional. Additionally, the second modality may comprise one or more modalities selected from a group comprising medical ultrasonography, magnetic resonance, x-ray imaging, computed tomography, and optical wavefront imaging. As such, a plurality, comprising any number, of video sources, modalities, and real-time dynamic video streams is encompassed by embodiments of the present invention. Therefore, the first imaging modality may comprise a plurality of first imaging modalities and the second imaging modality may comprise a plurality of second imaging modalities.



FIG. 1 illustrates a schematic diagram of an exemplary real-time dynamic imaging system 10 for generating a composite real-time dynamic video stream of a scene from a first real-time dynamic video stream based on a first modality and a second real-time dynamic video stream based on a second modality, according to an embodiment of the present invention. FIG. 2 is a flow chart illustrating a process for generating the composite real-time dynamic video stream of a scene in the system 10 according to an embodiment of the present invention. Using a first real-time dynamic video stream based on a first modality and a second real-time dynamic video stream based on a second modality to generate a composite real-time dynamic video stream may provide a continuous and immediate depiction of the current state and condition of the scene, at multiple depths and with unobstructed depiction of details of the scene at those depths. For purposes of the embodiments of the present invention, “immediate” may be understood to mean 500 milliseconds or less.


Accordingly, as the scene changes, the first real-time dynamic video stream and the second real-time dynamic video stream may also change, and, as such, the composite real-time dynamic video stream may also change. The composite real-time dynamic video stream may be immediate in that, when viewed on a display, it may continuously depict the actual current state and/or condition of the scene and, therefore, may be suitable for medical procedure sites, including, but not limited to, surgical sites. By viewing a single, accurate, and current image of the region of interest, the surgeon, or other medical practitioner, may properly and effectively implement the medical procedure while viewing the composite real-time dynamic imagery.


In this regard, the system 10 of FIG. 1 may include a controller 12 which may comprise a spatial register 14 and a composite video stream generator 16. The controller 12 may be communicably coupled to a display 18, a first video source 20, and a second video source 22. The first video source 20 and the second video source 22 may each comprise an instrument through which an image of the scene may be captured and/or detected. Accordingly, the first video source 20 and the second video source 22 capture and/or detect images of the scene from their particular perspectives. The first video source 20 may have a first spatial state and the second video source 22 may have a second spatial state. In this manner, the first spatial state may relate to the perspective in which the image is captured and/or detected by the first video source 20, and the second spatial state may relate to the perspective in which the image is captured and/or detected by the second video source 22.


The first spatial state may be represented as [Fρ,Φ], and the second spatial state may be represented as [Sρ,Φ]. In FIG. 1, “ρ” may refer to three-dimensional displacement representing x, y, z positions, and “Φ” may refer to three-dimensional orientation representing roll, pitch, and yaw, with respect to both the first video source 20 and the second video source 22, as the case may be. By employing [Fρ,Φ] and [Sρ,Φ], the perspective of the first video source 20 viewing the scene and the perspective of the second video source 22 viewing the scene may be related to the three-dimensional displacement “ρ” and the three-dimensional orientation “Φ” of the first video source 20 and the second video source 22, respectively.


Accordingly, the first video source 20 and the second video source 22 capture and/or detect images of the scene from their particular perspectives. The scene may comprise a structure 24, which may be an organ within a person's body, and a region of interest 26 within the structure 24. The region of interest 26 may comprise a mass, lesion, growth, blood vessel, and/or any other condition and/or any detail within the structure 24. The region of interest 26 may or may not be detectable using visible light. In other words, the region of interest 26 may not be visible to the human eye.


The first video source 20 produces the first real-time dynamic video stream of the scene, and the second video source 22 produces the second real-time dynamic video stream of the scene. The first real-time dynamic video stream of the scene may be a two-dimensional or three-dimensional video stream. Similarly, the second real-time dynamic video stream of the scene may be a two-dimensional or three-dimensional video stream.



FIG. 2 illustrates the process for generating a composite real-time dynamic video stream of the scene that may be based on the first real-time dynamic video stream and the second real-time dynamic video stream according to an embodiment of the present invention. The controller 12 may receive the first real-time dynamic video stream of a scene based on a first modality from a first video source having a first spatial state (step 200). The first modality may, for example, comprise two-dimensional or three-dimensional endoscopy. Additionally, the first modality may be any type of endoscopy, such as laparoscopy, hysteroscopy, thoracoscopy, arthroscopy, colonoscopy, bronchoscopy, cystoscopy, proctosigmoidoscopy, esophagogastroduodenoscopy, and colposcopy. The controller 12 also may receive the second real-time dynamic video stream of the scene based on a second medical modality from a second video source having a second spatial state (step 202). The second modality may comprise one or more of two-dimensional or three-dimensional medical ultrasonography, magnetic resonance imaging, x-ray imaging, computed tomography, and optical wavefront imaging. Accordingly, the present invention is not limited to only two video sources using two modalities to produce only two real-time dynamic video streams. As such, a plurality, comprising any number, of video sources, modalities, and real-time dynamic video streams is encompassed by the present invention.


The controller 12 using the spatial register 14 may then spatially register the first real-time dynamic video stream and the second real-time dynamic video stream using the first spatial state and the second spatial state to align the first real-time dynamic video stream and the second real-time dynamic video stream to form a real-time dynamic composite representation of the scene (step 204). The controller 12 using the composite video stream generator 16 may generate a composite real-time dynamic video stream of the scene from the composite representation (step 206). The controller 12 may then send the composite real-time dynamic video stream to the display 18.
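
The sequence of steps 200 through 206 can be summarized in code. The following is a minimal sketch, assuming hypothetical source, register, compositor, and display objects; none of these names come from the patent, and the loop only illustrates the receive-register-generate-send flow.

    def run_controller(first_source, second_source, spatial_register,
                       generate_composite, display):
        """Receive two real-time streams (steps 200 and 202), spatially register
        them (step 204), generate a composite stream (step 206), and display it."""
        while True:
            frame_f, state_f = first_source.read()   # first stream + first spatial state
            frame_s, state_s = second_source.read()  # second stream + second spatial state
            if frame_f is None or frame_s is None:
                break                                # a source stopped producing frames
            composite_repr = spatial_register(frame_f, frame_s, state_f, state_s)
            display.show(generate_composite(composite_repr))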


Please note that for purposes of discussing the embodiments of the present invention, it should be understood that the first video source 20 and the second video source 22 may comprise an instrument through which an image of the scene may be captured and/or detected. In embodiments of the present invention in which an imaging device such as a camera, for example, may be fixedly attached to the instrument, the first video source 20 and the second video source 22 may be understood to comprise the imaging device in combination with the instrument. In embodiments of the present invention in which the imaging device may not be fixedly attached to the instrument and, therefore, may be located remotely from the instrument, the first video source 20 and the second video source 22 may be understood to comprise the instrument and not the imaging device.


Spatially registering the first real-time dynamic video stream and the second real-time dynamic video stream may result in a composite real-time dynamic video stream that depicts the scene from merged perspectives of the first video source 20 and the second video source 22. FIGS. 3A, 3B, and 3C illustrate graphical representations depicting exemplary perspective views from the first video source 20 and the second video source 22, and a sequence which may result in the merged perspectives of the first real-time dynamic video stream and the second real-time dynamic video stream, according to an embodiment of the present invention. FIGS. 3A, 3B, and 3C provide a graphical context for the discussion of the computation involving forming the composite representation, which results from the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream.



FIG. 3A may represent the perspective view of the first video source 20, shown as first frame 28. FIG. 3B may represent the perspective view of the second video source 22, shown as second frame 30. FIG. 3C shows the second frame 30 spatially registered with the first frame 28 which may represent a merged perspective and, accordingly, a composite representation 32, according to an embodiment of the present invention. The composite real-time dynamic video stream may be generated from the composite representation 32. Accordingly, the composite representation may provide the merged perspective of the frame of the scene depicted by the composite real-time dynamic video stream.


The first frame 28 may show the perspective view of the first video source 20 which may use a first medical modality, for example endoscopy. The first frame 28 may depict the outside of the structure 24. The perspective view of the structure 24 may fill the first frame 28. In other words, the edges of the perspective view of the structure 24 may be co-extensive and/or align with the corners and sides of the first frame 28. The second frame 30 may show the perspective view of the second video source 22 which may be detected using a second medical modality, for example medical ultrasonography. The second frame 30 may depict the region of interest 26 within the structure 24. As with the perspective view of the structure in the first frame 28, the perspective view of the region of interest 26 may fill the second frame 30. The edges of the region of interest 26 may be co-extensive and/or align with the sides of the second frame 30.


Because the perspective view of the structure 24 may fill the first frame 28, and the perspective view of the region of interest 26 may fill the second frame 30, combining the first frame 28 as provided by the first video source 20 with the second frame 30 as provided by the second video source 22 may not provide a view that accurately depicts the displacement and orientation of the region of interest 26 within the structure 24. Therefore, the first frame 28 and the second frame 30 may be synchronized such that the composite representation 32 accurately depicts the actual displacement and orientation of the region of interest 26 within the structure 24. The first frame 28 and the second frame 30 may be synchronized by determining the spatial relationship between the first video source 20 and the second video source 22 based on the first spatial state and the second spatial state. Accordingly, if the first spatial state and/or the second spatial state change, the first frame 28 and/or the second frame 30 may be synchronized based on the changed first spatial state and/or the changed second spatial state. In FIG. 3C, the first frame 28 and the second frame 30 may be synchronized by adjusting the second frame 30 to be co-extensive and/or aligned with the corners and the sides of the first frame 28. The spatial relationship may then be used to spatially register the second frame 30 with the first frame 28 to form the composite representation 32. The composite representation 32 may then depict the actual displacement and orientation of the region of interest 26 within the structure 24 synchronously with respect to the first real-time dynamic video stream and the second real-time dynamic video stream.


Spatially registering the first real-time dynamic video stream and the second real-time dynamic video stream may be performed using calculations involving the first spatial state of the first video source 20 and the second spatial state of the second video source 22. The first spatial state and the second spatial state each comprise six degrees of freedom. The six degrees of freedom may comprise a displacement representing x, y, z positions, which is collectively referred to herein as “ρ,” and an orientation representing roll, pitch, and yaw, which is collectively referred to herein as “Φ.” Accordingly, the first spatial state may be represented as [Fρ,Φ], and the second spatial state may be represented as [Sρ,Φ]. The first spatial state and the second spatial state may be used to determine the spatial relationship between the first video source 20 and the second video source 22, which may be represented as [Cρ,Φ].
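
As a concrete illustration, a six-degree-of-freedom spatial state might be modeled as below. This is a minimal sketch; the class and field names are illustrative and do not come from the patent.

    from dataclasses import dataclass

    @dataclass
    class SpatialState:
        """Displacement "ρ" (x, y, z) plus orientation "Φ" (roll, pitch, yaw)."""
        x: float      # displacement along x
        y: float      # displacement along y
        z: float      # displacement along z
        roll: float   # rotation about the longitudinal axis, in radians
        pitch: float  # rotation about the lateral axis, in radians
        yaw: float    # rotation about the vertical axis, in radians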


The first spatial state [Fρ,Φ] may be considered to be a transformation between the coordinate system of the first video source 20 and some global coordinate system G, and the second spatial state [Sρ,Φ] may be considered to be a transformation between the coordinate system of the second video source 22 and the same global coordinate system G. The spatial relationship [Cρ,Φ], then, may be considered as a transformation from the coordinate system of the second video source 22, to the coordinate system of the first video source 20.


As transforms, [Cρ,Φ], [Fρ,Φ], and [Sρ,Φ] may each be represented in one of three equivalent forms:

    • 1) Three-dimensional displacement “ρ” as [tx, ty, tz] and three-dimensional orientation “Φ” as [roll, pitch, yaw]; or
    • 2) Three-dimensional displacement “ρ” as [tx, ty, tz] and three-dimensional orientation “Φ” as a unit quaternion [qx, qy, qz, qw]; or
    • 3) A 4-by-4 (16 element) matrix.


Form 1 has the advantage of being easiest to use. Form 2 has the advantage of being subject to less round-off error during computations; for example, it avoids gimbal lock, a mathematical degeneracy problem. Form 3 is amenable to modern computer-graphics hardware, which has dedicated machinery for composing, transmitting, and computing 4-by-4 matrices.
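
To make the three forms concrete, the sketch below converts Form 1 (displacement plus roll, pitch, and yaw) to Form 2 (a unit quaternion) and then to Form 3 (a 4-by-4 homogeneous matrix). The ZYX rotation order and the column-vector matrix convention are assumptions; the patent does not fix a convention.

    import numpy as np

    def euler_to_quaternion(roll, pitch, yaw):
        """Form 1 orientation -> Form 2 unit quaternion [qx, qy, qz, qw]."""
        cr, sr = np.cos(roll / 2), np.sin(roll / 2)
        cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
        cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
        return np.array([
            sr * cp * cy - cr * sp * sy,   # qx
            cr * sp * cy + sr * cp * sy,   # qy
            cr * cp * sy - sr * sp * cy,   # qz
            cr * cp * cy + sr * sp * sy,   # qw
        ])

    def to_matrix(t, q):
        """Form 1 displacement t = [tx, ty, tz] plus Form 2 quaternion -> Form 3 matrix."""
        qx, qy, qz, qw = q
        m = np.eye(4)
        m[:3, :3] = [
            [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
            [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
            [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
        ]
        m[:3, 3] = t   # displacement in the last column (column-vector convention)
        return m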


In some embodiments, where the first video source 20 and the second video source 22 do not move with respect to each other, the spatial relationship [Cρ,Φ] between the first video source 20 and the second video source 22 is constant and may be measured directly. Alternatively, in embodiments where the first video source 20 and the second video source 22 move relative to each other, the spatial relationship between the first video source 20 and the second video source 22 may be continually measured by a position detecting system. The position detecting system may measure [Cρ,Φ] directly, or it may measure and report the first spatial state [Fρ,Φ] and the second spatial state [Sρ,Φ]. In the latter case, [Cρ,Φ] can be computed from [Fρ,Φ] and [Sρ,Φ] as follows:

[Cρ,Φ] = [Fρ,Φ] * [Sρ,Φ]⁻¹ (indirect computation).
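
In Form 3, the indirect computation is a single matrix inverse and multiply. A minimal sketch, assuming the two tracked poses are given as 4-by-4 homogeneous matrices (for example, as built by the to_matrix sketch above):

    import numpy as np

    def indirect_spatial_relationship(F, S):
        """[Cρ,Φ] = [Fρ,Φ] * [Sρ,Φ]⁻¹, following the formula above."""
        return F @ np.linalg.inv(S)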


The three-dimensional positions of the corner points of the second frame 30, relative to the center of the second frame 30, are constants which may be included in the specification sheets of the second video source 22. There are four (4) such points if the second video source 22 is two-dimensional, and eight (8) such points if the second video source 22 is three-dimensional. For each such corner point, the three-dimensional position relative to the first video source 20 may be computed using the formula:

cs = cf * [Cρ,Φ],


where cf is the second frame 30 corner point relative to the second video source 22, and cs is the second frame 30 corner point relative to the first video source 20. If either the first video source 20 or the second video source 22 comprises a video camera, then the field-of-view of the video camera, and the frame, may be given by the manufacturer. The two-dimensional coordinates of the corner points (sx, sy) of the second frame 30 in the first frame 28 may be computed as follows:








csp = (cs * [P]),

where

    P = [ cos(f)   0        0   0
          0        cos(f)   0   0
          0        0        0   0
          0        0        1   0 ]

and f = the field of view of the first video source 20. csp is a four (4) element homogeneous coordinate consisting of [xcsp, ycsp, zcsp, hcsp]. The two-dimensional coordinates are finally computed as:


sx = xcsp / hcsp; and

sy = ycsp / hcsp.
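
Taken together, the corner-point mapping above is a rigid transform followed by a perspective divide. The sketch below assumes NumPy and applies both matrices to column vectors; the patent's notation leaves the row-versus-column convention open, and the matrices are ordered here so that the depth term lands in the homogeneous coordinate hcsp.

    import numpy as np

    def project_corner(cf, C, f):
        """Map a second-frame corner cf = [x, y, z] (second-source coordinates)
        into first-frame 2D coordinates (sx, sy), given the 4x4 spatial
        relationship C = [Cρ,Φ] and the first source's field of view f."""
        cs = C @ np.append(cf, 1.0)        # cs = cf * [Cρ,Φ], column-vector form
        P = np.array([
            [np.cos(f), 0.0,       0.0, 0.0],
            [0.0,       np.cos(f), 0.0, 0.0],
            [0.0,       0.0,       0.0, 0.0],
            [0.0,       0.0,       1.0, 0.0],
        ])
        csp = P @ cs                       # homogeneous [xcsp, ycsp, zcsp, hcsp]
        return csp[0] / csp[3], csp[1] / csp[3]   # sx, sy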


By knowing sx and sy for all the corners of the second frame 30 relative to the first frame 28, standard compositing hardware may be used to overlay and, thereby, spatially register the first real-time dynamic video stream and the second real-time dynamic video stream to generate the composite real-time dynamic video stream, as sketched below. As such, the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream may be performed using information other than an anatomical characteristic and/or a position of the subject (i.e., a person's body), the world, or some other reference coordinate system. Accordingly, the composite real-time dynamic video stream may be generated independently of the position or condition of the subject, the location and/or existence of anatomical features and/or landmarks, and/or the condition or state of the medical procedure site.
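
For illustration, the overlay that standard compositing hardware performs can be emulated in software: the four projected corners define a homography that warps the second frame into the first. A sketch assuming OpenCV is available; the corner ordering, blending weight, and names are illustrative.

    import cv2
    import numpy as np

    def overlay_second_frame(first_img, second_img, corners_sxsy, alpha=0.5):
        """Warp second_img so its corners land on corners_sxsy (pixel
        coordinates in first_img, same order as src below) and blend."""
        h, w = second_img.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])  # second-frame corners
        dst = np.float32(corners_sxsy)                      # projected (sx, sy) corners
        H = cv2.getPerspectiveTransform(src, dst)
        size = (first_img.shape[1], first_img.shape[0])
        warped = cv2.warpPerspective(second_img, H, size)
        mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
        blended = cv2.addWeighted(first_img, 1 - alpha, warped, alpha, 0)
        out = first_img.copy()
        out[mask > 0] = blended[mask > 0]   # overlay only where the second frame maps
        return out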


The determination whether to directly or indirectly compute the spatial relationship between the first video source 20 and the second video source 22 may depend on the arrangement of components of the system and the method used to establish the first spatial state of the first video source 20 and the second spatial state of the second video source 22.



FIGS. 4A and 4B are schematic diagrams illustrating alternative exemplary arrangements of components in which the direct computation or the indirect computation for determining the spatial relationship between the first video source 20 and the second video source 22 may be used.



FIG. 4A illustrates an exemplary arrangement in which the direct computation of the spatial relationship between the first video source 20 and the second video source 22 may be used, according to an embodiment of the present invention. An articulated mechanical arm 34 may connect the first video source 20 and the second video source 22. The mechanical arm 34 may be part of and/or extend to an instrument or other structure, which supports and/or allows the use of the mechanical arm 34, and thereby the first video source 20 and the second video source 22. The mechanical arm 34 may provide a rigid connection between the first video source 20 and the second video source 22. In such a case, because the mechanical arm may be rigid, the first spatial state of the first video source 20 and the second spatial state of the second video source 22 may be fixed.


Accordingly, because the first spatial state and the second spatial state may be fixed, the first spatial state and the second spatial state may be programmed or recorded in the controller 12. The controller 12 may then directly compute the spatial relationship between the first video source 20 and the second video source 22 and, therefrom, the composite representation 32. As discussed above, the composite representation 32 represents the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream. The controller 12 may then generate the composite real-time dynamic video stream from the composite representation 32.


Alternatively, the mechanical arm 34 may comprise joints 34A, 34B, 34C connecting rigid portions or links 34D, 34E of the mechanical arm 34. The joints 34A, 34B, 34C may include rotary encoders for measuring and encoding the angle of each of the joints 34A, 34B, 34C. By measuring the angles of the joints 34A, 34B, 34C and knowing the lengths of the links 34D, 34E, the spatial relationship [Cρ,Φ] of the second video source 22 relative to the first video source 20 may be determined, as sketched below. The controller 12 may receive [Cρ,Φ] and, therefrom, compute the composite representation 32. As discussed above, the composite representation 32 represents the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream. The controller 12 may generate the composite real-time dynamic video stream from the composite representation. The mechanical arm 34 may be a FaroArm™ mechanical arm or any similar component that provides the functionality described above.
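
As an illustration of the encoder-based alternative, the sketch below chains per-joint transforms for an assumed planar two-link arm; a real articulated arm would chain full three-dimensional joint transforms in the same way to recover [Cρ,Φ].

    import numpy as np

    def link_transform(angle, length):
        """Homogeneous transform of one rotary joint followed by one rigid link
        (rotation about z, then translation along the rotated x axis)."""
        c, s = np.cos(angle), np.sin(angle)
        return np.array([
            [c, -s, 0.0, length * c],
            [s,  c, 0.0, length * s],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0],
        ])

    def arm_spatial_relationship(angles, lengths):
        """Chain the joint/link transforms: [Cρ,Φ] = T1 * T2 * ... * Tn."""
        C = np.eye(4)
        for a, l in zip(angles, lengths):
            C = C @ link_transform(a, l)
        return C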



FIG. 4B illustrates an exemplary arrangement where the indirect computation of the spatial relationship between the first video source 20 and the second video source 22 may be used, according to an embodiment of the present invention. In FIG. 4B, an intermediary in the form of a position detecting system comprising a first transmitter 36, a second transmitter 38, and an infrared detection system 40 is shown. The first transmitter 36 and the second transmitter 38 may be in the form of LEDs. The infrared detection system 40 may comprise one or more infrared detectors 40A, 40B, 40C. The infrared detectors 40A, 40B, 40C may be located or positioned to be in lines-of-sight of the first transmitter 36 and the second transmitter 38. The lines-of-sight are shown in FIG. 4B by lines emanating from the first transmitter 36 and the second transmitter 38.


The infrared detection system 40 may determine the first spatial state of the first video source 20 and the second spatial state of the second video source 22 by detecting the light emitted from the first transmitter 36 and the second transmitter 38, respectively. The infrared detection system 40 may also determine the intermediary reference related to the position of the infrared detection system 40. The infrared detection system 40 may then send the first spatial state of the first video source 20, represented as [Fρ,Φ], and the second spatial state of the second video source 22, represented as [Sρ,Φ], to the controller 12. The controller 12 may receive the first spatial state and the second spatial state, and may compute the spatial relationship [Cρ,Φ] between the first video source 20 and the second video source 22 using the indirect computation and, therefrom, the composite representation 32. As discussed above, the composite representation 32 represents the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream. The controller 12 may then generate the composite real-time dynamic video stream from the composite representation 32.


The infrared detection system 40 may be any type of optoelectronic system, for example the Northern Digital Instrument Optotrak™. Alternatively, other position detecting systems may be used, such as magnetic, GPS+compass, inertial, acoustic, or any other equipment for measuring spatial relationship, or relative or absolute displacement and orientation.



FIGS. 5 and 6 are schematic diagrams illustrating exemplary systems in which the exemplary arrangements discussed with respect to FIGS. 4A and 4B may be implemented in medical imaging systems based on the system 10 shown in FIG. 1, according to an embodiment of the present invention. FIGS. 5 and 6 each illustrate systems for generating composite real-time dynamic video streams using medical modalities comprising ultrasonography and endoscopy. Accordingly, FIGS. 5 and 6 comprise additional components and detail beyond those shown in system 10 to discuss the present invention with respect to ultrasonography and endoscopy. However, it should be understood that the present invention is not limited to any particular modality, including any particular medical modality.



FIG. 5 is a schematic diagram illustrating a system 10′ comprising an endoscope 42 and an ultrasound transducer 44 combined in a compound minimally-invasive instrument 48, according to an embodiment of the present invention. FIG. 5 is provided to illustrate an exemplary system in which the direct computation of the spatial relationship between the first video source 20 and the second video source 22 may be used. The compound minimally-invasive instrument 48 may be used to provide images of the scene based on multiple medical modalities using a single minimally-invasive instrument.


The compound minimally-invasive instrument 48 may penetrate into the body 46 of the subject, for example the patient, to align with the structure 24 and the region of interest 26 within the structure 24. In this embodiment, the structure 24 may be an organ within the body 46, and the region of interest 26 may be a growth or lesion within the structure 24. A surgeon may use the compound minimally-invasive instrument 48 to provide both an endoscopic and ultrasonogramic composite view to accurately target the region of interest 26 for any particular treatment and/or procedure.


The endoscope 42 may be connected, either optically or in some other communicable manner to a first video camera 50. Accordingly, the first video source 20 may be understood to comprise the endoscope 42 and the first video camera 50. The first video camera 50 may capture an image of the structure 24 through the endoscope 42. From the image captured by the first video camera 50, the first video camera 50 may produce a first real-time dynamic video stream of the image and send the first real-time dynamic video stream to the controller 12.


The ultrasound transducer 44 may be communicably connected to a second video camera 52. Accordingly, the second video source 22 may be understood to comprise the ultrasound transducer 44 and the second video camera 52. The ultrasound transducer 44 may detect an image of the region of interest 26 within the structure 24 and communicate the image detected to the second video camera 52. The second video camera 52 may produce a second real-time dynamic video stream representing the image detected by the ultrasound transducer 44, and then send the second real-time dynamic video stream to the controller 12.


Because the compound minimally-invasive instrument 48 comprises both the endoscope 42 and the ultrasound transducer 44, the first spatial state and the second spatial state may be fixed with respect to each other, and, accordingly, the spatial relationship of the first video source 20 and the second video source 22 may be determined by the direct computation discussed above with reference to FIG. 4A. This may be so even if the first video camera 50 and the second video camera 52, as shown in FIG. 5, are located remotely from the compound minimally-invasive instrument 48. In other words, the first video camera 50 and the second video camera 52 may not be included within the compound minimally-invasive instrument 48. As discussed above, the first spatial state and the second spatial state may be determined relative to a particular perspective of the image of the scene that is captured and/or detected. As such, the first spatial state may be based on the displacement and orientation of the endoscope 42, while the second spatial state may be based on the displacement and orientation of the ultrasound transducer 44.


The first spatial state and the second spatial state may be received by the controller 12. The controller 12 may then determine the spatial relationship between the first video source 20, and the second video source 22 using the direct computation discussed above. Using the spatial relationship, the first real-time dynamic video stream and the second real-time dynamic video stream may be spatially registered to generate the composite representation 32. The composite real-time dynamic video stream may be generated from the composite representation 32. The controller 12 may then send the composite real-time dynamic video stream to the display 18.



FIG. 6 is a schematic diagram illustrating a system 10″ comprising a separate endoscope 42 and an ultrasound transducer 44, according to an embodiment of the present invention; in this embodiment, the endoscope 42 comprises a laparoscope, and the ultrasound transducer 44 comprises a laparoscopic ultrasound transducer. FIG. 6 is provided to illustrate an exemplary system in which the indirect computation of the spatial relationship between the first video source 20 and the second video source 22 may be used.


Accordingly, in FIG. 6, instead of one minimally-invasive instrument penetrating the body 46, two minimally-invasive instruments are used. The endoscope 42 may align with the structure 24. The ultrasound transducer 44 may extend further into the body 46 and may contact the structure 24 at a point proximal to the region of interest 26. In a similar manner to the system 10′, the structure 24 may be an organ within the body 46, and the region of interest 26 may be a blood vessel, growth, or lesion within the structure 24. A surgeon may use the endoscope 42 and the ultrasound transducer 44 to provide a composite view of the structure 24 and the region of interest 26 to accurately target the region of interest 26 on the structure 24 for any particular treatment and/or procedure.


To provide one of the images of the composite view for the surgeon, the endoscope 42 may be connected, either optically or in some other communicable manner, to a first video camera 50. Accordingly, the first video source 20 may be understood to comprise the endoscope 42 and the first video camera 50. The first video camera 50 may capture an image of the structure 24 through the endoscope 42. From the image captured by the first video camera 50, the first video camera 50 may produce a first real-time dynamic video stream of the image and send the first real-time dynamic video stream to the controller 12.


Additionally, to provide another image of the composite view for the surgeon, the ultrasound transducer 44 may be communicably connected to a second video camera 52. Accordingly, the second video source 22 may be understood to comprise the ultrasound transducer 44 and the second video camera 52. The ultrasound transducer 44 may detect an image of the region of interest 26 within the structure 24 and communicate the image detected to the second video camera 52. The second video camera 52 may produce a second real-time dynamic video stream representing the image detected by the ultrasound transducer 44 and then send the second real-time dynamic video stream to the controller 12.


Because the endoscope 42 and the ultrasound transducer 44 are separate, the first spatial state of the first video source 20 and the second spatial state of the second video source 22 may be determined using the indirect computation discussed above with reference to FIG. 4B. As discussed above, the indirect computation involves the use of an intermediary, such as a position detecting system. Accordingly, in system 10″, an intermediary comprising a first transmitter 36, a second transmitter 38, and an infrared detection system 40 may be included. The first transmitter 36 may be located in association with the endoscope 42, and the second transmitter 38 may be located in association with the ultrasound transducer 44. Associating the first transmitter 36 with the endoscope 42 and the second transmitter 38 with the ultrasound transducer 44 may allow the first video camera 50 to be located remotely from the endoscope 42, and/or the second video camera 52 to be located remotely from the ultrasound transducer 44.


As discussed above with respect to the system 10′, the first spatial state and the second spatial state may be determined with respect to the particular perspectives of the image of the scene that may be captured and/or detected by the first video source 20 and the second video source 22, respectively. As such, the first spatial state may be based on the orientation and displacement of the endoscope 42, while the second spatial state may be based on the displacement and orientation of the ultrasound transducer 44. Additionally, in system 10′ of FIG. 5, the endoscope 42 and the ultrasound transducer 44 are shown in a co-located arrangement in the compound minimally-invasive instrument 48. As such, the first spatial state of the first video source 20 and the second spatial state of the second video source 22, in addition to being fixed, may also be very close relationally. Conversely, in the system 10″, the orientation and displacement of the endoscope 42 and the ultrasound transducer 44 may be markedly different, as shown in FIG. 6, which may result in the first spatial state of the first video source 20 and the second spatial state of the second video source 22 not being close relationally.


The infrared detection system 40 may determine the first spatial state of the first video source 20 and the second spatial state of the second video source 22 by detecting the light emitted from the first transmitter 36 and the second transmitter 38, respectively. The infrared detection system 40 may also determine the intermediary reference related to the position of the infrared detection system 40. The infrared detection system 40 may then send the first spatial state, the second spatial state, and the intermediary reference to the controller 12. The controller 12 may receive the first spatial state, the second spatial state, and the intermediary reference and may compute the spatial relationship between the first video source 20 and the second video source 22 using the indirect computation and, therefrom, the composite representation 32. As discussed above, the composite representation 32 represents the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream. The controller 12 may then generate the composite real-time dynamic video stream from the composite representation 32.


For purposes of the present invention, the controller 12 may be understood to comprise devices, components, and systems not shown in system 10′ and system 10″ in FIGS. 5 and 6. For example, the controller 12 may be understood to comprise an ultrasound scanner, which may be a Sonosite MicroMaxx or similar scanner. Also, the controller 12 may comprise a video capture board, which may be a Foresight Imaging Accustream 170 or similar board. An exemplary video camera suitable for use in the system 10′ and system 10″ of FIGS. 5 and 6 is the Stryker 988, which has a digital IEEE 1394 output, although other digital and analog cameras may be used. The endoscope may be any single or dual optical path laparoscope, or similar endoscope.



FIGS. 7A, 7B, and 7C are photographic representations illustrating a first frame 54 from the first real-time dynamic video stream, a second frame 56 from the second real-time dynamic video stream, and a composite frame 58 of the composite real-time dynamic video stream generated from the spatial registration of the first real-time dynamic video stream and the second real-time dynamic video stream, according to an embodiment of the present invention. FIGS. 7A, 7B, and 7C are provided to further illustrate an embodiment of the present invention with reference to actual medical modalities, and the manner in which the composite real-time dynamic video stream based on multiple modalities may appear to a surgeon viewing a display.


In FIG. 7A, the first real-time dynamic video stream may be produced based on an endoscopic modality. In FIG. 7B, the second real-time dynamic video stream may be produced based on a medical ultrasonographic modality. In FIG. 7A, the first real-time dynamic video stream shows the structure 24, in the form of an organ of the human body, being contacted by an ultrasound transducer 44. FIG. 7B shows the second real-time dynamic video stream produced using the ultrasound transducer 44 shown in FIG. 7A. In FIG. 7B, the region of interest 26, which appears as blood vessels within the structure 24, is shown. In FIG. 7C, the composite real-time dynamic video stream generated shows the first real-time dynamic video stream and the second real-time dynamic video stream spatially registered. The second real-time dynamic video stream is merged with the first real-time dynamic video stream in appropriate alignment. As such, the second real-time dynamic video stream is displaced and oriented in a manner that reflects the actual displacement and orientation of the region of interest 26 within the structure 24. In other words, the region of interest 26 is shown in the composite real-time dynamic video stream as it would appear if the surface of the structure 24 were cut away to make the region of interest 26 visible.



FIG. 8 illustrates a diagrammatic representation of a controller 12 adapted to execute the functioning and/or processing described herein. In the exemplary form, the controller 12 may comprise a computer system 60, within which is a set of instructions for causing the controller 12 to perform any one or more of the methodologies discussed herein. The controller 12 may be connected (e.g., networked) to other controllers or devices in a local area network (LAN), an intranet, an extranet, or the Internet. The controller 12 may operate in a client-server network environment, or as a peer controller in a peer-to-peer (or distributed) network environment. While only a single controller is illustrated, the controller 12 shall also be taken to include any collection of controllers and/or devices that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The controller 12 may be a server, a personal computer, a mobile device, or any other device.


The exemplary computer system 60 includes a processor 62, a main memory 64 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), and a static memory 66 (e.g., flash memory, static random access memory (SRAM), etc.), which may communicate with each other via a bus 68. Alternatively, the processor 62 may be connected to the main memory 64 and/or the static memory 66 directly or via some other connectivity means.


The processor 62 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More particularly, the processor 62 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processor 62 is configured to execute processing logic 70 for performing the operations and steps discussed herein.


The computer system 60 may further include a network interface device 72. It may also include an input means 74 to receive input (e.g., the first real-time dynamic video stream, the second real-time dynamic video stream, the first spatial state, the second spatial state, and the intermediary reference) and selections to be communicated to the processor 62 when executing instructions; the input means 74 may include, but is not limited to, an alphanumeric input device (e.g., a keyboard) and/or a cursor control device (e.g., a mouse). It may also include an output means 76, including but not limited to the display 18 (e.g., a head-mounted display, a liquid crystal display (LCD), or a cathode ray tube (CRT)).


The computer system 60 may or may not include a data storage device having a computer-readable medium 78 on which is stored one or more sets of instructions 80 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 80 may also reside, completely or at least partially, within the main memory 64 and/or within the processor 62 during execution thereof by the computer system 60, with the main memory 64 and the processor 62 also constituting computer-readable media. The instructions 80 may further be transmitted or received over a network via the network interface device 72.


While the computer-readable medium 78 is shown in an exemplary embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the controller and that causes the controller to perform any one or more of the methodologies of the present invention. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.


Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present invention. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.

Claims
  • 1. A method for image guidance comprising: obtaining a first real-time imaging stream of a first imaging modality from a first medical device, the first real-time imaging stream depicting a scene; obtaining a second real-time imaging stream of a second imaging modality from a second medical device, the second real-time imaging stream depicting a portion of the scene, wherein the second imaging modality is different from the first imaging modality; determining a spatial relationship between the first medical device and the second medical device; generating a composite image of the scene based at least in part on the first real-time imaging stream, the second real-time imaging stream, and the spatial relationship between the first medical device and the second medical device, the composite image depicting the second real-time imaging stream spatially registered with the first real-time imaging stream; and causing a display to display the composite image of the scene.
  • 2. The method of claim 1, further comprising spatially registering the first real-time imaging stream and the second real-time imaging stream, wherein said generating the composite image of the scene is based at least in part on the spatial relationship between the first real-time imaging stream and the second real-time imaging stream.
  • 3. The method of claim 1, further comprising determining a relative position and orientation of the first real-time imaging stream with respect to the second real-time imaging stream, wherein said generating the composite image of the scene is based at least in part on the relative position and orientation of the first real-time imaging stream with respect to the second real-time imaging stream.
  • 4. The method of claim 1, further comprising determining a relationship between a pose of the first medical device with respect to a first coordinate system and a pose of the second medical device with respect to a second coordinate system, wherein said generating the composite image of the scene is based at least in part on the relationship between the pose of the first medical device with respect to the first coordinate system and the pose of the second medical device with respect to the second coordinate system.
  • 5. The method of claim 1, wherein the first imaging modality comprises a three-dimensional modality and the second imaging modality comprises a two-dimensional modality.
  • 6. The method of claim 1, wherein the composite image of the scene depicts at least a portion of the second real-time imaging stream within at least a portion of the first real-time imaging stream.
  • 7. The method of claim 1, wherein the composite image of the scene depicts at least a portion of the first real-time imaging stream aligned with at least a portion of the second real-time imaging stream.
  • 8. The method of claim 1, wherein the causing the display to display the composite image of the scene comprises causing the display to display the composite image of the scene in a virtual 3D space.
  • 9. The method of claim 1, wherein at least one of the first real-time imaging stream or the second real-time imaging stream corresponds to a real-time dynamic video stream of the scene.
  • 10. A system for medical procedure image guidance comprising an image guidance system having one or more processors, the one or more processors configured to: obtain a first real-time imaging stream of a first imaging modality from a first medical device, the first real-time imaging stream depicting a scene; obtain a second real-time imaging stream of a second imaging modality from a second medical device, the second real-time imaging stream depicting a portion of the scene, wherein the second imaging modality is different from the first imaging modality; determine a spatial relationship between the first medical device and the second medical device; generate a composite image of the scene based at least in part on the first real-time imaging stream, the second real-time imaging stream, and the spatial relationship between the first medical device and the second medical device, the composite image depicting the second real-time imaging stream spatially registered with the first real-time imaging stream; and cause a display to display the composite image of the scene.
  • 11. The system of claim 10, wherein the one or more processors are further configured to spatially register the first real-time imaging stream and the second real-time imaging stream.
  • 12. The system of claim 10, wherein the one or more processors are further configured to determine a relative position and orientation of the first real-time imaging stream with respect to the second real-time imaging stream.
  • 13. The system of claim 10, wherein the one or more processors are further configured to determine a relationship between a pose of the first medical device with respect to a first coordinate system and a pose of the second medical device with respect to a second coordinate system.
  • 14. The system of claim 10, wherein the first imaging modality comprises a three-dimensional modality and the second imaging modality comprises a two-dimensional modality.
  • 15. The system of claim 10, wherein the composite image of the scene depicts at least a portion of the second real-time imaging stream within at least a portion of the first real-time imaging stream.
  • 16. A computer-readable, non-transitory storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to: obtain a first real-time imaging stream of a first imaging modality from a first medical device, the first real-time imaging stream depicting a scene; obtain a second real-time imaging stream of a second imaging modality from a second medical device, the second real-time imaging stream depicting a portion of the scene, wherein the second imaging modality is different from the first imaging modality; determine a spatial relationship between the first medical device and the second medical device; generate a composite image of the scene based at least in part on the first real-time imaging stream, the second real-time imaging stream, and the spatial relationship between the first medical device and the second medical device, the composite image depicting the second real-time imaging stream spatially registered with the first real-time imaging stream; and cause a display to display the composite image of the scene.
  • 17. The computer-readable, non-transitory storage medium of claim 16, wherein the first imaging modality comprises a three-dimensional modality and the second imaging modality comprises a two-dimensional modality.
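
For further illustration only, the following Python sketch outlines the image-guidance loop recited in claim 1 above: obtaining the two real-time imaging streams, determining the spatial relationship between the two medical devices, generating the composite image, and causing a display to display it. The first_device, second_device, tracker, and compose objects are hypothetical placeholders assumed for this sketch and are not an API from the disclosure; compose could, for example, be implemented along the lines of the composite_frames() sketch accompanying FIGS. 7A, 7B, and 7C above.

```python
# Hypothetical sketch only; the device, tracker, and compose objects are
# placeholders, not part of the disclosure.
import cv2

def guidance_loop(first_device, second_device, tracker, compose):
    while True:
        frame_a = first_device.read()    # first real-time imaging stream
        frame_b = second_device.read()   # second real-time imaging stream
        # Spatial relationship between the first and second medical devices.
        relation = tracker.relative_pose(first_device, second_device)
        # Composite image with the second stream spatially registered
        # with the first stream.
        composite = compose(frame_a, frame_b, relation)
        cv2.imshow("composite image", composite)  # display the composite
        if cv2.waitKey(1) == 27:         # stop on Esc
            break
    cv2.destroyAllWindows()
```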
RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/177,894, filed Nov. 1, 2018, entitled “System and Method of Providing Real-Time Dynamic Imagery of a Medical Procedure Site Using Multiple Modalities,” which is a continuation of U.S. patent application Ser. No. 15/598,616, filed May 18, 2017, entitled “System and Method of Providing Real-Time Dynamic Imagery of a Medical Procedure Site Using Multiple Modalities,” which is a continuation of U.S. patent application Ser. No. 13/936,951, filed Jul. 8, 2013, entitled “System and Method of Providing Real-Time Dynamic Imagery of a Medical Procedure Site Using Multiple Modalities,” which is a continuation of U.S. patent application Ser. No. 12/760,274, filed Apr. 14, 2010, entitled “System and Method of Providing Real-Time Dynamic Imagery of a Medical Procedure Site Using Multiple Modalities,” which is a continuation of U.S. patent application Ser. No. 11/833,134, filed Aug. 2, 2007, entitled “System and Method of Providing Real-Time Dynamic Imagery of a Medical Procedure Site Using Multiple Modalities,” which claims priority benefit to U.S. Provisional Application Ser. No. 60/834,932, filed Aug. 2, 2006, entitled “Spatially Registered Ultrasound and Endoscopic Imagery,” and U.S. Provisional Application Ser. No. 60/856,670, filed Nov. 6, 2006, entitled “Multiple Depth-Reconstructive Endoscopies Combined With Other Medical Imaging Modalities, And System,” the disclosure of each of which is hereby incorporated by reference in its entirety for all purposes.

Related Publications (1)
Number Date Country
20210027418 A1 Jan 2021 US
Provisional Applications (2)
Number Date Country
60856670 Nov 2006 US
60834932 Aug 2006 US
Continuations (5)
Number Date Country
Parent 16177894 Nov 2018 US
Child 16920560 US
Parent 15598616 May 2017 US
Child 16177894 US
Parent 13936951 Jul 2013 US
Child 15598616 US
Parent 12760274 Apr 2010 US
Child 13936951 US
Parent 11833134 Aug 2007 US
Child 12760274 US