The present invention relates to medical systems, and in particular, but not exclusively, to path visualization.
In image-guided surgery (IGS), a medical practitioner uses instruments that are tracked in real time within the body, so that positions and/or orientations of the instruments can be presented on images of a patient's anatomy during the surgical procedure. In many IGS scenarios an image of the patient is prepared in one modality, such as magnetic resonance imaging (MRI) or computerized tomography (CT), and the instrument tracking uses a different modality, such as electromagnetic tracking. In order for the tracking to be effective, frames of reference of the two modalities are registered with each other.
US Patent Publication 2011/0236868 of Bronstein, et al., describes a method of performing computerized simulations of image-guided procedures. The method may comprise receiving medical image data of a specific patient. A patient-specific digital image-based model of an anatomical structure of the specific patient may be generated based on the medical image data. A computerized simulation of an image-guided procedure may be performed using the digital image-based model. Medical image data, the image-based model and a simulated medical tool model may be simultaneously displayed.
US Patent Publication 2017/0151027 of Walker, et al., describes systems and methods for driving a flexible medical instrument to a target in an anatomical space with robotic assistance. The flexible instrument may have a tracking sensor embedded therein. An associated robotic control system may be provided, which is configured to register the flexible instrument to an anatomical image using data from the tracking sensor and identify one or more movements suitable for navigating the instrument towards an identified target. In some embodiments, the robotic control system drives or assists in driving the flexible instrument to the target.
US Patent Publication 2016/0174874 of Averbuch, et al., describes a registration method whereby a sensor-based approach is used to establish initial registration and whereby upon the commencement of navigating an endoscope, image-based registration methods are used in order to more accurately maintain the registration between the endoscope location and previously-acquired images. A six-degree-of-freedom location sensor is placed on the probe in order to reduce the number of previously-acquired images that must be compared to a real-time image obtained from the endoscope.
US Patent Publication 2005/0228250 of Bitter, et al., describes a user interface including an image area that is divided into a plurality of views for viewing corresponding 2-dimensional and 3-dimensional images of an anatomical region. Tool control panes can be simultaneously opened and accessible. The segmentation pane enables automatic segmentation of components of a displayed image within a user-specified intensity range or based on a predetermined intensity.
US Patent Publication 2007/0276214 of Dachille, et al., describes an imaging system for automated segmentation and visualization of medical images and includes an image processing module for automatically processing image data using a set of directives to identify a target object in the image data and process the image data according to a specified protocol, a rendering module for automatically generating one or more images of the target object based on one or more of the directives and a digital archive for storing the one or more generated images. The image data may be DICOM-formatted image data, wherein the image processing module extracts and processes meta-data in DICOM fields of the image data to identify the target object. The image processing module directs a segmentation module to segment the target object using processing parameters specified by one or more of the directives.
U.S. Pat. No. 5,371,778 to Yanof, et al., describes a CT scanner that non-invasively examines a volumetric region of a subject and generates volumetric image data indicative thereof. An object memory stores the data values corresponding to each voxel of the volume region. An affine transform algorithm operates on the visible faces of the volumetric region to translate the faces from object space to projections of the faces onto a viewing plane in image space. An operator control console includes operator controls for selecting an angular orientation of a projection image of the volumetric region relative to a viewing plane, i.e. a plane of the video display. A cursor positioning trackball inputs i- and j-coordinate locations in image space which are converted into a cursor crosshair display on the projection image. A depth dimension k between the viewing plane and the volumetric region in a viewing direction perpendicular to the viewing plane is determined. The (i,j,k) image space location of the cursor is operated upon by the reverse of the selected transform to identify a corresponding (x,y,z) cursor coordinate in object space. The cursor coordinate in object space is translated into corresponding addresses of the object memory for transverse, coronal, and sagittal planes through the volumetric region.
U.S. Pat. No. 10,188,465 to Gliner, et al., describes a method including receiving a computerized tomography scan of at least a part of a body of a patient, and identifying voxels of the scan that correspond to regions in the body that are traversable by a probe inserted therein. The method also includes displaying the scan on a screen and marking thereon selected start and termination points for the probe. A processor finds a path from the start point to the termination point consisting of a connected set of the identified voxels. The processor also uses the scan to generate a representation of an external surface of the body and displays the representation on the screen. The processor then renders an area of the external surface surrounding the path locally transparent in the displayed representation, so as to make visible on the screen an internal structure of the body in a vicinity of the path.
US Patent Publication 2018/0303550 of Altmann, et al., describes a method for visualization includes registering, within a common frame of reference, a position tracking system and a three-dimensional (3D) computerized tomography (CT) image of at least a part of a body of a patient. A location and orientation of at least one virtual camera are specified within the common frame of reference. Coordinates of a medical tool moving within a passage in the body are tracked using the position tracking system. A virtual endoscopic image, based on the 3D CT image, of the passage in the body is rendered and displayed from the specified location and orientation, including an animated representation of the medical tool positioned in the virtual endoscopic image in accordance with the tracked coordinates.
There is provided in accordance with an embodiment of the present disclosure, a medical apparatus, including a medical instrument, which is configured to move within a passage in a body of a patient, a position tracking system, which is configured to track coordinates of the medical instrument within the body, a display screen, and a processor, which is configured to register the position tracking system and a three-dimensional (3D) computerized tomography (CT) image of at least a part of the body within a common frame of reference, find a 3D path of the medical instrument through the passage from a given start point to a given termination point, compute segments of the 3D path, compute respective different locations along the 3D path of respective virtual cameras responsively to the computed segments, select the respective virtual cameras for rendering respective virtual endoscopic images responsively to the tracked coordinates of the medical instrument and the respective locations of the respective virtual cameras within the common frame of reference, compute respective orientations of the respective virtual cameras, and render and display on the display screen the respective virtual endoscopic images, based on the 3D CT image, of the passage in the body viewed from the respective locations and orientations of the respective virtual cameras including an animated representation of the medical instrument positioned in the respective virtual endoscopic images in accordance with the tracked coordinates.
Further in accordance with an embodiment of the present disclosure the processor is configured to find turning points in the 3D path above a threshold turning value, and compute the segments of the 3D path and the respective different locations along the 3D path of the respective virtual cameras responsively to the found turning points.
Still further in accordance with an embodiment of the present disclosure the processor is configured to position at least one of the virtual cameras in a middle of one of the segments responsively to a distance between adjacent ones of the virtual cameras exceeding a limit.
Additionally, in accordance with an embodiment of the present disclosure the processor is configured to check a line of sight between two adjacent ones of the virtual cameras, and position at least one of the virtual cameras between the two adjacent virtual cameras responsively to the line of sight being blocked.
Moreover, in accordance with an embodiment of the present disclosure the processor is configured to compute the segments based on an n-dimensional polyline simplification.
Further in accordance with an embodiment of the present disclosure the n-dimensional polyline simplification includes the Ramer-Douglas-Peucker algorithm.
Still further in accordance with an embodiment of the present disclosure the processor is configured to compute respective bisectors for respective ones of the virtual cameras, and select the respective virtual cameras for rendering respective virtual endoscopic images responsively to which side the tracked coordinates of the medical instrument fall with respect to a respective one of the bisectors of a respective one of the virtual cameras closest to the tracked coordinates.
Additionally, in accordance with an embodiment of the present disclosure the processor is configured to compute the respective bisectors as respective planes perpendicular to the 3D path at respective ones of the locations of respective ones of the virtual cameras on the 3D path.
Moreover, in accordance with an embodiment of the present disclosure the processor is configured to compute an average direction of vectors from a respective one of the locations of a respective one of the virtual cameras to different points along the 3D path, and compute a respective one of the orientations of the respective one of the virtual cameras responsively to the computed average direction.
Further in accordance with an embodiment of the present disclosure the processor is configured to shift the respective one of the locations of the respective one of the virtual cameras in an opposite direction to the computed average direction.
Still further in accordance with an embodiment of the present disclosure the processor is configured to render and display on the display screen a transition between two respective ones of the virtual endoscopic images of two respective adjacent ones of the virtual cameras based on successively rendering respective transitional virtual endoscope images of the passage in the body viewed from respective locations of respective additional virtual cameras disposed between the two respective adjacent ones of the virtual cameras.
Additionally, in accordance with an embodiment of the present disclosure the position tracking system includes an electromagnetic tracking system, which includes one or more magnetic field generators positioned around the part of the body and a magnetic field sensor at a distal end of the medical instrument.
There is also provided in accordance with another embodiment of the present disclosure, a medical method, including tracking coordinates of a medical instrument within a body of a patient using a position tracking system, the medical instrument being configured to move within a passage in the body of the patient, registering the position tracking system and a three-dimensional (3D) computerized tomography (CT) image of at least a part of the body within a common frame of reference, finding a 3D path of the medical instrument through the passage from a given start point to a given termination point, computing segments of the 3D path, computing respective different locations along the 3D path of respective virtual cameras responsively to the computed segments, selecting the respective virtual cameras for rendering respective virtual endoscopic images responsively to the tracked coordinates of the medical instrument and the respective locations of the respective virtual cameras within the common frame of reference, computing respective orientations of the respective virtual cameras, and rendering and displaying on a display screen the respective virtual endoscopic images, based on the 3D CT image, of the passage in the body viewed from the respective locations and orientations of the respective virtual cameras including an animated representation of the medical instrument positioned in the respective virtual endoscopic images in accordance with the tracked coordinates.
Moreover, in accordance with an embodiment of the present disclosure, the method includes finding turning points in the 3D path above a threshold turning value, and wherein the computing the segments includes computing the segments of the 3D path and the respective different locations along the 3D path of the respective virtual cameras responsively to the found turning points.
Further in accordance with an embodiment of the present disclosure, the method includes positioning at least one of the virtual cameras in a middle of one of the segments responsively to a distance between adjacent ones of the virtual cameras exceeding a limit.
Still further in accordance with an embodiment of the present disclosure, the method includes checking a line of sight between two adjacent ones of the virtual cameras, and positioning at least one of the virtual cameras between the two adjacent virtual cameras responsively to the line of sight being blocked.
Additionally, in accordance with an embodiment of the present disclosure, the method includes computing the segments based on an n-dimensional polyline simplification.
Moreover, in accordance with an embodiment of the present disclosure the n-dimensional polyline simplification includes the Ramer-Douglas-Peucker algorithm.
Further in accordance with an embodiment of the present disclosure, the method includes computing respective bisectors for respective ones of the virtual cameras, and wherein the selecting includes selecting the respective virtual cameras for rendering respective virtual endoscopic images responsively to which side the tracked coordinates of the medical instrument fall with respect to a respective one of the bisectors of a respective one of the virtual cameras closest to the tracked coordinates.
Still further in accordance with an embodiment of the present disclosure the computing the respective bisectors includes computing the respective bisectors as respective planes perpendicular to the 3D path at respective ones of the locations of respective ones of the virtual cameras on the 3D path.
Additionally, in accordance with an embodiment of the present disclosure, the method includes computing an average direction of vectors from a respective one of the locations of a respective one of the virtual cameras to different points along the 3D path, and wherein computing the respective orientations includes computing a respective one of the orientations of the respective one of the virtual cameras responsively to the computed average direction.
Moreover, in accordance with an embodiment of the present disclosure, the method includes shifting the respective one of the locations of the respective one of the virtual cameras in an opposite direction to the computed average direction.
Further in accordance with an embodiment of the present disclosure, the method includes rendering and displaying on the display screen a transition between two respective ones of the virtual endoscopic images of two respective adjacent ones of the virtual cameras based on successively rendering respective transitional virtual endoscope images of the passage in the body viewed from respective locations of respective additional virtual cameras disposed between the two respective adjacent ones of the virtual cameras.
There is also provided in accordance with still another embodiment of the present disclosure, a software product, including a non-transient computer-readable medium in which program instructions are stored, which instructions, when read by a central processing unit (CPU), cause the CPU to track coordinates of a medical instrument within a body of a patient using a position tracking system, the medical instrument being configured to move within a passage in the body of the patient, register the position tracking system and a three-dimensional (3D) computerized tomography (CT) image of at least a part of the body within a common frame of reference, find a 3D path of the medical instrument through the passage from a given start point to a given termination point, compute segments of the 3D path, compute respective different locations along the 3D path of respective virtual cameras responsively to the computed segments, select the respective virtual cameras for rendering respective virtual endoscopic images responsively to the tracked coordinates of the medical instrument and the respective locations of the respective virtual cameras within the common frame of reference, compute respective orientations of the respective virtual cameras, and render and display on a display screen the respective virtual endoscopic images, based on the 3D CT image, of the passage in the body viewed from the respective locations and orientations of the respective virtual cameras including an animated representation of the medical instrument positioned in the respective virtual endoscopic images in accordance with the tracked coordinates.
The present invention will be understood from the following detailed description, taken in conjunction with the drawings in which:
During medical procedures within the nasal passages, such as sinuplasty operations, it is impossible to directly visualize what is happening without insertion of an endoscope into the sinuses. Insertion of an endoscope is problematic, however, because of the tight spaces involved, as well as the extra cost of the endoscope. Furthermore, endoscopes for use in the nasal passages are typically rigid instruments, which cannot make turns or provide views from sinus cavities back toward the sinus opening.
Embodiments of the present invention that are described herein address this problem by generating virtual endoscopic views of the procedure from virtual cameras, similar to what would be seen by an actual endoscope positioned at the locations of the virtual cameras within the nasal passages. The virtual endoscopic views show the anatomy as well as the medical instrument moving through the anatomy. As the medical instrument moves along a passage, control of the virtual endoscopic view passes from one virtual camera to another responsively to tracked coordinates of the medical instrument.
These virtual endoscope views may be used, for example, in visualizing the location and orientation of a guidewire relative to the anatomy, as well as other instruments, such as a suction tool or a shaving tool (debrider).
Moving from one virtual camera to another provides a more stable view of the anatomy than placing a virtual camera on the distal tip of the medical instrument. In the latter case, the virtual camera is always “moving” as the distal tip moves, leading to jumpy or choppy video of the anatomy that is very difficult to follow.
Furthermore, although the embodiments disclosed hereinbelow are directed specifically to visualization within the nasal passages, the principles of the present invention may similarly be applied within other spaces in the body, particularly in narrow passages in which actual optical endoscopy is unavailable or difficult to use.
Prior to the medical procedure, a CT image of the patient's head, including the sinuses, is acquired, and a position tracking system, such as an electromagnetic tracking system, is registered with the CT image. A position sensor is attached to the distal end of the guidewire or other instrument, and the distal end is thus tracked, in location and orientation, relative to the registered CT image, as it is inserted into the sinuses. The CT image of the head is processed in order to generate and display images of the 3D volume of the nasal passages.
Inside this 3D volume, an operator of the imaging system, such as a surgeon performing a sinuplasty procedure, can select start and termination points of a 3D path along which to navigate the medical instrument. A suitable 3D path from the start to the termination point is computed, for example, using a path finding algorithm and data from the CT image indicating which voxels of the CT image include material suitable for traversing, such as air or liquid.
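The path computation described above can be sketched as a simple breadth-first search over the voxel grid. This is a hypothetical illustration, not the patented implementation: the `is_traversable` predicate (standing in for the CT-based check that a voxel contains air or liquid) and the 6-connected neighborhood are assumptions.

```python
from collections import deque

def find_path(start, goal, is_traversable):
    """Breadth-first search from start to goal over a voxel grid.

    start, goal: (x, y, z) integer voxel indices.
    is_traversable: predicate returning True for voxels whose material
    (e.g., air or liquid, per the permissible HU range) permits passage.
    Returns a list of voxels from start to goal, or None if no path exists.
    """
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    queue = deque([start])
    came_from = {start: None}          # visited set doubling as parent map
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []                  # walk parents back to the start
            while v is not None:
                path.append(v)
                v = came_from[v]
            return path[::-1]
        for d in neighbors:
            n = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
            if n not in came_from and is_traversable(n):
                came_from[n] = v
                queue.append(n)
    return None
```

Breadth-first search returns a shortest path in voxel steps; a cost-weighted variant (e.g., Dijkstra or A*) could equally serve as the path finding algorithm.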
The computed 3D path is automatically divided into segments with turning points between the segments above a threshold turning value. The virtual cameras are positioned around these turning points. Additional virtual cameras may be automatically positioned between the turning points if there is no line of sight between the virtual cameras positioned at the turning points and/or if the distance between the turning points exceeds a given value.
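Since the disclosure names the Ramer-Douglas-Peucker algorithm as one suitable n-dimensional polyline simplification, the segmentation step can be sketched as below. The `epsilon` tolerance plays the role of the turning threshold; its value here is an assumption for illustration.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a 3D polyline.

    Keeps the endpoints and, recursively, any interior point whose
    distance from the chord exceeds epsilon; the surviving interior
    points correspond to turning points at which virtual cameras
    may be placed.
    """
    def dist_to_chord(p, a, b):
        ab = [b[i] - a[i] for i in range(3)]
        ap = [p[i] - a[i] for i in range(3)]
        ab_len = math.sqrt(sum(c * c for c in ab))
        if ab_len == 0:
            return math.sqrt(sum(c * c for c in ap))
        # |ab x ap| / |ab| is the perpendicular distance to the chord
        cross = [ab[1] * ap[2] - ab[2] * ap[1],
                 ab[2] * ap[0] - ab[0] * ap[2],
                 ab[0] * ap[1] - ab[1] * ap[0]]
        return math.sqrt(sum(c * c for c in cross)) / ab_len

    if len(points) < 3:
        return list(points)
    i_max, d_max = 0, 0.0
    for i in range(1, len(points) - 1):
        d = dist_to_chord(points[i], points[0], points[-1])
        if d > d_max:
            i_max, d_max = i, d
    if d_max <= epsilon:               # whole span is straight enough
        return [points[0], points[-1]]
    left = rdp(points[:i_max + 1], epsilon)
    right = rdp(points[i_max:], epsilon)
    return left[:-1] + right           # drop the duplicated split point
```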
The orientation of the optical axis of each virtual camera is computed. The orientation may be computed using any suitable method. In some embodiments, the orientation may be computed based on an average direction of vectors from a virtual camera to locations along the path until the next virtual camera. In other embodiments, the orientation may be computed as a direction parallel to the path at the location of the respective virtual camera. The field of view of the virtual cameras may be fixed, e.g., to 90 degrees or any suitable value, or set according to the outer limits of the relevant segment of the path served by the respective virtual cameras with an additional tolerance to allow for deviations from the path.
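The average-direction variant described above can be sketched as follows; normalizing each vector before averaging (so near and far path points are weighted equally) is an assumption of this sketch.

```python
import math

def camera_orientation(cam_pos, path_points):
    """Optical-axis direction for a virtual camera, computed as the
    normalized average of unit vectors from the camera location to
    each point along the path segment it serves."""
    sx = sy = sz = 0.0
    for p in path_points:
        dx, dy, dz = p[0] - cam_pos[0], p[1] - cam_pos[1], p[2] - cam_pos[2]
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        if norm > 0:                   # skip a point coincident with the camera
            sx += dx / norm
            sy += dy / norm
            sz += dz / norm
    n = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / n, sy / n, sz / n) if n > 0 else (0.0, 0.0, 1.0)
```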
In some embodiments, the location of a virtual camera may be shifted backwards in an opposite direction to the computed average direction. The virtual camera may be shifted back by any suitable distance, for example, until the camera is shifted back to solid material such as tissue or bone. Shifting the virtual cameras back may result in a better view of the medical instrument within the respective virtual endoscopic images particularly when the medical instrument is very close to the respective virtual cameras, and may result in a better view of the surrounding anatomy.
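The backward shift can be sketched as stepping the camera opposite to its computed viewing direction until solid material is reached. The `is_solid` predicate (e.g., an HU-threshold test for tissue or bone), the step size, and the step budget are illustrative assumptions.

```python
def shift_camera_back(cam_pos, view_dir, is_solid, step=0.5, max_steps=20):
    """Move the camera opposite to its unit viewing direction until the
    next step would enter a solid voxel (tissue/bone), or until a step
    budget is exhausted. Returns the shifted camera location."""
    x, y, z = cam_pos
    for _ in range(max_steps):
        nx = x - step * view_dir[0]
        ny = y - step * view_dir[1]
        nz = z - step * view_dir[2]
        if is_solid((nx, ny, nz)):     # stop just short of the wall
            break
        x, y, z = nx, ny, nz
    return (x, y, z)
```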
As the medical tool moves through the passages, the respective virtual cameras are selected for rendering and displaying respective virtual endoscopic images according to a camera selection method. In some embodiments, the camera selection method includes finding a closest camera to the tracked coordinates of the medical instrument and then finding which side of a bisector (plane) associated with the closest camera the tracked coordinates fall. If the tracked coordinates fall on the side of the bisector further down the computed path (in the direction of travel of the medical instrument) from the closest camera, the closest camera is selected for rendering. If the tracked coordinates fall on the side of the bisector closest to the current virtual camera, the current virtual camera continues to provide its endoscopic image. The bisector associated with the closest camera may be defined as a plane perpendicular to the computed path at the point of the closest camera. In other embodiments, the passages may be divided into regions based on the segments with the virtual cameras being selected according to the region in which the tracked coordinates are disposed.
The transition between two virtual cameras and therefore the transition between the associated virtual endoscopic images may be a smooth transition or a sharp transition. In some embodiments, a smooth transition between the two respective virtual cameras may be performed by finding locations of additional virtual cameras on the path between the two virtual cameras and then successively rendering respective transitional virtual endoscope images viewed from the locations of the additional virtual cameras.
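The smooth-transition option can be sketched by linearly interpolating intermediate camera locations between the two adjacent cameras, one per transitional frame. Straight-line interpolation (rather than interpolation along the curved path) and the frame count are assumptions of this sketch.

```python
def transition_cameras(cam_a, cam_b, n_frames):
    """Locations of the additional virtual cameras used for a smooth
    transition: n_frames points evenly spaced strictly between the
    locations of two adjacent virtual cameras."""
    frames = []
    for k in range(1, n_frames + 1):
        t = k / (n_frames + 1)         # fraction of the way from a to b
        frames.append(tuple(cam_a[i] + t * (cam_b[i] - cam_a[i])
                            for i in range(3)))
    return frames
```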
Reference is now made to
The medical apparatus 20 includes a position tracking system 23, which is configured to track coordinates of the medical instrument 21 within the body. In some embodiments, the position tracking system 23 comprises an electromagnetic tracking system 25, which comprises one or more magnetic field generators 26 positioned around the part of the body and one or more magnetic field sensors 32 at a distal end of the medical instrument 21. In one embodiment, the magnetic field sensors 32 comprise a single axis coil and a dual axis coil, which act as magnetic field sensors and which are tracked during the procedure by the electromagnetic tracking system 25. For the tracking to be effective, in apparatus 20 frames of reference of a CT (computerized tomography) image of patient 22 and of the electromagnetic tracking system 25 are registered, as described in more detail with reference to
Prior to and during the sinus procedure, a magnetic radiator assembly 24, comprised in the electromagnetic tracking system 25, is positioned beneath the patient's head. The magnetic radiator assembly 24 comprises the magnetic field generators 26 which are fixed in position and which transmit alternating magnetic fields into a region 30 wherein the head of patient 22 is located. Potentials generated by the single axis coil of the magnetic field sensor(s) 32 in region 30, in response to the magnetic fields, enable its position and its orientation to be measured in the magnetic tracking system's frame of reference. The position can be measured in three linear dimensions (3D), and the orientation can be measured for two axes that are orthogonal to the axis of symmetry of the single axis coil. However, the orientation of the single axis coil with respect to its axis of symmetry cannot be determined from the potentials generated by the coil.
The same is true for each of the two coils of the dual axis coil of the magnetic field sensor 32. That is, for each coil the position in 3D can be measured, as can the orientation with respect to two axes that are orthogonal to the coil axis of symmetry, but the orientation of the coil with respect to its axis of symmetry cannot be determined.
By way of example, radiators 26 of assembly 24 are arranged in an approximately horseshoe shape around the head of patient 22. However, alternate configurations for the radiators of assembly 24 will be apparent to those having skill in the art, and all such configurations are assumed to be comprised within the scope of the present invention.
Prior to the procedure, the registration of the frames of reference of the magnetic tracking system with the CT image may be performed by positioning a magnetic sensor at known positions of the image, such as the tip of the patient's nose.
However, any other convenient system for registration of the frames of reference may be used as described in more detail with reference to
Elements of apparatus 20, including radiators 26 and magnetic field sensor 32, are under overall control of a system processor 40. Processor 40 may be mounted in a console 50, which comprises operating controls 58 that typically include a keypad and/or a pointing device such as a mouse or trackball. Console 50 connects to the radiators and to the magnetic field sensor 32 via one or more cables 60 and/or wirelessly. A physician 54 uses operating controls 58 to interact with the processor 40 while performing the medical procedure using apparatus 20. While performing the procedure, the processor may present results of the procedure on a display screen 56.
Processor 40 uses software stored in a memory 42 to operate apparatus 20. The software may be downloaded to processor 40 in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
Reference is now made to
The position tracking system 23 (
The processor 40 is configured to find (block 76) a 3D path of the medical instrument 21 through a passage from a given start point to a given termination point. The step of block 76 is described in more detail with reference to the path finding method of
The processor 40 is configured to compute (block 78) segments of the computed 3D path. The processor 40 is configured to compute (block 80) respective different locations along the 3D path of respective virtual cameras responsively to the computed segments. The steps of blocks 78 and 80 are described in more detail with reference to
The processor 40 is configured to select (block 82) the respective virtual cameras for rendering respective virtual endoscopic images responsively to the tracked coordinates of the medical instrument 21 and the respective locations of the respective virtual cameras within the common frame of reference. As the medical instrument 21 moves along the 3D path (the medical instrument 21 may travel some distance to either side of the path, as it is not locked to the path), the virtual camera providing the virtual endoscopic image is selected according to the tracked coordinates of the medical instrument 21, and control is passed successively from one virtual camera to another as the medical instrument 21 moves along the path. The step of block 82 may be repeated intermittently, for example, each time new tracked coordinates are received, for example, in a range between 10 and 100 milliseconds, such as 50 milliseconds. The step of block 82 is described in more detail with reference to
The processor 40 is configured to compute (block 84) respective orientations of the respective virtual cameras. The orientation of the cameras is generally a 3D orientation and is defined with respect to a respective optical axis of the respective virtual cameras. In other words, the orientation of the cameras is a measure of which directions the cameras are facing for optical purposes. The location of the virtual cameras may also be shifted back as described in more detail with reference to
The processor 40 is configured to render and display (block 86) on the display screen 56 (
Reference is now made to
In an initial step (block 100 of the flowchart 90), a computerized tomography (CT) X-ray scan of the nasal sinuses of patient 22 is performed, and the data from the scan is acquired by processor 40. As is known in the art, the scan comprises two-dimensional X-ray “slices” of the patient 22, and the combination of the slices generates three-dimensional voxels, each voxel having a Hounsfield unit, a measure of radiodensity, determined by the CT scan.
In an image generation step (block 102), physician 54 (
The displayed results are typically gray scale images, and an example is provided in
As is known in the art, apart from the values for air and water, which by definition are respectively −1000 and 0, the value of the Hounsfield unit of any other substance or species, such as dense bone, is dependent, inter alia, on the spectrum of the irradiating X-rays used to produce the CT scans referred to herein. In turn, the spectrum of the X-rays depends on a number of factors, including the potential in kilovolts (kV) applied to the X-ray generator, as well as the composition of the anode of the generator. For clarity in the present disclosure, the values of Hounsfield units for a particular substance or species are assumed to be as given in Table I below.
However, the numerical values of HUs for particular species (other than air and water) given in Table I are to be understood as purely illustrative, and those having ordinary skill in the art will be able to modify these illustrative values, without undue experimentation, according to the species and the X-ray machine used to generate the CT images referred to herein.
Typically, a translation between HU values and gray scale values is encoded into a DICOM (Digital Imaging and Communications in Medicine) file, which is the CT scan output from a given CT machine. For clarity, the following description uses the correlation of HU=−1000 to black and HU=3000 to white, with intermediate HU values correlated to corresponding intermediate gray levels; it will be understood, however, that this correlation is purely arbitrary. For example, the correlation may be “reversed,” i.e., HU=−1000 may be assigned to white, HU=3000 assigned to black, and intermediate HU values assigned to corresponding intermediate gray levels. Thus, those having ordinary skill in the art will be able to adapt the description herein to accommodate other correlations between Hounsfield units and gray levels, and all such correlations are assumed to be comprised within the scope of the present invention.
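By way of illustration, the arbitrary HU-to-gray correlation described above can be sketched as a short Python function. The function name, the clamping behavior, and the 8-bit output range are illustrative assumptions, not part of the disclosed system:

```python
def hu_to_gray(hu, hu_min=-1000, hu_max=3000):
    """Linearly map a Hounsfield unit value to an 8-bit gray level.

    Uses the (arbitrary) correlation from the description:
    HU = -1000 -> black (0), HU = 3000 -> white (255).
    Values outside [hu_min, hu_max] are clamped.
    """
    hu = max(hu_min, min(hu_max, hu))
    return round(255 * (hu - hu_min) / (hu_max - hu_min))
```

The “reversed” correlation mentioned above would simply be `255 - hu_to_gray(hu)`.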
In a marking step (block 104), the physician 54 marks, on the displayed images, a start point 150 and a termination point 152 of the path that is to be found.
In a permissible path definition step (block 106), the physician defines ranges of Hounsfield units which the path finding algorithm, referred to below, uses as acceptable voxel values in finding a path from start point 150 to termination point 152. The defined range typically includes HUs equal to −1000, corresponding to air or a void in the path; the defined range may also include HUs greater than −1000, for example, the range may be defined as given by expression (1):
{HU|−1000≤HU≤U} (1)
where U is a value selected by the physician.
For example, U may be set to +45, so that the path taken may include water, fat, blood, and soft tissue, as well as air or a void. In some embodiments, the range may be set automatically by the processor 40.
There is no requirement that the defined range of values is a continuous range, and the range may be disjoint, including one or more sub-ranges. In some embodiments a sub-range may be chosen to include a specific type of material. An example of a disjoint range is given by expression (2):
{HU|HU=−1000 or A≤HU≤B} (2)
where A, B are values selected by the physician.
For example, A and B may be set to be equal to −300 and −100 respectively, so that the path taken may include air or a void and soft tissue.
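A minimal sketch of expressions (1) and (2) as voxel-acceptance predicates, assuming illustrative Python function names and the example values of U, A, and B given above:

```python
def in_range_simple(hu, upper=45):
    # Expression (1): -1000 <= HU <= U, with U chosen by the physician
    # (e.g. U = +45, admitting water, fat, blood, and soft tissue).
    return -1000 <= hu <= upper

def in_range_disjoint(hu, a=-300, b=-100):
    # Expression (2): HU == -1000 (air or void) or A <= HU <= B
    # (e.g. A = -300, B = -100, admitting soft tissue).
    return hu == -1000 or a <= hu <= b
```

Either predicate can be handed to a path finding routine to decide which voxels are traversable.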
The method of selection for the range of HUs may include any suitable method, including, but not limited to, selection by number, and/or by name of material, and/or by gray scale. For example, in the case of selection by gray scale, physician 54 may select one or more gray scale values, or a range of gray scale values, from the displayed image, and the processor 40 translates the selected gray scales into the corresponding range of HUs.
In the case of selection by name, a table of named species may be displayed to the physician. The displayed table is typically similar to Table I, but without the column providing values of Hounsfield units. The physician may select one or more named species from the table, in which case the HU equivalents of the selected named species are included in the acceptable range of HUs for voxels of the path to be determined by the path finding algorithm.
In a path finding step (block 108), processor 40 applies a path finding algorithm to determine the shortest path from start point 150 to termination point 152, the path traversing only voxels whose values lie within the range defined in the step of block 106.
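The path finding step can be illustrated by a breadth-first search over a 6-connected voxel grid. This is only a sketch under assumed data structures (a nested-list volume and a permissibility predicate); the disclosure does not prescribe a particular algorithm:

```python
from collections import deque

def shortest_voxel_path(volume, start, goal, permissible):
    """Breadth-first search for a shortest 6-connected path through a voxel grid.

    `volume[z][y][x]` holds HU values; `permissible` is a predicate on HU,
    such as the range tests of expressions (1) and (2).
    Returns the path as a list of (z, y, x) voxels, or None if no path exists.
    """
    dims = (len(volume), len(volume[0]), len(volume[0][0]))
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk predecessors back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        z, y, x = node
        for dz, dy, dx in steps:
            nz, ny, nx = z + dz, y + dy, x + dx
            nxt = (nz, ny, nx)
            if (0 <= nz < dims[0] and 0 <= ny < dims[1] and 0 <= nx < dims[2]
                    and nxt not in prev and permissible(volume[nz][ny][nx])):
                prev[nxt] = node
                queue.append(nxt)
    return None
```

With unit-cost steps, breadth-first search returns a shortest path; a weighted variant (e.g., Dijkstra's algorithm) could penalize voxels near the boundary of the permissible range.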
In some embodiments, the path finding step includes taking account of mechanical properties and dimensions of medical instrument 21, for example, its diameter and its flexibility, so that the path found is one that the medical instrument 21 is physically able to follow.
In a further disclosed embodiment, the processor 40 (
In considering the possible radii of curvature of the medical instrument 21, the processor 40 may reject candidate paths that include bends sharper than the medical instrument 21 is able to negotiate.
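One hypothetical way to estimate the local radius of curvature along a candidate path is to compute the circumradius of each triple of consecutive path points; the function below is an illustrative sketch, not the disclosed method:

```python
import math

def local_radius(p0, p1, p2):
    """Circumradius of three consecutive path points (in voxel units).

    A small circumradius indicates a sharp local bend. Returns math.inf for
    collinear points, since a straight segment has no bend.
    """
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    s = (a + b + c) / 2
    # Heron's formula; clamp to zero to guard against floating-point noise.
    area_sq = max(s * (s - a) * (s - b) * (s - c), 0.0)
    area = math.sqrt(area_sq)
    if area == 0.0:
        return math.inf
    return a * b * c / (4 * area)
```

A path could then be rejected when `local_radius` at any triple falls below the minimum bend radius of the instrument.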
In a yet further disclosed embodiment, the processor 40 (
In an overlay step (block 110), the shortest path found in the step of block 108 is overlaid on an image that is displayed on display screen 56.
Typically, the path found traverses more than one 2D slice, in which case the overlaying may be implemented by incorporating the path found into all the 2D slices that are relevant, i.e., through which the path traverses. Alternatively, or additionally, an at least partially transparent 3D image may be generated from the 2D slices of the scan, and the path found may be overlaid on the 3D image. The at least partially transparent 3D image may be formed on a representation of an external surface of patient 22, as is described in more detail below.
For clarity, the following description assumes that the boundary plane 190 is parallel to an x-y plane of frame of reference 184, so that the boundary plane 190 satisfies equation (3):
z=zbp (3)
As described below, processor 40 uses the boundary plane and the bounding region 192 to determine which elements of surface 180 are to be rendered locally transparent, and which elements are not to be so rendered.
Processor 40 determines elements of surface 180 that have z values satisfying z≥zbp and that, when projected along the z-axis, lie within bounding region 192, and renders these elements locally transparent.
In consequence of the above-defined elements being rendered transparent, elements of surface 180 having z values less than zbp, and that when projected along the z-axis lie within bounding region 192, are now visible, and so are displayed in the image. Prior to the local transparent rendering, the “now visible” elements were obscured by the overlying surface elements. The now visible elements include elements of shortest path 154, as is illustrated in the accompanying figures.
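The transparency rule can be sketched as a predicate over surface elements; the axis-aligned rectangular form assumed here for bounding region 192 is an illustration only:

```python
def locally_transparent(x, y, z, z_bp, region):
    """Decide whether a surface element is rendered locally transparent.

    An element is made transparent when its z coordinate is at or beyond the
    boundary plane z = z_bp AND its projection along the z-axis falls inside
    the bounding region, given here as a rectangle (x0, y0, x1, y1).
    Elements with z < z_bp inside the region thus become visible.
    """
    x0, y0, x1, y1 = region
    in_region = x0 <= x <= x1 and y0 <= y <= y1
    return z >= z_bp and in_region
```

A renderer would evaluate this predicate per surface element and skip (or alpha-blend) the elements for which it returns true.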
Shortest path 154 has also been drawn in
It will be appreciated that in the case illustrated in
The description above provides one example of the application of local transparency to viewing a shortest path derived from tomographic data, the local transparency in this case being formed relative to a plane parallel to the coronal plane of the patient 22. It will be understood that, because of the three-dimensional nature of the tomographic data, the data may be manipulated so that embodiments of the present invention may view the shortest path 154 using local transparency formed relative to substantially any plane through patient 22 that may be defined in frame of reference 184.
In forming the local transparency, the dimensions and position of the boundary plane 190 and the bounding region 192 may be varied to enable the physician 54 to control which portions of the shortest path 154, and of the surrounding anatomy, are made visible.
The physician 54 may vary the direction of the boundary plane 190, for example to enhance the visibility of particular internal structures. While the boundary plane 190 is typically parallel to the plane of the image presented on display screen 56, this is not a requirement, so that if, for example, the physician 54 wants to see more detail of a particular structure, she/he may rotate the boundary plane 190 so that it is no longer parallel to the image plane.
In some cases, the range of HU values/gray scales selected in the step of block 106 includes regions other than air, for example, regions that correspond to soft tissue and/or mucous. The path 154 found in the step of block 108 may include such regions, and in this case, medical instrument 21 may need to penetrate or displace the soft tissue and/or mucous in order to traverse the path 154.
While the description above has assumed that the CT scan is an X-ray scan, it will be understood that embodiments of the present invention comprise finding a shortest path using MRI (magnetic resonance imaging) tomography images.
Thus, referring back to the flowchart 90, in the case of MRI images, wherein Hounsfield values may not be directly applicable, in the step of block 106 the physician 54 may define the acceptable range of voxel values directly in terms of the gray scale values of the MRI images.
Reference is now made to
The processor 40 is configured to compute (block 302) the respective locations of the respective virtual cameras 320 along the path 154, as follows.
Sub-steps of the step of block 302 are now described below.
The processor 40 is configured to compute (block 304) segments 322 of the path 154 by removing turning points of the path 154 that are below a threshold turning value, so that each segment 322 approximates a generally straight portion of the path 154.
In other embodiments, the processor 40 is configured to compute the segments 322 using any suitable algorithm so that the turning points below the threshold turning value are removed.
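As one example of a suitable algorithm for computing the segments 322, the Ramer-Douglas-Peucker polyline simplification removes turning points whose deviation from the local chord falls below a tolerance. The 2D sketch below is illustrative only; the tolerance plays the role of the threshold turning value:

```python
def simplify_path(points, tol):
    """Ramer-Douglas-Peucker simplification of a 2D polyline.

    Interior points within `tol` of the chord between the endpoints are
    dropped; points deviating more than `tol` become segment junctions.
    """
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each interior point from the chord.
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__)
    if dists[i_max] <= tol:
        return [points[0], points[-1]]
    split = i_max + 1  # index of the worst offender in the full list
    left = simplify_path(points[:split + 1], tol)
    right = simplify_path(points[split:], tol)
    return left[:-1] + right  # avoid duplicating the split point
```

The surviving points delimit the segments 322; the same recursion extends directly to 3D by using a point-to-line distance in three dimensions.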
The processor 40 is configured to position (block 306) respective virtual cameras 320 at respective ends of the segments 322 (e.g., at the remaining turning points of the path 154).
The processor 40 is configured to check (block 308) a line of sight between two adjacent virtual cameras 320 and position one or more virtual cameras 320 between the two adjacent virtual cameras 320 responsively to the line of sight being blocked. The line of sight may be checked by examining voxels of the 3D CT image to determine if there is material blocking the line of sight between the adjacent virtual cameras 320. The type of material which is considered to block or allow the line of sight may be the same as used when computing the path 154, as described with reference to the step of block 106 of the flowchart 90.
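The line-of-sight check of block 308 can be sketched by sampling voxels along the straight line between two camera locations. The sampling scheme and the blocking predicate below are illustrative assumptions; a real implementation would use a proper voxel traversal:

```python
def line_of_sight(volume, p0, p1, blocks, samples=64):
    """Check whether the straight line between two virtual-camera positions
    is free of blocking material, by sampling voxels along the line.

    `volume[z][y][x]` holds HU values; `blocks` is a predicate marking HU
    values that obstruct the view; p0 and p1 are (z, y, x) voxel coordinates.
    """
    for i in range(samples + 1):
        t = i / samples
        # Nearest-voxel sampling along the parametric line p0 + t*(p1 - p0).
        z = round(p0[0] + t * (p1[0] - p0[0]))
        y = round(p0[1] + t * (p1[1] - p0[1]))
        x = round(p0[2] + t * (p1[2] - p0[2]))
        if blocks(volume[z][y][x]):
            return False
    return True
```

When this returns false for two adjacent cameras, an intermediate camera can be inserted between them and the check repeated on each half.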
The processor 40 is optionally configured to position (block 310) one or more additional virtual cameras 320 in the middle of one or more of the segments 322 responsively to a distance between the existing virtual cameras 320 exceeding a limit.
Reference is now made to
The processor 40 is configured to receive (block 342) the tracked coordinates of the distal end of the medical instrument 21.
The processor 40 is configured to find (block 344) the closest virtual camera 320 (e.g., virtual camera 320-7) to the distal end of the medical instrument 21, responsively to the tracked coordinates.
The steps of blocks 344-348 are illustrated in
Therefore, the processor 40 is configured to select the respective virtual cameras 320 for rendering respective virtual endoscopic images responsively to which side of a plane defined through the closest virtual camera 320 the tracked coordinates (positions 352) of the medical instrument 21 are disposed.
In other embodiments, the passages may be divided into regions based on the segments 322 with the virtual cameras 320 being selected according to the region in which the tracked coordinates are disposed.
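A hypothetical sketch of the camera-selection logic of blocks 344-348: choose the camera closest to the tracked tip, then hand control to the next camera once the tip crosses a plane through the closest camera whose normal points along the path toward that next camera. The plane construction here is an assumption for illustration:

```python
import math

def select_camera(cameras, tip):
    """Select the index of the active virtual camera for a tracked tip.

    `cameras` is an ordered list of camera positions along the path;
    `tip` is the tracked position of the instrument's distal end.
    """
    # Block 344: find the camera closest to the tip.
    i = min(range(len(cameras)), key=lambda k: math.dist(cameras[k], tip))
    if i + 1 < len(cameras):
        cam = cameras[i]
        nxt = cameras[i + 1]
        # Plane through `cam` with normal pointing toward the next camera;
        # a positive dot product means the tip is beyond that plane.
        normal = [b - a for a, b in zip(cam, nxt)]
        offset = [t - c for c, t in zip(cam, tip)]
        if sum(n * o for n, o in zip(normal, offset)) > 0:
            return i + 1
    return i
```

The tip need not lie on the path; only its side of the plane matters, which matches the instrument not being locked to the path.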
Reference is now made to
The orientation of the optical axis of each virtual camera 320 is computed. The orientation may be computed any time after the locations of the virtual cameras 320 have been computed. In some embodiments, the orientations of the virtual cameras 320 may be computed as each virtual camera 320 is selected for use, as the medical instrument 21 moves along the path 154.
The processor 40 is configured to compute the respective orientations of the respective virtual cameras 320 responsively to respective average directions 374 of vectors 372 extending from the locations of the respective virtual cameras 320 to points on the path 154.
In some embodiments, the respective locations of the respective virtual cameras 320 (e.g., virtual camera 320-2) may be shifted back, for example, in an opposite direction 376 to the respective average directions 374 of the respective virtual cameras 320, so that more of the path 154 falls within the field of view of the respective virtual cameras 320.
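The computation of an average direction 374, and the backward shift of the camera location along the opposite direction 376, can be sketched as follows; the shift distance is an assumed parameter:

```python
import math

def camera_orientation_and_shift(cam, lookahead_points, shift):
    """Average the unit vectors from a virtual camera to the path points it
    covers to obtain its optical-axis direction, then shift the camera back
    along the opposite direction so that more of the path is in view.

    Returns (axis, shifted_location); all points are 3-tuples/lists.
    """
    acc = [0.0, 0.0, 0.0]
    for p in lookahead_points:
        v = [pi - ci for ci, pi in zip(cam, p)]
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        acc = [a + x / n for a, x in zip(acc, v)]  # sum of unit vectors
    n = math.sqrt(sum(x * x for x in acc)) or 1.0
    axis = [x / n for x in acc]                    # average direction 374
    shifted = [c - shift * a for c, a in zip(cam, axis)]  # direction 376
    return axis, shifted
```

Averaging unit vectors (rather than raw offsets) keeps distant path points from dominating the orientation.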
The field of view of the respective virtual cameras 320 may be set to any suitable respective values. The field of view of each virtual camera 320 may be fixed, for example, in a range of values between 25-170 degrees, e.g., 90 degrees. The field of view of any one virtual camera 320 may be set according to the outer limits of the segment 322 of the path 154 that that virtual camera 320 covers (for example, derived from the vectors 372 described above).
Reference is now made to
Reference is also made to
The transition between two virtual cameras 320 and therefore the transition between the associated virtual endoscopic images may be a smooth transition or a sharp transition. In some embodiments, a smooth transition between the two respective virtual cameras may be performed by performing the following steps. The transition is described by way of example between virtual camera 320-2 and virtual camera 320-4.
The processor 40 is configured to compute intermediate camera locations and orientations between those of virtual camera 320-2 and those of virtual camera 320-4.
The processor 40 is configured to render and display successive virtual endoscopic images from the intermediate locations and orientations, so that the displayed image transitions gradually from the viewpoint of virtual camera 320-2 to the viewpoint of virtual camera 320-4.
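A smooth transition between two virtual cameras can be sketched, for example, as linear interpolation of the camera location over successive frames (orientation could be interpolated analogously, e.g., by spherical interpolation); the frame count is an assumed parameter:

```python
def transition_frames(cam_a, cam_b, n):
    """Linearly interpolate the camera location from cam_a to cam_b.

    Returns n + 1 positions, inclusive of both endpoints; rendering a
    virtual endoscopic image at each position yields a smooth hand-off
    rather than a sharp cut between the two viewpoints.
    """
    frames = []
    for i in range(n + 1):
        t = i / n
        frames.append(tuple(a + t * (b - a) for a, b in zip(cam_a, cam_b)))
    return frames
```

A sharp transition corresponds to the degenerate case of rendering only the final frame.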
Reference is now made to
The virtual endoscopic images 398 may be rendered using volume visualization techniques, generating the virtual endoscopic images 398 from tissue image data (e.g., based on the HU values of voxels of the 3D CT scan) in a 3D viewing volume, such as a cone projecting outward from the location of the relevant virtual camera 320. The images 398 may be rendered based on known colors of the tissue. Certain materials, such as liquids or even soft tissue, may be selected to be transparent, while all other denser material may be rendered according to its natural color.
Alternatively, even liquids and soft tissue, along with the denser material, may be rendered according to the natural color of the respective materials. In some embodiments, the physician 54 may select which materials are rendered as transparent and which are rendered according to their natural colors.
The above images 398 are presented only for purposes of illustration, and other sorts of images may likewise be rendered and displayed in accordance with the principles of the present invention.
As used herein, the terms “about” or “approximately” for any numerical values or ranges indicate a suitable dimensional tolerance that allows the part or collection of components to function for its intended purpose as described herein. More specifically, “about” or “approximately” may refer to the range of values ±20% of the recited value, e.g. “about 90%” may refer to the range of values from 72% to 108%.
Various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable sub-combination.
The embodiments described above are cited by way of example, and the present invention is not limited by what has been particularly shown and described hereinabove. Rather the scope of the invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
Relation | Number | Date | Country
---|---|---|---
Parent | 16726661 | Dec 2019 | US
Child | 18081986 | | US