The present disclosure is directed to systems, methods, and computer program products for registering an instrument and image frames of reference by combining point cloud data and endoscopic image data.
Minimally invasive medical techniques are intended to reduce the amount of tissue that is damaged during medical procedures, thereby reducing patient recovery time, discomfort, and harmful side effects. Such minimally invasive techniques may be performed through natural orifices in a patient anatomy or through one or more surgical incisions. Through these natural orifices or incisions, an operator may insert minimally invasive medical tools to reach a target tissue location. Minimally invasive medical tools include instruments such as therapeutic, diagnostic, biopsy, and surgical instruments. Medical tools may be inserted into anatomic passageways and navigated toward a region of interest within a patient anatomy. Navigation may be assisted using images of the anatomic passageways. Improved systems and methods are needed to accurately perform registrations between medical tools and images of the anatomic passageways.
Disclosed herein are devices, systems, methods, and computer program products for combining positional sensor data (e.g., shape and/or electro-magnetic sensor data) and endoscopic image data (e.g., video data, still images, etc.) to improve registration between (i) real patient anatomy (e.g., airways of lungs of a patient) within an anatomic region of a patient navigated by a medical instrument system as part of an image-guided medical procedure and (ii) an image of the anatomic region (e.g., generated from preoperative and/or intraoperative imaging).
In some embodiments, a medical instrument system for use in an image-guided medical procedure includes a positional sensor, an image capture device, a processor communicatively coupled to the positional sensor and the image capture device, and a memory. The positional sensor can be configured to generate positional sensor data associated with one or more positions of a biomedical device within an anatomic region of a patient. The image capture device can be configured to capture first image data of patient anatomy within the anatomic region while the biomedical device is positioned within the anatomic region. The memory can store instructions that, when executed by the processor, cause the medical instrument system to perform operations including (i) generating a point cloud of coordinate points based, at least in part, on the positional sensor data, (ii) receiving second image data of the anatomic region, wherein the second image data is generated based, at least in part, on imaging of the anatomic region, (iii) generating a registration between at least a portion of the point cloud and at least a portion of the second image data, and/or (iv) updating the registration based, at least in part, on the first image data.
In these and other embodiments, a non-transitory, computer-readable medium can store instructions that, when executed by one or more processors of a computing system, cause the computing system to perform operations including (i) generating a point cloud of coordinate points based, at least in part, on positional sensor data captured using a position sensor, wherein the positional sensor data is associated with one or more positions of a biomedical device within an anatomic region of a patient; (ii) receiving first image data of patient anatomy captured using an image capture device positioned within the anatomic region; (iii) receiving second image data of the anatomic region, wherein the second image data is generated based, at least in part, on preoperative or intraoperative imaging of the anatomic region; (iv) generating a registration between at least a portion of the point cloud and at least a portion of the second image data; and/or (v) updating the registration based, at least in part, on the first image data.
In these and still other embodiments, a method can include (i) generating a point cloud of coordinate points based, at least in part, on positional sensor data captured using a position sensor of a robotic system, wherein the positional sensor data is associated with one or more positions of a biomedical device within an anatomic region of a patient; (ii) receiving first image data of patient anatomy captured using an image capture device of the robotic system while the image capture device is positioned within the anatomic region; (iii) receiving second image data of the anatomic region, wherein the second image data is based, at least in part, on preoperative or intraoperative imaging of the anatomic region; (iv) generating a registration between at least a portion of the point cloud and at least a portion of the second image data; and/or (v) updating the registration based, at least in part, on a portion of the first image data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory in nature and are intended to provide an understanding of the present disclosure without limiting the scope of the present disclosure. In that regard, additional aspects, features, and advantages of the present disclosure will be apparent to one skilled in the art from the following detailed description.
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale. Instead, emphasis is placed on illustrating clearly the principles of the present disclosure. The drawings should not be taken to limit the disclosure to the specific embodiments depicted, but are for explanation and understanding only.
The present disclosure is directed to devices, systems, methods, and computer program products for combining positional sensor data (e.g., shape and/or electro-magnetic sensor data) and endoscopic image data (e.g., video data, still images, etc.) to improve registration between (i) real patient anatomy (e.g., airways of lungs of a patient) within an anatomic region of a patient navigated by a medical instrument system as part of an image-guided medical procedure and (ii) an image of the anatomic region (e.g., generated from preoperative and/or intraoperative imaging). When adequately registered, the tracked position of the medical instrument system within the anatomic region can be mapped to a correct position within an anatomic model of the anatomic region for use in guiding navigation of the medical instrument system throughout the anatomic region and/or for use in guiding interaction with subsurface structures within and/or near the anatomic region (e.g., for use in guiding a biopsy and/or treatment of nodules of the lungs). In particular, the present technology provides visual guidance in the form of virtual navigational (e.g., fly-through) images from the viewpoint of the medical instrument system within the anatomic region that are generated within the anatomic model at the location of the medical instrument system following registration.
In some embodiments, the steps of registering the real patient anatomy to the anatomic model include: (a) navigating a medical instrument system throughout an anatomic region of a patient, (b) generating a point cloud of coordinate points representing locations visited by (e.g., a distal portion of) the medical instrument system, and (c) registering the point cloud (using an iterative closest point algorithm) to an image (e.g., a segmented CT image) of the anatomic region. In these and other embodiments, the present technology captures endoscopic image data (e.g., video data, still images, etc.) including a real navigational image of real patient anatomy within the anatomic region using an endoscope or other image capture device mounted to the distal portion (or another suitable location) of the medical instrument system. In these and still other embodiments, the present technology computes a virtual navigational image based, at least in part, on the registration. The virtual navigational image depicts virtual patient anatomy of the anatomic region from the perspective of the distal portion (or another suitable location) of the medical instrument system.
In some embodiments, the extent to which the virtual navigational image of virtual patient anatomy matches the real navigational image of real patient anatomy of the anatomic region provides an indication of how well the point cloud of coordinate points registers with the image (e.g., with the segmented CT image) of the anatomic region. The present technology therefore leverages information provided by both the real and virtual navigational images to improve the registration of the point cloud generated from data captured by the medical instrument system with the preoperative and/or intraoperative image of the anatomic region. In the context of biopsy medical procedures, the present technology thereby increases localization accuracy of regions of interest (e.g., tumor position estimations), which increases the probability of successfully navigating an anatomic region of a patient and the probability of effectively diagnosing and treating disease (e.g., of effectively biopsying or ablating small lung tumors).
1. Robotic or Teleoperated Medical Systems and Associated Devices and Systems
To aid the operator 105 in controlling the manipulator assembly 102 and/or the medical instrument system 104 during an image-guided medical procedure, the medical system 100 may further include a positional sensor system 108, an endoscopic imaging system 109, an imaging system 118, and/or a virtual visualization system 115. In some embodiments, the positional sensor system 108 includes a location sensor system (e.g., an electromagnetic (EM) sensor system) and/or a shape sensor system for capturing positional sensor data (e.g., position, orientation, speed, velocity, pose, shape, etc.) of the medical instrument system 104. In these and other embodiments, the endoscopic imaging system 109 includes one or more image capture devices (not shown) that record endoscopic image data that includes concurrent or real-time images (e.g., video, still images, etc.) of patient anatomy. Images captured by the endoscopic imaging system 109 may be, for example, two or three-dimensional images of patient anatomy captured by an image capture device positioned within the patient 103, and are referred to hereinafter as “real navigational images.”
In some embodiments, the medical instrument system 104 may include components of the positional sensor system 108 and/or components of the endoscopic imaging system 109. For example, components of the positional sensor system 108 and/or components of the endoscopic imaging system 109 can be integrally or removably coupled to the medical instrument system 104. Additionally, or alternatively, the endoscopic imaging system 109 can include a separate endoscope (not shown) attached to a separate manipulator assembly (not shown) that can be used in conjunction with the medical instrument system 104 to image patient anatomy. The positional sensor system 108 and/or the endoscopic imaging system 109 may be implemented as hardware, firmware, software, or a combination thereof that interact with or are otherwise executed by one or more computer processors, such as the computer processor(s) 114 of the control system 112.
The imaging system 118 of the medical system 100 may be arranged in the surgical environment 101 near the patient 103 to obtain real-time and/or near real-time images of the patient 103 before, during, and/or after a medical procedure. In some embodiments, the imaging system 118 includes a mobile C-arm cone-beam computerized tomography (CT) imaging system for generating three-dimensional images. For example, the imaging system 118 can include a DynaCT imaging system from Siemens Corporation, or another suitable imaging system. In these and other embodiments, the imaging system 118 can include other imaging technologies, including magnetic resonance imaging (MRI), fluoroscopy, thermography, ultrasound, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like.
The virtual visualization system 115 of the control system 112 provides navigation and/or anatomy-interaction assistance to the operator 105 when controlling the medical instrument system 104 during an image-guided medical procedure. As described in greater detail below, virtual navigation using the virtual visualization system 115 can be based, at least in part, upon reference to an acquired pre-operative or intra-operative dataset (e.g., based, at least in part, upon reference to data generated by the positional sensor system 108, the endoscopic imaging system 109, and/or the imaging system 118) of anatomic passageways of the patient 103. In some implementations, for example, the virtual visualization system 115 processes preoperative and/or intraoperative image data of an anatomic region of the patient 103 captured by the imaging system 118 to generate an anatomic model (not shown) of the anatomic region. The virtual visualization system 115 then registers the anatomic model to positional sensor data generated by the positional sensor system 108 and/or to endoscopic image data generated by the endoscopic imaging system 109 to (i) map the tracked position, orientation, pose, shape, and/or movement of the medical instrument system 104 within the anatomic region to a correct position within the anatomic model, and/or (ii) determine a virtual navigational image of virtual patient anatomy of the anatomic region from a viewpoint of the medical instrument system 104 at a location within the anatomic model corresponding to a location of the medical instrument system 104 within the patient 103.
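For illustration only, the following sketch shows one way the frame-to-frame mapping described above might be expressed, assuming the registration is available as a 4×4 rigid transform from the medical instrument frame (XM, YM, ZM) to the image frame (XI, YI, ZI); the function and variable names are hypothetical and do not describe a particular implementation of the virtual visualization system 115.

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Assemble a 4x4 homogeneous pose from a 3-vector position and a 3x3 rotation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position
    return pose

def map_camera_pose_to_model(camera_pose_medical, registration_medical_to_image):
    """Map a camera pose tracked in the medical instrument frame (XM, YM, ZM) into the
    image frame (XI, YI, ZI), i.e., the pose at which a virtual camera would be placed
    within the anatomic model to render a virtual navigational image."""
    return registration_medical_to_image @ camera_pose_medical

if __name__ == "__main__":
    # Hypothetical tracked pose of the image capture device in the medical instrument frame.
    camera_pose = pose_to_matrix(np.array([12.0, -3.5, 48.0]), np.eye(3))
    # Hypothetical registration: 90-degree rotation about Z plus a translation.
    rot = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    registration = pose_to_matrix(np.array([100.0, 20.0, -5.0]), rot)
    virtual_camera_pose = map_camera_pose_to_model(camera_pose, registration)
    print(virtual_camera_pose)
```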
The display system 110 can display various images or representations of patient anatomy and/or of the medical instrument system 104 that are generated by the positional sensor system 108, by the endoscopic imaging system 109, by the imaging system 118, and/or by the virtual visualization system 115. In some embodiments, the display system 110 and/or the master assembly 106 may be oriented so the operator 105 can control the manipulator assembly 102, the medical instrument system 104, the master assembly 106, and/or the control system 112 with the perception of telepresence.
As discussed above, the manipulator assembly 102 drives the medical instrument system 104 at the direction of the master assembly 106 and/or the control system 112. In this regard, the manipulator assembly 102 can include select degrees of freedom of motion that may be motorized and/or teleoperated and select degrees of freedom of motion that may be non-motorized and/or non-teleoperated. For example, the manipulator assembly 102 can include a plurality of actuators or motors (not shown) that drive inputs on the medical instrument system 104 in response to commands received from the control system 112. The actuators can include drive systems (not shown) that, when coupled to the medical instrument system 104, can advance the medical instrument system 104 into a naturally or surgically created anatomic orifice. Other drive systems may move a distal portion (not shown) of the medical instrument system 104 in multiple degrees of freedom, which may include three degrees of linear motion (e.g., linear motion along the X, Y, Z Cartesian axes) and three degrees of rotational motion (e.g., rotation about the X, Y, Z Cartesian axes). Additionally, or alternatively, the actuators can be used to actuate an articulable end effector of the medical instrument system 104 (e.g., for grasping tissue in the jaws of a biopsy device and/or the like).
The manipulator assembly 102 includes an instrument carriage 226 mounted to an insertion stage 228. In the illustrated embodiment, the insertion stage 228 is linear, while in other embodiments, the insertion stage 228 is curved or has a combination of curved and linear sections. In some embodiments, the insertion stage 228 is fixed within the surgical environment 101. Alternatively, the insertion stage 228 can be movable within the surgical environment 101 but have a known location (e.g., via a tracking sensor (not shown) or other tracking device) within the surgical environment 101. In these alternatives, the medical instrument frame of reference (XM, YM, ZM) is fixed or otherwise known relative to the surgical frame of reference (XS, YS, ZS).
The medical instrument system 104 of
In operation, the manipulator assembly 102 can control insertion motion (e.g., proximal and/or distal motion along an axis A) of the elongate device 231 into the patient 103 via a natural or surgically created anatomic orifice of the patient 103 to facilitate navigation of the elongate device 231 through anatomic passageways of an anatomic region of the patient 103 and/or to facilitate delivery of a distal portion 238 of the elongate device 231 to or near a target location within the patient 103. For example, the instrument carriage 226 and/or the insertion stage 228 may include actuators (not shown), such as servomotors, that facilitate control over motion of the instrument carriage 226 along the insertion stage 228. Additionally, or alternatively, the manipulator assembly 102 in some embodiments can control motion of the distal portion 238 of the elongate device 231 in multiple directions, including yaw, pitch, and roll rotational directions (e.g., to navigate patient anatomy). To this end, the elongate device 231 may house or include cables, linkages, and/or other steering controls (not shown) that the manipulator assembly 102 can use to controllably bend the distal portion 238 of the elongate device 231. For example, the elongate device 231 can house at least four cables that can be used by the manipulator assembly 102 to provide (i) independent “up-down” steering to control a pitch of the distal portion 238 of the elongate device 231 and (ii) independent “left-right” steering of the elongate device 231 to control a yaw of the distal portion 238 of the elongate device 231.
The medical instrument 232 of the medical instrument system 104 can be used for medical procedures, such as for survey of anatomic passageways, surgery, biopsy, ablation, illumination, irrigation, and/or suction. Thus, the medical instrument 232 can include image capture probes, biopsy instruments, laser ablation fibers, and/or other surgical, diagnostic, and/or therapeutic tools. For example, the medical instrument 232 can include an endoscope or other biomedical device having one or more image capture devices 247 positioned at a distal portion 237 of and/or at other locations along the medical instrument 232. In these embodiments, an image capture device 247 can capture one or more real navigational images or video (e.g., a sequence of one or more real navigational image frames) of anatomic passageways and/or other real patient anatomy while the medical instrument 232 is within an anatomic region of the patient 103.
As discussed above, the medical instrument 232 can be deployed into and/or be delivered to a target location within the patient 103 via the channel 244 defined by the elongate device 231. In embodiments in which the medical instrument 232 includes an endoscope or other biomedical device having an image capture device 247 at its distal portion 237, the image capture device 247 can be advanced to the distal portion 238 of the elongate device 231 before, during, and/or after the manipulator assembly 102 navigates the distal portion 238 of the elongate device 231 to a target location within the patient 103. In these embodiments, the medical instrument 232 can be used as a survey instrument to capture real navigational images of anatomic passageways and/or other real patient anatomy, and/or to aid an operator (not shown) to navigate the distal portion 238 of the elongate device 231 through anatomic passageways to the target location.
As another example, after the manipulator assembly 102 positions the distal portion 238 of the elongate device 231 proximate a target location within the patient 103, the medical instrument 232 can be advanced beyond the distal portion 238 of the elongate device 231 to perform a medical procedure at the target location. Continuing with this example, after all or a portion of the medical procedure at the target location is complete, the medical instrument 232 can be retracted back into the elongate device 231 and, additionally or alternatively, be removed from the proximal end 236 of the elongate device 231 or from another instrument port (not shown) along the elongate device 231.
As shown in
The shape sensor 233 of the positional sensor system 108 includes an optical fiber extending within and aligned with the elongate device 231. In one embodiment, the optical fiber of the shape sensor 233 has a diameter of approximately 200 μm. In other embodiments, the diameter of the optical fiber may be larger or smaller. The optical fiber of the shape sensor 233 forms a fiber optic bend sensor that is used to determine a shape, orientation, and/or pose of the elongate device 231. In some embodiments, optical fibers having Fiber Bragg Gratings (FBGs) can be used to provide strain measurements in structures in one or more dimensions. Various systems and methods for monitoring the shape and relative position of an optical fiber in three dimensions are described in further detail in U.S. Patent Application Publication No. 2006/0013523 (filed Jul. 13, 2005) (disclosing fiber optic position and shape sensing device and method relating thereto); U.S. Pat. No. 7,781,724 (filed on Sep. 26, 2006) (disclosing fiber-optic position and shape sensing device and method relating thereto); U.S. Pat. No. 7,772,541 (filed on Mar. 12, 2008) (disclosing fiber-optic position and/or shape sensing based on Rayleigh scatter); and U.S. Pat. No. 6,389,187 (filed on Jun. 17, 1998) (disclosing optical fiber bend sensors), which are all incorporated by reference herein in their entireties. In these and other embodiments, sensors of the present technology may employ other suitable strain sensing techniques, such as Rayleigh scattering, Raman scattering, Brillouin scattering, and fluorescence scattering. In these and still other embodiments, the shape of the elongate device 231 may be determined using other techniques. For example, a history of the pose of the distal portion 238 of the elongate device 231 can be used to reconstruct the shape of the elongate device 231 over an interval of time.
In some embodiments, the shape sensor 233 is fixed at a proximal point 234 on the instrument body 235 of the medical instrument system 104. In operation, for example, the shape sensor 233 measures a shape in the medical instrument reference frame (XM, YM, ZM) from the proximal point 234 to another point along the optical fiber, such as the distal portion 238 of the elongate device 231. The proximal point 234 of the shape sensor 233 may be movable along with the instrument body 235, but the location of the proximal point 234 may be known (e.g., via a tracking sensor (not shown) or other tracking device).
The position measuring device 239 of the positional sensor system 108 provides information about the position of the instrument body 235 as it moves along the insertion axis A on the insertion stage 228 of the manipulator assembly 102. In some embodiments, the position measuring device 239 includes resolvers, encoders, potentiometers, and/or other sensors that determine the rotation and/or orientation of actuators (not shown) controlling the motion of the instrument carriage 226 of the manipulator assembly 102 and, consequently, the motion of the instrument body 235 of the medical instrument system 104.
As shown in
The coordinate points may together form a point cloud. For example,
In some embodiments, a point cloud (e.g., the point cloud 460) can include the union of all or a subset of coordinate points recorded by the positional sensor system 108 during an image capture period that spans multiple shapes, positions, orientations, and/or poses of the elongate device 231 within the anatomic region 350. In these embodiments, the point cloud can include coordinate points captured by the positional sensor system 108 that represent multiple shapes of the elongate device 231 while the elongate device 231 is advanced or moved through patient anatomy during the image capture period. Additionally, or alternatively, because the configuration, including shape and location, of the elongate device 231 within the patient 103 may change during the image capture period due to anatomical motion, the point cloud in some embodiments can comprise a plurality of coordinate points 462 captured by the positional sensor system 108 that represent the shapes of the elongate device 231 as the elongate device 231 passively moves within the patient 103. As described in greater detail below, a point cloud of coordinate points captured by the positional sensor system 108 can be registered to different models or datasets of patient anatomy.
Referring again to
In the embodiment illustrated in
Referring again to
As shown in
All or a portion of the graphical elements 681 and 682 of the image data 680 can be segmented and/or filtered to generate a virtual, three-dimensional model of the anatomic passageways 352 within the portion 655 of the anatomic region 350 (with or without the medical instrument system 104). In some embodiments, the graphical elements 681 and 682 can additionally or alternatively be segmented and/or filtered to generate an image point cloud (not shown) of the medical instrument system 104 based, at least in part, on images captured by the imaging system 118 (
As discussed above with respect to
The composite virtual image 791 of
Based, at least in part, on the registration, the virtual visualization system 115 can additionally or alternatively generate virtual navigational images (e.g., the virtual navigational image 792 of
In some embodiments, the virtual visualization system 115 can place the virtual camera within the anatomic model 750 at a position and orientation corresponding to the position and orientation of the image capture device 247 within the patient 103 (
As further shown in
2. Associated Methods
At step 801, the method 800 records positional sensor data of a medical instrument system. In some embodiments, the positional sensor data is recorded using a positional sensor system (e.g., the positional sensor system 108 of
As discussed in greater detail above, the positional sensor data provides position information (shape, position, orientation, pose, movement, etc.) of the medical instrument system while at least a portion of the medical instrument system is located within a patient. For example, the positional sensor data can include shape data. In these and other embodiments, the positional sensor data can include position information related to a distal end of and/or other points along an elongate device (e.g., the elongate device 231 of
At step 802, the method 800 generates a point cloud from the recorded positional sensor data. In some embodiments, the point cloud is generated from the union of all or a subset of the coordinate points recorded at step 801 during the data capture period of the positional sensor system. In these and other embodiments, the point cloud represents one or more shapes of the medical instrument system as the medical instrument system is stationary and/or is actively or passively moved within the patient. The point cloud may be generated in two or three dimensions in the medical instrument reference frame (XM, YM, ZM).
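As a minimal sketch of this step, the example below accumulates timestamped shape samples (each an N×3 array of coordinate points along the device in the medical instrument frame) into a single point cloud; the names and the optional voxel-based thinning are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def build_point_cloud(shape_samples, voxel_size=None):
    """Union of coordinate points from a sequence of shape-sensor samples.

    shape_samples: iterable of (timestamp, points) tuples, where points is an
    (N, 3) array of coordinates along the device in the medical instrument
    frame (XM, YM, ZM) at that timestamp.
    """
    cloud = np.vstack([points for _, points in shape_samples])
    if voxel_size is not None:
        # Optional thinning: keep one point per occupied voxel.
        keys = np.floor(cloud / voxel_size).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        cloud = cloud[np.sort(idx)]
    return cloud

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical samples: three snapshots of 50 points along the device.
    samples = [(t, rng.normal(size=(50, 3)) + [0.0, 0.0, 2.0 * t]) for t in range(3)]
    cloud = build_point_cloud(samples, voxel_size=0.5)
    print(cloud.shape)
```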
At step 803, the method 800 captures endoscopic image data of patient anatomy. In some embodiments, the endoscopic image data is captured using an endoscopic imaging system (e.g., the endoscopic imaging system 109 of
As discussed in greater detail above, the endoscopic image data captures one or more images (e.g., still images, video, etc.) from a viewpoint of the medical instrument system. For example, an image capture device of the endoscopic imaging system can be mounted to a distal end of the medical instrument system (e.g., to the distal portion 238 of the elongate device 231 and/or to the distal portion 237 of the medical instrument 232 of
At step 804, the method 800 captures, receives, and/or processes image data of the patient and generates an anatomic model. In some embodiments, the image data is captured using an imaging system (e.g., the imaging system 118 of
As discussed in greater detail above, the image data of the patient includes graphical elements representing anatomical features of the patient and (in the case of intraoperative image data) graphical elements representing the medical instrument system. A model of the anatomical features of the patient is generated by segmenting and filtering the graphical elements included in the image data. During the segmentation process, pixels or voxels generated from the image data may be partitioned into segments or elements and/or be tagged to indicate that they share certain characteristics or computed properties such as color, density, intensity, and texture. In some embodiments, less than all of the image data may be segmented and filtered. The segments or elements associated with anatomical features of the patient are then converted into an anatomic model, which is generated in an image reference frame (XI, YI, ZI).
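The segmentation is not limited to any particular algorithm; the toy sketch below merely illustrates the idea of tagging voxels that share a property (here an assumed intensity threshold), partitioning them into connected elements, and converting the kept element into coordinates in the image reference frame (XI, YI, ZI).

```python
import numpy as np
from scipy import ndimage

def segment_airway_voxels(volume, threshold=-500.0, voxel_spacing=(1.0, 1.0, 1.0)):
    """Toy segmentation: threshold voxel intensities, keep the largest connected
    element, and return its voxel locations as points in the image frame."""
    mask = volume < threshold                      # tag voxels sharing an intensity property
    labels, count = ndimage.label(mask)            # partition tagged voxels into connected elements
    if count == 0:
        return np.empty((0, 3))
    sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
    largest = int(np.argmax(sizes)) + 1
    ijk = np.argwhere(labels == largest)           # voxel indices of the kept element
    return ijk * np.asarray(voxel_spacing)         # scale indices to physical coordinates

if __name__ == "__main__":
    demo = np.zeros((20, 20, 20)) + 100.0
    demo[5:15, 9:11, 9:11] = -900.0                # hypothetical air-filled passageway
    model_points = segment_airway_voxels(demo)
    print(model_points.shape)
```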
At step 805, the method 800 generates one or more correspondences between the endoscopic image data of patient anatomy captured at step 803 and the image data of the patient captured, received, and/or processed at step 804, and/or updates the point cloud generated at step 802 based, at least in part, on the correspondence(s). For example, as discussed above, an image capture device of the endoscopic imaging system can be mounted to a distal portion of the medical instrument system and positioned within an anatomic region of the patient. In these embodiments, the endoscopic image data captured at step 803 includes (i) images of real patient anatomy near the distal end of the medical instrument system and (ii) indications of positions of the distal portion of the medical instrument within anatomic passageways actually visited by the medical instrument system. Thus, when the method 800 determines a real navigational image of patient anatomy (e.g., a carina marking a branching point of two or more anatomic passageways) in the endoscopic image data captured at step 803 matches a portion of the image data of the patient captured, received, and/or processed at step 804, the method 800 can generate a correspondence between the endoscopic image data of step 803 and the image data of step 804. Because the matched real navigational image of patient anatomy in the endoscopic image data is associated with a timestamp and a known position of the image capture device within the medical instrument frame of reference (XM, YM, ZM), the correspondence generated between the endoscopic image data of step 803 and the image data of step 804 provides a known correspondence between the medical instrument frame of reference (XM, YM, ZM) and the image reference frame (XI, YI, ZI) at the known position of the image capture device. In some embodiments, the method 800 updates the point cloud generated at step 802 based, at least in part, on the generated correspondences. For example, the method 800 can add one or more coordinate points in the medical instrument frame of reference (XM, YM, ZM) to the point cloud of step 802 at and/or proximate the known position of the image capture device when the image capture device captured the real navigational image of the endoscopic image data that matched the image data of step 804.
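A rough sketch of this bookkeeping is shown below; the matching of the real navigational image to the image data is assumed to be provided elsewhere, and the names are hypothetical.

```python
import numpy as np

def add_image_anchor(point_cloud, tip_track, frame_timestamp, matched_model_point):
    """Append an anchor coordinate point at the device position recorded closest to the
    timestamp of a matched endoscopic frame.

    point_cloud: (N, 3) coordinate points in the medical instrument frame.
    tip_track: list of (timestamp, xyz) device positions in the medical instrument frame.
    matched_model_point: (3,) location of the matched anatomy in the image frame.
    Returns the augmented cloud and the (instrument_point, image_point) correspondence.
    """
    times = np.array([t for t, _ in tip_track])
    positions = np.array([p for _, p in tip_track])
    nearest = int(np.argmin(np.abs(times - frame_timestamp)))
    anchor = positions[nearest]
    augmented = np.vstack([point_cloud, anchor])
    return augmented, (anchor, np.asarray(matched_model_point))

if __name__ == "__main__":
    cloud = np.zeros((10, 3))
    track = [(t * 0.1, np.array([0.0, 0.0, 1.0 * t])) for t in range(10)]
    # Hypothetical: a carina seen in the frame at t = 0.45 s matched to a model point.
    cloud, corr = add_image_anchor(cloud, track, 0.45, [32.0, 14.0, -8.0])
    print(cloud.shape, corr)
```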
At step 806, the method 800 registers the point cloud generated at step 802 and/or updated at step 805 to the anatomic model generated at step 804. In some embodiments, the registration involves aligning the medical instrument frame of reference (XM, YM, ZM) and/or the surgical reference frame (XS, YS, ZS) with the image reference frame (XI, YI, ZI). For example, the point cloud of steps 802 and/or 805 in the medical instrument reference frame (XM, YM, ZM) can be registered to the anatomic model in the image reference frame (XI, YI, ZI). This registration may rotate, translate, or otherwise manipulate by rigid and/or non-rigid transforms coordinate points of the point cloud (e.g., the coordinate points generated from the positional sensor data at steps 801 and 802 and/or the added coordinate points generated at step 805 from correspondences between real navigational images in the endoscopic image data of step 803 and the image data of step 804) to align the coordinate points with the anatomic model generated at step 804. The transforms may be six degrees-of-freedom transforms, such that the point cloud may be translated or rotated in any or all of X, Y, Z, pitch, roll, and yaw. In some embodiments, the method 800 uses an iterative closest point (ICP) algorithm to perform the registration. In particular, the method 800 can (i) compute a point-to-point correspondence between coordinate points in the point cloud and points (e.g., on a centerline or at other locations) within the anatomic model and (ii) compute an optimal transform to minimize Euclidean distances between corresponding points. In other embodiments, the method 800 can use another technique to perform the registration.
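For reference, the sketch below outlines a conventional point-to-point ICP formulation consistent with the description above (nearest-neighbor correspondences, a weighted singular-value-decomposition solve for the rigid transform, and iteration until the residual stops improving); it is an illustrative sketch rather than the disclosed implementation, and the optional weights anticipate the differently weighted anchor points discussed below.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(source, target, weights):
    """Weighted least-squares rigid transform (R, t) mapping source points onto target points."""
    w = weights / weights.sum()
    mu_s = (w[:, None] * source).sum(axis=0)
    mu_t = (w[:, None] * target).sum(axis=0)
    H = (source - mu_s).T @ (w[:, None] * (target - mu_t))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, mu_t - R @ mu_s

def icp(cloud, model_points, weights=None, iterations=50, tol=1e-6):
    """Register a sensed point cloud to anatomic-model points by iterative closest point."""
    if weights is None:
        weights = np.ones(len(cloud))
    tree = cKDTree(model_points)
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = cloud.copy()
    prev_err = np.inf
    for _ in range(iterations):
        _, idx = tree.query(moved)                        # point-to-point correspondences
        R, t = rigid_fit(moved, model_points[idx], weights)
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t   # compose the incremental transform
        err = np.mean(np.linalg.norm(moved - model_points[idx], axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = rng.normal(size=(500, 3)) * 20.0              # stand-in for anatomic-model points
    angle = np.deg2rad(8.0)
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
    sensed = model[:200] @ Rz.T + np.array([1.5, -1.0, 0.5])  # slightly misaligned subset
    R, t, residual = icp(sensed, model)
    print("mean residual after ICP:", round(residual, 3))
```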
In some embodiments, the method 800 can use the endoscopic image data captured at step 803 to improve the accuracy of and/or otherwise provide insight for the registration between the point cloud generated at step 802 and/or updated at step 805 and the anatomic model generated at step 804. For example, as discussed above with respect to step 805, the method 800 can add one or more coordinate points at known locations of the image capture device where patient anatomy in real navigational images of the endoscopic image data of step 803 matches patient anatomy captured in the image data of step 804. In some embodiments, the added coordinate points can be used in the ICP algorithm in combination with the coordinate points generated from the positional sensor data of steps 801 and/or 802 to compute the optimal transform. In these and other embodiments, the added coordinate points can be weighted differently (e.g., heavier or lighter) in the computation than the coordinate points generated from the positional sensor data of step 801. In these and still other embodiments, orientation alignment data captured by the correspondence at step 805 (e.g., information regarding how patient anatomy in a matched real navigational image of the endoscopic image data of step 803 must be transformed (e.g., translated, rotated, reflected, etc.) to align with the corresponding portion of patient anatomy in the image data of step 804) can be fed as an additional error term minimized by the registration algorithm to further inform the registration between the point cloud and the image data of step 804.
In these and other embodiments, the method 800 can use the endoscopic image data captured at step 803 to temporally or locally improve the accuracy of and/or otherwise provide insight for the registration performed at step 806. For example, the method 800 can use coordinate points added at step 805 and/or orientation alignment data captured by the correspondence at step 805 to improve the accuracy of and/or otherwise provide insight for only a portion of the registration performed at step 806. Continuing with this example, the portion of the registration performed at step 806 can correspond to coordinate points from steps 802 and/or 805 and/or a subset of points of the anatomic model generated at step 804 within threshold distances of coordinate points added at step 805 and/or within threshold distances of correspondences generated at step 805.
Alternatively, the method 800 can perform a registration (e.g., a sparse point registration) between only (a) coordinate points stemming from the endoscopic image data of step 803 and (b) the anatomic model generated at step 804.
In these and other embodiments, the method 800 can use the real navigational images of the endoscopic image data of step 803 to provide insight as to the pathway taken by the medical instrument system as it is navigated throughout an anatomic region. For example, after the method 800 identifies a branching point in a real navigational image of the endoscopic image data of step 803, the method 800 can use the real navigational image and/or one or more real navigational images previously and/or subsequently captured in the endoscopic image data to determine which of the anatomic passageways of the branching point the medical instrument system took as it navigated throughout the anatomic region.
As a more specific example with continuing reference to
When the medical instrument reference frame (XM, YM, ZM) is registered to the image reference frame (XI, YI, ZI), images displayed to an operator on the display system may allow the operator to more accurately steer the medical instrument system through patient anatomy, observe the patient anatomy from the perspective of a distal end of the medical instrument system, and/or improve efficiency and efficacy of targeted medical procedures. For example, the method 800 in some embodiments can display a composite virtual image (e.g., the composite virtual image 791 of
In these and other embodiments, based, at least in part, on the performed registration, the method 800 can calculate a real-time and/or near real-time virtual navigational image (e.g., the virtual navigational image 792 of
At step 807, the method 800 estimates and/or displays a registration error for the registration performed at step 806. For example, the method 800 can compute a disagreement between (i) a known position of the image capture device associated with a real navigational image of the endoscopic image data of step 803 that matches the image data of step 804 and (ii) the estimated position of the image capture device within the registration generated at step 806. After computing the disagreement, the method 800 can display the estimated registration error on a display of the display system.
For the sake of clarity and understanding of the above concept, consider the following additional example with reference to both
In these and other embodiments, the method 800 can estimate and/or display a registration error in real-time or near real-time. For example, the method 800 can estimate a registration error in real-time or near real-time for a current location of an image capture device of the medical instrument system within the patient. In this example, the method 800 can compute a disagreement at or proximate the current location of the image capture device between (i) a position of the image capture device associated with a real navigational image of the endoscopic image data of step 803 that matches the image data of step 804 and (ii) the estimated position of the image capture device within the registration performed at step 806.
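Expressed as a minimal sketch, assuming the registration is available as a rotation R and translation t from the medical instrument frame to the image frame (the example values are hypothetical):

```python
import numpy as np

def registration_error(camera_position_medical, matched_position_image, R, t):
    """Euclidean disagreement between (i) the position of the image capture device implied
    by a matched real navigational image and (ii) the position predicted for it by the
    current registration (R, t) from the medical instrument frame to the image frame."""
    predicted_image = R @ np.asarray(camera_position_medical) + t
    return float(np.linalg.norm(predicted_image - np.asarray(matched_position_image)))

if __name__ == "__main__":
    R = np.eye(3)
    t = np.array([1.0, 0.0, 0.0])
    # Hypothetical values: device tracked at (10, 5, 40) mm; matched anatomy implies (11.5, 5.2, 40.1) mm.
    err_mm = registration_error([10.0, 5.0, 40.0], [11.5, 5.2, 40.1], R, t)
    print(f"estimated registration error: {err_mm:.2f} mm")  # could drive a color or pattern cue on the display
```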
After computing the disagreement, the method 800 can display the estimated registration error in real-time or near real-time on a display of the display system. Referring again to
In these and other embodiments, the method 800 can vary a color, pattern, and/or other visual indicators of other information on the display to indicate an estimated registration error in real-time, near real-time, or otherwise. For example, the method 800 can vary a color, pattern, and/or other visual indicator used to display virtual patient anatomy in a virtual navigational image (e.g., the virtual navigational image 792 of
At step 808, the method 800 updates the registration performed at step 806. In some embodiments, the method 800 can update the registration by returning to step 801 and reperforming (e.g., iteratively performing) all or a subset of the steps 801-807. In these and other embodiments, the method 800 can update the registration performed at step 806 using the endoscopic image data captured at step 803. For example, the method 800 can use one or more real navigational images of the endoscopic image data of step 803 to align computed virtual navigational images to corresponding real navigational images of the endoscopic image data. For the sake of clarity and understanding, consider the following example with reference to
In some embodiments, the method 800 can compute a virtual navigational image based, at least in part, on the registration performed at step 806 at a location corresponding to the location of the image capture device when the image capture device captured the real navigational image 1030.
Referring to
In these and still other embodiments, the method 800 (at step 808) can update the registration performed at step 806 by correcting the registration for drift away from the endoscopic image data of step 803. For the sake of clarity and understanding of this concept, consider the following example with reference to
The virtual patient anatomy 1271 in the virtual navigational image 1250 of
Although the above concept is illustrated and discussed above in the context of matching a branching point of two anatomic passageways in a virtual navigational image with corresponding patient anatomy in real navigational images, the above concept is particularly useful in locations where a branching point is not visible in the virtual and real navigational images. For example, a diameter of an anatomic passageway typically decreases as the medical instrument system navigates further along it. Thus, the above concept can be used to determine a real navigational image that illustrates an anatomic passageway with a diameter that best matches a diameter of the anatomic passageway in a virtual navigational image. The best match, therefore, can provide information regarding how far into an anatomic passageway the medical instrument system has been inserted at a given point in time.
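One simple similarity measure that could stand in for the matching described above is normalized cross-correlation between a virtual navigational image and candidate real navigational frames; the sketch below uses that measure purely for illustration, and the rendering of the virtual image is assumed to be available.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Similarity in [-1, 1] between two grayscale images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_matching_frame(virtual_image, real_frames):
    """Return the index and score of the real navigational frame most similar to the virtual image."""
    scores = [normalized_cross_correlation(virtual_image, frame) for frame in real_frames]
    best = int(np.argmax(scores))
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    virtual = rng.random((64, 64))
    # Hypothetical endoscopic frames: one is a noisy version of the virtual view.
    frames = [rng.random((64, 64)) for _ in range(4)]
    frames[2] = virtual + 0.05 * rng.standard_normal((64, 64))
    idx, score = best_matching_frame(virtual, frames)
    print(idx, round(score, 3))  # expect index 2
```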
In some embodiments, the method 800, at step 808, temporally or locally updates the registration performed at step 806. For example, the method 800 can update the registration performed at step 806 for a specific respiratory or cardiac phase. Continuing with this example, the method 800 can update the registration performed at step 806 differently for a different respiratory or cardiac phase. As another example, the method 800 can, at step 808, update only a portion of the registration performed at step 806. Continuing with this example, the portion of the registration updated can correspond to coordinate points from steps 802 and/or 805 and/or a subset of points of the anatomic model generated at step 804 within threshold distances of coordinate points and/or anatomic model points corresponding to one or more real and/or virtual navigational images.
Some of the computations (e.g., matching between real and virtual navigational images) performed in the steps 801-808 above can be particularly resource intensive. Thus, as an extension of any one or more of the steps 801-808 discussed above, the method 800 can use the endoscopic image data captured at step 803 and/or other information available to the method 800 to determine when to perform certain computations of the method 800. In some embodiments, the method 800 can use input/output values of the medical instrument system to identify when to perform registration computations. For example, the method 800 can use distance traversed by a distal end of the medical device as an indicator of when to perform computations. As a more specific example, the method 800 can anticipate that a patient's main carina lies approximately a first distance away from a distal end of the medical instrument system at a point of initial insertion into the patient. Thus, the method 800 can monitor and identify when the distal end of the medical instrument system has traversed the first distance from the point of initial insertion to determine when to attempt to capture the main carina in endoscopic image data and/or when to attempt to generate a correspondence between real navigational images of the endoscopic image data and (e.g., preoperative) image data of the patient's main carina. Additionally, or alternatively, the method 800 can use motion of the positional sensor system and/or the registration performed at step 806 to estimate when an image capture device of the endoscopic imaging system is likely near a carina and can use this estimation to determine when to attempt to generate a correspondence between real navigational images of the endoscopic image data captured at step 803 and the (e.g., preoperative and/or intraoperative) image data of the patient captured, received, and/or processed at step 804.
In these and other embodiments, the method 800 can use the occurrence of other events to determine when to perform computations. For example, the method 800 can perform specific computations each time the distal end or another portion of the medical instrument system traverses a threshold distance (e.g., each time the position of the distal end changes by a threshold amount). As another example, the method 800 can perform specific computations after the orientation of the distal end of the medical instrument system has changed by a threshold amount. As yet another example, the method 800 can capture positional sensor data and/or endoscopic image data periodically (e.g., in accordance with set intervals and/or events) and can wait to perform resource intensive computations until the method 800 determines the medical instrument system is subject to commanded movement (e.g., by an operator) and/or until another event occurs.
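The event gating described above might be organized as in the following sketch; the distance and orientation thresholds and the commanded-motion flag are illustrative assumptions.

```python
import numpy as np

class ComputationGate:
    """Decide when to run resource-intensive registration computations."""

    def __init__(self, distance_threshold_mm=10.0, angle_threshold_deg=15.0):
        self.distance_threshold = distance_threshold_mm
        self.angle_threshold = np.deg2rad(angle_threshold_deg)
        self.last_position = None
        self.last_direction = None

    def should_compute(self, tip_position, tip_direction, commanded_motion=False):
        tip_position = np.asarray(tip_position, dtype=float)
        tip_direction = np.asarray(tip_direction, dtype=float)
        tip_direction = tip_direction / np.linalg.norm(tip_direction)
        if self.last_position is None or commanded_motion:
            triggered = True
        else:
            moved = np.linalg.norm(tip_position - self.last_position) >= self.distance_threshold
            cos_angle = np.clip(np.dot(tip_direction, self.last_direction), -1.0, 1.0)
            turned = np.arccos(cos_angle) >= self.angle_threshold
            triggered = moved or turned
        if triggered:
            self.last_position, self.last_direction = tip_position, tip_direction
        return triggered

if __name__ == "__main__":
    gate = ComputationGate()
    print(gate.should_compute([0, 0, 0], [0, 0, 1]))    # True (first sample)
    print(gate.should_compute([0, 0, 4], [0, 0, 1]))    # False (small motion since last run)
    print(gate.should_compute([0, 0, 15], [0, 0, 1]))   # True (traversed the distance threshold)
```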
Although the steps of the method 800 are discussed and illustrated in a particular order, the method 800 illustrated in
Several aspects of the present technology are set forth in the following examples. Although several aspects of the present technology are set forth in examples directed to systems, computer-readable mediums, and methods, any of these aspects of the present technology can similarly be set forth in examples directed to any of systems, computer-readable mediums, and methods in other embodiments.
1. A medical instrument system for use in an image-guided medical procedure, the system comprising:
2. The system of example 1 wherein the operations further comprise generating one or more correspondences by matching patient anatomy in one or more images of the first image data with patient anatomy of the anatomic region in the portion of the second image data.
3. The system of example 2 wherein the patient anatomy in the one or more images of the first image data and the patient anatomy of the anatomic region in the portion of the second image data are one or more branching points of anatomic passageways in the anatomic region.
4. The system of example 2 or example 3 wherein the operations further comprise adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region associated with the one or more images of the first image data.
5. The system of example 4 wherein generating the registration includes weighting the one or more added coordinate points differently than other coordinate points of the point cloud generated from the positional sensor data.
6. The system of example 4 or example 5 wherein the portion of the point cloud includes only the one or more added coordinate points.
7. The system of any of examples 2-6 wherein the operations further comprise determining a transformation to align an image of the one or more images of the first image data with corresponding patient anatomy of the anatomic region in the portion of the second image data, and wherein generating the registration includes generating the registration based, at least in part, on the transformation.
8. The system of any of examples 2-7 wherein the operations further comprise determining, based, at least in part, on the first image data, at least a portion of a pathway taken by the biomedical device throughout the anatomic region, and wherein generating the registration includes generating the registration between at least the portion of the point cloud and a section of the anatomic region corresponding to the portion of the pathway.
9. The system of any of examples 2-8 wherein the operations further comprise estimating a registration error between a correspondence of the one or more correspondences and the generated registration.
10. The system of example 9 wherein the operations further comprise coloring a display of the generated registration based, at least in part, on a magnitude of the estimated registration error.
11. The system of example 10 wherein the operations further comprise:
12. The system of any of examples 1-11 wherein the operations further comprise:
13. The system of example 12 wherein updating the registration includes updating the registration based, at least in part, on the determined transformation.
14. The system of example 12 or example 13 wherein the determining the transformation includes:
15. The system of any of examples 1-14 wherein the operations further comprise:
16. The system of example 15 wherein the operations further comprise determining a difference between (i) a timestamp associated with the image of the first image data that best matches the virtual image and (ii) the first timestamp, and wherein updating the registration includes updating the registration based, at least in part, on the determined difference.
17. The system of any of examples 1-16 wherein the operations further comprise:
18. The system of any of examples 1-17 wherein the operations further comprise:
19. The system of any of examples 1-18 wherein the operations further comprise:
20. A non-transitory, computer-readable medium storing instructions thereon that, when executed by one or more processors of a computing system, cause the computing system to perform operations comprising:
21. The non-transitory, computer-readable medium of example 20 wherein the operations further comprise generating one or more correspondences by matching patient anatomy in one or more images of the first image data with patient anatomy of the anatomic region in the portion of the second image data.
22. The non-transitory, computer-readable medium of example 21 wherein the operations further comprise adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region associated with the one or more images of the first image data.
23. The non-transitory, computer-readable medium of example 22 wherein generating the registration includes weighting the one or more added coordinate points differently than other coordinate points of the point cloud generated from the positional sensor data.
24. The non-transitory, computer-readable medium of any of examples 21-23 wherein the operations further comprise determining a transformation to align an image of the one or more images of the first image data with corresponding patient anatomy of the anatomic region in the portion of the second image data, and wherein generating the registration includes generating the registration based, at least in part, on the transformation.
25. The non-transitory, computer-readable medium of any of examples 21-24 wherein the operations further comprise determining, based, at least in part, on the first image data, at least a portion of a pathway taken by the biomedical device throughout the anatomic region, and wherein generating the registration includes generating the registration between at least the portion of the point cloud and a section of the anatomic region corresponding to the portion of the pathway.
26. The non-transitory, computer-readable medium of any of examples 21-25 wherein the operations further comprise:
27. The non-transitory, computer-readable medium of any of examples 21-26 wherein the operations further comprise:
28. The non-transitory, computer-readable medium of any of examples 20-27 wherein the operations further comprise:
29. The non-transitory, computer-readable medium of any of examples 20-28 wherein the operations further comprise:
30. The non-transitory, computer-readable medium of example 29 wherein the operations further comprise determining a difference between (i) a timestamp associated with the image of the first image data that best matches the virtual image and (ii) the first timestamp, and wherein updating the registration includes updating the registration based, at least in part, on the determined difference.
31. A method, comprising:
The systems and methods described herein can be provided in the form of tangible and non-transitory machine-readable medium or media (such as a hard disk drive, hardware memory, etc.) having instructions recorded thereon for execution by a processor or computer. The set of instructions can include various commands that instruct the computer or processor to perform specific operations such as the methods and processes of the various embodiments described here. The set of instructions can be in the form of a software program or application. The computer storage media can include volatile and non-volatile media, and removable and non-removable media, for storage of information such as computer-readable instructions, data structures, program modules or other data. The computer storage media can include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, or other optical storage, magnetic disk storage, or any other hardware medium which can be used to store desired information and that can be accessed by components of the system. Components of the system can communicate with each other via wired or wireless communication. The components can be separate from each other, or various combinations of components can be integrated together into a monitor or processor or contained within a workstation with standard computer hardware (for example, processors, circuitry, logic circuits, memory, and the like). The system can include processing devices such as microprocessors, microcontrollers, integrated circuits, control units, storage media, and other hardware.
Although many of the embodiments are described above in the context of navigating and performing medical procedures within lungs of a patient, other applications and other embodiments in addition to those described herein are within the scope of the present technology. For example, unless otherwise specified or made clear from context, the devices, systems, methods, and computer program products of the present technology can be used for various image-guided medical procedures, such as medical procedures performed on, in, or adjacent hollow patient anatomy, and, more specifically, in procedures for surveying, biopsying, ablating, or otherwise treating tissue within and/or proximal the hollow patient anatomy. Thus, for example, the systems, devices, methods, and computer program products of the present disclosure can be used in one or more medical procedures associated with other patient anatomy, such as the bladder, urinary tract, GI system, and/or heart of a patient.
As used herein, the term “operator” shall be understood to include any type of personnel who may be performing or assisting a medical procedure and, thus, is inclusive of a physician, a surgeon, a doctor, a nurse, a medical technician, other personnel or user of the technology disclosed herein, and any combination thereof. Additionally, or alternatively, the term “patient” should be considered to include human and/or non-human (e.g., animal) patients upon which a medical procedure is being performed.
From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the technology. To the extent any materials incorporated herein by reference conflict with the present disclosure, the present disclosure controls. Moreover, unless the word “or” is expressly limited to mean only a single item exclusive from the other items in reference to a list of two or more items, then the use of “or” in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list. As used herein, the phrase “and/or” as in “A and/or B” refers to A alone, B alone, and both A and B. Where the context permits, singular or plural terms can also include the plural or singular term, respectively. Additionally, the terms “comprising,” “including,” “having” and “with” are used throughout to mean including at least the recited feature(s) such that any greater number of the same feature and/or additional types of other features are not precluded.
Furthermore, as used herein, the term “substantially” refers to the complete or nearly complete extent or degree of an action, characteristic, property, state, structure, item, or result. For example, an object that is “substantially” enclosed would mean that the object is either completely enclosed or nearly completely enclosed. The exact allowable degree of deviation from absolute completeness may in some cases depend on the specific context. However, generally speaking the nearness of completion will be so as to have the same overall result as if absolute and total completion were obtained. The use of “substantially” is equally applicable when used in a negative connotation to refer to the complete or near complete lack of an action, characteristic, property, state, structure, item, or result.
The above detailed descriptions of embodiments of the technology are not intended to be exhaustive or to limit the technology to the precise form disclosed above. Although specific embodiments of, and examples for, the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology, as those skilled in the relevant art will recognize. For example, while steps are presented in a given order, alternative embodiments can perform steps in a different order. As another example, various components of the technology can be further divided into subcomponents, and/or various components and/or functions of the technology can be combined and/or integrated. Furthermore, although advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments can also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology.
It should also be noted that other embodiments in addition to those disclosed herein are within the scope of the present technology. For example, embodiments of the present technology can have different configurations, components, and/or procedures in addition to those shown or described herein. Moreover, a person of ordinary skill in the art will understand that these and other embodiments can be practiced without several of the configurations, components, and/or procedures shown or described herein without deviating from the present technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
This application claims the benefit of U.S. Provisional Application No. 62/994,205, filed Mar. 24, 2020, which is incorporated by reference herein in its entirety.
International Application: PCT/US2021/022628, filed 3/16/2021 (WO).
U.S. Provisional Application: 62/994,205, filed Mar. 2020 (US).