Currently claimed embodiments of the invention relate to methods for intraoperative image-guided navigation of surgical instrumentation.
Orthopaedic trauma is a prominent socioeconomic burden in terms of lost quality of life and cost of surgery1. In particular, fractures of the pelvic ring present a major challenge in orthopaedic trauma surgery, with an incidence rate of 37 out of 100,000 individuals/year (3-7% of all skeletal fractures)2-4 and considerably poor surgical outcomes with high complication rates (>30%) and mortality rates (8%)5,6. Surgical treatment has commonly consisted of open or closed reduction followed by internal fixation, with percutaneous approaches under fluoroscopic guidance gaining prevalence in recent years due to shorter recovery times7-9.
A common surgical approach to fixation of pelvic fractures involves insertion of a guidewire (typically a Kirschner wire, K-wire) along bone corridors in the pubis, ischium, and/or ilium, followed by insertion of a cannulated screw along the K-wire trajectory (after which the guidewire is removed)10. The procedure is commonly guided by x-ray fluoroscopy on a mobile C-arm, where intermittent exposures are acquired concurrently with placement of the guidewire to assess device position relative to surrounding anatomy—namely, conformance within bone corridors11.
Surgeons cognitively and qualitatively estimate the 3D position of the K-wire within the pelvis from multiple projection views (e.g., inlet, outlet, and other oblique views). However, due to the challenge of 3D reckoning within the complex morphology of the pelvis, accurate K-wire placement often requires “fluoro-hunting” and multiple trial-and-error attempts, even for experienced surgeons. It is not uncommon for the guidewire to be completely withdrawn and reinserted along a new trajectory if the K-wire appears in danger of breaching the bone corridor, leading to extended procedure time and fluoroscopic exposure (often exceeding 120 seconds of fluoroscopy time and hundreds of radiographic views10). Accurate K-wire placement under fluoroscopic guidance therefore entails a long learning curve before sufficient quality in device placement is achieved.
Surgical navigation has emerged as a potential solution for guidance of device placement and reduction of radiation dose. Current state-of-the-art navigation systems include the Medtronic StealthStation and Stryker Navigation System II, which use optical (infrared) tracking of rigid markers to provide virtual visualization of the location of surgical instruments with respect to preoperative or intraoperative CT, cone-beam CT (CBCT), or MRI. Navigation based on such trackers is fairly common in brain and spine surgery12,13, where its use has demonstrated improved surgical precision. However, orthopaedic trauma surgery has not seen widespread adoption of these systems, primarily due to factors of cost, the requirement for external trackers with line of sight, and workflow bottlenecks associated with setup calibration/registration. With such a fast-paced environment and steep workflow requirements, fluoroscopic guidance remains the mainstay for surgical guidance in orthopaedic trauma, with the standard of care largely unchanged for decades.
One solution, specifically targeting procedures in orthopaedic trauma surgery, disclosed the attachment of a calibrated, tracked marker to a surgical drill (U.S. Pat. No. 6,887,247, CAS drill guide and drill tracking system). However, that invention still required the maintenance of line-of-sight to an external infrared camera and suffered from the same limitations as mainstream navigation systems.
As noted above, current state-of-the-art systems typically use optical (infrared) tracking of rigid markers to provide virtual visualization of the location of surgical instruments with respect to preoperative or intraoperative CT, cone-beam CT (CBCT), or MRI. To provide such visualizations, a registration between the surgical instrument (via the tracking system) and 3D intraoperative and/or preoperative images must be performed.
Typical registration procedures involve placing markers on the patient prior to 3D image acquisition, requiring the markers to remain in place until the start of the surgical procedure. At the start of the procedure, the marker positions on the patient and within the 3D image are matched to obtain the registration relating the surgical instrument to the patient. A common drawback in such navigation approaches is that any perturbations in the position of markers that occur between the initial preoperative setup and the time of procedure cannot be tracked.
Since any variation in marker location may adversely affect registration accuracy, prior solutions have sought to mitigate perturbations by invasively attaching markers to rigid structures within the patient, further adding to the cost, time, and complexity of the procedure. Other approaches have looked towards marker-less and surface-matching based registration methods. In marker-less registration, corresponding anatomical landmarks (e.g. distinct bony surfaces) between prior image data and the patient are manually identified by the surgeon. While such an approach avoids the use of invasive markers, it is subject to inconsistencies between data collected during planning and the time of surgery (e.g. skin-surface deformations).
In conventional surface-matching approaches, a mapped surface of the patient is created by tracing a tracked pointer along the patient's skin surface. The resulting surface is then registered to a corresponding surface segmented from image data. Such methods are subject to the same errors as the conventional marker-less approach and often require time-consuming setup and multiple manual re-registration steps, further adding to the time and complexity of the procedure.
Surgical navigation solutions that are potentially better suited to the demanding workflow of orthopaedic trauma surgery have been reported in prior art. Some have mounted a camera on the C-arm to provide augmented fluoroscopic guidance and tool tracking.14-16 Others have aimed to improve accuracy and reduce line-of-sight obstruction—e.g., mounting a surgical tracker to the C-arm17—a solution that also provides stereoscopic video for augmented views of the surgical scene and/or fluoroscopy. Others incorporated the tracking system into the operating table—as in Yoo et al.18, who incorporated an open frame electromagnetic field generator into the operating table to provide real-time tracking while maintaining compatibility with fluoroscopy (i.e., the PA view shooting through the open frame). Still others have sought to mount tracking equipment on the instrument itself19-21.
Prior art in ultrasound needle-based interventions has disclosed the mounting of a video camera onto an ultrasound probe as a means to realize targeted needle guidance (U.S. Pat. No. 14,092,755, Ultrasound system with stereo image guidance or tracking; U.S. Pat. No. 14,689,849, System and method for fused image-based navigation with late marker placement).
In Magaraggia et al.19, a video-based guidance framework was described for real-time guidance of the surgical drill tip in distal radial fracture surgery. The framework took advantage of implant-specific drill sleeves by augmenting them with binary markers, allowing video-based tracking by a camera mounted on the surgical drill. The study reported improvements in screw positioning accuracy compared to conventional fluoroscopic guidance and compatibility with clinical workflow, recognizing that the approach is limited to surgeries that routinely employ drill sleeves.
Prior solutions that go beyond the conventional approach for fiducial registration have been reported. One such solution discloses the invention of a flexible tracking article that consists of a substrate containing multiple point-based features that can be detected by an external tracking system (viz. active LEDs detected by an external infrared camera). The point-based features are used to create a surface model of the patient that can then be registered to surfaces extracted from image data (e.g. preoperative CT or MRI) allowing automatic registration and tracking of surgical devices without the need for invasive markers. (U.S. Pat. No. 9,901,407, Computer-implemented technique for determining a coordinate transformation for surgical navigation; U.S. Pat. No. 7,869,861, Flexible tracking article and method of using the same).
Prior art has also used “multimodal” markers visible to both an external tracking modality and an x-ray imaging system to establish navigation in surgical procedures that routinely use fluoroscopic imaging (e.g., orthopaedic-trauma surgery). In Hamming et al.22 and Dang et al.23, multimodal markers comprising an infrared reflective sphere enclosing a radio-opaque tungsten sphere were described. The works reported on various arrangements of these multimodal markers (e.g. predefined, known 3D configurations as well as free-form 3D configurations), as well as the corresponding methods used to automatically solve the registration between a stereoscopic infrared tracking system and intraoperative CBCT images. In Andress et al.24, a multimodal marker (optical and radio-opaque) with features based on the well-known ARToolKit was used to co-register an augmented reality head-mounted display with fluoroscopy images.
In some embodiments, a system for surgical navigation includes an instrument configured for a medical procedure on a patient; a camera attached to the instrument, wherein the instrument has a spatial position relative to the camera; an x-ray system configured to acquire x-ray images of the patient during the medical procedure; and multiple fiducial markers positioned on the surface of the patient during the medical procedure, the fiducial markers being detectable by both the camera and the x-ray system and comprising a radio-opaque material arranged as at least one of a line and a point. The system also includes a computer configured to receive an optical image acquired by the camera, receive an x-ray image acquired by the x-ray system, identify a subset of the fiducial markers that are visible in the optical image and are also visible in the x-ray image, determine, based on the optical image, a spatial position relative to the camera for each fiducial marker in the subset of fiducial markers, determine, based on the x-ray image, a spatial position relative to the x-ray system for each fiducial marker in the subset of fiducial markers, and determine, based on at least the spatial positions of the subset of fiducial markers relative to the camera and the spatial positions of the subset of fiducial markers relative to the x-ray system, a spatial position for the instrument relative to the x-ray system.
In some embodiments, a method for surgical navigation includes receiving an optical image acquired by a camera, the camera attached to an instrument configured for a medical procedure on a patient, the instrument having a spatial position relative to the camera. The method includes receiving an x-ray image acquired by an x-ray system configured to acquire x-ray images of the patient during the medical procedure. For multiple fiducial markers positioned on the surface of the patient during the medical procedure and detectable by both the camera and the x-ray system, the method includes identifying a subset of the fiducial markers that are visible in the optical image and are also visible in the x-ray image, the fiducial markers comprising a radio-opaque material arranged as at least one of a line and a point. The method includes determining, based on the optical image, a spatial position relative to the camera for each fiducial marker in the subset of fiducial markers, determining, based on the x-ray image, a spatial position relative to the x-ray system for each fiducial marker in the subset of fiducial markers, and determining, based on at least the spatial positions of the subset of fiducial markers relative to the camera and the spatial positions of the subset of fiducial markers relative to the x-ray system, a spatial position for the instrument relative to the x-ray system.
In some embodiments, a system for surgical navigation includes an instrument configured for a medical procedure on a patient, a camera attached to the instrument, the instrument having a spatial position relative to the camera, an x-ray system configured to acquire x-ray images of the patient during the medical procedure, and multiple fiducial markers positioned on the surface of the patient during the medical procedure, the fiducial markers being detectable by both the camera and the x-ray system. The system includes a computer configured to receive a two-dimensional (2D) optical image acquired by the camera, receive a 2D x-ray image acquired by the x-ray system, identify a subset of the fiducial markers that are visible in the optical image and are also visible in the x-ray image, determine, based on the 2D optical image, a spatial position relative to the camera for each fiducial marker in the subset of fiducial markers, determine, based on the 2D x-ray image, a spatial position relative to the x-ray system for each fiducial marker in the subset of fiducial markers, and determine, based on at least the spatial positions of the subset of fiducial markers relative to the camera and the spatial positions of the subset of fiducial markers relative to the x-ray system, a spatial position for the instrument relative to the x-ray system.
In some embodiments, a method for surgical navigation includes receiving a two-dimensional (2D) optical image acquired by a camera, the camera being attached to an instrument configured for a medical procedure on a patient, the instrument having a spatial position relative to the camera. The method includes receiving a 2D x-ray image acquired by an x-ray system configured to acquire x-ray images of the patient during the medical procedure. For a plurality of fiducial markers positioned on the surface of the patient during the medical procedure and detectable by both the camera and the x-ray system, the method includes identifying a subset of the fiducial markers that are visible in the optical image and are also visible in the x-ray image, determining, based on the 2D optical image, a spatial position relative to the camera for each fiducial marker in the subset of fiducial markers, determining, based on the 2D x-ray image, a spatial position relative to the x-ray system for each fiducial marker in the subset of fiducial markers, and determining, based on at least the spatial positions of the subset of fiducial markers relative to the camera and the spatial positions of the subset of fiducial markers relative to the x-ray system, a spatial position for the instrument relative to the x-ray system.
Further objectives and advantages will become apparent from a consideration of the description, drawings, and examples.
FIG. 1C illustrates a 3D-printed drill mount to rigidly hold the video camera in the system of
Some embodiments of the current invention are discussed in detail below. In describing embodiments, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. A person skilled in the relevant art will recognize that other equivalent components can be employed, and other methods developed, without departing from the broad concepts of the current invention. All references cited anywhere in this specification, including the Background and Detailed Description sections, are incorporated by reference as if each had been individually incorporated.
Some embodiments of the invention provide a video-based guidance system that is suitable to the workflow of fluoroscopy-guided orthopaedic trauma procedures, biopsy procedures, and other medical interventional procedures. Some embodiments of the system are applicable to surgical drills, biopsy needles, and other instruments that require real-time navigation.
Some embodiments of the invention provide a surgical guidance system that assists in the placement of surgical instruments. In some embodiments, the system comprises a video camera (or other tracker modalities, including but not limited to infrared and electromagnetic sensors) attached to a surgical drill (or other instrument); an x-ray imaging system (either plain radiography or fluoroscopy); and an arrangement of multimodal fiducial markers visible to both the attached tracking modality (e.g. video, infrared) and the x-ray system (radio-opaque). The underlying geometric relationships between the surgical instrument and x-ray imaging system are solved, enabling the registration and overlay of instrument position onto x-ray images and/or preoperative 3D images (e.g. CT, MRI). Some embodiments improve the accuracy and safety of instrument placement, and decrease both x-ray imaging dose and procedure time in image-guided interventions.
In some embodiments, the instrument position is determined by determining the spatial position of the fiducial markers in two coordinate frames. From the optical images and known calibration parameters of the camera (mounted on the drill), the markers are localized with respect to the camera. From the x-ray projection images and a known geometric calibration of the x-ray imaging system (e.g. C-arm), the markers are also localized in the C-arm/lab frame. The markers identified in both coordinate frames (e.g., at least three) are then used to solve a registration between the two coordinate frames. This solved registration represents the camera's pose in the C-arm/lab frame (i.e., the camera's known position in the lab frame). With this registration (which is solved in real-time), the calibrated drill axis is then shown in both CT images and radiographic images (via projection).
In some embodiments, a miniature camera is mounted on a surgical drill to provide real-time visualization of the drill trajectory in fluoroscopy and/or CT. The relationship between camera images and intraoperative fluoroscopy is established via multimodality (optical and radio-opaque) fiducial markers that are placed about the surgical field at the time of surgery. Notably (and likely essential to realistic workflow), in some embodiments the markers do not need to appear in preoperative CT or MRI. The solution of some embodiments couples 3D-2D registration algorithms with vision-based tracking to register and track surgical instrumentation without additional equipment in the operating room (OR). The proposed solution also has the potential to reduce radiation dose in the OR by reducing “fluoro-hunting,” and registration can be performed in some embodiments with commonly acquired (e.g., inlet and outlet) views. In principle, the system could reduce the number of views required for K-wire placement from hundreds10 to as few as two projections—one for initial registration and one for visual confirmation of device placement.
In some embodiments, in place of an external tracker system, a video camera is attached to a surgical drill that holds the instrument. Compared to the drill-mounted video system described in Magaraggia et al.,19 some embodiments use multimodal markers (rather than custom drill sleeves) to register video images to fluoroscopy. As such, the underlying registration algorithms are different.
Some embodiments use automatic 3D-2D registration to register the tracked instrumentation (e.g., surgical drills, biopsy needles, etc.) to a preoperative CT (or intraoperative CBCT) enabling 3D navigation. Prior art video-based systems require that the video markers be placed on the patient in the preoperative CT.25-29 The need to place markers in preoperative imaging presents a workflow barrier that is not likely to be compatible with emergent trauma scenarios—where the preoperative image is acquired quickly for diagnostic purposes (e.g., rule-out hemorrhage or other conditions as well as detection and characterization of the fracture).
Most prior art systems also require the markers to remain unperturbed from the moment they are imaged through the duration of the case. Instead, some embodiments of the proposed system allow the markers to be placed during the procedure (e.g., after the patient is draped, immediately prior to K-wire insertion), and perturbations of the marker arrangement are accommodated by updating the registration with as little as one fluoroscopic view.
Some embodiments provide multimodal fiducial markers and corresponding automatic registration algorithms that overcome limitations of current state-of-the-art navigation methods. In some embodiments, the markers contain both optical (vision-based) and radio-opaque point-based features with known geometric correspondence. Notably (and likely essential to realistic workflow), in some embodiments the markers do not need to appear in preoperative imaging and are instead placed about the surgical field and registered at the time of surgery. Some embodiments utilize multimodal marker arrangements that are potentially compatible with existing clinical workflows.
Some embodiments describe “stray” multimodal (optical and radio-opaque) markers with potential arrangements of the “stray” markers for feasible usage in the clinic. Unlike conventional markers, some embodiments are robust to perturbations in marker position due to skin/surface deformation since the registration between the video scene and radiographic image updates with each x-ray shot. Some embodiments of the invention are compatible with preoperative images acquired in the diagnostic work-up prior to surgery day, obviating the need to acquire a preoperative CT or MRI of the patient with fiducial markers.
Some embodiments use a miniature optical camera rigidly mounted on the instrument (e.g., surgical drill, biopsy needle, etc.) and multimodality (e.g., optical and radio-opaque) fiducial markers to provide real-time visualization of the drill trajectory in fluoroscopy and/or CT images, as discussed below with reference to
Some embodiments of the invention are operated in a “Video-Fluoro” mode of visualization and guidance. In this mode (referred to as “Video-Fluoro”), fluoroscopic views are overlaid with the registered drill axis trajectory in real-time according to the current pose of the drill. This mode of navigation allows the surgeon to adjust the freehand drill trajectory in real-time to align with the planned trajectory, with the background anatomical scene providing important visual context. Visualization in multiple augmented fluoroscopic views gives reliable 3D reckoning of the scene (normally done mentally, requiring years of training in understanding the complex morphology of the pelvis in projection views). This mode of operation is applicable to patients with or without diagnostic 3D imaging.
Some embodiments of the invention are operated in a “Video-CT” mode of visualization and guidance. This mode of operation (referred to as “Video-CT”) requires a preoperative image and provides 3D navigation analogous to common surgical tracking systems. The CT image (and fluoroscopic views) is overlaid with the drill axis trajectory (rendered in real-time according to the pose of the drill) along with preoperative planning information (e.g. acceptance corridors) to aid the surgeon in determining whether the given trajectory conforms to the bone corridor.
Workflow. The workflow of some embodiments of a drill guidance system is now described with reference to
The offline workflow includes two main steps in some embodiments. The first is calibration of the drill-mounted camera (
The intraoperative workflow also includes two main steps in some embodiments. The first (
The second step (
Some embodiments employ an algorithm for displaying a drill trajectory in imaging coordinates. For example, the registration of a surgical drill axis, a video camera, an x-ray imaging system, and any available preoperative images is computed in some embodiments as follows. A video camera mounted onto a surgical drill is calibrated with respect to the drill axis. Multimodal markers identifiable in both video and x-ray images are placed about the surgical field and co-registered by feature correspondences. If available, a preoperative CT can also be co-registered by 3D-2D image registration. Real-time guidance is achieved by overlay of the drill axis on fluoroscopy and/or CT images.
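To make this chain concrete, the following minimal sketch (synthetic values for illustration only, not parameters of any embodiment) maps an already-calibrated drill axis from the camera frame into the C-arm frame using a solved camera-to-C-arm transform, then projects it onto the detector plane:

```python
import numpy as np

# T_CD: camera-to-C-arm transform, as would be solved from the multimodal
# markers; here a synthetic rigid transform for illustration.
T_CD = np.eye(4)
T_CD[:3, 3] = [50.0, -20.0, 300.0]          # camera origin expressed in C (mm)

# Toy projection matrix standing in for the calibrated C-arm geometry.
P_C = np.array([[1000.0, 0.0, 512.0, 0.0],
                [0.0, 1000.0, 512.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])

# Calibrated drill axis in the camera frame D: a point and a direction.
axis_point_D = np.array([0.0, 0.0, 0.0])
axis_dir_D = np.array([0.0, 0.0, 1.0])

# Map the axis into the C-arm frame and project two points onto the detector.
axis_point_C = (T_CD @ np.append(axis_point_D, 1.0))[:3]
axis_dir_C = T_CD[:3, :3] @ axis_dir_D

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

overlay = [project(P_C, axis_point_C + s * axis_dir_C) for s in (0.0, 150.0)]
# 'overlay' holds two detector points defining the drill-axis overlay line.
```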
Table 1 summarizes notation for the relevant coordinate frames, transforms, and variables for video-based surgical drill guidance, with 3D vectors denoted in uppercase and 2D vectors denoted in lowercase. The coordinate frames are: D (drill camera), C (C-arm), and V (preoperative CT volume). Multimodal markers (m=1, . . . , M) are co-registered between the video frame, one or more fluoroscopic images at view angles (θ), and the CT volume, via 3D-2D registration.
D — drill camera coordinate frame
C(θ) — pose of the C-arm gantry (fluoroscopic view) at view angle θ
C — C-arm coordinate frame
V — preoperative CT volume coordinate frame
In some embodiments, camera calibration is performed to estimate the intrinsic parameters of the camera (pinhole model) and distortion coefficients of the lens. The calibration is performed in some embodiments using well-known resectioning techniques such as Zhang's method30 or Tsai's method31. The instrument (e.g., drill) axis is then calibrated to the drill camera coordinate frame (D). The examples below describe embodiments for a surgical drill, though other embodiments with other types of instruments are also envisioned.
One embodiment of camera calibration to a drill is now described. The intrinsic parameters are represented by the camera calibration matrix (KD) consisting of principal points (a0, b0) and focal lengths (fa, fb):

$K_D = \begin{bmatrix} f_a & 0 & a_0 \\ 0 & f_b & b_0 \\ 0 & 0 & 1 \end{bmatrix}$   (1a)
Calibration is performed in some embodiments using the resectioning method of Zhang et al.30 using multiple images of a planar checkerboard to obtain a closed-form solution for the intrinsic parameters of the camera. The resulting parameters are then jointly refined along with the distortion coefficients by least-squares minimization of the reprojection error across all image points. Lens distortion is modeled in some embodiments using the Brown-Conrady even-order polynomial model30 to remap image points (x̆D) onto distortion-corrected image points (xD) with a model describing both radial (k1, k2) and tangential distortion (p1, p2) effects:
$a = \breve{a} + (\breve{a} - a_0)(k_1 r^2 + k_2 r^4) + 2p_1(\breve{a} - a_0)(\breve{b} - b_0) + p_2\left[r^2 + 2(\breve{a} - a_0)^2\right]$

$b = \breve{b} + (\breve{b} - b_0)(k_1 r^2 + k_2 r^4) + p_1\left[r^2 + 2(\breve{b} - b_0)^2\right] + 2p_2(\breve{a} - a_0)(\breve{b} - b_0)$

$r = \sqrt{(\breve{a} - a_0)^2 + (\breve{b} - b_0)^2}$   (1b)
where (a, b) are components of the distortion-corrected image point xD, (ă, b̆) are components of the uncorrected image point x̆D, and (a0, b0) are the principal points shown in Eq. (1a), approximating the center of distortion.
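For illustration, the following sketch performs the calibration with OpenCV's implementation of Zhang's method; the checkerboard dimensions, square pitch, and image path are assumed values, not parameters of any described embodiment:

```python
import glob

import cv2
import numpy as np

# Zhang-style calibration from checkerboard views (assumed 9x6 inner corners,
# 5 mm squares, images in "calib/").
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 5.0

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K_D carries (fa, fb, a0, b0) as in Eq. (1a); dist carries (k1, k2, p1, p2, k3),
# i.e., the radial/tangential terms of Eq. (1b) plus a higher-order radial term.
rms, K_D, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# Remap raw detections x̆_D to distortion-corrected points x_D.
x_D = cv2.undistortPoints(img_pts[0], K_D, dist, P=K_D)
```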
The orientation of the drill axis in the drill camera coordinate frame (L⃗D) is solved in some embodiments using a calibration jig constructed to freely spin about the drill axis. In some embodiments, the jig includes a drill sleeve centered on an ArUco board34 containing a square 3×3 grid of ArUco tags (with the inner marker tag removed). The calibration jig is attached to the instrument (e.g. K-wire, drill bit, screw, etc.) and rotated along the axis of the instrument. As the jig (drill sleeve) rotates about the drill axis, the pose of the ArUco board in the 3D frame of the camera is estimated in multiple images using the Perspective-N-Points (PNP) algorithm. For circular motion of the jig, pose estimation from multiple camera images yields a 3D cylindrical point cloud, and a generalized RANSAC-based cylindrical fit (MLESAC42) is computed to obtain the central axis of the point cloud—i.e., the drill (or other instrument) axis (L⃗D). Due to constraints on motion of the calibration jig, the cylindrical axis represents the surgical drill axis in the drill camera coordinate frame.
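One way to realize this fit is sketched below, assuming the board-origin positions from PNP pose estimation have been collected into an (N, 3) array; a simple plane-plus-circle least-squares fit stands in for the MLESAC cylinder fit named above:

```python
import numpy as np

def drill_axis_from_spin(origins: np.ndarray):
    """Estimate the drill axis from ArUco-board origins traced during a spin.

    origins: (N, 3) board positions in the camera frame D, one per video frame.
    Returns a point on the axis and the unit axis direction (both in frame D).
    A simplified least-squares stand-in for the RANSAC/MLESAC cylinder fit.
    """
    centroid = origins.mean(axis=0)
    # The circular trace lies in a plane; its normal (smallest SVD mode) is
    # the axis direction.
    _, _, vt = np.linalg.svd(origins - centroid)
    axis = vt[2] / np.linalg.norm(vt[2])
    # Express points in the plane and fit a circle center (linear LSQ).
    u, v = vt[0], vt[1]                      # in-plane basis
    xy = np.column_stack(((origins - centroid) @ u, (origins - centroid) @ v))
    A = np.column_stack((2 * xy, np.ones(len(xy))))
    b = (xy ** 2).sum(axis=1)
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    center = centroid + cx * u + cy * v      # point on the drill axis
    return center, axis
```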
Video Features. In some embodiments, video images are registered to the fluoroscopic scene through localization and registration of point-based feature correspondences within the multimodal markers. Video features are realized through a variety of vision-based fiducial systems reported in the prior art (e.g. ArUco, AprilTag, ARToolKit). In some embodiments, for example, ArUco tags are used.
Image-based 3D-2D registration methods are used in some embodiments to estimate the pose of the markers from fluoroscopic images in the C-arm coordinate frame (C) through image similarity metrics. In some embodiments, a calibrated C-arm is used, with projective relationship $P_C$ relating 3D points in the C-arm frame (XC) to 2D fluoroscopic image points on the C-arm detector plane (xC). To extract marker pose from fluoroscopic images, some embodiments perform 3D-2D “known-component” registration (KC-Reg)33. The general framework for KC-Reg is as shown in panel (a) of
A description of an example embodiment for video-to-fluoroscopy registration here follows.
Video images were registered to the fluoroscopic scene through localization and registration of the point-based feature correspondences of the multimodality markers (discussed in more detail below). For the ArUco tags, the 3D pose of each marker in the drill camera coordinate frame (PD(m)) is estimated via well-known camera pose estimation techniques (for example, the PNP algorithm) for m=1, . . . , M markers. The translational component of the resulting pose estimate is extracted to represent the central marker feature point in the drill camera coordinate frame [XD(m)].
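A sketch of this step using OpenCV's solvePnP is shown below; the tag side length and the choice of the IPPE_SQUARE solver are assumptions for illustration:

```python
import cv2
import numpy as np

# Tag corners in the tag's own frame for an assumed 30 mm ArUco tag, ordered
# as required by cv2.SOLVEPNP_IPPE_SQUARE.
tag_side = 30.0  # mm (assumed)
obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
               dtype=np.float32) * (tag_side / 2.0)

def marker_point_in_D(img_corners, K_D, dist):
    """Return X_D(m): the tag-center feature point in the drill camera frame.

    img_corners: (4, 2) detected tag corners; K_D, dist from calibration.
    The translational component of the PNP pose is the central feature point.
    """
    ok, rvec, tvec = cv2.solvePnP(obj, np.float32(img_corners), K_D, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    assert ok, "PNP failed"
    return tvec.ravel()
```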
Similar to PNP, 3D-2D registration methods use fluoroscopic images to estimate the pose of the markers in the C-arm coordinate frame (C) through image similarity metrics rather than explicit point-to-point correspondences. In this work, a calibrated C-arm was used, with projective relationship $P_C$ relating 3D points in the C-arm frame (XC) to 2D fluoroscopic image points on the C-arm detector plane (xC) (in homogeneous coordinates) by:

$x_C \simeq P_C(\theta)\, X_C$
The projection matrix was obtained by standard C-arm geometric calibration methods43 and is defined by:

$P_C(\theta) = K_C\left[\Theta(\theta)\right]$
The intrinsic matrix KC describes the geometric relationship between the C-arm source and detector, as described by the source-to-detector distance (SDD) and the piercing point (u0, v0) representing the position of the orthogonal ray between the source and detector plane. The extrinsic parameters (Θ) describe the pose of the C-arm source-detector assembly for a fluoroscopy frame at view angle θ in a common coordinate frame (referred to as the C-arm coordinate frame C).
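The following sketch assembles such a projection matrix from the quantities just named (SDD, piercing point, and the extrinsic pose); the function names are illustrative only:

```python
import numpy as np

def carm_projection(sdd, u0, v0, R, t):
    """Build P_C = K_C [R | t] for one C-arm view (R, t = extrinsic pose Θ).

    sdd: source-to-detector distance and (u0, v0): piercing point, expressed
    in pixels; R: 3x3 rotation; t: 3-vector translation for view angle θ.
    """
    K_C = np.array([[sdd, 0.0, u0],
                    [0.0, sdd, v0],
                    [0.0, 0.0, 1.0]])
    return K_C @ np.hstack([R, np.reshape(t, (3, 1))])

def project(P, X):
    """Map 3D point(s) X (N, 3) in frame C to 2D detector points (N, 2)."""
    Xh = np.atleast_2d(X)
    Xh = np.hstack([Xh, np.ones((Xh.shape[0], 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```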
Optional Pre-Processing. Video and/or fluoroscopic views are pre-processed in some embodiments for better performance and to improve robustness of the registration (e.g., noise reduction, edge enhancement, etc.). For best performance, fluoroscopic views are selected in some embodiments such that markers are not overlapping in projection data. To improve robustness of the registration, fluoroscopic images are pre-processed in some embodiments to isolate the marker features from background anatomy (e.g., with morphological top-hat filtering and gamma expansion) and masked to mitigate interference from neighboring marker features.
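A minimal sketch of such pre-processing (top-hat filtering followed by gamma expansion, with assumed kernel size and gamma values to be tuned per system) might look like:

```python
import cv2
import numpy as np

def preprocess_fluoro(img, kernel_px=25, gamma=2.0):
    """Isolate radio-opaque marker features from background anatomy.

    img: 8-bit fluoroscopic image. The raw image is inverted so BBs/wires
    become bright; a morphological top-hat then keeps only small bright
    structures, and gamma expansion stretches their contrast.
    """
    inv = cv2.bitwise_not(img)                       # markers become bright
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_px, kernel_px))
    tophat = cv2.morphologyEx(inv, cv2.MORPH_TOPHAT, kernel)
    norm = tophat.astype(np.float32) / max(int(tophat.max()), 1)
    return (255 * norm ** gamma).astype(np.uint8)    # gamma expansion
```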
Similarity Metric. The similarity between a fixed fluoroscopic image (Ifixed) and its corresponding moving DRR (Imoving) is computed in some embodiments through a variety of metrics (e.g., cross-correlation, gradient-based similarity). In some embodiments, the gradient information (GI) similarity metric is used, as it has been shown to be robust in the presence of strong gradient magnitudes not present in both images.34,35 To solve for the pose of marker m, an objective function that maximizes the cumulative GI across θ=1, . . . , Nview fluoroscopic views was defined as:

$\hat{P}_C^{(m)} = \arg\max_{P_C^{(m)}} \sum_{\theta=1}^{N_{view}} GI\left(I_{fixed}(\theta),\, I_{moving}(\theta; P_C^{(m)})\right)$
where the moving DRR is computed by integrating rays along the transformed mesh volume PC(m)(κ) according to the C-arm gantry pose at view θ.
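A sketch of one common formulation of GI is shown below (gradient pairs weighted by their alignment, each pixel contributing the smaller gradient magnitude); the precise formulation in refs. 34,35 may differ in detail:

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-8):
    """Gradient information (GI) between a fluoroscopic image and a DRR.

    Gradient pairs that are aligned or opposed receive weight near 1 and
    orthogonal pairs near 0 (weight = cos^2 of the angle between gradients);
    each pixel contributes the smaller of the two gradient magnitudes, so
    strong gradients present in only one image contribute little.
    """
    gy_f, gx_f = np.gradient(fixed.astype(np.float64))
    gy_m, gx_m = np.gradient(moving.astype(np.float64))
    mag_f = np.hypot(gx_f, gy_f)
    mag_m = np.hypot(gx_m, gy_m)
    cos_a = (gx_f * gx_m + gy_f * gy_m) / (mag_f * mag_m + eps)
    return np.sum(cos_a ** 2 * np.minimum(mag_f, mag_m))
```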
Optimization Method. In some embodiments, optimization of the pose estimate $\hat{P}_C^{(m)}$ is performed with any general optimizer. For example, the pose estimate $\hat{P}_C^{(m)}$ was optimized using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm36, from which the translational component was extracted to represent the central marker feature point in the C-arm coordinate frame [$\hat{X}_C^{(m)}$]. Prior to optimization, the pose was initialized using features extracted during marker detection. For each multimodal marker m, an initial estimate of the central 3D feature point in the C-arm frame [$\hat{X}_C^{(m)}$] was obtained by first backprojecting a ray from the corresponding 2D image feature point in homogeneous coordinates (xC(m,θ)) toward the x-ray source. The backprojected ray was estimated for each fluoroscopic view θ, from which an initial estimate of the 3D feature point could be reconstructed as the point along the ray at the depth implied by the estimated marker magnification. The magnification was estimated for each marker m in each fluoroscopic view using the perspective relationship between the diameter of a circle and the major axis length of its elliptical projection. Once the 3D position of each marker was initialized, a rotational initialization was performed with a planar fit of the global marker arrangement. The computed plane normal was used as an initial estimate of the out-of-plane axis for each marker.
Once initialized, the CMA-ES optimization for each marker was performed following a coarse-to-fine multiresolution strategy. First, a coarse multi-start was performed in which KC-Reg was reinitialized 7 times with 45° rotational offsets to initialize the rotational pose of the marker. A refinement was then computed at fine resolution to obtain the final marker pose estimate ($\hat{P}_C^{(m)}$). The coarse stage was carried out at 1×1 mm2 pixel size, with a total population size of 350 and initial standard deviation of σ=10 mm (and 10°). The refinement was performed at 0.5×0.5 mm2 pixel size with a population size of 50 and initial standard deviation of σ=5 mm (and 5°).
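Using the open-source cma package, the optimization loop might be sketched as follows; drr() is a placeholder for the DRR generator described above, and gradient_information() is the metric sketched earlier:

```python
import numpy as np
import cma  # pip install cma

def register_marker(pose0, fluoro_views, drr, sigma0=10.0, popsize=350):
    """CMA-ES search over a 6-DoF marker pose (3 translations, 3 rotations).

    fluoro_views: {theta: fixed_image}; drr(pose, theta) renders the moving
    DRR for a candidate pose. Run once coarsely (popsize=350, sigma0=10) and
    again finely (popsize=50, sigma0=5), per the strategy described above.
    """
    def neg_gi(pose):
        # CMA-ES minimizes, so negate the cumulative GI objective.
        return -sum(gradient_information(fixed, drr(pose, theta))
                    for theta, fixed in fluoro_views.items())

    es = cma.CMAEvolutionStrategy(np.asarray(pose0, float), sigma0,
                                  {"popsize": popsize, "verbose": -9})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [neg_gi(p) for p in candidates])
    return es.result.xbest  # refined pose estimate for the marker
```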
With corresponding point estimates obtained in both the drill camera (D) and C-arm (C) coordinate frames, a transformation between the two was estimated using point-based registration described by Horn et al.37. The resulting video-to-fluoroscopy transform (TCD) was used to represent the surgical drill axis in the C-arm coordinate frame:
$\vec{L}_C = T_{CD}\, \vec{L}_D$   (5a)
and its projection on the C-arm detector plane:
$\vec{l}_C = P_C(\theta)\, \vec{L}_C$   (5b)
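A closed-form point-based solve in the spirit of Horn's method (implemented here with the SVD-based Arun variant rather than Horn's quaternion formulation) might look like the following; the resulting T_CD then maps the calibrated drill axis into the C-arm frame per Eq. (5a):

```python
import numpy as np

def point_registration(X_D, X_C):
    """Closed-form rigid registration between matched 3D point sets.

    X_D: (M, 3) marker points in the drill camera frame.
    X_C: (M, 3) the same markers in the C-arm frame (M >= 3, matched order).
    Returns the 4x4 transform T_CD such that X_C ≈ R @ X_D + t.
    """
    mu_D, mu_C = X_D.mean(axis=0), X_C.mean(axis=0)
    H = (X_D - mu_D).T @ (X_C - mu_C)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ S @ U.T
    t = mu_C - R @ mu_D
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```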
Visualization/Display. Augmentation of fluoroscopic views with $\vec{l}_C$ realizes the “Video-Fluoro” navigation mode of some embodiments. During instrument guidance, only the localization in the camera images is continuously updated (no new fluoroscopy frames are required) so long as the markers are not perturbed relative to anatomy. If perturbed, the video-to-fluoroscopy registration is updated in some embodiments by acquisition of one or more fluoro shots.
Intraoperative fluoroscopy is used in some embodiments to register the patient to a preoperative CT (or intraoperative CBCT) volume using 3D-2D registration34-38. The workflow for patient registration in some embodiments is illustrated in panel (b) of
In some embodiments, the gradient correlation (GC) metric is used for CT registration because it is independent of absolute gradient magnitudes (unlike GI) making it more robust for registration between images in which corresponding anatomical gradients may differ due to differences in imaging techniques or mismatch in image content (e.g., tools in the fluoroscopic scene that are not in the CT).38 For marker localization, GI is observed to perform well due to its relative insensitivity to gradients from surrounding anatomy. In some embodiments, a coarse-to-fine multiresolution strategy is employed, with a coarse multi-start consisting of 7 reinitializations (each with a 5° rotational offset) at a resolution of 2×2 mm2 pixel size, a total population size of 700, and initial standard deviation of σ=10 mm (and 10°). A refinement is subsequently computed at higher image resolution (0.5×0.5 mm2 pixel size) with a population size of 50 and initial standard deviation of σ=5 mm (and 5°).
The resulting CT-to-fluoroscopy transform (TCV) in some embodiments is used to transform the surgical drill axis into the preoperative CT coordinate frame as:
$\vec{L}_V = (T_{CV})^{-1}\, T_{CD}\, \vec{L}_D$   (6)
Augmentation of the CT image (e.g., orthogonal slices, MIPs, or volume renderings) realizes the “Video-CT” navigation mode of some embodiments in which the instrument trajectory is visualized in the 3D CT image.
In some embodiments, fluoroscopy-to-CT registration operates under the assumption of rigidity, which is appropriate in the context of simple fractures. The drill-mounted video concept is integrated in some embodiments with recent 3D-2D registration methods that address challenges of joint dislocation40 and multi-body comminuted, displaced fractures.44,45 Such integration extends the potential applicability across a fuller range of pelvic trauma surgery.
An anthropomorphic pelvis phantom 510 composed of a natural adult skeleton in tissue-equivalent plastic (Rando®, The Phantom Lab, Greenwich NY) was used. A UR3e robotic arm 520 (Universal Robotics, Odense, Denmark) was used as a drill holder throughout the experiments, but a robot is not required in other embodiments.
The drill camera 515 was aligned with respect to the anterior inferior iliac spine (AIIS) to posterior superior iliac spine (PSIS) trajectory in the left hip. Five markers 525 were placed about the planned entry site. The robotic arm 520 was used to position the drill camera 515 at a distance of ˜20 cm from the surface of the phantom 510 to emulate realistic surgical drill positioning. Nine camera poses were sampled about the initial planned trajectory 530 to measure the sensitivity of fiducial registration error to various perspectives.
A mobile isocentric C-arm 505 (Cios Spin, Siemens Healthineers, Forchheim, Germany) was used to acquire fluoroscopic images (for 3D-2D registration) and CBCT (for truth definition). CBCT images were acquired with 400 projections over a 195° semi-circular orbit and reconstructed on a 0.512×0.512×0.512 mm3 voxel grid with a standard bone kernel. An initial CBCT scan (110 kV, 350 mAs) was acquired—with only the pelvis phantom 510 and marker 525 arrangement present in the field of view (FOV). The projections from the scan in which markers 525 were not overlapping (an orbital arc of θ=−20° to 70°) were selected as candidate views for solving 3D-2D marker localization. The resulting CBCT reconstruction was used to segment the central BB position for each marker 525 as truth definition. Single fluoroscopic views (100 kV, ~0.9 mAs) were collected at common clinical orientations for augmentation in “Video-Fluoro” mode, including: AP view (θ=0°, ϕ=0°), lateral view (θ=90°, ϕ=0°), inlet view (θ=0°, ϕ=−25°), and outlet view (θ=0°, ϕ=30°). The drill camera 515 was then positioned with a 3 mm K-wire extending from the drill tip to the surface of the pelvis phantom 510, and video images of the arrangement of markers 525 were collected. A final CBCT scan (110 kV, 380 mAs) was acquired with the drill camera 515 in the FOV for ground truth definition of the K-wire drill axis.
For evaluation of the “Video-CT” navigation mode, a preoperative CT volume (0.82×0.82×0.5 mm3 voxel grid) of the pelvis phantom 510 was acquired (SOMATOM Definition, Siemens, Erlangen, Germany). K-wire trajectories were automatically planned in the preoperative CT using the atlas-based planning method in Goerres et al.39 and Han et al.40 Acceptance volumes interior to bone corridors were created for common K-wire trajectories, including: AIIS-to-PSIS, superior ramus (SR), and iliac crest (IC) to posterior column (PC). Acceptance volumes were used to visualize and evaluate drill axis conformance within pelvic bone corridors.
Table 2 provides a summary of figures of merit for assessing the accuracy of individual registration methods and end-to-end system performance. The performance of video-to-fluoroscopy registration was quantified in terms of errors related to 3D marker localization in the drill camera (D) and C-arm (C) coordinate frames.
The positional estimate for each marker in the camera frame [$\hat{X}_D^{(m)}$] was evaluated with respect to the truth definition (points defined in CBCT) and quantified in terms of fiducial registration error (FRE):
$FRE = \left\| \hat{T}_{CD}\, \hat{X}_D^{(m)} - X_C^{(m,true)} \right\|$   (7a)
where $X_C^{(m,true)}$ represents the true location of marker m in the C-arm frame and $\hat{T}_{CD}$ is an estimate of the true camera-to-C-arm transform derived from point-based registration with all markers. The term $\hat{T}_{CD}\hat{X}_D^{(m)}$ therefore is the estimated location of marker m in the C-arm frame. Fiducial errors were further decomposed into in-plane and depth errors with respect to the drill camera coordinate frame (ΔD) as:
$\Delta_D = R_{DC}\left( \hat{T}_{CD}\, \hat{X}_D^{(m)} - X_C^{(m,true)} \right)$   (7b)
Estimation of $R_{DC}$ (the rotation matrix from the C-arm to the drill camera coordinate frame) was performed independently of the fiducials by solving the rotation between the calibrated drill axis (L⃗D) and the truth definition for the drill axis segmented from CBCT (referred to as the reference drill axis, L⃗C(true)).
Marker localization errors in the C-arm frame (C) were estimated with reference to the ground truth marker locations (XC(m,true)). The 3D-2D registration of each marker 525 was solved using Nview=1−3 fluoroscopic views, selected from the subset of candidate projections mentioned earlier. For Nview>1, selected views were chosen such that they spanned (in total) a 30° arc with equiangular spacing. Accuracy was assessed in terms of the norm of the translational error (δC):
$\delta_C^{(m)} = \left\| \hat{X}_C^{(m)} - X_C^{(m,true)} \right\|$   (7c)
For registrations based on a single fluoroscopic view (Nview=1), translational errors were further examined with respect to in-plane (parallel to the detector plane) and out-of-plane (depth) components (referred to as ΔC).
Fluoroscopy-to-CT registration was evaluated over the same set of fluoroscopic views used during marker localization for Nview=1 and 2 (over a 30° arc). To evaluate accuracy, truth was defined from a large number (Nview=15) of fluoroscopic views used to solve the true 3D-2D patient registration (referred to as $T_{CV}^{(true)}$), selecting distinct views for registration and truth definition to mitigate bias. Performance was calculated in terms of the difference transform (ΓV) between the registration result and the truth definition by:
$\Gamma_V = \hat{T}_{CV}\left(T_{CV}^{(true)}\right)^{-1}$   (8)
The difference transform was further decomposed into translational (ΓVΔ) and rotational error components (ΓVφ), from which the norm of the translational error (δV) was also computed.
End-to-end system performance (registration accuracy) was evaluated by comparison of the predicted drill axis ($\hat{\vec{L}}_C$) with the reference drill axis isolated from CBCT (L⃗C(true)). To correct for possible mismatch between the reference trajectory and the automatically planned trajectory, a transformation was applied to both the predicted and reference drill axes. This transformation places the drill 535 within the context of the bone corridor, as would be done in clinical use, while preserving the relative errors between the predicted and reference trajectories. The predicted and reference drill axes were separated into translational (τC) and rotational (ρC) components from which errors in positional and angular measurements could be computed, respectively, as:

$TRE_x = \left\| \hat{\tau}_C - \tau_C \right\|, \qquad TRE_\varphi = \cos^{-1}\left( \hat{\rho}_C \cdot \rho_C \right)$   (9)
where the hat operator denotes components of the predicted drill axis. Here, the rotational components (ρC) are given by a vector describing the direction of the drill axis, and the translational components (τC) describe a point along the drill axis. Since the drill was aligned with respect to a planned bone corridor (AIIS-to-PSIS), the translational component was set to correspond to the bone corridor entry point. The entry point was computed as the intersection of the predicted and reference drill axes with the surface of the planned acceptance volume. All registration errors reported were computed in the CT coordinate frame (V). The reference trajectory in the CT coordinate frame was estimated using the fluoroscopy-to-CT truth definition ($\vec{L}_V^{(true)} = (T_{CV}^{(true)})^{-1}\, \vec{L}_C^{(true)}$).
To evaluate end-to-end system performance in a broader context, the conformance of drill axis trajectories was evaluated relative to multiple pelvic bone corridors. For each bone corridor, a mesh of the planned acceptance volume was created to represent the cortical bone surface. The acceptability of a resulting trajectory was measured by first computing the entry and exit point at which the given trajectory intersects the cortical bone surface. The resulting path from entry to exit point was equidistantly sampled along the trajectory, and the distance from each sample to the nearest cortical bone surface point was calculated. Measurements less than zero (or less than the radius of the K-wire) suggest a breach of the bone cortex. An ensemble of drill axis trajectories was simulated as a dispersion of trajectories about the reference trajectory according to the median positional and angular TRE (Eq. 9). In addition to the AIIS-to-PSIS trajectory, conformance was analyzed relative to the SR and IC-PC bone corridors.
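A simplified conformance check along these lines is sketched below; it uses unsigned nearest-point distances to sampled surface points as a stand-in for the signed distance to the cortical mesh, and the K-wire radius is an assumed value:

```python
import numpy as np
from scipy.spatial import cKDTree

def corridor_breach(entry, exit_pt, surface_pts, kwire_radius=1.5, n_samples=100):
    """Flag potential cortical breach along a drill trajectory.

    entry, exit_pt: 3D intersections of the trajectory with the acceptance
    volume surface; surface_pts: (N, 3) points sampled from the cortical-bone
    mesh; kwire_radius in mm (assumed). Returns a breach flag and the minimum
    distance from the sampled path to the surface.
    """
    tree = cKDTree(surface_pts)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    samples = entry + t * (exit_pt - entry)      # equidistant path samples
    dist, _ = tree.query(samples)
    return bool(np.any(dist < kwire_radius)), float(dist.min())
```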
Trajectories were selected by radially sampling poses about the center of the marker arrangement. Also shown is the FRE for the actual planned trajectory [denoted in (a) as Plan].
Overall, the results demonstrate consistent FRE across the sampled trajectories, suggesting reasonable robustness in localizing markers from video images from a broad variety of camera poses. Such robustness is important as the surgeon maneuvers the drill about the scene to align with the planned trajectory. The out-of-plane errors (zD) reflect challenges in resolving depth with a monocular camera, addressed in future work incorporating a stereoscopic camera system.
Fluoroscopy-to-CT registration accuracy was computed in terms of the difference in true and estimated 3D transformations—ΓV as in Eq. 8. As shown in
By updating “Video-Fluoro” or “Video-CT” visualization in real-time during freehand manipulation, the system could also help to reduce the amount of trial-and-error “fluoro hunting” in conventional workflow, thereby reducing time and radiation dose.
The trajectory shown in this example corresponds to K-wire delivery along the AIIS-to-PSIS bone corridor. The end-to-end system performance with a single fluoroscopic view (Nview=1) gave median TREx=3.4 mm (1.9 mm IQR) and TREφ=2.7° (0.79° IQR). For Nview=2, the end-to-end accuracy improved to median TREx=0.88 mm (0.16 mm IQR) and TREφ=2.0° (0.16° IQR).
An embodiment of the proposed method was further tested in pre-clinical cadaver experiments. Geometric accuracy of the drill axis registration was evaluated with respect to common trajectories in pelvic trauma surgery [anterior inferior iliac spine (AIIS), superior ramus (SR), and posterior column (PC)]. A preoperative CT of the cadaver was obtained (Aquilion Precision CT, Canon) for automatic planning of common pelvic K-wire trajectories using atlas-based planning methods.39,40 Acceptance volumes within bone corridors were created for each trajectory and used to evaluate drill axis conformance within pelvic bone corridors.
System registration accuracy is summarized in
The computational runtime of the system is summarized in
As discussed above, in some embodiments, multimodal fiducial markers visible to both a video camera and an imaging system (e.g., x-ray, cone-beam CT, MRI, etc.) are used to compute the geometric relationship between the camera and x-ray coordinate systems, allowing real-time, image-based surgical tracking in intraoperative x-ray images (as well as preoperative 3D images via 3D-2D registration34). An example of an image-guided navigation system 1500 using multimodal fiducials 1505 to co-register scenes from a camera 1510 and x-ray system 1515 (including an x-ray source 1516 and an x-ray detector 1517) is schematically illustrated in
“Paired-Point” Markers. Some embodiments employ “paired-point” multimodal markers. These markers are named as such since they can be uniquely identified in both video and x-ray images. The markers are rigid, contain corresponding feature points detectable in video and x-ray images, and are matched prior to point-based registration.
Markers (at least 3) are placed about the surgical field at the time of surgery.
X-ray images (one or more) are acquired with the markers in the field-of-view. Image processing algorithms automatically identify valid markers, determine marker identity, and extract key-point features. Marker correspondence between x-ray images is performed via direct identity matching.
If at least 3 markers are present across all x-ray images, the 3D position of x-ray key-point features is determined via automatic pose estimation techniques (e.g. 3D-2D registration, stereo triangulation).
Real-time video images of the markers are acquired, and automatic image processing algorithms uniquely identify corresponding markers and key-point features. The 3D position of video-based key-point features is determined via automatic pose estimation techniques (e.g. Perspective-N-Points algorithm).
If at least 3 corresponding markers are found between x-ray and video images, point-based registration methods (e.g. Horn's method) are used to register the video and x-ray scenes. After this stage, surgical devices are tracked and overlaid on x-ray images in real-time. Note that in some embodiments not all of the markers visible in both modalities need be used to register the scenes; markers may be visible in both images yet remain unused for registration of the video and x-ray scenes.
Registration with preoperative images is performed using an automatic 3D-2D image registration algorithm (e.g., as described below with reference to
“Point-Cloud” Markers. Some embodiments utilize “point-cloud” multimodal markers. These markers contain corresponding feature points extracted in both video and x-ray images but are not uniquely identified and directly matched as in the “paired-point” approach. Instead, a point cloud is generated in the video and x-ray space, and conventional surface-matching algorithms are used to solve the registration.
Embodiments are realized in many forms or shapes (e.g. circle, square, etc.). For example,
Markers (at least 3) are placed about the surgical field at the time of surgery.
X-ray images (2 or more) are acquired with the markers in the field-of-view. Image processing algorithms automatically identify valid markers and extract key-point features. Marker correspondence between x-ray images is performed via off-the-shelf feature matching techniques (e.g., SIFT, SURF).
If an acceptable number of key-points are found, the 3D position of x-ray key-point features is then determined via automatic pose estimation techniques (e.g. 3D-2D registration, stereo triangulation).
Real-time stereo video images of the markers are acquired, and automatic image processing algorithms detect key-point features. The 3D position of video-based key-point features is determined via automatic 3D reconstruction techniques (e.g. stereo triangulation, structure from motion).
If a sufficient number of points are used to create x-ray and video point-clouds, off-the-shelf surface-matching registration methods (e.g. iterative closest point) are used to register the video and x-ray scenes (see the sketch following this list). Following registration, the fiducial registration error (FRE) is checked, and if within an acceptable level, the registration is treated as successful. After this stage, surgical devices are tracked and overlaid on x-ray images in real-time.
Registration with preoperative images can be performed using an automatic 3D-2D image registration algorithm (e.g., as described below with reference to
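A sketch of the point-cloud registration step using Open3D's ICP is shown below; the initial transform and correspondence gate are assumptions for illustration:

```python
import numpy as np
import open3d as o3d

def register_point_clouds(video_pts, xray_pts, T_init=np.eye(4), max_dist=10.0):
    """ICP registration of video and x-ray key-point clouds (Open3D).

    video_pts, xray_pts: (N, 3) reconstructed key-point positions; T_init is
    a coarse initial alignment and max_dist the correspondence gate in mm.
    Returns the video-to-x-ray transform and an RMSE usable as the FRE check
    described above.
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(video_pts)
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(xray_pts)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.inlier_rmse
```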
Marker Design.
In some embodiments, the marker base was 3D-printed (Vero PureWhite, Connex-3 Objet 260, Stratasys, Eden Prairie, MN, USA) with pockets to hold the BBs, a peripheral groove to hold the wire, and a 30×30 mm2 square recess (~1.8 mm deep) in which the ArUco tag was placed such that its center coincided with the central BB. The initial design had a footprint of 50 mm and could generate up to 48 unique markers, with other embodiments having more compact designs.
Marker Detection. The detection of ArUco tags in video was based in some embodiments on open-source tools available in OpenCV. The algorithm of some embodiments first performs adaptive thresholding and contour detection of grayscale images to isolate candidate regions for multiple tags. The inner area of each candidate is then analyzed by correcting the tag perspective to a square region and then binarizing the resulting region to a regularly spaced grid upon which marker identification can be performed.
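Detection might be sketched as follows with OpenCV's aruco module (the detector object shown is the OpenCV 4.7+ API; earlier versions expose cv2.aruco.detectMarkers directly), with the dictionary choice and frame path assumed:

```python
import cv2

# ArUco tag detection using OpenCV's open-source aruco tools.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("video_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
corners, ids, rejected = detector.detectMarkers(frame)
# corners: one (1, 4, 2) array of corner pixels per tag; ids: (M, 1) tag IDs.
```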
In some embodiments, marker detection in the fluoroscopic scene was performed first by ellipse detection (using the peripheral wire) to coarsely identify individual marker positions. A Canny edge filter was followed by morphological closing to yield binarized elliptical contours and filter out smaller objects (e.g., BBs). A Hough-based ellipse detector returned elliptical fits ordered by accumulator score, which was cut off according to the known number of markers (M).
The inner region within each ellipse was then analyzed to determine the arrangement of BBs and the corresponding unique marker ID. A morphological top-hat filter isolated marker features from surrounding anatomy, and the ellipse was removed by morphological opening to isolate the BB features. Hough-based circle detection was then used to identify the position and radius of BB locations within each marker. To eliminate false positives, candidate BBs were filtered based on the known range of BB radii. Detections within a certain proximity to each other were also filtered based on the known marker designs, and collinearity was enforced to remove any remaining false positives. The resulting BB detections were then hierarchically clustered in two groups according to size, and markers were uniquely identified according to a lookup table.
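A condensed sketch of this BB-detection stage (top-hat isolation followed by Hough circle detection) is shown below; all parameter values are assumptions to be tuned per system:

```python
import cv2
import numpy as np

def detect_bbs(fluoro, min_r=2, max_r=8):
    """Locate candidate BB features within a fluoroscopic marker region.

    fluoro: 8-bit fluoroscopic image (or a masked region inside a detected
    ellipse). A top-hat filter isolates small bright features from anatomy;
    Hough circle detection returns centers/radii, gated by the known BB
    radius range (min_r, max_r, in pixels).
    """
    inv = cv2.bitwise_not(fluoro)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * max_r + 1,) * 2)
    feat = cv2.morphologyEx(inv, cv2.MORPH_TOPHAT, kernel)
    circles = cv2.HoughCircles(feat, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=2 * min_r, param1=100, param2=12,
                               minRadius=min_r, maxRadius=max_r)
    return np.empty((0, 3)) if circles is None else circles[0]  # (x, y, r) rows
```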
The embodiments discussed above realize surgical guidance using the video-on-drill concept in the example context of K-wire insertions in orthopaedic-trauma surgery. Embodiments can be generalized for tracking a variety of surgical instrumentation (e.g. drill bit, screws, nails, biopsy needles, etc.). Embodiments of the system can also be envisioned using multiple combinations of stereo/mono video cameras, C-arm/O-arm/portable x-ray intraoperative imaging devices, and CT/MRI/cone-beam CT (CBCT) 3D images. To achieve navigation, a video camera can in principle be mounted on other locations (e.g., handheld instrument, overhead, or tablet computer) as long as it can observe instruments with patient surface markers. Other embodiments replace the video camera altogether, using an alternative tracking modality (e.g. infrared, electromagnetic) with corresponding multimodal markers (e.g. active infrared LEDs, electromagnetic sensor coils) for registration with fluoroscopic images.
A potential embodiment can be envisioned in which naturally occurring features present in both the camera and fluoroscopic images replace the requirement for surface markers positioned on the patient. For example, naturally occurring features such as the surface of bony anatomy exposed at the surgical site (e.g. features of the pelvic ring) and/or instrumentation that is part of the surgical setup (e.g. a surgical retractor) would provide the basis for video-to-fluoroscopy registration. A stereoscopic video camera detects features in the 3D space of the patient (alternatively, determined by a 3D structure-from-motion approach using a monocular video camera), and the x-ray fluoroscopy system detects such features in 2 or more fluoroscopy images to localize them in the 3D space of the x-ray imaging system. The two sets of 3D features are then co-registered to establish registration between the camera (mounted on the drill) and the x-ray imaging system. Such an embodiment would automatically detect corresponding features in video and x-ray images without relying upon known feature configurations in the markers.
The examples above augment the real-time location of the instrument onto x-ray/CT images, recognizing that alternative embodiments similarly augment anatomy from x-ray/CT onto the camera images. This would provide an augmented reality display of the anatomy underneath the area visible to the video camera. If the intended trajectory is defined in the preoperative 3D image, it can be overlaid on video images such that the surgeon can align the trajectory of the drill to what was defined in the preoperative image.
Other embodiments are applicable to other medical procedures outside ortho-trauma surgery in which the physician needs to intraoperatively estimate and verify the 3D pose of a medical instrument relative to surrounding anatomy. Examples include various needle injection procedures, such as spinal pain management, biopsy procedures, or ablation procedures.
Embodiments outside the medical domain are similarly applicable. For example, potential embodiments include visual servoing and vision-based robotic guidance where the 3D pose of an instrument needs to be tracked. In other words, the video-guided instrument is not limited to medical instruments in some embodiments.
The term “computer” is intended to have a broad meaning that may be used in computing devices such as, e.g., but not limited to, standalone or client or server devices. The computer may be, e.g., (but not limited to) a personal computer (PC) system running an operating system such as, e.g., (but not limited to) MICROSOFT® WINDOWS® NT/98/2000/XP/Vista/Windows 7/8/etc. available from MICROSOFT® Corporation of Redmond, Wash., U.S.A. or an Apple computer executing MAC® OS from Apple® of Cupertino, Calif., U.S.A. However, the invention is not limited to these platforms. Instead, the invention may be implemented on any appropriate computer system running any appropriate operating system. In one illustrative embodiment, the present invention may be implemented on a computer system operating as discussed herein. The computer system may include, e.g., but is not limited to, a main memory, random access memory (RAM), and a secondary memory, etc. The main memory, RAM, and secondary memory may each be a computer-readable medium configured to store instructions that implement one or more embodiments, and the RAM may include RAM devices such as dynamic RAM (DRAM) devices, flash memory devices, static RAM (SRAM) devices, etc.
The secondary memory may include, for example, (but is not limited to) a hard disk drive and/or a removable storage drive, representing a floppy diskette drive, a magnetic tape drive, an optical disk drive, a compact disk drive (CD-ROM), flash memory, etc. The removable storage drive may, e.g., but is not limited to, read from and/or write to a removable storage unit in a well-known manner. The removable storage unit, also called a program storage device or a computer program product, may represent, e.g., but is not limited to, a floppy disk, magnetic tape, optical disk, compact disk, etc., which may be read from and written to by the removable storage drive. As will be appreciated, the removable storage unit may include a computer usable storage medium having stored therein computer software and/or data.
In alternative illustrative embodiments, the secondary memory may include other similar devices for allowing computer programs or other instructions to be loaded into the computer system. Such devices may include, for example, a removable storage unit and an interface. Examples of such may include a program cartridge and cartridge interface (such as, e.g., but not limited to, those found in video game devices), a removable memory chip (such as, e.g., but not limited to, an erasable programmable read only memory (EPROM) or programmable read only memory (PROM)) and associated socket, and other removable storage units and interfaces, which may allow software and data to be transferred from the removable storage unit to the computer system.
The computer may also include an input device, which may include any mechanism or combination of mechanisms that may permit information to be input into the computer system from, e.g., a user. The input device may include logic configured to receive information for the computer system from, e.g., a user. Examples of the input device may include, e.g., but not limited to, a mouse, pen-based pointing device, or other pointing device such as a digitizer, a touch sensitive display device, and/or a keyboard or other data entry device. Other input devices may include, e.g., but not limited to, a biometric input device, a video source, an audio source, a microphone, a web cam, a video camera, and/or another camera. The input device may communicate with the processor either by wire or wirelessly.
The computer may also include output devices, which may include any mechanism or combination of mechanisms that may output information from a computer system. An output device may include logic configured to output information from the computer system. Embodiments of the output device may include, e.g., but not limited to, a display and display interface, including displays, printers, speakers, cathode ray tubes (CRTs), plasma displays, light-emitting diode (LED) displays, liquid crystal displays (LCDs), vacuum fluorescent displays (VFDs), surface-conduction electron-emitter displays (SEDs), field emission displays (FEDs), etc. The computer may include input/output (I/O) devices such as, e.g., (but not limited to) a communications interface, cable, and communications path, etc. These devices may include, e.g., but are not limited to, a network interface card and/or modems. The output device may communicate with the processor either by wire or wirelessly. A communications interface may allow software and data to be transferred between the computer system and external devices.
The term “data processor” is intended to have a broad meaning that includes one or more processors, such as, e.g., but not limited to, processors that are connected to a communication infrastructure (e.g., but not limited to, a communications bus, cross-over bar, interconnect, or network, etc.). The term data processor may include any type of processor, microprocessor and/or processing logic that may interpret and execute instructions (e.g., a field programmable gate array (FPGA)). The data processor may comprise a single device (e.g., a single core) and/or a group of devices (e.g., multi-core). The data processor may include logic configured to execute computer-executable instructions configured to implement one or more embodiments. The instructions may reside in main memory or secondary memory. The data processor may also include multiple independent cores, such as a dual-core processor or a multi-core processor. The data processors may also include one or more graphics processing units (GPUs), which may be in the form of a dedicated graphics card, an integrated graphics solution, and/or a hybrid graphics solution. Various illustrative software embodiments may be described in terms of this illustrative computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the invention using other computer systems and/or architectures.
The term “data storage device” is intended to have a broad meaning that includes a removable storage drive, a hard disk installed in a hard disk drive, flash memories, removable discs, non-removable discs, etc. In addition, it should be noted that various electromagnetic radiation, such as wireless communication, electrical communication carried over an electrically conductive wire (e.g., but not limited to, twisted pair, CAT5, etc.) or an optical medium (e.g., but not limited to, optical fiber) and the like may be encoded to carry computer-executable instructions and/or computer data that embody embodiments of the invention over, e.g., a communication network. These computer program products may provide software to the computer system. It should be noted that a computer-readable medium that comprises computer-executable instructions for execution in a processor may be configured to store various embodiments of the present invention.
The embodiments illustrated and discussed in this specification are intended only to teach those skilled in the art how to make and use the invention. In describing embodiments of the invention, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. The above-described embodiments of the invention may be modified or varied, without departing from the invention, as appreciated by those skilled in the art in light of the above teachings. It is therefore to be understood that, within the scope of the claims and their equivalents, the invention may be practiced otherwise than as specifically described.
The following is an example of a further embodiment within the general concepts of this invention. The general concepts are not limited to this and/or the other specific embodiments that were described in detail to facilitate an explanation of some concepts of the current invention. The scope of the invention is defined by the claims.
This application claims priority to U.S. Provisional Application No. 63/123,909, filed Dec. 10, 2020, and to U.S. Provisional Application No. 63/193,987, filed May 27, 2021, which are incorporated herein by reference in their entirety. This invention was made with government support under grant EB028330 awarded by the National Institutes of Health. The government has certain rights in this invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2021/062702 | 12/9/2021 | WO |

Number | Date | Country
---|---|---
63193987 | May 2021 | US
63123909 | Dec 2020 | US