Systems for medical image visualization

Information

  • Patent Grant
  • Patent Number
    12,354,227
  • Date Filed
    Friday, August 4, 2023
  • Date Issued
    Tuesday, July 8, 2025
Abstract
A computer-implemented method includes obtaining a three-dimensional (3D) image of a region of a body of a patient, the 3D image having feature values. The 3D image is segmented to define one or more regions of interest (ROIs). At least one region of interest (ROI) feature threshold is determined. A background feature threshold is determined. A 3D model is generated from the 3D image based on the determined at least one ROI feature threshold, the determined background feature threshold, and the segmentation. The 3D model is output for display to a user.
Description
FIELD

The present disclosure generally relates to medical visualization, including systems and methods for generating (e.g., computing) three-dimensional (3D) models based on 2D or 3D medical images for use in image-guided surgery, among other contemplated medical uses, and for generating 3D renderings based on the 2D or 3D medical images (e.g., for output on an augmented reality display).


BACKGROUND

Near-eye display devices and systems, such as head-mounted displays, are commonly used in augmented reality systems, for example, for performing image-guided surgery. With such systems, a computer-generated three-dimensional (3D) model of a volume of interest of a patient may be presented to a healthcare professional who is performing the procedure, such that the 3D model is visible to the professional while the professional optically views an anatomical portion of the patient undergoing the procedure.


Systems of this sort for image-guided surgery are described, for example, in Applicant's U.S. Pat. Nos. 9,928,629, 10,835,296, 10,939,977, PCT International Publication WO 2022/053923, and U.S. Patent Application Publication 2020/0163723. The disclosures of all of these patents and publications are incorporated herein by reference.


SUMMARY

In accordance with several embodiments, systems, devices and methods are described that provide enhanced or improved display of 3D models in connection with image-guided medical procedures. For example, the 3D models may be displayed with reduced noise and increased image quality. In some instances, the 3D models may be displayed without background features (e.g., soft tissue is not displayed when it is bone tissue that is desired to be displayed). In some instances, the 3D models that are generated include implants or hardware (such as screws, rods, cages, pins, tools, instruments, etc.) but not certain types of tissue (e.g., soft tissue, nerve tissue) that are not the focus of the particular medical procedure. For example, the content of the 3D model or rendering that is generated for display may preferentially or selectively include only certain types of content that a clinical professional or other operator would want to see and not “background” content that the clinical professional or other operator does not need to see or does not want to see because it does not impact or affect the medical procedure (e.g., surgical or non-surgical therapeutic procedure or diagnostic procedure). In the example of a spinal surgical procedure, the background content that may be desired to be filtered out or selectively or preferentially not displayed may be soft tissue surrounding or between the vertebrae of the spine, and the content that may selectively or preferentially be displayed is the bone tissue (and optionally, any hardware, implant or instruments or tools within the bone, such as screws, rods, cages, etc.).


In accordance with several embodiments, the systems, devices and methods described herein involve automatically segmenting the image(s) (e.g., 3D computed tomography images, 3D magnetic resonance images, other 2D or 3D images) received from an imaging device scan prior to applying one or more 3D model generation algorithms or processes. In accordance with several embodiments, the segmentation advantageously reduces noise around the bone structure and provides a better “starting point” for the 3D model generation algorithms or processes. The segmentation may involve segmenting the 3D image into multiple separate regions, sections, portions, or segments. An entire anatomical portion of a subject (e.g., an entire spine or entire other bone, such as a hip, knee, shoulder, ankle, limb bone, cranium, facial bone, jaw bone, etc.) may be segmented or a subportion of the anatomical portion may be segmented.


Several embodiments are particularly advantageous because they include one, several or all of the following benefits: (i) using a segmented image for 3D image rendering and/or model building to achieve a more accurate and less noisy 3D model and/or visualization of a patient anatomy (e.g., a portion of the patient anatomy); and/or (ii) using artificial intelligence based segmentation in 3D model building to support low-quality intra-operative scanners or imaging devices; and/or (iii) applying different threshold values to different portions of the medical image to achieve better visualization of the patient anatomy; and/or (iv) enhancing 3D model building, when using a model building algorithm which may receive only a single threshold, by applying multiple thresholds; and/or (v) allowing a user to select different thresholds to be applied to different portions of a medical image (e.g., to improve or optimize visualization).


In accordance with several embodiments, a system for improving display of 3D models in connection with image-guided medical procedures (e.g., surgical or non-surgical therapeutic and/or diagnostic procedures) comprises or consists essentially of a wearable device (e.g., head-mounted unit such as eyewear) including at least one see-through display configured to allow viewing of a region of a body of a patient through at least a portion of the display and at least one processor (e.g., a single processor or multiple processors) configured to perform actions (e.g., upon execution of stored program instructions on one or more non-transitory computer-readable storage media). For example, the at least one processor is configured to receive a three-dimensional (3D) image of the region of the body of the patient, the 3D image having intensity values; segment the 3D image to define at least one region of interest (ROI) of the region of the body (e.g., a portion of a spine or other bone associated with the medical procedure); determine at least one ROI intensity threshold value of the at least one ROI; determine a background intensity threshold value; and generate a 3D rendering of the 3D image.


The generation of the 3D rendering may include, in the at least one defined ROI of the 3D image, rendering based on intensity values of the at least one ROI that satisfy a lowest threshold value of the at least one ROI intensity threshold value and the background intensity threshold value; and, in a background region of the 3D image, rendering based on intensity values of the background region that satisfy the background intensity threshold value. The background region of the 3D image may include a portion of the 3D image which is not an ROI.
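
The region-dependent threshold rule described above can be sketched in a few lines of code. The following is a minimal, illustrative sketch in Python/NumPy, assuming a voxel volume, a boolean ROI mask produced by a prior segmentation step, and example threshold values; the names and numbers are hypothetical and not part of the claimed system.

    import numpy as np

    def visible_voxel_mask(volume, roi_mask, roi_threshold, background_threshold):
        # volume: 3D array of intensity values (e.g., CT voxels).
        # roi_mask: boolean 3D array marking the segmented ROI(s).
        # Inside the ROI, keep voxels satisfying the lowest of the two thresholds.
        roi_level = min(roi_threshold, background_threshold)
        roi_visible = roi_mask & (volume >= roi_level)
        # Outside the ROI, keep only voxels satisfying the background threshold.
        background_visible = ~roi_mask & (volume >= background_threshold)
        return roi_visible | background_visible

    # Illustrative use with a synthetic volume and a block-shaped "ROI".
    volume = np.random.randint(-1000, 3000, size=(64, 64, 64)).astype(np.float32)
    roi_mask = np.zeros(volume.shape, dtype=bool)
    roi_mask[16:48, 16:48, 16:48] = True
    mask = visible_voxel_mask(volume, roi_mask, roi_threshold=300.0, background_threshold=2000.0)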


The at least one processor may also be configured to cause the 3D rendering to be output to the display of the wearable device.


In some embodiments, the intensity values are values of 3D image voxels of the 3D image.


In some embodiments, the wearable device is a pair of glasses or other eyewear. In some embodiments, the wearable device is an over-the-head mounted unit, such as a headset.


In some embodiments, the wearable device is configured to facilitate display of 3D stereoscopic images that are projected at a distance to align with a natural focal length of eyes (or a natural convergence or focus) of a wearer of the wearable device to reduce vergence-accommodation conflict.


The 3D image may be a computed tomography image, a magnetic resonance image, or other 3D image generated by another 3D imaging modality.


In some embodiments, the at least one processor is configured to display the 3D rendering in alignment by performing registration of the 3D rendering with the body region of the patient (e.g., using one or more markers, such as retroreflective markers that can be scanned or imaged by an imaging device of the wearable device).


In some embodiments, the 3D rendering is a 3D model. The 3D rendering may be output for display as a virtual augmented reality image. The virtual augmented reality image may be projected directly on a retina of the wearer of the wearable device. The virtual augmented reality image may be presented in such a way that the wearer can still see the physical region of interest through the display.


In some embodiments, the at least one processor is configured to generate the 3D rendering by changing the determined intensity values of the 3D image into a value which does not satisfy the lowest threshold value.


In some embodiments, the background region of the 3D image includes soft tissue.


In some embodiments, the at least one processor is further configured to repeatedly adjust the at least one ROI intensity threshold value and the background intensity threshold value according to input from a user; and repeatedly generate a 3D rendering of the 3D image based on the adjusted values of the at least two intensity thresholds. Only one of the threshold values may be adjusted in some embodiments.


In accordance with several embodiments, a system for improving display of 3D models in connection with image-guided surgery comprises or consists essentially of a head-mounted unit including at least one see-through display configured to allow viewing of a region of a spine of a patient through at least a portion of the display and at least one processor configured to (e.g., upon execution of program instructions stored on one or more non-transitory computer-readable storage media): receive a three-dimensional (3D) image of the region of the spine of the patient, the 3D image having intensity values; segment the 3D image to define multiple regions of interest (ROIs) of the spine (e.g., multiple vertebrae or multiple vertebral segments); determine a ROI intensity threshold value for each of the multiple ROIs of the spine; determine a background intensity threshold value; generate a 3D rendering (e.g., 3D model) of the 3D image; and cause the 3D rendering to be output to the display as a virtual augmented reality image. The generation of the 3D rendering includes, in the multiple ROIs of the 3D image, rendering based on intensity values of the multiple ROIs that satisfy a lowest threshold value of the determined ROI intensity threshold values and the background intensity threshold value and, in a background region of the 3D image, rendering based on intensity values of the background region that satisfy the background intensity threshold value. The background region of the 3D image includes a portion of the 3D image which is not an ROI.


In some embodiments, the intensity values are voxel values of the 3D image.


In some embodiments, the head-mounted unit is a pair of glasses or other form of eyewear, including eyewear without lenses. In some embodiments, the head-mounted unit is an over-the-head mounted unit (e.g., a headset).


In some embodiments, the display is configured to project images directly on a retina of a wearer of the head-mounted unit.


In some embodiments, the at least one processor is configured to display the 3D rendering in alignment by performing registration of the 3D rendering with the region of the spine.


In some embodiments, the at least one processor is configured to generate the 3D rendering by changing the determined intensity values of the 3D image into a value which does not satisfy the lowest threshold value.


In some embodiments, the at least one processor is further configured to repeatedly adjust the at least one ROI intensity threshold value and the background intensity threshold value according to input from a user and repeatedly generate a 3D rendering of the 3D image based on the adjusted values of the at least two intensity thresholds.


An embodiment of the present disclosure that is described hereinafter provides a computer-implemented method that includes obtaining a three-dimensional (3D) image of a region of a body of a patient, the 3D image having feature values. The 3D image is segmented to define one or more regions of interest (ROIs). The one or more regions of interest may include one or more portions of a bone or joint (e.g., a portion of a spine, individual vertebrae of a portion of a spine, a particular spinal segment, a portion of a pelvis or sacroiliac region, or other bone or joint). At least one region of interest (ROI) feature threshold is determined. A background feature threshold is determined. A 3D model is generated from the 3D image based on the determined at least one ROI feature threshold, the determined background feature threshold, and the segmentation. The 3D model is outputted for display to a user.


In some embodiments, the feature values are intensity values, the ROI feature threshold is an ROI intensity threshold, and the background feature threshold is a background intensity threshold.


In some embodiments, the intensity values are the 3D image voxel values, and the intensity thresholds are the 3D image voxel thresholds.


In an embodiment, the method further includes generating a 3D rendering of the 3D image, wherein the generation of the 3D rendering includes (i) in the at least one ROI of the 3D image, rendering based on feature values of the at least one ROI that satisfy the lowest threshold of the at least one ROI feature threshold and the background feature threshold, and (ii) in a background region of the 3D image, rendering based on feature values of the background region that satisfy the background feature threshold, wherein the background region of the 3D image includes a portion of the 3D image which is not an ROI. The 3D rendering is outputted for display to the user.


In another embodiment, generating the 3D model includes using a 3D model generation algorithm and providing a feature threshold as input to the 3D model generation algorithm.


In some embodiments, the 3D model generation algorithm is a marching cubes algorithm.
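
For illustration, a single-threshold surface-building step of this kind can be run with the marching cubes implementation in scikit-image. This is a minimal sketch, not the implementation of the disclosed system; the input file name and the iso-level value are assumptions.

    import numpy as np
    from skimage import measure

    # Hypothetical pre-processed volume (e.g., after segmentation-based suppression
    # of background voxels as described elsewhere herein).
    volume = np.load("ct_volume.npy")

    # Marching cubes accepts a single iso-level; here, the provided feature threshold.
    verts, faces, normals, values = measure.marching_cubes(volume, level=300.0)

    # verts and faces define the triangle mesh of the 3D model surface.
    print(verts.shape, faces.shape)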


In some embodiments, the provided feature threshold corresponds to one of the at least one ROI feature threshold.


In some embodiments, the provided feature threshold is determined based on the at least one ROI feature threshold and the background feature threshold.


In other embodiments, the provided feature threshold is selected as the lowest of the at least one ROI feature threshold value and the background feature threshold value.


In some embodiments, the generation of the 3D model includes selecting feature values of the 3D image satisfying the provided feature threshold and omitting portions of the background region of the 3D image with selected feature values from being an input to the generation of the 3D model.


In other embodiments, the omitting of portions of the background region includes changing the selected feature values of the portions of the background region into a value that does not satisfy the provided feature threshold.


In an embodiment, the determination of the at least one ROI feature threshold and of the background feature threshold includes receiving input values for at least one of: the at least one ROI feature threshold or the background feature threshold.


In some embodiments, the input values are received for the at least one ROI feature threshold and the background feature threshold.


In some embodiments, the input values are received from a user.


In some embodiments, the receiving of the input values from the user includes generating a Graphical User Interface (GUI) element to be displayed to the user, the GUI element allowing the user to adjust the input values, and the rendering and displaying of the 3D rendering is iteratively performed in correspondence to the user adjustment of the input values.
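
One possible way to provide such a GUI element is sketched below using matplotlib sliders; it is only an illustrative sketch, with a crude maximum-intensity-projection preview standing in for the actual 3D rendering, and all names and values are assumptions rather than the disclosed implementation.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    def render_preview(volume, roi_mask, roi_th, bg_th):
        # Crude 2D preview of the two-threshold rule: keep ROI voxels above the
        # lowest threshold and background voxels above the background threshold,
        # then take a maximum intensity projection along one axis.
        roi_level = min(roi_th, bg_th)
        visible = (roi_mask & (volume >= roi_level)) | (~roi_mask & (volume >= bg_th))
        return np.max(np.where(visible, volume, volume.min()), axis=0)

    volume = np.random.randint(-1000, 3000, size=(64, 64, 64)).astype(np.float32)
    roi_mask = np.zeros(volume.shape, dtype=bool)
    roi_mask[16:48, 16:48, 16:48] = True

    fig, ax = plt.subplots()
    plt.subplots_adjust(bottom=0.25)
    image = ax.imshow(render_preview(volume, roi_mask, 300.0, 2000.0), cmap="gray")

    roi_slider = Slider(fig.add_axes([0.15, 0.10, 0.7, 0.03]), "ROI threshold", -1000, 3000, valinit=300)
    bg_slider = Slider(fig.add_axes([0.15, 0.05, 0.7, 0.03]), "Background threshold", -1000, 3000, valinit=2000)

    def update(_):
        # Re-render iteratively as the user adjusts the input values.
        image.set_data(render_preview(volume, roi_mask, roi_slider.val, bg_slider.val))
        fig.canvas.draw_idle()

    roi_slider.on_changed(update)
    bg_slider.on_changed(update)
    plt.show()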


In an embodiment, in response to a request of the user, the 3D model is generated based on current input values for at least one of: the at least one ROI feature threshold or the background feature threshold.


In another embodiment, the input values are received for the at least one ROI feature threshold and the background feature threshold.


In some embodiments, the method further includes generating a default 3D model based on default values for the at least one ROI feature threshold and the background feature threshold, and displaying the default 3D model to the user.


In some embodiments, the determining of the at least one ROI feature threshold includes, when multiple ROIs are considered, determining respective multiple ROI feature thresholds.


In some embodiments, the determining of the at least one ROI feature threshold includes setting a feature threshold value that differentiates bone from soft tissue.


In other embodiments, the determining of the background feature threshold includes setting a feature threshold value that differentiates metal from bone.
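
On a CT radiodensity (Hounsfield Unit) scale, such differentiating thresholds might be chosen roughly as sketched below; the numeric cut-offs are typical approximate values given for illustration only and are not values prescribed by this disclosure.

    import numpy as np

    # Approximate, illustrative Hounsfield Unit cut-offs (assumptions, not prescribed values).
    BONE_VS_SOFT_TISSUE_HU = 300.0    # ROI feature threshold: keeps bone, drops most soft tissue
    METAL_VS_BONE_HU = 2500.0         # background feature threshold: keeps metal implants only

    def classify(volume_hu):
        # Label voxels of a CT volume (in HU) for visualization purposes.
        bone_or_denser = volume_hu >= BONE_VS_SOFT_TISSUE_HU
        metal = volume_hu >= METAL_VS_BONE_HU
        return bone_or_denser, metal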


In an embodiment, the method further includes using the 3D model with an image-guided system. The image-guided system and methods described herein can be surgical, non-surgical or diagnostic.


In another embodiment, the image-guided system is an augmented or mixed reality system including a direct see-through display, such as a Head Mounted Display (HMD). In some embodiments, the image-guided system is an augmented or mixed reality system that does not include a head mounted display/component or includes both head mounted and non-head mounted displays/components.


In some embodiments, the segmentation is performed based on one or more deep learning networks. In some embodiments, the deep learning networks are convolutional neural networks.


There is additionally provided, in accordance with another embodiment of the present disclosure, a computer-implemented method including obtaining a three-dimensional (3D) image of a region of a body of a patient, the 3D image having feature values. The 3D image is segmented to define one or more regions of interest (ROIs). At least one ROI feature threshold is determined. A background feature threshold is determined. A 3D rendering of the 3D image is generated, wherein the generation of the 3D rendering includes (i) in the at least one defined ROI of the 3D image, rendering based on feature values of the at least one ROI that satisfy the lowest threshold of the at least one ROI feature threshold and the background feature threshold, and (ii) in a background region of the 3D image, rendering based on feature values of the background region that satisfy the background feature threshold, wherein the background region of the 3D image includes a portion of the 3D image which is not an ROI. The 3D rendering is outputted for display to a user.


In some embodiments, the method further includes generating a 3D model based on the determined at least one ROI feature threshold, the determined background feature threshold, and the segmentation. The 3D model is displayed on a display.


In some embodiments, generating the 3D model includes using a 3D model generation algorithm and providing a feature threshold as input to the 3D model generation algorithm.


In an embodiment, the provided feature threshold is selected to be the lowest of the at least one ROI feature threshold and the background feature threshold.


In another embodiment, the provided feature threshold is the at least one ROI feature threshold, and the portions of the background region having selected feature values which do not satisfy the background feature threshold are omitted.


In some embodiments, the determination of the at least one ROI feature threshold and of the background feature threshold includes receiving input values for at least one of: the at least one ROI feature threshold or the background feature threshold.


In some embodiments, the method further includes generating a default 3D rendering based on default values for the at least one ROI feature threshold and the background feature threshold, and displaying the default 3D rendering to the user.


There is further provided, in accordance with another embodiment of the present disclosure, a system for image-guided surgery (or other procedures such as non-surgical procedures and diagnostics), the system including at least one display, and at least one processor. The at least one processor is configured to (i) receive a three-dimensional (3D) image of a region of a body of a patient, the 3D image having intensity values, (ii) compute a 3D model of the region based on at least two intensity threshold values applied to the 3D image, and (iii) cause the computed 3D model to be output for display on the at least one display.


In some embodiments, the at least one display includes a see-through display configured to allow viewing of a body of a patient through at least a portion of the display.


In some embodiments, the see-through display is included in a head-mounted unit. The head-mounted unit may eliminate attention shift and reduce (e.g., minimize) line-of-sight interruption.


In other embodiments, the computed 3D model is displayed in alignment with the body of the patient as viewed through the see-through display. The 3D model may be provided in a 3D stereoscopic view. The virtual images may be projected at a distance of approximately 50 cm in order to align with a normal focal distance of the eyes so as to reduce “vergence-accommodation conflict”, which can result in fatigue, headache, or general discomfort. The line of sight may be angled at approximately 30 degrees.


In other embodiments, the at least one processor is configured to display in alignment by performing registration of the 3D model with the body of the patient.


In some embodiments, the intensity values are the 3D image voxel values.


In an embodiment, the at least one display includes a stationary display. In some embodiments, the display is a display of a head-mounted device, including but not limited to glasses, goggles, spectacles, visors, monocles, other eyewear, or an over-the-head headset. In some embodiments, head-mounted displays are not used, or are used together with stand-alone displays, such as monitors, portable devices, tablets, etc. The display may be a hands-free display such that the operator does not need to hold the display.


The head-mounted device may alternatively be a wearable device on a body part other than the head (e.g., a non-head-mounted device). The head-mounted device may be substituted with an alternative hands-free device that is not worn by the operator, such as a portal, monitor or tablet. The display may be a head-up display or heads-up display.


In an embodiment, the at least one processor is further configured to segment the 3D image to define one or more regions of interest (ROIs), and the computation of the 3D model is further based on the segmentation.


In another embodiment, the at least two intensity threshold values include at least one ROI threshold intensity value and a background threshold intensity value, and the at least one processor is further configured to (i) compute the 3D model by selecting intensity values of the 3D image satisfying the lowest intensity threshold value of the at least one ROI threshold intensity value, and (ii) omit portions of a background region of the 3D image with intensity values between the lowest intensity threshold value of the at least one ROI threshold intensity value and the background threshold intensity value from being an input to the generation of the 3D model.


In some embodiments, the at least one processor is configured to generate the 3D model by changing the selected intensity values of the 3D image into a value which does not satisfy the lowest intensity threshold value.


In some embodiments, the at least two intensity thresholds include at least one region of interest (ROI) intensity threshold to be applied to one or more ROIs and a background intensity threshold to be applied to at least a background region of the 3D image, and the background region of the 3D image includes a portion of the 3D image which is not an ROI.


In other embodiments, the at least one processor is further configured to (i) repeatedly adjust the at least two intensity thresholds according to input from a user, and (ii) repeatedly generate a 3D rendering of the 3D image based on the adjusted values of the at least two intensity thresholds.


In an embodiment, a method is provided of generating at least one of a 3D model and images for display.


In an embodiment, a system is provided that includes at least one processor configured for facilitating generation of at least one of a 3D model and images, the system further including a display device configured to display the at least one of a 3D model and images.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for the treatment of a spine through a surgical intervention.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for the treatment of an orthopedic joint through a surgical intervention, including, optionally, a shoulder, a knee, an ankle, a hip, or other joint.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for the treatment of a cranium through a surgical intervention.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for the treatment of a jaw through a surgical intervention.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for diagnosis of a spinal abnormality or degeneration or deformity.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for diagnosis of a spinal injury.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for diagnosis of joint damage.


Also described and contemplated herein is the use of any of the apparatus, systems, or methods for diagnosis of an orthopedic injury.


In accordance with several embodiments, any of the methods described herein may include diagnosing and/or treating a medical condition, the medical condition comprising one or more of the following: back pain, spinal deformity, spinal stenosis, disc herniation, joint inflammation, joint damage, ligament or tendon ruptures or tears.


In accordance with several embodiments, a method of presenting one or more images on a wearable display during medical procedures, such as orthopedic procedures, spinal surgical procedures, joint repair procedures, joint replacement procedures, facial bone repair or reconstruction procedures, ENT procedures, cranial procedures or neurosurgical procedures, is described and/or illustrated herein.


For purposes of summarizing the disclosure, certain aspects, advantages, and novel features of embodiments of the disclosure have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the disclosure disclosed herein. Thus, the embodiments disclosed herein may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught or suggested herein without necessarily achieving other advantages as may be taught or suggested herein. The systems and methods of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein. The methods summarized above and set forth in further detail below describe certain actions taken by a practitioner; however, it should be understood that they can also include the instruction of those actions by another party. Thus, actions such as “adjusting a threshold” include “instructing the adjustment of a threshold.”


The present disclosure will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting features of some embodiments are set forth with particularity in the claims that follow. The following drawings are for illustrative purposes only and show non-limiting embodiments. Features from different figures may be combined in several embodiments. It should be understood that the figures are not necessarily drawn to scale. Distances, angles, etc. are merely illustrative and do not necessarily bear an exact relationship to actual dimensions and layout of the devices illustrated.



FIG. 1 is a schematic, pictorial illustration of an example system for image-guided surgery or other medical intervention utilizing a head-mounted display, in accordance with an embodiment of the disclosure.



FIG. 2 is a flowchart of steps performed to generate a 3D rendering and/or a 3D model based on a 3D medical image of at least a portion of a body of a patient, in accordance with an embodiment of the disclosure.



FIGS. 3A-3C show graphical user interface displays that include 3D renderings based on processed 3D images that facilitate the determination of feature thresholds used for computing respective 3D models, in accordance with embodiments of the disclosure.



FIG. 4 is a graphical user interface display that includes a generated 3D model of a volume of interest of a patient, the 3D model computed based on the 3D image used to generate the 3D rendering of FIG. 3C, in accordance with an embodiment of the disclosure.



FIG. 5 is a flowchart of steps performed to generate a three-dimensional (3D) model based on a 3D image of a portion of a body of a patient using a selected threshold value, in accordance with an embodiment of the disclosure.



FIGS. 6A-6D are example graphical user interface displays that include various views of a patient's spine, in accordance with an embodiment of the disclosure.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Overview

A three-dimensional (3D) model of a volume of interest of a patient may include irrelevant or undesired information around the volume of interest or around the model, such as irrelevant or undesired objects or tissue and/or image noise. A user may adjust or cause adjustment of a threshold (e.g., radiodensity threshold value such as using a Hounsfield Unit scale) of an image (e.g., a CT image or other 2D or 3D medical image), to suppress the irrelevant or undesired information. However, reducing irrelevant or undesired information this way may cause loss of some of the relevant or desired information. For example, a 3D image may include metal objects characterized by very high radiodensity values and bone characterized by lower radiodensity values. Setting a high enough threshold may show the metal objects and remove noise but may well erode or remove some bone structure features in the image.


In accordance with several embodiments, a pre-processing stage may be included to automatically segment the medical image (e.g., 3D CT scan) prior to generation of a 3D model (e.g., generating a 3D model by applying one or more 3D model generation algorithms or processes). The segmentation may advantageously reduce the noise around the bone structure and provide a better “starting point” for the 3D model generation. The segmentation may be applied to the entire medical image or to one or more portions, regions or segments of the medical image.


Image-guided surgery employs tracked surgical instruments and images of the patient anatomy in order to guide the procedure. In such procedures, a proper visualization of regions of interest of the patient anatomy, including tissue of interest, is of high importance. Enhanced or improved visualization may also advantageously provide increased user adoption and enhance user experience.


In particular, during image-guided surgery or other medical intervention (e.g., procedures employing augmented reality technology, virtual reality technology or mixed-reality technology), the 3D model of a volume of interest of the patient upon whom the surgery or other medical intervention (including diagnostic and/or therapeutic intervention) is being performed may be aligned with tools or devices used during the surgery or intervention and/or with the anatomical portion of the patient undergoing surgery or other medical intervention. Such 3D model may be derived from a 3D medical image, such as a 3D CT image or 3D MR image. In accordance with several embodiments, a proper modeling based on the 3D medical image is important for providing the user with the required and accurate information for navigating a medical tool on anatomical images derived from the 3D medical image (e.g., slices, 3D model and/or 2D images) and for accurate and/or proper augmentation (e.g., in alignment) of the 3D model with the optically viewed (e.g., with a near-eye display device) anatomical portion of the patient when augmented-reality is used.


Some embodiments of the present disclosure provide computer-implemented methods for generating a 3D model from at least one 3D medical image. The methods include obtaining one or more 3D images of at least a region of a body of a patient (e.g., including a cervical spine region, a thoracic spine region, a lumbosacral spine region, a cranial region, an ENT-associated region, a facial region, a hip region, a shoulder region, a knee region, an ankle region, a joint region, or a limb region), the 3D image having one or more feature values, such as image intensity values on a gray-scale, RGB values, gradients, edges and/or texture. In some embodiments, the 3D image is first segmented to include or identify one or more regions of interest (ROIs). At least one ROI feature threshold may be determined. At least one background feature threshold may also be determined. In some embodiments, a 3D model is generated from the 3D image, based on the determined at least one ROI feature threshold, the determined at least one background feature threshold, and the segmentation. The 3D model may then be saved for use in image-guided surgery and/or systems, e.g., to be displayed to a user on a display of a wearable (e.g., head-mounted or near-eye or direct see-through) display device and/or on a separate or stationary display monitor or device (e.g., a standalone or separate portal, tablet or monitor).


In some embodiments, the 3D image is processed to only show the at least one ROI and to only show background information above the background feature threshold. This may be performed, for example, by segmenting the image and applying different feature thresholds to different portions of the image. In such embodiments, irrelevant or undesired background information can be suppressed using a high background feature threshold, without desired features inside the at least one ROI being eroded.


As noted above, the techniques disclosed herein may advantageously present (e.g., cause to be displayed) a maximum or an increased amount of information obtained from ROIs of the 3D medical image and filter out some of the background of the 3D medical image, which may include, for example, noise or information of less interest. Accordingly, in some embodiments, the at least one processor may be configured to apply the lowest feature threshold to the ROIs (e.g., even if a specific ROI's feature threshold is different from the lowest feature threshold). In some embodiments, the at least one processor may be configured to apply the background feature threshold to the entire image while, for each region or segment of the image, using the lowest of the feature thresholds applied to that region or segment as the effective feature threshold for that region or segment. For example, for a specific ROI, both a background feature threshold and an ROI feature threshold are applied. One may expect the ROI feature threshold to be the lowest, but if the background feature threshold is the lowest, then the background feature threshold value would determine the threshold value for the specific ROI. For example, a background feature (e.g., intensity) threshold may be set to a higher value than any of one or more ROI feature (e.g., intensity) thresholds. As a result, the processed 3D image (e.g., processed via volume rendering) may show only information contained in the at least one ROI and background information (e.g., objects) of sufficient intensity, while removing irrelevant or undesired background information. The processed 3D image may show, in an ROI, bone structure with metal surgical objects in the background, without showing irrelevant anatomy (e.g., other bones or other types of tissue) or an image suffering from background image noise.


In some embodiments, at least two feature thresholds are provided that include at least one region of interest (ROI) feature threshold to be applied to the one or more ROIs, and a background feature threshold to be applied to a background region of the 3D image. The background region of the 3D image may include a portion of the 3D image which is not an ROI or the portion or all portions of the 3D image which are not an ROI. By suppressing the values of the background regions (e.g., values of 3D image voxels) which do not satisfy the background feature threshold, an effective unique threshold may be obtained. This suppression step may be used for generating a 3D model using an algorithm that accepts only one threshold value (e.g., marching cubes algorithm), while at the same time maximizing or otherwise increasing information obtained from ROIs of the 3D medical image and filtering out some of the background of the 3D medical image, which may be an area of less interest.
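
A minimal sketch of this suppression step is shown below, assuming NumPy arrays and the hypothetical names used in the earlier sketches. It folds the two thresholds and the segmentation mask into a single prepared volume that a single-threshold algorithm such as marching cubes can consume; it is an illustration of the idea, not the disclosed implementation.

    import numpy as np
    from skimage import measure

    def build_single_threshold_volume(volume, roi_mask, roi_threshold, background_threshold, c=1.0):
        # Suppress background voxels that do not satisfy the background threshold by
        # setting them below the ROI threshold (optionally lowered by a small constant c,
        # discussed further below), yielding an effective unique threshold.
        prepared = volume.copy()
        suppress = ~roi_mask & (volume < background_threshold)
        prepared[suppress] = roi_threshold - c
        return prepared

    # Example: run a single-threshold algorithm once, at the ROI threshold.
    volume = np.random.randint(-1000, 3000, size=(64, 64, 64)).astype(np.float32)
    roi_mask = np.zeros(volume.shape, dtype=bool)
    roi_mask[16:48, 16:48, 16:48] = True
    prepared = build_single_threshold_volume(volume, roi_mask, 300.0, 2000.0)
    verts, faces, _, _ = measure.marching_cubes(prepared, level=300.0)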


In some embodiments of the computer-implemented method, before the 3D model is computed, a 3D rendering based on a processed 3D image (e.g., a segmented image) is displayed to a user, such as a surgeon or other healthcare professional. The user may view the 3D rendering on a monitor or a display (e.g., a graphical user interface display on a monitor or on a near-eye display of a head-mounted, direct see-through, or near-eye display device). The graphical user interface display may allow the user to adjust the at least one ROI intensity threshold and/or the background intensity threshold, to thereby modify the look of the 3D rendering. This way, the user can advantageously optimize the rendered 3D image by selecting and/or identifying optimal and/or desired feature thresholds, based on which the 3D model will be computed.


The 3D rendering may be made of the original image voxels, or may be a result of further image processing steps applied to the 3D image, such as surface ray casting. Further image processing of the 3D image beyond producing a 3D rendering using the native 3D image voxels can have advantages. For example, using ray casting may facilitate interpolation between, e.g., voxel intensity values, to produce a smoother look. Also, some methods such as surface ray cast rendering use only a portion of the information included in the native 3D image (e.g., use only voxels encountered by rays, provided the voxels have feature values such as intensities above a threshold). In accordance with several embodiments, using only a subset of the voxels minimizes or otherwise advantageously reduces computation effort and time required to generate a 3D rendering. This reduction may be particularly important as the user manipulates the view of the rendering (e.g., views the rendering from different points of view), which requires repeated computations of the 3D rendering.
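
As a simplified illustration of this kind of first-hit rendering, the sketch below casts parallel rays along one image axis and records where each ray first meets a voxel above the threshold; real surface ray casting handles arbitrary viewing directions and interpolation, so this is only an assumption-laden toy version.

    import numpy as np

    def first_hit_depth_map(volume, threshold, axis=0):
        # Cast parallel rays along one axis; record the index of the first voxel
        # whose value satisfies the threshold. Only voxels encountered up to the
        # first hit contribute, i.e., a subset of the native 3D image.
        hits = volume >= threshold
        depth = np.argmax(hits, axis=axis).astype(np.float32)
        depth[~hits.any(axis=axis)] = np.nan     # rays that never reach the surface
        return depth

    volume = np.random.randint(-1000, 3000, size=(64, 64, 64)).astype(np.float32)
    depth = first_hit_depth_map(volume, threshold=300.0)   # 64x64 depth image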


In accordance with several embodiments, the 3D rendering generation process comprises the steps of:

    • In the at least one ROI of the 3D image, rendering based on feature values of the at least one ROI which satisfy the lowest threshold of the at least one ROI feature threshold and the background feature threshold.
    • In a background region of the 3D image, rendering based on feature values of the background region which satisfy the background feature threshold, wherein the background region of the 3D image comprises a portion of the 3D image which is not an ROI.


As noted above, to generate the 3D model, two thresholds (e.g., the ROI feature threshold and the background feature threshold) may be determined by a user (or automatically by at least one processor, e.g., based on accumulated data or based on trained machine learning algorithms or neural networks). However, some 3D model generation algorithms (e.g., surface building algorithms, such as the Marching Cubes algorithm), may receive as input only a single threshold value. In accordance with several embodiments, the techniques disclosed herein advantageously solve the limitation of only a single allowed input threshold by:

    • Using the lowest ROI feature threshold value with the algorithm (e.g., running the Marching Cubes algorithm) to delineate ROI regions.
    • Visualizing only ROI regions with feature values higher than a lowest ROI feature threshold.
    • Visualizing only portions of any other regions, including background regions, with feature values higher than a background feature threshold.


In accordance with several embodiments, background information (e.g., voxel values) having feature values higher than the ROI feature threshold but lower than the background feature threshold is removed (e.g., in order not to model and display irrelevant or undesired information). In some embodiments, such removal is performed by setting the value of such voxels to a value which does not satisfy the ROI feature threshold(s), e.g., equal to or lower than the ROI feature threshold (e.g., ROI threshold − C, where C is a constant non-negative number, C ≥ 0). In some embodiments, subtracting such a constant may allow further smoothing of the visualization (e.g., the 3D model).


As described above, using two thresholds, called herein TH ROI and TH Background, may allow showing ROI information and some selected background objects, respectively. If ROI information to be visualized has to meet multiple thresholding conditions (e.g., multiple tissue types have to be visualized, such as fat, muscle and bone), then two or more (e.g., multiple) ROI thresholds (e.g., one for each ROI type), TH(j) ROI, j=1, 2, . . . , may be defined and used in segmentation, volume rendering and/or generating a 3D model. In some embodiments, when only one threshold may be input to a 3D model generation algorithm, the lowest of {TH(j) ROI} may be used.


In one embodiment, a trained neural-network (NN) based segmentation algorithm is provided. The disclosed segmentation algorithm classifies different regions of the 3D image as ROI and background and delineates one from the other. For example, in orthopedic image-guided applications (such as surgery or other procedures, such as non-surgical or diagnostic procedures), the disclosed segmentation algorithm may be trained to classify the vertebral column and the Iliac and Sacrum as “Spine”. According to some embodiments, the vertebrae, each vertebra, the Iliac and/or the Sacrum may be segmented separately. According to some embodiments, screws, implants and/or inter-bodies included in the scan may also be classified by the algorithm as “Spine” (or the desired category to be shown). Soft tissues, the rib cage and instruments such as clamps, retractors and rods that are included in the scan may be classified as “background.” In some implementations, the segmentation of the spine is done on the whole spine anatomy included in the scan. In some implementations, the segmentation of the spine is performed on portions of the spine anatomy (e.g., just the lumbosacral region, just the lumbar region, just the thoracic region, just the sacral region, just the sacro-iliac region, just the cervical region, or combinations thereof) or on individual vertebrae, and/or on anatomical portions of individual vertebrae. The segmentation may further be used for creating a virtual 3D model of the scanned spine area, leaving out, for example, soft tissue, ribs and rods. However, in some embodiments, such left-out elements, such as ribs, may be segmented and classified as ROI.
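
Downstream of such a segmentation network, the per-voxel labels can be turned into the ROI (“Spine”) and background masks used by the thresholding steps above. The sketch below assumes a hypothetical labelmap and label convention; the actual label scheme and network output format are implementation-specific.

    import numpy as np

    # Hypothetical label convention for the labelmap produced by the trained network.
    SPINE_LABELS = [1, 2, 3]     # e.g., vertebrae, sacrum/iliac, implants classified as "Spine"
    BACKGROUND_LABEL = 0         # e.g., soft tissue, rib cage, clamps, retractors, rods

    def spine_and_background_masks(labelmap):
        # Split a per-voxel labelmap into an ROI ("Spine") mask and a background mask.
        spine_mask = np.isin(labelmap, SPINE_LABELS)
        return spine_mask, ~spine_mask

    # Illustrative use with a synthetic labelmap.
    labelmap = np.zeros((64, 64, 64), dtype=np.int32)
    labelmap[16:48, 16:48, 16:48] = 1
    spine_mask, background_mask = spine_and_background_masks(labelmap)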


In some embodiments, a segmentation method is applied for generating a 3D model without resorting to a subsequent model building step and algorithm. To this end, the segmentation may use ROI and background feature threshold values in the delineation process. In some embodiments, the segmentation algorithm uses the minimal ROI feature threshold value to segment a first group of regions (e.g., bone regions) in the 3D image, with values above the minimal ROI feature threshold value. Then, the segmentation algorithm uses the background feature threshold value to further segment a second group of regions in the 3D image, with values above the background feature threshold value. A processed image is then generated that visualizes only the first and second regions obtained by the segmentation algorithm. If the spatial resolution of the segmentation algorithm, as well as that of the native 3D image, is sufficiently high, the segmentation algorithm may be sufficient to create a 3D model (e.g., without resorting to surface building algorithms, such as the Marching Cubes algorithm).


In some embodiments, the at least one processor is configured to use the segmentation to generate a digitally reconstructed radiograph (DRR) of one or more ROIs only. To this end, the at least one processor may be configured to omit the background region, for example, by setting voxel feature values in the medical 3D image (e.g., an input 3D Digital Imaging and Communications in Medicine (DICOM) image) segmented as background to zero. The DRRs may be generated by summing feature values along one dimension or by using methods such as Siddon's algorithm, published in 1985 by Robert L. Siddon. For example, in Siddon's algorithm, the index of each voxel along a certain projection ray and the intersection length of that ray within that voxel may be computed by four multiplications. Refinements or modifications to Siddon's algorithm, or alternative algorithms (e.g., for calculating radiological paths through pixel or voxel spaces), may also be used.
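
A very simplified, parallel-projection version of such an ROI-only DRR is sketched below: background voxels are zeroed out and the remaining feature values are summed along one dimension. This is not Siddon's algorithm (which traces arbitrary rays and weights voxel values by intersection lengths), and the array names are assumptions.

    import numpy as np

    def drr_of_roi(volume, roi_mask, axis=0):
        # Zero out voxels segmented as background so they do not contribute,
        # then sum the remaining feature values along one dimension.
        roi_only = np.where(roi_mask, volume, 0.0)
        return roi_only.sum(axis=axis)

    volume = np.random.rand(64, 64, 64).astype(np.float32)
    roi_mask = np.zeros(volume.shape, dtype=bool)
    roi_mask[16:48, 16:48, 16:48] = True
    drr = drr_of_roi(volume, roi_mask)    # 64x64 projection image of the ROI only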


A further embodiment provides an augmented-reality system for image-guided surgery (or other procedures such as non-surgical procedures and diagnostics), the system comprising (i) a see-through augmented-reality display configured to allow viewing of a body of a patient through at least a portion of the display by the user (e.g., wearing a head-mounted unit), and (ii) at least one processor, which is configured to receive the 3D image of a region of the body of the patient, compute a 3D model of the region based on at least two intensity thresholds applied to the 3D image, and present the computed 3D model on the display in alignment with the body of the patient viewed through the display, e.g., for better spatial awareness.


In some embodiments, accurate and/or proper augmentation (e.g., alignment) of a navigated medical tool with images derived from the medical image (e.g., slices, 3D model, 2D images and/or DRR images) and, optionally, accurate and/or proper augmentation of the 3D model with an optically viewed anatomical portion is facilitated by a registration step of the 3D medical image, e.g., via artificial fiducials in the 3D medical image and by utilizing a tracking system. Specifically, when using a direct see-through or head-mounted display, accurate and/or proper augmentation of the 3D model with an optically viewed anatomical portion as viewed from the point of view of a professional wearing the head-mounted display, may be achieved by utilizing a tracking device for tracking at least the position of the head-mounted display relative to the anatomical portion. In some embodiments the tracking system may be mounted on the head-mounted display device. Systems of this sort using augmented-reality for image-guided surgery (or other procedures such as non-surgical procedures and diagnostics), are described, for example, in the above incorporated Applicant's U.S. Pat. Nos. 9,928,629, 10,939,977, PCT International Publication WO 2022/053923, and U.S. Patent Application Publication 2020/0163723.


The disclosure describes, for example, a method of generating a 3D model and/or images for display (e.g., to facilitate image-guided surgery using a direct see-through or head-mounted augmented reality display device) such as described and/or illustrated hereinafter. The disclosure further describes a system comprising at least one processor for generating a 3D model and/or images for display, and a display device configured to display the 3D model and/or images (e.g., to facilitate image-guided surgery using a head-mounted augmented reality display device) such as described and/or illustrated hereinafter.


Finally, as noted above, one common type of 3D medical image in use is a 3D CT image. Another type of 3D medical image that can be used is a 3D MR image. The method can use, mutatis mutandis, 2D or 3D medical images derived from other imaging modalities, such as ultrasound, positron emission tomography (PET), and optical coherence tomography (OCT). Therefore, the above-described methods may be applied to other types of medical acquisitions (e.g., medical scans or images), including electroencephalogram (EEG) imaging, and to other areas or organs of the body, such as the skull or cranium, knees, hips, shoulders, sacroiliac joints, ankles, ear, nose, throat, facial regions, elbows, other joint regions or orthopedic regions, gastrointestinal regions, etc. Accordingly, the feature values may be selected as any relevant physiological or electrophysiological values, such as blood and neural activation velocities, respectively, or speeds as derived from the image features.


The systems and methods described herein may be used in connection with surgical procedures, such as spinal surgery, joint surgery (e.g., shoulder, knee, hip, ankle, other joints), orthopedic surgery, heart surgery, bariatric surgery, facial bone surgery, dental surgery, cranial surgery, or neurosurgery. The surgical procedures may be performed during open surgery or minimally-invasive surgery (e.g., surgery during which small incisions are made that are self-sealing or sealed with surgical adhesive or minor suturing or stitching). However, the systems and methods described may be used in connection with other medical procedures (including therapeutic and diagnostic procedures) and with other instruments and devices or other non-medical display environments. The methods described herein further include the performance of the medical procedures (including but not limited to performing a surgical intervention such as treating a spine, shoulder, hip, knee, ankle, other joint, jaw, cranium, etc.).


System Description


FIG. 1 is a schematic, pictorial illustration of an example system 10 for image-guided surgery or other medical intervention utilizing a head-mounted display such as a direct see-through display, in accordance with an embodiment of the disclosure. FIG. 1 is a pictorial illustration of the system as a whole, including showing, by way of example, a head-mounted display unit 28 in the form of eyeglasses, spectacles, or other eyewear that is used in system 10. Alternative types of a head-mounted unit, such as ones based on a helmet, visor, or over-the-head mounted unit that the surgeon wears, may be used in system 10. Alternatively, other wearable or hands-free devices, units or displays may be used.


In accordance with several embodiments, system 10 is applied in a medical procedure on a patient 20 using image-guided intervention. In this procedure, a tool 22 is inserted via an incision (e.g., minimally-invasive or self-sealing incision) in the patient's back in order to perform a surgical intervention. Alternatively, system 10 and the techniques described herein may be used, mutatis mutandis, in other surgical or non-surgical medical or diagnostic procedures.


In the pictured embodiment, a user of system 10, such as a healthcare professional 26 (e.g., a surgeon performing the procedure), wears the head-mounted display unit 28. In various embodiments, the head-mounted display unit 28 includes one or more see-through displays 30, for example as described in the above-mentioned U.S. Pat. No. 9,928,629 or in the other patents and applications cited above that are incorporated by reference. Such displays may include an optical combiner that is controlled by a computer processor (e.g., a computer processor 32 in a central processing system 50 and/or a dedicated computer processor in the head-mounted display unit 28) to display an augmented-reality image to the healthcare professional 26. The image is presented on displays 30 (e.g., as a stereoscopic near-eye display) such that a computer-generated image is projected in alignment with the anatomy of the body of patient 20 that is visible to professional 26 through a portion of the display. In some embodiments, one or more projected computer-generated images include a virtual image of tool 22 overlaid on a virtual image of at least a portion of the patient's anatomy (such as slices, the 3D model or 2D images derived from the 3D medical image, e.g., of a portion of the spine). Specifically, a portion of the tool 22 that would not otherwise be visible to the healthcare professional 26 (for example, by virtue of being hidden by the patient's anatomy) is included in the computer-generated image.


In image-guided surgery or in surgeries utilizing augmented reality systems, a bone-anchoring device may be used as a fiducial marker or may be coupled with such a marker, for indicating the patient's body location in a coordinate system. In system 10, for example, an anchoring device is coupled with a marker that is used to register a region of interest (ROI) of the patient body with a CT scan or other medical images of the ROI (e.g., preoperative, or intraoperative CT or fluoroscopic image). During the procedure, a tracking system (e.g., an IR tracking system of the head-mounted display unit 28) tracks a marker mounted on the anchoring device and a tool that includes a tool marker. Following that, the display of the CT or other medical image data (which may include, for example, a model generated based on such data) on a near-eye display may be aligned with the professional's actual view of the ROI based on this registration. In addition, a virtual image of the tool 22 may be displayed on the CT or other 3D image model based on the tracking data and the registration. The user (e.g., professional 26) may then navigate the tool 22 based on the virtual display of the tool 22 with respect to the patient image data, and optionally, while the 3D model is aligned with the professional's view of the patient or ROI.


According to some aspects, the 3D image or model presented on display 30 is aligned with the patient's body. According to some aspects, allowed alignment error (or allowed misalignment) may not be more than about two to three mm or may be less than four mm or less than five mm or about one to two mm or less than one mm. In one embodiment, the allowed alignment error is less than one mm. In order to account for such a limit on error in alignment of the patient's anatomy with the presented images, the position of the patient's body, or a portion thereof, with respect to the head-mounted unit may be tracked. According to some aspects, the 3D model presented on display 30 is misaligned with the patient's body (e.g., having a misalignment greater than the allowed misalignment).


A patient marker 60 attached to an anchoring implement or device such as a clamp 58 or a pin, for example, may be used for this purpose, as described further hereinbelow. In some embodiments, a registration marker, such as registration marker 38, may be used for registering the 3D image with the patient anatomy, e.g., in a preceding registration procedure. In some embodiments registration marker 38 may be removed once the registration is complete. According to some embodiments, patient marker 60 and registration marker 38 may be combined into a single marker. Anchoring devices, markers and registration systems and methods of this sort for image-guided surgery are described, for example, in the above incorporated U.S. Pat. Nos. 9,928,629, 10,835,296 and 10,939,977, and in addition in Applicant's U.S. Patent Application Publication 2021/0161614, U.S. Patent Application Publication 2022/0142730, U.S. Patent Application Publication 2021/0386482, U.S. Patent Application Publication 2023/0009793, and PCT International Publication WO 2023/026229, all of which are incorporated herein by reference.


When an image of tool 22 is incorporated into the image that is displayed on head-mounted display unit 28, the position of the tool 22 with respect to the patient's anatomy should be accurately reflected. For this purpose, the position of the tool 22 or a portion thereof (e.g., a tool marker 40) is tracked by system 10 (e.g., via a tracking system built into the head-mounted display unit 28). In some embodiments, it is desirable to determine the location of the tool 22 with respect to the patient's body such that errors in the determined location of the tool 22 with respect to the patient's body are typically less than three mm or less than two and a half mm or less than two mm.


In some embodiments, head-mounted display unit 28 includes a tracking sensor 34 to facilitate determination of the location and orientation of head-mounted display unit 28 with respect to the patient's body (e.g., via patient marker 60) and/or with respect to tool 22 (e.g., via a tool marker 40). Tracking sensor 34 can also be used in finding the position and orientation of tool 22 (e.g., via tool marker 40) with respect to the patient's body (e.g., via patient marker 60). In one embodiment, tracking sensor 34 comprises one or more image-capturing or image-acquiring devices, such as a camera (e.g., infrared camera, visible light camera, and/or an RGB-IR camera), which captures or acquires images of patient marker 60 and/or tool marker 40, and/or other landmarks or markers. The tracking sensor 34 may comprise an optical tracking device, an RFID reader, an NFC reader, a Bluetooth sensor, an electromagnetic tracking device, an ultrasound tracking device, or other tracking device.


In the pictured embodiment, system 10 also includes a tomographic imaging device, such as an intraoperative computerized tomography (CT) scanner 41. Alternatively or additionally, processing system 50 may access or otherwise receive tomographic data from other sources; the CT scanner 41 itself is not necessarily an essential part of the present system 10 (e.g., it is optional). Regardless of the source of the tomographic data, processor 32 computes a transformation over the ROI so as to register the tomographic images with images and information (e.g., a depth map) that are displayed on head-mounted display unit 28. The processor 32 can then apply this transformation in presenting at least a part of the tomographic image on display 30 in registration with the ROI viewed through the display 30. In the disclosed technique, the processor 32 can generate a 3D model from the tomographic images (collectively referred to as the 3D image) and register the 3D model with the ROI, e.g., as viewed through the display 30.


In accordance with several implementations, in order to generate and present an augmented reality image on display 30, processor 32 computes the location and orientation of head-mounted display unit 28 with respect to a portion of the body of patient 20 (e.g., the patient's back or a portion thereof). In some embodiments, processor 32 also computes the location and orientation of tool 22 with respect to the patient's body. In one embodiment, a processor that is integrated within the head-mounted display unit 28 may perform these functions or a portion of these functions. Alternatively or additionally, processor 32, which is disposed externally to the head-mounted display unit 28 and which may be in wireless communication with the head-mounted display unit 28, may be used to perform these functions or a portion of these functions. Processor 32 can be part of processing system 50, which additionally includes an output device 52 (e.g., a display, such as a monitor) for outputting information to an operator of the system, memory, and/or an input device 54 (such as a pointing device, a keyboard, a mouse, a trackpad, a pedal or touchscreen display, etc.) to allow the operator (e.g., professional 26 or an assistant to professional 26) to input data into the system 10.


In general, in the context of the present description, when a computer processor is described as performing certain steps, these steps may be performed by external computer processor 32 and/or a computer processor that is integrated within the head-mounted display unit 28. The processors described herein (e.g., processor 32) may include a single processor or multiple processors (e.g., multiple processors running in parallel to reduce computing time). If multiple processors are used, they may be communicatively coupled to each other over a network (e.g., a wired or wireless communications network). The processor or processors carry out the described functionality under the control of suitable software, which may be downloaded to system 10 in electronic form, for example over a network, and/or stored on tangible, non-transitory computer-readable media, such as electronic, magnetic, or optical memory. In some embodiments, processing system 50 may be integrated in head-mounted display unit 28. In some embodiments, system 10 may include a see-through and/or augmented reality display which is not head-mounted. In such systems, the display may be mounted to patient 20 or to the operating table and positioned such that professional 26 may be able to view the anatomy of patient 20 and/or the operation site through at least a portion of the display. The display position may be adjustable.


In some embodiments, the 3D image, 3D rendering and 3D model of the present disclosure may be displayed while allowing user interaction as described herein on a display which is not a see-through display and/or a head-mounted display, such as a display of a workstation, a personal computer, a terminal, a tablet or a mobile device.


Visualizing a 3D Medical Image


FIG. 2 is a flowchart of steps performed to visualize a 3D medical image of a portion of a body of a patient, in accordance with an embodiment of the disclosure. The process can run automatically without user input, by one or more processors running all of the steps. Optionally, a user, such as the professional 26, may give input in steps 206 and/or 218, as described below. In some embodiments, the processor can be more than one processor, such that various steps are performed by different processors or processing units. In some embodiments, a single processor runs all of the steps. Thus, when a processor is mentioned herein, it may be referring to a single processor or multiple processors or processing units.


The process begins by the processor obtaining a 3D medical image (e.g., a CT scan) of a volume of the body portion, at a medical image obtaining step 202.


In general, e.g., depending on the clinical application, various medical image formats or modalities may be considered, such as MRI, PET or ultrasound images. In the context of this disclosure, the term "obtaining" may refer to, inter alia, acquiring, receiving, and/or accessing the 3D image and does not necessarily require the system to include the imaging device and/or the method to include the image-acquiring operation or step.


Next, the processor segments the medical image to delineate at least one ROI (e.g., a spine section or region) from the rest of the imaged volume, at an image segmentation step 204. Various segmentation methods may be used, such as one that uses a trained convolutional neural network (CNN, such as a U-Net or V-Net).
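By way of illustration only, the following is a minimal sketch of such a CNN-based segmentation step, assuming the PyTorch and MONAI libraries; the untrained network, its architecture parameters and the placeholder volume are assumptions standing in for a trained model and a real CT scan, not the networks of the disclosed embodiments:

```python
import numpy as np
import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference

# Hypothetical 3D U-Net; in practice a trained network and its weights would be loaded.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2))
model.eval()

# Placeholder CT volume with shape (batch, channel, depth, height, width).
ct = torch.from_numpy(np.random.rand(1, 1, 128, 128, 128).astype(np.float32))

with torch.no_grad():
    # Tiled inference over the full volume to bound memory use.
    logits = sliding_window_inference(ct, roi_size=(96, 96, 96),
                                      sw_batch_size=1, predictor=model)

# Boolean mask: True for voxels labeled as ROI (e.g., spine), False for background.
roi_mask = logits.argmax(dim=1).squeeze(0).numpy().astype(bool)
```

A per-vertebra or per-region segmentation of the kind described below could be obtained analogously with a multi-class output, one label per vertebra or anatomical portion.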


In some embodiments, a 3D image of the spine is segmented into spinal regions (e.g., cervical, thoracic, lumbar, sacral). In some embodiments, a 3D image of the spine is segmented into individual 3D vertebrae. In some embodiments, a 3D image of the spine is further segmented to indicate anatomical regions or portions of a vertebra (for example, spinous process, articular processes, transverse processes, vertebral body, centrum, posterior vertebral arch or neural arch). Bones other than the bones of the spine or other tissue may also be segmented into various regions, into individual components, and/or into anatomical portions of individual components. Such segmentation, although requiring more computation resources, may provide a better segmentation of the spine or any other such complex structure. This segmentation operation can advantageously be carried out by deep learning techniques, using one or more trained CNNs. The sacrum and ilium may be segmented in this manner, as well, using a separate neural network from the neural network used for the vertebrae or the same neural network.


In some embodiments, a combination of three networks may be used for this purpose. The networks are fully convolutional networks, e.g., based on the U-Net architecture.


The processor may obtain at least one ROI threshold level or value and a background threshold level or value, TH1 ROI and TH2 background, respectively, at a thresholds obtaining step 206. Examples of different ROI intensity threshold and background intensity threshold levels are described in connection with FIGS. 3A-3C below. In the examples given by FIGS. 3A-3C, the ROI and background thresholds are HU thresholds that govern CT 3D visualization of mainly bone and metal. In these figures, which relate to an orthopedic application concerning a spine, the main aim of the background threshold is to visualize metal elements that are in the background section (e.g., external to the spine), such as clamps and retractors, without adding irrelevant or undesired information (e.g., background noise, such as noisy voxel levels, having a radiodensity value which does not satisfy the background threshold).


In another example, relevant to another medical application, the ROI and background thresholds can be MRI intensities set to optimize, for example, an MRI 3D visualization of a joint or of at least a portion of a brain. In other embodiments, for example to distinguish between multiple tissue types, two or more (e.g., multiple) ROI thresholds can be defined and used together with the background threshold in segmentation, rendering a 3D image and/or generating a 3D model.


In an image processing step 210, the processor utilizes the segmentation and/or the thresholds to generate a processed 3D image. The processing may include reducing the feature values of voxels classified as background voxels that have feature values which satisfy the ROI feature threshold value but do not satisfy the background feature threshold value. Thus, a 3D model generated based on the processed image (e.g., the image with altered feature values) and the ROI feature threshold may visualize tissue features inside ROIs (e.g., bone tissue) while irrelevant or undesired background information is suppressed.
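A minimal sketch of this suppression step follows, assuming the 3D image is held as a NumPy array of feature values and the segmentation is available as a boolean ROI mask; the function and variable names are illustrative rather than part of the disclosed implementation:

```python
import numpy as np

def suppress_background(volume, roi_mask, th_roi, th_background):
    """Reduce background feature values so they drop out of the 3D model.

    volume        -- 3D array of feature values (e.g., HU for a CT scan)
    roi_mask      -- boolean array, True inside the segmented ROI(s)
    th_roi        -- ROI feature threshold used for model generation
    th_background -- background feature threshold (assumed >= th_roi here)
    """
    processed = volume.astype(float)
    background = ~roi_mask
    # Background voxels that satisfy the ROI threshold but not the background
    # threshold are pushed below the ROI threshold so they are not modeled.
    to_suppress = background & (processed >= th_roi) & (processed < th_background)
    processed[to_suppress] = th_roi - 1.0
    return processed
```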


The processed 3D image of step 210 may be used by the processor for computing a 3D model, at a 3D model creation step 212. At step 212, the processor, in some embodiments, runs a 3D model generation application (e.g., a surface building algorithm), such as the marching cubes algorithm described by Lorensen and Cline in a paper titled, "Marching cubes: A high resolution 3D surface construction algorithm," published in ACM SIGGRAPH Computer Graphics, Volume 21 (4): 163-169, July 1987, the content of which is incorporated herein by reference. As described above, the marching cubes algorithm may use a threshold value, e.g., the lowest TH ROI value, to generate the 3D model. In some embodiments, the 3D model comprises a surface of an anatomy in the ROI (e.g., the spine), and possibly artificial elements as well, depending on the TH ROI and TH background threshold settings. An example of a generated 3D model is shown in FIGS. 3A-3C, described below.
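For example, an iso-surface at the selected threshold may be extracted with scikit-image's marching cubes implementation; in this sketch the random placeholder volume and the threshold value stand in for the processed 3D image of step 210 and the selected TH ROI value:

```python
import numpy as np
from skimage import measure  # scikit-image marching cubes implementation

processed = np.random.rand(64, 64, 64) * 2000.0 - 1000.0  # placeholder HU-like volume
th_roi = 300.0                                             # placeholder ROI threshold

# Extract the iso-surface at the selected threshold value.
verts, faces, normals, values = measure.marching_cubes(processed, level=th_roi)
print(verts.shape, faces.shape)
```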


Steps 222 and 224 include generating a suitable file of the 3D model, to be used, for example, with a near-eye display (e.g., of head-mounted display unit 28) in image-guided surgery and/or with any other display, such as a workstation display. In an optional 3D model cropping step 222, the processor may define or may receive a definition (e.g., via user input) of a cropping box, such as shown in FIG. 4, and may crop the 3D model accordingly. Finally, at a model saving step 224, the processor saves the 3D model (e.g., the cropped 3D model) into a file of a suitable format (e.g., PLY format), in a memory, to be later uploaded to the processor or any other processor (e.g., a dedicated processor of a near-eye display of the head-mounted display unit 28).
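Continuing the previous sketch, cropping and export to a PLY file might look as follows; the cropping bounds, file name and the use of the trimesh library are assumptions for illustration only:

```python
import numpy as np
import trimesh
from skimage import measure

# Placeholder processed volume and model-generation threshold (see earlier sketches).
processed = np.random.rand(64, 64, 64) * 2000.0 - 1000.0
th_roi = 300.0

# Hypothetical axis-aligned cropping box in voxel coordinates (z, y, x bounds),
# e.g., received from user input.
z0, z1, y0, y1, x0, x1 = 8, 56, 8, 56, 8, 56
cropped = processed[z0:z1, y0:y1, x0:x1]

# Mesh the cropped volume and write the surface to a PLY file.
verts, faces, normals, _ = measure.marching_cubes(cropped, level=th_roi)
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("spine_model.ply")
```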


In an optional 3D image rendering step 208, the 3D image obtained in step 202 is displayed on a monitor as a 3D visualization (e.g., a 3D volume rendering). A user, such as a medical professional, can optimize the 3D visualization for the required or desired medical application (e.g., for spine image-guided surgery) by adjusting the two feature thresholds obtained (e.g., received, uploaded, etc.) in step 206. According to some embodiments, the initial 3D rendering is performed based on predetermined default values for TH1 ROI and TH2 background. According to some embodiments, the thresholds may be iteratively adjusted by the user. Final threshold values may be set or determined, for example, once a user input indicating that the threshold adjustment process is complete, or a user request for generation of the 3D model, is received.


In an optional image cropping step 218, the processed 3D image obtained in step 210 is displayed on a monitor so that the user can crop the portion of the processed 3D image that is input to the modeling step (e.g., instead of the processor performing this step as part of, for example, step 210). According to some embodiments, the cropping may be performed following step 208 or as part of step 208 and/or prior to step 210.


In an additional optional step (not illustrated), the generated 3D model is displayed on a display (e.g., a stationary display such as a workstation and/or a near-eye display of head-mounted display unit 28) for the user's review.


The flowchart of FIG. 2 is presented by way of example and is simplified for clarity of presentation. The process may include additional or alternative steps, such as generating a 3D model from another imaging modality (e.g., based on a volume rendering obtained using ultrasound or MRI or fluoroscopy), or directly from a high-resolution segmentation, skipping the step of volume rendering, as described above.



FIGS. 3A-3C are exemplary Graphical User Interface (GUI) displays showing 3D renderings based on processed CT images (e.g., segmented CT images) of a spine that can be used for selecting or determining optimal or desired ROI intensity threshold and background intensity threshold values for computing the 3D model, in accordance with embodiments of the disclosure. The 3D renderings may be viewed by a user who can determine or adjust the two thresholds of step 206 above. The user may set or adjust the ROI and background intensity thresholds by adjusting a GUI element, such as by sliding the ROI TH sliders (302, 312, 322) and the background sliders (304, 314, 324), respectively, at the bottom of the 3D rendering screen. The 3D rendering will change in correspondence with the change in thresholds and based on the segmentation, visualizing the data that will be included in the 3D model for each selection of thresholds or for each selected combination of thresholds. The user may then optimize the visualization or rendering, and the threshold values for the optimized or desired rendering are provided as input to the 3D image processing and/or modeling. In FIGS. 3A-3C, the value of a threshold increases as the threshold slider is slid to the left. The slider may be moved by touching and dragging on the screen and/or by use of a computer mouse, keyboard, joystick, voice-activated control, or other input device.



FIG. 3A is a graphic display of a rendering 310 of a 3D image with threshold sliders set as follows: ROI TH slider 302 at 100%, meaning the ROI intensity TH value equals the minimum scan radiodensity value and everything within the segmented area is shown, and background threshold slider 304 at 0%, meaning the background intensity TH value equals the maximum scan radiodensity value and all information outside the segmented portion is blocked.



FIG. 3B is a graphic display of a rendering of a 3D image with threshold sliders set as follows: ROI threshold slider 312 is set at 100%, meaning the ROI intensity TH value equals the minimum scan radiodensity value and all tissue is shown within the segmented portion. Background threshold slider 314 is set at 22%, meaning the background intensity TH value is high, but such that metal retractors 318 and clamp 320 outside the segmented spine are shown.



FIG. 3C is a graphic display of a rendering of a 3D image with threshold sliders set as follows: ROI threshold slider 322 is set at 92%, meaning the ROI intensity TH value is a low scan radiodensity value and most tissue is shown within the segmented portion. Background threshold slider 324 is set at 47%, meaning the background intensity TH value is a mid-range value, but such that the heads of metal screws 340 implanted in the bone and protruding from the spine are shown.


Due to the relatively high background threshold, e.g., set at about the middle of the intensity range, irrelevant or undesired signals in the background, such as image noise and soft and bone tissue, are not shown, so that the disclosed processed 3D image visualizes essentially only the ROIs.


In some embodiments, in generating the rendering, the one or more ROI feature thresholds would be applied to the one or more ROIs respectively while the background feature threshold would be applied only to the background portion of the image.


In some embodiments, as illustrated in FIGS. 3A-3C, when generating the rendering, the one or more ROI feature thresholds would be applied to the one or more ROIs respectively, while the background feature threshold would be applied to the entire image (i.e., including the one or more ROIs). In such a case, in the one or more ROIs, rendering may be performed based on the lowest threshold applied (i.e., the lowest of the respective ROI feature threshold and the background feature threshold). In such an embodiment, the user may view a rendering or visualization of the medical image without the effect of the segmentation by simply setting the background feature threshold to a value lower than or equal to the one or more ROI feature thresholds. Following that, a single feature threshold value would apply to the entire scan or image without a distinction between different portions of the image (e.g., ROI vs. background).
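The per-voxel effect of this rule can be sketched as follows, assuming an integer label volume from the segmentation and a dictionary of ROI thresholds (all names here are illustrative):

```python
import numpy as np

def effective_threshold_map(roi_labels, roi_thresholds, th_background):
    """Per-voxel threshold when the background threshold applies to the whole image.

    roi_labels     -- integer label volume (0 = background, 1..N = segmented ROIs)
    roi_thresholds -- dict mapping ROI label -> ROI feature threshold
    th_background  -- background feature threshold
    """
    th_map = np.full(roi_labels.shape, float(th_background))
    for label, th_roi in roi_thresholds.items():
        # Inside each ROI, the lower of the two thresholds governs the rendering.
        th_map[roi_labels == label] = min(th_roi, th_background)
    return th_map
```

Setting the background threshold at or below every ROI threshold makes the map uniform, which corresponds to viewing the image without the effect of the segmentation, as described above.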



FIG. 4 is a graphical user interface display showing a 3D model 400 of a volume of interest, the model 400 computed based on the processed 3D image of FIG. 3C, in accordance with an embodiment of the disclosure. As seen, the model 400 is located inside a box 405 having marked facets 415. The disclosed 3D model clearly presents a spine section implanted with metal screws 440, with a clean background (e.g., empty of, or without, irrelevant or undesired information). Such a 3D model is suitable for presentation and/or augmentation over the optically viewed anatomical portion of the patient (e.g., as viewed via the near-eye display of head-mounted display unit 28).



FIG. 5 is a flowchart of steps performed to generate a three-dimensional (3D) model based on a 3D image of a portion of a body of a patient using selected threshold values, in accordance with an embodiment of the disclosure. The method according to the presented embodiment carries out a process that begins with obtaining a 3D medical image, having feature values, of at least a portion of an organ or anatomy of a patient (e.g., an entire spine or a region of a spine, such as a lumbosacral region, a cervical region, a thoracic region, and/or a sacroiliac region), at a 3D medical image receiving step 502. The features may be directly extracted from the medical image, as in the case where the feature is intensity, or may require an additional processing or computation operation, for example, as in the case where the feature is a gradient. The 3D medical image may be obtained, for example, via provided input, or may be uploaded or downloaded via the internet. In some embodiments, the 3D image may be obtained by capturing the medical image via a medical imaging device.


Next, a processor segments the 3D medical image to define one or more ROIs (e.g., portions of spine regions), at 3D medical image segmentation step 504.


At a threshold value determination step 506, the processor determines at least one ROI feature threshold, such as an image intensity threshold value. At a threshold value determination step 508, the processor determines a background feature threshold, such as an image background intensity threshold value.


The determination of the at least one ROI feature threshold and of the background feature threshold of steps 506 and 508 may include receiving input values (e.g., from the user) for the at least one ROI feature threshold or for the background feature threshold or for both. In some implementations, the receiving of the input values from the user comprises generating a GUI element to be displayed to the user, such as sliders 302, 304, 312, 314, 322 and 324 of FIGS. 3A-3C. The GUI element may allow the user to adjust the input values, while the 3D rendering and the displaying of the 3D rendering to the user are iteratively performed in correspondence with the user's adjustment of the input values. In some embodiments, the one or more ROI feature thresholds and the background feature threshold are predetermined. In some embodiments, a default value for some or for each of the one or more ROI feature thresholds and the background feature threshold is predetermined. A 3D rendering based on these default values may be displayed to the user as a default 3D rendering. The user may then adjust the threshold values as described above.


At step 510, the processor selects or determines the threshold to be input to a 3D model generation algorithm based on the determined one or more ROI feature thresholds and the background feature threshold. In some embodiments, the lowest ROI feature threshold may be selected or determined as the input threshold for the 3D model generation. In some embodiments, the lowest of the one or more ROI feature thresholds and the background feature threshold may be selected.
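A compact sketch of this selection step, assuming the ROI thresholds are kept in a dictionary keyed by ROI label (the function name and flag are illustrative):

```python
def select_model_threshold(roi_thresholds, th_background, include_background=False):
    """Pick the single threshold handed to the 3D model generation algorithm.

    roi_thresholds     -- dict mapping ROI label -> ROI feature threshold
    th_background      -- background feature threshold
    include_background -- if True, also consider the background threshold
    """
    lowest_roi = min(roi_thresholds.values())
    return min(lowest_roi, th_background) if include_background else lowest_roi
```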


At threshold values checking step 512, the processor checks if the background feature threshold value is greater than the threshold value selected to be provided to the 3D model generation algorithm (step 510 above).


If the result of the check performed in step 512 is positive, i.e., the background feature threshold value is greater than the threshold value selected in step 510, then selected feature values in the background (e.g., feature values associated with voxels segmented as background in step 504 above) are reduced or suppressed in a step 514. Background feature values which satisfy the model generation threshold (i.e., the threshold selected to be input to the 3D model generation algorithm in step 510 above) but do not satisfy the background feature threshold are reduced to a value equal to or below the model generation threshold. In some embodiments, such background feature values are reduced to a value equal to the value of the model generation threshold minus a constant. Thus, background elements (e.g., voxels) having feature values which do not satisfy the background feature threshold are not considered for the model generation, or are excluded from the model generation algorithm input.
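Paralleling the earlier suppression sketch, the check of step 512 and the reduction of step 514, including the "threshold minus a constant" rule, might be sketched as follows (the names and the margin value are assumptions):

```python
import numpy as np

def prepare_model_input(volume, roi_mask, th_model, th_background, margin=1.0):
    """Suppress background voxels only when the background threshold exceeds
    the model-generation threshold (a sketch of steps 512-514).

    th_model      -- single threshold selected for model generation (step 510)
    th_background -- background feature threshold
    margin        -- constant subtracted so suppressed voxels fail th_model
    """
    processed = volume.astype(float)
    if th_background > th_model:  # check of step 512
        background = ~roi_mask
        to_suppress = background & (processed >= th_model) & (processed < th_background)
        processed[to_suppress] = th_model - margin
    return processed
```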


At a step 516, a 3D model is generated by a model generation algorithm that accepts the single 3D image feature threshold value selected at step 510. An example of such an algorithm is the aforementioned marching cubes algorithm.


Finally, the processor displays (or generates an output for display) the generated 3D model to the user, at a 3D model displaying step 518.


The flowchart of FIG. 5 is presented by way of example and is simplified for clarity of presentation. The process may include alternative steps, such as determining threshold values that relate to different medical imaging modalities.



FIGS. 6A-6D are example GUI displays 600 that include various views of a patient's spine, in accordance with an embodiment of the disclosure. FIGS. 6A-6D show an illustration of image data displayed during a medical procedure for inserting pedicle screws into a patient's spine. The image data include a display of a 3D model or rendering 601 of an ROI (e.g., a portion of the patient's spine), and virtual images of a tool 602 and a pedicle screw 604. In some embodiments, 3D model 601 may be generated as disclosed hereinabove. The GUI displays 600 in FIGS. 6A-6D further include various views of the patient spine and pedicle screw 604: X-ray lateral view 606, X-ray anteroposterior (AP) view 608, 3D lateral view 610 and 3D AP view 612, displayed in different combinations.


The displays shown in FIGS. 6A-6D are not augmented-reality displays and may be displayed, for example, on a workstation. However, these views may also be presented on a see-through display of a wearable, hands-free and/or head-mounted display system while 3D model 601 is overlaid on the patient anatomy, optionally in alignment, as described hereinabove. The different views (606, 608, 610 and/or 612) may be displayed in different sections of the see-through display, e.g., in a top portion or a side portion of the display, such that they do not interfere with the user's view (e.g., direct see-through view) of the physical ROI through the display.


A user may select which view will be presented in each view display window, such as windows 614 and 616. The different views may be presented in the GUI display, for example, via a drop-down menu, such as drop-down menu 618, associated with each view display window. The GUI display 600 shown in FIGS. 6A-6D includes two view display windows. Other embodiments may include a different number of view display windows. In some embodiments, the user may select which view to display in which view display window (e.g., via a drop-down menu) and switch between the views during the procedure, as desired. Accordingly, the user may generate different combinations of views during the procedure, allowing him or her to receive maximal and proper information for each maneuver, occurrence and/or phase of the procedure.


X-ray views 606 and 608 are X-ray-like images or virtual X-ray images derived from the 3D medical image as viewed from a fixed position, specifically lateral and AP, respectively. Lateral and AP X-ray-like or virtual X-ray views may facilitate navigation, while the AP view may allow, for example, symmetric screw insertion planning (e.g., during minimally invasive surgical (MIS) procedures). The virtual X-ray images may be generated by the processor from the 3D medical image by generating digitally reconstructed radiographs (DRRs). In some embodiments, the processor may utilize the segmentation to digitally reconstruct radiographs from the segmented 3D medical image. The feature values of voxels segmented as background (i.e., not ROI) may be set to zero, thus generating X-ray views of the ROI only (e.g., without noise or with reduced noise), as shown, for example, in FIGS. 6A-6D. In some embodiments, a GUI element such as slider 620 may be generated to allow the user to adjust the DRR image visibility threshold (e.g., a sum of voxel intensity values). Each change of the visibility threshold may cause the processor to generate a new DRR based on the adjusted visibility threshold.
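A much-simplified sketch of such a segmented, DRR-like projection, using a parallel (rather than perspective) projection and NumPy arrays; the axis convention, normalization and thresholding here are assumptions for illustration:

```python
import numpy as np

def virtual_xray(volume, roi_mask, axis=1, visibility_threshold=0.0):
    """Project the segmented 3D image into a 2D virtual X-ray (parallel projection).

    volume               -- 3D array of feature values (e.g., HU)
    roi_mask             -- boolean array, True inside the segmented ROI(s)
    axis                 -- projection axis (one axis for AP-like, another for lateral-like)
    visibility_threshold -- user-adjustable cutoff on the summed intensities
    """
    masked = np.where(roi_mask, volume, 0.0)      # background voxels set to zero
    drr = masked.sum(axis=axis)                   # line integrals along the view axis
    drr = np.where(drr >= visibility_threshold, drr, 0.0)
    peak = drr.max()
    return drr / peak if peak > 0 else drr        # normalize for display
```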


In some embodiments, X-ray views 606 and 608 may display 2D X-ray images of the patient anatomy captured by an X-ray imaging device such as a fluoroscope. In some embodiments, the X-ray images may be registered with the 3D medical image.


3D views 610 and 612 display the 3D model as viewed from a fixed position, specifically lateral and AP respectively. The 3D view may facilitate visualization and orientation and may simplify entry point location.


Although FIGS. 6A-6D show X-ray views and 3D views from lateral and AP points of view, other points of view or angles of view may be displayed alternatively or additionally. It should be noted that lateral and AP points of view may be specifically advantageous for medical procedures or interventions that the professional may perform via a lateral approach (e.g., by positioning the patient on his or her side), such as a Lateral Lumbar Interbody Fusion (LLIF) procedure.


In some embodiments, as indicated above, the user may select which views will be displayed in each window of the display, such as windows 614 and 616. FIGS. 6A-6D show various such combinations which may be advantageous to the user during a medical procedure or intervention (e.g., a surgical or non-surgical therapeutic or diagnostic procedure). FIG. 6A shows a combination of X-ray views 606 and 608 from different points of view, specifically lateral and AP. FIG. 6B shows a combination of 3D views 610 and 612 from different points of view, specifically lateral and AP. FIG. 6C shows a combination of lateral views 606 and 610 via different types of view (e.g., imaging modalities or techniques), specifically X-ray and 3D, respectively. FIG. 6D shows a combination of AP views 608 and 612 via different types of view, specifically X-ray and 3D, respectively. It should be noted that the 3D view may provide surface-related data of a bone structure, while the X-ray view may provide depth-related data of the bone structure. Displaying a specific type of view (e.g., X-ray or 3D) from multiple points of view, or a specific point of view (e.g., lateral or AP) via different types of view, at the same time may be advantageous and may facilitate the procedure or intervention, or even improve its outcome, by providing the required information and/or more information and better visualization of the ROI to the professional performing the procedure or intervention.


While images 606, 608, 610 and 612 are static, virtual images 602 and 604 of the tool and screw implant may be dynamically augmented on images 606, 608, 610 and 612, presenting the actual location and navigation of the tool and/or screw implant with respect to the patient anatomy. Such display may be facilitated via a tracking system, as described hereinabove. Since screw implant 604 is attached as a straight extension of tool 602 and since its dimensions may be predefined, there is no need to track screw 604; tracking of tool 602 alone may suffice.


In some embodiments, the GUI may include one or more slice views of the 3D medical image. In some embodiments, a sagittal slice view and an axial slice view may be generated and displayed. A virtual image representing a tool and/or a virtual image representing an implant, such as a pedicle screw may be augmented on the slice views, while navigated by a professional during a medical procedure or intervention. In some embodiments, the displayed slice may be selected and/or generated based on or according to the position of the navigated tool. In some embodiments, the displayed slice may be selected and/or generated such that a predefined axis of the tool is aligned with the slice plane. For example, for an elongated tool, such as a screwdriver, the tool longitudinal axis may be predefined as the tool axis. Thus, for each tracked and/or recorded change of position of the tool a new slice may be generated and displayed.
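As an illustration of the slice selection just described, the following sketch picks the slice containing the tracked tool tip, assuming a (z, y, x) voxel indexing convention (the convention and names are assumptions):

```python
import numpy as np

def slice_for_tool(volume, tool_tip_voxel, view="axial"):
    """Return the 2D slice of the 3D image that contains the tracked tool tip.

    volume         -- 3D array indexed as (axial, coronal, sagittal), i.e., (z, y, x)
    tool_tip_voxel -- (z, y, x) voxel position of the tracked tool tip
    view           -- "axial", "coronal" or "sagittal"
    """
    z, y, x = tool_tip_voxel
    if view == "axial":
        return volume[z, :, :]
    if view == "coronal":
        return volume[:, y, :]
    if view == "sagittal":
        return volume[:, :, x]
    raise ValueError(f"unknown view: {view}")
```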


In some embodiments, for each anatomical view, only slices of the specific anatomical view would be generated and displayed in response to the manipulation of the tool. For example, for an axial view, only axial slices would be displayed, according to the axial component of the tool's current position. In some embodiments, in one or more anatomical views, slices may be generated based on tool movement in two or three anatomical planes. For example, in an axial view, for a current position of the tool having components in the axial and coronal planes, a slice positioned axially and coronally in accordance with the axial and coronal tool position would be generated and displayed. As another example, in an axial view, for a current position of the tool having components in the axial, coronal and sagittal planes, a corresponding slice would be generated for the axial view. Thus, for example, for a tool positioned in a sagittal plane, the axial slice view and the sagittal slice view may be similar or even identical. Such an anatomical slice view, which allows slice generation in multiple planes and not just in the predefined view plane, provides better visualization and may facilitate navigation.


In some embodiments, the display of the various views may be manipulated by a user. In some embodiments, a view may be mirrored, e.g., with respect to an anatomical plane of the patient. For example, a mirroring command button may be included in the GUI. Mirroring of a lateral view may be performed, for example, with respect to a plane of the patient body. In some embodiments, the view may be rotated by a specific angle, e.g., by 180°, with respect to an anatomical axis, for example. Such rotation may be performed by pressing a dedicated command button generated as part of the GUI, for example. Specifically, such rotation of an AP view may provide a view rotated by 180° with respect to the longitudinal anatomical axis (i.e., head to toe). Such manipulation may be advantageous, for example, when the medical image and its derived images are displayed from a side opposite to the desired side (e.g., the left side or right side of the patient, or the upper side or lower side of the patient). In some embodiments, the various views may be panned and zoomed in and out, e.g., via dedicated command buttons generated as part of the GUI.
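In image space, the mirroring and 180° rotation commands described above can be sketched as simple array operations (a simplified stand-in for the GUI commands; the orientation conventions are assumptions):

```python
import numpy as np

def mirror_view(image_2d):
    """Mirror a 2D view left-right, e.g., with respect to a sagittal plane."""
    return np.fliplr(image_2d)

def rotate_view_180(image_2d):
    """Rotate a 2D view in-plane by 180 degrees."""
    return np.rot90(image_2d, k=2)
```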


While examples of the disclosed technique are given for a body portion containing spine vertebrae, the principles of the system, method, and/or disclosure may also be applied to bones and/or body portions other than the spine, including hip bones, pelvic bones, leg bones, arm bones, ankle bones, foot bones, shoulder bones, cranial bones, oral and maxillofacial bones, sacroiliac joints, etc.


The disclosed technique is presented with relation to image-guided surgery systems or methods, in general, and accordingly, the disclosed technique of visualization of medical images should not be considered limited only to augmented reality systems and/or head-mounted systems. For example, the technique is applicable to the processing of images from different imaging modalities, as described above, for use in diagnostics.


The terms “top,” “bottom,” “first,” “second,” “upper,” “lower,” “height,” “width,” “length,” “end,” “side,” “horizontal,” “vertical,” and similar terms may be used herein; it should be understood that these terms have reference only to the structures shown in the figures and are utilized only to facilitate describing embodiments of the disclosure. Various embodiments of the disclosure have been presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the disclosure. The ranges disclosed herein encompass any and all overlap, sub-ranges, and combinations thereof, as well as individual numerical values within that range. For example, description of a range such as from about 5 to about 30 degrees should be considered to have specifically disclosed subranges such as from 5 to 10 degrees, from 10 to 20 degrees, from 5 to 25 degrees, from 15 to 30 degrees etc., as well as individual numbers within that range (for example, 5, 10, 15, 20, 25, 12, 15.5 and any whole and partial increments therebetween). Language such as “up to,” “at least,” “greater than,” “less than,” “between,” and the like includes the number recited. Numbers preceded by a term such as “about” or “approximately” include the recited numbers. For example, “approximately 2 mm” includes “2 mm.” The terms “approximately”, “about”, and “substantially” as used herein represent an amount close to the stated amount that still performs a desired function or achieves a desired result.


In some embodiments, the system comprises various features that are present as single features (as opposed to multiple features). For example, in one embodiment, the system includes a single HMD, a single camera, a single processor, a single display, a single fiducial marker, a single imaging device, etc. Multiple features or components are provided in alternate embodiments.


In some embodiments, the system comprises one or more of the following: means for imaging (e.g., a camera or fluoroscope or MRI machine or CT machine), means for calibration or registration (e.g., adapters, markers, objects), means for fastening (e.g., anchors, adhesives, clamps, pins), etc.


The processors described herein may include one or more central processing units (CPUs) or processors or microprocessors. The processors may be communicatively coupled to one or more memory units, such as random-access memory (RAM) for temporary storage of information, one or more read-only memories (ROM) for permanent storage of information, and one or more mass storage devices, such as a hard drive, diskette, solid state drive, or optical media storage device. The processors (or memory units communicatively coupled thereto) may include modules comprising program instructions or algorithm steps configured for execution by the processors to perform any or all of the processes or algorithms discussed herein. The processors may be communicatively coupled to external devices (e.g., display devices, data storage devices, databases, servers, etc.) over a network via a network communications interface.


In general, the algorithms or processes described herein can be implemented by logic embodied in hardware or firmware, or by a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Python, Java, Lua, C, C#, or C++. A software module or product may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, or any other tangible medium. Such software code may be stored, partially or fully, on a memory device of the executing computing device, such as the computing system 50, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules described herein are preferably implemented as software modules but may be represented in hardware or firmware. Generally, any modules or programs or flowcharts described herein may refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.


The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks or steps may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks, steps, or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks, steps, or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks, steps, or states may be performed in serial, in parallel, or in some other manner. Blocks, steps, or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.


Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process.


It will be appreciated that the systems and methods of the disclosure each have several innovative aspects, no single one of which is solely responsible or required for the desirable attributes disclosed herein. The various features and processes described above may be used independently of one another or may be combined in various ways. The section headings used herein are merely provided to enhance readability and are not intended to limit the scope of the embodiments disclosed in a particular section to the features or elements disclosed in that section.


Certain features that are described in this specification in the context of separate embodiments also may be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment also may be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. No single feature or group of features is necessary or indispensable to each and every embodiment.


Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.


The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. In addition, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. In addition, the articles “a,” “an,” and “the” as used in this application and the appended claims are to be construed to mean “one or more” or “at least one” unless specified otherwise.

Claims
  • 1. A system for improving display of 3D models in connection with image-guided surgery, the system comprising: a head-mounted unit comprising at least one see-through display configured to allow viewing of a region of a spine of a patient through at least a portion of the display; andat least one processor configured to: receive a three-dimensional (3D) computed tomography image of the region of the spine of the patient, the 3D computed tomography image having intensity values;segment the 3D computed tomography image to define multiple regions of interest (ROIs) of the spine and a background region, wherein the background region comprises a portion of the 3D computed tomography image which is not an ROI;determine, for each of the multiple ROIs of the spine, a ROI intensity threshold value that can be used as an input to control what portions of the ROI are included in a 3D rendering of the 3D computed tomography image;determine a background intensity threshold value that can be used as an input to control what portions of the background region are included in the 3D rendering of the 3D computed tomography image;generate the 3D rendering of the 3D computed tomography image, wherein the generation of the 3D rendering comprises: in each of the multiple ROIs of the spine of the 3D computed tomography image, rendering based on intensity values of the ROI that satisfy a lowest threshold value of the determined ROI intensity threshold value and the determined background intensity threshold value; andin the background region of the 3D computed tomography image, rendering based on intensity values of the background region that satisfy the determined background intensity threshold value; andcause the 3D rendering to be output to the display as an augmented reality image,wherein the intensity threshold values are voxel values.
  • 2. The system of claim 1, wherein the head-mounted unit is a pair of glasses.
  • 3. The system of claim 1, wherein the head-mounted unit is an over-the-head mounted unit.
  • 4. The system of claim 1, wherein the display is configured to be displayed directly on a retina of a wearer of the head-mounted unit.
  • 5. The system of claim 1, wherein the at least one processor is configured to display the 3D rendering in alignment by performing registration of the 3D rendering with the region of the spine.
  • 6. The system of claim 1, wherein the 3D rendering is a 3D model.
  • 7. The system of claim 1, wherein the at least one processor is configured to generate the 3D rendering by changing at least some of the intensity values of the 3D computed tomography image into a value which does not satisfy the lowest threshold value.
  • 8. The system of claim 1, wherein the background region of the 3D computed tomography image comprises soft tissue.
  • 9. The system of claim 1, wherein the at least one processor is further configured to: repeatedly adjust one or more of the ROI intensity threshold values and the background intensity threshold value according to input from a user; andrepeatedly generate the 3D rendering of the 3D computed tomography image based on the adjusted values.
  • 10. The system of claim 1, wherein the multiple ROIs of the spine comprise various regions of the spine.
  • 11. The system of claim 1, wherein the multiple ROIs of the spine comprise individual vertebrae of the spine.
  • 12. A system for improving display of 3D models in connection with image-guided surgery, the system comprising: a head-mounted unit comprising at least one see-through display configured to allow viewing of a region of a spine of a patient through at least a portion of the display; andat least one processor configured to: receive a three-dimensional (3D) image of the region of the spine of the patient, the 3D image having intensity values,segment the 3D image to define multiple regions of interest (ROIs) of the spine and a background region, wherein the background region comprises a portion of the 3D image which is not an ROI;determine, for each of the multiple ROIs of the spine, a ROI intensity threshold value that can be used as an input to control what portions of the ROI are included in a 3D rendering of the 3D image;determine a background intensity threshold value that can be used as an input to control what portions of the background region are included in the 3D rendering of the 3D image;generate the 3D rendering of the 3D image, wherein the generation of the 3D rendering comprises: in each of the multiple ROIs of the spine of the 3D image, rendering based on intensity values of the ROI that satisfy a lowest threshold value of the determined ROI intensity threshold value and the determined background intensity threshold value; andin the background region of the 3D image, rendering based on intensity values of the background region that satisfy the determined background intensity threshold value; andcause the 3D rendering to be output to the display as a virtual augmented reality image.
  • 13. The system of claim 12, wherein the intensity values are voxel values of the 3D image.
  • 14. The system of claim 12, wherein the head-mounted unit is a pair of glasses.
  • 15. The system of claim 12, wherein the head-mounted unit is an over-the-head mounted unit.
  • 16. The system of claim 12, wherein the display is configured to be displayed directly on a retina of a wearer of the head-mounted unit.
  • 17. The system of claim 12, wherein the 3D image is a computed tomography image or a magnetic resonance image.
  • 18. The system of claim 12, wherein the at least one processor is further configured to: display the 3D rendering in alignment by performing registration of the 3D rendering with the region of the spine;repeatedly adjust one or more of the ROI intensity threshold values and the background intensity threshold value according to input from a user; andrepeatedly generate the 3D rendering of the 3D image based on the adjusted values.
  • 19. The system of claim 1, wherein the at least one processor is further configured to: adjust one or more of the ROI intensity threshold values and the background intensity threshold value according to input from a user;regenerate the 3D rendering of the 3D computed tomography image based on the adjusted values; andgenerate a 3D model based on the adjusted values.
  • 20. The system of claim 12, wherein the at least one processor is further configured to: adjust one or more of the ROI intensity threshold values and the background intensity threshold value according to input from a user;regenerate the 3D rendering of the 3D image based on the adjusted values; andgenerate a 3D model based on the adjusted values.
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS

This application is a continuation of International PCT Application PCT/IB2023/054056, filed Apr. 20, 2023, which claims the benefit of U.S. Provisional Patent Application No. 63/333,128, filed Apr. 21, 2022 and of U.S. Provisional Patent Application No. 63/428,781, filed Nov. 30, 2022, the entire contents of each of which are incorporated herein by reference.

US Referenced Citations (1527)
Number Name Date Kind
3101715 Glassman Aug 1963 A
3690776 Zaporoshan Sep 1972 A
4459358 Berke Jul 1984 A
4711512 Upatnieks Dec 1987 A
4863238 Brewster Sep 1989 A
4944739 Torre Jul 1990 A
5100420 Green et al. Mar 1992 A
5147365 Whitlock et al. Sep 1992 A
5357292 Wiedner Oct 1994 A
5410802 Buckley May 1995 A
5441042 Putman Aug 1995 A
5442146 Bell et al. Aug 1995 A
5510832 Garcia Apr 1996 A
D370309 Stucky May 1996 S
5620188 McCurry et al. Apr 1997 A
5636255 Ellis Jun 1997 A
5665092 Mangiardi et al. Sep 1997 A
5743731 Lares et al. Apr 1998 A
5771121 Hentschke Jun 1998 A
5792046 Dobrovolny Aug 1998 A
5841507 Barnes Nov 1998 A
6006126 Cosman Dec 1999 A
6038467 De Bliek et al. Mar 2000 A
6125164 Murphy et al. Sep 2000 A
6138530 McClure Oct 2000 A
6147805 Fergason Nov 2000 A
6227667 Halldorsson et al. May 2001 B1
6256529 Holupka et al. Jul 2001 B1
6285505 Melville et al. Sep 2001 B1
6314310 Ben-Haim et al. Nov 2001 B1
6349001 Spitzer Feb 2002 B1
6444192 Mattrey Sep 2002 B1
6447503 Wynne et al. Sep 2002 B1
6449090 Omar et al. Sep 2002 B1
6456405 Horikoshi et al. Sep 2002 B2
6456868 Saito et al. Sep 2002 B2
6474159 Foxlin et al. Nov 2002 B1
6518939 Kikuchi Feb 2003 B1
6527777 Justin Mar 2003 B2
6529331 Massof et al. Mar 2003 B2
6549645 Oikawa et al. Apr 2003 B1
6578962 Amir et al. Jun 2003 B1
6609022 Vilsmeier et al. Aug 2003 B2
6610009 Person Aug 2003 B2
D480476 Martinson et al. Oct 2003 S
6659611 Amir et al. Dec 2003 B2
6675040 Cosman Jan 2004 B1
6683584 Ronzani et al. Jan 2004 B2
6690964 Bieger et al. Feb 2004 B2
6714810 Grzeszczuk et al. Mar 2004 B2
6737425 Yamamoto et al. May 2004 B1
6740882 Weinberg May 2004 B2
6757068 Foxlin Jun 2004 B2
6759200 Stanton, Jr. Jul 2004 B1
6847336 Lemelson et al. Jan 2005 B1
6856324 Sauer et al. Feb 2005 B2
6856826 Seeley et al. Feb 2005 B2
6891518 Sauer et al. May 2005 B2
6900777 Hebert et al. May 2005 B1
6919867 Sauer Jul 2005 B2
6921167 Nagata Jul 2005 B2
6966668 Cugini et al. Nov 2005 B2
6980849 Sasso Dec 2005 B2
6993374 Sasso Jan 2006 B2
6997552 Hung Feb 2006 B1
6999239 Martins et al. Feb 2006 B1
7000262 Bielefeld Feb 2006 B2
7035371 Boese et al. Apr 2006 B2
7043961 Pandey et al. May 2006 B2
7072435 Metz et al. Jul 2006 B2
7103233 Stearns Sep 2006 B2
7107091 Jutras et al. Sep 2006 B2
7112656 Desnoyers et al. Sep 2006 B2
7141812 Appleby et al. Nov 2006 B2
7157459 Ohta et al. Jan 2007 B2
7169785 Timmer et al. Jan 2007 B2
7171255 Holupka et al. Jan 2007 B2
7176936 Sauer et al. Feb 2007 B2
7187792 Fu et al. Mar 2007 B2
7190331 Genc et al. Mar 2007 B2
7194295 Vilsmeier Mar 2007 B2
7215322 Genc et al. May 2007 B2
7229078 Lechot Jun 2007 B2
7231076 Fu et al. Jun 2007 B2
7235076 Pacheco Jun 2007 B2
7239330 Sauer et al. Jul 2007 B2
7241292 Hooven Jul 2007 B2
7259266 Carter et al. Aug 2007 B2
7260426 Schweikard et al. Aug 2007 B2
7269192 Hayashi Sep 2007 B2
7281826 Huang Oct 2007 B2
7315636 Kuduvalli Jan 2008 B2
7320556 Vagn-Erik Jan 2008 B2
7330578 Wang et al. Feb 2008 B2
7359535 Salla et al. Apr 2008 B2
7364314 Nilsen et al. Apr 2008 B2
7366934 Narayan et al. Apr 2008 B1
7379077 Bani-Hashemi et al. May 2008 B2
7431453 Hogan Oct 2008 B2
7435219 Kim Oct 2008 B2
7450743 Sundar et al. Nov 2008 B2
7458977 McGinley et al. Dec 2008 B2
7462852 Appleby et al. Dec 2008 B2
7493153 Ahmed et al. Feb 2009 B2
7505617 Fu et al. Mar 2009 B2
7507968 Wollenweber et al. Mar 2009 B2
7518136 Appleby et al. Apr 2009 B2
7525735 Sottilare et al. Apr 2009 B2
D592691 Chang May 2009 S
D592692 Chang May 2009 S
D592693 Chang May 2009 S
7536216 Geiger et al. May 2009 B2
7542791 Mire et al. Jun 2009 B2
7556428 Sukovic et al. Jul 2009 B2
7557824 Holliman Jul 2009 B2
7563228 Ma et al. Jul 2009 B2
7567834 Clayton et al. Jul 2009 B2
7570791 Frank et al. Aug 2009 B2
7586686 Hall Sep 2009 B1
D602620 Cristoforo Oct 2009 S
7605826 Sauer Oct 2009 B2
7606613 Simon et al. Oct 2009 B2
7607775 Hermanson et al. Oct 2009 B2
7620223 Xu et al. Nov 2009 B2
7623902 Pacheco Nov 2009 B2
7627085 Boyden et al. Dec 2009 B2
7630753 Simon et al. Dec 2009 B2
7633501 Wood et al. Dec 2009 B2
7645050 Wilt et al. Jan 2010 B2
7653226 Guhring et al. Jan 2010 B2
7657075 Mswanathan Feb 2010 B2
7689019 Boese et al. Mar 2010 B2
7689042 Brunner et al. Mar 2010 B2
7689320 Prisco et al. Mar 2010 B2
7699486 Beiner Apr 2010 B1
7699793 Goette et al. Apr 2010 B2
7719769 Sugihara et al. May 2010 B2
D617825 Chang Jun 2010 S
7734327 Colquhoun Jun 2010 B2
D619285 Cristoforo Jul 2010 S
7751865 Jascob et al. Jul 2010 B2
7758204 Klipstein et al. Jul 2010 B2
7768702 Hirose et al. Aug 2010 B2
7769236 Fiala Aug 2010 B2
7773074 Arenson et al. Aug 2010 B2
7774044 Sauer et al. Aug 2010 B2
7822483 Stone et al. Oct 2010 B2
D628307 Krause-Bonte Nov 2010 S
7826902 Stone et al. Nov 2010 B2
7831073 Fu et al. Nov 2010 B2
7831096 Williamson, Jr. Nov 2010 B2
7835778 Foley et al. Nov 2010 B2
7835784 Mire et al. Nov 2010 B2
7837987 Shi et al. Nov 2010 B2
7840093 Fu et al. Nov 2010 B2
7840253 Tremblay et al. Nov 2010 B2
7840256 Lakin et al. Nov 2010 B2
7853305 Simon et al. Dec 2010 B2
7854705 Pawluczyk et al. Dec 2010 B2
7857271 Lees Dec 2010 B2
7860282 Boese et al. Dec 2010 B2
D630766 Harbin Jan 2011 S
7865269 Prisco et al. Jan 2011 B2
7874686 Rossner et al. Jan 2011 B2
7881770 Melkent et al. Feb 2011 B2
7893413 Appleby et al. Feb 2011 B1
7894649 Fu et al. Feb 2011 B2
7920162 Masini et al. Apr 2011 B2
7922391 Essenreiter et al. Apr 2011 B2
7938553 Beiner May 2011 B1
7945310 Gattani et al. May 2011 B2
7953471 Clayton et al. May 2011 B2
7969383 Eberl et al. Jun 2011 B2
7974677 Mire et al. Jul 2011 B2
7985756 Barlow et al. Jul 2011 B2
7991557 Liew et al. Aug 2011 B2
7993353 Roner et al. Aug 2011 B2
7996064 Simon et al. Aug 2011 B2
8004524 Deinzer Aug 2011 B2
8021300 Ma et al. Sep 2011 B2
8022984 Cheong et al. Sep 2011 B2
8045266 Nakamura Oct 2011 B2
8060181 Rodriguez et al. Nov 2011 B2
8068581 Boese et al. Nov 2011 B2
8068896 Daghighian et al. Nov 2011 B2
8077943 Williams et al. Dec 2011 B2
8079957 Ma et al. Dec 2011 B2
8081812 Kreiser Dec 2011 B2
8085075 Huffman et al. Dec 2011 B2
8085897 Morton Dec 2011 B2
8090175 Fu et al. Jan 2012 B2
8092400 Warkentine et al. Jan 2012 B2
8108072 Zhao et al. Jan 2012 B2
8112292 Simon Feb 2012 B2
8116847 Gattani et al. Feb 2012 B2
8120847 Chang Feb 2012 B2
8121255 Sugiyama Feb 2012 B2
8155479 Hoffman et al. Apr 2012 B2
8180132 Gorges et al. May 2012 B2
8180429 Sasso May 2012 B2
8208599 Ye et al. Jun 2012 B2
8216211 Mathis et al. Jul 2012 B2
8221402 Francischelli et al. Jul 2012 B2
8239001 Verard et al. Aug 2012 B2
8244012 Liang et al. Aug 2012 B2
8253778 Atsushi Aug 2012 B2
8271069 Jascob et al. Sep 2012 B2
8280491 Kuduvalli et al. Oct 2012 B2
8285021 Boese et al. Oct 2012 B2
8300315 Kobayashi Oct 2012 B2
8305685 Heine et al. Nov 2012 B2
8306305 Porat et al. Nov 2012 B2
8309932 Haselman et al. Nov 2012 B2
8317320 Huang Nov 2012 B2
8328815 Farr et al. Dec 2012 B2
8335553 Rubner et al. Dec 2012 B2
8335557 Maschke Dec 2012 B2
8340379 Razzaque et al. Dec 2012 B2
8369925 Giesel et al. Feb 2013 B2
8386022 Jutras et al. Feb 2013 B2
8394144 Zehavi et al. Mar 2013 B2
8398541 Dimaio et al. Mar 2013 B2
8444266 Waters May 2013 B2
8457719 Moctezuma De La Barrera et al. Jun 2013 B2
8467851 Mire et al. Jun 2013 B2
8469902 Dick et al. Jun 2013 B2
8475470 Von Jako Jul 2013 B2
8494612 Vetter et al. Jul 2013 B2
8509503 Nahum et al. Aug 2013 B2
8511827 Hua et al. Aug 2013 B2
8531394 Maltz Sep 2013 B2
8540364 Waters Sep 2013 B2
8545012 Waters Oct 2013 B2
8548567 Maschke et al. Oct 2013 B2
8556883 Saleh Oct 2013 B2
8559596 Thomson et al. Oct 2013 B2
8567945 Waters Oct 2013 B2
8571353 Watanabe Oct 2013 B2
8585598 Razzaque et al. Nov 2013 B2
8600001 Schweizer Dec 2013 B2
8600477 Beyar et al. Dec 2013 B2
8605199 Imai Dec 2013 B2
8611988 Miyamoto Dec 2013 B2
8612024 Stone et al. Dec 2013 B2
8634897 Simon et al. Jan 2014 B2
8641621 Razzaque et al. Feb 2014 B2
8643950 König Feb 2014 B2
8644907 Hartmann et al. Feb 2014 B2
8674902 Park et al. Mar 2014 B2
8686923 Eberl et al. Apr 2014 B2
8690581 Ruf et al. Apr 2014 B2
8690776 Razzaque et al. Apr 2014 B2
8692845 Fedorovskaya et al. Apr 2014 B2
8693632 Allison Apr 2014 B2
8694075 Groszmann et al. Apr 2014 B2
8699765 Hao et al. Apr 2014 B2
8705829 Frank et al. Apr 2014 B2
8737708 Hartmann et al. May 2014 B2
8746887 Shestak et al. Jun 2014 B2
8764025 Gao Jul 2014 B1
8784450 Moskowitz et al. Jul 2014 B2
8786689 Liu Jul 2014 B1
D710545 Wu Aug 2014 S
D710546 Wu Aug 2014 S
8827934 Chopra et al. Sep 2014 B2
8831706 Fu et al. Sep 2014 B2
8836768 Rafii et al. Sep 2014 B1
8838199 Simon et al. Sep 2014 B2
8848977 Bammer et al. Sep 2014 B2
8855395 Baturin et al. Oct 2014 B2
8878900 Yang et al. Nov 2014 B2
8879815 Miao et al. Nov 2014 B2
8885177 Ben-Yishai et al. Nov 2014 B2
8890772 Woo et al. Nov 2014 B2
8890773 Pederson Nov 2014 B1
8890943 Lee et al. Nov 2014 B2
8897514 Feikas et al. Nov 2014 B2
8900131 Chopra et al. Dec 2014 B2
8903150 Star-Lack et al. Dec 2014 B2
8908952 Isaacs et al. Dec 2014 B2
8911358 Koninckx et al. Dec 2014 B2
8917268 Johnsen et al. Dec 2014 B2
8920776 Gaiger et al. Dec 2014 B2
8922589 Laor Dec 2014 B2
8941559 Bar-Zeev et al. Jan 2015 B2
8942455 Chou et al. Jan 2015 B2
8950877 Northey et al. Feb 2015 B2
8953246 Koenig Feb 2015 B2
8961500 Dicorleto et al. Feb 2015 B2
8965583 Ortmaier et al. Feb 2015 B2
8969829 Wollenweber et al. Mar 2015 B2
8989349 Thomson et al. Mar 2015 B2
8992580 Bar et al. Mar 2015 B2
8994729 Nakamura Mar 2015 B2
8994795 Oh Mar 2015 B2
9004711 Gerolemou Apr 2015 B2
9005211 Brundobler et al. Apr 2015 B2
9011441 Bertagnoli et al. Apr 2015 B2
9057759 Klingenbeck et al. Jun 2015 B2
9060757 Lawson et al. Jun 2015 B2
9066751 Sasso Jun 2015 B2
9081436 Berme et al. Jul 2015 B1
9084635 Nuckley et al. Jul 2015 B2
9085643 Svanborg et al. Jul 2015 B2
9087471 Miao Jul 2015 B2
9100643 McDowall et al. Aug 2015 B2
9101394 Arata et al. Aug 2015 B2
9104902 Xu et al. Aug 2015 B2
9111175 Strommer et al. Aug 2015 B2
9123155 Cunningham et al. Sep 2015 B2
9125556 Zehavi et al. Sep 2015 B2
9129054 Nawana et al. Sep 2015 B2
9129372 Kriston et al. Sep 2015 B2
9132361 Smithwick Sep 2015 B2
9135706 Zagorchev et al. Sep 2015 B2
9141873 Takemoto Sep 2015 B2
9142020 Deguise et al. Sep 2015 B2
9149317 Arthur et al. Oct 2015 B2
9165203 McCarthy Oct 2015 B2
9165362 Siewerdsen et al. Oct 2015 B2
9179984 Teichman et al. Nov 2015 B2
D746354 Chang Dec 2015 S
9208916 Appleby et al. Dec 2015 B2
9220573 Kendrick et al. Dec 2015 B2
9225895 Kozinski Dec 2015 B2
9232982 Soler et al. Jan 2016 B2
9235934 Mandella et al. Jan 2016 B2
9240046 Carrell et al. Jan 2016 B2
9244278 Sugiyama et al. Jan 2016 B2
9247240 Park et al. Jan 2016 B2
9259192 Ishihara Feb 2016 B2
9265572 Fuchs et al. Feb 2016 B2
9269192 Kobayashi Feb 2016 B2
9283052 Rodriguez Ponce Mar 2016 B2
9286730 Bar-Zeev et al. Mar 2016 B2
9289267 Sauer et al. Mar 2016 B2
9294222 Proctor, Jr. Mar 2016 B2
9300949 Ahearn Mar 2016 B2
9305354 Burlon et al. Apr 2016 B2
9310591 Hua et al. Apr 2016 B2
9320474 Demri et al. Apr 2016 B2
9323055 Baillot Apr 2016 B2
9330477 Rappel May 2016 B2
9335547 Takano et al. May 2016 B2
9335567 Nakamura May 2016 B2
9341704 Picard et al. May 2016 B2
9344686 Moharir May 2016 B2
9349066 Koo et al. May 2016 B2
9349520 Demetriou et al. May 2016 B2
9364294 Razzaque et al. Jun 2016 B2
9370332 Paladini et al. Jun 2016 B2
9373166 Azar Jun 2016 B2
9375639 Kobayashi et al. Jun 2016 B2
9378558 Kajiwara et al. Jun 2016 B2
9380287 Nistico et al. Jun 2016 B2
9387008 Sarvestani et al. Jul 2016 B2
9392129 Simmons Jul 2016 B2
9395542 Tilleman et al. Jul 2016 B2
9398936 Razzaque et al. Jul 2016 B2
9400384 Griffith Jul 2016 B2
9414041 Ko et al. Aug 2016 B2
9424611 Kanjirathinkal et al. Aug 2016 B2
9424641 Wiemker et al. Aug 2016 B2
9427286 Siewerdsen et al. Aug 2016 B2
9438894 Park et al. Sep 2016 B2
9443488 Borenstein et al. Sep 2016 B2
9453804 Tahtali Sep 2016 B2
9456878 MacFarlane et al. Oct 2016 B2
9465235 Chang Oct 2016 B2
9468373 Larsen Oct 2016 B2
9470908 Frankel et al. Oct 2016 B1
9473766 Douglas et al. Oct 2016 B2
9492222 Singh Nov 2016 B2
9495585 Bicer et al. Nov 2016 B2
9498132 Maier-Hein et al. Nov 2016 B2
9498231 Haider et al. Nov 2016 B2
9499999 Zhou Nov 2016 B2
9507155 Morimoto Nov 2016 B2
9513495 Waters Dec 2016 B2
9521966 Schwartz Dec 2016 B2
9526443 Berme et al. Dec 2016 B1
9530382 Simmons Dec 2016 B2
9532846 Nakamura Jan 2017 B2
9532849 Anderson et al. Jan 2017 B2
9533407 Ragner Jan 2017 B1
9538962 Hannaford et al. Jan 2017 B1
9545233 Sirpad et al. Jan 2017 B2
9546779 Rementer Jan 2017 B2
9547174 Gao et al. Jan 2017 B2
9547940 Sun et al. Jan 2017 B1
9557566 Fujimaki Jan 2017 B2
9560318 Reina et al. Jan 2017 B2
9561095 Nguyen et al. Feb 2017 B1
9561446 Brecher Feb 2017 B2
9565415 Zhang et al. Feb 2017 B2
9572661 Robin et al. Feb 2017 B2
9576398 Zehner et al. Feb 2017 B1
9576556 Simmons Feb 2017 B2
9581822 Morimoto Feb 2017 B2
9610056 Lavallee et al. Apr 2017 B2
9612657 Bertram et al. Apr 2017 B2
9626936 Bell Apr 2017 B2
9629595 Walker et al. Apr 2017 B2
9633431 Merlet Apr 2017 B2
9645395 Bolas et al. May 2017 B2
9646423 Sun et al. May 2017 B1
9672597 Amiot et al. Jun 2017 B2
9672607 Demri et al. Jun 2017 B2
9672640 Kleiner Jun 2017 B2
9675306 Morton Jun 2017 B2
9675319 Razzaque et al. Jun 2017 B1
9684980 Royalty et al. Jun 2017 B2
9690119 Garofolo et al. Jun 2017 B2
RE46463 Feinbloom et al. Jul 2017 E
9693748 Rai et al. Jul 2017 B2
9710968 Dillavou et al. Jul 2017 B2
9713502 Finkman et al. Jul 2017 B2
9724119 Hissong et al. Aug 2017 B2
9724165 Arata et al. Aug 2017 B2
9726888 Giartosio et al. Aug 2017 B2
9728006 Varga Aug 2017 B2
9729831 Birnkrant et al. Aug 2017 B2
9746739 Alton et al. Aug 2017 B2
9757034 Desjardins et al. Sep 2017 B2
9757087 Simon et al. Sep 2017 B2
9766441 Rappel Sep 2017 B2
9766459 Alton et al. Sep 2017 B2
9767608 Lee et al. Sep 2017 B2
9770203 Berme et al. Sep 2017 B1
9772102 Ferguson Sep 2017 B1
9772495 Tam et al. Sep 2017 B2
9791138 Feinbloom et al. Oct 2017 B1
9800995 Libin et al. Oct 2017 B2
9805504 Zhang et al. Oct 2017 B2
9808148 Miller et al. Nov 2017 B2
9839448 Reckling et al. Dec 2017 B2
9844413 Daon et al. Dec 2017 B2
9851080 Wilt et al. Dec 2017 B2
9858663 Penney et al. Jan 2018 B2
9861446 Lang Jan 2018 B2
9864214 Fass Jan 2018 B2
9872733 Shoham et al. Jan 2018 B2
9875544 Rai et al. Jan 2018 B2
9877642 Duret Jan 2018 B2
9885465 Nguyen Feb 2018 B2
9886552 Dillavou et al. Feb 2018 B2
9886760 Liu et al. Feb 2018 B2
9892564 Cvetko et al. Feb 2018 B1
9898866 Fuchs et al. Feb 2018 B2
9901414 Lively et al. Feb 2018 B2
9911187 Steinle et al. Mar 2018 B2
9911236 Bar et al. Mar 2018 B2
9927611 Rudy et al. Mar 2018 B2
9928629 Benishti et al. Mar 2018 B2
9940750 Dillavou et al. Apr 2018 B2
9943374 Merritt et al. Apr 2018 B2
9947110 Haimerl Apr 2018 B2
9952664 Border et al. Apr 2018 B2
9956054 Aguirre-Valencia May 2018 B2
9958674 Border May 2018 B2
9959620 Merlet May 2018 B2
9959629 Dillavou et al. May 2018 B2
9965681 Border et al. May 2018 B2
9968297 Connor May 2018 B2
9980780 Lang May 2018 B2
9986228 Woods May 2018 B2
D824523 Paoli et al. Jul 2018 S
10010379 Gibby et al. Jul 2018 B1
10013531 Richards et al. Jul 2018 B2
10015243 Kazerani et al. Jul 2018 B2
10016243 Esterberg Jul 2018 B2
10022064 Kim et al. Jul 2018 B2
10022065 Ben-Yishai et al. Jul 2018 B2
10022104 Sell et al. Jul 2018 B2
10023615 Bonny Jul 2018 B2
10026015 Cavusoglu et al. Jul 2018 B2
10034713 Yang et al. Jul 2018 B2
10042167 Mcdowall et al. Aug 2018 B2
10046165 Frewin et al. Aug 2018 B2
10055838 Elenbaas et al. Aug 2018 B2
10066816 Chang Sep 2018 B2
10067359 Ushakov Sep 2018 B1
10073515 Awdeh Sep 2018 B2
10080616 Wilkinson et al. Sep 2018 B2
10082680 Chung Sep 2018 B2
10085709 Lavallee et al. Oct 2018 B2
10105187 Corndorf et al. Oct 2018 B2
10107483 Oren Oct 2018 B2
10108833 Hong et al. Oct 2018 B2
10123840 Dorman Nov 2018 B2
10130378 Bryan Nov 2018 B2
10132483 Feinbloom et al. Nov 2018 B1
10134166 Benishti et al. Nov 2018 B2
10134194 Kepner et al. Nov 2018 B2
10139652 Windham Nov 2018 B2
10139920 Isaacs et al. Nov 2018 B2
10142496 Rao et al. Nov 2018 B1
10151928 Ushakov Dec 2018 B2
10154239 Casas Dec 2018 B2
10159530 Lang Dec 2018 B2
10163207 Merlet Dec 2018 B2
10166079 McLachlin et al. Jan 2019 B2
10175507 Nakamura Jan 2019 B2
10175753 Boesen Jan 2019 B2
10181361 Dillavou et al. Jan 2019 B2
10186055 Takahashi et al. Jan 2019 B2
10188672 Wagner Jan 2019 B2
10194131 Casas Jan 2019 B2
10194990 Amanatullah et al. Feb 2019 B2
10194993 Roger et al. Feb 2019 B2
10195076 Fateh Feb 2019 B2
10197803 Badiali et al. Feb 2019 B2
10197816 Waisman et al. Feb 2019 B2
10207315 Appleby et al. Feb 2019 B2
10212517 Beltran et al. Feb 2019 B1
10230719 Vaughn et al. Mar 2019 B2
10231893 Lei et al. Mar 2019 B2
10235606 Miao et al. Mar 2019 B2
10240769 Braganca et al. Mar 2019 B1
10247965 Ton Apr 2019 B2
10251724 McLachlin et al. Apr 2019 B2
10261324 Chuang et al. Apr 2019 B2
10262424 Ketcha et al. Apr 2019 B2
10274731 Maimone Apr 2019 B2
10278777 Lang May 2019 B1
10292768 Lang May 2019 B2
10296805 Yang et al. May 2019 B2
10319154 Chakravarthula et al. Jun 2019 B1
10326975 Casas Jun 2019 B2
10332267 Rai et al. Jun 2019 B2
10339719 Jagga et al. Jul 2019 B2
10352543 Braganca et al. Jul 2019 B1
10357146 Fiebel et al. Jul 2019 B2
10357574 Hilderbrand et al. Jul 2019 B2
10366489 Boettger et al. Jul 2019 B2
10368947 Lang Aug 2019 B2
10368948 Tripathi Aug 2019 B2
10382748 Benishti et al. Aug 2019 B2
10383654 Yilmaz et al. Aug 2019 B2
10386645 Abou Shousha Aug 2019 B2
10388076 Bar-Zeev et al. Aug 2019 B2
10398514 Ryan et al. Sep 2019 B2
10401657 Jiang et al. Sep 2019 B2
10405825 Rai et al. Sep 2019 B2
10405927 Lang Sep 2019 B1
10413752 Berlinger et al. Sep 2019 B2
10419655 Sivan Sep 2019 B2
10420626 Tokuda et al. Sep 2019 B2
10420813 Newell-Rogers et al. Sep 2019 B2
10424115 Ellerbrock Sep 2019 B2
D862469 Sadot et al. Oct 2019 S
10426554 Siewerdsen et al. Oct 2019 B2
10429675 Greget Oct 2019 B2
10431008 Djajadiningrat et al. Oct 2019 B2
10433814 Razzaque et al. Oct 2019 B2
10434335 Takahashi et al. Oct 2019 B2
10441236 Bar-Tal et al. Oct 2019 B2
10444514 Abou Shousha et al. Oct 2019 B2
10447947 Liu Oct 2019 B2
10448003 Grafenberg Oct 2019 B2
10449040 Lashinski et al. Oct 2019 B2
10453187 Peterson et al. Oct 2019 B2
10463434 Siegler et al. Nov 2019 B2
10465892 Feinbloom et al. Nov 2019 B1
10466487 Blum et al. Nov 2019 B2
10470732 Baumgart et al. Nov 2019 B2
10473314 Braganca et al. Nov 2019 B1
10485989 Jordan et al. Nov 2019 B2
10488663 Choi Nov 2019 B2
D869772 Gand Dec 2019 S
D870977 Berggren et al. Dec 2019 S
10492755 Lin et al. Dec 2019 B2
10499997 Weinstein et al. Dec 2019 B2
10502363 Edwards et al. Dec 2019 B2
10504231 Fiala Dec 2019 B2
10507066 Dimaio et al. Dec 2019 B2
10511822 Casas Dec 2019 B2
10517544 Taguchi et al. Dec 2019 B2
10537395 Perez Jan 2020 B2
10540780 Cousins et al. Jan 2020 B1
10543485 Ismagilov et al. Jan 2020 B2
10546423 Jones et al. Jan 2020 B2
10548557 Lim et al. Feb 2020 B2
10555775 Hoffman et al. Feb 2020 B2
10568535 Roberts et al. Feb 2020 B2
10571696 Urey et al. Feb 2020 B2
10571716 Chapiro Feb 2020 B2
10573086 Bar-Zeev et al. Feb 2020 B2
10573087 Gallop et al. Feb 2020 B2
10577630 Zhang et al. Mar 2020 B2
10586400 Douglas Mar 2020 B2
10591737 Yildiz et al. Mar 2020 B2
10592748 Cousins et al. Mar 2020 B1
10594998 Casas Mar 2020 B1
10595716 Nazareth et al. Mar 2020 B2
10601950 Devam et al. Mar 2020 B2
10602114 Casas Mar 2020 B2
10603113 Lang Mar 2020 B2
10603133 Wang et al. Mar 2020 B2
10606085 Toyama Mar 2020 B2
10610172 Hummel et al. Apr 2020 B2
10610179 Altmann Apr 2020 B2
10613352 Knoll Apr 2020 B2
10617566 Esmonde Apr 2020 B2
10620460 Carabin Apr 2020 B2
10621738 Miao et al. Apr 2020 B2
10625099 Takahashi et al. Apr 2020 B2
10626473 Mariani et al. Apr 2020 B2
10631905 Asfora et al. Apr 2020 B2
10631907 Zucker et al. Apr 2020 B2
10634331 Feinbloom et al. Apr 2020 B1
10634921 Blum et al. Apr 2020 B2
10638080 Ovchinnikov et al. Apr 2020 B2
10646285 Siemionow et al. May 2020 B2
10650513 Penney et al. May 2020 B2
10650594 Jones et al. May 2020 B2
10652525 Woods May 2020 B2
10653495 Gregerson et al. May 2020 B2
10660715 Dozeman May 2020 B2
10663738 Carlvik et al. May 2020 B2
10665033 Bar-Zeev et al. May 2020 B2
10670937 Alton et al. Jun 2020 B2
10672145 Albiol et al. Jun 2020 B2
10682112 Pizaine et al. Jun 2020 B2
10682767 Grafenberg et al. Jun 2020 B2
10687901 Thomas Jun 2020 B2
10691397 Clements Jun 2020 B1
10702713 Mori et al. Jul 2020 B2
10706540 Merlet Jul 2020 B2
10709398 Schweizer Jul 2020 B2
10713801 Jordan et al. Jul 2020 B2
10716643 Justin et al. Jul 2020 B2
10722733 Takahashi Jul 2020 B2
10725535 Yu Jul 2020 B2
10731832 Koo Aug 2020 B2
10732721 Clements Aug 2020 B1
10742949 Casas Aug 2020 B2
10743939 Lang Aug 2020 B1
10743943 Razeto et al. Aug 2020 B2
10747315 Tungare et al. Aug 2020 B2
10748319 Tao et al. Aug 2020 B1
10758315 Johnson et al. Sep 2020 B2
10777094 Rao et al. Sep 2020 B1
10777315 Zehavi et al. Sep 2020 B2
10781482 Gubatayao et al. Sep 2020 B2
10792110 Leung et al. Oct 2020 B2
10799145 West et al. Oct 2020 B2
10799296 Lang Oct 2020 B2
10799298 Crawford et al. Oct 2020 B2
10799316 Sela et al. Oct 2020 B2
10810799 Tepper et al. Oct 2020 B2
10818019 Piat et al. Oct 2020 B2
10818101 Gallop et al. Oct 2020 B2
10818199 Buras et al. Oct 2020 B2
10825563 Gibby et al. Nov 2020 B2
10827164 Perreault et al. Nov 2020 B2
10831943 Santarone et al. Nov 2020 B2
10835296 Elimelech et al. Nov 2020 B2
10838206 Fortin-Deschênes et al. Nov 2020 B2
10839629 Jones et al. Nov 2020 B2
10839956 Beydoun et al. Nov 2020 B2
10841556 Casas Nov 2020 B2
10842002 Chang Nov 2020 B2
10842461 Johnson et al. Nov 2020 B2
10849691 Zucker et al. Dec 2020 B2
10849693 Lang Dec 2020 B2
10849710 Liu Dec 2020 B2
10861236 Geri et al. Dec 2020 B2
10865220 Ebetino et al. Dec 2020 B2
10869517 Halpern Dec 2020 B1
10869727 Yanof et al. Dec 2020 B2
10872472 Watola et al. Dec 2020 B2
10877262 Luxembourg Dec 2020 B1
10877296 Lindsey et al. Dec 2020 B2
10878639 Douglas et al. Dec 2020 B2
10893260 Trail et al. Jan 2021 B2
10895742 Schneider et al. Jan 2021 B2
10895743 Dausmann Jan 2021 B2
10895906 West et al. Jan 2021 B2
10898151 Harding et al. Jan 2021 B2
10908420 Lee et al. Feb 2021 B2
10921595 Rakshit et al. Feb 2021 B2
10921613 Gupta et al. Feb 2021 B2
10928321 Rawle Feb 2021 B2
10928638 Ninan et al. Feb 2021 B2
10929670 Troy et al. Feb 2021 B1
10935815 Castañeda Mar 2021 B1
10935816 Ban et al. Mar 2021 B2
10936537 Huston Mar 2021 B2
10939973 Dimaio et al. Mar 2021 B2
10939977 Messinger et al. Mar 2021 B2
10941933 Ferguson Mar 2021 B2
10946108 Zhang et al. Mar 2021 B2
10950338 Douglas Mar 2021 B2
10951872 Casas Mar 2021 B2
10964095 Douglas Mar 2021 B1
10964124 Douglas Mar 2021 B1
10966768 Poulos Apr 2021 B2
10969587 Mcdowall et al. Apr 2021 B2
10993754 Kuntz et al. May 2021 B2
11000335 Dorman May 2021 B2
11002994 Jiang et al. May 2021 B2
11006093 Hegyi May 2021 B1
11013550 Rioux et al. May 2021 B2
11013560 Lang May 2021 B2
11013562 Marti et al. May 2021 B2
11013573 Chang May 2021 B2
11013900 Malek et al. May 2021 B2
11016302 Freeman et al. May 2021 B2
11019988 Fiebel et al. Jun 2021 B2
11027027 Manning et al. Jun 2021 B2
11029147 Abovitz et al. Jun 2021 B2
11030809 Wang Jun 2021 B2
11041173 Zhang et al. Jun 2021 B2
11045663 Mori et al. Jun 2021 B2
11049293 Chae et al. Jun 2021 B2
11049476 Fuchs et al. Jun 2021 B2
11050990 Casas Jun 2021 B2
11057505 Dharmatilleke Jul 2021 B2
11058390 Douglas Jul 2021 B1
11061257 Hakim Jul 2021 B1
11064904 Kay et al. Jul 2021 B2
11065062 Frushour et al. Jul 2021 B2
11067387 Marell et al. Jul 2021 B2
11071497 Hallack et al. Jul 2021 B2
11079596 Arizona Aug 2021 B2
11087039 Duff et al. Aug 2021 B2
11090019 Siemionow et al. Aug 2021 B2
11097129 Sakata et al. Aug 2021 B2
11099376 Steier et al. Aug 2021 B1
11103320 Leboeuf et al. Aug 2021 B2
D930162 Cremer et al. Sep 2021 S
11109762 Steier et al. Sep 2021 B1
11112611 Kessler et al. Sep 2021 B1
11122164 Gigante Sep 2021 B2
11123604 Fung Sep 2021 B2
11129562 Roberts et al. Sep 2021 B2
11132055 Jones et al. Sep 2021 B2
11135015 Crawford et al. Oct 2021 B2
11135016 Frielinghaus et al. Oct 2021 B2
11137610 Kessler et al. Oct 2021 B1
11141221 Hobeika et al. Oct 2021 B2
11153549 Casas Oct 2021 B2
11153555 Healy et al. Oct 2021 B1
11163176 Karafin et al. Nov 2021 B2
11164324 Liu et al. Nov 2021 B2
11166006 Hegyi Nov 2021 B2
11169380 Manly et al. Nov 2021 B2
11172990 Lang Nov 2021 B2
11179136 Kohli et al. Nov 2021 B2
11180557 Noelle Nov 2021 B2
11181747 Kessler et al. Nov 2021 B1
11185891 Cousins et al. Nov 2021 B2
11187907 Osterman et al. Nov 2021 B2
11202682 Staunton et al. Dec 2021 B2
11207150 Healy et al. Dec 2021 B2
11217028 Jones et al. Jan 2022 B2
11224483 Steinberg et al. Jan 2022 B2
11224763 Takahashi et al. Jan 2022 B2
11227417 Berlinger et al. Jan 2022 B2
11231787 Isaacs et al. Jan 2022 B2
11243404 Mcdowall et al. Feb 2022 B2
11244508 Kazanzides et al. Feb 2022 B2
11253216 Crawford et al. Feb 2022 B2
11253323 Hughes et al. Feb 2022 B2
11257190 Mao et al. Feb 2022 B2
11257241 Tao Feb 2022 B2
11263772 Siemionow et al. Mar 2022 B2
11269401 West et al. Mar 2022 B2
11272151 Casas Mar 2022 B2
11278359 Siemionow et al. Mar 2022 B2
11278413 Lang Mar 2022 B1
11280480 Wilt et al. Mar 2022 B2
11284846 Graumann et al. Mar 2022 B2
11291521 Im Apr 2022 B2
11294167 Ishimoda Apr 2022 B2
11297285 Pierce Apr 2022 B2
11300252 Nguyen Apr 2022 B2
11300790 Cheng et al. Apr 2022 B2
11304621 Merschon et al. Apr 2022 B2
11304759 Kovtun et al. Apr 2022 B2
11307402 Steier et al. Apr 2022 B2
11308663 Alhrishy et al. Apr 2022 B2
11311341 Lang Apr 2022 B2
11317973 Calloway et al. May 2022 B2
11337763 Choi May 2022 B2
11348257 Lang May 2022 B2
11350072 Quiles Casas May 2022 B1
11350965 Yilmaz et al. Jun 2022 B2
11351006 Aferzon et al. Jun 2022 B2
11354813 Piat et al. Jun 2022 B2
11360315 Tu et al. Jun 2022 B2
11373342 Stafford et al. Jun 2022 B2
11382699 Wassall et al. Jul 2022 B2
11382700 Calloway et al. Jul 2022 B2
11382712 Elimelech et al. Jul 2022 B2
11382713 Healy et al. Jul 2022 B2
11389252 Gera et al. Jul 2022 B2
11393229 Zhou et al. Jul 2022 B2
11399895 Soper et al. Aug 2022 B2
11402524 Song et al. Aug 2022 B2
11406338 Tolkowsky Aug 2022 B2
11412202 Hegyi Aug 2022 B2
11423554 Borsdorf et al. Aug 2022 B2
11430203 Navab et al. Aug 2022 B2
11432828 Lang Sep 2022 B1
11432931 Lang Sep 2022 B2
11443428 Petersen et al. Sep 2022 B2
11443431 Flossmann et al. Sep 2022 B2
11452568 Lang Sep 2022 B2
11452570 Tolkowsky Sep 2022 B2
11460915 Frielinghaus et al. Oct 2022 B2
11461936 Freeman et al. Oct 2022 B2
11461983 Jones et al. Oct 2022 B2
11464580 Kemp et al. Oct 2022 B2
11464581 Calloway Oct 2022 B2
11475625 Douglas Oct 2022 B1
11478214 Siewerdsen et al. Oct 2022 B2
11483532 Quiles Casas Oct 2022 B2
11488021 Sun et al. Nov 2022 B2
11490986 Ben-Yishai Nov 2022 B2
11510750 Dulin et al. Nov 2022 B2
11513358 Mcdowall et al. Nov 2022 B2
11527002 Govari Dec 2022 B2
11528393 Garofolo et al. Dec 2022 B2
11544031 Harviainen Jan 2023 B2
11573420 Sarma et al. Feb 2023 B2
11589927 Oezbek et al. Feb 2023 B2
11627924 Alexandroni et al. Apr 2023 B2
11644675 Manly et al. May 2023 B2
11648016 Hathaway et al. May 2023 B2
11651499 Wang et al. May 2023 B2
11657518 Ketcha et al. May 2023 B2
11666458 Kim et al. Jun 2023 B2
11669984 Siewerdsen et al. Jun 2023 B2
11686947 Loyola et al. Jun 2023 B2
11699236 Avital et al. Jul 2023 B2
11712582 Miyazaki et al. Aug 2023 B2
11715210 Haslam et al. Aug 2023 B2
11719941 Russell Aug 2023 B2
11730389 Farshad et al. Aug 2023 B2
11733516 Edwin et al. Aug 2023 B2
11734901 Jones et al. Aug 2023 B2
11744657 Leboeuf et al. Sep 2023 B2
11750794 Benishti et al. Sep 2023 B2
11766296 Wolf et al. Sep 2023 B2
11798178 Merlet Oct 2023 B2
11801097 Crawford et al. Oct 2023 B2
11801115 Elimelech et al. Oct 2023 B2
11808943 Robaina et al. Nov 2023 B2
11815683 Sears et al. Nov 2023 B2
11826111 Mahfouz Nov 2023 B2
11832886 Dorman Dec 2023 B2
11838493 Healy et al. Dec 2023 B2
11839433 Schaewe et al. Dec 2023 B2
11839501 Takahashi et al. Dec 2023 B2
11864934 Junio et al. Jan 2024 B2
11885752 St-Aubin et al. Jan 2024 B2
11892647 Hung et al. Feb 2024 B2
11896445 Gera et al. Feb 2024 B2
11900620 Lalys et al. Feb 2024 B2
11914155 Zhu et al. Feb 2024 B2
11918310 Roh et al. Mar 2024 B1
11922631 Haslam et al. Mar 2024 B2
11941814 Crawford et al. Mar 2024 B2
11944508 Cowin et al. Apr 2024 B1
11948265 Gibby et al. Apr 2024 B2
11950968 Wiggermann Apr 2024 B2
11957420 Lang Apr 2024 B2
11961193 Pelzl et al. Apr 2024 B2
11963723 Vilsmeier et al. Apr 2024 B2
11972582 Yan et al. Apr 2024 B2
11974819 Finley et al. May 2024 B2
11974887 Elimelech et al. May 2024 B2
11977232 Wu et al. May 2024 B2
11980429 Wolf et al. May 2024 B2
11980506 Wolf et al. May 2024 B2
11980507 Elimelech et al. May 2024 B2
11980508 Elimelech et al. May 2024 B2
11983824 Avisar et al. May 2024 B2
12002171 Jones et al. Jun 2024 B2
12010285 Quiles Casas Jun 2024 B2
12014497 Hong et al. Jun 2024 B2
12019314 Steines et al. Jun 2024 B1
12026897 Frantz et al. Jul 2024 B2
12033322 Laaksonen et al. Jul 2024 B2
12044856 Gera et al. Jul 2024 B2
12044858 Gera et al. Jul 2024 B2
12053247 Chiou Aug 2024 B1
12056830 Cvetko et al. Aug 2024 B2
12059281 Weingarten et al. Aug 2024 B2
12063338 Quiles Casas Aug 2024 B2
12063345 Benishti et al. Aug 2024 B2
12069233 Benishti et al. Aug 2024 B2
12076158 Geiger et al. Sep 2024 B2
12076196 Elimelech et al. Sep 2024 B2
12079385 Ben-Yishai et al. Sep 2024 B2
12112483 Grady et al. Oct 2024 B2
12114933 Seo et al. Oct 2024 B2
12115028 Dulin et al. Oct 2024 B2
12127800 Qian et al. Oct 2024 B2
12133772 Calloway et al. Nov 2024 B2
12136176 Spaas et al. Nov 2024 B2
12142365 Kaethner et al. Nov 2024 B2
12150821 Gera et al. Nov 2024 B2
12178666 Wolf et al. Dec 2024 B2
12186028 Gera et al. Jan 2025 B2
12201384 Wolf et al. Jan 2025 B2
12206837 Benishti et al. Jan 2025 B2
20020082498 Wendt et al. Jun 2002 A1
20030059097 Abovitz et al. Mar 2003 A1
20030117393 Sauer et al. Jun 2003 A1
20030130576 Seeley et al. Jul 2003 A1
20030156144 Morita Aug 2003 A1
20030210812 Khamene et al. Nov 2003 A1
20030225329 Rossner et al. Dec 2003 A1
20040019263 Jutras et al. Jan 2004 A1
20040030237 Lee et al. Feb 2004 A1
20040138556 Cosman Jul 2004 A1
20040152955 Mcginley et al. Aug 2004 A1
20040171930 Grimm et al. Sep 2004 A1
20040238732 State et al. Dec 2004 A1
20050017972 Poole et al. Jan 2005 A1
20050024586 Teiwes et al. Feb 2005 A1
20050119639 McCombs et al. Jun 2005 A1
20050154296 Lechner et al. Jul 2005 A1
20050203367 Ahmed et al. Sep 2005 A1
20050203380 Sauer et al. Sep 2005 A1
20050215879 Chuanggui Sep 2005 A1
20050267358 Tuma et al. Dec 2005 A1
20060072124 Smetak et al. Apr 2006 A1
20060134198 Tawa et al. Jun 2006 A1
20060147100 Fitzpatrick Jul 2006 A1
20060176242 Jaramaz et al. Aug 2006 A1
20070018975 Chuanggui et al. Jan 2007 A1
20070058261 Sugihara et al. Mar 2007 A1
20070100325 Jutras et al. May 2007 A1
20070183041 McCloy et al. Aug 2007 A1
20070233371 Stoschek et al. Oct 2007 A1
20070273610 Baillot Nov 2007 A1
20080002809 Bodduluri Jan 2008 A1
20080007645 McCutchen Jan 2008 A1
20080035266 Danziger Feb 2008 A1
20080085033 Haven et al. Apr 2008 A1
20080159612 Fu et al. Jul 2008 A1
20080183065 Goldbach Jul 2008 A1
20080221625 Hufner et al. Sep 2008 A1
20080253527 Boyden et al. Oct 2008 A1
20080262812 Arata et al. Oct 2008 A1
20080287728 Mostafavi et al. Nov 2008 A1
20090005961 Grabowski et al. Jan 2009 A1
20090018437 Cooke Jan 2009 A1
20090024127 Lechner et al. Jan 2009 A1
20090036902 DiMaio et al. Feb 2009 A1
20090062869 Claverie et al. Mar 2009 A1
20090099445 Burger Apr 2009 A1
20090123452 Madison May 2009 A1
20090227847 Tepper et al. Sep 2009 A1
20090285366 Essenreiter et al. Nov 2009 A1
20090300540 Russell Dec 2009 A1
20100076305 Maier-Hein et al. Mar 2010 A1
20100094308 Tatsumi et al. Apr 2010 A1
20100106010 Rubner et al. Apr 2010 A1
20100114110 Taft et al. May 2010 A1
20100138939 Bentzon et al. Jun 2010 A1
20100149073 Chaum et al. Jun 2010 A1
20100172567 Prokoski Jul 2010 A1
20100210939 Hartmann et al. Aug 2010 A1
20100266220 Zagorchev et al. Oct 2010 A1
20100274124 Jascob et al. Oct 2010 A1
20110004259 Stallings et al. Jan 2011 A1
20110098553 Robbins et al. Apr 2011 A1
20110105895 Kornblau et al. May 2011 A1
20110125159 Hanson et al. May 2011 A1
20110125160 Bagga et al. May 2011 A1
20110216060 Weising et al. Sep 2011 A1
20110245625 Trovato et al. Oct 2011 A1
20110248064 Marczyk Oct 2011 A1
20110254922 Schaerer et al. Oct 2011 A1
20110306873 Shenai et al. Dec 2011 A1
20120014608 Watanabe Jan 2012 A1
20120068913 Bar-Zeev et al. Mar 2012 A1
20120078236 Schoepp Mar 2012 A1
20120109151 Maier-Hein et al. May 2012 A1
20120143050 Heigl Jun 2012 A1
20120155064 Waters Jun 2012 A1
20120162452 Liu Jun 2012 A1
20120182605 Hall et al. Jul 2012 A1
20120201421 Hartmann et al. Aug 2012 A1
20120216411 Wevers et al. Aug 2012 A1
20120224260 Healy et al. Sep 2012 A1
20120238609 Srivastava et al. Sep 2012 A1
20120245645 Hanson et al. Sep 2012 A1
20120289777 Chopra et al. Nov 2012 A1
20120306850 Balan et al. Dec 2012 A1
20120320100 Machida et al. Dec 2012 A1
20130002928 Imai Jan 2013 A1
20130009853 Hesselink et al. Jan 2013 A1
20130038632 Dillavou et al. Feb 2013 A1
20130050258 Liu et al. Feb 2013 A1
20130050833 Lewis et al. Feb 2013 A1
20130057581 Meier Mar 2013 A1
20130079829 Globerman et al. Mar 2013 A1
20130083009 Geisner et al. Apr 2013 A1
20130106833 Fun May 2013 A1
20130135734 Shafer et al. May 2013 A1
20130135738 Shafer et al. May 2013 A1
20130190602 Liao et al. Jul 2013 A1
20130195338 Xu et al. Aug 2013 A1
20130209953 Arlinsky et al. Aug 2013 A1
20130212453 Gudai et al. Aug 2013 A1
20130234914 Fujimaki Sep 2013 A1
20130234935 Griffith Sep 2013 A1
20130237811 Mihailescu et al. Sep 2013 A1
20130245461 Maier-Hein et al. Sep 2013 A1
20130249787 Morimoto Sep 2013 A1
20130249945 Kobayashi Sep 2013 A1
20130265623 Sugiyama et al. Oct 2013 A1
20130267838 Fronk et al. Oct 2013 A1
20130278631 Border et al. Oct 2013 A1
20130278635 Maggiore Oct 2013 A1
20130300637 Smits et al. Nov 2013 A1
20130300760 Sugano et al. Nov 2013 A1
20130342571 Kinnebrew et al. Dec 2013 A1
20130345718 Crawford et al. Dec 2013 A1
20140031668 Mobasser et al. Jan 2014 A1
20140049629 Siewerdsen et al. Feb 2014 A1
20140088402 Xu Mar 2014 A1
20140088990 Nawana et al. Mar 2014 A1
20140104505 Koenig Apr 2014 A1
20140105912 Noelle Apr 2014 A1
20140114173 Bar-Tal et al. Apr 2014 A1
20140142426 Razzaque et al. May 2014 A1
20140168261 Margolis et al. Jun 2014 A1
20140176661 Smurro et al. Jun 2014 A1
20140177023 Gao et al. Jun 2014 A1
20140189508 Granchi et al. Jul 2014 A1
20140198129 Liu et al. Jul 2014 A1
20140218291 Kirk Aug 2014 A1
20140240484 Kodama et al. Aug 2014 A1
20140243614 Rothberg et al. Aug 2014 A1
20140256429 Kobayashi et al. Sep 2014 A1
20140266983 Christensen Sep 2014 A1
20140268356 Bolas et al. Sep 2014 A1
20140270505 McCarthy Sep 2014 A1
20140275760 Lee et al. Sep 2014 A1
20140285404 Takano et al. Sep 2014 A1
20140285429 Simmons Sep 2014 A1
20140288413 Hwang et al. Sep 2014 A1
20140300632 Laor Oct 2014 A1
20140300967 Tilleman et al. Oct 2014 A1
20140301624 Barckow Oct 2014 A1
20140303491 Shekhar et al. Oct 2014 A1
20140320399 Kim et al. Oct 2014 A1
20140333899 Smithwick Nov 2014 A1
20140336461 Reiter et al. Nov 2014 A1
20140340286 Machida et al. Nov 2014 A1
20140361956 Mikhailov et al. Dec 2014 A1
20140371728 Vaughn Dec 2014 A1
20150005772 Anglin et al. Jan 2015 A1
20150018672 Blumhofer et al. Jan 2015 A1
20150031985 Reddy et al. Jan 2015 A1
20150043798 Carrell et al. Feb 2015 A1
20150070347 Hofmann et al. Mar 2015 A1
20150084990 Laor Mar 2015 A1
20150150641 Daon et al. Jun 2015 A1
20150182293 Yang et al. Jul 2015 A1
20150192776 Lee et al. Jul 2015 A1
20150209119 Theodore et al. Jul 2015 A1
20150230873 Kubiak et al. Aug 2015 A1
20150230893 Huwais Aug 2015 A1
20150261922 Nawana et al. Sep 2015 A1
20150277123 Chaum et al. Oct 2015 A1
20150282735 Rossner Oct 2015 A1
20150287188 Gazit et al. Oct 2015 A1
20150287236 Winne et al. Oct 2015 A1
20150297314 Fowler et al. Oct 2015 A1
20150305828 Park et al. Oct 2015 A1
20150310668 Ellerbrock Oct 2015 A1
20150338652 Lim et al. Nov 2015 A1
20150338653 Subramaniam et al. Nov 2015 A1
20150350517 Duret et al. Dec 2015 A1
20150351863 Plassky et al. Dec 2015 A1
20150363978 Maimone et al. Dec 2015 A1
20150366620 Cameron et al. Dec 2015 A1
20160015878 Graham et al. Jan 2016 A1
20160022287 Nehls Jan 2016 A1
20160030131 Yang et al. Feb 2016 A1
20160054571 Tazbaz et al. Feb 2016 A1
20160086380 Vayser et al. Mar 2016 A1
20160103318 Du et al. Apr 2016 A1
20160125603 Tanji May 2016 A1
20160133051 Aonuma et al. May 2016 A1
20160143699 Tanji May 2016 A1
20160153004 Zhang et al. Jun 2016 A1
20160163045 Penney et al. Jun 2016 A1
20160175064 Steinle et al. Jun 2016 A1
20160178910 Giudicelli et al. Jun 2016 A1
20160191887 Casas Jun 2016 A1
20160223822 Harrison et al. Aug 2016 A1
20160228033 Rossner Aug 2016 A1
20160246059 Halpin et al. Aug 2016 A1
20160249989 Devam et al. Sep 2016 A1
20160256223 Haimerl et al. Sep 2016 A1
20160275684 Elenbaas et al. Sep 2016 A1
20160297315 Gonzalez et al. Oct 2016 A1
20160302870 Wilkinson et al. Oct 2016 A1
20160324580 Esterberg Nov 2016 A1
20160324583 Kheradpir et al. Nov 2016 A1
20160339337 Ellsworth et al. Nov 2016 A1
20170014119 Capote et al. Jan 2017 A1
20170024634 Miao et al. Jan 2017 A1
20170027650 Merck et al. Feb 2017 A1
20170031163 Gao et al. Feb 2017 A1
20170031179 Guillot et al. Feb 2017 A1
20170045742 Greenhalgh et al. Feb 2017 A1
20170065364 Schuh et al. Mar 2017 A1
20170068119 Antaki et al. Mar 2017 A1
20170076501 Jagga et al. Mar 2017 A1
20170086941 Marti et al. Mar 2017 A1
20170112586 Dhupar Apr 2017 A1
20170164919 Lavallee et al. Jun 2017 A1
20170164920 Lavallee et al. Jun 2017 A1
20170178375 Benishti et al. Jun 2017 A1
20170220224 Kodali et al. Aug 2017 A1
20170239015 Sela et al. Aug 2017 A1
20170245944 Crawford et al. Aug 2017 A1
20170251900 Hansen et al. Sep 2017 A1
20170252109 Yang et al. Sep 2017 A1
20170258526 Lang Sep 2017 A1
20170281283 Siegler et al. Oct 2017 A1
20170312032 Amanatullah et al. Nov 2017 A1
20170322950 Han et al. Nov 2017 A1
20170348055 Salcedo et al. Dec 2017 A1
20170348061 Joshi et al. Dec 2017 A1
20170366773 Kiraly et al. Dec 2017 A1
20170367766 Mahfouz Dec 2017 A1
20170367771 Tako et al. Dec 2017 A1
20170372477 Penney et al. Dec 2017 A1
20180003981 Urey Jan 2018 A1
20180018791 Guoyi Jan 2018 A1
20180021597 Berlinger et al. Jan 2018 A1
20180028266 Barnes et al. Feb 2018 A1
20180036884 Chen et al. Feb 2018 A1
20180049622 Ryan et al. Feb 2018 A1
20180055579 Daon et al. Mar 2018 A1
20180071029 Srimohanarajah et al. Mar 2018 A1
20180078316 Schaewe et al. Mar 2018 A1
20180082480 White et al. Mar 2018 A1
20180092667 Heigl et al. Apr 2018 A1
20180092698 Chopra et al. Apr 2018 A1
20180092699 Finley Apr 2018 A1
20180116732 Lin et al. May 2018 A1
20180116741 Garcia et al. May 2018 A1
20180117150 O'Dwyer et al. May 2018 A1
20180120106 Sato May 2018 A1
20180133871 Farmer May 2018 A1
20180153626 Yang et al. Jun 2018 A1
20180182150 Benishti et al. Jun 2018 A1
20180185100 Weinstein et al. Jul 2018 A1
20180185113 Gregerson et al. Jul 2018 A1
20180193097 McLachlin et al. Jul 2018 A1
20180200002 Kostrzewski et al. Jul 2018 A1
20180247128 Alvi et al. Aug 2018 A1
20180262743 Casas Sep 2018 A1
20180303558 Thomas Oct 2018 A1
20180311011 Van et al. Nov 2018 A1
20180317803 Ben-Yishai et al. Nov 2018 A1
20180318035 McLachlin et al. Nov 2018 A1
20180368898 Divincenzo et al. Dec 2018 A1
20190000372 Gullotti et al. Jan 2019 A1
20190000564 Navab et al. Jan 2019 A1
20190015163 Abhari et al. Jan 2019 A1
20190018235 Ouderkirk et al. Jan 2019 A1
20190038362 Nash et al. Feb 2019 A1
20190038365 Soper et al. Feb 2019 A1
20190043238 Benishti et al. Feb 2019 A1
20190043392 Abele Feb 2019 A1
20190046272 Zoabi et al. Feb 2019 A1
20190046276 Inglese et al. Feb 2019 A1
20190053851 Siemionow et al. Feb 2019 A1
20190069971 Tripathi et al. Mar 2019 A1
20190080515 Geri et al. Mar 2019 A1
20190105116 Johnson et al. Apr 2019 A1
20190130792 Rios et al. May 2019 A1
20190142519 Siemionow et al. May 2019 A1
20190144443 Jackson et al. May 2019 A1
20190175228 Elimelech et al. Jun 2019 A1
20190192226 Lang Jun 2019 A1
20190192230 Siemionow et al. Jun 2019 A1
20190200894 Jung et al. Jul 2019 A1
20190201106 Siemionow et al. Jul 2019 A1
20190205606 Zhou et al. Jul 2019 A1
20190216537 Eltorai et al. Jul 2019 A1
20190251692 Schmidt-Richberg et al. Aug 2019 A1
20190251694 Han et al. Aug 2019 A1
20190254753 Johnson et al. Aug 2019 A1
20190273916 Benishti et al. Sep 2019 A1
20190310481 Blum et al. Oct 2019 A1
20190324365 De et al. Oct 2019 A1
20190333480 Lang Oct 2019 A1
20190369660 Wen et al. Dec 2019 A1
20190369717 Frielinghaus et al. Dec 2019 A1
20190378276 Flossmann et al. Dec 2019 A1
20190387351 Lyren et al. Dec 2019 A1
20200015895 Frielinghaus et al. Jan 2020 A1
20200019364 Pond Jan 2020 A1
20200020249 Jarc et al. Jan 2020 A1
20200038112 Amanatullah et al. Feb 2020 A1
20200043160 Mizukura et al. Feb 2020 A1
20200078100 Weinstein et al. Mar 2020 A1
20200085511 Oezbek et al. Mar 2020 A1
20200088997 Lee et al. Mar 2020 A1
20200100847 Siegler et al. Apr 2020 A1
20200117025 Sauer Apr 2020 A1
20200129058 Li et al. Apr 2020 A1
20200129136 Harding et al. Apr 2020 A1
20200129262 Verard et al. Apr 2020 A1
20200129264 Oativia et al. Apr 2020 A1
20200133029 Yonezawa Apr 2020 A1
20200138518 Lang May 2020 A1
20200138618 Roszkowiak et al. May 2020 A1
20200143594 Lal et al. May 2020 A1
20200146546 Chene et al. May 2020 A1
20200151507 Siemionow et al. May 2020 A1
20200156259 Ruiz et al. May 2020 A1
20200159313 Gibby et al. May 2020 A1
20200163723 Wolf et al. May 2020 A1
20200163739 Messinger et al. May 2020 A1
20200178916 Lalys et al. Jun 2020 A1
20200184638 Meglan et al. Jun 2020 A1
20200186786 Gibby et al. Jun 2020 A1
20200188028 Feiner et al. Jun 2020 A1
20200188034 Lequette et al. Jun 2020 A1
20200201082 Carabin Jun 2020 A1
20200229877 Siemionow et al. Jul 2020 A1
20200237256 Farshad et al. Jul 2020 A1
20200237459 Racheli et al. Jul 2020 A1
20200237880 Kent et al. Jul 2020 A1
20200242280 Pavloff et al. Jul 2020 A1
20200246074 Lang Aug 2020 A1
20200246081 Johnson et al. Aug 2020 A1
20200264451 Blum et al. Aug 2020 A1
20200265273 Wei et al. Aug 2020 A1
20200275988 Johnson et al. Sep 2020 A1
20200281554 Trini et al. Sep 2020 A1
20200286222 Essenreiter et al. Sep 2020 A1
20200288075 Bonin et al. Sep 2020 A1
20200294233 Merlet Sep 2020 A1
20200297427 Cameron et al. Sep 2020 A1
20200305980 Lang Oct 2020 A1
20200315734 El Amm Oct 2020 A1
20200321099 Holladay et al. Oct 2020 A1
20200323460 Busza et al. Oct 2020 A1
20200323609 Johnson et al. Oct 2020 A1
20200327721 Siemionow et al. Oct 2020 A1
20200330179 Ton Oct 2020 A1
20200337780 Winkler et al. Oct 2020 A1
20200341283 McCracken et al. Oct 2020 A1
20200352655 Freese Nov 2020 A1
20200355927 Marcellin-Dibon et al. Nov 2020 A1
20200360091 Murray et al. Nov 2020 A1
20200360105 Frey et al. Nov 2020 A1
20200375666 Stephen Dec 2020 A1
20200377493 Heiser et al. Dec 2020 A1
20200377956 Vogelstein et al. Dec 2020 A1
20200388075 Kazanzides et al. Dec 2020 A1
20200389425 Bhatia et al. Dec 2020 A1
20200390502 Holthuizen et al. Dec 2020 A1
20200390503 Casas et al. Dec 2020 A1
20200402647 Domracheva et al. Dec 2020 A1
20200409306 Gelman et al. Dec 2020 A1
20200410687 Siemionow et al. Dec 2020 A1
20200413031 Khani et al. Dec 2020 A1
20210004956 Book et al. Jan 2021 A1
20210009339 Morrison et al. Jan 2021 A1
20210015560 Boddington et al. Jan 2021 A1
20210015583 Avisar et al. Jan 2021 A1
20210022599 Freeman et al. Jan 2021 A1
20210022808 Lang Jan 2021 A1
20210022811 Mahfouz Jan 2021 A1
20210022828 Elimelech et al. Jan 2021 A1
20210029804 Chang Jan 2021 A1
20210030374 Takahashi et al. Feb 2021 A1
20210030511 Wolf et al. Feb 2021 A1
20210038339 Yu et al. Feb 2021 A1
20210049825 Wheelwright et al. Feb 2021 A1
20210052348 Stifter et al. Feb 2021 A1
20210056687 Hibbard et al. Feb 2021 A1
20210065911 Goel et al. Mar 2021 A1
20210077195 Saeidi et al. Mar 2021 A1
20210077210 Itkowitz et al. Mar 2021 A1
20210080751 Lindsey et al. Mar 2021 A1
20210090344 Geri et al. Mar 2021 A1
20210093391 Poltaretskyi et al. Apr 2021 A1
20210093392 Poltaretskyi et al. Apr 2021 A1
20210093400 Quaid et al. Apr 2021 A1
20210093417 Liu Apr 2021 A1
20210104055 Ni et al. Apr 2021 A1
20210107923 Jackson et al. Apr 2021 A1
20210109349 Schneider et al. Apr 2021 A1
20210109373 Loo et al. Apr 2021 A1
20210110517 Flohr et al. Apr 2021 A1
20210113269 Vilsmeier et al. Apr 2021 A1
20210113293 Silva et al. Apr 2021 A9
20210121238 Palushi et al. Apr 2021 A1
20210137634 Lang May 2021 A1
20210141887 Kim et al. May 2021 A1
20210150702 Claessen et al. May 2021 A1
20210157544 Denton May 2021 A1
20210160472 Casas May 2021 A1
20210161614 Elimelech et al. Jun 2021 A1
20210162287 Xing et al. Jun 2021 A1
20210165207 Peyman Jun 2021 A1
20210169504 Brown Jun 2021 A1
20210169578 Calloway et al. Jun 2021 A1
20210169581 Calloway et al. Jun 2021 A1
20210169605 Calloway et al. Jun 2021 A1
20210186647 Elimelech et al. Jun 2021 A1
20210196404 Wang Jul 2021 A1
20210211640 Bristol et al. Jul 2021 A1
20210223577 Zhang et al. Jul 2021 A1
20210225006 Grady et al. Jul 2021 A1
20210227791 De Oliveira Seixas et al. Jul 2021 A1
20210231301 Hikmet et al. Jul 2021 A1
20210235061 Hegyi Jul 2021 A1
20210248822 Choi et al. Aug 2021 A1
20210274281 Zhang et al. Sep 2021 A1
20210278675 Klug et al. Sep 2021 A1
20210282887 Wiggermann Sep 2021 A1
20210290046 Nazareth et al. Sep 2021 A1
20210290336 Wang Sep 2021 A1
20210290394 Mahfouz Sep 2021 A1
20210295108 Bar Sep 2021 A1
20210295512 Knoplioch et al. Sep 2021 A1
20210298795 Bowling et al. Sep 2021 A1
20210298835 Wang Sep 2021 A1
20210306599 Pierce Sep 2021 A1
20210311322 Belanger et al. Oct 2021 A1
20210314502 Liu Oct 2021 A1
20210315636 Akbarian et al. Oct 2021 A1
20210315662 Freeman et al. Oct 2021 A1
20210325684 Ninan et al. Oct 2021 A1
20210332447 Lubelski et al. Oct 2021 A1
20210333561 Oh et al. Oct 2021 A1
20210341739 Cakmakci et al. Nov 2021 A1
20210341740 Cakmakci et al. Nov 2021 A1
20210346115 Dulin et al. Nov 2021 A1
20210349677 Baldev et al. Nov 2021 A1
20210364802 Uchiyama et al. Nov 2021 A1
20210369226 Siemionow et al. Dec 2021 A1
20210371413 Thurston et al. Dec 2021 A1
20210373333 Moon Dec 2021 A1
20210373344 Loyola et al. Dec 2021 A1
20210378757 Bay et al. Dec 2021 A1
20210382310 Freeman et al. Dec 2021 A1
20210386482 Gera et al. Dec 2021 A1
20210389590 Freeman et al. Dec 2021 A1
20210400247 Casas Dec 2021 A1
20210401533 Im Dec 2021 A1
20210402255 Fung Dec 2021 A1
20210405369 King Dec 2021 A1
20220003992 Ahn Jan 2022 A1
20220007006 Healy et al. Jan 2022 A1
20220008135 Frielinghaus et al. Jan 2022 A1
20220038675 Hegyi Feb 2022 A1
20220039873 Harris Feb 2022 A1
20220051484 Jones et al. Feb 2022 A1
20220054199 Sivaprakasam et al. Feb 2022 A1
20220061921 Crawford et al. Mar 2022 A1
20220071712 Wolf et al. Mar 2022 A1
20220079675 Lang Mar 2022 A1
20220087746 Lang Mar 2022 A1
20220113810 Isaacs et al. Apr 2022 A1
20220117669 Nikou et al. Apr 2022 A1
20220121041 Hakim Apr 2022 A1
20220133484 Lang May 2022 A1
20220142730 Wolf et al. May 2022 A1
20220155861 Myung et al. May 2022 A1
20220159227 Quiles Casas May 2022 A1
20220179209 Cherukuri Jun 2022 A1
20220192776 Gibby et al. Jun 2022 A1
20220193453 Miyazaki et al. Jun 2022 A1
20220201274 Achilefu et al. Jun 2022 A1
20220245400 Siemionow et al. Aug 2022 A1
20220245821 Ouzounis Aug 2022 A1
20220257206 Hartley et al. Aug 2022 A1
20220269077 Adema et al. Aug 2022 A1
20220270263 Junio Aug 2022 A1
20220287676 Steines et al. Sep 2022 A1
20220292786 Pelzl et al. Sep 2022 A1
20220295033 Quiles Casas Sep 2022 A1
20220296315 Sokhanvar et al. Sep 2022 A1
20220304768 Elimelech et al. Sep 2022 A1
20220351385 Finley et al. Nov 2022 A1
20220353487 Hegyi Nov 2022 A1
20220358759 Cork et al. Nov 2022 A1
20220370152 Lavallee et al. Nov 2022 A1
20220387130 Spaas et al. Dec 2022 A1
20220392085 Finley et al. Dec 2022 A1
20220397750 Zhou et al. Dec 2022 A1
20220398752 Yoon et al. Dec 2022 A1
20220398755 Herrmann Dec 2022 A1
20220405935 Flossmann et al. Dec 2022 A1
20230004013 McCracken et al. Jan 2023 A1
20230009793 Gera et al. Jan 2023 A1
20230025480 Kemp et al. Jan 2023 A1
20230027801 Qian et al. Jan 2023 A1
20230032731 Hrndler et al. Feb 2023 A1
20230034189 Gera et al. Feb 2023 A1
20230050636 Yanof et al. Feb 2023 A1
20230053120 Jamali et al. Feb 2023 A1
20230073041 Samadani et al. Mar 2023 A1
20230085387 Jones et al. Mar 2023 A1
20230087783 Dulin et al. Mar 2023 A1
20230100078 Toporek et al. Mar 2023 A1
20230123621 Joshi et al. Apr 2023 A1
20230126207 Wang Apr 2023 A1
20230129056 Hemingway et al. Apr 2023 A1
20230131515 Oezbek et al. Apr 2023 A1
20230149083 Lin et al. May 2023 A1
20230162493 Worrell et al. May 2023 A1
20230165640 Dulin et al. Jun 2023 A1
20230169659 Chen et al. Jun 2023 A1
20230196582 Grady et al. Jun 2023 A1
20230200917 Calloway et al. Jun 2023 A1
20230236426 Manly et al. Jul 2023 A1
20230236427 Jiannyuh Jul 2023 A1
20230245784 Crawford et al. Aug 2023 A1
20230260142 Chatterjee et al. Aug 2023 A1
20230290037 Tasse et al. Sep 2023 A1
20230295302 Bhagavatheeswaran et al. Sep 2023 A1
20230306590 Jazdzyk et al. Sep 2023 A1
20230316550 Hiasa Oct 2023 A1
20230326011 Cutforth et al. Oct 2023 A1
20230326027 Wahrenberg Oct 2023 A1
20230329799 Gera et al. Oct 2023 A1
20230329801 Elimelech et al. Oct 2023 A1
20230334664 Lu et al. Oct 2023 A1
20230335261 Reicher et al. Oct 2023 A1
20230359043 Russell Nov 2023 A1
20230363832 Mosadegh et al. Nov 2023 A1
20230371984 Leuthardt et al. Nov 2023 A1
20230372053 Elimelech et al. Nov 2023 A1
20230372054 Elimelech et al. Nov 2023 A1
20230377171 Hasler et al. Nov 2023 A1
20230377175 Seok Nov 2023 A1
20230379448 Benishti et al. Nov 2023 A1
20230379449 Benishti et al. Nov 2023 A1
20230386022 Tan et al. Nov 2023 A1
20230386067 De et al. Nov 2023 A1
20230394791 Wang et al. Dec 2023 A1
20230397349 Capelli et al. Dec 2023 A1
20230397957 Crawford et al. Dec 2023 A1
20230410445 Elimelech et al. Dec 2023 A1
20230419496 Wuelker et al. Dec 2023 A1
20230420114 Scholler et al. Dec 2023 A1
20240008935 Wolf et al. Jan 2024 A1
20240016549 Johnson et al. Jan 2024 A1
20240016572 Elimelech et al. Jan 2024 A1
20240020831 Johnson et al. Jan 2024 A1
20240020840 Johnson et al. Jan 2024 A1
20240020862 Johnson et al. Jan 2024 A1
20240022704 Benishti et al. Jan 2024 A1
20240023946 Wolf et al. Jan 2024 A1
20240041530 Lang Feb 2024 A1
20240041558 Siewerdsen et al. Feb 2024 A1
20240045491 Sourov Feb 2024 A1
20240058064 Weiser et al. Feb 2024 A1
20240062387 Frantz et al. Feb 2024 A1
20240103271 Farid Mar 2024 A1
20240103282 Law et al. Mar 2024 A1
20240111163 Law et al. Apr 2024 A1
20240122560 Junio et al. Apr 2024 A1
20240126087 Gera et al. Apr 2024 A1
20240127559 Rybnikov et al. Apr 2024 A1
20240127578 Hiasa Apr 2024 A1
20240129451 Healy et al. Apr 2024 A1
20240130826 Elimelech et al. Apr 2024 A1
20240134206 Gera et al. Apr 2024 A1
20240144497 Cvetko et al. May 2024 A1
20240156532 Weiman et al. May 2024 A1
20240177445 Galeotti et al. May 2024 A1
20240177458 Zhang et al. May 2024 A1
20240180634 Mikus Jun 2024 A1
20240184119 Lee et al. Jun 2024 A1
20240185509 Kovler et al. Jun 2024 A1
20240202926 Crawford et al. Jun 2024 A1
20240202927 Haslam et al. Jun 2024 A1
20240212111 Genghi et al. Jun 2024 A1
20240233131 Westerhoff et al. Jul 2024 A1
20240245463 Vilsmeier et al. Jul 2024 A1
20240245474 Weiman et al. Jul 2024 A1
20240248530 Gibby et al. Jul 2024 A1
20240252252 Lang Aug 2024 A1
20240261036 Finley et al. Aug 2024 A1
20240261058 Gera et al. Aug 2024 A1
20240265645 Papar Aug 2024 A1
20240266033 Freeman et al. Aug 2024 A1
20240268922 Calloway et al. Aug 2024 A1
20240273740 Gibby et al. Aug 2024 A1
20240281979 Schrempf et al. Aug 2024 A1
20240296527 Nett et al. Sep 2024 A1
20240303832 Chen et al. Sep 2024 A1
20240307101 Gera et al. Sep 2024 A1
20240312012 Li et al. Sep 2024 A1
20240341853 Gibby et al. Oct 2024 A1
20240341861 Wolf et al. Oct 2024 A1
20240341910 Wolf et al. Oct 2024 A1
20240341911 Elimelech et al. Oct 2024 A1
20240355098 Liu et al. Oct 2024 A1
20240374314 Frey et al. Nov 2024 A1
20240377640 Asaban et al. Nov 2024 A1
20240378708 Kim et al. Nov 2024 A1
20240382283 Kuhnert et al. Nov 2024 A1
20240386572 Barasofsky et al. Nov 2024 A1
20240386682 Cvetko et al. Nov 2024 A1
20240394883 Liao et al. Nov 2024 A1
20240394985 Hanlon et al. Nov 2024 A1
20240404065 Gibbons et al. Dec 2024 A1
20240404106 Wu et al. Dec 2024 A1
20240420337 Li et al. Dec 2024 A1
20240420592 Stone et al. Dec 2024 A1
20240423724 Wolf et al. Dec 2024 A1
20240423750 Elimelech et al. Dec 2024 A1
20250020931 Gera et al. Jan 2025 A1
20250049534 Elimelech et al. Feb 2025 A1
Foreign Referenced Citations (189)
Number Date Country
3022448 Feb 2018 CA
3034314 Feb 2018 CA
101379412 Mar 2009 CN
102740784 Oct 2012 CN
102740789 Oct 2012 CN
103106348 May 2013 CN
103945780 Jul 2014 CN
105310756 Feb 2016 CN
109199563 Jan 2019 CN
111915696 Nov 2020 CN
112489047 Mar 2021 CN
202004011567 Nov 2004 DE
102004011567 Sep 2005 DE
102014008153 Oct 2014 DE
202022103168 Jun 2022 DE
0933096 Aug 1999 EP
1640750 Mar 2006 EP
1757974 Feb 2007 EP
2119397 Nov 2009 EP
2134847 Dec 2009 EP
2557998 Feb 2013 EP
2823463 Jan 2015 EP
2868277 May 2015 EP
2891966 Jul 2015 EP
2963616 Jan 2016 EP
3028258 Jun 2016 EP
3034607 Jun 2016 EP
3037038 Jun 2016 EP
3069318 Sep 2016 EP
3076660 Oct 2016 EP
3121789 Jan 2017 EP
3123970 Feb 2017 EP
2654749 May 2017 EP
3175815 Jun 2017 EP
3216416 Sep 2017 EP
2032039 Oct 2017 EP
3224376 Oct 2017 EP
3247297 Nov 2017 EP
3256213 Dec 2017 EP
3306567 Apr 2018 EP
3320874 May 2018 EP
2030193 Jul 2018 EP
2225723 Feb 2019 EP
2619622 Feb 2019 EP
2892558 Apr 2019 EP
3494903 Jun 2019 EP
2635299 Jul 2019 EP
3505050 Jul 2019 EP
2875149 Dec 2019 EP
3593227 Jan 2020 EP
3634294 Apr 2020 EP
3206583 Sep 2020 EP
3711700 Sep 2020 EP
2625845 Mar 2021 EP
3789965 Mar 2021 EP
3858280 Aug 2021 EP
3913423 Nov 2021 EP
3952331 Feb 2022 EP
3960235 Mar 2022 EP
3635683 Jul 2022 EP
3602492 Nov 2022 EP
4173590 May 2023 EP
3533031 Aug 2023 EP
4252695 Oct 2023 EP
3195257 Nov 2023 EP
3405909 Nov 2023 EP
4270313 Nov 2023 EP
4287120 Dec 2023 EP
3488381 Feb 2024 EP
3834768 Feb 2024 EP
3903714 Feb 2024 EP
4336450 Mar 2024 EP
3814984 Apr 2024 EP
4115389 Apr 2024 EP
3752981 May 2024 EP
4375948 May 2024 EP
4383203 Jun 2024 EP
4459543 Nov 2024 EP
4292045 Dec 2024 EP
4298604 Dec 2024 EP
2507314 Apr 2014 GB
262864 Mar 2019 IL
2004-237092 Aug 2004 JP
2005-246059 Sep 2005 JP
2008-507361 Mar 2008 JP
2009-514571 Apr 2009 JP
2021-525186 Sep 2021 JP
10-2014-0120155 Oct 2014 KR
0334705 Apr 2003 WO
2006002559 Jan 2006 WO
2007051304 May 2007 WO
2007115826 Oct 2007 WO
2008103383 Aug 2008 WO
2010067267 Jun 2010 WO
2010074747 Jul 2010 WO
2012061537 May 2012 WO
2012101286 Aug 2012 WO
2013112554 Aug 2013 WO
2014014498 Jan 2014 WO
2014024188 Feb 2014 WO
2014037953 Mar 2014 WO
2014113455 Jul 2014 WO
2014125789 Aug 2014 WO
2014167563 Oct 2014 WO
2014174067 Oct 2014 WO
2015058816 Apr 2015 WO
2015061752 Apr 2015 WO
2015109145 Jul 2015 WO
2016151506 Sep 2016 WO
2017042171 Mar 2017 WO
2018052966 Mar 2018 WO
2018073452 Apr 2018 WO
2018200767 Nov 2018 WO
2018206086 Nov 2018 WO
2019083431 May 2019 WO
2019135209 Jul 2019 WO
2019161477 Aug 2019 WO
2019195926 Oct 2019 WO
2019210353 Nov 2019 WO
2019211741 Nov 2019 WO
2020109903 Jun 2020 WO
2020109904 Jun 2020 WO
2021017019 Feb 2021 WO
2021019369 Feb 2021 WO
2021021979 Feb 2021 WO
2021023574 Feb 2021 WO
2021046455 Mar 2021 WO
2021048158 Mar 2021 WO
2021061459 Apr 2021 WO
2021062375 Apr 2021 WO
2021073743 Apr 2021 WO
2021087439 May 2021 WO
2021091980 May 2021 WO
2021112918 Jun 2021 WO
2021130564 Jul 2021 WO
2021137752 Jul 2021 WO
2021141887 Jul 2021 WO
2021145584 Jul 2021 WO
2021154076 Aug 2021 WO
2021183318 Sep 2021 WO
2021188757 Sep 2021 WO
2021255627 Dec 2021 WO
2021257987 Dec 2021 WO
2021258078 Dec 2021 WO
2022009233 Jan 2022 WO
2022053923 Mar 2022 WO
2022056010 Mar 2022 WO
2022079565 Apr 2022 WO
2022180624 Sep 2022 WO
2023003952 Jan 2023 WO
2023281395 Jan 2023 WO
2023007418 Feb 2023 WO
2023011924 Feb 2023 WO
2023021448 Feb 2023 WO
2023021450 Feb 2023 WO
2023021451 Feb 2023 WO
2023026229 Mar 2023 WO
2023047355 Mar 2023 WO
2023072887 May 2023 WO
2023088986 May 2023 WO
2023158878 Aug 2023 WO
2023159104 Aug 2023 WO
2023161848 Aug 2023 WO
2023163933 Aug 2023 WO
2023175244 Sep 2023 WO
2023186996 Oct 2023 WO
2023202909 Oct 2023 WO
2023205212 Oct 2023 WO
2023205896 Nov 2023 WO
2023209014 Nov 2023 WO
2023229415 Nov 2023 WO
2023232492 Dec 2023 WO
2023240912 Dec 2023 WO
2024001140 Jan 2024 WO
2024002620 Jan 2024 WO
2024013642 Jan 2024 WO
2024018368 Jan 2024 WO
2024046760 Mar 2024 WO
2024052136 Mar 2024 WO
2024077077 Apr 2024 WO
2024121060 Jun 2024 WO
2024132609 Jun 2024 WO
2024145341 Jul 2024 WO
2024160896 Aug 2024 WO
2024165508 Aug 2024 WO
2024173251 Aug 2024 WO
2024186811 Sep 2024 WO
2024226797 Oct 2024 WO
2024251344 Dec 2024 WO
Non-Patent Literature Citations (36)
Jang et al., “Retinal 3D: Augmented Reality Near-Eye Display Via Pupil-Tracked Light Field Projection on Retina”, ACM 2017. (Year: 2017).
ISA/220—Notification of Transmittal of the International Search Report and the Written Opinion of the ISA, or the Declaration, dated Aug. 11, 2023, for PCT Application No. PCT/IB23/054056, 19 pages.
U.S. Appl. No. 15/896,102 (U.S. Pat. No. 10,134,166), filed Feb. 14, 2018 (Nov. 20, 2018), Combining Video-Based and Optic-Based Augmented Reality in a Near Eye Display.
U.S. Appl. No. 16/159,740 (U.S. Pat. No. 10,382,748), filed Oct. 15, 2018 (Aug. 13, 2019), Combining Video-Based and Optic-Based Augmented Reality in a Near Eye Display.
U.S. Appl. No. 16/419,023 (U.S. Pat. No. 11,750,794), filed May 22, 2019, Combining Video-Based and Optic-Based Augmented Reality in a Near Eye Display.
U.S. Appl. No. 18/352,158, filed Jul. 13, 2023, Combining Video-Based and Optic-Based Augmented Reality in a Near Eye Display.
U.S. Appl. No. 18/365,643, filed Aug. 4, 2023, Head-Mounted Augmented Reality Near Eye Display Device.
U.S. Appl. No. 18/365,650, filed Aug. 4, 2023, Systems for Facilitating Augmented Reality-Assisted Medical Procedures.
U.S. Appl. No. 15/127,423 (U.S. Pat. No. 9,928,629), filed Sep. 20, 2016 (Mar. 27, 2018), Combining Video-Based and Optic-Based Augmented Reality in a Near Eye Display.
U.S. Appl. No. 16/120,480 (U.S. Pat. No. 10,835,296), filed Sep. 4, 2018 (Nov. 17, 2020), Spinous Process Clamp.
U.S. Appl. No. 17/067,831, filed Oct. 12, 2020, Spinous Process Clamp.
U.S. Appl. No. 18/030,072, filed Apr. 4, 2023, Spinous Process Clamp.
U.S. Appl. No. 18/365,590, filed Aug. 4, 2023, Registration of a Fiducial Marker for an Augmented Reality System.
U.S. Appl. No. 18/365,571, filed Aug. 4, 2023, Registration Marker for an Augmented Reality System.
U.S. Appl. No. 17/045,766, filed Oct. 7, 2020, Registration of a Fiducial Marker for an Augmented Reality System.
U.S. Appl. No. 16/199,281 (U.S. Pat. No. 10,939,977), filed Nov. 26, 2018 (Mar. 9, 2021), Positioning Marker.
U.S. Appl. No. 16/524,258, filed Jul. 29, 2019, Fiducial Marker.
U.S. Appl. No. 17/585,629, filed Jan. 27, 2022, Fiducial Marker.
U.S. Appl. No. 16/724,297 (U.S. Pat. No. 11,382,712), filed Dec. 22, 2019 (Jul. 12, 2022), Mirroring in Image Guided Surgery.
U.S. Appl. No. 17/827,710, filed May 29, 2022, Mirroring in Image Guided Surgery.
U.S. Appl. No. 18/352,181, filed Jul. 13, 2023, Mirroring in Image Guided Surgery.
U.S. Appl. No. 16/200,144, filed Nov. 26, 2018, Tracking System for Image-Guided Surgery.
U.S. Appl. No. 17/015,199, filed Sep. 9, 2020, Universal Tool Adapter.
U.S. Appl. No. 18/044,380, filed Mar. 8, 2023, Universal Tool Adapter for Image-Guided Surgery.
U.S. Appl. No. 16/901,026 (U.S. Pat. No. 11,389,252), filed Jun. 15, 2020 (Jul. 19, 2022), Rotating Marker for Image Guided Surgery.
U.S. Appl. No. 18/008,980, filed Dec. 8, 2022, Rotating Marker.
U.S. Appl. No. 17/368,859, filed Jul. 7, 2021, Iliac Pin and Adapter.
U.S. Appl. No. 17/388,064, filed Jul. 29, 2021, Rotating Marker and Adapter for Image-Guided Surgery.
U.S. Appl. No. 18/365,844, filed Aug. 4, 2023, Augmented-Reality Surgical System Using Depth Sensing.
U.S. Appl. No. 35/508,942 (U.S. Pat. No. D930,162), filed Feb. 13, 2020 (Sep. 7, 2021), Medical Headset.
16 Augmented Reality Glasses of 2021 (with Features), Circuit Stream blog, dated May 6, 2022, accessed at https://web.archive.org/web/20221127195438/https://circuitstream.com/blog/16-augmented-reality-glasses-of-2021-with-features-breakdowns/.
Everysight, Installing your RX Adaptor, accessed Mar. 13, 2024 at https://support.everysight.com/hc/en-US/articles/115000984571-Installing-your-RX-Adaptor.
Everysight, Raptor User Manual, copyright 2017, in 46 pages.
Frames Direct, InSpatialRx Prescription Insert, Prescription Insert for Magic Leap 1, accessed Mar. 8, 2024 at https://www.framesdirect.com/inspatialrx-prescription-insert.html.
Reddit, Notice on Prescription Lenses for Nreal Glasses, accessed Mar. 13, 2024 at https://www.reddit.com/r/nreal/comments/x1fte5/notice_on_prescription_lenses_for_nreal_glasses/.
Vuzix Blades, Prescription Lens Installation Guide, copyright 2020.
Related Publications (1)
Number Date Country
20230386153 A1 Nov 2023 US
Provisional Applications (2)
Number Date Country
63428781 Nov 2022 US
63333128 Apr 2022 US
Continuations (1)
Number Date Country
Parent PCT/IB2023/054056 Apr 2023 WO
Child 18365566 US