The present application relates to implant tracking in computer-assisted orthopedic surgery and/or in robotized computer-assisted surgery.
The navigation of surgical instruments or tools is an integral part of computer-assisted surgery (hereinafter “CAS”). The tools are navigated, i.e., tracked for position and/or orientation, in such a way that relative information pertaining to bodily parts is obtained. The information may be used in various interventions (e.g., orthopedic surgery, trauma surgery, neurological surgery) with respect to the body, such as bone alterations, implant positioning, incisions and the like during surgery.
In some circumstances, some implants must penetrate bones, such that the tracking of the implant becomes more difficult in the absence of visibility. For example, an intramedullary nail, also known as an intramedullary rod, is a metallic implant that is forced into the intramedullary canal (a.k.a., medullary canal or cavity) of the bone, in the case of a long bone fracture (e.g., tibia fracture, femur fracture). The implant contributes to the load bearing of the bone, by defining a structure holding the bone together. However, one challenge with the installation of such an implant is its lack of visibility once inserted into the intramedullary canal.
Likewise, in similar circumstances, for minimally invasive surgeries, it is desirable to limit incision size and to position implants such as plates along the bones, where the plates are inserted via a small incision and slid along the bone. Thus, the plates may not be visible because they are concealed under the skin, and there remains a challenge in locating the bolt holes on the plate.
In accordance with a first aspect of the present disclosure, there is provided a system for tracking a surgical implant relative to a bone in computer-assisted surgery, comprising: a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: obtaining a virtual model of the surgical implant, the virtual model including at least one landmark of the surgical implant, the at least one landmark being configured to be inside the bone; tracking the surgical implant, via an optical non-radiographic tracking device, as the surgical implant is inserted into the bone; calculating a location of the at least one landmark relative to the bone using the virtual model of the surgical implant and tracking data from the tracking of the surgical implant; and outputting the location of the at least one landmark relative to the bone.
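By way of non-limiting illustration, the calculation of the location of the at least one landmark may amount to a composition of rigid transforms between tracked frames. The following Python sketch assumes 4x4 homogeneous pose matrices for the bone and the implant, both expressed in the frame of the tracking device; the function and variable names are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def landmark_location_in_bone(T_tracker_bone, T_tracker_implant, p_landmark_implant):
    """Express an implant landmark, known in implant coordinates from the
    virtual implant model (e.g., the center of a locking hole), in the
    bone's coordinate system.

    T_tracker_bone:     4x4 pose of the bone in the tracking device frame
    T_tracker_implant:  4x4 pose of the implant in the tracking device frame
    p_landmark_implant: (3,) landmark position in implant coordinates
    """
    # Rigid transform taking implant coordinates into bone coordinates
    T_bone_implant = np.linalg.inv(T_tracker_bone) @ T_tracker_implant
    p = np.append(p_landmark_implant, 1.0)  # homogeneous coordinates
    return (T_bone_implant @ p)[:3]
```

The returned location may then be output even though the landmark itself is concealed inside the bone, since only the implant's exposed portions need to be tracked.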
Referring to the drawings, the CAS system 10 may be robotized in a variant, and has, may have or may be used with a head-mounted device or tracking device 20, a tracking device 30, a robot arm 40, a CAS controller 50, a tracking module 60, an augmented reality module 70, and a robot driver 80, or any combination thereof.
Other components, devices or systems may be present, such as surgical instruments and tools T, and interfaces I/F such as displays, screens, computer stations, servers, and the like.
The head-mounted tracking device 20 may include a processor 20A and components to produce a mixed reality session. For instance, the head-mounted tracking device 20 may have an integrated projector 23 that may project data on the face shield 22, in a manner described below. Alternatively, the face shield 22 may be a screen having the ability to display images. As an example, the head-mounted tracking device 20 may be a Hololens®. In an embodiment, the face shield 22 is a display-like unit of the type that may be used in virtual reality, with camera(s) therein to create a mixed reality output using camera footage, such as an Oculus Rift®, a smartphone with head support, or a hologram projection in augmented reality. The head-mounted tracking device 20 may include one or more orientation sensors, such as inertial sensor unit(s) (e.g., shown as 30), for an orientation of the head-mounted tracking device 20 to be known and tracked.
According to an embodiment, the head-mounted tracking device 20 is equipped to perform optical tracking of an implant IN, such as the intramedullary implant or orthopedic plate, patient tissue B, instruments T and/or robot arm 40, from a point of view (POV) of the operator. The head-mounted tracking device 20 may therefore have one or more imaging devices or apparatuses, to capture video images of a scene, i.e., moving visual images, as a sequence of images over time. In a variant, the video images are of light backscatter (a.k.a., backscattered radiation) used to track objects. In the present disclosure, the head-mounted tracking device 20 may be used to track tools and bones so as to provide navigation data in mixed reality to guide an operator based on surgery planning. Backscattered radiation can also be used for acquisition of 3D surface geometries of bones and tools.
The head-mounted tracking device 20 may produce structured light illumination for tracking objects with structured light 3D imaging. In structured light illumination, a portion of the objects is illuminated with one or multiple patterns from a pattern projector 24 or like light source. The pattern projector 24 includes infrared light projection. Structured light 3D imaging is based on the fact that a projection of a line of light from the pattern projector 24 onto a 3D shaped surface produces a line of illumination that appears distorted as viewed from perspectives other than that of the pattern projector 24. Accordingly, imaging such a distorted line of illumination allows a geometric reconstruction of the 3D shaped surface. Imaging of the distorted line of illumination is generally performed using one or more cameras 25 (including appropriate components such as e.g., lens(es), aperture, image sensor such as CCD, image processor) which are spaced apart from the pattern projector 24 so as to provide such different perspectives, e.g., triangulation perspective. In some embodiments, the pattern projector 24 is configured to project a structured light grid pattern including many lines at once as this allows the simultaneous acquisition of a multitude of samples on an increased area. In these embodiments, it may be convenient to use a pattern of parallel lines. However, other variants of structured light projection can be used in some other embodiments.
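As a non-limiting illustration, the geometric reconstruction described above may be reduced, for a single projected line, to intersecting a camera viewing ray with the known plane of light leaving the pattern projector 24. The sketch below assumes a calibrated setup expressed in a common frame with the camera center at the origin; all names are illustrative.

```python
import numpy as np

def triangulate_structured_light(pixel_ray, plane_normal, plane_point):
    """Intersect a camera viewing ray with a known projected light plane.

    Each illuminated pixel defines a ray from the camera center; the
    corresponding pattern line defines a plane leaving the projector.
    Their intersection is a point on the 3D shaped surface.

    pixel_ray:    (3,) unit direction of the viewing ray
    plane_normal: (3,) normal of the light plane
    plane_point:  (3,) any point on the light plane
    """
    denom = plane_normal @ pixel_ray
    if abs(denom) < 1e-9:
        raise ValueError("viewing ray is parallel to the light plane")
    s = (plane_normal @ plane_point) / denom
    return s * pixel_ray  # 3D surface point along the viewing ray
```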
The structured light grid pattern can be projected onto the surface(s) to track using the pattern projector 24. In some embodiments, the structured light grid pattern can be produced by incoherent light projection, e.g., using a digital video projector, wherein the patterns are typically generated by propagating light through a digital light modulator. Examples of digital light projection technologies include transmissive liquid crystal, reflective liquid crystal on silicon (LCOS) and digital light processing (DLP) modulators. In these embodiments, the resolution of the structured light grid pattern can be limited by the size of the emitting pixels of the digital projector. Moreover, patterns generated by such digital display projectors may have small discontinuities due to the pixel boundaries in the projector. However, these discontinuities are generally sufficiently small that they are insignificant in the presence of a slight defocus. In some other embodiments, the structured light grid pattern can be produced by laser interference. For instance, in such embodiments, two or more laser beams can be interfered with one another to produce the structured light grid pattern wherein different pattern sizes can be obtained by changing the relative angle between the laser beams.
The pattern projector 24 may emit light that is inside or outside the visible region of the electromagnetic spectrum. For instance, in some embodiments, the emitted light can be in the ultraviolet region and/or the infrared region of the electromagnetic spectrum, so as to be imperceptible to the eyes of the medical personnel. In these embodiments, however, the medical personnel may be required to wear protective glasses to protect their eyes from such invisible radiation, and the face shield 22 may have protective capacity as well. As alternatives to structured light, the head-mounted tracking device 20 may also operate with laser rangefinder technology or triangulation, as a few examples among others.
The head-mounted tracking device 20 may consequently include the cameras 25 to acquire backscatter images of the illuminated portion of objects. Hence, the cameras 25 capture the pattern projected onto the portions of the object. The cameras 25 are adapted to detect radiation in a region of the electromagnetic spectrum that corresponds to that of the patterns generated by the pattern projector 24. As described hereinafter, the known light pattern characteristics and known orientation of the pattern projector 24 relative to the cameras 25 are used by the tracking module 60 to generate a 3D geometry of the illuminated portions, using the backscatter images captured by the camera(s) 25. Although a single camera spaced from the pattern projector 24 can be used, using more than one camera 25 may increase the field of view and surface coverage, or increase precision via triangulation. The head-mounted tracking device 20 is shown as having a pair of cameras 25.
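As a non-limiting illustration of triangulation with a pair of spaced-apart cameras 25, the following sketch returns the midpoint of the shortest segment between two calibrated viewing rays imaging the same illuminated point; camera centers and ray directions are assumed to be expressed in a common frame.

```python
import numpy as np

def triangulate_two_rays(c1, d1, c2, d2):
    """Midpoint triangulation from two camera rays.

    c1, c2: (3,) camera centers; d1, d2: (3,) unit ray directions.
    Solves for the closest points c1 + s*d1 and c2 + u*d2 on each ray
    and returns their midpoint as the triangulated 3D point.
    """
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; triangulation is degenerate")
    s = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + u * d2))
```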
The head-mounted tracking device 20 may also have one or more filters integrated into either or both of the cameras 25 to filter out predetermined regions or spectral bands of the electromagnetic spectrum. The filter can be removably or fixedly mounted in front of any given camera 25. For example, the filter can be slidably movable into and out of the optical path of the cameras 25, manually or in an automated fashion. In some other embodiments, multiple filters may be periodically positioned in front of a given camera in order to acquire spectrally resolved images with different spectral ranges at different moments in time, thereby providing time-dependent spectral multiplexing. Such an embodiment may be achieved, for example, by positioning the multiple filters in a filter wheel that is controllably rotated to bring each filter in the filter wheel into the optical path of the given camera 25 in a sequential manner.
In some embodiments, the filter can allow transmittance of only some predetermined spectral features of objects within the field of view, captured either simultaneously by the head-mounted tracking device 20 or separately by the secondary tracking device 90, so as to serve as additional features that can be extracted to improve accuracy and speed of registration.
More specifically, the filter can be used to provide a maximum contrast between different materials, which can improve the imaging process and more specifically the soft tissue identification process. For example, in some embodiments, the filter can be used to filter out bands that are common to backscattered radiation from typical soft tissue items, the surgical structure of interest, and the surgical tool(s), such that backscattered radiation of high contrast between soft tissue items, surgical structure and surgical tools can be acquired. Additionally, or alternatively, where white light illumination is used, the filter can include band pass filters configured to let pass only some spectral bands of interest. For instance, the filter can be configured to let pass spectral bands associated with backscattering or reflection caused by the bones and the soft tissue, while filtering out spectral bands associated with specifically colored items such as tools, gloves and the like within the surgical field of view. Other methods for achieving spectrally selective detection, including employing spectrally narrow emitters, spectrally filtering a broadband emitter, and/or spectrally filtering a broadband imaging detector (e.g., the camera 25), can also be used. Another light source may also be provided on the head-mounted tracking device 20, for a secondary tracking option, as detailed below. It is also considered to apply distinctive coatings on the parts to be tracked, such as the bone and the tool, to increase their contrast relative to the surrounding soft tissue.
In accordance with another embodiment, the head-mounted tracking device 20 may include a 3D camera(s), also shown as 25, to perform range imaging, and hence determine position data from the captured images during tracking.
In a variant, the head-mounted tracking device 20 has only imaging capacity, for instance through the cameras 25 (of any type described above) and optionally the pattern projector 24, without other components such as the face shield 22.
The tracking module 60 may be a subpart of the CAS controller 50, or an independent module or system. The tracking module 60 receives from the head-mounted tracking device 20 and the tracking device 90 (if present) the video feed of the surgical scene, e.g., as backscatter images of the objects. The tracking module 60 may also concurrently receive tracking data (e.g., orientation data) from the inertial sensor unit(s) 30. In an embodiment, as the system 10 performs real-time tracking, the video images and the orientation data are synchronized, as they are obtained and processed simultaneously. Other processing may be performed to ensure that the video footage and the orientation data are synchronized.
The tracking module 60 processes the video images to track one or more objects, such as a bone, an instrument, etc. The tracking module 60 may determine the relative position of the objects, and segment the objects within the video images. In a variant, the tracking module 60 may process the video images to track a given portion of an object, which may be referred to as a landmark. The landmarks may be different parts of the objects, or objects on the objects, such as tracking tokens with recognizable patterns.
The tracking module 60 may also be provided with models of the objects to be tracked. For example, the tracking module 60 may track bones, implants IN and tools, and hence uses virtual bone models and tool models. The bone models may be acquired from pre-operative imaging (e.g., MRI, CT-scans), for example in 3D or in multiple 2D views, including with 2D X-ray to 3D bone model technologies. The virtual bone models may also include some image processing done preoperatively, for example to remove soft tissue or refine the surfaces that will be exposed and tracked. The virtual bone models may be of greater resolution at the parts of the bone that will be tracked during surgery, such as the knee articulation in knee surgery. The bone models may also carry additional orientation data, such as various axes (e.g., longitudinal axis, mechanical axis, etc). The bone models may therefore be patient specific. It is also considered to obtain bone models from a bone model library, with the data obtained from the video images used to match a generated 3D surface of the bone with a bone from the bone atlas. The virtual implant models and/or tool models may be provided by the implant/tool manufacturer, or may also be generated in any appropriate way so as to be a virtual 3D representation of the tool(s).
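By way of non-limiting illustration, matching a generated 3D surface of the bone against a bone model library may be scored with a nearest-neighbor surface distance. The sketch below assumes the acquired surface is already coarsely aligned with each library model (e.g., via landmarks) and uses an RMS nearest-vertex distance as the score, which is an illustrative choice; a full pipeline would typically refine the alignment, for instance with iterative closest point.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_to_bone_library(surface_points, library_models):
    """Pick the library bone model whose surface best explains the
    3D points reconstructed from the video images.

    surface_points: (N, 3) acquired surface points
    library_models: dict mapping model name -> (M, 3) model vertices
    Returns (best model name, RMS point-to-model distance).
    """
    best_name, best_rms = None, float("inf")
    for name, vertices in library_models.items():
        dists, _ = cKDTree(vertices).query(surface_points)  # nearest vertices
        rms = float(np.sqrt(np.mean(dists ** 2)))
        if rms < best_rms:
            best_name, best_rms = name, rms
    return best_name, best_rms
```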
In a variant, the tracking module 60 may generate 3D models using the video images. For example, if the tracking module 60 can have video images of a tool from 360 degrees, it may generate a 3D model that can be used for subsequent tracking. This intraoperative model may or may not be matched with a pre-existing or pre-operative model of the tool and/or of the implant.
Additional data may also be available, such as tool orientation (e.g., axis data and geometry). By having access to bone, implant and tool models, the tracking module 60 may recognize an object in the image processing and/or may obtain additional information, such as the axes related to bones or tools. The image processing by the tracking module 60 may be assisted by the presence of the models, as the tracking module 60 may match objects from the video images with the virtual models.
For example, two distinct implants IN inside bones B are shown in the drawings.
In a variant, the objects used as landmarks are parts of the bone, implant and of the tool that are visible from the head-mounted tracking device 20 and/or from the tracking device 30. Stated differently, as the operator has a direct and proximal view of the surgical site, e.g., the bone being drilled, the implant being inserted in the intramedullary canal or the plates being positioned along the bone, and the tool performing a drilling action, the footage from the POV of the head-mounted tracking device 20 is used by the tracking module 60 to navigate the tool T and implant IN relative to the bone B. Despite the variation in POV of the camera(s) 25, the tracking module 60 uses the known dimensions of a landmark to track the objects in a referential system. The landmark may be an extension EX secured to the implant IN, as described below.
Optionally, it is considered to provide specific detectable landmarks on the implant IN, tool(s) or bones to ensure the detectable landmarks will be properly imaged and detected by the tracking module 60. For example, tokens with given patterns, such as QR-code like labels, AprilTags, etc., may be provided on the objects, such as on the bones or on the tools, for instance on the brackets 204 or extensions EX described above. In an embodiment, the tokens have an image that may be preprogrammed, or whose dimensions are known or accessible by the tracking module 60. Accordingly, by seeing such tokens in the video images, the tracking module 60 may locate the objects in a spatial coordinate system. Again, the POV for the video images may move, with the objects serving as a referential for the tracking. The referential system may be that of the robot arm 40 (if present), with the camera tracking providing positional information for the referential system.
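By way of non-limiting illustration, the pose of such a token may be estimated from a single camera image with a perspective-n-point solution. The sketch below uses OpenCV's solvePnP and assumes a square token of known side length whose four corners have been detected and ordered consistently with the model corners; the token geometry and names are illustrative.

```python
import numpy as np
import cv2

def token_pose(corners_px, token_size, camera_matrix, dist_coeffs):
    """Estimate the pose of a square tracking token of known side length.

    corners_px:    (4, 2) detected corner pixels, ordered as the model corners
    token_size:    physical side length of the token (e.g., in mm)
    camera_matrix: 3x3 intrinsic matrix from calibration
    dist_coeffs:   lens distortion coefficients from calibration
    Returns the rotation vector and translation vector of the token
    in the camera frame.
    """
    h = token_size / 2.0
    model_corners = np.array([[-h,  h, 0.0], [ h,  h, 0.0],
                              [ h, -h, 0.0], [-h, -h, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(model_corners,
                                  corners_px.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("token pose estimation failed")
    return rvec, tvec
```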
In matching the recognizable pattern and/or the 3D geometry to the implant models, bone models and tool models with the video images, the tracking module 60 may reduce its computation using different strategies. The bone model(s) B may have higher resolution for the parts of the bone that will be altered during surgery. The remainder of the bone may be limited to information on landmarks, such as axis orientation, center of rotation, midpoints, etc. A similar approach may be taken for the tool models C, with the focus and higher detail resolution being on parts of the tools that come into contact with the bone. The bone model(s) B may also include implants that are already in the bone, for example from a past procedure. The CAS system 10 may thus have the capacity to guide the procedure in determining any potential interference between the implant IN and an existing implant, to avoid damaging existing implants, for example in the case of the plates IN described above.
Moreover, considering that the line of sight between the camera(s) 25 and the object may be interrupted, the video feed or like tracking from the tracking device 30 may complement that from the camera(s) 25 and/or supersede the video feed from the camera(s) 25. In another embodiment, the tracking device 30 is the primary tracking camera, using any of the technologies described above for the head-mounted tracking device 20. Therefore, the tracking module 60 continuously updates the position and/or orientation of the patient bones and tools in the coordinate system, using the video feed from the camera(s) 25 and/or tracking device 30 to track objects in position and orientation.
In an embodiment with structured light projection, the tracking module 60 receives the backscatter images from the camera(s) 25 or from the tracking device 90, as a result of the structured light projection from the projector 24. In another embodiment, the tracking module 60 receives the video images from the camera 25 in a depth camera configuration, and may take steps to calibrate the camera(s) 25 or tracking device 90 for ranging to be done from the acquired images. An initial calibration may be done using a calibration pattern, such as that in tokens. The calibration pattern is placed in the line of sight of the camera(s) 25 or tracking device 90 such that it is imaged. The calibration pattern is any appropriate shape and configuration, but may be a planar recognizable pattern with high contrast, or given landmarks of a bone, or the geometry of a tool. Other items can be used for the calibration, including the body of a tool T, whose geometry may be programmed into or may be accessed by the tracking module 60. The tracking module 60 stores a virtual version of the calibration pattern, including precise geometrical data of the calibration pattern. The tracking module 60 therefore performs a correspondence between imaged and virtual calibration patterns. The correspondence may entail calculating the mapping function between landmarks on the planar imaged calibration pattern and the virtual calibration pattern. This may include a projection of the calibration patterns on one another to determine the distortion characteristics of the images of the camera(s) 25 or tracking device 90, until the rectification values are determined by the tracking module 60 to correct the images of the camera. This calibration may be repeated at given points through the procedure, for instance based on the camera updating requirements. It may require that the camera be used in conjunction with a calibration reflective surface whose position and orientation relative to the camera is known. The calibration may be automatically performed by the CAS system 10.
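As a non-limiting illustration of the correspondence between imaged and virtual calibration patterns, the mapping function for a planar pattern may be modeled as a homography, with the reprojection residual indicating whether the rectification values are adequate. The sketch below uses OpenCV and assumes 2D landmark correspondences have already been extracted; all names are illustrative.

```python
import numpy as np
import cv2

def calibration_mapping(model_pts, imaged_pts):
    """Map landmarks of the virtual (planar) calibration pattern onto the
    imaged pattern and report the residual of the mapping.

    model_pts, imaged_pts: (N, 2) corresponding 2D points, N >= 4
    Returns (3x3 homography, mean reprojection residual in pixels).
    """
    model_pts = model_pts.astype(np.float64)
    imaged_pts = imaged_pts.astype(np.float64)
    H, _ = cv2.findHomography(model_pts, imaged_pts, cv2.RANSAC)
    projected = cv2.perspectiveTransform(model_pts.reshape(-1, 1, 2), H)
    residual = float(np.linalg.norm(projected.reshape(-1, 2) - imaged_pts,
                                    axis=1).mean())
    return H, residual
```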
The tracking module 60 may therefore perform 3D geometry image processing, using the known patterns of structured light, or calibrated camera images, video feed, etc., along with the known shape of the virtual bone model(s) and/or tool model(s), optionally with QR tokens, and generate 3D images from the tracking, using for example the pre-operative models. Moreover, a generated 3D geometry may be located in the X, Y, Z coordinate system using the tracking of landmarks on the bones or tools to set the coordinate system on the bones. Therefore, the tracking module 60 may generate an image or 3D geometry of the landmarks on the object(s) being illuminated. Then, using the virtual models of the bone(s) and tool(s), respectively, the tracking module 60 can match the image or 3D geometry with the virtual models of the landmarks. Consequently, the tracking module 60 determines a spatial relationship between the landmarks being imaged and the preoperative 3D models, to provide a dynamic (e.g., real time or quasi real time) intraoperative tracking of the bones relative to the tools, in spite of tool portions and bone surfaces not being visible from the POV of the operator.
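By way of non-limiting illustration, determining the spatial relationship between imaged landmarks and the preoperative 3D models may be posed as a least-squares rigid registration. The following sketch implements the standard Kabsch/SVD solution over corresponding 3D points; the correspondence step is assumed to have been performed beforehand.

```python
import numpy as np

def rigid_register(model_pts, imaged_pts):
    """Least-squares rigid transform mapping virtual-model landmarks onto
    their imaged 3D counterparts (Kabsch algorithm).

    model_pts, imaged_pts: (N, 3) corresponding points, N >= 3, not collinear
    Returns R (3x3) and t (3,) such that imaged ~= R @ model + t.
    """
    mc, ic = model_pts.mean(axis=0), imaged_pts.mean(axis=0)
    H = (model_pts - mc).T @ (imaged_pts - ic)  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = ic - R @ mc
    return R, t
```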
In an embodiment, the position and orientation of the surgical tool calculated by the tracking module 60 may be redundant with the tracking data provided by the robot driver 80 and robot arm sensors, if the robot arm 40 is used. However, the redundancy may assist in ensuring the accuracy of the tracking of the surgical tool. For example, the redundancy is used as a safeguard against disruption of the line of sight between the head-mounted tracking device 20 and the surgical site, for instance if the operator looks away. The redundancy may also allow a reduction of the frequency of image processing for the surgical tool. Also, the tracking of the tool using the tracking module 60 may be used to detect any discrepancy between a calculated position and orientation of the surgical tool T through the sensors on the robot arm 40, and the actual position and orientation of the surgical tool. For example, an improper mount of the tool T into the chuck of the robot arm 40 could be detected from the output of the tracking module 60, when verified with the position and orientation from the robot driver 80 (e.g., obtained from the encoders on the robot arm 40). The operator may be prompted to verify the mount, via the interface I/F or the head-mounted tracking device 20.
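By way of non-limiting illustration, the discrepancy check between the optically tracked tool pose and the pose derived from the robot arm encoders may compare both position and orientation against tolerances. The tolerances below are illustrative placeholders, not values taken from the disclosure.

```python
import numpy as np

def pose_discrepancy(T_optical, T_encoders, pos_tol_mm=1.0, ang_tol_deg=1.0):
    """Compare the tool pose from optical tracking with the pose computed
    from the robot arm encoders, e.g., to detect an improper tool mount.

    T_optical, T_encoders: 4x4 homogeneous tool poses in a common frame
    Returns (position error in mm, angular error in deg, within tolerance).
    """
    pos_err = np.linalg.norm(T_optical[:3, 3] - T_encoders[:3, 3])
    R_rel = T_optical[:3, :3].T @ T_encoders[:3, :3]
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cos_a))
    return pos_err, ang_err, (pos_err <= pos_tol_mm and ang_err <= ang_tol_deg)
```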
The camera 25 and/or tracking device 30 may continuously capture images, for the tracking module 60 to perform a continuous tracking of the objects. The terms video feed, image feed, video, may be used to describe the capture of images by the head-mounted tracking device 20, and optionally by the tracking device 30, and entail a capture of frames over a time period, in contrast to a single image capture. The frequency of capture may vary according to different factors. For example, there may be different phases during the surgical workflow, some in which the tracking requires a more dynamic update (i.e., higher frequency), and some in which tracking updates are less important. Another factor that may affect the image capture frequency is the fixed relation of the objects. For example, once the tracking module 60 identifies a landmark and tracks a bone from the images, the capture frequency of the camera 25 and/or tracking device 30 may be reduced if the bone is fixed, if no maneuvers are performed, or if the bone alterations have not yet begun. Also, when both a tool and a bone are tracked, the capture frequency may be reduced when the tool and the bone are spaced from one another by a given distance, and increased as the proximity between the tool and the bone increases. The tracking module 60 may drive the camera 25 and/or tracking device 90 in order to control the frequency. For example, the tracking module 60 may adapt the frequency using the surgical planning, e.g., anticipating upcoming steps in the workflow, etc. The tracking module 60 may consequently toggle between a lower frequency capture mode and a higher frequency capture mode, for example. The lower frequency capture mode may be in instances in which the tool is at a given distance from the bone, and is not driven to alter the bone. The lower frequency capture mode may also be operated when the objects are in a fixed relation relative to one another. Other modes are contemplated. The tracking module 60 may output the data directly on the interfaces I/F.
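By way of non-limiting illustration, the toggling between capture modes may be driven by simple workflow conditions, as in the sketch below; the distance threshold and frame rates are illustrative placeholders.

```python
def select_capture_frequency(tool_to_bone_mm, bone_fixed, altering_bone,
                             near_mm=50.0, low_hz=5.0, high_hz=30.0):
    """Choose between a lower and a higher frequency capture mode based on
    tool-bone proximity and the current state of the surgical workflow."""
    if altering_bone or tool_to_bone_mm <= near_mm:
        return high_hz  # dynamic phase: bone being altered or tool nearby
    if bone_fixed:
        return low_hz   # objects in fixed relation: relaxed updates suffice
    return low_hz
```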
The augmented reality module 70 may be present in the CAS controller 50 and may produce an augmented reality (AR) output to the operator, for instance for display in the head-mounted tracking device 20. The augmented reality module 70 may also produce other types of outputs, including a virtual reality output. One example is a hologram view of the plates IN relative to the bone, along with a trajectory FT, as a user is manipulating a tool (via the robot or in free hand mode) on the bone during the procedure. The augmented reality module 70 may provide its output to displays other than the head-mounted tracking device 20. For example, the augmented reality module 70 may produce an output for display on monitors of the CAS system 10.
In a variant, the wearer may also interface with the CAS controller 50 using the AR output from the head-mounted tracking device 20. For example, when looking away from the surgical site, or at given instances during the surgical workflow, the head-mounted tracking device 20 may display virtual touch zones, and may track with its camera 25 the wearer's arm reaching any such touch zone. The head-mounted tracking device 20 and CAS controller 50 may trigger an action based on the touch zone activation.
Accordingly, while the data provided by the augmented reality module 70 could be displayed on separate monitors or like interfaces, the display of the images in augmented reality may minimize the movements to be made by the wearer, increase the rapidity of access to information and/or provide information that may not otherwise be available.
The present disclosure refers to the system 10 as performing continuous tracking. This means that the tracking may be performed continuously during discrete time periods of a surgical procedure. Continuous tracking may entail pauses, for example when the bone is not being altered. However, when tracking is required, the system 10 may provide a continuous tracking output, with any disruption in the tracking output triggering an alarm or message to an operator. The frequency of tracking may vary.
The CAS system 10 may generally be described as a system for tracking a surgical implant IN relative to a bone B in computer-assisted surgery. The system includes a processing unit and a non-transitory computer-readable memory communicatively coupled to the processing unit, which may, for instance, form part of the CAS controller 50. The non-transitory computer-readable memory comprises computer-readable program instructions executable by the processing unit for steps such as: obtaining a virtual model of the surgical implant IN, the virtual model including at least one landmark (for instance, a locking hole 106) of the surgical implant IN, the at least one landmark being configured to be inside the bone B; tracking the surgical implant IN, via an optical non-radiographic tracking device (for instance, tracking device 30), as the surgical implant IN is inserted into the bone B; calculating a location of the at least one landmark relative to the bone B using the virtual model of the surgical implant IN and tracking data from the tracking of the surgical implant IN; and outputting the location of the at least one landmark relative to the bone.
In a variant, the method and system described herein operate in a traumatology setting, i.e., with limited pre-operative planning. In such a variant, the geometry of the implant IN may be known, including the position and orientation of the landmark location(s). Hence, by simply tracking the implant IN non-radiographically, the CAS system 10 may know its position and orientation in space, even without a model of the bone B. Therefore, by knowing the position and orientation in space of the implant IN (e.g., via the extension EX), the CAS system 10 may output the trajectory and entry point of a drilling path, aligned with screw holes in the implant IN.
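By way of non-limiting illustration, outputting the trajectory and entry point of such a drilling path may only require the tracked implant pose and the known implant geometry, as in the sketch below; the frame conventions and names are illustrative assumptions.

```python
import numpy as np

def drill_trajectory(T_tracker_implant, hole_center_implant, hole_axis_implant):
    """Compute a drilling trajectory aligned with a screw hole of the
    implant, without a model of the bone.

    T_tracker_implant:   4x4 tracked pose of the implant (e.g., via its
                         extension EX) in the tracking device frame
    hole_center_implant: (3,) hole center in implant coordinates
    hole_axis_implant:   (3,) hole axis in implant coordinates
    Returns the entry point and unit drilling direction in the tracker frame.
    """
    R, t = T_tracker_implant[:3, :3], T_tracker_implant[:3, 3]
    entry_point = R @ hole_center_implant + t
    axis = R @ hole_axis_implant
    return entry_point, axis / np.linalg.norm(axis)
```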
The CAS system 10 may be used in numerous other orthopedic procedures for different types of implants. Other examples include pelvic fracture plates, collarbone fracture plates, periprosthetic plates, among others.
The present application claims the priority of U.S. Patent Application No. 63/515,262, filed on Jul. 24, 2023, and incorporated herein by reference in its entirety.