This application claims priority to European Patent Application No. EP 23306386.6, filed Aug. 17, 2023, the entirety of which is incorporated herein by reference.
The present invention relates to assistance for orthopedic surgery.
More precisely, the invention concerns a system for intra-operatively guiding, at a current temporal instant, a gesture intended to be performed with a drilling surgical tool on a target bone of a subject and a related computer-implemented method.
Nowadays, virtual visualization software tools are used by surgeons to define preoperative planning of operations. These tools use three-dimensional models of bones obtained from patient preoperative radiologic images. The surgeon can define a surgical plan specific to the patient and thus, improve surgical outcomes.
Digital assistance could also be useful and advantageous during surgical procedures such as orthopedic surgery. Indeed, when a surgeon has to operate on a part of an individual's body, he or she does not have complete visibility of the inside of that part of the body.
A particular example of orthopedic surgery is arthroplasty. Arthroplasty is a surgical procedure to restore the function of a joint. A joint can be restored by resurfacing the bones with metallic implants. After the incision and the bone exposure phase, the surgeon uses drilling and cutting tools to prepare the bone surface before the implantation of the prosthesis. To ensure proper positioning of the prosthesis, the drilling and cutting gestures must be precise. In standard practice, this type of gesture is guided through the visual determination of anatomical landmarks (surgical navigation) and/or using intramedullary rods inserted in the cavity of the bone.
However, this practice is prone to user error.
The invention aims at providing a solution that improves the precision of surgical gestures during interventions.
This invention thus relates to a device for computer guided orthopedic surgery of a target bone of a patient based on actions planned in a virtual environment with respect to a virtual coordinate system RP, so as to guide a physical action of a user, to be performed with a drilling surgical tool; said device comprising:
Advantageously, the present invention provides the user (i.e., a surgeon) with insightful information for guiding his/her physical actions to be performed during the surgery according to a predefined surgical plan (i.e., comprising a plurality of actions planned in the virtual environment). This guidance is of high importance in the present case, where the actions are to be performed using a hand-held drilling surgical tool (i.e., the drilling surgical tool is not connected to any support such as a cobot/robot).
According to other advantageous aspects of the invention, the device comprises one or more of the features described in the following embodiments, taken alone or in any possible combination.
According to one embodiment, the localization device comprises a depth camera and the current localization information comprises at least one 3D image.
According to one embodiment, the rigid transformation LTO is defined by design or by calibration.
According to one embodiment, the portion of interest of the target bone is a glenoid.
According to one embodiment, the target bone is a vertebra or a bone of the shoulder, such as the scapula.
According to one embodiment, the drilling surgical tool is a handheld surgical power tool used to screw a pin into said portion of interest, said pin having a tip.
According to one embodiment, said guiding data comprises a distance between the tip of the pin and said planned bone entry point and an angle between a current pin direction and the planned drilling axis Xp.
According to one embodiment, the guiding data comprises information indicating whether the current drilling axis Xc approaches or moves away from the planned drilling axis Xp.
According to one embodiment, the guiding data comprises a vector representing a translation to be performed to superimpose the current bone entry point on the planned bone entry point.
According to one embodiment, the output is provided via a screen embedded in the drilling surgical tool.
According to one embodiment, the output is provided via a virtual reality headset or augmented reality device.
The present invention also relates to a computer-implemented method for computer guided orthopedic surgery of a target bone of a patient based on actions planned in a virtual environment with respect to a virtual coordinate system RP, so as to guide a physical action of a user, to be performed with a drilling surgical tool; said method comprising:
The present invention also relates to a device for computer guided surgery of a target object of a patient based on actions planned in a virtual environment with respect to a virtual coordinate system, so as to guide a physical action of a user, to be performed with a tool with use of a localization device rigidly attached to the tool; said device comprising:
In addition, the disclosure relates to a computer program comprising software code adapted to perform a method for computer guided orthopedic surgery compliant with any of the above execution modes when the program is executed by a processor.
The present disclosure further pertains to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method for computer guided orthopedic surgery, compliant with the present disclosure.
In the present invention, the following terms have the following meanings:
The terms “adapted” and “configured” are used in the present disclosure as broadly encompassing initial configuration, later adaptation or complementation of the present device, or any combination thereof, whether effected through material or software means (including firmware).
The term “processor” should not be construed to be restricted to hardware capable of executing software, and refers in a general way to a processing device, which can for example include a computer, a microprocessor, an integrated circuit, or a programmable logic device (PLD). The processor may also encompass one or more Graphics Processing Units (GPU), whether exploited for computer graphics and image processing or other functions. Additionally, the instructions and/or data enabling to perform associated and/or resulting functionalities may be stored on any processor-readable medium such as, e.g., an integrated circuit, a hard disk, a CD (Compact Disc), an optical disc such as a DVD (Digital Versatile Disc), a RAM (Random-Access Memory) or a ROM (Read-Only Memory). Instructions may be notably stored in hardware, software, firmware or in any combination thereof.
“Rigid transform” (also known as isometry) refers to a geometrical transform that does not affect the size and shape of an object. A rigid transform can be a combination of translations and rotations.
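By way of non-limiting illustration, a rigid transform is conveniently represented in software as a 4x4 homogeneous matrix combining a 3x3 rotation and a translation vector. The following minimal Python/numpy sketch (purely illustrative, not part of the disclosure) builds such a matrix and checks the distance-preserving property that makes the transform an isometry:

```python
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous matrix from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply_to_point(T: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply a rigid transform to a 3D point."""
    return T[:3, :3] @ p + T[:3, 3]

# An isometry preserves distances: ||T(a) - T(b)|| == ||a - b|| for any points a, b.
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T = rigid_transform(Rz, np.array([10.0, 0.0, 5.0]))
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.isclose(np.linalg.norm(apply_to_point(T, a) - apply_to_point(T, b)),
                  np.linalg.norm(a - b))
```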
This invention relates to a system 1 and a device 3 for intra-operatively guiding, at a current temporal instant, a gesture intended to be performed with a drilling surgical tool D on a target bone B of a subject.
It is assumed that a patient is to undergo an orthopedic surgical intervention on a target joint comprising a target bone B. The intervention is intended to be performed on a portion of interest P of the target bone B. Prior to the intervention, a surgical plan has been defined by a professional health practitioner responsible for the intervention. For instance, the surgical plan was obtained by means of virtual visualization software tools.
Advantageously, a three-dimensional model of the portion of interest P was obtained from patient preoperative radiologic images, such as X-rays, CT-scan or MRI images. The three-dimensional model is referred to as bone 3D model 31. For instance, in the case of total knee arthroplasty, the portion of interest P is the femur or tibia of the patient. In another case of total shoulder arthroplasty, the portion of interest P is a glenoid, as shown in the example of
The surgical plan comprises surgical planning information 32. Advantageously, the planning information 32 comprises the bone 3D model 31 and at least one planned action to be performed during the surgery. This planned action comprises information concerning at least one trajectory 33 of the drilling surgical tool D with respect to the bone 3D model 31. The trajectory 33 corresponds to a gesture to be performed during the intervention by a practitioner on the target bone.
Notably, the drilling surgical tool D is a handheld surgical power tool used to screw a pin into a portion of interest of the target bone B. In this case, each planned action of the drilling surgical tool D is defined by at least one planned spatial position of the drilling surgical tool, a planned drilling axis Xp and a planned bone entry point. Each planned action may also be associated with the planned depth of the hole to be drilled. The position of the pin locked in the power tool is known by design and reproducible. The planned drilling axis Xp is defined as the orientation that the drilling pin of the drilling surgical tool D should have to perform the planned hole in the target bone. The planned bone entry point is a point selected on the surface of the bone 3D model where the surgeon should ideally bring the tip of the drilling pin into contact before starting to drill the bone.
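Purely as an illustrative sketch (the container and field names below are hypothetical and not taken from the disclosure), a planned action could be represented in software as follows, expressed in the virtual coordinate system RP of the bone 3D model 31:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PlannedAction:
    """Hypothetical container for one planned drilling action, in frame RP."""
    entry_point: np.ndarray             # planned bone entry point on the model surface (mm)
    drilling_axis: np.ndarray           # planned drilling axis Xp (direction vector)
    planned_depth: float | None = None  # optional planned depth of the hole (mm)

    def __post_init__(self):
        # Normalize the axis so that downstream angle computations are well defined.
        self.drilling_axis = self.drilling_axis / np.linalg.norm(self.drilling_axis)
```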
The system 1 comprises a localization device 2, a programmable device 3 (i.e., a processor), and an output interface 4.
Advantageously, the localization device 2 comprises a 3D imaging sensor S3D present in the surgical theater and positioned in such a way as to encompass in its field of view at least the portion of interest P. The 3D imaging sensor is configured to acquire 3D images 22. The 3D imaging sensor S3D refers to a sensor for acquiring topological data of a real scene in 3 dimensions. These topological data are recorded in the form of a point cloud and/or a depth map. Hereinafter, the term “data points” will be used to refer to both point clouds and depth maps, as the person skilled in the art knows how to perform registration on either point clouds or depth maps. Therefore, at least one portion Pp of the data points of one 3D image 22 represents at least the portion of interest P. The other data points are generally associated with the structures surrounding the portion of interest P comprised in the field of view of the 3D imaging sensor, such as soft tissues surrounding the target bone, various surgical tools, part of a cobot and the like.
Multiple acquisition techniques may be utilized to obtain these topological data, for example techniques based on the measurement of wave propagation time, such as ultrasound or light (LIDAR, Time-of-Flight), or a stereoscopic camera or sensor, which is a type of camera with two or more lenses and a separate image sensor or film frame for each lens. This allows the camera to simulate human binocular vision and therefore gives it the ability to capture three-dimensional images. Other techniques may be based on light deformation, such as structured-light 3D scanners, which project a pattern of light on an object and look at the deformation of the pattern on the object. The advantage of structured-light 3D scanners is speed and precision: instead of scanning one point at a time, structured-light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Another class of techniques is based on laser scanning, i.e., sampling or scanning a surface using laser technology, such as a hand-held laser or time-of-flight 3D laser scanner. More generally, any technique known to the skilled artisan providing topological data of a real scene in 3 dimensions may be used for the implementation of the present invention.
The 3D image(s) 22 may be associated with corresponding grayscale image(s), or be colored depth (RGB-D) image(s), among others (i.e., a depth image associated with an RGB image of the scene captured by the imaging sensor). The 3D image(s) 22 may include numerical data, such as digital data. Those data may include individual image data in a compressed form, as well known to a person skilled in image compression, e.g., in compliance with the JPEG (for Joint Photographic Experts Group), JPEG 2000 or HEIF (for High Efficiency Image File Format) standard.
As the 3D image(s) 22 is (are) acquired by the 3D imaging sensor S3D, the data points of the 3D images 22 are associated with the coordinate system of the 3D imaging sensor S3D. The coordinate system of the 3D imaging sensor S3D will be referred to as the localization coordinate system RL in what follows.
Advantageously, the programmable device 3 is an apparatus, or a physical part of an apparatus, designed, configured and/or adapted for performing the mentioned functions and producing the mentioned effects or results. In alternative implementations, the programmable device 3 is embodied as a set of apparatus or physical parts of apparatus, whether grouped in a same machine or in different, possibly remote, machines. The programmable device 3 may e.g. have functions distributed over a cloud infrastructure and be available to users as a cloud-based service, or have remote functions accessible through an API.
In what follows, the modules are to be understood as functional entities rather than material, physically distinct, components. They can consequently be embodied either as grouped together in a same tangible and concrete component, or distributed into several such components. Also, each of these modules may itself be shared between at least two physical components. In addition, the modules may be implemented in hardware, software, firmware, or any mixed form thereof. They are preferably embodied within at least one processor of the programmable device 3.
Though the presently described programmable device 3 is versatile and provided with several functions that can be carried out alternatively or in any cumulative way, other implementations within the scope of the present disclosure include devices having only parts of the present functionalities.
As illustrated on
For instance, when the bone 3D model 31 has been obtained from patient preoperative radiologic images, the preoperative radiologic images can also be received by the module 11.
The module 11 is configured to further receive, during the intervention and preferably in real time, the at least one 3D image 22 acquired by the 3D imaging sensor S3D, for example via a communication network allowing communication with the 3D imaging sensor S3D.
The at least one 3D image 22 comprises the at least one portion Pp of the data points providing information representative of a current spatial position and/or orientation of the portion of interest P at a given time t. This information will be referred to as current localization information I in what follows.
The module 11 is also configured to receive a rigid transformation LTO between a localization coordinate system RL of the localization device 2 and a coordinate system of the drilling surgical tool RO. This transformation may be known by design or by calibration, and has therefore to be known before the surgery. When the rigid transformation LTO is known by calibration, it is usually computed at the end of the manufacturing of the drilling surgical tool D (which comprises a rigidly fixed localization device) that will be used during the surgery to perform the planned actions, by a protocol called hand-eye calibration. Alternatively, there are several ways to obtain this transform by calibration. For example, a calibration object whose shape and dimensions are accurately known beforehand thanks to a metrology process may be used. The system can be rigidly fixed to the calibration object in a known, accurate and reproducible position. In that position, the localization device 2 can retrieve at least one 3D image of the surface of the calibration object. The LTO transformation is then computed by registration between the known specific shape of the calibration object and the 3D image given by the localization device 2.
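Under the convention, assumed only for this sketch, that T_A_B denotes the pose of frame B expressed in frame A (a 4x4 homogeneous matrix), the calibration-object procedure described above reduces to a single matrix product: the registration yields the pose of the calibration object in RL, the rigid fixture gives its pose in RO by design, and LTO follows:

```python
import numpy as np

def compute_LTO(T_L_calib: np.ndarray, T_O_calib: np.ndarray) -> np.ndarray:
    """Sketch of the calibration-object approach (assumed conventions).

    T_L_calib: pose of the calibration object in the localization frame RL,
               obtained by registering its known shape to the acquired 3D image.
    T_O_calib: pose of the calibration object in the tool frame RO, known by
               design of the rigid fixture.
    Returns T_L_O, i.e. the rigid transformation LTO relating RO to RL.
    """
    return T_L_calib @ np.linalg.inv(T_O_calib)
```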
The (programmable) device 3 may comprise a segmentation module configured to segment the 3D image(s) 22 in order to isolate the at least one portion Pp of the data points corresponding to the at least one portion P of the target bone B. Advantageously, this segmentation step allows the removal of the data points not representing the target bone B (e.g., data points corresponding to soft tissues), which improves the accuracy of the registration steps performed by the other modules of the device 3. This segmentation module may operate on the grayscale or color images that are associated with the 3D image acquired by the 3D imaging sensor S3D (e.g., an RGB-D sensor). A pipeline implemented by this segmentation module may comprise the steps of: segmenting the at least one portion P of the target bone B in the color image (i.e., the portion of the bone that is visible in the image); then using the segmented area in the color image to segment the 3D image 22, knowing the sensor calibration between the color sensor and the 3D sensor.
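A minimal sketch of the second step of that pipeline is given below, assuming a pinhole depth sensor whose depth map is already aligned pixel-wise with the color image (the 2D bone segmentation itself, e.g. by a trained segmentation model, is outside the scope of this sketch):

```python
import numpy as np

def segment_bone_points(depth: np.ndarray, mask: np.ndarray,
                        fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Keep only the 3D data points whose pixels fall inside the bone mask.

    depth: HxW depth map (mm), assumed aligned with the color image
    mask:  HxW boolean bone mask from the 2D segmentation of the color image
    fx, fy, cx, cy: pinhole intrinsics of the depth sensor (assumed known)
    """
    v, u = np.nonzero(mask & (depth > 0))  # pixel coordinates inside the mask
    z = depth[v, u]
    # Back-project the masked pixels to 3D points in the sensor frame RL.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)    # the portion Pp of the data points
```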
The programmable device 3 may further comprise a module 12 configured to calculate, for each 3D image 22 obtained at a current temporal instant t, a transformation CTL between the localization coordinate system RL and the target coordinate system RC by registration of the bone 3D model 31 with the current localization information I. At this calculation step, the virtual coordinate system RP coincides with the target coordinate system RC. Module 12 is also configured to apply this transformation CTL to the surgical planning information 32 so that the position and orientation of the at least one portion P of the target bone B are known in the localization coordinate system RL, and that each planned action, associated with at least one planned spatial position and/or orientation of said drilling surgical tool D, is also known in the localization coordinate system RL. In this way, the spatial position(s) and/or orientation(s) that the drilling surgical tool D should take in order to perform the planned actions are known in the localization coordinate system RL.
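The sketch below illustrates the effect of applying the registration result to the planning information, reusing the hypothetical PlannedAction container above. It assumes that the registration (e.g., an ICP between the bone 3D model and the segmented data points) yields the pose of the bone frame RC in the sensor frame RL; note that a point takes both the rotation and the translation, whereas a direction such as the drilling axis takes the rotation only:

```python
import numpy as np

def express_plan_in_RL(T_L_C: np.ndarray, action) -> tuple:
    """Express a planned action (defined in RP, coinciding with RC) in frame RL.

    T_L_C:  assumed registration output, mapping bone-frame coordinates into RL.
    action: a PlannedAction as sketched above (hypothetical container).
    """
    R, t = T_L_C[:3, :3], T_L_C[:3, 3]
    entry_L = R @ action.entry_point + t  # points: rotation AND translation
    axis_L = R @ action.drilling_axis     # directions: rotation only
    return entry_L, axis_L
```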
The programmable device 3 also comprises a module 13 configured to apply the rigid transformation LTO between the localization coordinate system RL and the coordinate system of the drilling surgical tool RO, so that the position and spatial orientation of the drilling surgical tool D are also known in the localization coordinate system RL. Given the position and orientation of the portion of interest P in the localization coordinate system RL, the current drilling tool spatial position and/or current drilling tool orientation 34 with respect to the portion of interest P can be deduced.
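With the same assumed conventions as above, this deduction amounts to chaining the two transforms, giving the current tool pose directly in the bone frame:

```python
import numpy as np

def tool_pose_in_bone_frame(T_L_C: np.ndarray, T_L_O: np.ndarray) -> np.ndarray:
    """Current pose of the drilling surgical tool D relative to the target bone B.

    T_L_C: bone pose in RL (registration result of module 12, assumed convention)
    T_L_O: tool pose in RL (the rigid transformation LTO applied by module 13)
    """
    return np.linalg.inv(T_L_C) @ T_L_O   # T_C_O: tool expressed in bone frame RC
```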
The programmable device 3 also comprises a module 14 configured to perform a step of computing guiding data DG. The guiding data DG are representative of a comparison between the planning information 32 and the current tool spatial position and/or current tool orientation 34. As explained above, since the drilling surgical tool D is a handheld surgical power tool used to screw a pin into the portion of interest P, the planning information 32 may comprise at least a planned drilling axis Xp and a planned bone entry point.
The module 14 is configured to first use the calculated position and spatial orientation of the drilling surgical tool D and the position and orientation of the portion of the target bone B in the same localization coordinate system RL, in order to calculate the current spatial position and current drilling axis Xc of said drilling surgical tool D and/or the current bone entry point with respect to the target bone B. Then, module 14 is configured to compare, in the localization coordinate system RL, the planned drilling axis Xp with the current spatial position and the current drilling axis Xc, and/or the planned bone entry point with the current bone entry point in the target bone B.
Therefore, the guiding data DG may comprise a distance between the tip of the drilling pin and the planned bone entry point and/or an angle between the current drilling axis Xc (i.e., current pin direction) and the planned drilling axis Xp. In other words, the guiding data DG comprises the deviation in distance and/or orientation of the current positioning of the drilling surgical tool D compared to the planned positioning and/or orientation defined in the surgical plan.
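Expressed with all quantities in a common frame (here RL), this comparison is elementary vector geometry; the following sketch, under the same assumptions as above, also returns the translation vector mentioned in the embodiment where the guiding data comprise a correction to superimpose the entry points:

```python
import numpy as np

def guiding_data(tip, axis_c, entry_planned, axis_p):
    """Deviation of the current tool pose from the plan (all inputs in one frame).

    tip:           current position of the pin tip
    axis_c:        current drilling axis Xc (unit vector)
    entry_planned: planned bone entry point
    axis_p:        planned drilling axis Xp (unit vector)
    """
    distance = np.linalg.norm(tip - entry_planned)          # e.g. in mm
    cos_angle = np.clip(np.dot(axis_c, axis_p), -1.0, 1.0)  # guard against rounding
    angle = np.degrees(np.arccos(cos_angle))                # angular deviation
    correction = entry_planned - tip                        # translation to superimpose
    return distance, angle, correction
```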
As explained above, the planning information 32 may optionally comprise information on the planned depth of the hole to be drilled. In this case, the module 14 is also configured to provide guiding data DG comprising information on the planned depth of the hole to the user via the output interface 4. The module 14 may as well provide online information during the drilling about the current depth of the hole and the remaining distance still to be drilled for the planned action currently being performed.
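One possible way to compute that remaining distance, sketched under the assumption that the planned drilling axis Xp is a unit vector oriented into the bone, is to project the advance of the pin tip past the planned entry point onto that axis:

```python
import numpy as np

def remaining_depth(tip, entry_planned, axis_p, planned_depth):
    """Distance still to be drilled for the current planned action (sketch).

    axis_p is assumed to be a unit vector pointing into the bone, so the dot
    product below is the signed current depth of the pin tip along the axis.
    """
    current_depth = np.dot(tip - entry_planned, axis_p)
    return planned_depth - current_depth
```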
The programmable device 3 further comprises an exit module 18 to transmit the guiding data DG to the output interface 4.
The output interface 4 is configured to output the guiding data DG for visualization by a user, such as the health practitioner carrying out the intervention. In this manner, a direct and in-situ feedback about the surgical gesture is provided to the user.
In some embodiments, the output interface 4 is a virtual reality headset or augmented reality device.
In some other embodiments, the output interface 4 is a screen embedded in the drilling surgical tool D.
The programmable device 3 may be interacting with a user interface 19, via which information can be entered and retrieved by a user. The user interface 19 includes any means appropriate for entering or retrieving data, information or instructions, notably visual, tactile and/or audio capacities that can encompass any or several of the following means as well known by a person skilled in the art: a screen, a keyboard, a trackball, a touchpad, a touchscreen, a loudspeaker, a voice recognition system.
In some embodiments, the output interface 4 coincides with the user interface 19.
In its automatic actions, the programmable device 3 may for example execute the following process illustrated on
A particular apparatus 9, visible on
The apparatus 9 comprises a memory 91 to store program instructions loadable into a circuit and adapted to cause a circuit 92 to carry out steps of the method of
The circuit 92 may be for instance:
The apparatus 9 may also comprise an input interface 93 for the reception of the planning information 32 and the 3D images 22, and an output interface 94. The input interface 93 and the output interface 94 may together correspond to the user interface 19 of
To ease the interaction with the computer, a screen 95 and a keyboard 96 may be provided and connected to the computer circuit 92.
A person skilled in the art will readily appreciate that various parameters disclosed in the description may be modified and that various embodiments disclosed may be combined without departing from the scope of the invention. Of course, the present invention is not limited to the embodiments described above as examples. It can be extended to other variants.
The present invention is further illustrated by the following example.
In this practical example, the system 1 is used for Total Shoulder Arthroplasty, and particularly for the preparation of the glenoid, the end of the scapula that meets the head of the humerus to form the joint. Prior to the surgery, the patient undergoes a preoperative shoulder CT scan used to create a three-dimensional virtual model of the bones, i.e., the bone 3D model 31. This information is also used to create a surgical plan that defines the gestures to perform to prepare the bone surfaces.
The preparation of the glenoid involves the placement of a pin guide in the middle of the glenoid surface. Then, the entire glenoid is reamed with a large circular reamer in order to flatten the glenoid bone surface to match the backside surface of the glenoid implant. The placement of the pin guide, i.e. the entry point and the pin axis, is challenging. The pin entry point and axis are defined on the virtual 3D model of the scapula, i.e., the bone 3D model, at preoperative stage.
In this application, the target bone B is the glenoid. The drilling surgical tool D is a handheld surgical power tool used to screw the pin into the bone. The position of the pin locked in the power tool is known by design and reproducible. The localization device 2 comprises an RGB-Depth sensor S3D and software configured to localize the bone by continuously matching the 3D image acquired with the RGB-D sensor S3D with the bone 3D model 31. The localization device 2 is rigidly fixed to the power tool. The position of the tip of the pin and the orientation of the pin axis are known by hand-eye calibration. The output interface 4 is a small screen embedded on the power tool that provides useful information through minimal visual guidance.
As the health practitioner approaches the pin to the surface of the glenoid, the bone surface is localized in real time by the localization device 2. Thus, the distance between the actual position of the tip of the pin and the planned entry point on the bone surface is computed and updated continuously. In the same manner, the angle between the actual pin direction and the planned drilling axis is estimated live. The information of distance and angle can be displayed on the output interface 4, helping the health practitioner to adjust his or her gesture as illustrated in
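Schematically, and only as an assumed illustration of how the modules described above could be chained frame by frame in this example (sensor.frames, register_bone and display.show are hypothetical interfaces, not actual APIs of any product), the live guidance may look like:

```python
import numpy as np

def guidance_loop(sensor, plan, T_L_O, register_bone, display):
    """Hypothetical per-frame guidance loop for the glenoid example."""
    tip_O = np.array([0.0, 0.0, 0.0, 1.0])  # pin tip in tool frame RO (by design)
    axis_O = np.array([0.0, 0.0, 1.0])      # pin axis in tool frame RO (by design)
    for depth_image in sensor.frames():     # continuous RGB-D acquisition
        T_L_C = register_bone(depth_image, plan.bone_model)  # localize the glenoid
        T_C_O = np.linalg.inv(T_L_C) @ T_L_O                 # tool pose in bone frame
        tip_C = (T_C_O @ tip_O)[:3]
        axis_C = T_C_O[:3, :3] @ axis_O
        dist = np.linalg.norm(tip_C - plan.entry_point)      # distance to entry point
        cos_a = np.clip(np.dot(axis_C, plan.drilling_axis), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))                 # deviation from Xp
        display.show(distance_mm=dist, angle_deg=angle)      # embedded-screen feedback
```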