The following relates to surgical planning, and more specifically to a system and method for interactive 3D surgical planning. The following further relates to interactive 3D modelling of surgical implants.
Preoperative planning is indispensable to modern surgery. It allows surgeons to optimise surgical outcomes and prevent complications during procedures. Preoperative planning also assists surgeons in determining which tools will be required to perform procedures.
The value of preoperative planning has long been recognised, particularly in the field of orthopaedic surgery. In recent years, however, increased technical complexity and cost pressures to reduce operating room time have led to greater emphasis on preoperative planning.
One of the purposes of preoperative planning is to predict implant type and size. It is important that implants fit accurately and in the correct orientation. Frequently, a surgical team will prepare numerous implants of varying sizes to ensure that at least one will be appropriately sized for a surgical operation. The more accurately the team can predict the required implant configuration, the fewer implants need to be on hand during the operation; this reduces the demand for sterilisation of redundant tools and implants. More accurate predictions may also reduce operating time, thereby decreasing the risk of infection and patient blood loss.
A thorough preoperative plan includes a careful drawing of the desired result of a surgical operation.
Standard preoperative planning is typically performed by hand-tracing physical radiographic images or using digital 2D systems that allow manipulation of radiographic images and application of implant templates. More recently, 3D computed tomography (CT) reconstruction has been developed and has been shown to be a useful adjunct in the surgical planning of complex fractures.
Several preoperative planning software solutions exist. The majority of such solutions are used by surgeons prior to surgery at a location remote from the surgery.
In an aspect, a system for segmentation and reduction of a three-dimensional model of an anatomical feature is provided, the system comprising: a display unit configured to display a two-dimensional rendering of the three-dimensional model to a user; an input unit configured to receive a user input gesture comprising a two-dimensional closed stroke on the display unit; and a manipulation engine configured to: select a subset of the three-dimensional model falling within the two-dimensional closed stroke; receive a further user input gesture from the input unit; and manipulate in accordance with the further user input gesture the subset relative to the surrounding three-dimensional model from an initial placement to a final placement.
In an aspect, a method for segmentation and reduction of a three-dimensional model of an anatomical feature is provided, the method comprising: displaying, on a display unit, a two-dimensional rendering of the three-dimensional model to a user; receiving a user input gesture comprising a two-dimensional closed stroke on the display unit; selecting a subset of the three-dimensional model falling within the two-dimensional closed stroke; receiving a further user input gesture; and manipulating in accordance with the further user input gesture the subset relative to the surrounding three-dimensional model from an initial placement to a final placement.
In an aspect, a system for generating a three-dimensional model of a surgical implant for an anatomical feature is provided, the system comprising: a display unit configured to display a two-dimensional rendering of a three-dimensional model of the anatomical feature; an input unit configured to receive from a user at least one user input selecting a region on the three-dimensional model of the anatomical feature to place the three-dimensional model of the surgical implant; and a manipulation engine configured to generate the contour and placement for the three-dimensional model of the surgical implant in the selected region.
In an aspect, a method for generating a three-dimensional model of a surgical implant for an anatomical feature is provided, the method comprising: displaying, on a display unit, a two-dimensional rendering of the three-dimensional model of the anatomical feature; receiving from a user at least one user input selecting a region on the three-dimensional model of the anatomical feature to place the three-dimensional model of a surgical implant; and generating the contour and placement for the three-dimensional model of the surgical implant in the selected region.
In an aspect, a system for generating a two-dimensional rendering of a three-dimensional model of an anatomical feature from a plurality of datasets in response to a user input action from a user is provided, the system comprising: a display unit configured to display a plurality of parameters, the parameters corresponding to Hounsfield values; an input unit configured to receive a user input action from the user selecting at least one parameter corresponding to the Hounsfield value of the anatomical feature; and a modeling engine configured to retrieve a subset of imaging data corresponding to the at least one parameter and to generate a three-dimensional model of the anatomical feature therefrom, and further to generate a two-dimensional rendering of the three-dimensional model for display on the display unit.
In an aspect, a method for generating a two-dimensional rendering of a three-dimensional model of an anatomical feature from a plurality of datasets in response to a user input action from a user is provided, the method comprising: displaying a plurality of parameters, the parameters corresponding to Hounsfield values; receiving a user input action from the user selecting at least one parameter corresponding to the Hounsfield value of the anatomical feature; and retrieving a subset of imaging data corresponding to the at least one parameter, generating a three-dimensional model of the anatomical feature therefrom, and further generating a two-dimensional rendering of the three-dimensional model for display on a display unit.
In an aspect, a system for modeling screw trajectory on a three-dimensional model of an anatomical feature is provided, the system comprising: a display unit configured to display a two-dimensional rendering of the three-dimensional model to a user; an input unit configured to: receive a user input gesture from the user to modify the two-dimensional rendering displayed by the display unit; and receive a user input action from the user indicating a desired screw location; and a manipulation engine configured to augment the three-dimensional model by applying a virtual screw to the three-dimensional model having a screw trajectory extending from the screw location to an end location perpendicularly into the three-dimensional model from the plane and at the location of the user input action.
In an aspect, a method for modeling screw trajectory on a three-dimensional model of an anatomical feature is provided, the method comprising: displaying a two-dimensional rendering of the three-dimensional model to a user; receiving a user input gesture from the user to modify the two-dimensional rendering; receiving a user input action from the user indicating a desired screw location; and augmenting the three-dimensional model by applying a virtual screw to the three-dimensional model having a screw trajectory extending from the screw location to an end location perpendicularly into the three-dimensional model from the plane and at the location of the user input action.
Features will become more apparent in the following detailed description in which reference is made to the appended drawings wherein:
Embodiments will now be described with reference to the figures. It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practised without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
It will also be appreciated that any engine, unit, module, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media, such as, for example, storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as, for example, computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media. Such engine, unit, module, component, server, computer, terminal or device further comprises at least one processor for executing the foregoing instructions.
In embodiments, an intuitive system for interactive 3D surgical planning is provided. The system comprises: an input unit for receiving user input gestures; a manipulation engine for processing the user input gestures received in the input unit to manipulate a 3D model of at least one anatomical feature; and a display for displaying the 3D model manipulated in the manipulation engine.
In embodiments, the system provides an intuitive and interactive interface for surgical planning in three dimensions. The system further permits interaction with a 3D model of at least one anatomical feature to create a preoperative plan for patients. In embodiments, the system allows for surgical planning on a virtual model in real time using simple and intuitive gestures. Surgical planning may include: fracture segmentation and reduction; screw and plate placement for treating fractures; and planning of positioning of implants for treating a patient.
In further embodiments, a method for interactive 3D surgical planning is provided. The method comprises: in an input unit, receiving from a user at least one input gesture; in a manipulation engine, processing the at least one user input gesture received in the input unit to manipulate a 3D model of at least one anatomical feature; and in a display unit, displaying the 3D model manipulated in the manipulation engine.
In embodiments, the method provides intuitive and interactive surgical planning in three dimensions. The method further permits interaction with anatomical features to create a unique preoperative plan for patients. In embodiments, the method allows for surgical planning on a virtual model in real time using simple and intuitive input gestures.
In aspects, an intuitive method for interactive 3D surgical planning is provided.
In further embodiments, the system provides an intuitive and interactive interface for generating digital 3D models of surgical implants, including, for example, surgical joints, plates, screws and drill guides. The system may export the digital 3D models for rapid prototyping in a 3D printing machine or manufacture. The system may also export 3D models of anatomic structures, such as, for example, bone fractures, for rapid prototyping.
Referring now to
The mobile tablet device depicted in
In embodiments, the mobile tablet device may comprise a network unit 113 providing, for example, Wi-Fi, cellular, 3G, 4G, Bluetooth and/or LTE functionality, enabling network access to a network 121, such as, for example, a secure hospital network. A server 131 may be connected to the network 121 as a central repository. The server may be linked to a database 141 for storing digital images of anatomical features. In embodiments, database 141 is a hospital Picture Archiving and Communication System (PACS) archive which stores 2D computed tomography (CT) images in Digital Imaging and Communications in Medicine (DICOM) format. The PACS stores a plurality of CT datasets for one or more patients. The mobile tablet device 101 is registered as an Application Entity on the network 121. Using the DICOM Message Service Element (DIMSE) protocol, the mobile tablet device 101 communicates with the PACS archive over the network 121.
The user of the system can view on the display unit 103 the CT datasets available in the PACS archive, and select the desired CT dataset for a specific operation. The selected CT dataset is downloaded from the database 141 over the network 121 and stored in the memory 111. In embodiments, the memory 111 comprises a cache where the CT datasets are temporarily stored until they are processed by the modelling engine 109 as hereinafter described.
In embodiments, each CT dataset contains a plurality of 2D images. Each image, in turn, comprises a plurality of pixels defining a 2D model of an anatomical feature. Each pixel has a greyscale value. The pixels of a given anatomical feature share a range of greyscale values corresponding to a range of Hounsfield values. The CT datasets further contain at least the following data: the 2D spacing between pixels on each image, the position and orientation of the image relative to the other images, spacing between images, and patient identifiers, including a unique hospital identifier.
A method of generating a 3D model is illustrated in
At block 1709, the modelling engine 109 then retrieves from the dataset located in the memory 111 the data for the pixels corresponding to the selected Hounsfield value; all pixels having a greyscale value falling within the corresponding range of Hounsfield values are selected. As previously described, the dataset comprises: the 2D spacing between pixels on each image, the position and orientation of the image relative to the other images, and spacing between images. It will be appreciated that the dataset therefore contains sufficient information to determine in three dimensions a location for each pixel relative to all other pixels. The modelling engine 109 receives from the memory 111 the 2D coordinates of each pixel. At block 1711, the modelling engine 109 calculates the spacing in the third dimension between the pixels and thereby provides a coordinate in the third dimension to each pixel. At block 1719, the modelling engine 109 stores the 3D coordinates and greyscale colour for each pixel in the memory 111.
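By way of illustration only, the selection and coordinate-assignment steps described above (blocks 1709 and 1711) can be sketched in a few lines of Python. This is a hypothetical sketch, not the described implementation; the function name, data layout and spacing values are assumptions chosen for demonstration.

```python
# Illustrative sketch: select pixels whose greyscale value falls within a
# Hounsfield range and assign each a 3D coordinate from the per-image
# pixel spacing and the spacing between images. Hypothetical names/values.

def build_point_cloud(images, pixel_spacing, slice_spacing, hu_min, hu_max):
    """images: list of 2D lists of greyscale (Hounsfield) values.
    Returns a list of (x, y, z, value) tuples for in-range pixels."""
    points = []
    for k, image in enumerate(images):          # k indexes the slice
        z = k * slice_spacing                   # coordinate in the third dimension
        for row, pixels in enumerate(image):
            for col, value in enumerate(pixels):
                if hu_min <= value <= hu_max:   # e.g. a bone-like range
                    x = col * pixel_spacing[0]
                    y = row * pixel_spacing[1]
                    points.append((x, y, z, value))
    return points

# Two tiny 2x2 "slices"; select only values in an assumed bone-like range.
slices = [[[100, 400], [1200, 50]],
          [[350, 20], [90, 500]]]
cloud = build_point_cloud(slices, (0.5, 0.5), 1.0, 300, 1900)
```

The resulting point list is the raw material for the point cloud model generated at block 1713.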
In embodiments, at block 1713 the modelling engine 109 generates a 3D model comprising all the selected points arranged according to their respective 3D coordinates. For example, the 3D model may be generated using the raw data as a point cloud; however, in embodiments, as shown at block 1715, the modelling engine 109 applies any one or more volume rendering techniques, such as, for example, Maximum Intensity Projection (MIP), to the raw data 3D model. At block 1721, the modelling engine 109 directs the display unit to display the 3D model.
It will be further appreciated, however, that the 3D model may be generated as a polygon mesh, as shown at block 1717. In still further embodiments, point cloud and polygon mesh models are both generated and stored. The modelling engine 109 transforms the 2D CT dataset into a polygon mesh by applying a transform or algorithm, such as, for example, the Marching Cubes algorithm, as described in William E. Lorensen and Harvey E. Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm” (1987) 21:4 Computer Graphics 163, incorporated herein by reference.
It will be appreciated that a polygon mesh comprises a collection of vertices, edges and faces. The faces consist of triangles. Every vertex is assigned a normal vector. It will be further appreciated that the polygon mesh provides for 3D visualisation of 2D CT scans, while providing an approximation of the curvature of the surface of the anatomical feature.
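The per-vertex normals mentioned above are commonly derived by averaging the normals of the triangular faces sharing each vertex. The following is a minimal, hypothetical Python sketch of that generic technique, not the described system's implementation; the data structures are assumptions.

```python
# Illustrative sketch: assign each mesh vertex a normal by averaging the
# normals of the triangular faces that share it. Demonstration only.
import math

def face_normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]            # cross product of two edges
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def vertex_normals(vertices, faces):
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for ia, ib, ic in faces:
        fn = face_normal(vertices[ia], vertices[ib], vertices[ic])
        for idx in (ia, ib, ic):           # accumulate on each corner
            for i in range(3):
                normals[idx][i] += fn[i]
    result = []
    for n in normals:                      # renormalise the averages
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        result.append([x / length for x in n])
    return result

# A single triangle in the xy-plane: every vertex normal points along +z.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]
norms = vertex_normals(verts, tris)
```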
In embodiments, the modelling engine 109 generates a point cloud model, at block 1713, and a polygon mesh model, at block 1717, of the selected anatomical feature; these models are stored in the memory 111 of the mobile tablet device 101 for immediate or eventual display. Preferably, the models are retained in the memory until the user chooses to delete them so that the 3D modelling process does not need to be repeated. The 3D models having been generated, the CT datasets and identifying indicia are preferably wiped from memory 111 to preserve patient privacy. Preferably, the unique hospital identifier is retained so that the 3D models can be associated with the patient whose anatomical feature the 3D models represent.
In embodiments, 3D modelling is generated external to the mobile tablet device 101, by another application. The 3D models thus generated are provided to the mobile tablet device 101 over the network 121. In such embodiments, it will be appreciated that the CT datasets do not need to be provided to the mobile tablet device 101, but rather to the external engine performing the 3D modelling.
In embodiments, the 3D model is displayed on the display unit 103, preferably selectively either as a point cloud or polygon mesh. A user may then manipulate the 3D models as hereinafter described in greater detail.
In embodiments having a touch screen 104, as shown in
In further embodiments, a settings menu is displayed on the touch screen 104. The settings menu may selectively provide the following functionality, some of which is described in more detail herein:
manual input gesture control as previously described;
selection of models available to be viewed, such as with a user interface (“UI”) button labeled “series”;
surface and transparent (x-ray emulation) modes, such as with a UI button labeled “model”, wherein the x-ray emulation may provide simulated x-ray imaging based on the current viewpoint of the 3D model;
an option to reduce model resolution and improve interactive speed, such as with a UI button labeled “downsample”, wherein, as described below, when a user performs any transformation the system draws points instead of the mesh so that the system may be more responsive, but once the user discontinues the associated input, such as by releasing their fingers, the mesh is immediately drawn again;
an option to enable a user to perform lasso selection, such as with a UI button labeled “selection or segmentation”, allowing a user to reduce, delete or crop a selection;
an option to select the type of implant to be used (for example, a screw, plate, hip, knee, etc.), such as with a UI button labeled “implants”;
an option to select a measurement tool (for example, length, angle, diameter, etc.), such as with a UI button labeled “measurement”;
an option to display the angle of the screen in relation to orthogonal planes, such as with a UI button labeled “screen view angle”;
an option to select between anterior, posterior, left and right lateral, superior (cephalad) and inferior (caudad) positions, such as with a UI button labeled “pre-set anatomical views”;
an option to allow a user to easily take a screenshot that will be saved to the photo library on the device, such as with a UI button labeled “screenshot”;
an option to allow a user to evaluate implant models at 1:1 real-life size on screen with the preset views described above, and to export them as a StereoLithography (“STL”) file to email or to share through a digital file sharing medium (for example, Dropbox™), such as with a UI button labeled “export view”;
an option to allow a user to check implant/bone interface fit, thereby validating implant size and position, and to correlate with 2D orthogonal plane views, such as with a UI button labeled “interface fit or cut-away view”; and
an option to allow a user to lock or unlock screen rotation, such as with a UI button labeled “accelerometer”.
Further, radial menus can be implemented to facilitate touch inputs.
The foregoing functionality may enhance the user experience by, for example, allowing the user to more quickly or accurately recall preset views or to visualise environmental features that may impact the surgical procedure being planned.
Further, to provide the foregoing functionality, rendering of 3D models can be decoupled from touch inputs, which may increase responsiveness. Specifically, when the user's input causes a transformation, the system can be configured to draw points instead of the associated mesh and to draw the mesh only once the touch input is discontinued.
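This level-of-detail strategy amounts to a simple state toggle between gesture start and gesture end. The following is a hypothetical sketch of the pattern only; the class and method names are assumptions and do not reflect the described system's internals.

```python
# Illustrative sketch of decoupling rendering detail from touch input:
# while a transformation gesture is active the renderer reports the
# cheap point-cloud representation; when the gesture ends it switches
# back to the full mesh. Names here are hypothetical.

class LevelOfDetailRenderer:
    def __init__(self):
        self.interacting = False

    def begin_gesture(self):
        self.interacting = True

    def end_gesture(self):
        self.interacting = False

    def representation(self):
        # Points are far cheaper to draw than a multi-million-polygon mesh.
        return "points" if self.interacting else "mesh"

r = LevelOfDetailRenderer()
r.begin_gesture()
during = r.representation()   # point cloud while the user is transforming
r.end_gesture()
after = r.representation()    # full mesh once fingers are released
```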
The described method of generating a 3D model may provide models having a relatively high resolution. The mesh used may be the raw output of the Marching Cubes algorithm, without downsampling. For example, output of such methods may provide a pelvic model having 2.5 million polygons and a head model having 3.9 million polygons. Further, it will be appreciated that a third-party rendering library need not be utilized.
In order to effect preoperative planning to, for example, treat bone fractures, a user may need to segment and select bones and fracture fragments. Once the bones and fracture fragments are segmented, the user can manually reduce them into anatomical position, as hereinafter described in greater detail. Where possible, the user can use as a template an unaffected area matching the treatment area to determine whether the user has properly reduced the fracture fragments.
A method of segmenting the elements in a 3D model of an anatomical feature is shown in
The user may manipulate the model to select an optimal view for segmenting the elements, as previously described. At block 1803 the input unit 105 receives from the user a gesture input as previously described to manipulate the display of the 3D model. At block 1805 the manipulation engine 107 manipulates the 3D model displayed at block 1801. Once the user is satisfied with the display of the 3D model, the user draws a 2D closed stroke on the touch screen display unit 103 around an element to segment. In embodiments, a user may wish to segment an element such as, for example, a bone fracture.
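Determining which parts of the model fall within such a closed stroke typically involves projecting the model's points to screen coordinates and testing each against the stroke polygon, for instance with the even-odd (ray casting) rule. The sketch below illustrates that generic point-in-polygon test only; it is an assumption for demonstration, not necessarily the system's exact selection method.

```python
# Illustrative sketch: test whether projected 2D points fall inside a
# closed lasso stroke using the even-odd (ray casting) rule.

def inside_lasso(point, stroke):
    """stroke: list of (x, y) vertices of the closed 2D stroke."""
    x, y = point
    inside = False
    j = len(stroke) - 1
    for i in range(len(stroke)):
        xi, yi = stroke[i]
        xj, yj = stroke[j]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (yi > y) != (yj > y):
            x_cross = xi + (y - yi) * (xj - xi) / (yj - yi)
            if x < x_cross:
                inside = not inside
        j = i
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
sel = inside_lasso((2, 2), square)      # point inside the stroke
unsel = inside_lasso((5, 2), square)    # point outside the stroke
```

Points that pass the test would belong to the selected subset; the rest remain part of the surrounding model.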
As shown in
As shown in
As shown in
As shown in
Other slicing operations may be used. For example, as shown in
As shown in
In further embodiments, a user may repeat segmentation on a fracture element that has already been segmented. For example, a user may segment and manipulate a fracture element, rotate, pan and/or zoom the 3D model, and then segment a portion of the element, as described above. The unselected portion of the element is returned to its original location (i.e., as derived from the CT scans), and the selected portion is segmented from the element. The user may repeat manipulation and segmentation as desired. The user may thereby iteratively segment elements.
In further aspects, the present systems and methods provide preoperative design of surgical implants, such as, for example, surgical plates and screws. Many surgical treatments call for installation of metal surgical plates in an affected area, such as the surgical plate shown in
In aspects, the user may plan placement and configuration of a surgical implant by virtually contouring a 3D model of the surgical implant on the 3D model of the anatomical feature to be treated. After contouring the 3D model, a surgeon may view the model in a 1:1 aspect ratio on the touch screen as a guide to form an actual surgical implant for subsequent use in surgery. Further, in aspects, the digital model of the surgical implant contains sufficient information for rapid prototyping (also referred to as 3D printing) of a template of the surgical implant or of the actual implant. Where the 3D model is used to generate a prototype that will be used as an actual implant, the rapid prototyping method and materials may be selected accordingly. For example, the resulting prototype may be made out of metal.
The printed template may serve as a guide to contour a metal physical implant or further as a drill guide for precise drill and screw placement during surgery. Therefore, pre-surgically planned screw trajectories may be incorporated into the digital surgical implant model to allow rapid prototyping of a pre-contoured template that also contains built-in drill or saw guides for each screw hole in the implant, as herein described in greater detail.
In order to plan placement of surgical implants, the user uses suitable input gestures to manipulate the 3D model of the anatomical features to obtain an appropriate view for planning the surgical implant. The user then indicates that he wishes to plan the surgical implant by, for example, selecting “Implants” in the user interface, as shown in
Referring now to
A 3D model of an anatomical feature into which a screw is to be placed is displayed, as previously described. The user may use any of the aforementioned input methods to manipulate the 3D model to find an appropriate view for placing a starting point for screw insertion, as shown in
As shown in
The manipulation engine 107 shown in
The trajectory 401 having been established, at block 1905 the manipulation engine 107 causes a selection menu to be displayed on the touch screen so that the user may select the length 402 of the screw in the trajectory 401, as well as the angle 403 of the screw relative to either of the orthogonal planes or other screws. At block 1901 the input unit 105 receives the user's selection as a user input gesture and at block 1905 the manipulation engine causes the length to be displayed.
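Since the trajectory extends perpendicularly into the model from the viewing plane, the screw end point follows directly from the start point, the view direction and the selected length. The following vector-arithmetic sketch is purely illustrative; the function name and example values are assumptions.

```python
# Illustrative sketch: given a screw start point on the model surface,
# the current view direction (the normal to the screen plane) and a
# chosen length, compute the screw end point. Hypothetical names/values.
import math

def screw_end_point(start, view_direction, length):
    norm = math.sqrt(sum(c * c for c in view_direction)) or 1.0
    unit = [c / norm for c in view_direction]   # unit vector into the scene
    return [s + length * u for s, u in zip(start, unit)]

# Looking straight down the z-axis, a 40 mm screw placed at the origin
# ends 40 mm into the model along z.
end = screw_end_point([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 40.0)
```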
In embodiments, the user may further modify the screw trajectory, as shown in
In further embodiments, the user may plan sizing and placement of further surgical implants, such as, for example, surgical plates. In embodiments, 3D models of surgical plates are provided. The 3D models represent surgical plates, such as those shown in
Typically, as shown in
It will be appreciated that other types of plates and plate segments may be created, either by the user or by third parties. The user may remodel the size and shape of the hole and shape of the edge for each segment of the plate. Appropriate users may further easily determine a correct position of point 0 for different hole designs and the direction of a normal vector N and vector D for the different plate segments. Different models may be loaded into the memory 111, for retrieval by the manipulation engine 107.
In still further aspects, the hospital's database 141, as shown in
The system provides for automatic and manual virtual manipulation of the model of the surgical plate, including, for example, in-plane, out-of-plane bending and torquing/twisting to contour the plate to the bone surface.
In embodiments, a 3D model is displayed at block 2001, as shown in
In the manual scenario, upon selecting the plate start point 1101, the user again taps the touch screen 104 at other locations to establish next plate points 1102, 1103, 1104 and so on. The number of points may be any number. At block 2007, the manipulation engine 107 converts each of the 2D touch point coordinates to a location on the surface of the 3D model of the anatomical feature, according to previously described techniques. In embodiments, at block 2003 the manipulation engine 107 calculates the shortest geodesic path to define a curve 1105 between points 1101, 1102, 1103 and 1104, as shown in
In the automated scenario, upon selecting the plate start point 1101, the user again taps the touch screen 104 at a desired plate end point 1104 to establish an end point for the trajectory. At block 2007, the manipulation engine 107 converts each of the 2D touch point coordinates to a location on the surface of the 3D model of the anatomical feature, as in the manual scenario. In embodiments, at block 2003 the manipulation engine 107 calculates the shortest geodesic path to define a curve 1105 between points 1101 and 1104, as shown in
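A common and simple approximation of the shortest geodesic path between two mesh vertices is Dijkstra's algorithm run over the mesh edge graph; true surface geodesics require more specialised algorithms. The sketch below shows only that generic approximation, under assumed data structures, and is not the described system's implementation.

```python
# Illustrative sketch: approximate the shortest surface path between two
# mesh vertices with Dijkstra's algorithm over the edge graph.
import heapq
import math

def shortest_edge_path(vertices, edges, start, goal):
    graph = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        d = math.dist(vertices[a], vertices[b])   # Euclidean edge length
        graph[a].append((b, d))
        graph[b].append((a, d))
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue                              # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]                                 # walk back to the start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Four vertices; the diagonal edge (0, 2) is shorter than going via 1.
verts = [(0, 0, 0), (1, 0, 0), (2, 0, 1), (3, 0, 1)]
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
path = shortest_edge_path(verts, edges, 0, 3)
```

The returned vertex sequence would then serve as the discrete curve along which plate segments are placed.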
It will be further appreciated that the shortest geodesic path is not always optimal; in embodiments, therefore, the user may alternatively, and preferably selectively, use one-finger panning to draw a customised 2D stroke on the surface of the touch screen 104. At block 2007, the manipulation engine 107 converts each of the 2D stroke coordinates to a location on the surface of the 3D model of the anatomical feature, using known methods as previously described. As a result, a 3D discrete curve 1201 that lies on the surface of the 3D model is created, as shown in
Regardless of the resulting curve, in the automated scenario, the manipulation engine 107 segments the discrete curve 1105 or 1201 into a segmented discrete curve 1301 according to suitable techniques, as shown in
Upon manual or automatic placement and alignment of the segments, the manipulation engine may further assign a control point at the hole for each segment. The user may manipulate each control point by any suitable input, in response to which the manipulation engine moves the model of the corresponding segment, for example, in-plane, or along the curve.
In one aspect, the interface may provide an over-sketch function enabling the user to manipulate the surgical plate or segments of the surgical plate, either by moving segments, or by altering the curve along which the segments are located. For example, the user may initiate the over-sketch function by touching the touchscreen over one of the control points and swiping towards a desired location. The manipulation engine reassigns the feature associated to the control point to the new location, and re-invokes any suitable algorithm, as previously described, to re-calculate and adjust the curve and the surgical plate.
The use of the system during surgery has apparent benefits in the context of implant preparation and placement. For example, once a preoperative plan made with the system has been finalised, the manipulation may have generated a 3D model of a surgical implant having a particular set of curvatures, bends and other adjustments. A surgeon, upon conducting the surgery, may refer directly to the system when preparing the actual surgical implant to ensure that the implant is formed as planned. Such a possibility is further enhanced as the surgeon can easily use gesture commands to scale the rendered implant to real-world scale and can rotate the rendered and real-world implants simultaneously to compare them to one another.
The 3D model may enhance or ease fabrication of the physical implant to be used in surgery. Users may view the 3D model of the surgical implant as a guide to aid in conceptualising the contouring of the physical implant, whether preoperatively or in the field. The user may view the model on the touchscreen of her device. In aspects, the interface provides a menu from which the user may select presentation of a preconfigured 1:1 aspect ratio viewing size representing the actual physical dimensions of the surgical implant to be used in surgery. Additional preconfigured views may include the following, for example:
Model—a standard 3D orthographic projection view in which the user can rotate/scale/translate the model using the gestures described previously;
Side—an orthographic projection view from the left and/or right hand side of the plate model;
Front—an orthographic projection view from the front and/or back of the plate model; and
Top—an orthographic projection view from the top and/or bottom of the plate model.
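The Side, Front and Top views above are axis-aligned orthographic projections, which amount to discarding the model-space axis perpendicular to the chosen view plane. The sketch below illustrates this under an assumed axis convention (x to the side, y to the front, z up), which the description does not specify:

```python
# Assumed convention: which model axis is dropped for each preconfigured view.
VIEW_DROPPED_AXIS = {"side": 0, "front": 1, "top": 2}

def orthographic_projection(vertices, view):
    """Project 3D model vertices to 2D by discarding the coordinate
    perpendicular to the selected view plane."""
    drop = VIEW_DROPPED_AXIS[view]
    return [tuple(c for i, c in enumerate(v) if i != drop) for v in vertices]
```

Because no perspective foreshortening is applied, distances in these views remain proportional to true model dimensions, which is what makes a 1:1 scale presentation meaningful.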
In preferred embodiments, a projection angle icon for the 3D model of the anatomical features is provided and displayed as shown in
The interface may further enhance pre-operative surgical planning and surgical implant assembly by exporting the 3D models of the surgical implants and anatomical features for use in 3D printing. For example, a “negative” mould of a surgical implant may guide a surgeon in shaping bone grafts during surgery.
The modelling engine may be configured to export digital models in any number of formats suitable for 3D prototyping, and may export various types of digital models, for example: anatomic structures, including bone fragments; and surgical implants, including contoured plates, screws and drill guides.
In an exemplary scenario, upon finalisation of a preoperative plan, the modelling engine may export digital models in, for example, a Wavefront .obj file format or STL (StereoLithography) file format. In order to model screws, the manipulation engine obtains the length, trajectory and desired radius for each screw and generates a 3D model (using any of the previously described modelling techniques) of a cylinder with a cap, emulating a screw. The modelling engine exports the 3D model for 3D printing.
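The cylinder-with-cap screw emulation and the Wavefront .obj export can be illustrated with a minimal sketch. This is not the system's actual modelling code; the mesh here is a capped cylinder built along the z axis (the planned trajectory would be applied afterwards as a rigid transform), and the function names are hypothetical.

```python
import math

def screw_model(length, radius, sides=16):
    """Approximate a screw as a capped cylinder along the z axis:
    two rings of vertices plus two cap-centre vertices, with
    triangular faces given as 0-based vertex indices."""
    verts, faces = [], []
    for z in (0.0, length):
        for k in range(sides):
            a = 2 * math.pi * k / sides
            verts.append((radius * math.cos(a), radius * math.sin(a), z))
    bottom_c, top_c = len(verts), len(verts) + 1
    verts += [(0.0, 0.0, 0.0), (0.0, 0.0, length)]
    for k in range(sides):
        k2 = (k + 1) % sides
        faces += [(k, k2, sides + k), (k2, sides + k2, sides + k)]      # wall
        faces += [(k2, k, bottom_c), (sides + k, sides + k2, top_c)]    # caps
    return verts, faces

def to_obj(verts, faces):
    """Serialise the mesh as Wavefront .obj text (faces are 1-based)."""
    lines = ["v %.6f %.6f %.6f" % v for v in verts]
    lines += ["f %d %d %d" % tuple(i + 1 for i in f) for f in faces]
    return "\n".join(lines)
```

STL export is analogous, writing each triangle with its facet normal rather than an indexed vertex list.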
Furthermore, the printed plate model can also be utilized as a drill guide for precise drill and screw placement during the surgery. To achieve this, the pre-surgically planned screw trajectories are incorporated into the precisely contoured digital plate model, which also contains built-in drill guides for each screw hole in the plate. Overall, this may improve surgical accuracy by helping the user avoid important anatomical structures, improve efficiency by reducing surgical steps, reduce the number of standard instruments needed and thereby the instruments to re-sterilize, reduce wastage of implants, and facilitate faster operating room turnover.
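One way the planning stage can help the user avoid important anatomical structures is a clearance check on each planned screw path. The sketch below is illustrative only: it assumes structures of concern are flagged as landmark points in the same coordinate frame as the plate, and all names are hypothetical.

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment a-b in 3D."""
    ab, ap = _sub(b, a), _sub(p, a)
    t = max(0.0, min(1.0, _dot(ap, ab) / _dot(ab, ab)))
    closest = tuple(a[i] + t * ab[i] for i in range(3))
    return math.dist(p, closest)

def trajectory_is_safe(hole, tip, structures, min_clearance):
    """Check that a planned screw path (plate hole -> screw tip) keeps
    at least min_clearance from every flagged anatomical landmark."""
    return all(point_segment_distance(s, hole, tip) >= min_clearance
               for s in structures)
```

A trajectory failing the check could then be re-planned before the guide barrels are merged into the plate model for printing.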
Referring now to
3D-printed drill guides produced from 3D models generated according to the systems and methods herein, such as those discussed with reference to
It will be appreciated that the system may be provided on a mobile tablet device. By its nature, such a device is easily transportable and may be used in a surgical setting to augment the surgeon's tools available therein. For example, a surgeon could utilize the system before, during or both before and during surgery. As an illustrative example, the system may give a surgeon a more thorough view of a particular bone fracture than the surgeon could otherwise obtain by simply looking directly at the fracture within the patient's body.
It will be further appreciated that the preoperative screw and plate positions determined using the aforementioned methods can be stored in the memory 111 for post-operative analysis. In embodiments, a post-operative 3D model is generated by the modelling engine from post-operative CT datasets as heretofore described. The user may recall the preoperative screw and plate positions from the memory 111, so that the positions are superimposed over the post-operative 3D model. It will be appreciated that the accuracy of the surgical procedure can thus be gauged with respect to the planned procedure.
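Once the stored preoperative positions are superimposed over the post-operative model, accuracy can be gauged as a per-screw deviation. A minimal sketch follows, assuming the post-operative dataset has already been registered to the preoperative coordinate frame and screws are matched by identifier; the function name is hypothetical.

```python
import math

def placement_accuracy(planned, achieved):
    """Per-screw deviation between planned and post-operative head
    positions (dicts keyed by screw identifier, values are 3D points),
    plus the mean deviation across all screws."""
    deviations = {sid: math.dist(planned[sid], achieved[sid]) for sid in planned}
    mean = sum(deviations.values()) / len(deviations)
    return deviations, mean
```

A fuller analysis might also compare screw axis angulation and plate contour, but a positional deviation of this kind already gives a simple summary of how closely the procedure followed the plan.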
Although the illustrated embodiments have been described with particular respect to preoperative planning for orthopaedic surgery, it will be appreciated that a system and method for interactive 3D surgical planning may have many possible applications outside of orthopaedic trauma. Exemplary applications include, but are not limited to, joint replacement surgery, deformity correction and spine surgery, head and neck surgery, oral surgery and neurosurgery.
It will be further appreciated that the embodiments described may provide educational benefits, for example as a simulation tool to train resident and novice surgeons. Further, the embodiments may enable improved communication between surgeons and patients by offering enhanced visualisation of surgical procedures.
Orthopaedic implant manufacturing and service companies will appreciate that the foregoing embodiments may also provide a valuable marketing tool to display implants and technique guides, whether to customers or to employees.
It will further be appreciated that the embodiments described may be used to train X-ray technologists to optimise patient positioning and X-ray projection selection.
It will further be appreciated that the above-described embodiments provide rapid access to automated segmentation, allowing active participation in the planning, design and implantation of patient-specific implants, including "lasso" segmentation, screw hole planning, drill-guide modelling, and contouring of a modelled implant plate. Further, the embodiments may be applicable to a range of anatomical features, including, but not limited to, hips and knees.
It will further be appreciated that the above-described embodiments provide a unified simulation system, optimized for use on mobile touch-screen devices, allowing users, such as surgeons and medical device engineers, to work in parallel during the design of patient-matched implants and to contribute to reducing the overall temporal and financial cost of the manufacture thereof. Embodiments described above thus provide a unified platform for 3D surgical planning and implant design which may enhance communication between surgeons and engineers.
Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the claims appended hereto. The entire disclosures of all references recited above are incorporated herein by reference.
Number | Date | Country
--- | --- | ---
61989232 | May 2014 | US
62046217 | Sep 2014 | US