Surgical joint repair procedures involve repair and/or replacement of a damaged or diseased joint. Many times, a surgical joint repair procedure, such as joint arthroplasty as an example, involves replacing the damaged joint with a prosthetic that is implanted into the patient's bone. Proper selection of a prosthetic that is appropriately sized and shaped and proper positioning of that prosthetic to ensure an optimal surgical outcome can be challenging. Various tools may assist surgeons with preoperative planning for joint repairs and replacements.
This disclosure describes a variety of techniques for providing preoperative planning, medical implant design and manufacture, intraoperative guidance, postoperative analysis, and/or training and education for surgical joint repair procedures. The techniques may be used independently or in various combinations to support particular phases or settings for surgical joint repair procedures or provide a multi-faceted ecosystem to support surgical joint repair procedures. In various examples, the disclosure describes techniques for preoperative surgical planning, intra-operative surgical planning, intra-operative surgical guidance, intra-operative surgical tracking and post-operative analysis using mixed reality (MR)-based visualization. In some examples, the disclosure also describes surgical items and/or methods for performing surgical joint repair procedures. In some examples, this disclosure also describes techniques and visualization devices configured to provide education about an orthopedic surgical procedure using mixed reality.
This disclosure describes a variety of techniques for using machine learning to determine the operational duration of an orthopedic implant in a preoperative or intraoperative setting. A computing system may determine the operational duration of the implant, such as an estimate of how long the implant will effectively serve its intended function after implantation before subsequent action, e.g., additional surgery such as a revision procedure, is needed. A revision procedure may involve replacement of an orthopedic implant with a new implant. For example, the computing system may configure a machine-learned model with a machine learning dataset that includes information used to predict the operational duration of the orthopedic implant. The machine-learned model may receive patient and implant characteristics and use the model parameters of the machine-learned model generated from the machine learning dataset to determine information indicative of the predicted operational duration of the implant.
In this manner, a surgeon can receive information indicative of an estimate of the operational duration of a particular implant. A longer operational duration is ordinarily desirable so as to prolong effective operation and delay the need for a surgical revision procedure. The surgeon can then determine whether the particular implant is a suitable implant for the patient or whether a different implant is more suitable, e.g., based on prediction of a longer operational duration. In some examples, the computing system may determine the operational duration of multiple implants and provide a recommendation to the surgeon based on the operational duration of the multiple implants, and in some examples, accounting for patient characteristics.
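For illustration only, the following is a minimal sketch of how a machine-learned regression model might rank candidate implants by predicted operational duration for a given patient. The feature layout, the toy training data, and the choice of a gradient-boosted regressor are assumptions made for this example and are not the specific model, dataset, or schema of this disclosure.

```python
# Minimal sketch: rank candidate implants by predicted operational duration.
# Feature names and training data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training rows: patient characteristics (age, activity level,
# bone density) + implant characteristics (stemmed flag, material code).
# The label is the observed years of service before a revision was needed.
X_train = np.array([
    [72, 1, 0.85, 1, 0],
    [58, 3, 1.10, 0, 1],
    [65, 2, 0.95, 1, 1],
    [80, 1, 0.70, 0, 0],
])
y_train = np.array([14.0, 9.5, 12.0, 15.5])  # years until revision

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Rank two candidate implants for one patient by predicted duration.
patient = [68, 2, 0.90]
candidates = {"stemmed_cobalt": [1, 0], "stemless_titanium": [0, 1]}
predictions = {
    name: model.predict([patient + implant])[0]
    for name, implant in candidates.items()
}
for name, years in sorted(predictions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: predicted {years:.1f} years before revision")
```

In a sketch like this, the candidate with the longest predicted operational duration could be surfaced to the surgeon as the recommended option, alongside the underlying estimates for review.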
Accordingly, the example techniques rely on computational processes rooted in machine learning technologies as a way to provide a practical application of selecting an implant for implantation. The techniques described in this disclosure may allow a surgeon to select a suitable implant based on more than just the know-how and experience of the surgeon, which may be especially limited for less experienced surgeons.
In one example, the disclosure describes a computer-implemented method comprising obtaining, by a computing system, patient characteristics of a patient, obtaining, by the computing system, implant characteristics of an implant, determining, by the computing system, information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics, and outputting, by the computing system, the information indicative of the operational duration of the implant.
In one example, the disclosure describes a computing system comprising memory configured to store patient characteristics of a patient and implant characteristics of an implant and one or more processors, coupled to the memory, and configured to obtain the patient characteristics of the patient, obtain the implant characteristics of the implant, determine information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics, and output the information indicative of the operational duration of the implant.
In one example, the disclosure describes a computer-readable storage medium storing instructions thereon that when executed cause one or more processors to obtain patient characteristics of a patient, obtain implant characteristics of an implant, determine information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics, and output the information indicative of the operational duration of the implant.
In one example, the disclosure describes a computer system comprising means for obtaining patient characteristics of a patient, means for obtaining implant characteristics of an implant, means for determining information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics, and means for outputting the information indicative of the operational duration of the implant.
This disclosure describes a variety of techniques for using machine learning to determine information indicative of dimensions of an orthopedic implant based on implant characteristics. For instance, a machine-learned model may receive the implant characteristics such as information that the implant is used for a type of surgery (e.g., reverse or anatomical shoulder replacement surgery), information that the implant is stemmed or stemless, information that the implant is for a type of patient condition (e.g., fracture, cuff tear, or osteoarthritis), and information that the implant is for a particular bone (e.g., humerus or glenoid). The machine-learned model may apply model parameters of the machine-learned model, where the model parameters are generated from a machine learning data set, and determine information indicative of the dimensions based on the applying of the model parameters of the machine-learned model. A manufacturer may then construct the implant based on the determined dimensions.
In one or more examples, the determination of the information indicative of the dimensions of the implant may be applicable to many patients rather than determined for a specific patient. In other words, the machine-learned model may determine the information indicative of the dimensions of the implant without relying on patient specific information such that the resulting implant having the dimensions may be suitable for many patients.
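As an illustrative sketch only, the categorical implant characteristics described above could be encoded as a feature vector and mapped to predicted dimensions with a multi-output regression model. The one-hot encoding, the toy training data, and the two example dimensions (head diameter and stem length) are assumptions for this example rather than the disclosure's actual parameterization.

```python
# Minimal sketch: map categorical implant characteristics to predicted
# implant dimensions with multi-output regression (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

# Features: [is_reverse, is_stemless, is_fracture, is_humeral]
# Targets:  [head_diameter_mm, stem_length_mm]
X_train = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
])
y_train = np.array([
    [46.0, 90.0],
    [44.0, 25.0],
    [48.0, 110.0],
    [40.0, 0.0],
    [42.0, 0.0],
])

dim_model = LinearRegression().fit(X_train, y_train)

# Predict dimensions for a stemless reverse humeral implant.
query = np.array([[1, 1, 0, 1]])
head_diameter, stem_length = dim_model.predict(query)[0]
print(f"predicted head diameter: {head_diameter:.1f} mm, "
      f"stem length: {stem_length:.1f} mm")
```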
Accordingly, the example techniques may rely on computational processes rooted in machine learning technologies as a way to provide a practical application of determining dimensions of an implant for designing and constructing the implant. The techniques described in this disclosure allow an implant designer to design an implant relying on more than just the know-how and experience of the implant designer, which may be especially limited for less experienced designers.
In one example, the disclosure describes a computer-implemented method comprising receiving, with a machine-learned model of a computing system, implant characteristics of an implant to be manufactured, applying, with the computing system, model parameters of the machine-learned model to the implant characteristics, wherein the model parameters of the machine-learned model are generated based on a machine learning dataset, determining information indicative of dimensions of the implant to be manufactured based on the applying of the model parameters of the machine-learned model, and outputting, by the computing system, the information indicative of the dimensions of the implant to be manufactured.
In one example, the disclosure describes a computing system comprising memory configured to store implant characteristics of an implant to be manufactured and one or more processors configured to receive, with a machine-learned model of the computing system, the implant characteristics of the implant to be manufactured, apply model parameters of the machine-learned model to the implant characteristics, wherein the model parameters of the machine-learned model are generated based on a machine learning dataset, determine information indicative of dimensions of the implant to be manufactured based on the applying of the model parameters of the machine-learned model, and output the information indicative of the dimensions of the implant to be manufactured.
In one example, the disclosure describes a computer-readable storage medium storing instructions thereon that when executed cause one or more processors to receive implant characteristics of an implant to be manufactured, apply model parameters of a machine-learned model to the implant characteristics, wherein the model parameters of the machine-learned model are generated based on a machine learning dataset, determine information indicative of dimensions of the implant to be manufactured based on the applying of the model parameters of the machine-learned model, and output the information indicative of the dimensions of the implant to be manufactured.
In one example, the disclosure describes a computer system comprising means for receiving implant characteristics of an implant to be manufactured, means for applying model parameters of a machine-learned model to the implant characteristics, wherein the model parameters of the machine-learned model are generated based on a machine learning dataset, means for determining information indicative of dimensions of the implant to be manufactured based on the applying of the model parameters of the machine-learned model, and means for outputting the information indicative of the dimensions of the implant to be manufactured.
The details of various examples of the disclosure are set forth in the accompanying drawings and the description below. Various features, objects, and advantages will be apparent from the description, drawings, and claims.
Certain examples of this disclosure are described with reference to the accompanying drawings, wherein like reference numerals denote like elements. It should be understood, however, that the accompanying drawings illustrate only the various implementations described herein and are not meant to limit the scope of various technologies described herein. The drawings show and describe various examples of this disclosure.
In the following description, numerous details are set forth to provide an understanding of the present disclosure. However, it will be understood by those skilled in the art that the present disclosure may be practiced without these details and that numerous variations or modifications from the described examples may be possible.
Orthopedic surgery can involve implanting one or more prosthetic devices to repair or replace a patient's damaged or diseased joint. Today, virtual surgical planning tools are available that use image data of the diseased or damaged joint to generate an accurate three-dimensional bone model that can be viewed and manipulated preoperatively by the surgeon. These tools can enhance surgical outcomes by allowing the surgeon to simulate the surgery, select or design an implant that more closely matches the contours of the patient's actual bone, and select or design surgical instruments and guide tools that are adapted specifically for repairing the bone of a particular patient. Use of these planning tools typically results in generation of a preoperative surgical plan, complete with an implant and surgical instruments that are selected or manufactured for the individual patient. Oftentimes, once in the actual operating environment, the surgeon may desire to verify the preoperative surgical plan intraoperatively relative to the patient's actual bone. This verification may result in a determination that an adjustment to the preoperative surgical plan is needed, such as a different implant, a different positioning or orientation of the implant, and/or a different surgical guide for carrying out the surgical plan. In addition, a surgeon may want to view details of the preoperative surgical plan relative to the patient's real bone during the actual procedure in order to more efficiently and accurately position and orient the implant components. For example, the surgeon may want to obtain intra-operative visualization that provides guidance for positioning and orientation of implant components, guidance for preparation of bone or tissue to receive the implant components, guidance for reviewing the details of a procedure or procedural step, and/or guidance for selection of tools or implants and tracking of surgical procedure workflow.
Accordingly, this disclosure describes systems and methods for using a mixed reality (MR) visualization system to assist with creation, implementation, verification, and/or modification of a surgical plan before and during a surgical procedure. Because MR, or in some instances VR, may be used to interact with the surgical plan, this disclosure may also refer to the surgical plan as a “virtual” surgical plan. Visualization tools other than or in addition to mixed reality visualization systems may be used in accordance with techniques of this disclosure. A surgical plan, e.g., as generated by the BLUEPRINT™ system or another surgical planning platform, may include information defining a variety of features of a surgical procedure, such as features of particular surgical procedure steps to be performed on a patient by a surgeon according to the surgical plan including, for example, bone or tissue preparation steps and/or steps for selection, modification and/or placement of implant components. Such information may include, in various examples, dimensions, shapes, angles, surface contours, and/or orientations of implant components to be selected or modified by surgeons, dimensions, shapes, angles, surface contours and/or orientations to be defined in bone or tissue by the surgeon in bone or tissue preparation steps, and/or positions, axes, planes, angle and/or entry points defining placement of implant components by the surgeon relative to patient bone or tissue. Information such as dimensions, shapes, angles, surface contours, and/or orientations of anatomical features of the patient may be derived from imaging (e.g., x-ray, CT, MRI, ultrasound or other images), direct observation, or other techniques.
In this disclosure, the term “mixed reality” (MR) refers to the presentation of virtual objects such that a user sees images that include both real, physical objects and virtual objects. Virtual objects may include text, 2-dimensional surfaces, 3-dimensional models, or other user-perceptible elements that are not actually present in the physical, real-world environment in which they are presented as coexisting. In addition, virtual objects described in various examples of this disclosure may include graphics, images, animations or videos, e.g., presented as 3D virtual objects or 2D virtual objects. Virtual objects may also be referred to as virtual elements. Such elements may or may not be analogs of real-world objects. In some examples, in mixed reality, a camera may capture images of the real world and modify the images to present virtual objects in the context of the real world. In such examples, the modified images may be displayed on a screen, which may be head-mounted, handheld, or otherwise viewable by a user. This type of mixed reality is increasingly common on smartphones, such as where a user can point a smartphone's camera at a sign written in a foreign language and see, on the smartphone's screen, a translation of the sign in the user's own language superimposed on the sign along with the rest of the scene captured by the camera. In some examples, in mixed reality, see-through (e.g., transparent) holographic lenses, which may be referred to as waveguides, may permit the user to view real-world objects, i.e., actual objects in a real-world environment, such as real anatomy, through the holographic lenses and also concurrently view virtual objects.
The Microsoft HOLOLENS™ headset, available from Microsoft Corporation of Redmond, Washington, is an example of a MR device that includes see-through holographic lenses, sometimes referred to as waveguides, that permit a user to view real-world objects through the lens and concurrently view projected 3D holographic objects. The Microsoft HOLOLENS™ headset, or similar waveguide-based visualization devices, are examples of an MR visualization device that may be used in accordance with some examples of this disclosure. Some holographic lenses may present holographic objects with some degree of transparency through see-through holographic lenses so that the user views real-world objects and virtual, holographic objects. In some examples, some holographic lenses may, at times, completely prevent the user from viewing real-world objects and instead may allow the user to view entirely virtual environments. The term mixed reality may also encompass scenarios where one or more users are able to perceive one or more virtual objects generated by holographic projection. In other words, “mixed reality” may encompass the case where a holographic projector generates holograms of elements that appear to a user to be present in the user's actual physical environment.
In some examples, in mixed reality, the positions of some or all presented virtual objects are related to positions of physical objects in the real world. For example, a virtual object may be tethered to a table in the real world, such that the user can see the virtual object when the user looks in the direction of the table but does not see the virtual object when the table is not in the user's field of view. In some examples, in mixed reality, the positions of some or all presented virtual objects are unrelated to positions of physical objects in the real world. For instance, a virtual item may always appear in the top right of the user's field of vision, regardless of where the user is looking.
Augmented reality (AR) is similar to MR in the presentation of both real-world and virtual elements, but AR generally refers to presentations that are mostly real, with a few virtual additions to “augment” the real-world presentation. For purposes of this disclosure, MR is considered to include AR. For example, in AR, parts of the user's physical environment that are in shadow can be selectively brightened without brightening other areas of the user's physical environment. This example is also an instance of MR in that the selectively brightened areas may be considered virtual objects superimposed on the parts of the user's physical environment that are in shadow.
Furthermore, in this disclosure, the term “virtual reality” (VR) refers to an immersive artificial environment that a user experiences through sensory stimuli (such as sights and sounds) provided by a computer. Thus, in virtual reality, the user may not see any physical objects as they exist in the real world. Video games set in imaginary worlds are a common example of VR. The term “VR” also encompasses scenarios where the user is presented with a fully artificial environment in which some virtual objects' locations are based on the locations of corresponding physical objects as they relate to the user. Walk-through VR attractions are examples of this type of VR.
The term “extended reality” (XR) is a term that encompasses a spectrum of user experiences that includes virtual reality, mixed reality, augmented reality, and other user experiences that involve the presentation of at least some perceptible elements as existing in the user's environment that are not present in the user's real-world environment. Thus, the term “extended reality” may be considered a genus for MR and VR. XR visualizations may be presented in any of the techniques for presenting mixed reality discussed elsewhere in this disclosure or presented using techniques for presenting VR, such as VR goggles.
These mixed reality systems and methods can be part of an intelligent surgical planning system that includes multiple subsystems that can be used to enhance surgical outcomes. In addition to the preoperative and intraoperative applications discussed above, an intelligent surgical planning system can include postoperative tools to assist with patient recovery and which can provide information that can be used to assist with and plan future surgical revisions or surgical cases for other patients.
Accordingly, systems and methods are also described herein that can be incorporated into an intelligent surgical planning system, such as artificial intelligence systems to assist with planning, implants with embedded sensors (e.g., smart implants) to provide postoperative feedback for use by the healthcare provider and the artificial intelligence system, and mobile applications to monitor and provide information to the patient and the healthcare provider in real-time or near real-time.
Visualization tools are available that utilize patient image data to generate three-dimensional models of bone contours to facilitate preoperative planning for joint repairs and replacements. These tools allow surgeons to design and/or select surgical guides and implant components that closely match the patient's anatomy. These tools can improve surgical outcomes by customizing a surgical plan for each patient. An example of such a visualization tool for shoulder repairs is the BLUEPRINT™ system available from Wright Medical Technology, Inc. The BLUEPRINT™ system provides the surgeon with two-dimensional planar views of the bone repair region as well as a three-dimensional virtual model of the repair region. The surgeon can use the BLUEPRINT™ system to select, design or modify appropriate implant components, determine how best to position and orient the implant components and how to shape the surface of the bone to receive the components, and design, select or modify surgical guide tool(s) or instruments to carry out the surgical plan. The information generated by the BLUEPRINT™ system is compiled in a preoperative surgical plan for the patient that is stored in a database at an appropriate location (e.g., on a server in a wide area network, a local area network, or a global network) where it can be accessed by the surgeon or other care provider, including before and during the actual surgery.
Users of orthopedic surgical system 100 may use virtual planning system 102 to plan orthopedic surgeries. Users of orthopedic surgical system 100 may use planning support system 104 to review surgical plans generated using orthopedic surgical system 100. Manufacturing and delivery system 106 may assist with the manufacture and delivery of items needed to perform orthopedic surgeries. Intraoperative guidance system 108 provides guidance to assist users of orthopedic surgical system 100 in performing orthopedic surgeries. Medical education system 110 may assist with the education of users, such as healthcare professionals, patients, and other types of individuals. Pre- and postoperative monitoring system 112 may assist with monitoring patients before and after the patients undergo surgery. Predictive analytics system 114 may assist healthcare professionals with various types of predictions. For example, predictive analytics system 114 may apply artificial intelligence techniques to determine a classification of a condition of an orthopedic joint, e.g., a diagnosis, determine which type of surgery to perform on a patient and/or which type of implant to be used in the procedure, determine types of items that may be needed during the surgery, and so on.
The subsystems of orthopedic surgical system 100 (i.e., virtual planning system 102, planning support system 104, manufacturing and delivery system 106, intraoperative guidance system 108, medical education system 110, pre- and postoperative monitoring system 112, and predictive analytics system 114) may include various systems. The systems in the subsystems of orthopedic surgical system 100 may include various types of computing systems and computing devices, including server computers, personal computers, tablet computers, smartphones, display devices, Internet of Things (IoT) devices, visualization devices (e.g., mixed reality (MR) visualization devices, virtual reality (VR) visualization devices, holographic projectors, or other devices for presenting extended reality (XR) visualizations), surgical tools, and so on. A holographic projector, in some examples, may project a hologram for general viewing by multiple users or a single user without a headset, rather than viewing only by a user wearing a headset. For example, virtual planning system 102 may include an MR visualization device and one or more server devices, planning support system 104 may include one or more personal computers and one or more server devices, and so on. A computing system is a set of one or more computing devices configured to operate as a system. In some examples, one or more devices may be shared between two or more of the subsystems of orthopedic surgical system 100. For instance, in the previous examples, virtual planning system 102 and planning support system 104 may include the same server devices.
In the example of
Many variations of orthopedic surgical system 100 are possible in accordance with techniques of this disclosure. Such variations may include more or fewer subsystems than the version of orthopedic surgical system 100 shown in
In the example of
In the example of
In some examples, multiple users can simultaneously use MR system 212. For example, MR system 212 can be used in a spectator mode in which multiple users each use their own visualization devices so that the users can view the same information at the same time and from the same point of view. In some examples, MR system 212 may be used in a mode in which multiple users each use their own visualization devices so that the users can view the same information from different points of view.
In some examples, processing device(s) 210 can provide a user interface to display data and receive input from users at healthcare facility 204. Processing device(s) 210 may be configured to control visualization device 213 to present a user interface. Furthermore, processing device(s) 210 may be configured to control visualization device 213 to present virtual images, such as 3D virtual models, 2D images, and so on. Processing device(s) 210 can include a variety of different processing or computing devices, such as servers, desktop computers, laptop computers, tablets, mobile phones and other electronic computing devices, or processors within such devices. In some examples, one or more of processing device(s) 210 can be located remote from healthcare facility 204. In some examples, processing device(s) 210 reside within visualization device 213. In some examples, at least one of processing device(s) 210 is external to visualization device 213. In some examples, one or more processing device(s) 210 reside within visualization device 213 and one or more of processing device(s) 210 are external to visualization device 213.
In the example of
Network 208 may be equivalent to network 116. Network 208 can include one or more wide area networks, local area networks, and/or global networks (e.g., the Internet) that connect preoperative surgical planning system 202 and MR system 212 to storage system 206. Storage system 206 can include one or more databases that can contain patient information, medical information, patient image data, and parameters that define the surgical plans. For example, medical images of the patient's diseased or damaged bone typically are generated preoperatively in preparation for an orthopedic surgical procedure. The medical images can include images of the relevant bone(s) taken along the sagittal plane and the coronal plane of the patient's body. The medical images can include X-ray images, magnetic resonance imaging (MRI) images, computerized tomography (CT) images, ultrasound images, and/or any other type of 2D or 3D image that provides information about the relevant surgical area. Storage system 206 also can include data identifying the implant components selected for a particular patient (e.g., type, size, etc.), surgical guides selected for a particular patient, and details of the surgical procedure, such as entry points, cutting planes, drilling axes, reaming depths, etc. Storage system 206 can be a cloud-based storage system (as shown) or can be located at healthcare facility 204 or at the location of preoperative surgical planning system 202 or can be part of MR system 212 or visualization device (VD) 213, as examples.
MR system 212 can be used by a surgeon before (e.g., preoperatively) or during the surgical procedure (e.g., intraoperatively) to create, review, verify, update, modify and/or implement a surgical plan. In some examples, MR system 212 may also be used after the surgical procedure (e.g., postoperatively) to review the results of the surgical procedure, assess whether revisions are required, or perform other postoperative tasks. To that end, MR system 212 may include a visualization device 213 that may be worn by the surgeon and (as will be explained in further detail below) is operable to display a variety of types of information, including a 3D virtual image of the patient's diseased, damaged, or postsurgical joint and details of the surgical plan, such as a 3D virtual image of the prosthetic implant components selected for the surgical plan, 3D virtual images of entry points for positioning the prosthetic components, alignment axes and cutting planes for aligning cutting or reaming tools to shape the bone surfaces, or drilling tools to define one or more holes in the bone surfaces, in the surgical procedure to properly orient and position the prosthetic components, surgical guides and instruments and their placement on the damaged joint, and any other information that may be useful to the surgeon to implement the surgical plan. MR system 212 can generate images of this information that are perceptible to the user of the visualization device 213 before and/or during the surgical procedure.
In some examples, MR system 212 includes multiple visualization devices (e.g., multiple instances of visualization device 213) so that multiple users can simultaneously see the same images and share the same 3D scene. In some such examples, one of the visualization devices can be designated as the master device and the other visualization devices can be designated as observers or spectators. Any observer device can be re-designated as the master device at any time, as may be desired by the users of MR system 212.
In this way,
The virtual surgical plan may include a 3D virtual model corresponding to the anatomy of interest of the particular patient and a 3D model of a prosthetic component matched to the particular patient to repair the anatomy of interest or selected to repair the anatomy of interest. Furthermore, in the example of
In some examples, visualization device 213 is configured such that the user can manipulate the user interface (which is visually perceptible to the user when the user is wearing or otherwise using visualization device 213) to request and view details of the virtual surgical plan for the particular patient, including a 3D virtual model of the anatomy of interest (e.g., a 3D virtual bone model of the anatomy of interest, such as a glenoid bone or a humeral bone) and/or a 3D model of the prosthetic component selected to repair an anatomy of interest. In some such examples, visualization device 213 is configured such that the user can manipulate the user interface so that the user can view the virtual surgical plan intraoperatively, including (at least in some examples) the 3D virtual model of the anatomy of interest (e.g., a 3D virtual bone model of the anatomy of interest). In some examples, MR system 212 can be operated in an augmented surgery mode in which the user can manipulate the user interface intraoperatively so that the user can visually perceive details of the virtual surgical plan projected in a real environment, e.g., on a real anatomy of interest of the particular patient. In this disclosure, the terms real and real world may be used in a similar manner. For example, MR system 212 may present one or more virtual objects that provide guidance for preparation of a bone surface and placement of a prosthetic implant on the bone surface. Visualization device 213 may present one or more virtual objects in a manner in which the virtual objects appear to be overlaid on an actual, real anatomical object of the patient, within a real-world environment, e.g., by displaying the virtual object(s) with actual, real-world patient anatomy viewed by the user through holographic lenses. For example, the virtual objects may be 3D virtual objects that appear to reside within the real-world environment with the actual, real anatomical object.
As described in this disclosure, orthopedic surgical system 100 (
Various workflows may exist within the surgical process of
Furthermore, the example of
The example of
Additionally, in the example of
Furthermore, in the example of
The example of
A virtual planning step (412) may follow the manual correction step in
Furthermore, in the example of
Additionally, in the example of
In the example of
Postoperative patient monitoring may occur after completion of the surgical procedure (420). During the postoperative patient monitoring step, healthcare outcomes of the patient may be monitored. Healthcare outcomes may include relief from symptoms, ranges of motion, complications, performance of implanted surgical items, and so on. Pre- and postoperative monitoring system 112 (
The medical consultation, case creation, preoperative patient monitoring, image acquisition, automatic processing, manual correction, and virtual planning steps of
As mentioned above, one or more of the subsystems of orthopedic surgical system 100 may include one or more mixed reality (MR) systems, such as MR system 212 (
In some examples, screen 520 may include see-through holographic lenses, sometimes referred to as waveguides, that permit a user to see real-world objects through (e.g., beyond) the lenses and also see holographic imagery projected into the lenses and onto the user's retinas by displays, such as liquid crystal on silicon (LCoS) display devices, which are sometimes referred to as light engines or projectors, operating as an example of a holographic projection system 538 within visualization device 213. In other words, visualization device 213 may include one or more see-through holographic lenses to present virtual images to a user. Hence, in some examples, visualization device 213 can operate to project 3D images onto the user's retinas via screen 520, e.g., formed by holographic lenses. In this manner, visualization device 213 may be configured to present a 3D virtual image to a user within a real-world view observed through screen 520, e.g., such that the virtual image appears to form part of the real-world environment. In some examples, visualization device 213 may be a Microsoft HOLOLENS™ headset, available from Microsoft Corporation, of Redmond, Washington, USA, or a similar device, such as, for example, a similar MR visualization device that includes waveguides. The HOLOLENS™ device can be used to present 3D virtual objects via holographic lenses, or waveguides, while permitting a user to view actual objects in a real-world scene, i.e., in a real-world environment, through the holographic lenses.
Although the example of
Visualization device 213 can also generate a user interface (UI) 522 that is visible to the user, e.g., as holographic imagery projected into see-through holographic lenses as described above. For example, UI 522 can include a variety of selectable widgets 524 that allow the user to interact with a mixed reality (MR) system, such as MR system 212 of
Visualization device 213 can also include a transceiver 528 to connect visualization device 213 to a processing device 510 and/or to network 208 and/or to a computing cloud, such as via a wired communication protocol or a wireless protocol, e.g., Wi-Fi, Bluetooth, etc. Visualization device 213 also includes a variety of sensors to collect sensor data, such as one or more optical camera(s) 530 (or other optical sensors) and one or more depth camera(s) 532 (or other depth sensors), mounted to, on or within frame 518. In some examples, the optical sensor(s) 530 are operable to scan the geometry of the physical environment in which a user of MR system 212 is located (e.g., an operating room) and collect two-dimensional (2D) optical image data (either monochrome or color). Depth sensor(s) 532 are operable to provide 3D image data, such as by employing time of flight, stereo or other known or future-developed techniques for determining depth and thereby generating image data in three dimensions. Other sensors can include motion sensors 533 (e.g., inertial measurement unit (IMU) sensors, accelerometers, etc.) to assist with tracking movement.
MR system 212 processes the sensor data so that geometric, environmental, textural, or other types of landmarks (e.g., corners, edges or other lines, walls, floors, objects) in the user's environment or “scene” can be defined and movements within the scene can be detected. As an example, the various types of sensor data can be combined or fused so that the user of visualization device 213 can perceive 3D images that can be positioned, or fixed and/or moved within the scene. When a 3D image is fixed in the scene, the user can walk around the 3D image, view the 3D image from different perspectives, and manipulate the 3D image within the scene using hand gestures, voice commands, gaze line (or direction) and/or other control inputs. As another example, the sensor data can be processed so that the user can position a 3D virtual object (e.g., a bone model) on an observed physical object in the scene (e.g., a surface, the patient's real bone, etc.) and/or orient the 3D virtual object with other virtual images displayed in the scene. In some examples, the sensor data can be processed so that the user can position and fix a virtual representation of the surgical plan (or other widget, image or information) onto a surface, such as a wall of the operating room. Yet further, in some examples, the sensor data can be used to recognize surgical instruments and the position and/or location of those instruments.
Visualization device 213 may include one or more processors 514 and memory 516, e.g., within frame 518 of the visualization device. In some examples, one or more external computing resources 536 process and store information, such as sensor data, instead of or in addition to in-frame processor(s) 514 and memory 516. In this way, data processing and storage may be performed by one or more processors 514 and memory 516 within visualization device 213 and/or some of the processing and storage requirements may be offloaded from visualization device 213. Hence, in some examples, one or more processors that control the operation of visualization device 213 may be within visualization device 213, e.g., as processor(s) 514. Alternatively, in some examples, at least one of the processors that controls the operation of visualization device 213 may be external to visualization device 213, e.g., as processor(s) 210. Likewise, operation of visualization device 213 may, in some examples, be controlled in part by a combination of one or more processors 514 within the visualization device and one or more processors 210 external to visualization device 213.
For instance, in some examples, when visualization device 213 is in the context of
In some examples, MR system 212 can also include user-operated control device(s) 534 that allow the user to operate MR system 212, use MR system 212 in spectator mode (either as master or observer), interact with UI 522 and/or otherwise provide commands or requests to processing device(s) 210 or other systems connected to network 208. As examples, control device(s) 534 can include a microphone, a touch pad, a control panel, a motion sensor or other types of control input devices with which the user can interact.
Speakers 604, in some examples, may form part of sensory devices 526 shown in
In some examples, a user may interact with and control visualization device 213 in a variety of ways. For example, microphones 606, and associated speech recognition processing circuitry or software, may recognize voice commands spoken by the user and, in response, perform any of a variety of operations, such as selection, activation, or deactivation of various functions associated with surgical planning, intra-operative guidance, or the like. As another example, one or more cameras or other optical sensors 530 of sensors 614 may detect and interpret gestures to perform operations as described above. As a further example, sensors 614 may sense gaze direction and perform various operations as described elsewhere in this disclosure. In some examples, input devices 608 may receive manual input from a user, e.g., via a handheld controller including one or more buttons, a keypad, a touchscreen, joystick, trackball, and/or other manual input media, and perform, in response to the manual user input, various operations as described above.
As discussed above, surgical lifecycle 300 may include a preoperative phase 302 (
Processor(s) 702 may process information at virtual planning system 701. Processor(s) 702 may be implemented in any of a variety of circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof.
Power supply 704 may provide power to one or more components of virtual planning system 701. For example, power supply 704 may provide electrical power to processor(s) 702, communication device(s) 706, display device(s) 708, input device(s) 710, output device(s) 712, and storage device(s) 714.
Communication device(s) 706 may facilitate communication between virtual planning system 701 and various other devices and systems. For instance, communication devices 706 may facilitate communication between virtual planning system 701 and any of planning support system 104, manufacturing and delivery system 106, intraoperative guidance system 108, medical education system 110, monitoring system 112, and predictive analytics system 114 of
Display device(s) 708 may be configured to display information to a user of virtual planning system 701. For instance, display devices 708 may display a graphical user interface (GUI) via which virtual planning system 701 may convey information. Examples of display devices 708 include, but are not limited to, liquid crystal displays (LCDs), organic light emitting diode (OLED) displays, plasma displays, projectors, or other types of display screens on which images are perceptible to a user.
Input device(s) 710 may be configured to receive input at virtual planning system 701. Examples of input devices 710 include, but are not limited to, user input devices (e.g., keyboards, mice, microphones, touchscreens, etc.) and sensors (e.g., photosensors, temperature sensors, pressure sensors, etc.).
Output device(s) 712 may be configured to provide output from virtual planning system 701. Examples of output devices 712 include, but are not limited to, speakers, lights, haptic output devices, display devices (e.g., display devices 708 may, in some examples, be considered an output device), communication devices (e.g., communication devices 706 may, in some examples, be considered an output device), or any other device capable of producing a user-perceptible signal.
Storage device(s) 714 may be configured to store information at virtual planning system 701. Examples of storage devices 714 include, but are not limited to, random access memory (RAM), hard drives (e.g., solid state or otherwise), optical drives, or any other device capable of storing information. In some examples, storage devices 714 may be considered to be non-transitory computer-readable storage media. As shown in
Surgery planning module 718 may facilitate the planning of surgical procedures. For instance, surgery planning module 718 may facilitate the preoperative creation of a surgical plan. A surgical plan created with surgery planning module 718 may specify one or more of: a surgery type, an implant type, an implant location, and/or any other aspects of a surgical procedure. One example of surgery planning module 718 is the BLUEPRINT™ system.
As discussed in further detail below, surgery planning module 718 may invoke/execute or otherwise utilize one or more of machine-learned models 720 to aid in the planning of a surgical procedure. For instance, surgery planning module 718 may invoke a particular machine-learned model of machine-learned models 720 to recommend/predict/estimate a particular aspect of a surgical procedure. As one example, surgery planning module 718 may use one or more of machine-learned models 720 to determine feasibility scores and select one or more implants based on the feasibility scores. As another example, surgery planning module 718 may use one or more of machine-learned models 720 to determine information indicative of dimensions of an implant. As another example, surgery planning module 718 may use one or more of machine-learned models 720 to determine whether a selected surgical option is among a set of recommended surgical options. As another example, surgery planning module 718 may use one or more of machine-learned models 720 to estimate an amount of operating room time for a surgical procedure. Additional details of machine-learned models 720 are discussed below with reference to
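A minimal, hypothetical sketch of how a planning module might dispatch case features to separately trained models is shown below; the registry keys and the Model protocol are illustrative assumptions, not the actual interface of surgery planning module 718 or machine-learned models 720.

```python
# Illustrative sketch of a planning module dispatching to named
# machine-learned models (feasibility scoring, OR-time estimation).
from typing import Mapping, Protocol, Sequence


class Model(Protocol):
    """Hypothetical minimal interface for a machine-learned model."""
    def predict(self, features: Sequence[float]) -> float: ...


class SurgeryPlanningModule:
    def __init__(self, models: Mapping[str, Model]):
        # e.g. {"feasibility": <model>, "or_time": <model>}
        self.models = models

    def score_feasibility(self, case_features: Sequence[float]) -> float:
        # Invoke the feasibility model for the current surgical case.
        return self.models["feasibility"].predict(case_features)

    def estimate_or_time_minutes(self, case_features: Sequence[float]) -> float:
        # Invoke the operating-room-time model for the current case.
        return self.models["or_time"].predict(case_features)
```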
Surgery planning module 718 may perform operations described using software, hardware, firmware, or a mixture of hardware, software, and/or firmware residing in and/or executing at virtual planning system 701. Virtual planning system 701 may execute module 718 and models 720 with one or more of processors 702. Virtual planning system 701 may execute surgery planning module 718 and machine-learned models 720 as a virtual machine executing on underlying hardware. Surgery planning module 718 and machine-learned models 720 may execute as a service or component of an operating system or computing platform. Surgery planning module 718 and machine-learned models 720 may execute as one or more executable programs at an application layer of a computing platform. Surgery planning module 718 and machine-learned models 720 may alternatively be arranged remotely from, and be remotely accessible to, virtual planning system 701, for instance, as one or more network services operating in a network cloud. Although surgery planning module 718 is described as a module, surgery planning module 718 may be implemented using one or more modules or other software architectures.
In the example of
Additionally, a surgical plan may be selected based on the pathology (804). The surgical plan is a plan to address the pathology. For instance, in the example where the area of interest is the patient's shoulder, the surgical plan may be selected from an anatomical shoulder arthroplasty, a reverse shoulder arthroplasty, a post-trauma shoulder arthroplasty, or a revision to a previous shoulder arthroplasty. The surgical plan may then be tailored to the patient (806). For instance, tailoring the surgical plan may involve selecting and/or sizing surgical items needed to perform the selected surgical plan. Additionally, the surgical plan may be tailored to the patient in order to address issues specific to the patient, such as the presence of osteophytes. As described in detail elsewhere in this disclosure, one or more users may use mixed reality systems of orthopedic surgical system 100 to tailor the surgical plan to the patient.
The surgical plan may then be reviewed (808). For instance, a consulting surgeon may review the surgical plan before the surgical plan is executed. As described in detail elsewhere in this disclosure, one or more users may use mixed reality (MR) systems of orthopedic surgical system 100 to review the surgical plan. In some examples, a surgeon may modify the surgical plan using an MR system by interacting with a UI and displayed elements, e.g., to select a different procedure, change the sizing, shape or positioning of implants, or change the angle, depth or amount of cutting or reaming of the bone surface to accommodate an implant.
Additionally, in the example of
The input data may include one or more features that are associated with an instance or an example. In some implementations, the one or more features associated with the instance or example can be organized into a feature vector. In some implementations, the output data can include one or more predictions. Predictions can also be referred to as inferences. Thus, given features associated with a particular instance, machine-learned model 902 can output a prediction for such instance based on the features.
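For example, under the assumption of a few hypothetical patient features, organizing an instance's features into a feature vector might look like the following sketch:

```python
# Minimal sketch: organize an instance's features into a feature vector.
# The feature names and values are illustrative assumptions.
import numpy as np

patient_record = {"age": 67, "bmi": 28.4, "bone_density": 0.91, "prior_surgery": 1}
feature_order = ["age", "bmi", "bone_density", "prior_surgery"]
feature_vector = np.array([patient_record[name] for name in feature_order])
print(feature_vector)  # e.g. [67.   28.4   0.91  1.  ]
```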
Machine-learned model 902 can be or include one or more of various different types of machine-learned models. In particular, in some implementations, machine-learned model 902 can perform classification, regression, clustering, anomaly detection, recommendation generation, and/or other tasks.
In some implementations, machine-learned model 902 can perform various types of classification based on the input data. For example, machine-learned model 902 can perform binary classification or multiclass classification. In binary classification, the output data can include a classification of the input data into one of two different classes. In multiclass classification, the output data can include a classification of the input data into one (or more) of more than two classes. The classifications can be single label or multi-label. Machine-learned model 902 may perform discrete categorical classification in which the input data is simply classified into one or more classes or categories.
In some implementations, machine-learned model 902 can perform classification in which machine-learned model 902 provides, for each of one or more classes, a numerical value descriptive of a degree to which it is believed that the input data should be classified into the corresponding class. In some instances, the numerical values provided by machine-learned model 902 can be referred to as “confidence scores” that are indicative of a respective confidence associated with classification of the input into the respective class. In some implementations, the confidence scores can be compared to one or more thresholds to render a discrete categorical prediction. In some implementations, only a certain number of classes (e.g., one) with the relatively largest confidence scores can be selected to render a discrete categorical prediction.
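The following sketch illustrates both ways of turning per-class confidence scores into a discrete prediction; the class names and the 0.75 threshold are illustrative assumptions.

```python
# Minimal sketch: derive discrete predictions from confidence scores,
# either by thresholding or by keeping the single highest-scoring class.
confidence_scores = {"osteoarthritis": 0.82, "cuff_tear": 0.55, "fracture": 0.10}
threshold = 0.75

# Multi-label style: keep every class whose confidence clears the threshold.
above_threshold = [c for c, s in confidence_scores.items() if s >= threshold]

# Single-label style: keep only the class with the largest confidence.
top_class = max(confidence_scores, key=confidence_scores.get)

print(above_threshold)  # ['osteoarthritis']
print(top_class)        # osteoarthritis
```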
Machine-learned model 902 may output a probabilistic classification. For example, machine-learned model 902 may predict, given a sample input, a probability distribution over a set of classes. Thus, rather than outputting only the most likely class to which the sample input should belong, machine-learned model 902 can output, for each class, a probability that the sample input belongs to such class. In some implementations, the probability distribution over all possible classes can sum to one. In some implementations, a Softmax function or other type of function or layer can be used to squash a set of real values respectively associated with the possible classes to a set of real values in the range (0, 1) that sum to one.
In some examples, the probabilities provided by the probability distribution can be compared to one or more thresholds to render a discrete categorical prediction. In some implementations, only a certain number of classes (e.g., one) with the relatively largest predicted probability can be selected to render a discrete categorical prediction.
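As a concrete sketch of the Softmax squashing described above (the raw per-class scores are arbitrary placeholders):

```python
# Minimal sketch: a Softmax squashes real-valued class scores into
# probabilities in (0, 1) that sum to one.
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    shifted = scores - scores.max()   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

scores = np.array([2.0, 0.5, -1.0])   # raw per-class scores (logits)
probs = softmax(scores)
print(probs, probs.sum())             # class probabilities summing to 1.0
```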
In cases in which machine-learned model 902 performs classification, machine-learned model 902 may be trained using supervised learning techniques. For example, machine-learned model 902 may be trained on a training dataset that includes training examples labeled as belonging (or not belonging) to one or more classes. Further details regarding supervised training techniques are provided below in the descriptions of
In some implementations, machine-learned model 902 can perform regression to provide output data in the form of a continuous numeric value. The continuous numeric value can correspond to any number of different metrics or numeric representations, including, for example, currency values, scores, or other numeric representations. As examples, machine-learned model 902 can perform linear regression, polynomial regression, or nonlinear regression. As examples, machine-learned model 902 can perform simple regression or multiple regression. As described above, in some implementations, a Softmax function or other function or layer can be used to squash a set of real values respectively associated with two or more possible classes to a set of real values in the range (0, 1) that sum to one.
Machine-learned model 902 may perform various types of clustering. For example, machine-learned model 902 can identify one or more previously-defined clusters to which the input data most likely corresponds. Machine-learned model 902 may identify one or more clusters within the input data. That is, in instances in which the input data includes multiple objects, documents, or other entities, machine-learned model 902 can sort the multiple entities included in the input data into a number of clusters. In some implementations in which machine-learned model 902 performs clustering, machine-learned model 902 can be trained using unsupervised learning techniques.
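A minimal sketch of this kind of clustering, here using k-means on a handful of illustrative two-feature points:

```python
# Minimal sketch: sort entities in the input data into clusters with
# k-means (an unsupervised technique). The data points are illustrative.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # e.g. [0 0 1 1] -- two clusters of two points each
```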
Machine-learned model 902 may perform anomaly detection or outlier detection. For example, machine-learned model 902 can identify input data that does not conform to an expected pattern or other characteristic (e.g., as previously observed from previous input data). As examples, the anomaly detection can be used for fraud detection or system failure detection.
In some implementations, machine-learned model 902 can provide output data in the form of one or more recommendations. For example, machine-learned model 902 can be included in a recommendation system or engine. As an example, given input data that describes previous outcomes for certain entities (e.g., a score, ranking, or rating indicative of an amount of success or enjoyment), machine-learned model 902 can output a suggestion or recommendation of one or more additional entities that, based on the previous outcomes, are expected to have a desired outcome (e.g., elicit a score, ranking, or rating indicative of success or enjoyment). As one example, given input data descriptive of a patient, a recommendation system, such as orthopedic surgical system 100 of
Machine-learned model 902 may, in some cases, act as an agent within an environment. For example, machine-learned model 902 can be trained using reinforcement learning, which will be discussed in further detail below.
In some implementations, machine-learned model 902 can be a parametric model while, in other implementations, machine-learned model 902 can be a non-parametric model. In some implementations, machine-learned model 902 can be a linear model while, in other implementations, machine-learned model 902 can be a non-linear model.
As described above, machine-learned model 902 can be or include one or more of various different types of machine-learned models. Examples of such different types of machine-learned models are provided below for illustration. One or more of the example models described below can be used (e.g., combined) to provide the output data in response to the input data. Additional models beyond the example models provided below can be used as well.
In some implementations, machine-learned model 902 can be or include one or more classifier models such as, for example, linear classification models; quadratic classification models; etc. Machine-learned model 902 may be or include one or more regression models such as, for example, simple linear regression models; multiple linear regression models; logistic regression models; stepwise regression models; multivariate adaptive regression splines; locally estimated scatterplot smoothing models; etc.
In some examples, machine-learned model 902 can be or include one or more decision tree-based models such as, for example, classification and/or regression trees; iterative dichotomiser 3 decision trees; C4.5 decision trees; chi-squared automatic interaction detection decision trees; decision stumps; conditional decision trees; etc.
Machine-learned model 902 may be or include one or more kernel machines. In some implementations, machine-learned model 902 can be or include one or more support vector machines. Machine-learned model 902 may be or include one or more instance-based learning models such as, for example, learning vector quantization models; self-organizing map models; locally weighted learning models; etc. In some implementations, machine-learned model 902 can be or include one or more nearest neighbor models such as, for example, k-nearest neighbor classification models; k-nearest neighbor regression models; etc. Machine-learned model 902 can be or include one or more Bayesian models such as, for example, naive Bayes models; Gaussian naive Bayes models; multinomial naive Bayes models; averaged one-dependence estimators; Bayesian networks; Bayesian belief networks; hidden Markov models; etc.
In some implementations, machine-learned model 902 can be or include one or more artificial neural networks (also referred to simply as neural networks). A neural network can include a group of connected nodes, which also can be referred to as neurons or perceptrons. A neural network can be organized into one or more layers. Neural networks that include multiple layers can be referred to as “deep” networks. A deep network can include an input layer, an output layer, and one or more hidden layers positioned between the input layer and the output layer. The nodes of the neural network can be fully connected or non-fully connected.
Machine-learned model 902 can be or include one or more feed forward neural networks. In feed forward networks, the connections between nodes do not form a cycle. For example, each connection can connect a node from an earlier layer to a node from a later layer.
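As an illustration, the following is a minimal sketch of a two-layer feed forward network in which every connection runs from a node in an earlier layer to a node in a later layer; the layer sizes, weights, and input values are placeholders chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder layer sizes: 4 input features, 8 hidden nodes, 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    # Information flows strictly forward: input layer -> hidden layer -> output layer.
    hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return hidden @ W2 + b2                # linear output layer

print(forward(np.array([0.5, -1.2, 3.0, 0.0])))
```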
In some instances, machine-learned model 902 can be or include one or more recurrent neural networks. In some instances, at least some of the nodes of a recurrent neural network can form a cycle. Recurrent neural networks can be especially useful for processing input data that is sequential in nature. In particular, in some instances, a recurrent neural network can pass or retain information from a previous portion of the input data sequence to a subsequent portion of the input data sequence through the use of recurrent or directed cyclical node connections.
In some examples, sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different times). For example, a recurrent neural network can analyze sensor data versus time to detect or predict a swipe direction, to perform handwriting recognition, etc. Sequential input data may include words in a sentence (e.g., for natural language processing, speech detection or processing, etc.); notes in a musical composition; sequential actions taken by a user (e.g., to detect or predict sequential application usage); sequential object states; etc.
Example recurrent neural networks include long short-term memory (LSTM) recurrent neural networks; gated recurrent units; bi-directional recurrent neural networks; continuous time recurrent neural networks; neural history compressors; echo state networks; Elman networks; Jordan networks; recursive neural networks; Hopfield networks; fully recurrent networks; sequence-to-sequence configurations; etc.
In some implementations, machine-learned model 902 can be or include one or more convolutional neural networks. In some instances, a convolutional neural network can include one or more convolutional layers that perform convolutions over input data using learned filters.
Filters can also be referred to as kernels. Convolutional neural networks can be especially useful for vision problems such as when the input data includes imagery such as still images or video. However, convolutional neural networks can also be applied for natural language processing.
In some examples, machine-learned model 902 can be or include one or more generative networks such as, for example, generative adversarial networks. Generative networks can be used to generate new data such as new images or other content.
Machine-learned model 902 may be or include an autoencoder. In some instances, the aim of an autoencoder is to learn a representation (e.g., a lower-dimensional encoding) for a set of data, typically for the purpose of dimensionality reduction. For example, in some instances, an autoencoder can seek to encode the input data and provide output data that reconstructs the input data from the encoding. An autoencoder may be used for learning generative models of data. In some instances, the autoencoder can include additional losses beyond reconstructing the input data.
Machine-learned model 902 may be or include one or more other forms of artificial neural networks such as, for example, deep Boltzmann machines; deep belief networks; stacked autoencoders; etc. Any of the neural networks described herein can be combined (e.g., stacked) to form more complex networks.
One or more neural networks can be used to provide an embedding based on the input data. For example, the embedding can be a representation of knowledge abstracted from the input data into one or more learned dimensions. In some instances, embeddings can be a useful source for identifying related entities. In some instances, embeddings can be extracted from the output of the network, while in other instances embeddings can be extracted from any hidden node or layer of the network (e.g., a close to final but not final layer of the network). Embeddings can be useful for performing auto-suggestion of a next video, product suggestions, entity or object recognition, etc. In some instances, embeddings can be useful inputs for downstream models. For example, embeddings can be useful to generalize input data (e.g., search queries) for a downstream model or processing system.
Machine-learned model 902 may include one or more clustering models such as, for example, k-means clustering models; k-medians clustering models; expectation maximization models; hierarchical clustering models; etc.
In some implementations, machine-learned model 902 can perform one or more dimensionality reduction techniques such as, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
In some implementations, machine-learned model 902 can perform or be subjected to one or more reinforcement learning techniques such as Markov decision processes; dynamic programming; Q functions or Q-learning; value function approaches; deep Q-networks; differentiable neural computers; asynchronous advantage actor-critics; deterministic policy gradient; etc.
In some implementations, machine-learned model 902 can be an autoregressive model. In some instances, an autoregressive model can specify that the output data depends linearly on its own previous values and on a stochastic term. In some instances, an autoregressive model can take the form of a stochastic difference equation. One example autoregressive model is WaveNet, which is a generative model for raw audio.
In some implementations, machine-learned model 902 can include or form part of a multiple model ensemble. As one example, bootstrap aggregating can be performed, which can also be referred to as “bagging.” In bootstrap aggregating, a training dataset is split into a number of subsets (e.g., through random sampling with replacement) and a plurality of models are respectively trained on the number of subsets. At inference time, respective outputs of the plurality of models can be combined (e.g., through averaging, voting, or other techniques) and used as the output of the ensemble.
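The following is a minimal sketch of bootstrap aggregating using decision tree regressors; the dataset, number of models, and targets are synthetic placeholders used only to show the bootstrap-and-average pattern described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # placeholder feature matrix
y = X[:, 0] * 2.0 + rng.normal(size=200)   # placeholder targets

# Train each model on a bootstrap sample (random sampling with replacement).
models = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))

# At inference time, combine the respective outputs by averaging.
x_new = rng.normal(size=(1, 5))
prediction = np.mean([m.predict(x_new) for m in models])
print(prediction)
```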
One example ensemble is a random forest, which can also be referred to as a random decision forest. Random forests are an ensemble learning method for classification, regression, and other tasks. Random forests are generated by producing a plurality of decision trees at training time. In some instances, at inference time, the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees can be used as the output of the forest. Random decision forests can correct for decision trees' tendency to overfit their training set.
Another example ensemble technique is stacking, which can, in some instances, be referred to as stacked generalization. Stacking includes training a combiner model to blend or otherwise combine the predictions of several other machine-learned models. Thus, a plurality of machine-learned models (e.g., of same or different type) can be trained based on training data. In addition, a combiner model can be trained to take the predictions from the other machine-learned models as inputs and, in response, produce a final inference or prediction. In some instances, a single-layer logistic regression model can be used as the combiner model.
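As an illustration of stacked generalization, the following minimal sketch trains two base classifiers and a single-layer logistic regression combiner on their predictions; the base model choices and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))              # placeholder features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # placeholder labels

# Train the base models on the first half of the data.
base_models = [DecisionTreeClassifier().fit(X[:150], y[:150]),
               KNeighborsClassifier().fit(X[:150], y[:150])]

# Train a logistic regression combiner on the base models' predictions.
base_preds = np.column_stack([m.predict_proba(X[150:])[:, 1] for m in base_models])
combiner = LogisticRegression().fit(base_preds, y[150:])

# Final inference: the base predictions feed the combiner.
new_preds = np.column_stack([m.predict_proba(X[:5])[:, 1] for m in base_models])
print(combiner.predict(new_preds))
```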
Another example ensemble technique is boosting. Boosting can include incrementally building an ensemble by iteratively training weak models and then adding to a final strong model. For example, in some instances, each new model can be trained to emphasize the training examples that previous models misinterpreted (e.g., misclassified). For example, a weight associated with each of such misinterpreted examples can be increased. One common implementation of boosting is AdaBoost, which can also be referred to as Adaptive Boosting. Other example boosting techniques include LPBoost; TotalBoost; BrownBoost; XGBoost; MadaBoost; LogitBoost; gradient boosting; etc. Furthermore, any of the models described above (e.g., regression models and artificial neural networks) can be combined to form an ensemble. As an example, an ensemble can include a top level machine-learned model or a heuristic function to combine and/or weight the outputs of the models that form the ensemble.
In some implementations, multiple machine-learned models (e.g., models that form an ensemble) can be linked and trained jointly (e.g., through backpropagation of errors sequentially through the model ensemble). However, in some implementations, only a subset (e.g., one) of the jointly trained models is used for inference.
In some implementations, machine-learned model 902 can be used to preprocess the input data for subsequent input into another model. For example, machine-learned model 902 can perform dimensionality reduction techniques and embeddings (e.g., matrix factorization, principal components analysis, singular value decomposition, word2vec/GLOVE, and/or related approaches); clustering; and even classification and regression for downstream consumption. Many of these techniques have been discussed above and will be further discussed below.
As discussed above, machine-learned model 902 can be trained or otherwise configured to receive the input data and, in response, provide the output data. The input data can include different types, forms, or variations of input data. As examples, in various implementations, the input data can include features that describe the content (or portion of content) initially selected by the user, e.g., content of a user-selected document or image, links pointing to the user selection, links within the user selection relating to other files available on the device or in the cloud, metadata of the user selection, etc. Additionally, with user permission, the input data can include the context of user usage, obtained either from the app itself or from other sources. Examples of usage context include breadth of share (sharing publicly, or with a large group, or privately, or with a specific person), context of share, etc. When permitted by the user, additional input data can include the state of the device, e.g., the location of the device, the apps running on the device, etc.
One example way in which to receive the input data is through an application programming interface (API). As an example, the input data may be stored in a cloud for one or more hospitals. An endpoint for the cloud may retrieve data stored in the cloud in response to a request formatted in accordance with the API for the cloud. Processor(s) 702 may generate the request for specific data stored in the cloud in accordance with the API for the cloud, and communication device(s) 706 may transmit the request to the endpoint for the cloud. In return, communication device(s) 706 may receive the requested data that processor(s) 702 stores as the input data for training machine-learned model 902.
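The following is a minimal sketch of requesting de-identified case data from a cloud endpoint over an API; the endpoint URL, query parameters, response format, and field names are assumptions for illustration and do not represent an actual hospital cloud API or its authentication scheme.

```python
import requests

# Hypothetical endpoint and query parameters; a real deployment would follow the
# cloud provider's API specification, authentication, and de-identification rules.
ENDPOINT = "https://example-hospital-cloud.invalid/api/v1/cases"
params = {
    "procedure": "shoulder_arthroplasty",
    "fields": "implant,outcome",
    "deidentified": "true",
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# Assumed response format: a JSON list of case records, stored as input data
# for training machine-learned model 902.
training_records = response.json()
print(len(training_records))
```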
Utilization of the API for accessing the input data may be beneficial for various reasons, such as protecting patient privacy. The API may not allow for a query to access private information that can identify a patient, such as name, address, etc. Hence, the endpoint may not access the private information from the cloud. Accordingly, when training machine-learned model 902, the input data may be limited to protect patient privacy.
In some implementations, machine-learned model 902 can receive and use the input data in its raw form. In some implementations, the raw input data can be preprocessed. Thus, in addition or alternatively to the raw input data, machine-learned model 902 can receive and use the preprocessed input data.
In some implementations, preprocessing the input data can include extracting one or more additional features from the raw input data. For example, feature extraction techniques can be applied to the input data to generate one or more new, additional features. Example feature extraction techniques include edge detection; corner detection; blob detection; ridge detection; scale-invariant feature transform; motion detection; optical flow; Hough transform; etc.
In some implementations, the extracted features can include or be derived from transformations of the input data into other domains and/or dimensions. As an example, the extracted features can include or be derived from transformations of the input data into the frequency domain. For example, wavelet transformations and/or fast Fourier transforms can be performed on the input data to generate additional features.
In some implementations, the extracted features can include statistics calculated from the input data or certain portions or dimensions of the input data. Example statistics include the mode, mean, maximum, minimum, or other metrics of the input data or portions thereof.
In some implementations, as described above, the input data can be sequential in nature. In some instances, the sequential input data can be generated by sampling or otherwise segmenting a stream of input data. As one example, frames can be extracted from a video. In some implementations, sequential data can be made non-sequential through summarization.
As another example preprocessing technique, portions of the input data can be imputed. For example, additional synthetic input data can be generated through interpolation and/or extrapolation.
As another example preprocessing technique, some or all of the input data can be scaled, standardized, normalized, generalized, and/or regularized. Example regularization techniques include ridge regression; least absolute shrinkage and selection operator (LASSO); elastic net; least-angle regression; cross-validation; L1 regularization; L2 regularization; etc. As one example, some or all of the input data can be normalized by subtracting the mean across a given dimension's feature values from each individual feature value and then dividing by the standard deviation or other metric.
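As an illustration of the normalization described above, the following minimal sketch subtracts each feature dimension's mean from the individual feature values and divides by the standard deviation; the feature matrix and its column meanings are placeholders.

```python
import numpy as np

X = np.array([[63.0, 24.1],    # placeholder rows: e.g., age, body-mass index
              [71.0, 29.8],
              [58.0, 22.4]])

# Normalize each dimension: subtract the column mean, divide by the column standard deviation.
X_normalized = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_normalized)
```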
As another example preprocessing technique, some or all of the input data can be quantized or discretized. In some cases, qualitative features or variables included in the input data can be converted to quantitative features or variables. For example, one-hot encoding can be performed.
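For example, the following minimal sketch converts a qualitative feature to quantitative one-hot columns; the category names (modeled loosely on implant types mentioned elsewhere in this disclosure) and the observed values are illustrative placeholders.

```python
import numpy as np

categories = ["anatomical_stemmed", "anatomical_stemless", "reverse"]
values = ["reverse", "anatomical_stemless", "reverse"]  # placeholder observations

# One column per category; a single 1.0 marks the observed category in each row.
one_hot = np.array([[1.0 if v == c else 0.0 for c in categories] for v in values])
print(one_hot)
```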
In some examples, dimensionality reduction techniques can be applied to the input data prior to input into machine-learned model 902. Several examples of dimensionality reduction techniques are provided above, including, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
In some implementations, during training, the input data can be intentionally deformed in any number of ways to increase model robustness, generalization, or other qualities. Example techniques to deform the input data include adding noise; changing color, shade, or hue; magnification; segmentation; amplification; etc.
In response to receipt of the input data, machine-learned model 902 can provide the output data. The output data can include different types, forms, or variations of output data. As examples, in various implementations, the output data can include content, either stored locally on the user device or in the cloud, that is relevantly shareable along with the initial content selection.
As discussed above, in some implementations, the output data can include various types of classification data (e.g., binary classification, multiclass classification, single label, multi-label, discrete classification, regressive classification, probabilistic classification, etc.) or can include various types of regressive data (e.g., linear regression, polynomial regression, nonlinear regression, simple regression, multiple regression, etc.). In other instances, the output data can include clustering data, anomaly detection data, recommendation data, or any of the other forms of output data discussed above.
In some implementations, the output data can influence downstream processes or decision making. As one example, in some implementations, the output data can be interpreted and/or acted upon by a rules-based regulator.
The present disclosure provides systems and methods that include or otherwise leverage one or more machine-learned models to suggest content, either stored locally on the user device or in the cloud, that is relevantly shareable along with the initial content selection based on features of the initial content selection. Any of the different types or forms of input data described above can be combined with any of the different types or forms of machine-learned models described above to provide any of the different types or forms of output data described above.
The systems and methods of the present disclosure can be implemented by or otherwise executed on one or more computing devices. Example computing devices include user computing devices (e.g., laptops, desktops, and mobile computing devices such as tablets, smartphones, wearable computing devices, etc.); embedded computing devices (e.g., devices embedded within a vehicle, camera, medical scanner, image sensor, industrial machine, satellite, gaming console or controller, or home appliance such as a refrigerator, thermostat, energy meter, home energy manager, smart home assistant, etc.); server computing devices (e.g., database servers, parameter servers, file servers, mail servers, print servers, web servers, game servers, application servers, etc.); dedicated, specialized model processing or training devices; virtual computing devices; other computing devices or computing infrastructure; or combinations thereof.
In yet other implementations, different respective portions of machine-learned model 902 can be stored at and/or implemented by some combination of a user computing device; an embedded computing device; a server computing device; etc. In other words, portions of machine-learned model 902 may be distributed in whole or in part amongst client device 1102 and server system 1104.
Devices 1102 and 1104 may perform graph processing techniques or other machine learning techniques using one or more machine learning platforms, frameworks, and/or libraries, such as, for example, TensorFlow, Caffe/Caffe2, Theano, Torch/PyTorch, MXNet, CNTK, etc. Devices 1102 and 1104 may be distributed at different physical locations and connected via one or more networks, including network 1100. If configured as distributed computing devices, devices 1102 and 1104 may operate according to sequential computing architectures, parallel computing architectures, or combinations thereof. In one example, distributed computing devices can be controlled or guided through use of a parameter server.
In some implementations, multiple instances of machine-learned model 902 can be parallelized to provide increased processing throughput. For example, the multiple instances of machine-learned model 902 can be parallelized on a single processing device or computing device or parallelized across multiple processing devices or computing devices.
Each computing device that implements machine-learned model 902 or other aspects of the present disclosure can include a number of hardware components that enable performance of the techniques described herein. For example, each computing device can include one or more memory devices that store some or all of machine-learned model 902. For example, machine-learned model 902 can be a structured numerical representation that is stored in memory. The one or more memory devices can also include instructions for implementing machine-learned model 902 or performing other operations. Example memory devices include RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
Each computing device can also include one or more processing devices that implement some or all of machine-learned model 902 and/or perform other related operations. Example processing devices include one or more of: a central processing unit (CPU); a visual processing unit (VPU); a graphics processing unit (GPU); a tensor processing unit (TPU); a neural processing unit (NPU); a neural processing engine; a core of a CPU, VPU, GPU, TPU, NPU or other processing device; an application specific integrated circuit (ASIC); a field programmable gate array (FPGA); a co-processor; a controller; or combinations of the processing devices described above. Processing devices can be embedded within other hardware components such as, for example, an image sensor, accelerometer, etc.
Hardware components (e.g., memory devices and/or processing devices) can be spread across multiple physically distributed computing devices and/or virtually distributed computing systems.
In some implementations, machine-learned model 902 may be trained in an offline fashion or an online fashion. In offline training (also known as batch learning), machine-learned model 902 is trained on the entirety of a static set of training data. In online learning, machine-learned model 902 is continuously trained (or re-trained) as new training data becomes available (e.g., while the model is used to perform inference).
Model trainer 1208 may perform centralized training of machine-learned model 902 (e.g., based on a centrally stored dataset). In other implementations, decentralized training techniques such as distributed training, federated learning, or the like can be used to train, update, or personalize machine-learned model 902.
Machine-learned model 902 described herein can be trained according to one or more of various different training types or techniques. For example, in some implementations, machine-learned model 902 can be trained by model trainer 1208 using supervised learning, in which machine-learned model 902 is trained on a training dataset that includes instances or examples that have labels. The labels can be manually applied by experts, generated through crowdsourcing, or provided by other techniques (e.g., by physics-based or complex mathematical models). In some implementations, if the user has provided consent, the training examples can be provided by the user computing device. In some implementations, this process can be referred to as personalizing the model.
Training data 1302 used by training process 1300 can include, upon user permission for use of such data for training, anonymized usage logs of sharing flows, e.g., content items that were shared together, bundled content pieces already identified as belonging together, e.g., from entities in a knowledge graph, etc. In some implementations, training data 1302 can include examples of input data 1304 that have been assigned labels 1306 that correspond to output data 1308.
In some implementations, machine-learned model 902 can be trained by optimizing an objective function, such as objective function 1310. For example, in some implementations, objective function 1310 may be or include a loss function that compares (e.g., determines a difference between) output data generated by the model from the training data and labels (e.g., ground-truth labels) associated with the training data. For example, the loss function can evaluate a sum or mean of squared differences between the output data and the labels. In some examples, objective function 1310 may be or include a cost function that describes a cost of a certain outcome or output data. Other examples of objective function 1310 can include margin-based techniques such as, for example, triplet loss or maximum-margin training.
One or more of various optimization techniques can be performed to optimize objective function 1310. For example, the optimization technique(s) can minimize or maximize objective function 1310. Example optimization techniques include Hessian-based techniques and gradient-based techniques, such as, for example, coordinate descent; gradient descent (e.g., stochastic gradient descent); subgradient methods; etc. Other optimization techniques include black box optimization techniques and heuristics.
In some implementations, backward propagation of errors can be used in conjunction with an optimization technique (e.g., gradient based techniques) to train machine-learned model 902 (e.g., when machine-learned model 902 is a multi-layer model such as an artificial neural network). For example, an iterative cycle of propagation and model parameter (e.g., weights) update can be performed to train machine-learned model 902. Example backpropagation techniques include truncated backpropagation through time, Levenberg-Marquardt backpropagation, etc.
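The following minimal sketch illustrates the objective-and-update cycle described above for a simple single-layer (linear) model: a mean squared error objective is computed between the model output and the labels, and gradient descent updates the parameters. A multi-layer model would propagate these gradients backward through its layers; the training data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                   # placeholder training inputs
y = X @ np.array([1.5, -2.0, 0.5]) + 0.3        # placeholder ground-truth labels

w, b = np.zeros(3), 0.0
learning_rate = 0.1

for _ in range(200):
    predictions = X @ w + b
    error = predictions - y
    loss = np.mean(error ** 2)            # objective: mean of squared differences
    grad_w = 2.0 * X.T @ error / len(y)   # gradient of the loss w.r.t. the weights
    grad_b = 2.0 * error.mean()
    w -= learning_rate * grad_w           # gradient-descent parameter update
    b -= learning_rate * grad_b

print(loss, w, b)
```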
In some implementations, machine-learned model 902 described herein can be trained using unsupervised learning techniques. Unsupervised learning can include inferring a function to describe hidden structure from unlabeled data. For example, a classification or categorization may not be included in the data. Unsupervised learning techniques can be used to produce machine-learned models capable of performing clustering, anomaly detection, learning latent variable models, or other tasks.
Machine-learned model 902 can be trained using semi-supervised techniques which combine aspects of supervised learning and unsupervised learning. Machine-learned model 902 can be trained or otherwise generated through evolutionary techniques or genetic algorithms. In some implementations, machine-learned model 902 described herein can be trained using reinforcement learning. In reinforcement learning, an agent (e.g., model) can take actions in an environment and learn to maximize rewards and/or minimize penalties that result from such actions. Reinforcement learning can differ from the supervised learning problem in that correct input/output pairs are not presented, nor sub-optimal actions explicitly corrected.
In some implementations, one or more generalization techniques can be performed during training to improve the generalization of machine-learned model 902. Generalization techniques can help reduce overfitting of machine-learned model 902 to the training data. Example generalization techniques include dropout techniques; weight decay techniques; batch normalization; early stopping; subset selection; stepwise selection; etc.
In some implementations, machine-learned model 902 described herein can include or otherwise be impacted by a number of hyperparameters, such as, for example, learning rate, number of layers, number of nodes in each layer, number of leaves in a tree, number of clusters; etc. Hyperparameters can affect model performance. Hyperparameters can be hand selected or can be automatically selected through application of techniques such as, for example, grid search; black box optimization techniques (e.g., Bayesian optimization, random search, etc.); gradient-based optimization; etc. Example techniques and/or tools for performing automatic hyperparameter optimization include Hyperopt; Auto-WEKA; Spearmint; Metric Optimization Engine (MOE); etc.
In some implementations, various techniques can be used to optimize and/or adapt the learning rate when the model is trained. Example techniques and/or tools for performing learning rate optimization or adaptation include Adagrad; Adaptive Moment Estimation (ADAM); Adadelta; RMSprop; etc.
In some implementations, transfer learning techniques can be used to provide an initial model from which to begin training of machine-learned model 902 described herein.
In some implementations, machine-learned model 902 described herein can be included in different portions of computer-readable code on a computing device. In one example, machine-learned model 902 can be included in a particular application or program and used (e.g., exclusively) by such particular application or program. Thus, in one example, a computing device can include a number of applications and one or more of such applications can contain its own respective machine learning library and machine-learned model(s).
In another example, machine-learned model 902 described herein can be included in an operating system of a computing device (e.g., in a central intelligence layer of an operating system) and can be called or otherwise used by one or more applications that interact with the operating system. In some implementations, each application can communicate with the central intelligence layer (and model(s) stored therein) using an application programming interface (API) (e.g., a common, public API across all applications).
In some implementations, the central intelligence layer can communicate with a central device data layer. The central device data layer can be a centralized repository of data for the computing device. The central device data layer can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components. In some implementations, the central device data layer can communicate with each device component using an API (e.g., a private API).
The technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination.
Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
In addition, the machine learning techniques described herein are readily interchangeable and combinable. Although certain example techniques have been described, many others exist and can be used in conjunction with aspects of the present disclosure.
A brief overview of example machine-learned models and associated techniques has been provided by the present disclosure. For additional details, readers should review the following references: Machine Learning: A Probabilistic Perspective (Murphy); Rules of Machine Learning: Best Practices for ML Engineering (Zinkevich); Deep Learning (Goodfellow); Reinforcement Learning: An Introduction (Sutton); and Artificial Intelligence: A Modern Approach (Norvig).
Orthopedic surgery can involve implanting one or more implant components into a patient to repair or replace a damaged or diseased joint of the patient. For example, orthopedic shoulder surgery may involve attaching a first implant component to a glenoid cavity of a patient and attaching a second implant component to a humerus of the patient. It is important for surgeons to select the correct implant components and to plan the surgery properly when planning an orthopedic surgery. Some selected implant components and some planned procedures, involving positioning, angles, etc., may limit patient range of motion, cause bone fractures, or loosen and detach from patients' bones.
Over time and with use, the implant may stop functioning in the desired way, or the patient's condition may worsen in a way that prevents the implant from functioning in the desired way. After the operational duration of the implant, additional corrective actions (e.g., surgery, such as revision surgery, or physical therapy) may be needed to alleviate symptoms of the patient condition.
An operational duration of an implant may refer to information (e.g., a prediction) indicative of how long the implant will operate before additional corrective actions are needed. As one example, the information indicative of the operational duration of an implant may include information indicative of a likelihood that the implant will serve a function of the implant for a certain amount of time. For example, the operational duration of an implant may be information indicating that there is a 95% likelihood that the implant will serve its function for 10 years (e.g., for a group of patients who have the implant at 10 years, 95% of the patients still have the implant). As another example, the operational duration of an implant may be information indicative of a likelihood that the implant will serve its function over a period of years (e.g., 99% likelihood that the implant will serve its function for 2 years, 99% likelihood that the implant will serve its function for 5 years, 95% likelihood that the implant will serve its function for 10 years, 90% likelihood that the implant will serve its function for 15 years, and so forth). As an example, the operational duration of the implant may be a histogram showing probability of duration for certain periods.
There may be various ways in which to qualify whether the implant will serve its function for a certain amount of time (e.g., whether there is efficacious or effective functioning of the implant). Example ways to determine the efficacious or effective function include determination of range of motion, tolerable or no pain, little to no dislocation, no implant breakage, and no infection.
The operational duration of an implant may be represented in other ways. As a few examples, the operational duration may be a particular time duration or range or classification (e.g., short, medium, long), with a likelihood or confidence of different durations. As described in more detail below, in some examples, rather than or in addition to a predicted duration for a particular selected implant, there may be a comparative ranking of other suitable implants by duration.
In some examples, the operational duration may be determined for the implant itself. However, various factors beyond just the size and shape of the implant may impact the operational duration, such as the overall surgical plan that includes the implant along with positioning (medialization, lateralization, angle, etc.) of the implant.
Implanting some selected implant components with a certain designed surgical procedure may require the patient to undergo additional corrective actions earlier than necessary. For instance, the operational duration of a first implant, given the implant characteristics of the first implant, surgical procedure, and the patient characteristics of the patient, may be longer than the operational duration of a second implant, given the implant characteristics of the second implant, surgical procedure, and the patient characteristics of the patient. In this example, if a surgeon were to implant the second implant, the patient may require corrective actions earlier than if the surgeon were to implant the first implant. Without knowledge of the operational duration, in some cases, the surgeon may recommend the patient take corrective action earlier than necessary to ensure that the implant does not go past its operational duration.
However, taking corrective action, especially earlier than necessary, may be undesirable. Corrective action by the patient may increase cost to the patient and require the patient to undergo surgery, which increases the chances of infection, further damage to the bone or surrounding bone or tissue, and the like.
While ensuring that the implant selected for implantation has the longest operational duration (or an acceptable operational duration above a threshold) may be important, there may be other factors that impact which implant a surgeon is to use. As one example, a first implant may have a longer operational duration than a second implant if implanted in a particular patient. However, to implant the first implant, the surgeon may need to perform a surgical procedure that may not be appropriate for the patient, or the surgeon may be less experienced or skilled on that procedure, so the type of procedure can be balanced against the duration. For instance, the selected surgical procedure may last longer than would be advisable for the patient.
This disclosure describes example techniques performed by a computing system to generate information indicative of the operational duration of an implant. The surgeon may then utilize the information indicative of the operational duration of the implant to determine which implant to use prior to the surgery or during the surgery.
For example, the computing system may utilize a machine-learned model to determine the information indicative of the operational duration of the implant. The machine-learned model is a computer-implemented tool that may analyze input data (e.g., patient characteristics and implant characteristics), utilizing computational processes of the computing system in a manner that extends beyond just know-how of the surgeon to generate an output of the information indicative of the operational duration of the implant. The surgeon skill and experience may be additional examples of input data for the machine-learned model. Some additional examples of the input data include data from publications showing a survival rate of implants for a specific group of patients and a published revision rate for a selected implant range.
The computing system may apply model parameters of the machine-learned model to the input data (e.g., categorize the input data based on the model parameters or generally perform a machine learning algorithm using the model parameters on the input data). The result of applying the model parameters of the machine-learned model may be the information indicative of the operational duration of the implant.
The computing system may generate the model parameters of the machine-learned model based on a machine learning dataset. Examples of the machine learning dataset include one or more of pre-operative scans of a set of patients, patient characteristics of the set of patients, information indicative of previous surgical plans (e.g., surgical procedures) using different implants, information indicative of surgical results, and surgeon characteristics.
For example, the computing system may receive pre-operational or intra-operational data for a large number of cases. The pre-operational or intra-operational data may include information indicative of a type of surgery, scans of patient anatomy, patient information such as age, gender, diseases, smoker or not, fatty infiltration at bony region, etc., and implant characteristics such as dimension (e.g., size and shape), manufacturing company, stemmed or stemless configuration, stem size if stemmed, implant for anatomical or reverse implantation, etc. The computing system may also receive post-operational data for the large number of cases. The post-operational data may be information indicative of surgical results such as length of surgery, success or failure of proper implantation, infection rate, length of time before further corrective action was taken post implant, etc.
Additional examples of machine learning datasets may be data from patients that have had the implant implanted, and their results from the implantation. For example, after a patient is implanted with an implant, the patient may be periodically questioned about the comfort of the implant and a physician may also periodically determine movement capability of the patient. As one example, the patient may be asked questions like whether there is pain in certain body parts (e.g., shoulder). The patient may be asked questions such as whether their day-to-day life is impacted (e.g., in their daily living, in their leisure or recreational activity, during sleep, and how high they can move their arm without pain). The physician may determine the forward flexion, abduction, external rotation, and internal rotation of the patient. The physician may also determine how much weight the patient can pull.
All of these replies may be associated with a numerical score that is scaled to determine a composite score for the patient. This composite score may be referred to as a “constant score.” The composite score may be indicative of the success of the implantation. In some examples, the composite score, or one or more of the numerical scores used to generate the composite score, may be machine learning datasets for training the machine-learned model. For example, each of the numerical scores used to generate the composite score may be indicative of how comfortable the patient is with the implantation, meaning that there is a lesser likelihood of needing corrective surgery soon. Utilizing the score information (e.g., the scores used to generate the composite score or the composite score itself) from a plurality of patients that have been previously implanted may be helpful in determining an implant that is appropriate for the current patient.
The computing system may train (e.g., in a supervised or unsupervised manner) the machine-learned model using the pre-operational or intra-operational data as known inputs and the post-operational data as known outputs. The result of the training of the machine-learned model may be the model parameters. The computing system may periodically update the model parameters based on pre-operational or intra-operational and post-operational data of implant surgeries that are subsequently performed.
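The following minimal sketch illustrates, under assumed feature and label layouts, supervised training on pre-/intra-operative inputs and post-operative outputs of the kind described above. The synthetic data, the specific feature columns, and the choice of a random forest regressor are assumptions for illustration rather than the system's actual dataset or estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical pre-/intra-operative inputs per case, e.g.:
# [age, smoker (0/1), fatty infiltration grade, stemmed (0/1), stem size]
X_preop = rng.normal(size=(500, 5))

# Hypothetical post-operative output per case: years before further corrective action.
y_postop = rng.uniform(2.0, 15.0, size=500)

# Supervised training: known inputs mapped to known outputs produce the model parameters.
duration_model = RandomForestRegressor(n_estimators=100).fit(X_preop, y_postop)

# The model parameters can later be updated by retraining on a dataset augmented
# with data from subsequently performed implant surgeries.
```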
With the model parameters, the machine-learned model may be configured to generate information indicative of an operational duration of an implant. In some examples, the machine-learned model may be configured to generate information indicative of respective operational durations of a plurality of implants. The surgeon may then select one of the implants based on the information indicative of the respective operational durations.
For example, the model parameters may define operations that the computing system, executing the machine-learned model, is to perform. The inputs to the machine-learned model may be patient characteristics such as age, gender, diseases, smoker or not, and bone status (e.g., fatty infiltration, fracture, arthritic, etc.), as a few non-limiting examples. Additional inputs to the machine-learned model may be implant characteristics such as type of implant (e.g., anatomical or reversed, stemmed or stemless, etc.) and parameters of the implant (e.g., stem size, polyethylene (PE) insert thickness, etc.). As more examples, inputs to the machine-learned model may be information of the surgical skill/experience of the surgeon. The machine-learned model receives these examples of input data and groups, classifies, or generally performs the operation of a machine learning algorithm using the model parameters to determine operational duration of the implants.
Accordingly, in one or more examples, in predicting operational duration, the machine-learned model may utilize, as inputs, characteristics of an implant such as size, shape, angle, surgical positioning, and material. The machine-learned model may also utilize, as inputs, parameters such as orthopedic measurements obtained from CT image segmentation of the patient's joint, as well as patient information such as age, gender, and shoulder classification (i.e., the type of shoulder problem, ranging from cuff tear to osteoarthritis). In some examples, the machine-learned model may also utilize, as input, physician information, such as preferences, experience or skill level, and preferred implants.
The output from the machine-learned model may be the operational duration of the implant. For instance, the operational duration may be a particular time duration or range of classification (e.g., short, medium, long). The particular duration or range of classification may be associated with a likelihood or confidence for different durations. For instance, there is a 95% likelihood that implant serves its function after 10 years. As described in more detail elsewhere in this disclosure, in addition to a predicted duration for a particular implant, the machine-learned model may perform its example operations for a plurality of implants and may provide a comparative ranking of other suitable implants by duration.
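As an illustration of outputting durations for a plurality of implants and ranking them, the following minimal sketch refits the hypothetical duration model from the training sketch above (so this block runs on its own) and compares two candidate implants for one patient. The feature layout, implant names, and values are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_preop = rng.normal(size=(500, 5))                  # hypothetical training inputs (see above)
y_postop = rng.uniform(2.0, 15.0, size=500)          # hypothetical durations in years
duration_model = RandomForestRegressor(n_estimators=100).fit(X_preop, y_postop)

patient_features = [68.0, 0.0, 2.0]      # assumed: age, smoker (0/1), fatty infiltration grade
candidate_implants = {
    "implant_A": [1.0, 10.0],            # assumed: stemmed (1), stem size
    "implant_B": [0.0, 0.0],             # assumed: stemless
}

predicted = {name: float(duration_model.predict([patient_features + feats])[0])
             for name, feats in candidate_implants.items()}

# Comparative ranking of the candidate implants by predicted duration (longest first).
for name, years in sorted(predicted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ~{years:.1f} years")
```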
Also, in some examples, the operational duration may be based on a particular surgical procedure (e.g., surgical plan). The operative technique (e.g., surgical procedure) may be different for different types of implants. The number of steps needed for the surgical procedure may be correlated with the operational duration of the implant. The example techniques may account for the number of steps needed for the surgical procedure in determining the operational duration of the implant.
In some examples, for the same implant and same patient, the machine-learned model may determine operational durations for the implant for different surgical procedures. For instance, for a first implant, the machine-learned model may generate operational duration information for a plurality of time periods (e.g., 2, 5, 10, and 15 years), and for each time period, the machine-learned model may generate operational duration information for different surgical procedures. The machine-learned model may repeat the process for a second implant, and so forth. In some examples, the machine-learned model may perform a subset of the example operations (e.g., generate duration information for only one time period). In general, the machine-learned model may determine different types of operational duration information using the techniques described in this disclosure, and the machine-learned model may determine all or a subset of the examples of the operational duration information.
As one example way in which the machine-learned model may operate, the machine-learned model, using the model parameters, may determine a classification based on the input data. The classification may be associated with a particular value for the operational duration. For instance, the model parameters may include a plurality of clusters for each of a plurality of implants, where each cluster represents a multi-dimensional range of values for the input data. Each of the clusters may be associated with a particular operational duration for respective implants. The machine-learned model may be configured to determine a cluster based on the input data and then determine the operational duration of the implant based on the determined cluster.
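The following minimal sketch illustrates this cluster-based operation: the input data is assigned to the nearest of several clusters, and the operational duration associated with that cluster is returned. The cluster centers, the features they cover, and the associated durations are made-up values for illustration, not learned model parameters.

```python
import numpy as np

# Hypothetical model parameters for one implant: cluster centers over
# (age, fatty infiltration grade) and the operational duration (years) per cluster.
cluster_centers = np.array([[55.0, 1.0],
                            [70.0, 2.0],
                            [80.0, 3.0]])
cluster_durations = np.array([15.0, 10.0, 6.0])

def predict_duration(input_features):
    # Determine the nearest cluster for the input data, then look up that
    # cluster's associated operational duration.
    distances = np.linalg.norm(cluster_centers - input_features, axis=1)
    return cluster_durations[np.argmin(distances)]

print(predict_duration(np.array([68.0, 2.0])))  # -> 10.0 years for this made-up input
```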
As another example way in which the machine-learned model may operate, the machine-learned model, using the model parameters, may scale a baseline operational duration value based on numerical representations of the input data, where the amount by which the machine-learned model scales the baseline operational duration value is based on the model parameters. For example, the baseline operational duration value for a particular implant may be 90% likelihood of serving its function for 10 years. Based on the input data and the model parameters, the machine-learned model may scale the 90% down to 80% or scale the 90% to 95%.
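The following minimal sketch illustrates scaling a baseline likelihood up or down based on numerical representations of the input data; the baseline value, the per-feature weights, and the standardized feature values are assumptions for illustration, not actual model parameters.

```python
import numpy as np

# Hypothetical baseline: 90% likelihood the implant serves its function for 10 years.
baseline_likelihood = 0.90

# Assumed model parameters: how much each standardized input feature shifts the baseline.
weights = np.array([-0.04, -0.06, 0.03])   # e.g., age, fatty infiltration, bone quality
features = np.array([1.2, 0.5, -0.3])      # standardized patient/implant inputs

scaled = baseline_likelihood + float(weights @ features)
scaled = min(max(scaled, 0.0), 1.0)         # keep the likelihood within [0, 1]
print(f"{scaled:.2f}")                      # here scaled down from 0.90 to about 0.81
```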
In some examples, the machine-learned model may be further configured to compare the operational duration of different implants based on patient characteristics and output a recommendation for an implant. For instance, the machine-learned model may further analyze, based on the model parameters, factors such as the length of the operation needed to implant, the cost of implantation, the risk of infection during the operation, the quality of life expectancy post-implant (e.g., based on a determination of the composite score or the scores used to form the composite score), and other such factors. For each of the implants, the machine-learned model may generate a feasibility score. The feasibility score may be indicative of how beneficial the implant would be to the patient. The machine-learned model may compare (e.g., as a weighted comparison) the feasibility score and the operational duration of each implant with other implants and output a particular implant as the recommended implant based on the comparison.
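The following minimal sketch illustrates such a weighted comparison: each implant's feasibility score is combined with its (normalized) operational duration, and the implant with the best combined score is output as the recommendation. The scores, durations, and weights are placeholder values for illustration only.

```python
# Hypothetical per-implant outputs from the machine-learned model:
# a feasibility score in [0, 1] and a predicted operational duration in years.
candidates = {
    "implant_A": {"feasibility": 0.82, "duration_years": 12.0},
    "implant_B": {"feasibility": 0.90, "duration_years": 9.0},
}

# Assumed weighting between feasibility and duration (duration normalized by MAX_YEARS).
FEASIBILITY_WEIGHT, DURATION_WEIGHT, MAX_YEARS = 0.6, 0.4, 20.0

def weighted_score(entry):
    return (FEASIBILITY_WEIGHT * entry["feasibility"]
            + DURATION_WEIGHT * entry["duration_years"] / MAX_YEARS)

recommended = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(recommended)  # the implant with the best weighted combination is recommended
```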
As described herein,
For instance, machine-learned model 902 may be configured to determine and output information indicative of an operational duration of an implant based on patient characteristics of a patient and implant characteristics of an implant, and in some examples, based on surgical procedure and/or surgeon experience. A surgeon may receive the information indicative of the operational duration and select an implant to use based on the information indicative of the operational duration. As described in more detail, machine-learned model 902 may generate information indicative of operational durations of a plurality of implants, and the surgeon may select an implant from the plurality of implants based on the information indicative of the operational durations.
A computing system, applying machine-learned model 902, may be configured to obtain patient characteristics of a patient and obtain implant characteristics of an implant. The patient characteristics may include one or more of the age of the patient, gender of the patient, diseases of the patient, whether the patient is a smoker, and fatty infiltration in tissue adjacent to the target bone where the implant is to be implanted. The patient characteristics may also include information such as the type of disease the patient is experiencing (e.g., for a shoulder problem, whether the problem is a cuff tear or osteoarthritis). The implant characteristics may include one or more of a type of implant and dimensions of the implant. For example, the implant characteristics may include information indicating whether the implant is for an anatomical or reversed implant procedure, whether the implant is stemmed or stemless, and the like. The implant characteristics may also include information indicating parameters of the implant such as stem size, polyethylene (PE) insert thickness, and the like. In some examples, the computing system, applying machine-learned model 902, may also be configured to obtain information of the surgical procedure (e.g., plan), including positioning of the implant. For instance, the surgical procedure may include information such as medialization and lateralization angles. Also, the surgical procedure may be different for different types of implants. The number of steps needed for the surgical procedure may be correlated with the operational duration of the implant. Machine-learned model 902 may account for the number of steps needed for the surgical procedure in determining the operational duration of the implant.
The computing system, applying machine-learned model 902, may be configured to determine information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics. For example, machine-learned model 902 may determine information indicative of a likelihood that the implant will serve a function of the implant for a certain amount of time. As one example, machine-learned model 902 may determine information such as there being a 90% likelihood that the implant will serve its function for 10 years. In this example, there is a 10% likelihood that the patient will need revision or some other form of corrective action in the first 10 years.
As another example, the operational duration of an implant may be information indicative of a likelihood that the implant will serve its function over a period of years (e.g., a 99% likelihood that the implant will serve its function for 2 years, a 99% likelihood that the implant will serve its function for 5 years, a 95% likelihood that the implant will serve its function for 10 years, a 90% likelihood that the implant will serve its function for 15 years, and so forth). As an example, the operational duration of the implant may be a histogram showing a probability of duration for certain periods.
In some examples, the operational duration information may be relative information, such as whether the operational duration is short, medium, or long. The operational duration information may be associated with a likelihood or confidence value (e.g., very likely that the operational duration of the implant is at least short term).
The operational duration information may be for an implant, and in some examples, for a specific surgical procedure. In some examples, the surgeon may utilize the operational duration information to assist with surgical planning (e.g., select the surgical procedure that provides the longest operational duration (or at least above a threshold duration) balanced with the highest likelihood and other factors such as a length of surgical procedure).
The computing system, applying machine-learned model 902, may be configured to output the information indicative of the operational duration of the implant (e.g., which may be a plurality of cooperative components such as a humeral head with stem and a glenoid plate). A health professional (e.g., one or more surgeon, nurse, clinician, etc.) may utilize the information indicative of the operational duration of the implant to select an implant to use for the surgery, as well as a surgical plan in some examples.
As an example, the computing system may be virtual planning system 701 of
A health professional (e.g., surgeon, nurse, clinician, etc.), as part of the pre-operative planning or intra-operative planning, may cause one or more processors 702 to execute surgery planning module 718 using one or more input devices 710. The health professional may enter, using one or more input devices 710, the patient characteristics and the implant characteristics. In some examples, a range of implant characteristics may be recommended by the system based on automated planning using segmentation and image processing. In this way, the computing system (e.g., virtual planning system 701) may obtain the patient characteristics and the implant characteristics. The health professional may also enter information of the surgeon (e.g., surgical experience, preferences, etc.).
Executing surgery planning module 718 may cause one or more processors 702 to perform operations defined by one or more machine-learned models 720, including operations defined by machine-learned model 902. For example, one or more processors 702 may determine information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics (e.g., information indicative of a likelihood that the implant will serve a function of the implant for a certain amount of time).
One or more output devices 712 of virtual planning system 701 may be configured to output information indicative of the operational duration of the implant. For example, in some examples, one or more display devices 708 may be part of output devices 712, and one or more display devices 708 may be configured to present information indicative of the operational duration of the implant. In some examples, one or more output devices 712 may include one or more communication devices 706. One or more communication devices 706 may output the information indicative of the operational duration of the implant to one or more visualization devices, such as visualization device 213. In such examples, visualization device 213 may be configured to display the information indicative of the operational duration of the implant (e.g., likelihood and duration values, likelihood histograms, ranking system, etc.).
The above example with respect to virtual planning system 701 is provided as one example and should not be considered limiting. For instance, other examples of a computing system such as computing device 1002 (
In some examples, server system 1104 of
As described above, a computing system, using machine-learned model 902, may be configured to determine information indicative of the operational duration of the implant. The information indicative of the operational duration of the implant may be information indicative of how long before corrective action may be needed. As one example, the operational duration of the implant may indicate a likelihood of the implant serving its function (e.g., restoring joint mobility, pain reduction, no dislocation, no implant fracture, etc.) for a certain amount of time. Examples of corrective action may include revision surgery (which may involve removal of the implant and implantation of a different type of implant with a different surgical procedure), replacement of the implant (e.g., removing and replacing with a similar implant), physical therapy to accommodate for the reduction in functionality of the implant, and the like.
As described above, there may be various ways in which to qualify whether the implant will serve its function for a certain amount of time, such as based on efficacious or effective function. Example ways to determine efficacious or effective function include determination of range of motion, tolerable or no pain, little to no dislocation, no implant breakage, and no infection.
For example, effective function of a joint may mean that a pain score for the patient is below a certain level. As another example, effective function may mean that an activity score associated with impact on day-to-day life is within a particular range. As another example, effective function may mean that the forward flexion score is greater than a particular angle and the abduction score is greater than a particular angle. As another example, a rotation score indicative of external rotation and internal rotation may be indicative of effective function. A power score indicative of an amount of weight that the patient can pull may be indicative of effective function. These various scores may be combined together to form a composite score, also referred to as a constant score.
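The following is a minimal sketch of combining the individual scores described above into a single composite score. The field names and weights are illustrative assumptions only and are not the scoring system of any particular clinical registry or the disclosure's own formula.

```python
# Hypothetical sub-scores combined into a composite score, each normalized
# to a 0..1 range where higher means better function.
from dataclasses import dataclass

@dataclass
class FunctionScores:
    pain: float             # higher = less pain
    activity: float         # impact on day-to-day life
    forward_flexion: float
    abduction: float
    rotation: float         # external and internal rotation
    power: float            # amount of weight the patient can pull

def composite_score(s: FunctionScores) -> float:
    # Illustrative weighting; a clinically validated composite (constant)
    # score would use its own established weights.
    motion = (s.forward_flexion + s.abduction + s.rotation) / 3.0
    return 0.25 * s.pain + 0.25 * s.activity + 0.30 * motion + 0.20 * s.power
```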
In one or more examples, the various scores or the composite score may be used as part of the training set for training machine-learned model 902. For instance, the various scores for patients that have already had the implant may be predictive of the duration of the implant in a current patient, such as being indicative of whether the current patient is predicted to find the implant satisfactory and, hence, have a lower likelihood of needing a replacement.
In some examples, machine-learned model 902 of the computing system may receive the patient characteristics and the implant characteristics and apply model parameters of the machine-learned model to the patient characteristics and the implant characteristics, as described in this disclosure with respect to
There may be various ways in which machine-learned model 902 may apply the model parameters to determine the information indicative of the operational duration of the implant. As one example, machine-learned model 902, using the model parameters, may determine a classification based on the patient characteristics and the implant characteristics. The classification may be associated with a particular value for the operational duration. For instance, the model parameters may include a plurality of clusters for each of a plurality of implants, where each cluster represents a multi-dimensional range of values for the patient characteristics and the implant characteristics. Each of the clusters may be associated with a particular operational duration for respective implants. Machine-learned model 902 may be configured to determine a cluster based on the patient characteristics and the implant characteristics and then determine the operational duration of the implant based on the determined cluster.
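The following is a minimal sketch, under assumed data layouts, of the cluster-based approach described above: each cluster centroid covers a region of combined patient/implant feature space and is associated with an operational duration observed for that region. The feature names, centroid values, and duration profiles are hypothetical.

```python
# Nearest-cluster lookup: features -> cluster -> operational duration profile.
import math
from typing import Dict, List, Tuple

Cluster = Tuple[List[float], Dict[int, float]]  # (centroid, {years: likelihood})

def nearest_cluster_duration(features: List[float],
                             clusters: List[Cluster]) -> Dict[int, float]:
    def dist(a: List[float], b: List[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, duration = min(clusters, key=lambda c: dist(features, c[0]))
    return duration

# Hypothetical clusters over [age, BMI, implant size].
clusters = [
    ([55.0, 24.0, 2.0], {10: 0.95, 15: 0.88}),
    ([72.0, 31.0, 3.0], {10: 0.85, 15: 0.70}),
]
print(nearest_cluster_duration([60.0, 26.0, 2.0], clusters))
```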
As another example, machine-learned model 902, using the model parameters, may scale a baseline operational duration value based on numerical representations of the patient characteristics and the implant characteristics, where the amount by which machine-learned model 902 scales the baseline operational duration value is based on the model parameters. For example, the baseline operational duration value for a particular implant may be 90% likelihood of serving its function for 10 years. Based on the input data and the model parameters, machine-learned model 902 may scale the 90% down to 80% or scale the 90% to 95%.
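The following is a minimal sketch of the baseline-scaling approach described above. The scaling weights are hypothetical stand-ins for model parameters that a trained model would learn from the machine learning dataset.

```python
# Scale a baseline likelihood (e.g., 0.90 for 10 years) up or down based on
# numerical representations of patient and implant characteristics.
def scaled_likelihood(baseline: float, features: dict, weights: dict) -> float:
    factor = 1.0
    for name, value in features.items():
        factor += weights.get(name, 0.0) * value
    return max(0.0, min(1.0, baseline * factor))

# Hypothetical parameters: smoking lowers the likelihood, good bone quality raises it.
weights = {"smoker": -0.05, "bone_quality": 0.02}
print(scaled_likelihood(0.90, {"smoker": 1, "bone_quality": 2}, weights))  # ~0.89
```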
As one example, machine-learned model 902 may utilize the model parameters generated from random forest machine-learning techniques. As another example, the model parameters may be for a neural network.
In the above examples, a computing system, using machine-learned model 902, may determine an operational duration for an implant. In some examples, the computing system, using machine-learned model 902, may determine respective operational durations for a plurality of implants. For instance, machine-learned model 902 may receive implant characteristics for each of a plurality of implants. For each implant of the plurality of implants, the computing system, using machine-learned model 902, may determine an operational duration. In some examples, for each implant and for each of a plurality of different surgical procedures (e.g., as input by the health professional or as automatically determined), machine-learned model 902 may determine an operational duration. The health professional may review the operational durations for each of the plurality of implants and select one of the implants. In examples where a surgical procedure is also a factor used in determining operational durations, the health professional may select one of the implants further based on the surgical procedure and operational duration or vice-versa (i.e., select surgical procedure based on operational duration for implant).
In some examples, the computing system, using machine-learned model 902, may compare the operational durations for each of the plurality of implants and select an implant of the plurality of implants based on the comparison. For instance, the computing system, using machine-learned model 902, may compare the likelihood values for each of the operational durations for each of the implants and select the implant having the highest likelihood value (e.g., implant having the highest likelihood of serving its function for a certain amount of time). In some examples, rather than relying only on the highest likelihood value, machine-learned model 902 may select the implant having a likelihood value that meets a threshold.
The computing system may then output information indicative of the operational duration of the selected implant as the recommended implant. The health professional may then choose to accept or reject the recommendation of the recommended implant.
In some examples, the computing system, using machine-learned model 902, may rank the implants based on the comparison. For instance, the computing system may output, for display, information indicative of the operational duration of each of the implants, but in an order most recommended to least recommended. The health professional may then review the ranking to select the implant.
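The following is a minimal sketch of the comparison, thresholding, and ranking described above, assuming each candidate implant carries a single likelihood of serving its function over the same time horizon. Implant names and values are illustrative.

```python
from typing import Dict, List

def rank_implants(likelihoods: Dict[str, float]) -> List[str]:
    """Order implants from most to least recommended by likelihood."""
    return sorted(likelihoods, key=likelihoods.get, reverse=True)

def acceptable_implants(likelihoods: Dict[str, float], threshold: float) -> List[str]:
    """Implants whose likelihood meets the threshold, leaving the final choice
    to other factors (e.g., feasibility) or to the health professional."""
    return [i for i in rank_implants(likelihoods) if likelihoods[i] >= threshold]

candidates = {"implant_A": 0.92, "implant_B": 0.88, "implant_C": 0.95}
print(rank_implants(candidates))            # ['implant_C', 'implant_A', 'implant_B']
print(acceptable_implants(candidates, 0.90))  # ['implant_C', 'implant_A']
```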
The operational duration of the implant may be one factor that machine-learned model 902 may utilize in recommending or ranking the implants. In some examples, the computing system, using machine-learned model 902, may be configured to compare the information indicative of the operational duration of each of the plurality of implants based on patient characteristics and/or surgical procedure.
For certain patients, implanting the implant with the longest operational duration may not be ideal. As an example, the surgical procedure for implanting the implant with the longest operational duration may not be safe for the patient given the patient characteristics. As another example, the implant with the longest operational duration may not be ideal for a patient given his or her life expectancy. As another example, implantation of the implant with the longest operational duration may result in lower quality of life as compared to another implant (e.g., more limited range of mobility as compared to another implant). There may be various other factors that impact which implant to select.
In some examples, machine-learned model 902 may utilize information of patient characteristics to further refine the determination of which implant to recommend. For example, machine-learned model 902 may determine a feasibility score for each implant. The feasibility score may be indicative of how beneficial the implant is for the patient. The feasibility score may be based on a combination of a plurality of patient factors. Examples of the plurality of patient factors may include two or more of length of surgery, risk of infection, mobility post-implant, recovery from surgery, price of implant, and the like. For instance, the feasibility score may be based on a prediction of the composite score or of the various scores used to generate the composite score, such as one or more of the pain score, the activity score, the forward flexion score, the abduction score, a rotation score indicative of external rotation and internal rotation, and a power score.
The computing system, using machine-learned model 902, may be configured to determine a value for one or more of the patient factors and determine a feasibility score based on a combination (e.g., weighted average) of the values of the patient factors. The computing system may then output the feasibility score for the implant in addition to the operational duration. In some examples, rather than determining a single feasibility score, machine-learned model 902 may be configured to determine values for the one or more patient factors. In such examples, the values for the one or more patient factors may be each considered as examples of a feasibility score. That is, in some examples, the feasibility score refers to a single feasibility score based on a combination of values for patient factors, and in some examples, each of the values of the patient factors may be considered as a feasibility score.
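The following is a minimal sketch of a feasibility score computed as a weighted average of patient-factor values, as described above. The factor names, normalization, and weights are illustrative assumptions.

```python
# Weighted average of patient-factor values, each normalized to 0..1 where
# higher means more favorable for the patient.
def feasibility_score(factors: dict, weights: dict) -> float:
    total_weight = sum(weights[name] for name in factors)
    return sum(weights[name] * value for name, value in factors.items()) / total_weight

factors = {
    "surgery_length": 0.7,    # shorter surgery scores higher
    "infection_risk": 0.8,    # lower risk scores higher
    "post_op_mobility": 0.6,
    "recovery_time": 0.5,
}
weights = {"surgery_length": 1.0, "infection_risk": 2.0,
           "post_op_mobility": 2.0, "recovery_time": 1.0}
print(round(feasibility_score(factors, weights), 3))  # 0.667
```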
Machine-learned model 902 may be configured to output information indicative of an operational duration for each of the implants (and possibly for each surgical procedure) and information indicative of the one or more feasibility scores. The health professional may then select a particular implant based on the operational duration and the feasibility score. In some examples, machine-learned model 902 may be configured to recommend a particular implant based on the operational duration and respective feasibility scores for the plurality of implants. For example, the computing system, using machine-learned model 902, may be configured to select one of the plurality of implants based on the comparison of the information indicative of the operational duration of each of the plurality of implants and the comparison of the one or more feasibility scores.
As an example, a first patient factor may be how long the surgery would take, a second patient factor may be the chances of infection, a third patient factor may be a range of mobility (or more generally, one of the example scores described above), and a fourth patient factor may be length of recovery. Machine-learned model 902 may determine how long the surgery would take to implant a first implant (e.g., a value for a first patient factor), the chances of infection (e.g., a value for a second patient factor), the range of mobility (e.g., a value for a third patient factor), and a length of recovery time (e.g., a value for a fourth patient factor). Based on the values for the first, second, third, and fourth patient factors, machine-learned model 902 may determine a feasibility score for the first implant. Machine-learned model 902 may repeat these operations for the plurality of implants.
Machine-learned model 902 may utilize the operational duration and feasibility score as factors in determining which implant to recommend. For example, machine-learned model 902 may weigh the operational duration information and the feasibility score to recommend a particular implant, and in some examples, accounting for the surgical procedure. For example, if the implant having the highest likelihood of serving its function for a certain period of time also has the highest feasibility score, then machine-learned model 902 may recommend that implant. However, if an implant has a relatively high likelihood of serving its function for a certain period of time but has a relatively low feasibility score, machine-learned model 902 may be configured to recommend another implant with a lower likelihood of serving its function for the certain period of time but with a higher feasibility score. How much the operational duration and feasibility scores are weighted may be a matter of design choice and may be different for different types of surgeries and different patients.
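The following is a minimal sketch of weighing operational duration against feasibility when recommending an implant, as described above. The relative weight is a design choice and is shown here only as an assumption.

```python
from typing import Dict

def recommend(durations: Dict[str, float],
              feasibilities: Dict[str, float],
              duration_weight: float = 0.6) -> str:
    """Recommend the implant with the best weighted combination of the
    likelihood of serving its function and the feasibility score."""
    def combined(implant: str) -> float:
        return (duration_weight * durations[implant]
                + (1.0 - duration_weight) * feasibilities[implant])
    return max(durations, key=combined)

durations = {"implant_A": 0.95, "implant_B": 0.90}
feasibilities = {"implant_A": 0.55, "implant_B": 0.85}
print(recommend(durations, feasibilities))  # implant_B wins despite lower duration
```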
In one or more examples, machine-learned model 902 may be trained using model trainer 1208 (
As one example, the machine learning dataset may include information such as operational duration for a plurality of implants that were previously implanted in different patients. The machine learning dataset may include information such as length of surgery, mobility of patient after surgery, whether there was an infection or not, the length of recovery, and the like.
For example, training data 1302 may include information such as patient ages, weight, height, smoker or not, types of diseases, etc., types of implants, and the like. Training data 1302 may include surgical experience. Example input data 1304 may be the actual information about the patient ages, weight, height, smoker or not, types of diseases, and the implant used in patients. Some additional examples of the input data 1304 include data from publications showing survival rates of implants for specific groups of patients and published revision rates for a selected implant range. Output data 1308 may be the operational duration of the implants used for the patients that make up the example input data 1304. Output data 1308 may also include information such as the length of surgery, mobility of the patient after surgery, whether there was an infection or not, the length of recovery, the actual surgical procedure used, and the like.
Objective function 1310 may utilize the example input data 1304 and the output data 1308 to determine model parameters for machine-learned model 902. For example, the model parameters may be weights and biases, or other example parameters described with respect to
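The following is a minimal, hypothetical training sketch using a random forest (one of the techniques mentioned earlier) with scikit-learn. The feature columns, encoding, and tiny dataset are illustrative assumptions; the disclosure does not prescribe a specific library or schema.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Columns: age, BMI, smoker (0/1), implant type (encoded), stem length (mm).
X_train = np.array([
    [55, 24.0, 0, 1, 90],
    [72, 31.0, 1, 2, 110],
    [64, 27.5, 0, 1, 100],
])
# Target: observed operational duration in years before corrective action.
y_train = np.array([14.0, 8.0, 11.5])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict the operational duration for a new patient/implant combination.
print(model.predict(np.array([[60, 26.0, 0, 2, 105]])))
```

In practice the training set would be far larger and would be refreshed periodically so that the model parameters reflect newly reported outcomes.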
In this way, this disclosure describes example techniques utilizing computational processes for selecting an implant for implantation. The example techniques described in this disclosure are based on machine learning datasets that may be extremely vast, with more information than could be accessed or processed by a surgeon without access to the computing system that uses machine-learned model 902. For instance, surgeons with limited experience may not have sufficient know-how to accurately determine which implant, from among multiple implants, to use, given an objective for prolonged operation and delayed need for revision surgery. Even experienced surgeons may not have access to, and may not be able to comprehend, the vast information available that is used to train machine-learned model 902.
For example, even if a surgeon were to access and review the information from the dataset, the surgeon may still not be able, given the vast amount of information, to construct a surgical technique, e.g., including selection and positioning of implant(s), that accurately accounts for all the different patient information and implant characteristics. However, machine-learned model 902 may be configured to utilize the vast amount of information as a way to determine the operational duration of an implant and select an implant, in some examples, as the recommended implant. Moreover, using machine-learned model 902 may allow for a scalable, extensible computing system that can be updated with new information periodically to create a better version of machine-learned model 902. A surgeon may not have the ability to update his or her understanding of what the operational duration or the recommended implant should be, much less update it as quickly as machine-learned model 902 can be updated.
As illustrated in
One or more processors 702 (e.g., using machine-learned model 902) may obtain patient characteristics of a patient (1400). For example, the patient characteristics include one or more of age of the patient, gender of the patient, diseases of the patient, whether the patient is a smoker, and fatty infiltration of tissue adjacent a target bone where the implant is to be implanted. A health professional may provide information of the patient characteristics using input devices 710, as an example.
One or more processors 702 may obtain implant characteristics of an implant (1402). In some examples, the implant characteristics of the implant include one or more of a type of implant and dimensions of the implant (e.g., for reverse or anatomical, stemmed or stemless, etc.). A health professional may provide information of the implant characteristics using input devices 710 as an example. In some examples, one or more processors 702 may obtain implant characteristics of a plurality of implants to perform the example techniques on a plurality of implants.
One or more processors 702 may determine information indicative of an operational duration of the implant based on the patient characteristics and the implant characteristics (1404). As one example, one or more processors 702 may determine information indicative of the likelihood that the implant will serve a function of the implant for a certain amount of time. In examples where there is a plurality of implants, one or more processors 702 may determine information indicative of an operational duration for one or more (including all) of the implants.
There may be various ways in which one or more processors 702 determine the information indicative of the operational duration of the implant. As one example, one or more processors 702 may receive, with machine-learned model 902, the patient characteristics and the implant characteristics, apply model parameters of machine-learned model 902 to the patient characteristics and the implant characteristics, and determine the information indicative of the operational duration based on the application of the model parameters of machine-learned model 902.
In some examples, the model parameters of machine-learned model 902 are generated based on a machine learning dataset. For example, the machine learning dataset includes one or more of pre-operative scans of a set of patients, patient characteristics of the set of patients, information indicative of previous surgical plans using the same or different implants, and information indicative of surgical results.
One or more output devices 712 may be configured to output the information indicative of the operational duration of the implant (1406). For example, one or more processors 702 may output the information indicative of the operational duration of the implant to one or more output devices 712. One or more output devices 712 may display the operational duration of the implant (e.g., in examples where display device 708 is part of output devices 712). In some examples, one or more output devices 712 may output the information indicative of the operational duration of the implant to another device, such as visualization device 213, for display.
In examples where there is a plurality of implants, output devices 712 or visualization device 213 may output information indicative of the operational duration for the plurality of implants. However, in some examples, one or more processors 702 may compare the information indicative of the operational duration for the plurality of implants to select a recommendation of the implant.
In the example illustrated in
In some examples, one or more processors 702, with output devices 712, may output the information indicative of the operational duration of the plurality of implants and/or information indicative of the surgical procedures. For example, output devices 712 may output information such as short, medium, long with likelihood or confidence values for the operational duration, a value indicative of a likelihood over a period of time, or a histogram of likelihood values at certain time periods, as a few examples. In some examples, output devices 712 may output information such as the surgical procedure associated with achieving the operational duration (e.g., implant location, medialization, lateralization angles, etc.).
However, in some examples, rather than or in addition to outputting the information indicative of the operational duration of the plurality of implants, one or more processors 702 may compare the information indicative of the operational duration of the plurality of implants or the plurality of surgical procedures (1506). For example, one or more processors 702 may compare the values of each implant indicating the likelihood that the implant will serve its function (e.g., provide mobility while remaining implanted with minimal pain or discomfort) for a certain amount of time.
One or more processors 702 may select one of the plurality of implants or surgical procedure based on the comparison (1508). For example, one or more processors 702 may select the implant with the highest likelihood of serving its function for the certain amount of time. In some examples, output devices 712 may output information indicative of the selected implant.
In some examples, as a result of the comparison, one or more processors 702 may rank each of the implants based on the operational duration. For example, one or more processors 702 may rank first the implant with the highest likelihood of serving its function for the certain amount of time, followed by the second implant with second highest likelihood, and so forth.
In some examples, as a result of the comparison, one or more processors 702 may rank each of the surgical procedures (e.g., which one takes the least amount of time, which one is safest, etc.). One or more output devices 712 may be configured to output the ranked list or lists.
In the above example, one or more processors 702 may determine operational duration and rank implants or select an implant based on the operational duration. However, in some examples, one or more processors 702 may also determine one or more feasibility scores to rank implants or surgical procedure or select an implant or surgical procedure based on the operational duration and feasibility scores.
In one or more examples, output devices 712 may be configured to output a list of implants with their operational duration scores and feasibility scores. In some examples, output devices 712 may output a ranked list of the implants with their operational duration scores and feasibility scores.
In the example illustrated in
Orthopedic surgery can involve implanting one or more implant components into a patient to repair or replace a damaged or diseased joint of the patient. For example, an orthopedic shoulder surgery may involve attaching a first implant component to a glenoid cavity of a patient and attaching a second implant component to a humerus of the patient. The first and second implant components may cooperate with one another to replace the shoulder joint and restore motion and/or reduce discomfort. It is important for surgeons to select from properly designed implant components when planning an orthopedic surgery. Improperly selected or improperly designed implant components may limit patient range of motion, cause bone fractures, loosen and detach from the patient's bone, or require more follow-up visits.
In many cases, the implant that a surgeon selects need not necessarily be a patient specific implant. For example, an implant manufacturer may generate a plurality of different implants having different dimensions and shapes. The surgeon may select from one of these pre-manufactured implants as part of the pre-operative planning or, possibly, intra-operatively. For instance, rather than having an implant custom manufactured for a patient (e.g., based on pre-operative information of the patient or possibly intra-operatively with a 3D printer), the surgeon may select from one of the pre-manufactured implants. In some examples, it may be possible that the surgeon selects a particular implant, and then the implant is manufactured (e.g., such as where the manufacturer or hospital does not have the particular implant in stock). However, the implant manufacturing may be done without information of the specific patient who is to be implanted with the implant.
Although the implant may be not specific for a patient, the implant may be manufactured for a particular group of patients. For instance, the group of patients may be gender based, height based, obesity based, etc. As an example, the manufacturer may generate an implant that, while not specific to a particular patient, may be generally for obese patients, or male patients, or tall patients, etc.
In some examples, the implant may be manufactured based on specific patient information. For instance, as part of the pre-operative planning, the surgeon may determine patient dimensions (e.g., size of bone where implant is to be implanted) and patient characteristics (e.g., age, weight, sex, smoker or not, etc.). A manufacturer may then construct a patient specific implant.
In both examples (e.g., non-patient-specific implant or patient-specific implant), the implant manufacturing procedure should manufacture an implant that is well-suited for implantation. For example, a surgeon should be able to implant it with effort well within the range of normal surgical effort. If implanted in a competent manner, the implant should not cause any additional damage to the target bone, surrounding bone, or surrounding tissue. The implant should serve its function for a reasonable amount of time before the patient needs to take corrective action (e.g., having the implant replaced with the same type of implant, having a reversed implant surgery, undergoing extensive physical therapy, etc.).
There may be technical problems in manufacturing implants to achieve the above example goals of the implant (e.g., reasonable implantation effort, low amount of damage to bone or the surrounding area, long functional time, etc.). For instance, an implant designer, which may be a person or a machine, may have a limited knowledge base of how to design an implant that satisfies the various goals. For a human implant designer, or even a team of implant designers, the amount of knowledge needed to ensure that the implant satisfies the example goals may be too vast to acquire and apply.
This disclosure describes example techniques of utilizing machine-learning techniques for practical applications of designing implants. For instance, a computing system utilizing a machine-learned model may be configured to perform the example techniques described in this disclosure, which a human designer or a team of human designers would not be able to perform. In some examples, it may be possible that a human designer or team of human designers can construct an example implant and input information of the implant into the machine-learned model. The machine-learned model, in this example, indicates whether the implant would be suitable or not.
The computing system may utilize a machine-learned model to determine the size and shape of an implant. The machine-learned model is a tool that may analyze input data (e.g., implant characteristics of an implant to be manufactured) utilizing computational processes of the computing system in a manner that extends beyond just know-how of a designer to generate an output of the information indicative of the dimensions of the implant (e.g., size and shape). As one example, the implant characteristics of the implant to be manufactured include information that the implant is for a type of surgery (e.g., anatomical or reversed), information that the implant is stemmed or stemless, information that the implant is for a type of patient condition (e.g., for fracture, for osteoporosis, etc.), information that the implant is for a particular bone (e.g., humerus, glenoid, etc.), and information of a press fit area (e.g., distal press fit or proximal press fit) of the implant (e.g., area around which bone is to grow to secure the implant in place). The following are some additional examples of implant characteristics: length of the stem in case of a revision stem, graft window for revision or fracture cases, locking screw to lock the stem in the humerus in case of revision, convertible prosthesis, monolithic or modular prosthesis, stem shape that mimics the internal shape of the humerus or not, etc.
The computing system may apply model parameters of the machine-learned model to the input data (e.g., categorize the input data based on the model parameters or generally perform a machine learning algorithm using the model parameters on the input data), and the result of applying the model parameters of the machine-learned model may be the information indicative of the dimensions of the implant. For instance, the machine-learned model may receive the implant characteristics. Implant characteristics may be information of a way in which the implant is to be used and not necessarily the size and shape of the implant. However, it may be possible for the input to include information of a size and shape of the implant that the machine-learned model then modifies. The machine-learned model may apply model parameters of the machine-learned model. The machine-learned model may determine the information indicative of the dimensions of the implant based on the applying of the model parameters of the machine-learned model.
The computing system may generate the model parameters of the machine-learned model based on a machine learning dataset. Examples of the machine learning dataset include one or more of information indicative of clinical outcomes (e.g., information indicative of survival rate (how long the implant lasted), range of motion, pain level, etc.) for different types of implants and dimensions of available implants. For example, similar to the above description for determining operational duration, part of the information indicative of clinical outcomes may be composite scores or scores used to generate the composite score from patients that have had an implant implanted. For instance, a pain score associated with an implant may be indicative of a pain level for a patient. An activity score may be associated with impact on day-to-day life of the patient. The forward flexion score and the abduction score may be indicative of an amount of movement by the patient. A rotation score indicative of external rotation and internal rotation may indicate how well the patient can rotate his/her shoulder and arm. A power score may indicate how much weight the patient can move. These various scores may be combined together to form a composite score, also referred to as a constant score. Such score information may be indicative of how well an implant will function, and may help guide how an implant is to be constructed.
As another example, information indicative of clinical outcomes may include information available from articles and publications of clinical outcomes. Examples of information available from articles includes survival rate, range of motion, pain level, revision rate, dislocation rate, and infection rate. The above examples of information available from articles may be for each of a plurality of patient characteristics like mean age, gender, diseases, dominant arm, etc. The articles and publications may also include information collected directly from physicians performing procedures.
For example, the computing system may receive implant data for a large number of implants. The implant data may include information indicative of clinical outcomes of the various implants and implant 3D models. For instance, the implant data may include information of implants that were used in surgery (e.g., as trial or permanent) and what the outcome of the surgery was. Examples of the outcome of the surgery include information such as a length of time that the surgery took, how difficult the surgery was, how much damage there was to the bone and surrounding area, and how long the implant served its function, as a few examples. In addition, for each of the implants, the patient information may also be used as input, such as type of surgery for which the implant was used, type of bone on which the implant was affixed, bone characteristics (e.g., how much available bone there was), and other characteristics like patient disease.
The computing system may train (e.g., in a supervised or unsupervised manner) the machine-learned model using the implant data and patient characteristics as known inputs and the clinical outcomes as known outputs. The result of the training of the machine-learned model may be the model parameters. The computing system may periodically update the model parameters based on implant data and clinical outcomes generated subsequently. For example, the machine-learned model receives different implants and outcomes and uses the different implants and outcomes to pick the best ones (best size/shape) for a recommended design.
With the model parameters, the machine-learned model may be configured to generate information indicative of dimensions (e.g., size and shape) of an implant that is to be designed and manufactured. In some examples, with the dimensions of the implant, a manufacturer may manufacture the implant.
For example, the model parameters may define operations that the computing system, executing the machine-learned model, is to perform. The inputs to the machine-learned model may be implant characteristics of an implant to be manufactured such as information that the implant is for a type of surgery, information that the implant is stemmed or stemless, information that the implant is for a type of patient condition, information that the implant is for a particular bone, and press fit area, as a few non-limiting examples. The press fit area is the area where the implant will have its primary stability in the bone, waiting for bone ingrowth in this area. The strength of the press fit depends on the size/volume of the implant. The machine-learned model receives these examples of input data and groups, classifies, or generally performs the operation of a machine learning algorithm using the model parameters to determine information indicative of dimensions of the implant based on the implant characteristics.
As one example, the machine-learned model, using the model parameters, may determine a classification based on the input data. The classification may be associated with particular dimensions of the implant. For instance, the model parameters may include a plurality of clusters for each of a plurality of implants, where each cluster represents a multi-dimensional range of values for the input data. Each of the clusters may be associated with dimensions for respective implants. The machine-learned model may be configured to determine a cluster based on the input data and then determine the dimensions of the implant based on the determined cluster.
As described herein,
For instance, machine-learned model 902 may be configured to determine and output information indicative of dimensions of an implant to be manufactured based on implant characteristics. A manufacturer may receive the information indicative of the dimensions of the implant and manufacture the implant based on the information indicative of the dimensions of the implant.
A computing system, applying machine-learned model 902, may be configured to obtain implant characteristics of an implant to be manufactured. The implant characteristics may include one or more of information that the implant is for a type of surgery, information that the implant is stemmed or stemless, information that the implant is for a type of patient condition, information that the implant is for a particular bone, and information of a press fit area of the implant. For example, the implant characteristics may include information indicating whether the implant will be used for an anatomical or reversed implant procedure, whether the implant will be stemmed or stemless, and the like. The implant characteristics may also include information indicating information such as the type of patient condition for which the implant will be used (e.g., fracture, osteoporosis, etc.), and/or information indicating the type of bone where the implant will be used (e.g., humerus), as some additional examples.
As explained above, the implant characteristics may be for an implant that is to be manufactured. The implant may be manufactured for keeping in stock at the manufacturer or hospital such that when that implant is needed, the implant is available. For instance, the implant may be for the humerus and stemmed, and the implant may be available in stock when needed. In some examples, the implant may be manufactured after the implant is needed (e.g., because the implant is not in stock). The implant to be manufactured need not be manufactured for a particular patient (e.g., the implant is not patient specific). However, in some examples, the implant may be a patient specific implant. Furthermore, the implants may be designed in pairs (e.g., glenoid and humeral implant) to cooperate with one another.
Moreover, the implant may not be patient specific, but may be meant for a particular group of people. The grouping of the people for which the implant is designed may be based on various factors such as race, height, gender, weight, etc. As an example, the implant characteristics may, in addition to or instead of including information that the implant is for a type of surgery, information that the implant is stemmed or stemless, information that the implant is for a type of patient condition, information that the implant is for a particular bone, and information of a press fit area of the implant, include information about a characteristic of a group of people such as race, weight, height, gender, etc.
The computing system, applying machine-learned model 902, may be configured to determine information indicative of dimensions of the implant based on the implant characteristics. For example, machine-learned model 902 may determine information indicative of a size and shape of the implant. As one example, machine-learned model 902 may determine information such as thickness, height, material, etc. of each of the components of the implant (e.g., length of the stem, thickness of the stem along the length, the material of the stem, shape, angles, etc.). In some examples, machine-learned model 902 may determine, in addition to or instead of the example information described above, information such as type of coating (e.g., hydroxyapatite (HAP) coating, porous titanium coating, etc.), type of finishing (e.g., blasted, mirror polished, coated, etc.), whether there is a graft window or not, locations of the holes for sutures, whether locking screws are used or not (and, if used, the type of locking screws), information indicative of the thickness of the metal back on a glenoid implant, and information indicative of the type of fixation (e.g., cemented, press fit, locking screws, etc.).
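The following is a minimal sketch of a structured record that the design output described above could populate. The field names are illustrative assumptions, not a schema defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImplantDesign:
    stem_length_mm: Optional[float]   # None for a stemless design
    stem_thickness_mm: Optional[float]
    material: str                     # e.g., "titanium alloy"
    coating: str                      # e.g., "HAP" or "porous titanium"
    finishing: str                    # e.g., "blasted", "mirror polished"
    graft_window: bool
    locking_screws: Optional[str]     # screw type if used, else None
    fixation: str                     # e.g., "cemented", "press fit"
```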
The computing system, applying machine-learned model 902, may be configured to output the information indicative of the dimensions of the implant. A manufacturer may utilize the information indicative of the dimensions of the implant to manufacture the implant for use in surgery.
As an example, the computing system may be virtual planning system 701 of
A manufacturer may cause one or more processors 702 to execute implant design module 719 using one or more input devices 710. The manufacturer may enter, using one or more input devices 710, the implant characteristics of the implant to be manufactured. This is one example way in which the computing system (e.g., virtual planning system 701) may obtain the implant characteristics.
Executing implant design module 719 may cause one or more processors 702 to perform operations defined by one or more machine-learned models 720, including operations defined by machine-learned model 902. For example, one or more processors 702 may determine information indicative of dimensions of the implant to be manufactured (e.g., size and shape) based on the implant characteristics.
One or more output devices 712 of virtual planning system 701 may be configured to output information indicative of the dimensions of the implant to be manufactured. For example, in some examples, one or more display devices 708 may be part of output devices 712, and one or more display devices 708 may be configured to present information indicative of the dimensions of the implant. In some examples, one or more output devices 712 may include one or more communication devices 706. One or more communication devices 706 may output the information indicative of the dimensions of the implant to one or more visualization devices, such as visualization device 213. In such examples, visualization device 213 may be configured to display the information indicative of the dimensions of the implant to be manufactured.
In some examples, one or more processors 702 may be configured to execute an application programming interface (API) for utilizing a computer-aided design (CAD) software. For example, the one or more processors 702 may utilize the API to provide the dimensions of the implant to be manufactured to the CAD software. The CAD software may generate a 3D model of the implant based on the dimensions of the implant. One or more processors 702 may utilize the CAD 3D model to generate an implant manufacturing file in a file format that an implant manufacturing machine can import and parse. The implant manufacturing machine may receive the implant manufacturing file and manufacture the implant based on the information in the implant manufacturing file.
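The following is a minimal sketch of handing the model's dimension output to downstream tooling. The disclosure refers to a CAD API and a machine-readable manufacturing file without naming a specific product, so this sketch simply serializes hypothetical dimension data to a JSON file as a stand-in for that hand-off; the file schema is an assumption.

```python
import json

def write_manufacturing_file(design: dict, path: str) -> None:
    """Write the dimension data in a format a downstream tool could import."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"schema": "implant-design/v1", "design": design}, f, indent=2)

write_manufacturing_file(
    {"stem_length_mm": 100.0, "stem_thickness_mm": 9.5, "fixation": "press fit"},
    "implant_design.json",
)
```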
The above example with respect to virtual planning system 701 is provided as one example and should not be considered limiting. For instance, other examples of a computing system such as computing device 1002 (
In some examples, server system 1104 of
In some examples, server system 1104 may generate the implant manufacturing file and transmit that implant manufacturing file to client computing device 1102 or directly to the implant manufacturing machine, bypassing client computing device 1102. However, even in such examples, server system 1104 may output information indicative of the dimensions of the implant such as output information indicative of the dimensions of the implant to the CAD software, where the CAD software may be executing on server system 1104 or elsewhere.
In some examples, machine-learned model 902 of the computing system may receive the implant characteristics and apply model parameters of the machine-learned model to the implant characteristics, as described in this disclosure with respect to
There may be various ways in which machine-learned model 902 may apply the model parameters to determine the dimensions of the implant. As one example, machine-learned model 902, using the model parameters, may determine a classification based on the implant characteristics. The classification may be associated with a particular value for the dimensions of various components of the implant.
For example, the most appropriate press fit area in the case of a fracture may be determined by comparing osteolysis rates for several types of implants with distal or proximal press fit configurations. In this example, the press fit area may be a way in which machine-learned model 902 may classify the implants, and the classification may be based on the comparison of osteolysis rates.
In one or more examples, machine-learned model 902 may be trained using model trainer 1208 (
Examples of clinical outcomes include information indicative of survival rate, range of motion, pain level, etc. As one example, the information indicative of clinical outcomes may be information such as survival rate of the implant (e.g., how long the implant served its function before needing to be replaced). Model trainer 1208 may utilize the survival rate of various implants used for a particular type of fracture. The size and shapes of the implants may impact the survival rate, and model trainer 1208 may be configured to train machine-learned model 902 to determine size and shapes of the implants that increase the survival rate.
The combination of these criteria (e.g., which implant, characteristics of implant, procedure, and characteristics of the patient) may all influence the outcome. For example, a younger, healthier patient who received an implant for a fracture may have a different outcome than an older, unhealthy patient who received the same implant for the same type of fracture. Accordingly, model trainer 1208 may be configured to account for all these different criteria in generating the model parameters.
For example, training data 1302 may include information such as patient ages, weight, height, smoker or not, types of diseases, etc., types of implants, and the like. Example input data 1304 may be the actual information about the patient ages, weight, height, smoker or not, types of diseases, and the implants used in patients. Output data 1308 may be the clinical outcomes for the patients that make up the example input data 1304.
In some examples, the clinical outcomes for the patients may be a multi-factor comparison. For instance, length of surgery, survival rate, type of fracture, etc. may all be factors of output data 1308. As one example, output data 1308 may indicate that for a particular type of surgery and a particular type of fracture, that the result was implanting a particular implant. For a different type of surgery, a different type of fracture, and a different implant, the result may be different, and represented in output data 1308.
Objective function 1310 may utilize the example input data 1304 and the output data 1308 to determine model parameters for machine-learned model 902. For example, the model parameters may be weights and biases, or other example parameters described with respect to
In this way, this disclosure describes example techniques utilizing computational processes for determining dimensions of an implant for manufacturing. The example techniques described in this disclosure are based on machine learning datasets that may be extremely vast, with more information than could be accessed or processed by a human designer or manufacturer without access to the computing system that uses machine-learned model 902. For instance, human designers or manufacturers may not be able to determine that some implant dimensions have already been tried and have not worked for various reasons. Manufacturers or designers may end up designing and manufacturing implants that were otherwise known to be defective, or at least less effective than others. With the example techniques described in this disclosure, machine-learned model 902 may determine information indicative of dimensions of the implant (e.g., diameter of the metaphysis, angle of the stem, shape of the glenoid, length of the glenoid plug, etc.) to be manufactured based on the implant characteristics and avoid implant concepts already known to be ineffective.
Even if a person were to access and review the information from the dataset, the person may still not be able to, given the vast amount of information, construct a technique that accurately accounts for all the different patient information and implant characteristics. However, machine-learned model 902 may be configured to utilize the vast amount of information as a way to determine dimensions of an implant. Moreover, using machine-learned model 902 allows for a scalable, extensible computing system that can be updated with new information periodically to create a better version of machine-learned model 902. A person may not have the ability to update his/her understanding of what the dimensions should be, much less update as quickly as machine-learned model 902 can be updated.
As illustrated in
One or more processors 702 (e.g., using machine-learned model 902) may receive implant characteristics of an implant (1700). For example, implant characteristics of the implant may include one or more of information that the implant is for a type of surgery, information that the implant is stemmed or stemless, information that the implant is for a type of patient condition, information that the implant is for a particular bone, and/or information of a press fit area of the implant. The press fit area is the area where the implant will have its primary stability in the bone, waiting for bone ingrowth in this area. The strength of the press fit depends on the size/volume of the implant. The following are some additional examples of implant characteristics: length of the stem in case of a revision stem, graft window for revision or fracture cases, locking screw to lock the stem in the humerus in case of revision, convertible prosthesis, monolithic or modular prosthesis, stem shape that mimics the internal shape of the humerus or not, etc. A manufacturer may provide information of the implant characteristics using input devices 710 as an example.
One or more processors 702 may apply model parameters of machine-learned model 902 to the implant characteristics (1702). In some examples, the model parameters of machine-learned model 902 are generated based on a machine learning dataset. For example, the machine learning dataset includes one or more of information indicative of clinical outcomes for different types of implants and dimensions of available implants. The information indicative of clinical outcomes may be information available from articles and publications of clinical outcomes, examples of which include information collected directly from physicians performing procedures. Examples of information available from articles includes survival rate, range of motion, pain level, revision rate, dislocation rate, and infection rate. The above examples of information available from articles may be for each of a plurality of patient characteristics like mean age, gender, diseases, dominant arm, etc.
As an example, for ease of understanding, a manufacturer may want to design an implant with a particular stem length for men and for fracture cases. In this example, machine-learned model 902 may have been trained with information from publications about the outcomes of different implants in men. The result of the training may be model parameters that one or more processors 702, via machine-learned model 902, apply to implant characteristic information such as length of stem, for fracture, and for men. In this example, machine-learned model 902 may scale, modify, weight, etc. the input information based on the model parameters.
One or more processors 702 may be configured to determine information indicative of dimensions of the implant based on applying the model parameters of machine-learned model 902 (1704). For example, the result of applying the model parameters may be information indicative of the external size and shape of the implant. In some examples, machine-learned model 902 may determine, in addition to or instead of the dimensions of the implant, information such as type of coating (e.g., hydroxyapatite (HAP) coating, porous titanium coating, etc.), type of finishing (e.g., blasted, mirror polished, coated, etc.), whether there is a graft window or not, locations of the holes for sutures, whether locking screws are used or not (and, if used, the type of locking screws), information indicative of the thickness of the metal back on a glenoid implant, and information indicative of the type of fixation (e.g., cemented, press fit, locking screws, etc.).
One or more output devices 712 may be configured to output the information indicative of the dimensions of the implant to be manufactured (1706). For example, one or more processors 702 may output the information indicative of the external size and shape of the implant to one or more output devices 712. One or more output devices 712 may display the dimensions of the implant (e.g., in examples where display device 708 is part of output devices 712). In some examples, one or more output devices 712 may output information indicative of the dimensions of the implant to another device such as visualization device 213 for display.
In some examples, one or more processors 702 may generate a 3D model of the implant (e.g., such as using CAD software). Display device 708 or visualization device 213 may display the 3D model of the implant, and a surgeon or other health professional may confirm that the 3D model of the implant should be manufactured.
One or more processors 702 may instruct a machine for manufacturing to manufacture the implant (1708). For example, one or more processors 702 may cause output devices 712 to output the CAD 3D model of the implant to generate an implant manufacturing file in a file format that an implant manufacturing machine can import and parse. The implant manufacturing machine may receive the implant manufacturing file and manufacture the implant based on the information in the implant manufacturing file.
During the preoperative phase of the surgical procedure, a surgeon may use surgery planning module 718 to develop a surgical plan for the surgical procedure. As discussed elsewhere in this disclosure, processing device(s) 1004 (
Example types of surgical options for a step of the surgical procedure may include a range of surgical items, such as orthopedic prostheses (i.e., orthopedic implants), that the surgeon may use during the step of the orthopedic surgery. For instance, there may be a range of glenoid prostheses from which the surgeon may choose a glenoid prosthesis. Other types of example surgical options include attachment positions for a specific orthopedic prosthesis. For instance, in an example involving a glenoid prosthesis, virtual planning system 701 may allow the surgeon to select an attachment position for the glenoid prosthesis from a range of attachment positions that are more medial or less medial, more anterior or less anterior, and so on.
Selecting the correct surgical options for a surgical procedure may be vital to the success of the surgical procedure. For example, selecting an incorrectly sized orthopedic prosthesis may lead to the patient experiencing pain or limited range of motion. In another example, selecting an incorrect attachment point for an orthopedic prosthesis may lead to loosening of the orthopedic prosthesis over time, which may eventually require a revision surgery.
Different patients have different anatomic parameters and different patient characteristics. The anatomic parameters for the patient may be descriptive of the patient's anatomy at the surgical site for the surgical procedure. The patient characteristics may include one or more characteristics of the patient separate from the anatomic parameter data for the patient. Because patients have different anatomic parameters and different patient characteristics, surgeons may need to select different surgical options for different patients.
Because there may be a very large number of surgical options from which a surgeon can choose, it may be challenging for the surgeon to select a combination of surgical options that is best for a specific patient. Accordingly, it may be desirable for a surgical planning system, such as surgery planning module 718, to suggest appropriate surgical options for the patient, given the anatomic parameters and patient characteristics of the patient.
However, implementing a computerized system for suggesting appropriate surgical options presents significant challenges. For instance, the number of combinations of selectable surgical options may grow exponentially, which may result in a significant draw on the memory and computational resources of any computing system implementing such a system. Moreover, there is typically a range of acceptable surgical options for any given patient. In other words, there might not be one right answer to the question of which set of surgical options should be used in a surgical procedure. Thus, even if the implementation problems associated with the potentially large number of combinations can be addressed, there may be a problem of how to account for the ranges of acceptable surgical options. Computerized solutions for suggesting appropriate surgical options during an intraoperative phase of a surgical procedure may present even more challenges, such as how to account for surgical options that can no longer be unselected or how to suggest surgical options when a surgical plan changes during the surgical procedure.
This disclosure describes techniques that may address one or more of these problems. As described herein, surgery planning module 718 (
Surgery planning module 718 may use one or more machine-learned models 720 to determine sets of recommended surgical options for one or more surgical parameters of one or more steps of a surgical procedure. For instance, surgery planning module 718 may use a different one of machine-learned models 720 to determine different sets of recommended surgical options for different surgical parameters. In some instances, a set of recommended surgical options includes a plurality of recommended surgical options. As the surgeon plans the surgical procedure, surgery planning module 718 may receive indications of the surgeon's selection of surgical options for the surgical parameters of the steps of the surgical procedure. Surgery planning module 718 may determine whether a selected surgical option is among the recommended surgical options for a surgical parameter. If the selected surgical option is not among the recommended surgical options for the surgical parameter, surgery planning module 718 may output a warning indicating that the selected surgical option is not among the recommended surgical options.
Thus, by determining a set of recommended surgical options for a surgical parameter and warning the surgeon when the selected surgical option is not among the set of recommended surgical options, the problem of how to implement a computerized system to determine which of the surgical options is the single best surgical option may be avoided. Because a machine-learned model is expected to determine a set of one or more recommended surgical options, as opposed to the single best surgical option, less training data may be required in order to train the machine-learned model to a workable state.
Furthermore, as described herein, the user's selection of a surgical option for a first surgical parameter may serve as input to a machine-learned model that generates a set of recommended surgical options for a second surgical parameter. Thus, the machine-learned model may generate the set of recommended surgical options for the second surgical parameter given the surgical option selected for the first surgical parameter. For example, a surgeon may select a specific glenoid implant as a first surgical parameter. In this example, data indicating the specific glenoid implant may serve as input to a machine-learned model that generates a set of recommended surgical options for a surgical parameter corresponding to a humeral implant.
In the example of
Furthermore, in the example of
Surgery planning module 718 may use a machine-learned model (e.g., one of machine-learned models 720) to determine a set of recommended surgical options for a surgical parameter (1804). In some examples, the set of recommended surgical options may correspond to options that other surgeons are likely to use when planning the surgical procedure on the patient, given the patient characteristics data for the patient and/or the anatomic parameter data for the patient. Surgery planning module 718 may provide the anatomic parameter data and/or the patient characteristic data as input to the machine-learned model. In some examples, surgery planning module 718 may also provide different sets of anatomic parameter data and/or patient characteristic data to machine-learned models for different surgical parameters. Furthermore, in some examples, surgery planning module 718 may provide data indicating one or more previously selected surgical options as input to the machine-learned model.
The machine-learned model may be implemented in one of a variety of ways. For instance, the machine-learned model may be implemented using one or more of the types of machine-learned models described elsewhere in this disclosure, such as with respect to
Virtual planning system 701 may identify the recommended surgical options based on the confidence values output by the output neurons. For instance, in some examples, virtual planning system 701 may determine that the recommended surgical options are surgical options whose corresponding output neurons generated confidence values that are above a particular threshold. In other examples, virtual planning system 701 may rank the surgical options based on the confidence values generated by the output neurons corresponding to the surgical options and select a given number of the highest-ranked surgical options as the set of recommended options.
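Both selection strategies described above can be sketched compactly. In the illustrative snippet below, the confidence values are assumed to have already been produced by the output neurons, and the option names and the 0.2 threshold are hypothetical:

```python
import numpy as np

options = ["glenoid A", "glenoid B", "glenoid C", "glenoid D"]
confidences = np.array([0.55, 0.30, 0.10, 0.05])   # one value per output neuron

def recommend_by_threshold(options, confidences, threshold=0.2):
    """Recommend every option whose confidence exceeds the threshold."""
    return [o for o, c in zip(options, confidences) if c > threshold]

def recommend_top_k(options, confidences, k=2):
    """Rank options by confidence and keep the k highest-ranked ones."""
    order = np.argsort(confidences)[::-1][:k]
    return [options[i] for i in order]

print(recommend_by_threshold(options, confidences))   # ['glenoid A', 'glenoid B']
print(recommend_top_k(options, confidences))           # ['glenoid A', 'glenoid B']
```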
As noted above, each of the output neurons may be configured to output a confidence value indicating a level of confidence that a set of reference surgeons would have selected the surgical option corresponding to the output neuron. To ensure that the output neurons output confidence values indicating levels of confidence that the set of reference surgeons would have selected the surgical options corresponding to the output neurons, the neural network may have been trained using training data that indicate surgical options selected by the reference surgeons when given various sets of patient characteristic data and/or anatomic parameter data for the patient.
The reference surgeons may be determined in any of one or more ways. For example, the reference surgeons may be a set of surgeons recognized as experts in performing the orthopedic surgery that the user is planning. In some examples, the reference surgeons may be a set of surgeons who are working within the same insurance network, same hospital, or same region.
In some examples where the machine-learned model includes a neural network, surgery planning module 718 may train the neural network using training data that comprises training data pairs. Although described as being trained by surgery planning module 718, the neural network may, in some examples, be trained by another application and/or model trainer 1208 (
In some examples, surgery planning module 718 may use the training process 1300 (
In some examples, surgery planning module 718 may automatically generate training data pairs. As noted elsewhere in this disclosure, surgery planning module 718 may be used to generate surgical plans, and generating surgical plans may involve selecting surgical options. Surgery planning module 718 may take the anatomic parameter data, patient characteristic data, and selected surgical option for a specific surgical parameter of a specific surgical step of such a surgical plan generated by a reference surgeon and generate a training data pair based on this data. Because surgical plans generated using surgery planning module 718 may share the same surgical steps (and data structures identifying the surgical steps), surgery planning module 718 may apply data generated across instances of the same surgical step in different instances of the same surgical procedure. In other words, surgery planning module 718 may generate training data pairs based on anatomic parameter data, patient characteristic data, and selected surgical options for the specific step in different instances of the same surgical procedure. Thus, surgery planning module 718 may use the training data pairs to train the machine-learned model for the specific surgical parameter of the specific surgical step. In this way, as the reference surgeons plan more and more surgical procedures, surgery planning module 718 may generate more and more training data pairs that surgery planning module 718 may use to continue training machine-learned models.
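A hedged sketch of this automatic generation of training data pairs follows, assuming each stored surgical plan records anatomic parameters, patient characteristics, and the option a reference surgeon selected for a given surgical parameter of a given step; the field names and example plans are illustrative:

```python
def training_pairs_for_step(plans, step_id, parameter_id):
    """Build (input_vector, target_option) pairs for one surgical parameter of one step,
    aggregated across instances of the same procedure planned by reference surgeons."""
    pairs = []
    for plan in plans:
        step = plan["steps"].get(step_id)
        if step is None or parameter_id not in step["selected_options"]:
            continue
        input_vector = plan["anatomic_parameters"] + plan["patient_characteristics"]
        target = step["selected_options"][parameter_id]
        pairs.append((input_vector, target))
    return pairs

# Two illustrative plans for the same procedure type.
plans = [
    {"anatomic_parameters": [12.0, 3.4], "patient_characteristics": [67, 1],
     "steps": {"glenoid_prep": {"selected_options": {"glenoid_implant": "glenoid B"}}}},
    {"anatomic_parameters": [10.5, 2.9], "patient_characteristics": [71, 0],
     "steps": {"glenoid_prep": {"selected_options": {"glenoid_implant": "glenoid A"}}}},
]
print(training_pairs_for_step(plans, "glenoid_prep", "glenoid_implant"))
```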
In other examples, machine-learned model 720 may be implemented as one or more support vector machine (SVM) models, Bayesian network models, decision tree models, random forests, or other types of machine-learned models. In examples where machine-learned model 720 is implemented using SVM models, there may be a plurality of separate SVM models for different surgical options. In this example, the SVM model of a surgical option may classify the surgical option as being part of the recommended set of surgical options or not part of the recommended set of surgical options. In examples where machine-learned model 720 includes a set of decision trees, the set of decision trees may include decision trees that generate output indicating whether a surgical option is in the recommended set of surgical options.
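For the SVM variant, one binary classifier per surgical option could be trained and queried roughly as follows. This is a sketch using scikit-learn as a stand-in implementation; the feature vectors, labels, and the choice of an RBF kernel are assumptions rather than requirements of the disclosure:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative feature vectors (anatomic parameters + patient characteristics) and labels
# indicating whether reference surgeons chose "glenoid B" for that patient.
X = np.array([[12.0, 3.4, 67], [10.5, 2.9, 71], [13.1, 3.8, 59], [9.8, 2.5, 74]])
y = np.array([1, 0, 1, 0])

# One binary SVM per surgical option: "in the recommended set" vs. "not in the set".
glenoid_b_model = SVC(kernel="rbf").fit(X, y)

new_patient = np.array([[11.2, 3.1, 65]])
in_recommended_set = bool(glenoid_b_model.predict(new_patient)[0])
print("glenoid B recommended:", in_recommended_set)
```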
In examples where machine-learned model 720 includes a Bayesian network, the Bayesian network may take the planning parameters as inputs, and training may be performed by optimization on a validation database (i.e., a set of "regular" plannings). Then, to test whether a selected surgical option is in a recommended set of surgical options, surgery planning module 718 may project the selected surgical option into a space represented by the possible surgical options and then determine whether that projection is within the recommended set of surgical options.
Furthermore, in the example of
Surgery planning module 718 may then determine whether the selected surgical option is among the set of recommended surgical options (1808). Based on determining that the selected surgical option is not among the set of recommended surgical options (“NO” branch of 1808), surgery planning module 718 may output a warning message to the user (1810). On the other hand, based on determining that the selected surgical option is among the set of recommended surgical options (“YES” branch of 1808), surgery planning module 718 may not output the warning message (1812).
Surgery planning module 718 may output the warning message in one or more ways. For instance, in one example, surgery planning module 718 may output the warning message as text or graphical data in an MR visualization. In another example, surgery planning module 718 may output the warning message as text or graphical data in a 2-dimensional display. The warning message may indicate to the user that the reference surgeons are unlikely to have chosen the selected option for the patient, given the patient characteristic data for the patient. In some examples, the warning message on its own is not intended to prevent the user from using the selected surgical option during the surgical procedure on the patient. Thus, in some examples, the warning message does not limit the choices of the user. Rather, the warning message may help the user understand that the selected surgical option might not be the surgical option that the reference surgeons would typically choose.
In some examples, surgery planning module 718 may perform the operation of
In some examples, the surgical plan for the surgical procedure may change while the surgeon is performing the surgical procedure. For instance, the surgeon may need to change the surgical plan for the surgical procedure during the intraoperative phase of the surgical procedure upon discovering that the patient's anatomy is different than assumed during the preoperative phase of the surgical procedure. Accordingly, surgery planning module 718 may update the surgical plan for the surgical procedure during the intraoperative phase of the surgical procedure. In some examples, surgery planning module 718 may determine a modified surgical plan for the surgical procedure in accordance with one or more of the examples described in PCT application no. PCT/US2019/036993, filed Jun. 13, 2019, the entire content of which is incorporated by reference. The updated surgical plan for the surgical procedure may have different steps from the original surgical plan for the surgical procedure. In accordance with an example of this disclosure, surgery planning module 718 may perform the operation of
As noted above, in some examples, one or more of the machine-learned models may receive indications of previously selected surgical options. Thus, a machine-learned model may use information about the previously selected surgical options when determining the set of recommended surgical options for a surgical parameter. Hence, in some examples, surgery planning module 718 may use a second machine-learned model to determine a second set of recommended surgical options for a second surgical parameter, wherein the anatomic parameter data for the patient, the patient characteristic data for the patient, and the selected surgical option for a first surgical parameter are input to the second machine-learned model. Thus, the set of recommended surgical options may be different depending on a previously selected surgical option. For instance, in one example, the set of recommended surgical options may include a plurality of humeral prostheses. In this example, the plurality of humeral prostheses may be different depending on which glenoid prosthesis was selected by the surgeon.
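A minimal, non-authoritative illustration of this chaining appears below, using logistic regression models as stand-ins for machine-learned models 720; the toy feature layout, training data, and option lists are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recommend(model, features, options, threshold=0.2):
    """Return options whose predicted probability exceeds the threshold."""
    probs = model.predict_proba(np.asarray(features).reshape(1, -1))[0]
    return [options[i] for i, p in zip(model.classes_, probs) if p > threshold]

# Toy training data: patient features -> glenoid choice made by reference surgeons.
X_patients = np.array([[11.2, 65], [9.8, 74], [13.1, 59], [10.5, 71]])
glenoid_choice = np.array([0, 1, 0, 1])          # indices into glenoid_options
glenoid_options = ["glenoid A", "glenoid B"]
glenoid_model = LogisticRegression().fit(X_patients, glenoid_choice)

# The second model additionally receives the previously selected glenoid option,
# so its humeral recommendations are conditioned on that earlier selection.
X_with_glenoid = np.hstack([X_patients, glenoid_choice.reshape(-1, 1)])
humeral_choice = np.array([0, 1, 0, 1])          # indices into humeral_options
humeral_options = ["stemmed humeral", "stemless humeral"]
humeral_model = LogisticRegression().fit(X_with_glenoid, humeral_choice)

# At planning time: the surgeon selects "glenoid B" (index 1) for a new patient.
new_patient = [10.9, 68]
print(recommend(humeral_model, new_patient + [1], humeral_options))
```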
Because the machine-learned model may be designed to accept only those ones of the previously selected surgical options that are material to the determination of the recommended surgical options, it may be unnecessary to evaluate combinations of all surgical options at once. In this way, examples of this disclosure may avoid problems associated with large numbers of potential combinations of surgical options, which may be costly in terms of computing resources.
It is noted that providing data indicating previously selected surgical options as input to machine-learned models may create dependencies in the order in which the surgeon selects surgical options. However, in some examples, if surgery planning module 718 uses a machine-learned model to determine a set of recommended surgical options and the surgeon has not indicated a selection of a surgical option that the machine-learned model uses as input, the machine-learned model may be trained to generate the set of recommended surgical options such that the set of recommended surgical options includes no recommended surgical options. In other examples, surgery planning module 718 may output the warning message without using the machine-learned model when the surgeon selects the surgical option. In this way, the resulting warning message may make the surgeon aware that surgery planning module 718 cannot accurately provide guidance about whether the selected surgical option is among a set of recommended surgical options.
An estimated amount of operating room (OR) time for a surgical procedure to be performed on a patient may be or include an estimate of an amount of time that an OR will be in use during performance of the surgical procedure on the patient. Estimating the amount of OR time for a surgical procedure may be important for a variety of reasons. For example, because hospitals typically have a limited number of ORs, it may be important for hospitals to know the estimated amounts of OR time for surgical procedures in order to determine how best to schedule surgical procedures in the ORs. That is, hospital administrators may want to maximize utilization of ORs through appropriate scheduling of surgical procedures. Appropriate estimation of amounts of OR time for some types of orthopedic surgical procedures may be especially important given that orthopedic surgical procedures can be lengthy and are also frequently non-urgent. Because orthopedic surgical procedures are frequently non-urgent, there may be greater flexibility in scheduling orthopedic surgical procedures relative to other types of surgical procedures, such as oncology surgeries, organ transplant surgeries, and so on.
In addition to using estimates of amounts of OR time for purposes of optimizing OR utilization, an accurate estimate of an amount of OR time for a surgical procedure may be important in understanding the risk of the patient acquiring an infection during the surgical procedure. The risk of the patient acquiring an infection increases with increased amounts of OR time. The patient, the surgeon, and hospital administration need to understand the risk of infection before undertaking the surgical procedure.
Currently, surgeons use their professional judgment in estimating amounts of OR time for surgical procedures. However, some surgeons may be unable to accurately estimate amounts of OR times for surgical procedures. For instance, some surgeons may estimate amounts of OR time for surgical procedures that are too long or too short, which may result in sub-optimal OR utilization. It may be especially difficult to estimate amounts of OR times for certain types of orthopedic surgeries, such as shoulder arthroplasties and ankle arthroplasties, because of the high number of surgical options available to surgeons. For instance, in one example involving a shoulder arthroplasty, a surgeon may choose between a stemmed or stemless humeral implant. In this example, it may take different amounts of time to implant a stemmed humeral implant versus a stemless humeral implant. In another example involving a shoulder arthroplasty, a surgeon may choose between different types of glenoid implants. In this example, different types of glenoid implants may require different amounts of reaming, different types of bone grafts, and so on. Furthermore, in another example involving a shoulder arthroplasty, arthritic shoulders commonly develop osteophytes that should be accounted for during the shoulder arthroplasty. Thus, because of the large number of surgical options available to a surgeon, it may be difficult for the surgeon to accurately estimate the amount of OR time for a surgical procedure.
In addition to the variety of surgical options available to a surgeon, it may be difficult to estimate an amount of OR time for a surgical procedure to be performed on a specific patient because of various patient-specific parameters. For instance, it may take different amounts of time to perform the same surgical procedure on diabetic patients as opposed to non-diabetic patients.
Computerized techniques for scheduling ORs have previously been developed. In some instances, computerized techniques for scheduling ORs simply accept a surgeon's estimate of the amount of OR time for a surgical procedure. In some instances, computerized techniques for scheduling ORs use default, static estimates of amounts of OR time for surgical procedures. However, because of the high degree of variability within even the same type of surgical procedure, the estimated amounts of time used in such computerized techniques may be quite inaccurate, leading to poor utilization of ORs. Moreover, such techniques do not provide for a way to update the estimated amount of OR time during an intraoperative phase of the surgical procedure.
Techniques of this disclosure may address one or more of these issues. In accordance with one or more techniques of this disclosure, surgery planning module 718 (
As shown in the example of
Furthermore, in the example of
Surgery planning module 718 may also obtain surgical parameter data for the surgical procedure (1904). The surgical parameter data may include data regarding a type of surgical procedure, as well as data indicating selected surgical options for the surgical procedure. For instance, the surgical parameter data may include data indicating any of the types of surgical options described elsewhere in this disclosure. For instance, example types of surgical parameter data may include one or more of parameters of a surgeon selected to perform the surgical procedure, a type of the surgical procedure, a type of an implant to be implanted during the surgical procedure, a size of the implant, an amount of bone to be removed during the surgical procedure, and so on.
Surgery planning module 718 may estimate, using one or more of machine-learned models 720, an amount of OR time for the surgical procedure based on the patient characteristic data, the anatomic parameter data, and the surgical parameter data (1906). Surgery planning module 718 may estimate the amount of OR time in one or more of various ways.
The one or more machine-learned models 720 may be implemented in accordance with one or more of the example types of machine-learned models described with respect to
In some examples, the input layer may include input neurons that receive input data separate from and additional to data in the anatomic parameter data, the patient characteristic data, and the surgical parameter data. For example, an input neuron may receive input data indicating an experience level of the surgeon performing the surgical procedure. In another example, an input neuron may receive data indicating a region in which the surgeon practices.
The output neurons of the neural network may output various types of data. For instance, in some examples, the output neurons of the neural network include an output neuron that outputs an indication of the estimated amount of OR time for the surgical procedure. In such examples, surgery planning module 718 may train the neural network using training data that comprises training data pairs. Each training data pair corresponds to a different performance of the surgical procedure. Each training data pair includes an input vector (e.g., example input data 1304 (
To train the neural network using the training data, surgery planning module 718 may perform a forward pass through the neural network using the input vector of a training data pair. Surgery planning module 718 may then compare the indication of the amount of OR time for the surgical procedure generated by the output neuron to the target value to determine an error value. Surgery planning module 718 may then use a backpropagation algorithm to modify weights of neurons of the neural network based on the error value. Surgery planning module 718 may repeat this process for different training data pairs in the training data. In some examples, surgery planning module 718 may receive new training data pairs and may continue training the neural network as more surgical procedures are completed.
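A sketch of this training procedure is shown below using PyTorch as a stand-in framework; the layer sizes, optimizer, and synthetic input/target data are illustrative, and the disclosure does not prescribe any particular library:

```python
import torch
import torch.nn as nn

# Input vector: anatomic parameters, patient characteristics, surgical parameters, etc.
# Target: actual OR time (minutes) recorded for a past performance of the procedure.
torch.manual_seed(0)
inputs = torch.rand(64, 10)
targets = 90 + 60 * inputs.sum(dim=1, keepdim=True) / 10   # synthetic OR times

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    predicted_or_time = model(inputs)           # forward pass
    loss = loss_fn(predicted_or_time, targets)  # error vs. recorded OR time
    loss.backward()                             # backpropagation
    optimizer.step()                            # weight update

print(model(torch.rand(1, 10)).item(), "minutes (estimated)")
```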
In another example, the output neurons of the neural network correspond to different time periods. For instance, in this example, a first output neuron may correspond to an OR time of 0-29 minutes, a second output neuron may correspond to an OR time of 30-59 minutes, a third output neuron may correspond to an OR time of 60-89 minutes, and so on. In other examples, the output neurons may correspond to time periods of greater or lesser duration. In some examples, the time periods corresponding to the output neurons all have the same duration. In some examples, two or more of the time periods corresponding to the output neurons of the same neural network may have different durations.
In examples where the output neurons of the neural network include output neurons that correspond to different time periods, the output neurons may generate confidence values. A confidence value generated by an output neuron may be indicative of a level of confidence that the surgical procedure will end within the time period corresponding to the output neuron. For example, the confidence value generated by the output neuron corresponding to the OR time of 30-59 minutes indicates a level of confidence that the surgical procedure will end at some time between 30 and 59 minutes after the surgical procedure started.
In such examples, surgery planning module 718 may determine the estimated amount of OR time for the surgical procedure as a time in the time period corresponding to the output neuron that generated the highest confidence value. For instance, if the output neuron for the OR time of 30-59 minutes generated the highest confidence value, surgery planning module 718 may determine that the estimated amount of OR time for the surgical procedure is between 30 and 59 minutes.
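Selecting the estimated time period from the per-bin confidence values can be sketched in a few lines; the bin boundaries and confidence values below are illustrative:

```python
import numpy as np

# Confidence values from output neurons for the 0-29, 30-59, 60-89, and 90-119 minute bins.
bin_confidences = np.array([0.05, 0.60, 0.25, 0.10])
bin_bounds = [(0, 29), (30, 59), (60, 89), (90, 119)]

best_bin = int(np.argmax(bin_confidences))
low, high = bin_bounds[best_bin]
print(f"Estimated OR time: between {low} and {high} minutes")
```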
In examples where the neural network has output neurons that correspond to different time periods, surgery planning module 718 may train the neural network using training data that comprises training data pairs. Each training data pair corresponds to a different performance of the surgical procedure. Each training data pair includes an input vector and a target value. The input vector of a training data pair may include values for each input neuron of the neural network. The target value of a training data pair may specify a time period in which the surgical procedure corresponding to the training data pair was completed. For instance, the target value of the training data pair may specify that the surgical procedure was completed within a time period from 30 to 59 minutes after the start of the surgical procedure (e.g., after the start of the OR being used for the surgical procedure).
To train the neural network using the training data, surgery planning module 718 may perform a forward pass through the neural network using the input vector of a training data pair. Surgery planning module 718 may then compare the values generated by the output neurons to the target value to determine error values. Surgery planning module 718 may then use a backpropagation algorithm to modify weights of neurons of the neural network based on the error values. Surgery planning module 718 may repeat this process for different training data pairs in the training data. In some examples, surgery planning module 718 may receive new training data pairs and may continue training the neural network as more surgical procedures are completed.
In some examples, surgery planning module 718 may estimate the amount of OR time for the surgical procedure using a plurality of machine-learned models 720, such as a plurality of neural networks. In some examples, surgery planning module 718 may generate and store data indicating a surgical plan for the surgical procedure. The surgical plan for the surgical procedure may specify the steps of the surgical procedure. In some examples, the surgical plan for the surgical procedure may further specify surgical items that are associated with specific steps of the surgical procedure.
In examples where surgery planning module 718 estimates the amount of OR time for the surgical procedure using a plurality of machine-learned models 720, the machine-learned models 720 generate output data indicating estimated amounts of time that will be required to perform separate steps of the surgical procedure. For example, a first machine-learned model may generate output data indicating an estimated amount of time to perform a first step of the surgical procedure, a second machine-learned model may generate output data indicating an estimated amount of time to perform a second step of the surgical procedure, and so on. Surgery planning module 718 may then estimate the amount of OR time for the surgical procedure based on a sum of the estimated amounts of time that will be required to perform the steps of the surgical procedure. In some examples, the estimated amount of OR time for the surgical procedure may be equal to the sum of the estimated amounts of time that will be required to perform the steps of the surgical procedure. In some examples, the estimated amount of OR time for the surgical procedure may be equal to the sum of the estimated amounts of time that will be required to perform the steps of the surgical procedure plus some amount of time associated with starting and concluding the surgical procedure and/or transitioning between steps of the surgical procedure.
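As a small illustration of this summation, assuming one per-step estimate per planned step, the total could be computed as follows; the step names, per-step minutes, and the 15-minute allowance for starting, concluding, and transitioning are hypothetical values:

```python
# Estimated minutes per planned step, e.g., each produced by a separate machine-learned model.
step_estimates = {
    "incision and exposure": 20,
    "humeral head resection": 25,
    "glenoid reaming": 30,
    "implant placement and closure": 45,
}
transition_overhead = 15   # time to start/conclude the procedure and move between steps

estimated_or_time = sum(step_estimates.values()) + transition_overhead
print(f"Estimated OR time: {estimated_or_time} minutes")
```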
In some examples where surgery planning module 718 estimates the amount of OR time for the surgical procedure using a plurality of machine-learned models 720, a machine-learned model may directly output a value indicating the estimated amount of time to perform a step of the surgical procedure. For instance, at least one of the machine-learned models may be implemented as a neural network having an output neuron that generates a value indicating the estimated amount of time to perform a step of the surgical procedure. Thus, such neural networks may be similar to the neural network described in the example provided above where a single neural network is used to estimate the amount of OR time for the whole surgical procedure.
In other examples, one or more of the machine-learned models may be implemented as neural networks that have output neurons corresponding to different time periods. Thus, such neural networks may be similar to the neural network described in the example provided above where a single neural network has output neurons corresponding to different time periods and is used to estimate the amount of OR time for the whole surgical procedure. In this example, the time periods for output neurons of a neural network corresponding to an individual step of the surgical procedure may have intervals significantly shorter than the time periods used for estimating an amount of OR time for the whole surgical procedure. For instance, a first output neuron of a neural network corresponding to a specific step of the surgical procedure may correspond to 0 to 4 minutes, a second output neuron of the neural network may correspond to 5 to 9 minutes, and so on. In such examples, an output neuron of the neural network may output a confidence value that indicates a level of confidence that the step of the surgical procedure will be completed within the time period corresponding to the output neuron. Surgery planning module 718 may select the time period having the highest confidence as the estimated time amount of time required to complete the step of the surgical procedure.
In some examples, information describing the steps of the surgical procedure and the surgical items associated with the steps of the surgical procedure are presented to one or more users during the intraoperative phase of the surgical procedure. For instance, a surgeon may wear MR visualization device 213 during the surgical procedure and MR visualization device 213 may generate an MR visualization that includes virtual objects that indicate the steps of the surgical procedure and surgical items associated with specific steps of the surgical procedure. Presenting information describing the steps of the surgical procedure and the surgical items associated with the steps of the surgical procedure during the intraoperative phase of the surgical procedure may help to remind the surgeon and OR staff how they planned to perform the surgical procedure during performance of the surgical procedure. In some examples, the presented information may include checklists indicating what actions need to be performed in order to complete each step of the surgical procedure.
In some examples, surgery planning module 718 may automatically determine when a step of the surgical procedure is complete. For instance, surgery planning module 718 may automatically determine that a step of the surgical procedure is complete when a surgeon removes a surgical item associated with a next step of the surgical procedure from a storage location. In other examples, surgery planning module 718 may receive indications of user input, such as voice commands, touch input, button-push input, or other types of input to indicate the completion of steps of a surgical procedure. For instance, surgery planning module 718 may implement techniques as described in Patent Cooperation Treaty (PCT) application PCT/US2019/036978, filed Jun. 13, 2019 (the entire content of which is incorporated by reference), which describes example processes for presenting virtual checklist items in an extended reality (XR) visualization device and example ways of marking steps of surgical procedures as complete.
Based on a determination that a step of the surgical procedure is complete, surgery planning module 718 may record an amount of time that was used to complete the step of the surgical procedure. Surgery planning module 718 may then generate a new training data pair. The input vector of the training data pair may include the applicable values for the surgical procedure (e.g., anatomic parameter data, patient characteristic data, surgical parameter data, surgeon experience level, etc.). In an example where a neural network corresponding to the step of the surgical procedure has an output neuron that generates output indicating the estimated amount of time required to perform the step of the surgical procedure, the target value of the training data pair indicates the amount of time it took to complete the step of the surgical procedure. In an example where a neural network corresponding to the step of the surgical procedure has output neurons corresponding to different time periods, the target value of the training data pair may indicate the time period in which the step of the surgical procedure was completed. After generating the new training data pair, surgery planning module 718 may use the new training data pair to continue the training of the neural network. In this way, the neural network may continue to improve as the step of the surgical procedure is performed more times.
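A hedged sketch of capturing a completed step as a new training data pair, assuming step completion is detected as described above and the per-step model uses 5-minute time periods; all names and timings are illustrative:

```python
import time

def make_step_training_pair(input_vector, start_time, end_time, bin_minutes=5):
    """Record a completed step as an (input, target-time-period) training pair."""
    elapsed_minutes = (end_time - start_time) / 60.0
    target_bin = int(elapsed_minutes // bin_minutes)   # e.g., 0 -> 0-4 min, 1 -> 5-9 min
    return input_vector, target_bin

step_input = [11.2, 65.0, 1.0]           # anatomic, patient, and surgical parameter values
start = time.time()
# ... surgeon performs the step; completion detected via item pickup or voice command ...
end = start + 7 * 60                      # pretend the step took 7 minutes
print(make_step_training_pair(step_input, start, end))   # ([...], 1) -> 5-9 minute bin
```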
As indicated above, in some examples, surgery planning module 718 may estimate an updated amount of OR time during the intraoperative phase of the surgical procedure. In some examples, when surgery planning module 718 estimates the updated amount of OR time during the intraoperative phase of the surgical procedure, surgery planning module 718 may determine the updated estimated amount of OR time in the same way that surgery planning module 718 estimates the amount of OR time during the preoperative phase, albeit with updated input data. For instance, in some examples, surgery planning module 718 may use a single machine-learned model to estimate the amount of OR time. In other examples, surgery planning module 718 may use separate machine-learned models for different steps of the surgical procedure. In such examples, surgery planning module 718 may estimate the amount of OR time based on a sum of the amount of time elapsed so far during the surgical procedure and estimates of amounts of time to perform any unfinished steps of the surgical procedure.
In examples where surgery planning module 718 estimates the updated amount of OR time for the surgical procedure during the intraoperative phase, surgery planning module 718 may estimate the updated amount of OR time in response to various events. For instance, surgery planning module 718 may estimate the updated amount of OR time for the surgical procedure in response to receiving input indicating different anatomic parameter data than the anatomic parameter data obtained during the preoperative phase of the surgical procedure. For instance, surgery planning module 718 may estimate the updated amount of OR time for the surgical procedure in response to receiving input indicating the presence of additional osteophytes that were not accounted for in the preoperative phase.
In another example, surgery planning module 718 may estimate the updated amount of OR time for the surgical procedure in response to receiving input indicating different surgical parameter data than the surgical parameter data obtained during the preoperative phase of the surgical procedure. For instance, surgery planning module 718 may estimate the updated amount of OR time for the surgical procedure in response to receiving input indicating that a surgeon has chosen a different surgical option during the surgical procedure than was selected during the preoperative phase of the surgical procedure. For example, surgery planning module 718 may receive input indicating that the surgeon has chosen to use a different type of orthopedic prosthesis than the surgeon selected during the preoperative phase of the surgical procedure.
In some examples, surgery planning module 718 may determine, during the intraoperative phase of the surgical procedure, whether different steps of the surgical procedure will need to be performed based on updated anatomic parameter data and/or updated procedure data received during the intraoperative phase of the surgical procedure. For instance, in one example involving a shoulder arthroplasty, if one or more anatomic parameters are different from what was expected (e.g., erosion of the glenoid was deeper than expected), the surgeon may need to perform more or fewer steps during the surgical procedure (e.g., performing a bone graft). In another example involving a shoulder surgery, if the original plan for the surgical procedure was to implant a stemless humeral implant and the surgeon selected a stemmed humeral implant instead, the surgeon may need to perform additional steps, such as sounding and compacting spongy bone tissue in the patient's humerus.
In some examples, surgery planning module 718 may determine a modified surgical plan for the surgical procedure in accordance with one or more of the examples described in PCT application no. PCT/US2019/036993, filed Jun. 13, 2019. PCT application no. PCT/US2019/036993 describes obtaining an information model specifying a first surgical plan for an orthopedic surgery to be performed on a patient; modifying the first surgical plan during an intraoperative phase of the orthopedic surgery to generate a second surgical plan; and, during the intraoperative phase of the orthopedic surgery, presenting, with a visualization device, a visualization for display that is based on the second surgical plan.
In examples where surgery planning module 718 determines the estimated amount of OR time for the surgical procedure based on a sum of estimated amounts of times to perform steps of the surgical procedure, surgery planning module 718 may estimate the amounts of time for remaining steps of the surgical procedure according to the modified surgical plan. For instance, in some such examples, machine-learned models 720 may include a machine-learned model (e.g., a neural network) for each potential step in a type of surgical procedure. Surgery planning module 718 may determine an estimated time to complete a step based on output of the machine-learned model for the step. In such examples, when surgery planning module 718 determines the estimated amount of OR time for the surgical procedure during the intraoperative phase of the orthopedic procedure, surgery planning module 718 may use the machine-learned models corresponding to remaining steps of the surgical procedure as specified by an original or modified surgical plan for the surgical procedure. Surgery planning module 718 may estimate the amount of remaining OR time for the surgical procedure based on a sum of the estimated times to complete each of the remaining steps of the surgical procedure. In some examples, during the intraoperative phase of the surgical procedure, surgery planning module 718 may estimate the amount of OR time for the surgical procedure based on a sum of the amount of time elapsed so far during the surgical procedure and the estimated amounts of time required to complete each of the remaining steps of the surgical procedure.
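A sketch of the intraoperative re-estimate described here, computed as elapsed time plus per-step estimates for the steps remaining in the original or modified plan; the step list and the stand-in per-step estimator are illustrative:

```python
def updated_or_time_estimate(elapsed_minutes, remaining_steps, estimate_step_minutes):
    """Elapsed time so far plus estimated time for every step that is still to be done."""
    return elapsed_minutes + sum(estimate_step_minutes(step) for step in remaining_steps)

# Illustrative per-step estimates standing in for the per-step machine-learned models.
per_step_model_output = {"glenoid reaming": 32, "bone graft": 20, "implant placement": 45}
estimate = lambda step: per_step_model_output[step]

# 55 minutes in, with a bone graft step added to the modified surgical plan.
print(updated_or_time_estimate(55, ["glenoid reaming", "bone graft", "implant placement"], estimate))
```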
In examples where surgery planning module 718 estimates the amount of OR time for the surgical procedure using a plurality of machine-learned models 720, different machine-learned models in the plurality of machine-learned models 720 may have different inputs. For instance, in an example where surgery planning module 718 uses different neural networks to estimate amounts of time to perform different steps of the surgical procedure, a first neural network that estimates an amount of time to perform a first step of the surgical procedure may have input neurons that accept a different set of input from input neurons of a second neural network that estimates an amount of time to perform a second step of the surgical procedure. For instance, in one example, a first neural network may estimate an amount of time to perform a step of reaming a patient's glenoid and a second neural network may estimate an amount of time to perform a step of implanting a humeral prosthesis in the patient's humerus. In this example, the surgical parameter data may include data indicating a type of reaming bit and data indicating a type of humeral prosthesis. In this example, it may be unnecessary to provide the data indicating the type of humeral prosthesis to the first neural network and it may be unnecessary to provide the data indicating the type of reaming bit to the second neural network.
Furthermore, in the example of
As indicated above, in some examples, surgery planning module 718 may estimate an updated amount of OR time during the intraoperative phase of the surgical procedure. In such examples, surgery planning module 718 may output the indication of the estimated amount of OR time for the surgical procedure during the intraoperative phase of the surgical procedure. In some examples where surgery planning module 718 outputs the indication of the estimated amount of OR time for the surgical procedure during the intraoperative phase of the surgical procedure, surgery planning module 718 may output the indication of the estimated amount of OR time for the surgical procedure to the surgeon or other persons in the OR.
In some examples where surgery planning module 718 outputs the indication of the estimated amount of OR time for the surgical procedure during the intraoperative phase of the surgical procedure, surgery planning module 718 may output the indication of the estimated amount of OR time for the surgical procedure to users outside the OR, such as hospital scheduling staff. Thus, if the anatomic parameters or surgical parameters change during the surgical procedure and surgery planning module 718 determines that the surgical procedure will run long, the hospital scheduling staff may cancel or reschedule one or more surgical procedures due to occur in the OR after completion of the surgical procedure on the current patient. Conversely, if the anatomic parameters or surgical parameters change during the surgical procedure and surgery planning module 718 determines that the surgical procedure will run short (e.g., because the surgeon determines that specific steps of the surgical procedure are unnecessary or cannot be performed), the hospital scheduling staff may add one or more surgical procedures to a schedule for the OR or move forward one or more surgical procedures scheduled for the OR. Advantageously, this may allow automatic updates regarding the amount of time the OR is expected to be in use without anyone outside the OR having to ask anyone inside the OR about the amount of time the OR is expected to be in use. This may reduce distraction and time pressure experienced by the surgeon, which may lead to better surgical outcomes.
In the example of
For instance, in one example, virtual planning system 701 may scan through a schedule for the OR chronologically and identify a first available unallocated time slot that has a duration longer than the estimated amount of OR time for the surgical procedure. An unallocated time slot is a time slot in which the OR has not been allocated for use.
In some examples where surgery planning module 718 generates confidence values for a plurality of time periods, the estimated amount of OR time for the surgical procedure may be the time period with the greatest confidence value. However, rather than using the first available unallocated time slot that virtual planning system 701 identifies as having a duration longer than the estimated amount of OR time for the surgical procedure, surgery planning module 718 may determine a cut-off time period. The cut-off time period is the time period immediately preceding the first-occurring time period that corresponds to a longer OR time than the time period having the greatest confidence value and that has a confidence value below a threshold. The threshold may be configurable (e.g., by hospital scheduling staff or other parties). Virtual planning system 701 may then determine the OR schedule using the cut-off time period instead of the time period having the greatest confidence value. In this way, virtual planning system 701 may build time into the OR schedule for possible time overruns during the surgical procedure.
As in the previous example, the estimated amount of OR time for the surgical procedure may be the time period with the greatest confidence value. However, in some examples, virtual planning system 701 may analyze a distribution of the confidence values and determine the OR schedule based on the distribution. For instance, virtual planning system 701 may determine that the distribution of confidence values is biased toward shorter time periods than the time period with the greatest confidence value. Accordingly, virtual planning system 701 may build in a smaller amount of time after the time period with the greatest confidence value. For instance, if the time periods are in 30-minute increments and the two time periods before the time period with the highest confidence value have confidence values almost as high as the highest confidence value, while the two time periods after it have confidence values significantly lower than the highest confidence value, virtual planning system 701 may identify an unallocated time slot that is only 30 minutes longer than the estimated amount of OR time. In contrast, in this example, if the confidence values of the two time periods after the time period with the highest confidence value are almost as high as the highest confidence value, virtual planning system 701 may identify an unallocated time slot that is 60 minutes longer than the time period having the highest confidence value.
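Combining the scheduling ideas above, a non-authoritative sketch might take the time period with the greatest confidence value, extend through later periods until the confidence falls below a configurable threshold (the cut-off), and then scan the OR schedule for the first unallocated slot long enough to hold that duration; all confidence values, bin widths, and schedule entries are illustrative:

```python
import numpy as np

bin_minutes = 30
confidences = np.array([0.05, 0.20, 0.40, 0.25, 0.07, 0.03])   # 0-29, 30-59, ... minutes

def cutoff_duration(confidences, bin_minutes, threshold=0.10):
    """Extend past the highest-confidence period until confidence drops below the threshold."""
    best = int(np.argmax(confidences))
    last = best
    while last + 1 < len(confidences) and confidences[last + 1] >= threshold:
        last += 1
    return (last + 1) * bin_minutes        # upper bound of the cut-off time period

def first_fitting_slot(schedule, needed_minutes):
    """Scan the OR schedule chronologically for the first unallocated slot that fits."""
    for slot in schedule:
        if not slot["allocated"] and slot["minutes"] >= needed_minutes:
            return slot
    return None

needed = cutoff_duration(confidences, bin_minutes)      # 120 minutes with these values
schedule = [
    {"start": "08:00", "minutes": 90,  "allocated": True},
    {"start": "09:30", "minutes": 90,  "allocated": False},
    {"start": "11:00", "minutes": 150, "allocated": False},
]
print(needed, first_fitting_slot(schedule, needed))
```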
While the techniques have been disclosed with respect to a limited number of examples, those skilled in the art, having the benefit of this disclosure, will appreciate numerous modifications and variations therefrom. For instance, it is contemplated that any reasonable combination of the described examples may be performed. It is intended that the appended claims cover such modifications and variations as fall within the true spirit and scope of the invention.
It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Operations described in this disclosure may be performed by one or more processors, which may be implemented as fixed-function processing circuits, programmable circuits, or combinations thereof, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute instructions specified by software or firmware that cause the programmable circuits to operate in the manner defined by instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. Accordingly, the terms "processor" and "processing circuitry," as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
Various examples have been described. These and other examples are within the scope of the following claims.
This application claims the benefit of U.S. Patent Application No. 62/942,956 filed on Dec. 3, 2019, the entire content of which is incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/062567 | 11/30/2020 | WO |

Number | Date | Country
---|---|---
62942956 | Dec 2019 | US