The present disclosure relates generally to systems, devices, and methods for determining or computing a length of an implant using image processing and analysis.
The bones and connective tissue of an adult human spinal column consist of more than 20 discrete bones coupled sequentially to one another by a tri-joint complex. The complex consists of an anterior disc and two posterior facet joints. The anterior portions of adjacent bones are cushioned by cartilage spacers referred to as intervertebral discs. The more than 20 bones of the spinal column are anatomically categorized into one of four classifications: cervical, thoracic, lumbar, or sacral. The cervical portion of the spine, which comprises the top of the spine up to the base of the skull, includes the first 7 vertebrae. The intermediate 12 bones are the thoracic vertebrae, which connect to the lower spine comprising the 5 lumbar vertebrae. The base of the spine comprises the sacral bones, including the coccyx.
The spinal column of bones is highly complex in that it includes over 20 bones coupled to one another, housing and protecting critical elements of the nervous system having innumerable peripheral nerves and circulatory bodies in close proximity. Despite its complexity, the spine is a highly flexible structure, capable of a high degree of curvature and twist in nearly every direction.
Genetic or developmental irregularities, trauma, chronic stress, tumors, and disease, however, can result in spinal pathologies which either limit this range of motion or threaten the critical elements of the nervous system housed within the spinal column. A variety of systems have been disclosed in the art which achieve immobilization by implanting artificial assemblies in or on the spinal column. These assemblies may be classified as anterior, posterior, or lateral implants. Lateral and anterior assemblies are coupled to the anterior portion of the spine, which is the sequence of vertebral bodies. Posterior implants generally comprise pairs of rods (“bilateral spinal support rods”), which are aligned along the axis along which the bones are to be disposed, and which are then attached to the spinal column by screws inserted through the pedicles.
Spinal stabilization or immobilization assemblies involve patient-specific geometries. Further, the size of some of the components, such as the bilateral spinal support rods, may not be determinable until the screws are positioned in the pedicles and the distance between the screws is known. Other variables that affect the size and/or length of an implant include the patient's lordosis or spinal curvature, the tools and instruments used in the procedure, and the doctor's preferences, such as the overhang of the spinal support rods on either end of the assembly.
The present disclosure describes systems, devices, and methods for image-based determination of implant lengths and/or other geometries. In one embodiment, a mobile computing device, such as a smartphone or a tablet, receives a radiographic image of a patient's spine, including at least one pedicle screw assembly. The mobile computing device may calibrate the image dimensions based on a calibration feature of known dimension in the image. For example, the calibration feature may include a radiographic (e.g., radiopaque) marker positioned on the patient, the radiographic marker associated with a known width, height, diameter, and/or other dimension. The mobile computing device may further receive one or more user inputs indicating rod length selection parameters. The rod length selection parameters may include an optional overhang measurement, screw length, estimated lordosis, and/or any other suitable parameter. The mobile computing device identifies outer rod connection points in at least one radiographic image, the outer rod connection points associated with the desired span of the rod in the patient's spine. Based on the calibrated image dimensions, the outer rod connection points, and the rod length selection parameters, the mobile computing device determines or selects a rod length, and outputs the rod length to a display. In some aspects, the mobile computing device may include a memory device storing a database of available rod lengths and/or rod configurations, and select the rod length from the database.
In some aspects, the mobile computing device may be configured to output a graphical representation of the selected rod to the display, where the graphical representation is overlaid on at least one of the radiographic images to scale with the spine and pedicle screws. The user may then manipulate the graphical representation of the rod by, for example, translating and/or rotating the graphical representation of the rod into place at the desired connection points. If the curvature of the graphical representation of the rod does not match the curvature of the spine in the image, the user can manipulate the shape of the graphical representation to adjust the rod curvature. For example, the user may use a touch screen interface to adjust the curvature of the graphical representation of the rod. In some instances, the mobile computing device may update or reselect the rod length based on the rod curvature adjustment prompted by the user inputs.
According to an embodiment of the present disclosure, a method of determining a size of a connecting rod includes: obtaining, by an image acquisition unit of a mobile computing device, at least one radiographic image, wherein the at least one radiographic image is representative of a first medical device inserted into a body of a patient and a calibration marker; determining, by a processor of the mobile computing device based on the calibration marker in the at least one radiographic image, a calibration factor of the at least one radiographic image; receiving, from a user interface of the mobile computing device, one or more rod length selection parameters; determining, based on a shape of the first medical device, a first rod connection point associated with the first medical device; determining, based on at least one of a shape of a second medical device in the radiographic image or a first rod selection parameter of the one or more rod selection parameters, a second rod connection point; determining, by the processor of the mobile computing device based on the first rod connection point, the second rod connection point, the calibration factor of the at least one radiographic image, and the one or more rod selection parameters, a length of the connecting rod associated with a distance between the first rod connection point and the second rod connection point; and outputting, to a display, a visual representation of the length of the rod.
In some aspects, the at least one radiographic image is representative of the first medical device and the second medical device implanted into the patient, the first medical device includes a first radiographic indicia, the second medical device includes a second radiographic indicia, and the method further includes: determining, by the processor of the mobile computing device, a first deformation of the first indicia; and determining, by the processor of the mobile computing device, a second deformation of the second indicia; the determining the first rod connection point is further based on the first deformation; and the determining the second rod connection point is further based on the second deformation. In some aspects, the method further includes: determining a first relative length of the first indicia; and determining a second relative length of the second indicia, wherein the determining the first rod connection point is based on the first relative length of the first indicia, and wherein the determining the second rod connection point is based on the second relative length of the second indicia. In some aspects, the first indicia comprises a first plurality of indicia features, and the determining the first deformation of the first indicia comprises determining: a first relative distance between a first indicia feature and a second indicia feature of the first plurality of indicia features; and a second relative distance between a third indicia feature and a fourth indicia feature of the first plurality of indicia features. In some aspects, the determining the first deformation of the first indicia comprises: determining a first dimension of the first indicia feature; determining a second dimension of the second indicia feature; and comparing the first dimension to the second dimension. In some aspects, the receiving the one or more rod selection parameters comprises receiving, from the user interface of the mobile computing device, a first input indicating the second rod connection point in the one or more radiographic images. In some aspects, the receiving the one or more rod selection parameters comprises receiving, from the user interface of the mobile computing device, a second input indicating an overhang length of the connecting rod. In some aspects, the determining the rod length is based on the second input.
In some aspects, the receiving the one or more rod selection parameters comprises receiving, from the user interface of the mobile computing device, a third input indicating at least one of: a type of the first medical device; or a dimension of a fixation device associated with the first medical device, wherein the determining the rod length is based on the third input. In some aspects, the determining the rod length comprises selecting, based on the first rod connection point and the second rod connection point, a stored rod length from a database including a plurality of stored rod lengths. In some aspects, the method further includes: determining, by an orientation sensor of the mobile computing device, an orientation of the mobile computing device, wherein the determining the calibration factor is based on the orientation of the mobile computing device. In some aspects, the method further includes: outputting, to the user interface of the mobile computing device, a graphical indicator of a rod overlaid on a radiographic image of the one or more radiographic images, wherein the graphical indicator is scaled relative to the radiographic image based on the calibration factor. In some aspects, the method further includes: receiving, from the user interface of the mobile computing device, a fourth input indicating an adjustment to a curvature of the connecting rod; and updating the graphical indicator based on the adjustment to the curvature of the connecting rod. In some aspects, the method further includes: determining, based on the fourth input, an updated rod length; and outputting, to the user interface, a visual indication of the updated rod length.
According to another embodiment of the present disclosure, a mobile computing device includes: an image acquisition unit configured to obtain at least one radiographic image, wherein the at least one radiographic image is representative of a first medical device inserted into a body of a patient and a calibration marker; a user interface; and a processor in communication with the image acquisition unit and the user interface, the processor configured to: determine, based on the calibration marker in the at least one radiographic image, a calibration factor of the at least one radiographic image; receive, from the user interface of the mobile computing device, one or more rod length selection parameters; determine, based on a shape of the first medical device, a first rod connection point associated with the first medical device; determine, based on at least one of a shape of a second medical device in the radiographic image or a first rod selection parameter of the one or more rod selection parameters, a second rod connection point; determine, based on the first rod connection point, the second rod connection point, the calibration factor of the at least one radiographic image, and the one or more rod selection parameters, a length of a connecting rod associated with a distance between the first rod connection point and the second rod connection point; and output, to the user interface, a visual representation of the length of the connecting rod.
In some aspects, the at least one radiographic image is representative of the first medical device and the second medical device implanted into the patient, the first medical device includes a first radiographic indicia, and the second medical device includes a second radiographic indicia. In some aspects, the processor is further configured to: determine a first deformation of the first indicia; and determine a second deformation of the second indicia, wherein the processor is configured to determine the first rod connection point based on the first deformation. In some aspects, the processor is configured to determine the second rod connection point further based on the second deformation. In some aspects, the processor is configured to: determine a first relative length of the first indicia; and determine a second relative length of the second indicia, wherein the processor is configured to determine the first rod connection point based on the first relative length of the first indicia. In some aspects, the processor is configured to determine the second rod connection point based on the second relative length of the second indicia. In some aspects, the first indicia comprises a first plurality of indicia features, and the processor is configured to determine the first deformation of the first indicia based on: a first relative distance between a first indicia feature and a second indicia feature of the first plurality of indicia features; and a second relative distance between a third indicia feature and a fourth indicia feature of the first plurality of indicia features. In some aspects, the processor is configured to determine the first deformation of the first indicia based on: a first dimension of the first indicia feature; a second dimension of the second indicia feature; and a comparison of the first dimension to the second dimension.
In some aspects, the processor configured to receive the one or more rod selection parameters comprises the processor configured to receive, from the user interface of the mobile computing device, a first input indicating the second rod connection point in the one or more radiographic images. In some aspects, the processor configured to receive the one or more rod selection parameters comprises the processor configured to receive, from the user interface of the mobile computing device, a second input indicating an overhang length of the connecting rod. In some aspects, the processor is configured to determine the rod length based on the second input. In some aspects, the processor configured to receive the one or more rod selection parameters comprises the processor configured to receive, from the user interface of the mobile computing device, a third input indicating at least one of: a type of the first medical device; or a dimension of a fixation device associated with the first medical device. In some aspects, the processor is configured to determine the rod length based on the third input. In some aspects, the processor configured to determine the rod length comprises the processor configured to select, based on the first rod connection point and the second rod connection point, a stored rod length from a database including a plurality of stored rod lengths. In some aspects, the processor is further configured to: determine, based on orientation data from an orientation sensor of the mobile computing device, an orientation of the mobile computing device. In some aspects, the processor is configured to determine the calibration factor further based on the orientation of the mobile computing device. In some aspects, the processor is further configured to: output, to the user interface of the mobile computing device, a graphical indicator of a rod overlaid on a radiographic image of the one or more radiographic images, wherein the graphical indicator is scaled relative to the radiographic image based on the calibration factor. In some aspects, the processor is further configured to: receive, from the user interface of the mobile computing device, a fourth input indicating an adjustment to a curvature of the connecting rod; and update the graphical indicator based on the adjustment to the curvature of the connecting rod. In some aspects, the processor is further configured to: determine, based on the fourth input, an updated rod length; and output, to the user interface, a visual indication of the updated rod length.
Additional aspects, features, and advantages of the present disclosure will become apparent from the following detailed description.
For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features.
Although similar reference numbers may be used to refer to similar elements for convenience, it can be appreciated that each of the various example embodiments may be considered to be distinct variations.
Exemplary embodiments will now be described hereinafter with reference to the accompanying figures, which form a part hereof, and which illustrate examples by which the exemplary embodiments, and equivalents thereof, may be practiced. As used in the disclosures and the appended claims, the terms “embodiment,” “example embodiment” and “exemplary embodiment” do not necessarily refer to a single embodiment, although they may, and various example embodiments, and equivalents thereof, may be readily combined and interchanged, without departing from the scope or spirit of the present embodiments. Furthermore, the terminology as used herein is for the purpose of describing example embodiments only and is not intended to limit the embodiments. In this respect, as used herein, the term “plate” may refer to any substantially flat structure or any other three-dimensional structure, and equivalents thereof, including those structures having one or more portions that are not substantially flat along one or more axes. Furthermore, as used herein, the terms “opening,” “recess,” “aperture,” and equivalents thereof, may include any hole, space, area, indentation, channel, slot, bore, and equivalents thereof, that is substantially round, oval, square, rectangular, hexagonal, and/or of any other shape, and/or combinations thereof, and may be defined by a partial, substantial or complete surrounding of a material surface. Furthermore, as used herein, the term “in” may include “in” and “on,” and the terms “a,” “an” and “the” may include singular and plural references. Furthermore, as used herein, the term “by” may also mean “from,” depending on the context. Furthermore, as used herein, the term “if” may also mean “when” or “upon,” depending on the context. Furthermore, as used herein, the words “and/or” may refer to and encompass any and all possible combinations of one or more of the associated listed items.
In some aspects, the length of the connecting rod may be based on a variety of factors, including anatomical factors and implant dimensions. For example, the length of the connecting rod may correspond to the distance between the lowest-placed pedicle screw assembly and the highest-placed pedicle screw assembly. In this regard, the surgeon may not select the rod length until at least some of the pedicle screw assemblies have been placed. Thus, the connecting rod may be selected during surgery, which can further complicate the procedure. The present disclosure provides systems, devices, and methods for determining the length of a connecting rod based on image data representative of a patient's spine and at least one implant assembly (e.g., a pedicle screw assembly) connected to the spine. In this regard, the mobile computing device 100 can be configured to receive the image of the screw assemblies 22 and calibration marker 24 on the display 20, and to process the image to identify the outer connection points for calculating the length of the connecting rod. The mobile computing device 100 may output a user interface 26 including a reproduction or duplication of the image on the display 20. The user interface 26 may assist the user in controlling the mobile computing device 100 during one or more steps of a rod length calculation method. The calculation may be based on determining a relative size (e.g., in pixels) and/or deformation of the calibration marker 24, and comparing the determined size to a known or stored size of the calibration marker. In some embodiments, the mobile computing device 100 may obtain the image from the display 20 using a camera of the mobile computing device 100. In other embodiments, the mobile computing device 100 may receive image data as obtained by the medical imaging device via the interface 30. The interface 30 may provide for a wired and/or a wireless link with the mobile computing device 100.
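As a non-limiting illustration of the comparison described above (this sketch is not part of the disclosed implementation; the function name and the example numbers are hypothetical), a millimeters-per-pixel calibration factor may be derived in Python as follows:

```python
# Minimal sketch: derive a calibration factor (mm per pixel) by comparing the
# known marker dimension to the dimension measured in the image. Illustrative only.

def calibration_factor(known_diameter_mm: float, measured_diameter_px: float) -> float:
    """Return millimeters per pixel for the radiographic image."""
    if measured_diameter_px <= 0:
        raise ValueError("measured marker diameter must be positive")
    return known_diameter_mm / measured_diameter_px

# Example: a 25.4 mm calibration marker that spans 118 pixels in the image.
mm_per_px = calibration_factor(25.4, 118.0)   # approximately 0.215 mm per pixel
```

The resulting factor can then be applied to any pixel distance measured in the same image.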
The tower bodies 32, 34, 36, 38 each have a corresponding length and width. For example, the lowest tower body 32 has a first length 42 and a first width 46. The highest tower body 38 has a second length 44 and a second width 48. The terms “lowest” and “highest” may refer to the relative placement on the spine, where the higher bodies are closer to the patient's head, and the lower bodies are closer to the patient's tailbone.
Two calibration markers 24, 28 are also shown in the image. The markers 24, 28 are shown having circular donut shapes. In other embodiments, one or both of the markers 24, 28 have solid circular shapes, rectangular shapes, triangular shapes, hexagonal shapes, elliptical shapes, and/or any other suitable shape. The calibration markers 24, 28 may be adhesive markers including a radiopaque material and an adhesive backing to attach to the patient's skin. The calibration markers 24, 28 have one or more known dimensions. For example, each calibration marker may have an outside diameter 52 and an inside diameter 54. In other embodiments, each calibration marker may have a height and a width. The image shows the calibration markers 24, 28 as somewhat skewed or deformed. This may be a result of the relative orientations of the calibration markers 24, 28 with respect to the imaging device.
In some embodiments, a computing device, such as the mobile computing device 100, is configured to identify the rod connection points 62, 64 in the image using image processing, and to receive one or more rod selection parameters from a user.
Based on the rod connection points 62, 64 derived using image processing, and on the rod selection parameters, the computing device may determine or select a rod length for implanting into the patient to span the rod connection points 62, 64 according to the rod selection parameters (e.g., optional overhang, additional curvature). The computing device may then output an indication of the rod length to a user interface device, such as a touchscreen display or an external display.
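For illustration only, and assuming a hypothetical list of stocked rod lengths, the determination described above might be sketched in Python as a pixel-space distance converted by the calibration factor, padded by any selected overhang, and matched to the closest available stock length:

```python
import math

# Illustrative sketch only: the stock list, coordinates, and values are hypothetical.
STOCK_LENGTHS_MM = [35, 40, 45, 50, 55, 60, 70, 80, 90, 100, 120]

def select_rod_length(p1, p2, mm_per_px, overhang_mm=0.0):
    span_px = math.dist(p1, p2)                       # pixel distance between connection points
    required_mm = span_px * mm_per_px + 2 * overhang_mm
    candidates = [l for l in STOCK_LENGTHS_MM if l >= required_mm]
    return min(candidates) if candidates else None    # None -> no stocked rod is long enough

# Example: connection points 62 and 64 found at (210, 340) and (215, 610) pixels.
length = select_rod_length((210, 340), (215, 610), mm_per_px=0.215, overhang_mm=5.0)
```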
In some aspects, multiple views of the spine 10 and the spinal fixation assembly may be used or input into the computing device for determining the rod length.
In another embodiment of the present disclosure, radiographic markers can be incorporated into the pedicle screw receiver assemblies, where the radiographic markers are used as calibration markers to determine a conversion or calibration factor for the image.
The markings 72, 74 are examples of such radiographic markers incorporated into the pedicle screw receiver assemblies.
It will be understood that the patterns of the markings 72, 74 are exemplary, and that other patterns may be used without departing from the scope of the present disclosure.
In an embodiment, a method 600 of determining a length of a connecting rod includes, at step 602, obtaining one or more radiographic images of a patient's spine.
In some aspects, step 602 includes obtaining the image using a camera of the mobile computing device 100.
The mobile computing device 100 then receives the radiographic image 110 of the spine and one or more pedicle screw assemblies. In the illustrated embodiment, the view is a lateral view. In other embodiments, the view may be an A/P view, a P/A view, or any other suitable view.
In some aspects, step 602 includes selecting views and capturing images multiple times. For example, each captured image may be associated with a different view. Accordingly, the user may repeat the image capture actions described above for each desired view.
In some embodiments, step 602 includes receiving radiographic images directly from a radiographic imaging system (e.g., an x-ray imaging system). For example, the radiographic imaging system may include an interface (e.g., the interface 30) configured to communicate the radiographic images to the mobile computing device 100 over a wired and/or wireless link.
In some aspects, calibrating the image 110 includes overlaying a shape (e.g., a bullseye) on the calibration marker in the image 110, and resizing the overlaid shape to match the known dimension of the calibration marker in one or more respects.
As explained above, one or more rod connection points or entry points may be identified by the mobile computing device 100 by identifying one or more components of a pedicle screw assembly and locating the connection point based on a reference feature of the pedicle screw assembly. In other embodiments, a rod connection point may be input by a user to the mobile computing device 100 using a user interface (e.g., the user interface 130).
In some embodiments, the mobile computing device 100 outputs a graphical indicator 141 of the connecting rod overlaid on the image 110, where the graphical indicator 141 is scaled relative to the image 110 based on the calibration factor. The user may adjust the curvature of the graphical indicator 141 using the user interface, for example via a touch screen input.
In another embodiment, the mobile computing device 100 may be configured to automatically adjust the curvature of the graphical indicator 141 by identifying a rod connection point for each pedicle screw assembly, or for each vertebral body to which the pedicle screw assemblies will be attached. For example, the mobile computing device 100 may be configured to adjust a fit function associated with the graphical indicator 141 until the graphical indicator 141 overlaps all the connection points.
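One possible realization of such a fit function, offered only as an assumption-laden sketch (a simple quadratic polynomial fit; the connection-point coordinates and values are hypothetical, not taken from the disclosure), fits a curve through the connection points and integrates its arc length to update the rod length:

```python
import numpy as np

def fitted_rod_length(points_px, mm_per_px, overhang_mm=0.0, samples=200):
    """Fit a quadratic through the connection points and return an arc-length-based rod length."""
    pts = np.asarray(points_px, dtype=float)
    # Parameterize along the cranio-caudal (y) axis and fit x = f(y).
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], deg=2)
    y = np.linspace(pts[:, 1].min(), pts[:, 1].max(), samples)
    x = np.polyval(coeffs, y)
    arc_px = np.sum(np.hypot(np.diff(x), np.diff(y)))   # piecewise-linear arc length in pixels
    return arc_px * mm_per_px + 2 * overhang_mm

# Connection points for four screw assemblies (pixel coordinates, hypothetical).
length_mm = fitted_rod_length([(212, 300), (206, 380), (204, 460), (210, 540)],
                              mm_per_px=0.215, overhang_mm=5.0)
```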
A method 800 of determining a rod length based on one or more alignment features of one or more receiver bodies is described below.
At step 802, a mobile computing device obtains one or more radiographic images of a patient's spine including one or more receiver bodies. At least one of the receiver bodies may include one or more alignment features. The alignment features may be used to calibrate or scale the image, and/or to determine the location of the receiving portions of the receiver bodies. The receiving portions of the receiver bodies may be saddles or channels in which the rod can be seated and tightened down to fix the rod to the receiver bodies.
At step 804, the mobile computing device identifies one or more alignment features in the one or more radiographic images. In some aspects, the mobile computing device uses image processing and analysis techniques to search the image for shapes and objects matching a type of profile. In some aspects, step 804 includes edge or boundary detection, segmentation, and/or any other suitable image analysis technique to identify the alignment features. In some aspects, step 804 includes determining a reference dimension or calibration factor based on the size of the alignment features in the image. For example, the mobile computing device may store a known absolute size (e.g., width, height, diameter) of the one or more alignment features in a memory, and may compare the known absolute size to a determined relative size (e.g., in pixels) to determine the calibration factor.
At step 806, the mobile computing device calculates, based on the one or more radiographic images, a distortion of the one or more alignment features. For example, if the alignment feature is a circle, the mobile computing device may be configured to determine an eccentricity of a detected ellipse corresponding to the detected alignment feature. In another example, if the alignment feature is a square, the mobile computing device may determine the distortion of the alignment feature based on a comparison of the width, height, and tilting angle of the alignment feature. At step 808, the mobile computing device determines an orientation of the alignment features based on the distortion determined at step 806. The orientation may include, for example, a tilt of the alignment feature in one or more dimensions (e.g., X, Y, Z). In some aspects, the mobile computing device may determine a relative spacing between components of the alignment features, and compare the relative spacing between the alignment features to the relative sizes of the alignment features. Because the alignment features are incorporated into the structure of the receiver bodies, the tilt and orientation of the receiver bodies may be inferred from the tilt and orientation of the alignment features.
At step 810, the mobile computing device calculates the locations of one or more receiving portions of the receiver bodies based on the orientation and location of the alignment features as determined in steps 804-808. In some aspects, step 810 may be performed based on geometrical information of the alignment features, the receiver bodies, and/or the bone screws attached to the receiver bodies. For example, the mobile computing device may retrieve, from a memory or a database, a relative spacing between the alignment feature and the receiving portion. Based on the known relative spacing, the mobile computing device may calculate or estimate the location of the receiving portion.
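A rough sketch of steps 806 through 810 under stated assumptions is given below. It uses OpenCV to fit an ellipse to a circular alignment feature, estimates the out-of-plane tilt from the minor/major axis ratio, and applies a known offset from the feature centroid to approximate the location of the receiving portion. The preprocessing, the offset convention, and the function name are illustrative assumptions, not the disclosed algorithm.

```python
import math
import cv2
import numpy as np

def estimate_tilt_and_receiver(binary_mask: np.ndarray, offset_px: float):
    """Estimate alignment-feature tilt and an approximate receiving-portion location."""
    # binary_mask: single-channel uint8 image in which the alignment feature is white.
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no alignment feature found in mask")
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), axes, angle_deg = cv2.fitEllipse(largest)
    minor_axis, major_axis = sorted(axes)
    # A circle viewed at tilt t projects to an ellipse whose axis ratio is cos(t).
    tilt_rad = math.acos(min(1.0, minor_axis / major_axis))
    # Assume the receiving portion lies a known in-image offset from the feature
    # centroid along the ellipse orientation (illustrative convention only).
    theta = math.radians(angle_deg)
    receiver_xy = (cx + offset_px * math.cos(theta), cy + offset_px * math.sin(theta))
    return tilt_rad, receiver_xy
```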
At step 812, the mobile computing device calculates a distance between the receiving portions of the outermost receiver bodies. In some aspects, step 812 is performed based on a calibration factor determined at step 804.
At step 814, the mobile computing device calculates or selects a rod size based on the distance calculated at step 812. In some aspects, the rod size may be calculated further based on one or more rod selection parameters as described above. For example, the mobile computing device may determine the rod size based on the distance calculated at step 812 and a selected overhang.
Although described as being performed by a mobile computing device, it will be understood that one or more aspects of the methods 600 and/or 800 may be performed by a remote computing device other than the mobile computing device. For example, in some aspects, the mobile computing device 100 is configured to transmit image data and/or rod selection parameters input into the mobile computing device, via a network, to a remote server. The remote server may be configured to perform one or more actions of the methods 600, 800 to determine or select a rod length, adjust a rod curvature, generate a graphical indicator of the rod, and/or adjust the rod length based on the curvature. Accordingly, the mobile computing device 100 may operate in concert with one or more remote computing devices to determine the rod length. In some embodiments, the remote computing device may determine the rod length based on historical measurements and training data obtained and organized using artificial intelligence or machine learning techniques. For example, the remote computing device may be configured to receive the image and rod selection parameters from the mobile computing device, input the image data and rod selection parameters into a machine learning model trained on the historical data, and transmit, to the mobile computing device, an indication of the determined rod length.
The mobile computing device 900 includes a processing circuit 902, a camera 916, a transceiver 918, and a touchscreen display 920. The touchscreen display 920 may be configured to output a user interface, such as one or more of the user interfaces described above.
The camera 916 may include a digital camera, such as a charge-coupled device (CCD) camera. The camera 916 may include an imaging sensor and one or more optical components, such as lenses, mirrors, and/or filters. The camera 916 may be suitable for obtaining one or more images of a medical display. For example, the camera 916 may be in communication with the memory 906 of the processing circuit 902 for storing the one or more images. The transceiver 918 may include a wireless transceiver for communicating according to one or more radio access technologies (RATs) or wireless protocols, including Wi-Fi™, Bluetooth®, near field communication (NFC), ultra-wideband (UWB), long-term evolution (LTE), 5G New Radio (NR), and/or any other suitable type of wireless communication. The transceiver 918 may be configured to communicate with a network 930 to exchange data associated with the rod length selection methods. For example, the transceiver 918 may be configured to receive configurations and/or parameters for selecting a rod length. Further, as explained above, in some aspects, the mobile computing device 900 may be configured to operate in concert with a remote computing device of the network 930 to indicate the rod length to the user via the mobile computing device 900.
The processing circuit 902 includes a processor 904 and a memory 906 storing computer program instructions 908 executable by the processor 904. For example, the processor 904 may include a central processing unit (CPU), and the instructions 908 may include computer program code for performing one or more of the actions of methods 600 and/or 800. The processing circuit 902 also includes a plurality of modules and units, including an image acquisition unit 910, an image processing module 912, a rod selection module 914, and a training module 922. In some aspects, one or more of the units and modules of the processing circuit 902 may include physical hardware components (e.g., application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), processors, etc.). In other aspects, one or more of the units and modules of the processing circuit 902 may include virtual modules facilitated by the instructions 908 of the memory 906, which modules are executable by the processor 904. In another aspect, the modules and units of the processing circuit 902 may include a combination of physical and virtual modules.
The image acquisition unit 910 may include hardware and/or software for receiving images representative of a patient's spine or other anatomies. In some aspects, the image acquisition unit 910 is configured to control acquisition of images using the camera 916. In another aspect, the image acquisition unit 910 is configured to receive images from a remote device (e.g., from a medical imaging system via the interface 30 in
The image processing module 912 may include hardware and/or software for performing one or more image processing steps to identify features in the images received from the image acquisition unit 910. For example, the image processing module 912 may be configured to recognize and measure pedicle screw assemblies, spinal vertebrae, calibration markers, alignment features, and/or any other suitable image feature. For example, the image processing module 912 may be configured to perform one or more actions of steps 604, 608, 610, and/or 616 of the method 600. In another example, the image processing module 912 may be configured to perform one or more actions of steps 804, 806, 808, 810, 812, and/or 814 of the method 800. The image processing module 912 may be configured to use various techniques to perform these actions, including boundary detection, segmentation, object recognition, pattern recognition, feature extraction, filtering, point feature matching, and/or any other suitable technique.
The rod selection module 914 may include hardware and/or software for determining or selecting a rod length based on data extracted by the image processing module 912 and on user inputs from the touchscreen display 920. The rod selection module 914 may be configured to determine the rod length according to an algorithm or formula stored in the instructions 908. For example, the rod selection module 914 may be configured to select the rod length based on a sum of the computer-determined distance between two rod connection points and twice a selected overhang length. The overhang length may be selected by the user, or may be preconfigured in the instructions 908. In some aspects, the rod selection module 914 may be configured to output the selected rod length to the touchscreen display 920.
The training module 922 may include hardware and/or software for improving rod length selection using machine learning data. For example, the training module 922 may be configured to store, update, and output a model based on training data. The training data may be received from the network 930, and may be associated with historical rod selection procedures obtained over time. Further, the training module 922 may be configured to prepare training data based on the rod selection performed by the modules of the processing circuit, and to transmit the training data to the network 930 to update a machine learning model. In some aspects, if the user determines that the rod selection module 914 did not output a correct rod length, or that the algorithm for rod length selection could benefit from a correction, the training module 922 may be configured to receive inputs from the user associated with the correction, and update the machine learning model based on the corrections.
The mobile computing device 900 may further include an orientation sensor 924. In some aspects, the orientation sensor 924 includes an accelerometer. In other aspects, the orientation sensor 924 includes a gyroscope. In other aspects, the orientation sensor 924 includes both an accelerometer and a gyroscope. The orientation sensor 924 may be configured to obtain orientation data and provide the orientation data to the processing circuit 902. For example, the orientation data may indicate or represent an angular orientation of the mobile computing device 900 relative to a horizontal plane, or to a vertical plane. In some aspects, the processing circuit 902 of the mobile computing device 900 may be configured to perform one or more aspects of the methods 600, 800 described above based on the orientation data. For example, if the camera 916 of the mobile computing device 900 is used to obtain a photo of a medical display showing the radiographic image, the processing circuit may be configured to take into account the orientation of the mobile computing device when determining the calibration factor, the rod connection points, and/or any other suitable aspect of the rod length selection methods. In another aspect, the processing circuit 902 may be configured to control the camera 916 based on the input from the orientation sensor 924. For example, the processing circuit 902 may be configured to disable an image capture feature until the orientation data from the sensor 924 indicates that the mobile computing device is within a certain range of angles from vertical. This may assist the user in obtaining an image of the medical display, such that the field of view lies in a plane parallel to the plane of the medical display.
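As a hedged sketch of this capture-gating idea (the threshold, the accelerometer axis convention, and the function name are assumptions, not part of the disclosure), the check might look like this in Python:

```python
import math

def capture_allowed(accel_x: float, accel_y: float, accel_z: float,
                    max_offset_deg: float = 10.0) -> bool:
    """Allow image capture only when the device is held close to vertical."""
    g = math.sqrt(accel_x**2 + accel_y**2 + accel_z**2)
    if g == 0:
        return False
    # When the device is upright, gravity lies mostly in the screen plane and the
    # component along the screen normal (z) is near zero, so the pitch is near 0.
    pitch_deg = math.degrees(math.asin(max(-1.0, min(1.0, accel_z / g))))
    return abs(pitch_deg) <= max_offset_deg
```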
As further explained above, in some instances, a rod selection algorithm may be performed by a combination of the mobile computing device 900 and a remote computing device (e.g., a server) of the network 930. For example, the mobile computing device 900 may serve as an interface for the user in the surgical environment for obtaining the images, inputting rod selection parameters, visualizing a representation of the selected rod length on the display, and receiving additional inputs indicating a correction to the curvature and/or length of the graphical indicator of the rod. Further, one or more of the image processing and rod selection calculation steps may be performed wholly or in part by the remote computing device of the network 930, and the results may be transmitted to the mobile computing device 900 for indication to the user.
At action 1002, a session is started. In some aspects, starting the session includes initiating a software application. For example, starting the session may include opening a smartphone application or “app.” In some aspects, the application may comprise an Apple iOS application, a Google Android application, a Microsoft Windows application, a web application, a Linux-based application, and/or any other suitable type of software application. In some aspects, starting or launching the application may include or involve initiating an image capture functionality. In another aspect, launching the application may involve authenticating the user using biometric authentication, password authentication, device ID authentication, and/or any other suitable type of authentication. Starting the session may include or involve launching a user interface. The user interface may provide interactive interface objects, such as buttons, drop-down menus, text fields, and/or any other suitable type of interface object.
At action 1004, the method 1000 includes capturing an image. In some aspects, capturing the image includes inputting an image capture command on the mobile computing device. For example, the user may select a capture image button. In other aspects, the mobile computing device may capture the image automatically in response to detecting a suitable image in frame. In some aspects, the captured image may be an image of a computer monitor or television displaying human anatomy. For example, the mobile computing device may be configured to detect one or more shapes corresponding to a pedicle screw assembly, spinal anatomy, computer monitor, and/or any other suitable object. In response to detecting the object, the mobile computing device may capture the image. In some aspects, the image may be saved to random access memory. The image may comprise or represent spinal vertebrae, pedicle screw assemblies, intervertebral spacers, calibration markers, and/or any other suitable object in a field of view.
At action 1006, the method 1000 includes applying an automated text scrubber to the captured image to remove text, such as patient identifying information, from the image.
At action 1010, the method 1000 includes overlaying an image of the rod on the screen. Aspects of action 1010 may include or involve aspects of step 612 of the method 600.
At action 1012, the method 1000 includes determining whether to save the captured image with the text removed. As explained above, in some aspects, capturing the image may include storing the captured image to a random access memory (RAM). Action 1012 may include determining whether to save the image to non-volatile storage on the mobile computing device, to a server, or both. In some aspects, the mobile computing device determines whether to save the image based on an input from the user. For example, the mobile computing device may output a screen prompting the user to select an option “yes” or “no,” where “yes” causes the mobile computing device to store the image. In some aspects, the determination of whether to save the image may be based on one or more questions and the corresponding responses provided by the user. For example, the method 1000 may include prompting the user to select whether the data should be stored for data analytics purposes.
At action 1014, in response to a determination not to save the image, the method 1000 is complete. In some aspects, action 1014 includes outputting, to a user display, an indication that the rod selection procedure is complete. In some aspects, action 1014 includes completing the session started at action 1002. At action 1016, in response to determining to save the image, the method 1000 proceeds to a second phase. In some aspects, the second phase involves a verification that all relevant text has been removed, redacted, or otherwise scrubbed from the image.
At action 1016, the method 1000 includes receiving an input from the user indicating whether all patient identifying information (PII) has been removed from the image. In some aspects, action 1016 includes displaying the image after the automated text scrubber has been applied according to step 1006. In some aspects, action 1016 includes outputting “yes/no” buttons on the screen via which the user can indicate whether the PII has been removed. In another aspect, action 1016 includes receiving a “continue” command from the user. For example, action 1016 may include providing a user interface whereby the user may manually remove any PII or other text, and if all PII has been removed, the user may select “continue.”
At action 1018, the method 1000 includes, in response to receiving an indication that not all PII has been removed, performing a manual text scrubbing operation. In some aspects, the manual text scrubbing operation comprises receiving inputs from the user identifying portions of the image associated with text that has not yet been removed. For example, action 1018 may include receiving user inputs identifying a bounding box or bounding area inside of which PII is contained. In another aspect, action 1018 includes a virtual eraser function or virtual marking function which blurs, obscures, blacks out, whites out, or otherwise scrubs remaining text in the image. After the user has manually scrubbed the text, a renewed prompt may appear on the screen allowing the user to indicate whether all PII has been removed from the image. The user may again select “yes” or “no” to indicate whether the PII has been removed. If the user indicates “no”, action 1018 is repeated. If the user indicates “yes,” or “continue,” the application may proceed to step 1020.
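A minimal sketch of the bounding-box scrubbing operation, assuming the Pillow imaging library and using purely hypothetical file names and coordinates, is shown below:

```python
from PIL import Image, ImageDraw

def scrub_regions(in_path: str, out_path: str, boxes) -> None:
    """Black out user-identified regions of the captured image."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for box in boxes:                 # each box: (left, top, right, bottom) in pixels
        draw.rectangle(box, fill="black")
    img.save(out_path)

# Example: remove a patient-name banner at the top-left of the capture.
scrub_regions("capture.png", "capture_scrubbed.png", [(0, 0, 400, 60)])
```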
At action 1020, the method 1000 includes, in response to receiving an indication that the PII has been removed, encrypting the image.
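The disclosure does not specify an encryption algorithm or library; as one hedged example, the scrubbed image bytes could be encrypted with a symmetric key using the third-party cryptography package before storage (key handling is simplified for illustration only):

```python
from cryptography.fernet import Fernet

def encrypt_image(in_path: str, out_path: str, key: bytes) -> None:
    """Encrypt the scrubbed image bytes with a symmetric key before storage."""
    with open(in_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(out_path, "wb") as f:
        f.write(ciphertext)

key = Fernet.generate_key()           # in practice, derive and store the key securely
encrypt_image("capture_scrubbed.png", "capture_scrubbed.enc", key)
```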
At action 1022, the method 1000 includes saving or storing the encrypted image. In some aspects, storing the image may include storing the image to the mobile computing device. In another aspect, storing the image may include storing the image to a server. For example, storing the image may include transmitting the image to a remote server or data center. In another aspect, storing the image may include transmitting the image to a local server. In some aspects, the encryption of the image may allow the image to be accessed or viewed only on the same device which performed the steps of the method 1000. In another aspect, the encryption may include a password protection feature such that accessing the image or its associated data requires a password to be input.
At action 1024, the method 1000 is complete, and the session is terminated. In some aspects, action 1024 may include closing the software application. In some aspects, action 1024 may include clearing the mobile computing device's cache, erasing any images or data from any volatile or nonvolatile memory, and/or transmitting a user log to a server.
While various embodiments in accordance with the disclosed principles have been described above, it should be understood that they have been presented by way of example only, and are not limiting. Thus, the breadth and scope of the invention(s) should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the claims and their equivalents issuing from this disclosure. Furthermore, the above advantages and features are provided in described embodiments, but shall not limit the application of such issued claims to processes and structures accomplishing any or all of the above advantages.
Additionally, the section headings herein are provided for consistency with the suggestions under 37 C.F.R. 1.77 or otherwise to provide organizational cues. These headings shall not limit or characterize the invention(s) set out in any claims that may issue from this disclosure. Specifically, a description of a technology in the “Background” is not to be construed as an admission that technology is prior art to any invention(s) in this disclosure. Furthermore, any reference in this disclosure to “invention” in the singular should not be used to argue that there is only a single point of novelty in this disclosure. Multiple inventions may be set forth according to the limitations of the multiple claims issuing from this disclosure, and such claims accordingly define the invention(s), and their equivalents, that are protected thereby. In all instances, the scope of such claims shall be considered on their own merits in light of this disclosure, but should not be constrained by the headings herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/050687 | 11/22/2022 | WO |

Number | Date | Country
---|---|---
63292649 | Dec 2021 | US