The present disclosure relates generally to improving surgical safety, and more specifically to techniques for automated detection of non-compliance to surgical protocols in a surgical environment such as an operating room (OR).
Operating rooms in which surgical procedures are conducted contain many devices capable of capturing data inputs, for example, images, sounds, and signals. During surgical events in such operating rooms, equipment is monitored by staff to ensure proper use. However, it is impractical for a staff member to observe all aspects and equipment of all surgeries occurring at an institution. Thus, injury and harm may come to a patient upon the improper use of medical equipment, such as a waste management system. Further, adding staff to monitor for effective use of equipment is inefficient and expensive. There is a lack of an automated system to improve patient care and outcomes through a systematic review of care against the defined criteria of surgical protocols, procedures, and environments.
This Summary introduces a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to limit the scope of the claimed subject matter nor identify key features or essential features of the claimed subject matter.
According to a first aspect, a method for preventing non-compliant use of a medical waste collection system is provided. The medical waste collection system includes a vacuum source and a suction tube configured to provide suction at a surgical site of a patient. The method comprising: receiving, at one or more processors, signals captured by one or more devices positioned within an operating room; providing the signals to a trained machine-learning model trained on signals representative of nominal and adverse medical events; determining based on the signals, with the trained machine-learning model, the non-compliant use of the suction tube as being used or to be used in a manner to produce a potential adverse medical event; and terminating or preventing, by the one or more processors, operation of the vacuum source during the non-compliant use.
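The control flow of the first aspect can be sketched as follows. This is a minimal illustration only: the `classify` function stands in for the trained machine-learning model, and the `VacuumSource` interface and signal format are hypothetical placeholders, not part of the disclosure.

```python
from dataclasses import dataclass


@dataclass
class VacuumSource:
    """Hypothetical stand-in for the waste collection system's vacuum source."""
    running: bool = True

    def terminate(self) -> None:
        self.running = False


def classify(signal: dict) -> str:
    """Placeholder for the trained machine-learning model: maps a captured
    signal to 'nominal' or 'adverse' (non-compliant use of the suction tube)."""
    return "adverse" if signal.get("suction_on_tissue") else "nominal"


def monitor(signals, vacuum: VacuumSource) -> bool:
    """Terminate the vacuum source as soon as non-compliant use is determined."""
    for signal in signals:
        if classify(signal) == "adverse":
            vacuum.terminate()
            return True  # non-compliant use detected
    return False


vacuum = VacuumSource()
detected = monitor([{"suction_on_tissue": False}, {"suction_on_tissue": True}], vacuum)
```

In a real system the classification would run on camera, microphone, and sensor streams in real time; the sketch only shows the determine-then-terminate interlock.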
According to a second aspect, a method of a surgical sponge management system including a plurality of sponges is provided. The method comprising: receiving, at one or more processors, signals captured by one or more devices positioned within an operating room, wherein the signals are directed to at least a surgical site of a patient during a surgical procedure; providing the signals to a trained machine-learning model trained on signals representative of nominal and adverse medical events; determining based on the signals, with the trained machine-learning model, that one or more of the plurality of sponges has been directed within the patient at the surgical site; receiving, at a user interface, a user input that the surgical procedure has concluded or is concluding; monitoring based on the signals, with the trained machine-learning model, whether the one or more sponges, previously directed within the patient at the surgical site, have been removed; determining, with the trained machine-learning model, the non-compliant use of the surgical sponge management system implicating a potential adverse medical event from at least one of the one or more sponges potentially not having been removed from the patient; and providing a notification or alarm based on the non-compliant use, wherein the notification includes textual or graphical corrective instructions that are specific to activity implicating the potential adverse medical event.
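The sponge-accounting logic underlying the second aspect can be illustrated with a simple in/out ledger. In the disclosure this determination is made by a trained machine-learning model from captured signals; the event tuples and IDs below are hypothetical.

```python
def unreturned_sponges(events):
    """Track sponges directed into and removed from the surgical site.
    Each event is a (sponge_id, action) pair, where action is 'in' or 'out'.
    Returns the set of sponge IDs not yet removed -- a potential adverse
    medical event if non-empty when the procedure is concluding."""
    inside = set()
    for sponge_id, action in events:
        if action == "in":
            inside.add(sponge_id)
        elif action == "out":
            inside.discard(sponge_id)
    return inside


# Two sponges placed, one removed: "s2" would trigger the notification/alarm.
remaining = unreturned_sponges([("s1", "in"), ("s2", "in"), ("s1", "out")])
```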
According to a third aspect, a system for preventing non-compliant use of equipment during a surgical procedure is provided. The system includes one or more cameras positioned within an operating room, a display, and a computer program product. The computer program product has instructions stored on a non-transitory computer-readable medium that, when executed by one or more processors, cause the one or more processors to: receive images from the one or more cameras, wherein the images include at least a surgical site of a patient; provide the images to a trained machine-learning model trained on images representative of nominal and adverse medical events; determine, based on the images and with the trained machine-learning model, the non-compliant use of the equipment in a manner to produce a potential adverse medical event; and cause a notification or alarm to be displayed on the display based on the determination of the non-compliant use.
Advantages of the present disclosure will be readily appreciated as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
Disclosed herein are exemplary devices, apparatuses, systems, methods, and non-transitory storage media for determining non-compliance to surgical protocols in an operating room. Examples of the present disclosure can automate auditing of surgical equipment and improve the use, efficiency, and accuracy of the equipment. Some examples of the present disclosure include a system that employs a plurality of inputs such as cameras, microphones, and RFID readers in an operating room and processes the inputs using machine-learning algorithms such as object detection and tracking techniques to detect (e.g., in real time) instances of non-compliance when required protocols have been violated. In examples, the machine-learning algorithms prevent surgical non-compliance events involving equipment, such as unintended use of a surgical waste collection system or a surgical sponge management system. In some examples, the machine-learning algorithms can be trained to monitor activities in the surgical workflow and recognize: adherence to operating room preparation and turnover protocols, non-compliance with sterile protocol, non-compliance with surgical attire, etc. The system can provide alerts in real time for certain protocol violations to prevent surgical site infections (SSIs). Alternatively, or additionally, the system can provide interventions in real time for certain protocol violations to prevent SSIs. Additionally, the system can provide suggestions for retraining opportunities and protocol enhancements. Accordingly, examples of the present disclosure provide an efficient and accurate mechanism for conducting surgical audits and improve surgical safety and patient outcomes in ongoing and future surgeries.
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary examples.
Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first graphical representation could be termed a second graphical representation, and, similarly, a second graphical representation could be termed a first graphical representation, without departing from the scope of the various described examples. The first graphical representation and the second graphical representation are both graphical representations, but they are not the same graphical representation.
The terminology used in the description of the various described examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware, or hardware and, when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that, throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” “generating” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
The present disclosure in some examples also relates to a device for performing the operations herein. This device may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, computer readable storage medium, such as, but not limited to, any type of disk, including floppy disks, USB flash drives, external hard drives, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The methods, devices, and systems described herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein.
The cameras (e.g., cameras 102a, 102b, and 120) can be oriented toward one or more areas or objects of interest in the operating room. For example, one or more cameras can be oriented toward: the door such that they can capture images of the door, the operating table such that they can capture images of the operating table, the patient such that they can capture images of the patient, medical equipment (e.g., X-Ray device, anesthesia machine, staple gun, retractor, clamp, endoscope, electrocautery tool, fluid management system, waste management system, suction units, etc.) such that they can capture images of the medical equipment, surgical staff (e.g., surgeon, anesthesiologist, surgical assistant, scrub nurse, circulating nurse, registered nurse) such that they can capture images of the surgical staff, etc. Multiple cameras may be placed in different locations in the operating room such that they can collectively capture a particular area or object of interest from different perspectives. Some cameras can be configured to track a moving object. The one or more cameras can include PTZ cameras. The cameras can include cameras that can provide a video stream over a network. The one or more cameras can include a camera integrated into a surgical light in the operating room.
The sensors 128, including microphones, RFID readers, and other types of sensors, may be oriented toward the one or more areas or objects of interest in the operating room. For example, the sensors 128 may be placed such that they capture inputs of the patient, medical equipment, staff, or any other particular area or object of interest from different perspectives. The microphones are configured to detect auditory outputs such as speech, alarms, and any other sound that may occur from use of medical equipment.
At block 202, an exemplary system (e.g., one or more electronic devices) can receive one or more inputs of the operating room captured by one or more of the devices (e.g., cameras 102a and/or 102b, sensors 128 in
Multiple cameras, microphones, and sensors may be placed at different angles oriented toward a first door (e.g., a door the patient enters through) and/or a second door (e.g., a door sterile equipment and staff enter through) in the operating room, multiple cameras, microphones, and sensors may be oriented toward the operating table from different angles, one or more cameras, microphones, and sensors may be oriented toward the surgical lights and the surgeon, and one or more cameras, microphones, and sensors may be oriented toward the surgical support staff. Different cameras, microphones, and sensors, depending on the orientation of the camera, may be associated with different models configured to detect different objects such that inputs captured by a given camera are processed by associated model(s), as described in detail below.
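The device-to-model association described above can be represented as a simple routing table. The device IDs and model names below are hypothetical placeholders used only to illustrate that inputs from a given capture device are processed by its associated model(s).

```python
# Hypothetical routing table: each capture device maps to the model(s)
# configured for its orientation (door, operating table, surgical lights, etc.).
MODEL_ROUTING = {
    "door_cam_1": ["door_event_detector", "stretcher_tracker"],
    "table_cam_1": ["patient_prep_detector"],
    "light_cam_1": ["sterile_field_monitor"],
}


def models_for(device_id: str) -> list:
    """Return the models that should process inputs from a given device
    (empty list if the device has no associated models)."""
    return MODEL_ROUTING.get(device_id, [])
```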
The one or more inputs can include images captured by one or more surgical devices (e.g., endoscopes). By utilizing inputs captured by cameras, microphones, and sensors generally installed in the operating room in conjunction with information from surgical devices, the system may provide a more accurate and realistic identification of surgical milestones and activities in blocks 204 and 206.
At block 204, the system can detect a surgical milestone associated with a surgery in the operating room using a first set of one or more trained machine-learning models based on the received one or more inputs. The system can be configured to determine a plurality of surgical milestones, which are described in detail herein. A milestone may refer to a phase or period of time during a surgical workflow (e.g., surgical phase), or a specific time point during the surgical workflow. A surgical milestone can refer to a preoperative activity, an intraoperative activity, or a postoperative activity, as discussed herein. Some surgical milestones may include specific steps (e.g., making an incision, removing an organ) of a surgery.
A surgical milestone can indicate the stage of progression through a surgical procedure or a surgical workflow. The plurality of predefined milestones can include: whether an operating room is ready, whether operating room setup has started, whether a medical staff member (e.g., the surgeon, the scrub nurse, the technician) is donning surgical attire (e.g., masks, gloves, caps, gowns), whether operating room equipment is being set up, whether the patient is brought in to the operating room, whether the patient is ready for intubation or anesthesia, whether a timeout is occurring, whether the timeout has occurred, whether the patient is intubated or anesthetized, whether the patient has been prepped and draped for surgery, whether the patient is ready for surgery, whether a surgery site prep is complete, whether a surgery has started, whether the surgery is closing, whether a dressing is applied to the patient, whether the surgery is stopped, whether the patient is brought out of the operating room, whether the operating room is being cleaned, whether the operating room is clean, or any combination thereof. It should be understood that the foregoing list of milestones is merely exemplary. There may be fewer, additional, or different predefined milestones, for instance, depending on a type of surgical procedure.
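Because milestones indicate progression through the workflow, the system's milestone state can be kept monotonic: detections of out-of-order milestones do not regress the state. The simplified milestone names and strict ordering below are hypothetical; the disclosure's full milestone set is given above.

```python
# Hypothetical simplified ordering of predefined milestones.
MILESTONES = [
    "or_ready", "patient_in", "timeout", "anesthesia",
    "surgery_started", "closing", "patient_out", "or_clean",
]


def advance(current_index: int, detected: str) -> int:
    """Advance the workflow state only when the next expected milestone is
    detected, keeping progression monotonic. Illustrative only."""
    nxt = current_index + 1
    if nxt < len(MILESTONES) and detected == MILESTONES[nxt]:
        return nxt
    return current_index


i = advance(0, "patient_in")   # expected next milestone: state advances
i = advance(i, "closing")      # out of order: state unchanged
```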
The system can be configured to use the one or more trained machine learning models to detect one or more detected objects or events, which are in turn used to determine the one or more surgical milestones (e.g., surgical time points, surgical phases). The one or more trained machine learning models can include an object detection algorithm, an object tracking algorithm, a video action detection algorithm, an anomaly detection algorithm, or any combination thereof.
The system can be configured to first use an object detection algorithm to detect a particular type of object in an input, and then use an object tracking algorithm to track the movement and/or status of the detected object in subsequent inputs. In another example, the system can be configured to use a voice recognition algorithm to track the movement and/or status of the detected object within the OR. Using one or more object detection algorithms, the system may detect one or more objects and assign an object ID to each detected object. The one or more object detection algorithms can comprise machine-learning models such as a 2D convolutional neural network (CNN) or 3D-CNN (e.g., MobileNetV2, ResNet, MobileNetV3, CustomCNN). After the objects are detected, the system may then use one or more object tracking algorithms to track the movement of the detected objects. The one or more object tracking algorithms can comprise any computer-vision algorithms for tracking objects and can comprise non-machine-learning algorithms. The object tracking algorithm(s) may involve execution of more lightweight code than the object detection algorithm(s), thus improving efficiency and reducing latency for surgical milestone determination. An object detection algorithm may include an instance segmentation algorithm, which can be configured to simultaneously perform classification (e.g., determining what type of object an image depicts), semantic segmentation (e.g., determining what pixels in the image belong to the object), and instance association (e.g., identifying individual instances of the same class; for example, person1 and person2). Additionally, in real-world scenes, a given visual object may be occluded by other objects. Although human vision systems can locate and recognize severely occluded objects with temporal context reasoning and prior knowledge, it may be challenging for classical video understanding systems to perceive objects in the heavily occluded video scenes. 
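The detect-then-track split described above (a heavier detection model assigns object IDs, then lightweight tracking code follows them between detections) can be sketched with a greedy centroid tracker. This is an illustrative toy, not a production tracker; detections are assumed to be (x, y) centroids emitted by the detection model.

```python
import math


def assign_ids(detections, tracks, next_id, max_dist=50.0):
    """Greedy nearest-neighbour association of new detections to known tracks.

    detections: list of (x, y) centroids from the object detection model.
    tracks: dict mapping object ID -> last known centroid.
    Returns the updated tracks dict and the next unused object ID."""
    updated = {}
    unmatched = dict(tracks)
    for cx, cy in detections:
        best_id, best_d = None, max_dist
        for oid, (tx, ty) in unmatched.items():
            d = math.hypot(cx - tx, cy - ty)
            if d < best_d:
                best_id, best_d = oid, d
        if best_id is None:
            # No nearby track: a new object entered the scene.
            best_id, next_id = next_id, next_id + 1
        else:
            del unmatched[best_id]
        updated[best_id] = (cx, cy)
    return updated, next_id


tracks, nid = assign_ids([(10, 10)], {}, next_id=1)        # frame 1: new object -> ID 1
tracks, nid = assign_ids([(14, 12)], tracks, next_id=nid)  # frame 2: same object, moved
```

The per-frame association here is a handful of distance computations, which is why tracking between detections can be much cheaper than rerunning the detection network.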
Accordingly, some examples include machine-learning algorithms that take into account the temporal component of the video stream. For example, the system may perform spatial feature calibration and temporal fusion for effective one-stage video instance segmentation. As another example, the system may perform spatio-temporal contrastive learning for video instance segmentation. Additional information on these exemplary algorithms can be found, for example, in Li et al., “Spatial Feature Calibration and Temporal Fusion for Effective One-stage Video Instance Segmentation”, arXiv:2104.05606v1, available at https://doi.org/10.48550/arXiv.2104.05606, and Jiang et al., “STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation”, arXiv:2202.03747v1, available at https://doi.org/10.48550/arXiv.2202.03747, both of which are incorporated herein by reference.
The tracked movement and/or status of one or more detected objects can then be used to determine events occurring in the operating room. For example, the system can first use an object detection model to detect a stretcher in an image and then use an object tracking algorithm to detect when the stretcher crosses door coordinates to determine that the stretcher is being moved into the operating room (i.e., an event). The one or more trained machine-learning models can be trained using a plurality of annotated images (e.g., annotated with labels of object(s) and/or event(s)). Further description of such machine learning models is provided below with reference to
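The door-crossing determination in the stretcher example can be reduced to a line-crossing test on the tracked centroid. The coordinate convention (door at a fixed x position, motion in the positive-x direction meaning entry) is a hypothetical simplification.

```python
def crossed_door(prev_x: float, curr_x: float, door_x: float = 100.0) -> bool:
    """Return True when a tracked object's centroid moved across the door
    line between two frames, i.e., the object entered the operating room."""
    return prev_x < door_x <= curr_x


# Stretcher centroid moved from x=95 to x=105, crossing the door at x=100.
entered = crossed_door(prev_x=95.0, curr_x=105.0)
```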
An object that the system can detect can include physical items, persons, or parts thereof, located inside, entering, or leaving an operating room. The object can for example include a stretcher, a patient, a surgeon, an anesthesiologist, the surgeon's hand, a surgical assistant, a scrub nurse, a technician, a nurse, a scalpel, sutures, a staple gun, a door to a sterile room, a door to a non-sterile corridor, a retractor, a clamp, an endoscope, an electrocautery tool, an intubation mask, a surgical mask, a C-Arm, an Endoscopic Equipment Stack, an anesthesia machine, an anesthesia cart, a fluid management system, a waste management system, a waste disposal receptacle, an operating table, surgical table accessories, an equipment boom, an anesthesia boom, an endoscopic equipment cart, surgical lights, a case cart, a sterile back table, a sterile mayo stand, a cleaning cart, an X-Ray device, an imaging device, a trocar, a surgical drape, operating room floor, EKG leads, ECG leads, bed linens, a blanket, a heating blanket, a lap belt, safety straps, a pulse oximeter, a blood pressure machine, an oxygen mask, an IV, or any combination thereof.
An event that the system can detect can include a status, change of status, and/or an action associated with an object. The event can for example include whether the surgical lights are turned off, whether the operating table is vacant, whether the bed linens are wrinkled, whether the bed linens are stained, whether the operating table is wiped down, whether a new linen is applied to the operating table, whether a first sterile case cart is brought into the operating room, whether a new patient chart is created, whether instrument packs are distributed throughout the operating room, whether booms and suspended equipment are repositioned, whether the operating table is repositioned, whether a nurse physically exposes instrumentation by unfolding linen or paper, or opening instrumentation containers using a sterile technique, whether the scrub nurse entered the operating room, whether the technician entered the operating room, whether the scrub nurse is donning a gown, whether the circulating nurse is securing the scrub nurse's gown, whether the scrub nurse is donning gloves using the sterile technique, whether the sterile back table or the sterile mayo stand is being set with sterile instruments, whether the patient is wheeled into the operating room on a stretcher, whether the patient is wheeled into the operating room on a wheel chair, whether the patient walked into the operating room, whether the patient is carried into the operating room, whether the patient is transferred to the operating table, whether the patient is covered with the blanket, whether the lap belt is applied to the patient, whether the pulse oximeter is placed on the patient, whether the EKG leads are applied to the patient, whether the ECG leads are applied to the patient, whether the blood pressure cuff is applied to the patient, whether a surgical sponge and instrument count is conducted, whether a nurse announces a timeout, whether a surgeon announces a timeout, whether an anesthesiologist 
announces a timeout, whether activities are stopped for a timeout, whether the anesthesiologist gives the patient the oxygen mask, whether the patient is sitting and leaning over with the patient's back cleaned and draped, whether the anesthesiologist inspects the patient's anatomy with a long needle, whether the anesthesiologist injects medication into the patient's back, whether the anesthesiologist indicates that the patient is ready for surgery, whether the patient is positioned for a specific surgery, whether required surgical accessories are placed on a table, whether padding is applied to the patient, whether the heating blanket is applied to the patient, whether the safety straps are applied to the patient, whether a surgical site on the patient is exposed, whether the surgical lights are turned on, whether the surgical lights are positioned to illuminate the surgical site, whether the scrub nurse is gowning the surgeon, whether the scrub nurse is gloving the surgeon, whether skin antiseptic is applied, whether the surgical site is draped, whether sterile handles are applied to the surgical lights, whether a sterile team member is handing off tubing to a non-sterile team member, whether a sterile team member is handing off electrocautery to a non-sterile team member, whether the scalpel is handed to the surgeon, whether an incision is made, whether the sutures are handed to the surgeon, whether the staple gun is handed to the surgeon, whether the scrub nurse is handing a sponge to a sponge collection basin, whether an incision is closed, whether dressing is applied to cover a closed incision, whether the surgical lights are turned off, whether the anesthesiologist is waking the patient, whether the patient is returned to a supine position, whether extubation is occurring, whether instruments are being placed on the case cart, whether a garbage bag is being tied up, whether the bed linens are collected and tied up, whether the operating table surface is 
cleaned, whether the operating room floor is being mopped, whether the patient is being transferred to a stretcher, whether the patient is being brought out of the operating room, whether the surgical table is dressed with a clean linen, whether a second sterile case cart is brought into the operating room, or any combination thereof.
Instead of using trained machine-learning models to detect objects/events (which are then used to determine surgical milestones), the system may use trained machine-learning models to output surgical milestones directly. A trained machine-learning model of the one or more trained machine-learning models can be a machine-learning model (e.g., deep-learning model) trained using annotated surgical video information, where the annotated surgical video information includes annotations of at least one of the plurality of predefined surgical milestones. Further description of such machine learning models is provided below with reference to
The system may perform a spatial analysis (e.g., based on object detection/tracking as discussed above), a temporal analysis, or a combination thereof. The system may perform the temporal analysis using a temporal deep neural network (DNN), such as LSTM, Bi-LSTM, MS-TCN, etc. The DNN may be trained using one or more training videos in which the start time and the end time of various surgical milestones are bookmarked. The temporal analysis may be used to predict remaining surgery duration, as discussed below.
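The bookmarked training videos described above can be turned into supervision targets for such a temporal model; for instance, a remaining-surgery-duration target can be derived for each bookmarked milestone. The milestone names and times below are hypothetical, and in practice the targets would be computed per frame rather than per milestone.

```python
def rsd_targets(milestones, surgery_end):
    """Derive a remaining-surgery-duration (RSD) target at the start of each
    bookmarked milestone. milestones is a list of (name, start, end) tuples
    in minutes; surgery_end is the bookmarked end time of the surgery."""
    return {name: surgery_end - start for name, start, end in milestones}


targets = rsd_targets(
    [("incision", 10, 15), ("resection", 15, 60), ("closing", 60, 75)],
    surgery_end=80,
)
```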
The one or more trained machine-learning models used herein can comprise a trained neural network model, such as a 2D CNN, 3D-CNN, temporal DNN, etc. For example, the models may comprise ResNet50, AlexNet, Yolo, I3D ResNet 50, LSTM, MSTCN, etc. The one or more trained machine-learning models may comprise supervised learning models that are trained using annotated images such as human-annotated images. Additionally or alternatively, the one or more trained machine-learning models may comprise self-supervised learning models where a specially trained network can predict the remaining surgery duration without relying on labeled images. As examples, a number of exemplary models are described in G. Yengera et al., “Less is More: Surgical Phase Recognition with Less Annotations through Self-Supervised Pre-training of CNN-LSTM Networks,” arXiv:1805.08569 [cs.CV], available at https://arxiv.org/abs/1805.08569. For example, an exemplary model may utilize a self-supervised pre-training approach based on the prediction of remaining surgery duration (RSD) from laparoscopic videos. The RSD prediction task is used to pre-train a CNN and long short-term memory (LSTM) network in an end-to-end manner. The model may utilize all available data and reduce the reliance on annotated data, thereby facilitating the scaling up of surgical phase recognition algorithms to different kinds of surgeries. Another example model may comprise an end-to-end trained CNN-LSTM model for surgical phase recognition. It should be appreciated by one of ordinary skill in the art that other types of object detection algorithms, object tracking algorithms, and video action detection algorithms that provide sufficient performance and accuracy (e.g., in real time) can be used. The system can include machine-learning models associated with a family of architectures based on visual transformers, which may perform image recognition at scale.
An exemplary framework is the Self-supervised Transformer with Energy-based Graph Optimization (STEGO), which may be capable of jointly discovering and segmenting objects without any human supervision. Building upon another self-supervised architecture, DINO, STEGO can distill pre-trained unsupervised visual features into semantic clusters using a novel contrastive loss. Additional information on visual transformers can be found, for example, in Dosovitskiy et al., “An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale”, arXiv:2010.11929v2, available at https://doi.org/10.48550/arXiv.2010.11929, which is incorporated herein by reference. Additional information on DINO and STEGO can be found, for example, in Hamilton et al., “Unsupervised Semantic Segmentation by Distilling Feature Correspondences”, arXiv:2203.08414v1, available at https://doi.org/10.48550/arXiv.2203.08414, and Caron et al., “Emerging Properties in Self-Supervised Vision Transformers”, arXiv:2104.14294v2, available at https://doi.org/10.48550/arXiv.2104.14294, which are incorporated herein by reference. Additional details related to detection of surgical milestones can be found in U.S. Provisional Application entitled “SYSTEMS AND METHODS FOR MONITORING SURGICAL WORKFLOW AND PROGRESS” (Attorney Docket No.: 16890-30044.00), which is incorporated herein by reference.
At block 206, the system can detect one or more activities in the operating room using a second set of one or more trained machine-learning models based on the received one or more inputs. The activities may be monitored for the purpose of detecting non-compliance with one or more surgical protocols. For example, the models may be configured to monitor any activities from which compliance and/or non-compliance to specific requirements in a surgical protocol can be detected.
As an example, a surgical protocol may comprise one or more requirements related to preparation of the surgery before a surgery commences. The protocol may require that linens be changed on the surgical table in the operating room, that various objects in the operating room (e.g., surgical table, surgical lights, equipment) be cleaned, wiped, and/or disinfected, that the necessary equipment and instruments (e.g., major imaging equipment) are available for the surgery, that the patient is properly prepared, etc. Accordingly, the second set of one or more machine-learning models may be configured to detect activities such as: change of linen on the surgical table, cleaning/wiping/disinfection of objects in the operating room, availability of equipment necessary for the surgery, patient preparation performed before the surgery starts, availability of routine monitoring equipment (e.g., pulse oximeter for monitoring pulse and O2 saturation in the blood stream, EKG/ECG heart monitor, automatic blood pressure machine), etc. Additionally or alternatively, the second set of one or more machine-learning models may be configured to detect the lack of such activities in the operating room. Metadata related to the activities (time stamps, duration, count) may also be obtained based on the output of the machine-learning models.
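Comparing the detected preparation activities against the protocol's requirements amounts to a set difference. The activity names below are hypothetical labels standing in for the detector outputs described above.

```python
# Hypothetical pre-surgery protocol requirements (detector output labels).
REQUIRED_PREP = {"linen_change", "table_disinfection", "equipment_check", "patient_prep"}


def missing_prep(detected_activities):
    """Return the preparation requirements not (yet) observed by the
    second set of machine-learning models before the surgery commences."""
    return REQUIRED_PREP - set(detected_activities)


gaps = missing_prep({"linen_change", "table_disinfection"})
```

A non-empty result at the "surgery started" milestone would indicate non-compliance with the preparation protocol.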
As another example, a surgical protocol may comprise one or more requirements related to intraoperative surgical safety. The protocol may comprise specific requirements related to the traffic in the operating room, the opening and closing of doors in the operating room (e.g., permissible counts of door openings and door closings, permissible durations of door(s) being open and closing, aggregate duration of door(s) being open), proper surgical attire, proper activities in the sterile zone during the surgery, equipment and instruments introduced into the operating room during the surgery, contamination of sterile equipment and instruments, use of blood units, use of monitoring equipment, use of sponges and swabs, etc. Accordingly, the second set of one or more machine-learning models may be configured to detect activities such as: staff members moving in/out of the operating room during the surgery, when each door in the operating room opens or closes during the surgery, surgical attire by the staff member (surgical mask, surgical cap, surgical gloves, body hair concealed, etc.), people entering and exiting sterile zone during the surgery, any equipment brought into the operating room during the surgery, contamination of sterile instruments and equipment during the surgery, blood units used, blood units required, sponges and swabs used, etc. Additionally or alternatively, the second set of one or more machine-learning models may be configured to detect the lack of such activities in the operating room. Metadata related to the activities (time stamps, duration, count) may also be obtained based on the output of the machine-learning models.
As another example, a surgical protocol may comprise one or more requirements related to cleaning and disinfection. The protocol may comprise specific requirements related to cleaning activities, fumigation, fogging, collection and disposal of bio-waste post-surgery, collection and disposal of sharps, emptying garbage receptacles, mopping the floor, wiping down walls, changing of table linen, attending to (e.g., emptying and cleaning of) fluid management systems, removal of fluid management systems from the OR for preparation for the next surgery, etc. Accordingly, the second set of one or more machine-learning models may be configured to detect activities such as: cleaning, fumigation, fogging, application of disinfection chemicals to equipment, collection and disposal of operating room swabs, etc. Additionally or alternatively, the second set of one or more machine-learning models may be configured to detect the lack of such activities in the operating room. Metadata related to the activities (time stamps, duration, count of an activity) may also be obtained based on the output of the machine-learning models.

As another example, a surgical protocol may comprise one or more requirements related to operational parameters of the surgery. For example, the protocol may require that the nurse-to-patient ratio be lower than a threshold. As another example, the protocol may require that the number of times a door in the operating room is opened during the surgery be lower than a threshold. The output of the second set of one or more machine-learning models may be used to calculate the operational parameters for each surgery.
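For purposes of illustration, the comparison of calculated operational parameters against protocol thresholds may be sketched as follows. The parameter names and threshold values below are hypothetical, not values from the disclosure:

```python
# Hypothetical sketch: operational parameters derived from model outputs
# are compared against protocol thresholds; each exceedance is reported.

def check_operational_parameters(params, thresholds):
    """Return a list of (parameter, value, limit) tuples, one per violation."""
    violations = []
    for name, value in params.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            violations.append((name, value, limit))
    return violations

# Illustrative values: the door-opening count exceeds its threshold.
observed = {"patients_per_nurse": 1.5, "door_openings": 14}
limits = {"patients_per_nurse": 2.0, "door_openings": 10}
print(check_operational_parameters(observed, limits))
```

A real system would populate the observed parameters from the machine-learning model outputs (e.g., door-event counts) accumulated over the course of the surgery.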
As another example, a surgical protocol may comprise one or more requirements related to the use of a surgical waste collection system. Within a surgical waste collection system, the waste material is collected in a waste container connected to a vacuum source. A portable cart supports the waste container for moving throughout the health care facility. One or more suction lines extend from the waste container and are positioned near the site from which the waste material is to be collected. When the vacuum source is operating, the waste material is drawn through the suction lines into the waste container. The protocol may comprise specific requirements related to the use of suction in connection to chest tubes, tracheal tubes, and closed wound drainage systems.
Additionally, the protocol may comprise requirements related to systems for the monitoring and tracking of surgical sponges. If a surgical sponge tracking system determines that all sponges have been accounted for after sponge removal, then it is certain that all sponges have indeed been removed and the patient is ready to be surgically closed. If, however, the system indicates that one or more sponges are missing, it is necessary for the nurses and surgeons to visually inspect the surgical site in an attempt to locate the missing sponge(s). The protocol may comprise specific requirements related to the RFID reading and tracking of a surgical sponge to accurately direct a staff member to its location.
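The sponge reconciliation described above may be sketched, for purposes of illustration, as a set comparison of RFID tag identifiers counted in against those counted out. The tag identifiers are hypothetical:

```python
# Hypothetical sketch of sponge-count reconciliation: any RFID tag that
# was counted in but not counted out identifies a missing sponge.

def reconcile_sponges(tags_in, tags_out):
    """Return the set of tag IDs counted in but not counted out."""
    return set(tags_in) - set(tags_out)

counted_in = ["SP-001", "SP-002", "SP-003"]
counted_out = ["SP-001", "SP-003"]
missing = reconcile_sponges(counted_in, counted_out)
if missing:
    print(f"Missing sponge(s): {sorted(missing)}")  # prompts a directed search
```

An empty result indicates all sponges are accounted for and the patient is ready to be surgically closed.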
It should be appreciated that the activities described above are merely exemplary. The second set of machine-learning models can be configured to detect any activities from which compliance and/or non-compliance to specific requirements in a surgical protocol can be detected. The one or more activities for example include: linen changing on a surgical table; cleaning of the surgical table; wiping of the surgical table; application of a disinfectant; introduction of a surgical equipment; preparation of the surgical equipment; entrance of a person into the operating room; exiting of the person out of the operating room; opening of a door in the operating room; closing of the door in the operating room; donning of surgical attire; contamination of sterile instruments; contact between anything sterile and a non-sterile surface (e.g., an inadvertent contact from the surgeon's glove with a non-sterile surface of the surgical light while using the sterile control interface of the light); preparation of a patient; usage of one or more blood units; usage of one or more surgical sponges; usage of one or more surgical swabs; collection and/or disposal of waste; fumigation; sterile zone violation (e.g., suspension or transfer of anything non-sterile above (within the 3D space above) the surgical site); a conducted time-out; a conducted debriefing; fogging; or any combination thereof.
A surgeon's technical skills assessment can also be a subject of audit and can be evaluated using various machine learning models. For example, a trained machine-learning model can receive information related to a surgeon (e.g., videos of the surgeon's procedures) and provide one or more outputs indicative of the surgeon's technical skill level. Exemplary techniques for assessing a surgeon's technical skills can be found, for example, in Lam et al., “Machine learning for technical skill assessment in surgery: a systematic review”, npj Digit. Med. 5, 24 (2022), which is incorporated herein by reference. The assessment of the surgeon's technical skills as provided by a machine-learning model may be incorporated in the calculation of the audit score described below.
The system may be configured to invoke different machine-learning models depending on the current surgical milestone and/or a type of the surgery. For example, if the system determines (e.g., in block 204) that the operating room is being prepared for an upcoming surgery, which has not started, the system may invoke the machine-learning models for detecting potential non-compliant activities during pre-operation preparation, but not the machine-learning models for detecting potential non-compliant activities during a surgery, thereby improving efficiency and reducing computational demands. As another example, different surgeries may require different equipment; thus, depending on the type of the surgery, the system may invoke different machine-learning models for detecting necessary equipment for the type of the surgery.
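The milestone-dependent model invocation described above may be sketched, for purposes of illustration, as a registry lookup. The milestone names, model identifiers, and the surgery-type rule are hypothetical:

```python
# Hypothetical sketch: only the models relevant to the current surgical
# milestone (and surgery type) are invoked, reducing computational demands.

MODEL_REGISTRY = {
    "pre_op_preparation": ["linen_change_detector", "cleaning_detector"],
    "surgery_in_progress": ["attire_detector", "sterile_zone_detector",
                            "door_event_detector"],
    "post_op_cleaning": ["waste_disposal_detector", "fumigation_detector"],
}

def models_for(milestone, surgery_type=None):
    """Return the model identifiers to invoke for the current milestone."""
    models = list(MODEL_REGISTRY.get(milestone, []))
    # Surgery-type-specific equipment checks (illustrative rule).
    if milestone == "pre_op_preparation" and surgery_type == "orthopedic":
        models.append("imaging_equipment_detector")
    return models

print(models_for("pre_op_preparation", "orthopedic"))
```

During pre-operation preparation, the intra-operative models are simply never loaded or invoked, which is the source of the efficiency gain.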
In order to detect the activities, the system may use the one or more trained machine learning models to detect one or more detected objects, which are in turn used to determine the one or more activities. The one or more objects may include: one or more surgical tables; one or more surgical lights; one or more cleaning supplies; one or more disinfectants; one or more linens; one or more surgical equipment; one or more patients; one or more medical staff members; attire of the one or more medical staff members; one or more doors in the operating room; one or more blood units; one or more surgical sponges; one or more surgical swabs; or any combination thereof. The attire of the one or more medical staff members can include: a surgical mask, a surgical cap, a surgical glove, a surgical gown, or any combination thereof. The one or more surgical equipment can include: one or more imaging devices, one or more diagnostic devices, one or more monitoring devices, one or more surgical tools, or any combination thereof.
At least some of the first set of one or more trained machine-learning models can be the same as some of the second set of one or more trained machine-learning models. For example, the same machine-learning model may be used to detect and/or track a particular object (e.g., a door in the operating room) in blocks 204 and 206. As another example, the same model may be used to detect an event in 204 and an activity in block 206. In other examples, the first set of one or more trained machine-learning models may be different from the second set of one or more trained machine-learning models.
The system can use the one or more trained machine learning models to detect one or more detected objects and/or events, which are in turn used to determine the one or more activities. The one or more trained machine learning models can include an object detection algorithm, an object tracking algorithm, a video action detection algorithm, an anomaly detection algorithm, or any combination thereof.
The system can be configured to first use an object detection algorithm to detect a particular type of object in an image, and then use an object tracking algorithm to track the movement and/or status of the detected object in subsequent inputs. Using one or more object detection algorithms, the system may detect one or more objects and assign an object ID to each detected object. The one or more object detection algorithms can comprise machine-learning models such as a 2D convolutional neural network (CNN) or 3D-CNN (e.g., MobileNetV2, ResNet, MobileNetV3, CustomCNN). After the objects are detected, the system may then use one or more object tracking algorithms to track the movement of the detected objects. The one or more object tracking algorithms can comprise any computer-vision algorithms for tracking objects and can comprise non-machine-learning algorithms. In some examples, the object tracking algorithm(s) may involve execution of more lightweight code than the object detection algorithm(s), thus improving efficiency and reducing latency for activity determination.
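The detect-then-track pattern described above may be sketched, for purposes of illustration, with a lightweight intersection-over-union (IoU) matcher that associates detected bounding boxes across frames. The boxes and the IoU threshold are hypothetical values, not from the disclosure:

```python
# Hypothetical sketch: a (placeholder) detector yields bounding boxes per
# frame; a lightweight IoU-based matcher carries object IDs across frames
# so the heavier detector need not run on every frame.

def box_area(box):
    """Area of an (x1, y1, x2, y2) box."""
    return (box[2] - box[0]) * (box[3] - box[1])

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def update_tracks(prev_tracks, detections, threshold=0.3):
    """Assign each detected box the ID of the best-overlapping previous
    track, or a new ID if no previous track overlaps sufficiently."""
    tracks = {}
    next_id = max(prev_tracks, default=0) + 1
    for det in detections:
        best = max(prev_tracks.items(), key=lambda kv: iou(kv[1], det),
                   default=None)
        if best is not None and iou(best[1], det) >= threshold:
            tracks[best[0]] = det  # same object keeps its ID
        else:
            tracks[next_id] = det  # new object enters the scene
            next_id += 1
    return tracks

frame1 = update_tracks({}, [(10, 10, 50, 50)])      # new object, new ID
frame2 = update_tracks(frame1, [(12, 11, 52, 51)])  # same object, ID kept
```

Because the matcher involves only simple geometry, it is far cheaper than re-running the detection network, illustrating the latency advantage noted above.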
The tracked movement and/or status of one or more detected objects can then be used to determine events occurring in the operating room. For example, the system can first use an object detection model to detect a stretcher in an image and then use an object tracking algorithm to detect when the stretcher crosses door coordinates to determine that the stretcher is being moved into the operating room (i.e., an event). The one or more trained machine-learning models can be trained using a plurality of annotated images (e.g., annotated with labels of object(s) and/or event(s)). Further description of such machine learning models is provided below with reference to
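The stretcher example above, i.e., inferring an event when a tracked object crosses door coordinates, may be sketched as follows. The door coordinate and centroid trajectory are hypothetical image-space values:

```python
# Hypothetical sketch: an event is inferred when the tracked centroid of
# an object crosses a door line in image space.

DOOR_X = 100  # illustrative x-coordinate of the door threshold

def crossed_door(trajectory, door_x=DOOR_X):
    """True if consecutive centroids straddle the door line (entry event)."""
    return any(x0 < door_x <= x1
               for (x0, _), (x1, _) in zip(trajectory, trajectory[1:]))

centroids = [(80, 40), (95, 42), (110, 41)]  # stretcher moving through door
print(crossed_door(centroids))
```

In practice the door coordinates would themselves come from object detection of the door, rather than being hard-coded.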
Instead of using trained machine-learning models to detect objects/events (which are then used to determine activities), the system can use trained machine-learning models to output activities directly. A trained machine-learning model of the one or more trained machine-learning models can be a machine-learning model (e.g., deep-learning model) trained using annotated surgical video information, where the annotated surgical video information includes annotations of at least one of the plurality of predefined activities. Further description of such machine learning models is provided below with reference to
The system may perform a spatial analysis (e.g., based on object detection/tracking as discussed above), a temporal analysis, or a combination thereof. The system may perform the temporal analysis using a temporal deep neural network (DNN), such as LSTM, Bi-LSTM, MS-TCN, etc. The DNN may be trained using one or more training videos in which the start time and the end time of various activities are bookmarked.
The one or more trained machine-learning models used herein can comprise a trained neural network model, such as a 2D CNN, 3D-CNN, temporal DNN, etc. For example, the models may comprise ResNet50, AlexNet, YOLO, I3D ResNet 50, LSTM, MS-TCN, etc. In some examples, as discussed herein, the one or more trained machine-learning models may comprise supervised learning models that are trained using annotated images such as human-annotated images. Additionally or alternatively, the one or more trained machine-learning models may comprise self-supervised learning models where a specially trained network can predict the remaining surgery duration without relying on labeled images. As examples, a number of exemplary models are described in G. Yengera et al., “Less is More: Surgical Phase Recognition with Less Annotations through Self-Supervised Pre-training of CNN-LSTM Networks,” arXiv:1805.08569 [cs.CV], available at https://arxiv.org/abs/1805.08569. For example, an exemplary model may utilize a self-supervised pre-training approach based on the prediction of remaining surgery duration (RSD) from laparoscopic videos. The RSD prediction task is used to pre-train a CNN and long short-term memory (LSTM) network in an end-to-end manner. The model may utilize all available data and reduce the reliance on annotated data, thereby facilitating the scaling up of activity recognition algorithms to different kinds of surgeries. Another example model may comprise an end-to-end trained CNN-LSTM model for surgical phase recognition. It should be appreciated by one of ordinary skill in the art that other types of object detection algorithms, object tracking algorithms, and video action detection algorithms that provide sufficient performance and accuracy (e.g., in real time) can be used.
At block 208, the system can determine, based on the detected one or more activities and a surgical protocol associated with the detected surgical milestone, that an instance of non-compliance to the surgical protocol has occurred in the operating room. For example, the detected activities may indicate a lack of or improper linen changing on a surgical table; a lack of or improper cleaning of the surgical table; a lack of or improper wiping of the surgical table; a lack of or improper application of a disinfectant; a lack of or improper introduction of a surgical equipment; a lack of or improper preparation of the surgical equipment; improper entrance of a person into the operating room; improper exiting of the person out of the operating room; improper opening of a door in the operating room; improper closing of the door in the operating room; a lack of or improper donning of surgical attire; contamination of sterile instruments; contact between anything sterile and a non-sterile surface (e.g., an inadvertent contact from the surgeon's glove with a non-sterile surface of the surgical light while using the sterile control interface of the light); a lack of or improper preparation of a patient; improper usage of one or more blood units; improper usage of one or more surgical sponges; improper usage of one or more surgical swabs; improper collection and/or disposal of waste; improper fumigation; sterile zone violation (e.g., suspension or transfer of anything non-sterile above (within the 3D space above) the surgical site); an improperly conducted time-out; an improperly conducted debriefing; or any combination thereof.
In order to determine in block 208 whether an activity detected in block 206 is non-compliant, the system may analyze the activity in light of the surgical protocol requirements specific to the surgical milestone detected in block 204. Depending on the surgical milestone, the applicable surgical protocol, and thus what is considered to be a non-compliant activity, may differ. For example, a lack of mask wearing may be considered acceptable if it occurs after the surgery, but considered non-compliant if it occurs during the surgery. As another example, a door that stays open for an extended period of time may be considered acceptable if it occurs before the surgery, but considered non-compliant if it occurs during the surgery. As another example, the proper location for disposing used instruments, sponges, and swabs may differ depending on whether a surgery is ongoing or has concluded. As another example, it may be considered acceptable for a person to not wear gloves to enter the OR during surgical preparation, but a lack of glove wearing may be considered non-compliant if it occurs during surgery. As another example, it may be considered acceptable for a person to touch/position the surgical light from the light handle without gloves and without the application of the sterile handle cover during surgical preparation, but doing so without wearing sterile gloves may be considered non-compliant if it occurs during surgery. As another example, it may be considered acceptable for a person to not use sterile techniques to handle surgical instruments after completion of surgery, but not using sterile techniques may be considered non-compliant if it occurs during surgery. As another example, it may be considered acceptable for a person to enter and exit the sterile zone after the surgery, but doing so may be considered non-compliant if it occurs during surgery.
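This milestone-dependent rule evaluation may be sketched, for purposes of illustration, as a lookup keyed on both the activity and the current milestone. The activity and milestone labels are hypothetical:

```python
# Hypothetical sketch: the same detected activity is compliant or
# non-compliant depending on the current surgical milestone.

NON_COMPLIANT = {
    ("no_mask", "during_surgery"),
    ("no_gloves", "during_surgery"),
    ("door_open_extended", "during_surgery"),
    ("enter_sterile_zone", "during_surgery"),
}

def is_non_compliant(activity, milestone):
    """True only when the activity violates the rules for this milestone."""
    return (activity, milestone) in NON_COMPLIANT

print(is_non_compliant("no_mask", "during_surgery"))  # non-compliant
print(is_non_compliant("no_mask", "after_surgery"))   # acceptable
```

A production system would likely load such rule tables from the facility's configured surgical protocols rather than hard-coding them.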
Detection of non-compliances can be performed as an anomaly detection task. For example, instead of first enumerating various adverse events and then developing models for detecting them, an anomaly detection model can be trained end to end on a variety of surgical workflows that are deemed compliant. Accordingly, the anomaly detection model can receive a surgical workflow and provide an output indicative of whether or how far the input surgical workflow deviates from a normal range. Using the anomaly detection model, any surgical workflow that is classified to fall outside the normal range can be flagged as anomalous and, as a result, could be a potential compliance violation. Digital Twin environments can be used to generate enough compliant data for training such models.
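A minimal anomaly-detection sketch follows, under the stated assumption that each surgical workflow can be summarized as a numeric feature vector (e.g., durations, counts); a workflow is flagged when its deviation from the compliant training data exceeds a threshold. The feature values are synthetic:

```python
# Hypothetical sketch: fit per-feature statistics on compliant workflows,
# then score new workflows by their largest per-feature z-score; large
# scores fall outside the normal range and are flagged as anomalous.
from statistics import mean, pstdev

def fit(compliant_workflows):
    """Per-feature (mean, std) computed from compliant workflows only."""
    cols = list(zip(*compliant_workflows))
    return [(mean(c), pstdev(c) or 1.0) for c in cols]

def anomaly_score(workflow, model):
    """Largest absolute per-feature z-score; higher means more anomalous."""
    return max(abs(x - m) / s for x, (m, s) in zip(workflow, model))

# Illustrative features per workflow: (duration in minutes, door openings).
compliant = [(30, 4), (32, 5), (28, 4), (31, 3)]
model = fit(compliant)
print(anomaly_score((31, 4), model))   # near normal
print(anomaly_score((31, 20), model))  # flagged as a potential violation
```

This stands in for the end-to-end trained anomaly model described above; the key property is the same: only compliant data is needed for training, which is why Digital Twin environments can supply it.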
If non-compliance is detected, the system may intelligently select one or more follow-up actions to take from a plurality of potential follow-up actions. The plurality of potential follow-up actions may include, but are not limited to: outputting an alert on a dashboard (e.g., in the operating room, in a control room), sending a message (e.g., an email, a text message), logging the non-compliance in a database, updating a report, recommending training and retraining, recommending protocol changes, performing downstream analytics, etc. The alert may be auditory, graphical, textual, haptic, or any combination thereof. The plurality of potential follow-up actions may also include starting an intervention, such as: locking a door; altering the OR lighting, such as dimming the lights or setting the lights to high brightness; modifying a view on a monitor, such as blocking or blurring the view (e.g., a camera view); blocking or providing haptic feedback on controls of medical equipment (e.g., of a surgical robot); blocking or providing feedback (e.g., auditory, graphical, textual, and/or haptic) on medical equipment, such as a diagnostic imaging device, an anesthesia machine, a staple gun, a retractor, a clamp, an endoscope, or an electrocautery tool; etc. As further examples, in response to detection of non-compliance, the system may stop the suction on a medical waste system, or may direct surgical staff in performing a search for a missing surgical sponge.
The system can be configured to determine a severity level of the instance of non-compliance to the surgical protocol and determine which follow-up action(s) to take accordingly. Optionally, the system may alert or notify the OR team of the severe protocol infraction. The alert may be visual, auditory, haptic, or any combination thereof. The alert may comprise an indication on one or more OR displays and/or one or more OR dashboards. The alert may optionally comprise a video or a textual description of the detected infraction and/or how severe it is.
For example, certain instances of non-compliance may not be considered severe enough to warrant intervention and/or a real-time alert because the real-time alert may be disruptive to the surgical staff, whereas certain instances of non-compliance, such as contamination of a sterile instrument during the course of surgery (e.g., surgical staff inadvertently touching the sterile portion of the sterile instrument with a contaminated glove), need to be reported in real-time, and/or may warrant an intervention, to prevent increased risk of surgical site infections. Thus, the system can determine that the severity level meets a predefined severity threshold and, in accordance with the determination, generate an alert and/or start an intervention. The system can also determine that the severity level does not meet the predefined severity threshold and, in accordance with the determination, forego generating the alert or intervention or delay the generation of the alert until a later time. The system may nevertheless still record the detected instance of non-compliance in the audit logs or a database for downstream analysis.
Certain instances of non-compliance may not be considered severe enough to warrant an intervention because the intervention may be disruptive to the surgery, so an alert may be more appropriate. Thus, the system can determine whether the severity level meets a predefined severity threshold and, in accordance with a determination that the determined severity level meets the predefined severity threshold, start an intervention.
Certain instances of non-compliance may not be considered severe enough to warrant an audio alert because the audio alert may be disruptive to the surgery, so a different type of alert (e.g., visual alert) may be more appropriate. Thus, the system can determine whether the severity level meets a predefined severity threshold and, in accordance with a determination that the determined severity level meets the predefined severity threshold, generate an audio alert. In accordance with a determination that the determined severity level does not meet the predefined severity threshold, the system can forego generating the alert, generate a text alert, or delay the generation of the audio alert until a later time. The system may nevertheless still record the detected instance of non-compliance in the audit logs or a database for downstream analysis.
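The severity-gated selection of follow-up actions described in the preceding paragraphs may be sketched as follows. The numeric severity scale, thresholds, and action names are hypothetical:

```python
# Hypothetical sketch: every instance is logged; the alert modality and
# any intervention depend on whether severity thresholds are met.

AUDIO_ALERT_THRESHOLD = 8   # below this, a less disruptive text alert
INTERVENTION_THRESHOLD = 9  # e.g., stop suction, lock a door

def follow_up_actions(severity):
    """Select follow-up actions for a detected instance of non-compliance."""
    actions = ["log_to_database"]  # always recorded for downstream analysis
    if severity >= AUDIO_ALERT_THRESHOLD:
        actions.append("audio_alert")
    else:
        actions.append("text_alert")  # non-disruptive alternative
    if severity >= INTERVENTION_THRESHOLD:
        actions.append("start_intervention")
    return actions

print(follow_up_actions(3))  # minor infraction
print(follow_up_actions(9))  # severe infraction, e.g., sterile contamination
```

Note that the instance is recorded in the audit log regardless of severity, matching the behavior described above.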
Non-compliance to a surgical protocol may affect an audit score for: a surgery, an individual or a group of individuals (e.g., a surgical team), an organization (e.g., a department, a hospital), or any combination thereof. An audit score can quantify the amount of deviation from one or more surgical protocols. The calculated audit score can be provided to a user (e.g., displayed on a dashboard) in real time. The calculated audit score can be stored as a part of an audit log in a database (e.g., HIS, EMR, an audit log) for downstream analysis, etc. The system can compare the audit score against a predefined audit score threshold to determine how well a surgery, an individual, a group of individuals, and/or an organization are observing surgical protocols. The predefined audit score threshold can be associated with a type of surgery in the operating room.
The system can calculate an audit score for the surgery based on detected instances of non-compliance to the surgical protocol. The system can be configured to aggregate multiple instances of non-compliance across multiple surgical milestones into a single audit score. For example, one or more surgical protocols may specify 20 requirements associated with operating room setup, 10 requirements associated with patient preparation, 30 requirements associated with cleaning and disinfection, etc. Non-compliance to each requirement may be associated with a sub-score. The sub-scores can be aggregated to calculate a single audit score across multiple surgical milestones. A first surgical milestone can be associated with a first surgical protocol and a second surgical milestone can be associated with a second surgical protocol, and the system can calculate the audit score for the surgery based on an instance of non-compliance to the first surgical protocol and the instance of non-compliance to the second surgical protocol. The audit score can be based on a weighted calculation of the instance of non-compliance to the first surgical protocol and the instance of non-compliance to the second surgical protocol.
Non-compliance to different requirements may be scored and/or weighted differently, with more severe instances of non-compliance weighted heavier. As examples, contamination of sterile equipment may be weighted heavier than improper disposal of a cotton swab; a longer period of door opening may be weighted heavier than a shorter period of door opening.
The scoring mechanism can be configurable by a user. For example, a user can set how a given instance of non-compliance is scored. For example, the user can assign different sub-scores to violations of different requirements. As another example, the user can also specify that the first time a particular requirement is violated is scored 0 (i.e., ignored), but the score increases as the number of instances of non-compliance to the particular requirement increases. The scoring mechanism can depend on the facility, the type of surgery (e.g., cardiothoracic and orthopedic surgeries may be associated with more severe scoring mechanisms because infection can be very detrimental to the patient), etc.
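The configurable weighted scoring described above may be sketched, for purposes of illustration, with per-requirement sub-scores and a first-occurrence-ignored rule. The requirement names, weights, and escalation rule are hypothetical:

```python
# Hypothetical sketch: weighted sub-scores per requirement are aggregated
# into a single audit score; for configured requirements, the first
# violation is scored 0 and only repeats accumulate.
from collections import Counter

SUB_SCORES = {
    "sterile_contamination": 10,  # severe: weighted heavier
    "door_open_extended": 3,
    "swab_disposal": 1,
}

def audit_score(violations, ignore_first=("door_open_extended",)):
    """Aggregate detected violations into a single weighted audit score."""
    counts = Counter(violations)
    total = 0
    for requirement, n in counts.items():
        if requirement in ignore_first:
            n -= 1  # first occurrence scored 0 (ignored)
        total += SUB_SCORES.get(requirement, 0) * max(n, 0)
    return total

detected = ["door_open_extended", "door_open_extended", "sterile_contamination"]
print(audit_score(detected))  # 3 (one repeat door event) + 10 = 13
```

A facility- or surgery-type-specific configuration would simply supply different `SUB_SCORES` and `ignore_first` values.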
Based on detected non-compliance with surgical protocols, the system can identify a change to the surgical protocol and output a recommendation based on the identified change to the surgical protocol. The system can be configured to identify a change to the surgical protocol by identifying a correlation between an outcome of the surgery in the operating room and the instance of non-compliance to the surgical protocol in a database or audit logs. For example, if a strong correlation (e.g., above a predefined threshold) is identified between violation of a particular requirement and post-surgery infection, the system may recommend adding the particular requirement to a checklist to minimize the likelihood of violation and thus improve outcomes in future surgeries.
Protocol enhancement recommendations can be developed by utilizing Digital Twins technology. A digital twin can include a virtual representation, i.e., a true-to-reality simulation of the physics and materials, of a real-world physical asset or system (e.g., an operating room), which is continuously updated. Digital Twins technology can be used to generate a virtual twin of an operating room to provide a safe environment to test changes in system performance. It can also be used to generate training data for machine-learning models, such as the machine-learning models described herein. Additional details of the Digital Twins technology can be found, for example, in “What Is a Digital Twin?”, available at https://blogs.nvidia.com/blog/2021/12/14/what-is-a-digital-twin/, which is incorporated by reference herein.
Based on detected non-compliance with surgical protocols, the system can recommend training or retraining of the surgical protocol. The system can be configured to determine an identity or a surgical function of a person associated with the instance of non-compliance; and determine whether to recommend a change to the surgical protocol or to recommend retraining of the surgical protocol at least partially based on the identity or the surgical function of the person associated with the instance of non-compliance. For example, if the system detects that a requirement is violated by multiple people across departments and/or organizations, the system may determine that a general update to the surgical protocol (e.g., new checklists) is needed to ensure that the requirement is observed. But, if the system detects that a requirement is violated repeatedly by a particular person (e.g., a particular surgeon, a particular nurse), a particular group of people (e.g., a particular surgical team), or people of the same surgical function (e.g., scrub nurses, circulating nurses), the system may determine that the person or the group of people needs to be retrained on the requirement. The system may determine a need for both protocol enhancement and retraining. The identity or the surgical function of the person may be identified using facial recognition techniques, RFID or GPS signals of a device associated with the person, the person's attire/actions, the hospital record/schedules, HIS/EMR databases, or any combination thereof.
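The retrain-versus-protocol-change decision described above may be sketched as a concentration test over the identities (or surgical functions) associated with violations of a requirement. The identifiers and the concentration threshold are hypothetical:

```python
# Hypothetical sketch: if violations of a requirement are concentrated in
# one person or role, recommend retraining; if spread across many people,
# recommend a general protocol update.
from collections import Counter

def recommend(violators, concentration=0.6):
    """violators: person/role identifiers for one requirement's violations."""
    counts = Counter(violators)
    top_share = counts.most_common(1)[0][1] / len(violators)
    if top_share >= concentration:
        return "retrain"        # concentrated in one person/role
    return "update_protocol"    # widespread: protocol likely needs work

print(recommend(["nurse_a", "nurse_a", "nurse_a", "surgeon_b"]))
print(recommend(["nurse_a", "surgeon_b", "tech_c", "nurse_d"]))
```

The same function could be applied at the level of surgical functions (e.g., all scrub nurses grouped under one identifier) to detect role-wide retraining needs.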
With reference to
With reference to
With reference to
With reference to
The operations described herein are optionally implemented by components depicted in
Input device 520 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, or voice-recognition device. Output device 530 can be any suitable device that provides output, such as a touch screen, haptics device, or speaker.
Storage 540 can be any suitable device that provides storage, such as an electrical, magnetic or optical memory including a RAM, cache, hard drive, cloud storage, or removable storage disk. Communication device 560 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device. The components of the computer can be connected in any suitable manner, such as via a physical bus or wirelessly. Software 550, which can be stored in storage 540 and executed by processor 510, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the devices as described above).
Software 550 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 540, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
Software 550 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic or infrared wired or wireless propagation medium.
Device 500 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
Device 500 can implement any operating system suitable for operating on the network. Software 550 can be written in any suitable programming language, such as C, C++, Java or Python. In various examples, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement, through an on-premise or cloud application, through a Web browser as a Web-based application or Web service, for example.
Now turning to
An exemplary surgical sponge management system 134 is sold under the tradename SurgiCount by Stryker Corporation (Kalamazoo, Mich.). The surgical sponge management system 134 includes a tablet 138 positioned on a moveable stand, an RFID reader, and a plurality of RFID-tagged sponges 136 to track surgical sponges and prevent their retention. Additional cameras may be included on the tablet 138. Users of the surgical sponge management system 134 may use the system incorrectly, leading to missing or miscounted sponges. Additionally, users may have difficulty locating missing sponges. For example, sponges may be left within a body, left under a body, thrown out with other materials, or stuck to other sponges. Non-compliant use of the surgical sponge management system 134 is not limited to the above examples and may include any misuse of the surgical sponge management system contrary to established procedures.
At block 710, the exemplary system may receive images and audio from one or more devices positioned within the medical care area 100 or surgical suite of a patient (e.g., cameras 102a and/or 102b, sensors 128 in
At block 720, an exemplary system may provide the images and/or the audio to a trained machine-learning model. The machine-learning model is trained using video, images, audio, or other input signals depicting nominal and adverse medical events and procedures. The machine-learning model may also be trained using procedural lists, manuals, and other standard-of-care inputs. Additionally, the machine-learning model may be continuously updated and modified using additional input signals received from the medical care area 100. The trained machine-learning model can include an object detection algorithm, an object tracking algorithm, a video action detection algorithm, an anomaly detection algorithm, or any combination thereof.
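The signal flow of block 720 can be sketched in Python (one of the implementation languages named above for software 550). This is a minimal, illustrative sketch only: the `ORSignals` container, the `anomaly_score` stand-in, and the nominal pressure range are hypothetical placeholders, not details from the disclosure; a real implementation would use a trained detection network rather than a threshold rule.

```python
# Sketch of block 720: routing signals captured in the operating room
# to a trained model and thresholding its output. The "model" here is
# a trivial stand-in rule so the sketch is self-contained and runnable.
from dataclasses import dataclass, field

@dataclass
class ORSignals:
    image_frames: list = field(default_factory=list)   # e.g., cameras 102a/102b
    audio_clips: list = field(default_factory=list)    # e.g., in-room microphones
    device_telemetry: dict = field(default_factory=dict)

def anomaly_score(signals: ORSignals) -> float:
    """Stand-in for the trained machine-learning model.

    Returns a score in [0, 1]; higher means more likely an adverse
    medical event. The nominal pressure range below is hypothetical.
    """
    pressure = signals.device_telemetry.get("suction_pressure_kpa", 0.0)
    return 0.0 if 20.0 <= pressure <= 60.0 else 0.9

def is_non_compliant(signals: ORSignals, threshold: float = 0.5) -> bool:
    """Flag non-compliant use when the model's score crosses a threshold."""
    return anomaly_score(signals) >= threshold
```

In use, a nominal telemetry reading (e.g., 40 kPa in this sketch) yields `False`, while an out-of-range reading yields `True` and would trigger the downstream notification and interlock steps.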
As non-limiting examples, the machine-learning model may receive inputs indicating that the suction is connected to an incorrect tube, such as images showing the incorrect connection, audio of staff noting the incorrect connection, the sound of incorrect suction in the tube, electronic signals indicating an incorrect pressure or setting on the equipment, alarms from various medical equipment within the surgical suite, or any other inputs indicating non-compliant use. As another example, the machine-learning model may receive inputs indicating incorrect use of the surgical sponge management system, such as videos or images indicating a location of a sponge, audio of staff searching for a sponge, verbal counts of sponges, and other inputs indicating non-compliant use.
At block 730, an exemplary system may determine, based on the received images and/or audio and using the trained machine-learning model, the non-compliant use of medical equipment 116 during a surgical procedure. As an example, the machine-learning model may compare the received images and audio to images and audio depicting the nominal or adverse medical events and procedures. Adverse medical events involving the medical waste collection system 130 may include the suction tube 132 being coupled to a chest tube, the suction tube 132 being coupled to a tracheal tube, the suction tube 132 being coupled to a closed wound drainage tube, incorrect suction flowing through the suction tube 132, the patient's body retaining fluid, or any other use adverse to the surgical procedure. As a non-limiting example, the system may validate a verbal count of surgical sponges against the signals captured by the devices within the medical care area 100. The system may determine non-compliant use upon a determination that the verbal count does not match a visual count taken by the system.
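The two determinations of block 730 can be sketched as simple checks. This is an illustrative sketch, not the disclosed implementation: the function names and the string labels for coupling targets are assumptions, and a real system would derive these inputs from the trained model rather than receive them as clean strings and integers.

```python
# Sketch of block 730: two non-compliance checks.
# Adverse coupling targets are taken from the disclosure's examples.
ADVERSE_COUPLINGS = {"chest tube", "tracheal tube", "closed wound drainage tube"}

def coupling_compliant(detected_target: str) -> bool:
    """True if the suction tube 132 is not coupled to an adverse target."""
    return detected_target not in ADVERSE_COUPLINGS

def sponge_counts_compliant(verbal_count: int, visual_count: int) -> bool:
    """True if the staff's verbal count matches the system's visual count."""
    return verbal_count == visual_count
```

A mismatch from either check would flag non-compliant use and feed the notification step that follows.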
At block 740, an exemplary system may cause a notification to be displayed on the display based on the determination of non-compliant use. The notification may be a warning signal, alarm, graphical or textual instructions, message, visual, or other indicator to staff of the non-compliant event. For example, the system may output an audio alarm through speakers positioned within the surgical suite. The system may push a notification to a display such as a tablet or surgical navigation screen. The system may provide other indications, such as a change in color on a surgical instrument or equipment. The notification may be specific to the type of non-compliant use. For example, if the adverse medical event includes the suction tube being incorrectly coupled, the notification may include information relating to where the suction tube is coupled. As another example, if the non-compliant use of the surgical sponge management system includes a missing sponge, the notification may include a location of the sponge.
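Type-specific notifications, as described for block 740, can be sketched as a small template dispatch. The event-type keys and message wording below are hypothetical; the disclosure requires only that the notification be specific to the type of non-compliant use.

```python
# Sketch of block 740: render a notification specific to the event type.
# Event names and templates are illustrative, not from the disclosure.
NOTIFICATION_TEMPLATES = {
    "incorrect_coupling": "Suction tube appears coupled to: {target}. Verify connection.",
    "missing_sponge": "Sponge unaccounted for. Last detected location: {location}.",
}

def build_notification(event_type: str, **details: str) -> str:
    """Render a type-specific notification string for display or audio output."""
    return NOTIFICATION_TEMPLATES[event_type].format(**details)
```

For an incorrect coupling, the rendered message carries the detected coupling target; for a missing sponge, it carries the last detected location, matching the two examples above.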
Upon detection of a non-compliant use, the system may take an action with no input from a user. For example, the system may stop operation of the vacuum source of the medical waste collection system. The system may additionally prevent or terminate use of surgical equipment upon detection of the non-compliant use. As another example, the system may prevent further counting of surgical sponges 136 until a missing sponge is accounted for. The system may permit operation of the medical equipment only upon receiving a user input that the non-compliant use has been obviated or upon receiving additional inputs to the machine-learning model indicating that the non-compliant use has been obviated. In examples, the system may terminate operation of the vacuum source during non-compliant use.
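The terminate-and-lock-out behavior above can be sketched as a small interlock state machine. The class and method names are hypothetical, and a real implementation would drive an actual hardware cutoff for the vacuum source rather than set a flag.

```python
# Sketch of the automatic interlock: a non-compliant detection stops the
# vacuum source and locks out operation until the lockout is cleared.
class VacuumInterlock:
    def __init__(self) -> None:
        self.vacuum_on = True
        self.locked_out = False

    def on_model_output(self, non_compliant: bool) -> None:
        """Terminate operation immediately on a non-compliant detection."""
        if non_compliant:
            self.vacuum_on = False
            self.locked_out = True

    def clear_lockout(self) -> None:
        """Called when a user input, or further model inputs, indicate
        that the non-compliant use has been obviated."""
        self.locked_out = False

    def request_start(self) -> bool:
        """Prevent operation while locked out; otherwise permit it."""
        if self.locked_out:
            return False
        self.vacuum_on = True
        return True
```

The key design point is that clearing the lockout and restarting are separate steps, so operation is never resumed as a side effect of the detection merely going quiet.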
At block 810, an exemplary system may receive additional images and audio from one or more devices positioned within the medical care area 100 of a patient (e.g., cameras 102a and/or 102b, sensors 128 in
At block 840, an exemplary system permits the operation of the medical equipment 116. In examples, the operation of the medical equipment 116 is permitted upon the determination that the non-compliant use has been obviated. In examples, a user may provide an input to the system to permit the operation of the medical equipment 116. The input may be a button press, an interaction with the notification or alarm, a verbal command, or any other user input to the system. In examples, the system may permit the operation of the medical equipment 116 after a timer has expired. The timer may be preset or based on a user preference input.
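The permit condition of block 840 reduces to a simple disjunction, sketched below. The function name and the 30-second default are hypothetical illustrations; the disclosure leaves the timer duration to a preset or user preference.

```python
# Sketch of block 840: operation may resume if the user confirmed that
# the non-compliant use was obviated OR a lockout timer has expired.
def may_resume(user_confirmed: bool,
               elapsed_since_lockout_s: float,
               timeout_s: float = 30.0) -> bool:
    """Return True when the equipment may be permitted to operate again."""
    return user_confirmed or elapsed_since_lockout_s >= timeout_s
```

Passing elapsed time explicitly (rather than reading a clock inside the function) keeps the condition deterministic and easy to verify.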
At block 910, an exemplary system may receive additional images and audio from one or more devices positioned within the medical care area 100 of a patient (e.g., cameras 102a and/or 102b, sensors 128 in
At block 940, an exemplary system may receive, at a user interface or device, a user input that the surgical procedure has concluded or is concluding. The user interface may be the navigation system, the tablet 138 of the surgical sponge management system 134, the mobile device, or another device local or remote to the medical care area 100. The user input may be physical touch, verbal, or gesture-based, wherein the one or more cameras 102a, 102b may sense a gesture of the user.
At block 950, an exemplary system may monitor, based on the inputted images and/or audio and using the machine-learning model, whether the one or more sponges 136 have been removed from the patient. In examples, the system monitors whether the one or more sponges 136 have been removed from a position near the patient or another location of the medical care area 100. The system may perform a count of the sponges removed from the patient or the surrounding area and compare it to a count of the sponges performed prior to the surgical procedure.
At block 960, an exemplary system may determine the non-compliant use of the surgical sponge management system 134 implicating a potential adverse medical event using the trained machine-learning model. The non-compliant use may implicate a potential adverse medical event from at least one of the one or more sponges 136 potentially not having been removed from the patient. For example, the non-compliant use may stem from a missing sponge thrown into a non-sponge container. A count of the sponges taken after the procedure will not match the count of the sponges taken before the procedure. The system may determine that the non-compliant use of throwing out the sponge caused the miscount and that there are no sponges within the patient. The system may determine a count of sponges using audible counts by a user, object recognition, RFID, or other inputs including a count of the plurality of surgical sponges 136. As another example, the system may determine a sponge has been placed in the patient but was not removed.
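The reconciliation logic of block 960 can be sketched as set arithmetic over sponge identifiers (e.g., RFID tags). The function name and tag strings are illustrative assumptions; in practice the three sets would be built from RFID reads, object recognition, and audible counts as described above.

```python
# Sketch of block 960: a sponge placed in the patient that was neither
# removed nor located elsewhere (e.g., a discard container) is flagged
# as potentially retained.
def potentially_retained(placed_in_patient: set,
                         removed_from_patient: set,
                         found_elsewhere: set) -> set:
    """Return the identifiers of sponges potentially still in the patient."""
    return placed_in_patient - removed_from_patient - found_elsewhere
```

An empty result supports the determination that a miscount stemmed from a discarded sponge rather than a retained one; a non-empty result implicates the potential adverse medical event.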
At block 970, an exemplary system may cause a notification to be displayed on the display based on the determination of non-compliant use. The notification may include textual or graphical corrective instructions that are specific to the activity implicating the potential adverse medical event. For example, the display may position a bounding box overlay over a graphical display of the medical care area 100 or the patient, positioned at the location of a sponge. The notification may include an alarm or a color-coded system to indicate a potential high-risk area. As another example, the system may include textual graphics directing a user to a sponge or a corrective action. The corrective action may direct a user to the missing sponge, a miscount of the sponges, or other adverse medical event.
Although the disclosure and examples have been fully described with reference to the accompanying figures, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various examples with various modifications as are suited to the particular use contemplated.
The subject patent application is a continuation-in-part of U.S. patent application Ser. No. 18/334,344, filed on Jun. 13, 2023, which claims priority to, and all the benefits of, U.S. Provisional Patent Application No. 63/366,399, filed on Jun. 14, 2022, the entire contents of which are incorporated by reference herein.
Provisional application:

| Number | Date | Country |
| --- | --- | --- |
| 63366399 | Jun 2022 | US |

Related applications:

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 18334344 | Jun 2023 | US |
| Child | 18895851 | | US |