Certain embodiments may relate to surgical apparatuses and methods. For example, some embodiments may relate to identification and tracking of surgical instruments and/or surgical material.
When performing surgery, medical personnel, including surgeons, surgical assistants, surgical technologists, nurses, radiographers, anesthesiologists, and other professionals, use a variety of surgical instruments, such as scalpels, scissors, forceps, and clamps, as well as surgical material. For convenience, throughout this description, “SI” will refer to surgical instruments, surgical material, and/or various objects. Similarly, “surgical material” may include disposable products used during a surgical procedure, such as gloves, gauzes, sponges, needles, staples, and sutures. With such a wide variety and number of instruments and materials, and with each surgical team member focusing on performing his/her task to benefit the patient, one preventable yet recurring medical error is “retained surgical instruments” (RSI). RSI can occur, for example, when an SI, such as a clamp or a sponge, remains within a patient's body after surgery. RSI may occur because of an oversight by operating room (OR) personnel, defective surgical equipment, or accident (e.g., an SI falling into an incision). As a result, RSI can potentially cause deadly medical complications for the surgical patient, and cost hospitals millions of dollars each year in additional surgeries to remove the SI. The embodiments described herein are designed to provide an automated count of all surgical items, thereby providing either a redundancy system or a replacement system to definitively solve the problem of RSI, improve patient safety, reduce morbidity, and reduce cost to patients and hospitals.
For a proper understanding of the various aspects of the disclosed invention, reference should be made to the accompanying drawings.
As noted above, RSI creates a potential danger to patients, as well as liability for hospitals and medical staff. Several techniques have been developed in an attempt to reduce or eliminate RSI. For example, one technique requires OR personnel to manually count SI before and after a surgery. However, the potential for human error remains, and any errors may only be discovered after the surgery ends and the patient has been sutured (i.e., any incisions have been closed). The continued occurrence of RSI shows that current techniques for managing SI can be inadequate and unreliable.
In order to address the disadvantages noted above, certain embodiments described herein relate to a method for detecting and tracking SI, a method for performing a surgical procedure, as well as an artificial intelligence (AI)-based automated computer vision system for performing these methods during surgery. For example, various embodiments may include electronically detecting, tracking, and counting SI and other items by monitoring their location in real-time, such as during a surgical procedure. Some embodiments may then alert OR personnel at the conclusion of the surgery regarding any SI remaining in the patient or otherwise missing.
In various embodiments, light source(s) 110 may be ceiling-mounted, as shown in
Sensor(s) 120 may be disposed and/or movable on at least one horizontal fixed rail, vertical fixed rail, movable arm, robotic arm, ceiling, wall, and any combination thereof. Additionally or alternatively, sensor(s) 120 may be disposed and/or movable on at least one light source, IV post, head-mounted equipment, such as a helmet, hat, goggles, glasses, microscope, and headlamp, and any combination thereof. Sensor(s) 120 may be aimed towards a particular area, such as operating area(s) 150.
In some embodiments, sensor(s) 120 may be one or more items from the following list, which is intended to be exemplary and not limiting: a still camera, video camera, high-speed camera, light detection and ranging (LiDAR)-enabled sensor, infrared sensor, X-ray sensor, gamma-ray sensor, radiation sensor, 3D-laser scanning sensor, gyroscope, such as a microelectromechanical system (MEMS) unit, accelerometer, temperature sensor, and any combination thereof.
As noted above, sensor(s) 120 may be configured with LiDAR, which may be imaging LiDAR and/or scanning LiDAR. For example, a LiDAR sensor may be configured to detect a distance of an object from sensor(s) 120 and/or distance between a first object and a second object, such as SI.
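As a non-limiting illustration, the following sketch computes the two distances described above from LiDAR point data. The coordinate values, and the assumption that the LiDAR reports 3D points in the sensor's own coordinate frame, are hypothetical.

```python
import numpy as np

def distance_from_sensor(point_xyz):
    """Euclidean distance of a detected point from the sensor origin (0, 0, 0)."""
    return float(np.linalg.norm(np.asarray(point_xyz, dtype=float)))

def distance_between_objects(point_a, point_b):
    """Euclidean distance between two detected points (e.g., two SI)."""
    return float(np.linalg.norm(np.asarray(point_a, dtype=float) - np.asarray(point_b, dtype=float)))

# Hypothetical LiDAR returns, in meters, expressed in the sensor's coordinate frame.
clamp_xyz = (0.42, -0.10, 1.35)
sponge_xyz = (0.40, 0.05, 1.30)
print(distance_from_sensor(clamp_xyz))                   # distance sensor -> clamp
print(distance_between_objects(clamp_xyz, sponge_xyz))   # distance clamp -> sponge
```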
In certain embodiments, sensor(s) 120 may be configured to continuously and/or periodically perform at least one sensing operation, such as capturing pictures and/or collecting data of one or more of temperature, color, shape, size, light, X-rays, gamma-rays, and radiation. For example, sensor(s) 120 may be configured to periodically perform at least one sensing operation. Sensor(s) 120 may also be configured to continuously and/or periodically monitor various objects, such as operating area(s) 150, surgical instruments, surgical material, operating cavities, lights, other sensors, surgical incisions, instrument tables, workstations, waste collection areas, and OR personnel, including their heads and/or hands.
In various embodiments, sensor(s) 120 may be configured with a microphone, which may be configured to detect spoken conversation and/or verbal commands of the OR personnel through speech recognition capability.
In some embodiments, sensor(s) 120 may be configured to perform sensing operations according to certain performance parameters, such as (but not limited to) speed, zoom, shutter speed, focal length, spectral response, image stabilization, size, frames per second (FPS), and resolution, which may minimize interference, such as blur. For example, sensor(s) 120 may be exposed to a desired area, as noted above. As another example, the resolution of a camera may be configured so as to detect objects of a certain size while minimizing interference. In another example, a spectral response of a camera may be configured with an IR optical filter configured to sense temperature of an object within the field of view of the camera. This may provide indications of a recent location of an object, such as inside an operating cavity.
In various embodiments, computer(s) 130 may be configured to perform a method similar to the method shown in
In various embodiments, the received sensor data may include data, such as video data and/or image data, containing indications of at least one object, such as surgical instruments, surgical material, surgical incisions, operating areas, lights, other sensors, OR personnel, including their hands and/or heads, and other items of interest. The received sensor data may be associated with a viewpoint angle, background, lighting condition, timestamp, and any combination thereof. The image processing algorithm may also timestamp the plurality of images or other data, and, at 503, may convert video into still image frames.
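As a non-limiting illustration, the following sketch shows one way the frame-extraction and timestamping step might be implemented with OpenCV. The file name and sampling rate are placeholders, not part of the described embodiments.

```python
import cv2
from datetime import datetime, timezone

def video_to_frames(video_path, every_nth=1):
    """Convert a video file into timestamped still image frames."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            # Millisecond position within the video plus a wall-clock receipt time.
            video_ms = capture.get(cv2.CAP_PROP_POS_MSEC)
            received_at = datetime.now(timezone.utc)
            frames.append({"image": frame, "video_ms": video_ms, "received_at": received_at})
        index += 1
    capture.release()
    return frames

frames = video_to_frames("or_camera_feed.mp4", every_nth=5)  # placeholder file name
```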
In some embodiments, at 505, the object detection algorithm may select and label the received sensor data. For example, the object detection algorithm may be configured to receive an orientation indication of sensor(s) 120, which may include location coordinates, orientation coordinates, and identifiers of particular sensor(s) 120 and/or a particular object. As an example, orientation indications may include one or more rectangular, polar, or spherical coordinates. Using rectangular coordinates as an example, the object detection algorithm may be further configured to associate the center of a bounding box with x = x_min + (x_max - x_min)/2 and y = y_min + (y_max - y_min)/2, where (x_min, y_min) and (x_max, y_max) denote opposite corners of the bounding box.
Following these calculations, the object detection algorithm may record x and y and/or cause display(s) 140, described in further detail below, to display a graphic of x and y, such as shown in
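As a non-limiting illustration, the following sketch computes the bounding-box center and records x and y together with a timestamp and a sensor identifier. The object label, sensor name, and coordinate values are hypothetical.

```python
from datetime import datetime, timezone

def bounding_box_center(x_min, y_min, x_max, y_max):
    """Center of an axis-aligned bounding box in image coordinates."""
    x = x_min + (x_max - x_min) / 2.0
    y = y_min + (y_max - y_min) / 2.0
    return x, y

# Hypothetical detection of a clamp by sensor "cam_1".
x, y = bounding_box_center(312, 148, 420, 260)
track_record = {
    "sensor_id": "cam_1",
    "label": "clamp",
    "x": x,
    "y": y,
    "timestamp": datetime.now(timezone.utc),
}
```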
In various embodiments, the software may be configured to process spoken conversation and/or verbal commands of the OR personnel detected by and received from sensor(s) 120. For example, the spoken conversation may be processed by a pre-trained speech recognition algorithm, such as a recurrent neural network (RNN), configured to detect pre-defined words that identify particular objects.
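As a non-limiting illustration, the following sketch assumes an upstream speech recognition engine (such as the pre-trained RNN mentioned above) has already produced a text transcript, and only shows matching pre-defined words against that transcript. The keyword list and object identifiers are hypothetical.

```python
import re

# Pre-defined words/phrases that identify particular objects (illustrative only).
OBJECT_KEYWORDS = {
    "sponge": "sponge",
    "lap pad": "laparotomy_pad",
    "clamp": "clamp",
    "needle": "needle",
}

def keywords_in_transcript(transcript):
    """Return the object identifiers whose keywords appear in a recognized utterance."""
    text = transcript.lower()
    return [obj for phrase, obj in OBJECT_KEYWORDS.items()
            if re.search(r"\b" + re.escape(phrase) + r"\b", text)]

print(keywords_in_transcript("Passing one more sponge, please hand me the clamp"))
# ['sponge', 'clamp']
```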
In certain embodiments, the image processing algorithm may be configured to modify the plurality of images with bounding boxes around each of the objects, as illustrated in
In certain embodiments, the object detection algorithm may detect SI and other objects according to deep-learning algorithms, which may be based on multiple orientations and distances of the SI and other detected objects. Furthermore, SI and other detected objects may change size, color, and/or shape during the surgical procedure. For example, gauze may change color, shape, and/or size during the surgery, such as when exposed to blood or cut into pieces. The object detection algorithm may also be trained on such visual modifications so that items are continuously recognized despite these changes.
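As a non-limiting illustration, one way such appearance changes might be imitated at training time is with standard image augmentations, sketched below using torchvision. In a detection pipeline the bounding-box annotations would also need to be transformed, which is omitted here; the parameter values are illustrative assumptions.

```python
import torchvision.transforms as T

# Augmentations that imitate appearance changes such as staining, folding/cutting,
# and varied orientation or distance, so the detector keeps recognizing the item.
appearance_augmentations = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.5, hue=0.05),  # e.g., blood staining
    T.RandomRotation(degrees=180),                                          # arbitrary orientation
    T.RandomResizedCrop(size=512, scale=(0.3, 1.0)),                        # cut pieces / varied distance
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
```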
In various embodiments, the object detection algorithm may be configured to detect that sensor(s) 120 has a blocked line-of-sight to the surgical incision based on the positioning/orientation coordinates of blocked sensor(s) 120 and/or positioning of detected objects, such as surgical instruments, surgical material, an operating cavity, surgical incisions, instrument tables, workstations, waste collection areas, lights, other sensors, OR personnel, including their heads and/or hands, and other objects, also illustrated in
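As a non-limiting illustration, the following sketch implements one simplified geometric test for a blocked line-of-sight: detected obstacles are treated as points, and the line-of-sight is considered blocked if any obstacle lies within a clearance distance of the straight segment from the sensor to the incision. The coordinates and clearance value are hypothetical.

```python
import numpy as np

def line_of_sight_blocked(sensor_pos, incision_pos, obstacles, clearance=0.10):
    """Return True if any obstacle center lies within `clearance` meters of the
    straight segment between the sensor and the surgical incision."""
    a = np.asarray(sensor_pos, dtype=float)
    b = np.asarray(incision_pos, dtype=float)
    ab = b - a
    length_sq = float(ab @ ab)
    for center in obstacles:
        p = np.asarray(center, dtype=float)
        # Projection of the obstacle onto the segment, clamped to its endpoints.
        t = 0.0 if length_sq == 0 else float(np.clip((p - a) @ ab / length_sq, 0.0, 1.0))
        closest = a + t * ab
        if np.linalg.norm(p - closest) < clearance:
            return True
    return False

# Hypothetical coordinates in meters: a team member's head between camera and incision.
print(line_of_sight_blocked((0, 0, 2.5), (0.2, 0.1, 1.0), [(0.1, 0.05, 1.7)]))  # True
```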
In some embodiments, the object detection algorithm may be configured to calculate the position of sensor(s) 120 with respect to the surgical incision. In certain embodiments, computer(s) 130 may be configured to determine that at least one detected object is improperly within the surgical incision area. For example, if an object is tracked and subsequently is not tracked again for the remainder of the surgical procedure, or is last tracked in the same location as the surgical incision, computer(s) 130 may indicate the tracked object as unaccounted for and/or remaining within the surgical incision.
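As a non-limiting illustration, the following sketch applies the accounting rule described above to a per-object track history: an object whose last observation lies inside the incision region, or that is not observed again near the end of the procedure, is flagged. The data layout and grace period are assumptions.

```python
from datetime import timedelta

def unaccounted_objects(track_history, incision_region, procedure_end_time,
                        grace=timedelta(minutes=2)):
    """Flag tracked objects whose last observation was inside the incision region,
    or that were not seen again within `grace` of the end of the procedure.

    track_history: {object_id: [(timestamp, (x, y)), ...]} sorted by timestamp.
    incision_region: callable (x, y) -> bool, True if the point lies in the incision area.
    """
    flagged = []
    for object_id, observations in track_history.items():
        last_time, (x, y) = observations[-1]
        if incision_region(x, y):
            flagged.append((object_id, "last seen inside incision"))
        elif last_time < procedure_end_time - grace:
            flagged.append((object_id, "not tracked near end of procedure"))
    return flagged
```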
In certain embodiments, at 507, the object detection algorithm may be configured to be trained according to the modified image, and a prediction/inference model may be generated according to the modified image, at 509. Each of sensor(s) 120 may be associated with a dedicated inference model, as shown in
In various embodiments, the object detection algorithm may be configured to process the plurality of images throughout the surgery, such that subsequently-received images and detected objects are used to further train the algorithm. For example, the object detection algorithm may be configured to track detected objects based on data received from sensor(s) 120 and the viewing position of sensor(s) 120.
In some embodiments, display(s) 140 may be configured to display a user interface (UI), such as a graphical user interface (GUI), and/or receive input via touch buttons on display(s) 140 and/or the UI. For example, input may be received via virtual touch button, physical touch buttons, microphones configured to detect spoken commands through speech recognition capability, foot pedals, RFID signals, and any combination thereof.
In some embodiments, the UI may be configured to operate in a setup mode, wherein the UI may be configured to display a request for a user to configure the system, such as a configuration of light source(s) 110 and/or configuration of sensor(s) 120, for example, sensor field of view and sensor position. In some embodiments, the system may be configured to have robotic arms automatically configure positions of sensor(s) 120. The display may be configured to receive input for configuring the system in response to the request. A setup mode may be configured to allow the user to aim light source(s) 110 towards a particular location, such as operating area(s) 150, adjust sensor(s) 120, such as to focus a camera, prevent adjustment of a configuration setting, and any combination thereof.
In various embodiments, after display(s) 140 receives input for configuring the system, display(s) 140 may be configured to operate in a run mode, wherein display(s) 140 may be configured to receive a command from a user to begin detecting and tracking an object, such as an SI or operating area(s) 150. Display(s) 140 may be configured to display a field of view of sensor(s) 120, such as a camera, in real time. Additionally or alternatively, display(s) 140 may be configured to display names, numbers, statuses, and locations of detected objects, as illustrated in
In certain embodiments, display(s) 140 may be configured to display available commands, such as start and stop, which may be selected as input by OR personnel. For example, a start command may be configured to initiate sensor(s) 120 capturing data, such as of a surgery, applying sensor data to an inference model, detecting an object, calculating coordinates of detected objects, and generating prediction/inference models. Similarly, the stop command may be configured to discontinue the aforementioned actions, cause display(s) 140 to display an indication, such as the table shown in
In various embodiments, display(s) 140 may be configured to display data saved to memory of computer(s) 130. For example, the UI may be configured to display objects sensed inside the surgical incision at a particular time, a list of objects sensed since receiving a start command, a time that an object was sensed to enter the surgical incision, a time that an object was sensed to exit the surgical incision, an indication of whether an object is still inside the surgical incision, an illustration of the movement of an object within the field of view, an indication of whether and/or when sensor(s) 120 was disabled, such as being blocked or turned off, a real-time image provided by sensor(s) 120, a real-time labeled image, an indication of the time a start command was received, an indication of a current time, and any combination thereof.
In various embodiments, the received sensor data may be associated with LiDAR, which may be imaging LiDAR and/or scanning LiDAR. For example, a LiDAR sensor may be configured to detect a distance of an object from at least one sensor and/or distance between a first object and a second object, such as SI.
In certain embodiments, the received sensor data may be based upon continuously and/or periodically performed sensing operations, such as capturing pictures and/or collecting data of temperature, color, shape, size, light, X-rays, gamma-rays, and radiation. For example, the received sensor data may be collected based upon a pre-defined time period. The received sensor data may be based on continuously and/or periodically monitored various objects, such as operating areas, surgical instruments, surgical material, operating cavities, lights, other sensors, surgical incisions, instrument tables, workstations, waste collection areas, and OR personnel, including their heads and/or hands.
In various embodiments, the received sensor data may include detected spoken conversation and/or verbal commands of the OR personnel.
In some embodiments, the received sensor data may be associated with performance parameters, such as speed, zoom, shutter speed, focal length, spectral response, image stabilization, size, FPS, and resolution, which may minimize interference, such as blur. As an example, the resolution of a camera may be configured so as to detect objects of a certain size while minimizing interference. In another example, a spectral response of a camera may be configured with an IR optical filter configured to provide temperature of an object within the field of view of the camera. This may provide an indication of a recent location of the object, such as within an operating cavity. In various embodiments, the received sensor data, such as a plurality of images, may contain objects, such as SI, operating area, and other items of interest.
At 703, the received sensor data may be applied to at least one inference model, which, at 705, may detect and/or track a predefined object, such as SI. In some embodiments, an object may be detected in the received sensor data. In certain embodiments, a predefined object, such as SI, may be detected and/or tracked according to a plurality of image processing and/or deep learning-based object detection algorithms, such as a Faster region-based convolutional neural network (Faster R-CNN), RetinaNet, or another open-source or proprietary algorithm. In various embodiments, a video may be converted into a still image frame. The plurality of images may be associated with and/or modified with a timestamp, viewpoint angle, background, lighting condition, and any combination thereof. Furthermore, the plurality of images or other data may be timestamped when received from the sensor.
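As a non-limiting illustration, the following sketch applies image frames to an off-the-shelf Faster R-CNN from torchvision. The COCO-pretrained weights are only a stand-in; a deployed system would presumably use a model fine-tuned on surgical instrument and material classes, and the score threshold is an assumption.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Off-the-shelf Faster R-CNN with COCO weights as a stand-in; a real system would
# load weights fine-tuned on surgical instrument and material classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image, score_threshold=0.6):
    """Run one image through the inference model and keep confident detections."""
    with torch.no_grad():
        output = model([to_tensor(image)])[0]
    keep = output["scores"] >= score_threshold
    return {
        "boxes": output["boxes"][keep],    # [x_min, y_min, x_max, y_max] per detection
        "labels": output["labels"][keep],
        "scores": output["scores"][keep],
    }
```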
In some embodiments, the received sensor data may include orientation indications of a sensor and/or object, such as surgical instruments, surgical material, and surgical incisions. For example, the orientation indications may include location coordinates, orientation coordinates, and indications of a particular sensor. As an example, the orientation indications may include one or more rectangular, polar, or spherical coordinates. At 707, object coordinates may be calculated by associating the center of a bounding box with x = x_min + (x_max - x_min)/2 and y = y_min + (y_max - y_min)/2.
Following these calculations, an object detection algorithm may record x and y and/or display graphics in a display. By matching coordinates of an object in the surgical area with timestamps, the object detection algorithm may possess real-time tracking information for each detected object. At 709, detections from each sensor may be resolved into a single object where they have the same label, name, and coordinates within a predetermined threshold.
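As a non-limiting illustration, the following sketch merges per-sensor detections that share a label and whose coordinates agree within a predetermined threshold. It assumes the coordinates have already been expressed in a common reference frame; the threshold value and record layout are hypothetical.

```python
import math

def resolve_detections(per_sensor_detections, distance_threshold=25.0):
    """Merge detections from multiple sensors that share a label and whose
    coordinates fall within `distance_threshold` (e.g., pixels in a common frame).

    per_sensor_detections: [{"sensor": str, "label": str, "x": float, "y": float}, ...]
    Returns one representative record per resolved object.
    """
    resolved = []
    for det in per_sensor_detections:
        for obj in resolved:
            same_label = obj["label"] == det["label"]
            close = math.hypot(obj["x"] - det["x"], obj["y"] - det["y"]) <= distance_threshold
            if same_label and close:
                obj["sensors"].add(det["sensor"])
                break
        else:
            resolved.append({"label": det["label"], "x": det["x"], "y": det["y"],
                             "sensors": {det["sensor"]}})
    return resolved
```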
In various embodiments, spoken conversation of the OR personnel detected by and received from a sensor may be processed. For example, spoken conversation may be processed by a pre-trained speech recognition algorithm, such as a recurrent neural network (RNN), and configured to detect pre-defined words and phrases to identify particular objects.
In various embodiments, a plurality of images containing items, such as SI, operating area, and other items of interest may be received from the sensor. For example, video may be converted into still image frames. The plurality of images may be associated with and/or assigned a timestamp at 711, as well as a viewpoint angle, background, lighting condition, and any combination thereof. Furthermore, the plurality of images or other data may be timestamped upon receipt.
In certain embodiments, the plurality of images may be modified with a bounding box around each of the detected objects. For example, the surgical incision may be detected as an object, as shown in
In some embodiments, the position of the sensor may be calculated with respect to the surgical incision. In certain embodiments, the computer may be configured to determine that at least one detected object is improperly within the surgical incision area. For example, if an object is tracked and subsequently is not tracked again for the remainder of the surgical procedure, or is last tracked in the same location as the surgical incision, the computer may indicate the tracked object as unaccounted for and/or remaining within the surgical incision. In some embodiments, orientation indications may be received from the sensor. For example, the orientation indication may include location coordinates, orientation coordinates, and indications of a particular sensor and/or object, such as surgical instruments, surgical material, and surgical incisions. As an example, the orientation indication may include one or more rectangular, polar, or spherical coordinates. The center of at least one bounding box may be calculated as x = x_min + (x_max - x_min)/2 and y = y_min + (y_max - y_min)/2.
Following these calculations, x and y may be recorded and/or a graphic may be displayed in the display. By matching coordinates of an object in the surgical area with timestamps, real-time location information may be tracked for each object.
In various embodiments, a prediction/inference model may be generated according to a modified image. For example, the object detection algorithm may perform segmentation, blob analysis, correspondence between a plurality of sensor data, conversion from RGB additive color model to another color space, such as Lab, and any other image processing algorithm configured to perform detection and identification of objects. Additionally or alternatively, the object detection algorithm may be trained to identify particular objects, which may be based upon orientation, distance, zoom, color, shape, and size during the surgery.
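As a non-limiting illustration, the following sketch combines two of the classical operations mentioned above: conversion from RGB (here OpenCV's BGR ordering) to the Lab color space, followed by a simple blob analysis of the thresholded a* channel. The threshold and minimum blob area are illustrative assumptions only.

```python
import cv2
import numpy as np

def blood_stained_blobs(image_bgr, min_area=200):
    """Convert to Lab, threshold the a* channel (redness), and return bounding
    boxes of sufficiently large connected blobs."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]
    # Pixels with a strong red component; the threshold is illustrative only.
    _, mask = cv2.threshold(a_channel, 150, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n_labels):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((int(x), int(y), int(x + w), int(y + h)))
    return boxes
```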
In certain embodiments, the object detection algorithm may detect SI and other objects according to deep-learning algorithms, which may be based on multiple orientations and distances of the SI and other detected objects. Furthermore, SI and other detected objects may change size, color, and/or shape during the surgical procedure. For example, gauze may change color, shape, and/or size during the surgery, such as when exposed to blood or cut into pieces. The object detection algorithm may also be trained on such modifications so that items are continuously recognized despite these changes.
In various embodiments, the object detection algorithm may be configured to detect that a sensor has a blocked line-of-sight to the surgical incision based on the positioning/orientation coordinates of a blocked sensor and/or positioning of detected objects, such as surgical instruments, surgical material, an operating cavity, surgical incisions, instrument tables, workstations, waste collection areas, lights, other sensors, OR personnel, including their heads and/or hands, and other objects, also illustrated in
At 713, input may be received to stop tracking of the sensed objects, such as from OR personnel, and at least one piece of orientation information may be transmitted to at least one display. At 715, a UI may display results of the processed images. For example, a particular time that an object was detected within the surgical area, a list of objects sensed since receiving a start command, a time that an object was sensed to enter the surgical incision, a time that an object was sensed to exit the surgical incision, an indication of whether an object is still inside the surgical incision, an illustration of the movement of an object within the field of view, an indication of whether and/or when a sensor was disabled, such as being blocked or powered off, a real-time image from a sensor, a real-time labeled image, an indication of the time a start command was received, an indication of a current time, and any combination thereof may be displayed.
At 715, the locations of all detected objects may be displayed, and a warning may be displayed with the name and/or location of any previously-detected objects that are not currently detected.
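As a non-limiting illustration, the following sketch shows the set comparison behind such a warning; the object identifiers and locations are hypothetical.

```python
def missing_object_warnings(previously_detected, currently_detected):
    """Return warnings for objects that were detected earlier but are not detected now.

    previously_detected / currently_detected: {object_id: last_known_location}
    """
    warnings = []
    for object_id in set(previously_detected) - set(currently_detected):
        warnings.append(
            f"WARNING: {object_id} not currently detected; "
            f"last known location {previously_detected[object_id]}"
        )
    return warnings

prior = {"sponge_03": "surgical incision", "clamp_01": "instrument table"}
now = {"clamp_01": "instrument table"}
print(missing_object_warnings(prior, now))
```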
In various embodiments, the received sensor data may include video and/or image data containing at least one object, such as SI, operating area, and other items of interest. The received sensor data may be associated with a viewpoint angle, background, lighting condition, timestamp, and any combination thereof. The image processing algorithm may also timestamp the plurality of images or other data, and may convert video into still image frames.
In some embodiments, the sensor may be one or more items from the following list, which is intended to be exemplary and not limiting: a still camera, video camera, high-speed camera, light detection and ranging (LiDAR)-enabled sensor, infrared sensor, X-ray sensor, gamma-ray sensor, radiation sensor, 3D-laser scanning sensor, gyroscope, such as a microelectromechanical system (MEMS) unit, accelerometer, temperature sensor, and any combination thereof.
As noted above, the sensor may be configured with LiDAR, which may be imaging LiDAR and/or scanning LiDAR. For example, a LiDAR sensor may be configured to detect a distance of an object from the sensor and/or distance between a first object and a second object, such as SI.
In certain embodiments, the sensor may be configured to continuously and/or periodically perform at least one sensing operation, such as capturing pictures and/or collecting data of one or more of temperature, color, shape, size, light, X-rays, gamma-rays, and radiation. For example, the sensor may be configured to periodically perform at least one sensing operation. The sensor may also be configured to continuously and/or periodically monitor various objects, such as operating areas, surgical instruments, surgical material, operating cavities, lights, other sensors, surgical incisions, instrument tables, workstations, waste collection areas, and OR personnel, including their heads and/or hands.
In various embodiments, the sensor may be configured with a microphone, which may be configured to detect spoken conversation and/or verbal commands of the OR personnel through speech recognition capability.
In some embodiments, the sensor may be configured to perform sensing operations according to certain performance parameters, such as speed, zoom, shutter speed, focal length, spectral response, image stabilization, size, FPS, and resolution, which may minimize interference, such as blur. For example, the sensor may be exposed to a desired area, as noted above. As another example, the resolution of a camera may be configured so as to detect objects of a certain size while minimizing interference. In another example, a spectral response of a camera may be configured with an IR optical filter configured to sense temperature of an object within the field of view of the camera. This may provide indications of a recent location of an object, such as inside an operating cavity.
At 803, a surgical procedure may be performed with a plurality of surgical instruments and/or surgical material. For example, the surgical procedure may be performed using scalpels, scissors, forceps, clamps, gloves, gauzes, sponges, needles, staples, and sutures.
At 805, detecting and tracking within the surgical area may be stopped. For example, the display may be configured to display a stop command, which may be selected as input by OR personnel. At 807, the display may indicate the location status of the plurality of surgical instruments. For example, the display may display an indication, such as the table shown in
The features, structures, or characteristics of certain embodiments described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of the phrases “certain embodiments,” “some embodiments,” “other embodiments,” or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearance of the phrases “in certain embodiments,” “in some embodiments,” “in other embodiments,” or other similar language, throughout this specification does not necessarily refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
One having ordinary skill in the art will readily understand that the invention as discussed above may be practiced with steps in a different order, and/or with hardware elements in configurations which are different than those which are disclosed. Therefore, although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the spirit and scope of the invention.
This application is a Continuation application of U.S. patent application Ser. No. 16/791,716, filed Feb. 14, 2020, which claims the benefit of U.S. Provisional Application No. 62/807,947, filed Feb. 20, 2019. The entire contents of the above-referenced applications are hereby incorporated by reference for all purposes.