The present disclosure generally relates to surgical automation and, more specifically, to systems, devices and techniques for automatically turning on/off a light source of an endoscope when the endoscope is being put into/taken out of a patient's body or pointed at a white object for automatic white balancing.
During a laparoscopic or robotic surgery, the endoscope inside a patient's body may need to be taken out of the patient's body for various reasons, such as for lens cleaning, and then put back into the patient's body to continue its imaging function. The endoscope is generally equipped with a powerful light source, such as a light-emitting diode (LED) or a laser, wherein the light generated by the light source travels down to the distal end of the endoscope inside the patient's body to illuminate the body cavity so that the endoscope camera can “see” and record. However, if this powerful light source is not turned off when the endoscope is taken out of the patient's body, accidentally pointing the endoscope at the eyes of any nearby surgical staff can cause significant, potentially permanent retinal damage. To avoid this health risk when the endoscope is outside of the patient's body, an operator of the endoscope is trained to immediately hold his/her palm against the end of the endoscope until the light source is manually turned off. Similarly, before inserting the endoscope back into the patient's body, the operator needs to manually turn the light source back on and hold his/her palm against the end of the endoscope to block the light until the endoscope has entered the trocar or otherwise been inserted into the patient's body.
However, manual control of the endoscope light source can still pose risks to the surgical staff if the surgeon taking the endoscope out has poor coordination between blocking the light and turning off the light source, or simply forgets to turn off the light source. Moreover, in certain operating room (OR) setups, it is the responsibility of a circulating nurse or a scrub technologist to turn the light source on or off depending on whether the surgeon is driving the endoscope into the patient's body or taking it out. These setups depend on coordination between the surgical staff and the surgeon to ensure the safety of people in the OR, and therefore carry a higher risk of accidental light exposure than having the surgeon control both the endoscope and the on/off state of the light source.
Hence, there is a need for an automatic process for turning on/off the endoscope light source when the endoscope is being inserted into/taken out of the patient's body, without the problems described above.
This patent disclosure provides various embodiments of a machine-learning-based detection/control technique for automatically turning on/off a light source of an endoscope when the endoscope is being inserted into/taken out of a patient's body or pointed at a white object for automatic white balancing. In some embodiments, to enable automatically turning on/off an LED light source or a laser light source, a statistical classifier for video scene analysis, and in particular for distinguishing between scenes from inside the patient's body and scenes from outside of the patient's body, is first constructed. Next, the statistical classifier can be applied to real-time video images captured by the endoscope to determine a state for the endoscope, such as whether the endoscope is inside the patient's body or outside of the patient's body. In this manner, a change of state of the endoscope can be automatically detected.
More specifically, if the state of the endoscope has changed from being inside the patient's body to being outside of the patient's body, a control signal can be immediately generated to turn off the light source for the endoscope. On the other hand, if the state of the endoscope has changed from being outside of the patient's body to being inside of the patient's body, a control signal can be immediately generated to turn on the light source for the endoscope. Hence, the disclosed control technique eliminates the need for manual intervention in turning on/off the light source for the endoscope camera. Note that the disclosed technique essentially detects transitions of the endoscope between the two states, and therefore the detection of a state change occurs with minimal latency. In other words, the disclosed technique can detect the first moment the endoscope is outside of the patient's body and immediately cause the light source to be turned off.
It is another objective of this patent disclosure to provide a technique for automatically turning on/off the white balance mode of the endoscope for automatic white balancing. More specifically, a proposed auto-white-balancing technique is designed to detect and recognize that the endoscope is pointed at a white balancing object, such as a white card or a white sponge. The proposed auto-white-balancing technique can then trigger the white balancing operation, including causing the light source to be turned on to allow an automatic white balance to be performed. In some embodiments, the proposed auto-white-balancing technique can also detect the end of the white balancing operation by recognizing that the endoscope is pointed away from the white balancing object. For example, the proposed auto-white-balancing technique can include a video analysis tool for detecting a scene change from the images of the white balancing object to a different object. The proposed auto-white-balancing technique can then switch off the white balancing mode, including causing the light source to be turned off again. Note that the light source can remain off until the endoscope is reinserted into the patient's body, at which point the proposed automatic light-source control technique turns the light source on again to resume imaging. Hence, the proposed auto-white-balancing technique completely eliminates the need for manual intervention during white balancing, and therefore saves significant OR time and hassle.
In one aspect, a process for automatically controlling an on/off state of a light source of an endoscope camera during a surgical procedure to ensure the safety of the surgical staff is disclosed. Note that the endoscope camera can be either inside a patient's body or outside of the patient's body during the surgical procedure. This process can begin by receiving a first real-time video image captured by the endoscope camera, wherein the first real-time video image is captured either inside the patient's body or outside of the patient's body. The process next processes the first real-time video image using a first statistical classifier to classify the first real-time video image as either being inside the patient's body or being outside of the patient's body. When the first real-time video image is classified as being outside of the patient's body, the process subsequently generates a first control signal to immediately turn off the light source if the light source is turned on.
In some embodiments, if the first real-time video image is classified as being inside of the patient's body, the process subsequently determines if the light source is turned off. If so, the process generates a second control signal to immediately turn on the light source. Otherwise, the process continues receiving and processing real-time video images captured by the endoscope camera.
In some embodiments, while the light source remains turned off, the process further receives a second real-time video image captured by the endoscope camera, wherein the second real-time video image is captured either inside the patient's body or outside of the patient's body. Next, the process processes the second real-time video image using the first statistical classifier to classify the second real-time video image as either being inside the patient's body or being outside of the patient's body. If the second real-time video image is classified as being inside the patient's body, the process subsequently generates a second control signal to immediately turn on the light source.
In some embodiments, if the second real-time video image is classified as being outside of the patient's body, the process continues receiving and processing real-time video images captured by the endoscope camera while keeping the light source turned off.
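By way of illustration and not limitation, the following Python sketch shows one possible form of this per-frame control loop. The Scene labels, the classify_frame function, and the LightSource interface are illustrative assumptions introduced here rather than elements of any particular embodiment.

```python
from enum import Enum

class Scene(Enum):
    INSIDE = 0   # first class of image: inside the patient's body
    OUTSIDE = 1  # second class of image: outside of the patient's body

class LightSource:
    """Minimal stand-in for the endoscope light source (assumed interface)."""
    def __init__(self, on=True):
        self.on = on
    def turn_on(self):
        self.on = True
    def turn_off(self):
        self.on = False

def control_loop(frames, classify_frame, light):
    """Classify each real-time frame and toggle the light accordingly."""
    for frame in frames:
        scene = classify_frame(frame)           # first statistical classifier
        if scene is Scene.OUTSIDE and light.on:
            light.turn_off()                    # first control signal
        elif scene is Scene.INSIDE and not light.on:
            light.turn_on()                     # second control signal
```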
In some embodiments, prior to using the first statistical classifier to classify real-time video images, the process further includes steps for training the first statistical classifier by: receiving a recorded surgical video of the surgical procedure, wherein the recorded surgical video contains a sequence of video images captured by an endoscope camera both inside and outside of a patient's body; labeling each video image in the sequence of video images as either a first class of image of being inside a patient's body or a second class of image of being outside of a patient's body; and training the first statistical classifier for classifying an input image as being either the first class of image or the second class of image based on the sequence of labeled video images.
In some embodiments, the process trains the first statistical classifier for classifying the input image by training the first statistical classifier based on a set of distinct image features, such as statistics of the color, texture, and contrast in the sequence of labeled video images.
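By way of example, such color, texture, and contrast statistics might be computed and fed to an off-the-shelf model as sketched below; the specific feature definitions and the choice of a random forest are illustrative assumptions, not requirements of the disclosed technique.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(frame):
    """frame: HxWx3 uint8 RGB image. Returns color/contrast/texture stats
    (illustrative choices of distinct image features)."""
    img = frame.astype(np.float32) / 255.0
    color = np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])
    gray = img.mean(axis=2)
    contrast = np.array([gray.std()])              # global contrast measure
    gy, gx = np.gradient(gray)
    texture = np.array([np.hypot(gx, gy).mean()])  # mean gradient magnitude
    return np.concatenate([color, contrast, texture])

def train_feature_classifier(frames, labels):
    """labels: 0 = inside the patient's body, 1 = outside of the body."""
    X = np.stack([extract_features(f) for f in frames])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```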
In some embodiments, while the light source remains turned off and the endoscope camera is outside of the patient's body, the process further receives a third real-time video image captured by the endoscope camera. The process next processes the third real-time video image using a second statistical classifier to classify the third real-time video image as either a white balancing object or a non-white balancing object. If the third real-time video image is classified as a white balancing object, the process subsequently generates a third control signal to turn on the light source for performing a white balancing operation.
In some embodiments, while performing the white balancing operation, the process further receives a fourth real-time video image captured by the endoscope camera. Next, the process processes the fourth real-time video image using the second statistical classifier to classify the fourth real-time video image as either a white balancing object or a non-white balancing object. If the fourth real-time video image is classified as a non-white balancing object, the process subsequently generates a fourth control signal to immediately turn off the light source.
In some embodiments, prior to using the second statistical classifier to classify real-time video images, the process trains the second statistical classifier based on a set of labeled images that further includes a first subset of images labeled as white balancing objects and a second subset of images labeled as non-white balancing objects.
In some embodiments, the white balancing object can include: a white balancing card, a white sponge, a whiteboard, a piece of white paper, a piece of white cloth, and any other suitable white medium.
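By way of illustration, the white balancing mode described above might be driven by a loop such as the following sketch, in which classify_wb (the second statistical classifier), light, and camera are assumed interfaces rather than disclosed components.

```python
def white_balance_loop(frames, classify_wb, light, camera):
    """Outside-of-body monitoring: enter white balance mode when a white
    balancing object is recognized, and exit when the scope points away."""
    wb_active = False
    for frame in frames:
        sees_wb_object = classify_wb(frame)  # second statistical classifier
        if sees_wb_object and not wb_active:
            light.turn_on()                  # third control signal
            camera.run_auto_white_balance()  # assumed camera interface
            wb_active = True
        elif not sees_wb_object and wb_active:
            light.turn_off()                 # fourth control signal
            wb_active = False
```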
In another aspect, an endoscope system is disclosed. This endoscope system can include: an endoscope camera module; a light source module coupled to the endoscope camera module for providing a light source for the endoscope camera module; and a light-source control module coupled to the endoscope camera module and the light source module. More specifically, this light-source control module is configured to cause the light source in the light source module to be automatically turned on/off during a surgical procedure by receiving a first real-time video image captured by the endoscope camera, wherein the first real-time video image is captured either inside the patient's body or outside of the patient's body. The light-source control module is further configured to process the first real-time video image using a first statistical classifier to classify the first real-time video image as either being inside the patient's body or being outside of the patient's body. If the first real-time video image is classified as being outside of the patient's body, the light-source control module is further configured to determine if the light source is turned on. If so, the light-source control module is configured to generate a first control signal to immediately turn off the light source. Otherwise, the light-source control module is configured to continue receiving and processing real-time video images captured by the endoscope camera.
In some embodiments, while the light source remains turned off, the light-source control module is further configured to cause the light source in the light source module to be automatically turned on/off during a surgical procedure by: receiving a second real-time video image captured by the endoscope camera, wherein the second real-time video image is captured either inside the patient's body or outside of the patient's body; processing the second real-time video image using the first statistical classifier to classify the second real-time video image as either being inside the patient's body or being outside of the patient's body; and if the second real-time video image is classified as being inside the patient's body, generating a second control signal to immediately turn on the light source.
The structure and operation of the present disclosure will be understood from a review of the following detailed description and the accompanying drawings in which like reference numerals refer to like parts and in which:
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and may be practiced without these specific details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
This patent disclosure provides various embodiments of a machine-learning-based detection/control technique for automatically turning on/off a light source of an endoscope when the endoscope is being inserted into/taken out of a patient's body or pointed at a white object for automatic white balancing. In some embodiments, to enable automatically turning on/off an LED light source or a laser light source, a statistical classifier for video scene analysis, and in particular for distinguishing between scenes from inside the patient's body and scenes from outside of the patient's body, is first constructed. Next, the statistical classifier can be applied to real-time video images captured by the endoscope to determine a state for the endoscope, such as whether the endoscope is inside the patient's body or outside of the patient's body. In this manner, a change of state of the endoscope can be automatically detected.
More specifically, if the state of the endoscope has changed from being inside the patient's body to being outside of the patient's body, a control signal can be immediately generated to turn off the light source for the endoscope. On the other hand, if the state of the endoscope has changed from being outside of the patient's body to being inside of the patient's body, a control signal can be immediately generated to turn on the light source for the endoscope. Hence, the disclosed control technique eliminates the need for manual intervention in turning on/off the light source for the endoscope camera. Note that the disclosed technique essentially detects transitions of the endoscope between the two states, and therefore the detection of a state change occurs with minimal latency. In other words, the disclosed technique can detect the first moment the endoscope is outside of the patient's body and immediately cause the light source to be turned off.
Note that prior to commencing any surgery, automatic white balancing is a mandatory step to prepare the endoscope. Moreover, during a surgical procedure, after the endoscope has been outside of the patient's body temporarily for cleaning or for other reasons (described in more detail below), the endoscope needs to be re-calibrated, i.e., to have another white balancing performed, before the endoscope can be reinserted into the patient's body. Assuming the endoscope light source is turned off prior to performing the white balancing for the described safety reasons, the surgical staff would normally be required to manually turn on/off a white balancing mode, which includes manually turning on the light source to perform the white balancing operation and then manually turning off the light source when the white balancing is completed.
It is another objective of this patent disclosure to provide a technique for automatically turning on/off the white balance mode of the endoscope for automatic white balancing. More specifically, a proposed auto-white-balancing technique is designed to detect and recognize that the endoscope is pointed at a white balancing object, such as a white card or a white sponge. The proposed auto-white-balancing technique can then trigger the white balancing operation, including causing the light source to be turned on to allow an automatic white balance to be performed. In some embodiments, the proposed auto-white-balancing technique can also detect the end of the white balancing operation by recognizing that the endoscope is pointed away from the white balancing object. For example, the proposed auto-white-balancing technique can include a video analysis tool for detecting a scene change from the images of the white balancing object to a different object. The proposed auto-white-balancing technique can then switch off the white balancing mode, including causing the light source to be turned off again. Note that the light source can remain off until the endoscope is reinserted into the patient's body, at which point the proposed automatic light-source control technique turns the light source on again to resume imaging. Hence, the proposed auto-white-balancing technique completely eliminates the need for manual intervention during white balancing, and therefore saves significant OR time and hassle.
During a laparoscopic or robotic surgery, the endoscope inside a patient's body may need to be taken out of the patient's body for various reasons, such as for lens cleaning, and then put back into the patient's body to continue its imaging function. A time period during the surgical procedure when the endoscope is taken out of the patient's body for one of a number of reasons and then put back into the patient's body may be referred to as an out-of-body (OOB) event. Note that each surgical procedure can include a number of OOB events which take place for a number of reasons. For example, an OOB event will occur if the endoscope lens has to be cleaned, e.g., when the endoscope lens is covered with blood. Another type of OOB event involves changing the endoscope lens from one scope size to another scope size for different anatomy/fields of view (FOV). Typically, during an OOB event the endoscope camera continues to record. The proposed techniques can use the video images captured during an OOB event to determine when to turn on/off the light source during the OOB event to ensure the safety of the surgical staff in the OR and when to turn on/off the light source for automatic white balancing.
Proposed light-source on/off control subsystem 110 can include one or more video-image processing modules including video-image processing modules 112 and 114. Specifically, video-image processing module 112 is configured to perform a surgical scene analysis to determine whether endoscope camera module 102 is inside the patient's body or outside of the patient's body. Video-image processing module 112 is also configured to detect two transitional events: (1) when endoscope camera module 102 is transferred from the inside of the patient's body to the outside of the patient's body, also referred to as “the first transitional event” hereinafter; and (2) when endoscope camera module 102 is transferred from the outside of the patient's body to the inside of the patient's body, also referred to as “the second transitional event” hereinafter. Video-image processing module 112 is configured to generate a “POWER-OFF” signal 130-1 for turning off light source 106 in light source module 104 when the first transitional event is detected, and a “POWER-ON” signal 130-2 for turning on light source 106 in light source module 104 when the second transitional event is detected. Light-source control subsystem 110 is configured to output control signals 130-1 and 130-2 generated by video-image processing module 112 to light source module 104. Upon receiving a control signal 130, power control unit 108 of light source module 104 is configured to immediately turn off light source 106 for endoscope camera module 102 if the POWER-OFF signal 130-1 is received or immediately turn on light source 106 for endoscope camera module 102 if the POWER-ON signal 130-2 is received.
Proposed video-image processing module 114 is configured to perform a surgical scene analysis to detect the beginning and ending of a white balancing operation when endoscope camera module 102 is outside of the patient's body. Video-image processing module 114 is also configured to generate a “POWER-ON” signal 130-3 for turning on light source 106 in light source module 104 when the beginning of a white balancing operation is detected, or to generate a “POWER-OFF” signal 130-4 for turning off light source 106 in light source module 104 when the ending of a white balancing operation is detected. Light-source control subsystem 110 is configured to output the control signals 130-3 and 130-4 generated by video-image processing module 114 to light source module 104. Upon receiving a control signal 130, power control unit 108 of light source module 104 is configured to immediately turn on light source 106 for endoscope camera module 102 if the POWER-ON signal 130-3 is received or immediately turn off light source 106 for endoscope camera module 102 if the POWER-OFF signal 130-4 is received. Note that video-image processing module 114 does not need to be engaged when endoscope camera module 102 is inside the patient's body. In some embodiments, video-image processing module 114 becomes engaged after video-image processing module 112 detects the first transitional event, and becomes disengaged after video-image processing module 112 detects the second transitional event.
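For illustration only, the four control signals and the behavior of power control unit 108 can be summarized as in the following sketch; the enum names and dispatch function are assumptions introduced for clarity, not disclosed components.

```python
from enum import Enum

class ControlSignal(Enum):
    POWER_OFF_130_1 = 1  # first transitional event: inside -> outside
    POWER_ON_130_2  = 2  # second transitional event: outside -> inside
    POWER_ON_130_3  = 3  # beginning of white balancing detected
    POWER_OFF_130_4 = 4  # ending of white balancing detected

def power_control_unit(signal, light):
    """Immediately applies a received control signal to the light source."""
    if signal in (ControlSignal.POWER_ON_130_2, ControlSignal.POWER_ON_130_3):
        light.turn_on()
    else:
        light.turn_off()
```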
In some embodiments, video-processing module 112 can include a machine-learning-based or a computer-vision-based statistical classifier trained to distinguish between scenes from inside a patient's body and scenes from outside of the patient's body. Video-processing module 112 can apply this machine-learning or computer-vision-based statistical classifier to real-time surgical video images 120 captured by endoscope camera module 102 to classify a real-time scene depicted in surgical video images 120 as either (1) inside the patient's body or (2) outside of the patient's body. In some embodiments, the statistical classifier is configured to classify/label each received video image 120 as being either (1) inside the patient's body or (2) outside of the patient's body. Note that the first transitional event or the second transitional event is a continuous action represented by a consecutive sequence of video images. Hence, after a sequence of video images 120 has been classified/labeled, a transitional event can be detected if the sequence of video images 120 contains either the first transitional event or the second transitional event. While it is possible to detect the first and second transitional events based on just one or a few classified/labeled video frames, generating decisions based on a sequence of labeled video images 120 allows for identifying correlations among the sequence of video images to facilitate generating a more accurate decision on whether the first or second transitional event exists in the sequence of video images.
In some embodiments, if endoscope camera module 102 is initially inside the patient's body (and light source 106 is turned on) and the statistical classifier has classified one or more newly received surgical video images 120 as being outside of the patient's body, the first transitional event can be detected and video-processing module 112 immediately generates the “POWER-OFF” control signal 130-1. Alternatively, if endoscope camera module 102 is initially inside the patient's body (and light source 106 is turned on) and the statistical classifier has classified one or more newly received surgical video images 120 as being inside the patient's body, no transitional event has occurred and video-processing module 112 continues monitoring the state of endoscope camera module 102 without generating any control signals 130. Note that by detecting the first transitional event, video-processing module 112 can detect the moment endoscope camera module 102 is brought to the outside of the patient's body with minimal latency to immediately cause the light source to be turned off.
Similarly, if endoscope camera module 102 is initially outside of the patient's body (and light source 106 is turned off) and the statistical classifier has classified one or more newly received surgical video images 120 as being inside the patient's body, the second transitional event can be detected and video-processing module 112 immediately generates the “POWER-ON” control signal 130-2. Alternatively, if endoscope camera module 102 is initially outside of the patient's body (and light source 106 is turned off) and the statistical classifier has classified one or more newly received surgical video images 120 as being outside of the patient's body, no transitional event has occurred and video-processing module 112 continues monitoring the state of endoscope camera module 102 without generating any control signals 130.
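By way of example, one simple way to derive transitional events from a sequence of per-frame labels is a majority vote over a sliding window, as sketched below; the window size, threshold, and assumed initial inside-the-body state are illustrative choices rather than disclosed parameters.

```python
from collections import deque

def detect_transitions(frame_labels, window=5, threshold=4):
    """Yield transitional events from a stream of per-frame labels
    (0 = inside, 1 = outside). A state change is declared only when a
    clear majority of the last `window` labels agree, which suppresses
    single-frame classification errors."""
    recent = deque(maxlen=window)
    state = 0  # assumed initial state: inside the patient's body
    for label in frame_labels:
        recent.append(label)
        if len(recent) < window:
            continue
        outside_votes = sum(recent)
        if state == 0 and outside_votes >= threshold:
            state = 1
            yield "first_transitional_event"   # generate POWER-OFF signal
        elif state == 1 and outside_votes <= window - threshold:
            state = 0
            yield "second_transitional_event"  # generate POWER-ON signal
```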
Note that another important application of the disclosed light-source control subsystem 110 is that, anytime endoscope camera module 102 is taken out of the patient's body, video-processing module 112 automatically causes light source 106 to be turned off. As a result, even if endoscope camera module 102 is then pointed at a whiteboard in the OR with sensitive information or another object in the OR associated with a privacy concern, the sensitive information cannot be captured clearly or identified in the dark images captured by endoscope camera module 102 when light source 106 is off.
In some embodiments, prior to using the proposed statistical classifier to classify surgical video images 120, the statistical classifier is trained based on a large number of surgical videos containing the above-described first and second transitional events. More specifically, a large number of surgical videos containing the first and second transitional events are first gathered. The gathered surgical videos can include actual surgical procedure videos performed by surgeons. Furthermore, the gathered surgical videos can include artificially generated procedure videos that are created to include the first and second transitional events. In some embodiments, each training video can contain sufficient video frames depicting the endoscope being both inside and outside the patient's body and the actions of transferring the endoscope from the inside/outside to the outside/inside of the patient's body (i.e., the first and second transitional events).
Next, for each of the training surgical videos, each of the video images is annotated/labeled as either a first class of image of inside a patient's body or a second class of image of outside of the patient's body. In some embodiments, prior to annotating the training surgical video, the training surgical video can be segmented by a phase segmentation engine into a set of video segments based on the first and second transitional events, and each video segment in the set of video segments can belong to one of the two phases, i.e., an inside phase when the endoscope is inside the patient's body or an outside phase when the endoscope is entirely outside of the patient's body. A person skilled in the art would appreciate that a surgical video including multiple OOB events can contain multiple video segments of the inside phase and multiple video segments of the outside phase. Next, for each video segment of the set of video segments of the inside phase, video images within the video segment can be labeled as the first class of image of inside the patient's body; and for each video segment of the video segments of the outside phase, video images within the video segment can be labeled as the second class of image of outside of the patient's body.
Once all the training videos have been properly labeled, the statistical classifier can be trained based on the labeled set of surgical videos so that the trained statistical classifier can be used to distinguish a given input video frame as the first class of image or the second class of image. Moreover, the statistical classifier can also be trained to distinguish a transitional event within a sequence of video images 120 as being the first transitional event or the second transitional event based on the ability to correctly label each image frame as being the first class of image or the second class of image. In some embodiments, the proposed statistical classifier can include a feature-based model which can be used to distinguish an input image as being the first class of image or the second class of image based on a set of user-specified image features, such as the statistics of the color, the texture, the contrast, among other features. In other embodiments, the proposed statistical classifier can include a deep learning model that does not require the user to manually identify the features.
In various embodiments, the proposed statistical classifier can include a machine-learning model constructed based on a regression model, a deep neural network-based model, a support vector machine, a decision tree, a Naive Bayes classifier, a Bayesian network, or a k-nearest neighbors (KNN) model. In some embodiments, each of these machine-learning models is constructed based on a convolutional neural network (CNN) architecture, a recurrent neural network (RNN) architecture, or another form of deep neural network (DNN) architecture.
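By way of illustration, a minimal CNN-based binary classifier of the kind described above might be defined in PyTorch as follows; the specific architecture is an assumption chosen for brevity rather than a disclosed design.

```python
import torch
import torch.nn as nn

class ScenePhaseCNN(nn.Module):
    """Small binary CNN: inside-body vs. outside-body frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # class 0: inside, class 1: outside

    def forward(self, x):  # x: (N, 3, H, W) normalized frames
        return self.classifier(self.features(x).flatten(1))

# Example usage on a dummy frame batch:
# logits = ScenePhaseCNN()(torch.randn(1, 3, 224, 224))
```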
As mentioned above, video-image processing module 114 is configured to generate a “POWER-ON” control signal 130-3 for light source module 104 when the beginning of a white balancing operation is detected, or to generate a “POWER-OFF” control signal 130-4 for light source module 104 when the ending of a white balancing operation is detected. In some embodiments, video-processing module 114 can include a machine-learning-based or a computer-vision-based statistical classifier trained to detect a white balancing object. Some applicable white balancing objects can include a white balancing card, a white sponge, a whiteboard, a piece of white paper, a piece of white cloth, and any other suitable white medium. In some embodiments, the statistical classifier is configured to classify/label each received video image 120 as being either (1) a white balancing object or (2) a non-white balancing object.
In some embodiments, if endoscope camera module 102 is outside of the patient's body during an OOB event with light source 106 turned off, and the statistical classifier has classified one or more newly received surgical video images 120 as a white balancing object, video-processing module 114 immediately generates the “POWER-ON” control signal 130-3 to trigger the white balancing operation. Alternatively, if endoscope camera module 102 is initially pointed at a white balancing object with light source 106 turned on for white balancing, and the statistical classifier has classified one or more newly received surgical video images 120 as being a non-white balancing object, video-processing module 114 immediately generates the “POWER-OFF” control signal 130-4 to terminate the white balancing operation. In some embodiments, prior to using the proposed statistical classifier to classify surgical video images 120 as being either a white balancing object or a non-white balancing object, the statistical classifier can be trained on a different data set to identify different types of white balancing objects and non-white-balancing objects.
In some embodiments, the disclosed automatic light-source control subsystem 110 can be implemented in computer software, in electronic hardware, or in combinations of both, and as a separate module from light source module 104 (as shown in
Process 200 may begin by receiving a sequence of real-time video images captured by the endoscope camera module when the light source is turned on (step 202). In particular, the endoscope camera module is initially inside the patient's body. Next, process 200 analyzes a sequence of newly received video images to determine whether the endoscope camera module is inside the patient's body or outside of the patient's body (step 204). In some embodiments, process 200 uses a statistical classifier to classify each image in the sequence of newly received video images as either a first class of image of being inside the patient's body or a second class of image of being outside of the patient's body. Process 200 can determine a state of the endoscope camera module as being inside the patient's body or outside of the patient's body based on the classified sequence of images. More specifically, process 200 can determine the state of the endoscope camera module as being outside of the patient's body by detecting the first transitional event within the sequence of newly received video images.
If process 200 determines that the endoscope camera module is still inside the patient's body at step 204, process 200 returns to step 202 and continues monitoring the state of the endoscope camera module. Otherwise, if process 200 determines that the endoscope camera module is outside of the patient's body (i.e., the first transitional event is detected), process 200 generates a control signal for turning off the light source for safety and other reasons (step 206). The control signal is used by a light source module to immediately turn off the light source to prevent accidental eye damage.
Next, when the endoscope camera module is outside of the patient's body with the light source turned off, process 200 analyzes a real-time video image to determine whether the endoscope camera module is pointed at a white balancing object (step 208). For example, process 200 can analyze the video image to identify a predefined white balancing object within the video image. If not, process 200 can return to step 208 and continue to look for the cue indicating the beginning of the white balancing operation. However, if a white balancing object is detected at step 208, process 200 generates another control signal for turning on the light source for white balancing (step 210).
Next, when the endoscope camera module is outside of the patient's body with the light source turned on, process 200 continues to analyze real-time video images to determine whether the white balancing operation is completed (step 212). If not, process 200 can return to step 212 and continue to look for the cue indicating the ending of the white balancing operation. Otherwise, the white balancing operation is completed and process 200 generates yet another control signal for turning off the light source (step 214). Process 200 subsequently analyzes another sequence of newly received video images to determine if the endoscope camera module remains outside of the patient's body (step 216). Similarly, process 200 can use the statistical classifier to classify each image in the sequence of newly received video images as either the first class of image of being inside the patient's body or the second class of image of being outside of the patient's body. More specifically, process 200 can determine the state of the endoscope camera module as being inside the patient's body by detecting the second transitional event within the sequence of newly received video images.
If process 200 determines that the endoscope camera module remains outside of the patient's body at step 216, process 200 returns to step 216 to continue monitoring the state of the endoscope camera module. Otherwise, if process 200 determines that the endoscope camera module is inside the patient's body at step 216 (i.e., the second transitional event is detected), process 200 generates yet another control signal for turning on the light source (step 218). The control signal is used by the light source module to immediately turn on the light source to resume endoscopic imaging.
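Putting steps 202-218 together, process 200 can be viewed as a four-phase state machine, as in the following illustrative sketch; classify_scene, classify_wb, light, and camera are assumed stub interfaces rather than disclosed components.

```python
def process_200(frames, classify_scene, classify_wb, light, camera):
    """End-to-end sketch of steps 202-218. classify_scene returns "inside"
    or "outside"; classify_wb returns True when a white balancing object
    is in view."""
    phase = "inside"                           # scope starts inside, light on
    for frame in frames:
        if phase == "inside":                  # steps 202-204
            if classify_scene(frame) == "outside":
                light.turn_off()               # step 206
                phase = "awaiting_wb"
        elif phase == "awaiting_wb":           # step 208
            if classify_wb(frame):
                light.turn_on()                # step 210
                camera.run_auto_white_balance()
                phase = "wb_running"
        elif phase == "wb_running":            # step 212
            if not classify_wb(frame):         # white balancing completed
                light.turn_off()               # step 214
                phase = "outside"
        elif phase == "outside":               # step 216
            if classify_scene(frame) == "inside":
                light.turn_on()                # step 218
                phase = "inside"
```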
Process 300 begins by collecting a large number of training surgical videos, wherein each training surgical video contains one or more OOB events (step 302). As described above, each of the OOB events typically includes the first transitional event (i.e., taking the endoscope out of the patient's body) and the second transitional event (i.e., putting the endoscope back into the patient's body). Note that the collected training surgical videos can include both actual surgical procedure videos performed by surgeons and artificially generated procedure videos created specifically to provide training data for training the statistical classifier.
Process 300 next segments each of the training surgical videos into a set of video segments based on the associated one or more OOB events, wherein each video segment in the set of video segments belongs to either the inside phase when the endoscope is inside the patient's body or the outside phase when the endoscope is outside of the patient's body (step 304). Note that the set of transitional events within a surgical video provides the natural phase boundaries for segmenting the surgical video into the inside phases and the outside phases. Also note that a surgical video containing multiple OOB events can be segmented into a set of corresponding video segments that is composed of multiple inside phases and multiple outside phases.
Next, for each video segment of the inside phase from a segmented training surgical video, process 300 labels video images within the video segment as the first class of image of inside the patient's body; and for each video segment of the outside phase of the segmented surgical video, process 300 labels video images within the video segment as the second class of image of outside of the patient's body (step 306). After all the training surgical videos have been properly labeled, process 300 proceeds to train the statistical classifier based on the labeled set of training surgical videos so that the trained statistical classifier can be used to distinguish a given input video frame as the first class of image or the second class of image (step 308).
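By way of example, steps 302-308 might be organized as in the following sketch, where segment_by_oob, extract_frames, and fit are assumed helper interfaces standing in for the phase segmentation engine and the classifier training step.

```python
def process_300(training_videos, segment_by_oob, extract_frames, fit):
    """Sketch of steps 302-308. segment_by_oob splits a video into
    ("inside" | "outside", segment) pairs at the transitional-event
    boundaries; fit trains the statistical classifier."""
    images, labels = [], []
    for video in training_videos:                     # step 302
        for phase, segment in segment_by_oob(video):  # step 304
            label = 0 if phase == "inside" else 1     # step 306
            for frame in extract_frames(segment):
                images.append(frame)
                labels.append(label)
    return fit(images, labels)                        # step 308
```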
Bus 502 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of computer system 500. For instance, bus 502 communicatively connects processing unit(s) 512 with ROM 510, system memory 504, and permanent storage device 508.
From these various memory units, processing unit(s) 512 retrieves instructions to execute and data to process in order to execute various processes described in this patent disclosure, including the above-described processes of automatically turning on/off a light source of an endoscope camera module during a surgical procedure and training the disclosed statistical classifier for classifying surgical video images as being either inside or outside of a patient's body in conjunction with
ROM 510 stores static data and instructions that are needed by processing unit(s) 512 and other modules of the computer system. Permanent storage device 508, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when computer system 500 is off. Some implementations of the subject disclosure use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as permanent storage device 508.
Other implementations use a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) as permanent storage device 508. Like permanent storage device 508, system memory 504 is a read-and-write memory device. However, unlike storage device 508, system memory 504 is a volatile read-and-write memory, such as a random access memory. System memory 504 stores some of the instructions and data that the processor needs at runtime. In some implementations, various processes described in this patent disclosure, including the above-described processes of automatically turning on/off a light source of an endoscope camera module during a surgical procedure and training the disclosed statistical classifier for classifying surgical video images as being either inside or outside of a patient's body in conjunction with
Bus 502 also connects to input and output devices 514 and 506. Input devices 514 enable the user to communicate information to and select commands for the computer system. Input devices 514 can include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). Output devices 506 enable, for example, the display of images generated by computer system 500. Output devices 506 can include, for example, printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some implementations include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed in this patent disclosure may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable-logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.
In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in processor-executable instructions that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer-program product.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any disclosed technology or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular techniques. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
This patent application is a continuation of, and hereby claims the benefit of priority under 35 U.S.C. § 120 to co-pending U.S. patent application Ser. No. 16/440,552, filed on 13 Jun. 2019 (Attorney Docket No. 10231P1077US), entitled, “Method and System for Automatically Turning On/Off a Light Source for an Endoscope During a Surgery,” by inventor Jagadish Venkataraman. The above-listed application is hereby incorporated by reference as a part of this patent document.