This application claims the benefit of European Patent Application Serial No. 21191153.2, filed Aug. 12, 2021, the entire disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to the field of video analytics systems and methods for use in operating rooms.
More and more different technologies are being used in operating rooms to aid medical professionals and improve patient care and outcomes. However, integrating these technologies is difficult, as different tools, equipment and systems may be incompatible or require different inputs and outputs, presenting a challenge when trying to maximize the effectiveness of an operating room environment. Furthermore, many tools, devices and pieces of equipment also generate data. This data is frequently device and manufacturer specific, further hindering integration. For example, for surgery utilizing a robotic surgical instrument to perform portions of the surgery, the robotic surgical instrument needs to know the precise location of the operating room table and/or of the patient relative to that operating table. Positional data may be output by the operating room table, but this is in the form of changes in a step motor configured to move the table. Such positional data provides little benefit and cannot be used by, for example, a robotic surgical instrument unless that instrument has been calibrated prior to use and localized within the operating room. This is a time-consuming process, made more undesirable by the fact that an operating room may host many different surgical procedures, each requiring different equipment, and hence requiring new localization and calibration before each use.
The present disclosure includes one or more of the features recited in the appended claims and/or the following features which, alone or in any combination, may comprise patentable subject matter.
According to a first aspect of the present disclosure, a method of monitoring objects within an operating room is provided. The method comprises receiving first image data captured by at least one image capture device; determining at least a subset of the first image data, the at least a subset of the first image data relating to a region of interest within the operating room; and determining, in dependence on the first image data, if the region of interest within the operating room is at least partially obstructed within the subset of the first image data. Upon determining that the region of interest is at least partially obstructed, the method further comprises receiving second image data captured by the at least one image capture device; determining at least a subset of the second image data, the at least a subset of the second image data relating to the region of interest; and outputting the at least a subset of the second image data, the region of interest being less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data.
Accordingly, a method is provided that enables an obstruction within an operating room, which is obstructing the view of an image capture device, to be overcome.
The region of interest may be, or may contain, one or more objects, and the method may comprise identifying at least one of the one or more objects. Determining if the region of interest within the operating room is at least partially obstructed may comprise determining if the at least one of the one or more objects is at least partially obstructed. This may be done by obtaining a model (such as a 3D volumetric representation) of the region of interest or of one or more objects. This model can then be deformed, for example by projecting vectors defining the 3D volumetric representation onto a plane of the first image data. The object, as identified in the first image data, and the deformed model can then be compared to generate a relational score between the two. If this score is below a threshold value, then it may be determined that the region of interest is at least partially obstructed.
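Purely by way of illustration, the following Python sketch shows one possible way of computing such a relational score, assuming the deformed (projected) model and the object detected in the first image data are both available as binary masks in the image plane; the function names and the 0.6 threshold are illustrative assumptions only.

```python
import numpy as np

def relational_score(projected_model_mask: np.ndarray,
                     detected_object_mask: np.ndarray) -> float:
    """Compare the model of the region of interest, projected (deformed)
    into the image plane, with the object as identified in the image data.
    Here the score is the fraction of the projected model that is actually
    visible in the detection (an intersection-over-model-area measure)."""
    model = projected_model_mask.astype(bool)
    detected = detected_object_mask.astype(bool)
    model_area = model.sum()
    if model_area == 0:
        return 0.0
    return float(np.logical_and(model, detected).sum()) / float(model_area)

def is_partially_obstructed(projected_model_mask: np.ndarray,
                            detected_object_mask: np.ndarray,
                            threshold: float = 0.6) -> bool:
    """The region of interest is treated as at least partially obstructed
    when the relational score falls below the threshold."""
    return relational_score(projected_model_mask, detected_object_mask) < threshold
```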
Optionally, the method further comprises, upon determining that the region of interest is at least partially obstructed, the steps of comparing the at least a subset of the first image data and the at least a subset of the second image data to determine whether the region of interest is less obstructed in the at least a subset of the first image data or whether the region of interest is less obstructed in the at least a subset of the second image data. This may be achieved by generating a relational score from the at least a subset of the second image data in the same manner as described above for the first image data and then comparing the relational scores for the first image data and the second image data.
At least a subset of the second image data is output if it is determined that the region of interest is less obstructed within the at least a subset of the second image data; whilst at least a subset of the first image data is output if it is determined that the region of interest is less obstructed within the at least a subset of the first image data. In this way, if multiple image capture devices have their views partially obstructed, then the most preferable image capture device, being the one with the least obstructed view, can be determined.
Optionally, the at least one image capture device comprises a first image capture device and wherein both the first image data and the second image data are captured by the first image capture device, the first image data being captured by the first image capture device located at a first position, the second image data being captured by the first image capture device located at a second, different, position. Where both the first image data and the second image data are captured by the first image capture device, the method may further comprise moving the first image capture device iteratively from a first position to a second position until the region of interest is sufficiently less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data. Moving an image capture device in this manner, such that its view from the second position is less obstructed, enables obstructions to be overcome even when only a single image capture device is used, or when multiple image capture devices are used but none of them has an acceptably unobstructed view.
A movement trajectory from the first position to the second position can be generated, and at least an estimate of the three-dimensional position of an obstruction object can be determined, based on the first image data. The obstruction object is an object obstructing the region of interest from the first position. The movement trajectory can be generated using a look up table, based on the location of the region of interest, the location of the first position and the three-dimensional position of the obstruction object. A candidate position to give an unobstructed view may be determined from the look up table, and a trajectory may be determined based on the first position and the candidate position, for example using the look up table.
The process may be iterative. The image capture device may move a distance towards the candidate position along the movement trajectory from the first position to a second position along the movement trajectory. Scores may then be compared between the first position and the second position, the scores representing an amount by which the region of interest is obstructed. For example, the relational scores discussed above may be calculated at the first and second positions, and if they indicate that the region of interest is less obstructed from the second position then the next iteration may continue to move the image capture device along the movement trajectory. If the scores indicate that, after moving to the second position, the region of interest is more obscured than from the first position, an alternative trajectory can be computed for the next iteration of the method.
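As a minimal sketch of such an iterative movement, assuming hypothetical helpers capture_score (returning the relational score from a given position), move_camera_to, and compute_alternative (returning a new candidate position when the current trajectory makes the view worse), one possible implementation in Python is:

```python
import numpy as np

def iteratively_reposition(first_position,
                           candidate_position,
                           capture_score,        # hypothetical: position -> relational score
                           move_camera_to,       # hypothetical: position -> None
                           compute_alternative,  # hypothetical: position -> new candidate
                           step: float = 0.1,
                           max_iterations: int = 20):
    """Move the image capture device in steps along the movement trajectory
    from the first position towards the candidate position, keeping each
    step only if the region of interest becomes less obstructed (i.e. the
    relational score does not decrease)."""
    position = np.asarray(first_position, dtype=float)
    candidate = np.asarray(candidate_position, dtype=float)
    score = capture_score(position)
    for _ in range(max_iterations):
        direction = candidate - position
        distance = np.linalg.norm(direction)
        if distance < step:
            break
        next_position = position + step * direction / distance
        move_camera_to(next_position)
        next_score = capture_score(next_position)
        if next_score >= score:
            # Less obstructed (or no worse): continue along this trajectory.
            position, score = next_position, next_score
        else:
            # More obscured than before: return and try an alternative trajectory.
            move_camera_to(position)
            candidate = np.asarray(compute_alternative(position), dtype=float)
    return position
```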
Optionally, the step of moving the first image capture device to the second position such that the region of interest is less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data comprises determining a volume occupied by the region of interest based on the first image data; determining a three-dimensional position of an obstruction object based on the first image data; determining a second position from which a field of view of the region of interest from the first image capture device will not be obstructed, or will be obstructed to a lesser extent than the field of view of the region of interest from the first image capture device at the first position, based on the volume occupied by the region of interest and the three-dimensional position of the obstruction object; and moving the first image capture device from the first position to the second position. This process may be performed iteratively. By determining a position, size and shape of the object that is obstructing the view of the image capture device, a position from which an unobstructed view can be achieved can be determined. This enables a suitable location to be chosen to move the image capture device to, rather than having to rely on chance or trial and error, reducing the time before the camera has an unobstructed view whilst also minimizing any distraction or risk caused by the movement of the image capture device in the operating room.
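One way the second position might be determined geometrically, under the simplifying assumptions that the obstruction object is approximated by a bounding sphere and that a set of candidate camera positions is available, is sketched below; the names are illustrative only.

```python
import numpy as np

def ray_intersects_sphere(origin, target, centre, radius) -> bool:
    """True if the straight line segment from origin to target passes
    through the sphere (centre, radius) approximating the obstruction."""
    origin, target, centre = map(np.asarray, (origin, target, centre))
    d = target - origin
    length = np.linalg.norm(d)
    if length == 0:
        return False
    d = d / length
    t = np.clip(np.dot(centre - origin, d), 0.0, length)
    closest = origin + t * d
    return np.linalg.norm(centre - closest) <= radius

def choose_second_position(candidate_positions, roi_centre,
                           obstruction_centre, obstruction_radius):
    """Return the first candidate position whose view of the region of
    interest is not blocked by the obstruction object, or None if every
    candidate is blocked."""
    for position in candidate_positions:
        if not ray_intersects_sphere(position, roi_centre,
                                     obstruction_centre, obstruction_radius):
            return position
    return None
```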
Moving the first image capture device may comprise one or more of a translation, rotation, or pivoting of the first image capture device.
In an alternative, the at least one image capture device of the first aspect of the present disclosure comprises a first image capture device and a second image capture device, and wherein the first image data is captured by the first image capture device and the second image data is captured by the second image capture device. Using two (or more) image capture devices means that if one has its view obstructed then a second image capture device can instead be used having a less obstructed view.
Optionally, the method comprises determining a volume of the region of interest based on the first image data; determining a three-dimensional position of an obstruction object based on the first image data; and determining whether image data of the region of interest from one or more of the at least two image capture devices other than the first image capture device will be less obstructed than the subset of the first image data relating to the region of interest from the first image capture device based on the volume of the region of interest, the three-dimensional position of the obstruction object, and known positions of the at least two image capture devices. Upon determining that the image data of the region of interest from one or more of the at least two image capture devices other than the first image capture device will be less obstructed than the subset of the first image data relating to the region of interest from the first image capture device, that image capture device is determined to be the second image capture device. In this manner, the second image capture device, being an image capture device having a less obstructed view than the first image capture device, can be determined. This ensures that time and effort are not required to manually check the views from different image capture devices to find one with a less obstructed view; rather, this can be determined automatically.
The first image capture device and the second image capture device are preferably located in different positions, such that they have different fields of view. Having image capture devices in different positions, with different fields of view, increases the likelihood that if one of them has their view obstructed then the other(s) will have a less obstructed view.
Optionally, the first image capture device is of a first type and the second image capture device is of a second type, such that they produce different types of image data.
Different types of image capture device, producing different types of image data, are obstructed by different objects. Whilst an object may be opaque to one image capture device (e.g., at one wavelength of light), it may be transparent to another image capture device (e.g., at a second wavelength of light). Accordingly, using different types of image capture devices increases the likelihood that if one of them has their view obstructed then the other(s) will have a less obstructed view.
Optionally, the first image capture device is a visible wavelength camera and the second image capture device is an infrared wavelength camera. This combination of image capture devices can, for example, help locate a patient beneath a sheet.
Optionally, the method further comprises the steps of combining the first image data and the second image data to generate composite image data and outputting the composite image data. The composite image data relates to the region of interest, with the region of interest being less obstructed in the composite image data than in either the first image data or the second image data.
Combining the first image data with the second image data, for example overlaying the two sets of image data, to generate composite image data and outputting the composite image data can provide a more complete view of the region of interest, and can help to overcome obstructions. This may particularly be the case when there are multiple objects obstructing the region of interest, or a large object obstructing the region of interest, such that neither the first image data nor the second image data relate to an unobstructed view of the region of interest, but both have different parts of the region of interest obstructed.
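A minimal sketch of such a combination, assuming the first and second image data are already registered (pixel-aligned) frames of equal shape and that a mask of the obstructed pixels in the first image data is available, might look as follows; the names are illustrative only.

```python
import numpy as np

def composite(first_image: np.ndarray,
              second_image: np.ndarray,
              obstruction_mask: np.ndarray) -> np.ndarray:
    """Overlay two registered views of the region of interest: pixels that
    are obstructed in the first image data are filled from the second
    image data, so the region of interest is less obstructed in the
    composite than in either input on its own."""
    assert first_image.shape == second_image.shape
    mask = obstruction_mask.astype(bool)
    composite_image = first_image.copy()
    composite_image[mask] = second_image[mask]
    return composite_image
```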
Optionally, the method further comprises the step of capturing, by the second image capture device, the second image data.
Optionally, the method further comprises the step of capturing, by the first image capture device, the first image data.
Optionally, one or both of the region of interest and the obstruction object are identified using image recognition techniques. The use of image recognition techniques can allow the method to be automatically implemented by a computer or computer system.
Optionally, the image data comprises one or more of: visible spectrum data; thermal or infrared data; depth or time-of-flight data, such as radar data, LIDAR data, and ultrasound data; and stereoscopic image data.
The region of interest may not be static with respect to the operating room. In particular, the region of interest may correspond to one or more of: an object; a volume around an object; a person; and a portion of a person. By having a non-static region of interest, objects can be tracked around an operating room throughout a procedure, and it can be ensured that, despite this movement, suitable image capture data is collected by the image capture devices.
Alternatively, the region of interest may be a static region with respect to the operating room. Having the region of interest be static can reduce the processing power required compared to monitoring a moving region of interest, and further the image capture devices can be positioned taking into account the static position of the region of interest to maximize their view of it.
The method may comprise carrying out the steps for a plurality of regions of interest.
According to a second aspect of the present disclosure, a system for monitoring objects within an operating room is provided. The system comprises a first image capture device configured to capture first image data; and one or more computing devices configured to carry out the method of the first aspect of the present disclosure.
Optionally, the system further comprises a second image capture device configured to capture second image data. Using two (or more) image capture devices means that if one has its view obstructed then a second image capture device can instead be used having a less obstructed view.
The first image capture device and the second image capture device are preferably located in different positions. Having image capture devices in different positions, with different fields of view, increases the likelihood that if one of them has their view obstructed then the other(s) will have a less obstructed view.
Optionally, the first image capture device is of a first type and the second image capture device is of a second type, the second type being different from the first type.
Different types of image capture device, producing different types of image data, may be obstructed by different objects. Whilst an object may be opaque to one image capture device (e.g., at one wavelength of light), it may be transparent to another image capture device (e.g., at a second wavelength of light). Accordingly, using different types of image capture devices increases the likelihood that if one of them has their view obstructed then the other(s) will have a less obstructed view.
Optionally, the first image capture device and the second image capture device are of a type selected from visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
According to a third aspect of the present disclosure, a computer program is provided. When run on one or more computing devices, the computer program is configured to cause the one or more computing devices to perform the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, a non-transitory memory having stored thereon the computer program of the third aspect is provided.
According to a fifth aspect of the present disclosure, a method of monitoring objects within an operating room is provided. The method comprises receiving image data captured by at least one image capture device; identifying a first object from the image data; identifying a second object from the image data; determining an interaction state between the first object and the second object; and outputting the interaction state.
Using image capture devices to identify different objects provides a versatile system that can easily be adapted to monitor different objects within an operating room, requiring little in the way of modification for different scenarios. For example, because the image capture devices use the visual appearance of objects, rather than requiring different parameters to be manually input for different objects, the system can easily work with a variety of different objects.
Optionally, the method further comprises determining a volume occupied by the first object based on the image data; and determining a volume occupied by the second object based on the image data. The interaction state is then determined based at least in part on the volume occupied by the first object and the volume occupied by the second object.
Determining a volume can be useful in determining an interaction state because it allows a greater range of spatial calculations to be performed, including determining whether an object will fit within another, determining whether objects will collide, etc.
The interaction states may include information as to whether the first object and the second object are compatible. Determining whether objects are compatible based on the visual information means that potential problems and damage, e.g., due to incompatible objects being used together, can be avoided. It may be determined that the first object and the second object are not compatible if the first object cannot fit within or around the second object based on the volume occupied by the first object and the volume occupied by the second object. Using the volume information in this way can prevent costly collisions and other problems if one object is used with another and the two do not fit together.
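Purely as an illustration of such a volume-based compatibility check, assuming each occupied volume is approximated by an axis-aligned cuboid described by its dimensions, a sketch in Python might be:

```python
def fits_within(inner_dims, outer_dims) -> bool:
    """Crude check of whether the volume occupied by one object (inner),
    given as (length, width, height) in metres, can fit within the volume
    occupied by another object (outer)."""
    return all(i <= o for i, o in zip(sorted(inner_dims), sorted(outer_dims)))

def compatible(first_volume_dims, second_volume_dims) -> bool:
    """The objects are treated as not compatible if the first cannot fit
    within (or around) the second, based on their occupied volumes."""
    return (fits_within(first_volume_dims, second_volume_dims)
            or fits_within(second_volume_dims, first_volume_dims))
```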
The method may further comprise determining a velocity of the first object from the image data; and determining a velocity of the second object from the image data. In this case, determining an interaction state between the first object and the second object comprises determining whether the first object and the second object will collide, based on the velocity of the first object, the velocity of the second object, the volume occupied by the first object, and the volume occupied by the second object.
Determining velocities and predicting collisions can avoid costly, and potentially dangerous, collisions. Using visual information from image capture devices means that this can be done for any object that can be seen by the image capture device, and that calibration or other complex and time-consuming steps need not be performed, greatly increasing the versatility of the method.
Outputting the interaction state may comprise outputting an alert if it is determined that the first object and the second object will collide. Outputting an alert can enable the collision to be prevented.
Outputting the interaction state may alternatively, or in addition, comprise outputting a command configured to adjust the velocity of either or both of the first object and the second object to prevent the first object and the second object from colliding. In this way, objects that are being controlled via a robotic arm or the like can be prevented from colliding. Using image data means that a robotic arm or the like could also be prevented from colliding with objects that it would otherwise not be aware of, such as people. Indeed, it allows objects to be detected and avoided, including objects not previously calibrated for, greatly increasing the versatility of the system. Adjusting the velocity of either or both of the first object or the second object may comprise causing either or both of the first object or the second object to stop moving. Stopping either or both of the first object and the second object is an effective way of preventing collisions.
The interaction state is preferably output a predetermined time before a collision will occur, based on the velocity of the first object, the velocity of the second object, the volume occupied by the first object, and the volume occupied by the second object. Outputting the interaction state, which may include an alert (such as a command) to prevent a collision, at a predetermined time before the collision can occur means that objects are not unnecessarily stopped far in advance of a predicted collision when such a collision is unlikely to occur, for example, because it is likely that one of the objects will change their velocity prior to the collision without intervention.
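A minimal sketch of such a collision prediction, approximating each object's occupied volume by a bounding sphere and assuming the velocities determined from the image data remain constant, is shown below; the two-second lead time and the names are illustrative assumptions only.

```python
import numpy as np

def time_to_collision(p1, v1, r1, p2, v2, r2):
    """Estimate when two objects, each approximated by a bounding sphere
    (position p, velocity v, radius r), will first touch, assuming constant
    velocities. Returns None if no collision is predicted."""
    p = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    r = r1 + r2
    a = np.dot(v, v)
    b = 2.0 * np.dot(p, v)
    c = np.dot(p, p) - r * r
    if c <= 0.0:
        return 0.0                     # volumes already overlap
    if a == 0.0:
        return None                    # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                    # paths never come close enough
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

def should_output_interaction_state(p1, v1, r1, p2, v2, r2,
                                    lead_time_s: float = 2.0) -> bool:
    """Output the interaction state (e.g. an alert or a stop command) only
    a predetermined time before the predicted collision."""
    t = time_to_collision(p1, v1, r1, p2, v2, r2)
    return t is not None and t <= lead_time_s
```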
The volume of the first object may be one of: a parallelepiped, such as a cuboid; a sphere; and a cylinder. The volume of the second object may be one of: a parallelepiped, such as a cuboid; a sphere; and a cylinder.
Using a parallelepiped such as a cuboid, a sphere or a cylinder as representative of the volume of either or both of the first and second objects can reduce the computing power associated with determining an exact (often highly irregular) volume of an object, without greatly detracting from the accuracy of the method.
The first object may be identified using image recognition techniques. The second object may be identified using image recognition techniques. The use of image recognition techniques can allow the method to be automatically implemented by a computer or computer system.
The method preferably further comprises the step of capturing, by one or more image capture devices, the image data.
The image data preferably comprises one or more of: visible spectrum data; thermal or infrared data; depth or time-of-flight data, such as radar data, LIDAR data, and ultrasound data; and stereoscopic image data.
According to a sixth aspect of the present disclosure, a system for monitoring objects within an operating room is provided. The system comprises one or more image capture devices configured to capture image data; and one or more computing devices configured to carry out the method of the fifth aspect.
Optionally, the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
According to a seventh aspect of the present disclosure, a computer program is provided. When run on one or more computing devices, the computer program is configured to cause the one or more computing devices to perform the method of the fifth aspect.
According to an eighth aspect of the present disclosure, a non-transitory memory having stored thereon the computer program of the seventh aspect is provided.
According to a ninth aspect of the present disclosure, a method of monitoring a patient on an operating room table is provided. The method comprises receiving image data captured by at least one image capture device; and determining a position of the patient relative to the operating table in dependence on the image data.
Determining a patient position using image data means that the method does not require any specific operating table, and as such can be employed easily in a wide variety of situations. Furthermore, the method can be used with existing operating tables and even with future tables. In addition, as the system is remote from the operating table, it does not compromise patient outcomes or the effectiveness of the operating table itself.
Optionally, determining a position of the patient relative to the operating table in dependence on the image data comprises determining a position of the patient from the image data; determining a position of the operating table; and determining the position of the patient relative to the operating table based on the position of the patient and the position of the operating table. Determining positions of the patient and the operating table allows an accurate position of the patient to be determined, and means that the position of the operating table and patient relative to other objects need not be known. For example, as the position of the operating table is determined, it does not matter if the position moves between different procedures or the like.
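As a simple illustration, assuming the patient position and the operating table pose are expressed in a common operating-room coordinate frame, the relative position might be computed as follows; the names are illustrative only.

```python
import numpy as np

def patient_position_relative_to_table(patient_position_room,
                                       table_position_room,
                                       table_rotation_room) -> np.ndarray:
    """Express the patient position (determined from the image data, in the
    operating-room frame) in the coordinate frame of the operating table,
    given the table's position and orientation (3x3 rotation matrix)."""
    offset = (np.asarray(patient_position_room, float)
              - np.asarray(table_position_room, float))
    # Rotate the offset into the table's own axes.
    return np.asarray(table_rotation_room, float).T @ offset
```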
The step of determining the position of the operating table may be based on the image data. Determining the position of the operating table based on the image data means that the type of operating table used does not matter, and that the method can easily be used with existing or future operating tables. In particular, no calibration or other information about the operating table is needed. The position of the operating table is preferably determined using image recognition techniques. The use of image recognition techniques can allow the method to be automatically implemented by a computer or computer system.
Alternatively or in addition, the step of determining a position of the operating table comprises receiving position information of the operating table; and determining the position of the operating table from the received position information. Receiving position information from the operating table means the method can integrate and take advantage of many existing operating tables that can provide such information, reducing the computing power needed to determine the position of the operating table from the image data.
The step of determining a position of the patient relative to the operating table may comprise determining a first position of the patient relative to the operating table and determining a second position of the patient relative to the operating table, the second position being determined at a time later than the first position. The method further comprises determining if the patient has moved on the operating table by comparing the first position and the second position. Determining a first and second position allows the position of the patient to be monitored, and for it to be determined if the patient has moved. This can help prevent undesired events from occurring that could be detrimental to patient outcomes, such as a patient sliding, slipping or falling from an operating table.
Optionally, the method further comprises outputting an alert if it is determined that the patient has moved on the operating table. Outputting an alert in this situation means that the problem can be rectified and appropriate action can be taken to ensure that patient outcomes are not compromised. The alert may be output if the patient has moved by at least a threshold amount on the operating table. Outputting an alert if the patient has moved by at least a threshold amount ensures that the method does not output an unnecessary number of alerts, and only outputs an alert when necessary, for example to prevent a patient sliding, slipping or falling from an operating table.
Optionally, the alert is output if the patient has moved out of a predefined region on the operating table. A patient moving out of a predefined region on the operating table, e.g., a central region, can be indicative of the patient sliding, slipping or falling from the operating table. Accordingly, outputting an alert if this is detected can prevent these adverse incidents from occurring.
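By way of a hedged sketch only, assuming patient positions are expressed as (x, y) coordinates in the operating-table frame, a combined movement-threshold and predefined-region check might look like this; the 5 cm threshold and the region extents are illustrative assumptions.

```python
import numpy as np

def patient_movement_alert(first_position, second_position,
                           movement_threshold_m: float = 0.05,
                           table_region_half_extent=(0.25, 0.9)) -> bool:
    """Return True if an alert should be output: the patient has moved on
    the operating table by at least the threshold amount, or has moved out
    of a predefined (here rectangular, table-centred) region.
    Positions are (x, y) in the operating-table frame, in metres."""
    first = np.asarray(first_position, float)
    second = np.asarray(second_position, float)
    moved_too_far = np.linalg.norm(second - first) >= movement_threshold_m
    outside_region = np.any(np.abs(second) > np.asarray(table_region_half_extent))
    return bool(moved_too_far or outside_region)
```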
Optionally, determining a position of the patient on an operating table comprises determining a plurality of sub-positions of the patient based on the image data, the sub-positions of the patient being positions of parts of the patient's body. Determining the positions of the different parts of a patient's body can allow just the parts of interest to be monitored, and also means that the posture, pose or position of the patient can be determined and also monitored.
An alert may be output if it is determined that the patient is not in an acceptable position and/or if it is determined that the patient is in an unacceptable position based upon the plurality of sub-positions of the patient. An unacceptable position (e.g., a pose or posture) can be detrimental to patient outcomes, for example by causing sores or bruising, and may not be noticed by a medical professional during surgery due to, for example, the patient being beneath a sheet or other object, or simply because their attention is on the surgery. Accordingly, detecting the position, pose or posture of the patient using image data may improve patient outcomes.
The alert may be output if the patient is not in an acceptable position and/or if it is determined that the patient is in an unacceptable position for at least a threshold period of time. Outputting an alert if the patient is not in an acceptable position and/or if it is determined that the patient is in an unacceptable position for at least a threshold period of time ensures that the method does not output an unnecessary number of alerts, and only outputs an alert when necessary. Different threshold periods of time may be set for different positions. This again further reduces the number of unnecessary alerts output whilst also ensuring that patient outcomes are not compromised by enabling very bad situations to be quickly rectified.
The parts of the patient's body corresponding to sub-positions of the patient's body may include one or more of: head; arms; hands; legs; feet; torso; and abdomen. Identifying the sub-positions of these parts of the patient effectively allows the position, pose or posture of the patient to be determined and monitored.
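As an illustrative sketch only, the following Python class tracks how long the patient has remained in a position classified as unacceptable (derived from the detected sub-positions) and applies a different threshold period for each such position; the pose labels and thresholds are hypothetical.

```python
import time

# Hypothetical mapping from unacceptable positions to the maximum time, in
# seconds, that the position may persist before an alert is output;
# different threshold periods may be set for different positions.
UNACCEPTABLE_POSE_THRESHOLDS_S = {
    "arm_hanging_off_table": 10.0,
    "head_rotated_excessively": 60.0,
}

class PostureMonitor:
    """Track how long the patient, described by sub-positions of body parts
    (head, arms, hands, legs, feet, torso, abdomen), has been in an
    unacceptable position, and decide when an alert is warranted."""

    def __init__(self):
        self._since = {}   # pose label -> time it was first observed

    def update(self, detected_poses, now=None) -> list:
        """detected_poses: labels derived from the current sub-positions."""
        now = time.monotonic() if now is None else now
        alerts = []
        # Forget poses that are no longer detected.
        self._since = {p: t for p, t in self._since.items() if p in detected_poses}
        for pose in detected_poses:
            threshold = UNACCEPTABLE_POSE_THRESHOLDS_S.get(pose)
            if threshold is None:
                continue                      # pose is acceptable
            start = self._since.setdefault(pose, now)
            if now - start >= threshold:
                alerts.append(pose)
        return alerts
```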
Preferably, determining a position of the patient on an operating table comprises determining the position of a patient underneath a sheet. Determining the position of a patient underneath a sheet means that even if part of the patient is covered then the method is still effective, and further when the patient is covered then medical professionals will likely be less aware of the position of the patient, meaning that the method is particularly helpful.
The position of the patient underneath a sheet may be determined at least in part from image data captured by a thermal imaging and/or infrared camera. The use of a thermal camera and/or an infrared camera can effectively determine the position of a patient underneath a sheet because many sheets will not obstruct the infrared light given off by a patient's body, whilst they may block visible light that can be seen by the human eye.
Optionally, the step of determining a position of the patient underneath a sheet comprises determining a surface of the sheet above the operating table and determining the position of the patient underneath the sheet based on the surface. As a sheet laying over the patient will often be distorted by the body of the patient, it is possible to determine the position of the patient based on the surface of the sheet, in particular by determining the three-dimensional shape of the surface of the sheet. From this surface, the patient's position can be deduced.
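A minimal sketch of this idea, assuming an overhead depth (e.g., time-of-flight) image of the sheet surface and a known depth to the bare table surface, might be as follows; the names and the 5 cm bulge threshold are illustrative assumptions.

```python
import numpy as np

def patient_outline_from_sheet_surface(depth_map: np.ndarray,
                                       table_depth_m: float,
                                       min_bulge_m: float = 0.05) -> np.ndarray:
    """From a depth image of the sheet surface above the operating table,
    mark the pixels where the sheet is raised by at least min_bulge_m above
    the bare table surface; the resulting mask is a crude outline of the
    patient lying underneath the sheet."""
    # Smaller depth values mean the surface is closer to the overhead camera.
    height_above_table = table_depth_m - depth_map
    return height_above_table >= min_bulge_m
```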
Optionally, the surface of the sheet above the operating table is determined at least in part from image data captured using one or more of radar, LIDAR, ultrasound, and time-of-flight imaging.
The position of the patient may be identified using image recognition techniques. The use of image recognition techniques can allow the method to be automatically implemented by a computer or computer system.
The method may further comprise the step of capturing, by one or more image capture devices, the image data. Optionally, the image data comprises one or more of visible spectrum data, thermal or infrared data, depth or time-of-flight data, radar data, LIDAR data, ultrasound data, and stereoscopic image data.
According to a tenth aspect of the present disclosure, a system for monitoring a patient on an operating room table is provided. The system comprises one or more image capture devices configured to capture image data and one or more computing devices configured to carry out the method of the ninth aspect.
Optionally, the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
According to an eleventh aspect of the present disclosure, a computer program is provided. When run on one or more computing devices, the computer program is configured to cause the one or more computing devices to perform the method of the ninth aspect.
According to a twelfth aspect of the present disclosure, a non-transitory memory having stored thereon the computer program of the eleventh aspect is provided.
According to a thirteenth aspect of the present disclosure, a method of monitoring objects within an operating room is provided. The method comprises receiving image data captured by at least one image capture device; identifying an object from the image data; receiving procedure data defining a procedure being performed and comprising one or more steps to be performed, the one or more steps to be performed having one or more objects being associated with each step; determining a current step to be performed; determining whether the identified object is one of the one or more objects associated with the current step to be performed; and outputting an alert if the identified object is not one of the one or more objects associated with the current step to be performed.
Such a method may prevent incidents of the incorrect step being performed or an incorrect object being used, helping to prevent error by a medical professional and improving patient outcomes. Using image data to achieve this effect makes the system highly adaptable to a wide variety of situations.
Optionally, the one or more objects associated with each step further have a state associated with each step. In this case, the method further comprises determining a current state of the object, based on the image data; and outputting an alert if the current state of the object does not match the state associated with the current step to be performed. This can ensure that not only is the correct object used, but that it is in a correct state. For example, it can prevent the accidental use of non-sterile surgical tools and similar incidents. Again, using image data enables multiple different states to be detected for a wide range of objects, and the system can easily be adapted to different objects and different procedures.
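Purely as an illustration, assuming the procedure data is available as a mapping from steps to the objects (and expected states) associated with each step, the check and alert logic might be sketched as follows; the step names, object names and states are hypothetical.

```python
# Hypothetical procedure data: for each step, the objects associated with
# that step and the state each object should be in.
PROCEDURE_STEPS = {
    "incision": {"scalpel": "sterile", "retractor": "sterile"},
    "closure":  {"suture_kit": "sterile", "needle_holder": "sterile"},
}

def check_object_for_step(current_step: str,
                          identified_object: str,
                          current_state: str) -> list:
    """Return the alerts (if any) to output for an object identified from
    the image data, given the current step of the procedure."""
    alerts = []
    allowed = PROCEDURE_STEPS.get(current_step, {})
    if identified_object not in allowed:
        alerts.append(f"'{identified_object}' is not associated with step '{current_step}'")
    elif current_state != allowed[identified_object]:
        alerts.append(f"'{identified_object}' is '{current_state}', "
                      f"expected '{allowed[identified_object]}'")
    return alerts

# Example: a used scalpel during the incision step triggers a state alert.
print(check_object_for_step("incision", "scalpel", "used"))
```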
The method may further comprise determining a position of the object based on the image data; and wherein the current state of the object is further determined based on the position of the object. Using the position of an object can provide information and context about its state. For example, one tray may have sterile, unused surgical tools on it whilst another may be for unsterile, used surgical tools, and so the state of a surgical tool can be determined accordingly.
Optionally, the state associated with the current step is an initial state, the initial state being preset for each of the one or more objects associated with each step. In this manner, the method can check the state of each object at the beginning of a procedure, and the state that each object should be in at the start of a procedure may be preset and can be verified.
The current state may be one of: a “sterile” state; an “in use” state; a “used” state; an “open” state; a “closed” state; a “moving” state; a “stationary” state; an “idle” state; a “locked” state; an “unlocked” state; and a “paused” state.
The current state may be stored in memory, as may any changes to the current state (e.g., a transition from “sterile” to “used”), with timestamps associated with the state or the change in state. A record may be generated comprising this information for each object used in a procedure, allowing for information regarding the procedure to be reviewed at a later time. The record may be stored as part of an electronic medical record for the patient and/or a case protocol for the particular surgery.
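A simple sketch of such a record, with a timestamp stored for each state change, might be implemented as follows; the class and state names are illustrative only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class ObjectStateRecord:
    """Per-object record of the current state and of every state change,
    each with an associated timestamp, so that the procedure can be
    reviewed later (e.g. as part of an electronic medical record or a case
    protocol for the particular surgery)."""
    object_name: str
    current_state: str
    history: List[Tuple[datetime, str]] = field(default_factory=list)

    def change_state(self, new_state: str) -> None:
        timestamp = datetime.now(timezone.utc)
        self.history.append((timestamp, f"{self.current_state} -> {new_state}"))
        self.current_state = new_state

# Example: a scalpel transitioning from "sterile" to "in use" to "used".
record = ObjectStateRecord("scalpel", "sterile")
record.change_state("in use")
record.change_state("used")
```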
Optionally, the method further comprises determining a medical professional performing the procedure and determining whether the object is being used by the medical professional. The alert is output if the identified object is being used by the medical professional and if the identified object is not one of the one or more objects associated with the current step to be performed. Accordingly, the method can prevent a medical professional from using an incorrect object.
The object is preferably identified using image recognition techniques. The use of image recognition techniques can allow the method to be automatically implemented by a computer or computer system.
Optionally, the method further comprises the step of capturing, by one or more image capture devices, the image data.
The image data may comprise one or more of: visible spectrum data; thermal or infrared data; depth or time-of-flight data, such as radar data, LIDAR data, and ultrasound data; and stereoscopic image data.
According to a fourteenth aspect of the present disclosure, a system for monitoring objects within an operating room is provided. The system comprises one or more image capture devices configured to capture image data and one or more computing devices configured to carry out the method of the thirteenth aspect.
Optionally, the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
According to a fifteenth aspect of the present disclosure, a computer program is provided. When run on one or more computing devices, the computer program is configured to cause the one or more computing devices to perform the method of the thirteenth aspect.
According to a sixteenth aspect of the present disclosure, a non-transitory memory having stored thereon the computer program of the fifteenth aspect is provided.
According to a seventeenth aspect of the present disclosure, a method of monitoring objects within an operating room is provided. The method comprises receiving first image data captured at a first time by at least one image capture device; identifying an object from the first image data; receiving second image data captured at a second time by at least one image capture device, the second time being a later time than the first time; and determining whether the object identified from the first image data can be identified from the second image data.
Using image data provides a flexible method that can track a wide range of objects and determine whether they are present throughout a procedure at different times. In particular, any type of object that can be detected from the image data can be used.
Optionally, upon determining that the object cannot be identified from the second image data, the method further comprises determining that the object is not present, or not visible, within the operating room at the second time.
Determining that an object is not present at a second time can provide information indicating that the object was used, or is missing or otherwise unaccounted for. This can help ensure that all required objects were used (e.g., that insertion of a stent was not forgotten) and that no foreign objects were left within a patient, for example.
Optionally, the second image data is captured by a first set of the at least one image capture devices, and wherein the method further comprises, upon determining that the object cannot be identified from the second image data, receiving third image data, the third image data captured by a second set of the at least one image capture devices, the second set of at least one image capture devices comprising at least one image capture device not in the first set of at least one image capture devices; determining whether the object can be identified from the third image data; and upon determining that the object cannot be identified from the third image data, determining that the object is not present within the operating room at the second time. By utilizing different image capture devices in such a manner an object can be effectively tracked in an operating room despite it being moved to different positions and obscured by other objects or people.
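As an illustrative sketch only, assuming a hypothetical identify_objects helper that returns the set of objects identifiable in a given frame, the decision of whether the object is present at the second time might be expressed as:

```python
def object_present_at_second_time(object_name: str,
                                  identify_objects,      # hypothetical: image_data -> set of names
                                  second_image_data,
                                  additional_image_data) -> bool:
    """Decide whether the object identified at the first time can still be
    found at the second time: first from the second image data, then from
    image data captured by image capture devices not in the first set."""
    if object_name in identify_objects(second_image_data):
        return True
    for third_image_data in additional_image_data:
        if object_name in identify_objects(third_image_data):
            return True
    # Not identifiable in any available image data: treat the object as not
    # present, or not visible, within the operating room at the second time.
    return False
```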
Optionally, the method further comprises the step of capturing, by the second set of the at least one image capture devices, the third image data.
Optionally, the second image data is captured from a first position by a first image capture device of the at least one image capture devices, and wherein the method further comprises, upon determining that the object cannot be identified from the second image data, moving the first image capture device to a second position, such that the field of view of the first image capture device at the second position is different to a field of view of the first image capture device at the first position; receiving fourth image data, the fourth image data captured by the first image capture device at the second position; determining whether the object can be identified from the fourth image data; and upon determining that the object cannot be identified from the fourth image data, determining that the object is not present within the operating room at the second time. By moving the image capture devices in such a manner an object can be effectively tracked in an operating room despite it being moved to different positions and obscured by other objects or people.
Optionally, the method further comprises the step of capturing, by the first image capture device, the fourth image data.
Optionally, the method further comprises the step of, when it is determined that the object is not present within the operating room at the second time, outputting an alert. Such an alert can warn a medical practitioner that an object cannot be located and may be missing or misplaced. Alerting a medical practitioner in this way can prevent, for example, surgical tools or other objects from being left within a patient during surgery.
The first time is preferably a beginning of a procedure and the second time is preferably an end of a procedure. Checking whether an object is present at a beginning and end of a procedure means that the objects used can all be accounted for, and that if any are missing then a medical practitioner can be warned. Alerting a medical practitioner in this way can prevent, for example, surgical tools or other objects from being left within a patient during surgery.
The object may be a surgical tool.
The object may be identified using image recognition techniques. The use of image recognition techniques can allow the method to be automatically implemented by a computer or computer system.
The method preferably further comprises the step of capturing, by the at least one image capture device, the first image data.
The method preferably further comprises the step of capturing, by the at least one image capture device, the second image data.
Optionally, the image data comprises one or more of visible spectrum data, thermal or infrared data, depth or time-of-flight data, radar data, LIDAR data, ultrasound data, and stereoscopic image data.
According to an eighteenth aspect of the present disclosure, a system for monitoring objects within an operating room is provided. The system comprises one or more image capture devices configured to capture image data and one or more computing devices configured to carry out the method of the seventeenth aspect.
Optionally, the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
According to a nineteenth aspect of the present disclosure, a computer program is provided. When run on one or more computing devices, the computer program is configured to cause the one or more computing devices to perform the method of the seventeenth aspect.
According to a twentieth aspect of the present disclosure, a non-transitory memory having stored thereon the computer program of the nineteenth aspect is provided.
It will be appreciated that features described in relation to one aspect of the present disclosure may also be applied equally to all of the other aspects of the present disclosure.
Additional features, which alone or in combination with any other feature(s), such as those listed above and/or those listed in the claims, can comprise patentable subject matter and will become apparent to those skilled in the art upon consideration of the following detailed description of various embodiments exemplifying the best mode of carrying out the embodiments as presently perceived.
The detailed description particularly refers to the accompanying figures in which:
A system according to aspects of the present disclosure will be described with respect to
In addition to the operating table 101 and the robotic surgical equipment 103, the operating room 100 comprises a number of other pieces of equipment 107. In this example, the operating room further comprises a first table 107a having surgical tools thereon, a vision cart 107b, an anesthesiologist's cart 107c, a second table 107d having further surgical tools thereon, a nurse's workstation 107e, and a laparoscopic tower 107f.
In addition to the surgeon 109, a number of other medical professionals are located in the operating room 100. In this example, a first assistant 111a and a second assistant 111c are present, as are an anesthesiologist 111b, a nurse 111d and a surgical technician 111e.
It should be noted that the equipment and medical professionals listed above and illustrated within the operating room 100 in
Image capture devices 113 configured to capture image data within the operating room 100 are also provided. In
The three image capture devices 113 each have a different field of view within the operating room 100. The field of view of image capture device A is shown by cone 115a, the field of view of image capture device B is shown by cone 115b, and the field of view of image capture device C is shown by ellipse 115c. It will be appreciated that image capture devices A and B are located to the sides of the operating room looking inwards, whilst image capture device C is located above the operating table 101 looking downwards.
It can be seen that the fields of view 115 of each of the image capture devices 113 all cover a region where the patient 117 is being operated on with the robotic surgical instrument 103 by the surgeon 109. This is an example of a region of interest, and according to aspects of the present disclosure the image capture devices 113 are configured such that their fields of view 115 include that region of interest. Typically, a region of interest will be a region wherein surgery is taking place, as illustrated in
The image capture devices 113 are positioned such that they are likely to have an unobstructed view of a region of interest, and multiple image capture devices 113 can help to compensate if the field of view of one image capture device 113 is obstructed. For example, as can be seen in
The number and the position, as well as the type, of image capture devices may vary depending upon the use case of the present disclosure. For example, whilst three image capture devices 113 are illustrated in
The image capture devices may also be static or moveable. In the case that they are moveable, this may be in between procedures (e.g. they can be moved on trollies or the like) or during a procedure (e.g., they may be attached to a maneuverable arm). Moveable image capture devices can be advantageous for being adaptable to different types of surgery that may require different fields of view (e.g., of different regions of interest). Image capture devices moveable during a procedure can advantageously help to overcome an obstruction in the field of view of the image capture device by enabling the field of view of the image capture device to be changed.
A number of different types of image capture devices may be used that capture different types of image data. A non-exhaustive list of different types of image data that could be captured by the image capture device includes visible spectrum data, thermal or infrared data, depth or time-of-flight data, radar data, LIDAR data, and stereoscopic image data. To capture these types of data, the image capture devices may be visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, or stereoscopic imaging cameras. Other types of image data and image capture devices than those listed herein may also be used.
Different types of image capture device may be provided in different locations. For example, in
Aspects of the present disclosure provide a number of different methods that can be implemented using the systems described herein, such as that illustrated in
One aspect of the present disclosure provides a method of monitoring objects within an operating room, and in particular involves a method of overcoming obstructions of an image capture device.
A flowchart of the method 200 is illustrated in
At step 203, at least a subset of the first image data is determined. The at least a subset of the first image data (which may therefore comprise all the captured image data or only some of the captured image data) relates to a region of interest within the operating room. The region of interest may be identified by a medical professional, and subsequently determined using image recognition techniques. Alternatively, the region of interest may be identified automatically in dependence on the type of surgical procedure, or in dependence on a predefined list; e.g., the list may comprise surgical tools, patient body parts, and medical equipment.
The region of interest may be a static region of interest, or it may not be static. That is to say, the region of interest may move throughout a surgical procedure. For example, the region of interest may be an object (such as a piece of equipment or a surgical tool) or a person, or a volume around such an object or person, and the region of interest may move with the object or person. As another example, the region of interest may be a region in which the surgical procedure is to be performed, such as a torso region of a patient.
At step 205, it is determined whether the region of interest within the operating room is at least partially obstructed within the subset of the first image data. This is done based on the first image data; that is, it is determined whether the region of interest is obstructed in dependence on the first image data. It may be determined that the region of interest is obstructed, for example, if, after an unobstructed view of the region of interest is obtained, it is detected that another object has moved in front of the region of interest, thus obscuring it. In another case, it may be determined that the region of interest is obstructed if certain objects cannot be identified within the image data. For example, if it is known that a trocar should be within the region of interest but a trocar cannot be identified in the subset of the first image data, then it may be determined that the region of interest is obscured. Other techniques may also be used.
For example, determining if the region of interest within the operating room is at least partially obstructed may be done by obtaining a model (such as a 3D volumetric representation) of the region of interest or of an object in the region of interest. This model can then be deformed. For example, vectors defining a 3D volumetric representation of the object or region of interest are projected to a plane of the first image data. The object, as identified in the first image data, and the deformed model can then be compared to generate a relational score between the two. If this score is below a threshold value, then it may be determined that the region of interest is at least partially obstructed.
Upon determining that the region of interest is at least partially obscured, the method moves to step 207. At step 207, second image data captured by the at least one image capture device is received. Then, at step 209, at least a subset of the second image data is determined (which may comprise all the captured image data or only some of the captured image data), the at least a subset of the second image data relating to the region of interest. At step 211, the at least a subset of the second image data is output; the at least a subset of the second image data relates to the region of interest, and the region of interest is less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data.
The second image data received at step 207 may be, or may be some combination of: image data of the same type as the first image data, captured by a different image capture device at a different position to the image capture device that captured the first image data; image data of the same type as the first image data, captured by the same image capture device at a different position (e.g., the image capture device moved between capturing the first and second image data); and image data of a different type to the first image data, captured by the same or a different image capture device at the same or a different position as when the first image data was captured.
In some cases, the subset of the first image data determined at step 203 is compared with the subset of the second image data determined at step 209. Such a method is illustrated in
The first steps of
After step 209, in the method of
If it is determined that the region of interest is less obstructed within the at least a subset of the second image data, then step 211 is performed, as in the method of
The comparison between the first image data and the second image data to determine in which the region of interest is more obstructed may be achieved by generating a relational score from the at least a subset of the second image data, in the same manner as described above for the first image data, and then comparing the relational scores for the first image data and the second image data.
The first image data and the second image data, used in both the methods of
Moving the image capture device may comprise a translation of the image capture device (e.g., by moving on wheels) or a rotation or pivoting of the image capture device, amongst other movements. In some cases, the image capture device may be attached to an articulated arm, such as a robotic arm, that can move to provide a desired translation and/or rotation of the image capture device.
The movement of the image capture device may be along a movement trajectory, which is a trajectory from the first position to the second position. The movement trajectory can be generated using a look up table, based on the location of the region of interest, the location of the first position and the three-dimensional position of the obstruction object. A candidate position to give an unobstructed view may be determined from the look up table, and a trajectory may be determined based on the first position and the candidate position, for example using the look up table.
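A minimal sketch of the look-up-table approach just described is given below; the grid cells, candidate positions and way-point count are purely illustrative assumptions.

```python
# Hypothetical look-up table keyed by coarse grid cells of the region of
# interest and of the obstruction; each entry lists candidate camera positions
# expected to give an unobstructed view of the region of interest.
CANDIDATE_POSITIONS = {
    ((0, 1), (1, 2)): [(0.0, 3.0, 2.5), (2.0, 3.0, 2.5)],
    ((0, 1), (2, 2)): [(0.0, 1.0, 2.5)],
}

def to_grid_cell(position, cell_size=1.0):
    """Quantise an (x, y, z) position to a coarse (x, y) grid cell."""
    return (int(position[0] // cell_size), int(position[1] // cell_size))

def plan_movement_trajectory(first_position, roi_position, obstruction_position, steps=5):
    """Pick a candidate position from the look-up table and return a simple
    straight-line movement trajectory (a list of way-points) towards it."""
    key = (to_grid_cell(roi_position), to_grid_cell(obstruction_position))
    candidates = CANDIDATE_POSITIONS.get(key, [])
    if not candidates:
        return None  # no known unobstructed position for this configuration
    # Choose the candidate closest to the current (first) camera position.
    candidate = min(candidates,
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(c, first_position)))
    return [tuple(f + (c - f) * t / steps for f, c in zip(first_position, candidate))
            for t in range(1, steps + 1)]

# Camera at (3.0, 0.5, 2.5), region of interest near (0.2, 1.8, 1.0), obstruction
# near (1.4, 2.7, 0.9): the nearer candidate (2.0, 3.0, 2.5) is chosen.
print(plan_movement_trajectory((3.0, 0.5, 2.5), (0.2, 1.8, 1.0), (1.4, 2.7, 0.9)))
```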
For example, turning to
In this example, the process is iterative, although it should be understood that this is not essential. When iterative, the process comprises moving the image capture device from the first position towards the candidate position along the movement trajectory to a second position. A score is then compared between the first position and the second position, the score representing an amount by which the region of interest is obstructed. For example, the relational scores discussed above may be calculated at the first and second positions, and if they indicate that the region of interest is less obstructed from the second position then the next iteration may continue to move the image capture device along the movement trajectory. If the scores indicate that, after moving to the second position, the region of interest is more obscured than from the first position, an alternative trajectory can be computed for the next iteration of the method.
Continuing the example discussed above with reference to
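The following sketch illustrates such an iterative movement loop, assuming a hypothetical camera object with a position and a move_to method, together with caller-supplied scoring and re-planning functions; all names and values are illustrative only.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Camera:
    """Stand-in for an image capture device on a movable (e.g. robotic) arm."""
    position: tuple

    def move_to(self, new_position):
        self.position = new_position

def move_towards_unobstructed_view(camera, score_at, initial_trajectory, replan, max_moves=20):
    """Step the camera along a movement trajectory, keeping moves that make the
    region of interest less obstructed (a higher relational score) and
    re-planning when a move makes it more obscured."""
    waypoints = deque(initial_trajectory)
    best_score = score_at(camera.position)
    moves = 0
    while waypoints and moves < max_moves:
        previous_position = camera.position
        camera.move_to(waypoints.popleft())
        moves += 1
        score = score_at(camera.position)
        if score >= best_score:
            best_score = score                 # less obstructed: keep going
        else:
            camera.move_to(previous_position)  # more obscured: back off and re-plan
            waypoints = deque(replan(camera.position) or [])
    return camera.position

# Toy usage: the score improves as the camera approaches x = 2.0.
cam = Camera(position=(0.0, 0.0, 2.5))
path = [(0.5, 0.0, 2.5), (1.0, 0.0, 2.5), (1.5, 0.0, 2.5), (2.0, 0.0, 2.5)]
print(move_towards_unobstructed_view(cam, lambda p: -abs(p[0] - 2.0), path, lambda p: []))
```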
Alternatively, or in addition, to moving an image capture device to overcome an obstruction, multiple image capture devices can be used that capture different image data. This could be image data of the same type from a different position, image data of a different type from the same position, or image data of a different type from a different position.
For example, in such a case, when applied to the method of
By way of an example, consider the anesthesiologist's cart 107c and the anesthesiologist 111b shown in
In this example, the first image capture device and the second image capture device are located in different positions, such that they have different fields of view. This can enable a second image capture device to capture unobstructed, or less obstructed, image data relating to the region of interest than a first image capture device. However, additionally or alternatively, the first image capture device and the second image capture device may be different types of image capture device, configured to produce different types of image data. This can mean that the first and second image capture devices may be in the same position (and so have the same field of view) but the image data generated by the first image capture device may be obstructed whilst the image data generated by the second image capture device may not be. This is particularly the case when the obstructing object is transparent to a certain form of light or electromagnetic radiation but not another form. For example, different wavelengths of electromagnetic radiation can pass through different materials without being blocked. In some implementations, the first image capture device may be a visible spectrum camera whilst the second image capture device may be an infrared camera. Thus, whilst one type of image capture device may have its view obstructed by an obstructing object, a different type of image capture device may not, even with the same field of view. It should be noted that whilst the first and second image capture devices are described as separate devices, they may be integrated with one another so as to form a single unit.
Both in the case where the image capture device is moved to overcome an obstruction and in the case where multiple image capture devices are used to overcome an obstruction, the image data captured from the different positions and/or by the different image capture devices may be combined. In this case, the method further comprises the steps of combining the first image data and the second image data to generate composite image data and outputting the composite image data. The composite image data relates to the region of interest, with the region of interest being less obstructed in the composite image data than in either the first image data or the second image data.
The combining of the image data can be achieved in a number of ways. For example, if different types of image data are captured by different types of image capture device, but from substantially the same position, then the image data in one of the first or second sets of image data that relates to an obstruction object can be removed, and replaced with the equivalent image data that is unobstructed from the other image capture device. For example, if one image capture device is a visible spectrum camera and another is an infrared camera, if an object is obstructing the view of the visible spectrum camera but not the infrared camera (e.g. a sheet which blocks visible light but not infrared), then image data from the infrared camera can be used to replace the portions of the image data from the visible spectrum camera that are obstructed.
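A minimal sketch of this pixel-replacement compositing, assuming the visible and infrared images are co-registered (same resolution and viewpoint) and that the obstructed pixels are available as a boolean mask; the dummy data is illustrative only.

```python
import numpy as np

def composite_obstructed_regions(visible_image, infrared_image, obstruction_mask):
    """Replace the obstructed pixels of a visible-spectrum image with the
    corresponding pixels of a co-registered infrared image."""
    composite = visible_image.copy()
    composite[obstruction_mask] = infrared_image[obstruction_mask]
    return composite

# Example with dummy data: a 4 x 4 single-channel frame in which the top-left
# quadrant is obstructed in the visible image.
visible = np.full((4, 4), 200, dtype=np.uint8)
infrared = np.full((4, 4), 80, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(composite_obstructed_regions(visible, infrared, mask))
```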
In the case that the positions of the two image capture devices from which image data is being combined are different, then some or all of the image data may need to be morphed or projected before it can be combined. For example, based on first image data, a three-dimensional representation of an object in the region of interest can be created, and this can then be projected into the plane of the second image data and then used to replace a portion of the second image data that is obstructed. In order to project the data in this manner, the image data may be interpolated or extrapolated so that gaps in the available image data can be overcome. Artificial intelligence techniques such as machine learning may be used for this purpose, and may utilize reference images of the identified object from one or more angles to assist in projecting the image and fill in any missing portions in the image data.
In some cases, missing portions in the data can be overcome using three-dimensional models of an object in the region of interest. Continuing the above example, after the object in the region of interest is identified from the first image data, a three-dimensional model can be obtained and this can be projected into the plane of the second image data. This can help to provide visible spectrum image data from other forms of data. For example, if an object is identified using infrared image data, a three-dimensional model of this object can be mapped to the object so that it can be viewed as if it were captured with a visible spectrum image capture device. Hence, the composite image data may comprise both portions of the first image data, the second image data, and computer-generated data.
Another aspect of the present disclosure described herein provides another method of monitoring objects within an operating room. This method is described with respect to
Method 400 begins at step 401 with receiving image data captured by at least one image capture device. The image capture device and the image data may be of any type discussed herein.
At step 403, a first object is identified from the image data, and at step 405 a second object is identified from the image data. The first and second objects are identified from the image data using image recognition techniques of the type well known to those skilled in the art.
Subsequently, at step 407, an interaction state between the first object and the second object is determined. The interaction state can include a compatibility state (e.g. compatible, not compatible), a relative positional state (e.g. a distance between the objects, potential collision risk), and a current interaction (e.g. coupled, coupling, uncoupled, uncoupling), amongst others.
Once the interaction state has been determined at step 407, the interaction state is output at step 409. This could include outputting to another part of a computer program for subsequent use, outputting to another device, and outputting for a medical professional to observe.
The interaction state may be based on the volumes of the first object and the second object. In this case, the method may further include determining a volume occupied by the first object based on the image data and determining a volume occupied by the second object based on the image data. This can help to provide information as to whether the objects are compatible. In some cases, objects may be determined as not being compatible if they cannot fit within or around each other. For example, the first and second object may be determined to not be compatible if the first object cannot fit within or around the second object. An example use case of such a method would be if the first object were a trocar and the second object were a laparoscope. The method would determine whether the trocar were of an appropriate size for the laparoscope. If it were determined that the laparoscope would not fit within the trocar, then it would be determined that the laparoscope and trocar were not compatible and an alert could be output to a surgeon. Another situation would be determining whether a large piece of equipment such as a robotic surgical implement were compatible with an operating table based on whether the operating table could fit within a defined region of the robotic surgical implement.
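By way of a simplified, hypothetical illustration, the trocar/laparoscope compatibility check might reduce to comparing an instrument diameter against a cannula inner diameter; in practice these dimensions would be derived from the volumes determined from the image data, and the values below are illustrative only.

```python
def laparoscope_fits_trocar(scope_diameter_mm, trocar_inner_diameter_mm, clearance_mm=0.5):
    """True if the laparoscope shaft can pass through the trocar cannula,
    allowing a small clearance."""
    return scope_diameter_mm + clearance_mm <= trocar_inner_diameter_mm

print(laparoscope_fits_trocar(10.0, 12.0))   # True: compatible
print(laparoscope_fits_trocar(10.0, 10.0))   # False: alert the surgeon
```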
In some cases, the interaction state may be a state of impending collision or not. That is, determining the interaction state may comprise determining whether the first object and the second object will collide. This can be done by determining a velocity of the first object from the image data and determining a velocity of the second object from the image data. The velocities may be determined relative to an external reference frame (e.g., the operating room) or relative to one another. In this case, outputting the interaction state may comprise outputting an alert if it is determined that the first and the second object will collide. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning.
Alternatively or additionally, outputting the interaction state may comprise outputting a command configured to adjust the velocity of either or both of the first object and the second object to prevent the first object and the second object from colliding. For example, if one or both of the first object and the second object are being controlled by a computer, a command may be sent to the computer that, when executed by the computer, causes the computer to adjust the velocity of the first and/or second object. For example, the velocity may be adjusted to zero, i.e., the first and/or the second object may be caused to stop moving.
The interaction state, in particular the alert or the adjust velocity command as discussed above, may be output a predetermined time before a collision is predicted to occur. The time until a collision is predicted to occur may be based on the determined velocities of the first and second object. For example, if the velocity of the first object is determined to be 0 m/s and the velocity of the second object is determined to be 0.1 m/s in the direction of the first object, and the distance between the objects is 0.5 m, it may be determined that the objects will collide in 5 seconds. The predetermined time may be 1 second before a collision, 2 seconds before collision, 5 seconds before collision, or another time. If the output interaction state is a warning to enable a medical professional to take action then preferably the predetermined time is longer (e.g. 10 seconds) to give the medical professional enough time to react to prevent collision. If the output interaction state is a command to a computer to prevent the collision then the predetermined time may be shorter (e.g. 1 second or less) as the computer does not need as much time to execute the command.
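The following sketch reproduces the time-to-collision arithmetic and the lead-time comparison described above; the function names are illustrative.

```python
def time_to_collision(distance_m, closing_speed_m_s):
    """Seconds until the gap closes at the current relative (closing) speed,
    or None if the objects are not approaching each other."""
    if closing_speed_m_s <= 0:
        return None
    return distance_m / closing_speed_m_s

def should_output_alert(distance_m, closing_speed_m_s, lead_time_s):
    """Output the interaction state a predetermined lead time before impact."""
    ttc = time_to_collision(distance_m, closing_speed_m_s)
    return ttc is not None and ttc <= lead_time_s

# Worked example from the text: a 0.5 m gap closing at 0.1 m/s -> 5 s to impact.
print(time_to_collision(0.5, 0.1))           # 5.0
print(should_output_alert(0.5, 0.1, 10.0))   # True: a 10 s warning to staff
print(should_output_alert(0.5, 0.1, 1.0))    # False: a 1 s machine command waits
```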
In determining whether a collision will occur, the volume occupied by each object may be considered. A collision may be considered to be imminent when it is predicted that the volumes occupied by each object will overlap based on current velocities. The volume of each object may be a compound volume that precisely matches the size and shape of the object, or it may be a simple and/or regular volume that encompasses the object. The latter may require less computing power to implement. In some cases, the volume may be one of a parallelepiped (e.g. a cuboid), a sphere, or a cylinder. An appropriate shape may automatically be assigned to an identified object based on the captured image data.
As well as some or all of the position, velocity and volume information about the first and second objects, other data may also be used to determine if a collision will occur. In particular, information may be received from one or both of the first and second objects (or from a device or computer controlling them) regarding a trajectory or future movement path of the objects. For example, information may be provided about how far an object will move in a certain direction, how long it will move for, a path of the object, and/or a final position of the object. This information can be combined with the information derived from the image data to further refine the determination as to whether a collision will occur. For example, whilst two objects may have relative velocities that will result in a collision, information from each object may indicate that the objects will stop moving prior to a collision and so it may be that no collision will occur.
Another aspect of the present disclosure described herein provides a method of monitoring a patient on an operating room table. This method is described with respect to
At step 501 of method 500 illustrated in
At step 503, a position of the patient relative to the operating table is determined. In particular, this may be a position of the patient on the operating table. This position is determined based on the image data. For example, a patient can be located in the image data using image recognition techniques and their position subsequently determined. The position of the operating table can be similarly determined using image recognition techniques and then a relative position of the patient on the operating table can be determined. Alternatively, the position of the operating table may be determined based on position data of the operating table received from the operating table or some other source. The position of the patient relative to the operating table can then be determined based on the patient position (determined based on the image data) and the operating table position (determined based on the received position data). The positions determined may be three-dimensional positions.
It should be noted that the positions of the patient and the operating table may be determined relative to some external coordinate system (e.g. relative to the operating room) and compared to generate a relative position, or they can be determined directly relative to one another. For example, an origin can be determined based on the operating table (e.g. a top left hand corner of an operating table could be identified and used as the origin), and then the position of the patient could be determined relative to this.
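A minimal sketch of expressing the patient position directly relative to a table-anchored origin (e.g. a corner of the operating table); the optional rotation matrix is an assumption covering a table that is not aligned with the room axes, and the positions shown are illustrative.

```python
import numpy as np

def relative_position(patient_room_xyz, table_origin_room_xyz, table_rotation_3x3=None):
    """Express the patient position in a coordinate frame anchored to the
    operating table. Inputs are positions in the operating-room frame."""
    offset = np.asarray(patient_room_xyz, float) - np.asarray(table_origin_room_xyz, float)
    if table_rotation_3x3 is not None:
        # Rotate the room-frame offset into the table frame.
        offset = np.asarray(table_rotation_3x3, float).T @ offset
    return offset

# Patient at (2.3, 1.1, 0.9) m in the room, table corner at (1.8, 0.5, 0.8) m.
print(relative_position((2.3, 1.1, 0.9), (1.8, 0.5, 0.8)))  # approximately [0.5 0.6 0.1]
```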
The position of the patient relative to the operating table can be used to determine if a patient moves. This can be helpful to ensure a patient stays in the correct position such that a robotic surgical implement can operate properly, or to ensure that a patient is not sliding or slipping on the operating table, for example if the operating table is not horizontal.
An example of such a method is illustrated in
Once the first and second positions have been determined, the method moves to step 509 whereby the first and second positions are compared to determine if the patient has moved on the operating table. If the first and second positions are different, then it may be determined that the patient has moved, whereas if the first and second positions are the same, then it may be determined that the patient has not moved.
A threshold difference may be used to determine whether the patient has moved. For example, whilst the two positions may not be exactly the same the difference may be sufficiently small (i.e. below a threshold value) that the difference is considered insignificant and it is determined that the patient has not moved. The threshold value could be a relative value (e.g. a certain percentage of a height or length of the patient, which may be determined from the image data or otherwise input) or an absolute value (e.g. a given number of centimeters). Different thresholds may be used for different directions of movement (i.e. a larger threshold may be used for movement along the length of an operating table whilst a smaller threshold may be used for movement across a width of an operating table) and for different patients and types of surgery or procedure.
Alternatively, or in addition, it may be determined that a patient has moved on the operating table if the patient has moved out of a predefined region on the operating table. For example, if the second position is determined to be a position outside of such a predefined region then it may be determined that the patient has moved, whereas if the second position is a position within the predefined region then it may be determined that the patient has not moved. The predefined region may be a region that is considered to be safe for the patient to be in. For example, it may be a central region such that it is considered that the patient is not at risk of slipping, sliding or falling off the operating table. Such a position could be, for example, the region of the operating table that is 5 centimeters or more from an edge of the operating table. Different predefined regions may be used for different patients and types of surgery or procedure.
In some cases, both a threshold and a predefined region may be used. For example, it may be determined that a patient has moved if the second position is different from the first position by a threshold amount or the second position is a position outside a predefined region. In other cases, it may be determined that a patient has moved if the second position is different from the first position by a threshold amount and the second position is a position outside a predefined region.
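The following sketch combines the threshold test and the predefined-region test in either of the two ways just described; the 5 centimeter margin, table dimensions and positions are illustrative assumptions.

```python
def has_moved(first_pos, second_pos, threshold_m=0.05, safe_region=None, require_both=False):
    """Decide whether the patient has moved on the operating table.

    Positions are (x, y) offsets on the table surface in metres. 'safe_region'
    is an optional (x_min, x_max, y_min, y_max) rectangle; leaving it counts as
    movement. 'require_both' switches between the OR and AND combinations."""
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    exceeded_threshold = (dx * dx + dy * dy) ** 0.5 > threshold_m
    if safe_region is None:
        return exceeded_threshold
    x_min, x_max, y_min, y_max = safe_region
    outside_region = not (x_min <= second_pos[0] <= x_max and y_min <= second_pos[1] <= y_max)
    return (exceeded_threshold and outside_region) if require_both else (exceeded_threshold or outside_region)

# A 5 cm margin from the edges of a 0.6 m x 2.0 m table top.
safe = (0.05, 0.55, 0.05, 1.95)
print(has_moved((0.30, 1.00), (0.31, 1.01), safe_region=safe))  # False: negligible shift
print(has_moved((0.30, 1.00), (0.02, 1.00), safe_region=safe))  # True: outside the safe region
```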
If it is determined that the patient has moved, then the method can proceed to step 511. At step 511 an alert is output. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning. For example, it may be configured to alert a medical professional that they need to attend to the position of the patient. Additionally or alternatively, the alert may be in the form of a computer command configured to cause a piece of equipment to perform a certain action, such as causing an operating table to adjust its position (e.g. level itself).
The first and second positions may represent a variety of different things. For example, the positions could be a single point or a plurality of discrete points. Such points could be an estimated center of mass determined based on the image data or another reference point, such as a top of the head or points of specific facial features (e.g., eyes). Alternatively, the positions may be areas or volumes, e.g., an area or volume occupied by a patient, which could also be determined from the image data. The volume occupied by a patient could be represented by a cuboid that the patient fits within, for example, the smallest such cuboid.
First and second positions may also correspond to positions of one or more parts of the patient's body. For example, the first and second position may each comprise a plurality of sub-positions that are determined based on the image data. These sub-positions may be three-dimensional positions of parts of the patient's body. Such parts could be, for example, a head, torso, abdomen, arms, legs, feet, hands, etc. The position of these parts of a body may be any of the types of position described in the preceding paragraph.
From the determined sub-positions of parts of a patient's body, a position of the patient can be determined. For example, it can be determined if the patient is in a supine or prone position, as well as a position of the arms, legs and head. It can be determined if the position of the patient is an acceptable position or not. For example, some positions may increase the risk of complications or patient discomfort, particularly if the position is maintained for an extended period of time.
If it is determined that the patient is not in an acceptable position or if it is determined that the patient is in an unacceptable position then an alert is output. This may be determined by comparing the position of a patient against a list of acceptable positions and determining that the patient is not in an acceptable position if the determined position of the patient does not match a position on the list of acceptable positions. Similarly, it may be determined by comparing the position of a patient against a list of unacceptable positions and determining that the patient is in an unacceptable position if the determined position of the patient matches a position on the list of unacceptable positions. In either case, an alert may be output. This could be in the form of an audible and or visual warning, such as a message on a screen, a warning light, and a verbal warning.
As noted above, not being in an acceptable position, or being in an unacceptable position, may be undesirable if the patient remains in such a position for an extended period of time. Therefore, the alert may be output if the position is maintained for at least a threshold period of time, i.e., if the patient is not in an acceptable position, or in an unacceptable position, for a threshold period of time. The threshold period of time may be different for different patients or groups of patients (e.g., certain classes of patient may be more at risk of certain complications arising due to being in an unacceptable position and so the threshold period of time for these patients may be shorter), and it may also vary with position. For example, some positions may be deemed less acceptable, worse, or more dangerous than other positions, and so the threshold period of time may be shorter for these positions.
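A minimal sketch of applying per-position time thresholds before raising an alert; the position labels, durations and timing source are hypothetical.

```python
import time

# Hypothetical per-position thresholds (seconds); riskier postures receive
# shorter allowances before an alert is raised.
POSITION_TIME_THRESHOLDS_S = {"arm_hyperextended": 60, "head_rotated": 300}
UNACCEPTABLE_POSITIONS = set(POSITION_TIME_THRESHOLDS_S)

class PositionMonitor:
    def __init__(self):
        self._entered_at = {}  # position label -> time it was first observed

    def update(self, current_position, now=None):
        """Return alerts for unacceptable positions held beyond their threshold."""
        now = time.monotonic() if now is None else now
        if current_position not in UNACCEPTABLE_POSITIONS:
            self._entered_at.clear()
            return []
        entered = self._entered_at.setdefault(current_position, now)
        held_for = now - entered
        if held_for >= POSITION_TIME_THRESHOLDS_S[current_position]:
            return [f"Patient has been in position '{current_position}' for {held_for:.0f} s"]
        return []

monitor = PositionMonitor()
print(monitor.update("arm_hyperextended", now=0))    # []: position just entered
print(monitor.update("arm_hyperextended", now=90))   # alert: held for 90 s > 60 s
```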
In some cases, the patient's body may be fully or partially covered by a sheet or other item. In this case, the position of the patient underneath the sheet or other item may be determined. This may be done based on the three-dimensional shape formed by the sheet or other item using image recognition techniques and/or depth information, time-of-flight information, radar, LIDAR, ultrasound and stereoscopic information captured by the at least one image capture device. In addition, or alternatively, the position may be determined using thermal imaging or infrared image data. This can be effective, particularly for thin sheets, at determining a patient position whilst using relatively little computational power. Such techniques can be applied to any of the methods of monitoring a patient position on an operating table described herein, such as those of
Another aspect of the present disclosure described herein provides a method of monitoring objects within an operating room. Such a method is described with respect to
Method 700 begins at step 701, wherein image data captured by at least one image capture device is received. The image capture device and the image data may be of any type discussed herein.
The method then proceeds to step 703, which comprises identifying an object from the image data. The object is identified using image recognition techniques well known to those skilled in the art.
Step 705 comprises receiving procedure data. The procedure data defines a procedure being performed (e.g. an operation or type of surgery, such as bypass surgery) and comprises one or more steps to be performed. That is, the procedure data comprises information regarding the different steps that are performed during the procedure. The procedure data also comprises information regarding one or more objects that are associated with each step. For example, the procedure data may include a list of the surgical tools that are required for each step. It should be noted that step 705, whilst illustrated as subsequent to steps 701 and 703 in
At step 709, a current step to be performed is determined. That is, it is determined which step of the procedure is currently being performed. This may be done based on the image data (e.g. certain objects and object states may be recognized) or based on some other input. For example, a medical professional may input the current step into a computer.
At step 711, the method determines whether the object identified at step 703 is one of the one or more objects associated with the current step to be performed. For example, the method determines if a surgical tool identified at step 703 is one of the surgical tools that are required for the current step of the procedure or not. This may be done, for example, by comparing the identified object to a list of the objects associated with each step.
If it is determined that the identified object is not one of the one or more objects associated with the current step to be performed, then an alert is output at step 713. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning. This can ensure that an incorrect object (e.g. incorrect surgical tool) is not inadvertently used, which could lead to complications for the patient.
On the other hand, if the identified object is one of the one or more objects associated with the current step to be performed, then the method ends at step 715.
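A minimal sketch of the check performed at step 711, using hypothetical procedure data mapping each step to its associated objects; the step and object labels are illustrative only.

```python
# Hypothetical procedure data: each step lists the objects associated with it.
PROCEDURE = {
    "incision": {"scalpel", "forceps"},
    "insufflation": {"veress_needle", "insufflator"},
    "port_placement": {"trocar", "laparoscope"},
}

def check_object_for_step(identified_object, current_step, procedure=PROCEDURE):
    """Return an alert string if the identified object is not associated with
    the current step of the procedure, else None (the method ends)."""
    allowed = procedure.get(current_step, set())
    if identified_object not in allowed:
        return f"Alert: '{identified_object}' is not expected during step '{current_step}'"
    return None

print(check_object_for_step("scalpel", "incision"))        # None: correct tool
print(check_object_for_step("laparoscope", "incision"))    # alert string
```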
In some cases, it is determined not just whether an incorrect object is present, but that an incorrect object is being used by a medical professional. Such a method is illustrated in
Method 800 begins with steps 701 to 711, which are the same as those described above with respect to method 700 of
However, unlike method 700 of
Then, at step 803, it is determined whether the object is being used by a medical professional. If it is determined that the identified object is not being used by a medical professional then the method ends at step 805, whilst if it is determined that the identified object is being used by a medical professional then the method outputs an alert at step 807. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning. Hence, the alert is output in method 800 if the identified object is being used by a medical professional and is not an object associated with a current step being performed.
In some cases, in addition to or instead of determining whether the identified object is being used by a medical practitioner, a state of the object can be used to determine whether an alert is output.
Steps 701 to 711 of method 900, illustrated in
The state of the object may be an initial state of the object (e.g., the state the object is in at the beginning of the procedure). The initial state of each object associated with a procedure may be preset. For example, an initial state of a surgical tool may be a “sterile” state, indicating that that surgical tool is sterile. Examples of possible states, which may be initial states or otherwise, include a “sterile” state, an “in use” state, a “used” state, an “open” state, a “closed” state, a “moving” state, a “stationary” state, an “idle” state, a “locked” state, an “unlocked” state, and a “paused” state.
At step 711, if it is determined that the identified object is not one of the one or more objects associated with the current step to be performed, then an alert is output at step 713. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light and a verbal warning. On the other hand, if the identified object is one of the one or more objects associated with the current step to be performed, then the method moves to step 903.
At step 903 it is determined whether the current state of the object matches the state associated with the current step to be performed. If the current state of the object does match the state associated with the current step to be performed then the method ends at 905. If the current state of the object does not match the state associated with the current step to be performed then an alert is output at 907. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning.
An application of such a method is to ensure that a surgeon is using a sterile surgical tool in a medical procedure. For example, the first step of a procedure may be to create an incision in a patient using a scalpel. Therefore, a scalpel will be an object associated with this step. It is important that the scalpel is sterile, and so the state of the object associated with this step will be a “sterile” state, which may be determined from the image data. If the scalpel is identified at step 703 and if the state that is determined at step 901 is a “sterile” state, then at step 903 it will be determined that the current state of the scalpel matches the state associated with that step of the procedure. However, if the state is a different state, such as if it is determined that the scalpel is in a “used” state, then the current state of the scalpel will be determined to not match the state associated with that step at step 903. In this case, an alert is output to warn the surgeon that they are about to use a non-sterile scalpel.
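The state check of step 903 might be sketched as follows, using the sterile-scalpel example; the data structures and labels are hypothetical.

```python
# Hypothetical mapping of procedure steps to (object, required state) pairs.
STEP_REQUIREMENTS = {
    "incision": {"scalpel": "sterile"},
}

def check_object_state(identified_object, observed_state, current_step,
                       requirements=STEP_REQUIREMENTS):
    """Return an alert if the object's current state does not match the state
    associated with the current step, else None (the method ends)."""
    required = requirements.get(current_step, {}).get(identified_object)
    if required is not None and observed_state != required:
        return (f"Alert: '{identified_object}' is in state '{observed_state}' "
                f"but step '{current_step}' requires it to be '{required}'")
    return None

print(check_object_state("scalpel", "sterile", "incision"))  # None: states match
print(check_object_state("scalpel", "used", "incision"))     # alert string
```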
In the method of any of
Another aspect of the present disclosure described herein provides a method of monitoring objects within an operating room. Such a method is described with respect to
The method 1000 begins at step 1001, whereby first image data is received. The first image data is captured at a first time, by at least one image capture device. The image capture device and the image data may be of any type discussed herein.
At step 1003, an object is identified from the first image data. The object is identified using image recognition techniques.
At step 1005, second image data is received. The second image data is captured at a second time, being after the first time, by the at least one image capture device. That is, the second image data is image data captured after the first image data, and so corresponds to a later point in time.
Then, at step 1007, it is determined whether the object can be identified from the second image data. As with step 1003, this may be done using image recognition techniques. If the object can be identified then it is determined that the object is present at the second time, and the method ends at step 1009. However, if the object cannot be identified, then, at step 1011, it is determined that the object is not present, or at least not visible, within the operating room at the second time, and so at step 1013 an alert is output. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning.
The first time can be a start or beginning of a procedure, and the second time may be an end of a procedure. In this case, one particular use of the method 1000 is to ensure that no surgical tools are missing at the end of the procedure. This can be especially useful in ensuring that no surgical tools or other items are left inside a patient at the end of surgery, which can cause complications for the patient and may require further surgery to remove.
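A minimal sketch of comparing the objects identified at the first time with those identified at the second time; the object labels are illustrative only.

```python
def missing_objects(objects_at_start, objects_at_end):
    """Objects identified at the first time (start of the procedure) that can
    no longer be identified at the second time (end of the procedure)."""
    return set(objects_at_start) - set(objects_at_end)

start = {"scalpel", "swab_01", "swab_02", "trocar"}
end = {"scalpel", "swab_01", "trocar"}
for item in missing_objects(start, end):
    print(f"Alert: '{item}' could not be identified at the end of the procedure")
```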
In some cases, however, an object may be present in the operating room but not visible by at least one of the image capture devices due to an obstruction blocking the view of the image capture device. Such a situation can be overcome by methods 1100 and 1200.
Firstly, method 1100 is described with respect to
At step 1101, third image data is received. The third image data corresponds at least in part to image data from a different source than the second image data. That is, the second image data is captured by a first set of the at least one image capture devices whilst the third image data is captured by a second set of the at least one image capture devices, wherein the second set includes at least one image capture device not in the first set. For example, the second image data could be received from image capture device A in
The method then proceeds to step 1103, where it is determined whether the object can be identified in the third image data. If the object can be identified, it is determined that the object is present in the operating room at step 1105, whilst if the object cannot be identified it is determined that the object is not present in the operating room at step 1107. In this case, as with method 1000, an alert may be output at step 1013. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning.
Another method, which may be performed additionally with or alternatively to method 1100, is method 1200, described in
At step 1201, the first image capture device is moved to a second position. In particular, it is moved such that the field of view of the first image capture device is changed. That is, the field of view of the first image capture device after being moved at step 1201 is different to the field of view of the first image capture device prior to being moved at step 1201. Then, at step 1203, fourth image data is received. The fourth image data is captured by the first image capture device at the second position.
Once the fourth image data is received, at step 1205 it is determined whether the object can be identified from the fourth image data. If it can be, then at step 1207 it is determined that the object is present in the operating room. If it cannot be then it is determined at step 1209 that the object is not present in the operating room. In this case, an alert may be output at step 1013. This could be in the form of an audible and/or visual warning, such as a message on a screen, a warning light, and a verbal warning.
Reference to positions, directions (such as up and down), and velocities herein, unless otherwise specified, may be relative to a coordinate system of the operating room or to another object within the operating room. When using a coordinate system of the operating room, an object described as having a velocity of zero will be stationary with respect to the walls, floor and ceiling of the operating room. The origin of this coordinate system may be chosen as any convenient point, and so references to a "position" should be construed as a reference to a position in the operating room with respect to the walls, floor and ceiling of the operating room but should not be interpreted as limiting the origin against which the position is measured. An appropriate origin may be selected by the skilled person when implementing the approach described herein on a case by case basis. For example, the origin may be taken as a point on the floor of the operating room, or a central point of the operating room (which may be in a position other than on a floor or wall of the operating room, for example, in mid-air). In each case, unless otherwise specified, a position may be a three-dimensional position of an object within the operating room or a two-dimensional position of an object with respect to some surface in the operating room (e.g., a floor or an operating table) or with respect to the image data (e.g., a two-dimensional position in the image itself).
Throughout this specification, the identification of objects from image data may be performed using image recognition techniques known to the skilled person, unless otherwise specified. For example, convolutional operations to detect edges and textures are used to process the raw image into a set of masks and regions. Optionally, an atlas-based representation tree is applied to determine an object within the frame. This tree encodes dependence information about where an object "should" be with respect to another object, allowing the algorithm to better identify an object, for example with an increased confidence. For example, dependence information may link a patient with being on an operating table, or a surgical tool with being on a tray.
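By way of a hedged illustration only, dependence information of this kind might be used to raise the confidence of a detection found in the expected spatial relation to another object; the relation test, labels and boost value below are assumptions and do not represent a particular algorithm described herein.

```python
# Hypothetical dependence information: an object's confidence is boosted when
# it is detected in the expected spatial relation to its "parent" object.
DEPENDENCIES = {
    "patient": ("operating_table", "on_top_of"),
    "surgical_tool": ("instrument_tray", "on_top_of"),
}

def on_top_of(candidate_box, parent_box):
    """Rough test: the candidate overlaps the parent horizontally and does not
    extend below it. Boxes are (x_min, y_min, x_max, y_max) in image
    coordinates with y increasing downwards."""
    overlaps_x = candidate_box[0] < parent_box[2] and candidate_box[2] > parent_box[0]
    return overlaps_x and candidate_box[3] <= parent_box[3]

def refine_confidence(label, confidence, candidate_box, detections, boost=0.15):
    """Raise the confidence of a detection that matches its expected dependence."""
    dependence = DEPENDENCIES.get(label)
    if dependence:
        parent_label, relation = dependence
        parent_box = detections.get(parent_label)
        if parent_box is not None and relation == "on_top_of" and on_top_of(candidate_box, parent_box):
            confidence = min(1.0, confidence + boost)
    return confidence

detections = {"operating_table": (100, 300, 500, 420)}
print(refine_confidence("patient", 0.62, (180, 250, 420, 400), detections))  # boosted to about 0.77
```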
Such image recognition techniques can include the use of artificial intelligence techniques, such as machine learning and in particular deep learning and neural networks. Image recognition techniques can include various techniques for recognizing and identifying, classifying, detecting, and tagging objects from images. The image recognition may require the training of a neural network using a training set, which may comprise images of objects that are intended to be recognized. Such a neural network may be continuously updated and re-trained. It will be appreciated that the methods described herein may be performed using a wide range of appropriate algorithms and techniques known in the art, and the method is hence not limited in this regard.
The image data referred to herein, unless otherwise specified, may comprise image data in the form of a single image or video frame or may be a series of consecutive or non-consecutive images or video frames. These may be analyzed individually or separately, or they may be analyzed together. Furthermore, the methods herein, whilst described as a series of discrete steps, may be performed continuously on a continuous stream of image data, for example in the form of a video. That is, the methods may continuously analyze the video (e.g., they may analyze each video frame, or a subset of the video frames, such as every tenth frame) and perform the method continuously on the video (e.g., on each selected frame).
The identification of objects performed in the methods described herein may include, unless otherwise specified, the identification of whole objects, parts of an object, pieces of equipment, parts or regions of a surface such as a wall, floor or table, people, parts of people, surgical tools and objects, organs, amongst others. This is a non-exhaustive list.
The identification may be a classification of an object as a certain class of object (e.g., an object may be classified as a person, or as a class of person, such as a surgeon or a patient), or it may further include a specific identification of the object as a unique entity (e.g., the person may be identified as an individual person, such as Joe Blogs). In the case of equipment and objects, these may be identified based on a serial number or other identifying tag or feature (e.g., a visual marking, such as a QR code). In the case of people, these may be identified based on a biometric property such as a face or retina, or they may be identified based on a tag or mark associated with them, such as a marking (e.g., a QR code) on an item of clothing.
In some cases, it may be preferable to avoid the capture of personal or identifying data due to legal or regulatory requirements, and so as to preserve the privacy and confidentiality of an individual. In this case, certain portions of image data may be excluded from analysis, such as a face. This can ensure that personal information is not collected or processed. This may be a default case, but the option may be available for an individual to opt-in to personal identification. This can be advantageous so as to be able to link information captured during a procedure to the individual or to personalize settings of the system. For example, a surgeon may prefer a particular arrangement of image capture devices and so upon identifying the surgeon the system may automatically configure this arrangement of image capture devices. In another case, image data, such as a video, may be recorded and saved and linked with a patient for review by their doctor or other medical practitioner at a later date. However, in other cases, the image data collected and used in the methods herein may not be saved or stored, only being kept for as long as is required to perform the relevant processing upon it as required by the method. This can prevent personal data from being accidentally or deliberately lost or misused.
The methods described herein may be performed on a single computing device (on one or more processors), or across multiple computing devices in a distributed system. Such a distributed system may be connected by a local area network or a wide area network. The methods may also take advantage of cloud computing services to perform one or more steps of the method.
Aspects of the disclosure will now be described in the following numbered clauses, and dependent clauses:
Clause 1. A method of monitoring objects within an operating room, the method comprising:
receiving first image data captured by at least one image capture device;
determining at least a subset of the first image data, the at least a subset of the first image data relating to a region of interest within the operating room;
determining, in dependence on the first image data, if the region of interest within the operating room is at least partially obstructed within the subset of the first image data; wherein, upon determining that the region of interest is at least partially obstructed, the method further comprises:
receiving second image data captured by the at least one image capture device;
determining at least a subset of the second image data, the at least a subset of the second image data relating to the region of interest; and
outputting the at least a subset of the second image data, the region of interest being less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data.
Clause 2. The method of clause 1, wherein the method further comprises, upon determining that the region of interest is at least partially obstructed, the steps of:
comparing the at least a subset of the first image data and the at least a subset of the second image data to determine whether the region of interest is less obstructed in the at least a subset of the first image data or whether the region of interest is less obstructed in the at least a subset of the second image data;
wherein the step of outputting at least a subset of the second image data is performed if it is determined that the region of interest is less obstructed within the at least a subset of the second image data; and
wherein the method further comprises the step of outputting the at least a subset of the first image data if it is determined that the region of interest is less obstructed within the at least a subset of the first image data.
Clause 3. The method of clause 1 or 2, wherein the at least one image capture device comprises a first image capture device and wherein both the first image data and the second image data are captured by the first image capture device, the first image data being captured by the first image capture device located at a first position, the second image data being captured by the first image capture device located at a second, different, position, wherein the method further comprises moving the first image capture device iteratively from the first position to the second position until the region of interest is sufficiently less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data.
Clause 4. The method of clause 3, wherein the step of moving the first image capture device to the second position such that the region of interest is sufficiently less obstructed within the at least a subset of the second image data than within the at least a subset of the first image data comprises:
determining a volume occupied by the region of interest based on the first image data;
determining a three-dimensional position of an obstruction object based on the first image data;
determining a second position from which a field of view of the region of interest from the first image capture device will not be obstructed or will be obstructed to a lesser extent than a field of view of the region of interest from the first image capture device from the first position, based on the volume occupied by the region of interest and the three-dimensional position of the obstruction object; and
moving the first image capture device from the first position to the second position.
Clause 5. The method of clause 3 or 4, wherein moving the first image capture device comprises one or more of a translation, rotation, or pivoting of the first image capture device.
Clause 6. The method of clause 1 or 2, wherein the at least one image capture device comprises a first image capture device and a second image capture device, and wherein the first image data is captured by the first image capture device and the second image data is captured by the second image capture device.
Clause 7. The method of clause 6, wherein the method comprises:
determining a volume of the region of interest based on the first image data;
determining a three-dimensional position of an obstruction object based on the first image data;
determining whether image data of the region of interest from one or more of the at least two image capture devices other than the first image capture device will be less obstructed than the subset of the first image data relating to the region of interest from the first image capture device based on the volume of the region of interest, the three-dimensional position of the obstruction object, and a known position of the at least two image capture devices; and
upon determining that the image data of the region of interest from one or more of the at least two image capture devices other than the first image capture device will be less obstructed than the subset of the first image data relating to the region of interest from the first image capture device, determining that image capture device to be the second image capture device.
Clause 8. The method of clause 6 or 7, wherein the first image capture device and the second image capture device are located in different positions, such that they have different fields of view.
Clause 9. The method of any of clauses 6 to 8, wherein the first image capture device is of a first type and the second image capture device is of a second type, such that they produce different types of image data.
Clause 10. The method of clause 9, wherein the first image capture device is a visible wavelength camera and the second image capture device is an infrared wavelength camera.
Clause 11. The method of any of clauses 6 to 10, further comprising the step of capturing, by the second image capture device, the second image data.
Clause 12. The method of any preceding clause, further comprising the steps of:
combining the first image data and the second image data to generate composite image data, the composite image data relating to the region of interest, the region of interest being less obstructed in the composite image data than in either the first image data or the second image data; and
outputting the composite image data.
Clause 13. The method of any preceding clause, further comprising the step of capturing, by the first image capture device, the first image data.
Clause 14. The method of any preceding clause, wherein one or both of the region of interest and the obstruction object are identified using image recognition techniques.
Clause 15. The method of any preceding clause, wherein the image data comprises one or more of: visible spectrum data; thermal or infrared data; depth or time-of-flight data such as radar data, LIDAR data, and ultrasound data; and stereoscopic image data.
Clause 16. The method of any preceding clause, wherein the region of interest is not static with respect to the operating room.
Clause 17. The method of any preceding clause, wherein the region of interest corresponds to one or more of an object, a volume around an object, a person, and a portion of a person.
Clause 18. The method of any of clauses 1 to 15, wherein the region of interest is a static region with respect to the operating room.
Clause 19. The method of any preceding clause, applied to a plurality of regions of interest.
Clause 20. A system for monitoring objects within an operating room comprising:
a first image capture device configured to capture first image data; and
one or more computing devices configured to carry out the method of any of clauses 1 to 19.
Clause 21. The system of clause 20, wherein the system further comprises a second image capture device configured to capture second image data.
Clause 22. The system of clause 21, wherein the first image capture device and the second image capture device are located in different positions.
Clause 23. The system of clause 21 or 22, wherein the first image capture device is of a first type and the second image capture device is of a second type, the second type being different from the first type.
Clause 24. The system of any of clauses 20 to 23, wherein the first image capture device and the second image capture device are of a type selected from visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
Clause 25. A computer program which, when run on one or more computing device, is configured to cause the one or more computing devices to perform the method of any of clauses 1 to 19.
Clause 26. A non-transitory memory having stored thereon the computer program of clause 25.
Clause 27. A method of monitoring objects within an operating room, the method comprising:
receiving image data captured by at least one image capture device;
identifying a first object from the image data;
identifying a second object from the image data;
determining an interaction state between the first object and the second object; and
outputting the interaction state.
Clause 28. The method of clause 27, wherein the method further comprises:
determining a volume occupied by the first object based on the image data; and
determining a volume occupied by the second object based on the image data;
wherein the interaction state is determined based at least in part on the volume occupied by the first object and the volume occupied by the second object.
Clause 29. The method of clause 27 or 28, wherein the interaction state includes information as to whether the first object and the second object are compatible.
Clause 30. The method of clause 29, wherein it is determined that the first object and the second object are not compatible if the first object cannot fit within or around the second object based on the volume occupied by the first object and the volume occupied by the second object.
Clause 31. The method of any of clauses 27 to 30, wherein the method further comprises:
determining a velocity of the first object from the image data; and
determining a velocity of the second object from the image data;
wherein determining an interaction state between the first object and the second object comprises determining whether the first object and the second object will collide, based on the velocity of the first object, the velocity of the second object, the volume occupied by the first object, and the volume occupied by the second object.
Clause 32. The method of clause 31, wherein outputting the interaction state comprises outputting an alert if it is determined that the first object and the second object will collide.
Clause 33. The method of clause 31 or 32, wherein outputting the interaction state comprises outputting a command configured to adjust the velocity of either or both of the first object and the second object to prevent the first object and the second object from colliding.
Clause 34. The method of clause 33, wherein adjusting the velocity of either or both of the first object or the second object comprises causing either or both of the first object or the second object to stop moving.
Clause 35. The method of clause 33 or 34, wherein the interaction state is output a predetermined time before a collision will occur, based on the velocity of the first object, the velocity of the second object, the volume occupied by the first object, and the volume occupied by the second object.
Clause 36. The method of any of clauses 28 to 35, wherein the volume of the first object is one of: a parallelepiped, such as a cuboid; a sphere; and a cylinder.
Clause 37. The method of any of clauses 28 to 36, wherein the volume of the second object is one of: a parallelepiped, such as a cuboid; a sphere; and a cylinder.
Clause 38. The method of any of clauses 27 to 37, wherein the first object is identified using image recognition techniques.
Clause 39. The method of any of clauses 27 to 38, wherein the second object is identified using image recognition techniques.
Clause 40. The method of any of clauses 27 to 39, further comprising the step of capturing, by one or more image capture devices, the image data.
Clause 41. The method of any of clauses 27 to 40, wherein the image data comprises one or more of: visible spectrum data; thermal or infrared data; depth or time-of-flight data, such as radar data, LIDAR data, and ultrasound data; and stereoscopic image data.
Clause 42. A system for monitoring objects within an operating room comprising:
one or more image capture devices configured to capture image data; and
one or more computing devices configured to carry out the method of any of clauses 27 to 41.
Clause 43. The system of clause 42, wherein the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
Clause 44. A computer program which, when run on one or more computing device, is configured to cause the one or more computing devices to perform the method of any of clauses 27 to 41.
Clause 45. A non-transitory memory having stored thereon the computer program of clause 44.
Clause 46. A method of monitoring a patient on an operating room table, the method comprising:
receiving image data captured by at least one image capture device; and
determining a position of the patient relative to the operating table in dependence on the image data.
Clause 47. The method of clause 46, wherein determining a position of the patient relative to the operating table in dependence on the image data comprises:
determining a position of the patient from the image data;
determining a position of the operating table; and
determining the position of the patient relative to the operating table based on the position of the patient and the position of the operating table.
Clause 48. The method of clause 47, wherein determining the position of the operating table is based on the image data.
Clause 49. The method of clause 48, wherein the position of the operating table is determined using image recognition techniques.
Clause 50. The method of clause 47, wherein determining a position of the operating table comprises:
receiving position information of the operating table; and
determining the position of the operating table from the received position information.
Clause 51. The method of any of clauses 46 to 50, wherein determining a position of the patient relative to the operating table comprises:
determining a first position of the patient relative to the operating table; and
determining a second position of the patient relative to the operating table, the second position being determined at a time later than the first position;
wherein the method further comprises determining if the patient has moved on the operating table by comparing the first position and the second position.
Clause 52. The method of clause 51, wherein the method further comprises outputting an alert if it is determined that the patient has moved on the operating table.
Clause 53. The method of clause 52, wherein the alert is output if the patient has moved by at least a threshold amount on the operating table.
Clause 54. The method of clause 52 or 53, wherein the alert is output if the patient has moved out of a predefined region on the operating table.
Clause 55. The method of any of clauses 46 to 54, wherein determining a position of the patient on an operating table comprises determining a plurality of sub-positions of the patient based on the image data, the sub-positions of the patient being positions of parts of the patient's body.
Clause 56. The method of clause 55, wherein an alert is output if it is determined that the patient is not in an acceptable position and/or if it is determined that the patient is in an unacceptable position based upon the plurality of sub-positions of the patient.
Clause 57. The method of clause 56, wherein the alert is output if the patient is not in an acceptable position and/or if it is determined that the patient is in an unacceptable position for at least a threshold period of time.
Clause 58. The method of clause 57, wherein different threshold periods of time are set for different positions.
Clause 59. The method of any of clauses 55 to 58, wherein the parts of the patient's body corresponding to sub-positions of the patient's body may include one or more of: head; arms; hands; legs; feet; torso; and abdomen.
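The per-body-part monitoring of clauses 55 to 59 might be sketched as follows, assuming an external image-recognition stage supplies a per-part acceptability check. The part names, time thresholds and class name are illustrative assumptions.

```python
import time
from typing import Callable, Dict, List, Optional

# Per-part time thresholds (clause 58); values are illustrative only.
THRESHOLDS_S: Dict[str, float] = {"head": 30.0, "arms": 60.0, "legs": 60.0, "torso": 10.0}


class SubPositionMonitor:
    """Tracks how long each body part has been in an unacceptable position (clauses 56, 57)."""

    def __init__(self, is_acceptable: Dict[str, Callable[[], bool]]):
        self.is_acceptable = is_acceptable             # per-part checks fed by image recognition
        self.unacceptable_since: Dict[str, float] = {}

    def update(self, now: Optional[float] = None) -> List[str]:
        now = time.monotonic() if now is None else now
        alerts: List[str] = []
        for part, check in self.is_acceptable.items():
            if check():
                self.unacceptable_since.pop(part, None)   # back in an acceptable position
                continue
            start = self.unacceptable_since.setdefault(part, now)
            if now - start >= THRESHOLDS_S.get(part, 30.0):
                alerts.append(f"ALERT: {part} in an unacceptable position")
        return alerts


monitor = SubPositionMonitor({"head": lambda: True, "arms": lambda: False})
monitor.update(now=0.0)
print(monitor.update(now=61.0))   # ['ALERT: arms in an unacceptable position']
```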
Clause 60. The method of any of clauses 46 to 59, wherein determining a position of the patient on an operating table comprises determining the position of a patient underneath a sheet.
Clause 61. The method of clause 60, wherein the position of the patient underneath a sheet is determined at least in part from image data captured by a thermal imaging and/or infrared camera.
Clause 62. The method of clause 60 or 61, wherein determining the position of the patient underneath a sheet comprises:
determining a surface of the sheet above the operating table; and
determining the position of the patient underneath the sheet based on the surface.
Clause 63. The method of clause 62, wherein the surface of the sheet above the operating table is determined at least in part from image data captured using one or more of radar, LIDAR, ultrasound, and time-of-flight imaging.
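One possible, simplified reading of clauses 60 to 63 is sketched below, assuming the sheet surface has already been reduced to a grid of heights above the table plane (for example from depth or time-of-flight imaging). The grid format and the 0.10 m height threshold are assumptions for illustration only.

```python
from typing import List, Tuple


def patient_cells(surface_heights: List[List[float]],
                  min_height_m: float = 0.10) -> List[Tuple[int, int]]:
    """Return grid cells where the sheet is raised enough to suggest the patient beneath it."""
    return [
        (row, col)
        for row, line in enumerate(surface_heights)
        for col, height in enumerate(line)
        if height >= min_height_m
    ]


# Example 2x4 grid of sheet heights above the table surface, in metres.
print(patient_cells([[0.02, 0.15, 0.18, 0.03],
                     [0.01, 0.22, 0.20, 0.02]]))
```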
Clause 64. The method of any of clauses 46 to 63, wherein the patient position is identified using image recognition techniques.
Clause 65. The method of any of clauses 46 to 64, further comprising the step of capturing, by one or more image capture devices, the image data.
Clause 66. The method of any of clauses 46 to 65, wherein the image data comprises one or more of visible spectrum data, thermal or infrared data, depth or time-of-flight data, radar data, LIDAR data, ultrasound data, and stereoscopic image data.
Clause 67. A system for monitoring a patient on an operating room table comprising:
one or more image capture devices configured to capture image data; and
one or more computing devices configured to carry out the method of any of clauses 46 to 66.
Clause 68. The system of clause 67, wherein the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
Clause 69. A computer program which, when run on one or more computing devices, is configured to cause the one or more computing devices to perform the method of any of clauses 46 to 66.
Clause 70. A non-transitory memory having stored thereon the computer program of clause 69.
Clause 71. A method of monitoring objects within an operating room, the method comprising:
receiving image data captured by at least one image capture device;
identifying an object from the image data;
receiving procedure data defining a procedure being performed and comprising one or more steps to be performed, the one or more steps to be performed having one or more objects being associated with each step;
determining a current step to be performed;
determining whether the identified object is one of the one or more objects associated with the current step to be performed; and
outputting an alert if the identified object is not one of the one or more objects associated with the current step to be performed.
Clause 72. The method of clause 71, wherein the one or more objects associated with each step further have a state associated with each step, and wherein the method further comprises:
determining a current state of the object, based on the image data; and
outputting an alert if the current state of the object does not match the state associated with the current step to be performed.
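By way of illustration of clauses 71 and 72, the sketch below checks an identified object, and its current state, against the objects and states associated with the current procedure step. The `ProcedureStep` structure and the example step data are assumptions, not a prescribed format for the procedure data.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class ProcedureStep:
    name: str
    allowed_objects: Set[str]                                        # objects associated with this step
    expected_states: Dict[str, str] = field(default_factory=dict)    # object -> expected state


def check_object(step: ProcedureStep, obj: str, current_state: str) -> List[str]:
    """Return alerts for one identified object, per clauses 71 and 72."""
    alerts: List[str] = []
    if obj not in step.allowed_objects:
        alerts.append(f"ALERT: '{obj}' is not associated with step '{step.name}'")
    elif step.expected_states.get(obj, current_state) != current_state:
        alerts.append(f"ALERT: '{obj}' is '{current_state}', expected "
                      f"'{step.expected_states[obj]}'")
    return alerts


# Example: a scalpel appearing during a closing step that only expects suturing items.
closure = ProcedureStep("wound closure", {"suture kit", "needle holder"},
                        {"suture kit": "sterile"})
print(check_object(closure, "scalpel", "in use"))
print(check_object(closure, "suture kit", "used"))
```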
Clause 73. The method of clause 72, wherein the method further comprises determining a position of the object based on the image data; and wherein the current state of the object is further determined based on the position of the object.
Clause 74. The method of clause 72 or 73, wherein the state associated with the current step is an initial state, the initial state being preset for each of the one or more objects associated with each step.
Clause 75. The method of any of clauses 72 to 74, wherein the current state is one of: a “sterile” state; an “in use” state; a “used” state; an “open” state; a “closed” state; a “moving” state; a “stationary” state; an “idle” state; a “locked” state; an “unlocked” state; and a “paused” state.
Clause 76. The method of any of clauses 71 to 75, wherein the method further comprises:
determining a medical professional performing the procedure; and
determining whether the object is being used by the medical professional;
wherein the alert is output if the identified object is being used by the medical professional and if the identified object is not one of the one or more objects associated with the current step to be performed.
Clause 77. The method of any of clauses 71 to 76, wherein the object is identified using image recognition techniques.
Clause 78. The method of any of clauses 71 to 77, further comprising the step of capturing, by one or more image capture devices, the image data.
Clause 79. The method of any of clauses 71 to 78, wherein the image data comprises one or more of: visible spectrum data; thermal or infrared data; depth or time-of-flight data, such as radar data, LIDAR data, and ultrasound data; and stereoscopic image data.
Clause 80. A system for monitoring objects within an operating room comprising:
one or more image capture devices configured to capture image data; and
one or more computing devices configured to carry out the method of any of clauses 71 to 79.
Clause 81. The system of clause 80, wherein the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
Clause 82. A computer program which, when run on one or more computing devices, is configured to cause the one or more computing devices to perform the method of any of clauses 71 to 79.
Clause 83. A non-transitory memory having stored thereon the computer program of clause 82.
Clause 84. A method of monitoring objects within an operating room, the method comprising:
receiving first image data captured at a first time by at least one image capture device;
identifying an object from the first image data;
receiving second image data captured at a second time by at least one image capture device, the second time being a later time than the first time; and
determining whether the object identified from the first image data can be identified from the second image data.
Clause 85. The method of clause 84, wherein upon determining that the object cannot be identified from the second image data, the method further comprises determining that the object is not present within the operating room at the second time.
Clause 86. The method of clause 84 or 85, wherein the second image data is captured by a first set of the at least one image capture devices, and wherein the method further comprises, upon determining that the object cannot be identified from the second image data:
receiving third image data, the third image data captured by a second set of the at least one image capture devices, the second set of at least one image capture devices comprising at least one image capture device not in the first set of at least one image capture device;
determining whether the object can be identified from the third image data; and
upon determining that the object cannot be identified from the third image data, determining that the object is not present within the operating room at the second time.
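Clauses 84 to 86 might be illustrated by the following sketch, in which sets of recognised object labels stand in for the output of an image-recognition stage applied to the second and third image data. The set-based representation and the example data are assumptions for illustration only.

```python
from typing import Iterable, Set


def object_still_present(obj: str,
                         second_image_objects: Set[str],
                         extra_views: Iterable[Set[str]]) -> bool:
    """Clauses 84 to 86: look for the object in the second image data, then in further views."""
    if obj in second_image_objects:
        return True
    # Clause 86: fall back to third image data captured by a second set of cameras.
    return any(obj in view for view in extra_views)


first_objects = {"scalpel", "retractor", "sponge"}   # identified at the start of the procedure
second_objects = {"retractor"}                       # identified at the end of the procedure
extra_views = [{"sponge"}]                           # third image data from additional cameras

missing = sorted(o for o in first_objects
                 if not object_still_present(o, second_objects, extra_views))
if missing:
    print(f"ALERT: not identified at the second time: {missing}")   # cf. clauses 85 and 90
```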
Clause 87. The method of clause 86, further comprising the step of capturing, by the second set of the at least one image capture devices, the third image data.
Clause 88. The method of any of clauses 84 to 87, wherein the second image data is captured from a first position by a first image capture device of the at least one image capture devices, and wherein the method further comprises, upon determining that the object cannot be identified from the second image data:
moving the first image capture device to a second position, such that the field of view of the first image capture device at the second position is different to a field of view of the first image capture device at the first position;
receiving fourth image data, the fourth image data captured by the first image capture device at the second position;
determining whether the object can be identified from the fourth image data; and
upon determining that the object cannot be identified from the fourth image data, determining that the object is not present within the operating room at the second time.
Clause 89. The method of clause 88, further comprising the step of capturing, by the first image capture device, the fourth image data.
Clause 90. The method of any of clauses 85 to 89, further comprising the step of, when it is determined that the object is not present within the operating room at the second time, outputting an alert.
Clause 91. The method of any of clauses 84 to 90, wherein the first time is a beginning of a procedure and the second time is an end of a procedure.
Clause 92. The method of any of clauses 84 to 91, wherein the object is a surgical tool.
Clause 93. The method of any of clauses 84 to 92, wherein the object is identified using image recognition techniques.
Clause 94. The method of any of clauses 84 to 93, further comprising the step of capturing, by the at least one image capture device, the first image data.
Clause 95. The method of any of clauses 84 to 94, further comprising the step of capturing, by the at least one image capture device, the second image data.
Clause 96. The method of any of clauses 84 to 95, wherein the image data comprises one or more of visible spectrum data, thermal or infrared data, depth or time-of-flight data, radar data, LIDAR data, ultrasound data, and stereoscopic image data.
Clause 97. A system for monitoring objects within an operating room comprising: one or more image capture devices configured to capture image data; and one or more computing devices configured to carry out the method of any of clauses 84 to 96.
Clause 98. The system of clause 97, wherein the one or more image capture devices are one or more of visible spectrum cameras, thermal or infrared cameras, depth or time-of-flight cameras, radar, LIDAR, ultrasound devices, and stereoscopic imaging cameras.
Clause 99. A computer program which, when run on one or more computing devices, is configured to cause the one or more computing devices to perform the method of any of clauses 84 to 96.
Clause 100. A non-transitory memory having stored thereon the computer program of clause 99.
While the disclosure has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The disclosure is not limited to the disclosed embodiments. From reading the present disclosure, other modifications will be apparent to a person skilled in the art. Such modifications may involve other features, which are already known in the art and may be used instead of or in addition to features already described herein. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality.
Although this disclosure refers to specific embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the subject matter set forth in the accompanying claims.