With the increasing complexity and autonomy of smart devices, particularly in the medical field, interactions may need to be managed between multiple smart devices (and, e.g., legacy devices). Systems may operate in isolation or with limited collaboration, limiting their effectiveness and potentially leading to instability or predictability failures. Means for coordinating these systems may be static and may not adapt to changing circumstances or patient parameters, posing a potential challenge in providing patient care and monitoring.
In operating rooms, multiple surgical imaging devices may operate in close proximity to one another. In addition, the imaging devices may all be from different manufacturers and may have different control systems. The imaging devices may not be aware of the presence of other devices. Even if the devices are aware of other devices, the devices may not be able to communicate to coordinate their actions. This lack of coordination may limit the field of view of a user (e.g., surgeon). This limited visibility may cause the user to miss important events during surgery, such as an unintended bleed.
Synchronized imaging of two systems may be used to maintain a common field-of-view or perspective for both systems. The synchronized imaging may allow a user to seamlessly transition objects from one field of view to another. Synchronized visualization may involve synchronized motion of the cameras. Synchronized visualization may involve electronic and/or algorithmic field-of-view limiting and/or overlapping imaging to enable each camera to capture a larger field of view than originally possible. The two systems may produce a composite image by adapting the synchronized imaging arrays.
Multiple scopes may use synchronized motion to maintain a relational field-of-view. For example, independent imaging scopes may use coupled motion (e.g., the movement of one scope initiates movement of the second scope to maintain the coupled field-of-view of the two scopes). The coupled motion of the two scopes may be maintained while the scopes exist in separate anatomic spaces (e.g., on either side of a tissue barrier, such as an organ wall) but are focused on the same tissue in between the scopes. The coupled motion may also be used when the two scopes are in the same space focused on the same field of view. In this case, the two scopes may cooperatively maintain a field of view that is larger than either scope is capable of capturing independently. A composite image may be created to display the larger field of view to a user. Synchronized motion may be used to maintain the spacing between the scopes to maintain the overall field of view.
A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings.
The surgical system 20002 may be in communication with a remote server 20009 that may be part of a cloud computing system 20008. In an example, the surgical system 20002 may be in communication with a remote server 20009 via an internet service provider's cable/FIOS networking node. In an example, a patient sensing system may be in direct communication with a remote server 20009. The surgical system 20002 (and/or various sub-systems, smart surgical instruments, robots, sensing systems, and other computerized devices described herein) may collect data in real-time and transfer the data to cloud computers for data processing and manipulation. It will be appreciated that cloud computing may rely on sharing computing resources rather than having local servers or personal devices to handle software applications.
The surgical system 20002 and/or a component therein may communicate with the remote servers 20009 via a cellular transmission/reception point (TRP) or a base station using one or more of the following cellular protocols: GSM/GPRS/EDGE (2G), UMTS/HSPA (3G), long term evolution (LTE) or 4G, LTE-Advanced (LTE-A), new radio (NR) or 5G, and/or other wired or wireless communication protocols. Various examples of cloud-based analytics that are performed by the cloud computing system 20008, and are suitable for use with the present disclosure, are described in U.S. Patent Application Publication No. US 2019-0206569 A1 (U.S. patent application Ser. No. 16/209,403), titled METHOD OF CLOUD BASED DATA ANALYTICS FOR USE WITH THE HUB, filed Dec. 4, 2018, the disclosure of which is herein incorporated by reference in its entirety.
The surgical hub 20006 may have cooperative interactions with one or more means of displaying the image from the laparoscopic scope and information from one or more other smart devices and one or more sensing systems 20011. The surgical hub 20006 may interact with one or more sensing systems 20011, one or more smart devices, and multiple displays. The surgical hub 20006 may be configured to gather measurement data from the sensing system(s) and send notifications or control messages to the one or more sensing systems 20011. The surgical hub 20006 may send and/or receive information, including notification information, to and/or from the human interface system 20012. The human interface system 20012 may include one or more human interface devices (HIDs). The surgical hub 20006 may send and/or receive notification information, audio, display, and/or control information to and/or from various devices that are in communication with the surgical hub.
For example, the sensing systems may include the wearable sensing system 20011 (which may include one or more HCP sensing systems and/or one or more patient sensing systems) and/or the environmental sensing system 20015 shown in
The biomarkers measured by the sensing systems may include, but are not limited to, sleep, core body temperature, maximal oxygen consumption, physical activity, alcohol consumption, respiration rate, oxygen saturation, blood pressure, blood sugar, heart rate variability, blood potential of hydrogen, hydration state, heart rate, skin conductance, peripheral temperature, tissue perfusion pressure, coughing and sneezing, gastrointestinal motility, gastrointestinal tract imaging, respiratory tract bacteria, edema, mental aspects, sweat, circulating tumor cells, autonomic tone, circadian rhythm, and/or menstrual cycle.
The biomarkers may relate to physiologic systems, which may include, but are not limited to, behavior and psychology, cardiovascular system, renal system, skin system, nervous system, gastrointestinal system, respiratory system, endocrine system, immune system, tumor, musculoskeletal system, and/or reproductive system. Information from the biomarkers may be determined and/or used by the computer-implemented patient and surgical system 20000, for example, to improve said systems and/or to improve patient outcomes.
The sensing systems may send data to the surgical hub 20006. The sensing systems may use one or more of the following RF protocols for communicating with the surgical hub 20006: Bluetooth, Bluetooth Low-Energy (BLE), Bluetooth Smart, Zigbee, Z-wave, IPv6 Low-power wireless Personal Area Network (6LoWPAN), Wi-Fi.
The sensing systems, biomarkers, and physiological systems are described in more detail in U.S. application Ser. No. 17/156,287 (attorney docket number END9290USNP1), titled METHOD OF ADJUSTING A SURGICAL PARAMETER BASED ON BIOMARKER MEASUREMENTS, filed Jan. 22, 2021, the disclosure of which is herein incorporated by reference in its entirety.
The sensing systems described herein may be employed to assess physiological conditions of a surgeon operating on a patient or a patient being prepared for a surgical procedure or a patient recovering after a surgical procedure. The cloud-based computing system 20008 may be used to monitor biomarkers associated with a surgeon or a patient in real-time and to generate surgical plans based at least on measurement data gathered prior to a surgical procedure, provide control signals to the surgical instruments during a surgical procedure, and notify a patient of a complication during post-surgical period.
The cloud-based computing system 20008 may be used to analyze surgical data. Surgical data may be obtained via one or more intelligent instrument(s) 20014, wearable sensing system(s) 20011, environmental sensing system(s) 20015, robotic system(s) 20013, and/or the like in the surgical system 20002. Surgical data may include tissue states (e.g., to assess leaks or perfusion of sealed tissue after a tissue sealing and cutting procedure), pathology data (including images of samples of body tissue), anatomical structures of the body captured using a variety of sensors integrated with imaging devices and techniques (such as overlaying images captured by multiple imaging devices), image data, and/or the like. The surgical data may be analyzed to improve surgical procedure outcomes, for example by determining whether further treatment, such as the application of endoscopic intervention, emerging technologies, targeted radiation, targeted intervention, or precise robotics to tissue-specific sites and conditions, is warranted. Such data analysis may employ outcome analytics processing and, using standardized approaches, may provide beneficial feedback to either confirm surgical treatments and the behavior of the surgeon or suggest modifications to surgical treatments and the behavior of the surgeon.
As illustrated in
The surgical hub 20006 may be configured to route a diagnostic input or feedback entered by a non-sterile operator at the visualization tower 20026 to the primary display 20023 within the sterile field, where it can be viewed by a sterile operator at the operating table. In an example, the input can be in the form of a modification to the snapshot displayed on the non-sterile display 20027 or 20029, which can be routed to the primary display 20023 by the surgical hub 20006.
Referring to
As shown in
Other types of robotic systems can be readily adapted for use with the surgical system 20002. Various examples of robotic systems and surgical tools that are suitable for use with the present disclosure are described herein, as well as in U.S. Patent Application Publication No. US 2019-0201137 A1 (U.S. patent application Ser. No. 16/209,407), titled METHOD OF ROBOTIC HUB COMMUNICATION, DETECTION, AND CONTROL, filed Dec. 4, 2018, the disclosure of which is herein incorporated by reference in its entirety.
In various aspects, the imaging device 20030 may include at least one image sensor and one or more optical components. Suitable image sensors may include, but are not limited to, Charge-Coupled Device (CCD) sensors and Complementary Metal-Oxide Semiconductor (CMOS) sensors.
The optical components of the imaging device 20030 may include one or more illumination sources and/or one or more lenses. The one or more illumination sources may be directed to illuminate portions of the surgical field. The one or more image sensors may receive light reflected or refracted from the surgical field, including light reflected or refracted from tissue and/or surgical instruments.
The illumination source(s) may be configured to radiate electromagnetic energy in the visible spectrum as well as the invisible spectrum. The visible spectrum, sometimes referred to as the optical spectrum or luminous spectrum, is the portion of the electromagnetic spectrum that is visible to (e.g., can be detected by) the human eye and may be referred to as visible light or simply light. A typical human eye will respond to wavelengths in air that range from about 380 nm to about 750 nm.
The invisible spectrum (e.g., the non-luminous spectrum) is the portion of the electromagnetic spectrum that lies below and above the visible spectrum (i.e., wavelengths below about 380 nm and above about 750 nm). The invisible spectrum is not detectable by the human eye. Wavelengths greater than about 750 nm are longer than the red visible spectrum, and they become invisible infrared (IR), microwave, and radio electromagnetic radiation. Wavelengths less than about 380 nm are shorter than the violet spectrum, and they become invisible ultraviolet, x-ray, and gamma ray electromagnetic radiation.
In various aspects, the imaging device 20030 is configured for use in a minimally invasive procedure. Examples of imaging devices suitable for use with the present disclosure include, but are not limited to, an arthroscope, angioscope, bronchoscope, choledochoscope, colonoscope, cystoscope, duodenoscope, enteroscope, esophagogastro-duodenoscope (gastroscope), endoscope, laryngoscope, nasopharyngo-nephroscope, sigmoidoscope, thoracoscope, and ureteroscope.
The imaging device may employ multi-spectrum monitoring to discriminate topography and underlying structures. A multi-spectral image is one that captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range, e.g., IR and ultraviolet. Spectral imaging can allow extraction of additional information that the human eye fails to capture with its receptors for red, green, and blue. The use of multi-spectral imaging is described in greater detail under the heading “Advanced Imaging Acquisition Module” in U.S. Patent Application Publication No. US 2019-0200844 A1 (U.S. patent application Ser. No. 16/209,385), titled METHOD OF HUB COMMUNICATION, PROCESSING, STORAGE AND DISPLAY, filed Dec. 4, 2018, the disclosure of which is herein incorporated by reference in its entirety. Multi-spectrum monitoring can be a useful tool in relocating a surgical field after a surgical task is completed to perform one or more of the previously described tests on the treated tissue.

It is axiomatic that strict sterilization of the operating room and surgical equipment is required during any surgery. The strict hygiene and sterilization conditions required in a “surgical theater,” e.g., an operating or treatment room, necessitate the highest possible sterility of all medical devices and equipment. Part of that sterilization process is the need to sterilize anything that comes in contact with the patient or penetrates the sterile field, including the imaging device 20030 and its attachments and components. It will be appreciated that the sterile field may be considered a specified area, such as within a tray or on a sterile towel, that is considered free of microorganisms, or the sterile field may be considered an area, immediately around a patient, who has been prepared for a surgical procedure. The sterile field may include the scrubbed team members, who are properly attired, and all furniture and fixtures in the area.
Wearable sensing system 20011 illustrated in
The environmental sensing system(s) 20015 shown in
The surgical hub 20006 may use the surgeon biomarker measurement data associated with an HCP to adaptively control one or more surgical instruments 20031. For example, the surgical hub 20006 may send a control program to a surgical instrument 20031 to control its actuators to limit or compensate for fatigue and use of fine motor skills. The surgical hub 20006 may send the control program based on situational awareness and/or the context on importance or criticality of a task. The control program may instruct the instrument to alter operation to provide more control when control is needed.
The modular control may be coupled to a non-contact sensor module. The non-contact sensor module may measure the dimensions of the operating theater and generate a map of the surgical theater using ultrasonic, laser-type, and/or similar non-contact measurement devices. Other distance sensors can be employed to determine the bounds of an operating room. An ultrasound-based non-contact sensor module may scan the operating theater by transmitting a burst of ultrasound and receiving the echo when it bounces off the perimeter walls of the operating theater, as described under the heading “Surgical Hub Spatial Awareness Within an Operating Room” in U.S. Provisional Patent Application Ser. No. 62/611,341, titled INTERACTIVE SURGICAL PLATFORM, filed Dec. 28, 2017, which is herein incorporated by reference in its entirety. The sensor module may be configured to determine the size of the operating theater and to adjust Bluetooth-pairing distance limits. A laser-based non-contact sensor module may scan the operating theater by transmitting laser light pulses, receiving laser light pulses that bounce off the perimeter walls of the operating theater, and comparing the phase of the transmitted pulse to the received pulse to determine the size of the operating theater and to adjust Bluetooth pairing distance limits, for example.
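As an illustration of the phase-comparison principle just described, the following minimal Python sketch (with hypothetical function names and example values, not the hub's actual implementation) converts a measured phase shift of a modulated laser signal into a distance and derives a pairing-distance limit from the measured room dimensions:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def distance_from_phase(phase_shift_rad: float, modulation_hz: float) -> float:
    """Range from the phase delay of a modulated laser signal: d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_hz)

def pairing_distance_limit(room_dimensions_m, margin_m: float = 0.5) -> float:
    """Clamp the Bluetooth pairing range to the largest measured room dimension plus a margin."""
    return max(room_dimensions_m) + margin_m

# Example: a 10 MHz modulated beam returning with a 1.2 rad phase shift
width = distance_from_phase(1.2, 10e6)
depth = distance_from_phase(0.9, 10e6)
print(f"room ~ {width:.2f} m x {depth:.2f} m, "
      f"pairing limit {pairing_distance_limit([width, depth]):.2f} m")
```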
During a surgical procedure, energy application to tissue, for sealing and/or cutting, may be associated with smoke evacuation, suction of excess fluid, and/or irrigation of the tissue. Fluid, power, and/or data lines from different sources may be entangled during the surgical procedure. Valuable time can be lost addressing this issue during a surgical procedure. Detangling the lines may necessitate disconnecting the lines from their respective modules, which may require resetting the modules. The hub modular enclosure 20060 may offer a unified environment for managing the power, data, and fluid lines, which reduces the frequency of entanglement between such lines.
Energy may be applied to tissue at a surgical site. The surgical hub 20006 may include a hub enclosure 20060 and a combo generator module slidably receivable in a docking station of the hub enclosure 20060. The docking station may include data and power contacts. The combo generator module may include two or more of: an ultrasonic energy generator component, a bipolar RF energy generator component, or a monopolar RF energy generator component that are housed in a single unit. The combo generator module may include at least one energy delivery cable for connecting the combo generator module to a surgical instrument, at least one smoke evacuation component configured to evacuate smoke, fluid, and/or particulates generated by the application of therapeutic energy to the tissue, and a fluid line extending from the remote surgical site to the smoke evacuation component. The fluid line may be a first fluid line, and a second fluid line may extend from the remote surgical site to a suction and irrigation module 20055 slidably received in the hub enclosure 20060. The hub enclosure 20060 may include a fluid interface.
The combo generator module may generate multiple energy types for application to the tissue. One energy type may be more beneficial for cutting the tissue, while another different energy type may be more beneficial for sealing the tissue. For example, a bipolar generator can be used to seal the tissue while an ultrasonic generator can be used to cut the sealed tissue. Aspects of the present disclosure present a solution where a hub modular enclosure 20060 is configured to accommodate different generators and facilitate an interactive communication therebetween. The hub modular enclosure 20060 may enable the quick removal and/or replacement of various modules.
The modular surgical enclosure may include a first energy-generator module, configured to generate a first energy for application to the tissue, and a first docking station comprising a first docking port that includes first data and power contacts, wherein the first energy-generator module is slidably movable into an electrical engagement with the power and data contacts and wherein the first energy-generator module is slidably movable out of the electrical engagement with the first power and data contacts. The modular surgical enclosure may include a second energy-generator module configured to generate a second energy, different than the first energy, for application to the tissue, and a second docking station comprising a second docking port that includes second data and power contacts, wherein the second energy generator module is slidably movable into an electrical engagement with the power and data contacts, and wherein the second energy-generator module is slidably movable out of the electrical engagement with the second power and data contacts. In addition, the modular surgical enclosure also includes a communication bus between the first docking port and the second docking port, configured to facilitate communication between the first energy-generator module and the second energy-generator module.
Referring to
A surgical data network having a set of communication hubs may connect the sensing system(s), the modular devices located in one or more operating theaters of a healthcare facility, a patient recovery room, or a room in a healthcare facility specially equipped for surgical operations, to the cloud computing system 20008.
The surgical hub 5104 may be connected to various databases 5122 to retrieve therefrom data regarding the surgical procedure that is being performed or is to be performed. In one exemplification of the surgical system 5100, the databases 5122 may include an EMR database of a hospital. The data that may be received by the situational awareness system of the surgical hub 5104 from the databases 5122 may include, for example, start (or setup) time or operational information regarding the procedure (e.g., a segmentectomy in the upper right portion of the thoracic cavity). The surgical hub 5104 may derive contextual information regarding the surgical procedure from this data alone or from the combination of this data and data from other data sources 5126.
The surgical hub 5104 may be connected to (e.g., paired with) a variety of patient monitoring devices 5124. In an example of the surgical system 5100, the patient monitoring devices 5124 that can be paired with the surgical hub 5104 may include a pulse oximeter (SpO2 monitor) 5114, a BP monitor 5116, and an EKG monitor 5120. The perioperative data that is received by the situational awareness system of the surgical hub 5104 from the patient monitoring devices 5124 may include, for example, the patient's oxygen saturation, blood pressure, heart rate, and other physiological parameters. The contextual information that may be derived by the surgical hub 5104 from the perioperative data transmitted by the patient monitoring devices 5124 may include, for example, whether the patient is located in the operating theater or under anesthesia. The surgical hub 5104 may derive these inferences from data from the patient monitoring devices 5124 alone or in combination with data from other data sources 5126 (e.g., the ventilator 5118).
The surgical hub 5104 may be connected to (e.g., paired with) a variety of modular devices 5102. In one exemplification of the surgical system 5100, the modular devices 5102 that are paired with the surgical hub 5104 may include a smoke evacuator, a medical imaging device such as the imaging device 20030 shown in
The perioperative data received by the surgical hub 5104 from the medical imaging device may include, for example, whether the medical imaging device is activated and a video or image feed. The contextual information that is derived by the surgical hub 5104 from the perioperative data sent by the medical imaging device may include, for example, whether the procedure is a VATS procedure (based on whether the medical imaging device is activated or paired to the surgical hub 5104 at the beginning or during the course of the procedure). The image or video data from the medical imaging device (or the data stream representing the video for a digital medical imaging device) may be processed by a pattern recognition system or a machine learning system to recognize features (e.g., organs or tissue types) in the field of view (FOV) of the medical imaging device, for example. The contextual information that is derived by the surgical hub 5104 from the recognized features may include, for example, what type of surgical procedure (or step thereof) is being performed, what organ is being operated on, or what body cavity is being operated in.
The situational awareness system of the surgical hub 5104 may derive the contextual information from the data received from the data sources 5126 in a variety of different ways. For example, the situational awareness system can include a pattern recognition system, or machine learning system (e.g., an artificial neural network), that has been trained on training data to correlate various inputs (e.g., data from database(s) 5122, patient monitoring devices 5124, modular devices 5102, HCP monitoring devices 35510, and/or environment monitoring devices 35512) to corresponding contextual information regarding a surgical procedure. For example, a machine learning system may accurately derive contextual information regarding a surgical procedure from the provided inputs. In examples, the situational awareness system can include a lookup table storing pre-characterized contextual information regarding a surgical procedure in association with one or more inputs (or ranges of inputs) corresponding to the contextual information. In response to a query with one or more inputs, the lookup table can return the corresponding contextual information for the situational awareness system for controlling the modular devices 5102. In examples, the contextual information received by the situational awareness system of the surgical hub 5104 can be associated with a particular control adjustment or set of control adjustments for one or more modular devices 5102. In examples, the situational awareness system can include a machine learning system, lookup table, or other such system, which may generate or retrieve one or more control adjustments for one or more modular devices 5102 when provided the contextual information as input.
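As a rough illustration of the lookup-table variant described above, the following Python sketch (with hypothetical keys, device names, and adjustment values) maps a tuple of pre-characterized inputs to contextual information and associated control adjustments:

```python
# Hypothetical, simplified lookup-table mapping; keys, device names,
# and adjustments are illustrative only.
CONTEXT_TABLE = {
    ("insufflator_active", "thoracic_access"): {
        "context": "VATS procedure",
        "adjustments": {"smoke_evacuator": {"flow": "high"}},
    },
    ("insufflator_active", "abdominal_access"): {
        "context": "laparoscopic procedure",
        "adjustments": {"smoke_evacuator": {"flow": "medium"}},
    },
}

def derive_context(inputs: tuple):
    """Return pre-characterized contextual information and control adjustments, if known."""
    return CONTEXT_TABLE.get(inputs)

entry = derive_context(("insufflator_active", "abdominal_access"))
if entry:
    print(entry["context"], entry["adjustments"])
```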
For example, based on the data sources 5126, the situationally aware surgical hub 5104 may determine what type of tissue was being operated on. The situationally aware surgical hub 5104 can infer whether a surgical procedure being performed is a thoracic or an abdominal procedure, allowing the surgical hub 5104 to determine whether the tissue clamped by an end effector of the surgical stapling and cutting instrument is lung (for a thoracic procedure) or stomach (for an abdominal procedure) tissue. The situationally aware surgical hub 5104 may determine whether the surgical site is under pressure (by determining that the surgical procedure is utilizing insufflation) and determine the procedure type, for a consistent amount of smoke evacuation for both thoracic and abdominal procedures. Based on the data sources 5126, the situationally aware surgical hub 5104 could determine what step of the surgical procedure is being performed or will subsequently be performed.
The situationally aware surgical hub 5104 could determine what type of surgical procedure is being performed and customize the energy level according to the expected tissue profile for the surgical procedure. The situationally aware surgical hub 5104 may adjust the energy level for the ultrasonic surgical instrument or RF electrosurgical instrument throughout the course of a surgical procedure, rather than just on a procedure-by-procedure basis.
In examples, data can be drawn from additional data sources 5126 to improve the conclusions that the surgical hub 5104 draws from one data source 5126. The situationally aware surgical hub 5104 could augment data that it receives from the modular devices 5102 with contextual information that it has built up regarding the surgical procedure from other data sources 5126.
The situational awareness system of the surgical hub 5104 can consider the physiological measurement data to provide additional context in analyzing the visualization data. The additional context can be useful when the visualization data may be inconclusive or incomplete on its own.
The situationally aware surgical hub 5104 could determine whether the surgeon (or other HCP(s)) was making an error or otherwise deviating from the expected course of action during the course of a surgical procedure. For example, the surgical hub 5104 may determine the type of surgical procedure being performed, retrieve the corresponding list of steps or order of equipment usage (e.g., from a memory), and compare the steps being performed or the equipment being used during the course of the surgical procedure to the expected steps or equipment for the type of surgical procedure that the surgical hub 5104 determined is being performed. The surgical hub 5104 can provide an alert indicating that an unexpected action is being performed or an unexpected device is being utilized at the particular step in the surgical procedure.
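A minimal Python sketch of the deviation check described above, assuming hypothetical procedure names, step lists, and device sets rather than any clinical protocol:

```python
# Illustrative sketch of comparing observed steps/devices against an expected plan.
EXPECTED_STEPS = {
    "sleeve_gastrectomy": ["access", "mobilize", "staple", "leak_test", "close"],
}

def check_step(procedure: str, step_index: int, observed_step: str,
               observed_device: str, expected_devices: dict) -> list:
    """Return alert strings when the observed step or device deviates from the expected plan."""
    alerts = []
    expected = EXPECTED_STEPS.get(procedure, [])
    if step_index >= len(expected) or observed_step != expected[step_index]:
        alerts.append(f"Unexpected action '{observed_step}' at step {step_index}")
    if observed_device not in expected_devices.get(observed_step, set()):
        alerts.append(f"Unexpected device '{observed_device}' during '{observed_step}'")
    return alerts

print(check_step("sleeve_gastrectomy", 2, "staple", "ultrasonic_shears",
                 {"staple": {"stapler"}}))
```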
The surgical instruments (and other modular devices 5102) may be adjusted for the particular context of each surgical procedure (such as adjusting to different tissue types) and validating actions during a surgical procedure. Next steps, data, and display adjustments may be provided to surgical instruments (and other modular devices 5102) in the surgical theater according to the specific context of the procedure.
In operating rooms, multiple surgical imaging devices may operate in close proximity to one another. For example,
In an example, a first imaging device, such as an endoscope 54406, may include an imaging sensor and a processor. A second imaging device, such as either of laparoscopes 54402, 54404, may include a respective imaging sensor and processor. The first and second imaging devices may be manual surgical instruments (e.g., as illustrated in
A field of view may include a visual area that can be seen via an imaging device such as an endoscope 54406, and/or laparoscopes 54402, 54404. The field of view provided by the imaging devices may be dynamically adjustable. For example, the field of view may be adjusted by affecting changes in the device such as changes to the lens and/or optics, by changing the focal length of the optics, by affecting the zoom of the imaging device via physical optics and/or digitally, by affecting the effective sensor size (e.g., larger effective sensors generally have wider fields of view), by affecting the position of the imaging device (e.g., physically moving the device may change the portion of the observable space; movements may include pan, tilt, shift, dolly, and the like), image stitching (e.g., multiple images may be stitched together to create a panoramic view to increase field of view), adjusting aperture size (e.g., to influence depth of field and perceived field of view), and the like.
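As a worked illustration of how optics and digital cropping influence field of view, the following Python sketch applies the standard pinhole relation FOV = 2·atan(d/2f) with illustrative sensor and focal-length values (not parameters of any particular imaging device):

```python
import math

def field_of_view_deg(sensor_dim_mm: float, focal_length_mm: float,
                      digital_zoom: float = 1.0) -> float:
    """Angular field of view for one sensor axis: 2 * atan(d / (2 * f)).
    Digital zoom crops the effective sensor dimension, narrowing the field of view."""
    effective_dim = sensor_dim_mm / digital_zoom
    return math.degrees(2.0 * math.atan(effective_dim / (2.0 * focal_length_mm)))

# A wider sensor or shorter focal length widens the view; cropping (digital zoom) narrows it.
print(field_of_view_deg(4.8, 3.0))        # ~77 deg
print(field_of_view_deg(4.8, 6.0))        # ~44 deg (longer focal length)
print(field_of_view_deg(4.8, 3.0, 2.0))   # ~44 deg (2x digital crop)
```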
In an example, imaging systems, for example a first and a second imaging system, may coordinate their respective operation. For example, the first and second imaging systems may coordinate their respective operation regarding their respective fields of view. For example, synchronized imaging of two systems may be used to maintain a common field-of-view or perspective for a plurality of systems (e.g., to transition objects from one field of view to another). Cooperative multiple-scope synchronized motion may be used to maintain the relational fields-of-view. The motion of multiple (e.g., two) independent imaging scopes may be coupled. The coupled motion may involve the movement of one scope initiating movement of a second scope to maintain the coupled field-of-view of the two scopes.
In an example, the coordination of fields of view may be implemented via the adjustment of an imaging parameter, as disclosed herein. For example, the processor of the first imaging device may determine, based on the video data stream, that the second imaging device has moved and may adjust an imaging parameter of the first imaging device to maintain the coupled field of view. The imaging parameter may include any parameter that represents a quantity suitable for altering a field of view of an imaging device, such as an electronically controlled field of view (e.g., digitally controlled resolution and/or windowing), a position of the first imaging device, a focal length of the first imaging device, a portion of a field of view associated with the first imaging device that is displayed to a user, and the like.
In the surgical environment, coordinated operation of two or more imaging systems regarding fields of view may be employed when the imaging systems are viewing within a common anatomical space and/or when the imaging systems are viewing within separate anatomical spaces. An anatomical space may include a cavity and/or compartment within the body. An anatomical space may include a space that contains organs, tissues, or other structures. The anatomical space may include a space accessible by a particular instrument. For example, in laparoscopic surgery, a small incision is made in the abdomen to insert a laparoscope, which provides a view of the abdominal anatomical space (e.g., a laparoscopic anatomical space). For example, in endoscopic surgery, an endoscope may be inserted through a natural orifice, such as the mouth or anus, and guided to the surgical site. Spaces accessible in this way may include endoscopic anatomical spaces. In examples, the endoscope may be inserted through a small incision. In addition to surgical procedures, anatomical spaces may also be visualized using non-invasive medical imaging techniques such as X-rays and CT scans, for example.
In an example, with the two scopes in a common anatomical space (e.g., laparoscopes 54402, 54404 in a common laparoscopic space), coordination regarding field of view may include adjustments to a common observable area. In an example, the two scopes may cooperatively maintain a field-of-view that is larger than either scope is capable of capturing alone. The composite image may be the entire displayed field-of-view (e.g., captured by both scopes). The synchronized motion may be used to maintain the spacing between the scopes (e.g., to maintain the overall field-of-view).
In an example, with two scopes in different anatomical spaces (e.g., either of laparoscopes 54402, 54404 in a laparoscopic space and endoscope 54406 in an endoscopic space), coordination regarding field of view may include adjustments to each scope's respective side of a common anatomical barrier between the two anatomical spaces. For example, visualization of the common anatomical barrier may be performed from differing sides (e.g., the endoscopic and laparoscopic sides) of a tissue wall (e.g., an organ wall). Coordination of field of view may include adjusting the field of view to maintain viewing of opposite sides of the common anatomical barrier (e.g., maintaining respective views on both sides of the barrier in sync). For example, adjustments that affect the monitored position and/or orientation may be used to keep the two cameras focused on the same intermediate portion of a tissue wall. In an example, synchronization may be maintained through external imaging (e.g., cone-beam computerized tomography or electromagnetic sensing of another camera).
Cone-beam computerized tomography (CBCT) may include a medical imaging technique that uses a cone-shaped X-ray beam to produce 3D images of the patient's body. CBCT is similar to traditional computed tomography (CT) but may use a lower radiation dose and may provide higher resolution images of the area of interest. CBCT can also be used to guide surgical procedures.
For example, CBCT may be used to determine the position of a surgical instrument during a procedure, such as the position of one or more scopes disclosed herein. Here, the tracking system of the surgical instrument may be configured to determine the position/orientation of each scope head (e.g., each camera's location) based on the relative position of the respective scope head as observed by the CBCT. The position information may be provided to a processor, which can be programmed or configured to determine, with the camera intrinsics, a complete coordinate model for each scope within a common geometry. Such a model may be used for registration and/or iterative tracking of the scopes.
Electromagnetic (EM) sensing may be used to determine the position/orientation of the one or more scopes disclosed herein. Each scope may include an EM sensor system. The sensor may include one or more conductive coils that are subjected to an externally generated electromagnetic field. When subjected to the externally generated electromagnetic field, each coil of the EM sensor system may produce an induced electrical signal having characteristics that depend on the position and orientation of the coil relative to the externally generated electromagnetic field. In an example, the EM sensor system may be configured and positioned to measure six degrees of freedom (e.g., three position coordinates X, Y, Z and three orientation angles indicating pitch, yaw, and roll of a base point) or five degrees of freedom (e.g., three position coordinates X, Y, Z and two orientation angles indicating pitch and yaw of a base point). For example, the EM sensor system may include that disclosed in U.S. Pat. No. 6,380,732 (filed Aug. 11, 1999 and incorporated by reference herein). The position information may be provided to a processor, which can be programmed or configured to determine, with the camera intrinsics, a complete coordinate model for each scope within a common geometry. Such a model may be used for registration and/or iterative tracking of the scopes.
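A minimal numpy sketch, assuming a simple pinhole camera model and illustrative pose and intrinsic values, of how per-scope position/orientation from CBCT or EM sensing might be combined with camera intrinsics into a common coordinate model that projects a shared landmark into each scope's image:

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Rotation matrix from yaw (Z), pitch (Y), roll (X) angles in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def camera_model(position, yaw, pitch, roll, fx, fy, cx, cy):
    """Intrinsics K and extrinsics [R|t] that map a tracker-frame point into this scope's image."""
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
    R = rotation_from_ypr(yaw, pitch, roll)          # camera-to-tracker rotation
    t = -R.T @ np.asarray(position, dtype=float)     # tracker-to-camera translation
    return K, np.hstack([R.T, t.reshape(3, 1)])

def project(K, Rt, world_point):
    """Project a 3-D point (tracker frame) to pixel coordinates."""
    p = Rt @ np.append(world_point, 1.0)
    return (K @ p)[:2] / p[2]

# Two scopes registered to the same EM/CBCT frame observe the same tissue landmark.
K1, Rt1 = camera_model([0, 0, 0], 0.0, 0.0, 0.0, 800, 800, 320, 240)
K2, Rt2 = camera_model([50, 0, 0], np.deg2rad(-20), 0.0, 0.0, 800, 800, 320, 240)
landmark = np.array([10.0, 5.0, 100.0])
print(project(K1, Rt1, landmark), project(K2, Rt2, landmark))
```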
Field of view coordination may include the incorporation of certain visualizations into one and/or both of the video streams. Visualizations may include any operation on the visual information of the video stream of one imaging device based on information from the other imaging device. Here, coordination of the field of view includes the ability to operationally augment the field of view from one device with information from the other. For example, with imaging devices in different anatomical spaces, a visualization may include an operation to visually subtract, or make transparent, parts of the anatomical barrier (e.g., tissue wall) separating them. This may allow a user to section the common view and see depth, underlying structures, or instruments on the other side of the tissue wall.
In other examples, a multi-scope system may use visualization cooperative movement and/or an extended viewable display. Synchronized visualization may involve electronic and/or algorithmic field-of-view limiting and overlapping imaging to enable each camera to capture images to produce a composite image (e.g., by adapting the synchronized imaging arrays). The synchronization may employ misaligned (e.g., slightly misaligned) CMOS arrays (e.g., that image differing wavelengths of energy). Such an overlap mode of operation may enable the overlay of multi-spectral imaging over visible light imaging on a common display (e.g., with the different imaging originating from side-by-side CMOS arrays).
The navigating imaging scope 54410 and the tracking imaging scope 54408 may exchange an initial handshake protocol to establish the cooperative operation. The handshake protocol may include any data protocol suitable for device and capability discovery. For example, the handshake protocol may include a Universal Plug and Play (UPnP) protocol. Via UPnP, the navigating imaging scope 54410 and the tracking imaging scope 54408 may advertise their presence in the network and discover each other. The protocol may include sending, responsive to discovery, a description of available cooperative services. The description may be an XML-based (Extensible Markup Language-based) description. The description may include identifying information about each device, such as manufacturer, model name, model number, serial number, and the like. The description may include capability-specific information including service type, service ID, service description URL, control URL, eventing URL, and the like. The service description may include information about the device's capability to act in a tracking and/or navigating capacity. The service description may include information regarding a destination for communicating video data stream information for purposes of cooperative field of view operation.
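The following Python sketch illustrates the kind of XML service description such a handshake might exchange; the element names, service type, and URLs are hypothetical and are not a standardized UPnP schema:

```python
# Hedged sketch of an XML service description for cooperative-imaging discovery;
# all element names and values are illustrative placeholders.
import xml.etree.ElementTree as ET

def build_description(model_name, serial, role, stream_url):
    root = ET.Element("device")
    ET.SubElement(root, "manufacturer").text = "ExampleScopeCo"
    ET.SubElement(root, "modelName").text = model_name
    ET.SubElement(root, "serialNumber").text = serial
    service = ET.SubElement(root, "service")
    ET.SubElement(service, "serviceType").text = "cooperative-imaging:1"
    ET.SubElement(service, "serviceId").text = f"{role}-fov-sync"
    ET.SubElement(service, "role").text = role              # "navigating" or "tracking"
    ET.SubElement(service, "controlURL").text = f"{stream_url}/control"
    ET.SubElement(service, "eventingURL").text = f"{stream_url}/events"
    ET.SubElement(service, "videoStreamURL").text = stream_url
    return ET.tostring(root, encoding="unicode")

print(build_description("FlexEndo-2", "SN-0042", "navigating", "rtsp://198.51.100.7/stream0"))
```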
The navigating imaging scope 54410 may communicate the video data stream information to the tracking imaging scope 54408. The tracking imaging scope 54408 may receive the video data stream from the navigating imaging scope 54410.
Having video information from the navigating imaging scope 54410, the tracking imaging scope 54408 may establish a common coordinate system. To establish a common coordinate system, the system may use any computer vision, photogrammetry, and/or robotics-control techniques suitable for synthesizing information from multiple cameras. In an example, the system may use a computer vision algorithm to develop a common coordinate system. The tracking imaging scope 54408 may receive intrinsic parameter information from the navigating imaging scope 54410, for example, parameters such as focal length, principal point, lens distortion, and the like. The tracking imaging scope 54408 may receive extrinsic parameter information from the navigating imaging scope 54410, for example, parameters such as relative positions and/or orientations. In an example, one or more features or keypoints may be identified in both images. For example, the object of interest 54414, when in the field of view of both scopes, may be selected as such a feature.
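As an illustration of keypoint-based registration between the two views, the following OpenCV sketch detects and matches features in two synthetic frames that stand in for the two video streams and estimates a shared mapping between the image planes; it is an assumption-laden example, not the scopes' actual algorithm:

```python
import cv2
import numpy as np

# Synthetic frames stand in for the two scopes' video streams.
frame_a = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(frame_a, (200, 150), (320, 260), 255, -1)   # stand-in "object of interest"
cv2.rectangle(frame_a, (80, 60), (150, 120), 150, -1)
cv2.rectangle(frame_a, (400, 340), (560, 420), 220, -1)
cv2.circle(frame_a, (420, 120), 40, 180, -1)
frame_b = np.roll(frame_a, shift=(30, -50), axis=(0, 1))  # second scope sees a shifted view

orb = cv2.ORB_create(nfeatures=500)
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Matched keypoints give point correspondences from which a shared mapping
# (here, a homography relating the two image planes) can be estimated.
src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{len(matches)} matches; estimated mapping:\n{H}")
```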
The video data stream information may indicate coordinate points associated with the object of interest 54414. For example, as shown in
For example, as shown in
In an example, the tracking imaging scope 54408 may adjust an imaging parameter that includes an electronically controlled field of view, for example a digital window of its field of view, to establish a coupled field of view with the navigating imaging scope 54410. As illustrated in
In an example, the tracking imaging scope 54408 may adjust an imaging parameter, for example the position of the camera of the tracking imaging scope 54408, to establish a coupled field of view with the navigating imaging scope 54410. Again, the tracking imaging scope 54408 may determine the imaging parameter to be a vector represented by the delta between the coordinate points identified in the object registration. The vector may be used as a control input to be applied to a surgical robot control of the tracking imaging scope 54408. The change in position of the camera may be proportional to the vector such that, in the resulting field of view, the objects of the object registration are aligned.
In an example, the tracking imaging scope 54408 may adjust more than one imaging parameter, for example the position of the camera of the tracking imaging scope 54408 and a digital window of its field of view, to establish a coupled field of view with the navigating imaging scope 54410. For example, the adjustment may apportion the vector represented by the delta between the coordinate points identified in the object registration across the more than one imaging parameter. The proportional adjustment of each may result in a field of view in which the objects of the object registration are aligned.
In an example, adjustment of the imaging parameter(s) may be done in unit steps, iteratively, gradient-based, or the like. In an example, the adjustment of the imaging parameter(s) may incorporate modeled aspects of the tracking imaging scope's performance, such as a projective camera model. Here, a projective camera model with camera intrinsics and inverse pose may be solved to determine a world line corresponding to the image point of the tracking imaging scope's object identified in the object registration. Then, a new set of intrinsics and/or inverse pose may be solved for using the same world line and the target image point represented by the delta between the objects identified in the object registration.
The tracking imaging scope 54408 may compare subsequent video images to determine a difference between the coordinate points of the two scopes (e.g., in this example, Δ=(75, −100)) after the movement of the navigating imaging scope 54410. The tracking imaging scope 54408 may adjust one or more of its imaging parameter(s), as described herein. Adjusting the imaging parameter(s) will change the coordinate point that the secondary imaging device associates with the object of interest. The tracking imaging scope may then iteratively compare the coordinate points of the two scopes and adjust imaging parameter(s) as needed until the fields of view are synchronized. The tracking imaging scope 54408 may determine that the coupled field of view has been maintained on the condition that the field of view of the tracking imaging scope 54408 and the video data stream are aligned.
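A minimal Python sketch of this iterative comparison-and-adjustment loop, assuming a simplified model in which shifting a digital-window offset moves the tracked object's apparent coordinate with it; the gain, tolerance, and coordinate values (mirroring the Δ=(75, −100) example above) are illustrative:

```python
def synchronize_window(tracked_point, navigating_point, window_offset,
                       gain=0.5, tolerance=2.0, max_iterations=50):
    """Iteratively shift a digital-window offset so the two coordinate points align."""
    tx, ty = tracked_point
    for _ in range(max_iterations):
        dx = navigating_point[0] - tx
        dy = navigating_point[1] - ty
        if abs(dx) <= tolerance and abs(dy) <= tolerance:
            return window_offset, True            # coupled field of view maintained
        # Simplified assumption: shifting the window by a fraction of the delta
        # moves the object's apparent coordinate by the same amount.
        window_offset = (window_offset[0] + gain * dx, window_offset[1] + gain * dy)
        tx, ty = tx + gain * dx, ty + gain * dy
    return window_offset, False

offset, aligned = synchronize_window(tracked_point=(410, 380),
                                     navigating_point=(485, 280),
                                     window_offset=(0, 0))
print(offset, aligned)   # converges toward a (75, -100) shift, mirroring the delta above
```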
In an example, the navigating imaging scope 54410 may communicate information indicative of its movement to the tracking imaging scope 54408. For example, a navigating imaging scope 54410 driven by a surgical robot control may provide the control inputs given by the surgeon to the surgical robot to the tracking imaging scope 54408 via a data channel (e.g., embedded in the video data stream and/or a channel separate from the video data stream). The tracking imaging scope 54408 may use the control inputs when determining the motion seen in the video data stream. In an example, other positioning data, such as surgical x-ray, cone-beam CT, EM sensing, and the like, may be employed to determine motion.
In an example, other aspects of scope operation may be coordinated to enhance performance and/or usability. Operational aspects such as lighting intensity, wavelength, display features, and the like may be coordinated via communication between the scopes. For example, the motion of each scope and/or the direction of the light from the light source of each individual scope may be used in cooperation (e.g., to enhance the view of each scope). Synchronizing the motion of the scopes on the surgical site and the light source of one scope may be used to enhance the view from the other scope. A laparoscopic scope may limit the luminescence or the amount of light the scope generates. The light source may be directed from behind the camera of the scope. During a procedure utilizing two scopes, the light sources of each camera may be synchronized to benefit the other. The light source of a scope may obscure the lighting back into the field of view (e.g., based on the surgical location and/or other instruments). In this case, the first scope may turn off its light source, and the second scope may direct its light source to optimize the view. Each scope may have unique wavelengths and/or colors (e.g., to further optimize the feedback to the user).
Regarding display aspects, in an example, camera views may be altered (e.g., by establishing the same coordinate system for the two scopes, as described herein). The coordinate system synchronization may allow a surgeon to easily toggle back and forth between the two scopes, while still recognizing what is shown in both views. A user may be able to swap between primary and secondary live feeds. For example, there may be a solid part or organ within the surgeon's field of view. The system may allow the surgeon to switch to another field of view associated with a different scope.
In addition, scope coordination may enhance surgical operation through enhanced calibration. For example, calibration may be enabled via multi-system registration of a common object. A redundant imaging system may be used to view a common object between two systems (e.g., one of which is being affected by navigation distortion) to aid in the compensation for calibration. For example, the laparoscopic and endoscopic imaging systems may be coupled together for cooperative motion. The cooperative motion may be used to enable one of the two visualization systems that is highly impacted by distortion (e.g., an electromagnetic navigation system of the robotic flexible endoscopic system) to track common landmarks or easily identified structures (e.g., to help correct for the distortions in its navigation system without the need for frequent radio transmissive recalibration). The relational field-of-view position may indicate a global position of a tumor. The system may use the global position of the tumor to augment the view of the tumor into the laparoscopic field of view. As the colon (or another hollow organ, e.g., stomach, bladder, etc.) moves, the image recognition from the laparoscopic side (e.g., through TOF, structured light, or optical shape sensing) and proximity of the endoscope may be used to dynamically move the augmented tumor view.
In an example, an illumination-providing scope or light source may be moved in advance of the imaging scope (e.g., to complement the illumination of the imaging scope). In this predictive operating mode, the light source may be moved first (e.g., to remove shadows and improve visualization of recessed areas that the light from the primary scope has not yet illuminated). Using
In an example, a visualization may be generated in the video output of one of the scopes based on information from the other scope. For example, the visualization may include information from a first imaging device that, when incorporated into the video output of the second imaging device, makes a portion of the viewed anatomical barrier appear transparent. To illustrate, video of the object of interest 54422 taken from the endoscopic scope 54418 may be overlaid on the video from the laparoscopic scope 54420 to reveal an image of the object of interest 54422 in the laparoscopic view. As modified, a portion of the tissue barrier 54424 in the video may appear to be transparent. Such video operation may include a selective subtractive image modification operation.
Examples of procedures in which the techniques described with respect to
In an example, subtractive image modification may be used to fill in the shapes and colors of the part of the image that was occluded. Subtractive image modification may thereby enable user visibility and visualization of occluded areas.
Cooperative operation among imaging devices may enable the presentation of the image(s) modified with color. For example, an area made transparent (e.g., by subtractive image modification) may be shaded with a color. The area that is visible due to the transparency may be shaded with color. The image may be composed (or decomposed) into the relative areas and related colors. For example, a primary image may be blue and a secondary image may be red. The primary and secondary images may be composed into the blind spot/occluded image (e.g., which may be purple). The color may be distorted (e.g., using software) on each camera to be shaded with a different tint (e.g., red or blue to make it more obvious when a composite image is seen).
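An illustrative numpy sketch of such color compositing, in which stand-in primary and secondary frames are tinted blue and red and blended within an occluded region so the composite appears purple; the array sizes, tints, and region are arbitrary choices:

```python
import numpy as np

h, w = 240, 320
primary = np.full((h, w, 3), 200, dtype=np.float32)     # stand-in primary frame (RGB)
secondary = np.full((h, w, 3), 200, dtype=np.float32)   # stand-in secondary frame (RGB)

blue_tint = np.array([0.3, 0.3, 1.0])
red_tint = np.array([1.0, 0.3, 0.3])
primary_tinted = primary * blue_tint
secondary_tinted = secondary * red_tint

occluded = np.zeros((h, w), dtype=bool)
occluded[80:160, 100:220] = True                        # region hidden from the primary scope

composite = primary_tinted.copy()
composite[occluded] = 0.5 * primary_tinted[occluded] + 0.5 * secondary_tinted[occluded]
composite = composite.astype(np.uint8)
print(composite[120, 160], composite[10, 10])           # purple-ish blend vs. blue-tinted primary
```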
In an example, a selective view of certain segments may be used. The ability to make the anatomy transparent may be enabled/disabled (e.g., by a user). The system may allow the surgeon to draw shapes around the areas they would like to be made transparent (e.g., using a tool such as a tablet).
Synchronization of multi-sourced imaging data (e.g., currently active vision systems or prior scans) may be used to generate or create a virtually-rendered field of view. This may enable a field of view larger than that possible using a single vision system or overlapping vision systems. Vision systems may coordinate and alternate performing scans of the surrounding area (e.g., to determine whether to update the virtually rendered field of view). For example, a first vision system, vision system A, may be the currently active vision system. A second vision system, vision system B, may run a scan of the surrounding area. Vision system B may update the combined field of view (e.g., if necessary). If vision system B's area of view is currently active, vision system A may perform the sweeping scan to check for any field of view updates.
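A hedged Python sketch of this alternation, with hypothetical vision-system objects and an arbitrary change threshold standing in for real scan data:

```python
def alternate_scans(systems, frames, combined_view=None, change_threshold=0.08):
    """Alternate the active system each frame; the idle system scans and may update the view."""
    active = 0
    for _ in range(frames):
        idle = 1 - active
        scan = systems[idle].sweep_scan()                     # idle system scans surroundings
        if combined_view is None or scan.difference(combined_view) > change_threshold:
            combined_view = scan                              # refresh the rendered view
        active = idle                                         # swap roles for the next frame
    return combined_view

class _FakeScan:
    def __init__(self, value): self.value = value
    def difference(self, other): return abs(self.value - other.value)

class _FakeVisionSystem:
    def __init__(self, start): self.value = start
    def sweep_scan(self):
        self.value += 0.05                                    # simulated drift in the scene
        return _FakeScan(self.value)

view = alternate_scans([_FakeVisionSystem(0.0), _FakeVisionSystem(0.0)], frames=6)
print(view.value)
```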
In an example, data input from multiple sources may be used to enable multi-directional modality projection. Multi-directional projection of energy modalities (e.g., light spectrum, gamma, x-ray, microwave) may be used. Each source may include a respective imaging sensor that senses light of a corresponding band. A video data stream captured by one scope may then represent light from a first band, and a video stream captured by another scope may represent light of a second band that is different from the first band. The light from differing scopes and/or projectors may be received. A penetrating or backlit cooperative scope wavelength motion and detection system may introduce wavelengths, intensities, and/or combined wavelengths (e.g., for which the first scope may not have the bandwidth or capabilities). In the case of high intensity coherent light sources, the fiber optics of the shaft may conduct the light from a source outside of the body. As the size of the shafts gets smaller, the fiber bundles may get smaller. This may cause the fiber bundles to be overloaded with the magnitude of the energy being conducted (e.g., often melting or degrading the fiber optics). These externally-projected lights may be introduced with a larger system (e.g., from the lap side with a 12 mm shaft), for example, as opposed to the sending system (e.g., a 2 mm CMOS array on the flexible scope side). This may provide illumination to sub-surface structures (e.g., because the light is projected through the tissue rather than being projected onto the tissue). The larger system may allow the projection source to be stronger (e.g., orders of magnitude stronger) than the 2 mm fiber optics could provide. The projection source may be in addition to the local light imaging of the endoscopic scope (e.g., thereby minimizing the extra-spectral loss of the magnitude of the visible light spectrum). The projection source light may include wavelengths and/or energy sources that are not compatible with the flexible scope system (e.g., wavelengths exciting a reagent, for example, ICG or tumor tagging fluorophores, with wavelengths that cause excitation of the natural molecular nature of the tissue, radiation, microwave, etc.).
The movement of one or more visualization system(s) and/or smart device(s) may be synchronized so that the visualization system(s) are paired to the device. The movement of a device may initiate movement of one or more cameras (e.g., to maintain a common field of view). Vision systems may track a smart device to maintain field of view (e.g., using a sub-spectral or infrared tag). Vision systems may not (e.g., may not need to) keep a device centered in its field of view (e.g., if AE or critical area of tissue gains precedence, in which case the AE or critical area of tissue may be the center of the field of view).
The reception of the video data stream may be responsive to a handshake protocol. For example, the first imaging device may communicate a handshake protocol with the second imaging device to establish cooperative operation.
At 54442, a coupled field of view may be determined, based on the video data stream. In an example, the coupled field of view may be determined by performing an object registration on an object and synchronizing a first field of view associated with the first imaging device with a second field of view associated with the second imaging device based on the object registration. Here, the object may be visible in the first field of view and in the second field of view. In an example, the coupled field of view may be determined by receiving relative field of view positioning information from cone-beam computerized tomography imaging that comprises a view of the first imaging device and the second imaging device. In an example, the coupled field of view may be determined by receiving relative field of view positioning information from electromagnetic sensing of the second imaging device.
In an example, a visualization associated with the anatomical barrier in the video data stream may be generated based on information from an imaging sensor of the first imaging device. For example, the visualization may include information from the imaging sensor of the first imaging device that makes a portion of the anatomical barrier appear transparent.
At 54444, it may be determined, based on the video data stream, that the second imaging device has moved. And at 54446, an imaging parameter of the first imaging device may be adjusted to maintain the coupled field of view. In an example, the imaging parameter may include an electronically controlled field of view. In an example, the imaging parameter may include any of a position of the first imaging device, a focal length of the first imaging device, or a portion of a field of view associated with the first imaging device that is displayed to a user. The imaging parameter may be adjusted to maintain the coupled field of view by iteratively comparing a first field of view associated with the first imaging device to the video data stream and adjusting the imaging parameter based on the comparison. It may be determined that the coupled field of view is maintained based on the condition that the first field of view and the video data stream are aligned with respect to a registered object.
This application claims the benefit of the following, the disclosures of which are incorporated herein by reference in their entirety: Provisional U.S. Patent Application No. 63/602,040, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/602,028, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/601,998, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/602,003, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/602,006, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/602,011, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/602,013, filed Nov. 22, 2023; Provisional U.S. Patent Application No. 63/602,037, filed Nov. 22, 2023; and Provisional U.S. Patent Application No. 63/602,007, filed Nov. 22, 2023. This application is related to the following, filed contemporaneously, the contents of each of which are incorporated by reference herein: U.S. patent application Ser. No. 18/809,890, filed Aug. 20, 2024, and U.S. patent application Ser. No. 18/810,133, filed Aug. 20, 2024.