Current monocular optical devices (e.g., endoscope, bronchoscope, colonoscope) used for viewing surgical fields during minimally invasive surgery (e.g., laparoscopy) and visual diagnostic procedures (e.g., colonoscopy, bronchoscopy) provide limited reference information on the absolute position of surgical tools and anatomical features because the image provides no depth perception. Binocular (also known as stereoscopic) optical devices provide limited depth perception, affording the surgeon visual information on the distance between items within the optical device's field of view. The accuracy of this distance information is limited by the amount of parallax provided by the optical paths, which is determined by the distance between the optical paths and the amount of overlap between the two optical paths.
This disclosure is directed to a method of imaging within a body of a patient. The method includes capturing IR light reflected off of an anatomical object with at least two IR cameras, with the at least two IR cameras positioned in spaced relation relative to a scanner projecting the IR light. The method further includes generating a plurality of IR light images, where each IR light image of the plurality of IR light images is generated based on IR light captured by a respective IR camera of the at least two IR cameras. Additionally, the method includes determining a parallax of each IR camera with respect to the scanner, associating each IR light image of the plurality of IR light images with each other based on the determined parallax to create an integrated IR light image, associating the integrated IR light image with an optical light image captured by an optical light camera, generating an intra-operative 3D image based on the association of the integrated IR light image with the optical light image, and displaying the generated intra-operative 3D image on a display.
Associating each IR light image of the plurality of IR light images with each other may include generating a 3D point cloud. In an embodiment, the at least two IR cameras include three IR cameras arrayed in the body of the patient, and capturing IR light reflected off the anatomical object includes capturing IR light with the three IR cameras. Each IR camera may have a different field of view. The different fields of view may overlap.
In an embodiment, a first IR camera of the at least two IR cameras is coupled to a first surgical instrument and a second IR camera of the at least two IR cameras is coupled to a second surgical instrument, and capturing IR light reflected off of the anatomical object includes capturing IR light reflected off of the anatomical object within a first field of view and a second field of view. The first field of view is associated with the first IR camera and the second field of view is associated with the second IR camera. The method may further include overlapping the first and second fields of view.
In an embodiment, the method further includes projecting IR light with a second scanner onto the anatomical object within the patient at a different frequency than that of the IR light projected by the scanner, at interleaved timing therewith, or combinations thereof.
Additionally, or alternatively, generating the intra-operative 3D image is effectuated in real-time and updated as new optical light images are captured by the optical light camera.
According to another aspect of the disclosure, a system for imaging within a body of a patient includes: an optical camera; a surgical device having a scanner configured to transmit infrared (IR) light within the body of the patient; a first surgical instrument having a first IR camera; a second surgical instrument having a second IR camera; and a computing device. The first and second surgical instruments are separate from the surgical device such that the first and second IR cameras are disposed in spaced relationship with each other and the scanner. The computing device is in communication with the scanner and the first and second IR cameras. The computing device has a processor and a memory storing instructions thereon which, when executed on the processor, cause the system to: determine parallax between the first and second IR cameras with respect to each other and the scanner; generate an integrated IR image based on the determined parallax, a first IR image captured by the first IR camera, and a second IR image captured by the second IR camera; associate the integrated IR image with an optical image captured by the optical camera; and generate an intra-operative 3D image based on the association of the integrated IR image with the optical image.
In some embodiments, the surgical device may be an endoscope. The endoscope may include an optical light transmitter configured to project optical light toward one or more anatomical objects. The memory may further store instructions thereon which, when executed by the processor, cause the system to display the generated intra-operative 3D image.
In embodiments, the system may include a third surgical instrument having a third IR camera. The first, second and third surgical instruments may be positioned such that each IR camera maintains a different field of view within the body of the patient.
In certain embodiments, the scanner is a first scanner and the system may further include a second scanner that is configured to transmit IR light at a different frequency than that of the first scanner. Additionally, or alternatively, the memory may further store instructions thereon which, when executed by the processor, cause the system to transmit IR light from the first scanner at a first frequency while the second scanner transmits IR light at a second frequency that is different from the first frequency, and/or to transmit IR light from the first scanner and the second scanner at interleaved timing with respect to one another.
According to yet another aspect of the disclosure, a non-transitory computer readable storage medium stores a program which, when executed by a computer, causes the computer to: generate IR light images of IR light projected by a scanner and reflected off of an anatomical object, where each of the IR light images is based on IR light captured by a respective IR camera of a plurality of IR cameras; determine parallax between the plurality of IR cameras and the scanner; associate the IR light images to create an integrated IR light image based upon the determined parallax; and associate the integrated IR light image with an optical light image of the anatomical object to generate a real-time, intra-operative 3D image of the anatomical object. The program, when executed by the computer, may cause the computer to generate the real-time, intra-operative 3D image and/or display the generated real-time, intra-operative 3D image.
In embodiments, the program, when executed by the computer, causes the computer to generate a 3D point cloud based on the integrated IR light image. In embodiments, the point cloud can be rendered into a volumetric shape such as a mesh or other 3D construct. In some embodiments, segmented CT objects can be overlaid into the surgical field of view on anatomical objects. Additionally, or alternatively, the program, when executed by the computer, causes the computer to warp the optical light image onto the integrated IR image.
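By way of a non-limiting illustration of rendering the point cloud into a mesh, the sketch below (Python/NumPy; the synthetic depth values and grid dimensions are assumptions made for the example) shows one simple way an organized, grid-structured point cloud could be converted into a triangle mesh by exploiting its grid connectivity. It is not the particular meshing approach mandated by the disclosure.

```python
"""Convert an organized (grid-structured) point cloud into a triangle mesh
using the grid connectivity (illustrative sketch; the depth values below are
a synthetic stand-in for reconstructed surface points).
"""
import numpy as np

# Synthetic organized point cloud: an H x W grid of 3D points (a gentle bump).
H, W = 4, 5
xs, ys = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
zs = 100.0 - 0.5 * ((xs - 2.0) ** 2 + (ys - 1.5) ** 2)
points = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)   # vertex list, row-major

# Two triangles per grid cell, indexing into the vertex list above.
faces = []
for r in range(H - 1):
    for c in range(W - 1):
        i = r * W + c
        faces.append([i, i + 1, i + W])            # upper-left triangle
        faces.append([i + 1, i + W + 1, i + W])    # lower-right triangle
faces = np.array(faces)

print(points.shape, faces.shape)   # (20, 3) vertices, (24, 3) triangle faces
```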
Other aspects, features, and advantages will be apparent from the description, the drawings, and the claims that follow.
Various aspects and features of the disclosure are described hereinbelow with reference to the drawings.
Aspects of the disclosed systems, devices and methods are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views. As commonly known, the term “clinician” refers to a doctor, a nurse, or any other care provider and may include support personnel. Additionally, the term “proximal” refers to the portion of a structure that is closer to the clinician and the term “distal” refers to the portion of a structure that is farther from the clinician. In addition, the term “cephalad” is used to indicate a direction toward a patient's head, whereas the term “caudad” indicates a direction toward the patient's feet. Further still, the term “medial” indicates a direction toward the middle of the body of the patient and the term “lateral” indicates a direction toward a side of the body of the patient (e.g., away from the middle of the body of the patient). The term “posterior” indicates a direction toward the patient's back, and the term “anterior” indicates a direction toward the patient's front. The phrases “in an embodiment,” “in embodiments,” or “in other embodiments” may each refer to one or more of the same or different embodiments in accordance with the disclosure. In addition, the terms “surgical devices,” “surgical tools,” “surgical instruments,” and “surgical instrumentation” may be used interchangeably throughout the disclosure. Further, while reference may be made to elements in the singular, such distinction is intended only to simplify the description and is not intended to limit the subject matter of the disclosure. The term “target” used herein refers to tissue (either soft or hard) or areas/regions in the body of a patient that are designated as being of interest for diagnosis or for therapeutic deliveries. Similarly, the term “anatomical feature,” or its variants, refers to organs, tissue, vessels, or other discrete portions of the body of a patient. In addition, directional terms such as front, rear, upper, lower, top, bottom, and the like are used simply for convenience of description and are not intended to limit this disclosure.
In the following description, well-known functions or constructions are not described in detail to avoid obscuring the disclosure in unnecessary detail.
This disclosure is directed to systems and methods for creating and displaying a 3D endoscopic image from multiple cameras using optic and/or scanned images.
One method to create a 3D map of a surface is to use a scanner that draws a pattern across the surface while a camera captures the distortion of that pattern in the resulting image. The distortion in the captured image is used to extract depth information and create a 3D map. This method uses a fixed projector that creates the scan pattern and a single dedicated camera to perform the image capture. The resultant information forms a point cloud of data points positioned in 3D space (e.g., x, y, z coordinates). This configuration of scanner and imager constrains how the system can be constructed. The accuracy of the positional information derived from such a system depends on the parallax, namely, the distance and angle between the scanner field of view (FOV) and that of the camera(s), where larger distances and angles produce more accurate results.
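By way of a non-limiting illustration of this parallax dependence, the following sketch (Python/NumPy, with a simplified two-dimensional pinhole geometry; the baseline, target, and error values are hypothetical) triangulates a point from a projector ray and a camera ray and shows that the same small angular error produces a much larger depth error when the scanner-camera baseline is small.

```python
"""Illustrative 2-D triangulation: depth error versus scanner-camera baseline.

A sketch under simplified assumptions (coplanar rays, ideal angle measurements);
the baseline, target, and error values are hypothetical.
"""
import numpy as np


def triangulate(baseline, angle_projector, angle_camera):
    """Intersect a projector ray from (0, 0) with a camera ray from (baseline, 0).

    Angles are measured from the baseline (x-axis) in radians; returns the
    (x, z) intersection point.
    """
    d1 = np.array([np.cos(angle_projector), np.sin(angle_projector)])
    d2 = np.array([np.cos(angle_camera), np.sin(angle_camera)])
    # Solve t*d1 - s*d2 = (baseline, 0) for the ray parameters t and s.
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), np.array([baseline, 0.0]))
    return t * d1


point = np.array([40.0, 100.0])      # hypothetical target (mm): 40 lateral, 100 deep
angle_error = np.deg2rad(0.1)        # simulated angular measurement error

for baseline in (5.0, 50.0):         # narrow versus wide scanner-camera separation
    a_proj = np.arctan2(point[1], point[0])
    a_cam = np.arctan2(point[1], point[0] - baseline)
    estimate = triangulate(baseline, a_proj, a_cam + angle_error)
    print(f"baseline {baseline:5.1f} mm -> depth error {abs(estimate[1] - point[1]):.2f} mm")
```

For the hypothetical values above, the same 0.1-degree error yields a depth error roughly an order of magnitude larger at the narrow baseline than at the wide one, which is the geometric motivation for separating the camera from the scanner as described below.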
For a medical endoscope, the amount of parallax that can be achieved is limited by the physical properties of the endoscope, which must be linear in design to allow ease of insertion into trocars or body cavities. One implementation places a camera at the distal end of the scope and the scanner some distance back from the distal tip, projecting out the side of the endoscope body. By designing the FOV optics of the scanner and camera, overlap can be managed. This creates a gradient of parallax in which there is little parallax toward the distal tip but a significant amount away from the distal tip.
In order to address the issues described above, according to aspects of this disclosure, the camera is spatially removed, or otherwise separate, from the endoscope body. Instead of being associated with the endoscope, the imager is appended to one or more instruments or other objects that would have a view of a scan target. More particularly, the described systems and methods to image objects or targets in an in vivo scene utilize one or more scanners that project onto a target in vivo, while cameras are arrayed in vivo in different positions relative to one another and the scanner, thereby capturing different aspects (e.g., different perspectives) of the same scene. The data acquired is processed by the disclosed system to generate an intra-operative 3D image. An exemplary implementation would be to attach a camera to each laparoscopic instrument and trocar to provide multiple imagers simultaneously observing the scan target where each camera provides scene information.
This approach has multiple advantages over the traditional self-contained endoscopic design, which includes both the scanner and camera on the same device, and therefore has limited parallax. Placing a camera on each surgical instrument improves the ability to get wide parallax between the camera and scanner due to the natural separation between the instrument(s) and the endoscope. It should, of course, be understood that an imaging camera may still be provided on the endoscope at the distal tip adjacent to the scanner to support traditional monocular vision when no camera-supported instrument is in use.
Providing more than one camera-equipped instrument with a FOV overlapping the scanner's FOV allows multiple views from multiple angles to be collected simultaneously. Viewing the IR light output by the scanner from multiple angles via overlapping instrument camera FOVs increases the accuracy of the generated 3D point cloud and decreases the number of blind spots in which an intervening structure blocks a specific camera's view of a portion of the scanner's FOV.
This can be extended to replacing some cameras with scanners or placing a scanner/camera pair on each instrument. This allows scanning and imaging from multiple directions simultaneously, speeding the ability to fully capture a scene in 3D with a decrease in the number of areas left unscanned due to intervening structures or temporary changes in surgical environment (e.g., smoke from ablation) interfering with a sole scanner or camera FOV. Because the level of detail increases as the scanner approaches a structure, a default zoom mode becomes available in the specific area covered by the instrument's FOV. In order to avoid interference between scanners, each could use a different frequency of light or interleaved timing.
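As a non-limiting illustration of the interleaved-timing option, the short sketch below (Python; the frame period and scanner labels are hypothetical) assigns each scanner an exclusive time slot so that any IR light captured during a given frame can be attributed to a single scanner. In practice such synchronization would be driven by the imaging hardware's trigger signals.

```python
"""Time-division interleaving of multiple IR scanners (illustrative only).

Assumes hypothetical scanner identifiers and a fixed frame period.
"""
from itertools import cycle


def interleave_schedule(scanner_ids, frame_period_ms, n_frames):
    """Yield (time_ms, active_scanner) pairs, one scanner active per frame."""
    active = cycle(scanner_ids)
    for frame in range(n_frames):
        yield frame * frame_period_ms, next(active)


for t, scanner in interleave_schedule(["scanner_A", "scanner_B"],
                                      frame_period_ms=33, n_frames=6):
    print(f"t = {t:3d} ms: fire {scanner}, all cameras capture")
```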
Memory 12a may include any non-transitory computer-readable storage media for storing data and/or software (instructions) executable by processor 12b and which controls the operation of computing device 12 and/or various components of the system, when in communication with the components (e.g., with the optical cameras, light sources, scanners, IR cameras, etc.). In embodiments, memory 12a may include one or more solid-state storage devices such as flash memory chips. Alternatively, or in addition to the one or more solid-state storage devices, memory 12a may include one or more mass storage devices connected to processor 12b through a mass storage controller (not shown) and a communications bus (not shown). Although the description of computer-readable media contained herein refers to a solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by processor 12b. That is, computer readable storage media includes non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 12.
Network interface 12d may be configured to connect to a network such as a local area network (LAN) including a wired network and/or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the Internet. Input device 12e may be any device through which a user may interact with computing device 12, such as, for example, a mouse, keyboard, foot pedal, touch screen, and/or voice interface. Output module 12f may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial busses (USB), or any other similar connectivity port.
Any of the disclosed scanners, such as scanner 22s, includes a structured light (or laser) scanner. The structured light scanner may employ infrared light so as to avoid interference from visible light sources, although it is contemplated that the structured light scanner may emit light in the visible spectrum, or any other wavelength or frequency band, depending upon the tissue being scanned during the procedure. In embodiments, light may be provided in a range of the visible or IR spectrum. For instance, in the visible spectrum a frequency band may be the entire visible spectrum (e.g., white light) or a specific color frequency (e.g., green). The structured light source is selectively positionable in one or more positions, which may be predetermined, relative to one or more cameras (e.g., IR cameras 14i, 16i, 18i, and/or optical cameras 16c, 18c) of the disclosed systems (e.g., systems 10, 10′). The structured light source enables the calculation of the exact location of the intersection between the light ray from the structured light source and the one or more cameras of the system. This information can be scanned as single points, lines, or arrays to create topologic maps of surfaces. In embodiments, the structured light source is a light emitting diode (LED) or LED infrared laser that is dispersed into a scan pattern (e.g., line, mesh, dots, etc.) by a rotating mirror, beam splitter, diffraction grating, and/or panning. In one embodiment, the structured light source may be an LED laser having collimated light. The laser scanner enables visualization systems to achieve the accurate surface maps of an anatomical object, such as the lung, needed to match preoperative computed images to the operative image generated by one or more cameras of the disclosed systems. In embodiments, a clinician may enter commands or control a structured light pattern projected from any of the disclosed scanners using any suitable user input device (e.g., touchscreen, mouse, keyboard, or the like).
The IR light may also be projected in a predetermined pattern (e.g., a grid or shaped pattern) and/or may be projected toward a target such as a tissue surface, which may include the target, surrounding tissue, or other tissue within the body of the patient “P,” surgical objects, and the like. The IR light may be configured to strike the target and/or surrounding tissue. One or more of the beams may be projected at varying distances from one another to increase or decrease the precision of each IR image. For example, in embodiments, the IR light may form one or more patterns such as preselected geometric images (e.g., stripes, random or structured placements of dots). Based on the desired level of accuracy, the patterns may be varied in complexity, having a greater number of angles, features positioned closer to one another, etc. Patterns may also be selected to optimize later analysis of the IR light once captured.
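By way of a non-limiting illustration, the following sketch (Python/NumPy; the angular field of view and spacing values are assumptions for the example) parameterizes a dot-grid pattern whose density can be increased or decreased to trade scan time against precision.

```python
"""Generate a rectangular dot-grid scan pattern (illustrative sketch).

Tighter spacing yields more samples per unit area and therefore finer depth
detail, at the cost of more points to project and process.
"""
import numpy as np


def dot_grid(width_deg, height_deg, spacing_deg):
    """Return an (N, 2) array of projection angles (azimuth, elevation) in degrees."""
    az = np.arange(-width_deg / 2, width_deg / 2 + spacing_deg, spacing_deg)
    el = np.arange(-height_deg / 2, height_deg / 2 + spacing_deg, spacing_deg)
    aa, ee = np.meshgrid(az, el)
    return np.column_stack([aa.ravel(), ee.ravel()])


coarse = dot_grid(40.0, 30.0, spacing_deg=2.0)   # fewer dots, faster scan
fine = dot_grid(40.0, 30.0, spacing_deg=0.5)     # denser dots, higher precision
print(len(coarse), "coarse dots;", len(fine), "fine dots")
```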
Any of the disclosed optical cameras may be visual-light optical cameras, such as a charge-coupled device (CCD), complementary metal-oxide-semiconductor (CMOS), N-type metal-oxide-semiconductor (NMOS), or other suitable camera known in the art. In embodiments, an optical camera may be a CCD camera having a resolution of 1080p. In some embodiments, any of the disclosed systems may include a digital filter (not shown) or a filter having a narrow band optical grating (not shown) that inhibits extraneous light (e.g., visible) from distracting the clinician during a surgical procedure. In some embodiments, visible light is filtered from the image captured by one or more of the disclosed optical cameras and transmitted to the clinician such that any captured image is clear and free from extraneous light patterns. The optical light transmitters may be LEDs that emit white light, although any suitable light emitting device may be utilized. In some embodiments, the optical light transmitters may include RGB LEDs to provide the ability to generate an essentially unlimited range of colors across the visible light spectrum. In some aspects of the disclosure, the optical light transmitters are configured to fade between and/or discretely switch between various subsets of the visible spectrum. In certain embodiments, the optical light transmitters may provide RGB, IR, UV, or combinations thereof (e.g., RGB and IR combination LEDs, RGB and UV combination LEDs, and/or IR and UV combination LEDs).
Any of the disclosed IR cameras may be CCD cameras capable of detecting IR light (for example, as reflected), although it is contemplated that the IR cameras may have a sufficiently wide optical capture spectrum to detect visible light, such as visible green light or the like, depending upon the tissue being scanned. Specifically, visible green light contrasts with tissue having a red or pinkish hue, enabling the cameras to more easily identify the topography of the tissue. Likewise, visible blue light that is absorbed by hemoglobin may enable the system to detect vascular structures along with a vascular topology to act as additional reference points to be matched when aligning captured images.
It is contemplated that any of the disclosed cameras may be any thermographic camera known in the art, such as a ferroelectric, silicon microbolometer, or uncooled focal plane array (UFPA) camera, or may be any other suitable visible light camera such as a charge-coupled device (CCD), complementary metal-oxide-semiconductor (CMOS), N-type metal-oxide-semiconductor (NMOS), or other suitable camera where the light emitted from any of the disclosed scanners is in the visible or detectable spectrum.
In embodiments, any of the disclosed cameras, scanners, or transmitters may include one or more transparent protective covers (not shown) capable of inhibiting fluids or other contaminants from coming into contact with the cameras, scanners, or transmitters. In some embodiments, any of the disclosed cameras, scanners, or transmitters may include one or more coatings or layers with hydrophobic properties, such as silicones or HMDSO plasma depositions. In certain embodiments, the covers may include raised geometry that sheds or washes away body fluids ejected onto the cover.
In operation, a patient “P” is prepared for the procedure and one or more trocars are placed to provide access to the surgical site. For instance, a distal portion of an endoscope can be advanced through one trocar and a grasper can be advanced through another trocar so that the endoscope and the grasper can be positioned in or adjacent to a surgical site such as the thoracic cavity of the patient “P.”
To enable the clinician to observe images on a display 12c, the disclosed systems carry out the imaging method described below.
In step 100, to image a target such as an anatomical object within a body of a patient, such as the lung “L,” infrared (IR) light (e.g., laser light) is projected (e.g., as a plurality of IR light beams or a large beam of IR light covering an entire viewing window) from one or more scanners of one of the disclosed systems onto the target (e.g., an anatomical object) within the patient. The beams may be projected in any configuration or pattern (e.g., grid and/or other shape) and may be projected at any number of distances, which may be one or more predetermined distances relative to the scanner and/or other beams.
While the systems and methods described herein may refer to the use of IR light to determine distance and optical light to capture optical images for subsequent analysis and display, the use of IR light and optical light may be interchanged, for instance to create light patterns and/or to illuminate portions or an entirety of a surgical space. Both the IR light and the optical light may be received by multiple sensors, to enable stereoscopic imaging of the reflected light by the respective sensors configured to capture the light.
In step 110, two or more IR cameras capture IR light reflected off the anatomical object (not shown). The two or more IR cameras are positioned separate from the one or more scanners, for example, on different surgical instruments than surgical instruments supporting the one or more scanners. The two or more IR cameras are disposed in spaced relation with respect to the one or more scanners and may be arrayed about the anatomical object such that each IR camera has a different field of view with respect to the other IR cameras. The fields of view may be at least partially overlapping. In certain aspects, three or more separate IR cameras may be provided.
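As a non-limiting illustration of the spaced-apart, overlapping-FOV arrangement, the sketch below (Python/NumPy; the camera poses, FOV half-angle, and target coordinates are hypothetical) checks whether a given target point lies within the viewing cone of each IR camera.

```python
"""Check which spaced-apart IR cameras can see a given target point.

Each camera is modeled as a simple viewing cone (position, view direction,
half-angle); the poses below are hypothetical examples.
"""
import numpy as np


def in_fov(target, cam_pos, cam_dir, half_angle_deg):
    """True if the target lies within the camera's viewing cone."""
    to_target = target - cam_pos
    to_target = to_target / np.linalg.norm(to_target)
    cos_angle = np.dot(to_target, cam_dir / np.linalg.norm(cam_dir))
    return cos_angle >= np.cos(np.deg2rad(half_angle_deg))


target = np.array([0.0, 0.0, 100.0])    # point on the anatomy (mm)
cameras = {
    "instrument_1_cam": (np.array([-60.0, 0.0, 0.0]), np.array([0.5, 0.0, 1.0])),
    "instrument_2_cam": (np.array([60.0, 20.0, 0.0]), np.array([-0.5, -0.2, 1.0])),
}
visible = [name for name, (pos, direction) in cameras.items()
           if in_fov(target, pos, direction, half_angle_deg=35.0)]
print("target visible to:", visible)   # overlap exists if two or more cameras see it
```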
The IR light may be received as a plurality of points at varying distances from one another.
In step 120, for each IR camera, an IR light image is generated, for instance by the computing device 12, based on the captured IR light of the respective IR camera.
In step 130, the parallax of each IR camera relative to the other IR cameras and/or the scanner(s) (e.g., distance and angle) can be determined, for example by computing device 12 and/or any number or type of position sensors (not shown) or tracking software coupled to the respective IR cameras (or the instruments carrying them), based on the position of the respective IR cameras with respect to the one or more scanners and each other (see, for example, the different parallax values “P1,” “P2,” and “P3” with respect to a point “Z” on the lung “L”). For instance, the position sensors may include electromagnetic sensors. In some embodiments, the various sensors may be separate and distinct components with associated hardware and/or software or may be part of a commercial platform such as the Intel® RealSense™ technology system developed by Intel. Alternatively, or additionally, other external imaging modalities (e.g., MRI, fluoroscopy), RFID tracking, or the like may also be used.
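By way of a non-limiting illustration, the sketch below (Python/NumPy; the scanner, camera, and point “Z” coordinates are hypothetical stand-ins for tracked positions) computes the two parallax quantities mentioned above, baseline distance and viewing angle, for each IR camera relative to the scanner.

```python
"""Compute baseline distance and viewing angle between each IR camera and the
scanner with respect to a common reference point (illustrative sketch).

Positions would come from position sensors or instrument tracking; the
numbers below are hypothetical stand-ins.
"""
import numpy as np


def parallax(scanner_pos, camera_pos, reference_point):
    """Return (baseline_mm, angle_deg) between the scanner and camera rays to the point."""
    v_s = reference_point - scanner_pos
    v_c = reference_point - camera_pos
    baseline = np.linalg.norm(camera_pos - scanner_pos)
    cos_a = np.dot(v_s, v_c) / (np.linalg.norm(v_s) * np.linalg.norm(v_c))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return baseline, angle


scanner = np.array([0.0, 0.0, 0.0])
point_z = np.array([10.0, 0.0, 120.0])          # point "Z" on the anatomy (hypothetical)
ir_cameras = {"P1": np.array([-70.0, 10.0, 5.0]),
              "P2": np.array([55.0, -40.0, 0.0]),
              "P3": np.array([0.0, 80.0, 10.0])}

for name, pos in ir_cameras.items():
    b, a = parallax(scanner, pos, point_z)
    print(f"{name}: baseline {b:6.1f} mm, angle {a:5.1f} deg")
```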
In particular applications, the positioning of the surgical devices, or components thereof, can also be tracked by intraoperative instrument tracking systems for example electromagnetic (EM) navigation systems. The locational information obtained by the intraoperative instrument tracking system aids in simplifying the algorithms needed to produce large-scale spatial surface maps from segmental sized scans.
In step 140, the IR light images (e.g., some and/or all) are associated to create an integrated IR light image based upon the determined parallax. In embodiments, the computing device 12 creates a 3D data point cloud by processing (and converging) data from each of the captured IR light images to create the integrated IR light image. In embodiments, the 3D data point cloud can be provided in the form of an intra-operative 3D model. In embodiments, the intra-operative 3D model may be matched with a portion of a pre-operative 3D model and/or pre-operative image data (e.g., points contained in or otherwise associated with the pre-operative 3D model). Matching may occur by identifying certain fiducial markers in both the pre-operative 3D model and the intra-operative 3D model and, based on the matching, the pre-operative 3D model and the intra-operative 3D model may be aligned or otherwise associated with one another. The fiducial markers may be naturally occurring anatomic features or mechanical markers placed by a clinician before imaging by CT or another modality. The computing device 12 generates a 3D model (e.g., the intra-operative 3D model), or a rendering of the 3D model for display on a 2D display (e.g., display 12c), based on the integrated IR light image data, and may store the 3D model in the memory 12a. The 3D model may be stored in the memory 12a in any suitable data structure (e.g., a 2D array of distances from a common plane or a 3D array of points).
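As a non-limiting illustration of fiducial-based matching, the sketch below (Python/NumPy) aligns a set of paired intra-operative and pre-operative fiducial points with the standard Kabsch/Procrustes rigid-registration method; the fiducial coordinates and the simulated pre-operative pose are assumptions for the example, and the disclosure does not require this particular algorithm.

```python
"""Fiducial-based rigid alignment of intra-operative points to a pre-operative
frame via the Kabsch/Procrustes method (illustrative sketch; the fiducial
coordinates and the simulated pre-operative pose are hypothetical).
"""
import numpy as np


def rigid_align(source, target):
    """Return rotation R and translation t mapping source points onto target."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, tgt_c - R @ src_c


# Hypothetical matched fiducials in the intra-operative frame (mm).
intra_op = np.array([[0.0, 0.0, 0.0], [30.0, 5.0, 2.0],
                     [10.0, 40.0, -3.0], [-20.0, 25.0, 8.0]])

# Simulate the same fiducials in the pre-operative CT frame (known rotation + offset).
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
pre_op = intra_op @ R_true.T + np.array([12.0, -3.0, 50.0])

R, t = rigid_align(intra_op, pre_op)
aligned = intra_op @ R.T + t       # the same transform can be applied to the full point cloud
print("alignment residual (mm):", np.linalg.norm(aligned - pre_op, axis=1).round(3))
```

Once R and t have been estimated from the fiducials, applying the same transform to the integrated point cloud brings the intra-operative and pre-operative data into a common coordinate frame.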
In step 150, the one or more optical light cameras (e.g., 20c, 22c) capture reflected optical light to create an optical light image. For instance, one or more optical light transmitters (e.g., light source 221) emit visible light that is reflected off the target and captured by the optical cameras 20c, 22c of one or more of the various surgical instruments. The optical light cameras convert the detected light into visible light data that is processed by the optical light camera(s) and/or the computing device 12 to form the optical light image. The optical light image data may be stored in the memory 12a of computing device 12. The projection of optical light by the one or more optical light transmitters can be effectuated before, during, and/or after projecting IR light from the one or more scanners.
In step 160, the integrated IR light image is associated with the optical light image. For instance, integrated IR light image data is associated with the optical light image data stored in the memory 12a of the computing device 12 such that the integrated IR light image data and the optical light image data are combined and/or warped together. In embodiments, optical light image data can be associated with one or more of the IR light images and/or the integrated IR light image before, during, after, and/or instead of integrating the IR light images.
In step 170, an intra-operative 3D image (and/or 3D model) is generated from the association of the integrated IR light image (or one or more of the IR light images) with the optical light image, using the combined or warped IR and optical light image data. For instance, the computing device 12 maps the optical image data to corresponding points in the 3D model. These points may be mapped by aligning the optical light image data with the integrated IR image data (or one or more of the IR light images) (e.g., adjusting the pixels to account for the spatial difference and/or parallax between the one or more optical cameras and the one or more IR cameras) and, once aligned, associating the optical image data with the integrated IR image data (or one or more of the IR light images). For example, when the optical image data is captured as a 2D array of points, the 2D array of points may be advanced or otherwise projected toward the 3D model, with each corresponding point of the 3D model (along the surface of the objects contained therein) associated with a point in the 2D array from the optical image data. Each point may include image data such as color value, luminance, chrominance, brightness, etc. As subsequently captured optical image data is mapped to the 3D model, the earlier-captured optical image data may be compared to the most-recently captured optical image data and updated as necessary. Once the optical image data is mapped to the integrated IR image data (or one or more of the IR light images), the computing device 12 may generate the intra-operative 3D model (or, where only a 2D display is available, a 2D rendering of an intra-operative 3D image) to be displayed on the display 12c of the computing device 12 based on the mapping of the optical image data to the integrated IR image data (or one or more of the IR light images). In embodiments, once the intra-operative 3D model and/or the intra-operative 3D image is generated, the computing device 12 causes the output module 12f to output the 2D and/or 3D image.
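By way of a non-limiting illustration of mapping optical pixels onto the 3D data, the sketch below (Python/NumPy; the camera intrinsics, the point coordinates, and the stand-in image are hypothetical) projects 3D points into an optical image with a pinhole model and samples a color for each point.

```python
"""Map optical-camera pixel colors onto 3D points via a pinhole projection
(illustrative sketch; intrinsics, pose, and image content are hypothetical).
"""
import numpy as np


def project_points(points_cam, fx, fy, cx, cy):
    """Project 3D points (camera frame, z forward) to pixel coordinates."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.column_stack([u, v]), z


# Hypothetical integrated point cloud already expressed in the optical camera frame (mm).
points = np.array([[0.0, 0.0, 100.0], [10.0, -5.0, 110.0], [-8.0, 4.0, 95.0]])
image = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)   # stand-in RGB frame

pixels, depth = project_points(points, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
colors = np.zeros((len(points), 3), dtype=np.uint8)
for i, (u, v) in enumerate(pixels):
    ui, vi = int(round(u)), int(round(v))
    if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1] and depth[i] > 0:
        colors[i] = image[vi, ui]          # nearest-neighbor color sample

print(np.column_stack([points, colors]))   # x, y, z plus sampled RGB per point
```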
In step 180, the generated image is displayed on the display 12c of the computing device 12. The generated image can include any number of images captured and/or combined and/or otherwise stitched together to provide a 3D spatial map of the anatomical object, portions of the surgical site, or the entirety of the surgical site, which may include any anatomy and/or objects disposed therein (e.g., surgical tools). Imaging may be effectuated one time and/or repeated (e.g., continuously) so as to create a video stream of intra-operative 3D images. In aspects of the disclosure, axial CT image slices may be included. In particular, with a location identified in the 3D model, one could determine which slice of the CT image a given point on the 3D model came from so that the axial CT image can be superimposed into the view. The image would be warped in perspective to denote both the rotation of the FOV relative to the normal of the CT image and also the depth, as one side of the CT image will be closer to the FOV than the other. The user could scroll through the axial views and the view would continue to display a single CT image slice overlaid or inserted into the FOV.
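As a non-limiting illustration of the perspective warp described above, the sketch below (Python/NumPy; the slice location, camera pose, and intrinsics are hypothetical) projects the four corners of an axial CT slice plane into the camera view, yielding the warped quadrilateral into which the slice image would be drawn.

```python
"""Find where an axial CT slice plane would appear in the endoscopic view by
projecting its four corners through a pinhole camera (illustrative sketch;
the slice geometry and camera parameters below are hypothetical).
"""
import numpy as np

# Four corners of an axial CT slice in the 3D model frame (mm); the slice is a
# horizontal plane at a chosen z-location.
slice_z = 80.0
corners = np.array([[-100.0, -100.0, slice_z], [100.0, -100.0, slice_z],
                    [100.0, 100.0, slice_z], [-100.0, 100.0, slice_z]])

# Hypothetical camera pose: tilted so one edge of the slice is closer than the other.
tilt = np.deg2rad(30.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(tilt), -np.sin(tilt)],
              [0.0, np.sin(tilt),  np.cos(tilt)]])
t = np.array([0.0, 0.0, 300.0])
fx = fy = 700.0
cx, cy = 320.0, 240.0

cam = corners @ R.T + t                               # model frame -> camera frame
uv = np.column_stack([fx * cam[:, 0] / cam[:, 2] + cx,
                      fy * cam[:, 1] / cam[:, 2] + cy])
print(uv.round(1))   # unequal edge lengths reflect the perspective (depth) warp
```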
Traditional 3D or stereoscopic endoscopes utilize optical ports located adjacent to each other on the distal endoscope tip and a crude 3D image is created from the small parallax between the ports. Placing a camera on each instrument, according to this disclosure, provides significantly wider parallax and allows for accurate measurement of object location within the overlapping FOVs.
More particularly, endoscopic system 10″ includes a variety of surgical instruments (e.g., instruments 14″, 16″, 18″, 20″, and 22″) having optical cameras that are arrayed about an anatomical object and that capture visible light images at different fields of view, such as fields of view 14f, 16f, and 18f, that may at least partially overlap. A computing device 12 (including memory 12a for storing captured or predetermined data) coupled to the optical cameras associates (e.g., via one or more processors 12b thereof) the visible light images to create an integrated image for display on a display 12c.
In step 200, to image an anatomical object within a body of a patient, such as the lung “L,” optical light is projected from one or more optical light transmitters, such as light source 221 of endoscope 22″ of system 10″, onto the target (e.g., an anatomical object) within the patient.
In steps 210-230, first, second, and third cameras 14c, 16c, 18c of system 10″, respectively, capture light reflected off the target to create respective first, second, and third images.
In step 240, parallax of each camera (e.g., distance and angle) is determined with respect to the other cameras, for instance, by the computing device 12 and/or any number or type of position sensors (not shown) and/or tracking software coupled to the respective cameras and/or instruments.
In step 250, the first, second, and third images, namely the corresponding data thereof, are associated (e.g., processed by the computing device 12) to generate an intra-operative 3D image and/or 3D model based on the determined parallax.
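By way of a non-limiting illustration, the sketch below (Python/NumPy; the camera positions and the noise-free ray directions are hypothetical) recovers a 3D point as the least-squares intersection of viewing rays from three spaced-apart cameras, which is one way the associated images could be converted into 3D positions given the determined parallax.

```python
"""Least-squares triangulation of a target point from ray observations by
multiple spaced-apart cameras (illustrative sketch; camera positions and the
observed ray directions below are hypothetical).
"""
import numpy as np


def triangulate_rays(origins, directions):
    """Return the 3D point minimizing the squared distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)


true_point = np.array([5.0, -10.0, 120.0])
origins = [np.array([-60.0, 0.0, 0.0]),
           np.array([60.0, 10.0, 0.0]),
           np.array([0.0, -70.0, 5.0])]
directions = [true_point - o for o in origins]        # ideal, noise-free observations

print(triangulate_rays(origins, directions).round(3))  # recovers the true point
```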
In step 260, the generated image is displayed on a display 12c. The generated image can be correlated to any pre-operative imaging. The generated image can include any number of images that can be combined or otherwise stitched together and may be provided as a 3D spatial map of the anatomical object or at least portions of the surgical site including any anatomy or objects disposed therein (e.g., surgical instrumentation).
According to aspects of the disclosure, generated 3D images may be associated with pre-operative 3D models generated from pre-operative image data. More particularly, the application 12h, during execution, may cause computing device 12 to store the image data associated with the generated 3D image at a corresponding location in the 3D model. This association may enable computing device 12 to update images generated during, for instance, EM navigation, or to display the intra-operative 3D model (or, in embodiments, the pre-operative 3D model) generated during planning or review phases of surgical procedures.
In certain embodiments, the components of the disclosed surgical systems may be positionable by a robotic system. The robotic system can provide precise six-axis orientation of the surgical instruments of the disclosed systems in a similar manner to the navigation systems, but with the benefit of active positioning as well as locational knowledge of the surgical instruments within the patient. As can be appreciated, the robotic system may be utilized to autonomously move the surgical instruments to complete imaging of larger areas or whole organs.
More specifically, the systems, and/or components thereof, described herein may be configured to work with robotic surgical systems and what is commonly referred to as “Telesurgery.” Such systems employ various robotic elements to assist the surgeon and allow remote operation (or partial remote operation) of surgical instrumentation. Various robotic arms, gears, cams, pulleys, electric and mechanical motors, etc. may be employed for this purpose and may be designed with a robotic surgical system to assist the surgeon during the course of an operation or treatment. Such robotic systems may include remotely steerable systems, automatically flexible surgical systems, remotely flexible surgical systems, remotely articulating surgical systems, wireless surgical systems, modular or selectively configurable remotely operated surgical systems, etc.
The robotic surgical systems may be employed with one or more consoles that are next to the operating theater or located in a remote location. In this instance, one team of surgeons or nurses may prep the patient for surgery and configure the robotic surgical system with one or more of the surgical instruments disclosed herein while another surgeon (or group of surgeons) remotely controls the surgical instruments via the robotic surgical system. As can be appreciated, a highly skilled surgeon may perform multiple operations in multiple locations without leaving his/her remote console which can be both economically advantageous and a benefit to the patient or a series of patients.
The robotic arms of the surgical system are typically coupled to a pair of master handles by a controller. The handles can be moved by the surgeon to produce a corresponding movement of the working ends of any type of surgical instrument (e.g., end effectors, graspers, knives, scissors, endoscopes, etc.), which may complement the use of one or more of the embodiments described herein. The movement of the master handles may be scaled so that the working ends have a corresponding movement that is different, smaller or larger, than the movement performed by the operating hands of the surgeon. The scale factor or gearing ratio may be adjustable so that the operator can control the resolution of the working ends of the surgical instrument(s).
It is contemplated that the surgical instruments described herein may be positioned by the robotic system and the precise position of the endoscope transmitted to the computer to construct the 3D image of the scanned organ or operative field. The robotic system has the ability to autonomously scan the surgical field and construct a complete 3D model of the field to aid the surgeon in directing the robotic arms or to provide necessary 3D information for the robotic system to further conduct surgical steps autonomously. In embodiments, where the surgical instruments include a camera and/or a structured light source that are independent of one another, the robotic system may direct the camera and a structured light source separately. The robotic system provides the relative coordinates between respective surgical instruments needed to triangulate points in the structured light and/or camera views to construct a 3D surface of the operative field.
The master handles may include various sensors to provide feedback to the surgeon relating to various tissue parameters or conditions, e.g., tissue resistance due to manipulation, cutting or otherwise treating, pressure by the instrument onto the tissue, tissue temperature, tissue impedance, etc. As can be appreciated, such sensors provide the surgeon with enhanced tactile feedback simulating actual operating conditions. The master handles may also include a variety of different actuators for delicate tissue manipulation or treatment further enhancing the surgeon's ability to mimic actual operating conditions.
In an exemplary robotic arrangement, a medical work station 1000 generally includes a plurality of robot arms 1002, 1003, a control device 1004, and an operating console 1005 having manual input devices 1007, 1008 by which a clinician may telemanipulate the robot arms.
Each of the robot arms 1002, 1003 may include a plurality of members, which are connected through joints, and an attaching device 1009, 1011, to which may be attached, for example, a surgical instrument or surgical tool “ST” supporting an end effector 1100, in accordance with any one of several embodiments disclosed herein.
Robot arms 1002, 1003 may be driven by electric drives (not shown) that are connected to control device 1004. Control device 1004 (e.g., a computer) may be set up to activate the drives, in particular by means of a computer program, in such a way that robot arms 1002, 1003, their attaching devices 1009, 1011 and thus the surgical tool (including end effector 1100) execute a desired movement according to a movement defined by means of manual input devices 1007, 1008. Control device 1004 may also be set up in such a way that it regulates the movement of robot arms 1002, 1003 and/or of the drives.
Medical work station 1000 may be configured for use on a patient “P” lying on a patient table 1012 to be treated in a minimally invasive manner by means of end effector 1100. Medical work station 1000 may also include more than two robot arms 1002, 1003, the additional robot arms likewise being connected to control device 1004 and being telemanipulatable by means of operating console 1005. A surgical tool (including an end effector 1100) may also be attached to the additional robot arm. Medical work station 1000 may include a database 1014, in particular coupled to control device 1004, in which are stored, for example, pre-operative data from patient/living being “P” and/or anatomical atlases.
One aspect of the disclosure is directed to an endoscopic system that supports organ matching to preoperative images, for example, images of a lung, other anatomy or anatomical features within a surgical site. The endoscopic system can provide both visual imaging and surface mapping for providing 3D models or 2D renderings of a 3D image (where a display is a 2D display). The endoscopic system includes surgical instrumentation that can be used to generate a 3D spatial map. The endoscopic system includes a computing device that utilizes the 3D spatial map to provide enhanced navigational guidance.
One advantage of the disclosure is to enable 3D surfacing of organs and other anatomical features and objects in a surgical site, which can be matched to the preoperative computational imaging needed for operative guidance to target lesions with particular spatial knowledge of adjacent structures and anatomic boundaries, such as in sublobar resection for lung cancer, as well as overlay of pre-surgical planning information such as planned resection lines. The primary use for this system is thoracic surgery, but the system can be utilized, or modified for use, in connection with deep pelvic surgery, rectal surgery, or other surgical applications.
The systems and methods described herein may be useful in various surgical procedures in which a patient is being diagnosed and/or treated, e.g., in cavities (insufflated or otherwise established), luminal structures, etc. For example, in an embodiment in which a clinician is performing a diagnosis of targets in a thoracic area of a patient, the disclosed systems and methods may be employed to assist during navigation of surgical instruments moving relative to anatomical features or targets within the body. Specifically, the systems and methods described enable in vivo imaging for later display as an intra-operative 3D model or a two-dimensional rendering of a 3D image (where a 3D display is not available). Additionally, the disclosed systems and methods may provide the clinician with the ability to view and/or determine various characteristics of anatomical features, structures, and/or other targets, as well as the position of one or more surgical devices, tools, and/or instruments relative to the body of the patient and to other surgical instrumentation disposed within or about the patient.
Persons skilled in the art will understand that the structures and methods specifically described herein and illustrated in the accompanying figures are non-limiting exemplary embodiments, and that the description, disclosure, and figures should be construed merely as exemplary of particular embodiments. It is to be understood, therefore, that the disclosure is not limited to the precise embodiments described, and that various other changes and modifications may be effected by one skilled in the art without departing from the scope or spirit of the disclosure. Additionally, it is envisioned that the elements and features illustrated or described in connection with one exemplary embodiment may be combined with the elements and features of another without departing from the scope of the disclosure, and that such modifications and variations are also intended to be included within the scope of the disclosure. Indeed, any combination of any of the disclosed elements and features is within the scope of the disclosure. Accordingly, the subject matter of the disclosure is not to be limited by what has been particularly shown and described.
This application is a continuation of U.S. patent application Ser. No. 17/839,064, filed on Jun. 13, 2022, which is a continuation of U.S. patent application Ser. No. 16/707,280, filed on Dec. 9, 2019, now U.S. Pat. No. 11,357,593, which claims the benefit of and priority to U.S. Provisional Patent Application No. 62/790,839, filed on Jan. 10, 2019, the entire disclosure of each of which is incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
5057494 | Sheffield | Oct 1991 | A |
5321113 | Cooper et al. | Jun 1994 | A |
6003517 | Sheffield et al. | Dec 1999 | A |
6009189 | Schaack | Dec 1999 | A |
6139490 | Breidenthal et al. | Oct 2000 | A |
7170677 | Bendall et al. | Jan 2007 | B1 |
7277120 | Gere et al. | Oct 2007 | B2 |
7564626 | Bendall et al. | Jul 2009 | B2 |
7812815 | Banerjee et al. | Oct 2010 | B2 |
8335359 | Fidrich et al. | Dec 2012 | B2 |
8442355 | Imai | May 2013 | B2 |
8508580 | McNamer et al. | Aug 2013 | B2 |
8706184 | Mohr et al. | Apr 2014 | B2 |
8827934 | Chopra et al. | Sep 2014 | B2 |
8922589 | Laor | Dec 2014 | B2 |
9005112 | Hasser et al. | Apr 2015 | B2 |
9044138 | Sjostrom et al. | Jun 2015 | B2 |
9066086 | Angot et al. | Jun 2015 | B2 |
9134534 | Border et al. | Sep 2015 | B2 |
9179822 | Kitamura et al. | Nov 2015 | B2 |
9182596 | Border et al. | Nov 2015 | B2 |
9188765 | Venkataraman et al. | Nov 2015 | B2 |
9223138 | Bohn | Dec 2015 | B2 |
9265572 | Fuchs et al. | Feb 2016 | B2 |
9277201 | Izawa et al. | Mar 2016 | B2 |
9285589 | Osterhout et al. | Mar 2016 | B2 |
9288472 | Aoki et al. | Mar 2016 | B2 |
9294672 | Georgiev et al. | Mar 2016 | B2 |
9364294 | Razzaque et al. | Jun 2016 | B2 |
9368546 | Fleck et al. | Jun 2016 | B2 |
9375133 | Kitamura et al. | Jun 2016 | B2 |
9375268 | Long | Jun 2016 | B2 |
9380292 | McNamer et al. | Jun 2016 | B2 |
9386222 | Georgiev et al. | Jul 2016 | B2 |
9473766 | Douglas et al. | Oct 2016 | B2 |
9479755 | Routhier et al. | Oct 2016 | B2 |
9485496 | Venkataraman et al. | Nov 2016 | B2 |
9554117 | Lee et al. | Jan 2017 | B2 |
9576369 | Venkataraman et al. | Feb 2017 | B2 |
9578259 | Molina | Feb 2017 | B2 |
9578318 | Fleck et al. | Feb 2017 | B2 |
9581820 | Robbins | Feb 2017 | B2 |
9684174 | Fleck et al. | Jun 2017 | B2 |
9712759 | Venkataraman et al. | Jul 2017 | B2 |
9717981 | Robbins et al. | Aug 2017 | B2 |
9730572 | Hasser et al. | Aug 2017 | B2 |
9733458 | Georgiev et al. | Aug 2017 | B2 |
9741118 | Mullis | Aug 2017 | B2 |
9749547 | Venkataraman et al. | Aug 2017 | B2 |
9759917 | Osterhout et al. | Sep 2017 | B2 |
9766441 | Rappel | Sep 2017 | B2 |
9779643 | Bohn et al. | Oct 2017 | B2 |
9807381 | Fleck et al. | Oct 2017 | B2 |
9807382 | Duparre et al. | Oct 2017 | B2 |
9813616 | Lelescu et al. | Nov 2017 | B2 |
9832381 | Osborne | Nov 2017 | B2 |
9843723 | Osborne | Dec 2017 | B2 |
9857591 | Welch et al. | Jan 2018 | B2 |
9858673 | Ciurea et al. | Jan 2018 | B2 |
9898866 | Fuchs et al. | Feb 2018 | B2 |
9918659 | Chopra et al. | Mar 2018 | B2 |
9986224 | Mullis | May 2018 | B2 |
10004558 | Long et al. | Jun 2018 | B2 |
10009538 | Venkataraman et al. | Jun 2018 | B2 |
10019816 | Venkataraman et al. | Jul 2018 | B2 |
10027901 | Venkataraman et al. | Jul 2018 | B2 |
10084958 | Georgiev et al. | Sep 2018 | B2 |
10091405 | Molina | Oct 2018 | B2 |
10105034 | Suga | Oct 2018 | B2 |
10122993 | Venkataraman et al. | Nov 2018 | B2 |
10127682 | Mullis | Nov 2018 | B2 |
10142560 | Venkataraman et al. | Nov 2018 | B2 |
10192358 | Robbins | Jan 2019 | B2 |
10194897 | Cedro et al. | Feb 2019 | B2 |
10234687 | Welch et al. | Mar 2019 | B2 |
10258426 | Silva et al. | Apr 2019 | B2 |
10262453 | Mountney et al. | Apr 2019 | B2 |
10299880 | Luna et al. | May 2019 | B2 |
10306120 | Duparre | May 2019 | B2 |
10317690 | Cheng | Jun 2019 | B2 |
10334241 | Duparre et al. | Jun 2019 | B2 |
10341643 | Routhier et al. | Jul 2019 | B2 |
10345582 | Schneider et al. | Jul 2019 | B2 |
10366472 | Lelescu et al. | Jul 2019 | B2 |
10373719 | Soper et al. | Aug 2019 | B2 |
10376178 | Chopra | Aug 2019 | B2 |
10380752 | Ciurea et al. | Aug 2019 | B2 |
10386636 | Welch | Aug 2019 | B2 |
10388073 | Westerinen et al. | Aug 2019 | B2 |
10397560 | Miyao et al. | Aug 2019 | B2 |
10398513 | Razzaque et al. | Sep 2019 | B2 |
10405753 | Sorger | Sep 2019 | B2 |
10412314 | McMahon et al. | Sep 2019 | B2 |
10413369 | Kashima et al. | Sep 2019 | B2 |
10430682 | Venkataraman et al. | Oct 2019 | B2 |
10444931 | Akeley | Oct 2019 | B2 |
10448692 | Hsu | Oct 2019 | B2 |
10462362 | Lelescu et al. | Oct 2019 | B2 |
10478162 | Barbagli et al. | Nov 2019 | B2 |
10478717 | Robbins et al. | Nov 2019 | B2 |
10480926 | Froggatt et al. | Nov 2019 | B2 |
10502876 | Robbins et al. | Dec 2019 | B2 |
10506920 | Hasser et al. | Dec 2019 | B2 |
10516879 | Eash et al. | Dec 2019 | B2 |
10524866 | Srinivasan et al. | Jan 2020 | B2 |
10540818 | Akeley | Jan 2020 | B2 |
10546424 | Pang et al. | Jan 2020 | B2 |
10547772 | Molina | Jan 2020 | B2 |
10548459 | Itkowitz et al. | Feb 2020 | B2 |
10555788 | Panescu et al. | Feb 2020 | B2 |
10567464 | Pang et al. | Feb 2020 | B2 |
10569071 | Harris et al. | Feb 2020 | B2 |
10598914 | Siegel et al. | Mar 2020 | B2 |
10603106 | Weide et al. | Mar 2020 | B2 |
10603133 | Wang et al. | Mar 2020 | B2 |
10610306 | Chopra | Apr 2020 | B2 |
10614555 | Fukazawa et al. | Apr 2020 | B2 |
10627632 | Welch et al. | Apr 2020 | B2 |
10638099 | Mullis et al. | Apr 2020 | B2 |
10638953 | Duindam et al. | May 2020 | B2 |
10639114 | Schuh et al. | May 2020 | B2 |
10674138 | Venkataraman et al. | Jun 2020 | B2 |
10674970 | Averbuch et al. | Jun 2020 | B2 |
10682070 | Duindam | Jun 2020 | B2 |
10702137 | Deyanov | Jul 2020 | B2 |
10706543 | Donhowe et al. | Jul 2020 | B2 |
10709506 | Coste-Maniere et al. | Jul 2020 | B2 |
10772485 | Schlesinger et al. | Sep 2020 | B2 |
10796432 | Mintz et al. | Oct 2020 | B2 |
10823627 | Sanborn et al. | Nov 2020 | B2 |
10827913 | Ummalaneni et al. | Nov 2020 | B2 |
10835153 | Rafii-Tari et al. | Nov 2020 | B2 |
10885630 | Li et al. | Jan 2021 | B2 |
11357593 | Komp | Jun 2022 | B2 |
11432828 | Lang | Sep 2022 | B1 |
11521345 | Jiang | Dec 2022 | B2 |
11553969 | Lang | Jan 2023 | B1 |
11793390 | Komp | Oct 2023 | B2 |
20020147462 | Mair et al. | Oct 2002 | A1 |
20030013972 | Makin | Jan 2003 | A1 |
20040120981 | Nathan | Jun 2004 | A1 |
20080045938 | Weide et al. | Feb 2008 | A1 |
20130303945 | Blumenkranz et al. | Nov 2013 | A1 |
20140035798 | Kawada et al. | Feb 2014 | A1 |
20150077529 | Hatta et al. | Mar 2015 | A1 |
20150148690 | Chopra et al. | May 2015 | A1 |
20150265368 | Chopra et al. | Sep 2015 | A1 |
20160157939 | Larkin et al. | Jun 2016 | A1 |
20160183841 | Duindam et al. | Jun 2016 | A1 |
20160192860 | Allenby et al. | Jul 2016 | A1 |
20160287344 | Donhowe et al. | Oct 2016 | A1 |
20170112571 | Thiel et al. | Apr 2017 | A1 |
20170112576 | Coste-Maniere et al. | Apr 2017 | A1 |
20170209071 | Zhao et al. | Jul 2017 | A1 |
20170265952 | Donhowe et al. | Sep 2017 | A1 |
20170311844 | Zhao et al. | Nov 2017 | A1 |
20170319165 | Averbuch | Nov 2017 | A1 |
20180078318 | Barbagli et al. | Mar 2018 | A1 |
20180144092 | Flitsch et al. | May 2018 | A1 |
20180153621 | Duindam et al. | Jun 2018 | A1 |
20180235709 | Donhowe et al. | Aug 2018 | A1 |
20180240237 | Donhowe et al. | Aug 2018 | A1 |
20180256262 | Duindam et al. | Sep 2018 | A1 |
20180263706 | Averbuch | Sep 2018 | A1 |
20180279852 | Rafii-Tari et al. | Oct 2018 | A1 |
20180325419 | Zhao et al. | Nov 2018 | A1 |
20190000559 | Berman et al. | Jan 2019 | A1 |
20190000560 | Berman et al. | Jan 2019 | A1 |
20190008413 | Duindam et al. | Jan 2019 | A1 |
20190038365 | Soper et al. | Feb 2019 | A1 |
20190065209 | Mishra et al. | Feb 2019 | A1 |
20190110839 | Rafii-Tari et al. | Apr 2019 | A1 |
20190175062 | Rafii-Tari et al. | Jun 2019 | A1 |
20190175799 | Hsu et al. | Jun 2019 | A1 |
20190183318 | Froggatt et al. | Jun 2019 | A1 |
20190183585 | Rafii-Tari et al. | Jun 2019 | A1 |
20190183587 | Rafii-Tari et al. | Jun 2019 | A1 |
20190192234 | Gadda et al. | Jun 2019 | A1 |
20190209016 | Herzlinger et al. | Jul 2019 | A1 |
20190209043 | Zhao et al. | Jul 2019 | A1 |
20190216548 | Ummalaneni | Jul 2019 | A1 |
20190239723 | Duindam et al. | Aug 2019 | A1 |
20190239831 | Chopra | Aug 2019 | A1 |
20190250050 | Sanborn et al. | Aug 2019 | A1 |
20190254649 | Walters et al. | Aug 2019 | A1 |
20190269470 | Barbagli et al. | Sep 2019 | A1 |
20190269818 | Dhanaraj et al. | Sep 2019 | A1 |
20190269819 | Dhanaraj et al. | Sep 2019 | A1 |
20190272634 | Li et al. | Sep 2019 | A1 |
20190298160 | Ummalaneni et al. | Oct 2019 | A1 |
20190298451 | Wong et al. | Oct 2019 | A1 |
20190320878 | Duindam et al. | Oct 2019 | A1 |
20190320937 | Duindam et al. | Oct 2019 | A1 |
20190336238 | Yu et al. | Nov 2019 | A1 |
20190343424 | Blumenkranz et al. | Nov 2019 | A1 |
20190350659 | Wang et al. | Nov 2019 | A1 |
20190365199 | Zhao et al. | Dec 2019 | A1 |
20190365479 | Rafii-Tari | Dec 2019 | A1 |
20190365486 | Srinivasan et al. | Dec 2019 | A1 |
20190380787 | Ye et al. | Dec 2019 | A1 |
20200000319 | Saadat et al. | Jan 2020 | A1 |
20200000526 | Zhao | Jan 2020 | A1 |
20200008655 | Schlesinger et al. | Jan 2020 | A1 |
20200030044 | Wang et al. | Jan 2020 | A1 |
20200030461 | Sorger | Jan 2020 | A1 |
20200038750 | Kojima | Feb 2020 | A1 |
20200043207 | Lo et al. | Feb 2020 | A1 |
20200046431 | Soper et al. | Feb 2020 | A1 |
20200046436 | Tzeisler et al. | Feb 2020 | A1 |
20200054399 | Duindam et al. | Feb 2020 | A1 |
20200054408 | Schuh et al. | Feb 2020 | A1 |
20200060771 | Lo et al. | Feb 2020 | A1 |
20200069192 | Sanborn et al. | Mar 2020 | A1 |
20200077870 | Dicarlo et al. | Mar 2020 | A1 |
20200078023 | Cedro et al. | Mar 2020 | A1 |
20200078095 | Chopra et al. | Mar 2020 | A1 |
20200078103 | Duindam et al. | Mar 2020 | A1 |
20200085514 | Blumenkranz | Mar 2020 | A1 |
20200109124 | Pomper et al. | Apr 2020 | A1 |
20200129045 | Prisco | Apr 2020 | A1 |
20200129239 | Bianchi et al. | Apr 2020 | A1 |
20200138514 | Blumenkranz et al. | May 2020 | A1 |
20200138515 | Wong | May 2020 | A1 |
20200138518 | Lang | May 2020 | A1 |
20200142013 | Wong | May 2020 | A1 |
20200155116 | Donhowe et al. | May 2020 | A1 |
20200155232 | Wong | May 2020 | A1 |
20200170623 | Averbuch | Jun 2020 | A1 |
20200170720 | Ummalaneni | Jun 2020 | A1 |
20200179058 | Barbagli et al. | Jun 2020 | A1 |
20200188021 | Wong et al. | Jun 2020 | A1 |
20200188032 | Komp | Jun 2020 | A1 |
20200188038 | Donhowe et al. | Jun 2020 | A1 |
20200195903 | Komp | Jun 2020 | A1 |
20200205903 | Srinivasan et al. | Jul 2020 | A1 |
20200205904 | Chopra | Jul 2020 | A1 |
20200214664 | Zhao et al. | Jul 2020 | A1 |
20200222146 | Komp | Jul 2020 | A1 |
20200229679 | Zhao et al. | Jul 2020 | A1 |
20200242767 | Zhao et al. | Jul 2020 | A1 |
20200249428 | Sugiyama | Aug 2020 | A1 |
20200275860 | Duindam | Sep 2020 | A1 |
20200297442 | Adebar et al. | Sep 2020 | A1 |
20200315554 | Averbuch et al. | Oct 2020 | A1 |
20200330795 | Sawant et al. | Oct 2020 | A1 |
20200352427 | Deyanov | Nov 2020 | A1 |
20200364865 | Donhowe et al. | Nov 2020 | A1 |
20200383750 | Kemp et al. | Dec 2020 | A1 |
20210000524 | Barry et al. | Jan 2021 | A1 |
Number | Date | Country |
---|---|---|
0013237 | Jul 2003 | BR |
0116004 | Jun 2004 | BR |
0307259 | Dec 2004 | BR |
0412298 | Sep 2006 | BR |
112018003862 | Oct 2018 | BR |
1644519 | Dec 2008 | CZ |
486540 | Sep 2016 | CZ |
2709512 | Aug 2017 | CZ |
2884879 | Jan 2020 | CZ |
102011084920 | Apr 2013 | DE |
1644519 | Dec 2008 | EP |
2141497 | Jan 2010 | EP |
3413830 | Sep 2019 | EP |
3478161 | Feb 2020 | EP |
3641686 | Apr 2020 | EP |
3644885 | May 2020 | EP |
3644886 | May 2020 | EP |
3749239 | Dec 2020 | EP |
03005028 | Jan 2004 | MX |
03000137 | Sep 2004 | MX |
03006874 | Sep 2004 | MX |
225663 | Jan 2005 | MX |
226292 | Feb 2005 | MX |
03010507 | Jul 2005 | MX |
05011725 | May 2006 | MX |
06011286 | Mar 2007 | MX |
246862 | Jun 2007 | MX |
2007006441 | Aug 2007 | MX |
265247 | Mar 2009 | MX |
284569 | Mar 2011 | MX |
2018032804 | Feb 2018 | WO |
2018049215 | Mar 2018 | WO |
WO-2018132804 | Jul 2018 | WO |
Entry |
---|
Communication pursuant to Article 94(3) EPC issued in European Patent Application No. 20150911.4 dated Mar. 4, 2022, 8 pages. |
Communication pursuant to Article 94(3) EPC issued in European Patent Application No. 20150911.4 dated May 19, 2023. |
Number | Date | Country | |
---|---|---|---|
20240041298 A1 | Feb 2024 | US |
Number | Date | Country | |
---|---|---|---|
62790839 | Jan 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17839064 | Jun 2022 | US |
Child | 18381481 | US | |
Parent | 16707280 | Dec 2019 | US |
Child | 17839064 | US |