The present invention relates generally to the field of minimally invasive surgery (MIS) and more particularly to enhanced visualization methods and tools for use in such surgical procedures.
Visualization of the surgical field during minimally invasive surgical procedures is indirect, which can make the experience unintuitive and ergonomically incorrect on many levels. During open surgery the surgeon looks directly at where his/her hands and instruments are and works in line with his/her visual axis, with natural depth perception and peripheral vision. Duplicating this experience in MIS has never been achieved. To capture images of the surgical field in real time, surgeons have to rely on endoscopes which are inserted into the body close to the surgical field. These images are then displayed on monitors which are usually placed away from the sterile field and often in a direction not aligned with the motor axis of the surgeon during surgery. Furthermore, the surgeon's eyes are accommodated at a distance much farther than the work area, which exacerbates a mental confusion that contributes in part to incorrect depth perception.
There have been many initiatives by visualization system manufacturers to improve this experience, the most impactful of which has been the move to high-definition (HD) cameras and monitors. This, along with advancements in digital image processing, is widely reported to have made a big difference in the quality of visualization. The other major initiative has been to replace 2-D cameras and monitors with 3-D versions, directly addressing the problem of depth perception. While it has been demonstrated that 3-D cameras and monitors improve surgeon performance in several critical tasks such as suturing, adoption has been limited due to surgeon reluctance to wear special glasses (usually dark) in the operating room to watch such monitors, as well as various forms of discomfort (dizziness, fatigue, etc.) that result from doing so while standing up and performing many tasks for hours as necessary.
Another major difference between MIS and open surgery is that during MIS the surgeon needs someone else's help, often full-time, to see and navigate the surgical field. An assistant (surgical assistant, attending nurse, etc.) holds and steers the endoscope under the surgeon's verbal instructions so that it can be directed to the desired area with the desired level of magnification and detail. Some robotically steered systems, controlled by voice or manual inputs, have also been introduced; however, these systems are expensive to use and maintain and have not been widely adopted.
To achieve high-quality magnified views of the surgical field, zooming is performed by the assistant (or the robot) by physically moving the endoscope closer to the field. Some endoscope cameras may also incorporate limited optical zooming, such as 2× integrated into the endoscope camera itself, but this is usually not enough to go from a full panoramic view of a body cavity to a highly detailed magnified view. Digital zoom is also an option; however, this approach suffers from subsampling and is not preferred. Regardless, only one view is available to the surgeon at any given time: a zoomed-out view where he/she can see the entire field, including instruments coming in and out, or a magnified close-up view of the exact location of the surgery.
Some embodiments of the invention are intended to address one or more of the above noted fundamental problems associated with visualization systems used in conventional minimally invasive surgery. In the preferred embodiment these problems are addressed by providing the surgeon two or more views (e.g. a panoramic top-level view and a magnified view) of the surgical field simultaneously in a picture-in-picture format.
In other embodiments, placement of the display is in the sterile field above the patient at an ergonomically correct eye-accommodation distance and orientation such that the surgeon's visual axis is in alignment with his/her motor axis.
In yet other embodiments, an auto-stereoscopic (glasses-free) screen is used which can operate in 2-D as well as 3-D modes based on user commands.
In some embodiments, the images on the screen can be manipulated by the surgeon through touchscreen commands which allow him/her to zoom in, zoom out, change picture-in-picture settings, and convert from 2-D to 3-D modes, among other functions. This ability of the surgeon to control most if not all of the major facets of his/her visualization may eliminate the need for an assistant to steer the endoscope, thus saving costs and improving productivity.
In all embodiments, magnification of images is achieved by optical means, which allows the image resolution to remain at the same high quality regardless of the zooming level.
In all embodiments, the images are captured via a single percutaneous lens inserted through an incision, whose length is such that its tip stays as far from the surgical field as possible and at a stationary position. This minimizes the intrusion into the body space as well as the likelihood of contact with tissue, in direct contrast to a conventional endoscope, the tip of which needs to be moved closer to the surgical area in order to capture zoomed-in images and may unintentionally cauterize such tissue in the process.
In some embodiments, an ancillary benefit of the monitor repositioning is a larger field of view.
Improved visualization methods and apparatus of the various embodiments of the invention are applicable to many types of minimally invasive surgery, for example in the areas of laparoscopic, thoracoscopic, pelviscopic, and arthroscopic surgeries. For laparoscopic surgery, significant utility will be found in cholecystectomy, hernia repair, bariatric procedures (bypass, banding, sleeve, or the like), bowel resection, hysterectomy, appendectomy, gastric/anti-reflux procedures, and nephrectomy.
Other objects and advantages of various embodiments of the invention will be apparent to those of skill in the art upon review of the teachings herein. The various embodiments of the invention, set forth explicitly herein or otherwise ascertained from the teachings herein, may address one or more of the above objects alone or in combination, or alternatively may address some other object ascertained from the teachings herein. It is not necessarily intended that all objects be addressed by any single embodiment or aspect of the invention even though that may be the case with regard to some embodiments or aspects.
In a first aspect of the invention, a percutaneous visualization system for providing a plurality of indirect views of a surgical area through a single incision includes: a percutaneous lens assembly with a proximal end, a distal end, and one or more optical lenses in between, which is placed through an incision in a patient's body such that the proximal end is outside of the patient's body while the distal end is disposed inside of the body cavity and aligned such that it is facing the surgical area; a plurality of optical zoom lens assemblies, each of which is aligned with the optical path of the percutaneous lens assembly, and each with a distal end, a proximal end, and one or more movable lenses in between, wherein the distal end of each such zoom lens assembly receives the light emanating from the proximal end of the percutaneous lens and directs the zoomed light through the proximal end of each such zoom lens assembly to an electronic capture means, and wherein the magnification level of each of the zoom assemblies is independently controlled by user input; an electronic image capture means comprising at least one photosensitive integrated circuit, wherein the at least one photosensitive integrated circuit converts light exiting each zoom lens assembly to electrical image signals; an electronic processing means for formatting the electrical image signals from the at least one photosensitive integrated circuit for display on a display device based on user input; and at least one display device which receives the formatted electrical image signal and displays it for selective viewing by the surgeon in two-dimensional or three-dimensional formats based on user input.
In a second aspect of the invention, a surgical area viewing method for use in a minimally invasive surgical procedure includes: making at least one percutaneous incision in the body of the patient in proximity to the surgical area; inserting at least one percutaneous lens assembly into the incision such that the proximal end of the lens assembly is outside of the patient's body while the distal end is disposed inside of a body cavity and aligned such that it is facing the surgical area; aligning at least one optical zoom assembly proximal to and in the optical path of the percutaneous lens; aligning at least one electronic image capture means in the optical path of the optical zoom assembly, wherein the electronic image capture means comprises one or more photosensitive integrated circuits which convert light exiting each zoom lens assembly to electrical image signals; formatting the electronic image signals for display on a display device; displaying the formatted electronic image signals on a display device for viewing by the surgeon in two-dimensional or three-dimensional formats based on user input; and manipulating a user input wherein the optical zoom assembly magnification level and electronic image formatting are chosen based on the user input.
In a third aspect of the invention, a percutaneous visualization system for use in a minimally invasive surgical procedure for providing indirect views of a surgical area through a single incision includes: a percutaneous lens assembly with a proximal end, a distal end, and one or more optical lenses in between, which is placed through an incision in a patient's body such that the proximal end is outside the patient's body while the distal end is disposed inside of a body cavity and aligned such that it is facing the surgical area; an optical zoom lens assembly which is aligned with the optical path of the percutaneous lens assembly, with a distal end, a proximal end, and one or more movable lenses in between, wherein the distal end of such zoom lens assembly receives the light emanating from the proximal end of the percutaneous lens and directs the zoomed light through the proximal end of such zoom lens assembly to an electronic capture means, and wherein the magnification level is controlled by user inputs; an electronic image capture means comprising at least one photosensitive integrated circuit, wherein the at least one photosensitive integrated circuit converts light exiting each zoom lens assembly to electrical image signals; an electronic processing means for formatting the electrical image signals from the at least one photosensitive integrated circuit for display on a display device based on user input; and at least one display device which receives the formatted electrical image signal, displays it, and facilitates touch screen inputs.
Other aspects of the invention will be understood by those of skill in the art upon review of the teachings herein. Other aspects of the invention may involve combinations of the above noted aspects of the invention. These other aspects of the invention may provide various combinations of the aspects presented above as well as provide other configurations, structures, functional relationships, and processes that have not been specifically set forth above.
One or more channels 204 may extend through the plug and/or the percutaneous lens 100, the distal ends of which open to the body cavity and the proximal ends of which connect to a common chamber or manifold 205. This manifold 205 is coupled to a connector means 201 to allow gas or fluid to be introduced into the body cavity. In a preferred embodiment, the body cavity is the gastrointestinal peritoneum and the gas is carbon dioxide for insufflation. However, neither the fluid nor the anatomy is limited to these.
In the preferred embodiment, the plurality of optical zoom assemblies 302 includes at least two laterally separated, independent, identical optical zoom assemblies with optical axes corresponding to left and right stereoscopic views. The optical zoom assemblies are independently zoomed, allowing each assembly to have a different magnification power simultaneously based on independently moving lens elements in each. Thus one may image a magnified view while the other provides a wider-angle view. Since the optics are identical and laterally separated, the zoom magnification can be synchronized to obtain simultaneous left and right identically magnified images corresponding to stereoscopic viewing by the left and right eye. The left and right images can be acquired by the at least one photosensor and displayed to the viewer for stereoscopic visualization. The amount of lateral separation between the independent optical zooming assemblies in part defines the amount of parallax that is perceived by a viewer.
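By way of a non-limiting illustration (the function name and numeric values below are assumptions, not part of the disclosure), the relationship between lateral separation and perceived parallax can be sketched with the standard pinhole stereo-geometry relation:

```python
def pixel_disparity(baseline_mm: float, focal_length_px: float, depth_mm: float) -> float:
    """Standard pinhole-stereo relation: disparity grows with the lateral
    separation (baseline) and with the effective focal length (magnification),
    and shrinks as the imaged tissue moves farther from the optics."""
    return baseline_mm * focal_length_px / depth_mm

# Example: two zoom assemblies separated by 5 mm imaging tissue 80 mm away
# with an effective focal length of 1200 px yield a disparity of about 75 px.
print(pixel_disparity(5.0, 1200.0, 80.0))
```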
The actuators can be servo control motors with sensors, piezoelectric actuators with sensors, or stepper motors. Although two optical zoom lens assemblies are shown, there could be more than two. In the preferred embodiment there are at least two laterally separated optical zoom assemblies that provide either two 2D views of the surgical site at different magnifications, or two views coordinated at the same magnification, each lens corresponding to the right or left eye of a stereoscopic view.
By pixel-wise it is meant that each pixel can be displayed from any image produced by the independent optical channels. As shown in
In the preferred embodiment the display is a touch screen. This allows the surgeon to manipulate the views via touch screen inputs. In the preferred embodiment the touch screen is capable of detecting multiple touch locations. As shown in
Similarly the pixel-wise image combinations can also be changed by appropriate touch inputs on the display. For example the location of the inlay 502 could be moved by touching the inlay and dragging it to a new location. Furthermore the size can be adjusted as well by appropriate inputs. Menus and buttons can also be used to change viewing modalities. In the preferred embodiment the display is auto-stereoscopic and the 2D to 3D transition can also be controlled by touch inputs.
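As a minimal, hypothetical sketch of such inlay manipulation (the data structure, clamping behavior, and size limits below are illustrative assumptions, not the disclosed implementation):

```python
from dataclasses import dataclass

@dataclass
class InlayRect:
    x: int       # top-left corner of the inlay, in display pixels
    y: int
    width: int
    height: int

def drag_inlay(rect: InlayRect, dx: int, dy: int, screen_w: int, screen_h: int) -> InlayRect:
    """Move the picture-in-picture inlay by the drag delta, clamped to the screen."""
    new_x = max(0, min(screen_w - rect.width, rect.x + dx))
    new_y = max(0, min(screen_h - rect.height, rect.y + dy))
    return InlayRect(new_x, new_y, rect.width, rect.height)

def resize_inlay(rect: InlayRect, scale: float, screen_w: int, screen_h: int) -> InlayRect:
    """Scale the inlay (e.g. from a pinch/spread gesture), keeping it on screen."""
    w = int(min(screen_w, max(64, rect.width * scale)))
    h = int(min(screen_h, max(48, rect.height * scale)))
    return drag_inlay(InlayRect(rect.x, rect.y, w, h), 0, 0, screen_w, screen_h)
```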
An HMI interpreter algorithm analyzes the inputs from the surgeon. As an example, if the surgeon input is intended to activate the stereoscopic 3D view, the HMI interpreter analyzes the inputs as such. Based on these inputs the HMI interpreter sends commands to the display and/or the multi optical channel percutaneous image acquisition module. In the case of the display, these commands can trigger 2D to 3D transitions, pixel-wise combining schemes (e.g. picture in picture), display settings (e.g. brightness), and the like. In the case of the camera, these can take the form of zoom/magnification commands, standard camera settings (e.g. gain, exposure, white balance), etc. Once the HMI interpreter interprets the surgeon inputs, the data is sent to the optical channel magnification computation. In a preferred embodiment, if a pinch or spread gesture is read, as shown in the figures, the target magnification level is computed from the change in finger spacing.
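A minimal sketch of one way the HMI interpreter could translate a pinch/spread gesture into a magnification command follows; the gain value and command fields are assumptions for illustration, not the disclosed algorithm:

```python
def interpret_gesture(touch_start, touch_end, zoom_gain: float = 0.01):
    """Map a raw two-finger gesture to a camera zoom command (assumed mapping).

    A spread (fingers moving apart) is read as a request to increase
    magnification and a pinch as a request to decrease it, consistent with
    the HMI interpreter description. Returns None for unrecognized input.
    """
    if len(touch_start) == 2 and len(touch_end) == 2:
        def spacing(points):
            (x1, y1), (x2, y2) = points
            return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

        delta = spacing(touch_end) - spacing(touch_start)
        # Positive delta (spread) increases magnification; negative (pinch) decreases it.
        return {"target": "camera", "command": "set_zoom_delta", "value": delta * zoom_gain}
    return None
```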
Once the magnification level is computed, the reference signal is sent to the magnification servo control. The magnification servo control is a computer algorithm that regulates the position of the optical components of the independent optical zoom assembly. In a feedback mode the servo control algorithm reads the optics position sensor and regulates the optics position by sending commands to the zoom actuators. In a preferred embodiment there are at least two independent optical zooming mechanisms corresponding to the left and right eye of a stereoscopic camera. When in stereoscopic mode, the left and right zoom actuators are sent commands based on the left/right optics position sensors to keep the magnifying power the same, thus imaging the surgical area in a stereoscopic fashion. In a 2D or picture-in-picture modality, the two optical channels image a magnified and a wider-angle view of the surgical site.
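The following is a simplified, hypothetical sketch of such a servo update; the proportional-control scheme and gain are assumptions, not the specific regulation algorithm of the preferred embodiment:

```python
def servo_step(reference_pos: float, sensor_pos: float, gain: float = 0.5) -> float:
    """One proportional-control update: command the zoom actuator in
    proportion to the error between the desired and sensed lens position."""
    return gain * (reference_pos - sensor_pos)

def stereo_zoom_update(reference_pos: float, left_sensor: float, right_sensor: float):
    """In stereoscopic mode both channels track the same reference so the
    left and right magnifying powers stay matched."""
    return (servo_step(reference_pos, left_sensor),
            servo_step(reference_pos, right_sensor))

# Example: both actuators are driven toward the same reference lens position.
left_cmd, right_cmd = stereo_zoom_update(10.0, 9.2, 10.5)
```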
In the preferred embodiment, the light from the left/right zoom optics is focused onto the left/right photosensors. The left/right photosensors could be one sensor for the left/right optical channels or a plurality of sensors. If a plurality of sensors is used, additional optical components may be used to direct specific bands of light wavelengths to each sensor. After the light is measured by the photosensors, image acquisition electronics convert the photosensor charges to digital image information, which is sent to the video processing/formatting electronics. These electronics perform all or some of image sharpening, color correction, up-sampling, down-sampling, and the like, as well as pixel-wise formatting including but not limited to spatial and temporal interleaving of pixel data. This can manifest as simply interleaving frames from each optical channel, combining frames in a pixel-wise fashion to create picture-in-picture views, alternating image columns, and the like. The processed/formatted images are sent to the display, along with other configuration data pertaining to settings such as parallax barrier on and off commands, infrared cuing for active glasses, and light polarization for passive glasses. Finally the video is displayed to the surgeon.
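A minimal sketch of one possible pixel-wise picture-in-picture combination using NumPy; the down-sampling method, inlay size, and placement are illustrative assumptions:

```python
import numpy as np

def picture_in_picture(wide_view: np.ndarray, magnified_view: np.ndarray,
                       inlay_scale: float = 0.25, margin: int = 16) -> np.ndarray:
    """Combine two optical channels pixel-wise: the magnified view is
    down-sampled and written into the lower-right corner of the wide frame."""
    h, w = wide_view.shape[:2]
    ih, iw = int(h * inlay_scale), int(w * inlay_scale)
    # Nearest-neighbor down-sampling of the inlay (illustrative only).
    ys = np.arange(ih) * magnified_view.shape[0] // ih
    xs = np.arange(iw) * magnified_view.shape[1] // iw
    inlay = magnified_view[ys][:, xs]
    out = wide_view.copy()
    out[h - ih - margin:h - margin, w - iw - margin:w - margin] = inlay
    return out
```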
From the standard 2D view, the viewing modality can change to 2D picture in picture or to a 3D stereoscopic view based on the surgeon inputs. As shown, the picture in picture view can either be the left view inlayed on the right view or the right view inlayed on the left view. In the state machine diagram this is denoted by PRinPL for right channel inlayed on left channel, or PLinPR for left optical channel inlayed on right optical channel. From either picture in picture view, the view can be restored to 2DL, the view imaged by the left optical channel, or changed to 2DR denoting the view imaged by the right optical channel.
In the case of transitioning to 3D viewing, the two optical channel's magnification levels are synchronized and the left/right views are formatted appropriately to display the 3D view. In the case of active glasses, the left and right frames are shown in an alternating fashion synchronized with the shutters of the glasses. In the case of passive glasses, the left and right view are displayed with the appropriate light polarization, allowing the left and right eye to view the corresponding left and right view. In the preferred embodiment, the display is auto-stereoscopic showing the left and right view to each eye based on parallax barrier technology or a lenticular display.
During each state in the state machine diagram, the magnification level of each view can be adjusted based on user inputs. In the preferred embodiment the magnification level of each independent optical zoom assembly can be adjusted independently. During 2D viewing, either picture in picture or a single view, the magnification can be controlled for each optical zoom assembly independently. In the case of stereoscopic viewing or 3D, the magnification is synchronized.
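The viewing-mode transitions described above can be sketched as a small transition table; the state labels follow the state machine diagram, while the event names are assumptions for illustration:

```python
# Viewing modes: 2DL / 2DR single views, picture-in-picture (PRinPL, PLinPR),
# and 3D stereoscopic viewing, as described for the state machine diagram.
TRANSITIONS = {
    ("2DL", "toggle_pip"): "PRinPL",
    ("2DR", "toggle_pip"): "PLinPR",
    ("PRinPL", "restore"): "2DL",
    ("PRinPL", "swap"): "2DR",
    ("PLinPR", "restore"): "2DR",
    ("PLinPR", "swap"): "2DL",
    ("2DL", "enter_3d"): "3D",
    ("2DR", "enter_3d"): "3D",
    ("3D", "exit_3d"): "2DL",
}

def next_mode(current: str, event: str) -> str:
    """Return the next viewing mode, staying in the current mode on an
    unrecognized event. Entering 3D would also trigger synchronization of
    the two optical channels' magnification levels."""
    return TRANSITIONS.get((current, event), current)
```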
The following paragraphs provide additional information about selected components and their functionality.
The functional purpose of the plug/trocar 202 is to hold the device down to the patient by an expanded flange. The plug must be deformable enough to allow insertion into the incision, either by the natural compliance of the material from which it is constructed, by having inflatable components, or by having articulating components. In the preferred embodiment the plug is disposable, but at a minimum it is sterilizable.
The percutaneous lens assembly 100 may have optical components made of glass or plastic; in the preferred embodiment it is disposable, but at a minimum it is sterilizable.
The coupler 305 must be rigid enough to maintain sufficient optical alignment between the percutaneous lens assembly 100 and the multi optical channel percutaneous image acquisition module 101. It must have means to attach to the percutaneous lens assembly 100 or plug 200 and means to attach to the steering frame 603 or multi optical channel percutaneous image acquisition module 300.
The multi optical channel percutaneous image acquisition module 300 contains numerous optical and electronic components of the system, which may limit the ability for this unit to be treated as disposable in some embodiments; in such embodiments it may instead be designed for multiple uses and configured for ease of surface sterilization. This unit typically includes optical zoom and focusing mechanisms, photosensitive integrated circuits, and digital image processing electronics. In some basic embodiments, two photosensitive integrated circuits, one associated with each pupil, and thus with each optical channel created by the stereoscopic pupils, may be the extent of the electronic components in the unit. However, to obtain better image quality and truer color, 3 or 4 photosensitive integrated circuits may be used to sense different wavelengths of light separately (e.g. red, green, and blue). In this case extra optical hardware, such as dichroic prisms, may need to be added in order to optically separate the different wavelengths of light. In still other embodiment variations, it may be desirable to sacrifice image quality for compactness and use a single photosensor to capture both right and left images, half for the left and half for the right. Zooming could be continuous or could have a finite number of discrete zoom levels. Focus could be manual or automatic.
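As a minimal sketch of the single-photosensor variation (the side-by-side frame layout is an assumption for illustration), the left and right channel images could be read from the two halves of one sensor frame:

```python
import numpy as np

def split_stereo_frame(sensor_frame: np.ndarray):
    """Split one photosensor frame into left/right channel images when a
    single sensor captures both stereoscopic pupils side by side."""
    half = sensor_frame.shape[1] // 2
    left_image = sensor_frame[:, :half]
    right_image = sensor_frame[:, half:]
    return left_image, right_image
```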
The display 601 communicates to the multi optical channel percutaneous image acquisition module 300 by wired, wireless or the like communication. This could be a single direction communication where the image data is simply sent to the display 601 for viewing. The display may also have touch screen controls for zoom, focus, image freezing, or other camera mode selections, requiring the communication between the devices to support two-way information flow. A touch screen interface could be button based or gesture based. For example, a gesture to zoom out would be to perform a two finger pinching motion on the screen and the picture-in-picture roles could be reversed by swiping from the smaller image to the center of the screen. The display 601 may support VGA resolution (640×480) all the way up to true high definition (1920×1080p) or beyond. Since the multi optical channel percutaneous image acquisition module 300 facilitates stereoscopic image acquisition, the display 601 preferably supports either active or passive 3D display technology. In the preferred embodiment, the display is auto-stereoscopic (e.g. parallax barrier), requiring no glasses for viewing a 3-D effect.
In some embodiments, the movement of the objective lens assembly may be largely rotational in nature such that the objective lens assembly pivots about the most distal lens or about the entry point of the assembly into the skin or other tissue of the patient. In other embodiments, movement of the assembly may be such that it undergoes some translation relative to the base and as such some repositioning of the base relative to the patient's skin may be used to ensure that undue stressing of the patient's tissue does not occur.
In view of the teachings herein, many further embodiments, alternatives in design and uses of the embodiments of the instant invention will be apparent to those of skill in the art. As such, it is not intended that the invention be limited to the particular illustrative embodiments, alternatives, and uses described above but instead that it be solely limited by the claims presented hereafter.
This application claims benefit of U.S. Provisional Application No. 61/595,467 filed Feb. 6, 2012, No. 61/622,922 filed Apr. 11, 2012, No. 61/693,551 filed Aug. 27, 2012, and No. 61/694,678 filed Aug. 29, 2012, and is a Continuation-in-Part of U.S. patent application Ser. No. 13/268,071, filed Oct. 7, 2011. Each of these referenced applications is incorporated herein by reference as if set forth in full herein.