There are various types of surgical robotic systems on the market or under development. Some surgical robotic systems use a plurality of robotic arms. Each arm carries a surgical instrument, or the camera used to capture images from within the body for display on a monitor. Other surgical robotic systems use a single arm that carries a plurality of instruments and a camera that extend into the body via a single incision. These types of robotic systems use motors to position and orient the camera and instruments and, where applicable, to actuate the instruments. Input to the system is generated based on input from a surgeon positioned at a master console, typically using input devices such as input handles and a foot pedal. Motion and actuation of the surgical instruments and the camera are controlled based on the user input. The image captured by the camera is shown on a display at the surgeon console. Examples of surgical robotic systems are described in, for example, WO2007/088208, WO2008/049898, WO2007/088206, US 2013/0030571, and WO2016/057989, each of which is incorporated herein by reference.
The Senhance Surgical System from TransEnterix, Inc. includes, as an additional input device, an eye tracking system. The eye tracking system detects the direction of the surgeon's gaze and enters commands to the surgical system based on the detected direction of the gaze. The eye tracker may be mounted to the console or incorporated into glasses (e.g. 3D glasses worn by the surgeon to facilitate viewing of a 3D image captured by the camera and shown on the display). Input from the eye tracking system can be used for a variety of purposes, such as controlling the movement of the camera that is mounted to one of the robotic arms.
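By way of illustration only, the following is a minimal sketch (in Python) of how a detected gaze point on the display might be mapped to a camera pan command. The gaze input, the dead-zone size, and the PanCommand format are assumptions made for this example; they do not represent the Senhance system or any particular eye tracking API.

```python
# Illustrative sketch only: maps a detected gaze point on the display to a
# camera pan command. The dead-zone size and command format are assumptions,
# not features of any particular eye tracking system.

from dataclasses import dataclass


@dataclass
class PanCommand:
    dx: float  # normalized horizontal pan rate, -1.0 .. 1.0
    dy: float  # normalized vertical pan rate, -1.0 .. 1.0


def gaze_to_pan(gaze_x: float, gaze_y: float,
                screen_w: int, screen_h: int,
                dead_zone: float = 0.15) -> PanCommand:
    """Convert a gaze point (pixels) into a pan command.

    The center of the screen is treated as a dead zone so the camera only
    moves when the surgeon looks toward the edges of the image.
    """
    # Normalize the gaze point to -1..1 relative to the screen center.
    nx = (gaze_x / screen_w) * 2.0 - 1.0
    ny = (gaze_y / screen_h) * 2.0 - 1.0

    def apply_dead_zone(v: float) -> float:
        if abs(v) < dead_zone:
            return 0.0
        # Ramp motion up smoothly outside the dead zone.
        sign = 1.0 if v > 0 else -1.0
        return (abs(v) - dead_zone) / (1.0 - dead_zone) * sign

    return PanCommand(dx=apply_dead_zone(nx), dy=apply_dead_zone(ny))


if __name__ == "__main__":
    # Surgeon looks toward the right edge of a 1920x1080 display.
    print(gaze_to_pan(1800, 540, 1920, 1080))   # pans right
    print(gaze_to_pan(960, 540, 1920, 1080))    # dead zone: no motion
```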
The present application describes various surgeon interfaces incorporating augmented reality that may be used by a surgeon to give input to a surgical robotic system.
This application describes systems that enhance the experience of surgeons and/or surgical staff members by providing an enhanced display incorporating information useful to the surgeon and/or staff.
In the
Referring to
The monitor 10 positioned above the patient can be used for displaying various information about the procedure and the patient. For example, the monitor can be used to display real-time 3D scanned images, video from operative cameras (laparoscopic, endoscopic, etc.), patient vital signs, and procedural information (steps, instruments being used, supply counts, etc.).
Beneath or on the underside of the monitor 10 (i.e. the face oriented towards the patient) is a user interface 12 for giving input to the surgical system to control the surgical instruments that are operated by the system. When the surgeon places his/her hands under the screen, he/she can manipulate the system through the user interface. This interface could require the surgeon to grasp and manipulate a handle-type device that functions as a user-input device, or the surgeon interface could comprise a series of cameras positioned to capture his/her hand gestures. In the latter example, user hand movements beneath the monitor are tracked by the camera system (e.g. using optical tracking). The detected movements are used as input to cause the system to direct corresponding movement and actuation of the robotic surgical instruments within the body. In this way, the surgeon can view the operative site and simultaneously manipulate instruments inside the body. Graphical depictions or images 14 of the surgical instruments (or just their end effectors) are shown on the display.
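As one non-limiting illustration of how tracked hand motion might be converted into instrument motion, the following sketch scales the displacement of a tracked hand position into a commanded displacement of the instrument tip. The tracking interface, the motion-scaling factor, and the per-cycle limit are hypothetical values chosen only for the example.

```python
# Minimal sketch, not a production controller: scales tracked hand
# displacement (e.g. from an optical tracking system beneath the monitor)
# into a commanded displacement of the instrument tip. The scaling factor
# and safety limit below are illustrative assumptions.

import numpy as np

MOTION_SCALE = 0.3   # e.g. roughly 3:1 scaling: 30 mm of hand travel -> 9 mm at the tip
MAX_STEP_MM = 2.0    # clamp per-cycle motion as a simple safety limit


def hand_delta_to_instrument_delta(prev_hand_mm: np.ndarray,
                                   curr_hand_mm: np.ndarray) -> np.ndarray:
    """Return the commanded instrument-tip displacement (mm) for one control cycle."""
    delta = (curr_hand_mm - prev_hand_mm) * MOTION_SCALE
    norm = np.linalg.norm(delta)
    if norm > MAX_STEP_MM:
        delta = delta / norm * MAX_STEP_MM
    return delta


if __name__ == "__main__":
    prev = np.array([100.0, 50.0, 200.0])   # tracked hand position, mm
    curr = np.array([104.0, 50.0, 198.0])
    print(hand_delta_to_instrument_delta(prev, curr))
```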
Referring to
A third embodiment, shown in
A fourth embodiment utilizes a variation of “smart glasses” that have an inset screen, such as the Google Glass product (or others available from Tobii, ODG, etc.). The display on the glasses may be used to display patient vitals, procedure steps, views captured by auxiliary (or primary) cameras/scopes, or indicators of locations of instruments within the body.
The specific embodiment of these glasses incorporates both externally facing cameras as well as internally facing cameras. See, for example, US Patent Application 2015/0061996, entitled “Portable Eye Tracking Device,” owned by Tobii Technology AB and incorporated herein by reference. The externally facing cameras would be used to track surgeon gaze or movement around the operating room (i.e. is s/he looking at an arm, a monitor, the bed, etc.). The internally facing cameras would track the surgeon's eye movement relative to the lenses themselves. The detected eye movement could be used to control a heads-up display (HUD) or tracked as a means to control external devices within the view of the externally facing camera. As an example, these glasses could also be used to direct movement of the laparoscopic camera for panning or zoom by measuring the position and orientation of the wearer relative to an origin in the operating room space, or by measuring the position of the pupils relative to an external monitor or the HUD within the glasses themselves. As another example, input from the external camera would be used to detect what component within the operating room the user was looking at (a particular arm, as identified by shape or some indicia affixed to the arm and recognized by the system from the sensed external camera image), causing the system to call up a menu of options relative to that component on the HUD of the glasses themselves, finally allowing the user to select a function for that component from that menu of options by focusing her gaze on the desired menu option.
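The following sketch illustrates one possible way to structure the gaze-driven menu flow just described: a component recognized by the externally facing camera calls up a menu on the HUD, and a menu item is selected when the user's gaze dwells on it. The component identifiers, menu contents, and dwell-time threshold are hypothetical and shown only to illustrate the concept.

```python
# Illustrative sketch of the gaze-driven menu flow described above. A real
# system would obtain the "looked-at component" from the externally facing
# camera (e.g. by recognizing indicia on an arm); here it is passed in as a
# string. All identifiers, menu entries, and timings are assumptions.

import time

COMPONENT_MENUS = {
    "arm_1": ["Reposition", "Swap instrument", "Lock"],
    "monitor": ["Change view", "Show vitals"],
}

DWELL_SECONDS = 1.0  # how long the gaze must rest on an item to select it


class HudMenuController:
    def __init__(self):
        self.active_menu = None
        self._dwell_item = None
        self._dwell_start = 0.0

    def on_component_gazed(self, component_id: str):
        """Called when the external camera recognizes a component in the gaze path."""
        if component_id in COMPONENT_MENUS:
            self.active_menu = COMPONENT_MENUS[component_id]

    def on_menu_item_gazed(self, item: str, now: float = None):
        """Called repeatedly with the HUD menu item under the user's gaze.

        Returns the item once the gaze has dwelled on it long enough,
        otherwise returns None.
        """
        now = time.monotonic() if now is None else now
        if item != self._dwell_item:
            self._dwell_item, self._dwell_start = item, now
            return None
        if now - self._dwell_start >= DWELL_SECONDS:
            self._dwell_item = None
            return item   # selection confirmed
        return None


if __name__ == "__main__":
    hud = HudMenuController()
    hud.on_component_gazed("arm_1")
    print(hud.active_menu)                                 # menu shown on the HUD
    hud.on_menu_item_gazed("Reposition", now=0.0)
    print(hud.on_menu_item_gazed("Reposition", now=1.2))   # -> "Reposition"
```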
In a variation of the fourth embodiment, user input handles of the surgeon console might be replaced with a system in which the user's hands or “dummy” instruments held by the user's hands are tracked by the externally facing camera on the smart glasses. In this case, the surgeon console is entirely mobile, and the surgeon can move anywhere within the operating room while still commanding the surgical robotic system. In this variation, the externally facing camera on the glasses is configured to track the position/orientation of the input devices and the system is configured to use those positions and orientations to generate commands for movement/actuation of the instruments.
A fifth embodiment comprises virtual reality (“VR”) goggles, like those sold under the name Oculus Rift. These goggles differ from those of the fourth embodiment in that they are opaque and the lenses do not permit visualization of devices beyond the screen/lens. These goggles may be used to create an immersive experience with the camera/scope's 3D image output as well as the user interface of a surgical system.
In one variation of this embodiment, the VR goggles are worn on the operator's head. While the surgeon is wearing the VR goggles, his/her head movements could control the position/orientation of the scope. In some embodiments, the goggles could be configured to display stitched-together images from multiple scopes/cameras.
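One possible mapping of head motion to scope motion is sketched below purely for illustration: changes in head yaw and pitch are converted into scope pan/tilt rate commands, with a hypothetical clutch input so the scope follows the head only while the clutch is engaged. The gain, the angle convention, and the clutch itself are assumptions for this example.

```python
# A minimal sketch, assuming the VR headset exposes head orientation as
# yaw/pitch angles in degrees. Head angular motion is converted into scope
# pan/tilt rate commands; the gain and clutch behavior are illustrative.

from dataclasses import dataclass


@dataclass
class ScopeRate:
    pan_dps: float   # degrees per second
    tilt_dps: float


GAIN = 0.5  # scope moves at half the head's angular rate


def head_to_scope_rate(prev_yaw: float, prev_pitch: float,
                       yaw: float, pitch: float,
                       dt: float, clutch_held: bool) -> ScopeRate:
    """Return a scope pan/tilt rate command for one control cycle."""
    if not clutch_held or dt <= 0:
        return ScopeRate(0.0, 0.0)
    return ScopeRate(pan_dps=(yaw - prev_yaw) / dt * GAIN,
                     tilt_dps=(pitch - prev_pitch) / dt * GAIN)


if __name__ == "__main__":
    print(head_to_scope_rate(0.0, 0.0, 4.0, -1.0, dt=0.1, clutch_held=True))
    print(head_to_scope_rate(0.0, 0.0, 4.0, -1.0, dt=0.1, clutch_held=False))
```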
In a second variation of this embodiment, the VR goggles might instead be mounted at eye level on the surgeon console, and the surgeon could put his/her head into the cradle of the goggles to see the scope image in 3D for an immersive experience. In this example, the surgeon console might also have a 2D image display for reference by the user at times when the 3D immersive experience is not needed. In some implementations of this embodiment, the goggle headset is detachable from the surgeon console, permitting it to be worn as in the first variation (described above).
In some implementations, fore/aft motion of the user's head may also be allowed via a prismatic joint in-line with the 3D display and roll axis, and may be used as input to the system to control zooming of the endoscope.
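A minimal sketch of such a zoom mapping is shown below, assuming the prismatic joint reports a fore/aft displacement in millimeters. The dead band, travel range, and maximum zoom rate are illustrative assumptions rather than parameters of any particular system.

```python
# Hedged sketch only: maps fore/aft displacement of the headset cradle along
# its prismatic joint into an endoscope zoom (insertion) rate command. The
# joint travel, dead band, and rate limit below are illustrative assumptions.

def head_displacement_to_zoom(displacement_mm: float,
                              dead_band_mm: float = 5.0,
                              max_travel_mm: float = 50.0,
                              max_zoom_rate_mm_s: float = 10.0) -> float:
    """Return an endoscope insertion rate (mm/s); positive values zoom in."""
    if abs(displacement_mm) < dead_band_mm:
        return 0.0
    # Scale the travel beyond the dead band linearly to the allowed zoom rate.
    span = max_travel_mm - dead_band_mm
    fraction = min((abs(displacement_mm) - dead_band_mm) / span, 1.0)
    sign = 1.0 if displacement_mm > 0 else -1.0
    return max_zoom_rate_mm_s * fraction * sign


if __name__ == "__main__":
    print(head_displacement_to_zoom(2.0))    # inside dead band: 0.0
    print(head_displacement_to_zoom(30.0))   # zoom in
    print(head_displacement_to_zoom(-50.0))  # zoom out at the maximum rate
```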
The mounting arm implementations shown in
As noted, in
The movement of the headset could be used as an input to the system for repositioning instruments or a laparoscope, as an example. Headset movement may also be used to otherwise alter the user's viewpoint, such as by moving within a stitched image field, or to switch between various imaging modes.