The present invention relates to a system for laparoscopic surgery, and particularly to a system for controlling a camera attached to a laparoscope by head movements of the surgeon.
Typically, in a classical laparoscopic surgery setting, a medical professional operates on tissue using instruments inserted through small incisions. The operating field inside a patient's body is visualized using a camera attached to a laparoscope. During a laparoscopic procedure, both hands of the medical professional are usually occupied with laparoscopic instruments. As such, an experienced assistant is required to maneuver the laparoscope and continuously provide necessary visualization of the operating field to the surgeon. Since the medical professional does not have direct control over the visualization of the operative field, miscommunications between the surgeon and the assistant may occur, leading to complications and increased operating time. Apart from miscommunication, the assistant is subject to fatigue, distractions, and natural hand-tremors that may result in abrupt movement of the operating field on the display.
In an effort to overcome these challenges, robotic laparoscope holders have been used by medical professionals to maintain direct control of the operative field. Typically, the medical professional controls camera movement by providing the robotic laparoscope holder a set of maneuver commands, including tilting, panning, insertion/retraction, rotation and angulation. For such robotic laparoscope holders, the medical professional must first mentally compute the position and orientation of the entire laparoscope to focus the camera at a desired position, and then specify the sequence of maneuvers through an interface, such as a voice-controlled interface, to move the laparoscope. Thus, the interface currently used for the control of these robotic devices requires the surgeon to give a discrete set of commands to focus the camera on a desired location, such as tilt-up, tilt-up, pan-right, pan-right, tilt-up, tilt-up, pan-right, tilt-up, angulate, rotate. This can result in poor human-in-the-loop interaction with the robotic laparoscope holder. Further, the incision point acts as a fulcrum for the laparoscope, thereby causing scaling and inversion of movements, as well as making the maneuvering of the camera disposed at the distal end of the laparoscope challenging, especially in the case of articulated and angulated laparoscopes.
Thus, a system for camera control in robotic and laparoscopic surgery solving the aforementioned problems is desired.
The system for camera control in robotic and laparoscopic surgery includes a head tracking system for tracking movements of an operator's head during laparoscopic surgery, a robotic laparoscope holder operatively engaged to a laparoscope, an interface workstation having a processor and inputs connecting the sensor signal and the servo control system signals to the processor, and a clutch switch connected to the processor for activating and inactivating the interface workstation. The head tracking system includes at least one optical marker to be worn on the operator's head and an optical tracker for detecting movement of the at least one optical marker and transmitting a corresponding sensor signal. The laparoscope includes an articulating distal portion, a tip, and a camera disposed at the tip.
These and other features of the present invention will become readily apparent upon further review of the following specification and drawings.
Similar reference characters denote corresponding features consistently throughout the attached drawings.
Referring to
The interface workstation 120 receives and/or sends commands to the head tracking system 130, the video processing system 140, the robotic scope holder 110, and the clutch switch 180. The human operator H selects the scope type and model from a list available on the interface workstation before the surgery.
The head tracking system 130 includes one or more optical markers 415, e.g., three optical markers 415, that are attachable to the head of the human operator H, and an optical tracker 410 that is configured to track the spatial location of the optical markers 415 and to translate or transform this information into a virtual head-frame representing the position and orientation of the operator's head (
The robotic scope holder 110 can be a robot configured to move the laparoscope 200a-200c. The laparoscope 200a-200c can be placed on the robotic scope holder 110 using a scope adaptor 205 (
The articulating distal portion 214 of the shaft 210 is configured for insertion into the patient's body, such as through a cannula 220 inserted through the patient's abdominal wall AW. A camera frame or virtual frame 400 is defined at the tip of the scope to identify the position and orientation of the camera. As the camera frame 400 moves in a three-dimensional (3D) space, the laparoscope 200a-200c follows the motion of the camera frame 400. There is a one-to-one direct mapping of the surgeon's head movement (defined by the head frame) to the motion of the camera. The camera output is rendered on the display 160. For example, a video stream of the operating field from the scope camera 150 is provided to the video processing system 140, which rotates the video stream of the operating field, superimposes information on it, and then provides it to the display 160 for operator viewing.
The proximal portion 212 of the laparoscope 200 includes a plurality of knobs 218. As illustrated by arrow A, each knob 218 may selectively extend and retract the articulating distal portion 214 into or out of the incision. The knobs 218 can be configured to maneuver the articulating distal portion 214 of the laparoscope 200 within the surgical environment. For example, the knobs 218 may rotate the articulating distal portion 214 of the shaft 210 about the vertical axis, as illustrated by arrow A′, and/or articulate the articulating distal portion 214 of the shaft 210, as illustrated by arrow A″, to reposition the camera 150 within the surgical environment. In the case of the angulated laparoscope 200b, the rotation about the vertical axis A′ can also be performed directly by rotating the shaft 210 using the robotic laparoscope holder 110.
The clutch switch 180 may be used to activate or deactivate the interface workstation 120 and, in turn, the system 100. The clutch switch 180 may act as an ‘ON’ and ‘OFF’ switch and, as such, may be any suitable type of activation switch, such as a foot pedal or a button, attached to the robotic laparoscope holder 110, or a voice activated receiver. The switching between ‘ON’ and ‘OFF’ allows for ergonomic repositioning of the operator's head H in front of the optical tracker 410.
The band 420 may be any suitable type of band formed from a lightweight, flexible material allowing the band 420 to have a variety of configurations, as illustrated in
The camera 150 may be any suitable medical grade type of camera adapted for acquiring video images of the surgical environment within the patient's body and for communicating a video stream to the video processing system 140 to show on the display 160. The display 160 may be any suitable type of display, such as a light emitting diode (LED) or liquid crystal display (LCD).
The video processing system rotates the video as requested by the interface workstation. The rotational angle by which the video is rotated at the center of the visualization screen is represented by RScreen(t). It is measured with respect to the “X” axis of an imaginary 2D coordinate system located at the center of the screen and with the axes parallel to the sides of the visualization screen. The angle is measured in degrees.
Any motion of the optical markers placed on the operator's head is tracked by the head tracking system. The head frame 405, or the orientation and position of the operator's head, is measured with respect to a head tracking base frame 425. The head frame is represented by a 4×4 homogeneous transformation matrix, MHead-Frame(t0), wherein 't0' represents the time at which the frame was captured. For example, the optical markers 415 are arranged in a specific configuration that allows the optical tracker 410 to triangulate the orientation and position of the medical professional's head within a three-dimensional ("3D") space in real-time. As illustrated in
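For illustration only, the 4×4 homogeneous transformation matrix described above may be sketched in Python/NumPy as follows; the rotation angle, position values, and NumPy-based representation are hypothetical and not part of the disclosed system:

```python
import numpy as np

def homogeneous(R, p):
    """Assemble a 4x4 homogeneous transformation matrix from a 3x3
    rotation matrix R and a 3-element position vector p."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = p
    return M

# Hypothetical head frame: head turned 30 degrees about the vertical axis,
# positioned 0.5 m from the head tracking base frame.
theta = np.radians(30.0)
R_head = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
M_head_frame = homogeneous(R_head, np.array([0.0, 0.0, 0.5]))
```

Such a matrix encodes both orientation (the upper-left 3×3 block) and position (the last column) in a single operator, which is why the frames described herein compose by matrix multiplication.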
If the desired position and orientation of the camera 150 (as per the medical professional's head movements) is not achievable due to certain constraints, the interface workstation 120 may utilize a transformation filter (not shown) to compute feasible positions and orientations for the camera 150 to move. Such constraints, which may be incorporated in various filters, may include: (1) tissue boundaries and (2) unreachable workspaces for the camera 150. The tissue boundaries may be computed from preoperative medical imaging data (e.g. MR scans or CT scans) to avoid impingement of the camera 150 with vital structures. In the case of zero-degree or angulated laparoscopes, the limited degrees of freedom may restrict the motion of the camera 150.
The processing of the video stream produced by the camera 150 may involve rotating the images of the video stream by a predetermined angle. The rotational angle, measured in degrees (°), by which the video is rotated is represented by RScreen(t) and is measured with respect to the “X” axis of an imaginary two dimensional (e.g. 2D) coordinate system located at the center of the display 160, the “X” axis being parallel to the sides of the display 160. As illustrated in
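As an illustrative sketch only, the rotation of pixel coordinates about the screen center by RScreen(t) may be written as follows; the function name and NumPy representation are assumptions for illustration, not the disclosed video processing system:

```python
import numpy as np

def rotate_about_center(points, angle_deg, center):
    """Rotate 2D pixel coordinates counterclockwise by angle_deg about
    the screen center, analogous to rotating the video by RScreen(t)."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    # Shift to the center, rotate, then shift back.
    return (points - center) @ R.T + center
```

Applying the same angle to every pixel coordinate rotates the whole image about the center of the display while leaving the center fixed.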
The interface workstation 120 may be a centralized system that sends and receives commands to and from the robotic laparoscopic holder 110, the head tracking system 130, the video processing system 140, and the clutch switch 180. As such, it is to be noted that the interface workstation 120 may represent a standalone computer, computer terminal, portable computing device, networked computer or computer terminal, or networked portable device, and can also include a microcontroller, an application specific integrated circuit (ASIC), or a programmable logic controller (PLC).
Data can be entered into the interface workstation 120 by the medical professional, or sent to or received from any suitable type of interface 500, such as the robotic laparoscopic holder 110, the head tracking system 130, the video processing system 140, or the clutch switch 180. The interface 500 can be associated with a transmitter/receiver 510, such as for wireless transmission/reception, for receiving signals from a processor 540 to articulate the articulating distal portion 214 of the laparoscope 200 and to reposition the camera 150.
The interface workstation 120 may include a memory 520 such as to store data and information, as well as program(s), instructions, or parameters for implementing operation of the system 100. The memory 520 can be any suitable type of computer readable and programmable memory, such as non-transitory computer readable media, random access memory (RAM) or read only memory (ROM), for example. The interface workstation 120 can be powered by a suitable power source 530.
The interface workstation 120 provides new configuration parameters to actuate the robotic scope holder and move the laparoscope. The interface workstation 120 also receives current configuration parameters measured from the actuator states of the robotic scope holder. The processor 540 of the interface workstation 120 is configured for performing or executing calculations, determinations, data transmission or data reception, sending or receiving of control signals or commands, such as in relation to the movement of the robotic laparoscope holder 110 and/or the camera 150, as further discussed below. The processor 540 can be any suitable type of computer processor, such as a microprocessor or an ASIC, and the calculations, determinations, data transmission or data reception, sending or receiving of control signals or commands processed or controlled by the processor 540 can be displayed on the display 160. The processor 540 can be associated with, or incorporated into, any suitable type of computing device, for example, a personal computer or a PLC. The display 160, the interface 500, the transmitter/receiver 510, the servo control system 515, the memory 520, the power source 530, the processor 540, and any associated computer readable media are in communication with one another by any suitable type of data bus, as is well known in the art.
The point of reference for the robotic laparoscope holder 110 is generally referred to as a robot base frame 600, which is a fixed reference frame for the entire robotic scope holder 110. As such, any motion/movement of the robotic laparoscope holder 110 may be measured with respect to the robot base frame 600. The robotic laparoscope holder 110 may include a mechanism for holding a trocar for creating an incision in the abdominal wall AW of the patient. The position of the trocar depends upon the surgery and patient position. Once the incision is made before the surgery, the robotic laparoscope holder 110 is manually adjusted by the operator to hold the trocar.
The incision frame for both articulated laparoscopes 200a (
For the articulated laparoscope 200a (
The insertion length LInsertion for both the articulated laparoscope 200a (
Once positioned in the surgical environment within the patient's body, the camera 150 moves within the camera frame 400. The camera frame 400 describes the position and orientation of the camera 150 at a specific point in time, such as time 't'. Accordingly, the camera frame 400 is represented by a 4×4 homogeneous transformation matrix, MCamera-Frame. The camera frame 400 is measured with respect to the robot base frame 600 and, as such, represents the position of the camera 150. As illustrated in
Once the clutch switch 180 is turned “ON,” the interface workstation 120 may communicate with the head tracking system 130 (Step 816) to activate the head tracking system 130 and determine the spatial location (e.g. the position and orientation) of the medical professional's head H within the head frame 405, with the robotic laparoscopic holder 110 (Step 818) to activate the robotic scope holder 110 and, in turn, to activate the laparoscope 200, and with the video processing system 140 (Step 820) to activate the video processing system 140 and display the activation of the robotic laparoscopic holder 110 on the display 160.
Once the video processing system 140 is activated, the camera 150 may stream real-time video of the operating field through the video processing system 140, such that the new rotational angle(s) may be displayed on the display 160, such as superimposed on the operating field seen on the display 160, which may allow the medical professional to view the operating field, along with the spatial orientation of his/her instruments on the display 160.
The time at which the clutch switch 180 is activated is generally referred to as (t0). Once activated, the interface workstation 120 requests the medical professional's head orientation/position from the head tracking system 130 and stores it as MHead-Frame(t0) (Step 822). The interface workstation 120 also requests the rotational angle of the visualization screen and stores the rotational angle as RScreen(t0) (Step 824). Further, the interface workstation 120 requests the robotic laparoscope holder's 110 configuration parameters (e.g. <MIncision-Frame(t0), LInsertion(t0), RArticulation(t0)> for articulated laparoscopes 200a and <MIncision-Frame(t0), LInsertion(t0)> for angulated laparoscopes 200b, as well as for zero-degree laparoscopes 200c), and stores the configuration parameters as f(t0) (Step 826). These configuration parameters can be communicated between the interface workstation 120 and the robotic laparoscope holder 110, the head tracking system 130, and the camera 150. Further, the configuration parameters are sufficient to define the configuration for the robotic laparoscope holder's 110 degrees of freedom at time (t) for either articulated laparoscopes 200a or angulated laparoscopes 200b. The position/orientation of the camera at time instant "t" is the "camera frame" and is represented by a 4×4 homogeneous transformation matrix MCamera-Frame(t0). The camera frame may be computed by the interface workstation 120 with respect to the robot base frame 600 based on f(t0)=<MIncision-Frame(t0), LInsertion(t0), RArticulation(t0)>, RScreen(t0), and the type of laparoscope 200a used (articulated in this case) (Step 828), or f(t0)=<MIncision-Frame(t0), LInsertion(t0)>, RScreen(t0), and the type of laparoscope 200b or 200c used (angulated or zero-degree in this case). The Z axis denotes the viewing direction of the camera.
If the Z direction and origin of both the camera frame and incision frame are aligned, the X axis of the camera frame will subtend an angle of RScreen(t0) to the X axis of the incision frame. The camera frame MCamera-Frame(t0) may then be stored as a basis for computing subsequent movements.
The position/orientation of the camera frame MCamera-Frame(t0) as it relates to the articulated laparoscope 200a may be computed in three steps by applying affine transformations: (1) the incision frame MIncision-Frame(t0) is translated along the ‘Z’ axis of the incision frame MIncision-Frame(t0) for a distance equal to the insertion length LInsertion(t0) (
The position/orientation of the camera frame MCamera-Frame(t0) as it relates to the angulated laparoscope 200b may be computed in three steps by applying affine transformations: (1) the incision frame MIncision-Frame(t0) is translated along the ‘Z’ axis of the incision frame MIncision-Frame(t0) for a distance equal to the insertion length LInsertion(t0) (
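The first of these affine transformations, translating the incision frame along its own 'Z' axis by the insertion length, may be sketched as follows; this is a hypothetical NumPy illustration, and the remaining transformation steps depend on the scope type and are not reproduced here:

```python
import numpy as np

def translate_along_z(M_frame, length):
    """Translate a frame along its own Z axis by `length`.
    Post-multiplying by a pure translation moves the frame along its
    local (not world) Z axis."""
    T = np.eye(4)
    T[2, 3] = length  # pure translation of `length` along Z
    return M_frame @ T
```

For an incision frame at the identity, the result is simply a frame displaced by the insertion length along the world Z axis; for a rotated incision frame, the displacement follows the frame's own viewing direction, which is the intended behavior at the incision point.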
After all the information from the robotic laparoscope holder 110, the head tracking system 130, and the video processing system 140 has been obtained and stored by the interface workstation 120, the interface workstation 120 re-checks the clutch switch 180 to determine whether the clutch switch 180 remains active or whether the clutch switch 180 has been deactivated (Step 830). If the clutch switch 180 has been deactivated, the interface workstation 120 sends a command to the robotic laparoscopic holder 110 to deactivate the robotic laparoscope holder 110 (Step 832). The interface workstation 120 also sends a command to the head tracking system 130 to deactivate the head tracking system 130 (Step 834). Lastly, the interface workstation 120 sends a command to the video processing system 140 to display the deactivation of the robotic laparoscope holder 110 (Step 836). Subsequently, the process flow returns to Step 810 and proceeds as described herein.
If, on the other hand, the clutch switch 180 remains active (e.g. the clutch switch 180 is still turned "ON"), such as from the initial time (t0), the interface workstation 120 requests the operator's head orientation/position from the head tracking system 130 and stores it as MHead-Frame(t) (Step 840). The new desired position MCamera-Frame(t) of the camera 150 at subsequent time instances 't' may be calculated by using the following equations:
MHeadRelativeMotion = M−1Head-Frame(t0) × MHead-Frame(t)   (1)

MCamera-Frame(t) = MCamera-Frame(t0) × MHeadRelativeMotion   (2)
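Equations (1) and (2) may be sketched, for illustration only, as follows; the frames are assumed to be 4×4 NumPy arrays, and the function names are hypothetical:

```python
import numpy as np

def head_relative_motion(M_head_t0, M_head_t):
    """Equation (1): motion of the head frame since clutch activation at t0."""
    return np.linalg.inv(M_head_t0) @ M_head_t

def new_camera_frame(M_camera_t0, M_rel):
    """Equation (2): apply the same relative motion to the stored camera
    frame, giving the one-to-one mapping of head motion to camera motion."""
    return M_camera_t0 @ M_rel

# If the head has not moved since t0, the camera frame is unchanged.
M_rel = head_relative_motion(np.eye(4), np.eye(4))
M_cam = new_camera_frame(np.eye(4), M_rel)
```

Expressing the head motion relative to its pose at clutch activation, rather than absolutely, is what allows the operator to reposition his or her head while the clutch is "OFF" without moving the camera.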
After the new desired position MCamera-Frame(t) of the camera 150 has been computed, the interface workstation 120 begins to calculate the new robotic laparoscope holder configuration parameters at subsequent time instances (t), wherein t>t0 (e.g. f(t)), and the video processing system rotational angle RScreen(t), based on f(t0), the type of laparoscope used, and the new desired position MCamera-Frame(t) of the camera 150 (Step 844). If the articulated laparoscope 200a is being used, the following configuration parameters are used: f(t0)=<MIncision-Frame(t0), LInsertion(t0), RArticulation(t0)>. If, however, the angulated laparoscope 200b is being used, the following configuration parameters are used: f(t0)=<MIncision-Frame(t0), LInsertion(t0)>.
For the articulated laparoscope 200a (
IPx(t)sin(RArticulation(t)) − IPy(t)cos(RArticulation(t)) + (L/RArticulation(t))(cos(RArticulation(t)) − 1) = 0   (3)
wherein ‘L’ denotes the constant Articulated Section Length.
After the RArticulation(t) has been computed, the head tracking system 130 communicates with the interface workstation 120 such that the interface workstation 120 may compute the insertion length LInsertion(t) and the incision frame MIncision-Frame(t) (
∥Cp(t)AP(t)∥ = (L/RArticulation(t))tan(RArticulation(t)/2)   (4)
The insertion length LInsertion(t) is then computed as the length of line segment IPAP(t) minus the length of line segment Cp(t)AP(t), wherein the incision frame MIncision-Frame(t) is defined by the incision point Ip, the 'Z' direction is defined by the vector pointing from Ip to AP(t), the 'X' axis is orthogonal to the plane defined by points Ip, AP(t), and Cp(t), and the 'Y' axis is computed as the cross product of the 'Z' and the 'X' axes (
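A minimal numeric sketch of this computation, assuming RArticulation(t) is expressed in radians and using hypothetical function and parameter names:

```python
import numpy as np

def articulated_insertion_length(dist_ip_ap, R_articulation, L):
    """LInsertion(t) = |IpAp(t)| - |Cp(t)Ap(t)|, with the second segment
    given by equation (4): (L / RArticulation) * tan(RArticulation / 2).
    The articulation angle is assumed to be in radians for this sketch."""
    seg_cp_ap = (L / R_articulation) * np.tan(R_articulation / 2.0)
    return dist_ip_ap - seg_cp_ap
```

As the articulation angle approaches zero, (L/R)tan(R/2) approaches L/2, so the segment Cp(t)AP(t) tends to half the articulated section length, consistent with a scope whose distal section is straightening out.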
For the angulated laparoscope 200b (
Referring to
Dp(t) = Ip + ((Cp(t) − Ip)·n)n   (5)

If ∥Dp(t) − Cp(t)∥ > μ,   (6)

then Dp(t) = Cp(t) + μ(Dp(t) − Cp(t))/∥Dp(t) − Cp(t)∥   (7)

LInsertion(t) = ∥Dp(t) − Ip∥   (8)
where μ is the maximum permissible distance, defined by the operator, for movement of the laparoscope's distal point Dp(t) from the point Cp(t) represented by the operator's head-motion MCamera-Frame(t).
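Equations (5)-(8) may be sketched together as follows; the vector representation and the function name are hypothetical assumptions for illustration:

```python
import numpy as np

def angulated_insertion(Ip, Cp, n, mu):
    """Project the desired camera position Cp(t) onto the insertion axis n
    through the incision point Ip (eq. 5), clamp the distal point Dp(t) to
    within mu of Cp(t) (eqs. 6-7), and return Dp(t) and LInsertion(t) (eq. 8)."""
    n = n / np.linalg.norm(n)           # unit insertion direction
    Dp = Ip + np.dot(Cp - Ip, n) * n    # (5) projection onto the axis
    gap = np.linalg.norm(Dp - Cp)
    if gap > mu:                        # (6) desired point too far off-axis
        Dp = Cp + mu * (Dp - Cp) / gap  # (7) clamp to within mu of Cp(t)
    return Dp, np.linalg.norm(Dp - Ip)  # (8) insertion length
```

The projection keeps the distal point on the line through the incision, while the clamp of equations (6)-(7) bounds how far the realized point may deviate from the operator's requested position.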
After the LInsertion(t) has been computed, the head tracking system 130 communicates with the interface workstation 120 such that the interface workstation 120 may compute the incision frame MIncision-Frame(t) (
Regardless of whether the articulated laparoscope 200a, the angulated laparoscope 200b, or the zero-degree laparoscope 200c is used, the interface workstation 120 then sends the newly computed rotational angle RScreen(t) to the video processing system 140 to rotate the video (Step 846). The interface workstation 120 sends the newly computed configuration parameters f(t) to the robotic laparoscope holder 110, such as to move the camera 150 to the desired position (Step 848). Once the robotic laparoscope holder 110 has been moved to the new position, the status of the clutch switch 180 is checked and the process continues as described herein until the surgical procedure is complete. Once the procedure has been completed, the laparoscope 200 can be moved away from the cannula 220 (Step 852). The medical professional removes the cannula 220 from the patient's body, the incision is closed (Step 854), and commands are sent to switch off the robotic laparoscope holder 110, the head tracking system 130, and the video processing system 140 (Steps 832-836).
Alternatively, the system for camera control in robotic and laparoscopic surgery 100 may be controlled remotely (e.g. telemanipulated surgical systems as shown in
For example, the operator may operate on tissue using robotic tooltips 915 and, at the same time, view the tool-tissue interaction with the camera 910 affixed to the robotic arm 900. The integration of the control of the tooltips 915 and the camera 910 may allow independent camera control; thereby allowing the hand-held console to be dedicated to the control of the tooltips 915. Such a configuration may allow the simultaneous control of both the camera 910 and tooltips 915 utilized by the operator during the surgical procedure.
The system 100 may, for example, include one robotic arm 900 with the camera 910, and two robotic arms 900 having tooltips 915, such that the surgical procedure requires making three incisions as illustrated in
It is to be understood that the present invention is not limited to the embodiments described above, but encompasses any and all embodiments within the scope of the following claims.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2017/041874 | 7/13/2017 | WO | 00

Number | Date | Country
---|---|---
62361962 | Jul 2016 | US