In general, the present invention provides improved devices, systems, and methods for using, training for the use of, planning for the use of, and/or simulating the use of elongate articulate bodies and other tools such as catheters, borescopes, continuum robotic manipulators, rigid endoscopic robotic manipulators, and the like. In exemplary embodiments, the invention provides in situ robotic catheter motion planning, and linear catheter position control over complex trajectories, particularly for catheter systems driven by fluid pressure.
Diagnosing and treating disease often involve accessing internal tissues of the human body, and open surgery is often the most straightforward approach for gaining access to internal tissues. Although open surgical techniques have been highly successful, they can impose significant trauma to collateral tissues.
To help avoid the trauma associated with open surgery, a number of minimally invasive surgical access and treatment technologies have been developed, including elongate flexible catheter structures that can be advanced along the network of blood vessel lumens extending throughout the body. While generally limiting trauma to the patient, catheter-based endoluminal therapies can be very challenging, in part due to the difficulty in accessing (and aligning with) a target tissue using an instrument traversing tortuous vasculature. Alternative minimally invasive surgical technologies include robotic surgery, and robotic systems for manipulation of flexible catheter bodies from outside the patient have also previously been proposed. Some of those prior robotic catheter systems have met with challenges, possibly because of the difficulties in effectively integrating large and complex robotic pull-wire catheter systems into the practice of interventional cardiology as it is currently performed in clinical catheter labs. While the potential improvements to surgical accuracy make these efforts alluring, the capital equipment costs and overall burden to the healthcare system of these large, specialized systems are also a concern. Examples of prior robotic disadvantages that would be beneficial to avoid may include longer setup and overall procedure times, deleterious changes in operative modality (such as a decrease in effective tactile feedback when initially accessing or advancing tools toward an internal treatment site), and the like.
A new technology for controlling the shape of catheters has recently been proposed which may present significant advantages over pull-wires and other known catheter articulation systems. As more fully explained in US Patent Publication No. US 2016/0279388, entitled “Articulation Systems, Devices, and Methods for Catheters and Other Uses,” published on Sep. 29, 2016 (assigned to the assignee of the subject application and the full disclosure of which is incorporated herein by reference), an articulation balloon array can include subsets of balloons that can be inflated to selectively bend, elongate, or stiffen segments of a catheter. These articulation systems can direct pressure from a simple fluid source (such as a pre-pressurized canister) toward a subset of articulation balloons disposed along segment(s) of the catheter inside the patient so as to induce a desired change in shape. These new technologies may provide catheter control beyond what was previously available, often without having to resort to a complex robotic gantry, without having to rely on pull-wires, and even without having the expense of electric motors. Hence, these new fluid-driven catheter systems appear to provide significant advantages.
Along with the advantages of fluid-driven technologies, significant work is now underway on improved imaging for use by interventional and other doctors in guiding the movement of articulated therapy delivery systems within a patient. Ultrasound and fluoroscopy systems often acquire planar images (in some cases on different planes at angularly offset orientations), and new three-dimensional (3D) imaging technologies have been (and are still being) developed to acquire and display 3D images. While 3D imaging has some advantages, guiding interventional procedures with reference to 2D images (and other uses) may still have benefits over at least some of the new 3D imaging and display techniques, including the ability to provide alignment with target tissues using quantitative and qualitative planar positioning guidelines that have been developed over the years.
Despite the advantages of the newly proposed fluid-driven robotic catheter and imaging systems, still further improvements and alternatives would be desirable. In general, it would be beneficial to provide further improved medical devices, systems, and methods, as well as to provide alternative devices, systems, and methods for users to input, view, and control the automated movements thereof. For example, the position and morphology of a diseased heart tissue relating to a structural heart therapy may change significantly between the day on which diagnostic and treatment planning images are obtained, and the day and time an interventional cardiologist begins deploying a therapy within the beating heart. These changes may limit the value of treatment planning prior to the start of an interventional procedure. However, excessive engagement of structural heart devices against sensitive tissues of the heart, as might be imposed when attempting multiple alternative trajectories for advancing a structural heart tool from an access pathway to a target position, could induce arrhythmias and other trauma. Hence, technologies which facilitate precise movements of these tools and/or in situ trajectory planning for at least a portion of the overall tool movement, ideally while the tool is near or in the target treatment site, would be particularly beneficial. Improved display systems that provide some or all of the benefits of both 2D and 3D imaging would also be beneficial.
The present invention generally provides improved devices, systems, and methods for using, training for the use of, planning for the use of, and/or simulating the use of elongate bodies and other tools such as catheters, borescopes, continuum robotic manipulators, rigid endoscopic robotic manipulators, and the like. The technologies described herein can facilitate precise control over both actual and virtual catheter-based therapies by, for example, allowing medical professionals to plan an automated movement of a therapeutic tool supported by the catheter based on a starting location of the catheter previously inserted into the heart. Optionally, a virtual version of the tool can be safely moved from that starting location along a meandering path through a number of different locations until the user has identified a desired ending position and orientation of the tool for the movement. A processor of the system can then generate synchronized actuator drive signals to move the tool in the chamber of the heart from its starting point to the end without following the meandering input path, with the progress of the tool along its trajectory being under the full control of the user, such as with a simple linear input that allows the user to advance or retract along a desired fraction of the trajectory. Alternative actual and/or virtual robotic systems facilitate the use of standard input devices that accommodate at least two-dimensional or planar input (such as a mouse, tablet, phone, or the like) to re-position elongate bodies such as catheters, optionally using separate but intuitive input modes for orientation and translation movements.
Still further aspects provide hybrid display formats which can take advantage of a combination of 2D image components and 3D image components in an overall 3D display space to enhance 3D situational awareness, often by combining at least one (often multiple) 2D tissue image in a 3D display space that also includes a 3D model of an instrument that can be seen in the tissue images, optionally with 2D virtual models superimposed on the tissues and instrument of the image plane. The image may optionally be shown using a 2D display modality (such as a screen) or a 3D display modality (such as a 3D stereoscopic screen, Augmented Reality (AR) or Virtual Reality (VR) glasses, or the like). Input systems that facilitate driving of catheters and other articulate bodies relative to 2D image planes, such as by keeping the tip or a tool receptacle within the field of view of an ultrasound image plane, are also provided.
In a first aspect, the invention provides an image-guided therapy method for treating a patient body. The method comprises generating a three-dimensional (3D) virtual therapy workspace inside the patient body and a three-dimensional (3D) virtual image of a therapy tool within the 3D virtual workspace. An actual 2D image of the tool in the patient body is aligned with the 3D virtual image, the actual image having an image plane. The actual image is superimposed with the 3D virtual image so as to generate a hybrid image, and the hybrid image is transmitted to a display having a display plane so as to present the hybrid image with the image plane of the actual image at an angle relative to the display plane, for example, with the actual planar image shown so as to appear at an offset angle to the plane of the display.
In optional aspects, the generating, aligning, superimposing, and transmitting may be performed by a processor by manipulating image data, with the processor typically being included in an imaging and/or therapy delivery system (ideally being included in a robotic catheter system). A number of additional aspects of the method (and corresponding apparatus) are described herein, with the aspects often being independent (so as to stand on their own merits as stand-alone methods or systems with or without the image-guided methodology described immediately above), but also being suitable to be used together.
For example, in another aspect the invention provides an image-guided therapy system for use with a tool movable in an internal surgical site. An image capture device is included in the system for acquiring an actual image encompassing the tool and a target tissue and having an image plane, and a display is also provided for displaying the actual image. The system comprises a simulation module configured to generate a three-dimensional (3D) virtual workspace and a virtual three-dimensional (3D) image of the tool within the 3D virtual workspace. A registration module is configured to align the actual image with the 3D virtual image. The simulation module is configured to superimpose the actual image with the 3D virtual image so as to transmit a hybrid image including the 3D virtual workspace and the image plane of the actual image at an angle relative to the display.
In an optional aspect, the image acquisition system employed with the systems and methods described herein may include an ultrasound imaging system for generating a plurality of planar images having first and second image planes. The simulation system can be configured to offset the first and second image planes from the virtual tool in the 3D virtual workspace and to superimpose a 2D virtual image of the tool on the first and second image planes in the hybrid image.
In another aspect, the invention provides a method for aligning a therapeutic or diagnostic tool with a target tissue adjacent an internal site in a patient. The method makes use of an elongate body inserted into the patient, the elongate body having a receptacle to support the tool. The receptacle defines a first pose within the internal surgical site. The method comprises receiving, with a processor of a surgical robotic system and from a user, input for moving the receptacle (or an image thereof) from the first pose to a second pose within the internal surgical site. The input optionally defines an intermediate input pose after the first pose and before the second pose. The processor also receives a movement command to move the receptacle, and in response, transmits drive signals to a plurality of actuators so as to advance the receptacle along a trajectory from the first pose toward the second pose, with the trajectory optionally being independent of the intermediate input pose.
In an optional aspect, the movement command received by the processor comprises a command to move along an incomplete spatial portion of a trajectory from the first pose to the second pose and to stop at an intermediate pose between the first pose and the second pose. In response to the movement command, the processor transmits drive signals to a plurality of actuators coupled with the elongate body so as to move the receptacle toward the intermediate pose.
In another aspect, the invention provides a system for aligning a therapeutic or diagnostic tool with a target tissue adjacent an internal site in a patient. The system comprises an elongate body having a proximal end and a distal end with an axis therebetween. The body has a receptacle configured to support the tool within the internal surgical site so that the tool defines a first pose. A plurality of actuators is drivingly couplable with the elongate body so as to move the receptacle within the surgical site. A processor is couplable with the actuators. The processor has a first module and a second module. The first module is configured to receive input from a user for moving the receptacle (or an image thereof) from the first pose to a second pose within the internal surgical site. The input optionally defines an intermediate input pose between the first pose and the second pose. The second module is configured to receive a movement command, and in response, to drive the actuators so as to move the receptacle along a trajectory from the first pose to the second pose, with the trajectory optionally being independent of the intermediate input pose. Typically, the input defines an input trajectory between the first pose and the second pose, and the intermediate input pose is disposed along the input trajectory. The plurality of actuators can optionally be energized so that the elongate body disregards the input trajectory as the receptacle moves along the trajectory, so that the receptacle may not be driven to (or even toward) the intermediate input pose. This can, for example, allow a user to evaluate a series of candidate tool poses and/or trajectories in silico without imposing the trauma of actually moving the tool to unsuitable configurations, all from the actual starting location and orientation of the tool in or near the heart.
In another optional aspect, the first module can optionally be configured to receive, from a user after the receptacle is in the first pose, a second pose. The second module is configured to receive, from the user, a movement command and, in response, to drive the actuators so as to move the receptacle along an incomplete spatial portion of a trajectory from the first pose to the second pose and stop at an intermediate pose between the first pose and the second pose.
Optional and independent features may be included to enhance the functionality of the devices described herein. For example, the processor may be configured to calculate the trajectory from the first pose to the second pose, and a series of intermediate poses of the receptacle along the trajectory between the first pose and the second pose. The second module may be configured to receive a series of additional movement commands, and in response, to drive the actuators so as to move the receptacle with a plurality of incremental movements along a series of incomplete portions of the trajectory between the intermediate poses. These movement commands may also induce the receptacle to stop at one or more of the intermediate poses. Advantageously, the additional movement commands can include a move back command. The processor, in response to the move back command, can be configured to drive the actuators so as to move the receptacle along the trajectory away from the second pose and toward the first pose.
Optionally, the processor can be configured so as to receive the movement command as a one-dimensional input signal corresponding to a portion of the trajectory. The processor can then be configured to energize the plurality of actuators so as to move the receptacle in a plurality of degrees of freedom of the elongate body along the trajectory. Such an arrangement provides a simple and intuitive control that keeps the movement speed and advancement under full control of the user, allowing the user to concentrate on the progress of the movement and the relationship of the tool to adjacent tissue, rather than being distracted by having to enter a complex series of multidimensional inputs that might otherwise be needed to follow a complex trajectory.
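The one-dimensional movement input described above can be sketched as a scalar parameter selecting a pose along a precomputed trajectory. The following Python fragment is purely illustrative: all names are hypothetical, and simple linear interpolation stands in for whatever trajectory planner and articulation kinematics an actual system would use.

```python
def lerp_pose(start_pose, end_pose, s):
    """Return the pose at fraction `s` of the trajectory between two poses.

    `start_pose` and `end_pose` are tuples of articulation coordinates
    (one entry per degree of freedom); `s` is the user's one-dimensional
    input, clamped to [0, 1]. Reducing `s` retraces the same path in
    reverse, which is how a "move back" command can be honored without
    ever leaving the planned trajectory.
    """
    s = max(0.0, min(1.0, s))
    return tuple(a + s * (b - a) for a, b in zip(start_pose, end_pose))

# Example: a 3-DOF articulation driven from a first pose toward a second
# pose; advancing the single scalar input to 25% moves every degree of
# freedom in a coordinated way along the precomputed trajectory.
first_pose = (0.0, 0.0, 0.0)
second_pose = (10.0, -4.0, 2.0)
quarter_way = lerp_pose(first_pose, second_pose, 0.25)
```

Here the user's only control is the scalar `s`, mirroring the idea that a single-degree-of-freedom input can drive coordinated motion across a plurality of degrees of freedom.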
Optionally, an intra-procedure image capture system can be oriented to image tissue adjacent the internal surgical site so as to generate image data. A display can be coupled to the image capture system so as to show, in response to the image data, an image of the adjacent tissue and the tool in the first pose (and/or in other poses). An input device can be coupled with the processor and disposed to facilitate entry of the input by the user with reference to the image of the adjacent tissue and the tool as shown by the display. The processor can have a simulation module configured to superimpose a graphical tool indicator with the image of the adjacent tissue in the display. A pose of the tool indicator can move with the input so as to facilitate aligning the second pose with the target tissue. Hence, the image can comprise a calculated pose of the tool indicator relative to the target tissue. The processor can have a simulation input mode in which the processor energizes the actuators so as to maintain the first pose of the tool when the user is entering the input for the second pose. This arrangement facilitates evaluation of candidate poses using a virtual or simulated tool, with the tool indicator often comprising a graphical model of the tool and at least some of the supporting catheter structure.
Optionally, the processor has a master-slave mode in which the processor energizes the actuators to move the receptacle toward the second pose while the user is entering the input for the second pose. Preferably, the processor has both a simulation mode and a master-slave mode to facilitate alignment of tools with target tissues using both a graphical tool indicator (during a portion of the procedure) and real-time or near real-time moving images of the actual tool.
Optionally, the system includes a two-dimensional input device couplable to the processor. The processor may have a first mode configured to define a position of the receptacle relative to the adjacent tissue. The processor may optionally also have a second mode configured to define an orientation of the receptacle relative to the adjacent tissue. The processor may (or may not) also have a third mode configured to manipulate an orientation of the adjacent tissue as shown in the display.
Preferably, the elongate body comprises a flexible catheter body configured to be bent proximally of the receptacle by the actuators. The actuators may comprise fluid-expandable bodies disposed along the elongate body, and a fluid supply system can couple the processor to the actuators. The fluid system can be configured to transmit fluid along channels of the elongate body to the actuators.
In another aspect, the invention provides a robotic catheter system for aligning a therapeutic or diagnostic tool with a target tissue adjacent an internal surgical site in a patient. The system comprises an elongate flexible catheter body configured to be inserted distally into the internal surgical site. The tool is supportable adjacent a distal end of the elongate body to define a first pose within the internal surgical site. A plurality of actuators are couplable to the elongate body. A processor is couplable to the actuators and configured to i) receive a desired second position of the tool within the internal surgical site, ii) calculate a tool trajectory of the tool from the first position to the second position (along with associated drive signals for the actuators to move the elongate body along the tool trajectory from the first position to the second position), iii) receive an input signal with a single degree of freedom defining a desired portion of the trajectory, and iv) drive the actuators so as to move the tool along the portion of the trajectory defined by the input signal, the portion having a plurality of degrees of freedom.
In yet another aspect, the invention provides a system for manipulating a real and/or virtual elongate tool in a three-dimensional workspace. The tool has an axis, and the system comprises an input/output (I/O) system configured for showing an image of the tool and for receiving a two-dimensional input from a user. The I/O system has a plane, and the axis of the tool as shown in the tool image has a display slope along the plane, with a first component of the input being defined parallel to a first axis corresponding to the tool display slope and a second component of the input being defined along a second axis of the input plane perpendicular to the tool display slope. A processor is coupled to the I/O system, the processor having a translation mode and an orientation mode. The processor in the orientation mode is configured to, in response to the first component of the input, induce rotation of the tool in the three-dimensional workspace about a first rotational axis. The first rotational axis is parallel to the plane and perpendicular to the tool axis. In the orientation mode, the processor is also configured to, in response to the second component of the input, induce rotation of the tool image about a second rotational axis, the second rotational axis being perpendicular to the tool axis and the first rotational axis.
Optionally, the first rotational axis and the second rotational axis intersect with the tool axis at a center of rotation. The processor may be configured to superimpose, with the image of the tool, a spherical rotation indicator concentric with the center of rotation. Rotation indicia may be included with the spherical rotation indicator, the rotation indicia rotating about the center of rotation with the input so that the indicia displayed adjacent the user move in an orientation corresponding with an orientation of the input. The rotation indicia may encircle the axis along a first side of the spherical rotation indicator toward the user from the center of rotation at a start of a rotation. The rotation indicia may rotate with the tool in the three-dimensional space so that the rotation indicia remain on the first side of the spherical rotation indicator during the rotation, and the processor, when a second side of the spherical rotation indicator opposite the first side is toward the user after the rotation, may reposition the rotation indicia to the second side. The processor, when in the translation mode, can optionally be configured to translate the tool along the first rotational axis in response to the first input component and along the second rotational axis in response to the second input component. The processor may, in response to the tool axis being within an angle range of normal to the imaging plane, align the first and second axes with the lateral display axis and the transverse display axis, respectively, the angle range being between 5 and 45 degrees.
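The orientation-mode behavior described above can be illustrated with a small vector-rotation sketch. The Python fragment below is a hedged illustration only: Rodrigues' rotation formula stands in for the system's actual kinematics, and every name (including the `gain` that converts input counts to radians) is an assumption rather than part of the disclosed system.

```python
import math

def rodrigues(v, axis, angle):
    """Rotate 3-vector `v` about unit-length `axis` by `angle` radians
    using Rodrigues' rotation formula."""
    c, s = math.cos(angle), math.sin(angle)
    ax, ay, az = axis
    vx, vy, vz = v
    dot = ax * vx + ay * vy + az * vz
    cross = (ay * vz - az * vy, az * vx - ax * vz, ax * vy - ay * vx)
    return tuple(
        vi * c + cri * s + axi * dot * (1.0 - c)
        for vi, cri, axi in zip(v, cross, axis)
    )

def orientation_mode_step(tool_axis, first_rot_axis, second_rot_axis,
                          du, dv, gain=0.01):
    """Map the two planar input components to the two rotations: `du`
    (parallel to the tool's display slope) rotates the tool axis about
    the first rotational axis, and `dv` (perpendicular to the slope)
    rotates it about the second; `gain` scales input counts to radians."""
    tool_axis = rodrigues(tool_axis, first_rot_axis, gain * du)
    return rodrigues(tool_axis, second_rot_axis, gain * dv)
```

In translation mode, the same two input components would instead be applied as displacements along the two rotational axes, as the aspect above describes.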
In another aspect, the invention provides a system for aligning a therapeutic or diagnostic tool with a target tissue adjacent an internal site in a patient. The system comprises an elongate body having a proximal end and a distal end with an axis therebetween. The body has a receptacle configured to support the tool within the internal surgical site so that the tool defines a first pose. A plurality of actuators are drivingly couplable with the elongate body so as to move the receptacle within the surgical site with a plurality of degrees of freedom. A processor is couplable with the actuators and configured to receive input from a user for moving the receptacle from the first pose to a second pose within the internal surgical site. A remote image capture system is oriented toward the internal surgical site and configured to acquire an image of the target through tissue of the patient. The processor is configured to constrain the tool to movement adjacent a plane by coordinating articulation about the degrees of freedom.
In another aspect, the invention provides a medical robotic simulation system for use with a computer coupled to an input device. The system comprises tangible media embodying machine-readable code with instructions for displaying, on a display, an image of an elongate flexible body. The body has a proximal end, a distal end, and a tool receptacle configured to support a therapeutic or diagnostic tool in alignment with a target tissue adjacent an internal surgical site. The instructions are also for receiving, with the input device, a movement command from a user. The movement command is for moving the receptacle from a first pose toward a second pose that is aligned with the target tissue within the internal surgical site. The instructions are also for transmitting at least two-dimensional input in response to the movement command and from the input device to the computer, and for determining, with the computer and in response to the input, articulation of the body so as to induce the receptacle to move toward the second pose. The instructions can also result in displaying, on the display, the determined articulation and movement of the body.
Optionally, the computer comprises an off-the-shelf computer couplable with a cloud and the input device comprises an off-the-shelf device having a sensor system configured for measuring changes in position with at least two degrees of freedom. The body may comprise a virtual flexible body, facilitating use of the system for planning, training, therapeutic tool evaluation, and/or the like. The system may also comprise an actual robotic system (in addition to or instead of being capable of virtual movements), with the system including an actual elongate body having an actual proximal end and an actual distal end with an actual receptacle configured for supporting an actual therapeutic or diagnostic tool. A plurality of actuators will typically be coupled with the elongate body, and an actual drive system can be couplable with the cloud and/or with the actuators so as to induce movement of the receptacle within an actual internal surgical site in a patient. A clinical input device having a clinical sensor system can be configured for measuring changes in position with the at least two degrees of freedom of the off-the-shelf device to allow the user to transition easily between the virtual and actual components of the system. Coupling of the virtual and actual components via the cloud facilitates analytic data tracking, coordinated updates of both systems to accommodate new and revised elongate body and therapeutic tool designs, improvements in the user interface, and the like.
In another aspect, the invention provides a method for presenting an image to a user of a target tissue of a patient body. The method comprises receiving a first two-dimensional (2D) image dataset, the first 2D dataset defining a first image including the target tissue and a tool receptacle of a tool delivery system disposed within the patient body. The first image has a first orientation relative to the receptacle. A second 2D image dataset defining a second target image including the target tissue and the tool delivery system is also received, the second image having a second orientation relative to the receptacle, the second orientation being angularly offset from the first orientation. Hybrid 2D/three-dimensional (3D) image data is transmitted to a display device so as to present a hybrid 2D/3D image for reference by the user. The hybrid image includes the first 2D image with the first orientation relative to a 3D model of the tool delivery system, and the second 2D image having the second orientation relative to the 3D model, the first and second 2D images positionally offset from the model.
Preferably, the hybrid image also includes a 3D virtual image of the model, the model comprising a calculated virtual pose of the receptacle. The first 2D image can be disposed on a first plane in the hybrid image, the first plane being offset from the model along a first normal to the first plane; and/or the second 2D image may be disposed on a second plane in the hybrid image, the second plane being offset from the model along a second normal to the second plane. With or without such a 3D model image, the hybrid image may include a first 2D virtual image of the model superimposed on the first 2D image, the first 2D virtual image being at the first orientation relative to the model; and/or the hybrid image may include a second 2D virtual image of the model superimposed on the second 2D image, the second 2D virtual image being at the second orientation relative to the model. These 2D virtual images may comprise planar images of the model (including one, some, or all of the tip, receptacle, tool, articulated body, etc.) projected onto the image data planes. As the image data planes typically also include images of both tissue and the actual tool etc., these superimposed planar images facilitate user or automated verification of alignment of the virtual model with the actual articulated device, of the movement of the articulated device relative to the tissue, and the like.
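The superimposed 2D virtual images described above amount to projecting points of the 3D model onto each image data plane. A minimal orthographic-projection sketch in Python follows; all names are hypothetical, and a real system would additionally account for the imaging modality's own projection geometry.

```python
def project_to_plane(point, plane_origin, u, v):
    """Orthographically project a 3D model point into the 2D coordinates
    of an image data plane, where `plane_origin` is a point on the plane
    and `u`, `v` are orthonormal in-plane basis vectors."""
    delta = tuple(p - o for p, o in zip(point, plane_origin))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(delta, u), dot(delta, v))

# Example: drop a virtual receptacle point onto an image plane through
# the origin spanned by the x and y axes; the resulting 2D coordinates
# can be drawn over the tissue image to verify alignment of the virtual
# model with the actual articulated device.
receptacle_2d = project_to_plane((3.0, 4.0, 7.0), (0.0, 0.0, 0.0),
                                 (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Projecting the same model points onto both angularly offset image planes yields the pair of 2D virtual images described in this aspect.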
Preferably, the model includes a phantom defining a phantom receptacle pose angularly and/or positionally offset from the virtual receptacle pose. The 3D virtual image includes an image of the phantom, and the hybrid image includes a first 2D augmented image showing the phantom with the first orientation superimposed on the first 2D image, and a second 2D augmented image of the phantom with the second orientation superimposed on the second 2D image. Optionally, the method further comprises receiving a movement command from a hand of the user to move relative to the display, and moving the phantom pose in correlation with the movement command. The moved phantom can be displayed on the first 2D image and the second 2D image. A trajectory can be calculated between the virtual tool and the phantom and the tool can be moved within the patient body by articulating an elongate body supporting the tool in response to a one-dimensional (1D) input from the user.
Independently, the devices and methods described herein may involve constraining motion of the receptacle, tool, tip, or the like relative to the first plane so that an image of the receptacle (for example) moves along the first plane, or normal to the first plane.
Optionally, the first 2D image comprises a sufficiently real-time video image for safe therapy based on that image (typically having a lag of less than 1 second). The second 2D image may comprise a recorded image (optionally being a series of recorded images, such as those included in a brief video loop) of the target tissue and the actual tool system. The first and second 2D images may comprise ultrasound, fluoroscopic, magnetic resonance imaging (MRI), computed tomography (CT), or other real-time or pre-recorded images of the target tissue, with the real-time images preferably showing the tool system.
In another aspect, the invention provides a system for presenting an image to a user for diagnosing or treating a target tissue of a patient body. The system comprises a first image input configured to receive a first two-dimensional (2D) image dataset. The first 2D dataset defines a first image showing the target tissue and a tool receptacle of a tool delivery system disposed within the patient body. The first image has a first orientation relative to the tool receptacle. A second image input is configured to receive a second 2D image dataset defining a second target image showing the target tissue and the tool delivery system, the second image having a second orientation relative to the tool receptacle. The second orientation is angularly offset from the first orientation. An output is configured to transmit hybrid 2D/three-dimensional (3D) image data to a display device so as to present a hybrid image for reference by the user. The hybrid image shows the first 2D image with the first orientation relative to a 3D model of the tool delivery system; and also shows the second 2D image having the second orientation relative to the 3D model. The first and second 2D images are positionally offset from the model.
In another aspect, the invention provides a method for moving a tool of a tool delivery system in a patient body with reference to a display image shown on a display. The display image shows a target tissue and the tool and defines a display coordinate system, the tool delivery system including an articulated elongate body coupled with the tool and having 1 or more, often 2 or more, and typically having 3 or more degrees of freedom. The method comprises determining, in response to a movement command entered by a hand of a user relative to the display image, a desired movement of the tool. In response to the movement command, an articulation of the elongate body is calculated so as to move the tool within the patient body, wherein the calculation of the articulation is performed by constraining the tool relative to a first plane of the display coordinate system so that the image of the tool moves along the first plane or normal to the first plane. The calculated articulation is transmitted so as to effect movement of the tool.
Optionally, a first two-dimensional (2D) image dataset is received, the first 2D dataset defining a first image showing the target tissue and the tool, the first image being along the first plane. Image data corresponding to the first 2D image dataset can be transmitted to the display device so as to generate the display image. Preferably, the display coordinate frame includes a view plane extending along a surface of the display, and the first plane will often be angularly offset from the view plane. The first plane can optionally be identified in response to a plane command from the user. The first image plane may have a first orientation relative to the tool, and a second 2D image dataset defining a second target image showing the target tissue and the tool delivery system may also be received, the second image having a second orientation relative to the receptacle. The second orientation may be angularly offset from the first orientation. The image data may be transmitted to the display, the image data comprising hybrid 2D/three-dimensional (3D) image data and the display presenting a hybrid image for reference by the user. The hybrid image may show the first 2D image with the first orientation relative to a 3D model of the tool delivery system, and the second 2D image having the second orientation relative to the 3D model. The first and second 2D images can be positionally offset from the model.
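The hybrid 2D/3D presentation described above can be sketched numerically as follows. This is a simplified illustration only: the function name, the convention that image planes rotate about the model's long (z) axis, and the lateral-offset convention are all assumptions for the sketch, not the actual display pipeline.

```python
import numpy as np

def image_plane_pose(orientation_deg, offset):
    """Pose (3x3 rotation, translation vector) of a 2D image plane
    relative to the 3D tool model. The plane is rotated about the
    model's long (z) axis by orientation_deg, and positionally offset
    along the rotated lateral direction so the image does not occlude
    the model (hypothetical convention for illustration)."""
    a = np.radians(orientation_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    # positional offset applied along the plane's own lateral axis
    trans = rot @ np.array([offset, 0.0, 0.0])
    return rot, trans

# Two views angularly offset by 90 degrees, as in biplane imaging
r1, t1 = image_plane_pose(0.0, 20.0)
r2, t2 = image_plane_pose(90.0, 20.0)
```

With a 90 degree angular offset the two image planes land on orthogonal sides of the 3D model, each displaced by the same positional offset.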
Preferably, the movement command is sensed in 1 or more, typically 2 or more, often 3 or more degrees of freedom, optionally in 5 or 6 degrees of freedom. The calculated movement command in a first mode may induce translation of the tool along the first plane and rotation of the tool about an axis normal to the first plane. Ideally, the calculated movement command in a second mode may induce translation of the tool normal to the first plane and rotation of the tool about an axis parallel to the first plane and normal to an axis of the tool (or along an alternative axis).
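The plane constraint underlying the two modes can be sketched as a simple vector projection: in the first mode only the component of the commanded displacement lying in the first plane is kept, while in the second mode only the component along the plane normal is kept. The function name and mode labels are assumptions for illustration; the actual articulation calculation would map these constrained displacements into joint-space commands.

```python
import numpy as np

def constrain_motion(delta, plane_normal, mode):
    """Constrain a commanded tip displacement relative to the first
    plane of the display coordinate system.
    mode 'in_plane': keep only the component lying along the plane.
    mode 'normal':   keep only the component along the plane normal."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    along_normal = np.dot(delta, n) * n
    if mode == "in_plane":
        return np.asarray(delta, float) - along_normal
    return along_normal

cmd = np.array([3.0, 4.0, 5.0])        # raw commanded displacement
d_in = constrain_motion(cmd, [0, 0, 1], "in_plane")
d_norm = constrain_motion(cmd, [0, 0, 1], "normal")
```

For a plane normal along z, the in-plane mode discards the z component and the normal mode keeps only it, so the two modes together span the full commanded motion.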
Optionally, the tool system comprises a phantom and the display image comprises an augmented reality image with a phantom image and another image of the tool. The movement command in a third mode may induce movement of the receptacle along a trajectory between the phantom image and the other image. When a workspace boundary is disposed between a location of the tool (before the commanded movement) and a desired location of the tool (as defined by the commanded movement), the movement may be limited by generating a plurality of test solutions for test movement commands at test poses of the tool along the plane. A plurality of command gradients may be determined from the test movement commands, and the movement command may be generated from the test poses and command gradients so that the commanded movement induces movement of the tool along the plane and within the workspace to adjacent the boundary.
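The boundary-limited movement can be sketched as a search over test poses along the commanded direction, stopping adjacent to the workspace boundary. In this sketch a bisection over test poses stands in for the command-gradient search described above, and the reachability test and spherical workspace are invented for illustration; a real system would evaluate articulation solutions at each test pose.

```python
import numpy as np

def limit_to_workspace(current, desired, inside, steps=32):
    """Move from current toward desired along the commanded direction,
    stopping adjacent to the workspace boundary. 'inside' is a
    reachability test (a stand-in for generating test articulation
    solutions at test poses along the plane)."""
    if inside(desired):
        return np.asarray(desired, float)
    lo, hi = 0.0, 1.0          # fractions of the commanded motion
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        p = current + mid * (desired - current)
        if inside(p):
            lo = mid           # still reachable: advance
        else:
            hi = mid           # outside: back off toward boundary
    return current + lo * (desired - current)

# Hypothetical spherical workspace of radius 10 about the origin
inside = lambda p: np.linalg.norm(p) <= 10.0
p = limit_to_workspace(np.zeros(3), np.array([20.0, 0.0, 0.0]), inside)
```

The returned pose remains inside the workspace while approaching the boundary as closely as the search resolution allows, matching the "within the workspace to adjacent the boundary" behavior described in the text.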
In another aspect, the invention provides a system for moving a tool of a tool delivery system in a patient body with reference to a display image shown on a display. The display image may show a target tissue and a tool receptacle, and may define a display coordinate system. The tool delivery system may include an articulated elongate body coupled with the tool and having 3 or more degrees of freedom. The system comprises a first processor module configured to determine, in response to a movement command entered by a hand of a user relative to the display image, a desired movement of the tool. A second processor module can be configured to determine, in response to the movement command, an articulation of the elongate body so as to move the tool within the patient body. The calculation of the articulation can be performed by constraining the tool relative to a first plane of the display coordinate system so that the image of the tool moves along the first plane, or normal to the first plane. An output can be configured to transmit the calculated articulation so as to effect movement of the tool.
Many of the systems and methods described herein may be used to articulate therapeutic delivery systems and other elongate bodies having a plurality of degrees of freedom. It will often be desirable to limit the calculated articulation commands (typically generated by the processor of the system in response to input commands from the user) so that the tool, tip, and/or receptacle is constrained to movement along a spatial construct, such as a plane, line, or the like. A workspace boundary will often be disposed between a current position of the receptacle and a desired position of the receptacle (as defined by the movement command from the user). Advantageously, the calculated articulation can be determined so as to induce movement of the receptacle along the spatial construct to adjacent the boundary. The constrained movement may be selected from the group consisting of translation movement in 3D space without rotation, movement along a plane, movement along a line, gimbal rotation about a plurality of intersecting axes, and rotation about an axis.
In yet another aspect, the invention provides a system for moving a tool of a tool delivery system in a patient body. The system includes an articulated elongate body coupled with the tool, the articulated tool having a workspace with a boundary. The system comprises an input module configured to determine a desired spatial construct and, in response to a movement command entered by a hand of a user, a desired movement of the tool. A simulation module is coupled to the input module and is configured to determine, in response to the movement command, a plurality of alternative offset command poses of the elongate body. An articulation command module is coupled to the simulation module and configured, in response to the candidate command poses, to transmit a plurality of candidate articulation commands along the construct to the simulation module; to determine a plurality of command gradients between the candidate articulation commands; and to determine an articulation command along the construct adjacent to the boundary using the gradients. The articulation command module has an output configured to transmit the articulation command so as to effect movement of the tool.
In yet another aspect, the invention provides a system for aligning a therapeutic or diagnostic tool with a target tissue adjacent an internal site in a patient. The system comprises an elongate body having a proximal end and a distal end with an axis therebetween. The body may have a receptacle configured to support the tool within the internal surgical site so that the elongate body defines a first pose. A plurality of actuators may be drivingly couplable with the elongate body so as to move the elongate body within the surgical site. A display may be configured to present an image including the elongate body to a user; and a processor may be couplable with the actuators and the display, the processor having a first module and a second module. The first module can be configured to receive input from the user for moving a virtual image of the elongate body from the first pose to a second pose on the display. The second module can be configured to receive a movement command, and in response, to drive the actuators so as to move the elongate body along a trajectory between the first pose and the second pose.
Preferably, an image capture system is coupled to the display and the processor. The first module can be configured to move the virtual image of the elongate body relative to a stored image of the internal surgical site. The second module can be configured to transmit image capture commands to the image capture system in response to the movement command such that the image capture system selectively images the elongate body just before the move, between the first and second poses, and/or when the move is complete, and ideally all three. The virtual image can be superimposed on the display of the elongate body, and the image processing system may be configured to intermittently image the elongate body while between the poses. Advantageously, the processor may include an image processing module configured to track the movement of the elongate body using intermittent images and the virtual image, such as images separated by more than 1/15th of a second, by more than 1/10th of a second, or even by more than ½ second. Nonetheless, the availability of the virtual image can facilitate image-guided movement with or without image processing-based position feedback, often with much less radiation to the patient and medical personnel than would be the case with standard fluoroscopy. Optionally, where the anatomy may move after a movement is planned and before the movement is completed, the processor can be configured to verify that the image data is within a desired safety threshold of expected image parameters, and if it is not, to stop the planned trajectory of the elongate body and/or alert the user that something has changed, thereby providing an automated safety mechanism.
In yet another aspect, the invention provides a system for aligning a therapeutic or diagnostic tool with a target tissue adjacent an internal site in a patient. The system comprises an elongate body having a proximal end and a distal end with an axis therebetween. The body has a receptacle configured to support the tool within the internal surgical site so that the elongate body defines a pose. A plurality of actuators can be drivingly couplable with the elongate body so as to move the elongate body within the surgical site. A first image capture device and a second image capture device may be included for generating first image data and second image data, respectively. A display will often be coupled to the first and second image capture devices and configured to present first and second images including the elongate body to a user, the first and second images generated using the first and second image data, respectively. A processor may be couplable with the actuators and the display, the processor having a first registration module and a second registration module. The first module can be configured for aligning a virtual image of the elongate body with the first image of the elongate body. The second module can be configured for aligning the second image of the elongate body with the virtual image.
In another aspect, the invention provides a method for driving a robotic catheter within an internal worksite of a patient body, the catheter having a passively flexible proximal portion supporting an actively articulated distal portion. The method comprises manipulating, typically manually and from outside the patient body, a proximal end of the catheter so as to induce rotational and/or axial movement of an interface between the flexible proximal portion and the distal portion, typically while the interface is within the patient body. The articulated distal portion of the catheter is articulated so as to compensate for the movement of the interface such that displacement of a distal tip of the catheter within the patient in response to the movement of the interface is inhibited.
In another optional aspect, the articulated distal portion can include a proximal articulated segment having a drive-alterable proximal curvature and a distal articulated segment having a distal drive-alterable curvature with a segment interface therebetween. The manipulating of the proximal end of the catheter may include manually rotating the proximal end of the catheter about an axis of the catheter adjacent the proximal end with a hand of a user. The rotation of the catheter may optionally be sensed, and the articulating of the articulated distal portion can be performed so as to induce precessing of the proximal curvature about the axis of the catheter adjacent the interface, optionally along with precessing of the distal curvature about the axis of the catheter adjacent the segment interface such that lateral displacement of the distal tip of the catheter in response to the manual rotation of the catheter is inhibited. Manual rotation from outside the body with a fixed catheter tip inside the body can be particularly helpful for rotation of a tool supported adjacent the tip into a desired orientation about the axis of the catheter relative to a target tissue. Relatedly, the articulated distal portion can include a proximal articulated segment having a proximal curvature and a distal articulated segment having a distal curvature with a segment interface therebetween and the manipulating of the proximal end of the catheter can comprise manually displacing the proximal end of the catheter along an axis of the catheter adjacent the proximal end with a hand of a user. 
The manual axial displacement of the catheter can be sensed and the articulating of the articulated distal portion can be performed so as to induce a first change in the proximal curvature and a second change in the distal curvature such that axial displacement of the distal tip of the catheter in response to the manual displacement of the catheter is inhibited, which can be useful for positioning a workspace of a tool adjacent the distal tip of the catheter so as to encompass a target tissue. The axial and/or rotational manual manipulation of the catheter outside the patient can be combined or used while driving a position of the tip to a new position relative to adjacent tissue.
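The rotational compensation described above can be sketched kinematically. If each articulated segment is modeled by a bend magnitude and an azimuth (the direction of its bend plane about the catheter axis), then precessing each azimuth opposite to a sensed proximal roll keeps the bend planes, and hence the tip position, fixed in world space while the tool rotates about its own axis. The representation and function name below are assumptions for this sketch, not the actual drive calculation.

```python
def compensate_base_roll(segments, sensed_roll):
    """segments: list of (bend, azimuth) pairs in radians, one per
    articulated segment. Precessing each segment's azimuth by
    -sensed_roll counteracts a manual roll of the proximal shaft so
    that lateral displacement of the distal tip is inhibited.
    A kinematic sketch only."""
    return [(bend, azimuth - sensed_roll) for bend, azimuth in segments]

segs = [(0.8, 0.0), (0.4, 1.2)]            # (bend, azimuth) in radians
compensated = compensate_base_roll(segs, 0.3)  # user rolls shaft by 0.3 rad
```

The bend magnitudes are unchanged; only the bend-plane directions precess, which is what allows a tool at the tip to be rotated into a desired orientation about the catheter axis without the tip walking laterally.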
In another aspect, the invention provides a system for driving a robotic catheter within an internal worksite of a patient body. The catheter can have a passively flexible proximal portion supporting an actively articulated distal portion. The system comprises a processor having a drive module configured to, in response to manipulating a proximal end of the catheter from outside the patient body so as to induce rotational and/or axial movement of an interface between the flexible proximal portion and the distal portion, transmit signals to articulate the articulated distal portion of the catheter. These drive signals can help compensate for the movement of the interface. More specifically, the drive signals can drive the tip such that displacement of a distal tip of the catheter within the patient (in response to the movement of the interface) is inhibited.
In another aspect, the invention provides a method for driving a medical robotic system. The system can be configured for manipulating a tool receptacle in a workspace within a patient body with reference to a display. The receptacle can define a first pose in the workspace and the display can show a workspace image of the receptacle and/or a tool supported thereby in the workspace. The method comprises receiving input, with a processor and relative to the workspace image, defining an input trajectory from the first pose to a desired pose of the receptacle and/or tool within the workspace. The processor can calculate a candidate trajectory from the first pose to the desired pose; and can transmit drive commands from the processor in response to the candidate trajectory so as to induce movement of the tool and/or receptacle toward the desired pose.
Optionally, the workspace image can include a tissue image of tissue adjacent the workspace. The tool and/or receptacle can be supported by an elongate flexible catheter having an image shown on the display. A phantom catheter having the desired pose can be superimposed on the display, along with a trajectory validation catheter between the initial pose and the desired pose. These can facilitate visual validation of catheter movement safety prior to transmitting of the drive commands. Other options include identifying a plurality of verification locations along the candidate trajectory. For any of the verification locations outside a workspace of the catheter, alternative verification locations within the workspace can be identified and a path can be smoothed in response to the verification locations and any alternative verification locations. Superimposing the validation catheter can be performed by advancing the validation catheter between the verification locations and any alternative verification locations. Still further options include identifying the first pose in response to receipt, by the processor, of a command to go back to a prior pose of the catheter. The desired pose may, for example, comprise the prior pose, and the catheter may have moved from the prior pose along a previous trajectory.
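The verification-location option can be sketched as follows: sample locations along the straight candidate trajectory, replace any location outside the workspace with an in-workspace alternative, and smooth the result. The sampling count, the spherical workspace, the radial clamp, and the moving-average smoothing are all assumptions chosen to keep the sketch short; the text leaves the specific smoothing method open.

```python
import numpy as np

def plan_verified_path(start, goal, inside, clamp, n=8):
    """Sample n verification locations along the straight candidate
    trajectory from start to goal; replace any location outside the
    workspace with the alternative returned by clamp(); then smooth
    the interior points with a simple moving average."""
    pts = [start + t * (goal - start) for t in np.linspace(0.0, 1.0, n)]
    pts = [p if inside(p) else clamp(p) for p in pts]
    smooth = [pts[0]]
    for i in range(1, n - 1):
        smooth.append((pts[i - 1] + pts[i] + pts[i + 1]) / 3.0)
    smooth.append(pts[-1])
    return smooth

# Hypothetical circular workspace of radius 10; clamp projects radially
inside = lambda p: np.linalg.norm(p) <= 10.0
clamp = lambda p: p * (10.0 / np.linalg.norm(p))
path = plan_verified_path(np.zeros(2), np.array([14.0, 0.0]), inside, clamp)
```

A validation catheter could then be advanced through these smoothed locations on the display before any drive command is transmitted.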
In yet another aspect, the invention provides a processor for driving a medical robotic system. The system can have a display and a tool receptacle movable in a workspace within a patient body with reference to the display. The receptacle can, in use, define a first pose in the workspace and the display can show a workspace image of the receptacle (and/or a tool supported thereby) in the workspace. The processor can comprise an input module configured for receiving input, relative to the workspace image, defining an input trajectory from the first pose to a desired pose of the receptacle and/or tool within the workspace. A simulation module can be configured for calculating, with the processor, a candidate trajectory from the first pose to the desired pose. An output of the processor can be configured for transmitting drive commands in response to the candidate trajectory so as to induce movement of the tool and/or receptacle toward the desired pose.
The improved devices, systems, and methods for controlling, image guidance of, inputting commands into, and simulating movement of powered and robotic devices will find a wide variety of uses. The elongate tool-supporting structures described herein will often be flexible, typically comprising catheters suitable for insertion in a patient body. Exemplary systems will be configured for insertion into the vascular system, the systems typically including a cardiac catheter and supporting a structural heart tool for repairing or replacing a valve of the heart, occluding an ostium or passage, or the like. Other cardiac catheter systems will be configured for diagnosis and/or treatment of congenital defects of the heart, or may comprise electrophysiology catheters configured for diagnosing or inhibiting arrhythmias (optionally by ablating a pattern of tissue bordering or near a heart chamber).
Alternative applications may include use in steerable supports of image acquisition devices such as for trans-esophageal echocardiography (TEE), intracardiac echocardiography (ICE), and other ultrasound techniques, endoscopy, and the like. The structures described herein will often find applications for diagnosing or treating the disease states of or adjacent to the cardiovascular system, the alimentary tract, the airways, the urogenital system, and/or other lumen systems of a patient body. Other medical tools making use of the articulation systems described herein may be configured for endoscopic procedures, or even for open surgical procedures, such as for supporting, moving and aligning image capture devices, other sensor systems, or energy delivery tools, for tissue retraction or support, for therapeutic tissue remodeling tools, or the like. Alternative elongate flexible bodies that include the articulation technologies described herein may find applications in industrial applications (such as for electronic device assembly or test equipment, for orienting and positioning image acquisition devices, or the like). Still further elongate articulatable devices embodying the techniques described herein may be configured for use in consumer products, for retail applications, for entertainment, or the like, and wherever it is desirable to provide simple articulated assemblies with one or more (preferably multiple) degrees of freedom without having to resort to complex rigid linkages.
Embodiments provided herein may use balloon-like structures to effect articulation of the elongate catheter or other body. The term “articulation balloon” may be used to refer to a component which expands on inflation with a fluid and is arranged so that on expansion the primary effect is to cause articulation of the elongate body. Note that this use of such a structure is contrasted with a conventional interventional balloon whose primary effect on expansion is to cause substantial radially outward expansion from the outer profile of the overall device, for example to dilate or occlude or anchor in a vessel in which the device is located. Independently, articulated medical structures described herein will often have an articulated distal portion, and an unarticulated proximal portion, which may significantly simplify initial advancement of the structure into a patient using standard catheterization techniques.
The robotic systems described herein will often include an input device, a driver, and an articulated catheter or other robotic manipulator supporting a diagnostic or therapeutic tool. The user will typically input commands into the input device, which will generate and transmit corresponding input command signals. The driver will generally provide both power for and articulation movement control over the tool. Hence, somewhat analogous to a motor driver, the driver structures described herein will receive the input command signals from the input device and will output drive signals to the tool-supporting articulated structure so as to effect robotic movement of an articulated feature of the tool (such as movement of one or more laterally deflectable segments of a catheter in multiple degrees of freedom). The drive signals may comprise fluidic commands, such as pressurized pneumatic or hydraulic flows transmitted from the driver to the tool-supporting catheter along a plurality of fluid channels. Optionally, the drive signals may comprise electromagnetic, optical, or other signals, preferably (although not necessarily) in combination with fluidic drive signals. Unlike many robotic systems, the robotic tool supporting structure will often (though not always) have a passively flexible portion between the articulated feature (typically disposed along a distal portion of a catheter or other tool manipulator) and the driver (typically coupled to a proximal end of the catheter or tool manipulator). The system will be driven while sufficient environmental forces are imposed against the tool or catheter to impose one or more bends along this passive proximal portion, the system often being configured for use with the bend(s) resiliently deflecting an axis of the catheter or other tool manipulator by 10 degrees or more, more than 20 degrees, or even more than 45 degrees.
The catheter bodies (and many of the other elongate flexible bodies that benefit from the inventions described herein) will often be described herein as having or defining an axis, such that the axis extends along the elongate length of the body. As the bodies are flexible, the local orientation of this axis may vary along the length of the body, and while the axis will often be a central axis defined at or near a center of a cross-section of the body, eccentric axes near an outer surface of the body might also be used. It should be understood, for example, that an elongate structure that extends “along an axis” may have its longest dimension extending in an orientation that has a significant axial component, but the length of that structure need not be precisely parallel to the axis. Similarly, an elongate structure that extends “primarily along the axis” and the like will generally have a length that extends along an orientation that has a greater axial component than components in other orientations orthogonal to the axis. Other orientations may be defined relative to the axis of the body, including orientations that are transverse to the axis (which will encompass orientations that generally extend across the axis, but need not be orthogonal to the axis), orientations that are lateral to the axis (which will encompass orientations that have a significant radial component relative to the axis), orientations that are circumferential relative to the axis (which will encompass orientations that extend around the axis), and the like. The orientations of surfaces may be described herein by reference to the normal of the surface extending away from the structure underlying the surface.
As an example, in a simple, solid cylindrical body that has an axis that extends from a proximal end of the body to the distal end of the body, the distal-most end of the body may be described as being distally oriented, the proximal end may be described as being proximally oriented, and the curved outer surface of the cylinder between the proximal and distal ends may be described as being radially oriented. As another example, an elongate helical structure extending axially around the above cylindrical body, with the helical structure comprising a wire with a square cross section wrapped around the cylinder at a 20 degree angle, might be described herein as having two opposed axial surfaces (with one being primarily proximally oriented, one being primarily distally oriented). The outermost surface of that wire might be described as being oriented exactly radially outwardly, while the opposed inner surface of the wire might be described as being oriented radially inwardly, and so forth.
Referring first to
During use, catheter 12 extends distally from driver system 14 through a vascular access site S, optionally (though not necessarily) using an introducer sheath. A sterile field 18 encompasses access site S, catheter 12, and some or all of an outer surface of driver assembly 14. Driver assembly 14 will generally include components that power automated movement of the distal end of catheter 12 within patient P, with at least a portion of the power often being transmitted along the catheter body as a hydraulic or pneumatic fluid flow. To facilitate movement of a catheter-mounted therapeutic tool per the commands of user U, system 10 will typically include data processing circuitry, often including a processor within the driver assembly. Regarding that processor and the other data processing components of system 10, a wide variety of data processing architectures may be employed. The processor, associated pressure and/or position sensors of the driver assembly, and data input device 16, optionally together with any additional general purpose or proprietary computing device (such as a desktop PC, notebook PC, tablet, server, remote computing or interface device, or the like) will generally include a combination of data processing hardware and software, with the hardware including an input, an output (such as a sound generator, indicator lights, printer, and/or an image display), and one or more processor board(s). These components are included in a processor system capable of performing the transformations, kinematic analysis, and matrix processing functionality associated with generating the valve commands, along with the appropriate connectors, conductors, wireless telemetry, and the like. The processing capabilities may be centralized in a single processor board, or may be distributed among various components so that smaller volumes of higher-level data can be transmitted. 
The processor(s) will often include one or more memory or other form of volatile or non-volatile storage media, and the functionality used to perform the methods described herein will often include software or firmware embodied therein. The software will typically comprise machine-readable programming code or instructions embodied in non-volatile media and may be arranged in a wide variety of alternative code architectures, varying from a single monolithic code running on a single processor to a large number of specialized subroutines, classes, or objects being run in parallel on a number of separate processor sub-units.
Referring still to
Referring now to
Referring now to
Referring still to
Referring now to
Referring now to
Alternative catheter 112 can be replaceably coupled with alternative driver assembly 114. When simulation system 101 is used for driving an actual catheter, the coupling may be performed using a quick-release engagement between an interface 113 on a proximal housing of the catheter and a catheter receptacle 103 of the driver assembly. An elongate body 105 of catheter 112 has a proximal/distal axis as described above and a distal receptacle 107 that is configured to support a therapeutic or diagnostic tool 109 such as a structural heart tool for repairing or replacing a valve of a heart. The tool receptacle may comprise an axial lumen for receiving the tool within or through the catheter body, a surface of the body to which the tool is permanently affixed, or the like. Alternative drive assembly 114 may be wirelessly coupled to a simulation computer 115 and/or a simulation input device 116, or cables may be used for transmission of data.
When alternative catheter 112 and alternative drive system 114 comprise virtual structures, they may be embodied as modules of software, firmware, and/or hardware. The modules may optionally be configured for performing articulation calculations modeling performance of some or all of the actual clinical components as described below, and/or may be embodied as a series of look-up tables to allow computer 115 to generate a display effectively representing the performance. The modules will optionally be embodied at least in-part in a non-volatile memory of a simulation-supporting alternative drive assembly 121a, but some or all of the simulation modules will preferably be embodied as software in non-volatile memories 121b, 121c of simulation computer 115 and/or simulation input device 116, respectively. Coupling of alternative virtual catheters and tools can be performed using menu options or the like. In some embodiments, selection of a virtual catheter may be facilitated by a signal generated in response to mounting of an actual catheter to an actual driver.
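A look-up-table simulation module of the kind mentioned above can be sketched as a simple interpolation: given a table mapping an actuation input to a modeled response, the simulator interpolates between the tabulated entries instead of running a full articulation solver. The table values, units, and pressure-to-angle mapping below are invented purely for illustration.

```python
import bisect

def lut_tip_angle(pressure, table):
    """Linearly interpolate a (pressure -> segment bend angle) look-up
    table, a stand-in for simulation modules that model catheter
    performance without a full articulation calculation. Values
    outside the table are clamped to the nearest entry."""
    ps = [p for p, _ in table]
    i = bisect.bisect_left(ps, pressure)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (p0, a0), (p1, a1) = table[i - 1], table[i]
    return a0 + (a1 - a0) * (pressure - p0) / (p1 - p0)

# Hypothetical calibration data: inflation pressure (psi) -> bend (deg)
table = [(0.0, 0.0), (10.0, 15.0), (20.0, 35.0)]
angle = lut_tip_angle(15.0, table)
```

Interpolated tables like this can be fast enough for interactive simulation display while still reflecting measured or modeled performance of the clinical components.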
Simulation computer 115 preferably comprises an off-the-shelf notebook or desktop computer that can be coupled to cloud 17, optionally via an intranet, the internet, an ethernet, or the like, typically using a wireless router or a cable coupling the simulation computer to a server. Cloud 17 will preferably provide data communication between simulation computer 115 and a remote server, with the remote server also being in communication with a processor of other simulation computers 115 and/or one or more clinical drive assemblies 14. Simulation computer 115 may also comprise code with a virtual 3D workspace, the workspace optionally being generated using a proprietary or commercially available 3D development engine that can also be used for developing games and the like, such as Unity™ as commercialized by Unity Technologies. Suitable off-the-shelf computers may include any of a variety of operating systems (such as Windows from Microsoft, macOS from Apple, Linux, or the like), along with a variety of additional proprietary and commercially available apps and programs.
Simulation input device 116 may comprise an off-the-shelf input device having a sensor system for measuring input commands in at least two degrees of freedom, preferably in 3 or more degrees of freedom, and in some cases 5, 6, or more degrees of freedom. Suitable off-the-shelf input devices include a mouse (optionally with a scroll wheel or the like to facilitate input in a 3rd degree of freedom), a tablet or phone having an X-Y touch screen (optionally with AR capabilities such as being compliant with ARCore from Google, ARKit from Apple, or the like to facilitate input of translation and/or rotation, along with multi-finger gestures such as pinching, rotation, and the like), a gamepad, a 3D mouse, a 3D stylus, or the like. Proprietary code may be loaded on the simulation input device (particularly when a phone, tablet, or other device having a touchscreen is used), with such input device code presenting menu options for inputting additional commands and changing modes of operation of the simulation or clinical system. A simulation input/output system 111 may be defined by the simulation input device 116 and the simulation display SD.
System Motion Equations
Referring now to
Referring now to
Startup Position:
Segments start by expanding the balloon array inflations to predetermined levels. The Segment is driven to a predetermined and straight (or nearly straight) condition defined by an initial joint space vector js, which accounts for all the Segments' initial states.
To move to a desired location, balloon array conditions are changed to locate segment angles and displacement. The first step is to determine the current and desired position vectors for the robot tip in the robot's base coordinate system, sometimes referred to herein as world space. Note that world space may move relative to the patient and/or the operating room with manual repositioning of the catheter or the like, so that this world space may differ from a fixed room space or global world space.
User Input Commands/Tip Vector q:
Referring now to
QC=(XC,YC,ZC,αC,βC)
The user input q represents a velocity (or small displacements) command. The tip coordinate system resides at the current tip position, and therefore the current q, or qc is always at the origin:
qc=(0,0,0,0,0)
Since qc is zero the desired displacement, qd, is equivalent to the change in q or dq as shown here:
dq=qd−qc=qd
To simplify, dq is replaced simply with q to represent the desired change in position,
q=qd=(xT,yT,zT,αT,βT),
where xT, yT, zT, αT, and βT describes a change vector in tip space coordinates. q is then used by the current Transformation Matrix, T0Tc, to acquire the desired world coordinates, Qd.
Qd=T0Tc(qd)=(Xd,Yd,Zd,αd,βd)
Where the Tip's current world coordinates are defined by Qc.
QC=T0Tc(qc)=(XC,YC,ZC,αC,βC)
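As a concrete sketch of this mapping, a tip-space command can be pushed through the current tip transform as follows. This is a hypothetical helper: it assumes the transformation matrix is a 4×4 nested list and uses Python's atan2(y, x) argument order rather than the symbolic atan 2 notation used here.

```python
import math

def apply_tip_command(T0Tc, q):
    """Map a tip-space command q = (x, y, z, alpha, beta) through the
    current tip transformation matrix T0Tc (4x4 nested lists) into
    world coordinates Qd = (Xd, Yd, Zd, alpha_d, beta_d)."""
    x, y, z, alpha, beta = q
    # Position: transform the homogeneous point (x, y, z, 1).
    Xd, Yd, Zd = (sum(T0Tc[r][c] * v
                      for c, v in enumerate((x, y, z, 1.0)))
                  for r in range(3))
    # Orientation: transform the tip axis direction
    # (aT, bT, cT) = (cos(a)*sin(b), sin(a)*sin(b), cos(b)).
    aT = math.cos(alpha) * math.sin(beta)
    bT = math.sin(alpha) * math.sin(beta)
    cT = math.cos(beta)
    Ad, Bd, Cd = (sum(T0Tc[r][c] * v
                      for c, v in enumerate((aT, bT, cT, 0.0)))
                  for r in range(3))
    alpha_d = math.atan2(Bd, Ad)                 # bend direction about z
    beta_d = math.atan2(math.hypot(Ad, Bd), Cd)  # bend amount off z
    return Xd, Yd, Zd, alpha_d, beta_d

# With an identity transform, a pure translation command passes through
# unchanged and the tip stays aligned with the z axis (beta_d = 0).
I4 = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0],
      [0.0, 0.0, 0.0, 1.0]]
```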
The Joint Space vector, J, contains the Segment angles and displacement information which is used to solve for the Transformation and Rotation matrices, T0T and R0T respectively. The Transformation and Rotation Matrices are discussed below.
Current Catheter State or Pose
The current world coordinate vector QC is defined by the tip q vector with no displacement which is qc, and can be resolved as follows:
QC=T0Tc(qc)=(XC,YC,ZC,αC,βC)
The coordinates may be found by the following math,
(XC,YC,ZC,1)T=T0Tc·(0,0,0,1)T
aT=cos(0)*sin(0)=0
bT=sin(0)*sin(0)=0
cT=cos(0)=1
(Ac,Bc,Cc,0)T=T0Tc·(0,0,1,0)T
αc=atan 2(Ac,Bc)
βc=atan 2(Cc,Hc)
Hc=(Ac2+Bc2)1/2
The range of Beta (β) may be limited if the rotation matrix is not used. This is due to use of the hypotenuse (H) quantity, which removes the negative sign from one side of the atan 2 formula as follows:
H=(A2+B2)1/2
β=atan 2(C,H), 0<β<180
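A small numeric illustration of this two-quadrant limitation (a hypothetical helper, using Python's atan2(y, x) argument order):

```python
import math

def beta_two_quadrant(A, B, C):
    """Bend amount recovered from a tip axis vector (A, B, C) using the
    hypotenuse H. Because H = (A^2 + B^2)^(1/2) is never negative, the
    result is confined to 0..180 degrees."""
    H = math.hypot(A, B)
    return math.degrees(math.atan2(H, C))
```

Axis vectors on opposite lateral sides produce the same beta, which is exactly the sign information the rotation-matrix method recovers.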
The desired world coordinate vector Qd is defined by the tip q vector with desired displacement which is qd, and can be resolved as follows:
Qd=T0Tc(qd)=(Xd,Yd,Zd,αd,βd)
The coordinates may be found by the following math,
(Xd,Yd,Zd,1)T=T0T·(xT,yT,zT,1)T
aT=cos(αT)*sin(βT); bT=sin(αT)*sin(βT); cT=cos(βT)
(Ad,Bd,Cd,0)T=T0T·(aT,bT,cT,0)T
αd=atan 2(Ad,Bd)
βd=atan 2(Cd,Hd)
Hd=(Ad2+Bd2)1/2
The following alternative formula solves Beta (β) in all four quadrants for a full 360 degrees (as opposed to only two quadrants and 180 degrees) and for Gamma (Γ), the sixth and final coordinate to define the position in 3D space.
Jc (current Joint Space variables) is used to solve for the current T0Tc and R0Tc
(XC,YC,ZC,1)T=T0Tc·(0,0,0,1)T
(aTx,bTx,cTx)T=R0Tc×(1,0,0)T
(aTy,bTy,cTy)T=R0Tc×(0,1,0)T
(aTz,bTz,cTz)T=R0Tc×(0,0,1)T
αc=atan 2(aTz,bTz)
if |aTz|<min; then aTz=(aTz/|aTz|)*min
βc=atan 2(cαz,aαz)
aαz=cos α*aTz+sin α*bTz
cαz=cTz
if |cαz|<min; then cαz=(cαz/|cαz|)*min
Γc=atan 2(aβx,bβx)
aβx=cos α*cos β*aTx+sin α*cos β*bTx−sin β*cTx
bβx=−sin α*aTx+cos α*bTx
if |aβx|<min; then aβx=(aβx/|aβx|)*min
(Xd,Yd,Zd,1)T=T0Tc·(xT,yT,zT,1)T
Use Qd with Inverse Jacobian to solve for Segment angles and displacements, Jd (desired Joint Space variables).
Use Jd to solve for the new T0Td and R0Td
(Xd,Yd,Zd,1)T=T0Td·(0,0,0,1)T
(aTx,bTx,cTx)T=R0Td×(1,0,0)T
(aTy,bTy,cTy)T=R0Td×(0,1,0)T
(aTz,bTz,cTz)T=R0Td×(0,0,1)T
αd=atan 2(aTz,bTz)
if |aTz|<min; then aTz=(aTz/|aTz|)*min
βd=atan 2(cαz,aαz)
aαz=cos α*aTz+sin α*bTz
cαz=cTz
if |cαz|<min; then cαz=(cαz/|cαz|)*min
Γd=atan 2(aβx,bβx)
aβx=cos α*cos β*aTx+sin α*cos β*bTx−sin β*cTx
bβx=−sin α*aTx+cos α*bTx
if |aβx|<min; then aβx=(aβx/|aβx|)*min
Basis for Alternative Formulas:
Solve for base coordinate axis unit vectors
(aTx,bTx,cTx)T=R0T×(1,0,0)T
(aTy,bTy,cTy)T=R0T×(0,1,0)T
(aTz,bTz,cTz)T=R0T×(0,0,1)T
Referring now to
αT=atan 2(aTz,bTz)
The rotation matrix for alpha (α) is the following:
The inverse of this rotation matrix is the transpose.
Applying the rotation RαT removes the bα component and aligns the beta (β) angle within X′-Z′ plane. This allows full circumferential angle determination of beta (β).
βT=atan 2(cαz,aαz)
aαz=cos α*aTz+sin α*bTz
bαz=−sin α*aTz+cos α*bTz=0
cαz=cTz
To find gamma (γ) use alpha (α) and beta (β) to create a rotation matrix as follows:
Remove the alpha (α) and beta (β) from the Y axis vector to solve for gamma (γ). This can be done by inverting the rotation matrix and multiplying by the Y axis unit vector. The inverse of this rotation matrix is the transpose.
This new Roll vector should have a zero in the cTr positions placing the vector on a Tip coordinate X-Y plane with values for aTr and bTr. Use these two values to determine gamma (γ) as follows:
γT=atan 2(aβx,bβx)
aβx=cos α*cos β*aTx+sin α*cos β*bTx−sin β*cTx
bβx=−sin α*aTx+cos α*bTx
cβx=cos α*sin β*aTx+sin α*sin β*bTx+cos β*cTx=0
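The alternative formulas can be sketched end to end, assuming the orientation follows an intrinsic z-y′-z″ rotation sequence (consistent with the Axis of Rotation discussion later in this document) and using Python's atan2(y, x) ordering; the function names are illustrative:

```python
import math

def rot_zyz(alpha, beta, gamma):
    """Intrinsic z-y'-z'' rotation matrix as 3x3 nested lists."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [ca * cb * cg - sa * sg, -ca * cb * sg - sa * cg, ca * sb],
        [sa * cb * cg + ca * sg, -sa * cb * sg + ca * cg, sa * sb],
        [-sb * cg,               sb * sg,                 cb],
    ]

def angles_from_rot(R):
    """Recover (alpha, beta, gamma) from the rotated X and Z axis
    columns of R, mirroring the alternative formulas above; beta and
    gamma are solved with two signed components so atan2 has their
    full circular range available."""
    aTx, bTx, cTx = R[0][0], R[1][0], R[2][0]   # rotated X axis
    aTz, bTz, cTz = R[0][2], R[1][2], R[2][2]   # rotated Z axis
    alpha = math.atan2(bTz, aTz)
    ca, sa = math.cos(alpha), math.sin(alpha)
    a_az = ca * aTz + sa * bTz                  # = sin(beta)
    beta = math.atan2(a_az, cTz)                # cTz = cos(beta)
    cb, sb = math.cos(beta), math.sin(beta)
    a_bx = ca * cb * aTx + sa * cb * bTx - sb * cTx  # = cos(gamma)
    b_bx = -sa * aTx + ca * bTx                      # = sin(gamma)
    gamma = math.atan2(b_bx, a_bx)
    return alpha, beta, gamma
```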
Limiting Input Commands to Facilitate Solution:
As discussed in the previous section, this user input vector q is used to find the desired world space vector Qd using the Transformation Matrix T0T. The Qd vector finds the desired coordinate values for Xd, Yd, Zd, αd, and βd.
Due to coordinate frame limitations, beta (β) should be greater than zero and less than 180 degrees. As beta approaches these limits, the Inverse Kinematics solver may become unstable. This instability is remedied by assigning a maximum and minimum value for beta that is higher than zero and lower than 180 degrees. How close these values can be to the limits depends on multiple variables, and it is best to validate stability with the assigned limits. For example, a suitable beta minimum may be 0.01 degrees and maximum 178 degrees.
The optimization scheme used to solve for the joint vector j (through the Inverse Kinematics) may become unstable with large changes in position and angles. While large displacements and orientation changes will often resolve, there are times when they may not. Limiting the position and angle changes will help maintain mathematical stability. For large q changes, dividing the path into multiple steps may be helpful, optionally with a Trajectory planner. For reference, the maximum displacement per command may be set for 3.464 mm and maximum angle set for 2 degrees. The displacement is defined by the following:
Displacement=(xT2+yT2+zT2)1/2<2 mm
Angle=βT degrees
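A naive equal-subdivision sketch of this limiting strategy (the step-splitting rule is an assumption for illustration, not the Trajectory planner itself):

```python
import math

def subdivide_command(q, max_disp=3.464, max_angle=2.0):
    """Split a large tip command q = (x, y, z, alpha, beta) into equal
    sub-steps so that each step respects the per-command displacement
    (mm) and bend angle (degrees) limits discussed above."""
    x, y, z, alpha, beta = q
    disp = math.sqrt(x * x + y * y + z * z)
    steps = max(1,
                math.ceil(disp / max_disp),
                math.ceil(abs(beta) / max_angle))
    # Each sub-step is the command scaled down; the steps sum back to q.
    return [tuple(c / steps for c in q)] * steps
```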
Segment Rotational Matrix:
Referring now to
R=Joint Rotational matrix.
Segment Position Matrix:
Referring now to
P=point in space relative to reference frame
r=S/β
x=r*cα*[1−cβ]
x=(S/β)*cα*[1−cβ]
y=(S/β)*sα*[1−cβ]
z=(S/β)*sβ
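The position equations above can be sketched directly (angles in radians; the straight-segment guard for β near zero is an added assumption to avoid division by zero):

```python
import math

def segment_tip_position(alpha, beta, S):
    """Distal end position of a constant-curvature segment from the
    equations above: r = S/beta, x = r*cos(alpha)*(1 - cos(beta)),
    y = r*sin(alpha)*(1 - cos(beta)), z = r*sin(beta)."""
    if abs(beta) < 1e-9:
        return 0.0, 0.0, S          # straight-segment limit
    r = S / beta
    x = r * math.cos(alpha) * (1.0 - math.cos(beta))
    y = r * math.sin(alpha) * (1.0 - math.cos(beta))
    z = r * math.sin(beta)
    return x, y, z
```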
Segment Transformation Matrix:
Combine rotation and position matrix into a transformation matrix.
Rotational Matrix Generalized:
Ri indicates the rotation of a reference frame attached to the tip of segment “i” relative to a reference frame attached to its base (or the tip of Segment “i−1”).
Ri=R(αi,βi,Si); i=0,1,2, . . . ,n;
i=0, sensor readings input by manual (rotation & axial motion) actuation of the catheter proximal of the first segment.
i=1, is the most proximal segment.
i=n, the most distal segment (which is 2 for a two-segment catheter).
System Position Generalized:
Pi indicates the origin of the distal end of segment “i” relative to a reference frame attached to its base (or the tip of Segment “i−1”).
Continuum Translation Matrix:
Ti is the transformation matrix from a frame at the distal end of segment “i” to a frame attached to its base (or the tip of Segment “i−1”).
Tw is the transformation matrix from the most distal segment's tip reference frame to the world reference frame which is located proximal to the manually (versus fluidically) driven joints.
Tw=T0×T1×T2× . . . ×Tn
Use this matrix to solve the Forward Kinematics for current tip position Pw and axial unit vector Vwz.
Pw=Tw*(0,0,0,1)
Pw=(xw,yw,zw)
Vwz=Tw*(0,0,1,0)
Vwz=(awz,bwz,cwz)
Combine for World Space Tip Position:
Solve the tip world space vector Qw by combining Pw and Vwz as follows.
Qw=(Xw,Yw,Zw,αw,βw)
Xw=xn
Yw=yn
Zw=zn
αw=atan 2(awz,bwz)
if |awz|<min; then awz=(awz/|awz|)*min
βw=atan 2(cwz,abwz)
abwz=(awz2+bwz2)1/2
if |cwz|<min; then cwz=(cwz/|cwz|)*min
New βw and γw
βw=atan 2(cwz,abwz)
abwz=cos α*awz+sin α*bwz
if |cwz|<min; then cwz=(cwz/|cwz|)*min
γw=atan 2(aβx,bβx)
aβx=cos α*cos β*awx+sin α*cos β*bwx−sin β*cwx
bβx=−sin α*awx+cos α*bwx
if |aβx|<min; then aβx=(aβx/|aβx|)*min
Convert QW to QJ for Use with Jacobian
βWx=βw*cos(αw)
βWy=βw*sin(αw)
QJ=(XW,YW,ZW,βWx,βWy)
Numerical Jacobian:
To solve the unique QJ for a deviation in joint variable (αi, βi, Si), one at a time, deviate each variable in every Segment by using the Transformation matrix. Then combine the resultant QJ vectors to form a numeric Jacobian. By using small single-variable deviations from the current joint space, a localized Jacobian can be obtained. This Jacobian can be used in several ways to iteratively find a solution for segment joint variables to a desired world space position, Qd. Preferably the Jacobian is invertible, in which case the difference vector between the current and desired world position can be multiplied by the inverse Jacobian J−1 to iteratively approach the correct joint space variables. Repeat this process until the Forward Kinematics calculates a position vector equivalent to Qd within a minimum error. Check Joint Space results (αi, βi, Si for i=0, 1, 2, . . . , n) for accuracy, solvability, workspace limitations and errors.
J−1*J=I
αi>=−360° & αi<=360°
βi>βmin & βi<βmax
βi≠0° or 180° (or βmax)
βw≠0° or 180°
Si>Smin; Si<Smax
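The numeric-Jacobian iteration can be sketched with a toy planar two-link arm standing in for the segment forward kinematics (all names, the toy kinematics, and the hand-written 2×2 inverse are illustrative assumptions, not the actual controller):

```python
import math

def numeric_jacobian(fk, j, eps=1e-6):
    """Numeric Jacobian of forward kinematics fk at joint vector j,
    built by deviating one joint variable at a time."""
    f0 = fk(j)
    J = [[0.0] * len(j) for _ in f0]
    for c in range(len(j)):
        jp = list(j)
        jp[c] += eps
        fp = fk(jp)
        for r in range(len(f0)):
            J[r][c] = (fp[r] - f0[r]) / eps
    return J

def solve_ik(fk, j0, target, iters=50, tol=1e-9):
    """Iterate j <- j + J^-1 * (target - fk(j)) until the error meets a
    minimum condition. A real system would use a linear solver plus
    the workspace checks listed above."""
    j = list(j0)
    for _ in range(iters):
        e = [t - v for t, v in zip(target, fk(j))]
        if max(abs(v) for v in e) < tol:
            break
        J = numeric_jacobian(fk, j)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        j[0] += (J[1][1] * e[0] - J[0][1] * e[1]) / det
        j[1] += (-J[1][0] * e[0] + J[0][0] * e[1]) / det
    return j

def fk_two_link(j):
    """Toy planar two-link forward kinematics (unit link lengths)."""
    return (math.cos(j[0]) + math.cos(j[0] + j[1]),
            math.sin(j[0]) + math.sin(j[0] + j[1]))
```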
Referring now to
Solving for Segment Balloon Array End Coordinates:
A balloon array is a set of fluidically connected balloons along one side of a segment. Find end coordinates for each balloon array within a segment base frame. Assume balloon arrays are spaced 120 degrees apart around the chordal axis, that the first is located on the X axis, and that the array balloons remain axially aligned through the length of the segment.
rA=radius of balloon center in balloon array from chordal axis
θ=angular period of balloons within segment (120° for a 3 array segment)
θA=0; θB=120; θC=240
Arc Start points for balloon arrays:
Arc unit vectors for axial orientation of balloon arrays are as follows:
Use local (for one Segment) Transformation Matrix to find Balloon Array distal end coordinates.
Pi1=T×Pi0
P1A=T×P0A
P1B=T×P0B
P1C=T×P0C
For each Balloon Array set the origin at the starting points to normalize distal endpoint coordinates dP. This is helpful for solving for the Balloon Array arc, S, which follows in the Find Array Arc Lengths section below.
dPi=Pi1−Pi0=(x0,y0,z0) Segment Centerline Chord
dPA=PA1−PA0=(xA,yA,zA) Balloon Chord A
dPB=PB1−PB0=(xB,yB,zB) Balloon Chord B
dPC=PC1−PC0=(xC,yC,zC) Balloon Chord C
All chordal orientation vectors are equivalent.
V=(aiz,biz,ciz), for i from 1 to n.
Find Array Arc Lengths:
Referring again to
(α, β, S): α represents the bend direction (about the z axis starting at the x axis), β the bend amount (off the z axis), and S the length of an arc anchored to the origin of a reference frame.
(x, y, z) is the coordinate location of a point at the end of the arc.
r is the balloon array chord radius.
h=(x2+y2)1/2
r=(h2+z2)/(2*h)
r−h=(z2−h2)/(2*h)
α=atan 2(x,y)
β=atan 2(z,r−h)
S=r*β
Si=(hi2+zi2)/(2*hi)*β, hi=(xi2+yi2)1/2
Solve S for chords A, B, and C. Note that the segment center chord (S, β, α) is already determined.
SA=(hA2+zA2)/(2*hA)*β, hA=(xA2+yA2)1/2
SB=(hB2+zB2)/(2*hB)*β, hB=(xB2+yB2)1/2
SC=(hC2+zC2)/(2*hC)*β, hC=(xC2+yC2)1/2
When β<=βmin
SA=SB=SC=Si
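The chord-to-arc recovery above can be sketched as follows (a hypothetical helper using Python's atan2(y, x) ordering; the straight-chord guard is an added assumption):

```python
import math

def arc_from_chord(x, y, z):
    """Recover (alpha, beta, S) for a balloon array chord from its
    endpoint (x, y, z), per the formulas above: h is the lateral
    offset, r the arc radius, and S = r*beta."""
    h = math.hypot(x, y)
    if h < 1e-12:
        return 0.0, 0.0, z          # straight chord: no bend
    r = (h * h + z * z) / (2.0 * h)
    alpha = math.atan2(y, x)        # bend direction from the x axis
    beta = math.atan2(z, r - h)     # z = r*sin(beta), r - h = r*cos(beta)
    S = r * beta
    return alpha, beta, S
```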
Segment Internal Load Conditions:
Segment spring force may be proportional to a spring rate with extension.
FS=KF*Si+F0
Where FS is the sum of the balloon forces, KF is the spring constant, and F0 is the offset force.
F0=Fpreload−KF*Smin
Where Fpreload is the preload force at the minimum segment length Smin.
Sum balloon array forces as follows:
FPr=FA+FB+FC=A*(PrA+PrB+PrC)
FS=FPr
Si=(A*(PrA+PrB+PrC)−F0)/KF
Segment spring torque is proportional to a spring rate with bend angle.
MS=KM*β+M0
β=(MS−M0)/KM
Where MS is the internal moment applied to the Segment, KM is an angular spring constant, and M0 is the preload moment (compensation for no-load bend).
Sum of balloon array torques:
rA=radius of balloon center in balloon array from chordal axis
θ=angular period of balloons within segment (120° for a 3 array segment)
θA=0°; θB=120°; θC=240°
Mx=FA*rA*sin(θA)+FB*rA*sin(θB)+FC*rA*sin(θC)
=(31/2/2)*A*rA*(PrB−PrC)
My=−FA*rA*cos(θA)−FB*rA*cos(θB)−FC*rA*cos(θC)
=(½)*A*rA*(PrB+PrC−2*PrA)
MS=(Mx2+My2)1/2
=(((31/2/2)*A*rA*(PrB−PrC))2+((½)*A*rA*(PrB+PrC−2*PrA))2)1/2
=(A/2)*rA*(3*(PrB−PrC)2+(PrB+PrC−2*PrA)2)1/2
=(A/2)*rA*((3*PrB2−6*PrB*PrC+3*PrC2)+(PrB2+PrB*PrC−2*PrA*PrB+PrB*PrC+PrC2−2*PrA*PrC−2*PrA*PrB−2*PrA*PrC+4*PrA2))1/2
=(A/2)*rA*(4*PrB2−4*PrB*PrC+4*PrC2−4*PrA*PrB−4*PrA*PrC+4*PrA2)1/2
=A*rA*(PrB2−PrB*PrC+PrC2−PrA*PrB−PrA*PrC+PrA2)1/2
=A*rA*(PrA2+PrB2+PrC2−PrA*PrB−PrB*PrC−PrC*PrA)1/2
MPr=MS
β=[A*rA*(PrA2+PrB2+PrC2−PrA*PrB−PrB*PrC−PrC*PrA)1/2−M0]/KM
Moment direction angle (α):
Mx=(31/2/2)*A*rA*(PrB−PrC)
My=(½)*A*rA*(PrB+PrC−2*PrA)
cos(α)=My/MS
sin(α)=−Mx/MS
β=(MS−M0)/KM
βx=[(MS−M0)/KM]*cos(α)=[(MS−M0)/KM]*My/MS
=[1−(M0/MS)]*My/KM
=[1−(M0/(A*rA*(PrA2+PrB2+PrC2−PrA*PrB−PrB*PrC−PrC*PrA)1/2))]*(½)*A*rA*(PrB+PrC−2*PrA)/KM
βy=[(MS−M0)/KM]*sin(α)=−[(MS−M0)/KM]*Mx/MS
=−[1−(M0/MS)]*Mx/KM
=−[1−(M0/(A*rA*(PrA2+PrB2+PrC2−PrA*PrB−PrB*PrC−PrC*PrA)1/2))]*(31/2/2)*A*rA*(PrB−PrC)/KM
βx=(½)*A*rA*(PrB+PrC−2*PrA)/KM
βy=−(31/2/2)*A*rA*(PrB−PrC)/KM
α=atan 2(βx,βy); Note the minus sign for Mx is applied for matching direction.
α=atan 2((PrB+PrC−2*PrA),−1.73205*(PrB−PrC))
if (|PrB−PrC|<min) & (|PrB+PrC−2*PrA|<min); then α=0
Solve for PrA, PrB, and PrC using the following three equations and a Jacobian. Pressures should meet these conditions to solve.
|PrA−PrB|+|PrB−PrC|+|PrC−PrA|<MinDifference; related to |βCalc|>βmin
PrA,PrB,PrC>MinPressure
IF(αDesired>180°,αDesired−360°,IF(αDesired<−180°,αDesired+360°,αDesired))
Find Segment position with:
SCalc=[A*(PrA+PrB+PrC)−F0]/KF
βx=(½)*A*rA*(PrB+PrC−2*PrA)/KM
βy=−(31/2/2)*A*rA*(PrB−PrC)/KM
βy=−(31/2/2)*A*rA*(PrB−PrC)/KM
PrC=PrB+2*(βy/31/2)*KM/(A*rA)
βx=(½)*A*rA*(PrB+PrC−2*PrA)/KM
2*βx*KM/(A*rA)=PrB+PrB+2*(βy/31/2)*KM/(A*rA)−2*PrA
2*βx*KM/(A*rA)+2*PrA=2*PrB+2*(βy/31/2)*KM/(A*rA)
PrB=PrA+βx*KM/(A*rA)−(βy/31/2)*KM/(A*rA)
PrB=PrA+(βx−βy/31/2)*KM/(A*rA)
SCalc=[A*(PrA+PrB+PrC)−F0]/KF
(SCalc*KF+F0)/A=PrA+PrB+PrB+2*(βy/31/2)*KM/(A*rA)
(SCalc*KF+F0)/A=PrA+2*[PrA+(βx−βy/31/2)*KM/(A*rA)]+2*(βy/31/2)*KM/(A*rA)
(SCalc*KF+F0)/A=3*PrA+[2*(βx−βy/31/2)+2*(βy/31/2)]*KM/(A*rA)
(SCalc*KF+F0)/A=3*PrA+2*βx*KM/(A*rA)
PrA=(SCalc*KF+F0)/(3*A)−2*βx*KM/(3*A*rA)
PrB=PrA+(βx−βy/31/2)*KM/(A*rA)
=(SCalc*KF+F0)/(3*A)−2*βx*KM/(3*A*rA)+3*(βx−βy/31/2)*KM/(3*A*rA)
PrB=(SCalc*KF+F0)/(3*A)+(βx−3*βy/31/2)*KM/(3*A*rA)
PrC=PrB+2*(βy/31/2)*KM/(A*rA)
PrC=(SCalc*KF+F0)/(3*A)+(βx−3*βy/31/2)*KM/(3*A*rA)+6*(βy/31/2)*KM/(3*A*rA)
PrC=(SCalc*KF+F0)/(3*A)+(βx+3*βy/31/2)*KM/(3*A*rA)
Or
SCalc=[A*(PrA+PrB+PrC)−F0]/KF
βCalc=[A*rA*(PrA2+PrB2+PrC2−PrA*PrB−PrB*PrC−PrC*PrA)1/2−M0]/KM
αCalc=atan 2((PrB+PrC−2*PrA),−1.73205*(PrB−PrC))
IF(|αCalc|>45° AND |αCalc|<315°,
IF(αCalc>0°,IF(αDesired<0°,αDesired+360°,αDesired),
IF(αDesired>0°,αDesired−360°,αDesired)),αDesired)
Setup Inverse Jacobian to solve for pressures and repeat until joint (j) variable errors meet minimum condition. Check that α, β, and S meet viable solutions.
jcalc−jdesired<jerror
|αCalc|<360°
βCalc>βMin (some amount greater than Zero),
βCalc<βMax (may be 180° or 360°)
SCalc>SMin & SCalc<SMax
Solve for segment force and moment
F=KF*Si+F0
F=A*(PrA+PrB+PrC)
M=KM*β+M0
M=A*rA*(PrA2+PrB2+PrC2−PrA*PrB−PrB*PrC−PrC*PrA)1/2
S=(F−F0)/KF Check for equivalency
β=(M−M0)/KM Check for equivalency
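The closed-form pressure solution and the equivalency checks can be sketched together as follows. The constants here are placeholder values (not actual device parameters), and the preload moment M0 is neglected in the bend-component recovery, as in the final βx, βy expressions above:

```python
def array_pressures(S_calc, beta_x, beta_y, A, rA, KF, KM, F0):
    """Closed-form lumen pressures for a three-array segment from the
    commanded extension S_calc and bend components (beta_x, beta_y),
    per the derivation above."""
    root3 = 3.0 ** 0.5
    base = (S_calc * KF + F0) / (3.0 * A)
    k = KM / (3.0 * A * rA)
    PrA = base - 2.0 * beta_x * k
    PrB = base + (beta_x - 3.0 * beta_y / root3) * k
    PrC = base + (beta_x + 3.0 * beta_y / root3) * k
    return PrA, PrB, PrC

def equivalency_check(PrA, PrB, PrC, A, rA, KF, KM, F0):
    """Recover S and the bend components from pressures, mirroring the
    forward force and moment equations, to confirm the solution."""
    root3 = 3.0 ** 0.5
    S = (A * (PrA + PrB + PrC) - F0) / KF
    beta_x = 0.5 * A * rA * (PrB + PrC - 2.0 * PrA) / KM
    beta_y = -(root3 / 2.0) * A * rA * (PrB - PrC) / KM
    return S, beta_x, beta_y
```

Round-tripping a commanded (S, βx, βy) through both functions reproduces the command, which is the "check for equivalency" step above.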
Communications Between Robot Articulation Controller Module And Simulation/Display Processor Module
Referring now to
Coordinates
Robot and Unity™ or simulation module coordinate systems are different. To swap coordinates relabel the Robot Z axis to be the Unity™ Y axis and the Robot Y axis to be the Unity™ Z. At the base of the first segment the catheter axis points along the Robot Z axis and in the Unity™ Y axis. The base segment starts vertical in both coordinate frames.
Angles
Robot and Unity™ coordinate angles are measured in opposite directions. When viewed with the axis of rotation pointing towards the observer, the Robot angles are measured counter clockwise while the Unity™ angles are measured clockwise.
Rotation Types
The Robot rotations act intrinsically, which means the second and third rotations are about a coordinate system that moves with the object through the prior rotations. The Unity™ rotations act extrinsically, which means all rotations acting on an object are about a fixed coordinate system. The axis defining a rotation for intrinsic rotations includes a number of apostrophes to indicate the sequence.
Axis of Rotation
The Robot rotations rotate about axes z-y′-z″, in this order and about rotating coordinate frames. The Unity™ rotations rotate about axes Z-X-Y, in this order, about a fixed coordinate frame.
Rotation Nomenclature
The Robot segments' rotation angles are labeled with alpha (α), beta (β), and gamma (γ) and define the segments' angles of rotation about the rotating frame axes z-y′-z″ of the Robot base coordinates. The Unity™ segment rotation angles are labeled with phi (φ), theta (θ), and psi (ψ) and define the segments' angles of rotation about the fixed frame axes Z-X-Y of the Unity™-Robot base coordinates.
Position Nomenclature
The Robot segments positions are denoted by lower case x, y, and z and define the location of the segments distal end using the Robot base coordinates. The Unity™ segment positions are denoted by upper case X, Y, Z and define the location of the segments distal end within the Unity™-Robot base coordinates.
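A minimal sketch of the axis and angle conversions described above (function names are illustrative):

```python
def robot_to_unity(x, y, z):
    """Swap the Y and Z axes to convert a Robot position to Unity
    coordinates; the same swap converts back, so it is its own
    inverse."""
    return x, z, y

def robot_to_unity_angle(angle):
    """Robot angles run counter-clockwise and Unity angles clockwise
    (viewed with the rotation axis toward the observer), so the sign
    flips."""
    return -angle
```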
Command Protocol
The Command input comes from Unity™ and is delivered to the Moray Controller in Unity™ coordinates. Command inputs may be incremental changes to affect the robot tip position and orientation and based on a coordinate system attached to a robot tip at the distal end of the Unity™ segments. Alternatively, the command inputs may be the absolute position and orientation of the robot tip based on the coordinate systems attached to the base at the proximal end of the Unity™ segments. As can be understood with reference to
The following represents the Unity™ generated data set (and nomenclature) from Unity™ to the Robot Controller.
Telemetry Protocol
The telemetry input comes from the robot controller and is delivered to Unity™ in Unity™ Euler coordinates. Telemetry inputs relate the position and orientation of the robot segment ends based on a coordinate system at the base of the most proximal segment. There are three telemetry vector types, a command telemetry vector, an actual telemetry vector and a target telemetry vector. The command telemetry is that which is being asked for by the command input, the actual telemetry is the measured segment positions, and the target telemetry reflects the phantom segments position. Each telemetry vector holds 14 variables which includes two manual (sensed) catheter base inputs and two segments end conditions (6 values for each segment). The telemetry protocol has 43 values and starts with a packet count number, followed by the command, actual, and target telemetry vectors.
The following represents the Robot Controller generated data set (and nomenclature) from Robot Controller to Unity™
Command and Telemetry Variables
The nomenclature for this data set includes the following:
Converting Robot to Unity™ Coordinates
For the purpose of finding the Unity™ rotation angles, the Robot coordinate axes will be used. At the end of the computation the Z and Y axes are switched to sync with the Unity™ coordinate system. Therefore, the Unity™ rotations (using the Robot axes) have the base of the segments in plane with the Y-X axes. The extrinsic rotations are therefore about Y-X-Z (formerly Z-X-Y with the Unity™ axes) with φ-θ-ψ representing the associated rotations. When converting from an extrinsic to an intrinsic rotational sequence the order of rotation is reversed. Therefore, an equivalent intrinsic system will have rotations about z-x′-y″ with ψ-θ-φ, in this order.
Referring now to
Referring once again to
Converting Unity™ to Robot Coordinates
Referring now to
The angles in this case represent the deviation from the last tip position. Solving the rotational matrix and finding the Robot angles directly solves for the Robot Tip deviations.
For position conversion from the Unity™ (input or virtual) Robot to the actual Robot, switch the Z and Y position values.
Referring now to
Referring still to
Referring now to
As can be understood with reference to
Referring now to
To facilitate precise control over both the position and orientation of the tool in the workspace, the processor of the system may have a translation input mode and an orientation input mode. When the processor is in the orientation mode, the first component 174 of the input 172 (the portion extending along the axis 166 of the tool as shown in the image), will typically induce rotation of the tool in three-dimensional workspace 158 about a first rotational axis 178 that is parallel to the display plane 138 and perpendicular to the tool axis 166. In response to the second component 174 of the input, the processor can induce rotation of the tool and tool image about a second rotational axis 180 that is perpendicular to the tool axis 166 and also to the first rotational axis 178. Using vector notation, the first rotational axis VN can be calculated from the first and second components of the input V1, V2 as:
The second rotational axis Vs can then be calculated from the inverse of the first rotational axis VNS and from the axis 166 of the tool VT as follows:
VNS×VT=VS
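The two rotational axes can be sketched with vector cross products. Taking the first axis as VN = V1 × V2 is an assumption consistent with the text; the names are illustrative:

```python
def cross(u, v):
    """Right-handed vector cross product of 3-tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def rotation_axes(V1, V2, VT):
    """First rotational axis VN from the input components V1 and V2,
    then the second axis VS = VNS x VT, where VNS is the reversed
    first axis and VT is the tool axis."""
    VN = cross(V1, V2)
    VNS = tuple(-c for c in VN)
    VS = cross(VNS, VT)
    return VN, VS
```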
The tool axis 166 and first and second rotational axes 178, 180 will typically intersect at a spherical center of rotation at a desired location along the tool, such as at a proximal end, distal end, or mid-point of the tool. To help make the rotational movement intuitive, the processor can superimpose an image of a spherical rotation indicator such as a transparent ball 184 concentric with the center of rotation. Superimposing rotation indicia such as a concentric ring 186 encircling the tool axis 166 on the side of ball 184 oriented toward the user can further help make the orientation of rotation predictable, as input movement of the mouse or movement of the user's finger on a touchscreen in an input direction along the input plane can move the rotation indicia in the same general direction as the input movement, giving the user the impression that the input is rotating the ball about the center of rotation with the input device. The rotation indicia will preferably stay at a fixed relationship relative to the center of rotation and tool axis during a rotation of the tool, but may switch sides of the ball when a rotation increment is complete (as can be understood by comparing
When a planar input device is in use and the input processor is in an object-based translation mode, input movement along the axis slope 170 may result in translation of tool 162 in workspace 164 along the second rotational axis 180. Input command movement along the display and/or image plane perpendicular to the axis slope 170 may induce translation of the tool parallel to the first rotational axis 178. Advantageous movement of the tool along the axis of the catheter when using a mouse or the like can be induced by rotating a scroll wheel. In a view-based translation mode, input command movement along the X-Y input plane 139 can induce corresponding movement of the tool along the X-Y display plane 138. Scroll wheel rotation in the view-based translation mode can induce movement into and out of the display plane (along the Z axis identified adjacent display plane 138 in
A number of additional input and/or articulation modes may be provided. For example, the user may select a constrained motion mode, in which movement of the tool or receptacle is constrained to motion along a plane. The plane may be parallel to the display plane and the processor may maintain a separation distance between the tool and the constraint plane (which may be coincident with or near an imaging plane of the imaging system) when the planar movement mode is initiated. This can help keep the tool in view of, for example, a planar ultrasound imaging system, while facilitating movement of the tool relative to tissue structures while both remain at good imaging depths. Alternatively, the user may use the input system to position a constraint plane or other surface at a desired angle and location within the 3-D workspace. Alternative constraint surfaces may allow movement on one side of the surface and inhibit motion beyond the surface, or the like. Such constrained motion can be provided by constraining the above catheter motion arrangements with the equation for the surface, such as with the equation for a plane (aX+bY+cZ+d=0). Hence, a wide variety of alternative surface-based or volume-based movement constraints may be provided.
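The plane-based constraint can be sketched as a projection of the commanded position onto the constraint surface aX + bY + cZ + d = 0 (a hypothetical helper, not the actual constraints module):

```python
def constrain_to_plane(p, n, d):
    """Project a commanded tool position p onto the constraint plane
    n . p + d = 0, where n = (a, b, c) are the plane constants."""
    nn = sum(c * c for c in n)
    # Signed distance from p to the plane, in units of |n|^2.
    dist = (sum(pc * nc for pc, nc in zip(p, n)) + d) / nn
    return tuple(pc - dist * nc for pc, nc in zip(p, n))
```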
Referring now to
Referring still to
As can be understood with reference to
Referring now to
Referring now to
Referring now to
Referring now to
Referring to
Referring now to
Referring now to
Referring now to
Regarding the functionality and data processing for which constraints module 326 is configured, the following section provides additional details.
Boundary and Constraint Control Mode Types & Function
Five distinct control modes of processor 214 are described herein for addressing workspace boundaries and other constraints and which may optionally be implemented in constraints module 326 (see
Mode Table: Table 2 below describes the purpose and the general interaction between the input system 204, simulation module 256, and the pressure controller 230 (and particularly the response telemetry from the pressure controller to the simulation module).
5D Shift/Scale vs. 3D Gradient
This is for motion unconstrained (other than pressure limits) in 5/6D space. The planar position can give way to achieve the orientation commanded. It is used with an unconstrained 6DOF input such as a Tango™ smartphone in free space and at pressure boundaries. There are two modalities described herein:
Shift/Scale Mode:
This uses the Shift and Scale functions to respond to the pressure boundary; and
Gradient Mode:
This function uses three points (A, B, C) with set orientations and in the vicinity of the goal Q (as defined by the input) to form a local linear 3D pressure gradient to estimate the goal position or the closest achievable position in the 3D space.
Planar Mode:
This is for motion constrained to a plane. The planar position can give way to achieve the orientation commanded. It is used when, for example, the Tango™ 6DOF input system (sometimes referenced below as Tango™) constrains motion to a plane, when the mouse can be used for translation commands (optionally when the left button is held) to move on a plane, and optionally when the roller button is held and the mouse moves on a plane for changing orientation. This mode uses the three point (A-B-C) Planar function. (Note that the Roll function optionally utilizes Line mode).
Line Mode:
This is for motion constrained to a Line. The line position can give way to achieve the orientation commanded. It can be used when Tango constrains motion to a line and optionally with the mouse Roll function.
Gimbal Mode:
This is for motion constrained to a point in space. The point does not give way. Tip orientation will slide along the orientation boundary and find the closest Tip position and telemetry. This mode can be used when Tango™ constrains motion to a point and optionally when the mouse roller button is held for orientation control.
Axial Mode:
This is for motion constrained to a point with rotation fixed to one axis in space. The point and axis do not give way, and the single driven orientation may be achievable while the tip remains on this point. The tip stops at the workspace (pressure) boundary. It can be used when simulation module 256 constrains motion to a point and a single axis.
Segment Mode:
This is for driving motion on individual segments at segment transitions where one segment is driven to articulate and elongate. The passive segments respond in a manner preset by the user. For example, the passive segments may be set to hold their orientation and position relative to their own segment base. A second example is that passive segments may be set to stay on a point, trajectory, or plane. This mode can be driven by Tango™, a mouse, and other input forms. In this mode different segments may be set to behave with or without spatial constraints utilizing some of the properties in the previously listed Modes.
Simulation module 256 sends to the pressure control module 230 the input mode, input parameters, and the trajectory point(s). The input data is different for different modes as follows. Note that the Target Data and Command Data sets will often both utilize this input mode strategy.
Input Data
The pressure control module functions differently in each mode.
Pressure Control Module Function
The pressure control module 230 sends simulation module 256 the error conditions, boundary conditions and trajectory data for the command, phantom, and actual segments.
Trajectory Data
Simulation module 256 uses the error conditions, boundary conditions, and trajectory data to proceed with the next action.
Setting Up the Pressure Gradient with Fixed Orientation
Scaling & Shifting Function Limitations
The pressure control module 230 optionally uses the kinematic equations to find a solution for the goal QT. The solution produces a pressure vector, PrT, based on the Jacobian of the current location. This solution does not account for workspace boundaries in the form of lumen pressure limits. When the PrT vector includes components outside the maximum and minimum pressure limits, two functions may be implemented to find the closest achievable solution. The first is to shift segment-based pressure values into range. Shifting maintains orientation at the sacrifice of position. When a segment's lumen pressure range is too large to shift, a secondary function scales the individual segment-based pressures. The scaling changes both the position and the orientation from the goal QT. The result of Shifting and Scaling is that the telemetry produced tends towards the closest spatial position available, sometimes maintaining and other times changing the tip orientation. This Shifting and Scaling can be functional while the goal QT is only constrained by a pressure boundary, though it may not produce the goal orientation. Scaling may (at least in some cases) inherently change the segments' orientation.
Gradient Control
Gradient control is a method for finding the closest positional telemetry when engaging the boundary, with or without additional spatial constraints, while maintaining the goal orientation. The function finds the closest available position Qd to the goal QT. This method is an alternative to the Shifting and Scaling functions for finding the Q trajectories. Either method may be utilized in the pressure control module code at different times: Shifting and Scaling for unconstrained orientation, and Gradient Control for when maintaining the goal orientation with (or without) spatial constraints is preferred.
General Model and Steps Towards Finding the Closest Gradient Solution
With reference to
For displacement commands, the pressure control module allows the user to slide the Tip along the boundary to achieve the closest solution. For pivoting commands, displacement occurs only when at a boundary. For both, the limit of travel is to the closest position while achieving the goal orientation by shifting along the boundary.
Symbols
Simulation Module/Pressure Control Module Communication
Simulation module 256 solves for the planar Q's (QA, QB, QC) based on QT and sends the three Q vectors to the pressure control module 230. The pressure control module solves for the three pressure vectors (PrA, PrB, PrC), produces the pressure gradients, solves for the lumen pressures PrT or PrP, and sends position telemetry QT or QP to the simulation module.
Gradient Model
The following Gradient Math occurs in the pressure control module after receiving the Q vectors from the simulation module. This gradient model applies to the 3D Gradient Mode, Planar Mode, and the Line Mode.
Spatial Plane
Find the plane formed by the positions of QA, QB, and QC, using the X, Y, & Z component.
CX0*(X−XA)+CY0*(Y−YA)+CZ0*(Z−ZA)=0  (1)
Find the plane constants using the cross-product, to find the perpendicular vector.
V1=B−A=[(XB−XA),(YB−YA),(ZB−ZA)]
V2=C−A=[(XC−XA),(YC−YA),(ZC−ZA)]
VP0=V1×V2=(CX0,CY0,CZ0)
CX0=(YB−YA)*(ZC−ZA)−(YC−YA)*(ZB−ZA)
CY0=(XC−XA)*(ZB−ZA)−(XB−XA)*(ZC−ZA)
CZ0=(XB−XA)*(YC−YA)−(XC−XA)*(YB−YA)
From equation 1
CX0*X+CY0*Y+CZ0*Z=CX0*XA+CY0*YA+CZ0*ZA
PL0=CX0*XA+CY0*YA+CZ0*ZA
PL0 is the A-B-C plane constant.
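The cross-product construction of the plane constants translates directly into code. The following is an illustrative sketch only; the helper name is an assumption, and plain tuples stand in for whatever vector representation the module actually uses.

```python
def abc_plane_constants(A, B, C):
    """Plane through points A, B, C: CX0*X + CY0*Y + CZ0*Z = PL0.

    The normal (CX0, CY0, CZ0) is the cross product (B-A) x (C-A),
    and PL0 is that normal dotted with point A, as in equation 1.
    """
    v1 = tuple(b - a for a, b in zip(A, B))   # V1 = B - A
    v2 = tuple(c - a for a, c in zip(A, C))   # V2 = C - A
    cx0 = v1[1] * v2[2] - v2[1] * v1[2]
    cy0 = v2[0] * v1[2] - v1[0] * v2[2]
    cz0 = v1[0] * v2[1] - v2[0] * v1[1]
    pl0 = cx0 * A[0] + cy0 * A[1] + cz0 * A[2]
    return (cx0, cy0, cz0), pl0
```

By construction, each of A, B, and C satisfies CX0*X+CY0*Y+CZ0*Z=PL0.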
Pressure Plane
We assume a linear gradient for the lumen pressure variance throughout the Q sample range. Use the following formulas to solve for the planar "C" constants.
CXi*X+CYi*Y+CZi*Z=Pri  [2]
For each of the three Q's defined by A, B, and C, a position (X, Y, Z) and Lumen Pressure (Pri) are known. Set up the equation and solve for the constants for each lumen pressure.
CXi*XA+CYi*YA+CZi*ZA=PrAi
CXi*XB+CYi*YB+CZi*ZB=PrBi
CXi*XC+CYi*YC+CZi*ZC=PrCi
By including additional Q points, which can be retained from the previous cycle, an alternative least-squares solution may be implemented.
{right arrow over (C)}i=(XYZ·XYZT)−1·XYZT·{right arrow over (Pr)}i
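With exactly three sample points, the three equations above determine the "C" constants directly; a minimal sketch using Cramer's rule follows. The function name is an assumption, and the least-squares pseudo-inverse alternative (for additional retained points) is not shown.

```python
def pressure_plane_constants(points, pressures):
    """Solve CXi*X + CYi*Y + CZi*Z = Pr for (CXi, CYi, CZi) from
    three sample positions QA, QB, QC and their lumen pressures."""
    (xa, ya, za), (xb, yb, zb), (xc, yc, zc) = points
    pa, pb, pc = pressures

    def det3(m):
        # Determinant of a 3x3 matrix given as rows.
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    base = [[xa, ya, za], [xb, yb, zb], [xc, yc, zc]]
    d = det3(base)
    consts = []
    for col in range(3):
        m = [row[:] for row in base]
        for row, pr in zip(m, (pa, pb, pc)):
            row[col] = pr           # replace one column with pressures
        consts.append(det3(m) / d)  # Cramer's rule
    return tuple(consts)
```

The solve fails (d = 0) when the three sample positions are collinear, which is consistent with the plane being undefined in that case.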
Goal Position
Geometrically, QT is the average of QA, QB, and QC, and is expressed as follows:
{right arrow over (Q)}T=({right arrow over (Q)}A+{right arrow over (Q)}B+{right arrow over (Q)}C)/3
Goal Lumen Pressure
The pressure vector can be found through the “C” constants vector:
{right arrow over (Pr)}i={right arrow over (C)}i·{right arrow over (Q)}T
Use equation 2 (6 times for two segments) and solve for the target lumen pressures {right arrow over (Pr)}T (at position QT).
If all lumen pressures at {right arrow over (Pr)}T are within the pressure limits, use the current pressure vector. If one or more pressure components are outside the limits, solve for the closest position on the Smart Plane where all pressures are within the pressure limits.
3D Gradient Mode
Pressure Limit Points
Referring now to
Goal Point Normal Line
The Vector normal to the pressure plane is a derivative of the plane definition.
{right arrow over (V)}Ni=(VNXi,VNYi,VNZi)=(CXi,CYi,CZi)
This vector may be normalized to make it a unit vector.
Normal Line Constants, Normal Line Equations, and Intersect Point Qdi (at Pressure Limit),
Choose the best axis by selecting the largest absolute value of the {right arrow over (V)}Ni components, (CXi, CYi, CZi). If |CXi| is the largest, solve for the Normal Line Constants with X as the variable; if |CYi| is the largest, with Y as the variable; if |CZi| is the largest, with Z as the variable.
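The axis selection above can be expressed compactly; the following is an illustrative helper whose name is an assumption.

```python
def best_variable_axis(c):
    """Pick the variable axis ('X', 'Y' or 'Z') for the normal-line
    constants: the component of (CXi, CYi, CZi) with the largest
    absolute value."""
    magnitudes = [abs(v) for v in c]
    return "XYZ"[magnitudes.index(max(magnitudes))]
```

Choosing the dominant component avoids near-zero denominators when the line is later parameterized by that axis.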
Define the Intersection Line of each pressure limit plane. Use the same variable axis as with TABLE 3 above. Line equations are formed from the equations of TABLE 4 below.
Find the intersection point of the Normal line and the constant pressure plane, for each lumen that is not within the pressure limits.
Insert “X” line equations:
CXi*XPi+CYi*YPi+CZi*ZPi=PrLimit  [from 2]
CXi*XPi+CYi*(NYi+NYXi*XPi)+CZi*(NZi+NZXi*XPi)=PrLimit
CXi*XPi+CYi*NYi+CYi*NYXi*XPi+CZi*NZi+CZi*NZXi*XPi=PrLimit
(CXi+CYi*NYXi+CZi*NZXi)*XPi+(CYi*NYi+CZi*NZi)=PrLimit
XPi=[PrLimit−(CYi*NYi+CZi*NZi)]/(CXi+CYi*NYXi+CZi*NZXi)
YPi=NYi+NYXi*XPi
ZPi=NZi+NZXi*XPi
“Y” and “Z” line equations are similarly resolved.
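The "X"-variable intersect derived above has a direct closed form in code. This is an illustrative sketch; the helper name and argument layout mirror the derivation's symbols but are assumptions.

```python
def normal_line_intersect_x(c, n_y, n_yx, n_z, n_zx, pr_limit):
    """Intersect the normal line (Y = NYi + NYXi*X, Z = NZi + NZXi*X)
    with the constant-pressure plane CXi*X + CYi*Y + CZi*Z = PrLimit.

    Returns the intersect point (XPi, YPi, ZPi).
    """
    cx, cy, cz = c
    denom = cx + cy * n_yx + cz * n_zx
    x = (pr_limit - (cy * n_y + cz * n_z)) / denom
    return (x, n_y + n_yx * x, n_z + n_zx * x)
```

The "Y"- and "Z"-variable cases are the same computation with the roles of the axes permuted.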
Plane Crossing Vector
{right arrow over (V)}Cik={right arrow over (V)}Ni×{right arrow over (V)}Nk=(VCXik,VCYik,VCZik)
(VCXik,VCYik,VCZik)=[(VNYi*VNZk−VNZi*VNYk),(VNZi*VNXk−VNXi*VNZk),(VNXi*VNYk−VNYi*VNXk)]
This vector may be normalized to make it a unit vector.
Perpendicular Vector from Normal Point to Plane Crossing Vector
{right arrow over (V)}Pik={right arrow over (V)}Ni×{right arrow over (V)}Cik=(VPXik,VPYik,VPZik)
(VPXik,VPYik,VPZik)=[(VNYi*VCZik−VNZi*VCYik),(VNZi*VCXik−VNXi*VCZik),(VNXi*VCYik−VNYi*VCXik)]
Perpendicular Line Constants, Equations, and Intersect Point Pik (at Pressure Limit),
Choose the best axis as the one associated with the smallest absolute value of the {right arrow over (V)}Pik components, (VPXik, VPYik, VPZik). If |VPXik| is the smallest, solve for the Perpendicular Line Constants with X as the variable; if |VPYik| is the smallest, with Y as the variable; if |VPZik| is the smallest, with Z as the variable.
Find Line Intersection Point (XP, YP, ZP) on each plane intersection line. Use the same variable axis as with the Perpendicular Line Constants. Line equations can be formed from the following equivalencies.
“X” as variable input
KZXik+ΔKZXik*XPi=KZXki+ΔKZXki*XPi or KYXik+ΔKYXik*XPi=KYXki+ΔKYXki*XPi
“Y” as variable input
KZYik+ΔKZYik*YPi=KZYki+ΔKZYki*YPi or KXYik+ΔKXYik*YPi=KXYki+ΔKXYki*YPi
“Z” as variable input
KYZik+ΔKYZik*ZPi=KYZki+ΔKYZki*ZPi or KXZik+ΔKXZik*ZPi=KXZki+ΔKXZki*ZPi
Solve the pressure array(s) for each plane intersection line Point
{right arrow over (Pr)}Ti={right arrow over (C)}i*(XPi,YPi,ZPi)
{right arrow over (Pr)}Tik={right arrow over (C)}i*(XPik,YPik,ZPik)
Solve for the intersect points on the lines formed (by lumens beyond the pressure limit) from the crossing limit planes. The number of points may be dictated by the number of lumens that cross a limit pressure. The maximum number of planes for two segments may be six.
In Table 8 below, the i and k indicate a specific lumen combination, where i and k are not the same number and each combination can only be selected once.
This chart indicates the number of lumen plane intersect points as a function of the number of lumen planes in play. Note that with six lumen planes there are 15 intersect lines and associated points, as indicated by the "x's". Adding these to the Normal line intersects, with six lumens over the pressure limit, a total of 21 intersect points (6 normal+15 plane line points) may benefit from being resolved.
Now find the closest (and achievable) point QP to the target QT (with all pressure values within the limits). There should be one intersect point with the Pr vector within the pressure limits. To optimize the search sequence, note the following:
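Finding the closest achievable point among the candidate intersects can be sketched as a filtered minimum-distance search. This is a simplification for illustration; the search-order optimizations noted in the text are not shown, and the function name is an assumption.

```python
import math

def closest_achievable(q_target, candidates, pr_limits):
    """From (point, pressure_vector) candidates, keep those whose
    pressures all lie within (pr_min, pr_max) and return the point
    closest (Euclidean distance) to the target QT, or None if no
    candidate qualifies."""
    pr_min, pr_max = pr_limits
    best, best_d = None, float("inf")
    for point, pressures in candidates:
        if all(pr_min <= p <= pr_max for p in pressures):
            d = math.dist(point, q_target)
            if d < best_d:
                best, best_d = point, d
    return best
```

Candidates whose pressure vectors violate a limit are discarded before the distance comparison, so the returned point is both closest and achievable.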
Planar Mode
Pressure Limit Points
Referring now to
First, determine if the pressure limit plane is normal to the A-B-C plane, in which case the pressure gradient may be parallel to the allowable direction of motion and there may be no solution. For this condition, return the previous telemetry.
Second, determine which limit points are farthest apart. This avoids using two points with the same pressure, which would lead to a zero divisor when finding the points.
ΔPrABC=PrAB−PrBC
ΔPrBCA=PrBC−PrCA
ΔPrCAB=PrCA−PrAB
MinΔPr=0.001 (other values may be determined empirically, from trial and error, or derived).
(XDi,YDi,ZDi)=(PrLimit−PrAi)/(PrBi−PrAi)*[(XB,YB,ZB)−(XA,YA,ZA)]+(XA,YA,ZA),
(XDi,YDi,ZDi)=(PrLimit−PrBi)/(PrCi−PrBi)*[(XC,YC,ZC)−(XB,YB,ZB)]+(XB,YB,ZB),
(XDi,YDi,ZDi)=(PrLimit−PrCi)/(PrAi−PrCi)*[(XA,YA,ZA)−(XC,YC,ZC)]+(XC,YC,ZC),
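The edge-crossing points above are linear interpolations along each graphic line; the following illustrative helper (name assumed) computes the crossing for one edge.

```python
def limit_crossing(p_limit, pr_a, pr_b, A, B):
    """Point on segment A-B where the linearly varying lumen pressure
    equals the limit: A + (PrLimit - PrA)/(PrB - PrA) * (B - A).

    pr_a and pr_b must differ, consistent with the farthest-apart
    point selection that avoids a zero divisor.
    """
    t = (p_limit - pr_a) / (pr_b - pr_a)
    return tuple(a + t * (b - a) for a, b in zip(A, B))
```

Applying the helper to edges A-B, B-C, and C-A reproduces the three expressions above.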
Limit Line Vector
Find the pressure Limit Line (unit) Vector with the cross product of the pressure and A-B-C plane's normal vectors, which can be taken directly from the plane constants above.
{right arrow over (V)}Li=(CXi,CYi,CZi)×(CX0,CY0,CZ0)=(VLXi,VLYi,VLZi)
(VLXi,VLYi,VLZi)=[(CYi*CZ0−CZi*CY0),(CZi*CX0−CXi*CZ0),(CXi*CY0−CYi*CX0)]
This vector may be normalized to make it a unit vector.
Normal Line Vector
Normal Line (unit) Vector is the cross product of a line normal to the ABC plane with the Limit Line Vector.
{right arrow over (V)}Ni={right arrow over (V)}Li×{right arrow over (C)}0=(VNXi,VNYi,VNZi)
(VNXi,VNYi,VNZi)=[(VLYi*CZ0−VLZi*CY0),(VLZi*CX0−VLXi*CZ0),(VLXi*CY0−VLYi*CX0)]
This vector may be normalized to make it a unit vector.
Limit Line Constants
Solve the Limit Line Constants by choosing the best variable axis. Look for maximum vector component value of the following equation.
{right arrow over (V)}Li·{right arrow over (V)}Ni
If(VLXi·VNXi=Max vector component, “X” is variable axis)
If(VLYi·VNYi=Max vector component, “Y” is variable axis)
If(VLZi·VNZi=Max vector component, "Z" is variable axis)
Normal Line Constants
Use the same variable axis as with the Limit Line Constants.
Normal Line Intersect Point
Find Normal Line Intersection Point (XPi, YPi, ZPi) for each lumen Line crossing the pressure limit. Use the same variable axis as with the Limit Line Constants.
Limit Lines Intersect Points
Find the pressure Limit Lines (i, k) Intersection points for lumens that cross pressure limit. Use the same variable axis as with the Limit Line Constants.
Intersect Point Pressure Vectors
Solve the pressure array(s) for each Intersection Point
{right arrow over (Pr)}Ti={right arrow over (C)}i*(XPi,YPi,ZPi)
{right arrow over (Pr)}Tik={right arrow over (C)}i*(XPik,YPik,ZPik)
Solve for the intersect points of all limit lines. The number of limit lines will be dictated by the number of lumens that cross a limit pressure. The maximum possible for two segments is six lumen lines.
In Table 13 below, the i and k indicate a specific lumen combination, where i and k are not the same number and each combination should be selected only once.
This chart indicates the number of lumen line intersect points as a function of the number of lumen lines in play. Note that with six lumen lines there are 15 intersect points, as indicated by the "x's". Adding these to the Normal line intersects, with six lumens over the pressure limit, a total of 21 intersect points (6 normal+15 lumen line) would benefit from being resolved.
Now find the closest (and achievable) point Qd to the target QT (with all pressure values within the limits). There should be one or more intersect points that are within the pressure limits. To optimize the search sequence, note the following.
Line Mode
Pressure Limit Points
Referring now to
First, determine if the pressure limit plane is normal to the A-B-C plane, in which case the pressure gradient may be parallel to the allowable direction of motion and there may be no solution. For this condition, return the previous telemetry.
ΔPrABC=PrAB−PrBC
ΔPrBCA=PrBC−PrCA
ΔPrCAB=PrCA−PrAB
MinΔPr=0.001 (Other numbers may be determined empirically or analytically.)
Second, determine which limit points are farthest apart. This avoids using two points with the same pressure, which would lead to a zero divisor when finding the points.
(XDi,YDi,ZDi)=(PrLimit−PrAi)/(PrBi−PrAi)*[(XB,YB,ZB)−(XA,YA,ZA)]+(XA,YA,ZA),
(XDi,YDi,ZDi)=(PrLimit−PrBi)/(PrCi−PrBi)*[(XC,YC,ZC)−(XB,YB,ZB)]+(XB,YB,ZB),
(XDi,YDi,ZDi)=(PrLimit−PrCi)/(PrAi−PrCi)*[(XA,YA,ZA)−(XC,YC,ZC)]+(XC,YC,ZC),
Pressure Limit Line
Find the pressure Limit Line (unit) Vector with the cross product of the pressure and A-B-C plane's normal vectors, which can be taken directly from the plane constants above.
{right arrow over (V)}Li=(VLXi,VLYi,VLZi)=(CXi,CYi,CZi)×(CX0,CY0,CZ0)
(VLXi,VLYi,VLZi)=[(CYi*CZ0−CZi*CY0),(CZi*CX0−CXi*CZ0),(CXi*CY0−CYi*CX0)]
This vector may be normalized to make it a unit vector.
Normal Line Vector
In Line Mode, assuming point A is on the trajectory path, the Normal Line (unit) Vector can be found from the vector between point A and the target T.
{right arrow over (V)}N=(VNX,VNY,VNZ)=[(XA−XT),(YA−YT),(ZA−ZT)]
This vector may be normalized to make it a unit vector.
Limit Line Constants
Solve the Limit Line Constants by choosing the best variable axis. Look for maximum vector component value of the following equation.
{right arrow over (V)}Li·{right arrow over (V)}N
If(VLXi·VNX=Max vector component, “X” is variable axis)
If(VLYi·VNY=Max vector component, “Y” is variable axis)
If(VLZi·VNZ=Max vector component, "Z" is variable axis)
Limit line Constants:
Normal Line Constants
Solve Normal Line Constants (line normal to Smart Plane). Use the same variable axis as with the Limit Line Constants.
Find Normal Line and pressure Limit Lines intersection points (XPi, YPi, ZPi) for each lumen Line crossing the pressure limit. Use the same variable axis as with the Limit Line Constants.
Solve the pressure array(s) for each Intersection Point
{right arrow over (Pr)}Ti={right arrow over (C)}i*(XPi,YPi,ZPi)
Solve for the intersect points of all limit lines. The number of limit lines will be dictated by the number of lumens that cross a limit pressure. The maximum for two segments may be six lumen lines.
Now find the closest (achievable) point Qd to the target QT (with all pressure values within the limits). There should be one or more intersect points that are within the pressure limits.
Setting Up the Pressure Gradient with Fixed Position
General Model and Steps Towards Finding the Closest Gradient Solution
Referring once again to
Symbols
Gradient Model
The following Gradient Math occurs in the pressure control module after receiving the Q vectors from the simulation module. This gradient model applies to the Gimbal and Axial Mode.
Spatial Plane
Find the plane formed by the positions of QA, QB, and QC, using the βx, and βy, component.
CX0*(βx−βxA)+CY0*(βy−βyA)=0
CX0*βx+CY0*βy=CX0*βxA+CY0*βyA  (1)
Pressure Plane
Assume a linear gradient for the lumen pressure variance throughout the Q sample range. Use the following formulas to solve for the planar "C" constants.
CXi*βx+CYi*βy=Pri  [2]
For each of the three Q's defined by A, B, and C, two orientations (βxi, βyi) and Lumen Pressure (Pri) are known. Set up the equation and solve for the constants for each lumen pressure.
CXi*βxA+CYi*βyA=PrAi
CXi*βxB+CYi*βyB=PrBi
CXi*βxC+CYi*βyC=PrCi
Since there are more Q points than variables, a least square fit may be applied for a pseudo inverse matrix.
{right arrow over (C)}i=([β]·[β]T)−1·[β]T·{right arrow over (Pr)}i
Goal Position
Geometrically, QT is the average of QA, QB, and QC and expressed as follows:
{right arrow over (Q)}T=({right arrow over (Q)}A+{right arrow over (Q)}B+{right arrow over (Q)}C)/3
βxT=(βxA+βxB+βxC)/3; βyT=(βyA+βyB+βyC)/3
Goal Lumen Pressure
The pressure vector can be found through the “C” constants vector:
{right arrow over (Pr)}i={right arrow over (C)}i·{right arrow over (Q)}T
Use equation 2 (6 times for two segments) and solve for the target lumen pressures {right arrow over (Pr)}T (at position QT).
If all lumen pressures at {right arrow over (Pr)}T are within the pressure limits, use the current pressure vector. If one or more pressure components are outside the limits, solve for the closest position on the Smart Plane where all pressures are within the pressure limits.
Pressure Limit Points
For lumens that are not within a pressure limit, find the points on graphic lines A-B, B-C, and C-A that cross the limit pressure line. These points will all be on a graphic line.
First, determine if the pressure limit plane is normal to the A-B-C plane, in which case the pressure gradient may be parallel to the allowable direction of motion and there may be no solution. For this condition, return the previous telemetry.
ΔPrABC=PrAB−PrBC
ΔPrBCA=PrBC−PrCA
ΔPrCAB=PrCA−PrAB
MinΔPr=0.001 (Other numbers may be determined empirically or analytically.)
Second, determine which limit points are farthest apart. This avoids using two points with the same pressure, which would lead to a zero divisor when finding the points.
(βxDi,βyDi)=(PrLimit−PrAi)/(PrBi−PrAi)*[(βxB,βyB)−(βxA,βyA)]+(βxA,βyA),
(βxDi,βyDi)=(PrLimit−PrBi)/(PrCi−PrBi)*[(βxC,βyC)−(βxB,βyB)]+(βxB,βyB),
(βxDi,βyDi)=(PrLimit−PrCi)/(PrAi−PrCi)*[(βxA,βyA)−(βxC,βyC)]+(βxC,βyC),
Pressure Limit Line
Find the pressure Limit Line Vector. The Limit Line Vector is perpendicular to the Pressure constants Vector (CXi, CYi). Since they are perpendicular, the dot product of the “C” Vector and Limit Line Vector is equal to zero.
{right arrow over (V)}Li·{right arrow over (C)}i=[VLXi,VLYi]·(CXi,CYi)=0
{right arrow over (V)}Li=[VLXi,VLYi]=(CYi,−CXi)
This vector may be normalized to make it a unit vector.
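In the two-axis (βx, βy) formulation, the perpendicular relationship gives the limit-line direction immediately; the following sketch (helper name assumed) also shows the optional unit normalization.

```python
import math

def limit_line_vector_2d(cx, cy, normalize=True):
    """Direction of the pressure limit line: perpendicular to the
    pressure-constants vector (CXi, CYi), so (CYi, -CXi).

    The dot product (CXi, CYi) . (CYi, -CXi) is identically zero.
    """
    vx, vy = cy, -cx
    if normalize:
        mag = math.hypot(vx, vy)
        vx, vy = vx / mag, vy / mag
    return (vx, vy)
```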
Gimbal Mode
Gimbal Mode control allows change in two axes of orientations at a fixed position.
When orientation adjustments meet a boundary, the rotation slides along the angle boundary. The method maintains telemetry on a point while moving to the closest orientation.
Normal Line Vector
In Gimbal Mode, the Normal Line Vector is a line that passes through the goal point QT and is perpendicular to the Limit Line Vector. Since they are perpendicular, the dot product of the Normal Line Vector and Limit Line Vector is equal to zero.
{right arrow over (V)}Li·{right arrow over (V)}Ni=[CYi,−CXi]·(VNXi,VNYi)=0
{right arrow over (V)}Ni=(VNXi,VNYi)=(CXi,CYi)
This vector may be normalized to make it a unit vector.
Limit Line Constants
Solve the Limit Line Constants by choosing the best variable axis. Look for maximum vector component value of the following equation.
{right arrow over (V)}Li·{right arrow over (V)}Ni
If(VLXi·VNXi=Max vector component, “βx” is variable axis)
If(VLYi·VNYi=Max vector component, "βy" is variable axis)
Limit line Constants:
Normal Line Constants
Solve Normal Graphic Line Constants. Use the same variable axis as with the Limit Line Constants.
Normal Line Intersect Point
Find Normal Line and pressure Limit Lines intersection points ((βxPi, βyPi) for each lumen Line crossing the pressure limit. Use the same variable axis as with the Limit Line Constants.
Limit Lines Intersect Points
Find the pressure Limit Lines (i, k) Intersection points for lumens that cross pressure limit. Use the same variable axis as with the Limit Line Constants.
Intersect Point Pressure Vectors
Solve the pressure array(s) for each Intersection Point
{right arrow over (Pr)}Ti={right arrow over (C)}i*(βxPi,βyPi)
{right arrow over (Pr)}Tik={right arrow over (C)}i*(βxPik,βyPik)
Solve for the intersect points of all limit lines. The number of limit lines will be dictated by the number of lumens that cross a limit pressure. The maximum possible for two segments is six lumen lines.
In the chart below, the i and k indicate a specific lumen combination, where i and k are not the same number and each combination should be selected only once.
Table 19 indicates the number of lumen line intersect points as a function of the number of lumen lines in play. Note that with six lumen lines there are 15 intersect points, as indicated by the "x's". Adding these to the Normal line intersects, with six lumens over the pressure limit, a total of 21 intersect points (6 normal+15 lumen line) would benefit from being resolved.
Now find the closest (and achievable) point Qd to the target QT (with all pressure values within the limits). There should be one or more intersect points that are within the pressure limits. To optimize the search sequence, note the following.
Axial Mode
Axial Mode control allows change in orientation at a fixed position and about one axis.
When orientation adjustments meet a boundary, pitch is sacrificed in order to meet circumferential angle about the Normal axis. The method maintains telemetry on a point while moving to the closest orientation while sacrificing pitch angle.
Normal Line Vector
For the Axial Mode the Normal Line Vector is the line normal to the ABC plane and, assuming point A is on trajectory path, it can be found by the orientation vector between point A and T.
{right arrow over (V)}N=(VNX,VNY)=[(βxA−βxT),(βyA−βyT)]
This vector may be normalized to make it a unit vector.
Limit Line Constants
Solve the Limit Line Constants by choosing the best variable axis. Look for maximum vector component value of the following equation.
{right arrow over (V)}Li·{right arrow over (V)}N
If(VLXi·VNX=Max vector component, “βx” is variable axis)
If(VLYi·VNY=Max vector component, “βy” is variable axis)
Limit line Constants:
Normal Line Constants
Solve Normal Graphic Line Constants. Use the same variable axis as with the Limit Line Constants.
Find Normal Line and pressure Limit Lines intersection points (XPi, YPi, ZPi) for each lumen Line crossing the pressure limit. Use the same variable axis as with the Limit Line Constants.
Solve the pressure array(s) for each Intersection Point
{right arrow over (Pr)}Ti={right arrow over (C)}i*(βxPi,βyPi)
Now find the closest (achievable) point Qd to the target QT (with all pressure values within the limits). There should be one or more intersect points that are within the pressure limits.
Referring now to
Referring now to
Referring now to
As seen in
Referring now to
Referring now to
Referring now to
Referring now to
Referring still to
Referring now to
Referring now to
While the exemplary embodiments have been described in some detail for clarity of understanding and by way of example, a variety of modifications, changes, and adaptations of the structures and methods described herein will be obvious to those of skill in the art. Hence, the scope of the present invention is limited solely by the claims attached hereto.
The present application is a continuation under 35 U.S.C. 111(a) of PCT International Application No. PCT/US2019/065752, filed on Dec. 11, 2019, which claims the benefit of and priority to U.S. Provisional Application Ser. No. 62/778,148 filed on Dec. 11, 2018, 62/896,381 filed Sep. 5, 2019, and 62/905,243 filed Sep. 24, 2019. These applications are incorporated by reference herein in their entirety for all purposes.
Number | Date | Country
---|---|---
62778148 | Dec 2018 | US
62896381 | Sep 2019 | US
62905243 | Sep 2019 | US
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/US2019/065752 | Dec 2019 | US
Child | 17340773 | | US