Medical imaging has vastly improved medical diagnosis and treatment by allowing doctors and medical technicians to visualize internal anatomical structures. Among the many imaging modalities available, ultrasound is favored for its benign signals and portability. Ultrasound (US) imaging has been widely adopted for abnormality monitoring, obstetrics, and guiding interventional and radiotherapy procedures. US is acknowledged for being cost-effective, real-time, and safe. Nonetheless, a US examination is a physically demanding procedure. Sonographers need to press the US probe firmly onto the patient's body and fine-tune the probe's image view in an unergonomic posture. More importantly, the examination outcomes are heavily operator-dependent. The information contained in the US images can be easily affected by factors such as the scan location on the body, the probe orientation at the scan location, and the contact force applied there. Obtaining consistent examination outcomes requires highly skilled personnel with substantial experience.
An imaging self-positioning system includes a robotic actuator for manipulating an imaging tool or medical probe and a sensory component for maintaining a normal orientation adjacent a patient treatment site. The imaging tool, typically a US probe, is grasped by an end-effector or similar actuator, and a sensory component engaged with the imaging tool senses an orientation of the tool relative to the treatment surface; the robotic actuator disposes the imaging tool to maintain a normal or other predetermined angular alignment with the treatment surface. The treatment surface is a patient epidermal region adjacent an imaged region for identifying anatomical features and surgical targets. A medical probe such as a biopsy needle may accompany the end-effector for movement consistent with the probe, either manually or robotically advanced towards the surgical target.
Robotic members are often sought for performing repetitive object placement tasks such as assembly and sorting of various objects or parts. Robot-assisted imaging may include a procedure using an end-effector of a robot arm or mechanical actuators to manipulate an imaging probe (for ultrasound, optics, and photoacoustic imaging) to realize teleoperative or autonomous tasks. Such a procedure employs sensing of the surface terrain (e.g., skin) and controlling both the orientation and location of the probe by grasping the probe through the end-effector, typically a claw or similar actuator.
Configurations herein are based, in part, on the observation that conventional medical imaging, and in particular US imaging, is often employed by skilled sonographers for obtaining visual imaging for diagnosis and real-time feedback during minimally invasive procedures using a needle or probe. Unfortunately, conventional approaches to US imaging suffer from the shortcoming that it can be problematic to manipulate an imaging probe for an accurate depiction of a surgical target, particularly during concurrent insertion of the needle or instrument. US probes, while portable, are dependent on accurate positioning at the treatment surface for rendering positional guidance. Accordingly, configurations herein substantially overcome the shortcoming of conventional US procedures by providing a self-positioning robotic apparatus for positioning and maintaining an alignment of the probe at a predetermined angle with the treatment site. Typically a normal or substantially normal orientation to the surface is sought; however, an angular tilt may be beneficial to avoid anatomical structures obscuring the surgical target.
In a particular use case of a needle or instrument, insertion force is another parameter that eludes automation. Insertion progression and depth may be measured by resistance, or the force needed for insertion. However, varied densities of anatomical tissue, as well as variances due to an insertion angle, can make depth sensing based on resistive force to insertion unreliable.
In an example configuration, the imaging device performs a method for robotic positioning of a medical instrument by receiving, from each of a plurality of sensing elements disposed in proximity to the medical instrument, a signal indicative of a distance to a treatment site of a patient. The controller computes, based on each of the signals and an offset of the sensor from the medical instrument, a distance from each of the respective sensing elements to the treatment site. The medical instrument may be an imaging probe, such that the imaging device determines, based on the computed distances, an angle of the medical instrument relative to the treatment site for optimal imaging alignment of a surgical site.
The foregoing and other objects, features and advantages of the invention will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Conventional manual ultrasound (US) imaging is a physically demanding procedure requiring skilled operators for accurate positioning of the imaging sensor. A Robotic Ultrasound System (RUSS) has the potential to overcome this limitation by automating and standardizing the imaging procedure. It also extends ultrasound accessibility in resource-limited environments with a shortage of human operators by enabling remote diagnosis. During imaging, maintaining the US probe in a normal orientation to the skin surface largely benefits the US image quality. However, an autonomous, real-time, low-cost method to align the probe in the direction orthogonal to the skin surface without pre-operative information is absent in conventional RUSS.
Referring to
A controller 130 includes a robotic positioning circuit 132 and logic, and an image processor 134, along with a processor 136 and memory 138 for containing instructions as described further below. The method for robotic positioning of a surgical instrument or probe 116 includes receiving, from each of a plurality of sensing elements disposed in proximity to the probe 116, a signal indicative of a distance to a treatment site 101 of a patient, and computing, based on each of the signals and an offset of the sensor from the medical instrument, a distance from each of the respective sensing elements to the treatment site. This determines a normal or off-normal position of the sensor ring, and hence the probe, relative to the treatment surface. Based on the computed distances, the processor 136 computes an angle of the probe 116 relative to the treatment site 101.
An autonomous RUSS has been explored to address the issues with the conventional US. RUSS utilizes robot arms to manipulate the US probe. The sonographers are thereby relieved of the physical burdens. The diagnosis can be done remotely, eliminating the need for direct contact with patients. The desired probe pose (position and orientation) and the applied force can be parameterized and executed by the robot arm with high motion precision. As a result, the examination accuracy and repeatability can be secured. The probe pose can also be precisely localized, which enables 3D reconstruction of human anatomy with 2D US images.
An autonomous scan may adopt a two-step strategy. First, a scan trajectory formed by a series of probe poses is defined using preoperative data such as Magnetic Resonance Imaging (MRI) of the patient or a vision-based point cloud of the patient body. Second, the robot travels along the trajectory while the probe pose and applied force are continuously updated according to intraoperative inputs (e.g., force/torque sensing, real-time US images, etc.). Owing to factors including involuntary patient movements during scanning, inevitable errors in registering the scan trajectory to the patient, and a highly deformable skin surface that is difficult to measure preoperatively, the second step is of particular significance to the successful acquisition of diagnostically meaningful US images. The ability to update probe positioning and orientation in real time is preferred to enhance the efficiency and safety of the scanning process. In particular, keeping the probe at an appropriate orientation assures good acoustic coupling between the transducer and the body. A properly oriented probe offers a clearer visualization of pathological clues in the US images. Real-time probe orientation adjustment is challenging and remains an open problem.
Configurations herein apply two aspects: i) a compact and cost-effective active-sensing end-effector (A-SEE) device that provides real-time information on the rotation adjustment required for achieving normal positioning; conventional approaches do not achieve simultaneous in-plane and out-of-plane probe orientation control without relying on a passive contact mechanism; and ii) integration of the A-SEE approach with the RUSS to implement a complete US imaging workflow that demonstrates the A-SEE-enabled probe self-normal-positioning capability. It should be further emphasized that normal positioning, meaning a probe orientation that locates a longitudinal axis of the probe at a normal, or perpendicular, to a plane defined by the skin surface, is one example of a preferred orientation; other angular orientations may be determined.
(1) depicts a transformation from Fbase to Fflange, denoted as Tbase-flange;
(2) depicts a transformation from Fflange to FA-SEE, denoted as Tflange-A-SEE;
(3) depicts a transformation from FA-SEE to Fcam, denoted as TA-SEE-cam.
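As a minimal illustration of how these frames compose (a Python sketch with placeholder matrices; in practice Tbase-flange comes from the robot's forward kinematics, while Tflange-A-SEE and TA-SEE-cam are fixed offsets obtained from calibration), chaining (1)-(3) expresses the camera frame Fcam in the robot base frame Fbase:

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder values for illustration only.
T_base_flange = make_transform(np.eye(3), [0.40, 0.00, 0.30])   # from robot kinematics
T_flange_asee = make_transform(np.eye(3), [0.00, 0.00, 0.05])   # calibrated offset
T_asee_cam    = make_transform(np.eye(3), [0.02, 0.00, 0.01])   # calibrated offset

# Composing (1), (2), and (3): camera frame expressed in the base frame.
T_base_cam = T_base_flange @ T_flange_asee @ T_asee_cam
print(T_base_cam[:3, 3])   # camera position in Fbase
```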
Operation of the controller includes the implementation details of A-SEE and its integration with a RUSS to manipulate the actuator 114 according to the sensor ring 120. A typical use case involves preoperative probe landing pose identification and intraoperative probe self-normal-positioning with contact force adaptation. During imaging, the shared control scheme can allow teleoperative sliding of the probe along the patient body surface, as well as rotating the probe about its axis. Of course, the normal (or other angle pose) can assist in in-person procedures as well.
A typical scenario deploys the probe 116 to have an imaging field 140 capturing images of a surgical target 150, usually a mass or anatomical region to be biopsied or pierced, although any suitable anatomical location may be sought. This usually involves identifying an axis 124 of the medical instrument or probe 116, such that the axis 124 extends towards the treatment site 101, and is based on an orientation of the axis 124 relative to the plane of the treatment site 101. The probe axis 124 is defined by a longitudinal axis through the center of mass of the probe 116, or another axis that denotes a middle of the sensed imaging field 140. In the simplest case, seeking a normal orientation of the probe 116 to the surface 101, each of the four distance sensors 122 returns an equal value. Differing values give an angular orientation of the probe axis 124 relative to the treatment surface 101, as the “tilt” or angle of the sensor ring 120 will be reflected in the relative distances 122′-1 . . . 122′-4 (122′ generally).
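As a minimal sketch of this geometric relationship (Python, assuming a flat local surface, an assumed ring radius, and an assumed layout in which sensors 1/3 oppose each other on one axis and 2/4 on the other), the tilt of the probe axis 124 relative to the local surface can be recovered from the four distances; equal readings indicate a normal orientation:

```python
import numpy as np

def probe_tilt_from_distances(d, ring_radius_mm=40.0):
    """Estimate probe tilt from four ring-mounted distance readings.

    d : sequence of four filtered distances d1..d4 (mm), with sensors 1/3
        opposing each other on one axis and 2/4 on the other (an assumed
        layout consistent with the d13/d24 pairing used later).
    ring_radius_mm : assumed sensor offset from the probe axis.

    Returns (tilt_x, tilt_y) in radians; both are ~0 when the probe axis
    is normal to the local surface (all four distances equal).
    """
    d1, d2, d3, d4 = d
    # Opposing-sensor differences; a locally flat surface is assumed.
    d13 = d3 - d1
    d24 = d4 - d2
    # Small-angle tilt about each in-plane axis of the sensor ring.
    tilt_y = np.arctan2(d13, 2.0 * ring_radius_mm)
    tilt_x = np.arctan2(d24, 2.0 * ring_radius_mm)
    return tilt_x, tilt_y

# Example: equal readings -> normal; unequal readings -> tilt to correct.
print(probe_tilt_from_distances([120.0, 120.0, 120.0, 120.0]))  # (0.0, 0.0)
print(probe_tilt_from_distances([118.0, 121.0, 124.0, 119.0]))  # nonzero tilt
```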
Either a sensory probe such as the US probe 116, or a surgical medical instrument such as a needle may be grasped by the actuator 114. The probe axis 124 therefore defines an approach angle of the medical instrument to the treatment site 101, where the sensors 122 are used to dispose the medical instrument based on a target angle defined by intersection of the axis 124 with the treatment site 101. The robotic arm 110 translates the surgical instrument along the axis 124, and therefore disposes the robotic actuator 114 based on the determined angle of the medical instrument.
The probe axis 124 need not be normal to the treatment surface 101. In general, the probe 116 receives a location of the surgical target 150 in the imaging region 140. The sensors 122 may be used to compute the angle of the medical instrument based on an intersection with the surgical target 150 and the probe axis 124. The medical instrument 117 may then be projected along the computed angle for attaining the surgical target 150.
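A brief sketch of this computation (Python; the probe-fixed frame convention with the z-axis along the probe axis 124 is an assumption of this example) returns the angular correction needed for the axis to intersect the target 150:

```python
import numpy as np

def approach_angle_to_target(target_in_probe_frame):
    """Angle between the probe axis and the ray to a surgical target.

    target_in_probe_frame : (x, y, z) of the target 150 expressed in a
        probe-fixed frame whose z-axis is the probe axis 124 (assumed
        convention for this sketch).
    Returns the off-axis angle in radians; 0 means the axis already
    intersects the target.
    """
    t = np.asarray(target_in_probe_frame, dtype=float)
    axis = np.array([0.0, 0.0, 1.0])           # probe axis in its own frame
    cosang = np.dot(t, axis) / np.linalg.norm(t)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Example: a target 5 mm off-axis at 50 mm depth -> ~0.1 rad correction.
print(approach_angle_to_target((5.0, 0.0, 50.0)))
```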
In
Any suitable sensing medium may be employed for the sensors 122. In an example configuration, optical sensors such as infrared (IR) sensors are a feasible option; however, other mediums such as laser, electromagnetic, or capacitance sensing can suffice given appropriate power and distance considerations.
where dmin is 50 mm and dmax is 200 mm. With sensor error compensation (SEC), the same trials were repeated. The curves in
The values in
where Kp and Kd are empirically tuned control gains;
d13(t)=d3(t)−d1(t), d24(t)=d4(t)−d2(t); Δd13=d13(t)−d13(t−1), Δd24=d24(t)−d24(t−1);
where:
d1 to d4 are the filtered distances from sensors 1 to 4, respectively; Δt is the control interval. ωnx and ωny are limited to within 0.1 rad/s. The angular velocity adjustment rate can reach 30 Hz.
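Since the exact control law is not reproduced above, the following Python sketch assumes a PD form acting on the opposing-sensor differences d13 and d24, with example gain values; it yields the angular velocities ωnx and ωny, clamped to the 0.1 rad/s limit:

```python
import numpy as np

# Gains Kp, Kd and the 0.1 rad/s limit follow the text; the exact control
# law is not reproduced here, so a PD form on the opposing-sensor
# differences is assumed for illustration.
KP, KD = 0.002, 0.0005      # example gains (rad/s per mm), empirically tuned
OMEGA_MAX = 0.1             # rad/s limit
DT = 1.0 / 30.0             # control interval, ~30 Hz adjustment rate

_prev = {"d13": 0.0, "d24": 0.0}

def normal_positioning_step(d):
    """One angular-velocity update from the filtered distances d = (d1..d4)."""
    d1, d2, d3, d4 = d
    d13, d24 = d3 - d1, d4 - d2
    dd13, dd24 = d13 - _prev["d13"], d24 - _prev["d24"]
    _prev["d13"], _prev["d24"] = d13, d24
    # Assumed pairing: d24 drives rotation about x, d13 about y.
    w_nx = KP * d24 + KD * dd24 / DT
    w_ny = KP * d13 + KD * dd13 / DT
    return (np.clip(w_nx, -OMEGA_MAX, OMEGA_MAX),
            np.clip(w_ny, -OMEGA_MAX, OMEGA_MAX))
```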
To prevent loose contact between the probe and the skin, which may cause acoustic shadows in the image, a force control strategy is necessary to stabilize the probe by maintaining the pressing force at an adequate level throughout the imaging process. This control strategy is also responsible for landing the probe gently on the body for the patient's safety. A force control strategy is formulated to adapt the linear velocity along the z-axis expressed in FA-SEE. The velocity adaptation is described by a two-stage process that manages the landing and the scanning motion separately: during landing, the probe velocity decreases asymptotically as the probe gets closer to the body surface; during scanning, the probe velocity is altered based on the deviation of the measured force from the desired value.
Therefore, the velocity at time stamp t is calculated as:
νfz(t)=w·ν+(1−w)·νfz(t−1)
where w is a constant between 0 and 1 to maintain the smoothness of the velocity profile; ν is computed by:
where d′ is the vector of the four sensor readings after error compensation and filtering, and Fz is the robot-measured force along the z-axis of FA-SEE, internally estimated from joint torque readings and then processed using a moving average filter; F˜ is the desired contact force; Kp1 and Kp2 are empirically given gains; d˜ is a single threshold that differentiates the landing stage from the scanning stage, set to the length from the bottom of the sensor ring 120 to the tip 116′ of the probe (120 mm, in the example use case of
The combination of the self-normal-positioning and contact force control of the probe forms an autonomous pipeline that controls 3-DoF probe motion. A shared control scheme is implemented to give manual control of the translation along the x- and y-axes and the rotation about the z-axis, in concurrence with the three automated DoFs. A 3-DoF joystick may be used as an input source, whose movements in the three axes are mapped to the probe's linear velocity along the x- and y-axes (νtx, νty) and angular velocity about the z-axis (ωtz), expressed in FA-SEE.
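The following Python sketch shows one way the three automated DoFs and the three teleoperated DoFs might be assembled into a single probe velocity in FA-SEE; the gains, desired force, and smoothing weight are placeholders, and the two-stage z-velocity rule follows the qualitative description above rather than an exact formula:

```python
import numpy as np

F_DESIRED = 5.0      # desired contact force (N), placeholder
D_THRESH  = 120.0    # landing/scanning threshold (mm), ring-to-tip length
KP1, KP2  = 0.0005, 0.002   # example gains for landing and scanning
W_SMOOTH  = 0.3      # velocity-smoothing weight w in (0, 1)

_prev_vfz = 0.0

def contact_velocity(d_filtered, f_z):
    """Two-stage z-velocity in F_A-SEE: approach during landing, then
    regulate contact force during scanning. The sign convention (negative
    z moves toward the patient) is an assumption of this sketch."""
    global _prev_vfz
    d_mean = float(np.mean(d_filtered))
    if d_mean > D_THRESH:                 # landing: slow down near the skin
        v = -KP1 * (d_mean - D_THRESH)
    else:                                 # scanning: track the desired force
        v = -KP2 * (F_DESIRED - f_z)
    v_fz = W_SMOOTH * v + (1.0 - W_SMOOTH) * _prev_vfz   # smoothed profile
    _prev_vfz = v_fz
    return v_fz

def probe_twist(d_filtered, f_z, w_nx, w_ny, joystick):
    """Assemble the 6-DoF probe velocity in F_A-SEE: three automated DoFs
    (w_nx, w_ny from the normal-positioning controller, plus v_fz) and
    three teleoperated DoFs from the joystick."""
    v_tx, v_ty, w_tz = joystick           # manual x/y translation, z rotation
    v_fz = contact_velocity(d_filtered, f_z)
    return np.array([v_tx, v_ty, v_fz, w_nx, w_ny, w_tz])
```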
A configuration of the imaging device 100 of
Tbase-land = Tbase-flange · Tflange-A-SEE · TA-SEE-cam · Tcam-land
where Tflange-A-SEE and TA-SEE-cam are calibrated from a CAD model or measurements of the device 100. The robot then moves the probe 116 to a landing pose using a velocity-based PD controller. In the intraoperative step, the probe is gradually attached to the skin using the landing-stage force control strategy. Once the probe is in contact with the body, the operator can slide the probe on the body and rotate the probe about its long axis via the joystick. Meanwhile, commanding robot joint velocities generates probe velocities in FA-SEE, such that the probe is dynamically held in the normal direction and pressed with constant force. The desired probe velocities are formed as:
Transforming them to velocities expressed in Fbase yields:
where Rbase-A-SEE ∈ SO(3) is the rotational component of Tbase-A-SEE ∈ SE(3);
[0 1]T=[0 0 0 1]T
Lastly, the joint-space velocity command q̇ that will be sent to the robot for execution is obtained by:
where J(q)† is the Moore-Penrose pseudo-inverse of the robot Jacobian matrix. During the scanning, the US images are streamed and displayed to the operator. The operator decides when to terminate the procedure. The robot will move back to its home configuration after completing the scanning.
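A brief Python sketch of this final mapping (the Jacobian and rotation here are placeholders; on a real arm both would be obtained from the robot's kinematics at the current joint configuration q):

```python
import numpy as np

def joint_velocity_command(v_asee, R_base_asee, J):
    """Map a 6-DoF probe velocity expressed in F_A-SEE to joint velocities.

    v_asee      : [vx, vy, vz, wx, wy, wz] in F_A-SEE
    R_base_asee : 3x3 rotation of F_A-SEE expressed in F_base
    J           : 6xN robot Jacobian evaluated at the current joints q
    """
    # Rotate the linear and angular parts into the base frame.
    v_base = np.hstack([R_base_asee @ v_asee[:3], R_base_asee @ v_asee[3:]])
    # q_dot = J(q)^+ * v_base, using the Moore-Penrose pseudo-inverse.
    return np.linalg.pinv(J) @ v_base

# Example with placeholder values for a 6-joint arm.
q_dot = joint_velocity_command(
    v_asee=np.array([0.0, 0.0, -0.005, 0.02, -0.01, 0.0]),
    R_base_asee=np.eye(3),
    J=np.eye(6),
)
print(q_dot)
```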
The example A-SEE device 100 employs single-point distance sensors to provide sparse sensing of the local contact surface. Such sparse sensing is sufficient to enable probe rotational autonomy when scanning flat, less deformable surfaces. However, dense sensing capability is needed when dealing with more complicated scan surfaces. To this end, the sparsely configured single-point distance sensors can be replaced with short-range stereo cameras (e.g., RealSense D405, Intel, USA), allowing dense RGB-D data acquisition of the probe's surroundings. In general, the plurality of sensing elements 122 define a set of points, such that each point of the set of points has a position and corresponding distance 122′ to the treatment site 101. In the configuration of
A conceptual graph of the dense-sensing A-SEE is shown in
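For the dense-sensing variant, one plausible way to recover the local surface normal from the RGB-D point cloud is a least-squares plane fit over the patch of points beneath the probe (a Python sketch; the crop radius and frame conventions are assumptions of this example). The required rotation is then the one that aligns the probe axis with the estimated normal:

```python
import numpy as np

def surface_normal_from_points(points, center, radius=0.03):
    """Estimate the local skin-surface normal near the probe.

    points : (N, 3) array of depth-camera points expressed in F_A-SEE (m)
    center : (3,) point under the probe tip around which to fit
    radius : crop radius for the local patch (m), an assumed value
    """
    pts = np.asarray(points, dtype=float)
    patch = pts[np.linalg.norm(pts - center, axis=1) < radius]
    centered = patch - patch.mean(axis=0)
    # The singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal toward the probe (assumed to sit at +z in F_A-SEE).
    if normal[2] < 0:
        normal = -normal
    return normal / np.linalg.norm(normal)
```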
While the system and methods defined herein have been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This patent application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent App. No. 63/416,989 filed Oct. 18, 2022, entitled “ROBOTIC ASSISTED IMAGING,” incorporated herein by reference in entirety.
This invention was made with government support under grant DP5 OD028162, awarded by the National Institutes of Health. The government has certain rights in the invention.
Number | Date | Country
---|---|---
63416989 | Oct 2022 | US