There are various contexts in which it is useful for a practitioner performing surgery to obtain area and/or depth measurements for features of interest within the surgical field.
One context is that of hernia repair. After closure of a hernia, a surgical mesh is often inserted and attached (via suture or other means) to provide additional structural stability to the site and minimize the likelihood of recurrence. It is important to size this mesh correctly, with full coverage of the site along with adequate margin provided along the perimeter to allow for attachment to healthy tissue—distributing the load as well as minimizing the likelihood of tearing through more fragile tissue at the boundaries of the now-closed hernia.
The size of the area to be covered and thus the size of the mesh needed may currently be estimated by a user looking at the endoscopic view of the site. For example, the user might use the known diameters or feature lengths on surgical instruments as size cues. In more complex cases, a sterile, flexible, measuring “tape” may be rolled up, inserted through a trocar, unrolled in the surgical field, and manipulated using the laparoscopic instruments to make the necessary measurements.
This application describes a system providing more accurate sizing and area measurement information than can be achieved using current methods.
This application describes a system and method that use image processing of the endoscopic view to determine sizing and measurement information for a hernia defect or other area of interest within a surgical site.
Examples of ways in which an area in a surgical field may be measured are described here, but it should be understood that others may be used without deviating from the scope of the invention. Additionally, examples are given in this application in the context of hernia repair, but the disclosed features and steps are equally useful for other clinical applications requiring measurement of an area of interest within the surgical site and, optionally, selection of an appropriately-sized implant or other medical device for use at that site.
A system useful for performing the disclosed methods, as depicted in the figures, includes a camera 10, a computing unit 12, an image display, and one or more user input devices 16.
The camera 10 may be a 3D or 2D endoscopic or laparoscopic camera. Where it is desirable to obtain depth measurements or to determine depth variations, configurations allowing such measurements (e.g. a stereo/3D camera, or a 2D camera with software and/or hardware configured to permit depth information to be determined or derived) are used. The computing unit 12 is configured to receive the images/video from the camera and input from the user input device(s). An algorithm stored in memory accessible by the computing unit is executable to, depending on the particular application, use the image data to perform one or more of the following: (a) image segmentation, such as for identifying boundaries of an area of interest that is to be measured; (b) recognition of hernia defects or other predetermined types of areas of interest, based on machine learning or neural networks; (c) point-to-point measurement; (d) area measurement; and (e) computing the depth from the camera data (if not done by the camera itself), i.e. the distance between the image sensor and the scene points captured in the image, which in the case of a laparoscope or endoscope are points within a body cavity. The computing unit may also include an algorithm for generating overlays to be displayed on the display.
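By way of illustration only, the following sketch shows one way item (e) might be realized for a stereo camera, using OpenCV semi-global block matching. The focal length and baseline values below are placeholder assumptions; in practice they would come from the camera's calibration.

```python
import cv2
import numpy as np

# Placeholder calibration values (assumed for illustration; a real system
# would read these from the stereo camera's calibration data).
F_PX = 700.0       # focal length in pixels
BASELINE_MM = 4.0  # stereo baseline in millimeters

def depth_map(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Return per-pixel depth in mm for a rectified stereo image pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,  # search range; must be divisible by 16
        blockSize=7,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mark unmatched pixels as invalid
    return F_PX * BASELINE_MM / disparity  # depth = f * B / disparity
```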
The system may include one or more user input devices 16. When included, a variety of different types of user input devices may be used alone or in combination. Examples include, but are not limited to, eye tracking devices, head tracking devices, touch screen displays, mouse-type devices, voice input devices, foot pedals, or switches. Various movements of an input handle used to direct movement of a component of a surgical robotic system may be received as input (e.g. handle manipulation, joystick, finger wheel or knob, touch surface, button press). Another form of input may include manual or robotic manipulation of a surgical instrument having a tip or other part that is tracked using image processing methods when the system is in an input-delivering mode, so that it may function as a mouse, pointer and/or stylus when moved in the imaging field. Input devices of the types listed are often used in combination with a second, confirmatory, form of input device allowing the user to enter or confirm a selection (e.g. a switch, voice input device, button, or icon pressed on a touch screen, as non-limiting examples).
The following steps may be carried out when using the disclosed system:
In an initial step, image processing techniques are used in real time on images of the surgical site to identify the area to be measured. Embodiments for carrying out this step include, without limitation, the following:
As another specific example, the user might circumscribe an area using multiple points, or "draw" the area using the instrument tip, and then prompt the system to measure the circumscribed area. In this example, the user could trace the perimeter of the defect or other object or area of interest. The steps are repeated as needed to obtain the dimensions for the desired area. Note that when measurement techniques are used in a system employing robotically-manipulated instruments, kinematic information may be used to aid in defining the location of the instrument tips in addition to, or as an alternative to, the use of image processing.
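A minimal sketch of such a measurement is given below. It assumes the 3D tip positions sampled during the trace are already available (from image processing and/or kinematics) as an ordered, roughly planar loop; the vector-area formula it uses is exact only for a planar polygon.

```python
import numpy as np

def perimeter_and_area(points_mm: np.ndarray) -> tuple[float, float]:
    """Perimeter length and enclosed area of a traced boundary.

    points_mm: (N, 3) array of 3D tip positions in mm, sampled in order
    as the user traces the defect edge; assumed roughly planar.
    """
    closed = np.vstack([points_mm, points_mm[:1]])  # close the loop
    edges = np.diff(closed, axis=0)
    perimeter = float(np.linalg.norm(edges, axis=1).sum())
    # Vector-area formula: exact for a planar polygon and a reasonable
    # approximation for a gently curved surface patch.
    cross_sum = np.cross(closed[:-1], closed[1:]).sum(axis=0)
    area = 0.5 * float(np.linalg.norm(cross_sum))
    return perimeter, area
```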
In some implementations, the system may take the measured dimensions and automatically add a safety margin around the perimeter of the measured area. In these cases, the system may propose a corresponding mesh size and shape that covers the defect plus the margin. The width of the margin may be predefined or entered/selected by the user using an input device. The perimeter of this mesh may be adjusted by the user.
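As an illustrative sketch of this step, the measured defect outline may be offset outward by the margin width and a bounding mesh size derived from the result. The use of the shapely library and the default margin value below are assumptions made for illustration, not part of the described system.

```python
from shapely.geometry import Polygon

def propose_mesh(defect_outline_mm: list[tuple[float, float]],
                 margin_mm: float = 30.0) -> tuple[Polygon, float, float]:
    """Offset the defect outline by a safety margin and size a mesh.

    margin_mm is a placeholder default; in the described system it is
    predefined or entered/selected by the user.
    """
    defect = Polygon(defect_outline_mm)
    covered = defect.buffer(margin_mm)  # defect plus surrounding margin
    min_x, min_y, max_x, max_y = covered.bounds
    # Width and height of a rectangular mesh that covers defect + margin.
    return covered, max_x - min_x, max_y - min_y
```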
This system may be used during laparoscopic or other types of surgical procedures performed with manual instruments, or in robotically-assisted procedures where the instruments are electromechanically maneuvered or articulated. It may also be used in semi- or fully-autonomous robotic surgical procedures. Where the system is used in conjunction with a surgical robotic system, the enhanced accuracy, user interface, and kinematic information (e.g. kinematic information relating to the location of instrument tips being used to identify sites at which measurements are to be taken) may increase the accuracy of the measurements and provide a more seamless user experience.
Some specific examples of use of the described system will now be given. Each of the listed examples may incorporate any of the features or functions described above in the “System” section.
In this example, an image of the operative site is captured by an endoscope and displayed on a display. See
As shown in the figures, the user operates a user input device to position a border 18 on the displayed image so that it surrounds the hernia defect or other area of interest.
In other embodiments, the image processing algorithm automatically detects the defect, and expands and automatically repositions the border 18 to surround it, optionally then receiving user confirmation, via a user input device, that the defect has been encircled.
Once the user has identified the region within which the area of interest or defect is located, a computer vision algorithm is employed to determine the boundaries of the area of interest or defect. Various techniques for carrying out this process are described above in (a). In this specific example, to detect the perimeter of the defect, the system places an active contour model 24 within the border placed or confirmed by the user, as shown in
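One possible realization of this step, sketched below with the scikit-image active contour implementation, initializes the snake on the user-placed circular border and lets it deform toward the defect edge. The smoothing and snake parameters are illustrative, untuned assumptions.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def fit_defect_contour(frame_rgb: np.ndarray,
                       center_rc: tuple[float, float],
                       radius_px: float) -> np.ndarray:
    """Fit an active contour (snake) to the defect perimeter.

    center_rc and radius_px describe the circular border the user placed
    or confirmed around the defect (row/column image coordinates).
    """
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center_rc[0] + radius_px * np.sin(theta),
                            center_rc[1] + radius_px * np.cos(theta)])
    smoothed = gaussian(rgb2gray(frame_rgb), sigma=3)
    # The snake deforms from the initial circle toward strong image edges.
    return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
```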
Before or after measuring the defect, the system may display a margin overlay 26 on the image display, around the perimeter of the defect. This overlay has an outer edge that runs parallel to the edge of the defect, with the width of the overlay corresponding to a predetermined margin around the defect. In
The user inputs instructions to the system confirming the selected margin width. The system measures the dimensions and, optionally, the area of the hernia, preferably using 3D image processing techniques as described above. The system measures the largest dimensions of the defect based on the perimeter defined using the active contour model. The measurement may include measuring across the defect from various portions of its edge to determine the largest dimensions in perpendicular directions across the defect. If a circular mesh is intended, the largest dimension in a single direction across the defect may be measured.
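One way to obtain these largest perpendicular dimensions from the fitted perimeter is sketched below, using the minimum-area rotated bounding rectangle; the pixel-to-millimeter scale is assumed to be available from the depth/calibration data. For a circular mesh, only the larger of the two returned values would be needed.

```python
import cv2
import numpy as np

def largest_dimensions(perimeter_px: np.ndarray,
                       mm_per_px: float) -> tuple[float, float]:
    """Largest extents of the defect along two perpendicular directions.

    perimeter_px: (N, 2) pixel coordinates of the fitted defect perimeter.
    mm_per_px: image scale, assumed derived from depth/calibration data.
    """
    pts = perimeter_px.astype(np.float32).reshape(-1, 1, 2)
    # Minimum-area rotated rectangle enclosing the perimeter points.
    (_, _), (w, h), _ = cv2.minAreaRect(pts)
    major, minor = max(w, h), min(w, h)
    return major * mm_per_px, minor * mm_per_px
```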
A recommended mesh profile 28 and/or recommended mesh dimensions are overlaid onto the image. Where the user has specified the margin width, or the system is programmed to include a predetermined margin width, the recommended profile is preferably a shape having borders that surround the defect by an amount that creates at least the chosen or predetermined margin around the defect. In
The displayed overlay, as well as others described in this application, is preferably at least partially transparent so as to not obscure the user's view of the operative site. The user may wish to choose the position and/or orientation for the mesh, or to deviate from the algorithm-proposed position and/or orientation, if, for example, the user wants to choose certain robust tissue structures as attachment sites and/or to choose the desired distribution of mesh tension. The system thus may be configured to receive input from the user to select or change the orientation of the displayed mesh. For example, the user may give input to drag and/or rotate the mesh overlay relative to the image. As another example, the system may automatically, or be prompted to, identify the primary and secondary axes of the defect, and automatically rotate and skew a displayed rectangular or oval shaped mesh overlay to align its primary and secondary axes with those of the defect. The user may from this point use the user input device to fine tune the position and orientation.
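The axis identification mentioned above might, for example, be performed by principal component analysis of the perimeter points, as in the following sketch; the returned angle could then drive the automatic rotation of the mesh overlay. This is one possible approach, not the only one.

```python
import numpy as np

def defect_axes(perimeter_px: np.ndarray):
    """Primary/secondary axes of the defect via PCA of its perimeter.

    Returns the centroid, the unit vector of the primary axis, and the
    rotation angle (degrees) to apply to the mesh overlay.
    """
    pts = perimeter_px.reshape(-1, 2).astype(np.float64)
    centroid = pts.mean(axis=0)
    # Eigenvectors of the covariance matrix give the principal directions;
    # eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
    primary = eigvecs[:, np.argmax(eigvals)]
    angle_deg = float(np.degrees(np.arctan2(primary[1], primary[0])))
    return centroid, primary, angle_deg
```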
Note that the measurement techniques may be used to measure the defect itself (based on the perimeter defined using the active contour model) and to output those measurements to the user as depicted in
In modifications to Example 1, neural networks may be trained to recognize hernia defects, and/or to identify optimal mesh placement and sizing.
In another modification to Example 1, rather than encircling an area, a user input device is used to move a cursor (crosshairs) or other graphical overlay to define a point inside a defect or region to be measured as it is displayed in real time on the display. A region growing algorithm is then executed, expanding an area outward from that point by finding, within the image data, continuity of color or other features within some tolerance; the grown region identifies the extents of the area of interest.
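A minimal sketch of such a region growing step is shown below, using OpenCV flood fill in floating-range mode so the region grows wherever adjacent pixels remain within the color tolerance of their neighbors; the tolerance value is an illustrative assumption.

```python
import cv2
import numpy as np

def grow_region(frame_bgr: np.ndarray,
                seed_xy: tuple[int, int],
                tolerance: int = 12) -> np.ndarray:
    """Grow a region outward from a user-selected seed point.

    Returns a mask in which 1 marks pixels connected to the seed through
    neighbors whose colors differ by no more than `tolerance` per channel.
    """
    h, w = frame_bgr.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a 1px pad
    cv2.floodFill(frame_bgr.copy(), mask, seed_xy,
                  newVal=(255, 255, 255),
                  loDiff=(tolerance,) * 3,
                  upDiff=(tolerance,) * 3,
                  flags=4)  # 4-connectivity, floating (neighbor-relative) range
    return mask[1:-1, 1:-1]
```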
As discussed in connection with Example 1, segmentation methods often use color differentiation or edge detection methods to determine the extent of a given region, such as the hernia defect. In certain instances, the color information may change across a region, creating potential for errors in segmentation and therefore measurement. It can therefore be beneficial to enrich the fidelity of segmentation and classification of regions by also using depth information, which may be gathered from a stereo endoscopic camera. Using detection of depth disparities, significant changes in depth across the region identified as being the defect can be used by the system to confirm that the active contour model detection of edges is correct.
In use, during the edge identification process, the depth disparity information can be used as illustrated in
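Purely as an illustrative sketch, the depth gradient can be sampled along the candidate contour and the detected edge treated as confirmed when a sufficient share of contour points lies on a sufficiently large depth step; the threshold values below are assumptions.

```python
import numpy as np

def edge_confirmed_by_depth(depth_mm: np.ndarray,
                            contour_rc: np.ndarray,
                            min_step_mm: float = 2.0,
                            min_fraction: float = 0.5) -> bool:
    """Check a candidate defect contour against depth discontinuities.

    contour_rc: (N, 2) row/column coordinates of the candidate contour.
    Returns True if at least `min_fraction` of the contour points sit on
    a local depth change of at least `min_step_mm` per pixel.
    """
    grad_r, grad_c = np.gradient(depth_mm)
    step = np.hypot(grad_r, grad_c)  # magnitude of local depth change
    rows = np.clip(contour_rc[:, 0].astype(int), 0, depth_mm.shape[0] - 1)
    cols = np.clip(contour_rc[:, 1].astype(int), 0, depth_mm.shape[1] - 1)
    on_step = step[rows, cols] >= min_step_mm
    return float(np.mean(on_step)) >= min_fraction
```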
As another example, a user might use a user input device to place overlays of horizontal and vertical lines or crosshairs within the defect as observed on the image display. These lines could be used to define horizontal and vertical section lines along which depth disparities would be sought. Once found, the depth disparities could be traced circumferentially to define the maximum extent of the area/region/defect, and the measurements would be taken from those extents.
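A sketch of the corresponding one-dimensional search along such a section line is given below; the step threshold is again an illustrative assumption.

```python
import numpy as np

def depth_steps_along_row(depth_mm: np.ndarray, row: int,
                          min_step_mm: float = 2.0) -> np.ndarray:
    """Column indices where depth jumps along a horizontal section line.

    The outermost jumps on either side of the defect bound its maximum
    horizontal extent along this line; a vertical section line is handled
    the same way on a column of the depth map.
    """
    profile = depth_mm[row, :]
    jumps = np.abs(np.diff(profile))  # depth change between neighboring pixels
    return np.flatnonzero(jumps >= min_step_mm)
```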
It is not required that depth disparity detection be used in combination with, or as a check on, edge detection carried out using active contour models. It is a technique that may be used on its own for edge detection, or in combination with other methods such as machine learning/neural networks.
Referring to
In a second example depicted in
In this embodiment, the system may be configured to detect the defect as described in connection with Example 1. Alternatively, the system may be configured to determine 3D surface topography but not necessarily to determine the edges of the defect.
User input is received by which the user “selects” a first one of the displayed mesh types. As one specific example, the user may rotate a finger wheel or knob on the user input device to sequentially highlight each of the displayed mesh types, then give a confirmatory form of input such as a button press to confirm selection of the highlighted mesh. Once confirmed, the system displays the selected mesh type in position over the defect (if the edges of the defect have been determined by the system), or the user gives input to “pick up” and “drag” the selected mesh type into a desired position over the defect. The system conforms the displayed mesh overlay to the surface topography, while maintaining tension across the defect, as discussed in connection with Example 1. See
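One simplified way to conform the overlay outline to the measured topography while maintaining tension across the defect is sketched below: each outline point takes the measured surface depth, clamped at the rim depth so the overlay bridges the defect opening rather than sagging into it. This is an illustrative placeholder for true draping, and rim_depth_mm is an assumed input.

```python
import numpy as np

def drape_mesh_outline(outline_rc: np.ndarray,
                       depth_mm: np.ndarray,
                       rim_depth_mm: float) -> np.ndarray:
    """Lift a flat mesh outline onto the measured tissue surface.

    outline_rc: (N, 2) row/column image coordinates of the mesh outline.
    Points take the surface depth, but are clamped at the rim depth so
    the overlay spans the defect under tension instead of following the
    cavity floor.
    """
    rows = outline_rc[:, 0].astype(int)
    cols = outline_rc[:, 1].astype(int)
    z = depth_mm[rows, cols]
    z = np.minimum(z, rim_depth_mm)  # do not dip deeper than the rim
    return np.column_stack([outline_rc, z])
```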
In this embodiment, the system is configured to detect the defect as described in connection with Example 1, and the method is performed similarly to Example 1, with a recommended mesh size and orientation displayed as in
In this embodiment, the system is configured to detect the defect as described in connection with Example 1, and the method is performed similarly to Example 1. Once the defect is detected, all available mesh types are simultaneously displayed on the defect, each with coloring to differentiate it from the other displayed mesh overlays (e.g. different color shading and/or border types, different patterns, etc.). Each overlay is oriented as determined by the system to best cover the defect given the size and shape of the defect and the size and shape of the corresponding mesh, and to conform to the topography, but with tension across the defect, as described in the prior examples. Further user input can be given to select and re-position displayed mesh overlays as discussed with the prior examples, and to remove mesh types that have been ruled out from the display.
Measurement of the area of an area of interest may also be of use to a practitioner in the above-described contexts, and in other contexts. The maximum dimensions of a tumor or lesion may be necessary for staging purposes, and the dimensions of treated tumors, lesions, etc. may necessitate different medical coding than smaller ones to ensure commensurate reimbursement. These needs may come into play in the treatment of tumors or endometriosis, in cancer staging, or in myomectomy.
In addition to the computer vision-based algorithms described above for determining the extents of the areas of interest to be measured, computer vision techniques such as region growing or a magic wand tool (in which pixels of like colors within a (variable) tolerance are identified by the system to find boundaries of regions of interest) may be used to measure maximum dimensions or areas. Fluorescence may be used for some areas of interest to aid in highlighting and identifying extents. Regions within which the user wants the system to apply computer vision to identify the extents of areas of interest may be identified to the system using methods similar to those described above, in which a boundary is created around the area within which the user wants the system to look for and measure the area of interest. A tool such as the commercially known "magnetic lasso" tool, in which points can be dropped and snapped to an edge of an area to be measured, may also be used.
In use, a user uses a user input device to select regions to be measured. Computer vision is used to determine the area and/or maximum dimensions (i.e. largest length, width, and/or depth), which are then output to the user using text or graphical icons on a screen, audio output, etc. In some cases, a running aggregate of all the areas treated (for example, the combined area of all endometriosis lesions treated) may be stored in the system memory and output to the user. These concepts may be combined with those described in co-pending and commonly owned U.S. application Ser. No. 17/368,756, AUTOMATIC TRACKING OF TARGET SITES WITHIN PATIENT ANATOMY, filed Jul. 6, 2021, which is incorporated herein by reference.
Area measurement may be used in a laparoscopic case with manual instruments, or in a robotically-assisted case, or in semi-autonomous or autonomous robotic surgery. In some implementations using a surgical robotic system, the enhanced accuracy, user interface, and kinematic information from the robotic system may be used to provide more accurate information and a more seamless user experience.
This application is a continuation of U.S. application Ser. No. 17/487,646, filed Sep. 28, 2021, which is a continuation-in-part of U.S. application Ser. No. 17/035,534, filed Sep. 28, 2020, which claims the benefit of U.S. Provisional Application No. 62/907,449, filed Sep. 27, 2019, and U.S. Provisional Application No. 62/934,441, filed Nov. 12, 2019, each of which is incorporated herein by reference. U.S. application Ser. No. 17/487,646 also claims the benefit of U.S. Provisional Application No. 63/084,545, filed Sep. 28, 2020.
Number | Date | Country
---|---|---
62907449 | Sep 2019 | US
62934441 | Nov 2019 | US
63084545 | Sep 2020 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 17487646 | Sep 2021 | US
Child | 18436655 | | US

Relation | Number | Date | Country
---|---|---|---
Parent | 17035534 | Sep 2020 | US
Child | 17487646 | | US