The disclosure generally relates to the fields of computer vision and photogrammetry and, in particular but not by way of limitation, to clinical procedures of surgery and diagnosis, for the purpose of calibrating endoscopic camera systems with exchangeable, rotatable optics (the rigid endoscope), identifying which endoscope is in use, or verifying that it is correctly assembled to the camera head. These endoscopy systems are used in several medical domains, such as orthopedics (arthroscopy) or abdominal surgery (laparoscopy), and camera calibration enables applications in Computer-Aided Surgery (CAS) and enhanced visualization.
Video-guided procedures, such as arthroscopy and laparoscopy, make use of a video camera equipped with a rigid endoscope that allows the surgeon to visualize the interior of the anatomical cavity of interest. The rigid endoscope, which, depending on the medical specialty, can be an arthroscope, laparoscope, neuroscope, etc., is combined with a camera comprising a camera-head and a Camera Control Unit (CCU) to form an endoscopic camera. These cameras differ from conventional ones mainly in two characteristics. The first is that the rigid endoscope, also referred to as lens scope or optics, is usually exchangeable for the sake of easy sterilization, with the endoscope being attached to the camera-head by the surgeon in the Operating Room (OR) before starting the medical procedure. This attachment is accomplished with a connector that allows the endoscopic lens to rotate with respect to the camera-head around its symmetry axis (the mechanical axis).
An important enabling step for computer-aided arthroscopy or laparoscopy is camera calibration, such that 2D image information can be related to the 3D scene for the purposes of enhanced visualization, improved perception, measurement, and/or navigation. The calibration of the camera system, which in this case comprises a camera-head equipped with a lens scope, consists of determining the parameters of a projection model that maps projection rays in 3D into points in pixel coordinates in the image, and vice versa.
In endoscopic cameras with rotatable optics, the motion between the rigid endoscope and the camera sensor causes changes in the calibration parameters of the camera system, which means that the projection model is not constant over time as it is in conventional cameras. Since it is impractical to perform an independent calibration for every possible position of the optics with respect to the camera-head, the calibration parameters must be updated according to a camera model that accounts for this relative motion. Solutions for determining this relative rotation and updating the calibration accordingly have been proposed in the literature, with examples including the use of a rotary encoder attached to the camera head [1] or the use of an optical tracking system for determining the position of an optical marker attached to the scope cylinder [2]. These approaches share a serious drawback: the need for additional equipment and instrumentation that is costly, occupies space in the OR, and disrupts the established surgical workflow.
U.S. Pat. No. 9,438,897 discloses a method that solves some of the aforementioned issues by accomplishing endoscopic camera calibration without requiring any additional instrumentation. The rotation of the lens scope is estimated at each frame time instant using image processing, and the result is used as input to a model that, given the camera calibration at a particular angular position of the lens scope with respect to the camera-head (the reference position), outputs the calibration at the current angular position. However, this method presents the following drawbacks: (i) calibration at the reference position requires the acquisition of one or more frames of a known checkerboard pattern (the calibration grid) in the OR, which, besides requiring user intervention, is typically a time-consuming process that must be performed with a sterile grid, and thus is undesirable and should be avoided; (ii) the disclosed method does not allow changes in the optical zoom during operation, as it is not capable of updating the calibration parameters to different zoom levels; and (iii) it requires the lens scope to only rotate, and never translate, with respect to the camera-head, with the point where the mechanical axis intersects the image having to be explicitly determined.
The presently disclosed embodiments relate to a method that avoids the need to calibrate the endoscopic camera at a particular angular position in the OR after assembling the rigid scope in the camera head. This patent discloses models, methods, and apparatuses for characterizing a rigid endoscope in a manner that enables determining the calibration of any endoscopic camera system comprising the endoscope, independently of the camera-head being used, the amount of zoom introduced by that camera-head, and the relative rotation or translation between the scope and the camera-head at a particular frame time instant. This allows the surgeon to change the endoscope and/or the camera-head during the surgical procedure and to adjust the zoom as desired for better visualization of certain image contents, without causing any disruption to the workflow of the procedure.
The present disclosure shows how to calibrate the rigid endoscope alone to obtain a set of parameters—the lens descriptor—that fully characterizes the optics. The lens calibration is performed in advance (e.g., at the moment of manufacture) and the descriptor is then loaded into the Camera Control Unit (CCU) to be used as input to real-time software that automatically provides the calibration of the complete endoscopic camera arrangement, comprising both camera-head and lens, at every frame instant, irrespective of the relative rotation between the two components. This is accomplished in a manner that is seamless to the user.
Since this descriptor characterizes a lens or a batch of lenses, it can also be used for purposes of identification and quality control. Building on this functionality, a method is also disclosed for detecting inconsistencies between the lens descriptor loaded in the CCU and the actual rigid endoscope assembled in the camera-head. This method is useful to warn the user if the lens being used is not the correct one and/or if it is damaged or has not been properly assembled in the camera-head.
The present disclosure can be used in particular, but not by way of limitation, in conjunction with the methods disclosed in U.S. Pat. No. 9,438,897 to correct image radial distortion and enhance visual perception, or with the methods disclosed in US 20180071032 A1 to provide guidance and navigation during arthroscopy, for the purpose of accomplishing camera calibration at every frame time instant, which is a requirement for those methods to work.
Systems, methods, and apparatuses are provided for determining the calibration of an endoscopic camera consisting of a camera-head equipped with exchangeable, rotatable optics (the rigid endoscope), such that 2D image points can be related to 3D projection rays in applications of computer-aided surgery, and where the rigid endoscope is characterized in advance (e.g., in the factory at manufacture time) to accomplish camera calibration without requiring any user intervention in the Operating Room (OR).
A method is disclosed for characterizing a rigid endoscope through a set of parameters Φ̇ that are then used as input to real-time software that processes the images acquired by an arbitrary camera-head equipped with the rigid endoscope, providing the calibration of the complete endoscopic camera arrangement at every frame time instant, irrespective of the relative rotation between the lens scope and the camera-head or the zoom settings.
An image-based software method detects whether the lens descriptor Φ̇ is incompatible with the rigid endoscope in use, which is useful to prevent errors and warn the user about faulty situations such as usage of the incorrect lens, defects in the lens or camera-head, or improper assembly of the lens in the camera-head.
For a more complete understanding of the present disclosure, reference is made to the following detailed description of exemplary embodiments considered in conjunction with the accompanying drawings.
It should be understood that, although an illustrative implementation of one or more embodiments is provided below, the various specific embodiments may be implemented using any number of techniques known by persons of ordinary skill in the art. The disclosure should in no way be limited to the illustrative embodiments, drawings, and/or techniques illustrated below, including the exemplary designs and implementations illustrated and described herein. Furthermore, the disclosure may be modified within the scope of the appended claims along with their full scope of equivalents.
In this patent, 2D and 3D vectors are written in bold lower and upper case letters, respectively. Functions are represented by lower case italic letters, and angles by lower case Greek letters. Points and other geometric entities in the plane are represented in homogeneous coordinates, as is commonly done in projective geometry, with 2D linear transformations in the plane being represented by 3×3 matrices and equality being up to scale. In addition, throughout the text different sections are referenced by the numbers of their paragraphs using the symbol §.
1. Camera Model for Endoscopic Systems with Exchangeable, Rotatable Optics (the Rigid Endoscope)
Camera calibration is the process of determining the camera model that projects 3D points X in the camera reference frame into 2D image points x in pixel coordinates. Alternatively, the camera model can be interpreted as the function that back-projects image points x into light rays going through the 3D point X in the scene. This process is illustrated in the accompanying drawings.
Conventional, commonly used cameras are described by the so-called pin-hole model, which can be augmented with a radial distortion model that accounts for non-linear effects introduced by small optics and/or fish-eye lenses. In this case, points X in the scene are projected onto points x in the image according to the formula x=K Γξ(PX), where x and X are represented in homogeneous coordinates with the equality being up to scale, P=[I 03×1] is a 3×4 projection matrix with I denoting the 3×3 identity matrix, K is the so-called matrix of intrinsic parameters with dimension 3×3, and Γ denotes a distortion function with parameters ξ. Henceforth, and without loss of generality, it will be assumed that the camera is skewless with unitary aspect ratio, yielding a model that approximates well the majority of modern cameras, where the deviation of the skew from zero and of the aspect ratio from one is negligible. With this assumption, K depends solely on the focal length f and the image coordinates of the principal point O=[Ox, Oy, 1]T, such that

$$K=\begin{bmatrix} f & 0 & O_x \\ 0 & f & O_y \\ 0 & 0 & 1 \end{bmatrix}.$$
The distortion function Γ represents a mapping in 2D and can be any of the many distortion functions or models available in the literature that include, but are not limited to, the polynomial model (also known as Brown's model), the division model, the rational model, the fish-eye lens model, etc., in either its first order or higher order (multi-parameter) versions with ξ respectively being a scalar or a vector.
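As an illustration only, the following minimal Python sketch traces the projection pipeline x=K Γξ(PX), using the first-order polynomial (Brown) model as one concrete choice of Γ; the function name, NumPy usage, and the numeric values in the usage line are assumptions for the example, not part of the disclosure.

```python
import numpy as np

def project(X, f, Ox, Oy, xi):
    """Project a 3D point X (camera coordinates) to pixel coordinates.

    Sketch of x = K * Gamma_xi(P X) for a skewless camera with unit aspect
    ratio, with Gamma chosen as first-order polynomial (Brown) distortion.
    """
    # P = [I | 0]: perspective division onto the normalized image plane
    xn, yn = X[0] / X[2], X[1] / X[2]
    # Gamma_xi: first-order radial distortion (scalar parameter xi)
    r2 = xn * xn + yn * yn
    xd, yd = xn * (1 + xi * r2), yn * (1 + xi * r2)
    # K: focal length f and principal point O = [Ox, Oy, 1]^T
    return np.array([f * xd + Ox, f * yd + Oy])

x = project(np.array([0.1, -0.05, 0.4]), f=800.0, Ox=640.0, Oy=360.0, xi=-0.3)
```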
An endoscopic camera, which results from combining a rigid endoscope with a camera, has exchangeable optics for the purpose of easy sterilization, with the endoscope having at the proximal end an ocular lens (or eye-piece) that is assembled to the camera using a connector that typically allows the surgeon to rotate the scope with respect to the camera-head. As illustrated in the accompanying drawings, the endoscope also contains a Field-Stop-Mask (FSM) that renders in the image a circular boundary with a notch.
The mechanical axis is roughly coincident with the symmetry axis of the eye-piece, which does not necessarily have to be aligned with the symmetry axis of the cylindrical scope and/or pass through the center of the circular region defined by the FSM. These alignments are intended but never perfectly achieved because of mechanical tolerances in building and manufacturing the endoscope. Thus, the rotation center Q, the center of the circular boundary C, and the principal point O are in general distinct points in the image, which complicates camera modeling but, as disclosed ahead, can be used as a signature to identify a particular endoscope or batch of similar endoscopes.
Consider that the endoscopic camera is calibrated for a certain position of the scope, such that K(f, O) is the matrix of intrinsic parameters and ξ is the distortion parameter quantifying radial distortion according to a chosen model Γ.
Similarly to causing a rotation in the principal point O, such that it becomes O′=R(δ, Q)O, the rotation of the scope with respect to the camera head causes the circle Ω with center C and notch P to become the circle Ω′ with center C′=R(δ, Q)C and notch P′=R(δ, Q)P.
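The 2D rotation R(δ, Q) about a point Q can be written as a 3×3 homogeneous matrix, R(δ, Q)=T(Q)R(δ)T(−Q). A minimal NumPy sketch is given below; the function name and the example coordinates are illustrative assumptions.

```python
import numpy as np

def rot_about(delta, Q):
    """3x3 homogeneous rotation by angle delta around Q = [Qx, Qy, 1]^T."""
    c, s = np.cos(delta), np.sin(delta)
    qx, qy = Q[0] / Q[2], Q[1] / Q[2]
    # R(delta, Q) = T(Q) R(delta) T(-Q); translation part is q - R(delta) q
    return np.array([[c, -s, qx - c * qx + s * qy],
                     [s,  c, qy - s * qx - c * qy],
                     [0,  0, 1.0]])

O = np.array([330.0, 245.0, 1.0])                 # principal point at reference
R = rot_about(np.deg2rad(30.0), np.array([320.0, 240.0, 1.0]))
O_new = R @ O                                     # C and P transform identically
```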
2. Calibration of an Endoscopic Camera with Exchangeable, Rotatable Optics
Summarizing, in order to obtain the correct calibration parameters of an endoscopic camera at all times, the focal length f, distortion ξ, and principal point O must be known for a particular rotation angle between camera-head and lens scope (the reference angular position) and the location of the principal point must be updated during operation according to O′=R(δ, Q)O, which requires knowing the rotation center Q and the angular displacement δ between current and reference angular positions at every frame time instant.
The calibration of the endoscopic camera at the reference angular position, which can be easily recognized by the position P of the notch, can be performed "off-line" before starting the clinical procedure by following the offline calibration steps described below.
The update of the camera model is carried out "on-line" during the clinical procedure at every frame time instant by following the on-line update steps described below.
2.1 Camera Calibration at a Particular Angular Position Including Intrinsics K(f, O) and Distortion ξ (Module A)
The literature on calibrating a pinhole camera with radial distortion is vast, and the methods can be divided into two large groups: explicit methods and auto-calibration methods. The former use images of a known calibration object, which can be a general 3D object, a set of spheres, a planar checkerboard pattern, etc., while the latter rely on correspondences across successive frames of unknown, natural scenes. The two approaches can require more or less user supervision, ranging from manual to fully automatic depending on the particular method and underlying algorithms.
The disclosed embodiments will consider, without loss of generality, that the camera calibration at a particular angular position of the lens scope with respect to the camera-head is conducted using an explicit method that makes use of a known calibration object, such as a planar checkerboard pattern or any other planar pattern that enables establishing point correspondences between the image and the calibration object. This approach is advantageous with respect to most competing methods because of its good performance in terms of robustness and accuracy, the ease of fabrication of the calibration object (planar grid), and the possibility of accomplishing full calibration from a single image of the rig acquired from an arbitrary position. However, other explicit or auto-calibration methods can be employed to estimate the focal length f, distortion ξ, and principal point O of the endoscopic camera for a particular relative rotation between the camera-head and the endoscope (the reference angular position).
The explicit calibration using a planar checkerboard pattern typically comprises the following steps: acquisition of a frame of the calibration object from an arbitrary position or 3D pose (rotation R and translation t of the object with respect to camera); employment of image processing algorithms for establishing point correspondences x, X between image and calibration object; execution of a suitable calibration algorithm that uses the point correspondences for the estimation of the focal length f, the principal point O and distortion parameters ξ, as well as the pose R, t of the object with respect to the camera.
The approach can be applied to multiple calibration frames Ik, k=0, . . . , K−1, instead of a single one, for the purpose of improving robustness and accuracy. In this case the calibration is carried out independently for each frame, and a final optimization step that minimizes the re-projection error is used to enforce the same intrinsic parameters K(f, O) and distortion ξ across the multiple frames, while considering a different pose Rk, tk for each frame.
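As one possible realization of this single- or multi-frame procedure, OpenCV's planar calibration routine estimates shared intrinsics and distortion together with a per-frame pose by minimizing the re-projection error. The sketch below is an illustration under stated assumptions: `frames` is an assumed list of BGR images of a checkerboard with 9×6 inner corners and 5 mm squares, and OpenCV's distortion model (Brown's polynomial model) is one of the admissible choices of Γ.

```python
import cv2
import numpy as np

pattern = (9, 6)                      # inner corners of the checkerboard (assumed)
square = 5.0                          # square size in mm (assumed)
grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for frame in frames:                  # one or more calibration frames I_k
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(grid)          # 3D corners in calibration-object coordinates
        img_pts.append(corners)       # detected 2D corners in pixel coordinates

# Shared K(f, O) and distortion across frames; one pose (R_k, t_k) per frame
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```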
2.2 Detection of the Circular Boundary and Notch (Module B)
The circular boundary and the notch of the FSM can be detected by the iterative procedure described next.
The procedure starts from an initial estimate of the boundary, which is used to render a ring image: an image obtained by interpolating and concatenating the image signal extracted from the acquired frame along concentric circles centered in C, with the notch P mapped to the center of the ring image. Salient edge points are then detected in this ring image.
Then, the detected edge points are mapped back to the Cartesian image space so that the circle boundary can be estimated. This is performed using a circle-fitting approach inside a robust framework. Given a set of noisy data points, which can be contaminated by outliers, the objective of circle fitting is to find a circle that minimizes or maximizes a particular error or cost function quantifying how well a given circle fits the data points. The most widely used techniques minimize either the geometric or the algebraic (approximate) distances from the circle to the data points. In order to handle outlier data points, a robust framework such as RANSAC is usually employed. The steps of ring-image rendering, detection of edge points, and robust circle estimation are performed iteratively until the detected edge points in the ring image are, in a robust sense, collinear. When this occurs, the algorithm proceeds to the detection of the notch P by performing correlation with a known template of the notch. The output of this algorithm is the notch location P and the circle Ω with center C and radius r.
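A minimal sketch of the robust circle-fitting step is shown below, combining an algebraic (Kåsa) least-squares fit with a RANSAC loop; the inlier threshold, iteration count, and function names are illustrative assumptions rather than parameters fixed by the disclosure.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) fit of x^2 + y^2 + a*x + b*y + c = 0 to (M, 2) points."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    rhs = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    C = np.array([-a / 2.0, -b / 2.0])          # circle center
    return C, np.sqrt(C @ C - c)                # center and radius

def ransac_circle(pts, iters=200, tol=2.0):
    """Robust circle estimation from edge points contaminated by outliers."""
    rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(iters):
        C, r = fit_circle(pts[rng.choice(len(pts), 3, replace=False)])
        inliers = np.abs(np.linalg.norm(pts - C, axis=1) - r) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_circle(pts[best_inliers])        # refit on the inlier edge points
```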
The disclosed method for boundary and notch detection can have other applications such as the detection of engravings in the FSM for reading relevant information including, but not limited to, particular characteristics of the lens.
In addition, although the implementation of this method assumes that the boundary can be accurately represented by a circle, generic conic fitting can be used in the method without major modifications.
2.3 Image-Based Measurement of the Relative Rotation Between Endoscope and Camera-Head (Module C)
As previously mentioned, finding the calibration for the current frame i can be accomplished by rotating the principal point O (or O0) at the reference angular position by an angle δi around the rotation center Q. In this case, both the center Q and the angular displacement δi between frame i and frame 0, corresponding to the reference position, must be estimated.
If the rotation center Q is known and the center and notch at the reference angular position are respectively C0 and P0, then the angular displacement δi can be inferred from the notch Pi, the boundary center Ci, or both simultaneously (δi=∠P0QPi=∠C0QCi), with their positions being determined by applying the boundary and notch detection steps of §§ [0050]-[0057].
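A sketch of this angle computation is given below, returning the signed angle ∠P0QPi; homogeneous 2D inputs and the function name are assumptions of the example.

```python
import numpy as np

def angular_displacement(Q, P0, Pi):
    """Signed angle of the rotation about Q that takes P0 to Pi."""
    u = P0[:2] / P0[2] - Q[:2] / Q[2]       # vector Q -> P0
    v = Pi[:2] / Pi[2] - Q[:2] / Q[2]       # vector Q -> Pi
    return np.arctan2(u[0] * v[1] - u[1] * v[0], u @ v)
```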
Since the distance from the rotation center Q to the notch P is significantly larger than that between Q and C, estimations using the notch P are in general more robust and accurate, and thus it is important that it can be detected in all frames. Since its detection is mostly affected by situations of occlusion, one solution is to consider multiple notches in the FSM to ensure that at least one is always visible in the frame.
The algorithm described in §§ [0050]-[0057] for detecting the notch can be extended to the case when the FSM contains multiple notches. For this, the final template-correlation step uses a template comprising the combination of notches, whose relative locations are known, with the notches having different shapes and/or sizes so that they can be distinguished.
Without loss of generality, it is assumed in the remainder of this patent that the FSM has only one notch P that is always visible and the rotation center Q is determined from two frames.
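Under the rotation model, Q is equidistant from the two positions of the notch and from the two positions of the boundary center, so it lies at the intersection of the two perpendicular bisectors. A minimal sketch of this two-frame estimate follows (Cartesian 2D inputs assumed; degenerate configurations are not handled):

```python
import numpy as np

def rotation_center(P0, P1, C0, C1):
    """Q equidistant from {P0, P1} and from {C0, C1} (perpendicular bisectors).

    |Q - P0| = |Q - P1|  <=>  (P1 - P0) . Q = (|P1|^2 - |P0|^2) / 2, and
    analogously for the boundary centers, giving a 2x2 linear system.
    """
    A = np.array([P1 - P0, C1 - C0], dtype=float)
    b = 0.5 * np.array([P1 @ P1 - P0 @ P0, C1 @ C1 - C0 @ C0])
    return np.linalg.solve(A, b)   # singular A means parallel bisectors (degenerate)
```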
Whenever more than two frames acquired at different angular positions are available, and in order to filter out possible noisy estimations of the rotation center Q and the relative rotation δi, a filtering approach can be applied. The filter can take as input the previous estimation for Q and the current boundary and notch and output the updated location of Q and an estimation for the relative rotation δi. This filtering technique can be implemented using any temporal filter such as a Kalman filter or an Extended Kalman filter.
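As one possible realization of such a temporal filter, the sketch below applies a random-walk Kalman filter with isotropic (scalar) covariance to the rotation center; the noise parameters are illustrative assumptions to be tuned per system, and a full implementation could jointly filter Q and δi.

```python
import numpy as np

class CenterFilter:
    """Random-walk Kalman filter on Q with a scalar (isotropic) covariance."""

    def __init__(self, Q0, p0=100.0, q=1e-2, r=4.0):
        self.x = np.asarray(Q0, float)   # state: rotation center estimate
        self.P, self.q, self.r = p0, q, r

    def update(self, z):
        self.P += self.q                 # predict: Q assumed quasi-static
        k = self.P / (self.P + self.r)   # Kalman gain
        self.x = self.x + k * (np.asarray(z, float) - self.x)
        self.P *= 1.0 - k
        return self.x
```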
2.4 Offline Calibration at the Reference Angular Position
The off-line calibration method combines the modules described above: for each of N≥1 angular positions of the lens scope with respect to the camera-head, K≥1 frames of the calibration object are acquired and used as input to camera calibration (Module A) and boundary and notch detection (Module B), after which the relative rotation between angular positions can be measured (Module C).
The case N=1 and K=1 is the one that requires minimum user effort, being particularly well suited for fast calibration in the OR where the surgeon just has to acquire a single image of the checkerboard pattern after assembling the endoscope in the camera head. The accuracy in the estimation of the calibration parameters tends to improve for an increasing number K of frames.
The use of information from two or more angular positions (N>1) makes it possible to estimate the rotation center Q in conjunction with f, ξ, and O0, independently of the number K of frames acquired at each position. This can be accomplished by a final optimization step that enforces, across the angular positions, the model of the principal point, boundary center, and notch undergoing a rotation by the angle δi around the center Q.
2.5 Online Update of the Calibration Parameters
During operation, every time a new frame j is acquired, an on-the-fly procedure must detect and measure the angular displacement with respect to the reference position and update the calibration and camera model accordingly.
There are two possible modes of operation for retrieving Q: in mode 1, the rotation center is known in advance from the offline calibration step that used frames acquired in N>1 angular positions, as disclosed in §§ [0065]-[0069]; in mode 2, the rotation center is not known 'a priori' but is estimated on-the-fly from successive frames for which the notch P and/or the center of the circular boundary C are determined, such that the methods disclosed in §§ [0058]-[0064] can be applied.
3. Off-Site Lens Calibration to Avoid Explicit Calibration Steps in the OR
A method has been disclosed for determining the calibration of an endoscopic camera at all times that comprises two steps or stages: an offline step that aims to estimate the focal length f, the distortion ξ, and the principal point O for an arbitrary reference angular position, and an online step that determines at every frame time instant the angular displacement with respect to the reference and updates the position of the principal point to provide the calibration for the current frame. Since the lens of the endoscopic camera is exchangeable, both the offline and online steps are carried out on-site in the OR after the surgeon assembles the endoscope in the camera-head. While the online step is meant to run on-the-fly, in parallel with image acquisition and in a manner that is seamless to the user, the offline step requires explicit user intervention to acquire one or more calibration frames, which is undesirable.
In order to minimize disruption to the existing surgical workflow, U.S. Pat. No. 9,438,897 B2 describes a method that is the particular situation of N=1 and K=1 of the offline step disclosed in §§ [0065]-[0069]. Although this reduces the offline effort to the acquisition of a single calibration frame, it still requires explicit user intervention in the OR with a sterile calibration grid, which is time-consuming and undesirable.
This patent overcomes these problems by disclosing a method for calibrating the endoscopic lens alone, which can be performed off-site (e.g., at manufacture) with the help of a camera or other means, and that provides a set of parameters that fully characterize the rigid endoscope, leading to a lens descriptor Φ̇ that can be used for different purposes. One of these purposes is to accomplish calibration of any endoscopic camera system that is equipped with the lens, in which case the descriptor Φ̇ is loaded into the Camera Control Unit (CCU) to be used as input to an online method that runs on-the-fly and outputs the complete calibration of the camera-head+lens arrangement at every frame time instant.
Since the lens calibration can be carried out off-site, namely at the factory at the time of manufacture, and the online calibration runs on-the-fly in a manner that is seamless to the user, there is no action to be carried out in the OR by the surgeon, which means that endoscopic camera calibration is accomplished at all times with no change or disruption of the established routines. Moreover, and differently from what is possible with the method disclosed in U.S. Pat. No. 9,438,897 B2, calibration is accomplished even in situations of variable zoom and/or translation of the lens with respect to the camera-head.
The method of off-site, offline calibration of the rigid endoscope to generate the descriptor Φ̇ is disclosed in §§ [0079]-[0083], while the online method to accomplish calibration of the endoscopic camera comprising camera-head and optics is described in §§ [0084]-[0085].
3.1 Off-Site, Offline Lens Calibration
The rigid endoscope is assembled in an arbitrary camera-head, henceforth referred to as the Characterization Camera, and the offline calibration method of §§ [0065]-[0069] is applied to estimate the focal length f, distortion ξ, principal point O, and rotation center Q, as well as the boundary with center C0 and notch P0, at a reference angular position.
The calibration result refers to the compound arrangement of camera-head with rigid endoscope, with the measurements depending on the particular camera-head in use, as well as on the manner in which the lens is mounted in the camera-head. Since the objective is to characterize the endoscope alone, the influence of the camera-head must be removed such that the final descriptor only depends on the lens and is invariant to the camera and/or equipment employed to generate it.
The method herein disclosed accomplishes this objective by building on two key observations: (i) the camera-head usually follows an orthographic (or nearly orthographic) projection model, which means that it only contributes to the imaging process with magnification and conversion of metric units to pixels; and (ii) the images of the Field-Stop-Mask (FSM) always relate by a similarity transformation, which means the FSM can be used as a reference to encode information about the lens that is invariant to rigid motion and scaling.
Let the calibration result after applying the offline method of §§ [0065]-[0069] be the focal length f, the distortion ξ, and the principal point O, with the detected boundary at the reference position having center C0=[Cx, Cy, 1]T and notch P0, and let r be the distance between C0 and P0. The lens descriptor is defined as Φ̇={ḟ, ξ, Ȯ}, where ḟ=f/r is the focal length normalized by the boundary radius and Ȯ=AO is the principal point expressed in the lens reference frame attached to the boundary, with A being the similarity transformation

$$A=\frac{1}{r}\begin{bmatrix}\cos\beta & \sin\beta & -(C_x\cos\beta + C_y\sin\beta)\\ -\sin\beta & \cos\beta & C_x\sin\beta - C_y\cos\beta\\ 0 & 0 & r\end{bmatrix},$$

where β is the angle between the x axes of the image and boundary reference frames. If the rotation center Q is known, then it can also be represented in lens coordinates by making Q̇=AQ and stacked onto the descriptor, which becomes Φ̇={ḟ, ξ, Ȯ, Q̇}. These particular choices of image and lens reference frames are arbitrary, and other reference frames, related by rigid transformations with the chosen ones, could have been considered without compromising the disclosed methods.
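A sketch of the descriptor construction under these conventions is given below; Cartesian 2D inputs, the dictionary keys, and the reconstructed sign convention of A are assumptions of the example.

```python
import numpy as np

def lens_frame_transform(C0, P0):
    """Similarity A mapping image coordinates to the lens reference frame."""
    d = P0 - C0
    r = float(np.linalg.norm(d))        # boundary radius = distance C0 to P0
    beta = np.arctan2(d[1], d[0])       # angle of the lens x axis in the image
    c, s = np.cos(beta), np.sin(beta)
    A = np.array([[ c / r, s / r, -(c * C0[0] + s * C0[1]) / r],
                  [-s / r, c / r,  (s * C0[0] - c * C0[1]) / r],
                  [0.0, 0.0, 1.0]])
    return A, r

def lens_descriptor(f, xi, O, C0, P0, Q=None):
    """Encode calibration at the reference position into the lens descriptor."""
    A, r = lens_frame_transform(C0, P0)
    h = lambda p: np.array([p[0], p[1], 1.0])    # to homogeneous coordinates
    desc = {"f_dot": f / r, "xi": xi, "O_dot": A @ h(O)}
    if Q is not None:                            # optional normalized center
        desc["Q_dot"] = A @ h(Q)
    return desc
```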
3.2 Online Camera Calibration Using the Lens Descriptor
When the lens with descriptor Φ̇ is mounted on an arbitrary camera head, henceforth referred to as the application camera, it is possible to automatically obtain the calibration of the full camera+lens arrangement by proceeding as follows: for each frame j, apply the boundary and notch detection method of §§ [0050]-[0057] to obtain the boundary center Cj=[Cx, Cy, 1]T, the notch Pj, and the radius rj (the distance between Cj and Pj), and build the similarity transformation B that maps the lens reference frame into the image of frame j,

$$B=\begin{bmatrix} r_j\cos\alpha & -r_j\sin\alpha & C_x \\ r_j\sin\alpha & r_j\cos\alpha & C_y \\ 0 & 0 & 1\end{bmatrix},$$

where α is the angle between the x axes of the two reference frames; finally, the calibration of the endoscopic camera for the current angular position can be determined by decoding the different descriptor entries, in which case the focal length becomes fj=rjḟ, the principal point is now Oj=BȮ, and the distortion ξ is the same because it is inherent to the optics. If the descriptor also comprises the rotation center, then its position in frame j can be determined in a similar manner by making Qj=BQ̇.
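The corresponding decoding step, applied at every frame j of the application camera, is sketched below under the same assumptions as the encoding sketch above.

```python
import numpy as np

def decode_descriptor(desc, Cj, Pj):
    """Calibration of the camera+lens arrangement for the current frame j."""
    d = Pj - Cj
    rj = float(np.linalg.norm(d))            # boundary radius in frame j
    alpha = np.arctan2(d[1], d[0])           # angle of the lens x axis in frame j
    c, s = np.cos(alpha), np.sin(alpha)
    B = np.array([[rj * c, -rj * s, Cj[0]],  # maps lens coordinates to frame j
                  [rj * s,  rj * c, Cj[1]],
                  [0.0, 0.0, 1.0]])
    out = {"f": rj * desc["f_dot"],          # f_j = r_j * f_dot
           "xi": desc["xi"],                 # distortion is inherent to the optics
           "O": B @ desc["O_dot"]}           # O_j = B * O_dot
    if "Q_dot" in desc:
        out["Q"] = B @ desc["Q_dot"]         # Q_j = B * Q_dot
    return out
```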
3.3 Relevant Considerations
Off-site offline lens calibration using a single image: One important consideration is that the calibration approach disclosed in this patent does not require knowledge of the rotation center Q for determining the calibration of the endoscopic camera at every frame time instant. Thus, if the time and effort of the off-site calibration procedure is a concern, the lens descriptor can be generated by acquiring a single calibration image, in which case the offline method of §§ [0065]-[0069] is run with N=1 and K=1 and the descriptor does not comprise the normalized rotation center Q̇.
Accommodation of relative rotation (calibration by detection or by tracking): Since a rotation δj of the endoscope with respect to the camera-head causes a similar rotation of the lens reference frame in the image, the update of the calibration at every frame j can be performed implicitly, without having to compute an angular displacement δj and explicitly rotate the principal point around the center Q. In this case, the disclosed approach based on the lens descriptor can be used alone, with Φ̇ being decoded at every frame time instant by the online method of §§ [0084]-[0085] (calibration by detection). An alternative is to employ the methods of §§ [0058]-[0064] to explicitly measure and track the angular displacement δj across frames (calibration by tracking).
Adaptation to optical zoom and/or translation of the lens scope along the plane orthogonal to the mechanical axis: In the disclosure, the focal length fj is determined at each frame time instant by scaling the normalized focal length {dot over (f)} by the magnification introduced by the application camera, which is inferred from the radius rj of the circular boundary. If the magnification is constant, then fj is also constant across successive frames j. However, if the application camera has optical zoom that varies, then f will vary accordingly. Thus, and unlike the method described in U.S. Pat. No. 9,438,897 B2, the method herein disclosed can cope with changes in zoom, as well as with translations of the lens scope along the plane orthogonal to the mechanical axis. The adaptation to the former stems from the fact that changes in zoom lead to changes in the radius of the boundary rj that is used to decode the relevant entries in the lens descriptor, namely fj, Oj and Qj, providing the desired adaptation. The adjustment to the latter arises from the fact that the circular boundary, to which the lens coordinate system is attached, translates with the lens, and the image coordinates of the decoded Oj and Qj translate accordingly.
Alternative means to generate the lens descriptor: The descriptor Φ̇={ḟ, ξ, Ȯ, Q̇} characterizes the lens through parameters or features that have a clear physical meaning. For example, the mechanical axis of the endoscope, which is in general defined by the symmetry axis of the eye-piece at the proximal end of the lens, should go through the center of the circle defined by the FSM. If this condition holds, then the center C and the rotation center Q are coincident and Q̇=[0,0,1]T. In general, the condition is not verified because of mechanical tolerances, as illustrated in the accompanying drawings. Since the descriptor entries have physical meaning, the descriptor can alternatively be generated without a camera, e.g., by direct measurement of the endoscopic lens using calipers, micrometers, protractors, gauges, or robotic measurement apparatuses, or by using a CAD model of the endoscopic lens.
Transmission of the lens descriptor to the Application Camera: In the disclosed embodiment the lens descriptor is generated off-site with the help of a Characterization Camera, and must then be communicated to the CCU or computer platform connected to the Application Camera that will execute the online method of §§ [0084]-[0085]. This communication can be accomplished by different means including, but not limited to, loading from a database, QR codes, USB flash drives, manual insertion, engravings in the FSM, RFID tags, or an internet connection.
Descriptor for a batch of lenses: The descriptor Φ̇ can either characterize a specific lens or be representative of a batch of lenses with similar characteristics. In this last case, the descriptor can be generated by either using as input to the off-line calibration method of §§ [0079]-[0083] images acquired with several lenses of the batch, or by combining the descriptors obtained for individual lenses of the batch.
4. Detection of Anomalies in the Endoscopic Camera
While the calibration approach presented in §§ [0041]-[0072] always provides a correct calibration of the endoscopic camera, as it is assembled and explicitly calibrated in the OR, the calibration method disclosed in this patent (§§ [0073]-[0092]) relies on prior assumptions such as the correct retrieval of stored calibration information and the proper assembly of the lens in the camera head. In case these assumptions are not satisfied, the camera+lens arrangement will not be accurately calibrated and malfunctions can occur in systems that use the calibration information for performing distortion correction, virtual views rendering, enhanced visualization, surgical navigation, etc.
This patent discloses a method that makes use of the lens descriptor Φ̇ for detecting anomalies in the endoscopic camera calibration caused by a mismatch between the loaded calibration information and the lens in use, an incorrect assembly of the lens in the camera head, or a defect of any of these components.
When the loaded descriptor does not correspond to the lens in use, or when the lens is improperly assembled, the motion of the boundary and notch observed during operation no longer agrees with the motion predicted from the descriptor; in particular, the rotation center Q̂ estimated on-the-fly deviates from the center Q decoded from the descriptor. This change in the lens motion model can be used to detect the existence of an anomaly, as well as to quantify how serious the anomaly is, and to warn the user to verify the assemblage and/or replace the lens.
This method only provides information on the existence of an anomaly; it does not specify which type of anomaly is occurring, which would allow the system to provide the user with specific instructions for fixing it. To accomplish this, the anomaly detection approach can be complemented with a second feature based on the shape of the boundary detected during operation, as described next.
In particular, if the FSM is projected in the image plane onto a circle when the lens is properly assembled in the camera-head, this circle tends to evolve into an ellipse when the optics is not correctly assembled. Thus, in this case, the eccentricity of the boundary detected during operation can be measured to verify if the assemblage is correct, and it is not required to know the specific shape of the boundary detected during calibration of the lens.
This approach is valid if the FSM has a shape that can be represented parametrically, such as an ellipse or any other geometric shape. In addition, template matching or machine learning techniques can be used to compare the boundary detected during operation with the known shape.
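For instance, the eccentricity test can be implemented with a standard ellipse fit, as in the sketch below; the threshold value and function name are illustrative assumptions to be tuned per system.

```python
import cv2
import numpy as np

def assembly_ok(contour, ecc_threshold=0.15):
    """Flag a deficient assemblage when the boundary departs from a circle.

    contour: (M, 1, 2) array of detected boundary points (M >= 5 required
    by cv2.fitEllipse).
    """
    (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
    a, b = max(w, h) / 2.0, min(w, h) / 2.0      # semi-major and semi-minor axes
    eccentricity = np.sqrt(1.0 - (b / a) ** 2)   # 0 for a circle, -> 1 for a line
    return eccentricity <= ecc_threshold
```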
Summarizing, there exist two important features that can be used for detecting and identifying anomalies. The first one is the difference between the rotation center estimates obtained at calibration time and during operation, Qj and Q̂j, respectively. The second consists of the difference between the boundary contours detected at calibration time and during operation. While the first allows the detection of an anomaly, whether it is a mismatch between the loaded calibration and the camera+lens arrangement in use, an incorrect assembly of the lens in the camera head, or a defect of any of these components, the second provides information on the type of anomaly, since it only occurs when there is a deficient assemblage.
Thus, the disclosed method for detection and identification of anomalies that makes use of these two distinct features can be implemented using a cascaded classifier that starts by using the first feature for the anomaly detection stage and then discriminates between a calibration mismatch and an incorrect camera+lens assembly by making use of the second feature. As an alternative to the cascaded classifier, other methods such as different types of classifiers, machine learning, statistical approaches, or data mining can be employed. In addition, depending on the desired application, these features can be used individually, in which case the first feature would allow the detection of an anomaly, without identification of its type, and the second feature would solely serve to detect incorrect assemblages.
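A minimal sketch of such a cascaded decision rule is given below; the pixel tolerance is an illustrative assumption, and `boundary_ok` stands for the result of a boundary-shape test such as the eccentricity check sketched earlier.

```python
import numpy as np

def classify_anomaly(Q_hat, Q_dec, boundary_ok, tol_px=10.0):
    """Two-stage cascade: detect an anomaly, then discriminate its type.

    Q_hat: rotation center estimated on-the-fly during operation.
    Q_dec: rotation center decoded from the loaded lens descriptor.
    boundary_ok: True when the detected boundary matches the expected shape.
    """
    # Stage 1: anomaly detection from the rotation-center discrepancy
    if np.linalg.norm(np.asarray(Q_hat) - np.asarray(Q_dec)) <= tol_px:
        return "no anomaly"
    # Stage 2: boundary shape discriminates the type of anomaly
    return "calibration mismatch" if boundary_ok else "incorrect assembly"
```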
In its most basic configuration, computing system environment 1200 typically includes at least one processing unit 1202 and at least one memory 1204, which may be linked via a bus 1206. Depending on the exact configuration and type of computing system environment, memory 1204 may be volatile (such as RAM 1210), non-volatile (such as ROM 1208, flash memory, etc.) or some combination of the two. Computing system environment 1200 may have additional features and/or functionality. For example, computing system environment 1200 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks, tape drives and/or flash drives. Such additional memory devices may be made accessible to the computing system environment 1200 by means of, for example, a hard disk drive interface 1212, a magnetic disk drive interface 1214, and/or an optical disk drive interface 1216. As will be understood, these devices, which would be linked to the system bus 1206, respectively, allow for reading from and writing to a hard disk 1218, reading from or writing to a removable magnetic disk 1220, and/or for reading from or writing to a removable optical disk 1222, such as a CD/DVD ROM or other optical media. The drive interfaces and their associated computer-readable media allow for the nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing system environment 1200. Those skilled in the art will further appreciate that other types of computer readable media that can store data may be used for this same purpose. Examples of such media devices include, but are not limited to, magnetic cassettes, flash memory cards, digital videodisks, Bernoulli cartridges, random access memories, nano-drives, memory sticks, other read/write and/or read-only memories and/or any other method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Any such computer storage media may be part of computing system environment 1200.
A number of program modules may be stored in one or more of the memory/media devices. For example, a basic input/output system (BIOS) 1224, containing the basic routines that help to transfer information between elements within the computing system environment 1200, such as during start-up, may be stored in ROM 1208. Similarly, RAM 1210, hard drive 1218, and/or peripheral memory devices may be used to store computer executable instructions comprising an operating system 1226, one or more applications programs 1228 (such as an application that performs the methods and processes of this disclosure), other program modules 1230, and/or program data 1232. Still further, computer-executable instructions may be downloaded to the computing environment 1200 as needed, for example, via a network connection.
An end-user, e.g., a customer, retail associate, and the like, may enter commands and information into the computing system environment 1200 through input devices such as a keyboard 1234 and/or a pointing device 1236. While not illustrated, other input devices may include a microphone, a joystick, a game pad, a scanner, etc. These and other input devices would typically be connected to the processing unit 1202 by means of a peripheral interface 1238 which, in turn, would be coupled to bus 1206. Input devices may be directly or indirectly connected to processor 1202 via interfaces such as, for example, a parallel port, game port, firewire, or a universal serial bus (USB). To view information from the computing system environment 1200, a monitor 1240 or other type of display device may also be connected to bus 1206 via an interface, such as via video adapter 1242. In addition to the monitor 1240, the computing system environment 1200 may also include other peripheral output devices, not shown, such as speakers and printers.
The computing system environment 1200 may also utilize logical connections to one or more computing system environments. Communications between the computing system environment 1200 and the remote computing system environment may be exchanged via a further processing device, such as a network router 1252, that is responsible for network routing. Communications with the network router 1252 may be performed via a network interface component 1254. Thus, within such a networked environment, e.g., the Internet, World Wide Web, LAN, or other like type of wired or wireless network, it will be appreciated that program modules depicted relative to the computing system environment 1200, or portions thereof, may be stored in the memory storage device(s) of the computing system environment 1200.
The computing system environment 1200 may also include localization hardware 1256 for determining a location of the computing system environment 1200. In embodiments, the localization hardware 1256 may include, for example only, a GPS antenna, an RFID chip or reader, a Wi-Fi antenna, or other computing hardware that may be used to capture or transmit signals that may be used to determine the location of the computing system environment 1200.
In a first aspect of this disclosure, a method for calibrating an endoscopic camera is provided. The endoscopic camera results from combining a rigid endoscope with a camera, wherein the rigid endoscope or lens scope has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and can rotate with respect to the camera-head by an angle δ around a mechanical axis that intersects the image plane in point Q, in which case C, P and the principal point O undergo a 2D rotation of the same angle δ around Q, and wherein calibration consists of determining the focal length f, distortion ξ, rotation center Q and principal point O0 for a chosen angular position of the lens scope with respect to the camera-head, henceforth referred to as the reference angular position i=0. The method comprises: acquiring one or more calibration images of a calibration object with the endoscopic camera at angular position i without rotating the lens with respect to the camera head; determining a first estimate of the calibration parameters f, ξ and Oi of the endoscopic camera, as well as the 3D pose (rotation and translation) of the calibration object with respect to the camera for each calibration image; detecting a boundary with center Ci and notch Pi on the calibration images using an image processing method; rotating the lens scope with respect to the camera-head to a new angular position and repeating the previous steps, with i being incremented to take successive values i=0, 1, . . . , N−1, where N≥1 is the number of different angular positions used for the calibration; determining a first estimate for the rotation center Q and for the angular displacements δi between the reference position i=0 and the successive calibration positions i=1, . . . , N−1; and refining the calibration parameters f, ξ, Q, and O0 through a final optimization step that enforces the model of the principal point, boundary center, and notch undergoing a rotation by the angle δi around the center Q for the successive calibration positions i=0, . . . , N−1.
In an embodiment of the first aspect, the calibration object is either a 2D plane with a checkerboard pattern or any other known pattern, a known 3D object, or is non-existent, with the calibration input being a set of point correspondences across images, in which case the first estimate of the calibration parameters are respectively obtained by a camera calibration algorithm from planes, a camera calibration algorithm from objects, or a suitable auto-calibration technique.
In an embodiment of the first aspect, the final optimization step is performed using an iterative non-linear minimization of a reprojection error, photogeometric error, or any other suitable optimization approach.
In an embodiment of the first aspect, the first estimate of the calibration parameters is determined from any calibration method in the literature.
In an embodiment of the first aspect, the first estimate of the calibration parameters includes any distortion model known in the literature, such as Brown's polynomial model, the rational model, the fish-eye model, or the division model, with one or more parameters, in which case ξ is a scalar or a vector, respectively.
In an embodiment of the first aspect, the rotation center Q is known in advance, in which case the calibration can be accomplished from images acquired in one or more angular positions (N≥1); is determined from the image positions of the boundary centers Ci and notches Pi, in which case the calibration is accomplished from images acquired in two or more angular positions (N≥2); or is determined solely from the image positions of the boundary centers Ci or the notches Pi, in which case the calibration is accomplished from images acquired in three or more angular positions (N≥3).
In a second aspect of the present disclosure, a method for updating, at every frame time instant, the calibration parameters of an endoscopic camera is provided. The endoscopic camera results from combining a rigid endoscope with a camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope or lens scope has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and can rotate with respect to the camera-head by an angle δ around a mechanical axis that intersects the image plane in point Q, in which case C, P and the principal point O undergo a 2D rotation of the same angle δ around Q, and wherein the calibration parameters focal length f, distortion ξ, rotation center Q and principal point O0, as well as a boundary with center C0 and a notch P0, for a reference angular position i=0 of the lens scope with respect to the camera-head are known. The method comprises: acquiring a new frame j by the endoscopic camera and detecting a boundary center Cj and a notch Pj; estimating an angular displacement δ of the endoscopic lens with respect to the camera-head according to notch P0, notch Pj and Q; and estimating an updated principal point Oj of the endoscopic camera by performing a 2D rotation of the principal point O0 around Q by an angle δ.
In an embodiment of the second aspect, the calibration parameters focal length f, distortion ξ, rotation center Q and principal point O0, as well as a boundary with center C0 and a notch P0, at the reference angular position i=0 are obtained by calibrating the endoscopic camera with the lens scope at the reference position or by retrieval from the CCU.
In an embodiment of the second aspect, the rotation center Q is determined using two or more boundary centers Cj and/or notches Pj.
In an embodiment of the second aspect, the angular displacement of the endoscopic lens is estimated by mechanical means and/or by making use of optical tracking, in which case the boundary centers C0 and Cj and the notches P0 and Pj do not have to be known.
In an embodiment of the second aspect, the method further comprises employing a technique for filtering the estimation of the rotation center Q and the angular displacement δ including, but not limited to, any recursive or temporal filter known in the literature such as a Kalman filter or an Extended Kalman filter.
In a third aspect of the present disclosure, a method for characterizing a rigid endoscope with a Field Stop Mask (FSM) that induces an image boundary with center C and a notch P by obtaining a descriptor Φ̇ comprising a normalized focal length ḟ, a distortion ξ, a normalized principal point Ȯ and a normalized rotation center Q̇ is provided, the method comprising: combining the rigid endoscope with a camera to obtain an endoscopic camera, referred to as characterization camera; estimating the calibration parameters (focal length f, distortion ξ, principal point O0 and rotation center Q) of the characterization camera at a reference position; detecting a boundary with center C0 and a notch P0 at the reference position; and determining the normalized focal length ḟ, normalized principal point Ȯ and normalized rotation center Q̇ according to center C0, notch P0, focal length f, principal point O0 and rotation center Q.
In an embodiment of the third aspect, the normalized focal length ḟ is computed from ḟ=f/r, with r being the distance between the center C0=[Cx, Cy, 1]T and the notch P0, and the normalized principal point Ȯ and rotation center Q̇ are obtained by computing Ȯ=AO and Q̇=AQ, respectively, with

$$A=\frac{1}{r}\begin{bmatrix}\cos\beta & \sin\beta & -(C_x\cos\beta + C_y\sin\beta)\\ -\sin\beta & \cos\beta & C_x\sin\beta - C_y\cos\beta\\ 0 & 0 & r\end{bmatrix}$$

and β being the angle between the line through C0 and P0 and the x axis of the image.
In a fourth aspect of the present disclosure, a method for calibrating an endoscopic camera is provided. The endoscopic camera results from combining a rigid endoscope with a camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope has a descriptor Φ̇ comprising a normalized focal length ḟ, a distortion ξ, a normalized principal point Ȯ and a normalized rotation center Q̇ and has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and wherein calibration consists of determining the focal length f, distortion ξ, rotation center Q and principal point O for a particular angular position of the lens scope with respect to the camera-head. The method comprises: acquiring frame i by the endoscopic camera, detecting a boundary center Ci=[Cx, Cy, 1]T and a notch Pi, and determining a radius r=∥Ci−Pi∥; and estimating the calibration parameters of the endoscopic camera, namely the focal length f, rotation center Q and principal point O, according to center Ci, notch Pi, the normalized focal length ḟ, the normalized principal point Ȯ and the normalized rotation center Q̇.
In an embodiment of the fourth aspect, the focal length f, the principal point O and the rotation center Q are computed by f=rḟ, O=BȮ and Q=BQ̇, respectively, with

$$B=\begin{bmatrix} r\cos\alpha & -r\sin\alpha & C_x \\ r\sin\alpha & r\cos\alpha & C_y \\ 0 & 0 & 1\end{bmatrix}$$

and α being the angle between the line through Ci and Pi and the x axis of the image.
In an embodiment of the fourth aspect, the endoscopic lens descriptor Φ̇ is obtained by using a camera, by measuring the endoscopic lens using a caliper, a micrometer, a protractor, gauges, robotic measurement apparatuses or any combination thereof, or by using a CAD model of the endoscopic lens.
In an embodiment of the fourth aspect, the endoscopic lens descriptor Φ̇ is obtained by loading information into the CCU from a database or using QR codes, USB flash drives, manual insertion, engravings in the FSM, RFID tags, an internet connection, etc.
In an embodiment of the fourth aspect, frame i comprises two or more frames, wherein the rotation center Q is determined using two or more boundary centers Ci and/or notches Pi.
In an embodiment of the fourth aspect, the endoscopic camera can have an arbitrary angular position of the lens scope with respect to the camera-head and an arbitrary amount of zoom.
In a fifth aspect of the present disclosure, a method for detecting an anomaly caused by defects or incorrect assembly of a rigid endoscope in a camera-head, or by a mismatch between a considered calibration and an endoscopic lens in use, in an endoscopic camera that results from combining the rigid endoscope with a camera comprising the camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope has a descriptor Φ̇ comprising a normalized rotation center Q̇ and has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, is provided, the method comprising: acquiring at least two frames by the endoscopic camera, having the rigid endoscope in different positions with respect to the camera-head, and detecting boundary centers Ci and notches Pi for each frame; estimating a rotation center Q̂ using the detected boundary centers Ci and/or notches Pi; estimating a rotation center Q according to the normalized rotation center Q̇, boundary centers Ci and notches Pi; and comparing the two rotation centers Q̂ and Q and deciding about the existence of an anomaly.
In an embodiment of the fifth aspect, the endoscopic lens descriptor Φ̇ is obtained by loading information into the CCU from a database or using QR codes, USB flash drives, manual insertion, engravings in the FSM, RFID tags, an internet connection, etc.
In an embodiment of the fifth aspect, the comparison between the two rotation centers Q̂ and Q and the decision about the existence of an anomaly are performed by making use of one or more of algebraic functions, classification schemes, statistical models, machine learning algorithms, thresholding, or data mining.
In an embodiment of the fifth aspect, the method further comprises comparing the boundaries detected at calibration time and during operation to identify the cause of the anomaly.
In an embodiment of the fifth aspect, the method further comprises providing an alert message to the user, wherein the cause of the anomaly, whether it is a mismatch between the lens in use and the considered calibration or a physical problem with the rigid endoscope and/or the camera-head, is identified.
In a sixth aspect of the present disclosure, a method for detecting an image boundary with center C and a notch P in a frame acquired by using a rigid endoscope that has a Field Stop Mask (FSM) that induces the image boundary with center C and the notch P is provided, the method comprising: using an initial estimation of the boundary with center C and notch P for rendering a ring image, which is an image obtained by interpolating and concatenating image signals extracted from the acquired frame at concentric circles centered in C, wherein the notch P is mapped to the center of the ring image; detecting salient points in the ring image; repeating the following until the detected salient points are collinear: mapping the salient points into the space of the acquired frame and fitting a circle with center C to the mapped points, rendering a new ring image by making use of the fitted circle, and detecting salient points in the new ring image; and detecting the notch P in the final ring image using correlation with a known template.
In an embodiment of the sixth aspect, the FSM contains more than one notch, all having different shapes and/or sizes so that they can be identified, in which case the template comprises a combination of notches whose relative location is known.
In an embodiment of the sixth aspect, the notches can have any desired shape.
In an embodiment of the sixth aspect, the notch that is mapped in the center of the ring image is an arbitrary notch.
In an embodiment of the sixth aspect, the initial estimation of the boundary with center C and notch P can be obtained from a multitude of methods which include, but are not limited to, deep/machine learning, image processing, statistical-based and random approaches.
In an embodiment of the sixth aspect, a generic conic is fitted to the mapped points.
In an embodiment of the sixth aspect, the detected center C and/or notch P are used for the estimation of the angular displacement of the rigid endoscope with respect to a camera-head it is mounted on.
While various embodiments have been described for purposes of this disclosure, such embodiments should not be deemed to limit the teaching of this disclosure to those embodiments. Various changes and modifications may be made to the elements and operations described above to obtain a result that remains within the scope of the systems and processes described in this disclosure. All patents, patent applications, and published references cited herein are hereby incorporated by reference in their entirety. It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. It will be appreciated that several of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. All such modifications and variations that fall within the scope of the appended claims are intended to be included herein within the scope of this disclosure.
The described embodiments are to be considered in all respects only as illustrative and not restrictive and the scope of the presently disclosed embodiments is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed systems and/or methods.
All of the references cited are expressly incorporated herein by reference. The discussion of any reference is not an admission that it is prior art to the presently disclosed embodiments, especially any reference that may have a publication date after the priority date of this application.
This application is a 35 U.S.C. § 371 national phase of PCT International Application No. PCT/US2020/054636, filed Oct. 7, 2020, which claims priority to and the benefit of U.S. Provisional Patent Application No. 62/911,950, filed Oct. 7, 2019 (“'950 application”) and U.S. Provisional Patent Application No. 62/911,986, filed on Oct. 7, 2019 (“'986 application”). All the noted applications are hereby incorporated herein by reference in their entireties for all purposes.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2020/054636 | 10/7/2020 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2021/071988 | 4/15/2021 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
8444652 | Amis et al. | May 2013 | B2 |
9438897 | De Almeida Barreto et al. | Sep 2016 | B2 |
9526587 | Zhao et al. | Dec 2016 | B2 |
10307296 | Tan et al. | Jun 2019 | B2 |
10499996 | de Almeida Barreto | Dec 2019 | B2 |
20100168562 | Zhao et al. | Jul 2010 | A1 |
20140285676 | Barreto | Sep 2014 | A1 |
20150351967 | Lim et al. | Dec 2015 | A1 |
20170046833 | Lurie | Feb 2017 | A1 |
20180071032 | De Almeida Barreto et al. | Mar 2018 | A1 |
20180242823 | Ichihashi | Aug 2018 | A1 |
20210145254 | Shekhar | May 2021 | A1 |
20210161718 | Ng et al. | Jun 2021 | A1 |
20220383588 | De Almeida Barreto | Dec 2022 | A1 |
Number | Date | Country |
---|---|---|
108938086 | Dec 2018 | CN |
2018232322 | Dec 2018 | WO |
2021071991 | Apr 2021 | WO |
2021168408 | Aug 2021 | WO |
2021203077 | Oct 2021 | WO |
2021203082 | Oct 2021 | WO |
2021257672 | Dec 2021 | WO |
2022006041 | Jan 2022 | WO |
2022006248 | Jan 2022 | WO |
2022159726 | Jul 2022 | WO |
Entry |
---|
International Patent Application No. PCT/US2022/041845 filed Aug. 29, 2022 listing Quist, Brian William as first Inventor, entitled, “Methods and Systems of Ligament Repair,”, 108 pages. |
International Patent Application No. PCT/US2022/046857 filed Oct. 17, 2022 listing Torrie, Paul Alexander as first inventor, entitled, “Methods and Systems of Tracking Objects in Medical Procedures,”, 77 pages. |
International Search Report and Written Opinion of PCT/US2020/054636 mailed Mar. 10, 2021. |
Melo, Rui, et al., “A New Solution for Camera Calibration and Real-Time Image Distortion Correction in Medical Endoscopy-Initial Technical Evaluation”, IEEE Transactions on Biomedical Engineering, vol. 59, No. 3, Mar. 2012, pp. 634-644. |
Yamaguchi, Tetsuzo, et al., “Development of a Camera Model and Calibration Procedure for Oblique-viewing Endoscopes,” Computer Aided Surgery, vol. 9, No. 5, Feb. 2004, pp. 203-214. |
Wu, C., et al., “A Full Geometric and Photometric Calibration Method for Oblique-viewing Endoscopes,” Computer Aided Surgery, vol. 15, No. 1-3, Apr. 2010, pp. 19-31. |
Number | Date | Country | |
---|---|---|---|
20220392110 A1 | Dec 2022 | US |
Number | Date | Country | |
---|---|---|---|
62911986 | Oct 2019 | US | |
62911950 | Oct 2019 | US |