The present disclosure generally relates to an imaging system, a related method, and a computer-readable medium storing instructions for performing the method. The present disclosure more particularly relates to an imaging system, a related method, and a computer-readable medium that each involve identification of a boundary between an active portion and an inactive portion of a captured digital image.
An imaging system having an imaging scope (e.g., an endoscope, an exoscope, a borescope) and a camera can be used for capturing light reflected, scattered, and/or emitted from an object, converting the captured light into a digital image, and displaying the digital image (or a modified version thereof) on a monitor for viewing by a user. For example, German Patent Publication No. DE102011054031A1, incorporated herein by reference, discloses an imaging system using an exoscope that enables the illumination and observation of an operation field during a surgical operation from a working distance (e.g., 25 cm to 75 cm). The exoscope is affixed to a holder in a suitable position, and with a suitable orientation, such that the exoscope does not obstruct a surgical team's view of the operation field during the surgical operation.
The imaging system is typically configured such that the optics of the scope form a captured light beam having an at least partially circular cross-sectional shape (e.g., partially circular, ovular, elliptical, etc.). In contrast, the light sensor used to convert the captured light into a digital image (e.g., a charge-coupled device (CCD), a complementary metal-oxide semiconductor (CMOS), etc.) typically has a rectangular-shaped light-sensitive surface for receiving and sensing the captured light beam. As a result, the digital image generated by such light sensors can include an active portion corresponding to the portion of the light-sensitive surface occupied by the captured light, an inactive portion corresponding to the portion of the light-sensitive surface that was not occupied by the captured light, and a boundary defined between the active portion and the inactive portion of the digital image. Depending on the size and optical configuration of the scope attached to the camera, the cross section of the image light beam may be contained partially or totally on the image sensor. That is, in some instances the captured light beam may appear as a complete circular shape upon the image sensor, while in other instances only the corner regions of the image sensor may contain inactive regions, such as when the radius of the image circle is greater than half the length or height of the image sensor but less than half its diagonal. In still other instances, the diameter of the captured light beam may be greater than the diagonal of the image sensor, in which case there will be no inactive portion of the image sensor. Variations on these examples can also exist, wherein, for example, the diameter of the captured image is larger than the diagonal of the image sensor but is offset along either the x or y axis of the sensor; because the image is not centered on the sensor, the left portion of the image sensor may, for example, be completely active while the right portion, lying outside the circular image, is inactive. All of these instances may result from attaching different scopes to a given camera housing, from particular zoom settings, and/or from damage to the scope, for example damage that shifts the position of the image circle away from the position expected at the image sensor.
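By way of illustration only, the following sketch (not part of the disclosure; the function, argument names, and case labels are assumptions) distinguishes these coverage cases for an image circle of radius r centered at (cx, cy) on a w-by-h sensor:

```python
import math

def classify_coverage(cx, cy, r, w, h):
    # Distances from the circle center to the four sensor corners.
    corners = [(0, 0), (w, 0), (0, h), (w, h)]
    corner_dists = [math.hypot(x - cx, y - cy) for x, y in corners]
    if max(corner_dists) <= r:
        # Every corner lies inside the image circle: the sensor is fully active.
        return "no inactive portion"
    if cx - r >= 0 and cx + r <= w and cy - r >= 0 and cy + r <= h:
        # The circle fits entirely on the sensor: a complete circular boundary.
        return "complete circle with surrounding inactive portion"
    # Otherwise the circle spills over some edges: inactive corners and/or sides.
    return "partial circle; inactive corner and/or side regions"
```

For example, `classify_coverage(960, 540, 1200, 1920, 1080)` reports a fully active sensor, since the half-diagonal of a 1920-by-1080 sensor (about 1101 pixels) is smaller than the 1200-pixel radius.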
The respective characteristics of the active and inactive portions of the digital image, and thus the characteristics of the boundary defined therebetween (e.g., center, radius, position, shape, size, etc.) can vary over time depending on one or more characteristics of the captured light beam (e.g., center, radius, position, shape, size, etc.). For example, the imaging system can include an optical zoom device that can be selectively adjusted over time to change the diameter of the captured light beam before it is received by the light-sensitive surface of the light sensor. This, in turn, can change the respective characteristics of the active and inactive portions of the digital image generated by the light sensor. Also, the camera of the imaging system can be configured for interchangeable use with various different types (e.g., classes) of imaging scopes that each form captured light beams having characteristics that are different from one another.
These imaging systems can be disadvantageous in that they are typically “blind” to (i.e., not configured to determine) the settings of the zoom device and/or the type of imaging scope that is being used. As a result, these imaging systems are typically blind as to whether a generated digital image includes (i) only an active portion, or (ii) an active portion, an inactive portion, and a boundary defined therebetween. This limits the ability of these imaging systems to automatically select and/or adjust a setting of the camera (e.g., exposure, gain, etc.). For example, if one of these imaging systems generates a digital image with an active portion, an inactive portion, and a boundary therebetween, and if the imaging system automatically selects an exposure setting of the camera without knowing one or more characteristics of the boundary (e.g., center, radius, position, shape, size, etc.), then the camera may overexpose the active portion of the digital image in a futile attempt to adequately expose the inactive portion of the digital image. Similar issues can occur with tone mapping procedures, in which the imaging systems may futilely attempt to manipulate the contrast of the inactive portion of a digital image. These disadvantages can also prevent these imaging systems from: (i) automatically adjusting the size of an active portion of a digital image to fit a monitor screen; and (ii) automatically centering the active portion of the digital image on a monitor. Additionally, as discussed above, the imaging system is generally blind to the specifics of the optical design of an attached scope, resulting in an associated blindness to chromatic and other aberrations that may be present in a given scope.
Aspects of the present invention are directed to these and other problems.
According to an aspect of the present invention, an imaging system is provided that includes an imaging scope, a camera, an image processor, and a system controller. The imaging scope is configured to illuminate an object and capture light reflected from the object. The camera has a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope and generate a digital image representative of the captured light. The image processor is configured to receive the digital image from the camera and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary. The system controller is configured to receive the boundary data from the image processor and use the boundary data to select and/or adjust a setting of the imaging system.
According to another aspect of the present invention, a method is provided that includes the steps of: (i) receiving, by an image processor, a digital image generated by an imaging system; (ii) using, by the image processor, at least one of a RANSAC technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of the digital image; (iii) generating, by the image processor, boundary data indicative of a characteristic of the boundary; and (iv) automatically selecting and/or adjusting one or more settings of the imaging system based on the boundary data.
According to another aspect of the present invention, a computer-readable medium storing instructions is provided, the instructions including the steps of: (i) using at least one of a RANSAC technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of a digital image generated using an imaging system; (ii) generating boundary data indicative of a characteristic of the boundary; and (iii) automatically selecting and/or adjusting one or more settings of the imaging system based on the boundary data.
In addition to, or as an alternative to, one or more of the features described above, further aspects of the present invention can include one or more of the following features, individually or in combination:
These and other aspects of the present invention will become apparent in light of the drawings and detailed description provided below.
The present disclosure describes an imaging system 10 (see
Referring to
The imaging scope 12 may illuminate an object 20 (e.g., an internal body cavity of a patient) directly with light sources 50, such as distally placed light emitting diodes, or may carry illuminating light, provided by an external light source 13 at a proximal region of the scope, to the distal end of the scope via optical fiber or another light-propagating means. Alternatively, light may be provided by an external light source 13 and be delivered to the object 20 independently of the scope itself. The imaging scope 12 captures light reflected, scattered, and/or emitted from the object 20, and transmits the captured light to the camera 14. The camera 14 includes a light sensor 22 with a light-sensitive surface 24 (see
Referring still to
In other embodiments not shown in the drawings, the imaging scope 12, and/or components thereof, can be configured in various different ways. For example, in other embodiments, the imaging scope 12 can be an exoscope or a borescope, as opposed to a medical endoscope. The outer surface of the shaft wall 42 can define a diameter of approximately five millimeters (5 mm), ten millimeters (10 mm), or another magnitude. The imaging scope 12 can be configured for a non-medical use, as opposed to a medical use; such industrial scopes are commonly referred to as borescopes. The proximal end 38 of the shaft 34 can be integrally connected to the camera 14. The objective lens 48 and the light sources 50 can be arranged in a different manner relative to one another, or in a different manner relative to the distal end 40 of the shaft 34. The image transmission device 60 can additionally or alternatively include various different types of lenses or light conductors capable of transmitting the captured light from the objective lens 48 to the proximal end 38 of the shaft 34. The image transmission device 60 can be configured to transmit the captured light therethrough in the form of a captured light beam having another at least partially circular cross-sectional shape (e.g., ovular, elliptical, etc.).
Referring to
In the illustrated embodiment, the camera 14 is a video camera, and thus the digital image generated by the light sensor 22 can represent one of a plurality of time-sequenced frames of a digital video. The light sensor 22 included in the camera 14 is a CCD or CMOS sensor with a rectangular-shaped light-sensitive surface 24 configured to receive and sense the captured light received from the imaging scope 12.
Referring to
Referring to
Referring to
The grayscale converter 70 receives the digital image via the input 68 of the image processor 16. The grayscale converter 70 converts the digital image to a grayscale digital image using, for example, the maximum of the color channels (e.g., R, G, B) and/or a weighted average of the color channels (e.g., a luminance weighting).
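As an illustrative sketch of these two conversions (assuming an 8-bit RGB image held in a numpy array; the function names and the Rec. 601 weights are assumptions, as the disclosure does not specify a weighting):

```python
import numpy as np

def to_grayscale_max(rgb):
    # Per-pixel maximum over the R, G, B channels.
    return rgb.max(axis=2).astype(np.uint8)

def to_grayscale_luminance(rgb):
    # Weighted average of the color channels (Rec. 601 luminance weights shown;
    # other weightings are equally possible).
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float32) @ weights).astype(np.uint8)
```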
The boundary point detector 72 receives the grayscale digital image from the grayscale converter 70 and applies a filter (e.g., a high pass kernel filter) to detect boundary points within the grayscale digital image. Each of the boundary points corresponds to at least one pixel of the grayscale digital image that might form a portion of a boundary 30 between an active portion 26 and an inactive portion 28 of the digital image. In instances in which the optical zoom device 64 is in a low magnification configuration (i.e., when the digital image includes an active portion 26, an inactive portion 28, and a boundary 30 defined therebetween) (see
The boundary point thinner 74 receives the grayscale digital image and any boundary points detected by the boundary point detector 72. The boundary point thinner 74 performs a non-maximal suppression on the boundary points, and eliminates any boundary points that do not generate a local response peak. Additionally or alternatively, the boundary point thinner 74 can consider the relative positioning of the boundary points within the grayscale digital image, and can eliminate boundary points that are unlikely to correspond to an area of transition (e.g., a boundary 30) between an active portion 26 and an inactive portion 28 of the digital image.
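The following is a simplified sketch of these two stages together (a gradient filter standing in for the high-pass kernel filter, and a crude axis-aligned non-maximal suppression; the threshold value and all names are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def detect_boundary_points(gray, threshold=30.0):
    g = gray.astype(np.float32)
    gy, gx = np.gradient(g)          # high-pass responses along y and x
    mag = np.hypot(gx, gy)           # local response strength

    keep = mag > threshold           # candidate boundary points
    # Eliminate candidates that are not a local response peak along the
    # dominant gradient axis (a crude stand-in for full non-maximal suppression).
    horiz = np.abs(gx) >= np.abs(gy)
    peak_x = np.zeros_like(keep)
    peak_x[:, 1:-1] = (mag[:, 1:-1] >= mag[:, :-2]) & (mag[:, 1:-1] >= mag[:, 2:])
    peak_y = np.zeros_like(keep)
    peak_y[1:-1, :] = (mag[1:-1, :] >= mag[:-2, :]) & (mag[1:-1, :] >= mag[2:, :])
    keep &= np.where(horiz, peak_x, peak_y)

    ys, xs = np.nonzero(keep)
    return list(zip(xs.tolist(), ys.tolist()))   # (x, y) boundary points
```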
The memory device 76 receives and stores the digital image and any detected boundary points thereof, as well as any other information and/or data that may need to be stored and/or retrieved by the components of the image processor 16 in order to perform the functionality described herein.
In instances in which boundary points have been detected in the grayscale digital image, the boundary identifier 78 selectively retrieves the boundary points from the memory device 76 and uses at least one of a RANSAC technique and a Hough Transform technique to (i) identify the boundary 30 between the active portion 26 and the inactive portion 28 of the digital image and (ii) generate boundary data indicative of one or more characteristics of the boundary 30 (e.g., center, radius, position, shape, size, etc.).
In the illustrated embodiment, the boundary identifier 78 uses a RANSAC identifier. RANSAC is an iterative technique that can be used to estimate the parameters of a mathematical model from a set of observed data that contains outliers. Detailed descriptions of RANSAC can be found, for example, in Fischler & Bolles, Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography, Communications of the ACM, vol. 24, no. 6, pp. 381-395 (June 1981), and Marco Zuliani, RANSAC for Dummies (2014), available at http://old.vision.ece.ucsb.edu/~zuliani/Research/RANSAC/docs/RANSAC4Dummies.pdf.
In the illustrated embodiment, the RANSAC technique performed by the boundary identifier 78 involves randomly selecting a boundary point subset having at least three (e.g., three, four, five, etc.) of the boundary points, and determining a fit (e.g., a least square error fit) for the boundary point subset. The number of randomly selected boundary points of the boundary point subset can depend at least in part on the shape or expected shape of the boundary 30. For example, a boundary point subset having three randomly selected boundary points may be acceptable when the expected shape of the boundary 30 is circular, but four or more randomly selected boundary points may be needed when the expected shape of the boundary 30 is elliptical. The fit for the boundary point subset can be determined using various different error metrics, including, for example, the 2-norm error metric (i.e., sum of squared error), the 1-norm error metric (sum of absolute error), or the infinity norm error metric (maximum absolute error).
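For the circular case, the fitting step can be illustrated as follows (a numpy sketch, not the disclosure's implementation): three non-collinear boundary points determine a unique circle, obtained here by solving the perpendicular-bisector equations.

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Return (cx, cy, r) of the circle through three points, or None if collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Equating squared distances from the center to each point pair yields a
    # 2x2 linear system in the center coordinates (cx, cy).
    a = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=np.float64)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    if abs(np.linalg.det(a)) < 1e-9:   # collinear points: no unique circle
        return None
    cx, cy = np.linalg.solve(a, b)
    r = np.hypot(x1 - cx, y1 - cy)
    return cx, cy, r
```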
Next, the boundary identifier 78 checks the fit against all of the boundary points (not just those from the selected boundary point subset) to determine an inlier number for the fit. The inlier number is the number of boundary points, out of all of the boundary points, that lie on the fit to within a predetermined tolerance.
These steps can be repeated a plurality of times for each digital image from which boundary points were detected. After a plurality of iterations, the fit with the highest inlier number (i.e., the fit with the most inlying boundary points) is selected as the best fit. The boundary identifier 78 can use the selected best fit to generate boundary data indicative of one or more characteristics of the boundary 30 (e.g., center, radius, position, shape, size, etc.). This approach allows a best fit to be selected even when the boundary 30 has a shape that is not entirely encompassed within the digital image. That is, when the optical zoom device 64 is between the low magnification configuration (see
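Putting these steps together, a minimal sketch of the sample-fit-count loop might look like the following (the iteration count and inlier tolerance are illustrative, and `circle_from_three_points` is the assumed helper sketched above):

```python
import random
import numpy as np

def ransac_circle(points, iterations=200, tolerance=2.0):
    best_fit, best_inliers = None, 0
    pts = np.asarray(points, dtype=np.float64)
    for _ in range(iterations):
        sample = random.sample(points, 3)        # minimal subset for a circle
        fit = circle_from_three_points(*sample)
        if fit is None:
            continue
        cx, cy, r = fit
        # A boundary point is an inlier if it lies on the candidate circle
        # to within the tolerance; count inliers over ALL boundary points.
        dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        inliers = int(np.sum(np.abs(dist - r) < tolerance))
        if inliers > best_inliers:
            best_fit, best_inliers = fit, inliers
    return best_fit, best_inliers   # source of the (cx, cy, r) boundary data
```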
In some embodiments, instead of selecting the best fit to be the fit with the most inlying boundary points, as described above, the best fit can be selected using a least median error approach, in which the fit with the least median error is selected as the best fit. In such embodiments, the boundary identifier 78 compares each fit with all of the boundary points (not just those from the selected boundary point subset) to determine a median error measurement for each fit. The fit having the lowest median error measurement is selected as the best fit. In such embodiments, the median will typically represent the 50th percentile of errors. However, one could also minimize other error percentiles with similar results. This approach can be advantageous in that it avoids the need to define a threshold for which boundary points are to be considered inliers. However, this approach can be disadvantageous from a computational feasibility standpoint, because it requires computation of all of the error scores and then at least a partial sorting of the error scores.
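A corresponding sketch of the least-median-error selection rule (again assuming the `circle_from_three_points` helper above; the iteration count is illustrative) replaces the inlier count with the median of the per-point errors, avoiding any inlier threshold at the cost of computing and partially sorting all error scores:

```python
import random
import numpy as np

def ransac_circle_lms(points, iterations=200):
    best_fit, best_median = None, float("inf")
    pts = np.asarray(points, dtype=np.float64)
    for _ in range(iterations):
        fit = circle_from_three_points(*random.sample(points, 3))
        if fit is None:
            continue
        cx, cy, r = fit
        errors = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
        median_error = float(np.median(errors))   # 50th percentile of errors
        if median_error < best_median:
            best_fit, best_median = fit, median_error
    return best_fit, best_median
```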
As indicated above, in the illustrated embodiment, the camera 14 is a video camera, and thus the digital image generated by the light sensor 22 of the camera 14 can represent one frame of a digital video. Accordingly, in the illustrated embodiment, for each new digital image that is generated (i.e., each new frame of the digital video), the RANSAC technique is repeated. In addition, for each new digital image that is captured, a best fit of a previously-analyzed digital image, and/or minor perturbations thereof (slightly larger/smaller, slightly different center, etc.), can be tried with the new digital image. This feature enables the image processor 16 to upgrade to a “better” best fit if one is ever found. Also, for each new digital image that is generated, the best fit of a previously-analyzed digital image can serve as a starting point from which the RANSAC technique is performed, thereby avoiding the need to start the RANSAC technique from scratch for each new digital image (i.e., for each new frame of the digital video). The rate at which the camera 14 generates new digital images (i.e., the frame rate of the digital video generated by the camera 14) can vary depending on the particular application. In some embodiments (e.g., embodiments in which the frame rate is relatively high), the image processor 16 can be configured so that only some of the digital images produced by the camera 14 (e.g., selected at periodic intervals) are processed by the image processor 16 and the components thereof. This feature can be advantageous from a computational feasibility standpoint, and can still allow the benefits of the imaging system 10 to be achieved, because the boundary between an active portion and an inactive portion thereof typically does not change frequently or rapidly from frame to frame.
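The frame-to-frame reuse described above might be sketched as follows (the perturbation offsets are illustrative assumptions); each candidate can be scored with the same inlier count as a freshly sampled fit and adopted whenever it scores better:

```python
def candidate_fits_from_previous(prev_fit, deltas=(-2.0, 0.0, 2.0)):
    """Yield the previous frame's best fit plus slightly shifted/resized circles."""
    cx, cy, r = prev_fit
    for dx in deltas:
        for dy in deltas:
            for dr in deltas:
                yield (cx + dx, cy + dy, r + dr)
```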
As indicated above, the boundary identifier 78 can additionally or alternatively use a Hough Transform technique in identifying the boundary 30 and generating boundary data. The Hough Transform technique can be especially useful in embodiments in which the boundary 30 can be expected to have a circular shape (e.g., embodiments in which a captured light beam has a circular cross-sectional shape). In such embodiments, the boundary identifier 78 receives an array of index points, each index point corresponding to a radius and center coordinates (e.g., “x” and “y” Cartesian coordinates) of a different candidate fit for the boundary points. The index points of the array each have a count, and the counts of all index points are initialized with zeros. For each of the boundary points, the boundary identifier 78 increments the count of each index point corresponding to the different candidate fits that include the respective boundary point. Finally, the boundary identifier 78 selects a best fit. The best fit is the candidate fit corresponding to the index point with the highest count. The boundary identifier 78 can use the best fit to generate boundary data indicative of one or more characteristics of the boundary 30 (e.g., center, radius, position, shape, size, etc.). The Hough Transform technique has the potential to be more accurate and reliable than the RANSAC technique, as all of the boundary points and all the candidate fits are considered. However, the Hough Transform technique can be disadvantageous from a computational feasibility standpoint, as many indices in the array need to be maintained simultaneously.
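A minimal sketch of this accumulator scheme (brute-force loops with coarse, illustrative quantization and tolerance; not the disclosure's code) follows. The triple loop over candidate centers and radii makes explicit the large number of index points that must be maintained simultaneously, consistent with the computational-feasibility caveat above:

```python
import numpy as np

def hough_circle(points, cx_range, cy_range, r_range, tolerance=1.0):
    # One count per (cx, cy, r) index point, initialized to zero.
    counts = np.zeros((len(cx_range), len(cy_range), len(r_range)), dtype=np.int32)
    for (x, y) in points:
        for i, cx in enumerate(cx_range):
            for j, cy in enumerate(cy_range):
                d = np.hypot(x - cx, y - cy)   # radius implied by this center
                for k, r in enumerate(r_range):
                    if abs(d - r) < tolerance:
                        counts[i, j, k] += 1   # this candidate fit includes the point
    # Best fit: the candidate fit whose index point has the highest count.
    i, j, k = np.unravel_index(np.argmax(counts), counts.shape)
    return cx_range[i], cy_range[j], r_range[k]
```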
Referring still to
The error and reasonableness detector 80 also analyzes the selected best fit to determine whether it is “reasonable.” For instance, a selected best fit indicating that the center of the boundary 30 is outside of the displayable area of the digital image will be rejected as an unreasonable (e.g., unlikely) characterization of the boundary 30. If the error and reasonableness detector 80 determines that a boundary 30 has been confidently detected, it will cause the boundary data received from the boundary identifier 78 to be transmitted to the system controller 18 via the output 82 of the image processor 16. If, on the other hand, the error and reasonableness detector 80 determines that a boundary 30 has not been confidently detected, it can first instruct the boundary identifier 78 to repeat the above-described RANSAC technique (e.g., using a best fit of a previously-analyzed digital image, and/or a minor perturbation thereof). If this does not improve the confidence measure or reasonableness of the best fit newly selected by the boundary identifier 78, the error and reasonableness detector 80 can ultimately determine that the digital image does not include a boundary 30 (e.g., as when the digital image includes only an active portion 26). The error and reasonableness detector 80 can update the boundary data accordingly and cause the updated boundary data to be transmitted to the system controller 18 via the output 82 of the image processor 16.
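By way of illustration only (the disclosure does not define the confidence measure, so both the measure and the thresholds below are assumptions), the two gates might be sketched as:

```python
import math

def fit_is_confident(inlier_count, radius, min_support=0.3):
    # Fraction of the circle's circumference (in pixel-length units)
    # supported by inlying boundary points.
    support = inlier_count / (2.0 * math.pi * radius)
    return support >= min_support

def fit_is_reasonable(cx, cy, width, height):
    # Reject fits whose center lies outside the displayable area of the image.
    return 0.0 <= cx < width and 0.0 <= cy < height
```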
Referring to
The system controller 18 can automatically select and/or adjust various different settings of the imaging system 10 based on the boundary data. For example, in some embodiments, the system controller 18 can use the boundary data to automatically select an exposure setting of the camera 14, automatically select a gain setting of the camera 14, and/or automatically reduce the importance of pixel intensities that have been determined to correspond to an inactive portion 28 of the digital image. The system controller 18 can also use the boundary data to determine parameters of the imaging system, such as the diameter of the attached endoscope, and to set the light source intensity (or range of intensities) to match the light delivery capability inferred from the scope size. Further, the boundary data may be used to adjust the optical zoom mechanism, which may, in addition to a high magnification configuration and a low magnification configuration, have various other magnification levels. For example, if a narrow-diameter endoscope is attached and the system determines a boundary area, the optical zoom mechanism may be adjusted such that the image is magnified upon the image sensor until a zoom value is reached at which the inactive area of the sensor is no longer present, is minimized, or is adjusted to a predetermined desired size. The inverse may be performed if a large-diameter scope is used, as some operators may prefer a display on which a circular image can be discerned. The system controller 18 can also use the boundary data in automatically controlling the camera 14 to aggressively filter noise in an area of the digital image that has been determined to correspond to an inactive portion 28 of the digital image. The boundary data can also be used to select a set of appropriate optical model parameters for the modulation transfer function (MTF), spatial frequency response (SFR), lens distortion, vignetting, and/or chromatic aberration of the identified attached scope. These model parameters are characterized for each potential class of scope, their values are stored in the camera control unit (CCU), and they are applied by the CCU based on the determined boundary data and the corresponding scope identification. Model parameters could be, for example, an MTF curve, a distortion profile, or a focal shift profile as a function of wavelength. The system controller 18 can use the boundary data and the associated optical model in automatically adjusting the sharpness of the digital image, lens distortion correction, vignetting correction, and/or chromatic aberration correction. The system controller 18 can use the boundary data in automatically performing digital zooming on the digital image, and/or automatically performing re-centering of the digital image on a monitor 32, as sketched below. This can allow the imaging system 10 to maintain the area and position of the respective active portions 26 of digital images as a digital video including those digital images is displayed on the monitor 32. The system controller 18 can also use the boundary data to automatically improve a tone mapping technique performed by the imaging system 10.
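As one illustrative sketch of the digital zoom and re-centering use named above (the margin and clamping policy are assumptions, not the disclosure's method), the boundary data (cx, cy, r) can be turned into a crop rectangle centered on the active portion:

```python
def crop_for_active_portion(cx, cy, r, img_w, img_h, margin=0.02):
    # Square crop around the image circle, with a small border, clamped to
    # the image bounds; the crop is then scaled to the monitor resolution.
    half = r * (1.0 + margin)
    left = max(0, int(round(cx - half)))
    top = max(0, int(round(cy - half)))
    right = min(img_w, int(round(cx + half)))
    bottom = min(img_h, int(round(cy + half)))
    return left, top, right, bottom
```

Applying the same crop, updated only when the boundary data changes, holds the area and position of the active portion steady from frame to frame on the monitor.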
The functionality of the system controller 18 can be implemented using analog and/or digital hardware (e.g., counters, switches, logic devices, memory devices, programmable processors, non-transitory computer-readable storage mediums), software, firmware, or a combination thereof. The system controller 18 can perform one or more of the functions described herein by executing software, which can be stored, for example, in a memory device. A person having ordinary skill in the art would be able to adapt (e.g., construct, program) the system controller 18 to perform the functionality described herein without undue experimentation. Although the system controller 18 is described as a discrete component separate from the image processor 16, in other embodiments the system controller 18 and the image processor 16, or one or more components thereof, can be combined into a single component.
Another aspect of the invention involves a method that includes the steps of: (i) using a RANSAC technique and/or a Hough Transform technique to identify a boundary 30 between an active portion 26 and an inactive portion 28 of a digital image generated using an imaging system 10; (ii) generating boundary data indicative of a characteristic of the boundary 30 (e.g., center, radius, position, shape, size, etc.); and (iii) automatically selecting and/or adjusting one or more settings of the imaging system 10 based on the boundary data. As will be apparent in view of the above-described functionality of the imaging system 10, the steps of the method can include various sub-steps of the above-described steps, and/or various other steps in addition to the above-described steps.
Another aspect of the invention involves a computer-readable medium storing instructions for performing steps including: (i) using a RANSAC technique and/or a Hough Transform technique to identify a boundary 30 between an active portion 26 and an inactive portion 28 of a digital image generated using an imaging system 10; (ii) generating boundary data indicative of a characteristic of the boundary 30 (e.g., center, radius, position, shape, size, etc.); and (iii) automatically selecting and/or adjusting one or more settings of the imaging system 10 based on the boundary data. As will be apparent in view of the above-described functionality of the imaging system 10, the computer-readable medium can additionally store instructions for performing various sub-steps of the above-described steps, and/or various other steps in addition to the above-described steps. The computer-readable medium can be a non-transitory computer-readable medium (e.g., a non-transitory memory device). The instructions stored in the computer-readable medium can cause a component of the above-described imaging system 10 (e.g., the image processor 16, the system controller 18, and/or another component) to perform one or more steps of the above-described method.
The present disclosure describes aspects of the present invention with reference to the exemplary embodiments illustrated in the drawings; however, aspects of the present invention are not limited to the exemplary embodiments illustrated in the drawings. It will be apparent to those of ordinary skill in the art that aspects of the present invention include many more embodiments. Accordingly, aspects of the present invention are not to be restricted in light of the exemplary embodiments illustrated in the drawings. It will also be apparent to those of ordinary skill in the art that variations and modifications can be made without departing from the true scope of the present disclosure. For example, in some instances, one or more features disclosed in connection with one embodiment can be used alone or in combination with one or more features of one or more other embodiments.
This application is a continuation-in-part of U.S. patent application Ser. No. 17/352,645, filed Jun. 21, 2021, and entitled “Imaging System for Identifying a Boundary Between Active and Inactive Portions of a Digital Image,” which in turn claims priority to U.S. patent application Ser. No. 15/040,688, filed Feb. 10, 2016, entitled “Imaging System for Identifying a Boundary Between Active and Inactive Portions of a Digital Image,” and issued as U.S. Pat. No. 11,044,390 on Jun. 22, 2021, the disclosures of both of which are incorporated herein by reference.
 | Number | Date | Country
---|---|---|---
Parent | 15030688 | Apr 2016 | US
Child | 17352645 | | US

 | Number | Date | Country
---|---|---|---
Parent | 17352645 | Jun 2021 | US
Child | 18787898 | | US