The present disclosure relates to measuring properties of light from a scene, and more particularly, to novel systems and methods for measuring multiple properties of light from a scene from a single viewpoint.
The complexity of modern camera systems varies widely based on their application. Point-and-shoot, interchangeable-lens, and digital single-lens reflex cameras are typically used for photography and are rarely used to analyze more than one aspect of a scene. In comparison, specialized scientific cameras are often tasked with providing one or more polarimetric, spectral, temporal, or high-dynamic range analyses.
To increase the analysis capabilities of the camera, recent efforts have added optical components such as prisms, compound optics, and beamsplitters to the camera's optical assembly to generate multiple images from the incident scene. Similar to compact multispectral systems, the resulting images can be recombined during post-processing to extract information about materials or objects within the scene.
Since many of these systems rely on splitting the spectral or polarimetric content of the incident light, the resulting set of images does not retain the full content of the original scene. One solution—the snapshot multispectral imager—filters both polarization and wavelength simultaneously, decoupling the polarization and spectral data during post-processing. However, this technique remains limited in the number of usable spectral bands and by the parallax induced by close-range targets.
The inventor of embodiments of the present disclosure has identified an alternative approach to form multiple images within a uniaxial optical multi-measurement imaging system, such that each image retains all spectral and polarimetric content. By retaining this information, each image can be independently captured, filtered, processed, and analyzed.
In embodiments, a uniaxial optical multi-measurement imaging system includes an imaging lens column having an optical axis. The imaging lens column is configured to receive and transmit light from a scene from a single viewpoint. The imaging system also includes a light redistribution optic (LRO) in the shape of a thin pyramid shell with an apex. The LRO is centered along the optical axis with the apex pointing towards the imaging lens column. The LRO has planar sides with each side angled 45 degrees with respect to the optical axis and each side is configured to reflect and transmit light received from the imaging lens column. The imaging system also includes a circumferential filter array (CFA) concentrically located around the LRO. The CFA is configured to filter the light reflected from or transmitted through the LRO. Finally, the imaging system includes multiple image sensors. Each image sensor is positioned to receive the light reflected from or transmitted through the LRO.
In general, this approach is similar to the previous multispectral imaging systems described in the inventor's previous disclosures in that it utilizes additional components placed along the optical axis to split the incident light field. Unlike other systems, however, the LRO does not rely on spectral or polarimetric splitting; instead, in embodiments, a simple broadband 50/50 reflection coating may be applied to the forward-facing surfaces of the LRO. In turn, the customizable CFA is able to filter each reflected image independently, greatly increasing the analysis capabilities and versatility of the imaging system.
The foregoing features of the present invention will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are, therefore, not to be considered limiting of its scope, the invention will be described with additional specificity and detail through use of the accompanying drawings in which:
The present disclosure covers apparatuses and associated methods for a uniaxial optical multi-measurement imaging system. In the following description, numerous specific details are provided for a thorough understanding of specific preferred embodiments. However, those skilled in the art will recognize that embodiments can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In some cases, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the preferred embodiments. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in a variety of alternative embodiments. Thus, the following more detailed description of the embodiments of the present invention, as illustrated in some aspects in the drawings, is not intended to limit the scope of the invention, but is merely representative of the various embodiments of the invention.
In this specification and the claims that follow, singular forms such as “a,” “an,” and “the” include plural forms unless the content clearly dictates otherwise. All ranges disclosed herein include, unless specifically indicated, all endpoints and intermediate values. In addition, “optional,” “optionally,” or “or” refer, for example, to instances in which a subsequently described circumstance may or may not occur, and include instances in which the circumstance occurs and instances in which the circumstance does not occur. For example, if the text reads “option A or option B,” there may be instances where option A and option B are mutually exclusive or instances where both option A and option B may be included. The terms “one or more” and “at least one” refer, for example, to instances in which one of the subsequently described circumstances occurs, and to instances in which more than one of the subsequently described circumstances occurs.
The following examples are illustrative only and are not intended to limit the disclosure in any way.
In
In this example of the uniaxial optical multi-measurement imaging system 102, the LRO 32 has two planar sides 33A and 33B facing the imaging lens column 30. A first planar side 33A of the LRO 32 is configured to reflect the light 22 transmitted from the imaging lens column 30 to a first image sensor 40A. A second planar side 33B of the LRO 32 is configured to reflect the light 22 transmitted from the imaging lens column 30 to a second image sensor 40B. Also, both first 33A and second 33B planar sides of the LRO 32 are configured to transmit light 22 from the imaging lens column 30 to a third image sensor 40C. First 28A, second 28B, and third 28C independent, spatially separate images from the scene 20 may be captured by the multiple image sensors 40A, 40B, and 40C.
Design of the LRO 32 was based on the convergence of light rays 22 leaving the lens column 30 and the optical path length of light traveling between the lens column 30 and the on-axis detector 40C. In conventional imaging systems, light rays exiting the lens column converge horizontally and vertically to form a focused image on the detector. To continue the convergence in a uniaxial system, each forward-facing surface 33A, 33B, of the LRO 32 is planar. Additionally, on-axis rays will travel a shorter distance from the lens column 30 to the detector 40C compared to off-axis rays. Geometrically, this indicates the LRO 32 should be angled along the optical axis with the apex 32A placed closest to the lens column 30.
To satisfy these conditions, the LRO 32 was modeled as a thin pyramidal shell of Schott FK3 glass. The faces of the pyramid are angled at 45 degrees with respect to the optical axis 24, the base of the pyramid (LRO 32) is square, and the apex 32A of the pyramid lies along the optical axis 24 pointed towards the lens column 30. To work with existing camera hardware, the diagonal of the pyramid's base may be designed to be shorter than the diameter of the largest lens in the column 30. The design of the LRO 32 is advantageous in that it is lightweight, modular, intuitive to design, and does not complicate ray tracing through the imaging system 102. However, the size, position, and geometry of the LRO 32 must be carefully chosen since each parameter directly impacts image quality.
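The 90-degree fold produced by a 45-degree planar face can be checked with a short vector-reflection sketch. This is illustrative only and is not part of the disclosure; the coordinate convention and function names are assumptions.

```python
# Sketch (illustrative only, not from the disclosure): vector reflection
# off one 45-degree planar face of a pyramidal LRO, confirming that a ray
# travelling along the optical axis is folded 90 degrees toward a
# circumferential sensor.
import numpy as np

def reflect(d, n):
    """Reflect direction d off a plane with unit normal n: r = d - 2(d.n)n."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# Optical axis along +z; one face is tilted 45 degrees, so its normal lies
# halfway between -z (toward the lens column) and +x (toward one sensor).
face_normal = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)

axial_ray = np.array([0.0, 0.0, 1.0])   # ray leaving the lens column
reflected = reflect(axial_ray, face_normal)
print(reflected)                        # -> [1. 0. 0.]: folded 90 degrees
```

A slightly converging off-axis ray passed through the same function is folded toward the same sensor while preserving its convergence, which is consistent with the planar-face requirement discussed above.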
Changing the slope of the LRO surfaces 33A and 33B, for instance, induces a tilt on the plane aligning the focal points of the optical rays. For this reason, decreasing the depth of the pyramidal shell along the optical axis 24 while keeping its width and height the same results in a set of blurry, stretched images 28A and 28B on the circumferential detectors 40A and 40B. This change also results in a lateral shift of the images due to the change in the angle of incidence between the rays leaving the lens column 30 and the LRO surfaces 33A and 33B. Finally, the forward-facing surfaces 33A and 33B of the LRO 32 must be kept flat; while conical and other non-planar surfaces may allow light to converge along one axis, they also cause light to reflect divergently along the other axis, again leading to blurry images.
In embodiments, the surfaces 33A and 33B on the front (lens-facing) side of the LRO 32 may be coated with a broadband 50/50 reflective coating to equally divide light from the lens column into a reflected (28A and 28B) and transmitted (28C) image. Similarly, the back side (33C and 33D) of the LRO 32 may be coated with a broadband anti-reflection (AR) coating to reduce internal reflections. Since light rays exiting the lens column 30 strike the LRO 32 at different angles, both coatings must be insensitive to angle of incidence (AoI) and wavelength. Fortunately, optical coatings that satisfy these requirements are well known and are commercially available. For example, the optical coating used on Thorlabs' BSW16 50:50 Plate Beamsplitter provides 50% transmission at 45-degree AoI across the visible regime and exhibits less than 10% variation in transmittance at AoI values as large as 30 degrees from the surface normal.
A first planar side 35A of the LRO 34 is configured to reflect the light transmitted from the imaging lens column 30 to a first image sensor 40A. A second planar side 35B of the LRO 34 is configured to reflect the light transmitted from the imaging lens column 30 to a second image sensor 40B. A third planar side 35C of the LRO 34 is configured to reflect the light transmitted from the imaging lens column 30 to a third image sensor (not labeled in
A first planar side 37A of the LRO 36 is configured to reflect the light transmitted from the imaging lens column 30 to a first image sensor 40A. A second planar side 37B of the LRO 36 is configured to reflect the light transmitted from the imaging lens column 30 to a second image sensor 40B. A third planar side 37C of the LRO 36 is configured to reflect the light transmitted from the imaging lens column 30 to a third image sensor (not labeled in
Unsurprisingly, the shape of the LRO 36 greatly influences the size and shape of the reflected images 28A, 28B, 28C (not labeled in
In embodiments, each image sensor measures or images a different property of the light 22 from the scene 20 from the single viewpoint.
Also in embodiments, the light 22 entering the imaging lens column 30 is uncollimated and the imaging lens column 30 is configured to receive the uncollimated light 22 and direct the uncollimated light 22 onto and through the LRO 32, 34, or 36.
Also in embodiments, the planar sides (e.g., planar sides 33A, 33B, or 35A, 35B, 35C, or 37A, 37B, 37C, 37D) of LROs 32, 34, or 36, are angled 45 degrees with respect to the optical axis 24 and are coated with a reflective coating (not illustrated) configured to divide the light 22 transmitted from the imaging lens column 30 into reflected images (e.g., images 28A, 28B, 28C (not illustrated), and 28D (not illustrated) from system 104) and a transmitted image 28C (from system 102), 28D (from system 103), or 28E (from system 104).
Similarly, the planar sides (e.g., planar sides 33A, 33B, or 35A, 35B, 35C, or 37A, 37B, 37C, 37D) of LROs 32, 34, or 36, are angled 45 degrees with respect to the optical axis and are coated with a broadband 66% reflective coating (not shown) configured to equally divide the light transmitted from the imaging lens column 30 into reflected (e.g., images 28A, 28B, 28C (not illustrated), and 28D (not illustrated) from system 104) and transmitted images 28C (from system 102), 28D (from system 103) or 28E (from system 104).
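One reading of why a roughly 66% reflective coating "equally divides" the light for a two-faced LRO can be sketched numerically: each of the N forward faces intercepts 1/N of the light and reflects fraction r of it to one circumferential image, while all the transmitted portions combine into the single on-axis image. The model below is an assumption for illustration, not a statement from the disclosure.

```python
# Sketch (assumption, not from the disclosure): per-image energy fractions
# for an N-faced LRO whose forward faces have reflectance r. Each face
# intercepts 1/N of the total light; its reflection forms one
# circumferential image, and the transmitted portions from all faces
# combine into the on-axis image.

def image_fractions(n_faces, r):
    reflected_each = r / n_faces   # energy in one circumferential image
    transmitted = 1.0 - r          # energy in the combined on-axis image
    return reflected_each, transmitted

def equalizing_reflectance(n_faces):
    # reflected_each == transmitted  =>  r/N == 1 - r  =>  r = N/(N + 1)
    return n_faces / (n_faces + 1.0)

# Two-faced LRO (system 102): r = 2/3 (~66%) yields three equal images.
print(equalizing_reflectance(2))   # -> 0.666...
print(image_fractions(2, 2 / 3))   # -> (0.333..., 0.333...)

# A 50/50 coating on the same LRO instead gives each circumferential
# image half the energy of the on-axis image.
print(image_fractions(2, 0.5))     # -> (0.25, 0.5)
```

Under this model, a four-faced LRO would need roughly 80% reflectance for five equal-energy images; the 50/50 coating described earlier instead equalizes the reflected and transmitted portions at each individual face.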
The CFA 54 is modeled as an optically transparent substrate whose surfaces are coated with spectral, polarimetric, or neutral density filters (
In doing so, the CFA 54 can be customized for the application at hand or exchanged for a different filter array suited for the same adapter. For multispectral applications, the generic filters described above could be extracted and replaced with another CFA containing spectral, plasmonic, or polarimetric filter geometries.
In this respect, the CFA 54 is similar to the polarized-type divided aperture color-coding (P-DACC) unit used in the snapshot multispectral imager (SMI) system: both are modular, both are designed to be swapped without changing other camera components, and both are meant to provide spectral and polarimetric data about a scene.
One of the key differences between the two approaches is the position of the filter. In the SMI system, the P-DACC unit is placed as an aperture stop within the lens column. Not only does this limit the spatial footprint available for designing and placing filters, but it also limits the number of spectral bands that can be imaged by the color polarization image (CPI) sensor. In this configuration, only nine spectral bands are available for subsequent image analysis. A similar approach could be used with the uniaxial geometry by placing the color polarization filters on the CFA 54 and using CPI sensors for the circumferential detectors.
In turn, the LRO 36 (in
Referring back to
To simplify the explanation of the images illustrated in
Similarly, each reflected image (7C, 7D, 7E, and 7F) can be intuitively validated by examining the ray trace in
Validation of the right and left images also follows similar reasoning. Rays from the right side of the input image (e.g., scene 20) propagate through the lens column 30 and are reflected towards the left-most sensor 40C by the widest portion of the LRO's left surface 37C. During the reflection, the image retains its upside-down orientation, but is flipped again laterally before striking the left-most sensor 40C. Therefore, the left-most image (shown in
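The orientation bookkeeping above can be mimicked with simple array flips. This is an illustrative sketch, not part of the disclosure: a labeled 2x2 pattern stands in for the scene, the lens inversion is modeled as a 180-degree rotation, and the LRO reflection is modeled as one additional lateral mirror.

```python
# Sketch (illustrative only): tracking image orientation through the lens
# inversion and one LRO reflection using array flips. Values label the
# corners of a stand-in scene.
import numpy as np

scene = np.array([[1, 2],    # 1 = top-left,    2 = top-right
                  [3, 4]])   # 3 = bottom-left, 4 = bottom-right

# The lens column forms an inverted image: flipped both vertically and
# horizontally (a 180-degree rotation).
after_lens = np.flipud(np.fliplr(scene))

# Reflection off one planar LRO face preserves the upside-down
# orientation but mirrors the image laterally once more.
at_side_sensor = np.fliplr(after_lens)

print(after_lens)       # -> [[4 3] [2 1]]: rotated 180 degrees
print(at_side_sensor)   # -> [[3 4] [1 2]]: upside down, left-right restored
```

The net result, an image that is vertically inverted but laterally correct, matches the intuition given for the circumferential sensors.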
A uniaxial imaging system described in this work was numerically validated in Zemax OpticStudio using commercially available materials and lenses. Lens parameters are the same as those defined in OpticStudio's “Double Gauss Experimental Arrangement” example. Although each detector within the simulation is modeled as an identical color sensor utilizing a Bayer pixel pattern, this does not have to be the case since multispectral image analysis can be performed using highly scattering filters and monochrome sensors.
One notable difference between simulations and a physical system is the requirement of the software to trace a chief ray from the input image or scene 20 to the detectors (e.g., detectors 40A-40E in
Nonetheless, embodiments of the proposed imaging system provide many advantages over existing multispectral cameras. First and foremost, the addition of an LRO and CFA to the base imaging system only increases its functionality and capability. Since both components are designed to be modular and removable, they can be taken out of the optical assembly and the original functionality is restored. Additionally, an LRO splits the incident image or scene 20 in a way that both enables each (reflected) sub-image to be filtered and processed independently, and keeps the original (transmitted) image unfiltered to be used as a reference during post-processing—capabilities that do not exist in systems that rely on splitting the scene's spectral and polarimetric content.
Similarly, a CFA introduces a fresh approach to filtering data within the camera system. The high degree of customization offered by a CFA is based on its circumferential design; not only can the filters be chosen and arranged based on the application, but the filters themselves can be tailored to fit the incident light source and detector geometries. Furthermore, the ability to independently apply custom filters to different portions of the image greatly extends the versatility of the imaging system. Together, an LRO and CFA offer an intuitive, updated method to split and analyze multiple aspects of the input scene.
Implementing the LRO and CFA within an existing imaging system, though, is not without cost, since the hardware and software infrastructure of the camera will need to be modified to accommodate the new components. In addition to the hardware needed to mount the LRO and CFA within a camera housing, four additional image sensors are needed to capture the reflected images, placing a heavier burden on the power and weight of the camera. In turn, each of these changes must be supported by the camera's software. Each of the five sensors, for example, may require its own ISO rating, shutter speed, and aperture setting, so a fixed aperture shared across five sensors may not be ideal. Additionally, each sensor would require pre-processing (demosaicing) to convert its discrete pixel values into a coherent image. Once converted, the raw image data needs to be converted into a useful image format (e.g., PNG or JPEG) before being stitched together to form a single reflected image. Furthermore, the information gathered by the additional sensors may not justify their cost. Since each circumferential sensor only receives a partial reflection of the incident field, each of these sensors is drastically underfilled. The additional hardware, software, and development of fabrication techniques for the LRO and CFA filters are expected to greatly increase the cost of the camera.
The method further comprises providing 230 a circumferential filter array concentrically located around the LRO. The filter array is configured to filter the light reflected from or transmitted through the LRO.
Also, the method further comprises providing 240 multiple image sensors.
Finally, the method comprises capturing 250 an image of the scene from each of the multiple image sensors.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. All changes which come within the meaning and range of equivalency of the foregoing description are to be embraced within the scope of the invention.
It will be appreciated that several of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art and are also intended to be encompassed by the following claims.
This application is a continuation-in-part of U.S. Non-provisional application Ser. No. 17/540,327, filed on Dec. 2, 2021, and entitled “Uniaxial Optical Multi-Measurement Sensor,” which is incorporated by this reference in its entirety. This application is also a continuation-in-part of U.S. Non-provisional application Ser. No. 17/954,446, filed on Sep. 28, 2022, and entitled “Aperture Stop Exploitation Camera,” which is incorporated by this reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | 17954446 | Sep 2022 | US
Child | 17540327 | | US

Relation | Number | Date | Country
---|---|---|---
Parent | 17540327 | Dec 2021 | US
Child | 17974094 | | US