This disclosure relates generally to optics and digital imaging and, more particularly, to large-pixel-count imaging systems including monocentric multiscale cameras having an enhanced field of view.
As will be readily appreciated by those skilled in the art, digital imaging systems, methods, and structures are employed in an ever-increasing number of applications and have become integral to virtually every industry, capturing, creating, storing, analyzing, and disseminating images.
Given this importance, improved systems, methods, and structures for digital imaging—and, in particular, systems, methods, and structures that facilitate large-pixel-count (i.e., gigapixel) imaging—would represent a welcome addition to the art.
An advance is made in the art according to aspects of the present disclosure directed to systems, methods, and structures for monocentric multiscale imaging systems and cameras having an enhanced field of view as compared with the prior art.
In sharp contrast to the prior art, arrangements of monocentric multiscale imaging systems and cameras according to the present disclosure provide—illustratively—a 360° ring-FoV MMS lens that advantageously captures an approximately 500-megapixel image from a circular ring area. Additionally, by varying microcamera imaging channel configurations, we disclose a multi-focal design whose focal lengths advantageously range from 15 mm to 40 mm, providing coverage of a scene with widely different imaging magnifications. Finally, additional illustrative configurations combine multiple MMS systems such that an arbitrary solid angle in 4π space is covered.
This SUMMARY is provided to briefly identify some aspect(s) of the present disclosure that are further described below in the DESCRIPTION. This SUMMARY is not intended to identify key or essential features of the present disclosure nor is it intended to limit the scope of any claims.
The term “aspect” is to be read as “at least one aspect”. The aspects described above and other aspects of the present disclosure are illustrated by way of example(s) and not by way of limitation in the accompanying drawing.
A more complete understanding of the present disclosure may be realized by reference to the accompanying drawing in which:
The following merely illustrates the principles of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope. More particularly, while numerous specific details are set forth, it is understood that embodiments of the disclosure may be practiced without these specific details and in other instances, well-known circuits, structures and techniques have not been shown in order not to obscure the understanding of this disclosure.
Furthermore, all examples and conditional language recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently-known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the diagrams herein represent conceptual views of illustrative structures embodying the principles of the disclosure.
In addition, it will be appreciated by those skilled in the art that certain methods according to the present disclosure may represent various processes which may be substantially represented in a computer-readable medium and so controlled and/or executed by a computer or processor, whether or not such computer or processor is explicitly shown.
In the claims hereof any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements which performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. Applicant thus regards any means which can provide those functionalities as equivalent to those shown herein. Finally, and unless otherwise explicitly specified herein, the drawings are not drawn to scale.
By way of some additional background, we begin by noting that the demand for gigapixel-scale cameras and imaging systems has been steadily increasing given their recognized utility in a variety of applications including broadcast media, imaging, virtual reality, flight control, transportation management, security, and environmental monitoring, among others. Notwithstanding this considerable demand, utilization of such gigapixel systems has been tempered due—in part—to their cost and system complexity, coupled with the recognized computational and communications challenges of gigapixel image management.
Given these infirmities, the art has directed considerable enthusiasm towards Monocentric Multiscale (MMS) imaging systems and cameras that may advantageously reduce the cost and complexity of gigapixel imaging systems due to several design and technology breakthroughs. Notably, and as will be readily appreciated by those skilled in the art, MMS imaging systems and cameras advantageously achieve both high angular resolution and a wide field of view (FOV) in gigapixel-scale systems. In contrast with gigapixel astronomical telescopes and lithographic lenses, MMS imaging systems and cameras according to the present disclosure may advantageously be manufactured and assembled using commercially available, off-the-shelf components and methods, while the former can only be realized in precisely controlled laboratory environments with purposely developed tools and materials.
We note that the architecture of an illustrative MMS system generally resembles that of a telescope. More particularly, one layered monocentric spherical objective lens is shared by several microcameras, wherein each microcamera covers a portion of an overall FOV—denoted as the microcamera FOV (MFOV). We note further that refractive telescopes may be classified into Keplerian systems having an internal image surface and Galilean systems having secondary optics positioned before an objective focal surface. Yet while MMS systems may be designed according to either of these two classifications, and although Galilean systems achieve a smaller physical size, prior-art MMS imaging systems and cameras have all adopted Keplerian designs because such architectures more readily accommodate overlap between adjacent microcamera FOVs and because they are easier to construct.
As will be understood and appreciated by those skilled in the art, field of view (FoV) and instantaneous field of view (iFoV) are two basic measures of camera performance. Conventionally, FoV describes the angular extent of a cone around the optical axis observed by a camera. As is known, fisheye lenses have long been used to achieve wide-field-of-view imaging. For example, the Ricoh Theta and the Samsung Gear 360 capture 360°×180° images. Despite such impressive FoV, distortion and aberration in fisheye systems severely limit iFoV. For at least this reason, systems that capture a wide field of view by computationally stitching images obtained using temporal scanning or camera arrays have become increasingly popular. Note that higher-resolution full-solid-angle imaging has been implemented in camera arrays such as the Facebook Surround 360.
Several platforms have recently been developed that generalize 360° camera design to include more diverse parallel camera architectures. While a camera array can be designed to cover any FoV and iFoV, the cost of such systems increases nonlinearly as iFoV decreases. One fundamental issue is that, as iFoV decreases, the entrance aperture must increase. Those skilled in the art will readily understand that lens cost increases nonlinearly with entrance aperture size, and such cost is raised still further if the FoV per microlens decreases as required by conventional scaling—since the number of microlenses required to fill a given field of view must then also increase as iFoV decreases.
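To make this scaling argument concrete, the following minimal sketch counts the microcameras needed to tile a fixed field of view as iFoV shrinks, assuming a fixed pixel count across each microcamera's sensor and simple small-angle relations; the pixel count, total FoV, and iFoV values are hypothetical, and the sketch is a tiling estimate rather than a cost model.

```python
import math

def tiling_count(total_fov_deg, ifov_urad, pixels_across=4000):
    """Rough number of microcameras needed to tile a square total FoV.

    Assumes each microcamera spans roughly pixels_across * iFoV of field
    and that cameras tile the total FoV in two dimensions.
    """
    cam_fov_deg = math.degrees(pixels_across * ifov_urad * 1e-6)
    return cam_fov_deg, math.ceil(total_fov_deg / cam_fov_deg) ** 2

for ifov in (50.0, 25.0, 12.5):                 # iFoV in microradians (hypothetical)
    cam_fov, n = tiling_count(120.0, ifov)
    print(f"iFoV {ifov:4.1f} urad -> per-camera FoV {cam_fov:5.2f} deg, ~{n} cameras")
```

Halving iFoV roughly quadruples the camera count while also demanding a longer focal length per camera (and, at fixed F/#, a larger entrance aperture), which is the nonlinear cost growth noted above.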
In sharp contrast, multiscale designs, in which a parallel array of microcameras shares a common objective lens, have been shown to allow wide FoV over a wide range of aperture scales. Monocentric multiscale (MMS) designs using a spherical objective lens and microcameras mounted on a spherical shell have been particularly effective in this regard.
While previous work by the instant applicants has primarily focused on multiscale design for lens systems with a conventional, cone-shaped FoV, this disclosure describes novel MMS designs suitable for wide-angle applications commonly associated with fish-eye lenses and ring-shaped camera arrays.
At this point we note that focus control presents a considerable challenge for conventional wide-field imaging systems. More particularly, even if a fish-eye lens can reasonably focus on a scene in one configuration, the design of such lenses for focal accommodation is extremely challenging. And while we do not explicitly discuss focal accommodation in this disclosure, we note that the ability to independently and locally control the focus state in each microcamera of an MMS system is a particular advantage of such MMS systems. Notwithstanding, we recognize the significance of work relating to focus control strategies and note that these strategies can be implemented in the systems disclosed herein. We now show how the spherical geometry of MMS systems allows a variety of novel field-of-view alignments according to aspects of the present disclosure.
By way of illustrative example, we note that security cameras are oftentimes installed on a ceiling or pole overlooking a target field of view. Complementing such installations, contemporary systems oftentimes incorporate mechanical pan-tilt-zoom components that allow a camera to scan wide angle fields with high resolution.
Alternatively, such security camera systems may combine a wide-angle spotting camera with a long-focal-length, narrow-field slew camera. In operation, when an event of interest is registered by the wide-angle camera, the slew camera is directed toward that event to capture high-resolution details.
As those skilled in the art will readily appreciate, however, there are at least three disadvantages to such camera system configurations. First, only one region of the full field is captured in high resolution. Second, the response time and mechanical motion speed may be limited and unable to keep up with rapidly changing events—especially when several events take place simultaneously. Third, mechanical components may render the entire system unreliable and afflict it with high maintenance costs.
In sharp contrast, MMS systems achieve a wide field and high resolution in real time. With such an architecture, parallel small-aperture optics outperform the traditional single-aperture lens, thereby providing significantly improved information efficiency. In addition, the MMS' shared objective lens results in a more compact layout as compared with that of multi-camera clusters. Since MMS imaging sensors are tessellated over a spherical surface—as long as there is no notable inter-occlusion occurring—a target at any spatial angle can be imaged by an appropriately configured microcamera.
Those skilled in the art will know and appreciate that there are numerous ways of arranging a microcamera array and, hence, of configuring the FoV. This flexibility offers great opportunities for different FoV configurations and other camera setups for various application scenarios. In addition to the arrangement flexibility of one MMS lens, more configurations can be realized by using multiple MMS lenses in combination.
As an illustrative example, we have previously described a design for compact wide field imaging using three MMS systems. In this disclosure, we now disclose and describe enhancements to our designs to include compact 360 ring cameras, multifocal length/extended depth of field systems and full sphere imaging systems. We note however, that the illustrative designs presented herein are intended as simple illustrations of the potential of systems constructed according to the present disclosure while—in practice—systems can be constructed which cover an arbitrary field of view and depth of field.
As will be readily understood by those skilled in the art, conventional cameras capture images in a rectangular format due to the format of film, electronic sensors, and display devices. In most cases, the captured FoV is slated to be fully streamed for rendering, i.e., the FoV shape/format of any captured image data is determined by the rendering convention employed.
As technologies in optics, electronics, and computation advance, however, this close coupling between image capture and rendering needs to be relaxed to further exploit image information and enable the creation of novel functionalities. We note that an arbitrary image format is realizable by synthesizing image frames derived from multiple focal planes. Additionally, as new image and video rendering technologies are explored and subsequently developed, alternative image navigation methods will become readily available for the development of new ways of rendering.
As will be further understood and appreciated by those skilled in the art, an MMS lens expands its FoV by adding small secondary optics. The resulting ultra-high information capacity advantageously allows for a myriad of FoV configuration options and image resolution formats, which provides widespread applicability to new and different application scenarios.
As will be known and appreciated by those skilled in the art—with systems including an MMS architecture, the microcameras are packed or otherwise positioned on a spherical surface. Accordingly, the extent and format of the captured FoV are determined by the manner of packing the microcameras. And while such positioning/packing is a relatively trivial task in the case of a 2D plane, close-packing on a spherical surface can be much more challenging.
We note that depending on the extent of a targeted packing region, either a local packing or a global packing strategy is preferable. A local packing strategy is preferred if the packing region comprises only a small fraction of the whole sphere onto which the microcameras are to be positioned.
Turning our attention now to
As a matter of experience, a chord ratio value less than 0.17 produces small perturbation and uniform packing density, which leads to high image quality and reduced lens complexity. Observing this rule of thumb, a hexagonal packing strategy can only achieve a maximum latitudinal angle span of 60°.
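As one hedged illustration of a local packing strategy, the sketch below lays a hexagonal grid of microcamera pointing directions on the tangent plane at the center of the patch and projects it back onto the unit sphere. The grid pitch and span are hypothetical, and the disclosure's actual chord-ratio computation and perturbation handling are not reproduced here.

```python
import numpy as np

def local_hex_packing(span_deg, pitch_deg):
    """Hexagonal grid of pointing directions covering a small spherical cap.

    The grid is laid out on the tangent plane at the +z pole and projected
    gnomonically back to unit vectors; adequate only while the cap covers a
    small fraction of the sphere, as noted above for local packing.
    """
    half_extent = np.tan(np.radians(span_deg / 2.0))     # tangent-plane radius
    pitch = np.tan(np.radians(pitch_deg))                # tangent-plane spacing
    n = int(np.ceil(half_extent / pitch)) + 1
    dirs = []
    for row in range(-n, n + 1):
        y = row * pitch * np.sqrt(3.0) / 2.0             # hex row spacing
        x_offset = (row % 2) * pitch / 2.0               # stagger alternate rows
        for col in range(-n, n + 1):
            x = col * pitch + x_offset
            if np.hypot(x, y) <= half_extent:
                v = np.array([x, y, 1.0])
                dirs.append(v / np.linalg.norm(v))       # back onto the sphere
    return np.array(dirs)

centers = local_hex_packing(span_deg=60.0, pitch_deg=10.0)  # hypothetical parameters
print(len(centers), "candidate microcamera directions")
```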
We note that additional previous work has implemented a packing strategy based on a distorted icosahedral geodesic. By iteratively subdividing a regular icosahedron that is projected onto a sphere, this strategy produces an approximately uniformly distributed grid of circles on the whole globe.
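The following sketch illustrates the undistorted core of that idea: each face of a regular icosahedron is sampled on a triangular grid and the samples are projected onto the unit sphere, producing 10ν²+2 nearly uniform slots for subdivision frequency ν. The additional distortion step described in the prior work is not modeled here.

```python
import numpy as np

def icosahedron():
    """Vertices and faces of a regular icosahedron inscribed in the unit sphere."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    v = np.array([[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
                  [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
                  [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]], float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return v, f

def geodesic_grid(frequency):
    """Nearly uniform grid of unit vectors from an icosahedral geodesic subdivision."""
    verts, faces = icosahedron()
    pts = []
    for a, b, c in faces:
        for i in range(frequency + 1):
            for j in range(frequency + 1 - i):
                k = frequency - i - j
                p = (i * verts[a] + j * verts[b] + k * verts[c]) / frequency
                pts.append(p / np.linalg.norm(p))        # project onto the sphere
    # Remove duplicates along shared edges and at shared vertices.
    return np.unique(np.round(np.array(pts), 9), axis=0)

grid = geodesic_grid(frequency=8)
print(grid.shape[0], "slots")    # 10 * 8**2 + 2 = 642
```

The resulting slot directions cover the whole sphere; a particular camera design then keeps only the subset of slots falling in its target field, as in the ring example described below.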
At this point we may now estimate the maximum angle cFoV within which a light path stays obscuration-free. We note that an MMS lens of the Galilean style exhibits the following parameters: the focal length of the spherical objective lens is f_o, the radius of the objective is R, the distance between the stop and the center of the objective is d_os, the distance between the entrance pupil and the center of the objective is l_ε, and the half-FoV angle of each sub-imager is α.
As depicted in
The clear semi-diameter of the objective can be approximated as:
where f is the overall effective focal length.
The free angle cFoV can be determined from the following relationship:
Assuming an illustrative design wherein the chief parameters are: f=20 mm, F/#=2.5, f_o=47.06 mm, R=21.11 mm, d_os=27.49 mm, and α=5.7°, and substituting these parameters into EQN.(2) and EQN.(3), we obtain a clear FoV angle of cFoV=77.35°.
We note—and as will be readily appreciated by those skilled in the art—that numerous contemporary applications benefit from cameras with a ring-shaped field of view—such as security applications in parks, squares, traffic circles, and entry/exit ways, where the top and bottom views do not contain content of concern. 360° ring-FoV cameras have been developed to increase situational awareness in surveillance and navigation applications and in Virtual Reality (VR) and Augmented Reality (AR) for stereo effect [19]. 360° photography is also called panoramic imaging. A common way of performing panoramic imaging is by tiling multiple cameras in a circle. As mentioned previously, this method usually ends up with bulky and costly hardware. The rest of this section shows how the MMS architecture deals with this subject with superior dexterity.
As illustrated in
A square image sensor chip would be ideal for MMS lens design for its advantage in producing a mosaic. The effective focal length f is chosen to be 20 mm, which is adequate for the required angular resolution. The aperture size is F/#=2.5 and the FoV of each sub-imager (microcamera) is 11.4°.
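As a sanity check on these parameters, the usual paraxial relations give the implied sensor side length and entrance-pupil diameter. The short sketch below ignores distortion and the exact image-circle geometry, so the numbers are approximate.

```python
import math

f_mm    = 20.0    # effective focal length (from the disclosure)
f_num   = 2.5     # F/# (from the disclosure)
fov_deg = 11.4    # per-sub-imager FoV (from the disclosure)

sensor_side_mm = 2.0 * f_mm * math.tan(math.radians(fov_deg / 2.0))  # square sensor side
pupil_diam_mm  = f_mm / f_num                                        # entrance pupil

print(f"square sensor side ~ {sensor_side_mm:.2f} mm")   # roughly 4 mm
print(f"entrance pupil     ~ {pupil_diam_mm:.2f} mm")    # 8 mm
```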
Since the monitored area covers an extensive portion of the hemisphere, a local packing method would lead to inferior packing uniformity. Here we configure our MMS lens by choosing a group of circle slots resulting from the distorted icosahedral geodesic method shown previously in
With reference to these figures, we note that another optical design now disclosed includes 165 microcameras covering a polar angle from 43° to 76°. The covered FoV is not exactly equal to that required, since the FoV is added discretely in steps of 11.4° per channel.
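Reusing the geodesic_grid helper from the earlier sketch, selecting such a ring configuration amounts to keeping only the slots whose polar angle lies in the required band. The slot count depends on the subdivision frequency and on the distortion step that is not modeled, so this sketch is not expected to reproduce the 165-channel figure exactly.

```python
import numpy as np

def ring_slots(grid, polar_min_deg, polar_max_deg):
    """Keep unit vectors whose polar angle (measured from +z) lies in the band."""
    polar_deg = np.degrees(np.arccos(np.clip(grid[:, 2], -1.0, 1.0)))
    keep = (polar_deg >= polar_min_deg) & (polar_deg <= polar_max_deg)
    return grid[keep]

# geodesic_grid() is the helper defined in the earlier icosahedral-geodesic sketch.
ring = ring_slots(geodesic_grid(frequency=8), 43.0, 76.0)
print(len(ring), "candidate slots in the 43-76 degree band")
```

Since the band occupies (cos 43° − cos 76°)/2 ≈ 0.24 of the sphere, a frequency-8 grid of 642 slots places on the order of 150 slots in the band, the same order as the 165-channel design.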
As will be understood by those skilled in the art, for a single focal length camera, magnification varies for objects at different ranges. The further the object from the camera, the smaller the magnification. As will be appreciated, this property may cause difficulty in recognition of objects dispersed over a deep depth of field.
One solution to this problem is to employ a zoom lens. Such a zoom lens adjusts (zooms) to a long focal length for distant objects and to a short focal length for close objects. An alternative solution employs a camera cluster that includes multiple cameras exhibiting different focal lengths, wherein cameras exhibiting a long focal length are employed for distant objects and cameras exhibiting a short(er) focal length for close(r) objects.
As compared with these two techniques, an MMS lens architecture provides a more compact, more modular, and less expensive way of conducting multi-focal imaging. In an MMS lens architecture, the overall effective focal length of any individual channel can be varied by changing the design of its secondary optics. By applying different secondary optics, we can advantageously integrate multiple focal lengths within a single optical system.
Those skilled in the art will understand and appreciate that it is impossible to achieve uniform sampling with a finite segmentation of the field of view. However, we can attempt to obtain nominally uniform sampling with a multifocal imager having equal magnification at the axial field point of each channel. In this illustrative example, we may install the camera 10 m above the ground with each channel covering a 10° area and 1.5° of overlap between adjacent channels. With 7 channels, the total range of the surveillance is about 110 m. The focal lengths of each channel and the respective object distances of the central field of view are tabulated in Table 1.
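The principle behind the tabulated focal lengths is that the magnification at each channel's central field point is approximately f_i/d_i, so equal magnification requires the focal length to scale with the central object distance. The sketch below illustrates that scaling for a camera 10 m above the ground; the depression angle assumed for the first channel is hypothetical, so the resulting values illustrate the trend rather than reproduce Table 1.

```python
import math

H_M          = 10.0   # camera height above the ground, m (from the disclosure)
CH_FOV_DEG   = 10.0   # field covered per channel, deg (from the disclosure)
OVERLAP_DEG  = 1.5    # overlap between adjacent channels, deg (from the disclosure)
N_CHANNELS   = 7
F_BASE_MM    = 15.0   # focal length of the nearest-looking channel, mm
FIRST_DEPRESSION_DEG = 62.0   # assumed depression angle of channel 1's center (hypothetical)

step_deg = CH_FOV_DEG - OVERLAP_DEG            # angular step between channel centers
d_first = H_M / math.sin(math.radians(FIRST_DEPRESSION_DEG))
for i in range(N_CHANNELS):
    depression = FIRST_DEPRESSION_DEG - i * step_deg
    d = H_M / math.sin(math.radians(depression))   # slant range to central field point
    f = F_BASE_MM * d / d_first                    # equal magnification: f scales with d
    print(f"channel {i + 1}: central object distance {d:6.1f} m, focal length {f:5.1f} mm")
```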
With continued reference to this figure—and as shown in Table 1—a uniform sampling rate over the entire street requires a focal range from 15 mm to 59.39 mm, i.e., roughly a 4× zoom capacity. Unfortunately, however, our illustrative MMS design only achieves a focal range from 15 mm to 40 mm. As a result, the 7th channel cannot satisfy the quasi-uniform condition. Notwithstanding, a varying sensor pitch can be employed to compensate for this.
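One way to read that compensation: the angular sampling of a channel is approximately pixel_pitch/f, so a channel whose focal length is capped below the required value can recover the same sampling with a proportionally finer pixel pitch. In the sketch below the baseline pixel pitch is hypothetical; only the 59.39 mm and 40 mm values come from the text.

```python
f_required_mm  = 59.39   # focal length Table 1 asks of the farthest channel
f_realized_mm  = 40.0    # longest focal length the illustrative MMS design achieves
base_pitch_um  = 1.6     # hypothetical pixel pitch used by the other channels

# iFoV ~ pitch / f, so keeping iFoV fixed while shortening f means shrinking the pitch.
compensated_pitch_um = base_pitch_um * f_realized_mm / f_required_mm
print(f"channel 7 pixel pitch ~ {compensated_pitch_um:.2f} um")   # about 1.08 um
```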
As discussed previously, light-path obscuration prevents arbitrary FoV configuration for a single MMS camera. Nonetheless, this limitation can be surmounted by the combined use of multiple MMS cameras. One such example that has been demonstrated is where multiple MMS lenses are co-boresighted to interleave continuous coverage of a wide FoV. Here we disclose yet another example.
With respect to the 360° ring FoV lens described previously, we note that the viewing angle ranges from 4° to 76°. However, light occlusion occurs when the covering area approaches the equator as shown illustratively in
As illustrated in
Another configuration according to aspects of the present disclosure provides a spherical camera where free spaces are reserved between adjacent optics and sensors for light to pass through. To achieve this field of view using a multiscale array, some microcamera positions are reserved as light passages. For continuous FoV coverage, we combine image patches captured by multiple MMS cameras together.
Turning our attention now to
Finally, we present a final illustrative example: an omnidirectional camera that sees in all directions with uniform angular resolution. Previously in this disclosure, we estimated that the largest obscuration-free angle for an MMS lens is less than 80°, which implies that a minimum of 4 MMS lenses is required for full 4π spherical FoV coverage. Each of the four cameras is positioned at one of the vertices of a regular tetrahedron and covers a solid angle slightly more than π steradians. The extra coverage is for overlapping.
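The tetrahedral geometry can be checked numerically with the short Monte Carlo sketch below, which is a check of the geometry only, not of the optical design: it confirms that every direction on the sphere lies within roughly 70.5° of the nearest of four tetrahedrally arranged boresights, and that each camera's share of the sphere before overlap is about π steradians.

```python
import numpy as np

def tetrahedron_vertices():
    """Unit vectors toward the four vertices of a regular tetrahedron."""
    v = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
    return v / np.sqrt(3.0)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(500_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform directions on the sphere

cosines = dirs @ tetrahedron_vertices().T             # alignment with each boresight
worst_off_axis_deg = np.degrees(np.arccos(cosines.max(axis=1).min()))
share_sr = np.bincount(cosines.argmax(axis=1)) / len(dirs) * 4.0 * np.pi

print(f"worst-case angle to the nearest boresight ~ {worst_off_axis_deg:.1f} deg")  # ~70.5
print("solid angle handled per camera (sr):", np.round(share_sr, 2))                # ~3.14 each
```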
As illustratively shown in
To provide a quantitative perspective on all of the design instances disclosed herein, we provide Table 2, which describes the field-of-view configuration, angular resolution, information capacity, and physical size of each instance. This table helps verify the effectiveness of the MMS lens architecture in building high-pixel-count cameras with versatile field-of-view configurations and a compact form factor.
At this point, those skilled in the art will readily appreciate that while the methods, techniques, and structures according to the present disclosure have been described with respect to particular implementations and/or embodiments, the disclosure is not so limited. Accordingly, the scope of the disclosure should only be limited by the claims appended hereto.
This application claims the benefit of U.S. Provisional Application Ser. No. 62/631,170, filed 15 Feb. 2018, the entire contents of which are incorporated by reference as if set forth at length herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2019/018199 | 2/15/2019 | WO | 00
Number | Date | Country
---|---|---
62631170 | Feb 2018 | US