Point of view aberrations correction in a scanning folded camera

Information

  • Patent Grant
  • Patent Number
    11,910,089
  • Date Filed
    Tuesday, July 13, 2021
  • Date Issued
    Tuesday, February 20, 2024
Abstract
Systems and methods for correcting point of view (POV) aberrations in scanning folded cameras and in multi-cameras that include such scanning folded cameras. In a Tele folded camera that includes an optical path folding element (OPFE) and an image sensor, the OPFE is tilted in one or two directions to direct the Tele folded camera towards a POV of a scene, a Tele image or a stream of Tele images is captured from the POV, the Tele image having POV aberrations, and the POV aberrations are digitally corrected to obtain an aberration-corrected image or stream of images.
Description
FIELD

Embodiments disclosed herein relate in general to digital cameras and in particular to correction of images obtained with folded digital cameras.


BACKGROUND

Compact digital cameras having folded optics, also referred to as “folded cameras” or “folded camera modules”, are known; see e.g. co-owned international patent application PCT/IB2016/057366. FIG. 1A shows schematically a folded Tele camera disclosed therein and numbered 100 from a first perspective view. FIG. 1B shows camera 100 from a second perspective view. Camera 100 includes a lens 102 with a lens optical axis 110, an optical path folding element (OPFE) 104 and an image sensor 106. OPFE 104 folds a first optical path, which defines the point of view (POV) 108 of camera 100 and is substantially parallel to the X axis, from an object, scene or panoramic view section 114 into a second optical path along axis 110, which is substantially parallel to the Z axis. Image sensor 106 has a plane normal aligned with (parallel to) axis 110 and outputs an output image that may be processed by an image signal processor (ISP, not shown). In some embodiments, the ISP may be part of image sensor 106.


Camera 100 is designed to rotate OPFE 104 around axis 110 (the Z axis) relative to the image sensor, a rotation indicated by an arrow 112. That is, camera 100 is a “scanning” Tele camera (“STC”). OPFE 104 can rotate over an angle range set by optical requirements (see below), in some cases up to 180 degrees and in other cases up to 360 degrees. Camera 100 can scan a scene with its “native” Tele field of view (“N-FOVT”), so that it effectively covers a FOV of a scene which is larger than N-FOVT and which we call a scanning Tele FOV (“S-FOVT”). S-FOVT is the FOV that includes all scene segments that can be captured with the STC in a plurality of STC images. For scanning a scene in 2 dimensions, OPFE 104 must be rotated around two rotation axes. For example, N-FOVT=10-20 deg and S-FOVT=30-80 deg.



FIG. 1C shows OPFE 104 after rotation by 30 degrees and FIG. 1D shows OPFE 104 after rotation by 180 degrees from the zero position. The 30 and 180 degree rotated positions are exemplary of a range of many rotation positions.


Images are acquired from a certain point of view (POV) of the camera. The POV is the direction defined by the unit vector of the vector that has the location of the camera aperture as starting point and an object point at the center of N-FOVT (“C-N-FOVT”) as end point. As an example, in spherical coordinates (r, θ, φ) defined according to the ISO convention, the POV for a camera at r=0 is defined by (1, θ, φ), with the polar angle θ and azimuthal angle φ defining the location of the object point at C-N-FOVT. In FIGS. 1A and 1B, the OPFE is in a zero rotation position (“zero position”). With the OPFE in the zero position, an image acquired with the sensor (i.e. “produced” by camera 100) has no POV aberrations. In the spherical coordinates defined above, the zero rotation position is given by (1, 0, 0). When the POV changes, the image acquired by the sensor undergoes POV aberrations. Specifically, an image may be tilted (stretched into a trapezoid shape) and/or rotated and/or scaled, see e.g. FIGS. 2A and 2B.


There is a need for, and it would be advantageous to have, STC images without POV aberrations regardless of the POV.


SUMMARY

A method suggested herein uses a digital algorithm that takes the OPFE position into account to correct the POV aberrations and obtain an image without POV aberrations. After acquiring (capturing) an image and correcting it, it is suggested herein to crop a rectangular area from the corrected image and to display the cropped rectangular image on a screen or save it to a file. For each OPFE position, a pre-calculated geometric transformation (e.g. a homography transformation) is applied to the acquired image, resulting in a POV aberration-corrected image.
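By way of illustration only (this sketch is not part of the original disclosure), such a per-position correction may use a pre-calculated table of homographies; the table name, its contents and the position keys below are hypothetical, and OpenCV's warpPerspective stands in for the warp:

```python
# Illustrative sketch only: apply a pre-calculated homography for the
# current OPFE position. All numeric values here are made up.
import cv2
import numpy as np

# Hypothetical table mapping an OPFE position (position sensor value pair)
# to a 3x3 homography matrix pre-calculated during calibration.
HOMOGRAPHY_LUT = {
    (512, 512): np.eye(3),                        # zero position: no POV aberration
    (640, 480): np.array([[0.98,  0.05,  3.1],
                          [-0.04, 0.97, -2.2],
                          [1e-5, -2e-5,  1.0]]),  # example values only
}

def correct_pov(image: np.ndarray, opfe_position: tuple) -> np.ndarray:
    """Warp the acquired image with the homography pre-calculated for this
    OPFE position, yielding a POV aberration-corrected image."""
    H = HOMOGRAPHY_LUT[opfe_position]
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```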


Depending on the OPFE position, after correction of the POV aberration the original (uncorrected) image center will in general not coincide with the corrected image center. There may be, for example, five different cropping options (A, B, C, D, E), see FIG. 3D.


The outcome of the cropping is a rectangular image with the same aspect ratio AR (e.g. width/height=4/3) as the zero-position image, but with a smaller image area than the zero-position image area. The size of the image area depends on the OPFE position. The corrected and cropped image is scaled to fit the display size or the saved image size.


All images may be further cropped to have the same crop size (image area) for all OPFE positions. The maximal crop size that fits all OPFE positions can be calculated as the minimal size from the set of maximum sizes for every OPFE position.
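As a minimal sketch (our illustration, not the patent's), the common crop size follows directly from that rule:

```python
# The largest crop that fits every OPFE position is the element-wise
# minimum over the per-position maximal crop sizes.
def common_crop_size(max_sizes):
    """max_sizes: iterable of (width, height) maximal rectangular crops,
    one per OPFE position, all sharing the same aspect ratio."""
    return (min(w for w, _ in max_sizes), min(h for _, h in max_sizes))

# e.g. common_crop_size([(4000, 3000), (3600, 2700), (3200, 2400)])
# -> (3200, 2400)
```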


In various embodiments there are provided methods, comprising: providing a Tele folded camera that includes an OPFE and an image sensor; tilting the OPFE in one or more directions to direct the Tele folded camera towards a POV; capturing a Tele image or a stream of Tele images from the POV, the Tele image having a POV aberration; and digitally correcting the POV aberration.


In some embodiments, a POV may have a plurality of aberrations, and the description above and below applies to the correction of one, some, or all of the plurality of aberrations.


In some embodiments, the correcting the POV aberration includes applying a geometric transformation to the captured Tele image to obtain a respective aberration-corrected image. In some exemplary embodiments, the geometric transformation uses calibration information captured during a camera calibration process.


In some embodiments, a method further comprises cropping the aberration-corrected image to obtain an aberration-corrected cropped (ACC) image that has an ACC image center, an ACC image size and an ACC image width/height ratio.


In some embodiments, a method further comprises scaling the ACC image to obtain an aberration-corrected cropped and scaled output image that has an output image center (OIC), an output image size and an output image width/height ratio. In some embodiments, the tilting of the OPFE and the capturing of a Tele image from the POV are repeated to obtain a plurality of Tele images captured at a plurality of POVs, and the OIC is selected such that a plurality of Tele images captured for all possible POVs cover a maximum rectangular area within a scene. In some embodiments, the tilting of the OPFE and the capturing of a Tele image from the POV are repeated to obtain a plurality of Tele images captured at a plurality of POVs, and the OIC is selected such that a plurality of Tele images captured for a particular plurality of POVs cover a maximum rectangular area within a scene.


In various embodiments there are provided systems, comprising: a Wide camera with a Wide field of view FOVW; a Tele folded camera with a Tele field of view FOVT<FOVW and which includes an OPFE and an image sensor, the Tele camera having a scanning capability enabled by OPFE tilt in one or more directions to direct the Tele folded camera towards a POV of a scene and used to capture a Tele image or a stream of Tele images from the POV, the Tele image or stream of Tele images having a POV aberration; and a processor configured to digitally correct the POV aberration.


In some embodiments, the POV aberration may be corrected using calibration data.


In some embodiments, the calibration data may be stored in a non-volatile memory.


In some embodiments, the calibration data include data on calibration between tilt positions of the OPFE in one or two directions and corresponding POVs.


In some embodiments, the calibration data may include data on calibration between a Tele image and a Wide image.


In some embodiments, the calibration data may include data on calibration between tilt positions of the OPFE in one or two directions and the position of FOVT within FOVW.


In some embodiments, the processor configuration to digitally correct the POV aberration may include applying a configuration to apply a geometric transformation to the captured Tele image or stream of Tele images to obtain an aberration-corrected image.


In some embodiments, the geometric transformation may be a homography transformation.


In some embodiments, the geometric transformation may include a homography motion-based calculation using a stream of frames from the Wide camera.


In some embodiments, the homography motion-based calculation may further use inertial measurement unit information.


In some embodiments, the geometric transformation may be a non-affine transformation.


In some embodiments, the image sensor has an image sensor center, an active sensor width and an active sensor height, and the OIC coincides with the image sensor center.


In some embodiments, the OIC may be selected such that a largest possible rectangular crop image size for a particular output image width/height ratio is achieved.


In some embodiments, the OIC may be located less than a distance of 10× pixel size away from an ideal OIC.


In some embodiments, the OIC may be located less than a distance of 10% of the active sensor width away from an ideal OIC.


In some embodiments, the OIC may be located less than a distance of 10% of the active sensor height away from an ideal OIC.


In some embodiments, the OIC may be selected such that an object-image magnification M of an object across different POVs varies from a constant value by less than 10%.


In some embodiments, the OIC may be selected such that the output image covers a maximum area within a scene.


In some embodiments, the OIC may be selected such that a plurality of Tele images captured for all possible POVs cover a maximum rectangular area within the scene.


In some embodiments, the OIC may be selected such that a plurality of Tele images captured for a particular plurality of POVs cover a maximum rectangular area within the scene.


In some embodiments, the OIC may be selected such that the output image shows a region of interest or object of interest in a visually appealing fashion.


In various embodiments there are provided methods, comprising: providing a Tele folded camera that includes an OPFE and an image sensor; tilting the OPFE in one or more directions to direct the Tele folded camera towards a plurality of POVs of a calibration chart, each POV associated with a respective OPFE position; capturing a respective Tele image of the calibration chart at each POV, each Tele image having a respective POV aberration; analyzing the Tele image data for deriving calibration data between each POV with its respective POV aberration and the respective OPFE position; and using the calibration data to digitally correct the POV aberration.


In some embodiments, the calibration chart may include location identifiers that allow to determine the POV for the given OPFE position from the respective Tele image.


In some embodiments, the calibration chart may include angular identifiers that allow to determine the POV aberration for the given OPFE position from each Tele image.


In some embodiments, the calibration chart may be a checkerboard chart.


In some embodiments, the calibration data may be represented by a bi-directional function that translates any OPFE position to a Tele POV and/or its respective POV aberrations and vice versa.


In some embodiments, the bi-directional function may be a polynomial.


In some embodiments, the calibration data may be represented by a bi-directional Look-Up-Table that translates any OPFE position to a Tele POV and/or its respective POV aberrations and vice versa.


In some embodiments, the calibration data may be represented by a Look-Up-Table comprising a plurality of OPFE positions with associated values for Tele POVs and/or their respective POV aberrations.


In some embodiments, the plurality of OPFE positions may include more than five OPFE positions, more than 50 OPFE positions, or even more than 250 OPFE positions.


In some embodiments, a method may further comprise providing a Wide camera with a field of view FOVW larger than a field of view FOVT of the Tele folded camera.


In some embodiments, between the analyzing of the Tele image and the using of the calibration data, a method may further comprise: in a first additional step, with a Tele image POV positioned within a respective Wide image FOV at a respective OPFE position associated with the Tele image POV, capturing an additional Tele image of the calibration chart along with capturing a Wide image of the calibration chart, and in a second additional step, using the Tele and Wide image data for deriving calibration data between the respective OPFE position, the Tele POV within the respective Wide FOV and the Tele image's POV aberration with respect to the Wide image. In some such embodiments, the first and second additional steps may be performed simultaneously. In some such embodiments, all the steps may be performed by a same operator. In some such embodiments, the first four steps may be performed by a first operator, and the first and second additional steps may be performed by a second operator. In some such embodiments, the first four steps may be performed in a time frame of less than 10 s, and the first and second additional steps are performed in a time frame of less than 10 s. In some such embodiments, the first four steps may be performed in a time frame of less than 5 s and the first and second additional steps are performed in a time frame of less than 5 s. In some such embodiments, the first additional step does not include any additional image capture, and the analysis and the deriving of the calibration data may include receiving external calibration data between the Tele folded camera and the Wide camera.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting examples of embodiments disclosed herein are described below with reference to figures attached hereto that are listed following this paragraph. The drawings and descriptions are meant to illuminate and clarify embodiments disclosed herein, and should not be considered limiting in any way. Like elements in different drawings may be indicated by like numerals.



FIG. 1A shows schematically a known folded camera with an OPFE in zero position from one perspective view;



FIG. 1B shows schematically the folded camera of FIG. 1A from another perspective view;



FIG. 1C shows schematically the camera of FIG. 1A with the OPFE in a first non-zero position;



FIG. 1D shows schematically the camera of FIG. 1A with the OPFE in a second non-zero position;



FIG. 2A shows different OPFE positions and respective FOVs in an object domain;



FIG. 2B shows acquired images, corrected images and cropped images of respective objects at the three different OPFE positions;



FIG. 2C shows the method for generating a Tele output image as described herein;



FIG. 3A shows an FOV of the entire range of the OPFE scanning positions with an exemplary object;



FIG. 3B shows details of the acquired image of the exemplary object in FIG. 3A;



FIG. 3C shows details of a corrected image of the acquired image in FIG. 3B;



FIG. 3D shows details of a cropped image of the corrected image in FIG. 3C;



FIG. 4 shows schematically an embodiment of an electronic device including multi-aperture cameras with at least one scanning Tele camera;



FIG. 5 shows a method for a STC calibration process described herein;



FIG. 6 shows an exemplary calibration chart that may be used for the calibration method described in FIG. 5.





DETAILED DESCRIPTION


FIG. 2A shows different OPFE positions and their respective N-FOVTs in an “object domain”. FIG. 2B shows acquired images, corrected images and cropped images of respective objects at the three different (0, 1 and 2) OPFE positions shown in FIG. 2A. The object domain is defined as the appearance of a scene that is captured by an ideal camera having a sufficiently large FOV and not having any aberrations and distortions. That is, the object domain corresponds to the appearance of the scene as it may appear to a human observer. The object domain is differentiated from an “image domain”, which is defined as the appearance of the scene as captured by a STC such as camera 100.


In FIGS. 2A-B, 202-i (i=0, 1, 2) represent N-FOVTs, 204-i represent objects, 206-i represent image frames, 208-i represent images of objects 204-i, 210-i represent image data boundaries and 212-i represent rectangular crops of image frames.


Box 200 represents the smallest rectangular FOV that includes S-FOVT, i.e. all the image data from all POVs that can be reached with a STC in the object domain. The N-FOVTs for three different OPFE positions (0, 1 and 2) are represented by 202-0, 202-1 and 202-2. Each OPFE position corresponds to a different POV. The N-FOVT for an OPFE “zero position” 202-0 is defined as an N-FOVT that produces an image of an object or scene without POV aberrations, i.e. (besides a scaling factor and assuming no camera aberrations and distortions) at zero position an object in the object domain is identical to the object image in the image domain. As shown, the N-FOVT at any other position (e.g. 202-1 and 202-2) is not a horizontal rectangle (with respect to 202-0), but an arbitrary tetragon. The same rectangular object is represented by 204-0, 204-1 and 204-2 in, respectively, N-FOVTs 202-0, 202-1 and 202-2.


In an example, the OPFE is positioned at a scanning position 1 (FIG. 2A) with N-FOVT 202-1 that includes object 204-1 and represents a POV of the STC. An image frame 206-1 (FIG. 2B) is captured (acquired) at position 1. In the captured image, object 204-1 is represented by captured object image 208-1 (FIG. 2B). A geometric transformation is applied on frame 206-1 to obtain a corrected image frame 206′-1. The geometric transformation is related to the rotation angle of the OPFE under which the respective image was captured. Inside corrected image frame 206′-1, one can see the corrected image 208′-1 of image 208-1. 210′-1 marks the boundary of image data present in corrected image frame 206′-1, and 212-1 is a possible rectangular crop comprising an image segment of 210′-1. Rectangular crop 212-1 may have the same aspect ratio AR (i.e. a ratio of horizontal width and vertical height) as AR of image sensor 106. In other examples, rectangular crop 212-1 may have a same aspect ratio as the aspect ratio used for outputting a zero position image such as 206″-0. Corrected image 206′-1 is then cropped to obtain a corrected cropped image 206″-1.


In FIG. 2B in the acquired image row, 206-0, 206-1 and 206-2 are original raw image captures (frames) acquired at, respectively, OPFE positions 0, 1 and 2. 208-0, 208-1 and 208-2 are captured images of, respectively, objects 204-0, 204-1 and 204-2. In the corrected image row, 206′-0, 206′-1 and 206′-2 represent corrected (also referred to as “rectified” or “aberration-corrected”) image frames that underwent image rectification. 208′-0, 208′-1 and 208′-2 represent captured object images 208-0, 208-1 and 208-2 of objects 204-0, 204-1 and 204-2 after image rectification, i.e. they represent “corrected images” (or “rectified images”) of the objects. The image data present in corrected image 206′-1 has a boundary 210′-1 and the image data in corrected image 206′-2 has a boundary 210′-2. The dotted area between 206′-1 and 210′-1, and between 206′-2 and 210′-2 has no valid image data to be shown on screen or saved to disk (i.e. the area includes only empty pixels). 212-1 and 212-2 are possible rectangular crops comprising image segments of 210′-1 and 210′-2 respectively. 212-1 and 212-2 may have a specific AR. In the cropped image row, 206″-0, 206″-1 and 206″-2 are aberration-corrected cropped (“ACC”) images comprising image data of 206′-0, 206′-1 and 206′-2, i.e. they comprise corrected image data of 206-0, 206-1 and 206-2. In some embodiments, after cropping the images are scaled. In images 206″-0, 206″-1 and 206″-2, areas 208″-0, 208″-1 and 208″-2 represent the image data of objects 204-0, 204-1 and 204-2 which underwent rectification, cropping and scaling. The images 206″-1 and 206″-2 are generated by cropping images 206′-1 and 206′-2 along the boundaries 212-1 and 212-2 respectively. Image 206″-0 did not undergo cropping, i.e. all image data of 206′-0 is present also in 206″-0. Note that since position ‘0’ was defined as having no POV aberrations, the correction algorithm will have no effect on the acquired image (i.e. 206-0, 206′-0 and 206″-0 will be identical). In other examples, 206-0, 206′-0 and 206″-0 may not be of equal size, but 206′-0 and/or 206″-0 may differ in size from 206-0 by a certain crop factor. The same applies for object images 208-0, 208′-0 and 208″-0.



FIG. 2C shows schematically an exemplary method for generating a Tele output image disclosed herein. In a first step 252, a command triggered by a human user or a program directs N-FOVT to a region of interest (ROI) within a scene by scanning. The scanning may be performed by rotating an OPFE. The FOV scanning by OPFE rotation is not performed instantaneously, but requires some settling time, which may be for example about 1-30 ms for scanning 2-5 degrees and about 15-100 ms for scanning 10-25 degrees. After the settling time, an STC image (such as images 206-0, 206-1 and 206-2 in FIG. 2B) is captured in step 254. In step 256, the STC image is rectified. In a first rectification sub-step (called “geometrical transformation sub-step”), a geometric transformation (such as a homography transformation, an affine transformation or a non-affine transformation) is performed, with results as shown in FIG. 2B. In the following and as an example, “homography transformation” is used to represent any meaningful geometric transformation. The homography transformation corrects for the aberrations associated with any particular POV and is thus a function of the POV. A second rectification sub-step (“interpolation sub-step”) may be performed, which is detailed below.


A corrected (or rectified, or aberration-corrected) image is thus obtained. Calibration data between an OPFE position and the corresponding POV may be used to select the homography transformation corresponding to the particular POV. In some embodiments, the geometric transformation may include corrections known in the art such as e.g. distortion correction and color correction. In step 258, the corrected image is cropped as depicted in FIG. 2B. In step 260, the cropped image is scaled. A cropped and scaled output image is output in step 262. The output image may be displayed on an electronic device such as device 400 (FIG. 4) and/or stored in a memory such as memory 450 (FIG. 4) or any other memory of the device.
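The flow of steps 252-262 can be summarized in a short sketch; the camera interface, the calibration lookups (homography_for, crop_rect_for) and the output size are our assumptions for illustration, not the patent's API:

```python
# Minimal sketch of the output-generation flow of FIG. 2C (steps 252-262).
import cv2

def generate_tele_output(camera, pov, homography_for, crop_rect_for,
                         out_size=(1280, 960)):
    camera.scan_to(pov)                     # step 252: rotate OPFE, wait to settle
    frame = camera.capture()                # step 254: acquire the STC image
    H = homography_for(pov)                 # calibration data selects the transform
    h, w = frame.shape[:2]
    rectified = cv2.warpPerspective(frame, H, (w, h))   # step 256: rectify
    x, y, cw, ch = crop_rect_for(pov)       # step 258: crop per a crop criterion
    cropped = rectified[y:y + ch, x:x + cw]
    return cv2.resize(cropped, out_size)    # steps 260-262: scale and output
```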


The cropping in step 258 may be done according to different crop selection criteria. Some crop selection criteria may aim for a particular size of the cropped image. Other crop selection criteria may enable a particular input image coordinate to be transferred to a particular image coordinate of the cropped image. In the following, “crop selection” criteria may be referred to simply as “crop criteria”.


Crop criteria that aim for a particular size of cropped images may be as follows: in one criterion (crop criterion 1), the image may be cropped so that a resulting image is a rectangular image. In another criterion (crop criterion 2), the resulting image may be a square. Here and in the following, the image size and shape are defined by the number and distribution of the image pixels, so that size and shape do not depend on the actual mode the image is displayed. As an example, a rectangular image has m rows (image height), wherein each row includes n values (image width). A square image has m rows with m values each. A first rectangular image having m1 rows with n1 values each is larger than a second rectangular image having m2 rows and n2 values if m1×n1>m2×n2 is satisfied.


In yet another criterion (crop criterion 3), the image is cropped so that a largest rectangular image having a particular AR for the particular POV is obtained. Examples for this criterion are the crop options “D” and “E” shown in FIG. 3D. The AR refers to the width/height ratio of an image. An AR may e.g. be 4:3, 3:2 or 16:9. In yet another criterion (crop criterion 4), the image captured at a first POV is cropped so that the resulting image has the same AR and size as an image captured at a second POV. The second POV is the POV that leads to the smallest image obtained by cropping a largest rectangular image having a particular AR for the second POV. Crop criterion 4 ensures that cropped images at all possible POVs have identical AR and shape. In yet another criterion (crop criterion 5), the image is cropped so that all output images generated from STC images captured in step 254 from the entire S-FOVT cover a largest area of a rectangular FOV in the object domain. This cropping criterion ensures that the area of a rectangular FOV in the object domain such as 200 is covered maximally by S-FOVT. In yet another criterion (crop criterion 6), the image is cropped rectangularly so that an identical object-to-image magnification is obtained for the entire S-FOVT. In general, the object images of images captured in step 254 are smaller for larger POVs. In some embodiments, the condition of “identical magnification” may be satisfied if the magnifications obtained for all POVs vary from a constant value by <10%. In other examples, the condition of “identical magnification” may be satisfied if the magnifications obtained for all POVs vary by <5% or by <15%.


Crop criteria that map particular input image coordinates to particular image coordinates of the cropped image are presented next. In general, and by applying a particular crop selection criterion, any arbitrary object image point of the image captured in step 254 (the “input image”) can be defined as the image center of the image output in step 262. In a crop criterion 7, the image may be cropped rectangularly so that the image center of the cropped image contains image data identical with that of the input image center for a particular POV. An image center may be defined as the center pixel and the surrounding pixels that lie within a radius of e.g. 10 times the pixel size. In some embodiments, the image center may be defined as the center pixel plus surrounding pixels that lie within a radius of e.g. 5 or 30 times the pixel size.


In a crop criterion 8, the image may be cropped rectangularly so that the cropped image center contains image data identical with that of an input image center, with the cropped image additionally fulfilling the condition that any two images that are captured at arbitrary first and second POVs are cropped so that the resulting images have the same AR and size. In yet other examples, crop criterion 8 may additionally fulfill the condition that the cropped images are of maximal size (crop criterion 9). In yet other examples, an image may be cropped so that a ROI or an object of interest (OOI) is displayed on the image output in step 262 in a visually appealing fashion (crop criterion 10). This criterion may support aesthetic image cropping, e.g. as described by Wang et al. in the article “A deep network solution for attention and aesthetics aware photo cropping”, May 2018, IEEE Transactions on Pattern Analysis and Machine Intelligence. Applications of aesthetic image cropping are also described in the co-owned PCT Patent Application No. PCT/IB2020-061330. In yet other examples, an image may be cropped according to the needs of further processing steps, e.g. the image may be cropped so that only a particular segment of the FOV in the object domain is included (crop criterion 11). A possible further processing may e.g. be the generation of a super image, i.e. of an output image that is composed of the image data of a plurality of input images. The generation of a super-image is described in co-owned PCT Patent Application No. PCT/IB2021-054070. Another possible further processing may be the generation of a panorama image as known in the art.


The scaling in step 260 may be performed according to different scaling selection criteria. In some embodiments, scaling may be performed so that images captured under different POVs in step 254 and output in step 262 (the “output image”) have identical size and AR (scale criterion 1). In other examples, scaling may be performed so that the pixel density per object image area in the output image is identical with the pixel density per area in the object domain present in the image captured in step 254 (scale criterion 2). In yet other examples, scaling may be performed so that the image size fits the requirements of a program that performs further processing on the image data (scale criterion 3).


Steps 252-262 outlined above may be performed sequentially, i.e. one after the other.


In some STC image rectification embodiments, step 256 may be performed as follows: let (xin_i, yin_j) be the values at some arbitrary image coordinates (i, j) of an input image (captured in step 254), and let (xout_m, yout_n) be the values at some arbitrary image coordinates (m, n) of an output image (of step 256). In the geometrical transformation sub-step, a homography transformation may be (xout, yout)=fH(xin, yin), with H being a 3×3 homography transformation matrix known in the art. The homography transformation can be inverted by using fH^−1=f(H^−1), i.e. the inverse transformation is the transformation of the inverse matrix. A crop transformation (xout, yout)=Crop(xin, yin) may be (xout_m, yout_n)=(xin_i−crop_start_x, yin_j−crop_start_y), assigning to each coordinate of the input image a coordinate in the output image, wherein only coordinates with values>0 are used for the output image. The vector (crop_start_x, crop_start_y) defines the size and shape of the cropped image. An inverse crop transformation Crop^−1 is defined by (xin_m, yin_n)=(xout_i+crop_start_x, yout_j+crop_start_y). A scale transformation (xout, yout)=Scale(xin, yin) may be (xout, yout)=(sx·xin, sy·yin), with scaling factors sx and sy in the x and y directions respectively. An inverse scale transformation Scale^−1 is defined by (xin, yin)=(sx^−1·xout, sy^−1·yout). A transfer function T is defined by applying homography, crop and scale sequentially, i.e. (xout, yout)=Scale(Crop(fH(xin, yin)))=T(xin, yin).
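Since the crop is a pure translation and the scale is diagonal, the transfer function T can be composed into a single 3×3 matrix. The sketch below (with made-up numeric values) illustrates this composition and the inverse T^−1 used in the interpolation sub-step:

```python
# Worked sketch of T = Scale(Crop(f_H(.))) as one projective matrix.
import numpy as np

H = np.array([[0.98,  0.05,  3.1],
              [-0.04, 0.97, -2.2],
              [1e-5, -2e-5,  1.0]])           # homography (example values)
crop_start_x, crop_start_y = 120.0, 90.0      # crop offset (hypothetical)
sx, sy = 1.25, 1.25                           # scaling factors (hypothetical)

C = np.array([[1.0, 0.0, -crop_start_x],
              [0.0, 1.0, -crop_start_y],
              [0.0, 0.0, 1.0]])               # crop: a pure translation
S = np.diag([sx, sy, 1.0])                    # scale
T = S @ C @ H                                 # acts right-to-left on (x, y, 1)

def apply(M, x, y):
    """Apply a 3x3 projective matrix to a point, with perspective divide."""
    v = M @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

T_inv = np.linalg.inv(T)                      # T^-1 for inverse mapping
x_out, y_out = apply(T, 100.0, 200.0)
x_in, y_in = apply(T_inv, x_out, y_out)       # recovers (100.0, 200.0)
```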


In the interpolation sub-step, one may sequentially interpolate all values of the output image (xout, yout) directly from the input image via the transfer function T. For example, one may start by calculating the values (xout_m, yout_n) at an arbitrary starting point having coordinates (m, n) of the output image. For this, one calculates the coordinates (m′, n′) of the input image (xin, yin) that are to be included in the calculation of the values (xout_m, yout_n) at the particular coordinates (m, n) of the output image. The coordinates (m′, n′) in the input image may be obtained by applying an inverse transfer function T^−1 to all output coordinates (m, n), i.e. T^−1(xout_m, yout_n) or fH^−1(Crop^−1(Scale^−1(xout_m, yout_n))) for all (m, n). In general, T^−1 may not map each coordinate (m, n) onto a single coordinate (m′, n′), but onto a segment of neighboring coordinates (m′, n′). For calculating the values (xout_m, yout_n), the entire segment or parts of the segment of neighboring coordinates (m′, n′) may be taken into account. For obtaining the values (xout_m, yout_n) of the output image at coordinates (m, n), a re-sampling function R as known in the art may be evaluated over the neighboring input coordinates, i.e. (xout_m, yout_n)=R(xin_m′, yin_n′). The re-sampling may be performed by methods known in the art such as nearest neighbor, bi-linear or bi-cubic.


After the values (xout_m, yout_n) are determined, one may perform the steps above for calculating the values (xout_o, yout_p) at additional coordinates (o, p), etc. This is repeated until all values (xout, yout) of the output image are obtained. In various embodiments, the calculation described above is performed for a plurality of output coordinates, or even for all output coordinates in parallel. In some STC image rectification embodiments, the calculations described here may be performed by a CPU (Central Processing Unit). In other STC image rectification embodiments and for faster image processing, the calculations described here may be performed by a GPU (Graphics Processing Unit). The STC image rectification may be performed in different color domains, e.g. RGB, YUV, YUV420 and further color domains known in the art.
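A minimal numpy sketch of this inverse-mapping interpolation follows (grayscale input and bi-linear re-sampling assumed); it mirrors what library warps such as OpenCV's warpPerspective do internally and is illustrative only:

```python
# Every output pixel (m, n) is mapped back through T^-1 and bilinearly
# resampled from the input image; unmapped pixels stay empty (0).
import numpy as np

def inverse_warp(img, T_inv, out_h, out_w):
    m, n = np.meshgrid(np.arange(out_w), np.arange(out_h))     # output grid
    ones = np.ones_like(m, dtype=np.float64)
    pts = np.stack([m, n, ones]).reshape(3, -1)
    src = T_inv @ pts                                          # back to input
    xs, ys = src[0] / src[2], src[1] / src[2]                  # coords (m', n')
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0
    h, w = img.shape[:2]
    valid = (x0 >= 0) & (x0 < w - 1) & (y0 >= 0) & (y0 < h - 1)
    x0c, y0c = np.clip(x0, 0, w - 2), np.clip(y0, 0, h - 2)
    # bilinear blend of the four input neighbors around (m', n')
    out = (img[y0c, x0c] * (1 - fx) * (1 - fy)
           + img[y0c, x0c + 1] * fx * (1 - fy)
           + img[y0c + 1, x0c] * (1 - fx) * fy
           + img[y0c + 1, x0c + 1] * fx * fy)
    out[~valid] = 0                                            # empty pixels
    return out.reshape(out_h, out_w)
```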



FIG. 3A shows S-FOVT 300 in the object domain. 302 represents a N-FOVT corresponding to a particular OPFE position within 300. 304 represents the object point in the scene whose image point is located at the center of the image sensor. 306 represents an arbitrarily selected particular object point in the scene that is included in N-FOVT 302.



FIG. 3B shows an acquired Tele image 310 having a center 304′ and including an image point 306′ of object point 306. FIG. 3C shows a corrected image 312 generated by applying the geometric transformation described above to image 310. 314 is the boundary of the image data in the corrected image, i.e. outside of the boundary (dotted area) there is no image data available. 304″ and 306″ are the locations of image points 304′ and 306′ respectively in the corrected image.



FIG. 3D shows, by way of example, different options for rectangularly cropping corrected image 312 shown in FIG. 3C and generating an output image as described herein. One can see that the image data that is located at the output image center (OIC) depends on the selected cropping criteria. In the examples marked “D” and “C”, crop selection criteria are selected so that particular image point 306′ is located at the OIC. Cropping option “D” is selected so that two criteria are fulfilled: (i) “particular image point 306′ is located at the OIC” and (ii) “the largest rectangular image for the given POV is achieved”. Cropping option “C” is selected so that the criterion “particular image point 306′ is located at the OIC” is fulfilled. Additional crop selection criteria are depicted in the examples marked “A”, “B” and “E”. Cropping option “A” is selected so that the criterion “image center 304′ is located at the OIC” is fulfilled. Cropping option “B” is selected so that two criteria are fulfilled: (i) “image center 304′ is located at the OIC” and (ii) “the largest rectangular image for the given POV is achieved”. Cropping option “E” is selected so that for the output image the criterion “the largest rectangular image for the given POV is achieved” is fulfilled. In other examples (not shown), a cropping option may be selected so that two criteria are fulfilled: (i) “image center 304′ is located at the OIC” and (ii) “the largest rectangular image for all possible POVs is achieved”. It may not always be possible or beneficial to locate the OIC exactly at a particular image position (“ideal OIC”) such as e.g. the image center; instead, the OIC may be located in proximity to the ideal OIC. Proximity to the ideal OIC may be expressed as a percentage of the image sensor size (e.g. the OIC may be located <10% of the image sensor width from the ideal OIC) or as a distance in pixels (e.g. the OIC may be located less than a distance of 10× pixel size away from the ideal OIC).
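The quoted proximity criteria can be checked directly; the function below is our sketch, with OIC coordinates assumed to be given in pixel units:

```python
import math

def oic_near_ideal(oic, ideal_oic, sensor_w_px, sensor_h_px):
    """oic, ideal_oic: (x, y) positions in pixel units.
    Returns which of the proximity criteria hold."""
    d = math.dist(oic, ideal_oic)
    return {
        "within_10_pixel_sizes": d < 10,                  # 10x pixel size
        "within_10pct_of_width": d < 0.1 * sensor_w_px,   # 10% of sensor width
        "within_10pct_of_height": d < 0.1 * sensor_h_px,  # 10% of sensor height
    }
```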


In yet other examples (not shown), a cropping option may be selected so that the criterion “the largest rectangular image for all possible POVs is achieved” is fulfilled. In yet other examples (not shown), a cropping option may be selected so that the criterion “the largest rectangular image for a particular plurality of POVs is achieved” is fulfilled. The particular plurality of POVs may cover all possible POVs or a subset thereof.



FIG. 4 shows schematically an embodiment of a mobile electronic device (“electronic device”) numbered 400 and including multi-aperture cameras with at least one STC. Electronic device 400 may e.g. be a smartphone, a tablet, a laptop, etc. Electronic device 400 comprises a first STC module 410 that includes an OPFE 412 for FOV scanning, an OPFE actuator 414 for actuating OPFE 412, and a Tele lens module 418 that forms a Tele image recorded by a first image sensor 416. Image sensor 416 has an active sensor area, defined by an active sensor width and an active sensor height, which performs the actual light harvesting, and an inactive area that does not perform light harvesting. A Tele lens actuator 422 may move lens module 418 for focusing and/or optical image stabilization (OIS). Electronic device 400 may further comprise an application processor (AP) 440 that includes a FOV scanner 442, a motion estimator 444 and an image generator 446. STC 410 may have an effective focal length (“EFL”) of EFL=5 mm-50 mm. A sensor diagonal (“SD”) of image sensor 416 may be SD=3 mm-15 mm.


Calibration data may be stored in a first memory 424, e.g. in an EEPROM (electrically erasable programmable read only memory), in a second memory 438, or in a third memory 450, e.g. in a NVM (non-volatile memory). Calibration data may include STC calibration data and dual-camera (“DC”) calibration data. Electronic device 400 further comprises a Wide (“W”) (or Ultra-Wide, “UW”) camera module 430 with a FOVW or FOVUW>N-FOVT that includes a second lens module 434 that forms an image recorded by a second image sensor 432. A second lens actuator 436 may move lens module 434 for focusing and/or OIS.


In use, a processing unit such as AP 440 may receive respective first and second image data from camera modules 410 and 430 and supply camera control signals to camera modules 410 and 430. FOV scanner 442 may receive commands from a human user or a program for directing the N-FOVT to particular POVs in a scene. In some embodiments, the commands may include a single request for directing N-FOVT to one particular POV. In other examples, the commands may include a series of requests e.g. for serially directing N-FOVT to a plurality of particular POVs. FOV scanner 442 may be configured to calculate a scanning order given the requested particular POVs. FOV scanner 442 may be configured to supply control signals to OPFE actuator 414, which may, in response to the control signals, rotate OPFE 412 for scanning N-FOVT. In some embodiments, FOV scanner 442 may additionally supply control signals to OPFE actuator 414 for actuating OPFE 412 for OIS.


Electronic device 400 further comprises an inertial measurement unit (IMU, or “Gyro”) 460 that may supply information on the motion of device 400. Motion estimator 444 may use data from IMU 460, e.g. for estimating hand motion caused by a human user. In some embodiments, motion estimator 444 may use additional data. For example, image data from camera 410 and/or from camera 430 may be used to estimate an “optical flow” from a plurality of images as known in the art. Motion estimator 444 may use data from IMU 460 together with optical flow data for estimating the motion of device 400 with higher accuracy. The information on the motion of device 400 may be used for OIS or for the homography transformation described above. In other embodiments, only optical flow data estimated from image data of camera 410 and/or camera 430 may be used for estimating the motion of device 400. Image generator 446 may be configured to generate images and image streams as e.g. described in FIG. 2C. In some embodiments, image generator 446 may be configured to use only first image data from camera 410. In other embodiments, image generator 446 may use image data from camera 410 and/or camera 430.



FIG. 5 shows a method for a STC calibration process described herein. The calibration process allows to derive calibration data for a single scanning Tele camera (“STC calibration”) or for a dual-camera (“DC”) including a STC and a Wide camera having a FOVW>N-FOVT (“DC calibration”). The goal of the calibration process is to link three parameters for all possible POVs: a specific OPFE position, which is defined by a position sensor value pair; the POV associated with this specific OPFE position; and the POV aberrations associated with this specific POV.


In a first example (calibration example 1 or “CE1”), the calibration process refers to a STC that scans in 2 dimensions by rotating an OPFE along two axes, wherein the amplitude of the rotation is measured by two or more position sensors (e.g. Hall sensors); here, a first position sensor P1 and a second position sensor P2. The STC's POV is measured by the value pair (p1, p2) of P1 and P2 respectively. In a first step 502, a calibration chart (“CC”) is provided. A suitable CC includes location identifiers (such as location identifiers 602 and 604, see FIG. 6) which allow to determine the location on the chart. When capturing a suitable CC at a given distance with a STC, at least one location identifier is present in N-FOVT for all POVs in S-FOVT. By means of the location identifiers, the STC's POV with respect to the CC can be determined. The location identifiers may e.g. be symbols encoding location information, spread at sufficiently high density all over the chart. Additionally, a suitable CC includes angular identifiers that allow to determine the relative angular tilt and rotation between the CC and a STC's image of the CC. The angular identifiers may e.g. be the lines present in a checkerboard. An example of a suitable CC is shown in FIG. 6. The size of the CC and the distance between the STC and the CC are to be selected so that the entire S-FOVT is included in the FOV covered by the CC.


In CE1, a list of N specific value pairs (p1, p2) may be defined for a specific STC design. In some embodiments, the list may include N=10 value pairs (p1, p2)1, . . . , (p1, p2)10. In other embodiments, the list may include N=10-20 or even more value pairs. According to a first criterion for value pair selection, the value pairs may be selected so that the STC must capture a minimum number of different POVs in the calibration process (or, in other words, a minimum number of repetitions of steps 504 and 506 is desired).


For a second example (“CE2”) of DC calibration, in step 502 another CC may be required, wherein the CC may or may not be a checkerboard. The STC of CE2 has the same attributes as in CE1. Also in CE2, a list of N specific position sensor value pairs (p1, p2), each value pair associated with a specific OPFE position, may be defined for a specific STC design. In some embodiments, the list may include N=200 value pairs (p1, p2)1, . . . , (p1, p2)200. In other embodiments, the list may include N=100-300 or even more value pairs.


In step 504, the OPFE is tilted to a specific OPFE position, in CE1 and CE2 defined e.g. by (p1, p2)1, so that the STC is directed to a (yet unknown) POV on the CC.


In step 506, one or more STC images are captured. For DC calibration and for CE2, a second sub-step of step 506 is required, in which STC images are captured along with W images captured by the Wide camera. Capturing STC images along with W images means here that the images are captured at a same dual-camera position and orientation. In general, the capture may be simultaneous, but this is not mandatory.


In some embodiments, the capturing of the STC images along with the Wide images may be performed together and in one single step, e.g. by a same operator.


In other examples, the two steps may be performed separately, e.g. by different operators. For example, for calibrating a STC with respect to a first CC, a first operator may capture one or more STC images at a specific OPFE position. The STC which is calibrated with respect to the first CC may be included by a second operator into a dual-camera which is used for capturing a second CC (which may or may not be identical to the first CC) with the STC at a specific OPFE position, along with one or more W images, for calibrating the STC with respect to the W camera of the dual-camera. Steps 504 and 506 are performed repeatedly according to the number N of value pairs, so that one or more STC images are captured at each of the N OPFE positions (or value pairs). The repetition of steps 504 and 506 for the plurality of OPFE positions may be performed for example in a predetermined timeframe. The predetermined timeframe may e.g. be 10 s or 5 s. For example, the first operator may be a camera module manufacturer that manufactures the STC, and the second operator may be a phone manufacturer that includes the STC in a dual-camera and the dual-camera in a mobile device. In some embodiments, the second sub-step of step 506 does not include capturing additional STC and W images, but includes receiving external calibration data between the STC and the Wide camera.


In step 508, the STC images are analyzed. The aim is to assign a POV and a respective POV aberration to the specific OPFE position (or value pair) of step 504.


For CE1, the analysis includes using the CC's location identifiers that appear in a STC image to determine the POV from which it was captured, as well as using the CC's angular identifiers along with ground truth images to determine the POV aberrations.


In a first sub-step of CE1, a specific POVi is assigned to the value pair (p1, p2)i.


In a second sub-step of CE1, the STC image is compared to a ground truth image of the CC at the respective POV. In this comparison it is determined which image transformation parameters transform the STC image into the CC's ground truth image. In some embodiments, three image transformation parameters may be used. For DC calibration and for CE2, POVs and respective POV aberrations are determined by comparing the STC images and the Wide images captured in step 506.


In step 508 of CE1, the first and the second sub-steps are performed for all value pairs (p1, p2)1, . . . , (p1, p2)N, so that to each value pair (p1, p2)i a specific POVi and image transformation parameters are assigned.


In step 510, from the analysis in step 508, calibration data is derived. In some embodiments, the calibration data is represented by a bi-directional polynomial. In other examples, the calibration data is represented by a bi-directional Look-Up-Table (LUT). In all examples, STC calibration data includes a function that can be used to translate any OPFE position to a STC image's POV aberrations with respect to a checkerboard and/or the STC's POV. DC calibration data can be used to translate any OPFE position to a STC image's POV aberrations with respect to the W camera and/or the STC's POV within FOVW. Vice versa, any POV aberration of a STC image with respect to a W camera's image can be translated into a STC POV within FOVW and/or into an OPFE position (thus “bi-directional”). In yet other examples, STC calibration data is represented by a LUT which comprises a multitude of OPFE positions with associated values for STC images' POV aberrations with respect to the CC and/or STC's POVs. DC calibration data is represented by a LUT which comprises a multitude of OPFE positions with associated values for STC images' rotation angles with respect to the Wide camera's images and/or Tele POVs within FOVW. For CE1, a function is determined which approximates the relation between all the value pairs (p1, p2)1, . . . , (p1, p2)N and their assigned specific POVs, POV1, . . . , POVN, as well as their assigned image transformation parameters. This function is generalized, meaning that it is used for bi-directionally translating between all possible OPFE position value pairs, their POVs and image transformation parameters for image rectification. According to a second criterion for value pair selection, the value pairs may be selected so that the generalization of the function leads to a minimum aggregated error (“AE”). “AE”, which is to be minimized, refers here to an error function that depends on the deviation of the STC images that underwent the POV correction from their respective ground truth images, for all possible POVs (or a number of POVs that is sufficiently large to approximate statistically all possible POVs). In some embodiments, a compromise between fulfilling the first and the second criterion for value pair selection is made.
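For CE1, the generalizing function could e.g. be a least-squares polynomial fit; the degree-2 basis below is our arbitrary choice, with one fit per output quantity (a POV angle or an image transformation parameter):

```python
# Sketch: fit a 2-D polynomial mapping position-sensor pairs (p1, p2)
# to one calibrated quantity, using ordinary least squares.
import numpy as np

def fit_poly2(p1, p2, target):
    """p1, p2, target: 1-D arrays over the N calibration points."""
    A = np.stack([np.ones_like(p1), p1, p2, p1*p2, p1**2, p2**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs

def eval_poly2(coeffs, p1, p2):
    """Evaluate the fitted polynomial at a single OPFE position."""
    return coeffs @ np.array([1.0, p1, p2, p1*p2, p1**2, p2**2])
```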


For CE2, the calibration data derived is included in a LUT. The LUT includes the N OPFE positions (value pairs), the POV associated with each value pair, as well as its respective POV aberration. This implies that explicit calibration data does not exist for all possible OPFE positions. Therefore, for rectifying a STC image with CE2 at an OPFE position which is not included in the LUT, one may approximate a POV and its POV aberrations. In some embodiments for approximation, one may use the calibration values which are associated with the one OPFE position which is, from all the N OPFE positions, located closest to the current OPFE position. “Closest” may be defined here by a distance metric known in the art, e.g. the quadratic distance of the respective value pairs sqrt((p1−p1c)² + (p2−p2c)²) may be smallest, where (p1, p2) is the current OPFE position and (p1c, p2c) are values included in the LUT. In other examples for approximation, one may use a weighted average of a plurality of calibration values which are associated with a plurality of OPFE positions which are, from all the N OPFE positions, located closest to the current OPFE position.
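Both approximation options for CE2 are sketched below (nearest neighbor and an inverse-distance weighted average over the k closest calibrated positions); the data layout is our assumption:

```python
# Sketch: approximate calibration values at an OPFE position absent from
# the LUT, using the quadratic distance of position sensor value pairs.
import numpy as np

def lut_lookup(lut_positions, lut_values, p, k=3, eps=1e-9):
    """lut_positions: Nx2 array of calibrated (p1, p2) pairs;
    lut_values: NxM array of associated calibration values;
    p: current (p1, p2). Returns (nearest_value, weighted_value)."""
    d = np.sqrt(((lut_positions - np.asarray(p))**2).sum(axis=1))
    nearest = lut_values[np.argmin(d)]              # closest calibrated position
    idx = np.argsort(d)[:k]                         # k closest positions
    w = 1.0 / (d[idx] + eps)                        # inverse-distance weights
    weighted = (w[:, None] * lut_values[idx]).sum(axis=0) / w.sum()
    return nearest, weighted
```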


In step 512, the calibration data are applied to the STC images for correcting POV aberrations.



FIG. 6 shows an exemplary CC 600 that may be used for the calibration method described in FIG. 5. CC 600 includes 72 location identifiers, which are distributed in rows of 9 (oriented parallel to the x axis) and columns of 8 (oriented parallel to the y axis). By way of example, the first two location identifiers of the first row are marked 602 and 604 respectively. A location identifier is located at a defined distance from a CC reference point, e.g. the upper left corner of the CC. For calibrating with CC 600 at different camera-CC distances, one may adapt the size of CC 600 so that at least one location identifier is present in N-FOVT for all POVs in S-FOVT. CC 600 additionally includes angular identifiers, which in this example are represented by the vertical and horizontal lines of the checkerboard.


While this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. The disclosure is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.


Unless otherwise stated, the use of the expression “and/or” between the last two members of a list of options for selection indicates that a selection of one or more of the listed options is appropriate and may be made.


It should be understood that where the claims or specification refer to “a” or “an” element, such reference is not to be construed as there being only one of that element.


All references mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual reference was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.

Claims
  • 1. A method comprising: a) providing a Tele folded camera that includes an optical path folding element (OPFE) and an image sensor; b) tilting the OPFE by a rotation angle in one or more directions to direct the Tele folded camera towards a point of view (POV); c) capturing a Tele image from the POV, the Tele image having a POV aberration in the shape of a non-rectangular tetragon introduced by the rotation angle of the OPFE; and d) digitally correcting the Tele image having the POV aberration by applying to the Tele image having the POV aberration a geometric transformation related to the rotation angle of the OPFE to obtain a respective aberration-corrected image, and cropping and scaling the aberration-corrected image to obtain a respective aberration-corrected, cropped and scaled output image that has an output image center (OIC), an output image size and an output image width/height ratio.
  • 2. The method of claim 1, wherein the image sensor has an image sensor center, an active sensor width and an active sensor height, and wherein the OIC coincides with the image sensor center.
  • 3. The method of claim 1, wherein the OIC is selected so that a largest possible rectangular crop image size for a particular output image width/height ratio is achieved.
  • 4. The method of claim 2, wherein the OIC is located less than 10 pixel sizes from an ideal OIC, wherein the ideal OIC is where a particular image point is located at the OIC or an image center is located at the OIC.
  • 5. The method of claim 2, wherein the OIC is located less than a distance of 10% of the active sensor width from an ideal OIC, wherein the ideal OIC is where a particular image point is located at the OIC or an image center is located at the OIC.
  • 6. The method of claim 1, wherein a cropping criterion is selected such that an object-image magnification M of an object across different POVs varies from a constant value by less than 10%.
  • 7. The method of claim 1, wherein steps b and c are repeated to obtain a plurality of Tele images captured at a plurality of POVs, and wherein the OIC is selected such that a plurality of Tele images captured for all possible POVs cover a maximum rectangular area within a scene.
  • 8. The method of claim 1, wherein steps b and c are repeated to obtain a plurality of Tele images captured at a plurality of POVs, and wherein the OIC is selected such that a plurality of Tele images captured for a particular plurality of POVs cover a maximum rectangular area within a scene.
  • 9. The method of claim 1, wherein the geometric transformation uses calibration data captured during a camera calibration process.
  • 10. The method of claim 1, wherein the geometric transformation is a homography transformation.
  • 11. A method, comprising: a) providing a Tele folded camera that includes an optical path folding element (OPFE) and an image sensor; b) tilting the OPFE in one or more directions to direct the Tele folded camera towards a plurality of points of view (POVs) of a calibration chart, each POV associated with a respective OPFE position; c) capturing a respective Tele image of the calibration chart at each POV, each Tele image having a respective POV aberration in the shape of a non-rectangular tetragon introduced by the position of the OPFE, wherein the calibration chart includes location identifiers that allow to determine the POV for the given OPFE position from the respective Tele image, wherein the calibration chart includes angular identifiers that allow to determine the POV aberration for the given OPFE position from each Tele image; d) from the respective Tele images of the calibration chart at each POV, deriving calibration data between each POV with its respective POV aberration and the respective OPFE position; and e) using the calibration data to digitally correct the POV aberration of each Tele image by applying to the Tele image having the POV aberration a geometric transformation related to the position of the OPFE, thereby obtaining a respective POV aberration-corrected image.
  • 12. The method of claim 11, wherein the calibration chart is a checkerboard chart.
  • 13. The method of claim 11, wherein the calibration data is represented by a bi-directional function that assigns any OPFE position to a Tele POV and/or its respective POV aberration correction and vice versa.
  • 14. The method of claim 13, wherein the bi-directional function is a polynomial.
  • 15. The method of claim 11, wherein the calibration data is represented by a bi-directional Look-Up-Table that assigns any OPFE position to a Tele POV and/or its respective POV aberration correction and vice versa.
  • 16. The method of claim 11, wherein the calibration data is represented by a Look-Up-Table comprising a plurality of OPFE positions with associated values for Tele POVs and/or their respective POV aberration corrections.
  • 17. The method of claim 16, wherein the plurality of OPFE positions includes more than five OPFE positions.
  • 18. The method of claim 16, wherein the plurality of OPFE positions includes more than 100 OPFE positions.
  • 19. The method of claim 11, further comprising: providing a Wide camera with a field of view FOVW larger than a field of view FOVT of the Tele folded camera; between steps d and e, in a first additional step, with a Tele image POV positioned within a respective Wide image FOV at a respective OPFE position associated with the Tele image POV, capturing an additional Tele image of the calibration chart along with capturing a Wide image of the calibration chart; and in a second additional step, using the Tele and Wide images for deriving calibration data between the respective OPFE position, the Tele POV within the respective Wide FOV, and the Tele image's POV aberration with respect to the Wide image.
  • 20. The method of claim 19, wherein the first and second additional steps are performed simultaneously.
  • 21. The method of claim 19, wherein all the steps are performed by a same operator.
  • 22. The method of claim 19, wherein steps a-d are performed by a first operator, and wherein the first and second additional steps are performed by a second operator.
  • 23. The method of claim 19, wherein the steps a-d are performed in a time frame of less than 10 s, and wherein the first and second additional steps are performed in a time frame of less than 10 s.
  • 24. The method of claim 19, wherein the first additional step does not include any additional image capture, and wherein the analysis and the deriving of the calibration data include receiving external calibration data between the Tele folded camera and the Wide camera.
  • 25. The method of claim 11, further comprising: providing a Wide camera with a field of view FOVW larger than a field of view FOVT of the Tele folded camera; receiving external calibration data between the Tele folded camera and the Wide camera; and
  • 26. The method of claim 25, wherein the external calibration data is represented by a bi-directional function that assigns any OPFE position to a Tele POV within the Wide FOV and/or to the respective POV aberration correction of the Tele image with respect to the Wide image, and vice versa.
  • 27. The method of claim 25, wherein the bi-directional function is a bi-directional polynomial.
  • 28. The method of claim 25, wherein the calibration data is represented by a bi-directional Look-Up-Table that translates any OPFE position to a Tele POV within the Wide FOV and/or to the respective POV aberration correction of the Tele image with respect to the Wide image, and vice versa.
  • 29. The method of claim 25, wherein the calibration data is represented by a Look-Up-Table comprising a plurality of OPFE positions with associated values for Tele POVs within the Wide FOV and/or their respective POV aberration corrections with respect to a Wide image.
  • 30. The method of claim 29, wherein the plurality of OPFE positions includes more than five OPFE positions.
  • 31. The method of claim 29, wherein the plurality of OPFE positions includes more than 100 OPFE positions.
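
To make the correction pipeline of claims 1, 3 and 10 concrete, the following is a minimal Python/OpenCV sketch, not the patent's implementation. It assumes the POV-correcting homography H for the current OPFE rotation angle is already known (e.g., from calibration data per claim 9), that the OIC is the sensor center (claim 2) and lies inside the warped valid region, and that all function and parameter names are illustrative:

```python
import cv2
import numpy as np

def pov_correct_crop_scale(tele_img, H, out_ratio=4.0 / 3.0, out_width=1600):
    """Warp out the POV aberration with homography H, then crop the largest
    centered rectangle of the requested width/height ratio that fits inside
    the warped (non-rectangular tetragon) valid region, and scale it."""
    h, w = tele_img.shape[:2]
    corrected = cv2.warpPerspective(tele_img, H, (w, h))

    # Corners of the rectangular sensor image mapped into the corrected
    # frame: this tetragon bounds the valid pixels. A homography maps the
    # rectangle to a convex quad, so corner-inclusion tests suffice below.
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    quad = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H)

    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0  # OIC chosen as the sensor center

    def fits(half_w):
        # A centered axis-aligned rectangle fits iff all 4 corners lie
        # inside the convex tetragon.
        half_h = half_w / out_ratio
        pts = [(cx - half_w, cy - half_h), (cx + half_w, cy - half_h),
               (cx + half_w, cy + half_h), (cx - half_w, cy + half_h)]
        return all(cv2.pointPolygonTest(quad, p, False) >= 0 for p in pts)

    # Binary-search the largest fitting half-width (the claim 3 criterion).
    lo, hi = 0.0, min(cx, cy * out_ratio)
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fits(mid) else (lo, mid)

    half_h = lo / out_ratio
    crop = corrected[int(cy - half_h):int(cy + half_h),
                     int(cx - lo):int(cx + lo)]
    return cv2.resize(crop, (out_width, int(round(out_width / out_ratio))))
```

In a full pipeline, H would be looked up from the calibration data per the measured OPFE position, and the crop center could instead be shifted toward an ideal OIC as in claims 4-5.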
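For the per-POV calibration step of claims 11-12, one plausible realization is to detect the checkerboard in each aberrated Tele image and fit the homography relating the detected grid to an ideal rectilinear grid. The sketch below is hypothetical; the pattern size, square pitch and corner ordering are assumptions, not taken from the patent:

```python
import cv2
import numpy as np

def derive_pov_homography(tele_img, pattern=(9, 6), square_px=120.0):
    """Detect the checkerboard in a POV-aberrated Tele image and fit the
    homography mapping the detected (tetragon-distorted) corner grid back
    to an ideal rectilinear grid. Stored per OPFE position, this homography
    constitutes the calibration data used for POV correction (claim 11 d).
    Assumes the detected corner ordering matches the ideal grid (row-major),
    i.e. the chart orientation is controlled during calibration."""
    gray = cv2.cvtColor(tele_img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise RuntimeError("calibration chart not detected")
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))

    # Ideal rectilinear corner grid, as the chart would image at zero POV.
    cols, rows = pattern
    ideal = np.array([[c * square_px, r * square_px]
                      for r in range(rows) for c in range(cols)], np.float32)

    H, _ = cv2.findHomography(corners.reshape(-1, 2), ideal)
    return H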
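Claims 13-18 describe the calibration data either as a bi-directional function (e.g., a polynomial) or as a Look-Up-Table over OPFE positions. A toy numpy sketch with invented sample values and a single scan axis, for illustration only:

```python
import numpy as np

# Invented sample calibration points: OPFE position sensor codes vs. the
# POV scan angle (degrees) measured from chart images at those positions.
opfe_codes = np.array([-800.0, -400.0, 0.0, 400.0, 800.0, 1200.0])
pov_deg = np.array([-20.1, -10.3, 0.0, 10.2, 20.4, 30.5])

# Claims 13-14: a bi-directional polynomial, realized here as one fitted
# polynomial per direction.
code_to_pov = np.polynomial.Polynomial.fit(opfe_codes, pov_deg, deg=3)
pov_to_code = np.polynomial.Polynomial.fit(pov_deg, opfe_codes, deg=3)

print(code_to_pov(600.0))  # POV reached at an uncalibrated OPFE position
print(pov_to_code(15.0))   # OPFE position needed for a requested POV

# Claims 15-18: the same data as a Look-Up-Table, linearly interpolated
# between stored OPFE positions (np.interp needs ascending x samples).
def lut_code_to_pov(code):
    return np.interp(code, opfe_codes, pov_deg)

def lut_pov_to_code(pov):
    return np.interp(pov, pov_deg, opfe_codes)
```

The dual-camera calibration of claims 19-31 would extend the same structures, additionally mapping each OPFE position to the Tele POV's location and POV aberration within the Wide FOV.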
CROSS REFERENCE TO RELATED APPLICATIONS

This is a 371 application from international application PCT/IB2021/056311 filed Jul. 13, 2021, and is related to and claims priority from U.S. Provisional Patent Application No. 63/051,993 filed Jul. 15, 2020, which is incorporated herein by reference in its entirety.

PCT Information
Filing Document: PCT/IB2021/056311; Filing Date: Jul. 13, 2021; Country: WO
Publishing Document: WO 2022/013753; Publishing Date: Jan. 20, 2022; Country: WO; Kind: A
Related Publications (1)
Number: US 2023/0131620 A1; Date: Apr. 2023; Country: US
Provisional Applications (1)
Number: 63/051,993; Date: Jul. 2020; Country: US