Systems and methods for obtaining a super macro image

Information

  • Patent Grant
  • Patent Number
    11,962,901
  • Date Filed
    Sunday, July 2, 2023
  • Date Issued
    Tuesday, April 16, 2024
  • Field of Search
    • CPC
    • H04N23/67
    • H04N23/69
    • H04N23/698
    • G02B3/14
    • G02B15/15
    • G03B13/36
    • G03B30/00
    • G03B37/02
    • G03B17/12
    • G06T7/50
  • International Classifications
    • H04N23/67
    • G02B3/14
    • G03B13/36
    • G06T7/50
    • H04N23/69
    • H04N23/698
  • Disclaimer
    This patent is subject to a terminal disclaimer.
Abstract
Systems comprising a Wide/Ultra-Wide camera, a folded Tele camera comprising an optical path folding element and a Tele lens module, a lens actuator for moving the Tele lens module for focusing to object-lens distances between 3.0 cm and 35 cm with an object-to-image magnification between 1:5 and 25:1, and an application processor (AP), wherein the AP is configured to analyze image data from the UW camera to define a Tele capture strategy for a sequence of Macro images with a focus plane slightly shifted from one captured Macro image to another and to generate a new Macro image from this sequence, and wherein the focus plane and a depth of field of the new Macro image can be controlled continuously.
Description
FIELD

The subject matter disclosed herein relates in general to macro images and in particular to methods for obtaining such images with mobile telephoto (“Tele” or “T”) cameras.


BACKGROUND

Multi-cameras (of which a “dual-camera” having two cameras is an example) are now standard in portable electronic mobile devices (“mobile devices”, e.g. smartphones, tablets, etc.). A multi-camera usually comprises a wide field-of-view (or “angle”) FOVW camera (“Wide” or “W” camera), and at least one additional camera, with a narrower (than FOVW) FOV (Tele camera with FOVT), or with an ultra-wide field of view FOVUW (wider than FOVW, “UW” camera). A known dual camera including a W camera and a folded T camera is shown in FIG. 10.


A “Macro-photography” mode is becoming a popular differentiator. “Macro-photography” refers to photographing objects that are close to the camera, so that an image recorded on the image sensor is nearly as large as the actual object photographed. The ratio of object size over image size is the object-to-image magnification. For system cameras such as digital single-lens reflex cameras (DSLRs), a Macro image is defined by having an object-to-image magnification of about 1:1 or larger, e.g. 1:1.1. In the context of mobile devices this definition is relaxed, so that an image with an object-to-image magnification of about 10:1 or even 15:1 is also referred to as a “Macro image”. In known mobile devices, Macro-photography capability is usually provided by enabling very close focusing with a UW camera, which has a relatively short effective focal length (EFL) of e.g. EFL=2.5 mm.


A UW camera can focus to the close range required for Macro photography (e.g., 1.5 cm to 15 cm), but its spatial resolution is poor. For example, a UW camera with EFL=2.5 mm focused to an object at 5 cm (lens-object distance) will have approximately 19:1 object-to-image magnification. This follows from the thin lens equation:

1/EFL = 1/u + 1/v

with EFL=2.5 mm, a lens-image distance v=2.6 mm and an object-lens distance u=50 mm. Even when focused as close as 1.5 cm, the object-to-image magnification of the UW camera will be approximately 5:1. Capturing objects in Macro images from these short object-lens distances of e.g. u=5 cm or less is very challenging for a user: it may make framing of the image very hard, it may prohibit taking images of popular Macro objects such as living subjects (e.g. insects), and it may introduce shadows and obscure the lighting in the scene.
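For illustration, the magnifications above can be reproduced with a few lines of code; this is a minimal thin-lens sketch (illustrative only, not part of the disclosure), with the object-to-image magnification taken as u:v.

```python
# Minimal thin-lens sketch: 1/EFL = 1/u + 1/v, magnification = u:v.

def image_distance(efl_mm: float, u_mm: float) -> float:
    """Lens-image distance v for a thin lens of focal length efl_mm
    focused on an object at lens-object distance u_mm."""
    return 1.0 / (1.0 / efl_mm - 1.0 / u_mm)

for u in (50.0, 15.0):            # object-lens distances in mm
    v = image_distance(2.5, u)    # UW camera, EFL = 2.5 mm
    print(f"u = {u:.0f} mm -> v = {v:.2f} mm, magnification ~ {u / v:.0f}:1")
# u = 50 mm -> v = 2.63 mm, magnification ~ 19:1
# u = 15 mm -> v = 3.00 mm, magnification ~ 5:1
```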


A dedicated Macro camera may be realized with a smartphone's Tele camera. Tele cameras focused to close objects have a very shallow depth of field (DOF). Consequently, capturing Macro images in Macro-photography mode is very challenging. Popular Macro objects such as flowers or insects exhibit a significant variation in depth, and cannot be imaged all-in-focus in a single capture. It would be beneficial to have a multi-camera in mobile devices that captures Macro images (i) from a larger lens-object distance (e.g. 3.0-35 cm) and (ii) with a larger object-to-image magnification (e.g. 1:5-25:1).


SUMMARY

In the following and for simplicity, the terms “UW image” and “W image”, “UW camera” and “W camera”, “UW FOV” (or FOVUW) and “W FOV” (or FOVW) etc. may be used interchangeably. A W camera may have a larger FOV than a Tele camera or a Macro-capable Tele camera, and a UW camera may have a larger FOV than a W camera. Typically, but not by way of limitation, FOVT may be 15-40 degrees, FOVW may be 60-90 degrees and FOVUW may be 90-130 degrees. A W camera or a UW camera may be capable of focusing to object-lens distances that are relevant for Macro photography and that may be in the range of e.g. 2.5-15 cm. In some cases (e.g. between W and UW), the FOV ranges given above may overlap to a certain degree.


In various embodiments, there are provided systems, comprising: a Wide camera for providing at least one Wide image; a Tele camera comprising a Tele lens module; a lens actuator for moving the Tele lens module for focusing to any distance or set of distances between 3.0 cm and 35 cm with an object-to-image magnification between 1:5 and 25:1; and an application processor (AP) configured to analyse image data from the Wide camera to define a capture strategy for capturing with the Tele camera a sequence of Macro images with a focus plane shifted from one captured Macro image to another captured Macro image, and to generate a new Macro image from this sequence. The focus plane and the DOF of the new Macro image can be controlled continuously. In some embodiments, the continuous control may be post-capture.


In some embodiments, the Tele camera may be a folded Tele camera comprising an optical path folding element (OPFE). In some embodiments, the Tele camera may be a double-folded Tele camera comprising two OPFEs. In some embodiments, the Tele camera may be a pop-out Tele camera comprising a pop-out lens.


In some embodiments, the focusing may be to object-lens distances of 3.0-25 cm, of 3.0-15 cm, or of 10-35 cm.


In some embodiments, the Tele camera may have an EFL of 7-10 mm, of 10-20 mm, or of 20-40 mm.


In some embodiments, the Tele capture strategy may be adjusted during capture of the sequence of Macro images based on information from captured Macro images.


In some embodiments, the information from captured Macro images is processed by a Laplacian of Gaussian analysis.


In some embodiments, the image data from the UW camera is phase detection auto-focus (PDAF) data.


In some embodiments, generation of the new Macro image may use a UW image as reference image.


In some embodiments, the generation of the new Macro image may use a video stream of UW images as reference images.


In some embodiments, the AP may be configured to automatically detect objects of interest (OOIs) in the sequence of captured Macro images and to generate the new Macro image when the OOIs are entirely in-focus.


In some embodiments, the AP may be configured to automatically detect OOIs in the UW image data and to generate the new Macro image when the OOIs are entirely in-focus.


In some embodiments, the AP may be configured to automatically detect OOIs in the sequence of input Macro images and to generate the new Macro image when specific image segments of the OOIs have a specific amount of forward de-focus blur and a specific amount of backward de-focus blur.


In some embodiments, the AP may be configured to automatically detect OOIs in the UW image data and to generate the new Macro image when specific image segments of the OOIs have a specific amount of forward de-focus blur and a specific amount of backward de-focus blur.


In some embodiments, the AP may be configured to calculate a depth map from the sequence of captured Macro images and to use the depth map to generate the new Macro image.


In some embodiments, the AP may be configured to provide the new Macro image with realistic artificial lighting scenarios.


In some embodiments, the AP may be configured to analyse image data from the Wide camera to automatically select an object and to define the capture strategy for capturing the object with the Tele camera. In some embodiments, a focus peaking map may be displayed to a user for selecting an object which is captured with the Tele camera.


In some embodiments, the AP may be configured to calculate a depth map from the PDAF data and to use the depth map to generate the new Macro image.


In some embodiments, the Tele lens module may include one or more D cut lenses.


In some embodiments, a system may further comprise a liquid lens used for focusing to the object-lens distances of 4-15 cm. In some embodiments, the power of the liquid lens can be changed continuously in a range of 0-30 dioptre. In some embodiments, the liquid lens may be located on top of the folded Tele camera's OPFE. In some embodiments, the liquid lens may be located between the folded Tele camera's OPFE and the Tele lens module.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting examples of embodiments disclosed herein are described below with reference to figures attached hereto that are listed following this paragraph. The drawings and descriptions are meant to illuminate and clarify embodiments disclosed herein, and should not be considered limiting in any way. Like elements in different drawings may be indicated by like numerals. Elements in the drawings are not necessarily drawn to scale.



FIG. 1A shows a perspective view of an embodiment of a folded Tele lens and sensor module in a Tele lens state with focus on infinity;



FIG. 1B shows a perspective view of the Tele lens and sensor module of FIG. 1A in a Macro lens state with focus on a close object;



FIG. 1C shows in cross section another continuous zoom Tele lens and sensor module disclosed herein in a minimum zoom state;



FIG. 1D shows the module of FIG. 1C in an intermediate zoom state;



FIG. 1E shows the module of FIG. 1C in a maximum zoom state;



FIG. 1F shows in cross section yet another continuous zoom Tele lens and sensor module disclosed herein in a minimum zoom state;



FIG. 1G shows the module of FIG. 1F in an intermediate zoom state;



FIG. 1H shows the module of FIG. 1F in a maximum zoom state;



FIG. 1I shows an embodiment of a folded Tele camera disclosed herein;



FIG. 1J shows a pop-out camera in an operational or “pop-out” state;



FIG. 1K shows the pop-out camera of FIG. 1J in a non-operational or “collapsed” state;



FIG. 1L shows an exemplary Tele-Macro camera lens system disclosed herein in a cross-sectional view in a collapsed state;



FIG. 1M shows the lens system of FIG. 1L in a first Tele state having a first EFL and a first zoom factor;



FIG. 1N shows the lens system of FIG. 1L in a second Tele state having a second EFL and a second zoom factor;



FIG. 1O shows the lens system of FIG. 1L in a Tele-Macro state having a third EFL and a third zoom factor;



FIG. 1P shows schematically another exemplary Tele-Macro camera lens system disclosed herein in a cross-sectional view in pop-out state;



FIG. 1Q shows the lens system of FIG. 1P in a first collapsed state;



FIG. 1R shows the lens system of FIG. 1P in a second collapsed state;



FIG. 1S shows schematically yet another exemplary Tele-Macro camera lens system disclosed herein in a cross-sectional view in pop-out state;



FIG. 1T shows the lens system of FIG. 1S in a collapsed state;



FIG. 1U shows schematically dual-camera output image sizes and ratios between an ultra-wide FOV and a Macro FOV;



FIG. 2A illustrates an embodiment of a folded Tele digital camera with Macro capabilities disclosed herein;



FIG. 2B illustrates another embodiment of a folded Tele digital camera with Macro capabilities disclosed herein;



FIG. 2C shows in cross section yet another continuous zoom Tele lens and sensor module disclosed herein in a first zoom state;



FIG. 2D shows the module of FIG. 2C in a second zoom state;



FIG. 2E shows the module of FIG. 2C in a third zoom state;



FIG. 3A shows a point object in focus, with a micro-lens projecting the light from the object onto the center of two sub-pixels, causing zero-disparity;



FIG. 3B shows light-rays from the point object in FIG. 3A out of focus;



FIG. 4A illustrates a method of capturing a Macro focus stack disclosed herein;



FIG. 4B illustrates another method of generating a focus stack disclosed herein;



FIG. 5A shows an exemplary Macro object and setup for capturing the Macro object;



FIG. 5B shows an output graph for the Macro setup of FIG. 5A;



FIG. 5C shows another exemplary Macro object and setup for capturing the Macro object;



FIG. 5D shows an output graph for the Macro setup of FIG. 5C;



FIG. 6 illustrates a method of generating single Macro images from a plurality of images of a focus stack;



FIG. 7 shows a graphical user interface (GUI) that a user may use to transmit a command to modify the appearance of the output image;



FIG. 8A shows a symmetric blur function;



FIG. 8B shows an asymmetric blur function with functionality as described in FIG. 8A;



FIG. 9 shows a system for performing methods disclosed herein;



FIG. 10 shows an exemplary dual-camera.





DETAILED DESCRIPTION

Tele cameras with a Macro-photography mode can switch to a Macro state by performing movements within the lens of the Tele camera, thus changing the lens's properties. Cameras with such capability are described for example in co-owned international patent applications PCT/IB2020/051405 and PCT/IB2020/058697. For example, FIGS. 19A and 19B in PCT/IB2020/051405 show two folded Tele camera states: one with the Tele lens in a first “Tele lens” state and the other with the Tele lens in a second “Macro lens” state. Because of the large EFL of a Tele camera, and because the used image region of the image sensor is smaller in the Macro mode than in the Tele mode, a “Macro lens” state may come with a small Macro FOV such as FOV 198 described below.


In the following, images are referred to as “Macro images” if they fulfil both of the following criteria:

    • Object-to-image magnification of 1:5-25:1.
    • Captured at an object-lens distance in the range of 30 mm-350 mm with a camera having an EFL in the range of 7 mm-40 mm.



FIGS. 1A and 1B show schematically an embodiment of a folded Tele lens and sensor module disclosed herein and numbered 100. FIG. 1A shows module 100 in a Tele lens state with focus on infinity from a top perspective view, and FIG. 1B shows module 100 in a Macro lens state with maximum object-to-image magnification (Mmax), with focus on a (close) object at about 4 cm from the camera, from the same top perspective view.


Module 100 comprises a first lens group (G1) 104, a second lens group (G2) 106 and a third lens group (G3) 108, a module housing 102 and an image sensor 110. In this embodiment, lens groups 104, 106 and 108 are fixedly coupled, i.e. the distances between the lens groups do not change. Lens groups 104, 106 and 108 together may form a lens with an EFL=13 mm. Lens groups 104, 106 and 108 share a lens optical axis 112. For focusing, lens groups 104, 106 and 108 are actuated together along lens optical axis 112 by a voice coil motor (VCM) mechanism (not shown). A VCM mechanism (not shown) can also be used for changing between lens focus states.


With reference to FIG. 1B and to an optical design detailed in Example 6 in Table 25 of PCT/IB2020/051405, Mmax=2.3:1 may be achieved (for objects at 4.2 cm). This is according to a thin lens approximation with EFL=13 mm, a lens-image distance v=19 mm and an object-lens distance u=42 mm. Mmax may be achieved with the lens configuration shown in FIG. 1B, where lens groups G1+G2+G3 are moved together as far as possible towards the object (i.e. away from sensor 110).


A smaller object-to-image magnification M may be selected continuously by capturing the object from a larger distance. A magnification of zero (for objects at infinity) is obtained with the lens configuration of FIG. 1A and with lens groups G1+G2+G3 moved together as far as possible towards image sensor 110. For magnifications between zero and Mmax, lens groups G1+G2+G3 are moved together between the limits stated above. For example, a magnification M=4.3:1 may be desired. To switch from the Mmax state to M=4.3:1, lens groups G1+G2+G3 must be moved together about 3 mm towards the image sensor.


In another embodiment, a Macro camera may have an EFL of 25 mm and may be compared to the UW camera with EFL=2.5 mm described above. Both cameras may include the same image sensor, e.g. with 4 mm active image sensor width. When focused to 5 cm, the Macro camera with EFL=25 mm will have 1:1 object-to-image magnification and will capture an object width of 4 mm (same as the sensor width). In comparison, the UW camera with approximately 19:1 object-to-image magnification will capture an object width of 76 mm.
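The same thin-lens arithmetic reproduces this comparison; this is again an illustrative sketch, with the 4 mm sensor width taken from the text above.

```python
# Thin-lens sketch of the Macro (EFL=25 mm) vs. UW (EFL=2.5 mm) comparison.

def image_distance(efl_mm: float, u_mm: float) -> float:
    return 1.0 / (1.0 / efl_mm - 1.0 / u_mm)

u = 50.0                        # object-lens distance in mm
sensor_width_mm = 4.0           # active sensor width from the text
for efl in (25.0, 2.5):
    v = image_distance(efl, u)
    width = sensor_width_mm * (u / v)   # captured object width
    print(f"EFL = {efl} mm: magnification ~ {u / v:.0f}:1, "
          f"object width ~ {width:.0f} mm")
# EFL = 25 mm: ~1:1 and ~4 mm; EFL = 2.5 mm: ~19:1 and ~76 mm
```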


A Tele camera with an EFL=7-40 mm may be beneficial for Macro photography, as it can provide large image magnification. However, focusing a Tele camera to short object-lens distances is not trivial and requires large lens strokes that must support optics specifications such as limiting de-center deviations (with respect to a plane normal to an optical path) between lens and image sensor to 25 μm or less, e.g. to 5 μm. As an example, for focusing the Macro camera having EFL=25 mm to 10 cm (compared to focus on infinity), a lens stroke of about 6.3 mm is required. For an upright (non-folded) Tele camera, lens strokes of 2 mm or more are incompatible with mobile device (and thus camera) height constraints. However, in folded camera designs (described in FIGS. 1A-1B and FIGS. 2A-2B) or “pop-out” camera designs (described in FIGS. 1J-1K and for example in co-owned international patent application PCT/IB2020/058697) a smartphone's height does not limit such lens strokes.


In other embodiments, a folded or non-folded Tele camera for capturing Macro images may have an EFL of 7-40 mm, for example 18 mm. For Macro capability, the folded or non-folded Tele camera may be able to focus continuously to objects having an object-lens distance of e.g. 30-350 mm.



FIGS. 1C-1E show an embodiment of a continuous zoom Tele lens and sensor module disclosed herein and numbered 120, in different zoom states. FIG. 1C shows module 120 in its minimum zoom state, having an EFL=15 mm, FIG. 1D shows module 120 in an intermediate zoom state, having an EFL=22.5 mm, and FIG. 1E shows module 120 in its maximum zoom state, having an EFL=30 mm.


Module 120 comprises a lens 122 with 8 single lens elements L1-L8, an image sensor 124 and, optionally, an optical window 126. The optical axis is indicated by 128. Module 120 is included in a folded Tele camera such as camera 1000. Module 120 has a continuous zoom range that can be switched continuously between a minimum zoom state and a maximum zoom state. The EFL of the maximum zoom state EFLMAX and the EFL of the minimum zoom state EFLMIN fulfil EFLMAX=2×EFLMIN. Lens 122 is divided into three lens groups, group 1 (“G1”), which is closest to an object, group 2 (“G2”) and group 3 (“G3”), which is closest to sensor 124. For changing a zoom state, G1 and G3 are moved together as one group (“G13” group) with respect to G2 and to sensor 124. For focusing, G1+G2+G3 move together as one group with respect to sensor 124.



FIGS. 1F-1H show another embodiment of a continuous zoom Tele lens and sensor module disclosed herein and numbered 130, in different zoom states. FIG. 1F shows module 130 in its minimum zoom state, having an EFL=10 mm, FIG. 1G shows module 130 in an intermediate zoom state, having an EFL=20 mm, and FIG. 1H shows module 130 in its maximum zoom state, having an EFL=30 mm.


Module 130 comprises a lens 132 with 10 single lens elements L1-L10, an image sensor 134 and optionally an optical window 136. Module 130 is included in a folded Tele camera such as camera 1000. Module 130 has a continuous zoom range that can be switched continuously between a minimum zoom state and a maximum zoom state. The EFL of the maximum zoom state EFLMAX and the EFL of the minimum zoom state EFLMIN fulfil: EFLMAX=3×EFLMIN. Lens 132 is divided into four lens groups, group 1 (“G1”), which is closest to an object, group 2 (“G2”), group 3 (“G3”) and group 4 (“G4”) which is closest to sensor 134. For changing a zoom state, G1 and G3 are moved together as one group (“G13” group) with respect to G2, G4 and to sensor 134. For focusing, G13+G2+G4 move together as one group with respect to sensor 134.



FIG. 1I shows an embodiment of a folded Tele camera disclosed herein and numbered 140. In general, folded Tele cameras are based on one optical path folding element (OPFE). Scanning folded Tele cameras are described for example in the co-owned international patent application PCT/IB2016/057366. Camera 140 is based on two OPFEs, so that one may refer to it as a “double-folded” Tele camera. Camera 140 comprises a first “Object OPFE” 142, an Object OPFE actuator 144, an “Image OPFE” 146 and an Image OPFE actuator 148. A lens (not shown) is included in a lens barrel 150. Camera 140 further includes an image sensor 151 and a focusing actuator 153.


Camera 140 is a scanning folded Tele camera. By rotational movement of Object OPFE 142 and Image OPFE 146, the native (diagonal) FOV (FOVN) of camera 140 can be steered for scanning a scene. FOVN may be 10-40 degrees, and a scanning range of FOVN may be ±5 deg to ±35 deg. For example, a scanning folded Tele camera with a 20 deg FOVN and ±20 deg FOVN scanning covers a Tele FOV of 60 deg.



FIGS. 1J-1K show exemplarily a pop-out Tele camera 150, which is described for example in co-owned international patent application PCT/IB2020/058697. FIG. 1J shows pop-out camera 150 in an operational or “pop-out” state. Pop-out camera 150 comprises an aperture 152, a lens barrel 154 including a lens (not shown), a pop-out mechanism 156 and an image sensor 158. FIG. 1K shows pop-out camera 150 in a non-operational or “collapsed” state. By means of pop-out mechanism 156, camera 150 is switched from the pop-out state to the collapsed state. In some dual-camera embodiments, both the W camera and the T camera may be pop-out cameras. In other embodiments, only one of the W or T cameras may be a pop-out camera, while the other (non-pop-out) camera may be a folded or a non-folded (upright) camera.



FIGS. 1L-O show schematically an exemplary pop-out Tele-Macro camera lens system 170 as disclosed herein in a cross-sectional view. Lens system 170 may be included in a pop-out camera as described in FIGS. 1J-K. FIG. 1L shows lens system 170 in a collapsed state.



FIG. 1M shows lens system 170 in a first Tele state having a first EFL (EFL1) and a first zoom factor (ZF1). FIG. 1N shows lens system 170 in a second Tele state having a second EFL (EFL2) and a second zoom factor (ZF2), wherein EFL1<EFL2 and ZF1<ZF2. FIG. 1O shows lens system 170 in a Tele-Macro state having a third EFL (EFL3) and a third zoom factor (ZF3). In the Tele-Macro state, a camera including lens system 170 can focus to close objects at <350 mm object-lens distance for capturing Macro images.



FIGS. 1P-R show schematically another exemplary pop-out Tele-Macro camera lens system 180 as disclosed herein in a cross-sectional view. Lens system 180 includes a lens 182 and an image sensor 184. Lens system 180 may be included in a pop-out camera as described in FIGS. 1J-K. FIG. 1P shows lens system 180 in pop-out state. In a pop-out state, a camera including lens system 180 can focus to close objects at <350 mm object-lens distance for capturing Macro images. FIG. 1Q shows lens system 180 in a first collapsed state. FIG. 1R shows lens system 180 in a second collapsed state.



FIGS. 1S-T show schematically another exemplary pop-out Tele-Macro camera lens system 190 as disclosed herein in a cross-sectional view. Lens system 190 includes a lens 192 and an image sensor 194. Lens system 190 may be included in a pop-out camera as described in FIGS. 1J-K. FIG. 1S shows lens system 190 in pop-out state. In a pop-out state, a camera including lens system 190 can focus to close objects at less than 350 mm object-lens distance for capturing Macro images. FIG. 1T shows lens system 190 in a collapsed state.


Modules 100, 120, 130, 140, 150, 170, 180, 190 and 220, or cameras including these modules, may be used as a Macro camera module such as Macro camera module 910 to capture Macro images.



FIG. 1U illustrates, in an example 195, exemplary triple camera output image sizes of, and ratios between, an Ultra-Wide (UW) FOV 196, a Wide (W) FOV 197 and a Macro FOV 198. With respect to a Tele camera used for capturing objects at lens-object distances of e.g. 1 m or more, in a Macro mode based on a Tele camera a larger image is formed at the image sensor plane. Thus an image may cover an area larger than the active area of the image sensor, so that only a cropped FOV of the Tele camera's FOV may be usable for capturing Macro images. As an example, consider a Macro camera that may have an EFL of 30 mm and an image sensor with 4 mm active image sensor width. When focused to an object at 5 cm (lens-object distance), a lens-image distance of v=77 mm is required for focusing and an object-to-image magnification of about 1:1.5 is achieved. A Macro FOV of about 43% of the actual Tele FOV may be usable for capturing Macro images.


The following description refers to W cameras, assuming that a UW camera could be used instead.



FIG. 2A illustrates an embodiment of a folded Tele camera with Macro capabilities disclosed herein, numbered 200. Camera 200 comprises an image sensor 202, a lens 204 with an optical axis 212, and an OPFE 206, exemplarily a prism. Camera 200 further comprises a liquid lens (LL) 208 mounted on a top side (surface facing an object, which is not shown) of prism 206, in a direction 214 perpendicular to optical axis 212. The liquid lens has optical properties that can be adjusted by an electrical voltage supplied by an LL actuator 210. In this embodiment, LL 208 may supply a range of 0 to 35 dioptre continuously. In a Macro photography state, the entire lens system comprising LL 208 and lens 204 may have an EFL of 7-40 mm. The DOF may be as shallow as 0.01-2 mm. In this and the following embodiments, the liquid lens has a mechanical height HLL and an optical height (clear height) CH. CH defines a respective height of a clear aperture (CA), where CA defines the area of the lens surface that meets optical specifications. That is, CA is the effective optical area and CH is the effective height of the lens, see e.g. co-owned international patent application PCT/IB2018/050988.


For regular lenses with fixed optical properties (in contrast with a LL with adaptive optical properties), the ratio between the clear height and a lens mechanical height H (CH/H) is typically 0.9 or more. For a liquid lens, the CH/H ratio is typically 0.9 or less, e.g. 0.8 or 0.75. Because of this and in order to exploit the CH of the optical system comprising the prism and lens, HLL may be designed to be 15% larger or 20% larger than the smallest side of the prism top surface. In embodiment 200, LL actuator 210 is located along optical axis 212 of the lens, i.e. in the −X direction in the X-Y-Z coordinate system shown. Lens 204 may be a D cut lens with a lens width W that is larger than lens height H. In an example, a width/height W/H ratio of a D cut lens may be 1.2.



FIG. 2B illustrates yet another embodiment of a folded Tele camera with Macro capabilities disclosed herein, numbered 200′. Camera 200′ comprises the same elements as camera 200, except that in camera 200′ LL 208 is located between prism 206 and lens 204. As in camera 200, lens 204 may be a D cut lens with a lens width W that is larger than a lens height H. In an example, a width/height W/H ratio of a D cut lens may be 1.2. As in camera 200, in a Macro photography state, the entire lens system comprising LL 208 and lens 204 may have an EFL of 7 mm-40 mm and a DOF that may be as shallow as 0.01-7.5 mm.



FIGS. 2C-2E show schematically another embodiment of a continuous zoom Tele lens and sensor module disclosed herein and numbered 220 in different zoom states. Module 220 is included in a folded Tele camera such as camera 1000. Module 220 comprises a lens 222, an (optional) optical element 224 and an image sensor 226. FIGS. 2C-2E show 3 fields with 3 rays for each: the upper marginal-ray, the lower marginal-ray and the chief-ray. Lens 222 includes 6 single lens elements L1-L6. The optical axis is indicated by 228.



FIG. 2C shows module 220 focused to infinity, FIG. 2D shows module 220 focused to 100 mm and FIG. 2E shows module 220 focused to 50 mm.


Lens 222 is divided into two lens groups, G1 (including lens elements L1 and L2) and G2 (including L3, L4, L5 and L6), which move relative to each other, and which additionally move together as one lens with respect to the image sensor for focusing. Because of the very shallow DOF that comes with these cameras, capturing a focus stack and building a good image out of it is not trivial. However, methods described below allow one to do so.


Some multi-cameras are equipped with a W camera and a Tele camera with Macro capabilities both (or only one of the cameras) having a Phase-Detection Auto-Focus (PDAF) sensor such as a 2 PD sensor, i.e. a sensor in which each sensor pixel is divided into two or more sub-pixels and supports depth estimation via calculation of disparity. PDAF sensors take advantage of multiple micro-lenses (“ML”), or partially covered MLs to detect pixels in and out of focus. MLs are calibrated so that objects in focus are projected onto the sensor plane at the same location relative to the lens, see FIG. 3A.



FIG. 3A shows a point object 302 in focus, with MLs projecting the light from the object onto the center of two sub-pixels, causing zero disparity. FIG. 3B shows light-rays from a point object 304 out of focus. “Main-lens”, “ML” and “Sub-pixels pair” are illustrated the same way in both FIGS. 3A and 3B. In FIG. 3B, a left ML projects the light from object 304 onto the center of a left sub-pixel. A right ML projects the same object onto a right sub-pixel, causing a positive disparity value of 2. Objects before/after the focal plane (not shown) are projected to different locations relative to each lens, creating a positive/negative disparity between the projections. The PDAF disparity information can be used to create a “PDAF depth map”. Note that this PDAF depth map is both crude (due to a very small baseline) and relative to the focal plane. That is, zero disparity is detected for objects in focus, rather than for objects at infinity. In other embodiments, a depth map may be created based on image data from a stereo camera or a Time-of-Flight (ToF) camera, or by methods known in the art for monocular depth estimation such as e.g. depth from motion.
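As an illustration of how such a crude, focal-plane-relative depth map might be computed from 2PD data, consider the following sketch; the tile size, the integer disparity search range and the disparity-to-depth gain are assumptions made for illustration, not values from the text.

```python
import numpy as np

# Sketch: per-tile disparity between left/right 2PD sub-pixel images,
# scaled to a depth offset *relative to the focal plane* (zero disparity
# means in focus, as described above). The gain is an assumed,
# calibration-dependent constant; the search range is small because of
# the very small PDAF baseline.

def pdaf_relative_depth(left: np.ndarray, right: np.ndarray,
                        gain_mm_per_px: float = 1.0,
                        tile: int = 8) -> np.ndarray:
    h, w = left.shape
    depth = np.zeros((h // tile, w // tile))
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            tl = left[i*tile:(i+1)*tile, j*tile:(j+1)*tile].astype(float)
            best_d, best_err = 0, np.inf
            for d in range(-3, 4):                  # signed disparity test
                x0 = j * tile + d
                if x0 < 0 or x0 + tile > w:
                    continue
                tr = right[i*tile:(i+1)*tile, x0:x0+tile].astype(float)
                err = np.mean((tl - tr) ** 2)       # SSD matching cost
                if err < best_err:
                    best_d, best_err = d, err
            depth[i, j] = gain_mm_per_px * best_d   # sign: before/after focus
    return depth
```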



FIG. 4A illustrates a method of capturing a Macro focus stack (or “defining a Tele capture strategy”) as disclosed herein. The term “focus stack” refers to a plurality of images that are captured in identical imaging conditions (i.e. camera and object are not moving during the capturing of the focus stack, but the focus of the lens is moving in defined steps between consecutive image captures). An application processor (AP), for example AP 940 shown in FIG. 9, may be configured to perform the steps of this method. An object is brought into focus in step 402. In some embodiments and for bringing an object or region into focus, a focus peaking map as known in the art may be displayed to a user. If a scanning Tele camera such as camera 140 is used, an object may be brought into focus by detecting the object in the W camera FOV and automatically steering the scanning Tele camera FOV towards this object. An object in the W camera FOV may be selected for focusing automatically by an algorithm, or manually by a human user. For example, a saliency algorithm providing a saliency map as known in the art may be used for automatic object selection by an algorithm. The user gives a capture command in step 404. A first image is captured in step 406. In step 408, the image is analysed according to methods described below and shown in FIG. 5A and FIG. 5B. In some embodiments, only segments of the image (instead of the entire image) may be analysed. The segments that are analysed may be defined by an object detection algorithm running on the image data from the Macro camera or on the image data of the W camera. Alternatively, the segments of the image that are analysed (i.e. OOIs) may be marked manually by a user. According to the results of this analysis, the lens is moved in defined steps for focusing forward (i.e. the focus moves a step away from the camera) in step 410, or for focusing backward (i.e. the focus moves a step towards the camera) in step 412. The forward or backward focusing may depend on a command generated in step 408. A backward focusing command may, for example, be triggered when a plateau A (A′) in FIG. 5B (or FIG. 5D) is detected. A forward focusing command may, for example, be triggered when no plateau A (A′) in FIG. 5B (or FIG. 5D) is detected. An additional image is captured in step 414. These steps are repeated until the analysis in step 408 outputs a command for reversing the focusing direction or an abort command to abort focus stack capturing. An abort command may, for example, be triggered when a plateau A (A′) or E (E′) in FIG. 5B (or FIG. 5D) is detected. The abort command ends the focus stack capture in step 416. In another embodiment, step 410 may be replaced by step 412 and step 412 by step 410, i.e. first the backward focusing may be performed and then the forward focusing.
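As a rough illustration of this capture loop, consider the sketch below. The camera interface (capture, focus_step, focus_reset), the analysis function and the plateau detector are assumed names for illustration only; the patent does not specify an API.

```python
# Hedged sketch of the FIG. 4A loop: capture, analyse, shift focus,
# and stop a direction when a plateau (A/E in FIG. 5B) is detected.

def capture_focus_stack(camera, analyse, detect_plateau):
    stack = [camera.capture()]                 # steps 402-406
    for direction in (+1, -1):                 # forward, then backward
        scores = [analyse(stack[-1])]          # step 408, e.g. LoG measure
        while not detect_plateau(scores):      # plateau -> reverse or abort
            camera.focus_step(direction)       # steps 410/412
            stack.append(camera.capture())     # step 414
            scores.append(analyse(stack[-1]))
        camera.focus_reset()                   # assumed: back to start focus
    return stack                               # step 416: capture finished
```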


If a scanning Tele camera such as camera 140 is used for capturing a Macro focus stack and defining a Tele capture strategy, an object that covers a FOV segment larger than the native Tele FOV (“object FOV”) can be captured by multiple focus stacks that each cover a different segment of the object FOV. For example, W camera image data may be used to divide the object FOV into a multitude of smaller (than the Tele FOVN) FOVs, which are captured consecutively with the focus stack capture process as described above and stitched together after capturing the multitude of FOVs.


If a continuous zoom Tele camera such as camera 120 or camera 130 is used for capturing a Macro focus stack and defining a Tele capture strategy, a specific zoom factor may be selected, e.g. depending on the size, content or color of the object FOV. For example, W camera image data can be used to analyze a Macro object. Based on this analysis, a suitable zoom factor for the continuous zoom Tele camera may be selected. A selection criterion may be that the FOV of the continuous zoom Tele camera fully covers the Macro object. Other selection criteria may be that the FOV of the continuous zoom Tele camera not just fully covers the Macro object, but additionally covers a certain amount of background FOV, e.g. for aesthetic reasons. Yet other selection criteria may be to select a FOV so that the images captured by the continuous zoom Tele camera have a certain DOF. As a first example, a larger DOF may be beneficial for capturing an object with a focus stack including a smaller number of single images. As a second example, a specific DOF may be beneficial, e.g. for the Macro image's aesthetic appearance.



FIG. 4B illustrates another method of capturing a focus stack (or defining a Tele capture strategy). An AP (e.g. AP 940 shown in FIG. 9) may be configured to perform the steps of this method. In step 452, a PDAF map is captured with the W camera. In step 454, a depth map is calculated from the PDAF map as known in the art. Focus stack parameters such as focus step size and focus stack brackets are derived from the depth map in step 456. The focus stack brackets are the upper and lower limits of the focus stack, i.e. they include two planes, a first in-focus plane with the largest object-lens distance in the focus stack, and a second in-focus plane with the smallest object-lens distance in the focus stack. A plurality of images with shifted focus is captured between these two limits. The focus step size defines the distance between two consecutive in-focus planes captured in the focus stack. A focus plane may have a specific depth defined by the DOF (with the focus plane located at its center). The parameters defined in step 456 may be used to control the camera. For example, the parameters may be fed into a standard Burst mode feature for focus stack capture, as supplied for example on Android smartphones. In step 458, the focus stack is captured according to the parameters. In other embodiments, the PDAF map in step 452 may be captured not by a W camera, but by a Macro capable Tele camera. The PDAF map of the Tele camera may exhibit a higher spatial resolution, which may be desirable, and a stronger blurring of out-of-focus areas, which may be desirable or not. The stronger blurring of out-of-focus areas may be desirable for an object having a shallow depth, e.g. a depth of <1 mm. The stronger blurring of out-of-focus areas may not be desirable for an object having a larger depth, e.g. a depth of >2.5 mm. A strong blurring may render a depth calculation as performed in step 454 impossible.
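A minimal sketch of step 456 is given below; the rule that the focus step roughly equals the DOF (so that consecutive in-focus planes abut) is an illustrative assumption, not a formula prescribed by the text.

```python
import numpy as np

# Sketch: derive focus stack brackets and step size from a depth map.

def focus_stack_params(depth_map_mm: np.ndarray, dof_mm: float) -> list:
    near = float(np.nanmin(depth_map_mm))    # smallest object-lens distance
    far = float(np.nanmax(depth_map_mm))     # largest object-lens distance
    n_images = int(np.ceil((far - near) / dof_mm)) + 1
    return [near + k * dof_mm for k in range(n_images)]  # in-focus planes

# e.g. an object spanning 48-53 mm depth with a 0.5 mm DOF -> 11 planes,
# which may then be fed into a burst-capture mode as described above.
```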


In some embodiments, in step 452, PDAF image data may be captured from specific scene segments only, e.g. for a region of interest (ROI) only. In other embodiments, in step 452, PDAF image data may be captured from the entire scene, but the depth map calculation in step 454 may be performed for segments only. The specific scene segments may be identified by image analysis performed on image data from a UW, a W or the Tele camera. PDAF maps may be captured in step 452 not only from single images, but also from a video stream.


In some embodiments, instead of calculating a depth map in step 454, a depth map or image data for calculating a depth map may be provided by an additional camera.


In some embodiments, a different analysis method may be applied in order to analyse the entire Macro scene at only one (or only a few) focus position(s). From this analysis, a preferred focus stack step size and focus stack range may be derived. These values are then fed into a standard Burst mode feature for focus stack capture.


In some embodiments, for focus stack capture in step 458, imaging settings such as the values for white-balance and exposure time may be kept constant for all images captured in the focus stack.


Capturing a focus stack comprising Macro images with shallow DOF may require actuation of the camera's lens with high accuracy, as the DOF defines a minimum accuracy limit for the focusing process. The requirements for actuation accuracy may be derived from the images' DOF. For example, an actuation accuracy may be required that allows for controlling the location of the focus plane with an accuracy finer than the DOF by a factor of 2-15. As an example, consider a focus stack including Macro images having a DOF of 50 μm, i.e. segments of the scene that are located less than 25 μm from the focus plane are in focus. The minimum accuracy for focusing would accordingly be about 3 μm-25 μm.


Optical image stabilization (OIS) as known in the art may be used during focus stack capturing. OIS may be based on actuating the lens or the image sensor or the OPFE of camera 910. In some embodiments, depth data of the Macro scene may be used for OIS.



FIG. 5A shows exemplarily a Macro object (here a flower) and a camera for capturing the Macro object (not to scale). The flower is captured from a top position (marked by “camera”).



FIG. 5B shows an exemplary output graph for the Macro setup of FIG. 5A obtained using a method described in FIG. 4A. The dots in the graph represent the results of the analysis for a specific image of the focus stack, i.e. each image in the focus stack is analysed during focus stack capturing as described above, where the analysis provides a number (sum of pixels in focus) for each image. These numbers may be plotted as illustrated here. The analysis may use functions known in the art such as e.g. the Laplacian of Gaussian, or Brenner's focus measure. An overview of suitable functions may be found in Santos et al., “Evaluation of autofocus functions in molecular cytogenetic analysis”, Journal of Microscopy, Vol. 188, Pt 3, December 1997, pp. 264-272.
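For illustration, such a focus measure might look as follows; the Gaussian kernel size and the response threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Sketch: Laplacian-of-Gaussian focus measure returning a number that
# grows with the count of in-focus pixels, as used for the graph above.

def focus_measure(img_gray: np.ndarray, thresh: float = 25.0) -> int:
    smoothed = cv2.GaussianBlur(img_gray, (5, 5), 0)    # suppress noise
    response = cv2.Laplacian(smoothed, cv2.CV_64F)      # edge response
    return int(np.count_nonzero(np.abs(response) > thresh))

# Plotting focus_measure() for each image of the stack against its focus
# position yields a curve with plateaus A/E and gradients B/D (FIG. 5B).
```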


The analysis output is a measure of the number of pixels in each image that are in focus. The larger the number output for a specific image, the higher the overall number of pixels in the image that are in focus. The assumption of the focus stack analysis is that a major part of Macro objects exhibits an analysis curve characterized by common specific features. The curve is characterized (starting from the left image side, i.e. from a camera-scene setup where the focus is farther away than the Macro object) by a plateau A (focus farther away than the object, so almost no pixel is in focus and there is a small output number), followed by a positive gradient area B (where first the farthest parts of the Macro object are in focus and then larger parts of the Macro object are in focus), followed by a plateau C (where for example the center of the Macro object and large parts of the object are in focus), which is followed by a negative gradient D (where the focus moves away from the Macro object center), followed by a plateau E. The abort command as described in FIG. 4A is triggered by detecting plateau A or plateau E. Depending on the focus position at which the focus stack capture was started, the focus stack capture will be aborted or the direction of focus shifting will be switched (from towards the camera to away from the camera, or the other way around). In general, focus stack capture may be started with a focus position where a part or point of the Macro object is in focus. The analysis will output a high number for the first image. Then the focus is moved away from the camera, which means that the analysis output moves on plateau C (towards the left in the graph), until it reaches the gradient area B and in the end the plateau area A. If there is no further increase in the number output by the analysis, the focus is moved back to the first position (at plateau C) and the focus is shifted towards the camera. The same steps as described above are performed until plateau E is reached, at which point the focus stack capture process is finished.



FIG. 5C shows another exemplary Macro object (here a bee) and another camera for capturing the Macro object (not to scale). FIG. 5D shows another exemplary output graph for the Macro setup of FIG. 5C, obtained using a method described in FIG. 4A. Although varying in details because of the different object depth distribution, features A′-E′ here are similar to features A-E in FIG. 5B.


The Tele images of the focus stack captured according to methods described e.g. in FIG. 4A, FIG. 4B and FIG. 5A-D are the input Macro images that may be further processed, e.g. by the method described in FIG. 6.



FIG. 6 illustrates a method of generating single Macro images from a plurality of images of a focus stack. An AP such as AP 940 may be configured to perform the steps of this method. Suitable images of the focus stack are selected by analysis methods known in the art in step 602. Criteria that may disqualify an image as a “suitable” image may include: significant motion blur (e.g. from handshake) in an image, redundancy in captured data, or bad focus. Only selected suitable images are used further in the process. The suitable images are aligned with methods known in the art in step 604. Suitable image regions in the aligned images are selected in step 606. Selection criteria for “suitable” regions may include the degree of focus of an area, e.g. whether an area is in focus or has a certain degree of defocus blur. The choice of selection criteria depends on the input of a user or program. A user may wish for an output image with a Macro object that is all-in-focus (i.e. an image with a depth of field larger than the depth of the Macro object), meaning that all parts of the Macro object are in focus simultaneously. However, the all-in-focus view generally does not represent the most pleasant image for a human observer (as human perception comes with a certain amount of blurring by depth, too), so an image with a certain focus plane and a certain amount of blurred area may be more appealing. The “focus plane” is the plane formed by all points of an un-processed image that are in focus. Images from a focus stack generated as described in FIGS. 4A-B and a selection of suitable images in step 606 may allow choosing any focus plane and any amount of blurring in the output image (step 612) continuously. The amount of blurring of image segments that are not in focus may depend on their location in a scene. The amount of blurring may be different for image segments of object segments that are farther away from the camera by some distance d with respect to the focus plane than for image segments that are closer to the camera than the focus plane by the same distance d. The continuous control of the focus plane's position and the depth of field of the new Macro image may be performed after capturing the focus stack (“post-capture”). In some embodiments, continuous control of the focus plane's position and the depth of field of the new Macro image may be performed before capturing the focus stack (“pre-capture”) as well, e.g. enabled by showing a preview video stream to a user. The selected images are fused into a single image with methods known in the art in step 608. In some embodiments and optionally, the fusion in step 608 may use depth map information, estimated e.g. using depth-from-focus or depth-from-defocus methods known in the art. In other embodiments, depth map information from PDAF (see FIGS. 3A-B) may be used. The PDAF information may be provided from the image sensor of the UW camera, the W camera or the Tele camera with Macro capability. In some embodiments, PDAF data may be captured by the Tele camera simultaneously with capturing the Tele focus stack images, i.e. a stack of PDAF images is captured under identical focus conditions as the focus stack images. From this PDAF image stack a depth map may be calculated. For example, one may use in-focus image segments from a single PDAF image only, as they can be assigned to a specific depth with high accuracy. By fusing the depth estimation data from all the in-focus image segments of the PDAF image stack, a high-quality depth map may be generated.
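As an illustration of the fusion in step 608, a generic focus-stacking recipe is sketched below; it picks the sharpest source image per pixel and is a standard technique, not the patent's specific algorithm.

```python
import cv2
import numpy as np

# Sketch: fuse an aligned focus stack by per-pixel sharpness voting.

def fuse_stack(stack: list) -> np.ndarray:
    """stack: aligned BGR images of identical shape (as after step 604)."""
    sharpness = []
    for img in stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))  # local measure
    winner = np.argmax(np.stack(sharpness), axis=0)         # per-pixel index
    fused = np.zeros_like(stack[0])
    for k, img in enumerate(stack):
        fused[winner == k] = img[winner == k]
    return fused
```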


In some embodiments, both Tele image data and Wide image data may be fused to one image in step 608.


In other embodiments, only a subset of the images selected in step 602 may be fused into a single image in step 608 and output in step 612. For example, a subset of only 1, only 2, only 3, only 4, or only 5 images may be fused into one single image in step 608 and output in step 612. In yet another embodiment, only one of the images selected in step 602 may be output in step 612. The single output image is fine-tuned in step 610 to finalize results by, e.g., reducing noise. The fine-tuning may include smoothing image seams, enhancements, filters like radial blur, chroma fading, etc. The image is output in step 612.


In other embodiments, the selection of suitable image regions in step 606 may be based on an image analysis performed on images from a W camera. Because of the wider FOV and larger DOF of a W camera (with respect to a Macro capable Tele camera), it may be beneficial to additionally use W image data for generating the single Macro images, e.g. for object identification and segmentation. For example, a Macro region of interest (ROI) or object of interest (OOI) may be detected in FOVW before or during focus stack capturing with the Macro capable Tele camera. The ROI or OOI may be segmented according to methods known in the art. Segmentation means identification of the coordinates of the FOV segment that contains the ROI or OOI. Via calibration between FOVW and FOVT, these coordinates are translated to FOVT coordinates. The coordinates of ROIs or OOIs may be used for the selection of suitable image regions in step 606. In some embodiments, the segmentation analysis may be performed on single images. In other embodiments, the segmentation analysis may be performed on a video stream, i.e. on a sequence of single images.
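A minimal sketch of this coordinate translation follows, assuming the FOVW-to-FOVT calibration is available as a 3×3 homography H; that representation is an assumption, as the text only states that calibration is used.

```python
import numpy as np

# Sketch: map an ROI bounding box from Wide to Tele image coordinates.

def w_to_t_bbox(bbox_w: tuple, H: np.ndarray) -> tuple:
    """bbox_w: (x0, y0, x1, y1) in Wide coordinates; H: 3x3 homography
    (assumed to come from stored W/T calibration data)."""
    corners = np.array([[bbox_w[0], bbox_w[1], 1.0],
                        [bbox_w[2], bbox_w[3], 1.0]]).T
    mapped = H @ corners
    mapped /= mapped[2]                       # perspective normalization
    xs, ys = mapped[0], mapped[1]
    return (xs.min(), ys.min(), xs.max(), ys.max())
```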


In some embodiments, image information of the W camera may be used for further tasks. One or more W images may be used as a ground truth “anchor” or reference image in the Macro image generation process. Ground truth refers here to W image information about a scene segment that is significantly more complete than the Tele image information of the same scene segment. A single W image provides significantly more information about a Macro object than a single Tele image. As an example one may think of an ROI or OOI that is mostly in-focus and fully visible in a single W image but only partly visible in a single Tele image, e.g. because of the significantly shallower Tele DOF. The W ground truth or reference image may be used as ground truth anchor in the following steps of the method described in FIG. 6:

    • In step 602, a W image may be used for selection of suitable images. The ground truth may, e.g., allow identification of Tele images that exceed a certain threshold of focus blur or motion blur.
    • In step 604, a W image may be used as a reference image for aligning images. In one example the Tele images of the focus stack may all be aligned with reference to the W reference image. In another example, the Tele images of the focus stack may first all be aligned with reference to the W reference image, and for more detailed alignment the Tele images may be aligned with reference to other Tele images of the focus stack.
    • In step 606, a W image may be used for defining suitable image regions as described above.
    • In step 608, a W image may be used for correction of fusion artifacts. Fusion artifacts are defined as visual features that are not present in the actual scene but that are an undesired byproduct of the image fusion process.
    • In step 610, a W image may be used to identify image segments in the fused image that exhibit undesired features and that may be corrected. Such undesired features may e.g. be misalignments of images, unnatural color differences or blurring caused by e.g. de-focus or motion. De-focus blur may e.g. be induced by estimation errors in the depth map used in image fusion step 608.


In yet another embodiment, the method described above may not involve any image processing such as described in steps 608-612, but may be used to select a single image from the focus stack. The selection may be performed automatically (e.g. by analyzing the focus stack for the sharpest, most clear and well-composed image with a method as described in FIGS. 5A-5D) or manually by a human user. FIG. 7 shows a graphical user interface (GUI) that a user may use to transmit a command to modify the appearance of the output image, e.g. a user may transmit a command (e.g. “forward blur” and “backward blur”) for a more blurred image or an image where larger parts are in focus. “Backward blur” and “forward blur” refer to the blur options described in FIGS. 8A-8B. In one embodiment, in case the user command is to modify the appearance of an image, the method will be re-performed from step 606 on, however with a different set of selection criteria. In another embodiment, in case the user command is to modify the appearance of an image, a blurring algorithm (artificial blurring) may be applied to the output image to form another output image. The focus plane may be changed by marking a new image segment that should be in focus by touching the device screen. The blur may be changed according to the wishes of the user. The user may wish to modify the DOF of the displayed image, e.g. from an all-in-focus image (i.e. infinite DOF) to a more shallow DOF. A user may wish to modify the focus plane of an image that is not all-in-focus. A user may modify the image, and a preview image generated by an estimation of the projected output image may be displayed. If a user clicks “Apply”, the full algorithm may be applied as described in FIG. 6.



FIG. 8A shows a symmetric blur function. By moving the sliders (forward/backward blur) in FIG. 8A, a user may move linearly on the X axis, with blur applied to the image as indicated on the Y axis. FIG. 8B shows an asymmetric blur function with functionality as described in FIG. 8A. Application of the blur function enables the user to blur the foreground and the background differently. For example, there are cases where forward blur may be entirely unwanted, from an artistic point of view. Asymmetric blur enables this possibility.
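For illustration, the symmetric and asymmetric blur controls might be modeled as follows; the linear ramps and the gain parameters are illustrative assumptions.

```python
# Sketch: blur radius as a function of the signed depth offset d from
# the focus plane (d > 0: behind the plane, d < 0: in front of it).

def blur_radius(d_mm: float, backward_gain: float, forward_gain: float) -> float:
    """Equal gains give the symmetric case of FIG. 8A; forward_gain = 0
    disables foreground blur entirely, as enabled by FIG. 8B."""
    return backward_gain * d_mm if d_mm >= 0 else forward_gain * (-d_mm)
```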


In some embodiments, further image features such as e.g. artificial lighting may be provided. Artificial lighting means that the lighting scenario in the scene can be changed by a user or a program, e.g. by artificially moving a light source within a scene. For artificial lighting, the presence of a depth map may be beneficial.



FIG. 9 shows a system 900 for performing methods as described above. System 900 comprises a first Tele camera module (or simply “Tele camera”) 910. Tele camera 910 may be a Macro capable folded Tele camera, a double-folded Tele camera, a pop-out Tele camera, a scanning folded Tele camera, or an upright (non-folded) Tele camera. If camera 910 is a folded camera, it comprises an optical path folding element (OPFE) 912 for folding an optical path by 90 degrees, a lens module 914 and an image sensor 916. A lens actuator 918 performs a movement of lens module 914 to bring the lens to different lens states for focusing and optionally for OIS. System 900 may comprise an additional, second camera module 930, and an application processor (AP) 940. The second camera module 930 may be a W camera or a UW camera. In some embodiments, both a W camera and a UW camera may be included. AP 940 comprises an image generator 942 for generating images, an image analyzer 946 for analyzing images as described above, and an object detector 944. A human machine interface (HMI) 950 such as a smartphone screen allows a user to transmit commands to the AP. A memory element 970 may be used to store image data. Calibration data for calibration between camera 910 and second camera module 930 may be stored in memory element 970 and/or in additional memory elements (not shown). The additional memory elements may be integrated in camera 910 and/or in second camera module 930. The additional memory elements may be EEPROMs (electrically erasable programmable read-only memory). Memory element 970 may e.g. be an NVM (non-volatile memory).



FIG. 10 illustrates a dual-camera (which may be part of a multi-camera with more than two cameras) known in the art and numbered 1000, see e.g. co-owned international patent application PCT/IB2015/056004. Dual-camera 1000 comprises a folded Tele camera 1002 and a Wide camera 1004. Tele camera 1002 comprises an OPFE 1006, a lens 1008 that may include a plurality of lens elements (not visible in this representation, but visible e.g. in FIGS. 1C-H) with an optical axis 1010, and an image sensor 1012. Wide camera 1004 comprises a lens 1014 with an optical axis 1016 and an image sensor 1018. OPFE 1006 folds the optical path from a first optical path 1020, which is substantially parallel to optical axis 1016, to a second optical path which is substantially parallel to optical axis 1010.


While this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. The disclosure is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.


Furthermore, for the sake of clarity the term “substantially” is used herein to imply the possibility of variations in values within an acceptable range. According to one example, the term “substantially” used herein should be interpreted to imply possible variation of up to 5% over or under any specified value. According to another example, the term “substantially” used herein should be interpreted to imply possible variation of up to 2.5% over or under any specified value. According to a further example, the term “substantially” used herein should be interpreted to imply possible variation of up to 1% over or under any specified value.


All references mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual reference was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present application.

Claims
  • 1. A system, comprising: a Wide camera for providing at least one Wide image; a Tele camera comprising a Tele lens module; a lens actuator for moving the Tele lens module for focusing to any distance or set of distances between 3.0 cm and 35 cm with an object-to-image magnification between 1:5 and 25:1; and an application processor (AP) configured to analyze image data from the Wide camera to automatically select an object and to define a capture strategy for capturing with the Tele camera a sequence of Macro images of the object with a focus plane shifted from one captured Macro image to another captured Macro image, and to generate a new Macro image from this sequence, wherein the system is included in a mobile electronic device.
  • 2. The system of claim 1, wherein the focus plane and a depth of field of the new Macro image can be controlled continuously post-capture.
  • 3. The system of claim 1, wherein the focusing is to object-lens distances of 3.0-25 cm.
  • 4. The system of claim 1, wherein the focusing is to object-lens distances of 3.0-15 cm.
  • 5. The system of claim 1, wherein the Tele camera is a folded Tele camera comprising an optical path folding element.
  • 6. The system of claim 1, wherein the Tele camera is a double-folded Tele camera comprising two optical path folding elements.
  • 7. The system of claim 1, wherein the Tele camera is a pop-out Tele camera comprising a pop-out lens.
  • 8. The system of claim 1, wherein the AP is configured to calculate a depth map from Wide image data or Wide phase detection auto-focus (PDAF) image data and to use the depth map to define the capture strategy for capturing with the Tele camera a sequence of Macro images or to generate the new Macro image.
  • 9. The system of claim 1, wherein the Tele camera has an EFL of 10-20 mm.
  • 10. The system of claim 1, wherein the Tele camera has an EFL of 20-40 mm.
  • 11. The system of claim 1, wherein an Ultra-Wide camera is used instead of the Wide camera for providing at least one Ultra-Wide image.
  • 12. The system of claim 1, wherein the Tele camera can be switched between two or more discrete zoom states.
  • 13. The system of claim 12, wherein the AP is configured to analyze image data from the Wide camera to switch the Tele camera to a specific zoom state for capturing Macro images which have a specific magnification and a specific field of view.
  • 14. The system of claim 12, wherein a zoom factor of a maximum zoom state is 2× larger than a zoom factor of a minimum zoom state.
  • 15. The system of claim 13, wherein the analysis of image data from the Wide camera includes use of a saliency map.
  • 16. The system of claim 1, wherein the generation of the new Macro image uses a Wide image as a reference image.
  • 17. The system of claim 1, wherein the capture strategy is adjusted during capture of the sequence of Macro images based on information from already-captured Macro images.
  • 18. The system of claim 1, wherein the mobile electronic device is a smartphone.
  • 19. The system of claim 1, wherein the mobile electronic device is a tablet.
  • 20. A method, comprising: in a mobile electronic device comprising a Wide camera and a Tele camera: using the Wide camera to provide at least one Wide image; focusing the Tele camera to any distance or set of distances between 3.0 cm and 35 cm with an object-to-image magnification between 1:5 and 25:1; based on image data from the Wide camera, automatically selecting an object and defining a capture strategy for capturing with the Tele camera a sequence of Macro images of the object with a focus plane shifted from one captured Macro image to another captured Macro image; and generating a new Macro image from the sequence of Macro images.
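For illustration, one plausible implementation of the generation step recited in claims 1 and 20 is sharpness-weighted focus stacking. The sketch below, in which the function name, the Laplacian sharpness measure and the smoothing parameters are illustrative assumptions rather than the claimed method, fuses an aligned, focus-bracketed sequence; restricting the fused frames to a window around a target focus distance is one simple way to emulate the post-capture control of focus plane and depth of field recited in claim 2:

```python
import numpy as np
from scipy import ndimage

def fuse_focus_stack(stack, focus_positions, target=None, dof=None):
    """Fuse a focus-bracketed Macro sequence into one image.

    stack:           list of HxWx3 float arrays, aligned to a common reference
                     (e.g. registered to a Wide image)
    focus_positions: object-lens distance (cm) at which each frame was focused
    target, dof:     optional post-capture focus plane and depth of field (cm)
    """
    frames = list(zip(stack, focus_positions))
    if target is not None and dof is not None:
        # Keep only frames focused within the requested depth-of-field window.
        selected = [(im, f) for im, f in frames if abs(f - target) <= dof / 2]
        frames = selected or frames          # fall back to all frames if empty

    # Per-pixel sharpness: absolute Laplacian of the luma channel.
    sharpness = []
    for im, _ in frames:
        luma = im.mean(axis=-1)
        sharpness.append(np.abs(ndimage.laplace(luma)))
    weights = np.stack(sharpness)                               # N x H x W
    weights = ndimage.uniform_filter(weights, size=(1, 9, 9))   # smooth seams
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8

    imgs = np.stack([im for im, _ in frames])                   # N x H x W x 3
    return (imgs * weights[..., None]).sum(axis=0)
```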
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/600,341 filed Sep. 30, 2021 (now allowed), which was a 371 application from international application PCT/IB2021/054186 filed May 15, 2021, and is related to and claims priority from U.S. Provisional Patent Applications No. 63/032,576 filed May 30, 2020, No. 63/070,501 filed Aug. 26, 2020, No. 63/110,057 filed Nov. 5, 2020, No. 63/119,853 filed Dec. 1, 2020, No. 63/164,187 filed Mar. 22, 2021, No. 63/173,446 filed Apr. 11, 2021 and No. 63/177,427 filed Apr. 21, 2021, all of which are expressly incorporated herein by reference in their entirety.

Related Publications (1)
Number Date Country
20230353871 A1 Nov 2023 US
Provisional Applications (7)
Number Date Country
63177427 Apr 2021 US
63173446 Apr 2021 US
63164187 Mar 2021 US
63119853 Dec 2020 US
63110057 Nov 2020 US
63070501 Aug 2020 US
63032576 May 2020 US
Continuations (1)
Number Date Country
Parent 17600341 US
Child 18346243 US