Embodiments disclosed herein relate in general to digital cameras, and in particular to thin folded zoom camera modules.
Personal computing and/or communication devices (such as smartphones), also referred to as “mobile electronic devices”, often have dual-aperture zoom cameras on their back side, in which one camera (“Wide camera”) has a “Wide” field of view (FOVW) and the other camera (“Tele camera”) has a narrow or “Tele” field of view (FOVT).
Known folded Tele cameras (also referred to herein as “native” folded Tele cameras) for electronic mobile devices (e.g. smartphones) may have a focal length of e.g. 10 mm-30 mm while keeping a low module height and an aperture as large as possible, which is beneficial e.g. for imaging in low-light conditions and for high optical resolution. An exemplary aperture diameter may be 6 mm. In folded Tele cameras with a cut Tele lens, the aperture size may range, for example, from 3 mm to 8 mm in width, and more preferably from 6 mm to 7 mm in width.
A folded Tele camera with such a long focal length and a relatively large aperture may produce an image with a very shallow depth of field (DOF). This may be desired for creating optical Bokeh, but may cause a problem in scenes with objects spread over a range of distances from the cameras that must all be kept in focus. For example, a folded Tele camera with a 30 mm effective focal length (EFL) and an f-number (“f/#”) of f/4 (“camera 1”), focusing on an object that is 3 m away, will have an object-side DOF of about 10 cm (assuming a 2 μm circle of confusion). In folded Tele cameras, typical f-numbers are in the range f/1.5 to f/5.
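To make such arithmetic reproducible, the following is a minimal sketch of the standard thin-lens/hyperfocal DOF calculation; depending on the exact formula and circle-of-confusion convention used, the computed figure may differ somewhat from the approximate value quoted above. All names and the example numbers are illustrative.

```python
def depth_of_field_mm(efl_mm: float, f_number: float,
                      focus_distance_mm: float, coc_mm: float) -> float:
    """Object-side DOF using the standard thin-lens/hyperfocal approximation."""
    # Hyperfocal distance: H = f^2 / (N * c) + f
    h = efl_mm ** 2 / (f_number * coc_mm) + efl_mm
    s = focus_distance_mm
    near = s * (h - efl_mm) / (h + s - 2 * efl_mm)  # near limit of acceptable focus
    far = s * (h - efl_mm) / (h - s)                # far limit (assumes s < H)
    return far - near

# "Camera 1" from the text: EFL = 30 mm, f/4, focused at 3 m, 2 um CoC.
print(depth_of_field_mm(30.0, 4.0, 3000.0, 0.002))  # object-side DOF, in mm
```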
Slight misalignment in the position of the lens may cause significant defocus of the object intended to be in focus.
There is therefore a need for, and it would be beneficial to have, expanded capabilities of folded Tele cameras to control (i) the amount of light reaching the Tele sensor and (ii) the DOF of the Tele image by adapting the camera's f-number.
Embodiments disclosed herein teach folded Tele cameras with adaptive apertures that (i) adapt the Tele aperture according to scene conditions, and (ii) still maintain a low folded camera module height (i.e. the adaptive aperture incurs no additional height penalty for the camera module). Such systems comprise a dedicated, adaptive, controllable aperture (henceforth, “adaptive Tele aperture” or simply “adaptive aperture” or “AA”) that can be added to the folded Tele camera. Such systems may be used with lenses with cut lens designs or with lenses without cut lens designs.
In various embodiments, an adaptive aperture disclosed herein is formed by a linearly sliding diaphragm using a single pair of linearly sliding blades or a plurality of overlapping linearly sliding blades to provide an aperture of a desired size. The terms “adaptive aperture” and “diaphragm” refer to the same entity.
In various embodiments there are provided systems comprising a folded camera that includes a lens module with a native aperture, the lens module having a height HM; an adaptive aperture located between the native aperture and an optical path folding element; and an adaptive aperture (AA) forming mechanism for forming the adaptive aperture, wherein the AA forming mechanism has a height HAA not larger than HM.
In various embodiments, the AA forming mechanism includes an actuator and at least one pair of blades.
In some embodiments, the actuator is operative to move the at least one pair of blades linearly to a given position to form the adaptive aperture.
In some embodiments, the at least one pair of blades includes a plurality of pairs of blades, each pair of the plurality operative to be moved to a different position.
In some embodiments, the lens module includes a folded Tele lens with a cut lens design.
In some embodiments, the folded camera is a scanning folded Tele camera. In some embodiments, the scanning folded Tele camera captures a plurality of images of a scene with different fields of view. In some embodiments, a processor is configured to control the adaptive aperture so that the plurality of images have a similar depth of field. In some embodiments, the processor is configured to stitch the plurality of images into one or more images having a larger field of view than any single image.
In some embodiments, the adaptive aperture does not limit the native aperture.
In some embodiments, the adaptive aperture is round in a closed position.
In some embodiments, the adaptive aperture is rectangular in a closed position.
In some embodiments, the adaptive aperture is square in a closed position.
In various embodiments, a system further comprises a processor configured for controlling the AA forming mechanism. In some embodiments, the controlling is based on the lighting conditions of a scene. In some embodiments, the processor is configured to control the adaptive aperture so that an image captured with the folded camera has a depth of field similar to that of an image simultaneously captured with a second camera. In some embodiments, the processor is configured to control the adaptive aperture so that each image captured in a focus stack with the folded camera has a depth of field similar to that of all other images captured in the focus stack.
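As an illustration of matching DOF between the folded Tele camera and a second (e.g. Wide) camera, the sketch below solves the approximation DOF ≈ 2·N·c·s²/f² for the Tele f-number that reproduces the Wide camera's DOF at the same focus distance. This is a hypothetical helper, not a method the embodiments prescribe; the camera parameters and the approximation itself are assumptions. In practice the result would be clamped to the aperture range the AA mechanism supports.

```python
def tele_f_number_matching_wide_dof(n_wide: float, f_wide_mm: float,
                                    f_tele_mm: float, coc_wide_mm: float,
                                    coc_tele_mm: float) -> float:
    # DOF ~ 2*N*c*s^2/f^2; equating Wide and Tele DOF at the same focus
    # distance s and solving for the Tele f-number:
    return n_wide * (coc_wide_mm / coc_tele_mm) * (f_tele_mm / f_wide_mm) ** 2

# E.g. Wide f/1.8 at 7 mm EFL, Tele at 21 mm EFL, equal circles of confusion:
print(tele_f_number_matching_wide_dof(1.8, 7.0, 21.0, 0.002, 0.002))  # ~16.2
```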
In some embodiments, the folded camera is operative to capture objects at object-image distances of less than 50 cm, of less than 25 cm, or of less than 15 cm.
In some embodiments, the folded camera includes a sensor for detecting the lighting conditions. In some embodiments, the lighting conditions are detected with a sensor of a second camera. In some embodiments, the lighting conditions are detected using an illumination estimation.
In some embodiments, the processor is configured to control the AA forming mechanism based on scene depth. The scene depth may be detected with a sensor of the folded camera or with a sensor of a second camera. In some embodiments, the second camera may be a Time-of-Flight camera.
In some embodiments, the processor is configured to calculate the scene depth (i) from stereo camera data provided by the folded Tele camera and by a second camera, (ii) from stereo camera data provided by a second camera and by a third camera, (iii) by depth from motion estimation, wherein the depth from motion estimation uses image data provided by the folded camera or by a second camera, or (iv) from a plurality of images captured under different adaptive aperture settings.
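For the stereo options, a minimal sketch of the standard rectified-stereo relation Z = f·B/d (focal length in pixels, B the camera baseline); the baseline, focal length and disparity values below are assumptions, and the two images are assumed to be rectified to a common focal length.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float, baseline_mm: float) -> np.ndarray:
    """Standard rectified-stereo relation: Z = f * B / d."""
    z = np.full(disparity_px.shape, np.inf)  # zero disparity -> infinite depth
    valid = disparity_px > 0
    z[valid] = focal_px * baseline_mm / disparity_px[valid]
    return z  # depth in mm

# E.g. a camera pair with a 10 mm baseline and a 1500 px focal length:
d = np.array([[5.0, 10.0], [0.0, 2.5]])
print(depth_from_disparity(d, focal_px=1500.0, baseline_mm=10.0))
```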
In some embodiments, the folded camera is a Tele camera and the processor is configured to calculate the scene depth from phase detection autofocus data of the folded Tele camera or from phase detection autofocus data of a second camera.
In some embodiments, the processor is configured to retrieve the scene depth information from an application programming interface.
Non-limiting examples of embodiments disclosed herein are described below with reference to figures attached hereto. The drawings and descriptions are meant to illuminate and clarify embodiments disclosed herein and should not be considered limiting in any way. Like elements in different drawings may be indicated by like numerals. Elements in the drawings are not necessarily drawn to scale.
Adaptive apertures and AA mechanisms like 310 are characterized in that (a) when fully open, the AA does not limit the native aperture, and (b) AA mechanism 310 does not increase the total folded Tele camera module height HM (shown in the Y direction).
Mechanism 310 supports opening the AA to a size that is larger than the size of native lens aperture 212, so that, when it is fully open, AA mechanism 310 does not block light that would otherwise (had the AA mechanism not been included in the Tele camera) have reached native lens aperture 212. This property allows setting adaptive aperture 302 to a large size in order to fully utilize the native Tele lens aperture size, in case it is important to collect as much light as possible, or in case a very shallow DOF is desired. Blades 304, 306 and 308 each have an open state and a closed state. Blades 304 must be closed in order to effectively close blades 306, and blades 306 must be closed in order to effectively close blades 308, i.e. the overlapping of the blades underlies the functionality of AA mechanism 310.
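The ordering constraint on blades 304, 306 and 308 can be illustrated with a small state model in which an outer pair must be closed before the next pair can take effect. This is only a sketch of the described overlap logic; the aperture sizes are made up.

```python
from dataclasses import dataclass

@dataclass
class BladePair:
    name: str
    aperture_when_closed_mm: float  # aperture width this pair defines when closed

# Ordered outermost to innermost, mirroring blades 304 -> 306 -> 308.
PAIRS = [BladePair("304", 6.0), BladePair("306", 4.5), BladePair("308", 3.0)]
NATIVE_APERTURE_MM = 7.0  # fully open: the AA does not limit the native aperture

def set_adaptive_aperture(target_mm: float) -> list[str]:
    """Close blade pairs in overlap order until the aperture is <= target."""
    closed = []
    aperture = NATIVE_APERTURE_MM
    for pair in PAIRS:
        if aperture <= target_mm:
            break
        closed.append(pair.name)                  # a pair may close only after
        aperture = pair.aperture_when_closed_mm   # all pairs before it closed
    return closed

print(set_adaptive_aperture(4.5))  # -> ['304', '306']
```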
The design shown in
The design shown in
In another embodiment, the rectangular shape may form a square aperture (not shown), i.e. an aperture with identical height and width.
The design shown in
System 750 may be included in an electronic mobile device (not shown) such as a smartphone. The Tele camera may be included with one or more additional cameras in a multi-camera. The additional camera(s) may be a Wide camera having a diagonal FOV of e.g. 50-100 degrees and/or an Ultra-Wide camera having a diagonal FOV of e.g. 70-140 degrees and/or a Time-of-Flight (ToF) camera. To clarify, a multi-camera may include any combination of two or more cameras where one camera is the Tele camera. In some embodiments, one or more of the cameras may be capable of capturing image data that can be used to estimate a depth of scene or “scene depth”. Scene depth refers to the respective object-lens distance (or “focus distance”) between the objects within a scene and system 750. The scene depth may be represented by an RGB-D map, i.e. by a data array that assigns a particular depth value to each RGB pixel (or to each group of RGB pixels). In general, the pixel resolution of an RGB image is higher than the resolution of a depth map.
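Since the depth map typically has a lower resolution than the RGB image, pairing them into an RGB-D array involves upsampling the depth channel; below is a minimal nearest-neighbor sketch. The array shapes and the 4-channel layout are assumptions for illustration.

```python
import numpy as np

def make_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Attach a low-resolution depth map to an RGB image as a 4th channel,
    upsampling the depth by nearest-neighbor to the RGB resolution."""
    h, w = rgb.shape[:2]
    dh, dw = depth.shape
    rows = np.arange(h) * dh // h     # map each RGB row/col to the nearest
    cols = np.arange(w) * dw // w     # depth-map row/col
    depth_up = depth[rows[:, None], cols[None, :]]
    return np.dstack([rgb, depth_up[..., None]])

rgb = np.zeros((480, 640, 3))
depth = np.random.rand(60, 80)        # depth map at 1/8 the RGB resolution
print(make_rgbd(rgb, depth).shape)    # (480, 640, 4)
```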
Image data used for estimating scene depth may be, for example: stereo camera data provided by two cameras of the multi-camera, image data used for depth from motion estimation, phase detection autofocus (PDAF) data, ToF camera data, or a plurality of images captured under different adaptive aperture settings.
In some embodiments, scene depth may be provided by an application programming interface (“API”), e.g. Google's “Depth API”. Knowledge of scene depth may be desired because of the quadratic dependence of the DOF on the focus distance, i.e. on the depth of the object in focus.
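The quadratic dependence follows from the common approximation for focus distances well below the hyperfocal distance (with f-number N, circle of confusion c, focal length f and focus distance s):

```latex
\mathrm{DOF} \;\approx\; \frac{2\,N\,c\,s^{2}}{f^{2}}, \qquad s \ll H \approx \frac{f^{2}}{N\,c}
```

so, for example, doubling the focus distance roughly quadruples the object-side DOF.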
In a scene sensing step 802, the camera's image sensors are used to detect the conditions and properties of a scene (e.g. lighting conditions, scene depth, visual content, etc.), which is done in pre-capture or preview mode. In some embodiments, additional sensor data (e.g. of ToF sensors, temperature sensors, humidity sensors, radar sensors, etc.), e.g. of sensors present in the camera hosting device, may be read out in scene sensing step 802. Data generated in step 802 is fed into a processor (e.g. CPU, application processor) where a scene evaluation step 804 is executed. In step 804, the data is evaluated with the goal of determining ideal settings for the adaptive aperture, given the input of the human user or of a dedicated algorithm. The term “ideal settings” refers here to settings that provide the best user experience, e.g. high image quality, or high uniformity along stitching borders of panorama images. In case the camera is operated in a mode highly reliant on automated image capturing, other steps may be performed besides sensor data evaluation. In some examples, ROIs and OOIs may be detected and automatically selected as focus targets by an algorithm in scene evaluation step 804. The ideal settings from step 804 are fed into an AA mechanism such as 710. The AA is set up according to these settings in an aperture adjustment step 806. The scene is then captured in a scene capture step 808. Steps 802 to 806 ensure an improved user experience.
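Steps 802-808 can be summarized as a simple control loop. The sketch below is a schematic skeleton only; the function names, dummy sensor values and the lux threshold are hypothetical stand-ins for the sensing, evaluation and actuation described above.

```python
def scene_sensing() -> dict:
    """Step 802: read the image sensors (and e.g. ToF, temperature, humidity
    or radar sensors of the hosting device) in pre-capture/preview mode."""
    return {"lux": 180.0, "depth_range_mm": (2900.0, 4100.0)}  # dummy data

def scene_evaluation(sensor_data: dict, user_input=None) -> float:
    """Step 804: evaluate the data on the processor and return ideal AA
    settings (here just an f-number); may also auto-select ROIs/OOIs."""
    return 2.8 if sensor_data["lux"] < 100 else 4.0

def aperture_adjustment(f_number: float) -> None:
    """Step 806: configure the AA mechanism according to the settings."""
    print(f"AA set to f/{f_number}")

def scene_capture() -> str:
    """Step 808: capture the scene."""
    return "image"

aa_settings = scene_evaluation(scene_sensing())  # 802 -> 804
aperture_adjustment(aa_settings)                 # 806
img = scene_capture()                            # 808
```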
In an example, processor 718 calculates control commands concerning the size of the adaptive Tele aperture based on Wide camera image information and/or Tele camera image information, while one or both cameras operate in preview and/or video recording mode. In another example, AA mechanism 710 receives, from the user or from an automated detection method, a desired ROI or OOI, for example one that the Wide and Tele cameras are focused on, or intend to focus on. Processor 718 detects OOIs or ROIs (for example faces of persons) in a Wide camera image (or alternatively receives information about OOIs or ROIs detected by another module) by means of dedicated algorithms, and estimates the relative or absolute distance between the objects, for example by comparing the size of faces or properties of landmarks in each face. The processor then calculates the desired aperture size to keep at least part of said objects of interest in focus, and submits these ideal aperture settings to AA mechanism 710, which configures the adaptive Tele aperture to this aperture size.
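A hypothetical sketch of the face-size heuristic: under a pinhole model, a face of roughly known physical size subtends a pixel height inversely proportional to its distance, so distances can be ranked from detected face heights. The assumed face height and focal length are illustrative; the nearest and farthest distances can then be fed to a DOF-coverage computation such as the one sketched after the next paragraph.

```python
FACE_HEIGHT_MM = 220.0  # assumed average physical face height

def face_distance_mm(face_height_px: float, focal_px: float) -> float:
    """Pinhole model: object distance = f[px] * real_size / size_in_pixels."""
    return focal_px * FACE_HEIGHT_MM / face_height_px

# Two faces detected in a preview frame (heights in pixels):
for h in (110.0, 80.0):
    print(face_distance_mm(h, focal_px=1500.0))  # ~3000 mm and ~4125 mm
```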
In another example, control software running on processor 718 calculates a depth map of part of the scene (or alternatively, receives such a depth map calculated by another module), for example, based on stereo information between a Wide camera image and a Tele camera image, or based on information from phase detection autofocus (PDAF) pixels in the Wide camera sensor, or based on a ToF camera. A dedicated algorithm running on processor 718 determines the required range of distances to be in focus from the depth map, and calculates the desired aperture size to keep at least some of the OOIs in focus. The information is transmitted to AA mechanism 710, which configures the adaptive Tele aperture to this aperture size.
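A minimal sketch of turning a required in-focus range, taken from a depth map, into a focus distance and f-number, using the thin-lens/hyperfocal approximation (focus at the harmonic mean of the near and far limits); the parameters and depth values are illustrative.

```python
import numpy as np

def aperture_for_range(z_near_mm: float, z_far_mm: float,
                       efl_mm: float, coc_mm: float) -> tuple[float, float]:
    """Focus distance and f-number whose DOF just spans [z_near, z_far]."""
    s = 2 * z_near_mm * z_far_mm / (z_near_mm + z_far_mm)   # harmonic mean
    n = efl_mm ** 2 * (z_far_mm - z_near_mm) / (2 * coc_mm * z_near_mm * z_far_mm)
    return s, n

depth_map_mm = np.array([[2900, 3100], [3600, 4100]])  # depths of the OOIs
s, n = aperture_for_range(depth_map_mm.min(), depth_map_mm.max(),
                          efl_mm=15.0, coc_mm=0.002)
print(round(s), round(n, 1))  # focus at ~3397 mm, ~f/5.7
```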
In yet another example, the software may take into account the light levels in the scene, by analyzing the Wide camera image and the Tele camera image (for example, by calculating a histogram of intensity levels), or by receiving an estimation of the illumination in the scene (for example, a LUX estimation, or the Wide sensor and/or Tele sensor analog gain), and may calculate the ideal adaptive Tele aperture size based on the illumination estimation.
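A sketch of a histogram-based brightness heuristic; the thresholds and the mapping to f-numbers are invented for illustration and are not the values of any particular embodiment.

```python
import numpy as np

def suggest_f_number(gray_image: np.ndarray) -> float:
    """Map a crude scene-brightness estimate to an f-number: open up in low
    light, stop down in bright light. Thresholds are illustrative only."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    mean_level = (hist * np.arange(256)).sum() / hist.sum()
    if mean_level < 40:    # dark scene: maximize light collection
        return 1.8
    if mean_level < 120:   # moderate light
        return 2.8
    return 4.0             # bright scene: DOF can be prioritized

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(suggest_f_number(img))
```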
In yet another example, the software may receive indications from the user (for example, by switching the camera between different imaging modes, e.g. to a dedicated portrait-mode or stitching mode, or by changing some parameter in the camera application) regarding the required DOF and aperture configuration, and may take this information into account to calculate ideal settings for the adaptive Tele aperture size to fulfill these requirements.
In yet another example, the folded Tele camera is a scanning folded camera with an adjustable FOV. When operating the camera in a scanning mode, i.e. capturing Tele camera images having different FOVs and stitching the Tele camera images together to create an image with a larger FOV (e.g. for a high-resolution panoramic image), for example as described in U.S. provisional patent application 63/026,097, software running on processor 718 determines the ideal adaptive Tele aperture size before scanning starts and updates this value throughout the scanning and capturing of the images to be stitched. This may be desired e.g. for achieving a similar DOF for all captured Tele images or for achieving similar lighting for all captured Tele images.
In yet another example, when operating the camera in a scanning mode and stitching the Tele camera images together to create an image with a larger FOV, for example as described in PCT/IB2018/050988, software running on processor 718 determines the ideal AA such that single Tele images captured with this AA have very similar optical Bokeh, leading to a stitched image with a larger FOV and a very uniform appearance in terms of Bokeh, including along single Tele image borders.
In yet another example, to supply an image with Wide camera FOV and Tele camera resolution for specific ROIs or OOIs, the ROIs and OOIs are captured by the Tele camera and these Tele images are stitched into the large-FOV Wide camera image. To provide a natural or seamless transition between the two images, software running on processor 718 determines the ideal AA size so that the optical Bokeh of the Tele image to be stitched is very similar to the optical Bokeh of the Wide image.
In yet another example, the adaptive Tele aperture is modified by AA mechanism 710 between two consecutive Tele image captures (or between two Tele camera preview frames) to obtain two frames of largely the same scene with different depths of field, and to estimate depth from the two images, for example by identifying features in one of these images that correspond to features in the other image, comparing the contrast in the local area of the image and, based on this, calculating relative depth for the image region. Relevant methods are discussed in Elder, J. and Zucker, S., “Local scale control for edge detection and blur estimation”, 1998, and in Zaman, T., “Depth Estimation from Blur Estimation”, 2012.
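The contrast-comparison idea can be sketched as a ratio of local sharpness between the small-aperture (deep DOF) frame and the large-aperture (shallow DOF) frame; regions whose sharpness drops most at the large aperture lie farthest from the focus plane. The patch size and the Laplacian-variance sharpness measure are assumptions, and the result is a monotonic depth cue, not metric depth.

```python
import numpy as np

def local_sharpness(img: np.ndarray, patch: int = 16) -> np.ndarray:
    """Per-patch sharpness: variance of a simple 5-point Laplacian response."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    h, w = lap.shape
    lap = lap[:h - h % patch, :w - w % patch]          # trim to whole patches
    blocks = lap.reshape(lap.shape[0] // patch, patch,
                         lap.shape[1] // patch, patch)
    return blocks.var(axis=(1, 3))

def relative_defocus(frame_small_ap: np.ndarray,
                     frame_large_ap: np.ndarray, patch: int = 16) -> np.ndarray:
    """Higher values = stronger blur at the large aperture = farther from
    the focus plane."""
    s1 = local_sharpness(frame_small_ap, patch)
    s2 = local_sharpness(frame_large_ap, patch)
    return s1 / (s2 + 1e-9)

a, b = np.random.rand(128, 128), np.random.rand(128, 128)
print(relative_defocus(a, b).shape)  # (7, 7) patch grid
```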
In yet another example, software running on processor 718 may calculate the ideal AA settings from the distance between the camera and the object that the camera is focused on; for example, Hall sensors may provide the information on the focus position. As DOF has a quadratic dependence on the focus distance, in order to supply sufficient DOF in the image to be captured, the control software may assign a smaller AA setting to closer objects and a larger AA setting to objects farther away.
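A sketch of this quadratic rule, mapped onto a small set of discrete AA states such as those formed by the blade pairs described earlier; the available f-numbers, target DOF and optical parameters are invented for illustration.

```python
AA_F_NUMBERS = [1.8, 2.8, 4.0, 5.6]  # discrete states the AA mechanism offers

def aa_state_for_focus(distance_mm: float, efl_mm: float, coc_mm: float,
                       target_dof_mm: float) -> float:
    """Pick the smallest available f-number whose DOF (~ 2*N*c*s^2/f^2)
    still meets the target; closer objects thus get a smaller aperture."""
    n_required = target_dof_mm * efl_mm ** 2 / (2 * coc_mm * distance_mm ** 2)
    for n in AA_F_NUMBERS:
        if n >= n_required:
            return n
    return AA_F_NUMBERS[-1]  # clamp: target DOF not reachable

for d in (500.0, 1000.0, 3000.0):  # focus distances, e.g. from Hall-sensor data
    print(d, aa_state_for_focus(d, efl_mm=15.0, coc_mm=0.002, target_dof_mm=50.0))
# -> closer objects map to larger f-numbers (smaller AA settings)
```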
In yet another example, the camera may be operated in the native aperture state for high quality Tele images in low light conditions. To achieve the DOF necessary for a crisp appearance of a specific ROI or OOI, an image series may be taken wherein the focus scans the necessary DOF range and an image is captured at each one of the different scan states, a technique known in the art as “focus stacking” to create a “focus stack”. In a second (computational) step, the output image may be assembled by stitching the crisp segments of the ROI or OOI from the series of images so that the entire ROI or OOI appears crisp. In some examples, focus stacking may also be used for estimating scene depth.
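A minimal sketch of the assembly step: per pixel, keep the frame of the stack in which that pixel is crispest, using a simple Laplacian response as the sharpness proxy. This is illustrative only, not the specific assembly method of the embodiments.

```python
import numpy as np

def laplacian_abs(img: np.ndarray) -> np.ndarray:
    """Absolute 5-point Laplacian response as a per-pixel sharpness proxy."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = np.abs(-4 * img[1:-1, 1:-1] + img[:-2, 1:-1]
                             + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:])
    return out

def fuse_focus_stack(frames: list[np.ndarray]) -> np.ndarray:
    """Per pixel, take the frame whose local Laplacian response is strongest,
    i.e. the focus state in which that pixel appears crispest."""
    sharp = np.stack([laplacian_abs(f) for f in frames])  # (n, h, w)
    best = sharp.argmax(axis=0)                           # (h, w) frame index
    stack = np.stack(frames)
    return np.take_along_axis(stack, best[None], axis=0)[0]

frames = [np.random.rand(64, 64) for _ in range(5)]  # images over a focus scan
print(fuse_focus_stack(frames).shape)  # (64, 64)
```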
In conclusion, adaptive apertures and methods of use described herein expand the capabilities of folded Tele cameras to control the amount of light reaching the Tele sensor and the DOF of the Tele image by adapting the camera's f-number. In particular, they provide solutions to problems of very shallow DOF, particularly in more severe cases.
While the description above refers in detail to adaptive apertures for folded Tele lenses with a cut lens design, it is to be understood that the various embodiments of adaptive apertures and AA mechanisms therefor disclosed herein are not limited to cut lens designs. Adaptive apertures and AA mechanisms therefor disclosed herein may work with, and be applied to, non-cut lens designs (i.e. lenses without a cut).
Unless otherwise stated, the use of the expression “and/or” between the last two members of a list of options for selection indicates that a selection of one or more of the listed options is appropriate and may be made.
It should be understood that where the claims or specification refer to “a” or “an” element, such reference is not to be construed as there being only one such element.
All patents, patent applications and publications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual patent, patent application or publication was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present disclosure.
This application is a continuation from U.S. patent application Ser. No. 17/104,744, filed Nov. 25, 2020 (now allowed), which claims priority from U.S. Provisional Patent Application No. 62/939,943 filed Nov. 25, 2019, which is incorporated herein by reference in its entirety.
Provisional application data:

Number | Date | Country
---|---|---
62939943 | Nov 2019 | US

Related U.S. application data:

Relation | Number | Date | Country
---|---|---|---
Parent | 17104744 | Nov 2020 | US
Child | 18301438 | | US