Systems and methods for obtaining a smart panoramic image

Information

  • Patent Grant
  • Patent Number
    11,770,618
  • Date Filed
    Thursday, December 3, 2020
  • Date Issued
    Tuesday, September 26, 2023
Abstract
Mobile handheld electronic devices such as smartphones, comprising a Wide camera for capturing Wide images with respective Wide fields of view (FOVW), a Tele camera for capturing Tele images with respective Tele fields of view (FOVT) smaller than FOVW, and a processor configured to stitch a plurality of Wide images into a panorama image with a field of view FOVP>FOVW and to pin a Tele image to a given location within the panorama image to obtain a smart panorama image.
Description
FIELD

The subject matter disclosed herein relates in general to panoramic images and in particular to methods for obtaining such images with multi-cameras (e.g. dual-cameras).


BACKGROUND

Multi-aperture cameras (or multi-cameras) are becoming the standard choice of mobile device (e.g. smartphone, tablet, etc.) makers when designing cameras for their high-end devices. A multi-camera setup usually comprises a wide field-of-view (FOV) (or "wide-angle") camera ("Wide" or "W" camera), and one or more additional cameras, either with the same FOV (e.g. a depth auxiliary camera), with a narrower FOV ("Telephoto", "Tele" or "T" camera, with a "Tele FOV" or FOVT), or with an ultra-wide FOV (FOVUW) wider than the Wide FOV (FOVW) ("Ultra-Wide" or "UW" camera).


In recent years, panoramic photography has gained popularity with mobile users, as it gives a photographer the ability to capture a scene and its surroundings with a very large FOV (in general in the vertical direction). Some mobile device makers have recognized the trend and offer an ultra-wide-angle (or "ultra-Wide") camera in the rear camera setup of a mobile device such as a smartphone. Nevertheless, the scene that can be captured with a single aperture is limited, and image stitching is required when a user wishes to capture a large FOV scene.


A panoramic image (or simply "regular panorama") captured on a mobile device comprises a plurality of FOVW images stitched together. The W image data is the main camera data used for the stitching process: since the W camera has a wide FOV (also marked "FOVW"), the final (stitched) image (referred to as "Wide panorama") consumes less memory than that required for a Tele camera-based panorama (or simply "Tele panorama") capturing the same scene. Additionally, the W camera has a larger depth-of-field than a T camera, leading to superior results in terms of focus. In comparison to an ultra-W camera, a W camera also demonstrates superior results in terms of distortion.


Since a Wide panorama is limited by the Wide image resolution, the ability to distinguish fine details, mainly of far objects, is limited. A user who wishes to zoom in towards an "object of interest" (OOI) within the panorama image, i.e. perform digital zoom, will notice a blurred image due to the Wide image resolution limits. Moreover, the panoramic image may be compressed to an even lower resolution than the Wide image resolution in order to meet memory constraints.


There is a need for, and it would be beneficial to have, a way to combine the benefits of a panorama image having a very large FOV with those of Tele images having a high image resolution.


SUMMARY

To increase the resolution of OOIs, systems and methods for obtaining a “smart panorama” are disclosed herein. A smart panorama comprises a Wide panorama and at least one Tele-based image of an OOI captured simultaneously. That is, a smart panorama as described herein refers to an image data array comprising (i) a panorama image as known in the art and (ii) a set of one or more high-resolution images of OOIs that are pinned or located within the panorama FOV. While the panorama is being captured, an additional process analyzes the W camera FOVW scene and identifies OOIs. Once an OOI is identified, the “best camera” is chosen out of the multi-camera array. The “best camera” selection may be between a plurality of cameras, or it may be between a single Tele camera having different operational modes such as different zoom states or different points of view (POVs). The “best camera” selection may be based on the OOI's object size, distance from the camera etc., and a capture request to the “best camera” is issued. The “best camera” selection may be defined by a Tele capture strategy such as described below. In some embodiments with cameras that have different optical zoom states, the “best camera” may be operated using a beneficial zoom state. In other embodiments with cameras that have a scanning FOV the “best camera” may be directed towards that OOI.
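
For illustration only, the following sketch models the image data array described above as a minimal Python container: a stitched panorama plus a list of high-resolution Tele captures pinned at panorama coordinates. All class, field and method names (SmartPanorama, PinnedTeleImage, tele_at) are assumptions made for this sketch and are not taken from any actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

import numpy as np


@dataclass
class PinnedTeleImage:
    """A high-resolution Tele capture of an OOI, pinned inside the panorama."""
    image: np.ndarray                      # full-resolution Tele image (H x W x 3)
    location: Tuple[int, int, int, int]    # (x, y, w, h) box in panorama pixel coordinates
    zoom_factor: float                     # Tele zoom state used for the capture


@dataclass
class SmartPanorama:
    """Image data array: a stitched Wide panorama plus pinned Tele images."""
    panorama: np.ndarray
    pinned: List[PinnedTeleImage] = field(default_factory=list)

    def tele_at(self, x: int, y: int) -> Optional[PinnedTeleImage]:
        """Return the pinned Tele image whose box contains (x, y), if any,
        e.g. when the user taps a marked rectangle on the panorama."""
        for p in self.pinned:
            px, py, pw, ph = p.location
            if px <= x < px + pw and py <= y < py + ph:
                return p
        return None
```

A UI layer could call tele_at with the coordinates of a user tap to retrieve and display the corresponding high-resolution Tele image.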


Note that a method disclosed herein is not limited to a specific multi-camera module and could be used for any combination of cameras as long as the combination consists of at least two cameras with a FOV ratio different than 1.


In current multi-camera systems, the FOVT is normally in the center part of the FOVW, defining a limited strip where interesting objects that have been detected trigger a capture request. A Tele camera with a 2D scanning capability extends the strip such that any object detected in the scanning range could be captured, i.e. provides “zoom anywhere”. Examples of cameras with 2D scanning capability may be found in co-owned international patent applications PCT/IB2016/057366, PCT/IB2019/053315 and PCT/IB2018/050988.


Tele cameras with multiple optical zoom states can adapt the zoom (and FOVT) according to e.g. size and distance of OOIs. Cameras with that capability may be found for example in co-owned international patent applications PCT/IB2020/050002 and PCT/IB2020/051405.


The panorama displayed to the user will contain some differentiating element marking the areas of the panorama where high-resolution OOI image information is present. Such a differentiating element may include, for example, a touchable rectangular box. By touching the box, the full-resolution optically zoomed image is displayed, allowing the user to enjoy both the panoramic view and the high-resolution zoom-in view.


In various embodiments there are provided handheld mobile electronic devices, comprising: a Wide camera for capturing Wide images, each Wide image having a respective Wide field of view (FOVW); a Tele camera for capturing Tele images, each Tele image having a respective Tele field of view (FOVT) smaller than FOVW; and a processor configured to stitch a plurality of Wide images with respective FOVW into a panorama image with a field of view FOVP>FOVW and to pin a Tele image to a given location within the panorama image to obtain a smart panorama image.


In some embodiments, each Wide image includes Wide scene information that is different from scene information of other Wide images.


In some embodiments, the processor is configured to crop the Tele image before pinning it to the given location.


In some embodiments, the Tele images are cropped according to aesthetic criteria.


In some embodiments, the Wide camera is configured to capture Wide images autonomously.


In some embodiments, the Tele camera is configured to capture the Tele images autonomously.


In some embodiments, the processor is configured to use a motion model that predicts a future movement of the handheld device.


In some embodiments, the processor is configured to use a motion model that predicts a future movement of an object within the FOVP.


In some embodiments, the processor is configured to use particular capture strategies for the autonomous capturing of the Tele images.


In some embodiments, the pinning of a Tele image to a given location within the panorama image is obtained by executing localization between the Wide images and the Tele images.


In some embodiments, the pinning of a Tele image to a given location within the panorama image is obtained by executing localization between the panorama image and the Tele images.


In some embodiments, the Tele camera has a plurality of zoom states.


In some embodiments, the processor is configured to autonomously select a particular zoom state from the plurality of zoom states.


In some embodiments, a particular zoom state from the plurality of zoom states is selected by a human user.


In some embodiments, the plurality of zoom states includes a discrete number of zoom states.


In some embodiments, at least one of the plurality of zoom states can be modified continuously.


In some embodiments, the Tele camera is a scanning Tele camera.


In some embodiments, the processor is configured to autonomously direct scanning of the FOVT to a specific location within a scene.


In some embodiments, the FOVT scanning is performed by rotating one optical path folding element.


In some embodiments, the FOVT scanning is performed by rotating two or more optical path folding elements.


In some embodiments, each Tele image includes scene information from a center of the panorama image.


In some embodiments, scene information in the Tele images includes scene information from a field of view larger than a native Tele field of view and smaller than a Wide field of view.


In some embodiments, a particular segment of a scene is captured by the Tele camera and is pinned to locations within the panorama image.


In some embodiments, the processor uses a tracking algorithm to capture the particular segment of a scene with the Tele camera.


In some embodiments, a program decides which scene information is captured by the Tele camera and pinned to locations within the panorama image.


In some embodiments, the processor is configured to calculate a saliency map based on Wide image data to decide which scene information is captured by the Tele camera and pinned to locations within the panorama image.


In some embodiments, the processor is configured to use a tracking algorithm to capture scene information with the Tele camera.


In some embodiments, the Tele image pinned to a given location within the panorama image is additionally shown in another location within the panorama image.


In some embodiments, the Tele image pinned to a given location within the panorama image is shown in an enlarged scale.


In various embodiments there are provided methods, comprising: providing a plurality of Wide images, each Wide image having a respective FOVW and including Wide scene information different from other Wide images; providing a plurality of Tele images, each Tele image having a respective FOVT that is smaller than FOVW; using a processor for stitching a plurality of Wide images into a panorama image with a panorama field of view FOVP>FOVW; and using the processor to pin at least one Tele image to a given location within the panorama image.


In some embodiments, the handheld device is manually moved by a user to capture scene information in the FOVP.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting examples of embodiments disclosed herein are described below with reference to figures attached hereto that are listed following this paragraph. The drawings and descriptions are meant to illuminate and clarify embodiments disclosed herein and should not be considered limiting in any way. Like elements in different drawings may be indicated by like numerals. Elements in the drawings are not necessarily drawn to scale. In the drawings:



FIG. 1A illustrates exemplary triple camera output image sizes and ratios therebetween;



FIG. 1B illustrates exemplary ratios between W and T images in a dual-camera, with the T camera in two different zoom states;



FIG. 1C illustrates the FOVs of dual-camera images, for a dual-camera that comprises a 2D scanning T camera;



FIG. 2A shows a smart panorama image example, in which certain OOIs are objects located in a limited strip around the center of FOVW;



FIG. 2B shows a panorama image example in which certain OOIs are located across a large part of the FOVW;



FIG. 3A shows an exemplary embodiment of a smart panorama output from a human user perspective;



FIG. 3B shows another exemplary embodiment of a smart panorama output from a human user perspective;



FIG. 3C shows yet another exemplary embodiment of a smart panorama output from a human user perspective.



FIG. 4 shows schematically an embodiment of an electronic device capable of providing smart panorama images as described herein;



FIG. 5 shows a general workflow of the smart panorama method of use as described herein;



FIGS. 6A and 6B show the localization of the T image within the W image.





DETAILED DESCRIPTION




FIG. 1A illustrates exemplary triple camera output image sizes and ratios therebetween. A triple camera includes three cameras having different FOVs, for example an ultra-Wide FOV (marked FOVUW) 102, a Wide FOV (marked FOVW) 104 and a Tele FOV (marked FOVT) 106. Such a triple camera is applicable for the “smart panorama” method disclosed herein. Either of the UW or W cameras can be used as a “Wide camera” in a method of obtaining a smart panorama disclosed herein, and the Tele camera can be used to capture high-resolution images of OOIs within a capture time needed to capture the panorama.



FIG. 1B illustrates exemplary ratios between W and T images in a dual-camera comprising a Wide camera and a Tele camera, with the Tele camera in two different zoom states, a 1st zoom state and a 2nd zoom state. Here, the 2nd zoom state refers to a state with a higher zoom factor ZF (and smaller corresponding FOV) than the 1st zoom state. The W camera has a FOVW 104. The T camera is a zoom Tele camera that can adapt its zoom factor (and a corresponding FOV 106′), either between 2 or more discrete zoom states of e.g. ×5 zoom and ×8 zoom, or between any number of desired zoom states (within the limits of the zoom capability) via continuous zoom. While the panorama image is based on the W image data, it is possible to select a specific FOVT 106′ (and corresponding zoom factor) and use this specific FOVT 106′ to capture OOIs so that a best user experience is provided for the smart panorama image.
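
As a rough illustration of how a "beneficial" zoom state might be chosen for a multi-state zoom Tele camera, the sketch below picks the highest zoom factor whose FOVT still contains an OOI bounding box with some margin. The function name, the margin value and the example zoom factors are illustrative assumptions; the ×5/×8 states merely echo the example above.

```python
from typing import Sequence, Tuple


def select_zoom_state(obj_frac: Tuple[float, float],
                      zoom_states: Sequence[float],
                      margin: float = 1.2) -> float:
    """Pick the largest zoom factor whose Tele FOV still contains the object.

    obj_frac: object (width, height) as a fraction of the Wide FOV.
    zoom_states: available Tele zoom factors relative to Wide, e.g. [5.0, 8.0].
    margin: extra FOV kept around the object, e.g. for later aesthetic cropping.
    """
    best = min(zoom_states)
    for zf in sorted(zoom_states):
        # At zoom factor zf the Tele FOV spans roughly 1/zf of the Wide FOV per axis.
        if obj_frac[0] * margin <= 1.0 / zf and obj_frac[1] * margin <= 1.0 / zf:
            best = zf
    return best


# An object covering 8% x 10% of FOVW still fits the x8 state with margin,
# so the higher-resolution state would be chosen here.
print(select_zoom_state((0.08, 0.10), [5.0, 8.0]))  # -> 8.0
```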



FIG. 1C illustrates the FOVs of dual-camera images, for a dual-camera that comprises a 2D scanning T camera. A 2D scanning T camera has a "native FOVT" wherein the location of the native FOVT in the scene can be changed, enabling it to cover or "scan" a segment of a scene that is larger than the native FOVT. This larger scene segment is referred to as the "effective Tele FOV" or simply "Tele FOV". FIG. 1C shows a native FOVT 106″ at two different positions within FOVW 104. The W camera with FOVW 104 is used for capturing a regular panorama. A region-of-interest (ROI) detection method applied to FOVW is used to direct FOVT 106″ towards this ROI. Examples of such detection methods are described below. The FOV scanning may be performed by rotational actuation of one or more optical path folding elements (OPFEs). FOV scanning by actuating an OPFE is not instantaneous, since it requires some settling time. FOV scanning may for example require a time scale of about 1-30 ms for scanning 2°-5°, and about 5-80 ms for scanning 10°-25°. In some embodiments, the T camera may cover about 50% of the area of FOVW. In other embodiments, the T camera may cover about 80% or more of the area of FOVW.


Regular panorama images can be captured with vertical or horizontal sensor orientation. The panorama capturing direction could be either left-to-right or right-to-left and can comprise any angle of view up to 360 degrees. This capturing is applicable to spherical, cylindrical or 3D panoramas.



FIG. 2A shows a smart panorama image example, in which OOIs 202, 204, 206, 208 and 210 are objects located in (restricted to) a limited strip around the center of FOVW, the amount of restriction defined by the FOV ratio between e.g. the W and T cameras. This strip corresponds to the FOV of a T camera with no scanning capability. OOIs contained in this strip are detected by the smart panorama process and are automatically captured. With a multi-state zoom camera or a continuous zoom camera as T camera, an object (e.g. 202) occupying a solid angle Ω202 in FOVW may be captured with higher image resolution than that of another object 210 (occupying a solid angle Ω210 in FOVW, where Ω210>Ω202).



FIG. 2B shows a panorama image example, in which OOIs 212, 214, 216, 218, 220 and 222 are located across a large part of FOVW. The OOIs may also be restricted to a limited strip, but the limits of this strip are significantly larger than in FIG. 2A. A scanning T camera can capture objects located off-center (e.g. object 222) in the 2D scanning range.



FIG. 3A shows an exemplary embodiment of a smart panorama output from a human user perspective. Objects 212, 214, 216, 218, 220 and 222 identified as OOIs and captured with high T image resolution are marked with a rectangle box that may be visible or may not be visible on the panorama image, hinting to the user the availability of high-resolution images of OOIs. By clicking one of the boxes (e.g. box 222), the high-resolution image is accessed and can be displayed to the user in a number of ways, including, but not limited to: in full image preview; in a side-by-side display together with the smart panorama image; in a zoom-in video display combining the panorama, the W image and the T image; or in any other type of display that uses the available images.



FIG. 3B and FIG. 3C show another exemplary embodiment of a smart panorama output from a human user perspective. FIG. 3B and FIG. 3C refer to the panoramic scene shown in FIG. 2A. Objects 202 and 208 which are identified as OOIs and captured with high T image resolution may be visible on the panorama image not only in their actual location (and size), but also in an enlarged representation (or scale) such as 224 and 226 for objects 202 and 208 respectively. This enlarged representation may be shown in a suitable segment of the panorama image. A suitable segment may be a segment where no other OOIs are present, where image quality is low, where image artifacts are present, etc. In some examples, this double representation may be used for all OOIs in the scene.


In other examples and as shown in FIG. 3C exemplarily for objects 224 and 226 which are enlarged representations of objects 202 and 208 respectively, one or more OOIs may be shown in their actual location in an enlarged representation.



FIG. 4 shows schematically an embodiment of an electronic device (e.g. a smartphone) numbered 400 capable of providing smart panorama images as described herein. Electronic device 400 comprises a first T camera 402, which may be a non-folded (vertical) T camera or a folded T camera that includes one or more OPFEs 404, and a first lens module 406 including a first (Tele) lens that forms a first image recorded by a first (Tele) image sensor 410. The first lens may have a fixed effective focal length (fixed EFL) providing a fixed zoom factor (ZF), or an adaptable effective focal length (adaptive EFL) providing an adaptable ZF. The adaptation of the focal length may be discrete or continuous, i.e. a discrete number of varying focal lengths for providing two or more discrete zoom states having particular respective ZFs, or the adaptation of the ZF may be continuous. A first lens actuator 412 may move lens module 406 for focusing and/or optical image stabilization (OIS). An OPFE actuator 414 may actuate OPFE 404 for OIS and/or FOV scanning.


In some embodiments, the FOV scanning of the T camera may be performed not by actuating one OPFE, but by actuating two or more OPFEs. A scanning T camera that performs FOV scanning by actuating two OPFEs is described for example in co-owned U.S. provisional patent application No. 63/110,057 filed Nov. 5, 2020.


Electronic device 400 further comprises a W camera module 420 with a FOVW larger than FOVT of camera module 402. W camera module 420 includes a second lens module 422 that forms an image recorded by a second (Wide) image sensor 424. A second lens actuator 426 may move lens module 422 for focusing and/or OIS. In some embodiments, second calibration data may be stored in a second memory 428.


Electronic device 400 may further comprise an application processor (AP) 430. AP 430 comprises a T image signal processor (ISP) 432 and a W image ISP 434. AP 430 further comprises a Real-time module 436 that includes a salient ROI extractor 438, an object detector 440, an object tracker 442 and a camera controller 444. AP 430 further comprises a panorama module 448 and a smart panorama module 450.


In some embodiments, first calibration data may be stored in a first memory 416 of the T camera module, e.g. in an EEPROM (electrically erasable programmable read only memory). In other embodiments, first calibration data may be stored in a third memory 470 such as a NVM (non-volatile memory). The first calibration data may comprise calibration data between sensors of the W module 420 and the T module 402. In other embodiments, the second calibration data may be stored in third memory 470. The second calibration data may comprise calibration data between sensors of the W module 420 and the T module 402. The T module may have an effective focal length (EFL) of e.g. 8 mm-30 mm or more, a diagonal FOV of 10 deg-40 deg and an f-number of about f/#=1.8-6. The W module may have an EFL of e.g. 2.5 mm-8 mm, a diagonal FOV of 50 deg-130 deg and f/#=1.0-2.5.


In use, a processing unit such as AP 430 may receive respective Tele and Wide image data from camera modules 402 and 420 and supply camera control signals to camera modules 402 and 420.


Salient ROI extractor 438 may calculate a saliency map for each W image. The saliency maps may be obtained by applying various saliency or salient-object-detection (SOD) algorithms, using classic computer vision methods or neural network models. Examples of saliency methods can be found in benchmarks known in the art such as the "MIT Saliency Benchmark" and the "MIT/Tuebingen Saliency Benchmark". Salient ROI extractor 438 also extracts salient regions of interest (ROIs), which may contain the OOIs discussed above. For each salient object (or ROI), a surrounding bounding box is defined, which may include a scene segment and a saliency score. The saliency score may be used to determine the influence of an object on future decisions as described in later steps. The saliency score is calculated as a combination of parameters that reflect object properties, for example the size of the object and a representation of the saliency values within the object.
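
A minimal sketch of the saliency-map and salient-ROI extraction step, assuming OpenCV's spectral-residual saliency (from opencv-contrib-python) as a stand-in for whatever SOD algorithm or neural network the pipeline would actually use; the threshold, minimum area and scoring formula are assumptions chosen only to mirror the description (object size combined with saliency within the box).

```python
import cv2
import numpy as np


def salient_rois(wide_bgr: np.ndarray, thresh: float = 0.6, min_area: int = 400):
    """Return (x, y, w, h, score) boxes of salient regions in a Wide frame."""
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = sal.computeSaliency(wide_bgr)
    if not ok:
        return []
    sal_map = sal_map.astype(np.float32)
    # Binarize the saliency map and extract connected salient regions.
    mask = (sal_map >= thresh * sal_map.max()).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:
            continue
        # Saliency score: object size combined with mean saliency inside the box.
        score = float(sal_map[y:y + h, x:x + w].mean()) * (w * h)
        rois.append((x, y, w, h, score))
    return sorted(rois, key=lambda r: r[4], reverse=True)
```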


In some embodiments, object detector 440 may detect objects in the W image simultaneously with the calculation of the saliency map and provide a semantic understanding of the objects in the scene. The extracted semantic information may be considered in calculating the saliency score.


In other embodiments, object detector 440 may detect objects in the W image after calculation of the saliency map. Object detector 440 may use only segments of the W image, e.g. only segments that are classified as saliency ROIs by salient ROI extractor 438. Object detector 440 may additionally provide a semantic understanding of the ROIs wherein the semantic information may be used to re-calculate the saliency score.


Object detector 440 may provide data such as information on an ROI's location and classification type to object tracker 442, which may update camera controller 444 on the ROI's location. Camera controller 444 may consider capturing an ROI depending on particular semantic labels, on the ROI's location within the Wide FOV (e.g. to account for hardware limitations such as a limited Tele FOV coverage of the Wide FOV), on a saliency score above a certain threshold, etc.
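
The gating applied by camera controller 444 could look roughly like the sketch below, which assumes an ROI record carrying a bounding box, a semantic label and a saliency score, plus a rectangle describing the region that the (centered or scanning) Tele FOV can reach. The thresholds, field names and preferred classes are illustrative assumptions, not values from the description.

```python
from typing import NamedTuple, Tuple


class ROI(NamedTuple):
    box: Tuple[int, int, int, int]   # (x, y, w, h) in Wide image coordinates
    label: str                       # semantic class from the object detector
    saliency: float                  # score from the salient ROI extractor


def should_capture(roi: ROI,
                   reachable: Tuple[int, int, int, int],
                   min_saliency: float = 0.5,
                   preferred: Tuple[str, ...] = ("person", "animal")) -> bool:
    """Decide whether to issue a Tele capture request for an ROI."""
    x, y, w, h = roi.box
    rx, ry, rw, rh = reachable
    # The ROI center must fall inside the region the Tele camera can reach
    # (a narrow strip for a centered FOVT, a wider area for a scanning Tele).
    cx, cy = x + w / 2, y + h / 2
    inside = rx <= cx <= rx + rw and ry <= cy <= ry + rh
    # Capture if the ROI is salient enough or belongs to a preferred class.
    return inside and (roi.saliency >= min_saliency or roi.label in preferred)
```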


Panorama module 448 stitches a plurality of W images into a panorama image as known in the art. Smart panorama module 450 matches the high-resolution ROIs to their corresponding locations on the panorama image and includes an image selection module (not shown) that selects the T images that are to be used in the smart panorama image.


Camera controller 444 may select or direct the T camera to capture the ROIs according to different Tele capture strategies for providing a best user experience. For providing a best user experience, camera controller 444 may select a "best camera", e.g. by selecting a suitable ZF or by directing the native FOVT towards an ROI within the FOVW.


In some examples a “best user experience” may refer to T images of ROIs that provide information on OOIs in highest resolution (Tele capture “strategy example 1” or “SE 1”), and a respective Tele capture strategy that provides this may be selected. However, in other examples a best user experience may be provided by strategy examples such as:


capturing the Tele ROI that contains the OOI with the highest saliency score (“SE 2”);


capturing multiple OOIs in one ROI Tele capture (“SE 3”);


a uniform or non-uniform depth-of-field distribution between the different ROI Tele captures (“SE 4”);


including not only the OOI, but also a certain amount of background (“SE 5”) e.g. so that aesthetic cropping can be applied;


capturing a plurality of ROIs with a particular zoom factor (“SE 6”);


capturing multiple OOIs in one ROI Tele capture wherein the OOIs may be distributed according to a particular distribution within the Tele FOV (“SE 7”);


capturing one or more OOIs in one ROI wherein the OOIs are to be located at particular positions or areas within the T image ("SE 8");


capturing a plurality of ROIs with particular zoom factors, e.g. so that the images of the ROIs or of particular OOIs which are formed on the image sensor may have a particular image size ("SE 9");


a particular spectroscopic or colour composition range (“SE 10”);


a particular brightness range ("SE 11"); particular scene characteristics, which may be visual data ("SE 12") such as texture;


including not only the OOI, but also a certain amount of background wherein the T camera settings may be selected so that the OOI may be in focus and the background may have some particular degree of optical bokeh (“SE 13”) or may have a minimal or maximal degree of optical bokeh (“SE 14”);


capturing with a higher preference specific types of OOIs, e.g. a user may be able to select whether e.g. animals or plants or buildings or humans are captured by the Tele camera with a higher preference ("SE 15"); or


capturing a preferred type of OOI with higher preference in some particular state or condition, e.g. a human may be captured with open eyes with a higher preference or a bird may be captured with open wings with higher preference (“SE 16”) etc., or other criteria known in photography may be considered for best user experience.


The Tele capture strategies are respectively defined for providing a best user experience. According to the Tele capture strategy, camera controller 444 may adjust the settings of the T camera, e.g. with respect to a selected zoom factor or to a selected f number or to a POV that the scanning camera may be directed to etc. Other techniques described herein such as the calculation of a saliency map or the application of a motion model or the use of an object tracking algorithm etc. may be used or adapted e.g. by modifying settings to implement a particular Tele capture strategy.


In another embodiment, camera controller 444 may decide to capture a ROI that is a sub-region of an OOI that exceeds the native FOVT boundaries. Such objects will be referred to as "large" objects. When a "large" object is selected, salient ROI extractor 438 may calculate an additional saliency map on the segment of the Wide FOV that contains the large object. The saliency map may be analysed, and the most visually attentive (salient) sub-region of the large object may be selected to be captured by the T camera. For example, the sub-region may replace the large object data in following calculation steps. Camera controller 444 may direct a scanning T camera towards the sub-region for capturing it.


Smart panorama module 450 may decide whether to save (capture) or discard a T image, e.g. smart panorama module 450 may save only the "best" images out of all T images captured. The best images may be defined as images that contain the largest amount of salient information. In other embodiments, the best images may include particular objects that may be of high value for the individual user, e.g. particular persons or animals. Smart panorama module 450 may e.g. be taught automatically (e.g. by a machine learning procedure) or manually by the user which ROIs are to be considered best images. In yet other embodiments, the best images may be an image captured with a particular zoom factor, or a plurality of images each including a ROI, wherein each ROI may be captured with a particular zoom factor or some other property, e.g. so that the images of the ROIs which are formed on the image sensor may have a particular size, a particular spectroscopic or colour composition range, a minimum degree of focus or defocus, a particular brightness range, or particular scene characteristics that may be visual data such as texture. In some embodiments, smart panorama module 450 may verify that newly captured images have non-overlapping FOVs with previously saved (i.e. already selected) images.


In some embodiments, object tracker 442 may track a selected ROI across consecutive W images. Different tracking methods may be used, e.g. Henriques et al., "High-speed tracking with kernelized correlation filters". The object tracking may proceed until the ROI is captured by the T camera or until the object tracking process fails. In some embodiments, object tracker 442 may also be configured to predict a future position of the ROI, e.g. based on a current camera position and some motion model. For this prediction, an extension of a Kalman filter or any other motion estimation method as known in the art may be used. Examples of Kalman filter methods can be found in the article "An Introduction to the Kalman Filter", published by Welch and Bishop in 1995. The position prediction may be used for directing the scanning T camera to an expected future ROI position. In some embodiments, the estimated velocity of an ROI may also be considered. The velocity may refer to the velocity of e.g. an OOI with respect to other objects in the scene, or to the velocity of e.g. an OOI with respect to the movement of electronic device 400.
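
A minimal sketch of the position-prediction idea, using OpenCV's cv2.KalmanFilter with a constant-velocity state (x, y, vx, vy) over tracked ROI centers; the noise covariances and the example measurements are arbitrary assumptions, and a Kalman-filter extension or any other motion model could be substituted.

```python
import cv2
import numpy as np


def make_cv_kalman() -> cv2.KalmanFilter:
    """Constant-velocity Kalman filter over an ROI center: state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)  # non-zero initial uncertainty
    return kf


kf = make_cv_kalman()
for mx, my in [(100, 200), (108, 203), (117, 207)]:   # tracked ROI centers per W frame
    kf.predict()
    kf.correct(np.array([[mx], [my]], np.float32))
pred = kf.predict()                                    # state one frame ahead
print("expected next ROI center:", pred[0, 0], pred[1, 0])
```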


In other embodiments, camera controller 444 may be configured to perform fault detection. The fault detection may for example raise an error in case a particular threshold in terms of image quality or scene content is not met. For example, an error may be raised if a certain amount of (a) motion blur, (b) electronic noise, (c) defocus blur, obstructions in the scene or other undesired effects is detected in the image. In some examples, in case an ROI image raised an error, this image will not be considered for a smart panorama image, and a scanning T camera may be instructed to re-direct to the scene segment comprising the ROI and to re-capture the ROI.


In other embodiments, camera controller 444 may consider further user inputs for a capture decision. User inputs may be intentional or unintentional. For example, eye tracking may be used to make a capture decision: a user-facing camera may be used to automatically observe the eye movement of a user looking at a screen of a camera hosting device or at the scene itself. For example, in case a user's eyes stay a significantly longer time on a particular scene segment than on other scene segments, the given segment may be considered important to the user and may be captured with increased priority.


In other embodiments and for example for capturing objects that are large with respect to the Tele FOV or for capturing objects with very high resolution, camera controller 444 may be configured to capture an ROI not by a single T image, but by a plurality of T images that include different segments of the ROI. The plurality of T images may be stitched together into one image that may display the ROI in its entirety.


A final selection of best images may be performed by smart panorama module 450. Smart panorama module 450 may e.g. consider (i) the maximal storage capacity, (ii) FOV overlap across saved images, and (iii) the spatial distribution of the ROIs on a panorama FOV.
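
A plausible (and deliberately simplified) version of such a final selection is sketched below: candidates are ranked by their salient-information score, captures whose FOVs overlap already-selected ones are skipped, and the count is capped to respect storage limits. The IoU threshold and the image cap are assumptions; spatial-distribution balancing is omitted.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) in panorama coordinates


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes, used as the FOV-overlap measure."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0


def select_best(candidates: List[Tuple[Box, float]],
                max_images: int = 8,
                max_overlap: float = 0.3) -> List[Box]:
    """Greedy final selection of Tele captures for the smart panorama.

    candidates: (box, salient-information score) pairs, boxes in panorama coords.
    Keeps the highest-scoring captures, skips ones whose FOV overlaps an
    already-selected capture, and stops at a storage-driven image cap.
    """
    selected: List[Box] = []
    for box, _score in sorted(candidates, key=lambda c: c[1], reverse=True):
        if len(selected) >= max_images:
            break
        if all(iou(box, s) <= max_overlap for s in selected):
            selected.append(box)
    return selected
```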


Smart panorama module 450 additionally includes a cropping module (not shown) that aims to find a cropping window that satisfies criteria such as providing best user experience as described above, as well as criteria from aesthetic image cropping, e.g. as described by Wang et al. in the article "A deep network solution for attention and aesthetics aware photo cropping", 2018.


In some embodiments, smart panorama module 450 may perform an additional saliency calculation on a stitched image with a FOV wider than the Wide FOV. For example, saliency information can be calculated by applying a saliency or SOD model on a segment of the panorama FOV, or on the entire panorama FOV.


In other embodiments, smart panorama module 450 may use semantic information to select T images to be used in the smart panorama image, e.g. by applying a detection algorithm. The chances of selecting a T image to be used in the smart panorama image may e.g. be elevated if human faces were detected by a face-detection algorithm.


The selected T images may be displayed to the user, for example via a tap on a rectangle marked on the smart panorama image, or with a zoom transition from the smart panorama FOV to the native Tele FOV via zoom pinching.



FIG. 5 shows a general workflow of the smart panorama "feature" (or method of use) as described herein, which could for example be implemented on (performed or carried out in) an electronic device such as device 400. The capture process starts with the capturing of a regular panorama image in step 502. A processing unit such as AP 430 receives a series of W (Wide) images as the user directs the W camera along the scene in step 504. The W images may be captured autonomously. The W images are processed by a RT module such as 436 to identify OOIs and ROIs in step 506. After ROIs are identified, in case of a 2D scanning camera, a processing unit may direct a high-resolution T camera to the regions of interest in step 508. In case of a "centered FOVT camera" (i.e. a T camera with a FOVT centered with respect to the Wide FOV) with multiple zoom states, camera controller 444 may select a beneficial zoom state for capturing the T image during the regular panorama capture. Here, the term "beneficial zoom state" may refer to a zoom state that provides a best user experience as described above. With the T camera directed towards the ROI, T images are captured in step 510. In case fault detection is performed and raises an error message, one may return to step 508, i.e. the processing unit may re-direct the high-resolution Tele camera to the ROI and capture it again. Eventually the W images are stitched by panorama module 448 to create a regular panorama image in step 512. In step 514, smart panorama module 450 decides which T images are to be included in the smart panorama and pins the chosen very-high-resolution T images to their locations within the panorama image.
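
Purely as an orientation aid, the loop below mirrors steps 502-514 in pseudocode-like Python. Every helper on the hypothetical camera object (grab_wide_frame, detect_rois, direct_tele, capture_tele, fault, stitch, pin) is a placeholder for the modules described above, not an actual device API.

```python
def capture_smart_panorama(camera, max_wide_frames: int = 40):
    """Hypothetical orchestration of steps 502-514 on a placeholder `camera` object."""
    wide_frames, tele_shots = [], []
    for _ in range(max_wide_frames):                 # 502/504: user sweeps the scene
        w = camera.grab_wide_frame()
        wide_frames.append(w)
        for roi in camera.detect_rois(w):            # 506: RT module finds OOIs/ROIs
            camera.direct_tele(roi)                  # 508: scan FOV_T / pick zoom state
            t = camera.capture_tele()                # 510: high-resolution Tele capture
            if not camera.fault(t):                  # optional fault detection
                tele_shots.append((roi, t))
    panorama = camera.stitch(wide_frames)            # 512: stitch the regular panorama
    return camera.pin(panorama, tele_shots)          # 514: pin selected Tele images
```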


In some examples, image data of the T images captured in step 510 may be used for the regular panorama image.


In another embodiment with a centered FOVT camera, the processing unit may determine the right timing for capturing the T image during the panorama capture.





FIGS. 6A and 6B show the localization of the T image within the W image. The localization may be performed in step 508 for directing a high-resolution camera to an ROI, or in step 514 for pinning a T image into a particular location in the panorama image. The T image may be captured by a scanning Tele camera or a non-scanning Tele camera.


In FIG. 6A, the scanning T FOV 602 is shown at an estimated POV within the Wide camera FOV 604. The scanning T FOV estimation with respect to the W FOV 604 is acquired from the Tele-Wide calibration information, which in general may rely on position sensor measurements that provide OPFE position data. Since the T FOV estimation is calibration dependent, it may be insufficiently accurate in terms of matching the T image data with the W image data. Typically, before the localization, image points of a same object point may deviate by more than 25 pixels, or by more than 50 pixels, or by more than 100 pixels between the Wide and Tele camera (assuming a pixel size of about 1 μm). Tele localization is performed to improve the accuracy of the T FOV estimation over the W FOV. The localization process includes the following:

    • 1. First, a search area 606 may be selected as shown in FIG. 6A. The selection may be based on the prior (calibration based) estimation. The search area may be defined by the FOV center of the prior estimation, which may be symmetrically embedded in a rectangular area, wherein the rectangular area may be for example twice or three times or four times the area covered by a T FOV.
    • 2. The search area is cropped from the W FOV frame.
    • 3. The next step may include template matching, wherein the source may be represented by the cropped search area and the template may be represented by the T FOV frame. This process may be performed by cross-correlation of the template over different locations of the search area or over the entire search area. The location with the highest matching value may indicate the best estimation of the T FOV location within the W FOV. In FIG. 6B, 608 indicates the final estimated Tele FOV after the localization.


      After the localization, image points of a same object point may typically deviate by less than 20 pixels, by less than 10 pixels, or even by less than 2 pixels between the Wide and Tele camera.
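
A minimal sketch of the localization steps above, assuming the Tele frame has already been rescaled to Wide pixel units and using OpenCV's normalized cross-correlation template matching; the search-area scale factor and the function name are assumptions.

```python
from typing import Tuple

import cv2
import numpy as np


def localize_tele(wide: np.ndarray, tele: np.ndarray,
                  prior_center: Tuple[int, int],
                  search_scale: float = 2.0) -> Tuple[int, int]:
    """Refine the Tele FOV position inside the Wide frame by template matching.

    `tele` is assumed to be rescaled to Wide pixel units; `prior_center` is the
    calibration-based estimate of the Tele FOV center in Wide coordinates.
    """
    th, tw = tele.shape[:2]
    cx, cy = prior_center
    # 1. Build a search area around the prior estimate (~search_scale x the Tele FOV).
    hw, hh = int(tw * search_scale / 2), int(th * search_scale / 2)
    x0, y0 = max(0, cx - hw), max(0, cy - hh)
    x1, y1 = min(wide.shape[1], cx + hw), min(wide.shape[0], cy + hh)
    # 2. Crop the search area from the Wide frame.
    search = wide[y0:y1, x0:x1]
    # 3. Normalized cross-correlation of the Tele frame over the search area.
    result = cv2.matchTemplate(search, tele, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    # Top-left corner of the best-matching Tele FOV location, in Wide coordinates.
    return x0 + max_loc[0], y0 + max_loc[1]
```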


While this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. The disclosure is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.


All references mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual reference was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present application.

Claims
  • 1. A handheld device, comprising: a Wide camera for capturing Wide images, each Wide image having a respective Wide field of view (FOVw);a scanning Tele camera for capturing Tele images, each Tele image having a respective Tele field of view (FOVT) smaller than FOVw; anda processor configured to autonomously direct scanning of the FOVT to a specific location within a scene, to stitch a plurality of Wide images with respective FOVw into a panorama image with a field of view FOVp>FOVw and to pin a Tele image to a given location within the panorama image to obtain a smart panorama.
  • 2. The handheld device of claim 1, wherein the processor is configured to crop the Tele image before pinning it to the given location.
  • 3. The handheld device of claim 1, wherein the scanning Tele camera is configured to capture the Tele images autonomously.
  • 4. The handheld device of claim 1, wherein the processor is configured to use a motion model that predicts a future movement of the handheld device and/or of an object within the FOVp.
  • 5. The handheld device of claim 1, wherein the pinning a Tele image to a given location within the panorama image is obtained by executing localization between the Wide images and the Tele image.
  • 6. The handheld device of claim 5, wherein the executing the localization includes calculating a cross-correlation between the Tele image and the Wide image.
  • 7. The handheld device of claim 1, wherein the FOVT scanning is performed by rotating one optical path folding element.
  • 8. The handheld device of claim 1, wherein the FOVT scanning is performed by rotating two or more optical path folding elements.
  • 9. The handheld device of claim 1, wherein scene information in the Tele images includes scene information from a field of view larger than a native Tele field of view and smaller than a Wide field of view.
  • 10. The handheld device of claim 1, wherein a particular segment of a scene is captured by the Tele camera and is pinned to locations within the panorama image.
  • 11. The handheld device of claim 10, wherein the processor uses a tracking algorithm to capture the particular segment of a scene with the Tele camera.
  • 12. The handheld device of claim 1, wherein a program decides which scene information captured by the Tele camera and pinned to locations within the panorama image.
  • 13. The handheld device of claim 1, wherein the Tele image pinned to a given location within the panorama image is shown additionally in another location within the panorama image.
  • 14. A handheld device, comprising: a Wide camera for capturing Wide images, each Wide image having a respective Wide field of view (FOVw);a Tele camera for capturing Tele images, each Tele image having a respective Tele field of view (FOVT) smaller than FOVw; anda processor configured to stitch a plurality of Wide images with respective FOVw into a panorama image with a field of view FOVp>FOVw and to pin a Tele image to a given location within the panorama image to obtain a smart panorama, wherein the pinning of the Tele image to a given location within the panorama image is obtained by executing localization between the panorama image and the Tele image.
  • 15. The handheld device of claim 14, wherein the executing the localization reduces a deviation of the image points of a same object point in a Tele image and the panorama image by at least 2.5 times.
  • 16. The handheld device of claim 14, wherein the executing the localization reduces a deviation of the image points of a same object point in a Tele image and the panorama image by at least 5 times.
  • 17. A handheld device, comprising: a Wide camera for capturing Wide images, each Wide image having a respective Wide field of view (FOVw), wherein the Wide camera is configured to capture Wide images autonomously;a Tele camera for capturing Tele images, each Tele image having a respective Tele field of view (FOVT) smaller than FOVw; anda processor configured to stitch a plurality of Wide images with respective FOVw into a panorama image with a field of view FOVp>FOVw and to pin a Tele image to a given location within the panorama image to obtain a smart panorama wherein the processor is configured to calculate a saliency map based on Wide image data to decide which scene information is captured by the Tele camera and pinned to locations within the panorama image.
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a 371 of international patent application PCT/IB2020/061461 filed Dec. 3, 2020, and claims priority from U.S. Provisional Patent Application No. 62/945,519 filed Dec. 9, 2019, which is expressly incorporated herein by reference in its entirety.

PCT Information
Filing Document Filing Date Country Kind
PCT/IB2020/061461 12/3/2020 WO
Publishing Document Publishing Date Country Kind
WO2021/116851 6/17/2021 WO A
Related Publications (1)
Number Date Country
20220303464 A1 Sep 2022 US
Provisional Applications (1)
Number Date Country
62945519 Dec 2019 US