The present application relates to image capture and generation methods and apparatus and, more particularly, to methods and apparatus which detect and/or indicate a dirty lens condition.
While professional photographers often go to great lengths to protect and keep their camera lenses clean, even professional photographers may not notice dirt, dust, oil, fingerprints, water droplets or other contaminants on a lens, sometimes referred to as a dirty lens condition, resulting in degradation of one or more images captured using a dirty camera lens. The problem of dirty camera lenses grows in the case of lower end camera devices where users often store the camera device, e.g., a cell phone including a camera, in a pocket or other location where the lens may easily become dirty. Laptop computers, pads, tablets or other devices which include cameras can also experience this problem.
While dirt on camera lenses can be a common problem that results in degraded images, the user is often unaware of the degradation in image quality and/or the need to clean a lens until the degradation due to the dirty lens becomes severe and noticeable.
Unfortunately, by the time a dirty camera lens or its effect is noticed by a user of a camera device the opportunity to clean the dirty lens and still capture particular scenes of interest in a timely manner may have passed.
In view of the above discussion it should be appreciated that it would be desirable if methods or apparatus could be developed which detect and alert a user of a camera to a dirty lens condition.
Methods and apparatus relating to the detection of a dirty camera lens are described. In accordance with some features, images are captured using one or more camera lenses of a camera device. The images or characteristics of the images are then compared to determine if a dirty camera lens condition exists. In some embodiments the contrast of individual captured images is determined and the contrast of the different images is compared. A difference in image contrast, e.g., a difference above a threshold level, is indicative in some embodiments of a dirty camera lens. This is because a dirty camera lens tends to cause blurring and thus a reduction in contrast in a captured image. As a result, when the contrast of two images is compared, a significant difference in contrast levels of images corresponding to the same scene which are captured at the same time or close to one another in time can, and in some embodiments does, indicate a dirty lens condition. A pixel level comparison of first and second images can also be used to determine a dirty lens condition, with a significant difference in color or luminance between the compared image pixels, e.g., an average difference over a predetermined or dynamically determined threshold, indicating a dirty lens condition.
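The following is a minimal, illustrative sketch of this kind of comparison, not the claimed implementation: it assumes two already-aligned grayscale captures of the same scene, uses RMS (standard-deviation) contrast and a mean pixel difference as the metrics, and uses placeholder threshold values.

```python
# Illustrative sketch (not the claimed implementation): compares two images of the
# same scene, captured at or near the same time, using a simple RMS-contrast metric
# and a pixel-level difference. Threshold values are placeholders.
import numpy as np

def rms_contrast(gray: np.ndarray) -> float:
    """Return the RMS (standard-deviation) contrast of a grayscale image."""
    return float(gray.std())

def dirty_lens_suspected(img_a: np.ndarray, img_b: np.ndarray,
                         contrast_thresh: float = 0.15,
                         pixel_diff_thresh: float = 0.10) -> bool:
    """Flag a possible dirty lens when the contrast of the two images differs
    significantly or the average pixel difference over the overlap is large."""
    a = img_a.astype(np.float64) / 255.0
    b = img_b.astype(np.float64) / 255.0
    # Contrast comparison: a dirty lens blurs the image and lowers contrast.
    contrast_gap = abs(rms_contrast(a) - rms_contrast(b))
    # Pixel-level comparison over the (assumed aligned) overlapping region.
    mean_pixel_diff = float(np.mean(np.abs(a - b)))
    return contrast_gap > contrast_thresh or mean_pixel_diff > pixel_diff_thresh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, (480, 640), dtype=np.uint8)
    # Simulate the blurring effect of a dirty lens by heavy local averaging.
    blurred = clean.reshape(480 // 8, 8, 640 // 8, 8).mean(axis=(1, 3))
    blurred = np.repeat(np.repeat(blurred, 8, axis=0), 8, axis=1).astype(np.uint8)
    print(dirty_lens_suspected(clean, blurred))  # likely True
```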
In response to determining a dirty lens condition, an indication of the dirty lens condition is generated and presented to a user of the camera. The indication may be a visual or audible indication of the dirty lens condition.
In at least some embodiments, the methods of the present invention are used in apparatus including multiple optical chains, e.g., each including one or more lenses and being capable of capturing an image. Images or characteristics of images captured by different optical chains are compared and used to determine whether a dirty lens condition exists or not. In the case of a dirty lens condition a user perceivable indication is generated to make the user of the camera device aware of the dirty lens condition and the need to perform a lens cleaning operation. In the case of a camera including multiple optical chains, the user may be, and in some embodiments is, informed of which lens of the camera device is dirty. Information about the dirty lens condition may be, and in some embodiments is, included in a visual display. For example, a dirty lens condition may be indicated by presenting the user with a “Clean Lens X” message specifying to the user which of the plurality of lenses should be cleaned and thus making the user aware that the lens is dirty. Alternatively, a simple message saying “Dirty Lens Condition detected” or some other similar warning may be presented to the user, e.g., via a display on the camera. A camera warning indicator light or a particular sound may be, and in some embodiments is, used to indicate a dirty lens condition and optionally which lens in particular is dirty.
Information about a dirty lens may be, and in some embodiments is, stored and associated with images captured while the lens is determined to be dirty. The dirty lens condition can be, and in some embodiments is, taken into consideration when generating composite images from images captured from multiple optical chains of the camera device. For example, a dirty lens condition associated with a captured image may result in the image being omitted from use when generating a composite image from multiple captured images, e.g., with the images captured by optical chains not suffering from a dirty lens condition being used without the image corresponding to the dirty lens. Alternatively, the image corresponding to the dirty lens and/or its pixel values may be given a lower priority or weight in an image combining process than would normally be applied in the absence of a dirty lens condition, so that the dirty lens condition does not degrade the composite image. Accordingly, in at least some embodiments dirty lens condition information is used in the image combining process when it is available.
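A short sketch of this weighted combining idea follows, under assumptions not fixed by the description above: the images are already registered and the same size, and the dirty-lens weight is an arbitrary placeholder.

```python
# Illustrative sketch (assumed weighting scheme, not the patented combining process):
# images flagged as coming from a dirty lens receive a reduced weight when the
# composite image is formed, or may be excluded entirely; weights are placeholders.
import numpy as np

def combine_images(images, dirty_flags, dirty_weight=0.2, clean_weight=1.0,
                   exclude_dirty=False):
    """Form a composite as a weighted average of aligned, same-size images.
    Dirty-lens images are either down-weighted or excluded entirely."""
    stack, weights = [], []
    for img, dirty in zip(images, dirty_flags):
        if dirty and exclude_dirty:
            continue  # omit the dirty-lens image from the composite
        stack.append(img.astype(np.float64))
        weights.append(dirty_weight if dirty else clean_weight)
    w = np.asarray(weights, dtype=np.float64)
    composite = np.tensordot(w, np.stack(stack), axes=1) / w.sum()
    return composite.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    imgs = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(3)]
    print(combine_images(imgs, dirty_flags=[False, True, False]))
```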
When the dirty lens condition is rectified, e.g., by cleaning of the dirty lens, the change in the lens condition is detected and the dirty lens indication is deactivated. The detection of the change in condition may be based on images captured subsequent to the detection of the dirty lens condition.
The methods and apparatus of the present invention can be used to detect dirty lens conditions during periods of time when the camera is performing autofocus or other operations and are not dependent on user selection and capture of specific images to enable the dirty lens detection and notification process to be implemented. However, images captured in response to user control can be, and in at least some embodiments are, used in dirty lens determination operations.
The display device 102 may be, and in some embodiments is, a touch screen, used to display images, video, information regarding the configuration of the camera device, and/or status of data processing being performed on the camera device. In the case where the display device 102 is a touch screen, the display device 102 serves as an additional input device and/or as an alternative to the separate input device, e.g., buttons, 106. As will be discussed, in some embodiments a zooming operation can be controlled by pressing a zoom control sensor, e.g., a touch sensor. In some embodiments when the camera user touches the zoom control sensor the zoom functionality is enabled. For example, a finger on the touch sensor activates/enables the zoom functionality. The I/O interface 112 couples the display 102 and input device 106 to the bus 116 and interfaces between the display 102, input device 106 and the other elements of the camera which can communicate and interact via the bus 116.
In addition to being coupled to the I/O interface 112, the bus 116 is coupled to the memory 108, hardware assembly of modules 180, processor 110, an optional autofocus controller 132, the wireless and/or wired interface 114, a zoom control module 140, and a plurality of optical chains 130, e.g., X optical chains also referred to herein as camera modules. In some embodiments X is an integer greater than 2, e.g., 3, 4, 7 or a larger value depending on the particular embodiment. The plurality of camera modules 130 may be implemented using any of the various camera module sets and/or arrangements described in the present application. Images captured by individual optical chains in the plurality of optical chains 130 can be, and in various embodiments are, stored in memory 108, e.g., as part of the data/information 120, and processed by the processor 110, e.g., to generate one or more composite images.
The X camera modules 131 through 133 may, and in various embodiments do, include camera modules having different focal lengths. Multiple camera modules may be provided at a given focal length. For example, multiple camera modules having a 35 mm equivalent focal length to a full frame DSLR camera, multiple camera modules having a 70 mm equivalent focal length to a full frame DSLR camera and multiple camera modules having a 140 mm equivalent focal length to a full frame DSLR camera are included in an individual camera device in some embodiments. The various focal lengths are exemplary and a wide variety of camera modules with different focal lengths may be used. The camera device 100 is to be considered exemplary. To the extent that other references are made to a camera or camera device with regard to some of the other figures, it is to be understood that at least in some embodiments the camera device or camera will include the elements shown in
As will be discussed below images from different camera modules captured at the same time or during a given time period can be combined to generate a composite image, e.g., an image having better resolution, frequency content and/or light range than an individual image captured by a single one of the camera modules 131, 133.
Multiple captured images and/or composite images may, and in some embodiments are, processed to form video, e.g., a series of images corresponding to a period of time. The interface 114 couples the internal components of the camera device 100 to an external network, e.g., the Internet, and/or one or more other devices e.g., memory or stand alone computer. Via interface 114 the camera device 100 can and does output data, e.g., captured images, generated composite images, and/or generated video. The output may be to a network or to another external device for processing, storage and/or to be shared. The captured image data, generated composite images and/or video can be provided as input data to another device for further processing and/or sent for storage, e.g., in external memory, an external device or in a network.
The interface 114 of the camera device 100 may be, and in some instances is, coupled to a computer so that image data may be processed on the external computer. In some embodiments the external computer has a higher computational processing capability than the camera device 100 which allows for more computationally complex image processing of the image data outputted to occur on the external computer. The interface 114 also allows data, information and instructions to be supplied to the camera device 100 from one or more networks and/or other external devices such as a computer or memory for storage and/or processing on the camera device 100. For example, background images may be supplied to the camera device to be combined by the camera processor 110 with one or more images captured by the camera device 100. Instructions and/or data updates can be loaded onto the camera via interface 114 and stored in memory 108.
The lighting module 104 in some embodiments includes a plurality of light emitting elements, e.g., LEDs, which can be illuminated in a controlled manner to serve as the camera flash, with the LEDs being controlled in groups or individually, e.g., in a synchronized manner based on operation of the rolling shutter and/or the exposure time. For purposes of discussion module 104 will be referred to as an LED module since in the exemplary embodiment LEDs are used as the light emitting devices, but as discussed above the invention is not limited to LED embodiments and other light emitting sources may be used as well. In some embodiments the LED module 104 includes an array of light emitting elements, e.g., LEDs. In some embodiments the light emitting elements in the LED module 104 are arranged such that each individual LED and/or a group of LEDs can be illuminated in a synchronized manner with rolling shutter operation. Light emitting elements are illuminated, in some but not all embodiments, sequentially, so that different portions of an area are illuminated at different times and the full area need not be consistently lighted during image capture. While all lighting elements are not kept on for the full duration of an image capture operation involving the reading out of the full set of pixel elements of a sensor, the portion of the area which is having its image captured, e.g., the scan area, at a given time as a result of the use of a rolling shutter will be illuminated thanks to synchronization of the lighting of light emitting elements with rolling shutter operation. Thus, various light emitting elements are controlled to illuminate at different times in some embodiments based on the exposure time and which portion of a sensor will be used to capture a portion of an image at a given time. In some embodiments the light emitting elements in the LED module 104 include a plurality of sets of light emitting elements, each set of light emitting elements corresponding to a different image area which it illuminates and which is captured by a different portion of the image sensor. Lenses may be, and in some embodiments are, used to direct the light from different light emitting elements to different scene areas which will be captured by the camera through the use of one or more camera modules.
The rolling shutter controller 150 is an electronic shutter that controls reading out of different portions of one or more image sensors at different times. Each image sensor is read one row of pixel values at a time and the various rows are read in order. As will be discussed below, the reading out of images captured by different sensors is controlled in some embodiments so that the sensors capture a scene area of interest, also sometimes referred to as an image area of interest, in a synchronized manner with multiple sensors capturing the same image area at the same time in some embodiments.
While an electronic rolling shutter is used in most of the embodiments, a mechanical rolling shutter may be used in some embodiments.
The light control device 152 is configured to control light emitting elements (e.g., included in the LED module 104) in a synchronized manner with the operation of the rolling shutter controller 150. In some embodiments the light control device 152 is configured to control different sets of light emitting elements in the array to emit light at different times in a manner that is synchronized with the timing of the rolling shutter 150. In some embodiments the light control device 152 is configured to control a first set of light emitting elements corresponding to a first image area to output light during a first time period, the first time period being determined based on the timing of the rolling shutter and being a period of time during which a first portion of the sensor is exposed for image capture. In some embodiments the light control device 152 is further configured to control a second set of light emitting elements corresponding to a second image area to output light during a second time period, the second time period being determined based on the timing of the rolling shutter and being a period of time during which a second portion of the sensor is exposed for image capture. In some embodiments the first time period includes at least a portion of time which does not overlap the second time period.
In some embodiments the light control device 152 is further configured to control an Nth set of light emitting elements corresponding to an Nth image area to output light during an Nth time period, said Nth time period being determined based on the timing of the rolling shutter and being a period of time during which an Nth portion of the sensor is exposed for image capture, N being an integer value corresponding to the total number of time periods used by said rolling shutter to complete one full read out of the total image area.
In some embodiments the light control device 152 is further configured to control the second set of light emitting elements to be off during said portion of time included in the first period of time which does not overlap said second period of time. In some embodiments the light control device is configured to determine when the first set and said second set of light emitting elements are to be on based on an exposure setting. In some embodiments the light control device is configured to determine when said first set and said second set of light emitting elements are to be on based on an amount of time between read outs of different portions of said sensor. In some embodiments the different sets of light emitting elements in the plurality of light emitting elements are covered with different lenses. In some such embodiments the light control device 152 is further configured to determine which sets of light emitting elements to use based on an effective focal length setting being used by the camera device.
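A simple timing sketch of the synchronization described above follows, assuming a uniform row readout rate and N LED sets each illuminating an equal horizontal band of the sensor; the numbers and the helper name are illustrative assumptions, not taken from the light control device 152.

```python
# Illustrative timing sketch (assumptions: uniform row readout, N LED sets each
# covering an equal horizontal band of the sensor; values are placeholders).
def led_set_windows(num_sets, rows, row_readout_us, exposure_us):
    """Return (on_time_us, off_time_us) for each LED set so that a set is lit
    while the sensor rows imaging its band are being exposed."""
    rows_per_set = rows // num_sets
    windows = []
    for s in range(num_sets):
        first_row = s * rows_per_set
        last_row = first_row + rows_per_set - 1
        # A row's exposure starts exposure_us before that row is read out.
        on_time = first_row * row_readout_us - exposure_us
        off_time = last_row * row_readout_us
        windows.append((max(0, on_time), off_time))
    return windows

if __name__ == "__main__":
    # 4 LED sets, 1000-row sensor, 10 us per row readout, 5 ms exposure.
    for i, (on, off) in enumerate(led_set_windows(4, 1000, 10, 5000)):
        print(f"LED set {i}: on at {on} us, off at {off} us")
```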
The accelerometer module 122 includes a plurality of accelerometers including accelerometer 1 124, accelerometer 2 126, and accelerometer 3 128. Each of the accelerometers is configured to detect camera acceleration in a given direction. Although three accelerometers 124, 126 and 128 are shown included in the accelerometer module 122, it should be appreciated that in some embodiments more than three accelerometers can be used. Similarly the gyro module 192 includes 3 gyros, 194, 196 and 198, one for each axis, which is well suited for use in the 3 dimensional real world environments in which camera devices are normally used. The camera acceleration detected by an accelerometer in a given direction is monitored. Acceleration and/or changes in acceleration, and rotation indicative of camera motion, are monitored and processed to detect one or more directions of motion, e.g., forward camera motion, backward camera motion, etc. The acceleration/rotation indicative of camera motion can be used to control zoom operations and/or be provided in some cases to a camera mount which can then take actions such as rotating a camera mount or rotating a camera support to help stabilize the camera.
The camera device 100 may include, and in some embodiments does include, an autofocus controller 132 and/or autofocus drive assembly 134. The autofocus controller 132 is present in at least some autofocus embodiments but would be omitted in fixed focus embodiments. The autofocus controller 132 controls adjustment of at least one lens position in one or more optical chains used to achieve a desired, e.g., user indicated, focus. In the case where individual drive assemblies are included in each optical chain, the autofocus controller 132 may drive the autofocus drive of various optical chains to focus on the same target.
The zoom control module 140 is configured to perform a zoom operation in response to user input.
The processor 110 controls operation of the camera device 100 to control the elements of the camera device 100 to implement the steps of the methods described herein. The processor may be a dedicated processor that is preconfigured to implement the methods. However, in many embodiments the processor 110 operates under the direction of software modules and/or routines stored in the memory 108 which include instructions that, when executed, cause the processor to control the camera device 100 to implement one, more or all of the methods described herein. Memory 108 includes an assembly of modules 118 wherein one or more modules include one or more software routines, e.g., machine executable instructions, for implementing the image capture and/or image data processing methods of the present invention. Individual steps and/or lines of code in the modules of 118, when executed by the processor 110, control the processor 110 to perform steps of the method of the invention. When executed by processor 110, the data processing modules 118 cause at least some data to be processed by the processor 110 in accordance with the method of the present invention. The assembly of modules 118 includes a mode control module which determines, e.g., based on user input, which of a plurality of camera device modes of operation is to be implemented. In different modes of operation, different camera modules 131, 133 may be, and often are, controlled differently based on the selected mode of operation. For example, depending on the mode of operation different camera modules may use different exposure times. Alternatively, the scene area to which the camera module is directed, and thus what portion of a scene is captured by an individual camera module, may be changed depending on how the images captured by different camera modules are to be used, e.g., combined to form a composite image, and what portions of a larger scene individual camera modules are to capture during the user selected or automatically selected mode of operation.
The resulting data and information (e.g., captured images of a scene, combined images of a scene, etc.) are stored in data memory 120 for future use, additional processing, and/or output, e.g., to display device 102 for display or to another device for transmission, processing and/or display. The memory 108 includes different types of memory, for example, Random Access Memory (RAM) in which the assembly of modules 118 and data/information 120 may be, and in some embodiments are, stored for future use, and Read Only Memory (ROM) in which the assembly of modules 118 may be stored so that it is not lost in the event of a power failure. Non-volatile memory such as flash memory for storage of data, information and instructions may also be used to implement memory 108. Memory cards may be added to the device to provide additional memory for storing data (e.g., images and video) and/or instructions such as programming. Accordingly, memory 108 may be implemented using any of a wide variety of non-transitory computer or machine readable mediums which serve as storage devices.
Having described the general components of the camera device 100 with reference to
Box 117 represents a key and indicates that OC=optical chain, e.g., camera module, and each L1 represents an outermost lens in an optical chain. Box 119 represents a key and indicates that S=sensor, F=filter, L=lens, L1 represents an outermost lens in an optical chain, and L2 represents an inner lens in an optical chain. While
OC 4 133 includes an outer lens L1 109, a filter 135, an inner lens L2 137, and a sensor 139. OC 4 133 also includes LD 141 for controlling the position of lens L2 137. The LD 141 includes a motor or other drive mechanism and operates in the same or similar manner as the drives of the other optical chains. While only three of the OCs are shown in
While a filter may be of a particular color or used in some optical chains, filters need not be used in all optical chains and may not be used in some embodiments. In embodiments where the filter is expressly omitted and/or described as being omitted or an element which allows all light to pass, while reference may be made to the OCs of
While the processor 110 is not shown being coupled to the LD, and sensors 127, 151, 139 it is to be appreciated that such connections exist and are omitted from
As should be appreciated the number and arrangement of lens, filters and/or mirrors can vary depending on the particular embodiment and the arrangement shown in
The front of the plurality of optical chains 130 is visible in
The overall total light capture area corresponding to the multiple lenses of the plurality of optical chains OC 1 to OC 7, also sometimes referred to as optical camera modules, can, in combination, approximate that of a lens having a much larger opening but without requiring a single lens having the thickness which would normally be necessitated by the curvature of a single lens occupying the area which the lenses shown in
While gaps are shown between the lens openings of the optical chains OC 1 to OC 7, it should be appreciated that the lenses may be made, and in some embodiments are, made so that they closely fit together minimizing gaps between the lenses represented by the circles formed by solid lines. While seven optical chains are shown in
The use of multiple optical chains has several advantages over the use of a single optical chain. Using multiple optical chains allows for noise averaging. For example, given the small sensor size there is a random probability that one optical chain may detect a different number of photons, e.g., one or more, than another optical chain. This may represent noise as opposed to actual human perceivable variations in the image being sensed. By averaging the sensed pixel values corresponding to a portion of an image, as sensed by different optical chains, the random noise may be averaged, resulting in a more accurate and pleasing representation of an image or scene than if the output of a single optical chain were used.
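A minimal sketch of this noise-averaging idea follows, assuming the images from the different optical chains have already been registered so that corresponding pixels view the same scene point.

```python
# Minimal sketch of noise averaging across camera modules (assumes the images are
# already registered so that corresponding pixels view the same scene point).
import numpy as np

def average_pixel_values(images):
    """Average co-located pixel values from several optical chains to suppress
    random (e.g., photon shot) noise."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = np.full((4, 4), 100.0)
    noisy = [truth + rng.normal(0, 10, truth.shape) for _ in range(7)]
    # The averaged error is noticeably smaller than the per-image noise level.
    print(np.abs(average_pixel_values(noisy) - truth).mean())
```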
Given the small size of the optical sensors (e.g., individual pixel elements) the dynamic range, in terms of light sensitivity, is normally limited with the sensors becoming easily saturated under bright conditions. By using multiple optical chains corresponding to different exposure times the dark areas of a scene can be sensed by the sensor corresponding to the longer exposure time while the light areas of a scene can be sensed by the optical chain with the shorter exposure time without getting saturated. Pixel sensors of the optical chains that become saturated as indicated by a pixel value indicative of sensor saturation can be ignored, and the pixel value from the other, e.g., less exposed, optical chain can be used without contribution from the saturated pixel sensor of the other optical chain. Weighting and combining of non-saturated pixel values as a function of exposure time is used in some embodiments. By combining the output of sensors with different exposure times a greater dynamic range can be covered than would be possible using a single sensor and exposure time.
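A sketch of one way such an exposure-weighted merge could be performed is shown below; the saturation level, exposure values and normalization are illustrative assumptions rather than the device's actual combining process.

```python
# Illustrative HDR-style merge (a sketch, not the described device's algorithm):
# saturated pixels from the long-exposure capture are ignored and the remaining
# values are scaled by exposure time before being combined.
import numpy as np

def merge_exposures(images, exposure_times, saturation_level=250):
    """Merge same-scene captures taken with different exposure times.
    Each pixel is normalized to a common radiance scale (value / exposure time);
    saturated pixels contribute nothing."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        valid = img < saturation_level          # drop saturated sensor values
        weight = np.where(valid, t, 0.0)        # weight usable pixels by exposure
        num += weight * (img / t)
        den += weight
    return num / np.maximum(den, 1e-9)

if __name__ == "__main__":
    long_exp = np.array([[255.0, 120.0]])   # first pixel saturated
    short_exp = np.array([[60.0, 30.0]])
    print(merge_exposures([long_exp, short_exp], exposure_times=[4.0, 1.0]))
```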
As illustrated in
As illustrated in the
Note that while supporting a relatively large light capture area and offering a large amount of flexibility in terms of color filtering and exposure time, the camera device 100 shown in
The optical chains shown in
In some but not all embodiments, processor 211 of camera device 200 of
OC 2 207 includes outer lens L1 263, light redirection device 231, hinge drive 293, filter 265, inner lens L2 267, sensor 2 269, and LD 271. OC N 209 includes outer lens L1 275, light redirection device 235, hinge drive 295, filter 277, inner lens L2 279, sensor N 281, and LD 283. The exposure and read out controller 150 controls sensors to read out, e.g., rows of pixel values, in a synchronized manner while also controlling the exposure time. In some embodiments the exposure and read out controller 150 is a rolling shutter controller including an exposure controller 287 and a sensor read out controller 289. An autofocus controller 132 is included to control the lens drives 259, 271 and 283 in some embodiments.
In the
In
In some but not all embodiments, optical chains are mounted in the camera device with some, e.g., the shorter focal length optical chains, extending in a straight manner from the front of the camera device towards the back. However, in the same camera, longer focal length camera modules may, and sometimes do, include light redirection devices which allow at least a portion of the optical path of a camera module to extend sideways, allowing the length of the optical axis to be longer than the camera is deep. The use of light redirection elements, e.g., mirrors, is particularly advantageous for long focal length camera modules given that the overall length of such modules tends to be longer than that of camera modules having shorter focal lengths. A camera may have a wide variety of different camera modules, some with light redirection elements, e.g., mirrors, and others without mirrors. Filters and/or lenses corresponding to different optical chains may be, and in some embodiments are, arranged in planes, e.g., the apertures of the outermost lenses may be configured in a plane that extends parallel to the face of the camera, e.g., a plane in which the front of the camera extends both vertically and horizontally when the camera is held in a vertical orientation with the top of the camera facing up.
The exemplary method of flowchart 500 for detecting and indicating a dirty lens condition will now be described in detail. The method of flowchart 500 can be, and in some embodiments is, performed using a camera device such as the camera 100 of
The exemplary method starts in step 502, e.g., with a user initiating the capture of an image, e.g., of a scene area of interest, which causes the camera device, e.g., camera device 100 or 200, to initiate image capture of the scene area of interest by one or more optical chains. For the purposes of explanation, the camera device being utilized in this exemplary method includes a plurality of optical chains, and each of the optical chains can be independently operated and controlled.
Operation proceeds from step 502 to steps 504, 506, 508, and 510, which involve image capture operations performed by image capture module 601. The image capture operations may be, and in some embodiments are, performed in a synchronized manner. In some embodiments, the image capture module operates two or more of the optical chains of the camera device 200 to capture images at the same time. In other embodiments the same camera module may be used to capture images sequentially, with different images corresponding to sequential image capture time periods. For example, a first camera module may be used to sequentially capture first, second, third through Nth images rather than using N camera modules to capture N images at the same time. In accordance with the invention, images captured in parallel by different camera modules can be compared to determine whether a camera module has a dirty lens, e.g., a dirty transparent outer cover. Thus, in some embodiments the first and second optical chains of camera device 200 are operated at the same time to capture the first and second images, respectively, while in other embodiments one or more modules are operated to sequentially capture images, with one or more images of a camera module being compared to a previously captured image to detect a dirty lens condition. In some embodiments a comparison of images captured in parallel by different camera modules and a comparison of images captured by an individual module over time are both performed to increase the reliability of a dirty lens determination. In at least some synchronized embodiments the images captured by some, but not necessarily all, of the different optical chains correspond to the same or an overlapping time period. In other embodiments image capture is not synchronized but multiple ones of the captured images are captured during the same or an overlapping time period. In still other embodiments at least some images are captured sequentially, e.g., in rapid succession. Sequential image capture may be, and in some embodiments is, used for capturing images corresponding to different portions of a scene area.
In step 504 a first image, e.g., a first portion of the scene area of interest, is captured using a first lens of the camera, e.g., an outermost lens of a first optical chain of the camera. In some embodiments, the first optical chain, e.g., optical chain 205 of device 200 includes an image sensor 257 and the outermost lens 251 with the outermost lens 251 being the lens furthest from the image sensor along the direction of the path of the light. In some but not all embodiments, the first portion of the scene area of interest is the entire scene area of interest. Operation proceeds from step 504 to step 512. Step 512 is performed in some but not necessarily all embodiments. In some embodiments where step 512 is skipped operation proceeds directly to step 514.
In step 506 a second image, e.g., a second portion of the scene area of interest, is captured using a second lens of the camera. The second lens corresponds to a second optical chain of the camera device, e.g., the second outermost lens 263 of optical chain 2 207 of device 200, and is different from the first lens. In some, but not all, embodiments the second portion of the scene area of interest is the entire scene area of interest. In some, but not all embodiments, the second portion of the scene area of interest is not the entire scene area of interest but includes at least an overlapping region with the first portion of the scene area of interest. In some embodiments, the first and second images are of the entire scene area of interest. In some embodiments the first and second images are of different scenes but have an overlapping area of image capture. In some embodiments, the first and second images are overlapping images of a scene area. Operation proceeds from step 506 to step 512. Step 512 is performed in some but not necessarily all embodiments. In some embodiments where step 512 is skipped operation proceeds directly to step 514.
In step 508 a third image, e.g., a third portion of the scene area of interest, is captured using a third lens of the camera device. The camera device includes a third optical chain, e.g., optical chain 209, having a third lens, e.g., a third outermost lens 275, and a sensor 281. The third outermost lens 275 is separate from said first and second lenses. In some embodiments the first optical chain captures the first image using a first sensor, the second optical chain captures the second image using a second sensor and the third optical chain captures the third image using a third sensor. In some, but not all embodiments, the third portion of the scene area of interest is not the entire scene area of interest but includes at least an overlapping region with the first portion of the scene area of interest. In some embodiments, the first, second, and third images are of the entire scene area of interest. In some embodiments the first, second, and third images are of different scenes but have an overlapping area of image capture. Operation proceeds from step 508 to step 512. Step 512 is performed in some but not necessarily all embodiments. In some embodiments where step 512 is skipped operation proceeds directly to step 514.
In step 510 an Nth image, e.g., an Nth portion of the scene area of interest, is captured using an Nth lens of the camera device. The camera device includes an Nth optical chain having an Nth lens, e.g., an Nth outermost lens, and a sensor. The Nth outermost lens is separate from said first, second, and third lenses. In some embodiments the first optical chain captures the first image using a first sensor, the second optical chain captures the second image using a second sensor, the third optical chain captures the third image using a third sensor, and the Nth optical chain captures the Nth image using an Nth sensor. In some, but not all embodiments, the Nth portion of the scene area of interest is not the entire scene area of interest but includes at least an overlapping region with the first portion of the scene area of interest. In some embodiments, the first, second, third, and Nth images are of the entire scene area of interest. In some embodiments the first, second, third and Nth images are not of the entire scene area of interest but have one or more overlapping areas of image capture. Operation proceeds from step 510 to step 512. Step 512 is performed in some but not necessarily all embodiments. In some embodiments where step 512 is skipped operation proceeds directly to step 514.
In step 512, storage module 602 of assembly of modules 600 stores one or more of the captured images, e.g., the first, second, third or Nth images, in memory, e.g., data/information section 221 of memory 213, and/or the images are outputted by output module 622, e.g., to display 215. In some embodiments, the stored captured images are retrieved from memory, e.g., memory 213, when needed for use in additional processing steps of method 500. Operation proceeds from step 512 to step 514.
In step 514, dirty lens determination module 604 of assembly of modules 600 makes a determination as to whether or not a dirty camera lens condition exists based on at least the first captured image. If it is determined that a dirty camera lens condition exists then operation proceeds to steps 534 and 536 shown on
In step 544, the composite image generation module 620 of assembly of modules 600 generates a composite image by combining images, e.g., two or more of the first, second, third, or Nth images. Operation proceeds from step 544 to step 546.
In some embodiments, dirty lens determination step 514 includes one or more optional sub-steps 516, 522, 526, and 528.
In some embodiments, the step of determining if a dirty camera lens condition exists 514 includes sub-steps 516 and 526. In such embodiments, in sub-step 516 comparison module 608 performs a first comparison by comparing the first image or a characteristic of the first image to the second image or a characteristic of the second image. Operation proceeds from sub-step 516 to decision sub-step 526. In decision sub-step 526, decision module 611 of assembly of modules 600 makes a decision as to whether said dirty camera lens condition exists based on the first comparison e.g., based on the result of the first comparison. If the decision module 611 determines based on the first comparison that a dirty camera lens condition exists then operation proceeds to steps 534 and step 536 shown on
In some embodiments, sub-step 516 includes step 518 wherein the first comparison performed by the comparison module 608 is a comparison of a first image metric corresponding to the first image and a second image metric corresponding to the second image, wherein the first and second image metrics correspond to an overlapping image region. In some of such embodiments, decision sub-step 526 includes step 527 wherein making a decision as to whether said dirty camera lens condition exists based on the first comparison includes determining that a dirty camera lens condition exists when the first comparison indicates a difference in the first and second image metrics above a predetermined threshold. If the first comparison indicates that the difference in the first and second image metrics is not above the predetermined threshold, then it is determined that a dirty camera lens condition does not exist. In some embodiments the first image metric is a contrast measure of the overlapping image region in the first image and the second image metric is a contrast measure of the overlapping region in the second image.
In some embodiments step 518 of sub-step 516 includes step 520. In step 520, an average image contrast of the first image is compared to the average image contrast of the second image.
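One possible realization of the step 520 comparison is sketched below; the block-wise standard-deviation contrast measure and the threshold value are assumptions made for illustration, since the description does not fix a particular contrast definition.

```python
# A possible realization of the step-520 comparison (an assumption; the description
# does not fix a particular contrast definition): block-wise standard deviation is
# used as a local contrast measure and averaged over the overlapping region.
import numpy as np

def average_block_contrast(gray: np.ndarray, block: int = 16) -> float:
    """Average local contrast, computed as the standard deviation of each
    block x block tile of the (grayscale) image."""
    h, w = gray.shape
    gray = gray[: h - h % block, : w - w % block].astype(np.float64)
    tiles = gray.reshape(gray.shape[0] // block, block,
                         gray.shape[1] // block, block)
    return float(tiles.std(axis=(1, 3)).mean())

def contrast_difference_exceeds(img1, img2, threshold=5.0) -> bool:
    """Step-527-style test: flag a dirty lens when the average contrasts of the
    two images differ by more than a (placeholder) threshold."""
    return abs(average_block_contrast(img1) - average_block_contrast(img2)) > threshold
```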
In some embodiments, the step of determining if a dirty camera lens condition exists 514 includes sub-steps 516, 522, and 528. In such embodiments, sub-steps 516 and 522 may be performed sequentially or concurrently. The ordering in which the steps are performed is not important.
As previously described, in sub-step 516 comparison module 608 performs a first comparison by comparing the first image or a characteristic of the first image to the second image or a characteristic of the second image. Sub-step 516 may, and in some embodiments does, include step 518 as previously described. In turn step 518, may and in some embodiments does, include step 520. As described above, in step 520, an average image contrast of the first image is compared to the average image contrast of the second image.
In sub-step 522, comparison module 608 of assembly of modules 600 performs a second comparison by comparing the third image or a characteristic of the third image to another image or a characteristic of another image. In some embodiments, the another image is either the first image or the second image. In some embodiments, the another image is either the first image, the second image, or the Nth image. In some embodiments sub-step 522 includes step 524, wherein the second comparison includes comparing an average image contrast of the third image to the average image contrast of the first image or the average image contrast of the second image.
Operation proceeds from sub-steps 516 and 522 to decision sub-step 528. In decision sub-step 528, decision module 611 makes a decision as to whether said dirty camera lens condition exists based on the first comparison and the second comparison. If the decision module 611 determines based on the first comparison and the second comparison that a dirty camera lens condition exists then operation proceeds to steps 534 and step 536 shown on
In some embodiments, decision sub-step 528 includes step 530. In step 530 the decision module 611 determines that a dirty lens condition exists when either the first comparison or the second comparison indicates a predetermined condition with respect to a threshold, e.g., when the difference between the first and second image metrics is above a predetermined condition threshold. If neither the first comparison nor the second comparison indicates the predetermined condition with respect to the predetermined condition threshold, then the decision module 611 determines that a dirty lens condition does not exist. In some embodiments the predetermined condition threshold is the same as the predetermined threshold.
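The decision logic of steps 528 and 530 can be summarized as in the following short sketch; the threshold value and function name are illustrative assumptions.

```python
# Sketch of the steps 528/530 decision logic (threshold value and name are
# placeholder assumptions for illustration).
def dirty_lens_decision(first_comparison_diff: float,
                        second_comparison_diff: float,
                        threshold: float = 5.0) -> bool:
    """Declare a dirty lens condition when either comparison shows a metric
    difference above the predetermined condition threshold."""
    return (first_comparison_diff > threshold) or (second_comparison_diff > threshold)

# Example: the first comparison is below threshold, the second is above it,
# so a dirty lens condition is declared.
print(dirty_lens_decision(2.0, 7.5))  # True
```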
As previously described when it is determined or a decision is made in step 514 that a dirty lens condition exists operation proceeds to steps 534 and 536 shown on
In step 534 dirty lens notification module 614 notifies a user of the camera of the dirty lens condition, e.g., by generating a dirty lens condition notification, and/or initiates an automatic camera lens cleaning operation in response to determining that a dirty lens condition exists. In some embodiments, the camera includes an automatic cleaning mechanism that, once initiated, cleans the lenses of the camera. In some embodiments, the particular lens or lenses that are dirty are identified and a cleaning operation is initiated with respect to the identified one or more dirty lenses. In some embodiments, upon the identification of a dirty lens a camera cleaning module translates or otherwise moves the identified lens or an associated lens cover to clean the lens or to move the dirt or other contamination to a different area on the lens, for example, to a portion of the lens that has less effect or influence on the image or which is less important to the generation of a composite image. In some embodiments, one or more mirrors of the optical chain which contains the identified dirty lens are moved to reduce the effect of the dirt or contamination on the combined image which is generated from images captured using the dirty lens. For example, by moving the dirty lens, mirror or associated cover, the effects on the image may be minimized by having the affected area of the captured image be a portion of the image that is not used in generation of the composite image. In another example, by moving the dirty lens, mirror or associated cover, the effects on the image may be minimized by having the affected area of the captured image be a portion of the image that is not the object of interest being photographed but instead is a peripheral portion of the scene, such as a background portion that may be blurred or replaced with similar background imagery during the generation of the composite image, e.g., one portion of the image showing grass replacing a portion of the image showing a fingerprint and grass.
In some embodiments, generating a dirty lens condition notification to notify a user of the camera that a dirty lens condition exists includes providing an audio or visual indication of the dirty lens condition. A camera warning indicator light or a particular sound may be, and in some embodiments is, used to indicate a dirty lens condition and, optionally, which lens in particular is dirty.
The dirty lens notification module 614 in some embodiments provides an audio indication of the dirty lens condition by providing a tone, e.g., a beep, or a set of tones or beeps when a dirty lens condition is detected. The audio indication may be, and in some embodiments is, an audio message in one or more languages. In some such embodiments, the audio message may be an English language audio message stating that the “camera lens is dirty.” In another exemplary embodiment an audio message may be, and sometimes is, an English language message stating “camera lens X is dirty” or “camera lenses X and Y are dirty”, where X and Y identify lenses of the camera that have been detected as being dirty. In some embodiments, the audio message is a pre-recorded full length audio message stored in memory, e.g., memory 213. In some embodiments the audio message is generated from a set of pre-recorded audio fragments stored in memory such as “camera lens”, “camera lenses”, “one”, “two”, “three”, “four”, “is”, “are”, “dirty”. In some embodiments, the language of the pre-recorded audio message, e.g., English, French, Korean, Japanese, etc., is based upon a user selected language mode of the device, such as the device 200 being placed in an English mode of operation for display and audio messages. In some embodiments, the audio notification module 618 of dirty lens notification module 614 of assembly of modules 600 performs the operation of audibly notifying the user of a dirty lens condition.
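A sketch of how such a message might be assembled from pre-recorded fragments follows; the fragment names and the helper function are hypothetical and not part of the described device.

```python
# Illustrative sketch of assembling the spoken warning from pre-recorded fragments
# (fragment names and the helper are hypothetical, not part of the described device).
NUMBER_WORDS = {1: "one", 2: "two", 3: "three", 4: "four"}

def build_dirty_lens_message(dirty_lens_ids):
    """Return the ordered list of audio fragments to play, e.g.
    ["camera lenses", "one", "three", "are", "dirty"]."""
    if not dirty_lens_ids:
        return []
    fragments = ["camera lens"] if len(dirty_lens_ids) == 1 else ["camera lenses"]
    fragments += [NUMBER_WORDS[i] for i in dirty_lens_ids]
    fragments.append("is" if len(dirty_lens_ids) == 1 else "are")
    fragments.append("dirty")
    return fragments

if __name__ == "__main__":
    print(build_dirty_lens_message([2]))     # ['camera lens', 'two', 'is', 'dirty']
    print(build_dirty_lens_message([1, 3]))  # ['camera lenses', 'one', 'three', 'are', 'dirty']
```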
The dirty lens notification module 614 in some embodiments provides a visual indication that a dirty lens condition exists by illuminating one or more lights, e.g., LEDs, on the device including the dirty lens, e.g., the device 200. In some embodiments, a light may be, and in some embodiments is, placed in a flashing mode when a dirty lens condition is detected. In some embodiments, a light may be, and sometimes is, changed from a green color to a red color to indicate and notify the user of a dirty lens condition. In some embodiments, the device 200 includes a light corresponding to each lens of the camera, and when one or more dirty lenses are detected and identified, each light corresponding to an identified dirty lens is illuminated or placed in a state, such as flashing, to indicate that the corresponding lens is dirty. In some embodiments, a visual indication of a dirty lens condition includes a message presented on the display of the device in one or more languages, e.g., on display 215 of device 200. The language of the display message may be set in a manner similar to that described above in connection with the audio message that may be played to indicate a dirty lens condition. In some embodiments, a dirty lens condition may be indicated by presenting the user with a “Clean Lens X” message specifying to the user which of the plurality of lenses should be cleaned and thus making the user aware that the lens is dirty. Alternatively, a simple message saying “Dirty Lens Condition detected” or some other similar warning may be presented to the user, e.g., via a display on the camera. In some embodiments, the visual notification sub-module 616 of dirty lens notification module 614 of assembly of modules 600 performs the visual notification operation to alert the user of the dirty lens condition.
The dirty lens notification module 614 in some embodiments provides both a visual and audio indication that a dirty lens condition exists.
In some embodiments, in response to determining that a dirty lens condition exists, the method 500 includes the additional steps of periodically testing to determine if said determined dirty lens condition has been rectified, and, when said dirty lens condition has been rectified, ceasing said notification being provided to the user of said dirty lens condition, for example, turning off a light that was illuminated, stopping an audible alert such as a beep or a tone, removing a display message from the display, etc. In some embodiments, the method further includes, when said dirty lens condition has been rectified, providing an audio or visual indication to the user that said dirty lens condition has been rectified, for example, a different audible tone or set of tones or beeps than used to notify the user of a dirty lens condition, turning a light corresponding to a lens from one color to another, such as from red signifying a dirty lens to green indicating a clean lens, or an audio message played by the device or a visual message displayed on the display of the device, wherein the message states, for example, “Lens X has been cleaned”, where X is the lens that had been identified as dirty, or more generally, “Camera lenses clean”.
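A sketch of this periodic re-testing is given below; the poll interval and the notification callbacks are illustrative assumptions, not elements of the described device.

```python
# Sketch of the periodic re-check described above (poll interval, helper names and
# the notification callbacks are illustrative assumptions).
import time

def monitor_dirty_lens(check_condition, notify_dirty, notify_clean,
                       poll_seconds=5.0, max_checks=100):
    """Once a dirty lens is detected, notify the user, then periodically re-test;
    when the condition is rectified, cease the dirty-lens notification and
    confirm the lens is clean."""
    notify_dirty()
    for _ in range(max_checks):
        time.sleep(poll_seconds)
        if not check_condition():      # re-run the image-based determination
            notify_clean()             # e.g., turn light green, show "lens clean"
            return True
    return False                       # still dirty after max_checks polls
```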
In step 536, dirty lens determination module 604 associates dirty lens information with one or more images captured by a lens while the lens was determined to be dirty. Operation proceeds from step 536 to step 538.
In step 538, storage module 602 of assembly of modules 600 stores in memory, e.g., data/information section 221 of memory 213, information indicating which of the plurality of optical chains, e.g., first, second, third or Nth optical chain, includes a dirty lens. Operation proceeds from step 538 to step 540.
In step 540, processor 110 of device 100 or processor 211 of device 200 generates a composite image. In some embodiments, composite image generation module 620 performs step 540. In step 540 a composite image is generated by combining images, e.g., the first image and the second image, and treating an image indicated as corresponding to a dirty lens differently than an image corresponding to a clean lens. In some embodiments, step 540 includes step 542 which includes excluding the image corresponding to a dirty lens from use in generating the composite image, or giving the image corresponding to a dirty lens a lower influence in the composite image generation process than the image would be given if it did not correspond to a dirty lens. Operation proceeds from step 540 to step 546.
As previously discussed, in those instances in which the determination step 514 determines that a dirty lens condition does not exist, operation proceeds from step 514 to step 544 shown in
In step 544, the processor of the device, e.g., processor 211 of device 200 or processor 110 of device 100, or composite image generation module 620 of assembly of modules 600 generates a composite image by combining images, e.g., two or more of the first, second, third, or Nth images. Operation proceeds from step 544 to step 546.
In step 546, storage module 602 stores the generated composite image in memory, e.g., memory 213 of camera device 200 and/or output module 622 of assembly of module 600 outputs the composite image, e.g., to display 215 of camera device 200.
In some embodiments, the first comparison is performed by the first comparison module 610 included in the comparison module 608 of the assembly of modules. In some embodiments the second comparison is performed by the second comparison module 612 included in the comparison module 608 of the assembly of modules 600.
In some embodiments, the camera contains a cleaning mechanism or functionality to translate the lenses/mirrors/cover, of an optical chain containing a dirty outermost lens or cover to clean the dirt or other contamination or to move the dirt or other contamination to a less influential spot on the image, e.g. from an area of the image that is in focus to an area of the image that is less in focus.
The first comparison of the dirty lens detection method may, and in some embodiments does, use a process involving multi-view, multi-zoom and/or multi-focus images. In some embodiments a criterion or criteria are used in determining a dirty lens condition. In some such embodiments, the criterion is a combination of factors. For example, the first comparison may use one or more criteria indicating the presence of dirt. Such criteria in some embodiments include a difference in contrast of an area of the image being below a threshold, a difference in local intensity of an area of the image being above or below a threshold, the presence (above a threshold) of image spatial frequency components that do not conform with natural scene statistics, indicating superposition of dust shadows, repeated irregularity in depth estimates, etc.
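The following sketch combines two of the criteria listed above, a local intensity difference between overlapping views and a simple low-spatial-frequency heuristic for dust shadows; the thresholds and heuristics are illustrative assumptions, not the claimed tests.

```python
# A sketch combining two of the listed criteria (thresholds and the simple
# frequency heuristic are illustrative assumptions, not the claimed tests).
import numpy as np

def local_intensity_difference(img_a, img_b, block=32):
    """Maximum difference in mean intensity over corresponding blocks of two
    (assumed aligned) overlapping views."""
    a = img_a.astype(np.float64); b = img_b.astype(np.float64)
    h = (a.shape[0] // block) * block; w = (a.shape[1] // block) * block
    a = a[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    b = b[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return float(np.abs(a - b).max())

def low_frequency_shadow_score(img):
    """Fraction of spectral energy in very low (non-DC) spatial frequencies,
    where broad dust shadows tend to show up."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))) ** 2
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    low = spectrum[cy - 4:cy + 5, cx - 4:cx + 5].sum() - spectrum[cy, cx]
    return float(low / spectrum.sum())

def dirt_criteria_met(img_a, img_b, intensity_thresh=20.0, shadow_thresh=0.05):
    """Combine the two criteria: a large local intensity difference between
    overlapping views, or an unusually strong low-frequency component."""
    return (local_intensity_difference(img_a, img_b) > intensity_thresh or
            low_frequency_shadow_score(img_a) > shadow_thresh)
```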
The first and/or second comparisons may be, and in some embodiments are, made with captured images taken as a user changes zoom or object focus distance, as well as from multi-views. Comparison of multiple images from the same optical chain across changing zoom/pointing or auto-focus may be, and in some embodiments is, used to identify dirt on the outermost lens of the optical chain.
In another embodiment of a method of operating a camera in accordance with the present invention, the method includes: capturing a first image using a first lens of the camera; determining, based on at least the first image, if a dirty camera lens condition exists; and in response to determining that a dirty lens condition exists, generating a dirty lens condition notification or initiating an automatic camera lens cleaning operation. In some of such embodiments the step of determining if a dirty camera lens condition exists includes: performing a first comparison of said first image or a characteristic of said first image to a second image or a characteristic of said second image; and making a decision as to whether said dirty camera lens condition exists based on said first comparison. In some embodiments, the first and second images were captured at the same time using outer lenses corresponding to two different camera modules. In some embodiments, the first and second images were captured at different times using the first lens.
It should be appreciated that various features and/or steps of method 500 relate to improvements in cameras and/or image processing even though such devices may use general purpose processors and/or image sensors. While one or more steps of the method 500, e.g., the composite image generation step, have been discussed as being performed by a processor, e.g., processor 110, 211, it should be appreciated that one or more of the steps of the method 500 may be, and in some embodiments are, implemented by dedicated circuitry, e.g., ASICs, FPGAs and/or other application specific circuits which improve the efficiency, accuracy and/or operational capability of the imaging device performing the method. In some embodiments, dedicated hardware, e.g., circuitry, and/or the combination of dedicated hardware and software are utilized in implementing one or more steps of the method 500, thereby providing additional image processing efficiency, accuracy and/or operational capability to the imaging device, e.g., camera, implementing the method.
While a logical sequencing of the processing steps of the exemplary embodiments of the methods, routines and subroutines of the present invention have been shown, the sequencing is only exemplary and the ordering of the steps may be varied.
The techniques of various embodiments may be implemented using software, hardware and/or a combination of software and hardware. Various embodiments are directed to apparatus, e.g., a camera device. Various embodiments are also directed to methods, e.g., a method of processing images. Various embodiments are also directed to non-transitory machine, e.g., computer, readable medium, e.g., ROM, RAM, solid state storage, silicon storage disks, CDs, hard discs, etc., which include machine readable instructions for controlling a machine to implement one or more steps of a method.
Various features of the present invention are implemented using modules. For example each of the various routines and/or subroutines disclosed may be implemented in one or more modules. Such modules may be, and in some embodiments are, implemented as software modules. In other embodiments the modules are implemented in hardware. In still other embodiments the modules are implemented using a combination of software and hardware. A wide variety of embodiments are contemplated including some embodiments where different modules are implemented differently, e.g., some in hardware, some in software, and some using a combination of hardware and software. It should also be noted that routines and/or subroutines, or some of the steps performed by such routines, may be implemented in dedicated hardware as opposed to software executed on a general purpose processor. Such embodiments remain within the scope of the present invention. Many of the above described methods or method steps can be implemented using machine executable instructions, such as software, included in a machine readable medium such as a memory device, e.g., RAM, floppy disk, solid state storage device, silicon storage device, etc. to control a machine, e.g., general purpose computer with or without additional hardware, to implement all or portions of the above described methods. Accordingly, among other things, the present invention is directed to a machine readable medium including machine executable instructions for causing a machine, e.g., processor and associated hardware, to perform one or more of the steps of the above described method(s).
Numerous additional variations on the methods and apparatus of the various embodiments described above will be apparent to those skilled in the art in view of the above description. Such variations are to be considered within the scope of the invention.
The present application claims the benefit of the filing date of U.S. Provisional Application Ser. No. 62/021,097 filed on Jul. 4, 2014 which is hereby expressly incorporated by reference in its entirety.