System, method, and computer program for capturing an image with correct skin tone exposure

Information

  • Patent Grant
  • Patent Number
    11,699,219
  • Date Filed
    Monday, March 14, 2022
  • Date Issued
    Tuesday, July 11, 2023
Abstract
A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces are detected having threshold skin tone within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.
Description
FIELD OF THE INVENTION

The present invention relates to capturing an image, and more particularly to capturing an image with correct skin tone exposure.


BACKGROUND

Conventional photographic systems currently capture images according to scene level exposure settings, with one or more points of interest used to specify corresponding regions used to meter and/or focus the scene for capture by a photographic system. However, a certain region (or regions) may have overall intensity levels that are too dark or too light to provide sufficient contrast using conventional capture techniques, resulting in a poor quality capture of significant visual features within the region. Consequently, conventional photographic systems commonly fail to capture usable portrait images of individuals with very dark skin tone or very light skin tone because the subject's skin tone is at one extreme edge of the dynamic range for the photographic system.


There is thus a need for addressing these and/or other issues associated with the prior art.


SUMMARY

A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces are detected having threshold skin tone within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A illustrates a first exemplary method for capturing an image, in accordance with one possible embodiment.



FIG. 1B illustrates a second exemplary method for capturing an image, in accordance with one possible embodiment.



FIG. 1C illustrates an exemplary scene segmentation into a face region and non-face regions, in accordance with one possible embodiment.



FIG. 1D illustrates a face region mask of a scene, in accordance with one possible embodiment.



FIG. 1E illustrates a face region mask of a scene including a transition region, in accordance with one possible embodiment.



FIG. 2 illustrates an exemplary transition in mask value from a non-face region to a face region, in accordance with one possible embodiment.



FIG. 3A illustrates a digital photographic system, in accordance with an embodiment.



FIG. 3B illustrates a processor complex within the digital photographic system, according to one embodiment.



FIG. 3C illustrates a digital camera, in accordance with an embodiment.



FIG. 3D illustrates a wireless mobile device, in accordance with another embodiment.



FIG. 3E illustrates a camera module configured to sample an image, according to one embodiment.



FIG. 3F illustrates a camera module configured to sample an image, according to another embodiment.



FIG. 3G illustrates a camera module in communication with an application processor, in accordance with an embodiment.



FIG. 4 illustrates a network service system, in accordance with another embodiment.



FIG. 5 illustrates capturing an image with correct skin tone exposure, in accordance with another embodiment.



FIG. 6 illustrates capturing an image with correct skin tone exposure, in accordance with another embodiment.



FIG. 7 illustrates capturing an image with correct skin tone exposure, in accordance with another embodiment.



FIG. 8 illustrates a network architecture, in accordance with one possible embodiment.



FIG. 9 illustrates an exemplary system, in accordance with one embodiment.





DETAILED DESCRIPTION


FIG. 1A illustrates an exemplary method 100 for capturing an image, in accordance with one possible embodiment. Method 100 may be performed by any technically feasible digital photographic system (e.g., a digital camera or digital camera subsystem). In one embodiment, method 100 is performed by digital photographic system 300 of FIG. 3A.


At step 102, the digital photographic system detects one or more faces within a scene. It is to be appreciated that although the present description describes detecting one or more faces, one or more other body parts (e.g. arms, hands, legs, feet, chest, neck, etc.) may be detected and used within the context of method 100. Any technically feasible technique may be used to detect the one or more faces (or the one or more body parts). If, at step 104, at least one of the one or more faces (or the one or more body parts) has a threshold skin tone, then the method proceeds to step 108. In the context of the present description, skin tone refers to a shade of skin (e.g., human skin). For example, a skin tone may be light, medium, or dark, or a meld between light and medium, or medium and dark, according to a range of natural human skin colors.


If, at step 104, no face within the scene has a threshold skin tone, the method proceeds to step 106. In the context of the present description, a threshold skin tone is defined to be a dark skin tone below a defined low intensity threshold or a light skin tone above a defined high intensity threshold. For dark skin tones, an individual's face may appear to be highly underexposed, while for light skin tones, an individual's face may appear to be washed out and overexposed. Such thresholds may be determined according to any technically feasible technique, including quantitative techniques and/or techniques using subjective assessment of captured images from a given camera system or systems.


Additionally, a threshold skin tone may include a predefined shade of skin. For example, a threshold skin tone may refer to a skin tone of light shade, medium shade, or dark shade, or a percentage of light shade and/or medium shade and/or dark shade. Such threshold skin tone may be predefined by a user, by an application, an operating system, etc. Additionally, the threshold skin tone may function in a static manner (i.e. it does not change, etc.) or in a dynamic manner. For example, a threshold skin tone may be tied to a context of the capturing device (e.g. phone, camera, etc.) and/or of the environment. In this manner, a default threshold skin tone may be applied contingent upon specific contextual or environmental conditions (e.g. brightness is within a predetermined range, etc.), and if such contextual and/or environmental conditions change, the threshold skin tone may be modified accordingly. For example, a default threshold skin tone may be tied to a ‘normal’ condition of ambient lighting, but if the environment changes to bright sunlight outside, the threshold skin tone may be modified to account for the brighter environment.


A low threshold skin tone may be any technically feasible threshold for low-brightness appearance within a captured scene. In one embodiment, the low threshold skin tone is defined as a low average intensity (e.g., below 15% of an overall intensity range) for a region for a detected face. In another embodiment, the low threshold skin tone is defined as a low contrast for the region for the detected face. In yet another embodiment, the low threshold is defined as a low histogram median (e.g., 20% of the overall intensity range) for the region for the detected face. Similarly, a high threshold may be any technically feasible threshold for high-brightness appearance within a captured scene. In one embodiment, the high threshold is defined as a high average intensity (e.g., above 85% of an overall intensity range) for a region for a detected face. In another embodiment, the high threshold is defined as high intensity (bright) but low contrast for the region for the detected face. In yet another embodiment, the high threshold is defined as a high histogram median (e.g., 80% of the overall intensity range) for the region for the detected face.
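
For illustration only, the following Python/NumPy sketch shows one possible way to evaluate the histogram-median style thresholds described above for a detected face region; the function name, threshold fractions, and the use of a boolean region mask are illustrative assumptions rather than requirements of the method.

```python
import numpy as np

# Illustrative threshold fractions drawn from the examples above; actual
# values are implementation-specific design choices.
LOW_MEDIAN_FRACTION = 0.20   # e.g., 20% of the overall intensity range
HIGH_MEDIAN_FRACTION = 0.80  # e.g., 80% of the overall intensity range

def face_has_threshold_skin_tone(gray_image, face_mask, max_value=255.0):
    """Return True if the masked face region is very dark or very light.

    gray_image : 2-D array of pixel intensities.
    face_mask  : boolean array of the same shape, True inside the face region.
    """
    face_pixels = gray_image[face_mask]
    if face_pixels.size == 0:
        return False
    median_fraction = float(np.median(face_pixels)) / max_value
    return (median_fraction < LOW_MEDIAN_FRACTION or
            median_fraction > HIGH_MEDIAN_FRACTION)
```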


If, at step 106, the scene includes regions having collectively high dynamic range intensity, then the method proceeds to step 108. Otherwise, the method proceeds to step 110.


At step 108, the digital photographic system enables high dynamic range (HDR) capture. At step 110, the digital photographic system captures an image of the scene according to a capture mode. For example, if the capture mode specifies that HDR is enabled, then the digital photographic system captures an HDR image.



FIG. 1B illustrates an exemplary method 120 for capturing an image, in accordance with one possible embodiment. Method 120 may be performed by any technically feasible digital photographic system (e.g., a digital camera or digital camera subsystem). In one embodiment, method 120 is performed by digital photographic system 300 of FIG. 3A.


At step 122, the digital photographic system detects one or more faces within a scene having threshold skin tone, as described herein. Of course, it is to be appreciated that method 120 may be applied additionally to one or more other body parts (e.g. arm, neck, chest, leg, hand, etc.).


At step 124, the digital photographic system segments the scene into one or more face region(s) and one or more non-face region(s). Any technically feasible techniques may be implemented to provide scene segmentation, including techniques that surmise coverage for a segment/region based on appearance, as well as techniques that also include a depth image (z-map) captured in conjunction with a visual image. In an alternative embodiment, step 124 may include edge detection between one part (e.g. head, etc.) and a second part (e.g. neck, etc.). In certain embodiments, machine learning techniques (e.g., a neural network classifier) may be used to detect image pixels that are part of a face region(s), or skin associated with other body parts.
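
As a non-limiting illustration, the sketch below builds a face region mask from rectangular face detections using OpenCV's bundled Haar cascade; the rectangular mask is a stand-in for a true pixel-accurate segmentation (such as the neural-network classifier mentioned above), and the detector choice and parameters are assumptions.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available (opencv-python)

def face_region_mask(image_bgr,
                     cascade_file="haarcascade_frontalface_default.xml"):
    """Build a mask that is 1.0 inside detected face regions and 0.0 elsewhere.

    Rectangular detections stand in for a pixel-accurate segmentation
    (e.g., a neural-network classifier or depth-assisted segmentation).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(cv2.data.haarcascades + cascade_file)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    mask = np.zeros(gray.shape, dtype=np.float32)
    for (x, y, w, h) in faces:
        mask[y:y + h, x:x + w] = 1.0   # face region; everything else is non-face
    return mask
```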


At step 126, the one or more images of the scene are captured. The camera module and/or digital photographic system may be used to capture such one or more images of the scene. In one embodiment, the digital photographic system may capture a single, high dynamic range image. For example, the digital photographic system may capture a single image, which may have a dynamic range of fourteen or more bits per color channel per pixel. In another embodiment, the digital photographic system captures two or more images, each of which may provide a relatively high dynamic range (e.g., twelve or more bits per color channel per pixel) or a dynamic range of less than twelve bits per color channel per pixel. The two or more images are exposed to capture detail of at least the face region(s) and the non-face region(s). For example, a first of the two or more images may be exposed so that the median intensity of the face region(s) defines the mid-point intensity of the first image. Furthermore, a second of the two or more images may be exposed so that the median intensity of the non-face region(s) defines a mid-point intensity of the second image.
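
The exposure targeting described above, metering so that a region's median intensity lands at the image mid-point, can be sketched as follows; the EV computation, the 0.5 mid-point, and the assumption of linear, normalized preview intensities are illustrative choices rather than part of the disclosed method.

```python
import numpy as np

def exposure_shift_for_midpoint(linear_preview, region_mask, midpoint=0.5):
    """Return an exposure adjustment, in EV stops, that would move the median
    intensity of the masked region to the image mid-point.

    Assumes linear (not gamma-encoded) intensities normalized to [0, 1].
    """
    region = linear_preview[region_mask > 0.5]
    median = max(float(np.median(region)), 1e-6)   # guard against log2(0)
    return float(np.log2(midpoint / median))

# Metering each capture separately, per the example above:
# ev_face     = exposure_shift_for_midpoint(preview, face_mask)
# ev_non_face = exposure_shift_for_midpoint(preview, 1.0 - face_mask)
```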


At step 128, the digital photographic system processes the one or more face regions to generate a final image. In one embodiment, to process the one or more face regions, the digital photographic system applies a high degree of HDR effect to final image pixels within the face region(s). In certain embodiments, a degree of HDR effect is tapered down for pixels along a path leading from an outside boundary of a given face region through a transition region, to a boundary of a surrounding non-face region. The transition region may have an arbitrary thickness (e.g., one pixel to many pixels). In one embodiment, the degree of HDR effect is proportional to a strength coefficient, as defined in co-pending U.S. patent application Ser. No. 14/823,993, now U.S. Pat. No. 9,918,017, filed Aug. 11, 2015, entitled “IMAGE SENSOR APPARATUS AND METHOD FOR OBTAINING MULTIPLE EXPOSURES WITH ZERO INTERFRAME TIME,” which is incorporated herein by reference for all purposes. In other embodiments, other HDR techniques may be implemented, with the degree of HDR effect defined according to the particular technique. For example, a basic alpha blend may be used to blend between a conventionally exposed (ev 0) image and an HDR image, with a degree of zero HDR effect for non-face region pixels, a degree of one for face region pixels, and a gradual transition (see FIG. 2) between one and zero for pixels within a transition region. In general, applying an HDR effect to pixels within a face region associated with an individual with a dark skin tone provides greater contrast at lower light levels and remaps the darker skin tones closer to an image intensity mid-point. Applying the HDR effect to pixels within the face region can provide greater contrast, thereby providing greater visual detail. Certain HDR techniques implement tone (intensity) mapping. In one embodiment, conventional HDR tone mapping is modified to provide greater range to pixels within the face region. For example, when capturing an image of an individual with dark skin tone, a darker captured intensity range may be mapped by the modified tone mapping to have a greater output range (final image) for pixels within the face region, while a conventional mapping is applied for pixels within the non-face region. In one embodiment, an HDR pixel stream (with correct tone mapping) may be created, as described in U.S. patent application Ser. No. 14/536,524, now U.S. Pat. No. 9,160,936, entitled “SYSTEMS AND METHODS FOR GENERATING A HIGH-DYNAMIC RANGE (HDR) PIXEL STREAM,” filed Nov. 7, 2014, which is hereby incorporated by reference for all purposes. Additionally, a video stream (with correct tone mapping) may be generated by applying the methods described herein.
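
The basic alpha blend mentioned above can be sketched as shown below; the mask convention (zero in non-face regions, one in face regions, intermediate values in the transition region) follows the description, while the function and argument names are illustrative assumptions.

```python
import numpy as np

def blend_hdr_with_ev0(ev0_image, hdr_image, hdr_degree_mask):
    """Alpha-blend a conventionally exposed (EV 0) image with an HDR image.

    hdr_degree_mask holds the per-pixel degree of HDR effect: 0.0 for
    non-face pixels, 1.0 for face pixels, and intermediate values in the
    transition region (see FIG. 2).
    """
    alpha = hdr_degree_mask[..., np.newaxis]   # broadcast across color channels
    return (1.0 - alpha) * ev0_image + alpha * hdr_image
```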


In another embodiment, to process the one or more images, the digital photographic system may perform a local equalization on pixels within the face region (or the selected body region). The local equalization may be applied with varying degrees within the transition region. In one embodiment, local equalization techniques, including contrast limited adaptive histogram equalization (CLAHE), may be applied separately or in combination with an HDR technique. In such embodiments, one or more images may be captured according to method 120, or one or more images may be captured according to any other technically feasible image capture technique.
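
As one possible illustration, CLAHE is available directly in OpenCV; the sketch below applies it to a frame and then restricts the equalized result to the face region, with the clip limit and tile size chosen arbitrarily.

```python
import numpy as np
import cv2

def equalize_face_region(gray_8bit, face_mask, clip_limit=2.0, tile=(8, 8)):
    """Apply CLAHE to the frame, then keep the equalized result only where the
    face mask is set; fractional mask values taper the effect in the
    transition region."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    equalized = clahe.apply(gray_8bit)            # expects 8-bit (or 16-bit) input
    alpha = face_mask.astype(np.float32)
    blended = (1.0 - alpha) * gray_8bit + alpha * equalized
    return np.clip(blended, 0, 255).astype(gray_8bit.dtype)
```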


In certain embodiments, a depth map image and associated visual image(s) may be used to construct a model of one or more individuals within a scene. One or more texture maps may be generated from the visual image(s). For example, the depth map may be used, in part, to construct a three-dimensional (3D) model of an individual's face (photographic subject), while the visual image(s) may provide a surface texture for the 3D model. In one embodiment, a surface texture may include colors and/or features (e.g. moles, cuts, scars, freckles, facial hair, etc.). The surface texture may be modified to provide an average intensity that is closer to an image mid-point intensity, while preserving skin color and individually-unique skin texture (e.g. moles, cuts, scars, freckles, facial hair, etc.). The 3D model may then be rendered to generate a final image. The rendered image may include surmised natural scene lighting, natural scene lighting in combination with added synthetic illumination sources in the rendering process, or a combination thereof. For example, a soft side light may be added to provide depth cues from highlights and shadows on the individual's face. Furthermore, a gradient light may be added in the rendering process to provide additional highlights and shadows.
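
A minimal sketch of recentering a texture map's average intensity toward the image mid-point, as described above, might look like the following; the target value and gain clamp are illustrative assumptions.

```python
import numpy as np

def recenter_texture_intensity(texture_rgb, target=0.5, max_gain=4.0):
    """Scale a [0, 1] RGB texture map so its mean luminance moves toward the
    image mid-point while preserving relative color (skin tone) and local
    detail such as moles, scars, and freckles."""
    mean_luminance = max(float(texture_rgb.mean()), 1e-6)
    gain = float(np.clip(target / mean_luminance, 1.0 / max_gain, max_gain))
    return np.clip(texture_rgb * gain, 0.0, 1.0)
```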


In certain other embodiments, techniques disclosed herein for processing face regions may be implemented as post-processing rather than in conjunction with image capture.


More illustrative information will now be set forth regarding various optional architectures and uses in which the foregoing method may or may not be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the exclusion of other features described.



FIG. 1C illustrates an exemplary scene segmentation into face region(s) 142 and non-face region(s) 144, in accordance with one possible embodiment. As shown, an image 140 is segmented into a face region 142 and a non-face region 144. Any technically feasible technique may be used to perform the scene segmentation. The technique may operate solely on visual image information, depth map information, or a combination thereof. As an option, FIG. 1C may be implemented in the context of any of the other figures, as described herein. In particular, FIG. 1C may be implemented within the context of steps 122-128 of FIG. 1B.


In another embodiment, a selected body-part region may be distinguished and separately identified from a non-selected body-part region. For example, a hand may be distinguished from the surroundings, an arm from a torso, a foot from a leg, etc.



FIG. 1D illustrates a face region mask 141 of a scene, in accordance with one possible embodiment. In one embodiment, a pixel value within face region mask 141 is set to a value of one (1.0) if a corresponding pixel location within image 140 is within face region 142, and a pixel value within face region mask 141 is set to a value of zero (0.0) if a corresponding pixel location within image 140 is outside face region 142. In one embodiment, a substantially complete face region mask 141 is generated and stored in memory. In another embodiment, individual mask elements are computed prior to use, without storing a complete face region mask 141 in memory. As an option, FIG. 1D may be implemented in the context of any of the other figures, as described herein. In particular, FIG. 1D may be implemented within the context of steps 122-128 of FIG. 1B, or within the context of FIG. 1C.



FIG. 1E illustrates a face region mask 141 of a scene including a transition region 146, in accordance with one possible embodiment. As shown, transition region 146 is disposed between face region 142 and non-face region 144. Mask values within face region mask 141 increase from zero to one along path 148 from non-face region 144 to face region 142. A gradient of increasing mask values from non-face region 144 to face region 142 is indicated along path 148. For example, a mask value may increase from a value of zero (0.0) in non-face region 144 to a value of one (1.0) in face region 142 along path 148. Any technically feasible technique may be used to generate the gradient. As an option, FIG. 1E may be implemented in the context of any of the other figures, as described herein. In particular, FIG. 1E may be implemented within the context of steps 122-128 of FIG. 1B, or within the context of FIGS. 1C-1D.



FIG. 2 illustrates an exemplary transition in mask value from a non-face region (e.g., non-face region 144) to a face region (e.g., face region 142), in accordance with one possible embodiment. As an option, FIG. 2 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, FIG. 2 may be implemented in any desired environment.


As shown, a face region mask pixel value 202 (e.g., mask value at a given pixel within face region mask 141) increases from non-face region 201A pixels to face region 201C pixels along path 200, which starts out in a non-face region 201A, traverses through a transition region 201B, and continues into a face region 201C. For example, path 200 may correspond to at least a portion of path 148 of FIG. 1E. Of course, it is to be appreciated that FIG. 2 may be implemented with respect to other and/or multiple body regions. For example, FIG. 2 may be implemented to indicate mask values that include face regions and neck regions. Furthermore, FIG. 2 may be implemented to indicate mask values that include any body part regions with skin that is visible.
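
One simple way to realize such a mask gradient, offered here only as an illustration, is to feather the binary face region mask with a Gaussian blur; any technically feasible technique may be used, and the kernel size below is an arbitrary choice.

```python
import cv2
import numpy as np

def feather_face_mask(binary_mask, transition_px=31):
    """Convert a hard 0/1 face region mask into one with a smooth 0-to-1
    transition of roughly transition_px pixels around each face boundary,
    approximating the gradient along path 200."""
    k = int(transition_px) | 1                 # Gaussian kernel size must be odd
    soft = cv2.GaussianBlur(binary_mask.astype(np.float32), (k, k), 0)
    return np.clip(soft, 0.0, 1.0)
```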


In other embodiments, a depth map may be used in conjunction with, or to determine, a face region. Additionally, contrast may be modified (e.g., using CLAHE or any similar technique) to determine a skin tone. In one embodiment, if the skin tone is lighter (or darker), additional contrast may be added (or removed). In other embodiments, if the contours of a face are known (e.g., from 3D mapping of the face), lighting, shadowing, and/or other lighting effects may be added or modified in one or more post-processing operations. Further, voxelization (and/or 3D mapping of 2D images) or other spatial data (e.g., a surface mesh of a subject constructed from depth map data) may be used to model a face and/or additional body parts, and otherwise determine depth values associated with an image.


In one embodiment, depth map information may be obtained from a digital camera (e.g. based on parallax calculated from more than one lens perspective, more than one image of the same scene from different angles and/or zoom levels, near-simultaneous capture of the image, dual pixel/focus pixel phase detection, etc.). Additionally, depth map information may also be obtained (e.g., from a depth map sensor). As an example, if a face is found in an image, and a depth map of the image is used to model the face, then synthetic lighting (e.g., a lighting gradient) could be added to the face to modify lighting conditions on the face (e.g., in real-time or in post-processing). Further, a texture map (sampled from the face in the image) may be used in conjunction with the depth map to generate a 3D model of the face. In this manner, not only can synthetic lighting be applied with correct perspective on an arbitrary face, but the lighting color may also be correct for the ambient conditions and skin tone on the face (or whatever skin section is shown), according to a measured color balance for ambient lighting. Any technically feasible technique may be implemented for measuring color balance of ambient lighting, including identifying illumination sources in a scene and estimating color balance for one or more of the illumination sources. Alternatively, color balance for illumination on a subject's face may be determined based on matching sampled color to a known set of human skin tones.
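
As an illustrative sketch of the skin-tone-matching alternative, the code below estimates white-balance gains by comparing a sampled face color to the nearest entry in a small reference set; the reference chromaticities are placeholders rather than calibrated data, and the function name is hypothetical.

```python
import numpy as np

# Placeholder chromaticities (R/G, B/G) for a few reference skin tones; a real
# system would use a calibrated set of known human skin tones.
REFERENCE_SKIN_RG_BG = np.array([
    [1.40, 0.80],
    [1.30, 0.85],
    [1.20, 0.90],
])

def white_balance_gains_from_face(face_rgb_mean):
    """Estimate per-channel (R, G, B) gains so the sampled face color matches
    the nearest reference skin-tone chromaticity."""
    r, g, b = (float(v) for v in face_rgb_mean)
    sample = np.array([r / g, b / g])
    nearest = REFERENCE_SKIN_RG_BG[
        np.argmin(np.linalg.norm(REFERENCE_SKIN_RG_BG - sample, axis=1))]
    return np.array([nearest[0] / sample[0], 1.0, nearest[1] / sample[1]])
```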


In one embodiment, a texture map may be created from the face in the image. Furthermore, contrast across the texture map may be selectively modified to correct for skin tone. For example, a scene(s) may be segmented to include regions of subject skin (e.g., face) and regions that are not skin. Additionally, the scene(s) may include other skin body parts (e.g. arm, neck, etc.). In such an example, all exposed skin may be included in the texture map (either as separate texture maps or one inclusive texture map) and corrected (e.g., equalized, tone mapped, etc.) together. The corrected texture map is then applied to a 3D model of the face and any visible skin associated with visible body parts. The 3D model may then be rendered in place in a scene to generate an image of the face and any other body parts of an individual in the scene. By performing contrast correction/adjustment on visible skin of the same individual, the generated image may appear to be more natural overall because consistent skin tone is preserved for the individual.


In certain scenarios, non-contiguous regions of skin may be corrected separately. As an example, a news broadcaster may have light projected onto their face, while their neck, hands, or arms may be in shadow. Such a face may therefore be very light, whereas the neck, hands, and arms may all be of a separate and different hue and light intensity. As such, the scene may be segmented into several physical regions having different hue and light intensity. Each region may therefore be corrected in context for a more natural overall appearance.


In one embodiment, an object classifier and/or recognition techniques (e.g. machine learning, etc.) may be used to detect a body part (hand, arm, face, leg) and associate all body parts with exposed skin such that contrast correcting (according to the texture map) may be applied to the detected body part or parts. In one embodiment, a neural-network classification engine is configured to identify individual pixels as being affiliated with exposed skin of a body part. Pixels that are identified as being exposed skin may be aggregated into segments (regions), and the regions may be corrected (e.g., equalized, tone-mapped, etc.).
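
The following toy sketch illustrates per-pixel skin classification; a logistic-regression model (scikit-learn, assumed available) stands in for the neural-network classification engine described above, and the feature choice (raw RGB values) is a simplification for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for a neural network

def train_skin_pixel_classifier(rgb_samples, is_skin):
    """rgb_samples: (N, 3) training pixels; is_skin: (N,) labels of 0 or 1."""
    model = LogisticRegression(max_iter=1000)
    model.fit(rgb_samples, is_skin)
    return model

def exposed_skin_mask(model, image_rgb, threshold=0.5):
    """Classify every pixel of an H x W x 3 image as exposed skin or not."""
    h, w, _ = image_rgb.shape
    skin_probability = model.predict_proba(image_rgb.reshape(-1, 3))[:, 1]
    return (skin_probability.reshape(h, w) > threshold).astype(np.float32)
```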


In one embodiment, a hierarchy of the scene may be built and a classification engine may be used to segment the scene. An associated texture map(s) may be generated from a scene segment(s). The texture map(s) may be corrected, rendered in place, and applied to such hierarchy. In another embodiment, the hierarchy may include extracting exposure values for each skin-exposed body part, and correlating the exposure values with a correction value based on the texture map.


In some embodiments, skin tone may differ based on the determined body part. For example, facial skin tone may differ from that of the hand, arm, etc. In such an embodiment, a 3D model, including the texture map and the depth map, may be generated and rendered to separately correct and/or equalize pixels for one or more of each different body part. In one embodiment, a reference tone (e.g., one of a number of discrete, known human skin tones) may be used as a basis for correction (e.g., equalization, tone mapping, hue adjustment) of pixels within an image that are affiliated with exposed skin. In other embodiments, correction/skin tone may be separately applied to different, visually non-contiguous body parts.



FIG. 3A illustrates a digital photographic system 300, in accordance with one embodiment. As an option, the digital photographic system 300 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the digital photographic system 300 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


As shown, the digital photographic system 300 may include a processor complex 310 coupled to a camera module 330 via an interconnect 334. In one embodiment, the processor complex 310 is coupled to a strobe unit 336. The digital photographic system 300 may also include, without limitation, a display unit 312, a set of input/output devices 314, non-volatile memory 316, volatile memory 318, a wireless unit 340, and sensor devices 342, each coupled to the processor complex 310. In one embodiment, a power management subsystem 320 is configured to generate appropriate power supply voltages for each electrical load element within the digital photographic system 300. A battery 322 may be configured to supply electrical energy to the power management subsystem 320. The battery 322 may implement any technically feasible energy storage system, including primary or rechargeable battery technologies. Of course, in other embodiments, additional or fewer features, units, devices, sensors, or subsystems may be included in the system.


In one embodiment, a strobe unit 336 may be integrated into the digital photographic system 300 and configured to provide strobe illumination 350 during an image sample event performed by the digital photographic system 300. In another embodiment, a strobe unit 336 may be implemented as an independent device from the digital photographic system 300 and configured to provide strobe illumination 350 during an image sample event performed by the digital photographic system 300. The strobe unit 336 may comprise one or more LED devices, a gas-discharge illuminator (e.g. a Xenon strobe device, a Xenon flash lamp, etc.), or any other technically feasible illumination device. In certain embodiments, two or more strobe units are configured to synchronously generate strobe illumination in conjunction with sampling an image. In one embodiment, the strobe unit 336 is controlled through a strobe control signal 338 to either emit the strobe illumination 350 or not emit the strobe illumination 350. The strobe control signal 338 may be implemented using any technically feasible signal transmission protocol. The strobe control signal 338 may indicate a strobe parameter (e.g. strobe intensity, strobe color, strobe time, etc.), for directing the strobe unit 336 to generate a specified intensity and/or color of the strobe illumination 350. The strobe control signal 338 may be generated by the processor complex 310, the camera module 330, or by any other technically feasible combination thereof. In one embodiment, the strobe control signal 338 is generated by a camera interface unit within the processor complex 310 and transmitted to both the strobe unit 336 and the camera module 330 via the interconnect 334. In another embodiment, the strobe control signal 338 is generated by the camera module 330 and transmitted to the strobe unit 336 via the interconnect 334.


Optical scene information 352, which may include at least a portion of the strobe illumination 350 reflected from objects in the photographic scene, is focused as an optical image onto an image sensor 332 within the camera module 330. The image sensor 332 generates an electronic representation of the optical image. The electronic representation comprises spatial color intensity information, which may include different color intensity samples (e.g. red, green, and blue light, etc.). In other embodiments, the spatial color intensity information may also include samples for white light. The electronic representation is transmitted to the processor complex 310 via the interconnect 334, which may implement any technically feasible signal transmission protocol.


In one embodiment, input/output devices 314 may include, without limitation, a capacitive touch input surface, a resistive tablet input surface, one or more buttons, one or more knobs, light-emitting devices, light detecting devices, sound emitting devices, sound detecting devices, or any other technically feasible device for receiving user input and converting the input to electrical signals, or converting electrical signals into a physical signal. In one embodiment, the input/output devices 314 include a capacitive touch input surface coupled to a display unit 312. A touch entry display system may include the display unit 312 and a capacitive touch input surface, also coupled to processor complex 310.


Additionally, in other embodiments, non-volatile (NV) memory 316 is configured to store data when power is interrupted. In one embodiment, the NV memory 316 comprises one or more flash memory devices (e.g. ROM, PCM, FeRAM, FRAM, PRAM, MRAM, NRAM, etc.). The NV memory 316 comprises a non-transitory computer-readable medium, which may be configured to include programming instructions for execution by one or more processing units within the processor complex 310. The programming instructions may implement, without limitation, an operating system (OS), UI software modules, image processing and storage software modules, one or more input/output devices 314 connected to the processor complex 310, one or more software modules for sampling an image stack through camera module 330, one or more software modules for presenting the image stack or one or more synthetic images generated from the image stack through the display unit 312. As an example, in one embodiment, the programming instructions may also implement one or more software modules for merging images or portions of images within the image stack, aligning at least portions of each image within the image stack, or a combination thereof. In another embodiment, the processor complex 310 may be configured to execute the programming instructions, which may implement one or more software modules operable to create a high dynamic range (HDR) image.


Still yet, in one embodiment, one or more memory devices comprising the NV memory 316 may be packaged as a module configured to be installed or removed by a user. In one embodiment, volatile memory 318 comprises dynamic random access memory (DRAM) configured to temporarily store programming instructions, image data such as data associated with an image stack, and the like, accessed during the course of normal operation of the digital photographic system 300. Of course, the volatile memory may be used in any manner and in association with any other input/output device 314 or sensor device 342 attached to the processor complex 310.


In one embodiment, sensor devices 342 may include, without limitation, one or more of an accelerometer to detect motion and/or orientation, an electronic gyroscope to detect motion and/or orientation, a magnetic flux detector to detect orientation, a global positioning system (GPS) module to detect geographic position, or any combination thereof. Of course, other sensors, including but not limited to a motion detection sensor, a proximity sensor, an RGB light sensor, a gesture sensor, a 3-D input image sensor, a pressure sensor, and an indoor position sensor, may be integrated as sensor devices. In one embodiment, the sensor devices may be one example of input/output devices 314.


Wireless unit 340 may include one or more digital radios configured to send and receive digital data. In particular, the wireless unit 340 may implement wireless standards (e.g. WiFi, Bluetooth, NFC, etc.), and may implement digital cellular telephony standards for data communication (e.g. CDMA, 3G, 4G, 5G, LTE, LTE-Advanced, etc.). Of course, any wireless standard or digital cellular telephony standards may be used.


In one embodiment, the digital photographic system 300 is configured to transmit one or more digital photographs to a network-based (online) or “cloud-based” photographic media service via the wireless unit 340. The one or more digital photographs may reside within the NV memory 316, the volatile memory 318, or any other memory device associated with the processor complex 310. In one embodiment, a user may possess credentials to access an online photographic media service and to transmit one or more digital photographs for storage to, retrieval from, and presentation by the online photographic media service. The credentials may be stored or generated within the digital photographic system 300 prior to transmission of the digital photographs. The online photographic media service may comprise a social networking service, photograph sharing service, or any other network-based service that provides storage of digital photographs, processing of digital photographs, transmission of digital photographs, sharing of digital photographs, or any combination thereof. In certain embodiments, one or more digital photographs are generated by the online photographic media service based on image data (e.g. image stack, HDR image stack, image package, etc.) transmitted to servers associated with the online photographic media service. In such embodiments, a user may upload one or more source images from the digital photographic system 300 for processing and/or storage by the online photographic media service.


In one embodiment, the digital photographic system 300 comprises at least one instance of a camera module 330. In another embodiment, the digital photographic system 300 comprises a plurality of camera modules 330. Such an embodiment may also include at least one strobe unit 336 configured to illuminate a photographic scene, sampled as multiple views by the plurality of camera modules 330. The plurality of camera modules 330 may be configured to sample a wide angle view (e.g., greater than forty-five degrees of sweep among cameras) to generate a panoramic photograph. In one embodiment, a plurality of camera modules 330 may be configured to sample two or more narrow angle views (e.g., less than forty-five degrees of sweep among cameras) to generate a stereoscopic photograph. In other embodiments, a plurality of camera modules 330 may be configured to generate a 3-D image or to otherwise display a depth perspective (e.g. a z-component, etc.) as shown on the display unit 312 or any other display device. In still other embodiments, two or more different camera modules 330 are configured to have different optical properties, such as different optical zoom levels. In one embodiment, a first camera module 330 is configured to sense intensity at each pixel, while a second camera module 330 is configured to sense color at each pixel. In such an embodiment, pixel intensity information from the first camera module and pixel color information from the second camera module may be fused together to generate an output image. In one embodiment, a first camera module 330 with a higher zoom factor is configured to capture a central image, while at least one camera module 330 with a wider zoom factor is configured to capture a wider image; the central image and the wider image are then fused together to generate a visual image, while parallax between the central image and the wider image is used to estimate a depth image (depth map). The visual image and the depth map may be used to generate a corrected portrait image according to the techniques disclosed herein.


In one embodiment, a display unit 312 may be configured to display a two-dimensional array of pixels to form an image for display. The display unit 312 may comprise a liquid-crystal (LCD) display, a light-emitting diode (LED) display, an organic LED display, or any other technically feasible type of display. In certain embodiments, the display unit 312 may only be able to display a narrower dynamic range of image intensity values than a complete range of intensity values sampled from a photographic scene, such as the dynamic range of a single HDR image, the total dynamic range sampled over a set of two or more images comprising a multiple exposure (e.g., an HDR image stack), or an image and/or image set captured to combine ambient illumination and strobe illumination (e.g., strobe illumination 350). In one embodiment, images comprising an image stack may be merged according to any technically feasible HDR blending technique to generate a synthetic image for display within dynamic range constraints of the display unit 312. In one embodiment, the limited dynamic range of display unit 312 may specify an eight-bit per color channel binary representation of corresponding color intensities. In other embodiments, the limited dynamic range may specify more than eight-bits (e.g., 10 bits, 12 bits, or 14 bits, etc.) per color channel binary representation.



FIG. 3B illustrates a processor complex 310 within the digital photographic system 300 of FIG. 3A, in accordance with one embodiment. As an option, the processor complex 310 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the processor complex 310 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


As shown, the processor complex 310 includes a processor subsystem 360 and may include a memory subsystem 362. In one embodiment, processor complex 310 may comprise a system on a chip (SoC) device that implements processor subsystem 360, and memory subsystem 362 comprises one or more DRAM devices coupled to the processor subsystem 360. In another embodiment, the processor complex 310 may comprise a multi-chip module (MCM) encapsulating the SoC device and the one or more DRAM devices comprising the memory subsystem 362.


The processor subsystem 360 may include, without limitation, one or more central processing unit (CPU) cores 370, a memory interface 380, input/output interfaces unit 384, and a display interface unit 382, each coupled to an interconnect 374. The one or more CPU cores 370 may be configured to execute instructions residing within the memory subsystem 362, volatile memory 318, NV memory 316, or any combination thereof. Each of the one or more CPU cores 370 may be configured to retrieve and store data through interconnect 374 and the memory interface 380. In one embodiment, each of the one or more CPU cores 370 may include a data cache, and an instruction cache. Additionally, two or more of the CPU cores 370 may share a data cache, an instruction cache, or any combination thereof. In one embodiment, a cache hierarchy is implemented to provide each CPU core 370 with a private cache layer, and a shared cache layer.


In some embodiments, processor subsystem 360 may include one or more graphics processing unit (GPU) cores 372. Each GPU core 372 may comprise a plurality of multi-threaded execution units that may be programmed to implement, without limitation, graphics acceleration functions. In various embodiments, the GPU cores 372 may be configured to execute multiple thread programs according to well-known standards (e.g. OpenGL™, WebGL™, OpenCL™, CUDA™, etc.), and/or any other programmable rendering graphic standard. In certain embodiments, at least one GPU core 372 implements at least a portion of a motion estimation function, such as a well-known Harris detector or a well-known Hessian-Laplace detector. Such a motion estimation function may be used at least in part to align images or portions of images within an image stack. For example, in one embodiment, an HDR image may be compiled based on an image stack, where two or more images are first aligned prior to compiling the HDR image. In another example, the motion estimation function may be used to stabilize video frames, either during real-time recording/previews or post-capture.
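
As an illustrative sketch of such alignment, assuming 8-bit grayscale frames and OpenCV, corner features from a reference frame may be tracked into a second frame and used to estimate a partial affine transform; the specific function choices and parameters below are assumptions, not the disclosed implementation.

```python
import cv2
import numpy as np

def align_to_reference(reference_gray, moving_gray):
    """Estimate a partial-affine transform that registers moving_gray onto
    reference_gray using Harris corners and sparse optical flow, then warp."""
    corners = cv2.goodFeaturesToTrack(reference_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8,
                                      useHarrisDetector=True)
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(reference_gray, moving_gray,
                                                  corners, None)
    good = status.ravel() == 1
    matrix, _ = cv2.estimateAffinePartial2D(tracked[good], corners[good])
    h, w = reference_gray.shape
    return cv2.warpAffine(moving_gray, matrix, (w, h))
```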


As shown, the interconnect 374 is configured to transmit data between and among the memory interface 380, the display interface unit 382, the input/output interfaces unit 384, the CPU cores 370, and the GPU cores 372. In various embodiments, the interconnect 374 may implement one or more buses, one or more rings, a cross-bar, a mesh, or any other technically feasible data transmission structure or technique. The memory interface 380 is configured to couple the memory subsystem 362 to the interconnect 374. The memory interface 380 may also couple NV memory 316, volatile memory 318, or any combination thereof to the interconnect 374. The display interface unit 382 may be configured to couple a display unit 312 to the interconnect 374. The display interface unit 382 may implement certain frame buffer functions (e.g. frame refresh, etc.). Alternatively, in another embodiment, the display unit 312 may implement certain frame buffer functions (e.g. frame refresh, etc.). The input/output interfaces unit 384 may be configured to couple various input/output devices to the interconnect 374.


In certain embodiments, a camera module 330 is configured to store exposure parameters for sampling each image associated with an image stack. For example, in one embodiment, when directed to sample a photographic scene, the camera module 330 may sample a set of images comprising the image stack according to stored exposure parameters. A software module comprising programming instructions executing within a processor complex 310 may generate and store the exposure parameters prior to directing the camera module 330 to sample the image stack. In other embodiments, the camera module 330 may be used to meter an image or an image stack, and the software module comprising programming instructions executing within a processor complex 310 may generate and store metering parameters prior to directing the camera module 330 to capture the image. Of course, the camera module 330 may be used in any manner in combination with the processor complex 310.


In one embodiment, exposure parameters associated with images comprising the image stack may be stored within an exposure parameter data structure that includes exposure parameters for one or more images. In another embodiment, a camera interface unit (not shown in FIG. 3B) within the processor complex 310 may be configured to read exposure parameters from the exposure parameter data structure and to transmit associated exposure parameters to the camera module 330 in preparation of sampling a photographic scene. After the camera module 330 is configured according to the exposure parameters, the camera interface may direct the camera module 330 to sample the photographic scene; the camera module 330 may then generate a corresponding image stack. The exposure parameter data structure may be stored within the camera interface unit, a memory circuit within the processor complex 310, volatile memory 318, NV memory 316, the camera module 330, or within any other technically feasible memory circuit. Further, in another embodiment, a software module executing within processor complex 310 may generate and store the exposure parameter data structure.



FIG. 3C illustrates a digital camera 302, in accordance with one embodiment. As an option, the digital camera 302 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the digital camera 302 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


In one embodiment, the digital camera 302 may be configured to include a digital photographic system, such as digital photographic system 300 of FIG. 3A. As shown, the digital camera 302 includes a camera module 330, which may include optical elements configured to focus optical scene information representing a photographic scene onto an image sensor, which may be configured to convert the optical scene information to an electronic representation of the photographic scene.


Additionally, the digital camera 302 may include a strobe unit 336, and may include a shutter release button 315 for triggering a photographic sample event, whereby digital camera 302 samples one or more images comprising the electronic representation. In other embodiments, any other technically feasible shutter release mechanism may trigger the photographic sample event (e.g. such as a timer trigger or remote control trigger, etc.).



FIG. 3D illustrates a wireless mobile device 376, in accordance with one embodiment. As an option, the mobile device 376 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the mobile device 376 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


In one embodiment, the mobile device 376 may be configured to include a digital photographic system (e.g. such as digital photographic system 300 of FIG. 3A), which is configured to sample a photographic scene. In various embodiments, a camera module 330 may include optical elements configured to focus optical scene information representing the photographic scene onto an image sensor, which may be configured to convert the optical scene information to an electronic representation of the photographic scene. Further, a shutter release command may be generated through any technically feasible mechanism, such as a virtual button, which may be activated by a touch gesture on a touch entry display system comprising display unit 312, or a physical button, which may be located on any face or surface of the mobile device 376. Of course, in other embodiments, any number of other buttons, external inputs/outputs, or digital inputs/outputs may be included on the mobile device 376, and which may be used in conjunction with the camera module 330.


As shown, in one embodiment, a touch entry display system comprising display unit 312 is disposed on the opposite side of mobile device 376 from camera module 330. In certain embodiments, the mobile device 376 includes a user-facing camera module 331 and may include a user-facing strobe unit (not shown). Of course, in other embodiments, the mobile device 376 may include any number of user-facing camera modules or rear-facing camera modules, as well as any number of user-facing strobe units or rear-facing strobe units.


In some embodiments, the digital camera 302 and the mobile device 376 may each generate and store a synthetic image based on an image stack sampled by camera module 330. The image stack may include one or more images sampled under ambient lighting conditions, one or more images sampled under strobe illumination from strobe unit 336, one or more images sampled under an accumulated exposure of both ambient and strobe illumination, or a combination thereof.



FIG. 3E illustrates camera module 330, in accordance with one embodiment. As an option, the camera module 330 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the camera module 330 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


In one embodiment, the camera module 330 may be configured to control strobe unit 336 through strobe control signal 338. As shown, a lens 390 is configured to focus optical scene information 352 onto image sensor 332 to be sampled. In one embodiment, image sensor 332 advantageously controls detailed timing of the strobe unit 336 through the strobe control signal 338 to reduce inter-sample time between an image sampled with the strobe unit 336 enabled, and an image sampled with the strobe unit 336 disabled. For example, the image sensor 332 may enable the strobe unit 336 to emit strobe illumination 350 less than one microsecond (or any desired length) after image sensor 332 completes an exposure time associated with sampling an ambient image and prior to sampling a strobe image.


In other embodiments, the strobe illumination 350 may be configured based on a desired one or more target points. For example, in one embodiment, the strobe illumination 350 may light up an object in the foreground, and depending on the length of exposure time, may also light up an object in the background of the image. In one embodiment, once the strobe unit 336 is enabled, the image sensor 332 may then immediately begin exposing a strobe image. The image sensor 332 may thus be able to directly control sampling operations, including enabling and disabling the strobe unit 336 associated with generating an image stack, which may comprise at least one image sampled with the strobe unit 336 disabled, and at least one image sampled with the strobe unit 336 either enabled or disabled. In one embodiment, data comprising the image stack sampled by the image sensor 332 is transmitted via interconnect 334 to a camera interface unit 386 within processor complex 310. In some embodiments, the camera module 330 may include an image sensor controller (e.g., controller 333 of FIG. 3G), which may be configured to generate the strobe control signal 338 in conjunction with controlling operation of the image sensor 332.



FIG. 3F illustrates a camera module 330, in accordance with one embodiment. As an option, the camera module 330 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the camera module 330 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


In one embodiment, the camera module 330 may be configured to sample an image based on state information for strobe unit 336. The state information may include, without limitation, one or more strobe parameters (e.g. strobe intensity, strobe color, strobe time, etc.), for directing the strobe unit 336 to generate a specified intensity and/or color of the strobe illumination 350. In one embodiment, commands for configuring the state information associated with the strobe unit 336 may be transmitted through a strobe control signal 338, which may be monitored by the camera module 330 to detect when the strobe unit 336 is enabled. For example, in one embodiment, the camera module 330 may detect when the strobe unit 336 is enabled or disabled within a microsecond or less of the strobe unit 336 being enabled or disabled by the strobe control signal 338. To sample an image requiring strobe illumination, a camera interface unit 386 may enable the strobe unit 336 by sending an enable command through the strobe control signal 338. In one embodiment, the camera interface unit 386 may be included as an interface of input/output interfaces 384 in a processor subsystem 360 of the processor complex 310 of FIG. 3B. The enable command may comprise a signal level transition, a data packet, a register write, or any other technically feasible transmission of a command. The camera module 330 may sense that the strobe unit 336 is enabled and then cause image sensor 332 to sample one or more images requiring strobe illumination while the strobe unit 336 is enabled. In such an implementation, the image sensor 332 may be configured to wait for an enable signal destined for the strobe unit 336 as a trigger signal to begin sampling a new exposure.


In one embodiment, camera interface unit 386 may transmit exposure parameters and commands to camera module 330 through interconnect 334. In certain embodiments, the camera interface unit 386 may be configured to directly control strobe unit 336 by transmitting control commands to the strobe unit 336 through strobe control signal 338. By directly controlling both the camera module 330 and the strobe unit 336, the camera interface unit 386 may cause the camera module 330 and the strobe unit 336 to perform their respective operations in precise time synchronization. In one embodiment, precise time synchronization may be less than five hundred microseconds of event timing error. Additionally, event timing error may be a difference in time from an intended event occurrence to the time of a corresponding actual event occurrence.


In another embodiment, camera interface unit 386 may be configured to accumulate statistics while receiving image data from camera module 330. In particular, the camera interface unit 386 may accumulate exposure statistics for a given image while receiving image data for the image through interconnect 334. Exposure statistics may include, without limitation, one or more of an intensity histogram, a count of over-exposed pixels, a count of under-exposed pixels, an intensity-weighted sum of pixel intensity, or any combination thereof. The camera interface unit 386 may present the exposure statistics as memory-mapped storage locations within a physical or virtual address space defined by a processor, such as one or more of CPU cores 370, within processor complex 310. In one embodiment, exposure statistics reside in storage circuits that are mapped into a memory-mapped register space, which may be accessed through the interconnect 334. In other embodiments, the exposure statistics are transmitted in conjunction with transmitting pixel data for a captured image. For example, the exposure statistics for a given image may be transmitted as in-line data, following transmission of pixel intensity data for the captured image. Exposure statistics may be calculated, stored, or cached within the camera interface unit 386. In other embodiments, an image sensor controller within camera module 330 is configured to accumulate the exposure statistics and transmit the exposure statistics to processor complex 310, such as by way of camera interface unit 386. In one embodiment, the exposure statistics are accumulated within the camera module 330 and transmitted to the camera interface unit 386, either in conjunction with transmitting image data to the camera interface unit 386, or separately from transmitting image data.
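
A host-side illustration of these exposure statistics, assuming an 8-bit grayscale image and arbitrary over/under-exposure thresholds, might look like the following.

```python
import numpy as np

def exposure_statistics(gray_8bit, under_threshold=16, over_threshold=239):
    """Compute exposure statistics of the kind described above for an 8-bit
    image: intensity histogram, over/under-exposed pixel counts, and an
    intensity-weighted sum of pixel intensity."""
    pixels = gray_8bit.astype(np.float64)
    histogram, _ = np.histogram(gray_8bit, bins=256, range=(0, 256))
    return {
        "histogram": histogram,
        "under_exposed_count": int(np.count_nonzero(gray_8bit <= under_threshold)),
        "over_exposed_count": int(np.count_nonzero(gray_8bit >= over_threshold)),
        "intensity_weighted_sum": float(np.sum(pixels * pixels)),
    }
```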


In one embodiment, camera interface unit 386 may accumulate color statistics for estimating scene white-balance. Any technically feasible color statistics may be accumulated for estimating white balance, such as a sum of intensities for different color channels comprising red, green, and blue color channels. The sum of color channel intensities may then be used to perform a white-balance color correction on an associated image, according to a white-balance model such as a gray-world white-balance model. In other embodiments, curve-fitting statistics are accumulated for a linear or a quadratic curve fit used for implementing white-balance correction on an image. As with the exposure statistics, the color statistics may be presented as memory-mapped storage locations within processor complex 310. In one embodiment, the color statistics are mapped into a memory-mapped register space, which may be accessed through interconnect 334. In other embodiments, the color statistics may be transmitted in conjunction with transmitting pixel data for a captured image. For example, in one embodiment, the color statistics for a given image may be transmitted as in-line data, following transmission of pixel intensity data for the image. Color statistics may be calculated, stored, or cached within the camera interface unit 386. In other embodiments, the image sensor controller within camera module 330 is configured to accumulate the color statistics and transmit the color statistics to processor complex 310, such as by way of camera interface unit 386. In one embodiment, the color statistics are accumulated within the camera module 330 and transmitted to the camera interface unit 386, either in conjunction with transmitting image data to the camera interface unit 386, or separately from transmitting image data.
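As a software analogue of the gray-world model referenced above, the sketch below derives per-channel gains from accumulated color-channel sums and applies them to the image. The 8-bit RGB input format and the simple per-channel gain correction are assumptions made for this example:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Scale each color channel so its average matches the overall mean (gray-world model)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    channel_sums = rgb.reshape(-1, 3).sum(axis=0)        # accumulated R, G, B intensity sums
    gains = channel_sums.mean() / np.maximum(channel_sums, 1e-9)
    balanced = np.clip(rgb * gains, 0, 255)              # apply per-channel white-balance gains
    return balanced.astype(np.uint8)

# Example usage on a synthetic image
img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
corrected = gray_world_white_balance(img)
```

Note that only the three channel sums need to be accumulated while the image streams in; the gain computation and correction can then run afterward on the processor complex.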


In one embodiment, camera interface unit 386 may accumulate spatial color statistics for performing color-matching between or among images, such as between or among an ambient image and one or more images sampled with strobe illumination. As with the exposure statistics, the spatial color statistics may be presented as memory-mapped storage locations within processor complex 310. In one embodiment, the spatial color statistics are mapped into a memory-mapped register space. In another embodiment, the camera module 330 is configured to accumulate the spatial color statistics, which may be accessed through interconnect 334. In other embodiments, the color statistics may be transmitted in conjunction with transmitting pixel data for a captured image. For example, in one embodiment, the color statistics for a given image may be transmitted as in-line data, following transmission of pixel intensity data for the image. Color statistics may be calculated, stored, or cached within the camera interface unit 386.
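For illustration, spatial color statistics of the kind described above might take the form of per-tile mean colors, from which a coarse color-matching map between an ambient image and a strobe-illuminated image can be derived. The tile count and the ratio-based matching below are assumptions for this sketch, not the disclosed statistics format:

```python
import numpy as np

def tile_color_means(rgb, tiles=(8, 8)):
    """Accumulate per-tile mean color (a simple form of spatial color statistics)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    h, w, _ = rgb.shape
    th, tw = h // tiles[0], w // tiles[1]
    means = np.zeros((tiles[0], tiles[1], 3))
    for ty in range(tiles[0]):
        for tx in range(tiles[1]):
            block = rgb[ty * th:(ty + 1) * th, tx * tw:(tx + 1) * tw]
            means[ty, tx] = block.reshape(-1, 3).mean(axis=0)
    return means

def per_tile_color_gains(ambient_rgb, strobe_rgb, tiles=(8, 8)):
    """Ratio of ambient to strobe tile means, usable as a coarse color-matching map."""
    ambient = tile_color_means(ambient_rgb, tiles)
    strobe = tile_color_means(strobe_rgb, tiles)
    return ambient / np.maximum(strobe, 1e-9)
```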


In one embodiment, camera module 330 may transmit strobe control signal 338 to strobe unit 336, enabling the strobe unit 336 to generate illumination while the camera module 330 is sampling an image. In another embodiment, camera module 330 may sample an image illuminated by strobe unit 336 upon receiving an indication signal from camera interface unit 386 that the strobe unit 336 is enabled. In yet another embodiment, camera module 330 may sample an image illuminated by strobe unit 336 upon detecting strobe illumination within a photographic scene via a rapid rise in scene illumination. In one embodiment, a rapid rise in scene illumination may include at least a rate of increasing intensity consistent with that of enabling strobe unit 336. In still yet another embodiment, camera module 330 may enable strobe unit 336 to generate strobe illumination while sampling one image, and disable the strobe unit 336 while sampling a different image.
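A minimal sketch of detecting strobe illumination from a rapid rise in scene illumination follows. The frame-mean comparison and the rise threshold are assumptions introduced for this example and do not represent the camera module's actual detection circuitry:

```python
import numpy as np

def strobe_detected(prev_frame, curr_frame, rise_threshold=1.5):
    """Return True if mean scene intensity rises fast enough to be consistent with a strobe firing."""
    prev_mean = float(np.asarray(prev_frame, dtype=np.float64).mean())
    curr_mean = float(np.asarray(curr_frame, dtype=np.float64).mean())
    return curr_mean >= rise_threshold * max(prev_mean, 1e-9)
```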



FIG. 3G illustrates camera module 330, in accordance with one embodiment. As an option, the camera module 330 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the camera module 330 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


In one embodiment, the camera module 330 may be in communication with an application processor 335. The camera module 330 is shown to include image sensor 332 in communication with a controller 333. Further, the controller 333 is shown to be in communication with the application processor 335.


In one embodiment, the application processor 335 may reside outside of the camera module 330. As shown, the lens 390 may be configured to focus optical scene information to be sampled onto image sensor 332. The optical scene information sampled by the image sensor 332 may then be communicated from the image sensor 332 to the controller 333 for at least one of subsequent processing and communication to the application processor 335. In another embodiment, the controller 333 may control storage of the optical scene information sampled by the image sensor 332, or storage of processed optical scene information.


In another embodiment, the controller 333 may enable a strobe unit to emit strobe illumination for a short time duration (e.g. less than ten milliseconds) after image sensor 332 completes an exposure time associated with sampling an ambient image. Further, the controller 333 may be configured to generate strobe control signal 338 in conjunction with controlling operation of the image sensor 332.


In one embodiment, the image sensor 332 may be a complementary metal oxide semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor. In another embodiment, the controller 333 and the image sensor 332 may be packaged together as an integrated system, multi-chip module, multi-chip stack, or integrated circuit. In yet another embodiment, the controller 333 and the image sensor 332 may comprise discrete packages. In one embodiment, the controller 333 may provide circuitry for receiving optical scene information from the image sensor 332, processing of the optical scene information, timing of various functionalities, and signaling associated with the application processor 335. Further, in another embodiment, the controller 333 may provide circuitry for control of one or more of exposure, shuttering, white balance, and gain adjustment. Processing of the optical scene information by the circuitry of the controller 333 may include one or more of gain application, amplification, and analog-to-digital conversion. After processing the optical scene information, the controller 333 may transmit corresponding digital pixel data, such as to the application processor 335.
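By way of illustration of the gain application and analog-to-digital conversion mentioned above, and not of the controller 333 circuitry itself, a simple software model follows. The gain value, bit depth, and normalized input range are assumptions for this sketch:

```python
import numpy as np

def apply_gain_and_quantize(analog_signal, gain=2.0, bit_depth=10):
    """Apply a gain to a normalized analog signal, then quantize it to a digital code."""
    amplified = np.asarray(analog_signal, dtype=np.float64) * gain   # gain application / amplification
    max_code = (1 << bit_depth) - 1
    codes = np.clip(np.round(amplified * max_code), 0, max_code)     # analog-to-digital conversion
    return codes.astype(np.uint16)

# Example usage with a normalized sensor signal in [0, 1]
signal = np.random.rand(480, 640)
digital_pixels = apply_gain_and_quantize(signal)
```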


In one embodiment, the application processor 335 may be implemented on processor complex 310 and at least one of volatile memory 318 and NV memory 316, or any other memory device and/or system. The application processor 335 may be previously configured for processing of received optical scene information or digital pixel data communicated from the camera module 330 to the application processor 335.



FIG. 4 illustrates a network service system 400, in accordance with one embodiment. As an option, the network service system 400 may be implemented in the context of the details of any of the Figures disclosed herein. Of course, however, the network service system 400 may be implemented in any desired environment. Further, the aforementioned definitions may equally apply to the description below.


In one embodiment, the network service system 400 may be configured to provide network access to a device implementing a digital photographic system. As shown, network service system 400 includes a wireless mobile device 376, a wireless access point 472, a data network 474, a data center 480, and a data center 481. The wireless mobile device 376 may communicate with the wireless access point 472 via a digital radio link 471 to send and receive digital data, including data associated with digital images. The wireless mobile device 376 and the wireless access point 472 may implement any technically feasible transmission techniques for transmitting digital data via digital radio link 471 without departing from the scope and spirit of the present invention. In certain embodiments, one or more of data centers 480, 481 may be implemented using virtual constructs so that each system and subsystem within a given data center 480, 481 may comprise virtual machines configured to perform data processing and network data transmission tasks. In other implementations, one or more of data centers 480, 481 may be physically distributed over a plurality of physical sites.


The wireless mobile device 376 may comprise a smart phone configured to include a digital camera, a digital camera configured to include wireless network connectivity, a reality augmentation device, a laptop configured to include a digital camera and wireless network connectivity, or any other technically feasible computing device configured to include a digital photographic system and wireless network connectivity.


In various embodiments, the wireless access point 472 may be configured to communicate with wireless mobile device 376 via the digital radio link 471 and to communicate with the data network 474 via any technically feasible transmission media, such as any electrical, optical, or radio transmission media. For example, in one embodiment, wireless access point 472 may communicate with data network 474 through an optical fiber coupled to the wireless access point 472 and to a router system or a switch system within the data network 474. A network link 475, such as a wide area network (WAN) link, may be configured to transmit data between the data network 474 and the data center 480.


In one embodiment, the data network 474 may include routers, switches, long-haul transmission systems, provisioning systems, authorization systems, and any technically feasible combination of communications and operations subsystems configured to convey data between network endpoints, such as between the wireless access point 472 and the data center 480. In one implementation scenario, wireless mobile device 376 may comprise one of a plurality of wireless mobile devices configured to communicate with the data center 480 via one or more wireless access points coupled to the data network 474.


Additionally, in various embodiments, the data center 480 may include, without limitation, a switch/router 482 and at least one data service system 484. The switch/router 482 may be configured to forward data traffic between and among the network link 475 and each data service system 484. The switch/router 482 may implement any technically feasible transmission techniques, such as Ethernet media layer transmission, layer 2 switching, layer 3 routing, and the like. The switch/router 482 may comprise one or more individual systems configured to transmit data between the data service systems 484 and the data network 474.


In one embodiment, the switch/router 482 may implement session-level load balancing among a plurality of data service systems 484. Each data service system 484 may include at least one computation system 488 and may also include one or more storage systems 486. Each computation system 488 may comprise one or more processing units, such as a central processing unit, a graphics processing unit, or any combination thereof. A given data service system 484 may be implemented as a physical system comprising one or more physically distinct systems configured to operate together. Alternatively, a given data service system 484 may be implemented as a virtual system comprising one or more virtual systems executing on an arbitrary physical system. In certain scenarios, the data network 474 may be configured to transmit data between the data center 480 and another data center 481, such as through a network link 476.


In another embodiment, the network service system 400 may include any networked mobile devices configured to implement one or more embodiments of the present invention. For example, in some embodiments, a peer-to-peer network, such as an ad-hoc wireless network, may be established between two different wireless mobile devices. In such embodiments, digital image data may be transmitted between the two wireless mobile devices without having to send the digital image data to a data center 480.



FIG. 5 illustrates capturing 500 an image with correct skin tone exposure, in accordance with one embodiment. As an option, the capturing 500 may be implemented in the context of any one or more of the embodiments set forth in any previous and/or subsequent figure(s) and/or description thereof. Of course, however, the capturing 500 may be implemented in the context of any desired environment. Further, the aforementioned definitions may equally apply to the description below.


As shown, image 502 represents a conventional captured image using standard exposure and capture techniques. As can be seen, portions of image 502 lack sufficient contrast, making the subject's facial features difficult to distinguish. Image 504 represents a conventional flash captured image using standard exposure and flash capture techniques. As shown in image 504, turning on the flash aggravates the lack of contrast, making the subject's facial features even more difficult to distinguish. In contrast, image 506 represents an image captured using one or more techniques described herein, wherein such techniques provide improved image quality for portraits of individuals with very dark skin tone or very light skin tone. In particular, image 506 was captured according to method 100 of FIG. 1A, where a face was detected in the scene, and the face was determined to have a threshold dark skin tone. Having detected the face with threshold skin tone, the camera was configured to capture an HDR image, where default camera behavior would have captured a non-HDR image. The HDR image was tone-mapped and equalized according to CLAHE techniques to generate image 506.
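The CLAHE equalization mentioned above can be illustrated with a short Python sketch using OpenCV. The clip limit, tile grid size, use of the LAB lightness channel, and the placeholder file name are assumptions chosen for this example rather than parameters taken from the disclosed method:

```python
import cv2

def equalize_with_clahe(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to the lightness channel of an image to recover local contrast."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_channel, a_channel, b_channel = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_equalized = clahe.apply(l_channel)                 # contrast-limited adaptive histogram equalization
    return cv2.cvtColor(cv2.merge((l_equalized, a_channel, b_channel)), cv2.COLOR_LAB2BGR)

# Example usage on a tone-mapped HDR result loaded from disk (file name is a placeholder)
tonemapped = cv2.imread("tonemapped.png")
if tonemapped is not None:
    result = equalize_with_clahe(tonemapped)
```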



FIG. 6 illustrates capturing 600 an image with correct skin tone exposure, in accordance with one embodiment. As an option, the capturing 600 may be implemented in the context of any one or more of the embodiments set forth in any previous and/or subsequent figure(s) and/or description thereof. Of course, however, the capturing 600 may be implemented in the context of any desired environment. Further, the aforementioned definitions may equally apply to the description below.


As shown, image 602 represents a conventional captured image at EV+3. As visually apparent and indicated by the accompanying histogram, image 602 is “blown out” and generally over-exposed (note the large spike on the far right side of the histogram). However, image 602 does include useful visual detail in the region of the subject's face, despite such detail appearing incorrectly over-exposed. Image 604 represents a conventional captured image at EV−0. As visually apparent, image 604 balances overall image exposure, the goal of conventional exposure; however, as a consequence, there is insufficient visual detail or contrast captured in the region of the subject's face. The accompanying histogram is centered, with no excessively dark regions and only a narrow spike of excessively bright pixels. As such, image 604 meets conventional goals of balanced exposure, despite the subject's face lacking sufficient visual detail. Image 602 and image 604 both fail to capture important visual detail in different regions, with image 604 failing to capture visual detail in the region of the subject's face and image 602 failing to capture visual detail surrounding the subject's face. In contrast, image 606 provides visual detail while preserving a natural exposure appearance (e.g., correct exposure) in the region of the subject's face, and furthermore provides visual detail in regions surrounding the subject's face. In the histogram accompanying image 606, three clusters of pixel values are apparent. A leftmost cluster is associated with the region of the subject's face and some background regions. This cluster has a higher peak and greater breadth than that of image 604 or image 602, indicating greater detail in this intensity range for image 606. Furthermore, a mid-range group associated mostly with surrounding regions has increased in magnitude and breadth, indicating greater detail in the mid range. Note that while there are a significant number of bright pixels (e.g., in the sky region) in the rightmost cluster, few overall pixels are actually saturated (“blown out”). Beyond the visual superiority of detail in image 606, the accompanying histogram objectively illustrates that image 606 provides more detail in appropriate intensity ranges. Image 606 was captured using one or more techniques described herein, wherein such techniques provide improved image quality for portraits of individuals with very dark skin tone (or very light skin tone). In particular, image 606 was captured according to method 120 of FIG. 2B, with a high degree of HDR effect applied, in conjunction with equalization, to pixels within the face region. In this manner, the lighting and visual detail of the captured image are corrected, both with respect to the environment (shown around the face), and the face skin tone of the subject.
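As an illustration of the histogram reading described above, and not of the disclosed capture logic, the following sketch estimates how many pixels sit near saturation, i.e., the kind of spike at the far right of the histogram that marks image 602 as blown out. The threshold values are assumptions:

```python
import numpy as np

def saturation_fraction(gray, saturated_level=250):
    """Fraction of pixels at or above a near-saturation intensity level."""
    gray = np.asarray(gray, dtype=np.uint8)
    return float(np.count_nonzero(gray >= saturated_level)) / gray.size

def looks_blown_out(gray, max_fraction=0.05):
    """Heuristic: treat a frame as over-exposed if too many pixels sit near saturation."""
    return saturation_fraction(gray) > max_fraction
```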



FIG. 7 illustrates capturing 700 an image with correct skin tone exposure, in accordance with one embodiment. As an option, the capturing 700 may be implemented in the context of any one or more of the embodiments set forth in any previous and/or subsequent figure(s) and/or description thereof. Of course, however, the capturing 700 may be implemented in the context of any desired environment. Further, the aforementioned definitions may equally apply to the description below.


As shown, image 702 represents a conventional captured image at EV+3. As is visually apparent, image 702 is over-exposed; furthermore, the accompanying histogram shows a large spike of saturated pixels at the far right. At this exposure level, useful visual detail is captured in the region of the subject's face. However, the subject appears unnaturally bright and over-exposed. Image 704 represents a conventional captured image at EV−0. The histogram accompanying image 704 indicates conventionally proper exposure that would not cause a conventional camera system to select an HDR capture mode. Despite the conventionally proper exposure, the subject's face lacks visual detail. In contrast, image 706 was captured according to method 100 of FIG. 1A, where a face was detected in the scene, and the face was determined to have a threshold dark skin tone. Having detected the face with threshold skin tone, the camera was configured to capture an HDR image, where default camera behavior would have captured a non-HDR image. The HDR image was tone-mapped and equalized according to CLAHE techniques to generate image 706. In this manner, the lighting of the captured image is corrected, both with respect to the environment (shown around the face) and with respect to the face skin tone of the subject.
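For illustration only, bracketed captures such as images 702 (EV+3) and 704 (EV−0) could be combined in software with a generic exposure-fusion operator. The sketch below uses OpenCV's Mertens fusion, which merely stands in for the HDR merge of the disclosed method, and the file names are placeholders:

```python
import cv2
import numpy as np

def fuse_bracketed_exposures(image_paths):
    """Fuse an exposure-bracketed stack into a single frame using Mertens exposure fusion."""
    images = [cv2.imread(path) for path in image_paths]
    images = [img for img in images if img is not None]
    if not images:
        raise ValueError("no readable input frames")
    fused = cv2.createMergeMertens().process(images)     # result is floating point, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# Example usage (file names are placeholders):
# result = fuse_bracketed_exposures(["capture_ev0.png", "capture_ev_plus3.png"])
```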



FIG. 8 illustrates a network architecture 800, in accordance with one possible embodiment. As shown, at least one network 802 is provided. In the context of the present network architecture 800, the network 802 may take any form including, but not limited to, a telecommunications network, a local area network (LAN), a wireless network, a wide area network (WAN) such as the Internet, a peer-to-peer network, a cable network, etc. While only one network is shown, it should be understood that two or more similar or different networks 802 may be provided.


Coupled to the network 802 is a plurality of devices. For example, a server computer 812 and an end user computer 808 may be coupled to the network 802 for communication purposes. Such end user computer 808 may include a desktop computer, laptop computer, and/or any other type of logic. Still yet, various other devices may be coupled to the network 802 including a personal digital assistant (PDA) device 810, a mobile phone device 806, a television 804, a camera 814, etc.



FIG. 9 illustrates an exemplary system 900, in accordance with one embodiment. As an option, the system 900 may be implemented in the context of any of the devices of the network architecture 800 of FIG. 8. Of course, the system 900 may be implemented in any desired environment.


As shown, a system 900 is provided including at least one central processor 902 which is connected to a communication bus 912. The system 900 also includes main memory 904 [e.g. random access memory (RAM), etc.]. The system 900 also includes a graphics processor 908 and a display 910.


The system 900 may also include a secondary storage 906. The secondary storage 906 includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, etc. The removable storage drive reads from and/or writes to a removable storage unit in a well known manner.


Computer programs, or computer control logic algorithms, may be stored in the main memory 904, the secondary storage 906, and/or any other memory, for that matter. Such computer programs, when executed, enable the system 900 to perform various functions (as set forth above, for example). Memory 904, storage 906 and/or any other storage are possible examples of non-transitory computer-readable media. In one embodiment, digital photographic system 300 includes system 900.


It is noted that the techniques described herein, in an aspect, are embodied in executable instructions stored in a computer readable medium for use by or in connection with an instruction execution machine, apparatus, or device, such as a computer-based or processor-containing machine, apparatus, or device. It will be appreciated by those skilled in the art that for some embodiments, other types of computer readable media are included which may store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memory (RAM), read-only memory (ROM), and the like.


As used here, a “computer-readable medium” includes one or more of any suitable media for storing the executable instructions of a computer program such that the instruction execution machine, system, apparatus, or device may read (or fetch) the instructions from the computer readable medium and execute the instructions for carrying out the described methods. Suitable storage formats include one or more of an electronic, magnetic, optical, and electromagnetic format. A non-exhaustive list of conventional exemplary computer readable media includes: a portable computer diskette; a RAM; a ROM; an erasable programmable read only memory (EPROM or flash memory); optical storage devices, including a portable compact disc (CD), a portable digital video disc (DVD), a high definition DVD (HD-DVD™), a BLU-RAY disc; and the like.


It should be understood that the arrangement of components illustrated in the Figures described are exemplary and that other arrangements are possible. It should also be understood that the various system components (and means) defined by the claims, described below, and illustrated in the various block diagrams represent logical components in some systems configured according to the subject matter disclosed herein.


For example, one or more of these system components (and means) may be realized, in whole or in part, by at least some of the components illustrated in the arrangements illustrated in the described Figures. In addition, while at least one of these components is implemented at least partially as an electronic hardware component, and therefore constitutes a machine, the other components may be implemented in software that, when included in an execution environment, constitutes a machine, hardware, or a combination of software and hardware.


More particularly, at least one component defined by the claims is implemented at least partially as an electronic hardware component, such as an instruction execution machine (e.g., a processor-based or processor-containing machine) and/or as specialized circuits or circuitry (e.g., discrete logic gates interconnected to perform a specialized function). Other components may be implemented in software, hardware, or a combination of software and hardware. Moreover, some or all of these other components may be combined, some may be omitted altogether, and additional components may be added while still achieving the functionality described herein. Thus, the subject matter described herein may be embodied in many different variations, and all such variations are contemplated to be within the scope of what is claimed.


In the description above, the subject matter is described with reference to acts and symbolic representations of operations that are performed by one or more devices, unless indicated otherwise. As such, it will be understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by the processor of data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the device in a manner well understood by those skilled in the art. The data is maintained at physical locations of the memory as data structures that have particular properties defined by the format of the data. However, while the subject matter is being described in the foregoing context, it is not meant to be limiting as those of skill in the art will appreciate that various of the acts and operations described herein may also be implemented in hardware.


To facilitate an understanding of the subject matter described herein, many aspects are described in terms of sequences of actions. At least one of these aspects defined by the claims is performed by an electronic hardware component. For example, it will be recognized that the various actions may be performed by specialized circuits or circuitry, by program instructions being executed by one or more processors, or by a combination of both. The description herein of any sequence of actions is not intended to imply that the specific order described for performing that sequence must be followed. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.


The use of the terms “a” and “an” and “the” and similar referents in the context of describing the subject matter (particularly in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the scope of protection sought is defined by the claims as set forth hereinafter together with any equivalents to which such claims are entitled. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illustrate the subject matter and does not pose a limitation on the scope of the subject matter unless otherwise claimed. The use of the term “based on” and other like phrases indicating a condition for bringing about a result, both in the claims and in the written description, is not intended to foreclose any other conditions that bring about that result. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention as claimed.


The embodiments described herein include the one or more modes known to the inventor for carrying out the claimed subject matter. Of course, variations of those embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventor expects skilled artisans to employ such variations as appropriate, and the inventor intends for the claimed subject matter to be practiced otherwise than as specifically described herein. Accordingly, this claimed subject matter includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed unless otherwise indicated herein or otherwise clearly contradicted by context.

Claims
  • 1. A method for high-dynamic range (HDR) imaging, comprising: identifying one or more faces in a preview image; identifying a region of interest (ROI) for each identified face of the one or more faces in the preview image, wherein a number of identified ROIs is more than one; determining a number of images to be captured for an HDR image to be saved in an image stack; and for each ROI corresponding to an identified face of the one or more faces: determining a skin tone of the identified face; determining, based on the determined skin tone and a comparison between the identified face and the ROI corresponding to the identified face, a target brightness of the respective image associated with the ROI; and determining, based on the target brightness, an exposure value of the respective image associated with the ROI.
  • 2. The method of claim 1, wherein determining the exposure value of a respective image associated with a ROI comprises exclusively using the ROI in determining the exposure value.
  • 3. The method of claim 2, further comprising: determining a separate exposure value from an entirety of the preview image, wherein the exposure value is to be used for capturing an image other than the number of images; receiving the number of images and the image other than the number of images captured using corresponding exposure values; and generating the HDR image based on the number of images and the image other than the number of images.
  • 4. The method of claim 1, further comprising: receiving a user input for the preview image; and identifying one or more ROIs of the preview image from the user input.
  • 5. The method of claim 1, further comprising for each ROI corresponding to an identified face of the one or more faces: comparing a size of the identified face to a size of the ROI corresponding to the identified face.
  • 6. A device configured to generate a High-Dynamic Range (HDR) image, comprising: one or more processors implemented in circuitry; and a memory coupled to the one or more processors and including instructions that, when executed by the one or more processors, cause the device to: identify one or more faces in a preview image; identify a region of interest (ROI) for each identified face of the one or more faces in the preview image, wherein a number of identified ROIs is more than one; determine a number of images to be captured for an HDR image to be saved in an image stack; and for each ROI corresponding to an identified face of the one or more faces: determine a skin tone of the identified face; determine, based on the determined skin tone and a comparison between the identified face and the ROI corresponding to the identified face, a target brightness of the respective image associated with the ROI; and determine, based on the target brightness, an exposure value of the respective image associated with the ROI.
  • 7. The device of claim 6, wherein execution of the instructions further causes the device to exclusively use the ROI in determining the exposure value.
  • 8. The device of claim 7, wherein execution of the instructions further causes the device to: determine a separate exposure value from an entirety of the preview image, wherein the exposure value is to be used for capturing an image other than the number of images; receive the number of images and the image other than the number of images captured using corresponding exposure values; and generate the HDR image based on the number of images and the image other than the number of images.
  • 9. The device of claim 8, further comprising one or more cameras, wherein execution of the instructions further causes the device to: capture the preview image using at least one camera of the one or more cameras; and capture the number of images and the image other than the number of images using at least one camera of the one or more cameras.
  • 10. The device of claim 6, further comprising: a display to display the preview image; and a user interface to receive a user input for the displayed preview image; wherein execution of the instructions further causes the device to identify one or more ROIs of the preview image from the user input.
  • 11. The device of claim 6, wherein execution of the instructions, for each ROI corresponding to an identified face of the one or more faces, further causes the device to: compare a size of the identified face to a size of the ROI corresponding to the identified face.
  • 12. A non-transitory computer-readable medium storing one or more programs containing instructions that, when executed by one or more processors of a device, cause the device to: identify one or more faces in a preview image; identify a region of interest (ROI) for each identified face of the one or more faces in the preview image, wherein a number of ROIs is more than one; determine a number of images to be captured for an HDR image to be saved in an image stack; and for each ROI corresponding to an identified face of the one or more faces: determine a skin tone of the identified face; determine, based on the determined skin tone and a comparison between the identified face and the ROI corresponding to the identified face, a target brightness of the respective image associated with the ROI; and determine, based on the target brightness, an exposure value of the respective image associated with the ROI.
  • 13. The non-transitory computer-readable medium of claim 12, wherein execution of the instructions further causes the device to exclusively use the ROI of the preview image in determining the exposure value.
  • 14. The non-transitory computer-readable medium of claim 13, wherein execution of the instructions further causes the device to: determine a separate exposure value from an entirety of the preview image, wherein the exposure value is to be used for capturing an image other than the number of images; receive the number of images and the image other than the number of images captured using corresponding exposure values; and generate the HDR image based on the number of images and the image other than the number of images.
  • 15. The non-transitory computer-readable medium of claim 12, wherein execution of the instructions further causes the device to: receive a user input for the preview image; and identify one or more ROIs of the preview image from the user input.
  • 16. The non-transitory computer-readable medium of claim 12, wherein execution of the instructions further causes the device to: for each ROI corresponding to an identified face of the one or more faces, compare a size of the identified face to a size of the ROI corresponding to the identified face.
CROSS-REFERENCES TO RELATED APPLICATIONS

The present application is a continuation of and claims priority to U.S. patent application Ser. No. 16/796,497, titled “SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE WITH CORRECT SKIN TONE EXPOSURE,” filed Feb. 20, 2020, which in turn, is a continuation of and claims priority to U.S. patent application Ser. No. 16/290,763, titled “SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE WITH CORRECT SKIN TONE EXPOSURE,” filed Mar. 1, 2019, which in turn, is a continuation of and claims priority to U.S. patent application Ser. No. 16/215,351, titled “SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE WITH CORRECT SKIN TONE EXPOSURE,” filed Dec. 10, 2018, which in turn, is a continuation of and claims priority to U.S. patent application Ser. No. 15/976,756, titled “SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE WITH CORRECT SKIN TONE EXPOSURE,” filed May 10, 2018, which claims priority to U.S. Provisional Patent Application No. 62/568,553, titled “SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE,” filed Oct. 5, 2017, as well as U.S. Provisional Patent Application No. 62/599,940, titled “SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE WITH CORRECT SKIN TONE EXPOSURE SETTINGS,” filed Dec. 18, 2017, all of which are hereby incorporated by reference for all purposes.

US Referenced Citations (434)
Number Name Date Kind
4873561 Wen Oct 1989 A
5200828 Jang et al. Apr 1993 A
5363209 Eschbach et al. Nov 1994 A
5818977 Tansley Oct 1998 A
5859921 Suzuki Jan 1999 A
5867215 Kaplan Feb 1999 A
6115065 Yadid-Pecht et al. Sep 2000 A
6148092 Qian Nov 2000 A
6184940 Sano Feb 2001 B1
6243430 Mathe Jun 2001 B1
6293284 Rigg Sep 2001 B1
6332033 Qian Dec 2001 B1
6365950 Sohn Apr 2002 B1
6453068 Li Sep 2002 B1
6498926 Ciccarelli et al. Dec 2002 B1
6642962 Lin et al. Nov 2003 B1
6788338 Dinev et al. Sep 2004 B1
6885761 Kage Apr 2005 B2
6944319 Huang et al. Sep 2005 B1
6996186 Ngai et al. Feb 2006 B2
7084905 Nayar et al. Aug 2006 B1
7088351 Wang Aug 2006 B2
7098952 Morris et al. Aug 2006 B2
7142697 Huang et al. Nov 2006 B2
7206449 Raskar et al. Apr 2007 B2
7256381 Asaba Aug 2007 B2
7265784 Frank Sep 2007 B1
7362886 Rowe et al. Apr 2008 B2
7415152 Jiang et al. Aug 2008 B2
7518645 Farrier Apr 2009 B2
7587099 Szeliski et al. Sep 2009 B2
7599569 Smirnov et al. Oct 2009 B2
7609860 Lee et al. Oct 2009 B2
7646909 Jiang et al. Jan 2010 B2
7656456 Zhang Feb 2010 B2
7715598 Li et al. May 2010 B2
7760246 Dalton et al. Jul 2010 B2
7835586 Porikli Nov 2010 B2
7844076 Corcoran et al. Nov 2010 B2
7907791 Kinrot et al. Mar 2011 B2
7999858 Nayar et al. Aug 2011 B2
8125526 Maruyama et al. Feb 2012 B2
8144253 Su et al. Mar 2012 B2
8155397 Bigioi et al. Apr 2012 B2
8189944 Lim May 2012 B1
8199203 Sugimoto Jun 2012 B2
8237813 Garten Aug 2012 B2
8351711 Takano et al. Jan 2013 B2
8363951 Bigioi et al. Jan 2013 B2
8369586 Corcoran et al. Feb 2013 B2
8406482 Chien et al. Mar 2013 B1
8548257 Reid et al. Oct 2013 B2
8605142 Hayashi Dec 2013 B2
8610789 Nayar et al. Dec 2013 B1
8633978 Yang et al. Jan 2014 B2
8675086 Linzer Mar 2014 B1
8675960 Reid et al. Mar 2014 B2
8682029 Piramuthu Mar 2014 B2
8699822 Park et al. Apr 2014 B2
8712160 Bigioi et al. Apr 2014 B2
8723284 Hynecek May 2014 B1
8761245 Puri et al. Jun 2014 B2
8780420 Bluzer et al. Jul 2014 B1
8786725 Maruyama et al. Jul 2014 B2
8792679 Sengupta et al. Jul 2014 B2
8797445 Kang Aug 2014 B2
8811757 Batur Aug 2014 B2
8824747 Free Sep 2014 B2
8852003 Oku Oct 2014 B2
8854421 Kasahara Oct 2014 B2
8861847 Srinivasan et al. Oct 2014 B2
8878963 Prabhudesai et al. Nov 2014 B2
8897504 Steinberg et al. Nov 2014 B2
8908932 Corcoran et al. Dec 2014 B2
8934029 Nayar et al. Jan 2015 B2
8942436 Mori et al. Jan 2015 B2
8970770 Nanu et al. Mar 2015 B2
8976264 Rivard et al. Mar 2015 B2
9014459 Xiang et al. Apr 2015 B2
9058655 Doepke et al. Jun 2015 B2
9070185 Lee et al. Jun 2015 B2
9083905 Wan et al. Jul 2015 B2
9106888 Chou Aug 2015 B2
9118876 Felt Aug 2015 B2
9124814 Kim et al. Sep 2015 B2
9137455 Rivard et al. Sep 2015 B1
9154708 Rivard et al. Oct 2015 B1
9160936 Rivard et al. Oct 2015 B1
9167169 Rivard et al. Oct 2015 B1
9179062 Rivard et al. Nov 2015 B1
9179085 Rivard et al. Nov 2015 B1
9230343 Ozawa Jan 2016 B2
9239947 Auberger et al. Jan 2016 B2
9336574 Zhang et al. May 2016 B2
9406147 Rivard et al. Aug 2016 B2
9421462 Oku Aug 2016 B2
9443132 Linguraru et al. Sep 2016 B2
9467607 Ng et al. Oct 2016 B2
9516217 Corcoran et al. Dec 2016 B2
9531961 Rivard et al. Dec 2016 B2
9560269 Baldwin Jan 2017 B2
9578211 Kong et al. Feb 2017 B2
9600741 Su et al. Mar 2017 B1
9639742 Lee et al. May 2017 B2
9661327 Nilsson May 2017 B2
9704250 Shah et al. Jul 2017 B1
9760764 Mishra et al. Sep 2017 B2
9779287 Steinberg et al. Oct 2017 B2
9807322 Feder et al. Oct 2017 B2
9819849 Rivard et al. Nov 2017 B1
9860461 Feder et al. Jan 2018 B2
9898674 Connell, II et al. Feb 2018 B2
9912928 Rivard et al. Mar 2018 B2
9918017 Rivard et al. Mar 2018 B2
9998721 Rivard et al. Jun 2018 B2
10007842 Hu Jun 2018 B2
10055646 Bataller et al. Aug 2018 B2
10109107 Knorr et al. Oct 2018 B2
10110870 Rivard et al. Oct 2018 B2
10129514 Rivard et al. Nov 2018 B2
10178300 Rivard et al. Jan 2019 B2
10182197 Feder et al. Jan 2019 B2
10270958 Rivard et al. Apr 2019 B2
10372971 Rivard et al. Aug 2019 B2
10375369 Rivard et al. Aug 2019 B2
10382702 Rivard et al. Aug 2019 B2
10469714 Rivard et al. Nov 2019 B2
10477077 Rivard et al. Nov 2019 B2
10498982 Feder et al. Dec 2019 B2
10558848 Rivard et al. Feb 2020 B2
10586097 Rivard et al. Mar 2020 B2
10630903 Srivastava et al. Apr 2020 B2
10652478 Rivard et al. May 2020 B2
10785401 Rivard et al. Sep 2020 B2
10904505 Rivard et al. Jan 2021 B2
10924688 Rivard et al. Feb 2021 B2
10931897 Feder et al. Feb 2021 B2
11025831 Rivard et al. Jun 2021 B2
11356647 Rivard Jun 2022 B2
11375085 Rivard et al. Jun 2022 B2
11394894 Rivard et al. Jul 2022 B2
11455829 Rivard et al. Sep 2022 B2
11463630 Rivard et al. Oct 2022 B2
20020070945 Kage Jun 2002 A1
20030015645 Brickell et al. Jan 2003 A1
20030142745 Osawa Jul 2003 A1
20030179911 Ho et al. Sep 2003 A1
20030184660 Skow Oct 2003 A1
20040027471 Koseki et al. Feb 2004 A1
20040181375 Szu et al. Sep 2004 A1
20040184677 Raskar et al. Sep 2004 A1
20040228528 Lao Nov 2004 A1
20040247177 Rowe et al. Dec 2004 A1
20040252199 Cheung et al. Dec 2004 A1
20040263510 Marschner et al. Dec 2004 A1
20050088570 Seo Apr 2005 A1
20050134723 Lee et al. Jun 2005 A1
20050147292 Huang et al. Jul 2005 A1
20050180657 Zhang et al. Aug 2005 A1
20050196069 Yonaha Sep 2005 A1
20060007346 Nakamura et al. Jan 2006 A1
20060015308 Marschner et al. Jan 2006 A1
20060050165 Amano Mar 2006 A1
20060087702 Satoh et al. Apr 2006 A1
20060115157 Mori et al. Jun 2006 A1
20060177150 Uyttendaele et al. Aug 2006 A1
20060181614 Yen et al. Aug 2006 A1
20060192785 Marschner et al. Aug 2006 A1
20060245014 Haneda Nov 2006 A1
20060245639 Jiang et al. Nov 2006 A1
20060280343 Lee et al. Dec 2006 A1
20070023798 McKee Feb 2007 A1
20070025714 Shiraki Feb 2007 A1
20070025717 Raskar et al. Feb 2007 A1
20070030357 Levien et al. Feb 2007 A1
20070052838 Zhang Mar 2007 A1
20070110305 Corcoran et al. May 2007 A1
20070122034 Maor May 2007 A1
20070182823 Maruyama et al. Aug 2007 A1
20070200663 White et al. Aug 2007 A1
20070242900 Chen et al. Oct 2007 A1
20070248342 Tamminen et al. Oct 2007 A1
20070263106 Tanaka et al. Nov 2007 A1
20070280505 Breed Dec 2007 A1
20080018763 Sato Jan 2008 A1
20080019575 Scalise et al. Jan 2008 A1
20080025576 Li et al. Jan 2008 A1
20080030592 Border et al. Feb 2008 A1
20080107411 Hope May 2008 A1
20080151097 Chen et al. Jun 2008 A1
20080158398 Yaffe et al. Jul 2008 A1
20080170160 Lukac Jul 2008 A1
20080192064 Hong et al. Aug 2008 A1
20080218611 Parulski et al. Sep 2008 A1
20080310753 Edgar Dec 2008 A1
20090002475 Jelley et al. Jan 2009 A1
20090052748 Jiang et al. Feb 2009 A1
20090060379 Manabe Mar 2009 A1
20090066782 Choi et al. Mar 2009 A1
20090080713 Bigioi et al. Mar 2009 A1
20090141149 Zhang Jun 2009 A1
20090153245 Lee Jun 2009 A1
20090160992 Inaba et al. Jun 2009 A1
20090175555 Mahowald Jul 2009 A1
20090238419 Steinberg et al. Sep 2009 A1
20090278922 Tinker et al. Nov 2009 A1
20090295941 Nakajima et al. Dec 2009 A1
20090309990 Levoy et al. Dec 2009 A1
20090309994 Inoue Dec 2009 A1
20090322903 Hashimoto et al. Dec 2009 A1
20100026836 Sugimoto Feb 2010 A1
20100066822 Steinberg et al. Mar 2010 A1
20100073499 Gere Mar 2010 A1
20100118204 Proca et al. May 2010 A1
20100160049 Oku Jun 2010 A1
20100165178 Chou et al. Jul 2010 A1
20100165181 Murakami et al. Jul 2010 A1
20100172578 Reid et al. Jul 2010 A1
20100172579 Reid et al. Jul 2010 A1
20100182465 Okita Jul 2010 A1
20100194851 Pasupaleti Aug 2010 A1
20100194963 Terashima Aug 2010 A1
20100201831 Weinstein Aug 2010 A1
20100201846 Silverbrook Aug 2010 A1
20100208099 Nomura Aug 2010 A1
20100215259 Scalise et al. Aug 2010 A1
20100220933 Takano et al. Sep 2010 A1
20100231747 Yim Sep 2010 A1
20100265079 Zin Oct 2010 A1
20100302407 Ayers et al. Dec 2010 A1
20100328442 Yang et al. Dec 2010 A1
20110013043 Corcoran et al. Jan 2011 A1
20110019051 Yin et al. Jan 2011 A1
20110058060 Bigioi et al. Mar 2011 A1
20110090385 Aoyama et al. Apr 2011 A1
20110096192 Niikura Apr 2011 A1
20110115893 Hayashi May 2011 A1
20110115971 Furuya et al. May 2011 A1
20110134267 Ohya Jun 2011 A1
20110150332 Sibiryakov et al. Jun 2011 A1
20110194618 Gish et al. Aug 2011 A1
20110221911 Kang Sep 2011 A1
20110242334 Wilburn et al. Oct 2011 A1
20110279698 Yoshikawa Nov 2011 A1
20110280541 Lee Nov 2011 A1
20110292242 Imai Dec 2011 A1
20110311150 Okamoto Dec 2011 A1
20110317917 Free Dec 2011 A1
20120002082 Johnson et al. Jan 2012 A1
20120002089 Wang et al. Jan 2012 A1
20120008011 Garcia Manchado Jan 2012 A1
20120033118 Lee et al. Feb 2012 A1
20120057786 Yano Mar 2012 A1
20120069213 Jannard et al. Mar 2012 A1
20120075492 Nanu et al. Mar 2012 A1
20120105579 Jeon et al. May 2012 A1
20120105584 Gallagher et al. May 2012 A1
20120120304 Corcoran et al. May 2012 A1
20120127333 Maruyama et al. May 2012 A1
20120154541 Scott Jun 2012 A1
20120154627 Rivard et al. Jun 2012 A1
20120162465 Culbert et al. Jun 2012 A1
20120177352 Pillman et al. Jul 2012 A1
20120188392 Smith Jul 2012 A1
20120206582 DiCarlo et al. Aug 2012 A1
20120212661 Yamaguchi et al. Aug 2012 A1
20120224788 Jia et al. Sep 2012 A1
20120242844 Walker et al. Sep 2012 A1
20120242886 Kawarada Sep 2012 A1
20120262600 Velarde et al. Oct 2012 A1
20120274806 Mori Nov 2012 A1
20120287223 Zhang et al. Nov 2012 A1
20120314100 Frank Dec 2012 A1
20120314124 Kaizu et al. Dec 2012 A1
20130010075 Gallagher et al. Jan 2013 A1
20130010138 Bigioi et al. Jan 2013 A1
20130021447 Brisedoux et al. Jan 2013 A1
20130027580 Olsen et al. Jan 2013 A1
20130050460 Steinberg et al. Feb 2013 A1
20130050520 Takeuchi Feb 2013 A1
20130070145 Matsuyama Mar 2013 A1
20130107062 Okazaki May 2013 A1
20130114853 Sengupta et al. May 2013 A1
20130114894 Yadav et al. May 2013 A1
20130129209 Reid et al. May 2013 A1
20130140435 Kikuchi Jun 2013 A1
20130147979 McMahon et al. Jun 2013 A1
20130148013 Shiohara Jun 2013 A1
20130176458 Van Dalen et al. Jul 2013 A1
20130194963 Hampel Aug 2013 A1
20130223530 Demos Aug 2013 A1
20130228673 Hashimoto et al. Sep 2013 A1
20130235068 Ubillos et al. Sep 2013 A1
20130251202 Auberger et al. Sep 2013 A1
20130258118 Felt Oct 2013 A1
20130271631 Tatsuzawa et al. Oct 2013 A1
20130278798 Hattori Oct 2013 A1
20130279584 Demos Oct 2013 A1
20130293744 Attar et al. Nov 2013 A1
20130294688 Auberger et al. Nov 2013 A1
20130301729 Demos Nov 2013 A1
20130301885 Mori et al. Nov 2013 A1
20130307999 Motta Nov 2013 A1
20130335596 Demandolx et al. Dec 2013 A1
20130342526 Ng et al. Dec 2013 A1
20130342740 Govindarao Dec 2013 A1
20140009636 Lee et al. Jan 2014 A1
20140063287 Yamada Mar 2014 A1
20140063301 Solhusvik Mar 2014 A1
20140098248 Okazaki Apr 2014 A1
20140125856 Kim et al. May 2014 A1
20140168468 Levoy et al. Jun 2014 A1
20140176757 Rivard et al. Jun 2014 A1
20140184894 Motta Jul 2014 A1
20140192216 Matsumoto Jul 2014 A1
20140192267 Biswas et al. Jul 2014 A1
20140193088 Capata et al. Jul 2014 A1
20140198242 Weng et al. Jul 2014 A1
20140211852 Demos Jul 2014 A1
20140219517 Mishra et al. Aug 2014 A1
20140219526 Linguraru et al. Aug 2014 A1
20140244858 Okazaki Aug 2014 A1
20140247870 Mertens Sep 2014 A1
20140247979 Roffet et al. Sep 2014 A1
20140267869 Sawa Sep 2014 A1
20140300795 Bilcu et al. Oct 2014 A1
20140301642 Muninder Oct 2014 A1
20140310788 Ricci Oct 2014 A1
20140354781 Matsuyama Dec 2014 A1
20140364241 Oku Dec 2014 A1
20150005637 Stegman et al. Jan 2015 A1
20150016693 Gattuso Jan 2015 A1
20150055835 Ozawa Feb 2015 A1
20150067600 Steinberg et al. Mar 2015 A1
20150077581 Baltz et al. Mar 2015 A1
20150092852 Demos Apr 2015 A1
20150098651 Rivard et al. Apr 2015 A1
20150103192 Venkatraman et al. Apr 2015 A1
20150138366 Keelan et al. May 2015 A1
20150146079 Kim May 2015 A1
20150222809 Osuka et al. Aug 2015 A1
20150229819 Rivard et al. Aug 2015 A1
20150229898 Rivard et al. Aug 2015 A1
20150279113 Knorr et al. Oct 2015 A1
20150288870 Nagaraja et al. Oct 2015 A1
20150310261 Lee et al. Oct 2015 A1
20150334318 Georgiev et al. Nov 2015 A1
20150341593 Zhang et al. Nov 2015 A1
20160006949 Kim et al. Jan 2016 A1
20160028948 Omori et al. Jan 2016 A1
20160057348 Liang et al. Feb 2016 A1
20160065926 Nonaka et al. Mar 2016 A1
20160071289 Kobayashi et al. Mar 2016 A1
20160086318 Hannuksela et al. Mar 2016 A1
20160142610 Rivard et al. May 2016 A1
20160150147 Shioya May 2016 A1
20160150175 Hynecek May 2016 A1
20160157587 Yamanashi et al. Jun 2016 A1
20160219211 Katayama Jul 2016 A1
20160248968 Baldwin Aug 2016 A1
20160284065 Cohen Sep 2016 A1
20160316154 Elmfors et al. Oct 2016 A1
20160323518 Rivard et al. Nov 2016 A1
20160350587 Bataller et al. Dec 2016 A1
20160352996 Qian et al. Dec 2016 A1
20160366331 Barron et al. Dec 2016 A1
20160381304 Feder et al. Dec 2016 A9
20170032181 Hu Feb 2017 A1
20170048442 Cote et al. Feb 2017 A1
20170054966 Zhou et al. Feb 2017 A1
20170061234 Lim et al. Mar 2017 A1
20170061236 Pope Mar 2017 A1
20170061567 Lim et al. Mar 2017 A1
20170064192 Mori Mar 2017 A1
20170064227 Lin et al. Mar 2017 A1
20170064276 Rivard et al. Mar 2017 A1
20170068846 Linguraru et al. Mar 2017 A1
20170070690 Feder et al. Mar 2017 A1
20170076430 Xu Mar 2017 A1
20170085785 Corcoran et al. Mar 2017 A1
20170109931 Knorr et al. Apr 2017 A1
20170118394 Van Hoeckel et al. Apr 2017 A1
20170169303 Connell, II et al. Jun 2017 A1
20170187938 Ichihara Jun 2017 A1
20170201677 Otani Jul 2017 A1
20170228583 Lee et al. Aug 2017 A1
20170262695 Ahmed Sep 2017 A1
20170286752 Gusarov et al. Oct 2017 A1
20170302903 Ng et al. Oct 2017 A1
20170337440 Green et al. Nov 2017 A1
20170364752 Zhou et al. Dec 2017 A1
20170372108 Corcoran et al. Dec 2017 A1
20170374336 Rivard et al. Dec 2017 A1
20180007240 Rivard et al. Jan 2018 A1
20180020156 Zobel Jan 2018 A1
20180025218 Steinberg et al. Jan 2018 A1
20180025244 Bohl et al. Jan 2018 A1
20180063409 Rivard et al. Mar 2018 A1
20180063411 Rivard et al. Mar 2018 A1
20180074495 Myers et al. Mar 2018 A1
20180075637 Henry et al. Mar 2018 A1
20180077367 Feder et al. Mar 2018 A1
20180160092 Rivard et al. Jun 2018 A1
20180183989 Rivard et al. Jun 2018 A1
20180288311 Baghert et al. Oct 2018 A1
20190012525 Wang et al. Jan 2019 A1
20190031145 Trelin Jan 2019 A1
20190045165 Rivard et al. Feb 2019 A1
20190057554 Knorr et al. Feb 2019 A1
20190108387 Rivard et al. Apr 2019 A1
20190108388 Rivard et al. Apr 2019 A1
20190116306 Rivard et al. Apr 2019 A1
20190124280 Feder et al. Apr 2019 A1
20190174028 Rivard et al. Jun 2019 A1
20190179594 Alameh et al. Jun 2019 A1
20190197297 Rivard et al. Jun 2019 A1
20190197330 Mahmoud et al. Jun 2019 A1
20190222769 Srivastava et al. Jul 2019 A1
20190222807 Rivard et al. Jul 2019 A1
20190263415 Gong Aug 2019 A1
20190335151 Rivard et al. Oct 2019 A1
20190349510 Rivard et al. Nov 2019 A1
20200029008 Rivard et al. Jan 2020 A1
20200059575 Rivard et al. Feb 2020 A1
20200084398 Feder et al. Mar 2020 A1
20200193144 Rivard et al. Jun 2020 A1
20200259991 Rivard et al. Aug 2020 A1
20210001810 Rivard et al. Jan 2021 A1
20210037178 Rivard et al. Feb 2021 A1
20210274142 Rivard et al. Sep 2021 A1
20210314507 Feder et al. Oct 2021 A1
20210337104 Rivard et al. Oct 2021 A1
20210360141 Rivard et al. Nov 2021 A1
20230005294 Rivard et al. Jan 2023 A1
Foreign Referenced Citations (75)
Number Date Country
101290388 Oct 2008 CN
101408709 Apr 2009 CN
102053453 May 2011 CN
102165783 Aug 2011 CN
103152519 Jun 2013 CN
103813098 May 2014 CN
204316606 May 2015 CN
105026955 Nov 2015 CN
102011107844 Jan 2013 DE
2169946 Mar 2010 EP
2346079 Jul 2011 EP
2565843 Mar 2013 EP
2731326 May 2014 EP
2486878 Jul 2012 GB
2487943 Aug 2012 GB
H09-200617 Jul 1997 JP
2000278532 Oct 2000 JP
2001245213 Sep 2001 JP
2002112008 Apr 2002 JP
2003101886 Apr 2003 JP
2003299067 Oct 2003 JP
2004247983 Sep 2004 JP
2004248061 Sep 2004 JP
2004326119 Nov 2004 JP
2004328532 Nov 2004 JP
2006080752 Mar 2006 JP
2006121612 May 2006 JP
2006311311 Nov 2006 JP
2007035028 Feb 2007 JP
2008177738 Jul 2008 JP
2008187615 Aug 2008 JP
2008236726 Oct 2008 JP
2009267923 Nov 2009 JP
2009303010 Dec 2009 JP
2010016416 Jan 2010 JP
2010512049 Apr 2010 JP
2010136224 Jun 2010 JP
2010157925 Jul 2010 JP
2010166281 Jul 2010 JP
2010239317 Oct 2010 JP
4649623 Mar 2011 JP
2011097141 May 2011 JP
2011101180 May 2011 JP
2011120087 Jun 2011 JP
2011120094 Jun 2011 JP
2011146957 Jul 2011 JP
2012080196 Apr 2012 JP
2012156885 Aug 2012 JP
2012195660 Oct 2012 JP
2012213137 Nov 2012 JP
2013026734 Feb 2013 JP
2013055610 Mar 2013 JP
2013066142 Apr 2013 JP
2013093875 May 2013 JP
2013120254 Jun 2013 JP
2013207327 Oct 2013 JP
2013219708 Oct 2013 JP
2013258444 Dec 2013 JP
2013258510 Dec 2013 JP
2014057256 Mar 2014 JP
2014140246 Jul 2014 JP
2014140247 Jul 2014 JP
2014142836 Aug 2014 JP
2014155033 Aug 2014 JP
20100094200 Aug 2010 KR
2008010559 Jan 2008 NO
9746001 Dec 1997 WO
0237830 May 2002 WO
2004064391 Jul 2004 WO
2009074938 Jun 2009 WO
2009074938 Aug 2009 WO
2014172059 Oct 2014 WO
2015120873 Aug 2015 WO
2015123455 Aug 2015 WO
2015173565 Nov 2015 WO
Non-Patent Literature Citations (251)
Entry
Rivard et al., U.S. Appl. No. 17/857,906, filed Jul. 5, 2022.
Office Action from Japanese Patent Application No. 2021-076679, dated Aug. 2, 2022.
Office Action from Japanese Patent Application No. 2021-079285, dated Aug. 2, 2022.
Non-Final Office Action for U.S. Appl. No. 17/171,800, dated Aug. 18, 2022.
Final Office Action for U.S. Appl. No. 17/000,098, dated Aug. 25, 2022.
Office Action from Japanese Patent Application No. 2021-096499, dated Sep. 6, 2022.
Office Action from Japanese Patent Application No. 2021-154653, dated Sep. 13, 2022.
Office Action from Chinese Patent Application No. 202110773625.1, dated Nov. 2, 2022.
Office Action from Chinese Patent Application No. 201780053926.9, dated Jan. 16, 2020.
Rivard et al., U.S. Appl. No. 16/796,497, filed Feb. 20, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/519,244, dated Feb. 20, 2020.
Extended European Search Report from European Application No. 17821236.1, dated Jan. 24, 2020.
Petschnigg et al., “Digital Photography with Flash and No-Flash Image Pairs,” ACM Transactions of Graphics, vol. 23, Aug. 2004, pp. 664-672.
Corrected Notice of Allowance from U.S. Appl. No. 16/519,244, dated Apr. 9, 2020.
Rivard et al., U.S. Appl. No. 16/857,016, filed Apr. 23, 2020.
International Preliminary Examination Report from PCT Application No. PCT/US2018/054014, dated Apr. 16, 2020.
Office Action from Chinese Patent Application No. 201680088945.0, dated May 21, 2020.
Notice of Allowance from U.S. Appl. No. 16/213,041, dated May 29, 2020.
Supplemental Notice of Allowance from U.S. Appl. No. 16/213,041, dated Jun. 17, 2020.
Non-Final Office Action for U.S. Appl. No. 16/857,016, dated Aug. 5, 2020.
Rivard, W. et al., U.S. Appl. No. 17/000,098, filed Aug. 21, 2020.
Office Action from Japanese Patent Application No. 2017-544284, dated Aug. 18, 2020.
International Search Report and Written Opinion from PCT Application No. PCT/US2020/040478, dated Sep. 25, 2020.
Notice of Allowance from U.S. Appl. No. 16/505,278, dated Sep. 25, 2020.
Supplemental Notice of Allowance from U.S. Appl. No. 16/213,041, dated Aug. 31, 2020.
Summons to Attend Oral Proceedings from European Application No. 15 856 710.7, dated Sep. 18, 2020.
Notice of Allowance from U.S. Appl. No. 16/584,486, dated Oct. 21, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/505,278, dated Oct. 22, 2020.
Notice of Allowance from U.S. Appl. No. 16/684,389, dated Oct. 29, 2020.
Office Action from Japanese Patent Application No. 2017-544281, dated Oct. 27, 2020.
Office Action from Chinese Patent Application No. 201780053926.9, dated Oct. 13, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/584,486, dated Nov. 18, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/505,278, dated Nov. 18, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/684,389, dated Nov. 27, 2020.
Office Action from Japanese Patent Application No. 2017-544280, dated Jun. 30, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/505,278, dated Dec. 24, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/584,486, dated Dec. 24, 2020.
Second Office Action from Chinese Patent Application No. 201680088945.0, dated Dec. 17, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/684,389, dated Dec. 23, 2020.
Rivard et al., U.S. Appl. No. 17/144,915, filed Jan. 8, 2021.
Notice of Allowance from U.S. Appl. No. 16/857,016, dated Jan. 27, 2021.
Office Action from Japanese Patent Application No. 2017-544282, dated Jan. 5, 2021.
Office Action from Japanese Patent Application No. 2017-544283, dated Jan. 12, 2021.
Non-Final Office Action for U.S. Appl. No. 16/460,807, dated Aug. 20, 2020.
Corrected Notice of Allowance from U.S. Appl. No. 16/857,016, dated Feb. 16, 2021.
Feder et al., U.S. Appl. No. 17/171,800, filed Feb. 9, 2021.
Rivard et al., U.S. Appl. No. 17/163,086, filed Jan. 29, 2021.
Non-Final Office Action for U.S. Appl. No. 16/662,965, dated Mar. 22, 2021.
Decision to Refuse from European Application No. 15856710.7, dated Mar. 15, 2021.
Examination Report from Indian Application No. 201827049041, dated Mar. 19, 2021.
Decision to Refuse from European Application No. 15856212.4, dated Mar. 22, 2021.
Final Office Action for U.S. Appl. No. 16/460,807, dated Mar. 1, 2021.
Corrected Notice of Allowance from U.S. Appl. No. 16/857,016, dated Apr. 13, 2021.
Examination Report from European Application No. 16915389.7, dated Feb. 25, 2021.
Examination Report from European Application No. 15857386.5, dated Feb. 8, 2021.
Rivard et al., U.S. Appl. No. 17/321,166, filed May 14, 2021.
Non-Final Office Action for U.S. Appl. No. 16/931,286, dated May 11, 2021.
Decision of Rejection and Decision of Dismissal of Amendment for Japanese Application No. 2017-544280, dated May 25, 2021.
Summons to Attend Oral Proceedings from European Application No. 16 915 389.7, dated Oct. 21, 2022.
Final Office Action for U.S. Appl. No. 17/321,166, dated Dec. 9, 2022.
Srivastava et al., U.S. Appl. No. 15/870,689, filed Jan. 12, 2018.
Non-Final Office Action from U.S. Appl. No. 15/870,689, dated Mar. 17, 2019.
Final Office Action from U.S. Appl. No. 15/870,689, dated Sep. 5, 2019.
Advisory Action from U.S. Appl. No. 15/870,689, dated Nov. 14, 2019.
Notice of Allowance from U.S. Appl. No. 15/870,689, dated Dec. 17, 2019.
Examination Report from Indian Application No. 202027018945, dated Mar. 17, 2022.
Non-Final Office Action for U.S. Appl. No. 17/321,166, dated Apr. 25, 2022.
Rivard et al., U.S. Appl. No. 17/745,668, filed May 16, 2022.
Rivard et al., U.S. Appl. No. 17/749,919, filed May 20, 2022.
Notice of Allowance from U.S. Appl. No. 16/796,497, dated May 26, 2022.
Rivard et al., U.S. Appl. No. 17/835,823, filed Jun. 8, 2022.
Notice of Allowance from U.S. Appl. No. 17/163,086, dated Mar. 21, 2022.
Examination Report from Indian Application No. 201927010939, dated Mar. 25, 2022.
Non-Final Office Action from U.S. Appl. No. 15/354,935, dated Feb. 8, 2017.
Non-Final Office Action from U.S. Appl. No. 13/999,678, dated Dec. 20, 2016.
Wan et al., "CMOS Image Sensors With Multi-Bucket Pixels for Computational Photography," IEEE Journal of Solid-State Circuits, vol. 47, No. 4, Apr. 2012, pp. 1031-1042.
Notice of Allowance from U.S. Appl. No. 15/201,283, dated Mar. 23, 2017.
Chatterjee et al., “Clustering-Based Denoising With Locally Learned Dictionaries,” IEEE Transactions on Image Processing, vol. 18, No. 7, Jul. 2009, pp. 1-14.
Burger et al., “Image denoising: Can plain Neural Networks compete with BM3D?,” Computer Vision and Pattern Recognition (CVPR), IEEE, 2012, pp. 4321-4328.
Kervrann et al., "Optimal Spatial Adaptation for Patch-Based Image Denoising," IEEE Transactions on Image Processing, vol. 15, No. 10, Oct. 2006, pp. 2866-2878.
Foi et al., “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data,” IEEE Transactions, 2007, pp. 1-18.
International Search Report and Written Opinion from PCT Application No. PCT/US17/39946, dated Sep. 25, 2017.
Notice of Allowance from U.S. Appl. No. 15/201,283, dated Jul. 19, 2017.
Notice of Allowance from U.S. Appl. No. 15/354,935, dated Aug. 23, 2017.
Notice of Allowance from U.S. Appl. No. 14/823,993, dated Oct. 31, 2017.
Notice of Allowance from U.S. Appl. No. 15/352,510, dated Oct. 17, 2017.
European Office Communication and Exam Report from European Application No. 15856814.7, dated Dec. 14, 2017.
Supplemental Notice of Allowance from U.S. Appl. No. 15/354,935, dated Dec. 1, 2017.
European Office Communication and Exam Report from European Application No. 15856267.8, dated Dec. 12, 2017.
European Office Communication and Exam Report from European Application No. 15856710.7, dated Dec. 21, 2017.
European Office Communication and Exam Report from European Application No. 15857675.1, dated Dec. 21, 2017.
European Office Communication and Exam Report from European Application No. 15856212.4, dated Dec. 15, 2017.
Non-Final Office Action from U.S. Appl. No. 15/254,964, dated Jan. 3, 2018.
Non-Final Office Action from U.S. Appl. No. 15/643,311, dated Jan. 4, 2018.
European Office Communication and Exam Report from European Application No. 15857386.5, dated Jan. 11, 2018.
Kim et al., “A CMOS Image Sensor Based on Unified Pixel Architecture With Time-Division Multiplexing Scheme for Color and Depth Image Acquisition,” IEEE Journal of Solid-State Circuits, vol. 47, No. 11, Nov. 2012, pp. 2834-2845.
European Office Communication and Exam Report from European Application No. 15857748.6, dated Jan. 10, 2018.
Non-Final Office Action from U.S. Appl. No. 15/814,238, dated Feb. 8, 2018.
Non-Final Office Action for U.S. Appl. No. 15/687,278, dated Apr. 13, 2018.
Non-Final Office Action from U.S. Appl. No. 15/836,655, dated Apr. 6, 2018.
Notice of Allowance from U.S. Appl. No. 15/836,655, dated Apr. 30, 2018.
Rivard, W. et al., U.S. Appl. No. 15/891,251, filed Feb. 7, 2018.
Extended European Search Report from European Application No. 15891394.7 dated Jun. 19, 2018.
Non-Final Office Action for U.S. Appl. No. 15/885,296, dated Jun. 4, 2018.
Non-Final Office Action for U.S. Appl. No. 15/891,251, dated May 31, 2018.
Notice of Allowance from U.S. Appl. No. 15/687,278, dated Aug. 24, 2018.
Final Office Action for U.S. Appl. No. 15/643,311 dated Jul. 24, 2018.
Notice of Allowance for U.S. Appl. No. 15/885,296 dated Sep. 21, 2018.
Final Office Action for U.S. Appl. No. 15/254,964 dated Jul. 24, 2018.
Notice of Allowance for U.S. Appl. No. 15/814,238 dated Oct. 4, 2018.
Corrected Notice of Allowance for U.S. Appl. No. 15/885,296 dated Oct. 16, 2018.
Rivard et al., U.S. Appl. No. 16/154,999, filed Oct. 9, 2018.
Non-Final Office Action for U.S. Appl. No. 15/636,324, dated Oct. 18, 2018.
Notice of Allowance for U.S. Appl. No. 15/643,311, dated Oct. 31, 2018.
Corrected Notice of Allowance for U.S. Appl. No. 15/814,238 dated Nov. 13, 2018.
Final Office Action for U.S. Appl. No. 15/891,251, dated Nov. 29, 2018.
Rivard et al., U.S. Appl. No. 16/215,351, filed Dec. 10, 2018.
Rivard et al., U.S. Appl. No. 16/213,041, filed Dec. 7, 2018.
Non-Final Office Action for U.S. Appl. No. 16/154,999, dated Dec. 20, 2018.
Notice of Allowance for U.S. Appl. No. 15/254,964, dated Dec. 21, 2018.
Supplemental Notice of Allowance for U.S. Appl. No. 15/643,311, dated Dec. 11, 2018.
Feder et al., U.S. Appl. No. 16/217,848, filed Dec. 12, 2018.
International Preliminary Examination Report from PCT Application No. PCT/US2017/39946, dated Jan. 10, 2019.
International Search Report and Written Opinion from International Application No. PCT/US18/54014, dated Dec. 26, 2018.
Non-Final Office Action from U.S. Appl. No. 16/215,351, dated Jan. 24, 2019.
Supplemental Notice of Allowance for U.S. Appl. No. 15/254,964, dated Feb. 1, 2019.
Rivard et al., U.S. Appl. No. 16/290,763, filed Mar. 1, 2019.
Supplemental Notice of Allowance for U.S. Appl. No. 15/254,964, dated Mar. 11, 2019.
Rivard et al., U.S. Appl. No. 15/976,756, filed May 10, 2018.
Final Office Action for U.S. Appl. No. 15/636,324, dated Mar. 22, 2019.
Non-Final Office Action from U.S. Appl. No. 16/271,604, dated Apr. 5, 2019.
Notice of Allowance from U.S. Appl. No. 16/215,351, dated Apr. 1, 2019.
Rivard et al., U.S. Appl. No. 16/271,604, filed Feb. 8, 2019.
Non-Final Office Action for U.S. Appl. No. 15/636,324, dated Apr. 18, 2019.
Notice of Allowance from U.S. Appl. No. 15/891,251, dated May 7, 2019.
Notice of Allowance from U.S. Appl. No. 16/154,999, dated Jun. 7, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 15/891,251, dated Jul. 3, 2019.
Notice of Allowance from U.S. Appl. No. 15/636,324, dated Jul. 2, 2019.
Notice of Allowance from U.S. Appl. No. 16/271,604, dated Jul. 2, 2019.
Non-Final Office Action for U.S. Appl. No. 15/976,756, dated Jun. 27, 2019.
Non-Final Office Action for U.S. Appl. No. 16/290,763, dated Jun. 26, 2019.
Rivard et al., U.S. Appl. No. 16/505,278, filed Jul. 8, 2019.
Rivard et al., U.S. Appl. No. 16/519,244, filed Jul. 23, 2019.
Notice of Allowance from U.S. Appl. No. 16/217,848, dated Jul. 31, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 16/271,604, dated Aug. 8, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 15/636,324, dated Aug. 20, 2019.
Office Action from Chinese Patent Application No. 201580079444.1, dated Aug. 1, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 15/636,324, dated Sep. 5, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 16/271,604, dated Sep. 19, 2019.
Non-Final Office Action for U.S. Appl. No. 16/519,244, dated Sep. 23, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 16/217,848, dated Sep. 24, 2019.
Examination Report from European Application No. 15 856 814.7, dated Aug. 20, 2019.
Examination Report from European Application No. 15 857 675.1, dated Aug. 23, 2019.
Examination Report from European Application No. 15 856 710.7, dated Sep. 9, 2019.
Examination Report from European Application No. 15 857 386.5, dated Sep. 17, 2019.
Examination Report from European Application No. 15 857 748.6, dated Sep. 26, 2019.
Rivard et al., U.S. Appl. No. 16/584,486, filed Sep. 26, 2019.
Notice of Allowance from U.S. Appl. No. 15/976,756, dated Oct. 4, 2019.
Notice of Allowance from U.S. Appl. No. 16/290,763, dated Oct. 10, 2019.
Corrected Notice of Allowance from U.S. Appl. No. 16/217,848, dated Oct. 31, 2019.
Non-Final Office Action for U.S. Appl. No. 16/213,041, dated Oct. 30, 2019.
Office Action from Japanese Patent Application No. 2017-544279, dated Oct. 23, 2019.
Office Action from Japanese Patent Application No. 2017-544280, dated Oct. 29, 2019.
Office Action from Japanese Patent Application No. 2017-544283, dated Oct. 29, 2019.
Office Action from Japanese Patent Application No. 2017-544547, dated Nov. 5, 2019.
Rivard et al., U.S. Appl. No. 16/662,965, filed Oct. 24, 2019.
Office Action from Japanese Patent Application No. 2017-544281, dated Nov. 26, 2019.
Extended European Search Report from European Application No. 16915389.7, dated Dec. 2, 2019.
Office Action from Japanese Patent Application No. 2017-544284, dated Dec. 10, 2019.
Feder et al., U.S. Appl. No. 16/684,389, filed Nov. 14, 2019.
Non-Final Office Action for U.S. Appl. No. 16/505,278, dated Jan. 10, 2020.
Notice of Allowance from U.S. Appl. No. 16/519,244, dated Jan. 14, 2020.
Office Action from Japanese Patent Application No. 2017-544282, dated Jan. 7, 2020.
Extended European Search Report from European Application No. 18864431.4, dated Jun. 1, 2021.
Kaufman et al., "Content-Aware Automatic Photo Enhancement," Computer Graphics Forum, vol. 31, No. 08, 2012, pp. 2528-2540.
Battiato et al., “Automatic Image Enhancement by Content Dependent Exposure Correction,” EURASIP Journal on Applied Signal Processing, 2004, pp. 1849-1860.
Mangiat et al., “Automatic scene relighting for video conferencing,” IEEE 16th Annual International Conference on Image Processing (ICIP), Nov. 2009, pp. 2781-2784.
Weyrich et al., “Analysis of human faces using a measurement-based skin reflectance model,” ACM Transactions on Graphics, vol. 25, No. 3, 2006, pp. 1013-1024.
Extended European Search Report from European Application No. 21169039.1, dated Jun. 16, 2021.
Office Action from Japanese Patent Application No. 2017-544284, dated Jul. 13, 2021.
Office Action from Japanese Patent Application No. 2017-544284, dated Jul. 3, 2021.
Non-Final Office Action for U.S. Appl. No. 17/144,915, dated Aug. 13, 2021.
Final Office Action for U.S. Appl. No. 16/662,965, dated Sep. 3, 2021.
Non-Final Office Action from U.S. Appl. No. 16/460,807, dated Aug. 30, 2021.
Office Action from Chinese Patent Application No. 202010904659.5, dated Jul. 28, 2021.
Extended European Search Report from European Application No. 21175832.1, dated Aug. 27, 2021.
Examination Report from Indian Application No. 201927010939, dated Jun. 9, 2021.
Non-Final Office Action for U.S. Appl. No. 17/163,086, dated Oct. 13, 2021.
Summons to Attend Oral Proceedings from European Application No. 15856267.8, dated Sep. 3, 2021.
Photoshop, "Photoshop Help/Levels adjustment," Photoshop Help, retrieved on Aug. 20, 2021 from https://web.archive.org/web/20141018232619/http://helpx.adobe.com:80/photoshop/using/levels-adjustment.html, 3 pages.
Photoshop, “Photoshop Help/Levels adjustment,” Photoshop Help, 2014, retrieved on Aug. 20, 2021 from https://web.archive.org/web/20141018232619/http://helpx.adobe.com:80/photoshop/using/levels-adjustment.html, 3 pages.
Office Action from Japanese Patent Application No. 2020-121537, dated Oct. 19, 2021.
Non-Final Office Action for U.S. Appl. No. 17/000,098, dated Dec. 7, 2021.
Non-Final Office Action for U.S. Appl. No. 16/796,497, dated Dec. 8, 2021.
Final Office Action for U.S. Appl. No. 16/931,286, dated Dec. 29, 2021.
Extended European Search Report from European Application No. 21196442.4, dated Dec. 13, 2021.
Huo et al., “Robust Automatic White Balance algorithm using Gray Color Points in Images,” IEEE Transactions on Consumer Electronics, vol. 52, No. 2, May 2006, pp. 541-546.
Notice of Allowance from U.S. Appl. No. 17/144,915, dated Feb. 10, 2022.
Notice of Allowance from U.S. Appl. No. 16/662,965, dated Mar. 1, 2022.
Examination Report from European Application No. 15857386.5, dated Dec. 12, 2021.
Final Office Action from Japanese Patent Application No. 2017-544282, dated Mar. 1, 2022.
Rivard et al., U.S. Appl. No. 14/823,993, filed Aug. 11, 2015.
Rivard et al., U.S. Appl. No. 14/536,524, filed Nov. 7, 2014.
Notice of Allowance from U.S. Appl. No. 13/573,252, dated Oct. 22, 2014.
Non-Final Office Action from U.S. Appl. No. 13/573,252, dated Jul. 10, 2014.
Rivard, W. et al., U.S. Appl. No. 14/568,045, filed Dec. 11, 2014.
Restriction Requirement from U.S. Appl. No. 14/568,045, dated Jan. 15, 2015.
Rivard, W. et al., U.S. Appl. No. 14/534,068, filed Nov. 5, 2014.
Non-Final Office Action from U.S. Appl. No. 14/534,068, dated Feb. 17, 2015.
Feder et al., U.S. Appl. No. 13/999,678, filed Mar. 14, 2014.
Rivard, W. et al., U.S. Appl. No. 14/534,079, filed Nov. 5, 2014.
Non-Final Office Action from U.S. Appl. No. 14/534,079, dated Jan. 29, 2015.
Rivard, W. et al., U.S. Appl. No. 14/534,089, filed Nov. 5, 2014.
Non-Final Office Action from U.S. Appl. No. 14/534,089, dated Feb. 25, 2015.
Rivard, W. et al., U.S. Appl. No. 14/535,274, filed Nov. 6, 2014.
Non-Final Office Action from U.S. Appl. No. 14/535,274, dated Feb. 3, 2015.
Rivard, W. et al., U.S. Appl. No. 14/535,279, filed Nov. 6, 2014.
Non-Final Office Action from U.S. Appl. No. 14/535,279, dated Feb. 5, 2015.
Rivard, W. et al., U.S. Appl. No. 14/535,282, filed Nov. 6, 2014.
Non-Final Office Action from U.S. Appl. No. 14/535,282, dated Jan. 30, 2015.
Non-Final Office Action from U.S. Appl. No. 14/536,524, dated Mar. 3, 2015.
Rivard, W. et al., U.S. Appl. No. 14/536,524, filed Nov. 7, 2014.
Non-Final Office Action from U.S. Appl. No. 14/568,045, dated Mar. 24, 2015.
Rivard, W. et al., U.S. Appl. No. 14/702,549, filed May 1, 2015.
Notice of Allowance from U.S. Appl. No. 14/534,079, dated May 11, 2015.
Notice of Allowance from U.S. Appl. No. 14/535,274, dated May 26, 2015.
Notice of Allowance from U.S. Appl. No. 14/534,089, dated Jun. 23, 2015.
Notice of Allowance from U.S. Appl. No. 14/535,282, dated Jun. 23, 2015.
Notice of Allowance from U.S. Appl. No. 14/536,524, dated Jun. 29, 2015.
Notice of Allowance from U.S. Appl. No. 14/534,068, dated Jul. 29, 2015.
Notice of Allowance from U.S. Appl. No. 14/535,279, dated Aug. 31, 2015.
Final Office Action from U.S. Appl. No. 14/568,045, dated Sep. 18, 2015.
Non-Final Office Action from U.S. Appl. No. 13/999,678, dated Aug. 12, 2015.
International Search Report and Written Opinion from International Application No. PCT/US15/59348, dated Feb. 2, 2016.
International Search Report and Written Opinion from International Application No. PCT/US15/59097, dated Jan. 4, 2016.
Non-Final Office Action from U.S. Appl. No. 14/702,549, dated Jan. 25, 2016.
Final Office Action from U.S. Appl. No. 13/999,678, dated Mar. 28, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/060476, dated Feb. 10, 2016.
Notice of Allowance from U.S. Appl. No. 14/568,045, dated Apr. 26, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/058895, dated Apr. 11, 2016.
Notice of Allowance from U.S. Appl. No. 14/568,045, dated Jan. 12, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/059103, dated Dec. 21, 2015.
Final Office Action from U.S. Appl. No. 14/178,305, dated May 18, 2015.
Non-Final Office Action from U.S. Appl. No. 14/178,305, dated Aug. 11, 2014.
Non-Final Office Action from U.S. Appl. No. 14/823,993, dated Jul. 28, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/059105, dated Jul. 26, 2016.
Notice of Allowance from U.S. Appl. No. 14/702,549, dated Aug. 15, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/058896, dated Aug. 26, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/058891, dated Aug. 26, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2016/050011, dated Nov. 10, 2016.
Final Office Action from U.S. Appl. No. 14/823,993, dated Feb. 10, 2017.
Related Publications (1)
Number Date Country
20220343678 A1 Oct 2022 US
Provisional Applications (2)
Number Date Country
62599940 Dec 2017 US
62568553 Oct 2017 US
Continuations (4)
Number Date Country
Parent 16796497 Feb 2020 US
Child 17694458 US
Parent 16290763 Mar 2019 US
Child 16796497 US
Parent 16215351 Dec 2018 US
Child 16290763 US
Parent 15976756 May 2018 US
Child 16215351 US