This application also makes reference to U.S. patent application Ser. No. ______ (Attorney Docket #50229-5030), titled “Depth Map Generation and Post-Capture Focusing,” filed on even date herewith, the entire contents of which are hereby incorporated herein by reference.
Certain cameras, such as light-field or plenoptic cameras, rely upon a lens array over an image sensor and/or an array of image sensors to capture directional projection of light. Among other drawbacks, these approaches use relatively large and specialized image sensors which are generally unsuitable for other applications (e.g., video capture, video conferencing, etc.), use only a fraction of the information captured, and rely upon high levels of processing to deliver even a viewfinder image, for example. Further, some of these light-field or plenoptic camera devices require a relatively large height for specialized lens and/or sensor arrays and, thus, do not present practical solutions for use in cellular telephones.
For a more complete understanding of the embodiments and the advantages thereof, reference is now made to the following description, in conjunction with the accompanying figures briefly described as follows:
The drawings are provided by way of example and should not be considered limiting of the scope of the embodiments described herein, as other equally effective embodiments are within the scope and spirit of this disclosure. The elements and features shown in the drawings are not necessarily drawn to scale, emphasis instead being placed upon clearly illustrating the principles of the embodiments. Additionally, certain dimensions or positions of elements and features may be exaggerated to help visually convey certain principles. In the drawings, similar reference numerals among the figures generally designate like or corresponding, but not necessarily the same, elements.
In the following paragraphs, the embodiments are described in further detail by way of example with reference to the attached drawings. In the description, well known components, methods, and/or processing techniques are omitted or briefly described so as not to obscure the embodiments.
Certain cameras, such as light-field or plenoptic cameras, rely upon a lens array over an image sensor and/or an array of image sensors to capture directional projection of light. Among other drawbacks, these approaches use relatively large and specialized image sensors which are generally unsuitable for other applications (e.g., video capture, video conferencing, etc.), use only a fraction of the information captured, and rely upon high levels of processing to deliver even a viewfinder image, for example. Further, some of these light-field or plenoptic camera devices require a relatively large height for specialized lens and/or sensor arrays and, thus, do not present practical solutions for use in cellular telephones.
In this context, the embodiments described herein include a heterogeneous mix of sensors which may be relied upon to achieve, among other processing results, image processing results that are similar, at least in some aspects, to those achieved by light-field or plenoptic imaging devices. In various embodiments, the mix of sensors may be used for focusing and re-focusing images after the images are captured. In other embodiments, the mix of sensors may be used for object extraction, scene understanding, gesture recognition, etc. In other aspects, a mix of image sensors may be used for high dynamic range (HDR) image processing. Further, according to the embodiments described herein, the mix of image sensors may be calibrated for focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
In one embodiment, the heterogeneous mix of sensors includes a main color image sensor having a pixel density ranging from 3 to 20 Megapixels, for example, with color pixels arranged in a Bayer pattern, and a secondary luminance image sensor having a relatively lower pixel density. It should be appreciated, however, that the system is generally agnostic to the resolution and format of the main and secondary sensors, which may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
Turning now to the drawings, a description of exemplary embodiments of a system and its components is provided, followed by a discussion of the operation of the same.
In the example illustrated in
Here, it should be appreciated that the elements of the processing environment 100 may vary among embodiments, particularly depending upon the application for use of the heterogeneous mix of image sensors 150 and 152. In other words, depending upon whether the first and second sensors 150 and 152 are directed for use in focusing, re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc., the processing environment 100 may include additional or alternative processing elements or modules. Regardless of the application for use of the first and second sensors 150 and 152, the embodiments described herein are generally directed to calibrating operational aspects of the first and second sensors 150 and 152 and/or the image data captured by the first and second sensors 150 and 152. In this way, the first and second sensors 150 and 152 and the images captured by the sensors 150 and 152 can be used together.
The first and second sensors 150 and 152 may be embodied as any suitable types of sensors, depending upon the application for use of the system 10. For example, in image processing applications, the first and second sensors 150 and 152 may be embodied as image sensors having the same or different pixel densities, ranging from a fraction of 1 to 20 Megapixels, for example. The first image sensor 150 may be embodied as a color image sensor having a first pixel density, and the second image sensor 152 may be embodied as a luminance image sensor having a relatively lower pixel density. It should be appreciated, however, that the system 10 is generally agnostic to the resolution and format of the first and second sensors 150 and 152, which may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
The memory 110 may be embodied as any suitable memory that stores data provided by the first and second sensors 150 and 152, among other data, for example. In this context, the memory 110 may store image and image-related data for manipulation and processing by the processing environment 100. As noted above, the memory 110 includes memory areas for image data 112 and calibration characteristic data 114. Various aspects of processing and/or manipulation of the image data 112 by the processing environment 100, based, for example, upon the calibration characteristic data 114, are described in further detail below.
As illustrated in
As described herein, in one embodiment, the first and second sensors 150 and 152 may be embodied as sensors of varied operating and structural characteristics (i.e., a heterogeneous mix of sensors). The differences in operating characteristics may be identified during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences in operating characteristics may be identified during post-assembly calibration processes by the calibrator 122. These differences may be quantified as calibration data which is representative of the operating characteristics of the first and second sensors 150 and 152, and stored in the memory 110 as the calibration characteristic data 114.
Among other operational aspects, the device 160 is configured to capture images using the first and second sensors 150 and 152. Based on the processing techniques described herein, images captured by the first and second sensors 150 and 152 may be focused and re-focused after being captured. Additionally or alternatively, the images may be processed according to one or more HDR image processing techniques, for example, or for object extraction, scene understanding, gesture recognition, etc.
Here, it is noted that, before the first and second sensors 150 and 152 capture the first and second images 202 and 204, the calibrator 122 may adapt at least one operating parameter of the first sensor 150 or the second sensor 152 to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, without limitation. More particularly, the calibrator 122 may reference the calibration characteristic data 114 in the memory 110, to identify any adjustments to the operating parameters of the first and second sensors 150 and 152, and accommodate for or balance differences in noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, between or among images generated by the first and second sensors 150 and 152.
In this context, it should be appreciated that, to the extent that the characteristics of the first and second sensors 150 and 152 vary, such that the first and second images 202 and 204 deviate along a corresponding unit of measure or other qualitative or quantitative aspect, for example, the calibrator 122 may adjust one or more of the operating parameters of the first and second sensors 150 and 152 (e.g., operating voltages, timings, temperatures, exposure timings, etc.) to address the difference or differences. In other words, the calibrator 122 may seek to align or normalize aspects of the operating characteristics of the first and second sensors 150 and 152. In this way, downstream operations performed by other elements in the system 10 may be aligned, as necessary, for suitable performance and results in image processing.
As a further example, based on the respective characteristics of the first sensor 150 and the second sensor 152, the first sensor 150 may produce images including relatively more noise than the images produced by the second sensor 152. This difference in the generation of noise may be embodied in values of the calibration characteristic data 114, for example, in one or more variables, coefficients, or other data metrics. The calibrator 122 may refer to the calibration characteristic data 114 and, based on the calibration characteristic data 114, adjust operating parameters of the first and second sensors 150 and 152, in an effort to address the difference.
Similarly, the first sensor 150 may produce images including a first dark current characteristic, and the second sensor 152 may produce images including a second dark current characteristic. The difference between these dark current characteristics may be embodied in values of the calibration characteristic data 114. The calibrator 122 may seek to adjust operating parameters of the first and second sensors 150 and 152 to address this difference. Although certain examples are provided herein, it should be appreciated that the calibrator 122 may seek to normalize or address other differences in operating characteristics between the first and second sensors 150 and 152, so that a suitable comparison may be made between images produced by the first and second sensors 150 and 152.
The differences in operating characteristics between the first and second sensors 150 and 152 may be due to various factors. For example, the differences may be due to different pixel densities of the first and second sensors 150 and 152, different manufacturing processes used to form the first and second sensors 150 and 152, different pixel array patterns or filters (e.g., Bayer, EXR, X-Trans, etc.) of the first and second sensors 150 and 152, different sensitivities of the first and second sensors 150 and 152 to light, temperature, operating frequency, operating voltage, or other factors, without limitation.
As noted above, differences in operating characteristics between the first and second sensors 150 and 152 may be identified and characterized during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences in operating characteristics may be identified during post-assembly calibration processes. These differences may be quantified as calibration data representative of the operating characteristics of the first and second sensors 150 and 152, and stored in the memory 110 as the calibration characteristic data 114.
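As a non-limiting illustration of how such stored calibration data might be applied before capture, the sketch below adapts the exposure time and analog gain of one sensor using ratios drawn from the calibration characteristic data 114. The sensor control interface, the field names, and the multiplicative model are assumptions made for purposes of illustration only, and are not a disclosed interface.

```python
def adapt_operating_parameters(sensor, calibration):
    """Pre-capture adjustment of one sensor so that its output can be compared
    with the other sensor's output.

    `sensor` is a hypothetical control object exposing `set_exposure_time` and
    `set_analog_gain`; the dictionary field names and the multiplicative model
    are illustrative assumptions about how calibration characteristic data
    might be applied, not a disclosed interface.
    """
    base_exposure_us = calibration.get("base_exposure_us", 10000)
    exposure_ratio = calibration.get("exposure_ratio", 1.0)
    gain_ratio = calibration.get("gain_ratio", 1.0)

    # Lengthen or shorten the exposure to balance a sensitivity difference
    # between the two sensors.
    sensor.set_exposure_time(int(base_exposure_us * exposure_ratio))

    # Raise or lower analog gain to balance a residual response difference,
    # at the cost of relatively more noise.
    sensor.set_analog_gain(gain_ratio)
```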
In addition to adapting one or more of the operating parameters of the first and second sensors 150 and 152, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204 to substantially address a difference between attributes of the first or second images 202 or 204. For example, based on a difference in sensitivity between the first sensor 150 and the second sensor 152, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204, to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204, to address a difference in an amount of noise among the images.
In various embodiments, to the extent possible, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. Again, a measure of differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) of the first and second images 202 and 204 may be quantified as the calibration characteristic data 114. This calibration characteristic data 114 may be referenced by the calibrator 122 when adjusting attributes of the first and/or second images 202 and/or 204.
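The following sketch illustrates one way such image-level adjustments might be applied to an image from the second sensor 152, using a simple dark current offset, sensitivity gain, and vignetting correction read from hypothetical fields of the calibration characteristic data 114. The field names and the correction model are illustrative assumptions rather than a disclosed format.

```python
import numpy as np

def apply_calibration(second_image, calibration):
    """Normalize a secondary sensor image toward the main sensor's response.

    `calibration` is a hypothetical dictionary standing in for the calibration
    characteristic data 114; the field names and the offset/gain/vignette model
    below are illustrative assumptions.
    """
    img = second_image.astype(np.float64)

    # Subtract a per-sensor dark current offset measured during calibration.
    img -= calibration.get("dark_current_offset", 0.0)

    # Scale by a gain factor that balances the difference in sensitivity
    # between the two sensors.
    img *= calibration.get("sensitivity_gain", 1.0)

    # Divide out a vignetting (lens shading) map, if one was characterized.
    vignette = calibration.get("vignetting_map")
    if vignette is not None:
        img /= np.clip(vignette, 1e-6, None)

    return np.clip(img, 0.0, 255.0).astype(second_image.dtype)


# Example usage with synthetic data.
if __name__ == "__main__":
    raw = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
    cal = {"dark_current_offset": 4.0, "sensitivity_gain": 1.1}
    corrected = apply_calibration(raw, cal)
```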
In one embodiment, as further illustrated in
In some embodiments, after the scaler 120 downscales the first and second images 202 and 204 into the first and second downscaled images 212 and 214, respectively, the calibrator 122 may adjust one or more attributes of the first and/or second downscaled images 212 and/or 214 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. In other words, it should be appreciated that the calibrator 122 may make adjustments to the first and/or second downscaled images 212 and/or 214 at various stages. For example, the adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
Generally, the calibrator 122 adapts operating parameters of the first and second sensors 150 and 152 and adjusts attributes of the first and second images 202 and 204 to substantially remove, normalize, or balance differences between images, for other downstream image processing activities of the system 10 and/or the device 160. For example, as described in the examples below, the images captured by the system 10 and/or the device 160 may be relied upon in focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc. To the extent that these image processing activities rely upon a stereo pair of images, and to the extent that the system 10 and/or the device 160 may benefit from a heterogeneous mix of image sensors (e.g., for cost reduction, processing reduction, parts availability, wider composite sensor range and sensitivity, etc.), the calibrator 122 is configured to adapt and/or adjust certain operating characteristics and attributes into substantial alignment for the benefit of the downstream image processing activities.
As one example of a downstream image processing activity that may benefit from the operations of the calibrator 122, aspects of depth map generation and focusing and re-focusing are described below with reference to
It is noted that, in certain downstream processes, the first image 202 may be compared with the second image 204 according to one or more techniques for image processing. In this context, the first and second images 202 and 204 may be representative of and capture substantially the same field of view. In this case, similar or corresponding image information (e.g., pixel data) among the first and second images 202 and 204 is typically shifted in pixel space between the first and second images 202 and 204, due to the relative difference in position (e.g., illustrated as X, Y, R1, and R2 in
According to various embodiments described herein, the first and second images 202 and 204 may have the same or different pixel densities, depending upon the respective types and characteristics of the first and second image sensors 150 and 152, for example. Further, the first and second images 202 and 204 may be of the same or different image formats. For example, the first image 202 may include several color components of a color image encoded or defined according to a certain color space (e.g., red, green, blue (RGB); cyan, magenta, yellow, key (CMYK); phase alternating line (PAL); YUV or Y′UV; YCbCr; YPbPr, etc.), and the second image 204 may include a single component of another color space.
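Where the first image 202 carries several color components and the second image 204 carries a single luminance component, one preparatory step is to collapse the first image to a comparable luminance plane before the images are compared. The sketch below assumes BT.601 luma weights, which are an illustrative choice rather than a disclosed requirement.

```python
import numpy as np

def rgb_to_luma(rgb_image):
    """Collapse an RGB image to a single luminance plane.

    The BT.601 luma weights used here are an assumption; the disclosure notes
    only that the first image may carry several color components while the
    second image may carry a single component.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb_image[..., :3].astype(np.float64) @ weights
```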
Referring again to
Referring again to
The depth map generator 124 may generate the depth map 224, for example, by calculating a sum of absolute differences (SAD) between pixel values in a neighborhood of pixels in the downscaled image 212 and a corresponding neighborhood of pixels in the downscaled image 214, for each pixel in the downscaled images 212 and 214. Each SAD value may be representative of a relative depth value in a field of view of the downscaled images 212 and 214 and, by extension, the first and second images 202 and 204. In alternative embodiments, rather than (or in addition to) calculating relative depth values of the depth map 224 by calculating a sum of absolute differences, other stereo algorithms, processes, or variations thereof may be relied upon by the depth map generator 124. For example, the depth map generator 124 may rely upon squared intensity differences, absolute intensity differences, mean absolute difference measures, or other measures of difference between pixel values, for example, without limitation. Additionally, the depth map generator 124 may rely upon any suitable size, shape, or variation of pixel neighborhoods for comparisons between pixels among images. Among embodiments, any suitable stereo correspondence algorithm may be relied upon by the depth map generator 124 to generate a depth map including a mapping among relative depth values between images.
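A minimal sketch of a SAD-based block-matching comparison of the kind described above follows. The window size, the disparity search range, and the use of the best-matching shift as the relative depth value are illustrative assumptions; any other stereo correspondence measure may be substituted.

```python
import numpy as np

def sad_depth_map(left, right, window=5, max_disparity=16):
    """Block-matching depth estimate from a calibrated stereo pair.

    `left` and `right` are single-channel images of equal size (for example,
    the downscaled images 212 and 214). For each pixel, the disparity whose
    neighborhood yields the smallest sum of absolute differences is taken as
    the relative depth value.
    """
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    h, w = left.shape
    half = window // 2
    depth = np.zeros((h, w), dtype=np.float64)

    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_sad, best_d = np.inf, 0
            for d in range(min(max_disparity, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            depth[y, x] = best_d  # larger shift corresponds to a closer object
    return depth
```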
Referring again to
More particularly, in generating the depth map 500, the smoother 128 scans along columns of the map, from right to left, for example. The columns may be scanned according to a column-wise, pixel-by-pixel shift of depth values in the map. Along each column, edges which intersect the column are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered. For example, as illustrated in
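The following sketch illustrates one reading of this column-wise, edge-bounded filtering, replacing each run of depth values between consecutive intersecting edges with the median of that run. The median is an illustrative choice, as the particular filter is not specified here.

```python
import numpy as np

def smooth_depth_between_edges(depth_map, edge_map):
    """Filter depth values column by column, bounded by intersecting edges.

    `edge_map` is a boolean array marking edge pixels (as produced by an edge
    map generator). Replacing each run of depth values between consecutive
    edges with the median of that run is an illustrative assumption.
    """
    smoothed = depth_map.astype(np.float64).copy()
    h, w = depth_map.shape

    # Scan columns from right to left, as described for the smoother.
    for x in range(w - 1, -1, -1):
        edge_rows = np.flatnonzero(edge_map[:, x])
        bounds = np.concatenate(([0], edge_rows, [h]))
        for start, stop in zip(bounds[:-1], bounds[1:]):
            if stop > start:
                segment = smoothed[start:stop, x]
                smoothed[start:stop, x] = np.median(segment)
    return smoothed
```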
As further illustrated in
Referring back to
As illustrated in
The point for focus 140 may be received by the device 160 (
According to one embodiment, for a certain point for focus 140 selected by a user, the focuser 130 identifies a corresponding depth value (i.e., a selected depth value for focus) in the upscaled depth map 228, and evaluates a relative difference in depth between the selected depth value and each other depth value in the upscaled depth map 228. Thus, the focuser 130 evaluates the depth values in the depth map 228 according to relative differences from the point for focus 140. In turn, the focuser 130 blends the first image 202 and the blurred replica of the first image 202 based on relative differences in depth, as compared to the point for focus 140.
In one embodiment, the blurred replica of the first image 202 may be generated by the image processor 132 using a Gaussian blur or similar filter, and the focuser 130 blends the first image 202 and the blurred replica according to an alpha blend. For example, at the point for focus 140, the focuser 130 may form a composite of the first image 202 and the blurred replica, where the first image 202 comprises all or substantially all information in the composite and the blurred replica comprises no or nearly no information in the composite. On the other hand, for a point in the first image 202 having a relatively significant difference in depth as compared to the point for focus 140 in the first image 202, the focuser 130 may form another composite of the first image 202 and the blurred replica, where the first image 202 comprises no or nearly no information in the composite and the blurred replica comprises all or substantially all information in the composite.
The focuser 130 may evaluate several points among the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each point based on relative differences in depth, as compared to the point for focus 140 as described above. The composites for the various points may then be formed or joined together by the focuser 130 into an output image. In one embodiment, the focuser 130 may evaluate individual pixels in the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each pixel (or surrounding each pixel) based on relative differences in depth embodied in the depth values of the depth map 228, as compared to the point for focus 140.
According to the operation of the focuser 130, the output image of the focuser 130 includes a region of focus identified by the point for focus 140, and a blend of regions of progressively less focus (i.e., more blur) based on increasing difference in depth as compared to the point for focus 140. In this manner, the focuser 130 simulates a focal point and associated in-focus depth of field in the output image 260A, along with other depths of field which are out of focus (i.e., blurred). It should be appreciated that, because the depth map 228 includes several graduated (or nearly continuous) values of depth, the output image 260A also includes several graduated ranges of blur or blurriness. In this way, the focuser 130 simulates the effect of capturing the image 202 using a relatively larger optical aperture, and the point of focus when capturing the image 202 may be altered after the image 202 is captured. Particularly, several points for focus 140 may be received by the focuser 130 over time, and the focuser 130 may generate respective output images 260A for each point for focus 140.
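A minimal sketch of the blend described above follows, assuming a single-channel image, a Gaussian-blurred replica, and a linear mapping from depth difference to blend weight. The blur sigma and the linear mapping are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth_map, focus_point, blur_sigma=5.0):
    """Simulate post-capture focusing by alpha-blending a sharp image with a
    blurred replica, weighted by each pixel's difference in depth from the
    point for focus.

    `image` is a single-channel array, `depth_map` an array of the same shape,
    and `focus_point` a (row, col) coordinate standing in for the point for
    focus 140.
    """
    image = image.astype(np.float64)
    blurred = gaussian_filter(image, sigma=blur_sigma)

    focus_depth = depth_map[focus_point]       # selected depth value for focus
    diff = np.abs(depth_map - focus_depth)     # depth difference per pixel
    alpha = diff / max(diff.max(), 1e-6)       # 0 = in focus, 1 = most distant

    # Per-pixel alpha blend: the sharp image dominates near the focus depth,
    # and the blurred replica dominates as the depth difference grows.
    output = (1.0 - alpha) * image + alpha * blurred
    return np.clip(output, 0.0, 255.0)
```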
In another embodiment, rather than relying upon a blurred replica of the first image 202, the focuser 130 selectively focuses regions of the first image 202 without using the blurred replica. In this context, the focuser 130 may determine a point spread per pixel for pixels of the first image 202, to generate an output image. For example, for pixels with little or no difference in depth relative to the point for focus 140, the focuser 130 may form the output image 260 using the pixel values in the first image 202 without (or with little) change to the pixel values. On the other hand, for pixels with larger differences in depth relative to the point for focus 140, the focuser 130 may determine a blend of the value of the pixel and its surrounding pixel values based on a measure of the difference. In this case, rather than relying upon a predetermined blurred replica, the focuser 130 may determine a blend of each pixel, individually, according to values of neighboring pixels.
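The sketch below illustrates one reading of this per-pixel approach, growing an averaging radius with the depth difference from the point for focus. The linear radius mapping and the uniform (box) average over neighboring pixels are illustrative assumptions.

```python
import numpy as np

def refocus_variable_spread(image, depth_map, focus_point, max_radius=6):
    """Per-pixel point spread: each pixel is averaged with neighbors over a
    radius that grows with its depth difference from the point for focus.
    """
    image = image.astype(np.float64)
    h, w = image.shape
    diff = np.abs(depth_map - depth_map[focus_point])
    radius = np.rint(diff / max(diff.max(), 1e-6) * max_radius).astype(int)

    output = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            if r == 0:
                output[y, x] = image[y, x]        # in-focus pixel kept as-is
            else:
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                output[y, x] = image[y0:y1, x0:x1].mean()
    return output
```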
It is noted that, while the processes for focusing and re-focusing images described above may benefit from the calibration processes performed by the calibrator 122, other image processing techniques may benefit from those calibration processes as well. For example, depth maps may be relied upon for object extraction, scene understanding, or gesture recognition. In this context, to the extent that the calibration processes performed by the calibrator 122 improve the accuracy of depth maps generated by the system 10, the calibrator 122 may improve object extraction, scene understanding, or gesture recognition image processes.
As another example of image processing techniques which may benefit from the calibration processes performed by the calibrator 122, it is noted that additional details may be imparted to regions of an image which would otherwise be saturated (i.e., featureless or beyond the measurable range) using HDR image processing techniques. Generally, HDR images are created by capturing both a short exposure image and a normal or long exposure image of a certain field of view. The short exposure image provides the additional details for regions that would otherwise be saturated in the normal or long exposure. The short and normal exposure images may be captured in various ways. For example, multiple images may be captured for the same field of view, successively, over a short period of time and at different levels of exposure. This approach is commonly used in video capture, for example, especially if a steady and relatively high-rate flow of frames is being captured and any object motion is acceptably low. For still images, however, object motion artifacts are generally unacceptable for a multiple, successive capture approach.
An alternative HDR image processing approach alternates the exposure lengths of certain pixels of an image sensor. This minimizes problems associated with object motion, but injects artifacts due to the interpolation needed to reconstruct a full resolution image for both exposures. Still another approach adds white or clear pixels to the Bayer pattern of an image sensor, and is commonly known as RGBC or RGBW. The white or clear pixels may be embodied as low light pixels, but the approach may have problems with interpolation artifacts due to the variation of the Bayer pattern required for the white or clear pixels.
In the context of the system 10 and/or the device 160, if the first sensor 150 is embodied as a main color image sensor, and the second sensor 152 is embodied as a secondary luminance-only image sensor, for example, the luminance-only data provided by the second sensor 152 may provide additional information for HDR detail enhancement. In certain aspects of the embodiments described herein, the exposure settings and characteristics of the secondary luminance image sensor may be set and determined by the calibrator 122 separately from those of the main color image sensor. This is achieved without adversely affecting the main sensor through the addition of white or clear pixels, for example.
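By way of a non-limiting illustration, the sketch below recovers detail in near-saturated regions of a main color image using a registered luminance-only image captured at a shorter exposure. The saturation threshold and the luminance-ratio rescaling are illustrative assumptions rather than a disclosed fusion rule.

```python
import numpy as np

def enhance_highlights(main_rgb, secondary_luma, threshold=0.9):
    """Recover detail in near-saturated regions of a main color image using a
    shorter-exposure, luminance-only image registered to the same field of
    view. The threshold test and the luminance replacement are illustrative
    assumptions.
    """
    rgb = main_rgb.astype(np.float64) / 255.0
    luma_main = rgb @ np.array([0.299, 0.587, 0.114])
    luma_sec = secondary_luma.astype(np.float64) / 255.0

    # Where the main image is near saturation, rescale its color by the ratio
    # of the secondary luminance to the (clipped) main luminance, pulling back
    # detail that the longer exposure lost.
    mask = luma_main > threshold
    ratio = np.ones_like(luma_main)
    ratio[mask] = luma_sec[mask] / np.clip(luma_main[mask], 1e-6, None)

    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```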
While various examples are provided above, it should be appreciated that the examples are not to be considered limiting, as other advantages in image processing techniques may be achieved based on the calibration processes performed by the calibrator 122.
Before turning to the process flow diagrams of
Differences in operating characteristics between the first and second sensors 150 and 152 may be quantified as calibration data and stored in the memory 110 as the calibration characteristic data 114. The differences may be due to different pixel densities of the first and second sensors 150 and 152, different manufacturing processes used to form the first and second sensors 150 and 152, different pixel array patterns or filters (e.g., Bayer, EXR, X-Trans, etc.) of the first and second sensors 150 and 152, different sensitivities of the first and second sensors 150 and 152 to light, temperature, operating frequency, operating voltage, or other factors, without limitation.
At reference numeral 604, the process 600 includes adapting an operating characteristic of at least one of the first sensor or the second sensor to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, using the characteristic for calibration identified at reference numeral 602. For example, the calibrator 122 may adapt operating characteristics or parameters of one or more of the first sensor 150 and/or the second sensor 152, as described herein.
At reference numeral 606, the process 600 includes capturing a first image with the first sensor, and capturing a second image with a second sensor. In the context of the system 10 and/or the device 160 (
At reference numeral 608, the process 600 includes adjusting an attribute of one or more of the first or second images to substantially address at least one difference between them. For example, reference numeral 608 may include adjusting an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image using the characteristic for calibration identified at reference numeral 602. Reference numeral 608 may further include aligning the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor. Additionally or alternatively, reference numeral 608 may include normalizing values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
In this context, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204 (
In various embodiments, to the extent possible, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. Again, a measure of differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) of the first and second images 202 and 204 may be quantified as the calibration characteristic data 114. This calibration characteristic data 114 may be referenced by the calibrator 122 when adjusting attributes of the first and/or second images 202 and/or 204 at reference numeral 608.
At reference numeral 610, the process 600 may include scaling one or more of the first image or the second image to scaled image copies. For example, at reference numeral 610, the process 600 may include upscaling the first image to an upscaled first image and/or upscaling the second image to an upscaled second image. Alternatively, at reference numeral 610, the process 600 may include downscaling the first image to a downscaled first image and/or downscaling the second image to a downscaled second image. In certain embodiments, the scaling at reference numeral 610 may be omitted, for example, depending upon the application for use of the first and/or second images and the pixel densities of the sensors used to capture the images.
At reference numeral 612, the process 600 includes adjusting an attribute of one or more of the scaled (i.e., upscaled or downscaled) first or second images to substantially address at least one difference between them. This process may be similar to that performed at reference numeral 608, although performed on scaled images. Here, it should be appreciated that the process 600 may make adjustments to downscaled or upscaled images at various stages. For example, adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
Here, it is noted that the processes performed at reference numerals 602, 604, 606, 608, 610, and 612 may be relied upon to adapt and/or adjust one or more images or pairs of images, so that other image processes, such as the processes at reference numerals 614, 616, and 618, may be performed with better results. In this context, the processes at reference numerals 614, 616, and 618 are described by way of example (and may be omitted or replaced), as other downstream image processing techniques may follow the image calibration according to the embodiments described herein.
At reference numeral 614, the process 600 may include generating one or more edge or depth maps. For example, the generation of edge or depth maps may be performed by the edge map generator 126 and/or the depth map generator 124 as described above with reference to
Alternatively or additionally, at reference numeral 618, the process 600 may include extracting one or more objects, recognizing one or more gestures, or other image processing techniques. These techniques may be performed with reference to the edge or depth maps generated at reference numeral 614, for example. In this context, due to the calibration processes performed at reference numerals 602, 604, 606, 608, 610, and 612, for example, the accuracy of edge or depth maps may be improved, and the image processing techniques at reference numeral 618 (and reference numeral 616) may also be improved.
As another alternative, at reference numeral 620, the process 600 may include generating an HDR image. Here, it is noted that the generation of an HDR image may occur before any image scaling occurs at reference numeral 610. The generation of an HDR image may be performed according to the embodiments described herein. For example, the generation of an HDR image may include generating the HDR image by combining luminance values of a second image with full color values of a first image.
According to various aspects of the process 600, the process 600 may be relied upon for calibration of images captured from a plurality of image sensors, which may include a heterogeneous mix of image sensors. The calibration may assist with various image processing techniques, such as focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
In various embodiments, the processor 710 may include or be embodied as a general purpose arithmetic processor, a state machine, or an ASIC, for example. In various embodiments, the processing environment 100 of
The RAM and ROM 720 and 730 may include or be embodied as any random access and read only memory devices that store computer-readable instructions to be executed by the processor 710. The memory device 740 stores computer-readable instructions thereon that, when executed by the processor 710, direct the processor 710 to execute various aspects of the embodiments described herein.
As a non-limiting example group, the memory device 740 includes one or more non-transitory memory devices, such as an optical disc, a magnetic disc, a semiconductor memory (i.e., a semiconductor, floating gate, or similar flash based memory), a magnetic tape memory, a removable memory, combinations thereof, or any other known non-transitory memory device or means for storing computer-readable instructions. The I/O interface 750 includes device input and output interfaces, such as keyboard, pointing device, display, communication, and/or other interfaces. The one or more local interfaces 702 electrically and communicatively couple the processor 710, the RAM 720, the ROM 730, the memory device 740, and the I/O interface 750, so that data and instructions may be communicated among them.
In certain aspects, the processor 710 is configured to retrieve computer-readable instructions and data stored on the memory device 740, the RAM 720, the ROM 730, and/or other storage means, and copy the computer-readable instructions to the RAM 720 or the ROM 730 for execution, for example. The processor 710 is further configured to execute the computer-readable instructions to implement various aspects and features of the embodiments described herein. For example, the processor 710 may be adapted or configured to execute the process 600 described above in connection with
The flowchart or process diagram of
Although embodiments have been described herein in detail, the descriptions are by way of example. The features of the embodiments described herein are representative and, in alternative embodiments, certain features and elements may be added or omitted. Additionally, modifications to aspects of the embodiments described herein may be made by those skilled in the art without departing from the spirit and scope of the present invention defined in the following claims, the scope of which are to be accorded the broadest interpretation so as to encompass modifications and equivalent structures.
This application claims the benefit of U.S. Provisional Application No. 61/891,648, filed Oct. 16, 2013, and claims the benefit of U.S. Provisional Application No. 61/891,631, filed Oct. 16, 2013, the entire contents of each of which are hereby incorporated herein by reference.