This application is related to the commonly-assigned U.S. patent application having Ser. No. 13/489,950, filed Jun. 6, 2012, and entitled, “Image Blending Operations” (“the '950 application”) and the commonly-assigned U.S. patent application having Ser. No. 13/490,183, filed Jun. 6, 2012, and entitled, “Adaptive Image Blending Operations” (“the '183 application”), each of which applications is hereby incorporated by reference in its entirety.
This disclosure relates generally to the field of image processing and, more particularly, to various blending techniques for use in generating wide area-of-view images constructed by stitching together individual image slices having varying exposures.
One conventional method to generate a wide area-of-view image from a sequence of images (frames) is illustrated in
The role of blending operation 135 is to mask or obfuscate the differences or transition between two frames. One approach to do this uses a process known as “gradient domain” blending, which consists of constructing the gradient field of final image 140 by copying the gradient fields of each image on the corresponding sides of the identified seam (e.g., referring to identifier 145, the gradient fields across seam 125 would be gradient field A from frame 1 and gradient field B from frame 2). Once this is done, the final image is generated by integrating over the gradients across the seam. Reconstructing the final wide angle-of-view image from its gradient fields requires substantial computational resources (e.g., to solve Poisson partial differential equations)—resources that may not permit satisfactory real-time generation of wide angle-of-view images on common hand-held devices such as, for example, personal electronic devices having embedded image sensors such as mobile telephones, personal music players, tablet computer systems, and personal gaming devices. Gradient domain blending may also be unable to mask registration errors visible along objects that span multiple frames.
Another approach, referred to as “image cross-fading,” seeks to mask the transition between two frames by cross-fading pixel values from each frame along the transition seam (e.g., 125 and 130). This generally consists of a technique known as “alpha blending,” which comprises calculating a weighted average between the corresponding pixel values in the two frames, where the weight given to each pixel decreases smoothly while approaching the seam and vanishes at some distance after passing the seam. For example, the weight given to each pixel from frame 1 in region 110 can decrease smoothly from 1 to 0 while crossing seam 125 from left to right. Similarly, the weight given to each pixel from frame 2 in region 110 can decrease smoothly from 1 to 0 while crossing seam 125 from right to left. Exactly on seam 125, pixels from both frame 1 and frame 2 may have the same weight, e.g., 0.5.
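By way of illustration, a minimal sketch of such a cross-fade over a single scanline is given below; the function name, the one-dimensional simplification, and the band width are assumptions made for illustration only, not part of the prior-art description:

    import numpy as np

    def alpha_crossfade(row1, row2, seam_x, band=32):
        # Cross-fade two overlapping scanlines around a vertical seam at seam_x.
        # The weight for row1 (frame 1) decreases smoothly from 1 to 0 while
        # crossing the seam from left to right; row2 (frame 2) receives the
        # complementary weight, so both weights are 0.5 exactly on the seam.
        x = np.arange(row1.shape[0])
        w1 = np.clip((seam_x + band - x) / (2.0 * band), 0.0, 1.0)
        return w1 * row1 + (1.0 - w1) * row2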
Reconstructing a final wide angle-of-view image using image cross-fading techniques alone can result in both “ghosting artifacts” (manifested by preserving parts of a moving object that is close to a transition seam) and banding artifacts (manifested in smooth areas of the images, such as sky, constant-color walls, and fine textures). Likewise, using Poisson blending techniques alone can result in problems within regions of the reconstructed final wide angle-of-view image where, for instance, there are “broken” objects (e.g., a long object that is broken across individual image slices because of problems in the image registration process). Finally, in order to more fully capture image details in a wide angle-of-view image with varying brightness levels across its extent, the auto exposure settings of the camera must be allowed to change or ‘drift’ during the capture process, preferably within some predefined bounds.
Thus, the inventors have realized new and non-obvious ways to constrain this auto exposure drift process and harness the information provided by it in order to more effectively account and correct for changes in the camera's exposure settings across consecutive image slices, and to blend across consecutive image slices without producing noticeable exposure banding artifacts, while still preserving maximum image detail.
In one embodiment, the inventive concept provides a method to more effectively blend two images in an operation referred to herein as “on-the-fly exposure mapping” for high dynamic range, wide area-of-view image construction, e.g., panoramic image construction. In this embodiment, the exposure settings of the camera are allowed to “float” or “drift” within a predefined range. For any given captured image slice in a wide area-of-view image that is being constructed, the ratio of the exposure value for the given captured image slice to the exposure value of an initial captured image slice of the wide area-of-view image may be determined and associated with the given captured image slice. This ratio, referred to herein as an “Exposure Ratio” value or “ER,” may then be used—in conjunction with one or more exposure mapping curves—to boost or lower the exposure (e.g., luma values) of the given captured image slice toward that of the initial captured image slice of the wide area-of-view image. The mapping curves may be used to retain highlights and/or shadows in the given captured image slice, while matching the midtones of the given captured image slice to those of the initial captured image slice, so as to avoid overly jarring changes in brightness between image slices. In this way, the information gained by the exposure adjustment during the capture of the wide area-of-view image may be retained, while maintaining the overall image brightness balance that allows the slices to be blended together in a visually pleasing manner.
In other embodiments according to the “on-the-fly exposure mapping” techniques described herein, the saturation levels of the color values (e.g., chroma values) of the given captured image slice may also be adjusted (e.g., scaled) based, at least in part, on the determined Exposure Ratio value for the slice and the luma values of the pixels in the slice. This may help to prevent coloration problems from occurring in newly captured image slices as the exposure settings of the camera change, e.g., to prevent colors from becoming overly yellowish when moving the camera from a bluish region of the scene that is being captured to a more yellowish region of the scene that is being captured.
In another embodiment of the inventive concept referred to herein as “dynamic thresholding based on exposure ratio” for high dynamic range, wide area-of-view image construction, the method includes obtaining first and second images (or image slices), preferably images (or image slices) to which the “on-the-fly exposure mapping” techniques described above have already been applied. The method then identifies an overlap region between the two, where the overlap region has identified therein a seam and guard-band borders demarcating a “transition band.” For each seam pixel, a value may be determined based, at least in part, on the values of the corresponding pixels from the first and second images and a known exposure setting difference between the first and second images. These determined values may then be used to determine a “cost image” in the overlap region. The path of the seam within the overlap region may then be identified in a manner so as to minimize the associated cost in the cost image. The two images are then matched across the image seam by blending together the pixels in the transition band.
In such an embodiment, the image seam-matching process may comprise one or more weighted blending operations such as, for example, “alpha blending” and “Poisson blending.” The weighting between the different blending techniques may be based on a number of factors including, but not limited to, the images' content within the transition band (e.g., whether moving objects exist or whether there are long contiguous objects passing through the overlap region) and the magnitude of the difference of the corresponding pixels from the first and second images in the transition band. In some embodiments, when a large difference between corresponding pixel values from the first and second images is detected, the alpha blending approach will be favored; whereas, when a small difference between corresponding pixel values from the first and second images is detected, the Poisson blending approach will be favored.
In still other embodiments, the methods described herein may be implemented via computer executable program code. In yet other embodiments, the disclosed methods may be implemented in systems, such as electronic devices having image capture capabilities.
This disclosure pertains to systems, methods, and computer readable media for blending multiple images for use in generating high dynamic range, wide area-of-view, images using an operation referred to as “image seam-matching” while leveraging novel “on-the-fly exposure mapping” techniques. Image seam-matching seeks to make the pixel values in the two images being blended equal along their transition border or seam, and smoothly increases/decreases pixel values on either side of the seam through the images' transition band. “On-the-fly exposure mapping” techniques attempt to account for shifts in a camera's exposure settings during the capture of images in the construction of a wide area-of-view image.
This disclosure also pertains to systems, methods, and computer readable media for blending multiple images for use in generating wide area-of-view images using an operation referred to as “dynamic thresholding based on exposure ratio.” Prior to the use of floating auto exposure in the generation of wide area-of-view images, if corresponding pixels didn't match well between consecutive image slices, it was a good assumption that the poor matching was due to a moving object within the overlap region between the images. With the use of floating auto exposure, however, if there is poor matching between the corresponding pixels of consecutively captured image slices, it is difficult to determine if the mismatching is due to a moving object in the overlap region or, rather, a difference in exposure from one image slice to the next. With the use of the Exposure Ratio variable, the known change in exposure for the currently captured image slice can be accounted for—in other words, a “reverse” tone mapping operation may be performed. Then, whatever amount of pixel difference remains between the corresponding pixels may be assumed to be due to the movement of an object(s) within the overlap region. This difference may then be accounted for in the blending operation by intelligently weighting between different blending operations along the identified seam and within the transition band.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the inventive concept. As part of this description, some of this disclosure's drawings represent structures and devices in block diagram form in order to avoid obscuring the invention. In the interest of clarity, not all features of an actual implementation are described in this specification. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in this disclosure to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
It will be appreciated that in the development of any actual implementation (as in any development project), numerous decisions must be made to achieve the developers' specific goals (e.g., compliance with system- and business-related constraints), and that these goals may vary from one implementation to another. It will also be appreciated that such development efforts might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the design of an implementation of image processing systems having the benefit of this disclosure.
Referring to
It will be understood that, when generating wide area-of-view images, the surrounding scene may be captured as it would be projected onto an imaginary cylindrical manifold. As such, the individual frames forming the wide area-of-view image may be projected onto this cylindrical manifold before being blended into the final image. Thus, while the precise dimensions of a slice are unimportant (e.g., 205′ or 210′), in some embodiments it may be significant that the slice is much larger in one dimension (e.g., height) than it is in the other dimension (e.g., width). In embodiments in which this holds, the slices may be concatenated to approximate the cylindrical projection of the captured frames. By doing this, cylindrical warping operations may be avoided (a computationally intensive process necessary to compensate for image distortions when large slices or whole frames are registered in accordance with the prior art). Additional details regarding cylindrical projection techniques in the context of the capture of wide area-of-view images may be found in the incorporated '950 application and '183 application.
Referring now to
Referring now to
ERi=(Ei/E1), where Ei is the exposure value of the i-th captured image portion and E1 is the exposure value of the initial captured image portion (EQ. 1)
In some embodiments employing a “floating auto exposure with locking” scheme, the range of possible camera exposure parameters may be locked to within 1.5 f-stops of the camera's exposure value at the capture of the initial image portion of the wide area-of-view image, i.e., E1. As may be understood, the more that the exposure value is allowed to drift, the greater the dynamic range that may be captured in the wide area-of-view image. However, if the exposure difference is too large, then the resulting wide area-of-view image will exhibit reduced bit depth in the resulting blended image, effectively “posterizing” the image, pushing a large number of pixels into the same “bucket” in the output mapped image.
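A minimal sketch of this Exposure Ratio bookkeeping, with the lock expressed as a clamp on the drifting exposure value, is given below; the function names and the clamping formulation are assumptions, illustrating only the stated 1.5 f-stop bound:

    def exposure_ratio(E_i, E_1):
        # EQ. 1: ratio of the i-th slice's exposure value to the initial slice's.
        return E_i / E_1

    def lock_exposure(E_wanted, E_1, max_stops=1.5):
        # "Floating auto exposure with locking": the camera's exposure may
        # drift, but no further than max_stops f-stops (factors of two) from
        # the exposure value E_1 of the initial image portion.
        lo, hi = E_1 / (2.0 ** max_stops), E_1 * (2.0 ** max_stops)
        return min(max(E_wanted, lo), hi)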
“On-the-Fly Exposure Mapping”
As mentioned above, one potential problem with “locked exposure” schemes is that, when a user starts panning a scene, if he or she moves toward something brighter or darker, the image may end up either overexposed or underexposed. By calculating an “Exposure Ratio” that changes as the user pans the camera, the captured image slices may be brought back toward the exposure of the initial slice, so that the later-captured slices more closely match it in brightness.
The inventors have surprisingly discovered that the exposure compensation process is further improved when the images are mapped at the individual pixel level, wherein not every pixel is mapped in the same way. To implement this per-pixel mapping approach, in one embodiment, one or more mapping curves are employed. The mapping curves may be used in an effort to retain the information captured from slice to slice while still adjusting the midtones sufficiently that there are no overly visually jarring brightness changes between consecutive slices in the resultant generated wide area-of-view image.
Referring now to
Exemplary pseudocode for employing the “on-the-fly exposure mapping” techniques described above is given below:
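(What follows is a runnable reconstruction sketched from the description above, not the originally disclosed pseudocode; the specific midtone-peaked curve, the blend rule, and all names are assumptions.)

    import numpy as np

    def map_luma(Y, ER):
        # Per-pixel "on-the-fly exposure mapping" (reconstruction sketch).
        # Scaling luma by ER would fully match the initial slice's exposure;
        # the curve weight w instead eases off near the shadows and highlights
        # so detail gained by the exposure drift is retained there, while the
        # midtones are pulled toward the initial slice's brightness.
        y = Y.astype(np.float32) / 255.0        # normalized 8-bit luma
        w = 4.0 * y * (1.0 - y)                 # assumed curve: peaks at midtones
        mapped = w * (ER * y) + (1.0 - w) * y
        return np.clip(np.rint(255.0 * mapped), 0, 255).astype(np.uint8)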
The resulting value may then be clamped to the range of 0 to 255 (e.g., in the case of embodiments with 8-bit luma values).
In one embodiment, the exposure mapping process described above is employed as a “streaming solution,” i.e., it is done in substantially real time, rather than in post processing once all the images have already been captured. By employing the novel “Exposure Ratio” concept presented herein and utilizing only thin, central image portions, i.e., slices, in the image registration process, the exposure mapping process is able to be completed “on-the-fly.”
Auto White Balance (AWB) Desaturation
In addition to the “on-the-fly exposure mapping” techniques described above, the inventors also realized that, if a given image capture operation starts in the shade and pans to sunnier areas, certain colors can become oversaturated in the resultant wide area-of-view image. Hence, according to one embodiment, the output from a camera's existing auto white balance (AWB) calculation unit may be leveraged in order to intelligently desaturate the chroma values in the captured image slices. In certain embodiments the AWB desaturation process may be carried out in addition to the on-the-fly exposure mapping of the pixel luma values discussed above.
According to one embodiment of an AWB desaturation process, at the beginning of the capture of the wide area-of-view image, the camera's white balance (WB) is locked. Although the WB of the camera is locked, the AWB calculation unit continues to calculate the gain for each color channel, e.g., each of the red, green, and blue channels, so that the AWB desaturation process knows where the camera's AWB “wants” to adjust the camera's color channel gains. This allows for the calculation of a “delta” between the camera's current color channel gains and the gain values that the camera's AWB “wants” to adjust the color channel gains to in response to the composition of the currently-captured scene.
If both the white balance and the exposure settings of the camera have drifted, the AWB desaturation process may use this as a factor in deciding how much to desaturate the colors. The AWB desaturation process may then calculate a scale factor by which to desaturate the colors.
Exemplary pseudocode for employing the “AWB desaturation” techniques described above is given below:
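(As above, what follows is a runnable reconstruction sketched from the description, not the originally disclosed pseudocode; the rule combining the gain delta with the exposure drift, the highlight weighting, and all names are assumptions.)

    import numpy as np

    def desaturate_chroma(Cb, Cr, Y, gains_locked, gains_wanted, ER):
        # AWB-driven desaturation (reconstruction sketch). The "delta" between
        # the locked white-balance gains and the gains the AWB calculation unit
        # currently "wants" is combined with the exposure drift |log2(ER)| into
        # a desaturation scale in (0, 1]; chroma is pulled toward neutral (128),
        # more strongly in the highlights, where oversaturation is most visible.
        delta = np.abs(np.asarray(gains_wanted) - np.asarray(gains_locked)).max()
        drift = abs(np.log2(ER))
        scale = 1.0 / (1.0 + delta * drift)          # assumed combination rule
        hi = np.clip(Y.astype(np.float32) / 255.0 * 2.0 - 1.0, 0.0, 1.0)
        s = 1.0 - hi * (1.0 - scale)                 # per-pixel chroma scale
        Cb2 = 128.0 + s * (Cb.astype(np.float32) - 128.0)
        Cr2 = 128.0 + s * (Cr.astype(np.float32) - 128.0)
        return Cb2.astype(np.uint8), Cr2.astype(np.uint8)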
The result of the on-the-fly exposure mapping and the additional AWB desaturation processing described herein is an incoming image slice that is close enough to its predecessor that it may be blended without significant artifacts, but wherein a greater amount of information in the incoming image slice is retained than if the system had not allowed the camera's exposure value to change during the capture of the wide area-of-view image. These mapped pixel values may then be passed into the ‘blending’ stage, which will be described in greater detail below.
Referring now to
Referring now to
Referring now to
Referring to
A cost value Q(x, y) 915 may be assigned to every pixel (x, y) in overlap region 225, where x∈{0, 1, 2, . . . , W−1} and y∈{0, 1, 2, . . . , H−1}. Cost value Q(x, y) may be thought of as expressing the penalty incurred by the seam passing through that pixel. For instance, Q(x, y) may represent the difference between the color values of the two overlapping pixels in the two slices (on a component-by-component basis or as a combined vector descriptor). Cost Q(x, y) may also represent the absolute difference in the luminance values of the two overlapping pixels in the two slices. While the specific measure Q(x, y) used may vary from one implementation to another, it is generally true that the larger the value of Q(x, y), the more likely the pixels in each slice corresponding to the overlap element at (x, y) are associated with different objects, and hence the seam should not pass through that pixel.
When applied to each corresponding pair of pixels (e.g., pixel ‘a’ in overlap region 225 from slice 205′ and the corresponding pixel ‘b’ from slice 210′), the result may be cost map 920. Cost map 920 may additionally be filtered via various functions to produce a filtered cost map, as will be discussed further below. Still further details regarding the construction of cost maps and filtered cost maps may be found in the incorporated '950 application and '183 application.
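A minimal sketch of one of the Q(x, y) measures named above, the absolute luma difference, is given below; the function name and the integer widening are illustrative assumptions:

    import numpy as np

    def cost_map(slice_a, slice_b):
        # Q(x, y) as the absolute difference of the luma values of the two
        # overlapping pixels; slice_a and slice_b are the H x W luma planes of
        # overlap region 225 as seen from each of the two slices.
        return np.abs(slice_a.astype(np.int32) - slice_b.astype(np.int32))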
Once cost map 920 has been determined, a minimum cost for a path or “seam” that traverses overlap region 225 from top to bottom may be determined as follows:

seam=argmin[Q(x1, y1)+Q(x2, y2)+ . . . +Q(xK, yK)], taken over all paths traversing overlap region 225 from top to bottom (EQ. 2);

where (xk, yk) represents the coordinates of the k-th seam pixel, and K represents the number of pixels in the seam. Resulting seam 925 represents a path through overlap region 225. Application of various filters and/or weighting functions may be used to create transition band boundaries 930 and 935. In some embodiments, the size of the transition band may be 64 pixels on either side of seam 925 in order to allow a sufficiently wide blending region to account for potential changes in exposure (and, thus, average brightness levels) from slice to slice. Further details regarding the construction of minimum cost seams in wide area-of-view images may be found in the incorporated '950 application and '183 application.
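One common dynamic-programming formulation of this minimization is sketched below; the restriction of the seam to single-column steps between rows is an assumption of the sketch, not a requirement of the disclosure:

    import numpy as np

    def min_cost_seam(Q):
        # Find the top-to-bottom path through cost map Q (H x W) that
        # minimizes the summed cost of EQ. 2, where each step may move at
        # most one column left or right.
        H, W = Q.shape
        D = Q.astype(np.float64)            # D[y, x]: best cost to reach (x, y)
        back = np.zeros((H, W), dtype=np.int64)
        for y in range(1, H):
            for x in range(W):
                lo, hi = max(0, x - 1), min(W, x + 2)
                j = lo + int(np.argmin(D[y - 1, lo:hi]))
                back[y, x] = j
                D[y, x] += D[y - 1, j]
        seam = [int(np.argmin(D[-1]))]      # best endpoint on the bottom row
        for y in range(H - 1, 0, -1):
            seam.append(int(back[y, seam[-1]]))
        return seam[::-1]                   # seam[y] = x-coordinate on row y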
Referring to
An illustration of this image seam-matching operation within a transition band, in accordance with one embodiment, may be seen in
Dynamic Thresholding for Blending of Multiple Images Using Floating Auto Exposure
The inventors have discovered additional novel blending techniques to mask the transition from one slice to another, as well as to mask the difference in exposure from one slice to another, for example, in the case of a wide area-of-view image generation embodiment employing the “floating auto exposure” or “floating auto exposure with locking” schemes described in reference to
To implement these novel techniques, the blending process may first perform what essentially amounts to a “reverse” tone mapping process on the currently acquired image slice. Because the blending process is able to determine how much the exposure of the camera has changed for the capture of the currently acquired image slice, it is able to rule out changes in the image that are caused only by the change in exposure—and isolate those changes in the currently acquired image slice that are actually attributable to changes in the image slice's composition. In other words, an object with one gray level in the existing wide area-of-view image should have a different gray level in the newly acquired image slice, based on how much the exposure has changed. Thus, changes that deviate from this difference may be used to aid in determining whether a particular pixel or group of pixels corresponds to the same object in the currently acquired image slice or not. To do so, first, the pixel values in the currently acquired image slice are “mapped back” to the existing wide area-of-view image using the one or more exposure and/or chroma mapping techniques described above (i.e., to rule out differences in the values of corresponding pixels across the two image slices being stitched together that are caused by the exposure changes between the image slices), and then the process can perform the difference calculations described above in reference to the cost map 920, and as is described in further detail in the incorporated '950 application and '183 application.
Once the cost map has been created and the seam has been identified in the overlap region between the two slices, an improved blending process may be implemented to perform masking along the seam in the transition band area. If the transitions in this area are not masked effectively, the result may be undesirable exposure banding or a “zebra effect” in the resulting constructed wide area-of-view image, i.e., alternating bright and dark slices that don't blend well into each other.
In some implementations, the transition in brightness between slices may be masked using two distinct blending approaches, which will be referred to herein as “alpha blending” and “simplified Poisson blending.” As is explained in the incorporated '950 application and '183 application, the first approach, alpha blending, comprises taking a simple weighted average between the existing image slice and the currently being acquired image slice. For example, the weight given to pixels in the currently being acquired image slice would increase smoothly from left to right (assuming the wide area-of-view image was being captured left to right), and the weight given to pixels from the existing image slice would decrease smoothly from left to right. One problem with alpha blending is that, in the neighborhood of a moving object, e.g., moving leaves on a tree, alpha blending is likely to create a blurring artifact where the two images are combined in the transition band. This type of blurring is referred to herein as “ghosting artifacts.” Thus, a second approach to blending may also be employed: a simplified Poisson blending.
According to one embodiment of simplified Poisson blending, only information from the existing wide area-of-view image is considered, without combining the information at any point with information from the currently being acquired image. In a preferred embodiment, the Poisson blending approach does not operate on the intensity values of the image directly; rather, it operates on a gradient map of the image. Once the gradient is calculated for each image being blended, it may be interpolated in order to calculate the final image, which now has the same or similar brightness values as are around the transition band area. Because the information comes from only one image slice, ghosting artifacts may be substantially minimized. Because the gradient map represents the derivative of the image pixel values, it may be integrated to determine the blended values of the pixels located at and around the seam. Further details regarding the use of a Poisson blending approach and gradient map to blend pixel values within transition band 1005 may be found in the incorporated '950 application and '183 application.
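A one-dimensional sketch of this idea is given below; the single-scanline simplification, the linear ramp used to distribute the brightness correction, and all names are assumptions made for illustration:

    import numpy as np

    def poisson_blend_row(P_row, S_row, seam_x, band=64):
        # Within the transition band the gradients are taken from one image
        # only (here the existing panorama P), then re-integrated; a linear
        # ramp distributes the residual so the result lands exactly on the
        # new slice's value at the far edge of the band, masking the
        # brightness step without mixing pixel data from the two images.
        lo, hi = seam_x - band, seam_x + band
        g = np.diff(P_row[lo:hi + 1].astype(np.float64))    # one-sided gradients
        out = np.concatenate(([float(P_row[lo])], P_row[lo] + np.cumsum(g)))
        residual = float(S_row[hi]) - out[-1]               # mismatch at band edge
        out += residual * np.linspace(0.0, 1.0, out.size)
        blended = P_row.astype(np.float64)
        blended[lo:hi + 1] = out
        blended[hi + 1:] = S_row[hi + 1:]                   # beyond band: new slice
        return np.clip(blended, 0, 255)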
As mentioned above, in order to more successfully account for both moving objects and changes in brightness in the overlap region between successively captured images in the construction of a wide area-of-view image, one embodiment described herein combines alpha blending and Poisson blending along the identified seam in an intelligent manner. For example, in areas where there are “broken” (i.e., misaligned) objects in the overlap area, alpha blending is preferred, in order to make a smooth transition between slices. In areas within the overlap region that are smoother, Poisson blending is preferred. The combination between the two blending approaches is done by looking along the seam and calculating the difference between the values of pixels in existing wide area-of-view images and the corresponding pixels in the image slice currently being acquired. Thus, when a large difference is detected between corresponding pixels in the overlap region, the alpha blending approach is favored; whereas, when a smaller difference is detected, the Poisson blending approach is favored.
In one embodiment, the following equations are employed to alpha blend the pixel values in the overlap region:
On the left side of the seam: Pnew=P+wL*(S−P) (EQ. 3);
On the right side of the seam: Pnew=S+wR*(P−S) (EQ. 4);
where P is the pixel value in the currently existing wide area-of-view image, S is the pixel value in the currently acquired image, and wL and wR are weighting functions, e.g., as are illustrated in weighting function 1200 of
Due to the wider transition bands that would be employed in a “floating auto exposure” or “floating auto exposure with locking” scheme as compared to a “locked auto exposure” scheme, both blending risks described above (i.e., ghosting and exposure banding) are higher. Thus, in one embodiment, the following equations are employed to more intelligently alpha blend between the two images in the overlap region:
On the left side of the seam: Pnew=P+wL*f(C)*(S−P) (EQ. 5);
On the right side of the seam: Pnew=S+wR*f(C)*(P−S) (EQ. 6);
where f(C) is a function applied to the cost image determined for the overlap region. This cost image is higher where the difference between S and P is higher (also taking into consideration the exposure difference between the two image slices), so the function f(C) is chosen such that f(C) goes to zero for high cost values, and goes to 1 for low cost values, as shown in the exemplary function 1255 illustrated in graph 1250 of
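A minimal sketch of EQs. 5 and 6 is given below; the exponential form chosen for f(C) and its falloff constant are assumptions, chosen only to satisfy the stated property that f(C) approaches 0 for high cost values and 1 for low cost values:

    import numpy as np

    def alpha_blend_with_cost(P, S, w_L, w_R, C, left_of_seam):
        # EQ. 5 (left of seam) and EQ. 6 (right of seam), with the per-pixel
        # blend scaled by f(C): near 0 where the cost image C is high (likely
        # object motion), near 1 where C is low (difference explained by the
        # known exposure change alone).
        P = P.astype(np.float64)
        S = S.astype(np.float64)
        f = np.exp(-C.astype(np.float64) / 32.0)   # assumed f(C)
        left = P + w_L * f * (S - P)               # EQ. 5
        right = S + w_R * f * (P - S)              # EQ. 6
        return np.where(left_of_seam, left, right)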
As mentioned above, the Poisson blending techniques are favored in areas where a smaller difference is detected between the image slices. Poisson blending substantially eliminates ghosting because it doesn't mix the data from the two images being stitched together, but it may introduce artifacts when long objects are cut by the seam. Thus, a weight function, w, may be calculated along the seam that is larger in areas of so-called “labeling errors” (e.g., when a seam cuts through a long object), and smaller in noisy and smoother areas. The signal-to-noise ratio (SNR) metadata of an image may also be used in determining what areas of the image have greater noise. For example, in the case of heavy noise regions (i.e., low SNR), Poisson blending techniques may be more heavily favored in order to avoid the risk that alpha blending techniques would “smooth out” the noise along the seam to too great an extent, thereby making the seam location more visible in the resultant wide area-of-view image.
Once the alpha blending result (A) and the Poisson blending result (P) have been calculated for a given pixel, the results may then be combined, e.g., according to a weighting function, w, in order to calculate a new value for the pixel in the resulting wide area-of-view image. In one embodiment, the new value for the pixel, Pnew, is calculated according to the equation:
Pnew=w*A+(1−w)*P (EQ. 7).
In one embodiment, the weight, w, for pixels in the transition band may be calculated in two steps. First, for every pixel along the seam, a pre-weight value (w0) may be determined, based on SNR and the difference (D) between the values of the corresponding pixels in the two images being blended. The larger the pixel difference (D), the larger the value that is assigned to the pre-weight value, w0—as a large pixel difference is often found in places where the seam cannot avoid cutting a long horizontal object (e.g., telephone wires, long horizontal structures, building edges, etc.). In such cases, it is preferable to mask the sudden transition by smoothing it via alpha blending techniques. On the other hand, the pixel difference (D) between the corresponding pixel values in the two images may be also caused by noise. In the case of heavy noise regions (i.e., low SNR), it is preferable to emphasize Poisson blending techniques, and hence use a smaller w0 value, in order to avoid the smoothing effect of alpha blending that may have the undesirable consequence of making the seam more visible.
Thus, one exemplary formula for the calculation of the pre-weight value, w0, may be: w0=D*SNR (or any similar equation that emphasizes w0 values as D and SNR become larger) (EQ. 8). Since w0 is calculated independently for every pixel along the seam, it may exhibit sudden transitions from one pixel to the next. Such sudden transitions may introduce undesirable visible artifacts, e.g., in the form of short horizontal lines along the transition border. In order to avoid such sudden transitions, the w0 values along the seam may be smoothed, e.g., by applying a smoothing low pass filter along the seam. The final weight, w, for a given pixel may then be calculated as a Gaussian-filtered version of the pre-weight, i.e., w0, values along the seam.
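A minimal sketch of EQs. 7 and 8 is given below; the normalization of w0 to [0, 1] and the Gaussian smoothing parameters are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def seam_weights(D, SNR, sigma=8.0):
        # EQ. 8: pre-weights w0 = D * SNR for each pixel along the seam,
        # normalized and then Gaussian-smoothed so the final weight w does
        # not jump abruptly from one seam pixel to the next.
        w0 = D.astype(np.float64) * SNR
        w0 /= max(float(w0.max()), 1e-9)
        return gaussian_filter1d(w0, sigma)

    def combine_blends(A, P, w):
        # EQ. 7: weighted combination of the alpha blending result A and the
        # Poisson blending result P for pixels in the transition band.
        return w * A + (1.0 - w) * P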
Referring now to
When employing a “floating auto exposure” or “floating auto exposure with locking” scheme, if the pixels are different from slice to slice, the process may look first to information about how the camera's exposure was changed from slice to slice and account for brightness differences due to such exposure changes. Whatever difference is left over may then be assumed to be due to the movement of an object(s) in the overlap region. Further, a wider transition area may be employed in order to more effectively mask the potential brightness changes from slice to slice. So, beginning with block 1305 of operation 1300, two images (or image slices) are obtained. An exposure metric, e.g., the aforementioned “Exposure Ratio,” as well as the slice's overall position within the wide area-of-view image, may then be calculated for the most recently captured image (block 1310). The luma values for the most recently captured image may then be mapped according to one or more exposure mapping curves, as described above (block 1315). The chroma values for the most recently captured image may also then be desaturated based, at least in part, on the exposure metric and the luma value of the corresponding pixel, as described above (block 1320). An overlap region between the two images may then be identified so that the images may be registered and a registration quality metric may be obtained (block 1325). If the quality metric indicates the registration is acceptable, an initial cost map may be determined, taking into account the expected brightness differences between the slices due to the camera's change in exposure settings during the time between the capture of the two images being registered (block 1330). Details regarding how the process may proceed if the quality metric indicates that the registration is not acceptable may be found in the incorporated '183 application, and thus are not discussed further here. Based on the final cost map, a minimum cost seam can be determined (block 1335). Details regarding the construction of a cost map and determination of a minimum cost seam may be found above and in the incorporated '950 application and '183 application. The two images may then be blended together by blending ±‘p’ overlapping pixels across the identified minimum cost seam, e.g., by using a weighted combination of alpha blending and Poisson blending (block 1340). With blending operations complete, the blended image may be stored or displayed (block 1345). If all desired images have been registered and blended (the “YES” prong of block 1350), operation 1300 is complete. If additional images remain to be combined (the “NO” prong of block 1350), operation 1300 continues at block 1305 where the next image to be combined may be obtained.
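A compact sketch tying the blocks of operation 1300 together on a single luma plane is given below. It chains the illustrative functions sketched earlier (exposure_ratio, map_luma, cost_map, min_cost_seam, alpha_crossfade), uses a fixed-width overlap in place of the registration of block 1325, and substitutes simple cross-fading for the weighted blend of block 1340; every name and simplification here is an assumption:

    import numpy as np

    def blend_next_slice(pano_Y, slice_Y, E_1, E_i, overlap_w=64):
        ER = exposure_ratio(E_i, E_1)                  # block 1310 (EQ. 1)
        slice_Y = map_luma(slice_Y, ER)                # block 1315
        a = pano_Y[:, -overlap_w:]                     # overlap, panorama side
        b = slice_Y[:, :overlap_w]                     # overlap, new-slice side
        Q = cost_map(a, b)                             # block 1330
        seam = min_cost_seam(Q)                        # block 1335
        blended = np.array([alpha_crossfade(a[y], b[y], seam[y])
                            for y in range(a.shape[0])])   # block 1340
        return np.hstack([pano_Y[:, :-overlap_w], blended, slice_Y[:, overlap_w:]])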
Referring to
Processor 1405 may be any suitable programmable control device capable of executing instructions necessary to carry out or control the operation of the many functions performed by device 1400 (e.g., such as the generation and/or processing of images in accordance with operations in any one or more of
Sensor and camera circuitry 1450 may capture still and video images that may be processed to generate wide angle-of-view images, at least in part, by video codec(s) 1455 and/or processor 1405 and/or graphics hardware 1420, and/or a dedicated image processing unit incorporated within circuitry 1450. Images so captured may be stored in memory 1460 and/or storage 1465. Memory 1460 may include one or more different types of media used by processor 1405, graphics hardware 1420, and image capture circuitry 1450 to perform device functions. For example, memory 1460 may include memory cache, read-only memory (ROM), and/or random access memory (RAM). Storage 1465 may store media (e.g., audio, image and video files), computer program instructions or software, preference information, device profile information, and any other suitable data. Storage 1465 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM), and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 1460 and storage 1465 may be used to retain computer program instructions or code organized into one or more modules and written in any desired computer programming language. When executed by, for example, processor 1405 such computer program code may implement one or more of the methods described herein.
It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the invention as claimed and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., some of the disclosed embodiments may be used in combination with each other). In addition, it will be understood that some of the operations identified in