Reference is made to commonly assigned, co-pending U.S. Application Publication No. 2012/0057745 published Mar. 8, 2012, entitled: “Detection of Objects Using Range Information”, by S. Wang, which is incorporated herein by reference.
This invention relates to the adjustment of digital images, and more particularly to a method for adjusting perspective and disparity in a stereoscopic image pair using range information.
Stereo cameras or multi-view cameras generally capture left and right images using two or more cameras functioning similarly to human eyes, causing a viewer to perceive a stereoscopic effect due to disparities between the two images. Specifically, a user observes parallax when a different image is presented to each eye because of the disparity between the two images captured by a stereo camera, and this binocular parallax causes the user to experience a stereoscopic effect. However, the resulting stereo images are often not pleasing because of unsuitable perspective or parallax.
There are many references that describe how to capture stereo image pairs and how to adjust the parallax of such pairs. Most of these methods change the entire scene without adjusting specific objects. For example, in U.S. Patent Application Publication No. 2008/0112616, the disparity of stereo images is adjusted based on a disparity histogram.
A need exists for a method to adjust the perspective of stereoscopic image pairs by modifying the disparities of objects individually according to their distances from the viewpoint in a scene.
The present invention represents a method for adjusting perspective and disparity in a stereoscopic image pair, the method being implemented at least in part by a data processing system and comprising the steps of:
receiving the stereoscopic image pair representing a scene;
identifying range information associated with the stereoscopic image pair and including distances of pixels in the scene from a reference location;
generating a cluster map based at least upon an analysis of the range information and the stereoscopic images, the cluster map grouping pixels of the stereoscopic images by their distances from a viewpoint;
identifying objects and background in the stereoscopic images based at least upon an analysis of the cluster map and the stereoscopic images;

generating a new stereoscopic image pair at least by adjusting perspective and disparity of the objects and the background in the stereoscopic image pair, the adjusting occurring based at least upon an analysis of the range information; and
storing the new generated stereoscopic image pair in a processor-accessible memory system.
It is an advantage of the present invention that, by using range information, objects and background can be detected and segmented with improved accuracy. Furthermore, the perspective and disparity in a stereoscopic image pair can be adjusted efficiently.
In addition to the embodiments described above, further embodiments will become apparent by reference to the drawings and by study of the following detailed description.
The present invention will be more readily understood from the detailed description of exemplary embodiments presented below considered in conjunction with the attached drawings, of which:
The invention is inclusive of combinations of the embodiments described herein. References to “a particular embodiment” and the like refer to features that are present in at least one embodiment of the invention. Separate references to “an embodiment” or “particular embodiments” or the like do not necessarily refer to the same embodiment or embodiments; however, such embodiments are not mutually exclusive, unless so indicated or as are readily apparent to one of skill in the art. The use of singular and/or plural in referring to the “method” or “methods” and the like is not limiting.
The phrase, “digital content record”, as used herein, refers to any digital content record, such as a digital still image, a digital audio file, a digital video file, etc.
It should be noted that, unless otherwise explicitly noted or required by context, the word “or” is used in this disclosure in a non-exclusive sense.
The data processing system 10 includes one or more data processing devices that implement the processes of the various embodiments of the present invention, including the example processes of
The data storage system 40 includes one or more processor-accessible memories configured to store information, including the information needed to execute the processes of the various embodiments of the present invention, including the example processes of
The phrase “processor-accessible memory” is intended to include any processor-accessible data storage device, whether volatile or nonvolatile, electronic, magnetic, optical, or otherwise, including but not limited to, registers, floppy disks, hard disks, Compact Discs, DVDs, flash memories, ROMs, and RAMs.
The phrase “communicatively connected” is intended to include any type of connection, whether wired or wireless, between devices, data processors, or programs in which data may be communicated. Further, the phrase “communicatively connected” is intended to include connections between devices or programs within a single data processor, connections between devices or programs located in different data processors, and connections between devices not located in data processors at all. In this regard, although the data storage system 40 is shown separately from the data processing system 10, one skilled in the art will appreciate that the data storage system 40 may be contained completely or partially within the data processing system 10. Further in this regard, although the peripheral system 20 and the user interface system 30 are shown separately from the data processing system 10, one skilled in the art will appreciate that one or both of such systems may be stored completely or partially within the data processing system 10.
The peripheral system 20 may include one or more devices configured to provide digital content records to the data processing system 10. For example, the peripheral system 20 may include digital still cameras, digital video cameras, cellular phones, or other data processors. The data processing system 10, upon receipt of digital content records from a device in the peripheral system 20, may store such digital content records in the data storage system 40.
The user interface system 30 may include a mouse, a keyboard, another computer, or any device or combination of devices from which data is input to the data processing system 10. In this regard, although the peripheral system 20 is shown separately from the user interface system 30, the peripheral system 20 may be included as part of the user interface system 30.
The user interface system 30 also may include a display device, a processor-accessible memory, or any device or combination of devices to which data is output by the data processing system 10. In this regard, if the user interface system 30 includes a processor-accessible memory, such memory may be part of the data storage system 40 even though the user interface system 30 and the data storage system 40 are shown separately in
Range information 105 associated with the digital image 103 is identified in identify range information step 104. The range information 105 includes distances of pixels in the scene from a known reference location. The viewpoint location needs to be identified relative to the given range information; usually, the viewpoint location is the reference location. Range information 105 is preferably presented in the form of a range map provided by a ranging camera which uses visible light, infrared light, laser light or ultrasound to determine distances to pixels in the scene. Alternately, the range map can be provided using stereoscopic image processing techniques that involve capturing images of a scene from multiple viewpoints and determining the range information by evaluating the relative positions of objects in the scene. For cases where the range map has different dimensions (i.e., number of rows and columns) than the digital image 103, the range map is preferably interpolated so that it has the same dimensions as the digital image 103.
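By way of illustration only, such an interpolation might be sketched as follows; the function name, the use of numpy arrays, and scipy's bilinear zoom are assumptions chosen here for the sketch and are not part of the disclosed method:

```python
import numpy as np
from scipy.ndimage import zoom

def match_range_map_to_image(range_map, image_shape):
    """Interpolate a range map so it has the same dimensions
    (rows and columns) as the digital image.

    range_map   -- 2-D array of per-pixel scene distances
    image_shape -- (rows, cols) of the target digital image
    """
    row_factor = image_shape[0] / range_map.shape[0]
    col_factor = image_shape[1] / range_map.shape[1]
    # Bilinear interpolation (order=1) resizes the distance values
    # without introducing spurious depth extrema at boundaries.
    return zoom(range_map, (row_factor, col_factor), order=1)
```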
In the identify objects and background step 106, objects and background 107 in a scene are identified based upon an analysis of the range information 105 and the stereoscopic image pair 103. A new stereoscopic image pair 109 is generated by adjusting perspective and disparity of the object and the background 107 in the stereoscopic image pair 103 based upon an analysis of the range information 105 in generate new stereoscopic image pair step 108. Because different people have different stereo experiences, the perspective and disparity adjustment is optionally based upon user-preference input 111 in one embodiment of the present invention. (The dashed lines in
Because the objects and background in the stereoscopic images are detected, the perspective and disparity of a specific object can also be adjusted. For example, if it is desired to create a modified stereoscopic image pair where one specific object is modified so that it appears to be further from the viewpoint, it is necessary to reduce its size and to change the disparity of this object in the new stereoscopic image pair 109. Changes of size and disparity of the object should be proportional to the change of distance of the object from the viewpoint. As the size and position of objects in the stereoscopic image pair are adjusted, portions of the background or objects that were previously occluded may be exposed. In order to fill in the pixel values for these regions in one of the images in the stereoscopic image pair 109, pixels can be copied from the corresponding object or background region in the other image in the stereoscopic image pair 109. In cases where the object region was occluded in both of the images in the stereoscopic image pair 109, it will be necessary to determine pixel values by extrapolating from the portions of the object that are not occluded.
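A minimal sketch of copying pixels from the other view to fill a newly exposed region, assuming numpy images and a single horizontal disparity for the region (the function and parameter names are illustrative, not part of the disclosure), might read:

```python
import numpy as np

def fill_exposed_region(target, source, hole_mask, disparity):
    """Fill newly exposed pixels in one stereo view by copying from
    the corresponding location in the other view.

    target    -- H x W x 3 image containing the exposed region
    source    -- the other image of the stereoscopic pair
    hole_mask -- boolean H x W mask of pixels needing values
    disparity -- horizontal offset (in pixels) of the region
                 between the two views
    """
    filled = target.copy()
    rows, cols = np.nonzero(hole_mask)
    # Shift column indices by the region's disparity, clamped to bounds.
    src_cols = np.clip(cols + disparity, 0, source.shape[1] - 1)
    filled[rows, cols] = source[rows, src_cols]
    return filled
```

Extrapolation from the unoccluded portions of an object, as described above for regions hidden in both views, would require a separate inpainting step not sketched here.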
In one embodiment of the present invention, a processor to perform the perspective and disparity adjustment operations can be embedded in a stereo-image-capture device such as a parallel camera, a multi-view camera, a single lens camera, or hybrid stereo image camera. This enables the adjustment of the perspective and disparity at the time of image capture.
In store new stereoscopic image pair step 110, the new stereoscopic image pair 109 is stored in a data storage system 40 (
objects = f(cluster map, I)
where the function f( ) is an object segmentation operation applied to the digital image I using the cluster map 203. The digital image I will be one of the images in the stereoscopic image pair 103. The function f( ) works by identifying pixels in the cluster map 203 having the same distance, then assigning the corresponding pixels in the digital image I to a corresponding object.
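A minimal sketch of such a function f( ), assuming a numpy cluster map whose entries are the per-pixel cluster distances (the names below are illustrative assumptions), might read:

```python
import numpy as np

def segment_objects(cluster_map, image):
    """Sketch of f(cluster map, I): pixels of the cluster map sharing
    the same distance form one object, and the corresponding pixels
    of the digital image I are assigned to that object."""
    objects = []
    for distance in np.unique(cluster_map):
        mask = (cluster_map == distance)          # same-distance pixels
        objects.append((distance, mask, image[mask]))  # pixels of I
    return objects
```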
Edges are detected in the stereoscopic image using an identify edges step 308. In a preferred embodiment of the present invention, the edges are identified using a gradient operation. The gradient of an image is defined as:

∇I = (Gx, Gy) = (∂I/∂x, ∂I/∂y)

where I(x,y) is the intensity of the pixel at location (x,y). The magnitude of the gradient vector is:

G = [Gx^2 + Gy^2]^(1/2).

Edges are detected in the stereoscopic image based on the magnitude of the gradient at each pixel.
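Under the definitions above, this gradient-magnitude edge detection might be sketched as follows; the threshold parameter and the use of numpy central differences to estimate Gx and Gy are assumptions of the sketch:

```python
import numpy as np

def gradient_magnitude(intensity):
    """G = [Gx^2 + Gy^2]^(1/2), with Gx and Gy estimated by
    central differences of the intensity I(x, y)."""
    gy, gx = np.gradient(intensity.astype(float))  # rows then columns
    return np.hypot(gx, gy)

def detect_edges(intensity, threshold):
    """Mark as edge pixels those whose gradient magnitude G
    exceeds the given threshold."""
    return gradient_magnitude(intensity) > threshold
```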
Next, a filter edges step 310 is used to filter the detected edges, removing insignificant edges and keeping the significant edges. Mathematically, the filtering operation can be expressed as:

f(e) = 1 if S(e) ≥ T, and f(e) = 0 otherwise

where e is one of the detected edges, S(e) is the sum of the gradient magnitudes of the pixels in the edge e, f is a filter mask and T is the threshold.
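A sketch of this filtering, under the assumption (not specified in the text) that each detected edge e is a connected component of the edge mask, might read:

```python
import numpy as np
from scipy.ndimage import label

def filter_edges(edge_mask, gradient_mag, threshold):
    """Keep an edge e only when S(e), the sum of gradient
    magnitudes over its pixels, reaches the threshold T."""
    labeled, count = label(edge_mask)        # group edge pixels into edges
    significant = np.zeros_like(edge_mask, dtype=bool)
    for edge_id in range(1, count + 1):
        component = (labeled == edge_id)
        if gradient_mag[component].sum() >= threshold:  # S(e) >= T
            significant |= component
    return significant
```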
The pixel clusters produced by the reduce cluster noise step 306 will typically still have errors in the boundary areas because of the noise in the range map. A refine clusters step 312 is used to refine the cluster groups and produce the cluster map 203. The boundaries of the cluster groups are refined using the significant edges computed in the filter edges step 310. Pixels falling outside of the detected significant edges in each cluster group are removed, making the boundaries of the cluster groups much more accurate. Next, an average distance, n, is computed for each of the refined cluster groups as:

n = (1/m) Σ_(i=1..m) dis(i)

where m is the number of pixels in the cluster group w, and dis(i) is the distance of the ith pixel to the viewpoint location. By assigning the average distance to each pixel in the cluster groups, the cluster map is generated.
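A sketch of this final assignment, assuming per-pixel cluster labels and a range map held in numpy arrays (names chosen here for illustration), might read:

```python
import numpy as np

def build_cluster_map(cluster_labels, range_map):
    """Assign to every pixel of a cluster group w its average
    distance n = (1/m) * sum of dis(i) over the group's m pixels."""
    cluster_map = np.zeros_like(range_map, dtype=float)
    for group in np.unique(cluster_labels):
        members = (cluster_labels == group)
        cluster_map[members] = range_map[members].mean()  # average n
    return cluster_map
```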
It is to be understood that the exemplary embodiments are merely illustrative of the present invention and that many variations of the above-described embodiments can be devised by one skilled in the art without departing from the scope of the invention. It is therefore intended that all such variations be included within the scope of the following claims and their equivalents.
Number | Name | Date | Kind |
---|---|---|---|
4101217 | Fergg et al. | Jul 1978 | A |
4707119 | Terashita | Nov 1987 | A |
4945406 | Cok | Jul 1990 | A |
4984013 | Terashita | Jan 1991 | A |
5016043 | Kraft et al. | May 1991 | A |
5903660 | Huang et al. | May 1999 | A |
6243133 | Spaulding et al. | Jun 2001 | B1 |
6252976 | Schildkraut et al. | Jun 2001 | B1 |
6275605 | Gallagher et al. | Aug 2001 | B1 |
6573932 | Adams, Jr. et al. | Jun 2003 | B1 |
6636646 | Gindele | Oct 2003 | B1 |
6845181 | Dupin et al. | Jan 2005 | B2 |
6873723 | Aucsmith et al. | Mar 2005 | B1 |
6873743 | Steinberg | Mar 2005 | B2 |
7043090 | Gindele et al. | May 2006 | B2 |
7046400 | Gindele et al. | May 2006 | B2 |
7116838 | Gindele et al. | Oct 2006 | B2 |
7129980 | Ashida | Oct 2006 | B1 |
7158174 | Gindele et al. | Jan 2007 | B2 |
7230538 | Lai et al. | Jun 2007 | B2 |
7289154 | Gindele | Oct 2007 | B2 |
7421149 | Haynes et al. | Sep 2008 | B2 |
7421418 | Nakano | Sep 2008 | B2 |
7526127 | Koide et al. | Apr 2009 | B2 |
7689021 | Shekhar et al. | Mar 2010 | B2 |
20020126893 | Held et al. | Sep 2002 | A1 |
20030007687 | Nesterov et al. | Jan 2003 | A1 |
20030044063 | Meckes et al. | Mar 2003 | A1 |
20030044070 | Fuersich et al. | Mar 2003 | A1 |
20030044178 | Oberhardt et al. | Mar 2003 | A1 |
20030223622 | Simon et al. | Dec 2003 | A1 |
20040240749 | Miwa et al. | Dec 2004 | A1 |
20040247176 | Aucsmith et al. | Dec 2004 | A1 |
20050089212 | Mashitani et al. | Apr 2005 | A1 |
20050157204 | Marks | Jul 2005 | A1 |
20070121094 | Gallagher et al. | May 2007 | A1 |
20070126921 | Gallagher et al. | Jun 2007 | A1 |
20070274604 | Schechner et al. | Nov 2007 | A1 |
20080112616 | Koo et al. | May 2008 | A1 |
Number | Date | Country |
---|---|---|
1 968 329 | Sep 2008 | EP |
WO 2008102296 | Aug 2008 | WO |
WO 2009072070 | Jun 2009 | WO |
Entry |
---|
A. Hoover et al.: “An Experimental Comparison of Range Image Segmentation Algorithms”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Service Center, Los Alamitos, CA, US LNKD-DOI:10.1109/34.506791, vol. 18, No. 7, Jul. 1, 1996, pp. 673-689, XP002213073, ISSN: 0162-8828, p. 674; Table 1. |
“Dominant Sets and Pairwise Clustering”, by Massimiliano Pavan et al., IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 29, No. 1, Jan. 2007, pp. 167-172. |
“Object Detection using a Max-Margin Hough Transform” by S. Maji et al., Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2009. |
John Canny, “A Computational Approach to Edge Detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Service Center, Los Alamitos, CA, US LNKD-DOI:10.1109/TPAMI.1986.4767851, vol. PAMI-08, No. 6, Nov. 1, 1986, pp. 679-698, XP000604891, ISSN: 0162-8828 abstract. |
“Object-Specific Figure-Ground Segregation” by S. X. Yu, et al., Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2003. |
“Object Segmentation Using Graph Cuts Based Active Contours” by N. Xu et al., Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2003. |
International Preliminary Report on Patentability for PCT/US2010/001946, issued Jan. 31, 2012. |
International Search Report and Written Opinion on PCT/US2010/001946 mailed Sep. 15, 2010. |
Number | Date | Country | |
---|---|---|---|
20110026807 A1 | Feb 2011 | US |