The present disclosure relates to methods and systems for generating a depth map based on soft classification that may be used for converting an image in a two-dimensional ("2D") format into an image in a three-dimensional ("3D") format.
Rapidly emerging 3D technologies, in the form of 3D cinemas, 3D home entertainment devices, and 3D portable electronics, have created increasing demand for 3D content. One popular way of creating 3D content is to leverage the large existing database of 2D media by converting it into 3D. The conversion of image data from 2D to 3D, a fast way to obtain 3D content from existing 2D content, has been extensively studied. One method of converting 2D into 3D is to first generate a depth map and then create left and right eye images from the depth map. This depth-map-based 3D rendering method is useful for multi-view stereoscopic systems and is also well suited for efficient transmission and storage.
In converting 2D images into 3D images, most conventional technologies apply the same method to different images, regardless of the type of content in the images. The lack of a customized method in these technologies may either create unsatisfactory 3D effects for certain content or significantly increase the required computational complexity.
To use customized methods for different types of scenes, a classification-based algorithm has been proposed that seeks to improve over conventional 2D-to-3D image conversion technologies. This algorithm classifies each image into one of several categories and uses a different method to generate the depth map for each image category. In this algorithm, known as "hard classification," each image is assigned a fixed class label, each class possessing a unique property, and the depth map is generated using the method suited only to that class.
However, the hard classification method may lead to several problems. First, some images may not be strictly classified as belonging to a single class, and therefore the depth map generated according to the property of a single class for these images may not be optimal. Second, the non-optimally generated depth map in a misclassified image may lead to 3D image distortion. Lastly, misclassification of images may lead to temporal flickering of depth maps during the conversion of individual frames in video sequences, which may result in visually unpleasant 3D perception.
The present disclosure includes an exemplary method and system for generating a depth map for a 2D image based on soft classification.
Embodiments of the method include receiving the 2D image, defining a plurality of object classes, analyzing content of the received 2D image, calculating probabilities that the received 2D image belongs to the object classes, and determining a final depth map based on a result of the analyzed content and the calculated probabilities for the object classes. Some embodiments of the method may include performing a multi-stage classification, for example, a multi-stage two-class classification, of the received 2D image if there are more than two object classes in the plurality of object classes. Other embodiments may include applying a filter, for example, an infinite impulse response filter, to smooth the calculated probabilities for the object classes if the received 2D image is a fast action scene, so as to prevent temporal flickering in the final depth map.
An exemplary system in accordance with the present disclosure comprises a user device receiving a 2D image, and a 2D-to-3D image converter coupled to the user device. The 2D-to-3D image converter analyzes content of the received 2D image, calculates the probabilities that the received 2D image belongs to a plurality of object classes, and determines a final depth map based on a result of the analyzed content and the calculated probabilities for the object classes. In some embodiments, the 2D-to-3D image converter applies a filter, for example, an infinite impulse response filter, to smooth the calculated probabilities for the object classes if the received 2D image is a fast action scene, so as to prevent temporal flickering in the final depth map. In certain embodiments, the 2D-to-3D image converter generates a 3D image by applying the final depth map to the received 2D image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Reference will now be made in detail to the exemplary embodiments illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
Methods and systems disclosed herein address the above-described needs. For example, methods and systems disclosed herein can generate depth maps based on the content and features of 2D images (e.g., single still images or video frames), by utilizing efficient algorithms with low computational complexity suitable for real-time implementation, even on low-power computing devices and/or 3D display devices.
Media source 102 can be any type of storage medium capable of storing imaging data, such as video or still images. For example, media source 102 can be provided as a CD, DVD, Blu-ray disc, hard disk, magnetic tape, flash memory card/drive, volatile or non-volatile memory, holographic data storage, or any other type of storage medium. Media source 102 can also be an image capturing device or computer capable of providing imaging data to user device 104. For example, media source 102 can be a camera capturing imaging data and providing the captured imaging data to user device 104.
As another example, media source 102 can be a web server, an enterprise server, or any other type of computer server. Media source 102 can be a computer programmed to accept requests (e.g., HTTP, or other protocols that can initiate data transmission) from user device 104 and to serve user device 104 with requested imaging data. In addition, media source 102 can be a broadcasting facility, such as a free-to-air, cable, satellite, or other broadcasting facility, for distributing imaging data.
As a further example, media source 102 can be a client computing device. Media source 102 can request a server (e.g., user device 104 or 2D-to-3D image converter 106) in a data network (e.g., a cloud computing network) to convert a 2D image into a 3D image.
User device 104 can be, for example, a computer, a personal digital assistant (PDA), a cell phone or smartphone, a laptop, a desktop, a tablet PC, a media content player, a set-top box, a television set including a broadcast tuner, a video game station/system, or any electronic device capable of providing or rendering imaging data. User device 104 may include software applications that allow user device 104 to communicate with and receive imaging data from a network or local storage medium. As mentioned above, user device 104 can receive data from media source 102, examples of which are provided above.
As another example, user device 104 can be a web server, an enterprise server, or any other type of computer server. User device 104 can be a computer programmed to accept requests (e.g., HTTP, or other protocols that can initiate data transmission) from, e.g., media source 102, for converting an image into a 3D image, and to provide the 3D image generated by 2D-to-3D image converter 106. In addition, user device 104 can be a broadcasting facility, such as a free-to-air, cable, satellite, or other broadcasting facility, for distributing imaging data, including imaging data in a 3D format.
Output device 108 can be, for example, a computer, personal digital assistant (PDA), cell phone or smartphone, laptop, desktop, a tablet PC, media content player, set-top box, television set including a broadcast tuner, video game station/system, or any electronic device capable of accessing a data network and/or receiving imaging data. In some embodiments, output device 108 can be a display device such as, for example, a television, monitor, projector, digital photo frame, display panel, or any other display device. In certain embodiments, output device 108 can be a printer.
Mathematical classifiers are used to calculate the probability that the image belongs to each object class. In some embodiments, for example, a support vector machine (SVM) classifier is used to calculate class probabilities for two-class classifications. Class probabilities are obtained either by a linear or nonlinear mapping of the SVM decision function value, or by binning the SVM decision function value and estimating the probabilities based on empirical histograms. Through a linear mapping of the SVM decision function value, the class probability p may be defined as, for example,

p=p(C=1|S)=p−+(p+−p−)(S+1)/2 for −1≤S≤1, with p held at p+ for S>1 and at p− for S<−1,
where C is the class label in a two-class classification problem, S is the SVM decision function value, and p+ and p− are the constant class probability estimates used when S>1 and S<−1, respectively.
Since decision function values above 1 or below −1 are not reliable for estimating the class probability, the class probability estimates p+ and p− corresponding to decision function values S>1 and S<−1 are held constant. Class probability estimates p+ and p− may be obtained from training data. For example, the SVM decision function values may be computed for M images belonging to class 1. If M+ is the number of those images whose decision function values exceed 1, then the estimate of p(S>1|C=1) is given by M+/M. To obtain good probability estimates, a large number of images may be required. When a large image database is not available, however, estimates for p+ and p− may be assigned based on previously collected or known empirical data.
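For illustration only, the clamped linear mapping described above might be sketched as follows; the function name and the default values standing in for the trained estimates p+ and p− are assumptions of this sketch, not values from the disclosure:

```python
def svm_decision_to_probability(s, p_minus=0.1, p_plus=0.9):
    """Map an SVM decision function value s to a class probability.

    Linear mapping on [-1, 1]; outside that range the probability is
    held constant at p_minus / p_plus, since decision values beyond
    +/-1 are not reliable for probability estimation.  The defaults for
    p_minus and p_plus are placeholders for estimates obtained from
    training data (e.g., empirical frequencies such as M+/M).
    """
    if s < -1.0:
        return p_minus
    if s > 1.0:
        return p_plus
    # Linear interpolation between p_minus (at s = -1) and p_plus (at s = +1).
    return p_minus + (p_plus - p_minus) * (s + 1.0) / 2.0
```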
In fast action scenes, the class probabilities may fluctuate widely between frames. As such, the class probabilities may need to be smoothed to avoid temporal flickering in the final depth map. A filter, for example, an Infinite Impulse Response (IIR) filter, may be used to smooth the class probabilities. The smoothing filter may be described by the following equation:
p*(i)=w1p*(i−1)+w2p*(i−2)+w0p(i)
In the above equation, p(i) represents the unsmoothed class probability for frame i, p*(i) represents the smoothed class probability for frame i, and the weights w0, w1, w2 are positive and sum up to 1.
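A minimal sketch of this recursion is given below, assuming the first two frames are passed through unchanged to seed the filter; the function name and the example weights are illustrative:

```python
def smooth_class_probabilities(p_frames, w0=0.5, w1=0.3, w2=0.2):
    """Second-order IIR smoothing: p*(i) = w1*p*(i-1) + w2*p*(i-2) + w0*p(i).

    p_frames is a list of unsmoothed per-frame class probabilities p(i).
    The weights are positive and sum to 1; these particular values are
    illustrative, not values from the disclosure.
    """
    assert abs(w0 + w1 + w2 - 1.0) < 1e-9, "weights must sum to 1"
    smoothed = []
    for i, p in enumerate(p_frames):
        if i < 2:
            smoothed.append(p)  # seed the recursion with the raw values
        else:
            smoothed.append(w1 * smoothed[i - 1] + w2 * smoothed[i - 2] + w0 * p)
    return smoothed
```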
Any type and any number of image classifications consistent with the disclosed embodiments may be used. Further, depth map generation based on soft classification using an SVM classifier is only one example of depth map generation based on a weighted sum of preliminary depth maps, weighted by the calculated probabilities for the object classes. Other methods consistent with the present disclosure may also be adopted to implement depth map generation based on soft classification for 2D-to-3D image conversion.
If the final depth map x and the class label c are treated as unknown random variables, it can be shown mathematically that the weighted combination of preliminary depth maps is the Minimum Mean Square Error (MMSE) estimate of the final depth map, assuming the preliminary depth map xc generated for class c is the expected value of the depth given that the image is in class c. The MMSE estimate of X based on observation Y=y is defined by
E[X|Y=y]=∫x·p(x|Y=y)dx,
where p(x|Y=y) is the probability density function of the depth X given the 2D image Y=y, and the integral is taken over all possible depth values x.
Thus, the MMSE estimate can be written as

E[X|Y=y]=Σc E[X|C=c, Y=y]·p(C=c|Y=y),
where E[X|C=c, Y=y] is the expected depth given that the 2D image is Y=y and that the image belongs to class c, and p(C=c|Y=y) is the class probability given the 2D image. If the estimated preliminary depth xc for class C=c is assumed to be the expected depth E[X|C=c, Y=y], then the depth estimate computed by the previous equation is the MMSE estimate of the final depth.
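As a sketch under these assumptions, the weighted combination might be implemented as follows (function and variable names are illustrative):

```python
import numpy as np

def combine_depth_maps(preliminary_maps, class_probs):
    """MMSE-style weighted combination of per-class preliminary depth maps.

    preliminary_maps: dict mapping class label c to a 2-D depth map x_c
        (all maps of identical shape), each standing in for E[X|C=c, Y=y].
    class_probs: dict mapping class label c to p(C=c|Y=y).
    Returns the sum over c of x_c * p(C=c|Y=y).
    """
    labels = list(preliminary_maps)
    final = np.zeros_like(preliminary_maps[labels[0]], dtype=np.float64)
    for c in labels:
        final += class_probs[c] * preliminary_maps[c]
    return final
```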
In some embodiments, for example, a landscape object class may comprise landscape images containing natural scenes having vertically changing depths, while a city object class may comprise images of man-made objects such as buildings, roads, etc. A city object class may therefore be characterized by strong vertical and horizontal edges, while a landscape object class may be differentiated by its randomly distributed edge directions. Accordingly, the edge direction distribution may be one of the visual features distinguishing a landscape object class from a city object class. In some embodiments, an edge direction histogram may be employed for image classification. In certain embodiments, various classifiers, e.g., a Gaussian Bayes classifier, may be used to perform the classification based on the edge direction histogram.
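One way such a feature might be computed is sketched below, assuming a grayscale image stored as a NumPy array; the gradient operator, bin count, and magnitude threshold are illustrative choices rather than parameters from the disclosure:

```python
import numpy as np

def edge_direction_histogram(gray, n_bins=16, mag_thresh=0.1):
    """Normalized edge-direction histogram of a 2-D grayscale image.

    City scenes tend to concentrate mass in the bins near horizontal and
    vertical directions, while landscape scenes spread edge directions
    more uniformly.  `gray` is assumed to be a float array in [0, 1].
    """
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi   # edge direction is modulo 180 degrees
    strong = mag > mag_thresh          # keep only strong edges
    hist, _ = np.histogram(ang[strong], bins=n_bins, range=(0.0, np.pi))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(np.float64)
```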
In other embodiments, after a 2D image is classified into one of the image categories (or classes), it may be further classified into one of that category's subcategories (or subclasses). For example, in some embodiments, if a 2D image is classified as a structure image, it may be further classified as, e.g., an indoor image or a city image (also called an outdoor image). A city image is a picture taken outdoors whose main contents are man-made structures, such as buildings, cars, and so forth. A city image tends to have a uniform spatial lighting/color distribution. For example, in a city image, the sky may be blue and at the top of the image, while the ground is at the bottom. An indoor image, on the other hand, tends to have more varied color distributions. Therefore, in some embodiments, spatial color distribution features may be used to distinguish between an indoor image and a city image. In some embodiments, a color histogram may be employed for the image classification. In certain embodiments, various classifiers, e.g., a support vector machine, may be used to perform the classification based on the color histogram.
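A spatial color-distribution feature of this kind might be sketched as follows, assuming an 8-bit RGB image as a NumPy array; the grid size and per-channel bin count are illustrative:

```python
import numpy as np

def spatial_color_feature(rgb, grid=(4, 4), bins_per_channel=4):
    """Concatenated per-cell RGB histograms capturing spatial color layout.

    Splits the image into a grid of cells and computes a coarse color
    histogram per cell, so layout cues such as 'blue sky at the top'
    survive in the feature vector.  The vector would then be fed to a
    classifier such as an SVM.  `rgb` is assumed to be uint8 in [0, 255].
    """
    h, w, _ = rgb.shape
    rows, cols = grid
    feats = []
    for r in range(rows):
        for c in range(cols):
            cell = rgb[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            hist, _ = np.histogramdd(
                cell.reshape(-1, 3).astype(np.float64),
                bins=(bins_per_channel,) * 3,
                range=((0, 256),) * 3)
            feats.append(hist.ravel() / max(cell.shape[0] * cell.shape[1], 1))
    return np.concatenate(feats)
```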
For each image category, a different method may be developed or chosen to generate its respective depth map. The rationale behind this classification is that the geometric structures in different categories may differ, so depth assignment can be done in different ways. A depth map may be represented as a grayscale image in which the intensity value of each pixel registers that pixel's depth. An appropriate disparity between the left and right eye images (also called parallax) may then be calculated from the depth map. An image may contain different categories of sub-images arranged in different layouts. Accordingly, the method used to reconstruct a depth map may vary with the content of an image. Thus, in some embodiments, depth map generation may be based on an understanding of image content, e.g., image classification (and/or subclassification), and so forth.
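For illustration, a naive sketch of deriving disparity from a grayscale depth map and shifting pixels to form left and right eye views is given below. The linear depth-to-disparity model, the "nearer is brighter" convention, and the absence of hole filling are simplifying assumptions of this sketch, not the disclosed rendering method:

```python
import numpy as np

def depth_to_disparity(depth, max_disparity=16.0):
    """Map an 8-bit depth map to per-pixel disparity (parallax) in pixels.

    Assumes nearer pixels are brighter and a simple linear model; a real
    renderer would account for viewing distance and display geometry.
    """
    return (depth.astype(np.float64) / 255.0) * max_disparity

def render_stereo_pair(image, depth, max_disparity=16.0):
    """Shift each pixel horizontally by half its disparity, in opposite
    directions for the two eyes.  Holes left by the shifts are not filled."""
    h, w = depth.shape
    disparity = depth_to_disparity(depth, max_disparity)
    left, right = np.zeros_like(image), np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(disparity[y, x] / 2.0))
            if 0 <= x + d < w:
                left[y, x + d] = image[y, x]
            if 0 <= x - d < w:
                right[y, x - d] = image[y, x]
    return left, right
```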
In a single stage two-class classification, the probabilities that the image belongs to the two object classes are calculated in a single stage. Preliminary depth maps may be assigned to the object classes, or computed for the object classes based on features in the input 2D image and the corresponding object class properties. The final depth map is then computed as a weighted sum of the preliminary depth maps, with the calculated probabilities for the object classes serving as the weights.
In a multi-stage two-class classification, the probabilities for a first pair of object classes are calculated in a first stage, and the probabilities for other object classes are calculated in subsequent stages, which may include a second stage, a third stage, and so forth. The calculated probabilities are then combined to determine the probabilities that the received 2D image belongs to each of a plurality of combinations of object classes from the different stages. Similar to the single stage two-class classification, preliminary depth maps may be assigned or computed for the object classes as described above. The final depth map is then computed as a weighted sum of the preliminary depth maps, with the calculated probabilities for the object classes or combinations of object classes across the stages serving as the weights.
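Following the landscape/structure and indoor/city example above, combining probabilities across two stages might be sketched as follows; the class names and the assumption that the second-stage probabilities are conditioned on the structure class are drawn from that example for illustration:

```python
def combine_two_stage_probabilities(p_stage1, p_stage2_given_structure):
    """Combine two-stage two-class probabilities into final class weights.

    p_stage1: e.g., {"landscape": 0.3, "structure": 0.7}
    p_stage2_given_structure: second-stage probabilities conditioned on
        the structure class, e.g., {"indoor": 0.4, "city": 0.6}
    Returns probabilities for landscape and for each structure subclass:
        p(structure, subclass) = p(structure) * p(subclass | structure).
    """
    combined = {"landscape": p_stage1["landscape"]}
    for label, p in p_stage2_given_structure.items():
        combined[("structure", label)] = p_stage1["structure"] * p
    return combined
```

With the example numbers shown, this yields weights of 0.3, 0.28, and 0.42, which sum to 1 and can directly weight the corresponding preliminary depth maps.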
The object classes of landscape, man-made, city, and indoor images are only exemplary image classifications. Any type and any number of image classifications consistent with the disclosed embodiments may be used. The number of image object classes may be expanded within the disclosed framework, so that higher quality depth maps may be generated for images having a wider variety of content. In addition, any combination of the class probabilities in the different stages may be calculated. By combining the class probabilities across the different stages, the probabilities that the image belongs to different combinations of object classes may be calculated, and the final depth map determined as a weighted sum of the preliminary depth maps, weighted by the calculated probabilities for the object classes or combinations of object classes across the stages. The use of different combinations of object classes may further refine the final depth map.
It is understood that the above-described exemplary process flows are for illustrative purposes only.
It is understood that the components of 2D-to-3D image converter 106 described herein are presented for illustrative purposes only.
Image database 1702 may be used for storing a collection of data related to depth map generation for 2D-to-3D image conversion. The storage may be organized as a set of queues, a structured file, a flat file, a relational database, an object-oriented database, or any other appropriate database. Computer software, such as a database management system, may be utilized to manage and provide access to the data stored in image database 1702. Image database 1702 may store, among other things, configuration information for image content analysis, configuration information for depth map generation methods corresponding to content of images, configuration information for generating 3D images based on depth maps, etc.
The configuration information for image content analysis may include, for example, image classes/subclasses and/or methods for the above-described image categorization/subcategorization, or any other type of image content analysis. The configuration information for depth map generation methods may include, for example, methods for generating depth information based on results of content analysis (e.g., image categorization/subcategorization), as described above, or depth models, such as a simple sphere model or any more sophisticated 3D depth model corresponding to image content, and so forth.
The class probability calculator 1706 computes the probability that the 2D image belongs to each of the object classes. This computation may be performed, for example, using an SVM classifier in a single-stage or multi-stage two-class classification, as described above. The class probability calculator 1706 may further comprise a filter, for example, an infinite impulse response filter, to smooth the calculated probabilities and thereby prevent temporal flickering in the final depth map if the 2D image belongs to a fast action scene.
The content analysis results from image analyzer 1704 and the class probabilities calculated in class probability calculator 1706 are provided to the final depth map generator 1708. The final depth map generator 1708 may then determine a final depth map by computing a weighted sum of the preliminary depth maps generated in image analyzer 1704, using the class probabilities computed in class probability calculator 1706 for the object classes as the weights.
Based on the final depth map and the received 2D image, image rendering engine 1710 may create a 3D image according to configuration information acquired from, e.g., image database 1702, as described above. After the 3D image is generated, image rendering engine 1710 may render the 3D image for output, e.g., display, printing, etc.
The 2D-to-3D image converter 106 may be used to generate depth maps for any 2D images and video sequences, including digital pictures taken by 2D cameras, videos taken by 2D video cameras, live broadcasts, DVD/Blu-ray discs, and any other digital media. The depth maps may be used to render 3D content on 3D TVs, 3D photo frames, 3D monitors, and 3D laptops, and for 3D printing. The depth maps may also be used for multiview 3D rendering on autostereoscopic displays and TVs.
In some embodiments, during the above-described depth map generation and 2D-to-3D image conversion, each component of 2D-to-3D image converter 106 may store its computation/determination results in image database 1702 for later retrieval or for training purposes. Based on the historical data, 2D-to-3D image converter 106 may train itself for improved performance.
The methods disclosed herein may be implemented as a computer program product, i.e., a computer program tangibly embodied in a non-transitory information carrier, e.g., in a machine-readable storage device, or a tangible non-transitory computer-readable medium, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
A portion or all of the methods disclosed herein may also be implemented by an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), a printed circuit board (PCB), a digital signal processor (DSP), a combination of programmable logic components and programmable interconnects, a single central processing unit (CPU) chip, a CPU chip combined on a motherboard, a general purpose computer, or any other combination of devices or modules capable of performing depth map generation for 2D-to-3D image conversion based on the soft classification method disclosed herein.
In the preceding specification, the invention has been described with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made without departing from the broader spirit and scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded as illustrative rather than restrictive. Other embodiments of the invention may be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein.