External depth map transformation method for conversion of two-dimensional images to stereoscopic images

Information

  • Patent Grant
  • Patent Number
    9,241,147
  • Date Filed
    Wednesday, May 1, 2013
  • Date Issued
    Tuesday, January 19, 2016
Abstract
External depth map transformation method for conversion of two-dimensional images to stereoscopic images that provides increased artistic and technical flexibility and rapid conversion of movies for stereoscopic viewing. Embodiments of the invention convert a large set of highly granular depth information inherent in a depth map associated with a two-dimensional image to a smaller set of rotated planes associated with masked areas in the image. This enables the planes to be manipulated independently or as part of a group, and eliminates many problems associated with importing external depth maps, for example by minimizing errors that frequently exist in such depth maps.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


One or more embodiments of the invention are related to the field of image processing. More particularly, but not by way of limitation, one or more embodiments of the invention enable an external depth map transformation method for conversion of two-dimensional images to stereoscopic images that provides increased artistic and technical flexibility and rapid conversion of movies for stereoscopic viewing. Embodiments of the invention convert a large set of highly granular depth information inherent in a depth map associated with a two-dimensional image to a smaller set of rotated planes associated with masked areas in the image. This enables the planes to be manipulated independently or as part of a group, and eliminates many problems associated with importing external depth maps, for example by minimizing errors that frequently exist in such depth maps.


2. Description of the Related Art


Two-dimensional images contain no depth information and hence appear the same to an observer's left and right eye. Two-dimensional images include paper photographs or images displayed on a standard computer monitor. Two-dimensional images may include shading and lighting that provide the observer a sense of depth for portions of the image; however, this is not considered a three-dimensional view of an image. Three-dimensional images on the other hand include image information that differs for each eye of the observer. Three-dimensional images may be displayed in an encoded format and projected onto a two-dimensional display. This enables three-dimensional or stereoscopic viewing, for example with anaglyph glasses or polarized glasses. Other displays may provide different information based on the observer's orientation with respect to the display, e.g., autostereoscopic displays that do not require special glasses for viewing three-dimensional images on a flat two-dimensional display. An example of such a display is a lenticular display. Alternatively, two images that are shown alternately to the left and right eyes may be viewed with shutter glasses. Regardless of the type of technology involved, conversion of two-dimensional images to stereoscopic images requires the addition of depth information to the two-dimensional input image.


Current solutions for conversion of two-dimensional images to stereoscopic images fall into two broad categories.


The first category involves systems that convert two-dimensional images into three-dimensional images where the two-dimensional images have no associated depth maps or other depth information. Systems in this category may be automated to provide depth information based on colors or areas of the picture, but these systems have had limited success. Other systems in this category require large amounts of manual labor for highly accurate results. These manual masking systems generally operate by accepting manually created masks that define areas or regions in the image that have different depths and that generally represent different human-observable objects. Depth information is then accepted by the system as input, for example from artists, which results in nearer objects being shifted relatively further horizontally to produce left and right eye viewpoints or images, or Red/Blue anaglyph single-image encodings, either of which may be utilized for stereoscopic viewing. By shifting objects in the foreground, hidden or background information may be exposed. If the missing image data is not shown in any other image in the scene, then the “gap” must be filled with some type of image data to cover the artifact. If the hidden image data does not exist in any other image in the scene, pixels cannot be borrowed from other images to supply the missing information. Various algorithms, also known as occlusion filling algorithms, exist for filling these gaps to minimize the missing information, with varying success. Generally, the depth artist gains visual clues from the image and applies depth to masks using artistic input.


The main problems with this first category of conversion are the time required for conversion, owing to the large amount of manual labor, and the expense of the conversion process.


The second category involves systems that convert two-dimensional images that have associated depth maps or other depth information into three-dimensional images. The depth information may be obtained by the system from an external “time of flight” system, where light from a laser, for example, is sent towards the subject and timed to determine the distance after the light reflects back from the subject. The depth information may also be obtained by the system from a “triangulation” system, which determines the angles to a subject, for example from two sensors that are a known distance away from one another. Another apparatus that may obtain depth is a light-field or plenoptic camera having multiple lenses. A recent development has been the three-camera system that includes a high-resolution camera and two lower-resolution side cameras or “witness cameras” mounted next to the high-resolution camera. A depth map may be calculated from the disparity between the two side camera images and applied to the image obtained from the high-resolution camera to generate stereoscopic images. Any missing information may be filled with image data from the side cameras to minimize artifacts such as missing or hidden information, even if not at the same resolution. Another advantage of the trifocal system is the elimination of heavy and expensive stereo camera systems that have two large, optically identical and perfectly aligned lenses.


However, there are many problems that occur when using an externally generated depth map as a Z-depth. This includes any depth map created from a disparity map that is generated from a stereoscopic pair of images, for example captured with a two-lens stereo camera or with the witness cameras of the trifocal system. One of the main problems is that depth maps provided by external systems are noisy and may include inaccurate edges, spikes and spurious values, all of which are common in Z-depths generated from external systems. Another problem is that because the depth maps correspond to the associated two-dimensional image either on a pixel-by-pixel basis or at least at a fairly high resolution, manipulating depth at this fine granularity is extremely difficult and time consuming. These types of systems are generally directed at automatically converting video or movies for stereoscopic viewing, for example without masking objects and without the labor associated therewith. Artifacts on edges of objects are common in some systems, limiting their overall automation capabilities.


For at least the limitations described above, there is a need for an external depth map transformation method for conversion of two-dimensional images to stereoscopic images.


BRIEF SUMMARY OF THE INVENTION

One or more embodiments described in the specification are related to an external depth map transformation method for conversion of two-dimensional images to stereoscopic images that provides increased artistic and technical flexibility and rapid conversion of movies for stereoscopic viewing. Embodiments of the invention convert a large set of highly granular depth information inherent in a depth map associated with a two-dimensional image to a smaller set of rotated planes associated with masked areas in the image. This enables the planes to be manipulated independently or as part of a group, and eliminates many problems associated with importing external depth maps, for example by minimizing errors that frequently exist in such depth maps.


Embodiments of the invention may utilize any type of depth map, including Z-Depth associated with images that are generated through rendering from a Computer Generated Imagery or CGI application such as MAYA® or HOUDINI®, depth maps obtained after conversion of a disparity map from a stereoscopic pair of images to a Z-Depth, Z-Depth extraction from a light-field image, time-of-flight imaging systems, LIDAR, or any other type of depth map associated with a two-dimensional image.


Embodiments of the invention include a number of inherent advantages over simply using the Z-Depths as is currently performed in automated or semi-automated 2D to 3D conversion processes.


For example, embodiments of the invention transform the large set of depth map depths or Z-Depth into a manageable number of parts. Thus, the system enables an artist to manipulate individual parts or groups of parts for artistic purposes, as opposed to pixel-by-pixel editing. So, for example, an artist may independently adjust the angle, and hence the depth, of a robot's arm so that the resulting stereoscopic image appears to reach out of the screen.


In addition, by transforming the Z-Depth into a manageable number of parts, the system enables an artist to group these parts and apply separate RGB image layers to these groups. This enables more efficient occlusion filling in the 2D to 3D conversion workflow.


Furthermore, embodiments of the invention mold depth data to eliminate depth errors by transforming large numbers of depth values into a smaller number of plane rotations. In one embodiment, the system may calculate the normal and position for a specific region, for example to form an average rotation value associated with a plane that represents a large group of depth values, some of which may be erroneous. Hence, issues associated with imperfect depth map data are often averaged out, or otherwise eliminated. In some extreme cases of noisy depth data, these issues may not be fully resolved; however, embodiments of the invention reduce the problem to a manageable number of editable parts, and enable the issues to be rapidly and easily corrected automatically or by accepting inputs from an artist. One or more embodiments of the invention may utilize a normal vector algorithm. Other algorithms may be utilized alone or in combination with the normal vector method to achieve similar or advantageous results. For example, embodiments of the invention may treat each pixel as a point in space, e.g., wherein X and Y represent the position of the pixel and Z represents the Z-Depth value of that pixel, isolate only the points within the defined region, and calculate the “best-fit” plane for that group of points, and/or a normal vector representation of the plane. The normal vector in this embodiment is orthogonal to the plane and may be encoded into separate RGB channels in order to provide a viewable representation of the planar angles with respect to the optical display. Embodiments of the invention may utilize any type of plane fitting algorithm including, but not limited to, regression plane, orthogonal distance regression plane, etc. Embodiments of the invention may utilize any type of filtering as part of the transformation processing, including but not limited to dilation and erosion.
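As a concrete, non-authoritative illustration of the best-fit plane idea described above, the following Python sketch fits a least-squares plane z = a·x + b·y + c to the depth values inside one masked region and returns the plane coefficients and the unit normal vector. The function name, the NumPy-array inputs and the least-squares choice are assumptions made for illustration; the patent does not prescribe a particular implementation.

```python
import numpy as np

def fit_plane_to_masked_depth(depth_map, mask):
    """Fit a least-squares plane z = a*x + b*y + c to the depth values inside
    a masked region and return the plane coefficients plus its unit normal.

    depth_map : 2-D array of Z-Depth values (one value per pixel)
    mask      : boolean 2-D array selecting the region of interest
    """
    ys, xs = np.nonzero(mask)                 # pixel coordinates inside the mask
    zs = depth_map[ys, xs].astype(float)      # corresponding depth samples

    # Solve [x y 1] @ [a b c]^T = z in the least-squares sense.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)

    # The plane z = a*x + b*y + c has normal (-a, -b, 1); normalize it.
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    return (a, b, c), normal
```

Because the fit pools every depth sample in the region, isolated spikes or spurious values in the external depth map have little influence on the resulting plane, which is the averaging effect described above.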


One or more embodiments of the invention implement a method on a computer, for example, wherein the method includes obtaining an external depth map associated with a two-dimensional image, obtaining at least one mask associated with at least one area within the two-dimensional image, calculating a fit or best fit for a plane using a computer based on depth associated with the at least one area associated with each of the at least one mask, and applying depth associated with the plane having the fit to the at least one area to shift pixels in the two-dimensional image horizontally to produce a stereoscopic image or stereoscopic image pair.


Embodiments of the method may also include obtaining of the external depth map associated with a two-dimensional image by obtaining a disparity map, or a depth map of lower resolution than the two-dimensional image from a pair of witness cameras, or a depth map from a time-of-flight system, or a depth map from a triangulation system.
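Where the external depth information arrives as a disparity map from a witness-camera pair, the standard pinhole-stereo relation Z = f·B/d converts it to a Z-Depth. A minimal sketch is shown below; the function name and parameters are illustrative assumptions rather than anything specified by the patent.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline):
    """Convert a disparity map (in pixels) from a witness-camera pair into a
    Z-Depth map using the standard stereo relation Z = f * B / d.

    focal_length_px : focal length expressed in pixels
    baseline        : distance between the two cameras (depth comes out in
                      the same units as the baseline)
    Pixels with zero or negative disparity are marked unknown (NaN).
    """
    depth = np.full(disparity.shape, np.nan, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline / disparity[valid]
    return depth
```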


Embodiments of the invention may also include obtaining at least one mask associated with at least one area within the two-dimensional image by automatically generating the at least one mask comprising the at least one area wherein the at least one area is over a predefined size and within a predefined depth range, or automatically generating the at least one mask comprising the at least one area wherein the at least one area comprises a boundary having a difference in luminance values over a predefined threshold, or both methods of size, depth range and boundary or any combination thereof.


Embodiments of the invention may also include calculating the best fit for a plane using a computer based on depth associated with the at least one area associated with each of the at least one mask by calculating a normal vector for the plane, or a regression fit for the plane, or an orthogonal distance regression fit for the plane, or in any other known manner regarding fitting a plane to particular points in three-dimensional space.


Embodiments of the invention generally also include applying depth associated with the plane having the best fit to the at least one area to shift pixels in the two-dimensional image horizontally to produce a stereoscopic image or stereoscopic image pair.


Embodiments may also include grouping two or more of the planes in order to provide a piecewise masked surface area. The grouping may include a link of a predefined minimum and maximum distance, which enables moving one plane to move other grouped planes if the maximum values are hit. The minimum values may be zero or negative to allow precise joining of planes or slight overlap, for example. In one or more embodiments, the grouping may include a link having a spring constant, which enables planes moved relative to one another to self-align with respect to the other planes so as to minimize the overall spring force on three or more of the corner points of the plane. Alternatively or in combination, embodiments of the invention may include automatically altering any combination of position, orientation, shape, depth or curve of said plane in order to fit edges or corners of the plane with another plane. This enables a plane to be positioned in three-dimensional space, rotated in three dimensions, reshaped by moving a corner point, or warped in effect by adding depth or a curve to the plane, for example to add depth to the plane itself to match the underlying image data. Embodiments of the invention may also include accepting an input to alter any combination of position, orientation, shape, depth or curve of the plane, for example to artistically fit the underlying image data, correct errors or artifacts from the automated fitting process for touch up, etc.





BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.


The above and other aspects, features and advantages of the invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings wherein:



FIG. 1 illustrates an exemplary overall system architecture for one or more embodiments of the invention.



FIG. 2 illustrates a flow chart for an embodiment of the method of the invention.



FIG. 3 illustrates an example two-dimensional input image.



FIG. 4 illustrates an example set of masks applied to areas or regions of the two-dimensional input image.



FIG. 5A illustrates a perfect, error-free input depth map associated with the two-dimensional image. FIG. 5B illustrates a typical depth map having erroneous depth data as acquired from or calculated from an external depth capture apparatus or system.



FIG. 6A illustrates a perfect normal map showing colors associated with angles of the surface of pixels within the two-dimensional image of FIG. 5A. FIG. 6B illustrates an imperfect normal map associated with erroneous depth data as acquired from or calculated from an external depth capture apparatus or system as shown in FIG. 5B.



FIGS. 7A-C illustrate a perspective, side and top view of perfect, error-free depth applied to the two-dimensional input image. FIGS. 7D-F illustrate a perspective, side and top view of imperfect typical depth having errors, acquired from an external depth capture apparatus or system and applied to the two-dimensional input image.



FIGS. 8A-C illustrate a perspective, side and top view of perfect, error-free depth applied to planes and/or masks of areas or regions associated with the two-dimensional input image. FIGS. 8D-F illustrate a perspective, side and top view of imperfect typical depth acquired from an external depth capture apparatus or system and applied to planes and/or masks of areas or regions associated with the two-dimensional input image.



FIG. 9A illustrates a close-up view of the imperfect typical depth transformed into rotated planes, which may or may not be flat planes, or curved planes or surfaces, and which yield a manageable set of information that enables correction of the errors automatically by the system or through manual artist input accepted and processed by the system. FIG. 9B illustrates a close-up view of a portion of FIG. 9A showing plane grouping with or without minimum and maximum connections and with or without spring attachments having a spring constant, for example.





DETAILED DESCRIPTION OF THE INVENTION

An external depth map transformation method for conversion of two-dimensional images to stereoscopic images will now be described. In the following exemplary description numerous specific details are set forth in order to provide a more thorough understanding of embodiments of the invention. It will be apparent, however, to an artisan of ordinary skill that embodiments of the invention may be practiced without incorporating all aspects of the specific details described herein. In other instances, specific features, quantities, or measurements well known to those of ordinary skill in the art have not been described in detail so as not to obscure the invention. Readers should note that although examples of the invention are set forth herein, the claims, and the full scope of any equivalents, are what define the metes and bounds of the invention.



FIG. 1 illustrates an exemplary overall system architecture for one or more embodiments of the invention. As shown, camera 101 and associated external depth capture apparatus 102, e.g., a single apparatus such as a light-field or time-of-flight apparatus, or, as shown, two side-mounted witness cameras for example, are utilized to capture images and depth values associated with foreground object 150 and background object 151 in scene 152. Generally, multiple images are captured at fixed points in time to obtain a movie. Other embodiments may utilize depth captured from or input via other systems not directly coupled with camera 101, including computer-generated depths associated with computer-animated characters or objects, external LIDAR, a plenoptic camera or any other external depth capture apparatus for example. Embodiments of the system generally include a computer such as server 110 that enables artist(s) 160 to apply and/or correct depth, for example as obtained from the external depth capture apparatus 102 or any other external source of depth. Embodiments of the server are generally utilized to obtain masks associated with areas within the foreground object 150, background object 151 or elsewhere in the scene 152. The masks may be automatically generated or accepted by the server, for example from artist 160. The server may automatically calculate a fit for planes or masks associated with areas of the images or accept inputs to alter the fit from artist 160 for example. The system may optionally also automatically alter the position, orientation, shape, depth or curve of planes or masks to fit the edges of the planes or masks with other planes or masks for example. Embodiments apply the depth of the planes or masks to the areas in the image to produce a stereoscopic image, e.g., anaglyph, or stereoscopic image pair for display on visual output 120 and viewing by viewer 170.



FIG. 2 illustrates a flow chart for an embodiment of the method of the invention. The method includes obtaining an external depth map associated with a two-dimensional image at 201, obtaining at least one mask associated with at least one area within the two-dimensional image at 202, and calculating a fit or best fit for a plane using a computer based on depth associated with the at least one area associated with each of the at least one mask at 203. Optionally, at 204, embodiments of the method may also automatically alter the position, orientation, shape, depth or curve of planes or masks to fit the edges of the planes or masks with other planes or masks, for example. At 205, depth associated with the plane having the fit is applied to the at least one area to shift pixels in the two-dimensional image horizontally to produce a stereoscopic image or stereoscopic image pair. One or more embodiments thus enable an external depth map transformation method for conversion of two-dimensional images to stereoscopic images that provides increased artistic and technical flexibility and rapid conversion of movies for stereoscopic viewing. Embodiments of the invention convert a large set of highly granular depth information inherent in a depth map associated with a two-dimensional image to a smaller set of rotated planes associated with masked areas in the image. This enables the planes to be manipulated independently or as part of a group, and eliminates many problems associated with importing external depth maps, for example by minimizing errors that frequently exist in such depth maps.
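To make the flow of FIG. 2 concrete, the sketch below strings the steps together in Python, reusing the hypothetical fit_plane_to_masked_depth helper from the earlier sketch and assuming that lower depth values are nearer to the observer (as in FIG. 5A). It is a simplified illustration only: occlusion filling, plane grouping and the optional edge-fitting step 204 are omitted.

```python
import numpy as np

def convert_to_stereo(image, depth_map, masks, max_shift_px=20):
    """Simplified walk through steps 201-205: fit a plane to each masked
    region, evaluate the plane's depth at every pixel of that region, and
    shift those pixels horizontally to build left/right views. Nearer
    (lower) depth values receive larger shifts.
    """
    left, right = image.copy(), image.copy()
    near, far = float(np.nanmin(depth_map)), float(np.nanmax(depth_map))
    depth_range = max(far - near, 1e-6)       # guard against a flat depth map

    for mask in masks:                                              # step 202
        (a, b, c), _ = fit_plane_to_masked_depth(depth_map, mask)   # step 203
        ys, xs = np.nonzero(mask)
        plane_depth = a * xs + b * ys + c        # depth taken from the fitted plane
        shift = np.round(max_shift_px * (far - plane_depth) / depth_range)
        shift = np.clip(shift, 0, max_shift_px).astype(int)
        left[ys, np.clip(xs + shift, 0, image.shape[1] - 1)] = image[ys, xs]   # step 205
        right[ys, np.clip(xs - shift, 0, image.shape[1] - 1)] = image[ys, xs]
    return left, right
```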



FIG. 3 illustrates an example two-dimensional input image. As shown, foreground object 150, here a CGI human torso, and background object 151, here a box, form a scene in two-dimensional space with no depth information encoded into the image.



FIG. 4 illustrates an example set of masks applied to areas or regions of the two-dimensional input image. As shown, masks 401 and 402 are utilized to represent sides of the box 151a with flat planes, while masks 403, 404, 405, 406 are utilized to represent portions of the human torso with flat and/or curved planes, namely the front of the pectoral area, stomach area, front inner shoulder area, and front bicep area of the human torso. The other box in the background, box 151b, is also shown with masks that are covered by the foreground object, e.g., the human torso. Embodiments of the invention may also include obtaining at least one mask associated with at least one area within the two-dimensional image by automatically generating the at least one mask comprising the at least one area wherein the at least one area is over a predefined size and within a predefined depth range, or automatically generating the at least one mask comprising the at least one area wherein the at least one area comprises a boundary having a difference in luminance values over a predefined threshold, or both methods of size, depth range and boundary, or any combination thereof. As shown, masks representing flat planes 401 and 402 may be obtained by determining that they are over N pixels by M pixels and have orthogonal vector values within a K to L range. The test regions for generating masks may be iterated through the image and enlarged to fit the underlying area if the depth data is within the K to L range, for example. Masks 403-406 may be curved or faceted or flat to represent the shape of any underlying portion that is within a predefined size and/or vector range, for example. Any algorithm for detecting object types and assigning predefined mask groups to represent the underlying two-dimensional image is in keeping with the spirit of the invention.
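As one possible, simplified reading of the automatic mask generation described above (regions over a predefined size and within a predefined depth range), the following sketch quantizes the depth map into bands and keeps connected regions that exceed a size threshold. The band edges and pixel threshold stand in for the N-by-M and K-to-L parameters and are assumptions for illustration; luminance-boundary masking is not shown.

```python
import numpy as np
from scipy import ndimage

def generate_masks(depth_map, depth_bins, min_pixels=500):
    """Quantize the depth map into bands, label connected regions inside each
    band, and keep only regions over a predefined size.

    depth_bins : sequence of band edges standing in for the K-to-L depth ranges
    min_pixels : predefined size threshold standing in for the N-by-M test
    """
    masks = []
    for lo, hi in zip(depth_bins[:-1], depth_bins[1:]):
        in_band = (depth_map >= lo) & (depth_map < hi)
        labels, count = ndimage.label(in_band)       # connected components in the band
        for region_id in range(1, count + 1):
            mask = labels == region_id
            if mask.sum() >= min_pixels:             # keep sufficiently large regions
                masks.append(mask)
    return masks
```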



FIG. 5A illustrates a perfect, error-free input depth map associated with the two-dimensional image. As shown, darker luminance values are encoded as nearer to the observer, although the reverse encoding may be utilized. There is no requirement that the depth map be in a viewable format; however, this encoding may enable intuitive viewing of depth errors, for example. If all depth maps were error free, then embodiments of the invention may still be utilized to alter depth associated with the two-dimensional image in a manageable manner. However, since most depth maps are not perfect, embodiments of the invention enable error minimization and manageability unknown in the art.



FIG. 5B illustrates a typical depth map having erroneous depth data 501 as acquired from or calculated from an external depth capture apparatus or system. This problem makes some external depth maps nearly unusable for automated depth conversion; however, embodiments of the system enable use of error-prone depth maps in a manageable manner.


Embodiments of the method may also include obtaining of the external depth map associated with a two-dimensional image by obtaining a disparity map, or a depth map of lower resolution than the two-dimensional image from a pair of witness cameras, or a depth map from a time-of-flight system, or a depth map from a triangulation system. Embodiments of the invention may also include obtaining any type of depth map at 201, including Z-Depth associated with images that are generated through rendering from a Computer Generated Imagery or CGI application such as MAYA® or HOUDINI® as shown for example in FIG. 5A, depth maps obtained after conversion of a disparity map from a stereoscopic pair of images to a Z-Depth, for example as shown in FIG. 5B, Z-Depth extraction from a light-field image, time-of-flight imaging systems, LIDAR, or any other type of depth map associated with a two-dimensional image.


Embodiments of the invention include a number of inherent advantages over simply using the Z-Depths as is currently performed in automated or semi-automated 2D to 3D conversion processes.


For example, embodiments of the invention transform the large set of depth map depths or Z-Depth into a manageable number of parts. Thus, the system enables artist 160 to manipulate individual parts or groups of parts for artistic purposes, as opposed to pixel-by-pixel editing. So, for example, the artist may independently adjust the angle, and hence the depth, of a robot's arm so that the resulting stereoscopic image appears to reach out of the screen. In one or more embodiments, the planes may be grouped and movement or reshaping of a plane in two or three dimensions may move or reshape other grouped or otherwise coupled planes.


In addition, by transforming the Z-Depth into a manageable number of parts, the system enables an artist to group these parts and apply separate RGB image layers to these groups. This enables more efficient occlusion filling in the 2D to 3D conversion workflow.



FIG. 6A illustrates a perfect normal map showing colors associated with angles of the surface of pixels within the two-dimensional image of FIG. 5A. For example, by encoding separate RGB channels with the X, Y and Z information of the vector orthogonal to the particular area, a viewable format for normal vectors is obtained. If X increases from left to right, and Red is utilized as the X vector channel, then areas pointing to the right, for example, have a higher red value. If Y increases in the vertical direction and Green is utilized for the Y channel, then areas having an orthogonal vector pointing upward have a higher green value. Any other encoding for visually viewable normal vector formats is in keeping with the spirit of the invention.
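A minimal sketch of one such encoding is shown below, assuming unit normals stored per pixel and the conventional mapping of each component from [-1, 1] to [0, 255]; the exact channel assignment is an assumption chosen to match the red-for-X, green-for-Y description above.

```python
import numpy as np

def encode_normals_to_rgb(normals):
    """Encode an H x W x 3 array of unit normal vectors into a viewable RGB
    normal map: each component in [-1, 1] maps to [0, 255], so surfaces
    pointing right gain red, surfaces pointing up gain green, and surfaces
    facing the viewer gain blue.
    """
    rgb = np.round((normals + 1.0) * 0.5 * 255.0)
    return np.clip(rgb, 0, 255).astype(np.uint8)
```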



FIG. 6B illustrates an imperfect normal map associated with erroneous depth data as acquired from or calculated from an external depth capture apparatus or system as shown in FIG. 5B. This image shows the severe problems with error-prone depth data: error area 501 indicates an area that appears to point upward when it really does not (see FIG. 6A for the error-free portion of the image).



FIGS. 7A-C illustrate a perspective, side and top view of perfect, error-free depth applied to the two-dimensional input image. If depth is error free, then the foreground object may be viewed with depth without errors in the foreground object; however, depending on the depth capture system, missing background data may exist and cause artifacts if there are no other images in the scene that have the missing background information. This occurs for example if the foreground object does not move with respect to the background during the scene.



FIGS. 7D-F illustrate a perspective, side and top view of imperfect typical depth having errors, acquired from an external depth capture apparatus or system and applied to the two-dimensional input image. As shown, portions of the torso stretch outward when they should not, and errors in the left box 151 are shown as well, with a surface that should be flat appearing curved near the top. The errors in the depth shown as applied in FIGS. 7D-F are difficult for an artist to fix since the errors occur at a small granular level, for example pixel or sub-pixel, and require tedious paint-style operations to correct. In addition, altering the depth of a visible area such as the arm is also quite difficult and economically unfeasible since the operations occur at the pixel level and are not associated with a mask or masks that represent the arm, for example.



FIGS. 8A-C illustrate a perspective, side and top view of perfect, error-free depth applied to planes and/or masks of areas or regions associated with the two-dimensional input image. If error-free depth maps are obtained by the system, then applying the masks of FIG. 4, after calculation of plane rotations for example, results in a readily editable image.



FIGS. 8D-F illustrate a perspective, side and top view of imperfect typical depth acquired from an external depth capture apparatus or system and applied to planes and/or masks of areas or regions associated with the two-dimensional input image. As shown, many of the errors shown in FIGS. 7D-F are entirely or nearly entirely eliminated, which enables rapid adjustment of depth or correction of depth errors, if needed at all, depending on the quality requirements of the particular project. For example, through calculation of best-fit planes (whether flat or curved) at 203 as detailed above with respect to FIG. 2, many of the errors are minimized or otherwise averaged away.


Whether using a perfect depth map that is error free or not, embodiments of the invention may also include calculating the best fit for a plane using a computer based on depth associated with the at least one area associated with each of the at least one mask by calculating a normal vector for the plane, or a regression fit for the plane, or an orthogonal distance regression fit for the plane, or in any other known manner regarding fitting a plane to particular points in three-dimensional space. Specifically, embodiments of the invention mold depth data to eliminate depth errors by transforming large numbers of depth values into a smaller number of plane rotations. In one embodiment, the system may calculate the normal and position for a specific region, for example to form an average rotation value associated with a plane that represents a large group of depth values, some of which may be erroneous. Hence, issues associated with imperfect depth map data are often averaged out, or otherwise eliminated. In some extreme cases of noisy depth data, these issues may not be fully resolved; however, embodiments of the invention reduce the problem to a manageable number of editable parts, and enable the issues to be rapidly and easily corrected automatically or by accepting inputs from an artist. Although embodiments of the invention may utilize a normal vector approach, other algorithms may be utilized alone or in combination to achieve similar or advantageous results. For example, embodiments of the invention may treat each pixel as a point in space, e.g., wherein X and Y represent the position of the pixel and Z represents the Z-Depth value of that pixel, isolate only the points within the defined region, and calculate the “best-fit” plane for that group of points. Embodiments of the invention may utilize any type of plane fitting algorithm including but not limited to regression plane, orthogonal distance regression plane, etc. Commonly available statistics toolboxes include orthogonal regression using principal components analysis, for example, which may be utilized as off-the-shelf software components for calculating best-fitting planes for a number of points, for example to minimize the perpendicular distances from each of the points to the plane. Embodiments of the invention may utilize any type of filtering as part of the transformation processing, including but not limited to dilation and erosion. In one or more embodiments, an algorithm that iterates over a set of depth slopes and averages the slopes over an area is one example of an algorithm that may be utilized to calculate the normal vector for a particular area of the depth map.
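The orthogonal distance regression mentioned above can be realized with a principal-components step, as in the following sketch, which fits a plane by minimizing perpendicular distances from the points. The NumPy/SVD formulation and the function name are illustrative assumptions, one of several off-the-shelf ways to compute such a fit.

```python
import numpy as np

def orthogonal_regression_plane(points):
    """Fit a plane to an (N, 3) array of (x, y, z) points by minimizing the
    perpendicular distances to the plane: the plane passes through the
    centroid and its normal is the direction of least variance, i.e. the
    right singular vector for the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1] / np.linalg.norm(vt[-1])
    return centroid, normal
```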



FIG. 9A illustrates a close-up view of the imperfect typical depth transformed into rotated planes, which may or may not be flat planes, or curved planes or surfaces, and which yield a manageable set of information that enables correction of the errors automatically by the system or through manual artist input accepted and processed by the system. As shown, even with a high degree of erroneous depth data, the masks are low in number, and thus are far easier to adjust to correct depth associated with the two-dimensional image.



FIG. 9B illustrates a close-up view of a portion of FIG. 9A showing plane grouping with or without minimum and maximum connections and with or without spring attachments having a spring constant, for example. In one or more embodiments, plane grouping may be visibly shown and editable to enable grouped planes to be repositioned by accepting a drag or other user input, wherein the couplings 902 and 903 thus pull or otherwise drag or reposition attached planes. Any algorithm, such as an iterative algorithm, for moving attached planes with minimum and maximum coupling distances or spring constants is in keeping with the spirit of the invention. One such method iterates over each coupling after accepting a user move and moves each attached plane. If a coupling reaches its maximum distance, then the grouped planes move as well. If the couplings have a spring constant, i.e., F = k*x, then a force may be imparted on the other planes, which may have masses associated with them, for example based on size, to visually show non-rigid coupling moves of multiple grouped planes.
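As a toy illustration of the F = k*x coupling just described, the following sketch relaxes spring-linked planes along the depth axis only until the net spring forces settle. The coupling tuples, constants and one-dimensional treatment are assumptions made purely to keep the example short; a real implementation would act on full plane corner points and honor minimum and maximum link distances.

```python
def relax_spring_couplings(depths, couplings, k=0.1, iterations=200):
    """Toy relaxation of spring-coupled planes along the depth axis only.
    Each coupling (i, j, rest_length) exerts a force F = k * x on both planes,
    where x is the stretch beyond the rest length; repeated small updates let
    grouped planes settle so the net spring force is minimized.
    """
    depths = list(depths)
    for _ in range(iterations):
        for i, j, rest in couplings:
            stretch = (depths[j] - depths[i]) - rest   # signed extension x
            force = k * stretch                        # F = k * x
            depths[i] += force                         # pull the coupled planes together
            depths[j] -= force
    return depths
```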


Alternatively or in combination, embodiments of the invention may include altering automatically any combination of position, orientation, shape, depth or curve of said plane in order to fit edges or corners of the plane with another plane. This enables a plane to be positioned in three-dimensional space, rotated in three-dimensions, reshaped by moving a corner point, warped in effect by adding depth or a curve to the plane, for example to add depth to the plane itself to match the underlying image data. Embodiments of the invention may also include accepting an input to alter any combination of position, orientation, shape, depth or curve of the plane, for example to artistically fit the underlying image data, correct errors or artifacts from the automated fitting process for touch up, etc.


Embodiments of the invention generally also include applying depth associated with the plane having the best fit to the at least one area to shift pixels in the two-dimensional image horizontally to produce a stereoscopic image or stereoscopic image pair. Any type of output that is capable of providing different left and right eye information is in keeping with the spirit of the invention.
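For output formats, one option mentioned in the description is a single-image anaglyph. A minimal sketch of combining the shifted left/right views into a red/cyan anaglyph follows; it assumes 8-bit RGB arrays and is illustrative only, since any output carrying distinct left- and right-eye information is acceptable.

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine left/right eye views (H x W x 3, 8-bit RGB) into a single
    red/cyan anaglyph: red from the left-eye image, green and blue from the
    right-eye image.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red channel from the left eye
    anaglyph[..., 1:] = right[..., 1:]   # green and blue channels from the right eye
    return anaglyph
```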


While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.

Claims
  • 1. An external depth map transformation method for conversion of two-dimensional images to stereoscopic images comprising: obtaining an external depth map associated with a two-dimensional image; obtaining at least one mask associated with at least one area within the two-dimensional image; calculating a best fit for a plane using a computer based on depth associated with the at least one area associated with each of the at least one mask; applying depth associated with the plane having the best fit to the at least one area to shift pixels in the two-dimensional image horizontally to produce a stereoscopic image or stereoscopic image pair; and, altering automatically using said computer, any combination of position, orientation, shape, depth or curve of the plane in order to fit edges or corners of the plane with another plane.
  • 2. The method of claim 1 wherein the obtaining of the external depth map associated with a two-dimensional image comprises obtaining a disparity map.
  • 3. The method of claim 1 wherein the obtaining of the external depth map associated with a two-dimensional image comprises obtaining a depth map of lower resolution than the two-dimensional image from a pair of witness cameras.
  • 4. The method of claim 1 wherein the obtaining of the external depth map associated with a two-dimensional image comprises obtaining a depth map from time-of-flight or light-field system.
  • 5. The method of claim 1 wherein the obtaining of the external depth map associated with a two-dimensional image comprises obtaining a depth map from a triangulation system.
  • 6. The method of claim 1 wherein the obtaining at least one mask associated with at least one area within the two-dimensional image comprises automatically generating the at least one mask comprising the at least one area wherein the at least one area is over a predefined size and within a predefined depth range.
  • 7. The method of claim 1 wherein the obtaining at least one mask associated with at least one area within the two-dimensional image comprises automatically generating the at least one mask comprising the at least one area wherein the at least one area comprises a boundary having a difference in luminance values over a predefined threshold.
  • 8. The method of claim 1 wherein the calculating the best fit for the plane using a computer based on depth associated with the at least one area associated with each of the at least one mask comprises calculating a normal vector for the plane.
  • 9. The method of claim 1 wherein the calculating the best fit for the plane using a computer based on depth associated with the at least one area associated with each of the at least one mask comprises calculating a regression fit for the plane.
  • 10. The method of claim 1 wherein the calculating the best fit for the plane using a computer based on depth associated with the at least one area associated with each of the at least one mask comprises calculating an orthogonal distance regression fit for the plane.
  • 11. The method of claim 1 further comprising: grouping two or more of said plane and at least one other plane.
  • 12. The method of claim 1 further comprising: grouping two or more of said plane and at least one other plane wherein each of said two or more of said plane and at least one other plane are grouped with a link of a predefined minimum and maximum distance.
  • 13. The method of claim 1 further comprising: grouping two or more of said plane and at least one other plane wherein each of said two or more of said plane and at least one other plane are grouped with a link having a spring constant.
  • 14. The method of claim 1 further comprising accepting an input to alter any combination of position, orientation, shape, depth or curve of said plane.
  • 15. An external depth map transformation method for conversion of two-dimensional images to stereoscopic images comprising: obtaining an external depth map associated with a two-dimensional image; obtaining at least one mask associated with at least one area within the two-dimensional image by automatically generating the at least one mask comprising the at least one area wherein the at least one area is over a predefined size and within a predefined depth range or wherein the at least one area comprises a boundary having a difference in luminance values over a predefined threshold or wherein the at least one area is over the predefined size and within the predefined depth range and comprises the boundary having the difference in luminance values over the predefined threshold; calculating a best fit for a plane using a computer based on depth associated with the at least one area associated with each of the at least one mask; altering automatically using said computer, any combination of position, orientation, shape, depth or curve of the plane in order to fit edges or corners of the plane with another plane; and, applying depth associated with the plane having the best fit to the at least one area to shift pixels in the two-dimensional image horizontally to produce a stereoscopic image or stereoscopic image pair.
  • 16. The method of claim 15 wherein the obtaining of the external depth map associated with a two-dimensional image comprises obtaining a disparity map, or obtaining a depth map of lower resolution than the two-dimensional image from a pair of witness cameras, or obtaining a depth map from time-of-flight system, or obtaining a depth map from a triangulation system.
  • 17. The method of claim 15 wherein the calculating the best fit for the plane using a computer based on depth associated with the at least one area associated with each of the at least one mask comprises calculating a normal vector for the plane, or a regression fit for the plane, or an orthogonal distance regression fit for the plane.
Related Publications (1)
Number Date Country
20140327736 A1 Nov 2014 US