Three-dimensional annotation system and method

Information

  • Patent Grant
  • Patent Number
    9,547,937
  • Date Filed
    Friday, November 30, 2012
  • Date Issued
    Tuesday, January 17, 2017
Abstract
Embodiments enable a three-dimensional annotation system and method that accepts desired depths for regions of input images and annotates two-dimensional or three-dimensional images with three-dimensional annotations viewable at the desired depth(s) in any three-dimensional format. The system enables rapid and intuitive specification of desired depth, and application of that depth to regions in two-dimensional images or in three-dimensional images being edited, as indicated by the three-dimensional annotations, each having at least one depth associated with the annotation. This enables rapid and intuitive depth augmentation or editing of an input image.
Description
BACKGROUND OF THE INVENTION

Field of the Invention


One or more embodiments of the invention are related to the field of image analysis, image enhancement, and computer graphics processing of two-dimensional images into three-dimensional images. More particularly, but not by way of limitation, one or more embodiments of the invention enable a three-dimensional annotation system and method. Embodiments accept a desired depth for a region in a two-dimensional or three-dimensional image and annotate the image with three-dimensional annotations at the desired depth, for example. This enables rapid and intuitive depth alteration in three-dimensional images and conversion of two-dimensional images to three-dimensional images by enabling stereographers to specify depths for regions of images in an intuitive manner. Embodiments may display an annotated image with a corresponding stereoscopic image or pair of images for left and right eye viewing, or any other image enabled for three-dimensional viewing, such as an anaglyph image.


Description of the Related Art


Three-dimensional images include any type of image or images that provide different left and right eye views to encode depth; some types of three-dimensional images require the use of special glasses to ensure that the left eye viewpoint is shown to the left eye and the right eye viewpoint is shown to the right eye of an observer. Existing systems that are utilized to convert two-dimensional images to three-dimensional images typically require rotoscoping of images to create outlines of regions in the images. The rotoscoped regions are then individually depth adjusted by hand to produce a left and right eye image, a single anaglyph image, or another three-dimensionally viewable image, such as a polarized three-dimensional image viewed with left and right lenses having different polarization angles, for example. There is no easy way for stereographers to specify, in a natural manner, the specific depths to apply to regions. Thus, ad hoc depths are applied to images, and if the images are not acceptable, for example after client review, there is no easy way to provide feedback. Without a visual language for giving creative and technical notes on the placement of objects, feedback can be unclear, causing more creative and technical iteration.


In addition, typical methods for converting movies from 2D to 3D in an industrial setting, i.e., methods capable of handling the conversion of hundreds of thousands of frames of a movie with large amounts of labor or computing power, make use of an iterative workflow. The iterative workflow includes rotoscoping or modeling objects in each frame, adding depth, and then rendering the frame into left and right viewpoints forming an anaglyph image or a left and right image pair. If there are errors in the edges of the masked objects, for example, then the typical workflow involves an "iteration", i.e., sending the frames back to the workgroup responsible for masking the objects (which can be in a country with inexpensive unskilled labor halfway around the world), after which the masks are sent to the workgroup responsible for rendering the images (again potentially in another country). Rendering is accomplished either by shifting input pixels left and right, for example for cell animation images, or by ray tracing the path of light through each pixel in the left and right images to simulate the light effects that the path of light interacts with, for example bounces off of or passes through, which is computationally extremely expensive. After rendering, the rendered image pair is sent back to the quality assurance group. It is not uncommon in this workflow environment for many iterations of a complicated frame to take place. This is known as a "throw it over the fence" workflow, since the different workgroups work independently to minimize their current workload and not as a team with overall efficiency in mind. With hundreds of thousands of frames in a movie, the time that it takes to iterate back through frames containing artifacts can become high, causing delays in the overall project. Even if the re-rendering process takes place locally, the time required to re-render or ray-trace all of the images of a scene can cause significant processing delays, on the order of at least hours. Each iteration may take a long period of time to complete, as the work may be performed by groups in disparate locations having shifted work hours. Eliminating iterations such as these would provide a huge savings in wall time, or end-to-end time, that a conversion project takes, thereby increasing profits and minimizing the workforce needed to implement the workflow.


Hence there is a need for a three-dimensional annotation system and method.


BRIEF SUMMARY OF THE INVENTION

Embodiments of the invention accept inputs from a stereographer that indicate the depths at which to place regions or object volumes within a two-dimensional image; these inputs are utilized to create stereoscopically viewable images, e.g., two horizontally offset left and right eye viewpoints or images. In one or more embodiments of the invention, the input is accepted by the system and displayed, at the depth indicated, on the three-dimensional version of the two-dimensional input image. In one or more embodiments, the depth may be specified using a graphical input device, such as a graphics drawing tablet. In other embodiments, or in combination, depths may be input via a keyboard, obtained through analysis of the input, e.g., the script or text to annotate with, or supplied via voice commands while drawing annotation information or symbols, for example.


In one scenario of the conversion workflow, a mask group takes source images and creates masks for items, areas or human-recognizable objects in each frame of a sequence of images that make up a movie. Stereographers utilize embodiments of the invention to specify depths to apply to particular regions in each image, for example the masked regions from the mask group, for example with annotations that are shown at the desired depth along with any other information. The depth augmentation group applies the specified depths, and for example shapes, to the masks created by the mask group. Embodiments of the invention make this process extremely intuitive, as the depth to apply is shown with information at the desired depth. Optionally, the depth may be applied before, or independent of, the masking process, for example.


When rendering an image pair, left and right viewpoint images and left and right absolute translation files, or a single relative translation file, may be generated and/or utilized by one or more embodiments of the invention. The translation files specify the pixel offsets for each source pixel in the original 2D image, for example in absolute or relative form respectively. These files are generally related to an alpha mask for each layer, for example a layer for an actress, a layer for a door, a layer for a background, etc. These translation files, or maps, are passed from the depth augmentation group that renders 3D images to the quality assurance workgroup or, depending on the project size, to a stereographer and/or associate stereographer. This allows the quality assurance workgroup (or another workgroup such as the depth augmentation group) to perform real-time editing of 3D images without re-rendering, for example to alter layers/colors/masks and/or remove artifacts such as masking errors, without the delays associated with processing time/re-rendering and/or an iterative workflow that requires such re-rendering or sending the masks back to the mask group for rework, wherein the mask group may be in a third world country with unskilled labor on the other side of the globe. In addition, when rendering the left and right images, i.e., 3D images, the Z depth of regions within the image, such as actors for example, may also be passed along with the alpha mask to the quality assurance group, who may then adjust depth as well without re-rendering with the original rendering software. This may be performed, for example, with generated missing background data from any layer so as to allow "downstream" real-time editing without re-rendering or ray-tracing.
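
The translation files described above amount to per-pixel horizontal offset maps associated with per-layer alpha masks. As a minimal, hedged sketch (the actual file format is not specified here, and the array layout, sign convention, and overlap handling are assumptions for illustration), a relative offset map might be applied to a layer as follows:

```python
import numpy as np

def apply_translation_map(layer_rgb, alpha, offsets):
    """Shift each pixel of one layer horizontally by a per-pixel offset.

    layer_rgb : (H, W, 3) float array, the layer's colors
    alpha     : (H, W) float array, the layer's alpha mask
    offsets   : (H, W) float array, signed horizontal shift in pixels
                (a "relative" translation map)

    Returns the shifted color and alpha arrays. The sign convention,
    clamping at the borders, and last-writer-wins handling of overlapping
    destinations are illustrative assumptions, not the patent's format.
    """
    h, w, _ = layer_rgb.shape
    out_rgb = np.zeros_like(layer_rgb)
    out_alpha = np.zeros_like(alpha)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination column of every source pixel, clamped to the image width.
    dest_x = np.clip(xs + np.rint(offsets).astype(int), 0, w - 1)
    out_rgb[ys, dest_x] = layer_rgb
    out_alpha[ys, dest_x] = alpha
    return out_rgb, out_alpha

# A left-eye view might use +offsets and a right-eye view -offsets, with the
# shifted layers then composited over deeper layers using the alpha masks.
```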


Quality assurance may give feedback to the masking group or the depth augmentation group regarding individuals, so that these individuals may be instructed to produce work product as desired for the given project, without waiting for, or requiring, the upstream groups to rework anything for the current project. This allows for feedback yet eliminates the iterative delays involved with sending work product back for rework and the associated wait for the reworked work product. Elimination of iterations such as this provides a huge savings in wall time, or end-to-end time, that a conversion project takes, thereby increasing profits and minimizing the workforce needed to implement the workflow.


In summary, embodiments of the invention minimize iterative workflow by providing more intuitive instructions regarding depth for another workgroup to utilize. For example, embodiments of the invention eliminate iterative workflow paths back through different workgroups by giving other workers or workgroups an intuitive method in which to view depth instructions and successfully input the correct depth. Great amounts of time are saved by eliminating re-rendering by other work groups and by allowing depth to be correctly input locally to a work group. Embodiments of the system thus greatly aid the artist in the enhancement of images to include depth by providing realistic depth information once, to minimize manual manipulation of images.





BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIG. 1 shows an architectural view of an embodiment of the system.



FIG. 2 shows an input two-dimensional image.



FIG. 3 shows a masked version of the two-dimensional image showing regions within each object to apply depth to.



FIG. 4 shows annotations for desired depth at a specific depth for general messages, or at the depth of the desired region for example, wherein the annotations may be viewed in three-dimensional depth with anaglyph glasses.



FIG. 5 shows the input image converted to a three-dimensional image in anaglyph format, which may be viewed in three-dimensional depth with anaglyph glasses to view separate left and right eye viewpoints from one image.



FIG. 6 shows a logical side view of the depth applied to the annotations and, optionally, to the regions that may be masked, for example, and depth augmented as per the associated annotation.



FIG. 7 is a flowchart illustrating an embodiment of the method implemented by one or more embodiments of the system of FIG. 1.





DETAILED DESCRIPTION OF THE INVENTION


FIG. 1 shows an architectural view of an embodiment of the system 100. As illustrated, computer 101 is coupled with any combination of input devices including graphics tablet 102a, keyboard 102b, mouse 102c and/or microphone 102d. Computer 101 may obtain a two-dimensional image and display the image on screen 103. Screen 103 may display a single image that may be viewed at depth, for example as an anaglyph using two different colors shifted left and right, which may be viewed with glasses having lenses of two different colors, e.g., red and blue, to view the image as a three-dimensional image. In general, the two-dimensional image may have multiple regions that are to be converted to different depths, for example first region 151a, e.g., a fish, and second region 151b, e.g., coral, for ease of illustration. In other scenarios, embodiments of the invention may be utilized to amend or otherwise change or alter the depth of three-dimensional input images. Other embodiments may be utilized to annotate convergence for blending a feature film and/or the alteration of native stereo elements with positive or negative depth with respect to the screen plane, for example. Regardless of the input image type, embodiments of the system accept annotations associated with desired depths from input devices 102a-d, for example first annotation 152a and second annotation 152b. Any number of regions or annotations may be accepted by embodiments of the system. In one or more embodiments of the invention, the annotation itself may be analyzed to obtain the desired depth associated with a given region, or any input from the same or another input device may be utilized to obtain the desired depth. The annotation is then placed at the depth thus obtained, which results in three-dimensional annotations 152a and 152b displayed at that depth, e.g., a depth obtained from numbers in the annotation via optical character recognition or other handwriting recognition software. The depth may be the desired depth of an associated object or, for example, the depths of the four corners of the screen, or any other depth associated with the annotation. The annotation may also include general comments at a particular depth that are not associated with a specific region.


In one or more embodiments of the invention, obtaining the depth includes analyzing the annotation with text recognition software to determine the depth. For example, if mouse 102c or graphics tablet 102a is utilized to cursively draw the annotation, the input may be analyzed by text recognition software to determine whether a numerical value exists within the cursive text, for example "10" or "5" as obtained from annotations 152a and 152b. In addition, keywords or characters such as "+", "−", "forward", "back", etc., may be obtained via text recognition software and applied to the depth of the annotation automatically. Alternatively, or in combination, the mouse input may be utilized, for example, to drag up or down to adjust the annotation and add text next to an arrow annotation, for example to show that the annotation is "10" or "5", which changes as the mouse is dragged and is automatically updated in the annotation while the annotation itself moves forward or backward. Alternatively, or in combination, the keyboard may be utilized to accept annotations or depths associated with annotations. The input text from the keyboard may be parsed to obtain keywords, characters or numbers, for example, to automatically augment the annotation, move the annotation in depth, or both. Alternatively, or in combination, the microphone may be monitored and a depth accepted therefrom, by asserting voice recognition software to determine keywords, characters or numbers, to automatically augment the annotation or move the annotation in depth.
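
To make the kind of parsing described above concrete, the following minimal sketch extracts a signed depth value and simple direction keywords from text returned by handwriting or voice recognition. The regular expression, the keyword list, and the sign convention are assumptions for illustration rather than the patent's specification.

```python
import re

def parse_depth_from_annotation(text):
    """Extract a signed depth value from recognized annotation text.

    Looks for a signed number such as "-10", "+5" or "7" and for the
    direction keywords "forward"/"back"; the convention assumed here is
    that negative values are closer to the viewer, matching the example
    annotations. Returns None when no number is found.
    """
    match = re.search(r'[-+]?\d+(?:\.\d+)?', text)
    if match is None:
        return None
    depth = float(match.group())
    lowered = text.lower()
    if 'forward' in lowered and depth > 0:
        depth = -depth   # assumed: "forward" pulls the region toward the viewer
    elif 'back' in lowered and depth < 0:
        depth = -depth   # assumed: "back" pushes the region away from the viewer
    return depth

# Examples resembling the annotations of FIG. 5:
print(parse_depth_from_annotation("coral -10"))      # -10.0
print(parse_depth_from_annotation("fins back 2"))    # 2.0
print(parse_depth_from_annotation("no depth here"))  # None
```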


Embodiments of the system may thus be utilized in obtaining a two-dimensional source image, displaying the two-dimensional source image on the screen associated with the computer, accepting an annotation associated with a desired depth of a region within the two-dimensional source image via any of the input devices coupled with the computer, obtaining a depth associated with the annotation as described above, and annotating the two-dimensional image with the annotation at the depth in a three-dimensional image, i.e., an image that has at least the annotations displayed at depth.


Embodiments of the system annotate the two-dimensional image with the annotation at the depth by generating an image encoded with left and right viewpoints, or a pair of images comprising an image for viewing with the left eye and an image for viewing with the right eye, respectively, wherein the pair of images includes the annotation and the two-dimensional source image. In one or more embodiments the resulting image is a single anaglyph image, or a polarized image, or any other type of image that includes the annotation shown at depth along with the two-dimensional source image.
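
As an illustration of one such single-image encoding, a red/cyan anaglyph can be composed by taking the red channel from the left-eye view and the green and blue channels from the right-eye view. This channel assignment is a common convention assumed here for the sketch, not a requirement of the system.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine left/right eye views into one red/cyan anaglyph image.

    Both inputs are (H, W, 3) arrays of the same shape; red comes from the
    left view and green/blue from the right view (an assumed, common
    convention). The source image and the annotation rendered at its depth
    would already be composited into each view before this step.
    """
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red channel from the left eye
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green and blue from the right eye
    return anaglyph
```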


Before or after accepting the desired depth of any portions or regions of a two-dimensional image, the computer, or any other computer that may access the resulting annotated image, may accept at least one mask associated with the region of the two-dimensional source image. In other words, masking may take place before or after the annotation of the two-dimensional image. Embodiments of the system may then displace at least a portion of the region, for example a particular side, the middle, or any other portion, in the two-dimensional source image left and right based on the depth to create a resulting output three-dimensional image.
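
A minimal sketch of that displacement follows, assuming a uniform (flat) shift for the whole masked region, symmetric left/right offsets, and no occlusion filling; production pipelines vary each of these and fill the vacated areas from other layers or generated background data.

```python
import numpy as np

def shift_region(image, mask, shift_px):
    """Create left/right views by shifting a masked region horizontally.

    image    : (H, W, 3) source image
    mask     : (H, W) boolean mask of the region to displace
    shift_px : integer pixel shift; applied as +shift_px for the left view
               and -shift_px for the right view (assumed convention)

    The original region pixels are left in place and newly exposed areas
    are not filled; both are simplifications for illustration only.
    """
    h, w, _ = image.shape
    ys, xs = np.nonzero(mask)

    def displace(dx):
        out = image.copy()
        new_x = np.clip(xs + dx, 0, w - 1)  # clamp at the image borders
        out[ys, new_x] = image[ys, xs]      # paint the region at its new columns
        return out

    return displace(shift_px), displace(-shift_px)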


When the resulting depth appears to be acceptable based on the requirements of the particular project, the system may output a three-dimensional image without the annotation. In movie-based projects, this may entail large numbers of images and tweening, for example between key frames or other images generated with one or more embodiments of the system.



FIG. 2 shows an input two-dimensional image. Embodiments of the invention may be utilized on cell animation, photographic, rendered, or any other type of images. As shown, an exemplary object such as a fish is shown near vertically oriented structures, which may represent coral or other structures. FIG. 3 shows a masked version of the two-dimensional image showing regions within each object to apply depth to. In one or more embodiments, the regions are utilized to apply depths that vary over the region, to create regions that are not flat, i.e., not at the same depth across the entire region. As shown, region 151a includes many sub-regions or masks, shown as different colors along the sides and back of the fish, which are not shown in the unmasked version of FIG. 2. FIG. 4 shows annotations for desired depth, at a specific depth for general messages or at the depth of the desired region for example, wherein the annotations may be viewed in three-dimensional depth with anaglyph glasses. As shown, the two-dimensional image is still in two dimensions, i.e., the depth across the entire image does not vary. In other words, the two-dimensional image along with the three-dimensional annotations specifies the depths to apply to particular areas or regions and is used as an input to the depth augmentation group, for example. The depth group then moves the associated regions in depth to match the annotations in an intuitive manner that is extremely fast and provides a built-in sanity check for depth. Using this method, it is inherently verifiable whether the depth of a region is at, or about at, the depth of the associated annotation.



FIG. 5 shows the input image converted to a three-dimensional image in anaglyph format, which may be viewed in three-dimensional depth with anaglyph glasses. As shown, the individual coral pieces are at the specified depths, for example the nearest ones at "−10" at region 151b having associated annotation 152b, with the furthest ones at "4", "5", and "7", while the region of the nose of the fish 151a is at "0" and the fins are at offset "−2", as shown associated with annotation 152a. In one or more embodiments these numbers may indicate the left and right shift in pixels, the depth in feet/meters of the particular regions, or any other quantitative value associated with distance or depth. In other embodiments of the invention, the polarity may be such that positive numbers represent depths further away from the viewer.
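
Because the annotated numbers may be pixel shifts or physical depths, a small conversion step is implied before the regions are displaced. The helper below is a hedged sketch of that step; the unit handling, scale factor, and polarity flag are hypothetical project settings, not values taken from the patent.

```python
def annotation_to_disparity(value, units="pixels", pixels_per_unit=1.0,
                            positive_is_far=True):
    """Convert an annotated depth value into a signed pixel disparity.

    value           : number recovered from the annotation, e.g. -10 or 5
    units           : "pixels" if the annotation is already a shift,
                      otherwise a distance unit (feet/meters) scaled below
    pixels_per_unit : hypothetical project-specific scale factor
    positive_is_far : polarity flag; the text notes that some embodiments
                      treat positive numbers as farther from the viewer

    Returns a disparity normalized so that positive means farther away;
    all conversion conventions here are illustrative assumptions.
    """
    disparity = value if units == "pixels" else value * pixels_per_unit
    return disparity if positive_is_far else -disparity

# Example: "-10" read as a pixel shift stays -10.0, while "3" meters with
# 4 pixels per meter becomes a 12-pixel disparity.
```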


As illustrated, embodiments of the invention minimize iterative workflow by providing more intuitive instructions regarding depth for another workgroup to utilize. Thus, the system and the method implemented by the system eliminate iterative workflow paths back through different workgroups by giving other workers or workgroups an intuitive method in which to view depth instructions and successfully input the correct depth. Great amounts of time are saved by eliminating re-rendering by other work groups and by allowing depth to be correctly input locally to a work group. Embodiments of the system thus greatly aid the artist in the enhancement of images to include depth by providing realistic depth information once, to minimize manual manipulation of images.


In one or more embodiments, a particular annotation may itself have a differing depth along the annotation to show how a depth varies, i.e., is not constant or flat across a region. For example, an annotation may show a curve from a first depth to a second depth along the annotation so that the annotation has a depth range. In this case more than one number for depth may be associated with a particular annotation and analyzed by the system to shift a portion of the annotation nearer or further than another portion of the same annotation. There is no limit to the number of depths at which a particular annotation may be placed. As shown in FIG. 5, the bottom right annotation shows depths of −14 and −4 with a "far" depth of −20, which is analyzed by an embodiment of the invention to designate that region of the image as having a depth that ranges between the three annotated depths, wherein an embodiment of the invention may thus set the depth of the masked region as shown, by shifting the closer annotated portions farther left and right than the deeper areas, respectively.
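
A hedged sketch of ramping depth across a masked region between annotated values follows; the choice of a linear, top-to-bottom ramp is an assumption for illustration, since the text only requires that the region's depth range between the annotated depths.

```python
import numpy as np

def ramp_depth_over_region(mask, far_depth, near_depth):
    """Assign a per-pixel depth that varies across a masked region.

    mask       : (H, W) boolean mask of the region
    far_depth  : depth applied at the top edge of the region (assumption)
    near_depth : depth applied at the bottom edge of the region (assumption)

    Returns an (H, W) float array holding a linear vertical ramp inside
    the mask and NaN elsewhere. For the FIG. 5 example this might be
    called with far_depth=-20 and near_depth=-4 (or -14).
    """
    depth = np.full(mask.shape, np.nan)
    rows = np.nonzero(mask.any(axis=1))[0]
    if rows.size == 0:
        return depth
    top, bottom = rows.min(), rows.max()
    span = max(bottom - top, 1)
    for y in range(top, bottom + 1):
        t = (y - top) / span                          # 0 at top, 1 at bottom
        depth[y, mask[y]] = far_depth + t * (near_depth - far_depth)
    return depth
```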



FIG. 6 shows a logical side view of the depth applied to the annotations and optionally to the regions that may be masked, for example, and depth augmented as per the associated annotation. FIG. 6 illustrates the depth applied to FIGS. 1 and 2 from a side view of screen 103, to show the depth applied to annotations 152a and 152b (see also FIG. 4 with anaglyph glasses on) and optionally to the associated regions 151a and 151b, once the associated depth notated in the annotations is applied to the regions (see also FIG. 5 with anaglyph glasses on). As shown, the annotations are at depth for three-dimensional or stereoscopic viewing 602, to aid in the application of depth to the associated regions, for example, wherein a viewer 601 is shown at the right side of screen 103.



FIG. 7 is a flowchart illustrating an embodiment of the method implemented by one or more embodiments of the system of FIG. 1. As shown, the method includes obtaining the source image at 701, displaying the source image on the screen of the computer shown in FIG. 1 as per 702, accepting an annotation associated with the desired depth of the region at 703, obtaining a depth associated with the annotation at 704 in any of the ways previously described with respect to the system, and annotating the source image with the annotation in three dimensions for stereoscopic viewing at 705. From the viewpoint of depth workers viewing the annotations at depth, the annotations are utilized to show where depth should be applied; the system may accept masks for regions in the source image at 706 and then optionally display the regions as well at 707, which is shown in FIG. 6. Although the annotations may not be at the same depth as the associated regions, or may not even have associated regions, i.e., may simply be annotations at depth to aid in understanding something associated with the source image, the annotations at depth greatly speed and aid the process of working on images that may include depth.
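
Read as pseudocode, the flow of FIG. 7 might be sketched as follows; every callable is a hypothetical placeholder supplied by the caller, and only the ordering of the numbered steps comes from the flowchart described above.

```python
def annotate_in_three_dimensions(load_image, display, accept_annotation,
                                 obtain_depth, render_annotated,
                                 accept_masks, display_regions):
    """Sketch of the FIG. 7 flow (steps 701-707) with injected callables.

    All callables are hypothetical placeholders: obtain_depth could wrap
    text, handwriting, or voice recognition, and render_annotated could
    produce an anaglyph or a left/right image pair, as described above.
    """
    source = load_image()                # 701: obtain the source image
    display(source)                      # 702: display it on the screen
    annotation = accept_annotation()     # 703: accept the depth annotation
    depth = obtain_depth(annotation)     # 704: obtain the associated depth
    annotated = render_annotated(source, annotation, depth)  # 705: annotate
    display(annotated)                   # the annotation is shown at depth
    masks = accept_masks(source)         # 706: accept masks for the regions
    if masks:                            # 707: optionally display the regions
        display_regions(annotated, masks, depth)
    return annotated
```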


While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.

Claims
  • 1. A three-dimensional annotation method comprising: obtaining a source image that is two-dimensional or three-dimensional; displaying said source image on a screen associated with a first computer; accepting an annotation associated with a desired depth of a region within said source image via an input device coupled with said first computer; obtaining at least one depth associated with said annotation; wherein said at least one depth corresponds with said desired depth of said region; and, annotating said source image with said annotation at said at least one depth in a three-dimensional image; generating an annotated stereoscopic image that comprises left and right eye views that differ from each other having said annotation at said at least one depth that differs from a depth of said region; and, generating an output stereoscopic image with said region at said same depth as said at least one depth of said annotation.
  • 2. The method of claim 1 wherein said input device comprises a graphics tablet, a mouse, or a keyboard and wherein said accepting said annotation comprises accepting input from said graphics tablet, said mouse or said keyboard respectively.
  • 3. The method of claim 1 wherein said input device comprises a microphone and wherein said accepting said annotation comprises accepting input from said microphone.
  • 4. The method of claim 1 wherein said obtaining said at least one depth comprises analyzing said annotation with text recognition software to determine said at least one depth.
  • 5. The method of claim 1 wherein said obtaining said at least one depth comprises analyzing motion of a mouse to determine said at least one depth.
  • 6. The method of claim 1 wherein said obtaining said at least one depth comprises parsing alphanumeric data from a keyboard to determine said at least one depth.
  • 7. The method of claim 1 wherein said obtaining said at least one depth comprises asserting voice recognition software.
  • 8. The method of claim 1 wherein said generating said annotated stereoscopic image comprises generating a pair of images comprising a left image to view with a left eye and a right eye image to view with a right eye respectively wherein said pair of images includes said annotation and said source image.
  • 9. The method of claim 1 wherein said generating said annotated stereoscopic image comprises generating an anaglyph image comprising a left eye colored image and a right eye colored image that are combined and that includes said annotation and said source image.
  • 10. The method of claim 1 wherein said generating said annotated stereoscopic image comprises generating a polarized image comprising a left eye image polarized in a first axis and a right eye image polarized in a second axis orthogonal to said first axis that are combined and that includes said annotation and said source image.
  • 11. The method of claim 1 wherein said generating said annotated stereoscopic image comprises generating a single image capable of displaying left and right eye viewpoints to a left eye and right eye respectively with differing depths that includes said annotation and said source image.
  • 12. The method of claim 1 further comprising: accepting at least one mask associated with said region of said source image.
  • 13. The method of claim 1 further comprising: displacing at least a portion of said region in said source image left and right based on said at least one depth to create said three-dimensional image.
  • 14. The method of claim 1 further comprising: displacing at least a portion of said region in said source image left and right based on said at least one depth to create an output three-dimensional image without said annotation.
  • 15. The method of claim 1 wherein said annotating said source image with said annotation at said at least one depth occurs before moving at least a portion of said region in said source image left and right to alter depth within the source image.
  • 16. The method of claim 1 wherein said annotating said source image with said annotation at said at least one depth comprises annotating said source image with a plurality of annotations that each comprise a different depth.
  • 17. A three-dimensional annotation method comprising: obtaining a source image that is two-dimensional or three-dimensional; displaying said source image on a screen associated with a first computer; accepting an annotation associated with a desired depth of a region within said source image via an input device coupled with said first computer wherein said input device comprises any combination of graphics tablet, mouse, keyboard or microphone; obtaining at least one depth associated with said annotation by analyzing said annotation with text recognition software or by analyzing motion of a mouse or by parsing alphanumeric data from said keyboard or by asserting voice recognition software or any combination thereof; wherein said at least one depth corresponds with said desired depth of said region; and, annotating said source image with said annotation at said at least one depth in a three-dimensional image wherein said annotating said source image with said annotation at said at least one depth occurs before moving at least a portion of said region in said source image left and right to alter depth within the source image; and, generating an output stereoscopic image with said region at said same depth as said at least one depth of said annotation.
  • 18. The method of claim 17 further comprising: accepting at least one mask associated with said region of said source image.
  • 19. The method of claim 17 further comprising: displacing at least a portion of said region in said source image left and right based on said at least one depth to create said three-dimensional image.
  • 20. The method of claim 17 further comprising: displacing at least a portion of said region in said source image left and right based on said at least one depth to create an output three-dimensional image without said annotation.
  • 21. The method of claim 17 wherein said annotating said source image with said annotation at said at least one depth comprises annotating said source image with a plurality of annotations that each comprise a different depth.
US Referenced Citations (371)
Number Name Date Kind
2593925 Sheldon Apr 1952 A
2799722 Neugebauer Jul 1957 A
2804500 Giacoletto Aug 1957 A
2874212 Bechley Feb 1959 A
2883763 Schaper Apr 1959 A
2974190 Fine et al. Mar 1961 A
3005042 Horsley Oct 1961 A
3258528 Oppenheimer Jun 1966 A
3486242 Aronson Dec 1969 A
3551589 Moskoviz Dec 1970 A
3558811 Montevecchio et al. Jan 1971 A
3560644 Petrocelli et al. Feb 1971 A
3595987 Vlahos Jul 1971 A
3603962 Lechner Sep 1971 A
3612755 Tadlock Oct 1971 A
3617626 Bluth et al. Nov 1971 A
3619051 Wright Nov 1971 A
3621127 Hope Nov 1971 A
3647942 Siegel Mar 1972 A
3673317 Newell et al. Jun 1972 A
3705762 Ladd et al. Dec 1972 A
3706841 Novak Dec 1972 A
3710011 Altemus et al. Jan 1973 A
3731995 Reiffel May 1973 A
3737567 Kratomi Jun 1973 A
3742125 Siegel Jun 1973 A
3761607 Hanseman Sep 1973 A
3769458 Driskell Oct 1973 A
3770884 Curran et al. Nov 1973 A
3770885 Curran et al. Nov 1973 A
3772465 Vlahos et al. Nov 1973 A
3784736 Novak Jan 1974 A
3848856 Reeber et al. Nov 1974 A
3851955 Kent et al. Dec 1974 A
3971068 Gerhardt et al. Jul 1976 A
3972067 Peters Jul 1976 A
4017166 Kent et al. Apr 1977 A
4021841 Weinger May 1977 A
4021846 Roese May 1977 A
4054904 Saitoh et al. Oct 1977 A
4149185 Weinger Apr 1979 A
4168885 Kent et al. Sep 1979 A
4183046 Daike et al. Jan 1980 A
4183633 Kent et al. Jan 1980 A
4189743 Schure et al. Feb 1980 A
4189744 Stern Feb 1980 A
4235503 Condon Nov 1980 A
4258385 Greenberg et al. Mar 1981 A
4318121 Taite et al. Mar 1982 A
4329710 Taylor May 1982 A
4334240 Franklin Jun 1982 A
4436369 Bukowski Mar 1984 A
4475104 Shen et al. Oct 1984 A
4544247 Ohno Oct 1985 A
4549172 Welk Oct 1985 A
4558359 Kuperman et al. Dec 1985 A
4563703 Taylor Jan 1986 A
4590511 Bocchi et al. May 1986 A
4600919 Stern Jul 1986 A
4603952 Sybenga Aug 1986 A
4606625 Geshwind Aug 1986 A
4608596 Williams et al. Aug 1986 A
4617592 MacDonald Oct 1986 A
4642676 Weinger Feb 1987 A
4645459 Graf et al. Feb 1987 A
4647965 Imsand Mar 1987 A
4694329 Belmares-Sarabia et al. Sep 1987 A
4697178 Heckel Sep 1987 A
4700181 Maine et al. Oct 1987 A
4721951 Holler Jan 1988 A
4723159 Imsand Feb 1988 A
4725879 Eide et al. Feb 1988 A
4755870 Markle et al. Jul 1988 A
4758908 James Jul 1988 A
4760390 Maine et al. Jul 1988 A
4774583 Kellar et al. Sep 1988 A
4794382 Lai et al. Dec 1988 A
4809065 Harris et al. Feb 1989 A
4827255 Ishii May 1989 A
4847689 Yamamoto et al. Jul 1989 A
4862256 Markle et al. Aug 1989 A
4888713 Falk Dec 1989 A
4903131 Lingemann et al. Feb 1990 A
4918624 Moore et al. Apr 1990 A
4925294 Geshwind et al. May 1990 A
4933670 Wislocki Jun 1990 A
4952051 Lovell et al. Aug 1990 A
4965844 Oka Oct 1990 A
4984072 Sandrew Jan 1991 A
5002387 Baljet et al. Mar 1991 A
5038161 Ki Aug 1991 A
5050984 Geshwind Sep 1991 A
5093717 Sandrew Mar 1992 A
5177474 Kadota Jan 1993 A
5181181 Glynn Jan 1993 A
5185852 Mayer Feb 1993 A
5237647 Roberts et al. Aug 1993 A
5252953 Sandrew et al. Oct 1993 A
5262856 Lippman et al. Nov 1993 A
5328073 Blanding et al. Jul 1994 A
5341462 Obata Aug 1994 A
5347620 Zimmer Sep 1994 A
5402191 Dean et al. Mar 1995 A
5428721 Sato et al. Jun 1995 A
5481321 Lipton Jan 1996 A
5495576 Ritchey Feb 1996 A
5528655 Umetani et al. Jun 1996 A
5534915 Sandrew Jul 1996 A
5684715 Palmer Nov 1997 A
5699444 Palm Dec 1997 A
5717454 Adolphi et al. Feb 1998 A
5729471 Jain et al. Mar 1998 A
5739844 Kuwano et al. Apr 1998 A
5742291 Palm Apr 1998 A
5748199 Palm May 1998 A
5767923 Coleman Jun 1998 A
5778108 Coleman Jul 1998 A
5784175 Lee Jul 1998 A
5784176 Narita Jul 1998 A
5825997 Yamada et al. Oct 1998 A
5835163 Liou et al. Nov 1998 A
5841512 Goodhill Nov 1998 A
5867169 Prater Feb 1999 A
5880788 Bregler Mar 1999 A
5899861 Friemel et al. May 1999 A
5907364 Furuhata et al. May 1999 A
5912994 Norton et al. Jun 1999 A
5920360 Coleman Jul 1999 A
5929859 Meijers Jul 1999 A
5940528 Tanaka et al. Aug 1999 A
5959697 Coleman Sep 1999 A
5973700 Taylor et al. Oct 1999 A
5973831 Kleinberger et al. Oct 1999 A
5982350 Hekmatpour et al. Nov 1999 A
5990903 Donovan Nov 1999 A
5999660 Zorin et al. Dec 1999 A
6005582 Gabriel et al. Dec 1999 A
6011581 Swift et al. Jan 2000 A
6014473 Hossack et al. Jan 2000 A
6023276 Kawai et al. Feb 2000 A
6025882 Geshwind Feb 2000 A
6031564 Ma et al. Feb 2000 A
6049628 Chen et al. Apr 2000 A
6056691 Urbano et al. May 2000 A
6067125 May May 2000 A
6086537 Urbano et al. Jul 2000 A
6088006 Tabata Jul 2000 A
6091421 Terrasson Jul 2000 A
6102865 Hossack et al. Aug 2000 A
6108005 Starks et al. Aug 2000 A
6118584 Van Berkel et al. Sep 2000 A
6119123 Elenbaas et al. Sep 2000 A
6132376 Hossack et al. Oct 2000 A
6141433 Moed et al. Oct 2000 A
6166744 Jaszlics et al. Dec 2000 A
6173328 Sato Jan 2001 B1
6184937 Williams et al. Feb 2001 B1
6198484 Kameyama Mar 2001 B1
6201900 Hossack et al. Mar 2001 B1
6208348 Kaye Mar 2001 B1
6211941 Erland Apr 2001 B1
6215516 Ma et al. Apr 2001 B1
6222948 Hossack et al. Apr 2001 B1
6226015 Danneels et al. May 2001 B1
6228030 Urbano et al. May 2001 B1
6263101 Klein Jul 2001 B1
6271859 Asente Aug 2001 B1
6314211 Kim et al. Nov 2001 B1
6337709 Yamaashi et al. Jan 2002 B1
6360027 Hossack et al. Mar 2002 B1
6363170 Seitz et al. Mar 2002 B1
6364835 Hossack et al. Apr 2002 B1
6373970 Dong et al. Apr 2002 B1
6390980 Peterson et al. May 2002 B1
6416477 Jago Jul 2002 B1
6426750 Hoppe Jul 2002 B1
6445816 Pettigrew Sep 2002 B1
6456340 Margulis Sep 2002 B1
6466205 Simpson et al. Oct 2002 B2
6477267 Richards Nov 2002 B1
6492986 Metaxas et al. Dec 2002 B1
6496598 Harman Dec 2002 B1
6509926 Mills et al. Jan 2003 B1
6515659 Kaye et al. Feb 2003 B1
6535233 Smith Mar 2003 B1
6590573 Geshwind Jul 2003 B1
6606166 Knoll Aug 2003 B1
6611268 Szeliski et al. Aug 2003 B1
6650339 Silva et al. Nov 2003 B1
6662357 Bowman-Amuah Dec 2003 B1
6665798 McNally et al. Dec 2003 B1
6677944 Yamamoto Jan 2004 B1
6686591 Ito et al. Feb 2004 B2
6686926 Kaye Feb 2004 B1
6707487 Aman et al. Mar 2004 B1
6727938 Randall Apr 2004 B1
6737957 Petrovic et al. May 2004 B1
6744461 Wada et al. Jun 2004 B1
6765568 Swift et al. Jul 2004 B2
6791542 Matusik et al. Sep 2004 B2
6798406 Jones et al. Sep 2004 B1
6813602 Thyssen Nov 2004 B2
6847737 Kouri et al. Jan 2005 B1
6859523 Jilk et al. Feb 2005 B1
6964009 Samaniego et al. Nov 2005 B2
6965379 Lee et al. Nov 2005 B2
6973434 Miller Dec 2005 B2
7000223 Knutson et al. Feb 2006 B1
7006881 Hoffberg et al. Feb 2006 B1
7027054 Cheiky et al. Apr 2006 B1
7032177 Novak et al. Apr 2006 B2
7035451 Harman et al. Apr 2006 B2
7079075 Connor et al. Jul 2006 B1
7084868 Farag et al. Aug 2006 B2
7102633 Kaye et al. Sep 2006 B2
7116323 Kaye et al. Oct 2006 B2
7116324 Kaye et al. Oct 2006 B2
7117231 Fischer et al. Oct 2006 B2
7123263 Harvill Oct 2006 B2
7136075 Hamburg Nov 2006 B1
7181081 Sandrew Feb 2007 B2
7254265 Naske et al. Aug 2007 B2
7260274 Sawhney et al. Aug 2007 B2
7272265 Kouri et al. Sep 2007 B2
7298094 Yui Nov 2007 B2
7308139 Wentland et al. Dec 2007 B2
7333519 Sullivan et al. Feb 2008 B2
7333670 Sandrew Feb 2008 B2
7343082 Cote et al. Mar 2008 B2
7355607 Harvill Apr 2008 B2
7461002 Crockett et al. Dec 2008 B2
7512262 Criminisi et al. Mar 2009 B2
7519990 Xie Apr 2009 B1
7532225 Fukushima et al. May 2009 B2
7538768 Kiyokawa et al. May 2009 B2
7542034 Spooner et al. Jun 2009 B2
7558420 Era Jul 2009 B2
7573475 Sullivan et al. Aug 2009 B2
7573489 Davidson et al. Aug 2009 B2
7576332 Britten Aug 2009 B2
7577312 Sandrew Aug 2009 B2
7610155 Timmis et al. Oct 2009 B2
7624337 Sull et al. Nov 2009 B2
7630533 Ruth et al. Dec 2009 B2
7663689 Marks Feb 2010 B2
7680653 Yeldener Mar 2010 B2
7772532 Olsen et al. Aug 2010 B2
7852461 Yahav Dec 2010 B2
7860342 Levien et al. Dec 2010 B2
7894633 Harman Feb 2011 B1
8036451 Redert et al. Oct 2011 B2
8085339 Marks Dec 2011 B2
8090402 Fujisaki Jan 2012 B1
8194102 Cohen et al. Jun 2012 B2
8213711 Tam et al. Jul 2012 B2
8217931 Lowe et al. Jul 2012 B2
8244104 Kashiwa Aug 2012 B2
8320634 Deutsch Nov 2012 B2
8384763 Tam et al. Feb 2013 B2
8401336 Baldridge et al. Mar 2013 B2
8462988 Boon Jun 2013 B2
8488868 Tam et al. Jul 2013 B2
8526704 Dobbe Sep 2013 B2
8543573 Macpherson Sep 2013 B2
8634072 Trainer Jan 2014 B2
8644596 Wu et al. Feb 2014 B1
8670651 Sakuragi et al. Mar 2014 B2
8698798 Murray et al. Apr 2014 B2
8907968 Tanaka et al. Dec 2014 B2
8922628 Bond Dec 2014 B2
20010025267 Janiszewski Sep 2001 A1
20010051913 Vashistha et al. Dec 2001 A1
20020048395 Harman et al. Apr 2002 A1
20020049778 Bell Apr 2002 A1
20020063780 Harman et al. May 2002 A1
20020075384 Harman Jun 2002 A1
20030018608 Rice Jan 2003 A1
20030046656 Saxana Mar 2003 A1
20030069777 Or-Bach Apr 2003 A1
20030093790 Logan et al. May 2003 A1
20030097423 Ozawa et al. May 2003 A1
20030154299 Hamilton Aug 2003 A1
20030177024 Tsuchida Sep 2003 A1
20040004616 Konya et al. Jan 2004 A1
20040062439 Cahill et al. Apr 2004 A1
20040189796 Ho et al. Sep 2004 A1
20040258089 Derechin et al. Dec 2004 A1
20050083421 Berezin et al. Apr 2005 A1
20050088515 Geng Apr 2005 A1
20050146521 Kaye et al. Jul 2005 A1
20050188297 Knight et al. Aug 2005 A1
20050207623 Liu et al. Sep 2005 A1
20050231501 Nitawaki Oct 2005 A1
20060061583 Spooner et al. Mar 2006 A1
20060083421 Weiguo et al. Apr 2006 A1
20060143059 Sandrew Jun 2006 A1
20060159345 Clary et al. Jul 2006 A1
20060274905 Lindahl et al. Dec 2006 A1
20070052807 Zhou et al. Mar 2007 A1
20070236514 Agusanto et al. Oct 2007 A1
20070238981 Zhu et al. Oct 2007 A1
20070260634 Makela et al. Nov 2007 A1
20070286486 Goldstein Dec 2007 A1
20070296721 Chang et al. Dec 2007 A1
20080002878 Meiyappan Jan 2008 A1
20080044155 Kuspa Feb 2008 A1
20080079851 Stanger et al. Apr 2008 A1
20080117233 Mather et al. May 2008 A1
20080147917 Lees et al. Jun 2008 A1
20080162577 Fukuda et al. Jul 2008 A1
20080181486 Spooner et al. Jul 2008 A1
20080225040 Simmons et al. Sep 2008 A1
20080225042 Birtwistle et al. Sep 2008 A1
20080225045 Birtwistle et al. Sep 2008 A1
20080225059 Lowe et al. Sep 2008 A1
20080226123 Birtwistle et al. Sep 2008 A1
20080226128 Birtwistle et al. Sep 2008 A1
20080226160 Birtwistle et al. Sep 2008 A1
20080226181 Birtwistle et al. Sep 2008 A1
20080226194 Birtwistle et al. Sep 2008 A1
20080227075 Poor et al. Sep 2008 A1
20080228449 Birtwistle et al. Sep 2008 A1
20080246759 Summers Oct 2008 A1
20080246836 Lowe et al. Oct 2008 A1
20080259073 Lowe et al. Oct 2008 A1
20090002368 Vitikainen et al. Jan 2009 A1
20090033741 Oh et al. Feb 2009 A1
20090116732 Zhou et al. May 2009 A1
20090144772 Fink Jun 2009 A1
20090147074 Getty Jun 2009 A1
20090179895 Zhu Jul 2009 A1
20090219383 Passmore Sep 2009 A1
20090256903 Spooner et al. Oct 2009 A1
20090290758 Ng-Thow-Hing et al. Nov 2009 A1
20090297061 Mareachen et al. Dec 2009 A1
20090303204 Nasiri et al. Dec 2009 A1
20100026784 Burazerovic Feb 2010 A1
20100045666 Kommann et al. Feb 2010 A1
20100166338 Lee et al. Jul 2010 A1
20100289819 Singh Nov 2010 A1
20110050864 Bond Mar 2011 A1
20110069152 Wang et al. Mar 2011 A1
20110072397 Baker Mar 2011 A1
20110074784 Turner Mar 2011 A1
20110081042 Kim Apr 2011 A1
20110096832 Zhang et al. Apr 2011 A1
20110109617 Snook May 2011 A1
20110158504 Turner Jun 2011 A1
20110161843 Bennett Jun 2011 A1
20110169827 Spooner et al. Jul 2011 A1
20110169914 Lowe et al. Jul 2011 A1
20110188773 Wei et al. Aug 2011 A1
20110227917 Lowe et al. Sep 2011 A1
20110273531 Ito et al. Nov 2011 A1
20120032948 Lowe et al. Feb 2012 A1
20120039525 Tian et al. Feb 2012 A1
20120087570 Seo et al. Apr 2012 A1
20120102435 Han et al. Apr 2012 A1
20120188334 Fortin et al. Jul 2012 A1
20120218382 Zass Aug 2012 A1
20120249746 Cornog et al. Oct 2012 A1
20120274626 Hsieh Nov 2012 A1
20120274634 Yamada et al. Nov 2012 A1
20120281906 Appia Nov 2012 A1
20120306849 Steen Dec 2012 A1
20120306874 Nguyen et al. Dec 2012 A1
20130044192 Mukherjee et al. Feb 2013 A1
20130051659 Yamamoto Feb 2013 A1
20130063549 Schnyder et al. Mar 2013 A1
20130234934 Champion et al. Sep 2013 A1
20130258062 Noh et al. Oct 2013 A1
Foreign Referenced Citations (23)
Number Date Country
003444353 Jun 1986 DE
03052454 Feb 1989 EP
1187494 Mar 2002 EP
1719079 Nov 2006 EP
60-52190 Mar 1985 JP
2002123842 Apr 2002 JP
2003046982 Feb 2003 JP
2004-207985 Jul 2004 JP
20120095059 Feb 2012 KR
20130061289 Nov 2013 KR
1192168 Nov 1982 SU
9724000 Jul 1997 WO
9912127 Mar 1999 WO
9930280 Jun 1999 WO
0079781 Dec 2000 WO
0101348 Jan 2001 WO
0213143 Feb 2002 WO
2006078237 Jul 2006 WO
2007148219 Dec 2007 WO
2008075276 Jun 2008 WO
2011029209 Mar 2011 WO
2012016600 Sep 2012 WO
2013084234 Jun 2013 WO
Non-Patent Literature Citations (72)
Entry
“Nintendo DSi Uses Camera Face Tracking to Create 3D Mirages”, retrieved from www.Gizmodo.com on Mar. 18, 2013, 3 pages.
IPER, Mar. 29, 2007, PCT/US2005/014348, 5 pages.
IPER, Oct. 5, 2013, PCT/US2011/058182, 6 pages.
International Search Report, Jun. 13, 2003, PCT/US02/14192, 4 pages.
Partial Testimony, Expert: Samuel Zhou, Ph.D., 2005 WL 3940225 (C.D.Cal.), Jul. 21, 2005, 21 pages.
PCT ISR, Feb. 27, 2007, PCT/US2005/014348, 8 pages.
PCT ISR, Sep. 11, 2007, PCT/US07/62515, 9 pages.
CA Office Action, Dec. 28, 2011, Appl No. 2,446,150, 4 pages.
PCT ISR, Nov. 14, 2007, PCT/US07/62515, 24 pages.
PCT IPRP, Jul. 4, 2013, PCT/US2011/067024, 5 pages.
European Office Action dated Jun. 26, 2013, received for EP Appl. No. 02734203.9 on Jul. 22, 2013, 5 pages.
Ohm et al., An Object-Based System for Stereoscopic Viewpoint Synthesis, IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 5, Oct. 1997, pp. 801-811.
Izquierdo et al., Virtual 3D-View Generation from Stereoscopic Video Data, IEEE, Jan. 1998, pp. 1219-1224.
Kaufman, D., “The Big Picture”, Apr. 1998, http://www.xenotech.com Apr. 1998, pp. 1-4.
Hanrahan et al., “Direct WYSIWYG painting and texturing on 3D shapes”, Computer Graphics, vol. 24, Issue 4, pp. 215-223. Aug. 1990.
Grossman, “Look Ma, No Glasses”, Games, Apr. 1992, pp. 12-14.
Slinker et al., “The Generation and Animation of Random Dot and Random Line Autostereograms”, Journal of Imaging Science and Technology, vol. 36, No. 3, pp. 260-267, May 1992.
A. Michael Noll, Stereographic Projections by Digital Computer, Computers and Automation, vol. 14, No. 5 (May 1965), pp. 32-34.
A. Michael Noll, Computer-Generated Three-Dimensional Movies, Computers and Automation, vol. 14, No. 11 (Nov. 1965), pp. 20-23.
Selsis et al., Automatic Tracking and 3D Localization of Moving Objects by Active Contour Models, Intelligent Vehicles 95 Symposium, Sep. 1995, pp. 96-100.
Smeulders et al., Tracking Nonparameterized Object Contours in Video, IEEE Transactions on Image Processing, vol. 11, No. 9, Sep. 2002, pp. 1081-1091.
Office Action for EPO Patent Application No. 02 734 203.9 dated Sep. 12, 2006. (4 pages).
Office Action for AUS Patent Application No. 2002305387 dated Mar. 9, 2007. (2 pages).
Office Action for EPO Patent Application No. 02 734 203.9 dated Oct. 7, 2010. (5 pages).
First Examination Report for Indian Patent Application No. 01779/DELNP/2003 dated Mar. 2004. (4 pages).
International Search Report Dated Jun. 13, 2003. (3 pages).
Declaration of Barbara Frederiksen in Support of In-Three, Inc's Opposition to Plaintiffs Motion for Preliminary Injunction, Aug. 1, 2005, IMAX Corporation et al v. In-Three, Inc., Case No. CV05 1795 FMC (Mcx). (25 pages).
USPTO, Board of Patent Appeals and Interferences, Decision on Appeal dated Jul. 30, 2010, Ex parte Three-Dimensional Media Group, LTD., Appeal 2009-004087, Reexamination Control No. 90/007,578, U.S. Pat. No. 4,925,294. (88 pages).
Office Action for Canadian Patent Application No. 2,446,150 dated Oct. 8, 2010. (6 pages).
Office Action for Canadian Patent Application No. 2,446,150 dated Jun. 13, 2011. (4 pages).
International Search Report received for PCT Application No. PCT/US2011/067024, dated Aug. 22, 2012, 10 pages.
Lenny Lipton, "Foundations of the Stereo-Scopic Cinema, a Study in Depth" With an Appendix on 3D Television, 325 pages, May 1978.
Interpolation (from Wikipedia encyclopedia, article pp. 1-6), retrieved from Internet URL:http://en.wikipedia.org/wiki/Interpolation on Jun. 5, 2008.
Optical Reader (from Wikipedia encyclopedia, article p. 1), retrieved from Internet URL:http://en.wikipedia.org/wiki/Optical_reader on Jun. 5, 2008.
Declaration of Steven K. Feiner, Exhibit A, 10 pages, Nov. 2, 2007.
Declaration of Michael F. Chou, Exhibit B, 12 pages, Nov. 2, 2007.
Declaration of John Marchioro, Exhibit C, 3 pages, Nov. 2, 2007.
Exhibit 1 to Declaration of John Marchioro, Revised translation of portions of Japanese Patent Document No. 60-52190 to Hiromae, 3 pages, Nov. 2, 2007.
U.S. Patent and Trademark Office, Before the Board of Patent Appeals and Interferences, Ex Parte Three-Dimensional Media Group, Ltd., Appeal 2009-004087, Reexamination Control No. 90/007,578, U.S. Pat. No. 4,925,294, Decision on Appeal, 88 pages, Jul. 30, 2010.
International Search Report dated May 10, 2012, 8 pages.
Machine translation of JP Patent No. 2004-207985, dated Jul. 22, 2008, 34 pages.
Daniel L. Symmes, Three-Dimensional Image, Microsoft Encarta Online Encyclopedia (hard copy printed May 28, 2008 and of record, now indicated by the website indicated on the document to be discontinued: http://encarta.msn.com/text_761584746_0/Three-Dimensional_Image.htm).
U.S. District Court, C.D. California, IMAX v. In-Three, No. 05 CV 1795, 2005, Partial Testimony, Expert: David Geshwind, WestLaw 2005, WL 3940224 (C.D.Cal.), 8 pages.
U.S. District Court, C.D. California, IMAX Corporation and Three-Dimensional Media Group, Ltd., v. In-Three, Inc., Partial Testimony, Expert: Samuel Zhou, Ph.D., No. CV 05-1795 FMC(Mcx), Jul. 19, 2005, 2005 WL 3940223 (C.D.Cal.), 6 pages.
U.S. District Court, C.D. California, IMAX v. In-Three. No. 06 CV 1795. Jul. 21, 2005, Partial Testimony, Expert: Samuel Zhou, Ph.D., 2005 WL 3940225 (C.D.Cal.), 21 pages.
U.S. District Court, C.D. California, Western Division, IMAX Corporation, and Three-Dimensional Media Group, Ltd. v. In-Three, Inc., No. CV05 1795 FMC (Mcx). Jul. 18, 2005. Declaration of Barbara Frederiksen in Support of In-Three, Inc.'s Opposition to Plaintiffs' Motion for Preliminary Injunction, 2005 WL 5434580 (C.D.Cal.), 13 pages.
Noll et al., “Stereographic Projections by Digital Computer”, Computers and Automation for May 1965, pp. 32-34.
Noll, “Computer-Generated Three-Dimensional Movies” Computers and Automation for Nov. 1965, pp. 20-23.
Murray et al., Active Tracking, IEEE International Conference on Intelligent Robots and Systems, Sep. 1993, pp. 1021-1028.
Gao et al., Perceptual Motion Tracking from Image Sequences, IEEE, Jan. 2001, pp. 389-392.
Yasushi Mae, et al., “Object Tracking in Cluttered Background Based on Optical Flow and Edges,” Proc. 13th Int. Conf. on Pattern Recognition, vol. 1, pp. 196-200, Apr. 1996.
Di Zhong, Shih-Fu Chang, “AMOS: An Active System for MPEG-4 Video Object Segmentation,” ICIP (2) 8: 647-651, Apr. 1998.
Hua Zhong, et al., “Interactive Tracker—A Semi-automatic Video Object Tracking and Segmentation System,” Microsoft Research China, http://research.microsoft.com (Aug. 26, 2003).
Eric N. Mortensen, William A. Barrett, “Interactive segmentation with Intelligent Scissors,” Graphical Models and Image Processing, v.60 n. 5, p. 349-384, Sep. 2002.
Michael Gleicher, “Image Snapping,” SIGGRAPH: 183-190, Jun. 1995.
Joseph Weber, et al., "Rigid Body Segmentation and Shape Description . . . ," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 2, Feb. 1997, pp. 139-143.
E. N. Mortensen and W. A. Barrett, “Intelligent Scissors for Image Composition,” Computer Graphics (SIGGRAPH '95), pp. 191-198, Los Angeles, CA, Aug. 1995.
International Search Report Issued for PCT/US2013/072208, dated Feb. 27, 2014, 6 pages.
International Search Report and Written Opinion issued for PCT/US2013/072447, dated Mar. 13, 2014, 6 pages.
Tam et al., “3D-TV Content Generation: 2D-To-3D Conversion”, ICME 2006, p. 1868-1872.
Harman et al. “Rapid 2D to 3D Conversion”, The Reporter, vol. 17, No. 1, Feb. 2002, 12 pages.
Legend Films, “System and Method for Conversion of Sequences of Two-Dimensional Medical Images to Three-Dimensional Images” Sep. 12, 2013, 7 pages.
International Preliminary Report on Patentability received in PCT/US2013/072208 on Jun. 11, 2015, 5 pages.
International Preliminary Report on Patentability received in PCT/US2013/072447 on Jun. 11, 2015, 12 pages.
European Search Report Received in PCT/US2011/067024 on Nov. 28, 2014, 6 pages.
Zhang, et al., “Stereoscopic Image Generation Based on Depth Images for 3D TV”, IEEE Transactions on Broadcasting, vol. 51, No. 2, pp. 191-199, Jun. 2005.
Beraldi, et al., “Motion and Depth from Optical Flow”, Lab. Di Bioingegneria, Facolta' di Medicina, Universit' di Modena, Modena, Italy; pp. 205-208, 1989.
Hendriks, et al. “Converting 2D to 3D: A Survey”, Information and Communication Theory Group, Dec. 2005.
Abstract of “A Novel Method for Semi-Automatic 2D to 3D Video Conversion”, Wu, et al, IEEE 978-1-4244-1755-1, 2008, 1 Page.
Abstract of “Converting 2D Video to 3D: An Efficient Path to a 3D Experience”, Cao, et al, IEEE 1070-986X, 2011, 1 Page.
Abstract of Learning to Produce 3D Media from a Captured 2D Video, Park et al., Eastman Kodak Research Journal of Latex Class files, vol. 6, Jan. 2007, 4 pages.
Abstract of “Efficient and high speed depth-based 2D to 3D video conversion”, Somaiya et al., Springer 3DR Express 10, 1007, 2013, pp. 1-9.
Related Publications (1)
Number Date Country
20140152648 A1 Jun 2014 US