1. Field of the Invention
One or more embodiments of the invention are related to the field of image analysis and image enhancement, and computer graphics processing of two-dimensional (2D) images into three-dimensional (3D) stereoscopic images. More particularly, but not by way of limitation, one or more embodiments of the invention enable a system for simultaneous review of a 3D model by multiple reviewers, each potentially viewing the model from a different viewpoint, and for coordination of these reviews by a coordinator. Graphical indicators of the reviewers' viewpoints may be overlaid onto an overview image of the 3D model so the coordinator can determine the regions and perspectives of the model that are under review.
2. Description of the Related Art
3D viewing is based on stereographic vision, with different viewpoints from one or more images provided to the left and right eyes to provide the illusion of depth. Many techniques are known in the art to provide 3D viewing. For example, specialized glasses may be utilized for viewing 3D images, such as glasses with color filters, polarized lenses, or anamorphic lenses. Some 3D viewing methods use separate screens for left eye and right eye images, or project images directly onto the left eye and right eye.
Virtual reality environments typically are computer-generated environments that simulate user presence in either real world or computer-generated worlds. The systems utilized to display the virtual reality environment typically include a stereoscopic display for 3D viewing and generally instrument a viewer with one or more sensors, in order to detect and respond to the position, orientation, and movements of the viewer. Based on these values, the virtual reality environment generates images to provide an immersive experience. The immersive experience may also include other outputs such as sound or vibration. Images may be projected onto screens, or provided using specialized glasses worn by the user.
The vast majority of images and films historically have been captured in 2D. These images or movies are not readily viewed in 3D without some type of conversion of the 2D images for stereoscopic display. Thus 2D images are not generally utilized to provide realistic 3D stereoscopic virtual reality environments. Although it is possible to capture 3D images from stereoscopic cameras, the cameras required, especially for capturing 3D movies, are generally expensive and/or cumbersome. Current 3D camera systems have many limitations, including for example price, precision of alignment, and minimum distance to the subject being filmed.
The primary challenge with creating a 3D virtual reality environment is the complexity of generating the necessary stereo images for all possible positions and orientations of the viewer. These stereo images must be generated dynamically in approximately real-time as the viewer moves through the virtual reality environment. This requirement distinguishes 3D virtual reality from the process of generating 3D movies from 2D images, in which the location of the viewer is essentially fixed at the location of the camera.
Approaches in the existing art for 3D virtual reality rely on a detailed three-dimensional model of the virtual environment. Using the 3D model, left and right eye images can be generated by projecting the scene onto separate viewing planes for each eye. Computer-generated environments that are originally modeled in 3D can therefore be viewed in 3D virtual reality. However, creating these models can be extremely time-consuming. The complexity of creating a full 3D model is particularly high when it is desired to create a photo-realistic 3D model of an actual scene. This modeling effort requires that all shapes be defined and positioned in 3D in great detail, and that all colors and textures of the objects be set to match their counterparts in the real scene. Existing techniques for creating 3D virtual environments are therefore complex and time-consuming. They require extensive efforts from graphic artists and 3D modelers to generate the necessary realistic 3D models. Hence there is a need for a method for creating 3D virtual reality from 2D images.
The process of creating a 3D virtual reality is typically iterative. 2D images and possibly computer-generated elements are modeled in 3D space using depth assignments. Stereoscopic 3D images are then rendered from the 3D models and reviewed for accuracy and artistic effects. Typically, these reviews identify adjustments that are needed to the 3D models. These adjustments are applied, and the revised model is re-rendered for a subsequent review. This process may repeat several times. It is time-consuming because the modifications to the 3D models and the rendering of images from these 3D models are labor-intensive and computationally intensive, and may be impossible to perform in real time. There are no known methods that provide rapid depth update and review cycles for virtual reality models. There are no known methods that allow editors to modify depth and immediately observe the effect of these depth changes on the stereoscopic images. Hence there is a need for a method for real-time depth modification of stereo images for a virtual reality environment.
Reviewing and adjusting a 3D model often requires rendering images of the model from many different viewpoints. A single reviewer typically reviews the model from a single viewpoint at a time in order to check that the model appears accurate and realistic from that viewpoint. Because it is time consuming for a single reviewer to check a model from many different viewpoints, parallel reviews of the model by multiple reviewers may be used. However, there are no known systems that coordinate these multiple simultaneous reviewers of a 3D model. Existing processes treat each review as an independent quality check, and they accumulate reviewer comments at the end of each review. No existing systems are configured to coordinate review information from multiple viewpoints simultaneously. Hence there is a need for a 3D model multi-reviewer system.
Embodiments of the invention enable a system for coordination of multiple, simultaneous reviews of a 3D model, potentially from different viewpoints. A set of 2D images may be obtained from an environment, which may for example be a real physical environment such as a room, a computer-generated environment, or a mix of physical and computer-generated elements. 2D images may be captured using a camera aimed at various angles to form a panoramic collection of images covering a desired part of a scene. In some embodiments the collection of images may cover an entire sphere, providing 360° viewing in all directions (including left-to-right and up and down). Embodiments of the invention enable converting these 2D images into a 3D virtual reality experience. A viewer in the virtual reality environment may be able to view the environment from various locations and orientations, and perceive three-dimensional depth reflecting the physical or modeled characteristics of the captured scene.
In one or more embodiments of the invention, subsets of the 2D images are first stitched together, using for example common features or overlapping pixels. The integrated, stitched images may then be projected onto a spherical surface to form a complete 360-degree view of the scene (or a desired portion thereof) in any direction (left to right as well as up and down). The spherical surface provides a complete spherical view of the scene, but this view is still two-dimensional since it lacks any depth information. Embodiments of the invention enable addition of depth information to the spherical display. In one or more embodiments, the spherical surface image is unwrapped onto a plane. This unwrapped image may be divided into regions to assist in generating depth information. Depth information is generated for the points of the regions. Depth may be applied to the regions manually, obtained from external depth information sources, or calculated from any portion of the underlying image data, including hue, luminance, saturation, or any other image feature, or by any combination of manual assignment, external depth sources, and calculation. Depth information may comprise for example, without limitation, depth maps, bump maps, parallax maps, U maps, UV maps, disparity maps, ST maps, point clouds, z maps, offset maps, displacement maps, or more generally any information that may provide a three-dimensional shape or three-dimensional appearance to an image. Using the spherical surface image and the assigned depth information for the points of the regions, 3D stereoscopic images may be generated for a viewer in a 3D virtual reality environment. The depth information determines the amount of offset for each point between the left eye and right eye images, which provides a three-dimensional viewing experience.
Different embodiments of the invention may use various methods for generating the stereo images using the depth information. In one or more embodiments, the depth information may be projected onto a sphere, yielding spherical depth information that provides depth for all or a portion of the points on the sphere. Spherical depth information may comprise for example, without limitation, spherical depth maps, spherical bump maps, spherical parallax maps, spherical U maps, spherical UV maps, spherical disparity maps, spherical ST maps, spherical point clouds, spherical z maps, spherical offset maps, spherical displacement maps, or more generally any information that may provide a three-dimensional shape or three-dimensional appearance to a spherical surface. The unwrapped plane image is also projected onto a spherical surface to form the spherical image. Left and right eye images may then be generated using the spherical depth information. For example, if the depth information is a depth map that provides a z-value for each pixel in one or more 2D images, the spherical depth information may be a spherical depth map that provides a z-value for each point on the sphere. In this case left and right images may be formed by projecting each image point from its spherical depth position onto left and right image planes. The position and orientation of the left and right image planes may depend on the position and orientation of the viewer in the virtual reality environment. Thus the stereo images seen by the viewer will change as the viewer looks around the virtual reality environment in different directions. The projections from the spherical depth map points onto the left and right image planes may for example use standard 3D to 2D projections to a plane using a different focal point for each eye.
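The following is a minimal illustrative sketch, not the patented implementation, of the per-eye projection described above: a point on the sphere is pushed out to its depth from the spherical depth map and then projected through two horizontally offset eye positions onto image planes. The pinhole camera model, coordinate conventions, and eye separation below are assumptions.

```python
import numpy as np

def project_point(lat, lon, depth, eye_offset, focal=1.0):
    """lat/lon in radians, depth in scene units, eye_offset shifts the eye along x."""
    # Direction on the unit sphere, scaled out to the point's depth.
    p = depth * np.array([np.cos(lat) * np.sin(lon),
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)])
    # Simple pinhole projection onto a plane at z = focal, eye at (eye_offset, 0, 0).
    rel = p - np.array([eye_offset, 0.0, 0.0])
    return focal * rel[0] / rel[2], focal * rel[1] / rel[2]

# Left and right image-plane coordinates for one spherical depth-map sample.
left_xy  = project_point(0.1, 0.2, depth=3.0, eye_offset=-0.032)
right_xy = project_point(0.1, 0.2, depth=3.0, eye_offset=+0.032)
```

The horizontal difference between the two projected coordinates is the stereo disparity for that point; nearer points yield larger disparities.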
In other embodiments of the invention a different method may be used to generate the stereographic images. This method first generates a stereo image in 2D using the unwrapped plane image and the plane depth information. The left and right images are then projected onto spheres, with a separate left sphere and right sphere for the left and right images. Based on the position and orientation of the viewer in the virtual reality environment, left and right image planes and eye positions are calculated, the left sphere is projected onto the left image plane, and the right sphere is projected onto the right image plane.
In one or more embodiments, the regions of the unwrapped plane image may be used to assist in creating the depth information. One or more regions may be mapped onto flat or curved surfaces, and these surfaces may be positioned and oriented in three-dimensional space. In some embodiments constraints may be applied manually or automatically to reflect continuous or flexible boundaries between region positions in space. Depth information may be generated directly from the region positions and orientations in three-dimensional space by relating the depth to the distance of each point from a specified viewpoint.
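As an illustrative sketch of the last step (assumed math, not necessarily the patent's method): once a region has been positioned as a plane in 3D space, the depth of each of its points can be taken as the distance from the specified viewpoint to the intersection of the viewing ray with that plane.

```python
import numpy as np

def region_depth(ray_dir, plane_point, plane_normal, viewpoint=np.zeros(3)):
    """Distance from the viewpoint along ray_dir to the region's plane."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return np.inf                     # ray is parallel to the region plane
    t = np.dot(plane_normal, plane_point - viewpoint) / denom
    return t if t > 0 else np.inf

# A region placed 4 units in front of the viewer, facing the viewer.
print(region_depth(np.array([0.0, 0.0, 1.0]),
                   plane_point=np.array([0.0, 0.0, 4.0]),
                   plane_normal=np.array([0.0, 0.0, -1.0])))   # -> 4.0
```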
In some embodiments it may be desirable to modify the 2D images from the scene in order to create a 3D virtual reality environment. For example, objects may be captured in the 2D images that should not appear in the virtual reality environment; conversely it may be desirable to insert additional objects that were not present in the 2D images. Operators may edit the original images, the unwrapped 2D image, the spherical images, or combinations of these to achieve the desired effects. Removing an object from the environment consists of replacing the pixels of the removed object with a suitable fill, which may be obtained automatically from surrounding regions. Adding an object consists of inserting an image and applying the appropriate depth information to the region or regions of the added image. Inserted images may be obtained from real objects or they may be computer generated, or they may be a combination of real images and computer generated images. Objects in the 2D images may also be extended in some embodiments to fill areas that were not captured in the original images, or that are in areas where undesired objects have been removed. Some embodiments may add objects in multiple layers at multiple depths, providing for automatic gap filling when the viewpoint of a viewer in the 3D virtual reality environment changes to reveal areas behind the original objects in the scene.
One or more embodiments enable a system and method for rapidly and efficiently updating stereoscopic images to reflect changes in the depths assigned to objects in the scene. Multiple, iterative depth modifications may be needed in some situations. Artists, editors, producers, and quality control teams may review left and right viewpoint images generated from the virtual environment, and may determine in some cases that the assigned depths are not optimal. One or more embodiments provide a method for applying depth changes without re-rendering the entire virtual reality scene to create new left and right images from the 3D model of the virtual environment. In one or more embodiments, a 3D model of a virtual reality environment is created or obtained, for example using the image stitching techniques described above. One or more embodiments may generate a spherical translation map from this 3D model that may be used to apply depth updates without re-rendering. A spherical translation map may for example assign translation values to points on the sphere, where the translation value for a point is a function of the distance between the viewer (at the center of the sphere) and the closest object in the scene along the ray between the viewer and that point of the sphere. A translation map may be for example, without limitation, a depth map, a parallax map, a disparity map, a displacement map, a U map, or a UV map. For example, a parallax map contains left and right pixel offsets directly. A depth map contains depth values that are inversely proportional to pixel offsets. Any map that can be used to determine pixel offsets between left and right viewpoint images may be used as a spherical translation map.
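A short numerical sketch of the relationship stated above: a depth map can be converted into a parallax (pixel-offset) map because the offset is inversely proportional to depth. The scale constant and clamping below are assumptions for illustration only.

```python
import numpy as np

def depth_to_parallax(depth_map, k=30.0, min_depth=0.1):
    """Offset in pixels ~ k / depth; nearer objects receive larger offsets."""
    return k / np.maximum(depth_map, min_depth)

depth = np.array([[1.0, 2.0, 10.0]])
print(depth_to_parallax(depth))   # [[30. 15.  3.]] -- nearer pixels shift more
```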
In one or more embodiments, an initial stereoscopic (left and right) pair of images may be rendered from the 3D model, along with the spherical translation map. Subsequent modifications to the depth of selected objects may be made by directly editing and modifying the spherical translation map, rather than modifying and re-rendering the 3D model. This process provides a rapid and efficient method for modifying depth after left and right images are created.
Any left and right images from any user viewpoint may be used to review the 3D model for the virtual reality environment. For example, in one or more embodiments reviewers may use one or more planar image projections from one or more directions. In one or more embodiments reviewers may use panoramic images generated from the 3D virtual environment to review and possibly modify the depth assignments. These panoramic images may for example include a full 360-degree horizontal field of view and a full 180-degree vertical field of view, or they may in other cases be partial panoramas covering only a subset of the user's entire field of view. Updates to left and right viewpoint images from spherical translation map modifications may be made to any stereoscopic images generated from the scene, including both planar projections and panoramic images.
One or more embodiments may apply the above methods for real-time, rapid update of stereo images using spherical translation maps to any 3D model of a virtual reality environment. A 3D model of a virtual reality environment may for example be created from camera images of a real scene, from computer-generated elements, or from combinations of real images and computer-generated elements. A 3D model may be created by assigning depth to computer-generated elements or to geometrical surfaces onto which images are projected. Embodiments may create spherical translation maps from the 3D model and use these spherical translation maps to modify stereoscopic images without re-rendering.
One or more embodiments may enable simultaneous reviews of 3D models by multiple reviewers. Because it may be desirable to review a 3D model from multiple viewpoints, use of multiple simultaneous reviewers may dramatically shorten the time needed to complete a review. However, simultaneous review by multiple reviewers presents a challenge in coordinating the activities of the reviewers. In particular, during reviews of complex models from many different viewpoints, it may be difficult to track what each reviewer is observing at any point in time. Lack of coordinated viewpoint tracking may for example lead to communication problems. For example, if a reviewer comments that the depth of an object should be changed, it may not be clear precisely which object the reviewer is referring to. In addition, real-time, iterative updates of 3D models may require rechecking objects from various viewpoints; therefore it may be critical to coordinate the viewpoints used by each reviewer during the review process.
One or more embodiments enable a system for multi-reviewer reviews of 3D models that adds a central coordinator who receives information about each reviewer's activities. In one or more embodiments, multiple review stations are operated by multiple reviewers, and a coordinator station is operated by a coordinator; each station has access to a 3D model under review. Review stations and the coordinator station have displays that show various views of the 3D model.
A review station may have a viewpoint renderer that renders one or more viewpoint images of the 3D model from a particular position and orientation. Viewpoint images may be monoscopic or stereoscopic. The position or orientation (or both) may for example be measured by sensors attached to a reviewer. These sensors may for example be integrated into a virtual reality headset that the reviewer uses to review the 3D model. In one or more embodiments a reviewer may select a viewpoint, using for example a mouse or joystick. A review station may include a reviewer pose subsystem, which may measure or obtain the pose of the reviewer. The reviewer pose subsystem may include sensors, input devices, graphical user interface controls, or any combination of hardware and software configured to measure or obtain the pose of the reviewer. The reviewer pose subsystem provides the reviewer's pose to the viewpoint renderer, which renders one or more viewpoint images of the 3D model based on the pose.
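A minimal sketch of the review-station data flow described above is shown below; the field names and message format are assumptions rather than the patent's protocol, and a real system would update the pose continuously from a headset, mouse, or joystick.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ReviewerPose:
    reviewer_id: str
    yaw_deg: float       # look direction, left/right
    pitch_deg: float     # look direction, up/down
    stereo: bool = True  # whether the viewpoint renderer produces a stereo pair

def pose_message(pose: ReviewerPose) -> str:
    """Serialize the pose for the viewpoint renderer and the coordinator station."""
    return json.dumps(asdict(pose))

print(pose_message(ReviewerPose("reviewer-1", yaw_deg=45.0, pitch_deg=-10.0)))
```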
A review station may also include a reviewer focus subsystem, which determines the current focus region or regions within the reviewer's field of view. These focus regions indicate the areas of the 3D model that the reviewer is currently observing, checking, or commenting upon. Embodiments may use various techniques to determine a reviewer's focus regions. For example, one or more embodiments may consider a reviewer's entire field of view to be the focus region. One or more embodiments may use other techniques to locate objects or regions within the field of view that represent the reviewer's current focus.
In one or more embodiments, reviewer focus regions are communicated to the coordinator station. The coordinator station may include a model overview renderer that renders one or more overview images of the 3D model. The overview renderer may also add graphical indicators to these overview images to represent the reviewer focus regions. These graphical indicators may be for example overlays of icons, shapes, curves, or labels onto the overview images. The coordinator may therefore be able to see the 3D model and the current focus regions of each reviewer in a single set of integrated overview images.
In one or more embodiments, the location of the graphical indicators of reviewer focus areas may correspond to the location of the reviewer focus regions within the 3D model. For example, in a 3D model for a virtual reality environment that is projected onto a sphere, the center of a reviewer's field of view may be represented by a point on the surface of the sphere. An overview image of the sphere that is a two-dimensional projection of the sphere (such as a lat-long projection for example) may for example locate graphical indicators of a reviewer's focus area at the corresponding point on the two-dimensional spherical projection.
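The sketch below illustrates this correspondence under assumed coordinate conventions: the center of a reviewer's field of view (a direction on the sphere) is mapped to a pixel of a lat-long overview image, where a colored marker can then be drawn for that reviewer. It is an illustration, not the patent's rendering code.

```python
import numpy as np

def direction_to_latlong_pixel(direction, width, height):
    x, y, z = direction / np.linalg.norm(direction)
    lon = np.arctan2(x, z)                 # -pi..pi, left/right
    lat = np.arcsin(y)                     # -pi/2..pi/2, up/down
    col = int((lon + np.pi) / (2 * np.pi) * (width - 1))
    row = int((np.pi / 2 - lat) / np.pi * (height - 1))
    return row, col

def overlay_marker(overview, direction, color, radius=5):
    """Draw a filled circle on the overview image at the reviewer's view center."""
    row, col = direction_to_latlong_pixel(direction, overview.shape[1], overview.shape[0])
    rr, cc = np.ogrid[:overview.shape[0], :overview.shape[1]]
    overview[(rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2] = color
    return overview

overview = np.zeros((512, 1024, 3), dtype=np.uint8)          # lat-long overview image
overlay_marker(overview, np.array([0.0, 0.0, 1.0]), color=(255, 0, 0))
```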
One or more embodiments may differentiate the graphical indicators of focus areas for different reviewers using various visual characteristics, such as for example different colors, shapes, sizes, patterns, textures, icons, or labels.
Embodiments may determine reviewer focus areas based on the specific requirements of their application. In one or more embodiments, reviewer focus areas may be the entire field of view of a reviewer. In one or more embodiments reviewer focus areas may be portions of this field of view, for example a central portion. In one or more embodiments a point at the center of the field of view of a reviewer may be the reviewer focus area.
One or more embodiments may enable reviewers to identify all or a portion of the reviewer focus area. For example, a reviewer may be able to draw a boundary on his or her display to indicate a focus region within which the reviewer has observed an issue. One or more embodiments may further enable a reviewer to add notes or annotations to a designated reviewer focus area. These user-designated focus areas, along with possibly notes or annotations, may be communicated to the coordinator station and displayed with the overview images.
One or more embodiments may enable a coordinator or a reviewer to update the 3D model, or to enter, store, or communicate desired updates that should be applied later to the 3D model. For example, a coordinator station may include a model update subsystem that provides 3D model editing capabilities. In one or more embodiments, updates to the 3D model may be propagated to the review stations to update the viewpoint images in real time or in near real time.
In one or more embodiments a coordinator may have the ability to receive, process, and then remove a reviewer focus region from the coordinator overview display. For example, a reviewer may highlight a region containing an object in the 3D model that should be modified; this region may then appear on the coordinator's overview image. The coordinator may then make a modification to the 3D model (or make a note for a subsequent update) to address the issue. After making the update, the coordinator may remove the issue from the queue of review issues requiring attention; this action may result in the graphical indicator for this region being removed from the coordinator's overview screen.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The above and other aspects, features and advantages of the invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings wherein:
A 3D model multi-reviewer system will now be described. In the following exemplary description numerous specific details are set forth in order to provide a more thorough understanding of embodiments of the invention. It will be apparent, however, to an artisan of ordinary skill that the present invention may be practiced without incorporating all aspects of the specific details described herein. In other instances, specific features, quantities, or measurements well known to those of ordinary skill in the art have not been described in detail so as not to obscure the invention. Readers should note that although examples of the invention are set forth herein, the claims, and the full scope of any equivalents, are what define the metes and bounds of the invention.
Step 103 stitches together subsets of the 2D images 102 into integrated 2D images 104. The stitching process combines and aligns 2D images and eliminates overlap among the 2D images. Stitching step 103 may combine all 2D images into a single panorama, or it may combine subsets of 2D images into various panoramic images that cover portions of the entire environment 100. Different embodiments may employ different stitching strategies. Integrated images may cover all or any portion of the sphere of view directions visible from one or more cameras.
Any known technique for stitching together multiple images into a composite integrated image may be utilized. Stitching may be done manually, automatically, or with a hybrid manual-automatic procedure where one or more operators perform a rough stitch and software completes the smooth stitch. Embodiments of the invention may utilize any or all of these approaches.
Automated stitching typically aligns the overlap regions of multiple images using a best-fit based on feature differences or on pixel differences. Feature-based methods perform a feature extraction pass first on the images, and then find the location of similar features in multiple images for alignment. See for example: M. Brown and D. Lowe (2007). Automatic Panoramic Image Stitching using Invariant Features. International Journal of Computer Vision, 74(1). Pixel-based methods minimize the pixel differences between images in their regions of overlap. See for example: Suen et al. (2007). Photographic stitching with optimized object and color matching based on image derivatives. Optics Express, 15(12).
In addition, several existing software packages perform stitching using either or both of these methods; illustrative examples include commonly available photo processing software. Embodiments of the invention may use any of the methods known in the art or available in software packages to perform the image stitching step 103.
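As one example of using such commonly available software (an illustration of step 103, not the specific method claimed), OpenCV's built-in stitcher combines overlapping photographs into an integrated panorama. The file names below are placeholders.

```python
import cv2

images = [cv2.imread(p) for p in ("view_0.jpg", "view_1.jpg", "view_2.jpg")]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("integrated_panorama.jpg", panorama)   # integrated 2D image
else:
    print("stitching failed with status", status)
```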
In step 105, the integrated 2D images 104 are projected onto a spherical surface. In some embodiments, these projections may use a measured or estimated position and orientation of the camera or cameras used in step 101 to capture the 2D images that formed the integrated 2D images. The output of step 105 is a spherical surface image 106. A spherical surface represents an approximate 3D model of the location of the objects in the environment 100; this approximate model places all points on the objects equidistant from the center of the sphere. Adjustments to this approximate 3D model may be made in subsequent steps using depth information to form more realistic models of the environment.
In step 107 the spherical surface image 106 is “unwrapped” onto an unwrapped plane image 108. This unwrapping process may use any of the spherical-to-plane projection mappings that are known.
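One common spherical-to-plane mapping that could serve as the unwrapping of step 107 is the equirectangular (lat-long) projection sketched below; the specification does not mandate this particular mapping, so it is given only as an illustration.

```python
import numpy as np

def sphere_to_plane(lat, lon, width, height):
    """Latitude/longitude in radians -> (row, col) on the unwrapped plane image."""
    col = (lon + np.pi) / (2 * np.pi) * (width - 1)
    row = (np.pi / 2 - lat) / np.pi * (height - 1)
    return row, col

def plane_to_sphere(row, col, width, height):
    """(row, col) on the unwrapped plane image -> latitude/longitude in radians."""
    lon = col / (width - 1) * 2 * np.pi - np.pi
    lat = np.pi / 2 - row / (height - 1) * np.pi
    return lat, lon
```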
In step 109, the unwrapped plane image 108 is divided into regions 110. This step may be done by one or more operators, or it may be assisted by software. For example, software may tentatively generate region boundaries based on shapes, colors, or textures of objects in the unwrapped image 108.
In step 111, depth information 112 is assigned to the points of the regions 110. The depth information is used in subsequent steps to generate the 3D stereo images for a virtual reality experience. Depth information reflects how far away each point of each region is from a viewer. Many techniques for defining and using depth information are known in the art; any of these techniques may be used for generating or using the depth information 112. For example, without limitation, depth information may comprise depth maps, bump maps, parallax maps, U maps, UV maps, disparity maps, ST maps, point clouds, z maps, offset maps, displacement maps, or more generally any information that may provide a three-dimensional shape or three-dimensional appearance to an image. Assigning of depth information may be done by one or more operators. In some embodiments software may be used to assist the step 111 of assigning depth information. For example, operators may be able to rotate or reposition entire regions in a 3D scene, and depth information may be generated automatically for the regions based on this 3D positioning. Software may also be used to generate curved regions or to blend depth information at boundaries between regions.
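A hedged sketch of software-assisted depth assignment follows: each region mask receives a depth value (for example set by an operator), and the hard steps at region boundaries are softened with a small blur. The mask format, per-region constant depth, and Gaussian blending are assumptions used only to illustrate step 111.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_from_regions(region_masks, region_depths, blend_sigma=3.0):
    """region_masks: list of HxW bool arrays; region_depths: list of floats."""
    depth = np.zeros(region_masks[0].shape, dtype=np.float32)
    for mask, d in zip(region_masks, region_depths):
        depth[mask] = d
    # Soften hard steps at region boundaries so depth transitions smoothly.
    return gaussian_filter(depth, sigma=blend_sigma)
```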
In step 114, the depth information 112 and the unwrapped image 108 are used as inputs to generate stereo images for a viewer at viewer position and orientation 113. The stereo images consist of a left eye image 115 and a right eye image 116. Any of the commonly available stereo 3D vision technologies, such as special viewing glasses used to see 3D movies, may be used by the viewer to view the virtual reality environment in 3D using these stereo images. For example, viewers may use glasses with different colored left and right lenses, or glasses with different polarization in left and right lenses, or glasses with LCD lenses that alternately show left and right images.
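The sketch below is a minimal illustration of step 114 under assumed conventions: per-pixel depth is converted to a horizontal offset, and pixels are shifted in opposite directions to produce the left eye image 115 and right eye image 116. It is not the patented renderer; gap filling and sub-pixel handling are omitted.

```python
import numpy as np

def render_stereo(unwrapped, depth, max_offset_px=12):
    """unwrapped: HxWx3 uint8; depth: HxW in [0,1], 1 = nearest to the viewer."""
    h, w = depth.shape
    offsets = (depth * max_offset_px).astype(np.int32)
    cols = np.arange(w)
    left = np.zeros_like(unwrapped)
    right = np.zeros_like(unwrapped)
    for row in range(h):
        left[row, np.clip(cols + offsets[row], 0, w - 1)] = unwrapped[row]
        right[row, np.clip(cols - offsets[row], 0, w - 1)] = unwrapped[row]
    return left, right
```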
Different embodiments of the invention may employ different projection techniques.
In one or more embodiments of the invention, modifications may be made to the images captured from the scene in order to create a modified 3D virtual reality environment. Such modifications may include additions, deletions, modifications, or any combinations of these changes to the images. Modifications may be made in some embodiments to the original captured 2D images, to the stitched integrated images, to the spherical projection, to the unwrapped plane image, to the depth information, to the stereo images, or to any combinations of these.
Image 17 illustrates an unwrapped image 1701 formed from the images captured from the scene.
The process of creating a 3D virtual environment may be iterative.
For example, if the spherical translation map 1901 is a parallax map, which encodes the amount of horizontal pixel offset between left eye image 115 and right eye image 116, then modifications to this parallax map may be applied directly and rapidly to the images. Changes in offsets may be applied by shifting pixels in left and right images by the changes in the offsets. Re-rendering from the 3D model is not required. In some cases, modifying the shifts of pixels may generate gaps in the images with missing pixels. Techniques known in the art for gap-filling may be used to generate the missing pixels.
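The sketch below illustrates this update path under assumed conventions: when the parallax value for a selected object changes by a given number of pixels, the already-rendered left image is adjusted by re-shifting only that object's pixels, and any exposed gaps are filled with an off-the-shelf inpainting routine rather than by re-rendering. Function names and the single-image treatment are assumptions; the right image would be handled symmetrically with the opposite shift.

```python
import numpy as np
import cv2

def apply_parallax_change(left_image, object_mask, delta_px):
    """left_image: HxWx3 uint8; object_mask: HxW bool; delta_px: change in offset."""
    h, w = object_mask.shape
    updated = left_image.copy()
    updated[object_mask] = 0                       # clear the object's old position
    gap = object_mask.copy()
    rows, cols = np.where(object_mask)
    new_cols = np.clip(cols + delta_px, 0, w - 1)
    updated[rows, new_cols] = left_image[rows, cols]
    gap[rows, new_cols] = False                    # pixels that were re-covered
    # Fill remaining holes (revealed background) instead of re-rendering the model.
    return cv2.inpaint(updated, gap.astype(np.uint8) * 255, 3, cv2.INPAINT_TELEA)
```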
One or more embodiments may use a graphical user interface to perform any of the steps of the method, including for example the step 1902 of modifying the spherical translation map.
User interface screen 2001 includes a list of objects 2003. These objects may for example have been defined during the segmenting of the original stitched image into regions. One or more embodiments may use any desired method to create and modify masks that define objects. The objects 2003 are selectable, with the objects highlighted in gray currently selected. In the illustrative interface a selection boundary is shown in the translation map 2002 for each selected object; for example the boundary 2004 shows the selected rocking chair object.
The user interface provides one or more controls to edit the pixel translations of the selected objects. The illustrative user interface 2001 provides one example of such controls.
3D models, in particular virtual reality environments, may be viewed from different viewpoints. One or more embodiments enable a review process for validating or correcting a 3D model that uses multiple reviewers operating simultaneously to review the model from different viewpoints.
Review stations may generate one or more display images showing the 3D model from the viewpoint determined by the reviewer pose subsystem. Generating these images is the function of a viewpoint renderer. Using techniques known in the art, and techniques described above in this specification, viewpoint renderers may generate monoscopic or stereoscopic views of the 3D model from the viewpoint corresponding to the reviewer's pose. These views may be displayed on any number of display devices for each reviewer. Display devices may for example include computer monitors, as well as virtual reality headsets or other head-mounted displays.
In one or more embodiments, the information on the current viewpoint of one or more reviewers may be communicated to the coordinator station. This information assists the coordinator in coordinating the reviews. For example, a coordinator may be able to determine that all relevant viewpoints have been reviewed and checked. In addition, a coordinator may be able to communicate more effectively with a reviewer about the 3D model if the coordinator can observe the precise viewpoint from which the reviewer is viewing the 3D model.
More generally, in one or more embodiments each reviewer may have one or more focus regions within his or her viewpoint that define areas of interest to the reviewer. These focus regions may for example encompass the reviewer's entire field of view or they may be selected portions of the field of view. A review station may include a reviewer focus subsystem that determines the focus region or focus regions for each reviewer and communicates this information to the coordinator station.
The coordinator station provides information that allows the coordinator to observe the entire review process from the collection of reviewers. In one or more embodiments, the coordinator station may generate one or more overview images of the 3D model that show the entire model. In these embodiments the coordinator may for example focus on the model as a whole, while each reviewer focuses on the view of the model from various specific viewpoints. As an example, an overview image may be a panoramic image of a complete spherical 3D model of a virtual reality environment. Any projection or image or collection of images may serve as overview images in one or more embodiments. The overview images may be generated by a model overview renderer that has access to the 3D model. An overview renderer may use any techniques known in the art, and techniques described in this specification, to generate one or more overview images of the 3D model. Overview images may be monoscopic or stereoscopic. For virtual reality environments, overview images may be for example, without limitation, spherical projections, cylindrical projections, cubical projections, or any collection of planar projections.
One or more embodiments may augment the overview images shown in the coordinator station with information depicting or describing the reviewer focus regions. For example, graphics showing the location of each reviewer focus region may be overlaid onto an overview image of the 3D model.
Embodiments may define focus regions in any desired manner depending on the application for the multi-reviewer system.
In one or more embodiments, reviewers may be able to designate specific areas within their field of view as focus regions. This capability may be useful for example if a reviewer wants to highlight a particular object to bring it to the attention of the coordinator. For example, a reviewer may notice that the 3D model has an anomaly or an inaccuracy at a particular location or a particular object. By highlighting this object or region the reviewer can notify the coordinator or other parties that this area requires attention.
In one or more embodiments the coordinator, or the reviewers, or other editors, may be able to modify the 3D model during the review process. In one or more embodiments some or all modifications may be performed after the review process, and the review process may be used to collect notes on changes that should be made or considered.
While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
This application is a continuation in part of U.S. Utility patent application Ser. No. 14/709,327, filed 11 May 2015, which is a continuation in part of U.S. Utility patent application Ser. No. 13/874,625, filed 1 May 2013, the specifications of which are hereby incorporated herein by reference. This application is also a continuation in part of U.S. Utility patent application Ser. No. 13/895,979, filed 16 May 2013, which is a continuation in part of U.S. Utility patent application Ser. No. 13/366,899, filed 6 Feb. 2012, which is a continuation in part of U.S. Utility patent application Ser. No. 13/029,862, filed 17 Feb. 2011, the specifications of which are hereby incorporated herein by reference.
| Number | Name | Date | Kind |
|---|---|---|---|
| 2593925 | Sheldon | Apr 1952 | A |
| 2799722 | Neugebauer | Jul 1957 | A |
| 2804500 | Giacoletto | Aug 1957 | A |
| 2874212 | Bechley | Feb 1959 | A |
| 2883763 | Schaper | Apr 1959 | A |
| 2974190 | Geiger | Mar 1961 | A |
| 3005042 | Horsley | Oct 1961 | A |
| 3258528 | Oppenheimer | Jun 1966 | A |
| 3486242 | Aronson | Dec 1969 | A |
| 3551589 | Moskovitz | Dec 1970 | A |
| 3558811 | Barton et al. | Jan 1971 | A |
| 3560644 | Palmer et al. | Feb 1971 | A |
| 3595987 | Vlahos | Jul 1971 | A |
| 3603962 | Lechner | Sep 1971 | A |
| 3612755 | Tadlock | Oct 1971 | A |
| 3617626 | Bluth et al. | Nov 1971 | A |
| 3619051 | Wright | Nov 1971 | A |
| 3621127 | Hope | Nov 1971 | A |
| 3647942 | Siegel et al. | Mar 1972 | A |
| 3673317 | Kennedy et al. | Jun 1972 | A |
| 3705762 | Ladd et al. | Dec 1972 | A |
| 3706841 | Novak | Dec 1972 | A |
| 3710011 | Altemus et al. | Jan 1973 | A |
| 3731995 | Reiffel | May 1973 | A |
| 3737567 | Kratomi | Jun 1973 | A |
| 3742125 | Siegel | Jun 1973 | A |
| 3761607 | Hanseman | Sep 1973 | A |
| 3769458 | Driskell | Oct 1973 | A |
| 3770884 | Curran et al. | Nov 1973 | A |
| 3770885 | Curran et al. | Nov 1973 | A |
| 3772465 | Vlahos et al. | Nov 1973 | A |
| 3784736 | Novak | Jan 1974 | A |
| 3848856 | Reeber et al. | Nov 1974 | A |
| 3851955 | Kent et al. | Dec 1974 | A |
| 3971068 | Gerhardt et al. | Jul 1976 | A |
| 3972067 | Peters | Jul 1976 | A |
| 4017166 | Kent et al. | Apr 1977 | A |
| 4021841 | Weinger | May 1977 | A |
| 4021846 | Roese | May 1977 | A |
| 4054904 | Saitoh et al. | Oct 1977 | A |
| 4149185 | Weinger | Apr 1979 | A |
| 4168885 | Kent et al. | Sep 1979 | A |
| 4183046 | Dalke et al. | Jan 1980 | A |
| 4183633 | Kent et al. | Jan 1980 | A |
| 4189743 | Schure et al. | Feb 1980 | A |
| 4189744 | Stern | Feb 1980 | A |
| 4235503 | Condon | Nov 1980 | A |
| 4258385 | Greenberg et al. | Mar 1981 | A |
| 4318121 | Taite et al. | Mar 1982 | A |
| 4329710 | Taylor | May 1982 | A |
| 4334240 | Franklin | Jun 1982 | A |
| 4436369 | Bukowski | Mar 1984 | A |
| 4475104 | Shen | Oct 1984 | A |
| 4544247 | Ohno | Oct 1985 | A |
| 4549172 | Welk | Oct 1985 | A |
| 4558359 | Kuperman et al. | Dec 1985 | A |
| 4563703 | Taylor | Jan 1986 | A |
| 4590511 | Bocchi et al. | May 1986 | A |
| 4600919 | Stern | Jul 1986 | A |
| 4603952 | Sybenga | Aug 1986 | A |
| 4606625 | Geshwind | Aug 1986 | A |
| 4608596 | Williams et al. | Aug 1986 | A |
| 4617592 | MacDonald | Oct 1986 | A |
| 4642676 | Weinger | Feb 1987 | A |
| 4645459 | Graf et al. | Feb 1987 | A |
| 4647965 | Imsand | Mar 1987 | A |
| 4694329 | Belmares-Sarabia et al. | Sep 1987 | A |
| 4697178 | Heckel | Sep 1987 | A |
| 4700181 | Maine et al. | Oct 1987 | A |
| 4721951 | Holler | Jan 1988 | A |
| 4723159 | Imsand | Feb 1988 | A |
| 4725879 | Eide et al. | Feb 1988 | A |
| 4755870 | Markle et al. | Jul 1988 | A |
| 4758908 | James | Jul 1988 | A |
| 4760390 | Maine et al. | Jul 1988 | A |
| 4774583 | Kellar et al. | Sep 1988 | A |
| 4794382 | Lai et al. | Dec 1988 | A |
| 4809065 | Harris et al. | Feb 1989 | A |
| 4827255 | Ishii | May 1989 | A |
| 4847689 | Yamamoto et al. | Jul 1989 | A |
| 4862256 | Markle et al. | Aug 1989 | A |
| 4888713 | Falk | Dec 1989 | A |
| 4903131 | Lingermann et al. | Feb 1990 | A |
| 4918624 | Moore et al. | Apr 1990 | A |
| 4925294 | Geshwind et al. | May 1990 | A |
| 4933670 | Wislocki | Jun 1990 | A |
| 4952051 | Lovell et al. | Aug 1990 | A |
| 4965844 | Oka et al. | Oct 1990 | A |
| 4984072 | Sandrew | Jan 1991 | A |
| 5002387 | Baljet et al. | Mar 1991 | A |
| 5038161 | Ki | Aug 1991 | A |
| 5050984 | Geshwind | Sep 1991 | A |
| 5055939 | Karamon et al. | Oct 1991 | A |
| 5093717 | Sandrew | Mar 1992 | A |
| 5177474 | Kadota | Jan 1993 | A |
| 5181181 | Glynn | Jan 1993 | A |
| 5185852 | Mayer | Feb 1993 | A |
| 5237647 | Roberts et al. | Aug 1993 | A |
| 5243460 | Kornberg | Sep 1993 | A |
| 5252953 | Sandrew et al. | Oct 1993 | A |
| 5262856 | Lippman et al. | Nov 1993 | A |
| 5328073 | Blanding et al. | Jul 1994 | A |
| 5341462 | Obata | Aug 1994 | A |
| 5347620 | Zimmer | Sep 1994 | A |
| 5363476 | Kurashige et al. | Nov 1994 | A |
| 5402191 | Dean et al. | Mar 1995 | A |
| 5428721 | Sato et al. | Jun 1995 | A |
| 5481321 | Lipton | Jan 1996 | A |
| 5495576 | Ritchey | Feb 1996 | A |
| 5528655 | Umetani et al. | Jun 1996 | A |
| 5534915 | Sandrew | Jul 1996 | A |
| 5668605 | Nachshon et al. | Sep 1997 | A |
| 5673081 | Yamashita et al. | Sep 1997 | A |
| 5682437 | Okino et al. | Oct 1997 | A |
| 5684715 | Palmer | Nov 1997 | A |
| 5699443 | Murata et al. | Dec 1997 | A |
| 5699444 | Palm | Dec 1997 | A |
| 5717454 | Adolphi et al. | Feb 1998 | A |
| 5729471 | Jain et al. | Mar 1998 | A |
| 5734915 | Roewer | Mar 1998 | A |
| 5739844 | Kuwano et al. | Apr 1998 | A |
| 5742291 | Palm | Apr 1998 | A |
| 5748199 | Palm | May 1998 | A |
| 5767923 | Coleman | Jun 1998 | A |
| 5777666 | Tanase et al. | Jul 1998 | A |
| 5778108 | Coleman | Jul 1998 | A |
| 5784175 | Lee | Jul 1998 | A |
| 5784176 | Narita | Jul 1998 | A |
| 5808664 | Yamashita et al. | Sep 1998 | A |
| 5825997 | Yamada et al. | Oct 1998 | A |
| 5835163 | Liou et al. | Nov 1998 | A |
| 5841512 | Goodhill | Nov 1998 | A |
| 5867169 | Prater | Feb 1999 | A |
| 5880788 | Bregler | Mar 1999 | A |
| 5899861 | Friemel et al. | May 1999 | A |
| 5907364 | Furuhata et al. | May 1999 | A |
| 5912994 | Norton et al. | Jun 1999 | A |
| 5920360 | Coleman | Jul 1999 | A |
| 5929859 | Meijers | Jul 1999 | A |
| 5940528 | Tanaka et al. | Aug 1999 | A |
| 5959697 | Coleman | Sep 1999 | A |
| 5973700 | Taylor et al. | Oct 1999 | A |
| 5973831 | Kleinberger et al. | Oct 1999 | A |
| 5982350 | Hekmatpour et al. | Nov 1999 | A |
| 5990900 | Seago | Nov 1999 | A |
| 5990903 | Donovan | Nov 1999 | A |
| 5999660 | Zorin et al. | Dec 1999 | A |
| 6005582 | Gabriel et al. | Dec 1999 | A |
| 6011581 | Swift et al. | Jan 2000 | A |
| 6014473 | Hossack et al. | Jan 2000 | A |
| 6023276 | Kawai et al. | Feb 2000 | A |
| 6025882 | Geshwind | Feb 2000 | A |
| 6031564 | Ma et al. | Feb 2000 | A |
| 6049628 | Chen et al. | Apr 2000 | A |
| 6056691 | Urbano et al. | May 2000 | A |
| 6067125 | May | May 2000 | A |
| 6086537 | Urbano et al. | Jul 2000 | A |
| 6088006 | Tabata | Jul 2000 | A |
| 6091421 | Terrasson | Jul 2000 | A |
| 6102865 | Hossack et al. | Aug 2000 | A |
| 6108005 | Starks et al. | Aug 2000 | A |
| 6118584 | Van Berkel et al. | Sep 2000 | A |
| 6119123 | Dimitrova et al. | Sep 2000 | A |
| 6132376 | Hossack et al. | Oct 2000 | A |
| 6141433 | Moed et al. | Oct 2000 | A |
| 6157747 | Szeliski | Dec 2000 | A |
| 6166744 | Jaszlics et al. | Dec 2000 | A |
| 6173328 | Sato | Jan 2001 | B1 |
| 6184937 | Williams et al. | Feb 2001 | B1 |
| 6198484 | Kameyama | Mar 2001 | B1 |
| 6201900 | Hossack et al. | Mar 2001 | B1 |
| 6208348 | Kaye | Mar 2001 | B1 |
| 6211941 | Erland | Apr 2001 | B1 |
| 6215516 | Ma et al. | Apr 2001 | B1 |
| 6222948 | Hossack et al. | Apr 2001 | B1 |
| 6226015 | Danneels et al. | May 2001 | B1 |
| 6228030 | Urbano et al. | May 2001 | B1 |
| 6263101 | Klein et al. | Jul 2001 | B1 |
| 6271859 | Asente | Aug 2001 | B1 |
| 6314211 | Kim et al. | Nov 2001 | B1 |
| 6329963 | Chiabrera | Dec 2001 | B1 |
| 6337709 | Yamaashi et al. | Jan 2002 | B1 |
| 6360027 | Hossack et al. | Mar 2002 | B1 |
| 6363170 | Seitz | Mar 2002 | B1 |
| 6364835 | Hossack et al. | Apr 2002 | B1 |
| 6373970 | Dong et al. | Apr 2002 | B1 |
| 6390980 | Peterson et al. | May 2002 | B1 |
| 6405366 | Lorenz et al. | Jun 2002 | B1 |
| 6414678 | Goddard et al. | Jul 2002 | B1 |
| 6416477 | Jago | Jul 2002 | B1 |
| 6426750 | Hoppe | Jul 2002 | B1 |
| 6429867 | Deering | Aug 2002 | B1 |
| 6445816 | Pettigrew | Sep 2002 | B1 |
| 6456340 | Margulis | Sep 2002 | B1 |
| 6466205 | Simpson et al. | Oct 2002 | B2 |
| 6474970 | Caldoro | Nov 2002 | B1 |
| 6477267 | Richards | Nov 2002 | B1 |
| 6492986 | Metaxas et al. | Dec 2002 | B1 |
| 6496598 | Harman | Dec 2002 | B1 |
| 6509926 | Mills et al. | Jan 2003 | B1 |
| 6515659 | Kaye et al. | Feb 2003 | B1 |
| 6535233 | Smith | Mar 2003 | B1 |
| 6553184 | Ando et al. | Apr 2003 | B1 |
| 6590573 | Geshwind | Jul 2003 | B1 |
| 6606166 | Knoll | Aug 2003 | B1 |
| 6611268 | Szeliski et al. | Aug 2003 | B1 |
| 6650339 | Silva et al. | Nov 2003 | B1 |
| 6662357 | Bowman-Amuah | Dec 2003 | B1 |
| 6665798 | McNally et al. | Dec 2003 | B1 |
| 6677944 | Yamamoto | Jan 2004 | B1 |
| 6686591 | Ito et al. | Feb 2004 | B2 |
| 6686926 | Kaye | Feb 2004 | B1 |
| 6707487 | Aman et al. | Mar 2004 | B1 |
| 6727938 | Randall | Apr 2004 | B1 |
| 6737957 | Petrovic et al. | May 2004 | B1 |
| 6744461 | Wada et al. | Jun 2004 | B1 |
| 6765568 | Swift et al. | Jul 2004 | B2 |
| 6791542 | Matusik et al. | Sep 2004 | B2 |
| 6798406 | Jones et al. | Sep 2004 | B1 |
| 6813602 | Thyssen | Nov 2004 | B2 |
| 6847737 | Kouri et al. | Jan 2005 | B1 |
| 6850252 | Hoffberg | Feb 2005 | B1 |
| 6853383 | Duquesnois | Feb 2005 | B2 |
| 6859523 | Jilk et al. | Feb 2005 | B1 |
| 6919892 | Cheiky et al. | Jul 2005 | B1 |
| 6964009 | Samaniego et al. | Nov 2005 | B2 |
| 6965379 | Lee et al. | Nov 2005 | B2 |
| 6973434 | Miller | Dec 2005 | B2 |
| 6985187 | Han et al. | Jan 2006 | B2 |
| 7000223 | Knutson et al. | Feb 2006 | B1 |
| 7006881 | Hoffberg et al. | Feb 2006 | B1 |
| 7027054 | Cheiky et al. | Apr 2006 | B1 |
| 7032177 | Novak et al. | Apr 2006 | B2 |
| 7035451 | Harman et al. | Apr 2006 | B2 |
| 7079075 | Connor et al. | Jul 2006 | B1 |
| 7098910 | Petrovic et al. | Aug 2006 | B2 |
| 7102633 | Kaye et al. | Sep 2006 | B2 |
| 7110605 | Marcellini et al. | Sep 2006 | B2 |
| 7116323 | Kaye et al. | Oct 2006 | B2 |
| 7116324 | Kaye et al. | Oct 2006 | B2 |
| 7117231 | Fischer et al. | Oct 2006 | B2 |
| 7136075 | Hamburg | Nov 2006 | B1 |
| 7181081 | Sandrew | Feb 2007 | B2 |
| 7190496 | Klug et al. | Mar 2007 | B2 |
| 7254264 | Naske et al. | Aug 2007 | B2 |
| 7254265 | Naske et al. | Aug 2007 | B2 |
| 7260274 | Sawhney et al. | Aug 2007 | B2 |
| 7272265 | Kouri et al. | Sep 2007 | B2 |
| 7298094 | Yui | Nov 2007 | B2 |
| 7308139 | Wentland et al. | Dec 2007 | B2 |
| 7321374 | Naske | Jan 2008 | B2 |
| 7327360 | Petrovic et al. | Feb 2008 | B2 |
| 7333519 | Sullivan et al. | Feb 2008 | B2 |
| 7333670 | Sandrew | Feb 2008 | B2 |
| 7343082 | Cote et al. | Mar 2008 | B2 |
| 7461002 | Crockett et al. | Dec 2008 | B2 |
| 7512262 | Criminisi et al. | Mar 2009 | B2 |
| 7519990 | Xie | Apr 2009 | B1 |
| 7532225 | Fukushima et al. | May 2009 | B2 |
| 7538768 | Kiyokawa et al. | May 2009 | B2 |
| 7542034 | Spooner et al. | Jun 2009 | B2 |
| 7558420 | Era | Jul 2009 | B2 |
| 7573475 | Sullivan et al. | Aug 2009 | B2 |
| 7573489 | Davidson et al. | Aug 2009 | B2 |
| 7576332 | Britten | Aug 2009 | B2 |
| 7577312 | Sandrew | Aug 2009 | B2 |
| 7610155 | Timmis et al. | Oct 2009 | B2 |
| 7624337 | Sull et al. | Nov 2009 | B2 |
| 7630533 | Ruth et al. | Dec 2009 | B2 |
| 7663689 | Marks | Feb 2010 | B2 |
| 7665798 | McNally et al. | Feb 2010 | B2 |
| 7680653 | Yeldener | Mar 2010 | B2 |
| 7772532 | Olsen et al. | Aug 2010 | B2 |
| 7852461 | Yahav | Dec 2010 | B2 |
| 7860342 | Levien et al. | Dec 2010 | B2 |
| 7894633 | Harman | Feb 2011 | B1 |
| 7940961 | Allen | May 2011 | B2 |
| 8036451 | Redert et al. | Oct 2011 | B2 |
| 8085339 | Marks | Dec 2011 | B2 |
| 8090402 | Fujisaki | Jan 2012 | B1 |
| 8194102 | Cohen | Jun 2012 | B2 |
| 8213711 | Tam et al. | Jul 2012 | B2 |
| 8217931 | Lowe et al. | Jul 2012 | B2 |
| 8244104 | Kashiwa | Aug 2012 | B2 |
| 8320634 | Deutsch | Nov 2012 | B2 |
| 8384763 | Tam et al. | Feb 2013 | B2 |
| 8401336 | Baldridge et al. | Mar 2013 | B2 |
| 8462988 | Boon | Jun 2013 | B2 |
| 8488868 | Tam et al. | Jul 2013 | B2 |
| 8526704 | Dobbe | Sep 2013 | B2 |
| 8543573 | MacPherson | Sep 2013 | B2 |
| 8634072 | Trainer | Jan 2014 | B2 |
| 8670651 | Sakuragi et al. | Mar 2014 | B2 |
| 8698798 | Murray et al. | Apr 2014 | B2 |
| 8907968 | Tanaka et al. | Dec 2014 | B2 |
| 8922628 | Bond | Dec 2014 | B2 |
| 20010025267 | Janiszewski | Sep 2001 | A1 |
| 20010051913 | Vashistha et al. | Dec 2001 | A1 |
| 20020001045 | Ranganath et al. | Jan 2002 | A1 |
| 20020048395 | Harman et al. | Apr 2002 | A1 |
| 20020049778 | Bell | Apr 2002 | A1 |
| 20020063780 | Harman et al. | May 2002 | A1 |
| 20020075384 | Harman | Jun 2002 | A1 |
| 20030018608 | Rice | Jan 2003 | A1 |
| 20030046656 | Saxena | Mar 2003 | A1 |
| 20030069777 | Or-Bach | Apr 2003 | A1 |
| 20030093790 | Logan et al. | May 2003 | A1 |
| 20030097423 | Ozawa et al. | May 2003 | A1 |
| 20030154299 | Hamilton | Aug 2003 | A1 |
| 20030177024 | Tsuchida | Sep 2003 | A1 |
| 20040004616 | Konya et al. | Jan 2004 | A1 |
| 20040062439 | Cahill et al. | Apr 2004 | A1 |
| 20040130680 | Zhou et al. | Jul 2004 | A1 |
| 20040151471 | Ogikubo | Aug 2004 | A1 |
| 20040181444 | Sandrew | Sep 2004 | A1 |
| 20040189796 | Ho et al. | Sep 2004 | A1 |
| 20040258089 | Derechin et al. | Dec 2004 | A1 |
| 20050083421 | Berezin et al. | Apr 2005 | A1 |
| 20050088515 | Geng | Apr 2005 | A1 |
| 20050104878 | Kaye et al. | May 2005 | A1 |
| 20050146521 | Kaye et al. | Jul 2005 | A1 |
| 20050188297 | Knight et al. | Aug 2005 | A1 |
| 20050207623 | Liu et al. | Sep 2005 | A1 |
| 20050231501 | Nitawaki | Oct 2005 | A1 |
| 20050231505 | Kaye et al. | Oct 2005 | A1 |
| 20050280643 | Chen | Dec 2005 | A1 |
| 20060028543 | Sohn et al. | Feb 2006 | A1 |
| 20060061583 | Spooner et al. | Mar 2006 | A1 |
| 20060083421 | Weiguo et al. | Apr 2006 | A1 |
| 20060143059 | Sandrew | Jun 2006 | A1 |
| 20060159345 | Clary et al. | Jul 2006 | A1 |
| 20060274905 | Lindahl et al. | Dec 2006 | A1 |
| 20070052807 | Zhou et al. | Mar 2007 | A1 |
| 20070236514 | Agusanto | Oct 2007 | A1 |
| 20070238981 | Zhu | Oct 2007 | A1 |
| 20070260634 | Makela et al. | Nov 2007 | A1 |
| 20070279412 | Davidson et al. | Dec 2007 | A1 |
| 20070279415 | Sullivan et al. | Dec 2007 | A1 |
| 20070286486 | Goldstein | Dec 2007 | A1 |
| 20070296721 | Chang et al. | Dec 2007 | A1 |
| 20080002878 | Meiyappan | Jan 2008 | A1 |
| 20080044155 | Kuspa | Feb 2008 | A1 |
| 20080079851 | Stanger et al. | Apr 2008 | A1 |
| 20080117233 | Mather et al. | May 2008 | A1 |
| 20080147917 | Lees et al. | Jun 2008 | A1 |
| 20080162577 | Fukuda et al. | Jul 2008 | A1 |
| 20080181486 | Spooner et al. | Jul 2008 | A1 |
| 20080225040 | Simmons et al. | Sep 2008 | A1 |
| 20080225042 | Birtwistle et al. | Sep 2008 | A1 |
| 20080225045 | Birtwistle | Sep 2008 | A1 |
| 20080225059 | Lowe et al. | Sep 2008 | A1 |
| 20080226123 | Birtwistle | Sep 2008 | A1 |
| 20080226128 | Birtwistle et al. | Sep 2008 | A1 |
| 20080226160 | Birtwistle et al. | Sep 2008 | A1 |
| 20080226181 | Birtwistle et al. | Sep 2008 | A1 |
| 20080226194 | Birtwistle et al. | Sep 2008 | A1 |
| 20080227075 | Poor et al. | Sep 2008 | A1 |
| 20080228449 | Birtwistle et al. | Sep 2008 | A1 |
| 20080246759 | Summers | Oct 2008 | A1 |
| 20080246836 | Lowe et al. | Oct 2008 | A1 |
| 20080259073 | Lowe et al. | Oct 2008 | A1 |
| 20090002368 | Vitikainen et al. | Jan 2009 | A1 |
| 20090033741 | Oh et al. | Feb 2009 | A1 |
| 20090116732 | Zhou et al. | May 2009 | A1 |
| 20090219383 | Passmore | Sep 2009 | A1 |
| 20090256903 | Spooner et al. | Oct 2009 | A1 |
| 20090290758 | Ng-Thow-Hing et al. | Nov 2009 | A1 |
| 20090303204 | Nasiri | Dec 2009 | A1 |
| 20100045666 | Kommann | Feb 2010 | A1 |
| 20100166338 | Lee | Jul 2010 | A1 |
| 20100259610 | Petersen | Oct 2010 | A1 |
| 20110050864 | Bond | Mar 2011 | A1 |
| 20110074784 | Turner et al. | Mar 2011 | A1 |
| 20110164109 | Balderinge et al. | Jul 2011 | A1 |
| 20110169827 | Spooner et al. | Jul 2011 | A1 |
| 20110169914 | Lowe et al. | Jul 2011 | A1 |
| 20110188773 | Wei et al. | Aug 2011 | A1 |
| 20110227917 | Lowe et al. | Sep 2011 | A1 |
| 20110273531 | Ito et al. | Nov 2011 | A1 |
| 20120032948 | Lowe et al. | Feb 2012 | A1 |
| 20120039525 | Tian et al. | Feb 2012 | A1 |
| 20120087570 | Seo et al. | Apr 2012 | A1 |
| 20120102435 | Han et al. | Apr 2012 | A1 |
| 20120188334 | Fortin et al. | Jul 2012 | A1 |
| 20120274626 | Hsieh | Nov 2012 | A1 |
| 20120281906 | Appia | Nov 2012 | A1 |
| 20120306849 | Steen | Dec 2012 | A1 |
| 20130051659 | Yamamoto | Feb 2013 | A1 |
| 20130234934 | Champion | Sep 2013 | A1 |
| 20140130680 | Fin et al. | May 2014 | A1 |
| 20140169767 | Goldberg | Jun 2014 | A1 |
| Number | Date | Country |
|---|---|---|
| 003444353 | Jun 1986 | DE |
| 003444353 | Dec 1986 | DE |
| 0302454 | Feb 1989 | EP |
| 1187494 | Apr 2004 | EP |
| 1719079 | Nov 2006 | EP |
| 2487039 | Nov 2012 | GB |
| 60-52190 | Mar 1985 | JP |
| 2002123842 | Apr 2002 | JP |
| 2003046982 | Feb 2003 | JP |
| 2004-207985 | Jul 2004 | JP |
| 2004207985 | Jul 2004 | JP |
| 20120095059 | Aug 2012 | KR |
| 20130061289 | Nov 2013 | KR |
| 2376632 | Dec 1998 | RU |
| 1192168 | Sep 1982 | SU |
| 1192168 | Nov 1982 | SU |
| 9724000 | Jul 1997 | WO |
| 9912127 | Mar 1999 | WO |
| 9930280 | Jun 1999 | WO |
| 0079781 | Dec 2000 | WO |
| 0101348 | Jan 2001 | WO |
| 0213143 | Feb 2002 | WO |
| 2006078237 | Jul 2006 | WO |
| 20070148219 | Dec 2007 | WO |
| 2008075276 | Jun 2008 | WO |
| 2011029209 | Sep 2011 | WO |
| 2012016600 | Feb 2012 | WO |
| 2013084234 | Jun 2013 | WO |
| Entry |
|---|
| Tam et al., “3D-TV Content Generation: 2D-to-3D Conversion”, ICME 2006, p. 1868-1872. |
| Harman et al. “Rapid 2D to 3D Conversion”, The Reporter, vol. 17, No. 1, Feb. 2002, 12 pages. |
| Legend Films, “System and Method for Conversion of Sequences of Two-Dimensional Medical Images to Three-Dimensional Images” Sep. 12, 2013, 7 pages. |
| International Search Report Issued for PCT/US2013/072208, dated Feb. 27, 2014, 6 pages. |
| International Search Report and Written Opinion issued for PCT/US2013/072447, dated Mar. 13, 2014, 6 pages. |
| International Preliminary Report on Patentability received in PCT/US2013/072208 on Jun. 11, 2015, 5 pages. |
| International Preliminary Report on Patentability received in PCT/US2013/072447 on Jun. 11, 2015, 12 pages. |
| McKenna “Interactive Viewpoint Control and Three-Dimensional Operations”, Computer Graphics and Animation Group, The Media Laboratory, pp. 53-56, 1992. |
| European Search Report Received in PCTUS2011067024 on Nov. 28, 2014, 6 pages. |
| “Nintendo DSi Uses Camera Face Tracking to Create 3D Mirages”, retrieved from www.Gizmodo.com on Mar. 18, 2013, 3 pages. |
| Noll, Computer-Generated Three-Dimensional Movies, Computers and Automation, vol. 14, No. 11 (Nov. 1965), pp. 20-23. |
| Noll, Stereographic Projections by Digital Computer, Computers and Automation, vol. 14, No. 5 (May 1965), pp. 32-34. |
| Australian Office Action issued for 2002305387, dated Mar. 15, 2007, 2 page. |
| Canadian Office Action, Dec. 28, 2011, Appl No. 2,446,150, 4 pages. |
| Canadian Office Action, Oct. 8, 2010, App. No. 2,446,150, 6 pages. |
| Canadian Office Action, Jun. 13, 2011, App. No. 2,446,150, 4 pages. |
| Daniel L. Symmes, Three-Dimensional Image, Microsoft Encarta Online Encyclopedia (hard copy printed May 28, 2008 and of record; the website indicated on the document has since been discontinued: http://encarta.msn.com/text_761584746_0/Three-Dimensional_Image.htm). |
| Declaration of Barbara Frederiksen in Support of In-Three, Inc's Opposition to Plaintiff's Motion for Preliminary Injunction, Aug. 1, 2005, IMAX Corporation et al v. In-Three, Inc., Case No. CV05 1795 FMC (Mcx). (25 pages). |
| Declaration of John Marchioro, Exhibit C, 3 pages, Nov. 2, 2007. |
| Declaration of Michael F. Chou, Exhibit B, 12 pages, Nov. 2, 2007. |
| Declaration of Steven K. Feiner, Exhibit A, 10 pages, Nov. 2, 2007. |
| Di Zhong, Shih-Fu Chang, “AMOS: An Active System for MPEG-4 Video Object Segmentation,” ICIP (2) 8: 647-651, Apr. 1998. |
| E. N. Mortensen and W. A. Barrett, “Intelligent Scissors for Image Composition,” Computer Graphics (SIGGRAPH '95), pp. 191-198, Los Angeles, CA, Aug. 1995. |
| EPO Office Action issued for EP Appl. No. 02734203.9, dated Sep. 12, 2006, 4 pages. |
| EPO Office Action issued for EP Appl. No. 02734203.9, dated Oct. 7, 2010, 5 pages. |
| Eric N. Mortensen, William A. Barrett, “Interactive Segmentation with Intelligent Scissors,” Graphical Models and Image Processing, vol. 60, No. 5, pp. 349-384, Sep. 2002. |
| Exhibit 1 to Declaration of John Marchioro, Revised translation of portions of Japanese Patent Document No. 60-52190 to Hiromae, 3 pages, Nov. 2, 2007. |
| Gao et al., Perceptual Motion Tracking from Image Sequences, IEEE, Jan. 2001, pp. 389-392. |
| Grossman, “Look Ma, No Glasses”, Games, Apr. 1992, pp. 12-14. |
| Hanrahan et al., “Direct WYSIWYG painting and texturing on 3D shapes”, Computer Graphics, vol. 24, Issue 4, pp. 215-223, Aug. 1990. |
| Zhong, et al., “Interactive Tracker—A Semi-automatic Video Object Tracking and Segmentation System,” Microsoft Research China, http://research.microsoft.com (Aug. 26, 2003). |
| Indian Office Action issued for Appl. No. 49/DELNP/2005, dated Apr. 4, 2007, 9 pages. |
| Interpolation (from Wikipedia encyclopedia, article pp. 1-6) retrieved from Internet URL: http://en.wikipedia.org/wiki/Interpolation on Jun. 5, 2008. |
| IPER, Mar. 29, 2007, PCT/US2005/014348, 5 pages. |
| IPER, Oct. 5, 2012, PCT/US2011/058182, 6 pages. |
| International Search Report, Jun. 13, 2003, PCT/US02/14192, 4 pages. |
| PCT Search Report issued for PCT/US2011/058182, dated May 10, 2012, 8 pages. |
| PCT Search Report issued for PCT/US2011/067024, dated Aug. 22, 2012, 10 pages. |
| Izquierdo et al., Virtual 3D-View Generation from Stereoscopic Video Data, IEEE, Jan. 1998, pp. 1219-1224. |
| Jul. 21, 2005, Partial Testimony, Expert: Samuel Zhou, Ph.D., 2005 WL 3940225 (C.D.Cal.), 21 pages. |
| Kaufman, D., “The Big Picture”, http://www.xenotech.com, Apr. 1998, pp. 1-4. |
| Lenny Lipton, “Foundations of the Stereo-Scopic Cinema, a Study in Depth” with an Appendix on 3D Television, 325 pages, May 1978. |
| Lenny Lipton, Foundations of the Stereo-Scopic Cinema, a Study in Depth, 1982, Van Nostrand Reinhold Company. |
| Machine translation of JP Patent No. 2004-207985, dated Jul. 22, 2008, 34 pages. |
| Michael Gleicher, “Image Snapping,” SIGGRAPH: 183-190, Jun. 1995. |
| Murray et al., Active Tracking, IEEE International Conference on Intelligent Robots and Systems, Sep. 1993, pp. 1021-1028. |
| Ohm et al., An Object-Based System for Stereoscopic Viewpoint Synthesis, IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 5, Oct. 1997, pp. 801-811. |
| Optical Reader (from Wikipedia encyclopedia, article p. 1) retrieved from Internet URL: http://en.wikipedia.org/wiki/Optical_reader on Jun. 5, 2008. |
| Selsis et al., Automatic Tracking and 3D Localization of Moving Objects by Active Contour Models, Intelligent Vehicles 95 Symposium, Sep. 1995, pp. 96-100. |
| Slinker et al., “The Generation and Animation of Random Dot and Random Line Autostereograms”, Journal of Imaging Science and Technology, vol. 36, No. 3, pp. 260-267, May 1992. |
| Nguyen et al., Tracking Nonparameterized Object Contours in Video, IEEE Transactions on Image Processing, vol. 11, No. 9, Sep. 2002, pp. 1081-1091. |
| U.S. District Court, C.D. California, IMAX Corporation and Three-Dimensional Media Group, Ltd., v. In-Three, Inc., Partial Testimony, Expert: Samuel Zhou, Ph.D., No. CV 05-1795 FMC(Mcx), Jul. 19, 2005, WL 3940223 (C.D.Cal.), 6 pages. |
| U.S. District Court, C.D. California, IMAX v. In-Three, No. 05 CV 1795, 2005, Partial Testimony, Expert: David Geshwind, WestLaw 2005, WL 3940224 (C.D.Cal.), 8 pages. |
| U.S. District Court, C.D. California, Western Division, IMAX Corporation, and Three-Dimensional Media Group, Ltd. v. In-Three, Inc., No. CV05 1795 FMC (Mcx). Jul. 18, 2005. Declaration of Barbara Frederiksen in Support of In-Three, Inc.'s Opposition to Plaintiffs' Motion for Preliminary Injunction, 2005 WL 5434580 (C.D.Cal.), 13 pages. |
| U.S. Patent and Trademark Office, Before the Board of Patent Appeals and Interferences, Ex Parte Three-Dimensional Media Group, Ltd., Appeal 2009-004087, Reexamination Control No. 90/007,578, U.S. Pat. No. 4,925,294, Decision on Appeal, 88 pages, Jul. 30, 2010. |
| Yasushi Mae, et al., “Object Tracking in Cluttered Background Based on Optical Flow and Edges,” Proc. 13th Int. Conf. on Pattern Recognition, vol. 1, pp. 196-200, Apr. 1996. |
| PCT ISR, Feb. 27, 2007, PCT/US2005/014348, 8 pages. |
| PCT ISR, Sep. 11, 2007, PCT/US07/62515, 9 pages. |
| PCT ISR, Nov. 14, 2007, PCT/US07/62515, 24 pages. |
| PCT IPRP, Jul. 4, 2013, PCT/US2011/067024, 5 pages. |
| Weber, et al., “Rigid Body Segmentation and Shape Description from Dense Optical Flow Under Weak Perspective,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 2, Feb. 1997, pp. 139-143. |
| IPER, May 10, 2013, PCT/US2011/058182, 6 pages. |
| European Office Action dated Jun. 26, 2013, received for EP Appl. No. 02734203.9 on Jul. 22, 2013, 5 pages. |
| PCT Search Report and Written Opinion, dated Aug. 22, 2013, for PCT Appl. No. PCT/US2013/035506, 7 pages. |
| International Preliminary Report on Patentability and Written Opinion issued for PCT/US2013/035506, dated Aug. 21, 2014, 6 pages. |
| Zhang, et al., “Stereoscopic Image Generation Based on Depth Images for 3D TV”, IEEE Transactions on Broadcasting, vol. 51, No. 2, pp. 191-199, Jun. 2005. |
| Beraldi, et al., “Motion and Depth from Optical Flow”, Lab. di Bioingegneria, Facoltà di Medicina, Università di Modena, Modena, Italy, pp. 205-208, 1989. |
| Hendriks, et al., “Converting 2D to 3D: A Survey”, Information and Communication Theory Group, Dec. 2005. |
| Number | Date | Country |
|---|---|---|
| 20150358613 A1 | Dec 2015 | US |
| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | 14709327 | May 2015 | US |
| Child | 14827482 | | US |
| Parent | 13874625 | May 2013 | US |
| Child | 14709327 | | US |
| Parent | 14827482 | | US |
| Child | 14709327 | | US |
| Parent | 13895979 | May 2013 | US |
| Child | 14827482 | | US |
| Parent | 13366899 | Feb 2012 | US |
| Child | 13895979 | | US |
| Parent | 13029862 | Feb 2011 | US |
| Child | 13366899 | | US |