A given video generally includes one or more scenes, where each scene in the video can be either relatively static (e.g., the objects in the scene do not substantially change or move over time) or dynamic (e.g., the objects in the scene substantially change and/or move over time). In a traditional video the viewpoint of each scene is chosen by the director when the video is recorded/captured and this viewpoint cannot be controlled or changed by an end user while they are viewing the video. In other words, in a traditional video the viewpoint of each scene is fixed and cannot be modified when the video is being rendered and displayed. In a free viewpoint video an end user can interactively control and change their viewpoint of each scene at will while they are viewing the video. In other words, in a free viewpoint video each end user can interactively request different synthetic (i.e., virtual) viewpoints of each scene on-the-fly when the video is being rendered and displayed.
This Summary is provided to introduce a selection of concepts, in a simplified form, that are further described hereafter in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Free viewpoint video processing pipeline technique embodiments described herein are generally applicable to generating a free viewpoint video of a scene and presenting it to a user. In one exemplary embodiment an arrangement of sensors is used to capture the scene, where the arrangement includes a plurality of video capture devices and generates a plurality of streams of sensor data each of which represents the scene from a different geometric perspective. The streams of sensor data are then input and calibrated. A scene proxy is then generated from the calibrated streams of sensor data, where the scene proxy geometrically describes the scene as a function of time and includes one or more types of geometric proxy data which is matched to a first set of current pipeline conditions in order to maximize the photo-realism of the free viewpoint video that results from the scene proxy at each point in time. This scene proxy generation includes the following actions. The current pipeline conditions in the first set are periodically analyzed. The results of this periodic analysis are then used to select one or more different 3D (three-dimensional) reconstruction methods which are matched to these current pipeline conditions. The selected 3D reconstruction methods are then used to generate one or more different 3D reconstructions of the scene from the calibrated streams of sensor data. The 3D reconstructions of the scene and the results of the periodic analysis are then used to generate the scene proxy.
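For illustration only, the following Python sketch shows one possible way the periodic analysis of pipeline conditions could drive the selection of 3D reconstruction methods. The condition fields and method names are hypothetical and are not taken from the embodiments described herein; they merely exemplify condition-matched selection.

```python
# Illustrative sketch only: selecting 3D reconstruction methods from
# periodically analyzed pipeline conditions. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class PipelineConditions:
    num_vcds: int            # number of video capture devices
    has_infrared: bool       # whether infrared image streams are available
    live: bool               # live (uni/bidirectional) vs. asynchronous FVV
    scene_complexity: float  # 0.0 (simple) .. 1.0 (highly complex)

def select_reconstruction_methods(c: PipelineConditions) -> list[str]:
    """Return 3D reconstruction methods matched to the current conditions."""
    methods = []
    if c.live:
        # Live implementations are limited to high-speed reconstructions.
        methods.append("calibrated_point_cloud")
        if c.scene_complexity > 0.5:
            methods.append("generic_object_models")  # e.g., to fill occlusions
    else:
        # Asynchronous FVV can afford slower, higher-fidelity steps in sequence.
        methods += ["calibrated_point_cloud", "mesh_model", "texture_mapping"]
    if c.has_infrared:
        methods.insert(0, "infrared_stereo_depth_maps")
    return methods
```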
In another exemplary embodiment the scene proxy is input. A current synthetic viewpoint of the scene is then generated from the scene proxy, where this current synthetic viewpoint generation maximizes the photo-realism of the current synthetic viewpoint based upon a second set of current pipeline conditions. The current synthetic viewpoint of the scene is then displayed. The current synthetic viewpoint generation includes the following actions. The current pipeline conditions in the second set are periodically analyzed. The results of this periodic analysis are then used to select one or more different image-based rendering methods which are matched to these current pipeline conditions. The selected image-based rendering methods and the results of the periodic analysis are then used to generate the current synthetic viewpoint of the scene.
The specific features, aspects, and advantages of the free viewpoint video processing pipeline technique embodiments described herein will become better understood with regard to the following description, appended claims, and accompanying drawings where:
In the following description of free viewpoint video (FVV) processing pipeline technique embodiments (hereafter simply referred to as pipeline technique embodiments) reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the pipeline technique can be practiced. It is understood that other embodiments can be utilized and structural changes can be made without departing from the scope of the pipeline technique embodiments.
It is also noted that for the sake of clarity specific terminology will be resorted to in describing the pipeline technique embodiments described herein and it is not intended for these embodiments to be limited to the specific terms so chosen. Furthermore, it is to be understood that each specific term includes all its technical equivalents that operate in a broadly similar manner to achieve a similar purpose. Reference herein to “one embodiment”, or “another embodiment”, or an “exemplary embodiment”, or an “alternate embodiment”, or “one implementation”, or “another implementation”, or an “exemplary implementation”, or an “alternate implementation” means that a particular feature, a particular structure, or particular characteristics described in connection with the embodiment or implementation can be included in at least one embodiment of the pipeline technique. The appearances of the phrases “in one embodiment”, “in another embodiment”, “in an exemplary embodiment”, “in an alternate embodiment”, “in one implementation”, “in another implementation”, “in an exemplary implementation”, and “in an alternate implementation” in various places in the specification are not necessarily all referring to the same embodiment or implementation, nor are separate or alternative embodiments/implementations mutually exclusive of other embodiments/implementations. Yet furthermore, the order of process flow representing one or more embodiments or implementations of the pipeline technique does not inherently indicate any particular order nor imply any limitations of the pipeline technique.
The term “sensor” is used herein to refer to any one of a variety of scene-sensing devices which can be used to generate a stream of sensor data that represents a given scene. Generally speaking and as will be described in more detail hereafter, the pipeline technique embodiments described herein employ a plurality of sensors which can be configured in various arrangements to capture a scene, thus allowing a plurality of streams of sensor data to be generated each of which represents the scene from a different geometric perspective. Each of the sensors can be any type of video capture device (VCD) (e.g., any type of video camera), or any type of audio capture device, or any combination thereof. Each of the sensors can also be either static (i.e., the sensor has a fixed spatial location and a fixed rotational orientation which do not change over time), or moving (i.e., the spatial location and/or rotational orientation of the sensor change over time). The pipeline technique embodiments can employ a combination of different types of sensors to capture a given scene.
The term “baseline” is used herein to refer to a ratio of the actual physical distance between a given pair of VCDs to the average of the actual physical distance from each VCD in the pair to the viewpoint of the scene. When this ratio is larger than a prescribed value the pair of VCDs is referred to herein as a wide baseline stereo pair of VCDs. When this ratio is smaller than the prescribed value the pair of VCDs is referred to herein as a narrow baseline stereo pair of VCDs.
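For illustration only, a minimal Python sketch of the baseline ratio just defined follows. The threshold value is a hypothetical placeholder standing in for the prescribed value, which the text does not specify.

```python
import math

def classify_baseline(vcd_a, vcd_b, viewpoint, threshold=0.1):
    """Classify a pair of VCDs as a 'wide' or 'narrow' baseline stereo pair.

    vcd_a, vcd_b, viewpoint are (x, y, z) positions; threshold stands in for
    the prescribed value referred to in the text.
    """
    dist = math.dist
    baseline = dist(vcd_a, vcd_b) / ((dist(vcd_a, viewpoint) + dist(vcd_b, viewpoint)) / 2.0)
    return "wide" if baseline > threshold else "narrow"
```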
The pipeline technique embodiments described herein generally involve an FVV processing pipeline for generating an FVV of a given scene and presenting the FVV to one or more end users. The pipeline technique embodiments are advantageous for various reasons including, but not limited to, the following. Generally speaking and as will be appreciated from the more detailed description that follows, the pipeline technique embodiments create a feeling of immersion for any end user who is viewing a rendering of the captured scene, thus enhancing their viewing experience. The pipeline technique embodiments also enable optimal viewpoint navigation for up to six degrees of viewpoint navigation freedom.
Furthermore, the pipeline technique embodiments described herein do not rely upon having to constrain the FVV processing pipeline in order to produce a desired visual result. In other words, the pipeline technique embodiments eliminate the need to place constraints on the FVV processing pipeline in order to generate various synthetic viewpoints of the scene which are photo-realistic and thus are free of discernible artifacts. More particularly and by way of example but not limitation, the pipeline technique embodiments eliminate having to constrain the arrangement of the sensors that are used to capture the scene. Accordingly, the pipeline technique embodiments are operational with any arrangement of sensors. The pipeline technique embodiments also eliminate having to constrain the complexity or composition of the scene that is being captured (e.g., neither the environment(s) in the scene, nor the types of objects in the scene, nor the number of people in the scene, among other things has to be constrained). Accordingly, the pipeline technique embodiments are operational with any type of scene, including both relatively static and dynamic scenes. The pipeline technique embodiments also eliminate having to constrain the number or types of sensors that are used to capture the scene. Accordingly, the pipeline technique embodiments are operational with any number of sensors and all types of sensors. The pipeline technique embodiments also eliminate having to constrain the number of degrees of viewpoint navigation freedom that are provided during the rendering and end user viewing of the captured scene. Accordingly, the pipeline technique embodiments can produce visual results having as many as six degrees of viewpoint navigation freedom. The pipeline technique embodiments can also produce visual results having just one degree of viewpoint navigation freedom.
Yet, furthermore, the pipeline technique embodiments described herein do not rely upon having to use a specific 3D (three-dimensional) reconstruction method in the FVV processing pipeline to generate a 3D reconstruction of the captured scene. Accordingly, the pipeline technique embodiments support the use of any one or more 3D reconstruction methods in the pipeline and therefore provide the freedom to use whatever 3D reconstruction method(s) produces the desired visual result (e.g., the highest degree of photo-realism for the particular scene being captured and the desired number of degrees of viewpoint navigation freedom) based on the particular characteristics of the streams of sensor data that are generated by the sensors (e.g., based on factors such as the particular number and types of sensors that are used to capture the scene, and the particular arrangement of these sensors that is used), along with other current pipeline conditions.
Yet furthermore, the pipeline technique embodiments described herein do not rely upon having to use a specific image-based rendering method in the FVV processing pipeline during the rendering and end user viewing of the captured scene. Accordingly, the pipeline technique embodiments support the use of any one or more image-based rendering methods in the pipeline and therefore provide the freedom to use whatever image-based rendering method(s) produces the desired visual result based on the particular characteristics of the streams of sensor data that are generated by the sensors, along with other current pipeline conditions. By way of example but not limitation, in an exemplary situation where just two VCDs are used to capture a scene, an image-based rendering method that renders a lower fidelity 3D geometric proxy of the captured scene (herein simply referred to as a scene proxy) may produce an optimally photo-realistic visual result when the end user's viewpoint is close to the axis of one of the VCDs (such as with billboards). In another exemplary situation where a large number of VCDs configured in a circular arrangement are used to capture a scene, a conventional image warping/morphing image-based rendering method may produce an optimally photo-realistic visual result. In yet another exemplary situation where a large number of VCDs configured in either a 2D (two-dimensional) or 3D array arrangement are used to capture a scene, a conventional view interpolation image-based rendering method may produce an optimally photo-realistic visual result. In yet another exemplary situation where an even larger number of VCDs is used, a conventional lumigraph or light-field image-based rendering method may produce an optimally photo-realistic visual result.
It will thus be appreciated that the pipeline technique embodiments described herein result in a flexible, robust and commercially viable next generation FVV processing pipeline that meets the needs of today's various creative video producers and editors. By way of example but not limitation and as will be appreciated from the more detailed description that follows, the pipeline technique embodiments are applicable to various types of video-based media applications such as consumer entertainment (e.g., movies, television shows, and the like) and video-conferencing/telepresence, among others. The pipeline technique embodiments support a broad range of features that provide for the capture (i.e., recording), processing, storage, distribution, rendering, and end user viewing of any type of FVV that can be generated. Various implementations of the pipeline technique embodiments are possible, where each different implementation supports a different type of FVV. Exemplary types of supported FVV are described in more detail hereafter.
Additionally, the pipeline technique embodiments described herein allow any one or more parameters in the FVV processing pipeline to be freely modified without introducing artifacts into the FVV that is presented to the one or more end users. This allows the photo-realism of the FVV that is presented to each end user to be maximized (i.e., the artifacts are minimized) regardless of the characteristics of the various sensors that are used to capture the scene, and the characteristics of the various streams of sensor data that are generated by the sensors. Exemplary pipeline parameters which can be modified include, but are not limited to, the following. The number and types of sensors that are used to capture the scene can be modified. The arrangement of the sensors can also be modified. Which if any of the sensors is static and which is moving can also be modified. The complexity and composition of the scene can also be modified. Whether the scene is relatively static or dynamic can also be modified. The 3D reconstruction methods and image-based rendering methods that are used can also be modified. The number of degrees of viewpoint navigation freedom that are provided during the rendering and end user viewing of the captured scene can also be modified.
Referring again to
Referring again to
Referring again to
Referring again to
As noted heretofore, various implementations of the pipeline technique embodiments described herein are possible, where each different implementation supports a different type of FVV and a different user viewing experience. As will now be described in more detail, each of these different implementations differs in terms of the user viewing experience it provides, its latency characteristics (i.e., how rapidly the streams of sensor data have to be processed through the FVV processing pipeline), its storage characteristics, its transmission and related bandwidth characteristics, and the types of computing device hardware it necessitates.
Referring again to
Referring again to
Referring again to
Referring again to
This section provides a more detailed description of the capture and processing stages of the FVV processing pipeline. The pipeline technique embodiments described herein generally employ a plurality of sensors which are configured in a prescribed arrangement to capture a given scene. The pipeline technique embodiments are operable with any type of sensor, any number (two or greater) of sensors, any arrangement of sensors (where this arrangement can include a plurality of different geometries and different geometric relationships between the sensors), and any combination of different types of sensors. The pipeline technique embodiments are also operable with both static and moving sensors. A given sensor can be any type of VCD (examples of which are described in more detail hereafter), or any type of audio capture device (such as a microphone, or the like), or any combination thereof. Each VCD generates a stream of video data which includes a stream of images (also known as and referred to herein as frames) of the scene from the specific geometric perspective of the VCD. Similarly, each audio capture device generates a stream of audio data representing the audio emanating from the scene from the specific geometric perspective of the audio capture device.
Exemplary types of VCDs that can be employed include, but are not limited to, the following. A given VCD can be a conventional visible light video camera which generates a stream of video data that includes a stream of color images of the scene. A given VCD can also be a conventional light-field camera (also known as a “plenoptic camera”) which generates a stream of video data that includes a stream of color light-field images of the scene. A given VCD can also be a conventional infrared structured-light projector combined with a conventional infrared video camera that is matched to the projector, where this projector/camera combination generates a stream of video data that includes a stream of infrared images of the scene. This projector/camera combination is also known as a “structured-light 3D scanner”. A given VCD can also be a conventional monochromatic video camera which generates a stream of video data that includes a stream of monochrome images of the scene. A given VCD can also be a conventional time-of-flight camera which generates a stream of video data that includes both a stream of depth map images of the scene and a stream of color images of the scene. For simplicity's sake, the term “color camera” is sometimes used herein to refer to any type of VCD that generates color images of the scene.
It will be appreciated that variability in factors such as the composition and complexity of a given scene, and each end user's viewpoint navigation during the user viewing experience stage of the FVV processing pipeline, among other factors, can impact the determination of how many sensors to use to capture the scene, the particular type(s) of sensors to use, and the particular arrangement of the sensors to use. The pipeline technique embodiments described herein generally employ a minimum of one VCD which generates color image data for the scene, along with one or more other VCDs that can be used in combination to generate 3D geometry data for the scene. In situations where an outdoor scene is being captured or the sensors are located far from the scene, it is advantageous to capture the scene using both a wide baseline stereo pair of color cameras and a narrow baseline stereo pair of color cameras. In situations where an indoor scene is being captured, it is advantageous to capture the scene using a narrow baseline stereo pair of VCDs both of which generate video data that includes a stream of infrared images of the scene in order to eliminate the dependency on scene lighting variables.
Generally speaking, it is advantageous to increase the number of sensors being used as the complexity of the scene increases. In other words, as the scene becomes more complex (e.g., as additional people are added to the scene), the use of additional VCDs serves to reduce the number of occluded areas within the scene. It may also be advantageous to capture the entire scene using a given arrangement of static VCDs, and at the same time also capture a specific higher complexity region of the scene using one or more additional moving VCDs. In a situation where a large number of VCDs is used to capture a complex scene, different combinations of the VCDs can be used during the processing stage of the FVV processing pipeline (e.g., a situation where a specific VCD is part of both a narrow baseline stereo pair and a different wide baseline stereo pair involving a third VCD).
As is appreciated in the art of video recording, the intrinsic and extrinsic characteristics of each of the VCDs in the arrangement are commonly determined by performing one or more calibration procedures which calibrate the VCDs, where these procedures are specific to the particular types of VCDs that are being used to capture the scene, and the particular number and arrangement of the VCDs. In the unidirectional and bidirectional live FVV implementations of the pipeline technique embodiments described herein the calibration procedures are performed and the streams of sensor data which are generated thereby are input before the scene capture. In the asynchronous FVV implementation of the pipeline technique embodiments the calibration procedures can be performed and the streams of sensor data which are generated thereby can be input either before or after the scene capture. Exemplary calibration procedures will now be described.
In a situation where the VCDs that are being used to capture the scene are genlocked and include a combination of color cameras, VCDs which generate a stream of infrared images of the scene, and one or more time-of-flight cameras, and this combination of cameras is arranged in a static array, the cameras in the array can be calibrated and the intrinsic and extrinsic characteristics of each of the cameras can be determined in the following manner. A stream of calibration data can be input from each of the cameras in the array while a common physical feature (such as a ball, or the like) is internally illuminated with an incandescent light (which is visible to all of the cameras) and moved throughout the scene. These streams of calibration data can then be analyzed using conventional methods to determine both an intrinsic and extrinsic calibration matrix for each of the cameras.
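For illustration only, the following sketch shows how the illuminated calibration ball could be located in each camera's frames using OpenCV; the resulting per-camera 2D tracks would then be passed to a standard multi-camera calibration or bundle-adjustment solver (not shown) to recover the intrinsic and extrinsic matrices. The threshold value is an assumption.

```python
import cv2

def track_calibration_ball(frame):
    """Locate the internally illuminated ball in one grayscale frame.

    Returns the (x, y) centroid of the brightest blob, or None if the ball
    is not visible in this frame. Collecting these centroids from every
    camera over time yields the calibration data streams described above.
    """
    _, mask = cv2.threshold(frame, 240, 255, cv2.THRESH_BINARY)  # bright-spot mask
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```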
In another situation where the VCDs that are being used to capture the scene include a plurality of color cameras which are arranged in a static array, the cameras in the array can be calibrated and the intrinsic and extrinsic characteristics of each of the cameras can be determined in the following manner. A stream of calibration data can be input from each camera in the array while it is moved around the scene but in close proximity to its static location (thus allowing each camera in the array to view overlapping parts of the static background of the scene). After the scene is captured by the static array of color cameras and the streams of sensor data generated thereby are input, the streams of sensor data can be analyzed using conventional methods to identify features in the scene, and these features can then be used to calibrate the cameras in the array and determine the intrinsic and extrinsic characteristics of each of the cameras by employing a conventional structure-from-motion method.
In yet another situation where one or more of the VCDs that are being used to capture the scene are moving VCDs (such as when the spatial location of a given VCD changes over time, or when controls on a given VCD are used to optically zoom in on the scene while it is being captured (which is commonly done during the recording of sporting events, among other things)), each of these moving VCDs can be calibrated and its intrinsic and extrinsic characteristics can be determined at each point in time during the scene capture by using a conventional background model to register and calibrate relevant individual images that were generated by the VCD. In yet another situation where the VCDs that are being used to capture the scene include a combination of static and moving VCDs, the VCDs can be calibrated and the intrinsic and extrinsic characteristics of each of the VCDs can be determined by employing conventional multistep calibration procedures.
In yet another situation where there is no temporal synchronization between the VCDs that are being used to capture the scene and the arrangement of the sensors can randomly change over time (such as when a plurality of mobile devices are held up by different users and the VCDs on these devices are used to capture the scene), the pipeline technique embodiments described herein will both spatially and temporally calibrate the streams of sensor data generated by the VCDs at all points in time during the scene capture before the streams are processed in the processing stage. In an exemplary embodiment of the pipeline technique this spatial and temporal calibration can be performed as follows. After the scene is captured and the streams of sensor data representing the scene are input, the streams of sensor data can be analyzed using conventional methods to separate the static and moving elements of the scene. The static elements of the scene can then be used to generate a background model. Additionally, the moving elements of the scene can be used to generate a global timeline that encompasses all of the VCDs, and each image in each stream of sensor data can be assigned a relative time. The intrinsic characteristics of each of the VCDs can be determined by using conventional methods to analyze each of the streams of sensor data.
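For illustration only, one conventional way to derive such a global timeline is to cross-correlate per-stream motion-energy signals. The sketch below, which assumes frames are available as grayscale arrays, separates moving elements with a standard background subtractor and estimates the relative frame offset between two unsynchronized streams.

```python
import cv2
import numpy as np

def motion_energy(frames):
    """Per-frame foreground motion energy for one unsynchronized VCD."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    return np.array([np.count_nonzero(subtractor.apply(f)) for f in frames])

def relative_offset(energy_a, energy_b):
    """Frame offset of stream B relative to stream A via cross-correlation."""
    corr = np.correlate(energy_a - energy_a.mean(),
                        energy_b - energy_b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(energy_b) - 1)
```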
In an embodiment of the pipeline technique described herein where the capture stage of the FVV processing pipeline is directly connected to the VCDs that are being used to capture the scene, the intrinsic characteristics of each of the VCDs can also be determined by reading appropriate hardware parameters directly from each of the VCDs. In another embodiment of the pipeline technique where the capture stage is not directly connected to the VCDs but rather the streams of sensor data are pre-recorded and then imported into the capture stage, the number of VCDs and various intrinsic properties of each of the VCDs can be determined by analyzing the streams of sensor data using conventional methods.
Referring again to
The set of current pipeline conditions can also include one or more conditions in the storage and distribution stage of the FVV processing pipeline such as the amount of storage space that is currently available to store the scene proxy, or the network transmission bandwidth that is currently available, or the like. The set of current pipeline conditions can also include one or more conditions in the user viewing experience stage of the pipeline such as the type of display device upon which the FVV either is, or will be, viewed, or the particular characteristics of the display device (e.g., one or more of its aspect ratio, or its pixel resolution, or its form factor, among others), or the level of data fidelity that is desired in the free viewpoint video, or the like.
Referring again to
It will thus be appreciated that the pipeline technique embodiments described herein can use a wide variety of 3D reconstruction methods in various combinations, where the particular types of 3D reconstruction methods that are being used depend upon various current conditions in the FVV processing pipeline. Accordingly and as will be described in more detail hereafter, the scene proxy will include one or more types of geometric proxy data, examples of which include, but are not limited to, the following. The scene proxy can include a stream of depth map images of the scene. The scene proxy can also include a stream of calibrated point cloud reconstructions of the scene. As is appreciated in the art of 3D reconstruction, these point cloud reconstructions are a low order geometric representation of the scene. The scene proxy can also include one or more high order geometric models, where these models can include one or more of planes, or billboards, or existing (i.e., previously created) generic object models (e.g., human body models, or human face models, or clothing models, or furniture models, or the like) which can be either modified, or animated, or both, among others. Such high order geometric models can be advantageously used to fill in occlusions that may exist in the captured scene. The scene proxy can also include other high fidelity proxies such as a stream of mesh models of the scene and a corresponding stream of texture maps which define texture data for each of the mesh models, among others. It will further be appreciated that since the particular 3D reconstruction methods that are used and the related manner in which the scene proxy is generated are based upon a periodic analysis (i.e., monitoring) of the various current conditions in the FVV processing pipeline, the 3D reconstruction methods that are used and the resulting types of data in the scene proxy can change over time based on changes in the pipeline conditions.
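For illustration only, one possible in-memory representation of a single time step of such a scene proxy is sketched below. Every field is optional because, as just described, the mix of geometric proxy data changes with the current pipeline conditions; the field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class SceneProxyFrame:
    """One time step of a scene proxy (illustrative structure only)."""
    depth_maps: list[np.ndarray] = field(default_factory=list)    # per-VCD depth images
    point_cloud: Optional[np.ndarray] = None                      # (N, 3) calibrated points
    mesh_vertices: Optional[np.ndarray] = None                    # (V, 3) mesh geometry
    mesh_faces: Optional[np.ndarray] = None                       # (F, 3) vertex indices
    texture_maps: list[np.ndarray] = field(default_factory=list)  # per-mesh texture data
    higher_order_models: list[str] = field(default_factory=list)  # e.g. billboards, body models
```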
Generally speaking, for the unidirectional and bidirectional live FVV implementations of the pipeline technique embodiments described herein, due to the fact that the capture, processing, storage and distribution, rendering, and user viewing experience stages of the FVV processing pipeline have to be completed within a very short period of time, the types of 3D reconstruction methods that can be used in these implementations are limited to high speed 3D reconstruction methods. By way of example but not limitation, in the unidirectional and bidirectional live FVV implementations of the pipeline technique embodiments the scene proxy that is generated will include a stream of calibrated point cloud reconstructions of the scene, and may also include one or more high order geometric models which can be either modified, or animated, or both. It will be appreciated that 3D reconstruction methods which can be implemented in hardware are also favored in the unidirectional and bidirectional live FVV implementations of the pipeline technique embodiments. The use of VCDs which generate infrared images of the scene is also favored in the unidirectional and bidirectional live FVV implementations of the pipeline technique embodiments.
For the asynchronous FVV implementation of the pipeline technique embodiments described herein, due to the fact that the capture and processing stages of the FVV processing pipeline operate asynchronously from the rendering and user viewing experience stages (and as such, there is effectively an unlimited amount of time available for the processing stage), more rigorous (and thus slower) 3D reconstruction methods can be used in this implementation. By way of example but not limitation, in the asynchronous FVV implementation of the pipeline technique embodiments the scene proxy that is generated can include both a stream of calibrated point cloud reconstructions of the scene, as well as one or more higher fidelity geometric proxies of the scene (such as when the calibrated point cloud reconstructions of the scene are used to generate a stream of mesh models of the scene, among other possibilities). The asynchronous FVV implementation of the pipeline technique embodiments also allows a plurality of 3D reconstruction steps to be used in sequence when generating the scene proxy. By way of example but not limitation, consider a situation where a stream of calibrated point cloud reconstructions of the scene has been generated, but there are some noisy or error prone stereo matches present in these reconstructions that extend beyond a human silhouette boundary in the scene. It will be appreciated that these noisy or error prone stereo matches can lead to the wrong texture data appearing in the mesh models of the scene, thus resulting in artifacts in the rendered scene. These artifacts can be eliminated by running a segmentation process to separate the foreground from the background, and then points outside of the human silhouette can be rejected as outliers.
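For illustration only, the outlier-rejection step just described could look like the following sketch: 3D points are projected into a camera and any point whose projection falls outside the foreground (human silhouette) mask is discarded. The camera model and mask are assumptions about how such data would be supplied.

```python
import numpy as np

def reject_outside_silhouette(points, K, R, t, silhouette):
    """Drop 3D points whose projection falls outside the foreground mask.

    points: (N, 3) world-space point cloud; K: 3x3 intrinsics; R, t: camera
    pose mapping world to camera; silhouette: binary (H, W) foreground mask
    produced by a foreground/background segmentation process.
    """
    cam = R @ points.T + t.reshape(3, 1)          # world -> camera coordinates
    uvw = K @ cam                                 # project into the image plane
    u = (uvw[0] / uvw[2]).round().astype(int)
    v = (uvw[1] / uvw[2]).round().astype(int)
    h, w = silhouette.shape
    inside = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points), dtype=bool)
    keep[inside] = silhouette[v[inside], u[inside]] > 0
    return points[keep]
```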
It will be appreciated that depending on the particular arrangement of sensors that is used to capture the scene, a given VCD can be in a plurality of narrow baseline stereo pairs of VCDs, and can also be in a plurality of wide baseline stereo pairs of VCDs. This serves to maximize the number of different depth map image streams that are created, which in turn serves to maximize the precision of the scene proxy.
Referring again to
In one implementation of the capture and processing stages of the FVV processing pipeline a circular arrangement of eight genlocked VCDs is used to capture a scene which includes one or more human beings, where each of the VCDs includes a combination of one infrared structured-light projector, two infrared video cameras, and one color camera. Accordingly, the VCDs each generate a different stream of video data which includes both a stereo pair of infrared image streams and a color image stream. As described heretofore, the pair of infrared image streams and the color image stream generated by each VCD are first used to generate different depth map image streams. The different depth map image streams are then merged into a stream of calibrated point cloud reconstructions of the scene. These point cloud reconstructions are then used to generate a stream of mesh models of the scene. A conventional view-dependent texture mapping method which accurately represents specular textures such as skin is then used to extract texture data from the color image stream generated by each VCD and map this texture data to the stream of mesh models of the scene.
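For illustration only, the merging of per-VCD depth map image streams into a calibrated point cloud can be sketched as a back-projection of each depth pixel into world space using that VCD's calibration, followed by concatenation of the per-VCD clouds. The pose convention (world-to-camera extrinsics) is an assumption.

```python
import numpy as np

def depth_map_to_world_points(depth, K, R, t):
    """Back-project one depth map into world-space 3D points.

    depth: (H, W) metric depth image; K: 3x3 intrinsics; R, t: extrinsics
    mapping world to camera, inverted here to return points in world space.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    cam_pts = rays * depth.ravel()                 # points in the camera frame
    world_pts = R.T @ (cam_pts - t.reshape(3, 1))  # invert the extrinsics
    valid = depth.ravel() > 0                      # ignore pixels with no depth
    return world_pts.T[valid]

# Merging into one calibrated point cloud reconstruction is then, e.g.:
# cloud = np.vstack([depth_map_to_world_points(d, K, R, t) for (d, K, R, t) in vcd_depths])
```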
In another implementation of the capture and processing stages of the FVV processing pipeline four genlocked visible light video cameras are used to capture a scene which includes one or more human beings, where the cameras are evenly placed around the scene. Accordingly, the cameras each generate a different stream of video data which includes a color image stream. An existing 3D geometric model of a human body can be used in the scene proxy as follows. Conventional methods can be used to kinematically articulate the model over time in order to fit (i.e., match) the model to the streams of video data generated by the cameras. The kinematically articulated model can then be colored as follows. A conventional view-dependent texture mapping method can be used to extract texture data from the color image stream generated by each camera and map this texture data to the kinematically articulated model.
In yet another implementation of the capture and processing stages of the FVV processing pipeline three unsynchronized visible light video cameras are used to capture a soccer game, where each of the cameras is moving and is located far from the game (e.g., rather than the spatial location of each of the cameras being fixed to a specified arrangement, each of the cameras is hand held by a different user who is capturing the game while they freely move about). Accordingly, the cameras each generate a different stream of video data which includes a stream of color images of the game. Articulated billboards can be used to represent the moving players in the scene proxy of the game as follows. For each stream of video data, conventional methods can be used to generate a segmentation mask for each body part of each player in the stream. Conventional methods can then be used to generate an articulated billboard model of each of the moving players in the game from the appropriate segmentation masks. The articulated billboard model can then be colored as just described.
This section provides a more detailed description of the rendering and user viewing experience stages of the FVV processing pipeline.
The set of current pipeline conditions can also include one or more conditions in the rendering and user viewing experience stages of the FVV processing pipeline such as the graphics processing capabilities/features that are available in the hardware of the computing device which is being used by a given end user to generate the current synthetic viewpoint of the scene, or the type of display device upon which the current synthetic viewpoint of the scene is being displayed, or the particular characteristics of the display device (described heretofore), or the number of degrees of viewpoint navigation freedom that are being provided to the end user, or the view frustum of the current synthetic viewpoint, or whether or not this computing device includes a natural user interface (and if so, the particular natural user interface modalities that are anticipated to be used by the end user), or the like. The set of current pipeline conditions can also include information which is generated by the end user in the user viewing experience stage that specifies desired changes to (i.e., controls) the current synthetic viewpoint of the scene. Such information can include one or more of viewpoint navigation information which is being output by this stage based upon the FVV navigation that is being performed by the end user, or temporal navigation information which may also be output by this stage based upon this FVV navigation. The set of current pipeline conditions can also include the type of FVV that is being presented to the end user.
Referring again to
It will thus be appreciated that the pipeline technique embodiments described herein can use a wide variety of image-based rendering methods in various combinations, where the particular types of image-based rendering methods that are being used depend upon various current conditions in the FVV processing pipeline. Unlike the rendering methods that are employed in conventional 3D computer graphics applications where the 3D geometry of the scene that is being rendered is known (i.e., the geometric primitives for the scene are known), the image-based rendering methods that are employed by the pipeline technique embodiments described herein can render novel views (i.e., synthetic viewpoints) of the scene directly from a collection of images in the scene proxy without having to know the scene geometry. An overview of exemplary image-based rendering methods which can be employed by the pipeline technique embodiments is provided hereafter.
The pipeline technique embodiments described herein support using any type of display device to view the FVV including, but not limited to, the very small form factor display devices used on conventional smart phones and other types of mobile devices, the small form factor display devices used on conventional tablet computers and netbook computers, the display devices used on conventional laptop computers and personal computers, conventional televisions and 3D televisions, conventional autostereoscopic 3D display devices, conventional head-mounted transparent display devices, and conventional wearable heads-up display devices such as those that are used in virtual reality applications. In a situation where the end user is using an autostereoscopic 3D display device to view the FVV, then the rendering stage of the FVV processing pipeline will simultaneously generate both left and right current synthetic viewpoints of the scene at an appropriate aspect ratio and resolution in order to create a stereoscopic effect for the end user. In another situation where the end user is using a conventional television to view the FVV, then the rendering stage will generate just a single current synthetic viewpoint.
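For illustration only, deriving the matched left and right synthetic viewpoints for a stereoscopic display can be sketched as offsetting a single view matrix by half the interocular distance along the camera's x axis. The 0.064 m default and the sign convention are assumptions.

```python
import numpy as np

def stereo_view_matrices(view, interocular=0.064):
    """Derive left/right eye view matrices from one 4x4 view matrix.

    Each eye is shifted by half the interocular distance along the camera-space
    x axis; the exact sign convention depends on the handedness of the renderer.
    """
    def shift(offset):
        m = view.copy()
        m[0, 3] += offset   # translate along the camera-space x axis
        return m
    return shift(+interocular / 2.0), shift(-interocular / 2.0)
```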
The pipeline technique embodiments described herein also support using any type of user interface modality to control the current viewpoint while viewing the FVV including, but not limited to, conventional keyboards, conventional pointing devices (such as a mouse, or a graphics tablet, or the like), and conventional natural user interface modalities (such as voice, or a touch-sensitive display screen, or the head tracking functionality that is integrated into wearable heads-up display devices, or a motion and location sensing device (such as the Microsoft Kinect™ (a trademark of Microsoft Corporation), among others), or the like). It will be appreciated that if the end user either is, or will be, using one or more natural user interface modalities while they are viewing the FVV, this can influence the spatiotemporal navigation capabilities that are provided to the end user. In other words, the FVV processing pipeline can process the streams of sensor data differently in order to enable different end user viewing experiences based on the particular type(s) of user interface modality that is anticipated to be used by the end user. By way of example but not limitation, in a situation where a given end user is using the wearable heads-up display device to view and navigate the FVV, then all six degrees of viewpoint navigation freedom could be provided to the end user. In the bidirectional live FVV implementation of the pipeline technique embodiments, if the end user at each physical location that is participating in a given video-conferencing/telepresence session is using the wearable heads-up display device to view and navigate the FVV, then parallax functionality can be implemented in order to provide each end user with an optimally realistic viewing experience when they control/change their viewpoint of the FVV using head movements; the pipeline can also provide for corrected conversational geometry between two end users, thus providing the appearance that both end users are looking directly at each other. In another situation where a given end user is using the motion and location sensing device to navigate the FVV, then the rendering stage can optimize the current synthetic viewpoint that is being displayed based on the end user's current spatial location in front of their display device. In this way, the end user's current spatial location can be mapped to the 3D geometry within the FVV.
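For illustration only, mapping a sensed head or body position to the current synthetic viewpoint could be as simple as the following sketch, in which the tracked position translates the virtual camera so that the end user's movement in front of the display produces matching parallax. The scale factor and matrix layout are assumptions.

```python
import numpy as np

def viewpoint_from_head_position(head_pos, base_view, scale=1.0):
    """Translate the current synthetic viewpoint by the sensed head position
    (e.g., from a motion and location sensing device) to produce parallax."""
    offset = np.asarray(head_pos, dtype=float) * scale
    view = base_view.copy()
    view[:3, 3] += offset   # move the virtual camera with the end user
    return view
```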
In some embodiments of the pipeline technique described herein, such as the asynchronous FVV implementation described herein, a producer or editor of the FVV may want to specify the particular types of viewpoint navigation that are possible at different times during the FVV. By way of example but not limitation, in one scene a movie director may want to confine the end user's viewpoint navigation to a limited area of the scene or a specific axis, but in another scene the director may want to allow the end user to freely navigate their viewpoint throughout the entire area of the scene.
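For illustration only, such producer-authored navigation limits could be enforced by clamping each requested viewpoint to an allowed region, as sketched below; collapsing one axis of the bounds to a single value confines navigation to a plane or axis. The bounds format is an assumption.

```python
import numpy as np

def constrain_viewpoint(position, bounds):
    """Clamp a requested viewpoint position to the navigable region authored
    for the current scene.

    bounds: ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    return np.clip(np.asarray(position, dtype=float), lo, hi)
```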
As described heretofore, the current synthetic viewpoint of the scene is generated using one or more image-based rendering methods which are selected based upon a periodic analysis of the aforementioned set of current pipeline conditions. Accordingly, the particular image-based rendering methods that are used can change over time based upon changes in the current pipeline conditions. It will thus be appreciated that in one situation where the scene has a low degree of complexity and the arrangement of sensors which either is being, or was, used to capture the scene are located close to the scene, just a single image-based rendering method may be used to generate the current synthetic viewpoint of the scene. In another situation where the scene has a high degree of complexity and the arrangement of sensors which either is being, or was, used to capture the scene are located far from the scene, a plurality of image-based rendering methods may be used to generate the current synthetic viewpoint of the scene depending on the location of the current viewpoint relative to the scene and the particular types of geometric proxy data that are in the scene proxy.
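For illustration only, and reusing the hypothetical SceneProxyFrame structure from the earlier sketch, the condition-driven selection of image-based rendering methods might look like the following; the thresholds and method names are assumptions chosen to mirror the examples given earlier in this description.

```python
def select_rendering_methods(proxy_frame, num_vcds, viewpoint_near_capture_axis):
    """Pick image-based rendering methods from the proxy data that is present
    and from the relationship of the current viewpoint to the capture rig."""
    methods = []
    if proxy_frame.mesh_vertices is not None and proxy_frame.texture_maps:
        methods.append("view_dependent_texture_mapping")
    elif proxy_frame.point_cloud is not None:
        methods.append("point_splatting")
    if num_vcds <= 2 and viewpoint_near_capture_axis:
        methods.append("billboards")          # few VCDs, viewpoint near a capture axis
    elif num_vcds > 50:
        methods.append("light_field_rendering")  # very dense capture arrangements
    return methods or ["view_interpolation"]
```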
As also exemplified in
On the left side 706 of the continuum 700 exemplified in
In the middle 704 of the continuum 700 exemplified in
On the right side 702 of the continuum 700 exemplified in
While the pipeline technique has been described by specific reference to embodiments thereof, it is understood that variations and modifications thereof can be made without departing from the true spirit and scope of the pipeline technique. By way of example but not limitation, rather than the capture and processing stages of the FVV processing pipeline being implemented on one computing device (or a collection of computing devices), and the rendering and user viewing experience stages of the pipeline being implemented on another computing device(s) which is being used by an end user(s) to view the FVV, an alternate embodiment of the pipeline technique described herein is possible where the capture, processing, rendering and user viewing experience stages of the pipeline are implemented on a single computing device (i.e., the FVV can be rendered and viewed on the same computing device that is used to input/calibrate the streams of sensor data and generate the scene proxy). Furthermore, in addition to the sensors being any type of VCD, or any type of audio capture device, or any combination thereof as described heretofore, a given sensor can also be a wearable body suit that provides a stream of depth data.
It is also noted that any or all of the aforementioned embodiments can be used in any combination desired to form additional hybrid embodiments. Although the pipeline technique embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described heretofore. Rather, the specific features and acts described heretofore are disclosed as example forms of implementing the claims.
The pipeline technique embodiments described herein are operational within numerous types of general purpose or special purpose computing system environments or configurations.
For example,
To allow a device to implement the pipeline technique embodiments described herein, the device should have a sufficient computational capability and system memory to enable basic computational operations. In particular, as illustrated by
In addition, the simplified computing device 1200 of
The simplified computing device 1200 of
Storage of information such as computer-readable or computer-executable instructions, data structures, program modules, and the like, can also be accomplished by using any of a variety of the aforementioned communication media to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism. Note that the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, radio frequency (RF), infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
Furthermore, software, programs, and/or computer program products embodying some or all of the various embodiments of the pipeline technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
Finally, the pipeline technique embodiments described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, and the like, that perform particular tasks or implement particular abstract data types. The pipeline technique embodiments may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices. Additionally, the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
This application claims the benefit of and priority to provisional U.S. patent application Ser. No. 61/653,983 filed May 31, 2012.