Claims
- 1. A method for enhancing a broadcast of an event, comprising:
generating a synthetic scene based on audio visual (A/V) data and supplemental data received in the broadcast; generating a depth map to store depth information for the synthetic scene; and integrating the synthetic scene into the broadcast using the depth map.
- 2. The method of claim 1, wherein the supplemental data comprises sensing data from a plurality of sensors placed at strategic points to render realistic synthetic movement of a synthetic object in the synthetic scene.
- 3. The method of claim 1, wherein the supplemental data comprises position and orientation data of an object from a global positioning system (GPS) receiver, the position and orientation data indicating the position and orientation of the object in a three dimensional (3D) space.
- 4. The method of claim 3, wherein the GPS receiver further provides environmental conditions of the event to synchronize with the broadcast.
- 5. The method of claim 1, wherein the supplemental data comprises camera tracking data from a tracked camera located at the event, the camera tracking data including positions of the tracked camera in a three dimensional (3D) space to generate a virtual camera corresponding to the tracked camera.
- 6. The method of claim 5, further comprising registering the camera tracking data with the broadcast to render a virtual version of an object in the broadcast.
- 7. The method of claim 1, further comprising processing, by a signal processor, the A/V data and the supplemental data to generate the synthetic scene using the depth map.
- 8. The method of claim 7, further comprising selecting a desired synthetic camera view in response to receiving an input through a user interface.
- 9. The method of claim 1, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene; setting a field of view of the virtual camera to a corresponding field of view of the tracked camera; positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object; and extracting depth information of the synthetic tracked object to generate the depth map.
- 10. The method of claim 9, further comprising repositioning the virtual camera in response to position changes of the tracked camera.
- 11. The method of claim 9, further comprising refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
- 12. The method of claim 9, further comprising:
reconstructing a virtual view of a model, using the position information and camera tracking data; and extracting the depth information from the virtual view to generate the depth map.
- 13. A method for enhancing a broadcast of an event, comprising:
collecting, at a broadcast server, audio visual (A/V) data and supplemental data from the event; transmitting the A/V data and the supplemental data to a broadcast client over a network; generating, at the broadcast client, a synthetic scene based on the A/V data and the supplemental data; generating a depth map to store depth information for the synthetic scene; and integrating the synthetic scene into the broadcast using the depth map.
- 14. The method of claim 13, further comprising:
encoding, at the broadcast server, the A/V data and the supplemental data as Moving Picture Experts Group (MPEG) compatible data; and decoding, at the broadcast client, the MPEG compatible data to retrieve the A/V data and the supplemental data.
- 15. The method of claim 13, wherein the supplemental data comprises:
sensing data from a plurality of sensors placed at strategic points of the event; position and orientation data from a global positioning system (GPS) receiver; and camera tracking data from a tracked camera.
- 16. The method of claim 13, further comprising selecting a desired synthetic camera view in response to receiving an input at the broadcast client.
- 17. The method of claim 13, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene; setting a field of view of the virtual camera to a corresponding field of view of the tracked camera; positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object; and extracting depth information of the synthetic tracked object to generate the depth map.
- 18. A system for enhancing a broadcast of an event, comprising:
a video signal unit coupled to provide audio visual (A/V) data from the event; a supplemental data unit coupled to provide supplemental data from the event; a depth map coupled to provide depth information; and a processing unit configured to process the A/V data and the supplemental data to generate a synthetic scene, the processing unit further configured to integrate the synthetic scene into the broadcast using the depth map.
- 19. The system of claim 18, further comprising a viewer control unit to receive a user selection of the synthetic scene.
- 20. The system of claim 18, further comprising:
a plurality of sensors placed at strategic points of a synthetic object to provide sensing data to render realistic synthetic movement of the synthetic object in the synthetic scene; a global positioning system (GPS) receiver to provide position and orientation data of the synthetic object, the position and orientation data indicating the position and orientation of the synthetic object in a three dimensional (3D) space; and a tracked camera located at the event to provide camera tracking data, the camera tracking data including positions of the tracked camera in a three dimensional (3D) space to generate a virtual camera corresponding to the tracked camera.
- 21. A system for enhancing a broadcast of an event, comprising:
a broadcast server configured to receive audio visual (A/V) data and supplemental data; and a broadcast client configured to receive the A/V data and the supplemental data transmitted from the broadcast server over a network, the broadcast client communicating with the broadcast server over the network, wherein the broadcast client:
generates a synthetic scene based on the A/V data and the supplemental data; generates a depth map to store depth information for the synthetic scene; and integrates the synthetic scene into the broadcast using the depth map.
- 22. The system of claim 21, further comprising:
a display device coupled to the broadcast client to display the integrated broadcast; and a viewer control unit coupled to the broadcast client to allow a user to select the synthetic scene to be displayed.
- 23. The system of claim 21, wherein the broadcast server encodes the A/V data and the supplemental data as Moving Picture Experts Group (MPEG) compatible data, and wherein the broadcast client decodes the MPEG compatible data to retrieve the A/V data and the supplemental data.
- 24. The system of claim 21, wherein the supplemental data comprises:
sensing data from a plurality of sensors placed at strategic points of a synthetic object at the event; position and orientation data of the synthetic object from a global positioning system (GPS) receiver; and camera tracking data from a tracked camera.
- 25. A machine-readable medium having executable code to cause a machine to perform a method for enhancing a broadcast of an event, the method comprising:
generating a synthetic scene based on audio visual (A/V) data and supplemental data received in the broadcast; generating a depth map to store depth information for the synthetic scene; and integrating the synthetic scene into the broadcast using the depth map.
- 26. The machine-readable medium of claim 25, wherein the method further comprises selecting a desired synthetic camera view in response to receiving an input through a user interface.
- 27. The machine-readable medium of claim 25, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene; setting a field of view of the virtual camera to a corresponding field of view of the tracked camera; positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object; and extracting depth information of the synthetic tracked object to generate the depth map.
- 28. The machine-readable medium of claim 27, wherein the method further comprises:
reconstructing a virtual view of a model, using the position information and camera tracking data; and extracting the depth information from the virtual view to generate the depth map.
- 29. A machine-readable medium having executable code to cause a machine to perform a method for enhancing a broadcast of an event, the method comprising:
collecting, at a broadcast server, audio visual (A/V) data and supplemental data from the event; transmitting the A/V data and the supplemental data to a broadcast client over a network; generating, at the broadcast client, a synthetic scene based on the A/V data and the supplemental data; generating a depth map to store depth information for the synthetic scene; and integrating the synthetic scene into the broadcast using the depth map.
- 30. The machine-readable medium of claim 29, wherein the method further comprises:
encoding, at the broadcast server, the A/V data and the supplemental data as Moving Picture Experts Group (MPEG) compatible data; and decoding, at the broadcast client, the MPEG compatible data to retrieve the A/V data and the supplemental data.
- 31. The machine-readable medium of claim 29, wherein the supplemental data comprises:
sensing data from a plurality of sensors placed at strategic points of the event; position and orientation data from a global positioning system (GPS) receiver; and camera tracking data from a tracked camera.
- 32. The machine-readable medium of claim 29, wherein the method further comprises selecting a desired synthetic camera view in response to receiving an input at the broadcast client.
- 33. The machine-readable medium of claim 29, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene; setting a field of view of the virtual camera to a corresponding field of view of the tracked camera; positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object; and extracting depth information of the synthetic tracked object to generate the depth map.
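By way of illustration only, a minimal sketch of the integration recited in claim 1 (and mirrored in claims 13, 18, 21, 25, and 29): the synthetic scene is rendered together with a per-pixel depth map, and a synthetic pixel replaces the broadcast pixel only where it lies nearer to the camera. The function name, the NumPy representation, and the assumption that a depth estimate is also available for the broadcast frame are hypothetical and do not come from the specification.

```python
import numpy as np

def integrate_synthetic_scene(broadcast_frame: np.ndarray,   # H x W x 3 live video
                              broadcast_depth: np.ndarray,    # H x W depth of the live scene
                              synthetic_frame: np.ndarray,    # H x W x 3 rendered synthetic scene
                              synthetic_depth: np.ndarray) -> np.ndarray:
    """Depth-keyed composite: a synthetic pixel replaces the broadcast pixel
    only where the synthetic scene is nearer to the camera (smaller depth)."""
    out = broadcast_frame.copy()
    nearer = synthetic_depth < broadcast_depth    # boolean occlusion mask, H x W
    out[nearer] = synthetic_frame[nearer]
    return out
```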
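The supplemental data recited in claims 2 through 5, 15, 20, and 24 (sensor readings, GPS position and orientation, and camera tracking data) might be carried in structures along the following lines; every name here is a hypothetical placeholder, not a structure defined by the specification.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorSample:
    """Sensing data from one of the sensors placed at strategic points."""
    sensor_id: str
    value: float
    timestamp: float

@dataclass
class GpsFix:
    """Position and orientation of a tracked object in 3D space."""
    position: Tuple[float, float, float]      # x, y, z
    orientation: Tuple[float, float, float]   # roll, pitch, yaw in degrees

@dataclass
class CameraTrack:
    """Pose and optics of a tracked broadcast camera."""
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float]
    field_of_view_deg: float

@dataclass
class SupplementalData:
    sensors: List[SensorSample] = field(default_factory=list)
    gps: List[GpsFix] = field(default_factory=list)
    cameras: List[CameraTrack] = field(default_factory=list)
```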
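A sketch of the depth-map generation recited in claims 9, 17, 27, and 33, assuming a simple pinhole projection: the virtual camera is placed at the tracked camera's position, its field of view is matched to the tracked camera, the synthetic object is positioned from its tracked 3D position, and the nearest projected distance per pixel is stored as the depth map. The pinhole model, the +z viewing direction, and all parameter names are assumptions.

```python
import numpy as np

def depth_map_from_tracking(cam_position, fov_deg,             # tracked camera pose and optics
                            object_position, object_vertices,  # tracked object pose and model
                            width=1280, height=720):
    """Render depth only: virtual camera mirrors the tracked camera, the
    synthetic tracked object is placed from its position data, and the
    nearest distance seen at each pixel becomes the depth map."""
    depth = np.full((height, width), np.inf)

    # 1. Establish the virtual camera at the tracked camera's position
    #    (a full version would also apply the tracked orientation).
    cam_position = np.asarray(cam_position, dtype=float)

    # 2. Set the virtual camera's field of view to the tracked camera's.
    focal = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

    # 3. Position the synthetic tracked object from its reported 3D position.
    verts = np.asarray(object_vertices, dtype=float) + np.asarray(object_position, dtype=float)

    # 4. Project each vertex and keep the nearest depth per pixel.
    rel = verts - cam_position                  # camera assumed to look along +z
    rel = rel[rel[:, 2] > 0]
    u = (focal * rel[:, 0] / rel[:, 2] + width / 2.0).astype(int)
    v = (focal * rel[:, 1] / rel[:, 2] + height / 2.0).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for x, y, z in zip(u[ok], v[ok], rel[ok, 2]):
        depth[y, x] = min(depth[y, x], z)
    return depth
```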
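Claim 11's refinement, distorting the depth map's grid coordinates based on characteristics of the tracked camera, could for example take the form of a one-term radial lens-distortion resampling; the distortion model and coefficient below are illustrative assumptions only, not the model of the specification.

```python
import numpy as np

def distort_depth_grid(depth: np.ndarray, k1: float) -> np.ndarray:
    """Warp the depth map's grid coordinates with a simple radial model so the
    synthetic depth lines up with the lens-distorted broadcast image."""
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Normalised coordinates about the image centre.
    xn = (xx - w / 2.0) / (w / 2.0)
    yn = (yy - h / 2.0) / (h / 2.0)
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2                        # one-term radial distortion
    xs = np.clip(xn * scale * (w / 2.0) + w / 2.0, 0, w - 1).astype(int)
    ys = np.clip(yn * scale * (h / 2.0) + h / 2.0, 0, h - 1).astype(int)
    return depth[ys, xs]                          # nearest-neighbour resample
```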
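For the server/client arrangement of claims 13, 14, 21, and 23, a rough sketch of collecting data at a broadcast server, transmitting it over a network, and decoding it at a broadcast client. The length-prefixed socket framing is an assumption, and pickle merely stands in for the MPEG-compatible encoding and decoding named in claims 14 and 23.

```python
import pickle
import socket

def broadcast_server(av_frames, supplemental, host="0.0.0.0", port=5000):
    """Collect A/V and supplemental data and stream it to a broadcast client."""
    def encode(payload):                 # placeholder for MPEG-compatible encoding
        return pickle.dumps(payload)

    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            for frame, extra in zip(av_frames, supplemental):
                packet = encode({"av": frame, "supplemental": extra})
                conn.sendall(len(packet).to_bytes(4, "big") + packet)

def broadcast_client(host="127.0.0.1", port=5000):
    """Receive the stream, decode it, and hand each payload to the scene pipeline."""
    def decode(blob):                    # placeholder for MPEG-compatible decoding
        return pickle.loads(blob)

    with socket.create_connection((host, port)) as conn:
        while True:
            header = conn.recv(4)
            if len(header) < 4:
                break
            size = int.from_bytes(header, "big")
            blob = b""
            while len(blob) < size:
                chunk = conn.recv(size - len(blob))
                if not chunk:
                    return
                blob += chunk
            payload = decode(blob)
            # From payload["av"] and payload["supplemental"] the client would
            # generate the synthetic scene and its depth map, then integrate
            # the scene into the displayed broadcast, as in the earlier sketches.
```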
RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 60/311,513, filed Aug. 9, 2001, which is hereby incorporated by reference.
[0002] This application is related to the pending application, application Ser. No. 09/943,044, filed Aug. 29, 2001, entitled “Enhancing Broadcasts with Synthetic Camera View”, which has been assigned to the common assignee of this application.
[0003] This application is also related to the pending application, application Ser. No. 09/942,806, entitled “Extracting a Depth Map From Known Camera and Model Tracking Data”, filed Aug. 29, 2001, which has been assigned to the common assignee of this application.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60/311,513 | Aug 2001 | US |