Claims
- 1. A process for immersing a user in a three dimensional virtual reality environment room, comprising the steps of:
providing a head mounted display worn by a user, wherein said head mounted display comprises a video camera and a video display;
providing a plurality of target markers;
distributing said plurality of target markers within said room, wherein each of said plurality of target markers is distinct from all other target markers in said plurality of target markers and distinct from rotated versions of itself;
receiving a video signal of a portion of said room from said video camera;
identifying at least one target marker in said video signal;
calculating user position within said room using relative positioning of the identified target marker(s);
streaming three dimensional video content to the user through said video display; and
dynamically repositioning the user's perspective viewpoint within said three dimensional video content by using said calculated user position to adjust position and viewing angle within said three dimensional video content.
- 2. The process of claim 1, further comprising the step of:
providing target marker calibration means for automatically calibrating relative positions of said plurality of target markers within said room.
- 3. The process of claim 2, wherein said target marker calibration means detects said plurality of target markers in said room using said video signal, and wherein said target marker calibration means identifies pairs of target markers within said room.
- 4. The process of claim 3, wherein said target marker calibration means determines positioning of each target in a target pair relative to said video camera, wherein said target marker calibration means calculates positioning of each target in a target pair relative to each other, and wherein said target marker calibration means stores the relative positioning of the target pair in a list of relative target transforms.
- 5. The process of claim 4, wherein said calculating user position step determines the relative positioning of identified target markers using said relative target transforms.
- 6. The process of claim 5, wherein said calculating user position step detects the effects of viewing angles and gives higher weight to target markers that are detected at more reliable angles.
- 7. The process of claim 1, wherein said receiving step receives said video signal via a wireless link.
- 8. The process of claim 1, wherein said streaming step streams said three dimensional video content to the user through said video display via a wireless link.
- 9. The process of claim 1, wherein said streaming step overlays said three dimensional video content onto said video signal and sends a resulting combined signal to said video display.
- 10. The process of claim 1, wherein said plurality of target markers is of a sufficient number that at least one target marker is always visible in said video signal.
- 11. An apparatus for immersing a user in a three dimensional virtual reality environment room, comprising:
a head mounted display worn by a user, wherein said head mounted display comprises a video camera and a video display;
a plurality of target markers;
means for distributing said plurality of target markers within said room, wherein each of said plurality of target markers is distinct from all other target markers in said plurality of target markers and distinct from rotated versions of itself;
a module for receiving a video signal of a portion of said room from said video camera;
a module for identifying at least one target marker in said video signal;
a module for calculating user position within said room using relative positioning of the identified target marker(s);
a module for streaming three dimensional video content to the user through said video display; and
a module for dynamically repositioning the user's perspective viewpoint within said three dimensional video content by using said calculated user position to adjust position and viewing angle within said three dimensional video content.
- 12. The apparatus of claim 11, further comprising:
target marker calibration means for automatically calibrating relative positions of said plurality of target markers within said room.
- 13. The apparatus of claim 12, wherein said target marker calibration means detects said plurality of target markers in said room using said video signal, and wherein said target marker calibration means identifies pairs of target markers within said room.
- 14. The apparatus of claim 13, wherein said target marker calibration means determines positioning of each target in a target pair relative to said video camera, wherein said target marker calibration means calculates positioning of each target in a target pair relative to each other, and wherein said target marker calibration means stores the relative positioning of the target pair in a list of relative target transforms.
- 15. The apparatus of claim 14, wherein said calculating user position module determines the relative positioning of identified target markers using said relative target transforms.
- 16. The apparatus of claim 15, wherein said calculating user position module detects the effects of viewing angles and gives higher weight to target markers that are detected at more reliable angles.
- 17. The apparatus of claim 11, wherein said receiving module receives said video signal via a wireless link.
- 18. The apparatus of claim 11, wherein said streaming module streams said three dimensional video content to the user through said video display via a wireless link.
- 19. The apparatus of claim 11, wherein said streaming module overlays said three dimensional video content onto said video signal and sends a resulting combined signal to said video display.
- 20. The apparatus of claim 11, wherein said plurality of target markers is of a sufficient number that at least one target marker is always visible in said video signal.
- 21. A process for tracking a video camera in a three dimensional virtual reality environment room, comprising the steps of:
providing a video camera movable within said room;
providing a plurality of target markers;
distributing said plurality of target markers within said room, wherein each of said plurality of target markers is distinct from all other target markers in said plurality of target markers and distinct from rotated versions of itself;
receiving a video signal of a portion of said room from said video camera;
identifying at least one target marker in said video signal; and
calculating video camera position within said room using relative positioning of the identified target marker(s).
- 22. The process of claim 21, further comprising the step of:
providing target marker calibration means for automatically calibrating relative positions of said plurality of target markers within said room.
- 23. The process of claim 22, wherein said target marker calibration means detects said plurality of target markers in said room using said video signal, and wherein said target marker calibration means identifies pairs of target markers within said room.
- 24. The process of claim 23, wherein said target marker calibration means determines positioning of each target in a target pair relative to said video camera, wherein said target marker calibration means calculates positioning of each target in a target pair relative to each other, and wherein said target marker calibration means stores the relative positioning of the target pair in a list of relative target transforms.
- 25. The process of claim 24, wherein said calculating video camera position step determines the relative positioning of identified target markers using said relative target transforms.
- 26. The process of claim 25, wherein said calculating video camera position step detects the effects of viewing angles and gives higher weight to target markers that are detected at more reliable angles.
- 27. The process of claim 21, wherein said receiving step receives said video signal via a wireless link.
- 28. The process of claim 21, wherein said plurality of target markers is of a sufficient number that at least one target marker is always visible in said video signal.
- 29. An apparatus for tracking a video camera in a three dimensional virtual reality environment room, comprising:
a video camera movable within said room;
a plurality of target markers;
means for distributing said plurality of target markers within said room, wherein each of said plurality of target markers is distinct from all other target markers in said plurality of target markers and distinct from rotated versions of itself;
a module for receiving a video signal of a portion of said room from said video camera;
a module for identifying at least one target marker in said video signal; and
a module for calculating video camera position within said room using relative positioning of the identified target marker(s).
- 30. The apparatus of claim 29, further comprising:
target marker calibration means for automatically calibrating relative positions of said plurality of target markers within said room.
- 31. The apparatus of claim 30, wherein said target marker calibration means detects said plurality of target markers in said room using said video signal, and wherein said target marker calibration means identifies pairs of target markers within said room.
- 32. The apparatus of claim 31, wherein said target marker calibration means determines positioning of each target in a target pair relative to said video camera, wherein said target marker calibration means calculates positioning of each target in a target pair relative to each other, and wherein said target marker calibration means stores the relative positioning of the target pair in a list of relative target transforms.
- 33. The apparatus of claim 32, wherein said calculating video camera position module determines the relative positioning of identified target markers using said relative target transforms.
- 34. The apparatus of claim 33, wherein said calculating video camera position module detects the effects of viewing angles and gives higher weight to target markers that are detected at more reliable angles.
- 35. The apparatus of claim 29, wherein said receiving module receives said video signal via a wireless link.
- 36. The apparatus of claim 29, wherein said plurality of target markers is of a sufficient number that at least one target marker is always visible in said video signal.
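Outside the claim language itself, the position-calculation step recited in claims 1 and 21 can be sketched with homogeneous 4x4 transforms: if a target marker's pose in room coordinates is known (from calibration) and its pose relative to the camera is recovered from the video signal, the camera's room-frame pose follows by composition. This is an illustrative sketch only, not the claimed implementation; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def camera_pose_in_room(T_room_marker, T_cam_marker):
    """Return the camera's pose in room coordinates as a 4x4 homogeneous
    transform, given the marker's calibrated pose in the room and the
    marker's pose as detected relative to the camera."""
    # room->camera = (room->marker) composed with inverse of (camera->marker)
    return T_room_marker @ np.linalg.inv(T_cam_marker)
```

The translation column of the result gives the user's position within the room; the rotation block gives the viewing angle used to reposition the perspective viewpoint in the streamed content.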
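The calibration means of claims 2 through 4 (and their apparatus counterparts) identifies pairs of co-visible markers, computes each pair's relative positioning via the camera, and stores the result in a list of relative target transforms. A minimal sketch of that bookkeeping, with hypothetical names and a NumPy representation assumed for the transforms:

```python
import numpy as np

def relative_target_transform(T_cam_a, T_cam_b):
    """Pose of target b expressed in target a's coordinate frame,
    computed from a single frame in which both targets were detected."""
    return np.linalg.inv(T_cam_a) @ T_cam_b

def calibrate_pairs(detections_per_frame):
    """Accumulate a list of relative target transforms, one entry per
    marker pair co-visible in a frame.  `detections_per_frame` is a list
    of dicts mapping marker id -> 4x4 camera-to-marker transform."""
    relative_transforms = []
    for frame in detections_per_frame:
        ids = sorted(frame)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                relative_transforms.append(
                    (a, b, relative_target_transform(frame[a], frame[b])))
    return relative_transforms
```

Chaining these stored transforms lets the position-calculation step place every identified marker in a common frame even when only one calibrated marker pose is known directly.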
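Claims 6, 16, 26, and 34 recite weighting target markers by the reliability of their viewing angles. One plausible reading, offered here purely as an assumption-laden sketch, is to weight each detection by the cosine of the angle between the marker's surface normal and the camera's line of sight (near edge-on views yield the least reliable poses), then fuse the per-marker position estimates by weighted average:

```python
import numpy as np

def view_angle_weight(T_cam_marker):
    """Weight a detection by how face-on the marker appears: the cosine of
    the angle between the marker's normal (its local +z axis) and the
    camera-to-marker line of sight.  Edge-on views approach weight zero."""
    normal = T_cam_marker[:3, 2]            # marker z-axis in camera frame
    sight = T_cam_marker[:3, 3]
    sight = sight / np.linalg.norm(sight)   # unit camera-to-marker direction
    return max(0.0, float(-normal @ sight)) # face-on -> 1, edge-on -> 0

def fuse_positions(estimates):
    """Weighted average of (position, weight) camera-position estimates."""
    total = sum(w for _, w in estimates)
    return sum(w * p for p, w in estimates) / total
```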
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation-in-part of U.S. patent application Ser. No. 10/060,008, filed on 28 Jan. 2002, which claims benefit of U.S. Provisional Patent Application Ser. Nos. 60/264,604 and 60/264,596, both filed on 26 Jan. 2001, and further claims benefit of U.S. Provisional Patent Application Ser. No. 60/398,896, filed on 26 Jul. 2002.
Provisional Applications (3)

| Number     | Date     | Country |
|------------|----------|---------|
| 60/264,604 | Jan 2001 | US      |
| 60/264,596 | Jan 2001 | US      |
| 60/398,896 | Jul 2002 | US      |
Continuation in Parts (1)

| Relation | Number     | Date     | Country |
|----------|------------|----------|---------|
| Parent   | 10/060,008 | Jan 2002 | US      |
| Child    | 10/628,951 | Jul 2003 | US      |