Claims
- 1. An electronic device that produces an enhanced, spatial, television-like viewing experience, utilizing conventional video devices for the provision of the source media.
- 2. An electronic device that produces graphical imagery depicting a panoramic (360 degree horizontal view) image such that this overall panoramic image (“Image Sphere”) is composed of a number of smaller image subsections (“Pages”).
- 3. An electronic device that produces graphical imagery as described in claims 1-2, such that the overall Image Sphere is updated on a Page-by-Page basis in real-time, utilizing conventional video devices for the provision of the source media (see the Image Sphere sketch following the claims).
- 4. An electronic device as described in claims 1-3, in which the Page order is determined by additional information present in the source media.
- 5. An electronic device as described in claims 1-4, which allows the viewer to view prerecorded audiovisual media in a wide-screen format such that the width of the “virtual” screen can extend to a full 360 degrees horizontally and up to 180 degrees vertically.
- 6. An electronic device as described in claims 1-5, which allows the viewer to view prerecorded audiovisual material on a conventional screen based display device (TV, projection TV, computer screen) such that the display device represents a viewport or subset of the full 360 degree panoramic image.
- 7. An entertainment system consisting of: a range of alternative media provision devices (such as a VCR, DVD player, satellite receiver, etc.); an electronic device (the VTV processor) which generates panoramic video imagery from video data provided by the aforementioned devices; and a display device, such as a conventional flat-screen television, a helmet-mounted display (HMD), or other virtual reality display device, fitted with an optional single-view or panoramic video capture device, in conjunction with a wireless data communication network to communicate this video information between the HMD and the VTV processor, as shown in FIGS. 1-3.
- 8. A new audiovisual standard (the virtual television or VTV standard) consisting of a modification to the existing television standard that allows for a variety of different “Frames”, such that these Frames may contain graphical data, sound, or control information while still maintaining compatibility with existing television standards (NTSC, PAL, HDTV, etc.).
- 9. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital and/or analog coded data defining control parameters and image manipulation data for the VTV graphics processor (see the scan-line decoding sketch following the claims).
- 10. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital and analog coded data (hybrid coded data) which provides the information to generate 4 or more audio tracks in real-time.
- 11. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital or analog coded data which provides the absolute orientation (azimuth, or azimuth and elevation) of the camera that filmed the imagery.
- 12. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital or analog coded data which provides the relative placement position of the current Page (video field or frame) within the 360 degree horizontal by X degree vertical “Image Sphere”.
- 13. A new audiovisual standard as described in claims 8 and 10, which includes, within one or more scan lines of a standard video image, additional digital or analog coded data which specifies the number of audio tracks, the audio sampling rate, and the track synchronization, allowing the VTV graphics processor to decode the audio information described in claim 10 into spatial (position- and orientation-sensitive) sound.
- 14. A new audiovisual standard based around the concept of “Image Spheres”, which are 360 degree horizontal by X degree vertical cylinders or truncated spheres, such that each Image Sphere consists of a number of subsections or “Pages”.
- 15. A new audiovisual standard as described in claim 8 which makes possible the encoding of multi-track audio for use with standard video storage and transmission systems, such that this information can subsequently be decoded by specific hardware (the VTV processor) to produce a left and a right audio channel (for headphones or speaker systems), such that the audio channels are mixed (mathematically combined) in such a way as to produce spatially correct audio for the left and right ears of the user. The parameters affecting this mathematical combination are primarily azimuth (in the case of a 4-track audio system) and both azimuth and elevation (in the case of an 8-track audio system); a mixing sketch follows the claims.
- 16. An electronic device as described in claims 1-6, which allows the viewer to view prerecorded audiovisual material using a helmet-mounted display (HMD) or other virtual-reality-type display device, such that the display device represents a viewport or subset of the full 360 degree horizontal panoramic image.
- 17. An electronic device as described in claims 1-6 and 16, such that the horizontal direction of view within the 360 degree horizontal by X degree vertical “virtual environment” is dynamically controllable by the user at runtime (while the images are being displayed).
- 18. An electronic device as described in claims 1-6 and 16-17, such that both the azimuth and elevation of the viewport within the 360 degree horizontal by X degree vertical “virtual environment” are dynamically controllable by the user at runtime (while the images are being displayed); see the viewport sketch following the claims.
- 19. An electronic device as described in claims 1-6 and 16-18, in which the direction of view is automatically controlled by virtue of a tracking device which continuously measures the azimuth, or both the azimuth and elevation, of the viewer's head.
- 20. An electronic device as described in claims 1-6 and 16-19, in which the virtual camera position within the “virtual environment” (i.e., the viewpoint of the viewer) is dynamically controllable by the user at runtime (while the images are being displayed).
- 21. An electronic device as described in claims 1-6 and 16-20, in which the virtual camera position within the “virtual environment” (i.e., the viewpoint of the viewer) is automatically controlled by virtue of a tracking device which continuously measures the physical position of the viewer's head in “real world coordinates”.
- 22. An electronic device in which orientation-sensitive audio is provided in real-time, controlled by the direction (azimuth and elevation) of the viewer's head.
- 23. An electronic device as described in claims 1-6 and 16-21, in which orientation-sensitive audio is also provided in real-time, controlled by the direction of the viewport within the 360 degree Image Sphere (“virtual environment”).
- 24. An electronic device as described in claims 1-6 and 16-21, in which orientation- and position-sensitive audio is also provided in real-time, controlled by the direction of the viewport within the 360 degree Image Sphere and by the virtual position within the “virtual environment”.
- 25. An electronic device as described in claims 1-6 and 16-24, which is capable of displaying prerecorded computer graphic or live imagery in a 360 degree Image Sphere format to produce a virtual reality experience capable of being provided from standard video storage and transmission devices (VCR, DVD, satellite transmission, etc.).
- 26. An electronic device as described in claims 1-6 and 16-25, which is capable of combining prerecorded computer graphic or live imagery with “real world imagery”, captured utilizing a simple single-view or panoramic camera system, in real-time to produce an augmented reality experience.
- 27. An electronic device as described in claims 1-6 and 16-26, which is capable of selectively combining and geometrically altering either “real world” or prerecorded imagery to create a composite augmented reality experience.
- 28. An electronic device as described in claims 1-6 and 16-27, which is capable of analyzing “real world” images captured by a simple single-view or panoramic camera system and, by utilizing differential imaging techniques and/or other image processing techniques, is capable of automatically removing the background “real world” scenery and replacing it with synthetic or prerecorded imagery provided from a video device (such as a VCR, DVD player, etc.).
- 29. An electronic device as described in claims 1-6 and 16-25, which is capable of combining “foreground” and “background” pre-rendered video information utilizing chroma-keying techniques, in which the foreground and background information may be provided by the same video source, and in which, additionally, the chroma-key color is dynamically variable within an image by providing an analog or digital sample of the chroma-key color, coded either as a special control frame or as part of each scan line of the video image (see the keying sketch following the claims).
- 30. An electronic device which is capable of performing both of the functions described in claims 28 and 29.
- 31. An electronic device which is capable of analyzing images captured by a simple single-view or panoramic camera system as described in claims 39-44 and interpreting the imagery as three-dimensional objects in real-time.
- 32. An electronic device as described in claim 31, which converts the three-dimensional objects into a “universal graphics description language” such as VRML or other appropriate language for storage or live transmission and subsequent decoding into graphical imagery by another VTV processor and appropriate display device.
- 33. An electronic device (otherwise known as the VTV graphics processor) as described in claims 1-6 and 16-32, shown in FIGS. 8-10, and whose functionality is described in paragraphs 3.1-3.18, comprising: one or more video digitizing modules; three areas of memory, known as augmented reality memory (ARM), virtual reality memory (VRM), and translation memory (TM); a digital processing module; and one or more video generation modules.
- 34. An electronic device as described in claim 33, in which the augmented reality memory (ARM) is “mapped” to occupy a smaller vertical field of view than the virtual reality memory (VRM) and translation memory (TM), so as to minimize the data requirement for the provision of the media whilst still maintaining a high-quality image.
- 35. An electronic device as described in claims 33-34, in which the augmented reality memory (ARM), virtual reality memory (VRM), and translation memory (TM) may be “mapped” at different resolutions (i.e., pixels in each memory region can represent different degrees of angular deviation).
- 36. An electronic device as described in claims 33-35, which displays imagery as described in claims 26-28 by first placing the “real world” video information in augmented reality memory (foreground memory) and source information from a video provision device (VCR, DVD player, etc.) into virtual reality memory, and then combining these two sources of imagery, according to the pattern of data held in translation memory (part of the Warp Engine), into a “composite image” before display on the output device (such as a flat-screen display or HMD); see the Warp Engine sketch following the claims.
- 37. An electronic device as described in claims 33-35, which displays imagery as described in claims 25 and 29 by first placing the foreground video information from a video provision device (VCR, DVD player, etc.) into augmented reality memory, then placing the background video information from a video provision device into virtual reality memory, and then combining these two sources of imagery, according to the pattern of data held in translation memory (part of the Warp Engine), into a “composite image” before display on the output device (such as a flat-screen display or HMD).
- 38. An electronic device as described in claim 37, which in addition to using the Warp Engine for image combination also relies on chroma-keying information present in the video media to determine foreground and background priority for final combination and display.
- 39. An electro-optical assembly which consists of a plurality of electronic image capture devices (video cameras, HDTV cameras, digital still cameras, etc.) configured with overlapping horizontal fields of view such that, collectively, they cover a full 360 degrees.
- 40. An electronic device which crops and aligns the individual images (Pages) produced by the assembly described in claim 39 to produce an overall 360 degree panoramic image with negligible distortion and overlap between the individual Pages.
- 41. An electronic device as described in claim 40, which, in addition to cropping and aligning the separate images to produce a seamless 360 degree panoramic image, also applies distortion correction to the images so that the resulting 360 degree panoramic image is mathematically “flat” in the horizontal axis (i.e., each pixel in the horizontal axis of the image subtends an equal angle at the camera); see the flattening sketch following the claims.
- 42. An electronic device as described in claims 40-41, which also applies distortion correction to the images so that the resulting 360 degree panoramic image is mathematically “flat” in the vertical axis (i.e., each pixel in the vertical axis of the image subtends an equal angle at the camera).
- 43. An electronic device as described in claims 40-42, which additionally inserts “Page identification information”, describing the location of the individual Pages that comprise the 360 degree panoramic image produced by the panoramic camera assembly, into the outgoing video stream.
- 44. An electronic device as described in claims 40-43, which additionally inserts “tracking information”, describing the current orientation (azimuth and elevation) of the panoramic camera assembly, into the video stream.
- 45. An electronic device which, utilizing data received from one or more video capture devices (video cameras, etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the azimuth of the camera (of the viewer, in the case of an HMD-based camera assembly), as shown in FIGS. 13 and 14 and more completely described in paragraphs 4.1-4.8; see the motion-tracking sketch following the claims.
- 46. An electronic device which, utilizing data received from one or more video capture devices (video cameras, etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the elevation of the camera (of the viewer, in the case of an HMD-based camera assembly), as shown in FIGS. 13 and 15 and more completely described in paragraphs 4.1-4.8.
- 47. An electronic device which, utilizing data received from one or more video capture devices (video cameras, etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the roll of the camera (of the viewer, in the case of an HMD-based camera assembly), as shown in FIGS. 13 and 16 and more completely described in paragraphs 4.1-4.8.
- 48. An electronic device which, utilizing data received from one or more video capture devices (video cameras, etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the physical (spatial) position of the camera (of the viewer, in the case of an HMD-based camera assembly) in any one, or any combination, of the X, Y, or Z axes, as shown in FIGS. 13 and 17-18 and more completely described in paragraphs 4.1-4.8.
- 49. An electronic device as described in claims 45-48, which utilizes a number of retroreflective targets with known “real world” coordinates, in conjunction with constant or strobed on-axis light sources, to determine absolute angular/spatial references for the purposes of converting the relative angular and spatial data determined by the devices described in claims 45-48 into absolute angular and spatial data.
- 50. An electronic device as described in claim 49, which utilizes a combination of color filters over the retroreflective targets, in conjunction with controllable on-axis light sources which are synchronized to the video capture rate of the HMD-based or remote panoramic cameras, to improve the ability of the system to correctly identify and maintain tracking of the individual retroreflective targets.
- 51. An electronic device as described in claims 49-50, which utilizes a combination of retroreflective targets, in conjunction with color-controllable on-axis light sources which are synchronized to the video capture rate of the HMD-based or remote panoramic cameras, to improve the ability of the system to correctly identify and maintain tracking of the individual retroreflective targets.
- 52. An electronic device as described in claims 49-51, which utilizes a combination of color filters over the retroreflective targets, in conjunction with color-controllable on-axis light sources which are synchronized to the video capture rate of the HMD-based or remote panoramic cameras, to improve the ability of the system to correctly identify and maintain tracking of the individual retroreflective targets.
- 53. An electronic device as described in claims 45-48, which utilizes a number of “active optical beacons” (controllable light sources which are synchronized to the video capture rate of the HMD-based or remote panoramic cameras), such that pulse timing, color of light, and/or combinations of these are used to transmit the “real world” coordinates of the beacon to the HMD or remote panoramic camera, to determine absolute angular/spatial references for the purposes of converting the relative angular and spatial data determined by the devices described in claims 45-48 into absolute angular and spatial data.
- 54. An electronic device as described in claims 45-48, which utilizes a number of “bi-directional infrared beacons” which communicate a unique ID code between the HMD and the beacon, such that this calibration would occur only once each time the HMD passed under any of these known spatial reference points.
- 55. An electronic device which utilizes a single optical imaging device to monitor a pattern on the ceiling and, utilizing image processing techniques similar to those described in claims 45-48, determines relative spatial movement and azimuth, in conjunction with an alternative angular tracking system, such as fluid level sensors, to determine the remaining angular orientations (pitch and roll).
- 56. An electronic device as described in claim 55, which utilizes any of the calibration systems described in claims 49-54 to determine absolute references for the purposes of converting the relative spatial data determined by the device described in claim 55 into absolute spatial data.
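Code Sketches

The Page-based Image Sphere of claims 2-4, 12, and 14 can be pictured as an equirectangular buffer that is refreshed one video field or frame at a time. The following minimal sketch (Python/NumPy) illustrates that idea; the buffer dimensions, angular resolution, and the `blit_page` interface are assumptions made for illustration, not the patent's actual encoding.

```python
import numpy as np

SPHERE_W, SPHERE_H = 3840, 960    # 360 x 90 degrees at ~0.094 deg/pixel (assumed)
PAGE_W, PAGE_H = 640, 480         # one conventional video frame

sphere = np.zeros((SPHERE_H, SPHERE_W, 3), dtype=np.uint8)

def blit_page(page, az_deg, el_deg, sphere_v_fov=90.0):
    """Place one Page into the Image Sphere at the given placement
    position (claim 12 carries this position in the video stream)."""
    x0 = int((az_deg % 360.0) / 360.0 * SPHERE_W)
    y0 = int((el_deg / sphere_v_fov) * SPHERE_H)
    xs = (np.arange(PAGE_W) + x0) % SPHERE_W           # wrap at the 0/360 seam
    ys = np.clip(np.arange(PAGE_H) + y0, 0, SPHERE_H - 1)
    sphere[np.ix_(ys, xs)] = page

# Each incoming video field/frame refreshes only its own Page (claim 3):
frame = np.full((PAGE_H, PAGE_W, 3), 128, dtype=np.uint8)
blit_page(frame, az_deg=45.0, el_deg=0.0)
```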
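Claims 9-12 carry digital and/or analog control data inside spare scan lines of an otherwise standard video signal. The patent does not specify the bit coding, so this sketch assumes a deliberately simple scheme: the line is divided into equal slices, one per bit, and each slice is thresholded against mid-grey.

```python
import numpy as np

def decode_scanline_bits(line, n_bits=64, black=16, white=235):
    """Recover digital data hidden in one scan line (claims 9-12).
    'line' is one row of the frame: shape (width,) luma or (width, 3) RGB.
    The threshold code used here is an assumption for illustration."""
    luma = line.mean(axis=-1) if line.ndim == 2 else line.astype(float)
    thresh = (black + white) / 2.0            # mid-point of nominal video levels
    cells = np.array_split(luma, n_bits)      # one slice of the line per bit
    return [int(cell.mean() > thresh) for cell in cells]
```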
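Claim 15 decodes multi-track audio into spatially correct left and right channels, parameterised chiefly by azimuth in the 4-track case. The mixing law itself is not given, so the sketch below substitutes a standard constant-power panning law, with the four tracks assumed to face 0, 90, 180, and 270 degrees.

```python
import math

def mix_4track(samples, head_az_deg):
    """Mix one sample from each of 4 azimuth-coded tracks into L/R.
    Track i is assumed recorded facing i*90 degrees; head_az_deg is the
    listener's azimuth from the head tracker (claims 22-23)."""
    left = right = 0.0
    for i, s in enumerate(samples):
        rel = math.radians(i * 90.0 - head_az_deg)   # source bearing vs. head
        pan = math.sin(rel)                          # -1 hard left ... +1 hard right
        left += s * math.sqrt((1.0 - pan) / 2.0)     # constant-power pan law
        right += s * math.sqrt((1.0 + pan) / 2.0)
    return left, right
```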
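Claims 16-21 extract a viewport from the Image Sphere under user or head-tracker control. A nearest-neighbour version, again assuming an equirectangular sphere buffer, follows; the field-of-view figures are placeholders.

```python
import numpy as np

def viewport(sphere, az_deg, el_deg, out_w=640, out_h=480,
             h_fov=60.0, v_fov=45.0, sphere_v_fov=90.0):
    """Cut a display-sized viewport out of the Image Sphere.
    az_deg/el_deg come from the head tracker (claims 18-19); horizontal
    look-ups wrap around at the 0/360 degree seam."""
    h, w, _ = sphere.shape
    cols = (((az_deg - h_fov / 2.0 + np.arange(out_w) * h_fov / out_w) % 360.0)
            / 360.0 * w).astype(int) % w
    rows = np.clip(((el_deg - v_fov / 2.0 + np.arange(out_h) * v_fov / out_h)
                    / sphere_v_fov * h).astype(int), 0, h - 1)
    return sphere[np.ix_(rows, cols)]
```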
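Claim 29 makes the chroma-key color dynamically variable by carrying a sample of it on each scan line (or in a control frame). The sketch below keys one scan line against whatever key color that line carries; the distance metric and threshold are assumed parameters.

```python
import numpy as np

def key_scanline(fg_line, bg_line, key_rgb, tol=24):
    """Composite one scan line (claim 29): foreground pixels close to the
    line's own key color are replaced by the background. 'key_rgb' would
    be decoded from the hybrid data carried on that same scan line."""
    dist = np.abs(fg_line.astype(int) - np.asarray(key_rgb)).sum(axis=-1)
    show_bg = dist < tol                  # True where the key color shows through
    return np.where(show_bg[:, None], bg_line, fg_line)
```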
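Claims 36-37 combine augmented reality memory (ARM) and virtual reality memory (VRM) “according to the pattern of data held in translation memory”. The format of translation memory is not disclosed, so this sketch reduces it to a per-pixel source selector plus a coordinate remap, which also suggests one way the Warp Engine could “geometrically alter” imagery (claim 27).

```python
import numpy as np

def warp_composite(arm, vrm, tm_select, tm_y, tm_x):
    """Warp-Engine-style combination (claims 36-37). tm_select chooses ARM
    (True) or VRM (False) per output pixel; tm_y/tm_x are integer lookup
    maps that remap coordinates during the combination."""
    fg = arm[tm_y, tm_x]                  # warped foreground lookup
    bg = vrm[tm_y, tm_x]                  # warped background lookup
    return np.where(tm_select[..., None], fg, bg)
```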
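The “mathematically flat” condition of claims 41-42 (each pixel subtending an equal angle at the camera) amounts to resampling each camera's pinhole projection at equally spaced angles. A horizontal-axis sketch follows, assuming an ideal pinhole lens; the vertical axis of claim 42 is handled identically on rows.

```python
import math
import numpy as np

def flatten_horizontal(img, h_fov_deg):
    """Resample one Page so every output column subtends an equal angle
    (claim 41). Assumes an ideal pinhole projection x = f * tan(theta)."""
    h, w, _ = img.shape
    half = math.radians(h_fov_deg) / 2.0
    f = (w / 2.0) / math.tan(half)                   # focal length in pixels
    theta = np.linspace(-half, half, w)              # equal-angle output columns
    src_x = np.clip((f * np.tan(theta) + w / 2.0).astype(int), 0, w - 1)
    return img[:, src_x]
```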
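Claims 45-48 recover relative camera motion from the video itself using “simple image analysis processes such as autocorrelation”. For the azimuth case (claim 45) this reduces to finding the horizontal shift that best aligns successive frames; a 1-D correlation version is sketched below. Elevation, roll, and translation (claims 46-48) follow the same pattern on different image projections.

```python
import numpy as np

def relative_azimuth(prev, curr, deg_per_pixel):
    """Estimate yaw between two frames (claim 45) from the horizontal
    image shift. Frames are (h, w, 3) arrays from the same camera;
    deg_per_pixel comes from the camera's calibrated field of view."""
    a = prev.mean(axis=(0, 2))            # collapse each frame to one row
    b = curr.mean(axis=(0, 2))
    a = a - a.mean()                      # remove DC so the peak is clean
    b = b - b.mean()
    corr = np.correlate(b, a, mode="full")
    shift = int(corr.argmax()) - (len(a) - 1)   # lag of the best alignment
    return shift * deg_per_pixel
```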
REFERENCE TO RELATED APPLICATION
[0001] This application claims priority of U.S. provisional patent application No. 60/212,862, titled “VTV System”, filed Jun. 26, 2000 by Angus Duncan Richards.