Claims
- 1. A method for dynamic virtual convergence for video-see-through head mountable displays to allow stereoscopic viewing of close-range objects, the method comprising:
(a) sampling an image with first and second cameras, each camera having a first field of view; (b) estimating a gaze distance for a viewer; (c) transforming display frustums to converge at the estimated gaze distance; (d) reprojecting the image sampled by the cameras into the display frustums; and (e) displaying the reprojected image to the viewer on displays having a second field of view smaller than the first field of view, thereby allowing stereoscopic viewing of close-range objects.
- 2. The method of claim 1 wherein sampling an image with the first and second cameras includes obtaining video samples of an image.
- 3. The method of claim 1 wherein estimating a gaze distance includes tracking objects within the camera fields of view and applying a heuristic to estimate the gaze distance based on the distance from the cameras to at least one of the tracked objects.
- 4. The method of claim 1 wherein transforming the display frustums to converge at the estimated gaze distance includes rotating the display frustums to converge at the estimated gaze distance.
- 5. The method of claim 1 wherein transforming the display frustums to converge at the estimated gaze distance includes shearing the display frustums to converge at the estimated gaze distance.
- 6. The method of claim 1 wherein transforming the display frustums to converge at the estimated gaze distance includes transforming the display frustums without moving the cameras.
- 7. The method of claim 1 wherein displaying the reprojected image to the viewer includes displaying the images to the viewer on first and second display screens in a video-see-through head mountable display.
- 8. The method of claim 1 comprising adding an augmented reality image to the displayed image.
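Claims 1-8 recite transforming the display frustums, e.g. by shearing (claim 5), so that they converge at the estimated gaze distance without moving the cameras (claim 6). A minimal sketch of the shearing variant is shown below, assuming an OpenGL-style off-axis projection; the function name and parameters are illustrative, not taken from the disclosure:

```python
import numpy as np

def sheared_frustum(eye_offset_x, gaze_distance, fov_deg, near, far):
    """Build an off-axis (sheared) projection matrix whose view volume
    converges at gaze_distance, without rotating the camera or eye.

    eye_offset_x: horizontal offset of this eye from the head center
    (e.g. +/- half the interpupillary distance); gaze_distance: the
    estimated distance to the viewer's point of regard.
    """
    half = near * np.tan(np.radians(fov_deg) / 2.0)
    # Shift the near-plane window so the frustum centerline passes
    # through the convergence point at gaze_distance.
    shift = eye_offset_x * near / gaze_distance
    left, right = -half - shift, half - shift
    bottom, top = -half, half
    # Standard asymmetric-frustum projection matrix.
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m
```

Shearing rather than rotating (claims 4 vs. 5) keeps the two image planes parallel, so a point at the convergence distance projects to the same horizontal screen position in both eyes with no vertical disparity.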
- 9. A method for estimating convergence distance of a viewer's eyes when viewing a scene through a video-see-through head mountable display, the method comprising:
(a) creating depth buffers containing a depth value for each pixel in a scene viewable by each of a viewer's eyes through a video-see-through head mountable display, using known information about the scene, positions of tracked objects in the scene, and positions of each of the viewer's eyes; (b) examining predetermined scan lines in each depth buffer and determining a closest depth value for each of the viewer's eyes; (c) averaging the depth values for the viewer's eyes to determine an estimated convergence distance; (d) determining whether depths of any tracked objects override the estimated convergence distance; and (e) determining a final convergence distance based on the estimated convergence distance and the determination in step (d).
- 10. The method of claim 9 comprising filtering the final convergence distance to dampen high frequency changes in the final convergence distance and avoid oscillations of the final convergence distance.
- 11. The method of claim 10 wherein filtering the final convergence distance includes temporally averaging a predetermined number of recently calculated convergence distance values.
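Steps (a)-(e) of claim 9, together with the temporal filtering of claims 10-11, can be sketched as follows. All names, the min-over-scan-lines choice, and the "nearer tracked object wins" override rule are illustrative assumptions; the claims state only the high-level steps:

```python
from collections import deque
import numpy as np

def estimate_convergence(depth_left, depth_right, scanlines,
                         tracked_object_depths=(), history=None):
    """Estimate a convergence distance from per-eye depth buffers.

    depth_left / depth_right: 2-D arrays of per-pixel depth values;
    scanlines: row indices to examine (step (b));
    tracked_object_depths: depths of tracked objects that may override
    the scan-line estimate (steps (d)/(e));
    history: optional deque of recent values for temporal averaging
    (claims 10-11), which damps high-frequency oscillations.
    """
    # Step (b): closest depth along the examined scan lines, per eye.
    closest_l = min(depth_left[r].min() for r in scanlines)
    closest_r = min(depth_right[r].min() for r in scanlines)
    # Step (c): average the two per-eye depth values.
    estimate = 0.5 * (closest_l + closest_r)
    # Steps (d)/(e): let a nearer tracked object override the estimate.
    for d in tracked_object_depths:
        if d < estimate:
            estimate = d
    # Claims 10-11: temporally average recent values (low-pass filter).
    if history is not None:
        history.append(estimate)
        estimate = sum(history) / len(history)
    return estimate
```

A bounded `deque` used as `history` implements the "predetermined number of recently calculated values" of claim 11: old samples fall out automatically as new ones arrive.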
- 12. A head mountable display system for displaying real and augmented reality images in stereo to a viewer, the system comprising:
(a) a main body including a tracker for tracking position of a viewer's head, first and second cameras for obtaining images of an object of interest, and first and second mirrors for reprojecting virtual centroids of the cameras to centroids of the viewer's eyes; and (b) a display unit including first and second displays for receiving the images sampled by the cameras and displaying the images to the viewer.
- 13. The system of claim 12 wherein the main body includes a tracker mounting portion and first, second, and third light emitting elements for tracking the position of the viewer's head.
- 14. The system of claim 13 wherein the tracker mounting portion is substantially triangular shaped and the first, second, and third light emitting elements are located at vertices of a triangle formed by the tracker mounting portion.
- 15. The system of claim 12 wherein the main body includes first and second opposing portions for holding the first and second mirrors.
- 16. The system of claim 12 wherein the first mirror is located opposite the cameras and the second mirror is located opposite the first mirror.
- 17. The system of claim 16 wherein the first mirror is adapted to project the camera centroids into the second mirror and the first and second mirrors are spaced from each other and oriented such that the camera centroids correspond to the positions of the viewer's eyes.
- 18. The system of claim 12 wherein the second mirror is angled to reflect images of an object being viewed and the second mirror is of unitary construction.
- 19. The system of claim 12 wherein the second mirror comprises left and right portions located close to each other.
- 20. The system of claim 12 wherein the fields of view of the displays are smaller than fields of view of the cameras.
- 21. The system of claim 12 wherein the cameras are stationary.
RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/335,052, filed Oct. 19, 2001, the disclosure of which is incorporated herein by reference in its entirety.
GOVERNMENT INTEREST
[0002] This invention was made with Government support under Grant No. CA47287 awarded by the National Institutes of Health and Grant No. ASC8920219 awarded by the National Science Foundation. The Government has certain rights in the invention.
PCT Information

| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/US02/33597 | 10/18/2002 | WO | |
Provisional Applications (1)

| Number | Date | Country |
|---|---|---|
| 60335052 | Oct 2001 | US |