Claims
- 1. A method for navigating a site, the method comprising the steps of:
determining a location of a user by receiving a location signal from a location-dependent device;
loading and displaying a three-dimensional (3D) scene of the determined location;
determining an orientation of the user by a tracking device;
adjusting a viewpoint of the 3D scene by the determined orientation;
determining if the user is within a predetermined distance of an object of interest; and
loading a speech dialog of the object of interest.
- 2. The method as in claim 1, wherein if the user is within a predetermined distance of a plurality of objects of interest, prompting the user to select at least one object of interest.
- 3. The method as in claim 1, wherein the speech dialog is displayed to the user.
- 4. The method as in claim 1, wherein the speech dialog is audibly produced to the user.
- 5. The method as in claim 1, further comprising the step of querying a status of the object of interest by the user.
- 6. The method as in claim 5, further comprising the step of informing the user of the status of the object of interest.
- 7. The method as in claim 1, further comprising the step of initiating by the user a collaboration session with a remote party for instructions.
- 8. The method as in claim 7, wherein the remote party annotates the displayed viewpoint of the user.
- 9. The method as in claim 7, wherein the remote party views the displayed viewpoint of the user.
- 10. A system for navigating a user through a site, the system comprising:
a plurality of location-dependent devices for transmitting a signal indicative of each device's location; and
a navigation device for navigating the user including:
a tracking component for receiving the location signals and for determining a position and orientation of the user;
a graphic management component for displaying scenes of the site to the user on a display; and
a speech interaction component for instructing the user.
- 11. The system as in claim 10, wherein the tracking component includes a coarse-grained tracking component for determining the user's location and a fine-grained tracking component for determining the user's orientation.
- 12. The system as in claim 11, wherein the coarse-grained tracking component includes an infrared sensor for receiving an infrared location signal from at least one of the plurality of location-dependent devices.
- 13. The system as in claim 11, wherein the fine-grained tracking component is an inertia tracker.
- 14. The system as in claim 10, wherein the graphic management component includes a three dimensional graphics component for modeling a scene of the site.
- 15. The system as in claim 10, wherein the graphic management component determines if the user is within a predetermined distance of an object of interest and, if the user is within the predetermined distance, the speech interaction component loads a speech dialog associated with the object of interest.
- 16. The system as in claim 15, wherein the speech dialog is displayed on the display.
- 17. The system as in claim 15, wherein the speech dialog is audibly produced by a text-to-speech engine.
- 18. The system as in claim 10, wherein the speech interaction component includes a text-to-speech engine for audibly producing instructions to the user.
- 19. The system as in claim 10, wherein the speech interaction component includes a voice recognition engine for receiving voice commands from the user.
- 20. The system as in claim 10, wherein the navigation device further includes a wireless communication module for communicating to a network.
- 21. The system as in claim 10, wherein the navigation device further includes a collaboration component for the user to collaborate with a remote party.
- 22. A navigation device for navigating a user through a site comprising:
a tracking component for receiving location signals from a plurality of location-dependent devices and for determining a position and orientation of the user;
a graphic management component for displaying scenes of the site to the user on a display; and
a speech interaction component for instructing the user.
- 23. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform method steps for navigating a site, the method steps comprising:
determining a location of a user by receiving a location signal from a location-dependent device;
loading and displaying a three-dimensional (3D) scene of the determined location;
determining an orientation of the user by a tracking device;
adjusting a viewpoint of the 3D scene by the determined orientation;
determining if the user is within a predetermined distance of an object of interest; and
loading a speech dialog of the object of interest.
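The method of claim 1 (and the program of claim 23) can be illustrated with a minimal sketch. All identifiers below — the beacon table, object list, `navigate` function, and threshold value — are hypothetical, invented for illustration; the claims do not prescribe any particular data structures or names.

```python
import math
from dataclasses import dataclass

# Hypothetical beacon table: maps a location signal (e.g. an infrared
# transmitter ID) to a site position and the 3D scene modeled there.
BEACONS = {
    "ir-042": {"position": (10.0, 0.0, 5.0), "scene": "atrium.scene"},
}

# Hypothetical objects of interest, each with an associated speech dialog.
OBJECTS_OF_INTEREST = [
    {"name": "pump-3", "position": (12.0, 0.0, 5.0),
     "dialog": "This is pump 3. Say 'status' to query it."},
]

PROXIMITY_THRESHOLD = 3.0  # the claimed "predetermined distance" (meters)

@dataclass
class Viewpoint:
    position: tuple
    heading_deg: float  # orientation reported by the tracking device

def navigate(signal_id, heading_deg):
    """One pass of the claimed method: determine location from a
    location signal, load the 3D scene, adjust the viewpoint by the
    tracked orientation, then check proximity and load any dialogs."""
    beacon = BEACONS[signal_id]                        # determine location
    scene = beacon["scene"]                            # load/display 3D scene
    view = Viewpoint(beacon["position"], heading_deg)  # adjust viewpoint
    dialogs = []
    for obj in OBJECTS_OF_INTEREST:                    # proximity check
        if math.dist(view.position, obj["position"]) <= PROXIMITY_THRESHOLD:
            dialogs.append(obj["dialog"])              # load speech dialog
    return scene, view, dialogs
```

Under claim 2, if `dialogs` came back with more than one entry, the user would be prompted to select among them before any dialog is presented.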
PRIORITY
[0001] This application claims priority to an application entitled “A MOBILE MULTIMODAL USER INTERFACE COMBINING 3D GRAPHICS, LOCATION-SENSITIVE SPEECH INTERACTION AND TRACKING TECHNOLOGIES” filed in the United States Patent and Trademark Office on Feb. 6, 2002 and assigned Serial No. 60/355,524, the contents of which are hereby incorporated by reference.
Provisional Applications (1)
| Number   | Date     | Country |
|----------|----------|---------|
| 60355524 | Feb 2002 | US      |