1. Field of the Invention
The present invention relates to the field of virtual reality devices. In particular, the invention relates to a system and method to simulate underwater diving in a variety of desired environments.
2. Description of the Related Art
In an era of increasing fuel prices and dwindling natural resources, one constant continues to be many people's desire to travel to exotic locations and experience relaxing recreation. One such form of recreation is scuba diving on the world's coral reefs, shipwrecks, and other sites. Unfortunately, the cost of such travel has traditionally made such experiences prohibitive for the majority of people. This motivates the question of whether such an experience could be provided in bodies of water closer to where people live.
Over the years, there have been numerous patents issued in the area of virtual reality. These patents fall roughly into two categories: those that enhance the state of the art with regard to the basic science required to achieve a lightweight head-mounted display, or mask, and those that relate to the application of virtual reality. Within the applications category, there are several distinct areas of interest, including general recreation/fitness, medical/therapeutic applications, entertainment, and database usability.
U.S. Pat. No. 4,884,219 to Waldren relates to a head-mounted virtual reality device. It discloses moving a pair of viewing screens from a roll-around type platform into a mask mounted and worn on the user's head. U.S. Pat. No. 5,151,722 to Massof et al. shows an optics arrangement whereby the image source is mounted on the side of the user's head, and the image is reflected off a series of mirrors.
Patents have also issued that relate to entertainment and recreation. For example, U.S. Pat. No. 5,890,995 to Bobick et al. and German Patent 3706250 to Reiner disclose systems that couple a virtual reality mask with pedaled exercise equipment. The user mounts a bicycle and can navigate through virtual environments that represent either a synthetic playing field with avatars (computer-graphics generated “opponents”) or a synthetic road with vehicles and other bicyclists.
U.S. Pat. No. 6,428,449 to Apseloff addresses individuals who choose to run on a treadmill, rather than pedal a bicycle, while watching the screen. The system is responsive to both body motion and verbal cues. The invention is sensitive to particular aspects of the running activity, such as providing a means to detect the runner's cadence.
US Patent Publication 2002/0183961 to French et al. focuses on the artificial intelligence algorithms for rendering opponents in a virtual environment (such as a tennis player who anticipates the user's next move or tries to put the user on the defensive) and is intended for training purposes. The system senses the player's 3D position in real time and renders the avatars' responses accordingly. Unlike the three previous patents, this invention does not address the interface between the computer system and more traditional mechanical training equipment such as treadmills and stationary bikes.
US Patent Publication 2004/0086838 to Dinis shows a scuba diving simulator including an interactive submersible diver apparatus and a source of selectable underwater three-dimensional virtual images. The system disclosed requires the user to hold his or her head pressed to a viewer with a view port. There is no change in scenery when the user changes the position of his or her head relative to the underwater environment. Also, inputs to the Dinis system originate from joysticks and rods that the diver holds onto, and constant supervision from an operator is required. Further, the diver in Dinis is restricted by the position of the connecting cable to the surface at a fixed location. Further still, the images provided in Dinis are static and not dynamic.
US Patent Publication 2007/0064311 to Park discloses a head mounted waterproof display.
US Patent Publication 2008/0218332 to Lyons shows a monitoring device to alert a swimmer that he or she is approaching a boundary or wall.
Nintendo® markets underwater simulation software for its Wii® console under the trademark Endless Ocean™. The software includes fictional scenes only and requires the user to control an avatar (a solo diver) onscreen using a joystick or a remote control device.
What is needed is a system and method that provides a low-cost scuba diving recreation option without the expense or inconvenience associated with physical travel to distant diving locations. The system and method should allow the user to experience scenery in real time, based upon the position of his or her head relative to a mobile, triangulation-based positioning and navigation system.
An underwater diving simulation system comprises at least three surface electronics units that define a diving area. The surface electronics units are positioned in proximity to a desired dive location. Each surface electronics unit includes a microprocessor-controlled transceiver that receives x-y-z position data from an underwater acoustical transponder located on a diver in the diving area. At least one of the surface electronics units includes a graphics processing unit that provides user-selectable, variable underwater virtual reality data to the diver via a communication link. A plurality of sensors in proximity to the diver's head transmits, in real time, the rate of change and the horizontal and vertical position of the diver's head to a signal decoder via the communication link. The plurality of sensors is typically attached to or integral with an underwater diving mask worn by the diver. The mask has at least one optical element visible to the diver. Typically a pair of projectors is provided, one for each of the diver's eyes. Each projector sends video to the at least one optical element, which displays underwater virtual reality images to the diver while the diver swims within the dive area. The virtual reality images are generated by the graphics processing unit in real-time response to the position and orientation of the diver and the diver's head, whereby the diver can experience a virtual reality of diving in a user-selectable location and with user-selectable sea creatures.
The invention comprises a system of software, sensors, and hardware components that can be partitioned into two groups. The first group of elements includes surface electronics, which are housed in surface electronics units and are responsible for the production of an immersive underwater virtual reality that responds to real-time environmental inputs. The second group of elements includes a diving mask with electronics and sensors that is worn by a diver D and is responsible for delivering a virtual reality (“VR”) experience to the diver, as well as for providing a set of sensor readings that are used to update the VR experience. Together, the two groups comprise a feedback loop of information that renders a real-time, interactive underwater virtual world that anyone can experience without having to travel to a tropical or a remote location.
The physical and process elements of a preferred embodiment of the inventive system, and the information flows among them, are described below with reference to the accompanying flow schematic.
The first group includes surface electronics contained in a buoy B that floats near the diving site and has computing power roughly equivalent to that of a laptop personal computer.
A view of the overall system is shown in the accompanying drawings.
Once the region has been selected with the toggle 54, as confirmed on the display 53 with the select button 54a, the diver D (or an assistant) may then choose the type of dive. In one embodiment, the type of dive may be one of several generic diving scenarios, such as coral reef or shipwreck. Alternately, the diver D may choose between one of several specific diving sites, such as a national underwater park or a nature preserve site. It is contemplated that a site may also be selected from a geophysical mapping source, such as Google® Earth. Once these selections have been made, a simple circuitry card in the console C announces the activation of the program to the 3D game engine 25, the secondary circuit card 25a, and the mask video decoder 19 in the mask M. The mask M, shown in the accompanying drawings, is described in further detail below.
First, the program populates a scene graph database 8 with data from an underwater terrain database 2 and with wireframe mesh geometry (i.e., faces, edges, and vertices) of the sea creatures from the 3D sea creatures database (geometry) 5. The database structure used may be similar to that used in the Apple iPhone™ or to a more industrial-strength product such as SQL Server 2005. The raw computing power of the graphics pipeline resides in a hardware Graphics Processing Unit (“GPU”) 9. The GPU 9 serves as a high-speed cache, or buffer, for storing data such as the pixels that comprise the texture of an object or the geometry (vertices, edges, and faces) that comprises a mesh, or wireframe representation, of a real-world creature. The GPU 9 is a dedicated, rapid-access memory with mathematical routines for performing matrix algebra and floating point operations. The textures for each sea creature and terrain are loaded from the databases 2, 5, 12 and stored in the GPU 9 prior to execution of the main simulation loop (shown in the flow schematic).
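By way of non-limiting illustration, the following C++ sketch shows how the scene graph database 8 might be populated with terrain and creature wireframe meshes. The types and loader functions are hypothetical stand-ins for reads from the databases 2 and 5, not the actual implementation:

    #include <string>
    #include <vector>

    // Hypothetical wireframe mesh: vertices and triangular faces, plus the
    // name of a texture cached on the GPU 9.
    struct Vertex { float x, y, z; };
    struct Face   { int v0, v1, v2; };        // indices into the vertex list
    struct Mesh {
        std::vector<Vertex> vertices;
        std::vector<Face>   faces;
        std::string         textureName;
    };

    // A node in the scene graph database 8; creatures and terrain attach
    // as child nodes of the root.
    struct SceneNode {
        std::string            name;
        Mesh                   mesh;
        std::vector<SceneNode> children;
    };

    // Stubs standing in for reads from the underwater terrain database 2
    // and the 3D sea creatures database (geometry) 5.
    Mesh loadTerrainMesh()                    { return {}; }
    Mesh loadCreatureMesh(const std::string&) { return {}; }

    // Populate the scene graph prior to the main simulation loop.
    SceneNode buildSceneGraph(const std::vector<std::string>& creatureIds) {
        SceneNode root{"root", loadTerrainMesh(), {}};
        for (const auto& id : creatureIds)
            root.children.push_back(SceneNode{id, loadCreatureMesh(id), {}});
        return root;
    }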
The program selected by the user calls another function that loads behavior scripts from the 3D sea creatures database (scripts) 12 into an area of memory where they are available to the artificial intelligence (“AI”) module 6. Initial set-up of the creatures in the 3D sea creatures database 12 includes a script that adjusts their positioning, articulation, and state assignment (floating, fleeing, swimming, etc.). The scripts can be written using one of several commercially available or open-source software packages known under the trademarks Maya, 3D Studio, Blender, or Milk Shape 3D. The scripts prescribe the motion of the creatures in the coordinate system and are distinct from the code of the software itself. In a preferred embodiment, the scripting is updated in real-time according to stochastic artificial intelligence algorithms that introduce randomness into the creature behavior as a response to external stimuli, either from other virtual sea creatures in the environment or from the diver D or other user. For example, a school of fish may shrink back in response to the virtual presence of the diver D in their swim area, based upon the position vector of the diver D at a given point in time. To support this, position data flows from the navigation unit 31, which is located on the secondary circuit card 25a (described below), to the AI module 6.
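A minimal sketch of one such stochastic behavior update is given below in C++. The state names follow the description above, but the flee radius, the transition probability, and the random model are all illustrative assumptions rather than the actual AI module 6:

    #include <cmath>
    #include <random>

    enum class State { Floating, Swimming, Fleeing };  // states named above

    struct Vec3 { float x, y, z; };

    static float distance(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // One AI update for a single creature: flee when the diver's position
    // vector comes within a threshold radius; otherwise drift stochastically
    // between floating and swimming.
    State updateState(State current, const Vec3& creaturePos,
                      const Vec3& diverPos, std::mt19937& rng) {
        const float fleeRadius = 3.0f;                  // meters; illustrative
        if (distance(creaturePos, diverPos) < fleeRadius)
            return State::Fleeing;
        std::bernoulli_distribution startSwimming(0.1); // random transition
        if (current == State::Floating && startSwimming(rng))
            return State::Swimming;
        return current;
    }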
Finally, the real-time scene rendering engine simulation loop 3 begins. A variable such as elapsed time is initialized and is used to keep track of time in the simulation. During the loop, a test is done to see if the diver has exited 36a the simulation by turning an on/off switch 37 on the mask M to the “off” position. If the on/off switch 37 has not been turned to the off position, the time step 36b is incremented and the loop repeats.
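The structure of this loop may be sketched as follows in C++, where maskSwitchIsOn and renderFrame are hypothetical stand-ins for the on/off switch 37 test and the per-frame rendering described below:

    // Main simulation loop 3: advance elapsed time each frame until the
    // diver turns the on/off switch 37 to the "off" position.
    bool maskSwitchIsOn() { return false; }   // stub: would read switch 37
    void renderFrame(double) {}               // stub: one frame, described below

    void runSimulation() {
        double elapsedTime = 0.0;             // initialized time variable
        const double timeStep = 1.0 / 30.0;   // illustrative 30 Hz frame period
        while (maskSwitchIsOn()) {            // exit test 36a
            renderFrame(elapsedTime);
            elapsedTime += timeStep;          // time step increment 36b
        }
    }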
Scene rendering may be implemented using a commercially licensed game engine 25. The game engine 25 provides scene rendering by traversing the scene graph database 8 to operate on only the part of it that is actively in the diver's D view. As the diver D moves around, different areas of the scene graph database 8 are culled and drawn by the game engine 25. The objects in the scene (fish, coral, other landscape features) are attached as “nodes” to the scene graph 8 as a way of efficiently organizing the objects. Every node in the scene graph database 8 goes through additional processing. First, the game engine 25 computes key-frame poses of the creatures for the next frame. Next, world transformations (e.g., rotation, translation) are computed by the 3D world transformer 7 using a virtual world transformation algorithm and are applied to the scene graph database 8 based on the velocity, acceleration, and position of the diver. Textures are obtained from the GPU 9 by a set of program texture mapping functions from a texture-mapping library 20 and painted onto the scene. Caustics (sea bottom refracted light patterns) are applied with the atmospherics processor 11 to the ocean floor/terrain mesh and to coral, sunken ships, large creatures, etc. Waves above the diver D may also be simulated. Finally, the underwater objects are projected onto the camera viewing plane via a software projection and scan-line conversion module 15 to form the scene image for a given time stamp. This comprises one frame of the simulation.
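By way of illustration only, one pass of such a traversal might look like the following C++ sketch. The simplified node type and all helper functions are hypothetical stubs ordered as the steps above (cull, pose, transform, texture, project); they are not the game engine's 25 actual interface:

    #include <vector>

    struct SceneNode { std::vector<SceneNode> children; };
    struct Pose {};
    struct Matrix4 {};
    struct Camera {
        bool canSee(const SceneNode&) const { return true; }  // stub cull test
    };

    // Stubs for the per-node steps described above.
    Pose    keyFramePose(const SceneNode&, double) { return {}; }
    Matrix4 worldTransform(const Pose&)            { return {}; }  // 3D world transformer 7
    void    applyTexture(const SceneNode&)         {}              // textures from the GPU 9
    void    project(const SceneNode&, const Matrix4&, const Camera&) {} // scan-line conversion 15

    // Recursively render the part of the scene graph in the diver's view.
    void renderNode(const SceneNode& node, const Camera& cam, double t) {
        if (!cam.canSee(node))                  // cull nodes outside the view
            return;
        Pose    pose  = keyFramePose(node, t);  // key-frame pose for this frame
        Matrix4 world = worldTransform(pose);   // rotation/translation
        applyTexture(node);
        project(node, world, cam);              // onto the camera viewing plane
        for (const auto& child : node.children)
            renderNode(child, cam, t);
    }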
Before being sent to the mask M, each frame is encoded by the buoy video encoder 23. The encoding may use a technique such as the Discrete Cosine Transform (“DCT”) to reduce the number of bits in the signal that need to be transmitted. The encoding may conform to a desired video standard such as MPEG-4. In a preferred mode of the invention, the buoy sends an encoded NTSC, PAL, or other digital video signal along a tether T directly to the mask video decoder 19 on the mask M.
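For reference, the DCT named above operates on small blocks of pixels. The following C++ sketch computes the standard two-dimensional DCT of one 8x8 block; a full MPEG-4 encoder would add quantization, motion compensation, and entropy coding on top of this transform:

    #include <cmath>

    // Two-dimensional Discrete Cosine Transform of one 8x8 block of pixels.
    // Energy compacts into a few low-frequency coefficients, which is what
    // lets the encoder reduce the bits transmitted along the tether T.
    void dct8x8(const double in[8][8], double out[8][8]) {
        const double pi = 3.14159265358979323846;
        for (int u = 0; u < 8; ++u) {
            for (int v = 0; v < 8; ++v) {
                double sum = 0.0;
                for (int x = 0; x < 8; ++x)
                    for (int y = 0; y < 8; ++y)
                        sum += in[x][y]
                             * std::cos((2 * x + 1) * u * pi / 16.0)
                             * std::cos((2 * y + 1) * v * pi / 16.0);
                const double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
                const double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
                out[u][v] = 0.25 * cu * cv * sum;
            }
        }
    }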
By this time, the diver D has donned the mask M, shown in the accompanying drawings.
The sensor system responsible for determining the position of the diver employs acoustic short baseline technology. A trio of transceivers 16-1, 16-2, 16-3 on the surface units and a transponder 13 carried on the diver's D back make up this positioning system.
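The position fix obtained from three acoustic ranges can be illustrated with textbook three-sphere trilateration, sketched below in C++. This is a simplified stand-in for the actual short baseline computation; the downward root is assumed here because the diver is below the surface units, and in practice a depth sensor reading can resolve the sign ambiguity:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static Vec3   sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3   add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3   mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3   cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static double norm(Vec3 a) { return std::sqrt(dot(a, a)); }

    // Given the positions p1..p3 of the transceivers 16-1, 16-2, 16-3 and the
    // acoustic ranges r1..r3 to the transponder 13, recover an x-y-z position.
    Vec3 trilaterate(Vec3 p1, Vec3 p2, Vec3 p3, double r1, double r2, double r3) {
        Vec3   ex = mul(sub(p2, p1), 1.0 / norm(sub(p2, p1)));
        double d  = norm(sub(p2, p1));
        double i  = dot(ex, sub(p3, p1));
        Vec3   ty = sub(sub(p3, p1), mul(ex, i));
        Vec3   ey = mul(ty, 1.0 / norm(ty));
        double j  = dot(ey, sub(p3, p1));
        Vec3   ez = cross(ex, ey);
        double x  = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d);
        double y  = (r1 * r1 - r3 * r3 + i * i + j * j - 2.0 * i * x) / (2.0 * j);
        double z  = std::sqrt(std::fmax(0.0, r1 * r1 - x * x - y * y));
        // -z assumes the diver lies below the plane of the surface units.
        return add(p1, add(mul(ex, x), add(mul(ey, y), mul(ez, -z))));
    }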
As the diver D moves about in the water in the dive area defined by the buoys B, B1 and B2, a Doppler Velocity Sensor (“DVS”) 28 mounted on the diver's D back transmits his or her body's velocity vector. The DVS 28 includes a piston or phased-array transducer 28b attached to an electronics enclosure 28a, which houses the DVS electronics. The electronics enclosure 28a is carried on the diver's D back, next to the transponder 13. The DVS 28 transmits data to the signal decoder 34 via a DVS cable 50, which is connected to the tether T. Examples of a suitable DVS 28 include units manufactured under the trademarks Explorer Doppler Velocity Log (DVL) and NavQuest 600 Micro DVL by LinkQuest, Inc. The DVS electronics enclosure 28a is approximately the same size as the transponder 13 and weighs about 1.0 kg in water. The piston or phased-array transducer 28b that actually takes the velocity reading typically weighs about 0.85 kg and could be mounted on the backpack. The velocity data is transferred to the mask encoder 14 and then to the signal decoder 34.
In contrast to the equipment on the diver's D back, the sensors mounted on the mask M are shown in the accompanying drawings.
The purpose of the mask sensor components is to provide an accurate location for the diver so that the software resident in the GPU 9 on-board the buoy B can render the virtual world. A software module written in C/C++ or assembly contained in the navigation unit 31 combines the decoded velocity, acceleration, compass, and tilt readings to provide a finer level of detail so that sudden changes in motion are accurately rendered.
The signals from the mask sensor components pass through the formatting circuitry/mask encoder 14 and are then sent along the tether T to a signal decoder 34 and the navigation unit 31 on-board the secondary circuit card 25a in the buoy B. Simultaneously, the signal and x-y-z position data from the transponder 13 and depth sensor 13a on the diver's D back are received by the transceivers 16-1, 16-2, 16-3 on the buoys B. The diver's D x-y-z position data is then passed to the navigation unit 31. The navigation unit 31 combines the sensor readings and computes the real-time position/orientation estimation before passing the vectors to the software camera 35. Position, velocity, acceleration, and orientation data are processed as events using dead-reckoning algorithms to derive, for each frame of the simulation, an instantaneous estimation of the diver's D head position and orientation. Velocity and acceleration vectors serve as inputs to converge the estimated position and orientation vectors of the diver's D head, as represented by the software camera 35. A multi-threaded (parallel) algorithm may also be implemented to combine the sensor readings to obtain an estimation of the diver's D head position and orientation.
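A minimal sketch of one such dead-reckoning step appears below in C++. The integration follows the description above; the blend factor that converges the integrated estimate toward the acoustic fix is illustrative only, as the actual weighting is not specified here:

    struct Vec3 { double x, y, z; };

    // State carried between frames: estimated head position and velocity.
    struct HeadState { Vec3 position; Vec3 velocity; };

    // One dead-reckoning step: integrate the decoded acceleration into
    // velocity and velocity into position, then converge the estimate toward
    // the acoustic x-y-z fix received by the transceivers 16-1, 16-2, 16-3.
    HeadState deadReckon(const HeadState& prev, const Vec3& accel,
                         const Vec3& acousticFix, double dt) {
        HeadState next;
        next.velocity = { prev.velocity.x + accel.x * dt,
                          prev.velocity.y + accel.y * dt,
                          prev.velocity.z + accel.z * dt };
        next.position = { prev.position.x + next.velocity.x * dt,
                          prev.position.y + next.velocity.y * dt,
                          prev.position.z + next.velocity.z * dt };
        const double k = 0.1;   // illustrative convergence weight
        next.position.x += k * (acousticFix.x - next.position.x);
        next.position.y += k * (acousticFix.y - next.position.y);
        next.position.z += k * (acousticFix.z - next.position.z);
        return next;            // feeds the software camera 35 for this frame
    }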
After the mask M sensor components 29a, 30-1, 30-2 have begun transmission of attitude and position via the tether T, a signaling circuit 32 in the mask M begins to ping the transceivers 16-1, 16-2, 16-3 via the tether T to locate the video signal from the GPU 9. Upon a successful handshake, the video signal is received by the mask decoder 19 on the mask M. Circuitry, comprising a picture formatter 17-1, 17-2, on-board each side of the mask, generates and formats a picture from the video signal as it comes in and pushes the picture to the optics projectors 26-1, 26-2 on each side of the mask M. The optics projectors 26-1, 26-2 send the video to the optical elements 27-1, 27-2, which are embedded on each side of the mask M transparent viewing surface, one for each eye.
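The signaling sequence might be sketched as follows in C++. The functions are hypothetical stand-ins, as the wire protocol between the signaling circuit 32 and the transceivers is not detailed here:

    // Hypothetical handshake: the signaling circuit 32 pings each transceiver
    // over the tether T until one acknowledges, after which the mask decoder 19
    // can begin receiving the video signal from the GPU 9.
    void sendPing(int)  {}                   // stub: ping via the tether T
    bool awaitAck(int)  { return true; }     // stub: handshake response

    bool locateVideoSignal(const int* transceiverIds, int count) {
        for (int i = 0; i < count; ++i) {
            sendPing(transceiverIds[i]);
            if (awaitAck(transceiverIds[i]))
                return true;                 // successful handshake; video flows
        }
        return false;                        // no response; retry next cycle
    }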
Alternative optics for the mask M currently exist. Lumus Optical Corporation of Israel has developed one such component. The Lumus component comprises a so-called Light Optical Element (“LOE”) and a “Micro-Display Pod.” The LOE may be substituted in the instant invention for the optical elements 27-1, 27-2. The LOE comprises a refracting ultra-thin lens that displays high-resolution, full-color images in front of the eye. It does this through the use of a series of refracting glass planes, tilted at varying angles to direct the image onto the retina as if it originated at a distance from the viewer. The second component, the display “pod,” is essentially a pair of projectors embedded in the sides of the eyeglasses that receives the image content and projects it into the LOE.
After the simulation has begun and the mask sensor components have begun communicating with the GPU 9 as previously described, the diver D can float or swim through the water and interact with the various sea creatures that inhabit the virtual environment. He or she may, for example, dive through a shipwreck or choose to inspect some unusual-looking coral. He or she may pass through a school of virtual fish or choose to pet a manta ray, all without having left the lake, beach, or swimming pool where the inventive system has been set up.
If the diver D needs to exit the simulation, he or she can move the on/off switch 37 on the side of the mask M to the “off” position. The simulation terminates when the on/off switch 37 is turned to “off” and the game engine 25 enters a “done” state. After 2 minutes, the system shuts off completely to conserve battery power.
The invention is not limited to the above-described embodiments and methods and other embodiments and methods may fall within the scope of the invention, the claims of which follow.