This invention relates to a three-dimensional simulator system, and in particular, to a multi-plane hands-on computer simulator system capable of operator interaction.
Three dimensional (3D) capable electronics and computing hardware devices and real-time computer-generated 3D computer graphics have been a popular area of computer science for the past few decades, with innovations in visual, audio and tactile systems. Much of the research in this area has produced hardware and software products that are specifically designed to generate greater realism and more natural computer-human interfaces. These innovations have significantly enhanced and simplified the end-user's computing experience.
Ever since humans began to communicate through pictures, they faced a dilemma of how to accurately represent the three-dimensional world they lived in. Sculpture was used to successfully depict three-dimensional objects, but was not adequate to communicate spatial relationships between objects and within environments. To do this, early humans attempted to “flatten” what they saw around them onto two-dimensional, vertical planes (e.g. paintings, drawings, tapestries, etc.). Scenes where a person stood upright, surrounded by trees, were rendered relatively successfully on a vertical plane. But how could they represent a landscape, where the ground extended out horizontally from where the artist was standing, as far as the eye could see?
The answer is three dimensional illusions. A two dimensional picture must provide a number of cues about the third dimension for the brain to create the illusion of a three dimensional image. This effect of third dimension cues is realistically achievable because the brain is quite accustomed to it. The three dimensional real world is always and already converted into a two dimensional (e.g. height and width) projected image at the retina, a concave surface at the back of the eye. From this two dimensional image, the brain, through experience and perception, generates the depth information to form the three dimensional visual image from two types of depth cues: monocular (one eye perception) and binocular (two eye perception). In general, binocular depth cues are innate and biological while monocular depth cues are learned and environmental.
The major binocular depth cues are convergence and retinal disparity. The brain measures the amount of convergence of the eyes to provide a rough estimate of the distance since the angle between the line of sight of each eye is larger when an object is closer. The disparity of the retinal images due to the separation of the two eyes is used to create the perception of depth. The effect is called stereoscopy where each eye receives a slightly different view of a scene, and the brain fuses them together using these differences to determine the ratio of distances between nearby objects.
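For illustration only (not part of the original disclosure), the geometry behind these two binocular cues can be sketched with elementary trigonometry; the interocular distance and the object distances below are assumed example values.

```python
import math

def convergence_angle_deg(object_distance_m, interocular_m=0.065):
    """Angle between the two lines of sight when both eyes fixate an object.

    The angle grows as the object gets closer, which is the convergence cue.
    """
    return math.degrees(2 * math.atan((interocular_m / 2) / object_distance_m))

def retinal_disparity_deg(near_m, far_m, interocular_m=0.065):
    """Difference in convergence angle between a near and a far object.

    This angular difference approximates the binocular disparity the brain
    uses to judge relative depth.
    """
    return convergence_angle_deg(near_m, interocular_m) - convergence_angle_deg(far_m, interocular_m)

# Example: an object at 0.5 m versus one at 2.0 m.
print(convergence_angle_deg(0.5))        # ~7.4 degrees of convergence
print(retinal_disparity_deg(0.5, 2.0))   # ~5.6 degrees of relative disparity
```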
Binocular cues provide a very powerful perception of depth. However, there are also depth cues available to only one eye, called monocular depth cues, that create an impression of depth on a flat image. The major monocular cues are: overlapping, relative size, linear perspective, and light and shadow. When an object is viewed partially covered, this pattern of blocking is used as a cue to determine that the object is farther away. When two objects known to be the same size are viewed and one appears smaller than the other, this pattern of relative size is used as a cue to assume that the smaller object is farther away. The cue of relative size also provides the basis for the cue of linear perspective, where the farther away lines are from the observer, the closer together they appear, since parallel lines in a perspective image appear to converge towards a single point. Light falling on an object from a certain angle can provide a cue for its form and depth. The distribution of light and shadow on objects is a powerful monocular cue for depth, provided by the biologically correct assumption that light comes from above.
Perspective drawing, together with relative size, is most often used to achieve the illusion of three dimensional depth and spatial relationships on a flat (two dimensional) surface, such as paper or canvas. Through perspective, three dimensional objects are depicted on a two dimensional plane but "trick" the eye into perceiving them as being in three dimensional space. The first theoretical treatise on constructing perspective, De Pictura, was published in the early 1400's by the architect Leone Battista Alberti. Since the introduction of his book, the details behind "general" perspective have been very well documented. However, the fact that there are a number of other types of perspectives is not well known. Some examples are military, cavalier, isometric, and dimetric, as shown at the top of
Of special interest is the most common type of perspective, called central perspective, shown at the bottom left of
The vast majority of images, including central perspective images, are displayed, viewed and captured in a plane perpendicular to the line of vision. Viewing the images at an angle different from 90° results in image distortion, meaning a square would be seen as a rectangle when the viewing surface is not perpendicular to the line of vision.
Central perspective is employed extensively in 3D computer graphics for a myriad of applications, such as scientific and data visualization, computer-generated prototyping, special effects for movies, medical imaging, and architecture, to name just a few. One of the most common and well-known 3D computing applications is 3D gaming, which is used here as an example because the core concepts used in 3D gaming extend to all other 3D computing applications.
A person using a 3D application, such as a game, is in fact running software in the form of a real-time computer-generated 3D graphics engine. One of the engine's key components is the renderer. Its job is to take 3D objects that exist within computer-generated world coordinates x, y, z, and render (draw/display) them onto the computer monitor's viewing surface, which is a flat (2D) plane, with real world coordinates x, y.
As they move through time, the satellite and earth must stay properly synchronized. To accomplish this, the 3D graphics engine creates a fourth universal dimension for computer-generated time, t. For every tick of time t, the 3D graphics engine regenerates the satellite at its new location and orientation as it orbits the spinning earth. Therefore, a key job for a 3D graphics engine is to continuously synchronize and regenerate all 3D objects within all four computer-generated dimensions x, y, z, and t.
While running a 3D application the real person, i.e. the end-user, views only a small segment of the entire 3D world at any given time. This is done because it is computationally expensive for the computer's hardware to generate the enormous number of 3D objects in a typical 3D application, the majority of which the end-user is not currently focused on. Therefore, a critical job for the 3D graphics engine is to minimize the computer hardware's computational burden by drawing/rendering as little information as absolutely necessary during each tick of computer-generated time t.
The boxed-in area in
In
The camera model depicted in
Every component of a camera model is called an "element". In our simplified camera model, the element called the near clip plane is the 2D plane onto which the x, y, z coordinates of the 3D objects within the view volume will be rendered. Each projection line starts at the camera point and ends at an x, y, z coordinate point of a virtual 3D object within the view volume. The 3D graphics engine then determines where the projection line intersects the near clip plane, and the x and y point where this intersection occurs is rendered onto the near clip plane. Once the 3D graphics engine's renderer completes all necessary mathematical projections, the near clip plane is displayed on the 2D viewing surface of the computer monitor, as shown in
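The projection just described can be sketched in a few lines of code. The following is illustrative only, not the engine's actual implementation: it assumes the camera point sits at the origin looking along +z and that the near clip plane lies at an assumed distance `near`.

```python
def project_to_near_clip_plane(point, near=1.0):
    """Project a 3D point onto the near clip plane of a simple central
    perspective camera located at the origin and looking along +z.

    Each projection line runs from the camera point (0, 0, 0) to the object
    point (x, y, z); it crosses the plane z = near at (x*near/z, y*near/z).
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    scale = near / z
    return (x * scale, y * scale)

# A cube corner at (2, 1, 4) lands at (0.5, 0.25) on the near clip plane.
print(project_to_near_clip_plane((2.0, 1.0, 4.0)))
```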
The basis of prior art 3D computer graphics is the central perspective projection. 3D central perspective projection, though offering a realistic 3D illusion, has some limitations in allowing the user to have hands-on interaction with the 3D display.
There is a little known class of images that we call "horizontal perspective", where the image appears distorted when viewed head on, but displays a three dimensional illusion when viewed from the correct viewing position. In horizontal perspective, the angle between the viewing surface and the line of vision is preferably 45° but can be almost any angle, and the viewing surface is preferably horizontal (hence the name "horizontal perspective"), but it can be any surface, as long as the line of vision forms a non-perpendicular angle to it.
Horizontal perspective images offer a realistic three dimensional illusion, but are little known, primarily due to the narrow viewing location (the viewer's eyepoint has to coincide precisely with the image projection eyepoint) and the complexity involved in projecting the two dimensional image or the three dimensional model into the horizontal perspective image.
The generation of horizontal perspective images requires considerably more expertise than that of conventional perpendicular images. Conventional perpendicular images can be produced directly from the viewer or camera point. One need simply open one's eyes or point the camera in any direction to obtain the images. Further, with much experience in viewing three dimensional depth cues from perpendicular images, viewers can tolerate a significant amount of distortion generated by deviations from the camera point. In contrast, the creation of a horizontal perspective image does require much manipulation. A conventional camera, by projecting the image onto a plane perpendicular to the line of sight, would not produce a horizontal perspective image. Making a horizontal perspective drawing requires much effort and is very time consuming. Further, since humans have limited experience with horizontal perspective images, the viewer's eye must be positioned precisely where the projection eyepoint is to avoid image distortion. Horizontal perspective, with its difficulties, has therefore received little attention.
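As an illustrative sketch of what such a computation might look like (assumptions: the drawing surface is the plane z = 0 and the eyepoint coordinates are example values, not values taken from this disclosure), a horizontal perspective image point can be found by intersecting the line from the eyepoint through the object point with the horizontal surface.

```python
import numpy as np

def horizontal_perspective_project(point, eye):
    """Project a 3D point onto the horizontal plane z = 0 along the line from
    the eyepoint to the point.

    In this sketch the drawing surface is the ground plane (z = 0) and the
    eyepoint sits above it, e.g. at roughly a 45 degree angle to the surface.
    """
    p = np.asarray(point, dtype=float)
    e = np.asarray(eye, dtype=float)
    t = e[2] / (e[2] - p[2])          # parameter where the line crosses z = 0
    hit = e + t * (p - e)
    return hit[:2]                    # x, y location on the horizontal surface

# Eyepoint 1 m above and 1 m back from the surface origin (a 45 degree view).
eye = (0.0, -1.0, 1.0)
# A block corner floating 0.3 m above the surface maps to a point on the page.
print(horizontal_perspective_project((0.2, 0.2, 0.3), eye))
```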
The present invention recognizes that the personal computer is perfectly suitable for horizontal perspective display. It is personal, thus it is designed for the operation of one person, and the computer, with its powerful microprocessor, is well capable of rendering various horizontal perspective images to the viewer. Further, horizontal perspective offers open space display of 3D images, thus allowing the hands-on interaction of the end users.
Thus the present invention discloses a multi-plane hands-on simulator system comprising at least two display surfaces, one of which displays three dimensional horizontal perspective images. The other display surfaces can display two dimensional images, or preferably three dimensional central perspective images. Further, the display surfaces can have a curvilinear blending display section to merge the various images. The multi-plane hands-on simulator can comprise various camera eyepoints, one for the horizontal perspective images, one for the central perspective images, and optionally one for the curvilinear blending display surface. The multi-plane display surface can further adjust the various images to accommodate the position of the viewer. By changing the displayed images to keep the camera eyepoints of the horizontal perspective and central perspective images in the same position as the viewer's eyepoint, the viewer's eye is always positioned at the proper viewing position to perceive the three dimensional illusion, thus minimizing the viewer's discomfort and distortion. The display can accept manual input such as a computer mouse, trackball, joystick, tablet, etc. to re-position the horizontal perspective images. The display can also automatically re-position the images based on an input device automatically providing the viewer's viewpoint location. The multi-plane hands-on simulator system can project horizontal perspective images into the open space, together with a peripheral device that allows the end user to manipulate the images with hands or hand-held tools.
The new and unique inventions described in this document build upon prior art by taking the current state of real-time computer-generated 3D computer graphics, 3D sound, and tactile computer-human interfaces to a whole new level of reality and simplicity. More specifically, these new inventions enable real-time computer-generated 3D simulations to coexist in physical space and time with the end-user and with other real-world physical objects. This capability dramatically improves upon the end-user's visual, auditory and tactile computing experience by providing direct physical interactions with 3D computer-generated objects and sounds. This unique ability is useful in nearly every conceivable industry including, but not limited to, electronics, computers, biometrics, medical, education, games, movies, science, legal, financial, communication, law enforcement, national security, military, print media, television, advertising, trade show, data visualization, computer-generated reality, animation, CAD/CAE/CAM, productivity software, operating systems, and more.
The present invention discloses a multi-plane horizontal perspective hands-on simulator comprising at least two display surfaces, one of which is capable of projecting a three dimensional illusion based on horizontal perspective projection.
In general, the present invention horizontal perspective hands-on simulator can be used to display and interact with three dimensional images and has obvious utility to many industrial applications such as manufacturing design reviews, ergonomic simulation, safety and training, video games, cinematography, scientific 3D viewing, and medical and other data displays.
Horizontal perspective is a little-known perspective, of which we found only two books that describe its mechanics: Stereoscopic Drawing (©1990) and How to Make Anaglyphs (©1979, out of print). Although these books describe this obscure perspective, they do not agree on its name. The first book refers to it as a "free-standing anaglyph," and the second, a "phantogram." Another publication called it "projective anaglyph" (U.S. Pat. No. 5,795,154 by G. M. Woods, Aug. 18, 1998). Since there is no agreed-upon name, we have taken the liberty of calling it "horizontal perspective." Normally, as in central perspective, the plane of vision, at right angles to the line of sight, is also the projected plane of the picture, and depth cues are used to give the illusion of depth to this flat image. In horizontal perspective, the plane of vision remains the same, but the projected image is not on this plane. It is on a plane angled to the plane of vision. Typically, the image would be on the ground level surface. This means the image will be physically in the third dimension relative to the plane of vision. Thus horizontal perspective can be called horizontal projection.
In horizontal perspective, the objective is to separate the image from the paper and fuse the image into the three dimensional object that projects the horizontal perspective image. Thus the horizontal perspective image must be distorted so that the visual image fuses to form a free standing three dimensional figure. It is also essential that the image be viewed from the correct eyepoint, otherwise the three dimensional illusion is lost. In contrast to central perspective images, which have height and width and project an illusion of depth, so that objects tend to be abruptly projected and the images appear to be in layers, horizontal perspective images have actual depth and width, and the illusion gives them height; therefore there is usually a graduated shifting so the images appear to be continuous.
In other words, in Image A, the real-life three dimensional object (three blocks stacked slightly above each other) was drawn by the artist closing one eye and viewing along a line of sight perpendicular to the vertical drawing plane. The resulting image, when viewed vertically, straight on, and through one eye, looks the same as the original image.

In Image B, the real-life three dimensional object was drawn by the artist closing one eye and viewing along a line of sight 45° to the horizontal drawing plane. The resulting image, when viewed horizontally, at 45°, and through one eye, looks the same as the original image.

One major difference between the central perspective shown in Image A and the horizontal perspective shown in Image B is the location of the display plane with respect to the projected three dimensional image. In the horizontal perspective of Image B, the display plane can be adjusted up and down, and therefore the projected image can be displayed in the open air above the display plane, i.e. a physical hand can touch (or more likely pass through) the illusion, or it can be displayed under the display plane, i.e. one cannot touch the illusion because the display plane physically blocks the hand. This is the nature of horizontal perspective, and as long as the camera eyepoint and the viewer eyepoint are at the same place, the illusion is present. In contrast, in the central perspective of Image A, the three dimensional illusion is likely to be only inside the display plane, meaning one cannot touch it. To bring the three dimensional illusion outside of the display plane and allow the viewer to touch it, central perspective would need an elaborate display scheme such as surround image projection and a large volume.
Now look at
Again, the reason your one open eye needs to be at this precise location is because both central and horizontal perspective not only define the angle of the line of sight from the eye point; they also define the distance from the eye point to the drawing. This means that
Notice that in
The generation of horizontal perspective images requires considerably more expertise than that of central perspective images. Even though both methods seek to provide the viewer with the three dimensional illusion that results from a two dimensional image, central perspective images produce the three dimensional landscape directly from the viewer or camera point. In contrast, the horizontal perspective image appears distorted when viewed head on, but this distortion has to be precisely rendered so that, when viewed from a precise location, the horizontal perspective produces a three dimensional illusion.
The horizontal perspective display system promotes horizontal perspective projection viewing by providing the viewer with the means to adjust the displayed images to maximize the illusion viewing experience. By employing the computation power of the microprocessor and a real time display, the horizontal perspective display is shown in
The horizontal perspective display system, shown in
The horizontal perspective display system promotes horizontal perspective projection viewing by providing the viewer with the means to adjust the displayed images to maximize the illusion viewing experience. By employing the computation power of the microprocessor and a real time display, the horizontal perspective display comprises a real time electronic display capable of re-drawing the projected image, together with a viewer input device to adjust the horizontal perspective image. By re-displaying the horizontal perspective image so that its projection eyepoint coincides with the eyepoint of the viewer, the horizontal perspective display of the present invention can ensure minimum distortion in rendering the three dimensional illusion from the horizontal perspective method. The input device can be manually operated, where the viewer manually inputs his or her eyepoint location or changes the projection image eyepoint to obtain the optimum three dimensional illusion. The input device can also be automatically operated, where the display automatically tracks the viewer's eyepoint and adjusts the projection image accordingly. The horizontal perspective display system thus removes the constraint of the viewer keeping his or her head in a relatively fixed position, a constraint that creates much difficulty in the acceptance of precise-eyepoint displays such as horizontal perspective or hologram displays.
The horizontal perspective display system can further comprise a computation device, in addition to the real time electronic display device, and a projection image input device providing input to the computation device for calculating the projection images for display, so as to provide a realistic, minimum-distortion three dimensional illusion to the viewer by coinciding the viewer's eyepoint with the projection image eyepoint. The system can further comprise an image enlargement/reduction input device, an image rotation input device, or an image movement device to allow the viewer to adjust the view of the projection images.
The input device can be operated manually or automatically. The input device can detect the position and orientation of the viewer's eyepoint, so as to compute and project the image onto the display according to the detection result. Alternatively, the input device can be made to detect the position and orientation of the viewer's head along with the orientation of the eyeballs. The input device can comprise an infrared detection system to detect the position of the viewer's head, allowing the viewer freedom of head movement. Other embodiments of the input device use triangulation to detect the viewer's eyepoint location, such as a CCD camera providing position data suitable for the head tracking objectives of the invention. The input device can also be manually operated by the viewer, such as a keyboard, mouse, trackball, joystick, or the like, to indicate the correct display of the horizontal perspective display images.
The head or eye-tracking system can comprise a base unit and a head-mounted sensor on the head of the viewer. The head-mounted sensor produces signals indicating the position and orientation of the viewer in response to the viewer's head movement and eye orientation. These signals are received by the base unit and used to compute the proper three dimensional projection images. The head or eye tracking system can also comprise infrared cameras to capture images of the viewer's eyes. Using the captured images and other image processing techniques, the position and orientation of the viewer's eyes can be determined and then provided to the base unit. The head and eye tracking can be done in real time, at sufficiently small time intervals to provide continuous tracking of the viewer's head and eyes.
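A minimal sketch of how such tracking could drive the display is shown below; `tracker.read_eyepoint()` and the `renderer` methods are hypothetical interfaces standing in for whatever tracking hardware and display software are actually used.

```python
def tracking_redraw_loop(tracker, renderer, scene):
    """Illustrative update loop: read the viewer's eyepoint from a head/eye
    tracker and re-render the horizontal perspective image so that the
    projection eyepoint coincides with the viewer's eyepoint.
    """
    while True:
        eyepoint = tracker.read_eyepoint()   # (x, y, z) in display coordinates
        image = renderer.draw(scene, camera_eyepoint=eyepoint)
        renderer.present(image)
```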
The invention described in this document employs the open space characteristics of horizontal perspective, together with a number of new computer hardware and software elements and processes, to create a "Hands-On Simulator". In the simplest terms, the Hands-On Simulator generates a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space.
For the end user to experience these unique hands-on simulations, the computer hardware viewing surface is situated horizontally, such that the end-user's line of sight is at a 45° angle to the surface. Typically, this means that the end user is standing or seated vertically, and the viewing surface is horizontal to the ground. Note that although the end user can experience hands-on simulations at viewing angles other than 45° (e.g. 55°, 30°, etc.), 45° is the optimal angle for the brain to recognize the maximum amount of spatial information in an open space image. Therefore, for simplicity's sake, we use "45°" throughout this document to mean "an approximate 45 degree angle". Further, while a horizontal viewing surface is preferred since it simulates the viewer's experience with the horizontal ground, any viewing surface could offer a similar three dimensional illusion experience. For example, the horizontal perspective illusion can appear to be hanging from a ceiling by projecting the horizontal perspective images onto a ceiling surface, or appear to be floating from a wall by projecting the horizontal perspective images onto a vertical wall surface.
The hands-on simulations are generated within a 3D graphics engine's view volume, creating two new elements, the "Hands-On Volume" and the "Inner-Access Volume." The Hands-On Volume is situated on and above the physical viewing surface. Thus the end user can directly, physically manipulate simulations because they co-inhabit the end-user's own physical space. This 1:1 correspondence allows accurate and tangible physical interaction by touching and manipulating simulations with hands or hand-held tools. The Inner-Access Volume is located underneath the viewing surface, and simulations within this volume appear inside the physical viewing device. Thus simulations generated within the Inner-Access Volume do not share the same physical space with the end user, and the images therefore cannot be directly, physically manipulated by hands or hand-held tools; rather, they are manipulated indirectly via a computer mouse or a joystick.
This disclosed Hands-On Simulator can lead to the end user's ability to directly, physically manipulate simulations because they co-inhabit the end-user's own physical space. To accomplish this requires a new computing concept where computer-generated world elements have a 1:1 correspondence with their physical real-world equivalents; that is, a physical element and an equivalent computer-generated element occupy the same space and time. This is achieved by identifying and establishing a common “Reference Plane”, to which the new elements are synchronized.
Synchronization with the Reference Plane forms the basis for creating the 1:1 correspondence between the "virtual" world of the simulations and the "real" physical world. Among other things, the 1:1 correspondence ensures that images are properly displayed: what is on and above the viewing surface appears on and above the surface, in the Hands-On Volume; what is underneath the viewing surface appears below, in the Inner-Access Volume. Only if this 1:1 correspondence and synchronization to the Reference Plane are present can the end user physically and directly access and interact with simulations via their hands or hand-held tools.
The present invention simulator further includes a real-time computer-generated 3D graphics engine as generally described above, but using horizontal perspective projection to display the 3D images. One major difference between the present invention and prior art graphics engines is the projection display. Existing 3D graphics engines use central perspective, and therefore a vertically oriented rendering plane, to render their view volumes, while the present invention simulator requires a "horizontally" oriented rendering plane to generate horizontal perspective open space images. The horizontal perspective images offer far superior open space access compared to central perspective images.
One of the invented elements in the present invention hands-on simulator is the 1:1 correspondence of the computer-generated world elements and their physical real-world equivalents. As noted in the introduction above, this 1:1 correspondence is a new computing concept that is essential for the end user to physically and directly access and interact with hands-on simulations. This new concept requires the creation of a common physical Reference Plane, as well as the formula for deriving its unique x, y, z spatial coordinates. Determining the location and size of the Reference Plane and its specific coordinates requires understanding the following.
A computer monitor or viewing device is made of many physical layers, individually and together having thickness or depth. To illustrate this,
With a viewing device's z axis in mind, let's display an image on that device using horizontal perspective. In
It is now clear that a viewing device's View Surface is the correct physical location to present open space images. Therefore, the View Surface, i.e. the top of the viewing device's glass surface, is the common physical Reference Plane. But only a subset of the View Surface can be the Reference Plane because the entire View Surface is larger than the total image area.
Many viewing devices enable the end user to adjust the size of the image area by adjusting its x and y values. Of course, these same viewing devices do not provide any knowledge of, or access to, z axis information, because it is a completely new concept and to date only required for the display of open space images. But all three coordinates, x, y, and z, are essential to determine the location and size of the common physical Reference Plane. The formula for this is as follows: the Image Layer is given a z coordinate of 0; the View Surface lies at some distance along the z axis from the Image Layer, and the Reference Plane's z coordinate is equal to that distance, i.e. the distance from the Image Layer to the View Surface. The x and y coordinates, or size, of the Reference Plane can be determined by displaying a complete image on the viewing device and measuring the length of its x and y axes.
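The formula above amounts to a trivial calculation; the following sketch restates it with assumed example values and hypothetical field names (not taken from this disclosure).

```python
def reference_plane_coordinates(view_surface_offset_mm, image_width_mm, image_height_mm):
    """Restate the Reference Plane formula from the text: the Image Layer sits
    at z = 0, the Reference Plane's z equals the Image Layer-to-View Surface
    distance, and its x/y extent is the measured size of a full displayed image.
    """
    return {
        "z": view_surface_offset_mm,   # distance from Image Layer to View Surface
        "x_extent": image_width_mm,    # measured width of a complete image
        "y_extent": image_height_mm,   # measured height of a complete image
    }

# Example: a 3 mm glass thickness and a 400 mm x 300 mm visible image area.
print(reference_plane_coordinates(3.0, 400.0, 300.0))
```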
The concept of the common physical Reference Plane is a new inventive concept. Therefore, display manufacturers may not supply or even know its coordinates. Thus a "Reference Plane Calibration" procedure might need to be performed to establish the Reference Plane coordinates. This calibration procedure provides the end user with a number of orchestrated images that s/he interacts with. The end-user's response to these images provides feedback to the Simulation Engine such that it can identify the correct size and location of the Reference Plane. When the end user is satisfied and completes the procedure, the coordinates are saved in the end user's personal profile.
With some viewing devices the distance between the View Surface and the Image Layer is quite short. But no matter how small or large the distance, it is critical that all Reference Plane x, y, and z coordinates are determined as closely as technically possible.
After the mapping of the “computer-generated” horizontal perspective projection display plane (Horizontal Plane) to the “physical” Reference Plane x, y, z coordinates, the two elements coexist and are coincident in time and space; that is, the computer-generated Horizontal Plane now shares the real-world x, y, z coordinates of the physical Reference Plane, and they exist at the same time.
You can envision this unique mapping of a computer-generated element and a physical element occupying the same space and time by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. By placing your finger on the surface of the monitor, you would touch the Reference Plane (a portion of the physical View Surface) and the Horizontal Plane (computer-generated) at exactly the same time. In other words, when touching the physical surface of the monitor, you are also "touching" its computer-generated equivalent, the Horizontal Plane, which has been created and mapped by the Simulation Engine to the same location and time.
One element of the present invention horizontal perspective projection hands-on simulator is a computer-generated “Angled Camera” point, shown in
Mathematically, the computer-generated x, y, z coordinates of the Angled Camera point form the vertex of an infinite “pyramid”, whose sides pass through the x, y, z coordinates of the Reference/Horizontal Plane.
The present invention simulator further defines a “Hands-On Volume”, shown in
Where the Hands-On Volume exists on and above the Reference/Horizontal Plane, the Inner-Access Volume exists below or inside the physical viewing device. For this reason, an end user cannot directly interact with 3D objects located within the Inner-Access Volume via their hands or hand-held tools. But they can interact in the traditional sense with a computer mouse, joystick, or other similar computer peripheral. An "Inner Plane" is further defined, located immediately below and parallel to the Reference/Horizontal Plane within the pyramid, as shown in
The end-user's preferred viewing distance to the bottom of the viewing pyramid determines the location of these planes. One way the end user can adjust the location of the Bottom Planes is through a “Bottom Plane Adjustment” procedure. This procedure provides the end user with orchestrated simulations within the Inner-Access Volume and enables them to interact and adjust the location of the Bottom Plane relative to the physical Reference/Horizontal Plane. When the end user completes the procedure the Bottom Plane's coordinates are saved in the end-user's personal profiles.
For the end user to view open space images on their physical viewing device, it must be positioned properly, which usually means the physical Reference Plane is placed horizontally to the ground. Whatever the viewing device's position relative to the ground, the Reference/Horizontal Plane must be at approximately a 45° angle to the end-user's line-of-sight for optimum viewing. One way the end user might perform this step is to position their CRT computer monitor on the floor in a stand, so that the Reference/Horizontal Plane is horizontal to the floor. This example uses a CRT-type computer monitor, but it could be any type of viewing device, placed at approximately a 45° angle to the end-user's line-of-sight.
The real-world coordinates of the “End-User's Eye” and the computer-generated Angled Camera point must have a 1:1 correspondence in order for the end user to properly view open space images that appear on and above the Reference/Horizontal Plane (
The present invention horizontal perspective hands-on simulator employs horizontal perspective projection to mathematically project the 3D objects into the Hands-On and Inner-Access Volumes. The existence of a physical Reference Plane and the knowledge of its coordinates are essential for correctly adjusting the Horizontal Plane's coordinates prior to projection. This adjustment to the Horizontal Plane enables open space images to appear to the end user on the View Surface rather than the Image Layer, by taking into account the offset between the Image Layer and the View Surface, which are located at different values along the viewing device's z axis.
As a projection line in either the Hands-On or Inner-Access Volume intersects both an object point and the offset Horizontal Plane, the three dimensional x, y, z point of the object becomes a two-dimensional x, y point on the Horizontal Plane (see
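A minimal sketch of this projection step, under assumed coordinate conventions (Image Layer at z = 0, z increasing away from the viewing device, example values throughout), is shown below; it is illustrative only, not the simulator's actual code.

```python
import numpy as np

def project_onto_offset_horizontal_plane(point, camera, view_surface_z):
    """Project an object point onto the Horizontal Plane after it has been
    offset to the View Surface's z coordinate (rather than the Image Layer at
    z = 0), so open space images appear to sit on the glass surface.
    """
    p = np.asarray(point, dtype=float)
    c = np.asarray(camera, dtype=float)
    # Parameter where the camera-to-point line crosses z = view_surface_z.
    t = (view_surface_z - c[2]) / (p[2] - c[2])
    hit = c + t * (p - c)
    return hit[0], hit[1]     # 2D x, y point on the offset Horizontal Plane

# Angled Camera point 400 mm above the display, View Surface 3 mm above the
# Image Layer, and an object point hovering 50 mm above the surface.
print(project_onto_offset_horizontal_plane((20.0, 30.0, 53.0), (0.0, -200.0, 400.0), 3.0))
```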
The Hands-On Simulator further involves adding completely new elements and processes to existing stereoscopic 3D computer hardware. The result is a Hands-On Simulator with multiple views or "Multi-View" capability. Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation.
To provide motion, or time-related simulation, the simulator further includes a new computer-generated "time dimension" element, called "SI-Time". SI is an acronym for "Simulation Image", which is one complete image displayed on the viewing device. SI-Time is the amount of time the Simulation Engine uses to completely generate and display one Simulation Image. This is similar to a movie projector, which displays an image 24 times a second; 1/24 of a second is therefore required for one image to be displayed by the projector. But SI-Time is variable, meaning that depending on the complexity of the view volumes it could take 1/120th or ½ a second for the Simulation Engine to complete just one SI.
The simulator also includes a new computer-generated "time dimension" element, called "EV-Time", which is the amount of time used to generate one "Eye-View". For example, let's say that the Simulation Engine needs to create one left-eye view and one right-eye view for purposes of providing the end user with a stereoscopic 3D experience. If it takes the Simulation Engine ½ a second to generate the left-eye view then the first EV-Time period is ½ a second. If it takes another ½ second to generate the right-eye view then the second EV-Time period is also ½ second. Since the Simulation Engine was generating a separate left and right eye view of the same Simulation Image, the total SI-Time is one second. That is, the first EV-Time was ½ second and the second EV-Time was also ½ second, making a total SI-Time of one second.
The illustration in the upper left of
Once the first eye (right-eye) view is complete, the Simulation Engine starts the process of rendering the computer-generated person's second eye (left-eye) view. The illustration in the lower left of
The distances between people's eyes vary but in the above example we are using the average of 2 inches. It is also possible for the end user to supply the Simulation Engine with their personal eye separation value. This would make the x value for the left and right eyes highly accurate for a given end user and thereby improve the quality of their stereoscopic 3D view.
Once the Simulation Engine has incremented the Angled Camera point's x coordinate by two inches, or by the personal eye separation value supplied by the end user, it completes the rendering and display of the second (left-eye) view. This is done by the Simulation Engine within the EV-Time-2 period, using the Angled Camera point coordinates x±2″, y, z and the exact same Simulation Image rendered. This completes one SI-Time period.
Depending on the stereoscopic 3D viewing device used, the Simulation Engine continues to display the left- and right-eye images, as described above, until it needs to move to the next SI-Time period. The job of this step is to determine if it is time to move to a new SI-Time period, and if it is, then increment SI-Time. An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position. This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new SI-Time period, or SI-Time-2. This new SI-Time-2 period will have its own EV-Time-1 and EV-Time-2, and therefore the simulation steps described above will be repeated during SI-Time-2. This process of generating multiple views via the nonstop incrementing of SI-Time and its EV-Times continues as long as the Simulation Engine is generating real-time simulations in stereoscopic 3D.
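The two-eye sequence described above might be sketched as follows; the `engine` interface and its method names are hypothetical, and the default eye separation of 2 inches simply mirrors the example in the text.

```python
def render_simulation_image(engine, camera, eye_separation=2.0):
    """Illustrative sketch of one SI-Time period: render the right-eye view
    during EV-Time-1, shift the Angled Camera point's x coordinate by the eye
    separation, then render the left-eye view during EV-Time-2.
    """
    x, y, z = camera
    right_view = engine.render(camera=(x, y, z))                    # EV-Time-1
    left_view = engine.render(camera=(x + eye_separation, y, z))    # EV-Time-2
    engine.display(right_view, left_view)                           # one complete SI

# The SI-Time for this frame is simply the sum of the two EV-Times, however
# long the engine actually took to produce each eye view.
```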
The above steps describe the new and unique elements and processes that make up the Hands-On Simulator with Multi-View capability. Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation. Multi-View capability is a significant visual and interactive improvement over the single eye view.
The present invention also allows the viewer to move around the three dimensional display and yet suffer no great distortion, since the display can track the viewer's eyepoint and re-display the images correspondingly. This is in contrast to the conventional prior art three dimensional image display, where the image is projected and computed as seen from a single viewing point, so that any movement by the viewer away from the intended viewing point in space causes gross distortion.
The display system can further comprise a computer capable of re-calculating the projected image given the movement of the eyepoint location. The horizontal perspective images can be very complex, tedious to create, or created in ways that are not natural for artists or cameras, and therefore require the use of a computer system for these tasks. To display a three-dimensional image of an object with complex surfaces or to create animation sequences would demand a lot of computational power and time, and therefore these are tasks well suited to the computer. Three dimensional capable electronics, computing hardware devices, and real-time computer-generated three dimensional computer graphics have advanced significantly recently, with marked innovations in visual, audio and tactile systems, and have produced excellent hardware and software products for generating realism and more natural computer-human interfaces.
The horizontal perspective display system of the present invention is not only in demand for entertainment media such as televisions, movies, and video games, but is also needed in various fields such as education (displaying three-dimensional structures) and technological training (displaying three-dimensional equipment). There is an increasing demand for three-dimensional image displays, which can be viewed from various angles to enable observation of real objects using object-like images. The horizontal perspective display system is also capable of substituting a computer-generated reality for the viewer's observation. The systems may include audio, visual, motion and inputs from the user in order to create a complete experience of three dimensional illusions.
The input for the horizontal perspective system can be a two dimensional image, several images combined to form one single three dimensional image, or a three dimensional model. The three dimensional image or model conveys much more information than a two dimensional image, and by changing the viewing angle, the viewer will get the impression of seeing the same object from different perspectives continuously.
The horizontal perspective display can further provide multiple views or "Multi-View" capability. Multi-View provides the viewer with multiple and/or separate left- and right-eye views of the same simulation. Multi-View capability is a significant visual and interactive improvement over the single eye view. In Multi-View mode, both the left eye and right eye images are fused by the viewer's brain into a single, three-dimensional illusion. The discrepancy between accommodation and convergence of the eyes, inherent in stereoscopic images and leading to eye fatigue when the discrepancy is large, can be reduced with the horizontal perspective display, especially for motion images, since the position of the viewer's gaze point changes when the display scene changes.
In Multi-View mode, the objective is to simulate the actions of the two eyes to create the perception of depth, namely that the left eye and the right eye see slightly different images. Thus Multi-View devices that can be used in the present invention include methods with glasses, such as the anaglyph method, special polarized glasses, or shutter glasses, and methods without glasses, such as a parallax stereogram, a lenticular method, and mirror methods (concave and convex lenses).
In the anaglyph method, a display image for the right eye and a display image for the left eye are respectively superimpose-displayed in two colors, e.g., red and blue, and observation images for the right and left eyes are separated using color filters, thus allowing a viewer to recognize a stereoscopic image. The images are displayed using the horizontal perspective technique with the viewer looking down at an angle. As with the one-eye horizontal perspective method, the eyepoint of the projected images has to coincide with the eyepoint of the viewer, and therefore the viewer input device is essential in allowing the viewer to observe the three dimensional horizontal perspective illusion. Since the early days of the anaglyph method, there have been many improvements, such as in the spectra of the red/blue glasses and displays, to generate much more realism and comfort for the viewers.
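As a generic illustration of the anaglyph idea (not the specific encoding of any particular system; the channel assignment below is an assumed convention), a red/cyan anaglyph can be composed from separately rendered left- and right-eye images.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Compose a simple red/cyan anaglyph: the left-eye image supplies the red
    channel and the right-eye image supplies the green and blue channels, so
    colored glasses separate the two views again.
    """
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red channel from the left-eye view
    anaglyph[..., 1] = right_rgb[..., 1]   # green channel from the right-eye view
    anaglyph[..., 2] = right_rgb[..., 2]   # blue channel from the right-eye view
    return anaglyph

# left_rgb and right_rgb would be H x W x 3 arrays rendered from the two
# horizontal perspective eyepoints (left eye and right eye).
```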
In the polarized glasses method, the left eye image and the right eye image are separated by the use of mutually extinguishing polarizing filters, such as orthogonal linear polarizers, circular polarizers, or elliptical polarizers. The images are normally projected onto screens with polarizing filters and the viewer is then provided with corresponding polarized glasses. The left and right eye images appear on the screen at the same time, but only the left eye polarized light is transmitted through the left eye lens of the eyeglasses and only the right eye polarized light is transmitted through the right eye lens.
Another way for stereoscopic display is the image sequential system. In such a system, the images are displayed sequentially, alternating between left eye and right eye images rather than superimposing them upon one another, and the viewer's lenses are synchronized with the screen display to allow the left eye to see only when the left image is displayed, and the right eye to see only when the right image is displayed. The shuttering of the glasses can be achieved by mechanical shuttering or with liquid crystal electronic shuttering. In the shutter glasses method, display images for the right and left eyes are alternately displayed on a CRT in a time sharing manner, and observation images for the right and left eyes are separated using time sharing shutter glasses which are opened/closed in a time sharing manner in synchronism with the display images, thus allowing an observer to recognize a stereoscopic image.
Another way to display stereoscopic images is by optical methods. In these methods, display images for the right and left eyes, which are displayed separately, are superimpose-displayed as observation images in front of an observer using optical means such as prisms, mirrors, lenses, and the like, thus allowing the observer to recognize a stereoscopic image. Large convex or concave lenses can also be used, where two image projectors, projecting the left eye and right eye images, provide focus to the viewer's left and right eyes respectively. A variation of the optical method is the lenticular method, where the images are formed on cylindrical lens elements or a two dimensional array of lens elements.
The illustration in the upper left of
Once the horizontal perspective display has incremented the Angled Camera point's x coordinate by two inches, or by the personal eye separation value supplied by the viewer, the rendering continues by displaying the second (left-eye) view.
Depending on the stereoscopic 3D viewing device used, the horizontal perspective display continues to display the left- and right-eye images, as described above, until it needs to move to the next display time period. An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position. This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new display time period. This process of generating multiple views via the nonstop incrementing of display time continues as long as the horizontal perspective display is generating real-time simulations in stereoscopic 3D.
By rapidly displaying the horizontal perspective images, a three dimensional illusion of motion can be realized. Typically, 30 to 60 images per second would be adequate for the eye to perceive motion. For stereoscopy, the same display rate is needed for superimposed images, and twice that amount would be needed for the time sequential method.
The display rate is the number of images per second that the display completely generates and displays. This is similar to a movie projector, which displays an image 24 times a second; 1/24 of a second is therefore required for one image to be displayed by the projector. But the display time could be variable, meaning that depending on the complexity of the view volumes it could take 1/12 or ½ a second for the computer to complete just one display image. Since the display generates a separate left and right eye view of the same image, the total display time is twice the display time for one eye image.
The present invention hands-on simulator further includes technologies employed in computer “peripherals”.
The new Peripheral Open-Access Volume, which as an example in
Some Peripherals provide a mechanism that enables the Hands-On Simulation Tool to perform this calibration without any end-user involvement. But if calibrating the Peripheral requires external intervention, then the end-user will accomplish this through an "Open-Access Peripheral Calibration" procedure. This procedure provides the end-user with a series of Simulations within the Hands-On Volume and a user-friendly interface that enables them to adjust the location of the Peripheral's volume until it is in perfect synchronization with the View Surface. When the calibration procedure is complete, the Hands-On Simulation Tool saves the information in the end-user's personal profile.
Once the Peripheral's volume is precisely calibrated to the View surface, the next step in the process can be taken. The Hands-On Simulation Tool will continuously track and map the Peripheral's volume to the Open-Access Volumes. The Hands-On Simulation Tool modifies each Hands-On Image it generates based on the data in the Peripheral's volume. The end result of this process is the end-user's ability to use any given Peripheral to interact with Simulations within the Hands-On Volume generated in real-time by the Hands-On Simulation Tool.
With the peripherals linked to the simulator, the user can interact with the display model. The Simulation Engine can get the inputs from the user through the peripherals and carry out the desired action. With the peripherals properly matched with the physical space and the display space, the simulator can provide proper interaction and display. The invention Hands-On Simulator can then generate a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space. The peripheral tracking can be done through camera triangulation or through infrared tracking devices.
The simulator can further include 3D audio devices for “SIMULATION RECOGNITION & 3D AUDIO”. This results in a new invention in the form of a Hands-On Simulation Tool with its Camera Model, Horizontal Multi-View Device, Peripheral Devices, Frequency Receiving/Sending Devices, and Handheld Devices as described below.
Object Recognition is a technology that uses cameras and/or other sensors to locate simulations by a method called triangulation. Triangulation is a process employing trigonometry, sensors, and frequencies to "receive" data from simulations in order to determine their precise location in space. It is for this reason that triangulation is a mainstay of the cartography and surveying industries, where the sensors and frequencies used include, but are not limited to, cameras, lasers, radar, and microwave. 3D Audio also uses triangulation, but in the opposite way: 3D Audio "sends" or projects data in the form of sound to a specific location. But whether sending or receiving data, the location of the simulation in three-dimensional space is determined by triangulation with frequency receiving/sending devices. By changing the amplitudes and phase angles of the sound waves reaching the user's left and right ears, the device can effectively emulate the position of the sound source. The sounds reaching the ears need to be isolated to avoid interference. The isolation can be accomplished by the use of earphones or the like.
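A minimal 2D sketch of the triangulation idea, with assumed sensor positions and bearings, is shown below; a real system would extend this to three dimensions and to whatever sensors and frequencies are actually used.

```python
import math

def triangulate_2d(sensor_a, bearing_a_deg, sensor_b, bearing_b_deg):
    """Locate a point from two sensors and the bearing (angle) at which each
    sensor 'receives' it -- the basic triangulation idea, reduced to 2D.
    """
    ax, ay = sensor_a
    bx, by = sensor_b
    ta = math.tan(math.radians(bearing_a_deg))   # slope of the ray from sensor A
    tb = math.tan(math.radians(bearing_b_deg))   # slope of the ray from sensor B
    # Intersect the rays y - ay = ta*(x - ax) and y - by = tb*(x - bx).
    x = (by - ay + ta * ax - tb * bx) / (ta - tb)
    y = ay + ta * (x - ax)
    return x, y

# Two camera/speaker devices 1 m apart, each sighting the same object.
print(triangulate_2d((0.0, 0.0), 60.0, (1.0, 0.0), 120.0))   # -> (0.5, ~0.866)
```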
Create a new frequency receiving/sending device by combining a video camera with an audio speaker, as previously shown in
Take these new camera/speaker devices and attach or place them near a viewing device, such as a computer monitor, as previously shown in
Triangulation works by separating and positioning each camera/speaker device such that their individual frequency receiving/sending volumes overlap and cover the exact same area of space. If you have three widely spaced frequency receiving/sending volumes covering the exact same area of space, then any simulation within the space can be accurately located. The next step creates a new element in the Open-Access Camera Model for this real-world space and in
Now that this real frequency receiving/sending volume exists it must be calibrated to the Common Reference, which of course is the real View Surface. The next step is the automatic calibration of the real frequency receiving/sending volume to the real View Surface. This is an automated procedure that is continuously performed by the Hands-On Simulation Tool in order to keep the camera/speaker devices correctly calibrated even when they are accidentally bumped or moved by the end-user, which is likely to occur.
The simulator then performs simulation recognition by continuously locating and tracking the end-user's "left and right eye" and their "line-of-sight", continuously mapping the real-world left and right eye coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the computer-generated camera coordinates to match the real-world eye coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right eye, allowing the end-user to freely move their head and look around the Hands-On Image without distortion.

The simulator then performs simulation recognition by continuously locating and tracking the end-user's "left and right ear" and their "line-of-hearing", continuously mapping the real-world left- and right-ear coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the 3D Audio coordinates to match the real-world ear coordinates that are being located, tracked, and mapped. This enables the real-time generation of Open-Access sounds based on the exact location of the end-user's left and right ears, allowing the end-user to freely move their head and still hear Open-Access sounds emanating from their correct location.

The simulator then performs simulation recognition by continuously locating and tracking the end-user's "left and right hand" and their "digits," i.e. fingers and thumbs, continuously mapping the real-world left and right hand coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the Hands-On Image coordinates to match the real-world hand coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right hands, allowing the end-user to freely interact with Simulations within the Hands-On Volume.

The simulator then performs simulation recognition by continuously locating and tracking "handheld tools", continuously mapping these real-world handheld tool coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the Hands-On Image coordinates to match the real-world handheld tool coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the handheld tools, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
A “computer-generated attachment” is mapped in the form of an Open-Access computer-generated simulation onto the tip of a handheld tool, which in
The present invention further discloses a Multi-Plane display comprising a horizontal perspective display together with a non-horizontal central perspective display.
The Multi-Plane display can be made with one or more physical viewing surfaces. For example, the vertical leg of the "L" can be one physical viewing surface, such as a flat panel display, and the horizontal leg of the "L" can be a separate flat panel display. The edge between the two display segments can be a non-display segment, and therefore the two viewing surfaces are not continuous. Each leg of a Multi-Plane display is called a viewing plane, and as you can see in the upper left of
To generate both the horizontal perspective and central perspective images requires the creation of two camera eyepoints (which can be the same or different) as shown in
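A minimal sketch of how the two projections might be dispatched per viewing plane is shown below; the `engine` methods are hypothetical interfaces standing in for whatever rendering software is actually used, not an actual API.

```python
def render_multi_plane(scene, viewer_eyepoint, engine):
    """Illustrative Multi-Plane render pass: the horizontal leg of the 'L' is
    drawn with horizontal perspective and the vertical leg with central
    perspective, both from camera eyepoints tied to the viewer's eyepoint.
    """
    horizontal_image = engine.render_horizontal_perspective(
        scene, camera_eyepoint=viewer_eyepoint)   # open space, hands-on leg
    central_image = engine.render_central_perspective(
        scene, camera_eyepoint=viewer_eyepoint)   # conventional, vertical leg
    return horizontal_image, central_image
```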
The multi-plane display system can further include a curvilinear connection display section to blend the horizontal perspective and the central perspective images together at the location of the seam in the “L,” as shown at the bottom of
Furthermore, the multi-plane display system can comprise multiple display surfaces together with multiple curvilinear blending sections as shown in
The present invention multi-plane display system thus can simultaneously project a plurality of three dimensional images onto multiple display surfaces, one of which is a horizontal perspective image. Further, it can be a stereoscopic multiple display system, allowing viewers to use their stereoscopic vision for three dimensional image presentation.
Since the multi-plane display system comprises at least two display surfaces, various requirements need to be addressed to ensure high fidelity in the three dimensional image projection. The display requirements are typically geometric accuracy, to ensure that objects and features of the image are correctly positioned; edge match accuracy, to ensure continuity between display surfaces; no blending variation, to ensure no variation in luminance in the blending section of the various display surfaces; and field of view, to ensure a continuous image from the eyepoint of the viewer.
Since the blending section of the multi-plane display system is preferably a curved surface, some distortion correction could be applied in order for the image projected onto the blending section surface to appear correct to the viewer. There are various solutions for providing distortion correction to a display system, such as using a test pattern image, designing the image projection system for the specific curved blending display section, using special video hardware, or utilizing a piecewise-linear approximation for the curved blending section. Still another distortion correction solution for the curved surface projection is to automatically compute the image distortion correction for any given position of the viewer eyepoint and the projector.
Since the multi-plane display system comprises more than one display surface, care should be taken to minimize the seams and gaps between the edges of the respective displays. To avoid seam or gap problems, there could be at least two image generators generating adjacent, overlapping portions of an image. The overlapped image is calculated by an image processor to ensure that the projected pixels in the overlapped areas are adjusted to form the proper displayed images. Another solution is to control the degree of intensity reduction in the overlapping regions to create a smooth transition from the image of one display surface to the next.
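As a sketch of the intensity-reduction approach (assuming a simple linear ramp; real systems may use gamma-corrected or otherwise shaped ramps), the overlap region of two adjacent display images could be blended as follows.

```python
import numpy as np

def blend_overlap(image_a_strip, image_b_strip):
    """Blend the overlapping strips of two adjacent display images with
    complementary linear intensity ramps, so the luminance stays roughly
    constant across the seam.
    """
    width = image_a_strip.shape[1]
    ramp = np.linspace(1.0, 0.0, width)   # fades image A out as image B fades in
    blended = (image_a_strip * ramp[None, :, None]
               + image_b_strip * (1.0 - ramp)[None, :, None])
    return blended

# image_a_strip and image_b_strip are H x W x 3 arrays covering the same
# physical overlap region, one from each display's image generator.
```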
This application claims priority from U.S. provisional applications Ser. No. 60/576,187 filed Jun. 1, 2004, entitled "Multi plane horizontal perspective display"; Ser. No. 60/576,189 filed Jun. 1, 2004, entitled "Multi plane horizontal perspective hand on simulator"; Ser. No. 60/576,182 filed Jun. 1, 2004, entitled "Binaural horizontal perspective display"; and Ser. No. 60/576,181 filed Jun. 1, 2004, entitled "Binaural horizontal perspective hand on simulator", which are incorporated herein by reference. This application is related to co-pending application Ser. No. 11/098,681 filed Apr. 4, 2005, entitled "Horizontal projection display"; Ser. No. 11/098,685 filed Apr. 4, 2005, entitled "Horizontal projection display"; Ser. No. 11/098,667 filed Apr. 4, 2005, entitled "Horizontal projection hands-on simulator"; Ser. No. 11/098,682 filed Apr. 4, 2005, entitled "Horizontal projection hands-on simulator"; "Multi plane horizontal perspective display" filed May 27, 2005; "Multi plane horizontal perspective hand on simulator" filed May 27, 2005; "Binaural horizontal perspective display" filed May 27, 2005; and "Binaural horizontal perspective hand on simulator" filed May 27, 2005.