Projecting floating 3D images in midair could dramatically change how digital data is displayed. The traditional mediums currently available for displaying digital data, such as computer screens, holograms, and projectors, have major disadvantages in comparison with floating 3D images. For example, computer screens constrain the user's motion while viewing the digital data presented on the screen. This contrasts with floating 3D images, which follow the user's movement to display the digital data in front of him/her, regardless of the user's location or body position.
Viewing holograms requires the user to look at or through a piece of glass or film to see the image; if a “mid-air” effect is intended, the hologram image appears slightly in front of or behind the glass. Accordingly, holograms are best suited for museum-type applications, while floating 3D images are intended to be used anywhere, without the need for glass or film to display the digital data. Projectors require a flat wall or surface for projecting the digital data, as well as specific space dimensions and certain sitting positions for viewing the projected images. This contrasts with floating 3D images, which require no flat surface, particular space dimensions, or sitting positions.
In fact, until now there has not been a universal method or technique that achieves the idea of projecting 3D images floating-in-midair. Once such a method or technique is invented, it is expected to replace most traditional mediums of displaying digital data and to open the door for innovative entertainment, gaming, educational, engineering, and industrial applications.
The present invention introduces a method and system for projecting 3D images floating-in-midair. In this case, the 3D images are always located in front of the user's eyes even when s/he walks, turns around, or lies supine. The user can move around the 3D objects presented in the 3D images, or even walk through these 3D objects, to see more details or scenes from different points of view. The user can interact with the content of the 3D images similar to the way s/he interacts with content presented on a computer display. The content of the 3D image may include 3D models, images, videos, text, or the like.
In one embodiment, the present invention discloses a method for projecting an image to appear as a floating 3D image in midair relative to a user's point of view. Changing the position of the user while walking simultaneously changes the projection of the image, to make it appear as if it is always located in front of the user, or as if it has a fixed position regardless of the user's position or movement. The projection of the image is generated by a head mounted projector utilized by the user; a 3D scanner is also utilized to detect the locations and shapes of the surfaces located in front of the projector. A CPU receives the data of the 3D scanner and changes the parameters of the image to be projected. The change of the image parameters makes the projected image appear as if it is floating in mid-air, regardless of the locations and shapes of the surfaces located in front of the projector. In another embodiment, the 3D scanner is replaced with a database that stores a 3D model of the surfaces located in front of or around the user; in other words, the 3D model of the surfaces is stored in advance instead of being created in real time by a 3D scanner.
In one embodiment, the present invention discloses a method for projecting virtual objects on certain real objects in front of a user. For example, an image of a man can be projected on a door, regardless of the movement of the user with the head mounted projector, to make the man appear as if he is standing in front of the door relative to the user even when the user moves. Also, an image of a virtual 3D home can be projected on the interior walls of a real home, where the doors and windows of the virtual 3D home are always projected on top of the doors or windows of the real home, regardless of the movement of the user's head mounted projector. This is achieved by identifying the real objects located in front of the user, identifying the virtual objects located in a 3D model, and then changing the locations of the virtual objects in the 3D model so they are projected on top of the real objects in front of the user during the user's movement.
In another embodiment, the present invention discloses a system for projecting an image on a transparent surface held by the user's hands, wherein the projected image includes virtual objects with certain parameters that suit the identities and locations of real objects located behind the transparent surface. In one embodiment, the image is partially projected on the transparent surface while the rest of the image is projected on the real objects located behind the transparent surface. The partial images projected on both the transparent surface and the real objects form a complete image relative to the user's point of view, as will be described subsequently.
The third surface is located further away from the head mounted projector than the first surface and the second surface, and accordingly, if the virtual screen is projected as one rectangular image, then the first, second, and third parts of the virtual screen will not appear from the point of view as one unit or rectangle. To correct this, the shape of the third part of the virtual screen is adjusted to suit its location relative to the first part and the second part of the virtual screen. Generally, adjusting the third part of the virtual screen depends on the location of each of the three surfaces relative to the point of view. If the three surfaces are flat surfaces, then each surface is represented by the Cartesian coordinates of its corners. If a surface is comprised of a plurality of flat surfaces, then this surface is represented by the Cartesian coordinates of the corners of each one of the plurality of flat surfaces. If a surface is a curved surface, such as a sphere or a cylinder, then the type of the surface curvature is determined and taken into consideration when projecting the virtual screen.
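As a sketch of how a flat surface represented by the Cartesian coordinates of its corners can be handled, one common technique is to compute the projective transform (homography) that pre-warps a rectangular part of the virtual screen onto the surface's corner quadrilateral. The helper names below are illustrative assumptions, not part of the disclosed method:

```python
import numpy as np

def homography_unit_square_to_quad(quad):
    """Solve for the 3x3 projective transform H that maps the unit-square
    corners (0,0),(1,0),(1,1),(0,1) onto the given quad corners.
    This is the standard 4-point direct linear transform (DLT)."""
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    dst = np.asarray(quad, dtype=float)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2D point, with the homogeneous division."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)
```

Warping every pixel of a part through such a transform makes that part land on its surface so that, from the point of view, the parts line up as one rectangle.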
The previous examples in
To create the feeling of a three-dimensional image floating-in-midair, the shape of the virtual screen is simultaneously altered with any change in the position of the point of view. Changing the position of the point of view may change which surfaces the parts of the virtual screen are projected on, as measured by the head mounted projector's movement. Reshaping the virtual screen simultaneously with the movement of the point of view makes the virtual screen appear as if it has a fixed position, giving the sense of a 3D image floating-in-midair. Also, the 3D effect of the virtual screen is greatly enhanced by presenting 3D objects inside the virtual screen, where the parts or sides of a 3D object that appear to the user change with the change of the point of view during the user's movement. In this case, the user can move around the virtual screen and the 3D objects to see them from different points of view.
Generally, it is important to note that the virtual screen can take various shapes and forms and present different contents. For example,
Generally, in all cases when a user moves around a virtual screen, different sides of the virtual screen appear to the user according to his/her position. For example, when a virtual screen is projected as a 2D floating window, the 2D floating window has a front side and a back side that may each present different digital content. Moving around the 2D floating window enables the user to see its front side or its back side according to the point of view during the user's movement. Also, when a virtual screen is projected as a 3D cube, the user may see different faces of the cube during his/her movement around the cube. In this case also, each face of the cube may contain or present different digital content.
According to the previous description, in one embodiment, the present invention discloses a method for projecting an image on random surfaces from a movable projection source to make the image appear as a floating three-dimensional image relative to a point of view, wherein the method is comprised of four steps. The first step is detecting the number, positions, and parameters of the random surfaces located in front of the movable projection source. The second step is dividing the image into parts, wherein each part corresponds to a surface of the random surfaces. The third step is reforming each part according to the position and parameters of the corresponding surface to generate a reformed part, wherein the reformed parts can be projected on the random surfaces to appear as a floating three-dimensional image relative to the point of view. The fourth step is projecting the reformed parts from the movable projection source onto the random surfaces.
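The four steps above can be sketched as a toy pipeline, under simplifying assumptions: the image is a 2D list of pixel values, step 1 (detection) has already produced the list of surfaces, each surface is described here only by the image columns it covers and its depth, and a simple depth-based scaling stands in for the full geometric reform:

```python
def project_floating_image(image, surfaces, project):
    """Toy sketch of the four-step method. `image` is a list of pixel
    rows; each surface is a dict with a 'cols' slice (the part of the
    image that falls on it) and a 'depth' value. `project` stands in
    for the movable projection source."""
    # Step 2: divide the image into parts, one part per surface.
    parts = [[row[s['cols']] for row in image] for s in surfaces]
    # Step 3: reform each part according to its surface's parameters
    # (here, only a depth-based intensity scale, as an illustration).
    reformed = []
    for part, s in zip(parts, surfaces):
        scale = 1.0 / s['depth']
        reformed.append([[px * scale for px in row] for row in part])
    # Step 4: hand each reformed part to the projection source.
    for part in reformed:
        project(part)
    return reformed
```

In a full implementation, step 3 would apply the per-surface perspective warp rather than a scalar scale, but the division of labor among the four steps is the same.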
In one embodiment, the parameters of the surface include the flatness or curvature of each surface. In another embodiment, the parameters of the surface also include the color or material of each surface, which is used to change the color or brightness of the corresponding part of the virtual image. The number of surfaces may vary from one surface to multiple surfaces according to the user's position and the nature of the surrounding environment. In some cases, when the image is projected on natural surfaces, such as a mountain surface, each point of the mountain is considered a surface, and accordingly, the number of surfaces will be very large. In this case, the image is divided into a number of spots equal to the number of mountain points in front of the image projection, where each spot corresponds to one point of the mountain surface.
In one embodiment of the present invention, hiding parts of the image of the virtual screen before projecting it enhances the effect of the third dimensions. For example,
Generally, the virtual 3D home can be big enough to completely cover a building. In such cases, the virtual windows and doors of the virtual 3D model will be created to be aligned and projected on the real doors or windows of the building. This way, the user can walk through the virtual 3D home using the virtual doors that are located on top of the real door openings. Also, the user can look through the virtual windows that are located on top of the real window openings to see the outside of the building. In fact, such utilization of the present invention converts the augmented reality application from displaying the virtual objects on a screen of a tablet or a computer to displaying the virtual objects on the surrounding environment or buildings.
According to the previous description, in another embodiment, the present invention discloses a method for projecting a virtual 3D model on a 3D environment, wherein certain virtual objects of the virtual 3D model are projected on certain actual objects of the 3D environment while the source of the virtual 3D model is moving, and the method comprises five steps. The first step is identifying the virtual objects. The second step is identifying the certain actual objects located in front of the source of projection. The third step is determining the momentary location of the certain actual objects relative to the projection source. The fourth step is changing the position and dimensions of the certain virtual objects in the virtual 3D model according to the location of the certain actual objects, to make the certain virtual objects projected on top of the certain actual objects. The fifth step is projecting the image of the virtual 3D model on the 3D environment during the movement of the source of projection.
In one embodiment, the 3D environment is the environment that surrounds the user, whether indoor or outdoor. The projection source is a head mounted projector utilized by the user while s/he is walking through the 3D environment. The identification of the virtual objects is manually achieved by associating an ID with each virtual object of the 3D model, or automatically achieved by using a computer vision program for 3D models as known in the art. The identification of the actual objects is also automatically achieved by using a computer vision program that analyzes the image of the actual objects located in front of the head mounted projector. The determination of the location of the certain actual objects relative to the projection source is achieved by using a depth sensing camera or a 3D laser scanner.
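The identification-and-alignment steps can be sketched as follows. The dict keys 'id', 'position', and 'size' are illustrative assumptions standing in for the object IDs and the measured locations described above:

```python
def align_virtual_objects(virtual_objects, actual_objects):
    """Sketch of steps 1-4: match virtual objects to actual objects by
    their IDs, then move and resize each matched virtual object so its
    projection lands on top of its real counterpart. Unmatched virtual
    objects are left where the 3D model placed them."""
    # Index the actual objects detected in front of the projector by ID.
    actual_by_id = {obj['id']: obj for obj in actual_objects}
    for vobj in virtual_objects:
        target = actual_by_id.get(vobj['id'])
        if target is None:
            continue  # no matching real object currently in view
        # Step 4: update position and dimensions to the measured values.
        vobj['position'] = target['position']
        vobj['size'] = target['size']
    return virtual_objects
```

Step 5, the projection itself, would then render the updated 3D model each frame while the head mounted projector moves.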
Generally, the main advantage of the present invention is utilizing existing hardware technology that is simple and straightforward, which easily and inexpensively carries out the present method of creating and projecting floating 3D images in midair.
For example, the locations and shapes of the surfaces located in front of the point of view can be detected using different techniques. In one embodiment of the present invention, the locations and shapes of the surfaces located in front of the point of view are obtained from a database that stores a 3D model of the surrounding environment or the surfaces located around the user. This includes the walls, doors, windows, furniture, equipment, or the like. In this case, the database stores the 3D model of each object with its dimensions and location, in addition to the ID of the object, which is utilized in projecting certain virtual objects on certain real objects as was described previously.
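A minimal in-memory stand-in for such a database might look like the following; the class and field names are illustrative assumptions, chosen only to show the ID-plus-geometry record described above:

```python
from dataclasses import dataclass

@dataclass
class StoredObject:
    object_id: str    # ID used to project virtual objects on real ones
    corners: tuple    # Cartesian coordinates of the object's corners
    location: tuple   # position of the object in the environment

class SurfaceDatabase:
    """Stores the 3D model of each object with its dimensions,
    location, and ID, and allows lookup by ID."""
    def __init__(self):
        self._objects = {}

    def add(self, obj):
        self._objects[obj.object_id] = obj

    def lookup(self, object_id):
        return self._objects.get(object_id)
```

A real deployment would persist these records and cover every wall, door, window, and piece of furniture around the user, but the lookup-by-ID pattern is the essential part.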
In another embodiment, the surfaces located in front of the point of view are scanned by a 3D scanner that analyzes the surfaces in a random direction to collect data on the surfaces' positions, shapes, and appearance, including color. The 3D scanner captures the image of the surfaces in its field of view, where the picture produced by the 3D scanner describes the distance to the surfaces at each point in the picture. This allows the three-dimensional position of each point in the picture, relative to the position of the 3D scanner, to be identified. In one embodiment, the 3D scanner is an active scanner that emits a kind of radiation or light and detects its reflection in order to probe the surfaces in front of the 3D scanner after steering the 3D scanner to the random direction.
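Recovering the 3D position of each point in the scanner's picture is commonly done with the standard pinhole back-projection, shown here as a sketch; the intrinsic parameters (focal lengths and principal point) are assumptions about a calibrated depth sensor, not values given in the text:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth into a 3D
    point in the scanner's coordinate frame. fx, fy are the focal
    lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel of the depth picture yields the point positions relative to the 3D scanner that the reforming step needs.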
The advantage of using the 3D scanner over the database is that the locations of the surfaces located in front of the user will be related to the position of the point of view, based on positioning the 3D scanner near the user's eye. When utilizing a database, the user's position is detected in real time using a position detection tool. The position detection tool will then detect the position and direction of the user's eyes using a 3D accelerometer, compass, and GPS as known in the art.
In both cases, using a database or a 3D scanner, a CPU is utilized to retrieve the data of the surfaces from the database or the 3D scanner to reform the projected image according to this data.
According to one embodiment of the present invention,
In one embodiment of the present invention, the projection of the virtual screen on the surfaces located in front of the point of view is replaced with a projection on a transparent surface held by the user's hands. For example,
As shown in the figure, the 3D model of Mickey Mouse is projected to appear as if it is located at the door opening. Moving or tilting the transparent surface changes the content of the virtual screen so that Mickey Mouse consistently appears at the door opening relative to the point of view. For example, if the transparent surface is moved to the right, then the virtual screen moves Mickey Mouse to the left so that he still appears at the door opening. Also, if the transparent surface is tilted vertically or horizontally, then the virtual screen changes the dimensions of Mickey Mouse to make him appear as if he was not affected by the tilting of the transparent surface. If the transparent surface is moved away from or closer to the user, then the image of Mickey Mouse is resized to look as if he was not affected by the movement of the transparent surface.
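The compensation described above can be sketched as follows, under simplifying assumptions: the surface moves parallel to itself, the content shifts by the same amount in the opposite direction, and the on-surface size is kept proportional to the surface's distance so the angular size at the real-world anchor stays constant. The function and parameter names are illustrative:

```python
def anchor_content(content_pos, content_scale, surface_shift, distance_ratio):
    """Keep a projected object visually fixed at a real-world anchor
    while the transparent surface moves.

    content_pos    -- (x, y) of the object on the surface
    surface_shift  -- (dx, dy) the surface moved by
    distance_ratio -- new surface distance / old surface distance
    """
    x, y = content_pos
    dx, dy = surface_shift
    # Surface moves right -> content moves left (and likewise for y).
    new_pos = (x - dx, y - dy)
    # Surface moves closer (ratio < 1) -> draw the content smaller so
    # its angular size from the point of view is unchanged.
    new_scale = content_scale * distance_ratio
    return new_pos, new_scale
```

A full implementation would also account for tilting, which requires a perspective correction rather than a pure shift and scale.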
In another embodiment of the present invention, the virtual screen is simultaneously projected on both the transparent surface and the surfaces located in front of the point of view. For example,
According to the previous description, in one embodiment, the present invention discloses a method for projecting an image on a transparent surface that can be held by a user's hands, wherein the content of the image suits the identity of the objects located behind the transparent surface relative to a point of view, and the method comprises four steps. The first step is detecting the identity of the objects located behind the transparent surface. The second step is detecting the movement of the transparent surface. The third step is changing the position of the content according to the identity of the objects and the movement of the transparent surface, to make certain contents appear on top of certain objects when the image is projected on the transparent surface. The fourth step is projecting the image on the transparent surface.
The idea of projecting the virtual screen on a transparent surface can be replaced with projecting the virtual screen directly on a user's retina, head mounted display, eye glasses, or the like. In all such cases, the user's hands are free so they can be moved to provide an immediate computer input to a computer system, enabling interaction with the content of the virtual screen as was described previously.
One of the innovative applications of the present invention is hiding real objects in front of a user. This is achieved by projecting the scene behind the object on the object's surfaces to give the illusion that the object has disappeared and the scene behind it is visible to the user. In this case, it is required to capture the scene behind the object and then project this scene on the object after reforming the scene image according to the surfaces of the object. The same process can be utilized to project the scene behind the object on the user's retina to make the object disappear in front of the user. In this case, the scene image will not be reformed since it will not be projected on the object's surfaces.
As mentioned previously, the user can interact with the content of the virtual screen similar to interacting with content presented on a computer display. This is achieved by using a camera that tracks the movements of the user's hands or fingers, and a software program that interprets these movements into an input to a computer system. It is also possible to replace the camera with any other tracking tools or systems, such as optical sensors, laser sensors, or the like.
Finally, to clarify the idea of reforming the image of the virtual screen according to the point of view and the surfaces located in front of the projector.
To explain the mathematical process for creating the four parts of the real projection of the virtual screen,
Conclusively, while a number of exemplary embodiments have been presented in the description of the present invention, it should be understood that a vast number of variations exist, and these exemplary embodiments are merely representative examples, not intended to limit the scope, applicability, or configuration of the disclosure in any way. Various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein or thereon may be subsequently made by those skilled in the art, which are also intended to be encompassed by the claims below. Therefore, the foregoing description provides those of ordinary skill in the art with a convenient guide for implementation of the disclosure, and contemplates that various changes in the functions and arrangements of the described embodiments may be made without departing from the spirit and scope of the disclosure defined by the claims thereto.
This application is a Continuation-in-Part of co-pending U.S. Patent Application Nos. 61/624,174, filed Aug. 8, 2012, titled “Method, system, and device for displaying digital data”, and 61/743,022, filed Aug. 23, 2013, titled “Method and system for projecting images using 3-D scanning”.