The head-mounted display, abbreviated HMD, is a display device worn on the head as part of eyeglasses or a helmet and working as an extension of a computer; it has a small display positioned in front of a user's eyes. The optical head-mounted display (OHMD), such as GOOGLE GLASS, is a wearable display that has the capability of reflecting projected images while also allowing the user to see through it. The major applications of HMDs and OHMDs include military, governmental (fire, police, etc.) and civilian/commercial (medicine, video gaming, sports, etc.) use.
For example, in the aviation field, HMDs are increasingly being integrated into the modern pilot's flight helmet. In the rescue field, firefighters use HMDs to display tactical information, such as maps or thermal imaging data, while simultaneously viewing a real scene. In the engineering field, engineers use HMDs to provide stereoscopic views of drawings by combining computer graphics, such as system diagrams and imagery, with the technician's natural vision. In the medical field, physicians use HMDs during surgeries, where radiographic data (CAT scans and MRI imaging) is combined with the surgeon's natural view of the operation, and the anesthesiologist can monitor the patient's vital signs through data presented on the HMD.
In the gaming and entertainment fields, some HMDs have a positional sensing system that permits the user to view their surroundings, with the perspective shifting as the head is moved, thus providing a deep sense of immersion. In sports, an HMD system has been developed for car racers to easily see critical race data while maintaining focus on the track. In the skill training field, a simulation presented on the HMD allows the trainer to virtually place a trainee in a situation that is too expensive or too dangerous to replicate in real life. Training with HMDs covers a wide range of applications such as driving, welding and spray painting, flight and vehicle simulators, dismounted soldier training, medical procedure training, and more.
Recent OHMDs have been developed to serve all of the aforementioned fields. For example, GOOGLE GLASS, a wearable computer with an optical head-mounted display, has been developed by GOOGLE. It displays information in a smartphone-like, hands-free format and can communicate with the Internet via natural language voice commands. Many other companies have developed OHMDs similar to GOOGLE GLASS with fewer or more features, or differing capabilities.
Generally, the two main disadvantages of using HMDs and OHMDs are their limited visual area on which to display digital data, and the difficulty the user experiences interacting with digital data presented in front of his/her eyes. The area assigned for displaying digital data on an HMD or OHMD is minuscule in comparison to the larger screens of computers and tablets. Also, interaction with the digital data on HMDs or OHMDs cannot employ a traditional computer input device, such as a computer mouse or keyboard, when the user is standing, walking, or lying supine. If these two problems are solved, the use of HMDs and OHMDs will be dramatically improved to aptly serve military, government, and civilian/commercial interests.
The present invention discloses a method for interacting with digital data presented on a display. The display can be an HMD, an OHMD, a tablet screen, a mobile phone screen, or the like. The method resolves the two aforementioned problems. Accordingly, the digital data presented on the display is no longer restricted to the dimensions or size of the display, and the user can easily interact with the digital data without using a computer input device while s/he is standing, walking, or lying supine. Thus, the present invention enhances the various applications and uses of HMDs and OHMDs, and creates new applications for tablets and mobile phones.
In one embodiment, the present invention enables a user to select virtual data on a display and position the virtual data in mid-air around the user. The virtual data remains stationed at its new location, regardless of the movements the user makes. The user can view the virtual data at its new location once the display is aimed towards this new location. The user can also select the virtual data at its new location in the air and relocate it back to the display. In another embodiment, the present invention enables a user to select virtual data on a display and relocate this virtual data to attach it to a real object, such as a wall or piece of furniture located in the surrounding environment of the user. The virtual data remains attached to the real object regardless of the movement of the user. The user can view the virtual data attached to the real object once the display is aimed towards the real object.
The selection of the virtual data can be achieved in various manners, such as using gesture recognition, voice commands, picture capturing, or the like. The relocation of the virtual data can be achieved in various ways, such as hand movements, device movement, or providing numerical data representing the position of the new location of the virtual data. Accordingly, the present invention turns the surrounding environment of the user into a large virtual display that can hold much more digital data than the size of the display, whether this display is a tablet, mobile phone, HMD, or OHMD. The virtual data may contain digital text, images, or videos. The text, images, or videos can be associated with a URL of online content, such as a website. Accordingly, the virtual data changes simultaneously with changes in the online content. The user can view this online content once the display is aimed towards the position of the virtual data.
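For illustration only, the following Python sketch shows one possible data model for such virtual data; the class name, field names, coordinate convention, and example URL are assumptions made for the example and are not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VirtualWindow:
    """One piece of virtual data anchored at a point in the user's surroundings."""
    window_id: str
    position: Tuple[float, float, float]     # world coordinates (x, y, z) relative to a reference origin
    size: Tuple[float, float] = (0.5, 0.3)   # width and height of the window in world units
    content: str = ""                        # digital text, or a path to an image or video
    url: Optional[str] = None                # optional online content backing this window

    def move_to(self, new_position: Tuple[float, float, float]) -> None:
        """Re-anchor the window at a new mid-air position; it stays there until moved again."""
        self.position = new_position

# Example: select a window on the display and drop it two metres to the user's right.
weather = VirtualWindow("weather", position=(0.0, 0.0, 1.0), url="https://example.com/weather")
weather.move_to((2.0, 0.5, 1.0))
```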
In another embodiment, the present invention enables a group of users to interact with virtual data suspended in mid-air around them. Each one of the users can select virtual data on a device display and drag the virtual data to a desired position in mid-air. All users can view the virtual data at its new location by aiming a device display towards this location. This innovative application enhances the collaborative interaction of a group of users with virtual data, opening the door for various gaming, entertainment, educational, and professional computer applications.
Generally, the above Summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Using the present invention to select windows on the OHMD and navigate these windows through the user's environment opens the door for a variety of innovative computer applications. For example,
Generally, the method of the present invention can be utilized with OHMDs, HMDs, tablets, mobile phones, and computers. For example,
The concept of using the present invention in augmented reality applications can provide freedom to attach virtual windows to real objects that appear in the physical landscape of the user. For example,
It is important to note that positioning the virtual windows on real objects, such as walls or furniture, means the virtual windows remain attached to these real objects regardless of the user's movement with the OHMD. Accordingly, when a user positions a plurality of virtual windows on the walls of different rooms of a building, s/he can walk through the physical landscape of the building and view the virtual windows attached in each room. The building essentially becomes a 3D gallery of digital data, and the digital data can contain text, pictures, or videos as mentioned previously. In one embodiment of the present invention, each window virtually positioned on a real object can be associated with online content described by a URL. For example, a virtual window can be associated with a URL such as “www.cnn.com”, which leads to the CNN news website. Accordingly, the content of this virtual window will change each time the CNN website itself undergoes a change in content. Of course, the virtual window can present a specific webpage of a website, or the homepage of the website, and the user can interact with or browse the website at will, as will be described subsequently.
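As a minimal sketch of keeping a virtual window in step with its associated online content, the function below (reusing the illustrative VirtualWindow class from the earlier sketch) fetches the current page behind the window's URL. A practical implementation would render the page rather than store raw HTML; the function name and refresh strategy are assumptions.

```python
import urllib.request

def refresh_window_content(window: "VirtualWindow", timeout: float = 5.0) -> None:
    """Fetch the latest content behind the window's URL so the virtual window mirrors
    the current state of the website (e.g. a news homepage that changes over time)."""
    if window.url is None:
        return  # the window holds static text, images, or video
    with urllib.request.urlopen(window.url, timeout=timeout) as response:
        window.content = response.read().decode("utf-8", errors="replace")
```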
The previous examples demonstrate using the present invention when interacting with two-dimensional computer applications. However, the present invention is also helpful when interacting with three-dimensional computer applications. For example,
Overall, the main advantage of the present invention is that it utilizes existing hardware technology that is simple and straightforward, and that easily and inexpensively carries out the interaction method of the present invention. For example, in
To rotate the virtual window vertically or horizontally, move the virtual window closer to or farther from the user, or resize the virtual window in its position in the air, the user provides an immediate input representing a rotation, movement, or resizing. The user input can be done with many gestures, each of which can represent a rotation, movement, or resizing with certain criteria. For example, the rotation can be described by a vertical angle or horizontal angle. The movement can be described by a 3D direction and a distance along this 3D direction, similar to using the spherical coordinate system. The 3D direction can be described by a first angle located between a line representing the 3D direction and the xy-plane, and a second angle located between the projection of the line on the xy-plane and the x-axis. The resizing can be described by a positive or negative percentage of the original size of the virtual window.
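For illustration, the following Python sketch converts the two direction angles and the distance into a 3D displacement vector and applies a percentage resize; the function names, the use of degrees, and the interpretation of the percentage as a relative change are assumptions made for the example.

```python
import math
from typing import Tuple

def displacement(elevation_deg: float, azimuth_deg: float, distance: float) -> Tuple[float, float, float]:
    """Convert a 3D direction and a distance into a displacement vector.
    elevation_deg: angle between the direction line and the xy-plane.
    azimuth_deg:   angle between the line's projection on the xy-plane and the x-axis."""
    elev = math.radians(elevation_deg)
    azim = math.radians(azimuth_deg)
    dx = distance * math.cos(elev) * math.cos(azim)
    dy = distance * math.cos(elev) * math.sin(azim)
    dz = distance * math.sin(elev)
    return (dx, dy, dz)

def resize(width: float, height: float, percent: float) -> Tuple[float, float]:
    """Apply a positive or negative percentage change to the window's original size."""
    factor = 1.0 + percent / 100.0
    return (width * factor, height * factor)

# Example: move 100 units along elevation 30 degrees, azimuth 45 degrees, then shrink by 50%.
print(displacement(30.0, 45.0, 100.0))
print(resize(0.5, 0.3, -50.0))
```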
In addition to the gestures, the present invention can utilize natural language voice commands to provide an immediate input to a computer system representing the intended rotation, movement, or resizing. For example, a command such as “rotation, vertical, 90” can be interpreted to represent “a vertical rotation with an angle equal to 90 degrees”. Also, a command such as “movement, 270, 45, 100” can be interpreted to represent a movement in a 3D direction with a vertical angle equal to 270 and a horizontal angle equal to 45, as well as a distance along this 3D direction equal to 100 units. A command such as “resize, 50” can be interpreted as resizing the virtual window 50% compared to its original size.
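A minimal sketch of interpreting such transcribed voice commands could look like the following; the command grammar is taken from the examples above, while the returned dictionary keys are illustrative assumptions.

```python
def parse_command(command: str) -> dict:
    """Interpret a spoken command after it has been transcribed to text.
    Supported forms, following the examples above:
      "rotation, vertical, 90"   -> a vertical rotation of 90 degrees
      "movement, 270, 45, 100"   -> move along direction (270, 45) by 100 units
      "resize, 50"               -> resize the window by 50% relative to its original size
    """
    parts = [p.strip() for p in command.lower().split(",")]
    kind = parts[0]
    if kind == "rotation":
        return {"action": "rotation", "axis": parts[1], "angle": float(parts[2])}
    if kind == "movement":
        return {"action": "movement", "angle1": float(parts[1]),
                "angle2": float(parts[2]), "distance": float(parts[3])}
    if kind == "resize":
        return {"action": "resize", "percent": float(parts[1])}
    raise ValueError(f"Unrecognized command: {command!r}")

# Example: parse_command("movement, 270, 45, 100")
```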
In
The tilted angle of the mobile phone indicates the orthogonal angle of the virtual window movement. The length of time the icon is pressed represents the distance the virtual window moves along the orthogonal angle. The GPS of the mobile phone detects the position of the mobile phone, which represents the start position of the virtual window. The orthogonal angle and distance of the virtual window movement, relative to the start position of the mobile phone, determine the final position of the virtual window after its movement.
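The sketch below illustrates this placement under simplifying assumptions, reusing the displacement helper from the earlier example; METERS_PER_SECOND is an assumed tuning constant, since the description does not specify how press duration maps to distance.

```python
METERS_PER_SECOND = 0.5  # assumed: how far the window travels per second the icon is held

def place_window(start_xyz, tilt_elevation_deg, tilt_azimuth_deg, press_seconds):
    """Return the window's final position given the phone's tilt angles (from its
    orientation sensors), the press duration, and the GPS-derived start position."""
    dx, dy, dz = displacement(tilt_elevation_deg, tilt_azimuth_deg,
                              METERS_PER_SECOND * press_seconds)
    x, y, z = start_xyz
    return (x + dx, y + dy, z + dz)

# Example: phone tilted 20 degrees upward, facing 90 degrees from the x-axis, icon pressed 4 seconds.
print(place_window((0.0, 0.0, 1.5), 20.0, 90.0, 4.0))
```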
In
In the case of positioning a plurality of virtual windows inside different rooms of a building, a database stores the 3D model of the rooms and buildings to show or hide the virtual windows on the device display according to which room the user is standing in. Of course, the user may prefer to view all virtual windows located inside the entire building from each room. In this case, the walls of the room will not block any virtual windows, which means the 3D model of the rooms and building will be ignored.
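The following sketch shows one way such a stored 3D model could gate visibility; representing each room as an axis-aligned box, and the function names, are simplifying assumptions made for the example.

```python
from typing import Dict, Optional, Tuple

# A simplified 3D model of the building: each room is an axis-aligned box
# (min corner, max corner) in the same world coordinates used for the virtual windows.
ROOMS: Dict[str, Tuple[Tuple[float, float, float], Tuple[float, float, float]]] = {
    "kitchen": ((0.0, 0.0, 0.0), (4.0, 3.0, 2.5)),
    "office":  ((4.0, 0.0, 0.0), (8.0, 3.0, 2.5)),
}

def room_of(point: Tuple[float, float, float]) -> Optional[str]:
    """Return the name of the room containing a point, or None if it lies outside every room."""
    x, y, z = point
    for name, ((x0, y0, z0), (x1, y1, z1)) in ROOMS.items():
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            return name
    return None

def visible_windows(windows, user_position, ignore_walls: bool = False):
    """Show only the windows in the user's current room; if the user prefers to see every
    window in the building, ignore_walls=True bypasses the 3D model entirely."""
    if ignore_walls:
        return list(windows)
    current_room = room_of(user_position)
    return [w for w in windows if room_of(w.position) == current_room]
```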
In
In another embodiment, the present invention allows a group of users to interact with virtual data suspended in mid-air around them. Each one of the users can select virtual data on a device display and drag the virtual data to a desired position in mid-air. All users can view the virtual data at its new location by aiming a device display towards this location. This innovative application enhances the collaborative interaction of a group of users with virtual data, opening the door for various gaming, entertainment, educational, and professional applications. Additionally, the group of users can be located in different locations or cities and still maintain an interaction with the same virtual data. In this case, each virtual window suspended in the air will be presented around each user at his/her location. Once a user changes the position and/or content of a virtual window, these changes appear to all users at their locations.
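A minimal, in-process sketch of this shared interaction is given below, reusing the illustrative VirtualWindow class; a real system would replace the registry with a networked service so that users in different cities receive the same updates. The class and method names are assumptions.

```python
class SharedWindowRegistry:
    """A stand-in for the shared service that keeps every user's view of the virtual
    windows consistent across devices and locations."""

    def __init__(self):
        self._windows = {}        # window_id -> VirtualWindow
        self._subscribers = []    # one callback per connected user device

    def subscribe(self, callback):
        """Register a device to be notified whenever any window changes."""
        self._subscribers.append(callback)

    def update(self, window):
        """Store the latest position/content of a window and notify every device."""
        self._windows[window.window_id] = window
        for notify in self._subscribers:
            notify(window)

# Example: two devices subscribe; when one user moves a window, both devices are notified.
registry = SharedWindowRegistry()
registry.subscribe(lambda w: print("device A sees", w.window_id, "at", w.position))
registry.subscribe(lambda w: print("device B sees", w.window_id, "at", w.position))
registry.update(VirtualWindow("scoreboard", position=(1.0, 2.0, 1.5)))
```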
Generally, when using a device such as an HMD, OHMD, tablet, or mobile phone, the device is equipped with a camera, processor, 3D compass, GPS, accelerometer, and movement sensing unit. The camera captures a picture of the user's finger. The processor analyzes the picture of the finger to determine its position relative to the user's eye. The position of the finger is compared with the virtual windows presented on the display to determine which virtual window the user is selecting. The processor reshapes the selected virtual window to match the finger's movement when moving, rotating, or resizing the virtual window. Once the virtual window is moved to a new location in mid-air and the device display is not facing this new location, the virtual window remains at its position but disappears from the device display. If the device display is moved again to face the location of the virtual window, then the virtual window appears on the device display as if it were suspended in the air. The 3D compass detects the tilting of the device in three dimensions, and the GPS determines the current position or coordinates of the device location. The accelerometer and movement sensing unit determine the movement of the device relative to its original position.
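The sketch below illustrates two of these steps under simplifying assumptions: hit-testing the finger's on-screen position (as derived from the camera image) against the rectangles of the displayed windows, and deciding whether the display is aimed at a window's mid-air position. The field-of-view threshold and function names are assumptions.

```python
import math

def select_window(finger_xy, screen_windows):
    """Compare the finger's on-screen position with the windows drawn on the display and
    return the identifier of the first window the finger falls inside, if any."""
    fx, fy = finger_xy
    for window_id, (left, top, width, height) in screen_windows.items():
        if left <= fx <= left + width and top <= fy <= top + height:
            return window_id
    return None

def is_facing(device_pos, device_dir, window_pos, fov_deg=40.0):
    """A window suspended in mid-air is drawn only when the angle between the device's
    viewing direction and the window is within the display's field of view."""
    to_window = tuple(w - d for w, d in zip(window_pos, device_pos))
    norm = math.sqrt(sum(c * c for c in to_window)) or 1.0
    dir_norm = math.sqrt(sum(c * c for c in device_dir)) or 1.0
    cos_angle = sum(a * b for a, b in zip(to_window, device_dir)) / (norm * dir_norm)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= fov_deg / 2.0

# Example: finger at pixel (120, 80) over a window occupying the rectangle (100, 50, 200, 150).
print(select_window((120, 80), {"weather": (100, 50, 200, 150)}))
print(is_facing((0, 0, 0), (0, 1, 0), (0.2, 3.0, 0.1)))
```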
In another embodiment of the present invention, a modern retinal projector is utilized to project the image of the virtual window onto the user's retina. In this case, the image of the virtual window changes to correspond to the location of the virtual window in the air. Since the user sees the scene in front of him/her, the virtual window appears to be suspended in the air in front of the scene.
In one embodiment, there is no need to select a virtual window from a device display: the user can directly create a virtual window in the air in front of him/her. This is achieved by selecting a position for the virtual window in mid-air with a finger, drawing the boundary lines of the virtual window, and describing the content of the virtual window. The content of the virtual window can be described by a URL, as was described previously. Also, the content of the virtual window can be described by the name of a desktop application, such as MICROSOFT WORD, to display this application in mid-air in front of the user, who is free to interact with it. Of course, in all such cases, the user needs to use a device display such as an HMD, OHMD, tablet, or mobile phone, or to use a retinal projector to view the virtual window. To describe the content of the virtual window, the user may use natural language voice commands. Also, the user may write in the air, and this freehand writing is tracked by a camera and interpreted as digital text describing the content of the virtual window.
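As a speculative sketch of this direct creation, the function below assembles a window from fingertip positions traced in the air and a spoken description, reusing the illustrative VirtualWindow class; it assumes the traced boundary is roughly vertical, so width is measured along x and height along z.

```python
def create_window_in_air(corner_points, spoken_description):
    """Build a new window from a boundary the user traced in the air (a list of tracked
    fingertip positions) and a spoken or handwritten description of its content.
    The window is anchored at the centre of the traced boundary."""
    xs = [p[0] for p in corner_points]
    ys = [p[1] for p in corner_points]
    zs = [p[2] for p in corner_points]
    centre = (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))
    size = (max(xs) - min(xs), max(zs) - min(zs))
    return VirtualWindow("drawn-window", position=centre, size=size,
                         content=spoken_description)

# Example: a roughly rectangular outline traced one metre in front of the user.
outline = [(0.0, 1.0, 1.2), (0.6, 1.0, 1.2), (0.6, 1.0, 1.6), (0.0, 1.0, 1.6)]
print(create_window_in_air(outline, "open MICROSOFT WORD"))
```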
Finally, it is important to note that the present invention can virtually move a virtual window from a first position on a device display to a second position in mid-air. Also, the present invention can virtually move the virtual window from the second position in mid-air to its first or original position on the device display. Moreover, the present invention can move a virtual window from a first position on a first device display to a second position on a second device display. In this case, the present invention will project the picture of the virtual window on the second device display, where the user can see this projected picture when using an OHMD or aiming the first device display towards the second device display.
Conclusively, while a number of exemplary embodiments have been presented in the description of the present invention, it should be understood that a vast number of variations exist, and these exemplary embodiments are merely representative examples that are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein or thereon may be subsequently made by those skilled in the art, which are also intended to be encompassed by the claims below. Therefore, the foregoing description provides those of ordinary skill in the art with a convenient guide for implementation of the disclosure, and contemplates that various changes in the functions and arrangements of the described embodiments may be made without departing from the spirit and scope of the disclosure defined by the claims thereto.
This application claims the benefit of U.S. Provisional Patent Application No. 61/835,351, filed Apr. 2, 2013, titled “Method For Positioning and Displaying Digital Data”.