The present invention relates to a method for displaying augmentation information in an augmented reality system and to an augmented reality system.
In augmented reality systems, a view of a physical real-world environment is augmented by virtual, computer-generated information. The view of the physical real-world environment may be a direct or indirect live view displayed to a user of the augmented reality system on eyeglasses, lenses or other display means. When the user of the augmented reality system is standing or talking face-to-face with another person, augmentation information may be displayed. However, as the amount of displayed augmentation information increases, the user of the augmented reality system may be distracted from the conversation with the other person.
Therefore, there is a need for appropriately displaying augmentation information in an augmented reality system.
According to the present invention, this object is achieved by a method for displaying augmentation information in an augmented reality system as defined in claim 1 and an augmented reality system as defined in claim 11. The dependent claims define preferred and advantageous embodiments of the invention.
According to an aspect of the present invention, a method for displaying augmentation information in an augmented reality system is provided. According to the method, reality information is detected with an imaging device. The reality information comprises an image of an environment of the imaging device. Furthermore, a human face is automatically detected in the reality information and a predetermined mapping is automatically assigned to the reality information depending on the detected human face. The predetermined mapping comprises a plurality of mapping areas which are each configured to display augmentation information. Finally, the augmentation information is displayed in a predetermined mapping area of the plurality of mapping areas.
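The sequence of steps above can be sketched in a few lines of Python. All names here (`detect_faces`, `assign_mapping`, `display_in_area`) and the 2x2 grid layout are illustrative assumptions, not part of the claimed method; a real system would use an actual face detector and a renderer on the display unit.

```python
# Minimal sketch of the claimed pipeline: detect reality information,
# detect a face, assign a predetermined mapping, display augmentation
# information in one mapping area. All helpers are hypothetical stand-ins.

def detect_faces(frame):
    # Placeholder for any face detector returning (x, y, w, h) boxes.
    return [(40, 30, 80, 100)] if frame is not None else []

def assign_mapping(face_box):
    # A predetermined mapping: numbered areas laid out as a 2x2 grid
    # over the face bounding box, each able to carry augmentation text.
    x, y, w, h = face_box
    return {
        1: (x, y, w // 2, h // 2),
        2: (x + w // 2, y, w // 2, h // 2),
        3: (x, y + h // 2, w // 2, h // 2),
        4: (x + w // 2, y + h // 2, w // 2, h // 2),
    }

def display_in_area(mapping, area_id, text):
    # Stand-in for rendering on the display unit.
    return {"area": area_id, "rect": mapping[area_id], "text": text}

frame = "reality-information"        # image captured by the imaging device
faces = detect_faces(frame)
mapping = assign_mapping(faces[0])
shown = display_in_area(mapping, 2, "REC")
print(shown["rect"])  # (80, 30, 40, 50)
```

The mapping is computed relative to the face bounding box, so the augmentation areas follow the face as it moves in the live view.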
By automatically detecting the human face in the reality information and displaying the augmentation information in predetermined mapping areas of the predetermined mapping, the augmentation information can be displayed in appropriate locations in relation to the detected human face, and therefore the human face is not concealed in an unwanted way by the augmentation information. For example, the augmentation information may be automatically displayed in mapping areas located outside of the eyes or the mouth of the human face. Thus, the face-to-face view is not obstructed by the augmentation information. Furthermore, if very important or urgent augmentation information is to be displayed, this augmentation information can be displayed in mapping areas where the user of the augmented reality system recognizes this information immediately, for example in mapping areas located at the eyes, the nose or the mouth of the detected human face.
According to an embodiment, the predetermined mapping comprises a default mapping comprising a plurality of mapping areas which are assigned to areas of the human face. The mapping areas may comprise, for example, an area on the left forehead, an area on the right forehead, an area covering an eye of the human face, an area covering the nose of the human face, areas covering the cheeks of the human face or areas covering parts of the chin of the detected human face. Furthermore, the predetermined mapping may comprise a default mapping comprising a plurality of mapping areas assigned to areas adjacent to the human face, for example mapping areas left or right beside the forehead of the human face or mapping areas left or right beside the cheeks of the human face. This allows the augmentation information to be arranged in many appropriate and convenient mapping areas.
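Such a default mapping can be expressed as named rectangles computed from the face bounding box. The area names and proportions below are illustrative assumptions; the patent does not fix any particular geometry.

```python
# Sketch of a default mapping: areas anchored on the face (forehead
# halves, eyes, nose, cheeks, chin) plus areas adjacent to the face.
# All proportions are illustrative assumptions.

def default_mapping(face_box):
    x, y, w, h = face_box
    return {
        "left_forehead":  (x,              y,              w // 2, h // 4),
        "right_forehead": (x + w // 2,     y,              w // 2, h // 4),
        "eyes":           (x,              y + h // 4,     w,      h // 4),
        "nose":           (x + w // 3,     y + h // 2,     w // 3, h // 4),
        "left_cheek":     (x,              y + h // 2,     w // 3, h // 4),
        "right_cheek":    (x + 2 * w // 3, y + h // 2,     w // 3, h // 4),
        "chin":           (x + w // 4,     y + 3 * h // 4, w // 2, h // 4),
        # Areas adjacent to the face, to its left and right:
        "beside_left":    (x - w // 2,     y,              w // 2, h),
        "beside_right":   (x + w,          y,              w // 2, h),
    }

m = default_mapping((100, 50, 60, 80))
print(m["beside_right"])  # (160, 50, 30, 80)
```

Note that the adjacent areas may extend outside the face bounding box; whether they fit on screen depends on where the face sits in the captured image.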
According to another embodiment, a person is automatically recognized from a plurality of persons based on the human face comprised in the reality information. Furthermore, a plurality of mappings is provided, each of which is assigned to one person of the plurality of persons. Depending on the recognized person, the mapping assigned to the recognized person is automatically assigned to the reality information. Therefore, depending on the recognized person, a person-specific mapping can be used to display the augmentation information in connection with the recognized person in the reality information.
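The person-specific selection amounts to a lookup keyed by the recognizer's result. The `recognize` helper and the person identifiers below are hypothetical placeholders for a real face-recognition step.

```python
# Sketch of person-specific mappings: a hypothetical recognizer yields a
# person identifier, which selects that person's mapping. The identifiers
# and the recognize() stub are assumptions for illustration only.

def recognize(face_box, known_persons):
    # Placeholder for a face-recognition step; here it simply returns a
    # fixed identifier when that person is known, else None.
    return "alice" if "alice" in known_persons else None

mappings_by_person = {
    "alice": {"info_area": "left_forehead"},  # Alice's info goes up left
    "bob":   {"info_area": "beside_right"},   # Bob's info goes beside the face
}
fallback_mapping = {"info_area": "beside_left"}  # used for unknown persons

person = recognize((40, 30, 80, 100), mappings_by_person)
mapping = mappings_by_person.get(person, fallback_mapping)
print(person, mapping["info_area"])  # alice left_forehead
```

Falling back to a default mapping for unrecognized faces keeps the behavior well defined when recognition fails.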
According to another embodiment, the augmentation information comprises information about the recognized person. The information about the recognized person may comprise for example the name of the person, the birthday of the person, information about appointments with the person or any other information related to the person.
According to another embodiment, the mapping is configurable by a user of the augmented reality system. Furthermore, the kind of augmentation information to be displayed may also be configurable by the user of the augmented reality system and the mapping area where the augmentation information is to be displayed may also be configurable by the user of the augmented reality system. This allows the user of the augmented reality system to individually configure the whole arrangement of the augmentation information in connection with the human face of the reality information.
In addition to the above-described kinds of augmentation information, the augmentation information may comprise, for example, an e-mail information indicating for example the arrival of a new e-mail, a calendar information indicating for example a list of appointments of the current day, an appointment information indicating for example the time and date of a next appointment, and a time-of-day information. This allows the user of the augmented reality system to be informed about important and current information while the user is having a conversation with the person whose face is present in the reality information.
Furthermore, the augmentation information may comprise optical control information or virtual control elements indicating an actuation of a function of the augmented reality system. The function of the augmented reality system may comprise, for example, starting and stopping an audio and video recording of the reality information or an opening of an incoming e-mail or an upcoming appointment. For actuating the function of the augmented reality system, the eyes of the user of the augmented reality system are automatically tracked and it is automatically detected if the user is looking at the optical control information. Upon detecting that the user is looking at the optical control information, or that the user is looking in a predetermined sequence at several items of the optical control information, the function of the augmented reality system is automatically actuated. This allows the user of the augmented reality system to actuate specific functions of the augmented reality system without using a manual control device. Thus, the function of the augmented reality system can be actuated without being noticed by the person in front of the user of the augmented reality system. The optical control information may be displayed as the augmentation information in one of the mapping areas. When the optical control information is arranged, for example, in a mapping area covering the forehead of the human face in the reality information, the function of the augmented reality system represented by the optical control information can be actuated by the user while still looking at the human face in the reality information.
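Detecting that the user is looking at the optical control information reduces to testing whether tracked gaze points fall inside the control's mapping area. The dwell threshold and all names below are illustrative assumptions; real eye trackers would additionally filter noise in the gaze signal.

```python
# Sketch of gaze-based actuation: a function is actuated when the tracked
# gaze stays inside the rectangle of the displayed optical control
# information for several consecutive samples. Thresholds are assumptions.

def gaze_in_rect(gaze, rect):
    gx, gy = gaze
    x, y, w, h = rect
    return x <= gx < x + w and y <= gy < y + h

def actuate_if_looked_at(gaze_samples, control_rect, dwell_needed=3):
    # Require the gaze to remain on the control for a few samples, so a
    # passing glance does not trigger the function.
    hits = 0
    for g in gaze_samples:
        hits = hits + 1 if gaze_in_rect(g, control_rect) else 0
        if hits >= dwell_needed:
            return True
    return False

rec_button = (40, 30, 40, 25)  # mapping area holding the control
samples = [(10, 10), (45, 35), (50, 40), (55, 42), (200, 200)]
print(actuate_if_looked_at(samples, rec_button))  # True
```

Resetting the hit counter whenever the gaze leaves the control makes the dwell requirement strict: only an uninterrupted look actuates the function.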
According to another aspect of the present invention, an augmented reality system is provided. The system comprises an imaging device for detecting reality information, a display unit adapted to display augmentation information in combination with the reality information, and a processing unit coupled to the imaging device and the display unit. The imaging device may comprise for example a camera for capturing an image of an environment of the camera as the reality information. The processing unit is adapted to detect a human face in the reality information and to assign a predetermined mapping to the reality information depending on the detected human face. The mapping comprises a plurality of mapping areas and the processing unit is adapted to display the augmentation information in a predetermined mapping area of the plurality of mapping areas. The augmented reality system may be adapted to perform the above-described method and comprises therefore the above-described advantages.
The display unit may comprise eyeglasses adapted to display the reality information in connection with the augmentation information. The eyeglasses may be adapted to pass through the reality information transparently to a user of the eyeglasses and to present, via an electronic display of the eyeglasses, the augmentation information simultaneously and synchronized with the reality information to the user. This offers the user a convenient way to receive the augmentation information while the user is face to face with a person.
According to another embodiment, the display unit is adapted to display the reality information and the augmentation information simultaneously on a display. This allows a user for example during a video conference to receive the augmentation information while looking at another person of the video conference.
The augmented reality system, especially the processing unit, may be included in a mobile device, for example a mobile phone, a personal digital assistant, a mobile navigation system, or a mobile computer.
Although specific features described in the above summary and the following detailed description are described in connection with specific embodiments, it is to be understood that the features of the embodiments can be combined with each other unless noted otherwise.
The invention will now be described in more detail with reference to the accompanying drawings.
In the following, exemplary embodiments of the present invention will be described in detail. It is to be understood that the following description is given only for the purpose of illustrating the principles of the invention and is not to be taken in a limiting sense. Rather, the scope of the invention is defined only by the appended claims and not intended to be limited by the exemplary embodiments hereinafter.
It is to be understood that the features of the various exemplary embodiments described herein may be combined with each other unless specifically noted otherwise. Same reference signs in the various instances of the drawings refer to similar or identical components.
In operation the processing unit 103 receives the reality information captured by the camera 102 and detects if a human face is present in the reality information.
In mapping area 1, an optical control information, a so-called sensorial recording control, for starting and stopping recording of audio and video data from the conversation with the person is located. To activate or de-activate the recording, the eyes of the user of the augmented reality system are tracked, for example by an additional camera (not shown) mounted on the eyeglasses 101. For activating the recording, the user has to briefly look at the three dots shown in mapping area 1 in a predetermined order: first at the upper dot, then down to the lower left dot, and then to the lower right dot, as indicated by the arrows connecting the dots. In response to this eye movement the recording will start, as indicated by displaying "REC" in mapping area 1. Video and audio information from the conversation will then be recorded by the processing unit 103.
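The three-dot activation sequence can be modeled as a small state machine over which dot the gaze currently rests on. The dot labels and reset behavior below are illustrative assumptions about one possible implementation.

```python
# Sketch of the three-dot activation sequence: recording starts only when
# the gaze visits the dots in the predetermined order
# upper -> lower-left -> lower-right. Labels are assumptions.

DOT_SEQUENCE = ["upper", "lower_left", "lower_right"]

def next_state(progress, looked_at_dot):
    # Advance on the expected dot; restart if the user looks at the first
    # dot again; otherwise reset the sequence entirely.
    if looked_at_dot == DOT_SEQUENCE[progress]:
        return progress + 1
    return 1 if looked_at_dot == DOT_SEQUENCE[0] else 0

def recording_started(gaze_dots):
    progress = 0
    for dot in gaze_dots:
        progress = next_state(progress, dot)
        if progress == len(DOT_SEQUENCE):
            return True  # here the system would display "REC" and record
    return False

print(recording_started(["upper", "lower_right"]))                # False
print(recording_started(["upper", "lower_left", "lower_right"]))  # True
```

Because any out-of-order glance resets the sequence, ordinary eye movement across the face is unlikely to start the recording accidentally.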
Finally, information about upcoming events is displayed in mapping area 16. The upcoming events may be retrieved from a database of the processing unit 103 comprising calendar information of the user of the augmented reality system. Furthermore, upcoming events may comprise, for example, incoming phone calls or incoming e-mails. An exemplary upcoming event from a calendar is shown in
As described above, the augmented reality system 100 allows the user of the augmented reality system 100 to receive augmentation information while having a conversation with a person in front of the user. As the displayed augmentation information is configurable by the user, only that kind of information is displayed during the conversation which is appropriate in the user's view. Therefore, the user is not distracted unnecessarily from the conversation by the augmentation information.
While exemplary embodiments have been described above, various modifications may be implemented in other embodiments. For example, the predetermined mapping as shown in
Finally, it is to be understood that all the embodiments described above are considered to be comprised by the present invention as it is defined by the appended claims.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2010/004055 | 7/5/2010 | WO | 00 | 7/1/2011 |