This application claims priority from and the benefit under 35 U.S.C. §119(a) of Korean Patent Application No. 10-2010-0103620, filed on Oct. 22, 2010, the disclosure of which is incorporated herein by reference for all purposes.
1. Field
The following description relates to an apparatus and method for providing an augmented reality (AR) user interface.
2. Discussion of the Background
Augmented reality (AR) is a computer graphics technique that combines virtual objects or information with a real-world environment so that the virtual elements appear as if they were present in the real environment.
Unlike general virtual reality technology, which provides only virtual objects in a virtual space, AR technology provides a view of reality blended with virtual objects, thereby supplying supplementary information that is difficult to obtain from reality alone. In addition, general virtual reality technology is applicable only to a limited range of fields, such as games, whereas AR can be applied to various fields.
For example, if a tourist traveling in London views a street in a certain direction through a camera built into a mobile phone having various functions, such as a global positioning system (GPS) function, the mobile phone may display augmented reality (AR) information about the surrounding environment and objects, such as restaurants and stores holding sales along the street, as an information overlay on the real street image captured by the camera.
However, as more AR data services are offered, a larger amount of data is likely to be displayed on a single screen. Thus, it is not easy for a user to obtain desired information from such a large amount of AR data.
Exemplary embodiments of the present invention provide an apparatus and method for providing an augmented reality (AR) user interface that allows a user to easily search for AR information.
Additional features of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention.
An exemplary embodiment provides a method for providing an augmented reality (AR) user interface, the method including: acquiring an image containing at least one object; recognizing the object from the acquired image; detecting AR information related to the recognized object; classifying the detected AR information into groups according to specific property information; and generating a user interface that displays the groups of AR information separately.
An exemplary embodiment provides an apparatus to provide an augmented reality (AR) user interface, the apparatus including: an image acquisition unit to acquire an image containing at least one object; a display unit to output the image acquired by the image acquisition unit and AR information; and a control unit to recognize the object from the image acquired by the image acquisition unit, detect AR information related to the recognized object, classify the detected AR information into one or more separate groups according to specific properties, and generate a user interface which displays the groups of AR information separately.
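By way of illustration only, the following Python sketch shows one way the above method and apparatus could fit together end to end. Every name in it (acquire_image, classify_into_groups, generate_ui, the sample database, and so on) is a hypothetical stand-in introduced here for explanation, not part of the disclosed embodiments.

```python
# A minimal, hypothetical sketch of the disclosed pipeline. All names are
# illustrative stand-ins, not part of the embodiments.
from collections import defaultdict


def acquire_image():
    # Stand-in for a camera capture: an image tagged with the objects it shows.
    return {"pixels": "...", "objects": ["tower_bridge", "cafe_01"]}


def recognize_objects(image):
    # Stand-in for marker or marker-less recognition; returns identifiers.
    return image["objects"]


def detect_ar_info(obj_id, ar_database):
    # AR information shares the identifier of the object it is mapped to.
    return ar_database.get(obj_id, [])


def classify_into_groups(ar_info_list, prop="category"):
    # Classify the detected AR information according to a specific property.
    groups = defaultdict(list)
    for info in ar_info_list:
        groups[info[prop]].append(info)
    return groups


def generate_ui(groups):
    # One UI "page" per group, so each group is displayed separately.
    return [{"title": name, "items": items} for name, items in groups.items()]


if __name__ == "__main__":
    ar_database = {
        "cafe_01": [{"category": "restaurant", "text": "Cafe, open till 9 pm"}],
        "tower_bridge": [{"category": "landmark", "text": "Completed in 1894"}],
    }
    image = acquire_image()
    detected = [info for obj_id in recognize_objects(image)
                for info in detect_ar_info(obj_id, ar_database)]
    for page in generate_ui(classify_into_groups(detected)):
        print(page["title"], "->", [item["text"] for item in page["items"]])
```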
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed. Other features and aspects may be apparent from the following detailed description, the drawings, and the claims.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention, and together with the description serve to explain the principles of the invention.
Throughout the drawings and the detailed description, unless otherwise described, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The relative size and depiction of these elements may be exaggerated for clarity, illustration, and convenience.
The invention is described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure is thorough, and will fully convey the scope of the invention to those skilled in the art. In the drawings, the size and relative sizes of layers and regions may be exaggerated for clarity. Like reference numerals in the drawings denote like elements.
Terms used herein are defined in consideration of the functions of the embodiments, and may vary according to the intention or practice of a user or an operator; thus, the terms used herein should be interpreted based on the contents of the entire specification.
The image acquisition unit 110 may be a camera or an image sensor that acquires an image containing at least one AR object (hereinafter referred to as an “object”). In addition, the image acquisition unit 110 may be a camera which can zoom in or out manually or automatically and/or rotate an image manually or automatically under the control of the control unit 140. The term “object” used in this description indicates a marker in the real world or a marker-less object.
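By way of illustration, a camera-like acquisition unit that can zoom and rotate under external control might be sketched as follows; the class and method names are assumptions made for this example, not an actual device API.

```python
class ImageAcquisitionUnit:
    """Hypothetical camera wrapper; the class and method names are assumptions."""

    def __init__(self):
        self.zoom_level = 1.0
        self.rotation_deg = 0.0

    def zoom(self, factor):
        # Zoom in/out, manually or under control-unit command.
        self.zoom_level = max(0.1, self.zoom_level * factor)

    def rotate(self, degrees):
        # Rotate the image, manually or under control-unit command.
        self.rotation_deg = (self.rotation_deg + degrees) % 360

    def acquire_image(self):
        # Stand-in for a real capture call.
        return {"zoom": self.zoom_level, "rotation": self.rotation_deg}


unit = ImageAcquisitionUnit()
unit.zoom(2.0)
unit.rotate(90)
print(unit.acquire_image())  # {'zoom': 2.0, 'rotation': 90.0}
```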
The display unit 120 may output the image acquired by the image acquisition unit 110 and AR information about an object contained in the image in an overlapping manner. The control unit 140 may output the AR information through a user interface (UI) that displays the groups of AR information created according to specific properties.
The database 130 may store object recognition information 161, AR information 162, interface determination information 163, and context information 164.
The object recognition information 161 may be mapping information for recognizing an object, and may contain specific object feature information. The object feature information may include a shape, a color, a texture, a pattern, a color histogram, an architectural feature, and/or edge information of an object. The control unit 140 may determine an object by comparing acquired object recognition information with the stored object recognition information 161. In one example, the acquired object recognition information and/or the stored object recognition information 161 may include object location information, such as global positioning system (GPS) data. Thus, objects which have the same object feature information may be recognized as different objects according to their locations. The objects recognized based on the acquired object recognition information and/or the stored object recognition information 161 may be distinguished from each other by identifiers assigned thereto.
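By way of illustration, the location-aware matching described above, in which identical feature information at distant locations yields different objects, might be sketched as follows. The feature comparison is reduced to a simple equality test, and all names, coordinates, and thresholds are hypothetical.

```python
import math


def geo_distance_m(a, b):
    # Equirectangular approximation; adequate for short distances.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000.0  # Earth radius in metres


def match_object(acquired, stored_records, max_distance_m=50.0):
    # Objects with the same feature information but distant GPS locations
    # are recognized as different objects and keep distinct identifiers.
    for record in stored_records:
        if (record["features"] == acquired["features"]
                and geo_distance_m(record["gps"], acquired["gps"]) <= max_distance_m):
            return record["identifier"]
    return None


stored = [
    {"identifier": "cafe_seoul", "features": "red-awning",
     "gps": (37.5665, 126.9780)},
    {"identifier": "cafe_busan", "features": "red-awning",
     "gps": (35.1796, 129.0756)},
]
seen = {"features": "red-awning", "gps": (37.5666, 126.9781)}
print(match_object(seen, stored))  # -> cafe_seoul
```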
The AR information 162 may contain information related to an object. For example, if the object is a tree, the AR information 162 may be tag images that represent the name of the tree, its habitat, its ecological characteristics, and so on. The AR information 162 may have the same identifier as the object to which it is mapped.
The interface determination information 163 may be information for generating a UI to provide the detected AR information. The interface determination information 163 may include property information, which serves as the classification criteria for classifying the detected AR information, and UI formation information about how the UI is to be formed according to the size of a group.
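By way of illustration, property-based groups could be mapped to UI layouts chosen by group size as in the following sketch; the layout names and size thresholds are assumptions for this example only.

```python
def choose_layout(group_size):
    # Hypothetical UI formation rule keyed to the size of a group.
    if group_size <= 3:
        return "inline-tags"
    if group_size <= 10:
        return "scrollable-list"
    return "paged-panel"


def build_ui_spec(groups):
    # One UI specification per group; the layout follows the group size.
    return {name: {"layout": choose_layout(len(items)), "items": items}
            for name, items in groups.items()}


groups = {"restaurant": ["a", "b", "c", "d"], "landmark": ["e"]}
print(build_ui_spec(groups))
# restaurant -> 'scrollable-list' (4 items); landmark -> 'inline-tags' (1 item)
```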
The context information 164 may include information for personalizing AR information. For example, the context information 164 may include user information such as the name, age, and gender of the user; words frequently used in the user's text messages; frequently used applications and keywords; the current time zone and location; and the user's emotional state. The control unit 140 may filter AR information by use of the context information 164 and output the filtered AR information. Moreover, the control unit 140 may use the context information 164 to prioritize the display of AR information in the UI.
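By way of illustration, filtering and prioritizing AR information with context information might look like the following sketch; the scoring rule and the field names are assumptions made for this example.

```python
def personalize(ar_info_list, context):
    # Hypothetical scoring rule: reward items that match the user's frequent
    # keywords and preferred categories drawn from the context information.
    def score(info):
        text = info.get("text", "").lower()
        s = sum(kw in text for kw in context.get("keywords", ()))
        if info.get("category") in context.get("preferred_categories", ()):
            s += 2
        return s

    # Rank all items; drop zero-score items only if something else remains.
    ranked = sorted(ar_info_list, key=score, reverse=True)
    filtered = [info for info in ranked if score(info) > 0]
    return filtered or ranked


context = {"keywords": ["coffee"], "preferred_categories": ["restaurant"]}
items = [{"category": "landmark", "text": "Completed in 1894"},
         {"category": "restaurant", "text": "Coffee and cake"}]
print(personalize(items, context))  # only the restaurant item survives the filter
```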
The control unit 140 may classify the AR information into groups two or more times. For example, the control unit 140 may further classify a group of restaurants into sub-groups using the context information, display the sub-group of restaurants ranked first in preference on the screen, and vertically arrange the remaining sub-groups of restaurants in order of preference.
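By way of illustration, such a two-stage classification with preference-ranked sub-groups might be sketched as follows; the property names and the preference table are hypothetical.

```python
from collections import defaultdict


def classify_twice(ar_info_list, primary_key, secondary_key, preference):
    # First pass: classify into primary groups by one property.
    primary = defaultdict(list)
    for info in ar_info_list:
        primary[info[primary_key]].append(info)
    # Second pass: split each group into sub-groups and order them by
    # preference rank; the first sub-group would be shown on screen and the
    # rest arranged vertically below it.
    result = {}
    for name, items in primary.items():
        sub = defaultdict(list)
        for info in items:
            sub[info[secondary_key]].append(info)
        result[name] = sorted(sub.items(),
                              key=lambda kv: preference.get(kv[0], 0),
                              reverse=True)
    return result


infos = [{"category": "restaurant", "cuisine": "korean", "text": "A"},
         {"category": "restaurant", "cuisine": "italian", "text": "B"}]
print(classify_twice(infos, "category", "cuisine", {"korean": 2, "italian": 1}))
```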
The control unit 140 may control the above elements to implement an AR user interface by providing a user interface for each group of AR information, and may be a hardware processor or a software module running on such a processor. The operation of the control unit 140 is described in detail with the AR user interface providing method described later.
The manipulation unit 160, which may be a user interface unit, may receive information from the user, and examples of the manipulation unit 160 may include a key input unit that generates key data each time a key button is pressed, a touch sensor, or a mouse. Information about moving an AR information group may be input through the manipulation unit 160. Further, the manipulation unit 160 and the display unit 120 may be combined into a touch screen.
The temporary memory 170 may store in advance the AR information of the AR information groups neighboring the group currently displayed on the display unit 120, thereby preventing the processing delay that might otherwise occur in detecting the AR information of a neighboring group when a group movement request signal is later received through the manipulation unit 160.
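By way of illustration, such prefetching of neighboring groups might be sketched as follows; the class name and the callback interface are assumptions for this example.

```python
class NeighborPrefetchCache:
    # Hypothetical temporary memory that pre-loads the AR information of the
    # groups adjacent to the currently displayed group.

    def __init__(self, group_names, load_group):
        self.group_names = group_names
        self.load_group = load_group  # callback that fetches a group's AR info
        self.cache = {}

    def show(self, index):
        name = self.group_names[index]
        # Serve from the cache if the group was prefetched earlier.
        current = self.cache.pop(name, None) or self.load_group(name)
        # Prefetch the neighbours so a later group-movement request is instant.
        for j in (index - 1, index + 1):
            if 0 <= j < len(self.group_names):
                neighbor = self.group_names[j]
                self.cache.setdefault(neighbor, self.load_group(neighbor))
        return current


cache = NeighborPrefetchCache(["food", "shops", "sights"],
                              lambda name: f"AR info for {name}")
print(cache.show(0))  # loads "food", prefetches "shops"
print(cache.show(1))  # "shops" is served from the temporary memory
```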
In operation 530, the control unit 140 detects AR information related to the recognized object. That is, AR information which has the same identifier as assigned to the recognized object is detected.
In operation 540, the control unit 140 determines, based on the detection result, whether AR information related to the one or more recognized objects is present, since there may be no AR information related to an object.
If the determination result indicates that AR information related to the recognized object is present, the control unit 140 classifies the pieces of AR information into groups according to specific reference properties in operation 550. At this time, the control unit 140 may classify the AR information at least twice; that is, the control unit 140 may primarily classify the AR information into groups and secondarily classify the AR information in each group into sub-groups. However, aspects are not limited thereto, and the control unit 140 may classify the AR information once, twice, or more than twice, for example, three times.
In operation 560, the control unit 140 generates user interfaces to output the respective groups of AR information separately.
That is, the number of user interfaces corresponds to the number of groups of AR information. For example, if the number of groups is n, n user interfaces may be generated, one for each group.
In this case, the control unit 140 may use the context information to generate the user interface such that the displayed AR information is prioritized to enable efficient information searching. In operation 570, the control unit 140 displays the AR information through the generated user interface.
In addition, the control unit 140 may store or cache, in the temporary memory 170, the AR information of the groups neighboring the currently displayed group of AR information. As a result, the displayed AR information can be changed promptly upon receiving a signal from the user for moving from the current group to another group; for example, the moving signal may be generated in response to a key input, a touch, a multi-touch, a swipe, and the like.
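By way of illustration, mapping such moving signals to a change of the displayed group might look like the following sketch; the signal names are assumptions. In combination with the prefetch sketch shown earlier, the target group's AR information would typically already reside in the temporary memory 170, so the display changes without a noticeable delay.

```python
def handle_move_signal(current_index, signal, groups):
    # Hypothetical mapping from user input signals to group movement.
    step = {"swipe_left": 1, "key_next": 1,
            "swipe_right": -1, "key_prev": -1}.get(signal, 0)
    # Clamp to the valid range of group indices.
    return max(0, min(len(groups) - 1, current_index + step))


groups = ["food", "shops", "sights"]
print(handle_move_signal(0, "swipe_left", groups))   # -> 1 (the "shops" group)
print(handle_move_signal(0, "swipe_right", groups))  # -> 0 (already at the edge)
```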
As described above, pieces of AR information related to one or more objects contained in an acquired image are classified into one or more groups, and the groups of AR information are provided separately through a user interface. Hence, the user can quickly and accurately search for desired AR information.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.