1. Field of the Invention
The present invention relates generally to an apparatus and method for providing Augmented Reality (AR) using a camera. More particularly, the present invention relates to a method and apparatus for providing augmented information using a three-Dimensional (3-D) map.
2. Description of the Related Art
The various services and additional functions provided by a mobile device are gradually increasing, and among them, a camera function is considered essential. To enhance the practical value of a mobile device, and to satisfy the various desires of users, applications that combine a mobile device having a camera with the various additional services and functions of the mobile device are required. An example of such an application is a method of showing a user various information items for an object appearing in a camera image. For this purpose, augmented reality technology may be applied.
In general, augmented reality refers to combining a virtual object that the user desires to represent with the real world as a background. That is, augmented reality represents a virtual object as a thing existing in a real environment by combining the virtual object with the real environment. Real-time image recognition technology is increasingly used in this context, since a virtual object should be augmented at the correct location, and without error, even if the camera location or the user's pose changes, or if the real object on which the virtual object is shown moves.
Existing augmented reality services operate by comparing the camera image being captured with previously stored database images, such that an extracted matching image is augmented and shown on the camera image. However, since the existing augmented reality service as described above uses a real-time image acquired through a camera, it is restricted by the Field Of View (FOV) of the camera. For example, when a user of a mobile device having the camera is walking down a street where a number of buildings are located, the camera can photograph the one or more foremost buildings in the direction faced by the lens of the camera, but cannot photograph other buildings or roads obstructed by the foremost buildings. Accordingly, there is a problem in that information for the other buildings or roads positioned behind the foremost buildings is unavailable, and hence augmented information for them cannot be provided to the user.
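The comparison between a live camera image and previously stored database images can be illustrated with a short sketch. The following uses ORB feature matching from OpenCV purely as an example; existing services may use any recognition technique, and the threshold values here are illustrative assumptions.

```python
# Minimal, illustrative sketch of the database-matching step performed by
# existing AR services: features from the live camera frame are compared
# against features from previously stored database images. The Hamming
# distance cutoff and minimum match count are assumptions for illustration.
import cv2

def find_matching_db_image(camera_frame, db_images, min_matches=40):
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_f, des_f = orb.detectAndCompute(camera_frame, None)
    if des_f is None:
        return None  # no features found in the live frame
    best_name, best_count = None, 0
    for name, db_img in db_images.items():  # dict: name -> grayscale image
        kp_d, des_d = orb.detectAndCompute(db_img, None)
        if des_d is None:
            continue
        matches = matcher.match(des_f, des_d)
        good = [m for m in matches if m.distance < 48]  # assumed cutoff
        if len(good) > best_count:
            best_name, best_count = name, len(good)
    return best_name if best_count >= min_matches else None
```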
Therefore, a need exists for a method and apparatus for providing a greater amount of augmented information using a 3-D map.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present invention.
Aspects of the present invention are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present invention is to provide an apparatus and method capable of providing various pieces of information for objects included in a current image that is acquired through a camera, using a three-Dimensional (3-D) map.
Another aspect of the present invention is to provide an apparatus and method capable of providing augmented information for objects included in a current image acquired through a camera using a 3-D map.
At least one of the above-mentioned aspects may be achieved by the configuration elements set forth below.
According to an aspect of the present invention, an apparatus for providing augmented information using a 3-D map in a mobile device is provided. The apparatus includes a camera, a display that displays a current image including a first object acquired in real time through the camera, and a controller that performs control such that the current image acquired in real time and an actual 3-D image, acquired and stored by previously photographing the first object, are mixed and displayed on the display.
According to another aspect of the present invention, a method of providing augmented information using a 3-D map in a mobile device is provided. The method includes acquiring a current image including a first object in real time using a camera, mixing the current image acquired in real time using the camera with an actual 3-D image acquired and stored by previously photographing the first object, and displaying the mixed image on a display.
According to another aspect of the present invention, at least one non-transitory processor readable medium is provided. The medium stores a computer program of instructions configured to be readable by at least one processor for instructing the at least one processor to execute a computer process for performing the methods herein.
According to another aspect of the present invention, it is possible to provide various information items for objects included in a current image obtained through a camera using a 3-D map.
According to another aspect of the present invention, it is also possible to clearly indicate a driving route of a vehicle in a current image acquired through a camera using a 3-D map.
According to another aspect of the present invention, even if an object of the user's interest in a current image acquired through a camera is obstructed by one or more other objects, it is possible to provide augmented information of the interest object using a 3-D image.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
The above and other aspects, features, and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the present invention as described by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
The apparatus 100 includes a controller 110, a mobile communication module 120, a sub-communication module 130, a multimedia module 140, a camera module 150, a GPS module 155, an input/output module 160, a sensor module 170, a storage unit 175, a power supply unit 180, a touch screen 190, and a touch screen controller 195.
The controller 110 may include a Central Processing Unit (CPU) 111, a Read Only Memory (ROM) 112 in which one or more control programs for controlling the apparatus 100 are stored, and a Random Access Memory (RAM) 113 which stores a signal or data output from the outside of the apparatus 100 or is used as a memory region for operations performed in the apparatus 100. The CPU 111 may include a single core, a dual core, a triple core, or a quad core. The CPU 111, the ROM 112, and the RAM 113 may be connected with each other through internal buses.
The controller 110 may control the mobile communication module 120, the sub-communication module 130, the multimedia module 140, the camera module 150, the GPS module 155, the input/output module 160, the sensor module 170, the storage unit 175, the power supply unit 180, the touch screen 190, and the touch screen controller 195.
The mobile communication module 120 causes the apparatus 100 to be connected with another external device through mobile communication using one or more antennas (not illustrated) according to the control of the controller 110. The mobile communication module 120 transmits/receives wireless signals for voice communication, image communication, a Short Message Service (SMS), or a Multimedia Message Service (MMS) with a portable phone (not illustrated) that has a telephone number input to the apparatus 100, a smart phone (not illustrated), a tablet PC, or another device (not illustrated).
The sub-communication module 130 may include at least one of a Wireless Local Area Network (WLAN) module 131 and a Near Field Communication (NFC) module 132. For example, the sub-communication module 130 may include only the WLAN module 131, only the NFC module 132, or both the WLAN module 131 and the NFC module 132.
The WLAN module 131 may be connected to the Internet at a place where a wireless Access Point (AP) (not illustrated) is available, according to the control of the controller 110. The WLAN module 131 supports the Institute of Electrical and Electronics Engineers (IEEE) 802.11x wireless LAN standard. The NFC module 132 may perform short range communication between the apparatus 100 and an image forming apparatus (not illustrated) according to the control of the controller 110. In exemplary embodiments, the short range communication may include Bluetooth communication, Infrared Data Association (IrDA) communication, or the like.
The apparatus 100 may include at least one of the mobile communication module 120, the WLAN module 131, and the NFC module 132 according to the performance thereof. For example, the apparatus 100 may include a combination of the mobile communication module 120, the WLAN module 131, and the NFC module 132 according to the performance thereof.
The multimedia module 140 may include the broadcasting communication module 141, the audio reproducing module 142, or the moving image reproducing module 143. The broadcasting communication module 141 may receive a broadcasting signal (for example, a TV broadcasting signal, a radio broadcasting signal, or a data broadcasting signal) delivered from a broadcasting station, and additional broadcasting information (for example, Electronic Program Guide (EPG) or Electronic Service Guide (ESG) information), through a broadcasting communication antenna (not illustrated) according to the control of the controller 110. The audio reproducing module 142 may reproduce a stored or received digital audio file (for example, having a file extension of .mp3, .wma, .ogg, or .wav) according to the control of the controller 110. The moving image reproducing module 143 may reproduce a stored or received digital moving image file (for example, having a file extension of .mpeg, .mpg, .mp4, .avi, .mov, or .mkv) according to the control of the controller 110. The moving image reproducing module 143 may also reproduce a digital audio file.
The multimedia module 140 may include the audio reproducing module 142 and the moving image reproducing module 143, but not the broadcasting communication module 141. In addition, the audio reproducing module 142 or the moving image reproducing module 143 of the multimedia module 140 may be included in the controller 110.
The camera module 150 may include at least one of the first camera 151 and the second camera 152, which photograph a still image or a moving image according to the control of the controller 110. In addition, the first camera 151 or the second camera 152 may include an auxiliary light source, for example, a flash (not illustrated), that provides the quantity of light required for photographing. The first camera 151 may be disposed on the front surface of the apparatus 100, and the second camera 152 may be disposed on the rear surface of the apparatus 100. Alternatively, the first camera 151 and the second camera 152 may be arranged adjacent to each other (for example, with an interval between the first camera 151 and the second camera 152 in a range of 1 cm to 8 cm) to photograph a 3-D still image or a 3-D moving image.
The GPS module 155 may receive radio waves from a plurality of GPS satellites in Earth orbit (not illustrated), and may calculate the location of the apparatus 100 using the time of arrival of the radio waves from the GPS satellites (not illustrated) to the apparatus 100.
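The time-of-arrival calculation can be illustrated with a simplified sketch. The following assumes perfectly synchronized clocks and known satellite positions, and therefore omits the receiver clock-bias term that a real GPS solution must also estimate; it illustrates the principle rather than the GPS module 155 itself.

```python
# Simplified position estimate from signal arrival times: each arrival time
# yields a range to a satellite, and a Gauss-Newton iteration finds the
# point whose distances to the satellites best match those ranges. A real
# receiver needs at least four satellites to also solve for clock bias.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def estimate_position(sat_positions, arrival_times, transmit_times):
    sat_positions = np.asarray(sat_positions, dtype=float)  # (N, 3), meters
    ranges = C * (np.asarray(arrival_times) - np.asarray(transmit_times))
    pos = np.zeros(3)  # initial guess: Earth's center
    for _ in range(10):  # Gauss-Newton iterations
        diffs = pos - sat_positions            # (N, 3)
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges
        jacobian = diffs / dists[:, None]      # d(dist)/d(pos)
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        pos -= step
    return pos
```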
The input/output module 160 may include at least one of the buttons 161, the microphone 162, the speaker 163, the vibration motor 164, the connector 165, and the keypad 166.
The buttons 161 may be formed on the front surface, a side surface, or the rear surface of the apparatus 100, and may include at least one of a power/locking button (not illustrated), a volume button (not illustrated), a menu button, a home button, a back button, and a search button.
The microphone 162 receives a voice or sound input and converts the input into an electrical signal according to the control of the controller 110.
The speaker 163 may output sounds corresponding to various signals of the mobile communication module 120, the sub-communication module 130, the multimedia module 140, or the camera module 150 (for example, sounds that correspond to a wireless signal, a broadcasting signal, a digital audio file, a digital moving image file, a photograph, or the like) to the outside of the apparatus 100 according to the control of the controller 110. The speaker 163 may also output sounds corresponding to the functions performed by the apparatus 100 (for example, a button operation sound, or a communication connection sound for a telephone communication). One or a plurality of speakers 163 may be formed at an appropriate position or positions on the housing of the apparatus 100.
The vibration motor 164 may convert an electrical signal into mechanical vibration according to the control of the controller 110. For example, when the apparatus 100 in the vibration mode receives a voice communication from another device (not illustrated), the vibration motor 164 is operated. One or a plurality of vibration motors 164 may be provided in the housing of the apparatus 100. The vibration motor 164 may be operated in response to a user's touch action that touches the touch screen 190 or a continuous touch movement on the touch screen 190.
The connector 165 may be used as an interface for connecting the apparatus 100 with an external device (not illustrated) or with a power source (not illustrated). According to the control of the controller 110, the data stored in the storage unit 175 of the apparatus 100 may be transmitted to, or data may be received from, an external device (not illustrated) through a wire cable connected to the connector 165. Through a wire cable connected to the connector 165, power may be input from the power source (not illustrated), or a battery (not illustrated) may be charged.
The keypad 166 may receive a key input from the user for controlling the apparatus 100. The keypad 166 may take the form of a physical keypad (not illustrated) formed on the apparatus 100 or a virtual keypad (not illustrated) displayed on the touch screen 190. The physical keypad (not illustrated) formed on the apparatus 100 may be omitted according to the performance or configuration of the apparatus 100.
The sensor module 170 includes at least one sensor that detects the condition of the apparatus 100. In exemplary embodiments, the sensor module 170 may include a proximity sensor that detects whether the user approaches the apparatus 100, an illumination sensor (not illustrated) that detects the quantity of light around the apparatus 100, or a motion sensor (not illustrated) that detects the motion of the apparatus 100 (for example, the rotation of the apparatus 100, its acceleration, or whether a vibration is applied to the apparatus 100). At least one of the sensors may detect the condition of the apparatus 100, and may produce and transmit a signal corresponding to the detection to the controller 110. Sensors may be added to or omitted from the sensor module 170 according to the performance of the apparatus 100.
The storage unit 175 may store signals or data input/output to correspond to operations of the mobile communication module 120, the sub-communication module 130, the multimedia module 140, the camera module 150, the GPS module 155, the input/output module 160, the sensor module 170, and the touch screen 190 according to the control of the controller 110. The storage unit 175 may also store control programs and applications for controlling the apparatus 100 or the controller 110.
The term “storage unit” may include the storage unit 175, the ROM 112 and the RAM 113 in the controller 110, or a memory card (not illustrated) (for example, an SD card or a memory stick) installed in the apparatus 100. The storage unit 175 may include a non-volatile memory, a volatile memory, a Hard Disc Drive (HDD), or a Solid State Drive (SSD).
The power supply unit 180 may supply power to one or more batteries (not illustrated) disposed in the housing of the apparatus 100 according to the control of the controller 110. The one or more batteries (not illustrated) supply power to the apparatus 100. Also, the power supply unit 180 may supply power input from an external power source (not illustrated) through a wire cable connected to the connector 165 of the apparatus 100.
The touch screen 190 may provide user interfaces corresponding to various services (for example, a telephone conversation, a data transmission, a broadcast, and photography) to the user. The touch screen 190 may transmit an analog signal, which is input to a user interface and corresponds to at least one touch, to the touch screen controller 195. The touch screen 190 may receive an input of at least one touch through the user's body (for example, through fingers, including a thumb) or through a touchable input means (for example, a stylus pen). Also, the touch screen 190 may receive an input of a continuous movement of one touch among the at least one touch. The touch screen 190 may transmit an analog signal corresponding to the continuous movement of the touch to the touch screen controller 195.
In exemplary embodiments of the present invention, the touch is not limited to the touch between the touch screen 190 and the user's body or to the touchable input means, and may include a contactless touch (for example, when a detectable space between the touch screen 190 and the user's body or the touchable input means is not more than 1 mm). The space detectable by the touch screen 190 may be changed according to the performance or configuration of the apparatus 100.
The touch screen 190 may be implemented in, for example, a resistive type, a capacitive type, an infrared type, or an acoustic wave type.
The touch screen controller 195 converts an analog signal received from the touch screen 190 into a digital signal (for example, having X and Y coordinates), and transmits the digital signal to the controller 110. The controller 110 may control the touch screen 190 using the digital signal received from the touch screen controller 195. For example, the controller 110 may cause a shortcut icon (not illustrated) displayed on the touch screen 190 to be selected, or may execute the shortcut icon (not illustrated). In addition, the touch screen controller 195 may be included in the controller 110.
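The conversion performed by the touch screen controller 195, from an analog touch signal to digital X and Y coordinates, can be illustrated with a trivial sketch. The 12-bit ADC range and the screen resolution below are assumptions for illustration only.

```python
# Illustrative sketch of mapping raw analog readings (modeled here as ADC
# values) to digital screen coordinates, as the touch screen controller 195
# does in principle. The 12-bit range and 1080x1920 panel are assumed.
def adc_to_screen(raw_x, raw_y, adc_max=4095, width=1080, height=1920):
    x = round(raw_x / adc_max * (width - 1))
    y = round(raw_y / adc_max * (height - 1))
    return x, y

# Example: a touch near the center of the panel
print(adc_to_screen(2048, 2048))  # -> (540, 960)
```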
The touch screen 190 includes a main screen 210 and a bottom bar 220.
The main screen 210 is a region where one or more applications are executed.
The bottom bar 220 extends horizontally at the lower end of the touch screen 190, and may include a plurality of standard function buttons 222 to 228.
In addition, at the top end of the touch screen 190, a top end bar (not illustrated) may be formed that displays the states of the apparatus 100, for example, the state of charge of the battery, the intensity of a received signal, and the current time.
Meanwhile, according to the Operating System (OS) of the apparatus 100, or to an application executed in the apparatus 100, the bottom bar 220 and the top end bar (not illustrated) may not be displayed on the touch screen 190. If neither the bottom bar 220 nor the top end bar (not illustrated) is displayed on the touch screen 190, the main screen 210 may be formed over the entire area of the touch screen 190. In addition, the bottom bar 220 and the top end bar (not illustrated) may be displayed translucently, superimposed on the main screen 210.
The 3-D map providing unit 300 includes a map Database (DB) 310, a controller 320, and a communication unit 330.
In exemplary embodiments, the map DB 310 stores actual 3-D maps consisting of actual images of cities or streets photographed by a camera. Here, an actual 3-D map means a 3-D map implemented by photographing a real street using a vehicle, an airplane, or the like, and then using the photographed actual images. Such an actual 3-D map is obtained by photographing a city or a street using a stereo camera mounted on a vehicle, or the like. Therefore, it is possible to obtain not only the three-dimensional coordinates (x, y, and z coordinates) of the objects included in the photographed image, but also depth information corresponding to the distance between the camera used in the photographing and the objects. In addition, such a 3-D map may be implemented by photographing a plurality of two-Dimensional (2-D) images of a wide area using an airplane, extracting depth information from the overlapping area between each two neighboring 2-D images among the photographed images, and performing three-dimensional modeling through three-dimensional mapping. In addition, each of the objects included in the actual 3-D map has a plurality of pieces of 3-D information and depth information. For example, each of the plurality of pixels displaying an object may have 3-D information and depth information. As a result, the actual 3-D map may differentiate not only the position of a specific building, but also the contour of the building, e.g., the front, rear, and side views of the specific building, and may also differentiate the respective floors of the building. On the contrary, existing 2-D maps differ from the above-described 3-D map in that, since only GPS information is used with a 2-D map, only a single location information item can be provided for a specific building, and thus the front and rear sides, or respective floors, of the building cannot be differentiated in detail.
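The depth recovery described above, obtaining a per-pixel distance from a stereo pair, can be sketched briefly. The following is a minimal illustration using OpenCV's block matcher, with illustrative calibration values (focal length in pixels and stereo baseline); the patent does not prescribe a particular depth-extraction algorithm.

```python
# Minimal sketch of recovering per-pixel depth from a stereo pair, as used
# when building the actual 3-D map: depth = focal_length * baseline /
# disparity. Inputs must be 8-bit grayscale images; calibration values here
# are illustrative assumptions.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.5):
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]  # meters
    return depth  # per-pixel depth map, 0 where no match was found
```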
The controller 320 receives a request for the stored 3-D images and augmented information from the mobile device 100 through the communication unit 330, and provides the 3-D images and augmented information to the mobile device 100 in response to the request. The 3-D images and augmented information provided to the mobile device 100 by the controller 320 will be described in detail below.
In step S401, the camera 150 of the mobile device 100 acquires an external current image including a first object in real time.
The controller 110 of the mobile device 100 requests augmented information for the first object from the 3-D map providing unit 300 in step S402. In step S402, the mobile device 100 transmits the GPS information (location information) acquired by the GPS module 155, the aiming direction information of the camera 150 acquired by the sensor module 170, and the external current image acquired by the camera 150 to the 3-D map providing unit 300. The augmented information may include a previously photographed 3-D image of the first object. The augmented information may further include various information items for a Point Of Interest (POI), for example, vehicle navigation information, buildings' names, information regarding specific stores positioned inside the buildings, advertisement information, and the like.
The 3-D map providing unit 300 compares the GPS information, the aiming direction information of the camera 150, and the external current image received from the mobile device 100 with a previously stored actual 3-D image in step S403, to acquire a previously stored 3-D image that matches the first object in step S404.
The 3-D map providing unit 300 transmits the acquired previously stored 3-D image matched to the first object to the mobile device 100 in step S405. The 3-D map providing unit 300 may additionally transmit predetermined augmented information together with the acquired previously stored 3-D image to the mobile device 100.
The controller 110 of the mobile device 100 synthesizes the external current image and the matched actual image in step S406. In step S407, the controller 110 of the mobile device 100 executes a control so that the display 190 displays a synthesized image in which the external current image acquired by the camera 150 and the previously stored 3-D image, which corresponds to at least a part of the first object and is received from the 3-D map providing unit 300, are synthesized with each other. For this purpose, the controller 110 may itself produce the synthesized image and display it on the display 190. Alternatively, the synthesized image may be produced in the 3-D map providing unit 300 and provided to the mobile device 100. The 3-D map providing unit 300 may be configured as a server positioned remotely from the mobile device 100, or may be formed inside the mobile device 100.
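The exchange in steps S401 to S407 can be summarized schematically. The sketch below is illustrative only: the class names, fields, and the `lookup` call are hypothetical, since the patent does not prescribe any wire format or API.

```python
# Schematic sketch of the exchange between the mobile device 100 and the
# 3-D map providing unit 300. All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AugmentationRequest:          # sent in step S402
    gps_location: tuple             # from the GPS module 155
    aiming_direction: tuple         # from the sensor module 170
    current_image: bytes            # frame acquired by the camera 150 (S401)

@dataclass
class AugmentationResponse:         # returned in step S405
    matched_3d_image: bytes         # previously stored image of the first object
    poi_info: dict                  # names, stores, ads, navigation info

def handle_request(req, map_db):
    # Steps S403-S404: compare location, direction, and image against the
    # stored actual 3-D map to find the matching previously stored image.
    # map_db.lookup is a hypothetical interface, assumed for illustration.
    match = map_db.lookup(req.gps_location, req.aiming_direction, req.current_image)
    return AugmentationResponse(match.image, match.poi_info)
```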
According to the present exemplary embodiment, the controller 110 of the mobile device 100 may adjust the transparency of the building E 435 and the building F 436, which are positioned in front of the building D 434, so that the entire appearance of the building D 434 may be displayed on the display 190. Here, it may be assumed that the building D 434 is a first object and that the building E 435 and the building F 436 are second objects.
Meanwhile, in the external current image acquired in real time by the camera 150 of the mobile device 100, the building E 435 and the building F 436 obstruct a part or the entirety of the building D 434. However, when the building E 435 and the building F 436 are displayed transparently or translucently as described above, there is no image information for a region 434-1 of the building D 434 which has been obstructed by the building E 435 and the building F 436. Therefore, the image is not smoothly displayed between the obstructed region 434-1 and a non-obstructed region 434-2. In order to address this problem, the exemplary embodiments herein may receive an image corresponding to the obstructed region 434-1, from among previously photographed images of the building D 434, from the 3-D map providing unit 300, and may display the image on the obstructed region 434-1. As a result, for the obstructed region 434-1 of the building D 434, the previously photographed image received from the 3-D map providing unit 300 is displayed, while for the non-obstructed region 434-2 of the building D 434 and for the other objects 420, 431, 432, 433 and 434, the images photographed in real time by the camera 150 are displayed; the two are mixed and displayed together. When there is a user's command through the input/output module 160, or when it is determined that at least a part of the arrow 440 is obstructed by the other objects 435 and 436, the controller 110 may determine whether to display the building E 435 and the building F 436 transparently or translucently, or whether to synthesize the current image with a previously photographed image received from the 3-D map providing unit 300 and display the synthesized image on the display 190 of the mobile device 100.
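The two display options described above, rendering the obstructing buildings translucently or filling the obstructed region 434-1 with previously photographed pixels, can be sketched as simple image operations. This is a minimal illustration assuming NumPy image arrays and externally supplied masks (for example, derived from the 3-D map's per-pixel depth information).

```python
# Minimal sketch, under the stated assumptions, of (a) displaying the
# obstructing buildings translucently and (b) filling the obstructed region
# of the first object with previously photographed pixels received from the
# 3-D map providing unit. Masks are boolean (H, W) arrays; images are
# (H, W, 3) uint8 arrays.
import numpy as np

def blend_translucent(current, obstructing_mask, alpha=0.4):
    # Fade pixels belonging to the obstructing buildings toward white.
    out = current.astype(np.float32)
    out[obstructing_mask] = alpha * out[obstructing_mask] + (1 - alpha) * 255.0
    return out.astype(np.uint8)

def fill_obstructed_region(current, stored_image, obstructed_mask):
    # Replace only the obstructed region of the first object with the
    # previously photographed pixels; everything else stays live.
    out = current.copy()
    out[obstructed_mask] = stored_image[obstructed_mask]
    return out
```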
It will be appreciated that the methods for providing augmented information using a 3-D map according to the exemplary embodiments of the present invention may be implemented in the form of hardware, software, or a combination of hardware and software. Any such software may be stored in a volatile or non-volatile storage device such as a ROM, in a memory such as a RAM, a memory chip, a memory device, or a memory integrated circuit, or in a storage medium that is optically or magnetically recordable and readable by a machine (e.g., a computer), such as a CD, a DVD, a magnetic disc, or a magnetic tape, irrespective of whether the software is erasable or rewritable. The inventive augmented information providing method may be implemented by a computer or a portable terminal that includes a controller and a memory, in which the memory is an example of a machine-readable storage medium suitable for storing a program or programs that contain instructions for implementing the exemplary embodiments of the present invention. Accordingly, the present invention includes a program that contains codes for implementing the apparatuses or methods defined in the accompanying claims, and a machine-readable (for example, computer-readable) storage medium that stores the program. In addition, the programs may be electronically transmitted through an arbitrary medium, for example, communication signals transmitted through a wired or wireless connection, and the present invention properly includes equivalents thereto.
Further, the apparatus for providing augmented information using a 3-D map may receive and store the program from a program providing apparatus connected by wire or wirelessly. The program providing apparatus may include a memory for storing a program that contains instructions for executing the augmented information providing method using the 3-D map, a communication unit for performing wired or wireless communication with the augmented information providing apparatus using the 3-D map, and a controller for transmitting the corresponding program to the communication unit, either in response to a request of the augmented information providing apparatus using the 3-D map or automatically.
While the present invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the appended claims.
This application claims the benefit under 35 U.S.C. §119(e) of a US Provisional application filed on Jun. 6, 2012 in the U.S. Patent and Trademark Office and assigned Ser. No. 61/656,134, and under 35 U.S.C. §119(a) of a Korean patent application filed on Mar. 15, 2013 in the Korean Intellectual Property Office and assigned Serial No. 10-2013-0027752, the entire disclosure of each of which is hereby incorporated by reference.