Augmented reality (AR) is the integration of digital information with the real-world environment. In particular, AR provides a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data. AR may include the recognition of an image, an object, a face, or any element within the real-world environment, and the tracking of that element by utilizing real-time localization in space. AR may also include superimposing digital media, e.g., video, three-dimensional (3D) images, graphics, text, etc., on top of a view of the real-world environment so as to merge the digital media with the real-world environment.
Certain examples are described in the following detailed description and in reference to the drawings.
Images may be augmented in real-time and in semantic context with environmental elements to enhance a viewer's understanding or informational context. For example, a broadcast image of a sporting event may include superimposed visual elements, such as lines that appear to be on the field, or arrows that indicate the movement of an athlete. Thus, AR allows enhanced information about a user's real-world environment to be overlaid onto a view of that environment.
As discussed above, AR technology adds an additional layer of information, for example, overlaying computer-generated graphics on a real-time environment to aid in the interaction with the environment. Thus, AR may include the use of animated environments or videos. "Animated" may be defined to include motion of portions of an image, as distinguished from an image that is merely static. AR may also include incorporating targeted objects from the real world into a virtual world. The virtual world can be configured by and displayed on a computer device. The AR platform of the computer device can utilize multiple-object tracking to configure and track multiple objects, or triggers, isolated from images of the real world.
Some embodiments described herein enable a user of a computer device to create a customizable videogame environment without further involvement by videogame developers. In some embodiments, an image may be captured using a computer device, where the image may be a static image. The computer device may include a display on which the captured image can be displayed. The image can be sent to a matching engine of the computer device, and triggers defined by an augmented gaming platform can be matched to multiple real-world objects, which may be tracked using multi-object tracking techniques. A set of overlays associated with the triggers defined by the augmented gaming platform can be returned by the matching engine. The overlays can be input to a videogame software platform running on the computer device, thereby adding customizable variety to a videogame based on how the real-world objects in the image are arranged.
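The following minimal Python sketch illustrates this flow; the class names, trigger IDs, and asset names (Trigger, MatchingEngine, jump_pad.png, etc.) are hypothetical and are not specified by the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Trigger:            # a real-world object recognized in the captured image
    trigger_id: str
    position: tuple       # (x, y) location within the captured image

@dataclass
class Overlay:            # digital content the game draws for a trigger
    trigger_id: str
    asset: str

class MatchingEngine:
    """Matches real-world objects in an image to platform-defined triggers."""
    def __init__(self, trigger_overlays: dict):
        self.trigger_overlays = trigger_overlays   # trigger_id -> asset name

    def match(self, detected_objects: List[Trigger]) -> List[Overlay]:
        # Return the overlay associated with each recognized trigger.
        return [Overlay(t.trigger_id, self.trigger_overlays[t.trigger_id])
                for t in detected_objects
                if t.trigger_id in self.trigger_overlays]

# Hypothetical usage: objects detected in a captured static image.
detected = [Trigger("ramp", (120, 340)), Trigger("wall", (400, 210))]
engine = MatchingEngine({"ramp": "jump_pad.png", "wall": "barrier.png"})
overlays = engine.match(detected)       # input to the videogame platform
for o in overlays:
    print(o.trigger_id, "->", o.asset)
```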
The memory device 104 can include random access memory (RAM), e.g., SRAM, DRAM, zero capacitor RAM, eDRAM, EDO RAM, DDR RAM, RRAM, PRAM, read only memory (ROM), e.g., Mask ROM, PROM, EPROM, EEPROM, flash memory, or any other suitable memory systems. The computer device 100 may also include a graphics processing unit (GPU) 108. As shown, the processor 102 may be coupled through the bus 106 to the GPU 108. The GPU 108 may be configured to perform any number of graphics operations within the computer device 100. For example, the GPU 108 may be configured to render or manipulate graphic images, graphic frames, videos, or the like, that may be displayed to a user of the computer device 100. The computer device 100 may also include a storage device 110. The storage device 110 may include non-volatile storage devices, such as a solid-state drive, a hard drive, an optical drive, a flash drive, an array of drives, or any combinations thereof.
The processor 102 may be connected through the bus 106 to an input/output (I/O) device interface 114 configured to connect the computer device 100 to one or more I/O devices 116. The I/O devices 116 may include, for example, a keyboard, a mouse, or a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others. The I/O devices 116 may be built-in components of the computer device 100, or located externally to the computer device 100.
The processor 102 may also be linked through the bus 106 to a camera 118 to capture an image, where the captured image may be stored to the memory device 104. The processor 102 may also be linked through the bus 106 to a display interface 120 configured to connect the computer device 100 to display devices 122. A display device 122 may be a built-in component of the computer device 100, or connected externally to the computer device 100. The display device 122 may also include a display screen of a smartphone, a computing tablet, a computer monitor, a television, or a projector, among others. As a result of using the camera 118, the captured image may be viewed on the display screen of the display device 122 by a user. In some embodiments, the display screen may include a touch screen component, e.g., a touch-sensitive display. The touch screen component may allow a user to interact directly with the display screen of the display device 122 by touching the display screen with a pointing device, one or more fingers, or a combination of both.
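As an illustration of capturing a static image with a device camera and presenting it on a display, the following sketch uses OpenCV; it assumes a camera is attached and is not the specific implementation of camera 118 or display device 122.

```python
import cv2

cap = cv2.VideoCapture(0)        # the device camera
ok, image = cap.read()           # capture a single static image
cap.release()
if ok:
    cv2.imwrite("captured_scene.png", image)   # persist for later recognition
    cv2.imshow("captured image", image)        # show on the display screen
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```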
A wireless local area network (WLAN) 124 and a network interface controller (NIC) 126 may also be linked to the processor 102. The WLAN 124 may link the computer device 100 to a network 128 through a radio signal 130. Similarly, the NIC 126 may link the computer device 100 to the network 128 through a physical connection, such as a cable 132. Either network connection 124 or 126 allows the computer device 100 to access network resources, such as the Internet, printers, fax machines, email, instant messaging applications, and files located on storage servers.
The storage device 110 may include a number of modules configured to provide the computer device 100 with AR functionality. For example, an image recognition module 134 may be utilized to identify an image. The image recognition module 134 may be used, for example, to analyze an image and detect points of interest or fiducial markers using feature detection or other image processing methods. A fiducial is an object used in the field of view of an imaging system that appears in a produced image, and can be used as a point of reference or a measure. The interest points or markers can be used as a basis for tracked objects or triggers. In some examples, the image recognition module 134 need not be on the device itself, but may be hosted separately and contacted over the network 128.
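For illustration only, a feature detector such as ORB from OpenCV could serve the role described for the image recognition module 134, e.g., operating on the image saved in the earlier sketch; the disclosure does not name a particular detection method.

```python
import cv2

# Load the captured static image and convert to grayscale for feature detection.
image = cv2.imread("captured_scene.png")
if image is None:
    raise FileNotFoundError("no captured image available")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# ORB is one common detector/descriptor; the disclosure does not mandate one.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# Each keypoint is a candidate point of interest that the matching engine
# can later associate with a trigger.
print(f"Detected {len(keypoints)} points of interest")
```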
A matching engine 136 may be utilized to match the image and its interest points to triggers, which are objects from the image that are tracked. In embodiments discussed herein, the triggers can be used subsequently as customizable components of a videogame that increase gameplay longevity and enhance user interaction for relatively simple videogames. Each tracked object or trigger will have an associated augmented reality overlay that is pre-defined by developers of the videogame software.
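One plausible sketch of the matching step, assuming ORB-style binary descriptors and hypothetical thresholds; the actual matching engine 136 is not limited to this approach.

```python
import cv2

def count_good_matches(scene_desc, template_desc, max_distance=50):
    """Count descriptor matches between the scene and a stored trigger template.

    A brute-force Hamming matcher suits binary descriptors such as ORB's.
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(scene_desc, template_desc)
    return sum(1 for m in matches if m.distance < max_distance)

def match_triggers(scene_desc, trigger_templates, min_matches=25):
    """Return the IDs of triggers whose templates match the scene well enough."""
    return [trigger_id
            for trigger_id, template_desc in trigger_templates.items()
            if count_good_matches(scene_desc, template_desc) >= min_matches]
```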
An augmented reality platform 138 may process input from the matching engine 136, and use image and pattern recognition technology to superimpose content, e.g., 3D models and video, over the initial static image and the triggers obtained therefrom. The superposition may be triggered when the image recognition module 134 recognizes an image and when triggers are identified by the matching engine 136. The desired overlay information can be superimposed over the image from the camera by using the augmented reality platform 138. Thus, a videogame environment running on the computer device 100 can be placed as an overlay relative to an image being tracked. The three modules 134, 136, and 138 can make up an augmented gaming platform 140.
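A minimal sketch of superimposing an overlay sprite onto a camera frame, assuming a BGRA sprite (e.g., loaded with cv2.IMREAD_UNCHANGED) that fits entirely within the frame; this illustrates the superposition described for the augmented reality platform 138, not its actual implementation.

```python
import numpy as np

def superimpose(frame, overlay_bgra, x, y):
    """Alpha-blend a BGRA overlay sprite onto a BGR camera frame at (x, y).

    Assumes the sprite lies fully inside the frame boundaries.
    """
    h, w = overlay_bgra.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(float)
    color = overlay_bgra[:, :, :3].astype(float)
    alpha = overlay_bgra[:, :, 3:4].astype(float) / 255.0   # per-pixel opacity
    frame[y:y + h, x:x + w] = (alpha * color + (1 - alpha) * roi).astype(np.uint8)
    return frame
```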
Depending on the particular development of a target videogame, trigger items may interact with each other in a predefined manner. A developer, or a user, can have triggers defined in-game, specifically where and how a particular trigger functions relative to virtual constructions and other triggers in the game. The more triggers that are defined, the more customizable a videogame becomes for a user. The user can manipulate the environment from which the stored image is generated, thus enabling the user to add or remove a number of triggers in endlessly customizable arrangements designed to affect gameplay. In this way, a user is given the freedom to define the solution to a particular videogame, add elements in the form of recognized triggers that make the game more or less difficult, and perform other arrangements of triggers that can change the manner in which a user experiences the videogame.
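A hypothetical rule table illustrates how developer-defined trigger interactions might be encoded; the trigger types and interaction names are invented for illustration.

```python
# Hypothetical rule table: how pairs of trigger types interact in-game.
# Developers pre-define these; users vary gameplay by rearranging the
# physical objects that produce the triggers.
INTERACTIONS = {
    ("car", "ramp"): "launch",      # driving a car trigger over a ramp trigger
    ("car", "wall"): "crash",
    ("ramp", "wall"): None,         # no defined interaction
}

def interact(type_a, type_b):
    """Look up the predefined interaction for two trigger types, if any."""
    return INTERACTIONS.get((type_a, type_b)) or INTERACTIONS.get((type_b, type_a))

print(interact("ramp", "car"))   # -> "launch"
```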
The block diagram of
Triggers 208 may also be considered tracked objects. An augmented gaming platform capable of multi-object tracking is used to track the real-world objects 206, each of which will have an associated augmented reality overlay, which is specific to the videogame created by the developer. In this way, an overlay can be returned that may be ultimately used in a videogame environment implemented on the computer device 202.
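A toy nearest-centroid tracker sketches how multi-object tracking can keep a stable identity, and therefore a stable overlay, for each tracked object across frames; production platforms would use more robust trackers.

```python
import math

class CentroidTracker:
    """Toy multi-object tracker: keeps stable IDs by nearest-centroid matching.

    Illustrates only how each tracked object can retain its associated
    overlay from frame to frame.
    """
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}          # object_id -> (x, y)
        self.max_distance = max_distance

    def update(self, detections):
        assigned = {}
        for (x, y) in detections:
            # Find the closest previously tracked object, if near enough.
            best = min(self.objects.items(),
                       key=lambda kv: math.dist(kv[1], (x, y)),
                       default=None)
            if best and math.dist(best[1], (x, y)) < self.max_distance:
                assigned[best[0]] = (x, y)
                del self.objects[best[0]]   # each ID is claimed at most once
            else:
                assigned[self.next_id] = (x, y)
                self.next_id += 1
        self.objects = assigned
        return assigned            # object_id -> current position
```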
In the virtual car example of
The location of real-life, tracked objects relative to virtual objects created by the developer and controlled by the user can be used to create interactions in a videogame. The user's ability to move the real-life objects allows for increased variety in the videogame, with the experience differing depending on where the user chooses to place the tracked objects.
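A small sketch of this idea, assuming pixel-coordinate positions and a hypothetical collision radius: moving the physical object moves its tracked position, which changes where the in-game interaction occurs.

```python
import math

def check_collisions(virtual_car_pos, tracked_objects, radius=30.0):
    """Report tracked real-world objects within `radius` of the virtual car.

    Hypothetical helper: positions are pixel coordinates in the camera view.
    """
    return [obj_id for obj_id, pos in tracked_objects.items()
            if math.dist(virtual_car_pos, pos) <= radius]

# E.g., the virtual car at (200, 150) meeting a tracked ramp at (210, 160):
hits = check_collisions((200, 150), {"ramp": (210, 160), "wall": (400, 90)})
print(hits)   # -> ['ramp']
```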
In the videogame shown in
An augmented gaming platform, such as the augmented gaming platform 140 of
A typical augmented gaming platform may use camera technology to recognize a real-world environment, including images and objects within the environment, and to overlay digital and virtual information onto the real-world environment. However, in the present disclosure, the user may access the augmented gaming platform from the computer device 202 and then point the device 202 at the image 204, e.g., a static image that embodies no movement. By pointing the computer device 202 towards the image 204, the image recognition software determines that a trigger 208 from the image 204 is in view of the camera, and then retrieves and activates a matching engine in the device 202 so that the augmented gaming platform may overlay graphics from the videogame platform 210 onto the image 204 being tracked. When viewed on the display screen of the computer device 202, entities in the virtual environment of the videogame platform 210, which are based on triggers 208 from the image 204, create a readily customizable videogame experience for the user.
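The recognize-then-overlay loop might look like the following sketch, assuming hypothetical helpers recognize_triggers() and draw_overlays() built from the earlier sketches.

```python
import cv2

def run_ar_session(recognize_triggers, draw_overlays):
    """Per-frame loop: recognize triggers in view, then draw game overlays."""
    cap = cv2.VideoCapture(0)            # camera of the computer device
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        triggers = recognize_triggers(frame)        # recognition + matching
        if triggers:
            frame = draw_overlays(frame, triggers)  # game graphics over image
        cv2.imshow("augmented gaming platform", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):       # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()
```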
The sequence depicted by
At block 306, a matching engine is configured to match a trigger defined in the videogame to a real-world object in the captured image. Overlay information can be returned by the matching engine. An AR platform can be implemented by the computer device to draw the overlay into the videogame platform, and each tracked object or trigger will have an associated AR overlay. The triggers are tracked using multiple-object tracking techniques.
At block 308, the AR platform can input the overlay information into the augmented gaming platform. A trigger is also used in the overlay of the augmented gaming platform and becomes part of a virtual videogame environment running on the computer device. Thus, because the incorporated triggers correspond to real-world objects, a user can rearrange the real-world objects captured in the image to add customizable variety to the videogame environment.
At block 310, the user is enabled to alter the videogame environment that is experienced on the computer device. Using the method 300 and techniques described herein, a user can alter the solution to a particular videogame. This empowers the user to create different levels and experiences, with different problems and solutions, within the videogame environment, based on a captured image of a real-world environment.
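As an illustration of block 310, the same level-building logic can yield different levels, and thus different solutions, depending on the arrangement of triggers captured in the image; the trigger types and level fields below are hypothetical.

```python
def build_level(triggers):
    """Derive a level description from (trigger_id, position) pairs."""
    obstacles = [pos for tid, pos in triggers if tid == "wall"]
    boosts = [pos for tid, pos in triggers if tid == "ramp"]
    return {"obstacles": obstacles, "boosts": boosts,
            "difficulty": len(obstacles)}       # more walls, harder level

easy = build_level([("ramp", (100, 50))])
hard = build_level([("wall", (80, 40)), ("wall", (220, 90)), ("ramp", (150, 60))])
print(easy["difficulty"], hard["difficulty"])   # -> 0 2
```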
The process flow diagram in
Additionally, the various components of a computer device 100, such as the computer device 100 discussed with respect to
An overlay return module 410 may be configured to superimpose an overlay based on triggers defined by an AR platform. The overlay can be entered into a videogame software platform running on the computer device using a videogame implementation module 412. The videogame implementation module 412 enables a user to add customizable variety to an interactive videogame environment based on how real-world objects in the captured image are arranged. User customizability results from the ability to capture different images having various orientations of real-world objects, which are tracked as triggers and associated with an augmented reality overlay. Depending on how the videogame platform was developed, the various triggers based on real-world objects can be defined in various ways virtually in the videogame environment.
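For illustration, modules 410 and 412 might be reduced to two functions; the data shapes below are assumptions, not the disclosed implementation.

```python
def overlay_return_module(triggers, overlay_table):
    """Module 410 (sketch): return the overlay asset defined for each trigger."""
    return {tid: overlay_table[tid] for tid, _ in triggers if tid in overlay_table}

def videogame_implementation_module(game_state, overlays, triggers):
    """Module 412 (sketch): enter the overlays into the running videogame."""
    for tid, pos in triggers:
        if tid in overlays:
            game_state["entities"].append({"asset": overlays[tid], "pos": pos})
    return game_state

triggers = [("ramp", (10, 20))]
state = videogame_implementation_module(
    {"entities": []},
    overlay_return_module(triggers, {"ramp": "jump_pad.png"}),
    triggers)
print(state)   # -> {'entities': [{'asset': 'jump_pad.png', 'pos': (10, 20)}]}
```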
The block diagram of
While the present techniques may be susceptible to various modifications and alternative forms, the examples discussed above have been shown only by way of example. It is to be understood that the present techniques are not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the true spirit and scope of the appended claims.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2014/036219 | 4/30/2014 | WO | 00