Augmented reality (AR) and virtual reality (VR) experiences may merge virtual objects or characters with real-world features in a way that can, in principle, provide a powerfully interactive experience to a user. AR can augment “real-reality,” i.e., a user can see the real world through clear lenses with virtual projections on top. VR can augment a virtual rendition of real-reality, where the view of the real world comes from a headset-mounted camera and is projected into VR space, so the user still sees the real world around them.
There are provided systems and methods for performing augmented reality image generation, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
One limitation associated with conventional approaches to generating augmented reality (AR) and virtual reality (VR) imagery is accurate identification of the real-world locations of the real-world objects captured by a camera. As a result, when imagery captured by such a camera is merged with a virtual object, the virtual object may be inappropriately located within the real-world environment. For example, a virtual object such as an avatar or character may be placed partially through a wall or floor of a real-world environment, thereby significantly reducing the apparent realism of the AR or VR scene to the user.
The following description contains specific information pertaining to implementations in the present disclosure. One skilled in the art will recognize that the present disclosure may be implemented in a manner different from that specifically discussed herein. The drawings in the present application and their accompanying detailed description are directed to merely exemplary implementations. Unless noted otherwise, like or corresponding elements among the figures may be indicated by like or corresponding reference numerals. Moreover, the drawings and illustrations in the present application are generally not to scale, and are not intended to correspond to actual relative dimensions.
According to the present exemplary implementation, system memory 106 stores software code 110 and virtual object library 112, and may store augmented reality image 128. As used herein, “augmented reality image” refers to an image composed of one or more real-world objects and one or more virtual objects or effects, and may be incorporated as part of an AR user experience or a VR user experience. Also, “virtual object” refers to simulations of persons, avatars, characters, caricatures of persons, animals, plants, and living things of various species or varieties, as well as inanimate objects. As further used herein, a “real-world object” is an animate entity or inanimate object that actually exists, i.e., has objectively verifiable physical extension and presence in space.
As further shown in
Also shown in
With respect to aggregated sensor data 126, by analogy, aggregated sensor data 126 may include any or all of sensor data 125a, 125b, and 125c. That is to say, in a use case in which only user 150a utilizing mobile user system 130a is present at user venue 160, aggregated sensor data 126 may include only sensor data 125a. However, in use cases in which one or both of users 150b and 150c are also present at user venue 160, aggregated sensor data 126 may include one or both of sensor data 125b and 125c, in addition to sensor data 125a.
It is further noted that display 108 of augmented reality image generation system 100, as well as display 138 corresponding to mobile user system(s) 130a, 130b, and/or 130c, may be implemented as a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, or another suitable display screen that performs a physical transformation of signals to light. It is also noted that, although the present application refers to software code 110 as being stored in system memory 106 for conceptual clarity, more generally, system memory 106 may take the form of any computer-readable non-transitory storage medium.
The expression “computer-readable non-transitory storage medium,” as used in the present application, refers to any medium, excluding a carrier wave or other transitory signal, that provides instructions to hardware processor 104 of computing platform 102, or to user system hardware processor 134 corresponding to mobile user system(s) 130a, 130b, and 130c. Thus, a computer-readable non-transitory medium may correspond to various types of media, such as volatile media and non-volatile media, for example. Volatile media may include dynamic memory, such as dynamic random access memory (dynamic RAM), while non-volatile media may include optical, magnetic, or electrostatic storage devices. Common forms of computer-readable non-transitory media include, for example, optical discs, RAM, programmable read-only memory (PROM), erasable PROM (EPROM), and FLASH memory.
It is also noted that although
According to the implementation shown by
In some implementations, user venue 160 may include an indoor environment. Examples of such indoor environments include an enclosed sports arena or shopping mall, a hotel, a casino, or a museum, to name a few. Alternatively, in some implementations, user venue 160 may include an outdoor environment. Examples of such outdoor environments include a pedestrian shopping and/or dining district, an open air sports arena or shopping mall, a resort property, and a park, again to name merely a few.
Although mobile user system 130a, for example, is depicted as having a form factor corresponding to a smartphone or tablet computer, in
User(s) 150a, 150b, and/or 150c may utilize respective mobile user system(s) 130a, 130b, and/or 130c to interact with augmented reality image generation system 100, which uses software code 110, executed by hardware processor 104, to produce augmented reality image 128 based on camera image 124 and, in some implementations, optional aggregated sensor data 126 as well. For example, in some implementations, augmented reality image generation system 100 may receive optional aggregated sensor data 126 including sensor data 125a, 125b, and 125c, and/or camera image 124 including camera image data 123a, 123b, and 123c from respective mobile user system(s) 130a, 130b, and 130c.
Camera image data 123a, 123b, and 123c may correspond to camera images of user venue 160 captured by camera 140 corresponding to mobile user systems 130a, 130b, and 130c from the different perspectives of respective users 150a, 150b, and 150c. In some implementations, hardware processor 104 may be configured to execute software code 110 to effectively “stitch together” camera image data 123a, 123b, and 123c to form a spatial map, such as a three-hundred-and-sixty degree (360°) map of user venue 160, for use in generating augmented reality image 128.
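Purely by way of a non-limiting, hypothetical illustration, and not as a description of the specific stitching performed by software code 110, such an operation might be sketched in Python as follows, assuming an OpenCV-style panorama stitcher and a list of decoded frames corresponding to camera image data 123a, 123b, and 123c:

```python
# Hypothetical sketch only: combine per-user camera frames into a single
# panoramic view of user venue 160, assuming OpenCV is available and that
# `frames` holds decoded images corresponding to camera image data 123a-123c.
import cv2

def build_venue_panorama(frames):
    """Stitch frames captured from different user perspectives into one image."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status code {status}")
    return panorama
```

In such a sketch, the stitched result could serve as the spatial map from which a 360° view of user venue 160 is derived.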
Sensor data 125a, 125b, and 125c may include various types of sensor data, such as depth sensor data, accelerometer data, Global Positioning System (GPS) data, and/or magnetometer data, to name a few examples, obtained at the different locations of users 150a, 150b, and 150c in user venue 160. In some implementations, hardware processor 104 may be configured to execute software code 110 to form the spatial map of user venue 160 for use in generating augmented reality image 128 based on sensor data 125a, 125b, and 125c, in addition to camera image data 123a, 123b, and 123c. It is noted that aggregated sensor data 126 including one or more of sensor data 125a, 125b, and/or 125c will hereinafter be referred to simply as “sensor data 126.”
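Again purely as an illustrative, hypothetical sketch, and not as the data format actually employed by software code 110/310, aggregated sensor data 126 might be organized as a per-user-system collection of timestamped samples along the following lines (all class and field names below are assumptions introduced here for illustration):

```python
# Hypothetical data-structure sketch for aggregated sensor data 126: each
# mobile user system (e.g., "130a") contributes a list of timestamped samples.
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple

@dataclass
class SensorSample:
    timestamp: float
    depth: Optional[Any] = None                                  # depth sensor reading(s)
    accelerometer: Optional[Tuple[float, float, float]] = None   # acceleration along x, y, z
    gps: Optional[Tuple[float, float]] = None                    # (latitude, longitude)
    magnetometer: Optional[Tuple[float, float, float]] = None    # field components along x, y, z

@dataclass
class AggregatedSensorData:
    # Maps a user-system identifier (e.g., "130a") to that system's samples.
    samples_by_system: Dict[str, List[SensorSample]] = field(default_factory=dict)

    def add(self, system_id: str, sample: SensorSample) -> None:
        self.samples_by_system.setdefault(system_id, []).append(sample)
```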
It is noted that, in various implementations, augmented reality image 128, when generated using software code 110, may be stored in system memory 106 and/or may be copied to non-volatile storage. Alternatively, or in addition, as shown in
In some implementations, software code 110 may be utilized directly by mobile user system(s) 130a, 130b, and/or 130c. For example, software code 110 may be transferred to user system memory 136 corresponding to mobile user system(s) 130a, 130b, and/or 130c, via download over communication system 122, for example, or via transfer using a computer-readable non-transitory medium, such as an optical disc or FLASH drive. In those implementations, software code 110 may be persistently stored on user system memory 136, and may be executed locally on mobile user system(s) 130a, 130b, and/or 130c by user system hardware processor 134.
As further shown in
Augmented reality image generation system 200, communication network 220, and network communication links 222 correspond respectively in general to augmented reality image generation system 100, communication network 120, and network communication links 122, in
In addition, camera image 224, optional sensor data 226, and augmented reality image 228, in
Camera 240 in
Alternatively, and as shown in
It is noted that camera 140/240 may include a still image camera and/or a video camera. Moreover, in some implementations, camera 140/240 may take the form of a 360° camera, or an array of cameras configured to generate a 360° camera image. It is further noted that, as shown in
Mobile user system 330 including user system hardware processor 334, user system memory 336, display 338, camera 340, and sensor(s) 352 corresponds in general to any or all of mobile user system(s) 130a, 130b, and/or 130c, in
In addition, camera image 324, sensor data 326, and augmented reality image 328, in
The functionality of software code 110/310 will be further described by reference to
Referring now to
In some implementations, user(s) 150a/150b/150c/250 may utilize mobile user system(s) 130a/130b/130c/330 or camera 240 to interact with augmented reality image generation system 100/200 in order to produce augmented reality image 128/228/328 including the one or more real-world objects captured by camera image 124/224/324. As shown by
Alternatively, and as noted above, in some implementations, software code 110/310 may be stored on user system memory 136/336 and may be executed locally on mobile user system(s) 130a/130b/130c/330 by user system hardware processor 134/334. In those implementations, camera image 124/224/324 may be stored in user system memory 136/336. In various implementations, camera image 124/224/324 may be received by software code 110/310, executed by hardware processor 104 of computing platform 102, or by user system hardware processor 134/334 of mobile user system(s) 130a/130b/130c/330.
Flowchart 480 continues with identifying one or more reference point(s) 258 corresponding to camera image 124/224/324, one or more reference point(s) 258 having respective predetermined real-world location(s) 254 (action 482). For example, in one implementation, camera image 124/224/324 may include real-world object 270 and may include data identifying reference point 258 corresponding to camera image 124/224/324. It is noted that although reference point 258 is identified in
One or more reference point(s) 258 corresponding to camera image 124/224/324 and having respective predetermined real-world location(s) 254 may be identified by software code 110/310, executed by hardware processor 104 of computing platform 102, or by user system hardware processor 134/334 of mobile user system(s) 130a/130b/130c/330. For example, software code 110/310 may be configured to obtain reference point metadata, which can include metadata associated with other captured images or feeds, such as timestamps and location data. Software code 110/310 may be further configured to perform machine learning to identify common objects using image analysis. As a specific example, software code 110/310 may be configured to perform image analysis on camera image 124/224/324 to identify surfaces having respective predetermined real-world locations, such as floor 264, walls 266, and ceiling 268 within user venue 160/260, and/or surface 272 of real-world object 270.
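As one non-limiting, hypothetical example of such surface identification, a dominant planar surface such as floor 264 might be recovered from a point cloud derived from camera image 124/224/324 (aided, for example, by depth data) using a simple RANSAC plane fit. The sketch below assumes NumPy and illustrative thresholds; it is not asserted to be the particular image analysis performed by software code 110/310:

```python
# Hypothetical sketch: fit the dominant plane (e.g., floor 264) in an Nx3
# point cloud using RANSAC; thresholds and iteration counts are illustrative.
import numpy as np

def fit_dominant_plane(points, iterations=200, threshold=0.02):
    """Return (normal, d) for the plane normal . p + d = 0 with the most inliers."""
    best_inliers, best_plane = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # skip degenerate (nearly collinear) samples
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = int(np.sum(np.abs(points @ normal + d) < threshold))
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```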
It is noted that, in some implementations, the one or more real-world object(s) depicted by camera image 124/224/324 may serve as its/their own reference point(s) 258. For example, where user venue 160/260 is an outdoor venue, a real-world object such as a well-known or famous building, geographic feature, or other landmark may correspond uniquely to a predetermined real-world location. Specific examples of real-world objects encountered out-of-doors that can also serve as reference points include buildings such as the Empire State Building or White House, stadiums such as Wembley Stadium or the Rose Bowl, streets such as Wall Street in New York or Bourbon Street in New Orleans, or distinctive structures such as the Golden Gate Bridge or Seattle Space Needle, to name a few.
Alternatively, where user venue 160/260 is an indoor venue, any distinctive indoor feature, or a readily recognizable or famous object or configuration of objects such as a work of art, a museum exhibition, or the composition of objects arranged in an interior space, may serve as a reference point corresponding to camera image 124/224/324. Other examples of interior features that may serve as indoor reference points include windows, doors, furnishings, appliances, electronics equipment, or the layout of a hotel room, for instance.
Flowchart 480 continues with mapping the one or more real-world object(s) included in camera image 124/224/324 to respective real-world location(s) of the one or more real-world object(s) based on predetermined real-world location(s) 254 of the one or more reference point(s) 258 (action 483). In the interests of conceptual clarity, the actions outlined in flowchart 480 will be further described by reference to an exemplary use case focusing on real-world object 270 and virtual object 256. However, it is emphasized that, in many implementations, the present method may include mapping multiple real-world objects, e.g., floor 264, walls 266, and/or ceiling 268 of user venue 160/260 to their respective real-world locations.
Referring to
In some implementations, the method outlined by flowchart 480 may further include mapping real-world object 270 to its real-world location 274 based on sensor data 126/226/326, as well as on predetermined real-world location(s) 254 of reference point(s) 258. For example, as shown in
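As a non-limiting illustration of action 483, real-world location 274 of real-world object 270 might be computed by measuring the object's displacement from reference point 258 in the camera frame, for example using depth sensor data, and adding that displacement, expressed in world coordinates, to predetermined real-world location 254. The following sketch assumes hypothetical coordinate-frame conventions and function names, and is not the particular mapping method of software code 110/310:

```python
# Hypothetical sketch of action 483: locate real-world object 270 by offsetting
# from the predetermined real-world location 254 of reference point 258.
import numpy as np

def map_object_location(reference_location_world, rotation_world_from_camera,
                        object_position_camera, reference_position_camera):
    """Return the object's world-frame position.

    reference_location_world:   predetermined world position of the reference point (3-vector).
    rotation_world_from_camera: 3x3 rotation taking camera-frame vectors to the world frame.
    object_position_camera:     measured object position in the camera frame (3-vector).
    reference_position_camera:  measured reference-point position in the camera frame (3-vector).
    """
    # Displacement from reference point to object, measured in the camera frame,
    # rotated into the world frame and added to the known world position.
    displacement_camera = np.asarray(object_position_camera) - np.asarray(reference_position_camera)
    return np.asarray(reference_location_world) + rotation_world_from_camera @ displacement_camera
```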
As further shown by
Flowchart 480 continues with merging camera image 124/224/324 with virtual object 256 to generate augmented reality image 128/228/328 including real-world object 270 and virtual object 256, wherein a location of virtual object 256 in augmented reality image 128/228/328 is determined based on real-world location 274 of real-world object 270 (action 484). It is noted that mapping of real-world object(s) included in camera image 124/224/324 to their real-world locations in action 483 advantageously enables appropriate and realistic placement of virtual objects, such as virtual object 256, into augmented reality image 128/228/328.
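Purely as a hypothetical sketch of action 484, and not as the specific merging operation performed by software code 110/310, virtual object 256 might be composited into camera image 124/224/324 by projecting an anchor point chosen relative to real-world location 274 into image coordinates and alpha-blending a pre-rendered sprite of the virtual object at that pixel location; the pinhole projection model and the function and parameter names below are illustrative assumptions:

```python
# Hypothetical sketch of action 484: overlay a pre-rendered sprite of virtual
# object 256 onto the camera image at the pixel where its world-space anchor
# point (placed relative to real-world location 274) projects.
import numpy as np

def composite_virtual_object(camera_image, sprite, sprite_alpha,
                             anchor_world, rotation_cam_from_world,
                             translation_cam, intrinsics):
    """Return a copy of camera_image with the sprite composited at the projected anchor."""
    # Transform the world-space anchor into the camera frame and project it
    # through a pinhole model: [u, v, 1]^T = K * (p_cam / z).
    p_cam = rotation_cam_from_world @ anchor_world + translation_cam
    u, v, _ = intrinsics @ (p_cam / p_cam[2])
    u, v = int(round(u)), int(round(v))

    out = camera_image.astype(np.float32).copy()
    h, w = sprite.shape[:2]
    top, left = v - h // 2, u - w // 2
    # For brevity, skip compositing if the sprite would fall outside the image.
    if top < 0 or left < 0 or top + h > out.shape[0] or left + w > out.shape[1]:
        return camera_image
    alpha = sprite_alpha[..., None]          # per-pixel opacity of the virtual object
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * sprite + (1.0 - alpha) * region
    return out.astype(camera_image.dtype)
```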
For example, and as shown in
In some implementations, flowchart 480 can conclude with rendering augmented reality image 128/228/328 on a display, such as display 108 of augmented reality image generation system 100/200 or display 138/338 of mobile user system(s) 130a/130b/130c/330 (action 485). As noted above, in some implementations, camera 140/240/340 may take the form of a 360° camera, or an array of cameras configured to generate a 360° camera image. In those implementations, for example, augmented reality image 128/228/328 may be rendered as a 360° image. The rendering of augmented reality image 128/228/328 on display 108 or display 138/338 may be performed by software code 110/310, executed respectively by hardware processor 104 of computing platform 102 or by user system hardware processor 134/334.
Thus, the present application discloses an augmented reality image generation solution. In one implementation, by identifying one or more reference point(s) having respective predetermined real-world location(s) and corresponding to a camera image, the present solution enables mapping of one or more real-world object(s) depicted in the camera image to their respective real-world location(s). As a result, in one implementation, the present solution advantageously enables merging the camera image with an image of a virtual object. The merger generates an augmented reality image including the real-world object(s) and the virtual object, such that an appropriate location of the virtual object in the augmented reality image is determined based on the real-world location(s) of the real-world object(s). The present solution may further include rendering the augmented reality image on a display.
From the above description it is manifest that various techniques can be used for implementing the concepts described in the present application without departing from the scope of those concepts. Moreover, while the concepts have been described with specific reference to certain implementations, a person of ordinary skill in the art would recognize that changes can be made in form and detail without departing from the scope of those concepts. As such, the described implementations are to be considered in all respects as illustrative and not restrictive. It should also be understood that the present application is not limited to the particular implementations described herein, but many rearrangements, modifications, and substitutions are possible without departing from the scope of the present disclosure.
|  | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 15974604 | May 2018 | US |
| Child | 17963965 |  | US |