This disclosure is directed to systems and methods for asymmetrical extended-reality (XR) navigation.
XR streaming provides users with an immersive interface that may be implemented via a full virtual reality (VR) 3D environment, augmented reality (AR) having a virtual 3D overlay on top of a normal reality medium, and/or mixed reality (MR) having a real-world environment incorporated into a computer-generated environment. These AR/VR/MR technologies provide richer and more immersive experiences than a 2D experience provisioned via consumer electronics such as smartphones, tablets, and personal computers without XR integration. For example, a 2D display may be used to provide a graphical representation of a real 3D space (e.g., a 2D video feed of a park, or a polygon-generated model of a park). However, even with these graphical representations, there are user experience deficiencies with respect to communication between the real 3D space and the 2D graphical representation, such as not being able to home in on specific objects of interest immediately. Because adoption of XR is not yet pervasive, there remains a need to integrate conventional interfaces, such as 2D or 3D interfaces, with XR interfaces in order to provide a fluid user experience and mitigate the user experience differences between these asymmetrical technologies.
In one approach, 2D-enabled user devices may rely on unrelated communication mediums, such as voice calls, to interact with the XR device to specify an object of interest. For example, if an XR host is streaming at a public park and a user with a 2D device is receiving a 2D feed of the XR point of view, the user may interact with the XR host to further explore a landmark that was shown previously on the 2D graphical representation. However, the XR host must then determine which landmark the user is referring to (e.g., “Did you mean that mountain, or the other mountain?”), which can be an inefficient process.
To overcome these problems, systems and methods are provided herein for an asymmetrical XR application for efficient navigation between different graphical representations. In particular, the XR application may create a graphical representation of a real 3D location on a client device (e.g., a 2D point-of-view rendering of a park tour on a 2D tablet device). The XR application may then receive an input by which the client identifies a portion of the 2D graphical representation (e.g., the user selects an apple tree via touchscreen).
The XR application accesses a mapping between the 2D and 3D graphical representations of the park. For example, the touchscreen input associated with the apple tree object in 2D, which has its own 2D coordinates, is mapped to spatial coordinates in the 3D graphical representation, allowing the XR application to locate the apple tree in the 3D space. Then, based on the mapping, the XR application may identify a correspondence between the selected input (e.g., the apple tree object in 2D) and a 3D object in the 3D location (e.g., the corresponding apple tree object in 3D). In this way, there is no ambiguity in terms of which object of interest is selected, despite the graphical representations being 2D for the tablet and 3D XR for an XR host giving an AR park tour. The XR application may then augment the corresponding 3D object in the 3D location (e.g., the apple tree may be highlighted for the XR host to elaborate upon further on the park tour). In this way, the XR application provides for asymmetrical, efficient navigation between different graphical representations of an environment. The apple tree seen on the 2D tablet is immediately selected and highlighted for the XR host to interact with and provide further information about upon request from the 2D user.
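As a minimal sketch of this selection-to-object resolution, the following assumes a hypothetical mapping structure that pairs each object's 2D viewport footprint with its identifier and 3D spatial coordinates; the structure, names, and values are illustrative only and the real mapping could instead be derived from a point cloud, mesh, or digital twin as described later herein.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MappedObject:
    """One entry of the 2D-to-3D mapping (hypothetical structure)."""
    object_id: str                               # e.g., "apple_tree_01"
    bbox_2d: Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in viewport pixels
    position_3d: Tuple[float, float, float]      # spatial coordinates in the 3D location

def resolve_selection(mapping: List[MappedObject],
                      touch_x: float, touch_y: float) -> Optional[MappedObject]:
    """Return the mapped 3D object whose 2D footprint contains the touch point."""
    for entry in mapping:
        x_min, y_min, x_max, y_max = entry.bbox_2d
        if x_min <= touch_x <= x_max and y_min <= touch_y <= y_max:
            return entry
    return None

# Example: a touch at (412, 305) on the tablet resolves to the apple tree, whose
# 3D coordinates can then be sent to the XR host's device for highlighting.
park_mapping = [MappedObject("apple_tree_01", (350, 220, 480, 400), (12.4, 0.0, -3.1))]
selected = resolve_selection(park_mapping, 412, 305)
if selected is not None:
    print(f"augment {selected.object_id} at {selected.position_3d}")
```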
In some aspects of this disclosure, the graphical representation may include a 2D graphical representation of the 3D location on the client device. In this aspect, the XR application may identify a correspondence between the graphical object in the portion of the 2D graphical representation and the 3D object in the 3D location based on the mapping. For example, if the client device were a 2D tablet interacting with a 3D XR host, the selection made on the tablet's 2D view would be resolved, via the mapping, to the corresponding 3D object in the 3D location.
In other aspects of this disclosure, the graphical representation may include a 3D graphical representation of the 3D location on the client device. In this aspect, the XR application may identify a correspondence between the graphical object in the portion of the 3D graphical representation and the 3D object in the 3D location.
In some embodiments, the XR application may create the graphical representation of the 3D location on an XR device. This graphical representation of the 3D location may be stored in memory. The XR application may receive an input from the client device requesting the graphical representation of the 3D location. In response to this request, the XR application may retrieve the graphical representation of the 3D location from the memory and provide it to the client device, fulfilling the request. In some embodiments of this disclosure, the XR application may record, in memory, the augmentation for the 3D object in the 3D location that is generated for display. The XR application may receive an input from a client device requesting the augmentation for the 3D object in the 3D location. In response to this request, the XR application may retrieve the augmentation for the 3D object in the 3D location from the memory and provide it to the client device to fulfill the request. In this way, the XR application can serve previously recorded interactions to future requests on demand without the augmentation needing to be “live” at every client request.
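As a minimal sketch of this store-and-serve behavior, the following assumes a hypothetical in-memory `AugmentationCache` keyed by 3D object identifier; an actual implementation could equally use storage 1208, storage 1314, or cloud-based storage as described elsewhere herein.

```python
from typing import Dict, Optional

class AugmentationCache:
    """Minimal in-memory store for recorded augmentations, keyed by 3D object ID.
    Illustrative only; a real deployment could use persistent or cloud storage."""

    def __init__(self) -> None:
        self._recordings: Dict[str, bytes] = {}

    def record(self, object_id: str, recording: bytes) -> None:
        # Store the augmentation that was generated for display (e.g., an encoded clip).
        self._recordings[object_id] = recording

    def fetch(self, object_id: str) -> Optional[bytes]:
        # Serve a previously recorded augmentation without requiring a live XR host.
        return self._recordings.get(object_id)

cache = AugmentationCache()
cache.record("apple_tree_01", b"<encoded augmentation stream>")
clip = cache.fetch("apple_tree_01")   # returned to the requesting client device
```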
In some embodiments, the XR application receives a set of coordinates from the client device on a portion of the graphical representation. Based on these coordinates, the XR application identifies a correspondence between a graphical object at the set of coordinates and a 3D object in the 3D location. The XR application may then transmit a second signal to cause the XR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location. In some variants, the XR application may receive a plurality of sets of coordinates from a plurality of client devices. The XR application may then determine a coordinate of interest, identifying a correspondence between a graphical object at the coordinate of interest and a 3D object in the 3D location. The XR application may then transmit a third signal to cause the AR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location. In this way, the XR host may be able to address the most interesting aspects of the 3D location, whether the requests are express (e.g., via a client device's express request) or implied (e.g., receiving eye gaze data from a plurality of client devices to show areas of most interest).
The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments. These drawings are provided to facilitate an understanding of the concepts disclosed herein and should not be considered limiting of the breadth, scope, or applicability of these concepts. It should be noted that for clarity and ease of illustration, these drawings are not necessarily made to scale.
The XR application may receive an input from the client device, identifying a portion of the graphical representation.
The XR application may access a mapping between objects in the graphical representation of the 3D location and 3D objects in the 3D location.
The XR application may, based on the mapping, identify a correspondence between a graphical object in the portion of the graphical representation and a 3D object in the 3D location.
The XR application may transmit a signal to cause an extended reality (XR) device at the 3D location to generate for display an augmentation for the 3D object in the 3D location.
In some embodiments, the graphical representation comprises a 3D graphical representation of the 3D location on the client device. For example, the client device may be a VR headset viewing an environment in 3D. The XR application may then identify the correspondence between the graphical object in the portion of the 3D graphical representation (e.g., on the VR headset as the client device) and the 3D object in the 3D location (e.g., via the XR headset used by the XR concierge). In some embodiments, the 3D graphical representation is a digital twin of the real-world environment (i.e., the 3D location). In some embodiments, the XR application may store one or more actions that take place in the 3D graphical representation from one or more client devices. For example, ten or more users may take selfies in VR at roughly the same spatial location. The XR application may designate that spatial location in actual reality as a point of interest. The XR application may transmit information about a point of interest with respect to the present environment to an AR device overlaying AR in a real environment (e.g., an AR headset used in a park). In some embodiments, the XR application continuously updates the mapping during the XR session. In some embodiments, the XR application updates the mapping upon any movement within the 3D graphical representation. In some embodiments, the XR application may implement a predefined spatial boundary for rendering (e.g., calculated visible line of sight, a predetermined distance such as a one-mile radius, the limit of a room's size within a building, etc.).
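As an illustrative sketch of how such a point of interest might be designated, the following clusters client action locations (e.g., VR selfie positions) and marks the cluster's centroid as a point of interest; the radius and count thresholds here are assumptions for illustration, not values specified in this disclosure.

```python
from math import dist
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]

def find_point_of_interest(action_locations: List[Point],
                           radius: float = 2.0,
                           min_actions: int = 10) -> Optional[Point]:
    """Return a spatial location where at least `min_actions` client actions
    (e.g., selfies taken in VR) occurred within `radius` meters of one another."""
    for candidate in action_locations:
        cluster = [p for p in action_locations if dist(p, candidate) <= radius]
        if len(cluster) >= min_actions:
            # The centroid of the cluster is designated the point of interest.
            cx, cy, cz = (sum(axis) / len(cluster) for axis in zip(*cluster))
            return (cx, cy, cz)
    return None
```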
In some embodiments, the XR application may be embedded within the XR device itself. The XR application may create, on the XR device, the graphical representation of the 3D location. For example, the XR device may be an AR/VR headset that is worn within a museum by the XR concierge. The AR/VR headset provides the requisite hardware to optically capture the museum environment and create a 3D graphical representation of the museum. The XR device may store the graphical representation of the 3D location in memory. For example, the AR/VR headset may store the 3D graphical representation within embedded memory. The XR application may receive an input, via the client device, requesting the graphical representation of the 3D location. For example, a tablet accessing a museum tour may request the library view from the XR concierge. The XR application may then retrieve the graphical representation of the 3D location from the memory and provide the graphical representation of the 3D location to the client device. For example, the AR/VR device may provide the mapping for the museum to the tablet client device. In some embodiments, the XR application may provide the corresponding mapping of the 3D location to the 2D client device.
In some embodiments, the 2D client device may be fully aware of the full 3D objects of the 3D graphical representation. Full object point clouds or 3D meshes and metadata may be sent by the XR application from the AR headset to the video client, along with a mapping function to translate between the point-cloud spatial coordinates and the current 2D video viewport coordinates. Inputs to this mapping function (for example, where the AR client is looking) can be updated every time the AR headset moves or rotates. In another embodiment, the 2D client device may be fully aware of translated 2D objects. In this implementation, a list of objects and associated metadata, along with the current 2D coordinates of their facing planes, is sent to the video client. This list must be updated every time the AR headset moves or rotates. In yet another embodiment, the 2D client device may be aware only of click inputs over a 2D video. In this approach, only the video is sent to the video client. The video client merely reports click data on the viewport back to the XR application, and the AR headset determines which real-world object was interacted with. No object updates for the 2D client are required in this implementation, but the XR application needs to correlate the click-selected location on the 2D screen with the object data to identify which objects were clicked or otherwise identified by the interface of the 2D client device.
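The mapping function referenced in the first implementation can be sketched as a simple pinhole-camera projection from point-cloud spatial coordinates into 2D viewport coordinates; the pose and intrinsics format used here are assumptions rather than the projection of any particular AR headset.

```python
import numpy as np

def project_to_viewport(point_world, camera_pose, focal_px, viewport_w, viewport_h):
    """Project a 3D world-space point into 2D viewport pixel coordinates using a
    simple pinhole-camera model. `camera_pose` is a 4x4 world-to-camera transform
    that must be refreshed whenever the AR headset moves or rotates."""
    p = camera_pose @ np.append(point_world, 1.0)   # world space -> camera space
    if p[2] <= 0:                                   # behind the camera: not visible
        return None
    u = viewport_w / 2 + focal_px * p[0] / p[2]
    v = viewport_h / 2 + focal_px * p[1] / p[2]
    if 0 <= u < viewport_w and 0 <= v < viewport_h:
        return (u, v)
    return None

# Example: with an identity pose, a point two meters in front of the camera
# projects near the center of a 1280x720 viewport.
print(project_to_viewport(np.array([0.1, 0.0, 2.0]), np.eye(4), 1000.0, 1280, 720))
```

Under the second implementation, a projection of this kind could instead be run on the headset side to produce the list of 2D facing-plane coordinates that is sent to the video client after each pose change.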
In some embodiments, the XR application records, in memory, the augmentation for the 3D object in the 3D location that is generated for display. For example, in some scenarios the client device may wish to interact with a live XR concierge; however, the XR concierge is not live at the moment of desired interaction. In this example, the XR application may store the augmentation and provide a recording to the client device. The XR application may receive an input, via the client device, requesting the augmentation for the 3D object in the 3D location. The XR application may then retrieve the augmentation for the 3D object in the 3D location from the memory and provide it to the client device. For example, the tablet client device may request information about the sphinx when the XR concierge is not live because the XR tour was completed weeks earlier, when the museum exhibition commenced. The XR application determines that the requested further information about the sphinx is stored in memory, retrieves the recording, and provides it to the client device as requested. In some embodiments, these recorded augmentations are stored in memory (e.g., a cache). The XR application may generate for display user interfaces having queues of previously recorded augmentations sorted in relation to a received query from a client device. For example, the XR application may generate for display the recorded augmentations in a user interface that provides a list of stored augmentations. The list of stored augmentations may have corresponding thumbnails and/or icons relating to the specific augmentation. In some embodiments, the XR application can assign metadata to the stored recorded augmentations. In this way, based on received inputs from client devices, the XR application can readily sort for and surface relevant recorded augmentations for the requesting client devices. In one example, if the client device is 3D-compatible (e.g., an AR headset) and is within the line of sight of an object that has a previously recorded augmentation, the XR application may highlight the object for which the previously recorded augmentation took place (e.g., the sphinx object). In this way, the user wearing AR glasses knows, while walking through the real 3D environment, that there is content relating to the sphinx because it is highlighted.
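The metadata-based sorting described above might look like the following minimal sketch, which ranks stored recordings by keyword overlap with a client query; the recording identifiers and keyword scheme are hypothetical, and a deployed system could use richer retrieval.

```python
from typing import Dict, List

def rank_recordings(query: str,
                    recordings_metadata: Dict[str, List[str]]) -> List[str]:
    """Rank stored augmentation recordings by keyword overlap with a client query.
    Illustrative only; a deployed system could use richer retrieval."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(keywords)), rec_id)
        for rec_id, keywords in recordings_metadata.items()
    ]
    # Highest-overlap recordings first; recordings with no overlap are dropped.
    return [rec_id for score, rec_id in sorted(scored, reverse=True) if score > 0]

metadata = {
    "rec_sphinx_tour": ["sphinx", "egypt", "statue"],
    "rec_mummy_exhibit": ["mummy", "egypt"],
}
print(rank_recordings("tell me about the sphinx", metadata))   # -> ['rec_sphinx_tour']
```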
In some embodiments, the XR application receives a set of coordinates, via the client device, on a portion of the graphical representation. In some embodiments, the set of coordinates may be received by the XR application via a mouse click, a selection, a touch on a touchscreen, or a point of interest based on eye gaze. For example, the XR application uses embedded hardware (e.g., a front-facing camera on a tablet) to determine eye gaze and identify which portion of the graphical representation the user is viewing. The XR application may, based on the set of coordinates, identify a correspondence between a graphical object at the set of coordinates and a 3D object in the 3D location. For example, the XR application may determine that the user is looking at the sphinx more than any other object over a specified time interval. The XR application may transmit a second signal to cause the XR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location. For example, based on passive eye gaze rather than an active input from the client device, the XR application instructs the XR device to augment the sphinx, indicating to the concierge that the client finds the sphinx interesting and perhaps wants more information about it.
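One way to identify the correspondence between a 2D coordinate and a 3D object is to cast a ray from the viewpoint through the selected pixel and test it against objects in the mapping; the following sketch assumes a pinhole viewing model and bounding-sphere approximations of the objects, neither of which is mandated by this disclosure.

```python
import numpy as np

def pick_object(u, v, viewport_w, viewport_h, focal_px, camera_to_world, objects):
    """Cast a ray through viewport pixel (u, v) and return the ID of the nearest
    object whose bounding sphere the ray intersects. `objects` maps IDs to
    (center_xyz, radius) in world space; `camera_to_world` is a 4x4 transform."""
    # Ray direction in camera space from a pinhole model, rotated into world space.
    d_cam = np.array([(u - viewport_w / 2) / focal_px,
                      (v - viewport_h / 2) / focal_px,
                      1.0])
    d_world = camera_to_world[:3, :3] @ d_cam
    d_world /= np.linalg.norm(d_world)
    origin = camera_to_world[:3, 3]

    best_id, best_t = None, np.inf
    for obj_id, (center, radius) in objects.items():
        oc = np.asarray(center, dtype=float) - origin
        t = oc @ d_world                   # distance along the ray to closest approach
        if t > 0 and np.linalg.norm(oc - t * d_world) <= radius and t < best_t:
            best_id, best_t = obj_id, t
    return best_id

# Example: a gaze point near the viewport center picks the sphinx two meters ahead.
objects = {"sphinx": ((0.0, 0.0, 2.0), 0.5)}
print(pick_object(640, 360, 1280, 720, 1000.0, np.eye(4), objects))   # -> "sphinx"
```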
In some embodiments, the XR application receives a plurality of sets of coordinates from a plurality of client devices. In this scenario, the XR application may receive a crowdsourced set of eye-gaze data from a plurality of devices. This information may be processed into a “heat map” that indicates the most popular portion of eye gaze, i.e., a coordinate of interest, across the plurality of devices. The XR application may determine the coordinate of interest by applying a mathematical model to the plurality of sets of coordinates. The mathematical model may be any one of a machine learning model, a linear regression on coordinates, a mean/mode of coordinates, or another similar mathematical model. The XR application may, based on the coordinate of interest, identify a correspondence between a graphical object at the coordinate of interest and a 3D object in the 3D location. The XR application may transmit a third signal to cause the XR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location. In some embodiments, after the signal is transmitted, the object is highlighted for the XR device to perform the augmentation. In some embodiments, after the signal is transmitted, the object is described to the XR device (e.g., via audio, video, or text) for the XR device to perform the augmentation.
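A minimal sketch of one such mathematical model is a coarse grid histogram (a simple heat map) whose hottest cell yields the coordinate of interest; the grid cell size and sample values below are assumptions for illustration.

```python
from collections import Counter
from typing import List, Tuple

def coordinate_of_interest(samples: List[Tuple[int, int]],
                           cell_size: int = 50) -> Tuple[int, int]:
    """Aggregate gaze/click coordinates from many client devices into a single
    coordinate of interest by bucketing them into a coarse grid (a simple
    heat map) and returning the center of the hottest cell."""
    cells = Counter((x // cell_size, y // cell_size) for x, y in samples)
    (cx, cy), _ = cells.most_common(1)[0]
    return (cx * cell_size + cell_size // 2, cy * cell_size + cell_size // 2)

# Example: gaze samples from several tablets cluster around the sphinx region.
samples = [(410, 300), (422, 310), (417, 295), (120, 80), (415, 305)]
print(coordinate_of_interest(samples))   # -> (425, 325), the center of the hottest cell
```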
In some embodiments, the client device (e.g., an AR headset) user is walking through a museum. There is no live XR concierge at this location at the same time. As the user walks through the museum, depending on location and eye-gaze point of view, the XR application may highlight one or more objects of interest for which there is previously recorded content by a previous XR host. In this way, the AR user can engage cached content for objects of interest as they walk through the museum.
In some embodiments, the example is similar to the one described above; however, a live XR host is also present within the museum. The client device (e.g., an AR headset) user is walking through the museum. As the user walks through the museum, the user wishes to learn more about an object for which there is no cached content. The client device sends a communication message to the XR host (e.g., a screenshot of the object of interest along with an audio voice note requesting further information on the object). The XR application sends this information to the XR host, who receives the request, walks to the object, and provides further information. The XR application alerts the user that the request for further information has been fulfilled. The XR application records the further information provided by the XR host and caches it for future requests about this specific object.
Each one of user equipment device 1200 and user equipment device 1201 may receive content and data via input/output (I/O) path 1202. I/O path 1202 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry 1204, which may comprise processing circuitry 1206 and storage 1208. Control circuitry 1204 may be used to send and receive commands, requests, and other suitable data using I/O path 1202, which may comprise I/O circuitry. I/O path 1202 may connect control circuitry 1204 (and specifically processing circuitry 1206) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path to avoid overcomplicating the drawing.
Control circuitry 1204 may be based on any suitable control circuitry such as processing circuitry 1206. As referred to herein, control circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, control circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 1204 executes instructions for the XR application stored in memory (e.g., storage 1208). Specifically, control circuitry 1204 may be instructed by the XR application to perform the functions discussed above and below. In some implementations, processing or actions performed by control circuitry 1204 may be based on instructions received from the XR application.
In client/server-based embodiments, control circuitry 1204 may include communications circuitry suitable for communicating with a server or other networks or servers. The XR application may be a stand-alone application implemented on a device or a server. The XR application may be implemented as software or a set of executable instructions. The instructions for performing any of the embodiments discussed herein of the XR application may be encoded on non-transitory computer-readable media (e.g., a hard drive, random-access memory on a DRAM integrated circuit, read-only memory on a BLU-RAY disk, etc.).
In some embodiments, the XR application may be a client/server application where only the client application resides on device 1200, and a server application resides on an external server (e.g., server 1304 and/or server 1316). For example, the XR application may be implemented partially as a client application on control circuitry 1204 of device 1200 and partially on server 1304 as a server application running on control circuitry 1311. Server 1304 may be a part of a local area network with one or more of devices 1200 or may be part of a cloud computing environment accessed via the internet. In a cloud computing environment, various types of computing services for performing searches on the internet or informational databases, providing storage (e.g., for a database), or parsing data are provided by a collection of network-accessible computing and storage resources (e.g., server 1304), referred to as “the cloud.” Device 1200 may be a cloud client that relies on the cloud computing capabilities from server 1304 to determine whether processing should be offloaded and facilitate such offloading. When executed by control circuitry 1204 or 1311, the XR application may instruct control circuitry 1204 or 1311 to perform processing tasks for the client device and facilitate a media consumption session integrated with social network services. The client application may instruct control circuitry 1204 to determine whether processing should be offloaded.
Control circuitry 1204 may include communications circuitry suitable for communicating with a server, a social network service, a table or database server, or other networks or servers. The instructions for carrying out the above-mentioned functionality may be stored on a server (described in more detail below).
Memory may be an electronic storage device provided as storage 1208 that is part of control circuitry 1204. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage 1208 may be used to store various types of content described herein as well as XR application data described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage may be used to supplement storage 1208 or instead of storage 1208.
Control circuitry 1204 may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry 1204 may also include scaler circuitry for upconverting and downconverting content into the preferred output format of user equipment 1200. Control circuitry 1204 may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by user equipment device 1200, 1201 to receive and to display, to play, or to record content. The tuning and encoding circuitry may also be used to receive media consumption data. The circuitry described herein, including for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage 1208 is provided as a separate device from user equipment device 1200, the tuning and encoding circuitry (including multiple tuners) may be associated with storage 1208.
Control circuitry 1204 may receive instruction from a user by way of user input interface 1210. User input interface 1210 may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display 1212 may be provided as a stand-alone device or integrated with other elements of each one of user equipment device 1200 and user equipment device 1201. For example, display 1212 may be a touchscreen or touch-sensitive display. In such circumstances, user input interface 1210 may be integrated with or combined with display 1212. In some embodiments, user input interface 1210 includes a remote-control device having one or more microphones, buttons, keypads, any other components configured to receive user input or combinations thereof. For example, user input interface 1210 may include a handheld remote-control device having an alphanumeric keypad and option buttons. In a further example, user input interface 1210 may include a handheld remote-control device having a microphone and control circuitry configured to receive and identify voice commands and transmit information to set-top box 1215.
Audio output equipment 1214 may be integrated with or combined with display 1212. Display 1212 may be one or more of a monitor, a television, a liquid crystal display (LCD) for a mobile device, amorphous silicon display, low-temperature polysilicon display, electronic ink display, electrophoretic display, active matrix display, electro-wetting display, electro-fluidic display, cathode ray tube display, light-emitting diode display, electroluminescent display, plasma display panel, high-performance addressing display, thin-film transistor display, organic light-emitting diode display, surface-conduction electron-emitter display (SED), laser television, carbon nanotubes, quantum dot display, interferometric modulator display, or any other suitable equipment for displaying visual images. A video card or graphics card may generate the output to the display 1212. Audio output equipment 1214 may be provided as integrated with other elements of each one of device 1200 and equipment 1201 or may be stand-alone units. An audio component of videos and other content displayed on display 1212 may be played through speakers (or headphones) of audio output equipment 1214. In some embodiments, audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers of audio output equipment 1214. In some embodiments, for example, control circuitry 1204 is configured to provide audio cues to a user, or other audio feedback to a user, using speakers of audio output equipment 1214. There may be a separate microphone 1216 or audio output equipment 1214 may include a microphone configured to receive audio input such as voice commands or speech. For example, a user may speak letters or words that are received by the microphone and converted to text by control circuitry 1204. In a further example, a user may voice commands that are received by a microphone and recognized by control circuitry 1204. Camera 1218 may be any suitable video camera integrated with the equipment or externally connected. Camera 1218 may be a digital camera comprising a charge-coupled device (CCD) and/or a complementary metal-oxide semiconductor (CMOS) image sensor. Camera 1218 may be an analog camera that converts to digital images via a video card.
The XR application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly-implemented on each one of user equipment device 1200 and user equipment device 1201. In such an approach, instructions of the application may be stored locally (e.g., in storage 1208), and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). Control circuitry 1204 may retrieve instructions of the application from storage 1208 and process the instructions to provide media consumption and social network interaction functionality and generate any of the displays discussed herein. Based on the processed instructions, control circuitry 1204 may determine what action to perform when input is received from user input interface 1210. For example, movement of a cursor on a display up/down may be indicated by the processed instructions when user input interface 1210 indicates that an up/down button was selected. An application and/or any instructions for performing any of the embodiments discussed herein may be encoded on computer-readable media. Computer-readable media includes any media capable of storing data. The computer-readable media may be non-transitory including, but not limited to, volatile and non-volatile computer memory or storage devices such as a hard disk, floppy disk, USB drive, DVD, CD, media card, register memory, processor cache, Random Access Memory (RAM), etc.
Control circuitry 1204 may allow a user to provide user profile information or may automatically compile user profile information. For example, control circuitry 1204 may access and monitor network data, video data, audio data, processing data, and participation data from an XR application and social network profile. Control circuitry 1204 may obtain all or part of other user profiles that are related to a particular user (e.g., via social media networks), and/or obtain information about the user from other sources that control circuitry 1204 may access. As a result, a user can be provided with a unified experience across the user's different devices.
In some embodiments, the XR application is a client/server-based application. Data for use by a thick or thin client implemented on each one of user equipment device 1200 and user equipment device 1201 may be retrieved on-demand by issuing requests to a server remote to each one of user equipment device 1200 and user equipment device 1201. For example, the remote server may store the instructions for the application in a storage device. The remote server may process the stored instructions using circuitry (e.g., control circuitry 1204) and generate the displays discussed above and below. The client device may receive the displays generated by the remote server and may display the content of the displays locally on device 1200. This way, the processing of the instructions is performed remotely by the server while the resulting displays (e.g., that may include text, a keyboard, or other visuals) are provided locally on device 1200. Device 1200 may receive inputs from the user via input interface 1210 and transmit those inputs to the remote server for processing and generating the corresponding displays. For example, device 1200 may transmit a communication to the remote server indicating that an up/down button was selected via input interface 1210. The remote server may process instructions in accordance with that input and generate a display of the application corresponding to the input (e.g., a display that moves a cursor up/down). The generated display may then be transmitted to device 1200 for presentation to the user.
In some embodiments, the XR application may be downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry 1204). In some embodiments, the XR application may be encoded in the ETV Binary Interchange Format (EBIF), received by control circuitry 1204 as part of a suitable feed, and interpreted by a user agent running on control circuitry 1204. For example, the XR application may be an EBIF application. In some embodiments, the XR application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry 1204. In some of such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the XR application may be, for example, encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program.
Although communications paths are not drawn between user equipment devices, these devices may communicate directly with each other via communications paths as well as other short-range, point-to-point communications paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802.11x, etc.), or other short-range communication via wired or wireless paths. The user equipment devices may also communicate with each other through an indirect path via communication network 1306.
System 1300 may comprise media content source 1302, one or more servers 1304, and one or more social network services. In some embodiments, the XR application may be executed at one or more of control circuitry 1311 of server 1304 (and/or control circuitry of user equipment devices 1307, 1308, 1310).
In some embodiments, server 1304 may include control circuitry 1311 and storage 1314 (e.g., RAM, ROM, Hard Disk, Removable Disk, etc.). Instructions for the XR application may be stored in storage 1314. In some embodiments, the XR application, via control circuitry, may execute the functions outlined above and below.
Control circuitry 1311 may be based on any suitable control circuitry such as one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, control circuitry 1311 may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 1311 executes instructions for an emulation system application stored in memory (e.g., the storage 1314). Memory may be an electronic storage device provided as storage 1314 that is part of control circuitry 1311.
At 1402, the XR application, via the I/O path 1312, generates for display on a client device (e.g., smart phone, tablet, AR headset, VR headset, laptop, television, personal computer, etc.) a graphical representation of a 3D location. At 1404, the XR application, via the I/O path 1312, receives an input (e.g., via touchscreen, keyboard, voice, gesture, instruction, code, eye gaze, etc.), via the client device (e.g., user equipment 1307, 1308, 1310), identifying a portion of the graphical representation. At 1406, the XR application, via the I/O path 1312 and/or control circuitry 1311, accesses (e.g., via networking, Bluetooth, internal bus, wireless communication, wired communication, or any suitable digital communication medium thereof) a mapping between objects in the graphical representation of the 3D location and 3D objects in the 3D location. At 1408, the XR application, via control circuitry 1311, based on the mapping, identifies (e.g., via control circuitry of the server/XR device/user device, etc.) a correspondence between a graphical object in the portion of the graphical representation and a 3D object in the 3D location. If the XR application determines, at 1410, that the object in the portion of the graphical representation and the 3D object correspond, then processing may proceed to 1412. If the XR application determines, at 1410, that the object in the portion of the graphical representation and the 3D object do not correspond, then processing reverts to 1408. At 1412, the XR application transmits, via the I/O path 1312 (e.g., via networking, Bluetooth, internal bus, wireless communication, wired communication, or any suitable digital communication medium thereof), a signal to cause an XR device at the 3D location to generate for display an augmentation for the 3D object in the 3D location.
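For illustration only, the overall flow of steps 1402-1412 can be sketched as a single function over a toy grid-based mapping; the grid quantization and object identifiers are assumptions rather than part of the disclosed method.

```python
from typing import Dict, Optional, Tuple

def handle_client_selection(coords: Tuple[int, int],
                            mapping: Dict[Tuple[int, int], str],
                            cell: int = 50) -> Optional[str]:
    """Toy flow for steps 1402-1412: the client's input coordinates (1404) are
    quantized to a grid cell, the mapping is consulted (1406/1408), and if a
    corresponding 3D object is found (1410) its ID is returned so that a signal
    can be sent to the XR device to display the augmentation (1412)."""
    key = (coords[0] // cell, coords[1] // cell)
    return mapping.get(key)   # None means no correspondence; identification is retried

# Toy mapping: the grid cell containing the apple tree maps to its 3D object ID.
grid_mapping = {(8, 6): "apple_tree_01"}
print(handle_client_selection((412, 305), grid_mapping))   # -> "apple_tree_01"
```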
The processes discussed above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the steps of the processes discussed herein may be omitted, modified, combined and/or rearranged, and any additional steps may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be illustrative and not limiting. Only the claims that follow are meant to set bounds as to what the present invention includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods.