Representing real-world objects with a virtual reality environment

Information

  • Patent Grant
  • Patent Number
    12,108,184
  • Date Filed
    Friday, October 29, 2021
  • Date Issued
    Tuesday, October 1, 2024
Abstract
An image processing system enables a user wearing a head-mounted display to experience a virtual environment combined with a representation of a real-world object. The image processing system receives a captured scene of a real-world environment that includes a target object. The image processing system identifies the target object in the captured scene and generates a representation of the target object. In some cases, the image processing system may include a graphical overlay with the representation of the target object. The image processing system can generate a combined scene that includes the target object and the virtual environment. The combined scene is presented to the user, thereby allowing the user to interact with the real-world target object (or a representation thereof) in combination with the virtual environment.
Description
BACKGROUND

This disclosure relates generally to an image processing system, and more specifically to rendering content via a virtual reality (VR) system.


VR technology and corresponding equipment such as head-mounted displays (HMDs) or VR headsets are becoming increasingly popular. A virtual scene rendered to a user wearing an HMD can provide an interactive experience in a virtual environment. At times, the user may intend to interact with objects, such as real-world objects, while wearing the HMD. However, in some conventional VR systems, while the user is wearing the HMD, he/she may be unable to see and/or may have difficulty determining where a real-world object is. As such, conventional approaches can make it inconvenient or challenging for the user wearing the HMD to interact with the real-world object while experiencing the virtual environment. This can degrade the user experience associated with utilizing, engaging with, or otherwise interacting with the virtual environment.


SUMMARY

An image processing system can provide a virtual reality (VR) experience to a user wearing a head-mounted display (HMD) and can enable the user to interact with one or more objects in a real-world environment. In one example, the image processing system receives image data (e.g., one or more still frame images and/or video frame images, etc.) of a scene. In some cases, receiving data can include capturing, detecting, acquiring, and/or obtaining data. The scene can be associated with a real-world environment around the user wearing the HMD. The real-world environment can include a real-world object that is captured in the scene. In other words, received image data representing the scene can include image data that represents the real-world object. The real-world object in the captured scene (i.e., in the received image data of the scene) is referred to as a target object. In this example, the user wearing the HMD and experiencing a virtual environment may desire or intend to interact with the target object while continuing to experience the virtual environment while wearing the HMD. The image processing system can detect or identify the target object in the captured scene. After identifying the target object in the captured image, the image processing system can include the target object within the virtual environment that the user is experiencing via the HMD. A generated scene including the virtual environment and a rendering (i.e., a rendered/generated representation) of the target object is referred to as a combined scene. The image processing system can present the combined scene to the user via the HMD.
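
As a rough illustration of the flow described above (receive image data, identify the target object, generate a representation, combine it with the virtual environment, and present the result), the following Python sketch shows one possible arrangement. The function names and the mask-based compositing are assumptions made for illustration; the disclosure does not prescribe a specific implementation.

```python
# Illustrative sketch of the receive -> identify -> represent -> combine -> present
# flow described above. All callables are hypothetical placeholders.
import numpy as np

def process_frame(captured_frame: np.ndarray,
                  identify_target,        # returns a boolean mask of target-object pixels
                  render_virtual_scene,   # returns an RGB frame of the virtual environment
                  apply_overlay=None):    # optional skin/graphical overlay
    """Combine a captured real-world frame with a rendered virtual scene."""
    target_mask = identify_target(captured_frame)          # identify the target object
    representation = captured_frame.copy()                  # start from the captured pixels
    if apply_overlay is not None:
        representation = apply_overlay(representation, target_mask)
    combined = render_virtual_scene(captured_frame.shape)   # virtual content everywhere...
    combined[target_mask] = representation[target_mask]     # ...except where the target appears
    return combined                                          # presented to the user via the HMD
```

In this arrangement, the per-pixel masking is what creates the appearance that the target object "passes through" into the virtual environment.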


In some embodiments, the image processing system creates the appearance that the target object (e.g., received pixel data representing the target object) “passes through” into the virtual environment provided to the user via the HMD. A user holding a target object, for example, may have the target object represented in the virtual world shown in the HMD at the location of the physical object in the real world. For instance, pixel data received for the target object (e.g., real-world object) can be used to generate pixel data for a representation of the target object rendered in combination with the virtual environment. The pixel data for the representation of the target object can be rendered in a combined scene with the virtual environment. In some cases, the pixel data received for the target object can be modified in order to generate the pixel data for the representation of the target object rendered in combination with the virtual environment. In some cases, the pixel data for the representation of the target object can be generated to be equivalent to the pixel data initially received for the target object.


Moreover, in some implementations, the image processing system can cause the target object to appear to be overlaid on the virtual environment experienced by the user wearing the HMD. In some implementations, while rendering the target object with the virtual environment, the image processing system can apply a graphical overlay, such as a skin, to the target object. A graphical overlay, as used herein, refers to a visual effect that the image processing system applies in association with rendering a representation of the real-world object. In some cases, the graphical overlay (e.g., skin) can be applied in an attempt to assist the user in tracking the target object in the virtual environment, and/or to allow the target object to more appropriately fit the virtual environment in a graphical sense (e.g., to visually fit a theme of the virtual environment).





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a block diagram of an example system environment in which an image processing system can operate, in accordance with an embodiment.



FIG. 2A illustrates an example environment in which an image processing system operates, in accordance with an embodiment.



FIG. 2B illustrates an example real-world scene including one or more real-world objects in a real-world environment, in accordance with an embodiment.



FIG. 2C illustrates an example virtual scene including one or more virtual objects in a virtual environment, in accordance with an embodiment.



FIG. 2D illustrates an example combined scene including one or more real-world objects and a virtual environment, in accordance with an embodiment.



FIG. 3 illustrates a block diagram of an architecture of an example image processing system, in accordance with an embodiment.



FIG. 4A illustrates a flowchart describing an example process of representing a real-world object with a virtual environment, in accordance with an embodiment.



FIG. 4B illustrates a flowchart describing an example process of representing a real-world object with a virtual environment, in accordance with an embodiment.





The figures depict various embodiments of the disclosed technology for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the technology described herein.


DETAILED DESCRIPTION

System Architecture



FIG. 1 shows a block diagram of a system environment 100 in which an image processing system, such as a virtual reality (VR) system 300, operates, in accordance with an embodiment. The system environment 100 shown in FIG. 1 includes an input component 120 (e.g., an image capture device, a camera, etc.), a VR system 300, and a head-mounted display (HMD) 150. Only one input component, one VR system, and one HMD are shown in FIG. 1 for purposes of illustration. In alternative embodiments not explicitly shown, the system environment 100 can include multiple input components 120, VR systems 300, HMDs 150, and different and/or additional components. Likewise, the functions performed by various entities, modules, components, and/or devices, etc., in the system environment 100 may differ in different embodiments.


As discussed more fully below, in this environment, the HMD 150 provides content for a user operating in an environment (e.g., real-world environment) or local area. The user wearing the HMD 150 may interact with objects in the environment that are identified by the VR system and represented in the display shown to the user. The input component 120 receives, such as by detecting, acquiring, and/or capturing, a view of the environment that can include various real-world objects. The VR system 300 analyzes the view of the environment to identify one or more target objects to be presented to the user via the HMD 150. For example, a target object may be a real-world object with a recognizable size, shape, color, or other characteristics identifiable by the VR system 300. A representation of these target objects is generated by the VR system 300 and combined with other content (such as rendered VR content) to generate a combined scene for viewing in the HMD 150.


The input component 120 can be configured to capture image data (e.g., one or more still frames, one or more video image frames, etc.) of a real-world scene in a local area. The local area can be an environment in reality (e.g., in the real world) where a user wearing the HMD 150 is located. For example, the user wearing the HMD 150 may stand or sit in an environment and may face a particular real-world scene in the local area. In one embodiment, the input component 120 is an outward facing camera installed on the HMD 150. For instance, the camera can be facing, within an allowable deviation, a direction in which the user wearing the HMD 150 is facing. In some implementations, the input component 120 thus captures image data representing the real-world scene that the user wearing the HMD 150 would see if he/she was not wearing the HMD 150. For example, the real-world scene can depict one or more real-world objects and the input component 120 captures image data representing the one or more real-world objects. In this example, at least some of the one or more real-world objects can correspond to a target object. The input component 120 provides the information that it detects, captures, acquires, and/or receives (e.g., image data representing at least one target object) to the VR system 300.
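
A minimal sketch of such an outward-facing input component is shown below, assuming an OpenCV-accessible camera. The camera index and the `receive_image_data` hand-off to the VR system are hypothetical placeholders, not part of the disclosure.

```python
# Hypothetical sketch of an outward-facing input component: capture frames and hand
# them to the VR system. Camera index and hand-off API are assumptions.
import cv2

def capture_frames(vr_system, camera_index=0):
    cap = cv2.VideoCapture(camera_index)          # outward-facing camera on the HMD
    try:
        while True:
            ok, frame = cap.read()                # one still/video image frame of the scene
            if not ok:
                break
            vr_system.receive_image_data(frame)   # hypothetical hand-off to the VR system
    finally:
        cap.release()
```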


In some embodiments, the HMD 150 can be included as a part of the VR system 300. In some embodiments, the HMD 150 can be separate from, but utilized by and/or operable with, the VR system 300. In one example, the HMD 150 can correspond to a VR headset that enables the user wearing it to experience a virtual environment. For instance, the HMD 150 can display one or more virtual scenes or views of the virtual environment, such that the user feels as if he/she were actually present in the virtual environment through the scenes or views displayed on the HMD 150. Additionally, in some implementations, the HMD 150 can include the input component 120 (e.g., a camera facing the same direction as the user) configured to capture a view(s) or scene(s) of a real-world environment that the user would perceive if he/she was not wearing the HMD 150. In some cases, the input component 120 can be separate from the HMD 150.


In some embodiments, the VR system 300 presents one or more representations of one or more target objects combined together with a virtual environment to a user wearing the HMD 150. For instance, the VR system 300 sends the representations of the target objects combined with the virtual environment for presentation at the HMD 150. The VR system 300 receives image data representing a real-world environment. The VR system 300 identifies one or more portions of the image data corresponding to one or more target objects included in the real-world environment. For example, the VR system 300 can identify the one or more portions by tracking the one or more target objects from one received image frame to another. In some cases, the VR system 300 can include a default setting, a predefined configuration, and/or programmed logic, etc., to identify certain real-world objects as target objects. For instance, utilizing object recognition technology (e.g., based on machine learning, edge detection, line detection, and/or computer vision, etc.), the VR system 300 can identify one or more portions of received image data that represent one or more specified target objects. In one example, a target object may be a specific object, or may be an object with one or more specific properties, that the object recognition technology has already been set, configured, or programmed to attempt to recognize in the received image data.
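
As one hedged illustration of recognition-based identification (not necessarily the disclosed system's method), the sketch below identifies the portion of a frame that likely depicts a preconfigured target object using simple color thresholding and contour extraction; the HSV bounds and the largest-blob assumption are illustrative only.

```python
# Minimal computer-vision sketch of identifying the image portion that represents a
# preconfigured target object. HSV bounds are illustrative assumptions.
import cv2
import numpy as np

def identify_target_portion(frame_bgr, hsv_lo=(20, 80, 80), hsv_hi=(35, 255, 255)):
    """Return a boolean mask marking pixels that likely belong to the target object."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(frame_bgr.shape[:2], dtype=bool)
    largest = max(contours, key=cv2.contourArea)       # assume the largest blob is the target
    target_mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    cv2.drawContours(target_mask, [largest], -1, 255, thickness=cv2.FILLED)
    return target_mask.astype(bool)
```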


Moreover, in some cases, the VR system 300 can utilize at least one of a marker, a tag, a coloring, or a painting that is physically associated with a target object in order to identify the target object (or image data portions representing the target object). For example, one or more target objects can be physically marked with one or more tags (e.g., one or more colored dots/stickers). The VR system 300 can be configured and/or trained to detect, identify, and/or recognize the one or more tags based on analysis of image data received via the input component 120. Based on detecting the one or more tags, the VR system 300 can identify the one or more target objects that are marked with the one or more tags. In another example, a target object can be fitted with a marker that transmits a signal, such as an infrared (IR) emitter that emits an infrared signal. In this example, the VR system 300 can utilize an input component 120 corresponding to an IR detector or sensor to identify the emitted IR signal and thus to facilitate identifying the target object.
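
The sketch below illustrates, under assumed tag colors and region size, how a colored-dot tag might be located and used to bound the tagged target object; it is a simplified stand-in for the marker-based identification described above.

```python
# Hedged sketch of tag-based identification: find a colored dot/sticker physically
# attached to the target object and treat a region around it as the target location.
import cv2
import numpy as np

def locate_tagged_target(frame_bgr, tag_lo=(140, 100, 100), tag_hi=(160, 255, 255),
                         half_size=60):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    tag_mask = cv2.inRange(hsv, np.array(tag_lo), np.array(tag_hi))
    m = cv2.moments(tag_mask)
    if m["m00"] == 0:
        return None                       # tag not visible in this frame
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    h, w = frame_bgr.shape[:2]
    x0, x1 = max(0, cx - half_size), min(w, cx + half_size)
    y0, y1 = max(0, cy - half_size), min(h, cy + half_size)
    return (x0, y0, x1, y1)               # box assumed to contain the tagged target object
```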


Furthermore, the VR system 300 can generate, based on the identified one or more portions of the image data, one or more representations of the one or more target objects. In some implementations, the VR system 300 can “pass through” the received image data representing a target object into a rendered scene to combine the real-world target object and a virtual environment. For example, the identified one or more portions of the image data can be associated with or can correspond to a first set of pixels (i.e., data about a first set of pixels). In this example, one or more representations of one or more target objects can be associated with or can correspond to a second set of pixels (i.e., data about a second set of pixels). The VR system 300 can generate the second set of pixels based on the first set of pixels received via the input component 120. Pixel locations of the first set of pixels can be translated to pixel locations for the second set of pixels. Continuing with this example, the VR system 300 can cause the first set of pixels to pass through as the second set of pixels, such that the second set of pixels is generated to be equivalent to the first set of pixels. In some instances, the one or more representations of the one or more target objects can be generated without modifying (e.g., without additional image processing techniques being applied to) the one or more portions of the image data.
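
A compact sketch of this pass-through behavior follows: the pixel locations of the first set are translated one-to-one into the combined scene, and the second set is generated equal to the first. It assumes the captured frame and the rendered scene share the same resolution.

```python
# Sketch of the "pass-through" idea: second pixel set generated equal to the first,
# at the translated (here identical) pixel locations.
import numpy as np

def pass_through(captured_frame, target_mask, virtual_scene):
    combined = virtual_scene.copy()
    ys, xs = np.nonzero(target_mask)            # pixel locations of the first set
    combined[ys, xs] = captured_frame[ys, xs]   # second set equals the first set
    return combined
```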



FIG. 2A illustrates an example environment in which an image processing system, such as a VR system 300, operates, in accordance with an embodiment. As shown in FIG. 2A, there can be a user 202 of the image processing system and a real-world environment 204. In this example, the user 202 can be wearing a head-mounted display (HMD) 150 associated with the image processing or VR system 300. Although not explicitly shown, in some implementations, the HMD 150 can include an input component 120. The input component 120 can, for instance, be outward facing or substantially facing a direction in which the user 202 is facing. The input component 120 captures image data representing the real-world environment 204. In the example of FIG. 2A, the real-world environment 204 can include one or more real-world objects, such as one or more target objects 206. In this example, the one or more real-world objects can also include one or more background objects 208.


In some embodiments, the input component 120, which can be associated with the VR system 300 and/or installed at the HMD 150, receives image data corresponding to or representing a real-world scene (e.g., a portion, view, and/or perspective, etc., of the real-world environment 204). The VR system 300 can analyze the received image data to identify one or more target objects 206 represented in the received image data (i.e., in one or more portions of the received image data). For example, object detection technology and/or object recognition technology can be utilized by the VR system 300 to facilitate identifying one or more portions of the received image data that represent or depict the one or more target objects 206. In this example, object detection technology and/or object recognition technology can be utilized by the VR system 300 to distinguish a target object 206 from a background object 208 or from another non-target object. As discussed above, in some cases, the target object 206 can be defined based on preprogrammed logic, a predefined setting, and/or a default configuration, etc. In the example of FIG. 2A, the VR system 300 can be configured to identify a stick as a target object 206. In other words, the target object 206 can be specifically defined to be a stick. As such, in this example, the received image data can be analyzed to identify one or more portions of the image data that represent the stick target object 206. The VR system 300 can distinguish between the stick target object 206 and the background objects 208. For instance, there may be multiple target objects in the real-world environment 204 to be detected and/or identified by the VR system 300. The VR system may be configured to identify target object(s) 206 having various shapes or types, and there may be more than one background object 208 identified in the captured scene.
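
One possible (assumed, not disclosed) way to distinguish a stick-like target object from background objects is a shape heuristic: keep only contours whose rotated bounding box is long and thin, as sketched below with illustrative thresholds.

```python
# Illustrative shape-based filter for a stick-like target object.
import cv2

def find_stick_contour(binary_mask, min_aspect=4.0, min_length=80):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        if min(w, h) == 0:
            continue
        long_side, short_side = max(w, h), min(w, h)
        if long_side >= min_length and long_side / short_side >= min_aspect:
            return c                      # elongated contour: likely the stick target object
    return None                           # everything else is treated as background
```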



FIG. 2B illustrates an example real-world scene 210 including one or more real-world objects in a real-world environment, in accordance with an embodiment. In the example real-world scene 210, there can be one or more real-world objects, such as a target object 216 and background objects 218. In this example, the target object 216 includes a marker 214. Accordingly, a VR system 300 can identify one or more portions of received image data that represent the target object 216 based on the marker 214. As an example, the target object 216 has an infrared (IR) emitter as its marker 214. In this example, the VR system 300 can utilize an input component 120 that is configured to detect, capture, acquire, and/or otherwise receive IR image data, such as an IR signal emitted by the IR emitter marker 214 of the target object 216. For instance, at least a portion of the input component 120 can include an IR sensor, detector, and/or camera, etc. The received IR signal emitted from the IR emitter 214 is used to identify the target object 216 and distinguish the target object 216 from the background objects 218 and other non-target objects.
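
A short sketch of IR-marker detection under these assumptions follows: in a frame from an IR-sensitive input component, the emitter appears as a bright blob whose centroid locates the target object. The intensity threshold is illustrative.

```python
# Hedged sketch of IR-marker identification via bright-blob detection.
import cv2

def locate_ir_marker(ir_frame_gray, threshold=240):
    _, bright = cv2.threshold(ir_frame_gray, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(bright)
    if m["m00"] == 0:
        return None                                               # no IR signal detected
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])     # marker centroid (x, y)
```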



FIG. 2C illustrates an example virtual scene 220 including one or more virtual objects in a virtual environment, in accordance with an embodiment. In FIG. 2C, a VR system 300 can provide a virtual environment for a user wearing an HMD 150 to experience. The example virtual scene 220 can be a portion, view, and/or perspective, etc., of the virtual environment that is displayed to the user via the HMD 150. As shown in FIG. 2C, the example virtual scene 220 can include one or more virtual objects 222. In this example, the virtual objects 222 can correspond to virtual hills or mountains within the virtual environment provided by the VR system 300.



FIG. 2D illustrates an example combined scene 230 including one or more real-world objects and a virtual environment, in accordance with an embodiment. In the example of FIG. 2D, the VR system 300 can generate a representation of the stick target object 216 based on a portion(s) of received image data identified as representing or depicting the stick target object 216. The representation of the stick target object 216 can be combined with the virtual environment, including the virtual scene 220 with the virtual objects 222, to produce the combined scene 230.


As discussed previously, the representation of the stick target object 216 can be generated based on the identified portion(s) of the received image data that represents, depicts, and/or corresponds to the stick target object 216 in the real-world environment. In some instances, pixels that form the identified portion(s) of the received image data can be passed through into the virtual scene 220 including the virtual objects 222, in order to generate pixels forming the representation of the stick target object 216. In some cases, the representation of the stick target object 216 can be generated based on the identified portion(s) in that the representation can be rendered in the combined scene 230 based on the location(s) of the identified portion(s). For example, the location(s) of the identified portion(s) can be determined and/or tracked in or near real-time, and the representation of the target object 216 can correspond to a rendered skin(s) and/or a graphical icon(s) overlaid in the virtual scene 220 at the location(s) of the identified portion(s).
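
The sketch below illustrates one way a graphical icon could be overlaid in the virtual scene at the tracked location; the RGBA icon, its size, and the alpha-blending approach are assumptions for illustration.

```python
# Sketch of rendering a small RGBA icon at the tracked location of the identified portion.
import numpy as np

def overlay_icon(virtual_scene, icon_rgba, center_xy):
    """Alpha-blend an icon onto the virtual scene at the tracked target location."""
    scene = virtual_scene.copy()
    ih, iw = icon_rgba.shape[:2]
    x0 = int(center_xy[0] - iw // 2)
    y0 = int(center_xy[1] - ih // 2)
    x1, y1 = x0 + iw, y0 + ih
    if x0 < 0 or y0 < 0 or x1 > scene.shape[1] or y1 > scene.shape[0]:
        return scene                                  # icon would fall outside the scene; skip
    alpha = icon_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = scene[y0:y1, x0:x1].astype(np.float32)
    scene[y0:y1, x0:x1] = (alpha * icon_rgba[:, :, :3] + (1 - alpha) * region).astype(np.uint8)
    return scene
```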


In the example of FIG. 2D, the combined scene 230 includes the representation of the target object 216 being rendered with one or more graphical overlays (e.g., skins, stickers, animations, graphical filters, etc.). As such, the representation of the target object 216 can include the one or more graphical overlays. In this example, the one or more graphical overlays can partially replace the original appearance of the target object 216. As shown in FIG. 2D, there can be three example skins, such as Skin A 232, Skin B 234, and Skin C 236. Each of these skins can be overlaid on a respective portion of the target object 216. Skin A 232 can be overlaid onto a top portion of the stick target object 216. Skin B 234 can be overlaid onto an upper portion of the body of the stick target object 216. Skin C 236 can be overlaid onto a bottom portion of the body of the stick target object 216. In some cases, the one or more graphical overlays can entirely replace or cover the original appearance of the target object 216. In one example, the representation of the stick target object 216 can include a baseball bat skin, a sword skin, or a joystick overlay, etc. In another example, graphical overlays can have various types, shapes, sizes, and/or colors, etc.
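
As a simplified stand-in for Skin A/B/C above, the sketch below splits the target mask into vertical thirds and tints each third a different color; real skins would be textured renders, and the colors here are arbitrary assumptions.

```python
# Illustrative sketch of applying different skins to respective portions of the target.
import numpy as np

def apply_segmented_skins(combined_scene, target_mask,
                          colors=((200, 50, 50), (50, 200, 50), (50, 50, 200))):
    scene = combined_scene.copy()
    ys, xs = np.nonzero(target_mask)
    if ys.size == 0:
        return scene
    top, bottom = ys.min(), ys.max()
    bounds = np.linspace(top, bottom + 1, num=len(colors) + 1)
    for color, lo, hi in zip(colors, bounds[:-1], bounds[1:]):
        sel = (ys >= lo) & (ys < hi)                 # rows belonging to this portion
        scene[ys[sel], xs[sel]] = color              # overlay the skin for that portion
    return scene
```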


Furthermore, in some cases, a target object can be associated with at least one of a marker, a tag, a coloring, or a painting, as discussed above. In one instance, the marker associated with a target object can emit a wireless signal, such as an IR signal. As discussed, the disclosed technology can cause a representation of the target object to be generated with a graphical overlay, such as a skin. In this instance, the disclosed technology can cause the graphical overlay to be selected based on the wireless signal. A different wireless signal can cause a different graphical overlay to be selected for the representation of the target object.
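
A minimal sketch of signal-based overlay selection is shown below: a decoded signal identifier is looked up in a table of skins. The identifiers and skin names are hypothetical.

```python
# Hypothetical mapping from a decoded marker signal to a graphical overlay (skin).
SKIN_BY_SIGNAL = {
    "ir_pattern_a": "baseball_bat_skin",
    "ir_pattern_b": "joystick_skin",
    "ir_pattern_c": "space_weapon_skin",
}

def select_overlay(signal_id: str, default: str = "plain_pass_through") -> str:
    """Return the skin to apply for the given wireless-signal identifier."""
    return SKIN_BY_SIGNAL.get(signal_id, default)
```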



FIG. 3 illustrates a block diagram of an architecture of an example image processing system, such as the VR system 300 of FIG. 1, in accordance with an embodiment. As shown in the example architecture of FIG. 3, the VR system 300 can include an image data module 310, an identification module 320, a representation module 330, a virtual environment module 340, a combination module 350, a captured scene store 370, and a virtual environment store 380. FIG. 3 shows one example, and in alternative embodiments not shown, additional and/or different modules or data stores can be included in the VR system 300. For instance, in some implementations, the VR system 300 also includes a head-mounted display (HMD) 150 and may also include an input component 120. As described above, in some embodiments, the HMD 150 can include an input component 120, such as an outward facing camera installed at the HMD 150.


The image data module 310 communicates with the input component 120 to receive image data captured or received by the input component 120. In some cases, the image data module 310 transmits the received image data to the identification module 320 for identification of target objects within a real-world environment.


The identification module 320 identifies one or more portions of the image data that represent one or more target objects included in the real-world environment. The one or more portions of the image data can, for instance, be associated with a first set of pixels (i.e., data about a first set of pixels). In some embodiments, the identification module 320 can analyze the received image data using object recognition and/or detection techniques to identify the one or more portions of the image data (e.g., the first set of pixels) that represent or depict the one or more target objects. In one example, machine learning can be utilized to train a classifier or model for recognizing one or more particular objects, such as a baseball. In this example, the identification module 320 can include logic, such as a setting, a preference, a configuration, a specification, and/or a manually inputted command, that instructs the identification module 320 to identify baseballs as being target objects. Continuing with this example, when the received image data of the real-world environment contains a baseball, the identification module 320 identifies the specific portion(s) of the received image corresponding to the baseball.
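
The sketch below illustrates, with a hypothetical `classify_patch` callable standing in for a trained classifier, how candidate regions from a precomputed foreground mask could be labeled and kept only when classified as the configured target class.

```python
# Hedged sketch of classifier-based identification: propose candidate regions by
# contour detection over an assumed precomputed mask, then label each with a
# hypothetical trained classifier and keep the ones matching the target class.
import cv2

def identify_by_classifier(frame_bgr, candidate_mask, classify_patch, target_label="baseball"):
    contours, _ = cv2.findContours(candidate_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        patch = frame_bgr[y:y + h, x:x + w]
        if classify_patch(patch) == target_label:     # classifier trained offline, per the text
            hits.append((x, y, w, h))
    return hits                                       # portions identified as target objects
```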


In some embodiments, the identification module 320 identifies a portion(s) of the image data that represents a target object based on at least one of a marker, a tag, a coloring, or a painting associated with the target object. In one example, the target object can have an IR emitter attached to it. The identification module 320 utilizes an IR sensor, detector, and/or receiver, etc., to identify the target object. In another example, a physical sticker having a particular color can be affixed to the target object. In this example, the identification module 320 can include logic that causes the target object to be identified based on recognizing or detecting the physical sticker of the particular color.


Additionally, the representation module 330 can be configured to facilitate generating, based on the one or more portions of the image data, one or more representations of the one or more target objects. As discussed above, the one or more portions of the image data identified by the identification module 320 can be associated with a first set of pixels (i.e., data about a first set of pixels, or first pixel data). For instance, the first set of pixels can form the one or more portions of the image data. In some embodiments, the one or more representations of the one or more target objects can be associated with a second set of pixels (i.e., data about a second set of pixels, or second pixel data). For example, the representation module 330 can cause the second set of pixels to form the one or more representations of the one or more target objects.


In some implementations, the representation module 330 can generate the second set of pixels associated with the one or more representations based on the first set of pixels associated with the one or more portions of the image data. For example, the second set of pixels can be generated by the representation module 330 to be equivalent to the first set of pixels associated with the one or more portions of the image data. In some cases, the one or more representations of the one or more target objects can be generated without modifying the one or more portions of the image data. As such, additional image processing need not be applied to the one or more portions of the image data in order to generate the one or more representations. For instance, the representation module 330 can “pass through” the first set of pixels as the second set of pixels.


Additionally or alternatively, in some embodiments, the representation module 330 can generate the one or more representations of the one or more target objects by graphically modifying the one or more portions of the image data. For instance, the representation module 330 can modify the colors of the one or more portions of the image data to produce the one or more representations. In another instance, the representation module 330 can modify the sizes and/or orientations of the one or more portions of the image data to produce the one or more representations.
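
A short sketch of such graphical modification follows: the cropped target patch is tinted, rescaled, and rotated before being composited. The specific tint, scale factor, and angle are illustrative assumptions.

```python
# Sketch of graphically modifying the identified portion to produce the representation.
import cv2
import numpy as np

def modify_portion(patch_bgr, tint=(0, 30, 30), scale=1.2, angle_deg=15.0):
    tint_arr = np.array(tint, dtype=np.int16)
    recolored = np.clip(patch_bgr.astype(np.int16) + tint_arr, 0, 255).astype(np.uint8)
    resized = cv2.resize(recolored, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    h, w = resized.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(resized, rot, (w, h))       # modified representation of the target
```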


In addition, the virtual environment module 340 can be configured to facilitate generating a virtual environment. The virtual environment can be experienced by a user via one or more virtual scenes, which can include one or more virtual objects. The virtual environment module 340 can cause the virtual environment, including the virtual scenes, to be presented or displayed to the user via an HMD 150 worn by the user.


Furthermore, the combination module 350 can be configured to facilitate rendering, displaying, providing, generating, and/or otherwise presenting a scene that combines the virtual environment and the one or more representations of the one or more target objects. For instance, the combination module 350 can generate a combined scene in which a representation of a target object is displayed to the user along with a virtual environment. Accordingly, in this instance, the user can experience the virtual environment while simultaneously interacting with the target object (or a representation thereof).


Moreover, in some implementations, the identification module 320 of the VR system 300 can identify one or more locations of the one or more portions relative to the received image data (e.g., relative to a received image frame). The representation module 330 can then generate the one or more representations of the one or more target objects by rendering the one or more representations based on the one or more locations. For example, the identification module 320 can identify pixel coordinates where one or more target objects are located within an image frame captured or received by the input component 120. The virtual environment module 340 can determine one or more corresponding locations, relative to a virtual environment, that correspond to the one or more locations of the one or more portions relative to the received image data. In this example, the virtual environment module 340 can determine pixel coordinates within the virtual environment that correspond to the identified pixel coordinates relative to the received image frame. The representation module 330 can then cause one or more representations of the one or more target objects to be rendered at the one or more corresponding locations relative to the virtual environment (e.g., at the corresponding pixel coordinates within the virtual environment). The combination module 350 can produce a combined scene in which there is a combination of the one or more representations rendered at the one or more corresponding locations and virtual content rendered at all other locations relative to the virtual environment.
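
Assuming a simple proportional mapping between the camera frame resolution and the rendered scene resolution (the disclosure leaves the mapping unspecified), the determination of corresponding locations could look like the sketch below.

```python
# Sketch of mapping an identified location from captured-frame coordinates to
# corresponding virtual-environment coordinates under a proportional mapping.
def to_scene_coords(frame_xy, frame_size, scene_size):
    """frame_size and scene_size are (width, height); returns scene-space pixel coords."""
    sx = frame_xy[0] * scene_size[0] / frame_size[0]
    sy = frame_xy[1] * scene_size[1] / frame_size[1]
    return int(round(sx)), int(round(sy))

# Example: a target at (320, 240) in a 640x480 camera frame maps to (960, 540)
# in a 1920x1080 virtual scene.
```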


In one example, there can be two target objects, such as a stick and a baseball. A user can be holding the stick and the baseball can be thrown toward the user. In this example, the user can be wearing an HMD 150, which can display to the user a virtual environment, such as a rendered environment representing a famous baseball stadium. The disclosed technology can enable the user to see, via the HMD 150, the stick (or a representation thereof) that he/she is holding in the real-world. In this example, the stick can be overlaid with a skin, such as a virtual baseball bat skin. Also, when the baseball is thrown toward the user in the real-world, the disclosed technology can display the baseball (or a representation thereof), as well as the stick overlaid with the virtual baseball bat skin, in combination with the virtual baseball stadium environment.


In another example, there can be a target object in the form of a stick. A virtual environment provided by the virtual environment module 340 of the VR system 300 can present a flight simulation environment. The disclosed technology can identify the stick from received image data. In this example, the disclosed technology can also provide a representation of the stick via a rendered joystick skin. The target object representation including the rendered joystick skin can be presented, by the combination module 350, in combination with the flight simulation environment. When the user moves the real-world stick, the disclosed technology can cause the representation (e.g., the rendered joystick) to correspondingly move in the flight simulation environment in (or near) real-time.


In a further example, there can be a target object in the form of a stick. A virtual environment provided by the virtual environment module 340 of the VR system 300 can present a space adventure environment. The disclosed technology can identify the stick from received image data. In this example, the disclosed technology can also provide a representation of the stick via a rendered space weapon skin (e.g., a light/optic/laser saber). The target object representation including the space weapon skin can be presented, by the combination module 350, in combination with the space adventure environment. When the user moves the real-world stick, the disclosed technology can cause the representation (e.g., the space weapon) to correspondingly move in the space adventure environment in (or near) real-time. Many variations are possible. For instance, in some cases, graphical overlays (e.g., skins) need not be utilized at all.


Additionally, the captured scene store 370 can be configured to facilitate storing data associated with a captured scene that includes a target object(s). For instance, the captured scene store 370 can store image data (e.g., one or more still image frames and/or video image frames) representing or depicting a real-world scene that includes the target object(s). The captured scene store 370 can also be configured to facilitate providing data associated with a captured scene to the identification module 320 for recognizing, detecting, tracking, and/or otherwise identifying the target object.


Moreover, the virtual environment store 380 can be configured to facilitate storing data associated with a virtual environment, as well as data associated with a combined scene that includes the rendered representation of a target object and virtual content from the virtual environment. The virtual environment store 380 can also be configured to facilitate providing data associated with the virtual environment and/or data associated with the combined scene to a HMD 150 for presentation to a user wearing the HMD 150.



FIG. 4A illustrates a flowchart describing an example process of representing a real-world object with a virtual environment, in accordance with an embodiment. In the example process, at block 410, image data representing a real-world environment can be received, such as via the image data module 310 and/or the input component 120. At block 420, one or more portions of the image data that represent one or more target objects included in the real-world environment can be identified, such as by the identification module 320. In some instances, the one or more portions of the image data can be associated with first pixel data (i.e., data associated with a first set of pixels forming the one or more portions). At block 430, one or more representations of the one or more target objects can be generated, such as by the representation module 330, based on the one or more portions of the image data. In some cases, the one or more representations of the one or more target objects can be associated with second pixel data (i.e., data associated with a second set of pixels forming the one or more representations). In such cases, the second pixel data can be generated by the representation module 330 based on the first pixel data. In one instance, the second set of pixels forming the one or more representations can be generated to be equivalent to the first set of pixels forming the identified one or more portions. In another instance, locational pixel data (e.g., pixel coordinates) for pixels in the first set can be utilized to determine locational pixel data (e.g., pixel coordinates) for pixels in the second set.


Continuing with the example of FIG. 4A, at block 440, a virtual environment can be generated, such as by the virtual environment module 340. At block 450, a scene that combines the virtual environment and the one or more representations of the one or more target objects can be presented, such as by the combination module 350. It should be appreciated that FIG. 4A shows merely one example, and in alternative embodiments, fewer, additional, and/or different steps may be included in the flowchart.



FIG. 4B illustrates a flowchart describing an example process of representing a real-world object with a virtual environment, in accordance with an embodiment. In the example process of FIG. 4B, at block 460, one or more locations of the identified one or more portions relative to the image data can be identified, such as by the identification module 320. At block 470, one or more corresponding locations relative to the virtual environment that correspond to the one or more locations can be determined, such as by the virtual environment module 340. At block 480, the one or more representations can be rendered, such as by the representation module 330, at the one or more corresponding locations relative to the virtual environment. As discussed, it should be understood that there can be many variations associated with the disclosed technology.


Additional Configuration Information


The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.


Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.


Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.


Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.


Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.


Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims
  • 1. A computer-implemented method, comprising: receiving, by a virtual reality system, image data for an image frame of a real-world environment captured by an imaging sensor from a perspective of a user wearing a head-mounted display, the imaging sensor configured to capture a view of the real-world environment that the user would perceive if the user were not wearing the head-mounted display; identifying, by the virtual reality system, a first set of pixels of the image data representing a target object in the received image frame, the target object included in the real-world environment; generating, by the virtual reality system, a scene of one of a plurality of different virtual environments, the scene being from the perspective of the user wearing the head-mounted display, and the scene visually corresponding to a theme of the one virtual environment; overlaying, by the virtual reality system, the first set of pixels of the image data representing the target object in the received image frame onto the scene of the one virtual environment, the first set of pixels of the image data representing the target object located in the scene in a position corresponding to a position of the first set of pixels in the received image frame; applying a skin to at least a portion of the first set of pixels of the image data representing the target object in the scene, the skin applied to transform the first set of pixels to visually correspond to the theme of the one virtual environment; and presenting, by the virtual reality system, a combined scene, including both the scene of the one virtual environment and the transformed first pixels, on the head-mounted display.
  • 2. The method of claim 1, wherein presenting the combined scene on the head-mounted display comprises: rendering the first set of pixels of the image data representing the target object at a set of coordinates and virtual content of the one virtual environment rendered at all other locations of the scene.
  • 3. The method of claim 1, wherein the first set of pixels of the image data representing the target object includes one or more graphical overlays.
  • 4. The method of claim 1, wherein the target object is associated with at least one of a marker, a tag, a coloring, or a painting, and wherein the first set of pixels of the image data representing the target object is based on the at least one of the marker, the tag, the coloring, or the painting.
  • 5. The method of claim 4, wherein the target object is associated with a marker that emits a wireless signal, wherein the first set of pixels of the image data representing the target object includes a graphical overlay selected based on the wireless signal.
  • 6. The method of claim 1, wherein the image data for an image frame of a real-world environment is received from the imaging sensor installed at a head-mounted display worn by a user, and wherein the combined scene is presented to the user via the head-mounted display.
  • 7. The method of claim 1, wherein the virtual reality system includes preprogrammed logic, a predefined setting, and/or a default configuration defining the target object for object detection or recognition.
  • 8. A system comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the system to perform: receiving, by a virtual reality system, image data for an image frame of a real-world environment captured by an imaging sensor from a perspective of a user wearing a head-mounted display, the imaging sensor configured to capture a view of the real-world environment that the user would perceive if the user were not wearing the head-mounted display; identifying, by the virtual reality system, a first set of pixels of the image data representing a target object in the received image frame, the target object included in the real-world environment; generating, by the virtual reality system, a scene of one of a plurality of different virtual environments, the scene being from the perspective of the user wearing the head-mounted display, and the scene visually corresponding to a theme of the one virtual environment; overlaying, by the virtual reality system, the first set of pixels of the image data representing the target object in the received image frame onto the scene of the one virtual environment, the first set of pixels of the image data representing the target object located in the scene in a position corresponding to a position of the first set of pixels in the received image frame; applying a skin to at least a portion of the first set of pixels of the image data representing the target object in the scene, the skin applied to transform the first set of pixels to visually correspond to the theme of the one virtual environment; and presenting, by the virtual reality system, a combined scene, including both the scene of the one virtual environment and the transformed first pixels, on the head-mounted display.
  • 9. The system of claim 8, wherein presenting the combined scene on the head-mounted display comprises: rendering the first set of pixels of the image data representing the target object at a set of coordinates and virtual content of the one virtual environment rendered at all other locations of the scene.
  • 10. The system of claim 8, wherein the first set of pixels of the image data representing the target object includes one or more graphical overlays.
  • 11. The system of claim 8, wherein the target object is associated with at least one of a marker, a tag, a coloring, or a painting, and wherein the first set of pixels of the image data representing the target object is based on the at least one of the marker, the tag, the coloring, or the painting.
  • 12. The system of claim 11, wherein the target object is associated with a marker that emits a wireless signal, wherein the first set of pixels of the image data representing the target object includes a graphical overlay selected based on the wireless signal.
  • 13. The system of claim 8, wherein the image data for an image frame of a real-world environment is received from the imaging sensor installed at a head-mounted display worn by a user, and wherein the combined scene is presented to the user via the head-mounted display.
  • 14. The system of claim 8, wherein the virtual reality system includes preprogrammed logic, a predefined setting, and/or a default configuration defining the target object for object detection or recognition.
  • 15. A non-transitory computer-readable storage medium including instructions that, when executed by at least one processor of a computing system, cause the computing system to perform a method comprising: receiving, by a virtual reality system, image data for an image frame of a real-world environment captured by an imaging sensor from a perspective of a user wearing a head-mounted display, the imaging sensor configured to capture a view of the real-world environment that the user would perceive if the user were not wearing the head-mounted display; identifying, by the virtual reality system, a first set of pixels of the image data representing a target object in the received image frame, the target object included in the real-world environment; generating, by the virtual reality system, a scene of one of a plurality of different virtual environments, the scene being from the perspective of the user wearing the head-mounted display, and the scene visually corresponding to a theme of the one virtual environment; overlaying, by the virtual reality system, the first set of pixels of the image data representing the target object in the received image frame onto the scene of the one virtual environment, the first set of pixels of the image data representing the target object located in the scene in a position corresponding to a position of the first set of pixels in the received image frame; applying a skin to at least a portion of the first set of pixels of the image data representing the target object in the scene, the skin applied to transform the first set of pixels to visually correspond to the theme of the one virtual environment; and presenting, by the virtual reality system, a combined scene, including both the scene of the one virtual environment and the transformed first pixels, on the head-mounted display.
  • 16. The computer-readable storage medium of claim 15, wherein presenting the combined scene on the head-mounted display comprises: rendering the first set of pixels of the image data representing the target object at a set of coordinates and virtual content of the one virtual environment rendered at all other locations of the scene.
  • 17. The computer-readable storage medium of claim 15, wherein the first set of pixels of the image data representing the target object includes one or more graphical overlays.
  • 18. The computer-readable storage medium of claim 15, wherein the target object is associated with at least one of a marker, a tag, a coloring, or a painting, and wherein the first set of pixels of the image data representing the target object is based on the at least one of the marker, the tag, the coloring, or the painting.
  • 19. The computer-readable storage medium of claim 18, wherein the target object is associated with a marker that emits a wireless signal, wherein the first set of pixels of the image data representing the target object includes a graphical overlay selected based on the wireless signal.
  • 20. The computer-readable storage medium of claim 15, wherein the virtual reality system includes preprogrammed logic, a predefined setting, and/or a default configuration defining the target object for object detection or recognition.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of co-pending U.S. application Ser. No. 15/651,932, filed Jul. 17, 2017, which is incorporated by reference in its entirety.

International Search Report and Written Opinion for International Application No. PCT/US2021/064674, mailed Apr. 19, 2022, 13 pages.
International Search Report and Written Opinion for International Application No. PCT/US2022/046196, mailed Jan. 25, 2023, 11 pages.
International Search Report and Written Opinion for International Application No. PCT/US2023/020446, mailed Sep. 14, 2023, 14 pages.
International Search Report and Written Opinion for International Application No. PCT/US2023/033557, mailed Jan. 3, 2024, 12 pages.
Katz N., et al., “Extending Web Browsers with a Unity 3D-Based Virtual Worlds Viewer,” IEEE Computer Society, Sep./Oct. 2011, vol. 15 (5), pp. 15-21.
Mayer S., et al., “The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments,” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, Apr. 21-26, 2018, pp. 1-13.
Milborrow S., “Active Shape Models with Stasm,” [Retrieved on Sep. 20, 2022], 3 pages, Retrieved from the internet: URL: http://www.milbo.users.sonic.net/stasm/.
Milborrow S., et al., "Active Shape Models with SIFT Descriptors and MARS," Department of Electrical Engineering, 2014, 8 pages, Retrieved from the internet: URL: http://www.milbo.org/stasm-files/active-shape-models-with-sift-and-mars.pdf.
MRPT: “RANSAC C++ Examples,” 2014, 6 pages, Retrieved from the internet: URL: https://www.mrpt.org/tutorials/programming/maths-and-geometry/ransac-c-examples/.
Non-Final Office Action mailed Jul. 6, 2021 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 17 Pages.
Non-Final Office Action mailed Aug. 18, 2020 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 15 Pages.
Non-Final Office Action mailed Apr. 25, 2022 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 17 Pages.
Olwal A., et al., “The Flexible Pointer: An Interaction Technique for Selection in Augmented and Virtual Reality,” Proceedings of ACM Symposium on User Interface Software and Technology (UIST), Vancouver, BC, Nov. 2-5, 2003, pp. 81-82.
Qiao X., et al., "Web AR: A Promising Future for Mobile Augmented Reality—State of the Art, Challenges, and Insights," Proceedings of the IEEE, Apr. 2019, vol. 107 (4), pp. 651-666.
Renner P., et al., "Ray Casting," Central Facility Labs [Online], [Retrieved on Apr. 7, 2020], 2 pages, Retrieved from the Internet: URL: https://www.techfak.uni-bielefeld.de/~tpfeiffe/lehre/VirtualReality/interaction/ray_casting.html.
Response to Office Action mailed Nov. 18, 2020 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 9 pages.
Response to Office Action mailed Apr. 22, 2021 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 9 pages.
Response to Office Action mailed Mar. 22, 2021 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 9 pages.
Response to Office Action mailed Aug. 24, 2022 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 11 pages.
Response to Office Action mailed Oct. 6, 2021 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 11 pages.
Response to Office Action mailed Feb. 7, 2022 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 10 pages.
Response to Office Action mailed Mar. 7, 2022 for U.S. Appl. No. 16/720,699, filed Dec. 19, 2019, 12 pages.
Schweigert R., et al., "EyePointing: A Gaze-Based Selection Technique," Proceedings of Mensch und Computer, Hamburg, Germany, Sep. 8-11, 2019, pp. 719-723.
Srinivasa R.R., “Augmented Reality Adaptive Web Content,” 13th IEEE Annual Consumer Communications Networking Conference (CCNC), 2016, pp. 1-4.
Trademark Application Serial No. 73/289,805, filed Dec. 15, 1980, 1 page.
Trademark Application Serial No. 73/560,027, filed Sep. 25, 1985, 1 page.
Trademark Application Serial No. 74/155,000, filed Apr. 8, 1991, 1 page.
Trademark Application Serial No. 76/036,844, filed Apr. 28, 2000, 1 page.
Unity Gets Toolkit for Common AR/VR Interactions, Unity XR Interaction Toolkit Preview [Online], Dec. 19, 2019 [Retrieved on Apr. 7, 2020], 1 page, Retrieved from the Internet: URL: http://youtu.be/ZPhv4qmT9EQ.
Whitton M., et al., "Integrating Real and Virtual Objects in Virtual Environments," Aug. 24, 2007, Retrieved from http://web.archive.org/web/20070824035829/http://www.cs.unc.edu/~whitton/ExtendedCV/Papers/2005-HCII-Whitton-MixedEnvs.pdf, on May 3, 2017, 10 pages.
Wikipedia: "Canny Edge Detector," [Retrieved on Sep. 20, 2022], 10 pages, Retrieved from the internet: URL: https://en.wikipedia.org/wiki/Canny_edge_detector.
Wikipedia: "Iterative Closest Point," [Retrieved on Sep. 20, 2022], 3 pages, Retrieved from the internet: URL: https://en.wikipedia.org/wiki/Iterative_closest_point.
Continuations (1)
Number Date Country
Parent 15651932 Jul 2017 US
Child 17515316 US