This invention relates to an augmented reality viewer and an augmented reality visualization method.
Modern computing technology has advanced to the level of compact and persistently connected wearable computing systems and assemblies that may be utilized to provide a user with the perception of rich augmented reality experiences.
An augmented reality viewer usually has multiple sensors, including cameras positioned to sense a location of real-world objects. A storage device holds a set of data including a virtual object. A display module on the storage device is executable by a processor to determine a desired display of the virtual object relative to the location of the real-world objects. A data stream generator on the storage device is executable by the processor to generate a data stream based on the data and the desired display. A light generator, such as a laser light generator, is connected to the processor to receive the data stream and generate light based on the data stream. A display device is positioned to receive the light that is generated and to display the light to a user. The light creates a rendering of the virtual object visible to the user and rendered in accordance with the desired display.
The invention provides an augmented reality viewer including at least one sensor positioned to sense a location of at least one of a plurality of real-world objects, a storage device, a set of data on the storage device including a virtual object, a processor connected to the storage device, a display module on the storage device and executable by the processor to determine a desired display of the virtual object relative to the location of at least one of the real-world objects, a data stream generator on the storage device and executable by the processor to generate a data stream based on the data and the desired display, a light generator connected to the processor to receive the data stream and generate light based on the data stream, and a display device positioned to receive the light that is generated and to display the light to a user, wherein the light creates a rendering of the virtual object visible to the user and rendered in accordance with the desired display.
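By way of a non-limiting illustration, the following minimal sketch in Python outlines how the sensed location, the stored data, the display module, the data stream generator, and the light generator could interact; every class and function name here is hypothetical and is not taken from any actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

@dataclass
class VirtualObject:
    name: str

def sense_real_world_object_location() -> Pose:
    """Stand-in for the at least one sensor; returns the location of a real-world object."""
    return Pose(1.0, 0.0, 2.0)

def determine_desired_display(obj: VirtualObject, anchor: Pose) -> Pose:
    """Display module: place the virtual object relative to the sensed location."""
    return Pose(anchor.x, anchor.y + 0.5, anchor.z)  # e.g. half a meter above the anchor

def generate_data_stream(obj: VirtualObject, display_pose: Pose) -> bytes:
    """Data stream generator: encode the object and its desired display."""
    return f"{obj.name}@{display_pose}".encode()

def generate_light(data_stream: bytes) -> str:
    """Stand-in for the light generator and display device producing the rendering."""
    return f"render<{data_stream.decode()}>"

anchor = sense_real_world_object_location()
fish = VirtualObject("fish")
print(generate_light(generate_data_stream(fish, determine_desired_display(fish, anchor))))
```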
The augmented reality viewer may further include that the virtual object is a nomadic life object, further including a nomadic subroutine on the storage device and executable by the processor to move the nomadic life object relative to the real-world objects.
The augmented reality viewer may further include that the data includes a plurality of nomadic life objects, wherein the nomadic subroutine is executable by the processor to move the plurality of nomadic life objects relative to the real-world objects and relative to one another.
The augmented reality viewer may further include that the at least one sensor senses a wave movement initiated by the user, further including a wave movement routine on the storage device and executable by the processor to move the nomadic life object in response to the wave movement that is sensed by the at least one sensor.
The augmented reality viewer may further include that the wave movement is initiated in a target zone and the nomadic object is moved out of the target zone.
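One possible, purely illustrative reading of the wave movement routine is a radial push that drives a nomadic life object out of the target zone in which the wave was sensed; the sketch below assumes a spherical target zone, and all names and the 10% overshoot are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def distance(a: Vec3, b: Vec3) -> float:
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def move_out_of_target_zone(object_pos: Vec3, zone_center: Vec3, zone_radius: float) -> Vec3:
    """Wave movement routine: if the nomadic life object is inside the zone in which
    the wave was sensed, push it radially outward just past the zone boundary."""
    d = distance(object_pos, zone_center)
    if d >= zone_radius:
        return object_pos  # already outside the target zone
    if d == 0.0:
        return Vec3(zone_center.x + zone_radius * 1.1, zone_center.y, zone_center.z)
    scale = (zone_radius * 1.1) / d  # push 10% beyond the zone boundary
    return Vec3(zone_center.x + (object_pos.x - zone_center.x) * scale,
                zone_center.y + (object_pos.y - zone_center.y) * scale,
                zone_center.z + (object_pos.z - zone_center.z) * scale)

print(move_out_of_target_zone(Vec3(0.2, 0.0, 0.0), Vec3(0.0, 0.0, 0.0), 1.0))
```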
The augmented reality viewer may further include that the at least one sensor sensing the wave movement is a camera that detects an image of a hand of the user.
The augmented reality viewer may further include a handheld controller, wherein the sensor is mounted to a handheld controller that is held in a hand of the user.
The augmented reality viewer may further include a retrieval agent on the storage device and executable with the processor to retrieve incoming information from a resource, associate the incoming information with a location of the nomadic life object relative to the real-world objects, and communicate the incoming information to the user from the location of the nomadic life object.
The augmented reality viewer may further include that the retrieval agent activates the light generator to display the incoming information through the display device.
The augmented reality viewer may further include a speaker, wherein the retrieval agent activates the speaker so that the user hears the incoming information.
The augmented reality viewer may further include a transmission agent on the storage device and executable with the processor to sense, with the at least one sensor, an instruction by the user, sense, with the at least one sensor, that the instruction is directed by the user to the nomadic life object at a location of the nomadic life object relative to the real-world objects, determine, based on the instruction, an outgoing communication and a resource, and communicate the outgoing communication to the resource.
The augmented reality viewer may further include that the at least one sensor that senses the communication is a microphone suitable to receive a voice instruction from the user.
The augmented reality viewer may further include a transmission agent on the storage device and executable with the processor to sense, with the at least one sensor, an instruction by the user, sense, with the at least one sensor, that the instruction is directed by the user to the nomadic life object at a location of the nomadic life object relative to the real-world objects, determine, based on the instruction, an outgoing communication and an IOT device, and communicate the outgoing communication to the IOT device to operate the IOT device.
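The transmission-agent flow for an IOT device could, purely as an illustration, look like the following sketch; the intent parser, the device names, and the send_to_iot_device helper are all hypothetical stand-ins rather than any actual API.

```python
def parse_instruction(utterance: str) -> tuple[str, str] | None:
    """Very small intent parser: returns (device, command) for a few example phrases."""
    text = utterance.lower()
    if "lights" in text:
        return ("living_room_lights", "off" if "off" in text else "on")
    if "thermostat" in text:
        return ("thermostat", "set_21c")
    return None

def send_to_iot_device(device: str, command: str) -> None:
    """Stand-in for communicating the outgoing communication to the IOT device."""
    print(f"-> {device}: {command}")

def transmission_agent(utterance: str, directed_at_nomadic_object: bool) -> None:
    # Only treat the utterance as an instruction if it is directed at the nomadic
    # life object (e.g. the user is looking at it at its location while speaking).
    if not directed_at_nomadic_object:
        return
    parsed = parse_instruction(utterance)
    if parsed:
        send_to_iot_device(*parsed)

transmission_agent("please turn the lights off", directed_at_nomadic_object=True)
```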
The augmented reality viewer may further include that the at least one sensor that senses the communication is a microphone suitable to receive a voice instruction from the user.
The augmented reality viewer may further include an artificial intelligence system on the storage device and executable with the processor to sense an action by the user using the at least one sensor, perform a routine involving the virtual object that is responsive to the action of the user, associate the routine with the action as an artificial intelligence cluster, determine a parameter that exists at a first time when the action is sensed, associate the parameter that exists at the first time with the artificial intelligence cluster, sense a parameter at a second time, determine whether the parameter at the second time is the same as the parameter at the first time, and if the determination is made that the parameter at the second time is the same as the parameter at the first time then executing the routine.
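A minimal, non-authoritative sketch of such an artificial intelligence cluster follows; it assumes the parameter can be reduced to a single comparable value (for example a discretized gaze direction), and all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AICluster:
    """Associates a routine with the action that triggered it and with the
    parameter that existed at the first time the action was sensed."""
    action: str
    routine: Callable[[], None]
    parameter_at_first_time: str

clusters: list[AICluster] = []

def on_action_sensed(action: str, routine: Callable[[], None], parameter_now: str) -> None:
    routine()  # perform the routine responsive to the user's action
    clusters.append(AICluster(action, routine, parameter_now))  # remember the context

def on_parameter_sensed(parameter_now: str) -> None:
    # At a later (second) time, re-run any routine whose stored parameter matches.
    for cluster in clusters:
        if cluster.parameter_at_first_time == parameter_now:
            cluster.routine()

on_action_sensed("wave", lambda: print("move fish away"), parameter_now="gaze:ceiling")
on_parameter_sensed("gaze:ceiling")  # same gaze direction later -> routine runs again
```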
The augmented reality viewer may further include that the at least one sensor includes an eye tracking camera, wherein the action is a wave motion initiated by the user and the parameters are a gaze direction as determined by eye tracking of the user using the eye tracking camera.
The augmented reality viewer may further include that the action is a wave movement initiated in a target zone and the nomadic object is moved out of the target zone.
The augmented reality viewer may further include that the at least one sensor includes an eye tracking camera positioned to sense a gaze direction of the user, further including a gaze direction movement routine on the storage device and executable by the processor to move the nomadic life object in response to the gaze direction that is sensed by the eye tracking camera.
The augmented reality viewer may further include that there are a plurality of nomadic life objects and the nomadic subroutine is executable by the processor to move the plurality of nomadic life objects relative to the real-world objects, further including a personal assistant module on the storage device and executable with the processor to select a personal assistant nomadic life object among the plurality of nomadic life objects, and move at least one of the nomadic life objects other than the personal assistant nomadic life object with the personal assistant nomadic life object.
The augmented reality viewer may further include that the nomadic life object is a fish of a first type, further including a movement module on the storage device and executable with the processor to articulate a body of the fish in a first back-and-forth manner.
The augmented reality viewer may further include that the movement module is executable with the processor to articulate a body of a fish of a second type in a second back-and-forth manner that is different from the first back-and-forth manner.
The augmented reality viewer may further include that the movement module is executable with the processor to sense, with the at least one sensor, a slow hand movement of a hand of the user, articulate the body of the fish in the first back-and-forth manner at a low speed in response to the slow speed of the hand movement, sense, with the at least one sensor, a fast hand movement of the hand of the user, and articulate the body of the fish in the first back-and-forth manner at a high speed in response to the fast speed of the hand movement.
The augmented reality viewer may further include that the movement module is executable with the processor to move the first fish to stay close to the hand when the hand moves at the slow speed, and move the first fish to flee the hand when the hand moves at the fast speed.
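As one hedged illustration of the hand-speed-dependent behavior described above, the sketch below drives the back-and-forth articulation frequency and the approach-or-flee decision from a single hand-speed value; the threshold, frequencies, and amplitude are invented for the example.

```python
import math

SLOW_THRESHOLD = 0.3  # m/s, hypothetical boundary between "slow" and "fast" hand movement

def tail_angle(time_s: float, hand_speed: float) -> float:
    """Articulate the fish body back and forth; oscillation frequency follows hand speed."""
    frequency = 1.0 if hand_speed < SLOW_THRESHOLD else 4.0  # Hz, low vs high speed
    return 30.0 * math.sin(2 * math.pi * frequency * time_s)  # degrees of tail deflection

def swim_direction(fish_to_hand: float, hand_speed: float) -> float:
    """Stay close to a slow hand, flee a fast hand (1D sketch: sign of motion)."""
    toward_hand = 1.0 if fish_to_hand > 0 else -1.0
    return toward_hand if hand_speed < SLOW_THRESHOLD else -toward_hand

print(tail_angle(0.1, hand_speed=0.1), swim_direction(0.5, hand_speed=0.1))
print(tail_angle(0.1, hand_speed=1.0), swim_direction(0.5, hand_speed=1.0))
```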
The augmented reality viewer may further include a surface extraction routine on the storage device and executable with the processor to identify a surface among the real-world objects.
The augmented reality viewer may further include that the surface is a two-dimensional surface of a wall or a ceiling.
The augmented reality viewer may further include a depth creation module on the storage device and executable with the processor to display, according to the desired display, in three-dimensional space, the virtual object to the user on a side of the surface opposing the user and with the surface between the user and the virtual object.
The augmented reality viewer may further include that the depth creation module is executable with the processor to display a porthole in the surface to the user through which the virtual object is visible to the user.
The augmented reality viewer may further include that the virtual object is a three-dimensional virtual object.
The augmented reality viewer may further include a vista placement routine on the storage device and executable with the processor to capture a space that includes the real-world objects, represent the space as a real-world mesh, collect vertical and horizontal planes from the real-world mesh, filter the planes by height from a floor, dimensions, orientation, and location relative to the real-world mesh, spawn a blueprint which includes a portal frame and all the content in the vista at the selection location, and cut a hole in an occlusion material of the real-world mesh material so the user can see through the portal into the vista.
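The plane-filtering step of such a vista placement routine might, as a non-limiting sketch, look like the following; the plane attributes and thresholds are hypothetical, and the subsequent steps (spawning the blueprint and cutting the occlusion hole) are not shown.

```python
from dataclasses import dataclass

@dataclass
class Plane:
    height_from_floor: float  # meters
    width: float              # meters
    height: float             # meters
    is_vertical: bool

def filter_planes_for_vista(planes: list[Plane]) -> list[Plane]:
    """Keep only vertical planes large enough for a portal frame and at a comfortable height."""
    candidates = []
    for p in planes:
        if not p.is_vertical:
            continue
        if p.width < 1.0 or p.height < 1.0:          # portal frame must fit
            continue
        if not (0.5 <= p.height_from_floor <= 2.0):  # roughly eye level
            continue
        candidates.append(p)
    return candidates

walls = [Plane(1.2, 2.5, 2.0, True), Plane(0.0, 4.0, 4.0, False), Plane(1.0, 0.6, 0.6, True)]
print(filter_planes_for_vista(walls))  # only the first wall qualifies
```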
The augmented reality viewer may further include a vertex animation routine on the storage device and executable with the processor to store a virtual object mesh representing the virtual object, associate a texture with the virtual object mesh, and manipulate the virtual object mesh to cause movement of the texture and the virtual object in a view of the user.
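A simple, purely illustrative form of such vertex animation is a travelling sine wave applied to the virtual object mesh; because the texture is mapped to the mesh, displacing the vertices also moves the texture in the user's view. The mesh format and wave parameters below are hypothetical.

```python
import math

def animate_vertices(vertices: list[tuple[float, float, float]],
                     time_s: float) -> list[tuple[float, float, float]]:
    """Displace each vertex with a travelling sine wave; moving the mesh vertices
    also moves the texture bound to the mesh in the user's view."""
    animated = []
    for x, y, z in vertices:
        offset = 0.05 * math.sin(2 * math.pi * (x + time_s))  # wave travels along x
        animated.append((x, y + offset, z))
    return animated

quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(animate_vertices(quad, time_s=0.25))
```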
The augmented reality viewer may further include that the virtual object mesh is manipulated to articulate the virtual object.
The augmented reality viewer may further include that the same virtual object mesh is used multiple times to cause movement of the texture and the virtual object.
The augmented reality viewer may further include that the virtual object is a coral cluster.
The augmented reality viewer may further include a coral cluster spawner on the storage device and executable with the processor to determine a volume, perform a line trace at random points within the volume from a maximum height of the volume to a floor of the volume, determine whether a valid location is identified by the line trace, if a valid location is identified then, in response to the identification, perform a box trace to test if a random cluster will fit without overlapping a world mesh while attempting different scales and rotations and generating a score for each placement, determine a select placement with a highest score among scores, and spawn the coral cluster to the placement with the highest score.
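A non-authoritative sketch of this placement loop follows; the line trace and box trace are replaced by stand-in functions, and the scoring rule (here simply favoring larger scales) is invented for the example.

```python
import random

def line_trace_hits_floor(x: float, y: float) -> bool:
    """Stand-in for a line trace from the maximum height of the volume down to its floor."""
    return random.random() > 0.3  # pretend 70% of random points land on a reachable floor

def box_trace_fits(x: float, y: float, scale: float, rotation: float) -> bool:
    """Stand-in for a box trace testing that the cluster does not overlap the world mesh."""
    return random.random() > 0.5

def score_placement(scale: float, rotation: float) -> float:
    """Hypothetical score: prefer larger clusters (any scoring rule could be used)."""
    return scale

def spawn_coral_cluster(volume: tuple[float, float], attempts: int = 20):
    best = None
    for _ in range(attempts):
        x, y = random.uniform(0, volume[0]), random.uniform(0, volume[1])
        if not line_trace_hits_floor(x, y):
            continue  # no valid location at this random point
        for scale in (0.5, 1.0, 1.5):
            for rotation in (0, 90, 180, 270):
                if box_trace_fits(x, y, scale, rotation):
                    s = score_placement(scale, rotation)
                    if best is None or s > best[0]:
                        best = (s, x, y, scale, rotation)
    return best  # placement with the highest score, or None if nothing fits

print(spawn_coral_cluster((5.0, 4.0)))
```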
The augmented reality viewer may further include a vista placement routine on the storage device and executable with the processor to place a vista, wherein the volume is bound by the vista.
The augmented reality viewer may further include a coral spawner system on the storage device and executable with the processor to store at least a first coral element of a first type on the storage device, and construct the coral cluster from a plurality of coral elements including the first coral element.
The augmented reality viewer may further include that the coral spawner system is executable with the processor to construct the coral cluster from a plurality of first coral elements.
The augmented reality viewer may further include that the coral spawner system is executable with the processor to store at least a second coral element of a second type on the storage device, wherein the plurality of coral elements includes the second coral element.
The augmented reality viewer may further include that the coral spawner system is executable with the processor to determine a coral cluster setting, wherein the processor constructs the coral cluster according to the setting.
The augmented reality viewer may further include that the setting is available space that is detected, and a number of the coral elements is selected based on the available space.
The augmented reality viewer may further include that the coral spawner system is executable with the processor to simulate ambient light, wherein the setting is the ambient light, wherein a number of coral elements is selected based on the ambient light, wherein orientations of the coral elements are selected based on the ambient light.
The augmented reality viewer may further include that the coral spawner system includes a data table on the storage device with a plurality of coral cluster settings, wherein the coral spawner system constructs the coral cluster according to the plurality of coral cluster settings.
The augmented reality viewer may further include that the coral cluster settings include at least one of population, species max counts, spawn type, and height-based percentages.
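Such a settings table could, as a hedged illustration, be represented and consumed as follows; only the keys mirror the settings named above, and the values and the construction rule are invented.

```python
# Hypothetical coral cluster settings table; only the keys mirror the settings
# named above (population, species max counts, spawn type, height-based percentages).
coral_cluster_settings = {
    "population": 12,
    "species_max_counts": {"table_coral": 5, "fan_coral": 4, "tube_coral": 3},
    "spawn_type": "floor",
    "height_based_percentages": {0.0: 0.6, 1.0: 0.3, 2.0: 0.1},
}

def build_cluster(settings: dict) -> list[str]:
    """Construct a cluster: draw elements per species until the population is reached,
    respecting the per-species maximum counts."""
    cluster: list[str] = []
    for species, max_count in settings["species_max_counts"].items():
        remaining = settings["population"] - len(cluster)
        cluster.extend([species] * min(max_count, max(remaining, 0)))
    return cluster

print(build_cluster(coral_cluster_settings))
```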
The augmented reality viewer may further include that the coral spawner system includes a vertex crawling and raycast algorithm to check for placement viability, growth, and caching of valid points to file.
The augmented reality viewer may further include that the coral spawner system includes a run-time coral static mesh loop that first calculates a shadow pass and then creates instanced static meshes of all coral clusters.
The augmented reality viewer may further include that the coral spawner system includes collision-based exclusion configuration to place box colliders where certain species should not grow.
The augmented reality viewer may further include that the display device displays the light that is generated to a user while the user views at least one of the real-world objects.
The augmented reality viewer may further include that the display device is a see-through display device that allows light from the at least one real-world object to reach an eye of the user.
The augmented reality viewer may further include a head-worn structure shaped to be worn on a head of the user, wherein the display device is mounted to the head-worn structure and the at least one sensor is of a kind that is suitable to sense movement of the display device due to movement of a head of the user, and a position adjustment module, executable by the processor, to adjust a location of the virtual object so that, within a view of the user, the virtual object remains stationary relative to the at least one real-world object.
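A minimal sketch of such a position adjustment module, reduced to a planar case for clarity, is shown below: re-expressing a world-anchored point in display coordinates every frame, using the newly sensed head pose, keeps the virtual object visually stationary relative to the real-world object. All names and values are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float            # head position in world coordinates (meters)
    y: float
    heading_deg: float  # head rotation about the vertical axis (degrees)

def world_to_display(world_point: tuple[float, float], head: Pose2D) -> tuple[float, float]:
    """Re-express a world-anchored point in display (head) coordinates.  Re-running
    this every frame with the newly sensed head pose keeps the virtual object
    visually locked to the real-world object even as the head-worn display moves."""
    dx, dy = world_point[0] - head.x, world_point[1] - head.y
    c = math.cos(math.radians(-head.heading_deg))
    s = math.sin(math.radians(-head.heading_deg))
    return (c * dx - s * dy, s * dx + c * dy)

anchor = (2.0, 0.0)  # virtual object anchored 2 m in front of the starting head pose
print(world_to_display(anchor, Pose2D(0.0, 0.0, 0.0)))   # before the head moves
print(world_to_display(anchor, Pose2D(0.5, 0.0, 10.0)))  # after the head moves
```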
The invention also provides an augmented reality visualization method including sensing, with at least one sensor, a location of at least one of a plurality of real-world objects, storing, on a storage device, data including a virtual object, determining, with a processor, a desired display of the virtual object relative to the location of at least one of the real-world objects, generating, by the processor, a data stream based on the data and the desired display, generating, with a light generator, light based on the data stream, and displaying, with a display device, the light that is generated to a user, wherein the light creates a rendering of the virtual object visible to the user and rendered in accordance with the desired display.
The method may further include that the virtual object is a nomadic life object, further including moving, with the processor, the nomadic life object relative to the real-world objects.
The method may further include that the data includes a plurality of nomadic life objects, further including moving, with the processor, the plurality of nomadic life objects relative to the real-world objects and relative to one another.
The method may further include sensing, by the at least one sensor, a wave movement initiated by the user, and moving, by the processor, the nomadic life object in response to the wave movement that is sensed by the at least one sensor.
The method may further include that the wave movement is initiated in a target zone and the nomadic object is moved out of the target zone.
The method may further include that the at least one sensor sensing the wave movement is a camera that detects an image of a hand of the user.
The method may further include that the sensor is mounted to a handheld controller that is held in a hand of the user.
The method may further include retrieving, with the processor, incoming information from a resource, associating, with the processor, the incoming information with a location of the nomadic life object relative to the real-world objects, and communicating, with the processor, the incoming information to the user from the location of the nomadic life object.
The method may further include that the incoming information is displayed to the user.
The method may further include that the user hears the incoming information.
The method may further include sensing, with the at least one sensor, an instruction by the user, sensing, with the at least one sensor, that the instruction is directed by the user to the nomadic life object at a location of the nomadic life object relative to the real-world objects, determining, with the processor, based on the instruction, an outgoing communication and a resource, and communicating, with the processor, the outgoing communication to the resource.
The method may further include that the user speaks the instruction.
The method may further include sensing, with the at least one sensor, an instruction by the user, sensing, with the at least one sensor, that the instruction is directed by the user to the nomadic life object at a location of the nomadic life object relative to the real-world objects, determining, with the processor, based on the instruction, an outgoing communication and an IOT device, and communicating, with the processor, the outgoing communication to the IOT device to operate the IOT device.
The method may further include that the user speaks the instruction.
The method may further include sensing, with the at least one sensor, an action by the user, performing, with the processor, a routine involving the virtual object that is responsive to the action of the user, associating, with the processor, the routine with the action as an artificial intelligence cluster, determining, with the at least one processor, a parameter that exists at a first time when the action is sensed, associating, with the processor, the parameter that exists at the first time with the artificial intelligence cluster, sensing, with the at least one sensor, a parameter at a second time, determining, with the processor, whether the parameter at the second time is the same as the parameter at the first time, and if the determination is made that the parameter at the second time is the same as the parameter at the first time then executing the routine.
The method may further include that the action is a wave motion initiated by the user and the parameters are a gaze direction of the user.
The method may further include that the action is a wave movement initiated in a target zone and the nomadic object is moved out of the target zone.
The method may further include sensing, by the at least one sensor, a gaze direction of the user, and moving, by the processor, the nomadic life object in response to the gaze direction.
The method may further include moving, with the processor, a plurality of nomadic life objects relative to the real-world objects, selecting, with the processor, a personal assistant nomadic life object among the plurality of nomadic life objects, and moving, with the processor, at least one of the nomadic life objects other than the personal assistant nomadic life object with the personal assistant nomadic life object.
The method may further include that the nomadic life object is a fish of a first type, further including articulating, with the processor, a body of the fish in a first back-and-forth manner.
The method may further include articulating, with the processor, a body of a fish of a second type in a second back-and-forth manner that is different from the first back-and-forth manner.
The method may further include sensing, with the at least one sensor, a slow hand movement of a hand of the user, and articulating, with the processor, the body of the fish in the first back-and-forth manner at a low speed in response to the slow speed of the hand movement, sensing, with the at least one sensor, a fast hand movement of the hand of the user, and articulating, with the processor, the body of the fish in the first back-and-forth manner at a high speed in response to the fast speed of the hand movement.
The method may further include moving, with the processor, the first fish to stay close to the hand when the hand moves at the slow speed, and moving, with the processor, the first fish to flee the hand when the hand moves at the fast speed.
The method may further include identifying, with the processor, a surface among the real-world objects.
The method may further include that the surface is a two-dimensional surface of a wall or a ceiling.
The method may further include that the processor, according to the desired display, displays, in three-dimensional space, the virtual object to the user on a side of the surface opposing the user and with the surface between the user and the virtual object.
The method may further include that the processor displays a porthole in the surface to the user through which the virtual object is visible to the user.
The method may further include that the virtual object is a three-dimensional virtual object.
The method may further include capturing, with the processor, a space that includes the real-world objects, representing, with the processor, the space as a real-world mesh, collecting, with the processor, vertical and horizontal planes from the real-world mesh, filtering, with the processor, the planes by height from a floor, dimensions, orientation, and location relative to the real-world mesh, spawning, with the processor, a blueprint which includes a portal frame and all the content in the vista at the selection location, and cutting, with the processor, a hole in an occlusion material of the real-world mesh material so the user can see through the portal into the vista.
The method may further include storing, with the processor, a virtual object mesh representing the virtual object, associating, with the processor, a texture with the virtual object mesh, and manipulating, with the processor, the virtual object mesh to cause movement of the texture and the virtual object in a view of the user.
The method may further include that the virtual object mesh is manipulated to articulate the virtual object.
The method may further include that the same virtual object mesh is used multiple times to cause movement of the texture and the virtual object.
The method may further include that the virtual object is a coral cluster.
The method may further include placing, with the processor, the coral cluster including determining a volume, performing a line trace at random points within the volume from a maximum height of the volume to a floor of the volume, determining whether a valid location is identified by the line trace, if a valid location is identified then, in response to the identification, performing a box trace to test if a random cluster will fit without overlapping a world mesh while attempting different scales and rotations and generating a score for each placement, determining a select placement with a highest score among scores, and spawning the coral cluster to the placement with the highest score.
The method may further include placing a vista, wherein the volume is bound by the vista.
The method may further include storing at least a first coral element of a first type on the storage device, and constructing, with the processor, the coral cluster from a plurality of coral elements including the first coral element.
The method may further include that the processor constructs the coral cluster from a plurality of first coral elements.
The method may further include storing at least a second coral element of a second type on the storage device, wherein the plurality of coral elements includes the second coral element.
The method may further include determining, with the processor, a coral cluster setting, wherein the processor constructs the coral cluster according to the setting.
The method may further include that the setting is available space that is detected, and a number of the coral elements is selected based on the available space.
The method may further include simulating, with the processor, ambient light, wherein the setting is the ambient light, wherein a number of coral elements is selected based on the ambient light, and wherein orientations of the coral elements are selected based on the ambient light.
The method may further include storing a data table on the storage device with a plurality of coral cluster settings, wherein the processor constructs the coral cluster according to the plurality of coral cluster settings.
The method may further include that the coral cluster settings include at least one of population, species max counts, spawn type, and height-based percentages.
The method may further include executing, with the processor, a vertex crawling and raycast algorithm to check for placement viability, growth, and caching of valid points to file.
The method may further include executing, with the processor, a run-time coral static mesh loop that first calculates a shadow pass and then creates instanced static meshes of all coral clusters.
The method may further include executing, with the processor, collision-based exclusion configuration to place box colliders where certain species should not grow.
The method may further include that the display device displays the light that is generated to a user while the user views at least one of the real-world objects.
The method may further include that the display device is a see-through display device that allows light from the at least one real-world object to reach an eye of the user.
The method may further include sensing, with the at least one sensor, movement of the display device due to movement of a head of the user, and adjusting a location of the virtual object so that, within a view of the user, the virtual object remains stationary relative to the at least one real-world object.
The invention is further described by way of examples in the following drawings wherein:
The augmented reality viewer 2 includes a head-worn structure 25 that can be worn on a head of a user. The augmented reality viewer 2 and the controller component 6 each have a processor and a storage device connected to the processor. Data and executable code are stored on the storage devices and are executable with the processors. The augmented reality viewer 2 includes a projector that serves as a light generator. The processor of the augmented reality viewer 2 sends instructions to the projector and the projector generates light, typically laser light, that is transmitted through the display to eyes of the user.
The display device may display the light that is generated to a user while the user views at least one of the real-world objects. The display device may be a see-through display device that allows light from the at least one real-world object to reach an eye of the user.
The augmented reality viewer may include a head-worn structure shaped to be worn on a head of the user, wherein the display device is mounted to the head-worn structure and the at least one sensor is of a kind that is suitable to sense movement of the display device due to movement of a head of the user, and a position adjustment module, executable by the processor, to adjust a location of the virtual object so that, within a view of the user, the virtual object remains stationary relative to the at least one real-world object.
The following definitions will assist in an understanding of the various terms used herein:
1. virtual objects
1.1. life-like objects (objects that migrate nomadically, that display articulating features, or have surface textures that move)
1.2. inanimate objects (objects that do not migrate nomadically, that do not display articulating features, and do not have surface textures that move)
2. real-world objects
2.1. walls
2.2. ceilings
2.3. furniture
Referring to
At 36, the spatial computing system may be configured to run or operate software, such as that available from Magic Leap, Inc., of Plantation, Fla., under the tradename Undersea (TM). The software may be configured to utilize the map or mesh of the room to assist the user in selecting (such as with a handheld controller 4 of
At 38, the system may be configured to operate the software to present the aquatic environment to the user. The aquatic environment is preferably presented to the user in full color and three dimensions, so that the user perceives the environment around the user to be that of an aquatic environment such as a fish tank, which in the case of a “vista”, extends not only around the user's immediate environment/room, but also through the virtual connection framing or porting and into the extended vista presentation outside of the user's immediate environment/room. Time domain features (i.e., features that change position and/or geometry with time) may be configured such that the elements simulate natural movement (i.e., such as slow aquatic movement of virtually-presented aquatic plants, slow growth of virtually-presented aquatic coral elements, and/or creation, movement, or growth of virtually-presented fish or schools or groups thereof). The system may be configured to propose suitable locations and sizes for a possible portal or a framing location for a “vista”, such as locations and sizes at the center of a vertical wall within the actual room occupied by the user that would give the user a broad view into the virtual “vista” extension of the virtual aquatic environment. In
In
The at least one sensor may sense a wave movement initiated by the user, and the augmented reality viewer may further include a wave movement routine on the storage device and executable by the processor to move the nomadic life object in response to the wave movement that is sensed by the at least one sensor. The wave movement may be initiated in a target zone and the nomadic object may be moved out of the target zone. The at least one sensor sensing the wave movement may be a camera that detects an image of a hand of the user.
The augmented reality viewer may further include a handheld controller, wherein the sensor is mounted to a handheld controller that is held in a hand of the user.
Referring to
Referring to
Referring to
Referring to
The augmented reality viewer may include a depth creation module on the storage device and executable with the processor to display, according to the desired display, in three-dimensional space, the virtual object to the user on a side of the surface opposing the user and with the surface between the user and the virtual object. The depth creation module may be executable with the processor to display a porthole in the surface to the user through which the virtual object is visible to the user. The virtual object may be a three-dimensional virtual object.
Referring to
Referring back to
The augmented reality viewer may thus include a transmission agent on the storage device and executable with the processor to sense, with the at least one sensor, an instruction by the user, sense, with the at least one sensor, that the instruction is directed by the user to the nomadic life object at a location of the nomadic life object relative to the real-world objects, determine, based on the instruction, an outgoing communication and a resource, and communicate the outgoing communication to the resource.
The at least one sensor that senses the communication may be a microphone suitable to receive a voice instruction from the user.
Referring to
The augmented reality viewer may thus include a transmission agent on the storage device and executable with the processor to sense, with the at least one sensor, an instruction by the user, sense, with the at least one sensor, that the instruction is directed by the user to the nomadic life object at a location of the nomadic life object relative to the real-world objects, determine, based on the instruction, an outgoing communication and an IOT device, and communicate the outgoing communication to the IOT device to operate the IOT device.
The at least one sensor that senses the communication may be a microphone suitable to receive a voice instruction from the user.
Referring to
The augmented reality viewer may include an artificial intelligence system on the storage device and executable with the processor to sense an action by the user using the at least one sensor, perform a routine involving the virtual object that is responsive to the action of the user, associate the routine with the action as an artificial intelligence cluster, determine a parameter that exists at a first time when the action is sensed, associate the parameter that exists at the first time with the artificial intelligence cluster, sense a parameter at a second time, determine whether the parameter at the second time is the same as the parameter at the first time, and, if the determination is made that the parameter at the second time is the same as the parameter at the first time then executing the routine.
The at least one sensor may include an eye tracking camera, wherein the action is a wave motion initiated by the user and the parameters are a gaze direction as determined by eye tracking of the user using the eye tracking camera. The action may be a wave movement initiated in a target zone and the nomadic object is moved out of the target zone.
The at least one sensor may include an eye tracking camera positioned to sense a gaze direction of the user, and the augmented reality viewer may further include a gaze direction movement routine on the storage device and executable by the processor to move the nomadic life object in response to the gaze direction that is sensed by the eye tracking camera.
There may be a plurality of nomadic life objects and the nomadic subroutine may be executable by the processor to move the plurality of nomadic life objects relative to the real-world objects, and the augmented reality viewer may further include a personal assistant module on the storage device and executable with the processor to select a personal assistant nomadic life object among the plurality of nomadic life objects, and move at least one of the nomadic life objects other than the personal assistant nomadic life object with the personal assistant nomadic life object.
Referring to
The nomadic life object may be a fish of a first type, and the augmented reality viewer may further include a movement module on the storage device and executable with the processor to articulate a body of the fish in a first back-and-forth manner. The movement module is executable with the processor to articulate a body of a fish of a second type in a second back-and-forth manner that is different from the first back-and-forth manner. The movement module may be executable with the processor to sense, with the at least one sensor, a slow hand movement of a hand of the user, and articulate the body of the fish in the first back-and-forth manner at a low speed in response to the slow speed of the hand movement, sense, with the at least one sensor, a fast hand movement of the hand of the user, and articulate the body of the fish in the first back-and-forth manner at a high speed in response to the fast speed of the hand movement. The movement module may be executable with the processor to move the first fish to stay close to the hand when the hand moves at the slow speed, and move the first fish to flee the hand when the hand moves at the fast speed.
Referring to
Referring to
Referring to
The augmented reality viewer may include a vertex animation routine on the storage device and executable with the processor to store a virtual object mesh representing the virtual object, associate a texture with the virtual object mesh, and manipulate the virtual object mesh to cause movement of the texture and the virtual object in a view of the user. The virtual object mesh may be manipulated to articulate the virtual object. The same virtual object mesh may be used multiple times to cause movement of the texture and the virtual object.
Referring to
Referring to
Referring to
Referring to
Referring to
The augmented reality viewer may include a surface extraction routine on the storage device and executable with the processor to identify a surface among the real-world objects. The surface may be a two-dimensional surface of a wall or a ceiling.
The augmented reality viewer may include a vista placement routine on the storage device and executable with the processor to capture a space that includes the real-world objects, represent the space as a real-world mesh, collect vertical and horizontal planes from the real-world mesh, filter the planes by height from a floor, dimensions, orientation, and location relative to the real-world mesh, spawn a blueprint which includes a portal frame and all the content in the vista at the selection location, and cut a hole in an occlusion material of the real-world mesh material so the user can see through the portal into the vista.
Referring to
The virtual object may thus be a coral cluster. The augmented reality viewer may include a coral cluster spawner on the storage device and executable with the processor to determine a volume, perform a line trace at random points within the volume from a maximum height of the volume to a floor of the volume, determine whether a valid location is identified by the line trace, if a valid location is identified then, in response to the identification, perform a box trace to test if a random cluster will fit without overlapping a world mesh while attempting different scales and rotations and generating a score for each placement, determine a select placement with a highest score among scores, and spawn the coral cluster to the placement with the highest score. The augmented reality viewer may thus include a vista placement routine on the storage device and executable with the processor to place a vista, wherein the volume is bound by the vista.
Referring to
Referring to
One or more users may be able to share their virtual environments with one or more other users, such that a plurality of users experience the same virtual environment features from their own viewing perspectives, by virtue of multi-location “passable world” types of configurations, as described in the aforementioned incorporated applications. For example, if a user located in an office in New York has virtual aquarium features displayed around him in his office, and if another user from San Francisco is virtually brought into that New York office and virtual world, then the user from San Francisco preferably is able to see the virtual aquarium features from that San Francisco user's virtual position/orientation within the New York room.
Referring to
Referring to
Referring to
Referring to
A system for dynamically spawning and placing corals was developed using blueprints. Although blueprints were initially considered more suitable for early prototyping and gameplay proofs of concept, our technical art team garnered important functionality from them. Referring to
Referring to
The augmented reality viewer may include a coral spawner system on the storage device and executable with the processor to store at least a first coral element of a first type on the storage device, and construct the coral cluster from a plurality of coral elements including the first coral element.
The coral spawner system may be executable with the processor to construct the coral cluster from a plurality of first coral elements.
The coral spawner system may be executable with the processor to store at least a second coral element of a second type on the storage device, wherein the plurality of coral elements includes the second coral element.
The coral spawner system may be executable with the processor to determine a coral cluster setting, wherein the processor constructs the coral cluster according to the setting. The setting may be available space that is detected, and a number of the coral elements is selected based on the available space. The coral spawner system may be executable with the processor to simulate ambient light, wherein the setting is the ambient light. A number of coral elements may be selected based on the ambient light. Orientations of the coral elements may be selected based on the ambient light. The coral spawner system may include a data table on the storage device with a plurality of coral cluster settings, wherein the coral spawner system constructs the coral cluster according to the plurality of coral cluster settings. The coral cluster settings may include at least one of population; species max counts; spawn type; and height-based percentages. The coral spawner system may include a vertex crawling and raycast algorithm to check for placement viability, growth, and caching of valid points to file. The coral spawner system may include a run-time coral static mesh loop that first calculates a shadow pass and then creates instanced static meshes of all coral clusters. The coral spawner system may include collision-based exclusion configuration to place box colliders where certain species should not grow.
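The caching of valid points to file mentioned above could, purely as an illustration, be handled as in the following sketch; the cache file name, point format, and the stand-in viability check are hypothetical.

```python
import json
import os

CACHE_PATH = "valid_coral_points.json"  # hypothetical cache file

def compute_valid_points() -> list[list[float]]:
    """Stand-in for the vertex crawling and raycast placement-viability check."""
    return [[0.2, 0.0, 1.4], [1.1, 0.0, 2.3]]

def load_or_compute_valid_points() -> list[list[float]]:
    """Return cached placement points if present; otherwise compute and cache them
    so the viability check need not be repeated on later runs."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    points = compute_valid_points()
    with open(CACHE_PATH, "w") as f:
        json.dump(points, f)
    return points

print(load_or_compute_valid_points())
```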
Various example embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described and equivalents may be substituted without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit, or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present inventions. All such modifications are intended to be within the scope of claims associated with this disclosure.
The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the “providing” act merely requires the end user obtain, access, approach, position, set-up, activate, power-up or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.
Example aspects of the invention, together with details regarding material selection and manufacture have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.
In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value, between the upper and lower limit of that range and any other stated or intervening value in that stated range, is encompassed within the invention.
Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in claims associated hereto, the singular forms “a,” “an,” “said,” and “the” include plural referents unless specifically stated otherwise. In other words, use of the articles allows for “at least one” of the subject item in the description above as well as claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only” and the like in connection with the recitation of claim elements, or use of a “negative” limitation.
Without the use of such exclusive terminology, the term “comprising” in claims associated with this disclosure shall allow for the inclusion of any additional element—irrespective of whether a given number of elements are enumerated in such claims, or the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.
The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with this disclosure.
While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the current invention, and that this invention is not restricted to the specific constructions and arrangements shown and described since modifications may occur to those ordinarily skilled in the art.
This application claims priority from U.S. Provisional Patent Application No. 62/879,408, filed on Jul. 26, 2019, U.S. Provisional Patent Application No. 62/881,355, filed on Jul. 31, 2019 and U.S. Provisional Patent Application No. 62/899,678, filed on Sep. 12, 2019, all of which are incorporated herein by reference in their entirety.
| Number | Name | Date | Kind |
|---|---|---|---|
| 4344092 | Miller | Aug 1982 | A |
| 4652930 | Crawford | Mar 1987 | A |
| 4810080 | Grendol et al. | Mar 1989 | A |
| 4997268 | Dauvergne | Mar 1991 | A |
| 5007727 | Kahaney et al. | Apr 1991 | A |
| 5074295 | Willis | Dec 1991 | A |
| 5240220 | Elberbaum | Aug 1993 | A |
| 5251635 | Dumoulin et al. | Oct 1993 | A |
| 5410763 | Bolle | May 1995 | A |
| 5455625 | Englander | Oct 1995 | A |
| 5495286 | Adair | Feb 1996 | A |
| 5497463 | Stein et al. | Mar 1996 | A |
| 5682255 | Friesem et al. | Oct 1997 | A |
| 5826092 | Flannery | Oct 1998 | A |
| 5854872 | Tai | Dec 1998 | A |
| 5864365 | Sramek et al. | Jan 1999 | A |
| 5937202 | Crosetto | Aug 1999 | A |
| 6012811 | Chao et al. | Jan 2000 | A |
| 6016160 | Coombs et al. | Jan 2000 | A |
| 6064749 | Hirota et al. | May 2000 | A |
| 6076927 | Owens | Jun 2000 | A |
| 6117923 | Amagai et al. | Sep 2000 | A |
| 6124977 | Takahashi | Sep 2000 | A |
| 6191809 | Hori et al. | Feb 2001 | B1 |
| 6375369 | Schneider et al. | Apr 2002 | B1 |
| 6385735 | Wilson | May 2002 | B1 |
| 6538655 | Kubota | Mar 2003 | B1 |
| 6541736 | Huang et al. | Apr 2003 | B1 |
| 6757068 | Foxlin | Jun 2004 | B2 |
| 7046515 | Wyatt | May 2006 | B1 |
| 7051219 | Hwang | May 2006 | B2 |
| 7076674 | Cervantes | Jul 2006 | B2 |
| 7111290 | Yates, Jr. | Sep 2006 | B1 |
| 7119819 | Robertson et al. | Oct 2006 | B1 |
| 7219245 | Raghuvanshi | May 2007 | B1 |
| 7431453 | Hogan | Oct 2008 | B2 |
| 7542040 | Templeman | Jun 2009 | B2 |
| 7573640 | Nivon et al. | Aug 2009 | B2 |
| 7724980 | Shenzhi | May 2010 | B1 |
| 7751662 | Kleemann | Jul 2010 | B2 |
| 7758185 | Lewis | Jul 2010 | B2 |
| 8060759 | Arnan et al. | Nov 2011 | B1 |
| 8120851 | Iwasa | Feb 2012 | B2 |
| 8214660 | Capps, Jr. | Jul 2012 | B2 |
| 8246408 | Elliot | Aug 2012 | B2 |
| 8353594 | Lewis | Jan 2013 | B2 |
| 8508676 | Silverstein et al. | Aug 2013 | B2 |
| 8547638 | Levola | Oct 2013 | B2 |
| 8605764 | Rothaar et al. | Oct 2013 | B1 |
| 8619365 | Harris et al. | Dec 2013 | B2 |
| 8696113 | Lewis | Apr 2014 | B2 |
| 8733927 | Lewis | May 2014 | B1 |
| 8736636 | Kang | May 2014 | B2 |
| 8759929 | Shiozawa et al. | Jun 2014 | B2 |
| 8793770 | Lim | Jul 2014 | B2 |
| 8823855 | Hwang | Sep 2014 | B2 |
| 8847988 | Geisner et al. | Sep 2014 | B2 |
| 8874673 | Kim | Oct 2014 | B2 |
| 9010929 | Lewis | Apr 2015 | B2 |
| 9015501 | Gee | Apr 2015 | B2 |
| 9086537 | Iwasa et al. | Jul 2015 | B2 |
| 9095437 | Boyden et al. | Aug 2015 | B2 |
| 9239473 | Lewis | Jan 2016 | B2 |
| 9244293 | Lewis | Jan 2016 | B2 |
| 9244533 | Friend et al. | Jan 2016 | B2 |
| 9383823 | Geisner | Jul 2016 | B2 |
| 9489027 | Ogletree | Nov 2016 | B1 |
| 9519305 | Wolfe | Dec 2016 | B2 |
| 9581820 | Robbins | Feb 2017 | B2 |
| 9582060 | Balatsos | Feb 2017 | B2 |
| 9658473 | Lewis | May 2017 | B2 |
| 9671566 | Abovitz et al. | Jun 2017 | B2 |
| 9671615 | Vallius et al. | Jun 2017 | B1 |
| 9696795 | Marcolina et al. | Jul 2017 | B2 |
| 9798144 | Sako et al. | Oct 2017 | B2 |
| 9874664 | Stevens et al. | Jan 2018 | B2 |
| 9880441 | Osterhout | Jan 2018 | B1 |
| 9955862 | Freeman et al. | May 2018 | B2 |
| 9978118 | Ozgumer et al. | May 2018 | B1 |
| 9996797 | Holz et al. | Jun 2018 | B1 |
| 10018844 | Levola et al. | Jul 2018 | B2 |
| 10082865 | Raynal et al. | Sep 2018 | B1 |
| 10151937 | Lewis | Dec 2018 | B2 |
| 10185147 | Lewis | Jan 2019 | B2 |
| 10218679 | Jawahar | Feb 2019 | B1 |
| 10241545 | Richards et al. | Mar 2019 | B1 |
| 10317680 | Richards et al. | Jun 2019 | B1 |
| 10436594 | Belt et al. | Oct 2019 | B2 |
| 10516853 | Gibson et al. | Dec 2019 | B1 |
| 10551879 | Richards et al. | Feb 2020 | B1 |
| 10578870 | Kimmel | Mar 2020 | B2 |
| 10698202 | Kimmel et al. | Jun 2020 | B2 |
| 10856107 | Mycek et al. | Oct 2020 | B2 |
| 10825424 | Zhang | Nov 2020 | B2 |
| 10987176 | Poltaretskyi et al. | Apr 2021 | B2 |
| 11190681 | Brook et al. | Nov 2021 | B1 |
| 11209656 | Choi et al. | Dec 2021 | B1 |
| 11236993 | Hall et al. | Feb 2022 | B1 |
| 20010010598 | Aritake et al. | Aug 2001 | A1 |
| 20020007463 | Fung | Jan 2002 | A1 |
| 20020108064 | Nunally | Feb 2002 | A1 |
| 20020063913 | Nakamura et al. | May 2002 | A1 |
| 20020071050 | Homberg | Jun 2002 | A1 |
| 20020122648 | Mule' et al. | Sep 2002 | A1 |
| 20020140848 | Cooper et al. | Oct 2002 | A1 |
| 20030028816 | Bacon | Feb 2003 | A1 |
| 20030048456 | Hill | Mar 2003 | A1 |
| 20030067685 | Niv | Apr 2003 | A1 |
| 20030077458 | Korenaga et al. | Apr 2003 | A1 |
| 20030115494 | Cervantes | Jun 2003 | A1 |
| 20030218614 | Lavelle et al. | Nov 2003 | A1 |
| 20030219992 | Schaper | Nov 2003 | A1 |
| 20030226047 | Park | Dec 2003 | A1 |
| 20040001533 | Tran et al. | Jan 2004 | A1 |
| 20040021600 | Wittenberg | Feb 2004 | A1 |
| 20040025069 | Gary et al. | Feb 2004 | A1 |
| 20040042377 | Nikoloai et al. | Mar 2004 | A1 |
| 20040073822 | Greco | Apr 2004 | A1 |
| 20040073825 | Itoh | Apr 2004 | A1 |
| 20040111248 | Granny et al. | Jun 2004 | A1 |
| 20040174496 | Ji et al. | Sep 2004 | A1 |
| 20040186902 | Stewart | Sep 2004 | A1 |
| 20040201857 | Foxlin | Oct 2004 | A1 |
| 20040238732 | State et al. | Dec 2004 | A1 |
| 20040240072 | Schindler et al. | Dec 2004 | A1 |
| 20040246391 | Travis | Dec 2004 | A1 |
| 20040268159 | Aasheim et al. | Dec 2004 | A1 |
| 20050001977 | Zelman | Jan 2005 | A1 |
| 20050034002 | Flautner | Feb 2005 | A1 |
| 20050157159 | Komiya et al. | Jul 2005 | A1 |
| 20050177385 | Hull | Aug 2005 | A1 |
| 20050273792 | Inohara et al. | Dec 2005 | A1 |
| 20060013435 | Rhoads | Jan 2006 | A1 |
| 20060015821 | Jacques Parker et al. | Jan 2006 | A1 |
| 20060019723 | Vorenkamp | Jan 2006 | A1 |
| 20060038880 | Starkweather et al. | Feb 2006 | A1 |
| 20060050224 | Smith | Mar 2006 | A1 |
| 20060090092 | Verhulst | Apr 2006 | A1 |
| 20060126181 | Levola | Jun 2006 | A1 |
| 20060129852 | Bonola | Jun 2006 | A1 |
| 20060132914 | Weiss et al. | Jun 2006 | A1 |
| 20060179329 | Terechko | Aug 2006 | A1 |
| 20060221448 | Nivon et al. | Oct 2006 | A1 |
| 20060228073 | Mukawa et al. | Oct 2006 | A1 |
| 20060250322 | Hall et al. | Nov 2006 | A1 |
| 20060259621 | Ranganathan | Nov 2006 | A1 |
| 20060268220 | Hogan | Nov 2006 | A1 |
| 20070058248 | Nguyen et al. | Mar 2007 | A1 |
| 20070103836 | Oh | May 2007 | A1 |
| 20070124730 | Pytel | May 2007 | A1 |
| 20070159673 | Freeman et al. | Jul 2007 | A1 |
| 20070188837 | Shimizu et al. | Aug 2007 | A1 |
| 20070198886 | Saito | Aug 2007 | A1 |
| 20070204672 | Huang et al. | Sep 2007 | A1 |
| 20070213952 | Cirelli | Sep 2007 | A1 |
| 20070283247 | Brenneman et al. | Dec 2007 | A1 |
| 20080002259 | Ishizawa et al. | Jan 2008 | A1 |
| 20080002260 | Arrouy et al. | Jan 2008 | A1 |
| 20080043334 | Itzkovitch et al. | Feb 2008 | A1 |
| 20080046773 | Ham | Feb 2008 | A1 |
| 20080063802 | Maula et al. | Mar 2008 | A1 |
| 20080068557 | Menduni et al. | Mar 2008 | A1 |
| 20080146942 | Dala-Krishna | Jun 2008 | A1 |
| 20080173036 | Willaims | Jul 2008 | A1 |
| 20080177506 | Kim | Jul 2008 | A1 |
| 20080205838 | Crippa et al. | Aug 2008 | A1 |
| 20080215907 | Wilson | Sep 2008 | A1 |
| 20080225393 | Rinko | Sep 2008 | A1 |
| 20080316768 | Travis | Dec 2008 | A1 |
| 20090153797 | Allon et al. | Jun 2009 | A1 |
| 20090224416 | Laakkonen et al. | Sep 2009 | A1 |
| 20090245730 | Kleemann | Oct 2009 | A1 |
| 20090310633 | Ikegami | Dec 2009 | A1 |
| 20100005326 | Archer | Jan 2010 | A1 |
| 20100019962 | Fujita | Jan 2010 | A1 |
| 20100056274 | Uusitalo et al. | Mar 2010 | A1 |
| 20100063854 | Purvis et al. | Mar 2010 | A1 |
| 20100079841 | Levola | Apr 2010 | A1 |
| 20100153934 | Lachner | Jun 2010 | A1 |
| 20100194632 | Raento et al. | Aug 2010 | A1 |
| 20100232016 | Landa et al. | Sep 2010 | A1 |
| 20100232031 | Batchko et al. | Sep 2010 | A1 |
| 20100244168 | Shiozawa et al. | Sep 2010 | A1 |
| 20100296163 | Sarikko | Nov 2010 | A1 |
| 20110021263 | Anderson | Jan 2011 | A1 |
| 20110022870 | Mcgrane | Jan 2011 | A1 |
| 20110050655 | Mukawa | Mar 2011 | A1 |
| 20110122240 | Becker | May 2011 | A1 |
| 20110145617 | Thomson et al. | Jun 2011 | A1 |
| 20110170801 | Lu et al. | Jul 2011 | A1 |
| 20110218733 | Hamza et al. | Sep 2011 | A1 |
| 20110286735 | Temblay | Nov 2011 | A1 |
| 20110291969 | Rashid et al. | Dec 2011 | A1 |
| 20120011389 | Driesen | Jan 2012 | A1 |
| 20120050535 | Densham et al. | Mar 2012 | A1 |
| 20120075501 | Oyagi | Mar 2012 | A1 |
| 20120081392 | Arthur | Apr 2012 | A1 |
| 20120089854 | Breakstone | Apr 2012 | A1 |
| 20120113235 | Shintani | May 2012 | A1 |
| 20120127062 | Bar-Zeev et al. | May 2012 | A1 |
| 20120154557 | Perez et al. | Jun 2012 | A1 |
| 20120218301 | Miller | Aug 2012 | A1 |
| 20120246506 | Knight | Sep 2012 | A1 |
| 20120249416 | Maciocci et al. | Oct 2012 | A1 |
| 20120249741 | Maciocci et al. | Oct 2012 | A1 |
| 20120260083 | Andrews | Oct 2012 | A1 |
| 20120307075 | Margalit | Dec 2012 | A1 |
| 20120307362 | Silverstein et al. | Dec 2012 | A1 |
| 20120320460 | Levola | Dec 2012 | A1 |
| 20120326948 | Crocco et al. | Dec 2012 | A1 |
| 20130021486 | Richardson | Jan 2013 | A1 |
| 20130050833 | Lewis et al. | Feb 2013 | A1 |
| 20130051730 | Travers et al. | Feb 2013 | A1 |
| 20130502058 | Liu et al. | Feb 2013 | |
| 20130077049 | Bohn | Mar 2013 | A1 |
| 20130077170 | Ukuda | Mar 2013 | A1 |
| 20130094148 | Sloane | Apr 2013 | A1 |
| 20130129282 | Li | May 2013 | A1 |
| 20130169923 | Schnoll et al. | Jul 2013 | A1 |
| 20130205126 | Kruglick | Aug 2013 | A1 |
| 20130268257 | Hu | Oct 2013 | A1 |
| 20130278633 | Ahn | Oct 2013 | A1 |
| 20130314789 | Saarikko et al. | Nov 2013 | A1 |
| 20130318276 | Dalal | Nov 2013 | A1 |
| 20130336138 | Venkatraman et al. | Dec 2013 | A1 |
| 20130342564 | Kinnebrew | Dec 2013 | A1 |
| 20130342570 | Kinnebrew | Dec 2013 | A1 |
| 20130342571 | Kinnebrew | Dec 2013 | A1 |
| 20130343408 | Cook | Dec 2013 | A1 |
| 20140013098 | Yeung | Jan 2014 | A1 |
| 20140016821 | Arth et al. | Jan 2014 | A1 |
| 20140022819 | Oh et al. | Jan 2014 | A1 |
| 20140078023 | Ikeda et al. | Mar 2014 | A1 |
| 20140082526 | Park et al. | Mar 2014 | A1 |
| 20140119598 | Ramachandran et al. | May 2014 | A1 |
| 20140126769 | Reitmayr et al. | May 2014 | A1 |
| 20140140653 | Brown et al. | May 2014 | A1 |
| 20140149573 | Tofighbakhsh et al. | May 2014 | A1 |
| 20140168260 | O'Brien et al. | Jun 2014 | A1 |
| 20140266987 | Magyari | Sep 2014 | A1 |
| 20140267419 | Ballard et al. | Sep 2014 | A1 |
| 20140274391 | Stafford | Sep 2014 | A1 |
| 20140282105 | Nordstrom | Sep 2014 | A1 |
| 20140340449 | Plagemann et al. | Nov 2014 | A1 |
| 20140359589 | Kodsky et al. | Dec 2014 | A1 |
| 20140375680 | Ackerman et al. | Dec 2014 | A1 |
| 20150005785 | Olson | Jan 2015 | A1 |
| 20150009099 | Queen | Jan 2015 | A1 |
| 20150077312 | Wang | Mar 2015 | A1 |
| 20150097719 | Balachandreswaran et al. | Apr 2015 | A1 |
| 20150123966 | Newman | May 2015 | A1 |
| 20150130790 | Vazquez, II et al. | May 2015 | A1 |
| 20150134995 | Park et al. | May 2015 | A1 |
| 20150138248 | Schrader | May 2015 | A1 |
| 20150155939 | Oshima et al. | Jun 2015 | A1 |
| 20150168221 | Mao et al. | Jun 2015 | A1 |
| 20150205126 | Schowengerdt | Jul 2015 | A1 |
| 20150235431 | Schowengerdt | Aug 2015 | A1 |
| 20150253651 | Russell et al. | Sep 2015 | A1 |
| 20150256484 | Cameron | Sep 2015 | A1 |
| 20150269784 | Miyawaki et al. | Sep 2015 | A1 |
| 20150294483 | Wells et al. | Oct 2015 | A1 |
| 20150301955 | Yakovenko et al. | Oct 2015 | A1 |
| 20150338915 | Publicover et al. | Nov 2015 | A1 |
| 20150355481 | Hilkes et al. | Dec 2015 | A1 |
| 20160004102 | Nisper et al. | Jan 2016 | A1 |
| 20160027215 | Burns et al. | Jan 2016 | A1 |
| 20160033770 | Fujimaki et al. | Feb 2016 | A1 |
| 20160077338 | Robbins et al. | Mar 2016 | A1 |
| 20160085285 | Mangione-Smith | Mar 2016 | A1 |
| 20160085300 | Robbins et al. | Mar 2016 | A1 |
| 20160091720 | Stafford et al. | Mar 2016 | A1 |
| 20160093099 | Bridges | Mar 2016 | A1 |
| 20160093269 | Buckley et al. | Mar 2016 | A1 |
| 20160123745 | Cotier et al. | May 2016 | A1 |
| 20160155273 | Lyren et al. | Jun 2016 | A1 |
| 20160180596 | Gonzalez del Rosario | Jun 2016 | A1 |
| 20160187654 | Border et al. | Jun 2016 | A1 |
| 20160191887 | Casas | Jun 2016 | A1 |
| 20160202496 | Billetz et al. | Jul 2016 | A1 |
| 20160217624 | Finn et al. | Jul 2016 | A1 |
| 20160266412 | Yoshida | Sep 2016 | A1 |
| 20160267708 | Nistico et al. | Sep 2016 | A1 |
| 20160274733 | Hasegawa et al. | Sep 2016 | A1 |
| 20160287337 | Aram et al. | Oct 2016 | A1 |
| 20160300388 | Stafford et al. | Oct 2016 | A1 |
| 20160321551 | Priness et al. | Nov 2016 | A1 |
| 20160327798 | Xiao et al. | Nov 2016 | A1 |
| 20160334279 | Mittleman et al. | Nov 2016 | A1 |
| 20160357255 | Lindh et al. | Dec 2016 | A1 |
| 20160370404 | Quadrat et al. | Dec 2016 | A1 |
| 20160370510 | Thomas | Dec 2016 | A1 |
| 20170038607 | Camara | Feb 2017 | A1 |
| 20170060225 | Zha et al. | Mar 2017 | A1 |
| 20170061696 | Li et al. | Mar 2017 | A1 |
| 20170064066 | Das et al. | Mar 2017 | A1 |
| 20170100664 | Osterhout et al. | Apr 2017 | A1 |
| 20170115487 | Travis | Apr 2017 | A1 |
| 20170122725 | Yeoh et al. | May 2017 | A1 |
| 20170123526 | Trail et al. | May 2017 | A1 |
| 20170127295 | Black et al. | May 2017 | A1 |
| 20170131569 | Aschwanden et al. | May 2017 | A1 |
| 20170147066 | Katz et al. | May 2017 | A1 |
| 20170160518 | Lanman et al. | Jun 2017 | A1 |
| 20170161951 | Fix et al. | Jun 2017 | A1 |
| 20170185261 | Perez et al. | Jun 2017 | A1 |
| 20170192239 | Nakamura et al. | Jul 2017 | A1 |
| 20170205903 | Miller et al. | Jul 2017 | A1 |
| 20170206668 | Poulos et al. | Jul 2017 | A1 |
| 20170213388 | Margolis et al. | Jul 2017 | A1 |
| 20170219841 | Popovich et al. | Aug 2017 | A1 |
| 20170232345 | Rofougaran et al. | Aug 2017 | A1 |
| 20170235126 | DiDomenico | Aug 2017 | A1 |
| 20170235129 | Kamakura | Aug 2017 | A1 |
| 20170235142 | Wall et al. | Aug 2017 | A1 |
| 20170235144 | Piskunov et al. | Aug 2017 | A1 |
| 20170235147 | Kamakura | Aug 2017 | A1 |
| 20170243403 | Daniels et al. | Aug 2017 | A1 |
| 20170254832 | Ho et al. | Sep 2017 | A1 |
| 20170256096 | Faaborg et al. | Sep 2017 | A1 |
| 20170258526 | Lang | Sep 2017 | A1 |
| 20170270712 | Tyson | Sep 2017 | A1 |
| 20170281054 | Stever et al. | Oct 2017 | A1 |
| 20170287376 | Bakar et al. | Oct 2017 | A1 |
| 20170293141 | Schowengerdt et al. | Oct 2017 | A1 |
| 20170307886 | Stenberg et al. | Oct 2017 | A1 |
| 20170307891 | Bucknor et al. | Oct 2017 | A1 |
| 20170312032 | Amanatullah et al. | Nov 2017 | A1 |
| 20170322426 | Tervo | Nov 2017 | A1 |
| 20170329137 | Tervo | Nov 2017 | A1 |
| 20170332098 | Rusanovskyy et al. | Nov 2017 | A1 |
| 20170336636 | Amitai et al. | Nov 2017 | A1 |
| 20170357332 | Balan et al. | Dec 2017 | A1 |
| 20170371394 | Chan | Dec 2017 | A1 |
| 20170371661 | Sparling | Dec 2017 | A1 |
| 20180014266 | Chen | Jan 2018 | A1 |
| 20180024289 | Fattal | Jan 2018 | A1 |
| 20180044173 | Netzer | Feb 2018 | A1 |
| 20180052007 | Teskey et al. | Feb 2018 | A1 |
| 20180052501 | Jones, Jr. et al. | Feb 2018 | A1 |
| 20180059305 | Popovich et al. | Mar 2018 | A1 |
| 20180067779 | Pillalamarri et al. | Mar 2018 | A1 |
| 20180070855 | Eichler | Mar 2018 | A1 |
| 20180082480 | White et al. | Mar 2018 | A1 |
| 20180088185 | Woods et al. | Mar 2018 | A1 |
| 20180102981 | Kurtzman et al. | Apr 2018 | A1 |
| 20180108179 | Tomlin et al. | Apr 2018 | A1 |
| 20180114298 | Malaika et al. | Apr 2018 | A1 |
| 20180131907 | Schmirler et al. | May 2018 | A1 |
| 20180136466 | Ko | May 2018 | A1 |
| 20180151796 | Akahane | May 2018 | A1 |
| 20180188115 | Hsu et al. | Jul 2018 | A1 |
| 20180189568 | Powderly et al. | Jul 2018 | A1 |
| 20180190017 | Mendez et al. | Jul 2018 | A1 |
| 20180191990 | Motoyama | Jul 2018 | A1 |
| 20180250589 | Cossairt et al. | Sep 2018 | A1 |
| 20180284877 | Klein | Oct 2018 | A1 |
| 20180357472 | Dreessen | Dec 2018 | A1 |
| 20190011691 | Peyman | Jan 2019 | A1 |
| 20190056591 | Tervo et al. | Feb 2019 | A1 |
| 20190087015 | Lam et al. | Mar 2019 | A1 |
| 20190101758 | Zhu et al. | Apr 2019 | A1 |
| 20190155439 | Mukherjee et al. | May 2019 | A1 |
| 20190158926 | Kang et al. | May 2019 | A1 |
| 20190167095 | Krueger | Jun 2019 | A1 |
| 20190172216 | Ninan et al. | Jun 2019 | A1 |
| 20190178654 | Hare | Jun 2019 | A1 |
| 20190196690 | Chong | Jun 2019 | A1 |
| 20190219815 | Price et al. | Jul 2019 | A1 |
| 20190243123 | Bohn | Aug 2019 | A1 |
| 20190318540 | Piemonte | Oct 2019 | A1 |
| 20190321728 | Imai | Oct 2019 | A1 |
| 20190347853 | Chen et al. | Nov 2019 | A1 |
| 20190380792 | Poltaretskyi et al. | Dec 2019 | A1 |
| 20200098188 | Bar-Zeev | Mar 2020 | A1 |
| 20200110928 | Al Jazaery et al. | Apr 2020 | A1 |
| 20200117267 | Gibson et al. | Apr 2020 | A1 |
| 20200117270 | Gibson et al. | Apr 2020 | A1 |
| 20200202759 | Ukai et al. | Jun 2020 | A1 |
| 20200309944 | Thoresen et al. | Oct 2020 | A1 |
| 20200356161 | Wagner | Nov 2020 | A1 |
| 20200368616 | Delamont | Nov 2020 | A1 |
| 20200409528 | Lee | Dec 2020 | A1 |
| 20210008413 | Asikainen et al. | Jan 2021 | A1 |
| 20210033871 | Jacoby et al. | Feb 2021 | A1 |
| 20210041951 | Gibson et al. | Feb 2021 | A1 |
| 20210053820 | Gurin et al. | Feb 2021 | A1 |
| 20210093391 | Poltaretskyi et al. | Apr 2021 | A1 |
| 20210093410 | Gaborit et al. | Apr 2021 | A1 |
| 20210093414 | Moore et al. | Apr 2021 | A1 |
| 20210097886 | Kuester et al. | Apr 2021 | A1 |
| 20210142582 | Jones | May 2021 | A1 |
| 20210158627 | Cossairt | May 2021 | A1 |
| 20210173480 | Osterhout et al. | Jun 2021 | A1 |
| Number | Date | Country |
|---|---|---|
| 104603675 | May 2015 | CN |
| 107683497 | Feb 2018 | CN |
| 0504930 | Mar 1992 | EP |
| 0535402 | Apr 1993 | EP |
| 0632360 | Jan 1995 | EP |
| 1215522 | Jun 2002 | EP |
| 1494110 | Jan 2005 | EP |
| 1938141 | Jul 2008 | EP |
| 1943556 | Jul 2008 | EP |
| 2290428 | Mar 2011 | EP |
| 2350774 | Aug 2011 | EP |
| 1237067 | Jan 2016 | EP |
| 3139245 | Mar 2017 | EP |
| 3164776 | May 2017 | EP |
| 3236211 | Oct 2017 | EP |
| 2723240 | Aug 2018 | EP |
| 2896986 | Feb 2021 | EP |
| 2499635 | Aug 2013 | GB |
| 2542853 | Apr 2017 | GB |
| 938DEL2004 | Jun 2006 | IN |
| 2002-529806 | Sep 2002 | JP |
| 2003-029198 | Jan 2003 | JP |
| 2007-012530 | Jan 2007 | JP |
| 2008-257127 | Oct 2008 | JP |
| 2009-090689 | Apr 2009 | JP |
| 2009-244869 | Oct 2009 | JP |
| 2012-015774 | Jan 2012 | JP |
| 2013-525872 | Jun 2013 | JP |
| 2016-85463 | May 2016 | JP |
| 2016-516227 | Jun 2016 | JP |
| 6232763 | Nov 2017 | JP |
| 6333965 | May 2018 | JP |
| 2005-0010775 | Jan 2005 | KR |
| 10-1372623 | Mar 2014 | KR |
| 201219829 | May 2012 | TW |
| 201803289 | Jan 2018 | TW |
| 1991000565 | Jan 1991 | WO |
| 2000030368 | Jun 2000 | WO |
| 2002071315 | Sep 2002 | WO |
| 2004095248 | Nov 2004 | WO |
| 2006132614 | Dec 2006 | WO |
| 2007085682 | Aug 2007 | WO |
| 2007102144 | Sep 2007 | WO |
| 2008148927 | Dec 2008 | WO |
| 2009101238 | Aug 2009 | WO |
| 2012030787 | Mar 2012 | WO |
| 2013049012 | Apr 2013 | WO |
| 2013062701 | May 2013 | WO |
| 2015143641 | Oct 2015 | WO |
| 2016054092 | Apr 2016 | WO |
| 2017004695 | Jan 2017 | WO |
| 2017044761 | Mar 2017 | WO |
| 2017120475 | Jul 2017 | WO |
| 2017203201 | Nov 2017 | WO |
| 2018044537 | Mar 2018 | WO |
| 2018087408 | May 2018 | WO |
| 2018097831 | May 2018 | WO |
| 2018166921 | Sep 2018 | WO |
| 2019148154 | Aug 2019 | WO |
| 2020010226 | Jan 2020 | WO |
| Entry |
|---|
| Communication Pursuant to Article 94(3) EPC dated Sep. 4, 2019, European Patent Application No. 10793707.0, (4 pages). |
| Examination Report dated Jun. 19, 2020, European Patent Application No. 20154750.2, (10 pages). |
| Extended European Search Report dated May 20, 2020, European Patent Application No. 20154070.5, (7 pages). |
| Extended European Search Report dated Jun. 12, 2017, European Patent Application No. 16207441.3, (8 pages). |
| Final Office Action dated Aug. 10, 2020, U.S. Appl. No. 16/225,961, (13 pages). |
| Final Office Action dated Dec. 4, 2019, U.S. Appl. No. 15/564,517, (15 pages). |
| Final Office Action dated Feb. 19, 2020, U.S. Appl. No. 15/552,897, (17 pages). |
| International Search Report and Written Opinion dated Mar. 12, 2020, International PCT Patent Application No. PCT/US19/67919, (14 pages). |
| International Search Report and Written Opinion dated Aug. 15, 2019, International PCT Patent Application No. PCT/US19/33987, (20 pages). |
| International Search Report and Written Opinion dated Jun. 15, 2020, International PCT Patent Application No. PCT/US2020/017023, (13 pages). |
| International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/43097, (10 pages). |
| International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/36275, (10 pages). |
| International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/43099, (9 pages). |
| International Search Report and Written Opinion dated Jun. 17, 2016, International PCT Patent Application No. PCT/FI2016/050172, (9 pages). |
| International Search Report and Written Opinion dated Oct. 22, 2019, International PCT Patent Application No. PCT/US19/43751, (9 pages). |
| International Search Report and Written Opinion dated Dec. 23, 2019, International PCT Patent Application No. PCT/US19/44953, (11 pages). |
| International Search Report and Written Opinion dated May 23, 2019, International PCT Patent Application No. PCT/US18/66514, (17 pages). |
| International Search Report and Written Opinion dated Sep. 26, 2019, International PCT Patent Application No. PCT/US19/40544, (12 pages). |
| International Search Report and Written Opinion dated Aug. 27, 2019, International PCT Application No. PCT/US2019/035245, (8 pages). |
| International Search Report and Written Opinion dated Dec. 27, 2019, International Application No. PCT/US19/47746, (16 pages). |
| International Search Report and Written Opinion dated Sep. 30, 2019, International Patent Application No. PCT/US19/40324, (7 pages). |
| International Search Report and Written Opinion dated Sep. 4, 2020, International Patent Application No. PCT/US20/31036, (13 pages). |
| International Search Report and Written Opinion dated Jun. 5, 2020, International Patent Application No. PCT/US20/19871, (9 pages). |
| International Search Report and Written Opinion dated Aug. 8, 2019, International PCT Patent Application No. PCT/US2019/034763, (8 pages). |
| International Search Report and Written Opinion dated Oct. 8, 2019, International PCT Patent Application No. PCT/US19/41151, (7 pages). |
| International Search Report and Written Opinion dated Jan. 9, 2020, International Application No. PCT/US19/55185, (10 pages). |
| International Search Report and Written Opinion dated Feb. 28, 2019, International Patent Application No. PCT/US18/64686, (8 pages). |
| International Search Report and Written Opinion dated Feb. 7, 2020, International PCT Patent Application No. PCT/US2019/061265, (11 pages). |
| International Search Report and Written Opinion dated Jun. 11, 2019, International PCT Application No. PCT/US19/22620, (7 pages). |
| Invitation to Pay Additional Fees dated Aug. 15, 2019, International PCT Patent Application No. PCT/US19/36275, (2 pages). |
| Invitation to Pay Additional Fees dated Sep. 24, 2020, International Patent Application No. PCT/US2020/043596, (3 pages). |
| Invitation to Pay Additional Fees dated Oct. 22, 2019, International PCT Patent Application No. PCT/US19/47746, (2 pages). |
| Invitation to Pay Additional Fees dated Apr. 3, 2020, International Patent Application No. PCT/US20/17023, (2 pages). |
| Invitation to Pay Additional Fees dated Oct. 17, 2019, International PCT Patent Application No. PCT/US19/44953, (2 pages). |
| Non Final Office Action dated Aug. 21, 2019, U.S. Appl. No. 15/564,517, (14 pages). |
| Non Final Office Action dated Jul. 27, 2020, U.S. Appl. No. 16/435,933, (16 pages). |
| Non Final Office Action dated Jun. 17, 2020, U.S. Appl. No. 16/682,911, (22 pages). |
| Non Final Office Action dated Jun. 19, 2020, U.S. Appl. No. 16/225,961, (35 pages). |
| Non Final Office Action dated Nov. 19, 2019, U.S. Appl. No. 16/355,611, (31 pages). |
| Non Final Office Action dated Oct. 22, 2019, U.S. Appl. No. 15/859,277, (15 pages). |
| Non Final Office Action dated Sep. 1, 2020, U.S. Appl. No. 16/214,575, (40 pages). |
| Notice of Allowance dated Mar. 25, 2020, U.S. Appl. No. 15/564,517, (11 pages). |
| Notice of Allowance dated Oct. 5, 2020, U.S. Appl. No. 16/682,911, (27 pages). |
| Notice of Reason of Refusal dated Sep. 11, 2020 with English translation, Japanese Patent Application No. 2019-140435, (6 pages). |
| Summons to attend oral proceedings pursuant to Rule 115(1) EPC mailed on Jul. 15, 2019, European Patent Application No. 15162521.7, (7 pages). |
| Aarik, J. et al., “Effect of crystal structure on optical properties of TiO2 films grown by atomic layer deposition”, Thin Solid Films; Publication [online]. May 19, 1998 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.sciencedirect.com/science/article/pii/S0040609097001351?via%3Dihub>; DOI: 10.1016/S0040-6090(97)00135-1; see entire document, (2 pages). |
| Azom, “Silica—Silicon Dioxide (SiO2)”, AZO Materials; Publication [Online]. Dec. 13, 2001 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.azom.com/article.aspx?ArticleID=1114>, (6 pages). |
| Goodfellow, “Titanium Dioxide—Titania (TiO2)”, AZO Materials; Publication [online]. Jan. 11, 2002 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.azom.com/article.aspx?ArticleID=1179>, (9 pages). |
| Levola, T. , “Diffractive Optics for Virtual Reality Displays”, Journal of the SID EURODISPLAY 14/05, 2005, XP008093627, chapters 2-3, Figures 2 and 10, pp. 467-475. |
| Levola, Tapani , “Invited Paper: Novel Diffractive Optical Components for Near to Eye Displays—Nokia Research Center”, SID 2006 Digest, 2006 SID International Symposium, Society for Information Display, vol. XXXVII, May 24, 2005, chapters 1-3, figures 1 and 3, pp. 64-67. |
| Memon, F. et al., “Synthesis, Characterization and Optical Constants of Silicon Oxycarbide”, EPJ Web of Conferences; Publication [online]. Mar. 23, 2017 [retrieved Feb. 19, 2020]. <URL: https://www.epj-conferences.org/articles/epjconf/pdf/2017/08/epjconf_nanop2017_00002.pdf>; DOI: 10.1051/epjconf/201713900002, (8 pages). |
| Spencer, T. et al., “Decomposition of poly(propylene carbonate) with UV sensitive iodonium salts”, Polymer Degradation and Stability; [online]. Dec. 24, 2010 [retrieved Feb. 19, 2020]. <URL: http://kohl.chbe.gatech.edu/sites/default/files/linked_files/publications/2011Decomposition%20of%20poly(propylene%20carbonate)%20with%20UV%20sensitive%20iodonium%20salts.pdf>; DOI: 10.1016/j.polymdegradstab.2010.12.003, (17 pages). |
| Weissel, et al., “Process cruise control: event-driven clock scaling for dynamic power management”, Proceedings of the 2002 international conference on Compilers, architecture, and synthesis for embedded systems, Oct. 11, 2002. Retrieved on May 16, 2020 from <URL: https://dl.acm.org/doi/pdf/10.1145/581630.581668>, pp. 238-246. |
| “ARToolKit: Hardware”, https://web.archive.org/web/20051013062315/http://www.hitl.washington.edu:80/artoolkit/documentation/hardware.htm (downloaded Oct. 26, 2020), Oct. 13, 2005, (3 pages). |
| European Search Report dated Oct. 15, 2020, European Patent Application No. 20180623.9, (10 pages). |
| Extended European Search Report dated Jan. 22, 2021, European Patent Application No. 18890390.0, (11 pages). |
| Extended European Search Report dated Nov. 3, 2020, European Patent Application No. 18885707.2, (7 pages). |
| Extended European Search Report dated Nov. 4, 2020, European Patent Application No. 20190980.1, (14 pages). |
| Final Office Action dated Nov. 24, 2020, U.S. Appl. No. 16/435,933, (44 pages). |
| International Search Report and Written Opinion dated Dec. 3, 2020, International Patent Application No. PCT/US20/43596, (25 pages). |
| Non Final Office Action dated Jan. 26, 2021, U.S. Appl. No. 16/928,313, (33 pages). |
| Non Final Office Action dated Jan. 27, 2021, U.S. Appl. No. 16/225,961, (15 pages). |
| Non Final Office Action dated Nov. 5, 2020, U.S. Appl. No. 16/530,776, (45 pages). |
| “Phototourism Challenge”, CVPR 2019 Image Matching Workshop. https://image-matching-workshop.github.io, (16 pages). |
| Arandjelović, Relja et al., “Three things everyone should know to improve object retrieval”, CVPR, 2012, (8 pages). |
| Azuma, Ronald T. , “A Survey of Augmented Reality”, Presence Teleoperators and Virtual Environments 6, 4 (Aug. 1997), 355-385 https://web.archive.org/web/20010604100006/http://www.cs.unc.edu/˜azuma/ARpresence.pdf (downloaded Oct. 26, 2020). |
| Azuma, Ronald T. , “Predictive Tracking for Augmented Reality”, Department of Computer Science, Chapel Hill NC; TR95-007, Feb. 1995, 262 pages. |
| Battaglia, Peter W. et al., “Relational inductive biases, deep learning, and graph networks”, arXiv:1806.01261, Oct. 17, 2018, pp. 1-40. |
| Berg, Alexander C. et al., “Shape matching and object recognition using low distortion correspondences”, In CVPR, 2005, (8 pages). |
| Bian, Jiawang et al., “GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence.”, In CVPR (Conference on Computer Vision and Pattern Recognition), 2017, (10 pages). |
| Bimber, Oliver et al., “Spatial Augmented Reality: Merging Real and Virtual Worlds”, https://web.media.mit.edu/˜raskar/book/BimberRaskarAugmentedRealityBook.pdf; published by A K Peters/CRC Press (Jul. 31, 2005); eBook (3rd Edition, 2007), (393 pages). |
| Brachmann, Eric et al., “Neural-Guided RANSAC: Learning Where to Sample Model Hypotheses”, In ICCV (International Conference on Computer Vision ), arXiv:1905.04132v2 [cs.CV] Jul. 31, 2019, (17 pages). |
| Butail, et al., “Putting the fish in the fish tank: Immersive VR for animal behavior experiments”, In: 2012 IEEE International Conference on Robotics and Automation. May 18, 2012 (May 18, 2012) Retrieved on Nov. 14, 2020 (Nov. 14, 2020) from <http://cdcl.umd.edu/papers/icra2012.pdf>, entire document, (8 pages). |
| Caetano, Tibério S. et al., “Learning graph matching”, IEEE TPAMI, 31(6):1048-1058, 2009. |
| Cech, Jan et al., “Efficient sequential correspondence selection by cosegmentation”, IEEE TPAMI, 32(9):1568-1581, Sep. 2010. |
| Cuturi, Marco , “Sinkhorn distances: Lightspeed computation of optimal transport”, NIPS, 2013, (9 pages). |
| Dai, Angela et al., “ScanNet: Richly-annotated 3d reconstructions of indoor scenes”, In CVPR, arXiv:1702.04405v2 [cs.CV] Apr. 11, 2017, (22 pages). |
| Deng, Haowen et al., “PPFnet: Global context aware local features for robust 3d point matching”, In CVPR, arXiv:1802.02669v2 [cs.CV] Mar. 1, 2018, (12 pages). |
| Detone, Daniel et al., “Deep image homography estimation”, In RSS Work-shop: Limits and Potentials of Deep Learning in Robotics, arXiv:1606.03798v1 [cs.CV] Jun. 13, 2016, (6 pages). |
| Detone, Daniel et al., “Self-improving visual odometry”, arXiv:1812.03245, Dec. 8, 2018, (9 pages). |
| Detone, Daniel et al., “SuperPoint: Self-supervised interest point detection and description”, In CVPR Workshop on Deep Learning for Visual SLAM, arXiv:1712.07629v4 [cs.CV] Apr. 19, 2018, (13 pages). |
| Dusmanu, Mihai et al., “D2-net: A trainable CNN for joint detection and description of local features”, CVPR, arXiv:1905.03561v1 [cs.CV] May 9, 2019, (16 pages). |
| Ebel, Patrick et al., “Beyond cartesian representations for local descriptors”, ICCV, arXiv:1908.05547v1 [cs.CV] Aug. 15, 2019, (11 pages). |
| Fischler, Martin A et al., “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography”, Communications of the ACM, 24(6): 1981, pp. 381-395. |
| Gilmer, Justin et al., “Neural message passing for quantum chemistry”, In ICML, arXiv:1704.01212v2 [cs.LG] Jun. 12, 2017, (14 pages). |
| Hartley, Richard et al., “Multiple View Geometry in Computer Vision”, Cambridge University Press, 2003, pp. 1-673. |
| Jacob, Robert J. , “Eye Tracking in Advanced Interface Design”, Human-Computer Interaction Lab, Naval Research Laboratory, Washington, D.C., date unknown. 2003, pp. 1-50. |
| Lee, Juho et al., “Set transformer: A frame-work for attention-based permutation-invariant neural networks”, ICML, arXiv:1810.00825v3 [cs.LG] May 26, 2019, (17 pages). |
| Leordeanu, Marius et al., “A spectral technique for correspondence problems using pairwise constraints”, Proceedings of (ICCV) International Conference on Computer Vision, vol. 2, pp. 1482-1489, Oct. 2005, (8 pages). |
| Li, Yujia et al., “Graph matching networks for learning the similarity of graph structured objects”, ICML, arXiv:1904.12787v2 [cs.LG] May 12, 2019, (18 pages). |
| Li, Zhengqi et al., “Megadepth: Learning single-view depth prediction from internet photos”, In CVPR, fromarXiv: 1804.00607v4 [cs.CV] Nov. 28, 2018, (10 pages). |
| Loiola, Eliane M. et al., “A survey for the quadratic assignment problem”, European journal of operational research, 176(2): 2007, pp. 657-690. |
| Lowe, David G. , “Distinctive image features from scale-invariant keypoints”, International Journal of Computer Vision, 60(2): 91-110, 2004, (28 pages). |
| Luo, Zixin et al., “ContextDesc: Local descriptor augmentation with cross-modality context”, CVPR, arXiv:1904.04084v1 [cs.CV] Apr. 8, 2019, (14 pages). |
| Munkres, James , “Algorithms for the assignment and transportation problems”, Journal of the Society for Industrial and Applied Mathematics, 5(1): 1957, pp. 32-38. |
| Ono, Yuki et al., “LF-Net: Learning local features from images”, 32nd Conference on Neural Information Processing Systems (NIPS 2018), arXiv:1805.09662v2 [cs.CV] Nov. 22, 2018, (13 pages). |
| Paszke, Adam et al., “Automatic differentiation in Pytorch”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, (4 pages). |
| Peyre, Gabriel et al., “Computational Optimal Transport”, Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019 arXiv:1803.00567v4 [stat.ML] Mar. 18, 2020, (209 pages). |
| Qi, Charles R. et al., “Pointnet++: Deep hierarchical feature learning on point sets in a metric space.”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA., (10 pages). |
| Qi, Charles R. et al., “Pointnet: Deep Learning on Point Sets for 3D Classification and Segmentation”, CVPR, arXiv:1612.00593v2 [cs.CV] Apr. 10, 2017, (19 pages). |
| Radenović, Filip et al., “Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking”, CVPR, arXiv:1803.11285v1 [cs.CV] Mar. 29, 2018, (10 pages). |
| Raguram, Rahul et al., “A comparative analysis of ransac techniques leading to adaptive real-time random sample consensus”, Computer Vision—ECCV 2008, 10th European Conference on Computer Vision, Marseille, France, Oct. 12-18, 2008, Proceedings, Part I, (15 pages). |
| Ranftl, René et al., “Deep fundamental matrix estimation”, European Conference on Computer Vision (ECCV), 2018, (17 pages). |
| Revaud, Jerome et al., “R2D2: Repeatable and Reliable Detector and Descriptor”, In NeurIPS, arXiv:1906.06195v2 [cs.CV] Jun. 17, 2019, (12 pages). |
| Rocco, Ignacio et al., “Neighbourhood Consensus Networks”, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada, arXiv:1810.10510v2 [cs.CV] Nov. 29, 2018, (20 pages). |
| Rublee, Ethan et al., “ORB: An efficient alternative to SIFT or SURF”, Proceedings of the IEEE International Conference on Computer Vision. 2564-2571. 2011; 10.1109/ICCV.2011.612654, (9 pages). |
| Sattler, Torsten et al., “SCRAMSAC: Improving RANSAC's efficiency with a spatial consistency filter”, ICCV, 2009: 2090-2097., (8 pages). |
| Schonberger, Johannes L. et al., “Pixelwise view selection for unstructured multi-view stereo”, Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, Oct. 11-14, 2016, Proceedings, Part III, pp. 501-518, 2016. |
| Schonberger, Johannes L. et al., “Structure-from-motion revisited”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4104-4113, (11 pages). |
| Sinkhorn, Richard et al., “Concerning nonnegative matrices and doubly stochastic matrices.”, Pacific Journal of Mathematics, 1967, pp. 343-348. |
| Tanriverdi, Vildan et al., “Interacting With Eye Movements in Virtual Environments”, Department of Electrical Engineering and Computer Science, Tufts University; Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Apr. 2000, pp. 1-8. |
| Thomee, Bart et al., “YFCC100m: The new data in multimedia research”, Communications of the ACM, 59(2):64-73, 2016; arXiv:1503.01817v2 [cs.MM] Apr. 25, 2016, (8 pages). |
| Torresani, Lorenzo et al., “Feature correspondence via graph matching: Models and global optimization”, Computer Vision—ECCV 2008, 10th European Conference on Computer Vision, Marseille, France, Oct. 12-18, 2008, Proceedings, Part II, (15 pages). |
| Tuytelaars, Tinne et al., “Wide baseline stereo matching based on local, affinely invariant regions”, BMVC, 2000, pp. 1-14. |
| Ulyanov, Dmitry et al., “Instance normalization: The missing ingredient for fast stylization”, arXiv:1607.08022v3 [cs.CV] Nov. 6, 2017, (6 pages). |
| Vaswani, Ashish et al., “Attention is all you need”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; arXiv:1706.03762v5 [cs.CL] Dec. 6, 2017, (15 pages). |
| Veličković, Petar et al., “Graph attention networks”, ICLR, arXiv:1710.10903v3 [stat.ML] Feb. 4, 2018, (12 pages). |
| Villani, Cédric , “Optimal transport: old and new”, vol. 338. Springer Science & Business Media, Jun. 2008, pp. 1-998. |
| Wang, Xiaolong et al., “Non-local neural networks”, CVPR, arXiv:1711.07971v3 [cs.CV] Apr. 13, 2018, (10 pages). |
| Wang, Yue et al., “Deep Closest Point: Learning representations for point cloud registration”, ICCV, arXiv:1905.03304v1 [cs.CV] May 8, 2019, (10 pages). |
| Wang, Yue et al., “Dynamic Graph CNN for learning on point clouds”, ACM Transactions on Graphics, arXiv:1801.07829v2 [cs.CV] Jun. 11, 2019, (13 pages). |
| Yi, Kwang M. et al., “Learning to find good correspondences”, CVPR, arXiv:1711.05971v2 [cs.CV] May 21, 2018, (13 pages). |
| Yi, Kwang Moo et al., “Lift: Learned invariant feature transform”, ECCV, arXiv:1603.09114v2 [cs.CV] Jul. 29, 2016, (16 pages). |
| Zaheer, Manzil et al., “Deep Sets”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; arXiv:1703.06114v3 [cs.LG] Apr. 14, 2018, (29 pages). |
| Zhang, Jiahui et al., “Learning two-view correspondences and geometry using order-aware network”, ICCV; arXiv:1908.04964v1 [cs.CV] Aug. 14, 2019, (11 pages). |
| Zhang, Li et al., “Dual graph convolutional net-work for semantic segmentation”, BMVC, 2019; arXiv:1909.06121v3 [cs.CV] Aug. 26, 2020, (18 pages). |
| Communication Pursuant to Article 94(3) EPC dated Oct. 21, 2021, European Patent Application No. 16207441.3, (4 pages). |
| Communication Pursuant to Rule 164(1) EPC dated Jul. 27, 2021, European Patent Application No. 19833664.6, (11 pages). |
| Extended European Search Report dated Jun. 30, 2021, European Patent Application No. 19811971.1, (9 pages). |
| Extended European Search Report dated Mar. 4, 2021, European Patent Application No. 19768418.6, (9 pages). |
| Extended European Search Report dated Jul. 16, 2021, European Patent Application No. 19810142.0, (14 pages). |
| Extended European Search Report dated Jul. 30, 2021, European Patent Application No. 19839970.1, (7 pages). |
| Extended European Search Report dated Oct. 27, 2021, European Patent Application No. 19833664.6, (10 pages). |
| Extended European Search Report dated Sep. 20, 2021, European Patent Application No. 19851373.1, (8 pages). |
| Extended European Search Report dated Sep. 28, 2021, European Patent Application No. 19845418.3, (13 pages). |
| Final Office Action dated Jun. 15, 2021, U.S. Appl. No. 16/928,313, (42 pages). |
| Final Office Action dated Mar. 1, 2021, U.S. Appl. No. 16/214,575, (29 pages). |
| Final Office Action dated Mar. 19, 2021, U.S. Appl. No. 16/530,776, (25 pages). |
| International Search Report and Written Opinion dated Feb. 2, 2021, International PCT Patent Application No. PCT/US20/60550, (9 pages). |
| Non Final Office Action dated Aug. 4, 2021, U.S. Appl. No. 16/864,721, (51 pages). |
| Non Final Office Action dated Jul. 9, 2021, U.S. Appl. No. 17/002,663, (43 pages). |
| Non Final Office Action dated Jul. 9, 2021, U.S. Appl. No. 16/833,093, (47 pages). |
| Non Final Office Action dated Jun. 29, 2021, U.S. Appl. No. 16/698,588, (58 pages). |
| Non Final Office Action dated Mar. 3, 2021, U.S. Appl. No. 16/427,337, (41 pages). |
| Non Final Office Action dated May 26, 2021, U.S. Appl. No. 16/214,575, (19 pages). |
| Non Final Office Action dated Sep. 20, 2021, U.S. Appl. No. 17/105,848, (56 pages). |
| Non Final Office Action dated Sep. 29, 2021, U.S. Appl. No. 16/748,193, (62 pages). |
| Altwaijry, et al., “Learning to Detect and Match Keypoints with Deep Architectures”, Proceedings of the British Machine Vision Conference (BMVC), BMVA Press, Sep. 2016, [retrieved on Jan. 8, 2021 (Jan. 8, 2021)] <URL: http://www.bmva.org/bmvc/2016/papers/paper049/index.html>, entire document, especially Abstract. |
| Giuseppe, Donato, et al., “Stereoscopic helmet mounted system for real time 3D environment reconstruction and indoor ego-motion estimation”, Proc. SPIE 6955, Head- and Helmet-Mounted Displays XIII: Design and Applications, 69550P, May 1, 2008. |
| Lee, et al., “Self-Attention Graph Pooling”, Cornell University Library/Computer Science/Machine Learning, Apr. 17, 2019 [retrieved on Jan. 8, 2021 from the Internet <URL: https://arxiv.org/abs/1904.08082>], entire document. |
| Libovicky, et al., “Input Combination Strategies for Multi-Source Transformer Decoder”, Proceedings of the Third Conference on Machine Translation (WMT), vol. 1: Research Papers, Belgium, Brussels, Oct. 31-Nov. 1, 2018; retrieved on Jan. 8, 2021 (Jan. 8, 2021) from <URL: https://doi.org/10.18653/v1/W18-64026>, entire document. |
| Molchanov, Pavlo, et al., “Short-range FMCW monopulse radar for hand-gesture sensing”, 2015 IEEE Radar Conference (RadarCon) (2015), pp. 1491-1496. |
| Sarlin, et al., “SuperGlue: Learning Feature Matching with Graph Neural Networks”, Cornell University Library/Computer Science/Computer Vision and Pattern Recognition, Nov. 26, 2019 [retrieved on Jan. 8, 2021 from the Internet <URL: https://arxiv.org/abs/1911.11763>], entire document. |
| Sheng, Liu, et al., “Time-multiplexed dual-focal plane head-mounted display with a liquid lens”, Optics Letters, Optical Society of America, US, vol. 34, No. 11, Jun. 1, 2009 (Jun. 1, 2009), XP001524475, ISSN: 0146-9592, pp. 1642-1644. |
| Communication according to Rule 164(1) EPC dated Feb. 23, 2022, European Patent Application No. 20753144.3, (11 pages). |
| Extended European Search Report dated Jan. 28, 2022, European Patent Application No. 19815876.8, (9 pages). |
| Final Office Action dated Feb. 23, 2022, U.S. Appl. No. 16/748,193, (23 pages). |
| Final Office Action dated Feb. 3, 2022, U.S. Appl. No. 16/864,721, (36 pages). |
| Non Final Office Action dated Feb. 2, 2022, U.S. Appl. No. 16/783,866, (8 pages). |
| Communication Pursuant to Article 94(3) EPC dated Jan. 4, 2022, European Patent Application No. 20154070.5, (8 pages). |
| Extended European Search Report dated Jan. 4, 2022, European Patent Application No. 19815085.6, (9 pages). |
| International Search Report and Written Opinion dated Feb. 12, 2021, International Application No. PCT/US20/60555, (25 pages). |
| “Multi-core processor”, TechTarget, 2013, (1 page). |
| Mrad, et al., “A framework for System Level Low Power Design Space Exploration”, 1991. |
| “Communication Pursuant to Article 94(3) EPC dated Apr. 25, 2022”, European Patent Application No. 18885707.2, (5 pages). |
| “Extended European Search Report dated Mar. 22, 2022”, European Patent Application No. 19843487.0, (14 pages). |
| “First Office Action dated Mar. 14, 2022 with English translation”, Chinese Patent Application No. 201880079474.6, (11 pages). |
| “Non Final Office Action dated Apr. 1, 2022”, U.S. Appl. No. 17/256,961, (65 pages). |
| “Non Final Office Action dated Apr. 12, 2022”, U.S. Appl. No. 17/262,991, (60 pages). |
| “Non Final Office Action dated Mar. 31, 2022”, U.S. Appl. No. 17/257,814, (60 pages). |
| “Non Final Office Action dated Mar. 9, 2022”, U.S. Appl. No. 16/870,676, (57 pages). |
| “Non Final Office Action dated May 10, 2022”, U.S. Appl. No. 17/140,921, (25 pages). |
| “Extended European Search Report dated Aug. 24, 2022”, European Patent Application No. 20846338.0, (13 pages). |
| “Extended European Search Report dated Aug. 8, 2022”, European Patent Application No. 19898874.3, (8 pages). |
| “Extended European Search Report dated Sep. 8, 2022”, European Patent Application No. 20798769.4, (13 pages). |
| “First Examination Report dated Jul. 27, 2022”, Chinese Patent Application No. 201980036675.2, (5 pages). |
| “First Examination Report dated Jul. 28, 2022”, Indian Patent Application No. 202047024232, (6 pages). |
| “FS_XR5G: Permanent document, v0.4.0”, Qualcomm Incorporated, 3GPP TSG-SA 4 Meeting 103, retrieved from the Internet: URL:http://www.3gpp.org/ftp/Meetings%5F3GPP%5FSYNC/SA4/Docs/S4%2DI90526%2Ezip [retrieved on Apr. 12, 2019], Apr. 12, 2019, (98 pages). |
| “Non Final Office Action dated Sep. 19, 2022”, U.S. Appl. No. 17/263,001, (14 pages). |
| “Second Office Action dated Jul. 13, 2022 with English Translation”, Chinese Patent Application No. 201880079474.6, (10 pages). |
| Anonymous, “Koi Pond: Top iPhone App Store Paid App”, https://web.archive.org/web/20080904061233/https://www.iphoneincanada.ca/reviews/koi-pond-top-iphone-app-store-paid-app/ [retrieved on Aug. 9, 2022], (2 pages). |
| Chittineni, C., et al., “Single filters for combined image geometric manipulation and enhancement”, Proceedings of SPIE vol. 1903, Image and Video Processing, Apr. 8, 1993, San Jose, CA. (Year: 1993), pp. 111-121. |
| “Communication Pursuant to Article 94(3) EPC dated May 30, 2022”, European Patent Application No. 19768418.6, (6 pages). |
| “Extended European Search Report dated May 16, 2022”, European Patent Application No. 19871001.4, (9 pages). |
| “Extended European Search Report dated May 30, 2022”, European Patent Application No. 20753144.3, (10 pages). |
| “Final Office Action dated Jul. 13, 2022”, U.S. Appl. No. 17/262,991, (18 pages). |
| “First Examination Report dated May 13, 2022”, Indian Patent Application No. 202047026359, (8 pages). |
| “Non Final Office Action dated Jul. 26, 2022”, U.S. Appl. No. 17/098,059, (28 pages). |
| “Non Final Office Action dated May 17, 2022”, U.S. Appl. No. 16/748,193, (11 pages). |
| Number | Date | Country |
|---|---|---|
| 20210097286 A1 | Apr 2021 | US |
| Number | Date | Country |
|---|---|---|
| 62899678 | Sep 2019 | US |
| 62881355 | Jul 2019 | US |
| 62879408 | Jul 2019 | US |