Reset modeling based on reset and object properties

Information

  • Patent Grant
  • Patent Number
    12,211,161
  • Date Filed
    Friday, June 24, 2022
  • Date Issued
    Tuesday, January 28, 2025
Abstract
A modeling system receives, in association with generating a three-dimensional (3D) virtual reset, a selection of a 3D virtual object. The modeling system presents, at a user interface, the 3D virtual object in the 3D virtual reset at a first position. The modeling system receives, via the user interface, an edit to the 3D virtual object in the 3D virtual reset. The modeling system updates the presentation of the 3D virtual reset by showing the edit. The modeling system stores the 3D virtual reset by including, in the 3D virtual reset, information about the 3D virtual object and information about the edit.
Description
TECHNICAL FIELD

This disclosure generally relates to three-dimensional (3D) modeling in support of virtual and/or augmented reality applications. More specifically, but not by way of limitation, this disclosure relates to 3D modeling of objects and arrangements of such objects for virtual and/or augmented reality applications.


BACKGROUND

Modeling objects for display in computer-based simulated environments (e.g., virtual reality environments and/or augmented reality environments) can be useful for applications in the physical world. For example, virtual models of physical resets (e.g., shelves including stacked or otherwise arranged objects) can be displayed in a virtual reality environment and/or an augmented reality environment to help the viewer assemble the physical resets in a physical environment.


However, conventional virtual modeling systems for creating virtual objects are typically complex and time consuming, rely on special equipment, and may not result in accurate, real-world-like virtual objects. For instance, a user may have difficulty identifying a physical object corresponding to a conventionally-generated virtual model when such a model does not provide an adequate visual representation of the physical object. The conventionally-generated virtual model may also not provide physical-object-specific properties to aid with the identification. Further, conventional virtual modeling systems permit generation of a virtual model that may not be feasible to assemble or arrange in the physical world. In some instances, it may be physically impossible to assemble a physical reset according to a virtual model because of weight or dimensional limitations. For example, a shelf in a physical reset may not support the weight of items that the virtual model prescribes to be placed on top of the shelf.


SUMMARY

The present disclosure describes techniques for generating, by a virtual modeling system, virtual models of real-world objects and a virtual reset including an arrangement of such virtual objects.


In certain embodiments, the modeling system receives, via a user interface of a device, a selection of a three-dimensional shape that includes a plurality of faces. The modeling system generates a virtual object by: receiving an image showing a portion of a real-world object, determining an area of the image that corresponds to the portion of the real-world object, associating, in the virtual object, the area of the image with a face of the plurality of faces of the three-dimensional shape, and associating, in the virtual object or in metadata of the virtual object, properties of the three-dimensional shape with the virtual object. The virtual modeling system presents the virtual object in the user interface by at least showing the area of the image superimposed on the face and showing the properties.


In certain embodiments, the modeling system receives, in association with generating a three-dimensional (3D) virtual reset, a selection of a 3D virtual object. The modeling system presents, at a user interface, the 3D virtual object in the 3D virtual reset at a first position. The modeling system receives, via the user interface, an edit to the 3D virtual object in the 3D virtual reset. The modeling system updates the presentation of the 3D virtual reset by showing the edit. The modeling system stores the 3D virtual reset by including, in the 3D virtual reset, information about the 3D virtual object and information about the edit.


In certain embodiments, the modeling system receives, via a user interface of a device, a selection of a three-dimensional shape that includes a plurality of faces. The modeling system generates a virtual object by receiving an image showing a portion of a real-world object, determining an area of the image that corresponds to the portion of the real-world object, associating, in the virtual object, the area of the image with a face of the plurality of faces of the three-dimensional shape, and associating, in the virtual object or in metadata of the virtual object, properties of the three-dimensional shape with the virtual object. The modeling system presents the virtual object in the user interface by at least showing the area of the image superimposed on the face and showing the properties.


Various embodiments are described herein, including methods, systems, non-transitory computer-readable storage media storing programs, code, or instructions executable by one or more processors, and the like. These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there.





BRIEF DESCRIPTION OF THE DRAWINGS

Features, embodiments, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings.



FIG. 1 depicts an example of a computing environment for generating, by a modeling system, virtual objects and virtual resets including such virtual objects in support of a computer-based simulated environment, according to certain embodiments disclosed herein.



FIG. 2 depicts an example of a computing environment for generating, by a modeling system, a virtual reset including virtual objects which accurately model corresponding physical objects of a physical reset, according to certain embodiments disclosed herein.



FIG. 3 depicts an example of a method for generating a virtual object, according to certain embodiments disclosed herein.



FIG. 4A depicts an illustration of a user interface for generating a virtual object, including requesting a new virtual object, according to certain embodiments disclosed herein.



FIG. 4B depicts an example user interface for receiving properties information defining the virtual object requested in FIG. 4A, including a selection of a shape selected from a set of shapes, according to certain embodiments disclosed herein.



FIG. 4C depicts an illustration of a user interface for receiving properties information defining the virtual object requested in FIG. 4A, including displaying an interface object for selecting a face upon which to impose a facial image, according to certain embodiments disclosed herein.



FIG. 4D depicts an illustration of a user interface for generating a virtual object, including a facial image imposed to a face selected via the user interface of FIG. 4C and resizing objects that are selectable to resize an area of the facial image, according to certain embodiments disclosed herein.



FIG. 4E depicts an illustration of a user interface for generating a virtual object, including a display of a virtual object, according to certain embodiments disclosed herein.



FIG. 5 depicts an illustration of a method for generating a virtual reset, according to certain embodiments disclosed herein.



FIG. 6 depicts an illustration of a user interface tool for applying edits to a virtual reset which can be used with the method of FIG. 5, according to certain embodiments disclosed herein.



FIG. 7 depicts a method for rendering a virtual reset in an augmented reality scene, according to certain embodiments disclosed herein.



FIG. 8A depicts an illustration of a user interface for instructing a computing system to display a virtual reset within an augmented reality scene, according to certain embodiments disclosed herein.



FIG. 8B depicts an illustration of a user interface for viewing the display of the virtual reset of FIG. 8A within an augmented reality scene, according to certain embodiments disclosed herein.



FIG. 9 depicts an example of a computing system that performs certain operations described herein, according to certain embodiments described in the present disclosure.



FIG. 10 depicts an example of a cloud computing system that performs certain operations described herein, according to certain embodiments described in the present disclosure.





DETAILED DESCRIPTION

In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The words “exemplary” or “example” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” or “example” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.


With reference to the embodiments described herein, a computing environment may include a modeling system, which can include a number of computing devices, modeling applications, and a data store. The modeling system may be configured to generate, responsive to inputs received via a user interface, virtual objects corresponding to real-world objects. The modeling system may also be configured to generate a virtual reset, which is a virtual space including an arrangement of virtual objects. The virtual reset can be presented in a computer-based simulated environment, such as in a virtual reality environment and/or an augmented reality environment.


The following non-limiting example is provided to introduce certain embodiments. In this example, a modeling system provides a user interface for creation of virtual objects, creation and editing of virtual resets, and presentation of virtual resets in a computer-based simulated environment. The modeling system can receive, via the user interface, a request to create a virtual object that corresponds to a real-world object. In response, the modeling system can present, via the user interface, a set of three-dimensional (3D) shapes and receive a selection of a shape from the set of shapes for the virtual object. Because the shape is three dimensional, the shape can have multiple faces. The modeling system can also receive, via the user interface, a set of properties of the real-world object to be applied to the virtual object. Properties could include a name, an identifier, a weight, dimensions, a quantity, a price, and/or any other property that can describe an attribute of the real-world object. For example, if the user desires to create a virtual object that models a physical, boxed product, the user can select a ‘cuboid’ shape and input dimensions corresponding to the dimensions of the physical, boxed product. The modeling system can also request and receive, for each face of the 3D shape, an image that shows a corresponding portion of the real-world object. For example, images of the different sides of the physical, boxed product are generated via a camera and provided to the modeling system that then associates each of these images with the corresponding face of the 3D shape. The modeling system can generate the virtual object based on the 3D shape, the images, and the properties and store the virtual object in a data store.
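As a non-limiting illustration of the example above, the following Python sketch shows one possible way a virtual object combining a selected 3D shape, per-face images, and real-world properties might be represented in software. The class, field names, and values are hypothetical and are not part of the disclosed system.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    # Hypothetical face identifiers for a cuboid shape.
    CUBOID_FACES = ("front", "back", "left", "right", "top", "bottom")

    @dataclass
    class VirtualObject:
        """Minimal stand-in for a virtual object built from a 3D shape,
        per-face images, and real-world properties."""
        name: str
        shape: str                             # e.g., "cuboid"
        dimensions_m: Tuple[float, float, float]
        weight_kg: float
        properties: Dict[str, str] = field(default_factory=dict)   # price, identifier, ...
        face_images: Dict[str, str] = field(default_factory=dict)  # face name -> image path

        def set_face_image(self, face: str, image_path: str) -> None:
            if self.shape == "cuboid" and face not in CUBOID_FACES:
                raise ValueError(f"unknown face: {face}")
            self.face_images[face] = image_path

    # Example: a boxed product modeled as a cuboid.
    boxed_product = VirtualObject(
        name="Boxed product",
        shape="cuboid",
        dimensions_m=(1.0, 1.5, 0.5),
        weight_kg=20.0,
        properties={"identifier": "SKU-0001", "price": "35.00"},
    )
    boxed_product.set_face_image("front", "images/front.jpg")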


Subsequently, the modeling system can receive, via a user interface, a request to generate a virtual reset and a selection of one or more virtual objects stored in the data store. The modeling system can present the virtual objects in the virtual reset, allow movement of the virtual objects within the reset (e.g., to change their positions) responsive to inputs received via the user interface, and prohibit positioning or a change to a position based on properties of the virtual objects. For example, the generated virtual object (the boxed product), which is associated with a property of a weight of 20 kilograms, cannot be moved on top of a second virtual object (e.g., a virtual shelf) associated with a property of a weight capacity of 15 kilograms. Accordingly, the modeling system can constrain the generation and editing of virtual resets so that virtual resets generated via the modeling system are physically possible to implement. The modeling system can present, in a virtual and/or augmented reality scene of the user interface, the virtual reset at a location in the virtual and/or augmented reality scene corresponding to a desired physical location of a physical reset modeled by the virtual reset. The modeling system can also present, in the user interface, properties associated with a particular virtual object of the virtual reset responsive to detecting a selection of the virtual object.
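A check of the kind described above could be implemented as a validation step that runs before an edit is applied. The Python sketch below is illustrative only; the function and parameter names are assumptions rather than details of the disclosed system.

    def can_place_on(object_weight_kg: float, support_capacity_kg: float,
                     already_loaded_kg: float = 0.0) -> bool:
        """Allow the placement only if the support can bear its existing load
        plus the weight of the newly placed object."""
        return already_loaded_kg + object_weight_kg <= support_capacity_kg

    # The 20-kilogram boxed product cannot be placed on a shelf rated for 15 kilograms.
    assert can_place_on(20.0, 15.0) is False
    # A 10-kilogram object can be placed on the same shelf if 4 kilograms are already loaded.
    assert can_place_on(10.0, 15.0, already_loaded_kg=4.0) is True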


The virtual reset can also be stored in the data store (or in another data store). During an augmented reality session, the information about the virtual reset can be retrieved from the data store and used. In particular, the virtual reset can be shown superimposed at the corresponding location in the physical environment.


Generation of virtual objects and resets using the modeling system, as described herein, provides several improvements and benefits over conventional techniques. For example, embodiments of the present disclosure provide a modeling system that enables accurate 3D modeling of real-world objects without the need for specialized equipment. Such virtual models can be arranged to create virtual resets. Certain embodiments described herein address the limitations of conventional modeling systems by constraining editing operations within a user interface so that the generated virtual resets conform to physical constraints of corresponding physical resets. The arrangement of virtual objects within a virtual reset can therefore be properly replicated in the physical world. For instance, the modeling system described herein may only allow a set of virtual objects to be stacked on a shelf of the virtual reset if a combined weight of the corresponding real-world objects is less than a load capacity of the corresponding real-world shelf. In another example, the modeling system described herein may allow a virtual object to be placed under a shelf of the virtual reset only if a clearance height under the shelf is greater than or equal to a height of the virtual object. Also, the modeling system described herein enables association of properties information (e.g., height, weight, identifier, name) with virtual objects during generation of the virtual objects, which conventional systems do not provide, thereby enabling the presentation of object-level properties information during the presentation of the virtual reset in an augmented and/or virtual reality scene.
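For illustration, the two constraints named above (combined weight versus load capacity, and object height versus clearance height) might be expressed as small predicate functions such as the hypothetical Python sketch below, with the modeling system refusing any edit for which either predicate returns False. The names and thresholds are assumptions for illustration.

    from typing import Iterable

    def combined_weight_ok(item_weights_kg: Iterable[float], shelf_capacity_kg: float) -> bool:
        """Items may be stacked on a shelf only if their combined weight is less
        than the shelf's load capacity."""
        return sum(item_weights_kg) < shelf_capacity_kg

    def fits_under_shelf(object_height_m: float, clearance_height_m: float) -> bool:
        """An object may be placed under a shelf only if the clearance under the
        shelf is greater than or equal to the object's height."""
        return clearance_height_m >= object_height_m

    # Three 4-kilogram boxes may be stacked on a shelf rated for 15 kilograms.
    assert combined_weight_ok([4.0, 4.0, 4.0], 15.0) is True
    # A 0.7-meter-tall object cannot be placed under a shelf with 0.6 meters of clearance.
    assert fits_under_shelf(0.7, 0.6) is False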


As used herein, the terms “real-world object,” “physical object,” and “physical product” are used synonymously and refer to a tangible object that exists in the real world. This object, in some embodiments, can be a product, a decoration, a support structure (e.g., a shelf, a rack, a stand, etc.), an object attached to another object or support structure (e.g., signage), or any other tangible object.


As used herein, the term “physical reset” refers to an assembly or other arrangement of physical products or other physical objects. For example, a physical reset can be a set of shelves with physical products arranged thereon at a physical location (e.g., at a store).


As used herein, the terms “virtual object” and “three-dimensional (3D) virtual object” refer to a virtual model or a 3D model of a physical object. In certain embodiments, a set of virtual objects can be used to generate a virtual reset within a virtual space.


As used herein, the term “virtual object properties” refers to properties assigned to a virtual object based on properties of the corresponding physical object.


As used herein, the term “virtual shape” refers to a particular property of a virtual object and can be a 3D shape. The virtual shape can be any of a number of predefined shapes, including a cube, a rectangular prism (e.g., a cuboid), a sphere, a cylinder, a triangular prism, and a pyramid. The virtual shape can be selected to model a shape of the corresponding physical object. For example, a user may select a rectangular prism virtual shape to model a boxed physical product.


As used herein, the term “facial image” refers to a property of a virtual object and includes an image to associate with a face of the virtual shape that forms the virtual object. In certain embodiments herein, a user captures, via a camera device, a facial image of each side of the corresponding real-world object and a modeling system imposes or otherwise associates, in the virtual object, each of the facial images with the corresponding face of the virtual shape. In other examples, instead of capturing a facial image via a camera device, a stored image can be used.


As used herein, the term “virtual reset” refers to an arranged set of virtual objects within a virtual space. In some instances, a virtual reset model is a virtual model of a physical reset. A user can construct a virtual reset by selecting one or more virtual objects via a user interface and moving, rotating, stacking, or otherwise manipulating the virtual objects within the virtual space until the virtual reset is constructed. For example, the virtual reset can include a virtual object that models a structural support (e.g., a virtual shelf) with one or more virtual objects representing products (e.g., boxed products) stacked or otherwise arranged thereon.


As used herein, the term “virtual space” or “3D virtual space” refers to a space within which virtual objects can be placed to construct a virtual reset. In some instances, the virtual space can model a corresponding physical space.


As used herein, the term “augmented reality scene” or “virtual reality scene” refers to a scene of a real-world environment in which a virtual reset is overlaid. In certain embodiments, the virtual reset is presented, within the augmented and/or virtual reality scene, at a location that corresponds to a location of a corresponding physical reset to be assembled.


Referring now to the drawings, FIG. 1 depicts an example of a computing environment 100 for generating, by a modeling system 130, virtual objects and virtual resets including such virtual objects in support of a computer-based simulated environment, in accordance with certain embodiments described herein. The modeling system 130 can include one or more processing devices that execute one or more modeling applications. In certain embodiments, the modeling system 130 includes a network server and/or one or more computing devices communicatively coupled via a network 120. The modeling system 130 may be implemented using software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores), hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The computing environment 100 depicted in FIG. 1 is merely an example and is not intended to unduly limit the scope of claimed embodiments. Based on the present disclosure, one of ordinary skill in the art would recognize many possible variations, alternatives, and modifications. In some instances, the modeling system 130 provides a service that enables generation of virtual objects based on physical objects, generation and editing of virtual resets, and display of virtual resets in an augmented and/or virtual reality environment for users, for example, including a user associated with a user computing device 110.


In certain embodiments, the modeling system 130 includes a central computer system 136, which supports a plurality of applications, including a virtual object modeling application 131, a reset object modeling application 133, and an augmented and/or virtual reality application 135. The virtual object modeling application 131 is an application that enables users to generate virtual objects. The reset object modeling application 133 is an application that enables users to generate virtual resets that include arrangements of virtual objects. The augmented and/or virtual reality application 135 is an application that enables a presentation of virtual resets in an augmented and/or virtual reality scene. The plurality of applications, including the virtual object modeling application 131, the reset object modeling application 133, and the augmented and/or virtual reality application 135, may be accessed by and executed on a user computing device 110 associated with a user of one or more services of the modeling system 130. For example, the user accesses one or more of the applications 131, 133, and 135 via a web browser application of the user computing device 110. In other examples, one or more of the applications 131, 133, and 135 is provided by the modeling system 130 for download on the user computing device 110. In some examples, a single application which supports each of the applications 131, 133, and 135 is provided for access by (and execution via) the user computing device 110 or is provided for download by the user computing device 110. As depicted in FIG. 1, the user computing device 110 communicates with the central computer system 136 via the network 120. Although a single computing device 110 is illustrated in FIG. 1, each of the applications can be provided to a different computing device.


In certain embodiments, the modeling system 130 comprises a data repository 137. The data repository 137 could include a local or remote data store accessible to the central computer system 136. In some instances, the data repository 137 is configured to store virtual objects and associated properties generated via the virtual object modeling application in a virtual object creation process. In some instances, the data repository 137 is configured to store virtual resets, which define an arrangement of virtual objects arranged in a virtual space, generated via the reset object modeling application 133. In some instances, the data repository 137 is configured to provide virtual objects and/or virtual resets in support of augmented reality scenes generated via the augmented reality application 135. The user computing device 110 also communicates with the data repository 137 via the network 120.


As depicted in FIG. 1, in some examples, the user computing device 110 executes the applications 131, 133, 135 in an order indicated by the timeline depicted in FIG. 1. For example, a user conducts a virtual object creation process using the virtual object modeling application 131 executed on or otherwise accessed via the user computing device 110. After conducting the virtual object creation process, the user conducts a reset creation process using the reset object modeling application 133 executed on or otherwise accessed via the user computing device 110. After conducting the reset creation process, the user initiates an augmented and/or virtual reality session using the augmented and/or virtual reality application 135 executed on or otherwise accessed via the user computing device 110. As explained herein above, a different computing device can be (but does not necessarily need to be) used for some or each of the virtual object creation process, the reset creation process, and the augmented reality session. Additionally, a different user can initiate each application.



FIG. 2 depicts an example of a computing environment 100 for generating, by a modeling system, a virtual reset including virtual objects which accurately model corresponding physical objects of a physical reset, in accordance with certain embodiments described herein.


The computing environment 100 of FIG. 2 provides further details concerning the computing environment 100 of FIG. 1. Elements that are found in FIG. 1 are further described in FIG. 2 and referred to using the same element numbers.


The computing environment 100 includes the modeling system 130. The modeling system 130, in certain embodiments, includes a virtual object generator subsystem 231, a reset modeling subsystem 233, and an augmented reality (AR) and/or virtual reality (VR) reset rendering subsystem 235.


In certain embodiments, the virtual object generator subsystem 231 is configured to generate, store, and/or render virtual objects 201. In certain examples, the virtual object generator subsystem 231 communicates, via the network 120, with the computing device 110 upon an execution of a modeling application 212 on the computing device 110. The modeling application 212 can include the virtual object modeling application 131. As such, the virtual object generator subsystem 231 can receive, from the computing device 110, a selection of a virtual shape for a virtual object 201, properties 202 for the virtual object 201, and facial images for faces of the virtual shape. The virtual object generator subsystem 231 can generate the virtual object 201 based on the selected virtual shape, the properties 202, and facial images and store the virtual object 201 in a data repository 137. The virtual object generator subsystem 231 can associate, in the data repository 137, the virtual object 201 with its associated shape, facial images, and other properties 202. Additional details about generating a virtual object 201 are provided below, and example illustrations of user interface 211 interactions to generate a virtual object 201 are provided below with respect to FIGS. 4A, 4B, 4C, 4D, and 4E.
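As one possible, purely illustrative realization of this flow, the Python sketch below bundles a selected shape, its properties 202, and its per-face images into a single record and persists it. The file-based layout, function name, and field names are assumptions rather than the subsystem's actual implementation.

    import json
    import uuid
    from pathlib import Path

    def generate_virtual_object(shape: str, properties: dict, face_images: dict,
                                repository_dir: str = "virtual_objects") -> str:
        """Assemble a virtual object record from a shape selection, properties,
        and per-face images, persist it, and return the new object's identifier."""
        object_id = str(uuid.uuid4())
        record = {
            "id": object_id,
            "shape": shape,              # e.g., "rectangular_prism"
            "properties": properties,    # weight, dimensions, price, ...
            "face_images": face_images,  # face name -> image file path
        }
        repo = Path(repository_dir)
        repo.mkdir(parents=True, exist_ok=True)
        (repo / f"{object_id}.json").write_text(json.dumps(record, indent=2))
        return object_id

    # Example call with hypothetical values.
    object_id = generate_virtual_object(
        shape="rectangular_prism",
        properties={"name": "Boxed product", "weight_kg": 20.0,
                    "dimensions_m": [1.0, 1.5, 0.5]},
        face_images={"front": "images/front.jpg", "back": "images/back.jpg"},
    )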


In certain embodiments, the reset modeling subsystem 233 is configured to generate, store, and/or render virtual resets 203. In certain examples, the reset modeling subsystem 233 communicates, via the network 120, with the computing device 110 upon the execution of the modeling application 212. The modeling application 212 can include the reset object modeling application 133. As such, the reset modeling subsystem 233 can receive, from the computing device 110, a selection of virtual objects 201 and an arrangement of the virtual objects 201 with respect to other virtual objects in a virtual space. The reset modeling subsystem 233 can generate the virtual reset 203 that defines the arrangement of the virtual objects 201 within the virtual space. In some instances, the reset modeling subsystem 233 can store the virtual reset 203 in the data repository 137, including an identity of each virtual object 201 in the virtual reset 203 and a position of each virtual object 201 within the virtual space. Additional details about generating and/or editing a virtual reset 203 are provided below with respect to FIG. 5, and example illustrations of user interface 211 interactions to generate a virtual reset 203 are provided below with respect to FIG. 6.
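The stored form of a virtual reset, as described above, essentially records the identity of each virtual object plus where it sits in the virtual space. A minimal sketch of such a record, with hypothetical class and field names, might look like the following Python:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Placement:
        """One virtual object placed within the reset's virtual space."""
        object_id: str                               # identity of the stored virtual object
        position_m: Tuple[float, float, float]       # x, y, z within the virtual space
        rotation_deg: Tuple[float, float, float] = (0.0, 0.0, 0.0)

    @dataclass
    class VirtualReset:
        """An arrangement of virtual objects, stored as object identities plus
        their positions (and rotations) in the virtual space."""
        reset_id: str
        placements: List[Placement] = field(default_factory=list)

        def place(self, object_id: str, position_m, rotation_deg=(0.0, 0.0, 0.0)) -> None:
            self.placements.append(Placement(object_id, tuple(position_m), tuple(rotation_deg)))

    # Example: a shelf with a boxed product stacked on top of it.
    reset = VirtualReset(reset_id="reset-203-1")
    reset.place("shelf-201-2", (0.0, 0.0, 0.0))
    reset.place("box-201-1", (0.0, 0.9, 0.0))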


In certain embodiments, the AR and/or VR reset rendering subsystem 235 is configured to present a selected virtual reset 203 within an AR and/or VR scene 215. In some embodiments, the AR and/or VR reset rendering subsystem 235 is configured to communicate the AR and/or VR scene 215 to the user computing device 110 for presentation via the user interface 211. Additional details about rendering a virtual reset 203 are provided below with respect to FIG. 7, and example illustrations of user interface 211 interactions to render an AR and/or VR scene 215 including a virtual reset 203 are provided below with respect to FIG. 8A and FIG. 8B.


In certain embodiments, the various subsystems (e.g., the virtual object generator subsystem 231, the reset modeling subsystem 233, the AR and/or VR reset rendering subsystem 235) of the modeling system 130 can be implemented as one or more of program code, program code executed by processing hardware (e.g., a programmable logic array, a field-programmable gate array, etc.), firmware, or some combination thereof.


In certain embodiments, one or more processes described herein as being performed by the modeling system 130, or by one or more of the subsystems 231, 233, or 235 thereof, can be performed by the user computing device 110, for example, by the modeling application 212. Accordingly, in certain embodiments, the user computing device 110 can generate a virtual object 201 by performing one or more steps of the method of FIG. 3, can construct and/or modify a virtual reset 203 by performing one or more steps of the method of FIG. 5, and/or can render a virtual reset 203 in an AR and/or VR scene 215 via the user interface 211 by performing one or more steps of the method of FIG. 7 without having to communicate with the modeling system 130 via the network 120.


In certain embodiments, the data repository 137 could include a local or remote data store accessible to the modeling system 130. In some instances, the data repository 137 is configured to store virtual objects 201 and associated properties 202. In some instances, the data repository 137 is configured to store virtual resets 203, which define an arrangement of virtual objects 201 (and associated properties 202) within a virtual space.


The user computing device 110, in certain embodiments, includes a user interface 211, a modeling application 212, a camera device 213, and a data storage unit 214. An operator of the user computing device 110 may be a user of the modeling system 130.


The operator may download the modeling application 212 to the user computing device 110 via a network 120 and/or may start an application session with the modeling system 130. In some instances, the modeling system 130 may provide the modeling application 212 for download via the network 120, for example, directly via a website of the modeling system 130 or via a third-party system (e.g., a service system that provides applications for download).


The user interface 211 enables the user of the user computing device 110 to interact with the modeling application 212 and/or the modeling system 130. The user interface 211 could be provided on a display device (e.g., a display monitor), a touchscreen interface, or other user interface that can present one or more outputs of the modeling application 212 and/or modeling system 130 and receive one or more inputs of the user of the user computing device 110. The user interface 211 can include an augmented reality view which can present virtual resets 203 within an augmented reality (AR) and/or virtual reality (VR) scene 215 such that the virtual reset 203 appears to be displayed within a physical environment of a user when viewed by the user through the user interface 211 in the augmented reality view. In some embodiments, the user interface 211 can include a virtual reality view which can present virtual resets 203 within a virtual reality (VR) scene such that the virtual resets 203 appear to be displayed within the virtual reality scene and wherein the virtual reality scene represents a physical environment (e.g., a retail store) where physical counterparts of the virtual resets 203 can be physically located.


The modeling application 212 of the user computing device 110, in certain embodiments, is configured to provide, via the user interface 211, an interface for generating and editing virtual objects 201 and virtual resets 203 and for presenting AR and/or VR scenes. The modeling application 212 can include one of, a combination of, or all of the applications 131, 133, and 135.


The camera device 213 can capture one or more facial images of a physical object 201X to be associated with faces of a virtual shape selected for constructing a virtual object 201 that represents the physical object 201X. The camera device 213 is either a component of the user computing device 110 or otherwise is communicatively coupled to the user computing device 110. A camera application of the camera device 213, in some instances, exchanges data (e.g., image data) with the modeling application 212.


In certain embodiments, the data storage unit 214 could include a local or remote data store accessible to the user computing device 110. In some instances, the data storage unit 214 is configured to store virtual objects 201 and associated properties 202. In some instances, the data storage unit 214 is configured to store virtual resets 203, which define an arrangement of virtual objects 201 (and associated properties 202) within a virtual space.


In an example depicted in FIG. 2, the user can use the user interface 211 to generate a new virtual object 201-1. For example, the user, via the user computing device 110, accesses or otherwise executes (e.g., via the modeling application 212) the virtual object modeling application 131 to generate the new virtual object 201-1 for a physical object. For example, the user computing device 110 receives, via the user interface 211, properties 202-1 to define a virtual object 201-1. The properties 202-1 include a selection of a shape (e.g., a rectangular prism) for the virtual object 201-1 as well as a definition of other properties 202-1 (e.g., a weight, dimensions, etc.) for the virtual object 201-1. The properties 202-1 further include one or more facial images showing respective one or more sides of the physical object 201X, where each side corresponds to a face of the selected shape. The user may define properties 202-1 via one or more interactions with the user interface 211. For example, the user may select the shape via a drop-down menu or other user interface 211 object that enables the user to select the shape (e.g., the rectangular prism) from among a set of possible shapes (e.g., rectangular prism, cube, sphere, cylinder, pyramid, etc.). In an example, the user may enter other information defining the properties 202-1 for the shape via one or more fields, menus, or other user interface 211 objects. For example, the user may enter a weight, a price, an identifier, one or more dimensions, a name, a quantity, or other properties 202-1 via the user interface 211. In an example, the user may select one or more user interface 211 objects to select a face of the selected shape for the virtual object 201-1 and upload or otherwise capture, for the selected face of the virtual object 201-1, a facial image of a corresponding side of the physical object 201X. The modeling system 130 receives, from the user computing device 110 (e.g., from the modeling application 212) via the network 120, the properties 202-1 that define the virtual object 201-1 and generates the virtual object 201-1 based on the properties 202-1. For example, the virtual object generator subsystem 231 generates a virtual object 201-1 as a rectangular prism having dimensions of 1×1.5×0.5 meters specified by the properties 202-1 and associates other properties 202-1 with the rectangular prism, including each facial image to impose or otherwise associate with each face of the rectangular prism as well as data such as a weight, an item identifier, a name, or other information included in the properties 202-1. The virtual object generator subsystem 231 may store the virtual object 201-1 in the data repository 137 of the modeling system 130 and/or, as depicted in FIG. 2, in the data storage unit 214 of the user computing device 110. One or more steps for generating the virtual object 201-1 in this example described as being performed by the modeling system 130 (or a subsystem thereof) can instead be performed, in certain embodiments, by the computing device 110. In an additional embodiment, not illustrated, one or more of the properties 202 of a virtual object 201 may be derived from the facial images of a physical object. For example, the images of the physical object may also incorporate a scaling object in the frame, such as a ruler, allowing dimensions of the physical object to be attributed to the virtual object 201 without the user having to manually enter the dimensions in the user interface 211.
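The ruler-based derivation of dimensions mentioned at the end of this paragraph amounts to a simple pixels-to-meters conversion. The following Python sketch is illustrative only and assumes the scaling object and the measured side of the physical object lie roughly in the same plane at the same distance from the camera; all names and values are hypothetical.

    def estimate_dimension_m(object_length_px: float, ruler_length_px: float,
                             ruler_length_m: float = 0.30) -> float:
        """Convert a length measured in image pixels to meters, using a scaling
        object of known size (here, a 30-centimeter ruler) visible in the frame."""
        meters_per_pixel = ruler_length_m / ruler_length_px
        return object_length_px * meters_per_pixel

    # If the box spans 800 pixels and the 30-centimeter ruler spans 240 pixels,
    # the box is roughly 1.0 meter wide.
    width_m = estimate_dimension_m(object_length_px=800, ruler_length_px=240)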


As depicted in FIG. 2, a user can use an example user interface 211-1 to generate a new virtual reset 203-1 and/or edit an existing virtual reset 203-1. For example, the user, via the user computing device 110, accesses or otherwise executes (e.g., via the modeling application 212) the reset object modeling application 133 to generate the new virtual reset 203-1 and/or edit the existing virtual reset 203-1. For example, the reset modeling subsystem 233 may generate the user interface 211-1, which enables construction or editing of virtual resets 203-1. The reset modeling subsystem 233 may receive, in the user interface 211-1, a request to generate a new reset 203-1 and a selection of a stored virtual object 201-1 and a stored virtual object 201-2. In another example, the user computing device 110 may receive, in the user interface 211-1, a request to access a stored virtual reset 203-1, which includes an arrangement of virtual object 201-1 and virtual object 201-2. The user may arrange and/or rearrange a position, a rotation, or other spatial feature of the virtual objects 201-1 and 201-2 within the virtual space of the virtual reset 203-1 until a desired arrangement of the virtual objects 201-1 and 201-2 is achieved. In certain examples, the reset modeling subsystem 233 moves and/or otherwise rearranges the virtual objects 201 within the virtual reset 203-1 responsive to inputs received at the user interface 211-1. The virtual reset 203-1 shown in FIG. 2 is an example and includes only virtual objects 201-1 and 201-2; however, a virtual reset can include any number of virtual objects 201. In the example depicted in FIG. 2, the virtual object 201-2 could represent a structural support object (e.g., a shelf) and the virtual object 201-1 stacked on top of the structural support object could be a product (e.g., a boxed product). Responsive to receiving a request to save the virtual reset 203-1 (e.g., via selection of a user interface 211 object), the reset modeling subsystem 233 saves the virtual reset 203-1, including the virtual objects 201-1 and 201-2 arranged as instructed via the inputs received via the user interface 211-1, in the data repository 137 and/or, as depicted in FIG. 2, in the data storage unit 214 of the user computing device 110. One or more steps for generating the new virtual reset 203-1 or editing the existing virtual reset 203-1 described in this example as being performed by the modeling system 130 (or a subsystem thereof) can instead be performed, in certain embodiments, by the computing device 110. In one embodiment, the virtual object modeling application and the reset object modeling application can periodically save, in persistent memory, a current state of a virtual object or virtual reset while it is being worked on, to be uploaded when network connectivity is available.
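The periodic-save behavior mentioned at the end of this paragraph could be approximated with two small helpers: one that writes the in-progress state to persistent storage and one that uploads it when connectivity returns. The Python sketch below is an assumption-laden illustration; the file path, callbacks, and function names are hypothetical.

    import json
    from pathlib import Path

    AUTOSAVE_PATH = Path("autosave/reset_in_progress.json")

    def autosave(reset_state: dict) -> None:
        """Write the current state of the reset (or object) being edited to
        persistent local storage."""
        AUTOSAVE_PATH.parent.mkdir(parents=True, exist_ok=True)
        AUTOSAVE_PATH.write_text(json.dumps(reset_state))

    def sync_when_online(is_online, upload) -> bool:
        """Upload the autosaved state once network connectivity is available,
        then remove the local copy; return True if an upload happened."""
        if AUTOSAVE_PATH.exists() and is_online():
            upload(AUTOSAVE_PATH.read_text())
            AUTOSAVE_PATH.unlink()
            return True
        return False

    # The editing loop might call autosave(...) every few seconds and
    # sync_when_online(...) whenever connectivity is regained.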


As further depicted in FIG. 2, a user can use an example user interface 211-2 to render a virtual reset 203-1 (e.g., the virtual reset generated based on inputs to the user interface 211-1) in an AR and/or VR scene 215. For example, the user, via the user computing device 110, accesses or otherwise executes (e.g., via the modeling application 212) the augmented reality and/or virtual reality application 135 to render the virtual reset 203-1 in the AR and/or VR scene 215. In certain examples, the user interface 211-2 is displayed by the user computing device 110 in an augmented reality display mode. In other examples, the user interface 211-2 is displayed via an augmented reality viewing device (e.g., AR glasses, an AR headset, etc.) that is communicatively coupled to one or more of the user computing device 110 and/or the modeling system 130. For example, the AR and/or VR reset rendering subsystem 235 may render the AR and/or VR scene 215 including the virtual reset 203-1 within the user interface 211-2 responsive to receiving, via the user interface 211-2, a selection of the stored virtual reset 203-1 and a request to render the virtual reset 203-1 in an augmented reality view. The AR and/or VR reset rendering subsystem 235 can access the data repository 137 or, as depicted in FIG. 2, the data storage unit 214 of the user computing device 110 to retrieve the stored virtual reset 203-1. The AR and/or VR reset rendering subsystem 235 renders the AR scene 215 so that the user viewing the user interface 211-2 can view the virtual reset 203-1 in the AR scene 215 in an overlay over the physical environment. An example of a virtual reset displayed in an AR scene 215 is depicted in FIG. 8B. One or more steps for rendering the virtual reset 203-1 in an AR scene 215 described in this example as being performed by the modeling system 130 (or a subsystem thereof) can instead be performed, in certain embodiments, by the computing device 110.



FIG. 3 depicts an example of a method 300 for generating a virtual object 201, according to certain embodiments disclosed herein. One or more computing devices (e.g., the modeling system 130 or the virtual object generator subsystem 231 included therein) implement operations depicted in FIG. 3. For illustrative purposes, the method 300 is described with reference to certain examples depicted in the figures. Other implementations, however, are possible.


In the example method 300 described herein, the user interacts with a modeling application 212 executing on a computing device 110 via a user interface 211 to provide information as a basis to generate a virtual object 201. In certain embodiments, as described in the following steps of method 300, the modeling system 130 or one or more subsystems thereof performs the steps of method 300 by receiving the information input via the user interface 211 and generating the virtual object 201. However, in other embodiments, the steps of method 300 can be performed by the user computing device 110 without the user computing device 110 needing to communicate with a modeling system 130 via the network 120.


At block 310, the method 300 involves receiving, by a virtual object generator subsystem 231, a request to create a virtual object 201. In certain embodiments, a user of the user computing device 110 accesses the modeling application 212 via the user interface 211 and interacts therewith to instruct the modeling application 212 to create a new virtual object 201. In some instances, the user wants to generate a virtual object 201 that models a physical object 201X. The virtual object generator subsystem 231 communicates with the user computing device 110 via the network 120 and receives the request to generate the new virtual object 201 responsive to the one or more inputs of the user. FIG. 4A depicts an example user interface 211 for receiving a request to generate a new virtual object 201.


At block 320, the method 300 involves receiving properties 202 information to define the virtual object 201. The virtual object generator subsystem 231 may display, via the user interface 211 and responsive to receiving the request to generate a new virtual object 201, one or more user interface fields to receive properties 202 information to define the new virtual object 201. The user interface 211 fields to receive the properties 202 information can include one or more of drop down menus, check boxes, input fields, an interface object to receive a file upload from the user computing device 110, an interface object to receive an image captured by a camera device 213, or other user interface fields via which property information including one or more of text, files, item selections from a set of items, or other user inputs may be received.


In certain embodiments, the method 300 at block 320 involves implementing blocks 321, 323, and 325, in which the user inputs properties 202 information to define the virtual object 201. For example, the user inputs properties 202 information so that the virtual object 201 models a physical object 201X.


At block 321, the method 300 involves receiving a selection of a shape of a set of shapes, the shape including a set of faces. The virtual object generator subsystem 231 can display one or more user interface 211 objects to receive a selection of a shape. For example, the virtual object generator subsystem 231 can display a drop down menu that enables a selection of a shape from a set of shapes listed in the drop down menu. In another example, the virtual object generator subsystem 231 displays another type of input field to receive the selection of the shape. The set of shapes could include one or more of a cube, a rectangular prism, a cylinder, a pyramid, a cone, a sphere, or other shapes. Each shape is associated with a respective set of faces. For example, a cube has six faces of equal area. In some instances, the faces comprise a region of surface area. For example, a cylinder could comprise a top circular face, a bottom circular face, and one or more curved portions of surface area around the circumference of the cylinder, which run perpendicular to the top and bottom faces. FIG. 4B depicts an example user interface 211 for receiving a selection of a shape selected from a set of shapes.
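One simple way to model the relationship between a selectable shape and the faces that will need images is a lookup table, as in the hypothetical Python sketch below. The face names, and the decision to treat a curved surface as a single region, are assumptions for illustration.

    # Each selectable shape maps to the named faces (or surface regions) that
    # will each need a facial image.
    SHAPE_FACES = {
        "cube": ["front", "back", "left", "right", "top", "bottom"],
        "rectangular_prism": ["front", "back", "left", "right", "top", "bottom"],
        "cylinder": ["top", "bottom", "side"],
        "pyramid": ["base", "side_1", "side_2", "side_3", "side_4"],
        "cone": ["base", "side"],
        "sphere": ["surface"],
    }

    def faces_for(shape: str) -> list:
        """Return the faces of the selected shape that require facial images."""
        return SHAPE_FACES[shape]

    assert len(faces_for("cube")) == 6  # a cube has six faces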


At block 323, the method 300 involves receiving an input of further properties 202 defining the virtual object 201. The virtual object generator subsystem 231 can display one or more user interface 211 objects to receive a selection and/or other input of further properties 202 (in addition to the shape selection) to define the virtual object 201. In certain examples, the virtual object generator subsystem 231 displays a combined user interface to receive both the selection of the shape as well as the input and/or selection of further properties 202. Further properties 202 can include one or more of a name, an identifier (e.g., an item number), a description, dimensions, weight, or any other property 202 that describes the virtual object 201 such that the virtual object 201 can correspond to the physical object. The user inputs, in some instances, property information that accurately represents the physical object 201X which the user wants to model using the virtual object 201. FIG. 4B depicts an example user interface 211 for receiving properties 202 information to define a virtual object 201.


At block 325, the method 300 involves receiving, for each of a set of faces corresponding to the shape selected in block 321, an image of a portion (e.g., a side, a face, a surface, etc.) of the physical object 201X corresponding to the face. The virtual object generator subsystem 231 may display a user interface 211 via which to receive images of each of a number of faces associated with the shape selected at block 321. For example, a cube comprises six faces and the virtual object generator subsystem 231 could provide a user interface 211 to request and receive images to use for the six faces. Responsive to detecting a click or other interaction with a particular face of the shape, the virtual object generator subsystem 231 can enable a capture, via the camera device 213, of a corresponding facial image of the physical object 201X or enable a selection of an image stored on the data storage unit 214 and/or the data repository 137. For example, responsive to an input of the user, the camera device 213 captures an image and transmits the image to the virtual object generator subsystem 231, which associates the captured image with the particular face.


The virtual object generator subsystem 231 may receive a respective facial image for each face of the selected shape. In some embodiments, the virtual object generator subsystem 231 presents a wizard or other program that requests, sequentially, the camera device 213 to capture or upload a respective facial image for each respective face of the selected shape. For example, the virtual object generator subsystem 231 can display, via the user interface 211, a request for a subsequent image corresponding to a subsequent face of the plurality of faces of the 3D shape. The virtual object generator subsystem 231 can receive the subsequent image showing a subsequent portion of the physical object 201X. The virtual object generator subsystem 231 can determine an area of the subsequent image that corresponds to another portion of the physical object 201X. The virtual object generator subsystem 231 can associate, in the virtual object 201, the area of the subsequent image with the subsequent face.


In some embodiments, the virtual object generator subsystem 231 can determine that a face of the set of faces of the selected three-dimensional shape does not have an associated image and, responsive to this determination, display, via the user interface 211, a request for the image, wherein the image is received responsive to requesting the image.
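The sequential requests and the missing-image check described in the two paragraphs above could be combined into a small capture wizard that walks through whichever faces still lack an image. The Python sketch below is purely illustrative; capture_image stands in for whatever camera capture or file upload mechanism the application provides.

    def missing_faces(shape_faces: list, face_images: dict) -> list:
        """Return the faces of the selected shape that do not yet have an
        associated image, in the order they should be requested."""
        return [face for face in shape_faces if face not in face_images]

    def run_capture_wizard(shape_faces: list, face_images: dict, capture_image) -> dict:
        """Sequentially request an image for each face that is still missing one;
        capture_image(face) stands in for a camera capture or file upload."""
        for face in missing_faces(shape_faces, face_images):
            face_images[face] = capture_image(face)
        return face_images

    # Example with two faces already provided and a stub capture function.
    images = {"front": "front.jpg", "top": "top.jpg"}
    images = run_capture_wizard(
        ["front", "back", "left", "right", "top", "bottom"],
        images,
        capture_image=lambda face: f"{face}.jpg",
    )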


The properties 202 information of the virtual object 201 comprises the received facial images. FIG. 4C depicts an illustration of a user interface for receiving properties information defining the virtual object requested in FIG. 4A, including displaying an interface object for selecting a face upon which to impose a facial image, according to certain embodiments disclosed herein.


In certain embodiments, boundaries of an area of a facial image uploaded or captured by the user computing device 110 do not correspond to boundaries of a face of the virtual object 201. The virtual object generator subsystem 231 may provide one or more user interface objects for performing image manipulations. Image manipulations could include scaled resizing, unscaled resizing, cropping, rotating, warping, or otherwise manipulating the facial image so that boundaries of the facial image are changed. In one embodiment, the user interface 211 allows the user to zoom in on various portions of the image for finer and more precise control of the image manipulations. The virtual object generator subsystem 231 receives, via the user interface 211, one or more adjustments to the boundaries of the facial image and applies the adjustments to the facial image. After the virtual object generator subsystem 231 has performed one or more adjustments to boundaries of the facial image via requested image manipulations, the virtual object generator subsystem 231 can save a manipulated image responsive to receiving a selection of a user interface 211 object (e.g., the user clicks an interface object entitled “save image”). The virtual object generator subsystem 231 can determine an area of the image that corresponds to the portion of the physical object 201X. The portion can include a side, a surface, a face, or other region of the physical object 201X able to be captured in an image of the physical object 201X. The virtual object generator subsystem 231 can associate, in the virtual object 201, the area of the image with a face of the set of faces of the selected 3D shape.


In certain examples, the user can resize, edit, rotate, warp, or otherwise manipulate an uploaded or captured image so that boundaries of a portion of the physical object 201X in the image correspond to boundaries of the face of the selected three-dimensional shape. For example, the virtual object generator subsystem 231 can display, via the user interface 211, the image imposed on the face, the image showing, in addition to the portion of the physical object 201X, a portion of a space (e.g., in an environment of the physical object 201X) where the physical object 201X is located. The virtual object generator subsystem 231 can provide, via the user interface 211, resizing objects selectable to enable resizing of the image, wherein the resizing objects are placed on detectable boundaries in the image between the physical object 201X and the space. The virtual object generator subsystem 231 can resize the image to correspond to an area of the face responsive to receiving inputs including a change in position of one or more of the resizing objects. For example, the user can resize the image so that the boundaries of the portion of the physical object 201X in the image correspond to boundaries of the face of the selected three-dimensional shape. In some examples, the virtual object generator subsystem 231 can display, via the user interface 211, a rotation interface object and can rotate, responsive to a manipulation of the rotation interface object, the image so that the boundaries of the portion of the physical object 201X in the image correspond to boundaries of the face of the selected three-dimensional shape. In some examples, the virtual object generator subsystem 231 can display, via the user interface 211, an editing interface object and can edit, responsive to a manipulation of the editing interface object, the image so that the boundaries of the portion of the physical object 201X in the image correspond to boundaries of the face of the selected three-dimensional shape. The editing could include warping, stretching, cropping, or other manipulation of the image. In certain embodiments, the virtual object generator subsystem 231 can display, via the user interface 211, the boundaries of the portion of the physical object 201X in the image and the boundaries of the face of the selected three-dimensional shape to aid the user in manipulating the image using the interface objects. FIG. 4D depicts an illustration of a user interface for generating a virtual object, including a facial image imposed to a face selected via the user interface of FIG. 4C and resizing objects that are selectable to resize an area of the facial image, according to certain embodiments disclosed herein.
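For illustration, the crop-and-resize portion of these manipulations could be performed with an off-the-shelf imaging library such as Pillow. The sketch below assumes the boundary coordinates come from the user's resizing handles (or a boundary detector) and that each face is rendered from a fixed-size texture; these are assumptions, not details of the disclosed system.

    from PIL import Image  # Pillow

    def fit_image_to_face(image_path: str, object_box_px: tuple,
                          face_width_px: int, face_height_px: int) -> Image.Image:
        """Crop a facial image to the boundaries of the physical object
        (object_box_px = (left, upper, right, lower)) and resize the result to
        the pixel dimensions used to texture the selected face."""
        image = Image.open(image_path)
        cropped = image.crop(object_box_px)
        return cropped.resize((face_width_px, face_height_px))

    # Example: crop the product out of a 1920x1080 photo and fit it to a
    # 512x512 texture for the selected face.
    # face_texture = fit_image_to_face("front.jpg", (400, 100, 1500, 1000), 512, 512)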


At block 330, the method 300 involves presenting the virtual object 201 in a virtual space based on the properties 202 information defined in block 320. For example, the properties 202 information can include the selection in block 321 of the shape from a set of shapes, the input in block 323 of further properties 202 information (e.g., weight, dimensions, identifier, name, price, etc.), and the input in block 325 of facial images for each of a number of faces of the shape selected in block 321. The virtual object generator subsystem 231 can superimpose, on each face of the shape, the image received for that face (and, in some instances, subsequently edited as described herein). Superimposing the image on the face comprises superimposing the portion of the image defined by the boundaries of the physical object 201X in the image onto the face. In certain embodiments, the virtual object generator subsystem 231 can present a preview of the virtual object 201 in the user interface 211 and can allow a rotation of the virtual object 201 to display various views of the virtual object 201.


For example, suppose the item being modeled is a physical box of pool shock: the selected shape is a cube, the dimensions specified are 3 ft×3 ft×3 ft, the name specified is “Merchant X pool shock,” the price specified is “$35.00,” and the identifier specified is “1268439383.” Further, the user uploads an image capture of each of the six faces of the physical box of pool shock. The virtual object generator subsystem 231 can display the virtual object 201 that models the box of pool shock, including the properties 202 information, and can rotate the virtual object 201, responsive to inputs to the user interface 211, to display various views of the virtual object 201. For example, in one view, the user can view three of the six faces of the virtual object 201 model of the box of pool shock and, in another view, the user can view a different three of the six faces of the virtual object 201 model.


At block 340, the method 300 involves storing the virtual object 201. The virtual object generator subsystem 231 may associate each of the captured facial images with respective faces of the virtual object 201 so that, when displayed, the modeling system 130 may display the virtual object 201 with the facial images imposed or otherwise displayed on top of the associated faces of the virtual object 201. The virtual object generator subsystem 231 may associate further properties 202 information with the virtual object 201 (e.g., price, name, identifier, description, etc.), for example, in metadata of the virtual object 201. The virtual object 201 may be represented as the shape selected by the user and to scale within the virtual space based on the dimensions specified by the user in the properties 202. For example, the virtual object generator subsystem 231 can update the 3D shape to include the images (edited as needed) and the remaining properties. Alternatively or additionally, the virtual object generator subsystem 231 stores links between each face and a corresponding image (edited as needed), where the links are to storage locations of these images. The remaining properties can be stored in the 3D shape or linked thereto (e.g., stored in metadata that has a storage location link). In an example, responsive to receiving, via the user interface 211, a request to display a stored virtual object 201, the virtual object generator subsystem 231 can retrieve the virtual object 201 from the data repository 137 or from the data storage unit 214. In this example, the virtual object generator subsystem 231 can present the virtual object in the user interface 211, including displaying the 3D shape associated with the virtual object 201, including a quantity of faces and, for each of the quantity of faces, an image superimposed upon the face. In this example, the virtual object generator subsystem 231 can present further properties 202 of the virtual object 201, for example, a name, a price, a weight, an identifier, a description, or other properties 202 information associated with the virtual object 201. In another embodiment, the virtual object generator subsystem 231 may associate an image superimposed upon one or more of the faces of the virtual object 201, but not every face of the virtual object 201.
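The link-based storage described above (each face pointing at the storage location of its image, with remaining properties kept in linked metadata) might be persisted and later resolved roughly as in the following Python sketch. The JSON layout and file-based repository are hypothetical stand-ins for whatever storage the data repository 137 actually uses.

    import json
    from pathlib import Path

    def store_virtual_object_with_links(object_id: str, shape: str,
                                        face_image_links: dict, metadata_link: str,
                                        repo_dir: str = "virtual_objects") -> Path:
        """Persist the 3D shape together with links from each face to the storage
        location of its (possibly edited) image, plus a link to separately stored
        metadata (name, price, weight, identifier, and so on)."""
        record = {
            "id": object_id,
            "shape": shape,
            "face_image_links": face_image_links,  # face name -> image storage location
            "metadata_link": metadata_link,        # storage location of the properties
        }
        path = Path(repo_dir) / f"{object_id}.json"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(record, indent=2))
        return path

    def load_virtual_object(object_id: str, repo_dir: str = "virtual_objects") -> dict:
        """Retrieve a stored virtual object and resolve its metadata link so the
        shape, per-face images, and properties can be presented together."""
        record = json.loads((Path(repo_dir) / f"{object_id}.json").read_text())
        record["properties"] = json.loads(Path(record["metadata_link"]).read_text())
        return record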


Continuing with the example above with a physical object 201X comprising a box of pool shock, the virtual object 201, when rendered by the modeling system 130, is a realistic representation of the box of pool shock within a virtual space that is to scale and that includes associated properties 202 information that may be retrieved upon a selection of the virtual object 201. FIG. 4E depicts an illustration of a user interface for generating a virtual object, including a display of a virtual object, according to certain embodiments disclosed herein.



FIG. 4A depicts an example user interface 211 for receiving a request to generate a new virtual object 201, in accordance with certain embodiments disclosed herein. The example user interface 211 of FIG. 4A includes a user interface object 401. The modeling system 130 receives a request to generate a new virtual object 201 responsive to the modeling application 212 detecting a selection of the user interface object 401. The user interface 211 of FIG. 4A also depicts further user interface objects for selecting an existing virtual object. For example, FIG. 4A depicts a search field that reads “Search items” and enables a user to search for and retrieve an existing virtual object 201 that is stored by the modeling system 130.



FIG. 4B depicts an example user interface 211 for receiving properties information defining the virtual object 201 requested in FIG. 4A, including a selection of a shape from a set of shapes, in accordance with certain embodiments disclosed herein. As depicted in FIG. 4B, user interface objects 402-409 are displayed and enable an input of properties 202 information defining the virtual object 201. For example, user interface object 402 enables receiving an input of a name, user interface object 403 enables receiving an input of an identifier, user interface object 404 enables receiving an input of a description, user interface object 405 enables selection of a shape from a set of shapes, and user interface objects 406, 407, 408, and 409 enable indication of dimensions and a measurement unit for the dimensions. The user interface objects depicted herein are examples, and additional, fewer, and/or different interface objects than the ones depicted in FIG. 4B may be displayed to receive properties 202 information. For example, an interface object may be displayed to receive a weight of the virtual object 201 or a price of the virtual object 201. As depicted in FIG. 4B, the user has input values in the interface objects (e.g., objects 402, 403, 404, 406, 408, 409) and/or made a menu selection (e.g., objects 405, 407) to define properties 202 information for the virtual object 201.



FIG. 4C depicts an illustration of a user interface for receiving properties information defining the virtual object requested in FIG. 4A, including displaying an interface object for selecting a face upon which to impose a facial image, according to certain embodiments disclosed herein. As depicted in FIG. 4C, a user interface 211 displays, in a virtual space 414, a virtual shape 413 (a cube) selected by the user using interface object 405 of FIG. 4B. Further, the user interface 211 of FIG. 4C depicts a display of interface objects on each side of the shape selected in FIG. 4B. For example, interface objects 410, 411, and 412 are selectable via the user interface 211. Responsive to receiving a selection of one of the interface objects 410, 411, or 412, the modeling system 130 can activate the camera device 213 to enable a user to capture a facial image of a corresponding face of a physical object 201X or can display one or more user interface objects that enable the user to access an image file to upload as the facial image.



FIG. 4D depicts an illustration of a user interface for generating a virtual object, including a facial image imposed on a face selected via the user interface of FIG. 4C and resizing objects that are selectable to resize an area of the facial image, according to certain embodiments disclosed herein. As depicted in FIG. 4D, an image is superimposed over a face of a virtual object. User interface objects 415, 416, 417, and 418 are provided at each of four corners of the facial image and enable, via selection and/or dragging of the user interface objects 415, 416, 417, and 418, the virtual object generator subsystem 231 to change one or more boundaries of the facial image with respect to the face of the virtual object 201. User interface object 419 enables saving of the modified facial image subsequent to application of image modification operations instructed via interface objects 415, 416, 417, and 418. The user interface objects depicted in FIG. 4D are examples, and other types of user interface 211 objects may be used to perform additional or different facial image manipulations.



FIG. 4E depicts an illustration of a user interface for generating a virtual object, including a display of a virtual object, according to certain embodiments disclosed herein. FIG. 4E depicts a rendering, within a virtual space, of a virtual object 201 and associated properties 202 information generated based on inputs received via the user interfaces in FIG. 4A, FIG. 4B, FIG. 4C, and FIG. 4D. As shown in FIG. 4E, the rendered virtual object 201 includes facial images provided by the user (e.g., captured directly from the physical object 201X or otherwise uploaded) superimposed upon faces of the virtual object 201. As shown in FIG. 4E, a properties 202 information section below the rendered virtual object 201 includes properties 202 specified by the user. FIG. 4E further depicts a user interface object 420, selection of which causes the virtual object generator subsystem 231 to store the virtual object 201 on the data repository 137 and/or the data storage unit 214 of the user computing device 110. In the example depicted in FIG. 4E, the stored virtual object 201 includes the properties 202 information (e.g., shape, further properties information, facial images) provided by the user via the user interfaces of FIGS. 4A, 4B, 4C, and 4D.



FIG. 5 depicts a method 500 for generating a virtual reset, according to certain embodiments disclosed herein. One or more computing devices (e.g., the modeling system 130 and/or the reset modeling subsystem 233) implement operations depicted in FIG. 5. For illustrative purposes, the method 500 is described with reference to certain examples depicted in the figures. Other implementations, however, are possible.


In certain embodiments, the method 500 begins at block 510. At block 510, the method 500 involves receiving, by a reset modeling subsystem 233, a selection of a 3D virtual object 201 in association with generating a 3D virtual reset 203. In an example, the reset modeling subsystem 233 receives a request to generate a new virtual reset 203 from the reset object modeling application 133 executing on the user computing device 110 responsive to receiving one or more inputs to the user interface 211. The reset object modeling application 133 receives a selection of at least one virtual object 201 to include within the virtual reset 203. For example, the user of the user computing device 110 accesses the reset object modeling application 133 (e.g., via the application 112), selects an option to generate a new virtual reset 203, and selects at least one virtual object 201 to include within the virtual reset 203. The reset modeling subsystem 233, via the reset object modeling application 133, may provide menus, fields, or other user interface 211 objects to enable the user to request the new virtual reset 203 and select the at least one virtual object for inclusion within the new virtual reset 203.


In certain embodiments, instead of generating a new virtual reset 203, the user retrieves a stored virtual reset 203. For example, the reset modeling subsystem 233 receives a request to retrieve a stored virtual reset 203 from the reset object modeling application 133 executing on the user computing device 110 responsive to receiving one or more inputs to the user interface 211. The reset object modeling application 133 can access a selected stored virtual reset 203 from the data repository 137 of the modeling system 130 or from the data storage unit 214 of the user computing device 110. For example, the user of the user computing device 110 accesses the reset object modeling application 133 (e.g., via the application 112), selects an option to retrieve a stored virtual reset 203, and selects the stored virtual reset 203 from a list of stored virtual resets 203. The reset modeling subsystem 233, via the reset object modeling application 133, may provide menus, fields, or other user interface 211 objects to enable the user to request the stored virtual reset 203. The stored virtual reset 203 includes at least one virtual object 201.


At block 520, the method 500 involves presenting, by the reset modeling subsystem 233 at the user interface 211, the 3D virtual object 201 in the 3D virtual reset 203 at a first position. The new virtual reset 203 or the stored virtual reset 203 includes at least one virtual object 201 arranged in a virtual space at the first position within the virtual reset 203. The virtual reset 203 may include, in some instances, multiple virtual objects 201 at respective positions within the virtual reset 203. For example, the virtual reset 203 can include first, second, third, or subsequent virtual objects 201 at first, second, third, or subsequent respective positions within the virtual space of the virtual reset 203.
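
As an illustrative sketch only, a virtual reset of this kind could be represented as a named collection of placements, each pairing a virtual object identifier with a position (and, optionally, an orientation) in the reset's virtual space. The data structures and identifiers below are hypothetical and are not drawn from the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PlacedObject:
    """Hypothetical placement record: which virtual object sits where in the reset."""
    object_id: str
    position: Tuple[float, float, float]                        # (x, y, z) in the reset's virtual space
    rotation_deg: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # orientation about the x, y, z axes

@dataclass
class VirtualReset:
    """Hypothetical reset model: a named collection of placed virtual objects."""
    name: str
    placements: List[PlacedObject] = field(default_factory=list)

    def add(self, object_id: str, position, rotation_deg=(0.0, 0.0, 0.0)) -> None:
        self.placements.append(PlacedObject(object_id, tuple(position), tuple(rotation_deg)))

# Usage: a shelf at a first position and a boxed product at a second position.
reset = VirtualReset(name="aisle-endcap")
reset.add("shelf-01", (0.0, 0.0, 0.0))
reset.add("pool-shock-1268439383", (0.5, 1.0, 0.2))
```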


At block 530, the method 500 involves receiving, by the reset modeling subsystem 233 via the user interface 211, an edit to the 3D virtual object 201 in the 3D virtual reset 203. The reset object modeling application 133 may provide a user interface 211 via which the user can visualize edits as well as tools via which the user can apply edits to the virtual objects 201 of the virtual reset 203. For example, a position tool enables a user to select the virtual object 201 and change a position of the virtual object 201 within the virtual reset 203. A rotation tool enables a user to select the virtual object 201 and rotate the virtual object 201. FIG. 6 depicts an illustration of a user interface 211 tool for applying edits to a virtual reset 203, which can be used with the method 500 of FIG. 5, according to certain embodiments disclosed herein.


In certain embodiments, implementing block 530 comprises performing one or more iterations of one or more of block 530A or block 530B. For example, block 530A can be repeated multiple times to receive edits including changes in position for one or more virtual objects 201 in the virtual reset 203. Block 530B can be repeated multiple times to receive edits including changes to characteristics of one or more virtual objects 201 in the virtual reset 203. The reset modeling subsystem 233 receives the edits requested by the user via the reset object modeling application 133.


At block 530A, the method 500 involves receiving, by the reset modeling subsystem 233, an edit that includes changing a position of the virtual object 201 to a second position within the virtual reset 203. In some instances, a virtual reset 203 models a corresponding physical reset and a user of the user computing device 110 (e.g., a reset designer), as part of a process of designing a virtual reset 203 corresponding to the physical reset, interacts with the reset object modeling application 133 to change a position of the virtual object 201 from a first position to a second position within the virtual reset 203. Changing the position can include moving, rotating, stacking, or otherwise manipulating the virtual object 201 within the virtual space of the virtual reset 203. For example, the first position can include a first location (e.g., within an x, y, z coordinate system within a virtual space) and a first orientation (e.g., a default configuration) and the second position can include a second location and a second orientation (e.g., rotated 90 degrees about the y axis). The user may use the reset object modeling application 133 to construct a virtual reset 203 that accurately models a physical reset. For example, the virtual reset 203 can include a virtual object 201 that models a structural support (e.g., a virtual shelf) with one or more other virtual objects 201 representing products (e.g., boxed products), signage (e.g., a sign that can be placed on or otherwise attached to a surface of the structural support), or other objects. The user may move, within the virtual reset 203, the structural support to a desired position and orientation, move and/or orient one or more of the products to stack or otherwise arrange the products on the structural support, and move and/or orient the signage to place the signage at desired location(s) on the structural support. For example, a first virtual object 201 has first boundaries, a second virtual object 201 has second boundaries, and the edit includes a request to move the first virtual object 201 so that the first virtual object 201 is stacked on or beside and against (e.g., packed tightly next to) the second virtual object 201. For example, the edit instructs moving the first virtual object 201 so that a first portion of the first boundaries of the first 3D virtual object is adjacent to a second portion of the second boundaries of the second 3D virtual object.
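
A position edit of the kind described above can be thought of as replacing an object's current location and orientation with the requested second location and orientation. The following minimal sketch, with hypothetical names and values, illustrates that idea only; it is not the disclosed edit handler.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    """Hypothetical placement: location and orientation of one virtual object."""
    location: tuple      # (x, y, z) coordinates in the reset's virtual space
    rotation_deg: tuple  # rotation about the (x, y, z) axes

def apply_position_edit(placements: dict, object_id: str,
                        new_location, new_rotation_deg) -> None:
    # Replace the object's current location/orientation with the requested ones.
    placements[object_id] = Placement(tuple(new_location), tuple(new_rotation_deg))

# Usage: rotate the boxed product 90 degrees about the y axis and move it onto the shelf.
placements = {"pool-shock-1268439383": Placement((0.5, 0.0, 0.2), (0.0, 0.0, 0.0))}
apply_position_edit(placements, "pool-shock-1268439383", (0.5, 1.2, 0.2), (0.0, 90.0, 0.0))
```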


At block 530B, the method 500 involves receiving, by the reset modeling subsystem 233, an edit that includes editing a characteristic of the virtual object 201. For example, the characteristic can include images associated with one or more faces of the 3D virtual object 201 and editing the characteristic can include changing one or more of the images. In some instances, the characteristic can include properties 202, such as dimensions of the virtual object 201, and editing the characteristic can include resizing or otherwise changing the dimensions. In some instances, editing the characteristic of the virtual object 201 comprises duplicating the virtual object 201 within the virtual reset 203. In some instances, instead of and/or in addition to editing a characteristic of the virtual object 201, the user adds a new virtual object 201 to the virtual reset 203 and/or deletes one or more virtual objects 201 from the virtual reset 203.
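
For illustration, two characteristic edits named above (swapping a face image and resizing dimensions) could be sketched as follows. The helper names and dictionary layout are assumptions, not the disclosed implementation.

```python
def change_face_image(face_image_links: dict, face_id: str, new_image_link: str) -> None:
    """Hypothetical characteristic edit: associate a different image with a face."""
    face_image_links[face_id] = new_image_link

def resize(dimensions: dict, scale: float) -> dict:
    """Hypothetical characteristic edit: uniformly rescale numeric dimensions."""
    return {k: v * scale if isinstance(v, (int, float)) else v for k, v in dimensions.items()}

# Usage: swap the front facial image and halve the object's dimensions.
links = {"front": "images/front_v1.png"}
change_face_image(links, "front", "images/front_v2.png")
new_dims = resize({"w": 3, "h": 3, "d": 3, "unit": "ft"}, 0.5)
```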


From block 530, the method 500 proceeds to block 540.


At block 540, the method 500 involves updating, by the reset modeling subsystem 233, the presentation of the 3D virtual reset 203 by showing the edit received in block 530. For example, the reset modeling subsystem 233 displays, in the user interface 211, the virtual object 201 in a second position responsive to receiving the edit instructing to move the virtual object 201 from a first position to the second position. The reset modeling subsystem 233 can present a rotation of the virtual object 201, a change in position of the virtual object 201, a change in one or more images of faces of the virtual object 201, a resizing or other change in dimensions of the virtual object 201, a duplication of the virtual object 201, or other edits to the virtual object 201. In some instances, the reset modeling subsystem 233 can present, via the user interface 211, an addition of a virtual object 201 and a deletion of a virtual object 201 in the virtual reset 203. FIG. 6 depicts an illustration of a user interface 211 tool for applying edits to a virtual reset 203 which can be used with the method 500 of FIG. 5, according to certain embodiments disclosed herein. In some embodiments, block 530 and block 540 can be repeated, allowing the user, for example, to change the position of the virtual object in the virtual reset after seeing the virtual object's position in the virtual reset.


In some embodiments, the reset modeling subsystem 233 can constrain editing operations with respect to the virtual reset 203.


In some embodiments, editing operations are constrained based on a weight capacity property 202 and/or weight property 202 of virtual objects 201 within the virtual reset 203. In an example, a first virtual object 201 is a boxed product having a weight of 200 kg and a second virtual object 201 is a shelf having a weight capacity of 100 kg. In this example, the reset modeling subsystem 233 receives an edit requesting a change in position of the first virtual object 201 such that it is stacked on top of the second virtual object 201. In this example, the reset modeling subsystem 233 determines that the weight of the first virtual object 201 (200 kg) is greater than the weight capacity of the second virtual object 201 (100 kg) upon which the first virtual object 201 is to be stacked. In this example, responsive to determining that the weight capacity does not enable the requested stacking editing operation, the reset modeling subsystem 233 denies and reverses the editing operation. In this example, the reset modeling subsystem 233 may indicate, via the user interface 211, that the editing operation is not allowed and may display a reason or reason code to the user (e.g., “selected object is too heavy to stack on this shelf”). Reversing the editing operation can include returning the virtual object 201 from the requested second position (e.g., the position in which it is stacked on the shelf) to its original first position within the virtual reset 203. In certain examples, the reset modeling subsystem 233 can deny and reverse a requested editing operation based on a weight capacity of a structural support virtual object 201 in view of a combined weight of multiple virtual objects 201 stacked upon the structural support virtual object 201. For example, the weight capacity of the structural support virtual object 201 is 100 kg, a first virtual object 201 already stacked on the structural support virtual object 201 weighs 60 kg, and the reset modeling subsystem 233 receives a request to stack an additional virtual object 201 having a weight property 202 of 50 kg upon the structural support virtual object 201. In this example, responsive to determining that the combined weight of 110 kg is greater than the weight capacity of 100 kg, the reset modeling subsystem 233 does not allow the edit and reverses the edit.
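
The weight constraint described above reduces to comparing the item's weight against the shelf's remaining capacity (its weight capacity minus the simulated load already on it). A minimal sketch of that check follows, using the 200 kg/100 kg and 60 kg + 50 kg examples from this paragraph; the function name is hypothetical.

```python
def stacking_allowed(shelf_capacity_kg: float, current_load_kg: float,
                     new_item_weight_kg: float) -> bool:
    """Hypothetical weight constraint check: allow stacking only if the shelf's
    remaining capacity covers the weight of the item being stacked."""
    remaining_capacity_kg = shelf_capacity_kg - current_load_kg
    return new_item_weight_kg <= remaining_capacity_kg

# A 200 kg item on an empty 100 kg-capacity shelf: denied, so the move is reversed.
assert not stacking_allowed(shelf_capacity_kg=100, current_load_kg=0, new_item_weight_kg=200)

# A 50 kg item on a 100 kg-capacity shelf already loaded with 60 kg: also denied.
assert not stacking_allowed(shelf_capacity_kg=100, current_load_kg=60, new_item_weight_kg=50)
```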


In some embodiments, editing operations are constrained based on dimension properties 202 and/or clearances (e.g., height/length/width clearances) between virtual objects 201 within the virtual reset 203. In an example, a first virtual object 201 is a first shelf object, a second virtual object 201 is a second shelf object that is 3 ft above the first shelf object within the virtual reset 203, and a third virtual object 201 is a boxed product having a height of 3.5 ft. In this example, the reset modeling subsystem 233 receives an edit requesting a change in position of the third virtual object 201 such that it is placed above the first shelf and below the second shelf within the virtual reset 203. In this example, the reset modeling subsystem 233 determines that a height clearance (3 ft) between the shelves is less than a height (3.5 ft) of the third virtual object 201 which the edit specifies to place between the shelves. In this example, responsive to determining that the height clearance does not enable the requested editing operation, the reset modeling subsystem 233 denies and reverses the editing operation. In this example, the reset modeling subsystem 233 may indicate, via the user interface 211, that the editing operation is not allowed and may display a reason or reason code to the user (e.g., “selected object is too tall/wide/long to stack in this location.”). Reversing the editing operation can include returning the third virtual object 201 from the requested second position (e.g., the position in which it is stacked between the shelves) to its original first position within the virtual reset 203.
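
The clearance constraint likewise reduces to a comparison: the item's height against the vertical clearance between the two shelves. A minimal sketch, assuming shelf heights expressed as y coordinates in feet (hypothetical parameter names), follows.

```python
def placement_fits(lower_shelf_top_y_ft: float, upper_shelf_bottom_y_ft: float,
                   item_height_ft: float) -> bool:
    """Hypothetical clearance check: allow placement only if the item's height fits
    within the vertical clearance between two shelves."""
    clearance_ft = upper_shelf_bottom_y_ft - lower_shelf_top_y_ft
    return item_height_ft <= clearance_ft

# A 3.5 ft tall boxed product does not fit in a 3 ft clearance: denied and reversed.
assert not placement_fits(lower_shelf_top_y_ft=2.0, upper_shelf_bottom_y_ft=5.0,
                          item_height_ft=3.5)
```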


In certain embodiments, the reset modeling subsystem 233 can indicate, via the user interface 211, where a virtual object 201 can or cannot be repositioned based on weight and clearance constraints of virtual objects 201 within the virtual reset 203. For example, the reset modeling subsystem 233 can determine, responsive to a selection of a virtual object 201, a set of possible locations within the virtual reset 203 where the virtual object 201 can be moved without violating one or more constraints associated with weight capacity and/or clearances and can indicate the locations in the user interface 211. In another example, the reset modeling subsystem 233 can determine, responsive to a selection of a virtual object 201, a set of locations within the virtual reset 203 where the virtual object 201 cannot be moved without violating one or more constraints associated with weight capacity and/or clearances and can indicate the locations in the user interface 211.
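
One way to realize this indication, sketched here purely for illustration, is to evaluate each candidate location against the weight and clearance checks and partition the candidates into allowed and blocked sets that the user interface can then highlight. The candidate structure and helper name are assumptions.

```python
def classify_locations(candidates: list, item_weight_kg: float, item_height_ft: float):
    """Hypothetical helper: split candidate locations into those where the selected
    object can be moved without violating weight or clearance constraints and those
    where it cannot. Each candidate carries its remaining capacity and clearance."""
    allowed, blocked = [], []
    for loc in candidates:
        fits = (item_weight_kg <= loc["remaining_capacity_kg"]
                and item_height_ft <= loc["clearance_ft"])
        (allowed if fits else blocked).append(loc["name"])
    return allowed, blocked

# Usage: shelf-1 has room for the item; shelf-2 would exceed its remaining capacity.
candidates = [
    {"name": "shelf-1", "remaining_capacity_kg": 40, "clearance_ft": 4.0},
    {"name": "shelf-2", "remaining_capacity_kg": 10, "clearance_ft": 2.5},
]
allowed, blocked = classify_locations(candidates, item_weight_kg=25, item_height_ft=3.5)
# allowed == ["shelf-1"], blocked == ["shelf-2"]
```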


At block 550, the method 500 involves storing, by the reset modeling subsystem 233, the 3D virtual reset 203 by including, in the 3D virtual reset, information about the 3D virtual object 201 and information about the edit received in block 530. In some instances, the reset modeling subsystem 233 can store an edited virtual object 201 and/or an edit at a storage location in a data storage unit (e.g., data repository 137 and/or data storage unit 214), including storing information about the virtual object 201 and information about the edit. The stored information about the edited virtual object 201 could include the edited virtual object 201 itself or a link to the storage location of the edited virtual object 201. The stored information about the edit could include the edit itself or a link to the storage location of the edit. In some instances, the reset modeling subsystem 233 can store, for multiple edited virtual objects 201 and/or edits in a virtual reset 203, the edited virtual objects 201 and/or edits at respective storage locations in the data storage unit, including storing information about the respective virtual objects 201 and/or information about the respective edits.
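
Purely as an illustrative sketch of storing a reset with links rather than embedded objects, the record could hold references to each virtual object's storage location together with a list of the edits applied. The file format, field names, and values below are hypothetical.

```python
import json

def store_virtual_reset(path: str, object_refs: list, edits: list) -> None:
    """Hypothetical storage routine: persist a reset as references (or links) to its
    virtual objects plus a record of the edits applied to them."""
    record = {"objects": object_refs, "edits": edits}
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

# Usage: store object links alongside a move-and-rotate edit record.
store_virtual_reset(
    "endcap_reset.json",
    object_refs=[
        {"object_id": "shelf-01", "link": "objects/shelf-01.json"},
        {"object_id": "pool-shock-1268439383", "link": "objects/pool_shock.json"},
    ],
    edits=[
        {"object_id": "pool-shock-1268439383", "type": "move",
         "to": {"x": 0.5, "y": 1.2, "z": 0.2}, "rotation_deg": {"y": 90}},
    ],
)
```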


In certain examples, responsive to receiving a request to present the virtual reset 203 stored in block 550, the reset modeling subsystem 233 can display, via the user interface 211, the virtual reset 203 in an augmented reality user interface 211. The reset modeling subsystem 233, responsive to receiving a selection of a virtual object 201 of the virtual reset 203, can display properties 202 information associated with the virtual object 201 of the virtual reset 203. For example, the associated properties 202 information could be included in metadata of the virtual object 201 and could include a weight, dimensions, brand information, a price, an item identifier, or other property. In some examples, the virtual object 201 could be signage and displaying the virtual reset 203 in the augmented reality environment includes presenting the signage. FIG. 7 depicts a method for rendering a virtual reset in an augmented reality scene, according to certain embodiments disclosed herein.



FIG. 6 depicts an illustration of a user interface 211 tool for applying edits to a virtual reset 203, which can be used with the method 500 of FIG. 5, according to certain embodiments disclosed herein. FIG. 6 depicts a view of a user interface 211-1 for generating and/or editing a virtual reset 203 using the reset object modeling application 133. A depicted virtual reset 601 includes the following virtual objects 201: VO 602, VO 603, VO 604, VO 605, VO 606, and VO 607. As depicted in FIG. 6, VO 605 has been selected by the user. The user interface 211-1 includes tools for editing the selected VO 605. For example, user interface 211 objects 608 and 609 enable rotation of the selected VO 605 in a counterclockwise or clockwise direction, respectively. User interface 211 objects 610, 611, and 612 enable duplication of the selected VO 605 in a horizontal direction (“stack width”), in a vertical direction (“stack height”), or in a direction behind (“stack depth”) the selected VO 605. User interface 211 objects 613 enable further rotation operations, for example, about an x-axis, y-axis, or z-axis. User interface 211 object 614 enables deletion of the selected VO 605. User interface 211 object 615 enables addition of an additional virtual object 201 to the virtual reset 601. The tools depicted in FIG. 6 are examples, and further or different tools may be displayed. For example, a tool for moving the selected VO 605 from its current depicted position to a subsequent position could be provided.



FIG. 7 depicts a method for rendering a virtual reset in an augmented reality scene, according to certain embodiments disclosed herein. One or more computing devices (e.g., the modeling system 130 or the AR and/or VR reset rendering subsystem 235) implement operations depicted in FIG. 7. For illustrative purposes, the method 700 is described with reference to certain examples depicted in the figures. Other implementations, however, are possible.


At block 710, the method 700 involves storing, by the modeling system 130, a 3D virtual object 201 that corresponds to a real-world object 201X, the 3D virtual object 201 including a superimposition of an image area showing a portion of the real-world object 201X on a face of a 3D shape. The 3D virtual object 201 can be defined per the method 300 of FIG. 3. In some instances, the virtual object generator subsystem 231 receives, via the user interface 211, a selection of a 3D shape (e.g., a cube), from a plurality of shapes (e.g., a cube, a rectangular prism, a cylinder, a sphere, a spheroid, a cone, a pyramid, etc.) that includes a plurality of faces including the face. The virtual object generator subsystem 231 further receives a selection of the face and an image to superimpose on the face showing a portion of the real-world object 201X. The virtual object generator subsystem 231 can determine an area of the image that corresponds to the portion of the real-world object and associate the area of the image with the face. The virtual object generator subsystem 231 can receive an input indicating further properties 202 for the 3D shape, for example, dimensions and/or a weight to associate with the virtual object 201. The virtual object generator subsystem 231 can associate properties 202 information with the virtual object 201, including the selected 3D shape, the associated area of the image (for the face), and the further properties 202 input for the 3D shape. The properties 202 information can be included in the virtual object 201 or in metadata of the virtual object 201. In certain examples, associating the area of the image with the selected face of the 3D shape can include presenting the image imposed on the selected face and providing an editing interface object so that a user can change boundaries of the portion of the real-world object 201X to correspond with boundaries of the face. Further details about generating a virtual object 201 are described in FIG. 3 and FIGS. 4A, 4B, 4C, 4D, and 4E.


At block 720, the method 700 involves storing, by the modeling system, a virtual reset 203 that includes information about the 3D virtual object 201 and a position of the 3D virtual object 201 in the virtual reset 203. The virtual reset 203 can be defined per the method 500 of FIG. 5. In some instances, generating the 3D virtual reset can include presenting, at the user interface 211, the virtual object 201 in the virtual reset 203 at a first position, receiving an edit to the virtual object 201 in the virtual reset 203, updating the presentation of the virtual reset 203 by showing the edit, and storing the virtual reset 203 by including information about the virtual object 201 and information about the edit in the virtual reset 203. The information about the virtual object 201 can include the virtual object 201 itself or a link to the virtual object 201. In some instances, the edit to the virtual object 201 could include a change in position of the virtual object 201 from a first position to a second position. In some instances, the edit to the virtual object 201 could include a rotation of the virtual object 201. In some instances, the edit to the virtual object 201 includes a change to images of one or more of the faces of the virtual object 201. In some instances, the edit to the virtual object 201 includes a resizing or other change in dimensions of the virtual object 201. The reset modeling subsystem 233 can store the edited virtual object 201 at a storage location in a data storage unit, including the information about the virtual object 201 and/or the information about the edit. In some instances, the information about the virtual object 201 and the information about the edit include the edited virtual object 201 or a link to the storage location of the edited virtual object 201. In some instances, the information about the virtual object 201 includes the virtual object 201 or a link to the virtual object 201 and the information about the edit includes the edit or a link to the storage location of the edit. Further examples of generating and/or editing a virtual reset 203 are described herein in FIG. 5 and FIG. 6.


At block 730, the method 700 involves presenting, by the modeling system, the virtual reset 203 in an augmented reality and/or virtual reality environment, the presentation showing the 3D virtual object 201 at the position. In certain embodiments, the user interface 211 can include an augmented reality view which can display virtual resets 203 within an augmented reality (AR) and/or virtual reality (VR) scene 215 such that the virtual reset 203 appears to be displayed within a physical environment of a user when viewed by the user through the user interface 211 in the augmented reality view. In certain embodiments, the AR and/or VR reset rendering subsystem 235 moves the virtual reset 203 within the augmented reality environment, responsive to receiving an input in the augmented reality environment, so that a location of the virtual reset 203 within the augmented reality environment corresponds to a physical location, in a physical environment, of a physical reset to be assembled. In some instances, the user uses the displayed virtual reset 203, which includes an arrangement of virtual objects 201 in virtual space, as a guide to assemble a corresponding physical reset which includes a like arrangement of physical objects 201X in a physical environment of the user. The reset modeling subsystem 233, responsive to receiving a selection of a virtual object 201 of the virtual reset 203, can display properties 202 information associated with the virtual object 201 of the virtual reset 203 within the augmented reality view. For example, the associated properties 202 information could be included in metadata of the virtual object 201 and could include a weight, dimensions, brand information, a price, an item identifier, object material, restrictions on placement, or other property 202.



FIG. 8A depicts an illustration of a user interface 211 for instructing a display of a virtual reset 203 within an augmented reality scene 215, according to certain embodiments disclosed herein. FIG. 8A depicts a user interface 211-1 display of a reset 801. The user interface 211-1 includes a user interface object 802 for enabling display of an augmented reality view of the reset 801 as depicted in FIG. 8B.



FIG. 8B depicts an illustration of a user interface 211 for viewing the display of the virtual reset 203 of FIG. 8A within an augmented reality scene 215, according to certain embodiments disclosed herein. For example, the user interface 211-2 of FIG. 8B is displayed responsive to detecting a selection of the user interface object 802 of FIG. 8A, which requested display of the reset 801 in the augmented reality view. FIG. 8B depicts an augmented reality scene 215 which includes the displayed reset 801.


In other embodiments, the virtual objects and virtual resets described herein, as well as the methods to create the virtual objects and virtual resets described herein, can be utilized outside of a virtual or augmented reality environment. In one embodiment, a virtual object and/or virtual reset may simply be presented as an image or a rotatable 3D object, independent of a virtual or augmented reality environment.


Any suitable computer system or group of computer systems can be used for performing the operations described herein. For example, FIG. 9 depicts an example of a computer system 900. The depicted example of the computer system 900 includes a processor 902 communicatively coupled to one or more memory devices 904. The processor 902 executes computer-executable program code stored in a memory device 904, accesses information stored in the memory device 904, or both. Examples of the processor 902 include a microprocessor, an application-specific integrated circuit (“ASIC”), a field-programmable gate array (“FPGA”), or any other suitable processing device. The processor 902 can include any number of processing devices, including a single processing device.


The memory device 904 includes any suitable non-transitory computer-readable medium for storing program code 906, program data 908, or both. A computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include a magnetic disk, a memory chip, a ROM, a RAM, an ASIC, optical storage, magnetic tape or other magnetic storage, or any other medium from which a processing device can read instructions. The instructions may include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript. In various examples, the memory device 904 can be volatile memory, non-volatile memory, or a combination thereof.


The computer system 900 executes program code 906 that configures the processor 902 to perform one or more of the operations described herein. Examples of the program code 906 include, in various embodiments, the modeling system 130 and subsystems thereof (including the virtual object generator subsystem 231, the reset modeling subsystem 233, and the AR and/or VR reset rendering subsystem 235) of FIG. 1, which may include any other suitable systems or subsystems that perform one or more operations described herein (e.g., one or more neural networks, encoders, attention propagation subsystem and segmentation subsystem). The program code 906 may be resident in the memory device 904 or any suitable computer-readable medium and may be executed by the processor 902 or any other suitable processor.


The processor 902 is an integrated circuit device that can execute the program code 906. The program code 906 can be for executing an operating system, an application system or subsystem, or both. When executed by the processor 902, the instructions cause the processor 902 to perform operations of the program code 906. When being executed by the processor 902, the instructions are stored in a system memory, possibly along with data being operated on by the instructions. The system memory can be a volatile memory storage type, such as a Random Access Memory (RAM) type. The system memory is sometimes referred to as Dynamic RAM (DRAM) though need not be implemented using a DRAM-based technology. Additionally, the system memory can be implemented using non-volatile memory types, such as flash memory.


In some embodiments, one or more memory devices 904 store the program data 908 that includes one or more datasets described herein. In some embodiments, one or more of the data sets are stored in the same memory device (e.g., one of the memory devices 904). In additional or alternative embodiments, one or more of the programs, data sets, models, and functions described herein are stored in different memory devices 904 accessible via a data network. One or more buses 910 are also included in the computer system 900. The buses 910 communicatively couple one or more components of a respective one of the computer system 900.


In some embodiments, the computer system 900 also includes a network interface device 912. The network interface device 912 includes any device or group of devices suitable for establishing a wired or wireless data connection to one or more data networks. Non-limiting examples of the network interface device 912 include an Ethernet network adapter, a modem, and/or the like. The computer system 900 is able to communicate with one or more other computing devices via a data network using the network interface device 912.


The computer system 900 may also include a number of external or internal devices, an input device 914, a presentation device 916, or other input or output devices. For example, the computer system 900 is shown with one or more input/output (“I/O”) interfaces 918. An I/O interface 918 can receive input from input devices or provide output to output devices. An input device 914 can include any device or group of devices suitable for receiving visual, auditory, or other suitable input that controls or affects the operations of the processor 902. Non-limiting examples of the input device 914 include a touchscreen, a mouse, a keyboard, a microphone, a separate mobile computing device, etc. A presentation device 916 can include any device or group of devices suitable for providing visual, auditory, or other suitable sensory output. Non-limiting examples of the presentation device 916 include a touchscreen, a monitor, a speaker, a separate mobile computing device, etc.


Although FIG. 9 depicts the input device 914 and the presentation device 916 as being local to the computer system 900, other implementations are possible. For instance, in some embodiments, one or more of the input device 914 and the presentation device 916 can include a remote client-computing device (e.g., user computing device 110) that communicates with the computer system 900 via the network interface device 912 using one or more data networks described herein.


Embodiments may comprise a computer program that embodies the functions described and illustrated herein, wherein the computer program is implemented in a computer system that comprises instructions stored in a machine-readable medium and a processor that executes the instructions. However, it should be apparent that there could be many different ways of implementing embodiments in computer programming, and the embodiments should not be construed as limited to any one set of computer program instructions. Further, a skilled programmer would be able to write such a computer program to implement an embodiment of the disclosed embodiments based on the appended flow charts and associated description in the application text. Therefore, disclosure of a particular set of program code instructions is not considered necessary for an adequate understanding of how to make and use embodiments. Further, those skilled in the art will appreciate that one or more aspects of embodiments described herein may be performed by hardware, software, or a combination thereof, as may be embodied in one or more computer systems. Moreover, any reference to an act being performed by a computer should not be construed as being performed by a single computer as more than one computer may perform the act.


The example embodiments described herein can be used with computer hardware and software that perform the methods and processing functions described previously. The systems, methods, and procedures described herein can be embodied in a programmable computer, computer-executable software, or digital circuitry. The software can be stored on computer-readable media. For example, computer-readable media can include a floppy disk, RAM, ROM, hard disk, removable media, flash memory, memory stick, optical media, magneto-optical media, CD-ROM, etc. Digital circuitry can include integrated circuits, gate arrays, building block logic, field programmable gate arrays (FPGA), etc.


In some embodiments, the functionality provided by computer system 900 may be offered as cloud services by a cloud service provider. For example, FIG. 10 depicts an example of a cloud computer system 1000 offering a service for generation of virtual objects 201, generation of virtual resets 203, and display of virtual resets 203 in an augmented reality view that can be used by a number of user subscribers using user devices 1004A, 1004B, and 1004C across a data network 1006. In the example, the service for generation of virtual objects 201, generation of virtual resets 203, and display of virtual resets 203 in an augmented reality view may be offered under a Software as a Service (SaaS) model. One or more users may subscribe to the service for generation of virtual objects 201, generation of virtual resets 203, and display of virtual resets 203 in an augmented reality view and the cloud computer system 1000 performs the processing to provide the service for generation of virtual objects 201, generation of virtual resets 203, and display of virtual resets 203 in an augmented reality view to subscribers. The cloud computer system 1000 may include one or more remote server computers 1008.


The remote server computers 1008 include any suitable non-transitory computer-readable medium for storing program code 1010 (e.g., the modeling system 130 and the virtual object generator subsystem 231, the reset modeling subsystem 233, and the AR and/or VR reset rendering subsystem 235 of FIG. 1) and program data 1012, or both, which is used by the cloud computer system 1000 for providing the cloud services. A computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include a magnetic disk, a memory chip, a ROM, a RAM, an ASIC, optical storage, magnetic tape or other magnetic storage, or any other medium from which a processing device can read instructions. The instructions may include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript. In various examples, the server computers 1008 can include volatile memory, non-volatile memory, or a combination thereof.


One or more of the server computers 1008 execute the program code 1010 that configures one or more processors of the server computers 1008 to perform one or more of the operations that provide virtual object generation, virtual reset generation, and augmented-reality-view display of virtual reset services. As depicted in the embodiment in FIG. 10, the one or more servers providing the services for generation of virtual objects 201, generation of virtual resets 203, and display of virtual resets 203 in an augmented reality view may implement the modeling system 130 and the virtual object generator subsystem 231, the reset modeling subsystem 233, and the AR and/or VR reset rendering subsystem 235. Any other suitable systems or subsystems that perform one or more operations described herein (e.g., one or more development systems for configuring an interactive user interface) can also be implemented by the cloud computer system 1000.


In certain embodiments, the cloud computer system 1000 may implement the services by executing program code and/or using program data 1012, which may be resident in a memory device of the server computers 1008 or any suitable computer-readable medium and may be executed by the processors of the server computers 1008 or any other suitable processor.


In some embodiments, the program data 1012 includes one or more datasets and models described herein. In some embodiments, one or more of data sets, models, and functions are stored in the same memory device. In additional or alternative embodiments, one or more of the programs, data sets, models, and functions described herein are stored in different memory devices accessible via the data network 1006.


The cloud computer system 1000 also includes a network interface device 1014 that enables communications to and from the cloud computer system 1000. In certain embodiments, the network interface device 1014 includes any device or group of devices suitable for establishing a wired or wireless data connection to the data networks 1006. Non-limiting examples of the network interface device 1014 include an Ethernet network adapter, a modem, and/or the like. The service for generation of virtual objects 201, generation of virtual resets 203, and display of virtual resets 203 in an augmented reality view is able to communicate with the user devices 1004A, 1004B, and 1004C via the data network 1006 using the network interface device 1014.


The example systems, methods, and acts described in the embodiments presented previously are illustrative, and, in alternative embodiments, certain acts can be performed in a different order, in parallel with one another, omitted entirely, and/or combined between different example embodiments, and/or certain additional acts can be performed, without departing from the scope and spirit of various embodiments. Accordingly, such alternative embodiments are included within the scope of claimed embodiments.


Although specific embodiments have been described above in detail, the description is merely for purposes of illustration. It should be appreciated, therefore, that many aspects described above are not intended as required or essential elements unless explicitly stated otherwise. Modifications of, and equivalent components or acts corresponding to, the disclosed aspects of the example embodiments, in addition to those described above, can be made by a person of ordinary skill in the art, having the benefit of the present disclosure, without departing from the spirit and scope of embodiments defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures.


Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.


Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.


The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computer system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.


Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.


The use of “adapted to” or “configured to” herein is meant as an open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Where devices, systems, components or modules are described as being configured to perform certain operations or functions, such configuration can be accomplished, for example, by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation such as by executing computer instructions or code, or processors or cores programmed to execute code or instructions stored on a non-transitory memory medium, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter-process communications, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.


Additionally, the use of “based on” is meant to be open and inclusive, in that, a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.


While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude the inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.

Claims
  • 1. A computer-implemented method in which one or more processing devices perform operations comprising: receiving, in association with generating a three-dimensional (3D) virtual reset, a selection of a 3D virtual object comprising first boundaries; presenting, at a user interface, the 3D virtual object in the 3D virtual reset at a first position in the 3D virtual reset, wherein the 3D virtual object comprises a plurality of faces including a first face, wherein an association exists between a first image and the first face, and wherein an area of the first image is presented superimposed on the first face; receiving, via the user interface, a request to perform an edit to the association between the first face of the 3D virtual object in the 3D virtual reset and the first image; responsive to receiving the request to perform the edit via the user interface, changing the association of the first face from the first image to a second image, that is different from the first image and causes the second image to be presented in lieu of the first image; storing the 3D virtual reset by including, in the 3D virtual reset, information about the 3D virtual object and information about the edit; receiving a second selection of a second 3D virtual object comprising a virtual shelf object comprising second boundaries; and presenting, at the user interface, the second 3D virtual object in the 3D virtual reset at a second position, wherein storing the 3D virtual reset further comprises including, in the 3D virtual reset, information about the second 3D virtual object; and wherein the edit further comprises: responsive to receiving an input, changing a position of the 3D virtual object from the first position to a third position in the 3D virtual reset such that a first portion of the first boundaries of the 3D virtual object is adjacent to a second portion of the second boundaries of the second 3D virtual object; determining a weight capacity for the second 3D virtual object adjacent to the 3D virtual object; determining a current load that the second 3D virtual object is simulated to be under; determining a remaining weight capacity based on the weight capacity for the second 3D virtual object and the current load; determining a weight associated with the 3D virtual object; comparing the remaining weight capacity for the second 3D virtual object and the weight associated with the 3D virtual object; and responsive to determining that the weight associated with the 3D virtual object is greater than the remaining weight capacity for the second 3D virtual object: returning the 3D virtual object from the third position to the first position to indicate a denial of the input, wherein the stored 3D virtual reset comprises the 3D virtual object at the first position and the second 3D virtual object at the second position.
  • 2. The computer-implemented method of claim 1, wherein the information about the 3D virtual object comprises the 3D virtual object or a link to the 3D virtual object.
  • 3. The computer-implemented method of claim 1, wherein the edit further comprises one or more of a change in position of the 3D virtual object in the 3D virtual reset from the first position to the second position or a rotation of the 3D virtual object.
  • 4. The computer-implemented method of claim 1, wherein the operations further comprise resizing the 3D virtual object, wherein the information about the edit includes information about the resized 3D virtual object.
  • 5. The computer-implemented method of claim 1, the operations further comprising: storing the edited 3D virtual object at a storage location in a data storage unit, wherein the information about the 3D virtual object and the information about the edit include the edited 3D virtual object or a link to the storage location of the edited 3D virtual object.
  • 6. The computer-implemented method of claim 1, the operations further comprising: storing the edit at a storage location in a data storage unit, wherein the information about the 3D virtual object includes the 3D virtual object or a link to the 3D virtual object, and wherein the information about the edit includes the edit or the link to the storage location of the edit.
  • 7. The computer-implemented method of claim 1, wherein changing the position of the 3D virtual object comprises stacking the 3D virtual object on the second 3D virtual object.
  • 8. The computer-implemented method of claim 1, wherein the first boundaries are above the second boundaries within the 3D virtual reset.
  • 9. The computer-implemented method of claim 1, wherein the 3D virtual reset further comprises a third 3D virtual object, wherein the second 3D virtual object comprises a first shelf object, wherein the third 3D virtual object comprises a second shelf object vertical to the first shelf object, and wherein the edit further comprises: determining a distance between the first shelf object and the second shelf object; determining a height of the 3D virtual object; and responsive to determining that the height is greater than the distance, returning the 3D virtual object from the first subsequent position to the first position, wherein the stored 3D virtual reset comprises the 3D virtual object at the first position and the second 3D virtual object at the second position.
  • 10. The computer-implemented method of claim 1, the operations further comprising: determining a property of the 3D virtual object, wherein storing the 3D virtual reset further comprises associating the property with the 3D virtual object; responsive to receiving a subsequent input, presenting the stored 3D virtual reset in an augmented reality interface; and responsive to receiving the selection of the 3D virtual object, presenting the property, wherein the property comprises a weight, dimensions, brand information, a price, an item identifier, or other property.
  • 11. The computer-implemented method of claim 10, wherein presenting the 3D virtual reset includes presenting the property based on metadata of the 3D virtual object.
  • 12. The computer-implemented method of claim 10, wherein the 3D virtual reset includes a signage model generated from metadata of the 3D virtual object, wherein presenting the 3D virtual reset includes presenting the signage model.
  • 13. The computer-implemented method of claim 1, wherein the operations further comprise, responsive to receiving the request, enabling a change to one or more boundaries of the second image with respect to the first face of the 3D virtual object in the 3D virtual reset.
  • 14. A system comprising:
    a processor; and
    a non-transitory computer readable medium storing computer-readable program instructions that, when executed by the processor, cause the system to perform operations comprising:
      receiving, in association with generating a three-dimensional (3D) virtual reset, a selection of a 3D virtual object comprising first boundaries;
      presenting, at a user interface, the 3D virtual object in the 3D virtual reset at a first position in the 3D virtual reset, wherein the 3D virtual object comprises a plurality of faces including a first face, wherein an association exists between a first image and the first face, and wherein an area of the first image is presented superimposed on the first face;
      receiving, via the user interface, a request to perform an edit to the association between the first face of the 3D virtual object in the 3D virtual reset and the first image;
      responsive to receiving the request to perform the edit via the user interface, changing the association of the first face from the first image to a second image that is different from the first image, thereby causing the second image to be presented in lieu of the first image;
      storing the 3D virtual reset by including, in the 3D virtual reset, information about the 3D virtual object and information about the edit;
      receiving a second selection of a second 3D virtual object comprising a virtual shelf object comprising second boundaries; and
      presenting, at the user interface, the second 3D virtual object in the 3D virtual reset at a second position,
      wherein storing the 3D virtual reset further comprises including, in the 3D virtual reset, information about the second 3D virtual object; and
      wherein the edit further comprises:
        responsive to receiving an input, changing a position of the 3D virtual object from the first position to a third position in the 3D virtual reset such that a first portion of the first boundaries of the 3D virtual object is adjacent to a second portion of the second boundaries of the second 3D virtual object;
        determining a weight capacity for the second 3D virtual object adjacent to the 3D virtual object;
        determining a current load that the second 3D virtual object is simulated to be under;
        determining a remaining weight capacity based on the weight capacity for the second 3D virtual object and the current load;
        determining a weight associated with the 3D virtual object;
        comparing the remaining weight capacity for the second 3D virtual object and the weight associated with the 3D virtual object; and
        responsive to determining that the weight associated with the 3D virtual object is greater than the remaining weight capacity for the second 3D virtual object: returning the 3D virtual object from the third position to the first position to indicate a denial of the input, wherein the stored 3D virtual reset comprises the 3D virtual object at the first position and the second 3D virtual object at the second position.
  • 15. The system of claim 14, wherein the information about the 3D virtual object comprises the 3D virtual object or a link to the 3D virtual object.
  • 16. The system of claim 14, wherein the edit further comprises one or more of a change in position of the 3D virtual object in the 3D virtual reset from the first position to the second position or a rotation of the 3D virtual object.
  • 17. A non-transitory computer-readable medium having program code that is stored thereon, the program code executable by one or more processing devices for performing operations comprising:
    receiving, in association with generating a three-dimensional (3D) virtual reset, a selection of a 3D virtual object comprising first boundaries;
    presenting, at a user interface, the 3D virtual object in the 3D virtual reset at a first position in the 3D virtual reset, wherein the 3D virtual object comprises a plurality of faces including a first face, wherein an association exists between a first image and the first face, and wherein an area of the first image is presented superimposed on the first face;
    receiving, via the user interface, a request to perform an edit to the association between the first face of the 3D virtual object in the 3D virtual reset and the first image;
    responsive to receiving the request to perform the edit via the user interface, changing the association of the first face from the first image to a second image that is different from the first image, thereby causing the second image to be presented in lieu of the first image;
    updating the presentation of the 3D virtual reset to present the 3D virtual object with the second image presented at the first face instead of the first image;
    storing the 3D virtual reset by including, in the 3D virtual reset, information about the 3D virtual object and information about the edit;
    receiving a second selection of a second 3D virtual object comprising a virtual shelf object comprising second boundaries; and
    presenting, at the user interface, the second 3D virtual object in the 3D virtual reset at a second position,
    wherein storing the 3D virtual reset further comprises including, in the 3D virtual reset, information about the second 3D virtual object; and
    wherein the edit further comprises:
      responsive to receiving an input, changing a position of the 3D virtual object from the first position to a third position in the 3D virtual reset such that a first portion of the first boundaries of the 3D virtual object is adjacent to a second portion of the second boundaries of the second 3D virtual object;
      determining a weight capacity for the second 3D virtual object adjacent to the 3D virtual object;
      determining a current load that the second 3D virtual object is simulated to be under;
      determining a remaining weight capacity based on the weight capacity for the second 3D virtual object and the current load;
      determining a weight associated with the 3D virtual object;
      comparing the remaining weight capacity for the second 3D virtual object and the weight associated with the 3D virtual object; and
      responsive to determining that the weight associated with the 3D virtual object is greater than the remaining weight capacity for the second 3D virtual object: returning the 3D virtual object from the third position to the first position to indicate a denial of the input, wherein the stored 3D virtual reset comprises the 3D virtual object at the first position and the second 3D virtual object at the second position.
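The weight-capacity limitation recited in claims 1, 14, and 17 can be pictured with a minimal sketch. The sketch below is not part of the claimed subject matter; the data model and all names (VirtualObject, VirtualShelf, try_place_on_shelf) are hypothetical assumptions chosen only to illustrate the recited comparison and the denial of the input.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical data model; fields are illustrative only.
@dataclass
class VirtualObject:
    name: str
    weight: float                          # simulated weight of the object
    position: Tuple[float, float, float]

@dataclass
class VirtualShelf:
    name: str
    weight_capacity: float                 # maximum simulated load the shelf supports
    position: Tuple[float, float, float]
    contents: List[VirtualObject] = field(default_factory=list)

    def current_load(self) -> float:
        """Sum of the simulated weights the shelf is currently under."""
        return sum(obj.weight for obj in self.contents)

    def remaining_capacity(self) -> float:
        return self.weight_capacity - self.current_load()

def try_place_on_shelf(obj: VirtualObject, shelf: VirtualShelf,
                       target: Tuple[float, float, float]) -> bool:
    """Tentatively move the object onto the shelf; revert the move (deny the
    input) if the object's weight exceeds the remaining weight capacity."""
    original = obj.position
    obj.position = target                  # tentatively apply the edit
    if obj.weight > shelf.remaining_capacity():
        obj.position = original            # return object to its first position
        return False                       # denial of the input
    shelf.contents.append(obj)
    return True
```

In this sketch the move is reverted whenever the object's weight exceeds the shelf's remaining capacity, which mirrors returning the object from the third position to the first position in the claims.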
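Claim 9 adds a second feasibility check based on the vertical spacing between two shelf objects. A minimal sketch of that check, again with hypothetical names and a simplified one-dimensional height model, might look as follows.

```python
def fits_between_shelves(object_height: float,
                         lower_shelf_top: float,
                         upper_shelf_bottom: float) -> bool:
    # Distance between the first (lower) and second (upper) shelf objects.
    shelf_spacing = upper_shelf_bottom - lower_shelf_top
    # Placement is allowed only when the object's height does not exceed
    # that distance; otherwise the object is returned to its original
    # position, as in the weight-capacity sketch above.
    return object_height <= shelf_spacing
```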
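The image-to-face association edit recited in claims 1 and 13 can be pictured as replacing the image referenced by a face and, optionally, adjusting which area of that image is superimposed on the face. The Face structure and normalized-coordinate area below are illustrative assumptions, not a description of the actual implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Face:
    # Identifier of the image currently superimposed on this face and the
    # normalized (u0, v0, u1, v1) area of that image that is shown.
    image_id: str
    area: Tuple[float, float, float, float] = (0.0, 0.0, 1.0, 1.0)

def change_face_image(face: Face, second_image_id: str) -> None:
    """Change the association of the face from the first image to a second,
    different image so the second image is presented in lieu of the first."""
    if second_image_id != face.image_id:
        face.image_id = second_image_id

def change_image_boundaries(face: Face,
                            area: Tuple[float, float, float, float]) -> None:
    """Adjust the boundaries of the image with respect to the face (claim 13)."""
    face.area = area
```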
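Claims 2, 5, and 6 recite that the stored reset may include either the object and edit data themselves or links to their storage locations. A hedged sketch of both options is shown below; the JSON record layout, the in-memory storage dictionary, and the link strings are all hypothetical.

```python
import json
from typing import Any, Dict

def store_reset(reset_id: str,
                object_info: Dict[str, Any],
                edit_info: Dict[str, Any],
                storage: Dict[str, str]) -> None:
    """Store a reset record containing information about the object and the
    edit; each piece may be embedded data or a link to a storage location."""
    record = {"reset_id": reset_id,
              "object_info": object_info,
              "edit_info": edit_info}
    storage[reset_id] = json.dumps(record)

storage: Dict[str, str] = {}

# Option 1: embed the edited object and the edit directly in the reset.
store_reset("reset-001",
            object_info={"faces": 6, "weight": 2.5},
            edit_info={"type": "image_swap", "face": 0},
            storage=storage)

# Option 2: include only links to the storage locations (hypothetical paths).
store_reset("reset-002",
            object_info={"object_link": "objects/cereal-box.obj"},
            edit_info={"edit_link": "edits/edit-42.json"},
            storage=storage)
```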
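Claims 10 through 12 recite presenting object properties (weight, dimensions, brand information, price, item identifier) and a signage model generated from object metadata. One purely illustrative way to assemble signage text from such metadata, assuming hypothetical key names:

```python
from typing import Any, Dict

def build_signage_text(metadata: Dict[str, Any]) -> str:
    """Assemble signage-style text from object metadata (illustrative keys)."""
    order = ("brand", "price", "item_id", "weight", "dimensions")
    return " | ".join(f"{key}: {metadata[key]}" for key in order if key in metadata)

# Example: selecting the object in an augmented reality view could present
# build_signage_text({"brand": "Acme", "price": "$3.99", "item_id": "12345"})
# -> 'brand: Acme | price: $3.99 | item_id: 12345'
```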
Related Publications (1)
Number: 20230419628 A1
Date: Dec 2023
Country: US