Making high-quality three-dimensional (3D) worlds is difficult. Historically, 3D content creation pipelines (e.g., for games, film, etc.) have been mostly linear: due to concerns over consistency and fidelity, multiple content creators cannot work on the same asset simultaneously. Because of these constraints, large or immersive worlds are difficult, if not impossible, to create. World size in particular has historically been limited by the need for assets to reside client-side with each content creator.
3D content is in high demand (e.g., for training autonomous vehicles and robots, for augmented reality, virtual reality, design, etc.). However, only a relatively small number of people or organizations have the skills and/or tools to make high-quality 3D worlds. In addition, the complexity of producing high-quality 3D content continues to increase: making 3D content that is vibrant, interesting, and attractive to consumers requires contributions from a growing number of traditionally distinct departments (e.g., 3D object modeling, world modeling, animation, physics, rendering, etc.), all while the lines among content creators, and even between content creators and content consumers, continue to blur.
Disclosed is, in general, a cloud-centric platform for generating virtual three-dimensional (3D) content that allows users to collaborate online and that can be connected to different software tools (applications). Using the platform, virtual environments (e.g., scenes, worlds, universes) can be created, accessed, and interacted with.
In embodiments, a server includes a database that stores assets comprising three-dimensional data useful for generating a virtual environment (e.g., a virtual scene), and also includes a synchronizer. The synchronizer can synchronize a change made by a client coupled to the server and data of the assets to include the change in the database, and can also synchronize changes in the database and data of clients coupled to the server.
The clients interoperate with each other to produce and modify the virtual environment. The clients include different types of clients that can operate on an object of the virtual environment in different ways.
In operation, in an embodiment, a first change to a first element of an asset is generated by a first application. The first element is updated in the database to include the first change. The first change is provided from the database to a second application. In response to the first change, a second change can be generated by the second application, in which case the database is updated to include the second change.
The platform thus allows collaborative, Web-based, real-time editing through a published interface so that clients that are subscribers to an asset can work together on that asset or object. Subscribers can work together at the same time or at different times, while a version control system is used to manage changes and maintain the integrity and fidelity of the work product across potentially multiple simultaneous accesses and/or collaborators.
From the point of view of the server and database, updates to an asset are provided from the clients as incremental updates (deltas) to the previous version of the asset. From the point of view of a client, updates to an asset from the server are also provided as deltas to the previous version of the asset. Consequently, network traffic and the computational loads on server and client devices are reduced. This advantageously provides the ability to perform bidirectional real-time updates between the clients and server for dynamic, complex virtual environments. Modifications to an environment can happen live, in real-time.
These and other objects and advantages of the various embodiments according to the present invention will be recognized by those of ordinary skill in the art after reading the following detailed description of the embodiments that are illustrated in the various drawing figures.
The accompanying drawings, which are incorporated in and form a part of this specification and in which like numerals depict like elements, illustrate embodiments of the present disclosure and, together with the detailed description, serve to explain the principles of the disclosure.
Reference will now be made in detail to the various embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. While described in conjunction with these embodiments, it will be understood that they are not intended to limit the disclosure to these embodiments. On the contrary, the disclosure is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present disclosure, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure.
Some portions of the detailed descriptions that follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those utilizing physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as transactions, bits, values, elements, symbols, characters, samples, pixels, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present disclosure, discussions utilizing terms such as “storing,” “saving,” “changing,” “updating,” “synchronizing,” “providing,” “performing,” “making a change,” “generating,” “rendering,” “identifying,” “loading,” “resolving,” “displaying,” “assembling,” “accumulating,” “receiving,” “sending,” or the like, refer to actions and processes of an apparatus or computer system or similar electronic computing device or processor. A computer system or similar electronic computing device manipulates and transforms data represented as physical (electronic) quantities within memories, registers or other such information storage, transmission or display devices.
Embodiments described herein may be discussed in the general context of computer-executable instructions residing on some form of computer-readable storage medium, such as program modules, executed by one or more computers or other devices. By way of example, and not limitation, computer-readable storage media may comprise non-transitory computer storage media and communication media. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments.
Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable ROM (EEPROM), flash memory (e.g., a solid-state drive) or other memory technology, compact disk ROM (CD-ROM), digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed to retrieve that information.
Communication media can embody computer-executable instructions, data structures, and program modules, and includes any information delivery media. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared and other wireless media. Combinations of any of the above can also be included within the scope of computer-readable media.
Disclosed is, in general, a cloud-centric platform for generating virtual three-dimensional (3D) content that allows users to collaborate online and that can be connected to different software tools (applications). Using the platform, virtual environments (e.g., scenes, worlds, universes) can be created, accessed, and interacted with.
In embodiments, one or more applications (e.g., Web-based applications) are communicatively coupled to a database that can be accessed, edited, and stored. A wide variety of third-party applications can connect as seamlessly as possible or practicable with the database, and vice versa, through well-designed application programming interfaces (APIs). In embodiments, at least one of the applications is a 3D content creation application (e.g., an animation tool or a computer graphics application such as Autodesk Maya®).
In embodiments, the database is based on the Universal Scene Description (USD) format and schema. The entries or elements in the database are referred to herein as “assets.” Objects in a virtual 3D environment, and the virtual environment itself, can be composed from one or more of the elemental assets. The database can be queried and updated using the applications. Appropriate plug-ins are created and used for applications, to allow those applications to operate smoothly for real-time editing.
As will be described in greater detail, each application interacts with certain attributes or properties of objects that can be described or defined using the assets in the database. For example, a graphics editor (e.g., Photoshop®) can be connected to the database (to an asset in the database) using a plug-in to add a texture to an object in a virtual scene, while a computer graphics application or animation tool (e.g., Autodesk Maya®) can be connected to the database (to an asset in the database) using a plug-in to animate that object (or a different object) in the virtual scene. In other words, an object created with a first application such as Maya® can have associated properties understood only by that application, but the disclosed platform supports the ability for a second application (e.g., Photoshop®) to make changes to that object, while advantageously leaving the properties understood only by the first application undisturbed. The two applications can thus interoperate, essentially interacting with each other, without being directly connected to each other or being aware of each other. More than two applications can interoperate or collaborate in this manner.
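The property-preserving behavior described above can be sketched as follows, under the assumption that an asset is represented as a flat dictionary of namespaced properties. The property names and the merge helper below are illustrative only, not the platform's actual data model:

```python
# Sketch of property-preserving updates: an application's changes are merged
# into an asset without disturbing properties it does not understand.
# All property names here are hypothetical examples.

def apply_update(asset: dict, update: dict) -> dict:
    """Merge an application's update into an asset, leaving every
    property the application did not touch exactly as it was."""
    merged = dict(asset)   # copy, so the previous version survives unchanged
    merged.update(update)  # overwrite only the submitted properties
    return merged

# An object authored in an animation tool, with tool-specific rig data.
asset = {
    "xform:translate": (0.0, 0.0, 0.0),
    "anim:rig": "biped_v2",                    # understood only by the animation tool
    "material:diffuse_texture": "wood_v1.png",
}

# A graphics editor updates only the texture property; the rig data survives.
updated = apply_update(asset, {"material:diffuse_texture": "wood_v2.png"})
```

Because the merge touches only the submitted keys, the animation tool's private `anim:rig` property is carried through unmodified even though the graphics editor has no notion of it.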
Subscribers can identify what assets or objects are of interest to them, and make changes where they have permission to do so. The platform allows collaborative, Web-based real-time editing through a published interface so that subscribers to an asset can work together on that asset or object. Subscribers can work together at the same time or at different times. A version control system is used to manage changes.
In embodiments, the platform 100 includes a system of clients 102a-n, which are applications or software tools that can be executed on or using one or more computing systems (the client devices or systems 103a-n). The client devices 103a-n can include different types of devices; that is, they may have different computational and display capabilities and different operating systems, for example. Depending on its hardware and software capabilities, a given client device 103 may be referred to as either a thick client or a thin client.
The platform 100 also includes a server 104 communicatively coupled to the clients 102a-n. The server 104 can be executed on or using one or more computing systems (e.g., the server system 105).
Generally speaking, in embodiments, the platform 100 is implemented as a cloud-centric platform; that is, it is a Web-based platform that can be implemented using one or more devices connected and working cooperatively via a network (e.g., the Internet).
In an embodiment, each of the clients 102a-n connects to the server 104 through a port or socket 112, and communicates with the server using a common application programming interface (API) that enables bidirectional communication (e.g., the WebSockets API). The clients 102a-n include different types of applications such as, but not limited to: a physics simulation application, an artificial intelligence (AI) application, a global illumination (GI) application, a game engine, a computer graphics application, a renderer, a graphics editor, a virtual reality (VR) application, an augmented reality application, and a scripting application. Because they are different from each other, the clients 102a-n can be called “heterogeneous clients.” The clients 102a-n may also be referred to as “microservices.”
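As a sketch only, the bidirectional exchange might carry JSON messages of the following shape over the WebSocket-style connection. The operation names and field names are hypothetical assumptions for illustration, not a published protocol:

```python
# Sketch of the kind of messages a client might exchange with the server
# over a bidirectional (WebSocket-style) connection. Schema is hypothetical.
import json

def make_subscribe(asset_id: str) -> str:
    """Client -> server: register interest in an asset."""
    return json.dumps({"op": "subscribe", "asset": asset_id})

def make_publish(asset_id: str, delta: dict) -> str:
    """Client -> server: publish an incremental change (delta) to an asset."""
    return json.dumps({"op": "publish", "asset": asset_id, "delta": delta})

sub_msg = make_subscribe("scene/table")
pub_msg = make_publish("scene/table", {"xform:translate": [1.0, 0.0, 0.0]})
```

The same framing would serve in both directions: the server can push the identical `publish` shape to subscribed clients when an asset they follow changes.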
In embodiments, the server 104 includes a database 106. Although referred to singularly as a database, the database 106 can include multiple databases that are implemented and stored on one or more computing systems (e.g., a datacenter). The database 106 stores data representative of assets. Each asset stores 3D data that can be used with other assets to compose a virtual scene. Virtual scenes can be combined to form virtual worlds or universes. In general, the term “virtual environment” is used herein to refer to a virtual scene, world, or universe. Use cases include, but are not limited to: design reviews for product design and architecture; scene generation; scientific visualization (SciVis); automobile simulation (e.g., AutoSIM); cloud versions of games; virtual set production; and social VR with user-generated content and elaborate worlds.
There may be different servers within the platform 100 for different virtual environments, and the owner of a virtual environment may make choices about how the environment is constructed and which resources are provided, in order to achieve the kind of scalability most relevant to that environment.
Each asset in the database 106 can be accessed and optionally changed by one or more of the clients 102a-n. In an embodiment, the clients 102a-n are connected to the database using a respective plug-in 114. However, in embodiments, access to an asset is limited to clients that subscribe to that asset, and a change to an asset can only be made by a subscriber that has permission to do so.
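Subscriber-gated access of this kind can be sketched as follows. The `AssetStore` class, its permission tables, and all identifiers are illustrative assumptions, not the platform's API:

```python
# Sketch of subscriber-gated access: an asset can be read by its subscribers,
# and changed only by a subscriber that also has write permission.
class AssetStore:
    def __init__(self):
        self.assets = {}       # asset_id -> dict of properties
        self.subscribers = {}  # asset_id -> set of subscribed client ids
        self.writers = {}      # asset_id -> set of clients allowed to change it

    def subscribe(self, client_id: str, asset_id: str) -> None:
        self.subscribers.setdefault(asset_id, set()).add(client_id)

    def change(self, client_id: str, asset_id: str, delta: dict) -> None:
        # Only a subscriber with write permission may change the asset.
        if client_id not in self.subscribers.get(asset_id, set()):
            raise PermissionError("client is not a subscriber to this asset")
        if client_id not in self.writers.get(asset_id, set()):
            raise PermissionError("client has no write permission")
        self.assets.setdefault(asset_id, {}).update(delta)

store = AssetStore()
store.subscribe("maya", "scene/table")
store.writers["scene/table"] = {"maya"}         # grant write permission
store.change("maya", "scene/table", {"material:diffuse": "oak"})
```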
Not all of the clients 102a-n are applications that can make changes to assets in the database 106. For example, the clients 102a-n may include applications that render an asset or environment, and may include applications that display an asset or environment.
The platform 100 leverages the cloud. The platform 100 factors out asset storage, editing, and querying into a cloud service (the database 106). The platform 100 allows virtually any application to connect to the server 104.
In embodiments, the server 104 also includes a synchronizer 108. As mentioned above, one or more of the clients 102a-n can make changes to an asset. The synchronizer 108 synchronizes those changes with the data of the asset, and also synchronizes the data of the changed (updated) asset with other clients interested in that asset (e.g., other subscribers to the asset).
More specifically, in embodiments, an asset can be loaded across a network from the server 104 to a first client 102a. The client 102a can make a change or changes (an update) to the asset. In embodiments according to the invention, after the update is made, the client 102a advantageously loads and saves to the database 106 only the update to the asset. That is, the client 102a does not return the entire, updated asset to the database 106; instead, the client 102a saves only the part(s) (e.g., object, property, attribute) of the asset that changed. In turn, the changes to the asset can be provided to one or more of the other clients 102b-n that, for example, subscribe to that asset. That is, the database 106 does not provide the entire, updated asset to the other clients; instead, the database provides only the part(s) (e.g., object, property, attribute) of the asset that changed. The synchronization process in which only changes are shared between the database 106 and the clients 102a-n may be referred to herein as “incremental updates” or “incremental changes.”
Thus, from the point of view of the server 104 (specifically, the database 106), updates to an asset are provided from the clients 102a-n as incremental changes (deltas) to the previous version of the asset; and, from the point of view of a client, updates to an asset from the server are also provided as deltas to the previous version of the asset. Consequently, network traffic and the computational loads on server and client devices are reduced. This advantageously provides the ability to perform bidirectional real-time updates between the clients 102a-n and server 104 for dynamic, complex virtual environments. Modifications to an environment can happen live, in real time. Any runtime can connect to the server 104 and see live updates to the environments.
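A minimal sketch of such incremental (delta) synchronization, assuming assets are property dictionaries (the helper names are illustrative):

```python
# Sketch of incremental updates: the client computes only the properties
# that changed, and both sides apply the same small delta rather than
# transferring the entire asset.

def compute_delta(old: dict, new: dict) -> dict:
    """Return only the properties whose values differ from the old version."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(asset: dict, delta: dict) -> dict:
    """Apply an incremental change to a copy of the previous version."""
    out = dict(asset)
    out.update(delta)
    return out

server_copy = {"pos": (0, 0, 0), "texture": "wood", "rig": "biped"}
client_copy = dict(server_copy)
client_copy["pos"] = (1, 0, 0)   # the client edits one property

delta = compute_delta(server_copy, client_copy)  # only the changed property
server_copy = apply_delta(server_copy, delta)    # server stores just the delta
```

The same `delta` object is what the server would then forward to other subscribers, so traffic in both directions is proportional to what changed, not to the size of the asset.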
To summarize to this point, the server 104 enables the clients 102a-n to publish and subscribe to any asset in the database 106 for which the clients have suitable permissions. In use, multiple clients 102a-n can publish and subscribe to the same set of assets, creating a shared virtual world. The database 106 operates, in essence, as a hub that allows the clients 102a-n to interoperate with each other through changes to the database. Plug-ins 114 for the clients 102a-n allow the clients to interoperate with the database 106 and with each other through the database. The clients 102a-n can be heterogeneous (different types of) applications that are able to work together through the database 106. Updates from any one of the clients 102a-n can be replicated to the database 106 and then to the other clients at interactive speeds.
Thus, in embodiments according to the present invention, each client (application) 102a-n interacts with certain attributes or properties of objects that can be described or defined using the assets in the database 106. In the example described earlier herein, a graphics editor (e.g., Photoshop®) can be connected to the database 106 (to an asset in the database) to add a texture to an object in a virtual scene, and a computer graphics application or animation tool (e.g., Autodesk Maya®) can be connected to the database (to an asset in the database) to animate that object (or a different object) in the virtual scene.
In embodiments, a change to an asset in the database 106 made by one of the clients (applications) 102a-n prompts each subscriber to consider the change. That is, in embodiments, a subscriber processes or performs operations in response to being notified of the change or by receiving the change (e.g., by synchronizing with the database 106). For example, an animation tool (e.g., Autodesk Maya®) can be used to animate an object in a virtual scene, and a physics simulation application or physics engine (e.g., PhysX) can be used to simulate real-world physics associated with the object in the virtual scene. In this example, if the animation tool is used to, for instance, move the object over the edge of a table and updates the asset in the database 106 accordingly, then (e.g., after synchronizing with the database) the physics simulation application will automatically simulate real-world physics based on that movement (e.g., the trajectory of the falling object) and update the database accordingly. Thus, an asset in the database 106 can be changed by a first type of client, and a second client (that may be a different type of client) can also change that asset (perhaps in direct response to the change made by the first client). In this manner, two or more clients (e.g., different types of clients) can interoperate, essentially interacting with each other, to effect changes to different properties or attributes of the same asset.
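The animation-to-physics interoperation described above can be sketched with a toy change-notification hub. The callback mechanism and the one-step "falling object" response below stand in for a real notification system and a real physics simulation; all names and values are illustrative:

```python
# Sketch of interoperation through change notification: a "physics" client
# reacts to a change published by an "animation" client, and publishes its
# own change in turn, without the two clients knowing about each other.
class Hub:
    def __init__(self):
        self.asset = {"y": 1.0, "supported": True}  # object resting on a table
        self.callbacks = []

    def on_change(self, cb):
        self.callbacks.append(cb)

    def publish(self, delta, source):
        self.asset.update(delta)
        for cb in list(self.callbacks):
            cb(delta, source)

hub = Hub()

def physics_client(delta, source):
    # React only to changes made by other clients, to avoid feedback loops.
    if source != "physics" and hub.asset.get("supported") is False:
        hub.publish({"y": 0.0}, source="physics")  # object falls to the floor

hub.on_change(physics_client)
# The animation client moves the object off the table's edge...
hub.publish({"supported": False}, source="animation")
# ...and the physics client automatically drops it in response.
```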
However, depending on the type of client, a change to an asset provided to the client does not necessarily trigger another change to either that asset or another asset. In general, in response to being provided a change to an element of an asset, a client can make another change to that element, and update the database 106 to include the other change; make a change to another element of the asset, and update the database to include the change to the other element; use the element including any change in some type of operation that does not cause another change to the element; render the element/asset; and/or display the element/asset.
In embodiments, the platform 100 (e.g., the server 104) also includes a database engine 110 that can resolve conflicts between changes to the database 106. The database engine 110 functions as a change or version control mechanism. In an embodiment, conflicts are resolved according to a ranking assigned to the clients 102a-n, which may be referred to as source control. That is, if for example two clients are subscribers to an asset and both have permission to change the asset, the changes from one of the clients have priority over and would supersede any conflicting changes from the other client. Priorities can be assigned in different ways. For example, one client can have priority over one spatial portion of a virtual environment, and another client can have priority over another spatial portion of the virtual environment. In that case, if an asset appears or is used in one area of the virtual environment, then the changes from the first client would have priority over conflicting changes from the second client; but if the asset appears or is used in the other area of the virtual environment, then the changes from the second client would have priority.
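Rank-based conflict resolution with per-region priorities might be sketched as follows. The region names, client rankings, and the `resolve` helper are all illustrative assumptions:

```python
# Sketch of conflict resolution by client ranking, where the ranking can
# differ per spatial region of the virtual environment.

def resolve(changes, priority):
    """changes: list of (client, region, delta) tuples.
    priority: region -> client list ordered strongest first.
    For each property, the strongest client's value wins."""
    merged = {}
    winner = {}  # property -> rank of the client whose value is merged
    for client, region, delta in changes:
        rank = priority[region].index(client)  # 0 is the strongest rank
        for key, value in delta.items():
            if key not in winner or rank < winner[key]:
                merged[key] = value
                winner[key] = rank
    return merged

# client_a has priority in the interior; client_b has it in the exterior.
priority = {"interior": ["client_a", "client_b"],
            "exterior": ["client_b", "client_a"]}

merged = resolve(
    [("client_a", "interior", {"color": "red"}),
     ("client_b", "interior", {"color": "blue"})],  # conflicting change
    priority,
)
```

With the same two changes tagged `"exterior"` instead, client_b's `"blue"` would win, mirroring the per-region priority described in the text.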
As mentioned above, in embodiments, the database 106 is based on the USD format and schema. USD provides the ability to layer together a series of “opinions” about properties for collections of objects. A layer is a group of objects that are outside of a conventional tree structure of transformation hierarchy; that is, the objects may be included in different leaves of the transformation hierarchy. Layering allows properties across objects in the layer (group) to be changed. For example, the engine and the doors of a car may be represented as different objects in the transformation hierarchy; however, the engine and the doors can both include screws. Layers permit the properties of the screws to be changed no matter where the screws appear in the hierarchy. In embodiments, different client subscribers may have control over respective layers, in which case their updates take precedence over the updates of other subscribers. Different layers may be ranked higher than other layers; the ranking can be used to control which changes to a layer have priority.
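The layering behavior can be sketched as a strongest-opinion-wins composition over an ordered stack of layers, echoing the screw example from the text. The paths and values are illustrative, and this is a deliberate simplification of USD's actual composition rules:

```python
# Sketch of layered "opinions": each layer holds property opinions keyed by
# object path, and the strongest layer that expresses an opinion wins.

def compose(layers):
    """layers: list of {path: {prop: value}}, ordered weakest to strongest."""
    composed = {}
    for layer in layers:  # later (stronger) layers override earlier opinions
        for path, props in layer.items():
            composed.setdefault(path, {}).update(props)
    return composed

# Screws appear under different leaves of the transformation hierarchy...
base = {"/car/engine/screw": {"metal": "steel"},
        "/car/door/screw":   {"metal": "steel"}}

# ...but a single stronger layer can change them all at once.
screw_layer = {"/car/engine/screw": {"metal": "titanium"},
               "/car/door/screw":   {"metal": "titanium"}}

scene = compose([base, screw_layer])
```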
In an embodiment, a scripting engine 201 runs lightweight and safe (e.g., sandboxed) scripts close to the database 106 without passing through the API layer. In an embodiment, procedural data-flow elements are created, submitted, and linked in the server 104. The individual procedural elements can be specified in ways similar to the way procedural shaders are specified.
Updates to assets in the database 106 are communicated back to clients that are subscribers to those assets. In an embodiment, a notifier 203 sends a message to the clients that subscribe to an asset when that asset is changed. In effect, notifications are filtered per client based on what is of interest to the client.
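Per-client notification filtering might be sketched as follows (the `Notifier` class and its identifiers are hypothetical):

```python
# Sketch of filtered notifications: when an asset changes, only the clients
# subscribed to that asset receive a message.
class Notifier:
    def __init__(self):
        self.subscriptions = {}  # asset_id -> set of subscribed client ids
        self.outbox = []         # (client_id, asset_id) messages "sent"

    def subscribe(self, client_id: str, asset_id: str) -> None:
        self.subscriptions.setdefault(asset_id, set()).add(client_id)

    def asset_changed(self, asset_id: str) -> None:
        # Fan the change out only to clients interested in this asset.
        for client_id in sorted(self.subscriptions.get(asset_id, set())):
            self.outbox.append((client_id, asset_id))

n = Notifier()
n.subscribe("renderer", "scene/table")
n.subscribe("physics", "scene/table")
n.subscribe("renderer", "scene/lamp")
n.asset_changed("scene/table")   # notifies renderer and physics, lamp untouched
```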
In an embodiment, the framework 300 includes a replication engine (not shown). For speedy replication in situations where correctness requires locks, API calls with guaranteed correctness can be used. A client 102 can take a snapshot and get a fully correct set of data at a particular moment, suitable for offline rendering at the highest quality. Not all relevant static assets are necessarily preloaded. Teleporting is an important use case, and the framework 300 provides the ability to go quickly to non-preloaded places. Certain applications (clients) may choose to preload all relevant assets, but that is not necessarily imposed on all applications.
In an embodiment, the framework 300 includes an API to support data-flow dependencies, caching, and re-computation, for situations where, for example, a computation depends on a set of properties, objects, or a volume of space and should be updated when they change (e.g., GI computations, LOD, and potentially visible sets).
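Dependency-tracked caching and recomputation of this kind can be sketched as follows; the API shape is an assumption for illustration:

```python
# Sketch of data-flow dependencies: a computation registers the properties
# it depends on, and its cached result is recomputed only when one of those
# properties changes (as with GI, LOD, or visibility computations).
class DependencyCache:
    def __init__(self, props: dict):
        self.props = dict(props)
        self.watchers = []  # cached computations with their dependency sets

    def register(self, deps, fn):
        """Cache fn(props) and recompute it whenever a dependency changes."""
        slot = {"value": fn(self.props), "deps": set(deps), "fn": fn}
        self.watchers.append(slot)
        return slot

    def set(self, key, value):
        self.props[key] = value
        for slot in self.watchers:
            if key in slot["deps"]:  # recompute only affected results
                slot["value"] = slot["fn"](self.props)

cache = DependencyCache({"light": 1.0, "albedo": 0.5})
# A toy stand-in for a GI computation that depends on two properties.
gi = cache.register({"light", "albedo"}, lambda p: p["light"] * p["albedo"])
cache.set("light", 2.0)   # triggers recomputation of the cached result
```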
In an embodiment, the framework 300 includes an API for spatial queries.
In an embodiment, the framework 300 includes software tools for authoring and running procedural scripting in the cloud to create behavior, including APIs for, but not limited to: registering scripts, setting up events, and setting up triggers. These may be containerized and linked by permissions to a particular virtual environment.
In an embodiment, the framework 300 includes APIs to aid in the creation of very large worlds, including APIs for, but not limited to: LODs, auto-creation of distant environment maps, and visibility culling.
Using the framework 300, cloud rendering can be provided for a wide variety of experiences. The framework 300 can be used for applications where the minimum specification of rendering power is very high, even when the result is sent to a thin (e.g., mobile) client device.
Thin VR clients are also supported by, for example, cloud-rendering RGB-D videos wider than the field of view, and also by transmitting supplemental hole-filling data from nearby viewpoints. Thus, during a period when a client has stale data, it can re-project the stale data from the new viewpoint, using the depth and hole-filling data to create appropriate parallax. In a gaming application, calculations common to multiple clients (e.g., GI and physics) can be naturally factored out. View-dependent calculations (e.g., final render, user experience (UX), physics, GI, and LOD) can be done separately in the cloud and streamed to thin clients outside the cloud.
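The depth-based re-projection can be sketched for a single pixel with a pinhole camera model and translation-only camera motion; all numeric values are illustrative:

```python
# Sketch of depth-based re-projection: a pixel from a stale RGB-D frame is
# lifted to a 3D point using its depth, then projected into a new viewpoint
# (camera moved along x only). Nearby points shift more than distant ones,
# which is exactly the parallax the thin client needs.

def reproject(u, v, depth, f, cx, cy, dx):
    """(u, v): pixel in the stale frame; f: focal length in pixels;
    (cx, cy): principal point; dx: new camera's x offset in world units."""
    # Lift the pixel to a 3D point in the old camera's frame.
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    # Express the point in the new camera's frame and project it.
    x_new = x - dx
    return (f * x_new / depth + cx, f * y / depth + cy)

near = reproject(400, 300, depth=1.0,  f=500, cx=320, cy=240, dx=0.1)
far  = reproject(400, 300, depth=10.0, f=500, cx=320, cy=240, dx=0.1)
# The near point shifts far more than the distant one: that is the parallax.
```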
The framework 300 can also support latency-sensitive experiences that may benefit greatly from, for example, thick clients over wide area networks (WANs), although the invention is not so limited. Massively multiplayer online games (MMOs) and their relatives are also supported.
In block 602, a first change (change 1) to an element 701 of an asset 704 is generated by the client 102a (a first application).
In block 604, the element 701 is updated in the database 106 to include the first change. In embodiments, the database 106 is updated by synchronizing the database and data of the client 102a.
In block 606, the first change is provided from the database 106 to the client 102b (a second application). In embodiments according to the invention, only the change or changes to the element 701 are provided from the database 106 to the client 102b: the entire, updated element—or the entire, updated asset 704—is not provided from the database to the client 102b. In an embodiment, the client 102b subscribes to the asset or to the element 701. In an embodiment, the clients 102a and 102b are different types of clients. The clients may be directed to different aspects of content creation or asset management, or may be used at different scales or in different modalities. For example, one client may operate on one or more two-dimensional spaces, whereas another client may execute in a three-dimensional space. The same asset could appear to the first client as a series of discrete two-dimensional object geometries, whereas the second client may visualize the asset three-dimensionally (e.g., through a virtual reality device) with perspectives of the asset that can be interactively and contiguously traversed. In an embodiment, the server 104 sends a message to the client 102b when a change to the asset or to the element 701 is saved to the database 106. In embodiments, the first change is provided by synchronizing the database 106 and data of the client 102b. The first change can also be provided to other clients that subscribe to the element 701.
In an embodiment, the first change triggers (prompts) the second client 102b to perform operations (execute) in order to process or analyze the effect (if any) of the first change. In block 608, in an embodiment, a second change (change 2) is generated by the second client 102b in response to the first change being provided to the second client. In an embodiment, the second change is based on the first change. The second change may be to the element 701 or to a different element (of the asset 704 or of a different asset) affected by the first change.
In block 610, the database 106 is updated to include the second change. In embodiments, the database 106 is updated by synchronizing the database and data of the client 102b. In embodiments according to the invention, only the change or changes to the element 701 (or the other element) are provided to the database 106 by the client 102b: the entire, updated element—or the entire, updated asset—is not provided to the database by the client 102b.
The second change can be provided to the client 102a by synchronizing the database 106 and data of the client 102a.
In the manner just described, the clients 102a and 102b interoperate with each other through changes to the database 106 to collaboratively generate content (the virtual environment 702 and object 703). In an embodiment, one of the client applications 102a-n (
Thus, in essence, instead of a client loading an asset to local memory, the asset is loaded from the database 106 across a network to the client; and, when a change is made, instead of saving the change to local memory, the change is saved across the network to the database. However, a client can make and register (list) multiple changes to an asset before saving any of the changes to the database 106. In other words, a client can save a single change to the database 106, or it can save a group of changes to the database. In an embodiment, the client can save changes to a database at a rate that corresponds to how the virtual environment is to be rendered and displayed. For example, in a video or gaming application that renders and displays scenes at the rate of 60 frames per second, clients may save changes at the rate of 60 times per second.
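Accumulating changes locally and flushing them at a display-driven rate might be sketched as follows (the batcher class and its identifiers are hypothetical):

```python
# Sketch of change batching: a client accumulates (and coalesces) edits
# locally, then saves them to the database as one grouped update per
# display frame (60 Hz here).
class ChangeBatcher:
    def __init__(self, frame_rate: int = 60):
        self.interval = 1.0 / frame_rate  # seconds between flushes
        self.pending = {}                 # accumulated, coalesced changes
        self.flushed = []                 # batches "saved" to the database

    def record(self, key, value):
        # A later edit to the same property supersedes the earlier one,
        # so only the final value for this frame is transmitted.
        self.pending[key] = value

    def flush(self):
        """Called once per frame: save the accumulated batch, if any."""
        if self.pending:
            self.flushed.append(self.pending)
            self.pending = {}

b = ChangeBatcher()
b.record("pos", (1, 0, 0))
b.record("pos", (2, 0, 0))  # coalesces with the previous edit this frame
b.record("color", "red")
b.flush()                   # one grouped save for the whole frame
```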
Depending on the type of client, a change to an element provided to the client does not necessarily trigger another change to either that element or another element. In general, in response to being provided a change to an element of an asset, a client 102a-n can make another change to that element, and update the database 106 to include the other change; make a change to another element of the asset, and update the database to include the change to the other element; use the element including any change in some type of operation that does not cause another change to the element; render the element/asset; and/or display the element/asset.
With reference also to an example computing system: the system includes a CPU 805, a memory 810, a user input 820, and a graphics processing system 830 that includes a physical GPU 835.
Graphics memory may include a display memory 840 (e.g., a framebuffer) used for storing pixel data for each pixel of an output image. In another embodiment, the display memory 840 and/or additional memory 845 are part of the memory 810 and are shared with the CPU 805. Alternatively, the display memory 840 and/or additional memory 845 can be one or more separate memories provided for the exclusive use of the graphics system 830.
In another embodiment, graphics processing system 830 includes one or more additional physical GPUs 855, similar to the GPU 835. Each additional GPU 855 is adapted to operate in parallel with the GPU 835. Each additional GPU 855 generates pixel data for output images from rendering commands. Each additional physical GPU 855 can be configured as multiple virtual GPUs that are used in parallel (concurrently) by a number of applications executing in parallel.
While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered as examples because many other architectures can be implemented to achieve the same functionality.
The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The example methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein, or include additional steps beyond those disclosed.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the disclosure is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the disclosure.
Embodiments according to the invention are thus described. While the present disclosure has been described in particular embodiments, it should be appreciated that the invention should not be construed as limited by such embodiments, but rather construed according to the claims below.
This application claims priority to U.S. Provisional Application No. 62/717,730, titled “Cloud-Centric Platform for Collaboration and Connectivity,” filed on Aug. 10, 2018, and U.S. Provisional Application No. 62/879,901, titled “Real-Time Ray-Tracing Renderer,” filed on Jul. 29, 2019, each of which is incorporated by reference in its entirety.
Publication: US 2020/0051030 A1, Feb. 2020, US.
Related U.S. Application Data: Provisional Application No. 62/717,730, Aug. 2018, US.