Displaying user-generated three-dimensional (3D) content in some applications poses challenges when the assets are not properly optimized for the constraints of mixed reality (MR), virtual reality (VR), and augmented reality (AR) (collectively MR) displays. Some displays may thus impose restrictions on the geometric complexity of the 3D content, or restrict the number of assets that may be placed in a scene. In traditional 3D and holographic applications, where the 3D content is generated by application developers, content may be optimized in advance so that the application will run smoothly within any hardware limitations. However, user-generated application content is not available for optimization by application developers, and some users may not be sufficiently familiar with optimization techniques to ensure that applications run smoothly and efficiently. Even when a user has the proper technical expertise to optimize models, manually optimizing a large catalog of models can be a time-consuming and costly exercise for the user.
The disclosed examples are described in detail below with reference to the accompanying drawing figures listed below. The following summary is provided to illustrate some examples disclosed herein. It is not meant, however, to limit all examples to any particular configuration or sequence of operations.
Some aspects disclosed herein are directed to optimizing viewing assets, such as three-dimensional (3D) virtual objects for mixed reality (MR), virtual reality (VR), or augmented reality (AR) environment visualization. Optimizing such assets may involve obtaining a viewing asset; generating a decimation request for the obtained viewing asset; responsive to generating the decimation request, receiving a set of decimation files; based at least on the set of decimation files, generating a plurality of selectable options; receiving a user selection of an option; and outputting the selected option as a converted viewing asset. Disclosed examples may also include specifying a maximum file size, a minimum polygon count, a maximum polygon count, and a minimum number of renderable viewing assets on a target display platform. Disclosed examples may operate automatically, and may permit tailoring of a decimation recipe.
Some aspects disclosed herein are directed to optimizing viewing assets using a processor; and a computer-readable medium storing instructions that are operative when executed by the processor to: obtain a viewing asset; generate a decimation request for the obtained viewing asset; responsive to generating the decimation request, receive a set of decimation files; based at least on the set of decimation files, generate a plurality of selectable options; receive a user selection of an option; and output the selected option as a converted viewing asset.
The disclosed examples are described in detail below with reference to the accompanying drawing figures listed below:
Corresponding reference characters indicate corresponding parts throughout the drawings.
The various embodiments will be described in detail with reference to the accompanying drawings. The same reference numbers may be used throughout the drawings to refer to the same or like parts. References made throughout this disclosure relating to specific examples and implementations are provided solely for illustrative purposes but, unless indicated to the contrary, are not meant to limit all examples.
Optimizing viewing assets, such as three-dimensional (3D) virtual objects for mixed reality (MR), virtual reality (VR), or augmented reality (AR) environment visualization, may involve obtaining a viewing asset; generating a decimation request for the obtained viewing asset; responsive to generating the decimation request, receiving a set of decimation files; based at least on the set of decimation files, generating a plurality of selectable options; receiving a user selection of an option; and outputting the selected option as a converted viewing asset. Disclosed examples may also include specifying a maximum file size, a minimum polygon count, a maximum polygon count, and a minimum number of renderable viewing assets on a target display platform. Disclosed examples may operate automatically, and may permit tailoring of a decimation recipe.
When users bring custom-generated assets (3D models) to an MR, VR, or AR (collectively, MR) display platform, such as a head-mounted display (HMD), the assets may require conversion and decimation. This is due to possible processing and storage constraints on wearable devices, such as a HoloLens®. The user might prefer a heavy polygon count (polycount) if high display quality is important, or the user may instead prefer a lower polycount if the speed of rendering is important or a high number of duplicates will be rendered.
Polycount is the number of facets on a virtual object. While a higher polycount may produce higher display quality in some scenarios, this is not universal, and higher polycounts may negatively impact the potential rendered frames per second. For some HMDs, approximately 100,000 unique polygons and/or 100 unique meshes may be used while maintaining an appropriate rate of rendered frames per second. Thus, for some games, it may be desirable to keep polycount relatively low, while still preserving sufficient display quality for the application. The polygons are typically triangles and are grouped into meshes (sets of triangles). Meshes add complexity and impact rendering performance. Meshes are subparts of the viewing asset object; each mesh has its own polycount, and the total polycount for an asset is the sum across all meshes.
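The mesh/polycount relationship above can be sketched as follows. This is a minimal illustration only; the `Mesh` class and `total_polycount` helper are hypothetical names, not part of any specific platform API:

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    """One subpart of a viewing asset; 'triangles' is that mesh's own polycount."""
    name: str
    triangles: int

def total_polycount(meshes):
    """The total polycount for an asset is the sum across all of its meshes."""
    return sum(m.triangles for m in meshes)

# Hypothetical two-mesh asset: 60,000 + 45,000 = 105,000 triangles, which
# would exceed an illustrative ~100,000-polygon budget cited for some HMDs.
asset = [Mesh("body", 60_000), Mesh("wheels", 45_000)]
print(total_polycount(asset))  # 105000
```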
Various embodiments disclosed herein refer to 3D models, e.g., full-sized 3D models and decimated 3D models. These may include generated 2D or 3D holograms. Embodiments are not limited to just 3D models, as the same disclosed decimation techniques may be applied to 2D models as well. Thus, full-sized and decimated 2D models are interchangeable with the referenced full-sized and decimated 3D models. Referring to the figures, examples of the disclosure enable efficient 3D model decimation that provides varying options of Level of Detail (LOD) for user selection and configuration. The methods described herein optimize viewing assets, providing user controls for decimation and LOD selection, and interpreting the resulting model into a format that may be consumed by MR applications.
Using 3D objects, or models, in MR programs and/or devices may enhance realism and improve the experience of user interaction. However, if a user wants to bring custom assets (i.e., objects, models, etc.) to the MR space, the custom asset must be decimated and converted into a format compatible with the desired program and/or device. For example, a user may create a 3D model of an object for an MR environment using 3D-modeling software. For instance, the 3D model may be created using AutoCAD® developed by AUTODESK®. This 3D model is generally a large computer-aided design (CAD) file with a considerable amount of detail and a large initial number of polygons, faces, textures, colors, levels of detail (LODs), polygon count (polycount), file size, and/or primitive data. A target MR device may require that the 3D model be decimated in order to be used, e.g., decimated to a certain resolution or file size before being processed, to avoid heating the MR device beyond certain thermal parameters, for example.
In addition, different options of LODs may be desired depending on the asset. For example, to demonstrate the quality of a model and showcase it, a heavy polycount asset may be desired. As another example, to bring in multiple assets of a desired dimension, a low polycount asset may be desired. As used herein, polycount refers to the number of faces on an object. In some environments, such as gaming, polycount optimization may target a lower polycount, while in other environments a higher polycount may be optimal.
Aspects of this disclosure provide a user with varying selectable options of LODs with decimation in a user interface (UI)-friendly manner that presents selections and customization options to the user. An imported asset is decimated, and various polycount options are provided along with a preview of the asset (or model) with the associated LOD and other display information corresponding to each option. For example, the higher the fidelity of the model, the higher the polycount may be for a given option. User selection may be received for the desired option, and the resulting decimated and converted asset may then be output or otherwise used by target computing systems (i.e., MR devices, applications, etc.).
Aspects of the disclosure further provide increased user interaction performance by enabling realism and improving user experience within a mixed reality space or representation of space. By providing custom 3D assets decimated and converted to a targeted LOD, aspects of the disclosure further improve processing speed and resource usage of the corresponding computer systems as well. As described herein, an optimization component provides a representation of varying polycount options via a UI that allows for comparison of 3D assets at various target LODs and polycounts, converting the selected option into a format that the target application (i.e., MR application) understands. In effect, the optimization component enables a configurable or customizable decimation process for user control.
By taking in user-generated 3D content of arbitrary complexity and automatically optimizing the content for export to the target device/application, optimal option results are provided for selection without requiring user knowledge of the optimal requirements. These optimal option results are based on heuristics that consider the limitations of the target hardware and the topological structure of models typically created by users in the application's target audience. The resulting selectable options provide additional data that inform a user as to the impact of a given selection on the target device/application and/or environment.
In some examples, computing device 102 has at least one processor 104, a memory area 106, and at least one user interface. These may be the same or similar to processor(s) 714 and memory 712 of
Computing device 102 further has one or more computer readable media such as the memory area 106. Memory area 106 includes any quantity of media associated with or accessible by the computing device. Memory area 106 may be internal to computing device 102 (as shown in
The user interface component 116 may include instructions executed by processor 104 of computing device 102 that cause processor 104 to perform operations, including receiving user input, providing output to a user and/or user device, and interpreting user interactions with a computing device. Portions of user interface component 116 may thus reside within memory area 106. In some examples, user interface component 116 includes a graphics card for displaying data to a user 122 and receiving data from user 122. User interface component 116 may also include computer-executable instructions (e.g., a driver) for operating the graphics card. Further, user interface component 116 may include a display (e.g., a touch screen display or natural user interface) and/or computer-executable instructions (e.g., a driver) for operating the display. In some examples, the display may be a 3D display, such as may be found in an HMD. User interface component 116 may also include one or more of the following to provide data to the user or receive data from the user: a keyboard (physical or touchscreen display), speakers, a sound card, a camera, a microphone, a vibration motor, one or more accelerometers, a Bluetooth® brand communication module, global positioning system (GPS) hardware, and a photoreceptive light sensor. For example, the user may input commands or manipulate data by moving the computing device in a particular way. In another example, the user may input commands or manipulate data by providing a gesture detectable by the user interface component, such as a touch or tap of a touch screen display or natural user interface. In still other examples, a user, such as user 122, may interact with a separate user device 124, which may control or be controlled by computing device 102 over communications network 120, a wireless connection, or a wired connection. In some examples, user device 124 may be similar or functionally equivalent to computing device 102.
As illustrated, in some examples, computing device 102 further includes a camera 130, which may represent a single camera, a stereo camera set, a set of differently-facing cameras, or another configuration. Computing device 102 may also further include an inertial measurement unit (IMU) 132 that may incorporate one or more of an accelerometer, a gyroscope, and/or a magnetometer. The accelerometer, gyroscope, and/or magnetometer may each output measurements in 3D. The combination of 3D position and 3D rotation may be referred to as six degrees-of-freedom (6DoF), and a combination of 3D accelerometer and 3D gyroscope data may permit 6DoF measurements. In general, linear accelerometer data may be the most accurate of the data from a typical IMU, whereas magnetometer data may be the least accurate.
Also illustrated, in some examples, computing device 102 additionally may include a generic sensor 134 and a radio system 136. Generic sensor 134 may include an infrared (IR) sensor (non-visible light sensor), a visible light sensor (such as an ambient light sensor or a spectrally-differentiated set of ambient light sensors), a light detection and ranging (LIDAR) sensor (range sensor), an RGB-D sensor (light and range sensor), an ultrasonic sensor, or any other sensor, including sensors associated with position-finding and range-finding. Radio system 136 may include Bluetooth®, Wi-Fi, cellular, or any other radio or wireless system. Radio system 136 may act as a sensor by detecting signal strength, direction-of-arrival and location-related identification data in received signals, such as GPS signals. Together, one or more of camera 130, IMU 132, generic sensor 134, and radio system 136 may collect data (either real-time, telemetry, or historical data) for use in behavior analysis of user position, movement, and gaze in mixed reality space.
Optimizer component 200 obtains or receives a viewing asset, such as asset 202, as input. Asset 202 may include, without limitation, a 3D model, 3D object, graphical layout, or any other suitable 3D asset, for example. Decimation request generator 204 generates decimation request 205 for decimation of asset 202. The decimation request may include the number of desired options and the desired LOD range for the options to be returned as decimated image files. In some examples, decimation request generator 204 determines the number of options and the desired LOD range based on pre-configured parameters (e.g., a default configuration of four options within a threshold range). In other examples, decimation request generator 204 determines the number of options and the desired LOD range based in part on data input 212. Data input 212 may be user input, or may be input derived from machine learning and/or telemetry data in some examples. Decimation request 205 is transmitted to decimation service 210 by optimizer component 200, and set of decimated image files 206 is received in response to decimation request 205.
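A minimal sketch of how a decimation request such as decimation request 205 might be assembled, assuming a simple dictionary-style data input. The field names and defaults are illustrative assumptions, not a specific service API:

```python
from dataclasses import dataclass

@dataclass
class DecimationRequest:
    """Number of decimated option files to produce and the LOD range they span."""
    asset_id: str
    num_options: int = 4        # e.g., a default configuration of four options
    lod_range: tuple = (0, 3)   # LOD-0 (highest detail) through LOD-3

def build_request(asset_id, data_input=None):
    """Use pre-configured defaults unless overridden by user or telemetry data."""
    request = DecimationRequest(asset_id)
    if data_input:
        request.num_options = data_input.get("num_options", request.num_options)
        request.lod_range = data_input.get("lod_range", request.lod_range)
    return request

request = build_request("asset-202", {"num_options": 3})
print(request.num_options, request.lod_range)  # 3 (0, 3)
```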
Set of decimated image files 206 may be one or more image files for asset 202 converted by decimation service 210 into a target format expected for a target application or environment, such as a VR/MR program for example. The set of decimated image files may include an individual image file for each individual option requested. In other words, if the decimation request is for four options, the set of decimated image files returned will include four individual files. File parser 208 parses set of decimated image files 206 to extract the data points and image previews for each option to display as selectable and/or configurable options. Extracted data points may include, without limitation, triangles, meshes, file properties, and other data. File parser 208 analyzes the extracted data points to identify information for each option, such as the polycount, meshes, vertices, file size, and so on, and uses that identified information alongside the image preview of the asset to generate plurality of output options 214. As used herein, meshes refer to subparts of the object, or asset, where each mesh has its own polycount, and the total polycount for the object is the sum of all meshes. A mesh is a set of triangles, and polycount refers to the number of triangles.
Plurality of output options 214 includes option-A 216, option-B 218, and option-C 220. Each option has a corresponding image preview (image 222, image 224, and image 226) and corresponding image data (image data 228, image data 230, image data 232) for that option. Image data may include, without limitation, image properties, file properties, polycount or triangle count, mesh count, materials count, file size, LOD value, and any other suitable information. Plurality of output options 214 provide a selectable and/or interactive representation of decimated options for display via a user interface. Plurality of output options 214 as depicted here includes three options for illustrative purposes of describing aspects of the disclosure, however it should be understood that more or fewer options may be output.
In some examples, the decimated image files may be received from the decimation service as GLB files. The optimizer component may open the GLB files using a GLB file parser (e.g., file parser 208) to parse for properties, parsing each file into a structure from which each data point can be calculated to obtain values to output as image data, for example. The file parser may further interpret and/or analyze the file properties to generate an indication of how a selection of the corresponding option may affect the end product or target scene. For example, analysis results from file parser 208 may provide information indicating a number of models that may be rendered for an environment (e.g., selection of option-A would allow for three assets to be rendered in the room, whereas selection of option-B would allow for 200 assets to be rendered in the same room). Other analysis results could provide an indication of the frames per second for a given option, for example.
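The three-versus-200 comparison above can be illustrated with a simple heuristic. This sketch assumes a fixed per-scene polygon budget (the ~100,000-polygon figure mentioned earlier is used as an example); real analysis would also weigh meshes, materials, and memory:

```python
def renderable_count(option_polycount, scene_budget=100_000):
    """Rough heuristic: how many copies of an option fit within an
    illustrative per-scene polygon budget (budget is an assumption)."""
    if option_polycount <= 0:
        return 0
    return scene_budget // option_polycount

# A detailed option vs. a heavily decimated option of the same asset.
print(renderable_count(33_000))  # 3 assets fit in the room
print(renderable_count(500))     # 200 assets fit in the same room
```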
A single GLB file may have the high-level LOD image (LOD-0) and the low-level LOD image for a given option, such that in the example of four options, with four GLB files returned as the set of decimated files, each of the four returned files will include the LOD-0 for that option, and the optimizer component will display each of the four LOD-0 images as the image preview for the corresponding option. (See also
Decision operation 310 determines whether the zip file has been uploaded already and a time limit has not expired. If the file has not previously been uploaded, or the file has been previously uploaded, but the time limit has expired, the process uploads the file to the decimation service in operation 312 and stores the asset ID, asset endpoint, and asset expiration date in operation 314. When the file has been uploaded and the time limit has not expired, a decimation process begins in operation 316. During operation 318, the process may poll the decimation service for progress throughout the decimation process. When a determination is made that the decimation process is complete, in decision operation 320, the process then downloads the zipped GLB file(s) in operation 322 and extracts the zipped file to parse, analyze, interpret, and generate selectable options. See
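The upload-reuse check of decision operations 310 through 314 can be sketched as follows. The cache record fields (asset id, endpoint, expiration) follow the operations above, but the function and key names are hypothetical, not a specific decimation-service API:

```python
import time

def ensure_uploaded(cache, path, upload_fn, now=None):
    """Re-use a prior upload if a record exists and its time limit has not
    expired; otherwise upload and store asset id, endpoint, and expiration.
    'cache' maps a file path to {"asset_id", "endpoint", "expires"}."""
    now = now if now is not None else time.time()
    record = cache.get(path)
    if record is None or record["expires"] <= now:
        record = upload_fn(path)  # operations 312 and 314
        cache[path] = record
    return record

def fake_upload(path):
    """Stand-in for the decimation service's upload endpoint."""
    return {"asset_id": "a1", "endpoint": "https://example/a1",
            "expires": time.time() + 3600}

cache = {}
first = ensure_uploaded(cache, "model.zip", fake_upload)
second = ensure_uploaded(cache, "model.zip", fake_upload)
print(first is second)  # True: the second call re-used the stored upload
```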
The process receives a set of decimation files at operation 506. The set of decimation files are received from the decimation service and may include an individual file for each requested option. The process parses the set of decimation files to identify data points at operation 508, such as polycount, meshes, materials, file properties, and the like. The process extracts the identified data points at operation 510 and uses the extracted data points to generate a plurality of selectable options at operation 512. The process receives user selection of an option at operation 514. The process outputs the converted asset based on the selected option at operation 516, with the process terminating thereafter. Thus, flowchart diagram 500 illustrates an optimization process that includes obtaining a viewing asset; generating a decimation request for the obtained viewing asset; responsive to generating the decimation request, receiving a set of decimation files; based at least on the set of decimation files, generating a plurality of selectable options; receiving a user selection of an option; and outputting the selected option as a converted viewing asset.
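The flowchart-500 sequence above can be sketched end to end. The `decimate` and `choose` callables stand in for the decimation service and the selection UI, and all names are illustrative assumptions:

```python
def optimize_viewing_asset(asset, decimate, choose):
    """Sketch of flowchart 500: generate a decimation request, receive a
    set of decimation files, build selectable options from their data
    points, apply the user's selection, and output the converted asset."""
    request = {"asset": asset, "num_options": 4}          # operation 504
    files = decimate(request)                             # operation 506
    options = [{"index": i, "polycount": f["polycount"],  # operations 508-512
                "file": f} for i, f in enumerate(files)]
    selected = choose(options)                            # operation 514
    return selected["file"]                               # operation 516

# Fake service returning four progressively lighter decimation files.
fake_decimate = lambda req: [{"polycount": 100_000 >> i}
                             for i in range(req["num_options"])]
converted = optimize_viewing_asset("asset-202", fake_decimate,
                                   lambda opts: opts[2])  # user picks option 3
print(converted["polycount"])  # 25000
```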
In some examples, operation 606 involves determining the target host, such as the intended display platform, although in other examples, the target host may be predetermined. In some examples, host constraints, such as memory limitations, and other processing constraints that may be relevant to determining an optimal viewing asset complexity for rendering on the host, are determined in operation 608. That is, operation 608 includes determining a constraint of a target host for rendering the viewing asset. In some examples, operation 610 includes determining whether the obtained viewing asset is compatible with the target host, such as whether the imported model is an unsupported file format or is excessively large. In some examples, based at least on the determined constraint(s) of the target host, operation 612 determines a number of original (i.e., as obtained) viewing assets that can be rendered on the target host.
Operation 614 determines the classes of options that may be available to offer the user. These may include decimation of the imported asset based on LOD, file size, polygon reduction, and/or file format conversion. The options may be driven by the capabilities of the decimation tool or service (such as optimizer module 112 of
Operation 618 then includes generating a decimation request for the obtained viewing asset. In some examples, generating a decimation request for the obtained viewing asset comprises, based at least on the user specification of the decimation request (received in operation 616), generating the decimation request for the obtained viewing asset. A decimation request may include multiple different polycounts. Decimation is performed in operation 620. Decimation may be based on heuristics that consider the limitations of the target hardware and the topological structure of models typically created by users in the application's target audience. The decimation service may be local or remote, such as a cloud service, and may output a simplified GLB file. A single GLB file may contain both the high-LOD image (LOD-0 (zero)) and the low-LOD image or data. In the example depicted in
Operation 622 includes, responsive to generating the decimation request, receiving a set of decimation files. In some examples, the files may be received directly. In some examples, a URL may be received, and the files may then be retrieved from that URL. The received decimation files (GLB) may then be opened for parsing in operation 624, to extract the data points in operation 626 and generate image previews for each option to display as selectable and/or configurable options in operation 628. Extracted data points may include, without limitation, triangles, meshes, file properties, and other data. A file parser, for example file parser 208 (of
In some examples, operation 630 includes, based at least on the constraint of the target host, for each of the plurality of selectable options, determining a number of viewing assets that can be rendered on the target host. For example, a first option may have a size such that only 10 of the assets may be rendered on the target host, whereas for another option having a smaller size, 100 of the assets may be rendered on the target host. This number may be presented to the user to further inform the user's selection of an option.
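The 10-versus-100 example above can be sketched as a per-option annotation pass. Here the host constraint is modeled as a memory budget in bytes; the budget figure and field names are assumptions for illustration:

```python
def annotate_renderable_counts(options, host_budget_bytes):
    """For each selectable option, estimate how many copies of the asset
    the target host could render under an illustrative memory budget,
    so the number can be shown alongside the option in the UI."""
    for option in options:
        option["renderable"] = host_budget_bytes // option["size_bytes"]
    return options

options = annotate_renderable_counts(
    [{"name": "A", "size_bytes": 10_000_000},   # larger option: 10 fit
     {"name": "B", "size_bytes": 1_000_000}],   # smaller option: 100 fit
    host_budget_bytes=100_000_000)
print([o["renderable"] for o in options])  # [10, 100]
```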
The options are displayed to the user in operation 632, perhaps using an output similar to exemplary user interface diagram 400 of
Some aspects and examples disclosed herein are directed to a solution for optimizing viewing assets that may comprise: a processor; and a computer-readable medium storing instructions that are operative when executed by the processor to: obtain a viewing asset; generate a decimation request for the obtained viewing asset; responsive to generating the decimation request, receive a set of decimation files; based at least on the set of decimation files, generate a plurality of selectable options; receive a user selection of an option; and output the selected option as a converted viewing asset.
Additional aspects and examples disclosed herein are directed to a process for optimizing viewing assets that may comprise: obtaining a viewing asset; generating a decimation request for the obtained viewing asset; responsive to generating the decimation request, receiving a set of decimation files; based at least on the set of decimation files, generating a plurality of selectable options; receiving a user selection of an option; and outputting the selected option as a converted viewing asset.
Additional aspects and examples disclosed herein are directed to one or more computer storage devices having computer-executable instructions stored thereon for optimizing viewing assets, which, on execution by a computer, may cause the computer to perform operations comprising: obtaining a viewing asset comprising a 3D virtual object; generating a decimation request for the obtained viewing asset; responsive to generating the decimation request, receiving a set of decimation files; based at least on the set of decimation files, generating a plurality of selectable options; receiving a user selection of an option; and outputting the selected option as a converted viewing asset.
Alternatively, or in addition to the other examples described herein, other examples may include, but are not limited to, any combination of the following:
While the aspects of the disclosure have been described in terms of various examples with their associated operations, a person skilled in the art would appreciate that a combination of operations from any number of different examples is also within scope of the aspects of the disclosure.
The examples and embodiments disclosed herein may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. The disclosed examples may be practiced in a variety of system configurations, including personal computers, laptops, smart phones, mobile tablets, hand-held devices, consumer electronics, specialty computing devices, etc. The disclosed examples may also be practiced in distributed computing environments, such as those disclosed in
Computing device 700 includes a bus 710 that directly or indirectly couples the following devices: computer-storage memory 712, one or more processors 714, one or more presentation components 716, input/output (I/O) ports 718, I/O components 720, a power supply 722, and a network component 724. Computing device 700 should not be interpreted as having any dependency or requirement related to any single component or combination of components illustrated therein. While computing device 700 is depicted as a seemingly single device, multiple computing devices 700 may work together and share the depicted device resources. For instance, computer-storage memory 712 may be distributed across multiple devices, processor(s) 714 may be housed on different devices, and so on.
Bus 710 represents a system bus that may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. Bus 710 represents what may be one or more busses (such as an address bus, data bus, or a combination thereof). Although the various blocks of
Computer-storage memory 712 may take the form of the computer-storage media referenced below and operatively provide storage of computer-readable instructions, data structures, program modules, and other data for the computing device 700. For example, computer-storage memory 712 may store an operating system, a universal application platform, or other program modules and program data. Computer-storage memory 712 may be used to store and access instructions configured to carry out the various operations disclosed herein.
As mentioned below, computer-storage memory 712 may include computer-storage media in the form of volatile and/or nonvolatile memory, removable or non-removable memory, data disks in virtual environments, or a combination thereof. And computer-storage memory 712 may include any quantity of memory associated with or accessible by the computing device 700. The memory 712 may be internal to the computing device 700 (as shown in
Processor(s) 714 may include any quantity of processing units that read data from various entities, such as memory 712 or I/O components 720. Specifically, processor(s) 714 are programmed to execute computer-executable instructions for implementing aspects of the disclosure. The instructions may be performed by the processor, by multiple processors within the computing device 700, or by a processor external to the client computing device 700. In some examples, the processor(s) 714 are programmed to execute instructions such as those illustrated in the flowcharts discussed below and depicted in the accompanying drawings. Moreover, in some examples, the processor(s) 714 represent an implementation of analog techniques to perform the operations described herein. For example, the operations may be performed by an analog client computing device 700 and/or a digital client computing device 700.
Presentation component(s) 716 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. One skilled in the art will understand and appreciate that computer data may be presented in a number of ways, such as visually in a graphical user interface (GUI), audibly through speakers, wirelessly between computing devices 700, across a wired connection, or in other ways. Ports 718 allow computing device 700 to be logically coupled to other devices including I/O components 720, some of which may be built in. Example I/O components 720 include, for example but without limitation, a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
The computing device 700 may operate in a networked environment via the network component 724 using logical connections to one or more remote computers over a network 730. In some examples, the network component 724 includes a network interface card and/or computer-executable instructions (e.g., a driver) for operating the network interface card. Communication between the computing device 700 and other devices may occur using any protocol or mechanism over any wired or wireless connection. In some examples, the network component 724 is operable to communicate data over public, private, or hybrid (public and private) networks using a transfer protocol, between devices wirelessly using short range communication technologies (e.g., near-field communication (NFC), Bluetooth® branded communications, or the like), or a combination thereof. By way of example, network 730 may include, without limitation, one or more communication networks, such as local area networks (LANs) and/or wide area networks (WANs).
Turning now to the accompanying drawing figure, a distributed computing environment is illustrated in which aspects of the disclosure may be implemented.
Hybrid cloud 808 may include any combination of public network 802, private network 804, and dedicated network 806. For example, dedicated network 806 may be optional, with hybrid cloud 808 comprised of public network 802 and private network 804. Along these lines, some customers may opt to only host a portion of their customer data center 810 in the public network 802 and/or dedicated network 806, retaining some of the customers' data or hosting of customer services in the private network 804. For example, a customer that manages healthcare data or stock brokerage accounts may elect or be required to maintain various controls over the dissemination of healthcare or account data stored in its data center or the applications processing such data (e.g., software for reading radiology scans, trading stocks, etc.). Myriad other scenarios exist whereby customers may desire or need to keep certain portions of data centers under the customers' own management. Thus, in some examples, customer data centers may use a hybrid cloud 808 in which some data storage and processing is performed in the public network 802 while other data storage and processing is performed in the dedicated network 806.
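The placement decision described above can be sketched as a simple policy function. The following Python example is purely illustrative: the category names and the policy itself are hypothetical, standing in for whatever regulatory or business rules a customer applies when deciding which data stays in private network 804:

```python
from dataclasses import dataclass

# Hypothetical policy: categories that must remain under customer control.
SENSITIVE_CATEGORIES = {"healthcare", "brokerage"}

@dataclass
class Record:
    name: str
    category: str

def placement_target(record: Record) -> str:
    """Route regulated data to the private network; other data may be
    hosted in the public network of the hybrid cloud."""
    if record.category in SENSITIVE_CATEGORIES:
        return "private_network"
    return "public_network"

print(placement_target(Record("radiology-scan-001", "healthcare")))  # private_network
print(placement_target(Record("marketing-site", "web")))             # public_network
```

In practice such rules would be far richer (per-field classification, residency requirements), but the split between public and private hosting follows the same shape.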
Public network 802 may include data centers configured to host and support operations, including tasks of a distributed application, according to the fabric controller 818. It will be understood and appreciated that data center 814 and data center 816 shown in the accompanying figure are merely examples and are not intended to suggest any limitation.
Data center 814 illustrates a data center comprising a plurality of servers, such as servers 820 and 824. A fabric controller 818 is responsible for automatically managing the servers 820 and 824 and distributing tasks and other resources within the data center 814. By way of example, the fabric controller 818 may rely on a service model (e.g., designed by a customer that owns the distributed application) to provide guidance on how, where, and when to configure server 822 and how, where, and when to place application 826 and application 828 thereon. One or more role instances of a distributed application may be placed on one or more of the servers 820 and 824 of data center 814, where the one or more role instances may represent the portions of software, component programs, or instances of roles that participate in the distributed application. In other examples, one or more of the role instances may represent stored data that are accessible to the distributed application.
Data center 816 illustrates a data center comprising a plurality of nodes, such as node 832 and node 834. One or more virtual machines may run on nodes of data center 816, such as virtual machine 836 of node 834 for example. Although only a single virtual machine is illustrated, any number of virtual machines may run on the nodes of data center 816.
In operation, the virtual machines are dynamically assigned resources on a first node and second node of the data center, and endpoints (e.g., the role instances) are dynamically placed on the virtual machines to satisfy the current processing load. In one instance, a fabric controller 830 is responsible for automatically managing the virtual machines running on the nodes of data center 816 and for placing the role instances and other resources (e.g., software components) within the data center 816. By way of example, the fabric controller 830 may rely on a service model (e.g., designed by a customer that owns the service application) to provide guidance on how, where, and when to configure the virtual machines, such as virtual machine 836, and how, where, and when to place the role instances thereon.
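The dynamic placement described above can be sketched as a least-loaded assignment loop. The following Python example is a hypothetical illustration, not the fabric controller's actual algorithm; the VM names, capacities, and role names are invented, and the per-VM capacity stands in for guidance that a service model would provide:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    capacity: int                         # max role instances (from a service model)
    roles: list = field(default_factory=list)

def place_role(vms: list, role: str) -> str:
    """Place a role instance on the least-loaded VM with spare capacity."""
    candidates = [vm for vm in vms if len(vm.roles) < vm.capacity]
    if not candidates:
        raise RuntimeError("no capacity available in the data center")
    target = min(candidates, key=lambda vm: len(vm.roles))
    target.roles.append(role)
    return target.name

vms = [VirtualMachine("vm-836", capacity=2), VirtualMachine("vm-837", capacity=2)]
placed = [place_role(vms, r) for r in ["web-role", "worker-role", "web-role"]]
print(placed)  # ['vm-836', 'vm-837', 'vm-836']
```

A production controller would also weigh CPU, memory, fault domains, and update domains; the sketch shows only the balancing step.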
As described above, the virtual machines may be dynamically established and configured within one or more nodes of a data center. As illustrated herein, node 832 and node 834 may be any form of computing devices, such as, for example, a personal computer, a desktop computer, a laptop computer, a mobile device, a consumer electronic device, a server, the computing device 700 described above, and the like.
Typically, each of the nodes include, or is linked to, some form of a computing unit (e.g., central processing unit, microprocessor, etc.) to support operations of the component(s) running thereon. As utilized herein, the phrase “computing unit” generally refers to a dedicated computing device with processing power and storage memory, which supports operating software that underlies the execution of software, applications, and computer programs thereon. In one instance, the computing unit is configured with tangible hardware elements, or machines, that are integral, or operably coupled, to the nodes to enable each device to perform a variety of processes and operations. In another instance, the computing unit may encompass a processor (not shown) coupled to the computer-readable medium (e.g., computer storage media and communication media) accommodated by each of the nodes.
The role instances that reside on the nodes may support operation of service applications, and thus they may be interconnected via APIs. In one instance, one or more of these interconnections may be established via a network cloud, such as public network 802. The network cloud serves to interconnect resources, such as the role instances, which may be distributed across various physical hosts, such as nodes 832 and 834. In addition, the network cloud facilitates communication over channels connecting the role instances of the service applications running in the data center 816. By way of example, the network cloud may include, without limitation, one or more communication networks, such as LANs and/or WANs. Such communication networks are commonplace in offices, enterprise-wide computer networks, intranets, and the internet, and therefore need not be discussed at length herein.
Although described in connection with an example computing device 700, examples of the disclosure are capable of implementation with numerous other general-purpose or special-purpose computing system environments, configurations, or devices. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with aspects of the disclosure include, but are not limited to, smart phones, mobile tablets, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, mobile computing and/or communication devices in wearable or accessory form factors (e.g., watches, glasses, headsets, or earphones), network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, VR devices, holographic device, and the like. Such systems or devices may accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, proximity input (such as by hovering), and/or via voice input.
Examples of the disclosure may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices in software, firmware, hardware, or a combination thereof. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other examples of the disclosure may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. In examples involving a general-purpose computer, aspects of the disclosure transform the general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.
By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable memory implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or the like. Computer storage media are tangible and mutually exclusive to communication media. Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se. Exemplary computer storage media include hard disks, flash drives, solid-state memory, phase change random-access memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media typically embody computer readable instructions, data structures, program modules, or the like in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media.
The examples illustrated and described herein, as well as examples not specifically described herein but within the scope of aspects of the disclosure, constitute exemplary means for providing solutions as disclosed herein. The order of execution or performance of the operations in examples of the disclosure illustrated and described herein is not essential, and the operations may be performed in different orders in various examples. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure. When introducing elements of aspects of the disclosure or the examples thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. The term “exemplary” is intended to mean “an example of.” The phrase “one or more of the following: A, B, and C” means “at least one of A and/or at least one of B and/or at least one of C.”
Having described aspects of the disclosure in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the disclosure as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense. While the disclosure is susceptible to various modifications and alternative constructions, certain illustrated examples thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure.
The present application claims priority to U.S. provisional patent application Ser. No. 62/671,370, filed May 14, 2018, entitled “OPTIMIZING VIEWING ASSETS,” which is hereby incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6222555 | Christofferson et al. | Apr 2001 | B1 |
8416242 | Hutchins | Apr 2013 | B1 |
9704270 | Main et al. | Jul 2017 | B1 |
20030107572 | Smith et al. | Jun 2003 | A1 |
20150348305 | Goossens et al. | Dec 2015 | A1 |
20160140189 | Amitai | May 2016 | A1 |
20170139945 | Gandhi | May 2017 | A1 |
20170358110 | Omachi et al. | Dec 2017 | A1 |
20180350135 | Castaneda | Dec 2018 | A1 |
20190088013 | Baeli | Mar 2019 | A1 |
Entry |
---|
“Mesh Simplify—Quickly reduce polygon count on your 3D models”, Retrieved from: https://forum.unity.com/threads/mesh-simplify-quickly-reduce-polygon-count-on-your-3d-models.347057/, Aug. 10, 2015, 23 Pages. |
“Polygon Cruncher: the optimization studio”, Retrieved from: http://www.mootools.com/plugins/us/polygoncruncher/, Retrieved Date: Jul. 12, 2018, 9 Pages. |
“Polygon Reduction with Meshlab”, Retrieved from: https://www.shapeways.com/tutorials/polygon_reduction_with_meshlab, May 15, 2009, 4 Pages. |
“Application as Filed in PCT Application No. US2018/038885”, Jun. 22, 2018, 23 Pages. |
Lee, et al., “An Accelerating 3D Image Reconstruction System Based on the Level-of-Detail Algorithm”, In GSTF Journal on Computing, vol. 3, Issue 3, Dec. 2013, 10 Pages. |
“International Search Report and Written Opinion Issued in PCT Application No. PCT/US19/031870”, dated Jul. 25, 2019, 9 Pages. |
Number | Date | Country | |
---|---|---|---|
20190347866 A1 | Nov 2019 | US |
Number | Date | Country | |
---|---|---|---|
62671370 | May 2018 | US |