This disclosure generally relates to three-dimensional (“3D”) computer graphics.
Pixar is well known for producing award-winning three-dimensional (“3D”) computer-animated films, such as “Toy Story” (1995), “Monsters, Inc.” (2001), “Finding Nemo” (2003), “The Incredibles” (2004), “Ratatouille” (2007), “WALL-E” (2008), “Up” (2009), and “Brave” (2012). In order to produce films such as these, Pixar developed its own platform, RenderMan®, for network-distributed rendering of complex 3D graphics, including ray-traced 3D views. The RenderMan® platform includes the RenderMan® Interface Specification (an API to establish an interface between modeling programs, e.g., AUTODESK MAYA, and rendering programs in order to describe 3D scenes), the RenderMan® Shading Language (a language to define various types of shaders: surface, light, volume, imager, and displacement), and PhotoRealistic RenderMan® (a rendering software system).
Many computer graphic images are created by mathematically modeling the interaction of light with various objects (e.g., a character) in a 3D scene from a given viewpoint. Each 3D object in the scene may be represented by a 3D model of its surface geometry, for example, a shell model (e.g., a polygon mesh, non-uniform rational B-spline (NURBS) curves, or a subdivision surface, such as a Catmull-Clark subdivision mesh).
By utilizing various shaders (shading and lighting programs), the scene may be illuminated by one or more light sources to determine the final color information at each location in the scene.
This process, called rendering, uses a rendering system to generate a two-dimensional (2D) image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene. Animated sequences can be created by rendering a sequence of images of a scene as the scene changes over time.
Surface attribute functions can define the values of attributes of surfaces in three-dimensional space. Surface attribute functions can be evaluated at any point on the surface to provide corresponding attribute values at that point on the surface. Attributes of surfaces can include optical properties of a surface, such as color, transparency, reflectivity, and refractivity. Attributes can also include visibility or occlusion information; artistically or procedurally generated texture data in one, two, three, or more dimensions; shadow generation information; illumination information, which specifies the amount and direction of light on the surface point from other portions of the scene; and rendering information, such as ray tracing path information or radiosity rendering information. Functions can be relatively simple, such as looking up texture data from a texture map, or very complex, such as the evaluation of complex user-defined shader programs, ray tracing programs, animation or modeling programs, or simulation programs.
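As an example and not by way of limitation, the following sketch illustrates the two ends of this spectrum: a simple attribute function implemented as a nearest-neighbor texture lookup, and a slightly more involved function that modulates the looked-up color by an illumination term. The function names and the nearest-neighbor scheme are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

# Illustrative only: surface attribute functions mapping a surface parameter
# point (u, v) to an attribute value. Names here are hypothetical.

def sample_texture(texture: np.ndarray, u: float, v: float) -> np.ndarray:
    """A simple attribute function: nearest-neighbor texture lookup."""
    h, w, _ = texture.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

def shaded_color(texture: np.ndarray, u: float, v: float,
                 light_intensity: float) -> np.ndarray:
    """A composite attribute function: base color modulated by lighting."""
    base = sample_texture(texture, u, v)
    return np.clip(base * light_intensity, 0.0, 1.0)
```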
An application such as a rendering or animation application determines pixel values in an image by evaluating or sampling a surface and its associated surface attribute functions. Surfaces can include triangles and polygons; higher-order surfaces such as B-splines; subdivision surfaces; and implicit surfaces, among others.
Many rendering effects are performed by sampling a 3D scene at discrete points. The rendering system determines one or more attribute values, such as color, transparency, or depth, for the sample of the 3D scene. The attribute values of one or more samples of the 3D scene are then combined to determine the value of a pixel of the rendered image. For example, a rendering system may trace sample rays into a 3D scene (or project geometry onto an image plane) to render geometry. The intersection of a sampling ray and geometry (or an image sample point in the image plane and the projected geometry) defines a sample of the 3D scene used to determine the value of a pixel of the rendered image. Additionally, illumination, shadowing, scattering, depth of field, motion blur, reflection, and refraction effects are created by casting additional sample rays from an intersected portion of scene geometry into further portions of the 3D scene.
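As an example and not by way of limitation, the following sketch shows the final combination step described above: the attribute values of several samples of the 3D scene are averaged (a uniform box filter) to produce one pixel value. The uniform weighting is an assumption; a weighted reconstruction filter could equally be used.

```python
import numpy as np

def pixel_from_samples(sample_colors: np.ndarray) -> np.ndarray:
    """Combine the RGB values of N scene samples, shape (N, 3), into one
    pixel value with a uniform (box filter) average."""
    return sample_colors.mean(axis=0)

# Four jittered samples traced through one pixel:
samples = np.array([[0.80, 0.20, 0.10],
                    [0.70, 0.30, 0.10],
                    [0.90, 0.20, 0.00],
                    [0.80, 0.25, 0.05]])
pixel = pixel_from_samples(samples)  # -> [0.8, 0.2375, 0.0625]
```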
As part of the determination of a color attribute of a point (or points) on a surface, each light source in a set typically is evaluated to determine whether that light source contributes to the computed color value of that point. This determination entails identifying whether the light emitted from each light source is transmitted to the given point on the surface, whether the light is blocked by some other element of the object scene, and/or whether the light falls off (loses all intensity or ability to light an object) before reaching the surface. It further is possible that the light source is outside the frame or shot (multiple contiguous frames) of animation, or outside the view of a virtual camera viewing the set and determining the bounds of the frame(s), but still illuminates at least one surface in the frame or shot. Even further still, a light outside a frame might cast a shadow on an object or surface in the frame.
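As an example and not by way of limitation, a falloff test of the kind described above might be sketched as follows; the inverse-square attenuation and hard range cutoff are common conventions assumed here for illustration, not prescribed by this disclosure.

```python
import numpy as np

def light_contribution(intensity: float, light_pos, surface_pos,
                       max_range: float) -> float:
    """Return a light's contribution at a surface point, or 0.0 if the
    light falls off before reaching it."""
    d = float(np.linalg.norm(np.asarray(light_pos) - np.asarray(surface_pos)))
    if d > max_range:
        return 0.0                  # light loses all intensity before arrival
    return intensity / (d * d)      # inverse-square attenuation otherwise
```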
Conventional techniques for generating special effects for an animated character have relied on a manual and iterative process of compositing 2D images to combine visual elements from different sources into a single image. When used to achieve certain special effects, such as a soft, glow-like effect, such techniques may require so much time-consuming and labor-intensive effort on the part of the artist/animator (particularly when dealing with 3D graphics) that a dedicated compositor is often hired.
Particular embodiments may utilize a rendering system, e.g., the RenderMan® platform, to generate a volumetric projection (e.g., a volumetric soft edge) for an object (e.g., a character) in a scene. In particular embodiments, a soft, glow-like effect may appear to emanate from a region or edge of the object for which a volumetric projection is generated. The volumetric projection may be generated by blurring the color from a specified location on a surface geometry for the object outward through a voxel grid overlaying the surface geometry. Two operations may be performed concurrently or serially on the surface geometry in order to generate the volumetric projection: (1) one or more lighting operations may be performed on the object in the scene, wherein the lit mesh for the object is ultimately rendered as invisible (but the color information at each location on the object's surface geometry—as lit in the scene—is preserved), and (2) a voxel grid may be generated for the object, wherein the depth of the voxel grid over any particular point of the lit mesh and/or beyond the perimeter of the lit mesh may vary. In some embodiments, the voxel grid may be overlaid on top of the lit mesh in the scene. In particular embodiments, the voxel grid may be culled from those areas for which a volumetric projection is not desired (e.g., if only the character's skin—and not their clothing, hair, or facial features—needs to have the glow-like effect). A first set of rays may then be traced from a viewpoint (e.g., the location of a virtual camera or a viewer's eye) toward the object (and through the voxel grid at each location on the lit mesh). For each ray in the first set of rays that hits a location in the mesh, colors sampled at that location may be blurred from that location in the mesh outward through the voxel grid. Once determination of the color information for the voxel grid has been completed, the voxel grid and the invisible lit mesh may be incorporated back into the overall 3D scene, and the objects in the scene for the frame may continue through the graphics pipeline until they are rasterized into a 2D image for the frame.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
In order to describe and illustrate embodiments and/or examples of any inventions presented within this disclosure, reference may be made to one or more accompanying drawings. The additional details or examples used to describe the accompanying drawings should not be considered as limitations to the scope of any of the disclosed inventions, any of the presently described embodiments and/or examples, or the presently understood best mode of any invention presented within this disclosure.
Particular embodiments may utilize a rendering system, e.g., the RenderMan® platform, to generate a volumetric projection (e.g., a volumetric soft edge) for an object (e.g., a character) in a scene. In particular embodiments, a soft, glow-like effect may appear to emanate from a region or edge of the object for which a volumetric projection is generated. The volumetric projection may be generated by blurring the color from a specified location on a surface geometry for the object outward through a voxel grid overlaid on the surface geometry.
In particular embodiments, lit mesh 220 is rendered as invisible, but the color information at each location on the object's surface geometry—as lit in the scene—is preserved at each surface point on the mesh. The color information may then be utilized as a basis for determining color for the voxel grid 230 (which, as initially generated, may not include any color information). As discussed in further detail below, ray tracing techniques may be utilized to determine colors at different locations in the voxel grid by sampling colors at proximate locations in lit mesh 220.
In particular embodiments, voxel grid 230 may be specified (e.g., as part of the object model) as a combination of multiple volumetric masks, such as, for example, a “tight” inner volumetric mask (forming a thin volumetric layer over lit mesh 220 to produce little or no blur), and in particular areas of lit mesh 220, one or more “soft” outer volumetric masks (e.g., to form thicker regions of the voxel grid over the lit mesh to produce increased color blurring). In particular embodiments, voxel grid 230 may be specified as a signed distance field (storing values for the distance of each voxel in voxel grid 230 from the original mesh). Regions of the voxel grid 230 may be culled in particular areas in order to reduce blur in those areas for which clear definition is desired (e.g., eyes, hair, mouth).
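As an example and not by way of limitation, a signed-distance-field representation of the combined masks might be sketched as follows. The shell depths, region masks, and function name are illustrative assumptions; the disclosure does not prescribe a particular data layout.

```python
import numpy as np

def build_voxel_masks(sdf: np.ndarray,
                      soft_region: np.ndarray,
                      cull_region: np.ndarray,
                      tight_depth: float = 0.02,
                      soft_depth: float = 0.25) -> np.ndarray:
    """sdf: per-voxel signed distance to the original mesh (negative inside).
    soft_region / cull_region: boolean masks over the same grid.
    Returns a boolean occupancy grid for the combined volumetric mask."""
    depth = np.where(soft_region, soft_depth, tight_depth)  # per-voxel shell depth
    occupied = (sdf >= 0.0) & (sdf <= depth)                # thin/thick shell outside mesh
    return occupied & ~cull_region                          # cull areas needing clear definition
```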
Once voxel grid 230 has been overlaid on lit mesh 220 in the scene, particular embodiments may generate volumetric projection 240. Volumetric projection 240 represents the result of performing ray tracing from the viewpoint through the voxel grid to the mesh in order to sample colors from locations in the mesh; the sampled colors are then blurred outward from the mesh through the voxel grid to produce a volumetric soft edge.
In one example, the blurring of the sampled color may be determined by applying a gradient between the sampled color at the location on the surface of the object and the color in the scene just beyond the outer boundary of the voxel grid, where the gradient extends from the surface of the object to the outer boundary of the voxel grid. In particular embodiments, the direction of blurring through the voxel grid may be toward the viewpoint; in particular embodiments, the direction of blurring through the voxel grid may radiate orthogonally away from a perimeter of the lit mesh. Once determination of the color information for the object in the scene has been completed for every location in the voxel grid corresponding to a location on the surface of the object that was hit by a traced ray, the color information may be incorporated back into the overall 3D scene, and the final 2D image for each frame may be rasterized using any rasterization technique.
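As an example and not by way of limitation, the gradient described above can be read as a linear interpolation between the sampled surface color and the scene color just beyond the voxel grid's outer boundary; the linear falloff is an assumption, and a smoother curve could be substituted.

```python
import numpy as np

def blurred_voxel_color(surface_color, background_color, t: float) -> np.ndarray:
    """t in [0, 1]: 0 at the object surface, 1 at the voxel grid's outer
    boundary. Linear gradient from sampled color to scene color."""
    surface_color = np.asarray(surface_color, dtype=float)
    background_color = np.asarray(background_color, dtype=float)
    return (1.0 - t) * surface_color + t * background_color
```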
In particular embodiments, two sets of rays 320 may be traced from viewpoint 310 towards the surface of the object. The first set of rays 320 may simply be utilized to determine which rays hit the surface of the object and which did not, and the second set of rays 320 may be utilized to perform the actual color sampling. For each ray in the first set of rays 320 that hit the surface of the object, a second ray 320 may be fired (along the same differential: from viewpoint 310 to the same location on lit mesh 220) in order to sample color at the location at which the ray hit the mesh.
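As an example and not by way of limitation, the two-pass tracing can be sketched as follows, with a unit sphere standing in for lit mesh 220 and a trivial position-based function standing in for the preserved lit colors. The helper names and the sphere stand-in are assumptions made purely for a runnable illustration.

```python
import numpy as np

def hit_sphere(origin, direction, center=np.zeros(3), radius=1.0):
    """First pass: return the hit point on the stand-in surface, or None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)           # direction assumed normalized
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return origin + t * direction if t > 0.0 else None

def sample_lit_color(point: np.ndarray) -> np.ndarray:
    """Second pass: sample the preserved 'lit' color at the hit location."""
    return np.abs(point) / np.linalg.norm(point)

viewpoint = np.array([0.0, 0.0, -3.0])
ray_dir = np.array([0.0, 0.0, 1.0])
hit = hit_sphere(viewpoint, ray_dir)          # ray 1: hit test only
if hit is not None:
    color = sample_lit_color(hit)             # ray 2: same path, sample color
```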
In a few cases, certain rays 320 may miss the surface of the object but hit and pass through at least one voxel in voxel grid 230 (e.g., as shown in the exploded view section of FIG. 3).
In some situations, when a region of the object for which a volumetric projection is desired overlays (with respect to the viewpoint) a region of the object for which a volumetric projection is not desired, a ray may hit lit mesh 220 in a location from which color sampling is undesirable. For example, a character may be clothed in a navy short-sleeved shirt and jeans, with the character's bare forearm bent and folded across the torso (which is covered by the shirt). In this case, rays traced towards the boundaries of the voxel grid surrounding the bare forearm may pass through the voxel grid without hitting the region of the lit mesh that includes the bare forearm. If the desired effect is to generate a volumetric projection of any areas of exposed skin, it may be possible that a ray hits lit mesh 220 just above the arm (at a location on the navy shirt). Since it is not desirable to sample color in that area (which may result in blurring the navy color of the shirt into the arm area and/or result in a blurry outline for the shirt), in particular embodiments, a maximum distance (from the viewpoint) may be established for rays directed at regions of lit mesh 220 that should not be sampled, so that the ray falls short of hitting the lit mesh. For example, the maximum distance for rays being traced towards that particular location may be established based on information provided by the 3D model for the object indicating which regions of the object (e.g., the surface of the character's shirt) should not be included when sampling colors for a volumetric projection. Using this technique, rays traced through the voxel grid surrounding the forearm may be prevented from hitting the lit mesh at a location where only colors for the shirt could be sampled. Because the established maximum distance causes the ray to fall short of the lit mesh (i.e., a miss), particular embodiments execute the steps described above to determine the closest (with respect to the voxel) corresponding location in lit mesh 220 that registered a hit (most likely the closest location on the character's arm). One or more second rays 320 may be traced from the voxel location to the corresponding location(s) in lit mesh 220 in order to sample color at the corresponding location(s). The sampled color is then blurred from the mesh through the voxel grid in the direction of the voxel that registered the “miss.” As described above, particular embodiments may trace multiple rays from the voxel location to the corresponding location in lit mesh 220 and then combine colors (e.g., by simply averaging RGB color values, or by utilizing an appropriate color blending mode, such as soft light mode) from the multiple rays in order to achieve a higher-quality blur.
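As an example and not by way of limitation, the maximum-distance restriction might be applied by capping the ray's parametric extent, as in the following sketch; the helper name, the region identifiers, and the 0.99 safety factor are hypothetical.

```python
def effective_t_max(region_id: str, excluded_regions: set,
                    t_region: float) -> float:
    """Cap a ray aimed at an excluded region (e.g., clothing) just short of
    that region's distance, so the ray registers a miss and is routed to
    the miss-handling path described above."""
    if region_id in excluded_regions:
        return 0.99 * t_region       # ray falls short of the lit mesh
    return float("inf")              # otherwise, no restriction

t_max = effective_t_max("shirt", {"shirt", "jeans"}, t_region=4.2)
```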
In particular embodiments, color sampling may be performed according to a sampling pattern. The sampling pattern may specify whether the color to be blurred into the voxel grid should be sampled in a coarse-grained manner (e.g., sampling color from a large cluster of points in the mesh) or a fine-grained manner (e.g., based on a single point in the mesh or a small cluster of points in the mesh). In either case, if the sampling pattern indicates that color should be sampled from more than one point in the mesh, the sampled colors are combined to obtain a single color for volumetric projection outward from the sampled locations through the voxel grid. Once color values are obtained for a location in the mesh, colors sampled at that location may be combined, and the combined volumetric projection color may be blurred from that location in the mesh outward through the voxel grid.
In particular embodiments, steps 420 and 425 may be executed concurrently with steps 410 and 415 in order to speed up the overall process. In step 420, particular embodiments may generate the volumetric masks for the mesh by first generating a bounding box to define the perimeter of the voxel grid around the mesh (e.g., as a signed distance field). In particular embodiments, the dimensions of the bounding box may be specified by the 3D model for the object, as the minimum enclosing space for the voxel grid given the dimensions of the character and the maximum depth of the volumetric masks. For each voxel within the voxel grid, particular embodiments may: (1) calculate the voxel's distance to the mesh, (2) calculate the solid angle subtended by the voxel at the viewpoint (e.g., by determining the area of the projection of the voxel onto a unit sphere centered at the viewpoint), and (3) determine the density or opacity of the voxel (e.g., based on the location of the voxel with respect to the voxel grid and the depth of the voxel grid at the location of the voxel). In step 425, certain areas of the voxel grid may be culled to reduce or eliminate volumetric projection. Selection of any such areas may be determined by any applicable method, such as, for example, by designation of clothed areas of a body as areas in which to eliminate volumetric projection, or by designation of certain facial features (e.g., eyes, hair, mouth) as areas in which to reduce or entirely eliminate volumetric projection, or by designation of certain frames in which to alter the volumetric projection with respect to prior frames.
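As an example and not by way of limitation, two of the per-voxel computations of step 420 might be sketched as follows. The small-angle approximation for the solid angle and the linear density falloff are assumptions made for illustration.

```python
import numpy as np

def voxel_solid_angle(voxel_center: np.ndarray, voxel_size: float,
                      viewpoint: np.ndarray) -> float:
    """Approximate solid angle subtended at the viewpoint: projected face
    area over squared distance (valid for voxels small relative to r)."""
    r = float(np.linalg.norm(voxel_center - viewpoint))
    return voxel_size ** 2 / r ** 2

def voxel_density(distance_to_mesh: float, shell_depth: float) -> float:
    """Density/opacity: opaque at the mesh, fading to transparent at the
    outer boundary of the local shell."""
    t = float(np.clip(distance_to_mesh / shell_depth, 0.0, 1.0))
    return 1.0 - t
```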
In step 430, the voxel grid may be positioned in the scene in the same location as the lit mesh. In particular embodiments, the voxel grid may not fully extend across the lit mesh—in such embodiments, the voxel grid may be superimposed over those areas of the lit mesh corresponding to the voxel grid. In step 435, particular embodiments calculate the viewpoint (e.g., camera) location and direction with respect to the scene.
In step 440, particular embodiments determine a sampling pattern based on targeted areas for volumetric projection. The sampling pattern may indicate whether color should be sampled in a coarse- or fine-grained manner. For example, fine-grained color sampling may result in color being sampled from a smaller region of the lit mesh (e.g., encompassing just a couple of polygonal faces), whereas coarse-grained color sampling may result in color being sampled from a larger region of the lit mesh.
In step 445, particular embodiments trace an initial set of rays through the voxel grid to the lit mesh according to the sampling pattern. Any conventional technique for ray-tracing may be applied. For each ray in this initial set of rays, particular embodiments assess whether the initial ray hit or missed the lit mesh (step 450).
If the initial ray hit the mesh in an intended target region, in step 455, a second ray 320 is traced to the same location in the mesh, and color(s) are sampled at that location. In particular embodiments, in the situation where the initial ray would otherwise hit the mesh in a region from which color should not be sampled, a maximum distance may be set so that the initial ray falls short of the lit mesh and is handled as a miss in accordance with step 460.
If the initial ray misses the mesh but hits a voxel location in voxel grid 230, in step 460, particular embodiments may determine the closest (with respect to the voxel) corresponding location(s) in lit mesh 220 that registered a hit, and one or more second rays 320 may be traced from the voxel location to the corresponding location(s) in order to sample color at the location(s).
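As an example and not by way of limitation, the closest first-pass hit for a "missed" voxel might be found by a brute-force nearest-neighbor search, as below; a spatial index (e.g., a k-d tree) could be substituted for efficiency. The names are hypothetical.

```python
import numpy as np

def closest_hit_location(voxel_center: np.ndarray,
                         hit_points: np.ndarray) -> np.ndarray:
    """hit_points: (N, 3) mesh locations that registered first-pass hits.
    Returns the hit location nearest to the given voxel center."""
    distances = np.linalg.norm(hit_points - voxel_center, axis=1)
    return hit_points[np.argmin(distances)]
```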
In step 465, the sampled colors are combined to obtain a single color for volumetric projection outward from the sampled locations through the voxel grid. In particular embodiments, combining the sampled colors may comprise averaging the sampled colors. In particular embodiments, the sampling pattern may specify sampling weights for different locations in the lit mesh.
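As an example and not by way of limitation, such a weighted combination might be sketched as below; a plain average falls out when all weights are equal. The assumption here is that the sampling pattern supplies the per-location weights.

```python
import numpy as np

def combine_weighted(colors: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """colors: (N, 3) RGB samples; weights: (N,) nonnegative sampling
    weights from the sampling pattern. Returns one combined RGB color."""
    w = weights / weights.sum()
    return (colors * w[:, None]).sum(axis=0)
```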
In step 470, for each sampled location in the mesh, the corresponding color for the volumetric projection is blurred from the sampled location in the mesh outward through the voxel grid. In particular embodiments, the direction of blurring through the voxel grid may be toward the viewpoint; in particular embodiments, the direction of blurring through the voxel grid may radiate orthogonally away from a perimeter of the lit mesh.
Finally, in step 480, the fully-colored voxel grid is integrated into the scene. In particular embodiments, the voxel grid may be rendered and composited with a fully-rendered version of the scene (including a visible rendering of the object for which the volumetric projection was generated). The default opacity of each voxel in the voxel grid may be configured to have a level of transparency that enables the underlying character to be visible when composited with the volumetric projection. In such embodiments, certain areas of the voxel grid may be completely culled away such that there are portions of the lit mesh that are not overlaid by any voxels (e.g., in areas where the character should be clearly visible without any blurring or color smearing).
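As an example and not by way of limitation, the compositing of the rendered voxel-grid layer over the fully rendered scene might use the standard "over" operator with premultiplied alpha, as in the following sketch; the disclosure does not prescribe a particular compositing operator.

```python
import numpy as np

def composite_over(fg_rgb: np.ndarray, fg_alpha: np.ndarray,
                   bg_rgb: np.ndarray) -> np.ndarray:
    """fg_rgb: (H, W, 3) premultiplied RGB of the volumetric projection
    layer; fg_alpha: (H, W) opacity; bg_rgb: (H, W, 3) rendered scene.
    Partially transparent voxels leave the underlying character visible."""
    return fg_rgb + (1.0 - fg_alpha)[..., None] * bg_rgb
```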
Particular embodiments may repeat one or more steps of the method of FIG. 4, where appropriate.
This disclosure contemplates any suitable number of computer systems 500. This disclosure contemplates computer system 500 taking any suitable physical form. As an example and not by way of limitation, computer system 500 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these. Where appropriate, computer system 500 may include one or more computer systems 500; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 500 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 500 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 500 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
In particular embodiments, computer system 500 includes a processor 502, memory 504, storage 506, an input/output (I/O) interface 508, a communication interface 510, and a bus 512. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In particular embodiments, processor 502 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 502 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 504, or storage 506; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 504, or storage 506. In particular embodiments, processor 502 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 502 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 502 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 504 or storage 506, and the instruction caches may speed up retrieval of those instructions by processor 502. Data in the data caches may be copies of data in memory 504 or storage 506 for instructions executing at processor 502 to operate on; the results of previous instructions executed at processor 502 for access by subsequent instructions executing at processor 502 or for writing to memory 504 or storage 506; or other suitable data. The data caches may speed up read or write operations by processor 502. The TLBs may speed up virtual-address translation for processor 502. In particular embodiments, processor 502 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 502 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 502 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 502. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
In particular embodiments, memory 504 includes main memory for storing instructions for processor 502 to execute or data for processor 502 to operate on. As an example and not by way of limitation, computer system 500 may load instructions from storage 506 or another source (such as, for example, another computer system 500) to memory 504. Processor 502 may then load the instructions from memory 504 to an internal register or internal cache. To execute the instructions, processor 502 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 502 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 502 may then write one or more of those results to memory 504. In particular embodiments, processor 502 executes only instructions in one or more internal registers or internal caches or in memory 504 (as opposed to storage 506 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 504 (as opposed to storage 506 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 502 to memory 504. Bus 512 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 502 and memory 504 and facilitate accesses to memory 504 requested by processor 502. In particular embodiments, memory 504 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 504 may include one or more memories 504, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
In particular embodiments, storage 506 includes mass storage for data or instructions. As an example and not by way of limitation, storage 506 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 506 may include removable or non-removable (or fixed) media, where appropriate. Storage 506 may be internal or external to computer system 500, where appropriate. In particular embodiments, storage 506 is non-volatile, solid-state memory. In particular embodiments, storage 506 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 506 taking any suitable physical form. Storage 506 may include one or more storage control units facilitating communication between processor 502 and storage 506, where appropriate. Where appropriate, storage 506 may include one or more storages 506. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
In particular embodiments, I/O interface 508 includes hardware, software, or both, providing one or more interfaces for communication between computer system 500 and one or more I/O devices. Computer system 500 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 500. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 508 for them. Where appropriate, I/O interface 508 may include one or more device or software drivers enabling processor 502 to drive one or more of these I/O devices. I/O interface 508 may include one or more I/O interfaces 508, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
In particular embodiments, communication interface 510 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 500 and one or more other computer systems 500 or one or more networks. As an example and not by way of limitation, communication interface 510 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 510 for it. As an example and not by way of limitation, computer system 500 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 500 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 500 may include any suitable communication interface 510 for any of these networks, where appropriate. Communication interface 510 may include one or more communication interfaces 510, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
In particular embodiments, bus 512 includes hardware, software, or both coupling components of computer system 500 to each other. As an example and not by way of limitation, bus 512 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 512 may include one or more buses 512, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
This application claims the benefit, under 35 U.S.C. § 119(e), of U.S. Provisional Patent Application No. 62/034,683, filed 7 Aug. 2014, which is incorporated herein by reference.