Many non-homogeneous translucent materials are composed of various substances evenly distributed throughout their volumes. These quasi-homogeneous materials present a formidable challenge in realistic rendering because of their complex, spatially variant sub-surface scattering properties. Furthermore, surface mesostructures typically complicate the appearance of quasi-homogeneous materials. Surface mesostructures not only produce surface reflections, but also affect how light enters and exits a material volume.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In view of the above, representing quasi-homogenous materials is described. In one aspect, quasi-homogenous materials are modeled to generate a material model of a physical sample. The material model identifies how light is scattered by the quasi-homogenous materials. The material model, independent of an object model of the physical sample, provides information that is useful to texture surfaces of arbitrary types and sizes of mesh models (e.g., representing the physical sample or other objects) with the quasi-homogenous materials.
In the Figures, the left-most digit of a component reference number identifies the particular Figure in which the component first appears.
Overview
Representing quasi-homogenous materials provides for modeling and rendering quasi-homogeneous materials based on a material representation which is acquired from physical samples. This material representation is applied to geometric models of any arbitrary shape. The resulting objects are efficiently rendered without expensive sub-surface light transport simulation. To emphasize its broad applicability to arbitrary geometric models, this representation is referred to as a material model to distinguish it from object models, which are typically used in image-based modeling and rendering of real-world objects. A material model describes how light is scattered by the material, while an object model captures the actual appearance of a specific physical object. Unlike a material model, an object model measured from a given translucent object cannot be applied to objects of other shapes, because the appearance of each surface point is not a local quantity but the result of light propagation within the whole object.
The material model used to represent quasi-homogenous materials analyzes sub-surface scattering characteristics of quasi-homogeneous materials at two different scales. At a local level, the heterogeneity of a volume leads to non-homogeneous sub-surface scattering. At a larger scale, because of the even distribution of materials within a quasi-homogeneous volume, small neighborhoods centered at different points in the volume have a statistically similar material composition and distribution. For global sub-surface scattering at this larger scale, photon propagation is similar to that in a homogeneous material composed of particles whose scattering properties approximate the overall scattering of these volume neighborhoods.
Based on this observation of local heterogeneity and global homogeneity, the material representation includes four components: a homogeneous sub-surface scattering component for global light transport, a local reflectance function, a mesostructure entrance function, and a mesostructure exiting function. The last three components account for local scattering effects that include inhomogeneous local sub-surface scattering, surface reflections from mesostructure, and locally inhomogeneous mesostructure effects on light entering and exiting the material. In this implementation, the four components of this material model are acquired by capturing both laser stripe and halogen lamp images of a physical sample under multiple illumination and viewing directions, and then breaking down these images into appropriate components.
These and other aspects of representing quasi-homogenous materials are now described in greater detail below with respect to
An Exemplary System
Although not required, representing quasi-homogenous materials is described in the general context of computer-program instructions being executed by a computing device such as a personal computer. Program modules generally include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. While the systems and methods are described in the foregoing context, acts and operations described hereinafter may also be implemented in hardware.
Computing device 102 includes program modules 104 and program data 106. Program modules 104 include, for example, quasi-homogenous material representation module 108 (hereinafter often referred to as “modeling and rendering module 108”), and other program modules 110 such as a quasi-homogenous image data acquisition controller in one implementation, an operating system, etc.
Modeling and rendering module 108 generates or receives quasi-homogenous material image data 112 (“image data 112”) to create a material model 114 representing quasi-homogenous materials that comprise an object (a physical sample). Image data 112 is acquired by physical sampling of the object. Such physical sampling of the object to acquire image data 112 is described in greater detail below in the section titled “Exemplary Measurement System”. Modeling and rendering module 108 applies the material model 114, under configurable lighting conditions, to one or more object meshes 116 of arbitrary size and shape to render one or more quasi-homogenous objects 118. Accordingly, a quasi-homogenous object 118 is an object with texture of the quasi-homogenous materials that are represented by the material model 114, rendered onto the object's surface.
Exemplary Quasi-Homogenous Material Representation
Before describing how an object is physically sampled to generate image data 112, this section describes modeling and rendering module's 108 exemplary representation of quasi-homogenous materials. Modeling and rendering module 108 utilizes the bidirectional scattering-surface reflectance-distribution function (BSSRDF) to provide a general model for light scattering from surfaces. The representation for quasi-homogeneous materials is based upon this function. More particularly, modeling and rendering module 108 computes outgoing radiance L(χo,ωo) at a surface point χo in direction ωo by integrating the contribution of incoming radiance L(χi,ωi) for all incident directions ωi over the surface A as follows:
L(χo,ωo)=∫A∫ΩS(χi,ωi,χo,ωo)L(χi,ωi)dωidχi.
Here, dχi=(n·ωi)dA(χi), where (n·ωi) is the cosine factor and dA(χi) is a differential area at χi. The integral is separated into local and global contributions as follows:
L(χo,ωo)=∫Ao∫ΩS(χi,ωi,χo,ωo)L(χi,ωi)dωidχi+∫Bo∫ΩS(χi,ωi,χo,ωo)L(χi,ωi)dωidχi=Ll(χo,ωo)+Lg(χo,ωo),
wherein the local contribution Ll(χo,ωo) is integrated over a small disk Ao of radius rs around χo and the global contribution Lg(χo,ωo) is aggregated from the rest of the surface Bo=A−Ao.
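For measured data, the surface integral above has no closed form and is evaluated numerically. The following is a minimal sketch of such an evaluation as a double Riemann sum; all function and parameter names here are ours for illustration, not identifiers from the patent:

```python
import math

def outgoing_radiance(S, L_in, surface_samples, dir_samples, x_o, w_o):
    """Approximate L(x_o, w_o) = ∫A ∫Ω S(x_i, w_i, x_o, w_o) L(x_i, w_i) dω_i dχ_i
    by a double Riemann sum.  surface_samples holds (x_i, n_i, dA_i) triples,
    dir_samples holds (w_i, dω_i) pairs; S and L_in are caller-supplied."""
    total = 0.0
    for x_i, n_i, dA_i in surface_samples:
        for w_i, dw_i in dir_samples:
            # cosine factor (n · ω_i), clamped to the upper hemisphere
            cos_term = max(0.0, sum(a * b for a, b in zip(n_i, w_i)))
            total += (S(x_i, w_i, x_o, w_o) * L_in(x_i, w_i)
                      * cos_term * dA_i * dw_i)
    return total
```

In practice the sample sets would come from the surface mesh and a hemisphere discretization; the sketch only illustrates how the cosine factor and differential area from dχi=(n·ωi)dA(χi) enter the sum.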
Since incident radiance L(χi,ωi) may be regarded as locally uniform over Ao, the local contribution Ll(χo,ωo) can be written as follows:
Ll(χo,ωo)=∫ΩR(χo,ωi,ωo)L(χo,ωi)dωi,
where
R(χo,ωi,ωo)=∫AoS(χi,ωi,χo,ωo)dχi
is the local reflectance function. For illumination to be locally uniform, the distance of illuminants from the measured object must be larger than the radius of a local heterogeneous area. (In another implementation, illumination is not locally uniform, and this non-uniformity is addressed by pre-filtering irradiance on the surface, or by measuring point response functions within local areas.)
For the global contribution
Lg(χo,ωo)=∫Bo∫ΩS(χi,ωi,χo,ωo)L(χi,ωi)dωidχi,
the incoming radiance L(χi,ωi) undergoes significant scattering. Modeling and rendering module 108 approximates its impact at χo by an averaging effect as follows:
where Ai is a disk of radius rs around χi and L(χi,ωi) is locally uniform over Ai.
Further averaging over Ao is performed to obtain the following:
Notice that
∫Ao∫AiS(χ,ωi,χ′,ωo)L(χ,ωi)dχdχ′
is the radiance contribution from area Ai to area Ao. Since a quasi-homogeneous material is homogeneous at a large scale and is optically thick, modeling and rendering module 108 applies the dipole diffusion approximation to this area-to-area contribution:
∫Ao∫AiS(χ,ωi,χ′,ωo)L(χ,ωi)dχdχ′≈π²rs⁴Fo(Ao,ωo)Rd(χi,χo)L(χi,ωi)ƒ(ωi),
where Rd(χi,χo) is the diffuse reflectance given by the dipole approximation, Fo(Ao,ωo) is the average outgoing Fresnel term over Ao in direction ωo, and ƒ(ωi) is the mesostructure entrance function of the material, which essentially represents an average product of the incoming Fresnel term and cosine foreshortening factor over the surface area. The mesostructure entrance function is expressed independently of location since its average effect within a local region Ai is taken to be uniform over the material surface.
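The diffuse reflectance Rd here is the classical dipole model of Jensen et al., which depends only on the distance between entry and exit points. A sketch of that standard formula follows; the parameter names and the default refraction index are our assumptions, not values fixed by the text above:

```python
import math

def dipole_Rd(r, sigma_s_prime, sigma_a, eta=1.3):
    """Classical dipole diffuse reflectance Rd(r) for a semi-infinite,
    homogeneous medium; r is the distance between entry and exit points,
    sigma_s_prime the reduced scattering coefficient, sigma_a absorption."""
    sigma_t_prime = sigma_s_prime + sigma_a           # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime       # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)
    # Diffuse Fresnel reflectance approximation and internal-reflection factor
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime                 # real source depth
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)         # virtual (mirrored) source height
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3)
```

Rd falls off roughly exponentially with distance, which is the behavior the global scattering component must reproduce.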
Based on the above analysis for a local region Ao, it can be seen that Lg(χo,ωo) is proportional to the following dipole diffusion term:
Lg(χo,ωo)∝∫Bo∫ΩRd(χi,χo)ƒ(ωi)L(χi,ωi)dωidχi.  (1)
Such a dipole diffusion term is represented by component of
The above relation can be rewritten as follows:
Lg(χo,ωo)=∫Bo∫Ωƒv(χo,ωo)Rd(χi,χo)ƒ(ωi)L(χi,ωi)dωidχi,  (2)
by introducing a mesostructure exiting function ƒv(χo,ωo) to modulate the average diffuse contribution of the dipole diffusion term according to the view-dependent effects at χo, the local variation within Ao, and the outgoing Fresnel factor at χo. From Eq. (1) and Eq. (2), the mesostructure exiting function is expressed as follows:
In summary, the outgoing radiance from a quasi-homogeneous material is formulated as follows:
L(χo,ωo)=∫ΩR(χo,ωi,ωo)L(χo,ωi)dωi+∫Bo∫Ωƒv(χo,ωo)Rd(χi,χo)ƒ(ωi)L(χi,ωi)dωidχi.  (4)
Based on this equation, modeling and rendering module 108 implements a material model 114 to represent quasi-homogeneous materials that includes four components: the local reflectance function R(χo,ωi,ωo), the mesostructure exiting function ƒv(χo,ωo), the mesostructure entrance function ƒ(ωi), and the global dipole term Rd(χi,χo). As described immediately below, all of these components are measured from real materials.
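The two-term structure of the summary equation, one local reflectance term plus one global dipole term, can be sketched as a discrete sum. All callables and sample lists below are hypothetical stand-ins for the measured components, not the patent's identifiers:

```python
def radiance(R_local, f_v, f_in, R_d, L_in, dir_samples, global_samples, x_o, w_o):
    """Evaluate the two-term quasi-homogeneous model at surface point x_o:
       local:  Σ_ωi R(x_o, ω_i, ω_o) L(x_o, ω_i) dω_i
       global: Σ_xi Σ_ωi f_v(x_o, ω_o) R_d(x_i, x_o) f_in(ω_i) L(x_i, ω_i) dω_i dχ_i
    dir_samples: (ω_i, dω_i) pairs; global_samples: (x_i, dA_i) pairs over Bo."""
    local = sum(R_local(x_o, w_i, w_o) * L_in(x_o, w_i) * dw
                for w_i, dw in dir_samples)
    glob = sum(f_v(x_o, w_o) * R_d(x_i, x_o) * f_in(w_i)
               * L_in(x_i, w_i) * dw * dA
               for x_i, dA in global_samples
               for w_i, dw in dir_samples)
    return local + glob
```

The split matters for acquisition: the local term varies per texel, the entrance function is a single direction-dependent scalar, and Rd depends only on the dipole scattering parameters.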
An Exemplary Measurement System
In measuring a material model, a laser beam could be used to illuminate a material sample, but its highly concentrated illumination results in large intensity differences between specular radiance and global scattering radiance. An extensive dynamic range presents an impediment to accurate radiance measurement. To diminish this problem, system 100 separately captures global and local scattering exhibited by quasi-homogenous materials using different types of lighting. Since global scattering does not contain specular reflections and exhibits an exponential falloff with increasing scattering distance, system 100 employs a laser to heighten its visibility. Specifically, system 100 utilizes a laser stripe instead of a laser beam to significantly accelerate the capture of image data 112, and substantially reduce the amount of image data 112. Although a laser stripe does not illuminate a local circular area as shown for Ai in
Image capture device 120 of
In this implementation, the cameras (i.e., the imaging array) include, for example, eight DragonFly™ cameras evenly mounted on a stationary arc. The bottom camera, as well as the bottom lamp, is not used in the capture process. This is because, in this exemplary implementation, the top surface of a thick sample is occluded from the bottom camera and the bottom lamp. In this implementation, each camera acquires 24-bit images (respective portions of image data 112) at a resolution of 1024×768, although other image depths can be used. In this implementation, images are acquired at 3.75 Hz using different shutter speeds and the maximum aperture setting to create high dynamic range (HDR) images.
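Merging multiple shutter speeds into an HDR value is a standard step; a simplified per-pixel sketch (our own weighting choice and thresholds, not the capture device's actual pipeline) is:

```python
def hdr_pixel(samples, v_min=0.05, v_max=0.95):
    """Merge one pixel from multiple exposures into a radiance estimate.
    samples: (normalized_value, shutter_time) pairs.  Each exposure's
    estimate is value/time; a hat weight down-weights values near the
    under- and over-exposure limits."""
    num = den = 0.0
    for v, t in samples:
        w = max(0.0, min(v - v_min, v_max - v, 1.0))  # hat weighting
        num += w * (v / t)
        den += w
    return num / den if den > 0 else 0.0
```

Well-exposed samples from short and long shutters should agree on the radiance estimate; the weighting simply discards clipped readings.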
The lighting array (light arc) is mounted on the right arc of the capture device 120. In this exemplary implementation, the lighting array includes eight halogen lamps. This provides nearly directional light for samples on the turntable. In this implementation, the light arc is driven by another stepping motor that rotates it around the turntable center.
In this implementation, the laser scan unit of image capture device 120 includes one of three interchangeable 10 mw lasers (e.g., red: 650 nm, green: 532 nm, blue: 473 nm) with a line lens attached to produce a laser stripe. In this implementation, only one laser at a time can be attached to the laser control unit, so image measurement is iterated three times for the red, green, and blue lasers. In another implementation, all three lasers are attached to the laser control unit, so image measurement is iterated a single time for the red, green, and blue lasers. The laser scan unit can be manually fixed at any position on the light arc, and a controller adjusts the orientation of the laser to sweep a stripe over the surface of the sample.
For geometric calibration, the origin of the coordinate system utilized by system 100 is set at the center of the turntable, with the XY plane aligned with the flat surface. Before capture, the positions of the light sources are calibrated (e.g., manually calibrated). For example, the intrinsic and extrinsic parameters of the cameras are calibrated. For photometric calibration, the response curves of the cameras are determined. Then, for relative color calibration of the cameras, a standard color pattern is placed onto the turntable. Images are captured from each camera with light from a given lamp. To calibrate lamp illumination colors, an image is conversely captured from a given camera for each lamp.
To calibrate the laser stripes, the angle between the laser plane, defined as the plane containing the laser stripe and the laser emission point, and the XY plane is calculated. To this end, a laser stripe is projected onto two planes of different heights. The offset of the laser stripe between the two planes is calculated from two recorded laser scan images. From this offset, the angle between the laser plane and the XY plane is derived. In this implementation, since the distance of the laser emitter from the turntable is much larger than the material sample diameter, all the laser planes in one scan are considered parallel.
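The geometry of this two-plane calibration reduces to one arctangent; a sketch under the assumption that the stripe position is reduced to a single coordinate per plane (function and parameter names are ours):

```python
import math

def laser_plane_angle(h1, x1, h2, x2):
    """Angle (radians) between the laser plane and the XY plane, from the
    stripe's observed position x on two calibration planes of heights h1
    and h2.  The stripe shifts horizontally by (x2 - x1) as the height
    changes by (h2 - h1), so tan(angle) = Δh / Δx."""
    return math.atan2(abs(h2 - h1), abs(x2 - x1))
```

For example, a stripe that shifts by the same amount as the height difference implies a 45° laser plane.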
Exemplary Material Model Acquisition
System 100 measures a material sample to generate image data 112. The material sample is measured using the laser stripe and lamp illumination in the process outlined in Table 1. It is assumed that the material sample is optically thick enough so that a negligible amount of light exits the bottom of the sample. In this implementation, any such transmitted light is absorbed (and ignored) with a black cloth set beneath the sample. In a different implementation, transmitted light that exits the bottom of a physical sample (e.g., the light absorbed by the black cloth) is accounted for in the generated material model 114.
From the acquired data set (a respective portion of image data 112), modeling and rendering module 108 separates appearance components that correspond to the elements of material model 114. The separation operations are performed to enable inverse rendering of dipole diffusion parameters and measurement of the texture functions. In the laser stripe images, global sub-surface scattering is clearly revealed, but its appearance is influenced by mesostructure entrance and exiting functions. To decouple these reflectance components, modeling and rendering module 108 takes advantage of their distinct directional dependencies: the mesostructure entrance function depending on light direction, the mesostructure exiting function on view direction, and dipole diffusion being directionally independent. Modeling and rendering module 108 subtracts the global components from the halogen lamp images, leaving the local reflectance function, which consists of local non-homogeneous sub-surface scattering and mesostructure surface reflections.
In the image capture procedure of Table 1, all laser images are rectified onto the reference plane of the material model 114, which is defined above the top surface of the sampled object. The top surface of the sampled object is considered to be globally flat, but may contain local mesostructures. In the rectification process, modeling and rendering module 108 identifies pixels on the laser stripe according to a prescribed intensity threshold, and fits a line to these pixels. In this implementation, HDR images are not used in the rectification process. The obtained set of rectified images I(p,v,l) records, for incoming flux from laser stripe l, the measured radiance in direction v from 3D location p(x,y) on the reference plane, where p lies within a user-specified distance interval from l such that radiance from p results from global scattering and has an intensity that is sufficiently above the camera noise level. For purposes of exemplary illustration, such rectified images are shown as a respective portion of “other data” 122.
In this implementation, the measured material sample is considered to be optically thick, such that a negligible amount of light passes through the bottom surface. Only a central portion of the sample is measured to avoid edge effects such as light leakage through the sides.
An Exemplary Mesostructure Entrance Function
In the first step of the measurement process, image capture device 120 scans the sampled object with laser stripes emitted from different light directions. To this end, the laser is mounted at several positions along the light arc. For each position, the light arc is rotated to sampled angles to capture laser scan images at sampled viewing directions.
For a laser stripe projected from the vertical direction, the corresponding mesostructure entrance function value is set to equal one. From other incident light directions, modeling and rendering module 108 computes the mesostructure entrance function in relation to the vertical direction as follows:
where the numerator aggregates images with laser lines from direction ωi, and the denominator sums images with laser lines from the vertical direction ω0. Although images from a single view are sufficient to solve for this function, modeling and rendering module 108 utilizes several sparse viewing samples for greater robustness. In this implementation, 4×4 viewing samples are utilized.
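The ratio described above is straightforward to compute once the rectified images are grouped by incident direction. A minimal sketch, assuming images are flattened lists of pixel intensities (our own representation, not the patent's data layout):

```python
def entrance_function(images_wi, images_vertical):
    """Estimate f(ω_i) as the ratio of summed image intensities for laser
    stripes from direction ω_i to those for stripes from the vertical
    direction ω_0, for which f(ω_0) is defined to equal one.
    Each argument is a list of images; each image is a list of pixels."""
    num = sum(sum(img) for img in images_wi)
    den = sum(sum(img) for img in images_vertical)
    return num / den
```

Summing over several views before taking the ratio is what provides the robustness the text mentions, since per-view mesostructure exiting effects average out.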
Exemplary Dipole Diffusion Parameters
Referring to Table 1, and as shown in the section titled “Second Pass”, capture device 120 captures laser scan images from different viewing directions and under vertically projected illumination. This generates image data 112 utilized by modeling and rendering module 108 to compute the dipole diffusion parameters. For purposes of exemplary illustration, the dipole diffusion parameters are shown as a respective portion of “other data” 122. After the turntable is rotated to a desired sampling position, the light arc is rotated accordingly so that the sample is always scanned along the same direction. Capture device 120 utilizes all cameras to simultaneously capture the appearance of the sample.
In this implementation, to fit the global dipole model to a particular data set, modeling and rendering module 108 utilizes an approximation that sets the refraction index to 1.3 in dipole fitting. In other implementations, different approximations are utilized to set the refraction index to different value(s) as a function of the quasi-homogenous materials being sampled. In view of this approximation, modeling and rendering module 108 determines the global dipole term from two scattering properties: the reduced albedo α′ and the reduced extinction coefficient σ′t. For purposes of exemplary illustration, these two scattering properties are shown as respective portions of “other data” 122. Directly fitting these two parameters to measured data is ill-conditioned. Instead, modeling and rendering module 108 first measures the total diffuse reflectance R of the material sample and derives the reduced albedo α′. Then the reduced extinction coefficient σ′t is fit to the captured images.
In this implementation, the total diffuse reflectance is the ratio between the outgoing flux of the sample surface and incoming flux of a vertical laser stripe, expressed for stripe l from direction ω0 as follows:
where k1 is the laser stripe length on the sample and φ is the laser flux measured from a reference surface of known reflectance with laser stripe length k0. It is assumed that the intensity of the laser stripe is constant along the line. The effect of the mesostructure exiting function is effectively removed by summing corresponding image pixels from the different viewpoints. For a more robust estimate, capture device 120 captures multiple laser stripe projections at different positions and then averages the resulting diffuse reflectance values. With the total diffuse reflectance, modeling and rendering module 108 solves for the reduced albedo α′ of the material.
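Deriving α′ from the measured total diffuse reflectance uses the standard semi-infinite-medium relation from diffusion theory, which is monotonic in α′ and so invertible numerically. A sketch, with the 1.3 refraction index from the implementation above as the default (the inversion method is our choice, not stated in the text):

```python
import math

def total_diffuse_reflectance(alpha_p, eta=1.3):
    """R(α') for a semi-infinite homogeneous medium (diffusion theory)."""
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    s = math.sqrt(3.0 * (1.0 - alpha_p))
    return 0.5 * alpha_p * (1.0 + math.exp(-4.0 / 3.0 * A * s)) * math.exp(-s)

def albedo_from_reflectance(R_measured, eta=1.3, iters=60):
    """Invert R(α') by bisection on [0, 1]; R is increasing in α'."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total_diffuse_reflectance(mid, eta) < R_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is appropriate here precisely because direct joint fitting of α′ and σ′t is ill-conditioned, while this one-dimensional inversion is stable.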
With the derived reduced albedo α′, modeling and rendering module 108 solves for the material's reduced extinction coefficient σ′t by fitting the dipole model to the measured data. In fitting the dipole model, the same image set is summed over the viewing directions to eliminate mesostructure exiting function effects from the measured data:
Additionally, modeling and rendering module 108 integrates the outgoing radiance of all points pd that are a distance d from the laser line to obtain the function as follows:
In this implementation, and since the double summation in Eq. (6) has no closed-form solution, modeling and rendering module 108 creates a virtual version of the laser stripe with the same incoming flux
and renders ID(d) from it using Eq. (6) and σ′t values sampled from 0.0 to 10.0, which covers most materials of interest. In another implementation, σ′t is sampled from different ranges. In this implementation, modeling and rendering module 108 takes the σ′t value whose solution ID(d) to Eq. (6) most closely matches the measured I(d) according to ∥ID(d)−I(d)∥². To efficiently determine σ′t, the current implementation employs a coarse-to-fine search.
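A coarse-to-fine search over a one-dimensional parameter can be sketched as repeated uniform sampling followed by refinement around the best sample; the level and sample counts below are illustrative defaults, not values from the text:

```python
def coarse_to_fine_search(cost, lo=0.0, hi=10.0, levels=4, samples=11):
    """Minimize a 1-D cost function over [lo, hi].  At each level, sample
    the interval uniformly, keep the best sample, and shrink the interval
    to the step-width bracket around it before refining."""
    for _ in range(levels):
        step = (hi - lo) / (samples - 1)
        xs = [lo + i * step for i in range(samples)]
        best = min(xs, key=cost)
        lo, hi = max(lo, best - step), min(hi, best + step)
    return 0.5 * (lo + hi)
```

Here the cost would be ∥ID(d)−I(d)∥² as a function of the candidate σ′t; each level narrows the bracket by roughly a factor of five.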
Exemplary Mesostructure Exiting Function
From the image set captured for estimating the dipole parameters (i.e., a respective portion of image data 112), modeling and rendering module 108 solves the mesostructure exiting function using the dipole model in Eq. (5) as follows:
Essentially, this represents the image residual after factoring out global dipole diffusion. Since multiple scattering in global light transport is directionally independent, Eq. (3) can be determined independently of incident light direction.
Exemplary Local Reflectance Function
At this point, capture device 120 captures halogen lamp images (i.e., respective portions of image data 112) of the material sample with different lighting directions and viewpoints, as outlined in the “Third Pass” of Table 1. The acquired images include both local and global lighting effects. To obtain a local reflectance function (LRF) that represents only local scattering, modeling and rendering module 108 computes the global component from the determined dipole parameters and mesostructure entrance/exiting functions, and then subtracts this global component from the captured data as follows:
Although the dipole diffusion model is not intended to model light transport through inhomogeneous local volumes, some portion of the local scattering is allowed to be represented by the light transport, instead of being fully modeled in the LRF. This substantially avoids having to specify the extent of area Bo in Eq. (4). Subtracting the dipole component in the local area may lead to negative values, but these are not errors and are used to obtain the correct rendering result. For any negative values in the final rendering result due to noise or measurement errors, modeling and rendering module 108 simply truncates such values to zero before tone mapping. Since the sampled laser and lamp positions are not identical due to the structure of the exemplary capture device 120, mesostructure entrance function values at the lamp directions are bilinearly interpolated from the four nearest laser directions.
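The bilinear interpolation of entrance-function values from the four nearest laser directions is a standard operation on a sampled direction grid; a minimal sketch (function and parameter names are ours):

```python
def bilerp(f00, f10, f01, f11, s, t):
    """Bilinearly interpolate entrance-function values measured at the
    four nearest laser directions, treated as corners of a cell in the
    sampled direction grid, to a lamp direction at fractional cell
    coordinates (s, t) with s, t in [0, 1]."""
    return (f00 * (1 - s) * (1 - t) + f10 * s * (1 - t)
            + f01 * (1 - s) * t + f11 * s * t)
```

At the cell center this averages all four measurements; at a corner it reproduces that corner's measured value exactly.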
Exemplary Rendering
During rendering operations, modeling and rendering module 108 textures two components of material model 114 over a mesh surface 116: the mesostructure exiting function and the local reflectance function. One of the other components, the mesostructure entrance function, is uniform over a surface 116, so it does not need to be textured. The final component, the dipole diffusion parameters, is also not textured over the surface 116. The contribution of dipole diffusion to object 118 appearance is computed at run time with respect to the dipole diffusion parameters and the shape of the object 118 (i.e., the corresponding mesh 116) to be rendered. Given the dipole parameters and object shape, there exist efficient dipole algorithms that can compute the global light transport (dipole) contribution to appearance.
As a material model 114, the quasi-homogeneous material acquired from real samples can be applied to arbitrary objects. For this purpose, modeling and rendering module 108 reorganizes captured image data 112 as a 2-D texture T(x,y) defined on the reference plane. For purposes of exemplary illustration, such 2-D texture is shown as a respective portion of “other data” 122. Each texel T(xi,yi) in the texture data includes the reflectance function values and mesostructure exiting function at (xi,yi). Modeling and rendering module 108 synthesizes the texture T(x,y) onto the target surface mesh 116, while maintaining consistency in surface appearance under different lighting and viewing directions. Modeling and rendering module 108 directly assigns scattering properties to the resulting quasi-homogenous object 118 for simulating global light transport within its volume.
Quasi-homogeneous objects 118 formed from this material model 114 are straightforward to render. To evaluate the radiance of a surface point χo under a given lighting condition, modeling and rendering module 108 attenuates the incoming radiance on object surface according to the mesostructure entrance function. The diffuse light transport beneath the surface is then evaluated using the dipole approximation and modified by the mesostructure exiting function texture mapped onto the surface. For the local scattering effects, modeling and rendering module 108 directly integrates the local reflectance function on χo over the incoming radiance at χo. As a result, both appearance variations over the surface of the quasi-homogenous material object 118, and the scattering effects inside the object volume, are evaluated with material model 114.
To render the LRF, existing hardware-accelerated BTF rendering methods can be utilized, while an efficient dipole algorithm can be used to compute global light transport. With these methodologies, the local and global contributions are rapidly computed and added together to give the rendering result.
Exemplary Procedure
An Exemplary Operating Environment
The methods and systems described herein are operational with numerous other general-purpose or special-purpose computing systems, environments, or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, multiprocessor systems, microprocessor-based systems, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and so on. Compact or subset versions of the framework may also be implemented in clients of limited resources, such as handheld computers, or other computing devices. Although not required, the invention may be practiced in a distributed computing environment where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
With reference to
A computer 710 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computer 710, including both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 710.
Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example and not limitation, communication media includes wired media such as a wired network or a direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of the any of the above should also be included within the scope of computer-readable media.
System memory 730 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 731 and random access memory (RAM) 732. A basic input/output system 733 (BIOS), containing the basic routines that help to transfer information between elements within computer 710, such as during start-up, is typically stored in ROM 731. RAM 732 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 720. By way of example and not limitation,
The computer 710 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
A user may enter commands and information into the computer 710 through input devices such as a keyboard 762 and pointing device 761, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, graphics pen and pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 720 through a user input interface 760 that is coupled to the system bus 721, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
A monitor 791 or other type of display device is also connected to the system bus 721 via an interface, such as a video interface 790. In addition to the monitor, computers may also include other peripheral output devices such as printer 796 and audio device(s) 797, which may be connected through an output peripheral interface 795.
The computer 710 operates in a networked environment using logical connections to one or more remote computers, such as a remote computer 780. In one implementation, remote computer 780 represents computing device 102 of
When used in a LAN networking environment, the computer 710 is connected to the LAN 771 through a network interface or adapter 770. When used in a WAN networking environment, the computer 710 typically includes a modem 772 or other means for establishing communications over the WAN 773, such as the Internet. The modem 772, which may be internal or external, may be connected to the system bus 721 via the user input interface 760, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 710, or portions thereof, may be stored in the remote memory storage device. By way of example and not limitation,
Conclusion
Although the systems and methods for representing quasi-homogenous materials have been described in language specific to structural features and/or methodological operations or actions, it is understood that the implementations defined in the appended claims are not necessarily limited to the specific features or actions described. Rather, the specific features and operations of system 100 are disclosed as exemplary forms of implementing the claimed subject matter.
Number | Date | Country
---|---|---
20060290719 A1 | Dec 2006 | US