COMPUTER GRAPHICS WITH ENUMERATING QMC SEQUENCES IN VOXELS

Information

  • Patent Application
  • 20090141026
  • Publication Number
    20090141026
  • Date Filed
    September 30, 2008
  • Date Published
    June 04, 2009
Abstract
The invention provides systems and computer-implemented methods for evaluating integrals using quasi-Monte Carlo methodologies, and in particular embodiments, adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and rank-1 lattices.
Description
FIELD OF THE INVENTION

The present invention relates generally to methods and systems for image synthesis in and by digital computing systems, such as for motion pictures and other computer graphics applications, and in particular, relates to methods, systems, devices, and computer software for efficient synthesis of realistic images. More particularly, the invention relates to methods, systems and computer code products useful in computer graphics, and which utilize techniques of enumerating quasi-Monte Carlo sequences in voxels.


BACKGROUND OF THE INVENTION

The use of synthetic images has become increasingly important and widespread in motion pictures and other commercial and scientific applications. A synthetic image represents a two-dimensional array of digital values, called picture elements or pixels, but can also be regarded as a two-dimensional function. Image synthesis, then, is the process of creating synthetic images from scenes.


As a general matter, digital images are generated by rasterization (as described in greater detail below and in the references cited in this document, which are incorporated herein by reference as if set forth in their entireties herein), or, in the case of photorealistic images of three-dimensional scenes, by ray tracing (also as described in greater detail below and in the references cited herein). Both approaches aim at determining the appropriate color for each pixel by projecting the original function into the pixel basis.


Image synthesis is perhaps the most visible part of computer graphics. On the one hand it is concerned with physically correct image synthesis, which aims to identify light paths that connect light sources and cameras and to sum up their contributions. On the other hand it also comprises non-photorealistic rendering, such as the simulation of pen strokes or watercolor.


The underlying mathematical task of image synthesis is to determine the intensity for each picture element on the display medium. Computing the intensity of a single pixel requires integrating a function over the pixel area. This integral is often highly complex, as discussed below, and cannot be solved analytically, thus requiring numerical methods for solution, which may include Monte Carlo and quasi-Monte Carlo methods. In particular, image synthesis is an integro-approximation problem for which analytical solutions are available only in exceptional cases. Therefore, numerical techniques need to be applied. While standard graphics textbooks still recommend elements of classical Monte Carlo integration, the majority of visual effects in the movie industry are produced using quasi-Monte Carlo techniques.


However, typical numerical methods used in such applications have their own limitations and attendant problems. It would therefore be desirable to provide improved methods and systems for image synthesis whereby images can be rendered efficiently.


One major disadvantage of quasi-Monte Carlo methods over classical Monte Carlo integration is the difficulty of placing samples into a specific sub-domain of the integration domain. This problem becomes even more difficult when the number of samples per sub-domain differs, e.g. when using adaptive termination criteria. In general, great care has to be taken to ensure the employed point set does not spoil the consistency of the estimator. However, it is often desirable to adaptively place samples in specific regions to:

    • (a) spend more effort on regions with higher variance,
    • (b) guarantee a fair “sample per domain” distribution to reduce aliasing artifacts,
    • (c) or parallelize an algorithm by domain decomposition.


When using pure Monte Carlo methods, these problems can usually be tackled easily, even though parallelization can be tricky when using pseudo-random number generators [e.g., see reference MN00 in list of references below]. When using random uniform samples, full stratification can be realized simply by placing the desired number of random samples in the region of interest without losing the random distribution properties.


This application for U.S. Patent presents methods of using quasi-Monte Carlo samples while allowing direct placement of samples within sub-domains without compromising the quality of the entire point set.


Applications: A low-dimensional example from computer graphics is the distribution of the first two sample dimensions over the screen. As digital images are represented by a discrete set of pixels, usually represented by regular squares on the image plane, the distribution of samples with respect to those pixels is quite essential. Especially at small sampling rates, visually disturbing artifacts occur for quasi-Monte Carlo points whose stratification properties do not match the pixel grid.


Even for high-dimensional problems, such as path tracing for global illumination, it usually pays off to stratify the first dimensions (e.g. two dimensions on the image plane and one for time). The sample index can then be used to draw more components to sample further path events.


One prior art approach used the stratification properties of the Halton-sequence to generate samples that differ at most by the size of a stratum. In an interactive photon mapping context, they consider the first sample of each stratum as a representative of the other samples in its stratum. They validate the photon path that was generated using the representative sample and update the photon paths generated using the other samples of this stratum accordingly.


In another prior art approach, this idea was generalized to stratify samples on the image plane. However, no implementation details were given, apart from noting the usage of a look-up table. Though that approach can be quite efficient, it poses restrictions on the image size, since the table needs to be of the same size as the number of pixels. Furthermore, such a table is not feasible for higher dimensionality.


With respect to parallelization it is often useful to partition the problem domain. For example, when estimating the mass of a density point set using quasi-Monte Carlo points, each processing node may potentially have to load all of the density data. Yet when stratifying the samples, it can be guaranteed which parts of the density data need to be present, so each process can work with a small local data set.


SUMMARY OF THE INVENTION

One aspect of the present invention relates to the generation and synthesis of images, such as for display in a motion picture or other dynamic display. The invention provides improved methods, systems and computer program code products (i.e., software products) for, and useful in, image synthesis, including efficient methods for determining intensity whereby realistic images can be rendered efficiently within the limits of available computational platforms.


In one aspect, the invention provides new and improved systems, methods and computer program code products for evaluating integrals using quasi-Monte Carlo methodologies, and in particular embodiments, adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and domain stratification induced by rank-1 lattices.


Still further, one aspect of the invention relates to a method (and corresponding devices, systems and computer program code products) useful in a computer graphics system that utilizes low-discrepancy sequences, the method comprising: receiving a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, and based on the low-discrepancy sequence with inherent stratification properties, enumerating an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.


The low-discrepancy sequence can be a Halton-sequence, and the method can further comprise enumerating samples from the Halton-sequence, using an arbitrarily dimensional voxel grid. In one practice of the invention, the Halton-sequence can be a scrambled Halton-sequence. Alternatively, the Halton-sequence can be a non-scrambled Halton sequence.


The invention can further include enumerating samples from a (t, s)-sequence, including the Sobol′-sequence and the Faure-sequence, using an arbitrarily dimensional voxel grid.


Another aspect of the invention can utilize an arbitrary number of additional components from the corresponding low-discrepancy sequence to cover higher dimensions in the computation.


Another aspect of the invention relates to a computer-implemented method of generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the method comprising:


A. generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points;


B. evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output; and


C. given a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, enumerating an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.


Yet another aspect of the invention relates to a subsystem within a computer graphics system, for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the subsystem comprising:


A. means for generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points;


B. means for evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output; and


C. means operable to enumerate an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel, given a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization.


Another aspect of the invention relates to a computer program product, comprising computer readable computer program code encoded on a physical medium readable by a computer, for use in a computer graphics system, the computer program code product comprising: first computer program code means for receiving, as an input, a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, and second computer program code means for enumerating, based on the received low-discrepancy sequence with inherent stratification properties, an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.


These and other aspects, practices, embodiments and examples of the invention will next be discussed in greater detail.





BRIEF DESCRIPTION OF THE DRAWINGS

This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:



FIG. 1 shows a diagram of a computer graphics system suitable for use in implementations of various aspects of the invention described herein.



FIG. 2 shows a diagram illustrating components of the computer graphics system and processor module shown in FIG. 1.



FIG. 2A shows a diagram illustrating further components of a computing system according to aspects of the present invention.



FIG. 3 shows a diagram of a network configuration suitable for use in implementations of various aspects of the invention described herein.



FIG. 4 shows a diagram illustrating components of the network configuration shown in FIG. 3.



FIGS. 5 and 6 show code fragments for generating imaging data in accordance with aspects of the invention.



FIGS. 7A and 7B show plots of the first two components of the Halton sequence.



FIGS. 8A and 8B show a pair of low-dimensional projections, including the Halton sequence and the scrambled Halton sequence for designated points.



FIG. 9 shows a plot of a sample pattern that is tiled over the image plane.



FIG. 10 shows a plot illustrating an interleaved adaptive supersampling technique according to a further aspect of the invention.



FIGS. 11A-H show a series of plots illustrating classical quasi-Monte Carlo points, along with their mutual minimum distance.



FIGS. 12A-C show a series of drawings illustrating selection of lattices by maximum minimum distance.



FIG. 13 shows a computer-generated image of an infinite plane with a checkerboard texture.



FIGS. 14A-C are a series of alternative sampling patterns of computer graphics.



FIGS. 15A-E show sampling patterns using quasi-Monte Carlo points.



FIG. 16 shows an illustration of how samples in a pixel are determined by tiled instances of a Hammersley point set.



FIGS. 17A and 17B are plots illustrating how samples from the Halton sequence in the unit square are scaled to fit the pixel raster.



FIGS. 18A-C are a series of plots illustrating replications by rank-1 lattices.



FIGS. 19-22 show a series of flowcharts of general methods in accordance with the systems and techniques set forth in FIGS. 1-18 and the accompanying written description.



FIG. 23 is a flowchart of a general technique according to described aspects of the present invention for enumerating quasi-Monte Carlo sequences in voxels.



FIG. 24 is a flowchart illustrating a technique according to a further aspect of the invention utilizing a Halton-sequence.



FIG. 25 is a flowchart illustrating a technique according to a further aspect of the invention utilizing (t, s)-sequences, in base b.



FIG. 26 is a flowchart illustrating a technique according to a further aspect of the invention with respect to the FIG. 25 technique for the special case, a Sobol′-sequence for s=2.



FIGS. 27-30 are a series of code listings, in the Python programming language, illustrating implementations of various described aspects of the invention.





DETAILED DESCRIPTION OF THE INVENTION

Various aspects, examples, features, embodiments and practices in accordance with the present invention will be set forth in detail in the present Detailed Description of the Invention, which is organized into the following sections:

  • Preface: Digital Processing Environment in Which Invention Can Be Implemented
  • I. Introduction, Overview and Description of Quasi-Monte Carlo Methodologies in Which Sample Points Represent Dependent Samples Generated Using a Low-Discrepancy Sequence
  • II. Image Synthesis by Adaptive Quasi-Monte Carlo Integration
  • III. Additional Examples and Points Regarding Quasi-Monte Carlo Integration
  • IV. General Methods
  • V. Enumerating Quasi-Monte Carlo Sequences in Voxels


    Preface: Digital Processing Environment in which Invention can be Implemented


The present invention relates to the generation and synthesis of images, such as for display in a motion picture or other dynamic display. The techniques described herein are practiced as part of a computer graphics system, in which a pixel value is generated for each pixel in an image. The pixel value is representative of a point in a scene as recorded on an image plane of a simulated camera. The computer graphics system is configured to generate the pixel value for an image using a selected methodology.


The following discussion describes methods, structures, systems, and display technology in accordance with these techniques. It will be understood by those skilled in the art that the described methods and systems can be implemented in software, hardware, or a combination of software and hardware, using conventional computer apparatus such as a personal computer (PC) or equivalent device operating in accordance with (or emulating) a conventional operating system such as Microsoft Windows, Linux, or Unix, either in a standalone configuration or across a network. The various processing means and computational means described below and recited in the claims may therefore be implemented in the software and/or hardware elements of a properly configured digital processing device or network of devices. Processing may be performed sequentially or in parallel, and may be implemented using special purpose or reconfigurable hardware.



FIG. 1 attached hereto depicts an illustrative computer system 10 that makes use of such a strictly deterministic methodology. With reference to FIG. 1, the computer system 10 in one embodiment includes a processor module 11 and operator interface elements comprising operator input components such as a keyboard 12A and/or a mouse 12B (generally identified as operator input element(s) 12) and an operator output element such as a video display device 13. The illustrative computer system 10 is of the conventional stored-program computer architecture. The processor module 11 includes, for example, one or more processor, memory and mass storage devices, such as disk and/or tape storage elements (not separately shown), which perform processing and storage operations in connection with digital data provided thereto. The operator input element(s) 12 are provided to permit an operator to input information for processing. The video display device 13 is provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing. The processor module 11 generates information for display by the video display device 13 using a so-called “graphical user interface” (“GUI”), in which information for various applications programs is displayed using various “windows.” Although the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.


In addition, the processor module 11 includes one or more network ports, generally identified by reference numeral 14, which are connected to communication links which connect the computer system 10 in a computer network. The network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network. In a typical network organized according to, for example, the client-server paradigm, certain computer systems in the network are designated as servers, which store data and programs (generally, “information”) for processing by the other, client computer systems, thereby to enable the client computer systems to conveniently share the information. A client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network. After processing the data, the client computer system may also return the processed data to the server for storage. In addition to computer systems (including the above-described servers and clients), a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network. The communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems. Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message.



FIG. 2 shows a diagram illustrating the sample point generator 20, function evaluator 22, simulated rays 24 and simulated lens 26 processing aspects of a computer graphics system 10 and processor module 11 in accordance with the invention.



FIG. 2A shows a diagram illustrating additional components of the computing system 10 according to further aspects of the invention, described below. As shown in FIG. 2A, the computing system 10 further includes a sample point generator 30 for generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points. The computing system 10 also includes a function evaluator 32 in communication with the sample point generator 30 for evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value. The pixel value is usable to generate an electronic output that controls the display 13.


In addition to the computer system 10 shown in FIGS. 1 and 2, methods, devices or software products in accordance with the invention can operate on any of a wide range of conventional computing devices and systems, such as those depicted by way of example in FIG. 3 (e.g., network system 100), whether standalone, networked, portable or fixed, including conventional PCs 102, laptops 104, handheld or mobile computers 106, or across the Internet or other networks 108, which may in turn include servers 110 and storage 112.


In line with conventional computer software and hardware practice, a software application configured in accordance with the invention can operate within, e.g., a PC 102 like that shown in FIG. 4, in which program instructions can be read from CD ROM 116, magnetic disk or other storage 120 and loaded into RAM 114 for execution by CPU 118. Data can be input into the system via any known device or means, including a conventional keyboard, scanner, mouse or other elements 103.


Those skilled in the art will understand that the method aspects of the invention described below can be executed in hardware elements, such as an Application-Specific Integrated Circuit (ASIC) constructed specifically to carry out the processes described herein, using ASIC construction techniques known to ASIC manufacturers. Various forms of ASICs are available from many manufacturers, although currently available ASICs do not provide the functions described in this patent application. Such manufacturers include Intel Corporation and NVIDIA Corporation, both of Santa Clara, Calif. The actual semiconductor elements of such ASICs and equivalent integrated circuits are not part of the present invention, and will not be discussed in detail herein.


Those skilled in the art will also understand that method aspects of the present invention can be carried out within commercially available digital processing systems, such as workstations and personal computers (PCs), operating under the collective command of the workstation or PC's operating system and a computer program product configured in accordance with the present invention. The term “computer program product” can encompass any set of computer-readable program instructions encoded on a computer readable medium. A computer-readable medium can encompass any form of computer readable element, including, but not limited to, a computer hard disk, computer floppy disk, computer-readable flash drive, computer-readable RAM or ROM element, or any other known means of encoding, storing or providing digital information, whether local to or remote from the workstation, PC or other digital processing device or system. Various forms of computer readable elements and media are well known in the computing arts, and their selection is left to the implementer. In each case, the invention is operable to enable a computer system to calculate a pixel value, and the pixel value can be used by hardware elements in the computer system, which can be conventional elements such as graphics cards or display controllers, to generate a display-controlling electronic output. Conventional graphics cards and display controllers are well known in the computing arts, are not necessarily part of the present invention, and their selection can be left to the implementer.


In particular, the systems illustrated in FIGS. 1-4 may be used, in accordance with the following described aspects of the invention, to implement a computer graphics system that evaluates integrals using a quasi-Monte Carlo methodology, which can include adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, trajectory splitting by dependent sampling and domain stratification induced by rank-1 lattices, and enumerating quasi-Monte Carlo sequences in voxels.


I. Introduction, Overview and Description of Quasi-Monte Carlo Methodologies in which Sample Points Represent Dependent Samples Generated Using a Low-Discrepancy Sequence


Aspects of the present invention provide a computer graphic system and method for generating pixel values for pixels in an image of a scene, which makes use of a strictly deterministic quasi-Monte Carlo methodology in conjunction with various sub-techniques, which can include, for example, trajectory splitting by dependent sampling for generating sample points for use in generating sample values for evaluating the integral or integrals whose function(s) represent the contributions of the light reflected from the various points in the scene to the respective pixel value, rather than the random or pseudo-random Monte Carlo methodology which has been used in the past. The strictly deterministic methodology ensures a priori that the sample points will be generally more evenly distributed over the interval or region over which the integral(s) is (are) to be evaluated in a low-discrepancy manner.


It will be helpful to initially provide some background on operations performed by the computer graphics system in generating an image. Generally, the computer graphic system generates an image that attempts to simulate an image of a scene that would be generated by a camera. The camera includes a shutter that will be open for a predetermined time T starting at a time t0 to allow light from the scene to be directed to an image plane. The camera may also include a lens or lens model (generally, “lens”) that serves to focus light from the scene onto the image plane. The average radiance flux Lm,n through a pixel at position (m, n) on an image plane P, which represents the plane of the camera's recording medium, is determined by










L_{m,n} = \frac{1}{|A_P| \cdot T \cdot |A_L|} \int_{A_P} \int_{t_0}^{t_0+T} \int_{A_L} L\bigl(h(x,t,y), -\omega(x,t,y)\bigr) \, f_{m,n}(x,y,t) \; dy \, dt \, dx \qquad (1.7)







where Ap refers to the area of the pixel, AL refers to the area of the portion of the lens through which rays of light pass from the scene to the pixel, and fm,n represents a filtering kernel associated with the pixel. An examination of the integral in Equation (1.7) will reveal that, for the variables of integration, x, y and t, the variable y refers to integration over the lens area AL, the variable t refers to integration over time (the time interval from t0 to t0+T) and the variable x refers to integration over the pixel area Ap.


The value of the integral in Equation (1.7) is approximated in accordance with a quasi-Monte Carlo methodology by identifying Np sample points xi in the pixel area, and, for each sample point, shooting NT rays at times ti,j in the time interval t0 to t0+T through the focus into the scene, with each ray spanning NL sample points yi,j,k on the lens area AL. The manner in which subpixel jitter positions xi, points in time ti,j and positions on the lens yi,j,k are determined will be described below. These three parameters determine the primary ray hitting the scene geometry in h(xi, ti,j, yi,j,k) with the ray direction ω(xi, ti,j, yi,j,k). In this manner, the value of the integral in Equation (1.7) can be approximated as follows:











L_{m,n} \approx \frac{1}{N} \sum_{i=0}^{N_P-1} \frac{1}{N_T} \sum_{j=0}^{N_T-1} \frac{1}{N_L} \sum_{k=0}^{N_L-1} L\bigl(h(x_i, t_{i,j}, y_{i,j,k}), -\omega(x_i, t_{i,j}, y_{i,j,k})\bigr) \, f_{m,n}(x_i, t_{i,j}, y_{i,j,k}) \qquad (1.8)







where N is the total number of rays directed at the pixel.


It will be appreciated that rays directed from the scene toward the image plane can comprise rays directly from one or more light sources in the scene, as well as rays reflected off surfaces of objects in the scene. In addition, it will be appreciated that a ray that is reflected off a surface may have been directed to the surface directly from a light source, or may itself be a ray that was reflected off another surface. For a surface that reflects light rays, a reflection operator Tfr is defined that includes a diffuse portion Tfd, a glossy portion Tfg and a specular portion Tfs, or






T_{f_r} = T_{f_d} + T_{f_g} + T_{f_s} \qquad (1.9)


In that case, the Fredholm integral equation L=Le+TfrL governing light transport can be represented as






L = L_e + T_{f_r - f_s} L_e + T_{f_g}(L - L_e) + T_{f_s} L + T_{f_d} T_{f_g + f_s} L + T_{f_d} T_{f_d} L \qquad (1.10)


where transparency has been ignored for the sake of simplicity; transparency is treated in an analogous manner. The individual terms in Equation (1.10) are as follows:

  • (i) Le represents flux due to a light source;


  • (ii) Tfr−fsLe (where Tfr−fs := Tfr − Tfs) represents direct illumination, that is, flux reflected off a surface that was provided thereto directly by a light source; the specular component, associated with the specular portion Tfs of the reflection operator, is treated separately since it is modeled using a δ-distribution;

  • (iii) Tfg(L−Le) represents glossy illumination, which is handled by recursive distribution ray tracing, where, in the recursion, the source illumination has already been accounted for by the direct illumination (item (ii) above);
  • (iv) TfsL represents a specular component, which is handled by recursively using L for the reflected ray,
  • (v) TfdTfg+fsL (where Tfg+fs=Tfg+Tfs) represents a caustic component, which is a ray that has been reflected off a glossy or specular surface (reference the Tfg+fs operator) before hitting a diffuse surface (reference the Tfd operator); this contribution can be approximated by a high resolution caustic photon map; and
  • (vi) TfdTfdL represents ambient light, which is very smooth and is therefore approximated using a low resolution global photon map.


As noted above, the value of the integral in Equation (1.7) is approximated by solving Equation (1.8) making use of sample points xi, ti,j, and yi,j,k, where xi refers to sample points within the area AP of the respective pixel at location (m, n) in the image plane, ti,j refers to sample points within the time interval t0 to t0+T during which the shutter is open, and yi,j,k refers to sample points on the lens area AL. In accordance with one aspect of the invention, the sample points xi comprise two-dimensional Hammersley points, which are defined as






\left(\frac{i}{N}, \; \Phi_2(i)\right)




where 0≦i<N, and Φ2(i) refers to the radical inverse of i in base two. Generally, the s-dimensional Hammersley point set is defined as follows:












U^{\mathrm{Hammersley}}_{N,s} : \{0, \ldots, N-1\} \to I^s, \qquad i \mapsto x_i := \left(\frac{i}{N}, \; \Phi_{b_1}(i), \; \ldots, \; \Phi_{b_{s-1}}(i)\right) \qquad (1.11)







where I^s is the s-dimensional unit cube [0,1)^s (that is, an s-dimensional cube each of whose dimensions includes zero and excludes one), the number of points N in the set is fixed, and b1, . . . , bs−1 are bases. The bases do not need to be prime numbers, but they are preferably relatively prime to provide a uniform distribution. The radical inverse function Φb, in turn, is generally defined as












\Phi_b : \mathbb{N}_0 \to I, \qquad i = \sum_{j=0}^{\infty} a_j(i) \, b^{j} \;\mapsto\; \sum_{j=0}^{\infty} a_j(i) \, b^{-j-1} \qquad (1.12)







where (a_j(i))_{j≥0} are the digits of the representation of i in integer base b. For N=(2^n)^2, the two-dimensional Hammersley points are a (0, 2n, 2)-net in base two, which are stratified on a 2^n×2^n grid and a Latin hypercube sample at the same time. Considering the grid as subpixels, the complete subpixel grid underlying the image plane can be filled by simply abutting copies of the grid to each other.
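By way of illustration only (and not as the Code Segments referenced below), the radical inverse of Equation (1.12) and the two-dimensional Hammersley points can be sketched in Python, the language of the code listings of FIGS. 27-30; the function names are chosen for this sketch:

    def radical_inverse(i, b):
        """Radical inverse Phi_b(i) of Equation (1.12): mirror the base-b digits of i at the radix point."""
        inv, digit_weight = 0.0, 1.0 / b
        while i > 0:
            inv += (i % b) * digit_weight  # digit a_j(i) contributes a_j(i) * b**(-j-1)
            i //= b
            digit_weight /= b
        return inv

    def hammersley_2d(N):
        """Two-dimensional Hammersley points (i/N, Phi_2(i)) for i = 0, ..., N-1."""
        return [(i / N, radical_inverse(i, 2)) for i in range(N)]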


Given integer subpixel coordinates (sx, sy) the instance i and coordinates (x, y) for the sample point xi in the image plane can be determined as follows. Preliminarily, examining







\left(\frac{i}{N}, \; \Phi_2(i)\right)_{i=0}^{N-1}





one observes the following:

    • (a) each line in the stratified pattern is a shifted copy of another, and
    • (b) the pattern is symmetric to the line y=x, that is, each column is a shifted copy of another column.


Accordingly, given the integer permutation σ(k) := 2^n Φ2(k) for 0 ≦ k < 2^n, subpixel coordinates (sx, sy) are mapped onto strata coordinates (j, k) := (sx mod 2^n, sy mod 2^n), an instance number i is computed as






i = j \cdot 2^n + \sigma(k) \qquad (1.13)


and the positions of the jittered subpixel sample points are determined according to













x_i = \bigl(s_x + \Phi_2(k), \; s_y + \Phi_2(j)\bigr) = \left(s_x + \frac{\sigma(k)}{2^n}, \; s_y + \frac{\sigma(j)}{2^n}\right) \qquad (1.14)







An efficient algorithm for generating the positions of the jittered subpixel sample points xi will be provided below in connection with Code Segment 1. A pattern of sample points whose positions are determined as described above in connection with Equations (1.13) and (1.14) has the advantage of a much reduced discrepancy over a pattern determined using a Halton sequence or windowed Halton sequence, as described in Grabenstein, and therefore the approximation described above in connection with Equation (1.8) gives, in general, a better estimate of the value of the integral described above in connection with Equation (1.7). In addition, if N is sufficiently large, sample points in adjacent pixels will have different patterns, reducing the likelihood that undesirable artifacts will be generated in the image.
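As an illustrative sketch of Equations (1.13) and (1.14) (the efficient version is the Code Segment 1 referred to above), the mapping from integer subpixel coordinates to instance number and jittered sample position might be written as follows, where sigma computes σ(k) = 2^n Φ2(k) as an n-bit bit reversal:

    def sigma(k, n):
        """Integer permutation sigma(k) = 2^n * Phi_2(k) for 0 <= k < 2^n (bit reversal of n bits)."""
        result = 0
        for _ in range(n):
            result = (result << 1) | (k & 1)
            k >>= 1
        return result

    def subpixel_sample(sx, sy, n):
        """Instance number (Equation (1.13)) and jittered position (Equation (1.14)) for subpixel (sx, sy)."""
        j, k = sx % (1 << n), sy % (1 << n)     # strata coordinates (j, k)
        i = j * (1 << n) + sigma(k, n)          # instance number i = j * 2^n + sigma(k)
        x = sx + sigma(k, n) / (1 << n)         # = sx + Phi_2(k)
        y = sy + sigma(j, n) / (1 << n)         # = sy + Phi_2(j)
        return i, (x, y)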


A “ray tree” is a collection of paths of light rays that are traced from a point on the simulated camera's image plane into the scene. The computer graphics system 10 generates a ray tree by recursively following transmission, subsequent reflection and shadow rays using trajectory splitting. In accordance with another aspect of the invention, a path is determined by the components of one vector of a global generalized scrambled Hammersley point set. Generally, a scrambled Hammersley point set reduces or eliminates a problem that can arise in connection with higher-dimensioned low-discrepancy sequences, since the radical inverse function Φb typically has subsequences of b−1 equidistant values spaced by 1/b. Although these correlation patterns are hardly noticeable in the full s-dimensional space, they are undesirable since they are prone to aliasing. The computer graphics system 10 attenuates this effect by scrambling, which corresponds to application of a permutation to the digits of the b-ary representation used in the radical inversion. For the symmetric permutation σ from the symmetric group Sb over integers 0, . . . , b−1, the scrambled radical inverse is defined as
















\Phi_b : \mathbb{N}_0 \times S_b \to I, \qquad (i, \sigma) \mapsto \sum_{j=0}^{\infty} \sigma\bigl(a_j(i)\bigr) \, b^{-j-1}, \quad \text{where } i = \sum_{j=0}^{\infty} a_j(i) \, b^{j} \qquad (1.15)







If the symmetric permutation σ is the identity, the scrambled radical inverse corresponds to the unscrambled radical inverse. In one embodiment, the computer graphics system 10 generates the symmetric permutation σ recursively as follows. Starting from the permutation σ2=(0,1) for base b=2, the sequence of permutations is defined as follows:


(i) if the base b is even, the permutation σb is generated by first taking the values of 2σb/2 and appending the values of







2\sigma_{b/2} + 1




and


(ii) if the base b is odd, the permutation σb is generated by taking the values of σb−1, incrementing each value that is greater than or equal to







\frac{b-1}{2}




by one, and inserting the value







\frac{b-1}{2}




in the middle.


This recursive procedure results in





σ2=(0,1)
σ3=(0,1,2)
σ4=(0,2,1,3)
σ5=(0,3,2,1,4)
σ6=(0,2,4,1,3,5)
σ7=(0,2,5,3,1,4,6)
σ8=(0,4,2,6,1,5,3,7), and so on.
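The recursive construction of the permutations σb and the scrambled radical inverse of Equation (1.15) might be sketched in Python as follows (an illustrative sketch, not the claimed implementation); because σb(0)=0 for these permutations, the digit loop may terminate after the non-zero digits:

    def permutation(b):
        """Recursively build sigma_b, starting from sigma_2 = (0, 1)."""
        if b == 2:
            return [0, 1]
        if b % 2 == 0:                       # even base: values of 2*sigma_{b/2} followed by 2*sigma_{b/2} + 1
            s = permutation(b // 2)
            return [2 * v for v in s] + [2 * v + 1 for v in s]
        s = permutation(b - 1)               # odd base: increment values >= (b-1)/2, insert (b-1)/2 in the middle
        mid = (b - 1) // 2
        s = [v + 1 if v >= mid else v for v in s]
        return s[:mid] + [mid] + s[mid:]

    def scrambled_radical_inverse(i, b, sigma_b):
        """Scrambled radical inverse of Equation (1.15): apply sigma_b to each base-b digit of i."""
        inv, digit_weight = 0.0, 1.0 / b
        while i > 0:
            inv += sigma_b[i % b] * digit_weight
            i //= b
            digit_weight /= b
        return inv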


The computer graphics system 10 can generate a generalized low-discrepancy point set as follows. It is often possible to obtain a low-discrepancy sequence by taking any rational s-dimensional point x as a starting point and determining a successor by applying the corresponding incremental radical inverse function to the components of x. The result is referred to as the generalized low-discrepancy point set. This can be applied to both the Halton sequence and the Hammersley sequence. In the case of the generalized Halton sequence, this can be formalized as





x_i = \bigl(\Phi_{b_1}(i + i_1), \; \Phi_{b_2}(i + i_2), \; \ldots, \; \Phi_{b_s}(i + i_s)\bigr) \qquad (1.16)


where the integer vector (i1, i2, . . . is) represents the offsets per component and is fixed in advance for a generalized sequence. The integer vector can be determined by applying the inverse of the radical inversion to the starting point x. A generalized Hammersley sequence can be generated in an analogous manner.
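A minimal sketch of Equation (1.16), reusing the radical_inverse helper from the sketch above; the offsets i1, . . . , is are assumed to have been fixed in advance:

    def generalized_halton_point(i, offsets, bases):
        """Generalized Halton point of Equation (1.16): component k uses base bases[k] and offset offsets[k]."""
        return tuple(radical_inverse(i + i_k, b) for i_k, b in zip(offsets, bases))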


Returning to trajectory splitting, generally trajectory splitting is the evaluation of a local integral, which is of small dimension and which makes the actual integrand smoother, which improves overall convergence. Applying replication, positions of low-discrepancy sample points are determined that can be used in evaluating the local integral. The low-discrepancy sample points are shifted by the corresponding elements of the global scrambled Hammersley point set. Since trajectory splitting can occur multiple times on the same level in the ray tree, branches of the ray tree are decorrelated in order to avoid artifacts, the decorrelation being accomplished by generalizing the global scrambled Hammersley point set.


An efficient algorithm for generating a ray tree will be provided below in connection with Code Segment 2. Generally, in that algorithm, the instance number i of the low-discrepancy vector, as determined above in connection with Equation (1.13), and the number d of used components, which corresponds to the current integral dimension, are added to the data structure that is maintained for the respective ray in the ray tree. The ray tree of a subpixel sample is completely specified by the instance number i. After the dimension has been set to “two,” which determines the component of the global Hammersley point set that is to be used next, the primary ray is cast into the scene to span its ray tree. In determining the deterministic splitting by the components of low discrepancy sample points, the computer graphics system 10 initially allocates the required number of dimensions Δd. For example, in simulating glossy scattering, the required number of dimensions will correspond to “two.” Thereafter, the computer graphics system 10 generates scattering directions from the offset given by the scrambled radical inverses





\Phi_{b_d}(i, \sigma_{b_d}), \; \ldots, \; \Phi_{b_{d+\Delta d - 1}}(i, \sigma_{b_{d+\Delta d - 1}})


yielding the instances











(y_{i,j})_{j=0}^{M-1} = \left(\Phi_{b_d}(i, \sigma_{b_d}) \oplus \tfrac{j}{M}, \; \ldots, \; \Phi_{b_{d+\Delta d-1}}(i, \sigma_{b_{d+\Delta d-1}}) \oplus \Phi_{b_{d+\Delta d-2}}(j, \sigma_{b_{d+\Delta d-2}})\right) \qquad (1.17)







where “⊕” refers to addition modulo one. Each direction of the M replicated rays is determined by yi,j and enters the next level of the ray tree with d′:=d+Δd as the new integral dimension in order to use the next elements of the low-discrepancy vector, and i′=i+j in order to decorrelate subsequent trajectories. Using an infinite sequence of low-discrepancy sample points, the replication heuristic is turned into an adaptive consistent sampling arrangement. That is, computer graphics system 10 can fix the sampling rate ΔM, compare current and previous estimates every ΔM samples, and, if the estimates differ by less than a predetermined threshold value, terminate sampling. The computer graphics system 10 can, in turn, determine the threshold value by importance information, that is, how much the local integral contributes to the global integral.
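The shift “⊕” of Equation (1.17), i.e. component-wise addition modulo one, might be sketched as follows; the per-replica shift points (for example the two-dimensional Hammersley points (j/M, Φ2(j))) are assumed to be supplied by the caller according to the dimension allocation described above:

    def shift_mod_one(offset, shift):
        """Component-wise addition modulo one ("(+)" in Equation (1.17))."""
        return tuple((o + s) % 1.0 for o, s in zip(offset, shift))

    def replicated_instances(offset, shifts):
        """Replicated instances y_{i,j}: the same scrambled offset, shifted by each point of a local low-discrepancy point set."""
        return [shift_mod_one(offset, s) for s in shifts]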


As noted above, the integral described above in connection with Equation (1.7) is over a finite time period T from t0 to t0+T, during which time the shutter of the simulated camera is open. During the time period, if an object in the scene moves, the moving object may preferably be depicted in the image as blurred, with the extent of blurring being a function of the object's motion and the time interval T. Generally, motion during the time an image is recorded is linearly approximated by motion vectors, in which case the integrand in Equation (1.7) is relatively smooth over the time the shutter is open and is suited for correlated sampling. For a ray instance i, started at the subpixel position xi, the offset Φ3(i) into the time interval is generated and the NT−1 subsequent samples








\Phi_3(i) + \frac{j}{N_T}






mod 1 are generated for 0<j<NT, that is










t_{i,j} := t_0 + \left(\Phi_3(i) \oplus \frac{j}{N_T}\right) \cdot T \qquad (1.18)







It will be appreciated that the value of NT may be chosen to be “one,” in which case there will be no subsequent samples for ray instance i. Determining sample points in this manner fills the sampling space, resulting in a more rapid convergence to the value of the integral in Equation (1.7). For subsequent trajectory splitting, rays are decorrelated by setting the instance i′=i+j.
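A minimal sketch of Equation (1.18); the offset Φ3(i) is assumed to be precomputed, for example by the radical_inverse helper sketched earlier:

    def time_samples(offset, t0, T, NT):
        """Shutter samples t_{i,j} = t0 + (Phi_3(i) (+) j/NT) * T of Equation (1.18); offset = Phi_3(i)."""
        return [t0 + ((offset + j / NT) % 1.0) * T for j in range(NT)]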


In addition to determining the position of the jittered subpixel sample point xi, and adjusting the camera and scene according to the sample point ti,j for the time, the computer graphics system also simulates depth of field. In simulating depth of field, the camera to be simulated is assumed to be provided with a lens having known optical characteristics and, using geometrical optics, the subpixel sample point xi is mapped through the lens to yield a mapped point xi′. The lens is sampled by mapping the dependent samples










y_{i,j,k} = \left(\Phi_5(i+j, \sigma_5) \oplus \frac{k}{N_L}, \;\; \Phi_7(i+j, \sigma_7) \oplus \Phi_2(k)\right) \qquad (1.19)







onto the lens area AL using a suitable one of a plurality of known transformations. As with NT, the value of NL may be chosen to be “one.” Thereafter, a ray is shot from the sample point on the lens specified by yi,j,k through the point xi′ into the scene. The offset





(\Phi_5(i+j, \sigma_5), \; \Phi_7(i+j, \sigma_7))


in Equation (1.19) comprises the next components taken from the generalized scrambled Hammersley point set, which, for trajectory splitting, is displaced by the elements






\left(\frac{k}{N_L}, \; \Phi_2(k)\right)




of the two-dimensional Hammersley point set. The instance of the ray originating from sample point yi,j,k is set to i+j+k in order to decorrelate further splitting down the ray tree. In Equation (1.19), the scrambled samples (Φ5(i+j, σ5), Φ7 (i+j, σ7)) are used instead of the unscrambled samples of (Φ5(i+j), Φ7(i+j)) since in bases five and seven, up to five unscrambled samples will lie on a straight line, which will not be the case for the scrambled samples.
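The lens samples of Equation (1.19) follow the same shifting pattern; in the sketch below the scrambled offsets Φ5(i+j, σ5) and Φ7(i+j, σ7) are assumed to be precomputed (for example with the scrambled_radical_inverse helper sketched earlier), and phi2 is a callable returning the radical inverse in base two:

    def lens_samples(offset5, offset7, NL, phi2):
        """Dependent lens samples y_{i,j,k} of Equation (1.19): the scrambled offsets are shifted
        modulo one by the two-dimensional Hammersley points (k/NL, Phi_2(k))."""
        return [((offset5 + k / NL) % 1.0, (offset7 + phi2(k)) % 1.0) for k in range(NL)]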


In connection with determination of a value for the direct illumination (Tfr−fsLe above), direct illumination is represented as an integral over the surface of the scene ∂V, which integral is decomposed into a sum of integrals, each over one of the L single area light sources in the scene. The individual integrals in turn are evaluated by dependent sampling, that is














(T_{f_r - f_s} L_e)(y, z) = \int_{\partial V} L_e(x, y) \, f_r(x, y, z) \, G(x, y) \; dx = \sum_{k=1}^{L} \int_{\mathrm{supp}\, L_{e,k}} L_e(x, y) \, f_r(x, y, z) \, G(x, y) \; dx \approx \sum_{k=1}^{L} \frac{1}{M_k} \sum_{j=0}^{M_k - 1} L_e(x_j, y) \, f_r(x_j, y, z) \, G(x_j, y) \qquad (1.20)







where suppLe,k refers to the surface of the respective k-th light source. In evaluating the estimate of the integral for the k-th light source, for the Mk-th query ray, shadow rays determine the fraction of visibility of the area light source, since the point visibility varies much more than the smooth shadow effect. For each light source, the emission Le is attenuated by a geometric term G, which includes the visibility, and the surface properties are given by a bidirectional distribution function ƒr−fs. These integrals are local integrals in the ray tree yielding the value of one node in the ray tree, and can be efficiently evaluated using dependent sampling. In dependent sampling, the query ray comes with the next free integral dimension d and the instance i, from which the dependent samples are determined in accordance with










x_j = \left(\Phi_{b_d}(i, \sigma_{b_d}) \oplus \frac{j}{M_k}, \;\; \Phi_{b_{d+1}}(i, \sigma_{b_{d+1}}) \oplus \Phi_2(j)\right) \qquad (1.21)







The offset





bd(iσbd),Φbd+1(i,σbd+1))


again is taken from the corresponding generalized scrambled Hammersley point set, which shifts the two-dimensional Hammersley point set






\left(\frac{j}{M_k}, \; \Phi_2(j)\right)




on the light source. Selecting the sample rate M_k = 2^{n_k} as a power of two, the local minima are obtained for the discrepancy of the Hammersley point set that perfectly stratifies the light source. As an alternative, the light source can be sampled using an arbitrarily-chosen number Mk of sample points using the following replication rule:







\left(\frac{j}{M_k}, \; \Phi_{M_k}(j, \sigma_{M_k})\right)_{j=0}^{M_k - 1}





Due to the implicit stratification of the positions of the sample points as described above, the local convergence will be very smooth.
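For the alternative replication rule with an arbitrarily chosen Mk, a sketch reusing the permutation and scrambled_radical_inverse helpers from the earlier sketches might read (illustrative only):

    def light_source_replication(Mk):
        """Replication points (j/Mk, Phi_{Mk}(j, sigma_{Mk})) for an arbitrarily chosen number Mk >= 2 of samples."""
        sigma_Mk = permutation(Mk)
        return [(j / Mk, scrambled_radical_inverse(j, Mk, sigma_Mk)) for j in range(Mk)]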


The glossy contribution Tfg(L−Le) is determined in a manner similar to that described above in connection with area light sources (Equations (1.20) and (1.21)), except that a model fg used to simulate glossy scattering will be used instead of the bidirectional distribution function fr. In determining the glossy contribution, two-dimensional Hammersley points are generated for a fixed splitting rate M and shifted modulo “one” by the offset





bd(i,σbd),Φbd+1(i,σbd+1))


taken from the current ray tree depth given by the dimension field d of the incoming ray. The ray trees spanned into the scattering direction are decorrelated by assigning the instance fields i′=i+j in a manner similar to that done for simulation of motion blur and depth of field, as described above. The estimates generated for all rays are averaged by the splitting rate M and propagated up the ray tree.


Volumetric effects are typically provided by performing a line integration along respective rays from their origins to the nearest surface point in the scene. In providing for a volumetric effect, the computer graphics system 10 generates from the ray data a corresponding offset Φbd (i) which it then uses to shift the M equidistant samples on the unit interval seen as a one-dimensional torus. In doing so, the rendering time is reduced in comparison to use of an uncorrelated jittering methodology. In addition, such equidistant shifted points typically obtain the best possible discrepancy in one dimension.
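A minimal sketch of the shifted equidistant samples used for the volumetric line integration; the per-ray offset Φbd(i) is assumed to be precomputed:

    def equidistant_shifted_samples(offset, M):
        """M equidistant samples on the unit interval (viewed as a one-dimensional torus), shifted by the per-ray offset."""
        return [(offset + j / M) % 1.0 for j in range(M)]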


Global illumination includes a class of optical effects, such as indirect illumination, diffuse and glossy inter-reflections, caustics and color bleeding, that the computer graphics system 10 simulates in generating an image of objects in a scene. Simulation of global illumination typically involves the evaluation of a rendering equation. For the general form of an illustrative rendering equation useful in global illumination simulation, namely:






L(\vec{x}, \vec{w}) = L_e(\vec{x}, \vec{w}) + \int_{S'} f(\vec{x}, \vec{w}' \to \vec{w}) \, G(\vec{x}, \vec{x}') \, V(\vec{x}, \vec{x}') \, L(\vec{x}', \vec{w}') \; dA' \qquad (1.22)


it is recognized that the light radiated at a particular point \vec{x} in a scene is generally the sum of two components, namely, the amount of light, if any, that is emitted from the point and the amount of light, if any, that originates from all other points and which is reflected or otherwise scattered from the point \vec{x}. In Equation (1.22), L(\vec{x}, \vec{w}) represents the radiance at the point \vec{x} in the direction \vec{w} = (θ, φ) (where θ represents the angle of direction \vec{w} relative to a direction orthogonal to the surface of the object in the scene containing the point \vec{x}, and φ represents the angle of the component of direction \vec{w} in a plane tangential to the point \vec{x}). Similarly, L(\vec{x}', \vec{w}') in the integral represents the radiance at the point \vec{x}' in the direction \vec{w}' = (θ', φ') (where θ' represents the angle of direction \vec{w}' relative to a direction orthogonal to the surface of the object in the scene containing the point \vec{x}', and φ' represents the angle of the component of direction \vec{w}' in a plane tangential to the point \vec{x}'), and represents the light, if any, that is emitted from point \vec{x}' which may be reflected or otherwise scattered from point \vec{x}.


Continuing with Equation (1.22), L_e(\vec{x}, \vec{w}) represents the first component of the sum, namely, the radiance due to emission from the point \vec{x} in the direction \vec{w}, and the integral over the sphere S' represents the second component, namely, the radiance due to scattering of light at point \vec{x}. f(\vec{x}, \vec{w}' \to \vec{w}) is a bidirectional scattering distribution function which describes how much of the light coming from direction \vec{w}' is reflected, refracted or otherwise scattered in the direction \vec{w}, and is generally the sum of a diffuse component, a glossy component and a specular component. In Equation (1.22), the function G(\vec{x}, \vec{x}') is a geometric term










G(\vec{x}, \vec{x}') = \frac{\cos\theta \, \cos\theta'}{\lVert \vec{x} - \vec{x}' \rVert^2} \qquad (1.23)







where θ and θ' are angles relative to the normals of the respective surfaces at points \vec{x} and \vec{x}', respectively. Further in Equation (1.22), V(\vec{x}, \vec{x}') is a visibility function which equals the value one if the point \vec{x}' is visible from the point \vec{x} and zero if the point \vec{x}' is not visible from the point \vec{x}.


The computer graphics system 10 makes use of global illumination specifically in connection with determination of the diffuse component





T_{f_d} T_{f_d} L


and the caustic component





T_{f_d} T_{f_g + f_s} L


using a photon map technique. Generally, a photon map is constructed by simulating the emission of photons by light source(s) in the scene and tracing the path of each of the photons. For each simulated photon that strikes a surface of an object in the scene, information concerning the simulated photon is stored in a data structure referred to as a photon map, including, for example, the simulated photon's color, position, and direction angle. Thereafter a Russian roulette operation is performed to determine the photon's state, i.e., whether the simulated photon will be deemed to have been absorbed or reflected by the surface. If a simulated photon is deemed to have been reflected by the surface, the simulated photon's direction is determined using, for example, a bidirectional reflectance distribution function (“BRDF”). If the reflected simulated photon strikes another surface, these operations will be repeated (see Grabenstein). The data structure in which information for the various simulated photons is stored may have any convenient form; typically k-dimensional trees, for k an integer, are used. After the photon map has been generated, it can be used in rendering the respective components of the image.


In generating a photon map, the computer graphics system 10 simulates photon trajectories, thus avoiding the necessity of discretizing the kernel of the underlying integral equation. The interactions of the photons with the scene, as described above, are stored and used for density estimation. The computer graphics system 10 makes use of a scrambled low-discrepancy strictly-deterministic sequence, such as a scrambled Halton sequence, which has better discrepancy properties in higher dimensions than does an unscrambled sequence. The scrambled sequence also has the benefit, over a random sequence, that the approximation error decreases more smoothly, which will allow for use of an adaptive termination scheme during generation of the estimate of the integral. In addition, since the scrambled sequence is strictly deterministic, generation of estimates can readily be parallelized by assigning certain segments of the low-discrepancy sequence to ones of a plurality of processors, which can operate on portions of the computation independently and in parallel. Since usually the space in which photons will be shot by selecting directions will be much larger than the area of the light sources from which the photons were initially shot, it is advantageous to make use of components of smaller discrepancy, for example, Φ2 or Φ3 (where, as above, Φb refers to the radical inverse function for base b), for use in connection with angles at which photons are shot, and components of higher discrepancy, for example, scrambled Φ5 or Φ7, for use in connection with sampling of the area of the respective light source, which will facilitate filling the space more uniformly.


The computer graphics system 10 estimates the radiance from the photons in accordance with












$$\bar{L}_r(x, \omega) \approx \frac{1}{A} \sum_{i \in B_k(x)} f_r(\omega_i, x, \omega)\, \Phi_i \qquad (1.24)$$

where, in Equation (1.24), Φi represents the energy of the respective i-th photon, ωi is the direction of incidence of the i-th photon, Bk(x) represents the set of the k nearest photons around the point x, and A represents an area around point x that includes the photons in the set Bk(x). Since the energy of a photon is a function of its wavelength, the Φi in Equation (1.24) also represents the color of the respective i-th photon. The computer graphics system 10 makes use of an unbiased but consistent estimator for the area A for use in Equation (1.24), which is determined as follows. Given a query ball, that is, a sphere that is centered at point x and whose radius r (Bk(x)) is the minimal radius necessary for the sphere to include the entire set Bk(x), a tangential disk D of radius r (Bk(x)) centered on the point x is divided into M equal-sized subdomains Di, that is














$$\bigcup_{i=0}^{M-1} D_i = D \quad \text{and} \quad D_i \cap D_j = \emptyset \ \text{ for } i \neq j,$$

where

$$|D_i| = \frac{|D|}{M} = \frac{\pi\, r^2(B_k(x))}{M} \qquad (1.25)$$

The set

$$P = \{\, D_i \mid D_i \cap \{\, x_{i|D} \mid i \in B_k(x) \,\} \neq \emptyset \,\} \qquad (1.26)$$


contains all the subdomains Di that contain a point xi|D on the disk, which is the position of the i-th photon projected onto the plane defined by the disk D along its direction of incidence ωi. Preferably, the number M of subdomains will be on the order of √k and the angular subdivision will be finer than the radial subdivision in order to capture geometry borders. The actual area A is then determined by









$$A = \frac{\pi\, r^2(B_k(x))\, |P|}{M} \qquad (1.27)$$

Determining the actual coverage of the disk D by photons significantly improves the radiance estimate in Equation (1.24) in corners and on borders, where the area obtained by the standard estimate πr2(Bk(x)) would be too small (at corners) or too large (at borders). In order to avoid blurring of sharp contours of caustics and shadows, the computer graphics system 10 sets the radiance estimate L to black if none of the domains Di that touch the point x contains any photons.
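The following C++ sketch illustrates the disk-coverage area estimate of Equations (1.25)-(1.27) under simplifying assumptions: the k nearest photons are assumed to be already projected onto the tangential disk around x, and the subdomains are a radial-times-angular grid with the angular subdivision finer than the radial one, with M on the order of √k. The Photon structure and the function name are hypothetical and used only for this illustration.

#include <cmath>
#include <vector>

struct Photon { double px, py; };  // position projected onto the disk, relative to x

double estimateArea(const std::vector<Photon>& nearest, double radius,
                    int radialDivisions, int angularDivisions)
{
    const double pi = 3.14159265358979323846;
    const int M = radialDivisions * angularDivisions;
    std::vector<bool> occupied(M, false);

    for (const Photon& p : nearest) {
        double r   = std::sqrt(p.px * p.px + p.py * p.py);
        double phi = std::atan2(p.py, p.px) + pi;            // in [0, 2*pi]
        int ri = std::min(radialDivisions - 1,
                          int(r / radius * radialDivisions));
        int ai = std::min(angularDivisions - 1,
                          int(phi / (2.0 * pi) * angularDivisions));
        occupied[ri * angularDivisions + ai] = true;          // mark subdomain D_i
    }

    int covered = 0;
    for (bool b : occupied) covered += b ? 1 : 0;

    // Equation (1.27): A = pi * r^2(B_k(x)) * |P| / M
    return pi * radius * radius * double(covered) / double(M);
}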


It will be appreciated that, in regions of high photon density, the k nearest photons may lead to a radius r(Bk(x)) that is nearly zero, which can cause an over-modulation of the estimate. Over-modulation can be avoided by selecting a minimum radius rmin, which will be used if r(Bk(x)) is less than rmin. In that case, instead of Equation (1.24), the estimate is generated in accordance with the following equation:












$$\bar{L}_r(x, \omega) = \frac{N}{k} \sum_{i \in B_k(x)} \Phi_i\, f_r(\omega_i, x, \omega_r) \qquad (1.28)$$

assuming each photon is started with 1/N of the total flux Φ. The estimator in Equation (1.28) provides an estimate for the mean flux of the k photons if r(Bk(x))<rmin.


The global photon map is generally rather coarse and, as a result, subpixel samples can result in identical photon map queries. As a result, the direct visualization of the global photon map is blurry and it is advantageous to perform a smoothing operation in connection therewith. In performing such an operation, the computer graphics system 10 performs a local pass integration that removes artifacts of the direct visualization. Accordingly, the computer graphics system 10 generates an approximation for the diffuse illumination term $T_{f_d} T_{f_d} L$ as














$$T_{f_d} T_{f_d} L \approx \left( T_{f_d} \bar{L}_r \right)(x) = \int_{S^2(x)} f_d(x)\, \bar{L}_r\!\left( h(x, \omega) \right) \cos\theta \; d\omega \approx \frac{f_d(x)}{M} \sum_{i=0}^{M-1} \bar{L}_r\!\left( h\!\left( x,\, \omega\!\left( \arcsin\sqrt{u_{i,1}},\; 2\pi\, u_{i,2} \right) \right) \right) \qquad (1.29)$$

with the integral over the hemisphere S2(x) of incident directions aligned by the surface normal in x being evaluated using importance sampling. The computer graphics system 10 stratifies the sample points on a two-dimensional grid by applying dependent trajectory splitting with the Hammersley sequence and thereafter applies irradiance interpolation. Instead of storing the incident flux Φi of the respective photons, the computer graphics system 10 stores their reflected diffuse power fd(xi)Φi with the respective photons in the photon map, which allows for a more exact approximation than can be obtained by only sampling the diffuse BRDF in the hit points of the final gather rays. In addition, the BRDF evaluation is needed only once per photon, saving the evaluations during the final gathering. Instead of sampling the full grid, the computer graphics system 10 uses adaptive sampling, in which refinement is triggered by contrast, by the distance traveled by the final gather rays in order to more evenly sample the projected solid angle, and by the number of photons that are incident from the portion of the projected hemisphere. The computer graphics system 10 fills in positions in the grid that are not sampled by interpolation. The resulting image matrix of the projected hemisphere is median filtered in order to remove weak singularities, after which the approximation is generated. The computer graphics system 10 performs the same operation in connection with, for example, hemispherical sky illumination, spherical high dynamic-range environmental maps, or any other environmental light source.
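A minimal sketch of the direction mapping used inside the sum of Equation (1.29), assuming the standard cosine-weighted parameterization (arcsin √u1, 2πu2) of the hemisphere around the surface normal of a local frame. The grid values (u1, u2), the irradiance interpolation, and the median filtering of the full system are omitted; the names are illustrative only.

#include <cmath>

struct Direction { double x, y, z; };

// Map a point of the unit square to a hemisphere direction around the z-axis
// of a local shading frame (cosine-weighted).
Direction gatherDirection(double u1, double u2)
{
    const double pi = 3.14159265358979323846;
    double theta = std::asin(std::sqrt(u1));   // polar angle
    double phi   = 2.0 * pi * u2;              // azimuth
    Direction d;
    d.x = std::sin(theta) * std::cos(phi);
    d.y = std::sin(theta) * std::sin(phi);
    d.z = std::cos(theta);
    return d;
}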


The computer graphics system 10 processes final gather rays that strike objects that do not cause caustics, such as plane glass windows, by recursive ray tracing. If the hit point of a final gather ray is closer to its origin than a predetermined threshold, the computer graphics system 10 also performs recursive ray tracing. This reduces the likelihood that blurry artifacts will appear in corners, which might otherwise occur since for close hit points the same photons would be collected, which can indirectly expose the blurry structure of the global photon map.


Generally, photon maps have been taken as a snapshot at one point in time, and thus were unsuitable in connection with rendering of motion blur. Following the observation that averaging the result of a plurality of photon maps is generally similar to querying one photon map with the total number of photons from all of the plurality of photon maps, the computer graphics system 10 generates NT photon maps, where NT is determined as described above, at points in time










$$t_b = t_0 + \frac{b + \frac{1}{2}}{N_T}\, T \qquad (1.30)$$

for 0 ≤ b < NT. As noted above, NT can equal one, in which case N photon maps are used, with N being chosen as described above. In that case,






$$t_i = t_0 + \Phi_3(i)\, T \qquad (1.31)$$


and thus ti,j=ti,0, that is, ti, for NT=1. In the general case (Equation (1.30)), during rendering, the computer graphics system 10 uses the photon map with the smallest time difference |ti,j−tb| in connection with rendering for the time sample point ti,j.


The invention provides a number of advantages. In particular, the invention provides a computer graphics system that makes use of strictly deterministic distributed ray tracing based on low-discrepancy sampling and dependent trajectory splitting in connection with rendering of an image of a scene. Generally, strictly deterministic distributed ray tracing based on deterministic low-discrepancy sampling and dependent trajectory splitting is simpler to implement than an implementation based on random or pseudo-random numbers. Due to the properties of the radical inverse function, stratification of sample points is intrinsic and does not need to be considered independently of the generation of the positions of the sample points. In addition, since the methodology is strictly deterministic, it can be readily parallelized by dividing the image into a plurality of tasks, which can be executed by a plurality of processors in parallel. There is no need to take a step of ensuring that positions of sample points are not correlated, which is generally necessary if a methodology based on random or pseudo-random numbers is to be implemented for processing in parallel.


Moreover, the methodology can be readily implemented in hardware, such as a graphics accelerator, particularly if Hammersley point sets are used, since all points with a fixed index i yield a regular grid. A graphics accelerator can render a plurality of partial images corresponding to these regular grids in a number of parallel tasks, and interleave the partial images in an accumulation buffer to provide the final image. Operating in this manner provides very good load balancing among the parallel tasks, since all of the tasks render almost the same image.


In addition, the methodology can readily be extended to facilitate rendering of animations. Generally, an animation consists of a series of frames, each frame comprising an image. In order to decorrelate the various frames, instead of initializing the field of integers used as identifiers for ray instances for each frame by i, i+if can be used, where if is the frame number. This operates as an offsetting of i by if, which is simply a generalization of the Hammersley points. A user can select to initialize the field of integers for each frame by i, in which case the frames will be correlated. In that case, undersampling artifacts caused by smooth motion will remain local and are only smoothly varying. Alternatively, the user can select to initialize the field of integers for each frame by i+if, in which case the artifacts will not remain local, and will instead appear as noise or film grain flicker in the final animation. The latter is sometimes a desired feature of the resulting animation, whether for artistic reasons or to match actual film grain. Another variation is to add if directly to k and clip the result by 2n (reference Code Segment 1, below). In that case, the pixel sampling pattern will change from frame to frame and the frame number if will need to be known in the post-production process in order to reconstruct the pixel sampling pattern for compositing purposes.
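A minimal sketch of the first two initialization options described above; the function name is illustrative only and not part of the patented Code Segments.

// Identifier of a ray instance for a frame: either the plain index i
// (correlated frames, local artifacts) or i offset by the frame number i_f
// (decorrelated frames, film-grain-like flicker).
unsigned int instanceNumber(unsigned int i, unsigned int frameNumber,
                            bool decorrelateFrames)
{
    return decorrelateFrames ? i + frameNumber : i;
}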


Generally, a computer graphics system that makes use of deterministic low-discrepancy sampling in determination of sample points will perform better than a computer graphics system that makes use of random or pseudo-random sampling, but the performance may degrade to that of a system that makes use of random or pseudo-random sampling in higher dimensions. By providing that the computer graphics system performs dependent splitting by replication, the superior convergence of low-dimensional low-discrepancy sampling can be exploited with the effect that the overall integrand becomes smoother, resulting in better convergence than with stratified random or pseudo-random sampling. Since the computer graphics system also makes use of dependent trajectory sampling by means of infinite low discrepancy sequences, consistent adaptive sampling of, for example, light sources, can also be performed.


In addition, although the computer graphics system has been described as making use of sample points generated using generalized scrambled and/or unscrambled Hammersley and Halton sequences, it will be appreciated that generally any (t, m, s)-net or (t, s)-sequence can be used.


At a more general level, the invention provides an improved quasi-Monte Carlo methodology for evaluating an integral of a function ƒ on the s-dimensional unit cube [0, 1)s. In prior methodologies, in contrast with this methodology, which will be referred to as trajectory splitting by dependent sampling, the sample points in the integration domain at which sample values for the function were generated were determined by providing the same number of coordinate samples along each dimension. However, for some dimensions of an integrand, the function ƒ will often exhibit a higher variance than for other dimensions. The invention exploits this by making use of trajectory splitting by dependent samples in critical regions.


The partial integral










$$g(x) = \int_{I^{s_2}} f(x, y)\; dy \approx \frac{1}{N_2} \sum_{j=0}^{N_2 - 1} f(x, y_j) \qquad (1.32)$$

(x and y comprising disjoint sets of the s dimensions, and x∪y comprising the set of all of the dimensions), where N2 identifies the number of samples selected for the set y of dimensions, can be defined over the portion of the integration domain that is defined by the unit cube [0, 1)s2, which, in turn, corresponds to the portion of the integration domain that is associated with the s2 dimensions. Evaluating g(x) using Equation (1.32) will effect a smoothing of the function ƒ in the s2 dimensions that are associated with the set y.


The result generated by applying Equation (1.32) can then be used to evaluate the full integral














$$\int_{I^{s_1}} \int_{I^{s_2}} f(x, y)\; dy\; dx = \int_{I^{s_1}} g(x)\; dx \approx \frac{1}{N_1} \sum_{i=0}^{N_1 - 1} \frac{1}{N_2} \sum_{j=0}^{N_2 - 1} f(x_i, y_j) \qquad (1.33)$$

where N1 identifies the number of samples selected for the set x of dimensions, that is, over the remaining dimensions of the integration domain that are associated with the s1 dimensions of the set x. If the dimension splitting x, y is selected such that the function ƒ exhibits relatively high variance over the set y of the integration domain, and relatively low variance over the set x, it will not be necessary to generate sample values for the function N1-times-N2 times. In that case, it will suffice to generate sample values only N2 times over the integration domain. If the correlation coefficient of ƒ(ξ, η) and ƒ(ξ, η′), which indicates the degree of correlation between values of the function evaluated, for the former, at (xi, yj)=(ξ, η), and, for the latter, at (xi, yj)=(ξ, η′), is relatively high, the time complexity required to evaluate the function





$f(x_0, \ldots, x_{s-1})$ on $[0,1)^s$


will be decreased.


The smoothness of an integrand can be exploited using a methodology that will be referred to as correlated sampling. Generally, that is, if correlated sampling is not used, each dimension will be associated with its own sequence when evaluating an integral. In correlated sampling, however, the same sequence can be used for all of the dimensions over the integration domain, that is














$$\frac{1}{M} \sum_{j=1}^{M} \frac{1}{N_j} \sum_{i=0}^{N_j - 1} f_j(x_{i,j}) \approx \frac{1}{M} \sum_{j=1}^{M} \int_{I^s} f_j(x)\; dx = \int_{I^s} \frac{1}{M} \sum_{j=1}^{M} f_j(x)\; dx \approx \frac{1}{N} \sum_{i=0}^{N-1} \frac{1}{M} \sum_{j=1}^{M} f_j(x_i) \qquad (1.34)$$


The methodology of trajectory splitting by dependent sampling makes use of a combination of the trajectory splitting technique described above in connection with Equations (1.32) and (1.33) with the correlated sampling methodology described in connection with Equation (1.34).


Since integrals are invariant under toroidal shifting for $z_j \in I^{s_2}$, that is,






$$S_j : I^{s_2} \to I^{s_2}, \quad y \mapsto (y + z_j) \bmod 1, \qquad \int_{I^{s_2}} g(y)\; dy = \int_{I^{s_2}} g(S_j(y))\; dy \qquad (1.35)$$


the values of the integrals also do not change. Thus, if, in Equation (1.33), the inner integral is replicated M times,

















$$\begin{aligned}
\int_{I^{s_1}} \int_{I^{s_2}} f(x, y)\; dy\; dx &= \int_{I^{s_1}} \int_{I^{s_2}} \frac{1}{M} \sum_{j=0}^{M-1} f(x, S_j(y))\; dy\; dx \\
&\approx \frac{1}{N} \sum_{i=0}^{N-1} \frac{1}{M} \sum_{j=0}^{M-1} f(x_i, S_j(y_i)) \\
&= \frac{1}{N} \sum_{i=0}^{N-1} \frac{1}{M} \sum_{j=0}^{M-1} f\!\left( x_i,\, (y_i + z_j) \bmod 1 \right) \qquad (1.36)
\end{aligned}$$



For index j, the functions ƒ(xi, Sj (yi)) are correlated, enabling the smoothness of the integrand in those dimensions that are represented by y to be exploited, as illustrated above in connection with Equation (1.19) (lens sampling), Equations (1.20) and (1.21) (area light sources) and Equation (1.29) (approximation for the diffuse illumination term). It will be appreciated that the evaluation using the replication is the repeated application of the local quadrature rule





$U_{M,s_2} := (z_j)_{j=0}^{M-1}$


shifted by random offset values yi. The use of dependent variables in this manner pays off particularly if there is some smoothness in the integrand along one or more dimensions. Splitting can be applied recursively, which yields a history tree, in which each path through the respective history tree represents a trajectory of a particle such as a photon.


The quasi-Monte Carlo methodology of trajectory splitting by dependent sampling makes use of sets of deterministic, low-discrepancy sample points both for the global quadrature rule






$U_{N,s_1+s_2} := (x_i, y_i)_{i=0}^{N-1}$


that is, integration over all of the dimensions s1+s2 comprising the entire integration domain, as well as for the local quadrature rule





$U_{M,s_2} := (z_j)_{j=0}^{M-1}$


that is, integration over the dimensions s2 of the integration domain. The methodology unites splitting and dependent sampling, exploiting the stratification properties of low-discrepancy sampling. Accordingly, it will be possible to concentrate more samples along those dimensions in which the integrand exhibits high levels of variation, and fewer samples along those dimensions in which the integrand exhibits low levels of variation, which reduces the number of sample points at which the function will need to be evaluated. If the methodology is to be applied recursively a plurality of times, it will generally be worthwhile to calculate a series of values zj that are to comprise the set $U_{M,s_2}$. In addition, the methodology may be used along with importance sampling and, if U is an infinite sequence, adaptive sampling. In connection with adaptive sampling, the adaptations will be applied in the replication independently of the sampling rate, so that the algorithm will remain consistent. The low-discrepancy sample point sets $U_{N,s_1+s_2}$ and $U_{M,s_2}$ can be chosen arbitrarily; for example, the sample point set $U_{M,s_2}$ can be a projection of the sample point set $U_{N,s_1+s_2}$. When trajectory splitting is recursively applied to build trajectory trees, generalizing the point set $U_{N,s_1+s_2}$ for the subsequent branches can be used to decorrelate the separate parts of the respective tree.
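For illustration, the following C++ sketch applies trajectory splitting by dependent sampling to the simple case s1 = s2 = 1, following Equation (1.36): each global sample (xi, yi) is replicated by M toroidal shifts zj. The function f and the point arrays are placeholders of this sketch; in the system the points would come from low-discrepancy constructions such as the (scrambled) Halton sequence.

#include <cmath>
#include <cstddef>
#include <vector>

double splitEstimate(double (*f)(double x, double y),
                     const std::vector<double>& x,   // N global samples, dimension x
                     const std::vector<double>& y,   // N global samples, dimension y
                     const std::vector<double>& z)   // M local shifts z_j
{
    const std::size_t N = x.size(), M = z.size();
    double sum = 0.0;
    for (std::size_t i = 0; i < N; ++i) {
        double inner = 0.0;
        for (std::size_t j = 0; j < M; ++j) {
            double yShifted = std::fmod(y[i] + z[j], 1.0);   // (y_i + z_j) mod 1
            inner += f(x[i], yShifted);
        }
        sum += inner / double(M);                            // local quadrature rule
    }
    return sum / double(N);                                  // global quadrature rule
}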



FIG. 5 shows a code fragment 140, referred to herein as “Code Segment 1,” in the C++ programming language for generating the positions of the jittered subpixel sample points xi. FIG. 6 shows a code fragment 142, referred to herein as “Code Segment 2,” in the C++ programming language for generating a ray tree class Ray.


It will be appreciated that a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program. Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner. In addition, it will be appreciated that the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.


With these points in mind, we next turn to image synthesis by adaptive quasi-Monte Carlo integration.


II. Image Synthesis by Adaptive Quasi-Monte Carlo Integration

Analyzing the implicit stratification properties of the deterministically scrambled Halton sequence leads to an adaptive interleaved sampling scheme that improves many rendering algorithms. Compared to uncorrelated adaptive random sampling schemes, the correlated and highly uniform sample points from the incremental Halton sequence result in faster convergence and much more robust adaptation. Since the scheme is deterministic, parallelization and reproducibility become trivial, while interleaving maximally avoids aliasing. The sampling scheme described herein is useful in a number of applications, including, for example, industrial path tracing, distribution ray tracing, and high resolution compositing.


As discussed above, the process of image synthesis includes computing the color of each pixel in the image. The pixel color itself is determined by an integral. Due to high dimensionality and unknown discontinuities of the integrand, this integral typically must be approximated using a numerical technique, such as the Monte Carlo method. The efficiency of the image synthesis process can be significantly improved by using adaptive schemes that take into account variations in the complexity of the integrands.


Analytical integration methods developed for computer graphics perform very well for small problems, i.e., low integrand dimension or untextured scenes. Approaches like discontinuity meshing or approximate analytic integration, however, utterly fail for, e.g., higher order shadow effects. Consequently, high-end rendering algorithms rely on sampling.


Starting from early computer graphics, many adaptive sampling schemes have been developed in order to control rendering effort. A number of these schemes rely on partitioning the integration domain along each axis. As long as only low-dimensional integration (e.g., pixel anti-aliasing) was of interest, the inherent curse of dimension of axis-aligned recursive refinement was not perceptible. (The term “curse of dimension” refers to the known issue that computational cost typically increases exponentially with the dimension of a problem.) These schemes are still applied today. However, due to the curse of dimension, for example, in distribution ray tracing and global illumination simulation, these schemes are applied only to the lowest dimensions of the integrands, e.g., to the image plane.


Many adaptive schemes rely on comparing single samples in order to control refinement. For example, edges are detected by comparing contrast against a threshold. Such schemes fail in two aspects. If the contrast does not indicate refinement, important contributions to the image can nevertheless be missed. On the other hand, refinement can be taken too far. This happens, for example, when sampling an infinite black-and-white checkerboard in perspective: at the horizon, refinement is driven to full depth, although the correct gray pixel color may already be obtained by one black and one white sample.


In fact, the paradigm of sample-based refinement performs function approximation of the integrand itself, although only averages of the integrand are required. This is considered by pixel-selective Monte Carlo schemes that consider the estimated variance of the estimate of the functional to be computed. However, Monte Carlo error estimation requires independent random samples, which limits the amount of uniformity of the samples and thus convergence speed.


Thinking of adaptive sampling as image processing, it is easy to identify noise or edges in an image by computing derivatives between pixels in order to trigger refinement.


By considering image synthesis as computing families of functionals instead of independent pixel values only, a powerful adaptive sampling scheme has been developed. To this end, stratified sequences of sample points are extracted from the scrambled Halton sequence. Although these points are deterministic, aliasing is avoided maximally. Incorporating tone mapping directly into the pixel integral, in addition to the high uniformity of the subsequences, yields a superior and smooth convergence. Consequently, adaptation can be controlled robustly by applying simple image processing operators to the final pixel values rather than to single samples. Since everything is deterministic, exact reproducibility as required in production is trivial. The superior performance of the new technique is described below with respect to various applications.


The scrambled Halton sequence is now described. For the purposes of the present discussion, filtering, tone mapping, and actual radiance computations are hidden in the integrand f defined on the s-dimensional unit cube. The color of a pixel is then determined by an integral











$$\int_{[0,1)^s} f(x)\; dx \approx \frac{1}{N} \sum_{j=0}^{N-1} f(x_j),$$
which is numerically approximated by averaging N function samples at the positions xj ∈ [0,1)s.


It has been demonstrated that non-adaptive quasi-Monte Carlo integration is highly efficient in computer graphics. The so-called quasi-Monte Carlo points used are of low discrepancy, meaning that due to high correlation they are much more uniformly distributed than random samples can be. Due to their deterministic nature, however, unbiased error estimation from the samples themselves is not possible, in contrast to the Monte Carlo method.


In order to take advantage of the much faster and smoother convergence of quasi-Monte Carlo integration and to obtain a reliable adaptation control, important properties of the scrambled Halton sequence are described in the following discussion. This deterministic low-discrepancy point set is easily constructed, as can be seen in the sample code discussed below.


“Stratification by radical inversion” is now described. The radical inverse













$$\Phi_b : \mathbb{N}_0 \to [0,1), \quad i = \sum_{l=0}^{\infty} a_l(i)\, b^{\,l} \;\mapsto\; \sum_{l=0}^{\infty} a_l(i)\, b^{-l-1} \qquad (2.1)$$
mirrors the representation of the index i by the digits





$a_l(i) \in \{0, \ldots, b-1\}$


in the integer base $b \in \mathbb{N}$ at the decimal point. In base b = 2, this means that











$$\Phi_2(i) \in \begin{cases} [0,\, 1/2) & \text{for } i \text{ even}, \\ [1/2,\, 1) & \text{else}. \end{cases} \qquad (2.2)$$

This observation has been generalized, and given the name periodicity. In fact, however, this property relates much more to stratification as formalized by the definition of (0,1)-sequences. Therefore, a different derivation is presented herein that stresses the actual stratification property. The index i is chosen as






$$i = j \cdot b^{\,n} + k \quad \text{for } k \in \{0, \ldots, b^{\,n} - 1\}$$

and inserted into Equation (2.1) yielding














$$\begin{aligned}
\Phi_b(i) &= \Phi_b(j \cdot b^{\,n} + k) = \sum_{l=0}^{\infty} a_l(j \cdot b^{\,n} + k)\, b^{-l-1} \\
&= \sum_{l=0}^{\infty} a_l(j \cdot b^{\,n})\, b^{-l-1} + \sum_{l=0}^{\infty} a_l(k)\, b^{-l-1} \\
&= \sum_{l=0}^{\infty} a_l(j)\, b^{-n-l-1} + \Phi_b(k) \\
&= \underbrace{b^{-n}\, \Phi_b(j)}_{\in\, [0,\, b^{-n})} + \Phi_b(k) \qquad (2.3)
\end{aligned}$$

by exploiting the additivity of the radical inverses that belong to the class of (0, 1)-sequences. The first term of the result depends on j and obviously is bounded by $b^{-n}$, while the second term is a constant offset by the radical inverse of k. Since





$k \in \{0, \ldots, b^{\,n} - 1\}$


it follows that





$\Phi_b(k) \in \{0,\; b^{-n},\; 2b^{-n},\; \ldots,\; (b^{\,n}-1)\, b^{-n}\}$


Fixing the first n digits by k, then for $j \in \mathbb{N}_0$, yields radical inverses only in the interval





$[\Phi_b(k),\; \Phi_b(k) + b^{-n})$


There are now described stratified samples from the Halton sequence. For quasi-Monte Carlo integration, multidimensional uniform samples are required. However, from Equation (2.2) it may be seen that the radical inverse is not completely uniformly distributed, i.e., cannot just replace the random number generator. Therefore, multidimensional uniform deterministic samples may be constructed, for example, by the Halton sequence





$x_i = (\Phi_{b_1}(i), \ldots, \Phi_{b_s}(i)) \in [0,1)^s$


where for the c-th component bc is the c-th prime number.


The one-dimensional observations above generalize to higher dimensions. Stratified sampling is implicitly embodied. This is seen by choosing the index









$$i = j \cdot \prod_{d=1}^{s} b_d^{\,n_d} + k \quad \text{with} \quad 0 \le k < \prod_{d=1}^{s} b_d^{\,n_d} \qquad (2.4)$$

yielding












$$\Phi_{b_c}(i) = \Phi_{b_c}\!\left( j \cdot \prod_{d=1}^{s} b_d^{\,n_d} + k \right) = b_c^{-n_c}\, \Phi_{b_c}\!\left( j \cdot \prod_{\substack{d=1 \\ d \neq c}}^{s} b_d^{\,n_d} \right) + \Phi_{b_c}(k)$$
analogous to Equation (2.3), and consequently







$$x_i \in \prod_{c=1}^{s} \left[ \Phi_{b_c}(k),\; \Phi_{b_c}(k) + b_c^{-n_c} \right)$$
for fixed k and $j \in \mathbb{N}_0$ with the choice of the index i according to Equation (2.4). It may be seen that the







$$\prod_{d=1}^{s} b_d^{\,n_d}$$
strata selected by k are disjoint and form a partition of the unit cube [0, 1)s. However, the scheme suffers from the curse of dimension, since stratifying all s dimensions results in an exponential growth of the number of strata. This may be the reason why only the first four dimensions have been stratified.


The stratification property is illustrated in FIGS. 7A and 7B, which show plots 200 and 210, respectively, of the first two components $x_i = (\Phi_2(i), \Phi_3(i))$ of the Halton sequence for $0 \le i = j \cdot 6 + k < 2^3 \cdot 3^3 = 216$. The stratum with the emphasized points contains all indices i with k = 1. Scaling the strata to be square, i.e.,






$$x_i \to x_i' = \left( 2^1 \cdot \Phi_2(i),\; 3^1 \cdot \Phi_3(i) \right)$$


does not affect discrepancy, since it is a separable mapping.


Deterministic scrambling is now described. The Halton sequence exposes superior uniformity properties, i.e., low discrepancy and minimum distance property. However, low-dimensional projections exhibit correlation patterns. FIGS. 8A and 8B show a pair of low-dimensional projections 220 and 230. FIG. 8A shows the Halton sequence for the points





$(\Phi_{17}(i), \Phi_{19}(i))_{i=0}^{255}$.



FIG. 8B shows the scrambled Halton sequence for the points





$(\Phi'_{17}(i), \Phi'_{19}(i))_{i=0}^{255}$


As illustrated by FIGS. 8A and 8B, scrambling can significantly improve uniformity.


While usually not perceptible, this low-dimensional correlation often interferes with the integrands in computer graphics that have low-dimensional structure. For example, correlation can slow down convergence in a sequence of two-dimensional scattering events as used in path tracing.


One remedy is to scramble the Halton sequence. The radical inverse is replaced by the scrambled radical inverse











$$\Phi'_b : \mathbb{N}_0 \to [0,1), \quad i = \sum_{\iota=0}^{\infty} a_\iota(i)\, b^{\,\iota} \;\mapsto\; \sum_{\iota=0}^{\infty} \pi_b\!\left( a_\iota(i) \right) b^{-\iota-1} \qquad (2.5)$$
yielding a scrambled Halton sequence






$x'_i := (\Phi'_{b_1}(i), \ldots, \Phi'_{b_s}(i))$


The scrambling permutations πb applied to the digits aι(i) are determined by a recursion starting with the identity π2 = (0, 1). If b is odd, πb is constructed from πb−1 by incrementing each value greater than or equal to (b−1)/2 and inserting (b−1)/2 in the middle. Otherwise, πb is constructed from πb/2 by concatenating 2πb/2 and 2πb/2+1.
This algorithm yields





π2 = (0, 1)
π3 = (0, 1, 2)
π4 = (0, 2, 1, 3)
π5 = (0, 3, 2, 1, 4)
π6 = (0, 2, 4, 1, 3, 5)
π7 = (0, 2, 5, 3, 1, 4, 6)
π8 = (0, 4, 2, 6, 1, 5, 3, 7)
The scrambling improves the uniformity. This is especially visible for low-dimensional projections as illustrated in FIGS. 8A and 8B. In addition, the minimum distance of samples is increased, which indicates an increased uniformity. An implementation is described below.
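A minimal C++ sketch of the permutation recursion described above and of the scrambled radical inverse Φ′b of Equation (2.5); it reproduces π2 = (0, 1), π3 = (0, 1, 2), π4 = (0, 2, 1, 3), and so on. Recomputing the permutation on every call is done here only for brevity; an implementation would tabulate it. The function names are illustrative only.

#include <vector>

std::vector<int> scramblingPermutation(int b)
{
    if (b == 2) return std::vector<int>{0, 1};
    std::vector<int> p;
    if (b % 2 == 1) {                              // odd base: from pi_{b-1}
        p = scramblingPermutation(b - 1);
        int m = (b - 1) / 2;
        for (int& v : p)
            if (v >= m) ++v;                       // increment values >= (b-1)/2
        p.insert(p.begin() + p.size() / 2, m);     // insert (b-1)/2 in the middle
    } else {                                       // even base: from pi_{b/2}
        std::vector<int> h = scramblingPermutation(b / 2);
        for (int v : h) p.push_back(2 * v);        // 2 * pi_{b/2}
        for (int v : h) p.push_back(2 * v + 1);    // 2 * pi_{b/2} + 1
    }
    return p;
}

// Scrambled radical inverse: the permutation is applied to every digit.
double scrambledRadicalInverse(unsigned int i, unsigned int b)
{
    const std::vector<int> pi = scramblingPermutation(int(b));
    double digitWeight = 1.0 / b, result = 0.0;
    while (i > 0) {
        result += digitWeight * pi[i % b];
        i /= b;
        digitWeight /= b;
    }
    return result;
}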


The observations from Equation (2.3) and from the above discussion transfer to the scrambled Halton sequence in a straightforward way. This can be seen for two-dimensional stratification, since π2 and π3 are identities and consequently





$\Phi'_2 \equiv \Phi_2$ and $\Phi'_3 \equiv \Phi_3$.


There is now described a technique for bias elimination by randomization. By construction, radical inversion generates only rational numbers in $\mathbb{Q} \cap [0,1)$, as set forth in Equations (2.1) and (2.5). Nevertheless, it can be shown that quasi-Monte Carlo integration is biased but consistent for Riemann-integrable functions.


If required, the bias can be removed by randomly shifting the deterministic points






$$x'_i \to x'_i + \xi \bmod 1 := \left( \Phi'_{b_1}(i) + \xi^{(1)} \bmod 1,\; \ldots,\; \Phi'_{b_s}(i) + \xi^{(s)} \bmod 1 \right)$$


of the scrambled Halton sequence modulo one, where





$\xi = (\xi^{(1)}, \ldots, \xi^{(s)}) \in [0,1)^s$


is a vector of s independent realizations of uniform random numbers on the unit interval. The resulting minimally randomized estimator has been analyzed, and a variance reduction of








$$\sigma^2\!\left( \frac{1}{N} \sum_{i=0}^{N-1} f\!\left( x_i + \xi \bmod 1 \right) \right) = O\!\left( \frac{\ln^{2s} N}{N^2} \right)$$
has been proved for square-integrable functions. Multiple realizations of this and other randomization techniques allow the estimation of the variance in order to control the integration error. Since the error is controlled in a different way here, one instance of such a randomization is sufficient to cancel the bias. In fact, however, the bias of the scrambled Halton sequence is negligible compared to the randomly shifted points, and consequently the randomization becomes negligible as well.


Considering random scrambling leads to a second important observation. Often the uniformity of the samples is improved; however, some realizations can also decrease the uniformity of a point set by, e.g., lowering the mutual minimum distance. Our experiments show that random scrambling only marginally changes the uniformity of the scrambled Halton sequence, indicating that the deterministic permutations πb, which are themselves a subset of the permutations available for random scrambling, already are a very good choice. In addition, implementing the scrambled Halton sequence, as described below, is simpler than random scrambling.


From the above observations, it can be concluded that randomization is not necessary for the presently described applications. The structure of the deterministic scrambled Halton sequence is described below with respect to specific implementations.


A new technique for image synthesis is now described. Image synthesis includes computing a matrix of pixel colors













$$I_{m,n} := \int_{[0,1)^s} f_{m,n}(x)\; dx \approx \sum_{j=0}^{N_{m,n}-1} \omega_{j,m,n}\, R_\alpha\!\left( L(x_{j,m,n}) \right) \qquad (2.6)$$

The function ƒm, n to be integrated for the pixel at position (m, n) usually contains discontinuities and can be of high dimension. Efficient general analytical solutions are not available and consequently the integrals have to be numerically approximated. In addition the complexity of the integrands varies so that adaptive integration pays off.


Deterministic anti-aliasing is now described. The first component $x_i^{(1)}$ is scaled by $b_1^{n_1}$ and the second $x_i^{(2)}$ by $b_2^{n_2}$, as illustrated in FIGS. 7A and 7B. Thus, a $b_1^{n_1} \times b_2^{n_2}$ stratified sample pattern is obtained that is periodically tiled over the image plane. FIG. 9 shows a plot 240 of the tiled sample pattern.


Identifying each stratum with a pixel, the identification k is determined, for example by a table lookup, from the pixel coordinates, and a Halton sequence restricted to that pixel is obtained from






$$i = j \cdot b_1^{n_1} \cdot b_2^{n_2} + k \quad \text{for } j \in \mathbb{N}_0$$

This restriction means fixing the first n1 and n2 digits of the first and second component, respectively, and does not change the superior uniformity properties of the Halton points. Consequently, the improved and smooth convergence of deterministic low discrepancy sampling is preserved. Convergence is improved substantially further by applying tone-mapping techniques that in fact bound the integrands. Then, adaptation triggered by image processing operators becomes very reliable. The number of strata $b_1^{n_1} \times b_2^{n_2}$ is determined by n1 and n2, which are chosen large enough so that the strata covered by adjacent pixel reconstruction filters do not contain repeated patterns.
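A minimal sketch of the per-pixel enumeration described above, assuming the simple case n1 = n2 = 1 with a 2×3 tiling of the image plane; in general, the stratum offset k is obtained by a table lookup from the pixel coordinates. The function names and table layout are illustrative only.

// Hypothetical stratum lookup table for a 2x3 tiling of the image plane.
unsigned int stratumFromPixel(unsigned int pixelX, unsigned int pixelY,
                              const unsigned int table[2][3])
{
    return table[pixelX % 2][pixelY % 3];
}

// Global index of the j-th sample restricted to the stratum (pixel) k:
// i = j * b1^n1 * b2^n2 + k.
unsigned int sampleIndex(unsigned int j, unsigned int k,
                         unsigned int strataX, unsigned int strataY)
{
    return j * strataX * strataY + k;
}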


While for Monte Carlo integration it is easy to control adaptation by estimating the integration error from the samples, this is not possible for the correlated samples of quasi-Monte Carlo integration.


A pixel is refined, whenever a refinement criterion is met. As an example, a simple criterion indicates refinement by checking the image gradient





$\|\nabla I_{m,n}\| > T \cdot N_{m,n}^{-\alpha}$


against a threshold T. The exponent α∈[0.5, 1.5] can be used to adapt to the speed of convergence. Nm,n is the number of samples used for pixel (m, n).


As known from music equipment, clipping signals causes distortion. Therefore, instead of clipping, compression is used, meaning that an upper bound on the signal is achieved by using a continuously differentiable function. In addition, single signals are compressed before being mixed.


According to an aspect of the invention, the same is done for image synthesis. In Equation (2.6), the luminance L is compressed by








$$R_\alpha : \mathbb{R}_0^{+} \to [0,1], \quad L \mapsto \begin{cases} L & L < \alpha \\[4pt] \alpha + (1-\alpha)\,\dfrac{L - \alpha}{1 + L - \alpha} & \text{else} \end{cases}$$
before averaging. By α∈[0, 1], it is possible to blend between linear transfer that is clipped at α=1 and compression for α=0. The derivative of








$\dfrac{x}{1+x}$ at $x = 0$ is 1, and thus the mapping Rα is continuously differentiable. Depending on the output media, many other response curve mappings Rα are possible, such as, for example, the sRGB compression for video displays or film material characteristics.
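A minimal sketch of the compression mapping Rα as reconstructed above: linear transfer below α and smooth compression above, so that the output stays in [0, 1]. The function name is illustrative only.

// Compress a non-negative luminance value L into [0, 1].
double compressLuminance(double L, double alpha)
{
    if (L < alpha)
        return L;                                                  // linear part
    return alpha + (1.0 - alpha) * (L - alpha) / (1.0 + L - alpha); // compressed part
}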


Performing the tone mapping, i.e., the compression, directly in the quadrature is a remedy to the overmodulation problems from, e.g., reflections or light source samples, and consequently convergence is increased because the integrands are now bounded. Thus, the noise level is reduced, in contrast to the gradient domain method, and advanced filter kernel construction becomes unnecessary.


In applying the above-described techniques to synthesize an image, more samples are used to resolve certain details in the final image. Adaptive samples are generated by refining the Halton sequence in the first two dimensions by some contrast criterion in screen space.


The above-described technique has a number of different applications, including high-resolution compositing, frameless rendering, and parallelization.


Based on the stratification properties of radical inversion, a new adaptive integration algorithm for image synthesis has been presented. Using the stratification for interleaving, aliases are efficiently suppressed, while the smooth convergence of low-discrepancy sequences allows one to use a very efficient and robust termination criterion for adaptation. Since all sample positions are deterministic, storing the sampled function values allows high-resolution compositing. In a similar way, the convergence can be improved by using the microstructure of a (t, s)-sequence, such as, for example, the Sobol sequence. While the described technique benefits from stratification in only two dimensions, it is not suited for general high-dimensional stratification due to the curse of dimension.



FIG. 10 shows a plot 250 illustrating an interleaved adaptive supersampling technique, according to a further aspect of the invention, which is tiled over the whole screen. The plotted points are represented by:





$(\Phi_2(i), \Phi_3(i)) = P(i)$


The image is scaled by (1, 1.5), and a 2×3 stratification is used. As shown in FIG. 10, the strata are addressed by an offset, generating a point sequence in a subpixel. The points are enumerated by the offset of the desired stratum plus the number of strata multiplied by the point number. In the illustrated section of the plot, the point numbers are:






$$i = 6j + 3, \qquad i \in \{0, \ldots, 1295\}$$
In the equation i = 6j + 3, j is multiplied by 6 because of the 2×3 stratification. Also, i ∈ {0, . . . , 1295}, since $1296 = 2^4 \cdot 3^4$.


Using the illustrated interleaving technique, adjacent pixels are sampled differently, but strictly deterministically.


III. Additional Examples and Points Regarding Quasi-Monte Carlo Integration

Computer graphics textbooks teach that sampling images using deterministic patterns or lattices can result in aliasing. Aliasing can only be avoided by random, i.e., independent sampling of images. Thus, textbooks typically recommend random samples with blue noise characteristic. However, these types of samples are highly correlated due to their maximized minimum mutual distance. Contrary to the textbook approach, the systems and techniques described herein are based on parametric integration by quasi-Monte Carlo methods, and are strictly deterministic.


Image synthesis is the most visible part of computer graphics. One aspect of image synthesis is concerned with the synthesis of physically correct images. Thus, one image synthesis technique includes identifying light paths that connect light sources and cameras and summing up their respective contributions. Another aspect of image synthesis is concerned with non-photorealistic rendering including, for example, the simulation of pen strokes or watercolor.


Image synthesis poses an integro-approximation problem for which analytical solutions are available in exceptional cases only. Therefore numerical techniques have to be applied. Prior art approaches typically use elements of classical Monte Carlo integration, in which random points are used to numerically approximate the solution to an image integral. However, as described herein, it is significantly more efficient to use quasi-Monte Carlo integration, in which sequences of quasirandom numbers are used to compute the solution to an image integral. The presently described systems and techniques are useful, for example, in a motion picture, which typically requires an extremely large number of high-quality images.


The underlying mathematical task is to determine the intensity I(k, l, t, λ), where (k, l) is the location of a pixel on the display medium. For the sake of clarity, the dependency on the time t and on the wavelength λ of a color component of a pixel is omitted in the sequel.


Determining the intensity of a single pixel I(k, l), i.e., measuring the light flux through a pixel, requires computing a functional of the solution of the radiance transport integral equation







$$L(x, \omega) = L_e(x, \omega) + \underbrace{\int_{S^2} L\!\left( h(x, \omega_i),\, -\omega_i \right) f(\omega_i, x, \omega)\, \cos\theta_i \; d\omega_i}_{=:\ (T_f L)(x,\, \omega)}$$
As a Fredholm integral equation of the second kind, the radiance L in the point x into the direction ω is the sum of the source radiance Le and the reflected and transmitted radiance TfL, which is an integral over the unit sphere S2. The cosine of the angle θi between the surface normal in x and the direction of incidence ωi accounts for the perpendicular incident radiance only, which is colored by the surface interface properties given by f. Finally, h determines the closest point of intersection of a ray from x into the direction ωi. The extension to participating media, which we omit here for lack of space, exposes the same structure.


Simultaneously computing all pixels






$$I(k,l) := \int_{\partial V} \int_{S^2} R_\alpha\!\left( L(x,\omega),\, k, l, x, \omega \right) d\omega\, dx$$


of an image is an integro-approximation problem. The mapping Rα represents the mathematical description of the camera and its response to the radiance L. Rα often is non-linear in order to be able to compensate for the limited dynamic range of most display media.


In a physically correct setting the norm ∥Tf∥ must be bounded by 1 in order to guarantee energy conservation. Then the Neumann-series converges and the computation of the radiance






$$L = S L_e := \sum_{i=0}^{\infty} T_f^{\,i} L_e$$
can be reduced to an infinite sum of integrals with increasing dimension. The single integrals $T_f^{\,i} L_e$ have a repetitive low dimensional structure inherited from stacking transport operators. Obviously lower powers of the transport operator are likely to be more important. Real world light sources Le are bounded and consequently the radiance L uniformly can be bounded by some b > 0. In addition real world radiance





$L(y, \omega, t, \lambda) \in L_b^2$


is a signal of finite energy and thus must be square integrable.


However, often singular surface properties, as for example specular reflection, are modeled by





$(T_f L)(x, \omega) := L\!\left( h(x, \omega'),\, -\omega' \right)$


using Dirac's δ distribution, where ω′ ≡ ω′(ω) is the direction of specular reflection. Then the operator norm of the solution operator can even become unbounded and the Neumann series can diverge. The additional problem of insufficient techniques is caused by





$\delta \notin L_b^2$


because some transport paths cannot be efficiently sampled and force the need of biased approximations like, e.g., the photon mapping algorithm for rendering caustics.


Both the radiance L and the intensity I are non-negative and piecewise continuous, where the discontinuities cannot be efficiently predicted. The actual basis of the function class to represent and approximate the intensity I (k, l, t, λ) in fact is determined by the display medium or image storage format, e.g., an interleaved box basis for the color components of TFT displays, cosines for JPEG compressed images, etc.


Due to the lack of efficient analytical solutions, rendering algorithms reduce image synthesis to numerical integro-approximation. Simulating a camera with anti-aliasing, motion blur, and depth of field already contributes 5 dimensions to the integration domain of the intensity I. Area light sources and each level of reflection contribute another 2 dimensions. Consequently the mathematical problem is high-dimensional, discontinuous, and in $L_b^2$. Since tensor product techniques will fail due to dimensionality and a lack of continuity, Monte Carlo and quasi-Monte Carlo methods are the obvious choice.


Monte Carlo methods use random sampling for estimating integrals by sample means. Quasi-Monte Carlo methods look like Monte Carlo methods; however, they use deterministic points for sampling an integrand. In contrast to random samples, the specifically designed deterministic point sets are highly correlated, which allows for a much higher uniformity and results in a faster convergence.


Real random numbers on the unit interval are characterized by independence, unpredictability, and uniformity. For Monte Carlo integration the independence is required to prove error bounds and the uniformity is required to prove the order of convergence. Since real random numbers are expensive to generate, usually efficient deterministic algorithms are used to simulate pseudorandom numbers, which then of course are perfectly predictable but seemingly independent. However, the independence cannot be observed any longer after averaging the samples.


Quasi-Monte Carlo integration is based on these observations. By neglecting independence and unpredictability it is possible to construct deterministic points, which are much more uniform than random number samples can be. There exist a lot of constructions for such deterministic point sets Pn = {x0, . . . , xn−1} ⊂ [0, 1)s, which are based on (1) radical inversion based point sets; and (2) rank-1 lattice points:


(1) Radical inversion based point sets determine samples by










$$x_i = \left( \frac{i}{n},\; \Phi_{b_1}(i),\; \ldots,\; \Phi_{b_{s-1}}(i) \right)$$

where














$$\Phi_b : \mathbb{N}_0 \to [0,1), \quad i = \sum_{\iota=0}^{\infty} a_\iota(i)\, b^{\,\iota} \;\mapsto\; \sum_{\iota=0}^{\infty} a_\iota(i)\, b^{-\iota-1} \qquad (3.1)$$
is the radical inverse in an integer base b. The digit aj(i) is the j-th digit of the index i represented in base b. The Hammersley point set is obtained by choosing bc as the c-th prime number. The uniformity of these points has been improved by applying permutations to the aj(i) before computing the inverse. The permutation





$\pi_b(a_j(i)) = a_j(i) + j \bmod b$


has been used, and other permutations have been developed, generalizing and improving on these results. Choosing all bc=b along with an appropriate set of mappings applied to the digits aj(i) yields the construction and theory of (t, m, s)-nets. There has been a lot of research as to how to efficiently compute radical inverses. One method is to tabulate the sum of the least significant T digits and to reuse them while generating the points










$$\sum_{j=0}^{\infty} \pi_b(a_j(i))\, b^{-j-1} = \underbrace{\sum_{j=T}^{\infty} \pi_b(a_j(i))\, b^{-j-1}}_{\text{changes only every } b^T\text{-th time}} + \underbrace{\sum_{j=0}^{T-1} \pi_b(a_j(i))\, b^{-j-1}}_{\text{table of size } b^T}$$

This method has been developed in the context of scrambled radical inversion. Rather than using Gray-codes, this method generates the points in their natural order at comparable speed.
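A minimal C++ sketch of the tabulation idea just described: the contribution of the T least significant digits is precomputed in a table of size b^T, and the contribution of the higher digits is scaled by b^−T. Scrambling permutations and the caching of the high-digit part across b^T consecutive indices are omitted for brevity; the class name is illustrative only.

#include <vector>

class TabulatedRadicalInverse {
public:
    TabulatedRadicalInverse(unsigned int base, unsigned int T)
        : base_(base), period_(1)
    {
        for (unsigned int t = 0; t < T; ++t) period_ *= base;   // b^T
        table_.resize(period_);
        for (unsigned int k = 0; k < period_; ++k)
            table_[k] = inverse(k);          // sum over the T least significant digits
    }

    // Radical inverse of index i: low digits from the table,
    // high digits recomputed and scaled by b^-T.
    double operator()(unsigned int i) const
    {
        return table_[i % period_] + inverse(i / period_) / double(period_);
    }

private:
    double inverse(unsigned int i) const
    {
        double digitWeight = 1.0 / base_, result = 0.0;
        while (i > 0) {
            result += digitWeight * (i % base_);
            i /= base_;
            digitWeight /= base_;
        }
        return result;
    }

    unsigned int base_, period_;
    std::vector<double> table_;
};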


(2) Rank-1 lattice points







$$x_i = \frac{i}{n} \left( 1,\, g_1,\, \ldots,\, g_{s-1} \right) \bmod 1$$
are faster to generate than radical inversion based points. Their quality depends on the integer generator vector





$(1, g_1, \ldots, g_{s-1}) \in \mathbb{N}^s$


However, the construction of good generator vectors is not obvious. In order to reduce the search space, the generator vectors have been determined by only one parameter a with $g_i = a^i$ (the Korobov form). Higher rank lattices can be constructed by linear combinations of rank-1 lattices.
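A minimal sketch of rank-1 lattice points in the one-parameter (Korobov) form discussed above, with generator components a^c mod n; the parameter a is assumed to be given, for example found by a search for maximum minimum distance as discussed later in the text. The function name is illustrative only.

#include <vector>

// i-th point of an n-point Korobov rank-1 lattice in s dimensions:
// x_i = i/n * (1, a, a^2, ..., a^{s-1}) mod 1.
std::vector<double> korobovLatticePoint(unsigned int i, unsigned int n,
                                        unsigned int a, unsigned int s)
{
    std::vector<double> x(s);
    unsigned long long g = 1;                                  // g_0 = 1, g_c = a^c mod n
    for (unsigned int c = 0; c < s; ++c) {
        x[c] = double((static_cast<unsigned long long>(i) * g) % n) / double(n);
        g = (g * a) % n;
    }
    return x;
}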


Both principles can be generalized to yield sequences of points, which allow for adaptive sampling without discarding previously taken samples, however, at the price of a slight loss of uniformity: the Halton sequence and its variations corresponding to the Hammersley points, (t, s)-sequences containing (t, m, s)-nets, and extensible lattice rules containing lattices.


The above constructions yield rational numbers in the unit interval. It is especially interesting to use the base b = 2 and $n = 2^m$ points, because then the points can be represented exactly in the actual machine numbers as defined by the ANSI/IEEE 754-1985 standard for binary floating point arithmetic.


The different constructions of the previous section in fact have one common feature: They induce uniform partitions of the unit cube. This kind of uniformity has been characterized as follows:


Definition 1. Let (X, β, μ) be an arbitrary probability space and let ℳ be a nonempty subset of β. A point set Pn of n elements of X is called (ℳ, μ)-uniform if











$$\sum_{i=0}^{n-1} \chi_M(x_i) = \mu(M) \cdot n \quad \text{for all } M \in \mathcal{M},$$
where $\chi_M(x_i) = 1$ if $x_i \in M$, and zero otherwise.


Examples of (M, μ)-uniform point sets are samples from the Cartesian product midpoint rule and radical inversion based points. In addition, rank-1 lattices are also (M, μ)-uniform. The Voronoi-diagram of a lattice partitions the unit cube into n sets of identical shape and volume 1/n. This underlines that for (M, μ)-uniformity all μ(M) must have the same denominator n.


The function classes of computer graphics imply the use of the probability space





$([0,1)^s,\, \beta,\, \lambda_s)$


with the Borel-sets β and the s-dimensional Lebesgue-measure λs.


A sequence of point sets is uniformly distributed if and only if its discrepancy vanishes in the limit. The deterministic constructions sketched in the previous section can obtain so-called low discrepancy, which, roughly speaking, vanishes with 1/n, while independent random points can only obtain roughly $1/\sqrt{n}$ and points from the Cartesian product midpoint rule even only $1/\sqrt[s]{n}$.


There are some facts about discrepancy that make it problematic. Discrepancy is an anisotropic measure, because its concept is based on axis-aligned boxes. Consequently, discrepancy is influenced by rotating point sets. While samples from the Cartesian product midpoint rule result in bad discrepancy, lattice points from the Fibonacci lattices have low discrepancy, although some of them are just rotated rectangular grids. Discrepancy is not even shift-invariant since shifting a point set on the unit torus also changes discrepancy.


Definition 1, above, supports partitions which are not axis-aligned, as for example the Voronoi-diagram of a rank-1 lattice. Maximum uniformity in this sense can be obtained by selecting the points such that the regions of the Voronoi-diagram approximate spheres as much as possible, i.e., by maximizing the mutual minimum distance








$$d_{\min}(P_n) := \min_{0 \le i < n}\; \min_{i < j < n} \left\| x_j - x_i \right\|_T$$
among all sample points in Pn. ∥•∥T is used to denote the Euclidean distance on the unit torus. The minimum distance measure is isotropic and shift-invariant thus overcoming these disadvantages of discrepancy.
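A minimal sketch of the mutual minimum distance dmin(Pn) with the toroidal norm ∥•∥T, i.e., distances measured with wrap-around in each coordinate. The function names are illustrative only.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Euclidean distance on the unit torus: each coordinate difference wraps around.
double toroidalDistance(const std::vector<double>& a, const std::vector<double>& b)
{
    double sum = 0.0;
    for (std::size_t d = 0; d < a.size(); ++d) {
        double diff = std::fabs(a[d] - b[d]);
        diff = std::min(diff, 1.0 - diff);
        sum += diff * diff;
    }
    return std::sqrt(sum);
}

// Mutual minimum distance d_min over all pairs of sample points.
double minimumDistance(const std::vector<std::vector<double>>& points)
{
    double dMin = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < points.size(); ++i)
        for (std::size_t j = i + 1; j < points.size(); ++j)
            dMin = std::min(dMin, toroidalDistance(points[i], points[j]));
    return dMin;
}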



FIGS. 11A-H show a series of plots 260-330 illustrating classical quasi-Monte Carlo points for n=16 (top row) and n=64 (bottom row) along with their mutual minimum distance dmin. The rank-1 lattice has been selected such that its minimum distance is maximal. It is interesting to observe that the constructions with better discrepancy have larger minimum distance, too, as can be seen for the Hammersley points, the Sobol sequence, and the Larcher-Pillichshammer points. It also can be observed that the minimum distance of the Halton sequence, with permutations, is maximized as compared to the original Halton sequence.


The rank-1 lattices in FIGS. 11D and 11H are Korobov lattices, where the parameter a has been chosen to maximize the minimum distance. The rank-1 lattice at n=16 points in FIG. 11D in fact can be generated as a (t, 4, 2)-net in base b=2. This is not possible for the quality parameter t=0, because at least two points lie on one line parallel to the x-axis. In this case the best quality parameter prevents the points from reaching maximum minimum distance.


Similarly, postulating gcd(gi, n) = 1 restricts the search space for generator vectors of lattices in such a way that the lattices with maximum minimum distance cannot be found. Forcing Latin hypercube properties by t = 0 or gcd(gi, n) = 1, respectively, may be useful in some situations, but prevents the sample points from optimally covering the unit torus in the minimum distance sense.


Maximizing the minimum distance for generating highly uniform point sets is a principle of nature. For example, the distribution of the receptors in the retina is grown that way. Algorithmically, this principle is known as "Lloyd's relaxation scheme," according to which samples with identical charges are placed on the unit torus and then the points are allowed to repel each other until they reach some equilibrium. From a mathematical point of view, the points are moved to the center of gravity of their Voronoi cells during relaxation. The convergence of this scheme has been improved quadratically. Note that rank-1 lattices are invariant under this kind of relaxation scheme.


Point sets selected by maximizing the mutual Euclidean distance on the unit torus induced by ∥•∥T have the advantage that they can be tiled seamlessly in order to fill s-dimensional space. This property is intrinsic to lattices. However, it is not true for radical inversion based points in general. While the points by Larcher and Pillichshammer tile seamlessly, the Hammersley points do not. The norm ∥•∥T in fact should be a weighted norm, which includes the size of the integration domain. FIGS. 12A-C show a series of drawings 340-360, illustrating selection of lattices by maximum minimum distance in the unit cube (FIG. 12A); in the unit cube scaled to the integration domain (FIG. 12B); and in the integration domain (FIG. 12C). Considering the scale of the integration domain yields more uniform points. In particular, it will be seen that maximizing the minimum distance on the unit cube does not imply nice uniformity when scaling the domain of integration.


It is common knowledge that quasi-Monte Carlo integration outperforms Monte Carlo integration in many applications. However, the classical error bound theorems often do not fit the function classes of the application.


The Koksma-Hlawka inequality deterministically bounds the integration error by the product of the discrepancy of the sample points used to determine the sample mean and the variation of the integrand in the sense of Hardy and Krause. The variation can be considered as the remainder of trying to obtain discrepancy as a factor in the error bound.


The class of functions of bounded variation is impractical in certain situations. For example, discontinuities that are not aligned with the coordinate axes already yield infinite variation. The error bound thus becomes useless in already simple settings of computer graphics such as an edge










f(x, y) = { 1 for y > x, 0 else }   (3.2)


in the unit square, for which the variation is unbounded. Using other notions of discrepancy, such as for example isotropic discrepancy, it is possible to find an error bound that works for this case. However, it becomes far too pessimistic in higher dimensions.


Similar to the Koksma-Hlawka inequality there exist deterministic error bounds for integration by lattices. The function class used requires periodic functions and imposes certain constraints on the Fourier coefficients of the integrand, which do not apply for discontinuous functions as used in computer graphics.


In a vast number of publications, experiments give numerical evidence that quasi-Monte Carlo methods outperform Monte Carlo methods in practice; however, in the majority of cases classical theorems cannot explain the observed results. The main reason is that general discontinuities cannot be accounted for by the classical error bounds.


As stated above, image synthesis is an integro-approximation problem in Lb2, and quasi-Monte Carlo integro-approximation has in fact been used successfully in computer graphics. Therefore, the following theorem may be generalized in the sense of parametric integration:


Theorem 1. Let (X, β, μ) be an arbitrary probability space and let M={M1, . . . , Mk} be a partition of X with Mj∈β for 1≦j≦k. Then for any (M, μ)-uniform point set P={x0, . . . , xn−1} and any bounded function ƒ, which restricted to X is μ-integrable, we have











∥(1/n)·Σ_{i=0}^{n−1} f(x_i, y) − ∫_X f(x, y) dμ(x)∥ ≦ Σ_{j=1}^{k} μ(M_j)·∥sup_{x∈M_j} f(x, y) − inf_{x∈M_j} f(x, y)∥

for any suitable norm ∥·∥.


This theorem may be proved as follows: For all y∈Y, consider an arbitrary stratum Mj∈M. Then,










Σ_{i=0}^{n−1} χ_{M_j}(x_i)·inf_{x∈M_j} f(x, y) ≦ Σ_{i=0, x_i∈M_j}^{n−1} f(x_i, y) ≦ Σ_{i=0}^{n−1} χ_{M_j}(x_i)·sup_{x∈M_j} f(x, y)


which implies








μ(M_j)·inf_{x∈M_j} f(x, y) ≦ (1/n)·Σ_{i=0, x_i∈M_j}^{n−1} f(x_i, y) ≦ μ(M_j)·sup_{x∈M_j} f(x, y)


because P is a (M, μ)-uniform point set. Similarly,








μ(M_j)·inf_{x∈M_j} f(x, y) ≦ ∫_{M_j} f(x, y) dμ(x) ≦ μ(M_j)·sup_{x∈M_j} f(x, y)


From the latter two sets of inequalities it follows that














|(1/n)·Σ_{i=0, x_i∈M_j}^{n−1} f(x_i, y) − ∫_{M_j} f(x, y) dμ(x)| ≦ μ(M_j)·(sup_{x∈M_j} f(x, y) − inf_{x∈M_j} f(x, y))


Since M is a partition of X,












(1/n)·Σ_{i=0}^{n−1} f(x_i, y) − ∫_X f(x, y) dμ(x)
 = (1/n)·Σ_{j=1}^{k} Σ_{i=0, x_i∈M_j}^{n−1} f(x_i, y) − Σ_{j=1}^{k} ∫_{M_j} f(x, y) dμ(x)
 = Σ_{j=1}^{k} ((1/n)·Σ_{i=0, x_i∈M_j}^{n−1} f(x_i, y) − ∫_{M_j} f(x, y) dμ(x))


Using the previous inequality and applying the norm ∥•∥ to both sides of the resulting inequality yields the desired bound.


By the omission of y, the norm reduces to the absolute value, and what remains is the original theorem and its proof, stated as the following Theorem 2:


Theorem 2. Let (X, β, μ) be an arbitrary probability space and let M={M1, . . . , Mk} be a partition of X with Mj∈β for 1≦j≦k. Then for any (M, μ)-uniform point set P={x0, . . . , xn−1} and any bounded μ-integrable function ƒ on X we have












|∫_X f(x) dμ(x) − (1/n)·Σ_{i=0}^{n−1} f(x_i)| ≦ Σ_{j=1}^{k} μ(M_j)·(sup_{x∈M_j} f(x) − inf_{x∈M_j} f(x))


By using the concept of (M, μ)-uniform point sets instead of discrepancy, proofs become simpler and results are more general, compared with earlier approaches. With (X, β, μ)=([0, 1)s, β, λs) both theorems are applicable in the setting of computer graphics.


For the example above, a deterministic error bound of O(n−1/2) can be obtained by selecting a (M, μ)-uniform point set with k=n. The difference of the supremum and the infimum can only be one in the O(n1/2) sets of the partition that are crossed by the discontinuity; otherwise it must be zero. With








μ(M_j) = 1/n,


the bound is obtained. It will be seen that this argument does not use probabilistic arguments for a deterministic algorithm.


Quasi-Monte Carlo methods are biased, because they are deterministic, but consistent, because they asymptotically converge to the right solution. Randomizing these algorithms allows for unbiased estimators and for unbiased error estimators.


It is useful to consider the method of dependent tests

















∫_{[0,1)^s} f(x, y) dx = ∫_{[0,1)^s} Σ_{i=0}^{n−1} w_i(x, y)·f(R_i(x), y) dx ≈ Σ_{i=0}^{n−1} w_i(ω, y)·f(R_i(ω), y)   (3.3)


by first applying an equivalence transformation, which does not change the result, and then using one random sample ω in order to obtain an unbiased estimate.


This formulation is able to cover a wide range of techniques that increase the efficiency of the method of dependent tests. For example, random scrambling of arbitrary points, random translation on the unit torus, random padding of arbitrary point sets, stratification and stratification induced by rank-1 lattices, trajectory splitting, and many more techniques easily can be formulated by a tuple of replications Ri with associated weights wi.


Omitting y, the special case with equal weights








w_i(x) = 1/n,


where for fixed ω the set of points





(R_i(ω))_{i=0}^{n−1} is of low discrepancy, is defined to be randomized quasi-Monte Carlo integration.


Repeating independent realizations allows one to estimate the error of the approximation in an unbiased way. However, some convergence is sacrificed, since independent random samples cannot be as uniformly distributed as correlated samples can be. This is especially noticeable in the setting of computer graphics, where small numbers of samples are used.


Anti-aliasing is a central problem in computer graphics. FIG. 13 is a computer-generated image 370 of an infinite plane with a checkerboard texture that illustrates various difficulties. While in the front of the checkerboard, the fields clearly can be distinguished, extraneous patterns appear towards the horizon. While it is simple to compute the color of a pixel as an average as long as the checker board cells are clearly distinguishable, this is no longer possible at the horizon, where through one pixel infinitely many cells can be seen. By common sense, it would be expected that the pixels at the horizon would be gray, i.e., the average of black and white. However, surprisingly, zooming into a pixel reveals that the areas of black and white tiles are not equal in general. This means that no matter what quadrature is used, the horizon will not appear gray, but somehow patterned.


The lens of the human eye is not perfectly transparent and thus slightly blurs the light before it reaches the retina. The amount of blur perfectly fits the resolution of the receptors in the retina. A similar trick in computer graphics is to blur the texture before integration, where the strength of the blur depends on the distance from the eye. However, this blurring technique does not help, if the mathematical problems are not caused by textures. Another compromise is to filter the resulting image. However, a filtering technique causes not only aliases but also previously sharp details to become blurred.


Aliasing can only be hidden by random sampling. Taking one independent random sample inside each pixel, the horizon will appear as uncorrelated black and white pixels. The structured aliases thus are mapped to noise, which is less disturbing to the eye. However, taking more random samples per pixel, the quadrature will eventually converge and aliases will appear instead of the noise.


Assuming that situations like the ones mentioned before can be managed by suitable filtering, there are now discussed alternative sampling patterns of computer graphics. These sampling patterns are illustrated in FIGS. 14A-C. FIG. 14A shows stratified sampling 380; FIG. 14B shows Latin hypercube sampling 390; and FIG. 14C shows blue noise sampling 400. FIGS. 15A-E show a series of drawings 410-450, illustrating sampling using quasi-Monte Carlo points.


Stratified sampling, shown in FIG. 14A, is at least as good as random sampling. However, stratified sampling suffers from the curse of dimension, since each axis of the integration domain has to be divided at least once.


Latin hypercube sampling, shown in FIG. 14B, can never be much worse than random sampling and is available for any dimension and number of samples. The average observed performance is good, although it cannot be guaranteed.


Poisson disk sampling, shown in FIG. 14C, simulates the distribution of receptors in the human eye. The mutual minimum distance of the points is maximized, which results in a reduced noise level in the rendered image. Although restricted by a guaranteed minimum distance, i.e., an empty disk around each sample, the sample points are randomly placed. Thus representable details remain sharp in the final image and aliases are efficiently mapped to noise. However, Poisson disk sampling patterns are typically expensive to generate.


The properties of these sampling patterns seem disjoint; however, there exist quasi-Monte Carlo points which can be generated efficiently and which unite the above properties.


There is now discussed an anti-aliasing technique, using (0, 2m, 2)-nets in base b=2.


The use of Hammersley points in the accumulation buffer algorithm has been investigated. Despite considerable improvements, each pixel had to use the same samples, which required higher sampling rates in order to avoid aliases. One solution to that problem is to exploit the structure of (0, 2m, 2)-nets in base b=2.



FIGS. 15A-E illustrate that (0, 2m, 2)-nets in base b=2 unite the properties of classical sampling patterns in computer graphics. The first two dimensions







x_i = (i/(2^m)^2, Φ_2(i))


of the Hammersley points are an example of such a point set, which is stratified, a Latin hypercube sample, and has a guaranteed minimum distance of at least







1/(2^m)^2.


The method of dependent tests is realized by tiling the image plane with copies of the (0, 2m, 2)-net. FIG. 16 shows a plot 460, illustrating how the samples in a pixel are determined by tiled instances of a Hammersley point set. The solid lines indicate the unit square including one set of 16 Hammersley points and the dashed lines indicate screen pixel boundaries. Because of a lack of space, the illustration uses only 4 samples per pixel.


In order to reduce aliasing artifacts, neighboring pixels should have different samples. This is achieved by letting one Hammersley point set cover multiple pixels. The improvement in convergence is visible in the same figure, where we compare stratified random sampling and anti-aliasing by using the Hammersley point set. The improvements directly transfer to other rendering algorithms, such as for example the REYES architecture as used in PIXAR's RenderMan software.


Selecting rank-1 lattices by maximum minimum distance takes the principle of Lloyd relaxation and Poisson disk sampling to the limit. Since rank-1 lattices are available for any number n of points, the sampling rate can be chosen freely. A factorization of n as required for axis-aligned stratified sampling is not needed. In order to attenuate aliasing in each pixel a different random shift is added to the lattice points resulting in an unbiased estimate. The images obtained that way display minimal noise and aliasing. The samples of a randomly shifted lattice can be generated fast and the observed convergence is even faster as compared to the radical inversion based methods from the previous section. The method can be derandomized by extracting the shift from, e.g., a Hammersley point set using the stratification properties pointed out in the previous section.


The principles of the previous sections can be extended to approximate the full integro-approximation problem. Using the Neumann series the computation of L is reduced to a sum of integrals as sketched in the introduction. Extensive investigations have been carried out on path tracing algorithms in connection with quasi-Monte Carlo methods and randomized versions of the latter methods. All the experiments resulted in improvements when using quasi-Monte Carlo methods as compared to classical algorithms of computer graphics.


As mentioned above, randomizing quasi-Monte Carlo methods allows for unbiased estimators and unbiased error estimates. The latter does not appear to be particularly interesting in computer graphics, as better adaptive methods already exist, as discussed below. Furthermore the resulting images can be generated in the same time and quality no matter whether a quasi-Monte Carlo or its randomized counterpart is used.


However, it is interesting to look at the effect of randomization schemes on the minimum distance. While randomizing point sets by random shifts does not change their maximum minimum distance on the torus, random scrambling does.


Applying random scrambling to the classical low discrepancy point sets, it can be observed that often the uniformity is improved and rarely decreased. This is true for both discrepancy and minimum distance. Experiments indicate that random scrambling only marginally changes the uniformity of the scrambled Halton sequence, indicating that deterministic permutations already are a very good choice.


Permutations for the Halton sequence are realizations of random scrambling and can be considered as deterministic scrambling, introduced long before random scrambling. Although the permutations have been introduced to improve on discrepancy, they also increase the maximum minimum distance of the Halton points. Implementing the scrambled Halton sequence is much simpler than random scrambling and avoids the risk that uniformity is influenced negatively.


In computer graphics, the difficulty of the integro-approximation problem differs from pixel to pixel and adaptive methods pay off. A possible criterion for refinement is to compare the image gradient





∥∇I(k, l)∥2>T·nk,l−α  (3.4)


against a threshold T. By the exponent α∈[0.5, 1.5] we can adapt to the speed of convergence, and nk,l is the number of samples used for pixel (k, l). Alternatively, refinement can be indicated by the routines that evaluate the radiance L, since these routines can access more information about the scene to be rendered.
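A minimal Python sketch of this refinement test follows; it is an illustrative addition, and the central-difference gradient estimate as well as the parameter values T and α are assumptions, not taken from the text.

    import numpy as np

    def needs_refinement(image, samples, k, l, T=0.05, alpha=1.0):
        # Refine pixel (k, l) while the gradient norm exceeds T * n_{k,l}**(-alpha).
        # T and alpha are assumed example values; samples[k, l] is the sample count.
        h, w = image.shape[:2]
        gx = image[min(k + 1, h - 1), l] - image[max(k - 1, 0), l]   # central differences,
        gy = image[k, min(l + 1, w - 1)] - image[k, max(l - 1, 0)]   # clamped at the border
        grad_norm = np.sqrt(np.sum(gx * gx) + np.sum(gy * gy))
        return grad_norm > T * samples[k, l] ** (-alpha)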


In the following discussion, there is described a technique for adaptive anti-aliasing by elements of the Halton sequence. The method is superior to random sampling, since the points of the Halton sequence are more uniform and, visually speaking, fall exactly into the largest unsampled gaps of the sampling domain, resulting in a smooth convergence.


The radical inverse in Equation (3.1) mirrors the representation of the index i by the digits al(i)∈{0, . . . , b−1} in the integer base b at the decimal point. In base b=2 this means that











Φ_2(i) ∈ { [0, 1/2) for i even, [1/2, 1) else.   (3.5)


This observation has been generalized and named periodicity. In fact, however, this property relates much more to stratification as formalized by the definition of (0, 1)-sequences. Therefore, a different derivation is presented that stresses the actual stratification property. The index i is chosen as






i = j·b^n + k for k∈{0, . . . , b^n−1}


and inserted into Equation (3.1), yielding














Φ_b(i) = Φ_b(j·b^n + k)
 = Σ_{l=0}^{∞} a_l(j·b^n + k)·b^{−l−1}
 = Σ_{l=0}^{∞} a_l(j·b^n)·b^{−l−1} + Σ_{l=0}^{∞} a_l(k)·b^{−l−1}
 = Σ_{l=0}^{∞} a_l(j)·b^{−n−l−1} + Φ_b(k)
 = b^{−n}·Φ_b(j) + Φ_b(k), where b^{−n}·Φ_b(j) ∈ [0, b^{−n})   (3.6)


by exploiting the additivity of the radical inverses that belong to the class of (0, 1)-sequences. The first term of the result depends on j and obviously is bounded by b−n, while the second term is a constant offset by the radical inverse of k. Since





k∈{0, . . . , bn−1}


it follows that





Φ_b(k) ∈ {0, b^{−n}, 2b^{−n}, . . . , (b^n−1)·b^{−n}}.


Fixing the first n digits by k then yields, for any non-negative integer j, radical inverses only in the interval





[Φ_b(k), Φ_b(k)+b^{−n}).
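Before generalizing to higher dimensions, the decomposition in Equation (3.6) can be checked numerically. The following Python sketch is an illustrative addition, not part of the original disclosure; the base b, the number of fixed digits n, and the test ranges are arbitrary choices.

    def radical_inverse(i, b):
        # Van der Corput radical inverse of i in base b.
        inv, digit_value = 0.0, 1.0 / b
        while i > 0:
            inv += (i % b) * digit_value
            i //= b
            digit_value /= b
        return inv

    b, n = 3, 2
    for k in range(b ** n):          # stratum selected by the n lowest digits
        for j in range(20):          # position inside the stratum
            i = j * b ** n + k
            lhs = radical_inverse(i, b)
            rhs = b ** (-n) * radical_inverse(j, b) + radical_inverse(k, b)
            assert abs(lhs - rhs) < 1e-12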


The one-dimensional observations from the previous section generalize to higher dimensions. For the Halton sequence






x_i = (Φ_{b_1}(i), . . . , Φ_{b_s}(i)) ∈ [0,1)^s


where for the c-th component bc is the c-th prime number, this is seen by choosing the index









i = j·Π_{d=1}^{s} b_d^{n_d} + k with 0 ≦ k < Π_{d=1}^{s} b_d^{n_d}   (3.7)


yielding








Φ_{b_c}(i) = Φ_{b_c}(j·Π_{d=1}^{s} b_d^{n_d} + k) = b_c^{−n_c}·Φ_{b_c}(j·Π_{d=1, d≠c}^{s} b_d^{n_d}) + Φ_{b_c}(k)


which is analogous to Equation (3.6) and consequently







x_i ∈ ×_{c=1}^{s} [Φ_{b_c}(k), Φ_{b_c}(k) + b_c^{−n_c})


for fixed k and any non-negative integer j, with the choice of the index i according to Equation (3.7). The







Π_{d=1}^{s} b_d^{n_d}


disjoint strata selected by k form a partition of the unit cube [0,1)s. However, the scheme suffers from the curse of dimension, since stratifying all s dimensions results in an exponential growth of the number of strata. The increment







Π_{d=1}^{s} b_d^{n_d}


grows even faster than 2s since, except for the first, all bd>2.


A tensor product approach has been compared with an approach using the Halton sequence. An illustration has been constructed of the stratification of path space for photons that are emitted on the light source and traced for 2 reflections. The problem requires samples from the 8-dimensional unit cube. The sampling domain was stratified into 1288 strata of identical measure. Then one stratum was selected and 8 random samples were drawn from it to determine the photon trajectories. Stratification by the Halton sequence was then used to determine the 8 paths. In spite of an enormously large increment that hardly fits into the integer representation of a computer, the trajectories start to diverge after the 2nd reflection.


For a stratification as fine as the tensor product approach large increments







Π_{d=1}^{s} b_d^{n_d}


are required that hardly fit the integer representation of a computer. In addition, the assumption that close points in the unit cube result in close paths in path space is not valid for more complex scenes. Depending on what part of the geometry is hit, the photons can be scattered into completely different parts of the scene, although their generating points had been close in the unit cube. The longer the trajectories, the more they will diverge. The above observations easily can be transferred to (t, s)-sequences in base b.


While stratification by the Halton sequence is not useful in high dimensions, it can be very useful in small dimensions as, for example, in pixel anti-aliasing. The properties in two dimensions are illustrated in FIGS. 17A and 17B, which are plots 470 and 480 illustrating how the samples from the Halton sequence in the unit square were scaled to fit the pixel raster. The plots show, respectively, the first two components xi=(Φ2(i), Φ3(i)) of the Halton sequence for 0≦i<23·33=216. The solid points have the indices i≡ik(j)=21·31·j+k selected by k=1. The stratum with the emphasized points contains all indices i≡ik(j) with k=1. In order to match the square pixels the coordinates are scaled, i.e.,






x_i → x′_i = (2^1·Φ_2(i), 3^1·Φ_3(i))


In general the first component xi(1) is scaled by b1n1 and the second xi(2) by b2n2. Thus a b1n1×b2n2 stratified sample pattern is obtained that can be periodically tiled over the image plane, as described above. Identifying each stratum with a pixel, the identification k easily is determined (for example by a table lookup) from the pixel coordinates, and a Halton sequence restricted to that pixel is obtained from






i ≡ i_k(j) = j·b_1^{n_1}·b_2^{n_2} + k for j = 0, 1, 2, . . .


Exemplary images have been computed using a path tracer, as described above, with the scrambled Halton sequence. Refinement was triggered by the gradient criterion in Equation (3.4). Note that the scrambling does not change Φ2 and Φ3. Consequently, the above algorithm can be applied directly and benefits from the improved uniformity of the scrambled sequence.
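The following Python sketch illustrates the two-dimensional case described above; it is an illustrative addition rather than the implementation referenced in the text, and the grid resolution (n1, n2)=(1, 1), the unscrambled radical inverse, and the sample count are arbitrary choices.

    def radical_inverse(i, b):   # van der Corput radical inverse, as in the earlier sketch
        inv, digit_value = 0.0, 1.0 / b
        while i > 0:
            inv += (i % b) * digit_value
            i //= b
            digit_value /= b
        return inv

    def build_pixel_table(n1, n2):
        # Exactly one of the first 2**n1 * 3**n2 Halton points falls into each
        # stratum; remember its index k per stratum (pixel of the tile).
        period = 2 ** n1 * 3 ** n2
        table = {}
        for k in range(period):
            px = int(radical_inverse(k, 2) * 2 ** n1)
            py = int(radical_inverse(k, 3) * 3 ** n2)
            table[(px, py)] = k
        return table, period

    def halton_samples_in_pixel(px, py, count, table, period, n1, n2):
        # Indices i = j * period + k stay inside stratum (px, py); scaling by
        # 2**n1 and 3**n2 maps the samples to pixel coordinates.
        k = table[(px, py)]
        for j in range(count):
            i = j * period + k
            yield (2 ** n1 * radical_inverse(i, 2), 3 ** n2 * radical_inverse(i, 3))

    table, period = build_pixel_table(1, 1)     # a 2 x 3 tile of pixels
    for s in halton_samples_in_pixel(0, 0, 4, table, period, 1, 1):
        print(s)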


Trajectory splitting can increase efficiency in rendering algorithms. A typical example is volume rendering. While tracing one ray through a pixel it is useful to send multiple rays to the light sources along that ray. It has been shown that taking equidistant samples on the ray that all have been randomly shifted by the same amount is much more efficient than the original method that used jittered sampling.


A general approach to trajectory splitting is to restrict the replications in Equation (3.3) to some dimensions of the integrand. For quasi-Monte Carlo methods, this approach has been used in an implementation, according to which a strictly deterministic version of distribution ray tracing was developed. A systematic approach has been taken, according to which randomization techniques from the field of randomized quasi-Monte Carlo methods have been parameterized. Instead of using random parameters, deterministic quasi-Monte Carlo points have been applied. Seen from a practical point of view, trajectory splitting can be considered as low-pass filtering of the integrand with respect to the splitting dimensions.


The most powerful method is to split trajectories using domain stratification induced by rank-1 lattices. For the interesting s-dimensions of the problem domain, a rank-1 lattice is selected. The matrix B contains the vectors spanning the unit cell as identified by the Voronoi-diagram of the rank-1 lattice. Then











R_i: [0, 1)^s → A_i, x ↦ ((i/n)·(1, g_1, . . . , g_{s−1}) + B·x) mod [0, 1)^s


maps points from the unit cube to the i-th stratum Ai of the rank-1 lattice, as depicted in FIG. 18B, discussed below. This scheme can be applied recursively yielding recursive Korobov filters.
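A minimal two-dimensional Python sketch of the mapping Ri follows; it is an illustration only, and the cell basis B, in general obtained from the Voronoi diagram of the lattice, is supplied as an assumption.

    def replicate(x, i, n, g, B):
        # Map x in [0,1)^2 to the i-th cell A_i of the rank-1 lattice with
        # generator vector (1, g): lattice point plus an offset B*x inside the cell.
        lattice_point = ((i / n) % 1.0, (i * g / n) % 1.0)
        offset = (B[0][0] * x[0] + B[0][1] * x[1],
                  B[1][0] * x[0] + B[1][1] * x[1])
        return ((lattice_point[0] + offset[0]) % 1.0,
                (lattice_point[1] + offset[1]) % 1.0)

    # Example (assumed values): Fibonacci lattice at n = 5 with generator vector
    # (1, 2); the columns of B are the basis vectors (1/5, 2/5) and (2/5, -1/5)
    # spanning one lattice cell.
    n, g = 5, 2
    B = [[0.2, 0.4],
         [0.4, -0.2]]
    print(replicate((0.25, 0.75), 3, n, g, B))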


For the special case of the Fibonacci lattice at n=5 points the recursive procedure has been used for adaptive sampling in computer graphics. Starting with the lattice, the next refinement level is found by rotating by arctan(½) and scaling by 1/√5, as indicated in FIG. 18C, discussed below. The resulting lattice again is a rectangular lattice and the procedure can be continued recursively. Presented this way, the construction seems completely unrelated to rank-1 lattices.



FIGS. 18A-C show a series of plots 490-510 illustrating replications by rank-1 lattices. The Voronoi-diagram of a rank-1 lattice induces a stratification, shown in FIG. 18A. All cells Ai are of identical measure and in fact rank-1 lattices are (M, μ)-uniform. A cell Ai is anchored at the i-th lattice point xi and is spanned by the basis vectors (b1, b2). This can be used for recursive Korobov-filters, shown in FIG. 18B, where the points inside a lattice cell are determined by another set of lattice points transformed into that lattice cell. In computer graphics one special case of this principle, shown in FIG. 18C, has been named √5 sampling, because the length of the dashed lines is 1/√5. It is in fact a recursive Korobov filter with points from the Fibonacci lattice at n=5 points.


Good results have been obtained from a distribution ray tracer that used randomly shifted rank-1 lattices with maximized minimum distance. Trajectory splitting was realized using rank-1 lattices with maximized minimum distance, too.


One random vector was used per pixel to shift the lattice points in order to obtain an unbiased estimator and to decorrelate neighboring pixels. The resulting images exhibited minimal noise, while aliasing artifacts are pushed to noise. Compared to previous sampling methods the convergence was superior, due to the high uniformity of the lattice points.


As lattice points are maximally correlated, this is a good example of correlated sampling in computer graphics. In this context, quasi-Monte Carlo integro-approximation by lattice points can be considered as Korobov filtering.


While applications of quasi-Monte Carlo integration in finance attracted a lot of attention instantly, developments in computer graphics were not that spectacular. Today, however, about half of the rendered images in the movie industry are synthesized using strictly deterministic quasi-Monte Carlo integro-approximation. In 2003, these techniques were awarded a Technical Achievement Award (Oscar) by the American Academy of Motion Picture Arts and Sciences. In contrast to academia, the graphics hardware and software industry recognized the benefits of quasi-Monte Carlo methods early.


Deterministic quasi-Monte Carlo methods have the advantage that they can be parallelized without having to consider correlation as encountered when using pseudo-random number generators. By their deterministic nature the results are exactly reproducible even in a parallel computing environment.


Compared to classical algorithms of computer graphics the algorithms are smaller and more efficient, since high uniformity is intrinsic to the sample points. A good example is trajectory splitting by rank-1 lattices that have maximized minimum distance.


In computer graphics it is known that maximizing the minimum distance of point sets increases convergence speed. However, algorithms to create such points, such as Lloyd's relaxation method, are expensive. With quasi-Monte Carlo methods selected by maximized minimum distance, efficient algorithms are available and savings up to 30% of the computation time for images of the same quality as compared to random sampling methods can be observed.


In the setting of computer graphics quasi-Monte Carlo methods benefit from the piecewise continuity of the integrands in Lb2. Around the lines of discontinuity the methods are observed to perform no worse than random sampling, while in the regions of continuity the better uniformity guarantees faster convergence. The observed convergence rate is between O(n−1) and O(n−1/2). It depends on the ratio of the number of sets in the partition induced by (M, μ)-uniform points and the number of these sets containing discontinuities. Since with an increasing number of dimensions the integrands tend to contain more discontinuities, the largest improvements are observed for smaller dimensions.


Since photorealistic image generation comprises the simulation of light transport by computing functionals of the solution of a Fredholm integral equation of the second kind, the quasi-Monte Carlo methods developed for computer graphics apply to other problems of transport theory as well.


The concept of maximized minimum distance as used in computer graphics nicely fits the concept of (M, μ)-uniformity as used in quasi-Monte Carlo theory. Rank-1 lattices selected by maximized minimum distance ideally fit both requirements and yield superior results in computer graphics.


IV. General Methods


FIGS. 19-22 are a series of flowcharts illustrating general methods according to further aspects of the present invention.



FIG. 19 is a flowchart of a computer-implemented method 600 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method includes the following steps:


Step 601: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points.


Step 602: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.



FIG. 20 shows a flowchart of a computer-implemented method 620 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method comprises the following steps:


Step 621: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a low-discrepancy sequence, and wherein the generating includes using an adaptive, interleaved sampling scheme based on a deterministically scrambled Halton sequence to yield a deterministic, low-discrepancy set of sample points.


Step 622: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.



FIG. 21 shows a flowchart of a computer-implemented method 640 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method comprises the following steps:


Step 641: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, wherein the set of sample points comprises quasi-Monte Carlo points, and wherein the generating includes adaptively sampling by using radical inversion-based points.


Step 642: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.



FIG. 22 shows a flowchart of a computer-implemented method 660 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method comprises the following steps:


Step 661: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, wherein the generating includes sampling by using rank-1 lattice points.


Step 662: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.


V. Enumerating Quasi-Monte Carlo Sequences in Voxels

As discussed above, in other approaches partitions of the integration domain are given in a rather simple way. Typical examples include the following: binary space partitioning trees, especially axis-aligned kd-trees; Voronoi diagrams; and regular grids. The present discussion focuses on voxelizations, i.e., regular grids with a fixed resolution per axis. A “voxel” is a volume element that is the three-dimensional equivalent of a pixel.


There is defined a uniform voxelization of the unit cube [0, 1)s for resolutions rj in dimension 1≦j≦s, such that there are intervals






[k/r_j, (k+1)/r_j)


for 0≦k<rj. In other words, each voxel cell can be identified by a set of integer coordinates (p1, . . . , ps), with 0≦pj<rj.


5.1 Enumerating the Halton-Sequence in Voxel Cells

The Halton-sequence is a point sequence often used for multidimensional quasi-Monte Carlo integration. It is constructed using the van der Corput radical inverse function.


Consider a non-negative integer i and its b-adic expansion:










i = Σ_{k=1}^{∞} a_k(i)·b^{k−1},   Eq. (5.0)


where ak(i) denotes the k-th digit of i in base b. The radical inverse of i is obtained by reversing the order of the digits and placing the resulting digit sequence behind the decimal point. This can be generalized by additionally permuting the digits, resulting in the generalized radical inverse function:












φ_b(i) := Σ_{k=1}^{∞} σ_b(a_k(i))·b^{−k},   Eq. (5.1)


where σb is a permutation of {0, . . . , b−1} that helps to hide correlation between components of the sequence.


For each non-negative integer i the corresponding point of the Halton sequence xi∈[0,1)s is composed by using the first s prime numbers as bases for the radical inverse function. In other words, xi=(φb1(i), . . . , φbs(i)), where bj is the j-th prime number for 1≦j≦s.
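For illustration only (this sketch is an addition and not the implementation of FIGS. 29A-F), the generalized radical inverse and the Halton point can be written in Python as follows; the identity permutation used by default is an arbitrary choice, and the loop stops at the highest non-zero digit, which is exact for permutations that fix zero.

    def radical_inverse(i, b, perm=None):
        # Generalized radical inverse of Equation (5.1); perm is a permutation of
        # {0, ..., b-1}. Stopping at the highest non-zero digit is exact when
        # perm fixes 0 (e.g. the identity used by default).
        inv, digit_value = 0.0, 1.0 / b
        while i > 0:
            digit = i % b
            if perm is not None:
                digit = perm[digit]
            inv += digit * digit_value
            i //= b
            digit_value /= b
        return inv

    def halton_point(i, primes):
        # The i-th Halton point, one radical inverse per prime base.
        return tuple(radical_inverse(i, b) for b in primes)

    print(halton_point(5, [2, 3, 5]))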


The stratification of the Halton-sequence can be characterized as follows:


Property 1. Given a regular voxelization with resolution bjnj in dimension j for s dimensions, the first







Π_{k=1}^{s} b_k^{n_k}


points are stratified such that there is exactly one point in each voxel cell.


In the following description, there is presented a technique for determining the index of the sample that falls into a given voxel cell. This information can then be used to enumerate the samples for the voxel grid of resolution (r1, . . . , rs) by choosing (n1, . . . , ns) as small as possible such that bjnj≧rj for 1≦j≦s. Then samples in a stratum (p1, . . . , ps) are only enumerated whenever pj<rj for 1≦j≦s.


A fixed index i is partitioned into nj lower digits lbj and higher digits hbj with respect to base bj as follows:






i = b_j^{n_j}·h_{b_j} + l_{b_j}.   Eq. (5.1.1)


Taking advantage of the structure of Equation (5.1), the radical inverse can then be written as











φ_{b_j}(i) = φ_{b_j}(b_j^{n_j}·h_{b_j} + l_{b_j})   (Eq. 5.2)
 = b_j^{−n_j}·φ_{b_j}(h_{b_j}) + φ_{b_j}(l_{b_j}).   (Eq. 5.3)


That means the lower digits are responsible for choosing a stratum 0≦c<bn, while the higher digits hb determine the position inside this stratum. This leads to:


Property 2. The Halton sequence points with indices i and






i + t·Π_{k=1}^{s} b_k^{n_k}


fall into the same stratum (p1, . . . , ps) for every non-negative integer t, where 0≦pj<bjnj for 1≦j≦s.


This allows for a simple lookup table of size







Π_{k=1}^{s} b_k^{n_k}


for determining the indices of all Halton-sequence points that fall into a given stratum. However, even for s=2 the memory requirements can be prohibitive.


Note that, given a fixed stratum (p1, . . . , ps), with 0≦pj<bjnj, it is possible, due to Equation (5.3), to determine the lower index digits for every dimension. It is only necessary to invert the radical inversion function, which can easily be accomplished algorithmically. This is still true when using Faure permutations, or other permutations which may or may not depend on the digit position k, since these can be inverted as well.
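A minimal Python sketch of this inversion follows; it is an illustrative addition, the formal statement is Equation (5.3.1) below, and the optional inverse-permutation argument is an assumption for the scrambled case.

    def inverse_radical_inverse(p, b, n, inv_perm=None):
        # Return the n low index digits l such that the radical inverse of l
        # equals p / b**n: the digits of p are reversed (and, for a scrambled
        # sequence, passed through the assumed inverse permutation inv_perm).
        l = 0
        for _ in range(n):
            digit = p % b
            if inv_perm is not None:
                digit = inv_perm[digit]
            l = l * b + digit
            p //= b
        return l

    # Example: in base 2 with n = 2 digits, stratum p = 1 (interval [1/4, 2/4))
    # is reached by indices whose two lowest digits equal those of 2.
    print(inverse_radical_inverse(1, 2, 2))   # prints 2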


For each dimension 1≦j≦s, the lower index digits with respect to base bj can be determined by computing:










l_{b_j} = φ_{b_j}^{−1}(p_j / b_j^{n_j}).   Eq. (5.3.1)


What remains is to find the first index i such that for every dimension 1≦j≦s, its digits with respect to base bj match lbj. That means we need to solve the following s equations simultaneously:





l_{b_j} ≡ i mod b_j^{n_j}, j = 1, . . . , s.   Eq. (5.4)


Since for the Halton-sequence the bases b1, . . . , bs are relatively prime, the prime powers bjnj are relatively prime as well. Therefore a simultaneous solution for Equation (5.4) is given by the Chinese remainder theorem. The unique result is given by:









i = (Σ_{k=1}^{s} l_{b_k}·m_k·(m_k^{−1} mod b_k^{n_k})) mod Π_{k=1}^{s} b_k^{n_k}   Eq. (5.5)


where







m_j = (1/b_j^{n_j})·Π_{k=1}^{s} b_k^{n_k}


and the multiplicative inverse (mj−1 mod bjnj) can be computed efficiently by means of the extended Euclid algorithm.
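The following Python sketch is an illustrative addition, not the full implementation of FIGS. 29A-F: it combines the per-dimension low digits by the Chinese remainder theorem as in Equation (5.5), and it uses Python's built-in modular inverse pow(x, -1, m), available from Python 3.8, in place of an explicit extended Euclid routine.

    from functools import reduce

    def first_index_in_stratum(low_digits, moduli):
        # Solve i = l_j (mod m_j) for pairwise coprime moduli m_j = b_j**n_j,
        # following Equation (5.5).
        M = reduce(lambda a, b: a * b, moduli)
        i = 0
        for l, m in zip(low_digits, moduli):
            partial = M // m                          # the factor m_j of Equation (5.5)
            i += l * partial * pow(partial, -1, m)    # modular inverse via pow (Python 3.8+)
        return i % M

    # Example (assumed values): low digits (1, 2) for moduli (4, 9) give the
    # smallest index that is congruent to 1 mod 4 and to 2 mod 9.
    print(first_index_in_stratum([1, 2], [4, 9]))   # prints 29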


Note that these s multiplicative inverses only need to be computed once in a precomputation stage, since they are independent of the lbj. That means by using a fixed table with s entries, it is possible to compute the first index i with Equation (5.5) for any stratum. An arbitrary number of samples in this stratum can be generated using the indices






i + t·Π_{k=1}^{s} b_k^{n_k}


for t = 0, 1, 2, . . . by Property 2. In contrast to an earlier approach where a full lookup table of size







Π_{k=1}^{s} b_k^{n_k}


was used, the approach described here poses only minimal restrictions on memory.


For illustrative purposes, the Python programming language was chosen for a prototype implementation, since Python transparently handles arbitrarily long integers. Depending on the resolution r0, . . . , rs−1 and the prime bases used, intermediate results can easily overflow using only 32-bit integers. In this example, the indices range from 0 to s−1, which does not match the rest of the present discussion. This is easily explained: It refers to an implementation where all indices start at zero.


In particular, the increment







Π_{k=1}^{s} b_k^{n_k}


grows rapidly when increasing the number of dimensions. Therefore, care must be taken when implementing the algorithm in languages such as C++ without built-in arbitrary precision integers. An exemplary full implementation in Python is described hereinbelow.


5.2 Enumerating (t, s)-Sequences in Voxel Cells


(t, s)-sequences in base b are constructed using generator matrices C(j)=(ck,l(j)) for j=1, . . . , s, with entries in the finite field of b elements. In practice, the size of the matrices is limited to a fixed precision M. Then the j-th component of the i-th point is given by:











x_i^{(j)} = (b^{−1}, . . . , b^{−M})·[C^{(j)}·(a_1(i), . . . , a_M(i))^T] ∈ [0,1),   Eq. (5.6)


where the matrix-vector multiplication in brackets has to be performed in the finite field of b elements.


Since, in contrast to the previous section, the base is the same for all components for a voxel grid resolution (r1, . . . , rs) it is only necessary to find a single exponent m such that bm≧max {r1, . . . , rs}. If it is desired to generate at most be samples per voxel cell, the sample index i will always be smaller than (bm)s·be=bms+e, so a matrix resolution of M=ms+e will suffice.


To compute the index i of the (q+1)-th sample of the voxel (p1, . . . , ps), the following system of linear equations is solved, which follows by analyzing how the digits of the resulting component in Equation (5.6) are computed:











( c_{1,1}^{(1)}  c_{1,2}^{(1)}  . . .  c_{1,sm+e}^{(1)} )     ( a_1(i)   )     ( a_1(p_1) )
(    . . .                                             )     (  . . .   )     (  . . .   )
( c_{m,1}^{(1)}  c_{m,2}^{(1)}  . . .  c_{m,sm+e}^{(1)} )     ( a_{ms}(i) )     ( a_m(p_1) )
(    . . .                                             )  ·  ( a_1(q)   )  =  (  . . .   )     Eq. (5.6.1)
( c_{1,1}^{(s)}  c_{1,2}^{(s)}  . . .  c_{1,sm+e}^{(s)} )     (  . . .   )     ( a_1(p_s) )
(    . . .                                             )     ( a_e(q)   )     (  . . .   )
( c_{m,1}^{(s)}  c_{m,2}^{(s)}  . . .  c_{m,sm+e}^{(s)} )                      ( a_m(p_s) )


That means the matrix above consists of the first m rows of C(1) followed by the first m rows of C(2), and so on. Since a1(q), . . . , ae(q) are known, it is possible to rewrite the system above as A·v=b, where











A·v = ( c_{1,1}^{(1)}  . . .  c_{1,sm}^{(1)} )     ( a_1(i)   )
      (    . . .                            )     (  . . .   )
      ( c_{m,1}^{(1)}  . . .  c_{m,sm}^{(1)} )  ·  ( a_{ms}(i) ),     Eq. (5.7)
      (    . . .                            )
      ( c_{1,1}^{(s)}  . . .  c_{1,sm}^{(s)} )
      (    . . .                            )
      ( c_{m,1}^{(s)}  . . .  c_{m,sm}^{(s)} )

and

b = ( a_1(p_1) )     ( c_{1,sm+1}^{(1)}  . . .  c_{1,sm+e}^{(1)} )     ( a_1(q) )
    (  . . .   )     (    . . .                                 )     (  . . . )
    ( a_m(p_1) )  −  ( c_{m,sm+1}^{(1)}  . . .  c_{m,sm+e}^{(1)} )  ·  ( a_e(q) ).     Eq. (5.8)
    (  . . .   )     (    . . .                                 )
    ( a_1(p_s) )     ( c_{1,sm+1}^{(s)}  . . .  c_{1,sm+e}^{(s)} )
    (  . . .   )     (    . . .                                 )
    ( a_m(p_s) )     ( c_{m,sm+1}^{(s)}  . . .  c_{m,sm+e}^{(s)} )


Determining the unknown v, respectively the index bits a1(i), . . . , ams(i), is therefore only possible when det A≠0. In other words, the (t, s)-sequence must obey this criterion to be used for enumerating samples inside voxel cells as described above.


If that is the case, the inverse A−1 can be precomputed once in a preprocessing step and reused to compute the indices for all samples. Determining the sample index is therefore not much more expensive than evaluating an additional component of the sequence.
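For base b=2, the following Python sketch shows one way such a system can be solved by Gaussian elimination over the finite field of two elements. It is an illustrative addition: it stores each matrix row as an integer bitmask and covers only the solving step, not the construction of A and b from Equations (5.7) and (5.8).

    def solve_gf2(rows, rhs):
        # Solve the square system A*v = b over GF(2); rows[r] is a bitmask whose
        # bit c is the matrix entry A[r][c], and rhs[r] is 0 or 1.
        rows, rhs = list(rows), list(rhs)
        size = len(rows)
        for col in range(size):
            pivot = next((r for r in range(col, size) if (rows[r] >> col) & 1), None)
            if pivot is None:
                raise ValueError("matrix is singular (det A = 0)")
            rows[col], rows[pivot] = rows[pivot], rows[col]
            rhs[col], rhs[pivot] = rhs[pivot], rhs[col]
            for r in range(size):
                if r != col and (rows[r] >> col) & 1:
                    rows[r] ^= rows[col]        # eliminate column col from row r
                    rhs[r] ^= rhs[col]
        # After Gauss-Jordan elimination row r contains only bit r, so rhs is v.
        return rhs

    # Example: A = [[1, 1], [0, 1]] (row bitmasks 0b11 and 0b10), b = (1, 0).
    print(solve_gf2([0b11, 0b10], [1, 0]))   # prints [1, 0]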


5.2.1 Suitable (t, s)-Sequences


There are a number of suitable (t, s)-sequences with det A≠0. Especially (0, s)-sequences satisfy this criterion, as the requirement for (0, s)-sequences is even stronger. Voxel hypercubes are just one kind of elementary interval. Thus each block of bms points that forms a (0, ms, s)-net in base b fulfills the presently described stratification needs.


Each component of the Sobol′-sequence forms a (0, 1)-sequence in base 2. The first two components form a (0, 2)-sequence in base 2. A suitable implementation is described hereinbelow.


The first three components, which are constructed using the first three primitive polynomials and direction numbers, using techniques known in the art, do not form a (0, 3)-sequence. Interestingly, they are still usable for the sample enumeration in voxels. It has been verified that det A≠0 for s=3 and m=0, . . . , 1000. This property has not yet been proved for every positive m and s=3.


For s≧4, primitive polynomials and direction numbers were not found that resulted in det A≠0 for a wide range of possible values of m.


While the Sobol′-sequences are defined for base 2 only, the Faure sequences are suited to generate (0, s)-sequences for any s by choosing an appropriate base b≧s−1. Consequently, these sequences can also be used for enumerating samples in voxel cells.


5.2.2 Enumerating the Sobol′-Sequence in 2D-Pixels

In the present section, there is described an efficient enumeration scheme for the Sobol′-sequence for s=2. It is very well suited for adaptive rendering, where stratified sampling over the image plane can improve convergence.


The Sobol′-sequence is an infinite-dimensional low-discrepancy sequence of points. However, in contrast to the Halton-sequence, the bases used for each component do not differ. Instead, all components can be computed using the index digits in base 2. Thus the Sobol′-sequence can be implemented very efficiently on current hardware architectures.


Each component can be generated by using binary generator matrices, using a suitable technique. While in theory these matrices are infinite-dimensional, in practice, the size is limited by the number of available bits M, e.g., M=32, which allows for the generation of up to 2M points. The j-th component of the i-th point is then given by











x_i^{(j)} = (2^{−1}, . . . , 2^{−M})·[C_j·(a_1(i), . . . , a_M(i))^T] ∈ [0,1),   Eq. (5.9)


where Cj denotes the binary generator matrix for the j-th component and the matrix-vector multiplication in brackets has to be performed in the finite field of two elements. These components may be implemented using techniques known in the art.
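As an illustration (not the referenced implementation), one component can be evaluated from its generator matrix as in the following Python sketch; representing each matrix column as an M-bit integer is an implementation assumption, and the identity matrix used in the example reproduces the van der Corput sequence.

    M = 32   # number of output bits

    def sobol_component(i, columns):
        # Matrix-vector product of Equation (5.9) over GF(2): XOR the columns
        # selected by the binary digits of the index i.
        result, k = 0, 0
        while i > 0:
            if i & 1:
                result ^= columns[k]
            i >>= 1
            k += 1
        return result / float(1 << M)   # interpret the M result bits as a fraction in [0,1)

    # Identity generator matrix: column k has a single bit at position M-1-k; this
    # reproduces the van der Corput sequence in base 2.
    identity_columns = [1 << (M - 1 - k) for k in range(M)]
    print([sobol_component(i, identity_columns) for i in range(4)])   # 0.0, 0.5, 0.25, 0.75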


The first two components of the Sobol′-sequence, given respectively by the van der Corput radical inverse and the Sobol′ radical inverse, form a (0, 2)-sequence in base 2. Among other things, this means that the set of the first 22m two-dimensional points, with m≧0, is stratified such that there is exactly one point in each cell of a 2m×2m regular grid spanned over [0, 1)2.


After finding the index of a sample that falls into a given pixel, the remaining components for the infinite-dimensional sample can again be used for higher dimensions of the integrand.


One approach provided a technique for generating such a mapping from pixel values to indices of a Hammersley (0, 2m, 2)-net. However, this approach requires the number of samples to be known beforehand, which does not work in an adaptive sampling context. A generalization to other (0, 2m, 2)-nets, e.g., the Larcher-Pillichshammer-net, can be found in the art.


In analogy to the mapping for the Halton-sequence, there is now presented a technique that maps from a two-dimensional voxelization (e.g., the pixel coordinates) to the index of a Sobol′-sequence sample point. The point index i is required to be in the range 0, . . . , 22m−1, thus it can be represented by using 2m bits. The index is split up into two parts, each comprising m bits:









i = Σ_{j=1}^{2m} a_j(i)·2^{j−1} = Σ_{j=1}^{m} a_j(i)·2^{j−1} + Σ_{j=m+1}^{2m} a_j(i)·2^{j−1},   Eq. (5.10)

where the first sum collects the m least significant bits (LSB) and the second sum the m most significant bits (MSB) of the index.


For base b=2 and indices k·2m, . . . , (k+1)·2m−1, only the m least significant bits (LSB) of the index change. These m bits correspond to the m leftmost columns of the generator matrix. Analogously, the m most significant bits (MSB) of each index correspond to the m rightmost matrix columns.


That is the reason that two tables are precomputed, each of size 2m. One table holds the results of the matrix-vector multiplications for all indices where the m upper bits are zero. The other table holds the results for indices where the m lower bits are zero. To find the result for an arbitrary index, it is only necessary to split up the index into its m lower and m upper bits, look up the results in the tables and combine them with an exclusive-or (XOR) operation.
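A minimal Python sketch of this table-based evaluation follows; it is illustrative only, and the helper sobol_component_bits, assumed to return the raw M result bits of Equation (5.9) as an integer, is a hypothetical function standing in for the component evaluation above.

    def build_tables(m, sobol_component_bits):
        # sobol_component_bits: assumed helper returning the raw M result bits (an integer).
        # Results for all indices whose upper m bits are zero, and for all indices
        # whose lower m bits are zero; linearity over GF(2) allows combining them.
        low_table = [sobol_component_bits(k) for k in range(1 << m)]
        high_table = [sobol_component_bits(k << m) for k in range(1 << m)]
        return low_table, high_table

    def component_from_tables(index, m, low_table, high_table):
        # Split a 2m-bit index and XOR the two partial results.
        low = index & ((1 << m) - 1)
        high = index >> m
        return low_table[low] ^ high_table[high]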


The key of the pixel-to-index mapping algorithm is to precompute two additional tables of size 2m that hold an inverse mapping:


Given the m upper bits of the first component, which is given by the van der Corput radical inverse in base 2, the m lower index bits are obtained, assuming the m upper index bits are zero.


Given the m upper bits of the second component, which is given by the Sobol′ radical inverse, the m upper index bits are obtained, assuming the m lower index bits are zero.


For integer pixel coordinates (ex, ey), with 0≦ex,ey<2m, the presently described technique then proceeds as follows:


1. ex determines the m lower index bits, which can be looked up using the inverse mapping table. Alternatively, direct computation is possible, since the van der Corput radical inverse is its own inverse function.


2. Given these fixed m lower index bits, the m upper index bits are still missing. However, it is possible to compute the result of the second component using the already known m lower index bits, setting the m upper index bits to zero.


3. The m upper bits of this result will in general not match the desired result, given by ey. However, it is known that there must exist m upper index bits that are responsible for achieving a match. By computing the necessary difference, it is possible to look up the m upper index bits in the second precomputed inverse mapping table.


An implementation is set forth hereinbelow. Note that it is no longer necessary for the lookup code to call the van der Corput and the Sobol′ radical inverse functions. Instead, all computations are done using the lookup tables of size 2m each.


In contrast to the Halton-sequence, the increment for samples falling into the same stratum is 22m. The lookup code allows for specifying the current frame, which has the effect of adding this increment to all sample indices. The tables must be recomputed for each frame, but the resulting points will always constitute a (0, 2m, 2)-net due to the (0, 2)-sequence property.


However, care must be taken not to discard bits, since the integer coordinates of samples now can have more than 2m bits. Thus, the returned values of the lookup method must be divided by 232 to map to [0, 1)2.


Techniques and Implementations

There are now described techniques and implementations in accordance with the above description.



FIG. 23 shows a flowchart 2300 of an overall technique according to the present invention, and includes the following:


Box 2301: Select uniform voxel grid; select suitable low-discrepancy sequence.


Box 2302: Select voxel.


Box 2303: Generate samples in voxel.



FIGS. 24-26 are a series of flowcharts illustrating techniques according to further aspects of the invention. These techniques include, first, a technique utilizing a Halton-sequence (FIG. 24) and, second, a technique utilizing (t, s)-sequences in base b (FIG. 25). Further described with respect to the second technique is a technique for the special case, a Sobol′-sequence for s=2 (FIG. 26).


The description of techniques is followed by a description of two implementations: first, an implementation for the Halton-sequence and second, an implementation for the Sobol′-sequence for s=2. As noted above, the exemplary implementations have been written using the Python programming language. However, it will be apparent that the present invention may be practiced using other implementations written in other programming languages.


A.1 Exemplary Technique—Halton-Sequence


FIG. 24 shows a flowchart of an exemplary technique 2400 utilizing a Halton-sequence. The technique 2400 includes the following:


Box 2401: Choose s-dimensional voxel grid resolution r1, . . . , rs.


Box 2402: Compute prime powers b1n1, . . . , bsns, such that bjnj≧rj for j=1, . . . , s and b1, . . . , bs are relatively prime.


Box 2403: Compute the multiplicative inverses (mj−1 mod bjnj) using the extended Euclid algorithm and store them in a table.


Box 2404: Now that the precomputation phase is over, loop over the voxel cells as follows:


Box 2405: Given a desired voxel cell identified by the integer coordinates (p1, . . . , ps), with 0≦pj<rj for j=1, . . . , s, compute:







l_{b_j} = φ_{b_j}^{−1}(p_j / b_j^{n_j})


for j=1, . . . , s.


Box 2406: Using the Chinese remainder theorem, solve for the unknown i for the set of equations





lbj≡i mod bjnj, j=1, . . . , s,


by computing







i = (Σ_{k=1}^{s} l_{b_k}·m_k·(m_k^{−1} mod b_k^{n_k})) mod Π_{k=1}^{s} b_k^{n_k},


using the precomputed





(mj−1 mod bjnj).


Box 2407: Generate an arbitrary number of samples in the specified voxel cell using the indices






i + t·Π_{k=1}^{s} b_k^{n_k}


by setting t=0, 1, 2, . . . .


A.2 Exemplary Technique—(t, s)-Sequences in Base b



FIG. 25 shows a flowchart of an exemplary technique 2500 utilizing (t, s)-sequences in base b. Technique 2500 includes the following:


Box 2501: Choose voxel grid resolution (r1, . . . , rs) and compute m such that bm≧max {r1, . . . , rs}.


Box 2502: Determine an upper bound be of the number of samples to generate.


Box 2503: Invert the matrix A described in Equation (5.7), resulting in A−1.


Box 2504: For the (q+1)-th sample of the voxel (p1, . . . , ps), compute b as in Equation (5.8).


Box 2506: Compute v=A−1·b to determine the index digits of the sample as outlined in Equation (5.7).


A.2.1 Exemplary Technique—Special Case: Sobol′-Sequence for s=2



FIG. 26 shows a flowchart of an exemplary technique 2600 for the special case of a Sobol′-sequence for s=2. Technique 2600 includes the following:


Box 2601: Choose 2-dimensional voxel grid resolution r1, r2.


Box 2602: Compute 2m such that 2m≧max {r1, r2}.


Box 2603: For each frame, i.e., a set of samples that forms a (0, 2m, 2)-net in base 2, with one sample in each voxel cell.


Box 2604: Precompute tables, each of size 2m, by setting the frame parameter. FIG. 27 shows an exemplary Python code listing 2700 of an implementation of this step.


Box 2605: For any voxel cell (ex, ey) with 0≦ex<r1 and 0≦ey<r2, compute the respective sample index. FIG. 28 shows an exemplary Python code listing 2800 of an implementation of this step.


B. Exemplary Implementation for the Halton-Sequence


FIGS. 29A-F show a Python code listing 2900a-f for an exemplary implementation of the Halton-sequence technique discussed above.


C. Exemplary Implementation for the Sobol′-Sequence for s=2



FIGS. 30A-B show a Python code listing 3000 for an exemplary implementation 3000a-b of the above-described technique for the special case of a Sobol′-sequence for s=2.


While the foregoing description includes details which will enable those skilled in the art to practice the invention, it should be recognized that the description is illustrative in nature and that many modifications and variations thereof will be apparent to those skilled in the art having the benefit of these teachings. It is accordingly intended that the invention herein be defined solely by the claims appended hereto and that the claims be interpreted as broadly as permitted by the prior art.

Claims
  • 1. A method useful in a computer graphics system that utilizes low-discrepancy sequences, the method comprising: receiving a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, andbased on the low-discrepancy sequence with inherent stratification properties, enumerating an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.
  • 2. The method of claim 1 wherein the low-discrepancy sequence is a Halton-sequence, and further comprising enumerating samples from the Halton-sequence, using an arbitrarily dimensional voxel grid.
  • 3. The method of claim 2 wherein the Halton-sequence may or may not be scrambled.
  • 4. The method of claim 1 further comprising enumerating samples from a (t, s)-sequence, including the Sobol′-sequence and the Faure-sequence, using an arbitrarily dimensional voxel grid.
  • 5. The method of claim 1 further comprising utilizing an arbitrary number of additional components from the corresponding low-discrepancy sequence to cover higher dimensions in the computation.
  • 6. A computer graphics system utilizing low-discrepancy sequences, the computer graphics system comprising: means operable to receive a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, andmeans, in communication with the means for receiving the low-discrepancy sequence, operable to enumerate an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.
  • 7. The system of claim 6 wherein the low-discrepancy sequence is a Halton-sequence, and further comprising means for enumerating samples from the Halton-sequence, using an arbitrarily dimensional voxel grid.
  • 8. The system of claim 7 wherein the Halton-sequence may or may not be scrambled.
  • 9. The system of claim 6 further comprising means for enumerating samples from a (t, s)-sequence, including the Sobol′-sequence and the Faure-sequence, using an arbitrarily dimensional voxel grid.
  • 10. The system of claim 6, further comprising means for utilizing an arbitrary number of additional components from the corresponding low-discrepancy sequence to cover higher dimensions in the computation.
  • 11. A computer program product, comprising computer readable computer program code encoded on a physical medium readable by a computer, for use in a computer graphics system, the computer program code product comprising: first computer program code means for receiving, as an input, a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, andsecond computer program code means for enumerating, based on the received low-discrepancy sequence with inherent stratification properties, an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.
  • 12. The computer program product of claim 11 wherein the low-discrepancy sequence is a Halton-sequence, and further comprising computer program code means for enumerating samples from the Halton-sequence, using an arbitrarily dimensional voxel grid.
  • 13. The computer program product of claim 12 wherein the Halton-sequence may or may not be scrambled.
  • 14. The computer program product of claim 11 further comprising computer program code means for enumerating samples from a (t, s)-sequence, including the Sobol′-sequence and the Faure-sequence, using an arbitrarily dimensional voxel grid.
  • 15. The computer program product of claim 11, further comprising computer program code means for utilizing an arbitrary number of additional components from the corresponding low-discrepancy sequence to cover higher dimensions in the computation.
  • 16. A computer-implemented method of generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the method comprising: A. generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points; and B. evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output; and C. given a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization, enumerating an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel.
  • 17. A subsystem within a computer graphics system, for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the subsystem comprising: A. means for generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points; B. means for evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output; and C. means operable to enumerate an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel, given a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization.
  • 18. A computer program code product for use in a computer graphics system, for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the computer program code product comprising computer-readable computer program code stored on a physical medium, the computer program code product further comprising: A. computer code means for generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points; B. computer code means for evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output; and C. computer code means operable to enumerate an arbitrary number of samples of the sequence that lie inside an arbitrarily chosen voxel, given a low-discrepancy sequence with inherent stratification properties that imply a regular voxelization.
  • 19. The method of claim 4 wherein the (t, s)-sequence is a scrambled (t, s)-sequence.
  • 20. The system of claim 9 wherein the (t, s)-sequence is a scrambled (t, s)-sequence.
  • 21. The computer program product of claim 14 wherein the (t, s)-sequence is a scrambled (t, s)-sequence.
CROSS-REFERENCE TO RELATED APPLICATIONS AND INCORPORATION BY REFERENCE

The present application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 61/084,669, which was filed on Jul. 30, 2008. The present application is a continuation in part of commonly owned U.S. patent application Ser. No. 11/465,717 (Attorney Dkt. MENT-104-US), which claims the priority benefit of Provisional App. 60/709,173 (MNTL-104-PR) and which is a continuation in part of U.S. patent application Ser. No. 10/299,958 (which issued as U.S. Pat. No. 7,167,175) (MENT-072). In turn, U.S. patent application Ser. No. 10/299,958 is a continuation in part of U.S. patent application Ser. No. 09/884,861 (MENT-061), which claims priority from U.S. Provisional Patent Apps. 60/265,934 and 60/212,286. Each of the foregoing is incorporated herein by reference as if fully set forth herein in its entirety. Also incorporated herein by reference is commonly owned U.S. patent application Ser. No. 08/880,418 (MENT-002), now U.S. Pat. No. 6,529,193, hereinafter referred to as “Grabenstein.” Also incorporated herein by reference are the references listed in the section below captioned “References.”

Provisional Applications (4)
Number        Date        Country
61/084,669    Jul 2008    US
60/709,173    Aug 2005    US
60/265,934    Feb 2001    US
60/212,286    Jun 2000    US

Continuation in Parts (3)
Parent        Date        Country   Child
11/465,717    Aug 2006    US        12/241,928
10/299,958    Nov 2002    US        11/465,717
09/884,861    Jun 2001    US        10/299,958