Deterministic wavelet thresholding for general-error metrics

Information

  • Patent Application
  • Publication Number
    20060280378
  • Date Filed
    June 13, 2005
  • Date Published
    December 14, 2006
Abstract
Novel, computationally efficient schemes for deterministic wavelet thresholding with the objective of optimizing maximum-error metrics are provided. An optimal low polynomial-time algorithm for one-dimensional wavelet thresholding based on a new dynamic-programming (DP) formulation is provided that can be employed to minimize the maximum relative or absolute error in the data reconstruction. Directly extending a one-dimensional DP algorithm to multi-dimensional wavelets results in a super-exponential increase in time complexity with the data dimensionality. Thus, novel, polynomial-time approximation schemes (with tunable approximation guarantees for the target maximum-error metric) for deterministic wavelet thresholding in multiple dimensions are also provided.
Description
FIELD OF THE INVENTION

The present invention relates generally to the field of wavelet synopses and, in particular, relates to computationally-efficient schemes for deterministic maximum-error wavelet thresholding in one and multiple dimensions.


BACKGROUND OF THE INVENTION

Several studies have demonstrated the effectiveness of the wavelet decomposition as a tool for reducing large amounts of data down to compact wavelet synopses that can be used to obtain fast, accurate approximate answers to user queries. While conventional wavelet synopses are based on greedily minimizing the overall root-mean-squared (i.e., L2-norm) error in the data approximation, recent work has demonstrated that such synopses can suffer from important problems, including severe bias and wide variance in the quality of the data reconstruction, and lack of non-trivial guarantees for individual approximate answers. As a result, probabilistic thresholding schemes have been recently proposed as a means of building wavelet synopses that try to probabilistically control other approximation-error metrics, such as the maximum relative error in data-value reconstruction, which is arguably the most important for approximate query answers and meaningful error guarantees.


One of the main open problems posed by this earlier work is whether it is possible to design efficient deterministic wavelet-thresholding algorithms for minimizing general, non-L2 error metrics that are relevant to approximate query processing systems, such as maximum relative or maximum absolute error. Such algorithms can guarantee better wavelet synopses and avoid the pitfalls of probabilistic techniques (e.g., bad coin-flip sequences) leading to poor solutions.


SUMMARY

Various deficiencies of the prior art are addressed by embodiments of the present invention including novel, computationally efficient schemes for deterministic wavelet thresholding with the objective of optimizing general approximation-error metrics. Embodiments include an optimal low polynomial-time algorithm for one-dimensional wavelet thresholding based on a dynamic-programming (DP) formulation that can be employed to minimize the maximum relative or absolute error in the data reconstruction. Directly extending the one-dimensional DP algorithm to multi-dimensional wavelets results in a super-exponential increase in time complexity with the data dimensionality. Thus, embodiments also include novel, polynomial-time approximation schemes (with tunable approximation guarantees for the target maximum-error metric) for deterministic wavelet thresholding in multiple dimensions. These embodiments of optimal and approximate thresholding algorithms for maximum error are extended to handle a broad, natural class of distributive error metrics, which includes several error measures, such as mean weighted relative error and weighted Lp-norm error. Experimental results on real-world and synthetic data sets demonstrate the effectiveness of the embodiments for these extensions.


One embodiment is a method for deterministic wavelet thresholding for optimizing maximum-error metrics. A deterministic wavelet synopsis is built that minimizes maximum error in a data-value approximation. A minimum maximum error for all data values in a subtree for the deterministic wavelet synopsis is provided.


Another embodiment is another method for deterministic wavelet thresholding for optimizing maximum-error metrics. A minimum error value for an error subtree is conditioned not only on a root node of the error subtree and the amount of synopsis storage allotted, but also on an error that enters the subtree through coefficient selections made on a path from the root node to a particular node, excluding that particular node itself. All possible coefficient selections are tabulated in polynomial time.


Yet another embodiment is another method for deterministic wavelet thresholding for optimizing maximum-error metrics. A range of all possible error contributions for paths up to a root node of an error subtree is approximately covered using a plurality of approximate error values. The approximate error values are tabulated by capturing approximate error in a plurality of error subtrees. Each error subtree is rooted at a particular node. The rounded additive error contribution due to ancestors of the particular node being discarded from a synopsis is tabulated.


Still another embodiment is another method for deterministic wavelet thresholding for optimizing maximum error metrics. Each coefficient is replaced with a scaled-down coefficient value in an error tree for multi-dimensional wavelet thresholding. Dynamic programming is performed over the error tree. A table is built for each node in the error tree. The table holds a minimum maximum absolute error in a subtree for that node and an error contribution due to ancestors of that node. All coefficients in a set of coefficients in a synopsis that are greater than a predefined constant are retained. An error approximation is calculated and provided.




BRIEF DESCRIPTION OF THE DRAWINGS

The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:



FIG. 1A depicts the error tree for a simple example data vector;



FIG. 1B depicts the support regions and signs of the sixteen nonstandard, two-dimensional Haar coefficients in the corresponding locations of a 4×4 wavelet-coefficient array WA;



FIG. 2 depicts the error tree structure for the two-dimensional 4×4 Haar coefficient array in FIG. 1B;



FIG. 3 is a block diagram showing the input and output for an exemplary method of optimal deterministic thresholding for maximum error in one dimension (MinMaxErr);



FIGS. 4A, 4B, and 4C are flowcharts describing the exemplary method MinMaxErr of FIG. 3;



FIG. 5 is a block diagram showing input and output for an exemplary embodiment of an ε-additive error approximation scheme for absolute/relative error minimization for deterministic multi-dimensional wavelet thresholding (AddMultIDErr);



FIGS. 6A and 6B are flowcharts describing the exemplary embodiment of the scheme AddMultIDErr of FIG. 5;



FIG. 7 is a block diagram showing input and output for an exemplary embodiment of a (1+ε) approximation scheme for absolute error minimization for deterministic multi-dimensional wavelet thresholding (ApproxMultIDErr);



FIGS. 8A, 8B, and 8C are flowcharts describing the exemplary embodiment of the scheme ApproxMultIDErr of FIG. 7; and



FIG. 9 is a high level block diagram showing a computer.




To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.


DETAILED DESCRIPTION OF THE INVENTION

The invention will be primarily described within the general context of an embodiment of deterministic wavelet thresholding for maximum-error metrics. However, those skilled in the art and informed by the teachings herein will realize that the invention is applicable to database management systems, query processing, discrete mathematics, combinatorics, combinatorial algorithms, signal processing, image processing, performance, data synopses, wavelets, approximate query processing, and many other different kinds of data processing.


Approximate query processing over precomputed data synopses has emerged as a cost-effective approach for dealing with the huge data volumes, the high query complexities, and the increasingly stringent response-time requirements that characterize today's decision support systems (DSS) applications. Typically, DSS users pose very complex queries to the underlying database management system (DBMS) that require complex operations over gigabytes or terabytes of disk-resident data and, thus, take a very long time to execute to completion and produce exact answers. Due to the exploratory nature of many DSS applications, there are a number of scenarios in which an exact answer may not be required, and a user may in fact prefer a fast, approximate answer. For example, during a drill-down query sequence in ad-hoc data mining, initial queries in the sequence frequently have the sole purpose of determining the truly interesting queries and regions of the database. Providing (reasonably accurate) approximate answers to these initial queries gives users the ability to focus their explorations quickly and effectively, without consuming inordinate amounts of valuable system resources. An approximate answer can also provide useful feedback on how well-posed a query is, allowing DSS users to make an informed decision on whether they would like to invest more time and resources to execute their query to completion. Moreover, approximate answers obtained from appropriate synopses of the data may be the only available option when the base data is remote and unavailable. Finally, for DSS queries requesting a numerical answer (e.g., total revenues or annual percentage), it is often the case that the full precision of the exact answer is not needed and the first few digits of precision will suffice (e.g., the leading few digits of a total in the millions or the nearest percentile of a percentage).


Wavelets provide a mathematical tool for the hierarchical decomposition of functions, with a long history of successful applications in signal and image processing. Recent studies have also demonstrated the applicability of wavelets to selectivity estimation and to approximate query processing over massive relational tables and data streams. Briefly, the idea is to apply wavelet decomposition to the input relation (attribute column(s) or online analytical processing (OLAP) cube) to obtain a compact data synopsis that comprises a select small collection of wavelet coefficients. Some experimental results have demonstrated that fast and accurate approximate query processing engines can be designed to operate solely over such compact wavelet synopses.


A major shortcoming of conventional wavelet-based techniques for approximate query processing (including all the above-cited studies) is the fact that the quality of approximate answers can vary widely and no meaningful error guarantees can be provided to the users of the approximate query engine. Coefficients in conventional wavelet synopses are typically chosen to optimize the overall root-mean-squared (i.e., L2-norm average) error in the data approximation, which, as demonstrated in a recent study, can result in wide variance as well as severe bias in the quality of the approximation over the underlying domain of data values. One proposed solution, termed probabilistic wavelet synopses, relies on probabilistic-thresholding schemes (based on randomized rounding) for synopsis construction that try to probabilistically control other approximation-error metrics, such as the maximum relative error in the data reconstruction. Such maximum relative-error metrics are arguably the most important for effective approximate query processing and can provide meaningful error guarantees for individual approximate answers. In order to probabilistically control maximum relative error, some algorithms previously proposed explicitly try to minimize appropriate probabilistic metrics (such as normalized standard error or normalized bias) for the randomized synopsis construction process. Similar schemes are also given for controlling maximum absolute error.


A potential problem with some probabilistic-thresholding techniques of the past is that, exactly due to their probabilistic nature, there is always a possibility of a bad sequence of coin flips resulting in a poor synopsis; furthermore, they are based on a quantization of the possible synopsis-space allotments, whose impact on the quality of the final synopsis is not entirely clear. A deterministic thresholding algorithm that explicitly minimizes the relevant maximum-error metric (e.g., maximum relative error) in the synopsis is always guaranteed to give better results. Unfortunately, the thresholding algorithms of these probabilistic schemes depend critically on the probabilistic nature of their solution, and are inapplicable in a deterministic setting. An additional shortcoming of the probabilistic thresholding schemes is that they explicitly target only maximum-error metrics; it is not at all clear if or how they can be extended to other useful error measures for approximate query answers, such as, for example, mean relative error. In fact, one of the main open problems is whether it is possible to design efficient deterministic thresholding for minimizing non-L2 error metrics that are relevant for approximate query answers, such as the maximum relative or absolute error in the data approximation.


Embodiments of the present invention include novel, computationally efficient schemes for deterministic wavelet thresholding with an objective of optimizing a general class of error metrics (e.g., maximum or mean relative error). An optimal low polynomial-time algorithm for one-dimensional wavelet thresholding is provided. This algorithm is based on a novel dynamic-programming (DP) formulation and can be employed to minimize either maximum relative error or maximum absolute error in the data reconstruction—its running time and working-space requirements are only O(N^2) and O(N · min{B, log N}), respectively, where N denotes the size of the data domain and B is the desired size of the synopsis (i.e., number of retained coefficients). Directly extending the optimal DP algorithm to multi-dimensional wavelets results in a super-exponential increase in time complexity with the data dimensionality, rendering such a solution unusable even for the relatively small dimensionalities where wavelets are typically used (e.g., 2-5 dimensions). Thus, two embodiments of efficient polynomial-time approximation schemes (with tunable ε-approximation guarantees for the target maximum-error metric) for deterministic wavelet thresholding in multiple dimensions are provided. Both embodiments of the approximation schemes are based on approximate dynamic programs that tabulate a much smaller number of sub-problems than the optimal DP solution, while guaranteeing a small deviation from the optimal objective value. More specifically, one embodiment of the approximation algorithm gives ε-additive-error guarantees for maximum relative or absolute error, whereas the other embodiment is a (1+ε)-approximation scheme for maximum absolute error—the running time for both embodiments of the approximation schemes is roughly proportional to
O((1/ε) · N log^2 N · B log B).

Various embodiments of the present invention include efficient optimal and near-optimal algorithms for building wavelet synopses optimized for maximum-error metrics in one or multiple dimensions. These can be extended to handle a broad, natural class of distributive error metrics. This class includes several useful error measures for approximate query answers, such as mean weighted relative error and weighted Lp-norm error.


Wavelet Decomposition and Wavelet Data Synopses


Wavelets are a useful mathematical tool for hierarchically decomposing functions in ways that are both efficient and theoretically sound. Broadly speaking, the wavelet decomposition of a function consists of a coarse overall approximation together with detail coefficients that influence the function at various scales. The wavelet decomposition has excellent energy compaction and decorrelation properties, which can be used to effectively generate compact representations that exploit the structure of data. Furthermore, wavelet transforms can generally be computed in linear time.


One-Dimensional Haar Wavelets


Given the one-dimensional data vector A containing the N = 8 data values A = [2,2,0,2,3,5,4,4], the Haar wavelet transform of A is computed as follows. First, average the values together pairwise to get a new lower-resolution representation of the data with the following average values [2,1,4,4]. In other words, the average of the first two values (i.e., 2 and 2) is 2, that of the next two values (i.e., 0 and 2) is 1, and so on. Some information is lost in this averaging process. To be able to restore the original values of the data array, some detail coefficients are stored to capture the missing information. In Haar wavelets, these detail coefficients are simply the differences of the (second of the) averaged values from the computed pairwise averages. Thus, in this simple example, for the first pair of averaged values, the detail coefficient is 0 since 2 − 2 = 0; for the second pair, −1 is stored since 1 − 2 = −1. Note that no information has been lost in this process—it is fairly simple to reconstruct the eight values of the original data array from the lower-resolution array containing the four averages and the four detail coefficients. Recursively applying the above pairwise averaging and differencing process on the lower-resolution array containing the averages, we get the following full decomposition:

Resolution   Averages                    Detail Coefficients
3            [2, 2, 0, 2, 3, 5, 4, 4]    —
2            [2, 1, 4, 4]                [0, −1, −1, 0]
1            [3/2, 4]                    [1/2, 0]
0            [11/4]                      [−5/4]


The wavelet transform (also known as the wavelet decomposition) of A is the single coefficient representing the overall average of the data values, followed by the detail coefficients in the order of increasing resolution. Thus, the one-dimensional Haar wavelet transform of A is given by WA = [11/4, −5/4, 1/2, 0, 0, −1, −1, 0]. Each entry in WA is called a wavelet coefficient. The main advantage of using WA instead of the original data vector A is that, for vectors containing similar values, most of the detail coefficients tend to have very small values. Thus, eliminating such small coefficients from the wavelet transform (i.e., treating them as zeros) introduces only small errors when reconstructing the original data, resulting in a very effective form of lossy data compression.
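The pairwise averaging-and-differencing recursion above is compact enough to state directly in code. The following Python sketch is purely illustrative (the function name haar_decompose and the use of exact fractions are choices made here, not part of the described embodiments); it reproduces WA for the example vector A:

from fractions import Fraction

def haar_decompose(a):
    # Unnormalized one-dimensional Haar wavelet transform; len(a) must be
    # a power of two. Returns [overall average] followed by the detail
    # coefficients in order of increasing resolution.
    data = [Fraction(x) for x in a]
    detail_levels = []                        # finest-level details first
    while len(data) > 1:
        averages = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
        # detail = pairwise average minus the second value = (first - second)/2
        details = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
        detail_levels.append(details)
        data = averages
    wa = data                                 # the single overall average
    for details in reversed(detail_levels):   # coarsest details first
        wa = wa + details
    return wa

print([str(c) for c in haar_decompose([2, 2, 0, 2, 3, 5, 4, 4])])
# ['11/4', '-5/4', '1/2', '0', '0', '-1', '-1', '0'] -- matches WA above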


Note that, intuitively, wavelet coefficients carry different weights with respect to their importance in rebuilding the original data values. For example, the overall average is more important than any detail coefficient, since it affects the reconstruction of all entries in the data array. In order to equalize the importance of all wavelet coefficients, the final entries of WA are normalized appropriately. A common normalization scheme is to divide each wavelet coefficient by √(2^l), where l denotes the level of resolution at which the coefficient appears (with l = 0 corresponding to the coarsest resolution level). Thus, the normalized coefficient is ci* = ci/√(2^level(ci)).
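Continuing the illustrative sketch above (again an assumption-laden illustration, not the claimed method): the level of coefficient ci for i ≥ 1 is the bit length of i minus one, so normalization is a short loop.

import math

def normalize(wa):
    # Divide each coefficient by sqrt(2^level) to equalize importance;
    # c0 and c1 sit at level 0, c2-c3 at level 1, c4-c7 at level 2, etc.
    out = list(wa)
    for i in range(1, len(wa)):
        level = i.bit_length() - 1
        out[i] = wa[i] / math.sqrt(2 ** level)
    return out

print(normalize([2.75, -1.25, 0.5, 0, 0, -1, -1, 0]))
# [2.75, -1.25, 0.3535..., 0.0, 0.0, -0.5, -0.5, 0.0]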


Basic Haar Wavelet Properties and Notational Conventions


A helpful tool for exploring and understanding the key properties of the Haar wavelet decomposition is the error tree structure. The error tree is a hierarchical structure built based on the wavelet transform process (even though it is primarily used as a conceptual tool, an error tree can be easily constructed in linear O(N) time).



FIG. 1A depicts the error tree for the simple example data vector A. Each internal node ci (i = 0, . . . , 7) is associated with a wavelet coefficient value, and each leaf di (i = 0, . . . , 7) is associated with a value in the original data array; in both cases, the index i denotes the position in the (data or wavelet transform) array. For example, c0 corresponds to the overall average of A. Note that the values associated with the error-tree nodes cj are the unnormalized coefficient values; the resolution levels l for the coefficients (corresponding to levels in the tree) are also depicted in FIG. 1A. The terms node, coefficient, and node/coefficient value are used interchangeably in what follows, even though they have distinct meanings. For ease of reference, Table 1 below summarizes the notation for the main symbols, with brief descriptions of their semantics. Detailed definitions of all these parameters are provided at appropriate locations in this description. For simplicity, the notation assumes one-dimensional wavelets—extensions to multi-dimensional wavelets are straightforward. Additional notation is introduced when necessary.

TABLE 1. Notation (index i ∈ {0, . . . , N − 1})

Symbol                 Description
N                      Number of data-array cells
D                      Data-array dimensionality
B                      Space budget for synopsis
A, WA                  Input data and wavelet transform arrays
di                     Data value for ith data-array cell
d̂i                     Reconstructed data value for ith array cell
ci                     Haar coefficient at index/coordinate i
Ti                     Error subtree rooted at node ci
coeff(Ti), data(Ti)    Coefficient/data values in Ti subtree
path(u)                All non-zero proper ancestors of node u in the error tree
s                      Sanity bound for relative-error metric
relErri, absErri       Relative/absolute error for data value di


Given a node u in an error tree T, let path(u) denote the set of all proper ancestors of u in T (i.e., the nodes on the path from u to the root of T, including the root but not u) with non-zero coefficients. A key property of the Haar wavelet decomposition is that the reconstruction of any data value di depends only on the values of coefficients on path(di); more specifically,
di = Σ_{cj ∈ path(di)} signij · cj    (1)

where signij = +1 if di is in the left child subtree of cj or j = 0, and signij = −1 otherwise. Thus, reconstructing any data value involves summing at most log N + 1 coefficients. For example, in FIG. 1A,
d4 = c0 − c1 + c6 = 11/4 − (−5/4) + (−1) = 3.
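Equation (1) and the d4 example can be verified mechanically. The sketch below is illustrative (the indexing convention is an assumption of this example: leaf di is treated as virtual error-tree node N + i, so a node's parent is its index halved):

def reconstruct(wa, i):
    # Rebuild data value d_i per Equation (1): walk from the virtual leaf
    # N+i up to the root, adding each ancestor coefficient with sign +1
    # when arriving from a left child and -1 when arriving from a right child.
    n = len(wa)
    total = wa[0]                        # c0 always contributes with +
    k = n + i
    while k > 1:
        sign = 1 if k % 2 == 0 else -1
        k //= 2                          # move to the parent coefficient
        total += sign * wa[k]
    return total

wa = [11/4, -5/4, 1/2, 0, 0, -1, -1, 0]
print(reconstruct(wa, 4))                # 11/4 - (-5/4) + (-1) = 3.0

Only the O(log N) ancestors of d4 (here c0, c1, c3 = 0, and c6) are touched, as Equation (1) promises.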

The support region for a coefficient ci is defined as the set of (contiguous) data values that ci is used to reconstruct; the support region for a coefficient ci is uniquely identified by its coordinate i.


Multi-Dimensional Haar Wavelets


The Haar wavelet decomposition can be extended to multi-dimensional data arrays using two distinct known methods, namely the standard and nonstandard Haar decomposition. Each of these transforms results from a natural generalization of the one-dimensional decomposition process described above, and both have been used in a wide variety of applications, including approximate query answering over high-dimensional DSS data sets.


As in the one-dimensional case, the Haar decomposition of a D-dimensional data array A results in a D-dimensional wavelet coefficient array WA with the same dimension ranges and number of entries. Consider a D-dimensional wavelet coefficient W in the (standard or nonstandard) wavelet-coefficient array WA. W contributes to the reconstruction of a D-dimensional rectangular region of cells in the original data array A (i.e., W's support region). Further, the sign of W's contribution (+W or −W) can vary along the quadrants of W's support region in A.


As an example, FIG. 1B depicts the support regions and signs of the sixteen nonstandard, two-dimensional Haar coefficients in the corresponding locations of a 4×4 wavelet-coefficient array WA. The blank areas for each coefficient correspond to regions of A whose reconstruction is independent of the coefficient, i.e., the coefficient's contribution is 0. Thus, WA[0,0] is the overall average that contributes positively (i.e., +WA[0,0]) to the reconstruction of all values in A, whereas WA[3,3] is a detail coefficient that contributes (with the signs shown in FIG. 1B) only to values in A's upper right quadrant. Each data cell in A can be accurately reconstructed by adding up the contributions (with the appropriate signs) of those coefficients whose support regions include the cell. FIG. 1B also depicts the two levels of resolution (l=0,1) for the example two-dimensional Haar coefficients; as in the one-dimensional case, these levels define the appropriate constants for normalizing coefficient values.


Error-tree structures for multi-dimensional Haar wavelets can be constructed (once again in linear O(N) time) in a manner similar to those for the one-dimensional case, but their semantics and structure are somewhat more complex. A major difference is that, in a D-dimensional error tree, each node (except for the root, i.e., the overall average) actually corresponds to a set of 2^D − 1 wavelet coefficients that have the same support region but different quadrant signs and magnitudes for their contribution. Furthermore, each (non-root) node t in a D-dimensional error tree has 2^D children corresponding to the quadrants of the (common) support region of all coefficients in t. The number of children (resp., coefficients) for an internal error-tree node can actually be less than 2^D (resp., 2^D − 1) when the sizes of the data dimensions are not all equal. In these situations, the exponent for 2 is determined by the number of dimensions that are active at the current level of the decomposition (i.e., those dimensions that are still being recursively split by averaging/differencing). The sign of each coefficient's contribution to the leaf (data) values residing at each of its children in the tree is determined by the coefficient's quadrant sign information.


As an example, FIG. 2 depicts the error-tree structure for the two-dimensional 4×4 Haar coefficient array in FIG. 1B. Thus, the (single) child t of the root node contains the coefficients WA[0,1], WA[1,0], and WA[1,1], and has four children corresponding to the four 2×2 quadrants of the array; the child corresponding to the lower-left quadrant contains the coefficients WA[0,2], WA[2,0], and WA[2,2], and all coefficients in t contribute with a plus sign (+) to all values in this quadrant.


Based on the above generalization of the error-tree structure to multiple dimensions, the formula for data-value reconstruction (Equation (1)) can naturally be extended to multi-dimensional Haar wavelets. Once again, the reconstruction of di depends only on the coefficient sets for all error-tree nodes in path(di), where the sign of the contribution for each coefficient W in node t ∈ path(di) is determined by the quadrant sign information for W.


Wavelet-based Data Reduction: Coefficient Thresholding


Given a limited amount of storage for building a wavelet synopsis of the input data array A, a thresholding procedure retains a certain number B << N of the coefficients in WA as a highly-compressed approximate representation of the original data (i.e., the remaining coefficients are implicitly set to 0). The goal of coefficient thresholding is to determine the best subset of B coefficients to retain, so that some overall error measure in the approximation is minimized. The method of choice for the vast majority of earlier studies on wavelet-based data reduction and approximation is conventional coefficient thresholding, which greedily retains the B largest Haar-wavelet coefficients in absolute normalized value. It is a well-known fact that this thresholding method is in fact provably optimal with respect to minimizing the overall root-mean-squared error (i.e., L2-norm average error) in the data compression. More formally, letting d̂i denote the (approximate) reconstructed data value for cell i, retaining the B largest normalized coefficients implies that the resulting synopsis minimizes the quantity
√( (1/N) · Σi (di − d̂i)^2 )

(for the given amount of space B).
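For later contrast with the maximum-error schemes, the conventional L2-optimal rule amounts to a sort over normalized magnitudes. The sketch below is illustrative only (the function name and the dict-shaped sparse synopsis are assumptions of this example):

import math

def conventional_threshold(wa, B):
    # Retain the B coefficients that are largest in absolute *normalized*
    # value; provably optimal for overall root-mean-squared (L2) error.
    def normalized(i):
        level = i.bit_length() - 1 if i >= 1 else 0
        return abs(wa[i]) / math.sqrt(2 ** level)
    keep = sorted(range(len(wa)), key=normalized, reverse=True)[:B]
    return {i: wa[i] for i in keep}      # sparse synopsis: index -> value

print(conventional_threshold([2.75, -1.25, 0.5, 0, 0, -1, -1, 0], 3))
# {0: 2.75, 1: -1.25, 5: -1}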


Unfortunately, wavelet synopses optimized for overall L2 error may not always be the best choice for approximate query processing systems. As observed in a recent study, such conventional wavelet synopses suffer from several important problems, including the introduction of severe bias in the data reconstruction and wide variance in the quality of the data approximation, as well as the lack of non-trivial guarantees for individual approximate answers. To address these shortcomings, that work introduces novel, probabilistic thresholding schemes based on randomized rounding that probabilistically round coefficients either up to a larger rounding value (to be retained in the synopsis) or down to zero. Intuitively, these probabilistic schemes assign each non-zero coefficient fractional storage y ∈ (0,1] equal to its retention probability, and then flip independent, appropriately-biased coins to construct the synopsis. Existing strategies are based on trying to probabilistically control the maximum relative error (with an appropriate sanity bound s) in the approximation of individual data values based on the synopsis; that is, they attempt to minimize the quantity
maxi { |d̂i − di| / max{|di|, s} },

(where d̂i denotes the data value reconstructed based on the synopsis) with sufficiently-high probability. More specifically, the existing algorithms are based on dynamic-programming (DP) formulations that explicitly minimize appropriate probabilistic metrics, such as the maximum normalized standard error (MinRelVar) or the maximum normalized bias (MinRelBias), in the randomized synopsis construction. These formulations are then combined with a quantization of the potential fractional-storage allotments to give combinatorial thresholding algorithms. Similar probabilistic schemes are also given for probabilistically controlling maximum absolute error.


Building and Using Wavelet Synopses in a DBMS


The construction of wavelet synopses typically takes place during a statistics collection process, whose goal is to create concise statistical approximations for the value distributions of either individual attributes or combinations of attributes in the relations of a DBMS. Statistics collection is usually an off-line process, carried out during night-time or other system-idling periods. Once created, such statistical summaries are typically stored as part of the DBMS-catalog information. More specifically, a wavelet synopsis comprising B wavelet coefficients is stored as a collection of B pairs {<ij,cij>:j=1, . . . , B} where ij and cij denote the index and value (respectively) of the jth retained synopsis coefficient.


Wavelet synopsis information in the DBMS catalog can be exploited for several different purposes. The primary (and, more conventional) use of such summaries is as a tool for enabling effective (compile-time) estimates of the result sizes of relational operators for the purpose of cost-based query optimization. For instance, estimating the selectivity of a range predicate like l≦X≦h is equivalent to estimating the range summation
fX(l:h) = Σ_{i=l..h} fX[i]

where fX is the frequency-distribution array for attribute X. It is not difficult to see that, given a B-coefficient synopsis of the fX array, computing fX(l:h) only involves coefficients on the error-tree paths for the two range endpoints (i.e., coefficients in path(fX[l]) ∪ path(fX[h])) and, thus, can be estimated by summing only min{B, 2 log N + 1} synopsis coefficients. A B-coefficient wavelet synopsis can also be easily expanded (in O(B) time) into an O(B)-bucket histogram (i.e., piecewise-constant) approximation of the underlying data distribution with several possible uses (e.g., as a data visualization/approximation tool). Finally, wavelet-coefficient synopses can enable very fast and accurate approximate query answers during interactive data-exploration sessions (e.g., initial, drill-down query sequences in data-mining tasks). In fact, efficient approximate query processing algorithms for complex select-project-join relational queries can be designed to operate entirely in the wavelet-coefficient domain.
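The range-summation estimate can be carried out with a simple scan of the stored <ij, cij> pairs. The sketch below is illustrative (it scans every retained coefficient rather than walking just the two endpoint paths, which is adequate for small B; all names are assumptions of this example):

def range_sum_estimate(synopsis, n, l, h):
    # Estimate sum(d_l..d_h) from a sparse synopsis {index: coefficient}.
    # Coefficient c_j contributes c_j times (#cells of [l,h] in its + half
    # minus #cells in its - half); c_0 contributes to every cell.
    def overlap(a, b):                   # size of [l, h] intersected with [a, b)
        return max(0, min(h + 1, b) - max(l, a))
    total = 0.0
    for j, c in synopsis.items():
        if j == 0:
            total += c * (h - l + 1)
            continue
        level = j.bit_length() - 1
        size = n >> level                # width of c_j's support region
        start = (j - (1 << level)) * size
        mid = start + size // 2          # + half: [start, mid), - half: [mid, start+size)
        total += c * (overlap(start, mid) - overlap(mid, start + size))
    return total

wa = [11/4, -5/4, 1/2, 0, 0, -1, -1, 0]
full = {i: wa[i] for i in range(8)}      # keep everything: estimate is exact
print(range_sum_estimate(full, 8, 2, 5)) # 0 + 2 + 3 + 5 = 10.0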


Deterministic Wavelet Thresholding


Rather than trying to probabilistically control maximum relative error through the optimization of probabilistic measures (like normalized standard error or bias), a more direct solution is to design deterministic thresholding schemes that explicitly minimize maximum-error metrics. Such schemes can not only guarantee better synopses, but they also avoid the potential pitfalls of randomized techniques, as well as the space-quantization requirement of some known methods whose impact on the quality of the final solution is not entirely clear. Unfortunately, the known DP formulations and algorithms depend crucially on the ability to assign fractional storage (i.e., retention probability) values in (0,1] to individual coefficients, which renders their schemes inapplicable for deterministic wavelet thresholding (where the storage assigned to each coefficient is either 0 or 1). In fact, one of the main open problems is whether it is possible to design efficient algorithms for deterministic Haar-coefficient thresholding for minimizing non-L2 error metrics that are relevant for approximate query answers, such as maximum relative or absolute error in the data approximation. Embodiments of the present invention attack this problem for both one- and multi-dimensional Haar wavelets. Unlike the probabilistic schemes, embodiments of the present invention are applicable to a general class of approximation-error metrics, including, for example, mean weighted relative error.


One-Dimensional Wavelet Thresholding


One exemplary embodiment is a novel, simple, low polynomial-time scheme based on Dynamic-Programming (DP) for building deterministic Haar-wavelet synopses that minimize the maximum relative or absolute error in the data-value approximation. More formally, let
relErri = |d̂i − di| / max{|di|, s}

be an error metric that combines relative error with an appropriate sanity bound s and, similarly, let absErri = |d̂i − di| denote the absolute approximation error for the ith data value. The deterministic wavelet thresholding problem is formally defined as follows.


Deterministic Maximum Relative/Absolute Error Minimization


Given a synopsis space budget B, determine a subset of at most B Haar-wavelet coefficients that minimize the maximum relative (or, absolute) error in the data-value approximation; that is, for err ∈ {relErr, absErr}, minimize max_{i ∈ {0, . . . , N−1}} {erri}.


Note that, unlike the probabilistic normalized standard error or normalized bias metrics employed in known works, the relErr and absErr metrics targeted by embodiments of the present invention do not have a simple monotonic/additive structure over the Haar-coefficient error tree. The key problem, of course, is that, even though data values are simple linear combinations of coefficients, each coefficient contributes with a different sign on different parts of the underlying data domain. This also implies that the earlier DP formulations are no longer applicable, since the assumed principle of optimality for error subtrees is no longer valid. In other words, due to the absence of additive/monotonic structure for the objective, the optimal solution for the subtree rooted at node cj is not necessarily a combination of the optimal (partial) solutions for cj's child subtrees.


Exemplary embodiments of a DP recurrence and algorithm for a deterministic maximum-error optimization problem condition the optimal error value for an error subtree not only on the root node cj of the subtree and the amount B of synopsis storage allotted, but also on the error that enters that subtree through the coefficient selections made on the path from the root to node cj (excluding cj itself), i.e., coefficient selections on path(cj). Since the depth of the error tree is O(log N), all such possible selections can be tabulated while keeping the running time of the algorithm in the low-polynomial range.


More formally, let B denote the total space budget for the synopsis, and let Tj be the subtree of the error tree rooted at node cj, with coeff(Tj) (resp., data(Tj)) denoting the set of coefficient (resp., data) values in Tj. Finally, let M[j,b,S] denote the optimal (i.e., minimum) value of the maximum error (relative or absolute) among all data values in Tj, assuming a synopsis space budget of b coefficients for the Tj subtree and that a subset S ⊆ path(cj) (of size at most min{B − b, log N + 1}) of proper ancestors of cj has been selected for the synopsis; that is, assuming a relative-error metric (i.e., err = relErr),
M[j,b,S] = min_{Sj ⊆ coeff(Tj), |Sj| ≤ b} { max_{di ∈ data(Tj)} relErri },  where
relErri = |di − Σ_{ck ∈ path(di) ∩ (Sj ∪ S)} signik · ck| / max{|di|, s}.

The case for absolute error (i.e., err = absErr) is defined similarly. A DP recurrence for computing the M[j,b,S] entries is formulated as follows. M[0,B,∅] gives the desired optimal error value at the root node of the error tree. The corresponding error-optimal wavelet synopsis is then built by simply retracing the choices of the DP computations using known techniques.


The base case for the recurrence occurs for data (i.e., leaf) nodes in the Haar error tree; that is, for cj = dj−N with j ≥ N. (See FIG. 1A.) In this case, M[j,b,S] is not defined for b > 0, whereas for b = 0 and for each subset S ⊆ path(dj−N) (of size ≤ min{B, log N + 1}) define
M[j,0,S] = |dj−N − Σ_{ck ∈ S} signj−N,k · ck| / r,

where r = max{|dj−N|, s} for err = relErr, and r = 1 for err = absErr.


In the case of an internal error-tree node cj with j<N, the DP algorithm has two distinct choices when computing M[j,b,S], namely either drop coefficient cj or keep it in the final synopsis. If we choose to drop cj from the synopsis, then it is easy to see that the maximum error from cj's two child subtrees (i.e., c2j and c2j+1) will be propagated upward; thus, the minimum possible maximum error M[j,b,S] for Tj in this case is simply

min_{0 ≤ b′ ≤ b} max {M[2j, b′, S], M[2j+1, b − b′, S]}.  (2)

In the above recurrence, S contains proper ancestors of cj that are clearly proper ancestors of c2j and c2j+1 as well; thus, the right-hand side of the recurrence is well-defined as the DP computation proceeds bottom-up to reach cj. If, on the other hand, cj is kept in the synopsis (assuming, of course, b≧1), the least possible error M[j,b,S] for Tj is computed as
min_{0 ≤ b′ ≤ b−1} max {M[2j, b′, S ∪ {cj}], M[2j+1, b − b′ − 1, S ∪ {cj}]}.  (3)


Again, the right-hand side of the recurrence is well-defined. The final value for M[j,b,S] is defined as the minimum of the two possible choices for coefficient cj, i.e., equations (2) and (3) above. A pseudo-code description of an exemplary embodiment of an optimal DP algorithm for deterministic maximum-error thresholding in one dimension (termed MinMaxErr) is shown in Table 2 below.

TABLE 2. An exemplary embodiment of the MinMaxErr Algorithm: Optimal Deterministic Thresholding for Maximum Error in One Dimension.

procedure MinMaxErr(WA, B, root, S, err)
Input: Array WA = [c0, . . . , cN−1] of N Haar wavelet coefficients, space budget B (number of retained coefficients), error-subtree root-node index root, subset of retained ancestors of root node S ⊆ path(root), target maximum-error metric err.
Output: Value of M[root,B,S] according to the optimal dynamic program (M[root,B,S].value), decision made for the root node (M[root,B,S].retained), and space allotted to left child subtree (M[root,B,S].leftAllot). (The last two are used for retracing the optimal solution to build the synopsis.)
begin
1.  if (M[root,B,S].computed = true) then
2.    return M[root,B,S].value  // optimal value already in M[ ]
3.  if (N ≤ root < 2N) then  // leaf/data node
4.    if (B = 0) then
5.      M[root,B,S].value := |d_{root−N} − Σ_{ck ∈ S} sign_{root−N,k} · ck|
6.      if (err = relErr) then
7.        M[root,B,S].value := M[root,B,S].value / max{|d_{root−N}|, s}
8.      endif
9.  else
10.   M[root,B,S].value := ∞
11.   for b := 0 to B step 1 do  // first choice: drop root
12.     left := MinMaxErr(WA, b, 2*root, S, err)
13.     right := MinMaxErr(WA, B − b, 2*root + 1, S, err)
14.     if (max{left,right} < M[root,B,S].value) then
15.       M[root,B,S].value := max{left,right}
16.       M[root,B,S].retained := false
17.       M[root,B,S].leftAllot := b
18.     endif
19.   endfor
20.   for b := 0 to B − 1 step 1 do  // second choice: keep root
21.     left := MinMaxErr(WA, b, 2*root, S ∪ {root}, err)
22.     right := MinMaxErr(WA, B − b − 1, 2*root + 1, S ∪ {root}, err)
23.     if (max{left,right} < M[root,B,S].value) then
24.       M[root,B,S].value := max{left,right}
25.       M[root,B,S].retained := true
26.       M[root,B,S].leftAllot := b
27.     endif
28.   endfor
29. endif
30. M[root,B,S].computed := true
31. return M[root,B,S].value
end
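The pseudo-code of Table 2 translates almost line-for-line into an executable memoized recursion. The following Python sketch is illustrative only (function and variable names are ours; the overall-average coefficient c0 is handled by a small driver rather than inside the recursion, and no retracing bookkeeping is kept):

from functools import lru_cache

def min_max_err(wa, data, B, s=1.0, relative=True):
    # Optimal deterministic thresholding: the minimum achievable maximum
    # relative (or absolute) error over all B-coefficient synopses.
    n = len(wa)

    @lru_cache(maxsize=None)
    def M(j, b, S):
        # S is the frozenset of retained ancestors on the path to node j.
        if j >= n:                              # leaf: data value d_{j-n}
            if b > 0:
                return float('inf')             # space cannot be spent at a leaf
            est, k = (wa[0] if 0 in S else 0.0), j
            while k > 1:                        # sum the retained path coefficients
                sign = 1 if k % 2 == 0 else -1
                k //= 2
                if k in S:
                    est += sign * wa[k]
            err = abs(data[j - n] - est)
            return err / max(abs(data[j - n]), s) if relative else err
        best = float('inf')
        for bl in range(b + 1):                 # first choice: drop c_j
            best = min(best, max(M(2 * j, bl, S), M(2 * j + 1, b - bl, S)))
        for bl in range(b):                     # second choice: keep c_j
            Sj = S | frozenset({j})
            best = min(best, max(M(2 * j, bl, Sj), M(2 * j + 1, b - bl - 1, Sj)))
        return best

    best = M(1, B, frozenset())                         # drop c0
    if B >= 1:
        best = min(best, M(1, B - 1, frozenset({0})))   # keep c0
    return best

data = [2, 2, 0, 2, 3, 5, 4, 4]
wa = [11/4, -5/4, 1/2, 0, 0, -1, -1, 0]
print(min_max_err(wa, data, B=4))   # optimal maximum relative error with 4 coefficients

The linear scans over bl stand in for the O(log B) binary search discussed in the complexity analysis below; correctness is unaffected.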



FIG. 3 shows the input and output for the exemplary method of optimal deterministic thresholding for maximum error in one dimension (MinMaxErr) 300, for which pseudo code is shown in Table 2 above. The input includes a wavelet array (WA) 302, a number of retained coefficients (B) 304, subtree root index (j) 306, and a subset of retained coefficients (S) 308 on a path from root to j 306. The output includes an array M[j,B,S], which is an optimal (minimum) maximum relative error for all data values in the j-subtree for a synopsis size B, assuming coefficients in S are retained above j.



FIGS. 4A-4C are flowcharts describing the exemplary method (MinMaxErr) 300 of FIG. 3. In this exemplary method, procedure MinMaxErr starts at 400 and determines whether M[j,B,S] has already been computed at 402. If so, M[j,B,S] is returned at 404; otherwise, it is determined whether j is a leaf node at 406. If so, M[j,B,S] is computed using a formula if B is zero (and set to undefined otherwise) and returned at 408; otherwise, OPT is set to infinity and b to 0 at 410.


Looping over b from zero to B 412, 414, 416, left is set to the result of calling MinMaxErr 400 with one set of variables, while right is set to the result of calling MinMaxErr 400 with a different set of variables at 418. Thus, the procedure is called recursively within the loop. Next, it is determined whether the maximum of left and right is less than OPT at 420. If so, then OPT is set to the maximum of left and right at 422, and the loop continues until b is not less than B at 416.


Then, b is initialized to zero at 424 outside another loop over b from zero to B−1 426, 428, 430. Again, left is set to the result of calling MinMaxErr 400 with one set of variables, while right is set to the result of calling MinMaxErr 400 with a different set of variables at 432. Next, it is determined whether the maximum of left and right is less than OPT at 434. If so, then OPT is set to the maximum of left and right at 436, and the loop continues until b is not less than B at 430.


Time and Space Complexity


Given a node/coefficient cj at level l of the error tree, the exemplary MinMaxErr algorithm considers at most B+1 space allotments to the Tj subtree (where the "+1" accounts for the possibility of zero space) and at most 2^(l+1) subsets of ancestors of cj. Thus, the number of entries in the DP array M[ ] that need to be computed for cj is O(B · 2^(l+1)). Furthermore, the time needed to compute each such entry is O(log B). To see this, note that for any fixed node k and ancestor subset S, M[k, b′, S] is a decreasing function of the space allotment b′. Thus, the optimal distribution point b′ in equations (2) and (3) (Steps 11-28 in MinMaxErr) can be computed using an O(log B)-time binary-search procedure, looking for the space allotment where the error values for the two child subtrees are equal, or for the adjacent pair of cross-over allotments. (To simplify the description, this binary-search procedure is not shown in the pseudo-code description in Table 2.) Clearly, the total number of error-tree nodes at level l is O(2^l), making the overall time complexity of the exemplary DP algorithm
O( Σ_{l=0..log N} 2^l · 2^(l+1) · B log B ) = O( B log B · Σ_{l=0..log N} 2^(2l) ) = O(N^2 · B log B).


Given an internal node cj at level l of the error tree and a synopsis budget B, the number of coefficients that can be retained inside the Tj subtree is actually upper bounded by min{B, 2^(log N − l) − 1} (since Tj comprises at most 2^(log N − l) − 1 coefficient nodes). Thus, the number of entries for cj in the M[ ] array is at most O(2^(l+1) · min{B + 1, 2^(log N − l)}) = O(min{B · 2^(l+1), 2N}), with each entry requiring O(min{log B, log N − l}) computation time. Summing across all error-tree nodes (as above) using these tighter bounds gives an overall time complexity of only O(N^2) for the exemplary embodiment of the DP algorithm.


With respect to the space requirements of exemplary embodiments of the scheme, it is easy to see that the overall size of the DP array M[ ] is
O( Σ_{l=0..log N} 2^l · min{B · 2^(l+1), 2N} ) ≤ O( N · Σ_{l=0..log N} 2^l ) = O(N^2).

Note, however, that the exemplary MinMaxErr algorithm does not actually require the entire DP array to be memory-resident at all times. For instance, the results for all descendants of a node cj are no longer needed and can be swapped out of memory once the results for node cj have been computed. With this small optimization, it is easy to see that the exemplary bottom-up DP computation never requires more than one active line (i.e., entries corresponding to a single tree node) of the M[ ] array per error-tree level, where the size of such a line for a node at level l is O(min{B · 2^(l+1), 2N}). Thus, the size of the memory-resident working set for the exemplary DP algorithm drops to only
O( Σ_{l=0..log N} min{B · 2^(l+1), 2N} ) = O( N · min{B, log N} ).


In summary, for the one-dimensional case, the exemplary MinMaxErr algorithm is an optimal deterministic thresholding scheme for building one-dimensional wavelet synopses that minimize the maximum relative error (or maximum absolute error) in the data approximation. MinMaxErr runs in time O(N^2) and has a total-space (working-space) requirement of O(N^2) (resp., O(N · min{B, log N})).


Multi-Dimensional Wavelet Thresholding


The deterministic wavelet thresholding problem becomes significantly more complex for multi-dimensional wavelets, and directly extending the exemplary embodiment of the optimal one-dimensional DP formulation to the case of multiple dimensions fails to give a practical solution. As described above, in the D-dimensional error-tree structure, even though the tree depth remains O(log N), each node in the tree now contains up to 2^D − 1 wavelet coefficients with the same support region and different quadrant signs (FIG. 2). This implies that the total number of possible ancestor subsets S for a multi-dimensional coefficient at a level l = Θ(log N) is O(2^((2^D − 1) · log N)) = O(N^(2^D − 1)), rendering the exhaustive-enumeration DP scheme described above completely impractical, even for the relatively small data dimensionalities (i.e., D = 2-5) where wavelet-based data reduction is typically employed. It is well known that, due to the dimensionality curse, wavelets and other space-partitioning schemes become ineffective above 5-6 dimensions.


There are two exemplary embodiments of efficient polynomial-time approximation schemes for deterministic multi-dimensional wavelet thresholding for maximum-error metrics. Both approximation schemes are based on approximate dynamic programs that explore a much smaller number of options than the optimal DP formulation, while offering tunable ε-approximation guarantees for the final target maximum-error metric. More specifically, the first scheme gives ε-additive-error guarantees for maximum relative or absolute error, whereas the second scheme is a (1+ε)-approximation algorithm for maximum absolute error.


An ε-Additive-Error Approximation Scheme for Absolute/Relative Error Minimization


Intuitively, the optimal one-dimensional DP scheme is based on exhaustively enumerating, for each error subtree, all the possible error values entering the subtree through the choices made on the path to the subtree's root node. The first approximation scheme avoids this exhaustive enumeration and, instead, tries to approximately cover the range of all possible error contributions for paths up to the root node of an error subtree using a much smaller number of (approximate) error values. The exemplary embodiment of the approximation scheme is then based on a much sparser DP formulation, which only tabulates this smaller set of error values.


Specifically, let R denote the maximum absolute coefficient value in the error tree and let ε < 1 denote the desired approximation factor. The additive contribution to the absolute data-value reconstruction error from any possible path in the error tree is guaranteed to lie in the range ℛ = [−R · 2^D · log N, +R · 2^D · log N]. The approximation scheme covers the entire ℛ range using error-value breakpoints of the form ±(1+ε)^k, for a range of (contiguous) integer values for the exponent k. Note that the number of such breakpoints needed is essentially
O( log_{1+ε}(2R · 2^D · log N) ) = O( (D + log R + log log N)/ε ),

for small values of ε<1; in other words,
k ∈ K = {0, 1, . . . , O((D + log R + log log N)/ε)}.

Now, let round_ε(v) be a function that rounds any value v ∈ ℝ down (toward zero) to the closest value in the set E = {0} ∪ {±(1+ε)^k : k ∈ K}; that is, letting l = ⌊log_{1+ε}|v|⌋, we have round_ε(v) = (1+ε)^l if v ≥ 1, −(1+ε)^l if v ≤ −1, and 0 otherwise. The DP array Ma[ ] for the approximation scheme tabulates the values Ma[j,b,e], capturing the approximate maximum error (relative or absolute) in the Tj error subtree (rooted at node j), assuming a space budget of b coefficients allotted to Tj and an approximate/rounded additive error contribution of e ∈ E due to proper ancestors of node j being discarded from the synopsis.
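A direct, illustrative rendering of round_ε (the function name is ours):

import math

def round_eps(v, eps):
    # Round v toward zero to the nearest value in
    # E = {0} U {+/-(1+eps)^k : k = 0, 1, ...}; any |v| < 1 maps to 0.
    if abs(v) < 1.0:
        return 0.0
    k = math.floor(math.log(abs(v), 1 + eps))   # l = floor(log_{1+eps}|v|)
    return math.copysign((1 + eps) ** k, v)

print(round_eps(100.0, 0.25))   # 86.736... = 1.25**20, since 1.25**20 <= 100 < 1.25**21

Because rounding only shrinks a value's magnitude by at most a (1+ε) factor, each application adds at most ε|v| of additive error, which is what the analysis below accumulates along root-to-leaf paths.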


The base case for the computation of Ma[ ], i.e., the case of a leaf/data node j ≥ N in the error tree, is fairly straightforward; once again, Ma[j,b,e] is only defined for b = 0, and Ma[j,0,e] = |e|/r, where r is either max{|dj−N|, s} or 1, depending on whether a relative or absolute error metric is being targeted.


In the case of an internal error-tree node j, each node now corresponds to a set S(j) of at most 2^D − 1 (non-zero) coefficients, and has at most 2^D child subtrees (with indices, say, j1, . . . , jm). Assume that a subset s ⊆ S(j) of node j's coefficients is chosen to be maintained in the synopsis and, for each coefficient c ∈ S(j), let sign(c, ji) denote the sign of c's contribution to the ji child subtree; then, the least possible maximum-error entries Ma[j,b,e], e ∈ E, are estimated for Tj (assuming, of course, that b ≥ |s|) as
min_{0 ≤ b1 + . . . + bm ≤ b − |s|} max_{1 ≤ i ≤ m} { Ma[ji, bi, round_ε(e + Σ_{c ∈ S(j) − s} sign(c, ji) · c)] }.

In other words, for a given selected coefficient subset s, all possible allotments of the remaining b − |s| space to the children of node j are considered, with the rounded cumulative error that enters those children taking into account the contribution from the dropped coefficients in S(j) − s. To avoid the O(B^(2^D)) factor in run-time complexity implied by the search over all possible space allotments b1, . . . , bm to child subtrees in the above recurrence, the search is simply ordered among a node's children. The basic idea is to generalize the approximate DP-array entries to Ma[j, b, e], where j = (j1, . . . , jk) is a list of error-tree nodes and e = (e1, . . . , ek) is a list of incoming additive errors corresponding to the nodes in j. The above recurrence for Ma[(j), b, (e)] then becomes simply
Ma[(j), b, (e)] = min_{s ⊆ S(j)} { Ma[(j1, . . . , jm), b − |s|, (e1, . . . , em)] },  where
ek = round_ε(e + Σ_{c ∈ S(j) − s} sign(c, jk) · c) for k = 1, . . . , m,  and
Ma[(j1, . . . , jm), b′, (e1, . . . , em)] = min_{0 ≤ b″ ≤ b′} max { Ma[(j1), b″, (e1)], Ma[(j2, . . . , jm), b′ − b″, (e2, . . . , em)] }.

Intuitively, the first equation states that the approximate error value Ma[(j), b, (e)] at node j is computed as the minimum (over all possible choices for the subset s ⊆ S(j) of retained coefficients at j) of the corresponding approximate error values for the list of node j's children. The second equation then computes the approximate error values for this list (j1, . . . , jm) by exploring all possible allotments of a given space budget b′ between the first child (j1) and the list suffix of all remaining children (j2, . . . , jm) of node j.
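The benefit of the list-suffix formulation is that an m-way budget split never has to be enumerated directly. The standalone sketch below isolates the second recurrence (everything here is an assumption of the example; child DP values are given as precomputed tables indexed by budget):

from functools import lru_cache

def best_split(child_costs, b):
    # Min-max error of a list of child subtrees sharing total budget b,
    # split via the suffix recurrence: first child versus all the rest.
    # child_costs[i][bi] = DP error value of child i under budget bi.
    m = len(child_costs)

    @lru_cache(maxsize=None)
    def M(i, budget):                    # children i..m-1 share `budget`
        if i == m - 1:
            return child_costs[i][budget]
        return min(max(child_costs[i][bp], M(i + 1, budget - bp))
                   for bp in range(budget + 1))

    return M(0, b)

# Three children whose error drops as their budget grows:
costs = [[9, 4, 1], [7, 3, 0], [8, 5, 2]]
print(best_split(costs, 2))              # 7: one unit each to children 0 and 2

This replaces the O(B^m) joint enumeration with O(m) tabulated suffixes of O(B) entries each; since the tabulated values are monotone in the budget, the inner linear scan can be replaced by the O(log B) binary search mentioned in the text.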


To see how this generalization affects the space complexity of the exemplary DP formulation, note that, for a given node j in the error tree: (1) there are at most 2^D possible different values for the list suffix j of child error-tree nodes of j; and (2) for each such list suffix j and each possible incoming error value e ∈ E at node j, there are at most 2^(2^D − 1) possible different values for the corresponding list of child error values e (i.e., one for each possible subset s ⊆ S(j)


of retained coefficients at j). Thus, given that there are obviously O(B) choices for the b parameter, it is easy to see that the overall space requirement of the exemplary generalized DP array is O(|E| · 2^(2^D + D) · N · B). Again, D is a small constant, typically between 2 and 5. In terms of time complexity, note that the exemplary generalization also allows for an O(log B)-time search for the breakup of a node's allotment to its children (using binary search in the above recurrence). Thus, the overall time complexity of the exemplary DP is O(|E| · 2^(2^D + D) · N · B · log B).


In summary, the above-described approximation scheme for deterministic multi-dimensional wavelet thresholding discovers an approximate solution that is guaranteed to be within a worst-case additive error of εR (resp., εR/s) of the optimal (i.e., minimum) maximum absolute (resp., relative) error in time
O( ((D + log R + log log N)/ε) · 2^(2^D + 2D) · N · B · log^2 N · log B )

and with a total space requirement of
O( ((D + log R + log log N)/ε) · 2^(2^D + 2D) · N · B · log^2 N ).


Proof: The goal is to demonstrate that the exemplary approximation scheme produces a solution that is within a small additive error of the optimal solution. Only the case of absolute error is considered, as the argument for relative error is similar. Let M[j, b, e] = M[(j1, . . . , jm), b, (e1, . . . , em)] denote the optimal absolute error value when allowed a coefficient synopsis of size b in the list of error subtrees (Tj1, . . . , Tjm), assuming an incoming additive error of ek at subtree Tjk (due to its discarded ancestor coefficients), where k = 1, . . . , m. Thus, the goal is to upper-bound the absolute difference |Ma[(root), B, (0)] − M[(root), B, (0)]|.


Let the height of an error-tree node denote its distance from the error-tree leaf (i.e., data) nodes (i.e., the leaves are at height 0). Induction is used to demonstrate the following claim. Claim: if the tree nodes in list j = (j1, . . . , jm) are at height i, then, for any space allotment b and error-value list e, |Ma[j, b, e] − M[j, b, e]| ≤ (i + 1) · ε · 2^D · R · log N.


Let N(j) denote the number of nodes in the subtrees rooted at all nodes in j; that is, N(j) = |Tj1 ∪ . . . ∪ Tjm|. The proof of the above claim is based on an inductive argument on N(j). For the base case, note that if j comprises only a leaf node, then the claim is clearly true by the construction of the exemplary approximate breakpoint set E. Now, assume that the claim is true for all lists j′ with N(j′) < n, and let j be a node list such that N(j) = n; also, assume that the nodes in j are at height i in the error tree.


Consider the simpler case where j comprises a single node j, i.e., j=(j) and let (j1, . . . , jm) denote the list of children of node j. Recall that
Ma[(j), b, (e)] = min_{s ⊆ S(j)} { Ma[(j1, . . . , jm), b − |s|, (e1, . . . , em)] },  where
ek = round_ε(e + Σ_{c ∈ S(j) − s} sign(c, jk) · c) for k = 1, . . . , m.    (4)

Now, let
e′k = e + Σ_{c ∈ S(j) − s} sign(c, jk) · c  for k = 1, . . . , m,

and it is easy to see that the corresponding optimal error value at node j is defined similarly as
M[(j), b, (e)] = min_{s ⊆ S(j)} { M[(j1, . . . , jm), b − |s|, (e′1, . . . , e′m)] }.    (5)

For the sake of brevity, let jc = (j1, . . . , jm), e = (e1, . . . , em), and e′ = (e′1, . . . , e′m). By the inductive hypothesis, for any choice of the s subset,

|Ma[jc, b − |s|, e] − M[jc, b − |s|, e]| ≤ i · ε · 2^D · R · log N.  (6)

Now, suppose the optimal error value for jc is to be computed with the unrounded incoming additive errors, i.e., M[jc, b − |s|, e′]. Clearly, an upper bound for this error can be obtained by retaining those coefficients corresponding to the optimal value with the rounded incoming errors (i.e., M[jc, b − |s|, e]), and then adding the rounding error |ek − e′k| to all the leaf nodes in the subtree rooted at node jk (k = 1, . . . , m). In other words,

|M[jc, b − |s|, e] − M[jc, b − |s|, e′]| ≤ ε · 2^D · R · log N.  (7)

Now, a simple application of the triangle inequality over inequalities (6) and (7) gives

|Ma[jc, b − |s|, e] − M[jc, b − |s|, e′]| ≤ (i + 1) · ε · 2^D · R · log N,

which, combined with equations (4) and (5), guarantees that |Ma[(j), b, (e)] − M[(j), b, (e)]| ≤ (i + 1) · ε · 2^D · R · log N. This completes the proof of the claim for the case of a single-node list j = (j).


For the case of a multi-node list j = (j1, . . . , jl), where l > 1, recall that
Ma[j, b, e] = min_{0 ≤ b′ ≤ b} max { Ma[(j1), b′, (e1)], Ma[(j2, . . . , jl), b − b′, (e2, . . . , el)] },

with the exact same relation holding for the optimal error values (using M[ ] instead of Ma[ ] above). Thus, once again, applying the inductive hypothesis and the properties of the exemplary error-rounding procedure to the right-hand side of the above expression gives |Ma[j, b, e] − M[j, b, e]| ≤ (i + 1) · ε · 2^D · R · log N. Thus, the claim holds for this case as well.


Now, a simple application of the claim with j = (root) gives |Ma[(root), B, (0)] − M[(root), B, (0)]| ≤ (1 + log N) · ε · 2^D · R · log N = O(ε · 2^D · R · log^2 N). Thus, simply setting
ɛ=ɛO(2Dlog2N)

gives a worst-case additive deviation of εR from the optimal maximum absolute error value. The running-time and space complexity of the exemplary approximation scheme for the above value of ε′ follow immediately from the earlier discussion.


These bounds represent a truly pathological worst-case scenario for the scheme, in which every coefficient on a root-to-leaf path has the maximum absolute value R. In real-life applications, most of the energy of a data signal (i.e., array) is typically concentrated in a few large wavelet coefficients, which implies that most coefficient values in the error tree are (much) smaller than R. Thus, in practical scenarios, the approximation scheme is typically much closer to the optimal solution than the worst-case deviation bounds suggest.



FIG. 5 shows input and output for an exemplary embodiment of an ε-additive error approximation scheme for absolute/relative error minimization for deterministic multi-dimensional wavelet thresholding (AddMultIDErr) 500. Inputs include the wavelet array (W_A) 502, a number of retained coefficients (B) 504, a list of subtree-root indexes (j_1, . . . , j_m) 506, and a list of incoming (rounded) error values (e_1, . . . , e_m) 508. Output includes array M[(j_1, . . . , j_m), B, (e_1, . . . , e_m)], which is the approximate minimum maximum relative error for all data values in the subtrees rooted at (j_1, . . . , j_m) for synopsis space B, assuming corresponding incoming error values (e_1, . . . , e_m). The notation includes round_ε(v), which rounds the value v down to the closest power of (1+ε), i.e., to a value of the form ±(1+ε)^k.
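For concreteness, the round_ε primitive admits a direct implementation; the short Python sketch below is one possible (assumed) realization, with the function name and signature chosen purely for illustration:

```python
import math

def round_eps(v: float, eps: float) -> float:
    """Round v down in magnitude to the closest power of (1 + eps),
    i.e., to a value of the form +/-(1 + eps)**k for integer k,
    preserving the sign of v."""
    if v == 0:
        return 0.0
    # Largest integer k with (1 + eps)**k <= |v|.
    k = math.floor(math.log(abs(v), 1 + eps))
    return math.copysign((1 + eps) ** k, v)

print(round_eps(10.0, 0.5))   # 7.59375 = 1.5**5, since 1.5**5 <= 10 < 1.5**6
```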



FIGS. 6A and 6B are flowcharts describing the exemplary embodiment of the scheme AddMultIDErr of FIG. 5. In this example, procedure AddMultIDErr starts at 600. It is determined whether array M[(j_1, . . . , j_m), B, (e_1, . . . , e_m)] has already been computed at 602. If so, then the resulting array is returned at 604; otherwise, it is determined whether m = 1 and j_1 is a leaf node at 606. If so, then the array M[j_1, B, e_1] is computed using a formula if B equals zero (otherwise the array is set to undefined) and the array is returned at 608. It is then determined whether m = 1 at 610. If so, then control flows to 612; otherwise to 614. At 612, computations are performed and array M[(j_1), B, (e_1)] = OPT is returned. At 614, other computations are performed and array M[(j_1, . . . , j_m), B, (e_1, . . . , e_m)] = OPT is returned.
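To make the recursion at steps 606-614 concrete, here is a self-contained Python toy for the one-dimensional case (D = 1), implementing the exact (unrounded) form of recurrences (4)-(5). The data, coefficient values, and function names are all hypothetical, and the rounding step that distinguishes M_a from M (round_eps above) is deliberately omitted, so this sketch computes the exact optimum rather than the ε-additive approximation:

```python
import math
from functools import lru_cache

# Toy error tree over N = 4 data values [1, 2, 3, 4].  Node j (1 <= j < N)
# holds detail coefficient c[j] and has children 2j and 2j+1; node j >= N is
# the leaf for data value d[j - N].  The overall average c0 is treated as
# always retained, and the budget must be consumed exactly in this toy.
N = 4
c = {1: -1.0, 2: -0.5, 3: -0.5}

def sign(jc, child):
    # Haar: the left child subtree adds the coefficient, the right subtracts it.
    return +1 if child == 2 * jc else -1

@lru_cache(maxsize=None)
def M(j, b, e):
    """Min-max absolute error in subtree T_j with budget b coefficients and
    incoming additive error e from dropped ancestors."""
    if j >= N:                                  # leaf: reconstruction error |e|
        return abs(e) if b == 0 else math.inf
    best = math.inf
    for keep in (False, True):                  # keep or drop coefficient c[j]
        cost = 1 if keep else 0
        if b < cost:
            continue
        for b_left in range(b - cost + 1):      # split the remaining budget
            e_kids = [e if keep else e + sign(j, ch) * c[j]
                      for ch in (2 * j, 2 * j + 1)]
            best = min(best, max(M(2 * j, b_left, e_kids[0]),
                                 M(2 * j + 1, b - cost - b_left, e_kids[1])))
    return best

print(M(1, 1, 0.0))   # 0.5: keep c[1], drop c[2] and c[3]
```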


Approximation Scheme for Absolute Error Minimization


For the special case of minimizing absolute error for deterministic multi-dimensional wavelet thresholding, there is an exemplary embodiment of a polynomial-time (1+ε)-approximation scheme. Assume that all wavelet coefficients are integers, which can always be arranged by appropriately scaling the coefficient values. For example, for integer data values d_i (e.g., a multi-dimensional frequency-count array), scaling by a factor of O(2^{D log N}) = O(N^D) is always guaranteed to give integer Haar coefficients. Let R_z denote the maximum (scaled) coefficient value in the error tree; as previously, it is easy to see that the additive (integer) contribution to the absolute reconstruction error from any possible path in the Haar error tree is guaranteed to lie in the integer range ℛ = [−E_max·log N, +E_max·log N], where E_max denotes the maximum absolute per-node error contribution and, of course, E_max ≤ R_z·2^D. This observation directly leads to an optimal pseudo-polynomial time algorithm for the maximum absolute-error minimization problem. In fact, this pseudo-polynomial time scheme directly extends to maximum relative-error minimization as well. For the (1+ε)-approximation scheme for absolute error, the coefficients are intelligently scaled down so that the possible range of integer additive-error values entering a subtree is polynomially bounded.
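As a quick sanity check of the scaling claim in the one-dimensional case (where the factor reduces to N = 2^{log N}), the following Python sketch builds an unnormalized Haar decomposition of an integer array and verifies that scaling by N yields integer coefficients; the decomposition convention here is an assumption chosen to match the error-tree description, not the disclosed implementation:

```python
def haar_1d(data):
    """Unnormalized 1-D Haar decomposition: pairwise averages and semi-
    differences at each level; returns [overall_average, details...]."""
    coeffs, level = [], list(data)
    while len(level) > 1:
        avgs = [(level[i] + level[i + 1]) / 2 for i in range(0, len(level), 2)]
        dets = [(level[i] - level[i + 1]) / 2 for i in range(0, len(level), 2)]
        coeffs = dets + coeffs
        level = avgs
    return level + coeffs

data = [1, 2, 3, 4, 5, 6, 7, 8]          # integer frequency counts, N = 8
scaled = [v * len(data) for v in haar_1d(data)]
assert all(float(v).is_integer() for v in scaled)
print(scaled)   # [36.0, -16.0, -8.0, -8.0, -4.0, -4.0, -4.0, -4.0]
```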


Briefly, the optimal pseudo-polynomial time scheme is again based on dynamic programming over the error tree of integer coefficient values and follows along the same lines as the additive-error approximation algorithm, but without any rounding of incoming error values. The algorithm starts by precomputing an error vector E(j,s) for each internal node j in the error tree and each subset s of the coefficient values in S(j). The E(j,s) vector stores the total additive error propagated to each of the children of node j if it is decided to drop the coefficients in s ⊆ S(j) from the synopsis; thus, the E(j,s) vector is indexed by the children of node j, and, for the ith child node j_i of j, the corresponding ith error-vector entry is defined as

E(j, s)[i] = \sum_{c \in s} \mathrm{sign}(c, j_i) \cdot c,

where sign(c, j_i) is the sign of c's contribution to the j_i child subtree. It is easy to see that computing these error vectors E(j,s) for every j and every s ⊆ S(j) takes O(N·2^{2^D−1}·2^D) = O(N·2^{2^D+D}) time. Also, note that the E_max boundary in the integer error range ℛ can now be defined more precisely as

E_{\max} = \max_{j, s, i} \{ |E(j, s)[i]| \} \le R_z \cdot 2^D.
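The precomputation of the E(j,s) vectors can be sketched in Python as follows; the node layout and sign function below are illustrative placeholders rather than the actual multi-dimensional Haar sign pattern:

```python
from itertools import combinations

def error_vectors(S_j, children, sign):
    """Precompute E(j, s)[i] = sum over c in s of sign(c, j_i) * c, for
    every subset s of the coefficient set S(j) at internal node j."""
    return {s: [sum(sign(c, ji) * c for c in s) for ji in children]
            for r in range(len(S_j) + 1)
            for s in combinations(S_j, r)}

# Toy D = 2 node: 2^D - 1 = 3 coefficients, 2^D = 4 children; the sign
# pattern is a stand-in, not the true Haar quadrant signs.
S_j = (5.0, -2.0, 1.0)
sign = lambda c, ji: +1 if ji % 2 == 0 else -1
E = error_vectors(S_j, range(4), sign)
print(E[(5.0, -2.0)])   # error pushed to each child if exactly these are dropped
```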


The exemplary pseudo-polynomial DP scheme then constructs a table M[j,b,e] for every node j, space budget b ≤ B, and error value e ∈ ℛ, where M[j,b,e] denotes the minimum possible maximum absolute error in the T_j subtree, assuming a space budget of b coefficients in T_j and an error contribution of e due to ancestors of node j. As earlier, M[j,0,e] = |e| for a leaf/data node j, whereas for an internal node j (with children j_1, . . . , j_m) and assuming that a subset s ⊆ S(j) of coefficients is dropped from the synopsis, M[j,b,e] is computed as

\min_{0 \le b_1 + \cdots + b_m \le b - (2^D - 1 - |s|)} \; \max_i \{ M[j_i, b_i, E(j, s)[i] + e] \}.

Thus, the final (optimal) value of M[j,b,e] is computed by dropping the coefficient subset s ⊆ S(j) that minimizes the above term. Once again, the O(B^{2^D}) factor needed to cycle through all the b_1, . . . , b_m allotments can be avoided using the generalization described earlier; thus, there is an optimal DP algorithm that runs in time O(E_max·2^{2^D+D}·N·log N·B·log B). An important observation is that if E_max is polynomially bounded, then the exemplary DP scheme described above is also a polynomial-time algorithm. This idea is used to devise a polynomial-time (1+ε)-approximation scheme.
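The per-node DP step can be sketched as follows for a node with two children (the actual scheme handles 2^D children via the list-based budget splitting mentioned above); all inputs are toy values and the helper names are assumptions:

```python
import math
from itertools import combinations

def node_dp(S_j, b, e, E_j, child):
    """One pseudo-polynomial DP entry M[j, b, e] for a two-child node.
    Dropping subset s shifts child i's incoming error by E_j[s][i]; the
    kept coefficients cost len(S_j) - len(s) budget units.  child[i](b_i, e_i)
    returns the already-computed entry for child i."""
    best = math.inf
    for r in range(len(S_j) + 1):
        for s in combinations(S_j, r):
            cost = len(S_j) - len(s)            # coefficients kept at node j
            if b < cost:
                continue
            for b0 in range(b - cost + 1):
                best = min(best, max(child[0](b0, e + E_j[s][0]),
                                     child[1](b - cost - b0, e + E_j[s][1])))
    return best

leaf = lambda bi, ei: abs(ei) if bi == 0 else math.inf
S_j = (4, -2)                                    # toy integer coefficients
E_j = {s: [sum(s), -sum(s)] for r in range(3) for s in combinations(S_j, r)}
print(node_dp(S_j, 1, 0, E_j, [leaf, leaf]))     # 2: drop -2, keep 4
```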


Given a threshold parameter τ > 0, define a truncated DP algorithm as follows. Let V_{≤τ} denote the set of error vectors E(j,s) for which the absolute value of each of their entries E(j,s)[i] is at most τ; that is, V_{≤τ} = {E(j,s) : |E(j,s)[i]| ≤ τ for all i}. Similarly, let V_{>τ} denote the set of error vectors E(j,s) not in V_{≤τ}. Finally, define K_τ as the quantity

K_\tau = \frac{\varepsilon \tau}{\log N}.

The truncated DP algorithm replaces each error vector E(j,s) in the error tree with a scaled-down error vector

E_\tau(j, s) = \left\lfloor \frac{E(j, s)}{K_\tau} \right\rfloor

(i.e., E_τ(j,s)[i] = ⌊E(j,s)[i]/K_τ⌋ for all i) and works with these scaled (integer) error vectors. Furthermore, at any node j, the algorithm only considers dropping subsets s ⊆ S(j) such that E(j,s) ∈ V_{≤τ} from the synopsis. More formally, a DP array M_τ[j,b,e] is built using the scaled error vectors as follows. As previously, for a leaf node j, define M_τ[j,0,e] = |e|. For an internal node j, the algorithm cycles through only those subsets s ⊆ S(j) such that E(j,s) ∈ V_{≤τ}; the M_τ[j,b,e] entry is undefined if b < 2^D − 1 − |s| for all s such that E(j,s) ∈ V_{≤τ}. The DP recurrence for computing M_τ[j,b,e] is identical to the one for the exemplary pseudo-polynomial scheme described above, except that E(j,s) is replaced by its scaled version E_τ(j,s).
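A minimal Python sketch of the vector truncation and K_τ-scaling, assuming base-2 logarithms and illustrative toy vectors:

```python
import math

def truncate_and_scale(E, tau, eps, N):
    """Keep only error vectors with every entry within tau (the set V_<=tau)
    and scale them down entrywise by K_tau = eps * tau / log N, with floor."""
    K_tau = eps * tau / math.log2(N)
    V_le = {s: vec for s, vec in E.items() if all(abs(v) <= tau for v in vec)}
    return {s: [math.floor(v / K_tau) for v in vec] for s, vec in V_le.items()}

E = {('c1',): [6.0, -6.0], ('c1', 'c2'): [40.0, -40.0]}   # toy vectors
print(truncate_and_scale(E, tau=8, eps=0.5, N=16))
# {('c1',): [6, -6]} -- the large vector falls in V_>tau and is excluded
```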


It is claimed that the above truncated DP is a polynomial-time algorithm for any value of the threshold parameter τ. Indeed, since at each node j only coefficient subsets s with error vectors E(j,s) in V_{≤τ} are considered for dropping, the absolute additive error propagated to each child of j is guaranteed to be at most

\max_i |E_\tau(j, s)[i]| \le \frac{\tau}{K_\tau} = \frac{\log N}{\varepsilon}.

This, of course, implies that the absolute additive error that can enter any subtree in the exemplary truncated DP algorithm is at most log² N/ε; in other words, the range of possible (integer) incoming error values e for the truncated DP array M_τ[·] is guaranteed to be only

\mathcal{R}_\tau = \left[ -\frac{1}{\varepsilon}\log^2 N, \; +\frac{1}{\varepsilon}\log^2 N \right].

Thus, based on the earlier analysis, the running time of the truncated DP algorithm for a fixed parameter τ is only O((1/ε)·2^{2^D+D}·N·log² N·B·log B).


Given a threshold parameter τ, the exemplary truncated DP algorithm selects a subset C_τ of coefficients to retain in the synopsis. The absolute-error approximation scheme employs the truncated DP algorithm for each value τ ∈ {2^k : k = 0, . . . , ⌈log(R_z·2^D)⌉}, and finally selects the synopsis C_τ that minimizes the maximum absolute error in the data-value reconstruction. Clearly, since only O(D + log R_z) different values of τ are tried, the running time of the approximation algorithm remains polynomial.
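The outer loop over thresholds might look as follows in Python, with the truncated DP abstracted behind an assumed callback (all names are illustrative):

```python
import math

def approx_mult_id_err(run_truncated_dp, R_z, D):
    """Outer loop of the approximation scheme: run the truncated DP for each
    tau in {2^k : k = 0, ..., ceil(log2(R_z * 2^D))} and keep the synopsis
    with the smallest unscaled maximum absolute error.  run_truncated_dp is
    an assumed callback returning (synopsis, max_abs_error) for a given tau."""
    best_syn, best_err = None, math.inf
    for k in range(math.ceil(math.log2(R_z * 2 ** D)) + 1):
        syn, err = run_truncated_dp(tau=2 ** k)
        if err < best_err:
            best_syn, best_err = syn, err
    return best_syn

# Stub demo: pretend the best threshold is near tau = 8.
print(approx_mult_id_err(lambda tau: (f"C_{tau}", abs(tau - 8)), R_z=100, D=2))
```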


The above-described scheme gives a (1+ε)-approximation algorithm for maximum absolute error minimization. Consider the optimal maximum-error synopsis C_OPT and let absErr(C_OPT) denote the corresponding maximum absolute error value. Also, for each internal node j in the error tree, let s*_j ⊆ S(j) denote the subset of coefficients dropped from the optimal synopsis C_OPT at node j. Finally, let C denote the maximum absolute value across all entries in the collection of error vectors E(j, s*_j) over all j; that is, C = max_{j,i} {|E(j, s*_j)[i]|}. Clearly, since the thresholds are consecutive powers of two, the approximation algorithm is going to try a threshold parameter, say τ′, such that τ′/2 < C ≤ τ′. The goal is to show that the maximum absolute error achieved by C_τ′ (i.e., absErr(C_τ′)) is very close to that achieved by the optimal solution C_OPT.


First, note that, by the definition of C and τ′, the optimal solution may drop a subset of coefficients s ⊆ S(j) at node j only if E(j,s) ∈ V_{≤τ′}. Thus, C_OPT is a feasible solution to the truncated DP instance (with threshold τ′). Now let absErr_τ′(C_τ′) and absErr_τ′(C_OPT) denote the maximum absolute errors in the K_τ′-scaled instance for the C_τ′ synopsis (obtained by the truncated DP scheme) and the optimal C_OPT synopsis, respectively. Given the optimality of the truncated DP for the scaled instance,

\mathrm{absErr}_{\tau'}(C_{\tau'}) \le \mathrm{absErr}_{\tau'}(C_{\mathrm{OPT}}).  (8)

Let C be any wavelet synopsis (i.e., subset of Haar coefficients). In a K_τ′-scaled instance, any error-vector entry for C is represented by

E_{\tau'}(j, s)[i] = \left\lfloor \frac{E(j, s)[i]}{K_{\tau'}} \right\rfloor,

which differs by at most 1 from E(j,s)[i]/K_τ′;

thus, since any root-to-leaf path accumulates at most log N such per-node rounding deviations, the scaled and non-scaled maximum absolute errors for the C synopsis are related as follows:

K_{\tau'} \cdot \mathrm{absErr}_{\tau'}(C) - K_{\tau'} \log N \;\le\; \mathrm{absErr}(C) \;\le\; K_{\tau'} \cdot \mathrm{absErr}_{\tau'}(C) + K_{\tau'} \log N.  (9)

Applying the above formula for C = C_OPT and combining with inequality (8),

\mathrm{absErr}(C_{\mathrm{OPT}}) \ge K_{\tau'} \cdot \mathrm{absErr}_{\tau'}(C_{\mathrm{OPT}}) - K_{\tau'} \log N \ge K_{\tau'} \cdot \mathrm{absErr}_{\tau'}(C_{\tau'}) - K_{\tau'} \log N,

and using inequality (9) once again, this time with C = C_τ′,

\mathrm{absErr}(C_{\tau'}) \le K_{\tau'} \cdot \mathrm{absErr}_{\tau'}(C_{\tau'}) + K_{\tau'} \log N.

Now, simply combining the last two formulas and substituting K_τ′ = ετ′/log N,

\mathrm{absErr}(C_{\tau'}) \le \mathrm{absErr}(C_{\mathrm{OPT}}) + 2 K_{\tau'} \log N = \mathrm{absErr}(C_{\mathrm{OPT}}) + 2 \varepsilon \tau'.  (10)

The goal now is to demonstrate that absErr(C_OPT) is at least Ω(τ′). The proof relies on the following ancillary claim. Claim: Let C be any Haar-coefficient synopsis and, for any internal node j, let e_j^C denote the absolute value of the additive error coming into the T_j subtree due to ancestors of node j dropped from the C synopsis. Then, absErr(C) ≥ e_j^C.


Proof: The following observation is used. Let T denote the (multi-dimensional) Haar error tree (FIG. 2) in which all coefficient values are retained (to allow for exact data reconstruction). Given an internal node j, let v(T_j) denote the total additive contribution from ancestors of node j during the data-reconstruction process for the data values at the leaves of the T_j subtree (rooted at j). Clearly, this additive contribution v(T_j) is the same for all leaves of T_j, and, letting path(j) denote the set of all proper ancestor nodes of j in T, v(T_j) is simply computed as

v(T_j) = \sum_{c_i \in \mathrm{path}(j)} \mathrm{sign}_{i,j} \cdot c_i,

where sign_{i,j} is either +1 or −1 depending on how the c_i coefficient affects the data values in the T_j subtree. One of the basic properties of the Haar-wavelet decomposition is that v(T_j) is exactly the average of all the data values at the leaf nodes of the T_j subtree. This is quite intuitive, as the Haar-wavelet coefficients essentially summarize the data at successively coarser levels of resolution.


Now, let T^C denote the error tree where only the coefficients in C are retained (with all other coefficients set to 0). Note that, by the definition of e_j^C, e_j^C = |v(T_j) − v(T_j^C)|. Also, let j_1, . . . , j_m be the children of node j in T and T^C. The observation above directly implies that v(T_j) is the average of the child-subtree values v(T_{j_i}), i.e.,

v(T_j) = \frac{1}{m} \sum_{i=1}^{m} v(T_{j_i})

(and, of course, the same also holds for the corresponding approximate values v(T_j^C) and v(T_{j_1}^C), . . . , v(T_{j_m}^C)). Consider the absolute incoming additive errors e_{j_i}^C at the child nodes j_i of j. The discussion above, together with the triangle inequality, implies:

e_j^C = \left| v(T_j) - v(T_j^C) \right| = \frac{1}{m} \left| \sum_{i=1}^{m} \big( v(T_{j_i}) - v(T_{j_i}^C) \big) \right| \le \frac{1}{m} \sum_{i=1}^{m} e_{j_i}^C.

Thus, there must exist at least one child node, say j_k, of j such that e_{j_k}^C ≥ e_j^C. Continuing this argument, a path is obtained from node j to a leaf/data value d_l in the T_j^C subtree such that the absolute additive error entering that leaf, i.e., |{circumflex over (d)}_l − d_l|, is at least e_j^C. But then absErr(C) ≥ |{circumflex over (d)}_l − d_l| ≥ e_j^C. This completes the proof of the claim.


Lemma: Let C_OPT be the optimal maximum absolute error synopsis and let τ′ be as defined above. Then,

\mathrm{absErr}(C_{\mathrm{OPT}}) \ge \frac{\tau'}{4}.


Proof: By the definition of τ′, there exists an error-tree node j such that the subset of dropped coefficients s*_j ⊆ S(j) for j in the optimal solution C_OPT satisfies

|E(j, s^*_j)[k]| \ge \frac{\tau'}{2}

for some child j_k of j. Now consider two possible cases for e_j^{C_OPT}, the absolute additive error entering the subtree rooted at node j in the optimal solution. (1) If e_j^{C_OPT} ≥ τ′/4, then, by the above claim, absErr(C_OPT) ≥ τ′/4. (2) If e_j^{C_OPT} < τ′/4, then the absolute value of the additive error entering the subtree rooted at child j_k of j is at least τ′/2 − τ′/4 = τ′/4; thus, the claim again implies that absErr(C_OPT) ≥ τ′/4. This completes the proof.


Combining inequality (10) with the lemma (which gives 2ετ′ ≤ 8ε·absErr(C_OPT)),

\mathrm{absErr}(C_{\tau'}) \le (1 + 8\varepsilon) \cdot \mathrm{absErr}(C_{\mathrm{OPT}}).

Thus, simply running the scheme with ε/8 in place of ε yields a (1+ε)-approximation scheme for maximum absolute error minimization in multiple dimensions.


In summary, there is an exemplary approximation scheme for deterministic multi-dimensional wavelet thresholding that discovers an approximate solution guaranteed to be within (1+ε) of the optimal (i.e., minimum) maximum absolute error in time

O\left( \frac{\log R_z}{\varepsilon} \cdot 2^{2^D + D} \cdot N \log^2 N \cdot B \log B \right)

and with a total space requirement of

O\left( \frac{1}{\varepsilon} \cdot 2^D \cdot N \log^2 N \cdot B \right).

Extension to General Error Metrics


The exemplary embodiments described above address the minimization of maximum-error metrics (such as maximum relative error) in the approximate data-value reconstruction. However, exemplary embodiments of the present invention have much more general applicability. Once again, this is in sharp contrast with prior-art thresholding schemes, which can handle only maximum-error metrics. Consider the natural class of distributive error metrics defined formally below.


Distributive Error Metrics


Consider an approximation of a (one- or multi-dimensional) data array A, and let f(R) denote the error in the data-value approximation over the (one- or multi-dimensional) range of values R in A. The error metric f(·) is said to be distributive if and only if, for any collection of disjoint ranges R_1, . . . , R_k, there exists some combining function g(·) such that the error over the entire region ∪_{i=1}^k R_i can be defined as

f(\cup_{i=1}^{k} R_i) = g\big( f(R_1), \ldots, f(R_k) \big).


The above-defined class of distributive error metrics encompasses several important approximation-error functions. For instance, the maximum-error metrics defined above are clearly distributive (with the combining function g(·) being simply the max{·} of its input arguments). Furthermore, it is not hard to see that the most important cumulative approximation-error metrics are also naturally distributive, including the mean relative error

\frac{1}{N} \sum_i \mathrm{relErr}_i

and the L_p-norm error

\left[ \frac{1}{N} \sum_i |\hat{d}_i - d_i|^p \right]^{1/p}

(for any p ≥ 0) in the data reconstruction, as well as the weighted variants of these metrics,

\frac{1}{N} \sum_i w_i \cdot \mathrm{relErr}_i \quad\text{and}\quad \left[ \frac{1}{N} \sum_i w_i \cdot |\hat{d}_i - d_i|^p \right]^{1/p},

respectively, where different weights w_i (typically normalized, such that Σ_i w_i = 1) are associated with the errors for different values in the underlying data domain. Such weights are an important tool for capturing the importance of individual data values, for example, based on the non-uniformities of the observed query workload.
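To make the definition concrete, the following Python sketch checks distributivity for two of the metrics just mentioned. Note that the L_p-norm error is carried as p-th-power partial sums so that the combining function g is simple addition, with the normalization and p-th root applied once at the end; the function names are illustrative:

```python
import math

def max_abs_err(d_hat, d):
    """Maximum absolute error over one range: g is max."""
    return max(abs(a - b) for a, b in zip(d_hat, d))

def sum_p_err(d_hat, d, p=2.0):
    """p-th-power partial sum for the L_p-norm error: g is addition."""
    return sum(abs(a - b) ** p for a, b in zip(d_hat, d))

r1_hat, r1 = [1.0, 2.5], [1.0, 2.0]   # two disjoint toy ranges
r2_hat, r2 = [4.0], [3.0]

# Combining partial results over the ranges equals the global metric:
assert max_abs_err(r1_hat + r2_hat, r1 + r2) == max(
    max_abs_err(r1_hat, r1), max_abs_err(r2_hat, r2))
assert math.isclose(sum_p_err(r1_hat + r2_hat, r1 + r2),
                    sum_p_err(r1_hat, r1) + sum_p_err(r2_hat, r2))
```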


An important observation is that the exemplary optimal DP formulation can, in fact, be easily adapted to work with any distributive error metric. Given such an error metric f(·), the basic idea is to define the DP array M[j,b,S], as well as the DP recurrences for computing its entries, using the general form of the f(·) metric and the corresponding combining function g(·) that allows the computation of f(·) to be "distributed" over sub-ranges of the data domain. More specifically, the base case for the exemplary DP recurrence (i.e., for leaf nodes c_j = d_{j−N}, j ≥ N) is defined as M[j,0,S] = f({d_{j−N}} | S) for each subset S ⊆ path(d_{j−N}), where f({d_{j−N}} | S) denotes the value of the f(·) error metric at data value d_{j−N} assuming the coefficients in S are kept in the synopsis. Now, in the case of an internal node c_j with j < N, the optimal error M[j,b,S] when coefficient c_j is either dropped from or kept in the synopsis is defined in a manner similar to equations (2)-(3), with the difference that max{·} is now replaced by the combining function g(·) for the distributive error metric f(·). More formally, the general DP recurrence for f(·) simply defines M[j,b,S] = min{M_drop[j,b,S], M_keep[j,b,S]}, where

M_{\mathrm{drop}}[j,b,S] = \min_{0 \le b' \le b} g\big( M[2j, b', S], \, M[2j+1, b-b', S] \big), \quad\text{and}\quad M_{\mathrm{keep}}[j,b,S] = \min_{0 \le b' \le b-1} g\big( M[2j, b', S \cup \{c_j\}], \, M[2j+1, b-b'-1, S \cup \{c_j\}] \big).
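The role of the combining function g(·) in this recurrence can be isolated in a small Python sketch; M_left and M_right stand in for already-computed child entries, and all names are illustrative:

```python
import math
import operator

def combine_children(M_left, M_right, b, g, keep_cost=0):
    """Shared core of M_drop (keep_cost = 0) and M_keep (keep_cost = 1):
    minimize g(left, right) over all splits of the remaining budget.
    M_left / M_right map a child budget to that child's computed error."""
    return min((g(M_left(b1), M_right(b - keep_cost - b1))
                for b1 in range(b - keep_cost + 1)), default=math.inf)

# g = max recovers the maximum-error recurrence; g = operator.add recovers
# cumulative metrics such as the cumulative weighted relative error.
print(combine_children(lambda b1: 3.0 / (b1 + 1), lambda b2: 2.0 / (b2 + 1),
                       b=2, g=operator.add))   # 2.5, at the split (1, 1)
```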

For example, as a more concrete problem setting, consider the adaptation of the optimal DP formulation for the case of the mean weighted relative error metric

\frac{1}{N} \sum_i w_i \cdot \mathrm{relErr}_i.

Since the averaging factor N is constant (i.e., the size of the data domain), it suffices to optimize the cumulative weighted relative error; that is, the goal is to minimize

f(\{d_1, \ldots, d_N\}) = \sum_i w_i \cdot \mathrm{relErr}_i = \sum_i w_i \cdot \frac{|\hat{d}_i - d_i|}{\max\{ |d_i|, s \}}

for a given sanity-bound value s.


For the base case of data (i.e., leaf) nodes c_j = d_{j−N} with j ≥ N, M[j,0,S] is defined (for each subset S ⊆ path(d_{j−N})) as the weighted relative error at value d_{j−N}, assuming the coefficients in S are retained in the synopsis; that is,

M[j, 0, S] = f(\{d_{j-N}\} \mid S) = w_{j-N} \cdot \frac{\left| d_{j-N} - \sum_{c_k \in S} \mathrm{sign}_{j-N,k} \cdot c_k \right|}{\max\{ |d_{j-N}|, s \}}.

For the case of internal nodes c_j with j < N, note that the combining function g(·) for the cumulative weighted relative error metric is simple summation; thus, M[j,b,S] = min{M_drop[j,b,S], M_keep[j,b,S]}, where

M_{\mathrm{drop}}[j,b,S] = \min_{0 \le b' \le b} \big\{ M[2j, b', S] + M[2j+1, b-b', S] \big\}, \quad\text{and}\quad M_{\mathrm{keep}}[j,b,S] = \min_{0 \le b' \le b-1} \big\{ M[2j, b', S \cup \{c_j\}] + M[2j+1, b-b'-1, S \cup \{c_j\}] \big\}.
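A minimal sketch of this base case, with illustrative names:

```python
def leaf_weighted_rel_err(d, S_contrib, w, s):
    """Base case M[j, 0, S] for cumulative weighted relative error: the
    weighted relative error at data value d, where S_contrib is the signed
    sum of the retained path coefficients (i.e., the reconstructed value)
    and s is the sanity bound."""
    return w * abs(d - S_contrib) / max(abs(d), s)

print(leaf_weighted_rel_err(d=10.0, S_contrib=9.0, w=0.2, s=1.0))   # 0.02
```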


Since the exemplary optimal DP formulation extends directly to multi-dimensional data arrays and wavelets, the above-described generalizations are directly applicable to multiple dimensions as well (modulo, of course, the super-exponential explosion in space/time complexity with increasing dimensionality). Furthermore, note that both of the exemplary efficient approximation schemes for near-optimal thresholding in multiple dimensions are essentially based on sparser versions of this optimal multi-dimensional dynamic program. Thus, the exemplary maximum-error approximation algorithms can also be adapted to work with other distributive error metrics with similar approximation guarantees; for instance, the techniques and analyses above can easily be extended to (approximately) optimize the corresponding cumulative error metrics (e.g., mean absolute and mean relative error).



FIG. 7 shows input and output for an exemplary embodiment of a (1+ε)-approximation scheme for absolute error minimization for deterministic multi-dimensional wavelet thresholding (ApproxMultIDErr) 700. Inputs to ApproxMultIDErr 700 include a wavelet array (W_A) 702 and a number of retained coefficients (B) 704. Outputs include the ε-approximate absolute-error synopsis of wavelet coefficients 706. Notation includes R_z = maximum absolute coefficient value, N = maximum dimension size, and S(j) = set of coefficients at node j.



FIGS. 8A-8C are flowcharts describing the exemplary embodiment of the scheme ApproxMultIDErr of FIG. 7. In this example, procedure ApproxMultIDErr starts at 800. An error vector is computed for each node at 802. Initializations are performed at 804 and the top of a loop starts at 806, 808, where τ = 2^k. Then, V, V_{≤τ}, and V_{>τ} = V − V_{≤τ} are computed at 810. Next, K_τ is computed and the τ-scaled-down error vectors E_τ(j,s) are defined at 812. Using dynamic programming over the τ-scaled-down error vectors, the array M_τ[j,b,e] is computed, which is the minimum maximum absolute error at node j with b coefficients and incoming error e, and the resulting synopsis C_τ is defined at 814. It is determined whether M_τ[root, B, 0] is less than the best error at 816. If so, the best error is set to M_τ[root, B, 0] and the best synopsis is set to C_τ; otherwise, the end-of-loop test is performed at 820. If it is not the end of the loop, the loop index is incremented at 822 and control flows back via branch 806 to 808; otherwise, the best synopsis is returned at 824.


Wavelets have a long history of successes in the signal- and image-processing arena and, more recently, they have also found their way into data-management applications. Haar-wavelet coefficients have been used as synopses for accurately estimating the selectivities of range queries. I/O-efficient algorithms exist for building multi-dimensional Haar wavelets from large relational data sets, and it has been shown that a small set of wavelet coefficients can efficiently provide accurate approximate answers to range aggregates over OLAP cubes. Haar wavelets have also been used as a general-purpose approximate query processing tool, with efficient algorithms that can process complex relational queries (with joins, selections, etc.) entirely in the wavelet-coefficient domain. The problem of on-line maintenance of coefficient synopses can be solved with a probabilistic-counting technique that approximately maintains the largest normalized-value coefficients in the presence of updates. Algorithms exist for building approximate one-dimensional Haar-wavelet synopses over numeric data streams, and there are time- and space-efficient techniques for constructing Haar-wavelet synopses for data sets with multiple measures (such as those typically found in OLAP applications).


All of the above applications rely on conventional L2-error-based thresholding schemes that typically decide the significance of a coefficient based on its absolute normalized value. Such conventional wavelet synopses can suffer from several problems, including the introduction of severe bias in the data reconstruction and wide variance in the quality of the data approximation, as well as a lack of non-trivial guarantees for individual approximate answers. In contrast, recently proposed probabilistic wavelet synopses rely on a probabilistic thresholding process based on randomized rounding that tries to probabilistically control the maximum relative error in the synopsis by minimizing appropriate probabilistic metrics, such as normalized standard error or normalized bias. The problem addressed by the present invention, namely the design of efficient deterministic thresholding schemes for maximum-error metrics, has not been addressed by the prior art.


There is a rich mathematical literature on m-term approximations using wavelets, where m is the number of coefficients in the synopsis. Some prior work has studied thresholding approaches for meeting a target upper bound on an Lp-error metric. However, the prior art does not address the deterministic minimization of relative errors with sanity bounds, which is an important scenario for approximate query processing in databases. The present invention provides computationally-efficient (optimal and near-optimal) deterministic thresholding schemes for minimizing maximum-error metrics for one- and multi-dimensional wavelet summaries.


Embodiments of the present invention include novel, computationally-efficient schemes for deterministic maximum-error wavelet thresholding in one and multiple dimensions. For one-dimensional wavelets, an optimal, low polynomial-time thresholding algorithm based on a new dynamic-programming formulation minimizes either the maximum relative error or the maximum absolute error in the data approximation. For the multi-dimensional case, embodiments include novel, polynomial-time approximation schemes, based on approximate dynamic programs, with tunable ε-approximation guarantees for the target maximum-error metric.



FIG. 9 is a high level block diagram showing a computer. The computer 900 may be employed to implement embodiments of the present invention. The computer 900 comprises a processor 930 as well as memory 940 for storing various programs 944 and data 946. The memory 940 may also store an operating system 942 supporting the programs 944.


The processor 930 cooperates with conventional support circuitry such as power supplies, clock circuits, cache memory and the like as well as circuits that assist in executing the software routines stored in the memory 940. As such, it is contemplated that some of the steps discussed herein as software methods may be implemented within hardware, for example, as circuitry that cooperates with the processor 930 to perform various method steps. The computer 900 also contains input/output (I/O) circuitry that forms an interface between the various functional elements communicating with the computer 900.


Although the computer 900 is depicted as a general purpose computer that is programmed to perform various functions in accordance with the present invention, the invention can be implemented in hardware as, for example, an application specific integrated circuit (ASIC) or field programmable gate array (FPGA). As such, the process steps described herein are intended to be broadly interpreted as being equivalently performed by software, hardware, or a combination thereof.


The present invention may be implemented as a computer program product wherein computer instructions, when processed by a computer, adapt the operation of the computer such that the methods and/or techniques of the present invention are invoked or otherwise provided. Instructions for invoking the inventive methods may be stored in fixed or removable media, transmitted via a data stream in a broadcast media or other signal bearing medium, and/or stored within a working memory within a computing device operating according to the instructions.


While the foregoing is directed to various embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. As such, the appropriate scope of the invention is to be determined according to the claims, which follow.

Claims
  • 1. A method for deterministic wavelet thresholding for optimizing general-error metrics, comprising: building a deterministic wavelet synopsis that minimizes maximum error in a data-value approximation; and providing a minimum maximum error for all data values in a subtree for the deterministic wavelet synopsis.
  • 2. The method of claim 1, wherein the minimum maximum error is a maximum relative error.
  • 3. The method of claim 1, wherein the minimum maximum error is a maximum absolute error.
  • 4. The method of claim 1, wherein the deterministic wavelet synopses are built over a one-dimensional data array.
  • 5. The method of claim 1, wherein the deterministic wavelet synopses are built over a multi-dimensional data array.
  • 6. A method for deterministic wavelet thresholding for optimizing general-error metrics, comprising: conditioning a minimum error value for an error subtree not only on a root node of the error subtree and the amount of synopsis storage allotted, but also on an error that enters the subtree through coefficient selections made on a path from the root node to a particular node, excluding that particular node itself; and tabulating all possible coefficient selections in polynomial time.
  • 7. The method of claim 6, further comprising: minimizing error in a data approximation; wherein the minimum error value is maximum relative error.
  • 8. The method of claim 6, further comprising: minimizing error in a data approximation; wherein the minimum error value is maximum absolute error.
  • 9. A method for deterministic wavelet thresholding for optimizing general-error metrics, comprising: approximately covering a range of all possible error contributions for paths up to a root node of an error subtree using a plurality of approximate error values; tabulating the approximate error values by capturing approximate error in a plurality of error subtrees, each error subtree being rooted at a particular node; and tabulating a rounded additive error contribution due to ancestors of the particular node being discarded from a synopsis.
  • 10. The method of claim 9, wherein the error values are relative.
  • 11. The method of claim 9, wherein the error values are absolute.
  • 12. A method for deterministic wavelet thresholding for optimizing general-error metrics, comprising: determining error value vectors for each of a plurality of coefficient subsets; scaling-down the error value vectors using a threshold value; using dynamic programming over the scaled error value vectors, cycling only through coefficient subsets whose error value vector entries are above the threshold value to determine an optimal solution for a scaled problem instance; trying an appropriate collection of additional threshold values and returning the synopsis with the smallest error.