METHOD AND APPARATUS FOR IMPROVED REGRESSION MODELING

Information

  • Patent Application
  • Publication Number: 20080154817
  • Date Filed: October 17, 2006
  • Date Published: June 26, 2008
Abstract
The present invention is a method and an apparatus for improved regression modeling that addresses the curse of dimensionality, for use, for example, in data analysis tasks. In one embodiment, a method for analyzing data includes receiving a set of exemplars, where at least two of the exemplars include an input pattern (i.e., a point in an input space) and at least one of the exemplars includes a target value associated with the input pattern. A function approximator and a distance metric are then initialized, where the distance metric computes a distance between points in the input space, and the distance metric is adjusted such that an accuracy measure of the function approximator on the set of exemplars is improved.
Description
BACKGROUND

The invention relates generally to data analysis, and relates more particularly to data analysis techniques using nonlinear regression.


Nonlinear regression refers to the development of empirical models, often used in data analysis, that encapsulate temporal and/or structural relationships between observed quantities (“input variables”) and quantities of interest (“output variables” or “target variables”), which may be difficult to observe directly. Specifically, the goal of nonlinear regression is to construct a mathematical model that can accurately estimate an unobserved target variable as a function of the settings of a collection of input variables, i.e., of particular input states or patterns.


Typically, these mathematical models are produced by applying machine learning or training techniques to a data set that contains a number of historical exemplars, where each exemplar $i$ comprises a particular input pattern $\vec{x}_i$ (with each of the input variables set to a particular value) and an associated target value $y_i$ that was observed or known by some means. Training on the data set aims to obtain a general functional mapping $\hat{y} = F(\vec{x})$ that estimates a predicted or likely target value $\hat{y}$ for a general input pattern $\vec{x}$. A desirable property of the mapping is that it is general enough to provide accurate target value estimates for input patterns not contained in the training data set.


Existing techniques for performing nonlinear regression (including neural networks, regression trees, splines, wavelets, and support vector regression, among others) commonly suffer from a limitation referred to as the curse of dimensionality. That is, it becomes progressively (e.g., exponentially) more difficult to learn an accurate functional mapping as the dimensionality (number of features or state variables) of the input space increases.


Thus, there is a need for an improved method for regression modeling that addresses the curse of dimensionality which limits existing methods.


SUMMARY OF THE INVENTION

The present invention is a method and an apparatus for improved regression modeling that addresses the curse of dimensionality, for use, for example, in data analysis tasks. In one embodiment, a method for analyzing data includes receiving a set of exemplars, where at least two of the exemplars include an input pattern (i.e., a point in an input space) and at least one of the exemplars includes a target value associated with the input pattern. A function approximator and a distance metric are then initialized, where the distance metric computes a distance between points in the input space, and the distance metric is adjusted such that an accuracy measure of the function approximator on the set of exemplars is improved.





BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited embodiments of the invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be obtained by reference to the embodiments thereof which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.



FIG. 1 is a flow diagram illustrating one embodiment of a method for regression modeling, according to the present invention;



FIG. 2 is a flow diagram illustrating one embodiment of a method for training on a batch of exemplars; and



FIG. 3 is a high level block diagram of the regression modeling method that is implemented using a general purpose computing device.





To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.


DETAILED DESCRIPTION

In one embodiment, the present invention is a method and apparatus for improved regression modeling to address the curse of dimensionality. Embodiments of the present invention simultaneously learn a distance metric (between points in an input space) and a distance-based function approximator. These quantities are broadly useful for purposes of estimation, prediction and control throughout many areas of science and engineering, including, but not limited to, systems management, medical science, pharmacology, bioinformatics, geophysical and atmospheric sciences, financial and economic modeling and consumer/customer modeling.



FIG. 1 is a flow diagram illustrating one embodiment of a method 100 for regression modeling, according to the present invention. The method 100 may be implemented, for example, to perform a data analysis task.


The method 100 is initialized at step 102 and proceeds to step 104, where the method 100 receives a training set of S exemplars. At least two of the exemplars $i$ in the training set comprise an input pattern $\vec{x}_i$ (i.e., a point in an N-dimensional input space), and at least one of the exemplars comprises a target value $y_i$ (e.g., a scalar value) associated with the input pattern $\vec{x}_i$.


At step 106, the method 100 initializes a distance metric $D(\vec{x}, \vec{x}')$. The distance metric is a global function for computing a distance between general points $\vec{x}$ and $\vec{x}'$ in the input space. In one embodiment, the distance metric takes the form of a Mahalanobis distance:






$$D(\vec{x}, \vec{x}') = \sqrt{\sum_{i,j} M_{ij} (x_i - x'_i)(x_j - x'_j)} \qquad \text{(EQN. 1)}$$


where $M_{ij}$ denotes the elements of a positive semi-definite matrix $M$. In this case, initialization of the distance metric comprises setting initial values of $M_{ij}$. In one embodiment, any one of a number of initialization schemes may be implemented to initialize the elements of $M$, including setting the elements to random values or to values corresponding to an identity matrix. In another embodiment, initial values $L_{ij}$ of the elements of a linear transformation matrix $L$ are provided, where the matrix $L$ relates to the matrix $M$ according to:






$$M = L \cdot L^{T} \qquad \text{(EQN. 2)}$$


with $L^{T}$ denoting the transpose of the matrix $L$.
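
For concreteness, the following is a minimal sketch (in Python with NumPy; the patent itself specifies no implementation language) of the distance of EQN. 1 parameterized through a linear transformation matrix as in EQN. 2, which keeps $M$ positive semi-definite by construction:

```python
import numpy as np

def mahalanobis_distance(x, x_prime, L):
    """EQN. 1 with M = L @ L.T (EQN. 2), so M is PSD by construction."""
    diff = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    v = L.T @ diff  # D^2 = diff^T (L L^T) diff = ||L^T diff||^2
    return float(np.sqrt(v @ v))

# Initialization schemes mentioned in the text: identity or random values.
N = 4
L_init = np.eye(N)                                       # identity
# L_init = np.random.default_rng(0).normal(size=(N, N))  # or random
```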


In step 108, the method 100 initializes a function approximator $F(\vec{x})$. In one embodiment, the function approximator is a distance-based nonlinear function approximator. In a further embodiment, the function approximator is governed by a set of $k$ distances $\{d_1, d_2, \ldots, d_k\}$ between $\vec{x}$ and a set of $k$ reference points $\{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_k\}$. In this case, initialization of the function approximator comprises setting the number and locations of the reference points, as well as setting the initial distance function $D(\vec{x}, \vec{x}')$ initialized in step 106.


In a further embodiment still, the number (k) of reference points equals the number of exemplars (S) in the training set. In this case, the locations of the reference points coincide with the locations of the input exemplars. Additionally, the function approximator comprises a normalized sum of Radial Basis Functions, i.e.:

$$F(\vec{x}) = \frac{\sum_{j=1}^{S} w_j(\vec{x}) \cdot y_j}{\Omega(\vec{x})} \qquad \text{(EQN. 3)}$$

where $y_j$ is the target output value for exemplar $j$,

$$w_j(\vec{x}) = \exp\left(-d_j(\vec{x})\right) \qquad \text{(EQN. 4)}$$

with

$$d_j(\vec{x}) = D^2(\vec{x}, \vec{x}_j) \qquad \text{(EQN. 5)}$$

and

$$\Omega(\vec{x}) = \sum_{j=1}^{S} w_j(\vec{x}) \qquad \text{(EQN. 6)}$$
In yet another embodiment, the function approximator comprises additional adjustable structure or parameters θ. In this case, initialization of the function approximator further comprises an initialization of θ in accordance with established methods in the art. For example, if the function approximator is based on neural networks with adjustable weight values, initialization of the function approximator might comprise initializing the weight values in accordance with standard methods (e.g., randomly).
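
Pulling EQNS. 3-6 together, the following is a minimal illustrative sketch (assuming NumPy and the L-parameterized metric sketched above; not the patented implementation itself) of the normalized Radial Basis Function approximator with one reference point per training exemplar:

```python
import numpy as np

def rbf_predict(x, X_train, y_train, L):
    """F(x) per EQN. 3 over S training exemplars (rows of X_train)."""
    diffs = X_train - np.asarray(x, dtype=float)  # rows: x_j - x
    V = diffs @ L                                 # rows: (x_j - x)^T L
    d = np.sum(V * V, axis=1)   # d_j(x) = D^2(x, x_j)           (EQN. 5)
    w = np.exp(-d)              # w_j(x) = exp(-d_j(x))          (EQN. 4)
    omega = np.sum(w)           # Omega(x); a tiny epsilon could be added
                                # to guard against underflow     (EQN. 6)
    return float(w @ np.asarray(y_train, dtype=float) / omega)  # (EQN. 3)
```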


In step 110, the method 100 performs a training sweep through the training set of exemplars. In one embodiment, a training sweep comprises training on batches (i.e., subsets) of the exemplars. In one embodiment, the size of a batch for training sweep purposes ranges from a single exemplar to the entire set of exemplars. In one embodiment, training on a batch of exemplars is performed in accordance with a method described in greater detail with respect to FIG. 2. In one embodiment, the sweep through the exemplars is performed in a random order. In an alternate embodiment, the sweep through the exemplars is performed in a defined sequential order. The output of the training sweep is a trained function approximator F( ) and a trained distance metric D( ). The primary goal of the training sweep(s), as described in greater detail with respect to FIG. 2, is to progressively adjust the parameters encoding the distance function (e.g., the matrix elements encoding a Mahalanobis distance), as well as any adjustable structure or parameters θ of the function approximator, such that an error measure of the function approximator on the set of exemplars is minimized. The training sweep may also incorporate established statistical methods (e.g., regularization methods) aimed at reducing the occurrence of overfitting, as also described in greater detail with respect to FIG. 2.


In step 112, the method 100 determines whether another training sweep should be performed through the training set of exemplars. In one embodiment, the method 100 performs another training sweep if a termination criterion (i.e., a criterion dictating when to terminate training) has not been met. In one embodiment, the termination criterion is met if the total error E over the entire training set of exemplars falls below a predefined threshold value. In another embodiment, the termination criterion is met if a rate of decrease of the total error E per training sweep reaches a predefined threshold value. In another embodiment still, the termination criterion is met if an upper bound on a total number of training sweeps is met.


In yet another embodiment, the termination criterion is based on one or more established “early stopping” methods to avoid overfitting (i.e., learning a model that performs well on the training data, but generalizes poorly to additional data not seen during the training process). For example, training may be terminated at the point at which the “cross-validation error” (i.e., average error on holdout data over a number of runs in which the data is randomly partitioned into training data and holdout data) is minimized.
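
As an illustration of the holdout-based criterion, a sketch (the `predict` argument stands for whatever trained approximator is being evaluated; this name is an assumption for illustration, not language from the patent):

```python
import numpy as np

def holdout_error(predict, X_holdout, y_holdout):
    """Mean squared error on data withheld from training; early stopping
    terminates training once this stops decreasing across sweeps."""
    preds = np.asarray([predict(x) for x in X_holdout])
    return float(np.mean((np.asarray(y_holdout) - preds) ** 2))
```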


If the method 100 concludes in step 112 that another training sweep should be performed, the method 100 returns to step 110 and proceeds as described above to perform another training sweep.


Alternatively, if the method 100 concludes in step 112 that another training sweep should not be performed, the method 100 proceeds to step 114 and performs dimensionality reduction using the trained distance metric produced in step 110 and the training set of exemplars. The output of the dimensionality reduction is a reduction mapping $\vec{z} = R(\vec{x})$, which transforms a general input vector $\vec{x}$ into a lower-dimensional vector $\vec{z}$.


In one embodiment, where the trained distance metric D( ) comprises a Mahalanobis distance expressed by a linear transformation matrix $L$, a linear reduction mapping R( ) can be obtained that is based directly on the linear transformation matrix. In one embodiment, the matrix $L$ that is output by step 110 is already of low rank, due to constrained or unconstrained error minimization performed in step 110. In this case, the linear reduction mapping R( ) may consist simply of multiplication by $L$, i.e., $\vec{z} = L \cdot \vec{x}$.


In another embodiment, the trained distance metric is applied to a plurality of input exemplar pairs $(\vec{x}_i, \vec{x}_j)$ in order to obtain an in-sample set of pairwise distances $\{d_{ij}\}$. Nonlinear dimensionality reduction is then applied to the set of pairwise distances, in accordance with any one or more of a number of known techniques (e.g., Isomap, Laplacian Eigenmaps, Maximum Variance Unfolding, etc.). This produces a low-dimensional in-sample embedding $\{\vec{z}_i\}$ of the input exemplars $\{\vec{x}_i\}$. The method 100 then applies to the in-sample embedding any one or more of a number of known techniques for constructing a general out-of-sample embedding $\vec{z} = R(\vec{x})$, such as Locally Linear Embedding.
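
A sketch of the in-sample step, with scikit-learn's metric MDS standing in for the named techniques (an assumption for illustration; Isomap, Laplacian Eigenmaps and MVU expose differing APIs for precomputed distances), and distances computed under the L-parameterized metric above:

```python
import numpy as np
from sklearn.manifold import MDS

def embed_in_sample(X_train, L, n_components=2):
    """In-sample embedding {z_i} from the learned pairwise distances {d_ij}."""
    V = X_train @ L                    # rows: L^T x_i
    sq = np.sum(V * V, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (V @ V.T)
    D = np.sqrt(np.maximum(D2, 0.0))   # clip tiny negative round-off
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=0)
    return mds.fit_transform(D)
```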


In step 116, the method 100 obtains a lower dimensional set of exemplars $\{(\vec{z}_i, y_i)\}$ from the original training set of exemplars $\{(\vec{x}_i, y_i)\}$. In one embodiment, the lower dimensional set of exemplars is obtained directly from the in-sample embedding $\{\vec{z}_i\}$. In an alternate embodiment, the method 100 applies the reduction mapping R( ) produced in step 114 to the training set of exemplars $\{(\vec{x}_i, y_i)\}$ in step 116. In some embodiments, delivery of the lower dimensional set of exemplars $\{(\vec{z}_i, y_i)\}$ may be useful for purposes of interpretation and understanding by a human data analyst. For example, plots or visualizations of the lower dimensional exemplars may be feasible and interpretable, whereas such plots or visualizations may be entirely infeasible if the original training set of exemplars $\{(\vec{x}_i, y_i)\}$ is of sufficiently high dimensionality.


In step 118, the method 100 generates a new regression model G( ) in accordance with the lower dimensional set of exemplars $\{(\vec{z}_i, y_i)\}$. In one embodiment, this is accomplished by performing standard nonlinear regression, for example in accordance with any one or more known techniques, in order to produce the new regression model $\tilde{y} = G(\vec{z})$. The new regression model estimates a target value $\tilde{y}$ for a general reduced dimensional input vector $\vec{z}$.


In step 120, the method 100 delivers the new regression model G( ) and the reduction mapping R( ) to an application. The application may use the new regression model G( ) and the reduction mapping R( ), for example, in combination for the purposes of estimation, prediction or control (e.g., given input vectors $\vec{x}$ in the original input space). For instance, the reduction mapping $R(\vec{x})$ may be applied to obtain a lower dimensional input vector $\vec{z}$, and the regression model $G(\vec{z})$ may then be applied to obtain an estimated target value.
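
An end-to-end usage sketch on synthetic data (all names and the k-nearest-neighbors choice for G( ) are illustrative assumptions; the low-rank matrix would in practice come from the training of step 110, and the mapping is written as a right-multiplication for row-vector data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # original input patterns {x_i}
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # associated target values {y_i}

L = np.eye(10)[:, :2]          # stand-in for a trained low-rank L (10 -> 2 dims)
R = lambda inputs: inputs @ L  # linear reduction mapping z = R(x)

G = KNeighborsRegressor(n_neighbors=5).fit(R(X), y)  # new regression model G( )
x_new = rng.normal(size=(1, 10))
y_tilde = G.predict(R(x_new))  # estimated target for a general input vector
```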


The method 100 then terminates in step 122.


In an alternative embodiment, the method 100 may skip the dimensionality reduction (i.e., steps 114-118) and proceed from step 112 (i.e., the training sweeps) directly to step 120 to deliver the function approximator F( ) along with the learned distance metric D( ).


The method 100 thus automatically learns a distance metric between points in the input space. Such distance metrics are typically unavailable or ill-defined in many regression datasets, especially when the various features in the input space represent completely different types of quantities. The distance metric may provide the basis for dimensionality reduction of an input space in many applications. Moreover, the method 100 simultaneously learns a distance-based function approximator, which may also be implemented in many applications.


For example, the method 100 may be implemented in the field of systems management, where the input patterns comprise measurements of various quantities that describe the state of a distributed computing system (e.g., workload levels, queue lengths, response times and throughput measures, central processing unit and memory utilizations, etc.) and/or data that describes management actions (e.g., allocated resource levels, control parameter/configuration settings, etc.). In this case, the target values (output) may represent an expected value of an overall multi-criteria utility function that expresses how well the system is performing with respect to a set of system management objectives.


Alternatively, the method 100 may be implemented in the field of medical data analysis. In this case, the input patterns might comprise measurements or test results pertaining to cancer patients, and the target values may comprise life expectancy estimates associated with particular types of treatments or therapies. An accurate regression model could help a physician decide upon the type of treatment most likely to be effective for a particular patient.


The method 100 could also be implemented in the field of pharmacology/bioinformatics. In this case, the input patterns could comprise features that describe primary, secondary and tertiary structures of proteins, while the target values could represent estimated affinities or binding energies to particular types of receptors. An accurate regression model could assist in drug design and development, by prescreening a large number of candidate molecules and selecting a small number of candidates with highest expected affinities for in vitro or in vivo testing.


In addition, the method 100 could be implemented in the fields of marine/geophysical/atmospheric sciences. In this case, the input patterns might comprise signal data that results from acoustic or gamma densitometer probes of porous rock formations, while the target values might represent expected densities of oil or oil/water/natural gas fractions. An accurate regression model would be useful in searching for oil, natural gas and other types of natural resources. Alternatively, the input patterns may comprise climatic or ecological data such as temperature, carbon dioxide levels, or population levels of various aquatic species, while the target values might represent future sea levels, temperatures, or animal population levels. An accurate regression model may be useful in forecasting sea level changes, or weather or climate changes, particularly in relation to human activities such as carbon dioxide emissions or depletion of various fish species.


The method 100 may also be implemented in the field of financial/economic modeling. In this case, the input patterns might comprise readings at particular points of time of econometric variables (e.g., inflation, gross domestic product growth, interest rates, corporate debt levels, etc.), while the target values might represent predictions of future quantities (e.g., inflation, interest rates, equity prices, etc.). An accurate regression model would be useful in making economic forecasts, in portfolio management or in trading in various financial markets.


Additionally, the method 100 may be implemented in the field of consumer/customer modeling. In this case, the input patterns could comprise data pertaining to individual consumers (e.g., age, annual income, historical spending and debt payment patterns, etc.), while the target values could represent expected spending in response to particular promotional campaigns or expected delinquency in repaying a given level of credit card debt. An accurate regression model would be useful in targeted marketing or in approval of credit card purchases.



FIG. 2 is a flow diagram illustrating one embodiment of a method 200 for training on a batch of exemplars. The method 200 may be implemented, for example, in order to perform a training sweep on a training set of exemplars as described with regard to step 110 of the method 100.


The method 200 is initialized at step 202 and proceeds to step 204, where the method 200 selects a batch of exemplars for processing. The method 200 then proceeds to step 206 and selects an exemplar $i$ from the selected batch. In step 208, the method 200 computes a function approximator estimate $\hat{y}_i = F(\vec{x}_i)$ for the selected exemplar.


In step 210, the method 200 computes a difference between the recorded target (output) value $y_i$ for the selected exemplar and the function approximator estimate $\hat{y}_i$. The method 200 then proceeds to step 212 and determines whether there are any unexamined exemplars remaining in the batch (i.e., any exemplars for which a function approximator estimate and a difference have not been computed). If the method 200 determines that at least one exemplar does remain to be examined in the batch, the method 200 returns to step 206 and proceeds as described above to select and process a next exemplar.


Alternatively, if the method 200 determines in step 212 that there are no exemplars remaining to be examined in the batch, the method 200 proceeds to step 214 and adjusts the (initialized) distance metric and the (initialized) function approximator in accordance with the set of differences $\{(y_i - \hat{y}_i)\}$ for all exemplars $i$ in the batch. These adjustments reduce a given error measure on the batch of exemplars.
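
Steps 206-214 can be sketched as follows (assuming NumPy and the rbf_predict sketch given earlier; illustrative only):

```python
import numpy as np

def batch_differences(X_batch, y_batch, X_train, y_train, L):
    """The set of differences {(y_i - y_hat_i)} over one batch."""
    diffs = []
    for x_i, y_i in zip(X_batch, y_batch):
        y_hat = rbf_predict(x_i, X_train, y_train, L)  # step 208
        diffs.append(y_i - y_hat)                      # step 210
    return np.asarray(diffs)  # drives the adjustment of step 214
```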


In step 216, the method 200 determines whether there are any unexamined batches remaining in the training set of exemplars (i.e., any batches for which the distance metric and function approximator have not been adjusted). If the method 200 concludes in step 216 that at least one unexamined batch remains, the method 200 returns to step 204 and proceeds as described above to select and process a next batch.


Alternatively, if the method 200 concludes in step 216 that no unexamined batches remain, the method 200 terminates in step 218.


In one embodiment, adjustments made to the distance metric and to the function approximator (e.g., in accordance with step 214 of the method 200) respect one or more hard or soft constraints on allowable adjustments. For example, in the case where the distance metric D( ) computes a Mahalanobis distance, a constraint may be imposed that dictates that the rank of the Mahalanobis matrix may not exceed a specified upper bound. Likewise, the constraints may also embody well-known statistical methods (e.g., “regularization” methods) aimed at reducing the occurrence of overfitting, as described in further detail below.


In one embodiment, if the dependence of the function approximator output ŷ on the distance metric D( ) or on tunable parameters or structure θ is differentiable, the adjustments are computed in accordance with a standard Gradient-Descent technique (e.g., as described by D. E. Rumelhart et al. in “Parallel Distributed Processing”, Vols. 1 and 2, Cambridge, Mass.: MIT Press, 1986, which is herein incorporated by reference in its entirety) applied to a quadratic error measure $\sum_i (y_i - \hat{y}_i)^2$ summed over all exemplars in the batch. For example, when adjusting the elements $L_{jk}$ of a linear transformation matrix $L$ used in computing a Mahalanobis distance, the adjustment $\Delta L_{jk}$ would be computed as:










$$\Delta L_{jk} = -\varepsilon \, \frac{\partial}{\partial L_{jk}} \sum_i \left(y_i - \hat{y}_i\right)^2 \qquad \text{(EQN. 7)}$$

where $\varepsilon$ is a small constant.
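
A sketch of one such adjustment (assuming NumPy and the rbf_predict sketch above; a central finite-difference approximation stands in here for the analytic derivative of EQN. 7, which a practical implementation would compute directly):

```python
import numpy as np

def batch_error(L, X_batch, y_batch, X_train, y_train):
    """Quadratic error sum_i (y_i - y_hat_i)^2 over the batch."""
    preds = [rbf_predict(x, X_train, y_train, L) for x in X_batch]
    return float(np.sum((np.asarray(y_batch) - np.asarray(preds)) ** 2))

def adjust_L(L, X_batch, y_batch, X_train, y_train, eps=0.01, h=1e-5):
    """One gradient-descent step on the elements L_jk per EQN. 7."""
    grad = np.zeros_like(L)
    for j in range(L.shape[0]):
        for k in range(L.shape[1]):
            Lp, Lm = L.copy(), L.copy()
            Lp[j, k] += h
            Lm[j, k] -= h
            grad[j, k] = (batch_error(Lp, X_batch, y_batch, X_train, y_train)
                          - batch_error(Lm, X_batch, y_batch, X_train, y_train)
                          ) / (2.0 * h)
    return L - eps * grad  # Delta L_jk = -eps * dE/dL_jk
```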


In an alternative embodiment, if the dependence of the function approximator output ŷ on the distance metric D( ) or on tunable parameters or structure θ is not differentiable, the adjustments to the distance metric or to the tunable parameters or structure are computed in accordance with a standard Derivative-Free Optimization procedure (e.g., hill climbing, simulated annealing or the like).


In other embodiments, other standard error measures (e.g., cross-entropy, hinge-loss error and the like) and/or other standard optimization techniques (e.g., conjugate-gradient, quasi-Newton, second-order, convex optimization and the like) are implemented to compute adjustments to the distance metric and/or to the tunable parameters or structure. Furthermore, the training sweep methodology implemented in accordance with the present invention may incorporate any one or more of a number of well-known statistical methodologies in order to reduce the occurrence of overfitting. For example, a variety of methods for “regularization”, such as penalizing learning parameters of large absolute magnitude, may be applied to the adjustments computed for the distance metric D( ) or the tunable parameters or structure θ. Additionally, the criterion for termination of training may incorporate principles aimed at reducing overfitting, as described above with respect to step 112 of the method 100. In further embodiments, the training sweep methodology may additionally incorporate established methods for “semi-supervised learning” (e.g., minimum entropy regularization), which permit effective training on datasets containing one or more unlabeled exemplars (i.e., input patterns without associated target values).
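
For example, an L2 penalty on the entries of $L$, one of the magnitude-penalizing regularizers mentioned above, can be folded into the error being minimized (a sketch building on the batch_error sketch above):

```python
import numpy as np

def regularized_batch_error(L, X_batch, y_batch, X_train, y_train, lam=1e-3):
    """Batch error plus lam * ||L||_F^2, discouraging large-magnitude entries."""
    return (batch_error(L, X_batch, y_batch, X_train, y_train)
            + lam * float(np.sum(L * L)))
```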



FIG. 3 is a high level block diagram of the regression modeling method that is implemented using a general purpose computing device 300. In one embodiment, a general purpose computing device 300 includes a processor 302, a memory 304, a regression modeling module 305 and various input/output (I/O) devices 306 such as a display, a keyboard, a mouse, a modem, and the like. In one embodiment, at least one I/O device is a storage device (e.g., a disk drive, an optical disk drive, a floppy disk drive). It should be understood that the regression modeling module 305 can be implemented as a physical device or subsystem that is coupled to a processor through a communication channel.


Alternatively, the regression modeling module 305 can be represented by one or more software applications (or even a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC)), where the software is loaded from a storage medium (e.g., I/O devices 306) and operated by the processor 302 in the memory 304 of the general purpose computing device 300. Thus, in one embodiment, the regression modeling module 305 for performing nonlinear regression described herein with reference to the preceding Figures can be stored on a computer readable medium or carrier (e.g., RAM, magnetic or optical drive or diskette, and the like).


Thus, the present invention represents a significant advancement in the field of data analysis. Embodiments of the present invention simultaneously learn a regression-relevant distance metric (between points in an input space) and a distance-based function approximator, which enables application of sophisticated linear or nonlinear dimensionality reduction techniques, thereby addressing the curse of dimensionality. Embodiments of the present invention are broadly useful for purposes of estimation, prediction and control throughout many areas of science and engineering, including, but not limited to, systems management, medical science, pharmacology, bioinformatics, marine, geophysical and atmospheric sciences, financial and economic modeling and consumer/customer modeling.


While the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims
  • 1. A method for analyzing data, comprising: receiving a set of one or more exemplars, where at least two of the one or more exemplars comprise an input pattern and at least one of the one or more exemplars comprises a target value associated with the input pattern; initializing a distance metric, where the distance metric computes a distance between pairs of input patterns; initializing a function approximator; and adjusting the distance metric such that an accuracy measure of the function approximator on the set of one or more exemplars is improved.
  • 2. The method of claim 1, wherein the distance metric takes the form of a Mahalanobis distance.
  • 3. The method of claim 2, wherein initializing the distance metric comprises: setting initial values for one or more elements in a positive semi-definite matrix.
  • 4. The method of claim 3, wherein the setting comprises: setting the one or more elements to random values.
  • 5. The method of claim 3, wherein the setting comprises: setting the one or more elements to values that correspond to an identity matrix.
  • 6. The method of claim 1, wherein the function approximator is governed by a set of distances between the input pattern and a set of one or more reference points.
  • 7. The method of claim 6, wherein initializing the function approximator comprises: setting a number of the one or more reference points; and setting locations for the one or more reference points.
  • 8. The method of claim 6, wherein initializing the function approximator comprises: setting one or more adjustable parameters to initial values.
  • 9. The method of claim 1, wherein the adjusting comprises: performing one or more training sweeps through the set of one or more exemplars to produce a trained distance metric and a trained function approximator.
  • 10. The method of claim 9, wherein performing a training sweep comprises: dividing the set of one or more exemplars into one or more batches, each of the one or more batches comprising one or more of the one or more exemplars; computing for each exemplar in a given batch a difference between a function approximator estimated value and a target value associated with the exemplar; and adjusting the distance metric and the function approximator in accordance with a set comprising a difference between the function approximator estimated value and the target value for each of the exemplars in the given batch.
  • 11. The method of claim 10, wherein adjusting the distance metric and the function approximator comprises respecting one or more constraints on allowable adjustments.
  • 12. The method of claim 10, wherein adjusting the distance metric and the function approximator is performed in accordance with at least one of: a gradient-descent technique, a second-order technique, or a derivative-free optimization procedure.
  • 13. The method of claim 9, further comprising: generating a reduction mapping in accordance with the trained distance metric and the set of one or more exemplars; and applying the reduction mapping to the set of one or more exemplars to produce a lower-dimensionality set of one or more exemplars.
  • 14. The method of claim 13, further comprising: generating a new regression model in accordance with the lower-dimensionality set of one or more exemplars, the new regression model estimating a target value for a general reduced input vector.
  • 15. The method of claim 13, further comprising: plotting or visualizing the lower-dimensionality set of one or more exemplars in a manner that facilitates interpretation, analysis or understanding of the original set of one or more exemplars by a human data analyst.
  • 16. The method of claim 1, further comprising: delivering at least one of the adjusted distance metric and the function approximator to an application that performs at least one of: estimation, prediction or control.
  • 17. The method of claim 16, wherein the application relates to at least one of: systems management, medical science, pharmacology, bioinformatics, marine science, geophysical science, atmospheric science, financial modeling, economic modeling or consumer modeling.
  • 18. A computer readable medium containing an executable program for analyzing data, where the program performs the steps of: receiving a set of one or more exemplars, where at least two of the one or more exemplars comprise an input pattern and at least one of the one or more exemplars comprises a target value associated with the input pattern; initializing a distance metric, where the distance metric computes a distance between pairs of input patterns; initializing a function approximator; and adjusting the distance metric such that an accuracy measure of the function approximator on the set of one or more exemplars is improved.
  • 19. The computer readable medium of claim 18, wherein the distance metric takes the form of a Mahalanobis distance.
  • 20. The computer readable medium of claim 19, wherein initializing the distance metric comprises: setting initial values for one or more elements in a positive semi-definite matrix.
  • 21. The computer readable medium of claim 20, wherein the setting comprises: setting the one or more elements to random values.
  • 22. The computer readable medium of claim 20, wherein the setting comprises: setting the one or more elements to values that correspond to an identity matrix.
  • 23. The computer readable medium of claim 18, wherein the function approximator is governed by a set of distances between the input pattern and a set of one or more reference points.
  • 24. The computer readable medium of claim 23, wherein initializing the function approximator comprises: setting a number of the one or more reference points; and setting locations for the one or more reference points.
  • 25. The computer readable medium of claim 23, wherein initializing the function approximator comprises: setting one or more adjustable parameters to initial values.
  • 26. The computer readable medium of claim 18, wherein the adjusting comprises: performing one or more training sweeps through the set of one or more exemplars to produce a trained distance metric and a trained function approximator.
  • 27. The computer readable medium of claim 26, wherein performing a training sweep comprises: dividing the set of one or more exemplars into one or more batches, each of the one or more batches comprising one or more of the one or more exemplars; computing for each exemplar in a given batch a difference between a function approximator estimated value and a target value associated with the exemplar; and adjusting the distance metric and the function approximator in accordance with a set comprising a difference between the function approximator estimated value and the target value for each of the exemplars in the given batch.
  • 28. The computer readable medium of claim 27, wherein adjusting the distance metric and the function approximator comprises respecting one or more constraints on allowable adjustments.
  • 29. The computer readable medium of claim 27, wherein adjusting the distance metric and the function approximator is performed in accordance with at least one of: a gradient-descent technique, a second-order technique or a derivative-free optimization procedure.
  • 30. The computer readable medium of claim 26, further comprising: generating a reduction mapping in accordance with the trained distance metric and the set of one or more exemplars; and applying the reduction mapping to the set of one or more exemplars to produce a lower-dimensionality set of one or more exemplars.
  • 31. The computer readable medium of claim 30, further comprising: generating a new regression model in accordance with the lower-dimensionality set of one or more exemplars, the new regression model estimating a target value for a general reduced input vector.
  • 32. The computer readable medium of claim 31, further comprising: plotting or visualizing the lower-dimensionality set of one or more exemplars in a manner that facilitates interpretation, analysis or understanding of the original set of one or more exemplars by a human data analyst.
  • 33. The computer readable medium of claim 18, further comprising: delivering at least one of the adjusted distance metric and the function approximator to an application that performs at least one of: estimation, prediction or control.
  • 34. The computer readable medium of claim 33, wherein the application relates to at least one of: systems management, medical science, pharmacology, bioinformatics, marine science, geophysical science, atmospheric science, financial modeling, economic modeling or consumer modeling.
  • 35. A system for analyzing data, comprising: means for receiving a set of one or more exemplars, where at least two of the one or more exemplars comprise an input pattern and at least one of the one or more exemplars comprises a target value associated with the input pattern; means for initializing a distance metric, where the distance metric computes a distance between pairs of input patterns; means for initializing a function approximator; and means for adjusting the distance metric such that an accuracy measure of the function approximator on the set of one or more exemplars is improved.