Distance Quantization in Computing Distance in High Dimensional Space

Information

  • Publication Number
    20100114871
  • Date Filed
    November 02, 2009
  • Date Published
    May 06, 2010
Abstract
Techniques and systems for quantization based nearest neighbor searches can include quantizing a set of candidate points based on one or more characteristics of a query point; generating metric values based on the quantized candidate points, respectively, the metric values being indicative of respective proximities between the query point and the candidate points; and selecting one or more of the candidate points in response to the query point based on the metric values. In some implementations, techniques and systems can compress search metric computation resolution by implementing non-uniform scalar quantization within a metric computation process.
Description
BACKGROUND

This document relates to nearest neighbor search techniques and their implementations based on computer processors.


Various applications perform nearest neighbor searches to locate one or more points closest to an input point, such as a query point. Nearest neighbor searches can include locating a data point inside a data set S in metric space M that is closest to a given query point q ∈ M based on a distance metric $d: q \times S \to \mathbb{R}$. In some cases, metric space M is the k-dimensional Euclidean space $\mathbb{R}^k$. The nearest neighbor for a point is given by:






$$\mathrm{NN}(q) = \{\, \bar{x} \in S \mid \forall x \in S \subset M,\ q \in M : d(\bar{x}, q) \le d(x, q) \,\}.$$


A distance metric can measure the proximity of one point to another point. One example of a distance metric is the Minkowski metric. A Minkowski metric of order p, also known as the p-norm distance, measures a distance between two k-dimensional data points q and r. The Minkowski metric is defined as:







$$d(q, r) = \|q - r\|_p = \left( \sum_{j=1}^{k} \left| q_j - r_j \right|^{p} \right)^{\!1/p}.$$






Performing a metric computation can include calculating a value based on the Minkowski metric. For example, a metric computation can include performing a distance computation in each dimension to compute respective dimension-distances, $\mathrm{dist}_j(q,r) = |q_j - r_j|^p$; performing a summation of all such distances:










$$\sum_{j=1}^{k} \mathrm{dist}_j(q, r),$$




and performing a 1/p-th power computation on the output of the summation to produce the metric value.
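
As a concrete illustration of these three steps, consider the following minimal sketch (Python with NumPy; the function name is illustrative and not part of the described systems):

```python
import numpy as np

def minkowski_distance(q, r, p=2):
    """Order-p Minkowski distance between two k-dimensional points.

    Follows the three steps described above: per-dimension distances
    dist_j = |q_j - r_j|^p, a summation over all k dimensions, and a
    final 1/p-th power computation.
    """
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    dim_distances = np.abs(q - r) ** p      # dist_j(q, r) for each dimension j
    return dim_distances.sum() ** (1.0 / p)

# Example: Euclidean (p = 2) distance between two 3-dimensional points.
print(minkowski_distance([0, 0, 0], [1, 2, 2], p=2))  # -> 3.0
```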


SUMMARY

This document describes, among other things, technologies that perform quantization based nearest neighbor searches.


Techniques for quantization based nearest neighbor searches can include quantizing a set of candidate points based on one or more characteristics of a query point; generating metric values based on the quantized candidate points, respectively, the metric values being indicative of respective proximities between the query point and the candidate points; and selecting one or more of the candidate points in response to the query point based on the metric values. Other implementations can include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.


These and other implementations can include one or more of the following features. Implementations can compress search metric computation resolution based on non-uniform scalar quantization within a metric computation process. Quantizing the candidate points can include accessing non-uniform intervals based on the query point, each non-uniform interval being described by one or more threshold values and associated with a range of inputs and an output, and quantizing the candidate points based on non-uniform intervals. The query point and the candidate points can include elements that correspond to respective dimensions. Quantizing the candidate points can include using different sets of non-uniform intervals, associated with respective different ones of the dimensions, to quantize the dimensional elements of the candidate points, each set of non-uniform intervals selected based on a respective element of the query point. Generating metric values based on quantized candidate points can include summing quantized elements of a quantized candidate point to produce a metric value. Implementations can include performing motion estimation based on information including the selected one or more candidate points.


Implementations can include determining one or more quantizers that preserve distance ranking between the query point and the candidate points. Quantizing the candidate points based on one or more characteristics of the query point can include using the one or more quantizers. Quantizing the candidate points based on one or more characteristics of the query point can include using different quantizers, associated with different dimensions, to quantize elements. Determining one or more quantizers can include determining a number of quantization levels, one or more quantization threshold values, and mapping values for one or more dimensions.


Implementations can include determining one or more statistical characteristics of multiple, related, query points; the query points can include elements that correspond to respective dimensions. Implementations can include determining one or more quantizers based on the one or more statistical characteristics, each quantizer corresponding to at least one of the dimensions and operable to generate a quantized output based on an input. Quantizing the candidate points based on one or more characteristics of the query point can include using the one or more quantizers. Determining one or more quantizers can include determining a quantizer that maps successive bins of input values to respective integer values. Determining one or more quantizers can include determining threshold values that delineate non-uniform quantization intervals based on an iterative process that minimizes a nearest neighbor search measure.


In another aspect, techniques can include accessing a set of candidate points from a memory; and operating processor electronics to perform operations based on the set of candidate points with respect to a query point to produce values being indicative of respective proximities between the query point and the candidate points, and use the values to determine a nearest neighbor point from the set of candidate points. The computations include applying non-uniform quantizations based on one or more characteristics of the query point. Other implementations can include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.


These and other implementations can include one or more of the following features. Applying non-uniform quantizations can include quantizing the candidate points based on non-uniform intervals. Non-uniform intervals can be described by a set of threshold values that are based on the query point. Each one of the quantized candidate points can include quantized elements corresponding to a plurality of dimensions. Operating processor electronics to perform operations can include operating processor electronics to sum quantized elements of a corresponding one of the quantized candidate points to produce a corresponding one of the values. The query point can include elements corresponding to a plurality of dimensions. Candidate points can include elements corresponding to the plurality of dimensions. Operating processor electronics to perform operations can include operating processor electronics to generate, for two or more of the dimensions, a partial distance term that is indicative of a distance between corresponding elements of the query point and each one of the candidate points. Operating processor electronics to perform operations can include operating processor electronics to quantize the partial distance terms based on the non-uniform intervals. Operating processor electronics to perform operations can include operating processor electronics to determine a metric value based on a summation of the quantized partial distance terms associated with the each one of the candidate points. Partial distance terms can include dimension-distance terms. Quantizing can reduce a bit-depth of each dimension-distance term.


In another aspect, apparatuses and systems can include a memory configured to store data points and processor electronics. Data points can include elements that correspond to respective dimensions. Processor electronics can be configured to access a query point, use one or more of the data points as candidate points, use one or more quantizers to quantize the candidate points based on one or more characteristics of the query point, generate metric values based on the quantized candidate points, respectively, the metric values being indicative of respective proximities between the query point and the candidate points, select one or more of the candidate points, based on the metric values, as an output to the query point.


Particular embodiments of the subject matter described in this document can be implemented so as to realize one or more of the following advantages. Quantization based metric computations based on non-uniform quantization can preserve nearest neighbor search rankings. Applying non-uniform quantization to candidate points can maintain distance rankings.


Quantization based metric techniques can provide reduced complexity for metric computations. In some implementations, the number of computationally expensive arithmetic processes, such as those associated with calculating non-quantized dimension-distances, can be reduced. The complexity of one or more additional arithmetic processes associated with a metric computation can also be reduced. Quantization based metric techniques can be implemented such that complexity does not increase with the order of the lp norm. In some implementations, quantizing the output of each dimension-distance computation into a 1-bit output can significantly reduce implementation complexity while leaving performance nearly unchanged for several applications, because dimension-distances tend to exhibit very compact, low-variance statistical characteristics. Implementations can use one or more data sets or dimension reduction techniques to provide additional complexity reduction.


Quantization based metric techniques can be implemented with one or more applications such as video processing, vector quantization, information retrieval, pattern recognition, optimization tasks, and computer graphics. For example, quantization based metric techniques can be implemented to find similar images in a database. Video coding applications can use quantization based metric techniques for various tasks such as motion estimation and compensation for video coding. For example, without using any filtering, transform, or sorting process, one or more embodiments based on the described techniques and systems can provide on average 0.02 dB loss using only 1 bit per dimension instead of 8 bits and 0.0 dB loss when 2 bits are used.


The details of one or more embodiments of the subject matter described in this document are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A and 1B show different examples of non-uniform quantization within a metric computation technique.



FIGS. 2A and 2B show examples of various circuitry in some metric computation implementations.



FIGS. 3A and 3B show examples of complexity behaviors for different metric computation techniques based on input bit size.



FIGS. 4A, 4B, and 4C show examples of comparisons of complexity-performance trade-offs for four different scenarios.



FIGS. 5A and 5B show examples of comparisons between different cost functions.



FIG. 6 shows examples of the performance of different techniques as a function of quantization thresholds.



FIGS. 7A and 7B show examples of the performance of different techniques as a function of bit rate.



FIGS. 8A, 8B, and 8C show example performance measures using different image sequences.



FIG. 8D shows examples of different computational complexity costs.



FIG. 9 shows an example of a quantization based on a nearest-neighbor-preserving metric approximation technique.



FIGS. 10A and 10B show examples of metric computation architectures that include one or more quantizers.



FIG. 11 shows an example of a system configured to perform non-uniform quantization based metric computations.



FIG. 12 shows an example of a process that includes non-uniform quantization based metric computations.





Like reference symbols and designations in the various drawings indicate like elements.


DETAILED DESCRIPTION

Searching for nearest neighbors in high dimensional spaces is important to a wide range of areas and related applications including statistics, pattern recognition, information retrieval, molecular biology, optimization tasks, artificial intelligence, robotics, computer graphics, and data compression. Various real world applications perform searches in high dimensional search spaces with often highly varying, non-deterministic data sets, which can lead to exponentially increasing complexity in search computations. Thus, finding the nearest neighbor in high dimensional space can pose serious computational challenges due to factors including the size of the data point set, such as a database, the dimensionality of the search space, and metric complexity. For example, the computational complexity associated with some nearest neighbor implementations can be high due to high dimensional space searches.


Some nearest neighbor techniques reduce complexity by altering the data set while computing a distance metric to full precision. Some techniques are based on using partial information, e.g., a data set S can be altered so that only parts of the database are searched or only part of the query data is used for matching. For example, some techniques search only a subset of candidate vectors. In another example, some techniques reduce search space dimensionality. Some nearest neighbor techniques can alter the data set S by using algorithms such as data-restructuring, filtering, sorting, sampling, transforming, bit-truncating, or quantizing, and can blindly compute a given metric to full resolution for a dissimilarity comparison based on such altered data to locate the minimum distance data point.


This document describes, among other things, techniques and systems to perform fast nearest neighbor search computations. The described techniques and systems can provide a significant reduction in complexity by preserving the fidelity of the minimum distance ranking instead of the data set S, and by selectively reducing the search metric computation resolution, instead of blindly computing a metric to full resolution. In some implementations, a metric value is computed only in order to compare different candidate points, and thus the metric value itself is not important, as long as the metric value provides relative information that permits a metric technique to identify the candidate closest to the query point. A query point can be represented as a query vector. The described techniques and systems can apply non-uniform quantization based on one or more characteristics of a query point to reduce the search metric computation resolution in such a way that the minimum distance ranking is most likely to be preserved. The techniques and systems can use one or more quantizers optimized to minimize the impact of quantization on identifying the nearest neighbor.


A nearest neighbor search process can include accessing a query point and a set of candidate points. The process can quantize the candidate points based on one or more characteristics of the query point. The process can calculate metric values based on the quantized candidate points. In some implementations, the metric values are indicative of respective proximities between the query point and the candidate points. The process can output one or more of the candidate points in response to the query point based on the metric values. In some implementations, the output can include the identities of one or more candidate points in a distance rank order.
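
A minimal sketch of such a process (Python with NumPy; `build_query_quantizers` is a hypothetical helper standing in for whatever derives the per-dimension non-uniform intervals from the query point) might look like:

```python
import numpy as np

def quantized_nn_search(q, candidates, build_query_quantizers):
    """Sketch of a quantization based nearest neighbor search.

    `build_query_quantizers(q)` is assumed to return, for each dimension j,
    a pair (thresholds_j, outputs_j): sorted interval boundaries derived
    from the query point q, and the integer output assigned to each
    interval. Summing the per-dimension outputs gives the metric value.
    """
    quantizers = build_query_quantizers(q)
    metric_values = []
    for r in candidates:
        value = 0
        for j, (thresholds, outputs) in enumerate(quantizers):
            interval = np.searchsorted(thresholds, r[j])  # interval r_j falls in
            value += outputs[interval]                    # quantized element
        metric_values.append(value)
    best = int(np.argmin(metric_values))   # smallest metric value wins
    return candidates[best], metric_values
```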


Various nearest neighbor search processes can use a quantization based metric computation method. For example, metric techniques can apply quantization to one or more aspects of a Minkowski metric. A quantized metric can be based on a quantized form of a Minkowski metric. A quantized metric can include a quantization function such as $Q_j$ or $\bar{Q}_j$. For example, a quantized metric $\bar{d}$ can be represented as:








$$\bar{d}(q, r) = \left( \sum_{j=1}^{k} Q_j\!\left( \left| q_j - r_j \right|^{p} \right) \right)^{\!1/p}.$$





In another aspect, a quantized metric $\bar{d}$ can be represented as:








$$\bar{d}(q, r) = \left( \sum_{j=1}^{k} \bar{Q}_j(r_j) \right)^{\!1/p}.$$





The above two equations, including $Q_j$ and $\bar{Q}_j$ respectively, may have similar, if not identical, computational performance. A quantizer can be configured to quantize the output of a dimension-distance $|q_j - r_j|^p$. Such a quantizer can provide a reduced bit-depth for each dimension-distance output, which can lead to a significant complexity reduction in metric processes such as a tree of k−1 summations and a 1/p-th power computation. In some implementations, a quantizer can be implemented such that the input dimension-distance computation $|q_j - r_j|^p$ does not have to be computed at all. In some cases, quantizer thresholds can be fixed over multiple queries, and a given query point q is constant while searching over many different candidate points, e.g., different r. In some implementations, a quantizer can be configured to directly quantize candidate points. For example, candidate points can be quantized directly, without having to compute $|q_j - r_j|^p$ first and then apply quantization.


The quantization function $Q_j$ represents quantization on a j-th dimensional input and uses a threshold set such as $\{\theta_{ji}\}_{i=1}^{N-1}$. The quantization function $\bar{Q}_j$ uses a threshold set such as $\{q_j \pm \theta_{ji}^{1/p}\}_{i=1}^{N-1}$. Compared to $Q_j$, $\bar{Q}_j$ uses twice as many thresholds, even though the computation of $|q_j - r_j|^p$ is not required. In some implementations, quantization using $\bar{Q}_j$ can be performed using a table-lookup method to increase performance. In some implementations, the inversion operation and the p-th power operation associated with a metric function can be replaced with operations that compare a value with one or more thresholds. In some implementations, a quantization function can use a threshold set such as $\{\mathrm{sign}(q_j - r_j) \cdot \theta_{ji}^{1/p}\}_{i=1}^{N-1}$ or $\{\pm \theta_{ji}^{1/p}\}_{i=1}^{N-1}$.
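
The correspondence between the two threshold sets can be checked numerically. Below is a minimal sketch (all numeric values are illustrative; one dimension, N = 4 levels, p = 2) in which Q quantizes the dimension-distance via a lookup over {θ_i}, while the Q̄-style quantizer operates directly on the candidate element using the folded thresholds {q_j ± θ_i^(1/p)}:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 2.0
q_j = 0.6                               # one element of the query point
theta = np.array([0.01, 0.04, 0.09])    # thresholds on y = |q_j - r_j|^p

def Q(y):
    """Quantize a dimension-distance y = |q_j - r_j|^p using {theta_i}."""
    return np.searchsorted(theta, y, side='right')

# Direct quantizer: folded thresholds {q_j +/- theta_i^(1/p)} plus a
# symmetric output table, so |q_j - r_j|^p never has to be computed.
t = theta ** (1.0 / p)
folded = np.concatenate([q_j - t[::-1], q_j + t])  # sorted boundaries
outputs = np.array([3, 2, 1, 0, 1, 2, 3])          # output per interval

def Q_bar(r):
    return outputs[np.searchsorted(folded, r)]

# The two forms agree; exact ties at interval boundaries, which have
# probability zero for continuous inputs, are ignored in this sketch.
r = rng.uniform(0.0, 1.0, size=10_000)
assert np.array_equal(Q(np.abs(q_j - r) ** p), Q_bar(r))
```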


Quantization based metric computation can reduce the associated computational complexity. In some implementations, this results in reduced complexity in one or more calculations of a metric computation. In some implementations, such a complexity reduction may come with some performance loss due to possible information loss caused by the quantization process. For example, a coarser quantizer may increase the complexity reduction ratio while leading to increased information loss.



FIG. 1A shows an example of non-uniform quantization within a metric computation technique. In this example, a metric computation technique, such as one based on a Minkowski metric, can measure the dissimilarity between two k-dimensional data points q and x. The metric computation technique can include computing partial distance terms such as dimension-distance terms, e.g., $\mathrm{dist}_j(q,x) = |q_j - x_j|^p$, for each dimension by performing operations such as subtracting 110, taking the absolute value 115, and raising to the p-th power 120.


The metric computation technique can include applying non-uniform scalar quantization 125 on the partial distance terms to produce quantized partial distance terms. Applying non-uniform scalar quantization 125 can include using a set of integer values that are assigned to respective intervals that cover the possible input values. For example, a partial distance term value that falls into a specific interval can be assigned the integer value corresponding to that specific interval; hence, the quantized partial distance value can be the corresponding integer value. The intervals are not required to be uniform: the set of intervals can be non-uniform, e.g., one interval can have a larger span than another interval. The metric computation technique can sum over the quantized partial distance terms using a network of one or more summations 130. The metric computation technique can perform a 1/p-th power computation 135 on an output of the summation(s) to produce an output. In some implementations, non-uniform scalar quantization 125 can include using a quantizer that is chosen to preserve a minimum distance ranking.



FIG. 1B shows a different example of non-uniform quantization within a metric computation technique. In this example, a metric computation technique, such as one based on a Minkowski metric, can measure the dissimilarity between two k-dimensional data points q and x. The metric computation technique can apply a non-uniform scalar quantization 140 on each of the dimensional values for the point x, which is represented by Q′(x_i). The quantization function can be different for one or more of the dimensions. For example, each dimensional value can be quantized using different sets of intervals and different values assigned to the intervals. In some implementations, the selection of intervals is based on a query point, which is represented by q in this example. Applying non-uniform scalar quantization 140 can transform an n-bit value representation into a 1-bit quantized value representation. The quantized outputs can be summed via one or more summations 145. The metric computation technique can perform a 1/p-th power computation 150 on an output of the summation network to produce an output. In some implementations, non-uniform scalar quantization 140 can include using a quantizer that is chosen to preserve a minimum distance ranking.


In one aspect, non-uniform scalar quantization can be applied within a metric computation process. In various implementations, this approach can be used to achieve significant complexity savings by reducing the number of operations, such as the total number of additions, and the complexity, such as the adder bit depth, of the required arithmetic operations. Moreover, these computational savings can have minimal impact on performance because the quantization process can preserve the fidelity of the minimum distance ranking. In some implementations, metric computation processes can include non-uniform scalar quantization together with one or more techniques that modify a candidate data set S.


Computational complexity, computation-related power consumption, and circuit size of most arithmetic elements, such as adders or multipliers, can depend on the input bit depth. Computational complexity tends to increase linearly or exponentially with the number of input bit lines in the various circuitry that executes such computations. A dimension-distance computation that includes performing $|q_j - r_j|^p$ can produce an n·p-bit output, where the inputs $q_j$ and $r_j$ are represented by n-bit numbers (e.g., n = 8, 16, 32, or 64 bits). A metric value can be computed by summing the distances computed in each dimension. Circuitry to implement such a summation can include k−1 multiple-bit adders with a maximum bit depth of $n \cdot p + \lceil \log_2 k \rceil$.


Various implementations of the described subject matter can reduce computational complexity associated with nearest neighbor searches at the circuit level. The input bit-depth of each arithmetic element can be incorporated into a complexity analysis.



FIGS. 2A and 2B show examples of various circuitry in some metric computation implementations. FIG. 2A shows an example of an adder circuit 205. FIG. 2B shows an example of a multiplier circuit 210. The size of an arithmetic circuit, such as an adder or multiplier circuit, may increase with the input bit size. For example, the computational complexity, circuit size, static and dynamic power consumption, and computation delays of most basic arithmetic elements, including adders and multipliers, can be influenced by, and increase polynomially with, the input bit size. Therefore, quantization applied to the partial distance terms in each dimension, e.g., as shown in FIG. 1A, can significantly reduce the complexity associated with a summation process. In some applications such as video coding, very coarse quantization is possible (e.g., to 1 bit), which can result in reduced complexity in a summation process while leaving video coding performance nearly unchanged.


Metric computations can use quantizers that eliminate the per-dimension distance computation $|q_j - x_j|^p$, e.g., in an implementation based on the architecture shown in FIG. 1B. In some cases, the quantizer thresholds $\{\theta_i\}$ and the query vector q are fixed for a given search query, so that only the x ∈ S being tested for their proximity to q vary. Therefore, candidate data x can be quantized directly with a quantizer $Q': \{q \pm \theta_i^{1/p}\}$, which can lead to the same result as computing $|q_j - x_j|^p$ followed by quantization with $Q: \{\theta_i\}$, but at a fraction of the complexity.



FIG. 3A shows an example of complexity behaviors for different metric computation techniques based on input bit size. FIG. 3B shows an example of complexity behaviors for different metric computation techniques based on dimensionality. In these examples, the metric computation techniques include conventional l1 and l2 norm metric computations and a proposed distance quantization based lp norm metric computation. FIGS. 3A and 3B show that complexity increases as a function of the input bit size, dimensionality, and order p of the metric (p-norm distance) for both the conventional and proposed metric computations. Complexity can be measured, for example, in units of the number of full-adder operations, e.g., the basic building blocks of arithmetic logic circuits, under the assumption that n-bit addition, subtraction, and absolute value operations have the same complexity and that a square operation has complexity equivalent to that of an n²-bit addition. For a motion estimation example, the dimensionality represents the number of pixels per matching block and the input bit size represents the pixel bit-depth. In these examples, the complexity of the proposed distance quantization remains constant over different input bit sizes and lp norms, while its complexity increases only slowly with dimensionality compared to the conventional metric computations.


A quantized metric computation process can use an output of a quantizer optimization technique that determines one or more optimal quantizers. A quantizer optimization technique can include using a cost function to quantify the difference in performance between an arbitrary search algorithm and a chosen benchmark search algorithm. A cost function can be based on computing the average difference in distance between a query point and the, possibly different, nearest neighbors identified by each algorithm. The search dataset, a query, a metric, and a resulting nearest neighbor data point of a benchmark algorithm are represented by S, q, d, and NN(q), respectively. Here,






$$\mathrm{NN}(q) = \{\, \bar{x} \in S \mid \forall x \in S \subset M,\ q \in M : d(\bar{x}, q) \le d(x, q) \,\}.$$


The search dataset, a query, a metric, and a resulting nearest neighbor data point of a target algorithm are represented by $\bar{S}$, q, $\bar{d}$, and $\overline{\mathrm{NN}}(q)$, respectively. Here,







$$\overline{\mathrm{NN}}(q) = \{\, \bar{x} \in \bar{S} \mid \forall x \in \bar{S} \subset S,\ q \in M : \bar{d}(\bar{x}, q) \le \bar{d}(x, q) \,\}.$$


A nearest neighbor cost function, $E_{NN}$, can be written as:






$$E_{NN} = E\left\{\, d\!\left(\overline{\mathrm{NN}}(q), q\right) - d\!\left(\mathrm{NN}(q), q\right) \,\right\}.$$


$E_{NN}$ can represent an average NNS error measure. In some implementations, the expectation E is with respect to the query data when S and $\bar{S}$ are fixed. In some implementations, the expectation E is with respect to the set $\{(q, S, \bar{S})_i\}_i$. The cost function can be further expressed as:






$$E_{NN} = \int_{\mathbb{R}^+} \mu(a)\,\bar{f}(a)\,da \;-\; \int_{\mathbb{R}^+} a\, f(a)\,da,$$


where $\mu(a) = E\{\, d(x,q) \mid \bar{d}(x,q) = a,\ x \in \bar{S} \,\}$, and the minimum distance distribution functions are $\bar{f}(a) = \Pr(\bar{d}(\overline{\mathrm{NN}}(q), q) = a)$ and $f(a) = \Pr(d(\mathrm{NN}(q), q) = a)$.


In some implementations, only the first terms in the equations for $E_{NN}$ are considered, because the target algorithm affects the first term and not the second term. Therefore, a cost function can be expressed as $\bar{E} = E\{d(\overline{\mathrm{NN}}(q), q)\} = \int_{\mathbb{R}^+} \mu(a)\,\bar{f}(a)\,da$.


In some implementations, techniques and systems do not modify a data set S or a query point q, but instead use a quantizer within the Minkowski metric computation. Thus, the Minkowski metric can be used as a benchmark with $\bar{S} = S$ to find a quantizer that, for a given number of quantization levels N, can minimize $E_{NN}$. Instead of considering statistical information of x ∈ S and q separately, a cost function can be based on statistical characteristics of Y, a k-dimensional multivariate random variable representing the input data on which a quantizer is applied:






$$Y_i = (y_{i1}, y_{i2}, \ldots, y_{ik}) = \left( |q_1 - x_{i1}|^p,\ |q_2 - x_{i2}|^p,\ \ldots,\ |q_k - x_{ik}|^p \right).$$


Quantized input can be described as:






$$Z_i = (z_{i1}, z_{i2}, \ldots, z_{ik}) = Q(Y_i) = \left( Q(y_{i1}), Q(y_{i2}), \ldots, Q(y_{ik}) \right).$$


Corresponding benchmark and proposed target metrics are:







$$d(Y_i) = \left( \sum_{j=1}^{k} y_{ij} \right)^{\!1/p}, \qquad \bar{d}(Y_i) = d\left( Q(Y_i) \right) = \left( \sum_{j=1}^{k} z_{ij} \right)^{\!1/p}.$$






The number of candidates M and their dimension k can be assumed to be fixed over the search process. A quantizer operating on y can be described as a set of N non-overlapping intervals that cover all possible values of y: $S = \{ s_n;\ s_n = [\theta_n, \theta_{n+1}),\ n \in \Phi \}$, where Φ is the set of consecutive integers from 0 to N−1 and $\{\theta_n\}$ is an increasing sequence of thresholds. Therefore, for all $y_{ij} \in s_n$ we assign $z_{ij} = Q(y_{ij}) = n$, and the probability mass function (pmf) $p_{ij}$ and centroid $\mu_{ij}$ of $z_{ij}$ can be computed using $f_{y_{ij}}$, the probability density function (pdf) of $y_{ij}$, as:








$$z_{ij} = Q(y_{ij}) = \sum_n n \cdot 1_{s_n}(y_{ij}), \qquad p_{ij}(n) = \int_{s_n} f_{y_{ij}}(y)\,dy, \qquad \text{and} \qquad \mu_{ij}(n) = \frac{\int_{s_n} y\, f_{y_{ij}}(y)\,dy}{\int_{s_n} f_{y_{ij}}(y)\,dy}.$$





The cumulative mass and centroid functions of $z_{ij}$ are denoted as








$$P_{ij}(n) = \sum_{m=0}^{n} p_{ij}(m) \qquad \text{and} \qquad U_{ij}(n) = \sum_{m=0}^{n} p_{ij}(m)\,\mu_{ij}(m).$$







Consider a simple case with M random samples from a k-variate distribution $f_Y$ with iid dimensions, i.e., all $y_j$ follow the same pdf $f_y$ and are independent of each other. The k-dimensional space can be partitioned into hypercubes through quantization, so that each input sample $Y = (y_1, y_2, \ldots, y_k)$ falls into one of the hypercubes. Each hypercube can be represented by a vector $Z = (z_1, z_2, \ldots, z_k) = (Q(y_1), Q(y_2), \ldots, Q(y_k))$, and all $z_j$ have the same pmf p and the same centroid function μ. Each hypercube Z can be described by i) a probability mass $M_Z$, ii) a centroid $C_Z$, and iii) its corresponding total metric $S_Z$:








$$M_Z = \prod_j p(z_j), \qquad C_Z = \sum_j \mu(z_j), \qquad S_Z = \|Z\|_1 = \sum_j z_j.$$








The pmf $p_{\|Z\|_1}$ represents the probability of a sample Y falling in one of the hypercubes having a given $S_Z$:









$$p_{\|Z\|_1}(x) = \sum_{\|Z\|_1 = x} M_Z = \sum_{\|Z\|_1 = x} \prod_j p(z_j) = p^{*k}(x),$$




where $p^{*k}$ is the k-fold convolution power of p. Some implementations can minimize the cost function $\bar{E}$:









$$\bar{E} = \sum_a \bar{\mu}(a)\,\bar{p}(a),$$




where $\bar{p}$ is the pmf of the minimum $S_Z$ value among the M samples of Y:









$$\bar{p}(a) = \left( \sum_{x=a}^{\infty} p_{\|Z\|_1}(x) \right)^{\!M} - \left( \sum_{x=a+1}^{\infty} p_{\|Z\|_1}(x) \right)^{\!M},$$




and








$$\bar{p} = \nabla \left( \hat{P}_{\|Z\|_1}(a) \right)^{M},$$




where ∇ is a backward difference operator and we define a reverse cmf $\hat{P}(x) = 1 - P(x) = \Pr[X \ge x]$. $\bar{\mu}(a)$ is the centroid of all hypercubes with the same $S_Z = a$:








$$\bar{\mu}(a) = \frac{\sum_{\|Z\|_1 = a} M_Z\, C_Z}{\sum_{\|Z\|_1 = a} M_Z}, \qquad \bar{\mu} = \frac{k\, p^{*(k-1)} * (p \cdot \mu)}{p^{*k}}.$$





The above formulation assumes p = 1. Alternatively, it is valid for cases where the benchmark metric does not include the 1/p-th power computation, as is the case in most real search applications. Otherwise, redefining $C_Z$ as







$$C_Z = \left( \sum_j \mu(z_j) \right)^{\!1/p}$$











allows the same procedure to be used.
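
For the iid case, the cost $\bar{E}$ can be evaluated numerically from the per-bin pmf p and centroid function μ by following the formulas above. A sketch (function and argument names are illustrative), using repeated convolution to form the k-fold convolution power $p^{*k}$:

```python
import numpy as np

def nns_cost(p, mu, k, M):
    """Evaluate E_bar = sum_a mu_bar(a) * p_bar(a) for the iid case.

    p  : pmf of one quantized dimension z_j (length N)
    mu : centroid of each quantization bin (length N)
    k  : number of dimensions; M : number of candidate points.
    """
    pk1 = np.array([1.0])             # p^{*0}
    for _ in range(k - 1):
        pk1 = np.convolve(pk1, p)     # builds p^{*(k-1)}
    pk = np.convolve(pk1, p)          # p^{*k}: pmf of S_Z = ||Z||_1

    # p_bar(a): pmf of the minimum S_Z among M iid candidates, via the
    # reverse cmf tail[a] = Pr[S_Z >= a].
    tail = np.concatenate([np.cumsum(pk[::-1])[::-1], [0.0]])
    p_bar = tail[:-1] ** M - tail[1:] ** M

    # mu_bar(a) = k * (p^{*(k-1)} * (p . mu))(a) / p^{*k}(a).
    num = k * np.convolve(pk1, p * mu)
    safe = np.where(pk > 0, pk, 1.0)
    mu_bar = np.where(pk > 0, num / safe, 0.0)
    return float((mu_bar * p_bar).sum())
```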


Extending this to the more general case, the candidates $Y_i$ can be considered to be drawn each from a different $f_{Y_i}$. Similarly, each vector dimension can have a non-identical distribution. However, the vector data is independent across dimensions, and candidates with similar distance in terms of the benchmark metric d also share a similar distribution. Using the function $f_\lambda(\lambda) = \Pr(d(x,q) = \lambda,\ x \in S)$, a distribution of the M candidates $Y_i$ can be described in terms of the benchmark distance λ. Representing the candidates having the same λ as $Y_\lambda = (y_{\lambda 1}, y_{\lambda 2}, \ldots, y_{\lambda k})$, with $y_{\lambda j}$ following a pdf $f_{y_{\lambda j}}$, provides $Z_\lambda = (z_{\lambda 1}, z_{\lambda 2}, \ldots, z_{\lambda k})$ with $z_{\lambda j}$ following a pmf $p_{\lambda j}$ with centroid function $\mu_{\lambda j}$. Thus, for each hypercube $Z_\lambda$,







$$M_{Z_\lambda} = \prod_j p_{\lambda j}(z_{\lambda j}), \qquad C_{Z_\lambda} = \sum_j \mu_{\lambda j}(z_{\lambda j}), \qquad S_{Z_\lambda} = \|Z_\lambda\|_1.$$





A new operator can be defined as










$$\mathop{*}_{i=n}^{m} p_i := p_n * p_{n+1} * \cdots * p_m,$$






with which $p_{\|Z_\lambda\|_1}$ can be represented as








$$p_{\|Z_\lambda\|_1}(x) = \sum_{\|Z_\lambda\|_1 = x} M_{Z_\lambda}, \qquad p_{\|Z_\lambda\|_1} = \mathop{*}_{j=1}^{k} p_{\lambda j}.$$






Consequently, $\bar{p}$ and $\bar{\mu}$ of $\bar{E} = \sum_a \bar{\mu}(a)\,\bar{p}(a)$ become








$$\bar{p} = \nabla \left( E_\lambda\!\left[ \hat{P}_{\|Z_\lambda\|_1}(x) \right] \right)^{M},$$




and







$$\bar{\mu} = E_\lambda\!\left[ \sum_{i=1}^{k} \left( \mathop{*}_{j \ne i,\ 1 \le j \le k} p_{\lambda j} \right) * \left( p_{\lambda i}\, \mu_{\lambda i} \right) \Big/ \; p_{\|Z_\lambda\|_1} \right].$$





Given the cost function quantifying the performance loss, a quantizer is identified that leads to the minimum $\bar{E}$. Consider the case when the data is assumed to be independent and identically distributed (iid) across dimensions. For a given input distribution $f_y$, a quantizer can be uniquely defined by two vectors $\mu, p \in \mathbb{R}^N$, where p satisfies the probability axioms, e.g., it is uniquely defined given the set of centroids and the probability masses of each quantization bin. Note that, given $f_y$, $\bar{E}$ is a function of p. Note also that $\bar{E}$ can be represented in terms of P and U, defined previously as the cumulative mass and centroid functions of $z_j$, where $P \in C$ such that $\bar{E}(P): C \to \mathbb{R}$, and C is a convex subset of $\mathbb{R}^N$, $C = \{ x \mid x_i \le x_{i+1},\ 0 \le x_i \le 1,\ \forall i,\ x \in \mathbb{R}^N \}$. It can be shown that






$$\bar{E}(\hat{P}) \ge \bar{E}(P) + (\hat{P} - P)'\, \nabla \bar{E}(P), \qquad \forall\, \hat{P}, P \in C,$$


where the gradient of $\bar{E}$ is









$$\nabla \bar{E}(P) = \left( \frac{\partial \bar{E}(P)}{\partial P(0)},\ \ldots,\ \frac{\partial \bar{E}(P)}{\partial P(N-1)} \right),$$




proving that Ē is convex over C.


Finding the optimal quantizer can be formulated as a constrained convex optimization problem with the goal of minimizing $\bar{E}(P)$ subject to $P \in C$. The global minimum value represents the optimal performance attainable given the input distribution and can be obtained using standard convex optimization techniques. A quantizer can be determined based on the P vector corresponding to the global minimum.
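
A sketch of this constrained minimization using standard tools (SciPy assumed; `E_bar` stands in for a concrete implementation of the cost, for instance the convolution-based evaluation sketched earlier):

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

def optimal_quantizer(E_bar, N):
    """Sketch: minimize the convex cost E_bar(P) subject to P in C.

    The feasible set C requires 0 <= P(0) <= ... <= P(N-1) <= 1; since
    E_bar is convex over C, a local minimizer is the global minimum.
    """
    # Monotonicity P(i+1) - P(i) >= 0 as a linear constraint.
    A = np.eye(N, k=1)[:-1] - np.eye(N)[:-1]
    mono = LinearConstraint(A, 0.0, np.inf)
    P0 = np.linspace(1.0 / N, 1.0, N)     # feasible starting point
    res = minimize(E_bar, P0, method='trust-constr',
                   bounds=[(0.0, 1.0)] * N, constraints=[mono])
    return res.x                          # P vector at the minimum
```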


The techniques and systems described in this document can be applied, for example, to the motion estimation (ME) process used in video coding systems. Without requiring any filtering, transform, or sorting process, and using a simple hardware-oriented mapping, one or more embodiments of the described techniques and systems can provide on average 0.05 dB loss using only 1 bit per dimension instead of 8 bits, and 0.01 dB loss when 2 bits are used, when an l1 norm distance is used for the distance computation. In another aspect, one or more embodiments based on the described techniques and systems can provide on average 0.02 dB loss using only 1 bit per dimension instead of 8 bits, and 0.0 dB loss when 2 bits are used. Similar results can be obtained for general lp distances.


Various sequences are tested for simulation using an H.264/MPEG-4 AVC baseline encoder with 16×16 block partitions (256-dimensional vectors), a single reference, full-pel resolution search, 8-bit pixel depth, the l1 norm (e.g., sum of absolute differences) for the search metric, and a search window of ±16, resulting in a data set size of 1089.


Statistical characteristics of general ME input data show the input dimension-distances, e.g., pixel distances, to have approximately independent identical distributions, while the distribution varies with different candidates (e.g., distant candidates showed higher variance than nearer ones). Therefore, $\bar{p}$ and $\bar{\mu}$ associated with the cost function







$$\bar{E} = \sum_a \bar{\mu}(a)\,\bar{p}(a)$$








for the general ME data become:







$$\bar{p} = \nabla \left( E_\lambda\!\left[ \hat{P}_{\|Z\|_1}(x) \right] \right)^{M}, \qquad \bar{\mu} = E_\lambda\!\left[ \frac{k\, p^{*(k-1)} * (p\,\mu)}{p^{*k}} \right].$$






FIGS. 4A, 4B, and 4C show different examples of comparisons of complexity-performance trade-offs for four different scenarios. The four scenarios show the trade-offs between complexity and performance for three different representative scenarios and a proposed distance quantization based metric computation based on the subject matter described herein. Each scenario reduces one of: i) the size of the data set S; ii) the dimensionality of each data point x ∈ S; iii) the bit depth of each data dimension, by truncating least significant bits (equivalently seen as uniform quantization on each data dimension); or iv) the resolution of each dimension-distance, via the proposed distance quantization. The X axis represents complexity as a percentage of the original full computation. The Y axis represents the rate-distortion (RD) performance loss measured in dB. FIGS. 4A, 4B, and 4C show performance examples based on the Bus CIF, Foreman CIF, and Stefan CIF sequences, respectively. The proposed approach provides a better trade-off and can also be used together with most other existing algorithms to further improve the complexity reduction.



FIGS. 5A and 5B show examples of comparisons between different cost functions. FIG. 5A shows comparisons of different cost functions, such as $\bar{E}_{NN}$ under uniform, Rayleigh, and log-normal models, with the expected performance error collected from numerically simulated experiments for different input distribution settings $f_y$. As the number of experiments increases, the expected error converges to the cost function, confirming the accuracy of the $\bar{E}$ formulation. FIG. 5B shows comparisons of cost functions based on the collected ME data with simulated experiments for CIF sequences including Foreman CIF, Mobile CIF, and Stefan CIF.



FIG. 6 and FIGS. 7A and 7B compare the performance of at least one implementation of the described subject matter with three different thresholds, each of which minimizes one of: the overall coding efficiency loss, the $\bar{E}_{NN}$ measure, or a cost model. FIG. 6 shows examples of the performance of different techniques as a function of quantization thresholds. FIGS. 7A and 7B show examples of the performance of different techniques as a function of bit rate. These results show that quantizers obtained by optimizing a cost function described herein can achieve near-optimal performance. FIG. 6 also provides some insight into the sensitivity of the optimal threshold to input variation. Despite large variation in the input source characteristics, the dimension-distances where quantization is applied exhibit more consistent statistical behavior.


Some implementations can compress the search metric computation resolution by applying non-uniform scalar quantization, based on one or more query points, to candidate points prior to a metric computation summation process. Potential advantages of such implementations include removing certain computationally expensive arithmetic operations completely and reducing the complexity of the remaining arithmetic operations significantly; complexity does not increase with the order of the lp norm; and, most importantly, the performance penalty paid for the complexity reduction is surprisingly small if the quantizer is designed optimally. In some implementations, quantizing the output of each dimension-distance into 1 bit results in maximized complexity reduction, yet the performance tends to be almost unchanged for many applications because dimension-distances tend to exhibit very compact, low-variance statistical characteristics, unlike the actual source data q, r ∈ S. Moreover, the search metric computation resolution can be compressed such that the computational complexity reduction is maximized and its impact on the nearest neighbor search result is minimized. One way of accomplishing this is to apply non-uniform scalar quantization at the output of each dimension-distance $\mathrm{dist}_j(q,r) = |q_j - r_j|^p$ prior to the summation process.


Some implementations can determine a quantizer based on the statistical characteristics of the input query data. Quantization can be used to map high-rate data into lower-rate data so as to minimize digital storage or transmission channel capacity requirements while preserving the essential data fidelity. While a conventional optimal quantizer for compression and reconstruction purposes aims to minimize the reconstruction distortion given the input probability function, an optimal quantizer embedded within the search metric computation instead has to minimize the search performance degradation cost given the input statistics. This quantization can be designed in such a way that, for a given bit rate, the fidelity of the compressed data as a search metric measure is maximally preserved.


Implementations of the described subject matter can include processing video data. One of the factors in video compression efficiency is how well the temporal redundancy is exploited by motion compensated prediction. The performance of the motion estimation (ME) process relates directly to the video compression performance. The encoder searches for and selects the motion vector (MV) with minimum distance based on the metric d among all possible MVs. It then performs residual coding by encoding the difference block (prediction residual) between the original and the motion compensated block. Each residual block is transformed, quantized, and entropy coded. For the motion estimation case, the data set S (all reference blocks within the search range) varies largely from query to query (current block). To evaluate the techniques and systems described in this document in an experimental application and compare with others, various sequences are tested using an H.264/MPEG-4 AVC baseline encoder. As in a typical video coding setting, 16×16 block partitions, a single reference, full-pel resolution search, 8-bit pixel depth, and the l1 norm distance for the search metric were considered for ME. A search window of ±32 is used, resulting in a data set size of 4225.
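
As a toy illustration of ME with a quantized metric (this is not the encoder configuration used in the experiments, and the 1-bit threshold value is an arbitrary assumption), full-search block matching with a 1-bit quantized SAD might be sketched as:

```python
import numpy as np

def quantized_sad_search(block, ref, search=4, theta=8):
    """Toy full-search block matching with a 1-bit quantized SAD metric.

    `block` is the current (query) block; `ref` is a reference-frame
    region whose (search, search) offset aligns with `block`. Each pixel
    distance |q - r| is quantized to 1 bit using the assumed threshold
    `theta`, and the sum of bits replaces the full-precision SAD.
    """
    h, w = block.shape
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[search + dy:search + dy + h, search + dx:search + dx + w]
            diff = np.abs(block.astype(int) - cand.astype(int))
            cost = int((diff > theta).sum())    # 1-bit dimension-distances
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```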



FIGS. 8A, 8B, and 8C show example performance measures for the CIF resolution Foreman, Mobile, and Akiyo sequences, respectively. In these examples, comparisons were made for a total of six different scenarios (as indicated in the figure legend): i) full computation (the benchmark approach, which compares all candidates in full resolution/dimensions); ii) data set reduction (reducing the number of candidates by a factor of two); iii) dimension reduction (subsampling the dimensions into half); iv) truncation of the four least significant bits of both queries and candidates; v) a proposed quantization technique using 8 bins that compresses an 8-bit depth to a 3-bit depth; and vi) a proposed quantization technique using 2 bins that compresses an 8-bit depth to a 1-bit depth. The approximate complexity ratio of each scenario as a percentage of the benchmark scenario is shown in parentheses in the figure legend.



FIG. 8D shows examples of different computational complexity costs. In particular,



FIG. 8D shows the ratio of the total computational complexity cost comparing these six different cases over different p (the order of the Minkowski metric). FIGS. 8A, 8B, and 8C represent RD performance when p is 1. Note that the thick solid line (original) is the benchmark full-complexity case, while the thin solid lines all have the same complexity, which is half that of the original case; this essentially compares the performance of four different approaches at equal complexity.


This document includes descriptions of a quantization based nearest-neighbor-preserving metric (QNNM) approximation algorithm. The QNNM algorithm is based on three observations: (i) the query vector is fixed during the entire search process, (ii) the minimum distance exhibits an extreme value distribution, and (iii) there is high homogeneity of viewpoints. Based on these, QNNM approximates the original/benchmark metric in terms of preserving the fidelity of the nearest neighbor search (NNS) rather than the distance itself, while achieving significantly lower complexity using a query-dependent quantizer. A quantizer design can be formulated to minimize an average NNS error. Query-adaptive quantizers can be designed off-line without prior knowledge of the query, and this document presents an efficient, specifically tailored off-line optimization algorithm to find such an optimal quantizer.


Given a metric space (U, d) with a distance/dissimilarity metric d: U×U → [0,∞), a set R ⊂ U of N objects, and a query object q ∈ U in (U, d), the nearest neighbor search (NNS) problem is to efficiently find the (either exact or approximate) nearest object











$$r^* = \operatorname*{argmin}_{r}\ d(q, r), \qquad r \in R. \qquad (1)$$







Some NNS techniques can present serious computational challenges based on the size of the data set N, the dimensionality of the search space D, and the metric complexity of d. To reduce such complexity, some existing algorithms focus on how to preprocess a given data set R so as to reduce either (i) the subset of data to be examined, by discarding a large portion of data points during the search process using efficient data structures and querying execution (e.g., variants of k-d trees, metric trees, ball-trees, or similarity hashing), and/or (ii) the dimensionality of the vectors, by exploiting metric space transformations such as metric embedding techniques or techniques based on linear transforms, e.g., principal component analysis. This document includes descriptions of techniques that reduce complexity by allowing approximation within the metric computation, instead of computing the chosen distance metric to full precision. Reduction of the metric computation cost has previously been considered only to a limited extent (e.g., simple heuristic methods such as avoiding the computation of square roots in the l2 norm, truncation of least significant bits, early stopping conditions, etc.).


This document includes descriptions of a metric approximation algorithm which maps the original metric space to a simpler one while seeking to preserve approximate nearest-neighbors to a given query. A metric approximation algorithm can be based on the following observations: (i) the query vector is fixed during the entire search process; (ii) when performing NNS for different queries, the distances d(q, r*) between a query vector and its best match (NN) tend to be concentrated in a very narrow range (e.g., an extreme value distribution of the sample minimum $F_{\min}(x) = \Pr(d(q, r^*) \le x)$); and (iii) the high homogeneity of viewpoints property.


The metric approximation algorithm can approximate the original metric d using a query-adaptive quantizer. For a given query q, based on Observation (i), a set of query-dependent scalar quantizers is applied to each of the components/dimensions of every candidate r ∈ R. The quantizer produces one integer index per dimension, and the sum of these indices is used as an approximation of d(q, r). Based on Observation (ii), these quantizers can be very coarse (e.g., 1 or 2 bits per dimension), leading to very low complexity without affecting overall NNS performance. This is because the distances to candidates unlikely to be the NN for a given query can be quantized coarsely without affecting the outcome of the NNS. Based on Observation (iii), the problem of finding the optimal query-dependent quantization parameters can be formulated as an off-line optimization process, so that minimum complexity is required for each querying operation.
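
A minimal sketch of this search for an l2-style benchmark metric, using a single 1-bit quantizer per dimension (the threshold θ is assumed to come from the off-line optimization described below):

```python
import numpy as np

def qnnm_search(q, R, theta):
    """Sketch of QNNM with a 1-bit query-adaptive quantizer per dimension.

    Each dimension j of a candidate is mapped to 0 if r_j lies within
    sqrt(theta) of q_j and to 1 otherwise; the sum of these per-dimension
    indices is used as the approximation of d(q, r).
    """
    q = np.asarray(q, dtype=float)
    R = np.asarray(R, dtype=float)            # candidates, shape (N, D)
    indices = (np.abs(R - q) > np.sqrt(theta)).astype(np.int32)
    d_obj = indices.sum(axis=1)               # approximate distances
    return int(np.argmin(d_obj))              # index of the approximate NN
```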


A QNNM algorithm can use a metric function $d_{obj}$ to approximate a benchmark metric d in terms of preserving the fidelity of NNS while having significantly lower computational complexity than that of d. A metric approximation approach can be formulated as a mapping ψ: U → U_Q from the original metric space (U, d) into a simpler metric space (U_Q, d_Q) where the NN search is performed with the d_Q metric. If ψ is the same for all queries, this metric space mapping can be seen as preprocessing (e.g., a space transformation to reduce dimensionality) aimed at simplifying the metric space while preserving the relative distances between objects. A query-adaptive mapping ψ_q: U → U_Q can use the information of a given query location q such that the resulting (U_Q, d_Q) preserves the NN output, rather than the relative distances between objects, without having to find the optimal ψ_q prior to each querying process:











$$(U, d)\ \xrightarrow{\ \psi_q\ }\ (U_Q, d_Q), \qquad (2)$$
$$d_{obj}(q, r) = d_Q\left( \psi_q(q),\ \psi_q(r) \right). \qquad (3)$$







Some implementations can be based on the D-dimensional Euclidean space $U = \mathbb{R}^D$.


In some implementations, each dimensional dissimilarity is measured independently and then aggregated, e.g., as in generalized Minkowski (Euclidean, Manhattan, weighted Minkowski, etc.) metrics, inner products, the Canberra metric, etc. For example, it can be assumed that there is no cross-interference among dimensions in the original metric d, e.g., the general metric function structure d can be written as:











$$d(q, r) = \sum_{j=1}^{D} d_j(q_j, r_j), \qquad r \in U. \qquad (4)$$







An NNS algorithm's accuracy can be evaluated in terms of the expected solution quality $\bar{\varepsilon}$ (the closeness, in terms of the d metric, between the original NN $r^*$ based on d and a returned object $r_Q^*$ based on the $d_{obj}$ metric):










$$\bar{\varepsilon} = E_q\!\left( \frac{d\!\left(q,\, r_Q^*(q)\right) - d\!\left(q,\, r^*(q)\right)}{d\!\left(q,\, r^*(q)\right)} \right). \qquad (5)$$







It can be assumed that there exists a high homogeneity of viewpoints (towards nearest neighbors).



FIG. 9 shows an example of a quantization based on a nearest-neighbor-preserving metric approximation technique. FIG. 9 additionally shows the relation between ψq and Q and their related spaces. For a given query q, ψq is defined as:





$$\psi_q(r) = \left( \psi_{q1}(r_1), \psi_{q2}(r_2), \ldots, \psi_{qD}(r_D) \right), \qquad r \in U, \qquad (6)$$


where each $\psi_{qj}$ is a non-uniform scalar quantizer chosen based on the query. Quantization is chosen due to its computational efficiency and its flexibility to adapt to queries by simply adjusting thresholds. Since $\psi_q(q)$ in Eq. (3) is constant over a search process given q, the objective metric $d_{obj}$ becomes a function of only $\psi_q(r)$. Based on Eq. (4), $d_Q$ can be formulated as the sum of scalar quantizer outputs:











$$d_{obj}(q, r) = d_Q\left( \psi_q(r) \right) = \sum_{j=1}^{D} \psi_{qj}(r_j). \qquad (7)$$







Finding the optimal query-dependent $\psi_q$ parameters minimizing $\bar{\varepsilon}$ (5) prior to each querying operation would not be practical. However, based on the homogeneity of viewpoints property, an off-line optimization can be used to design these query-dependent quantizers. The aggregate statistics of the NNS dataset/candidates, in terms of their distances with respect to a query q, can be very similar regardless of the query/viewpoint position. This makes it possible to consider a viewpoint space $(U_v, d_v)$, where v denotes the vector of distances between a query point and a search point:






$$v = \left( d_1(q_1, r_1),\, d_2(q_2, r_2),\, \ldots,\, d_D(q_D, r_D) \right) \in U_v. \qquad (8)$$


Then, under the assumption of viewpoint homogeneity, we can generate off-line statistics over multiple queries and model a dataset by an overall distance distribution $F_v$ of v in $U_v$:






$$F_v(x) = \Pr(v \le x), \qquad (9)$$


where $F_v$ represents the probability that there exist objects whose distance v to a given arbitrary query is smaller than x.


Given the query-independent $F_v$ model, instead of directly finding $\psi_q: U \to U_Q$ minimizing $\bar{\varepsilon}$ for every q, we can equivalently look for its analogous mapping function $Q: U_V \to U_{VQ}$ such that






$$d_{obj}(q, r) = d_Q\left( \psi_q(r) \right) = d_{VQ}\left( Q(v) \right), \qquad (10)$$


where Q partitions the viewpoint space $U_V$ into a set of hyper-rectangular cells $U_{VQ}$ with $d_{VQ} = d_Q$. Each dimension of $U_V$ is quantized independently with $Q_j$, with successive bins mapped to consecutive integer values: the bin including the origin is mapped to 0, the next one is mapped to 1, etc. Each cell is therefore represented by a vector of mapping symbols $z \in U_{VQ}$:






$$z = Q(v) = \left( Q_1(v_1), Q_2(v_2), \ldots, Q_D(v_D) \right) \in U_{VQ}. \qquad (11)$$


The problem of finding the optimal $\psi_q$ can be replaced by finding the optimal Q minimizing $\bar{\varepsilon}$ given $F_V$ because: (i) Q is query-independent, which allows an off-line process to find the optimal Q; (ii) the $\bar{\varepsilon}$ of $\psi_q: U \to U_Q$ is identical to the $\bar{\varepsilon}$ of $Q: U_V \to U_{VQ}$; and (iii) the conversion from Q to $\psi_q$ for a given query q is very simple: once the optimal Q minimizing $\bar{\varepsilon}$ is obtained off-line, given a query q prior to each querying operation, the optimal $\psi_q$ can be obtained by the following equation:





$$\psi_{qj}(r_j) = Q_j(v_j) = Q_j\left( d_j(q_j, r_j) \right), \qquad \forall j. \qquad (12)$$


For example, if d is the l2 norm and if we denote a quantization threshold from $Q_j$ and its corresponding threshold from $\psi_{qj}$ as θ and Θ, respectively, then $r_j \le \Theta$ should be equivalent to $v_j = d_j(q_j, r_j) = (q_j - r_j)^2 \le \theta$, and therefore $\Theta = q_j \pm \sqrt{\theta}$. A set of $\sqrt{\theta}$ values needs to be obtained and stored off-line; only prior to each querying process with a given q does the set of $\Theta = q_j \pm \sqrt{\theta}$ need to be computed on the fly. Note that this computation is done only once given a query q, before computing any $d_{obj}$ for the data points to identify q's NN.
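
This once-per-query conversion is inexpensive, as the following sketch suggests (the stored √θ values are assumed to come from the off-line optimization):

```python
import numpy as np

def per_query_thresholds(q, sqrt_theta):
    """Convert offline thresholds to per-query boundaries (l2 case).

    `sqrt_theta` holds the values sqrt(theta_i) stored off-line. Given a
    query q, the boundaries Theta = q_j +/- sqrt(theta_i) are computed
    once per dimension, before any d_obj evaluation.
    """
    q = np.asarray(q, dtype=float)
    sqrt_theta = np.asarray(sqrt_theta, dtype=float)
    lower = q[:, None] - sqrt_theta[None, :]   # q_j - sqrt(theta_i)
    upper = q[:, None] + sqrt_theta[None, :]   # q_j + sqrt(theta_i)
    # One sorted boundary list per dimension, ready for interval lookup.
    return np.sort(np.concatenate([lower, upper], axis=1), axis=1)
```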



FIGS. 10A and 10B show examples of metric computation architectures that include one or more quantizers. The metric computation architecture 1005 in FIG. 10A uses a quantizer to quantize dimension-distance values. The metric computation architecture 1010 in FIG. 10B uses a quantizer to directly quantize candidate points. The quantizers in these architectures 1005, 1010 can be selected based on one or more query points.


An optimization algorithm to select the quantizer Q* that minimizes the average NNS error (5), given $F_V$ as in (9), can have the following form:










$$Q^* = \operatorname*{argmin}_{Q}\ \left\{\, f_{obj}(Q) := \bar{\varepsilon} \,\right\}. \qquad (13)$$







This problem is a stochastic optimization problem with an objective function $f_{obj} = \bar{\varepsilon}$. Note that in this problem we aim to optimize the quantizer to be used in NNS; thus, in the context of this optimization, we refer to the "search" for the optimal set of quantizer parameters, which should not be confused with the search performed in the NNS itself. An optimization process in general consists of two phases: the search process (e.g., generating candidate solutions) and the evaluation process (evaluating solutions, e.g., computing $f_{obj}$). Stochastic optimization can be computationally expensive, especially due to its evaluation process; e.g., a typical Monte Carlo simulation approach (for every candidate solution, training data samples are simulated to estimate its average performance $\bar{\varepsilon}$) would have a total complexity of $O(T N_s)$, where T is the size of the training data, which is sufficiently large, and $N_s$ denotes the total number of candidate solutions evaluated during the search process.


The goal is to reduce complexity by formulating $f_{obj}$ such that a large portion of the $f_{obj}$ computations can be shared and computed only once, as a preprocessing step, for a certain set of (quantizer) solution points, instead of computing $f_{obj}$ for each solution point independently. This changes the total optimization complexity from $O(T N_s)$ to $O(T + c_1 + c_2 N_s)$, where $c_1$ and $c_2$ are the preprocessing cost and the $f_{obj}$ evaluation cost, respectively. This requires a joint design of the search and evaluation processes.


Since only the $E[d(q, r_Q^*(q))]$ term of $\bar{\varepsilon}$ (5) changes with Q, while $E[d(q, r^*(q))]$ is constant given $F_V$, $f_{obj}$ can be reduced to:











$$f_{obj} = E\!\left[ d\!\left(q,\, r_Q^*(q)\right) \right] = \sum_a \mu_Q(a)\, f_Q^{\min}(a), \qquad (14)$$







where $f_Q^{\min}$ is the pdf of $F_Q^{\min}(a) = \Pr(d_{obj}(q, r_Q^*) \le a)$ and $\mu_Q(a) = E\left( d(q,r) \mid d_{obj}(q,r) = a,\ \forall q, r \in U \right)$.


Computing $\mu_Q$ and $F_Q^{\min}$ can include assigning three parameters to each cell $c_z$ of the set of hyper-rectangular cells defined by Q: (i) a probability mass $p_z$, (ii) a non-normalized centroid $u_z$, and (iii) a distance $d_z = \sum_j z_j$. Then $F_Q^{\min}$ and $\mu_Q(a)$ are formulated as:











p
z

=




c
z






f
V



(
v
)









v












u
z

=




c
z




<
v



,

1
>



f
V



(
v
)









v








(
15
)









F
Q



(
a
)


=





d
z


a








p
z











F
Q
min



(
a
)


=

1
-


(

1
-


F
Q



(
a
)



)

N







(
16
)








μ
Q



(
a
)


=






d
z

=
a








u
z







d
z

=
a








p
z







(
17
)
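A compact way to see how (14)-(17) fit together is to evaluate fobj from the per-cell parameters. The sketch below (illustrative Python; the array arguments are assumed to hold pz, uz, and dz for every cell) groups cells by quantized distance and forms the discrete pmf of FQmin:

    import numpy as np

    def fobj_from_cells(p, u, d, N):
        # p, u, d: arrays of p_z, u_z, d_z over all cells c_z;
        # N: number of candidate points compared per query
        a_vals = np.unique(d)                          # possible d_obj values
        P = np.array([p[d == a].sum() for a in a_vals])
        U = np.array([u[d == a].sum() for a in a_vals])
        mu = U / np.where(P > 0, P, 1.0)               # eq. (17)
        F = np.cumsum(P)                               # F_Q, eq. (16)
        F_min = 1.0 - (1.0 - F) ** N                   # F_Q^min, eq. (16)
        f_min = np.diff(np.concatenate(([0.0], F_min)))  # pmf of the minimum
        return float((mu * f_min).sum())               # f_obj, eq. (14)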







Implementations can compute fobj based on pz, uz for one or more cells cz. However, if the following two data sets FV and HV are available or computed in a pre-processing stage:

FV(x) = Pr(v ≦ x),   HV(x) = Σ_{v≦x} ⟨v,1⟩,   (18)

then the cumulative quantities

Pz = Σ_{z′≦z} pz′   and   Uz = Σ_{z′≦z} uz′

can be easily computed for each cell cz, so that all necessary pz, uz values can be obtained with only c2 = O(D·NC) cost. Here, NC is the total number of cells generated by Q:

NC = Πj (bj + 1),

where bj denotes the number of thresholds assigned by Q on the j-th dimension of UV.
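The reason FV and HV suffice is that each pz (and likewise uz) is an integral over an axis-aligned box, which cumulative tables recover by inclusion-exclusion. A minimal sketch for D=2 is shown below (illustrative Python; for higher D the differencing alternates signs over the 2^D corners of each cell):

    import numpy as np

    def cell_masses_from_cdf(F):
        # F[i, k] = F_V evaluated at the upper corner of cell (i, k);
        # differencing the zero-padded table yields every p_z in O(N_C)
        Fp = np.pad(F, ((1, 0), (1, 0)))
        return Fp[1:, 1:] - Fp[1:, :-1] - Fp[:-1, 1:] + Fp[:-1, :-1]

    # applying the same differencing to a table of H_V yields the u_z values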


However, the computational (c1) and storage complexity of FV and HV may increase exponentially with D (e.g., O(D·W^D), assuming all dimensions are represented with the same resolution W). In some implementations, D is reducible depending on the input distribution FV if certain dimensions are independent or interchangeable/commutative. In fact, this is usually the case for real-world applications (e.g., in video coding, all pixels tend to be heavily correlated yet have interchangeable statistical characteristics, so the common 16×16 processing-unit image block (D=256) can be reduced to D=1).


A search algorithm can maximally reuse FV and HV data and can update FV and HV in conjunction with the search process in order to reduce overall storage and computation. Observation: given k arbitrary solution points in the search space, the preprocessing cost Sk to build FV and HV containing only the data necessary to compute fobj of those k points is the same as that for computing fobj of K different solution points which form a grid, where:

K = Πj C((k+1)·bj, bj),   Sk = Πj (k·bj + 1),   (19)

with C(n, m) denoting the binomial coefficient.

In other words, if a set of solution points forms a grid, they maximally reuse data from FV and HV and thus lead to minimal preprocessing cost in both space and time complexity. A grid-based iterative search algorithm framework with guaranteed convergence to the optimal solution can be based on the above observation. A quantization parameter can be represented by its marginal cumulative probability FV(θ), such that the search space becomes [0,1]^D. This can increase the slope and reduce the neutrality, ruggedness, or discontinuity of the fobj function, which can increase search speed. This representation also gives a further indication of how sensitive performance is to each parameter.
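A sketch of this reparameterization (illustrative Python; v_grid and F_grid are an assumed tabulation of the marginal cdf FV): a threshold θ is stored as the point FV(θ) in [0,1], and interpolation inverts the map when a grid point must be turned back into an actual quantizer threshold:

    import numpy as np

    def to_unit_space(theta, v_grid, F_grid):
        # v_grid: sorted sample points of v; F_grid: marginal cdf F_V(v_grid)
        return np.interp(theta, v_grid, F_grid)

    def from_unit_space(p, v_grid, F_grid):
        # inverse map: recover the threshold whose cumulative mass is p
        return np.interp(p, F_grid, v_grid)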


A QNNM algorithm can include (i) generating a grid Gi which equivalently indicates a set of solution points corresponding to all grid points, (ii) building the minimum required preprocessed structures FVi and HVi for computing fobj of all grid points on Gi, (iii) computing the set of fobj values and finding its minimizer Qi* over Gi, and (iv) generating a next grid Gi+1 by either moving or scaling Gi based on Qi* information. Implementations can model a grid G on the search space with its center/location C, grid spacing Δ, and size parameter ω, assuming it has equal spacing and size for all dimensions. Algorithm implementations can initialize a grid-size parameter ω, grid scaling rate γ, tolerance for convergence Δtol>0, grid-spacing parameter Δ0, and initial grid G0. For each iteration i=0, 1, . . . , the algorithm includes performing a preprocess routine to construct FVi and HVi to evaluate Gi, a search routine to seek a minimizer Qi* from Gi, and an update routine to generate a new grid Gi+1 based on Qi*. The update routine can include moving the center of the grid: Ci+1=Qi*. The update routine can include performing a grid spacing update: for a moving grid, if Qi* is on the boundary of grid Gi, then Δi+1=Δi; for a scaling grid, if Qi* is not on the boundary of grid Gi, then Δi+1=Δi/γ. The update routine can terminate if Δi+1<Δtol. The update routine can generate Gi+1 with parameters ω, Δi+1, and Ci+1. A minimal sketch of this loop appears below.
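The following sketch is illustrative Python; f_obj stands in for the preprocess-and-evaluate step of (ii)-(iii), and the (2ω+1)-points-per-dimension grid layout and iteration guard are assumptions made for illustration:

    import numpy as np

    def qnnm_grid_search(f_obj, D, omega=2, gamma=3.0,
                         delta0=0.25, delta_tol=1e-3):
        c = np.full(D, 0.5)                    # grid center C_0 in [0,1]^D
        delta = delta0                         # grid spacing Delta_0
        for _ in range(10000):                 # guard against non-convergence
            offsets = np.arange(-omega, omega + 1) * delta
            axes = np.meshgrid(*([offsets] * D), indexing='ij')
            pts = np.clip(c + np.stack([a.ravel() for a in axes], axis=1),
                          0.0, 1.0)            # (i) all grid points of G_i
            scores = np.array([f_obj(p) for p in pts])  # (ii)-(iii)
            best = pts[np.argmin(scores)]      # minimizer Q_i*
            on_boundary = np.any(np.abs(best - c) >= omega * delta - 1e-12)
            c = best                           # (iv) move: C_{i+1} = Q_i*
            if not on_boundary:
                delta /= gamma                 # (iv) scale: Delta_{i+1} = Delta_i / gamma
            if delta < delta_tol:
                break                          # Delta_{i+1} < Delta_tol
        return c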


Some implementations can determine integer parameter values, ω and γ, that minimize computational complexity. Optimization complexity can be quantified as

O(T + L·c1 + L·c2·Ns),   c1 = O(B),   c2 = O(D·S1).

Here, Ns depends on the phase 2 grid search algorithm but roughly varies from O(ωD) to O(ω^(cD)), c1 is both the time and space complexity of phase 1, and L denotes the total number of iterations. Note that c2 is fixed regardless of ω and γ. Overall complexity can be reduced from O(L(T+c1+c2Ns)) to O(T+Lc1+Lc2Ns) by splitting and deleting portions of the training data set at each iteration such that only the relevant data is examined for each update. If we assume the iteration continues until the grid gets as fine as resolution W, the total iteration number is

L ≈ (γ/ω)·logγ(W/ω).

Therefore, the γ≧1 minimizing γ·logγW and the minimum possible integer ω≧2 can minimize overall complexity in both time and space, e.g., γ=3 and ω=2.



FIG. 11 shows an example of a system configured to perform non-uniform quantization based metric computations. A system can include a processing apparatus 1105 and a video capture device 1110. The processing apparatus 1105 can receive video data from the video capture device 1110 and can process the video data. For example, a processing apparatus 1105 can perform motion estimation to compress the video data. A processing apparatus 1105 can include a memory 1120, processor electronics 1125, and one or more input/output (I/O) channels 1130 such as a network interface or a data port such as a Universal Serial Bus (USB) port. Memory 1120 can include random access memory. Processor electronics 1125 can include one or more processors. In some implementations, processor electronics 1125 can include specialized logic configured to perform quantization based metric computations. An input/output (I/O) channel 1130 can receive data from the video capture device 1110. A processing apparatus 1105 can be implemented in one or more integrated circuits. In some implementations, memory 1120 can store candidate points. In some implementations, memory 1120 can store processor instructions associated with a quantization based metric process.



FIG. 12 shows an example of a process that includes non-uniform quantization based metric computations. The process can access a query point and a set of candidate points (1205).


The process can quantize the candidate points based on one or more characteristics of the query point (1210). The process can generate metric values based on the quantized candidate points (1215). In some implementations, the metric values are indicative of respective proximities between the query point and the candidate points. The process can select one or more of the candidate points in response to the query point based on the metric values (1220).
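A minimal end-to-end sketch of this flow (illustrative Python; the threshold layout follows the per-query construction sketched earlier, with each Theta[j] holding the 2·bj sorted values qj ± √θ): candidates are quantized directly with the query-dependent thresholds (1210), levels are summed into metric values (1215), and the minimizer is selected (1220):

    import numpy as np

    def quantized_nn(q, R, Theta):
        # q: (D,) query point; R: (N, D) candidate points
        D = len(q)
        levels = np.stack(
            [np.abs(np.searchsorted(Theta[j], R[:, j]) - len(Theta[j]) // 2)
             for j in range(D)], axis=1)       # quantize candidates (1210)
        metric = levels.sum(axis=1)            # metric values (1215)
        return R[np.argmin(metric)]            # select candidate (1220)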


In some implementations, the precision level of a distance measure can be taken into account for complexity reduction. Some implementations can alter the metric computation precision by compressing the search metric computation resolution, applying non-uniform scalar quantization within the metric computation process. Quantizing the output of a dimension-distance computation, such as |qj−rj|p, can reduce complexity. Quantization can reduce the bit-depth of each dimension-distance output, which leads to a significant complexity reduction in the subsequent processing (a tree of k−1 summations and a 1/p-th power computation). A quantizer can also be implemented in such a way that the input dimension-distance computation |qj−rj|p does not have to be computed at all. In some implementations, the quantizer thresholds are fixed over queries, and the query vector q is also constant while searching many different candidate points r, so only r varies. Therefore r can be quantized directly, with the same result, without having to compute |qj−rj|p first and then apply the quantization.


In some implementations, approximations of one or more quantizers can be used to minimize circuit complexity. Quantization can be query dependent, e.g., each query uses a different quantization. Some implementations can use reconfigurable hardware. For example, some implementations can reconfigure one or more portions of a system before processing a query. Some implementations can use circuitry that takes query q and candidate r as inputs and approximates the quantization output of the optimized quantizer with minimal circuit complexity.


A few embodiments have been described in detail above, and various modifications are possible. The disclosed subject matter, including the functional operations described in this document, can be implemented in electronic circuitry, computer hardware, firmware, software, or in combinations of them, such as the structural means disclosed in this document and structural equivalents thereof, including potentially a program operable to cause one or more data processing apparatus to perform the operations described (such as a program encoded in a computer storage medium, which can be a memory device, a storage device, a machine-readable storage substrate, or other physical, machine-readable medium, or a combination of one or more of them).


The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.


A program (also known as a computer program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.


While this document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.


Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results.


Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this document.

Claims
  • 1. A method performed by data processing apparatus, comprising: quantizing a set of candidate points based on one or more characteristics of a query point; generating metric values based on the quantized candidate points, respectively, the metric values being indicative of respective proximities between the query point and the candidate points; and selecting one or more of the candidate points in response to the query point based on the metric values.
  • 2. The method of claim 1, wherein quantizing the candidate points comprises: accessing non-uniform intervals based on the query point, each non-uniform interval being described by one or more threshold values and associated with a range of inputs and an output; and quantizing the candidate points based on non-uniform intervals.
  • 3. The method of claim 2, wherein the query point and the candidate points comprise elements that correspond to respective dimensions, wherein quantizing the candidate points comprises: using different sets of non-uniform intervals, associated with respective different ones of the dimensions, to quantize the dimensional elements of the candidate points, each set of non-uniform intervals selected based on a respective element of the query point.
  • 4. The method of claim 3, wherein generating metric values based on quantized candidate points comprises: summing quantized elements of a quantized candidate point to produce a metric value.
  • 5. The method of claim 1, comprising: determining one or more quantizers that preserve distance ranking between the query point and the candidate points, wherein quantizing the candidate points based on one or more characteristics of the query point comprises using the one or more quantizers.
  • 6. The method of claim 5, wherein quantizing the candidate points based on one or more characteristics of the query point comprises using different quantizers, associated with different dimensions, to quantize elements.
  • 7. The method of claim 5, wherein determining one or more quantizers comprises: determining a number of quantization levels, one or more quantization threshold values, and mapping values for one or more dimensions.
  • 8. The method of claim 1, comprising: determining one or more statistical characteristics of multiple, related, query points, wherein the query points comprise elements that correspond to respective dimensions; and determining one or more quantizers based on the one or more statistical characteristics, each quantizer corresponding to at least one of the dimensions and operable to generate a quantized output based on an input.
  • 9. The method of claim 8, wherein quantizing the candidate points based on one or more characteristics of the query point comprises using the one or more quantizers.
  • 10. The method of claim 8, wherein determining one or more quantizers comprises determining a quantizer that maps successive bins of input values to respective integer values.
  • 11. The method of claim 8, wherein determining one or more quantizers comprises determining threshold values that delineate non-uniform quantization intervals based on an iterative process that minimizes a nearest neighbor search measure.
  • 12. The method of claim 1, comprising: performing motion estimation based on information comprising the selected one or more candidate points.
  • 13. A method performed by data processing apparatus, comprising: accessing a set of candidate points from a memory; and operating processor electronics to perform operations based on the set of candidate points with respect to a query point to produce values being indicative of respective proximities between the query point and the candidate points, and use the values to determine a nearest neighbor point from the set of candidate points, wherein the operations include applying non-uniform quantizations based on one or more characteristics of the query point.
  • 14. The method of claim 13, wherein applying non-uniform quantizations comprises quantizing the candidate points based on non-uniform intervals, wherein the non-uniform intervals are described by a set of threshold values that are based on the query point, wherein each one of the quantized candidate points comprises quantized elements corresponding to a plurality of dimensions.
  • 15. The method of claim 14, wherein operating processor electronics to perform operations comprises operating processor electronics to sum quantized elements of a corresponding one of the quantized candidate points to produce a corresponding one of the values.
  • 16. The method of claim 14, wherein the query point comprises elements corresponding to a plurality of dimensions, wherein each one of the candidate points comprises elements corresponding to the plurality of dimensions, wherein operating processor electronics to perform operations comprises operating processor electronics to generate, for two or more of the dimensions, a partial distance term that is indicative of a distance between corresponding elements of the query point and each one of the candidate points.
  • 17. The method of claim 16, wherein operating processor electronics to perform operations comprises operating processor electronics to quantize the partial distance terms based on the non-uniform intervals.
  • 18. The method of claim 17, wherein operating processor electronics to perform operations comprises operating processor electronics to determine a metric value based on a summation of the quantized partial distance terms associated with the each one of the candidate points.
  • 19. The method of claim 17, wherein the partial distance terms respectively comprise dimension-distance terms, wherein the quantizing reduces a bit-depth of each dimension-distance term.
  • 20. A computer storage medium encoded with a computer program, the program comprising instructions that when executed by data processing apparatus cause the data processing apparatus to perform operations comprising: quantizing a set of candidate points based on one or more characteristics of a query point; generating metric values based on the quantized candidate points, respectively, the metric values being indicative of respective proximities between the query point and the candidate points; and selecting one or more of the candidate points in response to the query point based on the metric values.
  • 21. The medium of claim 20, wherein quantizing the candidate points comprises: accessing non-uniform intervals based on the query point, each non-uniform interval being described by one or more threshold values and associated with a range of inputs and an output; and quantizing the candidate points based on non-uniform intervals.
  • 22. The medium of claim 21, wherein the query point and the candidate points comprise elements that correspond to respective dimensions, wherein quantizing the candidate points comprises: using different sets of non-uniform intervals, associated with respective different ones of the dimensions, to quantize the dimensional elements of the candidate points, each set of non-uniform intervals selected based on a respective element of the query point.
  • 23. The medium of claim 22, wherein generating metric values based on quantized candidate points comprises: summing quantized elements of a quantized candidate point to produce a metric value.
  • 24. The medium of claim 20, wherein the operations comprise: determining one or more quantizers that preserve distance ranking between the query point and the candidate points, wherein quantizing the candidate points based on one or more characteristics of the query point comprises using the one or more quantizers.
  • 25. The medium of claim 24, wherein quantizing the candidate points based on one or more characteristics of the query point comprises using different quantizers, associated with different dimensions, to quantize elements.
  • 26. The medium of claim 24, wherein determining one or more quantizers comprises: determining a number of quantization levels, one or more quantization threshold values, and mapping values for one or more dimensions.
  • 27. The medium of claim 20, wherein the operations comprise: determining one or more statistical characteristics of multiple, related, query points, wherein the query points comprise elements that correspond to respective dimensions; and determining one or more quantizers based on the one or more statistical characteristics, each quantizer corresponding to at least one of the dimensions and operable to generate a quantized output based on an input.
  • 28. The medium of claim 27, wherein quantizing the candidate points based on one or more characteristics of the query point comprises using the one or more quantizers.
  • 29. The medium of claim 27, wherein determining one or more quantizers comprises determining a quantizer that maps successive bins of input values to respective integer values.
  • 30. The medium of claim 27, wherein determining one or more quantizers comprises determining threshold values that delineate non-uniform quantization intervals based on an iterative process that minimizes a nearest neighbor search measure.
  • 31. The medium of claim 20, wherein the operations comprise: performing motion estimation based on information comprising the selected one or more candidate points.
  • 32. A system, comprising: a memory configured to store data points, wherein the data points comprise elements that correspond to respective dimensions; and processor electronics configured to access a query point, use one or more of the data points as candidate points, use one or more quantizers to quantize the candidate points based on one or more characteristics of the query point, generate metric values based on the quantized candidate points, respectively, the metric values being indicative of respective proximities between the query point and the candidate points, and select one or more of the candidate points, based on the metric values, as an output to the query point.
  • 33. The system of claim 32, wherein the processor electronics are configured to access non-uniform intervals based on the query point, each non-uniform interval being described by one or more threshold values and associated with a range of inputs and an output, and quantize the candidate points based on non-uniform intervals.
  • 34. The system of claim 33, wherein the query point and the candidate points comprise elements that correspond to respective dimensions, wherein the processor electronics are configured to use different sets of non-uniform intervals, associated with respective different ones of the dimensions, to quantize the dimensional elements of the candidate points, each set of non-uniform intervals selected based on a respective element of the query point.
  • 35. The system of claim 34, wherein the processor electronics are configured to sum quantized elements of a quantized candidate point to produce a metric value.
  • 36. The system of claim 32, wherein the processor electronics are configured to determine the one or more quantizers to preserve distance ranking between the query point and the candidate points.
  • 37. The system of claim 36, wherein the processor electronics are configured to use different quantizers, associated with different dimensions, to quantize elements.
  • 38. The system of claim 36, wherein determining the one or more quantizers comprises determining a number of quantization levels, one or more quantization threshold values, and mapping values for one or more dimensions.
  • 39. The system of claim 32, wherein the processor electronics are configured to determine one or more statistical characteristics of multiple, related, query points, wherein the query points comprise elements that correspond to respective dimensions and determine one or more quantizers based on the one or more statistical characteristics, each quantizer corresponding to at least one of the dimensions and operable to generate a quantized output based on an input.
  • 40. The system of claim 39, wherein determining the one or more quantizers comprises determining a quantizer that maps successive bins of input values to respective integer values.
  • 41. The system of claim 39, wherein determining the one or more quantizers comprises determining threshold values that delineate non-uniform quantization intervals based on an iterative process that minimizes a nearest neighbor search measure.
  • 42. The system of claim 32, wherein the processor electronics are configured to perform motion estimation based on information comprising the outputted one or more candidate points.
PRIORITY CLAIM AND CROSS REFERENCE TO RELATED APPLICATION

This document claims the benefit of U.S. Provisional Application No. 61/110,472 entitled “Distance Quantization in Computing Distance in High Dimensional Space” and filed on Oct. 31, 2008, which is incorporated by reference as part of the disclosure of this document.

FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

This invention was made with government support under 0428940 awarded by the National Science Foundation (NSF). The government has certain rights in the invention.

Provisional Applications (1)
Number         Date        Country
61/110,472     Oct. 2008   US