The present disclosure is based on Japanese Patent Application No. 2013-116918 filed on Jun. 3, 2013 and Japanese Patent Application No. 2014-28980 filed on Feb. 18, 2014, the disclosures of which are incorporated herein by reference.
The present disclosure relates to a feature amount conversion apparatus that converts a feature amount used for recognition of a target. The present disclosure also relates to a learning apparatus and a recognition apparatus that include the feature amount conversion apparatus, and to a feature amount conversion program product.
A recognition apparatus that recognizes a target through machine learning has conventionally been commercialized in various fields such as image search, voice recognition, and text search. Such recognition extracts a feature amount from information, e.g., an image, voice, or text. When a particular target is recognized from an image, a HOG (Histograms of Oriented Gradients) feature amount may be used as the image feature amount (refer, e.g., to Non-Patent Literature 1). A feature amount is handled in the form of a feature vector, which a computer can handle easily. That is, the information, such as an image, voice, or text, is converted to a feature vector for target recognition purposes.
The recognition apparatus recognizes a target by applying a feature vector to a recognition model. A recognition model for a linear discriminator is given, e.g., by Formula (1).
f(x) = w^T x + b (1)
where x is a feature vector, w is a weight vector, and b is a bias. The linear discriminator performs a binary classification depending on whether f(x) is greater or smaller than zero when the feature vector x is given.
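As a concrete illustration, a minimal sketch of the binary classification in Formula (1) follows; the function and variable names are illustrative and not part of the disclosure.

```python
import numpy as np

def linear_discriminate(x, w, b):
    """Linear recognition model of Formula (1): f(x) = w^T x + b.
    Returns +1 when f(x) is greater than zero, and -1 otherwise."""
    f = float(np.dot(w, x)) + b
    return 1 if f > 0 else -1

# e.g., linear_discriminate(np.array([0.2, -0.5]), np.array([1.0, 0.3]), 0.1) -> 1
```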
This recognition model is determined through a learning using many feature vectors prepared for learning purposes. The above linear discriminator uses, as learning data, many positive examples and negative examples to determine the weight vector w and the bias b. An SVM (Support Vector Machine)-based learning method may be adopted as a concrete example.
The linear discriminator is particularly useful because of its rapid calculations in both learning and discrimination. However, the linear discriminator can achieve only linear discrimination (binary classification) and therefore fails to provide a high discrimination capability. This has led to attempts to improve the description capability of a feature amount by subjecting the feature amount to nonlinear conversion in advance, for instance, by using the co-occurrence of feature amounts. The FIND (Feature Interaction Descriptor) feature amount is one such conversion (refer, e.g., to Non-Patent Literature 2).
The FIND feature amount provides an improved discrimination capability by calculating the harmonic mean of all combinations of the elements of a feature vector to obtain co-occurring elements. More specifically, when a D-dimensional feature vector x = (x_1, x_2, ..., x_D)^T is given, nonlinear calculations are performed on all combinations of the elements as indicated by Formula (2).
y_{ij} = x_i x_j / (x_i + x_j) (2)
Herein, the FIND feature amount is given by y = (y_{11}, y_{12}, ..., y_{DD})^T.
When the feature vector x is, e.g., 32-dimensional, the FIND feature amount is 528-dimensional after overlapping combinations are excluded. If necessary, y may be normalized so that its length is 1.
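For reference, the conversion of Formula (2) can be sketched as follows. This is an illustrative reading of the formula, not code taken from Non-Patent Literature 2; note the O(D^2) loop with one division per pair, which is the bottleneck discussed next.

```python
import numpy as np

def find_feature(x):
    """FIND feature amount per Formula (2): y_ij = x_i * x_j / (x_i + x_j)
    for all combinations with i <= j (528 elements for D = 32). Positive
    inputs, as with HOG values, are assumed so no denominator is zero."""
    d = len(x)
    y = np.array([x[i] * x[j] / (x[i] + x[j])
                  for i in range(d) for j in range(i, d)])
    n = np.linalg.norm(y)
    return y / n if n > 0 else y  # optional normalization to length 1
```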
Determining the FIND feature amount, however, requires calculations over all combinations of the elements of the feature vector, and the amount of such calculation is on the order of the square of the number of dimensions. Further, the calculations are extremely slow because a division operation is needed for each element. Moreover, the number of dimensions of the resulting feature amount is large, which increases memory consumption.
The present disclosure has been made in view of the above circumstances. An object of the present disclosure is to provide a feature amount conversion apparatus that rapidly performs nonlinear conversion on a feature amount when the feature amount is binary.
Another object of the present disclosure is to provide a feature amount conversion apparatus that converts a feature vector to a binary value even when the feature amount is not binary.
A feature amount conversion apparatus according to a first example of the present disclosure includes a bit rearrangement portion, a logical operation portion, and a feature integration portion. The bit rearrangement portion generates a plurality of rearranged bit strings by rearranging elements of an inputted binary feature vector into diverse arrangements. The logical operation portion generates a plurality of logically-operated bit strings by performing a logical operation on the inputted feature vector and each of the rearranged bit strings. The feature integration portion generates a nonlinearly converted feature vector by integrating the generated logically-operated bit strings. This configuration calculates co-occurring elements of the inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
The feature integration portion may further integrate the elements of the inputted feature vector as well as the generated logically-operated bit strings. This configuration additionally uses the elements of an original feature vector. Therefore, a nonlinearly converted feature vector having a high description capability can be obtained without increasing a computation amount.
The logical operation portion may calculate the exclusive OR of the rearranged bit strings and the inputted feature vector. The exclusive OR is equivalent to the harmonic mean, and its probability of producing "+1" is equal to its probability of producing "−1". This configuration can therefore calculate co-occurring elements having a feature description capability comparable to that of FIND.
The bit rearrangement portion may generate the rearranged bit strings by performing a rotate shift operation with no carry on the elements of the inputted feature vector. This configuration can efficiently calculate co-occurring elements having a high feature description capability.
The feature amount conversion apparatus may include d/2 bit rearrangement portions when the inputted feature vector is d-dimensional. Under this configuration, each of the bit rearrangement portions performs a rotate shift operation with no carry by a different number of bits, enabling the plurality of the bit rearrangement portions to generate all combinations of the elements of the inputted feature vector.
The bit rearrangement portion may randomly rearrange the elements of the inputted feature vector. This configuration can also calculate co-occurring elements having a high feature description capability.
The feature amount conversion apparatus may include a plurality of binarization portions and a plurality of co-occurring element generation portions. Each binarization portion may generate the binary feature vector by binarizing an inputted real number feature vector. The co-occurring element generation portions may correspond to the respective binarization portions. The co-occurring element generation portions may each include the plurality of the bit rearrangement portions and the plurality of the logical operation portions. The binary feature vector may be inputted to the co-occurring element generation portions from the corresponding binarization portions. The feature integration portion may generate the nonlinearly converted feature vector by integrating all the logically-operated bit strings generated respectively by the plurality of the logical operation portions in each of the co-occurring element generation portions. This configuration can rapidly acquire a binary feature vector having a high feature description capability even when the elements of the feature vector are real numbers.
The binary feature vector may be acquired by binarizing a HOG feature amount.
The feature amount conversion apparatus according to a second example of the present disclosure includes a bit rearrangement portion, a logical operation portion, and a feature integration portion. The bit rearrangement portion generates a rearranged bit string by rearranging elements of an inputted binary feature vector. The logical operation portion generates a logically-operated bit string by performing a logical operation on the rearranged bit string and the inputted feature vector. The feature integration portion generates a nonlinearly converted feature vector by integrating the elements of the feature vector and the generated logically-operated bit string. This configuration also calculates co-occurring elements of the inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
The feature amount conversion apparatus according to a third example of the present disclosure includes a plurality of bit rearrangement portions, a logical operation portion, and a feature integration portion. The bit rearrangement portions generate rearranged bit strings by rearranging elements of an inputted binary feature vector into diverse arrangements. The logical operation portion generates logically-operated bit strings by performing a logical operation on the rearranged bit strings generated by the bit rearrangement portions. The feature integration portion generates a nonlinearly converted feature vector by integrating the elements of the feature vector and the generated logically-operated bit strings. This configuration also calculates co-occurring elements of the inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
The feature amount conversion apparatus according to a fourth example of the present disclosure includes a plurality of bit rearrangement portions, a plurality of logical operation portions, and a feature integration portion. The bit rearrangement portions generate rearranged bit strings by rearranging elements of an inputted binary feature vector into diverse arrangements. The logical operation portions generate logically-operated bit strings by performing a logical operation on the rearranged bit strings generated by the bit rearrangement portions. The feature integration portion generates a nonlinearly converted feature vector by integrating the generated logically-operated bit strings. This configuration also calculates co-occurring elements of the inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
A learning apparatus according to another example of the present disclosure includes a feature amount conversion apparatus according to any one of the foregoing examples of the present disclosure and a learning portion. The learning portion achieves learning by using the nonlinearly converted feature vector generated by the feature amount conversion apparatus. This configuration also calculates co-occurring elements of an inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
A recognition apparatus according to yet another example of the present disclosure includes a feature amount conversion apparatus according to any one of the foregoing examples of the present disclosure and a recognition portion. The recognition portion achieves recognition by using the nonlinearly converted feature vector generated by the feature amount conversion apparatus. This configuration also calculates co-occurring elements of an inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
The recognition portion in the above recognition apparatus may calculate the inner product of a weight vector for the recognition and the nonlinearly converted feature vector, proceeding in the order from the widest distribution to the narrowest or from the highest entropy value to the lowest, and may terminate the calculation of the inner product when the inner product is determined to be definitely greater or smaller than a predetermined threshold value for recognition. This configuration can perform a recognition process rapidly.
A feature amount conversion program product according to still another example of the present disclosure includes instructions causing a computer to function as a plurality of bit rearrangement portions, as a plurality of logical operation portions, and as a feature integration portion, and is recorded on a computer-readable, non-transitory medium. The bit rearrangement portions generate rearranged bit strings by rearranging elements of an inputted binary feature vector into diverse arrangements. The logical operation portions generate logically-operated bit strings by performing a logical operation on the inputted feature vector and the rearranged bit strings. The feature integration portion generates a nonlinearly converted feature vector by integrating the generated logically-operated bit strings. This configuration also calculates co-occurring elements of the inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed.
The above configurations calculate co-occurring elements of an inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Consequently, the co-occurring elements can be rapidly computed.
The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description made with reference to the accompanying drawings.
Embodiments of a feature amount conversion apparatus according to the present disclosure will now be described with reference to the accompanying drawings. The embodiments described below are intended to be illustrative only. The present disclosure is not limited to specific configurations described below. When the present disclosure is to be implemented, any specific configurations may be adopted as appropriate depending on an embodiment of the present disclosure.
When a feature vector, which is a binarized HOG feature amount, is given, the feature amount conversion apparatus according to a first embodiment of the present disclosure performs nonlinear conversion on the feature vector to obtain a feature vector having an improved discrimination capability (hereinafter referred to as a "nonlinearly converted feature vector"). If, for instance, an area of 8 pixels × 8 pixels is defined as a cell, a HOG feature amount is obtained as a 32-dimensional vector for each block formed by 2 × 2 cells. In this first embodiment, it is assumed that the HOG feature amount is obtained as a binarized vector. Before the configuration of the feature amount conversion apparatus according to the present embodiment is described, a principle will be described for determining, through nonlinear conversion of a binary feature vector, a nonlinearly converted feature vector having co-occurring elements comparable to those of FIND.
When a FIND feature amount is to be determined, the harmonic mean of two elements is calculated as indicated in Formula (3).
a × b / (|a| + |b|) (3)
where a and b are element values (+1 or −1). As a and b are each either +1 or −1, the number of their combinations is limited to four. The harmonic mean takes the value +1/2 when a and b are equal and −1/2 when they differ, so it is determined solely by whether a and b differ. Therefore, when the elements of the feature vector are binarized to either +1 or −1, their harmonic mean is equivalent to the XOR.
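The four combinations can be checked directly; the following small script is merely a sanity check of this equivalence.

```python
# For a, b in {+1, -1}, the value of Formula (3) is +1/2 when a == b and
# -1/2 when a != b, so it carries exactly the information of the XOR.
for a in (+1, -1):
    for b in (+1, -1):
        harmonic = a * b / (abs(a) + abs(b))
        differs = a != b  # XOR of the two binary elements
        print(f"a={a:+d} b={b:+d} harmonic={harmonic:+.1f} xor={differs}")
```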
As is obvious from Formula (3), the harmonic mean remains unchanged even if a and b are interchanged. Thus, only one of each symmetric pair of co-occurring elements (the portion enclosed by a thick line in the drawing) needs to be calculated.
In the present embodiment, the elements of the original feature vector are arranged together with the elements (co-occurring elements) enclosed by the thick line in the drawing to form the converted feature vector.
When calculations are performed as indicated in the drawing, a 32-dimensional binary feature vector yields 32 × 16 = 512 co-occurring elements.
The feature amount conversion apparatus acquires a nonlinearly converted feature vector by adding the elements of the original feature vector to the co-occurring elements obtained as described. Hence, when a 32-dimensional binary feature vector is converted, the number of dimensions of the resulting nonlinearly converted feature vector is 32×16+32=544. A configuration of the feature amount conversion apparatus that achieves the above conversion of a feature vector will be described below.
In the present embodiment, a binarized feature vector is inputted to the feature amount conversion apparatus 10 as the feature amount to be converted. The feature vector is inputted to each of the N bit rearrangement units 111-11N and to each of the N logical operation units 121-12N. Further, the logical operation units 121-12N receive the outputs generated from the corresponding bit rearrangement units 111-11N.
The bit rearrangement units 111-11N each generate a rearranged bit string by performing a rotate shift operation with no carry on the inputted binary feature vector. More specifically, the bit rearrangement unit 111 performs a rotate shift operation with no carry to shift the feature vector by one bit to the right, the bit rearrangement unit 112 performs the operation to shift the feature vector by two bits to the right, the bit rearrangement unit 113 performs the operation to shift the feature vector by three bits to the right, and so on; the bit rearrangement unit 11N performs the operation to shift the feature vector by N bits to the right.
In the present embodiment, when the inputted binary feature vector is d-dimensional, N = d/2. This makes it possible to calculate the XOR of all combinations of the elements of the feature vector.
The logical operation units 121-12N calculate the XOR of the bit string of the original feature vector and the rearranged bit strings outputted respectively from the bit rearrangement units 111-11N. More specifically, the logical operation unit 121 calculates the XOR of the bit string of the original feature vector and the rearranged bit string outputted from the bit rearrangement unit 111, and the remaining logical operation units likewise operate on the outputs of their corresponding bit rearrangement units.
A feature integration unit 13 arranges the original vector together with the outputs (logically-operated bit strings) generated from the logical operation units 121-12N and generates a nonlinearly converted feature vector that includes them as elements. As mentioned, when the inputted feature vector is 32-dimensional, the nonlinearly converted feature vector generated by the feature integration unit 13 is 544-dimensional.
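A minimal sketch of this first-embodiment pipeline follows, assuming the 32-dimensional binary feature vector is packed into one machine word; the function and constant names are illustrative only.

```python
D = 32                 # dimensionality of the binary feature vector
MASK = (1 << D) - 1
N = D // 2             # number of bit rearrangement / logical operation units

def rotate_right(bits: int, s: int) -> int:
    """Rotate shift operation with no carry (circular right shift by s bits)."""
    return ((bits >> s) | (bits << (D - s))) & MASK

def nonlinear_convert(x: int) -> int:
    """Bit rearrangement units 11_1..11_N, logical operation units 12_1..12_N,
    and feature integration unit 13 for one 32-bit input word. The output
    integrates the 32 original bits with the 32 * 16 co-occurring bits."""
    parts = [x]                               # elements of the original vector
    for s in range(1, N + 1):
        parts.append(x ^ rotate_right(x, s))  # one register-wide XOR per shift
    out = 0
    for word in parts:                        # feature integration
        out = (out << D) | word
    return out                                # 17 * 32 = 544-bit string
```

Each XOR above operates on the full 32-bit word at once, which illustrates the register-level parallelism discussed below.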
As described, the feature amount conversion apparatus 10 according to the present embodiment increases the number of dimensions of a binarized feature vector by adding the elements of the binarized feature vector to their co-occurring elements (elements of a logically-operated bit string). This can improve the discrimination capability of a feature vector.
Further, as the elements of the original feature vector are either +1 or −1, handling the harmonic mean of the elements as a co-occurring element as in the case of a FIND feature amount is equivalent to handling the XOR of the individual elements as a co-occurring element. The feature amount conversion apparatus 10 according to the present embodiment therefore calculates the XORs of all combinations of the individual elements and handles the calculated XORs as co-occurring elements. Consequently, the co-occurring elements can be rapidly calculated.
Furthermore, in order to calculate the XOR of the individual elements, the feature amount conversion apparatus 10 according to the present embodiment calculates the XOR of the bit string of the original feature vector and a bit string obtained by performing a rotate shift operation with no carry on the bit string of the original feature vector. Therefore, when the number of bits of the original feature vector (the number of XOR calculations) is not greater than the width of a computer register, the XORs can be calculated simultaneously. Consequently, the co-occurring elements can be rapidly calculated.
The feature amount conversion apparatus according to a second embodiment of the present disclosure will now be described. When a HOG feature amount is acquired as a real vector instead of a binary vector, the feature amount conversion apparatus according to the second embodiment converts the real vector to a binary vector having a high discrimination capability.
The individual elements are binarized to obtain a binarized feature vector as in the lower half of the drawing.
Here, the use of multiple threshold values can enhance the feature description capability of the feature vector (increase the amount of information in the feature vector). In other words, when k different threshold values are set and binarization is performed with each of them, a binarized feature vector of k × 32 bits is obtained.
When a feature vector is given as a real vector, its feature description capability can thus be enhanced by binarization based on multiple threshold values, as shown in the sketch below.
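The following sketch illustrates such multi-threshold binarization of a normalized real vector; the particular threshold positions (e.g., the 20%/40%/60%/80% positions mentioned later) are an assumption of this illustration.

```python
def binarize_multi(h_norm, thresholds):
    """Binarize a normalized real feature vector once per threshold value.
    Each threshold yields one 32-bit string for a 32-dimensional block, so
    k = 4 thresholds yield a 4 * 32 = 128-bit binarized vector."""
    words = []
    for t in thresholds:
        bits = 0
        for v in h_norm:
            bits = (bits << 1) | (1 if v > t else 0)  # bit 1 <-> +1, bit 0 <-> -1
        words.append(bits)
    return words
```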
A scheme for increasing the speed of HOG feature amount binarization will now be described. In general, the length of a HOG feature amount needs to be normalized to 1 on an individual block basis, because such normalization provides robustness against brightness variations.
An unnormalized, 32-dimensional, real HOG feature amount is expressed by [Expression 1].
h = (h_1, h_2, \ldots, h_{32})^T [Expression 1]
Further, a normalized, 32-dimensional, real HOG feature amount is expressed by [Expression 2].
\bar{h} = (\bar{h}_1, \bar{h}_2, \ldots, \bar{h}_{32})^T [Expression 2]
In this instance, normalization to unit length gives [Expression 3].

\bar{h}_i = h_i / \sqrt{\sum_{k=1}^{32} h_k^2} [Expression 3]
A binarized, 32-dimensional HOG feature amount is expressed by [Expression 4].
b = (b_1, b_2, \ldots, b_{32})^T [Expression 4]
In this instance, binarization with threshold values T_i gives [Expression 5].

b_i = +1 (if \bar{h}_i > T_i), b_i = −1 (otherwise) [Expression 5]
The above binarization is very slow because one square-root calculation and one division operation are involved. Note, however, that the HOG feature amount is nonnegative. The inequality in [Expression 5] can thus be written as [Expression 6].
h_i / \sqrt{\sum_{k=1}^{32} h_k^2} > T_i [Expression 6]
[Expression 7] below is obtained by squaring both sides of [Expression 6] and transposing the denominator on the left side to the right side.
h_i^2 > T_i^2 \sum_{k=1}^{32} h_k^2 [Expression 7]
Through the above transformation, the real HOG feature amount can be binarized by [Expression 8] below without calculating a square root or performing a division operation.

b_i = +1 (if h_i^2 > T_i^2 \sum_{k=1}^{32} h_k^2), b_i = −1 (otherwise) [Expression 8]
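A sketch of this square-root-free and division-free binarization follows; per-element threshold values T_i are assumed, matching [Expression 8].

```python
def binarize_block_fast(h, T):
    """Binarization by [Expression 8]: b_i = +1 iff h_i^2 > T_i^2 * sum_k h_k^2.
    Works directly on the unnormalized, nonnegative HOG values h, so neither
    the square root nor the division of [Expression 5] is needed."""
    s = sum(v * v for v in h)      # sum_k h_k^2, computed once per block
    bits = 0
    for v, t in zip(h, T):
        bits = (bits << 1) | (1 if v * v > t * t * s else 0)
    return bits                    # one 32-bit word for a 32-dimensional block
```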
When, for instance, an element is determined to be −1 (smaller than the threshold value) as a result of binarization using the 20% position in the range as the threshold value, the element is naturally determined to be −1 when binarization uses the 40%, 60%, and 80% positions in the range as the threshold value. In this sense, a 128-bit binarized vector obtained by binarization based on multiple threshold values includes redundant elements. Therefore, it is not efficient to determine the co-occurring elements by directly applying the 128-bit binarized vector to the feature amount conversion apparatus 10 according to the first embodiment. In view of the above circumstances, the present embodiment provides a feature amount conversion apparatus that is capable of efficiently determining the co-occurring elements by reducing the above redundancy.
Before integrating the bit strings obtained based on the threshold values, the feature amount conversion apparatus according to the present embodiment uses the bit strings to determine co-occurring elements. Hence, a 544-bit bit string can be obtained from each 32-bit bit string, as in the first embodiment.
In the present embodiment, a real feature vector is inputted to the feature amount conversion apparatus 20. The feature vector is inputted to the N binarization units 211-21N. The binarization units 211-21N binarize the real feature vector with different threshold values. The binarized feature vectors are respectively inputted to the corresponding co-occurring element generation units 221-22N.
The co-occurring element generation units 221-22N each have the same configuration as the feature amount conversion apparatus 10 described in conjunction with the first embodiment. More specifically, the co-occurring element generation units 221-22N each include a plurality of bit rearrangement units 111-11N, a plurality of logical operation units 121-12N, and a feature integration unit 13, calculate co-occurring elements by performing a rotate shift operation with no carry and an XOR operation, and integrate the calculated co-occurring elements with inputted bit strings.
When a 32-bit bit string is inputted to each of the co-occurring element generation units 221-22N, each unit outputs a 544-bit bit string. The feature integration unit 23 arranges the outputs generated from the co-occurring element generation units 221-22N and generates a nonlinearly converted feature vector that includes them as elements. As mentioned, when the inputted feature vector is 32-dimensional, the feature vector generated by the feature integration unit 23 is 2176-dimensional (2176 bits).
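Reusing nonlinear_convert from the first-embodiment sketch and a per-threshold binarizer such as binarize_block_fast above, the second-embodiment pipeline can be sketched as follows; the names are again illustrative, and four threshold values are assumed to match the 2176-bit figure.

```python
def convert_real_vector(h, thresholds):
    """Binarization units 21_1..21_N, co-occurring element generation units
    22_1..22_N, and feature integration unit 23. With 4 threshold values and
    a 32-dimensional real input, the output is 4 * 544 = 2176 bits."""
    out = 0
    for t in thresholds:
        word = binarize_block_fast(h, [t] * len(h))   # one 32-bit string per threshold
        out = (out << 544) | nonlinear_convert(word)  # one 544-bit string per threshold
    return out
```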
As described, even when the feature amount is obtained as a real vector, the feature amount conversion apparatus 20 according to the present embodiment is capable of binarizing the real vector and increasing the amount of information in the binarized vector.
When a recognition model is determined from many learning data, the feature amount conversion apparatus 10 according to the first embodiment and the feature amount conversion apparatus 20 according to the second embodiment acquire a nonlinearly converted feature vector by performing the above nonlinear conversion on a feature vector inputted as learning data. The nonlinearly converted feature vector is used for a learning process performed by a learning apparatus based, for instance, on SVM, and a recognition model is determined. In other words, the feature amount conversion apparatuses 10, 20 can be used for the learning apparatus. Further, when the recognition model has been determined and data to be recognized is inputted as a feature vector in the same form as the learning data, the feature amount conversion apparatuses 10, 20 perform the above nonlinear conversion on the feature vector to acquire a nonlinearly converted feature vector. The nonlinearly converted feature vector is used, for instance, for linear discrimination by a recognition apparatus, and a recognition result is obtained. In short, the feature amount conversion apparatuses 10, 20 can also be used for the recognition apparatus.
It should be noted that the logical operation units 121-12N need not always perform a logical operation by calculating the XOR. The logical operation units 121-12N may alternatively perform the logical operation by calculating, for example, the AND or the OR. However, the XOR is preferable because, as described above, it is equivalent to the harmonic mean used for determining the FIND feature amount, and because its probability of producing "+1" equals its probability of producing "−1" whatever the inputted feature vector.
The feature amount conversion apparatus 10 and the co-occurring element generation units 221-22N include d/2 bit rearrangement units 111-11N when the number of dimensions of a feature vector is d. However, the number of bit rearrangement units may be smaller than d/2 (N=1 is acceptable) or larger than d/2. Further, the number of logical operation units 121-12N may be smaller than d/2 (N=1 is acceptable) or larger than d/2.
The bit rearrangement units 111-11N generate a new bit string by performing a rotate shift operation with no carry on the bit string of the original feature vector. Alternatively, however, the bit rearrangement units 111-11N may generate a new bit string, for example, by randomly rearranging the bit string of the original feature vector. Still, the rotate shift operation with no carry is advantageous in that it covers all combinations with a minimum number of bits, is based on a simple logic, and has a high processing speed.
The logical operation units 121-12N perform a logical operation on the bit string of the original feature vector and bit strings rearranged by the bit rearrangement units. Alternatively, however, some or all of the logical operation units may perform a logical operation on the bit strings rearranged by the bit rearrangement units. In such an instance, the number of dimensions of the bit strings acquired by the bit rearrangement units may differ from the number of dimensions of the original feature vector. The inputs and outputs of the binarization units 211-21N may differ in dimension. The feature integration unit 13 generates a nonlinearly converted feature vector by using the elements of the original feature vector as well. Alternatively, however, the feature integration unit 13 may generate the nonlinearly converted feature vector without using the original feature vector.
The co-occurring element generation units 221-22N in the second embodiment each have the same configuration as the feature amount conversion apparatus 10 according to the first embodiment, that is, include the bit rearrangement units 111-11N, the logical operation units 121-12N, and the feature integration unit 13. However, an alternative is to provide the co-occurring element generation units 221-22N with no feature integration unit 13, output a plurality of logically-operated bit strings, which are outputted from the logical operation units 121-12N, directly to the feature integration unit 23, and let the feature integration unit 23 integrate the logically-operated bit strings to generate the nonlinearly converted feature vector.
(Modifications)
The first and second embodiments have been described on the assumption that they are applied to image discrimination. Alternatively, however, other data, such as voice and text, may be adopted as a discrimination target. Further, a recognition process other than linear discrimination may alternatively be performed.
In the first and second embodiments, the bit rearrangement units 111-11N each generate a rearranged bit string, and a plurality of rearranged bit strings are thereby generated. Further, the logical operation units 121-12N each perform a logical operation to calculate the XOR of each of the rearranged bit strings and the bit string of the original feature vector. These bit rearrangement units 111-11N and logical operation units 121-12N correspond to bit rearrangement portions and logical operation portions according to the present disclosure. However, the bit rearrangement portions and logical operation portions according to the present disclosure are not limited to the corresponding units in the foregoing embodiments. Alternatively, software may be executed to generate a plurality of rearranged bits and perform a plurality of logical operations.
An exemplary embodiment based on the use of the feature amount conversion apparatus according to the foregoing embodiments of the present disclosure will now be described.
The programs represented by the comparative example and the exemplary embodiment were used to convert the same pseudo data. The calculation time per block was 7212.71 nanoseconds in the comparative example. Meanwhile, in the exemplary embodiment, the calculation time per block was 22.04 nanoseconds (327.32 times the speed of the comparative example) when k=1, 33.20 nanoseconds (217.22 times) when k=2, 42.14 nanoseconds (171.17 times) when k=3, and 53.76 nanoseconds (134.16 times) when k=4. As mentioned, nonlinear conversion in the exemplary embodiment was thus substantially faster than in the comparative example.
A further embodiment of the present disclosure will now be described. The present embodiment performs a cascade process to increase the speed of recognition that is achieved by a discriminator when a real feature amount is binarized with k different threshold values. [Expression 9] below represents a vector that is obtained when a real feature amount X is binarized with k different threshold values.
b = (b_1^T, b_2^T, \ldots, b_k^T)^T [Expression 9]
For discrimination or other similar purposes, w^T b in [Expression 10] below is calculated, and the result is compared against a threshold value Th. In [Expression 10], w is a weight vector for discrimination.
w^T b = \sum_{i=1}^{k} w_i^T b_i [Expression 10]
It is assumed, for example, that k = 4, and that b_1, b_2, b_3, and b_4 are binarized at a 20% position, a 40% position, a 60% position, and an 80% position, respectively. In this instance, b_2 and b_3 are obviously higher in entropy than b_1 and b_4. Therefore, w_2^T b_2 and w_3^T b_3 have a wider value distribution than w_1^T b_1 and w_4^T b_4.
In view of the above, the present embodiment calculates w_2^T b_2, w_3^T b_3, and w_4^T b_4 in the order named. If w^T b can be determined to be definitely greater or smaller than the threshold value Th in the middle of this sequence of calculations, the present embodiment terminates the process immediately. This results in an increase in processing speed. In short, cascading is performed in the order from the widest w_i^T b_i distribution to the narrowest, or from the highest entropy value to the lowest. A sketch of such a cascade appears below.
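The following sketch is one way to realize the cascade; the early-termination bound (the largest magnitude the unevaluated terms could still contribute, i.e., the sum of |w_ij|) is an assumption of this sketch, not something prescribed above.

```python
import numpy as np

def cascade_discriminate(ws, bs, th):
    """Evaluate w^T b = sum_i w_i^T b_i ([Expression 10]) term by term, with
    ws and bs ordered from the widest score distribution (highest entropy)
    to the narrowest, and b elements in {+1, -1}. Returns the comparison
    against the threshold th as soon as it can be decided."""
    bounds = [float(np.abs(w).sum()) for w in ws]  # max |w_i^T b_i| per term
    score = 0.0
    for i, (w, b) in enumerate(zip(ws, bs)):
        score += float(np.dot(w, b))
        rest = sum(bounds[i + 1:])  # most the remaining terms can change the score
        if score - rest > th:
            return True             # definitely greater than Th
        if score + rest < th:
            return False            # definitely smaller than Th
    return score > th
```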
The present disclosure calculates co-occurring elements of an inputted feature vector by rearranging the inputted feature vector and performing a logical operation. Therefore, the co-occurring elements can be rapidly computed. The present disclosure is therefore useful, for example, as a feature amount conversion apparatus that converts a feature amount used for target recognition.
While the present disclosure has been described with reference to embodiments thereof, it is to be understood that the disclosure is not limited to those embodiments and constructions. The present disclosure is intended to cover various modifications and equivalent arrangements. In addition, while the various combinations and configurations described are preferred, other combinations and configurations, including more, less, or only a single element, are also within the spirit and scope of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
2013-116918 | Jun. 3, 2013 | JP | national
2014-028980 | Feb. 18, 2014 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2014/002816 | May 28, 2014 | WO |