The present disclosure particularly relates to an information processing apparatus and an information processing method which are suitable for performing dimension reduction on input data, and a storage medium which stores a program.
In general, in data analysis and in recognition tasks such as determining whether newly-generated data, obtained after a plurality of data items have been learnt, has a property which belongs to the learnt data, a computing cost increases and a learning problem becomes exponentially more difficult when the number of dimensions of input data is large. Therefore, it is important that a feature value of the input data is not used as it is, but that the number of dimensions is reduced while intrinsic information is maintained. Furthermore, data of four or more dimensions may not be simply recognized in a visual manner, and therefore, it is similarly important that, when data is visualized, the number of dimensions of the original data is reduced to three or less while the trend of the original data is preserved as much as possible.
Examples of general methods for dimension reduction include Fisher Discriminant Analysis (FDA).
In the FDA, dimension reduction is performed using linear projection calculated from a generalized eigenvalue problem which minimizes within-class dispersion and maximizes inter-class dispersion. Therefore, the FDA does not support dimension reduction of distributed data having multimodality within a class, that is, the FDA is not suitable for such dimension reduction. Furthermore, examples of a general unsupervised dimension reduction method which does not use labels include Principal Component Analysis (PCA). In this method, dimension reduction is performed using linear projection calculated from an eigenvalue problem associated with a covariance matrix of data. As another unsupervised dimension reduction method which does not use labels, Projection Pursuit (PP) disclosed in Friedman, J. H., Exploratory projection pursuit, J. A. S. A., 82, 249-266 has been used. The PP is a method for performing dimension reduction such that a comparatively interesting structure is preserved based on an index indicating the "interestingness" of a data structure. Although the index in this case is referred to as a "projection index", the dimension reduction is basically performed such that a most non-Gaussian direction in the data is found, and therefore, the index serves as a reference for measurement of a non-Gaussian property. However, the PCA and the PP both determine a feature axis globally, and therefore, dimension reduction which reflects a local distribution state may not be performed.
Therefore, a number of methods have been proposed for performing dimension reduction while local data distribution (hereinafter referred to as "neighborhood data") is preserved as much as possible. Among these methods, as a method for performing unsupervised linear dimension reduction, Locality Preserving Projections (LPP) is disclosed in Xiaofei He, Partha Niyogi, Locality Preserving Projections, Advances in Neural Information Processing Systems 16 (NIPS), Vancouver, Canada, 2003. In the LPP, locality of data in a feature space is focused on, and dimension reduction is performed while the local distance relationships between data items located close to each other in the feature space are maintained as much as possible, so that the data items are also located close to each other in the feature space after the dimension reduction. According to this method, the neighborhood relationship between data items is determined in accordance with a determination as to whether one of the data items is located within a neighborhood region of the other data item, or whether one of the data items is located in a hypersphere having an arbitrary one of the data items at its center.
As an example of a supervised linear dimension reduction method taking locality into consideration, Local Discriminant Information (LDI) disclosed in T. Hastie and R. Tibshirani, Discriminant adaptive nearest neighbor classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):607-615, 1996 is used. Also in this method, dimension reduction is performed such that data items in different classes are separated from each other with reference to labels, while the local distance relationships between data items are preserved as much as possible by bringing data items which are located close to each other still closer together. As a method for performing unsupervised nonlinear dimension reduction, Isomap disclosed in Joshua B. Tenenbaum, Vin de Silva, John C. Langford, "Global Geometric Framework for Nonlinear Dimensionality Reduction", Science, vol. 290, no. 5500, Dec. 22, 2000, pp. 2319-2323 is used. Also in this method, as parameters determined in advance, a neighborhood number k or a size ε of a hypersphere determines which data is treated as neighborhood data. For example, neighborhood data to be embedded in a neighborhood region after the dimension reduction is determined in accordance with the size ε of the hypersphere, and the dimension reduction is performed such that the relationships between the data are preserved.
According to an embodiment of the disclosure, an information processing apparatus which performs dimension reduction while local data distribution is stored as neighborhood data includes a distance calculation unit configured to calculate a distance between data to be subjected to the dimension reduction, a determination unit configured to determine a parameter which determines the neighborhood data based on the distance calculated by the distance calculation unit for each data to be subjected to the dimension reduction, and a dimension reduction unit configured to perform dimension reduction on data to be subjected to the dimension reduction based on the parameter determined by the determination unit.
Further features of the disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, a first embodiment of the disclosure will be described in detail with reference to the accompanying drawings. In this embodiment, a case where manifold learning is performed with unsupervised data will be described as an example. The term "manifold" means a figure or a space which locally appears to be a Euclidean space, and the term "manifold learning" means a method for extracting substantially low-dimensional information on a data structure embedded as a manifold in a high-dimensional space.
Concrete examples of data to be subjected to dimension reduction in this embodiment include high-dimensional data obtained by extracting a plurality of features from an image obtained by capturing an object, such as a face or a body of a person, an animal, an internal organ, or a vehicle, or from a monitoring image in a town, or the like. The examples may further include data which is represented by a high-dimensional feature value obtained by extracting a plurality of features from images obtained by capturing an object in a plurality of directions. The manifold learning is a versatile method which may be generally employed in a case where behavior and relevance of high-dimensional data are analyzed, and therefore, the original data is not limited to an image and may be audio, other signals, or a feature obtained by combining them.
In this embodiment, in the general manifold learning systems (such as Isomap, Locally Linear Embedding, and Laplacian Eigenmaps), the local neighborhood relationships obtained when nonlinear mapping is performed are more effectively reflected in a mapping result. Here, a method for setting the neighborhood relationship for each data item will now be described. In the manifold learning systems listed above as the general techniques, the local neighborhood relationships are reflected in a mapping result by similar methods. Specifically, neighborhood data of each data item is set by a neighborhood number k of the k-nearest neighbor algorithm, or all data included inside a hypersphere having arbitrary data (an arbitrary point a in a feature space) at its center is set as neighborhood data. Accordingly, this embodiment is applicable to any of the manifold learning systems described above. Note that the hypersphere having the arbitrary point a in the feature space at its center corresponds to a region within a distance (radius) ε from the arbitrary point a in the feature space, where ε is a positive real number.
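For illustration only, a minimal sketch of the two neighborhood determination rules described above (the k-nearest neighbor rule and the hypersphere of radius ε) is shown below. Euclidean distance is assumed, and the function names knn_neighbors and epsilon_neighbors are hypothetical and do not appear in the disclosure.

```python
import numpy as np

def knn_neighbors(X, k):
    """For each sample, return the indices of its k nearest neighbors
    (Euclidean distance), excluding the sample itself."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]  # k closest indices per row

def epsilon_neighbors(X, eps):
    """For each sample, return the indices of all samples inside a
    hypersphere of radius eps centered on that sample."""
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    np.fill_diagonal(d, np.inf)
    return [np.where(row <= eps)[0] for row in d]

# Example: 100 random 5-dimensional samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(knn_neighbors(X, k=3)[0])           # 3 nearest neighbors of sample 0
print(epsilon_neighbors(X, eps=1.5)[0])   # neighbors of sample 0 within radius 1.5
```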
Hereinafter, an example of target data for which this embodiment attains the greatest effect will be described.
Furthermore, the Swiss Roll distribution illustrated in the accompanying drawings, in which a two-dimensional plane is rolled up in a three-dimensional space so that different portions of the curved plane may be located close to one another, is a typical example of such target data.
For example, in the case of data located in the vicinity of a region 302 in which the distance between the curved planes is large, it is difficult to aggregate the different curved planes even if the parameter which determines neighborhood data is set large. However, in a case where regions on different curved planes are located close to each other, that is, in the case of a region 301, for example, the curved planes may not be separated from each other if the parameter k or ε which determines a range of neighborhood data is fixed to a value which is appropriate for the region 302. Furthermore, if the parameter k or ε suitable for the region 301 is applied to all the data, the parameter k or ε becomes considerably small so that neighboring curved planes are not aggregated. In this case, as illustrated in the corresponding distribution in the drawings, the data is divided into a large number of small groups, and the structure of the original data may not be maintained after the dimension reduction.
Accordingly, in this embodiment, different parameters k or ε which determine the neighborhood relationship are set for different data. Hereinafter, a procedure of a determination of the parameters k or ε will be described.
The configuration of the information processing apparatus according to this embodiment is illustrated in the accompanying drawing. The information processing apparatus includes a data input unit 101, an input device 102, a neighborhood data determination unit 103, an output device 105, a mapping calculation unit 106, a mapping storage unit 107, and an inter-data distance calculation unit 108, which will be described below.
First, in step S201 of the flowchart, the data input unit 101 inputs data to be subjected to the dimension reduction.
In step S202, the inter-data distance calculation unit 108 normalizes the feature dimensions of all the data and obtains distances between the data. Specifically, the target data is normalized so that the dispersion of each feature value is 1, and thereafter, the distances between all the data are obtained. Furthermore, the neighborhood data determination unit 103 obtains information on a dimension reduction method, a neighborhood data determination rule, and a neighborhood embedding weight determination rule which are input using the input device 102 operated by a user. The dimension reduction method specified here is a basic manifold learning system such as Isomap, LLE, or Laplacian Eigenmaps. The neighborhood data determination rule is a method for determining neighborhood data using the k-nearest neighbor algorithm or a size ε of a hypersphere. The neighborhood embedding weight determination rule is a method for determining a weight wij (i and j are IDs of data) used when data to be embedded in a neighboring region determined by the neighborhood data determination rule is moved closer. As an option, a weight calculated in accordance with Expressions (1) to (4) below is used, for example. In this case, a weight for data items which are not to be embedded close to each other is set to 0, for example. Furthermore, xi(k) in Expression (4) indicates the k-th neighborhood sample of xi.
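Since Expressions (1) to (4) are not reproduced here, the following sketch merely illustrates one common choice of neighborhood embedding weight, a heat-kernel weight assigned only to neighboring pairs and 0 otherwise. This is an assumption for illustration, not the expressions themselves, and the function name embedding_weights is hypothetical.

```python
import numpy as np

def embedding_weights(X, neighbors, t=1.0):
    """Sketch of a neighborhood-embedding weight matrix W.
    W[i, j] = exp(-||xi - xj||^2 / t) if j is a neighbor of i, else 0.
    `neighbors[i]` is an iterable of neighbor indices of sample i."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in neighbors[i]:
            W[i, j] = np.exp(-np.sum((X[i] - X[j]) ** 2) / t)
    # Symmetrize so that wij = wji, as is usual for an undirected neighborhood graph.
    return np.maximum(W, W.T)
```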
In step S203, the neighborhood data determination unit 103 determines neighborhood data based on the methods specified by the user, with reference to the distances between the data calculated by the inter-data distance calculation unit 108. Specifically, the neighborhood data determination unit 103 determines, for all the data to be subjected to the dimension reduction which is input by the data input unit 101, the neighborhood data to be embedded in a neighborhood region after the dimension reduction by determining a certain limit distance used at the time of the dimension reduction.
An example of the process performed in step S203 will be described with reference to the flowchart described below.
In step S601 of the flowchart, the neighborhood data determination unit 103 sets a hypersphere of an initial size ε having one of the data items to be subjected to the dimension reduction at a center C.
In step S603, a process from step S604 to step S608 below is repeatedly performed until the sizes ε of all the data are determined. First, in step S604, the neighborhood data determination unit 103 determines the data which is farthest from the center C within the hypersphere as F1, and sets another hypersphere of the same size ε having F1 at its center.
In step S605, the neighborhood data determination unit 103 determines whether a rate of the number of data shared by the two hyperspheres is reduced by a rate set in advance or more. As a result of the determination, when the determination is affirmative, the process proceeds to step S606, and otherwise, the process proceeds to step S608. When the process in step S604 is performed for the first time, the process proceeds to step S608.
In step S608, the neighborhood data determination unit 103 increases the size ε by an arbitrary width σ, and thereafter, returns to step S604 where the same process is performed. By gradually increasing the size ε of the hypersphere, the farthest point from the center C of the hypersphere is set as a new Fx point (x=1, 2, 3, and so on) step by step. Then a rate of the number of data shared by the two hyperspheres is calculated based on the new Fx point.
When the size ε of the hypersphere reaches a certain value, the rate of the number of shared data is reduced by the rate set in advance or more. In this case, in step S606, the neighborhood data determination unit 103 determines the size (ε−σ) obtained immediately before as the size ε which determines the neighborhood data of the data associated with the center C.
Here, a case where the threshold value used for the determination in step S605 is set to a reduction of 20% or more will be described. The size ε is increased, and when the size ε illustrated in the drawing is reached, the rate of the number of data shared by the two hyperspheres is reduced by 20% or more. Accordingly, the size (ε−σ) obtained immediately before is determined as the size ε which determines the neighborhood data of the data associated with the center C.
Note that the neighborhood number k of the k-nearest neighbor algorithm may be obtained by the same procedure. Specifically, the neighborhood number k is increased in step S608, the farthest point in the group of the k nearest points is set as the Fx point in step S604, and it is determined in step S605 whether the rate of the number of shared points (data) is reduced by the rate set in advance or more. In step S606, the neighborhood number (k−σ) obtained immediately before is determined as the neighborhood number k which determines the neighborhood data.
By this process, an aggregation of different neighborhood numbers k or an aggregation of different sizes ε of hyperspheres may be obtained for the different data. Hereinafter, the aggregation of the neighborhood numbers k and the aggregation of the sizes ε of the hyperspheres are denoted by {k} and {ε}, respectively. Note that, when the total number of data to be subjected to the dimension reduction is denoted by n, {k} and {ε} are represented as follows: {k}={k1, k2, k3, . . . , kn} and {ε}={ε1, ε2, ε3, . . . , εn}.
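A minimal sketch of the hypersphere-size search of steps S603 to S608 is shown below. It assumes that the "rate of the number of data shared by the two hyperspheres" is the shared count divided by the number of data inside the hypersphere of the center C, and that the reduction in step S605 is measured as an absolute drop relative to the previous iteration; both readings are assumptions, and the function name adaptive_epsilon is hypothetical.

```python
import numpy as np

def adaptive_epsilon(X, center_idx, eps0, sigma, drop=0.2):
    """Per-sample hypersphere-size search (sketch of steps S603 to S608)."""
    dc = np.linalg.norm(X - X[center_idx], axis=1)   # distances from center C
    eps, prev_rate = eps0, None
    while eps <= dc.max() + sigma:
        inside_c = np.where((dc > 0) & (dc <= eps))[0]
        if inside_c.size == 0:                       # nothing inside yet: grow
            eps += sigma
            continue
        f_idx = inside_c[np.argmax(dc[inside_c])]    # farthest point F (step S604)
        df = np.linalg.norm(X - X[f_idx], axis=1)
        inside_f = np.where(df <= eps)[0]
        rate = np.intersect1d(inside_c, inside_f).size / inside_c.size
        if prev_rate is not None and prev_rate - rate >= drop:
            return eps - sigma                       # step S606: size obtained just before
        prev_rate = rate
        eps += sigma                                 # step S608: grow and retry
    return eps                                       # rate never dropped: keep final size
```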
Referring back to the flowchart, in step S204, the mapping calculation unit 106 performs the dimension reduction on the target data by the dimension reduction method specified by the user in step S202, based on the parameter {k} or {ε} determined by the neighborhood data determination unit 103.
For example, when Isomap is selected by the user, edges are obtained by connecting each of the n data items to its neighborhoods in accordance with the equation {k}={k1, k2, k3, . . . , kn}, and a weight wij is assigned to each connected edge in accordance with Expression (1). Thereafter, the shortest geodesic distances between all the points in the graph in which the neighboring points are connected to one another are obtained by the algorithm of Floyd-Warshall disclosed in Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, Introduction to Algorithms, MIT Press and McGraw-Hill, Section 26.2, "The Floyd-Warshall algorithm", pp. 558-565, 1990 or the algorithm of Dijkstra disclosed in E. W. Dijkstra, A note on two problems in connexion with graphs, In Numerische Mathematik, 1, pp. 269-271, 1959. Thereafter, data distribution in a space of a dimension equal to or lower than the input feature dimension is obtained by multidimensional scaling (MDS) so that the shortest geodesic distances between all the points are reproduced.
A result of this process corresponds to a result of the dimension reduction. Here, the mapping calculation unit 106 stores the result of the dimension reduction in the mapping storage unit 107. Alternatively, the mapping calculation unit 106 may display the result of the dimension reduction on the output device 105. Note that any display method may be employed, in addition to three-dimensional or two-dimensional display which allows a person to visually check the data distribution, as long as the result of the dimension reduction may be evaluated.
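A minimal sketch of the Isomap procedure outlined above, with a separate neighborhood number per sample, is shown below. Euclidean distance is used as the edge weight in place of Expression (1), the neighborhood graph is assumed to be connected, and the function name isomap_per_point_k is hypothetical.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap_per_point_k(X, k_list, out_dim=2):
    """Sketch of Isomap with a separate neighborhood number k per sample:
    neighborhood graph -> shortest geodesic distances -> classical MDS."""
    n = X.shape[0]
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    # Neighborhood graph: connect each point i to its k_list[i] nearest points.
    G = np.full((n, n), np.inf)
    order = np.argsort(d, axis=1)
    for i in range(n):
        for j in order[i, 1:k_list[i] + 1]:     # skip the point itself
            G[i, j] = G[j, i] = d[i, j]         # edge weight = Euclidean distance
    # Shortest geodesic distances (Dijkstra; Floyd-Warshall also works).
    geo = shortest_path(G, method="D", directed=False)
    # Classical MDS on the geodesic distance matrix (assumes a connected graph).
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (geo ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:out_dim]      # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```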
Furthermore, the method for reducing a dimension by non-linear conversion, mainly in the manifold learning, is described in detail in this embodiment. However, in an unsupervised setting, the method for determining the parameters obtains the same result both in the case of non-linear dimension reduction and in the case of linear dimension reduction. Therefore, a linear dimension reduction method taking locality into consideration (Locality Preserving Projections or the like) may be similarly employed.
As described above, according to this embodiment, the neighborhood number k of the k-nearest neighbor algorithm or the size ε of the hypersphere may be determined for each data. Accordingly, neighborhood data may be appropriately determined even in data distribution of Swiss Roll, for example, and a result of dimension reduction desired by the user may be obtained.
Hereinafter, a second embodiment of the disclosure will be described in detail with reference to the accompanying drawings. In this embodiment, a case where a result of dimension reduction desired by a user is obtained more accurately after a result of the dimension reduction is obtained by the first embodiment will be described as an example.
The configuration of the information processing apparatus according to this embodiment is illustrated in the accompanying drawing. The information processing apparatus further includes a data extraction unit 701 in addition to the configuration of the first embodiment.
In step S801, the data extraction unit 701 performs correction by reflecting an intention of the user in the result of the dimension reduction obtained in step S204. Specifically, first, the data extraction unit 701 determines candidates of the distance relationships between data items (information indicating which data items are to be located close to each other) after the dimension reduction which are to be input by the user. These candidates are determined in accordance with the neighborhood relationships set by the neighborhood data determination unit 103, a result of mapping corresponding to the set neighborhood relationships, and information associated with the data input from the data input unit 101 (images corresponding to the data, for example). However, data items in the feature space which are randomly selected from the data input by the data input unit 101 may be determined as the candidates of the distance relationships to be input by the user, or a neighborhood graph generated based on the neighborhood relationships determined by the neighborhood data determination unit 103 may be determined as the candidates to be input by the user. Alternatively, the candidates may be selected from the feature space after the mapping temporarily determined by the mapping calculation unit 106. The user checks the result of the dimension reduction displayed by the output device 105 and inputs a new condition using the input device 102. In this way, a result of the dimension reduction desired by the user may be more accurately obtained.
In general, features before the dimension reduction may include a feature value which is not associated with data distribution desired by the user (for example, a feature value corresponding to uniform distribution obtained irrespective of a property and a state of data). However, the neighborhood determination method and the method for determining the neighborhood relationship described above in the general manifold learning are simply based on distances between all input data in a feature space, and therefore, the neighborhood relationships desired by the user may not be obtained. Therefore, in this embodiment, information on data to be located in an isolated manner or data to be embedded in a neighboring region is externally supplied based on previous knowledge of the user or the like so that an amount of information is increased. In this way, a result of the dimension reduction desired by the user may be more accurately obtained.
First, in step S1201, the data extraction unit 701 displays an extraction result based on the result of the dimension reduction on the output device 105 and determines whether an instruction indicating that a result of the dimension reduction is to be newly obtained has been issued by the user. As a result of the determination, when the determination is affirmative, the process proceeds to step S1202, and otherwise, the process is terminated.
In step S1202, the neighborhood data determination unit 103 obtains content input by the user using the input device 102. An example of the display of the result of the extraction performed by the data extraction unit 701 in the output device 105 and additional information input by the user who has checked the display using the input device 102 will be described hereinafter.
The method for indicating the order of the target data by causing the user to input the neighborhood relationships between the target data is illustrated in the accompanying drawings.
Referring back to the flowchart, in step S1203, the neighborhood data determination unit 103 converts the additional information input by the user into the parameter {k} or {ε} which determines the neighborhood data.
Algorithms for the examples illustrated in the accompanying drawings will now be described.
For example, when the order of seven randomly-extracted images which is tentatively output as a result of the dimension reduction is different from the order input by the user, the parameters k and ε which determine the neighborhood data of the corresponding images are changed so that the order obtained as the result of the dimension reduction becomes close to the information indicated by the user. For example, when the data is arranged in one line, the leftmost data is located in the closest position. If data which is to be on the right side is positioned on the left side, which is different from the order input by the user, the parameter k or ε is reduced from the current tentative value. Furthermore, if data which is to be on the left side is positioned on the right side, the parameter k or ε which determines the neighborhood data is increased. Note that the parameter k or ε is increased or reduced linearly, not only for the randomly-extracted data but also for the surrounding data.
Specifically, it is assumed that the distribution illustrated in the accompanying drawing is obtained as a tentative result of the dimension reduction.
The mapping calculation unit 106 performs the dimension reduction again using the parameter {k} or {ε} which is newly obtained by the conversion, and the process from step S1203 to step S1205 is repeatedly performed until matching with the order input by the user is attained. By this repeated process, data which the user does not desire to be closely positioned may be reset as data not to be embedded in a neighboring region. By repeatedly performing the calculation described above, a manifold which matches the order indicated by the user, as illustrated in the accompanying drawing, may be obtained.
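A minimal sketch of one possible reading of the adjustment rule described above is shown below. It compares the user-specified order of the extracted samples with their order along one axis of the tentative embedding and reduces or increases the neighborhood number k of each extracted sample accordingly; the linear propagation of the change to surrounding data is omitted, and the function name adjust_parameters_by_order is hypothetical.

```python
import numpy as np

def adjust_parameters_by_order(k_list, sampled_idx, user_order, embedded_coord, step=1):
    """Adjust per-sample k so that the tentative left-to-right order of the
    extracted samples (coordinates `embedded_coord`, one value per sample)
    moves toward the order specified by the user (`user_order`, a list of
    sample indices from left to right)."""
    k_list = np.asarray(k_list).copy()
    # Current left-to-right order of the extracted samples in the tentative embedding.
    tentative_order = [sampled_idx[i] for i in np.argsort(embedded_coord[sampled_idx])]
    for rank_user, idx in enumerate(user_order):
        rank_map = tentative_order.index(idx)
        if rank_map < rank_user:          # appears too far left: reduce k
            k_list[idx] = max(1, k_list[idx] - step)
        elif rank_map > rank_user:        # appears too far right: increase k
            k_list[idx] += step
    return k_list
```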
As described above, according to this embodiment, a result of dimension reduction desired by the user is reflected in a tentative result of the dimension reduction, and therefore, a result of the dimension reduction desired by the user may be more accurately obtained.
Hereinafter, a third embodiment of the disclosure will be described in detail with reference to the accompanying drawings. As a result of dimension reduction, a data branch 1301 or a data aggregation 1302 which is not to be generated may appear in the distribution, as illustrated in the accompanying drawing. In this embodiment, a method for updating the neighborhood relationships using a result of the dimension reduction will be described. First, in step S1501, a result of the dimension reduction in a p0-dimensional space is obtained in accordance with the first embodiment, and the parameters {k} and {ε} used at this time are denoted by {k}(0) and {ε}(0), respectively.
Here, as with the parameters {k}(0) and {ε}(0), parameters which newly set the neighborhood relationships in the p0-dimensional space after the dimension reduction are denoted by {k}(1) and {ε}(1), respectively. In step S1502, the method described above for determining, for each data item, the parameter k or ε which determines neighborhood data in the d-dimensional feature space is employed in the p0-dimensional space after the dimension reduction. Then the parameter {k}(1) or {ε}(1) is determined based on the neighborhood relationships in the p0-dimensional space. Subsequently, in step S1503, the mapping calculation unit 106 holds all the neighborhood data which is determined from the target data in the p0-dimensional space in accordance with the parameter {k}(1) or {ε}(1), and performs the dimension reduction of the target data again with reference to the distance relationships in the d-dimensional feature space which serves as the feature value. Note that the neighborhood data which is determined by the parameter {k}(1) or {ε}(1) is not simply moved to a neighboring region in the d-dimensional feature space; the neighborhood relationships in the p0-dimensional space are normally different from the neighborhood relationships in the d-dimensional space.
In this case, assuming that the dimension of the feature space obtained as a result of the dimension reduction which is newly performed is p1 dimensions (p1 may be equal to p0), a parameter {k}(2) or {ε}(2) which sets the new neighborhood relationships is obtained by the same procedure. Thereafter, the same process from step S1501 to step S1504 is repeatedly performed.
This process is repeatedly performed until the neighborhood data of all the target data is finally no longer updated. In this way, a result of the dimension reduction in which local data distribution is further reflected may be obtained. Note that, also in the case of the example illustrated in the accompanying drawing, the branch or the aggregation which is not to be generated may be suppressed by this process.
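The refinement loop of this embodiment may be sketched as follows. Here embed and determine_neighbors are placeholders for the dimension reduction and the neighborhood determination described above, and the loop structure is an illustration rather than the exact processing of steps S1501 to S1504.

```python
def iterative_refinement(X, neighbors, embed, determine_neighbors, max_iter=20):
    """Sketch of the refinement loop: reduce dimension with the current
    neighborhoods, re-determine neighborhoods in the reduced space, repeat.
    `embed(X, neighbors)` performs the dimension reduction using distances in
    the original d-dimensional feature space with the given neighbor sets, and
    `determine_neighbors(Y)` re-determines per-sample neighborhoods (k or eps)
    in the reduced space; both are placeholders."""
    for _ in range(max_iter):
        Y = embed(X, neighbors)                  # dimension reduction result
        new_neighbors = determine_neighbors(Y)   # neighborhoods set in reduced space
        if all(set(a) == set(b) for a, b in zip(new_neighbors, neighbors)):
            break                                # no neighborhood was updated: stop
        neighbors = new_neighbors
    return Y, neighbors
```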
Furthermore, the method for reducing a dimension by non-linear conversion, mainly in the manifold learning, is described in detail in this embodiment. However, in an unsupervised setting, the method for determining the parameters obtains the same result both in the case of non-linear dimension reduction and in the case of linear dimension reduction. Therefore, a linear dimension reduction method taking locality into consideration (Locality Preserving Projections or the like) may be similarly employed.
Hereinafter, a fourth embodiment of the disclosure will be described in detail with reference to the accompanying drawings. In this embodiment, a case where supervised dimension reduction is performed will be described as an example. Note that the configuration of the information processing apparatus according to this embodiment is the same as that of the first embodiment.
Concrete examples of data to be subjected to dimension reduction in this embodiment include high-dimensional data obtained by extracting a plurality of features from images obtained by capturing a plurality of types of object, to which different labels are assigned for the different types. Furthermore, the images of the plurality of types of object may be images obtained by capturing faces and bodies of various persons, animals, organs, vehicles, and the like. The concrete examples may further include high-dimensional data obtained by extracting a plurality of features from a series of images obtained by capturing the same target in a time-series manner, such as monitoring images in a town, to which labels indicating a "normal operation" or an "abnormal operation" are assigned. Furthermore, the method is versatile and may be generally employed in a case where behavior and relevance of high-dimensional data are analyzed, and therefore, the data is not limited to an image and may be audio, other signals, or a feature obtained by combining them.
In this embodiment, supervised dimension reduction is performed, and therefore, all the data to be subjected to the dimension reduction have class labels for classification. Furthermore, this embodiment is applicable to supervised dimension reduction methods which are generally used (Local Discriminant Information, Local Fisher Discriminant Analysis, and the like). A method for more accurately reflecting the local neighborhood relationships in a mapping result when linear mapping is performed by these methods will be described. In the supervised dimension reduction methods described above, the local neighborhood relationships are reflected in a mapping result by similar methods. Specifically, neighborhood data of each data item is set by a neighborhood number k of the k-nearest neighbor algorithm, or all data included inside a hypersphere of a size ε having arbitrary data at its center is set as neighborhood data. Accordingly, this embodiment is applicable to any of the supervised dimension reduction methods described above.
In step S1601, the data input unit 101 inputs data to be subjected to dimension reduction having labels. In step S1602, the inter-data distance calculation unit 108 normalizes the feature dimensions of all the data. Specifically, the target data is normalized so that the dispersion of each feature value is 1. Thereafter, the distances between all the data are obtained. Then the neighborhood data determination unit 103 assigns an appropriate initial value to the parameter k or ε (for example, {k}={1, 1, 1, . . . , 1} or ε=distance to a nearest point). Furthermore, the neighborhood data determination unit 103 obtains information on a dimension reduction method, a neighborhood data determination rule, and a neighborhood embedding weight determination rule which are input using the input device 102 operated by a user. Here, a basic supervised dimension reduction method, such as Local Discriminant Information or Local Fisher Discriminant Analysis, is specified. The neighborhood data determination rule and the neighborhood embedding weight determination rule are the same as those of the first embodiment.
In step S1603, the neighborhood data determination unit 103 increases the initial value set in step S1602 so as to obtain a maximum value at which the neighborhood data does not include data having a different label. The maximum value is determined as a tentative parameter k or ε of each data item.
Specifically, as illustrated in the accompanying drawing, the neighborhood number k having an initial value of 1 of a certain data item is increased so that neighborhood points are searched for, and the largest neighborhood number k at which no data having a different label is included is determined as the neighborhood number k of that data item.
Similarly, when the neighborhood number k having an initial value of 1 of the data item 2102 is increased so that a neighborhood point is searched for, class-2 data having a different label is reached when the neighborhood number k is 3, as illustrated in the accompanying drawing. Therefore, the neighborhood number k which determines the neighborhood data of the data item 2102 is determined to be 2.
Based on the rule described above, the parameters k (or ε) for determining neighborhood data are set for all the classes and all the data in the feature space before the dimension reduction. Accordingly, small neighborhood numbers k may be set in portions which carry a large amount of information, whereas large neighborhood numbers k may be set in regions in which data having the same class label gathers, and consequently, generalization ability may be improved.
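A minimal sketch of the rule of step S1603 is shown below, assuming Euclidean distance: for each sample, the neighborhood number k is grown from 1 and the largest value whose k nearest neighbors all share the sample's label is kept. The function name supervised_k is hypothetical.

```python
import numpy as np

def supervised_k(X, labels):
    """For each sample, return the largest k such that its k nearest
    neighbors all have the same label as the sample (minimum 1)."""
    labels = np.asarray(labels)
    n = X.shape[0]
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    np.fill_diagonal(d, np.inf)               # exclude the sample itself
    order = np.argsort(d, axis=1)             # neighbors sorted by distance
    k_list = np.ones(n, dtype=int)
    for i in range(n):
        k = 0
        while k < n - 1 and labels[order[i, k]] == labels[i]:
            k += 1                            # extend while labels still match
        k_list[i] = max(1, k)                 # largest k with no foreign label
    return k_list
```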
Furthermore, in step S1603, the neighborhood data determination unit 103 may multiply the parameter {k} or {ε} determined as described above by a constant α so as to obtain a parameter K (α×k) or E (α×ε), and the constant α of an optimum value may be searched for by cross validation or the like.
In step S1604, a mapping calculation unit 106 performs dimension reduction by the dimension reduction method selected by the user in step S1602 based on the parameter {k} or {ε} determined in step S1603. Then the mapping calculation unit 106 stores a result of the dimension reduction in a mapping storage unit 107.
Although a result of the dimension reduction is obtained by the procedure described above, the neighborhood relationships may be further updated using the obtained result of the dimension reduction, as with the third embodiment, in accordance with the procedure described below.
In step S1701, the result of the dimension reduction obtained in step S1604 is updated so as to more closely reflect the neighborhood relationships obtained after the dimension reduction. Specifically, the neighborhood relationships are determined and updated in the p0-dimensional feature space obtained by performing the dimension reduction using the parameter {k} or {ε}. Hereinafter, the parameters {k} and {ε} are represented as {k}(0) and {ε}(0), respectively. Then parameters {k}(1) and {ε}(1) which newly set the neighborhood relationships are determined in the p0-dimensional space by the same method as the third embodiment.
Subsequently, in step S1803, the mapping calculation unit 106 holds all neighborhood data which is determined from target data in the p0-dimensional space in accordance with the parameter {k}(1) or {ε}(1), and performs the dimension reduction of the target data again with reference to the distance relationships in the d-dimensional feature space before the dimension reduction. Note that neighborhood data which is determined by the parameter {k}(1) or {ε}(1) is not simply moved to a neighboring region in the d-dimensional feature space. The neighborhood relationship in the p0-dimensional space is normally different from the neighborhood relationship in the d-dimensional space.
In this case, assuming that the dimension of the feature space obtained as a result of the dimension reduction which is newly performed is p1 dimensions (p1 may be equal to p0), a parameter {k}(2) or {ε}(2) which sets the new neighborhood relationships is obtained by the same procedure. Thereafter, the same process is repeatedly performed from step S1801 to step S1804. This process is repeatedly performed until the neighborhood data of all the target data is finally no longer updated. In this way, a result of dimension reduction in which local data distribution is further reflected may be obtained even in the case of supervised dimension reduction.
Hereinafter, a fifth embodiment of the disclosure will be described in detail with reference to the accompanying drawings. As with the fourth embodiment, supervised dimension reduction is performed in this embodiment. The parameter k or ε which determines neighborhood data after the dimension reduction is changed for each class, and data items in the same class share the same parameter k or ε. Note that the configuration of the information processing apparatus according to this embodiment is the same as that of the first embodiment.
The processing procedure of this embodiment is basically the same as that of the fourth embodiment.
The number of dimensions of input data is denoted by d. When the total number of dimension reduction target data is denoted by n and the number of all classes for classification is denoted by L, the number of elements of each class l is denoted by nl (l=1, 2, 3, . . . , L). In this case, when the process of determining data to be embedded in a neighborhood region after the dimension reduction is performed by the k-nearest neighbor algorithm, the neighborhood number k is represented as "{k}={k1, k2, k3, . . . , kl, . . . , kL}". When the data to be embedded in a neighborhood region is determined using a hypersphere of a size ε, the size is represented as "{ε}={ε1, ε2, ε3, . . . , εl, . . . , εL}".
First, the process from step S1601 to step S1602 is the same as that of the fourth embodiment. Specifically, an appropriate initial value is assigned to the parameter {k} or {ε} (for example, {k}={1, 1, 1, . . . , 1} or ε=distance to a nearest point). In step S1603, the neighborhood data determination unit 103 obtains the parameter {k} or {ε}. As a concrete method, the initial value is increased, and a parameter kl or εl is obtained for each class such that the parameter becomes as large as possible while the rate of data having a different label included in the neighborhood data is minimized. Here, in the neighborhood data set by the parameter kl or εl set for data of a class l, the total number of data having different labels is denoted by Fl(kl) or Fl(εl). In this case, when the neighborhood data is determined by the k-nearest neighbor algorithm, Expression (6) below may be used as a method for determining the parameter kl. Note that the parameter εl may be obtained by the same method when the neighborhood data is determined using a hypersphere having a size ε.
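Since Expression (6) is not reproduced here, the following sketch shows only one possible reading of the per-class rule: for each class l, the largest k is chosen such that the rate of different-label data appearing in the k-nearest neighborhoods of class-l samples stays within a small tolerance. The function name per_class_k and the tolerance tol are assumptions for illustration.

```python
import numpy as np

def per_class_k(X, labels, tol=0.05):
    """For each class, return the largest shared k such that the rate of
    samples with a different label among the k nearest neighbors of the
    class members does not exceed `tol` (one possible reading of the rule)."""
    labels = np.asarray(labels)
    n = X.shape[0]
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    np.fill_diagonal(d, np.inf)
    order = np.argsort(d, axis=1)
    k_per_class = {}
    for cls in np.unique(labels):
        members = np.where(labels == cls)[0]
        best_k = 1
        for k in range(1, n):
            neigh = order[members, :k]                  # k-NN of each class member
            foreign = np.mean(labels[neigh] != cls)     # rate of other-label data
            if foreign <= tol:
                best_k = k
            else:
                break
        k_per_class[cls] = best_k
    return k_per_class
```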
As described above, in step S1603, the neighborhood data determination unit 103 obtains the parameters {k} or {ε} of all the classes in the feature space. Subsequently, a process in step S1604 is performed in the same procedure as the fourth embodiment. In this way, the dimension reduction may be performed.
Furthermore, as with the case of the fourth embodiment, a result of the dimension reduction in which the neighborhood relationships are further updated may be obtained in accordance with the update procedure described in the fourth embodiment.
Hereinafter, a sixth embodiment of the disclosure will be described in detail with reference to the accompanying drawings. As with the fifth embodiment, different parameters k or ε are set for different classes, and data items in the same class share the same parameter k or ε. Note that the configuration of the information processing apparatus according to this embodiment is the same as that of the first embodiment.
In the fifth embodiment, the parameter k or ε is obtained from the input dimension reduction target data. On the other hand, the user may recognize a trend of a degree of concentration or dispersion of the data in each class, in addition to the label information, as a property of the data to be subjected to classification. Therefore, in this embodiment, a method will be described for obtaining a result of dimension reduction using the trend of concentration or dispersion of the data in each class when a d-dimensional feature is extracted from the dimension reduction target data and the dimension reduction is performed. As the amount of correct information associated with the data input in advance becomes larger, a different result of the dimension reduction may be obtained. Accordingly, in this embodiment, the parameter k or ε which determines the neighborhood data is not determined only using the input data but is determined using other information which is externally supplied.
Note that, for simplicity of description, the total number L of classification target classes is 2 in the description below. Furthermore, in a first class, a d-dimensional feature is extracted from an image obtained by capturing a front face of a person by an arbitrary feature extraction method. In a second class, a d-dimensional feature is extracted from a general background image by the same feature extraction method.
First, it is assumed that the user desires that the data in the first class be concentrated in a considerably small space and the data in the second class be dispersed over a large area. In this case, by inputting information associated with this difference in distribution, the dimension reduction may be performed taking into consideration a feature which separates the two classes from each other in terms of the original d-dimensional feature value, and in addition, the dimension reduction may be performed while the trend of the degree of concentration or dispersion of each class is reflected.
Therefore, in step S1901, the inter-data distance calculation unit 108 normalizes the feature dimensions of all the data. Specifically, the target data is normalized so that the dispersion of each feature value is 1. Thereafter, the distances between all the data are obtained. Then the neighborhood data determination unit 103 obtains an initial value of the parameter k or ε input using the input device 102 operated by the user, in addition to the information on a dimension reduction method, a neighborhood data determination rule, and a neighborhood embedding weight determination rule. When neighborhood data is determined using the k-nearest neighbor algorithm, for example, {k}={k1, k2}={100, 0} is set as an initial value of the parameter k using the input device 102 operated by the user. Note that n1 is equal to or larger than 100.
The parameter k or ε is determined based on the initial value in step S1603, and dimension reduction is performed in step S1604.
Furthermore, when the neighborhood relationships are to be further corrected, the update procedure described in the fourth embodiment may be employed.
Although the user inputs the initial value of the parameter k or ε in this embodiment, the initial value of the parameter k or ε may be determined using dispersion values of data in the individual classes obtained from another experiment. Alternatively, the dimension reduction may be performed by setting a small parameter {k} or {ε} when a target included in a classification target class belongs to a deep layer with reference to a layer structure of WordNet and setting a large parameter {k} or {ε} in a case of an upper layer.
The aspect of the embodiments may be realized by a process of supplying a program which realizes at least one function according to the foregoing embodiments to a system or an apparatus through a network or a storage medium and reading the program using at least one processor included in a computer of the system or the apparatus. Furthermore, the aspect of the embodiments may be realized by a circuit which realizes at least one function (an application specific integrated circuit (ASIC), for example).
Embodiment(s) of the disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2016-215470 filed Nov. 2, 2016, which is hereby incorporated by reference herein in its entirety.
Other Publications:
Wu et al., "A SOM-Based Dimensionality Reduction Method for KNN Classifiers", IEEE 2010 International Conference on System Science and Engineering, pp. 1-6, 2010.
Friedman, J. H., "Exploratory Projection Pursuit", Journal of the American Statistical Association, vol. 82, no. 397, pp. 249-266, Mar. 1987.
Xiaofei He, Partha Niyogi, "Locality Preserving Projections", Advances in Neural Information Processing Systems 16 (NIPS), Vancouver, Canada, 2003, pp. 1-8.
T. Hastie and R. Tibshirani, "Discriminant Adaptive Nearest Neighbor Classification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 607-616, Jun. 1996.
Joshua B. Tenenbaum, Vin de Silva, John C. Langford, "Global Geometric Framework for Nonlinear Dimensionality Reduction", Science, vol. 290, no. 5500, Dec. 22, 2000, pp. 2319-2323.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, "Introduction to Algorithms" (third edition), MIT Press, 2009.
E. W. Dijkstra, "A Note on Two Problems in Connexion with Graphs", Numerische Mathematik, 1, pp. 269-271, 1959.