The present invention relates to machine learning in artificial intelligence (AI) and, more particularly, to a classification algorithm based on multiform separation, in particular quadratic multiform separation.
As is well known, machine learning builds a hypothetical model based on sample data for a computer to make a prediction or a decision. The hypothetical model may be implemented as a classifier, which approximates a mapping function from input variables to output variables. The goal of machine learning is to make the hypothetical model as close as possible to a target function which always gives correct answers. This goal may be achieved by training the hypothetical model with more sample data.
Machine learning approaches are commonly divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Various models have been developed for machine learning, such as the convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM) network, YOLO, ResNet (e.g., ResNet-18 and ResNet-34), VGG16, GoogLeNet, LeNet, MobileNet, decision trees, and the support vector machine (SVM).
In the traditional approach, a classifier employs only a single model with one inference function to accomplish the job of classification. It may solve a simple classification problem as shown in
However, in some cases, it is very difficult, or even impossible, to use one single inference function to distinguish the characteristics among elements and put the elements into different classes.
Also, every model has its own advantages and drawbacks in terms of accuracy, robustness, complexity, speed, dependency, cost, and so on; when a model focuses on some of these aspects, it may neglect the others, and an extreme bias may therefore occur.
Therefore, it is desirable to provide an improved classifier to mitigate and/or obviate the aforementioned problems.
The present invention proposes a classification algorithm (or paradigm) in supervised machine learning. It is based on a novel concept, called “multiform separation” (MS), for separating sets in Euclidean space. Next, a method is presented to carry out the multiform separation; this method may be regarded as the core of the present invention.
Further, a particular embodiment uses “quadratic multiform separation” (QMS) to realize the present invention. In such an embodiment, quadratic functions are used to build a cost function whose minimizers generate the solution of the classification problem.
The classification algorithm of the present invention may be implemented in a cloud server or a local computer, as hardware or software (or a computer program), and as separate circuit devices on a set of chips or as an integrated circuit device on a single chip.
Before implementing the main steps of the classification algorithm of the present invention, several preliminary steps should be done in advance.
(Preliminary Step P1: Preparing a Training Set)
Let Ω ⊂ ℝ^p be a collection of data (or observations) which is composed of m memberships (or categories) of elements, and the m memberships are digitized as 1, 2, …, m.
A part of the data, Ωtr ⊂ Ω, typically called a “training set”, and another part, Ωtt ⊂ Ω, typically called a “test set”, are prepared from the data Ω. The collection of data Ω may optionally include more parts, such as a remaining set Ωth. It is assumed that the training set Ωtr and the test set Ωtt are sufficiently large and share the full characteristics represented by the whole collection of data Ω.
(Preliminary Step P2: Setting a Membership Function)
Let y: Ω→S={1, 2, . . . , m} be a membership function (also regarded as a target function) so that y(x) gives precisely the genuine membership of x.
(Preliminary Step P3: Training a Classifier Using the Classification Algorithm of the Present Invention)
The goal of the classification problem is to use the training set Ωtr to derive a classifier ŷ(·) that serves as a good approximation of y(·).
(Preliminary Step P4: Decomposing the Training Set into Subsets)
Clearly, y(·) and ŷ(·) produce two decompositions of the training set Ωtr as disjoint unions of subsets: Ωtr = Ωtr(1) ∪ … ∪ Ωtr(m) with Ωtr(j) = {x ∈ Ωtr: y(x) = j}, and likewise with ŷ(·) in place of y(·).
Define the cardinalities ntr = |Ωtr| and ntr(j) = |Ωtr(j)|, where, for a finite set A, the cardinality |A| is the number of elements of A. Since the subsets Ωtr(j) are disjoint and their union is the training set Ωtr, it is obvious that ntr = ntr(1) + … + ntr(m).
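By way of illustration only (not part of the claimed subject matter), the decomposition and the cardinality identity may be sketched in Python as follows; the array names, the NumPy dependency, and the synthetic data are assumptions of this sketch:

    import numpy as np

    # Toy training set: n_tr observations in R^p with memberships in {1, ..., m}.
    rng = np.random.default_rng(0)
    p, m, n_tr = 4, 3, 300
    X_tr = rng.normal(size=(n_tr, p))            # the training set, Omega_tr
    y_tr = rng.integers(1, m + 1, size=n_tr)     # genuine memberships y(x)

    # Decompose Omega_tr into disjoint subsets Omega_tr(j) = {x : y(x) = j}.
    subsets = {j: X_tr[y_tr == j] for j in range(1, m + 1)}

    # The cardinalities satisfy n_tr = n_tr(1) + ... + n_tr(m).
    assert sum(len(s) for s in subsets.values()) == n_tr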
(Preliminary Step P5: Preparing a Test Set)
In some embodiments, the test set Ωtt is used to determine the accuracy of ŷ, where the accuracy may refer to the percentage (%) of x's in Ωtt such that ŷ(x) = y(x), for example.
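As a minimal sketch (assuming the classifier is available as a Python callable y_hat and the test set as parallel arrays X_tt and y_tt, which are assumptions of this illustration), the accuracy may be computed as:

    def accuracy(y_hat, X_tt, y_tt):
        """Percentage of test samples x with y_hat(x) equal to y(x)."""
        correct = sum(1 for x, label in zip(X_tt, y_tt) if y_hat(x) == label)
        return 100.0 * correct / len(X_tt)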
(General Multiform Separation)
Instead of finding one single inference function to accomplish the job of classification as commonly seen in many prior art methods, the present invention finds that an appropriate utilization of multiple functions can produce better solutions, in terms of accuracy, robustness, complexity, speed, dependency, cost, and so on.
In some cases, it is very difficult, or even impossible, to use one single inference function to distinguish the characteristics among elements with different memberships. Along this line of reasoning, the present invention is led to the utilization of multiform separation.
(Main Step Q1: Generating Piecewise Continuous Functions)
Loosely speaking, a function h: ℝ^p → ℝ is called a piecewise continuous function if there exist finitely many disjoint subsets D1, …, Dw such that D1 ∪ … ∪ Dw = ℝ^p and h is continuous on the interior of Dj, j = 1, …, w.
Generate m piecewise continuous functions fj: ℝ^p → ℝ, j = 1, …, m, based on the training set Ωtr. After training, the m piecewise continuous functions f1, …, fm can carry important characteristics of the respective training subsets Ωtr(j) so that each membership subset
U(j)={x∈Ω:fj(x)=min{f1(x),f2(x), . . . ,fm(x)}}
is expected to satisfy U(j)∩Ωtr≈Ωtr(j) for each j=1, . . . , m.
Herein, the operator min{ } indicates the minimal item of the set. In other embodiments, it is also possible to choose other operators, such as max{ }, which indicates the maximal item of the set, for the aforementioned equation to realize the present invention.
(Main Step Q2: Giving a Classifier by Multiform Separation)
The multiform separation now gives a classifier ŷ: Ω → S defined by
ŷ(x) = j if x ∈ U(j)
or equivalently,
ŷ(x) = j if fj(x) ≤ fk(x), k = 1, …, m
In other words, an element x∈Ω is classified by the classifier ŷ to have a membership j if the evaluation at x of fj is minimal among all the evaluations at x of the m piecewise continuous functions f1, . . . , fm.
It is noted that, though the membership subsets U(j) are not necessarily disjoint, the case in which the same minimum is attained by multiple piecewise continuous functions fj is rare. (The outputs fj(x) are real numbers, so it is rare for several fj to output exactly the same value.) When this case does happen, a possible solution is to randomly pick one membership from among the involved fj.
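A minimal sketch of this decision rule follows, assuming the m trained functions are available as Python callables; the random pick mirrors the tie-breaking fallback just described:

    import random

    def ms_classify(fs, x):
        """Multiform separation decision rule: return the membership j
        (1-based) whose value f_j(x) is minimal; ties, which are rare,
        are broken by a random pick among the minimizers."""
        values = [f(x) for f in fs]
        v_min = min(values)
        winners = [j for j, v in enumerate(values, start=1) if v == v_min]
        return random.choice(winners)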
Hereby, a general MS classifier ŷ of the present invention is provided.
(Quadratic Multiform Separation)
The quadratic multiform separation is a specific embodiment of the general multiform separation discussed above. Needless to say, the way to generate the piecewise continuous functions fj(·) in multiform separation is not unique; any suitable functions may be used. However, the piecewise continuous functions fj(·) must be generated carefully in order to dig out (or extract) the characteristics hidden in each training subset Ωtr(j), j = 1, …, m. According to the present invention, quadratic multiform separation is one efficient way to generate piecewise continuous functions fj(·) that carry rich and useful information about the training subsets Ωtr(j), which can be applied in various applications in addition to solving supervised classification problems.
(Sub-Step Q11: Defining Forms of Member Functions)
Let q ∈ ℕ be given. A function f: ℝ^p → ℝ is called a q-dimensional member function if it is of the form
f(x) = ∥Ax − b∥²
for a constant matrix A ∈ ℝ^(q×p) and a constant vector b ∈ ℝ^q, where ∥·∥ denotes the Euclidean norm. Clearly, f(x) is a quadratic function of x. As will be discussed later in detail-step Q134, the constant matrices A1, …, Am and the constant vectors b1, …, bm of the m q-dimensional member functions f1(x), …, fm(x) are the items to be solved.
(Sub-Step Q12: Creating a Set of Member Functions)
Θ(q) denotes the set of all q-dimensional member functions, that is, Θ(q) = {∥Ax − b∥²: A ∈ ℝ^(q×p), b ∈ ℝ^q}. When q is chosen and fixed, Θ(q) may be written as Θ for convenience in the following description. The member functions may be regarded as data structures.
(Sub-Step Q13: Generating Member Functions)
Fix an integer q that is sufficiently large. Generate m q-dimensional member functions fj: ℝ^p → ℝ, j = 1, …, m, based on the training set Ωtr. Following the same approach as previously discussed for general multiform separation, the subsets U(j) = {x ∈ Ω: fj(x) = min{f1(x), f2(x), …, fm(x)}}, j = 1, …, m, are derived and the classifier ŷ: Ω → S is defined by ŷ(x) = j if x ∈ U(j).
(Detail-Step Q131: Setting Control Parameters of Learning Process)
Herein, the present invention provides an efficient learning process for generating the member functions in sub-step Q13 so that the intersection of each membership set U(j) and the training set Ωtr is satisfactorily close to each Ωtr(j), that is, U(j)∩Ωtr≈Ωtr(j), j=1, . . . , m.
Fix an integer q and let αjk ∈ (0, 1), j = 1, …, m, k = 1, …, m, be control parameters of the learning process. The m² control parameters αjk, j = 1, …, m, k = 1, …, m, are not necessarily distinct.
(Detail-Step Q132: Defining Intermediate Functions of Learning Process)
Given m q-dimensional member functions f1, …, fm and m² control parameters αjk, j = 1, …, m, k = 1, …, m, intermediate functions φjk: Ω → ℝ, j = 1, …, m, k = 1, …, m, are defined by
Obviously, φjk(x)<1 if and only if fj(x)<fk(x), j=1, . . . , m, k=1, . . . , m, k≠j.
The training process will be more efficient with the introduction of the control parameters.
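The defining equation of φjk is not reproduced in this text. One form consistent with the stated property (an assumption of this sketch, not necessarily the exact patented formula) is φjk(x) = fj(x) / (αjk·fj(x) + (1 − αjk)·fk(x)); since αjk ∈ (0, 1) and the member functions are nonnegative, φjk(x) < 1 holds exactly when fj(x) < fk(x):

    def make_phi(f_j, f_k, alpha):
        """Candidate intermediate function phi_jk with control parameter
        alpha in (0, 1).  For nonnegative f_j, f_k (squared norms are),
        phi_jk(x) < 1 if and only if f_j(x) < f_k(x).  Assumes the
        denominator is nonzero."""
        def phi(x):
            fj_x, fk_x = f_j(x), f_k(x)
            return fj_x / (alpha * fj_x + (1.0 - alpha) * fk_x)
        return phi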
(Detail-Step Q133: Defining a Cost Function of Learning Process)
The goal of the learning process according to the present invention is to match the property “x has membership j” for j=1, . . . , m, with the algebraic relations φjk(x)<1, k∈S, k≠j.
A cost function Φ is denoted by
The value of the cost function Φ provides a performance measure for separating the training subsets Ωtr(1), …, Ωtr(m) by the given member functions f1, …, fm. With the integer q, the m² control parameters αjk, j = 1, …, m, k = 1, …, m, and the training set Ωtr given, and q sufficiently large, the cost function Φ defined above depends only on the constant matrices A1, …, Am and the constant vectors b1, …, bm that define the member functions f1, …, fm.
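The defining equation of Φ is likewise not reproduced in this text. A hedged stand-in with the intended behavior, assuming the φjk form sketched above: each training sample x of membership j is penalized whenever some rival k gives φjk(x) ≥ 1, that is, whenever fj(x) is not strictly smallest:

    def cost(fs, alphas, subsets):
        """Candidate cost Phi over member functions fs = {j: f_j},
        control parameters alphas = {(j, k): alpha_jk}, and training
        subsets = {j: iterable of samples of membership j}."""
        total = 0.0
        for j, samples in subsets.items():
            for x in samples:
                fj_x = fs[j](x)
                for k in fs:
                    if k == j:
                        continue
                    a = alphas[(j, k)]
                    # 1e-12 guards against 0/0; member functions are nonnegative.
                    phi = fj_x / (a * fj_x + (1.0 - a) * fs[k](x) + 1e-12)
                    total += max(phi - 1.0, 0.0)   # zero when f_j(x) < f_k(x)
        return total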
(Detail-Step Q134: Executing Learning Process Based on Cost Function)
The member functions f1, …, fm are generated by minimizing the cost function Φ over Θ(q), the set of all q-dimensional member functions. Formally, the task of the learning process is to solve: minimize Φ(f1, …, fm) subject to f1, …, fm ∈ Θ(q).
The generated member functions f1, . . . , fm are the objectives pursued in sub-step Q13. They are used to construct the classifier ŷ of the present invention, which is called a QMS classifier in this embodiment.
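As a sketch of detail-step Q134 under the assumed forms above (reusing the hypothetical cost() from the previous sketch), the matrices A1, …, Am and vectors b1, …, bm can be flattened into one parameter vector and handed to a generic optimizer; a gradient-based method would be preferred in practice:

    import numpy as np
    from scipy.optimize import minimize

    def train_qms(subsets, p, q, m, alphas, seed=0):
        """Minimize the assumed cost Phi over m q-dimensional member
        functions, i.e. over their parameters A_j and b_j."""
        rng = np.random.default_rng(seed)
        block = q * p + q                    # parameters per member function
        theta0 = rng.normal(scale=0.1, size=m * block)

        def unpack(theta):
            fs = {}
            for j in range(1, m + 1):
                part = theta[(j - 1) * block : j * block]
                A = part[:q * p].reshape(q, p)
                b = part[q * p:]
                fs[j] = lambda x, A=A, b=b: float(np.sum((A @ x - b) ** 2))
            return fs

        result = minimize(lambda t: cost(unpack(t), alphas, subsets),
                          theta0, method="Powell")
        return unpack(result.x)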
Other objects, advantages, and novel features of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
Different embodiments of the present invention are provided in the following description. These embodiments are meant to explain the technical content of the present invention, but not meant to limit the scope of the present invention. A feature described in an embodiment may be applied to other embodiments by suitable modification, substitution, combination, or separation.
It should be noted that, in the present specification, when a component is described as having an element, it means that the component may have one or more of the elements, not that the component has only one such element, except otherwise specified.
Moreover, in the present specification, ordinal numbers such as “first” or “second” are used to distinguish a plurality of elements having the same name, and this does not mean that there is essentially a level, a rank, an executing order, or a manufacturing order among the elements, except otherwise specified. A “first” element and a “second” element may exist together in the same component, or alternatively, they may exist in different components, respectively. The existence of an element described by a greater ordinal number does not necessarily imply the existence of another element described by a smaller ordinal number.
Moreover, in the present specification, terms such as “preferably” or “advantageously” are used to describe an optional or additional element or feature; in other words, such an element or feature is not essential and may be omitted in some embodiments.
Moreover, each component may be realized as a single circuit or an integrated circuit in suitable ways, and may include one or more active elements, such as transistors or logic gates, or one or more passive elements, such as resistors, capacitors, or inductors, but is not limited thereto. The components may be connected to each other in suitable ways, for example, by using one or more traces to form series or parallel connections, especially to satisfy the requirements of the input and output terminals. Furthermore, the components may transmit or receive input signals or output signals in sequence or in parallel. The aforementioned configurations may be realized depending on practical applications.
Moreover, in the present specification, terms such as “system”, “apparatus”, “device”, “module”, or “unit” refer to an electronic element, or to a digital circuit, an analog circuit, or other general circuit composed of a plurality of electronic elements, and there is not essentially a level or a rank among the aforementioned terms, except otherwise specified.
Moreover, in the present specification, two elements may be electrically connected to each other directly or indirectly, except otherwise specified. In an indirect connection, one or more elements may exist between the two elements.
(General Multiform Separation Classifier)
As shown in the accompanying drawing, the multiform separation classifier 1 includes an input module 10, a data collection module 20, a multiform separation engine 30, and an output module 70.
It can be understood that the modules or engines are illustrated here for the purpose of explaining the present invention, and the modules or engines may be integrated or separated into other forms as hardware or software in separated circuit devices on a set of chips or an integrated circuit device on a single chip. The multiform separation classifier 1 may be implemented in a cloud server or a local computer.
The input module 10 is configured to receive sample data (or an element) x. The input module 10 may be a sensor, a camera, a microphone, or the like, that can detect physical phenomena, or it may be a data receiver.
The data collection module 20 is connected to the input module 10 and configured to store a collection of data Ω from the input module 10. The collection of data Ω ⊂ ℝ^p includes a training set Ωtr and/or a test set Ωtt and/or a remaining set Ωth. Here ℝ is the set of real numbers, and the expression Ω ⊂ ℝ^p means that the collection of data Ω belongs to ℝ^p, the space of p-dimensional real vectors. The collection of data Ω may also be regarded as a data structure.
With the supervised approach, a membership function y: Ω → S = {1, 2, …, m} can be found so that y(x) gives precisely the membership of the input data x. Accordingly, the collection of data Ω is composed of m memberships (or data categories), and the m memberships are digitized as 1, 2, …, m. To illustrate the meaning of the data categories: for example, when a classifier is used to recognize animal pictures, membership “1” may indicate “dog”, membership “2” may indicate “cat”, …, and membership “m” may indicate “rabbit”. Herein, “dog”, “cat”, and “rabbit” are regarded as the data categories. For another example, when a classifier is used to recognize people's age by their faces, membership “1” may indicate “child”, membership “2” may indicate “teenager”, …, and membership “m” may indicate “adult”. Herein, “child”, “teenager”, and “adult” are regarded as the data categories.
The multiform separation engine 30 is connected to the data collection module 20 and configured to use m piecewise continuous functions f1, f2, …, fm to perform classification. The m piecewise continuous functions typically handle the same type of data, for example, all image files for image recognition, or all audio files for sound recognition, so that they can work consistently.
The classification involves two stages (or modes): a training (or learning) stage and a prediction (or decision) stage.
Loosely speaking, a function h: ℝ^p → ℝ is called a piecewise continuous function if there exist finitely many disjoint subsets D1, …, Dw such that D1 ∪ … ∪ Dw = ℝ^p and h is continuous on the interior of Dj, j = 1, …, w.
In the training stage, m piecewise continuous functions fj: ℝ^p → ℝ, j = 1, …, m, are generated based on the training set Ωtr through a learning process. After training, the m piecewise continuous functions f1, …, fm can carry important characteristics of the respective training subsets Ωtr(j) so that each membership subset
U(j) = {x ∈ Ω: fj(x) = min{f1(x), f2(x), …, fm(x)}}
is expected to satisfy U(j)∩Ωtr≈Ωtr(j) for each j=1, . . . , m. That is, the present invention aims to obtain each membership subset U(j), which should ideally coincide with the collection of data in Ω having membership j. Consequently, U(j)∩Ωtr should be approximately the same as Ωtr(j).
Herein, the operator min{ } indicates the minimal item of the set. In other embodiments, it is also possible to choose other operators, such as max{ }, which indicates the maximal element of the set, for the aforementioned equation to realize the present invention.
In brief, according to the present invention, the multiform separation engine 30 is configured to derive temporary evaluations f1(x), …, fm(x) for certain sample data x from the m trained piecewise continuous functions f1, …, fm. Then, the sample data x is assigned the membership j if fj(x) is minimal among f1(x), …, fm(x). This process is applied to every sample data x to generate the membership subsets U(1), …, U(m).
The output module 70 is directly or indirectly connected to the multiform separation engine 30, and configured to derive an output result after the sample data x is processed through the multiform separation engine 30. The output result may be directly the membership j, or be further converted to the data category, such as “dog”, “cat”, or “rabbit” indicated by the membership j.
If the output module 70 directly outputs the membership j, the multiform separation classifier 1 of the present invention can be expressed by
ŷ(x)=j if x∈U(j)
or equivalently,
ŷ(x) = j if fj(x) ≤ fk(x), k = 1, …, m
In other words, the sample data x∈Ω is classified by the multiform separation classifier 1, denoted by ŷ, to have a membership j if a temporary evaluation fj(x) at the sample data x of a certain trained piecewise continuous function fj is minimal among all temporary evaluations f1(x), . . . , fm(x) at the sample data x of the m trained piecewise continuous functions f1, . . . , fm.
In rare cases, multiple trained piecewise continuous functions may attain the same minimum, and in such cases, a possible solution is to randomly pick one of the memberships indicated by the multiple trained piecewise continuous functions.
(Quadratic Multiform Separation Classifier)
As shown in the accompanying drawing, the quadratic multiform separation classifier 2 includes a quadratic multiform separation engine 40, which has a member function collector 42 and a member function trainer 44.
In this embodiment, each piecewise continuous function fj(x) is set to be a quadratic function of the sample data x. In particular, let q ∈ ℕ be given, where ℕ represents the set of natural numbers. A function f: ℝ^p → ℝ is called a q-dimensional member function if it is of the form
f(x) = ∥Ax − b∥²
for a constant matrix A ∈ ℝ^(q×p) and a constant vector b ∈ ℝ^q, where ∥·∥ denotes the Euclidean norm. In particular, fix an integer q that is sufficiently large, and generate m q-dimensional member functions fj: ℝ^p → ℝ, j = 1, …, m, based on the training set Ωtr. As will be discussed later, the constant matrices A1, …, Am and the constant vectors b1, …, bm of the m q-dimensional member functions are the items to be solved.
Accordingly, the member function collector 42 of the quadratic multiform separation engine 40 is configured to store the set of member functions, denoted by Θ(q). That is, Θ(q) = {∥Ax − b∥²: constant matrix A ∈ ℝ^(q×p), constant vector b ∈ ℝ^q}.
The member function trainer 44 of the quadratic multiform separation engine 40 is configured to perform the learning process.
Herein, the present invention provides an efficient learning process for generating the member functions so that the intersection of each membership set U(j) and the training set Ωtr is satisfactorily close to respective Ωtr(j), that is, U(j)∩Ωtr≈Ωtr(j), j=1, . . . , m.
According to the present invention, in the learning process, m² control parameters αjk, j = 1, …, m, k = 1, …, m, are set to participate in comparisons among the m member functions, and the comparisons are performed according to a specific operator. Preferably, the m² control parameters are set between 0 and 1, and they are not necessarily distinct.
With the m q-dimensional member functions f1, …, fm and the m² control parameters αjk, j = 1, …, m, k = 1, …, m, intermediate functions φjk: Ω → ℝ, j = 1, …, m, k = 1, …, m, are defined by
Obviously, φjk(x)<1 if and only if fj(x)<fk(x), j∈S, k∈S, k≠j. It is noted that S={1, 2, . . . , m} is the set of memberships.
The training process will be more efficient with the introduction of the control parameters.
It is to be understood that the goal of the learning process according to the present invention is to match the property “x has membership j” for j=1, . . . , m, with the algebraic relations φjk(x)<1, k∈S, k≠j.
In order to reach the goal, a so-called cost function Φ is introduced and denoted by
The value of the cost function Φ provides a performance measure for separating the training subsets Ωtr(1), …, Ωtr(m) by the given member functions f1, …, fm. With the integer q, the m² control parameters αjk, j = 1, …, m, k = 1, …, m, and the training set Ωtr given, and q sufficiently large, the cost function Φ defined above depends only on the constant matrices A1, …, Am and the constant vectors b1, …, bm that define the member functions f1, …, fm.
The member functions f1, …, fm are generated by minimizing the cost function Φ over Θ(q), the set of all q-dimensional member functions. Formally, the task of the learning process is to solve: minimize Φ(f1, …, fm) subject to f1, …, fm ∈ Θ(q).
The generated member functions f1, . . . , fm are the objectives pursued in the learning process performed by the member function trainer 44. They are used to construct the quadratic multiform separation classifier 2, denoted by ŷ, of the present invention.
The multiform separation classifier 1 of the present invention can be expressed by the following two equations:
U(j) = {x ∈ Ω: fj(x) = min{f1(x), f2(x), …, fm(x)}}, j = 1, …, m
and
ŷ(x)=j if x∈U(j)
or equivalently,
ŷ(x) = j if fj(x) ≤ fk(x), k = 1, …, m
In summary, the present invention solves the classification problem as follows: Given any sample data x∈Ω (Ω may include the training set Ωtr and/or the test set Ωtt and/or the remaining set Ωth), apply the m trained piecewise continuous functions f1, . . . , fm at the sample data x to obtain their respective temporary evaluations. If the temporary evaluation fj(x) at the sample data x of a certain trained piecewise continuous function fj is minimal among all temporary evaluations f1(x), . . . , fm(x) at the sample data x of the m trained piecewise continuous functions f1, . . . , fm, x is determined to belong to the j-th membership subset U(j), and consequently x is determined to have the membership j.
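Putting the hypothetical sketches above together end to end (train_qms for the learning stage, ms_classify for the prediction stage, and accuracy on the held-out test set; subsets, X_tt, and y_tt are the assumed data arrays from the earlier sketches), a usage example might read:

    # Assumed available from the earlier sketches: subsets (training data
    # grouped by membership), X_tt / y_tt (test set), train_qms,
    # ms_classify, and accuracy.
    p, q, m = 4, 6, 3
    alphas = {(j, k): 0.5 for j in range(1, m + 1)
              for k in range(1, m + 1) if j != k}

    fs = train_qms(subsets, p, q, m, alphas)            # learning stage
    y_hat = lambda x: ms_classify([fs[j] for j in range(1, m + 1)], x)
    print("test accuracy (%):", accuracy(y_hat, X_tt, y_tt))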
(Method to Implement a Multiform Separation Classifier)
The respective modules, engines, and the overall structure of the multiform separation classifier 1 of the present invention have been discussed above. However, in the aspect of software, the multiform separation classifier 1 may alternatively be implemented as a sequence of steps, as introduced above. Therefore, the method of the present invention essentially includes the steps introduced above, executed in order: preparing a training set (preliminary step P1); setting a membership function (preliminary step P2); training a classifier using the classification algorithm of the present invention (preliminary step P3); decomposing the training set into subsets (preliminary step P4); preparing a test set (preliminary step P5); generating the piecewise continuous functions (main step Q1); and giving a classifier by multiform separation (main step Q2).
In conclusion, the present invention provides a multiform separation classifier, which appropriately utilizes multiple functions to produce better solutions in terms of accuracy, robustness, complexity, speed, dependency, cost, and so on.
Although the present invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed.