Classification algorithm based on multiform separation

Information

  • Patent Grant
  • Patent Number
    12,204,613
  • Date Filed
    Thursday, January 14, 2021
  • Date Issued
    Tuesday, January 21, 2025
  • Field of Search (CPC)
    • G06F18/241
    • G06F17/10
    • G06F18/217
    • G06N20/00
  • International Classifications
    • G06F18/241
    • G06F17/10
    • G06F18/21
    • G06N20/00
  • Term Extension
    1045 days
Abstract
The present invention provides a multiform separation classifier, including an input module, a data collection module, a multiform separation engine, and an output module. The input module is configured to receive sample data. The data collection module is connected to the input module and configured to store a collection of data from the input module. The collection of data includes a training set and/or a test set. The multiform separation engine is connected to the data collection module and configured to use at least two piecewise continuous functions to perform classification. The piecewise continuous functions are respectively trained with the training set through a learning process. The output module is connected to the multiform separation engine and configured to derive an output result after the sample data is processed through the multiform separation engine.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates to machine learning in artificial intelligence (AI) and, more particularly, to a classification algorithm based on multiform separation, especially quadratic multiform separation.


2. Description of Related Art

As is well known, machine learning builds a hypothetical model based on sample data for a computer to make a prediction or a decision. The hypothetical model may be implemented as a classifier, which approximates a mapping function from input variables to output variables. The goal of machine learning is to make the hypothetical model as close as possible to a target function which always gives correct answers. This goal may be achieved by training the hypothetical model with more sample data.


Machine learning approaches are commonly divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Various models have been developed for machine learning, such as the convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM) network, YOLO, ResNet, ResNet-18, ResNet-34, VGG16, GoogLeNet, LeNet, MobileNet, decision trees, and the support vector machine (SVM).


In the traditional approach, a classifier uses only a single model with one single inference function to accomplish the job of classification. This may solve a simple classification problem as shown in FIG. 1.


However, in some cases it is very difficult, or even impossible, for one single inference function to distinguish the characteristics among elements and put the elements into different classes. FIG. 2 shows a more complicated problem. When Class_A and Class_B form a checkerboard pattern, one single inference function can hardly fit the boundaries of the pattern, and a proper classification is unlikely.


Also, every model has its own advantages and drawbacks in terms of accuracy, robustness, complexity, speed, dependency, cost, and so on; when a model focuses on some of these aspects, it may neglect the others, and an extreme bias may therefore occur.


Therefore, it is desirable to provide an improved classifier to mitigate and/or obviate the aforementioned problems.


SUMMARY OF THE INVENTION

The present invention proposes a classification algorithm (or paradigm) in supervised machine learning. It is based on a concept, called "multiform separation" (MS), for separating sets in Euclidean space. Next, a method to carry out the multiform separation is presented; this method may be regarded as the core of the present invention.


Further, a particular embodiment uses "quadratic multiform separation" (QMS) to realize the present invention. In this embodiment, quadratic functions are used to build a cost function whose minimizers generate the solution of the classification problem.


The classification algorithm of the present invention may be implemented in a cloud server or a local computer, as hardware or software (or a computer program), or as separate circuit devices on a set of chips or an integrated circuit device on a single chip.


Before implementing the main steps of the classification algorithm of the present invention, several preliminary steps should be done in advance.


(Preliminary Step P1: Preparing a Training Set)


Let Ω⊂ℝp be a collection of data (or observations) which is composed of m memberships (or categories) of elements, and the m memberships are digitized as 1, 2, . . . , m.


A part of the data, Ωtr⊂Ω, typically called a "training set", and another part, Ωtt⊂Ω, typically called a "test set", are prepared from the data Ω. The collection of data Ω may optionally include more parts, such as a remaining set Ωth. It is assumed that the training set Ωtr and the test set Ωtt are sufficiently large and share the full characteristics represented by the whole collection of data Ω.
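For illustration only (the patent does not prescribe a split ratio or any tooling; the NumPy helper and the 80/20 fraction below are assumptions of this sketch), such a split might be prepared as follows:

    import numpy as np

    def split_data(omega: np.ndarray, train_frac: float = 0.8, seed: int = 0):
        """Split a collection of data Ω (one p-dimensional sample per row)
        into a training set Ωtr and a test set Ωtt by random permutation."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(omega))
        n_tr = int(train_frac * len(omega))
        return omega[idx[:n_tr]], omega[idx[n_tr:]]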


(Preliminary Step P2: Setting a Membership Function)


Let y: Ω→S={1, 2, . . . , m} be a membership function (also regarded as a target function) so that y(x) gives precisely the genuine membership of x.


(Preliminary Step P3: Training a Classifier Using the Classification Algorithm of the Present Invention)


The goal of the classification problem is to use the training set Ωtr to derive a classifier ŷ(·) that serves as a good approximation of y(·).


(Preliminary Step P4: Decomposing the Training Set into Subsets)


Clearly, y(·) and ŷ(·) produce two decompositions of the training set Ωtr as disjoint unions of subsets:










Ωtr = ∪_{j=1}^{m} Ωtr(j) = ∪_{j=1}^{m} Ω̂tr(j)
    • where, for j=1, . . . , m,
      Ωtr(j)={x∈Ωtr: y(x)=j}
      is the genuine classification of the elements, and
      Ω̂tr(j)={x∈Ωtr: ŷ(x)=j}
      is the approximate classification of the elements.





Define the cardinalities ntr=|Ωtr| and ntr(j)=|Ωtr(j)|, where, for a finite set A, the cardinality |A| is the number of elements of A. Since the subsets Ωtr(j)'s are disjoint and their union is the training set Ωtr, it is obvious that ntr=Σ_{j=1}^{m} ntr(j).
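As a minimal sketch (assuming the genuine memberships are available as an integer array y aligned with the rows of Ωtr, which the patent models abstractly as the membership function y(·)), the decomposition and its cardinalities can be computed as:

    import numpy as np

    def decompose(omega_tr: np.ndarray, y: np.ndarray, m: int):
        """Genuine decomposition Ωtr(j) = {x ∈ Ωtr : y(x) = j}, j = 1, ..., m."""
        subsets = {j: omega_tr[y == j] for j in range(1, m + 1)}
        # The subsets are disjoint and cover Ωtr, so ntr = Σj ntr(j):
        assert sum(len(s) for s in subsets.values()) == len(omega_tr)
        return subsets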


(Preliminary Step P5: Preparing a Test Set)


In some embodiments, the test set Ωtt is used to determine the accuracy of ŷ, where the accuracy may refer to the percentage (%) of x's in Ωtt such that ŷ(x)=y(x), for example.


(General Multiform Separation)


Instead of finding one single inference function to accomplish the job of classification as commonly seen in many prior art methods, the present invention finds that an appropriate utilization of multiple functions can produce better solutions, in terms of accuracy, robustness, complexity, speed, dependency, cost, and so on.


In some cases, it is very difficult, or even impossible, to use one single inference function to distinguish the characteristics among elements with different memberships. Along this line of reasoning, the present invention is led to the utilization of multiform separation.


(Main Step Q1: Generating Piecewise Continuous Functions)


Loosely speaking, a function h: ℝp→ℝ is called a piecewise continuous function if there exist finitely many disjoint subsets D1, . . . , Dw such that D1∪ . . . ∪Dw=ℝp and h is continuous on the interior of Dj, j=1, . . . , w.


Generate m piecewise continuous functions fj: ℝp→ℝ, j=1, . . . , m, based on the training set Ωtr. After training, the m piecewise continuous functions f1, . . . , fm carry important characteristics of the respective training subsets Ωtr(j)'s, so that each membership subset

U(j)={x∈Ω:fj(x)=min{f1(x),f2(x), . . . ,fm(x)}}

is expected to satisfy U(j)∩Ωtr≈Ωtr(j) for each j=1, . . . , m.


Herein, the operator min{ } indicates the minimal item of the set. In other embodiments, it is also possible to choose other operators, such as max{ }, which indicates the maximal item of the set, for the aforementioned equation to realize the present invention.
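A sketch of Main Step Q1's assignment rule follows (assuming the m trained functions are available as Python callables; the helper below is illustrative, not the patent's implementation):

    import numpy as np

    def membership_subsets(omega, fs):
        """Build U(j) = {x : fj(x) = min{f1(x), ..., fm(x)}}, j = 1, ..., m."""
        evals = np.array([[f(x) for f in fs] for x in omega])  # shape (n, m)
        winners = evals.argmin(axis=1) + 1  # memberships digitized 1, ..., m
        return {j: [x for x, w in zip(omega, winners) if w == j]
                for j in range(1, len(fs) + 1)}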


(Main Step Q2: Giving a Classifier by Multiform Separation)


The multiform separation now gives a classifier ŷ: Ω→S defined by

ŷ(x)=j if x∈U(j)
or equivalently,
ŷ(x)=j if fj(x)≤fk(x),k=1, . . . ,m


In other words, an element x∈Ω is classified by the classifier ŷ to have a membership j if the evaluation at x of fj is minimal among all the evaluations at x of the m piecewise continuous functions f1, . . . , fm.


It is noted that, though the membership subsets U(j)'s are not necessarily disjoint, cases in which the same minimum is attained by multiple piecewise continuous functions fj's are rare. (The outputs of the fj(x)'s are real numbers, so it is rare for several fj's to output the same value.) When such a case happens, however, a possible solution is to randomly pick one membership from among the involved fj(x)'s.
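The classifier itself, with the random tie-break the text suggests, might look like this (a sketch; the names are illustrative):

    import numpy as np

    def classify(x, fs, rng=np.random.default_rng()):
        """Return ŷ(x) = j with fj(x) ≤ fk(x) for all k, breaking the rare
        ties by picking one of the minimizing memberships at random."""
        vals = np.array([f(x) for f in fs])
        ties = np.flatnonzero(vals == vals.min())
        return int(rng.choice(ties)) + 1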


Hereby, a general MS classifier ŷ of the present invention is provided.


(Quadratic Multiform Separation)


The quadratic multiform separation is a specific embodiment of the general multiform separation as previously discussed. Needless to say, the way to generate the piecewise continuous functions fj(·)'s in the multiform separation is not unique; any suitable functions may be used. However, the piecewise continuous functions fj(·)'s must be generated carefully in order to extract the characteristics hidden in each training subset Ωtr(j), j=1, . . . , m. According to the present invention, the quadratic multiform separation is one efficient way to generate piecewise continuous functions fj(·)'s that carry rich and useful information about the training subsets Ωtr(j)'s, which can be applied in various applications in addition to solving supervised classification problems.


(Sub-Step Q11: Defining Forms of Member Functions)


Let q∈ℕ be given. A function f: ℝp→ℝ is called a q-dimensional member function if it is of the form

f(x)=∥Ax−b∥²

for a constant matrix A∈ℝq×p and a constant vector b∈ℝq, where ∥·∥ denotes the Euclidean norm. Clearly, f(x) is a quadratic function of x. As will be discussed later, the constant matrices A1, . . . , Am and the constant vectors b1, . . . , bm of the m q-dimensional member functions f1(x), . . . , fm(x) are the items to be solved.
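A direct rendering of this member-function form (a sketch; the closure-based factory is a convenience assumption, not part of the patent):

    import numpy as np

    def make_member_function(A: np.ndarray, b: np.ndarray):
        """q-dimensional member function f(x) = ∥Ax − b∥² with A ∈ ℝ^(q×p)
        and b ∈ ℝ^q; the squared Euclidean norm makes f quadratic in x."""
        def f(x: np.ndarray) -> float:
            r = A @ x - b
            return float(r @ r)
        return f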


(Sub-Step Q12: Creating a Set of Member Functions)


Θ(q) denotes the set of all q-dimensional member functions, that is, Θ(q)={∥Ax−b∥²: A∈ℝq×p, b∈ℝq}. When q is chosen and fixed, Θ(q) may be written as Θ for convenience in the following description. The member functions may be regarded as data structures.


(Sub-Step Q13: Generating Member Functions)


Fix an integer q that is sufficiently large. Generate m q-dimensional member functions fj: ℝp→ℝ, j=1, . . . , m, based on the training set Ωtr. Following the same approach as previously discussed for general multiform separation, the subsets U(j)={x∈Ω: fj(x)=min{f1(x), f2(x), . . . , fm(x)}}, j=1, . . . , m, are derived and the classifier ŷ: Ω→S is defined by ŷ(x)=j if x∈U(j).


(Detail-Step Q131: Setting Control Parameters of Learning Process)


Herein, the present invention provides an efficient learning process for generating the member functions in sub-step Q13 so that the intersection of each membership set U(j) and the training set Ωtr is satisfactorily close to each Ωtr(j), that is, U(j)∩Ωtr≈Ωtr(j), j=1, . . . , m.


Fix an integer q and let αjk∈(0,1), j=1, . . . , m, k=1, . . . , m, be control parameters of the learning process. The m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, are not necessarily distinct.


(Detail-Step Q132: Defining Intermediate Functions of Learning Process)


Given m q-dimensional member functions f1, . . . , fm and m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, intermediate functions φjk: Ω→ℝ, j=1, . . . , m, k=1, . . . , m, are defined by











φjk(x) = max{αjk, fj(x)/fk(x)}

Obviously, φjk(x)<1 if and only if fj(x)<fk(x), j=1, . . . , m, k=1, . . . , m, k≠j.


The training process will be more efficient with the introduction of the control parameters.
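As a sketch of the intermediate functions (assuming fk(x) > 0, which holds for the quadratic member functions whenever Ak·x ≠ bk; the factory pattern is again an assumption of convenience):

    def make_phi(alpha_jk: float, f_j, f_k):
        """Intermediate function φjk(x) = max{αjk, fj(x)/fk(x)}; φjk(x) < 1
        exactly when fj(x) < fk(x). The floor αjk clips the contribution of
        samples that are already well separated."""
        def phi_jk(x):
            return max(alpha_jk, f_j(x) / f_k(x))
        return phi_jk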


(Detail-Step Q133: Defining a Cost Function of Learning Process)


The goal of the learning process according to the present invention is to match the property “x has membership j” for j=1, . . . , m, with the algebraic relations φjk(x)<1, k∈S, k≠j.


The cost function Φ is defined by










Φ(f1, . . . , fm) = Σ_{j=1}^{m} Σ_{k∈S, k≠j} Σ_{x∈Ωtr(j)} φjk(x)
The quantity of the cost function Φ provides a performance measure for separating the training subsets Ωtr(1), . . . , Ωtr(m) by the given member functions f1, . . . , fm. With the integer q, the m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, and the training set Ωtr given, and q sufficiently large, the cost function Φ defined above depends only on the constant matrices A1, . . . , Am and the constant vectors b1, . . . , bm that define the member functions f1, . . . , fm.
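A sketch of this triple sum (the dict-of-arrays layout for the training subsets and the (j, k)-keyed lookup for the αjk are assumptions of the sketch):

    def cost(fs, alphas, train_subsets):
        """Φ(f1, ..., fm) = Σj Σ(k≠j) Σ(x in Ωtr(j)) max{αjk, fj(x)/fk(x)}.
        fs: list of member functions; alphas: dict keyed by (j, k);
        train_subsets: dict mapping j to the samples of Ωtr(j)."""
        m = len(fs)
        total = 0.0
        for j in range(1, m + 1):
            for k in range(1, m + 1):
                if k == j:
                    continue
                for x in train_subsets[j]:
                    total += max(alphas[(j, k)], fs[j - 1](x) / fs[k - 1](x))
        return total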


(Detail-Step Q134: Executing Learning Process Based on Cost Function)


The member functions f1, . . . , fm are generated by minimizing the cost function Φ among Θ(q), the set of all q-dimensional member functions. Formally, the task of the learning process is to solve











min_{f1, . . . , fm} Φ(f1, . . . , fm) = min_{f1, . . . , fm} Σ_{j=1}^{m} Σ_{k∈S, k≠j} Σ_{x∈Ωtr(j)} φjk(x)
The generated member functions f1, . . . , fm are the objectives pursued in sub-step Q13. They are used to construct the classifier ŷ of the present invention, which is called a QMS classifier in this embodiment.
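The patent leaves the minimization routine open. As one hedged possibility (flattening the parameters into a single vector and handing them to a SciPy quasi-Newton solver are choices of this sketch, not the claimed method; cost() is the sketch given earlier):

    import numpy as np
    from scipy.optimize import minimize

    def train_qms(train_subsets, m, p, q, alphas, seed=0):
        """Learn A1, ..., Am and b1, ..., bm by numerically minimizing the
        cost Φ over Θ(q)."""
        rng = np.random.default_rng(seed)
        theta0 = 0.1 * rng.standard_normal(m * (q * p + q))

        def unpack(theta):
            fs = []
            for j in range(m):
                block = theta[j * (q * p + q):(j + 1) * (q * p + q)]
                A, b = block[:q * p].reshape(q, p), block[q * p:]
                fs.append(lambda x, A=A, b=b: float(np.sum((A @ x - b) ** 2)))
            return fs

        res = minimize(lambda t: cost(unpack(t), alphas, train_subsets),
                       theta0, method="L-BFGS-B")
        return unpack(res.x)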


Other objects, advantages, and novel features of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a schematic diagram of classification accomplished by one single inference function of a prior art classifier;



FIG. 2 shows a problem that the prior art classifier may face;



FIG. 3 shows a schematic block diagram of the multiform separation classifier according to one embodiment of the present invention;



FIG. 4 shows a schematic block diagram of the multiform separation classifier according to another embodiment of the present invention; and



FIG. 5 shows a schematic block diagram of the multiform separation classifier according to one embodiment of the present invention for prediction or decision.





DETAILED DESCRIPTION OF THE EMBODIMENT

Different embodiments of the present invention are provided in the following description. These embodiments are meant to explain the technical content of the present invention, but not meant to limit the scope of the present invention. A feature described in an embodiment may be applied to other embodiments by suitable modification, substitution, combination, or separation.


It should be noted that, in the present specification, when a component is described as having an element, it means that the component may have one or more of the element; it does not mean that the component has only one of the element, except where otherwise specified.


Moreover, in the present specification, ordinal numbers such as "first" or "second" are used to distinguish a plurality of elements having the same name, and they do not mean that there is essentially a level, a rank, an executing order, or a manufacturing order among the elements, except where otherwise specified. A "first" element and a "second" element may exist together in the same component, or alternatively they may exist in different components, respectively. The existence of an element described by a greater ordinal number does not essentially imply the existence of another element described by a smaller ordinal number.


Moreover, in the present specification, the terms, such as “preferably” or “advantageously”, are used to describe an optional or additional element or feature, and in other words, the element or the feature is not an essential element, and may be ignored in some embodiments.


Moreover, each component may be realized as a single circuit or an integrated circuit in suitable ways, and may include one or more active elements, such as transistors or logic gates, or one or more passive elements, such as resistors, capacitors, or inductors, but is not limited thereto. The components may be connected to each other in suitable ways, for example, by using one or more traces to form series or parallel connections, especially to satisfy the requirements of input and output terminals. Furthermore, each component may transmit or receive input signals or output signals in sequence or in parallel. The aforementioned configurations may be realized depending on practical applications.


Moreover, in the present specification, terms such as "system", "apparatus", "device", "module", or "unit" refer to an electronic element, or to a digital circuit, an analog circuit, or other general circuit composed of a plurality of electronic elements, and there is not essentially a level or a rank among the aforementioned terms, except where otherwise specified.


Moreover, in the present specification, two elements may be electrically connected to each other directly or indirectly, except where otherwise specified. In an indirect connection, one or more elements may exist between the two elements.


(General Multiform Separation Classifier)



FIG. 3 shows a schematic block diagram of the multiform separation classifier 1 according to one embodiment of the present invention.


As shown in FIG. 3, the multiform separation classifier 1 of the present invention, provided in the context of machine learning, includes an input module 10, a data collection module 20, a multiform separation engine 30, and an output module 70.


It can be understood that the modules or engines are illustrated here for the purpose of explaining the present invention, and they may be integrated or separated into other forms, as hardware or software, in separate circuit devices on a set of chips or an integrated circuit device on a single chip. The multiform separation classifier 1 may be implemented in a cloud server or a local computer.


The input module 10 is configured to receive sample data (or an element) x. The input module 10 may be a sensor, a camera, a microphone, and so on, that can detect physical phenomena, or it may be a data receiver.


The data collection module 20 is connected to the input module 10 and configured to store a collection of data Ω from the input module 10. The collection of data Ω⊂ℝp includes a training set Ωtr and/or a test set Ωtt and/or a remaining set Ωth. Here ℝ is the set of real numbers, and the expression Ω⊂ℝp means that the collection of data Ω belongs to ℝp, the space of p-dimensional real vectors. The collection of data Ω may also be regarded as a data structure.


With the supervised approach, a membership function y: Ω→S={1, 2, . . . , m} can be found so that y(x) gives precisely the membership of the input data x. Accordingly, the collection of data Ω is composed of m memberships (or data categories), and the m memberships are digitized as 1, 2, . . . , m. To illustrate the meaning of the data categories: when a classifier is used to recognize animal pictures, membership "1" may indicate "dog", membership "2" may indicate "cat", . . . , and membership "m" may indicate "rabbit". Herein, "dog", "cat", and "rabbit" are regarded as the data categories. For another example, when a classifier is used to recognize people's ages by their faces, membership "1" may indicate "child", membership "2" may indicate "teenager", . . . , and membership "m" may indicate "adult". Herein, "child", "teenager", and "adult" are regarded as the data categories.


The multiform separation engine 30 is connected to the data collection module 20 and configured to use m piecewise continuous functions f1, f2, . . . , fm to perform classification. The m piecewise continuous functions f1, f2, . . . , fm typically handle the same type of data, for example, they all handle image files for image recognition, all handle audio files for sound recognition, and so on, so that they can work consistently.


The classification involves two stages (or modes): a training (or learning) stage and a prediction (or decision) stage.


Loosely speaking, a function h: ℝp→ℝ is called a piecewise continuous function if there exist finitely many disjoint subsets D1, . . . , Dw such that D1∪ . . . ∪Dw=ℝp, and h is continuous on the interior of Dj, j=1, . . . , w.


In the training stage, m piecewise continuous functions fj: ℝp→ℝ, j=1, . . . , m, are generated based on the training set Ωtr through a learning process. After training, the m piecewise continuous functions f1, . . . , fm carry important characteristics of the respective training subsets Ωtr(j)'s, so that each membership subset

U(j)={x∈Ω:fj(x)=min{f1(x),f2(x), . . . ,fm(x)}}

is expected to satisfy U(j)∩Ωtr≈Ωtr(j) for each j=1, . . . , m. That is, the present invention aims to obtain each membership subset U(j), which should ideally coincide with the collection of data in Ω having membership j. Consequently, U(j)∩Ωtr should be approximately the same as Ωtr(j).


Herein, the operator min{ } indicates the minimal item of the set. In other embodiments, it is also possible to choose other operators, such as max{ }, which indicates the maximal element of the set, for the aforementioned equation to realize the present invention.


In brief, according to the present invention, the multiform separation engine 30 is configured to derive temporary evaluations f1(x), . . . , fm(x) for certain sample data x from the m trained piecewise continuous functions f1, . . . , fm. The sample data x is then assigned to have a membership j if fj(x) is minimal among f1(x), . . . , fm(x). This process is applied to every sample data x to generate the membership subsets U(1), . . . , U(m).


The output module 70 is directly or indirectly connected to the multiform separation engine 30, and configured to derive an output result after the sample data x is processed through the multiform separation engine 30. The output result may be directly the membership j, or be further converted to the data category, such as “dog”, “cat”, or “rabbit” indicated by the membership j.


If the output module 70 directly outputs the membership j, the multiform separation classifier 1 of the present invention can be expressed by

ŷ(x)=j if x∈U(j)
or equivalently,
ŷ(x)=j if fj(x)≤fk(x),k=1, . . . ,m


In other words, the sample data x∈Ω is classified by the multiform separation classifier 1, denoted by ŷ, to have a membership j if a temporary evaluation fj(x) at the sample data x of a certain trained piecewise continuous function fj is minimal among all temporary evaluations f1(x), . . . , fm(x) at the sample data x of the m trained piecewise continuous functions f1, . . . , fm.


In rare cases, multiple trained piecewise continuous functions may attain the same minimum, and in such cases, a possible solution is to randomly pick one of the memberships indicated by the multiple trained piecewise continuous functions.


(Quadratic Multiform Separation Classifier)



FIG. 4 shows a schematic block diagram of the multiform separation classifier 2 according to another embodiment of the present invention.


As shown in FIG. 4, the quadratic multiform separation classifier 2 has the same structure as in the multiform separation classifier 1 of FIG. 3, but its multiform separation engine 30 evolves into a quadratic multiform separation engine 40. The quadratic multiform separation engine 40 includes a member function collector 42 and a member function trainer 44.


In this embodiment, each piecewise continuous function fj(x) is set to be a quadratic function of the sample data x. In particular, let q∈ℕ be given, where ℕ represents the set of natural numbers. A function f: ℝp→ℝ is called a q-dimensional member function if it is of the form

f(x)=∥Ax−b∥²


for a constant matrix A∈ℝq×p and a constant vector b∈ℝq, where ∥·∥ denotes the Euclidean norm. In particular,







A = (a11 ⋯ a1p; ⋮; aq1 ⋯ aqp),  x = (x1, . . . , xp)ᵀ,  b = (b1, . . . , bq)ᵀ

where the rows of A are separated by semicolons and ᵀ denotes transposition, so that x and b are column vectors.
Fix an integer q that is sufficiently large. Generate m q-dimensional member functions fj: ℝp→ℝ, j=1, . . . , m, based on the training set Ωtr. As will be discussed later, the constant matrices A1, . . . , Am and the constant vectors b1, . . . , bm of the m q-dimensional member functions are the items to be solved.


Accordingly, the member function collector 42 of the quadratic multiform separation engine 40 is configured to store a set of member functions, denoted by Θ(q). That is, Θ(q)={∥Ax−b∥²: constant matrix A∈ℝq×p, constant vector b∈ℝq}.


The member function trainer 44 of the quadratic multiform separation engine 40 is configured to perform the learning process.


Herein, the present invention provides an efficient learning process for generating the member functions so that the intersection of each membership set U(j) and the training set Ωtr is satisfactorily close to the respective Ωtr(j), that is, U(j)∩Ωtr≈Ωtr(j), j=1, . . . , m.


According to the present invention, in the learning process, m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, are set to participate in comparisons among the m member functions, and the comparisons are performed according to a specific operator. Preferably, the m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, are set between 0 and 1, and they are not necessarily distinct.
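The patent only requires αjk ∈ (0, 1) and allows repeated values; a single shared value is the simplest illustrative choice (the dict layout matches the cost() sketch given earlier):

    # Hypothetical control-parameter setup for m = 3 memberships; all m²
    # pairs are kept as in the text, though only k ≠ j enter the cost:
    m = 3
    alphas = {(j, k): 0.5
              for j in range(1, m + 1) for k in range(1, m + 1)}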


With the m q-dimensional member functions f1, . . . , fm and the m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, the intermediate functions φjk: Ω→ℝ, j=1, . . . , m, k=1, . . . , m, are defined by











φjk(x) = max{αjk, fj(x)/fk(x)}
Obviously, φjk(x)<1 if and only if fj(x)<fk(x), j∈S, k∈S, k≠j. It is noted that S={1, 2, . . . , m} is the set of memberships.


The training process will be more efficient with the introduction of the control parameters.


It is to be understood that the goal of the learning process according to the present invention is to match the property “x has membership j” for j=1, . . . , m, with the algebraic relations φjk(x)<1, k∈S, k≠j.


In order to reach the goal, a so-called cost function Φ is introduced and defined by










Φ(f1, . . . , fm) = Σ_{j=1}^{m} Σ_{k∈S, k≠j} Σ_{x∈Ωtr(j)} φjk(x)
The quantity of the cost function Φ provides a performance measure for separating the training subsets Ωtr(1), . . . , Ωtr(m) by the given member functions f1, . . . , fm. With the integer q, the m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, and the training set Ωtr given, and q sufficiently large, the cost function Φ defined above depends only on the constant matrices A1, . . . , Am and the constant vectors b1, . . . , bm that define the member functions f1, . . . , fm.


The member functions f1, . . . , fm are generated by minimizing the cost function Φ among Θ(q), the set of all q-dimensional member functions. Formally, the task of the learning process is to solve











min_{f1, . . . , fm} Φ(f1, . . . , fm) = min_{f1, . . . , fm} Σ_{j=1}^{m} Σ_{k∈S, k≠j} Σ_{x∈Ωtr(j)} φjk(x)
The generated member functions f1, . . . , fm are the objectives pursued in the learning process performed by the member function trainer 44. They are used to construct the quadratic multiform separation classifier 2, denoted by ŷ, of the present invention.



FIG. 5 shows a schematic block diagram of the multiform separation classifier 1 according to one embodiment of the present invention for prediction or decision.


The multiform separation classifier 1 of the present invention can be expressed by the following two equations:

U(j)={x∈Ω:fj(x)=min{f1(x),f2(x), . . . ,fm(x)}},j=1, . . . ,m
and
ŷ(x)=j if x∈U(j)
or equivalently,
ŷ(x)=j if fj(x)≤fk(x),k=1, . . . ,m


In summary, the present invention solves the classification problem as follows: Given any sample data x∈Ω (Ω may include the training set Ωtr and/or the test set Ωtt and/or the remaining set Ωth), apply the m trained piecewise continuous functions f1, . . . , fm at the sample data x to obtain their respective temporary evaluations. If the temporary evaluation fj(x) at the sample data x of a certain trained piecewise continuous function fj is minimal among all temporary evaluations f1(x), . . . , fm(x) at the sample data x of the m trained piecewise continuous functions f1, . . . , fm, x is determined to belong to the j-th membership subset U(j), and consequently x is determined to have the membership j.


(Method to Implement a Multiform Separation Classifier)


The respective modules, engines, and the overall structure of the multiform separation classifier 1 of the present invention have been discussed above. However, in the aspect of software, the multiform separation classifier 1 may be alternatively implemented by a sequence of steps, as introduced above. Therefore, the method of the present invention essentially includes the following steps, executed in order:

    • (a) preparing a training set Ωtr from a collection of data Ω. Preferably, the training set Ωtr is further decomposed into training subsets Ωtr(j), j=1, . . . , m, where the memberships j's refer to data categories of the training set Ωtr of the classification. This step may be executed by the aforementioned data collection module 20.
    • (b) constructing a multiform separation engine 30 configured to use m piecewise continuous functions to perform classification, where m≥2. The m piecewise continuous functions are respectively trained with the training set Ωtr through a learning process. Preferably, each piecewise continuous function is a quadratic function. However, in other embodiments, each piecewise continuous function may be a linear function or a quartic function, or more generally may be a polynomial function, a rational function, an algebraic function, a transcendental function, or any explicitly or implicitly defined suitable function.
    • (c) deriving an output result after sample data x is processed through the multiform separation engine 30. In particular, the sample data x is classified to have a membership j if a temporary evaluation at the sample data x of a certain trained piecewise continuous function is minimal among all temporary evaluations at the sample data x of the m trained piecewise continuous functions. In particular, the m trained piecewise continuous functions are generated by minimizing a cost function Φ, which provides a performance measure for separating the training subsets Ωtr(j), j=1, . . . , m. This step may be executed by the aforementioned output module 70. (An end-to-end sketch of steps (a) to (c) follows this list.)
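Tying the earlier sketches together, steps (a) to (c) might be exercised as follows (every name comes from the illustrative code above, not from the patent):

    # Hypothetical end-to-end use of the sketches above:
    # omega_tr, omega_tt = split_data(omega)            # step (a)
    # subsets = decompose(omega_tr, y_labels, m)        # step (a), subsets
    # fs = train_qms(subsets, m, p, q, alphas)          # step (b), learning
    # j_hat = classify(sample, fs)                      # step (c), prediction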


In conclusion, the present invention provides a multiform separation classifier, which appropriately utilizes multiple functions so as to produce better solutions in terms of accuracy, robustness, complexity, speed, dependency, cost, and so on.


Although the present invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed.

Claims
  • 1. A multiform separation classifier implemented as a manufacture, comprising: an input module configured to receive sample data x; a data collection module connected to the input module and configured to store a collection of data Ω⊂ℝp from the input module, the collection of data Ω⊂ℝp including a training set Ωtr and a test set Ωtt, wherein the collection of data Ω⊂ℝp is composed of m memberships or categories of elements, and the m memberships or categories are digitized as 1, 2, . . . , m, wherein m is greater than or equal to 2; a quadratic multiform separation engine connected to the data collection module and configured to use m q-dimensional member functions f1, . . . , fm generated through a learning process based on the training set Ωtr to perform classification, wherein each q-dimensional member function is defined as a quadratic function f having a form ∥Ax−b∥², for an integer q, a constant matrix A∈ℝq×p, and a constant vector b∈ℝq, where ∥·∥ denotes the Euclidean norm, thus m trained q-dimensional member functions f1, . . . , fm have expressions f1(x)=∥A1x−b1∥², f2(x)=∥A2x−b2∥², . . . , fm(x)=∥Amx−bm∥², where trained matrices of constant matrices A1, . . . , Am and trained vectors of constant vectors b1, . . . , bm are solved through the learning process; wherein in the learning process, m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, are set between 0 and 1, and each control parameter αjk is used to compare with a ratio
  • 2. The multiform separation classifier of claim 1, wherein the sample data x is classified to have a membership j if a temporary evaluation at the sample data x of a certain trained q-dimensional member function is minimal among all temporary evaluations at the sample data x of the m trained q-dimensional member functions.
  • 3. The multiform separation classifier of claim 1, wherein the quadratic multiform separation engine is configured to generate membership subsets U(j)'s based on the m trained q-dimensional member functions.
  • 4. The multiform separation classifier of claim 3, wherein the quadratic multiform separation engine is configured to derive temporary evaluations f1(x), . . . , fm(x) from the m trained q-dimensional member functions f1, . . . , fm for a certain sample data x, and the certain sample data x is assigned to have a membership j by finding out a specific evaluation fj(x) after processing the temporary evaluations f1(x), . . . , fm(x) according to a specific operator.
  • 5. The multiform separation classifier of claim 4, wherein the specific operator is a minimum operator used to indicate a minimal item of the temporary evaluations f1(x), . . . , fm(x), wherein the minimal item is the specific evaluation fj(x), if fj(x)≤fk(x), k=1, . . . , m.
  • 6. The multiform separation classifier of claim 1, wherein in the learning process, given the integer q, the m2 control parameters αjk, j=1, . . . , m, k=1, . . . , m, and the training set Ωtr, a cost function Φ is defined depending on constant matrices A1, . . . , Am and constant vectors b1, . . . , bm that define the member functions f1, . . . , fm.
  • 7. The multiform separation classifier of claim 6, wherein the cost function Φ is defined by
  • 8. The multiform separation classifier of claim 7, wherein the m trained q-dimensional member functions are generated by minimizing the cost function Φ among Θ(q), a set of all q-dimensional member functions.
  • 9. The multiform separation classifier of claim 8, wherein the learning process is configured to solve
  • 10. The multiform separation classifier of claim 1, wherein the multiform separation classifier is implemented in a cloud server or a local computer as hardware or software or as separated circuit devices on a set of chips or an integrated circuit device on a single chip.
  • 11. A method to implement a multiform separation classifier, comprising the following steps: preparing a training set Ωtr from a collection of data Ω⊂ℝp including the training set Ωtr and a test set Ωtt, wherein the collection of data Ω⊂ℝp is composed of m memberships or categories of elements, and the m memberships or categories are digitized as 1, 2, . . . , m, wherein m is greater than or equal to 2; configuring a quadratic multiform separation engine to use m q-dimensional member functions f1, . . . , fm generated through a learning process based on the training set Ωtr to perform classification, wherein each q-dimensional member function is defined as a quadratic function f having a form ∥Ax−b∥², for an integer q, a constant matrix A∈ℝq×p, and a constant vector b∈ℝq, where ∥·∥ denotes the Euclidean norm, thus m trained q-dimensional member functions f1, . . . , fm have expressions f1(x)=∥A1x−b1∥², f2(x)=∥A2x−b2∥², . . . , fm(x)=∥Amx−bm∥², where trained matrices of constant matrices A1, . . . , Am and trained vectors of constant vectors b1, . . . , bm are solved through the learning process; wherein in the learning process, m² control parameters αjk, j=1, . . . , m, k=1, . . . , m, are set between 0 and 1, and each control parameter αjk is used to compare with a ratio
  • 12. The method of claim 11, further comprising a step of decomposing the training set Ωtr into training subsets Ωtr(j), j=1, . . . , m, where j's represent memberships of the classification.
  • 13. The method of claim 12, the memberships j's referring to data categories of the training set Ωtr.
  • 14. The method of claim 11, wherein the sample data x is classified to have a membership j if a temporary evaluation at the sample data x of a certain trained q-dimensional member function fj(x) is minimal among all temporary evaluations at the sample data x of the m trained q-dimensional member functions.
  • 15. The method of claim 11, wherein the m trained q-dimensional member functions are generated by minimizing a cost function Φ, which provides a performance measure for separating training subsets Ωtr(j), j=1, . . . , m.
US Referenced Citations (2)
Number Name Date Kind
20190102701 Singaraju Apr 2019 A1
20210133565 Shamir May 2021 A1
Non-Patent Literature Citations (1)
Entry
Zhang et al., "Linearized Proximal Alternating Direction Method of Multipliers for Parallel Magnetic Resonance Imaging," IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 4, Oct. 2017.
Related Publications (1)
Number Date Country
20220222494 A1 Jul 2022 US