INCOMPLETE MULTI-VIEW FUZZY SYSTEM MODELING METHOD BASED ON VISIBLE AND HIDDEN VIEW COLLABORATIVE LEARNING

Information

  • Patent Application
  • Publication Number
    20240267506
  • Date Filed
    July 31, 2023
  • Date Published
    August 08, 2024
  • CPC
    • H04N13/351
    • G06V10/764
  • International Classifications
    • H04N13/351
    • G06V10/764
Abstract
The present invention belongs to the field of intelligent computing, and particularly relates to an incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning. In the first stage of the method, missing view imputation and common hidden view learning are unified into one framework. In the framework, the learned common hidden view can improve the quality of the imputed multi-view data, and the imputed multi-view data in turn guides the learning of the common hidden view, so the two negotiate with and improve each other. In the second stage, the present invention constructs an incomplete multi-view TSK fuzzy system with visible and hidden view collaborative learning. In the system, the imputed multi-view data and the common hidden view data are fully explored. At the same time, collaborative learning enables the system to mine the consistency and complementary information between the visible and hidden views.
Description
TECHNICAL FIELD

The present invention belongs to the field of intelligent computing, and particularly relates to an incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning.


BACKGROUND

In the real world, data often has multiple representations or comes from multiple sources, which is called multi-view or multi-modal data. For example, in a content-based web image search, objects can be represented by the visual features of an image and textual features of the image description.


In order to efficiently mine and utilize multi-view data, multi-view learning methods have developed greatly in recent years. However, current multi-view algorithms have two problems. First, they share a common assumption that all views are complete. In real scenarios, however, much multi-view data has missing views. For example, in document clustering, different languages can be considered as different views, but due to human error, some documents are not fully translated. Another example is web image retrieval, where text descriptions are not always associated with web images, and some web images may have no associated text. In these scenarios, the complementary information among the views becomes very limited, which makes traditional multi-view methods unreliable or unavailable. Second, with the widespread application of machine learning in recent years, the interpretability of machine learning models has received increasing attention; however, most algorithms focus on performance while neglecting the interpretability of the model.


Several explorations have been made to meet the challenges brought by incomplete multi-view data, and they can be divided into four categories. 1. Discarding all data with missing views, which loses a large amount of available information. 2. Imputing the missing views with existing imputation techniques, which can reduce the negative impact of missing views to a certain extent; however, existing imputation techniques can only complete one view at a time and cannot use the complementary information among the views, and an estimation method may introduce additional estimation errors, thus reducing data quality. 3. Using subspace learning to learn a common view for all views, which cannot guarantee that the learned common hidden view remains highly discriminative when many views are missing; meanwhile, discarding the original multi-view data and modeling only with the common hidden view easily results in poor generalization ability. 4. Transforming the incomplete multi-view problem into a multi-task learning problem by a view alignment and grouping strategy, which guarantees data utilization better but ignores the complementary information among the views. Therefore, incomplete multi-view learning still faces major challenges.


In order to address the problem of model interpretability, two mainstream strategies have been proposed. The first uses relatively simple, inherently interpretable models, such as linear models, tree-based models, and rule-based models. The second uses post-hoc interpretation methods, such as visualization and interpretation by example, to explain the decision-making process of a model while preserving its performance. Because a TSK fuzzy system is a rule-based interpretable model that also has strong data-driven learning ability, it has attracted wide attention and has made great progress in the multi-view field in recent years. However, current TSK-based multi-view models still cannot deal with the incomplete multi-view problem; therefore, building an efficient and more transparent model for the incomplete multi-view problem remains a challenging task.


SUMMARY

According to the defects in the prior art, the present invention provides an incomplete multi-view fuzzy system modeling method (IMV_TSK) based on visible and hidden view collaborative learning. The present invention firstly designs a new hidden view extraction method, and adaptively integrates the missing view imputation into the process. Then, the present invention designs a new incomplete multi-view fuzzy system modeling method by combining visible and hidden views.


The present invention has the following technical solution:

    • 1. An incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning, comprising the following steps:
    • step one: identifying the number c of classes of incomplete multi-view data {Xν∈RN×dν, ν=1, 2, . . . , V} for training, the number V of views, the size N of samples, and the feature dimension dν of each view; and
    • step two: constructing a hidden view extraction module:
    • 2.1 determining an identification matrix Eν∈RN×N and a sample weight matrix Wν∈RN×N according to the input incomplete multi-view data, which are defined as follows:










$$E_{j,j}^{v}=\begin{cases}1,&\text{if the }j\text{-th instance is missing in the }v\text{-th view}\\0,&\text{otherwise}\end{cases}\tag{1}$$

$$W_{j,j}^{v}=\begin{cases}1,&\text{if the }v\text{-th view contains the }j\text{-th instance}\\w,&\text{otherwise}\end{cases}\tag{2}$$
    • where w is the weight of the imputed views, which is defined as the percentage of the number of available instances to the total number of instances; and at the same time, the common hidden view H∈RN×c, a basis matrix Bν∈Rdν×c of each view and an error matrix Uν∈RN×dν of each view are initialized, respectively;
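The construction of Eν and Wν above can be sketched as follows. This is a minimal illustration, assuming each view's missingness is given as a boolean mask (a hypothetical input format not specified in the text):

```python
import numpy as np

def build_matrices(missing_mask, w):
    """Sketch of Eq. (1)-(2): diagonal identification matrix E^v and
    sample weight matrix W^v for one view.

    missing_mask : (N,) bool array, True where the instance is missing
                   in this view (an assumed input format).
    w            : weight of imputed instances, defined in the text as
                   the fraction of available instances.
    """
    E = np.diag(missing_mask.astype(float))      # 1 on missing rows, 0 otherwise
    W = np.diag(np.where(missing_mask, w, 1.0))  # w on missing rows, 1 otherwise
    return E, W

# w is the percentage of available instances
mask = np.array([False, True, False, False])     # instance 2 is missing
w = 1.0 - mask.mean()                            # 3/4 available -> w = 0.75
E, W = build_matrices(mask, w)
```

The diagonal form makes Eν act as a row selector for missing instances, which is how it is used in formulas (3) and (7).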

    • 2.2 constructing the initialized hidden view to extract an objective function, and calculating the value of the objective function, wherein the target formula is as follows:














$$\min_{H,B^{v},U^{v}}\ \sum_{v=1}^{V}\left\|W^{v}\left(X^{v}+E^{v}U^{v}-H(B^{v})^{T}\right)\right\|_{F}^{2}+\beta\|H\|_{2,1}+\gamma\sum_{v=1}^{V}\operatorname{tr}\left((U^{v})^{T}L^{v}U^{v}\right)\tag{3}$$

$$\text{s.t.}\ H\geq 0,\ B^{v}\geq 0$$

where Lν=Dν−Sν is a Laplacian matrix built from the similarity matrix Sν, and Dν is a diagonal matrix whose ith diagonal element diν is equal to Σj=1Nsi,jν. The first two terms of formula (3) are used for solving the common hidden view and completing the missing views. The third term γΣν=1Vtr((Uν)TLνUν) is used for enabling the reconstructed error matrices to be closer to the real values;


2.3 solving H, Bν and Uν in formula (3) by using an iterative solution method, wherein the update formulas are as follows:










$$H_{i,j}\leftarrow H_{i,j}\frac{\left[\sum_{v=1}^{V}\left(\tilde{W}^{v}X^{v}B^{v}+\tilde{W}^{v}E^{v}U^{v}B^{v}\right)\right]_{i,j}}{\left[\sum_{v=1}^{V}\left(\tilde{W}^{v}H(B^{v})^{T}B^{v}\right)+\beta PH\right]_{i,j}}\tag{4}$$

$$B_{i,j}^{v}\leftarrow B_{i,j}^{v}\frac{\left[(X^{v})^{T}\tilde{W}^{v}H+(U^{v})^{T}(E^{v})^{T}\tilde{W}^{v}H\right]_{i,j}}{\left[B^{v}H^{T}\tilde{W}^{v}H\right]_{i,j}}\tag{5}$$

$$U^{v}\leftarrow\left((E^{v})^{T}\tilde{W}^{v}E^{v}+\gamma L^{v}\right)^{-1}(E^{v})^{T}\tilde{W}^{v}H(B^{v})^{T}\tag{6}$$

where $\tilde{W}^{v}=(W^{v})^{T}W^{v}$ and $P$ is the diagonal matrix induced by the ℓ2,1 norm of H.

    • obtaining a locally optimal solution by iterative optimizations (4), (5), and (6) until convergence, and obtaining the optimal Uν;

    • step three: according to the optimal error matrix, imputing the multi-view data according to the following formula:













$$X_{\text{filled}}^{v}=X_{\text{incomplete}}^{v}+E^{v}U^{v}\tag{7}$$
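The alternating updates (4)-(6) and the imputation step (7) can be sketched as below. This is a non-authoritative NumPy sketch: the diagonal matrix P (taken here as P_ii = 1/(2||h_i||), the usual form induced by the ℓ2,1 norm) and the small eps terms added for numerical stability are assumptions of the sketch, not part of the original text.

```python
import numpy as np

def hidden_view_step(Xs, Es, Ws, Ls, H, Bs, Us, beta, gamma, eps=1e-9):
    """One round of updates (4)-(6) for V views.
    Xs[v]: (N, d_v) data, Es[v]/Ws[v]: (N, N) diagonal matrices,
    Ls[v]: (N, N) Laplacian, H: (N, c), Bs[v]: (d_v, c), Us[v]: (N, d_v).
    """
    V = len(Xs)
    Wt = [W.T @ W for W in Ws]                       # tilde(W)^v = (W^v)^T W^v
    # --- update H, Eq. (4), multiplicative rule ---
    P = np.diag(1.0 / (2.0 * np.linalg.norm(H, axis=1) + eps))  # assumed l2,1 form
    num = sum(Wt[v] @ Xs[v] @ Bs[v] + Wt[v] @ Es[v] @ Us[v] @ Bs[v] for v in range(V))
    den = sum(Wt[v] @ H @ Bs[v].T @ Bs[v] for v in range(V)) + beta * (P @ H)
    H = H * num / (den + eps)
    # --- update B^v, Eq. (5), multiplicative rule ---
    for v in range(V):
        num_b = Xs[v].T @ Wt[v] @ H + Us[v].T @ Es[v].T @ Wt[v] @ H
        den_b = Bs[v] @ H.T @ Wt[v] @ H
        Bs[v] = Bs[v] * num_b / (den_b + eps)
    # --- update U^v, Eq. (6), closed form ---
    for v in range(V):
        A = Es[v].T @ Wt[v] @ Es[v] + gamma * Ls[v]
        Us[v] = np.linalg.solve(A + eps * np.eye(A.shape[0]),
                                Es[v].T @ Wt[v] @ H @ Bs[v].T)
    return H, Bs, Us

def impute(Xs, Es, Us):
    """Eq. (7): fill the missing rows with the learned error matrices."""
    return [Xs[v] + Es[v] @ Us[v] for v in range(len(Xs))]
```

In practice the step would be repeated until the objective (3) stops decreasing, after which `impute` produces the filled multi-view data used in the second stage.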
    • step four: constructing an incomplete multi-view fuzzy system modeling module:

    • according to the hidden views and the imputed multi-view data acquired from the first two steps, constructing an incomplete multi-view fuzzy system based on visible and hidden collaborative learning in the present invention;

    • 4.1 determining the number K of fuzzy rules, and calculating antecedent parameters eik and δik of each view fuzzy system by using a VarPart clustering algorithm;

    • 4.2 projecting multi-view data into the fuzzy space based on the following formula;













$$\mu^{k}(x)=\prod_{i}\exp\left(-\frac{(x_{i}-e_{i}^{k})^{2}}{2\delta_{i}^{k}}\right)\tag{8a}$$

$$\tilde{\mu}^{k}(x)=\frac{\mu^{k}(x)}{\sum_{k'=1}^{K}\mu^{k'}(x)}\tag{8b}$$

$$x_{e}=(1,x^{T})^{T}\tag{8c}$$

$$\tilde{x}^{k}=\tilde{\mu}^{k}(x)\,x_{e}\tag{8d}$$

$$x_{g}=\left((\tilde{x}^{1})^{T},(\tilde{x}^{2})^{T},\ldots,(\tilde{x}^{K})^{T}\right)^{T}\tag{8e}$$
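The projection (8a)-(8e) of one sample into the fuzzy feature space can be sketched as follows; treating the Gaussian membership (8a) as a product over the input features is an assumption of this sketch:

```python
import numpy as np

def fuzzy_project(x, centers, widths):
    """Map one sample into the TSK fuzzy feature space, Eq. (8a)-(8e).

    x       : (d,) sample
    centers : (K, d) antecedent centers e_i^k (e.g. from clustering)
    widths  : (K, d) antecedent widths delta_i^k
    Returns x_g of length K*(d+1).
    """
    # (8a) firing strength of each rule (product over features is assumed)
    mu = np.exp(-((x - centers) ** 2) / (2.0 * widths)).prod(axis=1)   # (K,)
    # (8b) normalized firing strengths
    mu_t = mu / mu.sum()
    # (8c) augmented sample with a leading 1 for the rule bias
    x_e = np.concatenate(([1.0], x))                                   # (d+1,)
    # (8d)-(8e) weight the augmented sample by each rule and concatenate
    return np.concatenate([m * x_e for m in mu_t])                     # (K*(d+1),)
```

Applying this row by row to Xν yields the matrix Xgν used in the objective (9).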
    • 4.3 constructing the objective function and calculating its value, wherein the objective function is as follows:














$$\min_{P_{g}^{v},\alpha^{v}}\ \sum_{v=1}^{V}\alpha^{v}\left\|W^{v}\left(X_{g}^{v}P_{g}^{v}-Y\right)\right\|^{2}+\alpha^{V+1}\left\|H_{g}P_{g}^{V+1}-Y\right\|^{2}+\lambda_{1}\sum_{v=1}^{V}\left\|W^{v}\left(X_{g}^{v}P_{g}^{v}-\Lambda^{v}\right)\right\|^{2}+\lambda_{1}\left\|H_{g}P_{g}^{V+1}-\Lambda^{V+1}\right\|^{2}+\lambda_{2}\sum_{v=1}^{V+1}\alpha^{v}\ln\alpha^{v}+\lambda_{3}\sum_{v=1}^{V+1}\left\|P_{g}^{v}\right\|^{2}\tag{9}$$

$$\text{s.t.}\ \sum_{v=1}^{V+1}\alpha^{v}=1,\ \alpha^{v}>0$$

where Xgν∈RN×K(dν+1) is the mapping of the original data Xν∈RN×dν into the new feature space by the fuzzy rules under the νth view, Hg∈RN×K(c+1) is the mapping of the original hidden view H∈RN×c into the new feature space by the fuzzy rules, and Pgν is the consequent parameter of the νth view; Y=[y1; y2; . . . ; yN]∈RN×C is the label matrix of the multi-view data, where yi∈R1×C is the label of the ith instance; for example, yi=[1, 0, 0] indicates that the ith multi-view instance xi belongs to the first class; αν is the weight of each view; Wν is the sample weight matrix of each view. Although the missing views are imputed by the imputation method in the previous section, the difference between the imputed views and the real views cannot be measured; the present invention therefore introduces the sample weight matrix to reduce the risk that large differences harm the robustness of the model.


Further details of a learning criterion for (9) are explained below:

    • 1) The first two terms $\sum_{v=1}^{V}\alpha^{v}\|W^{v}(X_{g}^{v}P_{g}^{v}-Y)\|^{2}$ and $\alpha^{V+1}\|H_{g}P_{g}^{V+1}-Y\|^{2}$ are empirical error terms, which are used for training a fuzzy system under each view.












    • 2) $\sum_{v=1}^{V}\|W^{v}(X_{g}^{v}P_{g}^{v}-\Lambda^{v})\|^{2}$ and $\|H_{g}P_{g}^{V+1}-\Lambda^{V+1}\|^{2}$ are collaborative terms among the views, which ensure that the outputs of the views are consistent and mine the mutual information among the views, thereby improving the generalization ability of the trained model. Here $\Lambda^{v}=\frac{1}{V}\sum_{l=1,l\neq v}^{V+1}X_{g}^{l}P_{g}^{l}$ (for writing convenience, $H_{g}=X_{g}^{V+1}$).

    • 3) The information contained in different views differs. In order to mine this difference and improve the robustness of the model, the negative Shannon entropy term $\sum_{v=1}^{V+1}\alpha^{v}\ln\alpha^{v}$ is introduced herein. According to the maximum entropy principle, by minimizing the negative entropy together with the prediction loss, the importance of the views can be balanced adaptively, thereby preventing any single view from dominating the final output and finally improving the robustness of the model.

    • 4) The regularization parameters λ1>0, λ2>0, λ3>0 are used for controlling the effects of the corresponding components, which can be manually set or obtained through optimization.


4.4 Formula (9) is a non-convex problem, so the present invention expresses it as a Lagrange function and then solves it by means of iterative optimization. The update formulas of Pgν and αν are given as follows; for writing convenience, set {tilde over (W)}ν=(Wν)TWν:










$$P_{g}^{v}=\left[\lambda_{3}I+(\alpha^{v}+\lambda_{1})(X_{g}^{v})^{T}\tilde{W}^{v}X_{g}^{v}\right]^{-1}\left[\alpha^{v}(X_{g}^{v})^{T}\tilde{W}^{v}Y+\lambda_{1}(X_{g}^{v})^{T}\tilde{W}^{v}\Lambda^{v}\right]\tag{10}$$

$$\alpha^{v}=\frac{\exp\left(-\left\|W^{v}\left(X_{g}^{v}P_{g}^{v}-Y\right)\right\|^{2}/\lambda_{2}\right)}{\sum_{l=1}^{V+1}\exp\left(-\left\|W^{l}\left(X_{g}^{l}P_{g}^{l}-Y\right)\right\|^{2}/\lambda_{2}\right)}\tag{11}$$
a locally optimal solution is obtained by iterating optimizations (10) and (11) until convergence.
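One alternation of (10) and (11) can be sketched as below. The sketch treats the hidden view as view V+1 with an identity sample weight matrix, and uses the entropy parameter λ2 in the weight update; both are assumptions made for illustration:

```python
import numpy as np

def update_consequents_and_weights(Xg, Ws, Y, Pg, lam1, lam2, lam3):
    """One alternation of Eq. (10)-(11) over the V+1 views
    (H_g is treated as X_g^{V+1}; pass an identity matrix in Ws for it).
    Xg[v]: (N, D_v) fuzzy features, Ws[v]: (N, N), Y: (N, C), Pg[v]: (D_v, C).
    """
    Vp1 = len(Xg)
    # Eq. (11): entropy-regularized view weights from current fitting errors
    errs = np.array([np.linalg.norm(Ws[v] @ (Xg[v] @ Pg[v] - Y)) ** 2
                     for v in range(Vp1)])
    a = np.exp(-errs / lam2)
    alpha = a / a.sum()
    # Eq. (10): closed-form ridge update of each consequent matrix
    for v in range(Vp1):
        # collaborative target: average output of the other views
        Lam = sum(Xg[l] @ Pg[l] for l in range(Vp1) if l != v) / (Vp1 - 1)
        Wt = Ws[v].T @ Ws[v]                    # tilde(W)^v
        G = Xg[v].T @ Wt
        A = lam3 * np.eye(Xg[v].shape[1]) + (alpha[v] + lam1) * (G @ Xg[v])
        Pg[v] = np.linalg.solve(A, alpha[v] * (G @ Y) + lam1 * (G @ Lam))
    return Pg, alpha
```

Repeating this alternation until the objective (9) stabilizes yields the locally optimal consequent parameters and view weights.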


Step five: the final output of the incomplete multi-view fuzzy system is obtained according to the following formula:










$$Y_{\text{output}}=\sum_{v=1}^{V}\alpha^{v}W^{v}X_{g}^{v}P_{g}^{v}+\alpha^{V+1}H_{g}P_{g}^{V+1}\tag{12}$$
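The final output (12) can be sketched as below; decoding the class label by argmax is an assumption of this sketch, consistent with the one-hot label matrix Y described earlier:

```python
import numpy as np

def predict(Xg, Ws, Pg, alpha):
    """Final system output, Eq. (12): a weighted sum of the per-view TSK
    outputs plus the hidden-view output (last entries of the lists)."""
    V = len(Xg) - 1                                   # number of visible views
    out = sum(alpha[v] * (Ws[v] @ Xg[v] @ Pg[v]) for v in range(V))
    out = out + alpha[V] * (Xg[V] @ Pg[V])            # hidden-view term H_g P_g^{V+1}
    return out, out.argmax(axis=1)                    # scores and predicted classes
```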
The present invention has the following advantages:


1) Different from existing methods that simply impute the missing views or find a common hidden view for all views, the present invention combines the two and completes the data while learning the hidden view. The learned hidden view promotes data imputation, and the imputed multi-view data in turn improves the discrimination of the hidden view.


2) The present invention adopts a TSK fuzzy system as the basic model to construct a multi-view classification model with strong interpretability. Finally, all views, including the hidden view, are connected together by collaborative learning. Certain differences and complementary information exist among different views, and collaborative learning can mine the complementary information, reduce the differences among views, and finally greatly promote the robustness of the model.


3) The validity of the method herein is verified on multiple real multi-view datasets.





DESCRIPTION OF DRAWINGS


FIG. 1 is an overall structural diagram of an algorithm of the present invention.



FIG. 2 is a flow chart of the present invention.





DETAILED DESCRIPTION

The present invention will be described in detail below in combination with the drawings and the embodiments.


As shown in FIGS. 1-2, the present invention realizes an incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning. The method comprises two main stages: missing view imputation and common hidden view learning, and incomplete multi-view fuzzy system modeling. In the first stage of the present invention, the missing view imputation and common hidden view learning are integrated into one framework based on multi-view matrix factorization technology, so that the two parts negotiate with each other to acquire the optimal imputed multi-view data and the common hidden view. In the second stage, the present invention constructs an incomplete multi-view fuzzy system based on a traditional TSK fuzzy system for the imputed multi-view data and the hidden views. In this process, the present invention realizes the optimal multi-view data mining through collaborative learning and Shannon entropy.









TABLE 1
Statistical Information on the Datasets

Data Set            Number of Samples  Number of Views (Dimensions)  Number of Classes
------------------  -----------------  ----------------------------  -----------------
Dermatology         366                2 (22-12)                     6
Image Segmentation  2310               2 (10-9)                      7
Forest Type         326                2 (18-9)                      4
Corel Images        1000               2 (300-256)                   10
Ionosphere          351                2 (34-25)                     2
Epileptic EEG       500                2 (20-6)                      2
Caltech7            1474               3 (48-40-254)                 7

Embodiment 1

1. an incomplete multi-view fuzzy system modeling method based on visible and hidden collaborative learning, comprising the following steps:

    • step one: identifying the number c of classes of the incomplete multi-view data for training, the number V of views, the size N of samples, and the feature dimension dν of each view; and
    • step two: constructing hidden views to extract an objective function and extract hidden views, and imputing missing views;
    • step three: imputing the missing views according to an optimal error matrix acquired in step two;
    • step four: projecting multi-view data into fuzzy space, and constructing an objective function of an incomplete multi-view fuzzy system and solving;
    • step five: acquiring the final classification results.


In embodiment 1, the present invention uses seven public multi-view datasets for model construction and evaluation; the specific dataset information is shown in Table 1.


Table 2 to Table 8 summarize the classification accuracy of the present invention and twelve advanced incomplete multi-view classification baselines on seven datasets. From Table 2 to Table 8, the following can be concluded: (1) Simply replacing the missing views with the value 0 performs worse in most cases than the other imputation methods. (2) When few views are missing, the imputation strategies play a certain role, while when the missing proportion is large, their effectiveness is greatly reduced. For example, on the Dermatology dataset, AMVMED (SVT) and similar methods are effective when the proportion of missing views is 10% and 30%, but when the proportion reaches 90%, their performance falls clearly behind that of IMV_TSK. (3) Similarly, because the iMSF algorithm adopts a grouping strategy, the model can utilize the multi-view information when the missing proportion is small, but utilizes less of it when the missing proportion is large, resulting in poor performance. At the same time, the raw data is divided into several groups, so the training data for each group is small when the number of categories is large, which also hurts iMSF. For example, iMSF performs better on binary classification datasets such as Epileptic EEG and Ionosphere, while its effect on multi-class datasets such as Image Segmentation and Forest Type is poor. (4) In most cases, the performance of IMG, IMC_GRMF, DAIMC, and similar algorithms is not excellent, which shows that classification modeling relying only on the mined common hidden view leads to poor model performance. (5) IMV_TSK uses the complementary information among views to complete the data, and therefore performs better than the other algorithms in most cases. In addition, because IMV_TSK also uses the common hidden view, data utilization is maximized, so its performance remains better even at larger missing rates.









TABLE 2
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Dermatology Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.8190±0.042  0.7814±0.032  0.7192±0.044  0.6162±0.018  0.6246±0.047
TwoV-TSKFS (Mean)   0.8115±0.021  0.7842±0.033  0.7609±0.043  0.7069±0.026  0.6673±0.046
TwoV-TSKFS (KNN)    0.7903±0.038  0.7582±0.028  0.6898±0.059  0.6148±0.016  0.5649±0.012
TwoV-TSKFS (SVT)    0.8026±0.052  0.7438±0.053  0.7192±0.023  0.7015±0.046  0.6387±0.051
AMVMED (Zero)       0.9014±0.005  0.8401±0.003  0.7994±0.012  0.7979±0.024  0.7767±0.008
AMVMED (Mean)       0.9253±0.014  0.8656±0.007  0.8424±0.008  0.8147±0.005  0.8024±0.008
AMVMED (KNN)        0.9155±0.010  0.8725±0.010  0.8336±0.000  0.8129±0.012  0.7803±0.020
AMVMED (SVT)        0.8916±0.007  0.8730±0.006  0.8377±0.030  0.8144±0.023  0.7698±0.013
iMSF                0.9456±0.036  0.8439±0.026  0.8198±0.060  0.7117±0.036  0.6937±0.054
IMG                 0.9459±0.003  0.9333±0.005  0.9315±0.017  0.8793±0.041  0.8603±0.030
IMC_GRMF            0.9608±0.004  0.9381±0.006  0.9162±0.001  0.8890±0.006  0.8625±0.006
DAIMC               0.9268±0.021  0.8934±0.013  0.8598±0.043  0.8224±0.055  0.7795±0.036
IMV_TSK             0.9836±0.017  0.9536±0.012  0.9453±0.026  0.9327±0.015  0.9042±0.039
















TABLE 3
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Image Segmentation Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.7475±0.036  0.7068±0.029  0.6479±0.003  0.6016±0.055  0.5503±0.004
TwoV-TSKFS (Mean)   0.7669±0.035  0.7368±0.035  0.6604±0.003  0.6321±0.033  0.5622±0.002
TwoV-TSKFS (KNN)    0.7566±0.015  0.7259±0.041  0.6843±0.006  0.6356±0.025  0.5706±0.003
TwoV-TSKFS (SVT)    0.7649±0.005  0.7267±0.043  0.6748±0.035  0.6328±0.018  0.5712±0.002
AMVMED (Zero)       0.8718±0.005  0.7975±0.006  0.7429±0.010  0.6939±0.014  0.6452±0.001
AMVMED (Mean)       0.8765±0.004  0.8095±0.018  0.7624±0.014  0.7074±0.013  0.6810±0.015
AMVMED (KNN)        0.8807±0.004  0.8043±0.009  0.7434±0.009  0.7152±0.006  0.6605±0.007
AMVMED (SVT)        0.8729±0.001  0.8131±0.007  0.7588±0.010  0.6988±0.006  0.6411±0.015
iMSF                0.7695±0.035  0.7392±0.016  0.6840±0.014  0.6210±0.003  0.5720±0.018
IMG                 0.7152±0.021  0.6958±0.004  0.6503±0.031  0.6123±0.016  0.5961±0.051
IMC_GRMF            0.7432±0.011  0.6928±0.004  0.6843±0.012  0.6495±0.013  0.6170±0.002
DAIMC               0.8080±0.043  0.7369±0.056  0.6687±0.047  0.6137±0.048  0.5746±0.052
IMV_TSK             0.8970±0.014  0.8390±0.019  0.7874±0.016  0.7303±0.015  0.7087±0.016
















TABLE 4
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Forest Type Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.7657±0.051  0.7533±0.023  0.6874±0.069  0.6372±0.058  0.5402±0.053
TwoV-TSKFS (Mean)   0.7734±0.017  0.7581±0.006  0.7127±0.029  0.7013±0.060  0.6008±0.029
TwoV-TSKFS (KNN)    0.7868±0.030  0.7056±0.029  0.6955±0.021  0.6195±0.052  0.5167±0.021
TwoV-TSKFS (SVT)    0.7811±0.021  0.7800±0.015  0.7591±0.036  0.6898±0.049  0.5421±0.035
AMVMED (Zero)       0.8504±0.007  0.8297±0.018  0.8192±0.008  0.7903±0.028  0.7787±0.004
AMVMED (Mean)       0.8634±0.002  0.8422±0.004  0.8432±0.004  0.8403±0.008  0.8279±0.032
AMVMED (KNN)        0.8665±0.004  0.8491±0.009  0.8364±0.004  0.8370±0.010  0.8332±0.003
AMVMED (SVT)        0.8638±0.011  0.8174±0.018  0.8031±0.004  0.7945±0.022  0.7725±0.010
iMSF                0.8266±0.029  0.7924±0.047  0.7722±0.034  0.7089±0.045  0.6899±0.042
IMG                 0.8471±0.022  0.8337±0.015  0.8146±0.032  0.8069±0.016  0.7940±0.032
IMC_GRMF            0.7340±0.010  0.6928±0.003  0.6843±0.011  0.6495±0.013  0.6170±0.002
DAIMC               0.8343±0.006  0.8184±0.010  0.8171±0.013  0.8069±0.011  0.7681±0.029
IMV_TSK             0.8892±0.029  0.8796±0.031  0.8758±0.038  0.8565±0.024  0.8490±0.002
















TABLE 5
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Epileptic EEG Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.8465±0.022  0.8173±0.018  0.7903±0.032  0.7523±0.033  0.7290±0.020
TwoV-TSKFS (Mean)   0.8590±0.023  0.8300±0.034  0.7950±0.022  0.7820±0.034  0.7335±0.071
TwoV-TSKFS (KNN)    0.8355±0.020  0.8145±0.026  0.7720±0.045  0.7175±0.044  0.7055±0.036
TwoV-TSKFS (SVT)    0.8540±0.024  0.8295±0.049  0.8155±0.024  0.7990±0.028  0.7680±0.044
AMVMED (Zero)       0.8933±0.003  0.8513±0.007  0.8210±0.008  0.7822±0.008  0.7670±0.011
AMVMED (Mean)       0.8927±0.006  0.8802±0.016  0.8563±0.013  0.8375±0.017  0.8157±0.015
AMVMED (KNN)        0.9100±0.003  0.8673±0.011  0.8305±0.019  0.8243±0.019  0.7947±0.004
AMVMED (SVT)        0.9063±0.006  0.8548±0.000  0.8188±0.013  0.7745±0.007  0.7650±0.024
iMSF                0.9117±0.026  0.8675±0.011  0.8212±0.011  0.8146±0.004  0.7704±0.013
IMG                 0.8120±0.026  0.7000±0.046  0.6980±0.033  0.6880±0.030  0.6980±0.004
IMC_GRMF            0.8027±0.004  0.7720±0.009  0.7393±0.007  0.7013±0.008  0.6547±0.007
DAIMC               0.6540±0.002  0.6533±0.031  0.6307±0.022  0.6347±0.018  0.6167±0.024
IMV_TSK             0.9360±0.014  0.8800±0.036  0.8720±0.003  0.8440±0.032  0.8100±0.004
















TABLE 6
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Ionosphere Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.8198±0.072  0.8120±0.061  0.7785±0.033  0.7543±0.058  0.7272±0.065
TwoV-TSKFS (Mean)   0.8597±0.037  0.8006±0.034  0.7956±0.068  0.7813±0.038  0.7457±0.037
TwoV-TSKFS (KNN)    0.8197±0.026  0.8019±0.033  0.7857±0.061  0.7301±0.037  0.7202±0.061
TwoV-TSKFS (SVT)    0.8390±0.034  0.8191±0.041  0.8055±0.038  0.7401±0.046  0.7179±0.047
AMVMED (Zero)       0.9870±0.004  0.9482±0.004  0.9339±0.008  0.9155±0.003  0.8957±0.004
AMVMED (Mean)       0.9888±0.004  0.9653±0.002  0.9499±0.004  0.9257±0.012  0.8985±0.012
AMVMED (KNN)        0.9829±0.005  0.9615±0.006  0.9325±0.009  0.9110±0.013  0.9081±0.023
AMVMED (SVT)        0.9865±0.007  0.9694±0.009  0.9308±0.014  0.9081±0.028  0.8946±0.023
iMSF                0.9497±0.014  0.9308±0.002  0.8931±0.034  0.8711±0.014  0.8239±0.002
IMG                 0.8634±0.037  0.7807±0.024  0.7778±0.023  0.7352±0.042  0.7266±0.038
IMC_GRMF            0.8509±0.005  0.8119±0.012  0.8008±0.012  0.7863±0.014  0.7618±0.006
DAIMC               0.8328±0.010  0.8158±0.019  0.8091±0.017  0.8002±0.018  0.7720±0.038
IMV_TSK             0.9857±0.006  0.9714±0.012  0.9572±0.037  0.9373±0.021  0.9088±0.003
















TABLE 7
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Corel Images Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.3170±0.030  0.2590±0.080  0.2403±0.083  0.1873±0.082  0.1483±0.067
TwoV-TSKFS (Mean)   0.3062±0.009  0.2775±0.033  0.2508±0.034  0.2393±0.020  0.1945±0.020
TwoV-TSKFS (KNN)    0.2753±0.099  0.1990±0.089  0.1835±0.058  0.1550±0.073  0.1448±0.036
TwoV-TSKFS (SVT)    0.3185±0.015  0.2027±0.091  0.1695±0.093  0.1828±0.077  0.1362±0.048
AMVMED (Zero)       0.5215±0.012  0.4850±0.013  0.4370±0.016  0.4110±0.014  0.4008±0.009
AMVMED (Mean)       0.5325±0.025  0.4838±0.015  0.4578±0.016  0.4277±0.018  0.4125±0.018
AMVMED (KNN)        0.5270±0.008  0.4843±0.025  0.4563±0.017  0.4197±0.006  0.4010±0.019
AMVMED (SVT)        0.5253±0.008  0.4870±0.027  0.4300±0.010  0.4045±0.019  0.3913±0.020
iMSF                0.5714±0.000  0.4795±0.037  0.3643±0.028  0.3355±0.023  0.3311±0.008
IMG                 0.4720±0.026  0.4790±0.025  0.4420±0.018  0.4320±0.013  0.4750±0.038
IMC_GRMF            0.5033±0.007  0.4757±0.006  0.4583±0.010  0.4376±0.003  0.4203±0.003
DAIMC               0.4697±0.001  0.4477±0.014  0.4313±0.005  0.4130±0.004  0.3850±0.007
IMV_TSK             0.6590±0.015  0.6200±0.052  0.5900±0.020  0.5720±0.032  0.5600±0.034
















TABLE 8
Classification Accuracy (Mean ± Variance) of Thirteen Algorithms on Caltech7 Dataset

Algorithm           10%           30%           50%           70%           90%
------------------  ------------  ------------  ------------  ------------  ------------
TwoV-TSKFS (Zero)   0.7646±0.005  0.7524±0.008  0.7524±0.022  0.6845±0.007  0.6452±0.009
TwoV-TSKFS (Mean)   0.7802±0.015  0.7659±0.006  0.7212±0.027  0.6776±0.016  0.6228±0.087
TwoV-TSKFS (KNN)    0.7863±0.009  0.7714±0.017  0.7449±0.030  0.7391±0.011  0.5590±0.008
TwoV-TSKFS (SVT)    0.7563±0.016  0.7490±0.019  0.7293±0.021  0.6744±0.031  0.6839±0.033
AMVMED (Zero)       0.7503±0.021  0.6974±0.023  0.6676±0.019  0.6025±0.028  0.5529±0.011
AMVMED (Mean)       0.7666±0.026  0.7571±0.020  0.7375±0.024  0.7205±0.039  0.7165±0.004
AMVMED (KNN)        0.7788±0.017  0.7551±0.029  0.7374±0.022  0.7286±0.035  0.7096±0.019
AMVMED (SVT)        0.7564±0.032  0.7178±0.022  0.6737±0.026  0.6269±0.021  0.5882±0.014
iMSF                0.8743±0.018  0.8593±0.005  0.8164±0.033  0.7931±0.016  0.7269±0.054
IMG                 0.8630±0.006  0.8514±0.006  0.8636±0.011  0.8182±0.007  0.7062±0.006
IMC_GRMF            0.8157±0.003  0.8044±0.004  0.7959±0.010  0.7918±0.002  0.7900±0.018
DAIMC               0.8449±0.012  0.8352±0.014  0.8300±0.010  0.8249±0.010  0.8162±0.013
IMV_TSK             0.9254±0.009  0.9213±0.026  0.9084±0.018  0.8996±0.015  0.8876±0.014
















TABLE 9
Classification Accuracy (Mean ± Variance) of IMV_TSK1, IMV_TSK2 and IMV_TSK on Seven Datasets

Datasets            IMV_TSK1       IMV_TSK2       IMV_TSK
------------------  -------------  -------------  -------------
Dermatology         0.9454±0.027   0.9426±0.030   0.9536±0.012
Image Segmentation  0.8087±0.035   0.7593±0.028   0.8390±0.019
Forest Type         0.8565±0.016   0.8009±0.059   0.8796±0.031
Corel Images        0.6090±0.024   0.6170±0.023   0.6200±0.052
Epileptic EEG       0.8960±0.011   0.9100±0.034   0.8800±0.036
Caltech7            0.9145±0.012   0.9199±0.016   0.9213±0.026
Ionosphere          0.9645±0.072   0.9613±0.012   0.9714±0.012









Embodiment 2

The present invention further analyzes the effectiveness of the hidden view and of the missing-view imputation when the proportion of missing views is 30%. IMV_TSK without the hidden view is denoted IMV_TSK1, and IMV_TSK that fills the missing views by mean imputation instead of the proposed imputation approach is denoted IMV_TSK2. Table 9 gives the classification accuracy of IMV_TSK1, IMV_TSK2, and IMV_TSK.


It can be clearly seen that IMV_TSK is superior to IMV_TSK1 and IMV_TSK2 on most datasets, especially on an Image Segmentation data set and a Forest Type data set, which illustrates the effectiveness of the hidden view and the missing view imputation approach in IMV_TSK. By comparing IMV_TSK with IMV_TSK1, the results show that hidden view information is very useful to improve the classification performance of multi-view data. In addition, by comparing IMV_TSK with IMV_TSK2, the corresponding results show that the hidden view assisted missing view imputation technology has advantages over traditional imputation technology.

Claims
  • 1. An incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning, comprising the following steps: step one: identifying the number c of classes of incomplete multi-view data {Xν∈RN×dν, ν=1, 2, . . . , V} for training, the number V of views, the size N of samples, and the feature dimension dν of each view; step two: constructing an objective function to extract the common view, and to impute missing views; (2.1) determining an identification matrix Eν∈RN×N and a sample weight matrix Wν∈RN×N according to the input incomplete multi-view data, which are defined as follows:
  • 2. The incomplete multi-view fuzzy system modeling method with visible and hidden view collaborative learning of claim 1, wherein step three specifically comprises: according to an optimal error matrix, imputing the multi-view data according to the following function;
  • 3. The incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning of claim 1, wherein step four specifically comprises: according to the acquired hidden view and the imputed multi-view data, constructing an incomplete multi-view fuzzy system with collaborative learning in the present invention;(3.1) determining the number K of fuzzy rules, and calculating antecedent parameters eik and δik of each view fuzzy system by using a VarPart clustering algorithm;(3.2) mapping multi-view data into the fuzzy space based on the following function;
  • 4. The incomplete multi-view fuzzy system modeling method based on visible and hidden collaborative learning of claim 2, wherein step four specifically comprises: according to the acquired hidden view and the imputed multi-view data, constructing an incomplete multi-view fuzzy system with collaborative learning in the present invention;(3.1) determining the number K of fuzzy rules, and calculating antecedent parameters eik and δik of each view fuzzy system by using a VarPart clustering algorithm;(3.2) mapping multi-view data into the fuzzy space based on the following function;
  • 5. The incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning of claim 1, wherein the step five specifically comprises: obtaining the final output of the incomplete multi-view fuzzy system according to the following formula:
  • 6. The incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning of claim 2, wherein the step five specifically comprises: obtaining the final output of the incomplete multi-view fuzzy system according to the following formula:
  • 7. The incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning of claim 3, wherein the step five specifically comprises: obtaining the final output of the incomplete multi-view fuzzy system according to the following formula:
  • 8. The incomplete multi-view fuzzy system modeling method based on visible and hidden view collaborative learning of claim 4, wherein the step five specifically comprises: obtaining the final output of the incomplete multi-view fuzzy system according to the following formula:
Priority Claims (1)
Number Date Country Kind
202310071513.0 Feb 2023 CN national