METHOD OF INFERRING CONTENT RATIO OF COMPONENT IN SAMPLE, COMPOSITION INFERENCE DEVICE, AND PROGRAM

Information

  • Patent Application
  • Publication Number
    20240297031
  • Date Filed
    June 06, 2022
  • Date Published
    September 05, 2024
Abstract
A method of the present invention is a method of inferring a content ratio of a component in a sample containing a component selected from K types, including: heating a sample set, ionizing resultant gas components sequentially, and observing mass spectra continuously; acquiring two-dimensional mass spectra of the respective samples from the mass spectra, and merging two or more of these spectra to acquire a data matrix; performing non-negative matrix factorization on the data matrix; correcting an intensity distribution matrix through analysis on canonical correlation between a base spectrum matrix and the data matrix; acquiring a feature vector from a corrected intensity distribution matrix and expressing the sample in vector space; defining a K-1 dimensional simplex and determining an end member; and inferring a content ratio of the component in the sample on the basis of the end member and the feature vector. It is possible to infer the composition of a component in an unknown mixture even from a mass spectrum acquired under an ambient condition.
Description
TECHNICAL FIELD

The present invention relates to a method of inferring a content ratio of a component in a sample, a composition inference device, and a program.


BACKGROUND ART

As a method for inferring the component composition of a sample that is a multicomponent mixture, there is a conventionally known method of separating the components in the sample by some process and identifying and quantifying each fractionated component. One such identification method uses a mass spectrum acquired by a mass spectrometer.


However, the information acquired from mass spectra is a huge volume of multidimensional data having a large number of correlations, which makes it difficult to extract, intuitively or empirically, a distinctive signal leading to component identification. As described in Non-Patent Literature 1, a method of searching for a feature by reducing the dimension of such multidimensional data through principal component analysis has been used recently. However, even when such a known data analysis technique is used, it is still inherently difficult to realize quantification using a mass spectrum whose peak intensities depend on ionization efficiency. For quantitative analysis, it is first required to determine the ionization efficiency, an unknown variable, for each compound. To achieve this, isotopic samples having known sample volumes and the same ionization efficiency should be used as benchmark materials.


CITATION LIST
Non-Patent Literature



  • Non-Patent Literature 1: Analytical chemistry, 2020, vol. 92, issue 2, pp. 1925-1933



SUMMARY OF INVENTION
Technical Problem

According to the method described in Non-Patent Literature 1, much of the information contained in the original multidimensional data is lost during the course of the dimension reduction. This disables quantitative analysis on a component even though qualitative analysis thereon is possible.


Furthermore, as long as the ionization efficiency differs between components depending on the component type, the test environment, etc., the ratio between the sizes (areas) of the resultant mass spectra does not reflect the content ratio of each component in a mixture. This is one of the factors that inhibit quantitative analysis using mass spectra.


Ambient desorption ionization (ADI) has recently come into wide use as a mass spectrometry method by which a specimen in the ambient condition (in the open air under atmospheric pressure) is subjected to mass spectrometry without preprocessing. While this method has the advantage of allowing unprocessed specimens or open-system specimens to be analyzed as they are in the solid state, it has been deemed to make quantitative analysis more difficult due to the influence of inconsistent ionization rates between components, the measurement environment, etc.


In consideration of the foregoing circumstances, an object of the present invention is to provide a component content ratio inferring method capable of inferring a composition of components in an unknown mixture even from a mass spectrum acquired under the ambient condition.


It is also an object of the present invention to provide a composition inference device and a program.


Means of Solving Problem

The present inventors conducted earnest studies to attain the foregoing objects, and found that these objects can be attained by the configurations described below.


[1] A method of inferring a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components, where K is an integer equal to or greater than 1. The method includes: preparing learning samples of a number equal to or greater than K, containing at least one type of component selected from the K types of components and having compositions different from each other, and a background sample not containing the component; sequentially ionizing gas components generated by thermal desorption and/or pyrolysis while heating each sample in a sample set including the inference target sample, the learning samples, and the background sample, and observing mass spectra continuously; storing the mass spectrum acquired for each heating temperature into each row to acquire two-dimensional mass spectra of the respective samples, and merging at least two or more of the two-dimensional spectra and converting the spectra into a data matrix; performing an NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix; extracting a noise component in the intensity distribution matrix through analysis of the canonical correlation between the base spectrum matrix and the data matrix, and correcting the intensity distribution matrix so as to reduce the influence of the noise component, thereby acquiring a corrected intensity distribution matrix; partitioning the corrected intensity distribution matrix into a submatrix corresponding to each of the samples, and expressing each of the samples in vector space using the submatrix as a feature vector; defining a K-1 dimensional simplex including all of the feature vectors and determining K end members in the K-1 dimensional simplex; and calculating a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and inferring a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distances. If K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex, or the learning samples contain at least one of the end members.


[2] The method according to [1], wherein the K is an integer equal to or greater than 2, and the inference target sample is a mixture of the components.


[3] The method according to [1] or [2], wherein the learning sample contains the end member.


[4] The method according to [3], wherein the end member is determined on the basis of a determination label given to the learning sample.


[5] The method according to [3], wherein the end member is determined through vertex component analysis on the feature vector of the learning sample.


[6] The method according to [1], wherein the end member is determined on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume.


[7] The method according to [6], wherein the end member is determined by second NMF process by which the corrected intensity distribution matrix is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing an individual fragment abundance of each of the K types of components.


[8] The method according to [6] or [7], wherein the learning sample does not contain the end member.


[9] The method according to any one of [1] to [8], wherein acquiring the corrected intensity distribution matrix further includes making intensity correction on the intensity distribution matrix.


[10] The method according to [9], wherein the sample set further includes a calibration sample, the calibration sample contains all the K types of components and has a known composition, and the intensity correction includes normalizing the intensity distribution matrix.


[11] The method according to [9], wherein the intensity correction includes allocating at least part of the intensity distribution matrix using the product of the mass of the corresponding sample in the sample set and an environmental variable, and the environmental variable is a variable representing influence on ionization efficiency of the component during the observation.


[12] The method according to [11], wherein the environmental variable is a total value of peaks in a mass spectrum of a compound having a molecular weight of 50-1500 contained in a predetermined quantity in an atmosphere during the observation or an organic low-molecular compound having a molecular weight of 50-500 contained in a predetermined quantity in each sample in the sample set.


[13] A composition inference device that infers a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components, where K is an integer equal to or greater than 1. The composition inference device includes: a mass spectrometer that sequentially ionizes gas components generated by thermal desorption and/or pyrolysis while heating each of the samples in a sample set including learning samples of a number equal to or greater than K, containing at least one type of component selected from the K types of components and having compositions different from each other, a background sample not containing the component, and the inference target sample, and that observes mass spectra continuously; and an information processing device that processes the observed mass spectra. The information processing device includes: a data matrix generating part that stores the mass spectrum acquired for each heating temperature into each row to acquire two-dimensional mass spectra of the respective samples, and merges at least two or more of the two-dimensional spectra and converts the spectra into a data matrix; an NMF processing part that performs an NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix; a correction processing part that extracts a noise component in the intensity distribution matrix through analysis of the canonical correlation between the base spectrum matrix and the data matrix, and corrects the intensity distribution matrix so as to reduce the influence of the noise component, thereby generating a corrected intensity distribution matrix; a vector processing part that partitions the corrected intensity distribution matrix into a submatrix corresponding to each of the samples in the sample set, and expresses each of the samples in vector space using the submatrix as a feature vector; an end member determining part that defines a K-1 dimensional simplex including all of the feature vectors and determines K end members in the K-1 dimensional simplex; and a content ratio calculating part that calculates a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and infers a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distances. If K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex, or the learning samples contain at least one of the end members.


[14] The composition inference device according to [13], wherein the end member determining part determines the end member on the basis of a determination label given to the learning sample.


[15] The composition inference device according to [13], wherein the end member determining part determines the end member on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume.


[16] The composition inference device according to [15], wherein the end member determining part determines the end member by performing second NMF process by which the corrected intensity distribution matrix is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing an individual fragment abundance of each of the K types of components.


[17] The composition inference device according to [15], wherein the correction processing part further makes intensity correction on the intensity distribution matrix.


[18] The composition inference device according to [17], wherein the sample set further includes a calibration sample, the calibration sample contains all the K types of components and has a known composition, and the intensity correction includes normalizing the intensity distribution matrix.


[19] The composition inference device according to [17], wherein the intensity correction includes allocating at least part of the intensity distribution matrix using the product of the mass of the corresponding sample in the sample set and an environmental variable, and the environmental variable is a variable representing influence on ionization efficiency of the component during the observation.


[20] The composition inference device according to [19], wherein the mass spectrometer further includes a pressure-reducing pump, and the environmental variable is a total value of peaks in a mass spectrum of a material generated by operation of the pressure-reducing pump.


[21] A program used in a composition inference device that infers a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components, where K is an integer equal to or greater than 1. The composition inference device includes: a mass spectrometer that sequentially ionizes gas components generated by thermal desorption and/or pyrolysis while heating each of the samples in a sample set including learning samples of a number equal to or greater than K, containing at least one type of component selected from the K types of components and having compositions different from each other, a background sample not containing the component, and the inference target sample, and that observes mass spectra continuously; and an information processing device that processes the observed mass spectra. The program includes: a data matrix generating function of storing the mass spectrum acquired for each heating temperature by the mass spectrometer into each row to acquire two-dimensional mass spectra of the respective samples, and merging at least two or more of the two-dimensional spectra and converting the spectra into a data matrix; an NMF processing function of performing an NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix; a correction processing function of extracting a noise component in the intensity distribution matrix through analysis of the canonical correlation between the base spectrum matrix and the data matrix, and correcting the intensity distribution matrix so as to reduce the influence of the noise component, thereby generating a corrected intensity distribution matrix; a vector processing function of partitioning the corrected intensity distribution matrix into a submatrix corresponding to each of the samples, and expressing each of the samples in vector space using the submatrix as a feature vector; an end member determining function of defining a K-1 dimensional simplex including all of the feature vectors and determining K end members in the K-1 dimensional simplex; and a content ratio calculating function of calculating a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and inferring a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distances. If K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex, or the learning samples contain at least one of the end members.


Advantageous Effects of Invention

One of the characteristics of the method of the present invention (hereinafter also called the “present method”) is as follows: gas components generated by thermal desorption and/or pyrolysis are ionized sequentially while each sample in the sample set is heated and mass spectra are observed continuously; the mass spectrum acquired for each heating temperature is stored into each row to acquire two-dimensional mass spectra of the respective samples; these spectra are merged and converted into a data matrix; the data matrix is subjected to non-negative matrix factorization (NMF) to be factorized into the product of a normalized base spectrum matrix and the corresponding intensity distribution matrix; a noise component in the intensity distribution matrix is extracted through canonical correlation analysis and the intensity distribution matrix is corrected; a corrected intensity distribution matrix containing a feature vector of each sample is acquired; and quantitative calculation is performed thereafter.
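The NMF factorization at the core of this pipeline can be sketched as follows. This is a minimal illustration in Python using the standard multiplicative-update rules, not the claimed implementation; all matrix sizes and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-negative data matrix X (N spectra x M channels),
# built here from a known low-rank product so the sketch is checkable.
N, M, K1 = 60, 200, 4
X = rng.random((N, K1)) @ rng.random((K1, M))

# Multiplicative-update NMF: factorize X ~ C @ S.T with C, S >= 0.
C = rng.random((N, K1)) + 1e-3
S = rng.random((M, K1)) + 1e-3
for _ in range(500):
    C *= (X @ S) / (C @ (S.T @ S) + 1e-12)
    S *= (X.T @ C) / (S @ (C.T @ C) + 1e-12)

# Normalize each base spectrum (column of S) to unit length, since the
# method uses a normalized base spectrum matrix; rescale C to match.
norms = np.linalg.norm(S, axis=0)
S /= norms
C *= norms

rel_err = np.linalg.norm(X - C @ S.T) / np.linalg.norm(X)
```

The multiplicative updates keep both factors non-negative by construction, which is why they are a common starting point for spectral unmixing of this kind.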


By performing the process of correcting the intensity distribution matrix through the canonical correlation analysis, it becomes possible to perform quantitative analysis more easily and with high precision (the details thereof will be described later).


If K is an integer equal to or greater than 2 and the sample as an inference target (inference target sample) is a mixture of the K types of components, the present method makes it possible to infer the mixture ratio (composition) of the K types of components in the inference target sample.
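As a non-restrictive illustration of recovering a mixture ratio from a feature vector inside a K-1 dimensional simplex, consider K = 3. The end-member coordinates and the true ratio below are hypothetical, and barycentric coordinates are used here as one way to operationalize the distance-ratio inference stated in the claims.

```python
import numpy as np

# Hypothetical end members (vertices of a 2-simplex for K = 3) and a
# feature vector of an inference target sample inside the simplex.
E = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # rows: end members e1, e2, e3
w_true = np.array([0.5, 0.3, 0.2])  # assumed true content ratio
x = w_true @ E                      # feature vector as a convex combination

# Recover the content ratio as barycentric coordinates: solve
# sum_k w_k * e_k = x together with the constraint sum_k w_k = 1.
A = np.vstack([E.T, np.ones(3)])
b = np.append(x, 1.0)
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(w, 3))  # -> [0.5 0.3 0.2]
```

For a point inside the simplex, the recovered weights are non-negative and sum to 1, so they can be read directly as a content ratio.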


If the learning sample contains the end member, it is possible to infer the composition of the sample more accurately and more easily.


If the learning sample is given the determination label for determining the end member, the end member can be determined on the basis of information about the label, making it possible to infer the composition of the sample more easily.


If the determination label is not given and the learning sample contains the end member, the end member can be determined through vertex component analysis on the feature vector of the learning sample, allowing inference of the composition of the sample.


Even if the learning samples do not contain the end members, the composition of the sample can still be inferred by determining the end members using an algorithm by which the vertices are defined in such a manner that the K-1 dimensional simplex has a minimum volume. This approach is advantageous in that the learning samples are not required to contain the end members.


If the corrected intensity distribution matrix is a matrix after the intensity correction on the intensity distribution matrix, the accuracy of inference is increased further.


If the sample set further includes the calibration sample and the intensity correction includes normalizing the intensity distribution matrix, the composition of the sample can be inferred more precisely by a simpler procedure.


By allocating part of the intensity distribution matrix corresponding to each sample using the product of the mass of the sample and the environmental variable, influence by the ionization efficiency of each component during the observation is corrected. Thus, even in the absence of the calibration sample in the sample set, it is still possible to infer the composition of the sample with high precision.


If the environmental variable used in the foregoing allocation is a total value of peaks in a mass spectrum of a compound having a molecular weight of 50-1500 contained in a predetermined quantity in an atmosphere during the observation or an organic low-molecular compound having a molecular weight of 50-500 contained in a predetermined quantity in each sample in the sample set, it is possible to acquire more accurate composition inference result through simpler operation even in the absence of the calibration sample in the sample set.
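A minimal sketch of the allocation described above, assuming the correction amounts to dividing each sample's block of the intensity distribution matrix by the product of its mass and the environmental variable; the sample names, masses, and variable values are all hypothetical.

```python
import numpy as np

# Hypothetical per-sample blocks of the intensity distribution matrix,
# with assumed sample masses and environmental-variable values (e.g. the
# total peak intensity of a reference compound observed in each run).
C_sub = {"s1": np.array([[4.0, 2.0]]),
         "s2": np.array([[9.0, 3.0]]),
         "s3": np.array([[2.0, 1.0]])}
mass = {"s1": 2.0, "s2": 3.0, "s3": 1.0}
env = {"s1": 1.0, "s2": 1.5, "s3": 2.0}

# Divide each block by (sample mass x environmental variable) so that
# intensities become comparable across runs whose ionization
# efficiencies differ; the exact allocation rule here is an assumption.
C_corr = {k: v / (mass[k] * env[k]) for k, v in C_sub.items()}
```

The point of the division is that run-to-run differences in ionization efficiency scale every intensity in a run by a common factor, which the environmental variable is meant to track.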





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a flowchart of a composition inference method according to an embodiment of the present invention;



FIG. 2 is a conceptual view showing a K-1 dimensional simplex (two-dimensional simplex, a triangle) while K is 3;



FIG. 3 is a hardware configuration view of a composition inference device according to the embodiment of the present invention;



FIG. 4 is a functional block diagram of the composition inference device according to the embodiment of the present invention;



FIG. 5 is a view showing an original spectrum (A) of “MSE_calib” and a spectrum (B) reconstructed after the process;



FIG. 6 shows Cδ(MSE_Calib) as part of a matrix C after NMF factorization and S^T resulting from removing a background spectrum and an impurity spectrum using the CCA-Filter;



FIG. 7 shows composition analysis result about a ternary system composed of M (poly(methyl methacrylate)), S (polystyrene), and E (poly(ethyl methacrylate));



FIG. 8 shows composition inference result obtained by eliminating orthogonal constraint from an NMF condition;



FIG. 9 shows the composition inference result obtained by eliminating only merging S from a condition for NMF optimization;



FIG. 10 shows the composition inference result obtained by eliminating the CCA-filter (Comparative Example);



FIG. 11 is a picture showing an original spectrum of “MSE_calib” (raw of MSE_calib in the upper section) and a spectrum reconstructed after the CCA-filter is eliminated and the process is performed;



FIG. 12 shows composition inference result obtained without making correction using an environmental variable;



FIG. 13 is a conceptual view showing a K-1 dimensional simplex (two-dimensional simplex, a triangle) while K is 3;



FIG. 14 is an explanatory view of a mechanism of a method of determining an end member according to a second embodiment of the present invention;



FIG. 15 is a view showing inference result according to an Example 2; and



FIG. 16 is a view showing inference result according to an Example 3.





DESCRIPTION OF EMBODIMENTS

The present invention will be described below in detail.


While the constituting elements will be described below on the basis of representative embodiments of the present invention, the present invention is not limited to these embodiments.


In the present description, a numerical value range expressed using “-” means a range including numerical values written before and after “-” as a lower limit value and an upper limit value.


DEFINITION OF TERMS

Terms used in the present description will be explained. A term that is used but not explained below has the meaning generally understood by persons skilled in the art.


In the present description, a “component” means a material forming each sample, and an inference target sample, a learning sample, and a calibration sample each mean a mixture composed of a combination of components.


The “inference target sample” means a sample to be subjected to composition inference, and the number of components and the content of each component therein may be unknown.


The “learning samples” mean samples required for determining the K end members in the K-1 dimensional simplex described later. These samples contain at least one type of component selected from the K types of components and have compositions different from each other. The number of the contained components is not particularly limited but may be 1-K types. The learning samples may include a sample having the same composition as the inference target sample. Specifically, while the inference target sample and a learning sample may be the same, the learning samples differ from each other.


The “end member” means a vector corresponding to a vertex of the K-1 dimensional simplex. This end member corresponds to a feature vector of a sample composed of only one of the K types of components (learning sample or virtual sample).


A “background sample” means a sample, different from the inference target sample and the learning samples, that contains none of the foregoing components.


The “calibration sample” means a sample, different from the inference target sample, the learning samples, and the background sample, that contains all the K types of components and has a known composition.


First Embodiment of Present Method


FIG. 1 is a flowchart of a composition inference method (hereinafter also called a “present method”) according to an embodiment of the present invention.


First, an inference target sample, learning samples of a number equal to or greater than K having compositions different from each other, and a background sample are prepared (these samples will collectively be called a “sample set”) (step S101). The sample set may further include a calibration sample.


The inference target sample contains at least one type of component among the K types of components. If the inference target sample contains one type of component, the purity of this component can be inferred by the present method. In this case, the inference target sample contains this “component” and impurities other than this component.


If K is equal to or greater than 2, the inference target sample is a mixture of components. As long as K is equal to or greater than 1 in the present method, an upper limit thereof is not particularly limited. Typically, K is preferably equal to or less than 1000.


The number of the learning samples relates to the number K of components and is required to be equal to or greater than K. The number of the learning samples is not particularly limited as long as it is equal to or greater than K. Meanwhile, with a larger number of the learning samples, a vertex of the K-1 dimensional simplex described later is likely to be determined with higher precision. This eventually facilitates acquisition of more accurate composition inference result.


Specifically, the ratio of the number of the learning samples to the number K of components (the number of learning samples/the number of components) is equal to or greater than 1, preferably, equal to or greater than 2, more preferably, equal to or greater than 3, still more preferably, equal to or greater than 5, particularly preferably, equal to or greater than 10. Meanwhile, while an upper limit of the ratio is not particularly limited, it is generally preferably equal to or less than 1000.


The learning samples have compositions different from each other. The composition is determined by the type of each contained component, the number of the components, and the content of each component. Samples having the same composition are the same in terms of the type of each contained component, the number of components, and the content of each component. If any one of these elements differs between samples, the samples have “compositions different from each other.”


According to the present method, as long as the learning samples have compositions different from each other, information other than that difference (specifically, the type of each contained component, the number of the components, and the content of each component) is not necessary for composition inference of the inference target sample. Namely, the compositions of the learning samples may be unknown.


A difference between the inference target sample and the learning sample is that the compositions of the learning samples should differ from each other. The learning samples may include a learning sample having the same composition as the inference target sample.


The learning sample may contain an end member. Specifically, the learning sample may contain a member composed of one type of component (single-ingredient, pure) selected from the K types of components. If the learning sample contains the end member, high-precision composition inference can be made more easily.


In this case, the learning sample serving as the end member may be given a determination label. The learning sample preferably carries the determination label, as it allows the end member to be determined more easily.


Next, each sample in the sample set is heated to sequentially ionize the gases generated by thermal desorption and/or pyrolysis, thereby acquiring mass spectra (step S102).


While the method of acquiring the mass spectrum is not particularly limited, a method of performing mass spectrometry without preparatory processing on a specimen in the ambient condition is preferred. A device called DART-MS, which combines an ion source called a DART (direct analysis in real time) ion source with a mass spectrometer instrument, is known as a device used for the ionization and mass spectrometry described above.


The mass spectrometer instrument is not particularly limited but is preferably an instrument allowing precise mass spectrometry, and may be of any type, including a quadrupole type and a time-of-flight (TOF) type.


While the conditions for acquisition of the mass spectrum are not particularly limited, according to a procedure given as a non-restrictive example, all the samples in the sample set are sequentially heated at a temperature increasing rate of 50° C./min, and helium ions are injected at a rate of 50 shots/min into the pyrolysis gas generated in a temperature range of 50-550° C. to ionize the gas, thereby acquiring a two-dimensional MS spectrum having m/z along a horizontal axis and temperature along a vertical axis.


Next, the mass spectrum acquired at each heating temperature is stored in each row to acquire a two-dimensional mass spectrum of each sample, and at least two or more of these two-dimensional mass spectra are merged and converted to a data matrix (step S103).


A method of acquiring the data matrix will be described in detail.


In step S102, mass spectra are acquired continuously on the basis of each predetermined temperature increasing interval. While these mass spectra can be used as they are in generating the data matrix, they may be used after being averaged on the basis of each predetermined temperature increasing range. Averaging the mass spectra on the basis of each predetermined temperature increasing range and merging the mass spectra into one allows compression of a data volume. This temperature increasing range is approximately 10-30° C., for example.


In each spectrum, a peak intensity may be normalized. For example, this normalization can be realized by a method of normalizing peak intensities in such a manner that the sum of squares of the peak intensities becomes 1.
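The temperature binning and sum-of-squares normalization described above can be sketched as follows; the scan dimensions, the 20° C. bin width (within the 10-30° C. range mentioned), and the random intensities are hypothetical.

```python
import numpy as np

# Hypothetical raw scan: one mass spectrum per temperature step
# (rows: temperatures at 10 C steps from 50 C, columns: m/z channels).
rng = np.random.default_rng(1)
spectra = rng.random((50, 6))
temps = np.arange(50, 550, 10)

# Average the scans within each 20 C bin to compress the data volume.
bin_width = 20
bins = (temps - temps[0]) // bin_width
binned = np.array([spectra[bins == b].mean(axis=0) for b in np.unique(bins)])

# Normalize each averaged spectrum so that the sum of squares of its
# peak intensities equals 1.
binned /= np.linalg.norm(binned, axis=1, keepdims=True)
print(binned.shape)  # -> (25, 6)
```

Dividing each row by its L2 norm is exactly the normalization in which the sum of squared peak intensities becomes 1.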


As a result, by making one measurement on one sample, it becomes possible to acquire a predetermined number of mass spectra, one at each heating temperature (or each heating temperature range merged as described above); this number of mass spectra differs depending on how the merging is performed and may be 20, for example.


By storing each of these mass spectra in a row, one row per heating temperature, a two-dimensional mass spectrum is acquired for each sample.


After the two-dimensional mass spectra are acquired for the respective samples in this way, at least two or more of these spectra are merged and converted to a data matrix X.


As long as the number of the two-dimensional mass spectra used in generating the data matrix X is equal to or greater than 2, it is not particularly limited. Meanwhile, it is preferable that two-dimensional mass spectra of all the samples in the sample set be used.


If one sample is to be subjected to measurements twice or more, some or all of two-dimensional mass spectra acquired by the two or more measurements may be used in generating the data matrix X.
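The data matrix construction of steps S102-S103 can be sketched in Python as follows. This is a non-restrictive illustration: the function name, the use of equal-width temperature bins, and the bin count are assumptions, not part of the present method.

```python
import numpy as np

def build_data_matrix(raw_spectra_per_sample, n_bins=20):
    """Bin each sample's continuously acquired mass spectra into temperature
    ranges, normalize each binned spectrum so that the sum of squares of its
    peak intensities is 1, and stack all samples into one data matrix X."""
    rows = []
    for spectra in raw_spectra_per_sample:          # spectra: (n_scans, n_channels)
        spectra = np.asarray(spectra, dtype=float)
        # Average the scans falling in each of n_bins equal temperature ranges.
        bins = np.array_split(spectra, n_bins, axis=0)
        binned = np.vstack([b.mean(axis=0) for b in bins])
        # Normalize each row so that the sum of squared intensities is 1.
        norms = np.linalg.norm(binned, axis=1, keepdims=True)
        rows.append(binned / np.where(norms == 0, 1.0, norms))
    return np.vstack(rows)                          # X: (n_samples*n_bins, n_channels)
```

With, say, two samples of 100 scans and 50 channels, this yields a 40×50 non-negative data matrix with unit-norm rows, matching the format assumed by the NMF in step S104.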


Next, the acquired data matrix is subjected to non-negative matrix factorization (NMF) to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix (step S104).


The non-negative matrix factorization will be described.


Under the assumption that linear additive property is established between an intensity of a spectrum and a component composition of a sample, a data matrix of the spectrum can generally be subjected to matrix decomposition as









X = C S^T    [Formula 1]







Here, the data matrix X is an N×M data matrix obtained by piling N spectra, each having M channels, S is a base spectrum matrix, and C is the corresponding intensity distribution matrix. Assuming that the number of all components contained in the data set is K1, C is an N×K1 matrix and S is an M×K1 matrix. As noise is present in actual measurement,









X = C S^T + N    [Formula 2]







by taking an error matrix N into consideration in this way, the matrices C and S are defined so as to minimize N. The superscript T on a matrix denotes its transpose. Here, on the assumption of the existence of Gaussian noise,













\min_{C, S} \left\| X - C S^T \right\|_F^2    [Formula 3]

is formulated as an optimization problem in C and S. Furthermore,














\| \cdot \|_F    [Formula 4]

represents the matrix Frobenius norm,














\| \cdot \|_1    [Formula 5]

represents the L1 norm of a vector, and













\| \cdot \|_2    [Formula 6]

represents the L2 norm of the vector.





As C and S in the foregoing formulas do not have unique solutions, a constraint that the spectra should physically satisfy is imposed. On the basis of the obvious requirement that a composition ratio and a spectrum intensity must take non-negative values, the NMF is formulated as














\min_{C \geq 0,\, S \geq 0} \left\| X - C S^T \right\|_F^2 .    [Formula 7]







However, as the non-negativity condition alone does not define a unique solution, it is necessary to apply an additional constraint appropriate to the measuring device. The constraints used for the NMF in the present step are described below.


(NMF Update Formula)

Preferably, in the present step, the NMF process solves the foregoing formula (Formula 7) as a convex optimization problem with respect to each column of C and S, making updates alternately between C and S in a column-vector-wise manner using hierarchical alternating least squares (HALS), into which the constraints mentioned below can be incorporated easily.


According to the present method, mass spectrometry is performed on gas generated by thermal desorption and/or pyrolysis while each sample is heated. Hence, the number K1 of components after the pyrolysis is hard to acquire as prior information and should accordingly be inferred from the data. As a method of this inference, automatic relevance determination (ARD) is preferred. This method assumes sparseness in the intensity distribution matrix C. This assumption can be deemed natural considering that pyrolysis of one component occurs only at a particular temperature of a sample containing this component. By doing so, a balance is obtained between the tendency to minimize the number of components and the tendency to minimize the foregoing formula (Formula 7), making it possible to determine an optimum number of components.


(Orthogonal Constraint)

A mass spectrometer generally has a resolution of approximately 0.1-0.001 m/z. It is therefore rare for different gas components to coincide in m/z down to the decimal places and share the same channel. For this reason, it is reasonable to assume a certain degree of orthogonality between the column vectors of the base spectrum matrix S.


This not only achieves the effect of resulting in a favorable local solution but is also expected to achieve the effect of preventing concentration of fitting only on a high intensity peak. On the other hand, complete orthogonality and non-negativity result in excessively severe constraint. For this reason, it is preferable that orthogonal constraint be imposed softly by adding a penalty term having a Lagrange's undetermined multiplier (soft orthogonal constraint).


(Component Merging)

If two or more base spectra nearly equal to each other are acquired during the course of updating C and S, a correct number of components can be acquired by merging these spectra into one. Applying the HALS updates to the NMF and writing the k-th column vector of S as S:k, in the presence of k<m satisfying S:k^T S:m > threshold, merging is conducted as follows (Merging C):










C_{:k} \leftarrow C_{:k} + C_{:m}, \quad C_{:m} \leftarrow 0    [Formula 8]




Here, threshold is a hyperparameter given in the range 0.9-0.99. Preferably, merging is performed on similar columns of S in the same way. Still preferably, if C has a plurality of column vectors parallel to each other, the corresponding column vectors in S are superimposed according to the weights in C. Specifically,











C_{:k}^T C_{:m} > \mathrm{threshold} \cdot \| C_{:k} \| \, \| C_{:m} \|    [Formula 9]

In the presence of k ≠ m satisfying this formula, merging (Merging S) is performed as













S_{:k} \leftarrow S_{:k} + \frac{\| C_{:m} \|_1}{\| C_{:k} \|_1} S_{:m}, \quad s = \| S_{:k} \|, \quad S_{:k} \leftarrow \frac{S_{:k}}{s}, \quad C_{:k} \leftarrow s\, C_{:k}, \quad C_{:m} \leftarrow 0 .    [Formula 10]




This is expected to achieve the effect of merging isotopic peaks or oligomer peaks into one base spectrum.
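The merging rules of Formulas 8-10 can be illustrated with the following sketch. The function name and the default threshold of 0.95 are assumptions, and the columns of S are assumed to be L2-normalized, consistent with the normalization used in the update formulas.

```python
import numpy as np

def merge_components(C, S, threshold=0.95):
    """Merge near-duplicate components: near-parallel base spectra (columns
    of S) share one intensity column (Formula 8); near-parallel intensity
    columns (Formula 9) have their spectra superimposed with L1-norm weights
    and re-normalized (Formula 10). Merged-away columns are zeroed."""
    C, S = C.copy(), S.copy()
    K = S.shape[1]
    for k in range(K - 1):
        for m in range(k + 1, K):
            # Merging C: nearly identical base spectra.
            if S[:, k] @ S[:, m] > threshold:
                C[:, k] += C[:, m]
                C[:, m] = 0.0
            # Merging S: nearly parallel intensity distributions.
            nk, nm = np.linalg.norm(C[:, k]), np.linalg.norm(C[:, m])
            if nk > 0 and nm > 0 and C[:, k] @ C[:, m] > threshold * nk * nm:
                S[:, k] += (np.abs(C[:, m]).sum() / np.abs(C[:, k]).sum()) * S[:, m]
                s = np.linalg.norm(S[:, k])
                S[:, k] /= s
                C[:, k] *= s               # keep C S^T unchanged in scale
                C[:, m] = 0.0
    return C, S
```

Zeroed columns can afterwards be dropped to obtain the reduced component number.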


(Generation of Update Formula)

The update formula obtained by the foregoing method will be described.


A totally simultaneous distribution of a model is










p(X, C, S) = p(X \mid C, S, \sigma^2)\, p(C \mid \lambda)\, p(S)\, p(\lambda \mid a, b) ,    [Formula 11]

where σ² is a variance parameter.












\lambda \in \mathbb{R}^K    [Formula 12]







An element λk in this formula is a parameter following an inverse gamma distribution representing the significance of a component k in data. As noise is assumed to have a Gaussian distribution,










p(X \mid C, S) = \prod_{i=1}^{N} \prod_{j=1}^{M} \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left\{ - \frac{\left( X_{ij} - [C S^T]_{ij} \right)^2}{2 \sigma^2} \right\} .    [Formula 13]







Assuming that K is a sufficiently large number of components (about 50) input as an initial value, the exponential distribution followed by C is










p(C \mid \lambda) = \prod_{i=1}^{N} \prod_{k=1}^{K} p(C_{ik} \mid \lambda_k), \quad p(C_{ik} \mid \lambda_k) = \frac{1}{\lambda_k} \exp\left( - \frac{C_{ik}}{\lambda_k} \right) .    [Formula 14]






An inverse gamma distribution to be followed by λk is











p(\lambda \mid a, b) = \prod_{k=1}^{K} p(\lambda_k \mid a, b) = \frac{b^a}{\Gamma(a)} \lambda_k^{-(a+1)} \exp\left( - \frac{b}{\lambda_k} \right), \quad k = 1, \ldots, K .    [Formula 15]







Here, a is a hyperparameter with which a−1 becomes an extremely small positive number, and b is a parameter defined as follows using the average μX of the data matrix:









b = \frac{\mu_X (a - 1)}{\sqrt{N K}} .    [Formula 16]







Furthermore, the expected value of an element in C is










E[C_{ik}] = \frac{\mu_X}{\sqrt{N K}} .    [Formula 17]







A negative log likelihood of p(X, C, S) is










L(X, C, S, \lambda) = - \log \left[ p(X \mid C, S)\, p(C \mid \lambda)\, p(\lambda \mid a, b) \right] = \frac{1}{2 \sigma^2} \left\| X - C S^T \right\|_F^2 + \frac{M N}{2} \log 2 \pi \sigma^2 + (N + a + 1) \sum_{k=1}^{K} \log \lambda_k + \sum_{k=1}^{K} \frac{1}{\lambda_k} \left( b + \sum_{i=1}^{N} C_{ik} \right) + K \left( \log \Gamma(a) - a \log b \right) .    [Formula 18]







Here, the variance parameter σ2 under the assumption of a Gaussian distribution is given as










\sigma^2 = \frac{1}{M N} \left\| X - C S^T \right\|_F^2 .    [Formula 19]







Furthermore, as the negative log likelihood is a convex function of λ,










\lambda_k = \frac{b + \sum_{i=1}^{N} C_{ik}}{N + a + 1}    [Formula 20]

can be obtained.





Next, an update formula for S will be described. Extracting only a term J(S:k) related to S:k from a negative log likelihood function is considered first.










X^{(k)} = X - C S^T + C_{:k} S_{:k}^T    [Formula 21]







Using a contribution of a component k to X written in this way,













\left\| X - C S^T \right\|_F^2 = \left\| X^{(k)} - C_{:k} S_{:k}^T \right\|_F^2 = \mathrm{Tr}\left[ \left( X^{(k)} - C_{:k} S_{:k}^T \right)^T \left( X^{(k)} - C_{:k} S_{:k}^T \right) \right]    [Formula 22]

can be defined. Then, by removing a constant term,













J(S_{:k}) = \frac{1}{2 \sigma^2} \left[ -2\, S_{:k}^T X^{(k) T} C_{:k} + \left\| C_{:k} \right\|^2 \left\| S_{:k} \right\|^2 \right] .    [Formula 23]







This formula was derived on the basis of Shiga, M. et al., Sparse modeling of EELS and EDX spectral imaging data by nonnegative matrix factorization, Ultramicroscopy 170, 43-59 (2016). Here, a penalty term determined by the foregoing orthogonal constraint on S is incorporated:










J(S_{:k}) = \frac{1}{2 \sigma^2} \left[ -2\, S_{:k}^T X^{(k) T} C_{:k} + \left\| C_{:k} \right\|^2 \left\| S_{:k} \right\|^2 \right] + \xi_k\, s^{(k) T} S_{:k} .    [Formula 24]







Here, ξk is a Lagrange's undetermined multiplier, and










s^{(k)} = \sum_{j \neq k}^{K} S_{:j} .    [Formula 25]







Through differentiation with respect to S:k,














\frac{\partial J(S_{:k})}{\partial S_{:k}} = \frac{1}{\sigma^2} \left[ - X^{(k) T} C_{:k} + \left\| C_{:k} \right\|^2 S_{:k} \right] + \xi_k\, s^{(k)} ,    [Formula 26]

\frac{\partial J(S_{:k})}{\partial S_{:k}} = 0 .    [Formula 27]







By combining these formulas, the update formula for S:k is obtained as










S_{:k} \leftarrow \frac{ - \sigma^2 \xi_k\, s^{(k)} + X^{(k) T} C_{:k} }{ \left\| C_{:k} \right\|^2 }    [Formula 28]







To determine ξk so as to impose the complete orthogonality condition,











s^{(k) T} S_{:k} = 0 .    [Formula 29]

On the basis of this formula, ξk can be estimated as













\xi_k = \frac{ s^{(k) T} X^{(k) T} C_{:k} }{ \sigma^2\, s^{(k) T} s^{(k)} } .    [Formula 30]







Regarding the soft orthogonality condition, the degree of orthogonality can be adjusted freely between complete orthogonality and no orthogonality constraint by multiplying ξk by a hyperparameter w ∈ [0, 1]. The final update formula is










S_{:k} \leftarrow \frac{ - \sigma^2 w\, \xi_k\, s^{(k)} + X^{(k) T} C_{:k} }{ \left\| C_{:k} \right\|^2 }, \quad S_{:k} \leftarrow \frac{ S_{:k} }{ \left\| S_{:k} \right\| } .    [Formula 31]









An update formula for C will be described. Extracting only a term J(C:k) related to C:k from a negative log likelihood function is considered first.










J(C_{:k}) = \frac{1}{2 \sigma^2} \left[ -2\, S_{:k}^T X^{(k) T} C_{:k} + \left\| C_{:k} \right\|^2 \left\| S_{:k} \right\|^2 \right] + \frac{1}{\lambda_k} \sum_{i=1}^{N} C_{ik}    [Formula 32]













J

(

C

:
k


)





C

:
k




=



1

σ
2


[



X

(
k
)




S

:
k



+





S

:
k




2



C

:
k




]

+

1

λ
k


















S

:
k




2

=
1




[

Formula


33

]







With the normalization defined in this way, on the basis of the following:













\frac{\partial J(C_{:k})}{\partial C_{:k}} = 0 ,    [Formula 34]

the update formula is written as













C_{:k} \leftarrow X^{(k)} S_{:k} - \frac{\sigma^2}{\lambda_k} .    [Formula 35]







A specific algorithm is as follows:












[Formula 36]

Algorithm 1: Pseudo-code of ARD-SO-NMF for TDMS

Input: data X ∈ ℝ₊^{N×M}, orthogonal constraint weight w, initial
component number K, small positive number a, iteration number L,
threshold ∈ [0.9, 0.99]
Output: optimized component number K*, intensity distribution matrix
C ∈ ℝ₊^{N×K*}, base spectra S ∈ ℝ₊^{M×K*}, component importance λ ∈ ℝ₊^{K*}

// Initialization
set C, b by Formulas 16-17
set S by randomly selecting row vectors from X
// Main loop
for l from 1 to L:
  for k from 1 to K:
    update S:k by Formula 31
    S:k ← (S:k + |S:k|)/2
  end for
  for k from 1 to K:
    update C:k by Formula 35
    C:k ← (C:k + |C:k|)/2
  end for
  for k from 1 to K − 1:
    for m from k + 1 to K:
      // Merging C
      if S:kᵀ S:m > threshold:
        C:k ← C:k + C:m
        C:m ← 0
      end if
      // Merging S
      if C:kᵀ C:m > threshold ‖C:k‖ ‖C:m‖:
        S:k ← S:k + (‖C:m‖₁ / ‖C:k‖₁) S:m
        s ← ‖S:k‖,  S:k ← S:k / s
        C:k ← s C:k,  C:m ← 0
      end if
    end for
  end for
  update λ by Formula 20
  update σ² by Formula 19
end for
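A compressed, runnable illustration of the main loop of Algorithm 1 is given below. The merging step is omitted, the S and C updates are interleaved per component, and a short burn-in without the ARD shrinkage term is added for numerical robustness; all three are implementation conveniences assumed here, not part of the algorithm as stated.

```python
import numpy as np

def ard_so_nmf(X, K=10, w=0.1, a=1.001, iters=200, burn_in=10, rng=None):
    """HALS column-wise updates of S (Formula 31) and C (Formula 35), with
    the lambda update (Formula 20) and the sigma^2 update (Formula 19)."""
    rng = np.random.default_rng(rng)
    N, M = X.shape
    mu = X.mean()
    b = mu * (a - 1.0) / np.sqrt(N * K)                   # Formula 16
    C = np.full((N, K), mu / np.sqrt(N * K))              # near E[C_ik] (Formula 17)
    S = X[rng.choice(N, size=K, replace=False)].T.copy()  # rows of X as spectra
    S /= np.linalg.norm(S, axis=0, keepdims=True) + 1e-12
    lam = np.full(K, b / (a - 1.0))
    sigma2 = np.mean((X - C @ S.T) ** 2)
    for it in range(iters):
        R = X - C @ S.T                                   # running residual
        for k in range(K):
            Xk = R + np.outer(C[:, k], S[:, k])           # Formula 21
            ck2 = C[:, k] @ C[:, k]
            if ck2 == 0.0:
                continue                                  # component pruned by ARD
            sk = S.sum(axis=1) - S[:, k]                  # Formula 25
            xi = sk @ (Xk.T @ C[:, k]) / (sigma2 * (sk @ sk) + 1e-12)  # Formula 30
            S[:, k] = (-sigma2 * w * xi * sk + Xk.T @ C[:, k]) / ck2   # Formula 31
            S[:, k] = np.maximum(S[:, k], 0.0)            # non-negativity clamp
            nrm = np.linalg.norm(S[:, k])
            if nrm > 0.0:
                S[:, k] /= nrm
            shrink = sigma2 / lam[k] if it >= burn_in else 0.0
            C[:, k] = np.maximum(Xk @ S[:, k] - shrink, 0.0)           # Formula 35
            R = Xk - np.outer(C[:, k], S[:, k])
        lam = (b + C.sum(axis=0)) / (N + a + 1.0)         # Formula 20
        sigma2 = np.mean((X - C @ S.T) ** 2)              # Formula 19
    return C, S, lam
```

Components whose intensity columns are driven to zero by the shrinkage term are effectively pruned, which is how the ARD determines the component number.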









Referring back to FIG. 1, the base spectrum matrix and the data matrix are subjected to canonical correlation analysis to acquire a corrected intensity distribution matrix (step S105).


More specifically, a noise component in the intensity distribution matrix is extracted through analysis on canonical correlation between the base spectrum matrix and the data matrix and the intensity distribution matrix is corrected so as to reduce influence by the noise component, thereby acquiring the corrected intensity distribution matrix.


The NMF is low-rank approximation of a data matrix. Thus, even if the component k is not actually present in an i-th spectrum, Cik>0 is established in a case where assuming the presence of the component k provides better approximation in the sense of a least square. In many cases, such Cik is considerably small and often does not cause any problem in the NMF factorization.


In detecting a minor component, however, a distinction is preferably made between Cjk>0 for a component actually present in tiny quantity in the j-th spectrum and Cik>0 arising as an NMF artifact, and 0 is preferably substituted into Cik as a ghost peak. The reason is that removing, as noise, a spurious peak derived from the artifact makes it possible to provide inference results with higher precision.


In the present step, canonical correlation analysis is employed to solve the foregoing issue. The present step is one of features of the present method and was named canonical correlation analysis (CCA) filter by the present inventors.


Conceptually, the CCA filter is to make sample-wise scanning to see whether each component in a base spectrum ST output from the NMF is actually contained in original data, and to delete the component from this spectrum of a sample if a similar peak pattern is not observed in the original data. If spectra generated from an L-th sample are stored from an l-th row to an l′-th row in the data matrix X, a submatrix obtained by extracting this part is written as Xδ(L). Specifically, δ(L)=(l, l+1, . . . , l′−2, l′−1).


The CCA is conventionally known as a technique of examining correlation between two-system data segments and is formulated as follows.


A matrix composed of N p-dimensional vectors is written as









Y \in \mathbb{R}^{p \times N} .    [Formula 37]







A matrix composed of N q-dimensional vectors is written as









Z \in \mathbb{R}^{q \times N} .    [Formula 38]







Then, the following formula is generated:









\rho = \frac{ a^T V_{yz}\, b }{ \sqrt{ a^T V_{yy}\, a } \sqrt{ b^T V_{zz}\, b } }, \quad \max_{a, b} \rho .    [Formula 39]




Here,











V_{YY} = Y Y^T / N, \quad V_{ZZ} = Z Z^T / N, \quad V_{YZ} = Y Z^T / N    [Formula 40]

is established.





Specifically, the problem to be solved by the CCA is to project Y and Z onto one axis and find a solution (aᵀ, bᵀ)ᵀ that maximizes the correlation coefficient ρ between them.


Here, (aᵀ, bᵀ)ᵀ and the correlation coefficient ρ are given as a characteristic vector and a characteristic value of the generalized eigenvalue problem in the following formula, and all the characteristic vectors (aᵀ, bᵀ)ᵀ are written collectively as (Aᵀ, Bᵀ)ᵀ. Then, the following formula is generated:












\begin{pmatrix} O & V_{yz} \\ V_{yz}^T & O \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} V_{yy} & O \\ O & V_{zz} \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} \begin{pmatrix} \rho_1 & & \\ & \ddots & \\ & & \rho_{p+q} \end{pmatrix}, \quad \rho_1 \geq \cdots \geq \rho_{p+q} .    [Formula 41]







Here, it is assumed that the presence or absence of the base spectrum S:k of one component k in a sample L is to be judged. First, a pertinent partial data matrix Xδ(L) is extracted from the data matrix X. Meanwhile, ST is divided into two matrices including a matrix of a group of spectra SsimT all similar to S:kT and a matrix of a group of spectra SoutT largely different from S:kT.









Y = \begin{pmatrix} S_{\mathrm{out}}^T \\ X_{\delta(L)} \end{pmatrix} .    [Formula 42]







Then, the CCA is performed in this way. Here, all characteristic values satisfying ρ > t1 are extracted with respect to a certain value t1 ∈ [0, 1]. If the first component of a corresponding characteristic vector b satisfies b1 > t2·bmax with respect to the maximum component bmax of this vector, it is determined that the base spectrum S:k of the component k is contained in the sample L. If not, the corresponding Cδ(L)k is determined to be 0.


Here, as t1 and t2 are hyperparameters and are strongly influenced by the fitting precision of the NMF, specifically by the threshold as an NMF hyperparameter, careful examination is required. Empirically, t1 = 0.9 × threshold and t2 ∈ [0.1, 0.3] are found to produce favorable results.
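The CCA of Formulas 39-41 can be sketched as follows. The whitened-cross-covariance SVD used here is a standard equivalent of the generalized eigenvalue problem of Formula 41, the second-moment matrices are uncentered as in Formula 40, and the small regularization eps is an assumption added for numerical stability.

```python
import numpy as np

def canonical_correlations(Y, Z, eps=1e-9):
    """Return the canonical correlations rho (descending) and the weight
    matrices A (Y side) and B (Z side) whose columns are the a and b
    vectors of Formula 39, for Y (p x N) and Z (q x N)."""
    N = Y.shape[1]
    Vyy = Y @ Y.T / N + eps * np.eye(Y.shape[0])   # Formula 40, regularized
    Vzz = Z @ Z.T / N + eps * np.eye(Z.shape[0])
    Vyz = Y @ Z.T / N

    def inv_sqrt(V):
        # Inverse matrix square root of a symmetric positive definite matrix.
        vals, vecs = np.linalg.eigh(V)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    Wy, Wz = inv_sqrt(Vyy), inv_sqrt(Vzz)
    U, rho, Vt = np.linalg.svd(Wy @ Vyz @ Wz)
    A = Wy @ U            # columns: a vectors
    B = Wz @ Vt.T         # columns: b vectors
    return rho, A, B
```

In the CCA filter, Y is the stacked matrix of Formula 42 and the leading ρ and the corresponding b vector are then tested against the thresholds t1 and t2.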


In the example given above, the CCA filter is applied to the NMF artifact (noise). However, the foregoing noise removal is applicable not only to the NMF artifact but also to removal of background or contamination.


Specifically, examples of noise removable by the CCA filter include: (1) an artifact from the NMF (so-called ghost peak); (2) a peak derived from a minor chemical material mixed from an environment around the device (background); and (3) a peak derived from a material contained in a sample and having a risk of distorting the substance of the sample if incorporated in analysis (contamination).


The sample set handled by the present method contains a background sample. Thus, by defining the following formula:










Y = \begin{pmatrix} S_{\mathrm{out}}^T \\ X_{\delta(\mathrm{BG})} \end{pmatrix} ,    [Formula 43]

it becomes possible to judge whether the base spectrum S:k is contained in a background file. The presence of the base spectrum S:k therein means that the base spectrum S:k is system noise derived from the environment. Thus, this spectrum can be removed from the sample by setting C:k = 0.





If the background sample further contains a contamination material, noise derived from the contamination can also be removed by following the same procedure as that described above. While the contamination material is not particularly limited, it may be a solvent used for preparation or loading of a sample, for example.


The present step may further include a step of correcting an intensity in an intensity distribution matrix.


For example, the DART-MS ionizes water or ammonia in the air primarily using helium ions, thereby ionizing a sample secondarily. For this reason, a peak intensity may be changed by an atmospheric condition such as a humidity. Furthermore, the mass of a sample to be loaded generally differs between samples and this also influences a peak intensity. Moreover, the ionization efficiency of each component contained in a sample generally differs between samples and this also influences a peak intensity.


By correcting the intensity in the intensity distribution matrix, it becomes possible to obtain more accurate quantitative result by alleviating these influences.


If all samples contain all the K types of components and if these samples include a calibration sample having a known composition, the intensity correction can be made by normalizing the intensity distribution matrix.


Normalizing the intensity distribution matrix means normalizing the matrix C for each sample; more specifically, it means scaling so that all elements add up to a total of 1 for each sample.


Partitioning the matrix C sample-wise and writing the matrix C derived from the sample L as Cδ(L),










\tilde{C}_{\delta(L)} = \frac{ C_{\delta(L)} }{ \sum_{i,j} \left( C_{\delta(L)} \right)_{ij} }, \quad \forall L \ \text{(for all samples)} .    [Formula 44]







As information about a pyrolytic temperature is unnecessary in composition analysis,










\tilde{C}_{\delta(L)}    [Formula 45]

is added in a vertical direction to obtain the following vector:















c_L \in \mathbb{R}^K .    [Formula 46]







This vector is used as a feature quantity vector of the sample L. Next, feature vectors of respective p single component samples measured as end members are aligned to form an end member matrix:









P \in \mathbb{R}^{K \times p} .    [Formula 47]







A feature vector of a mixed sample can be written as










c_L    [Formula 48]

as a linear sum of the feature vectors of the end members. Thus, using a coefficient vector thereof as follows:














r_L \in \mathbb{R}^p ,    [Formula 49]

the following formula can be defined:













c_L = P r_L, \quad \text{subject to} \quad r_L \geq 0, \ \left\| r_L \right\|_1 = 1 .    [Formula 50]







As one calibration sample necessarily gives one or more pyrolysis products, P is a vertically long matrix of full column rank. For this reason, an exact solution generally does not exist, and an approximate solution in the least-squares sense can be obtained by quadratic programming.
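The simplex-constrained least-squares problem of Formula 50 can be sketched with projected gradient descent, a simple stand-in for a quadratic programming solver; the function names, iteration count, and step size are assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {r : r >= 0, sum(r) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]
    return np.maximum(v - css[rho - 1] / rho, 0.0)

def infer_ratios(P, c, iters=5000):
    """Least-squares solution of c ≈ P r subject to r >= 0, ||r||_1 = 1
    (Formula 50), via projected gradient descent on the simplex."""
    p = P.shape[1]
    r = np.full(p, 1.0 / p)
    lr = 1.0 / (np.linalg.norm(P, 2) ** 2 + 1e-12)   # 1/Lipschitz constant
    for _ in range(iters):
        r = project_simplex(r - lr * (P.T @ (P @ r - c)))
    return r
```

The projection step keeps every iterate a valid composition vector, so the returned r is directly interpretable as content ratios.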


In the mass spectrum acquired by the present method, ionization efficiency differs between compounds, pyrolysis efficiency also differs therebetween, and a compound quantity and a peak intensity are not proportionate to each other.


However, a peak intensity ratio including the ionization efficiency of the component k acquired by the NMF is already reflected in S:kT and an intensity ratio between spectra is reflected in C:k.


By the provision of such a hierarchical data configuration formed by the NMF, as long as consideration is given to a peak intensity ratio in an end member, it is possible to make more accurate quantitative composition analysis even without the need of determining unknown ionization efficiency.


Meanwhile, if the sample set does not include a calibration sample, at least part of (preferably, all of) the intensity distribution matrix is allocated using the product of the mass of the corresponding sample and an environmental variable. By doing so, it becomes possible to make more accurate quantitative analysis.


The environmental variable mentioned herein is a variable representing influence on ionization efficiency of each component during observation.


If a component has high ionization efficiency, a peak intensity is increased. If a component has low ionization efficiency, a peak intensity is reduced.


Regarding the mass of a sample, a peak intensity is generally increased with increase in the mass and a peak intensity is generally reduced with reduction in the mass.


If the sample set does not include a calibration sample, the matrix C is corrected as follows so as to make intensity information between samples held in the intensity distribution matrix available without normalizing the intensity distribution matrix:










\tilde{C}_{\delta(L)} = \frac{ C_{\delta(L)} }{ m_L I_L }, \quad \forall L \ \text{(for all samples)} .    [Formula 51]







Here, mL is the mass of the sample L, and IL is the peak intensity of (poly)dimethylsiloxane, generated from an oil pump annexed to the DART-MS, observed at 610 m/z over the entire temperature range. IL reflects the standby state during the measurement; the peak intensity actually differs by about a factor of three between a rainy day and a sunny day. As a result of the correction,










\tilde{C}_{\delta(L)}    [Formula 52]

is obtained and this is added in a vertical direction to obtain the following vector:















c_L \in \mathbb{R}^K .    [Formula 53]







This vector is used as a feature vector of the sample L. Then, an end member matrix is formulated from a feature vector of an end member in the same way as follows:









P \in \mathbb{R}^{K \times p} .    [Formula 54]







Then, on the basis of the following:










c_L = P r_L, \quad \text{subject to} \quad r_L \geq 0, \ \left\| r_L \right\|_1 = 1 ,    [Formula 55]

r_L is determined.





While IL has been described as being derived from poly(dimethylsiloxane) and having 610 m/z, it is not limited to that but is preferably derived from a compound having a molecular weight of 50-1500 contained in a predetermined quantity in an atmosphere during observation and/or is preferably derived from an organic low-molecular compound having a molecular weight of 50-500 contained in a predetermined quantity in each of all the samples.


Referring back to FIG. 1, the corrected intensity distribution matrix is partitioned into submatrices corresponding to respective ones of the samples in the sample set, and the submatrices are used as feature vectors to express the respective corresponding samples in vector space (step S106).


Furthermore, a K-1 dimensional simplex including all of these feature vectors (each corresponding to one sample) is defined, and K end members in the K-1 dimensional simplex are determined (step S107).


If the learning sample contains an end member and the learning sample is given a determination label (a label for determining that this learning sample is an “end member”), the end member is determined on the basis of the label.


If the learning sample contains an end member and the learning sample is not given a determination label, specifically, if the learning sample contains an end member but it is unknown which learning sample contains the end member, the end member is determined through vertex component analysis on a feature vector.


If the learning sample does not contain an end member, an end member is determined on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume. If the K-1 dimensional simplex is a two-dimensional simplex (triangle), the minimum volume of the K-1 dimensional simplex means a minimum area thereof.


According to one aspect of the present method, even if the learning sample does not contain an end member, at least one feature vector of the learning sample is present in each region external to a hyperspherical body inscribed in the foregoing K-1 dimensional simplex (if K is equal to or greater than 3), making it possible to determine an end member precisely.



FIG. 2 is a conceptual view showing the K-1 dimensional simplex (two-dimensional simplex, a triangle) while K is 3. A K-1 dimensional simplex 10 is a triangle with vertexes defined by end members 13, 14, and 15 determined by learning samples 16, 17, and 18 respectively.


As shown in FIG. 2, if all the learning samples 16, 17, and 18 are present in a region 19 (hatched) external to a hypersphere 12 (in this case, a “circle”) inscribed in the K-1 dimensional simplex, high-precision quantitative analysis result is obtained.


This was devised on the basis of Robust Volume Minimization-Based Matrix Factorization for Remote Sensing and Document Clustering, IEEE Transactions on Signal Processing, Vol. 64, No. 23, Dec. 1, 2016.


There is no particular limitation on the algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume but a publicly-known algorithm is applicable. Examples of such an algorithm include a minimum volume constraint-NMF method, a minimum distance constraint-NMF method, a simplex identification via split augmented Lagrangian method, and a simplex volume minimization method.


Next, a Euclidean distance between each of K end members and a feature vector of an inference target sample is calculated to infer a content ratio of each component in the sample (step S108).
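One concrete way to carry out step S108 is to express the feature vector in barycentric coordinates with respect to the K end members, which yields content ratios summing to 1. This is a sketch: the step as described uses Euclidean distances to the end members, and the linear-system formulation below is an assumption introduced for illustration.

```python
import numpy as np

def barycentric_ratios(E, f):
    """Express the feature vector f as a convex combination of the K
    end-member feature vectors (columns of E); the coefficients are the
    inferred content ratios. Solves [E; 1^T] r = [f; 1] by least squares."""
    K = E.shape[1]
    A = np.vstack([E, np.ones((1, K))])   # append the sum-to-one constraint
    b = np.append(f, 1.0)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r
```

For a feature vector lying inside the K-1 dimensional simplex, the coefficients are non-negative and sum to 1.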


According to another aspect of the present method, if the learning sample contains at least one end member (if at least one learning sample is an end member), a feature vector of another learning sample may be present in a region inside a hyperspherical body inscribed in the K-1 dimensional simplex. Like FIG. 2, FIG. 13 is a conceptual view showing a K-1 dimensional simplex (two-dimensional simplex, a triangle) while K is 3. A K-1 dimensional simplex 60 is a triangle with vertexes defined by a learning sample 61 as an end member, and by end members 64 and 65 determined by different learning samples 62 and 63 respectively. A difference from FIG. 2 is that the learning samples 62 and 63 are present in a region inside a hypersphere inscribed in the K-1 dimensional simplex.


Even if at least one learning sample is an end member, a different learning sample may still be present on a hypersphere inscribed in the K-1 dimensional simplex or in a region external to the hypersphere. Specifically, in this case, the other learning sample may be present at any position.


In terms of achieving more excellent effect of the present invention, it is particularly preferable that the other learning sample contain 20% by mass or more, more preferably, 40% by mass or more of a component in an end member different from the learning sample as an end member (a component in a different end member).


According to the present method, even if a sample is a solid sample hard to dissolve in a solvent and hard to undergo conventional composition analysis, this sample is still available for analysis without particular preparatory process and can be subjected to accurate quantitative analysis. The present method achieves significant reduction in time required for obtaining result, particularly by being applied to quality control of a sample as a mixture of a plurality of components, investigation of a cause for failure, etc.


Second Embodiment of Present Method

A second embodiment of the present method includes a method of determining an end member (vertex) applied if the learning sample does not include an end member, specifically, if there is no learning sample at a vertex of the K-1 dimensional simplex in the step S107 in the first embodiment already described.


One feature of the method according to the present embodiment is that an end member is determined by second NMF process by which a matrix representing a fragment abundance of each sample is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing a fragment abundance of each simplex of the K types of components.



FIG. 14 is a schematic view for explaining a mechanism thereof.


While each sample is a mixture of the K types of components (this will be described as a mixture of “polymers”), what can actually be measured is an appropriate linear sum of spectra S of M types of fragment gases generated by pyrolysis of each polymer.


This feature according to the present embodiment is based on the idea that, by expressing the coefficient matrix as

[Formula 56]

\tilde{A} \in \mathbb{R}_+^{N \times M},

the relation

[Formula 57]

\tilde{X} \approx \tilde{A} S

is formulated. Assuming that the simplexes (pure components) of the K polymers can be subjected to measurement and observed as

[Formula 58]

B S \in \mathbb{R}_+^{K \times D},

the factorization

[Formula 59]

\tilde{A} \approx C B

yields the composition C by performing the second NMF process in this way. Here,

[Formula 60]

B \in \mathbb{R}_+^{K \times M}

is the abundance of the M fragments generated from the K simplexes (polymers).
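The two-stage structure above can be sketched with random non-negative matrices to check the dimensions involved. All sizes below are illustrative assumptions, not values from the specification:

```python
import numpy as np

# Illustrative sizes (assumptions): samples, polymers, basis fragments, channels
N, K, M, D = 6, 3, 5, 40

rng = np.random.default_rng(0)
C = rng.random((N, K))                 # weight fractions of K polymers per sample
C /= C.sum(axis=1, keepdims=True)      # rows sum to 1, as required of a composition
B = rng.random((K, M))                 # fragment abundance per unit weight of each polymer
S = rng.random((M, D))                 # basis fragment spectra

A_tilde = C @ B                        # coefficient matrix (Formula 59), shape (N, M)
X_tilde = A_tilde @ S                  # observable spectra (Formula 57), shape (N, D)

assert A_tilde.shape == (N, M) and X_tilde.shape == (N, D)
assert np.all(A_tilde >= 0) and np.all(X_tilde >= 0)
```

The non-negativity of every factor is what makes the factorization an NMF rather than an unconstrained matrix decomposition.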


The following describes the details of a difference between the NMF process in step S104 (hereinafter also called “first NMF process”) and the NMF process performed in determining an end member in step S107 (hereinafter also called “second NMF process”) in the flow of FIG. 1, and others. In principle, steps, etc. not described below are the same as the corresponding steps in the first embodiment.


In the following description, symbols, etc. used in the description of the first embodiment will be replaced with those shown in the following table:











TABLE 1

Term | Before replacement | After replacement
Data matrix | X: N spectra × M channels | (number of samples N × number of temperature zones T) × D channels; defined as X \in \mathbb{R}_+^{NT \times D} (a non-negative NT × D matrix)
Number of basis fragments | K | M
Intensity distribution matrix | C \in \mathbb{R}_+^{N \times K} | A \in \mathbb{R}_+^{NT \times M}
Base spectrum matrix | S \in \mathbb{R}_+^{K \times M} | S \in \mathbb{R}_+^{M \times D}
Characteristic vector | (a^T, b^T)^T | (u^*, v^*)
Matrix representation of characteristic vector | (A; B) stacked | (U; V) stacked

Furthermore, in many cases, notations used in the following description follow a system generally employed in signal processing.











[Formula 61]

\mathbb{R}_+^{N \times M}, \qquad \mathbb{R}^{N \times M}

These denote a non-negative and a real-valued N × M matrix, respectively.


For a matrix

[Formula 62]

X \in \mathbb{R}_+^{N \times M},

the notations

[Formula 63]

X_{n:} \in \mathbb{R}_+^{1 \times M}, \quad X_{:m} \in \mathbb{R}_+^{N \times 1}, \quad X_{nm} \in \mathbb{R}_+

represent its n-th row vector, m-th column vector, and (n, m)-th element, respectively. Furthermore, X^T is the transpose of X, and

[Formula 64]

\| X \|_F

represents the Frobenius norm.











[Formula 65]

\| X_{n:} \|_1, \quad \| X_{n:} \|_2

[Formula 66]

denote the 1-norm and the 2-norm, respectively, of the n-th row of X.













If X is a square matrix, Tr(X) denotes its trace and diag(X) denotes the diagonal matrix composed of its diagonal elements. Furthermore, \mathbf{1}_N and \mathbf{1}_N \mathbf{1}_N^T denote the N-dimensional all-ones vector and the (N, N) all-ones matrix, respectively. Moreover, I_N denotes the N-dimensional identity matrix.


(First NMF)

[Formula 67]

X \approx A S

Input: data matrix

[Formula 68]

X \in \mathbb{R}_+^{NT \times D}

Output: intensity distribution matrix

[Formula 69]

A \in \mathbb{R}_+^{NT \times M}

and basis fragment spectra

[Formula 70]

S \in \mathbb{R}_+^{M \times D}

Here, N is the number of samples, T is the number of temperature-zone partitions, D is the number of channels, and M is the number of base spectra.


The first NMF was developed to conform better to the interpretation of MS (mass spectrometry) data by adding the following three main changes to the ARD-SO-NMF proposed by Shiga, et al.:

    • 1) The variance-covariance matrix of the Gaussian noise for each channel,

[Formula 71]

R \in \mathbb{R}^{D \times D},

is estimated on the basis of natural isotopic peaks.
    • 2) A soft orthogonal constraint is applied between basis fragment spectra.
    • 3) Fragment spectra having similar intensity distributions are merged (expansion of the merging condition).


Regarding 1), Gaussian noise of independent equal variance (i.i.d.) with variance σ² is assumed for the time being, and the variance-covariance matrix R is introduced later. Assuming i.i.d. noise, a probability generative model for the data matrix X can be written as











[Formula 72]

p(X \mid A, S, \sigma^2) = \prod_{i=1}^{NT} \prod_{d=1}^{D} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(X_{id} - [AS]_{id})^2}{2\sigma^2} \right\}.















As the number M of base spectra is unknown, automatic relevance determination (ARD) is introduced on the basis of sparseness of a distribution matrix A for allowing this number to be inferred automatically. First, an exponential distribution parametrized with λm for each basis component is assumed as a prior distribution of A.


Specifically, with

[Formula 73]

\lambda = (\lambda_1, \ldots, \lambda_M)^T \in \mathbb{R}^M,

[Formula 74]

p(A_{im} \mid \lambda_m) = \frac{1}{\lambda_m} \exp\left( -\frac{A_{im}}{\lambda_m} \right) \quad \text{s.t. } \lambda_m > 0, \ \text{for } i = 1, \ldots, NT, \ m = 1, \ldots, M,

p(A \mid \lambda) = \prod_{i=1}^{NT} \prod_{m=1}^{M} p(A_{im} \mid \lambda_m).









An entire probability model can be written as











[Formula 75]

p(X, A, S, \lambda) = p(X \mid A, S, \sigma^2)\, p(A \mid \lambda)\, p(S)\, p(\lambda \mid a, b).












Here, p(S) is a uniform distribution on a hypersphere as follows:











[Formula 76]

\| S_{m:} \|_2 = 1.











Furthermore, p(λ|a, b) is an inverse gamma distribution parametrized with (a, b), which is specifically











[Formula 77]

p(\lambda \mid a, b) = \prod_{m=1}^{M} p(\lambda_m \mid a, b), \quad p(\lambda_m \mid a, b) = \frac{b^a}{\Gamma(a)}\, \lambda_m^{-(a+1)} \exp\left( -\frac{b}{\lambda_m} \right) \ \text{for } m = 1, \ldots, M.












Here, a is a hyperparameter for adjusting sparseness, set to a = 1 + 10⁻¹⁶, and b can be estimated empirically from the expected value E(A_{im}) of A_{im} through the relation











[Formula 78]

E(A_{im}) = \frac{b}{a-1}. \qquad (S1)








This further has a relation with E(Xid) as follows:











[Formula 79]

E(X_{id}) = \sum_{m=1}^{M} E(A_{im})\, E(S_{md}) = M \cdot E(A_{im})\, E(S_{md}),







By approximating E(Xid) with an average μx of X











[Formula 80]

\mu_X = \frac{M b}{(a-1)\sqrt{D}},







b is defined as follows:









[Formula 81]

b = \frac{\mu_X (a-1) \sqrt{D}}{M}. \qquad (S2)







Thus, a negative log likelihood function can be written as










[Formula 82]

L(X, A, S, \lambda) = -\log\left[ p(X \mid A, S)\, p(A \mid \lambda)\, p(\lambda \mid a, b) \right]
= \frac{1}{2\sigma^2} \| X - AS \|_F^2 + \frac{DNT}{2} \log 2\pi\sigma^2 + (NT + a + 1) \sum_{m=1}^{M} \log \lambda_m + \sum_{m=1}^{M} \frac{1}{\lambda_m} \left( b + \sum_{i=1}^{NT} A_{im} \right) + M\left( \log \Gamma(a) - a \log b \right).







This is a downward-convex function with respect to λ. Thus, an update formula for λ is obtained as













[Formula 83]

\frac{\partial L}{\partial \lambda} = 0,







specifically, as









[Formula 84]

\lambda_m = \frac{b + \sum_{i=1}^{NT} A_{im}}{NT + a + 1}, \quad \text{for } m = 1, \ldots, M. \qquad (S3)







Up to this point, the derivation follows exactly that of the ARD-SO-NMF proposed by Shiga, et al.
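The λ update of Eq. S3 can be sketched numerically as follows; the sizes and the value of b are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Hypothetical sizes and hyperparameters for illustration
NT, M = 50, 4
a = 1.0 + 1e-16          # sparseness hyperparameter as stated in the text
b = 0.1                  # assumed value of b (normally set via Eq. S2)

rng = np.random.default_rng(1)
A = rng.random((NT, M))  # current intensity distribution matrix

# Eq. S3: lambda_m = (b + sum_i A_im) / (NT + a + 1), one value per basis component
lam = (b + A.sum(axis=0)) / (NT + a + 1)
assert lam.shape == (M,) and np.all(lam > 0)
```

A component whose column of A shrinks toward zero drives the corresponding λ_m toward b/(NT + a + 1), which is how the ARD mechanism prunes unneeded components.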


The assumption of i.i.d. Gaussian noise does not hold for MS data; a larger signal is known to tend to carry larger noise. The noise matrix

[Formula 85]

E \in \mathbb{R}^{NT \times D}

is therefore determined as the residual of a linear regression on the natural isotopic peaks. Specifically,









[Formula 86]

E_{:d} = X_{:d} - M^{(d)} \left( {M^{(d)}}^T M^{(d)} \right)^{-1} {M^{(d)}}^T X_{:d}, \quad \text{for } d = 1, \ldots, D. \qquad (S4)







Here,

[Formula 87]

M^{(d)} = X_{:[d-30,\ d-20,\ d-10,\ d+10,\ d+20,\ d+30]}

consists of the channels [d−30, d−20, d−10, d+10, d+20, d+30] centered around channel d, which correspond to the isotopic peaks within ±3 m/z of channel d (note that the channel spacing is 0.1 m/z). The variance-covariance matrix of each channel, given as










[Formula 88]

R \in \mathbb{R}^{D \times D},

is determined as

[Formula 89]

R = \frac{1}{NT} E^T E.
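The residual-based estimate of R (Eq. S4 together with the formula above) can be sketched as follows. The data are random, and the clipping of isotope offsets at the spectrum edges is an assumption for illustration:

```python
import numpy as np

# Illustrative data (not the authors' code)
NT, D = 30, 200
rng = np.random.default_rng(2)
X = rng.random((NT, D))
offsets = [-30, -20, -10, 10, 20, 30]   # +/-3 m/z at 0.1 m/z channel spacing

E = np.zeros_like(X)
for d in range(D):
    # Regressors from isotopic neighbours; edge clipping is an assumption
    cols = [d + o for o in offsets if 0 <= d + o < D]
    Md = X[:, cols]
    # Least-squares residual: E_:d = X_:d - Md (Md^T Md)^-1 Md^T X_:d  (Eq. S4)
    coef, *_ = np.linalg.lstsq(Md, X[:, d], rcond=None)
    E[:, d] = X[:, d] - Md @ coef

R = E.T @ E / NT            # channel-wise variance-covariance matrix (Formula 89)
assert R.shape == (D, D)
assert np.allclose(R, R.T)  # R is symmetric by construction
```

Because isotopic companions of a peak scale with the peak itself, the part of each channel not explained by its isotope offsets is a reasonable proxy for the measurement noise.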







Using this formula, the likelihood function is rewritten as










[Formula 90]

p(X \mid A, S, R) = \frac{1}{\sqrt{(2\pi)^{DNT} |R|^{NT}}} \exp\left\{ -\mathrm{Tr}\left[ (X - AS)\, R^{-1} (X - AS)^T \right] \right\}.







Note that R is a constant matrix. A total negative log likelihood function is defined as









[Formula 91]

L(X, A, S, \lambda) = -\log\left[ p(X \mid A, S, R)\, p(A \mid \lambda)\, p(\lambda \mid a, b) \right]
= \mathrm{Tr}\left[ (X - AS)\, R^{-1} (X - AS)^T \right] + \frac{DNT}{2} \log 2\pi + \frac{NT}{2} \log |R| + (NT + a + 1) \sum_{m=1}^{M} \log \lambda_m + \sum_{m=1}^{M} \frac{1}{\lambda_m} \left( b + \sum_{i=1}^{NT} A_{im} \right) + M\left( \log \Gamma(a) - a \log b \right).






By substituting the update formula Eq. S3 for λ and making simplification by omitting a constant term,











[Formula 92]

L(X, A, S, \lambda) = \mathrm{Tr}\left[ (X - AS)\, R^{-1} (X - AS)^T \right] + (NT + a + 1) \sum_{m=1}^{M} \log \lambda_m







is obtained. This function is minimized with respect to A and S by using hierarchical alternating least square (HALS). For simplification, the following vector notation is used:











[Formula 93]

a_m \equiv A_{:m}, \quad s_m \equiv S_{m:}^T, \quad \text{for } m = 1, \ldots, M.







According to HALS, the residual X − AS is expressed as











[Formula 94]

X^{(m)} \approx a_m s_m^T \quad (m = 1, \ldots, M).

Here,

[Formula 95]

X^{(m)} = X - AS + a_m s_m^T.














Then, L(X, A, S, λ) can be written separately for each component m as

[Formula 96]

L(X, a_m, s_m, \lambda_m(a_m)) = \mathrm{Tr}\left[ \left( X^{(m)} - a_m s_m^T \right) R^{-1} \left( X^{(m)} - a_m s_m^T \right)^T \right] + (NT + a + 1) \log \lambda_m, \quad \text{for } m = 1, \ldots, M.






Soft orthogonal constraint on S can be incorporated as a penalty term as follows into an objective function L:











[Formula 97]

w_O\, \xi_m\, s_m^T s^{(m)}.

Here,

[Formula 98]

s^{(m)} \equiv \sum_{j \neq m}^{M} s_j

is established. This term represents the non-orthogonality between component m and the other components, and ξ_m is a Lagrange undetermined multiplier that would apply if orthogonality were enforced strictly; the constraint is relaxed using w_O ∈ [0, 1]. Thus, the objective function to be minimized is defined as











[Formula 99]

L(X, a_m, s_m, \lambda_m(a_m)) = \mathrm{Tr}\left[ \left( X^{(m)} - a_m s_m^T \right) R^{-1} \left( X^{(m)} - a_m s_m^T \right)^T \right] + (NT + a + 1) \log \lambda_m + w_O\, \xi_m\, s_m^T s^{(m)}.














The gradients with respect to a_m and s_m are as follows:

[Formula 100]

\frac{\partial L}{\partial a_m} = \left( a_m s_m^T - X^{(m)} \right) R^{-1} s_m + \frac{1}{\lambda_m} \mathbf{1}_{NT},

\frac{\partial L}{\partial s_m} = R^{-1} \left( s_m a_m^T - X^{(m)T} \right) a_m + w_O\, \xi_m\, s^{(m)},

and by setting these to zero, the update formulas are obtained as











[Formula 101]

a_m = \frac{X^{(m)} R^{-1} s_m - \frac{1}{\lambda_m} \mathbf{1}_{NT}}{s_m^T R^{-1} s_m}, \qquad (S5)

s_m = X^{(m)T} a_m - w_O\, \xi_m\, R\, s^{(m)}. \qquad (S6)








Since s_m is normalized after each update, a constant coefficient has been omitted. The non-negativity constraint is satisfied by projecting onto the non-negative orthant after each update. As a specific example, the projection can be realized as











[Formula 102]

a_m \leftarrow \frac{a_m + |a_m|}{2},

where

[Formula 103]

|a_m|

denotes the vector whose elements are the absolute values of the corresponding elements of a_m. By multiplying Eq. S6 from the left by

[Formula 104]

s^{(m)T} R

and applying the strict orthogonality condition with w_O = 1,

[Formula 105]

s^{(m)T} s_m = 0,











the undetermined multiplier ξm is obtained as











[Formula 106]

-\, s^{(m)T} X^{(m)T} a_m + \xi_m\, s^{(m)T} R\, s^{(m)} = 0, \qquad (S7)

\xi_m = \frac{ s^{(m)T} X^{(m)T} a_m }{ s^{(m)T} R\, s^{(m)} }.






Using the equations S1 to S7 given above, an algorithm 1 is suggested as follows:












[Formula 107]

Algorithm 1: Pseudo-code of the first NMF

Input: l2-normalized data X ∈ R+^{NT×D}; orthogonal-constraint weight w_O; initial component number M; iteration number itr; merging threshold t ∈ [0.9, 0.99].
Output: spectra-wise fragment abundance matrix A ∈ R+^{NT×M} and M fragment spectra S ∈ R+^{M×D} with optimized M.
Initialization:
    initialize A by Eq. S1
    initialize S by random selection of M row vectors from X and l2-normalization
    calculate b by Eq. S2
    initialize λ by Eq. S3
    calculate E by Eq. S4 and R = (1/NT) E^T E
Repeat until convergence criteria are satisfied:
    for m = 1, ..., M:
        calculate s^(m) ← Σ_{j≠m} s_j and calculate ξ_m by Eq. S7
        update s_m by Eq. S6
        project s_m onto the non-negative orthant
        l2-normalize s_m
    for m = 1, ..., M:
        update a_m by Eq. S5
        project a_m onto the non-negative orthant
    # Merging very similar components
    for k = 1, ..., M − 1:
        for m = k + 1, ..., M:
            if s_k^T s_m > t:
                a_k ← a_k + a_m, a_m ← 0
            if a_k^T a_m > t ‖a_k‖ ‖a_m‖:
                s_k ← s_k + (‖a_m‖_1 / ‖a_k‖_1) s_m,  s = ‖s_k‖_2,  s_k ← s_k / s
                a_k ← s a_k, a_m ← 0
    update λ by Eq. S3









Here, each time A and S are updated, similar components are merged. While Shiga et al. suggested merging components having similar spectra, the merging here is additionally applied to components having similar intensity distributions. In this way, isotopic peaks, fragment series ionized by addition of different ions, oligomer peak series having different numbers of monomers, etc. can be merged into one component, providing results with higher interpretability.
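The merging step can be sketched as a simplified re-implementation of the merging loop in Algorithm 1 (the threshold and the toy matrices are illustrative assumptions, not the authors' code):

```python
import numpy as np

def merge_similar(A, S, t=0.95):
    """Merge component pairs whose spectra (rows of S, l2-normalized) or
    intensity distributions (columns of A) are nearly parallel."""
    A, S = A.copy(), S.copy()
    M = S.shape[0]
    for k in range(M - 1):
        for m in range(k + 1, M):
            if S[k] @ S[m] > t:                       # similar spectra
                A[:, k] += A[:, m]
                A[:, m] = 0
            nk, nm = np.linalg.norm(A[:, k]), np.linalg.norm(A[:, m])
            if A[:, k] @ A[:, m] > t * nk * nm:       # similar intensity distributions
                S[k] += (np.abs(A[:, m]).sum() / max(np.abs(A[:, k]).sum(), 1e-12)) * S[m]
                s = np.linalg.norm(S[k])
                S[k] /= s
                A[:, k] *= s
                A[:, m] = 0
    return A, S

# Components 0 and 1 share the same spectrum, so they are merged into one.
S = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = np.array([[1, 1, 1], [1, 1, 0], [1, 1, 1], [1, 1, 0]], dtype=float)
A2, S2 = merge_similar(A, S)
assert np.allclose(A2[:, 1], 0)   # duplicate component has been emptied
```

Keeping the merged spectrum l2-normalized and rescaling the merged intensity column preserves the product A S, so the reconstruction of the data is unchanged by the merge.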


(Derivation of Canonical correlation analysis filter (CCA-filter))








Input: the output of the first NMF,

[Formula 108]

S \in \mathbb{R}_+^{M \times D},

and a background spectrum,

[Formula 109]

X_{BG} \in \mathbb{R}_+^{T \times D}.

Output: a list of the component numbers judged to be derived from background.


The M components obtained by the first NMF may include components derived from background or a tramp material. Such components can distort the composition-analysis results and are therefore preferably removed from A and S. Assuming that M′ components are judged by the CCA-filter to be derived from background,

[Formulas 110-113]

A \in \mathbb{R}_+^{NT \times M} \ \to\ A \in \mathbb{R}_+^{NT \times (M - M')}, \qquad S \in \mathbb{R}_+^{M \times D} \ \to\ S \in \mathbb{R}_+^{(M - M') \times D}.

Although the number of components after application of the CCA-filter is M − M′, the symbol M is still used for simplicity.


To apply the CCA-filter, a background spectrum











[Formula 114]

X_{BG} \in \mathbb{R}_+^{T \times D}

must be incorporated in the data set and used together with the sample spectra when performing the first NMF. If the presence of a tramp material can be anticipated, a spectrum measured for the tramp material can be used as X_BG. The CCA-filter checks the components m = 1, ..., M one by one to see whether their respective spectra S_{m:} are contained in X_BG.


The first step is to partition the l2-normalized M spectra composing S into a spectrum set similar to S_{m:},

[Formulas 115, 116]

Y \in \mathbb{R}_+^{M_{sim} \times D},

and a spectrum set dissimilar to S_{m:},

[Formula 117]

Z \in \mathbb{R}_+^{M_{dis} \times D}.

This partitioning is performed so as to satisfy

[Formula 118]

S_{m:} Y_{m':}^T \geq t_1, \quad \text{for } m' = 1, \ldots, M_{sim},

S_{m:} Z_{m':}^T < t_1, \quad \text{for } m' = 1, \ldots, M_{dis},




Here, t_1 ∈ [0, 1] is a threshold; t_1 = 0.2 was used consistently in the present invention. Furthermore, as S_{m:} is always contained in Y, S_{m:} is stored in the first row of Y. Z is combined with X_BG as

[Formula 119]

Z \leftarrow \begin{pmatrix} Z \\ X_{BG} \end{pmatrix}.







To make each row zero-mean, Y̅ and Z̅ are defined as

[Formula 120]

\bar{Y} = Y \left( I_D - \frac{1}{D} \mathbf{1}_D \mathbf{1}_D^T \right) \in \mathbb{R}^{M_{sim} \times D}, \qquad (S8)

\bar{Z} = Z \left( I_D - \frac{1}{D} \mathbf{1}_D \mathbf{1}_D^T \right) \in \mathbb{R}^{(M_{dis}+T) \times D}. \qquad (S9)







These two spectrum sets are subjected to CCA, which generates a pair of spectra that are as similar as possible to each other through linear combinations within the two sets.


The coefficients of the linear combinations of Y̅ and Z̅ are stored in vectors

[Formulas 121-124]

u \in \mathbb{R}^{M_{sim}}, \qquad v \in \mathbb{R}^{M_{dis}+T}.







Accordingly, the spectrum pair can be written as










[Formulas 125, 126]

y \equiv u^T \bar{Y} \in \mathbb{R}^{1 \times D}, \qquad z \equiv v^T \bar{Z} \in \mathbb{R}^{1 \times D}.







Similarity between them is evaluated using the correlation coefficient ρ:

[Formula 127]

\rho = \frac{u^T V_{YZ} v}{\sqrt{u^T V_{YY} u} \sqrt{v^T V_{ZZ} v}},

where

[Formula 128]

V_{YY} = \bar{Y} \bar{Y}^T / D, \quad V_{ZZ} = \bar{Z} \bar{Z}^T / D, \quad V_{YZ} = \bar{Y} \bar{Z}^T / D.







A problem setting by the CCA can be written as











[Formula 129]

(u^*, v^*) = \underset{u, v}{\arg\max}\ \rho.
]







A solution thereof, which is written as











[Formula 130]

\begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^{M_{sim} + M_{dis} + T},
]







is provided as a solution of a generalized eigenvalue problem as












[Formula 131]

\begin{pmatrix} O & V_{YZ} \\ V_{YZ}^T & O \end{pmatrix} \begin{pmatrix} U \\ V \end{pmatrix} = \begin{pmatrix} V_{YY} & O \\ O & V_{ZZ} \end{pmatrix} \begin{pmatrix} U \\ V \end{pmatrix} \begin{pmatrix} \rho_1 & & \\ & \ddots & \\ & & \rho_{M_{sim}+M_{dis}+T} \end{pmatrix}, \quad \rho_1 \geq \cdots \geq \rho_{M_{sim}+M_{dis}+T}, \qquad (S10)








where

[Formula 132]

\begin{pmatrix} U \\ V \end{pmatrix}

is the matrix obtained by aligning the characteristic vectors

[Formula 133]

\begin{pmatrix} u^* \\ v^* \end{pmatrix}

as column vectors in descending order of characteristic value.







Each characteristic value

[Formula 134]

(\rho_1, \ldots, \rho_{M_{sim}+M_{dis}+T})

is the coefficient of correlation between y and z generated through linear combination using the corresponding (u*, v*) as coefficients. Every u* corresponding to a characteristic value satisfying ρ ≥ t_2 is extracted. If the first element, corresponding to the coefficient of S_{m:}, contributes largely to u*, specifically if

[Formula 135]

\frac{|u_1|}{\| u^* \|_1} \geq t_3,

the component m is judged to be derived from background and removed from the system. Here, t_2 ∈ [0.9, 0.99] and t_3 ∈ [0, 1]. These processes are summarized as Algorithm 2:












[Formula 136]

Algorithm 2: CCA-filter

Input: l2-normalized basis spectra of the first-NMF output, S ∈ R+^{M×D}; a background spectrum X_BG ∈ R+^{T×D}; thresholds t1, t2, t3 ∈ [0, 1] (in this study (t1, t2, t3) = (0.2, 0.9, 0.5) is consistently used).
Output: a list of the components judged as background components.
for m = 1, ..., M:
    for m' = 1, ..., M:
        classify S_{m':} into Y if S_{m:} S_{m':}^T ≥ t1, else into Z
    Z ← (Z; X_BG)
    calculate Y̅ and Z̅ by Eq. S8 and Eq. S9
    calculate V_YY = Y̅Y̅^T/D, V_ZZ = Z̅Z̅^T/D, V_YZ = Y̅Z̅^T/D
    obtain U* = (u*_1, ..., u*_{Msim+Mdis+T}) and (ρ_1, ..., ρ_{Msim+Mdis+T}) by solving Eq. S10
    find Q such that ρ_Q ≥ t2 and ρ_{Q+1} < t2
    for q = 1, ..., Q:
        if |U*_{1q}| / ‖U*_{:q}‖_2 ≥ t3, add m to the background-component list
return the background-component list









After the background component is identified, a corresponding column vector in A and a corresponding row vector in S are deleted to be removed from the system.
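The CCA core of the filter (Eq. S10) can be sketched on toy data: when the "dissimilar + background" set contains a near-copy of a spectrum in the "similar" set, the leading canonical correlation approaches 1. The data below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
D = 50
Y = rng.standard_normal((2, D))          # "similar" set; first row plays the role of S_m:
Z = np.vstack([Y[0] + 0.05 * rng.standard_normal(D),  # background contains an S_m:-like row
               rng.standard_normal(D)])
Y = Y - Y.mean(axis=1, keepdims=True)    # Eq. S8
Z = Z - Z.mean(axis=1, keepdims=True)    # Eq. S9

Vyy, Vzz, Vyz = Y @ Y.T / D, Z @ Z.T / D, Y @ Z.T / D
p, q = Y.shape[0], Z.shape[0]
# Generalized eigenvalue problem of Eq. S10
Lhs = np.block([[np.zeros((p, p)), Vyz], [Vyz.T, np.zeros((q, q))]])
Rhs = np.block([[Vyy, np.zeros((p, q))], [np.zeros((q, p)), Vzz]])
w, vecs = eig(Lhs, Rhs)
rho = np.sort(np.real(w))[::-1]          # canonical correlations appear as +/- pairs
# The largest canonical correlation is near 1 because Z contains a copy of Y's first row
assert rho[0] > 0.9
```

In the filter, such a large ρ combined with a first element dominating u* is exactly the evidence used to flag component m as background-derived.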


After removal of the M′ background-derived components, the CCA-filter outputs

[Formula 137]

A \in \mathbb{R}_+^{NT \times (M - M')}

(expressed below, with M − M′ replaced by M for simplicity, as

[Formula 138]

A \in \mathbb{R}_+^{NT \times M}).

This is used for correcting the allocation intensity on the basis of the sample quantity and an internal reference peak. Then,

[Formula 139]

a_n = \mathrm{vec}(A^{(n)}),

which is a one-dimensional vector generated from the submatrix A^{(n)} of A related to sample n, may be used as the feature vector of sample n and input to the second NMF. Meanwhile, as the M-fragment temperature distribution is unnecessary for composition analysis, the temperature zones may be summed for each sample to obtain the M-fragment abundance (FA) per sample, represented as

[Formula 140]

\tilde{A} \in \mathbb{R}_+^{N \times M},

which may be used as the input instead. Here, for simplicity, the second NMF is performed with respect to

[Formula 141]

\tilde{A}.







In the following, fitting by non-negative least squares (NNLS) is performed frequently. Using an optimal non-negative coefficient vector

[Formula 142]

x \in \mathbb{R}_+^{n}

and a linear combination of the column vectors of a constant matrix

[Formula 143]

\Phi \in \mathbb{R}^{m \times n},

the problem is to approximate

[Formula 144]

y \in \mathbb{R}^{m},

and the solution is determined by

[Formula 145]

x^* = \underset{x}{\arg\min} \| y - \Phi x \|^2, \quad \text{s.t. } x \geq 0.







While this can be solved by many optimization techniques including the alternating direction method of multipliers (ADMM) (1), the ADMM-NNLS developed by Fu et al. is used herein, and the solution is denoted

[Formula 146]

x^* = \mathrm{NNLS}(y, \Phi).







The non-negative coefficient vectors x_l^* (l = 1, ..., L) corresponding to a set of vectors Y = [y_1, ..., y_L] to be approximated can be calculated individually,

[Formula 147]

x_l^* = \mathrm{NNLS}(Y_{:l}, \Phi) \quad \text{for } l = 1, \ldots, L,

or in matrix form,

[Formula 148]

X^* = \mathrm{NNLS}(Y, \Phi).

Here,

[Formula 149]

X^* = [x_1^*, \ldots, x_L^*].







A similar problem, with the additional constraint that the coefficient vector sums to 1, is called fully constrained least squares (FCLS); it can also be solved using ADMM and is written as

[Formula 150]

x_l^* = \mathrm{FCLS}(Y_{:l}, \Phi) \quad \text{for } l = 1, \ldots, L,

or in matrix notation,

[Formula 151]

X^* = \mathrm{FCLS}(Y, \Phi).
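As a sketch, NNLS can be run with SciPy's active-set solver (rather than the ADMM-NNLS of Fu et al. used in the text), and the sum-to-one FCLS constraint can be approximated by appending a heavily weighted row of ones; both choices are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
m, n = 20, 3
Phi = rng.random((m, n))
x_true = np.array([0.2, 0.5, 0.3])       # non-negative and sums to 1
y = Phi @ x_true

# Formula 146: x* = NNLS(y, Phi)
x_nnls, _ = nnls(Phi, y)
assert np.allclose(x_nnls, x_true, atol=1e-6)

# FCLS approximation: a heavily weighted ones-row forces sum(x) ~= 1
delta = 1e5
Phi_aug = np.vstack([Phi, delta * np.ones(n)])
y_aug = np.append(y, delta * 1.0)
x_fcls, _ = nnls(Phi_aug, y_aug)
assert abs(x_fcls.sum() - 1.0) < 1e-6
```

Because the target y here lies exactly in the non-negative cone spanned by the columns of Φ, both solvers recover x_true; with noisy data the two problems generally give different answers.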







(Second NMF)

Input: the FA for each sample,

[Formula 152]

\tilde{A} \in \mathbb{R}_+^{N \times M},

the spectrum of each of the M basis fragments,

[Formula 153]

S \in \mathbb{R}_+^{M \times D},

and the number K of polymer components.

Output: the polymer weight fraction for each sample,

[Formula 154]

C \in \mathbb{R}_+^{N \times K},

and the FA per unit weight of each of the K base polymers,

[Formula 155]

B \in \mathbb{R}_+^{K \times M}.











Second NMF: the factorization

[Formula 156]

\tilde{A} \approx C B

is evaluated in terms of its approximation residual using the Riemannian metric

[Formula 157]

D_G(\tilde{A} \mid C B).











Specifically,

[Formula 158]

D_G(\tilde{A} \mid C B) = \mathrm{Tr}\left[ (\tilde{A} - C B)\, G\, (\tilde{A} - C B)^T \right], \quad \text{where}

[Formula 159]

G = S S^T \in \mathbb{R}_+^{M \times M}.












By obtaining the lower triangular matrix

[Formula 160]

L \in \mathbb{R}^{M \times M}

through the Cholesky factorization G = L L^T, the following can be written:

[Formula 161]

D_G(\tilde{A} \mid C B) = \mathrm{Tr}\left[ (\tilde{A} - C B)\, L L^T (\tilde{A} - C B)^T \right] = \mathrm{Tr}\left[ (\hat{A} - C \hat{B})(\hat{A} - C \hat{B})^T \right] = \| \hat{A} - C \hat{B} \|_F^2.













Here,

[Formula 162]

\hat{A} = \tilde{A} L \in \mathbb{R}^{N \times M}, \qquad \hat{B} = B L \in \mathbb{R}^{K \times M},

which is equivalent to the matrix factorization

[Formula 163]

\hat{A} \approx C \hat{B}.












Considering the row vectors of

[Formula 164]

\hat{B},

the volume term of the simplex spanned by these row vectors is written as

[Formula 165]

\mathrm{vol}(\hat{B}),

and the non-orthogonality term between the row vectors is written as

[Formula 166]

\mathrm{nonorth}(\hat{B}).











Then, using α > 0 and β ∈ [0, 1] as weights,

[Formula 167]

(C^*, \hat{B}^*) = \underset{C, \hat{B}}{\arg\min}\ \frac{1}{2} \| \hat{A} - C \hat{B} \|_F^2 + \frac{\alpha}{2}\, \mathrm{vol}(\hat{B}) + \frac{\beta}{2}\, \mathrm{nonorth}(\hat{B}), \qquad (S11)

\text{s.t. } \hat{B} = B L, \quad B \geq 0, \quad C \geq 0, \quad C \mathbf{1}_K = \mathbf{1}_N.






On the basis of this formula,

[Formula 168]

(C^*, \hat{B}^*)

is determined. Here, in order to provide robustness against outliers (3), Fu et al. suggest introducing the weighting matrix

[Formula 169]

w_n = \frac{p}{2} \left( \| \hat{A}_{n:} - C_{n:} \hat{B} \|^2 + \varepsilon \right)^{\frac{p-2}{2}}, \qquad W = \mathrm{diag}(w_1, \ldots, w_N).





Here, p ∈ (0, 2] and ε is a small regularization parameter; p = 1.5 and ε = 10⁻⁸ were consistently used in the present invention. The optimization problem is written as

[Formula 170]

\min_{C, \hat{B}}\ \frac{1}{2}\, \mathrm{Tr}\left[ W (\hat{A} - C \hat{B})(\hat{A} - C \hat{B})^T \right] + \frac{\alpha}{2}\, \mathrm{vol}(\hat{B}) + \frac{\beta}{2}\, \mathrm{nonorth}(\hat{B}),

\text{s.t. } \hat{B} = B L, \quad B \geq 0, \quad C \geq 0, \quad C \mathbf{1}_K = \mathbf{1}_N.
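The robust weights of Formula 169 can be sketched as follows, using the stated p = 1.5 and ε = 10⁻⁸ with random illustrative matrices (not the authors' code):

```python
import numpy as np

p_par, eps = 1.5, 1e-8          # values stated in the text
rng = np.random.default_rng(5)
N, K, M = 8, 2, 4
A_hat = rng.random((N, M))
C = rng.random((N, K))
C /= C.sum(axis=1, keepdims=True)
B_hat = rng.random((K, M))

# Formula 169: w_n = (p/2) (||A_hat_n: - C_n: B_hat||^2 + eps)^((p-2)/2)
resid = np.linalg.norm(A_hat - C @ B_hat, axis=1) ** 2
w = (p_par / 2.0) * (resid + eps) ** ((p_par - 2.0) / 2.0)
W = np.diag(w)

# With p < 2, rows with larger residuals receive smaller weights (outlier down-weighting)
order = np.argsort(resid)
assert np.all(np.diff(w[order]) <= 1e-12)
```

This is the familiar iteratively reweighted least-squares mechanism: minimizing the weighted Frobenius term with these weights approximates minimizing an l_p-norm of the row residuals.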






Block coordinate descent (BCD) is employed to alternately optimize C and

[Formula 171]

\hat{B}.







Here, it is assumed that t optimization steps have been performed, yielding

[Formula 172]

(C^{(t)}, \hat{B}^{(t)}).







For initialization, vertex component analysis (VCA) (4) is used: K row vectors close to the K vertices of a simplex are selected from the N row vectors of

[Formula 173]

\hat{A},

and the selected vectors are defined as

[Formula 174]

\hat{B}^{(0)}.







Given

[Formula 175]

\hat{B}^{(t)},

the update of C can be done simply by

[Formula 176]

C^{(t+1)T} = \mathrm{FCLS}\left( \hat{A}^T, \hat{B}^{(t)T} \right). \qquad (S13)







Then, the update from

[Formula 177]

\hat{B}^{(t)}

to

[Formula 178]

\hat{B}^{(t+1)}

based on C^{(t)} will be considered. Here,











[Formula 179]

\mathrm{nonorth}(\hat{B}) = \mathrm{Tr}\left[ \Lambda \left( \hat{B} \hat{B}^T - \mathrm{diag}(\hat{B} \hat{B}^T) \right) \right],

where

[Formula 180]

\Lambda \in \mathbb{R}^{K \times K}

is an undetermined multiplier. Furthermore,











[Formula 181]

\mathrm{vol}(\hat{B}) = \log\left| \det\left( \hat{B} \hat{B}^T + \tau I_K \right) \right| = \log\left| \det(H) \right|,

where

[Formula 182]

H(\hat{B}) = \hat{B} \hat{B}^T + \tau I_K,

and τ is a small regularization parameter; τ = 10⁻⁸ was consistently used. Here,









a majorizer function for

[Formula 183]

\mathrm{vol}(\hat{B})

is introduced. Using the tangent inequality and, at the previous step,

[Formula 184]

H^{(t)} = H(\hat{B}^{(t)}),

the bound

[Formula 185]

\log\left| \det(H) \right| \leq \log\left| \det(H^{(t)}) \right| + \mathrm{Tr}\left[ \left( \nabla_{H^{(t)}} \log\left| \det(H) \right| \right)^T \left( H - H^{(t)} \right) \right]

is established. Here,












[Formula 186]

\nabla_{H^{(t)}} \log\left| \det(H) \right|

is the gradient of

[Formula 187]

\log\left| \det(H) \right|

at H^{(t)}. Using

[Formula 188]

\nabla_{H^{(t)}} \log\left| \det(H) \right| = H^{(t)-T},

we obtain

[Formula 189]

\mathrm{vol}(\hat{B}) = \log\left| \det(H) \right| \leq \mathrm{Tr}\left[ H^{(t)-1} H \right] + \mathrm{const} = \mathrm{Tr}\left[ F^{(t)} \hat{B} \hat{B}^T \right] + \mathrm{const},

where

[Formula 190]

F^{(t)} = H^{(t)-1},

and const is a constant term. Thus, by replacing









[Formula 191]

\mathrm{vol}(\hat{B})

with

[Formula 192]

\mathrm{Tr}\left[ F^{(t)} \hat{B} \hat{B}^T \right],

all penalty terms are written collectively as














[Formula 193]

(α/2) vol(B̂) + (β/2) nonorth(B̂) ≤ (α/2) Tr(F(t) B̂ B̂ᵀ) + (β/2) Tr(Λ(t) B̂ B̂ᵀ) + const = (1/2) Tr(V B̂ B̂ᵀ) + const,








where








[Formula 194]

V = αF(t) + βΛ(t).
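The majorization above can be checked numerically. The following sketch is illustrative only (the matrix sizes and random data are assumptions, not taken from the patent); it verifies that the tangent-plane bound of Formula 185 makes Tr(F(t) B̂ B̂ᵀ) + const an upper bound on vol(B̂) that is tight at B̂(t):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, tau = 3, 8, 1e-8  # illustrative sizes, not from the patent

def vol(B_hat):
    # vol(B̂) = log|det(B̂ B̂ᵀ + τ I_K)|  (Formula 181)
    H = B_hat @ B_hat.T + tau * np.eye(K)
    return np.linalg.slogdet(H)[1]

B_prev = rng.random((K, M))                                    # B̂(t), previous iterate
F_prev = np.linalg.inv(B_prev @ B_prev.T + tau * np.eye(K))    # F(t) = H(t)^{-1}

# linear majorizer: vol(B̂) <= Tr(F(t) B̂ B̂ᵀ) + const, tight at B̂ = B̂(t)
const = vol(B_prev) - np.trace(F_prev @ B_prev @ B_prev.T)

for _ in range(100):
    B = rng.random((K, M))
    assert vol(B) <= np.trace(F_prev @ B @ B.T) + const + 1e-9
```

The tolerance 1e-9 only guards against floating-point round-off; the inequality itself follows from the concavity of log-det over positive-definite matrices.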







Accordingly, update of










[Formula 195]

B̂(t+1)







can be obtained by solving the following:









[Formula 196]

B̂(t+1) = argmin_{B̂} (1/2) Tr[ W (Â − C(t)B̂)(Â − C(t)B̂)ᵀ ] + (1/2) Tr(V B̂ B̂ᵀ)   (S14)

s.t. B̂ = BL, B ≥ 0.





In order to organize complicated constraint conditions,

a constraint

[Formula 197]

B̂ = BL







is incorporated into an objective function using a Lagrange's undetermined multiplier as follows:










[Formula 198]

Z ∈ ℝ^{K×M},







thereby solving this problem in the framework of ADMM.











[Formula 199]

(B(t+1), B̂(t+1)) = argmin_{B,B̂} max_Z { f(B̂) + Tr[Zᵀ(B̂ − BL)] − (1/(2μ)) ‖Z − Z′‖_F² },

s.t. B ≥ 0,





where










[Formula 200]

f(B̂) ≜ (1/2) Tr[ W (Â − C(t)B̂)(Â − C(t)B̂)ᵀ ] + (1/2) Tr(V B̂ B̂ᵀ),







and μ is a hyperparameter for ADMM (here, μ = 1 was consistently used). Furthermore, Z′ represents Z at the previous iteration.


If








[Formula 201]

Z = Z′ + μ(B̂ − BL),   (S15)







the objective function, maximized with respect to Z, provides the following:











[Formula 202]

g(B, B̂; Z′) ≜ f(B̂) + (μ/2) ‖B̂ − BL + (1/μ)Z′‖_F²,   B ≥ 0.







This may be optimized with respect to










[Formula 203]

(B, B̂).












[Formula 204]

(B, B̂, Z)







was optimized cyclically and an algorithm was summarized as Algorithm 3.












[Formula 205]

Algorithm 3: ADMM for solving the optimization of Eq. S14

 input: B(t), B̂(t), hyperparameter μ, function g(B, B̂)
 output: B(t+1), B̂(t+1)
 initialize: q = 0, B_q ← B(t), B̂_q ← B̂(t), Z_q ← 0
 repeat until convergence:
  B̂_{q+1} = argmin_{B̂} g(B̂; B_q, Z_q), solved by Eq. S16
  B_{q+1} = argmin_{B ≥ 0} g(B; B̂_{q+1}, Z_q), solved by Eq. S17
  Z_{q+1} = Z_q + μ(B̂_{q+1} − B_{q+1}L)
  q ← q + 1
  if ‖B̂_q − B_q L‖_F² < 10⁻⁶:
   end
 B(t+1) ← B_q, B̂(t+1) ← B̂_q.
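Algorithm 3 can be sketched in numpy. This is an illustrative reimplementation under simplifying assumptions, not the patent's code: W (Eq. S12) and V (Eq. S21) are held fixed as simple matrices, μ = 1, and a projected-gradient routine stands in for NNLS.

```python
import numpy as np

def nnls_pg(A, Y, iters=300, X0=None):
    # min_{X >= 0} ||A X - Y||_F by projected gradient (simple stand-in for NNLS)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    X = np.zeros((A.shape[1], Y.shape[1])) if X0 is None else X0.copy()
    for _ in range(iters):
        X = np.maximum(0.0, X - step * (A.T @ (A @ X - Y)))
    return X

rng = np.random.default_rng(0)
N, K, M, mu = 20, 3, 10, 1.0                 # illustrative sizes
C = rng.random((N, K))                       # C(t), fixed during this inner problem
A_hat = rng.random((N, M))                   # Â
G = rng.random((M, M))
L = np.linalg.cholesky(G @ G.T + 1e-3 * np.eye(M))   # stand-in Cholesky factor
W = np.eye(N)                                # weight matrix, identity for this sketch
V = 0.1 * np.eye(K)                          # penalty matrix, held fixed here

B = rng.random((K, M)); B_hat = B @ L; Z = np.zeros((K, M))
Minv = np.linalg.inv(C.T @ W @ C + V + mu * np.eye(K))
res = []
for q in range(100):
    # Eq. S16: closed-form update of B̂ (with μ = 1)
    B_hat = Minv @ (C.T @ W @ A_hat + B @ L - Z / mu)
    # Eq. S17: B update, min_{B >= 0} ||B L - (B̂ + Z/μ)||_F, via NNLS on Bᵀ
    B = nnls_pg(L.T, (B_hat + Z / mu).T, X0=B.T).T
    # dual update
    Z = Z + mu * (B_hat - B @ L)
    res.append(np.linalg.norm(B_hat - B @ L) ** 2)
    if res[-1] < 1e-6:
        break
```

After the loop, B is nonnegative and the primal residual ‖B̂ − BL‖_F² has shrunk from its initial value, mirroring the convergence criterion of Algorithm 3.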









The objective function

[Formula 206]

g(B̂; B_q, Z_q)

is an unconstrained quadratic function with respect to

[Formula 207]

B̂;

it is minimized by the following:









[Formula 208]

∂g/∂B̂ = (C(t)ᵀ W C(t) + V + μI_K) B̂ − C(t)ᵀ W Â − B_q L + (1/μ)Z ≡ 0,   (S16)

B̂_{q+1} = (C(t)ᵀ W C(t) + V + μI_K)^{−1} (C(t)ᵀ W Â + B_q L − (1/μ)Z),




Furthermore,








[Formula 209]

g(B; B̂_{q+1}, Z_q)







can be solved by NNLS and is updated as









[Formula 210]

B_{q+1}ᵀ = NNLS( (B̂_{q+1} + (1/μ)Z)ᵀ, Lᵀ ).   (S17)







As a result of the foregoing, the update formulas for solving the original problem Eq. S11 were obtained for

[Formula 211]

(B̂, B, C).










Finally, update of V=αF(t)+βΛ(t) is considered. Regarding F(t), it can be determined easily as









[Formula 212]

F(t) = (B̂(t) B̂(t)ᵀ + τI_K)^{−1}.   (S18)







Regarding Λ(t), the strict orthogonality condition, specifically

[Formula 213]

B̂ B̂ᵀ = diag(B̂ B̂ᵀ) ≜ D,   β = 1,   (S19)

is combined with the stationarity condition

[Formula 214]

∂f(B̂; C(t))/∂B̂ = C(t)ᵀ W (C(t) B̂ − Â) + (αF(t) + Λ(t)) B̂ ≡ 0,

and the latter is multiplied from the right side by

[Formula 215]

B̂ᵀ;










Λ(t) can be obtained as









[Formula 216]

C(t)ᵀ W (C(t) D − Â B̂ᵀ) + (αF(t) + Λ(t)) D ≡ 0,   (S20)

Λ(t) = C(t)ᵀ W (Â B̂ᵀ D^{−1} − C(t)) − αF(t).
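The closed form for Λ(t) can be checked by substituting it back into Eq. S20. The following numpy check is illustrative only (sizes, random data, and the identity weight W are assumptions); it builds a B̂ whose rows are mutually orthogonal so that Eq. S19 holds exactly, then verifies that the residual vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, alpha, tau = 8, 3, 9, 0.1, 1e-8     # illustrative sizes
C = rng.random((N, K))
A_hat = rng.random((N, M))
W = np.eye(N)                                 # weight matrix, identity for the check

# B̂ with mutually orthogonal rows (disjoint supports), so B̂B̂ᵀ = diag(B̂B̂ᵀ) ≜ D
B_hat = np.zeros((K, M))
for k in range(K):
    B_hat[k, 3 * k: 3 * (k + 1)] = rng.random(3)

D = np.diag(np.diag(B_hat @ B_hat.T))
F = np.linalg.inv(B_hat @ B_hat.T + tau * np.eye(K))               # F(t), Eq. S18
Lam = C.T @ W @ (A_hat @ B_hat.T @ np.linalg.inv(D) - C) - alpha * F  # Λ(t)

# substituting Λ(t) back into Eq. S20 must give (numerically) zero
resid = C.T @ W @ (C @ D - A_hat @ B_hat.T) + (alpha * F + Lam) @ D
assert np.allclose(resid, 0)
```

The cancellation is exact by construction: (αF + Λ)D reproduces C(t)ᵀWÂB̂ᵀ − C(t)ᵀWC(t)D term by term.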










Using Eq. S18 and Eq. S20,

[Formula 217]

V = α(1 − β)(B̂(t) B̂(t)ᵀ + τI_K)^{−1} + β C(t)ᵀ W (Â B̂(t)ᵀ D^{−1} − C(t))   (S21)







is given. Using the foregoing, an algorithm for solving the problem Eq. S11 is obtained as follows:












[Formula 218]

Algorithm 4: Pseudo-code for the second NMF (solving optimization Eq. S11)

 Input: output of the first NMF (sample-wise FA: Ã ∈ ℝ₊^{N×M}, basis-fragment spectra: S ∈ ℝ₊^{M×D}), basis-polymer number: K, weights for penalty terms: (α, β), weight for outliers: p
 Output: polymer weight-fraction: C ∈ ℝ₊^{N×K}, FAs of unit-weight basis-polymers: B ∈ ℝ₊^{K×M}
 Initialization:
  calculate L via Cholesky decomposition of SSᵀ
  set Â = ÃL
  initialize B̂(0) by selecting K rows of Â via the VCA algorithm
  set B(0) = B̂(0) L⁻¹
  initialize C(0) by C(0)ᵀ = FCLS(Âᵀ, B̂(0)ᵀ)
  initialize W by Eq. S12
  initialize V based on B(0) and C(0) by Eq. S21
 Repeat until criteria for convergence satisfied:
  update B̂, B by Algorithm 3
  update C by Eq. S13
  update W, V by Eq. S12 and S21, respectively
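The linear-algebra steps of the initialization can be sketched in numpy. This is illustrative only: the sizes and random data are assumptions, and simply taking the first K rows of Â stands in for the VCA initialization (FCLS is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D, K = 6, 4, 30, 2                 # illustrative sizes
S = rng.random((M, D))                   # basis-fragment spectra from the first NMF
A_tilde = rng.random((N, M))             # sample-wise fragment abundances Ã

L = np.linalg.cholesky(S @ S.T)          # "calculate L via Cholesky decomposition of SSᵀ"
A_hat = A_tilde @ L                      # "set Â = ÃL"

# taking K rows of Â stands in here for the VCA initialization of B̂(0)
B_hat0 = A_hat[:K, :]
B0 = B_hat0 @ np.linalg.inv(L)           # "set B(0) = B̂(0) L⁻¹"
```

The round trip B̂(0) = B(0)L holds by construction, which is the invariant Algorithm 3 later maintains as its constraint B̂ = BL.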










(Projection of Test Data onto Hyperplane Spanned by S and B)


After S and B are inferred from the data set, data not having been used for the inference of S and B (here called test data) are projected onto the hyperplane spanned by S and B by the method described herein:









[Formula 219]

X_test ∈ ℝ₊^{T×D}.












First, by the projection onto an S-hyperplane,









[Formula 220]

A_test ∈ ℝ₊^{T×M}












is obtained. Specifically,









[Formula 221]

A_testᵀ = NNLS( R^{−1/2} X_testᵀ, R^{−1/2} Sᵀ ),










and a total sum determined with respect to the temperature zones converts it to









[Formula 222]

Ã_test ∈ ℝ₊^{1×M}.











Then, by performing projection onto a B-hyperplane and normalization,









[Formula 223]

C_test ∈ ℝ₊^{1×K}












is obtained. Specifically,









[Formula 224]

C_testᵀ = NNLS( Lᵀ Ã_testᵀ, Lᵀ Bᵀ ),

C_test ← C_test / ‖C_test‖₁.
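The B-hyperplane projection of Formula 224 can be sketched in numpy. This is illustrative only (sizes, random B and L, and the exactly mixed test sample are assumptions); since the exact mixing coefficients here are strictly positive, the unconstrained least-squares solution is already nonnegative and therefore coincides with the NNLS solution:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 3, 12                                 # illustrative sizes
B = rng.random((K, M))                       # unit-weight basis-polymer spectra
G = rng.random((M, M))
L = np.linalg.cholesky(G @ G.T + 1e-3 * np.eye(M))  # stand-in Cholesky factor

c_true = np.array([0.2, 0.5, 0.3])           # known mixture, sums to 1
a_test = c_true @ B                          # Ã_test for an exactly mixed sample

# Formula 224: C_testᵀ = NNLS(Lᵀ Ã_testᵀ, Lᵀ Bᵀ); nonnegative least-squares
# reduces to plain least squares when the minimizer is nonnegative, as here
A_mat = L.T @ B.T                            # (M × K)
b_vec = L.T @ a_test                         # (M,)
c = np.linalg.lstsq(A_mat, b_vec, rcond=None)[0]
c = np.maximum(c, 0.0)
c = c / c.sum()                              # C_test ← C_test / ||C_test||₁
```

For this noise-free sample the recovered c reproduces c_true, confirming that the projection plus L1 normalization returns the weight fractions.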





The data set and hyper parameters are as follows:









TABLE 2

Summary of dataset and hyperparameters.

 Sample number N: 24
 Temp. range (°C): [50, 450]
 First NMF — Mass range (m/z): [50, 1000]; w_o: 0.2; Merging thres.: 0.99; Initial M: 30; Prior: Exp; iteration: 3000
 Second NMF — K: 3; α = β: 0.1; p: 1.5


The cited literature is as follows:










TABLE 3

1. S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1-122 (2010).
2. X. Fu, W. Ma, K. Huang, N. D. Sidiropoulos, Robust volume minimization-based matrix factorization via alternating optimization. ICASSP 2016, 2534-2538 (2016).
3. X. Fu, K. Huang, B. Yang, W. K. Ma, N. D. Sidiropoulos, Robust volume minimization-based matrix factorization for remote sensing and document clustering. IEEE Trans. Signal Process. 64, 6254-6268 (2016).
4. J. M. B. Dias, J. M. P. Nascimento, Vertex component analysis: A geometric-based approach to unmix hyperspectral data. Signal Image Process. Remote Sens. 43, 415-439 (2006).









(Composition Inference Device)

A composition inference device according to an embodiment of the present invention will be described next by referring to the drawings. FIG. 3 is a hardware configuration view of the composition inference device according to the embodiment of the present invention.


A composition inference device 30 includes a mass spectrometer 31 and an information processing device 33. The mass spectrometer 31 includes a pressure-reducing pump 32. The information processing device 33 includes a processor 34, a storage device 35, a display device 36, and an input device 37. The mass spectrometer 31 and the information processing device 33 are configured in such a manner that data can be transferred therebetween.


Examples of the processor 34 include a microprocessor, a processor core, a multiprocessor, an ASIC (application-specific integrated circuit), an FPGA (field programmable gate array), GPGPUs (general-purpose computing on graphics processing units), and the like.


The storage device 35 has the function of storing various types of programs and data in a transitory manner and/or a non-transitory manner, and provides a work area for the processor 34.


Examples of the storage device 35 include a ROM (read only memory), a RAM (random access memory), an HDD (hard disk drive), a flash memory, and an SSD (solid state drive).


The input device 37 can accept various types of information and can accept input of a command to the composition inference device 30. Examples of the input device 37 may include a keyboard, a mouse, a scanner, and a touch panel.


The display device 36 can display a status of the composition inference device 30, and progress of analysis, composition inference result, etc. by the composition inference device 30. Examples of the display device 36 may include a liquid crystal display and an organic EL (electro luminescence) display.


The display device 36 may be configured as a device integrated with the input device 37. In this case, the display device 36 may be configured as a touch panel display and may be configured to provide a GUI (graphical user interface).


The information processing device 33, which includes the processor 34, the storage device 35, the input device 37, and the display device 36, which communicate data with one another through a data bus, is typically a computer.


The mass spectrometer 31 includes the pressure-reducing pump 32, and is typically a mass spectrometer including a DART ion source, a sample heater, and a time-of-flight type mass spectrometer instrument. Both the ion source and the mass spectrometer instrument of the mass spectrometer 31 are non-restrictive examples and the configuration of the mass spectrometer provided to the present composition inference device is not limited to that described above.



FIG. 4 is a functional block diagram of the composition inference device 30 according to the embodiment of the present invention. The composition inference device 30 includes a mass spectrometer unit 41 that performs mass spectrometry on all samples, and an information processing unit 42 that processes mass spectra acquired by the mass spectrometer unit 41.


The mass spectrometer unit 41 has a configuration including the mass spectrometer 31 and corresponds to a function realized by causing a controller 43 described later to control the mass spectrometer 31.


The mass spectrometer unit 41 ionizes gas components generated as a result of thermal desorption and/or pyrolysis on a loaded sample set (samples, learning samples, and background samples indicated by sign a in FIG. 4) while heating the sample set, performs mass spectrometry sequentially, and outputs mass spectra (sign c in FIG. 4).


At this time, an environmental factor resulting from an environment where the mass spectrometer 31 is installed (sign b in FIG. 4, which may be poly(dimethylsiloxane) generated from the pressure-reducing pump 32, for example) is further loaded simultaneously on the mass spectrometer 31 to exert influence on the mass spectra.


Described next is a function of each part of the information processing unit 42 having a configuration including the information processing device 33.


The controller 43 has a configuration including the processor 34. The controller 43 controls each part to realize each function of the composition inference device 30.


A storage part 44 has a configuration including the storage device 35. The storage part 44 includes a program stored in advance for control over each part, and provides a transitory or non-transitory storage area for data input to and output from each part. The storage part 44 is controlled by the controller 43 and is capable of reading and writing data.


An input part 45 has a configuration including the input device 37. An output part 46 has a configuration including the display device 36. The controller 43 controls these parts, thereby allowing input from a user of the composition inference device 30 to be accepted and the accepted input to be stored into the storage part 44, allowing information in the storage part 44 to be transmitted to each part, and allowing the transmitted information to be presented to the user of the composition inference device 30 through the output part 46.


A data matrix generating part 47 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.


The data matrix generating part 47 generates a data matrix X from a two-dimensional mass spectrum including the mass spectrum (sign c in FIG. 4) in each row acquired for each heating temperature by the mass spectrometer unit 41, and transfers the data matrix X to an NMF processing part 48 described later (sign d in FIG. 4). The details of the data matrix X are as have already been described.


The NMF processing part 48 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.


The NMF processing part 48 performs non-negative matrix factorization (NMF) on the data matrix X (sign d in FIG. 4) generated by the data matrix generating part 47 to factorize the data matrix X into the product of a normalized base spectrum matrix (S) and a corresponding intensity distribution matrix (C) (X=C×ST). Result thereof is transferred to a correction processing part 49 (sign e in FIG. 4). The details of the NMF process are as have already been described.


The correction processing part 49 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.


The correction processing part 49 is the function of correcting the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix generated by the NMF processing part 48 and the data matrix, thereby generating a corrected intensity distribution matrix. The generated corrected intensity distribution matrix is transferred to a vector processing part 50 (sign f in FIG. 4).


The correction processing part 49 may further make the intensity correction already described in addition to the foregoing correction through the canonical correlation analysis. In this case, if there is a composition of a calibration sample (calib sample composition) to be used for the intensity correction, this composition is transferred to the correction processing part 49 through the input part 45 (sign k in FIG. 4). The details of the correction through the canonical correlation analysis (CCA filter) and the intensity correction are as have already been described.


The vector processing part 50 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.


The vector processing part 50 partitions the corrected intensity distribution matrix generated by the correction processing part 49 into a submatrix corresponding to each of all the samples, and expresses all the samples in vector space using the submatrix as a feature vector. Result thereof is transferred to an end member determining part 51 (sign g in FIG. 4).


The end member determining part 51 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.


The end member determining part 51 defines a K-1 dimensional simplex including all the feature vectors generated by the vector processing part 50, and determines K end members in the K-1 dimensional simplex. Result thereof is transferred to a content ratio calculating part 52 (sign h in FIG. 4).


At this time, if the learning sample contains an end member and has a determination label, information about the determination label is transferred from the input part 45 to the end member determining part 51 (sign j in FIG. 4).


As has already been described, however, the determination label is not essential for determining an end member (for processing by the end member determining part 51).


The content ratio calculating part 52 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.


The content ratio calculating part 52 calculates a Euclidean distance between each of the K end members determined by the end member determining part 51 and the feature vector of an inference target sample generated by the vector processing part 50, and infers content ratios of the K components in the inference target sample on the basis of a ratio of the Euclidean distance between each of the K end members and the feature vector of the sample. Result of the content ratio inference is output from the output part 46 (sign i in FIG. 4).


At this time, the composition inference result about the learning sample may be output together.
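One concrete way to realize the last step of the content ratio calculating part 52 is a sketch only; the patent does not pin down this exact formula. Here the feature vector is expressed in barycentric coordinates of the K end members (a least-squares decomposition with a soft sum-to-one constraint), which plays the role of the distance-ratio-based composition; the end members, weights, and sample point are all assumed values:

```python
import numpy as np

K, d = 3, 2                                    # K end members in a (K-1)-simplex
E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # assumed end-member feature vectors
c_true = np.array([0.2, 0.3, 0.5])             # assumed true content ratios
x = c_true @ E                                 # feature vector of the target sample

# augmented least squares with a heavily weighted sum(c) = 1 row (soft constraint)
A = np.vstack([E.T, 1e3 * np.ones(K)])
b = np.concatenate([x, [1e3]])
c = np.linalg.lstsq(A, b, rcond=None)[0]
c = np.maximum(c, 0.0)
c = c / c.sum()                                # inferred content ratios of the K components
```

For a sample lying inside the simplex, the recovered c matches the mixing coefficients exactly; a sample at an end member yields a ratio of 1 for that component and 0 for the others.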


Even if a sample is a solid that is hard to dissolve in a solvent and thus hard to subject to conventional composition analysis, the present composition inference device still makes it possible to analyze the sample without any particular preparatory process and to acquire accurate quantitative analysis results. The present composition inference device achieves significant reduction in the time required for obtaining results, particularly when applied to quality control of a sample that is a mixture of a plurality of components, investigation of a cause of failure, etc.


EXAMPLES

The present composition analysis method will be described in detail on the basis of examples. The composition analysis method of the present invention should not be interpreted in a restricted sense by the following specific examples.


Example 1
(Preparation of Sample)

Three types of polymer dioxane solutions (about 4% by mass, containing about 0.4 mg of polymer solid content) including poly(methyl methacrylate) (symbol “M”), polystyrene (symbol “S”), and poly(ethyl methacrylate) (symbol “E”) were cast on a sample pot and air-dried overnight, thereby obtaining samples. Table 4 shows the respective compositions of the samples. Mass spectrometry was performed using LCMS-2020 available from SHIMADZU CORPORATION as a detector, a DART ion source available from IonSense Inc., and ionRocket available from BioChromato, Inc. as a heater for the samples.











TABLE 4

Composition

 Sample Name   M      S      E
 M             1.000
 S                    1.000
 E                           1.000
 EM            0.475         0.525
 MS            0.510  0.490
 SE                   0.502  0.498
 MS01          0.884  0.116
 MS005         0.958  0.042
 MS001         0.990  0.010
 MS0001        0.991  0.009
 M1_S2_E2      0.201  0.397  0.402
 M2_S1_E2      0.360  0.228  0.413
 M2_S2_E1      0.408  0.394  0.197
 MSE_calib     0.337  0.326  0.337










The resultant mass spectra were subjected to analysis using an algorithm that implements the following (i) to (iv):

    • (i) Soft orthogonal constraint, with ARD, on matrix C and matrix S;
    • (ii) Merging condition in NMF optimization;
    • (iii) Removal of NMF artifact, background, and contamination using CCA-filter; and
    • (iv) Quantitative analysis without requiring calibration by incorporating environmental variable into correction term.


Hyper parameters used in the analysis are as follows:

    • NMF:


Orthogonal constraint weight w = 0.2, initial component number K = 50, small positive number a = 1 + 10⁻¹⁵, iteration number L = 2000, threshold = 0.9

    • CCA-Filter:
    • t1=0.8, t2=0.2.



FIG. 5 is a view showing an original spectrum (A) of “MSE_calib” and a spectrum (B) reconstructed after performing the foregoing process. These spectra are found to have waveforms similar to each other. Specifically, it was found that, as a result of the foregoing process, a feature in the original spectrum is retained to provide data easily available for composition analysis.


While the original data contains background observed at 390 m/z that may be phthalate ester exhausted from an air conditioner, for example (a portion indicated as “BG” in FIG. 5(A)), this is removed from the reconstructed peak (B). Dioxane (a portion indicated as “DOX” in FIG. 5(A)) as a solvent used in the sample preparation was also removed. They each correspond to noise and such noise was found to be removed effectively by the present method.



FIG. 6 shows C (of MSE_calib) as part of the matrix C after the NMF factorization, and Sᵀ resulting from removing a background spectrum and impurity spectra using the CCA-Filter.



FIG. 7 shows composition analysis result about the ternary system composed of M, S, and E. Hollow circles and an explanatory note “Calibrated” in FIG. 7 represent composition inference result about a sample obtained using MSE_calib as a calibration sample, specifically, by providing composition information of MSE_calib as known information and making intensity correction on an intensity distribution matrix.


In FIG. 7, solid circles and an explanatory note “Non-calibrated” represent composition inference result about a sample obtained by making intensity correction on an intensity distribution matrix using the product of an environmental variable (specifically 610 m/z: peak intensity in poly(dimethylsiloxane)) and a sample mass.


In FIG. 7, “Known” in the legend, indicated by a star symbol, represents an actual composition (true value).


It is found from the result shown in FIG. 7 that excellent inference result similar to a true value was obtained in each of the cases “Calibrated” and “Non-calibrated.”


Referring to FIG. 8 next, it shows composition inference result obtained by eliminating the orthogonal constraint in (i) from the NMF conditions. Each plot is defined in the same way as in FIG. 7.


It was found from the result in FIG. 8 that the presence of the orthogonal constraint on the NMF provides more excellent inference result.


Referring to FIG. 9 next, it shows composition inference result obtained by eliminating only Merging S from the conditions for the NMF optimization. Each plot is defined in the same way as in FIG. 7.


It was found from the result in FIG. 9 that the presence of Merging S in the conditions for the NMF optimization provides more excellent inference result.


While the number of components can be reduced only to 36 by the ARD alone, maintaining the Merging S condition allows a composition to be inferred more precisely with as few as 9 components. In this way, the significance of Merging S was proved.



FIG. 10 shows composition inference result corresponding to result obtained by eliminating the CCA-filter (Comparative Example). Each plot is defined in the same way as in FIG. 7.


In FIG. 10, the calculation result for the sample “SE” shows that about 1% of M was determined to be contained even though the true content of M is 0%. Estimating 0% to be 1% is not practical, as it forfeits even a few percent of inference precision.



FIG. 11 is a view showing an original spectrum (raw of MSE_calib in the upper section) of “MSE_calib” and a spectrum (reconstructed of MSE_calib in the lower section) reconstructed after eliminating the CCA-filter and then performing the foregoing process.


It was found from this result that eliminating the CCA-filter leads to the failure of removing both the DOX noise and the BG noise (see the explanation about FIG. 5).



FIG. 12 shows composition inference result obtained without making correction using an environmental variable. In the result of FIG. 12, “Non-calibrated” indicated by solid circles largely deviates from Known, showing that making correction using an environmental variable provides more accurate inference result.


Example 2
(Inference Using Sample Not Containing End Member)

To confirm that inference can be made precisely even if the samples (sample group), including the learning samples, do not contain an end member, the samples listed in Table 5 were prepared and a test was conducted in the same way as in Example 1.


In the table, the symbols “M,” “S,” and “E” have the same meanings as those in Table 4 (Example 1). Each sample was prepared in such a manner that the weight fraction of any single component (polymer) does not substantially exceed 80%.













TABLE 5

 sample  M(mass %)  S(mass %)  E(mass %)  mass(mg)
 1        0         50         50         0.42
 2        0         62         38         0.60
 3        0         31         69         0.44
 4        0         79         21         0.35
 5        0         32         68         0.96
 6        0         20         80         0.87
 7       14         72         14         0.44
 8       15         15         70         0.68
 9       19          0         81         0.39
 10      20         40         40         0.44
 11      25         75          0         0.35
 12      29          0         71         0.42
 13      29         71          0         0.72
 14      34         33         34         0.39
 16      36         23         41         0.47
 17      41         39         20         0.44
 18      48          0         52         0.43
 19      49         51          0         0.32
 20      64         36          0         0.43
 21      67          0         33         0.36
 22      70         30          0         0.98
 23      79          9         12         0.46
 24      80          0         20         0.27
 25      83         17          0         0.34









A two-dimensional mass spectrum of each of these samples was acquired in the same way as in the Example 1 and converted to a data matrix. The resultant data matrix was subjected to the first NMF process already described, namely,









[Formula 225]

X ≈ AS.











For estimation of the variance-covariance matrix of the noise, calculation can be made using X only. Thus, the estimation was made as a process inside a module in the first NMF process.


Next, A was subjected to intensity correction using the CCA-Filter, a weight, or a reference peak. As temperature distribution information is unnecessary for composition analysis, a total sum along a temperature axis may be obtained as follows for each sample:









[Formula 226]

A ∈ ℝ₊^{NT×M} → Ã ∈ ℝ₊^{N×M}.
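The collapse from A to Ã amounts to summing each sample's T temperature rows. Assuming the NT rows are stacked sample-major (an assumption for illustration; the sizes are arbitrary), a minimal numpy sketch:

```python
import numpy as np

N, T, M = 4, 5, 3                           # illustrative sizes
A = np.arange(N * T * M, dtype=float).reshape(N * T, M)  # stacked (sample, temperature) rows

# total sum along the temperature axis for each sample: (NT, M) -> (N, M)
A_tilde = A.reshape(N, T, M).sum(axis=1)
```

The first row of Ã equals the sum of the first T rows of A, and the grand total of intensities is preserved.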











Next, the following second NMF process was performed:









[Formula 227]

Ã ≈ CB.











Evaluation and others using Riemannian metrics were performed as internal processing in the second NMF process. FIG. 15 shows C and BS illustrated as result of this processing.


Regarding C in the drawing, compositions inferred by the present method are indicated by circles and compositions of used samples (correct compositions) are indicated by stars. All these compositions exhibit favorable match therebetween.


Here, the absence of a plot at each vertex of C shows that the samples used herein do not contain an end member, namely, do not contain pure polymers of E, S, and M.


Regarding BS in the drawing, spectra of the pure polymers of E, M, and S acquired through inference and actually-measured spectra of E, M, and S are superimposed on each other. There is no difference therebetween, proving that the inference, including that of the absolute intensity, was made favorably.


As described above, as shown in BS in FIG. 15, the method of the present invention was confirmed to achieve accurate inference of spectra of pure polymers of E, M, and S and a composition of a mixture.


In the presence of test data, Xtest is projected onto S and B continuously to obtain Ctest.


Example 3
(Inference Using Sample Containing One End Member)

Next, it was confirmed that, if one of the (learning) samples contains an end member, the component contents of the other samples may be set arbitrarily (the end members are not required to lie in the region external to the circle inscribed in the K-1 dimensional simplex).


Composition analysis was conducted in the same way as in the Example 2 except that samples containing S having a weight percent equal to or greater than 40 were excluded from the samples listed in Table 5 and that M pure polymer was added as a sample. Result thereof is shown in FIG. 16.


In FIG. 16, dots indicated by solid circles represent inference result about samples used for inference of B and S, and stars indicate actual compositions of these samples. As seen from this result, it was confirmed that, while no sample of S is present in a region external to an inscribed circle (hollow circles represent samples not used for the inference), inference can be made precisely.


One of possible reasons therefor is that the following basis non-orthogonal term was incorporated as a penalty term into the second NMF process (Eq. 11) already described:









[Formula 228]

nonorth(B̂).










INDUSTRIAL APPLICABILITY

In many cases of material development, material performance is desired to be examined by being reduced into each constituent component. While this requires identification and quantification of a component, increasing complexity of a system increases the difficulty thereof. High-resolution mass spectrometry, in which peaks of different components are unlikely to overlap each other, can be said to be the most powerful analysis method for identifying the chemical constitution of a component in a complicated mixture system. On the other hand, mass spectrometry has the problem that it lacks quantitativeness. Furthermore, while mass spectrometry does not require separation by nature, a resultant spectrum is complicated and is likely to be unreadable in reality.


According to the technique developed by the present invention, a fragment of a material is subjected to pyrolysis in an atmosphere directly, without preparatory process, the resultant gas is analyzed through DART-MS, and the resultant spectrum is subjected to unsupervised learning, thereby inferring pure end member spectra of the respective components and representing the spectrum of a sample as a linear sum of those spectra. Even in the case of a material having a complicated composition, this technique still allows identification and quantification of a component in the material without requiring separation.


Regarding identification, as long as it is possible to acquire a highly-readable end member spectrum from which peaks derived from different components have been excluded completely, the subsequent procedure can be followed relatively easily. On the other hand, the present invention also confronts the fact that ionization efficiency differs between peaks, so a peak intensity and the corresponding abundance ratio are not proportional to each other, which becomes a hindrance to quantitative analysis. In this regard, according to the present invention, it is possible to infer each peak intensity in an end member spectrum not as a normalized relative value but as an absolute intensity. In expressing a sample spectrum as a linear sum of end member spectra, this allows the linear coefficients of the spectrum to be taken directly as the composition ratio.


The present invention, which decomposes a sample into its components and determines their content ratios, can clearly serve as a useful tool in material development by identifying the key component responsible for a material's performance. It is further expected to be applied in a considerably wide range of fields, including quality control such as inspection of the content of degraded components, compliance inspection of the content of dangerous substances, and inspection for detecting infringement. The present invention is also expected to accelerate techniques such as prediction of the degradation or lifetime of resins, detection of trace impurity components in resins and organic materials, quantitative analysis of smell or taste, and traceability tags for preventing false designation of the origin of food.


Furthermore, Plastics Europe submitted a proposal to the European Commission in September 2021 to make it mandatory for plastic containers and packaging to contain 30% recycled plastic by 2030. As this shows, the use of recycled materials is being promoted, yet no general technique for measuring recycled content has been established. The present invention is expected to enable accurate measurement of the ratio between a virgin material and a recycled material, with applications in quality control, compliance inspection, and other areas.


REFERENCE SIGNS LIST
    • 10: K-1 dimensional simplex
    • 12: Inscribed hypersphere
    • 13-15: End member
    • 16-18: Learning sample
    • 19: Region
    • 30: Composition inference device
    • 31: Mass spectrometer
    • 32: Pressure-reducing pump
    • 33: Information processing device
    • 34: Processor
    • 35: Storage device
    • 36: Display device
    • 37: Input device
    • 41: Mass spectrometer unit
    • 42: Information processing unit
    • 43: Controller
    • 44: Storage part
    • 45: Input part
    • 46: Output part
    • 47: Data matrix generating part
    • 48: NMF processing part
    • 49: Correction processing part
    • 50: Vector processing part
    • 51: End member determining part
    • 52: Content ratio calculating part

Claims
  • 1. A method of inferring a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components while K is an integer equal to or greater than 1, the method comprising:
preparing learning samples of a number equal to or greater than K containing at least one type of component selected from the K types of components and having compositions different from each other, and a background sample not containing the component;
sequentially ionizing gas components generated by thermal desorption and/or pyrolysis while heating each sample in a sample set including the inference target sample, the learning samples, and the background sample, and observing mass spectra continuously;
storing the mass spectrum acquired for each heating temperature into each row to acquire two-dimensional mass spectra of the respective samples, and merging at least two or more of the two-dimensional spectra and converting the spectra into a data matrix;
performing NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix;
extracting a noise component in the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix and the data matrix, and correcting the intensity distribution matrix so as to reduce influence by the noise component, thereby acquiring a corrected intensity distribution matrix;
partitioning the corrected intensity distribution matrix into a submatrix corresponding to each of the samples, and expressing each of the samples in vector space using the submatrix as a feature vector;
defining a K-1 dimensional simplex including all of the feature vectors and determining K end members in the K-1 dimensional simplex; and
calculating a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and inferring a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distance, wherein
if the K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex or the learning samples contain at least one of the end members.
  • 2. The method according to claim 1, wherein the K is an integer equal to or greater than 2, and the inference target sample is a mixture of the components.
  • 3. The method according to claim 1, wherein the learning sample contains the end member.
  • 4. The method according to claim 3, wherein the end member is determined on the basis of a determination label given to the learning sample.
  • 5. The method according to claim 3, wherein the end member is determined through vertex component analysis on the feature vector of the learning sample.
  • 6. The method according to claim 1, wherein the end member is determined on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume.
  • 7. The method according to claim 6, wherein the end member is determined by second NMF process by which the corrected intensity distribution matrix is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing an individual fragment abundance of each of the K types of components.
  • 8. The method according to claim 6, wherein the learning sample does not contain the end member.
  • 9. The method according to claim 1, wherein acquiring the corrected intensity distribution matrix further includes making intensity correction on the intensity distribution matrix.
  • 10. The method according to claim 9, wherein the sample set further includes a calibration sample, the calibration sample contains all the K types of components and has a known composition, and the intensity correction includes normalizing the intensity distribution matrix.
  • 11. The method according to claim 9, wherein the intensity correction includes allocating at least part of the intensity distribution matrix using the product of the mass of the corresponding sample in the sample set and an environmental variable, and the environmental variable is a variable representing influence on ionization efficiency of the component during the observation.
  • 12. The method according to claim 11, wherein the environmental variable is a total value of peaks in a mass spectrum of a compound having a molecular weight of 50-1500 contained in a predetermined quantity in an atmosphere during the observation or an organic low-molecular compound having a molecular weight of 50-500 contained in a predetermined quantity in each sample in the sample set.
  • 13. A composition inference device that infers a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components while K is an integer equal to or greater than 1, the composition inference device comprising:
a mass spectrometer that sequentially ionizes gas components generated by thermal desorption and/or pyrolysis while heating each of samples in a sample set including learning samples of a number equal to or greater than K containing at least one type of component selected from the K types of components and having compositions different from each other, a background sample not containing the component, and the inference target sample, and observes mass spectra continuously; and
an information processing device that processes the observed mass spectra, wherein
the information processing device comprises:
a data matrix generating part that stores the mass spectrum acquired for each heating temperature into each row to acquire two-dimensional mass spectra of the respective samples, and merges at least two or more of the two-dimensional spectra and converts the spectra into a data matrix;
an NMF processing part that performs NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix;
a correction processing part that extracts a noise component in the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix and the data matrix, and corrects the intensity distribution matrix so as to reduce influence by the noise component, thereby generating a corrected intensity distribution matrix;
a vector processing part that partitions the corrected intensity distribution matrix into a submatrix corresponding to each of the samples in the sample set, and expresses each of the samples in vector space using the submatrix as a feature vector;
an end member determining part that defines a K-1 dimensional simplex including all of the feature vectors and determines K end members in the K-1 dimensional simplex; and
a content ratio calculating part that calculates a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and infers a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distance, and
if the K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex or the learning samples contain at least one of the end members.
  • 14. The composition inference device according to claim 13, wherein the end member determining part determines the end member on the basis of a determination label given to the learning sample.
  • 15. The composition inference device according to claim 13, wherein the end member determining part determines the end member on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume.
  • 16. The composition inference device according to claim 15, wherein the end member determining part determines the end member by performing second NMF process by which the corrected intensity distribution matrix is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing an individual fragment abundance of each of the K types of components.
  • 17. The composition inference device according to claim 15, wherein the correction processing part further makes intensity correction on the intensity distribution matrix.
  • 18. The composition inference device according to claim 17, wherein the sample set further includes a calibration sample, the calibration sample contains all the K types of components and has a known composition, and the intensity correction includes normalizing the intensity distribution matrix.
  • 19. The composition inference device according to claim 17, wherein the intensity correction includes allocating at least part of the intensity distribution matrix using the product of the mass of the corresponding sample in the sample set and an environmental variable, and the environmental variable is a variable representing influence on ionization efficiency of the component during the observation.
  • 20. (canceled)
  • 21. A program used in a composition inference device that infers a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components while K is an integer equal to or greater than 1, the composition inference device comprising:
a mass spectrometer that sequentially ionizes gas components generated by thermal desorption and/or pyrolysis while heating each of samples in a sample set including learning samples of a number equal to or greater than K containing at least one type of component selected from the K types of components and having compositions different from each other, a background sample not containing the component, and the inference target sample, and observes mass spectra continuously; and
an information processing device that processes the observed mass spectra,
the program comprising:
a data matrix generating function of storing the mass spectrum acquired for each heating temperature by the mass spectrometer into each row to acquire two-dimensional mass spectra of the respective samples, and merging at least two or more of the two-dimensional spectra and converting the spectra into a data matrix;
an NMF processing function of performing NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix;
a correction processing function of extracting a noise component in the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix and the data matrix, and correcting the intensity distribution matrix so as to reduce influence by the noise component, thereby generating a corrected intensity distribution matrix;
a vector processing function of partitioning the corrected intensity distribution matrix into a submatrix corresponding to each of the samples, and expressing each of the samples in vector space using the submatrix as a feature vector;
an end member determining function of defining a K-1 dimensional simplex including all of the feature vectors and determining K end members in the K-1 dimensional simplex; and
a content ratio calculating function of calculating a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and inferring a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distance, wherein
if the K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex or the learning samples contain at least one of the end members.
Priority Claims (1)
Number Date Country Kind
2021-104510 Jun 2021 JP national
PCT Information
Filing Document Filing Date Country Kind
PCT/JP2022/022813 6/6/2022 WO