The present invention relates to a method of inferring a content ratio of a component in a sample, a composition inference device, and a program.
As a method of inferring a component composition in a sample that is a multicomponent mixture, there is a conventionally known method of separating the components in the sample by some process and identifying and quantifying each fractionated component. One such identification method uses a mass spectrum acquired by a mass spectrometer.
However, the information acquired from mass spectra is a huge volume of multidimensional data containing a large number of correlations, which makes it difficult to extract, intuitively or empirically, a distinctive signal leading to component identification. As described in Non-Patent Literature 1, a method of searching for features by reducing the dimension of such multidimensional data through principal component analysis has recently been used. However, even if such a known data analysis technique is used, it is still inherently difficult to realize quantification using a mass spectrum whose peak intensity depends on ionization efficiency. For quantitative analysis, it is first required to determine the ionization efficiency, an unknown variable for each compound. To achieve this, isotopic samples having known sample volumes and the same ionization efficiency should be used as benchmark materials.
According to the method described in Non-Patent Literature 1, much of the information contained in the original multidimensional data is lost during the course of the dimension reduction. This makes quantitative analysis of a component impossible even though qualitative analysis thereof is possible.
Furthermore, because ionization efficiency differs between components depending on the component type, the test environment, and the like, the ratio between the sizes (areas) of the resultant mass spectra does not reflect the content ratio of each component in a mixture. This is one of the factors that inhibit quantitative analysis using mass spectra.
Ambient desorption ionization (ADI) has recently come into wide use as a mass spectrometry method by which a specimen in the ambient condition (in the open air under atmospheric pressure) is subjected to mass spectrometry without preprocessing. While this method has the advantage of allowing unprocessed specimens or open-system specimens to be analyzed as they are in the solid state, it has been deemed to make quantitative analysis even more difficult owing to the inconsistency of the ionization rate between components, the influence of the measurement environment, and the like.
In consideration of the foregoing circumstances, an object of the present invention is to provide a component content ratio inferring method capable of inferring a composition of components in an unknown mixture even from a mass spectrum acquired under the ambient condition.
It is also an object of the present invention to provide a composition inference device and a program.
The present inventors conducted earnest studies to attain the foregoing objects and found that these objects can be attained by the configurations described below.
[1] A method of inferring a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components while K is an integer equal to or greater than 1. The method includes: preparing learning samples of a number equal to or greater than K containing at least one type of component selected from the K types of components and having compositions different from each other, and a background sample not containing the component; sequentially ionizing gas components generated by thermal desorption and/or pyrolysis while heating each sample in a sample set including the inference target sample, the learning samples, and the background sample, and observing mass spectra continuously; storing the mass spectrum acquired for each heating temperature into each row to acquire two-dimensional mass spectra of the respective samples, and merging at least two or more of the two-dimensional spectra and converting the spectra into a data matrix; performing an NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix; extracting a noise component in the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix and the data matrix, and correcting the intensity distribution matrix so as to reduce influence by the noise component, thereby acquiring a corrected intensity distribution matrix; partitioning the corrected intensity distribution matrix into a submatrix corresponding to each of the samples, and expressing each of the samples in vector space using the submatrix as a feature vector; defining a K-1 dimensional simplex including all of the feature vectors and determining K end members in the K-1 dimensional simplex; and calculating a Euclidean distance between each of the K end members and the feature vector of the inference
target sample, and inferring a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distances. If the K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex, or the learning samples contain at least one of the end members.
[2] The method according to [1], wherein the K is an integer equal to or greater than 2, and the inference target sample is a mixture of the components.
[3] The method according to [1] or [2], wherein the learning sample contains the end member.
[4] The method according to [3], wherein the end member is determined on the basis of a determination label given to the learning sample.
[5] The method according to [3], wherein the end member is determined through vertex component analysis on the feature vector of the learning sample.
[6] The method according to [1], wherein the end member is determined on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume.
[7] The method according to [6], wherein the end member is determined by second NMF process by which the corrected intensity distribution matrix is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing an individual fragment abundance of each of the K types of components.
[8] The method according to [6] or [7], wherein the learning sample does not contain the end member.
[9] The method according to any one of [1] to [8], wherein acquiring the corrected intensity distribution matrix further includes making intensity correction on the intensity distribution matrix.
[10] The method according to [9], wherein the sample set further includes a calibration sample, the calibration sample contains all the K types of components and has a known composition, and the intensity correction includes normalizing the intensity distribution matrix.
[11] The method according to [9], wherein the intensity correction includes dividing at least part of the intensity distribution matrix by the product of the mass of the corresponding sample in the sample set and an environmental variable, and the environmental variable is a variable representing influence on the ionization efficiency of the component during the observation.
[12] The method according to [11], wherein the environmental variable is a total value of peaks in a mass spectrum of a compound having a molecular weight of 50-1500 contained in a predetermined quantity in the atmosphere during the observation, or of an organic low-molecular-weight compound having a molecular weight of 50-500 contained in a predetermined quantity in each sample in the sample set.
[13] A composition inference device that infers a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components while K is an integer equal to or greater than 1. The composition inference device includes: a mass spectrometer that sequentially ionizes gas components generated by thermal desorption and/or pyrolysis while heating each of samples in a sample set including learning samples of a number equal to or greater than K containing at least one type of component selected from the K types of components and having compositions different from each other, a background sample not containing the component, and the inference target sample, and observes mass spectra continuously; and an information processing device that processes the observed mass spectra. The information processing device includes: a data matrix generating part that stores the mass spectrum acquired for each heating temperature into each row to acquire two-dimensional mass spectra of the respective samples, and merges at least two or more of the two-dimensional spectra and converts the spectra into a data matrix; an NMF processing part that performs NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix; a correction processing part that extracts a noise component in the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix and the data matrix, and corrects the intensity distribution matrix so as to reduce influence by the noise component, thereby generating a corrected intensity distribution matrix; a vector processing part that partitions the corrected intensity distribution matrix into a submatrix corresponding to each of the samples in the sample set, and expresses each of the samples in vector space using the submatrix 
as a feature vector; an end member determining part that defines a K-1 dimensional simplex including all of the feature vectors and determines K end members in the K-1 dimensional simplex; and a content ratio calculating part that calculates a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and infers a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distances. If the K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex, or the learning samples contain at least one of the end members.
[14] The composition inference device according to [13], wherein the end member determining part determines the end member on the basis of a determination label given to the learning sample.
[15] The composition inference device according to [13], wherein the end member determining part determines the end member on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume.
[16] The composition inference device according to [15], wherein the end member determining part determines the end member by performing second NMF process by which the corrected intensity distribution matrix is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing an individual fragment abundance of each of the K types of components.
[17] The composition inference device according to [15], wherein the correction processing part further makes intensity correction on the intensity distribution matrix.
[18] The composition inference device according to [17], wherein the sample set further includes a calibration sample, the calibration sample contains all the K types of components and has a known composition, and the intensity correction includes normalizing the intensity distribution matrix.
[19] The composition inference device according to [17], wherein the intensity correction includes dividing at least part of the intensity distribution matrix by the product of the mass of the corresponding sample in the sample set and an environmental variable, and the environmental variable is a variable representing influence on the ionization efficiency of the component during the observation.
[20] The composition inference device according to [19], wherein the mass spectrometer further includes a pressure-reducing pump, and the environmental variable is a total value of peaks in a mass spectrum of a material generated by operation of the pressure-reducing pump.
[21] A program used in a composition inference device that infers a content ratio of a component in an inference target sample containing at least one type of the component selected from K types of components while K is an integer equal to or greater than 1. The composition inference device includes: a mass spectrometer that sequentially ionizes gas components generated by thermal desorption and/or pyrolysis while heating each of samples in a sample set including learning samples of a number equal to or greater than K containing at least one type of component selected from the K types of components and having compositions different from each other, a background sample not containing the component, and the inference target sample, and observes mass spectra continuously; and an information processing device that processes the observed mass spectra. The program includes: a data matrix generating function of storing the mass spectrum acquired for each heating temperature by the mass spectrometer into each row to acquire two-dimensional mass spectra of the respective samples, and merging at least two or more of the two-dimensional spectra and converting the spectra into a data matrix; an NMF processing function of performing NMF process by which the data matrix is subjected to non-negative matrix factorization to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix; a correction processing function of extracting a noise component in the intensity distribution matrix through analysis on canonical correlation between the base spectrum matrix and the data matrix, and correcting the intensity distribution matrix so as to reduce influence by the noise component, thereby generating a corrected intensity distribution matrix; a vector processing function of partitioning the corrected intensity distribution matrix into a submatrix corresponding to each of the samples, and expressing each of the samples in vector 
space using the submatrix as a feature vector; an end member determining function of defining a K-1 dimensional simplex including all of the feature vectors and determining K end members in the K-1 dimensional simplex; and a content ratio calculating function of calculating a Euclidean distance between each of the K end members and the feature vector of the inference target sample, and inferring a content ratio of the component in the inference target sample on the basis of a ratio of the Euclidean distances. If the K is equal to or greater than 3, at least one of the feature vectors of the learning samples is present in each region external to a hypersphere inscribed in the K-1 dimensional simplex, or the learning samples contain at least one of the end members.
One of the characteristics of the method of the present invention (hereinafter also called the “present method”) is the following sequence: gas components generated by thermal desorption and/or pyrolysis are ionized sequentially while each sample in the sample set is heated, and mass spectra are observed continuously; the mass spectrum acquired at each heating temperature is stored into each row to acquire two-dimensional mass spectra of the respective samples; these spectra are merged and converted into a data matrix; the data matrix is subjected to non-negative matrix factorization (NMF) to be factorized into the product of the normalized base spectrum matrix and the corresponding intensity distribution matrix; a noise component in the intensity distribution matrix is extracted through canonical correlation analysis and the intensity distribution matrix is corrected, yielding a corrected intensity distribution matrix containing a feature vector of each sample; and quantitative calculation is performed thereafter.
By correcting the intensity distribution matrix through the canonical correlation analysis, quantitative analysis can be made more easily and with higher precision (the details thereof will be described later).
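To make the flow described above concrete, the following is a minimal, purely illustrative Python sketch, not the patented implementation: random arrays stand in for measured spectra, plain multiplicative-update NMF stands in for the HALS/ARD procedure described later, and the inverse-distance weighting at the end is only one plausible reading of inferring a content ratio "on the basis of a ratio of the Euclidean distances".

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 20 temperature bins per sample, 100 m/z channels,
# K = 2 components, 4 samples in the sample set.
n_temps, n_channels, K = 20, 100, 2
samples = [np.abs(rng.normal(size=(n_temps, n_channels))) for _ in range(4)]

# Merge the per-sample two-dimensional mass spectra into one data matrix X.
X = np.vstack(samples)

# Factorize X ~ C S^T with plain multiplicative-update NMF (a stand-in for
# the HALS/ARD procedure of the present method).
N, M = X.shape
C = np.abs(rng.normal(size=(N, K)))
S = np.abs(rng.normal(size=(M, K)))
eps = 1e-12
for _ in range(300):
    C *= (X @ S) / (C @ (S.T @ S) + eps)
    S *= (X.T @ C) / (S @ (C.T @ C) + eps)

# Normalize each base spectrum to unit length, moving the scale into C.
norms = np.linalg.norm(S, axis=0)
S, C = S / norms, C * norms

# Feature vector of each sample: its own sub-block of C, flattened.
feats = [C[i * n_temps:(i + 1) * n_temps].ravel() for i in range(len(samples))]

# One simple reading of "ratio of the Euclidean distances": inverse-distance
# weights to the K end members, normalized to sum to 1.
def content_ratio(f, end_members):
    d = np.array([np.linalg.norm(f - e) for e in end_members])
    w = 1.0 / np.maximum(d, eps)
    return w / w.sum()

# Treat two of the learning-sample feature vectors as end members (assumed).
ratios = content_ratio(feats[0], [feats[1], feats[2]])
```

The noise-correction and simplex-construction steps are omitted here; the sketch only shows how the matrices relate to one another.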
If the K is an integer equal to or greater than 2 and the sample as an inference target (inference target sample) is a mixture of the K types of components, the present method makes it possible to infer the mixture ratio (composition) of the K types of components in the inference target sample.
If the learning sample contains the end member, it is possible to infer the composition of the sample more accurately and more easily.
If the learning sample is given the determination label for determining the end member, the end member can be determined on the basis of information about the label, making it possible to infer the composition of the sample more easily.
If the determination label is not given and the learning sample contains the end member, the end member can be determined through vertex component analysis on the feature vector of the learning sample, allowing inference of the composition of the sample.
Even if the learning samples do not contain the end member, the composition of the sample can still be inferred by determining the end member using an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume. This is advantageous in that an end member is not essential in the learning samples.
If the corrected intensity distribution matrix is a matrix after the intensity correction on the intensity distribution matrix, the accuracy of inference is increased further.
If the sample set further includes the calibration sample and the intensity correction includes normalizing the intensity distribution matrix, the composition of the sample can be inferred more precisely by a simpler procedure.
By dividing the part of the intensity distribution matrix corresponding to each sample by the product of the mass of the sample and the environmental variable, influence by the ionization efficiency of each component during the observation is corrected. Thus, even in the absence of a calibration sample in the sample set, it is still possible to infer the composition of the sample with high precision.
If the environmental variable used in the foregoing division is a total value of peaks in a mass spectrum of a compound having a molecular weight of 50-1500 contained in a predetermined quantity in the atmosphere during the observation, or of an organic low-molecular-weight compound having a molecular weight of 50-500 contained in a predetermined quantity in each sample in the sample set, a more accurate composition inference result can be acquired through simpler operation even in the absence of a calibration sample in the sample set.
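As a purely numerical illustration of this correction, the sketch below scales one sample's block of the intensity distribution matrix by the product of the sample mass and an environmental variable; the mass, the environmental-variable value, and the reading of the operation as a division are all assumptions for this sketch.

```python
import numpy as np

# Hypothetical block of the intensity distribution matrix for one sample
# (rows: heating-temperature bins, columns: NMF components).
C_block = np.array([[4.0, 2.0],
                    [6.0, 2.0]])

sample_mass = 2.0     # assumed sample mass
env_variable = 1.0    # assumed total intensity of the standard compound's peaks

# Correct the intensities by the product (mass x environmental variable) so
# that samples measured under different conditions become comparable.
C_corrected = C_block / (sample_mass * env_variable)
```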
The present invention will be described below in detail.
While the constituent elements will be described below on the basis of representative embodiments of the present invention, the present invention is not limited to these embodiments.
In the present description, a numerical value range expressed using “-” means a range including numerical values written before and after “-” as a lower limit value and an upper limit value.
Terms used in the present description will be explained. A term that is used but not explained below has the meaning generally understood by persons skilled in the art.
In the present description, a “component” means a material forming each sample, and an inference target sample, a learning sample, and a calibration sample each mean a mixture composed of a combination of components.
The “inference target sample” means a sample to be subjected to composition inference, and the number of components and the content of each component therein may be unknown.
The “learning samples” mean samples required for determining the K end members in the K-1 dimensional simplex described later. These samples contain at least one type of component selected from the K types of components and have compositions different from each other. The number of contained components is not particularly limited and may be 1 to K types. The learning samples may include a sample having the same composition as the inference target sample. Specifically, while the inference target sample and a learning sample may be the same, the learning samples differ from each other.
The “end member” means a vector corresponding to a vertex of the K-1 dimensional simplex. This end member corresponds to a feature vector of a sample composed of only one of the K types of components (learning sample or virtual sample).
The “background sample” means a sample that is different from the inference target sample and the learning samples and that contains none of the foregoing components.
The “calibration sample” means a sample that is different from the inference target sample, the learning samples, and the background sample, and that contains all the K types of components and has a definite composition.
First, an inference target sample, learning samples of a number equal to or greater than K having compositions different from each other, and a background sample are prepared (these samples will collectively be called a “sample set”) (step S101). The sample set may further include a calibration sample.
The inference target sample contains at least one type of component among the K types of components. If the inference target sample contains one type of component, the purity of this component can be inferred by the present method. In this case, the inference target sample contains this “component” and impurities other than this component.
If K is equal to or greater than 2, the inference target sample is a mixture of components. As long as K is equal to or greater than 1 in the present method, an upper limit thereof is not particularly limited. Typically, K is preferably equal to or less than 1000.
The number of the learning samples relates to the number K of components and is required to be equal to or greater than K. The number of the learning samples is not particularly limited as long as it is equal to or greater than K. Meanwhile, with a larger number of the learning samples, a vertex of the K-1 dimensional simplex described later is likely to be determined with higher precision. This eventually facilitates acquisition of more accurate composition inference result.
Specifically, the ratio of the number of the learning samples to the number K of components (the number of learning samples/the number of components) is equal to or greater than 1, preferably, equal to or greater than 2, more preferably, equal to or greater than 3, still more preferably, equal to or greater than 5, particularly preferably, equal to or greater than 10. Meanwhile, while an upper limit of the ratio is not particularly limited, it is generally preferably equal to or less than 1000.
The learning samples have compositions different from each other. The composition is determined by the type of contained components, the number of the components, and the content of each component. Samples having the same composition are samples that are the same in the type of contained components, the number of components, and the content of each component. If any one of these elements differs between samples, the samples have “compositions different from each other.”
According to the present method, as long as the learning samples have compositions different from each other, information other than this difference, specifically the type of contained components, the number of the components, and the content of each component, is not necessary for composition inference of the inference target sample. Namely, the compositions of the learning samples may be unknown.
A difference from the inference target sample is that the compositions of the learning samples must differ from each other. The learning samples may include a learning sample having the same composition as the inference target sample.
The learning samples may contain an end member. Specifically, the learning samples may include a sample composed of only one type of component (a single-component, pure sample) selected from the K types of components. If the learning samples contain the end members, high-precision composition inference can be made more easily.
In this case, the learning sample serving as the end member may be given a determination label. The learning sample is preferably given the determination label, as this allows the end member to be determined more easily.
Next, each sample in the sample set is heated to sequentially ionize the gases generated by thermal desorption and/or pyrolysis, thereby acquiring mass spectra (step S102).
While the method of acquiring the mass spectra is not particularly limited, a method of performing mass spectrometry on a specimen in the ambient condition without preparatory processing is preferred. A mass spectrometer called a DART-MS, which combines an ion source called a DART (direct analysis in real time) ion source with a mass spectrometer instrument, is known as a device used for the ionization and mass spectrometry described above.
The mass spectrometer instrument is not particularly limited but is preferably an instrument allowing precise mass spectrometry, and may be of any type including a quadrupole type and a time-of-flight (TOF) type.
While the conditions for acquiring the mass spectra are not particularly limited, according to a procedure given as a non-restrictive example, all the samples in the sample set are sequentially heated at a temperature increasing rate of 50° C./min, and helium ions are injected at a rate of 50 shots/min into the pyrolysis gas generated in a temperature range of 50-550° C. to ionize the gas, thereby acquiring a two-dimensional MS spectrum having m/z along a horizontal axis and temperature along a vertical axis.
Next, the mass spectrum acquired at each heating temperature is stored in each row to acquire a two-dimensional mass spectrum of each sample, and at least two or more of these two-dimensional mass spectra are merged and converted to a data matrix (step S103).
A method of acquiring the data matrix will be described in detail.
In step S102, mass spectra are acquired continuously on the basis of each predetermined temperature increasing interval. While these mass spectra can be used as they are in generating the data matrix, they may be used after being averaged on the basis of each predetermined temperature increasing range. Averaging the mass spectra on the basis of each predetermined temperature increasing range and merging the mass spectra into one allows compression of a data volume. This temperature increasing range is approximately 10-30° C., for example.
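As an illustration of this averaging step, the sketch below merges spectra acquired at a fine temperature interval into 20° C. windows; the acquisition temperatures, spectra, and window width are assumed values.

```python
import numpy as np

# Spectra acquired every 5 deg C from 50 to 545 deg C (assumed), each with
# 100 m/z channels of non-negative intensities.
temps = np.arange(50, 550, 5)
spectra = np.abs(np.random.default_rng(1).normal(size=(len(temps), 100)))

window = 20                           # width of each merged temperature range
bins = (temps - temps[0]) // window   # window index of every spectrum
two_d = np.array([spectra[bins == b].mean(axis=0) for b in np.unique(bins)])
# two_d holds one averaged mass spectrum per 20 deg C range: the rows of the
# two-dimensional mass spectrum of this sample.
```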
In each spectrum, a peak intensity may be normalized. For example, this normalization can be realized by a method of normalizing peak intensities in such a manner that the sum of squares of the peak intensities becomes 1.
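This normalization can be sketched as follows, using two toy spectra: each row is divided by the square root of its sum of squared intensities, so the sum of squares of each normalized spectrum becomes 1.

```python
import numpy as np

# Two toy mass spectra, one per row.
spectra = np.array([[1.0, 2.0, 2.0],
                    [0.0, 3.0, 4.0]])

# Divide each row by the Euclidean norm (square root of the sum of squares),
# guarding against all-zero rows.
norm = np.linalg.norm(spectra, axis=1, keepdims=True)
normalized = spectra / np.where(norm == 0.0, 1.0, norm)
```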
As a result, a single measurement on one sample yields a predetermined number of mass spectra, one for each heating temperature (or each merged temperature range); this number depends on the manner of merging and may be 20, for example.
By storing each of these mass spectra in a separate row, with the rows ordered by heating temperature, a two-dimensional mass spectrum is acquired for each sample.
After the two-dimensional mass spectra are acquired for the respective samples in this way, at least two or more of these spectra are merged and converted to a data matrix X.
As long as the number of the two-dimensional mass spectra used in generating the data matrix X is equal to or greater than 2, it is not particularly limited. Meanwhile, it is preferable that two-dimensional mass spectra of all the samples in the sample set be used.
If one sample is to be subjected to measurements twice or more, some or all of two-dimensional mass spectra acquired by the two or more measurements may be used in generating the data matrix X.
Next, the acquired data matrix is subjected to non-negative matrix factorization (NMF) to be factorized into the product of a normalized base spectrum matrix and a corresponding intensity distribution matrix (step S104).
The non-negative matrix factorization will be described.
Under the assumption that a linear additive property holds between the intensity of a spectrum and the component composition of a sample, a data matrix of spectra can generally be subjected to matrix decomposition as

X = C S^T.
Here, the data matrix X is an N×M matrix obtained by stacking N spectra, each having M channels, S is the base spectrum matrix, and C is the corresponding intensity distribution matrix. Assuming that the number of all components contained in the data set is K1, C is an N×K1 matrix and S is an M×K1 matrix. As noise is present in actual measurement, the decomposition is written as

X = C S^T + N, and
by taking the error matrix N into consideration in this way, the matrices C and S are defined in such a manner as to minimize N. The sign T at the upper right of a matrix denotes the transposed matrix. Here, on the assumption of the existence of Gaussian noise, minimizing N reduces to minimizing the squared Frobenius norm ‖X − C S^T‖².
As C and S in the foregoing formulas do not have unique solutions, a constraint condition assumed to be satisfied by a spectrum in response to physical requirements is imposed. On the basis of the obvious requirement that a composition ratio or a spectrum intensity must take a non-negative value, the NMF is formulated as

min_{C ≥ 0, S ≥ 0} ‖X − C S^T‖². (formula 7)
However, as a unique solution is not defined under the non-negative condition alone, an additional constraint condition responsive to the measuring device is required. The constraint conditions for NMF in the present step will be described below.
Preferably, in the present step, the NMF process solves the foregoing formula (formula 7) as a convex optimization problem with respect to each column of C and S by updating C and S alternately in a column-vector-wise manner using hierarchical alternating least squares (HALS), into which the constraints mentioned below can be incorporated easily.
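A bare-bones sketch of one HALS sweep for X ≈ C S^T is shown below, without the ARD, orthogonality, or merging refinements described next (all of which would modify these closed-form column updates); the data are random stand-ins.

```python
import numpy as np

def hals_sweep(X, C, S, eps=1e-12):
    # One HALS sweep: each column of C and S in turn is updated by the
    # closed-form solution of its non-negative least-squares subproblem.
    K = C.shape[1]
    for k in range(K):
        # Update column k of C with S fixed.
        num = X @ S[:, k] - C @ (S.T @ S[:, k]) + C[:, k] * (S[:, k] @ S[:, k])
        C[:, k] = np.maximum(num / (S[:, k] @ S[:, k] + eps), 0.0)
        # Update column k of S with C fixed.
        num = X.T @ C[:, k] - S @ (C.T @ C[:, k]) + S[:, k] * (C[:, k] @ C[:, k])
        S[:, k] = np.maximum(num / (C[:, k] @ C[:, k] + eps), 0.0)
    return C, S

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(40, 60)))
C = np.abs(rng.normal(size=(40, 3)))
S = np.abs(rng.normal(size=(60, 3)))
err0 = np.linalg.norm(X - C @ S.T)
for _ in range(50):
    C, S = hals_sweep(X, C, S)
err1 = np.linalg.norm(X - C @ S.T)   # reconstruction error after the sweeps
```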
According to the present method, mass spectrometry is performed on the gas generated by thermal desorption and/or pyrolysis while each sample is heated. Hence, the number K1 of components after pyrolysis is hard to obtain as prior information and accordingly should be inferred from the data. Automatic relevance determination (ARD) is preferred as the inference method. This method assumes sparseness in the intensity distribution matrix C, an assumption that is natural considering that pyrolysis of a component occurs only at a particular temperature of a sample containing that component. In this way, a balance is struck between the pressure to minimize the number of components and the pressure to minimize the foregoing formula (formula 7), making it possible to determine an optimum number of components.
A mass spectrometer instrument generally has a resolution of approximately 0.1-0.001 m/z. This makes it rare for different gas components to coincide in m/z to several places after the decimal point and thus share the same channel. For this reason, it is reasonable to assume a certain degree of orthogonality between the column vectors of the base spectrum matrix S.
This not only achieves the effect of arriving at a favorable local solution but is also expected to prevent fitting from concentrating only on high-intensity peaks. On the other hand, complete orthogonality combined with non-negativity results in an excessively severe constraint. For this reason, it is preferable that the orthogonal constraint be imposed softly by adding a penalty term having a Lagrange's undetermined multiplier (soft orthogonal constraint).
If two or more base spectra nearly equal to each other are acquired during the course of updating C and S, a correct number of components can be acquired by merging these spectra into one spectrum. By applying the technique for EELS spectral images to the NMF and writing the k-th column vector of S as S:k, in the presence of k<m satisfying ST:kS:m>threshold, merging can be conducted as (Merging C):
Here, a threshold is a hyper parameter given in a range of 0.9-0.99. Preferably, merging is performed on similar S in the same way. Still preferably, if C has a plurality of column vectors parallel to each other, corresponding column vectors in S are superimposed in response to the weight of C. Specifically,
This is expected to achieve the effect of merging isotopic peaks or oligomer peaks into one base spectrum.
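The merging rule above can be sketched as follows (a hedged illustration; function and variable names are ours): if two normalized base spectra satisfy S:k^T S:m > threshold, the intensity column of component m is added into that of component k and the duplicate spectrum is dropped.

```python
import numpy as np

def merge_similar(C, S, threshold=0.95):
    Sn = S / np.linalg.norm(S, axis=0, keepdims=True)
    keep = np.ones(S.shape[1], dtype=bool)
    for m in range(S.shape[1]):
        for k in range(m):
            if keep[k] and keep[m] and Sn[:, k] @ Sn[:, m] > threshold:
                C[:, k] += C[:, m]            # (Merging C)
                keep[m] = False
    return C[:, keep], S[:, keep]

S = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.1],
              [0.5, 0.2, 1.0]])              # columns 0 and 2 are nearly parallel
C = np.ones((4, 3))
C2, S2 = merge_similar(C, S)
print(S2.shape[1], C2[0, 0])                 # 2 2.0
```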
The update formula obtained by the foregoing method will be described.
The total joint distribution of the model is
An element λk in this formula is a parameter following an inverse gamma distribution and representing the significance of component k in the data. As the noise is assumed to have a Gaussian distribution,
Assuming that K is a sufficiently large number of components (about 50) to be input as an initial value, the exponential distribution to be followed by C is
An inverse gamma distribution to be followed by λk is
Here, a is a hyper parameter with which a−1 becomes an extremely small positive number, and b is a parameter defined as follows using an average μx of data matrices:
Furthermore, an expected value of an element in C is
A negative log likelihood of p(X, C, S) is
Here, the variance parameter σ2 under the assumption of a Gaussian distribution is given as
Furthermore, as the function is convex with respect to λ,
Next, an update formula for S will be described. Extracting only a term J(S:k) related to S:k from a negative log likelihood function is considered first.
Using a contribution of a component k to X written in this way,
This formula was derived on the basis of the following: Shiga, M. et al., Sparse modeling of EELS and EDX spectral imaging data by nonnegative matrix factorization. Ultramicroscopy 170, 43-59 (2016). Here, the penalty term determined by the foregoing orthogonal constraint on S is incorporated.
Here, ξk is a Lagrange's undetermined multiplier, and
Through differentiation with respect to S:k,
By defining these formulas, an update formula for S:k is obtained as
Regarding ξk to apply a complete orthogonal condition,
Regarding a soft orthogonal condition, the degree of orthogonality can be adjusted freely between complete orthogonality and no orthogonality constraint by multiplying ξk by a hyper parameter wϵ[0, 1]. A final update formula is
An update formula for C will be described. Extracting only a term J(C:k) related to C:k from a negative log likelihood function is considered first.
By defining normalization in this way, on the basis of the following:
A specific algorithm is as follows:
Referring back to
More specifically, a noise component in the intensity distribution matrix is extracted through analysis on canonical correlation between the base spectrum matrix and the data matrix and the intensity distribution matrix is corrected so as to reduce influence by the noise component, thereby acquiring the corrected intensity distribution matrix.
The NMF is low-rank approximation of a data matrix. Thus, even if the component k is not actually present in an i-th spectrum, Cik>0 is established in a case where assuming the presence of the component k provides better approximation in the sense of a least square. In many cases, such Cik is considerably small and often does not cause any problem in the NMF factorization.
In detecting a minor component, however, distinction is preferably made between Cjk>0 for a component actually present in tiny quantity in the j-th spectrum and Cik>0 arising as an NMF artifact, and 0 is preferably substituted into the latter Cik as a ghost peak. The reason for this is that removing such a spurious peak derived from the artifact, as noise, from the NMF algorithm makes it possible to provide inference results with higher precision.
In the present step, canonical correlation analysis is employed to solve the foregoing issue. The present step is one of features of the present method and was named canonical correlation analysis (CCA) filter by the present inventors.
Conceptually, the CCA filter scans sample-wise to see whether each component in the base spectrum ST output from the NMF is actually contained in the original data, and deletes the component from the spectrum of a sample if a similar peak pattern is not observed in the original data. If the spectra generated from the L-th sample are stored from the l-th row to the l′-th row of the data matrix X, the submatrix obtained by extracting this part is written as Xδ(L). Specifically, δ(L)=(l, l+1, . . . , l′−2, l′−1).
The CCA is conventionally known as a technique of examining correlation between two-system data segments and is formulated as follows.
A matrix composed of N p-dimensional vectors is written as
A matrix composed of N q-dimensional vectors is written as
Then, the following formula is generated:
Here,
Specifically, the problem to be solved by the CCA is to project Y and Z each onto one axis and find a solution (aT, bT)T with which the correlation coefficient ρ therebetween becomes maximum.
Here, (aT, bT)T and the correlation coefficient ρ are given as a characteristic vector and a characteristic value of the generalized eigenvalue problem in the following formula, and all the characteristic vectors (aT, bT)T are written collectively as (AT, BT)T. Then, the following formula is generated:
Here, it is assumed that the presence or absence of the base spectrum S:k of one component k in a sample L is to be judged. First, a pertinent partial data matrix Xδ(L) is extracted from the data matrix X. Meanwhile, ST is divided into two matrices including a matrix of a group of spectra SsimT all similar to S:kT and a matrix of a group of spectra SoutT largely different from S:kT.
Then, the CCA is performed in this way. Here, all characteristic values satisfying ρ>t1 are extracted with respect to a certain value t1ϵ[0,1]. If a first component of a corresponding characteristic vector b satisfies b1>t2bmax with respect to a maximum component of this vector, it is determined that the base spectrum S:k of the component k is contained in the sample L. If not, corresponding Cδ(L)k is determined to be 0.
Here, as t1 and t2 are hyper parameters strongly influenced by the fitting precision of the NMF, specifically, by the threshold as an NMF hyper parameter, deliberate examination is required. Generally, t1=0.9×threshold and t2ϵ[0.1, 0.3] are found from experience to produce favorable results.
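The CCA check underlying the filter can be sketched as follows (a hedged illustration; shapes, data, and thresholds are our own assumptions). Canonical correlations between two sets of spectra, viewed as functions over the channels, are the singular values of Qy^T Qz, where Qy and Qz are orthonormal bases of the centered column spaces; a candidate spectrum that is actually present in a sample yields a first canonical correlation near 1.

```python
import numpy as np

def first_canonical_corr(Y, Z):
    """First canonical correlation between column spaces of Y and Z."""
    def basis(M):
        U, s, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
        return U[:, s > 1e-10 * s[0]]        # orthonormal basis, zero directions dropped
    s = np.linalg.svd(basis(Y).T @ basis(Z), compute_uv=False)
    return min(s[0], 1.0)

rng = np.random.default_rng(4)
s_k = np.abs(rng.normal(size=200))                      # candidate base spectrum
present = (np.outer(rng.uniform(1, 2, size=8), s_k)
           + 0.01 * rng.normal(size=(8, 200)))          # samples containing s_k
absent = np.abs(rng.normal(size=(8, 200)))              # unrelated sample spectra
t1 = 0.9
print(first_canonical_corr(present.T, s_k[:, None]) > t1)   # True: s_k detected
print(first_canonical_corr(absent.T, s_k[:, None]) > t1)    # False: not detected
```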
In the example given above, the CCA filter is applied to the NMF artifact (noise). However, the foregoing noise removal is applicable not only to the NMF artifact but also to removal of background or contamination.
Specifically, examples of noise removable by the CCA filter include: (1) an artifact from the NMF (so-called ghost peak); (2) a peak derived from a minor chemical material mixed from an environment around the device (background); and (3) a peak derived from a material contained in a sample and having a risk of distorting the substance of the sample if incorporated in analysis (contamination).
The sample set handled by the present method contains a background sample. Thus, by defining the following formula:
If the background sample further contains a contamination material, noise derived from the contamination can also be removed by following the same procedure as that described above. While the contamination material is not particularly limited, it may be a solvent used for preparation or loading of a sample, for example.
The present step may further include a step of correcting an intensity in an intensity distribution matrix.
For example, the DART-MS ionizes water or ammonia in the air primarily using helium ions, thereby ionizing a sample secondarily. For this reason, a peak intensity may be changed by an atmospheric condition such as a humidity. Furthermore, the mass of a sample to be loaded generally differs between samples and this also influences a peak intensity. Moreover, the ionization efficiency of each component contained in a sample generally differs between samples and this also influences a peak intensity.
By correcting the intensity in the intensity distribution matrix, it becomes possible to obtain more accurate quantitative result by alleviating these influences.
If all samples contain all the K types of components and if these samples include a calibration sample having a known composition, the intensity correction can be made by normalizing the intensity distribution matrix.
Normalizing the intensity distribution matrix means normalizing the matrix C for each sample; more specifically, it means performing an operation of scaling so that all elements sum to a total of 1 for each sample.
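The sample-wise normalization just described can be sketched as follows (the way rows are partitioned into samples here is purely illustrative):

```python
import numpy as np

def normalize_per_sample(C, sample_slices):
    """Scale the rows of each sample so that their elements sum to 1."""
    C = C.astype(float).copy()
    for sl in sample_slices:
        total = C[sl].sum()
        if total > 0:
            C[sl] /= total
    return C

C = np.array([[2.0, 2.0],
              [4.0, 0.0],     # rows 0-1: sample 1 (two temperature zones)
              [1.0, 3.0]])    # row 2:    sample 2
Cn = normalize_per_sample(C, [slice(0, 2), slice(2, 3)])
print(Cn[0:2].sum(), Cn[2:3].sum())   # 1.0 1.0
```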
Partitioning the matrix C sample-wise and writing the matrix C derived from the sample L as Cδ(L),
As information about a pyrolytic temperature is unnecessary in composition analysis,
This vector is used as a feature quantity vector of the sample L. Next, feature vectors of respective p single component samples measured as end members are aligned to form an end member matrix:
A feature vector of a mixed sample can be written as
As one calibration sample necessarily gives one or more pyrolysis products, the system is written as a vertically-long matrix of full column rank. For this reason, an exact solution is generally infeasible, and an approximate solution in the sense of a least square can be obtained by quadratic programming.
In the mass spectrum acquired by the present method, ionization efficiency differs between compounds, pyrolysis efficiency also differs therebetween, and a compound quantity and a peak intensity are not proportional to each other.
However, a peak intensity ratio including the ionization efficiency of the component k acquired by the NMF is already reflected in S:kT and an intensity ratio between spectra is reflected in C:k.
By the provision of such a hierarchical data configuration formed by the NMF, as long as consideration is given to a peak intensity ratio in an end member, it is possible to make more accurate quantitative composition analysis even without the need of determining unknown ionization efficiency.
Meanwhile, if the sample set does not include a calibration sample, at least part of (preferably, all of) the intensity distribution matrix is corrected using the product of the mass of the corresponding sample and an environmental variable. By doing so, it becomes possible to make more accurate quantitative analysis.
The environmental variable mentioned herein is a variable representing influence on ionization efficiency of each component during observation.
If a component has high ionization efficiency, a peak intensity is increased. If a component has low ionization efficiency, a peak intensity is reduced.
Regarding the mass of a sample, a peak intensity is generally increased with increase in the mass and a peak intensity is generally reduced with reduction in the mass.
If the sample set does not include a calibration sample, the matrix C is corrected as follows so as to make intensity information between samples held in the intensity distribution matrix available without normalizing the intensity distribution matrix:
Here, mL is the mass of the sample L, and IL is the peak intensity, observed at 610 m/z over the entire temperature range, of (poly)dimethylsiloxane generated from an oil pump annexed to the DART-MS. Furthermore, IL reflects the standby state during the measurement, and a peak intensity actually differs by about three times between a rainy day and a sunny day. As a result of the correction,
This vector is used as a feature vector of the sample L. Then, an end member matrix is formulated from a feature vector of an end member in the same way as follows:
Then, on the basis of the following:
While IL has been described as being derived from poly(dimethylsiloxane) and observed at 610 m/z, it is not limited thereto but is preferably derived from a compound having a molecular weight of 50-1500 contained in a predetermined quantity in the atmosphere during observation and/or from an organic low-molecular compound having a molecular weight of 50-500 contained in a predetermined quantity in each of all the samples.
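The mass/reference-intensity correction described above can be sketched as follows (a hedged illustration; the row partitioning and the numbers are our own assumptions): each sample's intensities are divided by the product of its loaded mass mL and its internal-reference peak intensity IL, so intensity information remains comparable between samples without normalization.

```python
import numpy as np

def correct_intensities(C, masses, ref_intensities, sample_slices):
    """Divide each sample's rows of C by m_L * I_L."""
    C = C.astype(float).copy()
    for sl, m, I in zip(sample_slices, masses, ref_intensities):
        C[sl] /= m * I
    return C

C = np.array([[6.0], [3.0], [8.0]])
slices = [slice(0, 2), slice(2, 3)]            # sample 1: rows 0-1, sample 2: row 2
Cc = correct_intensities(C, masses=[2.0, 4.0],
                         ref_intensities=[1.5, 1.0], sample_slices=slices)
print(Cc.ravel().tolist())                     # [2.0, 1.0, 2.0]
```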
Referring back to
Furthermore, a K-1 dimensional simplex including all of these feature vectors (each corresponding to one sample) is defined, and K end members in the K-1 dimensional simplex are determined (step S107).
If the learning sample contains an end member and the learning sample is given a determination label (a label for determining that this learning sample is an “end member”), the end member is determined on the basis of the label.
If the learning sample contains an end member and the learning sample is not given a determination label, specifically, if the learning sample contains an end member but it is unknown which learning sample contains the end member, the end member is determined through vertex component analysis on a feature vector.
If the learning sample does not contain an end member, an end member is determined on the basis of an algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume. If the K-1 dimensional simplex is a two-dimensional simplex (triangle), the minimum volume of the K-1 dimensional simplex means a minimum area thereof.
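As an illustrative check of the "minimum volume" idea for K = 3: the volume of a (K-1)-simplex spanned by K vertices can be computed from the Gram determinant of the edge vectors, and for a triangle this is simply its area (the helper below is our own sketch, not part of the minimum-volume algorithms named later).

```python
import math
import numpy as np

def simplex_volume(V):
    """Volume of the simplex with vertex rows V, via the Gram determinant."""
    E = V[1:] - V[0]                       # edge vectors from the first vertex
    G = E @ E.T
    return math.sqrt(max(np.linalg.det(G), 0.0)) / math.factorial(E.shape[0])

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(simplex_volume(tri))                 # 0.5
```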
According to one aspect of the present method, even if the learning sample does not contain an end member, at least one feature vector of the learning sample is present in each region external to a hyperspherical body inscribed in the foregoing K-1 dimensional simplex (if K is equal to or greater than 3), making it possible to determine an end member precisely.
As shown in
This was devised on the basis of Robust Volume Minimization-Based Matrix Factorization for Remote Sensing and Document Clustering, IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 64, NO. 23, Dec. 1, 2016.
There is no particular limitation on the algorithm by which a vertex is defined in such a manner that the K-1 dimensional simplex has a minimum volume but a publicly-known algorithm is applicable. Examples of such an algorithm include a minimum volume constraint-NMF method, a minimum distance constraint-NMF method, a simplex identification via split augmented Lagrangian method, and a simplex volume minimization method.
Next, a Euclidean distance between each of K end members and a feature vector of an inference target sample is calculated to infer a content ratio of each component in the sample (step S108).
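Estimating a composition from the K end members can also be posed as a constrained least-squares fit. As a hedged illustration (the solver and names here are our own; this step of the text uses Euclidean distances, and the fully constrained least squares (FCLS) problem described later in the document is solved there by ADMM), the problem of minimizing ||y − Ex|| subject to x ≥ 0 and Σx = 1 is solved below by projected gradient descent onto the probability simplex:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def fcls(E, y, n_iter=2000):
    """min ||y - E x|| s.t. x >= 0, sum(x) = 1, by projected gradient."""
    x = np.full(E.shape[1], 1.0 / E.shape[1])
    step = 1.0 / np.linalg.norm(E, 2) ** 2
    for _ in range(n_iter):
        x = project_simplex(x - step * E.T @ (E @ x - y))
    return x

E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                     # columns: end-member feature vectors
y = E @ np.array([0.3, 0.7])                   # mixture with known ratio
x = fcls(E, y)
print(np.round(x, 3).tolist())                 # [0.3, 0.7]
```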
According to another aspect of the present method, if the learning sample contains at least one end member (if at least one learning sample is an end member), a feature vector of another learning sample may be present in a region inside a hyperspherical body inscribed in the K-1 dimensional simplex. Like
Even if at least one learning sample is an end member, a different learning sample may still be present on a hypersphere inscribed in the K-1 dimensional simplex or in a region external to the hypersphere. Specifically, in this case, the other learning sample may be present at any position.
In terms of achieving more excellent effect of the present invention, it is particularly preferable that the other learning sample contain 20% by mass or more, more preferably, 40% by mass or more of a component in an end member different from the learning sample as an end member (a component in a different end member).
According to the present method, even if a sample is a solid sample hard to dissolve in a solvent and hard to undergo conventional composition analysis, this sample is still available for analysis without particular preparatory process and can be subjected to accurate quantitative analysis. The present method achieves significant reduction in time required for obtaining result, particularly by being applied to quality control of a sample as a mixture of a plurality of components, investigation of a cause for failure, etc.
A second embodiment of the present method includes a method of determining an end member (vertex) applied if the learning sample does not include an end member, specifically, if there is no learning sample at a vertex of the K-1 dimensional simplex in the step S107 in the first embodiment already described.
One feature of the method according to the present embodiment is that an end member is determined by a second NMF process by which a matrix representing the fragment abundance of each sample is subjected to non-negative matrix factorization to be factorized into the product of a matrix representing the weight fractions of the K types of components in the sample and a matrix representing the fragment abundance of each of the K types of pure components.
While each sample is a mixture of the K types of components (this will be described as a mixture of “polymers”), what can actually be measured is an appropriate linear sum of spectra S of M types of fragment gases generated by pyrolysis of each polymer.
This feature according to the present embodiment is based on an idea that, by expressing a coefficient matrix thereof as
is formulated. Assuming that pure samples of the K polymers can be subjected to measurement and can be observed as
a composition C can be obtained by performing the second NMF process in this way.
Here,
is the abundance of the M fragments generated from each of the K pure components (polymers).
The following describes the details of a difference between the NMF process in step S104 (hereinafter also called “first NMF process”) and the NMF process performed in determining an end member in step S107 (hereinafter also called “second NMF process”) in the flow of
In the following description, symbols, etc. used in the description of the first embodiment will be replaced with those shown in the following table:
Furthermore, in many cases, notations used in the following description follow a system generally employed in signal processing.
This is a non-negative or real-number N×M matrix.
Regarding the following matrix
represents an n-th row vector, an m-th column vector, and an (n, m)-th element. Furthermore, XT is a transposed matrix of X, and
represents a Frobenius norm.
represent the following in an n-th row of X:
If X is a square matrix, Tr(X) represents its trace and diag(X) represents the diagonal matrix composed of its diagonal components. Furthermore, 1N and 11N represent an N-dimensional vector and an (N, N) matrix, respectively, where all elements are 1. Moreover, IN represents an N-dimensional unit matrix.
Here, N is the number of samples, T is the number of temperature zone partitions, D is the number of channels, and M is the number of base spectra.
The first NMF was developed so as to achieve better conformity with data interpretation of MS (mass spectrometry) by adding the following three points as main changes to ARD-SO-NMF proposed by Shiga, et al.
is estimated on the basis of a natural isotopic peak.
Regarding 1), independent and identically distributed (i.i.d.) Gaussian noise with an equal variance σ2 is assumed for the time being, and a variance-covariance matrix R is introduced later. Assuming i.i.d. noise, a probability generative model for the data matrix X can be written as
As the number M of base spectra is unknown, automatic relevance determination (ARD) is introduced on the basis of sparseness of a distribution matrix A for allowing this number to be inferred automatically. First, an exponential distribution parametrized with λm for each basis component is assumed as a prior distribution of A.
Specifically, with respect to the following:
An entire probability model can be written as
Here, p(S) is a uniform distribution on a hypersphere as follows:
Furthermore, p(λ|a, b) is an inverse gamma distribution parametrized with (a, b), which is specifically
Here, a is a hyper parameter for adjusting sparseness and a=1+10−16, and b can empirically be estimated from an expected value E (Aim) of Aim and is related with
This further has a relation with E(Xid) as follows:
By approximating E(Xid) with an average μx of X
b is defined as follows:
Thus, a negative log likelihood function can be written as
This is a downward-convex function with respect to λ. Thus, an update formula for λ is obtained as
specifically, as
The derivation of the ARD-SO-NMF proposed by Shiga, et al. has been followed completely so far.
The assumption of an i.i.d. Gaussian distribution for the noise does not hold for MS data: a larger signal is known to be likely to carry larger noise. Then, a noise distribution is determined as
as a residual component of linear regression resulting from the natural isotopic peak. Specifically,
is the set of channels [d−30, d−20, d−10, d+10, d+20, d+30] centered around the channel d, corresponding to the isotopic peaks within ±3 m/z of the channel d (note that the channel spacing is 0.1 m/z). The variance-covariance matrix of each channel, which is given as
is determined as follows:
Using this formula, the likelihood function is rewritten as
Note that R is a constant matrix. A total negative log likelihood function is defined as
By substituting the update formula Eq. S3 for λ and making simplification by omitting a constant term,
is obtained. This function is minimized with respect to A and S by using hierarchical alternating least square (HALS). For simplification, the following vector notation is used:
According to the HALS, a residual X-AS is expressed as
Then, L(X, A, S, λ) can be written separately between components m as
Soft orthogonal constraint on S can be incorporated as a penalty term as follows into an objective function L:
is established. This term represents the non-orthogonality of the m-th component with the other components, and ξm represents a Lagrange's undetermined multiplier applied if orthogonality is satisfied in a strict sense. This term is relaxed further using woϵ[0, 1]. Thus, the objective function to be minimized is defined as
Slopes with respect to am and sm are as follows:
and by assuming these values as zero, update formulas are written as
As sm is normalized upon each update, a constant coefficient was omitted. The non-negative constraint was satisfied by projection onto the non-negative quadrant upon each update. As a specific example, the projection can be realized as
where,
means a vector assuming an absolute value for each element. By multiplying Eq. S6 by
from the left side and applying the following strict orthogonal condition and wo=1:
the undetermined multiplier ξm is obtained as
Using the equations S1 to S7 given above, an algorithm 1 is suggested as follows:
-normalization.
Here, each time A and S are updated, similar components are merged with each other. While merging of components having similar spectra has been suggested by Shiga et al., components having similar intensity distributions are additionally merged herein. By doing so, an isotopic peak, a fragment series ionized through addition of different ions, an oligomer peak series having different numbers of monomers, etc. can be merged into one component, making it possible to provide results with a higher degree of interpretability.
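The core of Algorithm 1 can be sketched, in deliberately simplified form, as follows. This is our own minimal HALS-NMF illustration, omitting the ARD, noise-covariance, soft-orthogonality, and merging steps described above: each component's intensity column and spectrum row are updated in turn by least squares on the residual that excludes that component, projected onto the non-negative quadrant, and the spectrum is renormalized after every update.

```python
import numpy as np

def hals_nmf(X, M, n_iter=500, seed=0):
    """Minimal HALS-style NMF: X (N x D) ~ A (N x M) @ S (M x D), A, S >= 0."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    A = np.ones((N, M))
    S = X[rng.choice(N, size=M, replace=False)].astype(float)  # init S from data rows
    for _ in range(n_iter):
        for m in range(M):
            R = X - A @ S + np.outer(A[:, m], S[m])   # residual excluding component m
            A[:, m] = np.maximum(R @ S[m] / (S[m] @ S[m] + 1e-12), 0.0)
            S[m] = np.maximum(A[:, m] @ R / (A[:, m] @ A[:, m] + 1e-12), 0.0)
            nrm = np.linalg.norm(S[m]) + 1e-12
            S[m] /= nrm                               # keep each base spectrum normalized
            A[:, m] *= nrm
    return A, S

rng = np.random.default_rng(3)
X = np.abs(rng.normal(size=(30, 2))) @ np.abs(rng.normal(size=(2, 40)))
A, S = hals_nmf(X, 2)
rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
print(rel_err < 0.1)
```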
(Derivation of Canonical correlation analysis filter (CCA-filter))
Output: List of component numbers judged to be derived from background
The m-component obtained by the first NMF may include a component derived from background or a tramp material. Such a component might distort the composition analysis result and is preferably removed from A and S accordingly. Assuming that an M′-component is judged to be a component derived from background by the CCA-filter,
respectively. While the number of components after application of the CCA-filter is M-M′ for simplification, M is still used consistently.
To apply the CCA-filter, a background spectrum
is required to be incorporated in the data set and required to be used together with a sample spectrum for performing the first NMF. If the presence of any tramp material can be expected, a spectrum measured for the tramp material can be used as XBG. According to the CCA-filter, components m=1, . . . , M are checked one by one to see whether their respective spectra Sm: are contained in XBG.
A first step is to partition S composed of an M-spectrum and subjected to:
into a spectrum set similar to Sm::
and
into a spectrum set dissimilar to Sm::
This partitioning is performed in such a manner as to satisfy
Here, t1ϵ[0, 1] is a certain threshold and t1=0.2 was consistently used in the present invention. Furthermore, as Sm: is always contained in Y, Sm: is stored in a first column of Y. Z is combined with XBG as
In order to obtain an average zero, Y and Z are defined as
These two spectrum sets are subjected to the CCA. The CCA is to generate a pair of spectra as similar as possible to each other through linear combination inside the two spectrum sets.
It is assumed that coefficients of linear combination of
Accordingly, the spectrum pair can be written as
Similarity therebetween is evaluated using the correlation coefficient ρ as
A problem setting by the CCA can be written as
A solution thereof, which is written as
is provided as a solution of a generalized eigenvalue problem as
is a matrix obtained by aligning the following characteristic vectors as column vectors in descending order of characteristic value:
Each characteristic value
is a coefficient of correlation between y and z generated through linear combination using the corresponding (u*, v*) as coefficients. Here, u* corresponding to every characteristic value satisfying ρ>t2 is extracted. If the first element, which corresponds to the coefficient of Sm:, is a large component contributing substantially to u*, specifically, if
the component m is judged to be a component derived from background and removed from the system. Here, t2ϵ[0.9, 0.99] and t3ϵ[0, 1]. These processes are summarized as follows as an algorithm 2:
After the background component is identified, a corresponding column vector in A and a corresponding row vector in S are deleted to be removed from the system.
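The deletion step can be sketched as a trivial helper (names are ours): the column of A and the row of S belonging to each identified background component are removed from the system.

```python
import numpy as np

def remove_components(A, S, bg):
    """Delete column m of A and row m of S for each background component m."""
    keep = [m for m in range(A.shape[1]) if m not in set(bg)]
    return A[:, keep], S[keep, :]

A = np.ones((5, 3))
S = np.ones((3, 7))
A2, S2 = remove_components(A, S, bg=[1])
print(A2.shape, S2.shape)    # (5, 2) (2, 7)
```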
After removal of the M′-component derived from background, the following is output from the CCA-filter:
(This is expressed as follows while M-M′ is replaced with M for simplification:)
This is used for correcting an allocation intensity on the basis of a sample quantity and an internal reference peak. Then,
which is a one-dimensional vector generated from a submatrix section A(n) of A related to a sample n, may be used as a feature vector of the sample n and may be input in the second NMF. Meanwhile, as an M-fragment temperature distribution is unnecessary information for composition analysis, all temperature zones may be added for each sample so that M-fragment abundance (FA) may be obtained for each sample, and the following representing this abundance may be used as input:
Here, for simplification, the second NMF is performed with respect to the following:
In the following section, fitting by non-negative least square (NNLS) is performed frequently. Using an optimum non-negative coefficient as
and through column vector linear combination of a constant matrix as
the problem mentioned herein is to approximate:
and is determined by the following:
While this can be solved by a large number of optimization techniques including the alternating direction method of multipliers (ADMM) (1), ADMM-NNLS developed by Fu et al. is used herein to solve this problem and a solution is represented as
A non-negative coefficient vector xl*(l=1, . . . , L) corresponding to an approximate vector set Y=[y1, . . . , yL] can be calculated individually and is written as
or in a matrix form, as
A similar problem, with the restraint condition that the elements of the coefficient vector sum to 1, is called fully constrained least squares (FCLS); it can also be solved using ADMM and is written as
or in a matrix notation, as
Input: The following representing FA for each sample:
and the number K of polymer components.
Output: Polymer weight fraction for each sample:
and FA for each unit weight K-base polymer:
Second NMF:
is evaluated in terms of its approximation residual as follows using a Riemannian metric:
By obtaining a lower triangular matrix as follows through Cholesky factorization G=LLT
the following formula can be written:
which is equivalent to matrix factorization as follows:
Defining a row vector as
a volume term of a simplex spanned by this row vector is written as
and a non-orthogonality term between such row vectors is written as
Then, using α>0 and βϵ[0, 1] as weights,
On the basis of this formula,
is determined. Here, in order to provide robustness to an outlier (3), introduction of the following weighting matrix is suggested by Fu et al.:
Here, pϵ(0, 2], and ε is a small regularization parameter. According to the present invention, p=1.5 and ε=10−8 were consistently used. An optimization program is written as
Block coordinate descent (BCD) theory is employed to alternately optimize C and
Here, it is assumed that optimization is performed t times to obtain
Using vertex component analysis (VCA) (4) for initialization, K row vectors approximating the K vertices of a simplex are selected from the N row vectors of
Then, the selected vectors are defined as
Regarding update of C based on the following:
the update can be done simply by
Then, update based on C(t) from
will be considered. Here,
is an undetermined multiplier. Furthermore,
is a small regularization parameter, and τ=10−8 was consistently used. Here,
as a Majorizer function is introduced. Using a tangent inequality and on the basis of the following at previous timing:
is established. Here,
is a slope at H(t) in the following:
and const is a constant term. Thus, by replacing
all penalty terms are written collectively as
Accordingly, update of
can be obtained by solving the following:
In order to organize complicated constraint conditions,
is incorporated into an objective function using a Lagrange's undetermined multiplier as follows:
thereby solving this problem in the framework of ADMM.
and μ is a hyper parameter for ADMM (here, μ=1 was consistently used). Furthermore, Z′ represents Z at previous timing.
the objective function provides the following maximized with respect to Z:
This may be optimized with respect to
was optimized cyclically and an algorithm was summarized as Algorithm 3.
Objective function as
is optimization of a quadratic function without restraint condition with respect to
it is minimized by the following:
can be solved by NNLS and is updated as
As a result of the foregoing, the following update formula for solving the original problem Eq. S11 was obtained:
Finally, update of V=αF(t)+βΛ(t) is considered. Regarding F(t), it can be determined easily as
Regarding Λ(t) by combining strict orthogonal conditions, which are specifically
and by multiplying Eq. S19 by the following from the right side:
Λ(t) can be obtained as
is given. Using the foregoing, an algorithm for solving the problem Eq. S11 is obtained as follows:
(Projection of Test Data onto Hyperplane Spanned by S and B)
After S and B are inferred from the data set, the following data not having been used for the inference of S and B (here, called test data) is projected onto a hyperplane spanned by S and B by a method described herein:
First, by the projection onto an S-hyperplane,
is obtained. Specifically, a total sum is determined with respect to temperature zones
which is converted to
Then, by performing projection onto a B-hyperplane and normalization,
is obtained. Specifically,
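The projection steps described above (projection onto the S-hyperplane, summation over the temperature zones, projection onto the B-hyperplane, and normalization) can be sketched as follows. This is an illustrative reconstruction under assumed matrix shapes, not the exact procedure of the omitted equations; NNLS is used here as one way to realize a non-negative projection:

```python
import numpy as np
from scipy.optimize import nnls

def project_test_data(X_test, S, B):
    """Hypothetical sketch: express each column of X_test in the S basis
    (S-hyperplane), sum the coefficients over the temperature axis,
    express the summed vector in the B basis (B-hyperplane), and
    normalize so the composition coefficients sum to 1."""
    # Non-negative projection onto the S-hyperplane: X_test ~ S @ A
    A = np.column_stack([nnls(S, x)[0] for x in X_test.T])
    a = A.sum(axis=1)            # total sum over temperature zones
    c, _ = nnls(B, a)            # projection onto the B-hyperplane
    return c / c.sum()           # normalization

# Synthetic check with a known composition
rng = np.random.default_rng(0)
S = rng.random((50, 3))                     # basis spectra (assumed shapes)
B = rng.random((3, 2))                      # maps compositions to S coefficients
c_true = np.array([0.3, 0.7])
weights = np.array([0.1, 0.2, 0.3, 0.4])    # temperature profile, sums to 1
X_test = S @ np.outer(B @ c_true, weights)  # 4 temperature zones
c_est = project_test_data(X_test, S, B)
print(np.round(c_est, 3))
```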
The data set and hyper parameters are as follows:
Cited literatures are as follows:
Found. Trends Mach. Learn. 3, 1-122 (2010).
IEEE Trans. Signal Process. 64, 6254-6268 (2016).
A composition inference device according to an embodiment of the present invention will be described next by referring to the drawings.
A composition inference device 30 includes a mass spectrometer 31 and an information processing device 33. The mass spectrometer 31 includes a pressure-reducing pump 32. The information processing device 33 includes a processor 34, a storage device 35, a display device 36, and an input device 37. The mass spectrometer 31 and the information processing device 33 are configured in such a manner that data can be transferred therebetween.
Examples of the processor 34 include a microprocessor, a processor core, a multiprocessor, an ASIC (application-specific integrated circuit), an FPGA (field programmable gate array), GPGPUs (general-purpose computing on graphics processing units), and the like.
The storage device 35 has the function of storing various types of programs and data in a transitory manner and/or a non-transitory manner, and provides a work area for the processor 34.
Examples of the storage device 35 include a ROM (read only memory), a RAM (random access memory), an HDD (hard disk drive), a flash memory, and an SSD (solid state drive).
The input device 37 can accept various types of information and can accept input of a command to the composition inference device 30. Examples of the input device 37 may include a keyboard, a mouse, a scanner, and a touch panel.
The display device 36 can display a status of the composition inference device 30 as well as the progress of analysis, composition inference results, etc. of the composition inference device 30. Examples of the display device 36 may include a liquid crystal display and an organic EL (electroluminescence) display.
The display device 36 may be configured as a device integrated with the input device 37. In this case, the display device 36 may be configured as a touch panel display and may be configured to provide a GUI (graphical user interface).
The information processing device 33, in which the processor 34, the storage device 35, the input device 37, and the display device 36 communicate data with one another through a data bus, is typically a computer.
The mass spectrometer 31 includes the pressure-reducing pump 32, and is typically a mass spectrometer including a DART ion source, a sample heater, and a time-of-flight type mass spectrometer instrument. Both the ion source and the mass spectrometer instrument of the mass spectrometer 31 are non-restrictive examples and the configuration of the mass spectrometer provided to the present composition inference device is not limited to that described above.
The mass spectrometer unit 41 has a configuration including the mass spectrometer 31 and corresponds to a function realized by causing a controller 43 described later to control the mass spectrometer 31.
The mass spectrometer unit 41 ionizes gas components generated as a result of thermal desorption and/or pyrolysis on a loaded sample set (samples, learning samples, and background samples indicated by sign a in
At this time, an environmental factor resulting from an environment where the mass spectrometer 31 is installed (sign b in
Described next is a function of each part of the information processing unit 42 having a configuration including the information processing device 33.
The controller 43 has a configuration including the processor 34. The controller 43 controls each part to realize each function of the composition inference device 30.
A storage part 44 has a configuration including the storage device 35. The storage part 44 includes a program stored in advance for control over each part, and provides a transitory or non-transitory storage area for data input to and output from each part. The storage part 44 is controlled by the controller 43 and is capable of reading and writing data.
An input part 45 has a configuration including the input device 37. An output part 46 has a configuration including the display device 36. The controller 43 controls these parts, thereby allowing input from a user of the composition inference device 30 to be accepted and the accepted input to be stored into the storage part 44, allowing information in the storage part 44 to be transmitted to each part, and allowing the transmitted information to be presented to the user of the composition inference device 30 through the output part 46.
A data matrix generating part 47 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.
The data matrix generating part 47 generates a data matrix X from a two-dimensional mass spectrum including the mass spectrum (sign c in
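As a hedged illustration of this step, the two-dimensional spectra (m/z channels by temperature zones) of the individual samples can be assembled into a single data matrix X by concatenation along the column axis, so that one factorization covers all samples at once. All names and dimensions below are hypothetical:

```python
import numpy as np

# Hypothetical sketch of the data-matrix construction: each sample yields
# a two-dimensional spectrum (m/z channels x temperature zones); the data
# matrix X concatenates these blocks column-wise.
n_mz, n_temp = 100, 8
rng = np.random.default_rng(1)
spectra = [rng.random((n_mz, n_temp)) for _ in range(3)]  # 3 samples
X = np.concatenate(spectra, axis=1)                       # 100 x 24
print(X.shape)
```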
The NMF processing part 48 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.
The NMF processing part 48 performs non-negative matrix factorization (NMF) on the data matrix X (sign d in
The correction processing part 49 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.
The correction processing part 49 has the function of correcting the intensity distribution matrix through canonical correlation analysis between the base spectrum matrix generated by the NMF processing part 48 and the data matrix, thereby generating a corrected intensity distribution matrix. The generated corrected intensity distribution matrix is transferred to a vector processing part 50 (sign f in
The correction processing part 49 may further make the intensity correction already described in addition to the foregoing correction through the canonical correlation analysis. In this case, if there is a composition of a calibration sample (calib sample composition) to be used for the intensity correction, this composition is transferred to the correction processing part 49 through the input part 45 (sign k in
The vector processing part 50 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.
The vector processing part 50 partitions the corrected intensity distribution matrix generated by the correction processing part 49 into a submatrix corresponding to each of all the samples, and expresses all the samples in vector space using the submatrix as a feature vector. Result thereof is transferred to an end member determining part 51 (sign g in
The end member determining part 51 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.
The end member determining part 51 defines a K-1 dimensional simplex including all the feature vectors generated by the vector processing part 50, and determines K end members in the K-1 dimensional simplex. Result thereof is transferred to a content ratio calculating part 52 (sign h in
At this time, if the learning sample contains an end member and has a determination label, information about the determination label is transferred from the input part 45 to the end member determining part 51 (sign j in
As has already been described, however, the determination label is not essential for determining an end member (for processing by the end member determining part 51).
The content ratio calculating part 52 is a function realized by causing the controller 43 to execute the program stored in the storage part 44.
The content ratio calculating part 52 calculates a Euclidean distance between each of the K end members determined by the end member determining part 51 and the feature vector of an inference target sample generated by the vector processing part 50, and infers content ratios of the K components in the inference target sample on the basis of a ratio of the Euclidean distance between each of the K end members and the feature vector of the sample. Result of the content ratio inference is output from the output part 46 (sign i in
At this time, the composition inference result about the learning sample may be output together.
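The distance-based composition rule above is not given in closed form here. As an illustrative stand-in that realizes the same geometric idea, the feature vector can be expressed as a convex combination of the K end members (barycentric coordinates), whose weights sum to 1 and serve as content ratios; note that this replaces the stated Euclidean-distance ratio with a least squares solve:

```python
import numpy as np

def barycentric_composition(endmembers, v):
    """Express feature vector v as a convex combination of the K
    end-member vectors (rows of `endmembers`) and return the weights.
    Illustrative stand-in; the text bases the ratios on Euclidean
    distances to the end members instead."""
    K, d = endmembers.shape
    # Augment with a row of ones to enforce sum(weights) == 1
    A = np.vstack([endmembers.T, np.ones(K)])
    b = np.append(v, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example: 3 end members in 2-D, sample at a known 0.5/0.3/0.2 mixture
E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v = 0.5 * E[0] + 0.3 * E[1] + 0.2 * E[2]
w = barycentric_composition(E, v)
print(np.round(w, 3))
```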
Even if a sample is a solid that is hard to dissolve in a solvent and thus difficult to subject to conventional composition analysis, the present composition inference device still makes it possible to analyze the sample without any particular preparatory process and to acquire an accurate quantitative analysis result. The present composition inference device achieves significant reduction in the time required for obtaining a result, particularly when applied to quality control of a sample as a mixture of a plurality of components, investigation of a cause of failure, etc.
The present composition analysis method will be described in detail on the basis of examples. The composition analysis method of the present invention should not be interpreted in a restricted sense by the following specific examples.
Three types of polymer dioxane solutions (about 4% by mass, containing about 0.4 mg of polymer solid content) including poly(methyl methacrylate) (symbol “M”), polystyrene (symbol “S”), and poly(ethyl methacrylate) (symbol “E”) were cast on a sample pot and air-dried overnight, thereby obtaining samples. Table 4 shows the respective compositions of the samples. Mass spectrometry was performed using LCMS-2020 available from SHIMADZU CORPORATION as a detector, a DART ion source available from IonSense Inc., and ionRocket available from BioChromato, Inc. as a heater for the samples.
The resultant mass spectra were subjected to analysis using an algorithm that implements the following (i) to (iv):
Hyper parameters used in the analysis are as follows:
Orthogonal constraint weight w=0.2, initial component number K=50, small positive number a=1+0.1−15, iteration number L=2000, threshold=0.9
While the original data contains background observed at 390 m/z that may be phthalate ester exhausted from an air conditioner, for example (a portion indicated as “BG” in
In
In
It is found from the result shown in
Referring to
It was found from the result in
Referring to
It was found from the result in
While the ARD alone can reduce the number of components only to 36, additionally maintaining the Merging S condition allows the composition to be inferred more precisely with 9 components. In this way, the significance of Merging S was demonstrated.
In
It was found from this result that eliminating the CCA-filter leads to a failure to remove both the DOX noise and the BG noise (see the explanation about
To confirm that inference can be made precisely even if the samples (sample group) including the learning samples do not contain an end member, the samples listed in Table 5 were prepared and a test was conducted in the same way as in Example 1.
In the table, the symbols “M,” “S,” and “E” have the same meanings as in Table 4 (Example 1). Each sample was prepared in such a manner that the weight fraction of any single component (polymer) does not exceed 80%.
A two-dimensional mass spectrum of each of these samples was acquired in the same way as in Example 1 and converted to a data matrix. The resultant data matrix was subjected to the first NMF process already described, namely,
For estimation of the variance-covariance matrix of noise, the calculation can be made using X alone. Thus, the estimation was performed as a process inside a module in the first NMF process.
Next, A was subjected to intensity correction using the CCA-Filter, a weight, or a reference peak. As temperature distribution information is unnecessary for composition analysis, a total sum along a temperature axis may be obtained as follows for each sample:
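The omitted summation formula amounts to summing the intensity coefficients over the temperature zones of each sample. Assuming the columns of the coefficient matrix are grouped sample by sample (a hypothetical layout), this can be sketched as:

```python
import numpy as np

# Hypothetical layout: A holds K intensity coefficients per column, with
# one column per (sample, temperature zone) pair, grouped per sample.
# Summing the columns of each block leaves one coefficient vector per
# sample, discarding the temperature distribution information.
K, n_temp, n_samples = 4, 6, 3
rng = np.random.default_rng(3)
A = rng.random((K, n_temp * n_samples))
A_blocks = A.reshape(K, n_samples, n_temp)
A_summed = A_blocks.sum(axis=2)          # K x n_samples
print(A_summed.shape)
```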
Next, the following second NMF process was performed:
Evaluation and others using Riemannian metrics were performed as internal processing in the second NMF process.
Regarding C in the drawing, compositions inferred by the present method are indicated by circles and compositions of used samples (correct compositions) are indicated by stars. All these compositions exhibit favorable match therebetween.
Here, the absence of a plot at each vertex of C shows that the samples used herein do not contain an end member, namely, do not contain pure polymers of E, S, and M.
Regarding BS in the drawing, the spectra of pure polymers of E, M, and S acquired through inference and the actually-measured spectra of E, M, and S are superimposed on each other. There is no difference therebetween, proving that the inference, including that of absolute intensity, was made favorably.
As described above, as shown in BS in
In the presence of test data, Xtest is projected onto S and B continuously to obtain Ctest.
Next, it was confirmed that, if one of the (learning) samples contains an end member, the component contents of the other samples may be set arbitrarily (the end member is not required to be in a region external to a circle inscribed in the K-1 dimensional simplex).
Composition analysis was conducted in the same way as in Example 2, except that samples containing 40% by weight or more of S were excluded from the samples listed in Table 5 and that the M pure polymer was added as a sample. Result thereof is shown in
In
One of possible reasons therefor is that the following basis non-orthogonal term was incorporated as a penalty term into the second NMF process (Eq. 11) already described:
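The omitted penalty term of Eq. 11 is not reproduced here. Stated only as an illustrative assumption, a typical basis non-orthogonality penalty in orthogonality-constrained NMF takes the form

```latex
P_{\mathrm{orth}}(S) \;=\; w \sum_{i \neq j} \left( \mathbf{s}_i^{\top} \mathbf{s}_j \right)^2 ,
```

which vanishes when the base spectra \(\mathbf{s}_i\) are mutually orthogonal; adding it with a finite weight w encourages, rather than strictly enforces, orthogonality of the inferred bases.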
In many cases of material development, material performance is desired to be examined by being reduced into each constituent component. While this requires identification and quantification of components, the difficulty of both increases with the complexity of the system. High-resolution mass spectrometry, in which peaks of different components are unlikely to overlap each other, can be said to be the most powerful analysis method for identifying the chemical constitution of a component in a complicated mixture system. On the other hand, mass spectrometry has the problem that it lacks quantitativeness. Furthermore, while mass spectrometry by nature does not require separation, the resultant spectrum is complicated and, in reality, is often hard to read.
According to the technique developed by the present invention, a fragment of a material is subjected directly to pyrolysis in an atmosphere without preparatory process, the resultant gas is analyzed through DART-MS, and the resultant spectrum is learned in an unsupervised manner, thereby inferring the pure end member spectra of the respective components and representing the spectrum of a sample as a linear sum of those spectra. Even in the case of a material having a complicated composition, this technique allows identification and quantification of the components in the material without requiring separation.
Regarding identification, as long as a highly readable end member spectrum can be acquired from which peaks derived from different components have been excluded completely, the subsequent procedure can be followed relatively easily. On the other hand, the present invention also faces the fact that ionization efficiency differs between peaks, so a peak intensity is not proportional to the corresponding abundance ratio, which hinders quantitative analysis. In this regard, according to the present invention, each peak intensity in an end member spectrum can be inferred not as a normalized relative value but as an absolute intensity. In expressing a sample spectrum as a linear sum of end member spectra, this allows the linear coefficients to be read directly as composition ratios.
The present invention, according to which a sample is decomposed into its components and the content ratio of each component is determined, can clearly be a useful tool in material development in terms of identifying a key component responsible for the performance of a material. The present invention is further expected to be applied in a considerably wide range of fields including quality control such as inspection of the content of a degraded component, compliance inspection of the content of a dangerous substance, and inspection for detecting infringement. The present invention is further expected to accelerate techniques such as prediction of the degradation or lifetime of resin, detection of infinitesimal impurity components in resin or organic materials, quantitative analysis of smell or taste, and traceability tags for preventing false designation of the origin of food.
Furthermore, Plastics Europe submitted a proposal to the European Commission in September 2021 to make it mandatory to achieve a 30% content of recycled plastic in plastic containers and packaging by 2030. While the use of recycled materials is thus being promoted, a general technique of measuring such a content has not been established. The present invention is expected to function as a technique allowing accurate measurement of the ratio between a virgin material and a recycled material and to be applied in quality control, compliance inspection, and other areas.
Number | Date | Country | Kind |
---|---|---|---|
2021-104510 | Jun 2021 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2022/022813 | 6/6/2022 | WO |