This application is a National Stage Entry of PCT/JP2018/028565 filed on Jul. 31, 2018, the contents of which are incorporated herein by reference in their entirety.
The present invention relates to an analysis of a feature of gas.
A technique has been developed to obtain information related to gas by measuring the gas with a sensor. Patent Document 1 discloses a technique for discriminating the type of sample gas by using a signal (time-series data of detected values) obtained by measuring the sample gas with a nanomechanical sensor. Specifically, since a diffusion time constant of the sample gas with respect to a receptor of the sensor is determined by a combination of the type of the receptor and the type of the sample gas, it is disclosed that the type of the sample gas can be discriminated based on the diffusion time constant obtained from the signal and the type of the receptor.
In Patent Document 1, it is assumed that the sample gas contains only one type of molecule, and handling of a sample gas in which a plurality of types of molecules are mixed is not considered. The present invention has been made in view of the above problem, and an object thereof is to provide a technique for extracting a feature of gas in which a plurality of types of molecules are mixed.
An information processing apparatus of the present invention includes: 1) a time-series data acquisition unit that acquires time-series data of detected values output from a sensor where a detected value thereof changes according to attachment and detachment of a molecule contained in a target gas; 2) a computation unit that computes a plurality of feature constants contributing with respect to the time-series data and a contribution value representing a magnitude of contribution for each feature constant with respect to the time-series data; and 3) an output unit that outputs a combination of the plurality of feature constants and the contribution values computed for each feature constant as a feature value of gas sensed by the sensor. The feature constant is a time constant or a velocity constant related to a magnitude of a temporal change of the number of molecules attached to the sensor.
A control method of the present invention is executed by a computer. The control method includes: 1) a time-series data acquisition step of acquiring time-series data of detected values output from a sensor where a detected value thereof changes according to attachment and detachment of a molecule contained in a target gas; 2) a computation step of computing a plurality of feature constants contributing with respect to the time-series data and a contribution value representing a magnitude of contribution for each feature constant with respect to the time-series data; and 3) an output step of outputting a combination of the plurality of feature constants and the contribution values computed for each feature constant as a feature value of gas sensed by the sensor. The feature constant is a time constant or a velocity constant related to a magnitude of a temporal change of the number of molecules attached to the sensor.
A program of the present invention causes a computer to execute each step included in the control method of the present invention.
According to the present invention, there is provided a technique for extracting a feature of gas in which a plurality of types of molecules are mixed.
The above-described object, other objects, features, and advantages will be further clarified by the preferred embodiments described below and the accompanying drawings.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In all the drawings, the same constituents will be referred to with the same numerals, and the description thereof will not be repeated. Further, in each block diagram, each block represents a functional unit configuration, not a hardware unit configuration, unless otherwise specified.
<Outline of Invention and Theoretical Background>
For example, the sensor 10 is a Membrane-type Surface stress Sensor (MSS). The MSS sensor has, as a receptor, a functional membrane to which molecules attach, and the stress generated in a supporting member of the functional membrane changes due to the attachment and detachment of molecules with respect to the functional membrane. The MSS sensor outputs the detected value based on the change in the stress. Note that, the sensor 10 is not limited to the MSS sensor; any sensor that outputs a detected value based on changes in physical quantities related to the viscoelasticity or dynamic characteristics (mass, moment of inertia, or the like) of a member of the sensor 10, which occur in response to the attachment and detachment of molecules with respect to the receptor, can be adopted, such as a cantilever type, a membrane type, an optical type, a piezoelectric type, or an oscillation response type.
For the sake of explanation, sensing by the sensor 10 is modeled as follows.
The temporal change of the number nk(t) of molecules k attached to the sensor 10 can be formulated as follows.
The first and second terms on the right side in Expression (1) represent the amount of increase (the number of molecules k newly attached to the sensor 10) and the amount of decrease (the number of molecules k detached from the sensor 10) of the molecules k per unit time, respectively. Further, αk is a velocity constant representing the velocity at which the molecule k is attached to the sensor 10, and βk is a velocity constant representing the velocity at which the molecule k is detached from the sensor 10.
Since the concentration ρk is constant, the number nk(t) of molecules k at time t can be formulated from the above Expression (1) as follows.
Further, assuming that no molecule is attached to the sensor 10 at time t0 (initial state), nk(t) is represented as follows.
nk(t)=n*k(1−exp{−βk(t−t0)})
The detected value of the sensor 10 is determined by the stress acting on the sensor 10 by the molecules contained in the target gas. It is considered that the stress acting on the sensor 10 by a plurality of molecules can be represented by the linear sum of the stress acting on individual molecules. However, the stress generated by the molecule is considered to differ depending on the type of molecule. That is, it can be said that the contribution of the molecule with respect to the detected value of the sensor 10 differs depending on the type of the molecule.
Thereby, the detected value y(t) of the sensor 10 can be formulated as follows.
Both γk and ξk represent the contribution of the molecule k with respect to the detected value of the sensor 10. Note that, the meanings of “rising” and “falling” will be described later. When the time-series data 14 obtained from the sensor 10 that senses the target gas can be decomposed as in the above Expression (4), it is possible to recognize the types of molecules contained in the target gas and the ratio of each type of molecules contained in the target gas. That is, by the decomposition represented by Expression (4), data representing the feature of the target gas (that is, the feature value of the target gas) can be obtained.
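For reference, the decomposition of Expression (4) can be checked numerically. The following Python sketch (not part of the disclosed configuration) synthesizes a rising detected value as a weighted sum of first-order attachment curves; the velocity constants, contributions, time grid, and noise level are arbitrary illustrative values.

```python
import numpy as np

# Illustrative rising response: y(t) = sum_k xi_k * (1 - exp(-beta_k * t)).
# The velocity constants (beta) and contributions (xi) below are arbitrary
# values chosen only to produce an example signal.
t = np.linspace(0.0, 30.0, 3001)          # measurement times
betas = np.array([0.05, 0.4, 2.0])        # velocity constants of three molecule types
xis = np.array([1.0, 0.5, 0.25])          # contribution of each molecule type

y = sum(xi * (1.0 - np.exp(-beta * t)) for xi, beta in zip(xis, betas))
y += np.random.default_rng(0).normal(scale=1e-3, size=t.size)  # small observation noise
```

Such synthetic data can be used to exercise the computation methods described below.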
The information processing apparatus 2000 acquires the time-series data 14 output by the sensor 10 and decomposes the time-series data 14 as shown in the following Expression (5).
ξi is a contribution value representing the contribution of the feature constant θi with respect to the detected value of the sensor 10.
Specifically, first, by using the time-series data 14, the information processing apparatus 2000 computes a set of the plurality of feature constants Θ={θ1, . . . , θm} that contribute with respect to the time-series data 14, and a contribution value ξi that represents the magnitude of the contribution of each feature constant θi with respect to the time-series data 14. Note that, as will be described later, there are cases where the contribution value ξi is computed after the feature constant θi is computed, and cases where the contribution value ξi is computed together with the feature constant θi.
Further, the information processing apparatus 2000 outputs information in which the set Θ of the feature constants and the set Ξ of the contribution values are associated with each other as a feature value representing the feature of the target gas. The association between the set Θ of the feature constants and the set Ξ of contribution values is represented by, for example, a feature matrix F with m rows and 2 columns (m is the number of feature constants and the number of contribution values). For example, this matrix F has a feature constant vector Θ=(θ1, . . . , θm) representing a set of the feature constants in a first column, and also has a contribution vector Ξ=(ξ1, . . . , ξm) representing a set of contribution values in a second column.
That is, F=(ΘT, ΞT). In the following description, unless otherwise specified, the feature value of the target gas is represented by the feature matrix F. However, the feature value of the target gas does not necessarily have to be represented as a vector.
As the feature constant θ, the above-mentioned velocity constant β or the time constant τ, which is the reciprocal of the velocity constant, can be adopted. Expression (5) can be represented as follows for each of the cases where β and τ are used as θ.
Note that, in
<Action and Effect>
As described above, since the contribution of a molecule with respect to the detected value of the sensor 10 is considered to differ depending on the type of the molecule, the above-mentioned set Θ of the feature constants and the corresponding set Ξ of the contribution values are considered to differ depending on the types of molecules contained in the target gas and their mixing ratio. Therefore, the information in which the set Θ of the feature constants and the set Ξ of the contribution values are associated with each other can be used as information that distinguishes gases in which a plurality of types of molecules are mixed from each other, that is, as the feature value of the gas.
Therefore, the information processing apparatus 2000 of the present example embodiment computes the set Θ of the feature constants and the set Ξ of the contribution values that represents the contribution of each feature constant with respect to the time-series data 14 based on the time-series data 14 obtained by sensing the target gas with the sensor 10 and outputs the information in which the computed sets Θ and Ξ are associated with each other as the feature value of the target gas. By doing so, the feature value capable of identifying the gas in which the plurality of types of molecules are mixed can be automatically generated from the result of sensing the gas with the sensor 10.
Note that, the above description with reference to
<Example of Functional Configuration of Information Processing Apparatus 2000>
<Hardware Configuration of Information Processing Apparatus 2000>
Each functional configuration unit of the information processing apparatus 2000 may be implemented by hardware (for example, a hard-wired electronic circuit or the like) that implements each functional configuration unit, or may be implemented by a combination of hardware and software (for example, a combination of an electronic circuit and a program for controlling the electronic circuit). Hereinafter, a case where each functional configuration unit of the information processing apparatus 2000 is implemented by a combination of hardware and software will be further described.
The computer 1000 includes a bus 1020, a processor 1040, a memory 1060, a storage device 1080, an input and output interface 1100, and a network interface 1120. The bus 1020 is a data transmission path for the processor 1040, the memory 1060, the storage device 1080, the input and output interface 1100, and the network interface 1120 to mutually transmit and receive data. However, the method of connecting the processors 1040 and the like to each other is not limited to the bus connection.
The processor 1040 is various processors such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and a Field-Programmable Gate Array (FPGA). The memory 1060 is a main storage device implemented by using a Random Access Memory (RAM) or the like. The storage device 1080 is an auxiliary storage device implemented by using a hard disk, a Solid State Drive (SSD), a memory card, a Read Only Memory (ROM), or the like.
The input and output interface 1100 is an interface for connecting the computer 1000 and the input and output devices. For example, an input device such as a keyboard or an output device such as a display device is connected to the input and output interface 1100. In addition, for example, the sensor 10 is connected to the input and output interface 1100. However, the sensor 10 does not necessarily have to be directly connected to the computer 1000. For example, the sensor 10 may store the time-series data 14 in a storage device shared with the computer 1000.
The network interface 1120 is an interface for connecting the computer 1000 to a communication network. The communication network is, for example, a Local Area Network (LAN) or a Wide Area Network (WAN). A method of connecting the network interface 1120 to the communication network may be a wireless connection or a wired connection.
The storage device 1080 stores a program module that implements each functional configuration unit of the information processing apparatus 2000. The processor 1040 implements the function corresponding to each program module by reading each of these program modules into the memory 1060 and executing the modules.
<Process Flow>
The timing at which the information processing apparatus 2000 executes the series of processes illustrated in
<Acquisition of Time-Series Data 14: S102>
The time-series data acquisition unit 2020 acquires the time-series data 14 (S102). Any method may be used for the time-series data acquisition unit 2020 to acquire the time-series data 14. For example, the information processing apparatus 2000 acquires the time-series data 14 by accessing a storage device in which the time-series data 14 is stored. The storage device in which the time-series data 14 is stored may be provided inside the sensor 10 or may be provided outside the sensor 10. In addition, for example, the time-series data acquisition unit 2020 may acquire the time-series data 14 by sequentially receiving the detected values output from the sensor 10.
The time-series data 14 is time-series data in which the detected values output by the sensor 10 are arranged in the order in which they were output from the sensor 10. However, the time-series data 14 may be obtained by applying predetermined preprocessing to the time-series data of the detected values obtained from the sensor 10. Further, instead of acquiring preprocessed time-series data 14, the time-series data acquisition unit 2020 may perform the preprocessing on the time-series data 14. As the preprocessing, for example, filtering for removing noise components from the time-series data can be adopted.
The time-series data 14 is obtained by exposing the sensor 10 to the target gas. However, when performing a measurement related to the gas using the sensor, by repeating an operation of exposing the sensor to the gas to be measured and an operation of removing the gas to be measured from the sensor, a plurality of time-series data to be analyzed may be obtained from the sensor.
On the other hand, the time-series data 14-2 of a period P2 and the time-series data 14-4 of a period P4 are obtained by the operation of removing the gas to be measured from the sensor. Note that, the operation of removing the gas to be measured from the sensor is implemented, for example, by exposing the sensor to gas called purge gas. When the operation of removing the gas to be measured from the sensor is performed in this way, the detected value of the sensor decreases. The time-series data obtained by the operation of removing the gas to be measured from the sensor is also called “falling” time-series data. The “when falling” in Expression (4) means “in a case where the time-series data 14 is falling time-series data”. The same applies to the following expressions.
In the information processing apparatus 2000, the time-series data 14 obtained by each of the operations of exposing the sensor 10 to the target gas and the operation of removing the target gas from the sensor 10 are distinguished and are treated as different time-series data 14. For example, in the example in
Various methods can be adopted as a method for obtaining the plurality of time-series data 14 by dividing the series of time-series data obtained from the sensor 10. For example, the plurality of time-series data 14 can be obtained by manually dividing the series of time-series data obtained from the sensor 10. In addition, for example, the information processing apparatus 2000 may acquire the series of time-series data and obtain the plurality of time-series data 14 by dividing the time-series data.
Note that, various methods can be adopted as the method of dividing the time-series data by the information processing apparatus 2000. For example, there are the following methods.
<<(a) Method Using First Derivative>>
In the time-series data 14, the first derivative of the detected value becomes discontinuous at a point to be divided, and its absolute value becomes maximum immediately after that point. Therefore, the time-series data 14 can be divided by using the points where the absolute value of the first derivative becomes large.
<<(b) Method Using Second Derivative>>
Similarly, the derivative is discontinuous at the point to be divided, so the second derivative diverges to infinity. Therefore, the time-series data 14 can be divided by using a point where the absolute value of the second derivative becomes large.
<<(c) Method of Using Metadata Obtained from Sensor>>
Depending on the type of sensor, metadata other than the detected values is available. For example, in the MSS module, separate pumps (a sample pump and a purge pump) are provided for suction of the gas to be measured (the sample) and of the purge gas, and the rising measurement and the falling measurement are performed by turning these pumps on and off alternately. Further, the operation sequence of the pumps (information representing which pump was used for each detected value, the flow rate measurement values used for feedback control of the flow rate, or the like) is added to the recorded detected values as time-series information. Therefore, for example, the information processing apparatus 2000 can divide the time-series data 14 by using the pump operation sequence obtained together with the time-series data 14.
<<Combination of the Above Methods>>
Regarding the method (c), it is preferable to make a correction in consideration of the delay from the operation of the pump to the arrival of the gas at the sensor. Therefore, for example, the information processing apparatus 2000 tentatively divides the time-series data 14 into a plurality of sections by using the method (c) and then determines a time point at which the absolute value of the first derivative becomes maximum in each section, and divides the time-series data 14 at each determined time point.
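As an illustration of combining the methods (a) and (c), the following Python sketch tentatively splits the series at pump switching indices and then refines each split point to the nearby sample where the absolute value of the first derivative is largest; the ±50-sample search window and the function names are assumptions made for this example only.

```python
import numpy as np

def refine_split_points(y, tentative_edges):
    """Refine tentative split indices (e.g. pump on/off times) by moving each
    edge to the nearby sample where |dy/dt| is largest, combining methods (a) and (c).
    The +/-50-sample search window is an illustrative assumption."""
    dy = np.abs(np.gradient(y))
    refined = []
    for edge in tentative_edges:
        lo, hi = max(edge - 50, 1), min(edge + 50, len(y) - 1)
        refined.append(lo + int(np.argmax(dy[lo:hi])))
    return refined

def split_series(y, edges):
    """Cut the series at the refined edges into rising/falling segments."""
    bounds = [0] + sorted(edges) + [len(y)]
    return [y[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
```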
Note that, the information processing apparatus 2000 may be configured to use only one of the time-series data 14 obtained by the operation of exposing the sensor 10 to the target gas and the time-series data 14 obtained by the operation of removing the target gas from the sensor 10.
<Computation of Feature Constant and Contribution Value: S104>
The computation unit 2040 computes the plurality of feature constants θi and the contribution value ξi corresponding to the feature constant θi by using the time-series data 14 (S104). This process corresponds to decomposing the time-series data 14 into the sum of ξi*f(θi) shown in Expression (5).
Various methods can be used for the method in which the computation unit 2040 computes the feature constant and the contribution value thereof from the time-series data 14. Hereinafter, the method of computing the feature constant will be described first. Note that, in some cases, the contribution value is computed in the process of computing the feature constant. This case will be described later.
<<Computation Method 1 of Feature Constant>>
For example, the computation unit 2040 computes the set of the feature constants based on the slope of log [y′(t)], which is obtained by taking the logarithm of the derivative y′(t) of the time-series data 14. log [y′(t)] is expressed as follows.
Note that, since the logarithm is taken, it is assumed here that y′(t) is always positive. The case where y′(t) is negative is handled by another method described later.
When g(t) is approximated as g(t)=ci−βit, the slope g′(t) of this function corresponds to the velocity constant βi. Therefore, the slope g′(t) of g(t) at time t is substantially determined by the βi corresponding to the i at which ci−βit is maximum at that time. Further, since exp(ci−βit) is a monotonically decreasing function, it can be said that the slope of g(t) changes stepwise as illustrated in
Therefore, the computation unit 2040 computes the slope g′(t) of g(t) at each time t and extracts a plurality of periods (hereafter, a partial period) in which g′(t) is substantially the same from a domain (a measurement period of time-series data 14) of g(t). The computation unit 2040 computes g′(t) in each extracted partial period as the velocity constant βi. For example, the computation unit 2040 computes a statistical value (average value or the like) of g′(t) in each partial period as the velocity constant corresponding to the partial period. In addition, for example, the computation unit 2040 may perform linear regression on (t, g(t)) included in the partial period for each partial period and may use the slope of the regression line obtained for each partial period as the velocity constant. Note that, the period during which g′(t) is substantially the same can be determined, for example, by clustering a set of g′(t) based on the magnitude of the value thereof.
For example, in
As in the section 20 illustrated in
For example, the minimum value of a length of the partial period is predetermined. The computation unit 2040 divides the domain of g(t) into a plurality of periods based on the magnitude of the slope g′(t) and extracts only the period having a length equal to or greater than the minimum value as the above-mentioned partial period (that is, g′(t) corresponding to a period having a length less than the minimum value is not included in the set of the feature constants). By doing so, g′(t) that appears only for a short period of time can be excluded from the feature constants. Note that, the minimum value of the length of the partial period can also be represented as the minimum value of the number of detected values included in the partial period.
In addition, for example, the computation unit 2040 may determine the number of feature constants m by using a method described later and extract that number of partial periods from the domain of g(t). In this case, for example, the computation unit 2040 divides the domain of g(t) into the plurality of periods based on the magnitude of the slope g′(t) and extracts the m longest periods as the above-mentioned partial periods.
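The following Python sketch illustrates one possible implementation of the computation method 1 described above: the slope of g(t)=log[y′(t)] is clustered, and a regression line is fitted in each cluster to obtain a velocity constant. The use of k-means clustering, the minimum cluster size of 10 samples, the fact that contiguity of each cluster in time is not enforced, and the assumption that the signal has already been smoothed are illustrative choices, not requirements of the method.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def velocity_constants_method1(t, y, n_constants):
    # Slope of g(t) = log y'(t); within one partial period it stays nearly
    # constant and its magnitude approximates a velocity constant beta_i.
    dy = np.gradient(y, t)
    mask = dy > 0                        # the logarithm requires y'(t) > 0
    tg, g = t[mask], np.log(dy[mask])
    slope = np.gradient(g, tg)
    # Group times whose slopes are "substantially the same" by 1-D clustering.
    _, labels = kmeans2(slope.reshape(-1, 1), n_constants, minit="++")
    betas = []
    for k in range(n_constants):
        sel = labels == k
        if sel.sum() < 10:               # discard groups too short to be a partial period
            continue
        a, _ = np.polyfit(tg[sel], g[sel], 1)   # regression of g over the times in this group
        betas.append(abs(a))
    return sorted(betas)
```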
<<Computation Method 2 of Feature Constant>>
In order to adopt the above-mentioned computation method 1, the log of y′(t) must be taken, so the value of y′(t) must always be positive. In contrast to this, the computation method 2 described here can be used even when the value of y′(t) becomes negative, while still using an index corresponding to the slope g′(t) of g(t) described above.
First, g′(t), which is the slope of g(t)=log [y′(t)] described above, can be represented as g′(t)=y″(t)/y′(t).
In the computation method 2, by using y″(t)/y′(t) as the index corresponding to the slope g′(t) of g(t) without taking the log of y′(t), the case where y′(t) is negative can also be handled.
g′(t) can be regarded as the direction of the vector (y′(t), y″(t)). In other words, g′(t) can be regarded as the direction of the velocity vector (the vector representing the temporal change) of the vector (y(t), y′(t)).
The computation unit 2040 computes the vector (y(t), y′(t)) for each time t, and uses the computed vectors to compute the velocity vector (y(t+1)−y(t), y′(t+1)−y′(t)) for each time t. The computation unit 2040 extracts, from the measurement period of y(t), a plurality of partial periods in which the directions of the computed velocity vectors are substantially the same. Note that, y(t+1) is the detected value obtained from the sensor next after y(t).
The computation unit 2040 computes the direction of the velocity vector of (y(t), y′(t)) in each extracted partial period as the velocity constant corresponding to the partial period. For example, the computation unit 2040 computes a statistical value (an average value or the like) of the direction of the velocity vector in each partial period as the velocity constant corresponding to the partial period. In addition, for example, the computation unit 2040 may perform linear regression on the points (y(t), y′(t)) included in the partial period for each partial period and may use the slope of the regression line obtained for each partial period as the velocity constant corresponding to the partial period. Note that, the division of the measurement period of y(t) can be implemented by clustering the velocity vectors based on their directions. Note that, the direction of the velocity vector (y(t+1)−y(t), y′(t+1)−y′(t)) is represented as {y′(t+1)−y′(t)}/{y(t+1)−y(t)}.
As in the section 30 illustrated in
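The following Python sketch illustrates the computation method 2: y″(t)/y′(t) is used as the direction index, consecutive times with substantially the same direction are grouped into partial periods, and the mean direction of each sufficiently long group is taken as a velocity constant. The relative tolerance, the minimum period length of 5 samples, and the greedy grouping rule are illustrative assumptions; in practice, smoothing is typically applied first so that y′(t) does not pass exactly through zero inside a partial period.

```python
import numpy as np

def velocity_constants_method2(t, y, direction_tol=0.05):
    """Sketch of computation method 2: the direction of the velocity vector of
    (y(t), y'(t)) is y''(t)/y'(t), which stays nearly constant inside a partial
    period.  `direction_tol` (relative tolerance for "substantially the same
    direction") is an illustrative assumption."""
    dy = np.gradient(y, t)
    d2y = np.gradient(dy, t)
    direction = d2y / dy                 # works even when y'(t) < 0
    betas, segment = [], [0]
    for i in range(1, len(direction)):
        ref = direction[segment[0]]
        if abs(direction[i] - ref) <= direction_tol * abs(ref) + 1e-12:
            segment.append(i)            # same direction: extend the current partial period
        else:
            if len(segment) >= 5:        # minimum length of a partial period
                betas.append(abs(np.mean(direction[segment])))
            segment = [i]
    if len(segment) >= 5:
        betas.append(abs(np.mean(direction[segment])))
    return betas
```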
<<Computation Method of Contribution Value>>
A method of computing the contribution value corresponding to the feature constant after the feature constant is computed will be described. The computation unit 2040 generates a prediction model for predicting the detected value of the sensor 10 using the set of the contribution values ξi (that is, the contribution vector) Ξ={ξ1, . . . , ξm} corresponding to the set of the computed feature constants Θ={θ1, . . . , θm}, as the parameter. When generating the prediction model, the contribution vector Ξ can be computed by performing a parameter estimation for the contribution vector Ξ by using the time-series data 14 which is the observation data. An example of the prediction model when the velocity constant β is used as the feature constant can be represented by Expression (6). Further, an example of the prediction model when the time constant τ is used as the feature constant can be represented by Expression (7). Various methods can be used for estimating the parameters of the prediction model.
Hereinafter, some examples of the method will be given. Note that, in the following description, a case where the velocity constant β is used as the feature constant is described. The method of parameter estimation when the time constant τ is used as the feature constant can be implemented by reading the velocity constant β in the following description as 1/τ.
<<Parameter Estimation Method 1>>
For example, the computation unit 2040 estimates the parameter Ξ by a maximum likelihood estimation using the predicted value obtained from the prediction model and the observed value (that is, time-series data 14) obtained from the sensor 10. For the maximum likelihood estimation, for example, the least squares method can be used. In this case, specifically, the parameter Ξ is determined according to the following objective function.
T represents the length (the number of detected values) of the time-series data 14. Further, ŷ(ti) represents the predicted value at time ti.
The vector Ξ that minimizes the above objective function can be computed using the following Expression (11).
Here, the vector Y is expressed as Y=(y(t0), y(t1), . . . ).
Therefore, the computation unit 2040 computes the parameter Ξ by applying the time-series data Y and the set of the feature constants Θ={β1, β2, . . . } to the above Expression (11).
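As a concrete illustration of this least squares estimation, the following Python sketch builds a design matrix Φ whose columns are the model responses for the given velocity constants and solves for Ξ. It corresponds to Expression (11), although np.linalg.lstsq is used instead of the explicit matrix inverse for numerical stability; the rising and falling column forms are assumptions based on Expressions (6) and (7).

```python
import numpy as np

def estimate_contributions(t, y, betas, rising=True):
    """Minimal least-squares sketch corresponding to Expression (11):
    build Phi column by column, then solve Xi = argmin ||y - Phi Xi||^2."""
    t = np.asarray(t, dtype=float)
    if rising:
        Phi = 1.0 - np.exp(-np.outer(t, betas))   # column k: 1 - exp(-beta_k * t)
    else:
        Phi = np.exp(-np.outer(t, betas))         # column k: exp(-beta_k * t)
    xi, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # stabler than forming (Phi^T Phi)^-1 explicitly
    return xi
```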
<<Parameter Estimation Method 2>>
For the least squares method described above, a regularization term may be introduced to perform regularization. For example, the following Expression (12) shows an example of performing L2 regularization.
λ is a hyperparameter representing the weight given to the regularization term.
In this case, the parameter Ξ can be determined according to the following Expression (13).
Ξ=(ΦTΦ+λI)−1ΦTY (13)
By introducing such a regularization term, the amplification of the measurement error in the matrix computation can be suppressed as compared with the case where the regularization term is not introduced, and thereby each contribution value ξi can be computed more accurately. Further, by suppressing the amplification of the error, the contribution values become numerically stable, so that the robustness of the feature value with respect to the mixing ratio is improved.
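A minimal sketch of the L2-regularized (ridge) estimate corresponding to Expression (13) is shown below; the default value of λ is a placeholder, since the text determines λ by test measurement or simulation.

```python
import numpy as np

def estimate_contributions_ridge(Phi, y, lam=1e-3):
    """L2-regularized estimate corresponding to Expression (13):
    Xi = (Phi^T Phi + lambda I)^-1 Phi^T y.  `lam` is an illustrative placeholder."""
    m = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
```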
Note that, as described above, λ is a hyperparameter and needs to be determined in advance. For example, the value of λ is determined through a test measurement or a simulation. It is preferable to set the value of λ to a small value so that the contribution value does not oscillate.
The simulation for determining the value of λ will be described. In the simulation, a case where "a single molecule with a contribution of 1" is virtually measured is considered (for example, in the case of falling, when the velocity constant of the single molecule is defined as β0, the measurement is expressed as y(t)=exp{−β0t}), and the feature value estimated by Expression (13) in this case is observed. If the ideal observation (measurement over an infinitely long time at an infinitesimal measurement interval with zero observation error) were virtually possible, the simulation of the virtual single molecule would yield a feature value in which only β0 has a sharp peak, so that the original velocity constant β=β0 and the contribution ξ=1 would be completely reproduced.
However, since the ideal observation is not possible in reality, the peak of the contribution value becomes blunted or the contribution value oscillates.
The purpose of the simulation is to evaluate the degree of occurrence of such peak blunting or oscillation while changing λ. In order to quantitatively measure the "oscillation magnitude" and the "peak width", for example, the contribution vectors Ξ1 and Ξ2 of two virtual single molecules having two different velocity constants β1 and β2, respectively, are computed by simulation. Thereafter, the inner product of these two contribution vectors is computed as a function f(Δv) of the difference between the two velocity constants.
The function f(Δv) attenuates while oscillating. Therefore, it can be quantified with the width of the main lobe of the oscillation as the “peak width” and with the level of the side lobes as the “oscillation magnitude”. λ is determined by selecting a value of λ such that the main lobe width is as narrow as possible and the side lobe level is as small as possible.
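As an illustration of this tuning simulation, the following Python sketch estimates the contribution vector of a virtual single falling molecule y(t)=exp{−βt} on a grid of candidate velocity constants with the regularized estimator, and evaluates the inner product of two such vectors as a function of the difference of their velocity constants. The time grid, the logarithmic velocity-constant grid, and the use of a logarithmic difference are assumptions made for this example.

```python
import numpy as np

def lambda_oscillation_profile(lam, beta1=1.0, beta_grid=None, t=None):
    """Sketch of the lambda-tuning simulation: estimate contribution vectors of
    virtual single falling molecules and compute their inner product f(delta).
    The grids and the definition of the difference are illustrative assumptions."""
    t = np.linspace(0.0, 20.0, 2001) if t is None else t
    beta_grid = np.geomspace(0.1, 10.0, 200) if beta_grid is None else beta_grid
    Phi = np.exp(-np.outer(t, beta_grid))                    # falling design matrix
    A = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(beta_grid)), Phi.T)

    def contribution_vector(beta0):
        return A @ np.exp(-beta0 * t)                        # regularized estimate for one molecule

    xi1 = contribution_vector(beta1)
    deltas = np.linspace(-1.0, 1.0, 101)                     # difference in log velocity constant
    f = [xi1 @ contribution_vector(beta1 * np.exp(d)) for d in deltas]
    return deltas, np.array(f)                               # inspect main lobe width / side lobe level
```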
One of the advantages of suppressing the oscillation of the contribution value is that, as described above, the feature value becomes robust against changes in the time constant and the velocity constant. In other words, the feature value becomes robust with respect to the change in the temperature. The reason will be described below.
When the changes in time constant or the velocity constant occur due to the change in the temperature, the contribution value illustrated in
In contrast to this, when the oscillation of the contribution value is small, the distance between the contribution vectors before and after the parallel movement becomes short. This means that when the time constant or the velocity constant changes slightly, the feature value also changes only slightly. That is, the feature value is highly robust. Therefore, it can be said that the robustness of the feature value is improved by suppressing the oscillation of the contribution value.
Note that, the regularization in the least squares method is not limited to the L2 regularization described above, and other regularizations such as the L1 regularization may be introduced.
<<Parameter Estimation Method 3>>
In this method, a prior distribution P(Ξ) is set for the parameter Ξ. Thereafter, the computation unit 2040 determines the parameter Ξ by using a Maximum a Posteriori (MAP) estimation that uses the time-series data 14, which is the observed value. Specifically, the parameter Ξ that maximizes the following objective function is adopted.
P(Y|Ξ) and P(Ξ) are defined by a multivariate normal distribution, for example, as follows.
P(Y|Ξ)=N(Y|Ŷ,σ2I)
P(Ξ)=N(Ξ|0,Λ) (17)
N(⋅|μ, Σ) is a multivariate normal distribution with average μ and covariance Σ. Further, the vector ŷ is expressed as ŷ=(ŷ(t1), ŷ(t2), . . . )=ΦΞ. σ2 is a parameter that represents the variance of the observation error.
Λ is a covariance matrix of the prior distribution of Ξ, and any positive semi-definite matrix may be given in advance or may be determined by a method described later or the like.
Further, P(Y|Ξ) and P(Ξ) may be determined by a Gaussian process (GP) as follows.
P(ξ(β))=GP(ξ(β)|0,Λ(β,β′))
P(y(t))=N(y(t)|ŷ(t),σ2) (18)
GP(ξ(β)|μ(β), Λ(β, β′)) is a Gaussian process having an average value function μ(β) and a covariance function (kernel function) Λ(β, β′). Further, since a Gaussian process is a stochastic process that generates a continuous function, ξ(β) here is a continuous function that represents the contribution ratio with respect to β (or τ), and the vector Ξ=(ξ(β1), ξ(β2), . . . ) is a vector in which the values of the function ξ(β) at β=β1, β2, . . . are arranged. In this case, Expression (17) can be regarded as a special case of Expression (18), and the (i, j) component of the covariance matrix Λ in Expression (17) is the value of the covariance function Λ(β, β′) in Expression (18) at (β, β′)=(βi, βj). That is, the matrix Λ in Expression (17) is a Gram matrix in the so-called Gaussian process.
Further, the computation unit 2040 may determine the parameter Ξ by using a Bayesian estimation that uses the time-series data 14 which is the observed value. Specifically, the parameter Ξ is determined by computing the following conditional expected value.
E[Ξ|Y] is the conditional expected value assuming that Ξ and Y follow the probability distribution in Expression (18).
The feature vector Ξ that maximizes the above objective function (14) and the feature vector Ξ obtained by the conditional expected value (19) can both be computed by the following Expression (20).
Ξ=ΛΦT(ΦΛΦT+σ2I)−1Y (20)
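The posterior mean of Expression (20) can be sketched as follows; the RBF (squared-exponential) covariance over the logarithms of the velocity constants and its hyperparameters are illustrative assumptions, since the text leaves the form of Λ(β, β′) open.

```python
import numpy as np

def estimate_contributions_gp(Phi, y, betas, sigma2=1e-4, length=0.3, amp=1.0):
    """Sketch of Expression (20): Xi = Lam Phi^T (Phi Lam Phi^T + sigma^2 I)^-1 Y
    with an RBF Gram matrix Lam over the velocity constants.  The kernel form,
    the use of log(beta), and the hyperparameters are illustrative assumptions."""
    b = np.log(np.asarray(betas, dtype=float))
    Lam = amp * np.exp(-0.5 * ((b[:, None] - b[None, :]) / length) ** 2)
    n = Phi.shape[0]
    K = Phi @ Lam @ Phi.T + sigma2 * np.eye(n)
    return Lam @ Phi.T @ np.linalg.solve(K, y)
```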
<<<How to Determine Hyperparameters>>>
When using the Gaussian process, the hyperparameters that are set in advance are a) the form of the covariance function Λ(β, β′), b) the parameters of the covariance function, and c) the measurement error parameter σ2. The following steps are performed while changing these parameters.
Note that, for example, the main lobe width and the side lobe level of the above-mentioned function f(Δv) are used as indexes for quantifying the magnitude of the oscillation and the peak width of the feature value. Besides these, the variance (the square variance or the absolute value variance) obtained when the estimated Ξ is regarded as a probability distribution may be used. These variance values become smaller as the oscillation is smaller and the peak width is narrower. Note that, an actual measurement (a test measurement) may be carried out instead of the simulation.
<A Case where the Contribution Value is Computed Together with the Feature Constant>
As described above, one of the methods for computing the contribution value corresponding to the feature constant is to use the least squares method. Instead of using the least squares method after computing the feature constants, the computation unit 2040 may compute both the set Θ of the feature constants and the set Ξ of the contribution values by solving the combinatorial optimization problem of minimizing, with respect to the set Θ of the feature constants, the minimum value of the objective function of the least squares method. Specifically, the Θ that minimizes the following objective function h(Θ) and the parameter Ξ obtained for the minimized h(Θ) are the results computed by the computation unit 2040.
An existing method can be used as a specific method for solving the above-mentioned combinatorial optimization problem. In particular, when all the elements of Ξ are positive numbers, the above objective function becomes a monotonically decreasing submodular function, so that it can be computed accurately by, for example, a greedy algorithm.
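The following Python sketch illustrates one greedy approach to this combinatorial problem: starting from an empty set, the candidate velocity constant that most reduces the least-squares residual h(Θ) is added until m constants have been selected. The candidate grid, the falling model, and the fixed number of iterations are illustrative assumptions.

```python
import numpy as np

def greedy_select_constants(t, y, candidate_betas, m, rising=False):
    """Greedy sketch: repeatedly add the candidate velocity constant that most
    reduces the least-squares residual h(Theta)."""
    def residual(betas):
        cols = (1.0 - np.exp(-np.outer(t, betas))) if rising else np.exp(-np.outer(t, betas))
        xi, *_ = np.linalg.lstsq(cols, y, rcond=None)
        return np.sum((y - cols @ xi) ** 2), xi

    chosen = []
    for _ in range(m):
        scores = [(residual(chosen + [b])[0], b) for b in candidate_betas if b not in chosen]
        _, best_beta = min(scores)
        chosen.append(best_beta)
    return chosen, residual(chosen)[1]     # selected Theta and its contribution vector Xi
```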
<Method of Determining the Number of Feature Constants>
The computation unit 2040 may determine the number of feature constants. The determined number of feature constants can be used, for example, to determine the number of partial periods to be extracted from the domain of g(t) or the measurement period of y(t) described above.
For example, the number of feature constants can be determined by using information criteria such as Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC). The function h(Θ) of Expression (22) described above is used to compute the information criterion. For example, the AIC can be computed as follows. Note that, the method for deriving the following AIC will be described later.
Here, σ2 is the variance of the observation error.
The computation unit 2040 determines the integer K at which the AIC is minimized as the number of feature constants.
By determining the number of feature constants using the information criterion in this way, the number of feature constants can be appropriately determined in consideration of a balance between the merit of improving the prediction accuracy obtained by increasing the number of parameters of the prediction model (here, the number of feature constants or the number of contribution values) and the demerit of increasing the complexity of the model by increasing the number of parameters.
The method of deriving the AIC of Expression (24) will be described. First, the definition of the AIC is as follows.
AIC=−2 log l+2p (25)
where l is the maximum likelihood and p is the number of parameters.
Since the number of feature constants and the number of contribution values are each K, the number of parameters p is 2K. Further, the maximum likelihood is computed as follows.
Expression (24) can be obtained by substituting the above l and p into the definition of the AIC.
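For illustration, the number of feature constants K can be selected by computing an information criterion for each candidate K and taking the minimizer, as sketched below. A standard Gaussian-likelihood form of the AIC, T·log(RSS/T)+2·(2K), is used here as a stand-in; Expression (24) may differ from it by constant terms.

```python
import numpy as np

def choose_num_constants(rss_by_k, num_samples):
    """Sketch of K selection with an information criterion.  `rss_by_k` maps
    each candidate K to the minimized squared error h(Theta) of the fit with K
    constants; a generic Gaussian-likelihood AIC is used as a stand-in for
    Expression (24)."""
    aic = {k: num_samples * np.log(rss / num_samples) + 2 * (2 * k)
           for k, rss in rss_by_k.items()}
    return min(aic, key=aic.get)

# Example with hypothetical residuals from fits with K = 1..4 constants:
# best_k = choose_num_constants({1: 2.1, 2: 0.30, 3: 0.28, 4: 0.27}, num_samples=3000)
```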
<Output of Feature Value: S106>
The output unit 2060 outputs information (hereinafter, output information) in which the set Θ of the feature constants and the set Ξ of the contribution values, which are obtained by using the above method, are associated with each other as the feature value representing the feature of the gas (S106). For example, the output information is text data representing the feature matrix F. In addition, for example, the output information may be information that graphically represents the association between the set Θ of the feature constants and the set Ξ of the contribution values with a table, graph, or the like.
There are various specific methods for outputting the output information. For example, the output unit 2060 stores the output information in any storage device. In addition, for example, the output unit 2060 causes the display device to display the output information. In addition, for example, the output unit 2060 may transmit the output information to an apparatus other than the information processing apparatus 2000.
<A Case where a Plurality of Computations are Performed to Associate a Set of Feature Constants with a Set of Contribution Values>
The information processing apparatus 2000 may compute the association between the set Θ of the feature constants and the set Ξ of the contribution values for each of the plurality of time-series data 14 obtained for the same target gas. In this case, the output unit 2060 may use a set of these plurality of associations as the feature value of the target gas.
For example, the information processing apparatus 2000 computes the set Θu of the feature constants and a set Ξu of the contribution values for the rising time-series data 14 and generates a feature matrix Fu in which these sets are associated with each other. Further, the information processing apparatus 2000 computes the set Θd of the feature constants and a set Ξd of the contribution values for the falling time-series data 14 and generates a feature matrix Fd in which these sets are associated with each other. The information processing apparatus 2000 outputs {Fu, Fd}, which is a group of the generated feature matrices, as the feature value of the target gas.
Note that, the output unit 2060 may use, as the feature value of the target gas, one matrix obtained by connecting the feature matrix obtained from the rising time-series data 14 and the feature matrix obtained from the falling time-series data 14. For example, in this case, the output unit 2060 outputs Fc=(ΘuT, ΞuT, ΘdT, ΞdT), in which Fu=(ΘuT, ΞuT) and Fd=(ΘdT, ΞdT) are connected to each other, as the feature value of the target gas. Note that, when the numbers of rows of the feature matrices to be connected differ from each other, the matrix with the smaller number of rows is expanded by a method such as zero padding so that the numbers of rows of the feature matrices to be connected match.
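A minimal sketch of connecting the rising and falling feature matrices with zero padding is shown below; the column layout [Θu, Ξu, Θd, Ξd] follows the description above.

```python
import numpy as np

def concat_feature_matrices(Fu, Fd):
    """Connect a rising feature matrix Fu and a falling feature matrix Fd into
    one feature value Fc, padding the shorter one with zero rows (zero padding)."""
    rows = max(Fu.shape[0], Fd.shape[0])
    pad = lambda F: np.vstack([F, np.zeros((rows - F.shape[0], F.shape[1]))])
    return np.hstack([pad(Fu), pad(Fd)])     # shape (rows, 4): [Theta_u, Xi_u, Theta_d, Xi_d]
```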
The plurality of feature matrices are not limited to those obtained from the rising time-series data 14 and the falling time-series data 14. For example, the plurality of time-series data 14 may be obtained by exposing each of a plurality of sensors 10 having different characteristics to the target gas. When molecules attach to a sensor, the ease of attachment of each molecule with respect to the sensor differs depending on the characteristics of the sensor. For example, when using a type of sensor in which molecules attach to a functional membrane, the ease of attachment of each molecule with respect to the functional membrane differs depending on the material of the functional membrane. The same applies to the ease of detachment of each molecule. Therefore, by preparing a plurality of sensors 10 having functional membranes made of different materials and obtaining and analyzing the time-series data 14 from each of the plurality of sensors 10, the features of the target gas can be recognized more accurately.
The information processing apparatus 2000 acquires the time-series data 14 from each of the plurality of sensors 10 having different characteristics and generates the information in which the set of the feature constants and the set of the contribution values are associated with each other for each time-series data 14. The output unit 2060 outputs the group of the plurality of information obtained in this way as the feature value of the target gas.
The plurality of sensors 10 having different characteristics may be accommodated in one housing or may be accommodated in different housings. In the former case, for example, the sensor 10 is configured such that a plurality of functional membranes made of different materials are accommodated in one sensor housing and a detected value can be obtained for each functional membrane.
Further, the method described in
<Computation of Feature Value Considering Bias>
The detected value of the sensor 10 may include a bias term that does not change with time. In this case, the time-series data 14 is represented as follows. Note that, the velocity constant β is used here as the feature constant.
The bias is generated, for example, due to a shift in the offset of the sensor 10. In addition, for example, the bias is generated due to the contribution of components commonly contained in the target gas and the purge gas (for example, the contribution of nitrogen or oxygen in the atmosphere).
The information processing apparatus 2000 may have a function of removing such a bias from the time-series data 14. By doing so, the feature value of the target gas can be computed more accurately. Hereinafter, a method of computing the feature value in consideration of the bias will be described.
The computation unit 2040 computes the contribution vector Ξ in consideration of the bias by generating the prediction model of the time-series data 14 represented by the above Expression (27). That is, the computation unit 2040 estimates the parameters Ξ and b for the prediction model represented by the Expression (27). Specifically, the computation unit 2040 estimates Ξ and b by optimizing the objective functions (10), (12), or (16) not only for Ξ but also for b. Note that, when the time constant is used as the feature constant, βk is replaced with 1/τk in Expression (27).
For example, it is assumed that Expression (14) is used as the objective function. In this case, the computation unit 2040 computes Ξ and b by the following optimization problem. The same applies when (8) or (10) is used as the objective function.
The solutions Ξ and b of the above optimization problem can be computed by the following expressions.
Here, 1 is a vector in which all components are 1.
By estimating both the bias b and the contribution vector Ξ in this way, the effect of the bias is removed from the contribution vector, and the contribution vector can be computed accurately even when the bias is included in the detected value of the sensor 10.
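As an illustration of estimating the bias together with the contribution vector, the following Python sketch appends a constant column (the all-ones vector mentioned above) to the design matrix so that ordinary least squares returns both Ξ and b.

```python
import numpy as np

def estimate_contributions_with_bias(Phi, y):
    """Jointly estimate the contribution vector Xi and the bias b by appending
    a column of ones to the design matrix; the last coefficient is the bias."""
    Phi_b = np.hstack([Phi, np.ones((Phi.shape[0], 1))])   # last column models the bias term
    theta, *_ = np.linalg.lstsq(Phi_b, y, rcond=None)
    xi, b = theta[:-1], theta[-1]
    return xi, b
```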
Note that, the output unit 2060 may output the bias b or b0 in addition to the feature matrix F. When the bias is generated due to the shifting of the offset of the sensor, the value of b0 can be used to calibrate the sensor offset.
Although the example embodiments of the present invention have been described above with reference to the drawings, these are examples of the present invention, and various configurations other than the above can be adopted.
The whole or part of the example embodiments disclosed above can be described as, but not limited to, the following supplementary notes.
1. An information processing apparatus including: a time-series data acquisition unit that acquires time-series data of detected values output from a sensor where a detected value thereof changes according to attachment and detachment of a molecule contained in a target gas; a computation unit that computes a plurality of feature constants contributing with respect to the time-series data and a contribution value representing a magnitude of contribution for each feature constant with respect to the time-series data; and an output unit that outputs a combination of the plurality of feature constants and the contribution values computed for each feature constant as a feature value of gas sensed by the sensor, in which the feature constant is a time constant or a velocity constant related to a magnitude of a temporal change of the number of molecules attached to the sensor.
2. The information processing apparatus according to 1., in which the computation unit extracts a plurality of partial periods from a measurement period of the time-series data, and computes the feature constant for each of the partial periods based on a logarithm of a temporal change rate of the detected values in the partial period, and the partial period is a period in which the logarithms of the temporal change rate of the detected values contained in the partial period are substantially the same.
3. The information processing apparatus according to 1., in which the computation unit computes time-series vector data having the detected value at each time and a temporal change rate of the detected values at the time as elements, by using the time-series data, computes a velocity vector for each of the computed time-series vector data, extracts a plurality of partial periods from a measurement period of the time-series data based on a direction of the velocity vector, and computes the feature constant for each of the partial periods based on the direction of the velocity vector in the partial period, and the partial period is a period in which the directions of the velocity vectors contained in the partial period are substantially the same.
4. The information processing apparatus according to any one of 1. to 3., in which the computation unit computes each contribution value by performing, for a prediction model of the detected value of the sensor with the contribution value of each of the plurality of feature constants as a parameter, a parameter estimation that uses the acquired time-series data.
5. The information processing apparatus according to 4., in which the computation unit computes each of the contribution values by performing, for time-series data obtained from the prediction model and the acquired time-series data, a maximum likelihood estimation that uses a least squares method.
6. The information processing apparatus according to 5., in which in the maximum likelihood estimation in the least squares method, a regularization term is included in an objective function.
7. The information processing apparatus according to 4., in which the computation unit computes each of the contribution values by using a Maximum a Posteriori (MAP) estimation or a Bayesian estimation that uses a prior distribution of each of the contribution values and the acquired time-series data.
8. The information processing apparatus according to 7., in which the prior distribution is a multivariate normal distribution or a Gaussian process.
9. The information processing apparatus according to 4., in which the computation unit computes a plurality of feature constants and a plurality of contribution values by minimizing a minimum value of an objective function with respect to the plurality of feature constants for the objective function that represents a square error between the time-series data obtained from the prediction model and the acquired time-series data.
10. The information processing apparatus according to any one of 4. to 9., in which the prediction model contains a parameter that represents a bias, and the computation unit estimates parameters that each represent the contribution value and the bias for the prediction model.
11. The information processing apparatus according to any one of 1. to 10., in which the time-series data acquisition unit acquires a plurality of time-series data, the computation unit computes a group of a set of the feature constants and a set of the contribution values for each of the plurality of time-series data, and the output unit outputs information obtained by combining a plurality of the computed groups of the sets of the feature constants and the sets of the contribution values, as the feature value of the target gas.
12. The information processing apparatus according to 11., in which the plurality of time-series data include both time-series data obtained when the sensor is exposed to the target gas and time-series data obtained when the target gas is removed from the sensor.
13. The information processing apparatus according to 11., in which the plurality of time-series data include time-series data obtained from each of a plurality of the sensors having different characteristics.
14. A control method executed by a computer, the method including: a time-series data acquisition step of acquiring time-series data of detected values output from a sensor where a detected value thereof changes according to attachment and detachment of a molecule contained in a target gas; a computation step of computing a plurality of feature constants contributing with respect to the time-series data and a contribution value representing a magnitude of contribution for each feature constant with respect to the time-series data; and an output step of outputting a combination of the plurality of feature constants and the contribution values computed for each feature constant as a feature value of gas sensed by the sensor, in which the feature constant is a time constant or a velocity constant related to a magnitude of a temporal change of the number of molecules attached to the sensor.
15. The control method according to 14., in which in the computation step, a plurality of partial periods are extracted from a measurement period of the time-series data, and the feature constant is computed for each of the partial periods based on a logarithm of a temporal change rate of the detected values in the partial period, and the partial period is a period in which the logarithms of the temporal change rate of the detected values contained in the partial period are substantially the same.
16. The control method according to 14., in which in the computation step, time-series vector data having the detected value at each time and a temporal change rate of the detected values at the time as elements is computed by using the time-series data, a velocity vector is computed for each of the computed time-series vector data, a plurality of partial periods are extracted from a measurement period of the time-series data based on a direction of the velocity vector, and the feature constant is computed for each of the partial periods based on the direction of the velocity vector in the partial period, and the partial period is a period in which the directions of the velocity vectors contained in the partial period are substantially the same.
17. The control method according to any one of 14. to 16., in which in the computation step, each contribution value is computed by performing, for a prediction model of the detected value of the sensor with the contribution value of each of the plurality of feature constants as a parameter, a parameter estimation that uses the acquired time-series data.
18. The control method according to 17., in which in the computation step, each of the contribution values is computed by performing, for time-series data obtained from the prediction model and the acquired time-series data, a maximum likelihood estimation that uses a least squares method.
19. The control method according to 18., in which in the maximum likelihood estimation in the least squares method, a regularization term is included in an objective function.
20. The control method according to 17., in which in the computation step, each of the contribution values is computed by using a Maximum a Posteriori (MAP) estimation or a Bayesian estimation that uses a prior distribution of each of the contribution values and the acquired time-series data.
21. The control method according to 20., in which the prior distribution is a multivariate normal distribution or a Gaussian process.
22. The control method according to 17., in which in the computation step, a plurality of feature constants and a plurality of contribution values are computed by minimizing a minimum value of an objective function with respect to the plurality of feature constants for the objective function that represents a square error between the time-series data obtained from the prediction model and the acquired time-series data.
23. The control method according to any one of 17. to 22., in which the prediction model contains a parameter that represents a bias, and in the computation step, parameters that each represent the contribution value and the bias are estimated for the prediction model.
24. The control method according to any one of 14. to 23., in which in the time-series data acquisition step, a plurality of time-series data are acquired, in the computation step, a group of a set of the feature constants and a set of the contribution values is computed for each of the plurality of time-series data, and in the output step, information obtained by combining a plurality of the computed groups of the sets of the feature constants and the sets of the contribution values is output as the feature value of the target gas.
25. The control method according to 24., in which the plurality of time-series data include both time-series data obtained when the sensor is exposed to the target gas and time-series data obtained when the target gas is removed from the sensor.
26. The control method according to 24., in which the plurality of time-series data include time-series data obtained from each of a plurality of the sensors having different characteristics.
27. A program that causes a computer to execute each step of the control method according to any one of 14. to 26.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2018/028565 | 7/31/2018 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/026327 | 2/6/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
8732528 | Zhang | May 2014 | B1 |
9122705 | Ioffe | Sep 2015 | B1 |
10121156 | Takahashi | Nov 2018 | B2 |
10127694 | Vlassis | Nov 2018 | B2 |
10309924 | Jayant | Jun 2019 | B2 |
10909457 | Tan | Feb 2021 | B2 |
11216534 | Takada | Jan 2022 | B2 |
11468364 | Shi | Oct 2022 | B2 |
11513107 | Eto | Nov 2022 | B2 |
20090169089 | Hunt | Jul 2009 | A1 |
20120072141 | Hidai | Mar 2012 | A1 |
20150339680 | Takahashi | Nov 2015 | A1 |
20160084808 | Bertholon | Mar 2016 | A1 |
20190012297 | Kobayashi | Jan 2019 | A1 |
20190180194 | Kobayashi | Jun 2019 | A1 |
20200075134 | Shiba | Mar 2020 | A1 |
20210224664 | Kisamori | Jul 2021 | A1 |
20210232957 | Kisamori | Jul 2021 | A1 |
20210248847 | Ito | Aug 2021 | A1 |
20210293681 | Suzuki | Sep 2021 | A1 |
20210311009 | Suzuki | Oct 2021 | A1 |
20220018823 | Eto | Jan 2022 | A1 |
20220036223 | Eto | Feb 2022 | A1 |
20220172086 | Katz | Jun 2022 | A1 |
20220221839 | Eto | Jul 2022 | A1 |
20220309397 | Yamada | Sep 2022 | A1 |
Number | Date | Country |
---|---|---|
112418395 | Feb 2021 | CN |
112464999 | Mar 2021 | CN |
4148623 | Mar 2023 | EP |
3026188 | Mar 2016 | FR |
2845554 | Dec 1991 | JP |
H03-276046 | Dec 1991 | JP |
H11-264809 | Sep 1999 | JP |
3815041 | Aug 2006 | JP |
2006-275606 | Oct 2006 | JP |
2012064023 | Mar 2012 | JP |
5598200 | Oct 2014 | JP |
2014-221005 | Nov 2014 | JP |
2017-156254 | Sep 2017 | JP |
2018506722 | Jun 2018 | JP |
2019016193 | Jan 2019 | JP |
6663151 | Mar 2020 | JP |
7099623 | Jul 2022 | JP |
WO-2004090517 | Oct 2004 | WO |
WO-2014103560 | Jul 2014 | WO |
WO-2018101128 | Jun 2018 | WO |
WO-2020026326 | Feb 2020 | WO |
WO-2020026327 | Feb 2020 | WO |
WO-2020065806 | Apr 2020 | WO |
WO-2020100285 | May 2020 | WO |
WO-2020202338 | Oct 2020 | WO |
WO-2023037999 | Mar 2023 | WO |
Entry |
---|
Shuichi Seto et al., Analysis of Transient Response Output of Semiconductor Gas Sensor, IEEJ Transactions on Sensors and Micromachines, vol. 125, No. 3, 2005 (Year: 2005). |
Ye Zhang et al., Estimating the Rate Constant From Biosensor Data via an Adaptive Variational Bayesian Approach, The Annals of Applied Statistics 2019, vol. 13, No. 4, 2011-2042 (Year: 2019). |
Ben Lambert et al., R*: A Robust MCMC Convergence Diagnostic with Uncertainty Using Decision Tree Classifiers, International Society for Bayesian Analysis, 2022, 17, No. 2, pp. 353-379 (Year: 2022). |
Tantithamthavorn et al., An Empirical Comparison of Model Validation Techniques for Defect Prediction Models, IEEE Transactions on Software Engineering, vol. 43, No. 1, Jan. 2017 (Year: 2017). |
Hébert et al., The Living Planet Index's ability to capture biodiversity change from uncertain data, Ecological Society of America, Mar. 21, 2023, pp. 1-13 (Year: 2023). |
Wu et al., Bayesian Annealed Sequential Importance Sampling (BASIS): an unbiased version of Transitional Markov Chain Monte Carlo, American Society of Mechanical Engineers, https://doi.org/10.1115/1.4037450, 2018, p. 18 (Year: 2018). |
Roopnarine et al., The description and classification of evolutionary mode: A computational approach, The Paleontological Society., 2001, 27(3), 2001, pp. 446-465 (Year: 2001). |
International Search Report for PCT Application No. PCT/JP2018/028565, mailed on Oct. 9, 2018. |
Shuichi Seto et al., Chemical Analysis for Transient Response of Semiconductor Sensor by Autoregressive Model. IEEJ Transactions on Sensors and Micromachines., 2005, Japan, vol. 125, No. 3, pp. 129-134. |
Japanese Office Action for JP Application No. 2020-533925 mailed on Jan. 18, 2022 with English Translation. |
Kensuke Sekihara, “Bayesian signal processing”, 1st edition, 2nd printing, Jan. 10, 2016, pp. 9-32, Kyoritsu Shuppan Co., Ltd., Japan. |