Advanced imaging technologies, such as microscopy and spectroscopy, have often been employed by scientists and engineers to gain insights into materials of interest. Such technologies provide a two (or higher) dimensional magnified image of the material (or at least a part thereof). Analysis techniques may then be applied on the acquired image to visualize the internal structure and/or to characterize the material. Depending on the analysis, a number of characteristic properties are measured and quantified, such as structure, type and number of distinct phases, phase morphology, and phase chemistry.
Regardless of the characteristic properties being measured, it is recognized that a majority of natural and man-made materials possess a high degree of heterogeneity, often as the result of varying void and grain sizes. Such heterogeneity often makes it challenging to find a suitable sample volume that, for the purposes of engineering analysis, may be regarded as representative of the material body as a whole. Typically, the smallest such volume (or, in the case of a two-dimensional image, the smallest such area) is called the representative elementary volume (“REV”).
There are a number of existing REV determination procedures, at least one of which performs adequately for relatively homogeneous materials. See, e.g., Costanza-Robinson et al., “REV estimation for porosity, moisture saturation, and air-water interfacial areas in unsaturated porous media: Data quality implications,” Water Resources Research, v. 47, W07513, 2011. Nevertheless, existing methods may suffer from a number of shortcomings, including overestimation, overly-generous search region requirements, overly-restrictive subsample positioning requirements, and a general inability to cope with relatively heterogeneous materials.
Accordingly, there is disclosed herein systems and methods for performing representative elementary volume (“REV”) determination via clustering-based statistics. The disclosed systems and methods extend the REV concept so as to enable the number and type of REV properties to be chosen in light of the material, thereby producing a better representation of the material that includes the REV size and further includes the number and location of distinctive regions in the material.
The disclosed systems and methods may benefit from the use of a novel clustering method that is also disclosed herein for this and other purposes as well. The novel clustering method operates on complex and/or large data sets to identify the number of clusters and their associated parameters. An iterative split-and-merge process is employed to split clusters having a low likelihood of representing the data set, before merging those clusters that a data set-based criterion indicates should be merged.
We begin with a reconsideration of the conventional approach to determining the representative elementary volume (“REV”). For convenience, the drawings employ two-dimensional images, but the described principles are applicable to three-dimensional volumes as well.
However, as shown by the example of
As an example, assume for ease of discussion that there are five subsample positions, with one in the host rock layer 302, three in the proto-cataclastic layer 304, and one in the damage zone 306. The dependence of the five subsample porosities on subsample size is shown as broken lines in
Consequently, it is proposed that the property vs. size stability criterion be dropped in favor of a property distribution vs. size criterion.
The distribution stability criterion may be expressed in various forms. At one extreme, the distribution can be characterized solely in terms of its mean, which corresponds to the subsample averages plotted in
Rather, as indicated by
Moreover, it should be noted that the unimodal distributions sought need not be limited to distributions of one variable as shown in
A flow diagram for the method is shown in
Blocks 508-513 represent the search loop which iterates through a series of increasing subsample sizes. In block 508, the selected properties for the current subsamples are measured. The selected properties are determined beforehand, and their number can vary. Each of these properties will be measured and analyzed together.
In block 510, the subsample property measurements are collected and used to analyze the collective property distribution for that subsample size. Any one of a number of available statistical analysis techniques may be employed to determine the number, position, and variance of the constituent modes/groups/clusters, including the expectation-maximization method and its variations, logistic regression, and variations of the Bayesian regression method. (A particularly suitable clustering method is discussed in greater detail below.) When analyzing the distribution of a single property, the analysis yields the mean, variance, and fraction (relative weighting) of each cluster. When multiple properties are analyzed, the analysis provides a mean vector, covariance matrix, and scalar fraction for each cluster.
In block 512, the results of the analysis are compared to the results determined with the previous subsample size, and if the results do not match to within some threshold, the subsample size is increased in block 513 and the loop is repeated. Once a match is found, the REV size is set to the previous subsample size in block 514, and in block 516 that REV size is used for further characterization of the sample.
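As a concrete illustration, the search loop of blocks 508-516 might be sketched in Python as below. The `measure` callback is a hypothetical stand-in for the property measurement of block 508, and the distribution is characterized here only by its mean and spread (the simplest characterization mentioned above) rather than a full clustering analysis:

```python
import numpy as np

def find_rev_size(image, sizes, positions, measure, tol=0.05):
    """REV search loop (blocks 508-516): at each subsample size, measure
    the chosen property for every subsample position, characterize the
    collective distribution, and stop once the characterization stabilizes."""
    prev = None
    for size in sizes:                                  # block 513: growing sizes
        # Block 508: measure the selected property for each subsample.
        values = np.array([measure(image, pos, size) for pos in positions])
        # Block 510: characterize the collective property distribution
        # (mean and spread stand in for the full clustering analysis).
        stats = np.array([values.mean(), values.std()])
        # Block 512: compare with the results for the previous size.
        if prev is not None and np.allclose(stats, prev, atol=tol):
            return size                                 # block 514: REV size found
        prev = stats
    return None                                         # no REV in the search range
```

In practice the characterization in block 510 would be replaced by the mixture-model clustering described below, with the comparison in block 512 applied to the fitted cluster parameters.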
As shown in
Unlike existing methods, the novel method advantageously avoids solutions of the form shown in
In addition to the set of input points, the novel method requires only a threshold value for determining convergence. Based on these inputs, the method iteratively adjusts the number of clusters and their associated parameters to achieve the maximum likelihood, i.e., the likelihood that the clusters are the best representation of the data set. The method ends with a mixture model, built from the computed clusters and their associated parameters, that represents the data set with an optimal number of well-separated clusters while economizing on computational resources.
As shown in
In block 806, the method determines two merge criteria based on the data set. A first merge criterion is the critical merge degree. The Ma (2005) reference above establishes this criterion as
δ1 = 0.004·N^(1/2)
where N is the number of data points. Other critical merge criteria may also be used to set a maximum threshold for merging two different clusters.
The present method further determines a second merge criterion to enforce a minimum distance between clusters. If the centers of the clusters are below the distance threshold, they are merged. Various distance criteria may be employed for this purpose, including Euclidean distance, Mahalanobis distance, and bin-size-based distance. One particularly suitable bin-size-based distance measure is based on a modified Sturges' formula. The second merge criterion is:
where ømin and ømax represent the minimum and maximum values of the observed quantity, respectively, and Ć is the characteristic number of clusters. The characteristic number of clusters Ć may be estimated based on the number of current clusters and the distances between them. For example, if the data set is described using a mixture of Gaussian distributions and the distance between the mean values μ of two clusters is less than either of their standard deviations σ, then one of them is omitted from the count. In contrast, if the two clusters are well separated, i.e., the distance between their mean values is greater than both of their standard deviations, they are each included in the count. Consequently, the latter case yields a larger second merge criterion. The merge criteria can be viewed as a way to minimize the number of clusters while preserving the representativeness of the computed clusters.
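Under the stated Gaussian assumptions, the two merge criteria might be computed as in the sketch below. The δ1 formula follows the Ma (2005) form quoted above; the patent's modified Sturges' formula for δ2 is not reproduced in the text, so the classic Sturges bin width divided by the characteristic cluster count Ć is used here purely as an illustrative stand-in:

```python
import numpy as np

def merge_criteria(x, mu, var):
    """Compute the two merge criteria of block 806 for a 1-D Gaussian
    mixture. delta1 is the critical merge degree; delta2 is a Sturges-style
    minimum distance between cluster centers (assumed form)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    delta1 = 0.004 * np.sqrt(n)          # critical merge degree, Ma (2005)
    # Characteristic number of clusters: omit one cluster from the count
    # for any pair whose means are closer than either standard deviation.
    c_char = len(mu)
    for i in range(len(mu)):
        for j in range(i + 1, len(mu)):
            d = abs(mu[i] - mu[j])
            if d < np.sqrt(var[i]) or d < np.sqrt(var[j]):
                c_char -= 1
                break                    # cluster i omitted from the count once
    c_char = max(c_char, 1)
    # Classic Sturges bin width, scaled by the characteristic cluster
    # count (an assumption; the patent's modified formula may differ).
    bin_width = (x.max() - x.min()) / (1.0 + np.log2(n))
    delta2 = bin_width / c_char
    return delta1, delta2
```

Note that, consistent with the text, a smaller characteristic cluster count (overlapping clusters) yields a larger δ2 and hence more aggressive merging.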
In block 808, the method performs an EM (optimization) step, adjusting the current cluster parameters to maximize their likelihood. This EM operation has been discussed intensively in the literature, including the foregoing references. For the sake of completeness, a short discussion of the EM step is repeated here. Let {x1, x2, x3, . . . , xN} be a random sample of N points drawn from the d-dimensional data set. A d-variate mixture model for the given data set can be written as:
f(x|θ) = Σ_{k=1}^{C} π_k f(x|θ_k)
where f(·) represents a selected density function with C components (i.e., C constituent clusters), θ_k is the vector of associated parameters of the density function for the kth component, and π_k is the mixing weight of the kth component. In the EM step, the expected log-likelihood function of the mixture model is computed from:
L(θ|θ^t) = Σ_{n=1}^{N} log { Σ_{k=1}^{C} π_k f(x_n|θ_k) }
Then, the parameters that maximize L(θ|θ^t) are computed from:

θ^{t+1} = arg max_θ L(θ|θ^t)
where arg max stands for the argument of the maximum. The procedure is repeated for t=1, 2, 3, . . . until a convergence based on the log-likelihood is achieved. Then the method proceeds to the next step.
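For a single property modeled as a one-dimensional Gaussian mixture, one EM iteration of the kind performed in block 808 can be sketched as below. This is a minimal illustration of the standard expectation and maximization updates, not the patent's implementation:

```python
import numpy as np

def em_step(x, w, mu, var):
    """One EM iteration for a 1-D Gaussian mixture (block 808).
    x: data points (N,); w, mu, var: per-cluster weight, mean, variance (C,).
    Returns updated parameters and the model log-likelihood."""
    x = np.asarray(x, dtype=float)[:, None]                 # shape (N, 1)
    # E-step: per-cluster weighted Gaussian densities and responsibilities.
    dens = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    total = dens.sum(axis=1, keepdims=True)                 # mixture density f(x_n)
    gamma = dens / total                                    # responsibilities (N, C)
    # M-step: re-estimate weights, means, and variances.
    nk = gamma.sum(axis=0)                                  # effective cluster sizes
    w_new = nk / len(x)
    mu_new = (gamma * x).sum(axis=0) / nk
    var_new = (gamma * (x - mu_new) ** 2).sum(axis=0) / nk
    log_lik = np.log(total).sum()                           # Σ_n log f(x_n)
    return w_new, mu_new, var_new, log_lik
```

Iterating this step until the log-likelihood stops improving implements the convergence test described above; each iteration is guaranteed not to decrease the log-likelihood.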
In block 810, the method performs a split (S) operation on the cluster having the smallest local log-likelihood such that the overall likelihood of the mixture model is increased. The local log-likelihood of the kth cluster may be computed from:

L_k = Σ_{n=1}^{N} γ_{nk} log { π_k f(x_n|θ_k) }
where the responsibility γ_{nk} of data point n in the kth cluster is:

γ_{nk} = π_k f(x_n|θ_k) / Σ_{j=1}^{C} π_j f(x_n|θ_j)
When split, the cluster with the minimum local log-likelihood is divided into two clusters that are offset from the local maximum. For example, in the case that the data is described using a mixture model based on the Gaussian distribution function with mean μ, variance σ², and mixing weight π, a cluster with index α may be split into two new clusters as follows:
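The split formulas themselves are not reproduced in the text above. A common choice, sketched below under a 1-D Gaussian assumption, offsets the two new means by one standard deviation from the original mean and halves the mixing weight and variance; the patent's exact split formulas may differ:

```python
import numpy as np

def split_cluster(w, mu, var, idx):
    """Split cluster `idx` (the one with the smallest local log-likelihood)
    into two clusters offset from the original mean. The ±1-sigma offset
    and halved weight/variance are a common heuristic, not necessarily
    the patent's exact formulas."""
    w, mu, var = (np.asarray(a, dtype=float) for a in (w, mu, var))
    sigma = np.sqrt(var[idx])
    new_mu = [mu[idx] - sigma, mu[idx] + sigma]   # offset from the local maximum
    new_w = [w[idx] / 2.0, w[idx] / 2.0]          # mixing weight shared equally
    new_var = [var[idx] / 2.0, var[idx] / 2.0]    # narrower child clusters
    w = np.append(np.delete(w, idx), new_w)
    mu = np.append(np.delete(mu, idx), new_mu)
    var = np.append(np.delete(var, idx), new_var)
    return w, mu, var
```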
The S-step is followed in block 812 by another EM-step to maximize the overall log-likelihood with the new cluster included. Then, using the new parameters, a merge operation is performed in block 814. During the novel merge (M) operation implemented by the present method, a merge degree between each pair of clusters i,j is computed:
The merge degree with the minimum value is compared with the critical merge degree δ1, i.e., the first merge criterion. If the minimum merge degree is lower than δ1, the corresponding pair of clusters will be merged at the end of the M-step. In at least some embodiments, all cluster pairs having a merge degree below the first merge criterion are merged simultaneously.
However, before such merging is actually performed, the distance between each given cluster and the remaining clusters is evaluated. For example, for one-dimensional data with Gaussian distribution, the distance S between ith and jth cluster may be computed from the absolute difference between the means S=|μi−μj|. Cluster pairs are slated for merging whenever the distance S is less than the minimum distance δ2, i.e. the second merge criterion.
Once all of the pairs have been evaluated and one or more designated for merging, the associated parameters of the merging clusters are combined. In the case that the data is described using a mixture model based on the Gaussian distribution function with the parameters discussed above, the parameters of the new cluster η (resulting from the merge operation) may be computed as follows:
where the summation from p=1 to MP represents the summation over all merge pairs that satisfied the merge criteria. In block 816, the EM-step is again repeated after the M-step to maximize the overall log-likelihood.
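One standard way to combine the parameters of a merging pair, shown here for the 1-D Gaussian case, is moment matching: the merged weight is the sum of the pair's weights, and the merged mean and variance preserve the pair's first two moments. The patent's exact combination formulas are not reproduced in the text above, so this is an illustrative sketch:

```python
import numpy as np

def merge_clusters(w, mu, var, i, j):
    """Merge clusters i and j into a single cluster by moment matching
    (a standard choice, not necessarily the patent's exact formulas)."""
    w, mu, var = (np.asarray(a, dtype=float) for a in (w, mu, var))
    w_n = w[i] + w[j]                                        # combined weight
    mu_n = (w[i] * mu[i] + w[j] * mu[j]) / w_n               # weighted mean
    # Variance that preserves the pair's second moment.
    var_n = (w[i] * (var[i] + mu[i] ** 2)
             + w[j] * (var[j] + mu[j] ** 2)) / w_n - mu_n ** 2
    keep = [k for k in range(len(w)) if k not in (i, j)]
    return (np.append(w[keep], w_n),
            np.append(mu[keep], mu_n),
            np.append(var[keep], var_n))
```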
In block 818, the method determines whether convergence has been reached by comparing the log-likelihood before the split step 810 with the one after the merge step (after block 816). Convergence is indicated when the difference between the log-likelihoods is less than a pre-defined convergence criterion, and the resulting cluster parameters are displayed and/or stored for later use in block 822. For example, once convergence is reached, a cluster or property distribution index value may be assigned to each of a plurality of data points used to represent property measurements throughout 2D or 3D digital images. Further, as the data points correspond to known coordinate positions in 2D or 3D digital images, a cluster or property distribution index value may be spatially assigned to subsamples within 2D or 3D digital images. The property distribution index values may be used for subsequent analysis and characterization of the corresponding sample. For more information regarding subsequent analysis options that may benefit from property distribution index values, reference may be had to Radompon Sungkorn et al., “Digital Rock Physics-Based Trend Determination and Usage for Upscaling”, PCT Application Serial Number PCT/US15/23420, filed Mar. 30, 2015, which is hereby incorporated herein by reference in its entirety. If a determination is made that convergence has not been reached (determination block 818), the log-likelihood information is stored for later reference and the merge criteria are updated (block 820) before the method returns to block 810.
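The spatial assignment of property distribution index values might look like the following sketch, where `assign_cluster_indices` is a hypothetical helper that labels each subsample (at a known 2D or 3D coordinate) with the mixture component most responsible for its measured property value, shown for the 1-D Gaussian case:

```python
import numpy as np

def assign_cluster_indices(values, coords, weights, means, variances):
    """Map each subsample coordinate to the index of the converged mixture
    component with the greatest responsibility for its measured value
    (hypothetical helper; 1-D Gaussian mixture assumed)."""
    values = np.asarray(values, dtype=float)[:, None]        # (N, 1)
    w = np.asarray(weights)[None, :]                         # (1, C)
    mu = np.asarray(means)[None, :]
    var = np.asarray(variances)[None, :]
    # Weighted Gaussian density of each component at each value.
    dens = w * np.exp(-0.5 * (values - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    labels = dens.argmax(axis=1)                             # most responsible cluster
    return {tuple(c): int(k) for c, k in zip(coords, labels)}
```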
The methods represented by
For high resolution imaging, the observation chamber 122 is typically evacuated of air and other gases. A beam of electrons or ions can be rastered across the sample's surface to obtain a high resolution image. Moreover, the ion beam energy can be increased to mill away thin layers of the sample, thereby enabling sample images to be taken at multiple depths. When stacked, these images provide a three-dimensional image of the sample. As an illustrative example of the possibilities, some systems enable such imaging of a 40×40×40 micrometer cube at a 10 nanometer resolution.
However, the system described above is only one example of the technologies available for imaging a sample. Transmission electron microscopes (TEM) and three-dimensional tomographic x-ray transmission microscopes are two other technologies that can be employed to obtain a digital model of the sample. Regardless of how the images are acquired, the foregoing disclosure applies so long as the resolution is sufficient to reveal the porosity structure of the sample.
The source of the sample, such as in the instance of a rock formation sample, is not particularly limited. For rock formation samples, for example, the sample can be a sidewall core, a whole core, drill cuttings, an outcrop quarrying sample, or any other sample source that can provide suitable samples for analysis using methods according to the present disclosure.
Typically, a user would employ a personal workstation 202 (such as a desktop or laptop computer) to interact with the larger system 200. Software in the memory of the personal workstation 202 causes its one or more processors to interact with the user via a user interface, enabling the user to, e.g., craft and execute software for processing the images acquired by the scanning microscope. For tasks having small computational demands, the software may be executed on the personal workstation 202, whereas computationally demanding tasks may be preferentially run on the high performance computing platform 206.
When adapted for use in the illustrative systems, the methods may be modified to enable one or more of the operations to be carried out concurrently to exploit the availability of parallel processing resources. Moreover, the order of the steps may vary, with some of the steps carried out in a potentially speculative fashion. Such variations are within the scope of the claims. Fortunately, the disclosed methods reduce the computational complexity to a level where data sets on the order of 10^8 pixels can be analyzed in a timely fashion. The foregoing systems and methods are applicable to many industries, including subterranean water and hydrocarbon reservoir analysis, mining, tissue analysis, and structural analysis of materials. The clustering methods disclosed above have even wider application, including statistical data analysis and information mining in small and large data sets from all fields of endeavor, in particular genetics and other medical sciences.
The present application claims priority to U.S. Patent Application 61/972,990 titled “Representative Elementary Volume Determination via Clustering-Based Statistics”, filed Mar. 31, 2014 by inventors Radompon Sungkorn, Jonas Toelke, Yaoming Mu, Carl Sisk, Avrami Grader, and Naum Derzhi, which is incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2015/023419 | 3/30/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/153505 | 10/8/2015 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
8504491 | Thiesson et al. | Aug 2013 | B2 |
20090259446 | Zhang et al. | Oct 2009 | A1 |
20100228486 | Wu | Sep 2010 | A1 |
20120283989 | Hirohata | Nov 2012 | A1 |
20130262028 | De Prisco | Oct 2013 | A1 |
20130338976 | De Prisco | Dec 2013 | A1 |
20140044315 | Derzhi et al. | Feb 2014 | A1 |
Number | Date | Country |
---|---|---|
1173816 | Jan 2002 | EP |
0065481 | Nov 2000 | WO |
2012118867 | Jul 2012 | WO |
2013188239 | Dec 2013 | WO |
2015153505 | Oct 2015 | WO |
2015153506 | Oct 2015 | WO |
Entry |
---|
Thomas, M. et al. “Representative Volume Element of Anisotropic Unidirectional Carbon-Epoxy Composite with High-Fibre Volume Fraction,” Composites Science and Technology, vol. 68, No. 15-16, XP025670373, Dec. 1, 2008, p. 3184-3192. |
Wang, Hai Xian et al. “Estimation for the Number of Components in a Mixture Model Using Stepwise Split-and-Merge EM Algorithm,” Pattern Recognition Letters, Elsevier, Amsterdam, NL, vol. 25, No. 16, XP004619671, Dec. 1, 2004, p. 1799-1809. |
Konstantinos Blekas, “Split-Merge Incremental Learning (SMILE) of Mixture Models,” Artificial Neural Networks—Icann, Springer Berlin Heidelberg, Berlin, XP019069466, Sep. 9, 2007, p. 291-300. |
“AU Patent Examination Report”, dated Mar. 6, 2017, Appl No. 2015241029, “Representative Elementary Volume Determination via Clustering-based Statistics,” Filed Mar. 30, 2015, 3 pgs. |
“PCT International Search Report and Written Opinion”, dated Jul. 7, 2015, Appl No. PCT/US2015/023419, “Representative Elementary Volume Determination via Clustering-based Statistics,” filed Mar. 30, 2015, 11 pgs. |
Bishop, Chapter 9: “Mixture Models and EM,” Pattern Recognition and Machine Learning, Springer Science +Business Media LLC, eds. Jordan et al., Singapore, 2006: pp. 423-459. |
Costanza-Robinson et al., “Representative elementary volume estimation for porosity, moisture saturation, and air-water interfacial areas in unsaturated porous media: Data quality implications,” Water Resources Research, Jul. 2011, vol. 47(W07513): pp. 1-12. |
Do et al., “What is the expectation maximization algorithm?,” Nature Biotechnology, Aug. 2008, vol. 26(8): pp. 897-899. |
Ma et al., “A Dynamic Merge-or-Split Learning Algorithm on Gaussian Mixture for Automated Model Selection*,” Ideal, 2005: pp. 203-210. |
Ueda et al., “SMEM Algorithm for Mixture Models,” Journal of Neural Computation, Sep. 2000, vol. 12(9): pp. 2109-2128. |
Zhang et al., “EM algorithms for Gaussian mixtures with split-and-merge operation,” Pattern Recognition 36, 2003: pp. 1973-1983. |
Number | Date | Country | |
---|---|---|---|
20170018096 A1 | Jan 2017 | US |
Number | Date | Country | |
---|---|---|---|
61972990 | Mar 2014 | US |