The present application relates to co-pending U.S. patent application Ser. No. 10/811,403, filed Mar. 26, 2004, and entitled “Genetic Algorithm Based Selection of Neural Network Ensemble for Processing Well Logging Data”.
Neural networks are useful tools for machine learning. Inspired by studies of nerve and brain tissue, designers have created a variety of neural network architectures. In many commonly-used architectures, the neural networks are trained with a set of input signals and corresponding set of desired output signals. The neural networks “learn” the relationships between input and output signals, and thereafter these networks can be applied to a new input signal set to predict corresponding output signals. In this capacity, neural networks have found many applications including identifying credit risks, appraising real estate, predicting solar flares, regulating industrial processes, and many more.
In many applications, there are a large number of possible input parameters that can be selected in order to predict desired output parameters. Optimizing the choice of input parameters can assist in producing stable and accurate predictions. Unfortunately, the input optimization process can be difficult.
A better understanding of the disclosed embodiments can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the figures and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
The problems outlined above are at least in part addressed by the herein-disclosed methods of creating and using neural network ensembles (combinations of more than one neural network) to obtain robust performance. Some embodiments take the form of computer-based methods that comprise receiving a set of available inputs; receiving training data comprising values for the available inputs and corresponding values for at least one output; training at least one neural network for each of at least two different subsets of the set of available inputs; and providing at least two trained neural networks having different subsets of the available inputs as components of a neural network ensemble configured to transform the available inputs into at least one output.
Some embodiments provide a well log synthesis method that comprises: receiving a set of input signals that represent measurements of downhole formation characteristics; applying a first subset of the set of input signals to a first neural network to obtain one or more estimated logs; applying a second, different subset of the set of input signals to a second neural network to obtain one or more estimated logs; and combining corresponding ones of the one or more estimated logs from the first and second neural networks to obtain one or more synthetic logs. More than two neural networks can be used. Each of the neural networks may also differ in ways other than the input signal subset, e.g., the neural networks may also have different complexities.
The disclosed methods may be embodied in an information carrier medium that, when placed in operable relation to a computer, provides the computer with software comprising a training process, a selection process, and a prediction process. The training process generates a pool of neural networks having diversity in inputs and in complexity. The selection process identifies an ensemble of neural networks from the pool having a desirable fitness measure, the fitness measure for each neural network ensemble being based at least in part on a measure of one or more of the following: validation error, complexity, and negative correlation. The prediction process applies the neural network ensemble to obtain a prediction of one or more estimated logs.
As borehole drilling is completed, a string of casing pipe 118 is inserted to preserve the integrity of the hole and to prevent fluid loss into porous formations along the borehole path. Typically, the casing is permanently cemented into place to maximize the borehole's longevity.
The logging information is intended to characterize formations 116 so as to locate reservoirs of oil, gas, or other underground fluids, and so as to provide data for use in field correlation studies and to assist in seismic data interpretation. Whenever possible, logging is performed in uncased (“open hole”) conditions because the logging tool can achieve closer contact with the formation and because some of the desired open hole measurements are adversely affected by the casing and/or cement in a cased borehole. Three open hole logs that have proven useful for characterizing downhole formations are those shown in
However, it is often necessary to gather logging information after a borehole has been cased, e.g., after casing pipe 118 has been cemented in along the full length of the borehole. Because the formation is isolated from the borehole interior, logging can only be performed by a limited number of tools that can sense formation properties through the casing, e.g., acoustic logging tools or nuclear logging tools. In particular, pulsed neutron logging tools such as the pulsed neutron capture (PNC) logging tool provide a number of cased hole measurements (“logs”), including those shown in
Given the set of available logs from a PNC logging tool and/or other cased hole logging tools, it is desirable to convert those logs into synthetic approximations to those open hole logs that have proved useful in the past. Such a conversion also enables an apples-to-apples comparison of logs taken before and after casing the borehole, and is useful in correlating with open hole logs taken in other wells in the field.
Returning to the illustrated example, the input values to transform block 406 are those cased hole log values at a given depth 408. For this set of input values, transform block 406 produces a set of output values that are the synthetic open hole log values at the given depth 408. The open hole logs across the entire depth interval logged with the pulsed neutron tool can thus be simulated by repeating the conversion at each depth covered by the cased hole logs.
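By way of illustration only, the depth-by-depth conversion amounts to applying the same mapping to every row of a matrix of cased hole samples. In the minimal sketch below, the names are hypothetical and `transform` simply stands in for transform block 406:

```python
import numpy as np

def synthesize_open_hole_logs(cased_hole_logs, transform):
    """Apply the cased-hole-to-open-hole transform at every logged depth.

    cased_hole_logs: 2-D array, one row per depth sample, one column per cased hole log.
    transform:       callable mapping one row of cased hole values to the
                     corresponding synthetic open hole values.
    """
    return np.vstack([transform(row) for row in cased_hole_logs])
```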
Transform block 406 may employ neural networks to perform the conversion. Because the amount of training data is limited relative to the desired operating scope, the transform block 406 may employ multiple neural networks that are combined in an ensemble to provide more robust behavior both within and outside the training region.
In addition to being diverse in the input signals upon which they operate, the neural networks may also be diverse in other ways. For example, when the neural networks are based on a back-propagation architecture (back-propagation networks, or “BPN”), one of the architectural parameters is the number of nodes in the hidden layer. (Details regarding BPN design are widespread in the literature. See, e.g., J. A. Freeman and D. M. Skapura, Neural Networks, © 1991 by Addison-Wesley, Chapter 3.) In some embodiments, the pool of neural networks is diverse in the number of nodes in the hidden layer. For example, each of the neural networks shown in pool 502 is accompanied by an ordered pair indicating the size of the input signal set and the number of nodes in the hidden layer (so “3,10” indicates three input signals and ten hidden nodes). Other ways to construct a pool of diverse neural networks include: training different networks on different training data sets; training different networks on differently partitioned training data sets; training from different initial states; using different neuron functions; using different training algorithms; and/or using different architectural parameters where available.
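A minimal sketch of how such a pool might be assembled, using scikit-learn's MLPRegressor as a stand-in back-propagation network; the subset size, hidden-layer sizes, and training settings shown are illustrative assumptions rather than parameters of any particular embodiment:

```python
from itertools import combinations
from sklearn.neural_network import MLPRegressor

def build_network_pool(X_train, y_train, subset_size, hidden_layer_sizes=(5, 10, 15)):
    """Train one back-propagation network per (input subset, hidden size) pair."""
    pool = []
    n_inputs = X_train.shape[1]
    for subset in combinations(range(n_inputs), subset_size):
        cols = list(subset)
        for n_hidden in hidden_layer_sizes:
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                               random_state=0)
            net.fit(X_train[:, cols], y_train)
            # Record the ordered pair (number of inputs, hidden nodes), e.g. "3,10".
            pool.append({"inputs": cols, "hidden": n_hidden, "model": net})
    return pool
```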
Given a range of diverse neural networks, each network is trained in accordance with the appropriate training algorithm to obtain pool 502. A selection process 504 is then applied to assemble an optimized neural network ensemble. The transform block 406 shown in
Each of the neural networks has been trained to produce three outputs, each output corresponding to one of the open hole logs. For each open hole log, a corresponding output unit 518, 520, or 522, averages the corresponding output signal from the five neural networks to produce the corresponding synthetic open hole log. In some embodiments, the output units do a straight averaging operation, while alternative embodiments perform a weighted averaging operation.
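A minimal sketch of the combination step, assuming each ensemble member records its model and input columns as in the earlier pool sketch; passing no weights gives the straight averaging operation, while unequal weights give the weighted variant:

```python
import numpy as np

def ensemble_predict(ensemble, X, weights=None):
    """Combine member predictions; each member sees only its own input columns."""
    preds = np.stack([member["model"].predict(X[:, member["inputs"]])
                      for member in ensemble])
    if weights is None:                                # straight averaging
        return preds.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), preds, axes=1)    # weighted averaging
```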
In alternative embodiments, each neural network is trained to produce a single output, with different networks being trained to produce different synthetic open hole logs. The outputs of those networks trained for a given open hole log are combined to synthesize that open hole log. In yet other ensemble embodiments, multiple-output neural networks are combined with single-output neural networks. In such embodiments, each output unit is associated with a single open hole log and accordingly combines only those neural network outputs that have been trained to predict that open hole log.
Neural network ensemble architectures such as that described above, when constructed using an appropriate pool (such as one constructed in accordance with the method described below with respect to
Blocks 606-622 form a loop that is performed for each input subset size from the starting size to the maximum size (the number of available input signals). An inner loop, comprising blocks 608-618, is performed for each candidate input subset of the given size. The order in which the input subsets of a given size are considered is unimportant. In some embodiments, however, restrictions are placed on which input subsets of a given size are considered. For example, in some embodiments the candidate subsets of a given size include the “best” subset of the next-smaller size (“stepwise selection”). In some alternative embodiments, the direction is reversed, and the candidate input subsets are only those proper subsets of the “best” input subset of the next larger size (“reverse stepwise selection”). In yet other alternative embodiments, an exhaustive processing of all subsets of a given size is performed for each input subset size (“exhaustive search”). In still other alternative embodiments, a genetic algorithm is used as a fast approximation of the exhaustive processing alternative (“genetic input selection”) when a large number of candidate inputs are available.
In block 606, a first input subset of the given size is chosen. In the first iteration of outer loop 606-622 (e.g., when the size equals 1), the candidate input subsets may be expressed as an exhaustive list of all subsets of that size that can be made from the set of available input signals. In subsequent iterations of the outer loop, the candidate input subsets may be restricted to only those subsets that include the subset determined to be best in the preceding loop iteration, i.e., stepwise selection. Thus, for example, if the first outer loop iteration determines that the best input subset of size 1 is {SGFM}, then in the second iteration of the outer loop, the candidate input subsets in some embodiments are restricted to input subsets of size 2 that include SGFM. The order in which the candidate subsets are considered is unimportant.
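A minimal sketch of how the stepwise candidates might be enumerated (the function and argument names are hypothetical); each candidate of a given size extends the best subset of the next-smaller size by one as-yet-unused input signal, so, e.g., passing {"SGFM"} as the best size-1 subset yields the size-2 candidates that all include SGFM:

```python
def stepwise_candidates(available_inputs, best_smaller_subset):
    """Candidate input subsets one signal larger than the best subset so far.

    available_inputs:     the set of available input signal names.
    best_smaller_subset:  the best subset of the next-smaller size (empty on the
                          first outer-loop iteration, so size-1 subsets result).
    """
    best = set(best_smaller_subset)
    return [best | {name} for name in available_inputs if name not in best]
```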
In block 608, one or more neural networks are trained. In those embodiments where diversity beyond input set diversity is desired, multiple neural networks are trained in block 608. For example, pool 502 (
The percentage of data in the training, validation, and testing sets can be varied. In some embodiments, eighty percent of each cased hole log and corresponding parts of the open hole logs are applied in a standard BPN training algorithm. Ten percent of the data from the test wells is used for validation, i.e., for early training termination if performance fails to converge. Finally, ten percent of the test well data is withheld from training for testing performance in block 610. This percentage breakdown is abbreviated as 80/10/10. Other percentage breakdowns that have yielded success are 60/15/25 and 70/10/20. These percentages are subject to change depending on the training situation.
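One way to make an 80/10/10 breakdown is sketched below; a random row-wise split of the depth samples is assumed for illustration, and embodiments may partition the data differently:

```python
import numpy as np

def split_80_10_10(X, y, seed=0):
    """Shuffle the samples and split into training, validation, and testing sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.8 * len(X))
    n_valid = int(0.1 * len(X))
    train, valid, test = np.split(idx, [n_train, n_train + n_valid])
    return (X[train], y[train]), (X[valid], y[valid]), (X[test], y[test])
```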
In block 610, an overall error measurement is determined for the current input subset. In some embodiments, the overall error is based on a per-network measure of squared error between predicted open hole logs and actual open hole logs. In these embodiments, the overall error is the mean of the per-network measures for the current input subset. In alternative embodiments, the performance is measured with different error functions.
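A minimal numpy sketch of this overall-error measurement; the array shapes are assumptions introduced here for illustration:

```python
import numpy as np

def overall_error(network_predictions, y_actual):
    """Mean over the trained networks of each network's mean squared error.

    network_predictions: array of shape (K, N, M) -- K networks trained on the
                         current input subset, N withheld samples, M open hole logs.
    y_actual:            array of shape (N, M)    -- actual open hole log values.
    """
    per_network_mse = ((network_predictions - y_actual) ** 2).mean(axis=(1, 2))
    return per_network_mse.mean()
```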
In block 612, a test is made to see if the overall error for the current input subset is smaller than that of previous input subsets of the same size. If so, then in block 614, the neural networks trained for the current set of input signals are saved as the current “best”. In block 616, a test is made to see if there are any more candidate subsets of the current size. If so, then in block 618, the next input subset is determined and another iteration of inner loop 608-618 is performed. Otherwise, in block 620, a test is made to see if there are any more input set sizes. If so, in block 622, the input set size is incremented, and another iteration of outer loop 606-622 is performed. If not, the process halts.
At the end of the illustrative process of
In addition to stepwise selection, locally ranked and globally ranked neural networks can be determined using other search techniques including reverse stepwise selection, exhaustive search, and genetic input selection. Whether locally or globally ranked, the stored neural networks are used as a starting point for a selection process 504.
In block 704, the selection process determines the number of neural networks that will be used to construct an ensemble. This is a programmable number to be set by the user, but it is expected that to obtain the benefits of using an ensemble without incurring an excessive computational load, the number will be in the range from three to ten neural networks, with five being a default.
Given the pool and the ensemble size, different selection processes may be used to obtain and optimize the neural network ensemble. For example, the selection process may simply be selecting those networks with the best performance. In the embodiments illustrated by
In block 706, an initial, randomly constructed population of ensembles is determined. The population size is set by the user and may be, e.g., 50 neural network ensembles. In block 708, a fitness value is determined for each neural network ensemble. In the above-referenced application, the fitness value is calculated in accordance with a multi-objective function (“MOF”), e.g., a weighted sum of: a performance measure, a complexity measure, and a negative correlation measure. The weight of each of these three components can vary in the range from zero to one, with the weights summing to one. The performance measure is a mean of squared error between predicted open hole logs and actual open hole logs. The complexity measure is a sum of squared weights in the ensemble. The negative correlation measure is the average of individual negative correlation measures for the neural networks in the ensemble over multiple outputs and data samples. The individual measure of each output for a given sample is determined by finding (1) a difference between the individual neural network's output and the average neural network output of the ensemble; (2) a sum of such differences for all other neural networks in the ensemble; and (3) the product of (1) and (2). Further details can be found in the above-referenced application.
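A minimal numpy sketch of such a multi-objective function; the names, array shapes, and default component weights are assumptions for illustration, and the exact formulation is given in the above-referenced application:

```python
import numpy as np

def ensemble_fitness(member_preds, y_actual, member_weight_vectors,
                     w_perf=0.5, w_complex=0.25, w_negcorr=0.25):
    """Weighted sum of performance, complexity, and negative correlation measures.

    member_preds:          array (K, N, M) -- predictions of the K ensemble members.
    y_actual:              array (N, M)    -- actual open hole logs.
    member_weight_vectors: one 1-D array of connection weights per member network.
    The default component weights are illustrative; they lie in [0, 1] and sum to one.
    """
    ensemble_pred = member_preds.mean(axis=0)

    # Performance: mean squared error between predicted and actual open hole logs.
    performance = ((ensemble_pred - y_actual) ** 2).mean()

    # Complexity: sum of squared connection weights over the ensemble.
    complexity = sum((w ** 2).sum() for w in member_weight_vectors)

    # Negative correlation: for each member, output, and sample, the product of
    # (1) the member's deviation from the ensemble average and (2) the sum of the
    # other members' deviations, averaged over members, outputs, and samples.
    deviation = member_preds - ensemble_pred
    neg_corr = (deviation * (deviation.sum(axis=0) - deviation)).mean()

    return w_perf * performance + w_complex * complexity + w_negcorr * neg_corr
```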
In block 710, the number of loop iterations (generations) is compared to a threshold. If the maximum number of generations is not met, then in block 712 a new population is determined. The new population is determined using genetic algorithm techniques such as removing those population members with the worst fitness values from the population, “breeding” new ensembles by combining neural networks from remaining ensembles, introducing “mutations” by randomly replacing one or more neural networks in selected ensembles, and “immigrating” new ensembles of randomly-selected neural networks.
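A minimal sketch of one such generation step, with each candidate ensemble represented as a list of distinct indices into the pool of trained networks; the counts of dropped, mutated, and immigrated ensembles are illustrative assumptions rather than parameters of the referenced genetic algorithm:

```python
import random

def next_generation(population, fitnesses, pool_size, ensemble_size,
                    n_drop=10, n_mutate=5, n_immigrate=5):
    """One generation: drop the worst ensembles, then breed, mutate, and immigrate.

    Each ensemble is a list of distinct pool indices; lower fitness is treated as better.
    """
    # Remove the population members with the worst fitness values.
    ranked = [e for _, e in sorted(zip(fitnesses, population), key=lambda t: t[0])]
    survivors = ranked[:len(ranked) - n_drop]

    # "Breed" new ensembles by combining networks from two remaining ensembles.
    children = []
    while len(survivors) + len(children) + n_immigrate < len(population):
        parent_a, parent_b = random.sample(survivors, 2)
        children.append(random.sample(list(set(parent_a + parent_b)), ensemble_size))

    # "Mutate" by randomly replacing one network in a few selected ensembles.
    for ensemble in random.sample(survivors, min(n_mutate, len(survivors))):
        replacement = random.choice([i for i in range(pool_size) if i not in ensemble])
        ensemble[random.randrange(ensemble_size)] = replacement

    # "Immigrate" new ensembles of randomly selected networks.
    immigrants = [random.sample(range(pool_size), ensemble_size)
                  for _ in range(n_immigrate)]
    return survivors + children + immigrants
```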
Once the maximum number of generations is met, an ensemble is selected from the population in block 714. In some embodiments, the selected ensemble is the ensemble with the best fitness measure. In other embodiments, validity testing is performed and the selected ensemble is the ensemble with the best performance measure. Typically, after validity testing, the ensemble may be deployed for general use with similar formation conditions for converting from cased hole logs to synthetic open hole logs. In other embodiments, the training and selection processes are deployed as parts of a software package for determining customized ensembles for log conversion.
It is desirable (but not mandatory) to have complete data from at least two wells with similar formation conditions. In some embodiments, the data from one well is reserved for ensemble validity testing. In other words, the rest of the data is used for input subset selection, candidate neural network generation, and ensemble optimization (in the multi-objective function). Other embodiments use the data from one well for input subset selection and candidate neural network generation. The data from the first and second wells is then combined, with part of the combined data being used for ensemble optimization, and the remainder of the combined data being used for ensemble validity testing. In yet other embodiments, the data from one well is used for input subset selection and candidate neural network generation. The data from the second well is used for ensemble optimization, then a combined set of data is used for ensemble validity testing. In still other embodiments, part of the combined data set is used for input subset selection, candidate neural network generation, and ensemble optimization. The remainder of the combined data set is then used for ensemble validity testing.
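A minimal sketch of the first of these partitioning schemes (reserving one well for validity testing), assuming each depth sample is tagged with a well identifier:

```python
import numpy as np

def reserve_well_for_validation(X, y, well_ids, holdout_well):
    """Reserve all samples from one well for ensemble validity testing.

    The remaining samples are available for input subset selection, candidate
    neural network generation, and ensemble optimization.
    """
    holdout = np.asarray(well_ids) == holdout_well
    return (X[~holdout], y[~holdout]), (X[holdout], y[holdout])
```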
The process may be implemented as software in a general purpose desktop computer or in a high-performance server.
Input devices 806, 808 are coupled to a peripheral interface 810 that accepts input signals and converts them into a form suitable for communications on internal bus 812. Bus 812 couples peripheral interface 810, a modem or network interface 814, and an internal storage device 816 to a bus bridge 818. Bridge 818 provides high bandwidth communications between the bus 812, a processor 820, system memory 822, and a display interface 824. Display interface 824 transforms information from processor 820 into an electrical format suitable for use by display 804.
Processor 820 gathers information from other system elements, including input data from peripheral interface 810 and program instructions and other data from memory 822, information storage device 816, or from a remote location via network interface 814. Processor 820 carries out the program instructions and processes the data accordingly. The program instructions can further configure processor 820 to send data to other system elements, including information for the user which can be communicated via the display interface 824 and the display 804.
Processor 820, and hence computer 802 as a whole, typically operates in accordance with one or more programs stored on information storage device 816. Processor 820 copies portions of the programs into memory 822 for faster access, and can switch between programs or carry out additional programs in response to user actuation of the input devices. The methods disclosed herein can take the form of one or more programs executing in computer 802. Thus computer 802 can carry out the information gathering processes described with respect to
Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, the foregoing description is made in the context of downhole log conversion. However, it should be recognized that the neural network solution design processes disclosed herein have wide applicability wherever neural networks can be employed. It is intended that the following claims be interpreted to embrace all such variations and modifications.