Fluid sampling is a useful step in characterizing a reservoir. In-situ fluid composition analysis can be performed during the fluid sampling, and many properties of interest (e.g., GOR) can be inferred about the formation fluid. Knowledge of these properties is useful in characterizing the reservoir and in making engineering and business decisions.
The formation fluid obtained during the fluid sampling has a number of unknown natural constituents, such as water, super critical gas, and liquid hydrocarbons. In addition to these unknown natural constituents, the composition of the formation fluid sample may also include an artificial contaminant (i.e., filtrate including water-based mud or oil-based mud), which has been used during drilling operations. Therefore, during fluid sampling downhole, the fluid initially monitored with a fluid sampling device or other instrument is first assumed to be fully contaminated. Then, the monitored fluid is assumed to go through a continuous cleanup process as more formation fluid is obtained from the area of interest.
During cleanup, repeated density measurements are taken at fixed time intervals, and the density measurements are analyzed to estimate the sample's quality. For example, the repeated density measurements can be used to plot the change in density over time. Characteristics of this density-time plot are then used to assess the contamination level of the fluid being sampled. Once a minimum threshold contamination level is believed to be reached, the sample is then captured and stored in the tool so the sample can be returned to the surface and can undergo additional analysis.
For example, FluidXpert® is software that can analyze density sensor data and can estimate the current level of contamination and the amount of time required to reach a desired level of contamination. Since the filtrate density and the uncontaminated formation fluid density are not known and can only be estimated based on the filtrate properties and the pressure gradient, too much uncertainty is present to make a definitive determination that the desired level of contamination has actually been reached. All the same, even with such uncertainty, the information obtained is considered acceptable for regression trend analysis to estimate contamination.
An example of such an approach is disclosed in U.S. Pat. No. 6,748,328 to Storm, Jr. et al., which discloses a method for determining the composition of a fluid by using measured properties (e.g., density) of the fluid. The quality of a fluid sample obtained downhole is evaluated by monitoring the density of the fluid sample over time. During the sampling process, the density of the sample volume changes until it levels out to what is expected to be the density of the formation fluid. Unfortunately, a point of equilibrium may simply be reached between the amounts of formation fluid and filtrate contamination in the sample volume so that the level of contamination is not really known.
To solve this, Storm, Jr. et al. assumes a mixture for the sampled fluid that has only two components, namely filtrate and formation fluid. In this way, an incremental change in the fluid mixture's density corresponds to an incremental change in the volume fractions of the two fluid components, scaled by the difference between the two components' densities. The endpoint values for the mixture's change in density include (1) the density of the filtrate (which can be determined based on surface measurements of the mud system) and (2) the density of the formation fluid (which can be determined from pressure gradient data). In the end, Storm, Jr. et al. can indicate the composition of the mixture (i.e., the relative fraction of filtrate in the mixture compared to formation fluid) based on the change in the mixture's density over time.
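In equation form, the two-component relation described above can be restated as follows. This is a minimal sketch assuming ideal volumetric mixing, with ρmix the measured mixture density and f the filtrate volume fraction (the symbols are illustrative, not the reference's notation):

```latex
% Two-component mixing rule and the implied filtrate fraction (symbols illustrative)
\rho_{\mathrm{mix}} = f\,\rho_{\mathrm{filtrate}} + (1 - f)\,\rho_{\mathrm{formation}}
\quad\Longrightarrow\quad
f = \frac{\rho_{\mathrm{mix}} - \rho_{\mathrm{formation}}}{\rho_{\mathrm{filtrate}} - \rho_{\mathrm{formation}}}
```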
In addition to monitoring density, pressure, temperature, and the like, various other modules can perform analysis downhole. For example, spectrophotometers, spectrometers, spectrofluorometers, refractive index analyzers, and similar devices have been used to analyze downhole fluids by measuring the fluid's spectral response with appropriate sensors. Although useful and effective, these analysis modules can be very complex and hard to operate in the downhole environment. Additionally, these various analysis modules may not be appropriate for use under all sampling conditions or with certain types of downhole tools used in a borehole to determine characteristics of formation fluid.
The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.
In this disclosure, a dynamic (i.e., real-time) fluid composition analysis is devised as a full-scale estimator of the composition of a fluid sample from a formation based on density measurements made at discrete points-in-time downhole as the sampled fluid is cleaned up. In other words, the disclosed dynamic fluid composition analysis can estimate the fraction of each and every constituent presumed present in the formation fluid. The presumed constituents can include one or more of water, a gas, a vapor phase gas, a supercritical gas, a natural gas, carbon dioxide, hydrogen sulfide, nitrogen, a hydrocarbon, a liquid hydrocarbon, a filtrate contaminant, a solid, and the like.
The presumption of the existence of any particular constituent is not limited in any way. In fact, the disclosed analysis enumerates a plurality (if not all) possible constituents that may exist in the formation fluid, predefines linear constraints on the fraction range of each constituent as well as constraints on the fraction dynamics in discrete points-in-time (i.e., at fixed time intervals, time steps, or time ticks), and computes estimates of the constituents' fractions and their confidence levels after dynamically assimilating the boundary constraints and the constraints on the system dynamics in real-time with the observed density for each new time interval. By implication, the disclosed analysis can infer reservoir properties that may relate two or more constituents, such as the gas-to-oil ratio (GOR), which is defined as the volumetric ratio of the super critical gas and liquid hydrocarbon components.
A. Downhole Implementation
The tool 10 can be any tool used for wireline formation testing, production logging, Logging While Drilling/Measurement While Drilling (LWD/MWD), or other operations. For example, the tool 10 as shown in
In use, the tool 10 obtains formation fluids and measurements at various depths in the borehole 16 to determine properties of the formation fluids in various zones. To do this, the tool 10 can have a probe 50, a measurement device 20, and other components for in-situ sampling and analysis of formation fluids in the borehole 16. Rather than a probe 50, the tool 10 can have an inlet with straddle packers or some other known sampling component. As fluid is obtained at a given depth, its composition evolves over time during the pump-out process as the fluid is being cleaned up. Cleanup is the process whereby filtrate fluid is removed from the pump-out region, which allows for direct sampling of formation fluids. However, mud filtrate along the borehole wall dynamically invades the formation during this process so that an equilibrium is established, which essentially limits any final cleanup or contamination level that can be attained.
The cleanup process can take from as little as 10 minutes to many hours, irrespective of the type of tool being used. The time required depends on the type of probe 50 or other sample inlet employed (typically packers) and the type of drilling mud used. In general, any suitable type of formation testing inlet known in the art can be used, with some being more beneficial than others. Also, the disclosed analysis can be used with any type of drilling mud, such as oil-based or water-based muds.
During this pump-out process, measurements are recorded in a memory unit 74, communicated or telemetered uphole for processing by surface equipment 30, or processed locally by a downhole controller 70. Each of these scenarios is applicable to the disclosed fluid composition analysis.
Although only schematically represented, it will be appreciated that the controller 70 can employ any suitable processor 72, program instructions, memory 74, and the like for achieving the purposes disclosed herein. The surface equipment 30 can be similarly configured. As such, the surface equipment 30 can include a general-purpose computer 32 and software 34 for achieving the purposes disclosed herein.
The tool 10 has a flow line 22 that extends from the probe 50 (or equivalent inlet) and the measurement section 20 through other sections of the tool 10. The inlet obtains fluid from the formation via the probe 50, isolation packers, or the like. As noted above, any suitable form of probe 50 or isolation mechanism can be used for the tool's inlet. For example, the probe 50 can have an isolation element 52 and a snorkel 54 that extend from the tool 10 and engage the borehole wall. A pump 27 lowers pressure at the snorkel 54 below the pressure of the formation fluids so the formation fluids can be drawn through the probe 50.
In a particular measurement procedure of the probe 50, the tool 10 positions at a desired location in the borehole 16, and an equalization valve (not shown) of the tool 10 opens to equalize pressure in the tool's flow line 22 with the hydrostatic pressure of the fluid in the borehole 16. A pressure sensor 64 measures the hydrostatic pressure of the fluid in the borehole. Commencing test operations, the probe 50 positions against the sidewall of the borehole 16 to establish fluid communication with the formation, and the equalization valve closes to isolate the tool 10 from the borehole fluids. The probe 50 then seals with the formation to establish fluid communication.
At this point, the tool 10 draws formation fluid into the tool 10 by retracting a piston 62 in a pretest chamber 60. This creates a pressure drop in the flow line 22 below the formation pressure. The volume expansion is referred to as “drawdown” and typically has a characteristic relationship to measured pressures.
Eventually, the piston 62 stops retracting, and fluid from the formation continues to enter the probe 50. Given a sufficient amount of time, the pressure builds up in the flow line 22 until the flow line's pressure is the same as the pressure in the formation. The final build-up pressure measured by the pressure sensor 64 is referred to as the “sand face” or “pore” pressure and is assumed to approximate the formation pressure.
During this process, sensors in the tool 10 can measure the density of the drawn fluid and can determine when the drawn fluid is primarily formation fluids. At various points, components such as valves, channels, chambers, and the pump 27 on the tool 10 operate to draw fluid from the formation that can be analyzed in the tool 10 and/or stored in one or more sample chambers 26. For example, the tool 10 may conduct a pre-test drawdown analysis in which a volume of fluid is drawn using a pre-test piston to determine the state (e.g., formation pressure) at time (0). Once the pretest analysis is completed, the downhole fluid pump 27 continuously moves fluid from the inlet or probe 50 and through the sensor sections (20 and 24), allowing for the continuous monitoring of the fluid density and contamination prediction prior to formation sample acquisition in sample chambers 26. Eventually, the probe 50 can be disengaged, and the tool 10 can be positioned at a different depth to repeat the test cycle.
Because the intention is to determine properties of the formation fluid, obtaining uncontaminated sampled fluid with the probe 50 is important. The sampled fluid can be contaminated by drilling mud because the probe 50 has made a poor seal with the borehole wall, because mud filtrate has invaded the formation, and/or because dynamic filtration through the mudcake establishes an equilibrium inflow during pump-out operations. Therefore, the fluid can contain hydrocarbon components (solids, liquids, and/or supercritical gas) as well as drilling mud filtrate (e.g., water-based mud or oil-based mud) or other contaminants. The drawn fluid flows through the tool's flow line 22, and various instruments and sensors (20 and 24) in the tool 10 analyze the fluid.
For example, the probe 50 and measurement section 20 can have sensors that measure various physical parameters (i.e., pressure, flow rate, temperature, density, viscosity, resistivity, capacitance, etc.) of the obtained fluid, and a measurement device, such as a spectrometer or the like, in a fluid analysis section 24 can determine physical and chemical properties of oil, water, and gas constituents of the fluid downhole using optical sensors. Eventually, fluid directed via the flow line 22 can either be purged to the annulus or can be directed to the sample carrier section 26 where the samples can be retained for additional analysis at the surface.
Additional components 28 of the tool 10 can hydraulically operate valves, can move formation fluids and other elements within the tool 10, can provide control and power to various electronics, and can communicate data via wireline, fluid telemetry, or another method to the surface. Uphole, surface equipment 30 can have a surface telemetry unit (not shown) to communicate with the downhole tool's telemetry components. The surface equipment 30 can also have a surface processor (not shown) that performs processing of the data measured by the tool 10 in accordance with the present disclosure.
B. Real-Time Fluid Composition Analysis
1. Overview
Briefly, the real-time fluid composition analysis uses a mathematical algorithm to estimate the composition of formation fluid based on fluid density measurements made in discrete time. As discussed above, the composition of the sampled formation fluid evolves over time as it is being cleaned up. Therefore, the analysis casts the evolving composition as an estimate of a discrete-time multivariate dynamic state and constructs a recursive online framework to statistically characterize the dynamic state vector at each new time interval in the analysis. The real-time state characterization, in turn, can be used to infer confidence intervals on crucial fluid properties, which are functions of the composition, such as the fluid contamination fraction and the GOR. Knowing confidence intervals on such properties can help optimize operations and engineering decisions.
In general terms, the fluid composition analysis combines (1) analytical geometry to define the span of the state vector via state boundary conditions and a fundamental density equation, and (2) probability theory to define constraints on the state evolution and to characterize the state probability distribution over the state space. Turning to particular details of the fluid composition analysis of the present disclosure, the following subsections first describe the building blocks needed to formally define the problem at hand.
2. Fundamental Density Equation
The fluid being sampled downhole is a mixture of fluid components. For the fluid component mixture under investigation in the analysis, the fluid mass satisfies an additive property—i.e., the total fluid mass is the sum of the masses of the individual components. This can be expressed as follows:

m = Σi mi

where m is the total mass, and where mi is the mass of the ith constituent.
Using the fundamental definition of fluid density relating mass and volume, the above equation is equivalently written as follows:

ρ·v = Σi ρi·vi, or equivalently ρ = Σi ρi·(vi/v)

where ρ and v are the fluid mixture's density and the fluid mixture's volume, respectively. In the above equation, ρi and vi denote respectively the density and the volume of the individual constituent indexed by i. The ratio vi/v can be relabeled by the variable fi to indicate the volume fraction of the ith constituent. Since the volume fractions fi are positive and must sum up to one, the last form of the density equation above can be equivalently written in these terms:

ρ = Σi ρi·fi, with fi ≥ 0 for all i and Σi fi = 1

The above linear system of equalities and inequalities in terms of the set {fi}i defines the complete state space of the vector ⟨fi⟩i with i iterating through all constituents. A minimal reflection reveals that the fraction state vector ⟨fi⟩i (hereafter denoted as state vector f) necessarily lies in the intersection of a hyperplane defined by the density equation and the standard simplex defined by the above-noted set of inequalities and the equation requiring the fractions to sum to one. In general, this intersection yields a convex polyhedron P.
Note that for any given time interval in the measurement procedure of the disclosed analysis, the complete state space P for the state vector f is parameterized only via the density ρ. Therefore, given that a new density is observed at every new time interval in the measurement procedure and assuming that every data point in the complete state space P is equally probable, integrating the fraction state vector f over the complete state space P and dividing by the volume of the polyhedron state space P should give the mean state vector. Similarly, higher-order moments may be calculated to characterize the statistical distribution of the fraction state vector f over the complete state space P. This scheme defines a way to statistically characterize the state vector so inferences can be made about any constituent of interest and the properties relating two or more constituents (e.g., GOR).
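Stated as equations, the static scheme just described amounts to the following (an illustrative restatement assuming the state vector f is uniformly distributed over P):

```latex
% Uniform-over-P (static) characterization of the state vector
\bar{f} = \frac{1}{\operatorname{vol}(P)} \int_{P} f \, df ,
\qquad
E[f_i f_j] = \frac{1}{\operatorname{vol}(P)} \int_{P} f_i f_j \, df ,
\qquad
\operatorname{Cov}(f_i, f_j) = E[f_i f_j] - \bar{f}_i \, \bar{f}_j .
```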
However, note that this scheme only depends on the density value at the given time interval during the measurement procedure. In particular, characterization of the state vector at any given time interval does not depend on the state vector of any previous time intervals. In other words, this scheme is time-independent or static.
Although the above scheme can sufficiently serve to provide an estimate of the state vector as well as probabilistic guarantees about such an estimate, it is clear that such a scheme can benefit from additional information (other than the density information). Such additional information can be assimilated to refine the distribution of the state vector leading to better quality estimates.
The next two subsections describe additional information that may be used to more accurately characterize the state vector. For instance, Section B.3 delineates the state boundary constraints and how they can be utilized to derive better estimates, and Section B.4 explains how state dynamic constraints may be assimilated to further enhance the estimation and its guarantees.
For simplicity in the current discussion, the density measurements obtained are assumed to be error free. However, Section F later handles the case of erroneous density measurements and shows how the forthcoming algorithm can seamlessly incorporate errors in density observations without requiring any modifications, given a simple assumption on the statistical characterization of the measurement noise. For all other characterizations of the measurement noise, simple additional computation will be performed.
3. State Boundary Constraints
The density equation uses the density coefficients and the observed mixture's density to define the span of the state vector f. The complete state space P spanned by the state vector f is, however, too large to yield an estimate with sufficiently small variance. In reality, the complete state space P is a very loose superset of the true space of the state vector f. With the aid of additional information, the span of the state vector f can be narrowed to yield a smaller estimate variance.
In one embodiment, the fluid composition analysis places state boundary constraints on the analysis by imposing linear constraints on the fraction of any constituent presumed in the formation. A particular implementation can use a reduced or specific set of constituents as detailed below. In fact, the boundary constraints and particular constituents can be predefined for a particular implementation, such as a particular reservoir, geographical region, and formation. In this way, the implementation can be tailored to the particular constituents to be expected or analyzed. For the purposes of the current discussion, the set of all constituents assumed present is comprehensive of all elements (e.g., materials or fluids) that may be expected in any formation.
Just a few examples of state boundary constraints imposing linear constraints on the fraction of any constituent presumed in the formation are discussed here. Other state boundary constraints can be determined by one of ordinary skill in the art having the benefit of the present disclosure. As an example, the volumetric fraction of CH4 in any gas mixture should not be less than 70% of the total gas mixture. Similarly, CO2's fraction should not exceed 5% of the total gas mixture. Pentanes' volume fraction is not expected to exceed 3% of any oil mixture, whereas Nonanes can constitute as high as 15% of any oil composition. In the end, the fraction of every constituent may be constrained with respect to the total fraction of the components of the same phase type—i.e., liquid or gaseous.
As will be appreciated with the benefit of the present disclosure, these and other such constraints may be established from historical data or scientific knowledge. Cross-phase constraints may also be constructed if details (e.g., dry gas, condensate, heavy oil, etc.) on the particular reservoir in question are available. Thus, these and other constraints can be used in the disclosed fluid composition analysis.
To formalize the state boundary constraints, the set of all constituents is first partitioned into a set of gas constituents (G) and a set of oil constituents (O), denoting the supercritical gas and liquid hydrocarbon constituents, respectively. The constraints on a particular gas constituent c in G are represented as follows:

αc·Σg∈G fg ≤ fc ≤ βc·Σg∈G fg

where αc and βc are the lower and upper fraction bounds, respectively.

Similarly, if c is an oil constituent in O, the linear constraints on the fraction of c are represented as follows:

αc·Σo∈O fo ≤ fc ≤ βc·Σo∈O fo
Then, a collection H of all constraints for all constituent fractions will constitute the state boundary constraints for the state vector f. Every inequality in H is either an upper or lower bounding hyperplane for the state vector f. Therefore, the reduced state space for the state vector f is the portion of the complete state space P within the bounding hyperplanes defined by the collection H of all constituent fractions, which is itself a polyhedron subset of the complete state space P.
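As a concrete illustration, the collection H can be assembled as a set of linear inequality rows A·f ≤ b. The sketch below is a hypothetical Python example: the constituent list and the bound values are assumptions, and the constraint form (each fraction bounded relative to its phase total) follows the description above.

```python
import numpy as np

# Illustrative sketch (constituent list and bounds are assumptions, not from the
# disclosure): build the collection H of state boundary constraints as rows of
# A @ f <= b, where each constituent's fraction is bounded relative to the total
# fraction of its phase (gas G or oil O).

constituents = ["CH4", "CO2", "C5H12", "C9H20", "water", "filtrate"]
gas = {"CH4": (0.70, 1.00), "CO2": (0.00, 0.05)}      # (alpha_c, beta_c) vs. total gas
oil = {"C5H12": (0.00, 0.03), "C9H20": (0.00, 0.15)}  # (alpha_c, beta_c) vs. total oil

def phase_constraints(bounds, all_names):
    rows, rhs = [], []
    members = [all_names.index(name) for name in bounds]      # indices of this phase
    for name, (alpha, beta) in bounds.items():
        c = all_names.index(name)
        lower = np.zeros(len(all_names))                      # alpha*sum_phase - f_c <= 0
        lower[members] += alpha
        lower[c] -= 1.0
        upper = np.zeros(len(all_names))                      # f_c - beta*sum_phase <= 0
        upper[c] += 1.0
        upper[members] -= beta
        rows += [lower, upper]
        rhs += [0.0, 0.0]
    return rows, rhs

rows, rhs = [], []
for bounds in (gas, oil):
    r, h = phase_constraints(bounds, constituents)
    rows += r
    rhs += h
A, b = np.vstack(rows), np.array(rhs)   # the collection H: every row is a bounding hyperplane
print(A.shape)                          # (8, 6): two hyperplanes per constrained constituent
```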
4. State Dynamic Constraints
In the previous Section B.3 above, the extent of the state space for the state vector f was narrowed. By implication, the estimate variance is also narrowed. At every given time interval during the measurement process, the state vector f is contained within a well-defined polyhedron having dimension on the order of the number of constituents. Again, if every data point in the constrained state space is assumed equally likely, integrating the state vector f over the polyhedron space P gives its mean value. In a similar fashion, the covariance matrix and higher-order moments of the state vector f may be computed and used statistically to derive confidence intervals on the estimate of the state of the fluid under investigation.
Up to this point, the fluid composition analysis is static—i.e., time-independent. The state of the sampled formation fluid described herein is, however, inherently dynamic. As noted before, the fluid state or the component fraction vector evolves over time due to the cleanup process during measurement, which alters the overall composition following every new time interval by removing a portion of fluid contaminant. By constraining the state dynamics that govern how the state evolves with respect to time, such information can be used dynamically (i.e., in real-time or continuously) to help better characterize the distribution of the state vector f and hence improve the accuracy of the estimate.
In practice, the amount of contaminant removed at each time interval cannot be assessed directly; however, previous information of the cleanup process experienced with the particular testing tool 10 being used can help establish some expectations on the range of the amount of contaminant removed for a given time interval in the measurement process. For example, depending on the tool 10 used and other factors, it may be assumed that following every new time interval of 30 seconds, the fraction of the contaminant may drop by a factor of anywhere between 0 and 10% of its value compared to the previous time interval. (Other assumptions may apply for other implementations.) This assumption will not solely drive the contamination model. Instead, the assumption of cleanup between time intervals serves to constrain the state dynamics by forcing a minimum and maximum threshold on the change encountered for the contamination constituent. As such, the assumption will be used in conjunction with the dynamic density observation.
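Written as a constraint, this illustrative assumption on the per-interval cleanup (a drop of 0 to 10% corresponds to α = 0.90 and β = 1) bounds the contamination fraction as follows:

```latex
% State-dynamic constraint on the contamination constituent between time intervals
\alpha \, f_{k-1,c} \;\le\; f_{k,c} \;\le\; \beta \, f_{k-1,c},
\qquad \text{e.g., } \alpha = 0.90,\ \beta = 1 .
```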
5. Summary
With the benefit of the above discussion, the measurement process and the fluid composition analysis can be summarized as follows. At an initial time interval t=0, the sampled fluid is known to be nearly entirely (i.e., ≈100%) composed of contaminant (filtrate). As the fluid is subjected to the cleanup process during measurement, fluid density is measured at discrete time intervals (time steps or time ticks). The analysis then models the fluid state as it progresses over time using (1) the state boundary constraints, (2) the state dynamic constraints, and (3) the observed density. All of this information is processed dynamically following every new time interval to yield a multivariate probability distribution of the fluid state. Based upon such a distribution, inferences of interest are made about the fluid composition and related properties (e.g., contamination level, GOR, etc.). In turn, the details of the fluid composition determined by the tool 10 and related properties can be used for operation and interpretation services or to guide engineering and business decisions concerning the formation fluid analyzed.
C. Embodiment of Real-Time Fluid Composition Analysis
1. Overview
As illustrated in
During the initial fluid draw, sensor measurements are made at an initial time interval (time t=0) defining the initial starting composition (Block 104). Then, an initial state probability distribution is obtained from this initial starting composition (Block 106). Typically, this distribution information would indicate that the current fluid state is composed entirely (or almost entirely) of the contamination component. Then, the analysis in
Thus, at every time interval, the analysis 100 estimates a probability distribution of the fluid, which is expressed via its first two moments (mean vector and covariance matrix) and which, as noted above, is represented by a state vector comprising all presumed constituents (e.g., gas, oil, water, filtrate, hydrocarbon, or the most elemental constituents if desired). In this sense, the distribution's mean value for a given constituent of the fluid at a given time interval estimates what amount of the sample is comprised of that constituent. The covariance matrix allows confidence levels to be inferred for each estimate, given an assumption of a particular distribution model (note, however, that the analysis framework is not bound to any particular distribution model assumption).
This time loop terminates when it is decided that no more cleanup is needed (No at Decision 108). The decision to terminate cleanup is made by observing the state probability distribution 126 at the current time interval and determining whether the distribution 126 indicates a sufficiently low contamination level. In a practical implementation, some level of contamination is acceptable. In any event, results of the recursive analysis framework yield a final state probability distribution (Block 150).
Based on the final state probability distribution, the analysis 100 can perform additional processing as shown in
As shown in
2. Recursive Composition Model
With an understanding of the analysis presented above, discussion now turns to the computational details of applying the composition model shown as step (200) in
According to the present disclosure, the state probability distribution 112/126 is represented by its first two-order moments—i.e., mean vector and covariance matrix (though the framework is not inherently restricted to only two moments). Therefore, the composition model 200 computes the mean vector and covariance matrix of the probability distribution of the fluid's state fk (at time interval k). To do this, the model 200 must, in part, determine the complete state space Pk for the time interval k (Block 202). The complete state space Pk is the polyhedron or the state space of the fluid's current state fk and is defined by the measured fluid density 116 and the state boundary constraints 122.
Knowing the state probability distribution of the previous state fk-1 (i.e., the last state probability distribution 112) and the state dynamics 120, a preliminary state probability distribution is computed at time interval k by fusing the last state probability distribution 112 and the state dynamics 120 (Block 204). This preliminary state probability distribution is then normalized with respect to the complete state space Pk defined by the measured fluid density 116 and the state boundary constraints 122 (Block 206). Normalization then gives the mean and covariance of the current state fk, from which the new state probability distribution 126 is obtained (Block 208).
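In compact form, one plausible reading of Blocks 204-208, reconstructed from the description above and the equations referenced below, is:

```latex
% Fuse the last distribution with the state dynamics (Block 204), then normalize over P_k
% (Block 206) and take the first two moments (Block 208); notation follows the text.
p_{\mathrm{prelim}}(f_k) = \int p(f_k \mid f_{k-1,c}) \, p(f_{k-1,c}) \, df_{k-1,c},
\qquad
N = \int_{P_k} p_{\mathrm{prelim}}(f_k)\, df_k,
\qquad
\hat{f}_k = \frac{1}{N}\int_{P_k} f_k \, p_{\mathrm{prelim}}(f_k)\, df_k,
\qquad
\Sigma_k = E\!\left[f_k f_k^{\mathsf{T}}\right] - \hat{f}_k \hat{f}_k^{\mathsf{T}} .
```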
As will be described in more detail below, the range αk and βk of the time-dependent integration is computed (Block 260), and the last state distribution 112 is cast as a Dirichlet distribution (Block 262), although the distribution can be cast to any type of distribution, such as Gaussian or the like. A symbolic expression for the probability function (i) below is obtained using a Taylor series approximation of the Beta distribution (See Appendix C) (Block 264). Then, equation (ii′) of the mean state vector, equation (iii′) of the normalizing constant, and equation (v′) of the expectation expression below are evaluated using a simplicial decomposition, the symbolic expression, and monomial integration formulae over simplexes (See Appendix D) (Block 266). Finally, the equation (iv) of the covariance matrix below is computed based on the equation (ii′) of the mean state vector and the equation (v′) of the expectation expression below (Block 268), so that finally the mean state vector fk from equation (ii′) below and the covariance matrix Σk from equation (iv) below can be returned (Block 270).
The initial step (Block 254) involves computing a preliminary state probability distribution from the last state probability distribution 112 and the state dynamics 120. The state dynamics 120 define the heuristic by which the eventual state vector f may potentially evolve from one time interval to another. For instance, knowing the value of the contamination fraction at the previous time interval k−1, it may be assumed that any value for the current state fk is equally probable if the value of its contamination constituent fk,c is within 90% to 100% of the previous contamination constituent fk-1,c, or more generally within α% to β% of the previous contamination constituent fk-1,c. Hence, the preliminary state probability distribution at time interval k is uniform given the value of previous contamination constituent fk-1,c. However, the last state probability distribution 112 indicates that the previous state fk-1 obeys a well defined state probability distribution and by implication so does the previous contamination constituent fk-1,c.
To capture the variability of the previous contamination constituent fk-1,c in deriving the preliminary state probability distribution for the current state fk, the conditional probability rule can be used to write the following:
p(fk, fk-1,c) = p(fk|fk-1,c)·p(fk-1,c)
Here, p(fk, fk-1,c) is the joint probability of the current state fk and the previous contamination constituent fk-1,c. Additionally, p(fk|fk-1,c) is the probability of the current state fk conditioned on the previous contamination constituent fk-1,c (given by the state dynamics 120). Also, p(fk-1,c) is the probability of the previous contamination constituent fk-1,c (obtained from the last state probability distribution 112).
Using the law of total probability, the probability function for the current state fk may be written as follows:

p(fk) = ∫Projc(Pk-1) p(fk|fk-1,c)·p(fk-1,c) dfk-1,c

where Projc(Pk-1) is the span of the contamination constituent obtained by projecting the complete space Pk-1 onto the c dimension, which corresponds to the contamination variable. Because the above probability function for the current state fk is preliminary (in the sense that it does not yet account for the current state space Pk), it can be denoted as pprelim(fk). Hence, the last state probability distribution 112 and the state dynamics 120 yield:

pprelim(fk) = ∫Projc(Pk-1) p(fk|fk-1,c)·p(fk-1,c) dfk-1,c
The expression of the above integrand can be further simplified. Since p(fk|fk-1,c) is either constant (uniform distribution) or zero depending on the values of fk, fk-1,c, α, and β, the probability function may simply be written as:

pprelim(fk) = ∫[αk, βk] [1/((β−α)·fk-1,c)]·p(fk-1,c) dfk-1,c

where 1/((β−α)·fk-1,c) is the uniform probability density value of p(fk|fk-1,c) when α·fk-1,c ≤ fk,c ≤ β·fk-1,c (it is zero outside that interval). The range [αk, βk] is the time-dependent integration range over the previous contamination constituent fk-1,c. The dynamic integration range depends on the polyhedron Pk-1, α, β, and fk,c. It is easy to verify that the integration range is [αk, βk] = [fk,c/β, fk,c/α] ∩ Projc(Pk-1).

In fact, the Projc(Pk-1) term can be discarded, which allows the range to be written simply as [αk, βk] = [fk,c/β, fk,c/α], because p(fk-1,c) is by definition equal to zero outside Projc(Pk-1). Thus, the Projc(Pk-1) information does not have to be fed to the next time interval iteration, which minimizes the input required as indicated in the framework in
The last formulation of pprelim(fk) gets around the piecewise definition of p(fk|fk-1,c) by discarding the range for which it is equal to zero.
Turning to the normalization step 206 of
where N is a normalizing constant—i.e., N = ∫Pk pprelim(fk) dfk (iii).
Similarly, the covariance matrix Σk for the state vector fk can be computed as follows:
Σk = [Cov(fk,i, fk,j)]i=1 . . . d, j=1 . . . d = [E[fk,i·fk,j] − f̂k,i·f̂k,j]i=1 . . . d, j=1 . . . d   (iv)
where d is the number of constituents (problem dimension). Here, fk,i represents the ith constituent in the state vector fk, and f̂k,i is its mean value (analogously for fk,j and f̂k,j). Similar to the previous expectation expression, E[fk,i·fk,j] can be calculated as follows:

E[fk,i·fk,j] = (1/N)·∫Pk fk,i·fk,j·pprelim(fk) dfk   (v)
The estimate for fk can be chosen as its mean value f̂k. Note that such an estimate can be interpreted as the center of mass of a polyhedral solid where the mass is distributed according to the function pprelim( ). In addition to the fixed-point estimate, arbitrary confidence intervals on the estimate may be obtained by exploiting p(fk). Moreover, the mean value and confidence intervals on values of functions of two or more constituent fractions (e.g., GOR) can be calculated with the aid of the p(fk) information (See Section D).
The foregoing description has formulated the appropriate integrals needed to compute the first two-order moments of the state probability distribution p(fk). In the next two subsections, discussion turns to (a) design choices for the probability distribution model that will be computed using only the first two-order moments and (b) suitable techniques for integrating over polyhedra.
a) Distribution Model
The disclosed framework is not theoretically bound to any particular distribution model (e.g., Gaussian, Exponential, etc.). In one implementation, the Dirichlet distribution can be used to model the data distribution. The main reason for this choice is twofold. First, the Dirichlet distribution can be completely specified via its first two moments, which allows for fast computation and a compact representation. Second, the Dirichlet distribution has the standard simplex as its input domain, making it a natural choice for this problem.
The Dirichlet distribution is the multidimensional generalization of the beta distribution. A parameter vector α = ⟨αi⟩i=1 . . . d completely characterizes this multivariate distribution and defines the shape and density of the distribution over the (d−1)-simplex domain, where d is the number of variables (components). The parameter vector α correlates directly to the first two-order distribution moments and represents the distribution variation among the d components. The probability density function for the Dirichlet distribution for an input x = ⟨xi⟩i=1 . . . d and a parameter α = ⟨αi⟩i=1 . . . d is expressed as follows:

p(x; α) = (1/B(α))·Πi=1 . . . d xi^(αi−1)

where

B(α) = [Πi=1 . . . d Γ(αi)] / Γ(Σi=1 . . . d αi)

is the multinomial beta function and Γ(αi) = ∫0→∞ t^(αi−1)·e^(−t) dt is the gamma function.
The first distribution moment (mean vector) for a Dirichlet-distributed d-dimensional variable X can be expressed in terms of the α vector as follows:

E[Xi] = αi/α0   (vi)

where α0 = Σi=1 . . . d αi.

The second distribution moment or the covariance matrix can be expressed in terms of the first moment and the α vector as follows:

Var(Xi) = E[Xi]·(1 − E[Xi])/(α0 + 1)   (vii)

Cov(Xi, Xj) = −E[Xi]·E[Xj]/(α0 + 1), for i ≠ j   (viii)
When X is Dirichlet-distributed, each component Xi of X obeys a beta distribution with shape parameters αi and α0−αi. Particularly, the probability density function p(fk-1,c) for the distribution of the contamination component used in the computation of the preliminary state probability distribution becomes that of a beta distribution following the assumption of a Dirichlet-distributed fk.
Note that p(fk-1,c) is the only distribution information that is propagated into the recursive computation of future state distributions. Hence, potential propagated errors are only the ones induced by the beta distribution model and not by the whole Dirichlet state model. The complete state distribution model is only needed to infer confidence intervals on each estimated fraction for a given time interval because only the contamination distribution model is used for subsequent time intervals.
Once the first two-order moments are computed using the above equations (i)-(v), casting the state distribution to the Dirichlet model reduces to obtaining the α vector. To compute α, it suffices to compute α0 and then use equation (vi) of the first distribution moment above to obtain each of the αi components. To compute α0, note that each equation in the two sets (vii) and (viii) of the second distribution moment or the covariance matrix gives one possible value for α0. To resolve the over-determined system in terms of α0, one might use simple linear regression to minimize the sum of squares. The least squares error provides a measure for assessing the accuracy of the Dirichlet model.
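The least-squares step described above can be sketched as follows. This is one possible reading of that step (a no-intercept regression recovering the slope α0+1 from the covariance equations), not necessarily the disclosure's exact procedure:

```python
import numpy as np

# Sketch: cast the computed first two moments into a Dirichlet model by recovering
# alpha_0 and the alpha vector (one possible reading of the least-squares resolution).
def dirichlet_from_moments(mean, cov):
    """mean: (d,) mean vector; cov: (d, d) covariance matrix of the state."""
    # For a Dirichlet model: Var(X_i) = m_i (1 - m_i) / (alpha_0 + 1)
    #                        Cov(X_i, X_j) = -m_i m_j / (alpha_0 + 1), i != j
    # Each entry of cov therefore gives one candidate for alpha_0; resolve the
    # over-determined system with a no-intercept least-squares fit of the slope
    # s = alpha_0 + 1 in "target = s * cov".
    target = -np.outer(mean, mean)
    np.fill_diagonal(target, mean * (1.0 - mean))
    x, y = cov.ravel(), target.ravel()
    s = (x @ y) / (x @ x)                  # least-squares slope
    alpha0 = s - 1.0
    alpha = alpha0 * mean                  # equation (vi): E[X_i] = alpha_i / alpha_0
    residual = np.sum((y - s * x) ** 2)    # measure of fit quality for the Dirichlet model
    return alpha, alpha0, residual

# Usage with an exact Dirichlet example (alpha = [4, 3, 2, 1]) as a sanity check:
a_true = np.array([4.0, 3.0, 2.0, 1.0])
a0 = a_true.sum()
m = a_true / a0
C = (np.diag(m) - np.outer(m, m)) / (a0 + 1.0)
print(dirichlet_from_moments(m, C))        # recovers alpha ~ [4, 3, 2, 1]
```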
b) Integration Over Polyhedra
The normalization step mentioned in subsection a) above requires that integration be done over a polyhedron state space. Accordingly, the sampling-based and analytical approaches to evaluating the integral (ii) of the mean state vector, the integral (iii) of the normalizing constant, and the integral (v) of the expectation expression in Section C above are now discussed.
(1) Sampling-Based Integration
The simplest way to integrate a function over a polyhedron is to approximate the surface integral by sampling a sufficient number of points from the polyhedral surface, evaluating function values of the sampled points, and approximating the integral by the aid of a finite Riemann sum. The polyhedral surface can be represented in terms of a constrained mixture design, which allows standard constrained mixture design methods to be used to sample from the polyhedral surface according to the desired granularity. Other sampling techniques from the polyhedron are possible, such as space-projection sampling using Linear Programming.
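A minimal sketch of this sampling-based approach is shown below. The constituent densities, the observed mixture density, and the tolerance band standing in for the density hyperplane are all illustrative assumptions:

```python
import numpy as np

# Minimal sampling-based sketch of evaluating the polyhedral integrals by a finite sum.
rng = np.random.default_rng(1)

rho = np.array([0.25, 0.75, 1.00, 1.05])   # assumed densities: gas, oil, water, filtrate (g/cc)
rho_obs, tol = 0.88, 0.01                   # measured mixture density and acceptance band

F = rng.dirichlet(np.ones(len(rho)), size=200_000)   # points sampled from the standard simplex
keep = np.abs(F @ rho - rho_obs) <= tol               # keep points (nearly) on the density hyperplane
P_samples = F[keep]

# Finite-sum (Riemann/Monte Carlo) approximation of the normalized integral of f over P:
mean_state = P_samples.mean(axis=0)
print(f"{keep.sum()} points kept; approximate mean fractions: {mean_state}")
```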
(2) Analytical Integration
In one implementation, an analytical approach can be used to evaluate equation (i) of the probability function, equation (iii) of the normalizing constant, and equation (v) of the expectation expression in Section C.2 above. Here, a simplicial decomposition of the polyhedral surface is performed, each integral of interest is evaluated over each simplex in the decomposition, and finally the integration results are summed over all simplexes to yield the result of each of the original polyhedral integrals.
The simplicial decomposition involves two steps. In a first step (1), an enumeration is performed of all vertices of the polyhedral surface. In a second step (2), a triangulation approach is applied on the vertex set obtained from the first step (1) to yield the simplicial decomposition.
By virtue of this simplicial decomposition approach, the integral (ii) of the mean state vector, the integral (iii) of the normalizing constant, and the integral (v) of the expectation expression in Section C.2 can be rewritten as follows (where σ denotes a simplex in the decomposition):
At this point, the evaluation of the integrands in the above equation (ii′) of the mean state vector, equation (iii′) of the normalizing constant, and equation (v′) of the expectation expression over a simplex remains an issue. This is because pprelim(fk) depends on the chosen distribution model, as does the complexity of the above integrals. To get around this difficulty and simultaneously standardize the problem's complexity, it is proposed to approximate any distribution model by its Taylor series expansion. Taylor series are sums of monomial functions, so integration is linear in terms of the addition operation. All of the integrations thus reduce to integrations of monomials over simplexes. The formulae for integration of monomials over simplexes are known in the art and are shown in Appendix D for reference.
This completes the description of the composition model 200 of the present disclosure. As noted above, additional details are provided in the attached Appendices—e.g., for performing the Taylor series expansion (Appendix A), the polyhedron vertex enumeration (Appendix B), the polyhedron triangulation (Appendix C), and the integration of monomials over simplexes (Appendix D).
D. Inferences of Properties of Interest
1. Contamination Estimate and Probabilistic Intervals
As noted above, the probability distribution can be used to estimate the contamination of the fluid sample. In particular, the probability distribution of the contamination constituent at a time interval k is directly represented by p(fk,c), which is a Beta distribution in the particular implementation based on the assumption of a Dirichlet distribution for the dynamic state vector. The estimate of the contamination is thus directly given by f̂k,c.
The probability over any desired confidence interval (say [a, b]) can be evaluated as:

P(a ≤ fk,c ≤ b) = ∫[a, b] p(fk,c) dfk,c
Again, Taylor series approximation (See Appendix C) can be used to approximate the above integrand. Use of the Taylor series approximation allows the integral to be evaluated analytically in order to determine a confidence level for contamination within a certain range of a to b percent.
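For a quick numerical cross-check of such a confidence level, the same integral can be evaluated by simple quadrature, as sketched below with assumed Beta shape parameters (the Taylor-series route in the text yields the same quantity analytically):

```python
import math

# Numerical check of P(a <= f_kc <= b) for a Beta-distributed contamination fraction;
# the shape parameters and interval are illustrative assumptions.
def beta_pdf(x, a, b):
    return (math.gamma(a + b) / (math.gamma(a) * math.gamma(b))) * x ** (a - 1) * (1 - x) ** (b - 1)

def prob_between(lo, hi, alpha_c, alpha_rest, n=10_000):
    """P(lo <= f_kc <= hi) for f_kc ~ Beta(alpha_c, alpha_rest), by the trapezoid rule."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)]
    ys = [beta_pdf(x, alpha_c, alpha_rest) for x in xs]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

# e.g., confidence that the contamination fraction lies between 5% and 15%:
print(prob_between(0.05, 0.15, alpha_c=4.0, alpha_rest=36.0))
```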
2. GOR Estimate and Probabilistic Intervals
As also noted above, the probability distribution can be used to estimate the gas-to-oil ratio (GOR) of the fluid sample. In particular, the probability distribution of the GOR can be calculated to provide a GOR estimate and GOR confidence intervals. Recall that the GOR is the volumetric ratio of the sum of the vapor phase gas constituent volumetric fractions divided by the sum of liquid hydrocarbon constituent volumetric fractions. If G denotes the set of all gas constituents and O denotes the set of all oil constituents, then at time interval k, the GOR can be written as:

GORk = (Σg∈G fk,g) / (Σo∈O fk,o)
Clearly, GORk is a random variable, and its mean value can be computed as follows:

E[GORk] = ∫Pk [(Σg∈G fk,g) / (Σo∈O fk,o)]·p(fk) dfk
The above equation can be rewritten in terms of the simplicial decomposition as follows:
Similarly, higher order moments of the distribution of GORk can be expressed as below:
where mi denotes the ith moment of GORk. The integrand in mi is approximated using the Taylor series expansion detailed in Appendix A. Refer to Appendices A-D for computing mi.
The distribution of the GORk variable can be approximated via the set of the first m moments (e.g., using the Pearson system with the first 4 moments). Using this moment-based approach, an approximation can be obtained for the probability density function p(GORk) of the gas-to-oil ratio GORk at time interval k.
Arbitrary confidence intervals (for example [a, b]) for GORk can now be obtained in similar fashion as with the contamination constituent described above.
E. Dimension Reduction
So far, the analysis 100 has assumed the complete fluid composition (i.e., exhaustive of all possible constituents). When the computations are performed in real-time with the downhole tool 10 in the borehole or at least if downhole measurements are communicated to the surface for processing, the analysis 100's time complexity can be lowered by effectively reducing the problem dimension—i.e., the number of presumed constituents. Characterizing the chance of the existence of every possible constituent in the formation fluid may be of little use, especially when some of the more critical components in the reservoir's fluid composition are the contaminant, water, supercritical gas, and liquid hydrocarbon.
Accordingly, the analysis 100 can be optimized in terms of the problem dimension by abstracting relevant constituents into a gas mixture component and an oil (crude) mixture component in addition to the water and the contaminant components. This reduces the problem's dimension to four (i.e., gas, oil, water, and contaminant). As will be appreciated, alternative fluid composition abstractions are possible, and the dimension reduction approach discussed below can apply to any chosen abstraction.
Of particular note, the individual densities for the gas and oil mixtures are no longer constants. Because the state boundary constraints (122) are constant (Section B.3 above), their incorporation can be predetermined to obtain distributions for the individual fluid densities for the gas and oil mixtures. In particular, for every gas mixture within the boundary constraints, a different density value can be obtained for the mixture. Accounting for all possible gas mixtures that satisfy the boundary constraints yields a fluid density distribution for the gas mixture that can then be stored in memory 74 of the tool 10 in any relevant format for reference during processing.
In the absence of any prior information, any gas mixture satisfying the boundary constraints can be assumed equally probable. The same idea is applicable to oil mixtures satisfying the boundary constraints. The assumption of equiprobability does not contradict the previous developments in Sections B-C above. Rather, the state boundary constraints (122) are moved out of the online computations. In fact, to obtain the offline mixture density distributions for gas and oil, the density space has to be integrated over a polyhedron. Only in this case, the polyhedron solid is uniformly distributed.
Integration over polyhedra can be done as discussed previously via simplicial decomposition. This time, the integrand is much simpler (the expression for the mixture fluid density). Alternative numerical approaches can be used to compute the mixture density distribution, and one possible approach is discussed below in Appendix E.
Because the computations in Sections B-D assume constant density values for each component, the variability of the gas and oil mixture densities needs to be accounted for. To do this, the analysis performs model averaging based on the definition of conditional probability and the law of total probability.
Under the assumption of variable gas and oil densities, the calculations (at the end of Section D) involve the conditional probability density functions p(fk,c|ρg, ρo) and p(GORk|ρg, ρo), as opposed to p(fk,c) and p(GORk) indicated previously. That is, given fixed density values ρg and ρo for the gas and oil mixtures, the conditional probability functions of fk,c and GORk can be obtained using the techniques discussed above in Section D. To then infer the actual probabilities p(fk,c) and p(GORk), the total probability law can be used as follows:

p(fk,c) = ∫∫ p(fk,c|ρg, ρo)·p(ρg, ρo) dρg dρo
Because ρg and ρo are independent, p(ρg, ρo) = p(ρg)·p(ρo), which then gives:

p(fk,c) = ∫∫ p(fk,c|ρg, ρo)·p(ρg)·p(ρo) dρg dρo
The functions p(ρg) and p(ρo) are obtained offline by the procedure described previously in this Section. For each set of values of ρg and ρo, the techniques in Sections B-D give p(fk,c|ρg, ρo). To evaluate the last double integral over the space ρg×ρo exactly, an infinite number of runs would be needed to compute every possible p(fk,c|ρg, ρo). To get around this issue, the last double integral is approximated using finite sums to yield the following:

p(fk,c) ≈ Σρg Σρo p(fk,c|ρg, ρo)·p(ρg)·p(ρo)·Δρg·Δρo
where Δρg and Δρo are the discretization granularities over the gas density space and oil density space, respectively. The granularity level can be chosen based on an appropriate tradeoff between complexity and accuracy of approximation for p(ρg) and p(ρo).
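A sketch of this finite-sum model averaging is shown below; the density grids, the offline density distributions, and the conditional routine are illustrative placeholders, not values or functions from the disclosure:

```python
import numpy as np

# Sketch of the finite-sum model averaging over discretized gas/oil mixture densities.
n_g, n_o = 11, 11
g_lo, g_hi = 0.20, 0.40                      # assumed range of gas-mixture densities (g/cc)
o_lo, o_hi = 0.70, 0.90                      # assumed range of oil-mixture densities (g/cc)
d_rho_g = (g_hi - g_lo) / n_g                # discretization granularity (Delta rho_g)
d_rho_o = (o_hi - o_lo) / n_o                # discretization granularity (Delta rho_o)
rho_g_grid = g_lo + (np.arange(n_g) + 0.5) * d_rho_g
rho_o_grid = o_lo + (np.arange(n_o) + 0.5) * d_rho_o
p_rho_g = np.full(n_g, 1.0 / (g_hi - g_lo))  # offline p(rho_g); taken uniform here
p_rho_o = np.full(n_o, 1.0 / (o_hi - o_lo))  # offline p(rho_o); taken uniform here

def p_fkc_given_densities(f_kc, rho_g, rho_o):
    """Placeholder for p(f_kc | rho_g, rho_o) produced by the Sections B-D machinery."""
    mean = 0.10 + 0.05 * (rho_o - rho_g)     # purely illustrative dependence on the densities
    return np.exp(-0.5 * ((f_kc - mean) / 0.02) ** 2) / (0.02 * np.sqrt(2.0 * np.pi))

def p_fkc(f_kc):
    """p(f_kc) ~= sum_g sum_o p(f_kc|rho_g,rho_o) p(rho_g) p(rho_o) Delta_rho_g Delta_rho_o."""
    total = 0.0
    for rg, pg in zip(rho_g_grid, p_rho_g):
        for ro, po in zip(rho_o_grid, p_rho_o):
            total += p_fkc_given_densities(f_kc, rg, ro) * pg * po * d_rho_g * d_rho_o
    return total

print(p_fkc(0.12))   # marginal density of the contamination fraction at f_kc = 0.12
```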
An equivalent logic provides:

p(GORk) ≈ Σρg Σρo p(GORk|ρg, ρo)·p(ρg)·p(ρo)·Δρg·Δρo
Confidence intervals can be computed by substituting the last approximations into the same expressions in Section D—i.e.,

P(a ≤ fk,c ≤ b) ≈ Σρg Σρo [∫[a, b] p(fk,c|ρg, ρo) dfk,c]·p(ρg)·p(ρo)·Δρg·Δρo

The evaluation of the term ∫[a, b] p(fk,c|ρg, ρo) dfk,c is equivalent to that in Section D with fixed ρg and ρo.

Similarly,

P(a ≤ GORk ≤ b) ≈ Σρg Σρo [∫[a, b] p(GORk|ρg, ρo) dGORk]·p(ρg)·p(ρo)·Δρg·Δρo

Evaluating ∫[a, b] p(GORk|ρg, ρo) dGORk is done exactly as in Section D.
F. Erroneous Density Measurement
In Section B.2 above, perfect fluid density measurements were assumed to be obtained. In reality, observational noise is common especially in a downhole environment with a tool (10), such as described previously. In fact, what is truly measured is ρ+ε, where ε is measurement noise. A statistical characterization of ε is preferably used.
One way to characterize the noise ε is to assume that the noise ε can be anywhere within plus or minus a certain threshold (e.g., ±10−3) and that all errors within that interval are equally probable, which would correspond to uniform random noise. This assumption changes the density equation to a double inequality, but the state space remains in principle a polyhedron, which allows the same techniques disclosed above to be used with no required changes.
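For instance, with uniform noise bounded by ±εmax, the density equation becomes the pair of linear inequalities below (an illustrative restatement), which replaces the hyperplane by a slab while keeping the state space polyhedral:

```latex
% Density equation under bounded (uniform) measurement noise (illustrative)
\rho_{\mathrm{obs}} - \varepsilon_{\max} \;\le\; \sum_i \rho_i f_i \;\le\; \rho_{\mathrm{obs}} + \varepsilon_{\max}
```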
If the assumption of a uniform random noise is not used so that the noise ε is instead characterized as behaving according to a certain probability density function p(ε) (e.g., Gaussian distribution), then the noise ε becomes a parameter in the same way as the gas and oil densities ρg and ρo. For this reason, the same handling of random parameters as disclosed above in Section E can be done to further incorporate a third parameter for the noise ε. Evidently, all of the parameters ρg, ρo, and ε are independent, so that their joint probability would be expressed as: p(ρg, ρo, ε) = p(ρg)·p(ρo)·p(ε). As indicated in this section, consideration of measurement noise can further refine the analysis of the present disclosure.
The techniques of the present disclosure can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of these. Apparatus for practicing the disclosed techniques can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps of the disclosed techniques can be performed by a programmable processor executing a program of instructions to perform functions of the disclosed techniques by operating on input data and generating output. The disclosed techniques can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
The foregoing description of preferred and other embodiments is not intended to limit or restrict the scope or applicability of the inventive concepts conceived of by the Applicants. It will be appreciated with the benefit of the present disclosure that features described above in accordance with any embodiment or aspect of the disclosed subject matter can be utilized, either alone or in combination, with any other described feature, in any other embodiment or aspect of the disclosed subject matter.
In exchange for disclosing the inventive concepts contained herein, the Applicants desire all patent rights afforded by the appended claims. Therefore, it is intended that the appended claims include all modifications and alterations to the full extent that they come within the scope of the following claims or the equivalents thereof.
As noted above with reference to
Methods and assessment of their associated complexities are disclosed in [Matheiss et al. 1980] and [Dyer 1983]. In [Avis et al. 1992], an efficient enumeration algorithm is proposed and was later improved by [Avis 2000]. A different approach is proposed in [Fukuda et al. 1997]. For theoretical results on the vertex enumeration problem of well-defined classes of polyhedra, see [Bremner et al. 1997] and [Kachiyan et al. 2006]. In the case of a polyhedron embedded within a simplex (as is the case of the state space P of Section B), algorithms within the mixture design literature exist for enumerating polyhedron vertices e.g., [McLean et al. 1966], [Snee et al. 1974], and [Crosier 1986].
As noted above with reference to
The Delaunay triangulation is one particular type of polyhedral triangulation of great interest due its inherent duality with respect to Voronoi diagrams. The Delaunay triangulation requires that the circumcircle of any simplex in the decomposition contain only the vertices of its associated simplex on its boundary and no other points (vertices of other simplexes) in either its interior or boundary.
Various methods can be used to solve the general Delaunay triangulation problem in d dimensions. For the decomposition problem of the present disclosure, a slightly modified version of the Delaunay triangulation algorithm for d-dimensional polyhedra proposed in [Cignoni et al. 1998] can be used. Because an arbitrary triangulation is sufficient here, much of the computation in the algorithm of [Cignoni et al. 1998] needed to maintain the Delaunay property can be avoided, improving the complexity of constructing the final triangulation (no vertex point optimization is needed for constructing the new simplex to be added into the decomposition). Though the final triangulation might in turn influence the complexity of solving the estimation problem, this issue is not addressed in the current implementation (i.e., the current implementation is only concerned with optimizing the time complexity of generating the output triangulation and not the complexity implied by the output triangulation itself).
As noted above with reference to
A function f is often approximated by its Taylor series of order k, i.e., truncated after the kth term. This is applied here to provide a Taylor series approximation for the probability density function of the Beta distribution. The probability density function p(x) for the Beta distribution is given by:

p(x) = x^(α−1)·(1−x)^(β−1) / B(α, β)

with B(α, β) = ∫0→1 u^(α−1)·(1−u)^(β−1) du.
To be able to apply the Taylor series approximation for the Beta distribution density function, the nth derivative of p(x) needs to be evaluated.
Let q(x) = x^(α−1)·(1−x)^(β−1), and let D(n, α, β, x) denote the nth derivative of q(x) with respect to x.

It is easy to verify that D(1, α, β, x) = (α−1)·x^(α−2)·(1−x)^(β−1) − (β−1)·x^(α−1)·(1−x)^(β−2) and that the recursive relation below is satisfied.
D(n,α,β,x)=(α−1)D(n−1,α−1,β,x)−(β−1)D(n−1,α,β−1,x)
Hence, the coefficients in the Taylor series approximation for p(x) may be evaluated iteratively starting from the lowest order coefficient in ascending order up to the coefficient of order k.
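The recursion can be implemented directly, as sketched below with assumed shape parameters; the coefficients produced are those of the Taylor expansion of the Beta probability density function about a chosen expansion point x0:

```python
import math

# Sketch: evaluate D(n, alpha, beta, x), the nth derivative of
# q(x) = x**(alpha-1) * (1-x)**(beta-1), and the Taylor coefficients of the Beta pdf
# p(x) = q(x)/B(alpha, beta) about x0 (parameter values are illustrative).
def D(n, a, b, x):
    if n == 0:
        return x ** (a - 1) * (1 - x) ** (b - 1)
    return (a - 1) * D(n - 1, a - 1, b, x) - (b - 1) * D(n - 1, a, b - 1, x)

def beta_taylor_coeffs(a, b, x0, order):
    """Coefficients c_n such that p(x) ~ sum_n c_n * (x - x0)**n for n = 0..order."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return [D(n, a, b, x0) / (B * math.factorial(n)) for n in range(order + 1)]

coeffs = beta_taylor_coeffs(a=4.0, b=36.0, x0=0.10, order=4)
x = 0.12
approx = sum(c * (x - 0.10) ** n for n, c in enumerate(coeffs))
exact = x ** 3 * (1 - x) ** 35 * math.gamma(40.0) / (math.gamma(4.0) * math.gamma(36.0))
print(approx, exact)   # the truncated series should be close to the exact pdf near x0
```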
As noted above with reference to
If σ̂ is a d-dimensional standard simplex and u1^h1·u2^h2 ⋯ ud^hd is a monomial in the coordinates u1, . . . , ud, then the integral of the monomial over σ̂ has the closed form:

∫σ̂ u1^h1·u2^h2 ⋯ ud^hd du = (h1!·h2! ⋯ hd!) / (d + h1 + h2 + ⋯ + hd)!
If the integration space is a non-standard simplex then appropriate coordinate transformation must be applied to transform it into a standard simplex.
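As a sanity check on the monomial formula, the sketch below compares the closed form against a Monte Carlo estimate over the solid standard simplex (the example monomial and the sample size are arbitrary choices):

```python
import math
import numpy as np

# Exact monomial integral over the d-dimensional standard simplex vs. a Monte Carlo estimate.
def monomial_integral(h):
    d = len(h)
    return math.prod(math.factorial(hi) for hi in h) / math.factorial(d + sum(h))

h = (2, 1, 0)                                                # monomial u1^2 * u2 in d = 3
d = len(h)
rng = np.random.default_rng(2)
U = rng.dirichlet(np.ones(d + 1), size=200_000)[:, :d]      # uniform points in the solid simplex
mc = np.mean(np.prod(U ** np.array(h), axis=1)) / math.factorial(d)   # E[monomial] * vol(simplex)
print(monomial_integral(h), mc)                             # both ~ 2/720 ~ 0.00278
```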
As noted above, the composition model 200 involves evaluating the mixture density distribution—one possible approach being discussed here. Let ρinv,d = (ρ1ρ−1, . . . , ρdρ−1) be a vector in Rd representing the fluid densities of d chemical components multiplied by the inverse of the density of their mixture (ρ−1). Let Ri be a range in [0,1] for i=1 . . . d representing the expected volume fraction range for the ith chemical component. Let σ be the standard simplex in Rd. Let f be a vector in the polyhedron space P defined by the intersection of σ and {Ri}i=1 . . . d; f denotes in fact the set of volume fractions for all of the d components. The desire is to compute the distribution of the average mixture fluid density
The forthcoming approach shown in this appendix is numerical. The idea is to evaluate the distribution of
Discretize P by discretizing every Ri based on a fixed uniform granularity (in the literature of mixture design, this may be achieved via a simplex-lattice design). For instance, if Ri
or the inverse of the granularity. More intuitively, every sample point in P can be made equivalent to one number composition of 100 into d terms as per this example. Hence, in general, the sample size with this discretization scheme is on the order of all possible number compositions of
in d terms.
Let Cα
Plainly put, there is exactly one composition of any integer into exactly one term if the limits are satisfied and none if not.
It can also be verified that Cα
The C function will be needed to evaluate the moments of the distribution of
To evaluate mk, it only remains to compute the function Sα
Let td = (t1, . . . , td) ∈ S(P) (i.e., td is any possible composition). For a fixed td (the dth component of td), this gives:
To get
it suffices to add one ρd−1td for every ρinv,d−1.td-1 term and allow td to vary. Hence,
Factoring ρd−1td out of the second inner sum,
The second inner sum is known to have exactly
terms giving,
Replacing the first inner sum in terms of the S function yields finally,
Hence, the recursive definition for Sα
elements for the first dimension and d elements for the second and thus a complexity of
which is evidently much less than the cardinality of the sample space or
needs to be computed and stored. Shown below are the formulae for the second, third, and fourth order S functions needed to evaluate m2, m3, and m4. The derivation (which is omitted for concision) is similar to the above for Sα
The teachings of the following materials are referred to in the above Appendices A-E and are incorporated herein by reference: