In a coarsened model, properties for various cells may be averaged in a process called upscaling. However, if not applied properly, upscaling may yield a solution in the coarsened model that loses accuracy as details are lost in the averaging process, especially where coarsening is applied to highly influential grid cells. Thus, accurate simulations may require a coarsened model that reduces the computational time to a reasonable level while also preserving relevant physical relationships in the underlying data.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
In general, in one aspect, embodiments relate to a method that includes obtaining grid model data regarding a geological region of interest. The method further includes obtaining well data regarding a well in the geological region of interest. The method further includes obtaining a grid model for the geological region of interest based on the grid model data and the well data. The method further includes obtaining a time selection for a look-ahead simulation. The method further includes determining, by a computer processor, a look-ahead model for the look-ahead simulation based on the grid model data, the well data, and a coarsening function. The look-ahead model simulates the geological region of interest at a faster rate than the grid model. The method further includes performing, by the computer processor, the look-ahead simulation using the look-ahead model and the time selection. The method further includes performing, by the computer processor, a reservoir simulation of the geological region of interest using the grid model and the look-ahead simulation.
In general, in one aspect, embodiments relate to a system that includes a network, which includes various parallel processors. The system further includes a reservoir simulator that includes a computer processor. The reservoir simulator is coupled to the network. The reservoir simulator obtains grid model data regarding a geological region of interest. The reservoir simulator obtains well data regarding a well in the geological region of interest. The reservoir simulator obtains a grid model for the geological region of interest based on the grid model data and the well data. The reservoir simulator obtains a time selection for a look-ahead simulation. The reservoir simulator determines a look-ahead model for the look-ahead simulation based on the grid model data, the well data, and a coarsening function. The look-ahead model simulates the geological region of interest at a faster rate than the grid model. The reservoir simulator performs the look-ahead simulation using the look-ahead model and the time selection. The reservoir simulator performs a reservoir simulation of the geological region of interest using the grid model and the look-ahead simulation.
In general, in one aspect, embodiments relate to a non-transitory computer readable medium storing instructions executable by a computer processor. The instructions obtain grid model data regarding a geological region of interest. The instructions obtain well data regarding a well in the geological region of interest. The instructions obtain a grid model for the geological region of interest based on the grid model data and the well data. The instructions obtain a time selection for a look-ahead simulation. The instructions determine a look-ahead model for the look-ahead simulation based on the grid model data, the well data, and a coarsening function. The look-ahead model simulates the geological region of interest at a faster rate than the grid model. The instructions perform the look-ahead simulation using the look-ahead model and the time selection. The instructions perform a reservoir simulation of the geological region of interest using the grid model and the look-ahead simulation.
In some embodiments, look-ahead data are determined based on a look-ahead simulation. A composite reservoir parameter may be determined based on well data and the look-ahead data, where the composite reservoir parameter is based on a reservoir parameter from the look-ahead simulation and a second reservoir simulation that is performed before a first reservoir simulation. The first reservoir simulation may be performed using the composite reservoir parameter. In some embodiments, the composite reservoir parameter includes a weighted value based on a number of look-ahead simulations. In some embodiments, a first well potential for various wells in a geological region of interest is determined at a first time step in an iterative process. A second well potential is determined for the wells using a look-ahead simulation. The first well potential may describe a simulated production rate for the wells at the first time step, and the second well potential may describe a simulated production rate for the wells over the time selection. A composite well potential may be determined based on the first well potential and the second well potential, where a reservoir simulation is performed using the composite well potential.
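As a rough illustration of the composite well potential described above, the sketch below blends a first (current-step) well potential with a second (look-ahead) well potential using a convex weight. The function name, the weighting scheme, and the example rates are hypothetical illustrations, not part of the disclosed embodiments.

```python
def composite_well_potential(current, look_ahead, weight=0.5):
    """Blend current-step well potentials with look-ahead well
    potentials using a simple convex weighting.

    `weight` is the fraction assigned to the look-ahead value; the
    weighting scheme here is an illustrative assumption only.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return {
        well: (1.0 - weight) * current[well] + weight * look_ahead[well]
        for well in current
    }

# Example: two wells with current and look-ahead production rates (STB/day)
current = {"W-1": 1200.0, "W-2": 800.0}
ahead = {"W-1": 900.0, "W-2": 950.0}
print(composite_well_potential(current, ahead, weight=0.25))
# {'W-1': 1125.0, 'W-2': 837.5}
```

A per-well weight, or a weight that grows as more look-ahead simulations complete, would be a natural variation of the same idea.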
In some embodiments, various look-ahead models are determined for various wells in the geological region of interest. A respective look-ahead model among the look-ahead models may correspond to a respective well among the wells. The respective look-ahead model may include an area of interest around the respective well, and the respective look-ahead model may include a first subset of cells outside the area of interest with greater coarsening than a second subset of cells within the area of interest. A second reservoir simulation may be performed based on various look-ahead simulations using the look-ahead models. In some embodiments, a geological region of interest is simulated using various reservoir simulations including a first reservoir simulation, wherein simulating the geological region of interest corresponds to a predetermined time period. The time selection may be a subset of the predetermined time period that is less than the predetermined time period. In some embodiments, a time selection includes a target time, where the target time may correspond to a predetermined time step in an iterative process where a first look-ahead simulation ends. A reservoir simulation may continue in the iterative process past the predetermined time step. In some embodiments, a reservoir simulation and a look-ahead simulation are performed in parallel using various parallel processors. In some embodiments, a look-ahead simulation is performed before beginning a particular reservoir simulation. In some embodiments, look-ahead models are determined for look-ahead simulations based on grid model data, well data, and coarsening functions. A first look-ahead simulation may be performed in parallel with a second look-ahead simulation using the look-ahead models and a time selection, wherein a first coarsening function and a second coarsening function are different functions for the different look-ahead models.
In some embodiments, a coarsening function is selected from a group consisting of a fluid characterization coarsening, a streamline coarsening, and a grid coarsening. In some embodiments, a look-ahead model is a proxy model that includes various well constraints that are the same as the well constraints in the grid model. The proxy model may have a number of grid cells that is less than the number of grid cells in the grid model. In some embodiments, a network is a parallel cluster that includes various graphics processing units (GPUs). The network may perform multiple reservoir simulations and/or look-ahead simulations in parallel or sequentially.
In light of the structure and functions described above, embodiments of the invention may include respective means adapted to carry out various steps and functions defined above in accordance with one or more aspects and any one of the embodiments of one or more aspects described herein.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
In general, embodiments of the disclosure include systems and methods for using look-ahead models in conjunction with a grid model to perform a reservoir simulation. In some embodiments, for example, a look-ahead model is a proxy model that may be a coarsened version of a main grid model. By performing at a faster computing speed than the main grid model, the look-ahead model may simulate one or more time periods beyond the current simulation time of a main reservoir simulation. Thus, smaller look-ahead simulations may be used to optimize the larger reservoir simulation based on parallel processing. Accordingly, look-ahead simulations may predict reservoir parameters at later times in the main reservoir simulation in order to provide valuable information to guide the more accurate (but slower) simulation run. As such, results from look-ahead simulations may be harvested by the main grid model to optimize the reservoir simulation, e.g., by producing composite reservoir parameters based on the output of both look-ahead simulations and the current state of the main reservoir simulation.
Moreover, a reservoir simulation using a grid model may be divided into multiple underlying reservoir simulations using input parameters based on different look-ahead simulations. While an initial time period (e.g., 0-1 years) of the main reservoir simulation may only use the larger grid model, later time periods in the main reservoir simulation may use one or more look-ahead simulations to fine-tune input parameters for those time periods. In a twenty-year reservoir simulation, for example, a look-ahead simulation may be performed five years into the simulation's future in parallel with a much shorter time period being simulated based on the main grid model. After completion of the look-ahead simulation, the current set of input parameters to the main reservoir simulation may be adjusted, while another set of look-ahead simulations may be initiated using the adjusted input parameters.
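The orchestration described above can be sketched minimally with a thread pool: a coarse look-ahead runs concurrently while the main simulation advances one step, and its prediction is then harvested to adjust an input parameter. The `run_main_step` and `run_look_ahead` functions are hypothetical placeholders (a real simulator step would replace them), and the 50/50 blend is an illustrative assumption.

```python
from concurrent.futures import ThreadPoolExecutor

def run_main_step(params):
    # Placeholder for one fine-grid time step of the main simulation
    # (assumption: returns the updated parameter state).
    return {**params, "t": params["t"] + 1}

def run_look_ahead(params, horizon):
    # Placeholder coarse look-ahead that races `horizon` steps ahead;
    # the 5% per-step decline is a made-up stand-in for real physics.
    return {"predicted_rate": params.get("rate", 1000.0) * 0.95 ** horizon}

params = {"t": 0, "rate": 1000.0}
with ThreadPoolExecutor(max_workers=2) as pool:
    ahead = pool.submit(run_look_ahead, params, horizon=5)
    params = run_main_step(params)   # main simulation advances in parallel
    prediction = ahead.result()      # harvest the look-ahead output
    # Blend the current rate with the look-ahead prediction (50/50 here).
    params["rate"] = 0.5 * params["rate"] + 0.5 * prediction["predicted_rate"]
print(params)
```

In a production setting the pool workers would map to the GPUs or parallel machines described later, but the control flow (submit, advance, harvest, blend, resubmit) is the same.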
In some embodiments, look-ahead models are used as part of an iterative process and generated on-the-fly during a simulation run for a fraction of the total simulation time. Using different time selections, for example, look-ahead models may predict the state of different simulation parameters at different target times later in the current reservoir simulation, such as well production states, reservoir states, the number of open and shut-in wells, etc. By having advanced knowledge of time periods later in the main reservoir simulation, a reservoir simulator may optimize the reservoir simulation at its current time state.
Turning to
In some embodiments, the well system (106) includes a wellbore (120), a well sub-surface system (122), a well surface system (124), and a well control system (126). The control system (126) may control various operations of the well system (106), such as well production operations, well completion operations, well maintenance operations, and reservoir monitoring, assessment and development operations. In some embodiments, the control system (126) includes a computer system that is the same as or similar to that of computer system (902) described below in
The wellbore (120) may include a bored hole that extends from the surface (108) into a target zone of the hydrocarbon-bearing formation (104), such as the reservoir (102). An upper end of the wellbore (120), terminating at or near the surface (108), may be referred to as the “up-hole” end of the wellbore (120), and a lower end of the wellbore, terminating in the hydrocarbon-bearing formation (104), may be referred to as the “down-hole” end of the wellbore (120). The wellbore (120) may facilitate the circulation of drilling fluids during drilling operations, the flow of hydrocarbon production (“production”) (121) (e.g., oil and gas) from the reservoir (102) to the surface (108) during production operations, the injection of substances (e.g., water) into the hydrocarbon-bearing formation (104) or the reservoir (102) during injection operations, or the communication of monitoring devices (e.g., logging tools) into the hydrocarbon-bearing formation (104) or the reservoir (102) during monitoring operations (e.g., during in situ logging operations).
In some embodiments, during operation of the well system (106), the control system (126) collects and records wellhead data (140) for the well system (106). The wellhead data (140) may include, for example, a record of measurements of wellhead pressure (Pwh) (e.g., including flowing wellhead pressure), wellhead temperature (Twh) (e.g., including flowing wellhead temperature), wellhead production rate (Qwh) over some or all of the life of the well (106), and water cut data. In some embodiments, the measurements are recorded in real-time, and are available for review or use within seconds, minutes or hours of the condition being sensed (e.g., the measurements are available within 1 hour of the condition being sensed). In such an embodiment, the wellhead data (140) may be referred to as “real-time” wellhead data (140). Real-time wellhead data (140) may enable an operator of the well (106) to assess a relatively current state of the well system (106), and make real-time decisions regarding development of the well system (106) and the reservoir (102), such as on-demand adjustments in regulation of production flow from the well.
In some embodiments, the well surface system (124) includes a wellhead (130). The wellhead (130) may include a rigid structure installed at the “up-hole” end of the wellbore (120), at or near where the wellbore (120) terminates at the Earth's surface (108). The wellhead (130) may include structures for supporting (or “hanging”) casing and production tubing extending into the wellbore (120). Production (121) may flow through the wellhead (130), after exiting the wellbore (120) and the well sub-surface system (122), including, for example, the casing and the production tubing. In some embodiments, the well surface system (124) includes flow regulating devices that are operable to control the flow of substances into and out of the wellbore (120). For example, the well surface system (124) may include one or more production valves (132) that are operable to control the flow of production (121). For example, a production valve (132) may be fully opened to enable unrestricted flow of production (121) from the wellbore (120), the production valve (132) may be partially opened to partially restrict (or “throttle”) the flow of production (121) from the wellbore (120), and the production valve (132) may be fully closed to fully restrict (or “block”) the flow of production (121) from the wellbore (120) and through the well surface system (124).
Keeping with
In some embodiments, the surface sensing system (134) includes a surface pressure sensor (136) operable to sense the pressure of production (121) flowing through the well surface system (124), after it exits the wellbore (120). The surface pressure sensor (136) may include, for example, a wellhead pressure sensor that senses a pressure of production (121) flowing through or otherwise located in the wellhead (130). In some embodiments, the surface sensing system (134) includes a surface temperature sensor (138) operable to sense the temperature of production (121) flowing through the well surface system (124), after it exits the wellbore (120). The surface temperature sensor (138) may include, for example, a wellhead temperature sensor that senses a temperature of production (121) flowing through or otherwise located in the wellhead (130), referred to as “wellhead temperature” (Twh). In some embodiments, the surface sensing system (134) includes a flow rate sensor (139) operable to sense the flow rate of production (121) flowing through the well surface system (124), after it exits the wellbore (120). The flow rate sensor (139) may include hardware that senses a flow rate of production (121) (Qwh) passing through the wellhead (130).
In some embodiments, the well system (106) includes a reservoir simulator (160). For example, the reservoir simulator (160) may include hardware and/or software with functionality for generating one or more reservoir models regarding the hydrocarbon-bearing formation (104) and/or performing one or more reservoir simulations. For example, the reservoir simulator (160) may store well logs and data regarding core samples for performing simulations. A reservoir simulator may further analyze the well log data, the core sample data, seismic data, and/or other types of data to generate and/or update the one or more reservoir models. While the reservoir simulator (160) is shown at a well site, embodiments are contemplated where reservoir simulators are located away from well sites. In some embodiments, the reservoir simulator (160) may include a computer system that is similar to the computer system (902) described below with regard to
In some embodiments, production wells and/or injection wells are used in one or more stimulation operations. For example, one type of stimulation operation is a water-alternating-gas (WAG) operation. A WAG operation may be a cyclic process of injecting water followed by gas. Using a WAG injection, macroscopic or microscopic sweep efficiency may be improved for a reservoir, e.g., by maintaining nearly the initial high pressure, slowing down any gas breakthroughs, and reducing oil viscosity. Likewise, WAG injections may also decrease residual oil saturation resulting from three-phase flows and effects associated with relative permeability hysteresis. Thus, some stimulation operations may produce gas flooding, which is a type of enhanced oil recovery (EOR) method for increasing recovery of light to moderate oil reservoirs. In some stimulation operations, water may be injected during the initial phase of the operation and followed by a gas (e.g., carbon dioxide) because water may have a lower mobility ratio than the injected gas, thereby preventing breakthroughs in the reservoir. Injected gas may be a mixture of hydrocarbon gases or nonhydrocarbon gases. With hydrocarbon gases, the gas mixture may include methane, ethane, and propane for achieving a miscible or immiscible gas-oil system in the reservoir. With nonhydrocarbon gases, the gas mixture may include carbon dioxide (CO2), nitrogen (N2), and some exotic gases that displace fluid in the reservoir. Likewise, gas may also be injected directly into a reservoir, e.g., into the gas cap, to compensate for the reservoir's pressure decline.
Furthermore, a stimulation injection during a stimulation operation may correspond to various injection parameters, such as bank size, cycle time, and a predetermined water-gas ratio (also called a “WAG ratio”). Bank size may refer to the size of sequential banks of fluids (e.g., oil, CO2, and water) formed in the reservoir rock in response to a stimulation operation that migrate from the injection wells to the production wells. For illustration, a WAG ratio of 1:1 may result in high oil production for one or more production wells, such as production wells coupled to a miscible reservoir. Based on some reservoir parameters, such as oil composition, gas flooding can be carried out in miscible or immiscible conditions. Moreover, different types of stimulation operations may use different stimulation parameters. Examples of different stimulation operations may include: (1) continuous gas injections; (2) WAG injections; (3) simultaneous water-alternating-gas (SWAG) injections; and (4) tapered WAG injections. Different strategies have been developed by the petroleum industry to cope with these conditions.
Turning to
Turning to
Prior to performing a reservoir simulation, local grid refinement and coarsening (LGR) may be used to increase or decrease grid resolutions in various regions of a grid model. For example, various reservoir properties, e.g., permeability, porosity, or saturations, may correspond to discrete values that are associated with a particular grid cell or coarsened grid block. However, by using discrete values to represent a portion of a geological region, a discretization error may occur in a reservoir simulation. Thus, various fine-grid regions may reduce discretization errors, as the numerical approximation of a finer grid is closer to the exact solution, albeit at a higher computational cost. As shown in
In general, coarsening may be applied to cells that do not contribute to a total flow within a reservoir region, because a slight change in such reservoir properties may not affect the output of a simulation. Accordingly, different levels of coarsening may be used on different regions of the same reservoir model. As such, a coarsening ratio may correspond to a measure of coarsening efficiency, which may be defined as the total number of cells in a coarsened reservoir model divided by the number of cells in the original reservoir model.
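The coarsening ratio defined above is straightforward to compute; the helper below is illustrative only, with a made-up cell count for the example.

```python
def coarsening_ratio(coarsened_cells, original_cells):
    """Coarsening ratio as defined above: coarsened cell count
    divided by the original cell count (smaller means more
    aggressive coarsening)."""
    if original_cells <= 0:
        raise ValueError("original model must contain at least one cell")
    return coarsened_cells / original_cells

# A 1,000,000-cell model coarsened to 125,000 cells
print(coarsening_ratio(125_000, 1_000_000))  # 0.125
```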
In some embodiments, a reservoir simulator uses a multilevel mask to label original cells in a grid model according to their respective coarsening or refinement levels for generating a particular coarsened grid model. In particular, these labels may correspond to various coarsening levels for original cells. In some embodiments, a multilevel mask is generated from multiple binary masks that specify areas where refinement or coarsening is desired in a grid model. A binary mask may be an image or other dataset that is defined according to ‘1’s and ‘0’s, or any other binary integer set. For example, a ‘1’ in a binary mask may correspond to a coarsening level of 8×8 cells for the respective coarsened grid block, while a ‘0’ identifies a cell or block that is left unchanged during the respective coarsening step.
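One possible encoding of the multilevel mask described above combines per-level binary masks into a single list of level labels, with a flag of 0 leaving a cell unchanged. The data layout (flat lists keyed by level, later levels overriding earlier ones) is a hypothetical sketch, not the disclosed data structure.

```python
def multilevel_mask(binary_masks):
    """Combine per-level binary masks into one multilevel mask.

    `binary_masks` maps a coarsening-level label to a flat list of
    0/1 flags, one flag per original cell. Later levels overwrite
    earlier ones; cells flagged by no mask keep level 0 (unchanged).
    """
    n_cells = len(next(iter(binary_masks.values())))
    levels = [0] * n_cells
    for level, mask in binary_masks.items():
        for i, flag in enumerate(mask):
            if flag:
                levels[i] = level
    return levels

# Level 1 coarsens the first half; level 2 overrides the first two cells
masks = {1: [1, 1, 1, 1, 0, 0, 0, 0], 2: [1, 1, 0, 0, 0, 0, 0, 0]}
print(multilevel_mask(masks))  # [2, 2, 1, 1, 0, 0, 0, 0]
```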
In some embodiments, one or more coarsening functions are used to produce a coarsened-grid model. In particular, a coarsening function may be based on one or more fluid characterizations (e.g., to produce a proxy model with a less rigorous fluid characterization), streamlines, or any other criterion that may provide a satisfactory estimate of future reservoir performance within a geological region. In order to reduce computational time, a coarsened grid model may be generated by coarsening cells that do not contribute to a total flow of a reservoir region, because a slight change in such reservoir properties (e.g., permeability, porosity, or saturations) may not significantly impact the output of a reservoir simulation. Accordingly, a coarsening function may be based on various flow properties of a reservoir region of interest.
For a coarsening function based on streamlines, for example, streamlines may be field lines instantaneously tangent to a fluid velocity field that provide a representation of reservoir connectivity. As such, streamlines may be an alternative to cell-based grid modeling techniques in reservoir simulations, where a streamline output from a reservoir simulator may include a snapshot of an instantaneous flow field in a geological region. Likewise, individual streamlines may describe flow properties of a production well or an injection well. As such, a reservoir simulator may transform a reservoir grid model into a number of flow paths of nominally constant flux to produce a particular proxy model or other type of coarsened-grid model.
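Since a streamline is instantaneously tangent to the velocity field, it can be sketched with a simple forward-Euler tracer. The tracer below, the uniform velocity field, and the step sizes are illustrative assumptions for intuition only, not a production streamline engine.

```python
def trace_streamline(velocity, start, step=0.1, n_steps=5):
    """Trace a streamline by repeatedly stepping tangent to the
    velocity field (forward Euler; illustrative only)."""
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        vx, vy = velocity(x, y)       # field line is tangent to (vx, vy)
        x, y = x + step * vx, y + step * vy
        path.append((x, y))
    return path

# Uniform diagonal flow: every streamline is a straight line
uniform = lambda x, y: (1.0, 1.0)
print(trace_streamline(uniform, (0.0, 0.0), step=0.5, n_steps=2))
# [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
```

A simulator would seed many such tracers at wells and bundle them into flow paths of nominally constant flux, as described above.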
Turning to
In some embodiments, a reservoir simulator runs on the CPU and uses one or more GPUs (e.g., GPU C (340)) to determine look-ahead data (e.g., look-ahead data A (382)) and reservoir simulation data (e.g., reservoir simulation data A (381)) within an iterative reservoir simulation algorithm. In regard to CPU E (315), for example, the CPU E (315) includes a computer processor E (316) and a memory E (318) that stores a grid model E (319) for performing a reservoir simulation. As such, the CPU E (315) may transmit a portion of grid model data A (361) to the GPU C (340) or to another GPU, e.g., as a portion of coarsened grid model data that is used to construct a look-ahead model (e.g., look-ahead model C (351)). After collecting look-ahead data from the GPUs performing the reservoir simulation algorithm, the CPU E (315) may transmit the results to an external source for storage and/or analysis.
In some embodiments, a GPU performs a look-ahead simulation using a look-ahead model (e.g., look-ahead model C (351)). For example, the look-ahead model C (351) may be a grid model coarsened using one or more coarsening functions and based on the original grid model data (e.g., grid model data B (362)) and well data (e.g., well data B (372)). For example, the grid model data B (362) obtained by GPU C (340) and well data B (372) may be a portion of the grid model data A (361) and well data A (371) received by the CPU E (315), respectively. While the CPU E (315) or another GPU may perform a reservoir simulation for one or more time steps using the grid model E (319), the GPU C (340) may perform a look-ahead simulation in parallel using one of its processors (e.g., processor Y (342), processor Z (343)).
Furthermore, a grid model may be a giant multi-million-cell model that may require a long simulation period, while a look-ahead model may be a proxy model that simulates a period of time faster than the grid model. For example, a look-ahead model may have a simulation speed that is ten times or more faster than a main grid model. As such, a look-ahead model may use fewer computing resources (e.g., 5× less processing power) than simulations with a grid model. Furthermore, look-ahead models may be coarsened or upscaled versions of the original grid model. For example, a look-ahead model may have selectively fewer grid cells or a reduced number of model components.
In some embodiments, a look-ahead model includes the same well requirements for one or more wells that are being simulated by a main grid model. For example, a grid model and one or more look-ahead models may have similar well rules, well facility constraints, and production targets while other reservoir parameters may be coarsened or simplified in the look-ahead model. Thus, a perturbation of a grid model may be reflected in the respective look-ahead model. Accordingly, look-ahead models may be used to guide and optimize the grid model based on a particular time selection (e.g., time period being simulated using a look-ahead model).
Returning to GPUs, a GPU may include different types of memory hardware, such as register memory, shared memory, device memory, constant memory, texture memory, etc. For example, register memory and shared memory (e.g., shared memory C (344)) may be disposed on an actual GPU chip, while other types of memory may be separate components in the GPU. In particular, register memory may only be accessible to the hardware thread that wrote its memory values, which may only last throughout the respective thread's lifetime. On the other hand, shared memory may be accessible to all hardware threads within a thread block and shared memory values may exist for the duration of the thread block (e.g., shared memory enables hardware threads to communicate and share data between one another). Device memory (e.g., device memory C (346)) may be global memory that is accessible to any hardware threads within a GPU's application as well as devices outside the GPU, such as a reservoir simulator. Device memory may be allocated by a host for example, and may survive until the host deallocates the memory. Constant memory (e.g., constant memory C (345)) may be a read-only memory device that provides memory values that do not change over the course of a kernel execution (e.g., constant memory may provide data faster than device memory and thus reduce memory bandwidth). Texture memory (not shown) may be another read-only memory device that is similar to constant memory, where the memory reads in texture memory may be limited to physically adjacent hardware threads, e.g., those hardware threads in a warp.
In some embodiments, multiple GPUs, a central processing unit, and/or one or more reservoir simulators may communicate with each other using a peer-to-peer (P2P) communication protocol. For example, two GPUs may be attached to the same PCIe bus in a reservoir simulator and communicate directly with each other. Thus, over a P2P communication protocol, a component in a reservoir simulator may access a different memory in another GPU or CPU. In some embodiments, for example, the CPU E (315) may not store locally the look-ahead data B (382), but may simply access the device memory C (346) in the GPU C (340) that stores a portion of the look-ahead data B (382). Likewise, the P2P communication protocol may also enable direct memory transfers between system components, e.g., to distribute grid model data and/or well data among multiple GPUs.
While
Turning to
In Block 400, grid model data are obtained regarding a geological region of interest in accordance with one or more embodiments. For example, a reservoir simulator may access model data from a fine-grid model, where the model data includes various reservoir property values, such as oil saturation, water saturation, porosity, permeability, etc. A geological region of interest may be a portion of a geological area or volume that includes one or more wells or formations of interest desired or selected for further analysis, e.g., for determining a location of hydrocarbons or reservoir development purposes for a respective reservoir. As such, a geological region of interest may include one or more reservoir regions selected for running simulations. For example, the geological region of interest may be similar to geological region (200) or reservoir region (230) described above in
In Block 410, well data for one or more wells are obtained regarding a geological region of interest in accordance with one or more embodiments. Well data may correspond to wellhead data described above in
In Block 420, a grid model is determined based on grid model data and/or well data in accordance with one or more embodiments. The grid model may be similar to a fine-grid model and/or a coarsened-grid model described above in
In Block 430, one or more look-ahead models are determined based on grid model data, well data, and/or one or more coarsening functions in accordance with one or more embodiments. For example, a look-ahead model may be several times faster than the main grid model, e.g., 5× to 10× faster. For a look-ahead model that is 10× faster than the grid model, the look-ahead model may simulate ten years for every one year simulated for the grid model. In some embodiments, look-ahead models are used to perform look-ahead simulations in parallel with the main reservoir simulation. Likewise, look-ahead models may be accessed by one or more parallel machines with parallel processors to perform look-ahead simulations.
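The 10× example above amounts to a simple proportionality between simulated spans. The helper below is an illustrative estimate that assumes the speedup stays constant over the run, which a real simulator would not guarantee.

```python
def look_ahead_horizon(main_years, speedup):
    """Simulated years a look-ahead model can cover in the wall-clock
    time the main grid model needs to simulate `main_years`.

    Assumes a constant speedup factor throughout the run; this is an
    illustrative estimate, not a performance guarantee.
    """
    if speedup <= 0:
        raise ValueError("speedup must be positive")
    return main_years * speedup

# A 10x-faster proxy covers ten simulated years per main-grid year
print(look_ahead_horizon(1.0, 10))  # 10.0
```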
In some embodiments, a reservoir simulator areally coarsens various cells or blocks within a grid model to produce a look-ahead model. For example, one or more fine-grid regions may have their resolutions preserved around respective wells or important geological features, while other areas may be coarsened accordingly. Coarsening functions may also be based on various data-driven approaches, such as reducing the number of solutions for reservoir equations computed during a particular time step of a simulation. For more information on coarsening functions, see
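As an illustration of such an areal coarsening function, the following sketch block-averages a 2-D property grid while flagging the blocks around wells whose fine-scale resolution should be preserved. This is a minimal sketch only: the function and parameter names are hypothetical, and a production simulator would coarsen full 3-D grids using flow-based upscaling rather than plain averaging.

```python
import numpy as np

def coarsen_areally(props, well_cells, factor=2, halo=1):
    """Areally coarsen a 2-D property grid by block-averaging,
    preserving fine resolution in blocks near wells.

    props      : 2-D array of a reservoir property (e.g., porosity)
    well_cells : list of (i, j) fine-grid well locations
    factor     : areal coarsening factor per axis
    halo       : number of coarse blocks around a well kept at fine scale
    Returns (coarse, refined): block averages plus a boolean mask of
    blocks whose fine-scale values should be retained.
    """
    ni, nj = props.shape
    ci, cj = ni // factor, nj // factor
    # Average each factor-by-factor block of cells into one coarse block.
    coarse = props[:ci * factor, :cj * factor] \
        .reshape(ci, factor, cj, factor).mean(axis=(1, 3))

    # Mark coarse blocks within `halo` blocks of any well for refinement.
    refined = np.zeros((ci, cj), dtype=bool)
    for (i, j) in well_cells:
        bi, bj = i // factor, j // factor
        refined[max(0, bi - halo):bi + halo + 1,
                max(0, bj - halo):bj + halo + 1] = True
    return coarse, refined
```

A simulator could then keep the original cells wherever `refined` is true and substitute the averaged blocks elsewhere.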
In Block 440, a time selection is obtained for one or more look-ahead simulations in accordance with one or more embodiments. In particular, a user may choose one or more time selections within a graphical user interface for look-ahead simulations and/or a reservoir simulation with a grid model. For example, a time selection may specify the starting time, the ending time, and/or the duration of a look-ahead simulation. In some embodiments, a time selection is computed based on other user input parameters (e.g., as a percentage of the time selection of an entire reservoir simulation). In some embodiments, a reservoir simulator automatically determines a respective time selection by analyzing simulation parameters during an ongoing reservoir simulation. Moreover, time selections may be based on computing time or the progress of a grid model's reservoir simulation. For example, the look-ahead simulation may end once the primary reservoir simulation achieves a certain milestone or reaches a specific time step.
In some embodiments, a reservoir simulator automatically determines a time selection based on one or more predetermined criteria. For example, the reservoir simulator may determine a time selection for a look-ahead simulation based on the computing speed of simulating a grid model. If the grid model's simulation is very slow, the reservoir simulator may be able to perform a look-ahead simulation for a much later time step. Where multiple look-ahead simulations are being performed, the time selections may be staggered to provide a glimpse of the reservoir simulation across different time periods (e.g., one look-ahead simulation may end at a quarter interval, another at a midpoint interval, another at a three-quarter interval, and another at the final time step in the grid model's reservoir simulation).
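One simple way to realize the staggered time selections described above is to place each look-ahead's end time at an evenly spaced fraction of the overall simulation window. The sketch below assumes every look-ahead starts at the main simulation's start time; in practice the simulator could also stagger start times or adapt the selections to computing speed.

```python
def staggered_time_selections(start, end, n_lookaheads):
    """Return evenly staggered (start, end) time selections, one per
    look-ahead simulation, spanning the main simulation window.

    For n_lookaheads = 4 this yields end times at the quarter, midpoint,
    three-quarter, and final points of the window.
    """
    span = end - start
    return [(start, start + span * k / n_lookaheads)
            for k in range(1, n_lookaheads + 1)]
```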
In Block 450, one or more look-ahead simulations are performed using one or more look-ahead models and a time selection in accordance with one or more embodiments. For example, a look-ahead simulation may be performed in parallel to a running full-field simulation of a geological region of interest. On the other hand, the ongoing reservoir simulation may be paused until one or more look-ahead simulations are completed.
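The parallel arrangement described in Block 450 can be sketched with Python's standard `concurrent.futures` pool. A thread pool is used here purely for illustration; CPU-bound simulations on parallel machines would typically use process pools or MPI. The `simulate` callable is a hypothetical stand-in for the simulator's entry point, not part of any particular simulator's API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_lookaheads_in_parallel(lookahead_models, time_selection, simulate):
    """Dispatch one look-ahead simulation per coarsened model to a pool
    of workers so they run alongside the main reservoir simulation.

    simulate(model, time_selection) -> look-ahead data (hypothetical API).
    Results are returned in the order the models were submitted.
    """
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(simulate, model, time_selection)
                   for model in lookahead_models]
        # Block until every look-ahead completes, preserving order.
        return [f.result() for f in futures]
```

The main simulation could instead poll the futures and continue stepping, pausing only if it catches up before the look-aheads finish.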
In Block 460, a reservoir simulation is performed using a grid model and one or more look-ahead simulations in accordance with one or more embodiments. After the look-ahead simulations are performed, the main grid model may continue to simulate to a predetermined simulation time. Afterwards, a reservoir simulator may collect simulation output (e.g., in a specified form of look-ahead data) regarding the look-ahead simulations. The look-ahead data may be used to condition the grid model in subsequent time steps of the reservoir simulation. This agglomeration of look-ahead data and current simulation data from the grid model may depend on the simulation workflow that is being executed.
In some embodiments, various types of reservoir simulations are performed by a reservoir simulator. For example, reservoir simulations may be used for history matching, predicting production rates at one or more wells, and/or determining the presence of hydrocarbon-producing formations for new wells. Likewise, various reservoir simulation applications may be performed, such as rankings, uncertainty analyses, sensitivity analyses, and/or well-by-well history matching. With respect to history matching, the objective may be to fit measured historical data to a reservoir model. In some embodiments, one or more reservoir simulations are used to optimize production for a well or group of wells, provide well design parameters for one or more wells, and/or determine completion operations for one or more wells (e.g., which down-hole devices to use).
Turning to
In some embodiments, a reservoir simulator has a choice over which wells to open for production in order to maintain certain group targets during the prediction phase of a reservoir simulation. For example, a reservoir simulator may select a particular well based on the production potential at the current time, e.g., the flow of oil that is obtained if the well is operated at the minimum bottom-hole pressure. Thus, one or more candidate wells with the largest production potential may be chosen as the well to open during one or more time steps in a reservoir simulation. Using look-ahead simulations, this well selection procedure may be enhanced accordingly. For example, the number of look-ahead models may be set to the number of candidate wells in the reservoir simulation. Each candidate well may be inserted into a new coarsened look-ahead model. The well selection look-ahead models may be simulated for a time period (i.e., ΔL) over which the well can be properly evaluated (e.g., 5-20 years). Likewise, the well selection may include another time period (i.e., ΔW) during which the reservoir simulator can simulate a geological region before the candidate wells need to flow or provide production. This time period prior to flowing may be a short period of time, such as one year or less.
Keeping with look-ahead simulations based on well selections, the look-ahead models may be coarsened outside an area of interest around respective candidate wells. For example, by maintaining a predetermined level of resolution around the candidate wells in the coarsened grid model, look-ahead simulations may also maintain a predetermined level of accuracy. Using an agglomeration function, for example, a reservoir simulator may determine the performance for each candidate well from look-ahead data, such as to choose the candidate well with the greatest cumulative oil production during the reservoir simulation. Additionally, because the look-ahead models may include various well constraints and actions similar to the main grid model, the production potential of the entire oil field may be a better choice to evaluate how different well scenarios increase the performance of the entire grid model.
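The agglomeration step in this well-selection procedure can be sketched as picking the candidate whose look-ahead run yields the greatest cumulative oil production. The data layout used here (a mapping from well name to per-step production rates) is an assumption for illustration only.

```python
def select_best_candidate(candidate_wells, lookahead_data):
    """Choose the candidate well to open based on look-ahead results.

    lookahead_data maps each candidate well name to a sequence of oil
    production rates, one per look-ahead time step (hypothetical layout).
    The well with the greatest cumulative production is selected; an
    agglomeration function other than a plain sum could be substituted.
    """
    def cumulative_oil(well):
        return sum(lookahead_data[well])
    return max(candidate_wells, key=cumulative_oil)
```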
Turning to
Turning to
In some embodiments, a reservoir simulator uses look-ahead simulations to perform a plateau optimization. During the prediction phase of the main reservoir simulation, an entire field may be given a target production rate that is met by adding up the contributions of various wells in the field. As such, the amount of time that this production target can be maintained may be referred to as the “plateau time.” A reservoir simulator may accordingly determine how long a field can maintain a given target plateau production rate. In a well management system within a reservoir simulator, respective well productions may be allocated in order to reach that target plateau. This may involve assigning a well potential (i.e., $\gamma_i^{t,*}$) for each well $i$ at each simulation time step $t$. A well with a high potential may thus contribute more to the group target than wells with a low well potential. Well potentials may be determined in many ways to account for oil production rate and other factors (e.g., penalizing wells that produce a lot of water and gas). In some embodiments, the well production potential is evaluated at the current simulation time $t$, i.e., the instantaneous well potential, as well as at future simulation times using look-ahead simulations.
Furthermore, a composite well potential may be used in a reservoir simulation that is based on the instantaneous well potential at a particular time step and one or more well potentials determined by various look-ahead simulations. In some embodiments, a composite well potential is expressed using the following equation:
$$\gamma_i^{t,*} = \omega\,\gamma_i^{t,m} + (1-\omega)\,\gamma_i^{t+\Delta F,\,c} \qquad \text{(Equation 1)}$$
where $\gamma_i^{t,*}$ is the composite well potential that is used in the well management system for reservoir simulations based on a main grid model, $\gamma_i^{t,m}$ is the instantaneous well potential for well $i$ as determined by the main grid model at the current time $t$, $\gamma_i^{t+\Delta F,c}$ is the future well potential of well $i$ based on one or more look-ahead simulations for time $t+\Delta F$, and $\Delta F$ is the time increment for the future delta (e.g., twenty years in
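Equation 1 can be implemented directly. The sketch below blends the two potentials for a given weight ω, where ω = 1 reduces to the instantaneous potential from the main grid model and ω = 0 uses only the look-ahead result; the function name is illustrative.

```python
def composite_well_potential(inst_potential, future_potential, omega):
    """Equation 1: composite well potential for one well at one time step.

    inst_potential   : instantaneous potential from the main grid model
    future_potential : potential from look-ahead simulations at t + dF
    omega            : blending weight in [0, 1]
    """
    if not 0.0 <= omega <= 1.0:
        raise ValueError("omega must lie in [0, 1]")
    # Weighted blend of present (main grid) and future (look-ahead) terms.
    return omega * inst_potential + (1.0 - omega) * future_potential
```

The well management system could then rank wells by this composite value instead of the instantaneous potential alone.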
Turning to
In Block 700, grid model data and well data regarding one or more wells are obtained for a geological region of interest in accordance with one or more embodiments.
In Block 705, an initial time step is selected for a reservoir simulation in accordance with one or more embodiments.
In Block 710, a grid model is determined based on grid model data and well data in accordance with one or more embodiments.
In Block 715, a time selection is obtained for a look-ahead simulation in accordance with one or more embodiments.
In Block 720, a look-ahead model is determined based on grid model data, well data, and one or more coarsening functions in accordance with one or more embodiments.
In Block 725, a reservoir simulation is performed at a selected time step in accordance with one or more embodiments.
In Block 730, a determination is made whether a look-ahead simulation is complete in accordance with one or more embodiments. Where a determination is made that a look-ahead simulation is still being performed, the process shown in FIG. 7 may proceed to Block 735. Where a determination is made that the look-ahead simulation is complete, the process shown in
In Block 735, a next time step is selected for a reservoir simulation in accordance with one or more embodiments.
In Block 740, a look-ahead simulation is performed based on a look-ahead model and a time selection in accordance with one or more embodiments.
In Block 745, look-ahead data are determined based on a look-ahead simulation in accordance with one or more embodiments.
In Block 750, one or more composite reservoir parameters are determined for a reservoir simulation based on grid model data, well data, and/or look-ahead data in accordance with one or more embodiments.
In Block 760, a next time step is selected for a reservoir simulation in accordance with one or more embodiments.
In Block 770, a reservoir simulation is performed at a selected time step using a grid model, one or more composite reservoir parameters, grid model data, and/or well data in accordance with one or more embodiments.
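The flow of Blocks 725-770 can be sketched as a main loop that advances the reservoir simulation step by step and switches to composite reservoir parameters once the look-ahead completes. All callables and the completion test are hypothetical stand-ins for simulator components, not a definitive implementation.

```python
def reservoir_workflow(n_steps, lookahead_done_step, simulate_step,
                       lookahead_result, composite_params):
    """Advance the main simulation over n_steps time steps (Blocks 725-770).

    The look-ahead is assumed to complete at lookahead_done_step (the
    check of Block 730); its data then yield composite reservoir
    parameters (Blocks 745-750) used for all remaining steps.
    Returns the per-step simulation outputs.
    """
    params = None
    history = []
    for step in range(n_steps):
        # Block 730: once the look-ahead is done, build composite params.
        if params is None and step >= lookahead_done_step:
            params = composite_params(lookahead_result)
        # Blocks 725/770: simulate this step, with or without look-ahead data.
        history.append(simulate_step(step, params))
    return history
```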
Turning to
Turning to
Furthermore, the water breakthrough time (i.e., the time for the injection water to reach production wells) is between 5 and 15 years as shown in
Embodiments may be implemented on a computer system.
The computer (902) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. The illustrated computer (902) is communicably coupled with a network (930). In some implementations, one or more components of the computer (902) may be configured to operate within a cloud-computing-based, local, global, or other environment (or a combination of environments).
At a high level, the computer (902) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (902) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).
The computer (902) can receive requests over network (930) from a client application (for example, executing on another computer (902)) and respond to the received requests by processing them in an appropriate software application. In addition, requests may also be sent to the computer (902) from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer (902) can communicate using a system bus (903). In some implementations, any or all of the components of the computer (902), whether hardware or software (or a combination of hardware and software), may interface with each other or the interface (904) (or a combination of both) over the system bus (903) using an application programming interface (API) (912) or a service layer (913) (or a combination of the API (912) and service layer (913)). The API (912) may include specifications for routines, data structures, and object classes. The API (912) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (913) provides software services to the computer (902) or other components (whether or not illustrated) that are communicably coupled to the computer (902). The functionality of the computer (902) may be accessible to all service consumers using this service layer. Software services, such as those provided by the service layer (913), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (902), alternative implementations may illustrate the API (912) or the service layer (913) as stand-alone components in relation to other components of the computer (902) or other components (whether or not illustrated) that are communicably coupled to the computer (902). Moreover, any or all parts of the API (912) or the service layer (913) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer (902) includes an interface (904). Although illustrated as a single interface (904) in
The computer (902) includes at least one computer processor (905). Although illustrated as a single processor (905) in
The computer (902) also includes a memory (906) that holds data for the computer (902) or other components (or a combination of both) that can be connected to the network (930). For example, memory (906) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (906) in
The application (907) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (902), particularly with respect to functionality described in this disclosure. For example, application (907) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (907), the application (907) may be implemented as multiple applications (907) on the computer (902). In addition, although illustrated as integral to the computer (902), in alternative implementations, the application (907) can be external to the computer (902).
There may be any number of computers (902) associated with, or external to, a computer system containing computer (902), each computer (902) communicating over network (930). Further, the terms “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (902), or that one user may use multiple computers (902).
In some embodiments, the computer (902) is implemented as part of a cloud computing system. For example, a cloud computing system may include one or more remote servers along with various other cloud components, such as cloud storage units and edge servers. In particular, a cloud computing system may perform one or more computing operations without direct active management by a user device or local computer system. As such, a cloud computing system may have different functions distributed over multiple locations from a central server, which may be performed using one or more Internet connections. More specifically, a cloud computing system may operate according to one or more service models, such as infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), mobile “backend” as a service (MBaaS), serverless computing, and/or function as a service (FaaS).
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims. In the claims, any means-plus-function clauses are intended to cover the structures described herein as performing the recited function(s) and equivalents of those structures. Similarly, any step-plus-function clauses in the claims are intended to cover the acts described here as performing the recited function(s) and equivalents of those acts. It is the express intention of the applicant not to invoke 35 U.S.C. § 112(f) for any limitations of any of the claims herein, except for those in which the claim expressly uses the words “means for” or “step for” together with an associated function.