HYBRID-PERMEABILITY LOG

Information

  • Patent Application: 20240328312
  • Publication Number: 20240328312
  • Date Filed: March 27, 2023
  • Date Published: October 03, 2024
Abstract
Methods and systems are disclosed. The methods may include collecting matrix training data along a first depth interval of a first well and training a first artificial intelligence (AI) model using the matrix training data. The methods may further include collecting well data along a depth interval of a well, inputting the well data into the first AI model, and producing a predicted matrix permeability log along the depth interval from the first AI model. The methods may still further include collecting an image at a discrete depth within the depth interval of the well, where the image is of a fracture, inputting the image into a second AI model, producing a predicted fracture permeability from the second AI model, and generating the predicted hybrid-permeability log using the predicted matrix permeability log and the predicted fracture permeability.
Description
BACKGROUND

Rock matrix may be composed of grains organized such that pores or space between the grains exist. The pores may be one mechanism that allows fluid, such as water and hydrocarbons, to flow through the rock. The ease with which or ability of fluid to flow through the rock may be known as permeability. Another mechanism that allows fluid to flow through rock is fractures, such as natural fractures or man-made fractures. Quantifying rock permeability may be challenging, as well as expensive, due to the complex interplay between multi-modal pore systems of the rock matrix and the heterogeneity and density of fractures that interrupt those pore systems.


However, it may be useful to collectively quantify the matrix permeability and the fracture permeability of rock. In turn, the matrix permeability and the fracture permeability may be used to predict dynamic fluid flow behavior in rock (i.e., a hydrocarbon production rate), inform reservoir simulation models, design completion strategies, evaluate the usefulness of recovery schemes, such as waterflooding and enhanced oil recovery, and determine future well placement.


SUMMARY

This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.


In general, in one aspect, embodiments relate to a method of training a first artificial intelligence (AI) model. The method includes collecting matrix training data along a first depth interval of a first well. The matrix training data includes training well data and associated training rock core permeability data. The method further includes training the first AI model using the matrix training data. The first AI model is trained to produce a predicted matrix permeability log from input well data. Further, the input well data is collected along a second depth interval of the first well or along a depth interval of a second well.


In general, in one aspect, embodiments relate to a method of determining a predicted hybrid-permeability log. The method includes collecting well data along a depth interval of a well, inputting the well data into the first AI model, and producing a predicted matrix permeability log along the depth interval from the first AI model. The method further includes collecting an image at a discrete depth within the depth interval of the well, where the image is of a fracture, inputting the image into a second AI model, and producing a predicted fracture permeability from the second AI model. The method still further includes generating the predicted hybrid-permeability log using the predicted matrix permeability log and the predicted fracture permeability.


In general, in one aspect, embodiments relate to a system. The system includes a computer system configured to receive well data along a depth interval of a well, input the well data into the first AI model, and produce a predicted matrix permeability log along the depth interval from the first AI model. The computer system is further configured to receive an image for a discrete depth within the depth interval of the well, where the image is of a first fracture, input the image into a second AI model, and produce a predicted fracture permeability from the second AI model. The computer system is still further configured to generate a predicted hybrid-permeability log using the predicted matrix permeability log and the predicted fracture permeability and determine a hydrocarbon production rate based, at least in part, on the predicted hybrid-permeability log. The system further includes a production management system configured to determine a production management plan based, at least in part, on the hydrocarbon production rate.


Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.





BRIEF DESCRIPTION OF DRAWINGS

Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.



FIG. 1 illustrates rock in accordance with one or more embodiments.



FIG. 2 illustrates a rock coring system in accordance with one or more embodiments.



FIG. 3 illustrates a permeability system in accordance with one or more embodiments.



FIG. 4 displays a portion of associated training rock core permeability data in accordance with one or more embodiments.



FIG. 5 illustrates a well logging system in accordance with one or more embodiments.



FIG. 6 displays a portion of training well data in accordance with one or more embodiments.



FIG. 7 shows a workflow in accordance with one or more embodiments.



FIG. 8 illustrates a neural network in accordance with one or more embodiments.



FIG. 9 illustrates a u-net in accordance with one or more embodiments.



FIG. 10 shows a workflow in accordance with one or more embodiments.



FIG. 11A shows a predicted matrix permeability log in accordance with one or more embodiments.



FIG. 11B shows images in accordance with one or more embodiments.



FIG. 11C shows predicted fracture permeabilities in accordance with one or more embodiments.



FIG. 11D shows a predicted hybrid-permeability log in accordance with one or more embodiments.



FIG. 12 describes a method in accordance with one or more embodiments.



FIG. 13 describes a method in accordance with one or more embodiments.



FIG. 14 illustrates a computer system in accordance with one or more embodiments.



FIG. 15 shows a series of systems in accordance with one or more embodiments.





DETAILED DESCRIPTION

In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.


Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.


It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “an image” includes reference to one or more of such images.


Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.


It is to be understood that one or more of the steps shown in the flowcharts may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowcharts.


Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.


In the following description of FIGS. 1-15, any component described regarding a figure, in various embodiments disclosed herein, may be equivalent to one or more like-named components described regarding any other figure. For brevity, descriptions of these components will not be repeated regarding each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments disclosed herein, any description of the components of a figure is to be interpreted as an optional embodiment which may be implemented in addition to, in conjunction with, or in place of the embodiments described regarding a corresponding like-named component in any other figure.


As hydrocarbon discoveries decline, there is a compelling need in the oil and gas industry to effectively manage current resources while also preparing to effectively manage future, perhaps challenging, resources, such as low-quality rock reservoirs, naturally-fractured reservoirs, and/or shale reservoirs. As such, efforts to predict dynamic fluid flow behavior, inform reservoir simulation models, design completion strategies, and enhance recovery schemes to ensure reservoirs are being effectively managed may be crucial. Rock permeability, or the ease with which fluid flows through rock, may be used, at least in part, to aid in effective management efforts.


Turning to FIG. 1, FIG. 1 illustrates rock matrix 100 (hereinafter also simply “rock”) in accordance with one or more embodiments. The rock matrix 100 may be composed of grains 105. The grains 105 may be physically characterized by type, sphericity, roundness, roughness, connectivity, size, sorting of size, etc. Grain types include sandstone, carbonate, or shale, for example. Such physical characteristics of rock matrix 100 may be described as the lithology of the rock matrix 100.


The grains 105 may be organized such that pores 110 or space between the grains 105 exist. In turn, the lithology of the rock matrix 100 may govern the types of pore systems that exist. Each rock 100 may contain complex, multi-modal pore systems. Further, each pore system may overlap or invade another pore system. These pore systems may allow fluid, such as air, water, brine, hydrocarbons, or any combination thereof, to flow through the rock 100. The ease with which fluid flows through the rock 100 may be known as permeability, a dynamic property of rock 100. As such, these pore systems may govern the permeability of rock 100. However, permeability may additionally be governed by fractures 115 (i.e., cracks or breaks) that interrupt the pore systems within rock 100. Fractures 115 may be heterogeneous and densely pervade a rock 100. Heterogeneity may take the form of non-uniform apertures, roughness, and tortuosity. Further, fractures 115 may be natural or man-made.


As such, quantifying permeability of rock 100 may be challenging due to the complex interplay between multi-modal pore systems and heterogeneous fractures 115.


Methods and systems are disclosed to generate a predicted hybrid-permeability log. The predicted hybrid-permeability log includes rock matrix permeability and rock fracture permeability along a depth interval of a well. A collection of artificial intelligence (AI) models is trained and deployed to determine the rock matrix permeability (hereinafter “predicted matrix permeability log”) and the rock fracture permeability (hereinafter “predicted fracture permeability”).


A first AI model is trained to produce the predicted matrix permeability log from input well data. Hereinafter, the training well data and associated training rock core permeability data together make up the matrix training data. The matrix training data may be used to train the first AI model. The training well data may be collected using a well logging system and/or determined from rock cores tested in a laboratory setting. The associated training rock core permeability data may be determined using a permeability system and rock cores collected downhole using a rock coring system.


A second AI model is trained to produce the predicted fracture permeability from an image. Hereinafter, each mth training image and each associated mth training fracture permeability together make up an mth fracture training pair. Here, m is an integer between 1 and M, inclusive of 1 and M. Further, M is an integer greater than or equal to one. For example, if M=2, two fracture training pairs include a first fracture training pair and a second fracture training pair. Further, the first fracture training pair includes a first training image and an associated first training fracture permeability. Further still, the second fracture training pair includes a second training image and an associated second training fracture permeability. The M fracture training pairs may be used to train the second AI model. Each mth training image may be collected using a well logging system. Each associated mth training fracture permeability may be modeled using the mth training image.


Embodiments disclosed herein may embody improvements over existing methods. For example, one improvement is that the present disclosure may determine predicted fracture permeability from an image of a fracture collected downhole. While a downhole image of a fracture is commonly used to determine static fracture properties, it may be uncommon to use a downhole image of a fracture to determine dynamic fracture properties like permeability. A second improvement is that the present disclosure combines a predicted matrix permeability log and a predicted fracture permeability to generate a predicted hybrid-permeability log, which may be uncommon. However, combining the two into a predicted hybrid-permeability log may present a clearer understanding of how fluid may flow within rock 100.


For clarity, a discussion surrounding the matrix training data that may be used to train the first AI model will be provided first. A discussion surrounding the M fracture training pairs that may be used to train the second AI model will be provided second. A discussion of the architecture of both the first AI model and second AI model will be provided third. A discussion of the deployment of the AI models for use will be provided fourth.


Turning to the matrix training data, which may be used to train the first AI model, the matrix training data includes training well data and associated training rock core permeability data collected along a first depth interval of a first well. In some embodiments, the training well data may additionally include training geological data, such as training lithofacies information, training porosity log data, and/or training rock core porosity data.


Turning to the associated training rock core permeability data, rock cores may be collected along the first depth interval of the first well using a rock coring system, which will be discussed in reference to FIG. 2. The rock cores may then be tested in a laboratory setting using a permeability system, which will be discussed in reference to FIG. 3, to determine the associated training rock core permeability data.



FIG. 2 illustrates the rock coring system 200 in accordance with one or more embodiments. The rock coring system 200 may be configured to simultaneously drill a well 205, such as the first well, within a subterranean region of interest 210 and retrieve one or more rock cores 215 along a depth interval, such as the first depth interval, of the well 205. As such, the system may be considered a drilling system that includes a rock coring system 200. The rock coring system 200 may collect rock cores 215 continuously or at intervals while drilling the well 205. To do so, the rock coring system 200 may include a coring bit 220 attached to a core barrel 225. Within the core barrel 225, an inner barrel 230 is disposed between a swivel 235 attached to an upper portion of the core barrel 225 and a core catcher 240 disposed close to the coring bit 220. The coring bit 220 consists of an annular cutting or grinding surface configured to flake, gouge, grind, or wear away the rock 100 at the base or “toe” of the well 205 and a central axial orifice configured to allow a cylindrical column, or rock core 215, to pass through. The annular cutting surface of the coring bit 220 typically includes embedded polycrystalline diamond compact (PDC) cutting elements.


The inner barrel 230 within the core barrel 225 may be disposed above or behind the coring bit 220. Further, the inner barrel 230 may be separated from the coring bit 220 by the core catcher 240. As the coring bit 220 grinds away the rock 100, the cylindrical rock core 215 passes through the central orifice of the coring bit 220 and through the core catcher 240 into the inner barrel 230 as the coring bit 220 advances deeper into the subterranean region of interest 210. The inner barrel 230 may be attached by the swivel 235 to the remainder of the core barrel 225 to permit the inner barrel 230 to remain stationary as the core barrel 225 rotates together with the coring bit 220. When the inner barrel 230 is filled with the rock core 215, the core barrel 225 containing the rock core 215 may be raised and retrieved at the surface of the earth 250. The core catcher 240 serves to grip the bottom of the rock core 215 and, as lifting tension is applied to the drillstring 245 and the core barrel 225, the rock 100 under the rock core 215 breaks away from the undrilled rock 100 within the subterranean region of interest 210 below it. The core catcher 240 may retain the rock core 215 so that it does not fall out the bottom of the core barrel 225 through the annular orifice of the coring bit 220 as the core barrel 225 is raised to the surface of the earth 250.


In addition to collecting rock cores 215 while drilling the well 205, smaller “sidewall cores” may be obtained after drilling the well 205. A sidewall coring tool (not shown) may be lowered by wireline into the well 205. When deployed, the sidewall coring tool may press or clamp itself against the wall of the well 205 and a rock core 215 may be obtained either by drilling into the well wall with a hollow drill bit or by firing a hollow bullet into the well wall using an explosive charge. In excess of 50 such sidewall cores may be obtained during a single deployment of a sidewall coring tool into the well 205. Hereinafter, the term “rock cores” 215 may be used to describe rock cores 215 collected using a rock coring system 200 as described in FIG. 2 or a sidewall coring tool.


Rock cores 215 may provide representative samples of rock 100 within the subterranean region of interest 210. Further, rock cores 215 may permit physical examination and direct measurement of porosity, permeability, fluid saturation, grain density, lithology, texture, and other geologic characteristics of interest in a laboratory setting. Analysis of rock cores 215 may further provide evidence of presence, distribution, and deliverability of hydrocarbons within a given hydrocarbon reservoir.


Under ideal circumstances, a rock core 215 may be recovered as a single, continuous, intact cylinder of rock 100. However, frequently, the rock core 215 takes the form of several shorter cylindrical segments separated by breaks. The breaks may be a consequence of stresses experienced by the rock core 215 during coring or may be caused by pre-existing natural fractures 115 within the subterranean region of interest 210.


In general, each extracted rock core 215 may be up to 15 centimeters in diameter and approximately ten meters long. To prepare each rock core 215 for testing in a laboratory setting to determine associated training rock core permeability data, the rock core 215 may be cut and ground into core plugs. Each core plug may be a few centimeters in diameter and approximately five centimeters long, though other shapes and dimensions may be used. Further, each core plug may be cut and ground along a particular axis, such as parallel or perpendicular to an axis of the well 205.


Each core plug may be dried to remove fluid, such as water and hydrocarbons. In some embodiments, each core plug may be placed in a vacuum oven to remove the fluid. Each core plug may also be cyclically pre-stressed to remove inelastic deformation. In some embodiments, cyclical pre-stressing may occur over a period of days.


Turning to FIG. 3, each core plug 300 may be cyclically pre-stressed and tested to determine associated training rock core permeability data using a permeability system 305. Prior to cyclical pre-stressing and/or testing, each core plug 300 may be wrapped in a jacket 310, placed between two endcaps 318, and housed in a confining cell 316. In some embodiments, the jacket 310 may be a hollow Viton sleeve. A pressure generator 315 may be connected to the confining cell 316 and configured to provide a fluid to the confining cell 316 to control confining stress (i.e., overburden stress). A gas pump 320 may be connected to each end of each core plug 300 via an upstream reservoir 330 and a downstream reservoir 335 and configured to uniquely control pore pressure at either end of the core plug 300. To control pore pressure, a gas tank 325 may supply helium, nitrogen, or other gas into the pores 110 of the core plug 300 via the upstream reservoir 330 and/or the downstream reservoir 335 within the gas pump 320. As such, wrapping the core plug 300 in the jacket 310 between two endcaps 318 housed in a confining cell 316 may ensure the helium, nitrogen, or other gas supplied by the gas tank 325 does not communicate with the fluid that controls the confining stress.


The upstream reservoir 330 and downstream reservoir 335 may also be used to recover gas from each core plug 300 depending on the mode of operation of the gas pump 320. For example, if a constant pore pressure is applied to each core plug 300, both the upstream reservoir 330 and downstream reservoir 335 may supply gas to each core plug 300. However, if a pressure pulse is applied to each core plug 300, the upstream reservoir 330 at a high pressure may supply gas while the downstream reservoir 335 at a low pressure recovers the gas that passed through the core plug 300. In some embodiments, various parts of the permeability system 305 may be communicably coupled to a computer 1405 as will be described in reference to FIG. 14. The computer 1405 may be configured to control and/or collect data from the pressure generator 315, gas tank 325, and/or gas pump 320.


During testing, each core plug 300 may be simultaneously subjected to a pore pressure and a confining stress. Permeability may then be determined using indirect methods, such as a pressure pulse decay method or steady-state Darcy flow method. The pressure pulse decay method may take on the order of hours. The steady-state Darcy flow method may take on the order of hours or even days. In brief, the pressure pulse decay method may rely on the gas pump 320 to generate small pressure pulses at the upstream reservoir 330 that travel through each core plug 300 to the downstream reservoir 335. In some embodiments, each pressure pulse may be small to minimize changes in pore pressure. For each pressure pulse, the first transient pressure at the upstream reservoir 330 and the second transient pressure at the downstream reservoir 335 may be fit to a permeability-pressure model to determine permeability.


This process to determine permeability may be repeated for each core plug 300 collected from the well 205. A person of ordinary skill in the art will appreciate that FIG. 3 illustrates a highly generic permeability system 305 that is not meant to limit the present disclosure. For example, other permeability systems that may be used to determine the associated training rock core permeability data may further control one or more axial stresses of the core plug 300 using one or more actuators coupled to one or more load cells.
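
As a rough, illustrative sketch of the steady-state Darcy flow calculation mentioned above, the following Python snippet converts a measured flow rate, pressure drop, and plug geometry into a permeability value. The function name, input values, and the use of the simple incompressible form of Darcy's law (without the gas-slippage and compressibility corrections applied in practice) are assumptions for illustration only and are not taken from the disclosure.

```python
import math

# Minimal sketch: estimating core-plug permeability from a steady-state
# Darcy flow test. All names and values are illustrative assumptions.

MD_PER_M2 = 1.0 / 9.869233e-16  # millidarcy per square meter


def darcy_permeability_md(flow_rate_m3_s: float,
                          viscosity_pa_s: float,
                          length_m: float,
                          area_m2: float,
                          pressure_drop_pa: float) -> float:
    """Return permeability in millidarcy from steady-state Darcy flow.

    Uses the incompressible form k = q * mu * L / (A * dP); laboratory gas
    tests apply additional compressible-flow corrections, omitted here.
    """
    k_m2 = flow_rate_m3_s * viscosity_pa_s * length_m / (area_m2 * pressure_drop_pa)
    return k_m2 * MD_PER_M2


if __name__ == "__main__":
    # Hypothetical core plug: 5 cm long, 2.5 cm diameter, nitrogen at ~0.018 mPa*s.
    area = math.pi * 0.0125 ** 2
    k_md = darcy_permeability_md(flow_rate_m3_s=1.0e-8,
                                 viscosity_pa_s=1.8e-5,
                                 length_m=0.05,
                                 area_m2=area,
                                 pressure_drop_pa=2.0e5)
    print(f"Estimated permeability: {k_md:.3f} mD")
```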


Once the permeability system 305 has been used to determine the permeability of each core plug 300 from a well 205, a portion of the associated training rock core permeability data 400 may be displayed as shown in FIG. 4. The abscissa 405 represents permeability determined using the permeability system 305. The ordinate 410 represents the first depth interval 415, such as a measured depth interval, that includes each discrete depth 420 at which each core plug 300 was collected. FIG. 4 may also be described as a partial matrix permeability log.


Returning to the matrix training data, recall that the matrix training data includes associated training rock core permeability data 400, as just described, and training well data. Some training well data may be collected from rock cores 215 and tested in a laboratory setting to determine training rock core porosity data and/or training geological data. Further, some training well data may be collected using one or more well logging systems 500 deployed downhole along the first depth interval of the first well, such as the well 205, as illustrated in FIG. 5.



FIG. 5 illustrates a well logging system 500 downhole in accordance with one or more embodiments. Prior to deploying the well logging system 500 downhole, the well 205 may be partially or completely drilled within the subterranean region of interest 210 using the rock coring system 200 as previously described relative to FIG. 2. Recall that the well 205 may traverse layers of rock 100 separated by horizons 510, faults, fractures 115, which may be natural or man-made, and/or other structural features before ultimately penetrating a hydrocarbon reservoir 515. In some embodiments, the hydrocarbon reservoir 515 may be a heterogeneous carbonate formation.


Following the removal of the rock coring system 200, the well logging system 500 may be lowered into the well 205. The well logging system 500 may be supported by a truck 520 and derrick 525 above ground. For example, the truck 520 may carry a conveyance mechanism 530 used to lower the well logging system 500 into the well 205. The conveyance mechanism 530 may be a wireline, coiled tubing, or drillpipe that may include means to provide power to the well logging system 500 and a telemetry channel from the well logging system 500 to the surface of the earth 250. In some embodiments, the well logging system 500 may be translated along the well 205 to acquire training well data over multiple intervals 535.


The well logging system 500 used to collect training well data may be, but is not limited to, a density logging tool, neutron porosity logging tool, acoustic logging tool, gamma ray logging tool, resistivity logging tool, and nuclear magnetic resonance (NMR) logging tool. As such, training well data may include, but are not limited to, a density log, neutron porosity log (hereinafter also simply “neutron log”), acoustic log, gamma ray log, resistivity log, NMR log, and any combination thereof.



FIG. 6 displays a portion of training well data 605 in accordance with one or more embodiments. The training well data 605 may, at least in part, characterize the rock 100 surrounding the first well, such as well 205, along the first depth interval 415. Here, the portion of the training well data 605 includes a neutron log 610 and a density log 615 in accordance with one or more embodiments.


While the discussion of the matrix training data thus far has been restricted to data collected along the first depth interval 415 of the first well, in practice, the matrix training data may include matrix training data collected along one or more depth intervals for each of multiple wells. Further, as previously mentioned, the training well data may include training porosity log data and training rock core porosity data. In these embodiments, the training porosity log data and training rock core porosity data may be compared to ensure the training porosity log data and training rock core porosity data are reasonable. For example, the standard deviation of one data type may be required to be within 2% of the other, where 2% may be denoted a threshold.
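
The following is a minimal sketch of the consistency check described above, in which the standard deviation of the training porosity log data is required to fall within a threshold (here 2%) of the standard deviation of the training rock core porosity data. The function name, example arrays, and the use of a relative difference are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the porosity consistency check: the standard deviation of
# the training porosity log must lie within a threshold of the standard
# deviation of the training rock core porosity data.


def porosity_data_consistent(porosity_log: np.ndarray,
                             core_porosity: np.ndarray,
                             threshold: float = 0.02) -> bool:
    """Return True when the two standard deviations agree within `threshold`."""
    std_log = np.std(porosity_log)
    std_core = np.std(core_porosity)
    relative_difference = abs(std_log - std_core) / std_core
    return relative_difference <= threshold


# Hypothetical porosity fractions along a depth interval.
log_phi = np.array([0.12, 0.15, 0.14, 0.18, 0.16])
core_phi = np.array([0.13, 0.14, 0.15, 0.17, 0.16])
print(porosity_data_consistent(log_phi, core_phi))
```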


Turning to the M fracture training pairs, which may be used to train the second AI model, recall that each mth fracture training pair includes an mth training image and an associated mth training fracture permeability. In some embodiments, each mth training image may be collected using one or more well logging systems 500 deployed downhole as described relative to FIG. 5. However, the well logging system 500 used to collect each mth training image may now be, but is not limited to, an acoustic image logging tool and resistivity image logging tool. Further, there is no limitation as to what well 205 each mth training image is collected from if collected from a well 205. In other embodiments, each mth training image may be collected from rock cores 215 or outcrops. As such, in these embodiments, each mth training image may be, but is not limited to, an x-ray image or computed tomography (CT) image. In still other embodiments, each mth training image may be a simulated image. Further, each mth training image may be considered a high-resolution image.



FIG. 7 displays an mth training image 700 within a workflow in accordance with one or more embodiments. Each mth training image 700 may be of a natural fracture 115, a man-made fracture 115, or a simulated fracture 115. In some embodiments, the fracture 115 transects the well 205 as illustrated in FIG. 7. Each mth training image 700 may be used to determine an mth training fracture-identified image 710. Each fracture 115 within each mth training image 700 may be identified as an identified fracture 715 using, at least in part, any image segmentation technique and/or edge detection method known to a person of ordinary skill in the art. Image segmentation may be defined as the process of partitioning an image, where partitioning may include locating objects, such as fractures 115, or boundaries, such as fracture boundaries. Image segmentation techniques may include, but are not limited to, thresholding methods, K-means algorithms, histogram-based methods, region-growing methods, variational methods, partitioning methods, watershed transformations, model-based approaches, multi-scale segmentations, and other differential approaches. Edge detection may be defined as the process of locating boundaries or edges, such as fracture boundaries, within an image. Edge detection methods may include, but are not limited to, Canny edge detection, Deriche edge detection, a Sobel operator, a Prewitt operator, a Roberts cross operator, and other differential approaches. Further, the image segmentation technique and/or edge detection method may be performed manually, semi-manually, or automatically. In some embodiments, image segmentation techniques and/or edge detection methods may be used to assign a label to each pixel within each mth training image 700 to, thus, determine the mth training fracture-identified image 710. For example, each pixel within each mth training fracture-identified image 710 may be labeled as either “fracture” or “not fracture.”
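
As a minimal sketch of how pixels in an mth training image 700 might be labeled “fracture” or “not fracture,” the following Python snippet combines two of the techniques named above: a simple intensity threshold and a Sobel gradient magnitude. The synthetic image, threshold values, and function name are illustrative assumptions; a production workflow would typically use more sophisticated segmentation.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of labeling pixels as "fracture" or "not fracture" using an
# intensity threshold plus Sobel edge detection. Values are illustrative.


def label_fracture_pixels(image: np.ndarray,
                          intensity_threshold: float,
                          gradient_threshold: float) -> np.ndarray:
    """Return a boolean mask that is True where a pixel is labeled 'fracture'."""
    # Thresholding: conductive (dark) fracture fill often images as low values.
    low_intensity = image < intensity_threshold

    # Edge detection: Sobel gradient magnitude highlights fracture boundaries.
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    strong_edges = np.hypot(gx, gy) > gradient_threshold

    return low_intensity | strong_edges


# Hypothetical usage on a synthetic image containing one dark, dipping streak.
image = np.ones((64, 64))
rows = np.arange(64)
image[rows, (rows // 2 + 10) % 64] = 0.1  # crude stand-in for a fracture trace
mask = label_fracture_pixels(image, intensity_threshold=0.5, gradient_threshold=1.0)
print(mask.sum(), "pixels labeled as fracture")
```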


Once each mth training fracture-identified image 710 is determined, each mth training fracture-identified image 710 may be further manipulated by, for example, converting to a binary image and/or downsampling to a lower resolution as shown as an mth training intermediate fracture-identified image 720 in FIG. 7. A model 725 may then be applied to each mth training intermediate fracture-identified image 720 to determine each associated mth training fracture permeability of the fracture 115 within each mth training image 700. In some embodiments, the model 725 may be the Navier-Stokes equations, which are coupled partial differential equations that describe how the velocity, pressure, temperature, and density of a moving fluid are related. In some embodiments, the Navier-Stokes equations may be approximated using methods such as finite difference, finite volume, finite element, and spectral methods. In practice, tens to thousands of fracture training pairs may be generated using the workflow shown in FIG. 7.
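
The following sketch is a deliberate simplification of the modeling step described above: rather than approximating the Navier-Stokes equations, it estimates an effective fracture permeability from a binary, downsampled fracture image using the parallel-plate (“cubic law”) relation k_f = b^2/12, where b is a mean aperture. The cubic-law substitution, pixel size, and input image are assumptions introduced only to illustrate going from an mth training intermediate fracture-identified image 720 to a single permeability value.

```python
import numpy as np

# Deliberately simplified sketch: effective fracture permeability from a binary
# fracture image via the parallel-plate ("cubic law") approximation, used here
# in place of the Navier-Stokes model described in the text.


def cubic_law_permeability_m2(binary_fracture: np.ndarray,
                              pixel_size_m: float) -> float:
    """Estimate fracture permeability (m^2) from a binary fracture image.

    `binary_fracture` is True/1 where a pixel belongs to the fracture. Each
    image column's aperture is its count of fracture pixels times the pixel
    size; the mean aperture b feeds k_f = b**2 / 12.
    """
    apertures_m = binary_fracture.sum(axis=0) * pixel_size_m
    apertures_m = apertures_m[apertures_m > 0]  # ignore columns with no fracture
    if apertures_m.size == 0:
        return 0.0
    mean_aperture = apertures_m.mean()
    return mean_aperture ** 2 / 12.0


# Hypothetical binary image: a roughly three-pixel-wide horizontal fracture.
img = np.zeros((32, 32), dtype=bool)
img[15:18, :] = True
k_f = cubic_law_permeability_m2(img, pixel_size_m=1.0e-4)  # 0.1 mm pixels
print(f"Estimated fracture permeability: {k_f:.3e} m^2")
```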


Turning to the AI models, recall that the matrix training data discussed in reference to FIGS. 2-6 may be used, at least in part, to train the first AI model. Further, recall that the M fracture training pairs discussed in reference to FIGS. 5 and 7 may be used, at least in part, to train the second AI model.


The phrases “artificial intelligence,” “machine learning,” “deep learning,” and “pattern recognition” are often conflated, interchanged, and used synonymously. This ambiguity arises because the field of “extracting patterns and insights from data” was developed simultaneously and disjointedly among a number of classical arts like mathematics, statistics, and computer science. For consistency, the term artificial intelligence (AI) will be adopted herein. However, one skilled in the art will recognize that the concepts and methods detailed hereafter are not limited by this choice of nomenclature. Further, in some fields, machine learning models may be considered a subset within AI models. As such, an “ML model” may more specifically describe the AI model depending on the AI model selected.


To begin the discussion of AI, a cursory introduction to a neural network (NN) is provided herein as one or more of the AI models may be or include one or more specialized NNs and/or convolutional neural networks (CNN). However, note that many variations of an NN and CNN exist. Therefore, one of ordinary skill in the art will recognize that any variation of an NN or CNN (or any other AI model) may be employed without departing from the scope of the disclosure. Further, it is emphasized that the following discussions of an NN and CNN are basic summaries and should not be considered limiting.


A diagram of an NN 800 is shown in FIG. 8. At a high level, an NN 800 may be graphically depicted as being composed of nodes 802 and edges 804. The nodes 802 may be grouped to form layers 805. FIG. 8 displays four layers 808, 810, 812, 814 of nodes 802 where the nodes 802 are grouped into columns. However, each group need not be as shown in FIG. 8. The edges 804 connect the nodes 802 to other nodes 802. Edges 804 may connect, or not connect, to any node(s) 802 regardless of which layer 805 the node(s) 802 is in. That is, the nodes 802 may be sparsely and residually connected.


An NN 800 will have at least two layers 805, where the first layer 808 is the “input layer” and the last layer 814 is the “output layer.” Any intermediate layer 810, 812 is usually described as a “hidden layer.” An NN 800 may have zero or more hidden layers 810, 812. An NN 800 with at least one hidden layer 810, 812 may be described as a “deep” neural network or “deep learning method.” In general, an NN 800 may have more than one node 802 in the output layer 814. In these cases, the neural network 800 may be referred to as a “multi-target” or “multi-output” network.


Nodes 802 and edges 804 carry associations. Namely, every edge 804 is associated with a numerical value. The edge numerical values, or even the edges 804 themselves, are often referred to as “weights” or “parameters.” While training an NN 800, a process that will be described below, numerical values are assigned to each edge 804. Additionally, every node 802 is associated with a numerical value and may also be associated with an activation function. Activation functions are not limited to any functional class, but traditionally follow the form:

    A = ƒ( Σ_{i (incoming)} [ (node value)_i × (edge value)_i ] ),          Equation (1)

where i is an index that spans the set of “incoming” nodes 802 and edges 804 and ƒ is a user-defined function. Incoming nodes 802 are those that, when viewed as a graph (as in FIG. 8), have directed arrows that point to the node 802 where the numerical value is being computed. Some functions ƒ may include the linear function ƒ(x) = x, sigmoid function ƒ(x) = 1 / (1 + e^(−x)),
and rectified linear unit (ReLU) function ƒ(x) = max(0, x); however, many additional functions are commonly employed. Every node 802 in an NN 800 may have a different associated activation function. Often, as a shorthand, an activation function is described by the function ƒ of which it is composed. That is, an activation function composed of a linear function ƒ may simply be referred to as a linear activation function without undue ambiguity.
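
The following Python sketch evaluates Equation (1) for a single node 802, using the sigmoid and ReLU functions defined above. The example node values, edge values, and bias term are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of Equation (1): a node's value is an activation function f
# applied to the sum over incoming nodes of (node value) x (edge value).


def relu(x: float) -> float:
    return max(0.0, x)


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))


def node_activation(incoming_node_values, incoming_edge_values, f=sigmoid, bias=0.0):
    """Compute A = f(sum_i (node value)_i * (edge value)_i + bias) for one node."""
    weighted_sum = sum(n * e for n, e in zip(incoming_node_values, incoming_edge_values))
    return f(weighted_sum + bias)


# Three incoming nodes, each with an edge weight; a fixed bias node of value 1
# contributes through the `bias` term.
print(node_activation([0.5, -1.2, 2.0], [0.8, 0.1, -0.3], f=relu, bias=1.0))
```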


When the NN 800 receives an input, the input is propagated through the network according to the activation functions and incoming node values and edge values to compute a value for each node 802. That is, the numerical value for each node 802 may change for each received input while the edge values remain unchanged. Occasionally, nodes 802 are assigned fixed numerical values, such as the value of 1. These fixed nodes 806 are not affected by the input or altered according to edge values and activation functions. Fixed nodes 806 are often referred to as “biases” or “bias nodes” as displayed in FIG. 8 with a dashed circle.


In some implementations, the NN 800 may contain specialized layers 805, such as a normalization layer, pooling layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of the disclosure.


The number of layers in an NN 800, choice of activation functions, inclusion of batch normalization layers, and regularization strength, among others, may be described as “hyperparameters” that are associated to the AI model. It is noted that in the context of AI, the regularization of an AI model refers to a penalty applied to the loss function of the AI model. The selection of hyperparameters associated to an AI model is commonly referred to as selecting the AI model “architecture.”


Once an AI model, such as an NN 800, and associated hyperparameters have been selected, the AI model may be trained. To train or partially train the first AI model, the matrix training data as previously described may be provided to the first AI model. In general, the matrix training data includes an input (i.e., the training well data) and an associated target output (i.e., the associated training rock core permeability data). To train or partially train the second AI model, the M fracture training pairs as previously described may be provided to the second AI model. Similarly, the M training pairs include an input (i.e., each mth training image) and an associated target output (i.e., each associated mth training fracture permeability). Each associated target output represents the “ground truth” or the otherwise desired output upon processing the input. During training, each AI model processes at least one input to produce at least one output. Each output is then compared to the associated target output.


If either AI model is or includes an NN 800 as described in FIG. 8, the NN 800 may be trained by first assigning initial values to the edges 804. These values may be assigned randomly, according to a prescribed distribution, manually, or by some other assignment mechanism. Once edge values have been initialized, the NN 800 may act as a function such that it may receive an input and produce an output. At least one input is propagated through the NN 800 to produce an output.


The comparison of the output to the associated target output is typically performed by a “loss function.” Other names for this comparison function include an “error function,” “misfit function,” and “cost function.” Many types of loss functions are available, such as the log-likelihood function or cross-entropy loss function. However, the general characteristic of a loss function is that the loss function provides a numerical evaluation of the similarity between the output and the associated target output. The loss function may also be constructed to impose additional constraints on the values assumed by the edges 804. For example, a penalty term, which may be physics-based, or a regularization term may be added. Generally, the goal of a training procedure is to alter the edge values to promote similarity between the output and associated target output for most, if not all, of either the matrix training data or the M fracture training pairs. Thus, the loss function is used to guide changes made to the edge values. This process is typically referred to as “backpropagation.”


While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function with respect to the edge values. The gradient indicates the direction of change in the edge values that results in the greatest change to the loss function. Because the gradient is local to the current edge values, the edge values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previous edge values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.


Once the edge values of the NN 800 have been updated through the backpropagation process, the NN 800 will likely produce different outputs than it did previously. Thus, the procedure of propagating at least one input through the NN 800, comparing the NN output with the associated target output with a loss function, computing the gradient of the loss function with respect to the edge values, and updating the edge values with a step guided by the gradient is repeated until a termination criterion is reached. Common termination criteria include, but are not limited to, reaching a fixed number of edge updates (otherwise known as an iteration counter), reaching a diminishing learning rate, noting no appreciable change in the loss function between iterations, or reaching a specified performance metric as evaluated on the matrix training data, the M fracture training pairs, or separate hold-outs of either (generically denoted “validation data”). Once the termination criterion is satisfied, the edge values are no longer altered and the NN 800 is said to be “trained.”
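
The following Python sketch illustrates the training loop just described for a toy single-layer model: inputs are propagated to outputs, a mean-squared-error loss compares outputs to target outputs, the gradient of the loss with respect to the edge values guides a step scaled by a learning rate, and training terminates on either an iteration cap or a loss plateau. The synthetic data, loss choice, and learning rate are illustrative assumptions and are not specific to the disclosed AI models.

```python
import numpy as np

# Minimal sketch of the training loop: propagate, compare with a loss,
# step the edge values along the negative gradient, and stop on a
# termination criterion. Data and hyperparameters are illustrative.

rng = np.random.default_rng(0)
inputs = rng.normal(size=(200, 4))                           # e.g., four well-log measurements
targets = inputs @ np.array([0.5, -1.0, 2.0, 0.3]) + 0.1     # synthetic "ground truth"

weights = rng.normal(size=4)     # edge values, randomly initialized
bias = 0.0
learning_rate = 0.05
previous_loss = np.inf

for iteration in range(10_000):              # termination criterion 1: iteration cap
    outputs = inputs @ weights + bias        # propagate inputs through the model
    residual = outputs - targets
    loss = np.mean(residual ** 2)            # loss compares output to target output

    # Gradient of the loss with respect to the edge values (backpropagation
    # collapses to a closed form for this single linear layer).
    grad_weights = 2.0 * inputs.T @ residual / len(targets)
    grad_bias = 2.0 * residual.mean()

    weights -= learning_rate * grad_weights  # step guided by the gradient
    bias -= learning_rate * grad_bias

    if abs(previous_loss - loss) < 1e-10:    # termination criterion 2: loss plateau
        break
    previous_loss = loss

print(f"Stopped after {iteration + 1} iterations, loss {loss:.2e}")
```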


Turning to a CNN, a CNN is similar to an NN 800 in that it can technically be graphically represented by a series of edges 804 and nodes 802 grouped to form layers 805. However, it is more informative to view a CNN as structural groupings of weights. Here, the term “structural” indicates that the weights within a group have a relationship, often a spatial relationship. CNNs are widely applied when the input also has a relationship. For example, an image, such as the mth training image 700, has a spatial relationship where the value associated to each pixel is spatially dependent on the value of other neighboring pixels within the image. Consequently, a CNN is an intuitive choice for processing images.


A structural grouping of weights is herein referred to as a “filter” or “convolution kernel.” The number of weights in a filter is typically much less than the number of inputs, where, now, each input may refer to a pixel in an image. For example, a filter may take the form of a square matrix, such as a 3×3 or 8×8 matrix. In a CNN, each filter can be thought of as “sliding” over, or convolving with, all or a portion of the inputs to form an intermediate output or intermediate representation of the inputs which possess a relationship. The portion of the inputs convolved with the filter may be referred to as a “receptive field.” Like the NN 800, the intermediate outputs are often further processed with an activation function. Many filters of different sizes may be applied to the inputs to form many intermediate representations. Additional filters may be formed to operate on the intermediate representations creating more intermediate representations. This process may be referred to as a “convolutional layer” within the CNN. Multiple convolutional layers may exist within a CNN as prescribed by a user. Lastly, note that a “deconvolutional layer” may also exist within a CNN.
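
The following Python sketch shows a single 3×3 filter sliding over an image with stride 1 and no padding to produce one intermediate representation, followed by an ReLU activation. The filter weights and input are illustrative assumptions; practical CNNs apply many learned filters using optimized library routines.

```python
import numpy as np

# Minimal sketch of one convolutional filter "sliding" over an image to form
# an intermediate representation, followed by a ReLU activation.


def convolve2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide `kernel` over `image` (no padding, stride 1) and return the output."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            receptive_field = image[i:i + kh, j:j + kw]   # portion covered by the filter
            output[i, j] = np.sum(receptive_field * kernel)
    return output


image = np.random.default_rng(1).normal(size=(8, 8))
vertical_edge_filter = np.array([[1.0, 0.0, -1.0],
                                 [2.0, 0.0, -2.0],
                                 [1.0, 0.0, -1.0]])  # a 3x3 structural grouping of weights
feature_map = np.maximum(convolve2d_valid(image, vertical_edge_filter), 0.0)  # ReLU
print(feature_map.shape)  # (6, 6)
```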


There is a “final” group of intermediate representations where no filters act on these intermediate representations. In some instances, the relationship of the final intermediate representations is ablated, which is a process known as “flattening.” The flattened representation may be passed to an NN to produce a final output.


Like an NN 800, a CNN is trained. The filter weights and the edge values of the CNN, if present, are initialized and then determined using either the matrix training data or the M fracture training pairs and backpropagation as previously described.


In other embodiments, the first AI model may be or include multi-resolution graph-based clustering (MRGC) methods, dynamic clustering (DC) methods, or random forest algorithms. For MRGC and DC methods, a certain number of clusters may be pre-selected, such as between five and 40 clusters. However, a person of ordinary skill in the art will appreciate that any number of clusters may be used without departing from the scope of the disclosure. MRGC and DC methods may be iterative methods. For example, following the plotting of the matrix training data in multi-dimensional space, the MRGC and DC methods may randomly initialize the location of each kernel in the multi-dimensional space and determine the distance between each data point and the nearest kernel. Once this process is complete for all data points, each kernel is thus surrounded by a cloud of points. Each kernel is then moved to the center of its cloud. This iterative process continues until each kernel location stabilizes and is positioned at the center of each pre-selected cluster. In some embodiments, stabilization occurs when the coordinates of each kernel change less than a prescribed amount relative to the previous iteration. Further, both the MRGC and DC methods may each rely on another unique method, such as the use of a NN 800 or K-nearest neighbor algorithm.
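
The following Python sketch illustrates the iterative kernel-repositioning loop described above as a plain k-means-style procedure; it is an illustration of that loop only, not an implementation of MRGC or DC methods. The data, cluster count, and stabilization tolerance are assumptions.

```python
import numpy as np

# Minimal sketch of the iterative clustering loop: randomly place kernels,
# assign points to the nearest kernel, move each kernel to the center of its
# cloud, and stop when kernel locations stabilize.


def iterative_clustering(data: np.ndarray, n_clusters: int = 5,
                         tolerance: float = 1e-6, max_iterations: int = 100):
    rng = np.random.default_rng(0)
    # Randomly initialize each kernel's location in the multi-dimensional space.
    kernels = data[rng.choice(len(data), size=n_clusters, replace=False)]

    for _ in range(max_iterations):
        # Assign each data point to its nearest kernel.
        distances = np.linalg.norm(data[:, None, :] - kernels[None, :, :], axis=2)
        assignments = distances.argmin(axis=1)

        # Move each kernel to the center of its cloud of points.
        new_kernels = np.array([
            data[assignments == k].mean(axis=0) if np.any(assignments == k) else kernels[k]
            for k in range(n_clusters)
        ])

        # Stabilization: kernel coordinates change less than a prescribed amount.
        if np.linalg.norm(new_kernels - kernels) < tolerance:
            break
        kernels = new_kernels

    return kernels, assignments


data = np.random.default_rng(2).normal(size=(300, 3))  # e.g., three log measurements per depth
kernels, labels = iterative_clustering(data, n_clusters=5)
print(kernels.shape, np.bincount(labels))
```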


In some embodiments, the second AI model may be a series of CNNs. For example, in some embodiments, a first CNN may take the form of a u-net 900 as illustrated in FIG. 9. The contracting or convolution path 905 may consist of several cycles where each cycle may include multiple repeating convolution operators (3×3), an ReLU activation function, and a pooling operator (2×2) as shown by the key 910. The expanding or deconvolution path 915 may consist of several cycles where each cycle may include multiple repeating deconvolution operators (3×3) and an ReLU activation function. The final layer may consist of one convolution and one softmax activation function. In other embodiments, the second AI model may be a series of random forest algorithms, which may be based on the probability of pixels.
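
The following PyTorch sketch shows a heavily reduced u-net-style network with a single contracting and expanding level: repeated 3×3 convolutions with ReLU, 2×2 pooling, a transposed convolution, a skip connection, and a final convolution with a softmax. Channel counts, depth, and input size are illustrative assumptions and are much smaller than the architecture sketched in FIG. 9.

```python
import torch
import torch.nn as nn

# Minimal u-net-style sketch: one contracting level, one expanding level,
# a skip connection, and a final convolution with softmax.


class TinyUNet(nn.Module):
    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool = nn.MaxPool2d(2)
        self.bottom = nn.Sequential(
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.up = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)
        self.merge = nn.Sequential(
            nn.Conv2d(16, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(8, n_classes, kernel_size=1)  # final convolution

    def forward(self, x):
        skip = self.down(x)                                  # contracting path
        bottom = self.bottom(self.pool(skip))
        up = self.up(bottom)                                 # expanding path
        merged = self.merge(torch.cat([up, skip], dim=1))    # skip connection
        return torch.softmax(self.head(merged), dim=1)       # per-pixel class scores


# Hypothetical usage on a single-channel 64x64 image of a fracture.
model = TinyUNet()
scores = model(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 2, 64, 64])
```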


Following the selection of each AI model and associated hyperparameters, training, and validation, each AI model may be deployed for use.



FIG. 10 illustrates a workflow for using the AI models in accordance with one or more embodiments. Following training, a predicted matrix permeability log 1005 may be produced by inputting well data 1010 into the first AI model 1015. Further, a predicted fracture permeability 1020 may be produced by inputting an image 1025 into the second AI model 1030. The well data 1010 is collected along a depth interval of a well 205. In turn, the image 1025 is collected at a discrete depth 420 within the same depth interval of the same well 205 as the well data 1010. While FIG. 10 illustrates only one image 1025 being input into the second AI model 1030 to produce one predicted fracture permeability 1020, any number of images 1025 may be separately input into the second AI model 1030 to produce that many predicted fracture permeabilities 1020 as long as each image 1025 is collected at a discrete depth 420 within the same depth interval of the same well 205 as the well data 1010.


The well data 1010 may be similar to the training well data, such as training well data 605. More specifically, the well data 1010 may additionally include geological data, porosity log data, and/or rock core porosity data. Further, the well data 1010 may be collected from a well 205 using a well logging system 500 and/or may be determined from rock cores 215 tested in a laboratory setting as previously described.


The image 1025 may be similar to each mth training image, such as mth training image 700. More specifically, each image 1025 may be of a unique fracture 115. Each image 1025 may be collected using a well logging system 500 downhole in the well 205, a rock core 215 collected while drilling the well 205, or an outcrop neighboring the well 205.


The predicted matrix permeability log 1005 and one or more predicted fracture permeabilities 1020 may be used to generate a predicted hybrid-permeability log 1035.



FIGS. 11A-D illustrate an example of various data collected for or produced using the workflow described in FIG. 10 in accordance with one or more embodiments. FIG. 11A shows a predicted matrix permeability log 1005 along a depth interval 1100 of a well 205 produced from the first AI model 1015 in accordance with one or more embodiments. Here, the depth interval 1100 of the well 205 may be a second depth interval of the first well or a depth interval of a second well. FIG. 11B shows images 1025 collected at discrete depths 420 within the depth interval 1100 in accordance with one or more embodiments. FIG. 11C shows predicted fracture permeabilities 1020 produced from the second AI model 1030 at the discrete depths 420 corresponding to each of the images 1025 in FIG. 11B in accordance with one or more embodiments. FIG. 11D shows a predicted hybrid-permeability log 1035 generated using the predicted matrix permeability log 1005 from FIG. 11A and the predicted fracture permeabilities 1020 from FIG. 11C in accordance with one or more embodiments.


In some embodiments, the predicted fracture permeabilities 1020 may be overlaid on top of the predicted matrix permeability log 1005 at the corresponding discrete depths 420 to generate the predicted hybrid-permeability log 1035. In other embodiments, each predicted fracture permeability 1020 may replace the predicted matrix permeability within the predicted matrix permeability log 1005 at the corresponding discrete depths 420 to generate the predicted hybrid-permeability log 1035.
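
The following Python sketch illustrates the second approach described above: each predicted fracture permeability 1020 replaces the predicted matrix permeability at the log sample nearest its discrete depth 420. Depths, sampling, and permeability values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of generating a predicted hybrid-permeability log by replacing
# the predicted matrix permeability with the predicted fracture permeability at
# the discrete depths where images were collected.


def hybrid_permeability_log(depths: np.ndarray,
                            matrix_permeability: np.ndarray,
                            fracture_depths: np.ndarray,
                            fracture_permeability: np.ndarray) -> np.ndarray:
    """Return the hybrid log sampled at `depths` (permeability in mD)."""
    hybrid = matrix_permeability.copy()
    for depth, k_fracture in zip(fracture_depths, fracture_permeability):
        index = np.argmin(np.abs(depths - depth))  # nearest log sample to the image depth
        hybrid[index] = k_fracture
    return hybrid


depths = np.arange(2500.0, 2510.0, 0.5)          # measured depth, meters
matrix_k = np.full(depths.shape, 0.2)            # predicted matrix permeability, mD
fracture_depths = np.array([2502.5, 2507.0])     # discrete depths of the fracture images
fracture_k = np.array([150.0, 90.0])             # predicted fracture permeabilities, mD
print(hybrid_permeability_log(depths, matrix_k, fracture_depths, fracture_k))
```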



FIG. 12 describes a method of training the first AI model 1015 in accordance with one or more embodiments. In step 1205, the matrix training data is collected along a first depth interval 415 of a first well, such as well 205. Recall that the matrix training data includes training well data, such as training well data 605, and associated training rock core permeability data, such as associated training rock core permeability data 400. The training well data may further include training geological data, training porosity log data, and training rock core porosity data. Further, recall that the training well data may be collected using a well logging system 500 downhole within the first well and/or from rock cores 215 collected using a rock coring system 200 configured to drill the first well. Further still, recall that the associated training rock core permeability data may be collected from rock cores 215 collected using a rock coring system 200 from the first well and tested using a permeability system 305. In practice, matrix training data may be collected along one or more first depth intervals 415 for each of multiple first wells.


In step 1210, the first AI model 1015 is trained using the matrix training data. The first AI model 1015 may include MRGC methods as previously described. If the first AI model 1015 includes an NN 800, training may be performed, at least in part, using backpropagation as previously described relative to FIG. 8. The first AI model 1015 is trained to produce a predicted matrix permeability log 1005 from input well data 1010 where the well data 1010 may be similar to the matrix training data but collected along a second depth interval of the first well or along a depth interval of the second well.



FIG. 13 describes a method of determining a predicted hybrid-permeability log 1035 in accordance with one or more embodiments. In step 1305, the well data 1010 is collected along a depth interval of a well, such as well 205. The depth interval 1100 of the well 205 may be a second depth interval of the first well or a depth interval of a second well. The well data 1010 may be similar to the matrix training data in that the well data 1010 may be collected using a well logging system 500 downhole within the well 205 and/or from rock cores 215 collected using a rock coring system 200 configured to drill the well 205, though other systems may also be used. The well data 1010 may further include geological data, porosity log data, and rock core porosity data.


In step 1310, the well data 1010 is input into the first AI model 1015. Here, the first AI model 1015 may have been previously trained or partially trained using the matrix training data.


In step 1315, a predicted matrix permeability log 1005 along the depth interval 1100 is produced from the first AI model 1015. The predicted matrix permeability log 1005 may be similar to the associated training rock core permeability data 400 though produced along a different depth interval of the first well or along a depth interval 1100 of a second well and at more densely spaced discrete depths 420. FIG. 11A illustrates a predicted matrix permeability log 1005 in accordance with one or more embodiments.


In step 1320, an image 1025 is collected at a discrete depth within the depth interval 1100 of the well 205. The image 1025 is of a fracture 115, which may be a natural fracture 115 or a man-made fracture 115. In practice, any number of images 1025 may be collected within the depth interval 1100 of the well 205. The image 1025 may be collected using a well logging system 500 downhole within the well 205, from rock cores 215 collected using a rock coring system 200 configured to drill the well 205, and/or outcrops near the well 205. An image 1025 from a rock core 215 may be an x-ray image or a CT image.


In step 1325, the image 1025 is input into the second AI model 1030. The second AI model 1030 may be a series of CNNs, which may include a u-net 900. Here, the second AI model 1030 may have been previously trained or partially trained using the M fracture training pairs. Recall that each mth fracture training pair includes an mth training image, such as mth training image 700, and associated mth training fracture permeability. In some embodiments, each mth training image may be similar to the image 1025 in that each mth training image may be collected using a well logging system 500 downhole, from rock cores 215 collected using a rock coring system 200, and/or outcrops. In other embodiments, each mth training image may be a simulated image. Each mth training image from a rock core 215 may be an x-ray image or a CT image. If the second AI model 1030 includes an NN 800, training may be performed, at least in part, using backpropagation as previously described relative to FIG. 8. As such, the second AI model 1030 may be thought of as performing image segmentation and/or edge detection.


In step 1330, the predicted fracture permeability 1020 is produced from the second AI model 1030. The predicted fracture permeability 1020 may be similar to each associated mth training fracture permeability. FIG. 11C illustrates predicted fracture permeabilities 1020 in accordance with one or more embodiments.
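
As a further non-limiting illustration, a second CNN may reduce a fracture-identified image to a single scalar fracture permeability, for example as sketched below; the architecture and the choice to predict log10 permeability are assumptions for illustration.

    # Illustrative second-stage CNN: maps a one-channel fracture-identified
    # image to a single scalar (here, log10 of fracture permeability).
    import torch
    import torch.nn as nn

    class FracturePermeabilityRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.regressor = nn.Linear(32, 1)

        def forward(self, x):
            return self.regressor(self.features(x).flatten(1))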


In step 1335, the predicted hybrid-permeability log 1035 is generated using the predicted matrix permeability log 1005 from step 1315 and the predicted fracture permeability 1020 from step 1330. FIG. 11D illustrates a predicted hybrid-permeability log 1035 in accordance with one or more embodiments.
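
One simple, non-limiting way to combine the two predictions is sketched below: each predicted fracture permeability 1020 is added to the matrix permeability at the nearest sampled depth. The combination rule and the column names are assumptions for illustration only.

    # Illustrative sketch: merge the predicted matrix permeability log with
    # discrete fracture permeabilities into a hybrid-permeability log.
    import pandas as pd

    def build_hybrid_log(matrix_log: pd.DataFrame,
                         fracture_perms: pd.DataFrame) -> pd.DataFrame:
        # matrix_log: columns depth_ft, matrix_permeability_mD
        # fracture_perms: columns depth_ft, fracture_permeability_mD
        hybrid = matrix_log.copy()
        hybrid["hybrid_permeability_mD"] = hybrid["matrix_permeability_mD"]
        for _, row in fracture_perms.iterrows():
            idx = (hybrid["depth_ft"] - row["depth_ft"]).abs().idxmin()
            hybrid.loc[idx, "hybrid_permeability_mD"] += row["fracture_permeability_mD"]
        return hybrid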


The predicted hybrid-permeability log 1035 quantifies both matrix permeability and fracture permeability associated with rock 100 that surrounds the well 205 along the depth interval 1100. As such, the predicted hybrid-permeability log 1035 may be used to determine a hydrocarbon production rate for the well 205. In turn, the hydrocarbon production rate may be used, at least in part, to determine a production management plan.
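
As one simplified, non-limiting example of relating permeability to a production rate, the steady-state radial Darcy inflow equation in oilfield units may be evaluated using an effective permeability taken from the predicted hybrid-permeability log 1035. The disclosure does not prescribe a particular rate model, and the inputs below are assumptions.

    # Illustrative sketch: steady-state radial inflow rate in oilfield units,
    # q = k*h*dp / (141.2*B*mu*(ln(re/rw) + skin)), in STB/day.
    import math

    def radial_inflow_rate_stb_per_day(k_mD: float, h_ft: float,
                                       delta_p_psi: float, mu_cp: float,
                                       B_o: float, re_ft: float, rw_ft: float,
                                       skin: float = 0.0) -> float:
        return (k_mD * h_ft * delta_p_psi) / (
            141.2 * B_o * mu_cp * (math.log(re_ft / rw_ft) + skin))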


A production management plan may define and organize the activities associated with producing hydrocarbons from the hydrocarbon reservoir 515 that the well 205 penetrates. For example, a production management plan may define how rapidly to produce hydrocarbons for various time intervals for the well 205 over the lifetime of the well 205. A production management plan may also define when and where to drill new wells and how to complete the new wells that penetrate the hydrocarbon reservoir 515. Further, a production management plan may define when, where, and how to stimulate the well 205 and new wells. Further still, a production management plan may define when to abandon the well 205.


In turn, planned actions based, at least in part, on the production management plan may be taken. Planned actions may include, but are not limited to, drilling one or more offset wells, completing the well 205 or one or more offset wells (such as by hydraulic fracturing), choking the well 205 or one or more offset wells, performing secondary or tertiary recovery (such as by injecting the well 205 or one or more offset wells), etc.


A production management plan may reside within a production management system stored on a memory of a computer 1405. FIG. 14 illustrates a computer system in accordance with one or more embodiments. The computer system 1405 may provide computational functionalities associated with described AI models, algorithms, methods, functions, processes, flows, and procedures as described in this disclosure, according to one or more embodiments. The illustrated computer 1405 is intended to encompass any computing device such as a server, desktop computer, laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, one or more processors 1408 within these devices, or any other suitable processing device, including physical or virtual instances of the computing device, or both. Additionally, the computer 1405 may include an input device, such as a keypad, keyboard, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the computer 1405, including digital data, visual, or audio information (or a combination of information), or a GUI.


The computer 1405 can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. The illustrated computer 1405 is communicably coupled with a network 1430. In some implementations, one or more components of the computer 1405 may be configured to operate within environments, including cloud-computing-based, local, global, or other environment (or a combination of environments).


At a high level, the computer 1405 is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer 1405 may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).


The computer 1405 can receive requests over network 1430 from a client application (for example, executing on another computer 1405) and respond to the received requests by processing them in an appropriate software application. In addition, requests may also be sent to the computer 1405 from internal users (for example, from a command console or by other appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.


Each of the components of the computer 1405 can communicate using a system bus 1403. In some implementations, any or all of the components of the computer 1405, both hardware or software (or a combination of hardware and software), may interface with each other or the interface 1404 (or a combination of both) over the system bus 1403 using an application programming interface (API) 1412 or a service layer 1413 (or a combination of the API 1412 and the service layer 1413). The API 1412 may include specifications for routines, data structures, and object classes. The API 1412 may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer 1413 provides software services to the computer 1405 or other components (whether or not illustrated) that are communicably coupled to the computer 1405. The functionality of the computer 1405 may be accessible to all service consumers via this service layer. Software services, such as those provided by the service layer 1413, provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or other suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer 1405, alternative implementations may illustrate the API 1412 or the service layer 1413 as stand-alone components in relation to other components of the computer 1405 or other components (whether or not illustrated) that are communicably coupled to the computer 1405. Moreover, any or all parts of the API 1412 or the service layer 1413 may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.


The computer 1405 includes an interface 1404. Although illustrated as a single interface 1404 in FIG. 14, two or more interfaces 1404 may be used according to particular needs, desires, or particular implementations of the computer 1405. The interface 1404 is used by the computer 1405 for communicating with other systems in a distributed environment that are connected to the network 1430. Generally, the interface 1404 includes logic encoded in software or hardware (or a combination of software and hardware) and operable to communicate with the network 1430. More specifically, the interface 1404 may include software supporting one or more communication protocols such that the network 1430 or the hardware of the interface 1404 is operable to communicate physical signals within and outside of the illustrated computer 1405.


The computer 1405 includes at least one computer processor 1408. Although illustrated as a single computer processor 1408 in FIG. 14, two or more processors 1408 may be used according to particular needs, desires, or particular implementations of the computer 1405. Generally, the computer processor 1408 executes instructions and manipulates data to perform the operations of the computer 1405 and any algorithms, methods, functions, processes, flows, and procedures as described in the instant disclosure.


The computer 1405 also includes a memory 1406 that holds data for the computer 1405 or other components (or a combination of both) that can be connected to the network 1430. For example, the memory 1406 may store a production management system 1450 in the form of software. Although illustrated as a single memory 1406 in FIG. 14, two or more memories may be used according to particular needs, desires, or particular implementations of the computer 1405 and the described functionality. While memory 1406 is illustrated as an integral component of the computer 1405, in alternative implementations, memory 1406 can be external to the computer 1405.


The application 1407 is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer 1405, particularly with respect to functionality described in this disclosure. For example, application 1407 can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application 1407, the application 1407 may be implemented as multiple applications 1407 on the computer 1405. In addition, although illustrated as integral to the computer 1405, in alternative implementations, the application 1407 can be external to the computer 1405.


There may be any number of computers 1405 associated with, or external to, a computer system containing a computer 1405, wherein each computer 1405 communicates over network 1430. Further, the terms “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer 1405, or that one user may use multiple computers 1405.



FIG. 15 illustrates a series of systems in accordance with one or more embodiments. In brief, the rock coring system 200 may be configured to simultaneously drill a well 205 while collecting one or more rock cores 215. In turn, each rock core 215 may be tested in a laboratory setting using a permeability system 305 to determine associated training rock core permeability data. Each rock core 215 may also be tested in a laboratory setting to determine training rock core porosity data, rock core porosity data, training geological data, and/or geological data.


Following the removal of the rock coring system 200 from downhole, the well logging system 500 may be deployed downhole to collect training well data, well data, each mth training image, and/or each image.


All collected or determined data may then be input into, stored on, and processed using the computer system 1405. The computer system 1405 may be configured to train the first AI model 1015 and train the second AI model 1030, at least in part. In some embodiments, the first AI model 1015 and second AI model 1030 may have been previously trained, at least in part, such that transfer learning may be relied on. Further, the computer system 1405 may be configured to generate the predicted hybrid-permeability log 1035 following the method described in FIG. 13. The predicted hybrid-permeability log 1035 may then be used by a production management system 1450 configured to determine a production management plan.
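
Where transfer learning is relied on, one illustrative approach is to load previously trained weights and fine-tune only the later layers, as sketched below. The checkpoint path and layer names refer to the simplified U-Net sketch above and are assumptions for illustration.

    # Illustrative sketch of transfer learning for the second AI model:
    # load prior weights, freeze the earliest encoder block, and fine-tune
    # the remaining parameters on new fracture training pairs.
    import torch

    def prepare_fine_tuning(model: torch.nn.Module, checkpoint_path: str,
                            lr: float = 1e-4) -> torch.optim.Optimizer:
        model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
        for p in model.enc1.parameters():  # freeze earliest layers
            p.requires_grad = False
        trainable = [p for p in model.parameters() if p.requires_grad]
        return torch.optim.Adam(trainable, lr=lr)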


Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.

Claims
  • 1. A method of training a first artificial intelligence (AI) model comprising: collecting matrix training data along a first depth interval of a first well, wherein the matrix training data comprises training well data and associated training rock core permeability data; and training the first AI model using the matrix training data, wherein the first AI model is trained to produce a predicted matrix permeability log from input well data, and wherein the input well data is collected along a second depth interval of the first well or along a depth interval of a second well.
  • 2. The method of claim 1, wherein the training well data comprises training geological data.
  • 3. The method of claim 1, wherein the training well data comprises training porosity log data and training rock core porosity data.
  • 4. The method of claim 3, wherein the training porosity log data is within a threshold of the training rock core porosity data.
  • 5. The method of claim 1, wherein the first AI model comprises multi-resolution graph-based clustering (MRGC).
  • 6. A method of determining a predicted hybrid-permeability log comprising: collecting well data along a depth interval of a well; inputting the well data into a first artificial intelligence (AI) model; producing a predicted matrix permeability log along the depth interval from the first AI model; collecting an image at a discrete depth within the depth interval of the well, wherein the image is of a fracture; inputting the image into a second AI model; producing a predicted fracture permeability from the second AI model; and generating the predicted hybrid-permeability log using the predicted matrix permeability log and the predicted fracture permeability.
  • 7. The method of claim 6, further comprising: determining a hydrocarbon production rate based, at least in part, on the predicted hybrid-permeability log; and determining a production management plan based, at least in part, on the hydrocarbon production rate.
  • 8. The method of claim 7, further comprising: taking one or more actions based, at least in part, on the production management plan.
  • 9. The method of claim 6, wherein training the second AI model comprises: generating M fracture training pairs by performing steps comprising, collecting an mth training image, wherein the mth training image is of an mth fracture, determining an mth training fracture-identified image using the mth training image, and determining, using a model, an associated mth training fracture permeability for the mth fracture using the mth training fracture-identified image, wherein M is an integer greater than or equal to one, wherein m is an integer between 1 and M, inclusive, and wherein each of the M fracture training pairs comprises the mth training image and the associated mth training fracture permeability; and training the second AI model using the M fracture training pairs, wherein the second AI model is trained to produce the predicted fracture permeability from the image.
  • 10. The method of claim 9, wherein the model comprises Navier-Stokes equations.
  • 11. The method of claim 6, wherein the well data comprises geological data.
  • 12. The method of claim 6, wherein the well data comprises porosity log data and rock core porosity data.
  • 13. The method of claim 6, wherein the second AI model comprises a plurality of convolutional neural networks (CNNs).
  • 14. The method of claim 13, wherein the plurality of CNNs comprises a u-net.
  • 15. The method of claim 6, wherein producing the predicted fracture permeability comprises: producing a predicted fracture-identified image from a first CNN; inputting the predicted fracture-identified image into a second CNN; and producing the predicted fracture permeability from the second CNN, wherein the second AI model comprises the first CNN and the second CNN.
  • 16. A system comprising: a computer system configured to: receive well data along a depth interval of a well, input the well data into a first artificial intelligence (AI) model, produce a predicted matrix permeability log along the depth interval from the first AI model, receive an image for a discrete depth within the depth interval of the well, wherein the image is of a first fracture, input the image into a second AI model, produce a predicted fracture permeability from the second AI model, generate a predicted hybrid-permeability log using the predicted matrix permeability log and the predicted fracture permeability, and determine a hydrocarbon production rate based, at least in part, on the predicted hybrid-permeability log; and a production management system configured to: determine a production management plan based, at least in part, on the hydrocarbon production rate.
  • 17. The system of claim 16, further comprising a first well logging system configured to collect the well data.
  • 18. The system of claim 16, further comprising a second well logging system configured to collect the image.
  • 19. The system of claim 16, further comprising a rock coring system configured to collect rock cores.
  • 20. The system of claim 19, further comprising a permeability system configured to determine associated training rock core permeability data from the rock cores.