Autonomous Interpretation of Rock Drill Cuttings

Information

  • Patent Application
  • Publication Number
    20230374903
  • Date Filed
    May 23, 2022
  • Date Published
    November 23, 2023
Abstract
A computer-implemented method that autonomously performs rock drill cuttings interpretation is described herein. The method includes obtaining rock drill cuttings representations. The method also includes preprocessing the rock drill cuttings representations. The method also includes performing unsupervised image segmentation in order to obtain masked representations of such images discriminating rock types. The method also includes performing supervised learning through a custom Convolutional Neural Network using the segmented pictures as inputs and a continuous or discrete mineralogical or sedimentological variable of interest as the output. Additionally, the method includes autonomously predicting such mineralogical or sedimentological quantity from new rock drill cuttings pictures using the parameters of the unsupervised segmentation and the trained supervised model created for this purpose.
Description
TECHNICAL FIELD

This disclosure relates generally to drill cuttings interpretation.


BACKGROUND

Drilling processes typically generate rock drill cuttings via a drill bit positioned at the end of a drill string. The drill bit contacts a cutting face of a geological formation to create a wellbore. Contact with the cutting face creates rock drill cuttings. To move the rock drill cuttings to the surface, a drilling fluid is circulated down the wellbore, through the drill string and drill bit, and then up the backside annulus out of the wellbore. Interpretation of the rock drill cuttings provides information on the corresponding geological formation.


SUMMARY

An embodiment described herein provides a method for autonomous interpretation of rock drill cuttings. The method includes obtaining, with one or more hardware processors, rock drill cuttings representations. The method also includes preprocessing, with the one or more hardware processors, the rock drill cuttings representations by applying at least one transformation to the rock drill cuttings representations. Further, the method includes segmenting, with the one or more hardware processors, the preprocessed rock drill cuttings representations into segmented pictures by inputting the preprocessed rock drill cuttings representations into a first machine learning model that outputs the segmented pictures, wherein the segmented pictures are masked images that include at least one rock type. Additionally, the method includes predicting, with the one or more hardware processors, depth indexed mineralogical or sedimentological data using the segmented pictures input to a trained second machine learning model.


An embodiment described herein provides an apparatus comprising a non-transitory, computer readable, storage medium that stores instructions that, when executed by at least one processor, cause the at least one processor to perform operations. The operations include preprocessing rock drill cuttings representations by applying at least one transformation to the rock drill cuttings representations. The operations also include segmenting the preprocessed rock drill cuttings representations into segmented pictures by inputting the preprocessed rock drill cuttings representations into a first machine learning model that outputs segmented pictures, wherein the segmented pictures are masked images that include at least one rock type. Additionally, the operations include predicting depth indexed mineralogical or sedimentological data using the segmented pictures input to a trained second machine learning model.


An embodiment described herein provides a system. The system comprises one or more memory modules and one or more hardware processors communicably coupled to the one or more memory modules. The one or more hardware processors are configured to execute instructions stored on the one or more memory modules to perform operations. The operations include preprocessing rock drill cuttings representations by applying at least one transformation to the rock drill cuttings representations. The operations also include segmenting the preprocessed rock drill cuttings representations into segmented pictures by inputting the preprocessed rock drill cuttings representations into a first machine learning model that outputs segmented pictures, wherein the segmented pictures are masked images that include at least one rock type. Additionally, the operations include predicting depth indexed mineralogical or sedimentological data using the segmented pictures input to a trained second machine learning model.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 shows an acquisition subsystem.



FIG. 2 shows a software subsystem.



FIG. 3 shows model training of a machine learning model.



FIG. 4 shows a model deployment of trained machine learning models.



FIG. 5 shows a comparison of autonomous rock drill cuttings interpretation and human interpretation.



FIG. 6 is a process flow diagram of a process that enables autonomous rock drill cuttings interpretation.



FIG. 7 is a schematic illustration of an example controller (or control system) for autonomous rock drill cuttings interpretation according to the present disclosure.





DETAILED DESCRIPTION

Rock drill cuttings, also referred to as drill cuttings or cuttings, are produced as rock is broken by a drill bit advancing through the rock or soil to create a wellbore. In examples, the cuttings are pieces of rock that are chipped away at a cutting face of a geological formation by the drill bit while the wellbore is drilled. Drilling fluid is pumped through a drill string coupled with the drill bit. The drilling fluid exits through jets in the drill bit and carries the cuttings up the wellbore to the surface. Equipment, such as a shaker, is used to separate the cuttings from the fluid. A size of cuttings produced during drilling is based on, at least in part, the geologic material being drilled and the drill bit used. In examples, the cuttings are of any size. In some examples, the drilling fluid is collected at the surface in a containment system or pit.


The cuttings are interpreted to determine characteristics of the rock being drilled. Interpretation of the rock includes, for example, a determination of the facies, mineralogy of the sample, or other custom categories of interest. In examples, cuttings provide information on the lithology of the rock being drilled, the mineral composition, and hydrocarbon quality measurements. Traditional interpretation of rock drill cuttings is labor-intensive and requires continuous attention and frequent repetition of a task by a highly qualified professional with extensive training in drill cuttings interpretation. Human interpreters are known to give different interpretations of the same rock, resulting in inconsistent assessments not only between wells, but also within the same well, because interpretation of hydrocarbon development and exploratory wells is traditionally carried out by multiple geologists working in shifts.


Embodiments described herein enable autonomous interpretation of rock drill cuttings. The present techniques autonomously interpret rock drill cuttings images and video. Rock drill cuttings are produced by the action of drilling a wellbore, and are brought up to surface continuously by the recirculation of drilling mud fluids along the annulus. Rock drill cuttings provide invaluable information about the nature and properties of the targeted rocks. The present techniques improve the traditional practice of human interpretation of rock drill cuttings, which is prone to human bias, labor-intensive, and expensive. The present techniques enable a consolidated, centralized, fast, and inexpensive interpretation of rock drill cuttings. Further, the present techniques reduce costs through automating cuttings interpretation while simultaneously increasing the usability and accuracy of the interpretation through consolidation. In examples, the present techniques are applied to stored rock drill cuttings. For example, rock cuttings collected during drilling are stored. The stored rock cuttings are digitized (e.g., images or video of the stored cuttings is captured) and interpreted via autonomous interpretation of rock drill cuttings.


In some embodiments, the present techniques enable an autonomous rock drill cuttings interpretation system. The autonomous rock drill cuttings interpretation system includes an acquisition subsystem and a software subsystem. The acquisition subsystem generates cuttings representations, such as images or video. In examples, an image is a microscope image. A video is a sequence of frames. The software subsystem receives as input the image and/or video, and outputs a mineralogical and sedimentological interpretation of the rock drill cuttings portrayed in the video and/or images. In some embodiments, machine learning models are trained to preprocess drill cuttings representations, segment drill cuttings representations, and output mineralogical or sedimentological data based on the drill cuttings.


In some embodiments, once at least one machine learning model is trained using training data from wellbores in an area, zone, and/or geological formation of interest, the present techniques autonomously interpret rock drill cuttings representations (e.g., images or video) in real time. Mineralogical or sedimentological data is predicted for other wellbores in the predetermined area, zone, or geological formation in real time. Autonomous interpretation of rock drill cuttings removes the need to have trained geologists working on-site and on shifts to perform rock drill cuttings interpretation. Further, the present techniques reduce costs and deliver consistent and comparable interpretations across multiple wellbores and within each individual wellbore across multiple geological units. In examples, a mineralogical interpretation of the drill cuttings images produced at a wellbore is autonomously deployed in real time, utilizing for this purpose a machine learning model previously trained on data obtained from a nearby wellbore where the desired mineralogical information was measured. In some embodiments, a machine-learning model trained on drill cuttings images sourced from multiple wellbores is utilized to autonomously deliver a consistent mineralogical description across other multiple wellbores.


In some embodiments, a machine learning model is trained to predict quantities a human interpreter may not be trained to describe from rock drill cuttings images, such as algorithmically-built discrete or continuous descriptors associated with a particular well, formation, or play. In some embodiments, the predictions made by the trained machine learning models are auditable, and the models themselves can be perfected (e.g., updated, retrained) after-the-fact and re-applied to the same rock drill cuttings images.



FIG. 1 shows an acquisition subsystem. FIG. 2 shows a software subsystem. In some embodiments, an autonomous rock drill cuttings interpretation system includes an acquisition subsystem (e.g., FIG. 1), which captures image and/or video recordings of rock drill cuttings, and a software subsystem (e.g., FIG. 2), which analyzes the image and/or video recorded and delivers a mineralogical and sedimentological interpretation of the rock drill cuttings portrayed in the video and/or images.


Referring to FIG. 1, an acquisition subsystem 100 is shown. The acquisition subsystem can be of two types. A first acquisition subsystem 100 is a continuous acquisition system 110 and a second acquisition subsystem 100 is a manual microscope system 120. In the continuous acquisition system 110, rock drill cuttings representations 112 are obtained while drilling, in real time. For example, a capture mechanism 114 is used to capture rock drill cuttings representations 112 while the cuttings travel on a belt 116. In examples, the capture mechanism is a high definition digital video recorder or digital camera. The belt 116 transports cuttings from shakers. In examples, the shakers are shale shakers. Shakers are the first phase of a solids control system on a drilling rig and are used to remove large solids (cuttings) from the drilling fluid (“mud”). In examples, the cuttings are cleaned in an automated process using shakers.


In some embodiments, the rock drill cuttings representations 112 are a video (e.g., a sequence of frames) that is obtained from live continuous while-drilling video recordings. In some embodiments, the rock drill cuttings representations 112 are images (e.g., digital pictures) of the rock drill cuttings produced by the action of drilling the well obtained at periodic intervals. In embodiments, each video frame or image is indexed to the depth at which the observed cuttings are produced.


The continuous acquisition system 110 is fully autonomous. The drill cuttings representations are captured automatically by the continuous acquisition system 110, without human intervention. In examples, video frames are captured continuously as rock drill cuttings generated by the action of drilling the well are taken from the shale shakers through a hose, automatically cleaned, and mounted on a secondary conveyor belt 116 for image capture or video recording.


The second acquisition subsystem 100 is a manual microscope system 120. In the manual microscope system 120, rock drill cuttings representations 122 are microscope images captured using a microscope 124. In examples, the microscope images are manually selected, and images of prepared rock drill cuttings samples are captured and indexed to the specific depth from which each sample is produced, as shown at reference number 126. In some embodiments, a person selects samples from the shale shakers at consistent drilled depth intervals, prepares the samples, and takes microscope pictures of the samples. In examples, the microscope images are simple magnified pictures of cleaned, raw drill cuttings. The microscope images are distinguished from thin sections. To obtain a thin section, a single rock cutting is finely cut and polished so as to expose its virgin state. In some cases, the single rock cutting is preserved in graphite, glass, or other materials. Thus, the preparation of thin section images is costly when compared to microscope images. Additionally, thin section images are not acquired in pseudo real-time.


Using the manual microscope system 120, samples are acquired, prepared, and the microscope images of the samples are captured in a standardized manner. In examples, the standardized manner refers to a same magnification applied to each sample when capturing the microscope pictures. In examples, a machine learning model is trained using images acquired with the same, standard, magnification. When deploying the trained machine learning model, the input images are captured at the same magnification applied to images used to train the machine learning model. For example, the samples are magnified to twenty times (e.g., 20X) larger than the original sample. In examples, the standardization also refers to capturing images of the samples with a same light setup. In the absence of a same light setup, histogram normalization is used to normalize a color of the images as described below.


The acquisition subsystem 100 accumulates enough training data to build and validate a machine learning model for a predetermined area, such as a zone, field, and/or formations of interest. The training data is used during machine learning model training and validation (e.g., 230 of FIG. 2). Upon deployment of the trained machine learning models, the acquisition subsystem 100 captures drill cuttings representations. The drill cuttings representations are interpreted in real time using the deployed, trained machine learning model. In this manner, the drill cuttings representations are autonomously interpreted.



FIG. 2 shows a software subsystem 200. The software subsystem 200 includes three subsystems: a preprocessing subsystem 210, a model training and validation subsystem 230, and model deployment workflow 250. The preprocessing subsystem 210 enhances and normalizes rock drill cuttings representations. In examples, the rock drill cuttings representations are prepared for either the model training and validation subsystem 230 or the deployment subsystem 250. In some embodiments, the rock drill cuttings representation is video that is pre-processed using workflow 212. In some embodiments, the rock drill cuttings representation is one or more images that are pre-processed using workflow 220.


In the video preprocessing workflow 212, frame-to-wellbore depth indexing 214 is performed. In frame-to-wellbore depth indexing 214, timestamps corresponding to each respective frame of video are mapped to a drilling bit depth. The mapping accounts for the time lag that it takes cuttings to travel from the depth of a drilling location (e.g., where the cuttings are generated, a bottom of the wellbore) to the cleaned cuttings belt (e.g., at the surface). This produces a unique indexing of each frame in a video to the corresponding depth of origination of the cuttings. This mapping facilitates the association of each video frame with any other independently acquired data. In examples, the other independent acquired data is also depth-indexed. At block 216, picture scaling is applied to frames of video. In examples, the video frames are scaled to account for differences in the camera lens magnification and distance to the target sample being recorded. The picture scaling of video frames is similar to the standardized magnification applied to microscope images as described above. At block 218, anti-aliasing is applied to multiple video frames. In examples, the anti-aliasing processing uses information from multiple contiguous frames to create a single sharp image.
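The frame-to-depth mapping described above can be sketched in a few lines. The example below is a simplified illustration, not the patented implementation: it assumes a constant cuttings lag time (in practice the lag varies with annular volume and pump rate) and a bit-depth log supplied as (time, depth) samples; the function name and interface are hypothetical.

```python
from bisect import bisect_right

def index_frames_to_depth(frame_times, bit_depth_log, lag_seconds):
    """Map each video frame timestamp to the depth of cuttings origin.

    frame_times: frame capture times (seconds since start of drilling).
    bit_depth_log: list of (time_seconds, bit_depth_ft) samples, sorted by time.
    lag_seconds: travel time for cuttings from the bit to the belt
                 (assumed constant here for illustration).
    """
    times = [t for t, _ in bit_depth_log]
    depths = [d for _, d in bit_depth_log]
    indexed = []
    for ft in frame_times:
        origin_time = ft - lag_seconds  # cuttings left the bit this long ago
        i = bisect_right(times, origin_time)
        if i == 0:
            indexed.append(depths[0])
        elif i == len(times):
            indexed.append(depths[-1])
        else:
            # Linear interpolation of bit depth at the origin time.
            t0, t1 = times[i - 1], times[i]
            d0, d1 = depths[i - 1], depths[i]
            frac = (origin_time - t0) / (t1 - t0)
            indexed.append(d0 + frac * (d1 - d0))
    return indexed
```

With a 50-second lag, for example, a frame captured at t = 100 s is indexed to the bit depth at t = 50 s.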


In the image preprocessing workflow 220, image-to-wellbore depth indexing 222 is performed. In image-to-wellbore depth indexing 222, timestamps corresponding to each respective image are mapped to the drilling bit depth. For example, images are natively indexed with respect to a time at which the sample represented in the image reaches the surface. The time at which the sample represented in the image reaches the surface is used to determine a time-to-depth indexing that is used to associate depth with predicted mineralogical or sedimentological data.


In both the video preprocessing workflow 212 and the image preprocessing workflow 220, image color histogram fit and matching is performed at block 224. In the example of FIG. 2, the image color histogram fit and matching evaluates whether more than one light configuration is used for a single well. In examples, a light configuration refers to the characteristics of the light sources utilized when microscope images and/or video frames of rock drill cuttings are captured. If more than one light configuration is used, then the best represented histogram (the one with the most images/frames taken) is used as a normalization basis for the other histograms within that same well. At block 226, image normalization is performed. In some embodiments, this normalization is performed on training datasets, at the model-building stage. In examples, for a training dataset that includes more than one well and more than one light configuration, normalization is performed across whole wells. The basis histogram is the best represented light configuration among all wells used in the training dataset. In examples, the best represented light configuration corresponds to the single well with the most samples. During model training and validation at the subsystem 230, the preprocessing subsystem 210 stores information about the average color histogram of the set of pictures or video frames taken in a normalized images database at block 232. This information is utilized to normalize the color distribution within the set. During model deployment workflow 250, newly acquired pictures are normalized at block 254 according to this stored information in the normalized images database (e.g., block 232) from the training dataset. In this manner, rock drill cuttings representations input to the deployed, trained model share characteristics with rock drill cuttings representations utilized to train the model.
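Histogram matching of this kind is commonly implemented by remapping pixel intensities through cumulative distribution functions. The sketch below is a minimal grayscale illustration of that general technique (the patent does not specify its exact algorithm); the function name is hypothetical, and for color images it would be applied per channel.

```python
import numpy as np

def match_histogram(image, reference):
    """Remap pixel intensities of `image` so its histogram matches `reference`.

    Both arguments are 2-D uint8 arrays. Classic CDF-based histogram
    matching: used here to normalize images captured under different
    light configurations to a common basis histogram.
    """
    src_values, src_counts = np.unique(image.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / image.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, find the reference intensity with the
    # closest cumulative probability.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    lookup = dict(zip(src_values, mapped))
    out = np.vectorize(lookup.get)(image)
    return out.astype(np.uint8)
```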


The model training and validation subsystem 230 is utilized to create models that encode how to map rock drill cuttings to specific prescribed data of interest. As shown in FIG. 2, the preprocessing system 210 generates normalized images from the rock drill cuttings used as a training dataset. The normalized images are stored in a database as shown at block 232. The model training and validation subsystem 230 includes a custom unsupervised convolutional neural network (CNN) at block 234 that is trained to segment and group individual rocks visible in each picture or frame. The output of the unsupervised CNN at block 234 is a masked version of the original images or video frames, where each pixel is given a label corresponding to a predicted classification. For example, when training a machine learning model to quantify the amount of sandstone, limestone and shale, the masked pictures include pixels labeled as sandstone rock, limestone rock, shale rock or none. A label of none signals a space between labeled categories (e.g., classes) or an unidentified category.
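The patent's segmentation stage uses a custom unsupervised CNN whose architecture is not detailed here. As a deliberately simplified stand-in that produces the same kind of output, a per-pixel masked label image, the sketch below clusters pixel colors with plain k-means; all names are hypothetical and the clustering is only a rough proxy for learned rock-type grouping.

```python
import numpy as np

def segment_pixels(image, n_classes=3, n_iter=20):
    """Assign each pixel a cluster label, producing a masked image.

    Stand-in for the patent's unsupervised CNN: plain k-means on RGB
    values groups pixels into `n_classes` clusters. Returns an integer
    label array with the same height/width as `image`.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    # Farthest-point initialization: deterministic and spreads centers out.
    centers = [pixels[0]]
    for _ in range(1, n_classes):
        d = np.min([np.linalg.norm(pixels - c0, axis=1) for c0 in centers],
                   axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster center, then update centers.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(h, w)
```

The resulting label array plays the role of the masked picture: each integer label stands in for a class such as sandstone, limestone, shale, or none.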


At block 236, the segmented pictures obtained from the unsupervised CNN at block 234 are utilized to train a machine learning model. In some embodiments, the machine learning model trained at block 236 is a supervised CNN. The supervised CNN learns to interpret the mineralogy and/or sedimentology of a large set of rock drill cuttings using segmented pictures from videos and/or images previously acquired. In examples, the input to the supervised CNN at block 236 is masked images based on the original rock drill cuttings representations. The masked images include class labels of a rock type for each pixel. In examples, when training a machine learning model to quantify the amount of sandstone, limestone and shale, the supervised CNN is trained (supervised) against the relative abundances of sandstone, limestone and shale measured (or derived from measurements) independently.


Accordingly, at block 240, mineralogical training data, such as wireline X-ray fluorescence (XRF) and/or X-ray powder diffraction (XRD) are provided as input for training of a supervised CNN. Mineralogy data includes, for example, the presence and relative abundance of certain minerals, such as sandstone, limestone, dolostone, shale, and the like. At block 242, sedimentological data is provided as input for training of the supervised CNN. Sedimentological data includes, for example, facies descriptions that characterize the presence and relative abundance of geological units of interest such as distal, proximal, shallow, or deep.


In some embodiments, the training data includes custom categories 244. Custom categories 244 include the presence and relative abundance of pre-defined custom units of interest. For example, custom categories include reservoir or non-reservoir classifications. Reservoir engineers often create classifications that are bespoke to the specific asset under study, with arbitrary thresholds or labels that are not necessarily industry-wide standards, nor purely geology-based, and therefore cannot be described by any single geologist. The present techniques train machine learning models to predict those custom bespoke categories. Following the example above, what constitutes a “reservoir” in a specific asset may be arbitrary and based on a joint evaluation of engineering, geological, and economic factors, and therefore is not a property any single geologist will be able to accurately interpret from looking at rock samples. The trained machine learning model recognizes the subtle differences between these custom “reservoir” and “non-reservoir” labels because it is trained to do so.
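The supervised stage pairs each masked image with independently measured relative abundances (e.g., from XRF/XRD). As a deliberately simplified stand-in for the supervised CNN, the sketch below regresses per-class pixel fractions of each masked image against the measured abundances using ordinary least squares; all names and the interface are hypothetical.

```python
import numpy as np

def class_fractions(mask, n_classes):
    """Fraction of pixels assigned to each rock-type class in a masked image."""
    counts = np.bincount(mask.ravel(), minlength=n_classes)
    return counts / mask.size

def fit_abundance_model(masks, measured_abundances, n_classes):
    """Least-squares map from class fractions to measured abundances.

    Stand-in for the patent's supervised CNN: each training sample is a
    masked image paired with independently measured relative abundances.
    Returns a weight matrix W such that fractions @ W approximates the
    measured abundances.
    """
    X = np.array([class_fractions(m, n_classes) for m in masks])
    Y = np.array(measured_abundances)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict_abundances(mask, W, n_classes):
    """Predict relative abundances for a new masked image."""
    return class_fractions(mask, n_classes) @ W
```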


At block 238, model validation and performance scoring is applied to the unsupervised CNN trained at block 234 and the supervised CNN trained at block 236. The hyper-parameters of the unsupervised and supervised components of the workflow are automatically selected by iteratively fitting a large number of models with different hyper-parameters and selecting the best using an appropriate loss function weighted from a set of cross-validations. The models are later tested against blind data sets using leave-k-wells-out cross-validation.


In some embodiments, the performance scoring is a measure of the accuracy of the trained model predictions. In some embodiments, the performance scoring is measured by sequentially setting aside a subgroup of images and labels that are not included in the training dataset, but are used to test the performance of the model. This is done multiple times, setting aside randomly selected subgroups each time to avoid bias, a technique referred to as cross-validation. The actual scoring measurement is obtained by testing the trained model against the subgroup of images set aside and comparing its prediction to the corresponding true label. The comparison is performed by calculating the mean squared error between the predicted and true relative abundance labels. A single mean squared error is obtained for each validation; the final score is the weighted average score of all cross-validations.
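The scoring procedure described above, with whole wells held out so the model is always tested on unseen wells and the fold scores combined as a weighted average of mean squared errors, can be sketched as follows. The model interface (caller-supplied `fit`/`predict` callables) is a hypothetical simplification.

```python
import numpy as np
from itertools import combinations

def leave_k_wells_out_score(samples, labels, wells, k, fit, predict):
    """Weighted-average MSE over leave-k-wells-out cross-validation folds.

    samples/labels: parallel arrays of model inputs and true values.
    wells: well identifier for each sample; whole wells are held out
    together so the model is always scored on unseen wells.
    fit/predict: caller-supplied model functions (hypothetical interface).
    """
    wells = np.asarray(wells)
    unique_wells = sorted(set(wells.tolist()))
    fold_mse, fold_sizes = [], []
    for held_out in combinations(unique_wells, k):
        test_idx = np.isin(wells, held_out)
        model = fit(samples[~test_idx], labels[~test_idx])
        preds = predict(model, samples[test_idx])
        fold_mse.append(np.mean((preds - labels[test_idx]) ** 2))
        fold_sizes.append(test_idx.sum())
    # Final score: MSE averaged with weights from the held-out fold sizes.
    return np.average(fold_mse, weights=fold_sizes)
```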



FIG. 3 shows model training 300 of the supervised CNN. In the example of FIG. 3, training images 302 are indexed according to depth 304. In some embodiments, the training and validation workflow 330 illustrated in FIG. 3 is implemented by the model training and validation subsystem 230 of FIG. 2. The training workflow 330 is executed using the training images 302 as input, and either continuous logging data 304, discrete logging data 306A, or discrete described data 306B as the output property to be predicted. A large set of depth-indexed training images 302 is used to train a machine learning model to predict two logging curves 304, which carry information on the relative abundance of two minerals, Quantity Mineral 1 and Quantity Mineral 2. The same set of images 302 can also be used to train a model to predict discrete described data 306B, such as geological facies. In some embodiments, the training images 302 are segmented pictures used to train a machine learning model to predict measured or modeled data in the same wellbore in a supervised fashion. As illustrated in FIG. 3, the discrete logging data 306A and discrete described data 306B to be predicted are measured data or derived data. In examples, measured data is captured by a range of different sensors, tools, and techniques. For example, measured data is classified as either wireline, Logging While Drilling (LWD), or core data. Wireline data is obtained once the wellbore has been drilled, by lowering sensors through the wellbore using a wireline. Logging While Drilling data is obtained by attaching sensors to a drilling bottom hole assembly, and is thus obtained while the wellbore is being drilled. Core data is obtained by a series of laboratory measurements performed on a core sample, which is a whole rock sample cut from a specific depth interval. In some embodiments, derived data is data that is not directly measured, but that is determined using the measured data.
For example, water saturation (e.g., the amount of water within the rock pore system) is often not directly measured, but derived using an equation which uses measured data as input.


For increased performance, the machine learning models (e.g., unsupervised CNN at block 234, supervised CNN at block 236 of FIG. 2) are usually trained using training data from specific plays and reservoirs. In some embodiments, a custom CNN autoencoder (e.g., unsupervised CNN at block 234) segments the rock drill cuttings representations in an unsupervised fashion, a process that identifies individual rocks within the picture and major rock types within it. The segmented pictures are used to train a secondary CNN (e.g., supervised CNN at block 236) to predict measured or modeled data in a same wellbore in a supervised fashion. The predicted or measured data can be direct or derived data from wireline, Logging While Drilling, or core data. In examples, the rock drill cuttings representations correspond to a depth interval of 5 to 10 feet of a drilled formation. In some embodiments, data sampled at a finer resolution is averaged using a moving window algorithm. By averaging using the moving window algorithm, data sampled at a finer resolution is scaled to correspond to the same depth interval as the rock drill cuttings representations.
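The moving-window averaging step can be sketched as a simple depth-binning average. The fixed, non-overlapping depth windows below are an illustrative assumption; the patent's exact windowing algorithm is not specified.

```python
import numpy as np

def average_to_interval(depths, values, interval_ft=10.0):
    """Average finer-resolution depth-indexed data onto coarser intervals.

    Bins samples into `interval_ft`-wide depth windows and averages the
    values in each window, so data sampled at a finer resolution matches
    the 5-10 ft interval of the cuttings representations.
    """
    depths = np.asarray(depths, dtype=float)
    values = np.asarray(values, dtype=float)
    bins = np.floor(depths / interval_ft).astype(int)
    out_depths, out_values = [], []
    for b in np.unique(bins):
        in_bin = bins == b
        out_depths.append((b + 0.5) * interval_ft)  # window midpoint
        out_values.append(values[in_bin].mean())
    return np.array(out_depths), np.array(out_values)
```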


Referring again to FIG. 2, the model training and validation subsystem 230 outputs a first trained machine learning model that segments images of rock drill cuttings and a second trained machine learning model that predicts mineralogical data and sedimentological data, such as logging curves, discrete data, or derived data. In some embodiments, the trained machine learning models are stored in a model file, and the model file is deployed for execution to make real time predictions.


The deployment workflow 250 executes the trained machine learning models to predict mineralogical data and sedimentological data. The deployment workflow 250 obtains newly acquired rock drill cuttings representations at block 252. The newly acquired rock drill cuttings representations 252 include videos and/or images obtained either statically or live. In some embodiments, the videos and/or images are obtained from an acquisition subsystem 100 (FIG. 1). The rock drill cuttings representations 252 are pre-processed using stored transformations. For example, newly acquired pictures are normalized according to the stored information in the normalized images database at block 232 of the model training and validation workflow. The stored information includes the average color histogram of the set of pictures or video frames used to train the machine learning models.


At block 256, the trained machine learning models are used to predict a depth-indexed mineralogical and/or sedimentological interpretation of rock drilling cuttings captured in the rock drill cuttings representations (videos and/or images). In some embodiments, the deployment subsystem 250 obtains a model file produced by the training and validation subsystem 230. For example, during image preprocessing of training data using the preprocessing subsystem 210, a set of transformations is performed on the raw training images or video before the actual model training. In some embodiments, image preprocessing includes scaling, histogram fit, and image normalization. When the trained models are applied to newly acquired images, the same transformations developed prior to training are applied to the newly acquired raw images. For example, the parameters of those transformations applied to the training dataset (scaling factor, color histogram, normalization factors) are stored for application to the newly acquired rock drill cuttings representations prior to prediction. At block 258, depth indexed mineralogical or sedimentological data is predicted using the trained machine learning models and newly acquired rock drill cuttings representations. In some embodiments, custom categories of data are predicted.
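Storing the training-time transformation parameters and replaying them at deployment can be sketched as below. The parameter set (a scale factor plus per-channel color mean and standard deviation) and the JSON file format are hypothetical simplifications of the stored scaling factor, color histogram, and normalization factors.

```python
import json
import numpy as np

def save_transform_params(path, scale_factor, mean_rgb, std_rgb):
    """Persist preprocessing parameters fitted on the training dataset."""
    with open(path, "w") as f:
        json.dump({"scale_factor": scale_factor,
                   "mean_rgb": list(mean_rgb),
                   "std_rgb": list(std_rgb)}, f)

def apply_transform_params(path, image):
    """Replay the stored training-time normalization on a new image.

    The same scaling and color normalization fitted on the training set
    is applied to newly acquired images before prediction, so deployment
    inputs share characteristics with the training inputs.
    """
    with open(path) as f:
        p = json.load(f)
    img = image.astype(float) * p["scale_factor"]
    return (img - np.array(p["mean_rgb"])) / np.array(p["std_rgb"])
```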



FIG. 4 shows a model deployment 400 of the trained machine learning models. In the example of FIG. 4, input images 402 are indexed according to depth 404. The deployment workflow 450 illustrated in FIG. 4 is the same as or similar to the deployment workflow 250 of FIG. 2. During deployment, new rock drill cuttings representations (e.g., video frames or images), referred to as input images 402, are acquired in a first wellbore for which relevant trained machine learning models exist. In some embodiments, relevant trained machine learning models are models trained using data captured from one or more other wellbores drilled near the first wellbore (e.g., located in the same area, zone, or field). In some embodiments, relevant trained machine learning models are models trained using data captured from distant wells (e.g., not located in the same area, zone, or field), but where the target geological formations are believed to be similar. The trained machine learning models can also be deployed in already drilled wells where rock drill cuttings images are available, or where rock drill cuttings are available and new pictures are taken.


In any of these cases, new rock drill cuttings representations are preprocessed following the same workflow described with respect to the image preprocessing subsystem 210 of FIG. 2. The deployment workflow 450 is applied to the preprocessed input images 402. The trained machine learning models predict continuous data 406, discrete data 408, or any combinations thereof. In examples, the trained machine learning models predict target variables, such as the custom categories (e.g., block 244 of FIG. 2) used to train the machine learning models during the supervised part of the workflow (e.g., block 236 of FIG. 2). The predicted variables can be either continuous data, such as mineralogical abundance, or discrete variables, such as geological facies. In some embodiments, the deployment workflow 450 is executed in real time using rock drill cuttings representations that are video frames obtained in real time. In some embodiments, the deployment workflow 450 is executed in pseudo real time using rock drill cuttings representations that are microscope images taken while the new well is being drilled. In examples, pseudo real time refers to near real-time processing with a minimal delay. Additionally, in some embodiments the rock drill cuttings representations are stored in a database and the deployment workflow 450 is applied to the rock drill cuttings after the well has been drilled.
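The deployment workflow described above (preprocess, segment, predict, indexed by depth) can be expressed as a simple loop. This is a schematic sketch only; the `preprocess`, `segment`, and `predict` callables stand in for the stored transformations, the unsupervised segmentation model, and the trained supervised model, none of which are specified in code by the disclosure.

```python
def deploy_workflow(frames_by_depth, preprocess, segment, predict):
    """Apply the deployment workflow to depth-indexed input frames.

    frames_by_depth: mapping of depth -> raw frame (image or video frame)
    preprocess:      callable applying the stored training-time transformations
    segment:         callable producing a masked (segmented) representation
    predict:         callable returning a continuous or discrete target value

    Returns a mapping of depth -> predicted value, in depth order.
    """
    results = {}
    for depth, frame in sorted(frames_by_depth.items()):
        masked = segment(preprocess(frame))
        results[depth] = predict(masked)
    return results
```

The same loop serves real-time, pseudo-real-time, and after-the-fact database deployments; only the source of `frames_by_depth` changes.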



FIG. 5 shows a comparison of autonomous rock drill cuttings interpretation 502 and human interpretation 504. The depth interval 506 shown spans 2560 feet, across which rock drill cuttings images were acquired every 10 feet. The trained machine learning model was therefore deployed with 256 images input to the trained machine learning model to output the interpretation 502. The training set was also sampled at 10 feet and was composed of 245 images. In the example of FIG. 5, the machine learning models are trained using measured wireline data, and the predictions of wireline data made by autonomous rock drill cuttings interpretation are output by a machine learning model trained using data from a second Lateral (B) of a Well (X). The autonomous drill cuttings interpretation 502 is predicted using machine learning produced mineralogy. The human interpretation 504 illustrates a human-created interpretation of the same rock drill cuttings samples input to the trained machine learning models. The trained machine learning models interpret rock drill cuttings representations from a first Lateral (A) of the same Well (X). In examples, the first Lateral (A) experiences well control difficulties, which prevent data from being acquired beyond the collection and digitalization of rock drill cuttings. The present techniques are utilized to build a model based on data from the second Lateral (B) of the same Well (X). In some embodiments, training data from the second Lateral (B) of the same Well (X) includes both rock drill cuttings images and wireline data. The trained machine learning models are deployed and executed for data available from the first Lateral (A). As shown in FIG. 5, the interpretation 502 made by the deployed trained machine learning models is more accurate than the human interpretation 504 of the cuttings.
In examples, while the autonomous rock drill cuttings interpretation 502 and the human interpretation 504 are similar, the autonomous rock drill cuttings interpretation 502 is superior based on prior knowledge of the area. For example, the autonomous rock drill cuttings interpretation 502 more accurately describes the upper formation as not being composed of pure limestone, a known fact in the area. Moreover, in the example of FIG. 5 the autonomous rock drill cuttings interpretation 502 resolves the layered nature of the limestone/sandstone sequences.


Accordingly, the present techniques train high quality machine learning models in areas where both rock drill cuttings images and relevant data (wireline, LWD, core, etc.) are present. Interpretations from the trained machine learning models save costs and consolidate multiple, differing interpretations. In this manner, all rock drill cuttings assets can be digitized and used to retroactively train and deploy models. The present techniques do not require manual labelling of the training set. Further, the training datasets used are complex and do not require preparation, such as the preparation applied to thin section datasets. The preparation time and costs associated with producing thin sections from raw drill cuttings make the use of thin sections in autonomous rock drill cuttings interpretation impractical for real time or near real time applications.



FIG. 6 is a process flow diagram of a process 600 that enables autonomous rock drill cuttings interpretation. The present techniques quantify mineralogical abundances in rock drill cuttings representations (video or images of rock drill cuttings) without the need of manual labeling, but with the use of measured data, such as wireline, logging while drilling, or core data.


At block 602, rock drill cuttings representations are obtained. In some embodiments, the rock drill cuttings representations are obtained using an acquisition subsystem. The rock drill cuttings representations are images or video recordings of rock drill cuttings. In some embodiments, the rock drill cuttings representations are obtained using a camera or other computer vision techniques. The present techniques predict mineralogical or sedimentological data using raw drill cuttings images or video.


At block 604, the rock drill cuttings representations are preprocessed. In some embodiments, the rock drill cuttings representations are preprocessed using an image preprocessing subsystem 210. In examples, to generate training data, rock drill cuttings representations are pre-processed using a respective video workflow (e.g., workflow 212 of FIG. 2) or an image workflow (e.g., workflow 222 of FIG. 2). In examples, for input to deployed, trained machine learning models, newly acquired rock drill cuttings representations (e.g., block 252 of FIG. 2) are preprocessed using stored transformations. At block 606, the rock drill cuttings representations are segmented into pictures that include a representative rock type. In some embodiments, the segmented pictures are masked images corresponding to the rock drill cuttings, where each pixel is given a label corresponding to a predicted classification. In some embodiments, the rock drill cuttings representations are segmented using a CNN trained using unsupervised learning.
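The output of block 606 is a per-pixel labelling, or mask. The disclosure specifies a CNN trained with unsupervised learning for this step; as a hedged illustration of what such a mask looks like, the toy k-means routine below clusters pixel intensities into groups and returns one label per pixel. It is not the patented segmentation method, only a stand-in that produces the same kind of output.

```python
def kmeans_pixel_labels(pixels, k=2, iters=10):
    """Toy unsupervised segmentation: cluster scalar pixel intensities into
    k groups and return one integer label per pixel (a flat mask).

    pixels: flat list of pixel intensity values
    k:      number of clusters (rock-type groups)
    iters:  number of Lloyd-style refinement iterations
    """
    # Initialize centers at the extremes for k=2, else at the first k pixels.
    centers = [min(pixels), max(pixels)] if k == 2 else list(pixels[:k])
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = [min(range(k), key=lambda c: abs(p - centers[c]))
                  for p in pixels]
        # Recompute each center as the mean of its assigned pixels.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels
```

A real implementation would operate on multi-channel images and learn spatial features; the essential point is that every pixel receives a label corresponding to a predicted classification, forming the masked images consumed by the next block.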


At block 608, mineralogical data, sedimentological data, or any combinations thereof are predicted using the segmented pictures. The mineralogical or sedimentological data is depth indexed. In some embodiments, the mineralogical or sedimentological data is predicted using a CNN trained using supervised learning. For example, the labeled output of a first CNN trained using unsupervised learning to segment rock drill cuttings representations is used as input to a second CNN trained to predict mineralogical or sedimentological data using supervised learning. In examples, the relative abundances of minerals are predicted for each picture. In some embodiments, custom categories of data are predicted.
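As a simplified illustration of relative abundances read from a mask, the helper below computes the fraction of pixels carrying each rock-type label. In the described workflow the masks feed a supervised CNN that learns the mapping to measured data (wireline, LWD, core); the direct fraction count here is an assumption-laden shortcut used only to make the idea of per-picture relative abundance concrete.

```python
from collections import Counter

def label_fractions(mask):
    """Return the fraction of pixels assigned to each rock-type label in a
    segmented (masked) picture, as a dict of label -> fraction in [0, 1]."""
    counts = Counter(mask)
    total = len(mask)
    return {label: n / total for label, n in counts.items()}
```

For a continuous target such as mineralogical abundance the supervised model would regress against such per-picture summaries plus learned image features; for a discrete target such as geological facies it would classify instead.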


By not using manual labeling, costs and time associated with interpretations are reduced. For example, when a new built-for-purpose model is created for a specific region, formation, or predictor, a set of highly qualified individuals needs to be assembled to work on a time-consuming and repetitive task. The response time and cost of such a methodology offset any gain to be achieved by modifying current industry practice. Additionally, the present techniques remove any bias from traditional, manual human interpretation. Moreover, the present techniques enable rock drill cuttings to be used to interpret the mineralogy of the samples, as well as continuous or discrete quantities related to the shape, color, texture, and size of the cuttings produced. The present techniques predict modeled variables that are key to specific assets, such as facies.



FIG. 7 is a schematic illustration of an example controller 700 (or control system) for autonomous rock drill cuttings interpretation according to the present disclosure. For example, the controller 700 may be operable according to the acquisition subsystem 100 of FIG. 1 or the software subsystem of FIG. 2. The controller 700 is intended to include various forms of digital computers, such as printed circuit boards (PCB), processors, digital circuitry, or otherwise parts of a system for autonomous rock drill cuttings interpretation. Additionally, the system can include portable storage media, such as Universal Serial Bus (USB) flash drives. For example, the USB flash drives may store operating systems and other applications. The USB flash drives can include input/output components, such as a wireless transmitter or USB connector that may be inserted into a USB port of another computing device.


The controller 700 includes a processor 710, a memory 720, a storage device 730, and an input/output interface 740 communicatively coupled with input/output devices 760 (for example, displays, keyboards, measurement devices, sensors, valves, pumps). Each of the components 710, 720, 730, and 740 are interconnected using a system bus 750. The processor 710 is capable of processing instructions for execution within the controller 700. The processor may be designed using any of a number of architectures. For example, the processor 710 may be a CISC (Complex Instruction Set Computer) processor, a RISC (Reduced Instruction Set Computer) processor, or a MISC (Minimal Instruction Set Computer) processor.


In one implementation, the processor 710 is a single-threaded processor. In another implementation, the processor 710 is a multi-threaded processor. The processor 710 is capable of processing instructions stored in the memory 720 or on the storage device 730 to display graphical information for a user interface via the input/output interface 740 at an input/output device 760.


The memory 720 stores information within the controller 700. In one implementation, the memory 720 is a computer-readable medium. In one implementation, the memory 720 is a volatile memory unit. In another implementation, the memory 720 is a nonvolatile memory unit.


The storage device 730 is capable of providing mass storage for the controller 700. In one implementation, the storage device 730 is a computer-readable medium. In various different implementations, the storage device 730 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.


The input/output interface 740 provides input/output operations for the controller 700. In one implementation, the input/output devices 760 includes a keyboard and/or pointing device. In another implementation, the input/output devices 760 includes a display unit for displaying graphical user interfaces.


There can be any number of controllers 700 associated with, or external to, a computer system containing controller 700, with each controller 700 communicating over a network. Further, the terms “client,” “user,” and other appropriate terminology can be used interchangeably, as appropriate, without departing from the scope of the present disclosure. Moreover, the present disclosure contemplates that many users can use one controller 700 and one user can use multiple controllers 700.


Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Software implementations of the described subject matter can be implemented as one or more computer programs. Each computer program can include one or more modules of computer program instructions encoded on a tangible, non-transitory, computer-readable computer-storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively, or additionally, the program instructions can be encoded in/on an artificially generated propagated signal. For example, the signal can be a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer-storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of computer-storage mediums.


The terms “data processing apparatus,” “computer,” and “electronic computer device” (or equivalent as understood by one of ordinary skill in the art) refer to data processing hardware. For example, a data processing apparatus can encompass all kinds of apparatus, devices, and machines for processing data, including by way of example, a programmable processor, a computer, or multiple processors or computers. The apparatus can also include special purpose logic circuitry including, for example, a central processing unit (CPU), a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC). In some implementations, the data processing apparatus or special purpose logic circuitry (or a combination of the data processing apparatus or special purpose logic circuitry) can be hardware- or software-based (or a combination of both hardware- and software-based). The apparatus can optionally include code that creates an execution environment for computer programs, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of execution environments. The present disclosure contemplates the use of data processing apparatuses with or without conventional operating systems, for example, LINUX, UNIX, WINDOWS, MAC OS, ANDROID, or IOS.


A computer program, which can also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language. Programming languages can include, for example, compiled languages, interpreted languages, declarative languages, or procedural languages. Programs can be deployed in any form, including as stand-alone programs, modules, components, subroutines, or units for use in a computing environment. A computer program can, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, for example, one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files storing one or more modules, sub programs, or portions of code. A computer program can be deployed for execution on one computer or on multiple computers that are located, for example, at one site or distributed across multiple sites that are interconnected by a communication network. While portions of the programs illustrated in the various figures may be shown as individual modules that implement the various features and functionality through various objects, methods, or processes, the programs can instead include a number of sub-modules, third-party services, components, and libraries. Conversely, the features and functionality of various components can be combined into single components as appropriate. Thresholds used to make computational determinations can be statically, dynamically, or both statically and dynamically determined.


The methods, processes, or logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The methods, processes, or logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, for example, a CPU, an FPGA, or an ASIC.


Computers suitable for the execution of a computer program can be based on one or more of general and special purpose microprocessors and other kinds of CPUs. The elements of a computer are a CPU for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a CPU can receive instructions and data from (and write data to) a memory. A computer can also include, or be operatively coupled to, one or more mass storage devices for storing data. In some implementations, a computer can receive data from, and transfer data to, the mass storage devices including, for example, magnetic, magneto optical disks, or optical disks. Moreover, a computer can be embedded in another device, for example, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive.


Computer readable media (transitory or non-transitory, as appropriate) suitable for storing computer program instructions and data can include all forms of permanent/non-permanent and volatile/non-volatile memory, media, and memory devices. Computer readable media can include, for example, semiconductor memory devices such as random access memory (RAM), read only memory (ROM), phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices. Computer readable media can also include, for example, magnetic devices such as tape, cartridges, cassettes, and internal/removable disks. Computer readable media can also include magneto optical disks and optical memory devices and technologies including, for example, digital video disc (DVD), CD ROM, DVD+/−R, DVD-RAM, DVD-ROM, HD-DVD, and BLURAY. The memory can store various objects or data, including caches, classes, frameworks, applications, modules, backup data, jobs, web pages, web page templates, data structures, database tables, repositories, and dynamic information. Types of objects and data stored in memory can include parameters, variables, algorithms, instructions, rules, constraints, and references. Additionally, the memory can include logs, policies, security or access data, and reporting files. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


Implementations of the subject matter described in the present disclosure can be implemented on a computer having a display device for providing interaction with a user, including displaying information to (and receiving input from) the user. Types of display devices can include, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), a light-emitting diode (LED), and a plasma monitor. Display devices can include a keyboard and pointing devices including, for example, a mouse, a trackball, or a trackpad. User input can also be provided to the computer through the use of a touchscreen, such as a tablet computer surface with pressure sensitivity or a multi-touch screen using capacitive or electric sensing. Other kinds of devices can be used to provide for interaction with a user, including to receive user feedback including, for example, sensory feedback including visual feedback, auditory feedback, or tactile feedback. Input from the user can be received in the form of acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to, and receiving documents from, a device that is used by the user. For example, the computer can send web pages to a web browser on a user's client device in response to requests received from the web browser.


The term “graphical user interface,” or “GUI,” can be used in the singular or the plural to describe one or more graphical user interfaces and each of the displays of a particular graphical user interface. Therefore, a GUI can represent any graphical user interface, including, but not limited to, a web browser, a touch screen, or a command line interface (CLI) that processes information and efficiently presents the information results to the user. In general, a GUI can include a plurality of user interface (UI) elements, some or all associated with a web browser, such as interactive fields, pull-down lists, and buttons. These and other UI elements can be related to or represent the functions of the web browser.


Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back end component, for example, as a data server, or that includes a middleware component, for example, an application server. Moreover, the computing system can include a front-end component, for example, a client computer having one or both of a graphical user interface or a Web browser through which a user can interact with the computer. The components of the system can be interconnected by any form or medium of wireline or wireless digital data communication (or a combination of data communication) in a communication network. Examples of communication networks include a local area network (LAN), a radio access network (RAN), a metropolitan area network (MAN), a wide area network (WAN), Worldwide Interoperability for Microwave Access (WIMAX), a wireless local area network (WLAN) (for example, using 802.11 a/b/g/n or 802.20 or a combination of protocols), all or a portion of the Internet, or any other communication system or systems at one or more locations (or a combination of communication networks). The network can communicate with, for example, Internet Protocol (IP) packets, frame relay frames, asynchronous transfer mode (ATM) cells, voice, video, data, or a combination of communication types between network addresses.


The computing system can include clients and servers. A client and server can generally be remote from each other and can typically interact through a communication network. The relationship of client and server can arise by virtue of computer programs running on the respective computers and having a client-server relationship. Cluster file systems can be any file system type accessible from multiple servers for read and update. Locking or consistency tracking may not be necessary since locking of the exchange file system can be done at the application layer. Furthermore, Unicode data files can be different from non-Unicode data files.


While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular implementations. Certain features that are described in this specification in the context of separate implementations can also be implemented, in combination, in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations, separately, or in any suitable sub-combination. Moreover, although previously described features may be described as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.


Particular implementations of the subject matter have been described. Other implementations, alterations, and permutations of the described implementations are within the scope of the following claims as will be apparent to those skilled in the art. While operations are depicted in the drawings or claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed (some operations may be considered optional), to achieve desirable results. In certain circumstances, multitasking or parallel processing (or a combination of multitasking and parallel processing) may be advantageous and performed as deemed appropriate.


Moreover, the separation or integration of various system modules and components in the previously described implementations should not be understood as requiring such separation or integration in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.


Accordingly, the previously described example implementations do not define or constrain the present disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of the present disclosure.


Furthermore, any claimed implementation is considered to be applicable to at least a computer-implemented method; a non-transitory, computer-readable medium storing computer-readable instructions to perform the computer-implemented method; and a computer system comprising a computer memory interoperably coupled with a hardware processor configured to perform the computer-implemented method or the instructions stored on the non-transitory, computer-readable medium.


Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, some processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results.

Claims
  • 1. A computer-implemented method, comprising: obtaining, with one or more hardware processors, rock drill cuttings representations;preprocessing, with the one or more hardware processors, the rock drill cuttings representations by applying at least one transformation to the rock drill cuttings representations;segmenting, with the one or more hardware processors, the preprocessed rock drill cuttings representations into segmented pictures by inputting the preprocessed rock drill cuttings representations into a first machine learning model that outputs the segmented pictures, wherein the segmented pictures are masked images that include at least one rock type; andpredicting, with the one or more hardware processors, depth indexed mineralogical or sedimentological data using the segmented pictures input to a trained second machine learning model.
  • 2. The computer implemented method of claim 1, wherein the second machine learning model is trained to predict custom categories.
  • 3. The computer implemented method of claim 1, wherein the first machine learning model segments and groups individual rocks visible in each picture or frame using unsupervised learning.
  • 4. The computer implemented method of claim 1, wherein the trained second machine learning model is a supervised convolutional neural network.
  • 5. The computer implemented method of claim 1, wherein the rock drill cuttings representations are a high-definition video of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 6. The computer implemented method of claim 1, wherein the rock drill cuttings representations are images of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 7. The computer implemented method of claim 1, wherein the rock drill cuttings representations are microscope images of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 8. An apparatus comprising a non-transitory, computer readable, storage medium that stores instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: preprocessing rock drill cuttings representations by applying at least one transformation to the rock drill cuttings representations;segmenting the preprocessed rock drill cuttings representations into segmented pictures by inputting the preprocessed rock drill cuttings representations into a first machine learning model that outputs segmented pictures, wherein the segmented pictures are masked images that include at least one rock type; andpredicting depth indexed mineralogical or sedimentological data using the segmented pictures input to a trained second machine learning model.
  • 9. The apparatus of claim 8, wherein the second machine learning model is trained to predict custom categories.
  • 10. The apparatus of claim 8, wherein the first machine learning model segments and groups individual rocks visible in each picture or frame using unsupervised learning.
  • 11. The apparatus of claim 8, wherein the trained second machine learning model is a supervised convolutional neural network.
  • 12. The apparatus of claim 8, wherein the rock drill cuttings representations are a high-definition video of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 13. The apparatus of claim 8, wherein the rock drill cuttings representations are images of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 14. The apparatus of claim 8, wherein the rock drill cuttings representations are microscope images of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 15. A system, comprising: one or more memory modules;one or more hardware processors communicably coupled to the one or more memory modules, the one or more hardware processors configured to execute instructions stored on the one or more memory modules to perform operations comprising:preprocessing the rock drill cuttings representations by applying at least one transformation to the rock drill cuttings representations;segmenting the preprocessed rock drill cuttings representations into segmented pictures by inputting the preprocessed rock drill cuttings representations into a first machine learning model that outputs segmented pictures, wherein the segmented pictures are masked images that include at least one rock type; andpredicting depth indexed mineralogical or sedimentological data using the segmented pictures input to a trained second machine learning model.
  • 16. The system of claim 15, wherein the second machine learning model is trained to predict custom categories.
  • 17. The system of claim 15, wherein the first machine learning model is trained to segment and group individual rocks visible in each picture or frame using unsupervised learning.
  • 18. The system of claim 15, wherein the trained second machine learning model is a supervised convolutional neural network.
  • 19. The system of claim 15, wherein the rock drill cuttings representations are a high-definition video of cleaned rock drill cuttings transported on a cuttings belt from shakers.
  • 20. The system of claim 15, wherein the rock drill cuttings representations are images of cleaned rock drill cuttings transported on a cuttings belt from shakers.