ELECTRIC SUBMERSIBLE PUMP OPERATING PARAMETERS

Information

  • Patent Application
    20250154857
  • Date Filed
    November 13, 2023
  • Date Published
    May 15, 2025
Abstract
This disclosure describes methods and systems for determining operating parameters for an electric submersible pump (ESP) and controlling the ESP based on the determined operating parameters. A method involves determining a target production rate for a wellbore; providing the target production rate as input to a neural network that provides as output ESP operating parameters to achieve the target production rate, the ESP operating parameters including a choke size percentage and a motor speed, and the neural network modeling an ESP-equipped wellbore; and controlling an ESP to operate according to the ESP operating parameters.
Description
TECHNICAL FIELD

This description relates to methods and systems for determining operating parameters for an electric submersible pump (ESP) and controlling the ESP based on the determined operating parameters.


BACKGROUND

An electric submersible pump (ESP) is a type of pump used in the oil and gas industry for lifting hydrocarbons, e.g., crude oil, from wells. It features a submersible design, an electric motor that powers the pump, and multiple pump stages that increase fluid pressure, among other components. An ESP can be deployed in a well where there is a need to overcome natural reservoir pressure or to manage mixed oil, water, and gas production.


SUMMARY

This disclosure describes systems and methods for controlling electric submersible pumps (ESPs) to achieve desired production targets for ESP-equipped wells. To control an ESP installed in a well, the systems and methods generate a model of an ESP-equipped well based on historical well data. The model can be a machine learning (ML) model, e.g., an artificial neural network (ANN) intelligent model, trained based on the historical well data. The ML model can be used to determine ESP operating parameters, for example, to reach the desired production target for the well.


One aspect of the subject matter described in this specification may be embodied in a method that involves determining a target production rate for a wellbore; providing the target production rate as input to a neural network that provides as output ESP operating parameters to achieve the target production rate, the ESP operating parameters including a choke size percentage and a motor speed, and the neural network modeling an ESP-equipped wellbore; and controlling an ESP to operate according to the ESP operating parameters.
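The claimed flow can be sketched end to end in a few lines. The stub below stands in for the trained neural network; its function names, linear mappings, and barrels-per-day units are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the claimed method: target production rate in, ESP operating
# parameters (choke size %, motor speed) out. The linear mappings in the
# stub are placeholders for a trained neural network.

def esp_model(target_rate_bpd: float) -> tuple[float, float]:
    """Stub for the trained network described in the disclosure."""
    choke_pct = min(100.0, target_rate_bpd / 50.0)   # illustrative mapping
    motor_speed_hz = 40.0 + target_rate_bpd / 400.0  # illustrative mapping
    return choke_pct, motor_speed_hz

def control_esp(target_rate_bpd: float) -> dict:
    """Determine and apply ESP setpoints for the target production rate."""
    choke_pct, motor_speed_hz = esp_model(target_rate_bpd)
    # In a real deployment these setpoints would be sent to the ESP
    # controller; here they are simply returned.
    return {"choke_pct": choke_pct, "motor_speed_hz": motor_speed_hz}

setpoints = control_esp(2000.0)
print(setpoints)
```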


The previously described implementation is implementable using a computer-implemented method; a non-transitory, computer-readable medium storing instructions to perform the computer-implemented method; and a computer system including a computer memory interoperably coupled with a hardware processor configured to perform the computer-implemented method or the instructions stored on the non-transitory, computer-readable medium. These and other embodiments may each optionally include one or more of the following features.


In some implementations, the input to the neural network further includes real-time data comprising: an oil production rate, a water cut, an intake pressure, an ESP motor load, and an upstream-downstream (US/DS) differential pressure (DP).


In some implementations, the neural network is trained based on historical production data.


In some implementations, generating the neural network involves: obtaining historical production data including data points associated with respective wells, each data point including at least one of an oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP); splitting the historical production data into training data and testing data; and iteratively training the neural network using the training data to generate the neural network.


In some implementations, iteratively training the neural network involves: creating the neural network; defining initial hyperparameters for the neural network; training the neural network based on the training data; determining, based on at least one performance indicator, whether the training of the neural network is complete; if the training of the neural network is complete, deploying the neural network; and if the training of the neural network is incomplete, generating new hyperparameter values for the neural network and returning to training the neural network using the training data.


In some implementations, determining, based on at least one performance indicator, whether the training of the neural network is complete involves determining whether the at least one performance indicator satisfies a respective threshold.


In some implementations, the at least one performance indicator includes at least one of a correlation coefficient (CC), a root mean squared error (RMSE), or an average absolute percentage error (AAPE).


In some implementations, the initial hyperparameters include a number of neuron layers, a number of neurons per layer, and a seed number for the neural network.


The details of one or more embodiments of these systems and methods are set forth in the accompanying drawings and description below. Other features, objects, and advantages of these systems and methods will be apparent from the description, drawings, and claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an example ESP control system, according to some implementations.



FIG. 2 is an example workflow for controlling an ESP, according to some implementations.



FIG. 3 is an example neural network, according to some implementations.



FIG. 4 is a flowchart of an example method, according to some implementations.



FIG. 5 is a block diagram of an example computer system, according to some implementations.



FIG. 6 illustrates hydrocarbon production operations that include both one or more field operations and one or more computational operations, according to some implementations.





DETAILED DESCRIPTION

This disclosure describes systems and methods for controlling electric submersible pumps (ESPs) to achieve desired production targets for ESP-equipped wells. To control an ESP installed in a well, the systems and methods generate a model of an ESP-equipped well based on historical well data. The model can be a machine learning (ML) model, e.g., an artificial neural network (ANN) intelligent model, trained based on the historical well data. The ML model can be used to determine ESP operating parameters, for example, to reach the desired production target for the well.


The disclosed systems and methods provide resource efficient (e.g., time efficient and processing efficient) means for determining the required ESP parameters to achieve a desired production rate (e.g., production optimization). Because they are resource efficient, the systems and methods enhance the ESP run life and reduce power consumption.



FIG. 1 illustrates an ESP control system 100, according to some implementations. In one embodiment, the ESP control system 100 is implemented using a computer system, such as the computer system 500 of FIG. 5. As described in FIG. 5, the computer system 500 can communicate with one or more other computer systems over one or more networks. Note that the ESP control system 100 is shown for illustration purposes only, as the system 100 can include additional components or have one or more components removed without departing from the scope of the disclosure. Further, note that the various components of the ESP control system 100 can be arranged or connected in any manner.


In some implementations, the ESP control system 100 is configured to control an ESP to achieve a specific production target for a hydrocarbon well in which the ESP is installed. As shown in FIG. 1, the ESP control system 100 obtains input data 102, which can include historical well production data and/or simulated production data. The production data can include datasets (also called datapoints) associated with respective ESP-equipped wells (e.g., actual wells or simulated wells). Each dataset can include ESP parameters, oil and water rates, and pressures for the ESP-equipped wells. The ESP control system 100 can store the input data 102 as production data 104 in a memory of a computer system (e.g., the computer system on which the ESP control system 100 runs and/or a cloud-based computer system).


As shown in FIG. 1, the ESP control system 100 also includes an ESP-equipped well model 106. In some examples, the ESP control system 100 can generate the ESP-equipped well model 106 as a machine learning model, e.g., a neural network, based on the production data 104. Once the ESP-equipped well model 106 is generated, the model can be used to simulate performance of an ESP-equipped well. In particular, the ESP-equipped well model 106 can be used to determine ESP operating parameters, for example, to reach a specific production target for a well. Within examples, the ESP operating parameters include an ESP choke size percentage and/or an ESP motor speed (expressed as an operating frequency). The choke size is the opening size of the choke valve at the surface that separates the upstream and downstream flow. The choke valve is used to control and limit the flow of produced fluids from the reservoir to the pump. The degree to which the valve is open and allows fluid to flow through it is represented as a percentage (e.g., 100% indicates that the valve is fully open and 0% means that the valve is completely closed). By adjusting the choke size percentage, the ESP control system 100 can control the flow rate of produced fluids, pressure, and other operational parameters. The right choke size percentage can help prevent problems like gas interference, sand production, and excessive wear and tear on the equipment.
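Because the choke size percentage is defined on a closed 0-100% range, any requested opening must be bounded before it is applied. A small hypothetical helper (not part of the disclosure) illustrates the clamping:

```python
# Clamp a requested choke opening to the valid 0-100 % range, where
# 0 % is fully closed and 100 % is fully open.

def clamp_choke_pct(requested: float) -> float:
    """Bound a requested choke valve opening to [0, 100] percent."""
    return max(0.0, min(100.0, requested))

print(clamp_choke_pct(112.5))  # over-range request -> fully open (100.0)
print(clamp_choke_pct(-3.0))   # under-range request -> fully closed (0.0)
```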


In some implementations, the ESP control system 100 can perform actions 108 based on the determined ESP operating parameters. In some examples, the actions 108 can include adjusting the ESP operating parameters to match the determined ESP operating parameters. In these examples, the actions 108 include controlling the ESP choke size percentage and/or the ESP motor speed.



FIG. 2 illustrates an example workflow 200 for controlling an ESP to achieve a desired production target for an ESP-equipped well. The workflow 200 can be performed by the ESP control system 100 of FIG. 1. It will be understood that workflow 200 can be performed, for example, by any suitable system, environment, software, hardware, or a combination of systems, environments, software, and hardware, as appropriate. In some examples, various steps of workflow 200 can be run in parallel, in combination, in loops, or in any order.


At 202, the ESP control system 100 obtains input production data. The production data can include datasets associated with respective ESP-equipped wells (e.g., actual wells or simulated wells). Each dataset can include ESP parameters, oil and water production rates, and pressures for the ESP-equipped wells. The ESP parameters include ESP motor load and ESP operating frequency (ESP motor speed). The pressures include ESP intake pressure (e.g., downhole pressure measured at the ESP) and upstream-downstream (US-DS) differential pressure (DP) across the choke.
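One way to represent each per-well dataset described above is a simple record type. The field names and units below are illustrative assumptions; the disclosure names the quantities but not a schema.

```python
# A per-well data point carrying the quantities listed in step 202:
# production rates, pressures, and ESP parameters.
from dataclasses import dataclass

@dataclass
class WellDataPoint:
    oil_rate_bpd: float         # oil production rate
    water_cut_pct: float        # water cut
    intake_pressure_psi: float  # ESP intake pressure (downhole)
    motor_load_pct: float       # ESP motor load
    us_ds_dp_psi: float         # US-DS differential pressure across the choke
    choke_pct: float            # choke valve opening (0-100 %)
    motor_speed_hz: float       # ESP operating frequency (motor speed)

point = WellDataPoint(1850.0, 32.0, 1450.0, 78.0, 210.0, 55.0, 48.0)
print(point.motor_speed_hz)
```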


The US-DS DP refers to the pressure difference between the upstream and downstream forces acting on the ESP system, particularly as it relates to the choke valve.


At 204, the ESP control system 100 defines the inputs and outputs from the input production data. As an example, the ESP control system 100 defines the following parameters as inputs: total oil rate, water cut, ESP intake pressure, the differential pressure across the choke, ESP motor load, and the production target rate. In this example, the ESP control system 100 defines the outputs as the choke size and ESP motor speed required to reach the provided production target rate.
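Step 204 can be sketched as splitting each record into the six model inputs (including the production target) and the two outputs. The dictionary keys and values are hypothetical; the disclosure specifies the quantities, not their representation.

```python
# Step 204 sketch: assemble the input and output vectors for one record,
# in the order the parameters are listed above.

record = {
    "oil_rate": 1850.0, "water_cut": 32.0, "intake_pressure": 1450.0,
    "us_ds_dp": 210.0, "motor_load": 78.0, "target_rate": 2000.0,
    "choke_pct": 55.0, "motor_speed_hz": 48.0,
}

INPUT_KEYS = ("oil_rate", "water_cut", "intake_pressure",
              "us_ds_dp", "motor_load", "target_rate")
OUTPUT_KEYS = ("choke_pct", "motor_speed_hz")

x = [record[k] for k in INPUT_KEYS]   # neural-network input vector
y = [record[k] for k in OUTPUT_KEYS]  # neural-network output vector
print(x, y)
```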


At 206, the ESP control system 100 performs data randomization and splits the input production data into training and testing data. In one example, the ESP control system 100 splits the data into training and testing by a ratio of 70:30. Other ratios are possible and are contemplated herein.
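Step 206 can be sketched with the standard library: shuffle the datasets, then split them 70:30 into training and testing portions (the example ratio above). The integer stand-ins and the seed value are illustrative.

```python
# Step 206 sketch: randomize the data, then take the first 70 % for
# training and the remaining 30 % for testing.
import random

random.seed(0)  # illustrative seed for reproducibility
datasets = list(range(100))  # stand-ins for per-well data points
random.shuffle(datasets)

split = int(0.7 * len(datasets))
train, test = datasets[:split], datasets[split:]
print(len(train), len(test))
```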


At 208, the ESP control system 100 creates a neural network. At 210, the ESP control system 100 defines the hyperparameters of the neural network. The hyperparameters include a number of neuron layers, a number of neurons per layer, and a seed number. At 212, the ESP control system 100 trains the neural network by fitting it to the training data and evaluating it against the testing data. This step applies iterative backpropagation to identify the best-fitting model, with weights and biases governed by the equations of the neural network, across the training data set and the testing data set to finalize the machine learning model. At 214, the ESP control system 100 calculates one or more performance indicators of the current iteration of the model. The performance indicators include a correlation coefficient (CC), root mean squared error (RMSE), and average absolute percentage error (AAPE).
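Steps 208-212 can be illustrated with a small one-hidden-layer network trained by gradient descent. This pure-Python sketch is a stand-in for the disclosed ANN; the layer sizes, learning rate, synthetic data, and target functions are all assumptions made for the example.

```python
# Minimal one-hidden-layer network with tanh activation, trained by
# backpropagation on synthetic data, illustrating steps 208-212.
import math
import random

random.seed(9)  # the "seed number" hyperparameter (illustrative value)

N_IN, N_HIDDEN, N_OUT = 3, 8, 2  # e.g. 3 inputs -> choke %, motor speed
LR = 0.05                        # learning rate (illustrative)

# Small random initial weights; biases start at zero.
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
b1 = [0.0] * N_HIDDEN
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]
b2 = [0.0] * N_OUT

def forward(x):
    """Hidden activations and network outputs for one input vector."""
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(N_IN)) + b1[j])
         for j in range(N_HIDDEN)]
    y = [sum(w2[k][j] * h[j] for j in range(N_HIDDEN)) + b2[k]
         for k in range(N_OUT)]
    return h, y

# Tiny synthetic training set: inputs in [0, 1], targets a known mapping.
xs = [[random.random() for _ in range(N_IN)] for _ in range(32)]
data = [(x, [0.1 + 0.8 * x[0], 0.5 * x[1] + 0.3 * x[2]]) for x in xs]

def epoch():
    """One pass of backpropagation over the data; returns mean squared error."""
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        err = [y[k] - t[k] for k in range(N_OUT)]
        total += sum(e * e for e in err)
        # Gradient step through the output layer...
        for k in range(N_OUT):
            for j in range(N_HIDDEN):
                w2[k][j] -= LR * err[k] * h[j] / len(data)
            b2[k] -= LR * err[k] / len(data)
        # ...and through the tanh hidden layer.
        for j in range(N_HIDDEN):
            g = sum(err[k] * w2[k][j] for k in range(N_OUT)) * (1 - h[j] ** 2)
            for i in range(N_IN):
                w1[j][i] -= LR * g * x[i] / len(data)
            b1[j] -= LR * g / len(data)
    return total / len(data)

losses = [epoch() for _ in range(200)]
print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The iterative tuning of weights and biases here is what the disclosure describes at a larger scale, over thousands or tens of thousands of iterations.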


At 216, the ESP control system 100 uses the one or more performance indicators to determine whether training of the neural network is complete. Determining whether the training is complete is based on whether the one or more performance indicators satisfy respective thresholds. In examples where there is more than one performance indicator, the ESP control system 100 can determine that training is complete if a majority of the indicators, or alternatively all of the indicators, satisfy respective thresholds. If the one or more performance indicators do not satisfy their respective thresholds, then the ESP control system 100 returns to the training step 212. The ESP control system 100 iteratively performs the training step until the performance indicators indicate that the training is complete. In some examples, the machine learning model is trained using a significant number of iterations, e.g., on the order of thousands or tens of thousands of iterations, until a desired accuracy is achieved (e.g., an accuracy greater than a predetermined threshold). In each iteration, the machine learning algorithm tunes the weights and biases of the model to achieve better accuracy.
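The three performance indicators named at step 214 and the threshold check at step 216 can be sketched in plain Python. The threshold values themselves are illustrative; the disclosure leaves them open.

```python
# Steps 214-216 sketch: compute CC, RMSE, and AAPE, then check each
# against an illustrative completion threshold.
import math

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def aape(actual, pred):
    """Average absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def corr(actual, pred):
    """Pearson correlation coefficient."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(pred) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, pred))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (sa * sp)

actual = [10.0, 20.0, 30.0]  # illustrative held-out values
pred = [11.0, 19.0, 33.0]    # illustrative model predictions

# Illustrative thresholds; training is "complete" when all are satisfied.
complete = (corr(actual, pred) > 0.95
            and rmse(actual, pred) < 5.0
            and aape(actual, pred) < 10.0)
print(complete)
```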


Once the one or more performance indicators satisfy their respective thresholds, the training of the model is complete. At 218, the ESP control system 100 saves the neural network with the current weights and biases. The neural network can then be deployed for predicting optimized ESP parameters for achieving desired production targets. For example, the deployed neural network model can receive as input a desired production rate and real-time data, where the real-time data includes an oil production rate, a water cut, an intake pressure, an ESP motor load, and a US/DS DP. The deployed model provides as output ESP operating parameters that achieve the desired production rate.
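The deployed model's interface described above can be sketched as a single function taking the real-time data and the desired production rate. The function name, dictionary keys, and the constant output are hypothetical stand-ins; a real deployment would load the network saved at step 218.

```python
# Inference sketch for the deployed model: real-time data plus a desired
# production rate in, ESP operating parameters out.

def predict_esp_params(real_time: dict, target_rate: float) -> dict:
    """Stub inference: assemble the input vector the deployed network
    expects, then return illustrative setpoints."""
    x = [real_time["oil_rate"], real_time["water_cut"],
         real_time["intake_pressure"], real_time["motor_load"],
         real_time["us_ds_dp"], target_rate]
    assert len(x) == 6  # the five real-time inputs plus the target rate
    return {"choke_pct": 55.0, "motor_speed_hz": 48.0}  # stub output

params = predict_esp_params(
    {"oil_rate": 1850.0, "water_cut": 32.0, "intake_pressure": 1450.0,
     "motor_load": 78.0, "us_ds_dp": 210.0},
    target_rate=2000.0,
)
print(params)
```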



FIG. 3 depicts an example neural network 300, according to some implementations. In this example, the inputs to the neural network 300 include an oil rate, water cut, intake pressure, US-DS DP, ESP motor load, and a target production rate. The outputs are choke size percentage and ESP motor speed. Further, in this example, the number of neurons is 15, the number of hidden layers is 100, and the seed number is 9. Other example hyperparameters are possible and are contemplated herein.


For the purposes of this disclosure, the terms “real-time,” “real time,” “realtime,” or similar terms (as understood by one of ordinary skill in the art) mean that an action and a response are temporally proximate such that an individual perceives the action and the response occurring substantially simultaneously. For example, the time difference for a response to display (or for an initiation of a display) of data following the action of an individual to access the data may be less than 1 millisecond (ms), less than 1 second, or less than 5 seconds. While the requested data need not be displayed (or initiated for display) instantaneously, it is displayed (or initiated for display) without any intentional delay, taking into account processing limitations of a described computing system and time required to, for example, gather, accurately measure, analyze, process, store, and/or transmit the data.



FIG. 4 illustrates a flowchart of an example method 400, according to some implementations. For clarity of presentation, the description that follows generally describes method 400 in the context of the other figures in this description. For example, method 400 can be performed by the ESP control system 100 of FIG. 1. It will be understood that method 400 can be performed, for example, by any suitable system, environment, software, hardware, or a combination of systems, environments, software, and hardware, as appropriate. In some implementations, various steps of method 400 can be run in parallel, in combination, in loops, or in any order. In some examples, the method 400 is for controlling an ESP installed in a wellbore.


At 402, the method involves determining a target production rate for a wellbore.


At 404, the method involves providing the target production rate as input to a neural network that provides as output ESP operating parameters to achieve the target production rate, the ESP operating parameters including a choke size percentage and a motor speed, and the neural network modeling an ESP-equipped wellbore.


At 406, the method involves controlling an ESP to operate according to the ESP operating parameters.


In some implementations, the input to the neural network further includes real-time data comprising: an oil production rate, a water cut, an intake pressure, an ESP motor load, and an upstream-downstream (US/DS) differential pressure (DP).


In some implementations, the neural network is trained based on historical production data.


In some implementations, generating the neural network involves: obtaining historical production data including data points associated with respective wells, each data point including at least one of an oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP); splitting the historical production data into training data and testing data; and iteratively training the neural network using the training data to generate the neural network.


In some implementations, iteratively training the neural network involves: creating the neural network; defining initial hyperparameters for the neural network; training the neural network based on the training data; determining, based on at least one performance indicator, whether the training of the neural network is complete; if the training of the neural network is complete, deploying the neural network; and if the training of the neural network is incomplete, generating new hyperparameter values for the neural network and returning to training the neural network using the training data.


In some implementations, determining, based on at least one performance indicator, whether the training of the neural network is complete involves determining whether the at least one performance indicator satisfies a respective threshold.


In some implementations, the at least one performance indicator includes at least one of a correlation coefficient (CC), a root mean squared error (RMSE), or an average absolute percentage error (AAPE).


In some implementations, the initial hyperparameters include a number of neuron layers, a number of neurons per layer, and a seed number for the neural network.



FIG. 5 is a block diagram of an example computer system 500 that can be used to provide computational functionalities associated with described algorithms, methods, functions, processes, flows, and procedures, according to some implementations of the present disclosure. In some implementations, the ESP control system 100 can be the computer system 500, include the computer system 500, or the ESP control system 100 can communicate with the computer system 500.


The illustrated computer 502 is intended to encompass any computing device such as a server, a desktop computer, an embedded computer, a laptop/notebook computer, a wireless data port, a smart phone, a personal data assistant (PDA), a tablet computing device, or one or more processors within these devices, including physical instances, virtual instances, or both. The computer 502 can include input devices such as keypads, keyboards, and touch screens that can accept user information. Also, the computer 502 can include output devices that can convey information associated with the operation of the computer 502. The information can include digital data, visual data, audio information, or a combination of information. The information can be presented in a graphical user interface (GUI). In some implementations, the inputs and outputs include display ports (such as DVI-I+2× display ports), USB 3.0, GbE ports, isolated DI/O, SATA-III (6.0 Gb/s) ports, mPCIe slots, a combination of these, or other ports. In instances of an edge gateway, the computer 502 can include a Smart Embedded Management Agent (SEMA), such as a built-in ADLINK SEMA 2.2, and a video sync technology, such as Quick Sync Video technology supported by ADLINK MSDK+. In some examples, the computer 502 can include the MXE-5400 Series processor-based fanless embedded computer by ADLINK, though the computer 502 can take other forms or include other components.


The computer 502 can serve in a role as a client, a network component, a server, a database, a persistency, or components of a computer system for performing the subject matter described in the present disclosure. The illustrated computer 502 is communicably coupled with a network 530. In some implementations, one or more components of the computer 502 can be configured to operate within different environments, including cloud-computing-based environments, local environments, global environments, and combinations of environments.


At a high level, the computer 502 is an electronic computing device operable to receive, transmit, process, store, and manage data and information associated with the described subject matter. According to some implementations, the computer 502 can also include, or be communicably coupled with, an application server, an email server, a web server, a caching server, a streaming data server, or a combination of servers.


The computer 502 can receive requests over network 530 from a client application (for example, executing on another computer 502). The computer 502 can respond to the received requests by processing the received requests using software applications. Requests can also be sent to the computer 502 from internal users (for example, from a command console), external (or third) parties, automated applications, entities, individuals, systems, and computers.


Each of the components of the computer 502 can communicate using a system bus 503. In some implementations, any or all of the components of the computer 502, including hardware or software components, can interface with each other or the interface 504 (or a combination of both), over the system bus. Interfaces can use an application programming interface (API) 512, a service layer 513, or a combination of the API 512 and service layer 513. The API 512 can include specifications for routines, data structures, and object classes. The API 512 can be either computer-language independent or dependent. The API 512 can refer to a complete interface, a single function, or a set of APIs 512.


The service layer 513 can provide software services to the computer 502 and other components (whether illustrated or not) that are communicably coupled to the computer 502. The functionality of the computer 502 can be accessible for all service consumers using this service layer 513. Software services, such as those provided by the service layer 513, can provide reusable, defined functionalities through a defined interface. For example, the interface can be software written in JAVA, C++, or a language providing data in extensible markup language (XML) format. While illustrated as an integrated component of the computer 502, in alternative implementations, the API 512 or the service layer 513 can be stand-alone components in relation to other components of the computer 502 and other components communicably coupled to the computer 502. Moreover, any or all parts of the API 512 or the service layer 513 can be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of the present disclosure.


The computer 502 can include an interface 504. Although illustrated as a single interface 504 in FIG. 5, two or more interfaces 504 can be used according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. The interface 504 can be used by the computer 502 for communicating with other systems that are connected to the network 530 (whether illustrated or not) in a distributed environment. Generally, the interface 504 can include, or be implemented using, logic encoded in software or hardware (or a combination of software and hardware) operable to communicate with the network 530. More specifically, the interface 504 can include software supporting one or more communication protocols associated with communications. As such, the network 530 or the interface's hardware can be operable to communicate physical signals within and outside of the illustrated computer 502.


The computer 502 includes a processor 505. Although illustrated as a single processor 505 in FIG. 5, two or more processors 505 can be used according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. Generally, the processor 505 can execute instructions and manipulate data to perform the operations of the computer 502, including operations using algorithms, methods, functions, processes, flows, and procedures as described in the present disclosure.


The computer 502 can also include a database 506 that can hold data for the computer 502 and other components connected to the network 530 (whether illustrated or not). For example, database 506 can be an in-memory database, a conventional database, or another type of database storing data consistent with the present disclosure. In some implementations, the database 506 can be a combination of two or more different database types (for example, hybrid in-memory and conventional databases) according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. Although illustrated as a single database 506 in FIG. 5, two or more databases (of the same, different, or combination of types) can be used according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. While database 506 is illustrated as an internal component of the computer 502, in alternative implementations, database 506 can be external to the computer 502.


The computer 502 also includes a memory 507 that can hold data for the computer 502 or a combination of components connected to the network 530 (whether illustrated or not). Memory 507 can store any data consistent with the present disclosure. In some implementations, memory 507 can be a combination of two or more different types of memory (for example, a combination of semiconductor and magnetic storage) according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. Although illustrated as a single memory 507 in FIG. 5, two or more memories 507 (of the same, different, or combination of types) can be used according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. While memory 507 is illustrated as an internal component of the computer 502, in alternative implementations, memory 507 can be external to the computer 502.


An application 508 can be an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer 502 and the described functionality. For example, an application 508 can serve as one or more components, modules, or applications 508. Multiple applications 508 can be implemented on the computer 502. Each application 508 can be internal or external to the computer 502.


The computer 502 can also include a power supply 514. The power supply 514 can include a rechargeable or non-rechargeable battery that can be configured to be either user- or non-user-replaceable. In some implementations, the power supply 514 can include power-conversion and management circuits, including recharging, standby, and power management functionalities. In some implementations, the power supply 514 can include a power plug to allow the computer 502 to be plugged into a wall socket or a power source to, for example, power the computer 502 or recharge a rechargeable battery.


There can be any number of computers 502 associated with, or external to, a computer system including computer 502, with each computer 502 communicating over network 530. Further, the terms “client,” “user,” and other appropriate terminology can be used interchangeably without departing from the scope of the present disclosure. Moreover, the present disclosure contemplates that many users can use one computer 502 and one user can use multiple computers 502.


Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware; in computer hardware, including the structures disclosed in this specification and their structural equivalents; or in combinations of one or more of them. Software implementations of the described subject matter can be implemented as one or more computer programs. Each computer program can include one or more modules of computer program instructions encoded on a tangible, non-transitory, computer-readable computer-storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively, or additionally, the program instructions can be encoded in/on an artificially generated propagated signal. For example, the signal can be a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. The computer-storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of computer-storage mediums.


The terms “data processing apparatus,” “computer,” and “electronic computer device” (or equivalent as understood by one of ordinary skill in the art) refer to data processing hardware. For example, a data processing apparatus can encompass all kinds of apparatuses, devices, and machines for processing data, including by way of example, a programmable processor, a computer, or multiple processors or computers. The apparatus can also include special purpose logic circuitry including, for example, a central processing unit (CPU), a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC). In some implementations, the data processing apparatus or special purpose logic circuitry (or a combination of the data processing apparatus and special purpose logic circuitry) can be hardware- or software-based (or a combination of both hardware- and software-based). The apparatus can optionally include code that creates an execution environment for computer programs, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of execution environments. The present disclosure contemplates the use of data processing apparatuses with or without conventional operating systems, for example, Linux, Unix, Windows, Mac OS, Android, or iOS.


A computer program, which can also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language. Programming languages can include, for example, compiled languages, interpreted languages, declarative languages, or procedural languages. Programs can be deployed in any form, including as stand-alone programs, modules, components, subroutines, or units for use in a computing environment. A computer program can, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, for example, one or more scripts stored in a markup language document; in a single file dedicated to the program in question; or in multiple coordinated files storing one or more modules, sub-programs, or portions of code. A computer program can be deployed for execution on one computer or on multiple computers that are located, for example, at one site or distributed across multiple sites that are interconnected by a communication network. While portions of the programs illustrated in the various figures may be shown as individual modules that implement the various features and functionality through various objects, methods, or processes, the programs can instead include a number of sub-modules, third-party services, components, and libraries. Conversely, the features and functionality of various components can be combined into single components as appropriate. Thresholds used to make computational determinations can be statically, dynamically, or both statically and dynamically determined.


The methods, processes, or logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The methods, processes, or logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, for example, a CPU, an FPGA, or an ASIC.


Computers suitable for the execution of a computer program can be based on one or more of general and special purpose microprocessors and other kinds of CPUs. The essential elements of a computer are a CPU for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a CPU can receive instructions and data from (and write data to) a memory. A computer can also include, or be operatively coupled to, one or more mass storage devices for storing data. In some implementations, a computer can receive data from, and transfer data to, the mass storage devices including, for example, magnetic disks, magneto-optical disks, or optical disks. Moreover, a computer can be embedded in another device, for example, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive.


Computer readable media (transitory or non-transitory, as appropriate) suitable for storing computer program instructions and data can include all forms of permanent/non-permanent and volatile/non-volatile memory, media, and memory devices. Computer readable media can include, for example, semiconductor memory devices such as random access memory (RAM), read only memory (ROM), phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices. Computer readable media can also include, for example, magnetic devices such as tape, cartridges, cassettes, and internal/removable disks. Computer readable media can also include magneto-optical disks, optical memory devices, and technologies including, for example, digital video disc (DVD), CD-ROM, DVD+/−R, DVD-RAM, DVD-ROM, HD-DVD, and BLU-RAY. The memory can store various objects or data, including caches, classes, frameworks, applications, modules, backup data, jobs, web pages, web page templates, data structures, database tables, repositories, and dynamic information. Types of objects and data stored in memory can include parameters, variables, algorithms, instructions, rules, constraints, and references. Additionally, the memory can include logs, policies, security or access data, and reporting files. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


Implementations of the subject matter described in the present disclosure can be implemented on a computer having a display device for providing interaction with a user, including displaying information to (and receiving input from) the user. Types of display devices can include, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), a light-emitting diode (LED), or a plasma monitor. The computer can also include a keyboard and pointing devices including, for example, a mouse, a trackball, or a trackpad. User input can also be provided to the computer using a touchscreen, such as a tablet computer surface with pressure sensitivity or a multi-touch screen using capacitive or electric sensing. Other kinds of devices can be used to provide for interaction with a user, including to receive user feedback, for example, sensory feedback including visual feedback, auditory feedback, or tactile feedback. Input from the user can be received in the form of acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to, and receiving documents from, a device that is used by the user. For example, the computer can send web pages to a web browser on a user's client device in response to requests received from the web browser.


The term “graphical user interface,” or “GUI,” can be used in the singular or the plural to describe one or more graphical user interfaces and each of the displays of a particular graphical user interface. Therefore, a GUI can represent any graphical user interface, including, but not limited to, a web browser, a touch screen, or a command line interface (CLI) that processes information and efficiently presents the information results to the user. In general, a GUI can include a plurality of user interface (UI) elements, some or all associated with a web browser, such as interactive fields, pull-down lists, and buttons. These and other UI elements can be related to or represent the functions of the web browser.


Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back end component, for example, as a data server, or that includes a middleware component, for example, an application server. Moreover, the computing system can include a front-end component, for example, a client computer having one or both of a graphical user interface or a Web browser through which a user can interact with the computer. The components of the system can be interconnected by any form or medium of wireline or wireless digital data communication (or a combination of data communication) in a communication network. Examples of communication networks include a local area network (LAN), a radio access network (RAN), a metropolitan area network (MAN), a wide area network (WAN), Worldwide Interoperability for Microwave Access (WIMAX), a wireless local area network (WLAN) (for example, using 802.11 a/b/g/n or 802.20 or a combination of protocols), all or a portion of the Internet, or any other communication system or systems at one or more locations (or a combination of communication networks). The network can communicate with, for example, Internet Protocol (IP) packets, frame relay frames, asynchronous transfer mode (ATM) cells, voice, video, data, or a combination of communication types between network addresses.


The computing system can include clients and servers. A client and server can generally be remote from each other and can typically interact through a communication network. The relationship of client and server can arise by virtue of computer programs running on the respective computers and having a client-server relationship.


Cluster file systems can be any file system type accessible from multiple servers for read and update. Locking or consistency tracking may not be necessary since locking of the exchange file system can be done at the application layer. Furthermore, Unicode data files can be different from non-Unicode data files.



FIG. 6 illustrates hydrocarbon production operations 600 that include both one or more field operations 610 and one or more computational operations 612, which exchange information and control exploration for the production of hydrocarbons. In some implementations, techniques of the present disclosure can be performed before, during, or in combination with the hydrocarbon production operations 600, specifically, for example, as field operations 610, as computational operations 612, or as both.


Examples of field operations 610 include forming/drilling a wellbore, hydraulic fracturing, producing through the wellbore, and injecting fluids (such as water) through the wellbore, to name a few. In some implementations, methods of the present disclosure can trigger or control the field operations 610. For example, the methods of the present disclosure can generate data from hardware/software including sensors and physical data gathering equipment (e.g., seismic sensors, well logging tools, flow meters, and temperature and pressure sensors). The methods of the present disclosure can include transmitting the data from the hardware/software to the field operations 610 and responsively triggering the field operations 610 including, for example, generating plans and signals that provide feedback to and control physical components of the field operations 610. Alternatively or in addition, the field operations 610 can trigger the methods of the present disclosure. For example, physical components (including, for example, hardware such as sensors) deployed in the field operations 610 can generate data and signals that can be provided as input or feedback (or both) to the methods of the present disclosure.


Examples of computational operations 612 include one or more computer systems 620 that include one or more processors and computer-readable media (e.g., non-transitory computer-readable media) operatively coupled to the one or more processors to execute computer operations to perform the methods of the present disclosure. The computational operations 612 can be implemented using one or more databases 618, which store data received from the field operations 610, data generated internally within the computational operations 612 (e.g., by implementing the methods of the present disclosure), or both. For example, the one or more computer systems 620 process inputs from the field operations 610 to assess conditions in the physical world, the outputs of which are stored in the databases 618. For example, seismic sensors of the field operations 610 can be used to perform a seismic survey to map subterranean features, such as facies and faults. In performing a seismic survey, seismic sources (e.g., seismic vibrators or explosions) generate seismic waves that propagate in the earth, and seismic receivers (e.g., geophones) measure reflections generated as the seismic waves interact with boundaries between layers of a subsurface formation. The source signals and received signals are provided to the computational operations 612, where they are stored in the databases 618 and analyzed by the one or more computer systems 620.


In some implementations, one or more outputs 622 generated by the one or more computer systems 620 can be provided as feedback/input to the field operations 610 (either as direct input or stored in the databases 618). The field operations 610 can use the feedback/input to control physical components used to perform the field operations 610 in the real world.


For example, the computational operations 612 can process the seismic data to generate three-dimensional (3D) maps of the subsurface formation. The computational operations 612 can use these 3D maps to provide plans for locating and drilling exploratory wells. In some operations, the exploratory wells are drilled using logging-while-drilling (LWD) techniques which incorporate logging tools into the drill string. LWD techniques can enable the computational operations 612 to process new information about the formation and control the drilling to adjust to the observed conditions in real-time.


The one or more computer systems 620 can update the 3D maps of the subsurface formation as information from one exploration well is received, and the computational operations 612 can adjust the location of the next exploration well based on the updated 3D maps. Similarly, the data received from production operations can be used by the computational operations 612 to control components of the production operations. For example, production well and pipeline data can be analyzed to predict slugging in pipelines leading to a refinery, and the computational operations 612 can control machine-operated valves upstream of the refinery to reduce the likelihood of plant disruptions that run the risk of taking the plant offline.


In some implementations of the computational operations 612, customized user interfaces can present intermediate or final results of the above-described processes to a user. Information can be presented in one or more textual, tabular, or graphical formats, such as through a dashboard. The information can be presented at one or more on-site locations (such as at an oil well or other facility), on the Internet (such as on a webpage), on a mobile application (or app), or at a central processing facility.


The presented information can include feedback, such as changes in parameters or processing inputs, that the user can select to improve a production environment, such as in the exploration, production, and/or testing of petrochemical processes or facilities. For example, the feedback can include parameters that, when selected by the user, can cause a change to, or an improvement in, drilling parameters (including drill bit speed and direction) or overall production of a gas or oil well. The feedback, when implemented by the user, can improve the speed and accuracy of calculations, streamline processes, improve models, and solve problems related to efficiency, performance, safety, reliability, costs, downtime, and the need for human interaction.


In some implementations, the feedback can be implemented in real-time, such as to provide an immediate or near-immediate change in operations or in a model. The term real-time (or similar terms as understood by one of ordinary skill in the art) means that an action and a response are temporally proximate such that an individual perceives the action and the response occurring substantially simultaneously. For example, the time difference for a response to display (or for an initiation of a display) of data following the individual's action to access the data can be less than 1 millisecond (ms), less than 1 second (s), or less than 5 s. While the requested data need not be displayed (or initiated for display) instantaneously, it is displayed (or initiated for display) without any intentional delay, taking into account processing limitations of a described computing system and time required to, for example, gather, accurately measure, analyze, process, store, or transmit the data.


Events can include readings or measurements captured by downhole equipment such as sensors, pumps, bottom hole assemblies, or other equipment. The readings or measurements can be analyzed at the surface, such as by using applications that can include modeling applications and machine learning. The analysis can be used to generate changes to settings of downhole equipment, such as drilling equipment. In some implementations, values of parameters or other variables that are determined can be used automatically (such as through using rules) to implement changes in oil or gas well exploration, production/drilling, or testing. For example, outputs of the present disclosure can be used as inputs to other equipment and/or systems at a facility. This can be especially useful for systems or various pieces of equipment that are located several meters or several miles apart, or are located in different countries or other jurisdictions.


While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular implementations. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations, separately, or in any suitable sub-combination. Moreover, although previously described features may be described as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.


Particular implementations of the subject matter have been described. Other implementations, alterations, and permutations of the described implementations are within the scope of the following claims as will be apparent to those skilled in the art. While operations are depicted in the drawings or claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed (some operations may be considered optional), to achieve desirable results. In certain circumstances, multitasking or parallel processing (or a combination of multitasking and parallel processing) may be advantageous and performed as deemed appropriate.


Moreover, the separation or integration of various system modules and components in the previously described implementations should not be understood as requiring such separation or integration in all implementations; and the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.


Accordingly, the previously described example implementations do not define or constrain the present disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of the present disclosure.


Furthermore, any claimed implementation is considered to be applicable to at least a computer-implemented method; a non-transitory, computer-readable medium storing computer-readable instructions to perform the computer-implemented method; and a computer system comprising a computer memory interoperably coupled with a hardware processor configured to perform the computer-implemented method or the instructions stored on the non-transitory, computer-readable medium.

Claims
  • 1. A method for controlling an electric submersible pump (ESP) installed in a wellbore, the method comprising: determining a target production rate for the wellbore; providing the target production rate as input to a neural network that provides as output ESP operating parameters to achieve the target production rate, the ESP operating parameters comprising a choke size percentage and a motor speed, and the neural network modeling an ESP-equipped wellbore; and controlling the ESP to operate according to the ESP operating parameters.
  • 2. The method of claim 1, wherein the input to the neural network further comprises real-time data comprising: an oil production rate, a water cut, an intake pressure, an ESP motor load, and an upstream-downstream (US/DS) differential pressure (DP).
  • 3. The method of claim 1, wherein the neural network is trained based on historical production data.
  • 4. The method of claim 1, wherein generating the neural network comprises: obtaining historical production data comprising data points associated with respective wells, each data point comprising at least one of an oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP); splitting the historical production data into training data and testing data; and iteratively training the neural network using the training data to generate the neural network.
  • 5. The method of claim 4, wherein iteratively training the neural network comprises: creating the neural network; defining initial hyperparameters for the neural network; training the neural network based on the training data; determining, based on at least one performance indicator, whether the training of the neural network is complete; if the training of the neural network is complete, deploying the neural network; and if the training of the neural network is incomplete, returning to training the neural network using the training data with new hyperparameter values for the neural network.
  • 6. The method of claim 5, wherein determining, based on at least one performance indicator, whether the training of the neural network is complete comprises: determining whether the at least one performance indicator satisfies a respective threshold.
  • 7. The method of claim 5, wherein the at least one performance indicator comprises at least one of a correlation coefficient (CC), a root mean squared error (RMSE), or an average absolute percentage error (AAPE).
  • 8. The method of claim 5, wherein the initial hyperparameters comprise a number of neuron layers, a number of neurons per layer, and a seed number for the neural network.
  • 9. A system comprising: one or more processors configured to perform operations comprising: determining a target production rate for a wellbore equipped with an electric submersible pump (ESP); providing the target production rate as input to a neural network that provides as output ESP operating parameters to achieve the target production rate, the ESP operating parameters comprising a choke size percentage and a motor speed, and the neural network modeling an ESP-equipped wellbore; and controlling the ESP to operate according to the ESP operating parameters.
  • 10. The system of claim 9, wherein the input to the neural network further comprises real-time data comprising: an oil production rate, a water cut, an intake pressure, an ESP motor load, and an upstream-downstream (US/DS) differential pressure (DP).
  • 11. The system of claim 9, wherein the neural network is trained based on historical production data.
  • 12. The system of claim 9, wherein generating the neural network comprises: obtaining historical production data comprising data points associated with respective wells, each data point comprising at least one of an oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP); splitting the historical production data into training data and testing data; and iteratively training the neural network using the training data to generate the neural network.
  • 13. The system of claim 12, wherein iteratively training the neural network comprises: creating the neural network; defining initial hyperparameters for the neural network; training the neural network based on the training data; determining, based on at least one performance indicator, whether the training of the neural network is complete; if the training of the neural network is complete, deploying the neural network; and if the training of the neural network is incomplete, returning to training the neural network using the training data with new hyperparameter values for the neural network.
  • 14. The system of claim 13, wherein determining, based on at least one performance indicator, whether the training of the neural network is complete comprises: determining whether the at least one performance indicator satisfies a respective threshold.
  • 15. The system of claim 13, wherein the at least one performance indicator comprises at least one of a correlation coefficient (CC), a root mean squared error (RMSE), or an average absolute percentage error (AAPE).
  • 16. The system of claim 13, wherein the initial hyperparameters comprise a number of neuron layers, a number of neurons per layer, and a seed number for the neural network.
  • 17. A non-transitory computer storage medium encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations comprising: determining a target production rate for a wellbore equipped with an electric submersible pump (ESP); providing the target production rate as input to a neural network that provides as output ESP operating parameters to achieve the target production rate, the ESP operating parameters comprising a choke size percentage and a motor speed, and the neural network modeling an ESP-equipped wellbore; and controlling the ESP to operate according to the ESP operating parameters.
  • 18. The non-transitory computer storage medium of claim 17, wherein the input to the neural network further comprises real-time data comprising: an oil production rate, a water cut, an intake pressure, an ESP motor load, and an upstream-downstream (US/DS) differential pressure (DP).
  • 19. The non-transitory computer storage medium of claim 17, wherein the neural network is trained based on historical production data.
  • 20. The non-transitory computer storage medium of claim 17, wherein generating the neural network comprises: obtaining historical production data comprising data points associated with respective wells, each data point comprising at least one of an oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP); splitting the historical production data into training data and testing data; and iteratively training the neural network using the training data to generate the neural network.
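The workflow described in the claims above — splitting historical production data into training and testing sets, iteratively training a neural network over candidate hyperparameter values until performance indicators such as the correlation coefficient (CC), root mean squared error (RMSE), and average absolute percentage error (AAPE) satisfy thresholds, then using the deployed network to map a target production rate to a choke size percentage and a motor speed — can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the synthetic data, the one-hidden-layer architecture, the hyperparameter search over neuron counts, the learning rate, and the deployment thresholds are all assumptions introduced here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)  # a seed number is one of the claimed initial hyperparameters

# Hypothetical historical production data (synthetic stand-in for real well data).
# Input: production rate (bbl/d); outputs: choke size (%) and motor speed (Hz).
n = 400
rate = rng.uniform(500.0, 3000.0, size=(n, 1))
choke = 20.0 + 0.02 * rate[:, 0] + rng.normal(0.0, 1.0, n)      # choke size percentage
speed = 35.0 + 0.008 * rate[:, 0] + rng.normal(0.0, 0.5, n)     # motor speed, Hz
X = (rate - rate.mean()) / rate.std()
Y = np.column_stack([choke, speed])
Ym, Ys = Y.mean(0), Y.std(0)
Yn = (Y - Ym) / Ys  # standardized targets for stable training

# Split the historical data into training and testing sets (claim 4).
idx = rng.permutation(n)
tr, te = idx[: int(0.8 * n)], idx[int(0.8 * n):]

def train(hidden, epochs=2000, lr=0.05):
    """Train a one-hidden-layer tanh network by full-batch gradient descent."""
    W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 2)); b2 = np.zeros(2)
    Xt, Yt = X[tr], Yn[tr]
    for _ in range(epochs):
        H = np.tanh(Xt @ W1 + b1)               # hidden activations
        P = H @ W2 + b2                         # predictions (standardized)
        G = 2.0 * (P - Yt) / len(Xt)            # gradient of mean squared error
        GH = (G @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
        W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
        W1 -= lr * (Xt.T @ GH); b1 -= lr * GH.sum(0)
    return lambda x: (np.tanh(x @ W1 + b1) @ W2 + b2) * Ys + Ym

def indicators(pred, truth):
    """Claimed performance indicators: CC, RMSE, and AAPE (claim 7)."""
    cc = float(np.corrcoef(pred.ravel(), truth.ravel())[0, 1])
    rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
    aape = float(np.mean(np.abs((pred - truth) / truth)) * 100.0)
    return cc, rmse, aape

# Iterate over hyperparameter values until the indicators satisfy thresholds (claims 5-6).
for hidden in (2, 4, 8, 16):
    model = train(hidden)
    cc, rmse, aape = indicators(model(X[te]), Y[te])
    if cc > 0.95 and aape < 10.0:  # hypothetical deployment thresholds
        break

# Inference (claim 1): ESP operating parameters for a desired target rate.
target = np.array([[2000.0]])
choke_pct, motor_hz = model((target - rate.mean()) / rate.std())[0]
```

In a deployment matching claim 2, the feature vector would also carry real-time water cut, intake pressure, motor load, and US/DS differential pressure readings rather than the single rate input used in this sketch.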