Monitoring the flow rate of a fluid through pipelines is generally a resource-intensive task, due to the number of pipelines involved, their size, and the fact that resources must be sent on site. The fluid flow rate is usually measured by sensors that have to be installed in the pipeline and maintained. The results of the measurements must be collected or communicated from the sensors, creating an additional challenge.
Accordingly, there is a need for a more flexible way of obtaining a fluid flow rate through pipelines. A possible solution consists of sending an intelligent drone that may fly autonomously and land on a pipeline. Equipped with measurement and computational tools, the drone determines the fluid flow rate, and, after sending the result to a base station using an internal communication system, the aircraft is ready for its next task.
With this strategy, only minimal installation is required on-site. Devices on which the drone may land can be conveniently installed on the exterior face of the pipeline and are easily removable. Any maintenance is shifted to the drone and its internal equipment and is performed at a centralized maintenance site.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
Embodiments disclosed herein generally relate to a system for computing a fluid flow rate of a fluid flowing through a pipe. The system includes a docking station, attached to a portion of the pipe, the portion of the pipe exposed to an air space. The system further includes a drone capable of flying through the air space. The drone includes a connecting device configured to latch securely onto the docking station, a first ultrasonic transducer that connects to the pipe when the connecting device is latched, a second ultrasonic transducer that connects to the pipe when the connecting device is latched, and a computer configured to perform a computational procedure. The computational procedure includes instructing the first ultrasonic transducer to emit a source signal into the fluid and receiving, after the first ultrasonic transducer starts emitting the source signal, a propagated signal from the second ultrasonic transducer. The computational procedure further includes computing the fluid flow rate, using a computational model, based on the propagated signal.
Embodiments disclosed herein generally relate to a method for computing a fluid flow rate of a fluid flowing through a pipe. The method includes flying a drone through an air space, to a vicinity of a docking station attached to a portion of the pipe, the portion of the pipe exposed to the air space. The drone includes a connecting device, a first ultrasonic transducer, and a second ultrasonic transducer. The method further includes latching the connecting device securely onto the docking station, connecting the first ultrasonic transducer to the pipe using the connecting device, connecting the second ultrasonic transducer to the pipe using the connecting device, emitting a source signal into the fluid using the first ultrasonic transducer and receiving, after the first ultrasonic transducer starts emitting the source signal, a propagated signal from the second ultrasonic transducer. The method further includes computing the fluid flow rate, using a computational model, based on the propagated signal.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. For example, a computer may reference two or more such computers.
As used here and in the appended claims, the words “comprise,” “has,” and “include” and all grammatical variations thereof are each intended to have an open, non-limiting meaning that does not exclude additional elements or steps.
“Optionally” means that the subsequently described event or circumstances may or may not occur. The description includes instances where the event or circumstance occurs and instances where it does not occur.
Terms such as “approximately,” “about,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide. For example, these terms may mean that there can be a variance in value of up to ±10%, of up to ±5%, of up to ±2%, of up to ±1%, of up to ±0.5%, of up to ±0.1%, or of up to ±0.01%.
Ranges may be expressed as from about one particular value to about another particular value, inclusive. When such a range is expressed, it is to be understood that another embodiment is from the one particular value to the other particular value, along with all particular values and combinations thereof within the range.
It is to be understood that one or more of the steps shown in a flowchart may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowchart.
Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.
In the following description of
One or more embodiments disclosed herein provide a drone that can be flown to a pipe, such as a pipeline, to assess a fluid flow rate of a fluid flowing through the pipe. In addition, one or more embodiments describe how the drone receives instructions from, and sends results to, a command system configured to analyze and optimize the fluid flow rate.
The drone (100) is equipped with a connecting device configured to latch securely onto a docking station. In the specific example of the drone (100), the connecting device includes two connectors. A first connector (115) is held at a distal end of a first arm (113) that extends from the frame (103). A second connector (119) is held at a distal end of a second arm (117) that extends from the frame (103). The first arm (113) and the second arm (117) are symmetrical with respect to the first axis. The drone (100) further includes a first ultrasonic transducer and a second ultrasonic transducer, not shown in
The drone (100) further includes a camera (121). The camera (121) may capture still images, video, or any combination thereof. Further, the camera (121) may also take the form of other types of imaging sensors such as infrared, acoustic, and other applicable sensors useful for navigation. Moreover, the camera (121) may take the form of one or more proximity sensors to, for example, align the drone with a docking station.
The components of the drone (100) can be made of many different materials. In one or more embodiments, the frame (103) is made of one or more materials, such as carbon fiber, aluminum, or any combination thereof. In one or more embodiments, the frame (103) is covered, at least in part, by a protective coating in an effort to mitigate wear and tear of the frame from external factors. In one or more embodiments, the propellers (107), (111), (127) and (135) are made of a composite material or a reinforced plastic material, or any combination thereof. Examples of a battery of the drone (100) include a lithium polymer battery and a lithium-ion battery. In one or more embodiments, the first ultrasonic transducer and the second ultrasonic transducer include a piezoelectric material, such as zirconate titanate, quartz, or any combination thereof, that converts mechanical stress into an electrical charge.
Latching the connecting device onto the docking station may be performed by various latching mechanisms. Examples of latching mechanisms that may be used to latch the connecting device onto the docking station include, but are not limited to, a hook, attached to one of the docking station and the connecting device, configured to hook onto the other of the docking station and the connecting device. Examples of latching mechanisms that may be used to latch the connecting device onto the docking station further include a pair of mutually attractive magnets, one magnet located in, or on, the docking station and the other magnet located in, or on, the connecting device. In implementations where the docking station includes a plurality of docking ports and the connecting device includes a plurality of connectors, such as the drone (100) and the pipe (200) described in
In one or more embodiments, the drone (100) further includes a tank (123) containing sonic transmission fluid. The sonic transmission fluid is designed to facilitate a transmission of ultrasonic signals between the fluid and the transducers. The tank (123) is configured to release sonic transmission fluid into the first ultrasonic transducer and the second ultrasonic transducer through a release mechanism, not shown in
In one or more embodiments, the docking station includes a battery charger, configured to charge the battery of the drone (100) when the connecting device is latched onto the docking station. Charging the battery using a charger installed in the docking station can be done in many ways. In some implementations, a first electrical wire connects the battery charger to an exposed part of the docking station and a second electrical wire connects the battery to an exposed part of the connecting device. The first and second electrical wires are installed in such a way that, upon latching the connecting device onto the docking station, the first electrical wire connects to the second electrical wire. This way, the battery charger charges the battery by circulating electrical current between the battery charger and the battery, through the first and second electrical wires. In other implementations, the battery charger is a magnetic induction charger and charging the battery is done wirelessly when the connecting device is latched onto the docking station. In the specific embodiments of the pipe (200) in
In one or more embodiments, the drone (100) further includes a global positioning system (GPS), configured to receive a location of the docking station. In some embodiments, the drone (100) further includes a remote-control system that receives command inputs from a pilot, where the pilot may be a human or a machine. In some embodiments, the drone (100) further includes an autonomous flying system. The autonomous flying system may take various forms and include a variety of devices. Examples of devices that may be included in the autonomous flying system include sensors, such as pressure sensors, position sensors and movement sensors. The autonomous flying system may further include a sonar, capable of mapping an environment surrounding the drone (100). The autonomous flying system may further include an accelerometer measuring an acceleration of the drone (100), a gyroscope that controls a stability of the drone (100), or an altimeter that measures an altitude of the drone (100). The autonomous flying system may make use of the camera (121) for obstacle detection. The autonomous flying system includes flying software, hosted and run on the computer. The flying software may include a flying control algorithm that sends flying inputs to the mechanical elements of the drone (100), such as the motors (105) and (109). Examples of flying inputs sent to the mechanical elements of the drone (100) may include increasing or reducing the thrust of one or more engines of the drone (100), or changing an orientation of one or more engines of the drone (100). Such flying inputs may have an effect of moving the drone (100) left, right, up or down, or modifying a speed of motion of the drone (100). In one or more embodiments, the flying software uses a fly-by-wire system to send the flying inputs. The flying software may further include an obstacle detection algorithm and send a flying input to the mechanical elements of the drone (100) for the drone (100) to avoid an obstacle.
In one or more embodiments, the drone (100) further includes a communication system that allows the drone (100) to communicate with a remote operator. The communication system may include a radio transmitter or a radio receiver, or both. The operator may also use a radio transmitter or a radio receiver, or both, to communicate with the drone (100). Communication may be transmitted in several ways, including via a satellite, a remote network system or a wireless cellular mesh, such as an LTE or a 5G network. Through the communication system, the drone (100) may receive commands, such as a flying command, or send information to the operator. Examples of information that may be sent to an operator include, but are not limited to, a result of a computation performed by the drone, data captured by the camera (121) or data captured by the sensors of the drone (100). In some implementations, the communication system includes a broadband link. In some implementations, the flying software is configured to receive flying commands from the operator. Upon receiving a flying command, the flying algorithm sends flying inputs to the mechanical elements of the drone (100) to execute the flying command. A notable example of a flying command is a command to fly the drone (100) to a vicinity of a destination. Upon receiving a command to fly to a vicinity of the destination, the flying algorithm sends flying inputs to the mechanical elements of the drone (100) to fly the drone to the vicinity of the destination. For the purpose of this disclosure, a vicinity of a location is defined either as the inside of a disk, centered at the location, with a pre-defined radius, or as a region from which any object located at the location is detectable by the camera (121). To execute a flying command, the autonomous flying system may use the GPS as a guide.
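As a rough illustration of the disk-based definition of a vicinity given above, the following Python sketch (the function name in_vicinity and the planar-coordinate simplification are illustrative assumptions, not part of the disclosure) tests whether the drone lies within a pre-defined radius of a target location.

```python
import math

def in_vicinity(drone_pos, target_pos, radius_m):
    """Return True if drone_pos lies inside the disk of radius radius_m
    centered at target_pos. Positions are (x, y) coordinates, in meters,
    on a local planar grid (a simplification made for illustration)."""
    dx = drone_pos[0] - target_pos[0]
    dy = drone_pos[1] - target_pos[1]
    return math.hypot(dx, dy) <= radius_m

# Example: a drone about 11 m from a docking station, with a 50 m vicinity radius.
print(in_vicinity((3.0, 4.0), (10.0, 13.0), radius_m=50.0))  # True
```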
In one or more embodiments, the drone (100) includes an artificial intelligence (AI) model configured to detect the docking station from images captured by the camera (121) and determine a docking position. Then, the computer sends a flying command to the autonomous flying system to position the drone (100) at the docking position. A docking position for the drone (100) is a position, in the air space, that satisfies two criteria that allow the connecting device to be latched onto the docking station: firstly, the docking position is sufficiently close to the docking station; secondly, the connecting device of the drone (100) is aligned with the docking station. In the specific embodiment of the drone (100) and the pipe (200), the connecting device includes two connectors, and the docking station includes two docking ports. Aligning the connecting device of the drone (100) with the docking station includes aligning the first connector (115) with the first docking port (205) and aligning the second connector (119) with the second docking port (207). The AI model is hosted and run on the computer. Examples of AI models that may be used to detect the docking station and determine a docking position include a range of computer vision models, such as an image detection algorithm, an image classification algorithm, or both. Examples of algorithms performing image detection, image classification, or both include region-based convolutional neural networks (RCNN) and “you only look once” (YOLO) algorithms. In some embodiments, the AI model includes an image segmentation algorithm, such as a UNET algorithm or a mask-RCNN algorithm. An image segmentation algorithm assigns a value to each pixel of an image, the value corresponding to an object detected on the image. For example, given an image taken by the camera (121), a segmentation algorithm may be configured to assign a value of one to each pixel that belongs to the docking station and zero to each pixel that does not belong to the docking station. The RCNN, YOLO, UNET and mask-RCNN models are examples of convolutional neural networks. In many instances, the AI model may include a neural network. In some implementations, positioning the drone (100) at a docking position is done in several steps that involve back-and-forth communication between the AI model and the autonomous flying system. For instance, as the drone arrives in a vicinity of the docking station, the following three approximation steps a), b) and c) may be iteratively repeated until the drone (100) is at the docking position: a) the camera (121) captures an image; b) the AI model detects the docking station on the image; c) the AI model sends a command to the flying system to move closer to the docking station by a pre-determined step length.
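A minimal sketch of the iterative approximation steps a), b) and c) is given below in Python. The camera, AI model and flying system interfaces (capture, locate, move_toward) are hypothetical placeholders assumed for illustration; they are not APIs defined by this disclosure.

```python
def approach_docking_station(camera, ai_model, flying_system,
                             step_length_m=0.5, tolerance_m=0.05,
                             max_iterations=200):
    """Repeat steps a)-c) until the drone is estimated to be at the docking
    position: capture an image, detect the docking station, and move toward
    it by a pre-determined step length."""
    for _ in range(max_iterations):
        image = camera.capture()                 # step a): capture an image
        offset = ai_model.locate(image)          # step b): offset to the docking position
        if offset is None:
            continue                             # docking station not detected; retry
        if offset.norm() <= tolerance_m:
            return True                          # docking position reached
        flying_system.move_toward(offset, step_length_m)  # step c): move closer
    return False                                 # docking position not reached
```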
AI models typically involve a training phase and a testing phase, both using previously acquired data. It is noted that supervised machine-learned models require examples of input and associated output (i.e., target) pairs in order to learn a desired functional mapping. In the context where the AI model in this disclosure is an image detection model, the examples may be defined in many ways. For instance, an example input may be a candidate image comprising an image of a docking station. An associated output, or target, may be a box within the candidate image, the box containing the image of the docking station. Another example input may be a candidate image that does not include an image of a docking station. A corresponding output, or target, may be a flag alerting that there is no image of a docking station in the candidate image. In the context where the AI model is a segmentation model, the examples may be defined in many ways. For instance, an example input may be a candidate image. An associated output, or target, may include a first mask of a same size as the candidate image, each pixel of the first mask having a value of zero or one. A pixel of the first mask may have a value of one if the pixel is detected as being part of an image of a docking station, or a value of zero if the pixel is detected as not being part of an image of a docking station. An associated output, or target, may further include a second mask of a same size as the candidate image, each pixel of the second mask having a value of zero or one. A pixel of the second mask may have a value of one if the pixel is detected as being part of an image of a given object, such as an object describing an environment of the docking station, such as a pipe, a tree or water. A pixel of the second mask may have a value of zero if the pixel is detected as not being part of an image of the given object.
Generally, a plurality of examples is needed, forming a dataset of examples. In one or more embodiments, the dataset is split into a training dataset and a testing dataset. The example input and associated output pairs of the training dataset are called training examples. The example input and associated output pairs of the testing dataset are called testing examples. It is common practice to split the dataset in a way that the training dataset contains more examples than the testing dataset. Because data splitting is a common practice when training and testing a machine-learned model, it is not described in detail in this disclosure. One with ordinary skill in the art will recognize that any data splitting technique may be applied to the dataset without departing from the scope of this disclosure. The AI model is trained as a functional mapping that optimally matches the inputs of the training examples to the associated outputs of the training examples. One with ordinary skill in the art will recognize that various computer vision machine learning models exist and have been trained. Thus, in some embodiments, the AI model in this disclosure may benefit from these previously trained machine learning models through transfer learning.
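The following Python sketch illustrates one possible data split, using the scikit-learn train_test_split helper and a synthetic stand-in dataset; the 80/20 proportion and the array shapes are arbitrary choices made for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the dataset of examples: 100 single-channel 64x64
# "images" and 100 binary targets (1 = docking station present, 0 = absent).
images = np.random.rand(100, 64, 64)
targets = np.random.randint(0, 2, size=100)

# Keep 80% of the examples for training and 20% for testing (a common,
# but not required, choice); random_state fixes the shuffle for repeatability.
train_images, test_images, train_targets, test_targets = train_test_split(
    images, targets, test_size=0.2, random_state=42)

print(len(train_images), len(test_images))  # 80 20
```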
Once trained, the AI model is validated by computing a metric for the testing examples, in accordance with one or more embodiments. Examples of metrics that may be used to validate the AI model include any scoring or comparison function known in the art, including but not limited to: a mean square error (MSE), a root mean square error (RMSE) and a coefficient of determination (R2), defined as:
MSE = (1/n) Σi |yi − ŷi|², RMSE = √MSE, R2 = 1 − (Σi |yi − ŷi|²) / (Σi |yi − ȳ|²),
where n denotes the number of testing examples, xi and yi denote the input and the associated output of the i-th testing example, ȳ denotes the mean of the outputs y1, . . . , yn, and ŷi denotes the prediction obtained by inputting xi into the AI model, for i=1, . . . , n. The notation |·| denotes a norm that can be applied to the object in between. For example, if the outputs are real-valued, the notation |·| may denote an absolute value. If the outputs are vector-valued, the notation |·| may denote an l2 norm.
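For real-valued outputs, the three validation metrics may be computed as in the following Python sketch (the function name and example values are illustrative only).

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """Compute MSE, RMSE and R2 over testing examples with real-valued outputs."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mse, rmse, r2

# Example with five testing examples.
print(validation_metrics([1.0, 2.0, 3.0, 4.0, 5.0], [1.1, 1.9, 3.2, 3.8, 5.1]))
```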
As previously mentioned, the drone in this disclosure may be structured in many ways. The drone described in
The drone (400) is equipped with a connecting device configured to latch securely onto a docking station. In the specific example in
The drone (400) further includes a first ultrasonic transducer and a second ultrasonic transducer, not shown in
It is emphasized that the example drones (100) and (400) and pipes (200) and (500) are given only as examples and should be considered non-limiting. One with ordinary skill in the art will recognize that other types of drones, pipes, and their accessories may be used without departing from the scope of this disclosure, as long as the drone is able to emit a source sonic signal into the fluid and receive a propagated signal from the fluid.
The first source signal (605) radiates from the first ultrasonic transducer (603) into the fluid, resulting in a first radiated signal. Some of the first radiated signal is received by the second ultrasonic transducer (607) as a first propagated signal (609). The second ultrasonic transducer (607) starts receiving, at the latest, when the first ultrasonic transducer (603) starts emitting the first source signal (605). The second ultrasonic transducer (607) records for a receiving time Trec that is long enough to receive the full first propagated signal (609). In one or more embodiments, the receiving time Trec is a sum of four durations, Trec = TI + Tmax + Tprop + To (EQ. 4).
In EQ. 4, the initial time TI is the difference between the time at which the first ultrasonic transducer (603) starts emitting the first source signal (605) and the time at which the second ultrasonic transducer (607) starts receiving. The propagation time Tprop is an upper bound of the time that it would take for a sonic wave with the same frequencies as the source frequencies to complete the following path if the fluid within the pipe (200) were stationary: radiate from the first ultrasonic transducer (603), reflect off the pipe wall opposite the first ultrasonic transducer (603), and travel to the second ultrasonic transducer (607). In one or more embodiments, Tprop is computed by a formula,
In EQ. 5, L is a distance between the first ultrasonic transducer (603) and the second ultrasonic transducer (607), D is a diameter of the pipe, and U is an estimate of a lower bound of a speed of sound in the fluid. In some scenarios, a composition of the fluid is known and U is obtained from available literature. In other scenarios, U may be defined as a speed of sound in air at a surface of the Earth. In further scenarios, U may be defined as a lowest speed of sound known from any fluid. Coming back to EQ. 4, To is a tolerance that accounts for uncertainties in obtaining TI, Tmax and Tprop. The tolerance To is defined such that the recorded time Trec is long enough, beyond reasonable doubt, to record the entire part of the first radiated signal that arrives at the second ultrasonic transducer (607). In some embodiments, To is a multiple of Trec. In other embodiments, To is a pre-defined number of seconds. Since the first ultrasonic transducer (603) starts emitting the first source signal (605) at time 0, the first propagated signal (609) is received by the second transducer between times −TI and Tmax + Tprop + To.
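A minimal Python sketch of the receiving-time computation of EQ. 4 follows. The bound used for Tprop (path length L + 2D divided by the lower-bound speed U) is a deliberately loose, illustrative assumption and is not necessarily the expression of EQ. 5.

```python
def receiving_time(t_initial, t_max, l_distance, d_diameter, u_lower_speed, tolerance):
    """Return Trec = TI + Tmax + Tprop + To (EQ. 4).

    Tprop is bounded here by dividing a bounding path length (L + 2*D) by the
    lower-bound sound speed U; this is an illustrative assumption only."""
    t_prop = (l_distance + 2.0 * d_diameter) / u_lower_speed
    return t_initial + t_max + t_prop + tolerance

# Example: transducers 0.3 m apart on a 0.25 m diameter pipe, U = 340 m/s.
print(receiving_time(t_initial=0.001, t_max=0.002, l_distance=0.3,
                     d_diameter=0.25, u_lower_speed=340.0, tolerance=0.005))
```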
Based on the first propagated signal (609), a fluid flow rate (613) of the fluid flowing through the pipe is computed using a computational model (611) hosted and run on the drone's computer. There are many ways of computing the fluid flow rate (613) based on the first propagated signal (609). In one or more embodiments, the computational model (611) is further based on the first source signal (605). Examples of a computational model (611) based on the first propagated signal (609) and the first source signal (605) include a Doppler model, defining the fluid flow rate (613) V as V = C (ƒ1 − ƒ0) / (2 ƒ0 cos θ) (EQ. 6).
In EQ. 6, C is a speed of sound in the fluid, ƒ0 is a reference frequency of the first source signal (605), ƒ1 is a reference frequency of the first propagated signal (609), and θ is an angle of incidence of the first propagated signal (609). The parameters ƒ0, ƒ1 and θ in EQ. 6 can be computed in many ways. In some scenarios, the first source signal (605) has only one frequency and the reference frequency ƒ0 is defined as the frequency of the first source signal (605). In other scenarios, the first source signal (605) has multiple frequencies, in which case ƒ0 may be defined as an average frequency of the first source signal (605), or a maximum frequency of the first source signal (605). In one or more embodiments, the first propagated signal (609) includes a first reflected signal, received from a first reflected wave. A ray path of the first reflected wave is defined in
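The following Python sketch evaluates a textbook Doppler-shift relation of the kind referenced above; the exact form of EQ. 6 (for instance, the factor of two associated with reflection) may differ, so the formula below should be read as an illustrative assumption.

```python
import math

def doppler_flow_rate(c_fluid, f_source, f_received, theta_rad):
    """Estimate the fluid flow rate from a Doppler frequency shift,
    V = C * (f1 - f0) / (2 * f0 * cos(theta)), a common textbook form."""
    return c_fluid * (f_received - f_source) / (2.0 * f_source * math.cos(theta_rad))

# Example: 1 MHz source, 1.0005 MHz received, sound speed 1480 m/s, 30 degree incidence.
print(doppler_flow_rate(1480.0, 1.0e6, 1.0005e6, math.radians(30.0)))  # ~0.43 m/s
```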
In one or more embodiments, the computational model (611) is based on the first propagated signal (609) and a second propagated signal (617). In such scenarios, the second ultrasonic transducer (607) emits a second source signal (615) that radiates into the fluid as a second radiated signal, some of which is received by the first ultrasonic transducer (603) as the second propagated signal (617). Examples of a computational model (611) based on the first propagated signal (609) and the second propagated signal (617) include a transit-time difference model, defining the fluid flow rate (613) V as:
In EQ. 7, D is a diameter of the pipe, Δt is a travel time difference between the first propagated signal (609) and the second propagated signal (617) and L is a distance between the first ultrasonic transducer (603) and the second ultrasonic transducer (607). The angle θ is defined, again, as the angle of incidence θ (717) in
Back to EQ. 7 describing an embodiment of the computational model in
It is noted that the first propagated signal (609) and the second propagated signal (617) may further include noise, recorded by the first ultrasonic transducer (603) and the second ultrasonic transducer (607). The noise is defined as any sonic signal that is not related to the first radiated signal or the second radiated signal. Examples of noise include current noise from the current of the fluid inside the pipe (200). Examples of noise further include any sound produced by equipment, such as a propeller of the drone (100), or an engine of a vehicle located in a vicinity of the pipe (200). Examples of noise further include any sound related to the weather in a vicinity of the pipe (200), such as rain or thunder. In some embodiments, it is not possible to discriminate a time at which noise occurs from an onset of the first reflected wave or an onset of the second reflected wave. In some embodiments, it is not possible to discriminate a time at which noise occurs from a time at which an amplitude of the first reflected signal is maximum or a time at which an amplitude of the second reflected signal is maximum. Thus, in accordance with one or more embodiments, computing Δt as the maximizing lag is considered less sensitive to noise than computing Δt as the onset difference or the maximizing difference. In one or more embodiments, the computational model (611) further includes pre-processing tools that attenuate noise. Examples of pre-processing tools that may attenuate noise include frequency filters, frequency-wavenumber filters, or spike detection algorithms. In some implementations, the pre-processing tools that attenuate noise make use of artificial intelligence (AI).
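One common way to obtain Δt as a maximizing lag is to cross-correlate the two propagated signals and pick the lag at which the correlation peaks. The Python sketch below illustrates this idea on synthetic signals; it is an illustrative implementation, not the specific procedure of the disclosure.

```python
import numpy as np

def transit_time_difference(sig_delayed, sig_reference, dt_sample):
    """Estimate a travel time difference as the lag (in seconds) that maximizes
    the cross-correlation of two signals sampled at the same rate."""
    corr = np.correlate(sig_delayed, sig_reference, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_reference) - 1)
    return lag_samples * dt_sample

# Example: two noisy copies of the same pulse, the second delayed by 5 samples.
rng = np.random.default_rng(0)
pulse = np.sin(2 * np.pi * np.arange(64) / 16) * np.exp(-np.arange(64) / 20)
reference = np.concatenate([pulse, np.zeros(16)]) + 0.05 * rng.standard_normal(80)
delayed = np.concatenate([np.zeros(5), pulse, np.zeros(11)]) + 0.05 * rng.standard_normal(80)
print(transit_time_difference(delayed, reference, dt_sample=1e-6))  # ~5e-06 s
```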
In one or more embodiments, the first source signal (605), the first propagated signal (609) and the second propagated signal (617) are sent to the computational model (611) as digitized time series of amplitude samples, representing amplitudes of the first source signal (605), the first propagated signal (609) and the second propagated signal (617) at monotonically increasing discrete times. In one or more embodiments, the computational model includes both the Doppler model given in EQ. 6 and the transit-time difference model given in EQ. 7. In one or more embodiments, the computational model includes a convex combination of the Doppler model given in EQ. 6 and the transit-time difference model given in EQ. 7, defining the fluid flow rate (613) V as V = α V6 + (1 − α) V7 (EQ. 8), in which V6 and V7 denote the fluid flow rates obtained from the Doppler model of EQ. 6 and from the transit-time difference model of EQ. 7, respectively, and
where α is a positive real number between 0 and 1. In one or more embodiments, the computational model (611) includes an extraction procedure that extracts the first reflected signal from the first propagated signal (609), or the second reflected signal from the second propagated signal (617), or both.
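A short Python sketch of the convex combination of EQ. 8 is given below; the variable names are illustrative.

```python
def combined_flow_rate(v_doppler, v_transit, alpha):
    """Convex combination of the two estimates: alpha weights the Doppler model,
    (1 - alpha) weights the transit-time difference model."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie between 0 and 1")
    return alpha * v_doppler + (1.0 - alpha) * v_transit

print(combined_flow_rate(0.43, 0.47, alpha=0.5))  # 0.45
```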
The first radiated signal, which radiates from the first ultrasonic transducer (603), is composed of a set of wavefronts. The first radiated signal varies in accordance with a location in space and time. Examples of variations of the first radiated signal include a spherical divergence. The amplitudes of the first source signal (605) are distributed on the wavefronts of the first radiated signal. Because acoustic energy must be preserved, amplitudes of the first radiated signal, at any point in space and time, are smaller than the source amplitudes. The amplitudes of the first radiated signal on the wavefront decrease with the distance from the wavefront to the first ultrasonic transducer (603). Examples of variations of the first radiated signal further include an acoustic absorption by the medium. The fluid absorbs some of the acoustic energy emitted by the first ultrasonic transducer (603). By doing so, the fluid constitutes a filter that reduces the amplitudes and frequencies of the first radiated signal, compared to the source amplitudes and source frequencies. Therefore, the amplitudes of the first propagated signal (609) are smaller than the source amplitudes and the frequencies of the first propagated signal (609) are lower than the source frequencies. In a similar fashion, the amplitudes of the second propagated signal (617) are smaller than the source amplitudes and the frequencies of the second propagated signal (617) are lower than the source frequencies. Therefore, acoustic absorption may interfere with computations of the fluid flow rate in EQ. 6, EQ. 7 or EQ. 8. In that regard, the computational model (611) may further include pre-processing tools that restore amplitudes and frequencies of the first propagated signal (609) and the second propagated signal (617) to the source amplitudes and the source frequencies. An example pre-processing tool that restores amplitudes reduced by the spherical divergence is a spherical divergence compensation. An example pre-processing tool that restores amplitudes and frequencies reduced by absorption by the fluid is a factor Q compensation.
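As a hedged illustration, a spherical divergence compensation may be approximated by scaling each amplitude sample by its propagation distance, as in the Python sketch below; the constant sound speed is an assumption made for illustration, and factor Q compensation (a frequency-dependent correction) is not shown.

```python
import numpy as np

def spherical_divergence_compensation(samples, dt_sample, sound_speed):
    """Multiply each amplitude sample by its propagation distance (time x speed),
    compensating the geometric spreading of a spherical wavefront."""
    samples = np.asarray(samples, dtype=float)
    times = np.arange(1, len(samples) + 1) * dt_sample
    return samples * times * sound_speed

print(spherical_divergence_compensation([1.0, 0.5, 0.25, 0.125],
                                         dt_sample=1e-3, sound_speed=1480.0))
```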
The fluid flow is controlled by control parameters (879), which can be set or tuned by a flow control system (877). Examples of components of the control parameters (879) include, but are not limited to, an inlet pressure of the fluid at an extremity of the pipe (200), a composition of the fluid, a density of the fluid and a temperature of the fluid. The flow control system may include various devices, such as a pump to apply a pressure to the fluid, a resistance to create heat into the fluid, and a remote control allowing an operator to control the pump or the resistance.
The drone (100) includes a connecting device (853). In one or more embodiments, the connecting device (853) comprises two connectors, such as the first connector (115) and the second connector (119) of the drone (100) in
In this specific implementation, the drone (100) further includes a GPS (861), an autonomous flying system (863) and an AI model (865) that is also hosted and run on the first computer (859). As described in other paragraphs of this disclosure, the GPS may be used by the autonomous flying system (863) to fly the drone (100) to a vicinity of the docking station (873). The AI model (865) may be used to detect the docking station (873) and determine a docking position for the drone (100). The docking position is suitable for the connecting device (853) to latch onto the docking station (873). In this specific implementation, the drone (100) further includes a drone communication system (867) to communicate with the command system (810).
The command system (810) includes a base communication system (813). In one or more embodiments, the base communication system (813) includes a Supervisory Control and Data Acquisition (SCADA) system. The command system (810) further includes a flow analysis system (815), that is hosted and run on a second computer (817). The flow analysis system (815) is configured to manage the fluid flow through the pipe (200) as follows. First, the flow analysis system (815) emits a request to compute the fluid flow rate at the docking station (873). The command system (810) sends a location of the docking station (873) to the drone (100), via a docking station location communication (833). The drone (100) sends a response to the command system (810) that it has received the location of the docking station (873). Then, the command system (810) sends a flying instruction (835) to the drone (100), for the drone (100) to fly to a vicinity of the docking station (873). The drone (100) sends a response to the command system (810) that it has received the flying instruction (835). The drone (100) executes the flying instruction (835) and flies autonomously to a vicinity of the docking station (873) using the autonomous flying system (863), guided by the GPS (861). The drone (100) then sends an alert to the command system that it has completed the flying instruction (835). The command system (810) instructs the drone (100) to latch the connecting device (853) securely onto the docking station (873) (i.e., to land on the pipe (200)), via a latching instruction (837). The drone (100) sends a response to the command system (810) that it has received the latching instruction (837). Using the AI model (865), the drone (100) determines a docking position. Using the AI model (865) and the autonomous flying system (863), the drone (100) positions itself at the docking position. Once the drone (100) is in the docking position, the connecting device (853) latches securely onto the docking station (873), and the drone (100) sends an alert to the command system (810) that the latching instruction (837) has been executed. The command system (810) instructs the drone (100) to compute the fluid flow rate (613) of the fluid flowing through the pipe (200), via a flow rate computation instruction (839). The drone (100) sends a response to the command system (810) that it has received the flow rate computation instruction (839). The drone (100) computes the fluid flow rate (613), using the first ultrasonic transducer (603), the second ultrasonic transducer (607) and the computational model (611), by using the system from
The fluid flow rate (613) is passed on to the flow analysis system (815), which determines a fluid flow performance using the second computer (817). The fluid flow performance is based on the fluid flow rate (613). If the fluid flow performance is not optimum, the flow analysis system (815) determines one or more adjustments to be made to the control parameters (879) in order to optimize the fluid flow performance. Then, the flow analysis system (815) instructs the flow control system (877) to make the one or more adjustments to the control parameters (879) via an adjustment command (843). In one or more embodiments, the fluid flow performance is the fluid flow rate (613) and a determination whether the fluid flow performance is optimum is based on a pre-defined minimum flow rate threshold. If the fluid flow rate (613) is greater than or equal to the minimum flow rate threshold, the fluid flow performance is determined as optimum. By contrast, if the fluid flow rate (613) is less than the minimum flow rate threshold, the fluid flow performance is determined as not optimum. Embodiments of the system in
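The threshold rule described above may be sketched in Python as follows; the function name and example values are illustrative only.

```python
def flow_performance_is_optimum(flow_rate, min_flow_rate_threshold):
    """Optimum if the measured flow rate is greater than or equal to the
    pre-defined minimum flow rate threshold."""
    return flow_rate >= min_flow_rate_threshold

# Example: a measured 0.43 m/s flow against a 0.50 m/s minimum threshold.
if not flow_performance_is_optimum(0.43, 0.50):
    print("Flow performance not optimum: adjust control parameters, "
          "for example by increasing the inlet pressure.")
```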
In one or more embodiments, the system in
As stated, an execution of the system in
The system in
In Step 903, the drone is securely latched onto the docking station using the connecting device. An example of latching the drone to the docking station is given in
In Step 909, the first ultrasonic transducer emits a first source signal into the fluid. As stated in another paragraph of this disclosure, the first source signal, such as the first source signal (605) in
In Step 913, a fluid flow rate is computed by using a computational model, based on, at least, the first propagated signal from Step 911. An example for the computational model in Step 913 is the computational model (611) in
As stated, a docking position, suitable for a drone to latch onto a docking station, is determined using an AI model, such as the AI model (865) in
AI model types may include, but are not limited to, generalized linear models, Bayesian regression, random forests, and deep models such as neural networks, convolutional neural networks, and recurrent neural networks. AI model types, whether they are considered deep or not, are usually associated with additional “hyperparameters” which further describe the model. For example, hyperparameters providing further detail about a neural network may include, but are not limited to, the number of layers in the neural network, choice of activation functions, inclusion of batch normalization layers, and regularization strength. Commonly, in the literature, the selection of hyperparameters surrounding an AI model is referred to as selecting the model “architecture.” Once an AI model type and hyperparameters have been selected, the AI model is trained to perform a task.
A notable example of an AI model that may be used as the AI model (865) is a neural network (NN), such as a convolutional neural network (CNN). A cursory introduction to a NN is provided herein. However, it is noted that many variations of a NN exist. Therefore, one with ordinary skill in the art will recognize that any variation of the NN (or any other AI model) may be employed without departing from the scope of this disclosure. Further, it is emphasized that the following discussion of a NN is a basic summary and should not be considered limiting.
A diagram of a neural network is shown in
Nodes (1002) and edges (1004) carry additional associations. Namely, every edge is associated with a numerical value. The edge numerical values, or even the edges (1004) themselves, are often referred to as “weights” or “parameters.” While training a neural network (1000), numerical values are assigned to each edge (1004). Additionally, every node (1002) is associated with a numerical variable and an activation function. Activation functions are not limited to any functional class, but traditionally follow the form A = ƒ(Σi ai·wi),
where i is an index that spans the set of “incoming” nodes (1002) and edges (1004) and f is a user-defined function. Incoming nodes (1002) are those that, when the neural network (1000) is viewed or depicted as a directed graph (as in
Two common examples are the sigmoid function ƒ(x) = 1/(1 + e^(−x)) and the rectified linear unit function ƒ(x) = max(0, x); however, many additional functions are commonly employed. Every node (1002) in a neural network (1000) may have a different associated activation function. Often, as a shorthand, activation functions are described by the function ƒ by which they are composed. That is, an activation function composed of a linear function ƒ may simply be referred to as a linear activation function without undue ambiguity.
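The two activation functions named above, and the weighted-sum form of a node value, may be sketched in Python as follows (illustrative only).

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: maps any real input into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Rectified linear unit: f(x) = max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def node_value(incoming_values, weights, activation):
    """Value of a node: the activation function applied to the weighted sum
    of the incoming node values."""
    return activation(np.dot(incoming_values, weights))

print(node_value(np.array([0.5, -1.0, 2.0]), np.array([0.2, 0.4, 0.1]), relu))     # 0.0
print(node_value(np.array([0.5, -1.0, 2.0]), np.array([0.2, 0.4, 0.1]), sigmoid))  # ~0.475
```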
When the neural network (1000) receives an input, the input is propagated through the network according to the activation functions and incoming node (1002) values and edge (1004) values to compute a value for each node (1002). That is, the numerical value for each node (1002) may change for each received input. Occasionally, nodes (1002) are assigned fixed numerical values, such as the value of 1, that are not affected by the input or altered according to edge (1004) values and activation functions. Fixed nodes (1002) are often referred to as “biases” or “bias nodes” (1006), displayed in
In some implementations, the neural network (1000) may contain specialized layers (1005), such as a normalization layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of this disclosure.
As noted, the training procedure for the neural network (1000) comprises assigning values to the edges (1004). To begin training, the edges (1004) are assigned initial values. These values may be assigned randomly, assigned according to a prescribed distribution, assigned manually, or by some other assignment mechanism. Once edge (1004) values have been initialized, the neural network (1000) may act as a function, such that it may receive inputs and produce an output. As such, at least one input is propagated through the neural network (1000) to produce an output. Training data is provided to the neural network (1000). Generally, training data consists of pairs of inputs and associated targets. The targets represent the “ground truth,” or the otherwise desired output, upon processing the inputs. In the context where the AI model in this disclosure is an image detection model, an example input may be a candidate image comprising an image of a docking station. An associated output, or target, may be a box within the candidate image, the box containing the image of the docking station. Another example input may be a candidate image that does not include an image of a docking station. A corresponding output, or target, may be a flag alerting that there is no image of a docking station in the candidate image. In the context where the AI model is a segmentation model, an example input may be a candidate image. An associated output, or target, may include a mask of a same size as the candidate image, each pixel of the mask having a value of zero or one. The mask may have a value of one if the pixel is detected as being part of an image of a docking station, or a value of zero if the pixel is detected as not being part of an image of a docking station. During training, the neural network (1000) processes at least one input from the training data and produces at least one output. Each neural network (1000) output is compared to its associated input data target. The comparison of the neural network (1000) output to the target is typically performed by a so-called “loss function,” although other names for this comparison function, such as “error function,” “misfit function,” and “cost function,” are commonly employed. Many types of loss functions are available, such as the mean-squared-error function; however, the general characteristic of a loss function is that the loss function provides a numerical evaluation of the similarity between the neural network (1000) output and the associated target. The loss function may also be constructed to impose additional constraints on the values assumed by the edges (1004), for example, by adding a penalty term, which may be physics-based, or a regularization term. Generally, the goal of a training procedure is to alter the edge (1004) values to promote similarity between the neural network (1000) output and associated target over the training data. Thus, the loss function is used to guide changes made to the edge (1004) values, typically through a process called “backpropagation.”
While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function over the edge (1004) values. The gradient indicates the direction of change in the edge (1004) values that results in the greatest change to the loss function. Because the gradient is local to the current edge (1004) values, the edge (1004) values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previously seen edge (1004) values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.
Once the edge (1004) values have been updated, or altered from their initial values, through a backpropagation step, the neural network (1000) will likely produce different outputs. Thus, the procedure of propagating at least one input through the neural network (1000), comparing the neural network (1000) output with the associated target with a loss function, computing the gradient of the loss function with respect to the edge (1004) values, and updating the edge (1004) values with a step guided by the gradient, is repeated until a termination criterion is reached. Common termination criteria are: reaching a fixed number of edge (1004) updates, otherwise known as an iteration counter; a diminishing learning rate; noting no appreciable change in the loss function between iterations; reaching a specified performance metric as evaluated on the data or a separate hold-out data set. Once the termination criterion is satisfied, and the edge (1004) values are no longer intended to be altered, the neural network (1000) is said to be “trained.”
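To make the iterate-until-termination idea concrete, the following Python sketch trains a single-layer (linear) model with gradient descent on synthetic data, stopping either at a fixed number of updates or when the loss no longer changes appreciably. It is a toy example under stated assumptions, not the training procedure of the disclosed AI model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: targets follow y = 2*x1 - 3*x2 + 1, plus small noise.
inputs = rng.standard_normal((200, 2))
targets = inputs @ np.array([2.0, -3.0]) + 1.0 + 0.01 * rng.standard_normal(200)

weights = rng.standard_normal(2)   # initial edge values (random initialization)
bias = 0.0
learning_rate = 0.1                # step size
max_updates = 1000                 # iteration-counter termination criterion
min_loss_change = 1e-8             # "no appreciable change" termination criterion

previous_loss = np.inf
for update in range(max_updates):
    outputs = inputs @ weights + bias
    loss = np.mean((outputs - targets) ** 2)          # mean-squared-error loss
    if abs(previous_loss - loss) < min_loss_change:   # termination criterion met
        break
    previous_loss = loss
    grad_outputs = 2.0 * (outputs - targets) / len(targets)
    grad_weights = inputs.T @ grad_outputs            # gradient of the loss w.r.t. the weights
    grad_bias = grad_outputs.sum()
    weights -= learning_rate * grad_weights           # gradient-descent update of the edge values
    bias -= learning_rate * grad_bias

print(update, loss, weights, bias)
```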
With respect to a CNN, it is useful to consider a structural grouping, or group, of weights. Such a group is herein referred to as a “filter.” The number of weights in a filter is typically much less than the number of inputs. In a CNN, the filters can be thought as “sliding” over, or convolving with, the inputs to form an intermediate output or intermediate representation of the inputs which still possesses a structural relationship. Like unto the neural network (1000), the intermediate outputs are often further processed with an activation function. Many filters may be applied to the inputs to form many intermediate representations. Additional filters may be formed to operate on the intermediate representations creating more intermediate representations. This process may be repeated as prescribed by a user. There is a “final” group of intermediate representations, wherein no more filters act on these intermediate representations. In some instances, the structural relationship of the final intermediate representations is ablated; a process known as “flattening.” The flattened representation may be passed to a neural network (1000) to produce a final output. Note, that in this context, the neural network (1000) is still considered part of the CNN. Like unto a neural network (1000), a CNN is trained, after initialization of the filter weights, and the edge (1004) values of the internal neural network (1000), if present, with the backpropagation process in accordance with a loss function.
The computations mentioned in this disclosure, or the commands performed by the autonomous flying system (863) may be performed by a computer, such as the first computer (859) in
The computer (1102) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. In some implementations, one or more components of the computer (1102) may be configured to operate within environments, including cloud-computing-based, local, global, or other environments (or a combination of environments).
At a high level, the computer (1102) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (1102) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).
The computer (1102) can receive requests over the network (1130) from a client application (for example, executing on another computer (1102)) and respond to the received requests by processing said requests in an appropriate software application. In addition, requests may also be sent to the computer (1102) from internal users (for example, from a command console or by other appropriate access method), external or third-parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer (1102) can communicate using a system bus (1103). In some implementations, any or all of the components of the computer (1102), both hardware or software (or a combination of hardware and software), may interface with each other or the interface (1104) (or a combination of both) over the system bus (1103) using an application programming interface (API) (1112) or a service layer (1113) (or a combination of the API (1112) and the service layer (1113)). The API (1112) may include specifications for routines, data structures, and object classes. The API (1112) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (1113) provides software services to the computer (1102) or other components (whether or not illustrated) that are communicably coupled to the computer (1102). The functionality of the computer (1102) may be accessible for all service consumers using this service layer. Software services, such as those provided by the service layer (1113), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or other suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (1102), alternative implementations may illustrate the API (1112) or the service layer (1113) as stand-alone components in relation to other components of the computer (1102) or other components (whether or not illustrated) that are communicably coupled to the computer (1102). Moreover, any or all parts of the API (1112) or the service layer (1113) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer (1102) includes an interface (1104). Although illustrated as a single interface (1104) in
The computer (1102) includes at least one computer processor (1105). Although illustrated as a single computer processor (1105) in
The computer (1102) also includes a memory (1106) that holds data for the computer (1102) or other components (or a combination of both) that can be connected to the network (1130). The memory may be a non-transitory computer readable medium. For example, memory (1106) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (1106) in
The application (1107) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (1102), particularly with respect to functionality described in this disclosure. For example, application (1107) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (1107), the application (1107) may be implemented as multiple applications (1107) on the computer (1102). In addition, although illustrated as integral to the computer (1102), in alternative implementations, the application (1107) can be external to the computer (1102).
There may be any number of computers such as the computer (1102) associated with, or external to, a computer system containing computer (1102), wherein each computer (1102) communicates over network (1130). Further, the term “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (1102), or that one user may use multiple computers such as the computer (1102).
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.