Methods of network peer identification and self-organization using unique tonal signatures and wells that use the methods

Information

  • Patent Grant
  • Patent Number
    10,711,600
  • Date Filed
    Tuesday, February 5, 2019
  • Date Issued
    Tuesday, July 14, 2020
Abstract
A method of communication using a wireless network is disclosed. A wireless transmission of a signal is received at a first node. The signal has a frequency signature. The frequency signature of the received signal is compared with a frequency signature of a previously received signal from a second node. If it is determined that the frequency signature of the received signal and the frequency signature of the previously received signal are within a predetermined range of similarity, the received signal and the previously received signal are accepted as having been transmitted by the second node.
Description
FIELD OF THE DISCLOSURE

The present disclosure relates generally to methods of acoustically communicating and/or to wells that use the methods.


BACKGROUND OF THE DISCLOSURE

An acoustic wireless network may be used to wirelessly transmit an acoustic signal, such as a vibration, via a tone transmission medium. In general, a given tone transmission medium will only permit communication within a certain frequency range; and, in some systems, this frequency range may be relatively small. Such systems may be referred to herein as spectrum-constrained systems. An example of a spectrum-constrained system is a well, such as a hydrocarbon well, that includes a plurality of communication nodes spaced-apart along a length thereof.


Known methods of installing and operating the nodes of such a network require significant time and energy. Nodes have been required to be installed on the casing in numeric order, requiring a large investment of time, an extended spatial footprint, and an elaborate logistical plan for casing movement. Once installed in the well, operation of the network requires ongoing investigation of optimal operating conditions and potential networked node pairings. This is an iterative manual process that requires significant testing time and also drains energy from all of the nodes used to send the commands that perform the tests.


The above method also incurs significant risk. Incorrect numbering of the nodes, or installation in the wrong order, will result in an unworkable network, and extensive reconfiguration may be necessary to correct the mistake, costing substantial operator time and draining energy from a number of nodes on the network. Accidental misconfiguration during operation (such as assigning a duplicate or out-of-order number to a node, or linking nodes in an endless loop) carries a similar risk.


A typical method of addressing the numbering issue uses a central authority to number all manufactured nodes sequentially. This guarantees uniqueness but does not address out-of-order installation, nor does it prevent accidental misconfiguration, and the approach still requires the central authority to touch each node (to assign the number), thereby limiting manufacturing efficiency.


An alternate technique has each node assign itself a random number and eliminates the requirement to install nodes in sequential order. This removes the out-of-order risk and greatly reduces the risk of operational misconfiguration, but it cannot guarantee uniqueness because it is possible that two nodes will randomly assign themselves the same number. To minimize (though still not eliminate) the risk of duplicate numbers, a typical implementation makes the random number very large. Unfortunately, nodes must routinely transmit this number as part of each communication, so using a very large number leads to additional energy drain via excessive transmitted tones. What is needed is a method of identifying nodes in a network after installation and without using energy-draining random identification numbers.


SUMMARY OF THE DISCLOSURE

Methods of acoustically communicating and wells that use the methods are disclosed herein. The methods generally use an acoustic wireless network including a plurality of nodes spaced-apart along a length of a tone transmission medium. According to disclosed aspects, there is provided a method of communication using a wireless network, such as an acoustic wireless network using one or more well components as a tone transmission medium as described herein. A wireless transmission of a signal is received at a first node. The signal has a frequency signature and/or an amplitude signature, which in some aspects may be a time-based frequency signature and/or a time-based amplitude signature. The frequency signature and/or the amplitude signature of the received signal is compared with a frequency signature and/or an amplitude signature of a previously received signal from a second node. If it is determined that the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal are within a predetermined range of similarity, the received signal and the previously received signal are accepted as having been transmitted by the second node.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic representation of a well configured to use the methods according to the present disclosure.



FIG. 2 is a flowchart depicting methods, according to the present disclosure, of determining a major frequency of a received acoustic tone.



FIG. 3 is a plot illustrating a received amplitude of a plurality of received acoustic tones as a function of time.



FIG. 4 is a plot illustrating a received amplitude of an acoustic tone from FIG. 3.



FIG. 5 is a plot illustrating frequency variation in the received acoustic tone of FIG. 4.



FIG. 6 is a table illustrating histogram data that may be used to determine the major frequency of the received acoustic tone of FIGS. 4-5.



FIG. 7 is a table illustrating a mechanism, according to the present disclosure, by which the major frequency of the acoustic tone of FIGS. 4-5 may be selected.



FIGS. 8A-8D depict digital representations of polyhistogram signatures of a received signal.



FIGS. 9A-9F are amplitude diagrams showing unique patterns or signatures of a received signal.



FIG. 10 is a flowchart depicting a method of using acoustic tonal signatures as a method of network peer identification and self-organization in an acoustic wireless network.





DETAILED DESCRIPTION AND BEST MODE OF THE DISCLOSURE

The following is a non-exhaustive list of definitions of several specific terms used in this disclosure (other terms may be defined or clarified in a definitional manner elsewhere herein). These definitions are intended to clarify the meanings of the terms used herein. It is believed that the terms are used in a manner consistent with their ordinary meaning, but the definitions are nonetheless specified here for clarity.


As used herein, the term “and/or” placed between a first entity and a second entity means one of (1) the first entity, (2) the second entity, and (3) the first entity and the second entity. Multiple entities listed with “and/or” should be construed in the same manner, i.e., “one or more” of the entities so conjoined. Other entities may optionally be present other than the entities specifically identified by the “and/or” clause, whether related or unrelated to those entities specifically identified. Thus, as a non-limiting example, a reference to “A and/or B,” when used in conjunction with open-ended language such as “comprising” may refer, in one embodiment, to A only (optionally including entities other than B); in another embodiment, to B only (optionally including entities other than A); in yet another embodiment, to both A and B (optionally including other entities). These entities may refer to elements, actions, structures, steps, operations, values, and the like.


As used herein, the phrase “at least one,” in reference to a list of one or more entities, should be understood to mean at least one entity selected from any one or more of the entities in the list of entities, but not necessarily including at least one of each and every entity specifically listed within the list of entities and not excluding any combinations of entities in the list of entities. This definition also allows that entities may optionally be present other than the entities specifically identified within the list of entities to which the phrase “at least one” refers, whether related or unrelated to those entities specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently, “at least one of A and/or B”) may refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including entities other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including entities other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other entities). In other words, the phrases “at least one,” “one or more,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C” and “A, B, and/or C” may mean A alone, B alone, C alone, A and B together, A and C together, B and C together, A, B and C together, and optionally any of the above in combination with at least one other entity.


If any patents, patent applications, or other references are incorporated by reference herein and (1) define a term in a manner that is inconsistent with and/or (2) are otherwise inconsistent with, either the non-incorporated portion of the present disclosure or any of the other incorporated references, the non-incorporated portion of the present disclosure shall control, and the term or incorporated disclosure therein shall only control with respect to the reference in which the term is defined and/or the incorporated disclosure was present originally.


As used herein, the terms “adapted” and “configured” mean that the element, component, or other subject matter is designed and/or intended to perform a given function. Thus, the use of the terms “adapted” and “configured” should not be construed to mean that a given element, component, or other subject matter is simply “capable of” performing a given function but that the element, component, and/or other subject matter is specifically selected, created, implemented, used, programmed, and/or designed for the purpose of performing the function. It is also within the scope of the present disclosure that elements, components, and/or other recited subject matter that is recited as being adapted to perform a particular function may additionally or alternatively be described as being configured to perform that function, and vice versa.


As used herein, the phrase, “for example,” the phrase, “as an example,” and/or simply the term “example,” when used with reference to one or more components, features, details, structures, embodiments, and/or methods according to the present disclosure, are intended to convey that the described component, feature, detail, structure, embodiment, and/or method is an illustrative, non-exclusive example of components, features, details, structures, embodiments, and/or methods according to the present disclosure. Thus, the described component, feature, detail, structure, embodiment, and/or method is not intended to be limiting, required, or exclusive/exhaustive; and other components, features, details, structures, embodiments, and/or methods, including structurally and/or functionally similar and/or equivalent components, features, details, structures, embodiments, and/or methods, are also within the scope of the present disclosure.


As used herein, “fluid” refers to gases, liquids, and combinations of gases and liquids, as well as to combinations of gases and solids, and combinations of liquids and solids.



FIGS. 1-10 provide examples of methods 200 and/or 1000, according to the present disclosure, and/or of wells 20 including acoustic wireless networks 50 that may include and/or use the methods. Elements that serve a similar, or at least substantially similar, purpose are labeled with like numbers in each of FIGS. 1-10, and these elements may not be discussed in detail herein with reference to each of FIGS. 1-10. Similarly, all elements may not be labeled in each of FIGS. 1-10, but reference numerals associated therewith may be used herein for consistency. Elements, components, and/or features that are discussed herein with reference to one or more of FIGS. 1-10 may be included in and/or used with any of FIGS. 1-10 without departing from the scope of the present disclosure. In general, elements that are likely to be included in a particular embodiment are illustrated in solid lines, while elements that are optional are illustrated in dashed lines. However, elements that are shown in solid lines may not be essential and, in some embodiments, may be omitted without departing from the scope of the present disclosure.



FIG. 1 is a schematic representation of a well 20 configured to use methods 200 and/or 1000 according to the present disclosure. Well 20 includes a wellbore 30 that extends within a subsurface region 90. Wellbore 30 also may be referred to herein as extending between a surface region 80 and subsurface region 90 and/or as extending within a subterranean formation 92 that extends within the subsurface region. Subterranean formation 92 may include a hydrocarbon 94. Under these conditions, well 20 also may be referred to herein as, or may be, a hydrocarbon well 20, a production well 20, and/or an injection well 20.


Well 20 also includes an acoustic wireless network 50. The acoustic wireless network also may be referred to herein as a downhole acoustic wireless network 50 and includes a plurality of nodes 60, which are spaced-apart along a tone transmission medium 100 that extends along a length of wellbore 30. In the context of well 20, tone transmission medium 100 may include a downhole tubular 40 that may extend within wellbore 30, a wellbore fluid 32 that may extend within wellbore 30, a portion of subsurface region 90 that is proximal wellbore 30, a portion of subterranean formation 92 that is proximal wellbore 30, and/or a cement 34 that may extend within wellbore 30 and/or that may extend within an annular region between wellbore 30 and downhole tubular 40. Downhole tubular 40 may define a fluid conduit 44.


Nodes 60 may include one or more encoding nodes 62, which may be configured to generate an acoustic tone 70 and/or to induce the acoustic tone within tone transmission medium 100. Nodes 60 also may include one or more decoding nodes 64, which may be configured to receive acoustic tone 70 from the tone transmission medium. A given node 60 may function as both an encoding node 62 and a decoding node 64 depending upon whether the given node is transmitting an acoustic tone (i.e., functioning as the encoding node) or receiving the acoustic tone (i.e., functioning as the decoding node). Stated another way, the given node may include both encoding and decoding functionality, or structures, with these structures being selectively used depending upon whether or not the given node is encoding the acoustic tone or decoding the acoustic tone.


In wells 20, transmission of acoustic tone 70 may be along a length of wellbore 30. As such, the transmission of the acoustic tone may be linear, at least substantially linear, and/or directed, such as by tone transmission medium 100. Such a configuration may be in contrast to more conventional wireless communication methodologies, which generally may transmit a corresponding wireless signal in a plurality of directions, or even in every direction.



FIG. 2 is a flowchart depicting methods 200, according to the present disclosure, of determining a major frequency of a received acoustic tone that is transmitted via a tone transmission medium using histograms generated from the received acoustic tone. Methods 200 may be performed using any suitable structure and/or structures. As an example, methods 200 may be used by an acoustic wireless network, such as acoustic wireless network 50 of FIG. 1. Under these conditions, methods 200 may be used to communicate along a length of wellbore 30.


Methods 200 include receiving a received acoustic tone at 210, estimating a frequency of the received acoustic tone at 220, and separating a tone receipt time into a plurality of time intervals at 230. Methods 200 also include calculating a frequency variation at 240, selecting a subset of the plurality of time intervals at 250, and averaging a plurality of discrete frequency values at 260. Methods 200 further may include transmitting a transmitted acoustic tone at 270.


Receiving the received acoustic tone at 210 may include receiving with a decoding node of an acoustic wireless network. Additionally or alternatively, the receiving at 210 may include receiving from the tone transmission medium and/or receiving for a tone receipt time. The receiving at 210 may include receiving for any suitable tone receipt time. As examples, the tone receipt time may be at least 1 microsecond, at least 10 microseconds, at least 25 microseconds, at least 50 microseconds, at least 75 microseconds, or at least 100 microseconds. The receiving at 210 also may include receiving at any suitable frequency, or tone frequency. Examples of the tone frequency include frequencies of at least 10 kilohertz (kHz), at least 25 kHz, at least 50 kHz, at least 60 kHz, at least 70 kHz, at least 80 kHz, at least 90 kHz, at least 100 kHz, at least 200 kHz, at least 250 kHz, at least 400 kHz, at least 500 kHz, and/or at least 600 kHz. Additionally or alternatively, the tone frequency may be at most 1 megahertz (MHz), at most 800 kHz, at most 600 kHz, at most 400 kHz, at most 200 kHz, at most 150 kHz, at most 100 kHz, and/or at most 80 kHz.


The receiving at 210 may include receiving with any suitable decoding node, such as decoding node 64 of FIG. 1. Additionally or alternatively, the receiving at 210 may include receiving with an acoustic tone receiver. Examples of the acoustic tone receiver include a piezoelectric tone receiver, a piezoresistive tone receiver, a resonant MEMS tone receiver, a non-resonant MEMS tone receiver, and/or a receiver array.


An example of a plurality of received acoustic tones is illustrated in FIG. 3, while an example of a single received acoustic tone is illustrated in FIG. 4. FIGS. 3-4 both illustrate amplitude of the received acoustic tone as a function of time (e.g., the tone receipt time). As illustrated in FIGS. 3-4, the amplitude of the received acoustic tone may vary significantly during the tone receipt time. This variation may be caused by non-idealities within the tone transmission medium and/or with the tone transmission process. Examples of these non-idealities are discussed herein and include acoustic tone reflection points within the tone transmission medium, generation of harmonics during the tone transmission process, ringing within the tone transmission medium, and/or variations in a velocity of the acoustic tone within the tone transmission medium. Collectively, these non-idealities may make it challenging to determine, to accurately determine, and/or to reproducibly determine the major frequency of the received acoustic tone, and methods 200 may facilitate this determination.


Estimating the frequency of the received acoustic tone at 220 may include estimating the frequency of the received acoustic tone as a function of time and/or during the tone receipt time. This may include estimating a plurality of discrete frequency values received at a corresponding plurality of discrete times within the tone receipt time and may be accomplished in any suitable manner.


As an example, the received acoustic tone may include, or be, a received acoustic wave that has a time-varying amplitude within the tone receipt time, as illustrated in FIGS. 3-4. The time-varying amplitude may define an average amplitude, and the estimating at 220 may include measuring a cycle time between the time-varying amplitude and the average amplitude (222), measuring a period of individual cycles of the received acoustic wave (224), and/or measuring a plurality of zero-crossing times of the received acoustic wave (226).
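As an illustrative, non-limiting sketch of the estimating at 220 (not code from the present disclosure; the function name, the zero-crossing input, and the example values are assumptions introduced here for illustration only), successive zero-crossing times may be converted into discrete frequency values as follows:

```python
# Illustrative sketch only: estimate discrete frequency values from the
# zero-crossing times of a received acoustic wave. Adjacent zero crossings
# are half a period apart, so the local frequency is 1 / (2 * dt).
def estimate_frequencies_from_zero_crossings(zero_crossing_times):
    estimates = []
    for t_prev, t_next in zip(zero_crossing_times, zero_crossing_times[1:]):
        dt = t_next - t_prev
        if dt > 0:
            # Pair each frequency estimate with the time at which it was observed.
            estimates.append((t_next, 1.0 / (2.0 * dt)))
    return estimates

# Example: a 100 kHz tone crosses zero every 5 microseconds.
crossings = [i * 5e-6 for i in range(10)]
print(estimate_frequencies_from_zero_crossings(crossings)[:3])
```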


The estimating at 220 may be used to generate a dataset that represents the frequency of the received acoustic tone as a function of time during the tone receipt time. An example of such a dataset is illustrated in FIG. 5. As may be seen in FIG. 5, the frequency of the received acoustic tone includes time regions where there is a relatively higher amount of variation, such as the time regions from T0 to T1 and from T2 to T3 in FIG. 5, and a time region where there is a relatively lower amount of variation, such as the time region from T1 to T2 in FIG. 5.


Separating the tone receipt time into the plurality of time intervals at 230 may include separating such that each time interval in the plurality of time intervals includes a subset of the plurality of discrete frequency values that was received and/or determined during that time interval. It is within the scope of the present disclosure that each time interval in the plurality of time intervals may be less than a threshold fraction of the tone receipt time. Examples of the threshold fraction of the tone receipt time include threshold fractions of less than 20%, less than 15%, less than 10%, less than 5%, or less than 1%. Stated another way, the separating at 230 may include separating the tone receipt time into at least a threshold number of time intervals. Examples of the threshold number of time intervals include at least 5, at least 7, at least 10, at least 20, or at least 100 time intervals. It is within the scope of the present disclosure that a duration of each time interval in the plurality of time intervals may be the same, or at least substantially the same, as a duration of each other time interval in the plurality of time intervals. However, this is not required in all implementations, and the duration of one or more time intervals in the plurality of time intervals may differ from the duration of one or more other time intervals in the plurality of time intervals.


Calculating the frequency variation at 240 may include calculating any suitable frequency variation within each time interval and/or within each subset of the plurality of discrete frequency values. The calculating at 240 may be performed in any suitable manner and/or may calculate any suitable measure of variation, or frequency variation. As an example, the calculating at 240 may include calculating a statistical parameter indicative of variability within each subset of the plurality of discrete frequency values. As another example, the calculating at 240 may include calculating a frequency range within each subset of the plurality of discrete frequency values. As yet another example, the calculating at 240 may include calculating a frequency standard deviation of, or within, each subset of the plurality of discrete frequency values. As another example, the calculating at 240 may include scoring each subset of the plurality of discrete frequency values.


As yet another example, the calculating at 240 may include assessing a margin, or assessing the distinctiveness of a given frequency in a given time interval relative to the other frequencies detected during the given time interval. This may include using a magnitude and/or a probability density to assess the distinctiveness and/or using a difference between a magnitude of a most common histogram element and a second most common histogram element within the given time interval to assess the distinctiveness.


As a more specific example, and when the calculating at 240 includes calculating the frequency range, the calculating at 240 may include binning, or separating, each subset of the plurality of discrete frequency values into bins. This is illustrated in FIG. 6. Therein, a number of times that a given frequency (i.e., represented by bins 1-14) is observed within a given time interval (i.e., represented by time intervals 1-10) is tabulated. A zero value for a given frequency bin-time interval combination indicates that the given frequency bin was not observed during the given time interval, while a non-zero number indicates the number of times that the given frequency bin was observed during the given time interval.


Under these conditions, the calculating at 240 may include determining a span, or range, of the frequency bins. In the example of FIG. 6, the uppermost bin that includes at least one count is bin 14, while the lowermost bin that includes at least one count is bin 11. Thus, the span, or range, is 4, as indicated.
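A minimal sketch of this span calculation (illustrative only; the dictionary-based histogram representation and the function name are assumptions introduced here) is shown below:

```python
# Illustrative sketch only: compute the span of occupied frequency bins within
# one time interval, as tabulated in FIG. 6. `interval_counts` maps a bin
# index to the number of times that bin was observed in the interval.
def bin_span(interval_counts):
    occupied = [b for b, count in interval_counts.items() if count > 0]
    if not occupied:
        return 0
    # Inclusive span: counts only in bins 11 through 14 give a span of 4.
    return max(occupied) - min(occupied) + 1

print(bin_span({11: 2, 12: 5, 13: 1, 14: 1}))  # -> 4
```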


Selecting the subset of the plurality of time intervals at 250 may include selecting a subset within which the frequency variation, as determined during the calculating at 240, is less than a threshold frequency variation. Experimental data suggests that time intervals within which the frequency variation is less than the threshold frequency variation represent time intervals that are more representative of the major frequency of the received acoustic tone. As such, the selecting at 250 includes selectively determining which time intervals are more representative of, or more likely to include, the major frequency of the received acoustic tone, thereby decreasing noise in the overall determination of the major frequency of the received acoustic tone.


The selecting at 250 may include selecting a continuous range within the tone receipt time or selecting two or more ranges that are spaced-apart in time within the tone receipt time. Additionally or alternatively, the selecting at 250 may include selecting at least 2, at least 3, at least 4, or at least 5 time intervals from the plurality of time intervals.


The selecting at 250 additionally or alternatively may include selecting such that the frequency variation within each successive subset of the plurality of discrete frequency values decreases relative to a prior subset of the plurality of discrete frequency values and/or remains unchanged relative to the prior subset of the plurality of discrete frequency values.


An example of the selecting at 250 is illustrated in FIG. 6. In this example, time intervals with a span of less than 10 are selected and highlighted in the table. These include time intervals 1, 4, and 5.


Averaging the plurality of discrete frequency values at 260 may include averaging within the subset of the plurality of time intervals that was selected during the selecting at 250 and/or averaging to determine the major frequency of the received acoustic tone. The averaging at 260 may be accomplished in any suitable manner. As an example, the averaging at 260 may include calculating a statistical parameter indicative of an average of the plurality of discrete frequency values within the subset of the plurality of time intervals. As another example, the averaging at 260 may include calculating a mean, median, or mode value of the plurality of discrete frequency values within the subset of the plurality of time intervals.


As a more specific example, and with reference to FIGS. 6-7, the averaging at 260 may include summing the bins for the time intervals that were selected during the selecting at 250. As discussed, and using one criterion for the selecting at 250, time intervals 1, 4, and 5 from FIG. 6 may be selected. The counts in these three time intervals then may be summed, bin by bin, to arrive at FIG. 7, and the bin with the most counts, which represents the most common, or mode, frequency of the selected time intervals, may be selected. In the example of FIG. 7, this may include selecting bin 12, or the frequency of bin 12, as the major frequency of the received acoustic tone.
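Tying the selecting at 250 and the averaging at 260 together, a minimal sketch (illustrative only; the span threshold of 10 and the dictionary-based histograms are assumptions mirroring the example of FIGS. 6-7) might look like the following:

```python
# Illustrative sketch only: keep the time intervals whose bin span is below a
# threshold, sum their histograms bin by bin, and return the most common
# (mode) bin as the major-frequency bin, as in FIG. 7.
from collections import Counter

def major_frequency_bin(histograms, span_threshold=10):
    # histograms: one {bin_index: count} dict per time interval.
    def span(h):
        occupied = [b for b, c in h.items() if c > 0]
        return (max(occupied) - min(occupied) + 1) if occupied else 0

    selected = [h for h in histograms if span(h) < span_threshold]
    combined = Counter()
    for h in selected:
        combined.update(h)
    return combined.most_common(1)[0][0] if combined else None
```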


Transmitting the transmitted acoustic tone at 270 may include transmitting with an encoding node of the acoustic wireless network. The transmitting at 270 may be subsequent, or responsive, to the averaging at 260; and a transmitted frequency of the transmitted acoustic tone may be based, at least in part, on, or equal to, the major frequency of the received acoustic tone. Stated another way, the transmitting at 270 may include repeating, or propagating, the major frequency of the received acoustic tone along the length of the tone transmission medium, such as to permit and/or facilitate communication along the length of the tone transmission medium.


According to an aspect of the disclosure, an acoustic telemetry packet may be sent from node to node along a casing in a wellbore. A node in a fixed location, such as a hydrophone, may listen as the telemetry progresses down the acoustic wireless network, and a representation of the received acoustic signal may be recorded. This representation is known colloquially as a histogram or polyhistogram and may be generated as previously disclosed, or by collecting zero-crossings interpreted in time bins, such as 1 millisecond-long bins, by a receiver algorithm when receiving a telemetry packet. Other means of analyzing the frequency and/or amplitude of a received acoustic signal may be used, such as: performing a Fourier transform of the received acoustic signal; performing a fast Fourier transform (FFT) of the received acoustic signal; performing a discrete Fourier transform of the received acoustic signal; performing a wavelet transform of the received acoustic signal; performing a multiple least squares analysis of the received acoustic signal; computing margin and span with the zero crossing rate (ZCR); computing margin and span with fast Fourier transforms (FFT); and the like. The process of identifying frequencies in a wireless network using histograms is described in co-pending and commonly owned U.S. Patent Application Publication No. 2018/058,204, the disclosure of which is incorporated by reference herein in its entirety.
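As a minimal sketch of one such representation (illustrative only; uniform 1 millisecond bins, zero-crossing timestamps, and the function name are assumptions introduced here), zero crossings may be counted per time bin as follows:

```python
# Illustrative sketch only: build a polyhistogram-style representation of a
# received signal by counting zero crossings in consecutive time bins.
# The crossing times and the bin width must share the same unit (microseconds below).
def polyhistogram(zero_crossing_times, bin_width):
    if not zero_crossing_times:
        return []
    n_bins = int(max(zero_crossing_times) // bin_width) + 1
    counts = [0] * n_bins
    for t in zero_crossing_times:
        counts[int(t // bin_width)] += 1
    return counts

# Example: a steady 100 kHz tone crosses zero every 5 microseconds, giving
# roughly 200 crossings per 1 ms bin.
crossings = list(range(0, 5000, 5))              # 5 ms of crossings, in microseconds
print(polyhistogram(crossings, bin_width=1000))  # -> [200, 200, 200, 200, 200]
```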


As seen in FIGS. 8A-8D, each frequency transmitted by a given node has its own pattern of higher and lower graded polyhistogram bins when received by a specific receiver. These patterns or signatures, which may be termed tonal signatures, polyhistogram signatures, or the like, vary by node due to the physical variability, however slight, of the node assembly and installation, and also may be impacted by the distance from the transmitting node to the receiving node. Furthermore, other non-idealities which may impact frequency signatures and/or amplitude signatures include acoustic tone reflection points within the tone transmission medium, generation of harmonics during the tone transmission process, ringing within the tone transmission medium, variations in a velocity of the acoustic tone within the tone transmission medium, and the specific arrangement of distances and material properties (density, refraction index, etc.) unique to each transmit/receiver pair of nodes. For example, FIG. 8A shows a polyhistogram signature 802 for a listener (such as a node in an acoustic wireless network) when a specific node, such as node 12, transmits two specific tones. As shown in FIG. 8B, the listener may hear a similar polyhistogram signature 804 when node 12 later transmits the same two tones. While the digits of the two polyhistogram signatures 802, 804 are not identical, known pattern recognition techniques may be used to determine that the signatures are sufficiently similar for the purposes of identifying a unique source of the transmitted tones. As shown in FIG. 8C, the same listener hears a different polyhistogram signature 806 when a different node, such as node 13, transmits the same two tones. As shown in FIG. 8D, the listener hears a polyhistogram signature 808 when node 13 later transmits the same two tones. Polyhistogram signature 808 is similar to polyhistogram signature 806, but differs from polyhistogram signatures 802 and 804.
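As a minimal sketch of one way such a similarity determination could be made (the normalized absolute-difference measure, the tolerance value, and the example digit patterns are assumptions introduced here; this is not the particular pattern recognition technique of the disclosure), two signatures may be compared bin by bin:

```python
# Illustrative sketch only: decide whether two equal-length signatures are
# within a predetermined range of similarity by comparing them bin by bin and
# normalizing the total difference by the larger signature magnitude.
def signatures_match(sig_a, sig_b, tolerance=0.15):
    if len(sig_a) != len(sig_b) or not sig_a:
        return False
    difference = sum(abs(a - b) for a, b in zip(sig_a, sig_b))
    scale = max(sum(sig_a), sum(sig_b), 1)
    return difference / scale <= tolerance

# Hypothetical digit patterns in the spirit of FIGS. 8A-8C: repeated tones from
# the same node match, while a different node's signature does not.
print(signatures_match([9, 7, 1, 0, 3], [8, 7, 2, 0, 3]))  # -> True
print(signatures_match([9, 7, 1, 0, 3], [2, 1, 8, 6, 0]))  # -> False
```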


The unique patterns or signatures are present not only in the time or frequency domains, as viewable by the corresponding polyhistogram, but also in the amplitude domain, as shown by FIGS. 9A-9F, due to the physical propagation of various acoustic modes through the specific arrangement of distances and material properties (density, refraction index, etc.) unique to the transmit/receiver pair of nodes. FIG. 9A shows the time-based amplitude 902 of an acoustic signal from a transmitting node as received by a first receiving node. Both nodes are part of an acoustic wireless network associated with a wellbore and are attached to a casing of the wellbore. The first receiving node is positioned inches (centimeters) from the transmitting node, but on the opposite side of the casing, i.e., 180 degrees around the casing. FIG. 9B shows the time-based amplitude 904 of the identical signal as it is received by a second receiving node. The second receiving node is positioned inches (centimeters) from the transmitting node, but on the same side of the casing. FIG. 9C shows the time-based amplitude 906 of the signal as it is received by a third receiving node. The third receiving node is positioned 40 feet (about 12 meters) from the transmitting node, but on the opposite side of the casing, i.e., 180 degrees around the casing. FIG. 9D shows the time-based amplitude 908 of the signal as it is received by a fourth receiving node, which is positioned 40 feet (about 12 meters) from the transmitting node, but on the same side of the casing. It can be seen that a signal is received differently based on the distance and position of a receiving node relative to a transmitting node. Another example is shown in FIGS. 9E and 9F, which depict time-based amplitudes 910, 912 of a different signal as received by the third and fourth receiving nodes, respectively.


Because the received patterns or signatures—whether in the time, frequency, or amplitude domains—are unique and repeatable for an extended duration between a given transmit/receive set of nodes, multiple uses for the patterns or signatures can be derived. Additionally, the unique received patterns or signatures are receiver-specific. In other words, a signal from a first transmit/receive node to a second transmit/receive node will generate a pattern or signature that is different from a pattern or signature generated by a signal from the second transmit/receive node to the first transmit/receive node. The unique nature of the patterns or signatures may be used to infer physical properties between two nodes.


Examples of uses of the disclosed aspects include using the unique frequency and/or amplitude signatures, which define links between the various nodes, to form a network independent of human intervention. The unique frequency and/or amplitude signatures can identify specific nodes, and the network may be established using the identified nodes. Such a network may adapt over time, and be optimized on a packet-by-packet basis, either in response to a changing physical environment or according to a specific goal for network use, such as minimizing energy usage between nodes or across the network as a whole, maximizing a data rate, minimizing an error rate, minimizing latency, guaranteeing a worst-case data rate, guaranteeing a worst-case latency, autonomously balancing energy usage across multiple nodes, autonomously balancing data transmission loads, and the like. The signatures may also be used to infer physical properties between any two nodes, such as a changing nature of the transmission medium, which in an aspect may be an acoustic transmission medium. In another aspect, the unique signals may help determine, on a case-by-case basis, the networking parameters between each node in the network. Such parameters may include locally-ideal frequency bands or timing parameters, which may be tailored specifically for a particular neighboring node and may be different from those of other nodes in the region, as described in commonly-owned U.S. patent application Ser. No. 16/139,427, which is incorporated by reference herein in its entirety.


In another aspect, a node may change its preferred communication partner to another node in response (at least in part) to changes noted in the acoustic and/or frequency signatures. Additionally or alternatively, a node may change one or more of its communication parameters in response (at least in part) to changes noted in the signatures. Such communication parameters may include ping time (i.e., the actual duration a signal is transmitted), wait time (i.e., a predetermined duration after a ping time where no signal is transmitted), symbol time (i.e., a duration equal to the sum of the ping time and its associated wait time), transmit amplitude, type of error correction, bits per tone, type of data compression, communication link prioritization, communication frequency band, modulation strategy, and the like. These communication parameters may be selected for communication between two nodes, a group of nodes, or all nodes in the network.
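A minimal sketch of such a per-link parameter set (field names and units are assumptions made here; the derived symbol time follows the definition above as the sum of the ping time and its associated wait time) could be:

```python
# Illustrative sketch only: a per-link set of communication parameters; the
# symbol time is derived as ping time plus wait time, per the definitions above.
from dataclasses import dataclass

@dataclass
class LinkParameters:
    ping_time_s: float          # actual duration a signal is transmitted
    wait_time_s: float          # quiet period following each ping
    transmit_amplitude: float
    bits_per_tone: int
    frequency_band_hz: tuple    # (low, high) edges of the communication band

    @property
    def symbol_time_s(self) -> float:
        return self.ping_time_s + self.wait_time_s

link = LinkParameters(ping_time_s=0.005, wait_time_s=0.020,
                      transmit_amplitude=1.0, bits_per_tone=2,
                      frequency_band_hz=(80_000, 120_000))
print(link.symbol_time_s)  # ping time + wait time
```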


In still another aspect, a communication link and/or a network parameter may be changed or modified because the frequency signature and/or the amplitude signature of a received signal is not as expected. For example, a pair of nodes may be transmitting and receiving a series of signals. If one of the received signals, expected or assumed to have been sent from the transmitting node, has a frequency signature and/or an amplitude signature that is not in line with previously received signals from the transmitting node, then the receiving node may change or modify the communication link and/or network parameter(s) to maintain or improve communication between the nodes and in the network overall.
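A minimal sketch of such an adjustment policy (illustrative only; the specific responses, the reuse of the `signatures_match` and `LinkParameters` sketches above, and the mutable link object are assumptions) is:

```python
# Illustrative sketch only: if a received signature is not in line with the
# previously recorded signature for the expected partner, first retune the
# existing link, and otherwise fall back to an alternate candidate link.
def handle_unexpected_signature(received_sig, expected_sig, link, fallback_links):
    if signatures_match(received_sig, expected_sig):
        return link                    # signature as expected; keep the link
    # Example adjustment: lengthen the wait time to tolerate extra ringing.
    link.wait_time_s *= 1.5
    if fallback_links:
        # Optionally switch to a partner whose recent signatures remain stable.
        return fallback_links[0]
    return link
```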


In an additional aspect, the frequency signature and/or the amplitude signature of a received signal may be used to establish a preferred signal traversal path along part or all of the network in terms of acoustic strength rather than physical proximity. Such a preferred signal path may include various communication links and communication parameters at different locations along the network.
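As a minimal sketch (illustrative only; the per-link acoustic strengths are assumed to have been measured already, and a standard shortest-path search is used with the cost of a link taken as the inverse of its acoustic strength):

```python
# Illustrative sketch only: choose a preferred signal traversal path by
# acoustic strength rather than physical proximity. Each link's cost is the
# inverse of its measured strength, so stronger links are preferred.
import heapq

def preferred_path(link_strengths, source, target):
    # link_strengths: {(node_a, node_b): acoustic_strength}, treated as symmetric.
    graph = {}
    for (a, b), strength in link_strengths.items():
        graph.setdefault(a, []).append((b, 1.0 / strength))
        graph.setdefault(b, []).append((a, 1.0 / strength))

    best = {source: (0.0, None)}            # node -> (total cost, previous node)
    queue = [(0.0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > best.get(node, (float("inf"), None))[0]:
            continue
        for neighbor, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(neighbor, (float("inf"), None))[0]:
                best[neighbor] = (new_cost, node)
                heapq.heappush(queue, (new_cost, neighbor))

    if target not in best:
        return []                            # no acoustic route found
    path, node = [], target
    while node is not None:
        path.append(node)
        node = best[node][1]
    return list(reversed(path))

# Example: node 2 is skipped because the direct 1-3 link is acoustically strong.
print(preferred_path({(1, 2): 0.2, (2, 3): 0.2, (1, 3): 0.9}, 1, 3))  # -> [1, 3]
```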


In still another aspect of the disclosure, the frequency signature and/or the amplitude signature of a received signal, once identified as being from a specific transmitting node in the network, may itself be used as a unique identifier of the transmitting node. In this case, the transmitting node does not need to include a node identifier in the message or data being transmitted by the signal; the signature is sufficient to identify the transmitting node. The signal can therefore be used to transmit actual information, thereby saving transmission power and time.
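A minimal sketch of such signature-based identification (illustrative only; it reuses the `signatures_match` comparison sketched earlier, and the dictionary of learned signatures is an assumption):

```python
# Illustrative sketch only: identify the transmitting node by matching the
# received signature against previously learned signatures, so no node
# identifier needs to be carried in the transmitted message itself.
def identify_sender(received_sig, known_signatures):
    # known_signatures: {node_id: previously recorded signature for that node}
    for node_id, stored_sig in known_signatures.items():
        if signatures_match(received_sig, stored_sig):
            return node_id
    return None   # unknown sender; the signature has not yet been learned
```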


Changes in the physical properties of received signals over time may be noted by recording in memory the pattern changes over time. Such recorded signal changes may be caused by and therefore used to determine changes in the physical surroundings such as the contents of the wellbore tubular, rate of flow or flow regime of the contents, corrosion, perforation or other tubular failure, changes in tubular thickness or eccentricity, potential wellbore blockages, and the like. Changes in the signal over time may also be the result of node hardware failures or declining battery power. Recording and analyzing the changes in signal properties over time may permit one to predict and mitigate node failures.
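A minimal sketch of recording signatures over time and flagging drift (illustrative only; the bounded history length, the drift tolerance, and the reuse of the `signatures_match` sketch above are assumptions):

```python
# Illustrative sketch only: keep a bounded history of received signatures for
# a link and flag the link when the newest signature has drifted away from the
# oldest retained one, which may indicate changing surroundings or a failing node.
from collections import deque

class SignatureHistory:
    def __init__(self, max_records=100, drift_tolerance=0.30):
        self.records = deque(maxlen=max_records)
        self.drift_tolerance = drift_tolerance

    def add(self, signature):
        self.records.append(list(signature))

    def has_drifted(self):
        if len(self.records) < 2:
            return False
        oldest, newest = self.records[0], self.records[-1]
        return not signatures_match(oldest, newest, tolerance=self.drift_tolerance)
```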


The unique frequency/amplitude signatures of each node connection are affected by their surroundings in the wellbore. By analyzing these signatures, it may be possible to determine which node is most proximate to a condition of interest to be sensed using a node's onboard sensors. Such condition of interest may include a well perforation, a well inflow, a wellbore blockage, a re-injection operation, a location of a wired or autonomous tool in the well, and the like.


Advantages of the disclosed aspects are numerous. For example, node identification and network assembly can occur without human intervention. Determination of neighboring nodes may occur in terms of optimal acoustic strength instead of physical proximity. This allows for automated assignment of non-sequential node identifications at the time of manufacture, and the nodes may then be installed on the casing in arbitrary order but tracked for physical location (using barcode scans, RFID tags, etc.) during installation into the well. Alternatively, neighbor pairings may be optimized on a packet-by-packet basis based on another goal such as lowest energy usage, highest data (lowest error) rate, lowest latency, autonomous load balancing, etc.


Another advantage is that dynamic optimization of networking parameters and optimal communication partners may be performed on a case-by-case basis. Such optimization increases network scalability, thereby supporting operation of a larger network such as in deeper wells. For example, a neighboring node's transmission characteristics may suggest that, at a particular moment, the neighbor is unsuitable for conveying an important or large packet.


Another advantage is that with extremely sensitive dynamic systems, changes in the physical properties of received signals over time may be noted by recording in memory the pattern changes over time. Still another advantage is that locally-ideal frequency bands or timing parameters can be tailored specifically for a particular neighboring node that may be different from other nodes in the region, as described in commonly-owned U.S. patent application Ser. No. 16/139,427, “Method and System for Performing Operations with Communications” and filed Oct. 13, 2017, the disclosure of which is incorporated by reference herein in its entirety.


Still another advantage is that the disclosed wireless acoustic network can issue predictive trouble tickets that focus operator intervention on high-risk nodes. Yet another advantage is that the disclosed wireless acoustic network can determine which node is most proximate to a condition of interest to be sensed using a node's onboard sensors.



FIG. 10 is a flowchart depicting a method 1000 of communicating using a wireless network according to the present disclosure. The wireless network may be an acoustic wireless network having a tone transmission medium. At block 1002 a wireless transmission of a signal is received at a first node, where the received signal has a frequency signature and/or an amplitude signature. At block 1004 the frequency signature and/or the amplitude signature of the received signal is compared with a frequency signature and/or an amplitude signature of a previously received signal received from a second node. At block 1006 it is determined whether the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal are within a predetermined range of similarity. If so, at block 1008 the received signal and the previously received signal are accepted as having been transmitted by the second node. At block 1010 the received signal and the previously received signal are defined as identifying a communication link between the first node and the second node.
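As a minimal sketch tying blocks 1002-1010 together (illustrative only; it reuses the `signatures_match` comparison sketched earlier, and the link registry is an assumption introduced here):

```python
# Illustrative sketch only: compare the received signal's signature with the
# previously received signature attributed to the second node (blocks 1004-1006);
# if they are similar, accept the signal (block 1008) and record the pair as
# identifying the communication link between the two nodes (block 1010).
def process_received_signal(received_sig, previous_sig, first_node, second_node, links):
    if signatures_match(received_sig, previous_sig):
        links[(first_node, second_node)] = received_sig
        return True    # accepted as having been transmitted by the second node
    return False       # signatures differ; do not attribute to the second node
```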


The acoustic wireless network and/or the nodes thereof, which are disclosed herein, including acoustic wireless network 50 and/or nodes 60 of FIG. 1, may include and/or be any suitable structure, device, and/or devices that may be adapted, configured, designed, constructed, and/or programmed to perform the functions discussed herein with reference to any of the methods disclosed herein. As examples, the acoustic wireless network and/or the associated nodes may include one or more of an electronic controller, a dedicated controller, a special-purpose controller, a special-purpose computer, a display device, a logic device, a memory device, and/or a memory device having computer-readable storage media.


The computer-readable storage media, when present, also may be referred to herein as non-transitory computer readable storage media. This non-transitory computer readable storage media may include, define, house, and/or store computer-executable instructions, programs, and/or code; and these computer-executable instructions may direct the acoustic wireless network and/or the nodes thereof to perform any suitable portion, or subset, of any of the methods disclosed herein. Examples of such non-transitory computer-readable storage media include CD-ROMs, disks, hard drives, flash memory, etc. As used herein, storage, or memory, devices and/or media having computer-executable instructions, as well as computer-implemented methods and other methods according to the present disclosure, are considered to be within the scope of subject matter deemed patentable in accordance with Section 101 of Title 35 of the United States Code.


In the present disclosure, several of the illustrative, non-exclusive examples have been discussed and/or presented in the context of flow diagrams, or flow charts, in which the methods are shown and described as a series of blocks, or steps. Unless specifically set forth in the accompanying description, it is within the scope of the present disclosure that the order of the blocks may vary from the illustrated order in the flow diagram, including with two or more of the blocks (or steps) occurring in a different order and/or concurrently. It is also within the scope of the present disclosure that the blocks, or steps, may be implemented as logic, which also may be described as implementing the blocks, or steps, as logics. In some applications, the blocks, or steps, may represent expressions and/or actions to be performed by functionally equivalent circuits or other logic devices. The illustrated blocks may, but are not required to, represent executable instructions that cause a computer, processor, and/or other logic device to respond, to perform an action, to change states, to generate an output or display, and/or to make decisions.


INDUSTRIAL APPLICABILITY

The wells and methods disclosed herein are applicable to the acoustic wireless communication, hydrocarbon exploration, and/or hydrocarbon production industries.


It is believed that the disclosure set forth above encompasses multiple distinct inventions with independent utility. While each of these inventions has been disclosed in its preferred form, the specific embodiments thereof as disclosed and illustrated herein are not to be considered in a limiting sense as numerous variations are possible. The subject matter of the inventions includes all novel and non-obvious combinations and subcombinations of the various elements, features, functions and/or properties disclosed herein. Similarly, where the claims recite “a” or “a first” element or the equivalent thereof, such claims should be understood to include incorporation of one or more such elements, neither requiring nor excluding two or more such elements.


It is believed that the following claims particularly point out certain combinations and subcombinations that are directed to one of the disclosed inventions and are novel and non-obvious. Inventions embodied in other combinations and subcombinations of features, functions, elements and/or properties may be claimed through amendment of the present claims or presentation of new claims in this or a related application. Such amended or new claims, whether they are directed to a different invention or directed to the same invention, whether different, broader, narrower, or equal in scope to the original claims, are also regarded as included within the subject matter of the inventions of the present disclosure.

Claims
  • 1. A method of communication using a wireless network, comprising: at a first node, receiving a wireless transmission of a signal, the received signal having a frequency signature and/or an amplitude signature; comparing the frequency signature and/or the amplitude signature of the received signal with a frequency signature and/or an amplitude signature of a previously received signal from a second node; and if the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal are within a predetermined range of similarity, accepting the received signal and the previously received signal as having been transmitted by the second node and defining the received signal and the previously received signal as a unique signal identifying a communication link between the first node and the second node; wherein the receiving step comprises receiving, with a decoding node of an acoustic wireless network and from the tone transmission medium, a received acoustic tone for a tone receipt time, and wherein the comparing step comprises: estimating a frequency of the received acoustic tone, as a function of time, during the tone receipt time, wherein the estimating includes estimating a plurality of discrete frequency values received at a corresponding plurality of discrete times within the tone receipt time; separating the tone receipt time into a plurality of time intervals, wherein each time interval in the plurality of time intervals includes a subset of the plurality of discrete frequency values received during the time interval; calculating a frequency variation within each subset of the plurality of discrete frequency values; selecting a subset of the plurality of time intervals within which the frequency variation is less than a threshold frequency variation; and averaging the plurality of discrete frequency values within the subset of the plurality of time intervals to determine a major frequency of the received acoustic tone.
  • 2. The method of claim 1, further comprising: using the unique signal, forming the wireless network independent of user intervention.
  • 3. The method of claim 1, further comprising: using the unique signal, adapting a connection in the wireless network.
  • 4. The method of claim 3, wherein adapting the connection comprises adapting the connection due to a changing physical environment.
  • 5. The method of claim 3, wherein adapting the connection comprises adapting the connection to optimize network communications.
  • 6. The method of claim 5, wherein optimizing network communications comprises at least one of minimizing energy usage in the wireless network, minimizing error rate, maximizing data rate, minimizing latency, guaranteeing a worst-case data rate, guaranteeing a worst-case latency, autonomously balancing energy usage across multiple nodes, and autonomously balancing data transmission loads.
  • 7. The method of claim 1, further comprising: inferring physical properties between the first node and the second node based on one or more of the frequency signature and/or the amplitude signature of the received signal from the first node, and the frequency signature and/or the amplitude signature of the received signal from the second node.
  • 8. The method of claim 1, further comprising: recording, in a memory, changes to the frequency signature and/or the amplitude signature of subsequently received signals at the first node.
  • 9. The method of claim 1, further comprising: using at least one of the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal, determining a node in the wireless network most proximate to a condition of interest; and sensing the condition of interest using sensors associated with the determined node.
  • 10. The method of claim 1, further comprising: when the frequency signature and/or the amplitude signature of an expected received signal and the frequency signature and/or the amplitude signature of the previously received signal are not within a predetermined range of similarity, establishing a second communication link between one of the first node and the second node, and a third node.
  • 11. The method of claim 1, further comprising: when the frequency signature and/or the amplitude signature of an expected received signal and the frequency signature and/or the amplitude signature of the previously received signal are not within a predetermined range of similarity, modifying one or more communication parameters of the communication link.
  • 12. The method of claim 11, wherein the one or more communication parameters comprise ping time, wait time, symbol time, transmit amplitude, error correction type, bits per tone, type of data compression, communication link prioritization, communication frequency band, and modulation strategy.
  • 13. The method of claim 1, further comprising: identifying, from one of the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal
  • 14. The method of claim 1, wherein the wireless network is an acoustic wireless network having a tone transmission medium.
  • 15. The method of claim 1, further comprising: establishing a preferred path of signal traversal along the network in terms of optimal acoustic strength based at least in part on one or more of the frequency signature and/or the amplitude signature of the received signal from the first node, and the frequency signature and/or the amplitude signature of the received signal from the second node.
  • 16. The method of claim 1, further comprising: using the unique signal as a replacement for an encoded node identifier in the received signal.
  • 17. A well, comprising: a wellbore that extends within a subterranean formation; and a downhole acoustic wireless network including a plurality of nodes spaced-apart along a length of the wellbore, wherein the plurality of nodes includes a decoding node; a processor; and non-transitory computer readable storage media including computer-executable instructions that, when executed on the processor, direct the downhole acoustic wireless network to perform a process of communication therewith, the process including: at a first node of the plurality of nodes, receiving a wireless transmission of a signal, the received signal having a frequency signature and/or an amplitude signature; comparing the frequency signature and/or the amplitude signature of the received signal with a frequency signature and/or an amplitude signature of a previously received signal from a second node of the plurality of nodes; and if the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal are within a predetermined range of similarity, accepting the received signal and the previously received signal as having been transmitted by the second node and defining the received signal and the previously received signal as a unique signal identifying a communication link between the first node and the second node; wherein the receiving step comprises receiving, with the decoding node of an acoustic wireless network and from the tone transmission medium, a received acoustic tone for a tone receipt time, and wherein the comparing step comprises: estimating a frequency of the received acoustic tone, as a function of time, during the tone receipt time, wherein the estimating includes estimating a plurality of discrete frequency values received at a corresponding plurality of discrete times within the tone receipt time; separating the tone receipt time into a plurality of time intervals, wherein each time interval in the plurality of time intervals includes a subset of the plurality of discrete frequency values received during the time interval; calculating a frequency variation within each subset of the plurality of discrete frequency values; selecting a subset of the plurality of time intervals within which the frequency variation is less than a threshold frequency variation; and averaging the plurality of discrete frequency values within the subset of the plurality of time intervals to determine a major frequency of the received acoustic tone.
  • 18. Non-transitory computer readable storage media including computer-executable instructions that, when executed on a processor, direct an acoustic wireless network to perform a process of communication using a wireless network, comprising: at a first node, receiving a wireless transmission of a signal, the received signal having a frequency signature and/or an amplitude signature; comparing the frequency signature and/or the amplitude signature of the received signal with a frequency signature and/or an amplitude signature of a previously received signal from a second node; and if the frequency signature and/or the amplitude signature of the received signal and the frequency signature and/or the amplitude signature of the previously received signal are within a predetermined range of similarity, accepting the received signal and the previously received signal as having been transmitted by the second node and defining the received signal and the previously received signal as a unique signal identifying a communication link between the first node and the second node; wherein the receiving step comprises receiving, with a decoding node of an acoustic wireless network and from the tone transmission medium, a received acoustic tone for a tone receipt time, and wherein the comparing step comprises: estimating a frequency of the received acoustic tone, as a function of time, during the tone receipt time, wherein the estimating includes estimating a plurality of discrete frequency values received at a corresponding plurality of discrete times within the tone receipt time; separating the tone receipt time into a plurality of time intervals, wherein each time interval in the plurality of time intervals includes a subset of the plurality of discrete frequency values received during the time interval; calculating a frequency variation within each subset of the plurality of discrete frequency values; selecting a subset of the plurality of time intervals within which the frequency variation is less than a threshold frequency variation; and
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Ser. No. 62/628,105, filed Feb. 8, 2018, entitled “Methods of Network Peer Identification and Self-Organization using Unique Tonal Signatures and Wells that Use the Methods,” and U.S. Provisional Application Ser. No. 62/799,881, filed Feb. 1, 2019, entitled “Methods of Network Peer Identification and Self-Organization using Tonal Signatures and Wells that Use the Methods,” the disclosure of each of which is incorporated herein by reference in its entirety. This application is related to U.S. Provisional Application Ser. No. 62/428,385, filed Nov. 30, 2016, entitled “Methods of Acoustically Communicating and Wells That Utilize the Methods;” U.S. Provisional Application Ser. No. 62/381,926, filed Aug. 31, 2016, entitled “Plunger Lift Monitoring Via a Downhole Wireless Network Field;” and U.S. Pat. No. 10,190,410, the disclosure of each of which is incorporated herein by reference in its entirety.

US Referenced Citations (325)
Number Name Date Kind
3103643 Kalbfell Sep 1963 A
3205477 Kalbfell Sep 1965 A
3512407 Zill May 1970 A
3637010 Malay et al. Jan 1972 A
3741301 Malay et al. Jun 1973 A
3781783 Tucker Dec 1973 A
3790930 Lamel et al. Feb 1974 A
3900827 Lamel et al. Aug 1975 A
3906434 Lamel et al. Sep 1975 A
4001773 Lamel et al. Jan 1977 A
4283780 Nardi Aug 1981 A
4298970 Shawhan et al. Nov 1981 A
4302826 Kent et al. Nov 1981 A
4314365 Petersen et al. Feb 1982 A
4884071 Howard Nov 1989 A
4962489 Medlin et al. Oct 1990 A
5128901 Drumheller Jul 1992 A
5136613 Dumestre, III Aug 1992 A
5166908 Montgomery Nov 1992 A
5182946 Boughner et al. Feb 1993 A
5234055 Cornette Aug 1993 A
5283768 Rorden Feb 1994 A
5373481 Orban et al. Dec 1994 A
5468025 Adinolfe et al. Nov 1995 A
5480201 Mercer Jan 1996 A
5495230 Lian Feb 1996 A
5562240 Campbell Oct 1996 A
5592438 Rorden et al. Jan 1997 A
5667650 Face et al. Sep 1997 A
5850369 Rorden et al. Dec 1998 A
5857146 Kido Jan 1999 A
5924499 Birchak et al. Jul 1999 A
5960883 Tubel et al. Oct 1999 A
5995449 Green et al. Nov 1999 A
6049508 Deflandre Apr 2000 A
6125080 Sonnenschein et al. Sep 2000 A
6128250 Reid et al. Oct 2000 A
6177882 Ringgenberg et al. Jan 2001 B1
6236850 Desai May 2001 B1
6239690 Burbidge et al. May 2001 B1
6300743 Patino et al. Oct 2001 B1
6320820 Gardner et al. Nov 2001 B1
6324904 Ishikawa et al. Dec 2001 B1
6360769 Brisco Mar 2002 B1
6394184 Tolman et al. May 2002 B2
6400646 Shah et al. Jun 2002 B1
6429784 Beique et al. Aug 2002 B1
6462672 Besson Oct 2002 B1
6543538 Tolman et al. Apr 2003 B2
6670880 Hall et al. Dec 2003 B1
6679332 Vinegar et al. Jan 2004 B2
6695277 Gallis Feb 2004 B1
6702019 Dusterhoft et al. Mar 2004 B2
6717501 Hall et al. Apr 2004 B2
6727827 Edwards et al. Apr 2004 B1
6772837 Dusterhoft et al. Aug 2004 B2
6816082 Laborde Nov 2004 B1
6868037 Dasgupta et al. Mar 2005 B2
6880634 Gardner et al. Apr 2005 B2
6883608 Parlar et al. Apr 2005 B2
6899178 Tubel May 2005 B2
6909667 Shah et al. Jun 2005 B2
6912177 Smith Jun 2005 B2
6920085 Finke et al. Jul 2005 B2
6930616 Tang et al. Aug 2005 B2
6940392 Chan et al. Sep 2005 B2
6940420 Jenkins Sep 2005 B2
6953094 Ross et al. Oct 2005 B2
6956791 Dopf et al. Oct 2005 B2
6980929 Aronstam et al. Dec 2005 B2
6987463 Beique et al. Jan 2006 B2
7006918 Economides et al. Feb 2006 B2
7011157 Costley et al. Mar 2006 B2
7036601 Berg et al. May 2006 B2
7051812 McKee et al. May 2006 B2
7064676 Hall et al. Jun 2006 B2
7082993 Ayoub et al. Aug 2006 B2
7090020 Hill et al. Aug 2006 B2
7140434 Chouzenoux et al. Nov 2006 B2
7219762 James et al. May 2007 B2
7224288 Hall et al. May 2007 B2
7228902 Oppelt Jun 2007 B2
7249636 Ohmer Jul 2007 B2
7252152 LoGiudice et al. Aug 2007 B2
7257050 Stewart et al. Aug 2007 B2
7261154 Hall et al. Aug 2007 B2
7261162 Deans et al. Aug 2007 B2
7275597 Hall et al. Oct 2007 B2
7277026 Hall et al. Oct 2007 B2
RE40032 van Bokhorst et al. Jan 2008 E
7317990 Sinha et al. Jan 2008 B2
7321788 Addy et al. Jan 2008 B2
7322416 Burris, II et al. Jan 2008 B2
7325605 Fripp et al. Feb 2008 B2
7339494 Shah et al. Mar 2008 B2
7348893 Huang et al. Mar 2008 B2
7385523 Thomeer et al. Jun 2008 B2
7387165 Lopez de Cardenas et al. Jun 2008 B2
7411517 Flanagan Aug 2008 B2
7477160 Lemenager et al. Jan 2009 B2
7516792 Lonnes et al. Apr 2009 B2
7551057 King et al. Jun 2009 B2
7590029 Tingley Sep 2009 B2
7595737 Fink et al. Sep 2009 B2
7602668 Liang et al. Oct 2009 B2
7649473 Johnson et al. Jan 2010 B2
7750808 Masino et al. Jul 2010 B2
7775279 Marya et al. Aug 2010 B2
7787327 Tang et al. Aug 2010 B2
7819188 Auzerais et al. Oct 2010 B2
7828079 Oothoudt Nov 2010 B2
7831283 Ogushi et al. Nov 2010 B2
7913773 Li et al. Mar 2011 B2
7952487 Montebovi May 2011 B2
7994932 Huang et al. Aug 2011 B2
8004421 Clark Aug 2011 B2
8044821 Mehta Oct 2011 B2
8049506 Lazarev Nov 2011 B2
8115651 Camwell et al. Feb 2012 B2
8117907 Han et al. Feb 2012 B2
8157008 Lilley Apr 2012 B2
8162050 Roddy et al. Apr 2012 B2
8220542 Whitsitt et al. Jul 2012 B2
8237585 Zimmerman Aug 2012 B2
8242928 Prammer Aug 2012 B2
8276674 Lopez de Cardenas et al. Oct 2012 B2
8284075 Fincher et al. Oct 2012 B2
8284947 Giesbrecht et al. Oct 2012 B2
8316936 Roddy et al. Nov 2012 B2
8330617 Chen et al. Dec 2012 B2
8347982 Hannegan et al. Jan 2013 B2
8358220 Savage Jan 2013 B2
8376065 Teodorescu et al. Feb 2013 B2
8381822 Hales et al. Feb 2013 B2
8388899 Mitani et al. Mar 2013 B2
8411530 Slocum et al. Apr 2013 B2
8434354 Crow et al. May 2013 B2
8494070 Luo et al. Jul 2013 B2
8496055 Mootoo et al. Jul 2013 B2
8539890 Tripp et al. Sep 2013 B2
8544564 Moore et al. Oct 2013 B2
8552597 Song et al. Oct 2013 B2
8556302 Dole Oct 2013 B2
8559272 Wang Oct 2013 B2
8596359 Grigsby et al. Dec 2013 B2
8605548 Froelich Dec 2013 B2
8607864 Mcleod et al. Dec 2013 B2
8664958 Simon Mar 2014 B2
8672875 Vanderveen et al. Mar 2014 B2
8675779 Zeppetelle et al. Mar 2014 B2
8683859 Godager Apr 2014 B2
8689621 Godager Apr 2014 B2
8701480 Eriksen Apr 2014 B2
8750789 Baldemair et al. Jun 2014 B2
8787840 Srinivasan et al. Jul 2014 B2
8805632 Coman et al. Aug 2014 B2
8826980 Neer Sep 2014 B2
8833469 Purkis Sep 2014 B2
8893784 Abad Nov 2014 B2
8910716 Newton et al. Dec 2014 B2
8994550 Millot et al. Mar 2015 B2
8995837 Mizuguchi et al. Mar 2015 B2
9062508 Huval et al. Jun 2015 B2
9062531 Jones Jun 2015 B2
9075155 Luscombe et al. Jul 2015 B2
9078055 Nguyen et al. Jul 2015 B2
9091153 Yang et al. Jul 2015 B2
9133705 Angeles Boza Sep 2015 B2
9140097 Themig et al. Sep 2015 B2
9144894 Barnett et al. Sep 2015 B2
9206645 Hallundbaek Dec 2015 B2
9279301 Lovorn et al. Mar 2016 B2
9284819 Tolman et al. Mar 2016 B2
9284834 Alteirac et al. Mar 2016 B2
9310510 Godager Apr 2016 B2
9333350 Rise et al. May 2016 B2
9334696 Hay May 2016 B2
9359841 Hall Jun 2016 B2
9363605 Goodman et al. Jun 2016 B2
9376908 Ludwig et al. Jun 2016 B2
9441470 Guerrero Sep 2016 B2
9515748 Jeong et al. Dec 2016 B2
9557434 Keller et al. Jan 2017 B2
9617829 Dale et al. Apr 2017 B2
9617850 Fripp et al. Apr 2017 B2
9631485 Keller et al. Apr 2017 B2
9657564 Stolpman May 2017 B2
9664037 Logan et al. May 2017 B2
9670773 Croux Jun 2017 B2
9683434 Machocki Jun 2017 B2
9686021 Merino Jun 2017 B2
9715031 Contant et al. Jul 2017 B2
9721448 Wu et al. Aug 2017 B2
9759062 Deffenbaugh et al. Sep 2017 B2
9816373 Howell et al. Nov 2017 B2
9822634 Gao Nov 2017 B2
9863222 Morrow et al. Jan 2018 B2
9879525 Morrow et al. Jan 2018 B2
9945204 Ross et al. Apr 2018 B2
9963955 Tolman et al. May 2018 B2
10100635 Keller et al. Oct 2018 B2
10103846 van Zelm et al. Oct 2018 B2
10132149 Morrow et al. Nov 2018 B2
10145228 Yarus et al. Dec 2018 B2
10167716 Clawson et al. Jan 2019 B2
10167717 Deffenbaugh et al. Jan 2019 B2
10190410 Clawson et al. Jan 2019 B2
10196862 Li-Leger et al. Feb 2019 B2
20020180613 Shi et al. Dec 2002 A1
20030056953 Tumlin et al. Mar 2003 A1
20030117896 Sakuma et al. Jun 2003 A1
20040020063 Lewis et al. Feb 2004 A1
20040200613 Fripp et al. Oct 2004 A1
20040239521 Zierolf Dec 2004 A1
20050269083 Burris, II et al. Dec 2005 A1
20050284659 Hall et al. Dec 2005 A1
20060033638 Hall et al. Feb 2006 A1
20060041795 Gabelmann et al. Feb 2006 A1
20060090893 Sheffield May 2006 A1
20070139217 Beique et al. Jun 2007 A1
20070146351 Katsurahira et al. Jun 2007 A1
20070156359 Varsamis et al. Jul 2007 A1
20070219758 Bloomfield Sep 2007 A1
20070254604 Kim Nov 2007 A1
20070272411 Lopez de Cardenas et al. Nov 2007 A1
20080030365 Fripp et al. Feb 2008 A1
20080060505 Chang Mar 2008 A1
20080076536 Shayesteh Mar 2008 A1
20080110644 Howell et al. May 2008 A1
20080185144 Lovell Aug 2008 A1
20080304360 Mozer Dec 2008 A1
20090003133 Dalton et al. Jan 2009 A1
20090030614 Carnegie et al. Jan 2009 A1
20090034368 Johnson Feb 2009 A1
20090045974 Patel Feb 2009 A1
20090080291 Tubel et al. Mar 2009 A1
20090166031 Hernandez Jul 2009 A1
20100013663 Cavender et al. Jan 2010 A1
20100080086 Wright Apr 2010 A1
20100089141 Rioufol et al. Apr 2010 A1
20100112631 Hur et al. May 2010 A1
20100133004 Burleson et al. Jun 2010 A1
20100182161 Robbins et al. Jul 2010 A1
20100212891 Stewart et al. Aug 2010 A1
20110061862 Loretz et al. Mar 2011 A1
20110066378 Lerche et al. Mar 2011 A1
20110168403 Patel Jul 2011 A1
20110188345 Wang Aug 2011 A1
20110297376 Holderman et al. Dec 2011 A1
20110297673 Zbat et al. Dec 2011 A1
20110301439 Albert et al. Dec 2011 A1
20110315377 Rioufol Dec 2011 A1
20120043079 Wassouf et al. Feb 2012 A1
20120126992 Rodney et al. May 2012 A1
20120152562 Newton et al. Jun 2012 A1
20120179377 Lie Jul 2012 A1
20130000981 Grimmer et al. Jan 2013 A1
20130003503 L'Her et al. Jan 2013 A1
20130106615 Prammer May 2013 A1
20130138254 Seals et al. May 2013 A1
20130192823 Barrilleaux et al. Aug 2013 A1
20130278432 Shashoua et al. Oct 2013 A1
20130319102 Ringgenberg et al. Dec 2013 A1
20140060840 Hartshorne et al. Mar 2014 A1
20140062715 Clark Mar 2014 A1
20140079242 Nguyen Mar 2014 A1
20140102708 Purkis et al. Apr 2014 A1
20140133276 Volker et al. May 2014 A1
20140152659 Davidson et al. Jun 2014 A1
20140153368 Bar-Cohen et al. Jun 2014 A1
20140166266 Read Jun 2014 A1
20140170025 Weiner et al. Jun 2014 A1
20140266769 van Zelm Sep 2014 A1
20140327552 Filas et al. Nov 2014 A1
20140352955 Tubel et al. Dec 2014 A1
20150003202 Palmer et al. Jan 2015 A1
20150009040 Bowles et al. Jan 2015 A1
20150027687 Tubel Jan 2015 A1
20150041124 Rodriguez Feb 2015 A1
20150041137 Rodriguez Feb 2015 A1
20150152727 Fripp et al. Jun 2015 A1
20150159481 Mebarkia et al. Jun 2015 A1
20150167425 Hammer et al. Jun 2015 A1
20150176370 Greening et al. Jun 2015 A1
20150292319 Disko et al. Oct 2015 A1
20150292320 Lynk et al. Oct 2015 A1
20150300159 Stiles et al. Oct 2015 A1
20150330200 Richard et al. Nov 2015 A1
20150337642 Spacek Nov 2015 A1
20150354351 Morrow et al. Dec 2015 A1
20150377016 Ahmad Dec 2015 A1
20160010446 Logan et al. Jan 2016 A1
20160047230 Livescu et al. Feb 2016 A1
20160047233 Butner et al. Feb 2016 A1
20160076363 Morrow et al. Mar 2016 A1
20160109606 Market et al. Apr 2016 A1
20160215612 Morrow Jul 2016 A1
20170138185 Saed et al. May 2017 A1
20170145811 Robison et al. May 2017 A1
20170152741 Park et al. Jun 2017 A1
20170167249 Lee et al. Jun 2017 A1
20170204719 Babakhani Jul 2017 A1
20170254183 Vasques et al. Sep 2017 A1
20170293044 Gilstrap et al. Oct 2017 A1
20170314386 Orban et al. Nov 2017 A1
20180010449 Roberson et al. Jan 2018 A1
20180058191 Romer et al. Mar 2018 A1
20180058198 Ertas et al. Mar 2018 A1
20180058202 Disko et al. Mar 2018 A1
20180058203 Clawson et al. Mar 2018 A1
20180058204 Clawson et al. Mar 2018 A1
20180058205 Clawson et al. Mar 2018 A1
20180058206 Zhang et al. Mar 2018 A1
20180058207 Song et al. Mar 2018 A1
20180058208 Song et al. Mar 2018 A1
20180058209 Song et al. Mar 2018 A1
20180066490 Kjos Mar 2018 A1
20180066510 Walker et al. Mar 2018 A1
20190112913 Song et al. Apr 2019 A1
20190112915 Disko et al. Apr 2019 A1
20190112916 Song et al. Apr 2019 A1
20190112917 Disko et al. Apr 2019 A1
20190112918 Yi et al. Apr 2019 A1
20190112919 Song et al. Apr 2019 A1
20190116085 Zhang et al. Apr 2019 A1
Foreign Referenced Citations (13)
Number Date Country
102733799 Jun 2014 CN
0636763 Feb 1995 EP
1409839 Apr 2005 EP
2677698 Dec 2013 EP
2763335 Aug 2014 EP
WO2002027139 Apr 2002 WO
WO2010074766 Jul 2010 WO
WO2013079928 Jun 2013 WO
WO2014018010 Jan 2014 WO
WO2014049360 Apr 2014 WO
WO2014100271 Jun 2014 WO
WO2014134741 Sep 2014 WO
WO2015117060 Aug 2015 WO
Non-Patent Literature Citations (18)
Entry
U.S. Appl. No. 15/666,334, filed Aug. 1, 2017, Walker, Katie M. et al.
U.S. Appl. No. 16/175,441, filed Oct. 30, 2018, Song, Limin et al.
U.S. Appl. No. 16/175,467, filed Oct. 30, 2018, Kinn, Timothy F. et al.
U.S. Appl. No. 16/175,488, filed Oct. 30, 2018, Yi, Xiaohua et al.
U.S. Appl. No. 16/220,327, filed Dec. 14, 2018, Disko, Mark M. et al.
U.S. Appl. No. 16/220,332, filed Dec. 14, 2018, Yi, Xiaohua et al.
U.S. Appl. No. 16/269,083, filed Feb. 6, 2019, Zhang, Yibing.
U.S. Appl. No. 16/267,950, filed Feb. 5, 2019, Walker, Katie M. et al.
U.S. Appl. No. 62/782,153, filed Dec. 19, 2018, Yi, Xiaohua et al.
U.S. Appl. No. 62/782,160, filed Dec. 19, 2018, Hall, Timothy J. et al.
Arroyo, Javier et al. (2009) “Forecasting Histogram Time Series with K-Nearest Neighbours Methods,” International Journal of Forecasting, vol. 25, pp. 192-207.
Arroyo, Javier et al. (2011) “Smoothing Methods for Histogram-Valued Time Series: An Application to Value-at-Risk,” Univ. of California, Dept. of Economics, www.wileyonlinelibrary.com, Mar. 8, 2011, 28 pages.
Arroyo, Javier et al. (2011) “Forecasting with Interval and Histogram Data: Some Financial Applications,” Univ. of California, Dept. of Economics, 46 pages.
Emerson Process Management (2011), “Roxar downhole Wireless PT sensor system,” www.roxar.com, or downhole@roxar.com, 2 pgs.
Gonzalez-Rivera, Gloria et al. (2012) “Time Series Modeling of Histogram-Valued Data: The Daily Histogram Time Series of S&P500 Intradaily Returns,” International Journal of Forecasting, vol. 28, 36 pgs.
Gutierrez-Estevez, M. A. et al. (2013) “Acoustic Broadband Communications Over Deep Drill Strings Using Adaptive OFDM,” IEEE Wireless Comm. & Networking Conf., pp. 4089-4094.
Qu, X. et al. (2011) “Reconstruction of Self-Sparse 2D NMR Spectra From Undersampled Data in the Indirect Dimension,” pp. 8888-8909.
U.S. Department of Defense (1999) “Interoperability and Performance Standards for Medium and High Frequency Radio Systems,” MIL-STD-188-141B, Mar. 1, 1999, 584 pages.
Related Publications (1)
Number Date Country
20190242249 A1 Aug 2019 US
Provisional Applications (2)
Number Date Country
62799881 Feb 2019 US
62628105 Feb 2018 US