The present disclosure generally relates to robot-aided exploration, and in particular, to a system and associated methods for combining occupancy map estimates by robots in a multi-robot system (MRS) tasked with exploring an unknown but bounded finite environment according to a probabilistic search strategy.
Multi-robot systems (MRS) composed of multiple mobile robots have been used for various collective exploration and perception tasks, such as mapping unknown environments, disaster response, surveillance and monitoring, and herding a group of robots. The performance of MRS in such applications is constrained by the capabilities of the payloads that the robots can carry on-board, including the power source, sensor suite, computational resources, and communication devices for transmitting information to other robots and/or a central node. These constraints are particularly restrictive in the case of small aerial robots, such as multi-rotors, that perform vision-guided tasks.
It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.
Corresponding reference characters indicate corresponding elements among the views of the drawings. The headings used in the figures do not limit the scope of the claims.
Centralized MRS strategies for exploration and mapping, such as the next-best-view planning method, rely on constant communication between all the robots and a central node. Scaling up such strategies with the number of robots requires expanding the communication infrastructure and preventing communication failures of the central node. Frontier-based MRS exploration strategies rely on a dynamic communication topology in which leaders are responsible for coordinating the team.
Decentralized MRS exploration and mapping strategies that employ only local communication alleviate these drawbacks and are designed to work robustly under inter-robot communication bandwidth constraints and disruptions to communication links by environmental effects. Many decentralized MRS estimation strategies are designed to achieve consensus among the robots on a particular variable or property of interest through local inter-robot communication. For example, distributed consensus-based approaches have been designed for spacecraft attitude estimation and space debris tracking. Consensus behaviors also arise in social networks when users reach an agreement on a shared opinion in a distributed fashion. Consensus strategies have been developed for MRS communication networks that are static or dynamic, and that can be represented as directed or undirected graphs, as well as random networks and networks with communication delays. However, few works address consensus problems for MRS that follow random mobility models, which are often used in MRS exploration strategies and whose communication networks consequently exhibit Markovian switching dynamics. Random exploration strategies have certain advantages for MRS: they do not require centralized motion planning, localization, or communication, and they can be modified to produce more focused or more dispersed coverage.
In previous works, a probabilistic consensus-based strategy was developed for target search by MRS with a discrete-time discrete-state (DTDS) Markov motion model and local sensing and communication. Using this strategy, the robots are guaranteed to achieve consensus almost surely on the presence of a static feature of interest, without any requirements on the connectivity of the robots' communication network. This approach is extended to an MRS strategy for tracking multiple static features by formulating the tracking procedure as a renewal-reward process on the underlying Markov chain. The robots reach a consensus on the number of features and their locations in a decentralized manner using a Gaussian Mixture approximation of the Probability Hypothesis Density (PHD) filter.
This disclosure provides a probabilistic multi-robot search strategy that achieves consensus on a discrete distribution of static features, modeled as an occupancy grid map, using results on opinion pools. As shown in the accompanying drawings, a system 100 includes a plurality of robots 102 that collectively explore an environment according to this strategy.
The plurality of robots 102 move according to a DTDS Markov chain on a finite 2D spatial grid (e.g., the environment) and can detect features present within the environment using sensor(s) 140 onboard each respective robot 102. The strategy is distributed and asynchronous, and it conserves communication bandwidth by relying only on local inter-robot communication. The main contributions of this disclosure are as follows:
Consider a bounded square environment (e.g., an area) 𝒜 ⊂ ℝ2 with sides of length B. The area 𝒜 is discretized into a square grid of nodes spaced at a distance δ apart. The set of nodes is denoted by 𝒮 ⊂ ℤ+, and S=|𝒮|. A set of N robots (e.g., the plurality of robots 102), ℛ={1, 2, . . . , N}, each modeled as a point mass, explore the environment by performing a Markovian random walk on the grid, assuming that there are no obstacles in the environment that impede motion of the robots 102. Let 𝒢s=(𝒱s, εs) be an undirected graph associated with this finite spatial grid, where 𝒱s=𝒮 is the set of nodes and εs is the set of edges (i,j). The edges signify pairs of nodes i, j ∈ 𝒱s, called neighboring nodes, between which robots can travel. Assume that the robots 102 can localize themselves on 𝒢s.
Let Yka ∈ 𝒮 be a random variable that defines the node that robot a ∈ ℛ (e.g., the first robot 102A) occupies at discrete time k. Robot a moves from its current node i to a neighboring node j at the next time step with a transition probability pij ∈ [0, 1]. P ∈ ℝS×S is the state transition matrix consisting of elements pij at row i and column j. Let πk ∈ ℝ1×S denote the probability mass function (PMF) of Yka for each robot a, or alternatively, a distribution of the robot population over the grid at time k. This distribution evolves over time according to a DTDS Markov chain model of order one,
πk+1=πkP (1)
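As a concrete illustration of Eq. (1), the following Python sketch propagates a robot-population PMF through repeated steps of a small DTDS Markov chain. The three-node chain and all variable names are illustrative assumptions only; they are not the grid or transition matrix of the disclosure.

```python
import numpy as np

# Minimal illustration of Eq. (1), pi_{k+1} = pi_k P, on a hypothetical
# 3-node spatial graph; this matrix is an example, not the P of the disclosure.
P = np.array([
    [0.50, 0.50, 0.00],   # node 0 -> stay or move to node 1
    [0.25, 0.50, 0.25],   # node 1 -> move left, stay, or move right
    [0.00, 0.50, 0.50],   # node 2 -> move to node 1 or stay
])

pi = np.array([1.0, 0.0, 0.0])   # robot population starts at node 0
for k in range(50):
    pi = pi @ P                  # one step of the DTDS Markov chain
print(pi)                        # approaches the chain's stationary distribution
```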
Assume that each robot 102 can exchange information with other robots that are within a communication radius rcomm < 0.5δ. Let 𝒢c[k]=(𝒱c, εc[k]) be an undirected graph in which 𝒱c=ℛ (the set of robots 102), and εc[k] is the set of all pairs of robots (a, b) ∈ ℛ×ℛ that can communicate with each other at time k (where robot b can be considered the second robot 102B). Let M[k] ∈ {0, 1}N×N be the adjacency matrix with elements mab[k]=1 if (a, b) ∈ εc[k] and mab[k]=0 otherwise. For each robot a ∈ ℛ, the set of neighbors of robot a at time k is defined as Nka = {b ∈ ℛ : (a, b) ∈ εc[k]}.
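A minimal sketch of how the time-varying adjacency matrix M[k] and neighbor sets Nka might be computed from robot positions; the positions, grid spacing, and function names are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def comm_adjacency(positions, r_comm):
    """Adjacency matrix M[k]: m_ab = 1 if robots a and b are within r_comm (a != b)."""
    n = positions.shape[0]
    M = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            if np.linalg.norm(positions[a] - positions[b]) <= r_comm:
                M[a, b] = M[b, a] = 1
    return M

def neighbors_of(M, a):
    """Neighbor set N_k^a of robot a under the current communication graph."""
    return [b for b in range(M.shape[0]) if M[a, b] == 1]

# Example: four robots on a grid with spacing delta = 0.7 m and r_comm < 0.5*delta,
# so only robots occupying the same node can communicate.
delta = 0.7
positions = np.array([[0.0, 0.0], [0.0, 0.0], [0.7, 0.0], [1.4, 0.7]])
M = comm_adjacency(positions, r_comm=0.3 * delta)
print(M)                    # robots 0 and 1 share a node and are connected
print(neighbors_of(M, 0))   # -> [1]
```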
A set of discrete features is distributed over the grid at nodes in the set 𝒮r ⊆ 𝒮. The robots 102 know a priori that these features are present in the environment, but do not know their distribution. Assume that when a robot 102 is located at a node in 𝒮r, it can detect the presence of a feature at that node using its on-board sensors (e.g., sensor(s) 140). Each node in the grid is associated with a binary occupancy value, defined as
Each robot 102 holds and updates feature opinion information, which includes an occupancy vector for robot a at time k, defined as
The occupancy vector for each robot indicates its estimate of the nodes of the area that are occupied by features. The feature opinion information held by the robot 102 can be in the form of a feature PMF or occupancy distribution, estimated by robot a at time k from its own sensor measurements, where:
Here, fka(s) represents the opinion of robot a at time k for the occupancy distribution. θref ∈ ℝ1×S denotes a reference occupancy vector that is being estimated by all the robots, defined as follows:
Since the robots 102 do not know the occupancy distribution a priori, the following specifies that they all initially consider the grid to be unoccupied, i.e.,
This is defined as a nominal distribution for all the robots 102, denoted by fnom(s)=f0(·)(s). A vector θnom ∈ ℝ1×S is defined as a nominal occupancy vector for all robots 102, which represents all nodes as unoccupied (i.e., θnom(s)=θ0(·)(s)=1−
fref(s) is defined as a reference PMF, which represents a ground truth feature distribution, corresponding to θref(s).
A fusion weight ωk(a,b) is defined as the following Metropolis weight:
Note that ωk(a,b) ≥ 0 and Σb∈ℛ ωk(a,b) = 1. A consensus or opinion weighting matrix at time k is defined as Ωk ∈ ℝN×N, which includes elements ωk(a,b) at row a and column b.
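Eq. (6) is not reproduced above; the sketch below uses the standard Metropolis weight construction, which is an assumption consistent with the non-negativity and row-sum properties noted here. All names and the example graph are illustrative.

```python
import numpy as np

def metropolis_weights(M):
    """Standard Metropolis weights for an undirected communication graph.

    omega[a, b] = 1 / (1 + max(d_a, d_b)) for neighbors a != b,
    omega[a, a] = 1 - (sum of the off-diagonal weights in row a),
    so each row is non-negative and sums to 1.
    """
    n = M.shape[0]
    deg = M.sum(axis=1)
    W = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b and M[a, b]:
                W[a, b] = 1.0 / (1.0 + max(deg[a], deg[b]))
        W[a, a] = 1.0 - W[a].sum()
    return W

# Example on a 3-robot graph where robots 0-1 and 1-2 can communicate.
M = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
W = metropolis_weights(M)
print(W.sum(axis=1))   # each row sums to 1, as required of Omega_k
```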
Given the robot exploration dynamics in Eq. (1), each respective robot a (e.g., first robot 102A) updates its opinion fka(s), computed from Eq. (3), to the following PMF fk+1cher(s) at the next time step, which it computes using the opinions of other robots (e.g., second robot 102B and any others) within its communication range according to a distributed Chernoff fusion protocol.
It can be proved that fk+1cher is the local neighbor fused feature PMF at time k+1 of all robots that are in Nka. When only two robots a, b ∈ ℛ are within communication range (e.g., the first robot 102A and the second robot 102B), this update rule becomes:
where the Metropolis weight ω=ωk(a,b) is defined in Eq. (6). Then each robot a compares fk+1cher(s) with fnom(s) to generate a new fused occupancy vector as follows:
The results of Eqs. (7)-(9) are referred to herein as fused feature opinion information. Then, each robot a generates a new occupancy vector θk+1a(s) by comparing both its occupancy vector at the previous time step θka(s), and the fused occupancy vector θk+1cher(s), to θnom(s):
This new occupancy vector can be used to update the feature opinion information held by the associated robot 102 for the time step k+1.
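The displayed forms of Eqs. (7)-(10) are not reproduced in the text above. As a minimal sketch, the snippet below implements the generic pairwise Chernoff fusion rule, a normalized weighted geometric mean of the two PMFs, which is the standard form of such protocols; the PMFs, the weight value, and the function name are illustrative assumptions rather than the exact update of the disclosure.

```python
import numpy as np

def chernoff_fuse(f_a, f_b, omega):
    """Pairwise Chernoff fusion of two PMFs: f proportional to f_a**omega * f_b**(1-omega)."""
    unnormalized = (f_a ** omega) * (f_b ** (1.0 - omega))
    c = unnormalized.sum()            # normalizing constant (denominator of Eq. (8))
    return unnormalized / c

# Example: two robots hold different PMFs over a 4-node grid.
f_a = np.array([0.70, 0.10, 0.10, 0.10])
f_b = np.array([0.10, 0.10, 0.10, 0.70])
f_fused = chernoff_fuse(f_a, f_b, omega=0.5)
print(f_fused, f_fused.sum())         # a valid PMF lying between the two opinions
# In practice the same update can be carried out on log-probabilities for numerical
# stability, in line with the logarithmic form of Algorithm 1 mentioned below.
```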
The Hellinger metric, which measures the similarity between two PMFs, is used to quantify the convergence of each robot's feature distribution to the reference distribution. The Hellinger distance between the feature PMF fka(s) of a robot a∈ at time k and the reference PMF fref(s) is given by:
DH(fka(s), fref(s)) = √(1 − ρ(fka(s), fref(s))),   (11)
where ρ(fka(s), fref(s)) is the Bhattacharyya coefficient, defined as:
This distance takes values in [0, 1]. The vector DH ∈ ℝ≥0N×1 is defined with each entry a ∈ ℛ given by DH(fka(s), fref(s)).
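A short transcription of the Hellinger distance of Eq. (11). Since the display of Eq. (12) is not reproduced above, the standard definition of the Bhattacharyya coefficient, the sum over nodes of the square root of the product of the two PMFs, is assumed; the example PMFs and names are illustrative.

```python
import numpy as np

def bhattacharyya(f1, f2):
    """Bhattacharyya coefficient rho(f1, f2) = sum_s sqrt(f1(s) * f2(s))."""
    return float(np.sum(np.sqrt(f1 * f2)))

def hellinger(f1, f2):
    """Hellinger distance D_H(f1, f2) = sqrt(1 - rho(f1, f2))  (Eq. (11))."""
    return float(np.sqrt(max(0.0, 1.0 - bhattacharyya(f1, f2))))

f_robot = np.array([0.25, 0.25, 0.25, 0.25])   # a robot's current feature PMF
f_ref   = np.array([0.00, 0.50, 0.50, 0.00])   # reference (ground truth) PMF
print(hellinger(f_robot, f_robot))   # 0.0 for identical PMFs
print(hellinger(f_robot, f_ref))     # in (0, 1]; shrinks as the estimate improves
```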
The pseudo code in Algorithm 1 above shows implementation of this fusion strategy for two robots a and b (e.g., the first robot 102A and the second robot 102B). In this algorithm, the normalizing constant c is the denominator of Eq. (8). The distributed Chernoff fusion protocol of Algorithm 1 can be implemented at the processor of the associated robot 102 in a logarithmic form for computational convenience.
For time k, the first robot 102A is positioned at node m and captures sensor data. The first robot 102A updates first feature opinion information based on the sensor data that indicates its feature opinion for features observed by the first robot 102A within the area up until time k. Likewise, the second robot 102B updates second feature opinion information based on the sensor data that indicates its feature opinion for features observed by the second robot 102B within the area up until time k. The first feature opinion information held by the first robot 102A and the second feature opinion information held by the second robot 102B can include overlapping information about nodes they have both visited before time k. The first feature opinion information held by the first robot 102A can include information about nodes that the second robot 102B has not visited, and vice versa.
When within communication range (e.g., when the first robot 102A and the second robot 102B are at the same node m or are otherwise near one another), the first robot 102A and the second robot 102B establish a communication link (e.g., communication link 110 of
Once the first robot 102A receives the second feature opinion information and once the second robot 102B receives the first feature opinion information, they each generate fused feature opinion information by the distributed Chernoff fusion protocol of Algorithm 1 for time step k+1. The first robot 102A updates the first feature opinion information for time step k+1 based on the fused feature opinion information. Likewise, the second robot 102B updates the second feature opinion information for time step k+1 based on the fused feature opinion information. Following this, the first robot 102A selects a next location (e.g., node j) along the Markovian random walk. Likewise, the second robot 102B selects a next location along the Markovian random walk which is likely different from the location selected by the first robot 102A. In examples where a third robot 102C is also within communication range with the first robot 102A and the second robot 102B at time step k, the above process is similarly applied with the distributed Chernoff fusion protocol subject to (7) outlined above.
With continued reference to
When within communication range (e.g., when the first robot 102A and the third robot 102C are at the same node l or are otherwise near one another), the first robot 102A and the third robot 102C establish a communication link (e.g., analogous to communication link 110 of
Once the first robot 102A receives the third feature opinion information and once the third robot 102C receives the first feature opinion information, they each generate fused feature opinion information by the distributed Chernoff fusion protocol of Algorithm 1 for time step (k+r+1). The first robot 102A updates the first feature opinion information for time step (k+r+1) based on the fused feature opinion information. Likewise, the third robot 102C updates the third feature opinion information for time step (k+r+1) based on the fused feature opinion information. Following this, the first robot 102A selects a next location along the Markovian random walk. Likewise, the third robot 102C selects a next location along the Markovian random walk which is likely different from the location selected by the first robot 102A. During this encounter, the second robot 102B can similarly interact with other robots or can simply update its own second feature opinion information if no other robots are available within communication range.
At a final time step T, the first feature opinion information held by the first robot 102A, the second feature opinion information held by the second robot 102B, and the third feature opinion information held by the third robot 102C (along with feature opinion information held by other robots 102 of the plurality of robots 102) reach a general consensus of the feature distribution within the area.
The Markov chain in (1) is characterized in terms of the time-invariant state transition matrix P, which is defined by the state space of the spatial grid representing the discretized environment. Hence, the Markov chain is time-homogeneous, which implies that Pr(Yk+1a=j | Yka=i) is the same for all robots 102 at all times k. The entries of P, which are the state transition probabilities, can therefore be defined as:
pij = Pr(Yk+1a = j | Yka = i),  ∀i, j ∈ 𝒮, ∀k ∈ ℤ+, ∀a ∈ ℛ (13)
Since each robot 102 chooses its next node from a uniform distribution, these entries can be computed as:
where di is the degree of the node i ∈ 𝒮, defined as di=2 if i is a corner node of the spatial grid, di=3 if it is on an edge between two corner nodes, and di=4 otherwise. Since each entry pij ≥ 0, the notation P ≥ 0 is used. Note that Pm ≥ 0 for m ≥ 1, and therefore P is a non-negative matrix. P is also a stochastic matrix, since each of its rows sums to one. If π is a stationary distribution of Markov chain (1), then ∀k ∈ ℤ+,
πPk=π (15)
From the construction of the Markov chain, each robot 102 has a positive probability of moving from any node i∈ to any other node j∈ of the spatial grid in a finite number of time steps. As a result, the Markov chain is irreducible, and therefore P is an irreducible matrix. There exists a real unique positive left eigenvector of P and since P is a stochastic matrix, its spectral radius ρ(P) is 1. Therefore, this left eigenvector is the stationary distribution of the corresponding Markov chain. Since it is shown that the Markov chain is irreducible and has a stationary distribution π, which satisfies πP=π, one can conclude that the Markov chain is positive recurrent. Thus, all states in the Markov chain are positive recurrent, which implies that each robot 102 will keep visiting every state on the finite spatial grid infinitely often.
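The properties argued in this passage can be checked numerically on a small instance. In the sketch below, the transition rule is assumed to be a uniform choice over a node's grid neighbors and the node itself (reflecting the self-edges noted in the proof of Theorem IV-A); because the display of Eq. (14) is not reproduced above, this rule and the 3×3 grid are assumptions for illustration only.

```python
import numpy as np

c = 3
S = c * c

def neighbors(i):
    """4-connected grid neighbors of node i, plus i itself (assumed self-edge)."""
    r, col = divmod(i, c)
    nbrs = [i]
    if r > 0:       nbrs.append(i - c)
    if r < c - 1:   nbrs.append(i + c)
    if col > 0:     nbrs.append(i - 1)
    if col < c - 1: nbrs.append(i + 1)
    return nbrs

P = np.zeros((S, S))
for i in range(S):
    nbrs = neighbors(i)
    P[i, nbrs] = 1.0 / len(nbrs)          # uniform choice over the neighbor set

print(np.allclose(P.sum(axis=1), 1.0))    # P is row-stochastic
print(np.all(np.linalg.matrix_power(P, S) > 0))  # a positive power -> irreducible chain

# Stationary distribution: the left eigenvector of P with eigenvalue 1, normalized.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print(np.allclose(pi @ P, pi))            # satisfies pi P = pi, cf. Eq. (15)
```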
Equation (8) achieves opinion consensus over a graph with a fixed and strongly connected topology. This result is extended to graphs with topologies that evolve according to the switching dynamics on the composite Markov chain described in this section. The present disclosure demonstrates that under the opinion fusion scheme described herein, all the robots 102 will reach a consensus on the feature distribution.
The dynamics of movements of the plurality of robots 102 on the spatial grid can be modeled by a composite Markov chain with states Yk=(Yk1, Yk2, . . . , YkN) ∈ 𝒮N, where M = |𝒮N| = SN. An undirected graph Ĝ=(𝒮N, ε̂) is defined that is associated with the composite Markov chain. The vertex set is the set of all possible realizations î ∈ 𝒮N of Yk. The notation î(a) represents the a-th entry of î, which is the node i ∈ 𝒮 occupied by robot a. The edge set ε̂ is defined as follows: (î, ĵ) ∈ ε̂ if and only if (î(a), ĵ(a)) ∈ εs for all robots a ∈ ℛ. Let Q ∈ ℝM×M be the state transition matrix associated with the composite Markov chain. The elements of Q, denoted by qîĵ, are computed from the transition probabilities defined in (14) as follows:
Here qîĵ is the probability that in the next time step, each robot a will move from node î(a) to node ĵ(a).
For example, consider a set of two robots, ℛ={1, 2}, that move on the graph in
qîĵ = Pr(Yk+1 = ĵ | Yk = î) = pî(1)ĵ(1) pî(2)ĵ(2),  k ∈ ℤ+ (17)
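Per the product rule illustrated in Eq. (17), the composite transition probability is simply the product of the individual robots' transition probabilities. A tiny sketch, with the single-robot matrix P and the configurations chosen purely for illustration:

```python
import numpy as np

def composite_transition_prob(P, i_hat, j_hat):
    """Composite transition probability: product over robots a of p_{i_hat(a), j_hat(a)}."""
    q = 1.0
    for i_a, j_a in zip(i_hat, j_hat):
        q *= P[i_a, j_a]
    return q

# Two-robot example in the spirit of Eq. (17): robots at nodes (0, 1) moving to nodes (1, 2).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(composite_transition_prob(P, (0, 1), (1, 2)))   # = p_01 * p_12 = 0.5 * 0.25
```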
The following discussion proves that all robots will reach consensus on the feature distribution and it will converge to the reference distribution.
Theorem IV-A: Consider a group of N robots, moving on a finite spatial grid with DTDS Markov chain dynamics (1), that update their opinions fka(s) for the feature distribution on the grid according to Eqs. (3), (7), (9) and (10). Then for each robot a∈, fka(s) will converge to fref(s) as k→∞ almost surely.
Proof: Suppose at an initial time k0, the locations of the robots on the spatial grid are given by the node î ∈ 𝒮N. Consider another set of robot locations at a future time k0+k, given by the node ĵ ∈ 𝒮N. The transition of the robots from configuration î to configuration ĵ in k time steps corresponds to a random walk of length k on the composite Markov chain Yk from node î to node ĵ. It also corresponds to a random walk by each robot a on the spatial grid from node î(a) to node ĵ(a) in k time steps. By construction, the graph 𝒢s is strongly connected and each of its nodes has a self-edge. Thus, there exists a discrete time n>0 such that, for each robot a, there exists a random walk on the spatial grid from node î(a) to node ĵ(a) in n time steps. Consequently, there always exists a random walk of length n on the composite Markov chain Yk from node î to node ĵ, and therefore Yk is an irreducible Markov chain. All states of an irreducible Markov chain belong to a single communication class. In this case, all states are positive recurrent; as a result, each state of Yk is visited infinitely often by the group of robots. Moreover, because the composite Markov chain is irreducible, it can be concluded that the union of the communication graphs 𝒢c[k] over time is the complete graph 𝒢0 on the set of robots. Therefore, 𝒢0 is strongly connected. Hence, each robot will meet every other robot at some node s ∈ 𝒮 infinitely often. Since Yk is irreducible and, from Eq. (10), θka(s) ≤ θk+1a(s) ≤ θref(s), ∀a ∈ ℛ, ∀s ∈ 𝒮, it follows from Eqs. (9) and (10) that θka(s) → θref(s) as k→∞. Consequently, fka(s) → fref(s) as k→∞ almost surely.
V. Numerical Simulation Results
The numerical simulations model a set of robots ℛ={1, 2, 3, 4} moving on a 5.6 m×5.6 m domain that is discretized into a square grid with c=8 nodes on each side, with a distance δ=0.7 m between adjacent nodes. The robots switch from one node to another at each time step according to the Markov chain dynamics in (1). The state transition probabilities pij, i, j ∈ 𝒮={1, 2, . . . , c2}, that are associated with the spatial graph 𝒢s are computed from Eq. (14). The value l̂=0.8 in Eq. (2) is set for both the numerical and Software-In-The-Loop simulations. The features are distributed on the set of nodes 𝒮r={19, 20, 21, 26, 30, 34, 38, 42, 46, 51, 52, 53}, which represents a discrete approximation of a circular distribution on the grid. The set of neighbors Nka of robot a at time k includes all robots that are located at the same node as robot a at that time.
All robots are initialized at uniformly random nodes in 𝒮. Prior to exploration, the robots assume that all the grid nodes are unoccupied by features, and hence the vector of Hellinger distances is initially DH = 0 ∈ ℝN×1. During their exploration of the grid, when robots encounter each other at the same node, they exchange their current feature PMFs and fuse them according to Eq. (7).
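To make the simulation setup concrete, the following sketch runs a stripped-down version of the strategy: robots random-walk on the grid, mark a node as occupied when they land on a feature node, and, upon meeting at the same node, merge their occupancy estimates. The feature PMF is taken here to be the normalized occupancy vector (uniform when nothing has been observed yet), and the merge is an elementwise OR of occupancy vectors; these simplifications, and all names and parameter values, are assumptions for illustration and are not the exact Eqs. (2)-(10) of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
c, N, T = 8, 4, 4000                    # grid side, number of robots, final time step
S = c * c
features = {18, 19, 20, 25, 29, 33, 37, 41, 45, 50, 51, 52}   # 0-indexed stand-in for S_r

def neighbors(i):
    r, col = divmod(i, c)
    nbrs = [i]                          # assumed self-edge
    if r > 0:       nbrs.append(i - c)
    if r < c - 1:   nbrs.append(i + c)
    if col > 0:     nbrs.append(i - 1)
    if col < c - 1: nbrs.append(i + 1)
    return nbrs

def pmf(theta):
    """Assumed mapping from an occupancy vector to a feature PMF (normalized, or uniform)."""
    return theta / theta.sum() if theta.any() else np.full(S, 1.0 / S)

def hellinger(f1, f2):
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(f1 * f2))))

theta_ref = np.zeros(S)
theta_ref[list(features)] = 1.0
f_ref = pmf(theta_ref)

pos = rng.integers(0, S, size=N)        # robots start at uniformly random nodes
theta = np.zeros((N, S))                # all robots initially assume an empty grid

for k in range(T):
    for a in range(N):
        if pos[a] in features:          # on-board sensing at the current node
            theta[a, pos[a]] = 1.0
    for a in range(N):                  # pairwise exchange for robots sharing a node
        for b in range(a + 1, N):
            if pos[a] == pos[b]:
                merged = np.maximum(theta[a], theta[b])   # simplified stand-in for fusion
                theta[a] = theta[b] = merged
    pos = np.array([rng.choice(neighbors(p)) for p in pos])   # Markovian random walk

D_H = [hellinger(pmf(theta[a]), f_ref) for a in range(N)]
print(np.round(D_H, 3))                 # approaches 0 as all robots recover the feature set
```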
Monte Carlo numerical simulations were also conducted with different numbers of robots, N={4,8,12,16}, to investigate the effect of N and the effect of consensus on the performance of the strategy in terms of the time for all robots' feature PMFs to converge to the reference PMF.
The present approach is also validated in Software-In-The-Loop (SITL) simulations using Gazebo and ROS Melodic. The same scenario is simulated as in the numerical simulations, with the set of robots consisting of two quadrotors, Robot 1 and Robot 2.
A diagram of an example system architecture of a robot 102 of the system 100 is shown in
The performance of the present approach in real-world environments will be affected by aerodynamic interactions between quadrotors, uncertainty in positioning, and feature occlusion by a quadrotor that enters another's field of view. In the SITL simulations, it is difficult to accurately simulate the aerodynamic disturbance on a quadrotor caused by the downwash of a quadrotor above it. These disturbances can be rejected by incorporating a robust disturbance observer into the quadrotor's low-level flight control strategy. Alternatively, the quadrotors could fly at the same altitude and employ controllers for inter-robot collision avoidance, e.g. using control barrier functions. Quadrotors may miss feature detections due to misalignment of their camera image window with the ground or to occlusion of the feature by another quadrotor flying below it. Such misalignments and occlusions sometimes occurred in SITL simulations; however, the accuracy of the feature reconstruction despite these occurrences demonstrates the robustness of the present approach to the resulting missed feature detections.
In the present disclosure, a decentralized multi-robot strategy for reconstructing a discrete feature distribution on a finite spatial grid is presented. Robots update their estimate of the feature distribution using their own measurements during random-walk exploration and estimates from nearby robots, combined using a distributed Chernoff fusion protocol. The strategy described herein extends established results on consensus of opinion pools for fixed, strongly connected networks to networks with Markovian switching dynamics. The present disclosure provides theoretical guarantees on convergence to the ground truth feature distribution in an almost sure sense, and further provides validation of the strategy in both numerical simulations and SITL simulations with quadrotors.
Note that the strategy is agnostic to the source of the information used to reconstruct the feature distribution; it values the information gained from exploration and other robots equally. This can result in suboptimal convergence rates to consensus on the ground truth distribution, potentially exceeding the operational flight times of small aerial robots with limited battery life. To increase the convergence rate to consensus, the exploration strategy can be modified from unbiased random walks to random walks that are biased in directions that increase information gain from individual robots' on-board sensor measurements and also ensure frequent encounters between robots. Another possible way to decrease the time to consensus is by relaxing the constraint rcomm<0.5δ, allowing robots to communicate with robots at other nodes. This would require an analysis of whether consensus is still ensured by reformulating the composite Markov chain and determining whether it is irreducible and positive recurrent. In addition, while Pólya's recurrence theorem guarantees that the obtained results on irreducibility and positive recurrence do not extend to random walks on infinite 3D lattices, it would be interesting to investigate whether they are valid for finite 3D grids. Finally, it would be useful to derive an analytical formulation of the expected time until consensus, if possible, which would provide a more rigorous basis for selecting the final time T.
Referring to
At an initial time step where the first robot 102A has yet to observe features within the area, the first feature opinion information for when k=0 can be nominal according to equation (5) herein with the plurality of robots 102 assuming a priori that the grid is empty. However, in some embodiments, the plurality of robots 102 may be presented with initial information about the feature distribution and the first feature opinion information for when k=0 can reflect the initial information.
Step 204 of method 200 includes updating, based on the sensor information for the first node, first feature opinion information associated with the first robot 102A for the time step. The first robot 102A uses the sensor information to estimate a feature distribution of a subset of the area associated with the first node. The feature distribution can include an occupancy vector (equation (2)) expressive of an estimate of nodes of the area that are occupied by features, and may be in the form of a feature probability mass function (or “occupancy distribution”) (equation (3)). The first robot 102A can incorporate the estimated feature distribution into the first feature opinion information such that the first feature opinion information includes observations held by the first robot 102A up to the time step k. In the example implementation of
Step 206 of method 200 includes establishing, at the processor of the first robot 102A, a communication link (e.g., communication link 110 of
Step 208 of method 200 includes sending, by the processor and over the network interface of the first robot 102A, the first feature opinion information to the second robot 102B. Likewise, the first robot 102A receives second feature opinion information from the second robot 102B. Step 210 of method 200 includes accessing, at the processor of the first robot 102A, the second feature opinion information received from the second robot 102B over a network interface (e.g., through the communication link 110) of the first robot 102A.
Step 212 of method 200 includes applying, at the processor of the first robot 102A, a distributed Chernoff fusion protocol (e.g., Algorithm 1, equations (7)-(10)) for fusion of the first feature opinion information with the second feature opinion information. In some examples, step 212 can further include assigning a weight value for fusion of the first feature opinion information with the second feature opinion information based on a position of the first robot 102A and a position of the second robot 102B (see equation (6) and Algorithm 1).
Step 214 of method 200 includes generating, at the processor of the first robot 102A, fused feature opinion information for a subsequent time step (e.g., k+1) by fusion of the first feature opinion information with the second feature opinion information according to the distributed Chernoff fusion protocol of Algorithm 1 and Eqs. (7)-(10). Step 216 of method 200 includes updating, at the processor of the first robot 102A, the first feature opinion information for the subsequent time step based on the second feature opinion information using the fused feature opinion information for the subsequent time step (Eqs. (8), (9), and (10)).
Steps 218 and 220 of method 200 are directed to the Markovian random walk performed by each respective robot 102. Step 218 of method 200 includes selecting, based on a position of the first robot, a second node within the area according to a Markovian random walk (e.g., a discrete-time discrete-state Markov motion model). Step 220 of method 200 includes generating one or more control signals for application to a mobility assembly of the first robot that direct the first robot towards the second node. The processes outlined in method 200 can be repeated as needed until time T, with information exchange between individual robots 102 occurring in an ad-hoc manner.
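A minimal sketch of steps 218 and 220, assuming the next node is sampled from the row of the transition matrix P corresponding to the robot's current node and then passed to a waypoint command; the example matrix, the node-to-coordinate conversion, and the send_waypoint stub are hypothetical and illustrative only.

```python
import numpy as np

rng = np.random.default_rng()

def select_next_node(P, current_node):
    """Step 218: sample the next grid node from the Markov chain's transition row."""
    return int(rng.choice(P.shape[0], p=P[current_node]))

def send_waypoint(node, c=8, delta=0.7):
    """Step 220 (hypothetical stub): convert the node index to grid coordinates
    and issue a control command directing the mobility assembly toward it."""
    row, col = divmod(node, c)
    x, y = col * delta, row * delta
    print(f"commanding waypoint ({x:.1f} m, {y:.1f} m) for node {node}")

# Example with a placeholder transition matrix over 64 nodes (uniform over all nodes
# here purely for brevity; a grid-structured P would be used in practice).
P = np.full((64, 64), 1.0 / 64)
nxt = select_next_node(P, current_node=10)
send_waypoint(nxt)
```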
Further, the various steps of method 200 can be extended as needed to incorporate a third robot 102C that might be within range of the first robot 102A and the second robot 102B for information exchange. In such a scenario, feature opinion information from additional robots 102 can be fused according to equation (7) herein.
The functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments. Further, the method 200 can include steps directed to providing instructions within a memory that are executable by a processor of a robot (e.g., the first robot 102A) to perform operations corresponding to steps of the method 200 outlined above.
Computing system 300 comprises one or more network interfaces 310 (e.g., wired, wireless, PLC, etc.), at least one processor 320, and a memory 340 interconnected by a system bus 350, as well as a power supply 360 (e.g., battery, plug-in, etc.).
Network interface(s) 310 include the mechanical, electrical, and signaling circuitry for communicating data over the communication links coupled to a communication network. Network interfaces 310 are configured to transmit and/or receive data using a variety of different communication protocols. As illustrated, the box representing network interfaces 310 is shown for simplicity, and it is appreciated that such interfaces may represent different types of network connections such as wireless and wired (physical) connections. Network interfaces 310 are shown separately from power supply 360, however it is appreciated that the interfaces that support PLC protocols may communicate through power supply 360 and/or may be an integral component coupled to power supply 360.
Memory 340 includes a plurality of storage locations that are addressable by processor 320 and network interfaces 310 for storing software programs and data structures associated with the embodiments described herein. In some embodiments, computing system 300 may have limited memory or no memory (e.g., no memory for storage other than for programs/processes operating on the device and associated caches).
Processor 320 comprises hardware elements or logic adapted to execute the software programs (e.g., instructions) and manipulate data structures 345. An operating system 342, portions of which are typically resident in memory 340 and executed by the processor, functionally organizes computing system 300 by, inter alia, invoking operations in support of software processes and/or services executing on the device. These software processes and/or services may include consensus-based estimation processes/services 390, which can include aspects of the methods described herein and/or implementations of various modules described herein. Note that while consensus-based estimation processes/services 390 is illustrated in centralized memory 340, alternative embodiments provide for the process to be operated within the network interfaces 310, such as a component of a MAC layer, and/or as part of a distributed computing network environment.
It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules or engines configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). In this context, the terms module and engine may be interchangeable. In general, the term module or engine refers to a model or an organization of interrelated software components/functions. Further, while the consensus-based estimation processes/services 390 is shown as a standalone process, those skilled in the art will appreciate that this process may be executed as a routine or module within other processes.
It should be understood from the foregoing that, while particular embodiments have been illustrated and described, various modifications can be made thereto without departing from the spirit and scope of the invention as will be apparent to those skilled in the art. Such changes and modifications are within the scope and teachings of this invention as defined in the claims appended hereto.
This is a U.S. Non-Provisional Patent Application that claims benefit to U.S. Provisional Patent Application Ser. No. 63/375,160 filed 9 Sep. 2022, which is herein incorporated by reference in its entirety.