SYSTEMS AND METHODS FOR ROBUST MAX CONSENSUS FOR WIRELESS SENSOR NETWORKS

Information

  • Patent Application
  • Publication Number: 20210219167
  • Date Filed: January 11, 2021
  • Date Published: July 15, 2021
Abstract
Various embodiments of systems and methods for robust max consensus for wireless sensor networks in the presence of additive noise by determining and removing a growth rate estimate from state values of each node in a wireless sensor network are disclosed.
Description
FIELD

The present disclosure generally relates to wireless networks, and more specifically to systems and methods for robust max consensus for wireless sensor networks.


BACKGROUND

A wireless sensor network (WSN) is a distributed network consisting of multi-functional sensors, which can communicate with neighboring sensors over wireless channels. Estimating the statistics of sensor measurements in WSNs is necessary in detecting anomalous sensors, supporting the nodes with insufficient resources, network area estimation, and spectrum sensing for cognitive radio applications, just to name a few. Knowledge of extremes is often used in algorithms for outlier detection, clustering, classification, and localization. However, several factors such as additive noise in wireless channels, random link failures, packet loss and delay of arrival significantly degrade the performance of distributed algorithms. Hence it is important to design and analyze consensus algorithms robust to such adversities.


Although max consensus has been previously studied, the analysis of max consensus algorithms under additive channel noise and randomly changing network conditions has not received much attention. The present disclosure starts with a review of the literature on max consensus in the absence of noise. Early work introduced a distributed max consensus algorithm for both pairwise and broadcast communications and provided an upper bound on the mean convergence time. Recent works consider pairwise and broadcast communications with asynchronous updates and significantly improve the tightness of the upper bound on the mean convergence time. The convergence properties of max consensus protocols have been studied for the broadcast communications setting in distributed networks. The convergence of average and max consensus algorithms in time-dependent and state-dependent graphs has also been analyzed, and asynchronous updates in the presence of bounded delays have been considered. Max-plus algebra is used to analyze the convergence of max-consensus algorithms for time-invariant communication topologies in previous works, and for switching topologies in other works, both in the absence of noise. Distributed algorithms to reach consensus on general functions in the absence of noise are studied in previous works. A one-parameter family of consensus algorithms over a time-varying network has been proposed, where consensus on the minimum of the initial measurements can be reached by tuning a design parameter. A distributed algorithm to reach consensus on general functions in a network is presented in some works, where the weighted power mean algorithm is used to calculate the maximum of the initial measurements by setting the design parameter to infinity.


A system model with imperfect transmissions has also been considered, where a message is received with a probability 1−p. This model is equivalent to time-varying graphs, where each edge is deleted independently with a probability p. However, that system model does not consider errors in transmission, but only transmission failures (erasures).


Other works consider the presence of additive noise in the network and propose an iterative soft-max based average consensus algorithm to approximate the maximum, which uses non-linear bounded transmissions in order to achieve consensus. This algorithm depends on a design parameter that controls the trade-off between the max estimation error and convergence speed. However, the convergence speed of this soft-max based method is limited compared to the more natural max-based methods considered herein.


It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.





BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIG. 1 is a simplified illustration showing a system for determining robust max consensus for a wireless sensor network;



FIG. 2 is a flowchart illustrating a methodology for determining robust max consensus for the wireless sensor network of FIG. 1 by the system of FIG. 1;



FIG. 3 is a graphical depiction of a network with N=75 nodes;



FIG. 4 is a graphical comparison of upper bound, lower bound and a max update for all nodes with 𝒩(0,1) additive noise for a fixed graph with N=75;



FIG. 5 is a set of random graphs with N=75 and edge deletion probability of p=0.5;



FIG. 6 is a first graphical comparison of upper bound, empirical upper bound and max consensus growth rate for the network;



FIG. 7 is a second graphical comparison of upper bound, empirical upper bound and max consensus growth rate for the network;



FIG. 8 is a graphical representation of the performance of an algorithm in the presence of additive noise from 𝒩(0,1) for fixed graphs;



FIG. 9 is a graphical representation of the performance of the algorithm in the presence of additive noise from 𝒩(0,1) for random graphs with probability of edge deletion p=0.5; and



FIG. 10 is a graphical comparison of the algorithm and a soft-max based average consensus algorithm (SMA).



FIG. 11 is an exemplary computer system for effectuating the functionalities analyzing max consensus algorithms in the presence of additive noise. Corresponding reference characters indicate corresponding elements among the view of the drawings. The headings used in the figures do not limit the scope of the claims.





DETAILED DESCRIPTION

The present disclosure discloses systems and methods for analysis of max consensus algorithms in the presence of additive noise and a design of fast max-based consensus algorithms executed by a processor. Due to additive noise, the estimate of the maximum at each node has a positive drift. This results in nodes diverging from the true max value. Max-plus algebra is used to represent this ergodic process of recursive max and addition operations on the state values. This growth rate has been shown to be a constant for stochastic max-plus systems using the subadditive ergodic theorem, in a mathematics context that does not consider max-consensus. In order to study the growth rate, large deviation theory is used and an upper bound is derived for a general noise distribution in the network. The upper bound is shown to depend linearly on the standard deviation, and is a function of the spectral radius of the network. Since the noise variance and spectral radius are not known locally at each node, a two-run algorithm is proposed to locally estimate and compensate for the growth rate, and analyze its variance.


Further, the present disclosure includes the complete proof of the upper bound on the growth rate and also extends the analysis by deriving a lower bound. An empirical upper bound, which includes an additional correction factor that depends on the number of nodes, is shown to be tighter than the analytical upper bound. Additionally, the upper and lower bounds for time-varying random graphs, which model transmission failures and additive noise, are derived. Furthermore, a method to directly calculate the upper bound, without solving for the large deviation rate function of the noise, is presented. Also, using concentration inequalities, it is shown that the variance of the growth rate estimator decreases inversely with the number of iterations, and this is used to bound the variance of the estimator. Through simulations, it is shown that the proposed algorithm converges much faster with lower estimation error, in comparison to existing algorithms.


System Model

A network of N nodes is considered. The communication among nodes is modeled as an undirected graph 𝒢=(𝒱, ε), where 𝒱={1, . . . , N} is the set of nodes and ε is the set of edges connecting the nodes. The set of neighbors of node i is denoted by 𝒩_i={j | {i,j}∈ε}. The degree of the ith node, denoted by d_i=|𝒩_i|, is the number of neighbors of the ith node. The degree matrix D is a diagonal matrix that contains the degrees of the nodes along its diagonal. The connectivity structure of the graph is characterized by the adjacency matrix A, with entries [A]_{i,j}=1 if {i,j}∈ε and [A]_{i,j}=0 otherwise. The spectral radius ρ of the network corresponds to the eigenvalue with the largest magnitude of the adjacency matrix A.
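As a concrete illustration of these definitions, the following minimal Python sketch (assuming numpy is available; the 5-node edge list is a hypothetical example) constructs the adjacency matrix A, the degree matrix D, and the spectral radius ρ:

    import numpy as np

    # Hypothetical undirected graph on N = 5 nodes, given by its edge set.
    N = 5
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]

    # Adjacency matrix A: [A]ij = 1 if {i,j} is an edge, 0 otherwise.
    A = np.zeros((N, N))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0

    # Degree matrix D: diagonal matrix of node degrees d_i = |N_i|.
    D = np.diag(A.sum(axis=1))

    # Spectral radius rho: largest-magnitude eigenvalue of the symmetric A.
    rho = float(np.max(np.abs(np.linalg.eigvalsh(A))))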


In the present disclosure, the following standard assumptions on the system model are considered:

    • Each node has a real number which is its own initial measurement.
    • At each iteration, nodes broadcast their state values to their neighbors in a synchronized fashion. The analysis and the algorithm can be extended to asynchronous networks, assuming that the communication time is small enough that collisions between communicating nodes are absent.
    • Communication between nodes is analog over the wireless channel and is subject to additive noise.
    • A general model of time-varying graphs is considered, wherein a message corrupted by additive noise is received with a probability 1−p, in order to model imperfect communication links.


A system model with imperfect transmissions is considered, where a message is received with a probability 1−p, unaffected by the communication noise. Note that this system model is more general in that it considers not only transmission failures (erasures), but also errors in transmission due to imperfect communication links or fading channels.



FIG. 1 illustrates an overview of the system 100 including an example network 102 in communication with a controller or processor 104. As shown, the network 102 includes a plurality of nodes 110, each node 110 interconnected by a respective wireless connection 120. Nodes 110 are each operable for measuring and/or updating a state value xi(t). Wireless connections 120 each contribute noise to the state values of their corresponding nodes 110 modeled by vi,j(t) where i and j are neighboring nodes.


Problem Statement

The goal is to have each node reach consensus on the maximum of the node initial measurements in a distributed network, in the presence of additive communication noise. In some existing max consensus algorithms, at each iteration a node updates its state value by the maximum of the received values from its neighbors. After a number of iterations which is on the order of the diameter of the network, each node reaches a consensus on the maximum of the initial measurements. However, this approach fails in the presence of additive noise on the communication links, because every time a node updates its state value by taking the maximum over the received noisy measurements, the state value of the node drifts.


To address this problem, max-plus algebra and large deviation theory are used to find the growth rate of the state values. An algorithm is then proposed which locally estimates the growth rate and updates the state values accordingly to reach consensus on the true maximum value.


Mathematical Background

For completeness, the mathematical background including the max-based consensus algorithm and max-plus algebra is briefly reviewed.


Review of Max-Based Consensus Algorithm


In this section, the conventional max-based consensus algorithm is described. Consider a distributed network with N nodes with real-valued initial measurements, x(0)=[x1(0), . . . , xN(0)]T, where xi(t) denotes the state value of the ith node at time t. Max consensus in the absence of noise merely involves updating the state value of nodes with the largest received measurement thus far in each iteration so that the nodes reach consensus on the maximum value of the initial measurements. Let vij(t) be a zero mean, independent and identically distributed (i.i.d) noise sample from a general noise distribution, which models the additive communication noise between nodes i and j at time t. To reach consensus on the maximum of the initial state values, nodes update their state by taking the maximum over the received measurements from neighbors and their own state, given by,











x_i(t+1) = max( x_i(t), max_{j ∈ 𝒩_i} ( x_j(t) + v_ij(t) ) ).   (1)
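A minimal Python sketch of the update in equation (1) follows; the Gaussian noise model, the parameter sigma and the random seed are illustrative assumptions, not mandated by the disclosure:

    import numpy as np

    rng = np.random.default_rng(0)

    def max_consensus_step(x, A, sigma=1.0):
        # Equation (1): each node keeps the largest of its own state and the
        # noisy states received from its neighbors over the wireless links.
        x_new = x.copy()
        for i in range(len(x)):
            for j in np.flatnonzero(A[i]):
                v_ij = sigma * rng.normal()      # additive channel noise v_ij(t)
                x_new[i] = max(x_new[i], x[j] + v_ij)
        return x_new

Iterating this update, every state value eventually exceeds the true maximum and keeps growing at the constant rate λ analyzed below.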







Review of Max Plus Algebra


Max plus algebra, which can be used to represent the max consensus algorithm as a discrete linear system, is briefly introduced. A max-plus approach was considered for max consensus in previous works, but in the absence of additive noise. The approach of the present disclosure considers the presence of a general noise distribution and studies its effects on equation (1) using max-plus algebra and subadditive ergodic theory.


Max plus algebra is based on two binary operations, ⊕ and ⊗, on the set ℝ_max=ℝ∪{−∞}. The operations are defined on x, y∈ℝ_max as follows,






x⊕y=max(x,y) and x⊗y=x+y


The neutral element for the ⊕ operator is ε:=−∞ and for the ⊗ operator is e:=0. Similarly, for matrices X,Y∈ℝ_max^{N×N}, the operations are defined, for i=1, . . . , N and j=1, . . . , N, as









[X ⊕ Y]_{i,j} = [X]_{i,j} ⊕ [Y]_{i,j},

[X ⊗ Y]_{i,j} = ⊕_{k=1}^{N} ( [X]_{i,k} ⊗ [Y]_{k,j} ) = max_k ( [X]_{i,k} + [Y]_{k,j} ),






where [X]_{i,j} and [Y]_{i,j} denote the (i,j) element of matrices X and Y, respectively. For integers k>l, the notation Y(k,l)=Y(k)⊗Y(k−1)⊗ ⋯ ⊗Y(l) is used.
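The two matrix operations admit a direct translation to code. The following sketch (numpy assumed; the function names are hypothetical) implements ⊕ and ⊗ over ℝ_max, with −∞ playing the role of the neutral element ε:

    import numpy as np

    def mp_oplus(X, Y):
        # [X (+) Y]ij = max([X]ij, [Y]ij)
        return np.maximum(X, Y)

    def mp_otimes(X, Y):
        # [X (x) Y]ij = max_k ([X]ik + [Y]kj); -inf acts as the neutral element eps.
        N = X.shape[0]
        Z = np.full((N, N), -np.inf)
        for i in range(N):
            for j in range(N):
                Z[i, j] = np.max(X[i, :] + Y[:, j])
        return Z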


Consider x(t) to be an N×1 vector with the state values of the nodes at time t. Max plus algebra can be used to represent equation (1) as,














x(t+1) = W(t) ⊗ x(t),  t > 0,

       = W(t) ⊗ W(t−1) ⊗ ⋯ ⊗ W(0) ⊗ x(0) ≜ W(t,0) ⊗ x(0),   (2)







where W(t) is the N×N noise matrix at time t, with elements











[W(t)]_{ij} = { e, if i = j;  v_ij(t), if {i,j} ∈ ε;  ε, otherwise },   (3)
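As an illustration, the sketch below builds the noise matrix W(t) of equation (3) and applies one max-plus step x(t+1)=W(t)⊗x(t) of equation (2); numpy and Gaussian link noise are assumptions made for the example:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_W(A, sigma=1.0):
        # Equation (3): e = 0 on the diagonal, v_ij(t) on edges, eps = -inf elsewhere.
        W = np.full(A.shape, -np.inf)
        W[A > 0] = sigma * rng.normal(size=int(A.sum()))
        np.fill_diagonal(W, 0.0)
        return W

    def mp_step(W, x):
        # x(t+1) = W(t) (x) x(t), i.e., x_i(t+1) = max_j ([W(t)]ij + x_j(t)).
        return np.max(W + x[None, :], axis=1)

Because the diagonal of W(t) is e=0, each node always retains its own state, so mp_step reproduces the update of equation (1).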







Existence of Linear Growth


In a queuing theory and networking context, references [17], [18] show that for a system represented by the recursive relation in equation (2), x_i(t) grows linearly, in the sense that there exists a real number λ such that, for all i=1, . . . , N,










λ = lim_{t→∞} (1/t) x_i(t),

λ = lim_{t→∞} (1/t) 𝔼[ x_i(t) ],   (4)







where the first limit converges almost surely. Note that the constant λ does not depend on the initial measurement x(0), or the node index i. It is also sometimes referred to as the max-plus Lyapunov exponent of the recursion in equation (2).


In the current WSN context, the growth of x_i(t) is clearly dependent on the distribution of the noise and the graph topology. However, there exist no analytical expressions for the growth rate λ, even for the simplest graphs and noise distributions. Indeed, this is related to a long-standing open problem in first- and last-passage percolation [26] to obtain analytical expressions for λ. One of the main contributions herein is analytical bounds on λ for arbitrary graphs and general noise distributions. Theorems are introduced to upper and lower bound the growth rate for arbitrarily connected fixed and random graphs.


Bounds on Growth Rate for Fixed Graphs
Upper Bound

To derive the upper bound on the growth rate, the following theorem is provided for fixed graphs and general noise distributions. Before stating the theorem, the following Lemma is introduced which will be later invoked in the theorem.


Lemma 1. Let A be the adjacency matrix and ρ be the spectral radius; then [A^t]_{i,j} ≤ ρ^t.


Proof: Consider a singular value decomposition (SVD) of A=UΣV^T, so that A^t=(UΣV^T)(UΣV^T) ⋯ (UΣV^T), t times. Let e_i be a unit vector of zeros, except a 1 at the ith position. Hence, [A^t]_{i,j}=e_i^T(UΣV^T)^t e_j can be written, and it can be shown that [A^t]_{i,j}≤ρ^t by showing |e_i^T ρ^{−t} A^t e_j|≤1. To this end, it may be written that,






ρ^{−t} A^t = ( U Σ̄ V^T )^t,

where Σ̄ = ρ^{−1} Σ is a diagonal matrix with diagonal elements







( 1, ρ_2/ρ, . . . , ρ_N/ρ ),




where ρ_n is the nth largest singular value of A. Since U and V^T are unitary, it is clear that Σ̄ is a contraction, so that





∥Ux∥ = ∥x∥,  ∥V^T x∥ = ∥x∥,  ∥Σ̄x∥ ≤ ∥x∥,   (5)


because











ρ_n/ρ < 1 for n=2, . . . , N. Now, successive application of equation (5) yields,











| e_i^T ρ^{−t} A^t e_j | = e_i^T ρ^{−t} A^t e_j = e_i^T ( U Σ̄ V^T ) ⋯ ( U Σ̄ V^T ) e_j ≤ 1,








where the first equality is because A has non-negative entries, and the inequality uses equation (5) and the Cauchy-Schwarz inequality. Hence, [A^t]_{i,j} ≤ ρ^t, which concludes the proof of Lemma 1.
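Lemma 1 is easy to check numerically; the sketch below (a hypothetical 20-node random graph, numpy assumed) verifies [A^t]_{i,j} ≤ ρ^t for a few values of t:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 20
    A = np.triu((rng.random((N, N)) < 0.3).astype(float), 1)
    A = A + A.T                                   # symmetric 0/1 adjacency matrix

    rho = np.max(np.abs(np.linalg.eigvalsh(A)))   # spectral radius
    for t in (1, 2, 5, 10):
        At = np.linalg.matrix_power(A, t)
        assert np.all(At <= rho**t + 1e-6), "Lemma 1 violated"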


Theorem 1. (Upper Bound) Suppose the moment generating function of the noise, M(γ):=𝔼[e^{γ v_ij(t)}], exists for γ in a neighborhood of the origin. Then, an upper bound on the growth rate λ is given by,










λ ≤ inf { x : sup_{0≤β≤1} [ H(β) + β log(ρ) − β I(x/β) ] < 0 },   (6)









    • where ρ is the spectral radius of the graph, H(β) is the binary entropy function given by









H(β)=−β log(β)−(1−β)log(1−β),

    • and I(x) is the large deviation rate function of the noise, given by,








I(x) := sup_{γ>0} ( xγ − log( M(γ) ) ).






Proof: The proof begins by describing the approach taken to prove the theorem. First, the growth rate λ is formulated as a function of a maximal path sum of random variables. Next, to find the maximal path sum, the number of paths in t hops that involve l self-loops is counted. Finally, the upper bound is put in the desired form using large deviation theory. The different parts of the proof are labeled accordingly, for readability.


Relate λ and maximal path sum: To prove Theorem 1, λ is upper bounded using the elements of W(t,0) defined in equation (2). The i,j entry [W(t,0)]i,j, can be written as the maximum of the sum of noise samples over certain paths. To be precise, let Pt(i,j) be the set of all path sequences








{p(k)}_{k=0}^{t} that start at p(0)=j and end at p(t)=i, and also satisfy (p(k),p(k+1))∈ε or p(k)=p(k+1) for k∈{0,1, . . . , t−1}, which allows self-loops. For simplicity, M_t(i,j) ≜ [W(t,0)]_{i,j} is defined. The path sum M_t(i,j) corresponds to the path whose sum of i.i.d. noise samples along the edges in t hops between nodes i and j is maximum among all possible paths, and is given by,










M_t(i,j) = [W(t,0)]_{i,j} = max_{P_t(i,j)} Σ_{k=0}^{t−1} [W(k)]_{p(k),p(k+1)}.   (7)







For the system defined by the recursive relation in equation (2), let us define the growth rate of this max-plus process to be λ and derive an upper bound on λ. λ is related to Mt(i,j) by first recalling the definition in equation (4),









λ = lim_{t→∞} (1/t) x_i(t) = lim_{t→∞} (1/t) max_j ( M_t(i,j) + x_j(0) )

  ≤ max_j ( lim sup_{t→∞} (1/t) M_t(i,j) + lim sup_{t→∞} (1/t) x_j(0) )

  ≤ max_j lim sup_{t→∞} (1/t) M_t(i,j).   (8)







In fact, Kingman's subadditive ergodic theorem can be invoked [17] to show that the lim sup in the last inequality can be replaced by a limit. Furthermore, as shown in the same reference, this limit is independent of i and j. Hence, one can work with M_t(i,j) instead of x_i(t) to upper bound the graph-dependent constant λ. This enables dropping the maximum over j and studying the constant that M_t(i,j)/t converges to. Toward this goal, consider the smallest value of x for which











lim_{t→∞} P[ (1/t) M_t(i,j) > x ] = 0.   (9)







This probability is upper bounded to find bounds on such values of x.


Count the number of paths with l self-loops: Examining equation (7), it is observed that, for a self-loop at time k, p(k)=p(k+1). Since [W(k)]_{i,i}=e=0, there is no contribution to the sum in equation (7), as self-loops are not affected by the noise. So it is useful to express the maximum in equation (7) over the paths that have a fixed number of self-loops l. To study this case, the number of paths that contain l self-loops is counted. Consider the expression [(A+zI)^t]_{i,j}, where z is an indeterminate variable that will help count the number of paths from node i to node j in t steps that go through a fixed number of l self-loops. Using the binomial expansion, the following can be written:








[ (A + zI)^t ]_{i,j} = Σ_{l=0}^{t} z^l ( t choose l ) [ A^{t−l} ]_{i,j},







where the coefficient of z^l is the number of paths from node i to j in t steps that go through l self-loops, denoted as

n_l = ( t choose l ) [ A^{t−l} ]_{i,j}.





Upper bound the growth rate λ: Now the following can be written,











(1/t) M_t(i,j) = max_{l ∈ {0,1, . . . , t−1}} max( S_1(l)/t, . . . , S_{n_l}(l)/t ),   (10)







where Sq(l) is any sum in equation (7) that involves l self loops, q∈{1, . . . , nl} and nl is the number of paths in Pt(i,j) with l self-loops. Substituting equation (10) into equation (9) and using the union bound, equation (9) can be upper bounded as,










P[ max_l max( S_1(l)/t, . . . , S_{n_l}(l)/t ) > x ] ≤ Σ_{l=0}^{t} Σ_{q=1}^{n_l} P[ (1/t) S_q(l) > x ],   (11)







Since S_q(l) is a sum of (t−l) i.i.d. random variables, S_q(l) is i.i.d. in q for a fixed l, but differently distributed for different l, so the index q can be dropped and the sum over q can be replaced with n_l to get,











P[ (1/t) M_t(i,j) > x ] ≤ Σ_{l=0}^{t} n_l · P[ (1/t) S(l) > x ]

  = Σ_{l=0}^{t} ( t choose l ) [ A^{t−l} ]_{i,j} P[ (1/t) S(l) > x ],   (12)







From Lemma 1, [A^{t−l}]_{i,j} ≤ ρ^{t−l}; letting

l* = argmax_l ( t choose l ) ρ^{t−l} P[ S(l)/t > x ]

in equation (12),










P[ (1/t) M_t(i,j) > x ] ≤ (t+1) ( t choose l* ) ρ^{t−l*} P[ S(l*)/t > x ].   (13)











P[ S(l*)/t > x ] can be rewritten as P[ S(l*)/(t−l*) > ( t/(t−l*) ) x ]. In the next step, the second term on the RHS of equation (13) can be bounded by the Chernoff bound as,







P[ S(l*)/(t−l*) > ( t/(t−l*) ) x ] ≤ e^{ −t(1−α) I( x/(1−α) ) },








where I(x) is the large deviation rate function and α = l*/t. For large t,

( t choose αt ) = e^{ t( H(α) + o(1) ) },

where H(α) = −α log(α) − (1−α) log(1−α). For convenience, let β = 1−α; then equation (13) reduces to,










P[ (1/t) M_t(i,j) > x ] ≤ (t+1) e^{ t( H(β) + β log(ρ) − β I(x/β) + o(1) ) }.   (14)







It is well-known that the large-deviation rate function I(·) is monotonically increasing to infinity for arguments restricted above the mean of the random variable (zero-mean noise in this case), so the exponent in equation (14) will be negative when x is large enough. Hence the smallest x for which equation (14) goes to zero exponentially is given by equation (6). This concludes the proof of the Theorem ▪.


Simplified upper bound for Gaussian noise: If the noise is Gaussian, i.e., v_ij ~ 𝒩(0,1), then I(x) = x²/2 in equation (6). Using algebra, equation (6) simplifies to,









λ ≤ sup_{0≤β≤1} √( 2β( H(β) + β log(ρ) ) ).   (15)







Defining g(β) = √( 2β( H(β) + β log(ρ) ) ), the supremum is achieved for the β that satisfies ∂g(β)/∂β = 0, which simplifies to

ρ = √( β/(1−β) ) e^{ −H(β)/(2β) }.



Note that, I(·) is a convex function and as ρ increases β will approach its upper limit of 1. Therefore, it can be concluded that for graphs with large ρ, the optimal value of β→1, hence it can be written that,






H(β) + β log(ρ) − β I(x/β) ≈ log(ρ) − I(x),   (16)

which is negative when I(x) > log(ρ).


This behavior of β was established for the Gaussian case; however, it holds more generally. Since f(x,β) = H(β) + β log(ρ) − β I(x/β) is concave in β for every x, it must only be checked that, for x > 0, the β* that solves ∂f(x,β*)/∂β = 0 approaches 1 as log(ρ) increases. Setting the derivative to 0:






log( (1−β)/β ) + log(ρ) − I(x/β) + (x/β) I′(x/β) = 0.




One can check that as ρ increases, log(ρ)→∞ and hence,







log( (1−β)/β ) → −∞,

which is reached as β→1. This shows that as ρ increases, β→1 for general noise distributions as well.


Alternative upper bound: Recall that, while proving Theorem 1, the path from node i to j in t steps, whose sum was the maximum among all possible paths, was of interest. To achieve this, first the number of paths from node i to j in t steps were counted and then, these paths were grouped in terms of number of paths that involved self-loops. Note that, self-loops were not affected by noise so their contribution to the sum along the path is 0. The analysis would be simpler if noise on self loops was considered, thereby eliminating the need to count and group the paths by number of self loops involved. So considering noise on self-loops, which is equivalent to setting β=1 in Theorem 1, would result in the following recursion,











x_i(t+1) = max( x_i(t) + v_ii(t), max_{j ∈ 𝒩_i} ( x_j(t) + v_ij(t) ) ),   (17)







instead of equation (1). Note that equation (17) is not the proposed max consensus scheme, but an auxiliary recursion used here to upper bound the growth rate. It can be observed that x_i(t+1) is convex in v_ii(t), and due to Jensen's inequality the additional noise in equation (17) can only increase the slope λ compared to equation (1). Hence, the growth rate of equation (1) is upper bounded by that of equation (17). Repeating the proof of Theorem 1 for this case amounts to replacing A by A+I, and therefore ρ with ρ+1, so the following is true:


Theorem 2. The auxiliary recursion in equation (17) has a growth rate upper bounded by the value of x>0 that solves,






I(x) = log(ρ+1),   (18)


where I(x) is the large deviation rate function. Moreover, this value of x upper bounds the growth rate λ of the recursion in equation (1).


Note that, for Gaussian noise distribution the alternative upper bound on the growth rate can be calculated as,





λ ≤ √( 2 log(ρ+1) ).   (19)


While equation (19) is a looser bound than equation (15), it is much simpler. As ρ increases, i.e., as β→1, the alternative upper bound and the exact upper bound converge.
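For Gaussian noise both bounds can be evaluated directly. The sketch below (numpy assumed; the grid search over β is an implementation choice) computes equation (15) and the simpler alternative bound of equation (19):

    import numpy as np

    def H(b):
        # Binary entropy, with the convention 0 log 0 = 0.
        b = np.clip(b, 1e-12, 1 - 1e-12)
        return -b * np.log(b) - (1 - b) * np.log(1 - b)

    def gaussian_upper_bound(rho):
        # Equation (15): sup over beta of sqrt(2 beta (H(beta) + beta log(rho))).
        beta = np.linspace(1e-6, 1.0, 100001)
        val = 2.0 * beta * (H(beta) + beta * np.log(rho))
        return float(np.sqrt(np.max(np.maximum(val, 0.0))))

    rho = 30.56                                       # spectral radius of the example network
    exact = gaussian_upper_bound(rho)                 # equation (15)
    loose = float(np.sqrt(2.0 * np.log(rho + 1.0)))   # equation (19)

As the text notes, the two numbers approach each other as ρ grows, since the optimizing β tends to 1.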


Lower Bound


While it is clear that λ≥0, it is not obvious when λ>0. In this section, a lower bound is derived which, in part, shows that the growth rate λ due to additive noise in the network is always positive (λ>0). Also, the lower bound relates to the order statistics of the underlying noise distribution, as well as the steady state distribution of an underlying Markov chain.


Lower bound for regular graphs: Recall that the state of the ith sensor at time t+1 is given by the ith element of the vector, x(t+1)=W(t,0)⊗x(0) which is,












x_i(t+1) = max_j ( [W(t,0)]_{i,j} + x_j(0) ) ≥ max_j ( [W(t,0)]_{i,j} ) + x_min(0),   (20)







where x_min(0) = min_i x_i(0).






Now, using equation (20), the growth rate λ is lower bounded as,









λ = lim_{t→∞} x_i(t)/t = lim_{t→∞} x_i(t+1)/t

  ≥ lim_{t→∞} (1/t) max_j [W(t,0)]_{i,j} + lim_{t→∞} (1/t) x_min(0)   (21a)

  ≥ lim_{t→∞} (1/t) Σ_{k=0}^{t−1} [W(k)]_{p(k),p(k+1)},   (21b)







where equation (21a) is due to equation (20), and in equation (21b), {p(k)}_{k=0}^{t} is any path that satisfies p(0)=j and p(t)=i. In order to get a good lower bound, equation (21b) is evaluated for a specific path defined as,










p(k+1) = argmax_{m ∈ 𝒩(p(k)) ∪ p(k)} [W(k)]_{p(k),m}.   (22)







This amounts to selecting the locally optimum or greedy path. If the graph is d-regular, then with p(k) chosen as in equation (22), the random variables in equation (21b) are distributed the same as the maximum of d i.i.d. random variables and zero, whose expectation is denoted as m⁺(d). Therefore, due to the law of large numbers, equation (21b) converges to,










λ ≥ m⁺(d) = 𝔼[ max( 0, max_m [W(k)]_{p(k),m} ) ] = d ∫_0^∞ x F^{d−1}(x) f(x) dx,   (23)







where F(·) and f(·) are the CDF and PDF of the noise, respectively. Also, one can lower bound the growth rate with the simple expression

λ ≥ F^{−1}( d/(d+1) ),

provided that the median of the noise samples is zero.
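The quantities in equation (23) are easy to compute numerically for 𝒩(0,1) noise; the sketch below (numpy assumed; the integration grid is an implementation choice) evaluates m⁺(d) and the simple quantile bound F^{−1}(d/(d+1)):

    import math
    import numpy as np

    xs = np.linspace(0.0, 8.0, 4001)
    F = 0.5 * (1.0 + np.vectorize(math.erf)(xs / math.sqrt(2.0)))   # N(0,1) CDF
    f = np.exp(-xs**2 / 2.0) / math.sqrt(2.0 * math.pi)             # N(0,1) PDF

    def m_plus(d):
        # Equation (23): m+(d) = d * Int_0^inf x F^{d-1}(x) f(x) dx.
        return d * np.trapz(xs * F**(d - 1) * f, xs)

    d = 4                                               # degree of a d-regular graph
    lb_exact = m_plus(d)                                # lambda >= m+(d)
    lb_simple = xs[np.searchsorted(F, d / (d + 1.0))]   # lambda >= F^{-1}(d/(d+1))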


Lower bound for irregular graphs: For irregular graphs, the path defined in equation (22) is a random walk on the graph with the corresponding sequence of nodes constituting a Markov chain. When the graph is irregular, the transition probabilities of this Markov chain depend on the degree of the current node. Specifically, the transition probability matrix is given by,






P = (1−k) D^{−1} A + k I,


where the diagonal matrix has entries [D]_{i,i}=d_i, the degree of node i, so











[P]_{i,j} = { (1−k)/d_i, if i ≠ j and (i,j) ∈ ε;  k, if i = j },   (24)







where k is the probability that the noise samples on all neighboring edges of node i are negative, given by

k = P[ [W(k)]_{i,j} < 0, ∀j ] = d_i ∫_{−∞}^{0} F^{d_i−1}(x) f(x) dx.


Let the steady state probabilities of this Markov chain be denoted by πi. Then, using the law of large numbers the lower bound is given by,














lim_{t→∞} (1/t) Σ_{k=0}^{t−1} max_m ( [W(k)]_{p(k),m} ) = Σ_{i=1}^{N} π_i m⁺(d_i),   (25)







since the random variable









max_m ( [W(k)]_{p(k),m} ) has expectation m⁺(d_i), given node i. One can find a closed-form expression for π_i as








π_i = d_i/(2E),

where E := |ε| is the total number of edges in the network. To verify this, one can check that π^T P = π^T, where π^T = [π_1, . . . , π_N], using equation (24). In conclusion, the lower bound on the growth rate for irregular graphs is given by,









λ ≥ Σ_{i=1}^{N} ( d_i/(2E) ) m⁺(d_i).   (26)
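Continuing the previous sketch (it reuses the m_plus function defined there), the irregular-graph lower bound of equation (26) weights m⁺(d_i) by the stationary probabilities π_i = d_i/(2E):

    def lower_bound_irregular(A):
        # Equation (26): lambda >= sum_i (d_i / 2E) * m+(d_i).
        deg = A.sum(axis=1).astype(int)
        two_E = int(deg.sum())          # sum of degrees equals 2E
        return sum((d / two_E) * m_plus(d) for d in deg if d > 0)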







Bounds on Growth Rate for Random Graphs

In this section, the case is considered where each edge is absent with a probability p, independently across edges and time, which models random transmission failures.


Upper Bound for Random Graphs


It is now shown that the upper bound on the growth rate for randomly changing graphs can be obtained simply by replacing ρ in the fixed graph case by ρ(1−p) in equation (6), where p is the Bernoulli probability that any edge is deleted independently at each iteration.


Recall that in the fixed graph model, W(k) had zero (e) along the diagonal, and [W(k)]_{l,m}=v_lm(k) were the underlying i.i.d. noise random variables when (l,m)∈ε. The random graph can be described as,











[W(k)]_{l,m} = { v_lm(k) with prob. (1−p), if (l,m) ∈ ε;  −C with prob. p, if (l,m) ∈ ε;  e, if l = m;  ε, if (l,m) ∉ ε },   (27)







where C is a large positive constant which captures a randomly absent edge as C→∞. Note that, since each node maxes with itself at each iteration in equation (1), the large negative value −C will never propagate through the network, which is equivalent to deleting an edge, for large C.


Following the analysis of the fixed graph case, only the moment generating function of the noise samples changes to,






M(γ,C) = p e^{−Cγ} + (1−p) M(γ),


where M(γ) is the original moment generating function of the noise samples, given by M(γ)=𝔼[e^{γ v_ij(k)}]. The corresponding rate function is given by







I(x,C) = sup_{γ>0} ( xγ − log( M(γ,C) ) ).



Following the proof of Theorem 1, to upper bound the growth for this case the smallest x is found that satisfies,









lim_{C→∞} sup_{0≤β≤1} ( H(β) + β log(ρ) − β I( x/β, C ) ) < 0.





Consider f(x,β,C) = H(β) + β log(ρ) − β I( x/β, C ). Since f(x,β,C) is convex in C and concave in β, it can be written that,














inf_C sup_{0≤β≤1} f(x,β,C) = sup_{0≤β≤1} inf_C f(x,β,C)

  = sup_{0≤β≤1} lim_{C→∞} f(x,β,C)

  = sup_{0≤β≤1} ( H(β) + β log( ρ(1−p) ) − β I(x/β) ) < 0,   (28)







where the first equality is due to the classical minimax theorem, and the second is due to the monotonicity of f(x,β,C) in C. Hence, the upper bound can be written as,









λ ≤ inf { x : sup_{0≤β≤1} [ H(β) + β log( ρ(1−p) ) − β I(x/β) ] < 0 }.   (29)







Interestingly, this is precisely the upper bound for fixed graphs, except with ρ(1−p) in place of ρ. While ρ≥1 always holds for a fixed graph, in the random graph case it is possible to have ρ(1−p)<1. If ρ(1−p)≈0, then it is easy to check in equation (29) that the optimizing β is near zero. This can be contrasted with the case where ρ is large and the optimizing β was found to satisfy β≈1.


Lower Bound for Random Graphs


Here, the lower bound on the growth rate for randomly changing graphs is derived. Recall that the path defined in equation (22), with W(k) as defined in equation (27), yields a lower bound on the growth rate for graphs with an edge deletion probability of p.


Compared to equation (26), the only difference in the derivation is that node i will now have a random degree Z_i, which is binomial with parameters (d_i, 1−p). Due to the law of large numbers, equations (25)-(26) have an additional expectation with respect to this binomial distribution, resulting in the following expression,










λ ≥ Σ_{i=1}^{N} π_i 𝔼[ m⁺(Z_i) ]

  = Σ_{i=1}^{N} ( d_i/(2E) ) Σ_{k=0}^{d_i} ( d_i choose k ) p^{d_i−k} (1−p)^k m⁺(k).   (30)







Note that, in equation (30), π_i = d_i/(2E) still holds, since the transition probabilities of the Markov chain are still of the form in equation (24).
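Again continuing the sketch with m_plus from the fixed-graph bound, equation (30) only adds an expectation over the binomial degree Z_i ~ Bin(d_i, 1−p); math.comb from the Python standard library is used:

    from math import comb

    def lower_bound_random(A, p):
        # Equation (30): binomially averaged version of the fixed-graph bound (26).
        deg = A.sum(axis=1).astype(int)
        two_E = int(deg.sum())
        lb = 0.0
        for d in deg:
            e_mplus = sum(comb(d, k) * p**(d - k) * (1 - p)**k * m_plus(k)
                          for k in range(1, d + 1))   # m+(0) = 0 contributes nothing
            lb += (d / two_E) * e_mplus
        return lb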


Upper Bound on Growth Rate without Calculating I(x)


In this section, a technique is presented to directly calculate the upper bound on growth rate using the moment generating function, without having to compute the large deviation rate function of the additive noise distribution.


Recall that the upper bound on the growth rate is given by equation (29), where p=0 for fixed graphs. For convenience, let K ≜ ρ(1−p) and f̄(β,x) = H(β) + β log(K) − β I(x/β). Since








I(x) = sup_{γ>0} ( xγ − log M(γ) ),

it can be written that,













sup_{0≤β≤1} f̄(β,x) = inf_{γ>0} sup_{0≤β≤1} ( H(β) + β log(K) − xγ + β log M(γ) ),   (31)







where the minimax theorem is used to interchange the infimum and supremum, since log M(γ) is always convex. The inner supremum can be solved in closed form as,











β* = K M(γ) / ( 1 + K M(γ) ),   (32)







which yields,










sup_{0≤β≤1} f̄(β,x) = inf_{γ>0} ( H(β*) + β* log( K M(γ) ) − xγ ).






So,













inf_x { x : sup_{0≤β≤1} f̄(β,x) < 0 } = inf_{γ>0} ( (1/γ) H(β*) + (β*/γ) log( K M(γ) ) ),   (33)







Note that β* is also a function of γ. This technique is very useful for calculating the growth rate when I(x) is difficult to evaluate, or unavailable.
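The sketch below (numpy assumed; the grid over γ is an implementation choice) evaluates equation (33) with the closed-form β* of equation (32), shown here for Gaussian 𝒩(0,1) noise, for which log M(γ)=γ²/2:

    import numpy as np

    def H(b):
        b = np.clip(b, 1e-12, 1 - 1e-12)
        return -b * np.log(b) - (1 - b) * np.log(1 - b)

    def mgf_upper_bound(K, log_M, gammas=np.linspace(1e-3, 10.0, 20000)):
        # Equation (33): inf over gamma of H(beta*)/gamma + (beta*/gamma) log(K M(gamma)),
        # with beta* = K M(gamma) / (1 + K M(gamma)) from equation (32).
        log_KM = np.log(K) + log_M(gammas)
        beta = 1.0 / (1.0 + np.exp(-log_KM))     # = K M / (1 + K M), computed stably
        return float(np.min(H(beta) / gammas + beta * log_KM / gammas))

    bound = mgf_upper_bound(K=30.56, log_M=lambda g: g**2 / 2.0)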


Empirical Upper Bound on Growth Rate

In this section, an empirical correction factor to the upper bound is proposed, which improves the tightness of the bound for all network settings and noise distributions. In order to improve the tightness of the upper bound, a correction factor ϕ is introduced to the upper bound in equation (6). The correction factor ϕ depends only on the number of nodes N in the network, and is given by,










ϕ = 1 − 1/( 2√N ),   (34)







and multiplies the upper bound in equation (6).


While there may be no proof that this correction will always yield an upper bound, the choice of ϕ was empirically validated over different graph topologies and noise distributions, and in all settings, ϕ improved the tightness of the bound. The intuition is that the approximations made in deriving the upper bound lead to a minor deviation in tightness for smaller N, which can be fixed by ϕ. Note that, as N→∞, the compensation variable ϕ→1; hence, ϕ mainly contributes for graphs with a smaller number of nodes.


The tightness of the upper bound in equation (6) is compared to the empirical bound, illustrating the accuracy of the correction factor ϕ.












Algorithm 1 Robust Max Consensus Algorithm

 1: First run:
 2:   Input: iterations = t_max, # of nodes = N
 3:   Initialization:
 4:     Initialize all nodes to zero, x_i(0) = 0
 5:   repeat until: t_max iterations
 6:     for i = 1 : N
 7:       x_i(t) = max( x_i(t), max_{j ∈ 𝒩_i} ( x_j(t−1) + v_ij(t−1) ) )
 8:     end for
 9:   end repeat
10:   Growth rate estimate: λ̂_i(t_max) = x_i(t_max)/t_max
11: Second run:
12:   Input: # of nodes = N, initial state: x_i(0)
13:   repeat until: convergence
14:     for i = 1 : N
15:       x_i(t) = max( x_i(t), max_{j ∈ 𝒩_i} ( x_j(t−1) + v_ij(t−1) ) ) − λ̂_i(t_max)
16:     end for
17:   end repeat









Robust Max Consensus Algorithm

Max consensus algorithms in existing works fail to converge in the presence of noise, as there is no compensation for the positive drift induced by the noise. Some works develop a soft-max based average consensus (SMA) approach to approximate the maximum and compensate for the additive noise. However, those algorithms are sensitive to a design parameter, which controls the trade off between estimation error and convergence speed. So, a fast max-based consensus algorithm is developed in this section, which is informed by the fact that there is a constant slope λ, analyzed in the previous sections, which can be estimated and removed. This makes the algorithm robust to the additive noise in the network.


If the spectral radius of the network and the noise variance are known, then by using Theorem 1, one can closely estimate the growth rate and subtract this value at each node after the node update. However, the noise variance and the spectral radius are not always known locally at each node. Hence, a fast max consensus algorithm generalized to unknown noise distributions is proposed, as described in Algorithm 1, where the slope is locally estimated at each node. The variance of this estimator is also analyzed.


The algorithm consists of two runs. In the first run, the state values of all the nodes are initialized to zero and the max consensus algorithm is run in the additive noise setting. This can be performed by a simple reset operation, which is available at every node, followed by initiating the conventional max consensus algorithm. Note that, in this case, the true maximum is zero, but due to the additive noise, the state values grow at the rate λ. The growth rate estimate for node i, denoted by λ̂_i, is computed locally over t_max iterations as,













λ̂_i(t_max) = x_i(t_max)/t_max,   (35)







the average increment in the state value of node i. Note that, this estimation is done locally at every node. Also, the algorithm is memory-efficient, since the history of state values is not used, and only the information of the iteration index and the current state value is needed to estimate the growth rate.


In the second run, the max consensus algorithm is run on the actual measurements to find the maximum of the initial readings. The growth rate estimate λ̂_i is used to compensate for the error induced by the additive noise, as given in line (15) of Algorithm 1. Note that the estimator is independent of the type of additive noise distribution.
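A compact Python rendering of Algorithm 1 is given below as an illustrative sketch (numpy and Gaussian link noise are assumptions; the function names are hypothetical):

    import numpy as np

    rng = np.random.default_rng(2)

    def noisy_max_step(x, A, sigma):
        # Lines 7 and 15 of Algorithm 1, before the growth-rate correction.
        noise = sigma * rng.normal(size=A.shape)
        recv = np.where(A > 0, x[None, :] + noise, -np.inf)
        return np.maximum(x, recv.max(axis=1))

    def robust_max_consensus(x0, A, t_max, sigma=1.0):
        # First run (lines 1-10): start from zero; the growth of the all-zero
        # state over t_max iterations gives the local estimate lam_hat_i.
        x = np.zeros_like(np.asarray(x0, dtype=float))
        for _ in range(t_max):
            x = noisy_max_step(x, A, sigma)
        lam_hat = x / t_max                       # equation (35)

        # Second run (lines 11-17): max consensus on the true measurements,
        # subtracting lam_hat_i after every update (line 15).
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(t_max):
            x = noisy_max_step(x, A, sigma) - lam_hat
        return x

Run on initial readings x0, the returned states hover around max_i x_i(0) instead of drifting upward.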


To clarify, FIG. 2 shows a methodology 200 for determining max consensus in a wireless distributed network. At block 202, a network is provided including N sensor nodes, each sensor node i assuming an assigned or measured state value x_i(t), and each connection between neighboring sensor nodes i and j assuming additive noise v_ij(t), where i,j∈{1, . . . , N}. At block 204, a growth rate estimate λ̂_i associated with each respective sensor node i is determined. Block 204 spawns three sub-blocks 242, 244 and 246; at block 242, the state values x_i(t) of all sensor nodes are initialized to zero such that x_i(0)=0. This step is crucial as it allows direct measurement of sensor drift; any nonzero state value updated in further iterations based on x_i(0)=0 is due to sensor drift, network anomalies or noise from neighboring channels, allowing the system to fully characterize the growth rate estimate in the presence of zero sensor measurement. At block 244, the state values x_i(t) are updated for each sensor node i with the local maximum for t_max iterations such that








x_i(t) = max( x_i(t), max_{j ∈ 𝒩_i} ( x_j(t−1) + v_ij(t−1) ) ).





This step mirrors a typical max consensus algorithm; however, as discussed above, the initial sensor node state values were set to zero. At block 246, the growth rate estimate λ̂_i is determined such that









λ̂_i(t_max) = x_i(t_max)/t_max.





In a perfect world with no communication noise, sensor drift or other network anomalies, xi(tmax) when xi(0)=0 would hypothetically be zero, as the maximum state value held by the sensor nodes from time t=0 would always be zero. Since this is not the case, nonzero xi(tmax) is a false value due to sensor drift, communication noise, or other network anomalies.


Once the growth rate estimate λ̂_i has been determined, a true state value maximum for each node can be estimated by running the iteratively updating max consensus methodology again, this time with true initial sensor state values and by removing the growth rate estimate from each state value at each iteration. This is shown in block 206, which includes determining a true state value maximum for each respective sensor node i of the plurality of N sensor nodes for each iteration t of a plurality of t_max iterations to generate a set of true state value maxima. At sub-block 262 of block 206, the initial state values x_i(0) are directly measured by each sensor node i. At sub-block 264 of block 206, each sensor node i is updated with a local maximum for t_max iterations while subtracting the growth rate estimate λ̂_i such that








x_i(t) = max( x_i(t), max_{j ∈ 𝒩_i} ( x_j(t−1) + v_ij(t−1) ) ) − λ̂_i(t_max).






This step yields a set of true state value maxima with the noise-induced growth removed, one for each node at each iteration. At block 208, a final state value maximum is selected from the set of true state value maxima, the final state value maximum being the maximum of that set.


Performance Analysis


To address the accuracy of the estimate in equation (35) over a finite number of iterations, the Efron-Stein inequality is used to show that the variance of the growth rate estimator λ̂_i(t_max) decreases as 𝒪(1/t_max), where t_max is the number of iterations. For completeness, the Efron-Stein inequality is introduced in the following theorem.


Theorem 3. Let X_1, X_2, . . . , X_n be independent random variables and let X_q′ be an independent copy of X_q, for q ≥ 1. Let Z = f(X_1, X_2, . . . , X_q, . . . , X_n) and

Z_q′ = f(X_1, X_2, . . . , X_{q−1}, X_q′, X_{q+1}, . . . , X_n);

then








Var(Z) ≤ Σ_{q=1}^{n} 𝔼[ ( (Z − Z_q′)_+ )² ],






where (Z − Z_q′)_+ = max( 0, Z − Z_q′ ).






The following theorem bounds the variance of the growth rate estimator.


Theorem 4. The variance of the growth rate estimator λ̂_i(t_max) satisfies,











Var( λ̂_i(t_max) ) ≤ σ²/t_max,   (33)







where t_max is the number of iterations and σ² = Var(v_ij(t)).


Proof: Using equation (35), and recalling from Theorem 1, the expression for xi(tmax) with zero initial conditions xi(0)=0 is












λ̂_i(t_max) = (1/t_max) max_{ {p(k)} ∈ ∪_j P_{t_max}(i,j) } Σ_{k=0}^{t_max} [W(k)]_{p(k),p(k+1)}.   (36)







Next, Theorem 3 is used to bound the variance of equation (36). For simplicity of notation, set Z = λ̂_i(t_max), which depends on the noise samples v_ij(t) through W(k) in equation (36). So the independent random variables X = {X_1, X_2, . . . , X_n} in Theorem 3 correspond to a re-indexing of v_ij(t), with n denoting the total number of noise samples that influence λ̂_i(t_max), which is approximately n ≈ E(t_max+1), where E = |ε| is the total number of edges (the exact value of n depends on the graph topology). Z_q′ is set to be given by equation (36) when the noise sample v_ij(t) corresponding to X_q is replaced by an independent copy X_q′. Note that the path that maximizes equation (36) corresponds to a subset 𝒬(X) of {1, . . . , n}, with t_max elements.


If q ∉ 𝒬(X), then the maximal path is unaffected, so Z − Z_q′ ≤ 0 and (Z − Z_q′)_+ = 0. Hence, the analysis is simplified by considering only q ∈ 𝒬(X), so that Theorem 3 can be simplified from involving n terms in the upper bound to only t_max terms:
















Var(Z) ≤ 𝔼_X [ Σ_{q ∈ 𝒬(X)} 𝔼[ ( (Z − Z_q′)_+ )² | X ] ]

  = 𝔼_X [ Σ_{q ∈ 𝒬(X)} ( 𝔼[ ( (Z − Z_q′)_+ )² | X_q ≥ X_q′, X ] P[ X_q ≥ X_q′ | X ]

  + 𝔼[ ( (Z − Z_q′)_+ )² | X_q < X_q′, X ] P[ X_q < X_q′ | X ] ) ],   (37)







where the equality is due to the total expectation theorem. Note that, for q ∈ 𝒬(X) and X_q < X_q′, the maximal path remains the same and (Z − Z_q′)_+ = 0. Using P[ X_q ≥ X_q′ | X ] = 1/2, equation (37) reduces to,











Var(Z) ≤ (1/2) 𝔼_X [ Σ_{q ∈ 𝒬(X)} 𝔼[ ( (Z − Z_q′)_+ )² | X_q ≥ X_q′, X ] ]

  ≤ (1/2) 𝔼_X [ Σ_{q ∈ 𝒬(X)} 𝔼[ ( (X_q − X_q′)_+ / t_max )² | X_q ≥ X_q′, X ] ],   (38)







where Z − Z_q′ = (X_q − X_q′)/t_max is used if the maximal path does not change when X_q′ is substituted for X_q; if, on the other hand, the maximal path changes, then Z − Z_q′ ≤ (X_q − X_q′)/t_max, which can be verified by considering a substitution of X_q′ in the original path, which is smaller than Z_q′. It is straightforward to show that the RHS of equation (38) is given by σ²/t_max, which concludes the proof.
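Theorem 4 can also be checked by simulation; the sketch below (a hypothetical 10-node cycle graph, numpy assumed) compares the empirical variance of λ̂_i(t_max) against the bound σ²/t_max:

    import numpy as np

    rng = np.random.default_rng(3)
    N, t_max, sigma, trials = 10, 50, 1.0, 400
    A = np.zeros((N, N))
    for i in range(N):                             # cycle graph, d = 2
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

    estimates = []
    for _ in range(trials):
        x = np.zeros(N)
        for _ in range(t_max):                     # first run of Algorithm 1
            noise = sigma * rng.normal(size=(N, N))
            recv = np.where(A > 0, x[None, :] + noise, -np.inf)
            x = np.maximum(x, recv.max(axis=1))
        estimates.append(x[0] / t_max)             # lam_hat at node 0

    print(np.var(estimates), sigma**2 / t_max)     # empirical variance vs the bound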


In order to bound the variance of the max-consensus algorithm, Theorem 4 is used to write xi(t) in the first run of the algorithm with zero initial measurements as,






x_i(t) = λt + σ√t Y_t,   (39)


where Y_t is an auxiliary random variable with Var(Y_t) ≤ 1, which is clearly equivalent to Theorem 4 after using λ̂_i(t) = x_i(t)/t.


In the second run of the algorithm, after D iterations, where D is the diameter of the network, all nodes converge on the maximum of the initial measurements. Hence, the state value x_i(D) can be written as,






x_i(D) = ( λ − λ̂_i(t_max) ) D + σ√D Y_D + x_max(0),   (40)


where it is known that,












λ̂_i(t_max) = λ + ( σ/√t_max ) V_{t_max},   (41)







where V_{t_max} is an auxiliary random variable with Var(V_{t_max}) ≤ 1. Since the two runs involve independent noise samples, substituting equation (41) into equation (40) gives,










Var( x_i(D) ) ≤ σ²( D²/t_max + D ).   (42)







This shows that the variance of the estimator scales linearly with the diameter of the network, as long as tmax also scales linearly with D.


Simulation Results

A distributed network with N=75 nodes is considered, as shown in FIG. 3. This irregular graph was randomly generated, following common practice. The spectral radius of the generated graph was computed to be ρ=30.56. Two graph topologies are considered for the simulations:

    • Fixed graphs: by selecting p=0 as in FIG. 3.
    • Time-varying graphs (Random graphs): by selecting p=0.5.


Communication links between any two nodes have a noise component distributed as 𝒩(0,1). First, all nodes are initialized to 0 and the max consensus algorithm is run to estimate the growth rate λ̂_i(t_max), as in line 10 of the algorithm. Note that the following results are Monte-Carlo averaged over 500 iterations.


Efficiency of the Bounds


For fixed graphs, the upper bound given by equation (15), the empirical upper bound, the lower bound given by equation (26), and the Monte-Carlo estimate of the max consensus growth, labeled as "True max-consensus growth," are plotted for every node in FIG. 4 and compared. It is observed in FIG. 4 that the empirical upper bound is much tighter than the original upper bound.


The same experiment was repeated on a random graph, which was obtained by randomly deleting each edge of the graph in FIG. 3 with probability p=0.5. The comparison of the upper bound given by equation (28), the empirical upper bound, the lower bound given by equation (30), and the true Monte-Carlo estimate of the max consensus growth is shown in FIG. 5. Note that not only is the empirical upper bound tight for time-varying graphs, it also generalizes across different graph topologies.


Next, simulations are run for non-Gaussian distributions, such as the Laplace and uniform distributions, to verify the tightness of the upper bound. In FIGS. 6-7, the performance of the upper bound and the empirical upper bound is compared for the network in FIG. 3 with N=75, where the noise samples on the links are drawn from Laplace and continuous uniform distributions, respectively. The parameters of the Laplace distribution L(μ,b) were chosen as μ=0 and b=1/√2, and the uniform distribution U(a,b) as U(−√3, √3), to ensure zero mean and unit variance. Results also show that the empirical upper bound holds for general noise distributions. Since the Laplace distribution is heavy-tailed compared to the Gaussian and uniform distributions, it has a larger growth rate.


Performance of the Algorithms


The performance of conventional max consensus algorithms and the proposed algorithm is compared, subject to additive Gaussian noise 𝒩(0,1). In order to represent actual sensor measurements, for both fixed and random graphs, a synthetic dataset with nodes initialized with values over (100, 200) is considered, where the true maximum of the initial state values is 200. The robust max consensus algorithm given in Algorithm 1 is run over these initial measurements on both graphs. The results are Monte-Carlo averaged over 500 iterations.


For fixed graphs, the performance of the robust max consensus algorithm and the existing max-based consensus algorithm is shown in FIG. 8. It can be observed that the conventional max consensus algorithm diverges as t increases, whereas the proposed algorithm does not suffer from an increasing linear bias. Even in the case of random graphs, the proposed algorithm converges to the true maximum, whereas the conventional max consensus algorithm diverges as t increases, as shown in FIG. 9.


By comparing the dynamic range of the growth of the conventional max consensus algorithm in FIG. 8 and FIG. 9, it is observed that: a) at t=30, state values over fixed graphs have a mean and standard deviation of 270.39 and 0.6966, respectively; and b) at t=30, state values over random graphs with p=0.5 have a mean and standard deviation of 261.09 and 0.9233, respectively. Thus, node state values grow more slowly for random graphs with 0<p<1 compared to fixed graphs (p=0), due to the reduced connectivity of the graph.


Comparison with Existing Works


The performance of the proposed algorithm was compared with the conventional max consensus algorithm in FIGS. 8-9, and clearly, the conventional max consensus algorithm diverges in the presence of additive noise.


Additionally, the performance was compared against the soft-max based average consensus algorithm (SMA), as shown in FIG. 10. The soft maximum of a vector x=[x_1, . . . , x_N] is denoted as:








smax(x) = (1/β) log Σ_{i=1}^{N} e^{β x_i},




where β>0 is a design parameter. The same network with N=75 is considered as in FIG. 3. Nodes were initialized linearly over (0, 1). The design parameter β of the SMA algorithm was set to β∈{6, 10}. The proposed algorithm and the SMA algorithm were applied in the presence of additive noise 𝒩(0,1) distributed over the edges.


The SMA algorithm with β=6 converges faster than with β=10; however, β=6 has a greater estimation error than β=10. In comparison with SMA, the proposed algorithm performs better in terms of the bias and variance of the estimate of the true maximum value, and in the number of iterations required for convergence.
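For reference, the soft maximum is a one-liner; the sketch below (numpy assumed) reproduces the bias trade-off, since smax exceeds the true maximum by at most log(N)/β:

    import numpy as np

    def smax(x, beta):
        # (1/beta) log sum_i exp(beta x_i), computed stably by factoring out max(x).
        x = np.asarray(x, dtype=float)
        m = x.max()
        return m + np.log(np.sum(np.exp(beta * (x - m)))) / beta

    x = np.linspace(0.0, 1.0, 75)       # nodes initialized linearly over (0, 1)
    print(smax(x, 6.0), smax(x, 10.0))  # beta = 6 overshoots the true max of 1 more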


Computing System


FIG. 11 illustrates an example of a suitable computing system 300 used to implement various aspects of the present system and methods for analysis of max consensus algorithms in the presence of additive noise. Example embodiments described herein may be implemented at least in part in electronic circuitry; in computer hardware executing firmware and/or software instructions; and/or in combinations thereof. Example embodiments also may be implemented using a computer program product (e.g., a computer program tangibly or non-transitorily embodied in a machine-readable medium and including instructions for execution by, or to control the operation of, a data processing apparatus, such as, for example, one or more programmable processors or computers). A computer program may be written in any form of programming language, including compiled or interpreted languages, and may be deployed in any form, including as a stand-alone program or as a subroutine or other unit suitable for use in a computing environment. Also, a computer program can be deployed to be executed on one computer, or to be executed on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.


Certain embodiments are described herein as including one or more modules 312. Such modules 312 are hardware-implemented, and thus include at least one tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. For example, a hardware-implemented module 312 may comprise dedicated circuitry that is permanently configured (e.g., as a special-purpose processor, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware-implemented module 312 may also comprise programmable circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software or firmware to perform certain operations. In some example embodiments, one or more computer systems (e.g., a standalone system, a client and/or server computer system, or a peer-to-peer computer system) or one or more processors may be configured by software (e.g., an application or application portion) as a hardware-implemented module 312 that operates to perform certain operations as described herein.


Accordingly, the term “hardware-implemented module” encompasses a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware-implemented modules 312 are temporarily configured (e.g., programmed), each of the hardware-implemented modules 312 need not be configured or instantiated at any one instance in time. For example, where the hardware-implemented modules 312 comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware-implemented modules 312 at different times. Software may accordingly configure a processor 302, for example, to constitute a particular hardware-implemented module at one instance of time and to constitute a different hardware-implemented module 312 at a different instance of time.


Hardware-implemented modules 312 may provide information to, and/or receive information from, other hardware-implemented modules 312. Accordingly, the described hardware-implemented modules 312 may be regarded as being communicatively coupled. Where multiple of such hardware-implemented modules 312 exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware-implemented modules. In embodiments in which multiple hardware-implemented modules 312 are configured or instantiated at different times, communications between such hardware-implemented modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware-implemented modules 312 have access. For example, one hardware-implemented module 312 may perform an operation, and may store the output of that operation in a memory device to which it is communicatively coupled. A further hardware-implemented module 312 may then, at a later time, access the memory device to retrieve and process the stored output. Hardware-implemented modules 312 may also initiate communications with input or output devices.


As illustrated, the computing system 300 may be a general purpose computing device, although it is contemplated that the computing system 300 may include other computing systems, such as personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, digital signal processors, state machines, logic circuitries, distributed computing environments that include any of the above computing systems or devices, and the like.


Components of the general purpose computing device may include various hardware components, such as a processor 302, a main memory 304 (e.g., a system memory), and a system bus 301 that couples various system components of the general purpose computing device to the processor 302. The system bus 301 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.


The computing system 300 may further include a variety of computer-readable media 307 that includes removable/non-removable media and volatile/nonvolatile media, but excludes transitory propagated signals. Computer-readable media 307 may also include computer storage media and communication media. Computer storage media includes removable/non-removable media and volatile/nonvolatile media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data, such as RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store the desired information/data and which may be accessed by the general purpose computing device. Communication media includes computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media may include wired media such as a wired network or direct-wired connection and wireless media such as acoustic, RF, infrared, and/or other wireless media, or some combination thereof. Computer-readable media may be embodied as a computer program product, such as software stored on computer storage media.


The main memory 304 includes computer storage media in the form of volatile/nonvolatile memory such as read only memory (ROM) and random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the general purpose computing device (e.g., during start-up) is typically stored in ROM. RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processor 302. For example, in one embodiment, data storage 306 holds an operating system, application programs, and other program modules and program data.


Data storage 306 may also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, data storage 306 may be: a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media; a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk; and/or an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media may include magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The drives and their associated computer storage media provide storage of computer-readable instructions, data structures, program modules and other data for the general purpose computing device 300.


A user may enter commands and information through a user interface 340 or other input devices 345 such as a tablet, electronic digitizer, microphone, keyboard, and/or pointing device, commonly referred to as a mouse, trackball, or touch pad. Other input devices 345 may include a joystick, game pad, satellite dish, scanner, or the like. Additionally, voice inputs, gesture inputs (e.g., via hands or fingers), or other natural user interfaces may also be used with the appropriate input devices, such as a microphone, camera, tablet, touch pad, glove, or other sensor. These and other input devices 345 are often connected to the processor 302 through a user interface 340 that is coupled to the system bus 301, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 360 or other type of display device is also connected to the system bus 301 via the user interface 340, such as a video interface. The monitor 360 may also be integrated with a touch-screen panel or the like.


The general purpose computing device may operate in a networked or cloud-computing environment using logical connections of a network interface 303 to one or more remote devices, such as a remote computer. The remote computer may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the general purpose computing device. The logical connection may include one or more local area networks (LAN) and one or more wide area networks (WAN), but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.


When used in a networked or cloud-computing environment, the general purpose computing device may be connected to a public and/or private network through the network interface 303. In such embodiments, a modem or other means for establishing communications over the network is connected to the system bus 301 via the network interface 303 or other appropriate mechanism. A wireless networking component including an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a network. In a networked environment, program modules depicted relative to the general purpose computing device, or portions thereof, may be stored in the remote memory storage device.


CONCLUSION

A practical approach for reliable estimation of the maximum of the initial state values of nodes in a distributed network in the presence of additive noise is proposed. First, the existence of a constant growth rate in the state values due to additive noise was shown, and upper and lower bounds on this growth rate were derived. The upper bound is a function of the spectral radius of the graph, and the lower bound establishes that the growth rate is always a positive non-zero real value. Upper and lower bounds on the growth rate for random time-varying graphs were also derived, and an empirical upper bound, obtained by scaling the original bound, was shown to be tighter and to generalize to different networks and noise settings. Finally, a fast max-based consensus algorithm that is robust to additive noise was presented, and it was shown using concentration inequalities that the variance of the growth rate estimator used in this algorithm decreases as 𝒪(1/tmax). It was also shown that the variance of the estimator scales linearly with the diameter of the network. Simulation results corroborating the theory were provided.
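As a summary of the two-phase procedure, a minimal sketch follows. The specific slope estimator λ̂i = xi(tmax)/tmax and the per-iteration subtraction are assumptions consistent with the claims below, so this should be read as an illustration rather than a verbatim restatement of Algorithm 1:

```python
import numpy as np

def robust_max(adj, x0, t_max, rng):
    """Two-phase robust max consensus (illustrative sketch only).

    Phase 1: run max consensus from all-zero states; with zero data the
    observed growth is purely noise-induced, so its slope estimates the
    growth rate.
    Phase 2: run max consensus on the real measurements, subtracting the
    growth rate estimate at every iteration to cancel the linear bias.
    """
    n = len(x0)

    def step(x):
        x_new = x.copy()
        for i in range(n):
            for j in np.flatnonzero(adj[i]):
                x_new[i] = max(x_new[i], x[j] + rng.standard_normal())
        return x_new

    # Phase 1: growth rate estimation from zero initial states.
    z = np.zeros(n)
    for _ in range(t_max):
        z = step(z)
    lam = z / t_max              # assumed slope estimator

    # Phase 2: bias-corrected max consensus on the measurements.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(t_max):
        x = step(x) - lam        # remove the estimated per-step growth
    return x
```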


It should be understood from the foregoing that, while particular embodiments have been illustrated and described, various modifications can be made thereto without departing from the spirit and scope of the invention as will be apparent to those skilled in the art. Such changes and modifications are within the scope and teachings of this invention as defined in the claims appended hereto.

Claims
  • 1. A distributed sensor network system, comprising: a plurality of N sensor nodes, each sensor node i∈N assuming an assigned or measured state value xi(t); a processor that estimates a final state value maximum of a plurality of state values xi(t) respectively produced by each sensor node i of the plurality of N sensor nodes of the distributed sensor network; wherein to estimate the final state value maximum of the plurality of state values, the processor: determines a growth rate estimate λi associated with each respective sensor node i of the plurality of N sensor nodes, wherein to determine the growth rate estimate λi the processor: initializes the state value xi(t) of each sensor node i of the plurality of N sensor nodes to zero such that xi(0)=0 for all i∈N; and updates the state value xi(t) of each sensor node i for tmax iterations with a local maximum of the state values xi(t) and xj(t−1) of the sensor node i and one or more neighboring sensor nodes j; wherein the growth rate estimate λi is described by
  • 2. The distributed sensor network system of claim 1, wherein the one or more state values associated with the one or more neighboring sensor nodes of the plurality of sensor nodes include additive noise.
  • 3. The distributed sensor network system of claim 1, wherein to determine a maximum of a state value xi(t) associated with the sensor node i and one or more state values xj(t) associated with one or more neighboring sensor nodes j of the plurality of N sensor nodes for an iteration t of the plurality of tmax iterations, the processor: obtains a state value xi(t) measured by the sensor node i at iteration t; obtains one or more state values xj(t) associated with one or more neighboring sensor nodes j from a previous iteration t−1; compares the state value xi(t) measured by the sensor node i with each of the one or more state values xj(t) associated with one or more neighboring sensor nodes j; and selects the maximum from the state value xi(t) measured by the sensor node i and each of the one or more state values xj(t) associated with one or more neighboring sensor nodes j.
  • 4. The distributed sensor network system of claim 1, wherein the growth rate estimate λi is representative of a constant slope in max-based consensus measurement for a sensor node i of the plurality of N sensor nodes due to additive noise.
  • 5. The distributed sensor network system of claim 1, wherein the measured initial state value xi(t) associated with a sensor node i of the plurality of N sensor nodes is a value measured by the sensor node prior to a first iteration t=1 of the plurality of tmax iterations.
  • 6. The distributed sensor network system of claim 1, wherein each connection between neighboring sensor nodes i and j contributes additive noise vi,j(t) to each state value xi(t) and xj(t) of the neighboring sensor nodes i and j, where i,j∈N.
  • 7. The distributed sensor network system of claim 1, wherein to determine the growth rate estimate, the step of updating the state value xi(t) of each sensor node i for tmax iterations with a local maximum of the state values xi(t) and xj(t−1) of the sensor node i and one or more neighboring sensor nodes j is such that:
  • 8. The distributed sensor network system of claim 1, wherein to determine the true state value maximum, the step of updating the state value xi(t) of each sensor node i for tmax iterations with a local maximum of the state values xi(t) and xj(t−1) of the sensor node i and one or more neighboring sensor nodes j is such that:
  • 9. The distributed sensor network system of claim 1, wherein the final state value maximum is a maximum of the set of true state value maxima.
  • 10. A distributed sensor network system comprising: a plurality of N sensor nodes, each sensor node i∈N assuming an assigned or measured state value xi(t); a processor that estimates a final state value maximum of a plurality of state values xi(t) respectively produced by each sensor node i of the plurality of N sensor nodes of the distributed sensor network; wherein to estimate the final state value maximum of the plurality of state values xi(t), the processor: determines a growth rate estimate λi associated with each respective sensor node i of the plurality of N sensor nodes; determines a true state value maximum of a plurality of true state value maxima for each respective sensor node i of the plurality of N sensor nodes at each iteration t of a plurality of tmax iterations to generate a set of true state maxima by removing the growth rate estimate λi associated with each respective sensor node i from the respective state values xi(t) of each sensor node i of the plurality of N sensor nodes; and selects a final state value maximum from the set of true state value maxima.
  • 11. The system of claim 10, wherein to determine the growth rate estimate λi the processor: initializes the state value xi(t) of each sensor node i of the plurality of N sensor nodes to zero such that xi(0)=0 for all i∈N; and updates the state value xi(t) of each sensor node i for tmax iterations with a local maximum of the state values xi(t) and xj(t−1) of the sensor node i and one or more neighboring sensor nodes j; wherein the growth rate estimate λi is described by
  • 12. The system of claim 10, wherein to determine the true state value maximum the processor: measures an initial state value xi(0) by each sensor node i; and updates the state value xi(t) of each sensor node i for tmax iterations with a local maximum of the state values xi(t) and xj(t−1) of the sensor node i and one or more neighboring sensor nodes j.
  • 13. The system of claim 12, wherein to update the state value xi(t) of each sensor node i for tmax iterations with a local maximum of the state values xi(t) and xj(t−1) of the sensor node i and one or more neighboring sensor nodes, the processor: determines an initial state value maximum xi(t) using the state value xi(0) associated with the sensor node i and one or more measured state values xj(t) associated with one or more neighboring sensor nodes of the plurality of N sensor nodes; removes the growth rate estimate λi associated with the sensor node i from the initial state value maximum xi(t) to obtain the true state value maximum for the sensor node i of the plurality of sensor nodes; and assigns the true state value maximum for each sensor node of the plurality of sensor nodes.
  • 14. The system of claim 12, wherein the updated state value xi(t) is given by:
  • 15. The system of claim 6, wherein to determine the growth rate estimate λi the processor: determines an upper bound on a growth rate estimate λi based on a spectral radius of the plurality of N sensor nodes.
  • 16. A method for determining max-consensus of a plurality of nodes in a distributed network system, comprising: providing a network including N sensor nodes, each sensor node i assuming an assigned or measured state value xi(t) and each connection between neighboring sensor nodes i and j assuming additive noise vi,j(t), where i,j∈N; determining a growth rate estimate λi associated with each respective sensor node i; determining a true state value maximum for each respective sensor node i of the plurality of N sensor nodes for each iteration t of a plurality of tmax iterations to generate a set of true state value maxima; and selecting a final state value maximum from the set of true state value maxima.
  • 17. The method of claim 16, wherein the step of determining the growth rate estimate λi associated with each respective sensor node i further comprises: initializing the state value xi(t) of all sensor nodes to zero such that xi(0)=0; updating the state value xi(t) of each sensor node i with a local maximum for tmax iterations such that
  • 18. The method of claim 16, wherein the step of determining the true state value maximum further comprises: measuring an initial state value xi(0) by each sensor node i; and updating each sensor node i with a local maximum for tmax iterations and subtracting the growth rate estimate λi such that
  • 19. The method of claim 16, wherein each connection between neighboring sensor nodes i and j in the network contributes additive noise vi,j(t) to each state value xi(t) and xj(t) of the neighboring sensor nodes i and j, where i,j∈N.
  • 20. The method of claim 16, wherein the step of determining the growth rate estimate λi further comprises: determining an upper bound on a growth rate estimate λi based on a spectral radius of the plurality of N sensor nodes of the network.
CROSS REFERENCE TO RELATED APPLICATIONS

This is a non-provisional application that claims benefit to U.S. provisional application Ser. No. 62/959,564 filed on Jan. 10, 2020, which is herein incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
62959564 Jan 2020 US