
Distributed Consensus Problems in Uncertain Heterogeneous Systems Under an Undirected Graph

Soham Chatterjee

Indian Institute of Technology Kanpur

August 8, 2016

Soham Chatterjee (IITK) Consensus Problems in Uncertain Systems August 8, 2016 1 / 32


Overview

1 Modelling Multi-agent systems using graph theory
  Graph Theory Fundamentals
  Useful Theorems and Properties of Graphs
  Leader Follower Network

2 Consensus of First Order Uncertain Systems
  Uncertainty is a single weighted function
  Using Neural Network approximation of functions



Modelling Multi-agent systems using graph theory Graph Theory Fundamentals

Graph Theory
A graph is denoted by
G = (V, E, A)    (1)
where V, E, and A are the vertex set, the edge set, and the adjacency matrix, respectively.
Consider the directed graph shown in Figure 1.

A vertex is a node, and V is the set of all vertices:
V = {v1, v2, v3, v4}

An edge in a directed graph is an ordered pair (vi, vj). E is the edge set:
E = {(v1, v2), (v1, v4), (v3, v4), (v3, v2)}

Figure 1: Directed graph with vertices v1, v2, v3, v4 and weighted edges a21, a23, a41, a43.


Adjacency Matrices
A is called the adjacency matrix; it defines the weights of the edges of the directed graph shown here. Note that the weight aij corresponds to the edge (vj, vi).

vj is said to be a neighbour of vi iff there is a directed edge (vj, vi). The set of neighbours of vi is denoted by Ni:
N2 = {v1, v3}

Note that the directed graph here can represent a set of agents sharing information: the directed edge (v1, v2) means v2 is receiving information from v1.

For the graph of Figure 1,

A = [ 0    0    0    0
      a21  0    a23  0
      0    0    0    0
      a41  0    a43  0 ]


Undirected Graphs

The edges (vj, vi) are unordered pairs, so aij = aji:
a21 = a12, a23 = a32, a41 = a14, a43 = a34
This implies that the adjacency matrix is symmetric.

vj is said to be a neighbour of vi iff there is an edge (vj, vi). The set of neighbours of vi is denoted by Ni:
N2 = {v1, v3}

Figure 2: Undirected graph


Graph Laplacians
For a graph G consisting of p nodes, define the degree matrix

D = diag( Σj a1j,  Σj a2j,  …,  Σj apj )    (2)

L = D − A    (3)

1 L is called the Graph Laplacian of G.
2 The Laplacian has several useful properties:
  The Laplacian always has at least one eigenvalue at zero.
  The remaining eigenvalues always have non-negative real parts.
3 The Laplacian of an undirected graph is symmetric, and thus has only real non-negative eigenvalues.
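As a concrete check of these properties, the Laplacian of the directed graph in Figure 1 can be computed numerically. This is a sketch only; the edge weights are assumed to be 1 and numpy is assumed available.

```python
import numpy as np

# Sketch: Laplacian of the directed graph of Figure 1 (edge weights
# assumed equal to 1; a_ij corresponds to the edge (v_j, v_i)).
A = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 0],   # v2 receives from v1 and v3
    [0, 0, 0, 0],
    [1, 0, 1, 0],   # v4 receives from v1 and v3
], dtype=float)

D = np.diag(A.sum(axis=1))   # D_ii = sum_j a_ij, as in (2)
L = D - A                    # graph Laplacian, as in (3)

eigvals = np.linalg.eigvals(L)
# At least one eigenvalue is zero; the rest have non-negative real parts,
# and every row of L sums to zero.
print(np.sort_complex(eigvals))
```

Since each row of L sums to zero, the all-ones vector is always in its null space, which is the source of the guaranteed zero eigenvalue.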
Modelling Multi-agent systems using graph theory Useful Theorems and Properties of Graphs

Spanning Tree
Figure 3 shows a spanning tree.
A spanning tree has a root vertex (v1), and there is always a directed path from the root vertex to every other vertex.
A graph G is said to have a spanning tree if it has a sub-graph which is a spanning tree.
A connected undirected graph always has a spanning tree, because connectivity in the undirected case already satisfies the stronger condition of being strongly connected.

Figure 3: Spanning tree rooted at v1, with children v2, v3 and leaves v4, v5, v6, v7.

Theorem 1.1.
The Laplacian L of any graph G has exactly one zero eigenvalue iff G has a sub-graph which is a spanning tree.

Lemma 1.2.

For a directed graph G containing a spanning tree,

adj(L) / P0 = wr wlT,   where P0 = ∏_{λi ≠ 0} λi(L)

L is the Laplacian of the directed graph G. wr = [1, 1, …, 1]T is always a right eigenvector of L corresponding to the zero eigenvalue, since every row sum of L is zero. wlT is the left eigenvector of L corresponding to the zero eigenvalue.



Modelling Multi-agent systems using graph theory Leader Follower Network

Leader Follower Undirected Network

Consider the network of agents shown in Figure 4; L denotes the leader.

1 The agents vi are called followers, and L is the leader.
2 The graph is denoted by Ḡ, with Laplacian L̄.
3 The followers are connected through an undirected graph G with Laplacian L:

L = [ Σj a1j   −a12     −a13     0
      −a12     Σj a2j   0        −a24
      −a31     0        Σj a3j   −a34
      0        −a42     −a43     Σj a4j ]

4 The leader is connected only to agent vi via the weight bi; bi ≠ 0 iff there is an edge from the leader to vi.

Figure 4: Network of agents under a leader (a21 = a12, a13 = a31, a42 = a24, a43 = a34; leader edge b1).

Let us try to construct L̄.

1 Define the leader connectivity vector and matrix, respectively, as

b = [b1, b2, b3, b4]T,   B = diag(b1, b2, b3, b4)

2 Note that in each follower row of L̄ the weight −bi is additionally introduced to represent the connectivity of the leader.
3 Thus the diagonal elements of L are incremented by bi in L̄.
4 Hence

L̄ = [ 0    0T
      −b   L + B ]    (4)

Neighbourhood Error

Consider a network of follower agents under a leader L:

ẋi = fi(xi, ui)
yi = xi,1    (5)

where in xi,k, i denotes the agent number and k denotes the kth state of that agent.

The leader agent supplies a reference signal r to some of the agents:
r is assumed to be bounded and continuously differentiable.
ṙ is also bounded.

For the leader follower consensus problem, define for each agent in the follower network G the neighbourhood error:

ei = Σ_{j∈Ni} aij (xj,1 − xi,1) + bi (r − xi,1)    (6)

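The stacked neighbourhood error admits a compact form: since Σj aij (xj,1 − xi,1) = −(Lx)i and bi (r − xi,1) = (B(r1 − x))i, and since L1 = 0, one gets e = −(L + B)(x − r·1). The sketch below checks this identity numerically, with unit weights on the Figure 2 topology and arbitrarily chosen states (all values are illustrative assumptions).

```python
import numpy as np

# Undirected follower graph with the Figure 2 topology, unit weights.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A
b = np.array([1.0, 0.0, 0.0, 0.0])   # only agent 1 hears the leader
B = np.diag(b)

x = np.array([0.3, -1.2, 2.0, 0.7])  # arbitrary agent states
r = 0.5                              # leader reference

# Component-wise definition (6): sum_j a_ij (x_j - x_i) + b_i (r - x_i)
e = np.array([A[i] @ (x - x[i]) + b[i] * (r - x[i]) for i in range(4)])

# Compact stacked form
e_compact = -(L + B) @ (x - r * np.ones(4))

print(np.allclose(e, e_compact))
```

The compact form is what the error dynamics later in the talk differentiate to obtain ė = −(L + B)ẋ + Bṙ.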

Some Useful Theorems

Theorem 1.3.
For a connected undirected follower network G with Laplacian L, a leader representing the graph Ḡ, and leader connectivity matrix B, define

M = µL + ζB    (7)

M is always positive definite for µ > 0 and ζ > 0, and

α(‖x‖) = λmin(M)‖x‖² ≤ xT M x    (8)

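Theorem 1.3 can be illustrated numerically: for a connected undirected G with at least one bi > 0, the smallest eigenvalue of µL + ζB is strictly positive. The sketch below uses the Figure 4 follower topology with unit weights and assumed gains µ = 2, ζ = 1.

```python
import numpy as np

# Connected undirected follower graph (Figure 4 topology, unit weights).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A
B = np.diag([1.0, 0.0, 0.0, 0.0])   # leader connected to v1 only

mu, zeta = 2.0, 1.0                 # assumed design gains, both > 0
M = mu * L + zeta * B

lam_min = np.linalg.eigvalsh(M).min()   # eigvalsh: M is symmetric
print(lam_min > 0)   # M is positive definite
```

Intuitively, L alone is only positive semi-definite (its null space is span{1}), and adding even a single bi > 0 removes that null direction whenever the follower graph is connected.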


Consensus of First Order Uncertain Systems Uncertainty is a single weighted function

Model of the System

In a leader follower network Ḡ, with leader L and reference trajectory r, consider N first order agents of the form

ẋi = ui + ηi fi(xi)
yi = xi    (9)

where fi(xi) is a known function but ηi is unknown.

G is undirected, with Laplacian L and leader connectivity matrix B.

Proposed Distributed Control Protocol
ui = µ ei − η̂i fi(xi)    (10)

η̂i is the estimate of ηi; µ is a constant design parameter.

Define the estimation error as
η̃i = ηi − η̂i    (11)

We drop the subscript k in xi,k, as the agents have only one state.

Terminologies

The individual agents' states, inputs, and neighbourhood errors can be arranged in vectors:

x = [x1, x2, …, xN]T,   u = [u1, u2, …, uN]T,   e = [e1, e2, …, eN]T    (12)

The uncertain parameters ηi, their estimates η̂i, and the estimation errors η̃i can be arranged in vectors as well. Define

η = [η1, η2, …, ηN]T,   η̂ = [η̂1, η̂2, …, η̂N]T,   η̃ = [η̃1, η̃2, …, η̃N]T    (13)


Also define

f(x) = [f1(x1), f2(x2), …, fN(xN)]T    (14)

The parameter matrices are defined as

H = diag(η1, η2, …, ηN),   Ĥ = diag(η̂1, η̂2, …, η̂N),   H̃ = diag(η̃1, η̃2, …, η̃N)    (15)


Error Dynamics

Using the control law (10) in the system (9):

ẋi = ηi fi(xi) + µ ei − η̂i fi(xi)
   = µ ei + η̃i fi(xi)
ẋ  = µ e + H̃ f(x)    (16)

The error dynamics under the control input (10):

ė = −(L + B)ẋ + B ṙ
  = −µ(L + B)e − (L + B)H̃ f(x) + B ṙ    (17)

where r = [r, r, …, r]T (N times).


Consider the Lyapunov function

V(e) = (1/2)(eT e + η̃T η̃)    (18)

V̇ along the system (17) becomes

V̇ = eT ė + η̃T η̃˙
  = −µ eT (L + B) e − eT (L + B) H̃ f(x) + eT B ṙ + η̃˙T η̃    (19)

Define

F = diag( f1(x1), f2(x2), …, fN(xN) )

Note that
F η̃ = H̃ f(x)

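The identity F η̃ = H̃ f(x), which is what lets the adaptation term cancel the uncertainty term in V̇, is simply two ways of writing the element-wise product η̃i fi(xi). A quick numerical check (random illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
x = rng.normal(size=N)
eta_tilde = rng.normal(size=N)    # parameter errors eta_tilde_i
f = np.sin(x)                     # f_i(x_i); sin is an illustrative choice

F = np.diag(f)                    # F = diag(f_1(x_1), ..., f_N(x_N))
H_tilde = np.diag(eta_tilde)      # H_tilde = diag(eta_tilde_1, ..., eta_tilde_N)

# Both sides equal the element-wise product eta_tilde_i * f_i(x_i).
print(np.allclose(F @ eta_tilde, H_tilde @ f))
```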

Design of Adaptation Law

Consider the adaptation of the parameter estimate η̂i as

η̂˙i = −η̃˙i = [ Σ_{j∈Ni} aij (ej − ei) − bi ei ] fi(xi)    (20)

Since Σ_{j∈Ni} aij (ej − ei) − bi ei = −((L + B)e)i, under the adaptation law (20) the time derivative of the parameter error η̃ becomes

η̂˙ = −F(L + B)e
η̃˙ = −η̂˙ = F(L + B)e    (21)
η̃˙T η̃ = eT (L + B) F η̃

Using (21) in (19), and noting F η̃ = H̃ f(x):

V̇ = −µ eT (L + B) e − eT (L + B) H̃ f(x) + eT B ṙ + eT (L + B) F η̃
  = −µ eT (L + B) e + eT B ṙ    (22)


Using the adaptation law (20) in (22):

V̇ = −µ eT (L + B) e + eT B ṙ
  = −µ eT (L + B) e + Σ_{i=1}^{N} bi ṙ ei    (23)

Applying Young's inequality to (23), we have

Σ_{i=1}^{N} bi ṙ ei ≤ (1/2) Σ_{i=1}^{N} bi (ei² + ṙ²)    (24)
                    ≤ (1/2) eT B e + (1/2) ṙT B ṙ    (25)

Using (25) in (23):

V̇ ≤ −eT [ µL + (µ − 1/2)B ] e + (1/2) ṙT B ṙ    (26)
  ≤ −eT M e + (1/2) ṙT B ṙ    (27)

If we choose µ > 0.5, the matrix

M = µL + (µ − 1/2)B

is positive definite by Theorem 1.3, and

α(‖e‖) = λmin(M)‖e‖² ≤ eT M e

where α is class K∞ by Theorem 1.3. Note that since ṙ is bounded, the term

(1/2) ṙT B ṙ ≤ rm

is bounded above by some rm. Therefore,

V̇ ≤ −α(‖e‖) + rm    (28)


Thus V̇ is negative outside the ball

Be = { e : ‖e‖ > √( rm / λmin(M) ) }    (29)

Thus, using Theorem 4.18 of Nonlinear Systems by Hassan K. Khalil, e is globally uniformly ultimately bounded, with ultimate bound

γ = √( rm / λmin(M) )

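The whole first-order design can be exercised in simulation. The sketch below uses forward Euler integration, unit edge weights on the Figure 2 topology, and assumed values for µ, the true parameters ηi, and the basis functions fi. With a constant reference, ṙ = 0 makes rm = 0, so the ultimate bound is zero and the agents should converge to r.

```python
import numpy as np

# Follower graph (Figure 2 topology, unit weights); leader into agent 1.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A
B = np.diag([1.0, 0.0, 0.0, 0.0])

mu = 2.0                                # assumed design gain, mu > 0.5
r = 1.0                                 # constant reference => r_dot = 0
eta = np.array([0.5, -1.0, 2.0, 0.8])   # true (unknown) parameters
f = lambda x: np.sin(x)                 # known functions f_i (illustrative)

x = np.zeros(4)
eta_hat = np.zeros(4)
dt = 1e-3
for _ in range(100_000):                # 100 s of simulated time
    e = -(L + B) @ (x - r)              # neighbourhood error (6), stacked
    u = mu * e - eta_hat * f(x)         # control law (10)
    x += dt * (u + eta * f(x))          # agent dynamics (9)
    eta_hat += dt * (-(L + B) @ e) * f(x)   # adaptation law (20)

print(np.abs(x - r).max())   # consensus error after the run
```

The adaptation law needs only each agent's own state and its neighbours' errors, so the scheme stays distributed; the printed residual should be close to zero for this constant-reference case.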


Consensus of First Order Uncertain Systems Using Neural Network approximation of functions

Using Neural Networks to Treat Uncertainty

The theory developed in the previous section can be extended to first order systems of the form

ẋi = ui + Σ_{k=1}^{p} ηi,k fi,k(xi)    (30)

The fi,k's are known.
The ηi,k's are estimated by η̂i,k, with estimation errors η̃i,k = ηi,k − η̂i,k.
Design the corresponding adaptation laws η̂˙i,k.

This motivates using Neural Networks to approximate any generic function.

Model of the agent

Consider N follower agents in an undirected graph G tracking a leader signal r. The agents are first order, of the form

ẋi = ui + fi(xi)
yi = xi    (31)

We will approximate each fi using an RBFNN, and design the corresponding distributed control laws and weight update laws to achieve consensus.

Radial Basis Function Neural Networks

We write each fi as a weighted sum of pi basis functions plus an approximation error εi:

fi(xi) = wiT φi(xi) + εi    (32)

wi = [wi,1, wi,2, …, wi,pi]T,   φi = [φi,1, φi,2, …, φi,pi]T    (33)

The φi,k are known radial basis functions, for example

φi,k(xi) = exp( −(xi − ci,k)² / (2σ²) )    (34)

The task is to design update laws for the weight estimates ŵi.

Figure 5: RBFNN of a single-input function: input xi, hidden layer φi,1, …, φi,pi, weights wi,1, …, wi,pi, output fi(xi).
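A single-input RBFNN of the form (32)–(34) is only a few lines of code. The sketch below fits the output weights of a Gaussian RBF network to a test function by least squares; the centres ci,k, the width σ, and the target function are all illustrative choices, and offline least squares stands in here for the online weight updates designed later in the talk.

```python
import numpy as np

def rbf_features(x, centers, sigma):
    """phi_k(x) = exp(-(x - c_k)^2 / (2 sigma^2)) for each centre c_k."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))

# Target function to approximate (stands in for an unknown f_i).
f = lambda x: x * np.sin(2.0 * x)

centers = np.linspace(-3.0, 3.0, 15)   # assumed centres c_{i,k}
sigma = 0.5                            # assumed width
xs = np.linspace(-3.0, 3.0, 200)

Phi = rbf_features(xs, centers, sigma)           # 200 x 15 feature matrix
w, *_ = np.linalg.lstsq(Phi, f(xs), rcond=None)  # least-squares weights w_i

approx_err = np.abs(Phi @ w - f(xs)).max()
print(approx_err)   # the residual plays the role of epsilon_i in (32)
```

The residual that least squares cannot remove is exactly the εi of (32); the analysis that follows only requires it to be bounded, not zero.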

Consider a set of N first order agents of the form defined in (31). Define

H = blockdiag(w1T, w2T, …, wNT),   w = [w1T, w2T, …, wNT]T    (35)

F = blockdiag(φ1T, φ2T, …, φNT),   φ = [φ1T, φ2T, …, φNT]T    (36)

Note that

Hφ = Fw,   H̃φ = Fw̃    (37)

Define the estimates of H and w as Ĥ and ŵ, and the approximation errors as H̃ and w̃.


Consider the distributed control law:

ui = µ ei − ŵiT φi    (38)

Using the control law (38) in the system (31):

ẋi = fi(xi) + µ ei − ŵiT φi
   = µ ei + w̃iT φi + εi    (39)
ẋ  = µ e + H̃ φ + ε,   where ε = [ε1, ε2, …, εN]T    (40)

The error dynamics under the control input (38):

ė = −(L + B)ẋ + B ṙ
  = −µ(L + B)e − (L + B)H̃ φ + B ṙ − (L + B)ε    (41)

where r = [r, r, …, r]T (N times).


Note that the error dynamics in (41) does not necessarily have an equilibrium point, so we will argue in terms of boundedness and ultimate boundedness.
Consider the Lyapunov function

V(e) = (1/2)(eT e + w̃T w̃)    (42)

V̇ along the system (41) becomes

V̇ = eT ė + w̃˙T w̃
  = −µ eT (L + B) e − eT (L + B) H̃ φ + eT B ṙ + w̃˙T w̃ − eT (L + B) ε    (43)

Let M = L + B:

V̇ = −µ eT M e − eT M H̃ φ + eT B ṙ + w̃˙T w̃ − eT M ε


Adaptation Law
Consider the adaptation of the weight estimates ŵi as

ŵ˙i = −w̃˙i = [ Σ_{j∈Ni} aij (ej − ei) − bi ei ] φi(xi)    (44)

Under the adaptation law (44), the time derivative of the weight error w̃ becomes

ŵ˙ = −FT (L + B) e
w̃˙ = −ŵ˙ = FT (L + B) e    (45)
w̃˙T w̃ = eT M F w̃

Using (45) in (43), and noting F w̃ = H̃ φ by (37):

V̇ = −µ eT M e − eT M H̃ φ + eT B ṙ + eT M F w̃ − eT M ε
  = −µ eT M e + eT B ṙ − eT M ε    (46)


Applying Young’s Inequality on (46)

eT Bṙ ≤ kekkBṙk
T T
µe Me ≥ e λmin e ≤ kBkkekkṙk
≥ λmin kek2 1
≤ λmax (B) kek2 + kṙk2

2

eT M ≤ kekkMk
≤ kMkkekkk
1
≤ λmax (M) kek2 + kk2

2

V̇ ≤ − [µλmin (M) − λmax (B) − λmax (M)] kek2 + λmax (B)kṙk2 + λmax (M)kk2
(47)
Under a suitable choice of µ,

α(kek) = (µλmin (M) − λmax (B) − λmax (M)) kek2

can be made class κ∞ .



V̇ ≤ −α(‖e‖) + (1/2) λmax(B) ‖ṙ‖² + (1/2) λmax(M) ‖ε‖²    (48)

From (48), V̇ is negative outside the ball

Ωe = { e ∈ R^N : ‖e‖ ≤ √( (λmax(B) N rm² + λmax(M) N ε0²) / (2µ λmin(M) − λmax(B) − λmax(M)) ) }    (49)

where |ṙ| ≤ rm and |εi| ≤ ε0. Using Theorem 4.18 of Nonlinear Systems by Hassan K. Khalil, e is globally uniformly ultimately bounded, with the ultimate bound in (49).

The End
