Soham Chatterjee
August 8, 2016
Graph Theory
A graph is denoted by

G = (V, E, A)   (1)

where V, E, and A are the vertex set, the edge set, and the adjacency matrix respectively.
Consider the directed graph shown in Fig. 1.

A vertex is a node, and V is the set of all vertices.

V = {v1, v2, v3, v4}

An edge in a directed graph is the ordered pair (vi, vj). E is the edge-set.

[Figure 1: a directed graph on vertices v1, v2, v3, v4 with weighted edges a21, a23, a41, a43]
Adjacency Matrices
A is called the adjacency matrix; it defines the weights of the edges of the directed graph shown here. Note that

a_{ij} ←→ (v_j, v_i)

v_j is said to be a neighbour of v_i iff there is a directed edge (v_j, v_i). The set of neighbours of v_i is denoted by N_i.

N2 = {v1, v3}

[Figure: the same directed graph as in Fig. 1]
Note that the directed graph here can represent
0 0 0 0
a set of multi-agents sharing information. a21 0 a23 0
The directed edge (v1 , v2 ) means A=
0
0 0 0
v2 is receiving information from v1
a41 0 a43 0
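The definitions above can be sketched in a few lines of NumPy. Setting every nonzero weight to 1 is an assumption made purely for illustration:

```python
import numpy as np

# Adjacency matrix of the directed graph above, with all nonzero
# weights assumed to be 1 (a21 = a23 = a41 = a43 = 1).
A = np.array([
    [0, 0, 0, 0],   # v1 receives from no one
    [1, 0, 1, 0],   # v2 receives from v1 and v3
    [0, 0, 0, 0],   # v3 receives from no one
    [1, 0, 1, 0],   # v4 receives from v1 and v3
])

def neighbours(A, i):
    """Indices j with a_ij != 0, i.e. the neighbour set N_i (0-based)."""
    return set(np.nonzero(A[i])[0])

print(neighbours(A, 1))  # N_2 = {v1, v3}, i.e. {0, 2} in 0-based indexing
```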
Undirected Graphs
N2 = {v1 , v3 }
Graph Laplacians
For a graph G consisting of p nodes, define

D = \begin{bmatrix} \sum_j a_{1j} & 0 & \cdots & 0 \\ 0 & \sum_j a_{2j} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_j a_{pj} \end{bmatrix}   (2)

L = D − A   (3)
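Equations (2) and (3) in code, for the example adjacency matrix with assumed unit weights:

```python
import numpy as np

# Example adjacency matrix (unit weights, as before).
A = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix, eq. (2): row sums on the diagonal
L = D - A                    # graph Laplacian, eq. (3)

# Every row of L sums to zero, so the all-ones vector is always a
# right eigenvector of L with eigenvalue 0.
print(L @ np.ones(4))  # [0. 0. 0. 0.]
```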
Spanning Tree
Fig. 3 shows a spanning tree. A spanning tree has a root vertex (v1), and there is always a directed path from the root vertex to every other vertex.

A graph G is said to have a spanning tree if it has a sub-graph which is a spanning tree.

A connected undirected graph always has a spanning tree, because it satisfies the stronger connectivity condition of being strongly connected.

[Figure 3: Spanning Tree. A tree rooted at v1, with children v2, v3 and leaves v4, v5, v6, v7]
Theorem 1.1.
The Laplacian L of any graph G has only one zero eigenvalue iff G has a sub-graph which is a spanning tree.
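Theorem 1.1 can be illustrated numerically. With unit weights, the digraph of Fig. 1 has no spanning tree (neither v1 nor v3 is reachable from the other), and its Laplacian has two zero eigenvalues. The extra edge a13 added below is hypothetical, chosen so that v3 can reach every vertex:

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def n_zero_eigs(A, tol=1e-9):
    """Count eigenvalues of L(A) with magnitude below tol."""
    return int(np.sum(np.abs(np.linalg.eigvals(laplacian(A))) < tol))

# The 4-node digraph from before: no vertex reaches all others,
# so no spanning tree exists.
A = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
], dtype=float)
print(n_zero_eigs(A))   # 2 zero eigenvalues -> no spanning tree

# Hypothetical extra edge a13 (v1 now receives from v3): v3 reaches
# every vertex, so a spanning tree rooted at v3 exists.
A2 = A.copy()
A2[0, 2] = 1.0
print(n_zero_eigs(A2))  # exactly 1 zero eigenvalue
```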
Soham Chatterjee (IITK) Consensus Problems in Uncertain Systems August 8, 2016 7 / 32
Modelling Multi-agent Systems Using Graph Theory: Useful Theorems and Properties of Graphs
Lemma 1.2.
\frac{\mathrm{adj}(L)}{P_0} = w_r w_l^T, \qquad P_0 = \prod_{\lambda_i \neq 0} \lambda_i(L)

where w_r and w_l are the right and left eigenvectors of L associated with the zero eigenvalue.
\bar{L} = \begin{bmatrix} 0 & 0 \\ b & L + B \end{bmatrix}   (4)
Leader Follower Network
Neighbourhood Error
Theorem 1.3.
For a connected undirected follower network G with Laplacian L, a leader giving the augmented graph Ḡ, and leader connectivity matrix B, define

M = µL + ζB   (7)
We are dropping the subscript k in xi,k , as the agents have only one state.
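The matrix M = µL + ζB is positive definite when the follower graph is connected and at least one follower is pinned to the leader. A quick numerical check under assumed values (a 4-agent undirected path, leader attached to follower 1):

```python
import numpy as np

# Assumed example topology: undirected path graph on 4 followers
# (unit weights), leader connected only to follower 1.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A
B = np.diag([1.0, 0.0, 0.0, 0.0])   # leader connectivity matrix

mu, zeta = 2.0, 1.0
M = mu * L + zeta * B               # eq. (7)

# L alone is only positive semi-definite (L @ 1 = 0); adding the
# leader coupling B makes M positive definite.
print(np.linalg.eigvalsh(M).min() > 0)  # True
```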
Consensus of First Order Uncertain Systems: Uncertainty is a Single Weighted Function
Terminologies
The individual agents' states, inputs, and neighbourhood errors can be arranged in vectors to give

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}, \qquad u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_N \end{bmatrix}, \qquad e = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{bmatrix}   (12)
Also define

f(x) = \begin{bmatrix} f_1(x_1) \\ f_2(x_2) \\ \vdots \\ f_N(x_N) \end{bmatrix}   (14)
Error Dynamics
Where r = [r , r · · · (n times)]T
\dot{V} = e^T \dot{e} + \tilde{\eta}^T \dot{\tilde{\eta}}
        = -\mu e^T (L+B) e - e^T (L+B) \tilde{H} f(x) + e^T B \dot{r} + \dot{\tilde{\eta}}^T \tilde{\eta}   (19)
Define

F = \begin{bmatrix} f_1(x_1) & 0 & \cdots & 0 \\ 0 & f_2(x_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f_N(x_N) \end{bmatrix}
Note that,
F η̃ = H̃f(x)
\dot{\hat{\eta}}_i = -\dot{\tilde{\eta}}_i = \Big[ \sum_{j \in N_i} a_{ij}(e_j - e_i) - b_i e_i \Big] f_i(x_i)   (20)
Under the adaptation law in (20), the time derivative of the parameter error becomes

\dot{\tilde{\eta}} = -\dot{\hat{\eta}} = F(L+B)e   (21)

so that \dot{\tilde{\eta}}^T \tilde{\eta} = e^T (L+B) F \tilde{\eta}, which exactly cancels the cross term -e^T(L+B)\tilde{H}f(x) in (19).
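The point of law (20) is precisely this cancellation. A numerical sanity check with random data, assuming the sign convention η̃̇ = F(L+B)e from (21) and a symmetric L+B (undirected graph):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random symmetric M = L + B (undirected graph), scalar per-agent
# nonlinearities f_i(x_i) and parameter errors eta_tilde_i.
Msym = rng.standard_normal((N, N)); Msym = Msym + Msym.T
e = rng.standard_normal(N)
f_vals = rng.standard_normal(N)     # stands in for f_i(x_i)
eta_t = rng.standard_normal(N)      # parameter errors eta_tilde_i

F = np.diag(f_vals)                 # F = diag(f_i(x_i))
H_t = np.diag(eta_t)                # H_tilde = diag(eta_tilde_i)

# Identity F eta_tilde = H_tilde f(x): both equal [f_i * eta_tilde_i]
assert np.allclose(F @ eta_t, H_t @ f_vals)

# With eta_tilde_dot = F (L+B) e, the cross terms in V_dot cancel:
eta_t_dot = F @ Msym @ e
cross = -e @ Msym @ F @ eta_t + eta_t_dot @ eta_t
print(abs(cross) < 1e-10)  # True
```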
V̇ ≤ −α(kek) + rm (28)
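The whole scheme can be exercised in a minimal Euler simulation. Everything numeric below is an assumption for illustration: the plant form ẋ_i = u_i + η_i f_i(x_i) with unknown η_i, the control u_i = µe_i − η̂_i f_i(x_i), the topology, gains, and reference; the signs are chosen so the Lyapunov cross terms cancel, and the slides' exact conventions may differ.

```python
import numpy as np

# Sketch: 3 followers on an undirected path graph, leader with a
# constant reference r attached to follower 1 (assumed topology).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
B = np.diag([1.0, 0.0, 0.0])
M = L + B

r = 1.0                              # constant leader state (r_dot = 0)
eta = np.array([0.5, -0.3, 0.8])     # true (unknown) weights eta_i
f = np.sin                           # known basis function f_i
mu, dt, steps = 2.0, 0.005, 20_000

x = np.array([2.0, -1.0, 0.5])       # initial agent states
eta_hat = np.zeros(3)                # adaptive parameter estimates

for _ in range(steps):
    e = -M @ (x - r)                 # neighbourhood errors: e = -M (x - r 1)
    fx = f(x)
    u = mu * e - eta_hat * fx        # certainty-equivalence control
    x_new = x + dt * (u + eta * fx)  # plant: x_i_dot = u_i + eta_i f_i(x_i)
    eta_hat = eta_hat + dt * (-fx * (M @ e))  # adaptation law, cf. (20)
    x = x_new

print(np.abs(x - r).max())           # close to zero: agents track the leader
```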
Design the corresponding adaptation laws \dot{\hat{\eta}}_{i,k}.
This motivates the use of neural networks to approximate a generic function.
\dot{x}_i = u_i + f_i(x_i), \qquad y_i = x_i   (31)
With H = \mathrm{blkdiag}(w_1^T, \dots, w_N^T), define

F = \begin{bmatrix} \varphi_1^T & 0 & \cdots & 0 \\ 0 & \varphi_2^T & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \varphi_N^T \end{bmatrix}, \qquad \varphi = \begin{bmatrix} \varphi_1 \\ \varphi_2 \\ \vdots \\ \varphi_N \end{bmatrix}   (36)
Note that,
Hφ = F w, H̃φ = F w̃ (37)
Define the estimates of H and w as Ĥ and ŵ, and the approximation errors as H̃
and w̃
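The block structure behind (37) can be sanity-checked numerically. The block-diagonal layout of H and F below is an assumption reconstructed from the identity Hφ = Fw:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 3, 4                        # N agents, m basis functions per agent

w   = [rng.standard_normal(m) for _ in range(N)]   # weight vectors w_i
phi = [rng.standard_normal(m) for _ in range(N)]   # regressors phi_i(x_i)

def blkdiag_rows(vs):
    """Stack row vectors v_i^T into an N x (N*m) block-diagonal matrix."""
    width = len(vs[0])
    out = np.zeros((len(vs), len(vs) * width))
    for i, v in enumerate(vs):
        out[i, i * width:(i + 1) * width] = v
    return out

H = blkdiag_rows(w)                # H   = blkdiag(w_1^T, ..., w_N^T)
F = blkdiag_rows(phi)              # F   = blkdiag(phi_1^T, ..., phi_N^T)
w_stack   = np.concatenate(w)      # w   = [w_1; ...; w_N]
phi_stack = np.concatenate(phi)    # phi = [phi_1; ...; phi_N]

# Both sides of (37) equal the vector [w_i^T phi_i]_{i=1..N}
print(np.allclose(H @ phi_stack, F @ w_stack))  # True
```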
u_i = \mu e_i + \hat{w}_i^T \varphi_i   (38)
Where r = [r , r · · · (n times)]T
Note that the error dynamics in (41) do not necessarily have an equilibrium point, so we will speak in terms of boundedness and ultimate boundedness.
Consider the Lyapunov function
V(e) = \frac{1}{2}\left( e^T e + \tilde{w}^T \tilde{w} \right)   (42)

\dot{V} = e^T \dot{e} + \dot{\tilde{w}}^T \tilde{w}
        = -\mu e^T (L+B) e - e^T (L+B) \tilde{H}\varphi + e^T B \dot{r} + \dot{\tilde{w}}^T \tilde{w} - e^T (L+B)\varepsilon   (43)

where ε denotes the stacked neural-network approximation error, the residual left after f(x) is replaced by its network approximation.
Let M = L + B:

\dot{V} = -\mu e^T M e - e^T M \tilde{H}\varphi + e^T B \dot{r} + \dot{\tilde{w}}^T \tilde{w} - e^T M \varepsilon

where ε again denotes the network approximation error.
Adaptation Law
Consider the adaptation of the parameters \hat{w}_i as:

\dot{\hat{w}}_i = -\dot{\tilde{w}}_i = \Big[ \sum_{j \in N_i} a_{ij}(e_j - e_i) - b_i e_i \Big] \varphi_i(x_i)   (44)
Under the adaptation law in (44), the time derivative of the parameter error becomes

\dot{\tilde{w}} = -\dot{\hat{w}} = F^T (L+B) e   (45)

so that \dot{\tilde{w}}^T \tilde{w} = e^T M F \tilde{w}.
Using the bounds

e^T B \dot{r} \le \|e\|\|B\dot{r}\| \le \|B\|\|e\|\|\dot{r}\| \le \tfrac{1}{2}\lambda_{\max}(B)\left( \|e\|^2 + \|\dot{r}\|^2 \right)

\mu e^T M e \ge \mu \lambda_{\min}(M)\|e\|^2

e^T M \varepsilon \le \|e\|\|M\|\|\varepsilon\| \le \tfrac{1}{2}\lambda_{\max}(M)\left( \|e\|^2 + \|\varepsilon\|^2 \right)

we get

\dot{V} \le -\left[ \mu\lambda_{\min}(M) - \tfrac{1}{2}\lambda_{\max}(B) - \tfrac{1}{2}\lambda_{\max}(M) \right]\|e\|^2 + \tfrac{1}{2}\lambda_{\max}(B)\|\dot{r}\|^2 + \tfrac{1}{2}\lambda_{\max}(M)\|\varepsilon\|^2   (47)
Under a suitable choice of µ,
...
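"A suitable choice of µ" can be made concrete: pick µ large enough that the coefficient of ‖e‖² in (47) is negative. A sketch with an assumed topology, taking the ½ factors of the quadratic bounds into account:

```python
import numpy as np

# Assumed example topology: undirected path of 4 followers,
# leader attached to follower 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
B = np.diag([1.0, 0.0, 0.0, 0.0])
M = L + B

lmin_M = np.linalg.eigvalsh(M).min()
lmax_M = np.linalg.eigvalsh(M).max()
lmax_B = np.linalg.eigvalsh(B).max()

# With the 1/2 factors from the quadratic bounds, any
# mu > (lmax_B + lmax_M) / (2 * lmin_M) makes the ||e||^2
# coefficient in (47) negative.
mu = (lmax_B + lmax_M) / (2 * lmin_M) + 1.0
bracket = mu * lmin_M - 0.5 * lmax_B - 0.5 * lmax_M
print(bracket > 0)  # True
```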
The End