Kernels of Directed Graph Laplacians
J.S. Caughman and J.J.P. Veerman
Department of Mathematics and Statistics, Portland State University, PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu
Submitted: Oct 28, 2005; Accepted: Mar 14, 2006; Published: Apr 11, 2006. Mathematics Subject Classification: 05C50
Abstract.
Let $G$ denote a directed graph with adjacency matrix $Q$ and indegree matrix $D$. We consider the Kirchhoff matrix $L = D - Q$, sometimes referred to as the directed Laplacian. A classical result of Kirchhoff asserts that when $G$ is undirected, the multiplicity of the eigenvalue 0 equals the number of connected components of $G$. This fact has a meaningful generalization to directed graphs, as was observed by Chebotarev and Agaev in 2005. Since this result has many important applications in the sciences, we offer an independent and self-contained proof of their theorem, showing in this paper that the algebraic and geometric multiplicities of 0 are equal, and that a graph-theoretic property determines the dimension of this eigenspace, namely, the number of reaches of the directed graph. We also extend their results by deriving a natural basis for the corresponding eigenspace. The results are proved in the general context of stochastic matrices, and apply equally well to directed graphs with nonnegative edge weights.

Keywords: Kirchhoff matrix. Eigenvalues of Laplacians. Graphs. Stochastic matrix.
1 Definitions
Let $G$ denote a directed graph with vertex set $V = \{1, 2, \ldots, N\}$ and edge set $E \subseteq V \times V$. To each edge $uv \in E$, we allow a positive weight $\omega_{uv}$ to be assigned. The adjacency matrix $Q$ is the $N \times N$ matrix whose rows and columns are indexed by the vertices, and whose $ij$ entry is $\omega_{ji}$ if $ji \in E$ and zero otherwise. The indegree matrix $D$ is the $N \times N$ diagonal matrix whose $ii$ entry is the sum of the entries of the $i$th row of $Q$. The matrix $L = D - Q$ is sometimes referred to as the Kirchhoff matrix, and sometimes as the directed graph Laplacian of $G$.

A variation on this matrix can be defined as follows. Let $D^{+}$ denote the pseudoinverse of $D$. In other words, let $D^{+}$ be the diagonal matrix whose $ii$ entry is $D_{ii}^{-1}$ if $D_{ii} \neq 0$ and whose $ii$ entry is zero if $D_{ii} = 0$. Then the matrix $\mathcal{L} = D^{+}(D - Q)$ has nonnegative diagonal entries, nonpositive off-diagonal entries, all entries between $-1$ and $1$ (inclusive), and all row sums equal to zero. Furthermore, the matrix $S = I - \mathcal{L}$ is stochastic.

the electronic journal of combinatorics 13 (2006), #R39
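To make the two constructions concrete, the following sketch builds $Q$, $D$, $L = D - Q$, the pseudoinverse $D^{+}$, and $\mathcal{L} = D^{+}(D - Q)$ for a small weighted digraph, then checks that $S = I - \mathcal{L}$ is stochastic. The graph and its weights are invented for illustration; only the definitions come from the text.

```python
import numpy as np

# A small weighted digraph on vertices 0..3 (0-indexed for convenience).
# Convention from the text: Q[i, j] = omega_{ji} when ji is an edge,
# so row i of Q lists the weights on the edges coming *into* vertex i.
N = 4
Q = np.zeros((N, N))
Q[1, 0] = 2.0   # edge 0 -> 1 with weight 2
Q[2, 0] = 1.0   # edge 0 -> 2 with weight 1
Q[2, 1] = 3.0   # edge 1 -> 2 with weight 3
# vertices 0 and 3 have no incoming edges, so their rows of Q are zero

D = np.diag(Q.sum(axis=1))          # indegree matrix: row sums of Q
L = D - Q                           # Kirchhoff matrix (directed Laplacian)

# Pseudoinverse of the diagonal D: invert the nonzero diagonal entries.
d = np.diag(D)
D_plus = np.diag([1.0 / x if x != 0 else 0.0 for x in d])

calL = D_plus @ (D - Q)             # the normalized Laplacian (script L)
S = np.eye(N) - calL                # S = I - calL

# S is stochastic: nonnegative entries, every row sums to 1.
assert np.all(S >= 0)
assert np.allclose(S.sum(axis=1), 1.0)
```

Vertices with indegree zero (like vertices 0 and 3 here) get a zero row in $\mathcal{L}$, so the corresponding row of $S$ is a row of the identity; this is why $S$ remains stochastic even when $D$ is singular.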
We shall see (in Section 4) that both $L$ and $\mathcal{L}$ can be written in the form $D - DS$, where $D$ is an appropriately chosen nonnegative diagonal matrix and $S$ is stochastic. We therefore turn our attention to the properties of these matrices for the statement of our main results. We show that for any such matrix $M = D - DS$, the geometric and algebraic multiplicities of the eigenvalue zero are equal, and we find a basis for this eigenspace (the kernel of $M$). Furthermore, the dimension of this kernel and the form of these eigenvectors can be described in graph-theoretic terms as follows.

We associate with the matrix $M$ a directed graph $G$, and write $j \rightsquigarrow i$ if there exists a directed path from vertex $j$ to vertex $i$. For any vertex $j$, we define the reachable set $R(j)$ to be the set containing $j$ and all vertices $i$ such that $j \rightsquigarrow i$. A maximal reachable set will be called a reach. We prove that the algebraic and geometric multiplicity of 0 as an eigenvalue for $M$ equals the number of reaches of $G$.

We also describe a basis for the kernel of $M$ as follows. Let $R_1, \ldots, R_k$ denote the reaches of $G$. For each reach $R_i$, we define the exclusive part of $R_i$ to be the set $H_i = R_i \setminus \bigcup_{j \neq i} R_j$. Likewise, we define the common part of $R_i$ to be the set $C_i = R_i \setminus H_i$. Then for each reach $R_i$ there exists a vector $v_i$ in the kernel of $M$ whose entries satisfy: (i) $(v_i)_j = 1$ for all $j \in H_i$; (ii) $0 < (v_i)_j < 1$ for all $j \in C_i$; (iii) $(v_i)_j = 0$ for all $j \notin R_i$. Taken together, these vectors $v_1, v_2, \ldots, v_k$ form a basis for the kernel of $M$ and sum to the all-ones vector $\mathbf{1}$.

Due to the recent appearance of Agaev and Chebotarev's notable paper [1], we would like to clarify the connections to their results. In that paper, the matrices studied have the form $M = \alpha(I - S)$, where $\alpha$ is positive and $S$ is stochastic. A simple check verifies that this is precisely the set of matrices of the form $D - DS$, where $D$ is nonnegative diagonal. The number of reaches corresponds, in that paper, with the in-forest dimension. And where that paper concentrates on the location of the Laplacian eigenvalues in the complex plane, we instead have derived the form of the associated eigenvectors.
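The main result can be checked numerically on a small example. The following sketch uses an invented two-reach graph and takes $D = I$, so that $M = I - S$; it verifies that the kernel of $M$ is two-dimensional and is spanned by vectors of the promised form.

```python
import numpy as np

# Stochastic S on vertices {0,...,4}: edges 0->1, 1->2, 4->3, 3->2
# (an edge j->i corresponds to S[i, j] > 0). Vertices 0 and 4 root
# two different reaches: R(0) = {0,1,2} and R(4) = {2,3,4}.
S = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # 0: no in-neighbors
    [0.5, 0.5, 0.0, 0.0, 0.0],   # 1: in-neighbor 0
    [0.0, 0.5, 0.0, 0.5, 0.0],   # 2: in-neighbors 1 and 3 (common part)
    [0.0, 0.0, 0.0, 0.5, 0.5],   # 3: in-neighbor 4
    [0.0, 0.0, 0.0, 0.0, 1.0],   # 4: no in-neighbors
])
M = np.eye(5) - S                # the case D = I of M = D - DS

# Kernel of M via SVD: singular values near 0 count the null-space dimension.
_, sing, _ = np.linalg.svd(M)
kernel_dim = int(np.sum(sing < 1e-10))
assert kernel_dim == 2           # one dimension per reach

# The basis promised above: 1 on the exclusive part H_i, strictly between
# 0 and 1 on the common part C_i, and 0 off the reach.
v1 = np.array([1.0, 1.0, 0.5, 0.0, 0.0])   # reach of vertex 0
v2 = np.array([0.0, 0.0, 0.5, 1.0, 1.0])   # reach of vertex 4
assert np.allclose(M @ v1, 0) and np.allclose(M @ v2, 0)
assert np.allclose(v1 + v2, 1.0)           # they sum to the all-ones vector
```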
2 Stochastic matrices
A matrix is said to be (row) stochastic if the entries are nonnegative and the row sums all equal 1. Our first result is a special case of Geršgorin's theorem [3, p. 344].
2.1 Lemma. Suppose $S$ is stochastic. Then each eigenvalue $\lambda$ satisfies $|\lambda| \leq 1$.
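A quick numerical sanity check of Lemma 2.1, using a randomly generated stochastic matrix (the size and seed are arbitrary):

```python
import numpy as np

# Every eigenvalue of a stochastic matrix lies in the closed unit disk.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
S = A / A.sum(axis=1, keepdims=True)     # normalize rows to sum to 1

eigvals = np.linalg.eigvals(S)
assert np.all(np.abs(eigvals) <= 1 + 1e-12)
assert np.any(np.isclose(eigvals, 1.0))  # 1 is always an eigenvalue: S @ 1 = 1
```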
2.2 Definition. Given any real $N \times N$ matrix $M$, we denote by $G_M$ the directed graph with vertices $1, \ldots, N$ and an edge $j \to i$ whenever $M_{ij} \neq 0$.

For each vertex $i$, set $N_i := \{j \mid j \to i\}$. We write $j \rightsquigarrow i$ if there exists a directed path in $G_M$ from vertex $j$ to vertex $i$. Furthermore, for any vertex $j$, we define $R(j)$ to be the set containing $j$ and all vertices $i$ such that $j \rightsquigarrow i$. We refer to $R(j)$ as the reachable set of vertex $j$. Finally, we say a matrix $M$ is rooted if there exists a vertex $r$ in $G_M$ such that $R(r)$ contains every vertex of $G_M$. We refer to such a vertex $r$ as a root.
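The reachable sets $R(j)$ and the rooted property are straightforward to compute by breadth-first search on $G_M$. The following sketch uses the edge convention of Definition 2.2 ($j \to i$ whenever $M_{ij} \neq 0$) on an invented example.

```python
import numpy as np
from collections import deque

def reachable_set(M, j):
    """R(j): vertex j together with every i such that j ~> i in G_M,
    where G_M has an edge j -> i whenever M[i, j] != 0."""
    N = M.shape[0]
    seen = {j}
    queue = deque([j])
    while queue:
        u = queue.popleft()
        for i in range(N):
            if M[i, u] != 0 and i not in seen:   # edge u -> i
                seen.add(i)
                queue.append(i)
    return seen

def is_rooted(M):
    """M is rooted if some vertex reaches every vertex of G_M."""
    N = M.shape[0]
    return any(len(reachable_set(M, r)) == N for r in range(N))

# A directed path 0 -> 1 -> 2, encoded so that M[i, j] != 0 marks edge j -> i:
M = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
assert reachable_set(M, 0) == {0, 1, 2}   # vertex 0 is a root
assert is_rooted(M)
```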
2.3 Lemma. Suppose $S$ is stochastic and rooted. Then the eigenspace $E_1$ associated with the eigenvalue 1 is spanned by the all-ones vector $\mathbf{1}$.

Proof. Conjugating $S$ by an appropriate permutation matrix if necessary, we may assume that vertex 1 is a root. Since $S$ is stochastic, $S\mathbf{1} = \mathbf{1}$, so $\mathbf{1} \in E_1$. By way of contradiction, suppose $\dim(E_1) > 1$ and choose linearly independent vectors $x, y \in E_1$. Suppose $|x_i|$ is maximized at $i = n$. Comparing the $n$ entry on each side of the equation $x = Sx$, we see that
$$|x_n| \leq \sum_{j \in N_n} S_{nj}\,|x_j| \leq |x_n| \sum_{j \in N_n} S_{nj} = |x_n|.$$
Therefore, equality holds throughout, and $|x_j| = |x_n|$ for all $j \in N_n$. In fact, since
$$\sum_{j \in N_n} S_{nj}\,x_j = x_n,$$
it follows that $x_j = x_n$ for all $j \in N_n$. Since $S$ is rooted at vertex 1, a simple induction now shows that $x_1 = x_n$. So $|x_i|$ is maximized at $i = 1$. The same argument applies to any vector in $E_1$, and so $|y_i|$ is maximized at $i = 1$.

Since $y_1 \neq 0$, we can define a vector $z$ such that $z_i := x_i - \frac{x_1}{y_1}\,y_i$ for each $i$. This vector $z$, as a linear combination of $x$ and $y$, must belong to $E_1$. It follows that $|z_i|$ is also maximized at $i = 1$. But $z_1 = 0$ by definition, so $z_i = 0$ for all $i$. It follows that $x$ and $y$ are not linearly independent, a contradiction.
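Lemma 2.3 can be checked numerically: for a rooted stochastic matrix, $\ker(I - S)$ should be one-dimensional. The matrix below (a directed path with lazy self-loops, invented for the example) is rooted at vertex 0.

```python
import numpy as np

# Rooted stochastic S: edges 0 -> 1 and 1 -> 2 (S[i, j] > 0 marks j -> i),
# so vertex 0 reaches every vertex.
S = np.array([
    [1.0, 0.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
])

# Geometric multiplicity of the eigenvalue 1 = dim ker(I - S),
# counted here as the number of near-zero singular values.
_, sing, _ = np.linalg.svd(np.eye(3) - S)
assert int(np.sum(sing < 1e-10)) == 1            # E_1 is one-dimensional
assert np.allclose(S @ np.ones(3), np.ones(3))   # and 1 spans it
```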
2.4 Lemma. Suppose $S$ is stochastic $N \times N$ and vertex 1 is a root. Further assume $N_1$ is empty. Let $P$ denote the principal submatrix obtained by deleting the first row and column of $S$. Then the spectral radius of $P$ is strictly less than 1.

Proof. Since $N_1$ is empty, $S$ is block lower-triangular with $P$ as a diagonal block. So the spectral radius of $P$ cannot exceed that of $S$. Therefore, by Lemma 2.1, the spectral radius of $P$ is at most 1. By way of contradiction, suppose the spectral radius of $P$ is equal to 1. Then by the Perron-Frobenius theorem (see [3, p. 508]), we would have $Px = x$ for some nonzero vector $x$.

Define a vector $v$ with $v_1 = 0$ and $v_i = x_{i-1}$ for $i \in \{2, \ldots, N\}$. We find that
$$Sv = \begin{pmatrix} 1 & 0 \cdots 0 \\ \begin{matrix} S_{21} \\ \vdots \\ S_{N1} \end{matrix} & P \end{pmatrix} \begin{pmatrix} 0 \\ x \end{pmatrix} = \begin{pmatrix} 0 \\ x \end{pmatrix} = v.$$
So $v \in E_1$. But $v_1 = 0$, so Lemma 2.3 implies $v = 0$, and hence $x = 0$. This contradiction completes the proof.
2.5 Corollary. Suppose $S$ is stochastic and $N \times N$. Assume the vertices of $G_S$ can be partitioned into nonempty sets $A$, $B$ such that for every $b \in B$, there exists $a \in A$ with $a \rightsquigarrow b$ in $G_S$. Then the spectral radius of the principal submatrix $S_{BB}$ obtained by deleting from $S$ the rows and columns of $A$ is strictly less than 1.

Proof. Define the matrix $\hat{S}$ by
$$\hat{S} = \begin{pmatrix} 1 & 0 \\ u & S_{BB} \end{pmatrix},$$
where $u$ is chosen so that $\hat{S}$ is stochastic. We claim that $\hat{S}$ is rooted (at 1). To see this, pick any $b \in B$. We must show $1 \rightsquigarrow b$ in $G_{\hat{S}}$. By hypothesis there exists $a \in A$ with $a \rightsquigarrow b$ in $G_S$. Let $a = x_0 \to x_1 \to \cdots \to x_n = b$ be a directed path in $G_S$ from $a$ to $b$. Let $i$ be maximal such that $x_i \in A$. Then the $x_{i+1}, x_i$ entry of $S$ is nonzero, so the $x_{i+1}$ row of $S_{BB}$ has row sum strictly less than 1. Therefore, the $x_{i+1}$ entry of the first column of $\hat{S}$ is nonzero. So $1 \to x_{i+1}$ in $G_{\hat{S}}$, and therefore $1 \rightsquigarrow b$ in $G_{\hat{S}}$, as desired. So $\hat{S}$ is rooted, and the previous lemma gives the result.
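Numerically, Corollary 2.5 looks as follows (the partition and weights are invented for the example): deleting the rows and columns of $A$ leaves a substochastic block whose spectral radius is strictly below 1.

```python
import numpy as np

# Take A = {0} and B = {1, 2}, with 0 ~> 1 ~> 2 in G_S,
# so every vertex of B is reached from A.
S = np.array([
    [1.0, 0.0, 0.0],
    [0.4, 0.6, 0.0],   # vertex 1 draws weight 0.4 from A
    [0.0, 0.3, 0.7],   # vertex 2 is reached from A through vertex 1
])
S_BB = S[1:, 1:]                         # delete the rows and columns of A
rho = max(abs(np.linalg.eigvals(S_BB)))  # spectral radius of S_BB
assert rho < 1                           # strictly below 1, as the corollary says
```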
2.6 Definition. A set $R$ of vertices in a graph will be called a reach if it is a maximal reachable set; in other words, $R$ is a reach if $R = R(i)$ for some $i$ and there is no $j$ such that $R(i) \subset R(j)$ (properly). Since our graphs all have finite vertex sets, such maximal sets exist and are uniquely determined by the graph. For each reach $R_i$ of a graph, we define the exclusive part of $R_i$ to be the set $H_i = R_i \setminus \bigcup_{j \neq i} R_j$. Likewise, we define the common part of $R_i$ to be the set $C_i = R_i \setminus H_i$.
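Reaches and their exclusive and common parts can be computed directly from the reachable sets: a reach is a reachable set not properly contained in any other. A sketch, on an invented two-reach example:

```python
import numpy as np

def reachable_sets(S):
    """All sets R(j) in G_S (edge j -> i whenever S[i, j] != 0), by search."""
    N = S.shape[0]
    out = []
    for j in range(N):
        seen, stack = {j}, [j]
        while stack:
            u = stack.pop()
            for i in range(N):
                if S[i, u] != 0 and i not in seen:
                    seen.add(i); stack.append(i)
        out.append(frozenset(seen))
    return out

def reaches(S):
    """Maximal reachable sets (the reaches), without duplicates."""
    R = reachable_sets(S)
    return sorted({r for r in R if not any(r < s for s in R)}, key=min)

# Two-reach example: 0 -> 1 -> 2 and 4 -> 3 -> 2 (vertex 2 is shared).
S = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
Rs = reaches(S)
assert Rs == [frozenset({0, 1, 2}), frozenset({2, 3, 4})]
for i, R in enumerate(Rs):                     # exclusive and common parts
    H = R - frozenset().union(*(Rs[:i] + Rs[i+1:]))
    C = R - H
    print(sorted(R), "exclusive:", sorted(H), "common:", sorted(C))
```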
2.7 Theorem. Suppose $S$ is stochastic $N \times N$, and let $R$ denote a reach of $G_S$ with exclusive part $H$ and common part $C$. Then there exists an eigenvector $v \in E_1$ whose entries satisfy:
(i) $v_i = 1$ for all $i \in H$;
(ii) $0 < v_i < 1$ for all $i \in C$;
(iii) $v_i = 0$ for all $i \notin R$.
Proof. Let $Y$ denote the set of vertices not in $R$. Permuting rows and columns of $S$ if necessary, we may write $S$ as
$$S = \begin{pmatrix} S_{HH} & S_{HC} & S_{HY} \\ S_{CH} & S_{CC} & S_{CY} \\ S_{YH} & S_{YC} & S_{YY} \end{pmatrix} = \begin{pmatrix} S_{HH} & 0 & 0 \\ S_{CH} & S_{CC} & S_{CY} \\ 0 & 0 & S_{YY} \end{pmatrix}.$$
Since $S_{HH}$ is a rooted stochastic matrix, it has eigenvalue 1 with geometric multiplicity 1. The associated eigenvector is $\mathbf{1}_H$.

Observe that $S_{CC}$ has spectral radius $< 1$ by Corollary 2.5. Further, notice that
$$S\,(\mathbf{1}_H, \mathbf{0}_C, \mathbf{0}_Y)^T = (\mathbf{1}_H,\; S_{CH}\mathbf{1}_H,\; \mathbf{0}_Y)^T.$$
Using this, we find that solving the equation $S\,(\mathbf{1}_H, x, \mathbf{0}_Y)^T = (\mathbf{1}_H, x, \mathbf{0}_Y)^T$ for $x$ amounts to solving
$$\begin{pmatrix} \mathbf{1}_H \\ S_{CH}\mathbf{1}_H + S_{CC}\,x \\ \mathbf{0}_Y \end{pmatrix} = \begin{pmatrix} \mathbf{1}_H \\ x \\ \mathbf{0}_Y \end{pmatrix}.$$
Solving the above, however, is equivalent to solving $(I - S_{CC})\,x = S_{CH}\mathbf{1}_H$. Since the spectral radius of $S_{CC}$ is strictly less than 1, the eigenvalues of $I - S_{CC}$ cannot be 0. So $I - S_{CC}$ is invertible. It follows that $x = (I - S_{CC})^{-1} S_{CH}\mathbf{1}_H$ is the desired solution. Conditions (i) and (iii) are clearly satisfied by $(\mathbf{1}_H, x, \mathbf{0}_Y)^T$, so it remains only to verify (ii). To see that the entries of $x$ are positive, note that $(I - S_{CC})^{-1} = \sum_{i=0}^{\infty} S_{CC}^{\,i}$, so the entries of $x$ are nonnegative and strictly less than 1. But every vertex in $C$ has a path from the root, where the eigenvector has value 1. So since each entry in the eigenvector for $S$ must equal a weighted average of the entries corresponding to its in-neighbors in $G_S$, all entries in $C$ must be positive.
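The proof is constructive, and the construction is easy to replicate numerically. On the invented example below (one reach $R = \{0,1,2\}$ of a five-vertex graph, with $H = \{0,1\}$, $C = \{2\}$, and $Y = \{3,4\}$), solving $(I - S_{CC})\,x = S_{CH}\mathbf{1}_H$ produces the middle block of the eigenvector:

```python
import numpy as np

# Two-reach stochastic matrix; we build the eigenvector for the reach
# R = {0,1,2} with exclusive part H = {0,1}, common part C = {2},
# and Y = {3,4} the vertices outside R.
S = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
H, C, Y = [0, 1], [2], [3, 4]
S_CC = S[np.ix_(C, C)]
S_CH = S[np.ix_(C, H)]

# x = (I - S_CC)^{-1} S_CH 1_H; I - S_CC is invertible since rho(S_CC) < 1.
x = np.linalg.solve(np.eye(len(C)) - S_CC, S_CH @ np.ones(len(H)))
assert np.all((0 < x) & (x < 1))   # condition (ii)

v = np.zeros(5)
v[H] = 1.0                          # condition (i): 1 on the exclusive part
v[C] = x                            # condition (ii): between 0 and 1 on C
assert np.allclose(S @ v, v)        # v is fixed by S, i.e. v lies in E_1
```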
3 Matrices of the form $D - DS$

We now consider matrices of the form $D - DS$, where $D$ is a nonnegative diagonal matrix and $S$ is stochastic. We will determine the algebraic multiplicity of the zero eigenvalue. We begin with the rooted case.
3.1 Lemma. Suppose $M = D - DS$, where $D$ is a nonnegative diagonal matrix and $S$ is stochastic. Suppose $M$ is rooted. Then the eigenvalue 0 has algebraic multiplicity 1.

Proof. Let $M = D - DS$ be given as stated. First we claim that, without loss of generality, $S_{ii} = 1$ whenever $D_{ii} = 0$. To see this, suppose $D_{ii} = 0$ for some $i$. If $S_{ii} \neq 1$, let $\bar{S}$ be the stochastic matrix obtained by replacing the $i$th row of $S$ by the $i$th row of the identity matrix $I$, and let $\bar{M} = D - D\bar{S}$. Observe that $\bar{M} = M$, and this proves our claim. So we henceforth assume that
$$S_{ii} = 1 \quad \text{whenever} \quad D_{ii} = 0. \tag{1}$$

Next we claim that, given (1), $\ker(M)$ must be identical with $\ker(I - S)$. To see this, note that if $(I - S)v = 0$ then clearly $Mv = D(I - S)v = 0$. Conversely, suppose $Mv = 0$. Then $D(I - S)v = 0$, so the vector $w = (I - S)v$ is in the kernel of $D$. If $w$ has a nonzero entry $w_i$, then $D_{ii} = 0$. Recall this implies $S_{ii} = 1$, so the $i$th row of $I - S$ is zero. But $w = (I - S)v$, so $w_i$ must be zero. This contradiction implies $w$ must have no nonzero entries, and therefore $(I - S)v = 0$. So $M$ and $I - S$ have identical nullspaces, as desired.

By Lemma 2.3, $S\mathbf{1} = \mathbf{1}$, so $M\mathbf{1} = 0$. Therefore the geometric multiplicity, and hence the algebraic multiplicity, of the eigenvalue 0 must be at least 1. By way of contradiction, suppose the algebraic multiplicity is greater than 1. Then there must be a nonzero vector $x$ and an integer $d \geq 2$ such that
$$M^{d-1}x \neq 0 \quad \text{and} \quad M^{d}x = 0.$$
Now, since $\ker M = \ker(I - S)$, Lemma 2.3 and the above equation imply that $M^{d-1}x$ must be a multiple of the vector $\mathbf{1}$. Scaling $M^{d-1}x$ appropriately, we find there exists a vector $v$ such that $Mv = -\mathbf{1}$.