Aditya Bhaskara CS 5968/6968, Lecture 5: Eigenvalues of AG and the Laplacian 26 January 2016
Lecture 5: Eigenvalues of AG and the Laplacian
We continue our study of the adjacency matrix, and show that the multiplicity of the eigenvalue d is equal to the number of connected components. We then introduce the Laplacian of a graph. We will see how the second smallest eigenvalue of the Laplacian is related to the expansion of the graph.

Disclaimer: These lecture notes are informal in nature and are not thoroughly proofread. In case you find a serious error, please send email to the instructor pointing it out.

Spectrum of the Adjacency Matrix

Recall the definition of the adjacency matrix AG of a graph G. Once again, throughout this lecture, we will be dealing with graphs that are regular, i.e., all vertices have degree d.
We saw last time that any eigenvalue λ of AG satisfies |λ| ≤ d. Suppose we order the eigenvalues λ1 ≤ λ2 ≤ · · · ≤ λn. Then we saw that λn = d. Since all the eigenvalues have magnitude ≤ d, we have λ1 ≥ −d.
Exercise. For a connected graph G, λ1(AG) = −d if and only if G is bipartite.

Today, we show a simple yet elegant connection between the eigenvalues and the number of connected components.
Theorem 1. The multiplicity of the eigenvalue d is equal to the number of connected components of G.

Before proving the theorem in general, let us work out a simpler case: suppose we know that the graph has precisely one connected component, i.e., the graph is connected. We saw that λn = d (the eigenvector is the n-dimensional vector with all entries equal to 1, which we shall denote by 1n). The theorem states that in this case, λn−1 < d. This is equivalent to saying that for any non-zero vector v orthogonal to 1n, we cannot have Av = dv.
Let us start with any vector v such that Av = dv. Let i∗ be the index for which |vi| is maximized. Now, the equality Av = dv implies that for every i,

$$\sum_j A_{ij} v_j = d v_i.$$

In particular, this is true for i = i∗. Thus

$$\sum_j A_{i^* j} v_j = d v_{i^*}.$$

Now, the LHS has at most d nonzero terms (because the degree is d, precisely d of the Ai∗j terms are 1 and the rest are 0). Further, each of the terms is at most |vi∗| in magnitude (by assumption, i∗ maximizes |vi|). Thus the only way the equality can hold is if all d of these terms are non-zero and each is precisely vi∗.
This implies that for all j that are neighbors of i∗, we must have vj = vi∗ . Now, we can apply the exact same argument with j instead of i∗. We would obtain that for all neighbors k of j, we have vk = vj = vi∗ . Now since the graph is connected, we can proceed this way and conclude that for all vertices i, we must have vi = vi∗ !

Thus v has all its coordinates equal, i.e., it is parallel to 1n (and hence cannot be orthogonal to 1n unless it is zero). This establishes the simpler case. Let us now get to the general case.
Proof of Theorem 1. Suppose G has k connected components, say V1, V2, . . . , Vk. We need to show that AG has precisely k orthogonal eigenvectors with eigenvalue d.
We show this by first exhibiting k orthogonal eigenvectors (which shows the multiplicity is ≥ k), and then showing that any vector u satisfying AGu = du lies in the span of the vectors exhibited (which shows the multiplicity is ≤ k).
The first is easy. Let u(i) be a vector whose jth entry is 1 if j ∈ Vi and 0 otherwise (i.e., it is the indicator vector for Vi). Then for every i ≤ k, it is easy to see that
AGu(i) = du(i).
Also, since the Vi form a partition of the vertex set, we have ⟨u(i), u(j)⟩ = 0 for all i ≠ j. Thus there are at least k orthogonal eigenvectors of eigenvalue d.
Now consider any u such that AGu = du. We can now use exactly the same reasoning we used in the case of a single connected component, to conclude that in any connected component Vi, the values uj for j ∈ Vi are all equal! (Formally, we can look at the j∗ ∈ Vi that has the largest magnitude of uj, and argue similarly.)
Thus when restricted to the connected component Vi, u is a scaling of u(i). This implies that we can write $u = \sum_i \alpha_i u^{(i)}$, for some constants αi. Hence any vector with AGu = du is a linear combination of the u(i), implying that the multiplicity is at most k.
This completes the proof.
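As a quick sanity check (not part of the original notes), Theorem 1 is easy to verify numerically. The sketch below, in Python with numpy, builds an arbitrary 2-regular example (two disjoint triangles) and counts the eigenvalues of AG equal to d; the function name and tolerance are illustrative choices.

```python
import numpy as np

def eigenvalue_d_multiplicity(A, d, tol=1e-8):
    """Count eigenvalues of the (symmetric) adjacency matrix A equal to d, up to tolerance."""
    eigvals = np.linalg.eigvalsh(A)
    return int(np.sum(np.abs(eigvals - d) < tol))

# Two disjoint triangles: a 2-regular graph with 2 connected components.
tri = np.array([[0., 1., 1.],
                [1., 0., 1.],
                [1., 1., 0.]])
A = np.block([[tri, np.zeros((3, 3))],
              [np.zeros((3, 3)), tri]])

print(eigenvalue_d_multiplicity(A, d=2))  # prints 2, matching the two components
```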
Note. For graphs that are not regular, such a clean connection between the multiplicity of the top eigenvalue and the number of connected components does not hold. However, many of the other results we will show do turn out to hold.
Let us now go back to the question of finding sparse cuts in a graph. We recall a little bit of notation from the previous lecture.

Partitioning objectives

Recall the definitions of the sparsity σ(S) and the expansion φ(S), for a set S ⊆ V :

$$\sigma(S) := \frac{E(S, \bar{S})}{|S|\,|\bar{S}|}; \qquad \varphi(S) := \frac{E(S, \bar{S})}{d\,|S|}.$$

From the definitions, it is easy to see that for any S of size ≤ n/2, we have

$$\frac{n}{2d}\,\sigma(S) \;\le\; \varphi(S) \;\le\; \frac{n}{d}\,\sigma(S).$$
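For concreteness, here is a small Python sketch (an illustration, not from the notes) that computes σ(S) and φ(S) directly from an adjacency matrix; the helper names and the 4-cycle example are hypothetical choices.

```python
import numpy as np

def cut_edges(A, S):
    """E(S, S-bar): the number of edges with exactly one endpoint in S."""
    n = A.shape[0]
    S = list(S)
    Sbar = [v for v in range(n) if v not in S]
    return int(A[np.ix_(S, Sbar)].sum())

def sparsity(A, S):
    n = A.shape[0]
    return cut_edges(A, S) / (len(S) * (n - len(S)))

def expansion(A, S, d):
    return cut_edges(A, S) / (d * len(S))

# Example: the 4-cycle (2-regular) with S = {0, 1}; the cut has 2 edges,
# so sigma(S) = 2/(2*2) = 0.5 and phi(S) = 2/(2*2) = 0.5.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(sparsity(C4, {0, 1}), expansion(C4, {0, 1}, d=2))
```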


In the last lecture, we also saw a relaxation for the problem of finding minS⊆V σ(S),

$$\frac{1}{2n} \cdot \min_{x \in \mathbb{R}^n,\; \sum_i x_i = 0} \frac{\sum_{i \sim j} (x_i - x_j)^2}{\sum_i x_i^2},$$

which we said can be formulated as an eigenvalue problem. Let us now see how.

The Graph Laplacian. The Laplacian of a graph, denoted LG, is the matrix dI − AG. (For graphs that are not regular, the Laplacian is defined to be D − AG, where D is a diagonal matrix whose ith entry is the degree of the ith vertex.) The key property of the Laplacian matrix is that for any vector x,

$$x^T L_G x = \sum_{ij \in E} (x_i - x_j)^2. \tag{1}$$

To see Eq. (1), simply expand the RHS: $\sum_{ij \in E} \left(x_i^2 + x_j^2 - x_i x_j - x_j x_i\right)$. This is equal to $\sum_i d x_i^2 - x^T A_G x$, because every $x_i^2$ appears precisely d times. We can now write $\sum_i d x_i^2$ as $x^T (dI) x$, and thus Eq. (1) follows.
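Eq. (1) is also easy to check numerically. The following sketch (illustrative, not from the notes) compares the quadratic form with the edge sum for a random vector, using K4 as an arbitrary 3-regular example.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-regular example: the complete graph K4.
A = np.ones((4, 4)) - np.eye(4)
d = 3
L = d * np.eye(4) - A  # the Laplacian dI - A_G

x = rng.standard_normal(4)
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j] == 1]

quad_form = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(np.isclose(quad_form, edge_sum))  # True: x^T L x equals the sum of (x_i - x_j)^2 over edges
```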

Eigenvalues of the Laplacian. By definition, the eigenvalues of the Laplacian are related to the eigenvalues of AG in a simple way. To see this, note that any eigenvector u of AG with AGu = λu is also an eigenvector of dI − AG – in fact (dI − AG)u = (d − λ)u. Thus if the eigenvalues of AG are λ1, λ2, . . . , λn, then the eigenvalues of LG are d − λ1, d − λ2, . . . , d − λn.
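The shift of the spectrum can likewise be confirmed in a few lines (again an illustration, with K4 as an arbitrary example):

```python
import numpy as np

# Eigenvalues of L_G = dI - A_G are {d - lambda : lambda an eigenvalue of A_G}.
A = np.ones((4, 4)) - np.eye(4)   # K4, so d = 3
d = 3
lam_A = np.linalg.eigvalsh(A)
lam_L = np.linalg.eigvalsh(d * np.eye(4) - A)
print(np.allclose(np.sort(lam_L), np.sort(d - lam_A)))  # True
```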

The eigenvalues of AG lie in the range [−d, d]. Thus the eigenvalues of LG lie in [0, 2d]. Theorem 1 is equivalent to saying that the multiplicity of the eigenvalue 0 of LG is the number of connected components of G. Also, for a connected graph, the eigenvector of LG corresponding to the eigenvalue 0 is 1n.

Let us write down what the second smallest eigenvalue of LG is, for a connected graph G. From the characterization of eigenvalues we saw last class,

$$\lambda_2(L_G) = \min_{x \perp \mathbf{1}_n} \frac{x^T L_G x}{x^T x}.$$

The condition x ⊥ 1n is exactly the same as $\sum_i x_i = 0$, and using Eq. (1), we have

$$\lambda_2(L_G) = \min_{\sum_i x_i = 0} \frac{\sum_{ij \in E} (x_i - x_j)^2}{\sum_i x_i^2}.$$

Note that this is precisely the relaxation for minS σ(S) (without the (1/2n) factor)! Thus, we can relate the sparsest cut value to λ2(LG) as

$$\min_S \sigma(S) \;\ge\; \frac{1}{2n}\,\lambda_2(L_G).$$

Using the relation between minS σ(S) and min|S|≤n/2 φ(S) (which we denoted by Φ(G) in the previous lecture), we have

$$\Phi(G) \;\ge\; \frac{1}{4d}\,\lambda_2(L_G).$$

This means that by finding the value λ2(LG) (which is a quantity we can efficiently compute) we obtain a lower bound for the minimum expansion of a cut in G. In particular, if λ2(LG) happens to be large, it means that every cut in G has many edges going across! (which, as we will see, is an important property in graphs). However, this inequality does not rule out the possibility that λ2(LG) is always tiny. We would ideally like to say that λ2(LG)/d is not much smaller than Φ(G).
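To make the lower bound concrete, the following Python sketch (illustrative, not from the notes) computes λ2(LG) for the 6-cycle and compares it with a brute-force computation of Φ(G); the example graph and the helper name are arbitrary choices.

```python
import itertools
import numpy as np

def phi_bruteforce(A, d):
    """Minimum expansion over all sets S with |S| <= n/2 (exponential; small graphs only)."""
    n = A.shape[0]
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            Sbar = [v for v in range(n) if v not in S]
            cut = A[np.ix_(list(S), Sbar)].sum()
            best = min(best, cut / (d * len(S)))
    return best

# Example: the 6-cycle (2-regular). Here Phi(G) = 1/3 and lambda_2(L_G) = 1.
n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = d * np.eye(n) - A

lam2 = np.sort(np.linalg.eigvalsh(L))[1]
print(phi_bruteforce(A, d), lam2 / (4 * d))  # Phi(G) is indeed at least lambda_2(L_G)/(4d)
```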
This is precisely what the so-called Cheeger’s inequality talks about. In the next lecture we will show the following:
Theorem 2. Let G be a d-regular graph. Then we have

$$\Phi(G) \;\le\; \sqrt{\frac{2\,\lambda_2(L_G)}{d}}.$$

Note that for a graph that is not connected, λ2(LG) is zero. In fact, if V1 is a connected component of size ≤ n/2, we have φ(V1) = 0, so the inequality above indeed holds in this case. We can also view Cheeger’s inequality in general as a robust form of the above statement. (Robust in the following sense: we know it holds when λ2 = 0; does it hold when λ2 is nearly zero?)
Non-regular graphs
For graphs that are not regular, the right matrix to look at is ÃG := D^{-1/2} AG D^{-1/2}. (Here D^{-1/2} is simply the diagonal matrix whose (i, i)th entry is (deg(i))^{-1/2}; we assume there are no isolated vertices, so none of the degrees is zero.) This matrix is sometimes called the normalized adjacency matrix of a graph. Note that it is also symmetric. Now consider the vector u whose ith entry is deg(i)^{1/2}. Clearly, we have

$$D^{-1/2} u = \mathbf{1}_n \;\Longrightarrow\; \tilde{A}_G\, u = D^{-1/2} A_G D^{-1/2} u = D^{-1/2} A_G \mathbf{1}_n = u.$$

The last equality is because AG1n is a vector whose ith entry is deg(i). Thus u is an eigenvector of ÃG with eigenvalue 1. It turns out that λmax(ÃG) = 1. This is not entirely trivial. From the characterization of λmax, we have

$$\lambda_{\max}(\tilde{A}_G) = \max_x \frac{x^T \tilde{A}_G x}{x^T x} = \max_x \frac{\sum_{ij \in E} \frac{2 x_i x_j}{\sqrt{\deg(i)\deg(j)}}}{\sum_i x_i^2}.$$

(The factor 2 is because the sum includes ij and ji.) Now, since 2ab ≤ (a² + b²) for any real numbers a, b, we have

$$\sum_{ij \in E} \frac{2 x_i x_j}{\sqrt{\deg(i)\deg(j)}} \;\le\; \sum_{ij \in E} \left( \frac{x_i^2}{\deg(i)} + \frac{x_j^2}{\deg(j)} \right) \;=\; \sum_i x_i^2.$$

The last equality holds because each term $\frac{x_i^2}{\deg(i)}$ appears precisely deg(i) times. Thus λmax(ÃG) ≤ 1.
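As an illustration (not from the notes), the following sketch forms the normalized adjacency matrix of a small non-regular graph, a path on 4 vertices, and checks both facts: the vector with entries deg(i)^{1/2} is an eigenvector with eigenvalue 1, and 1 is the largest eigenvalue.

```python
import numpy as np

# Path on 4 vertices (degrees 1, 2, 2, 1): a non-regular example.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_norm = D_inv_sqrt @ A @ D_inv_sqrt   # the normalized adjacency matrix

u = np.sqrt(deg)
print(np.allclose(A_norm @ u, u))                          # u is an eigenvector with eigenvalue 1
print(np.isclose(np.linalg.eigvalsh(A_norm).max(), 1.0))   # and 1 is the largest eigenvalue
```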

An analog of Theorem 1 can be proved for the matrix ÃG. For general graphs, we can define the Normalized Laplacian as L̃G := I − ÃG. Its eigenvalues are precisely 1 − λ for the eigenvalues λ of ÃG, and they lie in [0, 2]. It turns out that Cheeger’s inequality also holds in terms of the second smallest eigenvalue of L̃G (without the factor d in the denominator).
