
Sum of squares of degrees in a graph


Bernardo M. Ábrego¹, Silvia Fernández-Merchant¹,
Michael G. Neubauer, William Watkins

Department of Mathematics
California State University, Northridge
18111 Nordhoff St, Northridge, CA 91330-8313, USA

email: {bernardo.abrego, silvia.fernandez, michael.neubauer, bill.watkins}@csun.edu

February 12, 2008, final version

Abstract

Let G(v, e) be the set of all simple graphs with v vertices and e edges and let P2(G) = Σ d_i² denote the sum of the squares of the degrees, d_1, . . . , d_v, of the vertices of G.

It is known that the maximum value of P2(G) for G ∈ G(v, e) occurs at one or both of two special graphs in G(v, e)—the quasi-star graph or the quasi-complete graph. For each pair (v, e), we determine which of these two graphs has the larger value of P2(G). We also determine all pairs (v, e) for which the values of P2(G) are the same for the quasi-star and the quasi-complete graph. In addition to the quasi-star and quasi-complete graphs, we find all other graphs in G(v, e) for which the maximum value of P2(G) is attained. Density questions posed by previous authors are examined.

AMS Subject Classification: 05C07, 05C35

Key words: graph, degree sequence, threshold graph, Pell’s Equation, partition, density

1 Partially supported by CIMAT.

Current address: Centro de Investigación en Matemáticas, A.C., Jalisco S/N, Colonia Valenciana, 36240, Guanajuato, GTO., México.



1 Introduction

Let G(v, e) be the set of all simple graphs with v vertices and e edges, and let P2(G) = Σ d_i² denote the sum of the squares of the degrees, d_1, . . . , d_v, of the vertices of G. The purpose of this paper is to finish the solution of an old problem:

1. What is the maximum value of P2(G), for a graph G in G(v, e)?

2. For which graphs G in G(v, e) is the maximum value of P2(G) attained?

Throughout, we say that a graph G is optimal in G(v, e) if P2(G) is maximum, and we denote this maximum value by max(v, e).

These problems were first investigated by Katz [Ka] in 1971 and by R. Ahlswede and G.O.H. Katona [AK] in 1978. In his review of the paper by Ahlswede and Katona, P. Erdős [Er] commented that

“the solution is more difficult than one would expect.” Ahlswede and Katona were interested in an equivalent form of the problem: they wanted to find the maximum number of pairs of different edges that have a common vertex. In other words, they wanted to maximize the number of edges in the line graph L(G) as G ranges over G(v, e). That these two formulations of the problem are equivalent follows from an examination of the vertex-edge incidence matrix N for a graph G ∈ G(v, e):

trace((N N^T)²) = P2(G) + 2e,

trace((N^T N)²) = trace(A_{L(G)}²) + 4e,

where A_{L(G)} is the adjacency matrix of the line graph of G. Thus P2(G) = trace(A_{L(G)}²) + 2e.

(Note that trace(A_{L(G)}²) is twice the number of edges in the line graph of G.)
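These identities are easy to check numerically. Below is a minimal sketch (the example graph, a path on four vertices, and all variable names are our own choices, not from the paper) that builds the vertex-edge incidence matrix N of a small graph and verifies both trace identities and the resulting formula for P2(G).

    import numpy as np

    # A small numerical check of the trace identities; the example graph (a path on
    # four vertices) and the variable names are our own choices.
    edges = [(0, 1), (1, 2), (2, 3)]
    v, e = 4, len(edges)

    N = np.zeros((v, e), dtype=int)            # vertex-edge incidence matrix
    for col, (i, j) in enumerate(edges):
        N[i, col] = N[j, col] = 1

    degrees = N.sum(axis=1)
    P2 = int((degrees ** 2).sum())

    A_L = N.T @ N - 2 * np.eye(e, dtype=int)   # adjacency matrix of the line graph L(G)

    assert np.trace((N @ N.T) @ (N @ N.T)) == P2 + 2 * e
    assert np.trace((N.T @ N) @ (N.T @ N)) == np.trace(A_L @ A_L) + 4 * e
    assert P2 == np.trace(A_L @ A_L) + 2 * e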

Ahlswede and Katona showed that the maximum value max(v, e) is always attained at one or both of two special graphs in G(v, e).

They called the first of the two special graphs a quasi-complete graph. The quasi-complete graph in G(v, e) has the largest possible complete subgraph K_k. Let k, j be the unique integers such that

e = \binom{k+1}{2} − j, where 1 ≤ j ≤ k.

The quasi-complete graph in G(v, e), which is denoted by QC(v, e), is obtained by removing the j edges (k − j + 1, k + 1), (k − j + 2, k + 1), . . . , (k, k + 1) from the complete graph on the k + 1 vertices 1, 2, . . . , k + 1, and adding v − k − 1 isolated vertices.

The other special graph in G(v, e) is the quasi-star, which we denote by QS(v, e). This graph has as many dominant vertices as possible. (A dominant vertex is one with maximum degree v − 1.) Perhaps the easiest way to describe QS(v, e) is to say that it is the graph complement of QC(v, e′), where e′ = \binom{v}{2} − e.
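The two constructions translate directly into a short computation. The following sketch (the function names and the use of Python's math.comb are our own; it assumes 0 ≤ e ≤ \binom{v}{2}) builds the edge set of QC(v, e) exactly as described above and obtains QS(v, e) as the complement of QC(v, e′).

    from math import comb
    from itertools import combinations

    def quasi_complete_edges(v, e):
        # Edge set of QC(v, e): write e = binom(k+1, 2) - j with 1 <= j <= k, take the
        # complete graph on vertices 1, ..., k+1 and delete the j edges
        # (k-j+1, k+1), ..., (k, k+1); the remaining vertices of {1, ..., v} are isolated.
        k = 1
        while comb(k + 1, 2) <= e:
            k += 1
        j = comb(k + 1, 2) - e
        edges = set(combinations(range(1, k + 2), 2))
        edges -= {(i, k + 1) for i in range(k - j + 1, k + 1)}
        return edges

    def quasi_star_edges(v, e):
        # QS(v, e) is the complement of QC(v, e') with e' = binom(v, 2) - e.
        all_edges = set(combinations(range(1, v + 1), 2))
        return all_edges - quasi_complete_edges(v, comb(v, 2) - e)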


Define the function C(v, e) to be the sum of the squares of the degree sequence of the quasi-complete graph in G(v, e), and define S(v, e) to be the sum of the squares of the degree sequence of the quasi-star graph in G(v, e). The value of C(v, e) can be computed as follows:

Let e = \binom{k+1}{2} − j, with 1 ≤ j ≤ k. The degree sequence of the quasi-complete graph in G(v, e) is

d_1 = · · · = d_{k−j} = k,   d_{k−j+1} = · · · = d_k = k − 1,   d_{k+1} = k − j,   d_{k+2} = · · · = d_v = 0.

Hence

C(v, e) = j(k − 1)² + (k − j)k² + (k − j)².   (1)

Since QS(v, e) is the complement of QC(v, e′), it is straightforward to show that

S(v, e) = C(v, e′) + (v − 1)(4e − v(v − 1)),   (2)

from which it follows that, for fixed v, the function S(v, e) − C(v, e) is point-symmetric about the middle of the interval 0 ≤ e ≤ \binom{v}{2}. In other words,

S(v, e) − C(v, e) = −(S(v, e′) − C(v, e′)).
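Equations (1) and (2) can be evaluated directly. A minimal sketch, assuming only 0 ≤ e ≤ \binom{v}{2} (the helper loop that recovers k and j is our own convention):

    from math import comb

    def C(v, e):
        # Equation (1): C(v, e) = j(k-1)^2 + (k-j)k^2 + (k-j)^2, with e = binom(k+1, 2) - j, 1 <= j <= k
        k = 1
        while comb(k + 1, 2) <= e:
            k += 1
        j = comb(k + 1, 2) - e
        return j * (k - 1) ** 2 + (k - j) * k ** 2 + (k - j) ** 2

    def S(v, e):
        # Equation (2): S(v, e) = C(v, e') + (v - 1)(4e - v(v - 1)), where e' = binom(v, 2) - e
        return C(v, comb(v, 2) - e) + (v - 1) * (4 * e - v * (v - 1))

A quick check gives C(10, 30) = 426 and S(10, 30) = 420, consistent with the optimal graphs of G(10, 30) discussed in Section 2.2.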

It also follows from Equation (2) that QC(v, e) is optimal in G(v, e) if and only if QS(v, e′) is optimal in G(v, e′). This allows us to restrict our attention to values of e in the interval [0, \binom{v}{2}/2] or, equivalently, the interval [\binom{v}{2}/2, \binom{v}{2}]. On occasion we will do so, but we will always state results for all values of e.

As the midpoint of the range of values for e plays a recurring role in what follows, we denote it by

m = m(v) = (1/2) \binom{v}{2}

and define k_0 = k_0(v) to be the integer such that

\binom{k_0}{2} ≤ m < \binom{k_0 + 1}{2}.   (3)

To state the results of [AK] we need one more notion, that of the distance from \binom{k_0}{2} to m. Write

b_0 = b_0(v) = m − \binom{k_0}{2}.

We are now ready to summarize the results of [AK]:

Theorem 2 [AK] max(v, e) is the larger of the two values C(v, e) and S(v, e).

Theorem 3 [AK] max(v, e) = S(v, e) if 0 ≤ e < m − v/2, and max(v, e) = C(v, e) if m + v/2 < e ≤ \binom{v}{2}.

Lemma 8 [AK] If 2b_0 ≥ k_0, or 2v − 2k_0 − 1 ≤ 2b_0 < k_0, then

C(v, e) ≤ S(v, e) for all 0 ≤ e ≤ m, and C(v, e) ≥ S(v, e) for all m ≤ e ≤ \binom{v}{2}.


If 2b_0 < k_0 and 2k_0 + 2b_0 < 2v − 1, then there exists an R with b_0 ≤ R ≤ min{v/2, k_0 − b_0} such that

C(v, e) ≤ S(v, e) for all 0 ≤ e ≤ m − R,
C(v, e) ≥ S(v, e) for all m − R ≤ e ≤ m,
C(v, e) ≤ S(v, e) for all m ≤ e ≤ m + R,
C(v, e) ≥ S(v, e) for all m + R ≤ e ≤ \binom{v}{2}.

Ahlswede and Katona pose some open questions at the end of [AK]. “Some strange number-theoretic combinatorial questions arise. What is the relative density of the numbers v for which R = 0 [max(v, e) = S(v, e) for all 0 ≤ e < m and max(v, e) = C(v, e) for all m < e ≤ \binom{v}{2}]?”

This is the point of departure for our paper. Our first main result, Theorem 1, strengthens Ahlswede and Katona’s Theorem 2; not only does the maximum value of P2(G) occur at either the quasi-star or quasi-complete graph in G(v, e), but all optimal graphs in G(v, e) are related to the quasi-star or quasi-complete graphs via their so-called diagonal sequence. As a result of their relationship to the quasi-star and quasi-complete graphs, all optimal graphs can be and are described in our second main result, Theorem 2. Our third main result, Theorem 6, is a refinement of Lemma 8 in [AK]. Theorem 6 characterizes the values of v and e for which S(v, e) = C(v, e) and gives an explicit expression for the value R in Lemma 8 of [AK]. Finally, the “strange number-theoretic combinatorial” aspects of the problem, mentioned by Ahlswede and Katona, turn out to be Pell’s Equation y² − 2x² = ±1. Corollary 3 answers the density question posed by Ahlswede and Katona.

Before stating the new results, we summarize work on the problem that followed [AK]. The authors of these papers were often unaware of the work in [AK].

A generalization of the problem of maximizing the sum of the squares of the degree sequence was investigated by Katz [Ka] in 1971 and R. Aharoni [Ah] in 1980. Katz’s problem was to maximize the sum of the elements in A², where A runs over all (0, 1)-square matrices of size n with precisely j ones. He found the maxima and the matrices for which the maxima are attained for the special cases where there are k² ones or where there are n² − k² ones in the (0, 1)-matrix. Aharoni [Ah] extended Katz’s results for general j and showed that the maximum is achieved at one of four possible forms for A.

If A is a symmetric (0, 1)-matrix, with zeros on the diagonal, then A is the adjacency matrix A(G) for a graph G. Now let G be a graph in G(v, e). Then the adjacency matrix A(G) of G is a v × v (0, 1)-matrix with 2e ones. But A(G) satisfies two additional restrictions: A(G) is symmetric, and all diagonal entries are zero. However, the sum of all entries in A(G)² is precisely Σ d_i(G)². Thus our problem is essentially the same as Aharoni’s in that both ask for the maximum of the sum of the elements in A². The graph-theory problem simply restricts the set of (0, 1)-matrices to those with 2e ones that are symmetric and have zeros on the diagonal.

Olpp, apparently unaware of the work of Ahlswede and Katona, reproved the basic result that max(v, e) = max(S(v, e), C(v, e)), but his results are stated in the context of two-colorings of a graph. He investigates a question of Goodman [Go1, Go2]: maximize the number of monochromatic triangles in a two-coloring of the complete graph with a fixed number of vertices and a fixed number of red edges. Olpp shows that Goodman’s problem is equivalent to finding the two-coloring that maximizes the sum of squares of the red-degrees of the vertices. Of course, a two-coloring of the complete graph on v vertices gives rise to two graphs on v vertices: the graph G whose edges are colored red, and its complement G′. So Goodman’s problem is to find the maximum value of P2(G) for G ∈ G(v, e).

Olpp shows that either the quasi-star or the quasi-complete graph is optimal in G(v, e), but he does not discuss which of the two values S(v, e), C(v, e) is larger. He leaves this question unanswered and he does not attempt to identify all optimal graphs in G(v, e).

In 1999, Peled, Pedreschi, and Sterbini [PPS] showed that the only possible graphs for which the maximum value is attained are the so-called threshold graphs. The main result in [PPS] is that all optimal graphs are in one of six classes of threshold graphs. They end with the remark, “Further questions suggested by this work are the existence and uniqueness of the [graphs in G(v, e)] in each class, and the precise optimality conditions.”

Some of the results we prove below follow from results in [AK]. We reprove them here for two reasons: First, the new proofs introduce techniques that are used to prove the extensions of the results of [AK]. Second, they make the paper self-contained. We will point out at the appropriate places when a result is not new.

2 Statements of the main results

2.1 Threshold graphs

All optimal graphs come from a class of special graphs called threshold graphs. The quasi-star and quasi-complete graphs are just two among the many threshold graphs in G(v, e). The adjacency matrix of a threshold graph has a special form. The upper-triangular part of the adjacency matrix of a threshold graph is left justified and the number of zeros in each row of the upper-triangular part of the adjacency matrix does not decrease. We will show adjacency matrices using “+” for the main diagonal, an empty circle “◦” for the zero entries, and a black dot, “•” for the entries equal to one.

For example, the graph G whose adjacency matrix is shown in Figure 1(a) is a threshold graph in G(8, 13) with degree sequence (6, 5, 5, 3, 3, 3, 1, 0).

By looking at the upper-triangular part of the adjacency matrix, we can associate the distinct partition π = (6, 4, 3) of 13 with the graph. In general, the threshold graph Th(π) ∈ G(v, e) corresponding to a distinct partition π = (a_0, a_1, . . . , a_p) of e, all of whose parts are less than v, is the graph with an adjacency matrix whose upper-triangular part is left-justified and contains a_s ones in row s. Thus the threshold graphs in G(v, e) are in one-to-one correspondence with the set Dis(v, e) of distinct partitions of e with all parts less than v:

Dis(v, e) = {π = (a_0, a_1, . . . , a_p) : v > a_0 > a_1 > · · · > a_p > 0, Σ a_s = e}.
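Under this correspondence the degree sequence of Th(π) can be read off directly from π: vertex s is joined to vertices s + 1, . . . , s + a_s. A small sketch of this computation (the function name and the 0-based vertex labels are our own):

    def threshold_degrees(parts, v):
        # Degree sequence of Th(parts) for a distinct partition parts = (a_0, ..., a_p),
        # v > a_0 > ... > a_p > 0: vertex t is joined to vertices t+1, ..., t+a_t.
        deg = [0] * v
        for t, a in enumerate(parts):
            for s in range(t + 1, t + a + 1):
                deg[t] += 1
                deg[s] += 1
        return sorted(deg, reverse=True)

    # threshold_degrees((6, 4, 3), 8) == [6, 5, 5, 3, 3, 3, 1, 0], the example of Figure 1(a)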


We denote the adjacency matrix of the threshold graph Th(π) corresponding to the distinct partition π by Adj(π).

Peled, Pedreschi, and Sterbini [PPS] showed that all optimal graphs in a graph class G(v, e) must be threshold graphs.

Lemma 1 [PPS] If G is an optimal graph in G(v, e), then G is a threshold graph.

Thus we can limit the search for optimal graphs to the threshold graphs.

Actually, a much larger class of functions of the degrees of a graph, including the power functions d_1^p + · · · + d_v^p for p ≥ 2, are maximized only at threshold graphs. In fact, every Schur convex function of the degrees is maximized only at the threshold graphs. The reason is that the degree sequences of threshold graphs are maximal with respect to the majorization order among all graphical sequences.

See [MO] for a discussion of majorization and Schur convex functions and [MR] for a discussion of the degree sequences of threshold graphs.

2.2 The diagonal sequence of a threshold graph

To state the first main theorem, we must now digress to describe the diagonal sequence of a threshold graph in the graph class G(v, e).

Returning to the example in Figure 1(a) corresponding to the distinct partition π = (6, 4, 3) ∈ Dis(8, 13), we superimpose diagonal lines on the adjacency matrix Adj(π) for the threshold graph Th(π) as shown in Figure 1(b).


Figure 1: The adjacency matrix, Adj(π), for the threshold graph in G(8, 13) corresponding to the distinct partition π = (6, 4, 3) ∈ Dis(8, 13) with diagonal sequence δ(π) = (1, 1, 2, 2, 3, 3, 1).

The number of black dots in the upper triangular part of the adjacency matrix on each of the diagonal lines is called the diagonal sequence of the partition π (or of the threshold graph Th(π)).


The diagonal sequence for π is denoted by δ(π) and for π = (6, 4, 3) shown in Figure 1, δ(π) = (1, 1, 2, 2, 3, 3, 1). The value of P2(Th(π)) is determined by the diagonal sequence of π.

Lemma 2 Let π be a distinct partition in Dis(v, e) with diagonal sequence δ(π) = (δ_1, . . . , δ_t). Then P2(Th(π)) is the dot product

P2(Th(π)) = 2 δ(π) · (1, 2, 3, . . . , t) = 2 Σ_{i=1}^{t} i δ_i.

For example, if π = (6, 4, 3) as in Figure 1, then

P2(Th(π)) = 2(1, 1, 2, 2, 3, 3, 1) · (1, 2, 3, 4, 5, 6, 7) = 114,

which equals the sum of squares of the degree sequence (6, 5, 5, 3, 3, 3, 1) of the graph Th(π).
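Lemma 2 is easy to verify by machine for any particular partition. The sketch below (the names are ours) counts the black dots on each diagonal (equivalently, the dots whose 0-indexed coordinates have a common sum i + j, cf. Section 4) and applies the dot-product formula; for π = (6, 4, 3) it recovers δ = (1, 1, 2, 2, 3, 3, 1) and P2 = 114.

    def diagonal_sequence(parts):
        # delta(parts): number of black dots on each diagonal of the upper triangle;
        # with 0-indexed rows and columns, a diagonal collects the positions with equal i + j.
        counts = {}
        for t, a in enumerate(parts):
            for s in range(t + 1, t + a + 1):
                counts[t + s] = counts.get(t + s, 0) + 1
        return [counts.get(d, 0) for d in range(1, max(counts) + 1)]

    def P2_from_diagonal(parts):
        # Lemma 2: P2(Th(parts)) = 2 * delta(parts) . (1, 2, ..., t)
        return 2 * sum(i * d for i, d in enumerate(diagonal_sequence(parts), start=1))

    # diagonal_sequence((6, 4, 3)) == [1, 1, 2, 2, 3, 3, 1]; P2_from_diagonal((6, 4, 3)) == 114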

Theorem 2 in [AK] guarantees that one (or both) of the graphs QS(v, e), QC(v, e) must be optimal in G(v, e). But there may be other optimal graphs in G(v, e), as the next example shows.

The quasi-complete graph QC(10, 30), which corresponds to the distinct partition (8, 7, 5, 4, 3, 2, 1), is optimal in G(10, 30). The threshold graph G_2, corresponding to the distinct partition (9, 6, 5, 4, 3, 2, 1), is also optimal in G(10, 30), but it is neither the quasi-star graph in G(10, 30) nor the quasi-complete graph in G(v, 30) for any v. The adjacency matrices for these two graphs are shown in Figure 2. They have the same diagonal sequence δ = (1, 1, 2, 2, 3, 3, 4, 4, 4, 2, 2, 1, 1) and both are optimal.

Figure 2: Adjacency matrices for two optimal graphs in G(10, 30), QC(10, 30) = Th(8, 7, 5, 4, 3, 2, 1) and Th(9, 6, 5, 4, 3, 2, 1), having the same diagonal sequence δ = (1, 1, 2, 2, 3, 3, 4, 4, 4, 2, 2, 1, 1)

We know that either the quasi-star or the quasi-complete graph in G(v, e) is optimal and that any threshold graph with the same diagonal sequence as an optimal graph is also optimal. In fact, the converse is also true. Indeed, the relationship between the optimal graphs and the quasi-star and quasi-complete graphs in a graph class G(v, e) is described in our first main theorem.

Theorem 1 Let G be an optimal graph in G(v, e). Then G = Th(π) is a threshold graph for some partition π ∈ Dis(v, e) and the diagonal sequence δ(π) is equal to the diagonal sequence of either the quasi-star graph or the quasi-complete graph in G(v, e).


Theorem 1 is stronger than Theorem 2 [AK] because it characterizes all optimal graphs in G(v, e).

In Section 2.3 we describe all optimal graphs in detail.

2.3 Optimal graphs

Every optimal graph in G(v, e) is a threshold graph, Th(π), corresponding to a partition π in Dis(v, e). So we extend the terminology and say that the partition π is optimal in Dis(v, e), if its threshold graph Th(π) is optimal in G(v, e). We say that the partition π ∈ Dis(v, e) is the quasi-star partition, if Th(π) is the quasi-star graph in G(v, e). Similarly, π ∈ Dis(v, e) is the quasi-complete partition, if Th(π) is the quasi-complete graph in G(v, e).

We now describe the quasi-star and quasi-complete partitions in Dis(v, e).

First, the quasi-complete graphs. Let v be a positive integer and e an integer such that 0 ≤ e ≤ \binom{v}{2}. There exist unique integers k and j such that

e = \binom{k+1}{2} − j and 1 ≤ j ≤ k.

The partition

π(v, e, qc) := (k, k − 1, . . . , j + 1, j − 1, . . . , 1) = (k, k − 1, . . . , ĵ, . . . , 2, 1)

corresponds to the quasi-complete threshold graph QC(v, e) in Dis(v, e). The symbol ĵ means that the part j is missing.

To describe the quasi-star partition π(v, e, qs) in Dis(v, e), let k′, j′ be the unique integers such that

e = \binom{v}{2} − \binom{k′+1}{2} + j′ and 1 ≤ j′ ≤ k′.

Then the partition

π(v, e, qs) = (v − 1, v − 2, . . . , k′ + 1, j′)

corresponds to the quasi-star graph QS(v, e) in Dis(v, e).
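Both partitions are easy to generate mechanically. In the sketch below (our own helper names; e ≥ 1 is assumed for the quasi-star partition), the quasi-star case simply applies the same (k, j)-decomposition to e′ = \binom{v}{2} − e.

    from math import comb

    def kj(e):
        # the unique k, j with e = binom(k+1, 2) - j and 1 <= j <= k
        k = 1
        while comb(k + 1, 2) <= e:
            k += 1
        return k, comb(k + 1, 2) - e

    def quasi_complete_partition(v, e):
        # pi(v, e, qc) = (k, k-1, ..., 2, 1) with the part j removed
        k, j = kj(e)
        return tuple(a for a in range(k, 0, -1) if a != j)

    def quasi_star_partition(v, e):
        # pi(v, e, qs) = (v-1, v-2, ..., k'+1, j'), where k', j' come from the
        # decomposition of e' = binom(v, 2) - e (assumes e >= 1)
        kp, jp = kj(comb(v, 2) - e)
        return tuple(range(v - 1, kp, -1)) + (jp,)

    # quasi_complete_partition(14, 28) == (7, 6, 5, 4, 3, 2, 1), the partition pi(14, 28, qc) below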

In general, there may be many partitions with the same diagonal sequence as π(v, e, qc) or π(v, e, qs).

For example, if (v, e) = (14, 28), then π(14, 28, qc) = (7, 6, 5, 4, 3, 2, 1) and all of the partitions in Figure 3 have the same diagonal sequence, δ = (1, 1, 2, 2, 3, 3, 4, 3, 3, 2, 2, 1, 1). But none of the threshold graphs corresponding to the partitions in Figure 3 is optimal. Indeed, if the quasi-complete graph is optimal in Dis(v, e), then there are at most three partitions in Dis(v, e) with the same diagonal sequence as the quasi-complete graph. The same is true for the quasi-star partition.

If the quasi-star partition is optimal in Dis(v, e), then there are at most three partitions in Dis(v, e) having the same diagonal sequence as the quasi-star partition. As a consequence, there are at most six optimal partitions in Dis(v, e) and so at most six optimal graphs in G(v, e). Our second main result, Theorem 2, entails Theorem 1; it describes the optimal partitions in G(v, e) in detail.


Figure 3: Four partitions with the same diagonal sequence as π(14, 28, qc)

Theorem 2 Let v be a positive integer and e an integer such that 0 ≤ e ≤ \binom{v}{2}. Let k, k′, j, j′ be the unique integers satisfying

e = \binom{k+1}{2} − j, with 1 ≤ j ≤ k, and

e = \binom{v}{2} − \binom{k′+1}{2} + j′, with 1 ≤ j′ ≤ k′.

Then every optimal partition π in Dis(v, e) is one of the following six partitions:

1.1 π1.1 = (v − 1, v − 2, . . . , k′ + 1, j′), the quasi-star partition for e,

1.2 π1.2 = (v − 1, v − 2, . . . , \widehat{2k′ − j′ − 1}, . . . , k′ − 1), if k′ + 1 ≤ 2k′ − j′ − 1 ≤ v − 1,

1.3 π1.3 = (v − 1, v − 2, . . . , k′ + 1, 2, 1), if j′ = 3 and v ≥ 4,

2.1 π2.1 = (k, k − 1, . . . , ĵ, . . . , 2, 1), the quasi-complete partition for e,

2.2 π2.2 = (2k − j − 1, k − 2, k − 3, . . . , 2, 1), if k + 1 ≤ 2k − j − 1 ≤ v − 1,

2.3 π2.3 = (k, k − 1, . . . , 3), if j = 3 and v ≥ 4.

On the other hand, partitions π1.1 and π2.1 always exist and at least one of them is optimal. Furthermore, π1.2 and π1.3 (if they exist) have the same diagonal sequence as π1.1, and if S(v, e) ≥ C(v, e), then they are all optimal. Similarly, π2.2 and π2.3 (if they exist) have the same diagonal sequence as π2.1, and if S(v, e) ≤ C(v, e), then they are all optimal.

A few words of explanation are in order regarding the notation for the optimal partitions in Theorem 2. If k′ = v, then j′ = v, e = 0, and π1.1 = ∅. If k′ = v − 1, then e = j′ ≤ v − 1, and π1.1 = (j′); further, if j′ = 3, then π1.3 = (2, 1). In all other cases k′ ≤ v − 2, and then π1.1, π1.2, and π1.3 are properly defined.

If j′ = k′ or j′ = k′ − 1, then the partitions in 1.1 and 1.2 would coincide, both being equal to (v − 1, v − 2, . . . , k′) in the first case and to (v − 1, v − 2, . . . , k′ + 1, k′ − 1) in the second. So the condition k′ + 1 ≤ 2k′ − j′ − 1 merely ensures that π1.1 ≠ π1.2. A similar remark holds for the partitions in 2.1 and 2.2. By definition the partitions π1.1 and π1.3 are always distinct; the same holds for partitions π2.1 and π2.3. In general the partitions πi.j described in items 1.1-1.3 and 2.1-2.3 (and their corresponding threshold graphs) are all different. All the exceptions are illustrated in Figure 4 and are as follows: For any v, if e ∈ {0, 1, 2} or e′ ∈ {0, 1, 2}, then π1.1 = π2.1. For any v ≥ 4, if e = 3 or e′ = 3, then π1.3 = π2.1 and π1.1 = π2.3. If (v, e) = (5, 5), then π1.1 = π2.2 and π1.2 = π2.1. Finally, if (v, e) = (6, 7) or (7, 12), then π1.2 = π2.3. Similarly, if (v, e) = (6, 8) or (7, 9), then π1.3 = π2.2. For v ≥ 8 and 4 ≤ e ≤ \binom{v}{2} − 4, all the partitions πi.j are pairwise distinct (when they exist).

Figure 4: Instances of pairs (v, e) where two partitions πi.j coincide
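For small parameters, the statements above can be confirmed by exhaustive search. The sketch below (entirely our own code; it recomputes degree sequences directly) enumerates Dis(v, e) and returns max(v, e) together with every partition attaining it; for (v, e) = (10, 30) it returns 426, attained exactly by the two partitions of Figure 2.

    from itertools import combinations

    def sum_of_squared_degrees(parts, v):
        # P2 of the threshold graph Th(parts): vertex t is joined to t+1, ..., t+a_t
        deg = [0] * v
        for t, a in enumerate(parts):
            for s in range(t + 1, t + a + 1):
                deg[t] += 1
                deg[s] += 1
        return sum(d * d for d in deg)

    def optimal_partitions(v, e):
        # Enumerate Dis(v, e) (decreasing tuples of distinct parts < v summing to e)
        # and collect every partition attaining the maximum of P2.
        best, argmax = -1, []
        for r in range(1, v):
            for parts in combinations(range(v - 1, 0, -1), r):
                if sum(parts) == e:
                    p2 = sum_of_squared_degrees(parts, v)
                    if p2 > best:
                        best, argmax = p2, [parts]
                    elif p2 == best:
                        argmax.append(parts)
        return best, argmax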

In the next section, we determine the pairs (v, e) having a prescribed number of optimal partitions (and hence graphs) in G(v, e).

2.4 Pairs (v, e) with a prescribed number of optimal partitions.

In principle, a given pair (v, e) could have between one and six optimal partitions. It is easy to see that there are infinitely many pairs (v, e) with only one optimal partition (either the quasi-star or the quasi-complete). For example, the pair (v, \binom{v}{2}) only has the quasi-complete partition. Similarly, there are infinitely many pairs with exactly two optimal partitions and this can be achieved in many different ways. For instance, if (v, e) = (v, 2v − 5) and v ≥ 9, then k′ = v − 2, j′ = v − 4 > 3, and S(v, e) > C(v, e) (cf. Corollary 2). Thus only the partitions π1.1 and π1.2 are optimal. The interesting question is the existence of pairs with 3, 4, 5, or 6 optimal partitions.

Often, both partitions π1.2 and π1.3 in Theorem 2 exist for the same pair (v, e); however it turns out that this almost never happens when they are optimal partitions. More precisely,

Theorem 3 If π1.2 and π1.3 are both optimal partitions, then (v, e) = (7, 9) or (9, 18). Similarly, if π2.2 and π2.3 are both optimal partitions, then (v, e) = (7, 12) or (9, 18). Furthermore, the pair (9, 18) is the only one with six optimal partitions, and there are no pairs with five. If there are more than two optimal partitions for a pair (v, e), then S(v, e) = C(v, e); that is, both the quasi-complete and the quasi-star partitions must be optimal.

In the next two results, we describe two infinite families of partitions in Dis(v, e), and hence graph classes G(v, e), for which there are exactly three (respectively, four) optimal partitions. The fact that these families are infinite is proved in Section 9.

Theorem 4 Let v > 5 and k be positive integers that satisfy Pell’s Equation

(2v − 3)² − 2(2k − 1)² = −1   (4)

and let e = \binom{k}{2}. Then (using the notation of Theorem 2) j = k, k′ = k + 1, j′ = 2k − v + 2, and there are exactly three optimal partitions in Dis(v, e), namely

π1.1 = (v − 1, v − 2, . . . , k + 2, 2k − v + 2),
π1.2 = (v − 2, v − 3, . . . , k),
π2.1 = (k − 1, k − 2, . . . , 2, 1).

The partitions π1.3, π2.2, and π2.3 do not exist.

Theorem 5 Let v > 9 and k be positive integers that satisfy Pell’s Equation

(2v − 1)² − 2(2k + 1)² = −49   (5)

and let e = m = (1/2)\binom{v}{2}. Then (using the notation of Theorem 2) j = j′ = 3, k = k′, and there are exactly four optimal partitions in Dis(v, e), namely

π1.1 = (v − 1, v − 2, . . . , k + 1, 3),
π1.3 = (v − 1, v − 2, . . . , k + 1, 2, 1),
π2.1 = (k, k − 1, . . . , 4, 2, 1),
π2.3 = (k, k − 1, . . . , 4, 3).

The partitions π1.2 and π2.2 do not exist.
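The hypotheses of Theorems 4 and 5 are easy to search for directly. A small sketch (ours) that lists the admissible pairs (v, k) below a given bound; its first outputs are (v, k) = (22, 15) for Equation (4) and (v, k) = (12, 8) for Equation (5).

    def pell_pairs(limit):
        # Pairs (v, k) with (2v-3)^2 - 2(2k-1)^2 = -1 and v > 5 (Theorem 4), and pairs
        # with (2v-1)^2 - 2(2k+1)^2 = -49 and v > 9 (Theorem 5), found by direct search.
        thm4 = [(v, k) for v in range(6, limit) for k in range(1, v)
                if (2 * v - 3) ** 2 - 2 * (2 * k - 1) ** 2 == -1]
        thm5 = [(v, k) for v in range(10, limit) for k in range(1, v)
                if (2 * v - 1) ** 2 - 2 * (2 * k + 1) ** 2 == -49]
        return thm4, thm5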

2.5 Quasi-star versus quasi-complete

In this section, we compare S(v, e) and C(v, e). The main result of the section, Theorem 6, is a theorem very much like Lemma 8 of [AK], with the addition that our results give conditions for equality of the two functions.

If e = 0, 1, 2, 3, then S(v, e) = C(v, e) for all v. Of course, if e = 0, e = 1 and v ≥ 2, or e ≤ 3 and v = 3, there is only one graph in the graph class G(v, e). If e = 2 and v ≥ 4, then there are two graphs in the graph class G(v, 2): the path P and the partial matching M, with degree sequences (2, 1, 1) and (1, 1, 1, 1), respectively. The path is optimal as P2(P) = 6 and P2(M) = 4. But the path is both the quasi-star and the quasi-complete graph in G(v, 2). If e = 3 and v ≥ 4, then the quasi-star graph has degree sequence (3, 1, 1, 1) and the quasi-complete graph is a triangle with degree sequence (2, 2, 2). Since P2(G) = 12 for both of these graphs, both are optimal. Similarly, S(v, e) = C(v, e) for e = \binom{v}{2} − j, for j = 0, 1, 2, 3.


Now, we consider the cases where 4 ≤ e ≤ \binom{v}{2} − 4. Figures 5, 6, 7, and 8 show the values of the difference S(v, e) − C(v, e). When the graph is above the horizontal axis, S(v, e) is strictly larger than C(v, e), and so the quasi-star graph is optimal and the quasi-complete graph is not optimal.

And when the graph is on the horizontal axis, S(v, e) = C(v, e) and both the quasi-star and the quasi-complete graph are optimal. Since the function S(v, e) − C(v, e) is centrally symmetric, we shall consider only the values of e from 4 to the midpoint, m, of the interval [0, \binom{v}{2}].

Figure 5 shows that S(25, e) > C(25, e) for all values of e: 4 ≤ e < m = 150. So, when v = 25, the quasi-star graph is optimal for 0 ≤ e < m = 150 and the quasi-complete graph is not optimal. For e = m(25) = 150, the quasi-star and the quasi-complete graphs are both optimal.


Figure 5: S(25, e) − C(25, e) > 0 for 4 ≤ e < m = 150

Figure 6 shows that S(15, e) > C(15, e) for 4 ≤ e < 45 and 45 < e ≤ m = 52.5. But S(15, 45) = C(15, 45). So the quasi-star graph is optimal and the quasi-complete graph is not optimal for all 0 ≤ e ≤ 52 except for e = 45. Both the quasi-star and the quasi-complete graphs are optimal in G(15, 45).


Figure 6: S(15, e) − C(15, e) > 0 for 4 ≤ e < 45 and for 45 < e ≤ m = 52.5

Figure 7 shows that S(17, e) > C(17, e) for 4 ≤ e ≤ 63, S(17, 64) = C(17, 64), S(17, e) < C(17, e) for 65 ≤ e < m = 68, and S(17, 68) = C(17, 68).


Figure 7: S(17, e) − C(17, e) > 0 for 4 ≤ e ≤ 63 and 65 ≤ e < m = 68

Finally, Figure 8 shows that S(23, e) > C(23, e) for 4 ≤ e ≤ 119, but S(23, e) = C(23, e) for 120 ≤ e ≤ m = 126.5.


Figure 8: S(23, e) − C(23, e) > 0 for 4 ≤ e ≤ 119, S(23, e) = C(23, e) for 120 ≤ e < m = 126.5

These four examples exhibit the types of behavior of the function S(v, e) − C(v, e), for fixed v. The main thing that determines this behavior is the quadratic function

q_0(v) := (1/4) (1 − 2(2k_0 − 3)² + (2v − 5)²).

(The integer k_0 = k_0(v) depends on v.) For example, if q_0(v) > 0, then S(v, e) − C(v, e) ≥ 0 for all values of e < m. To describe the behavior of S(v, e) − C(v, e) for q_0(v) < 0, we need to define

R_0 = R_0(v) = 8(m − e_0)(k_0 − 2) / (−1 − 2(2k_0 − 4)² + (2v − 5)²),

where

e_0 = e_0(v) = \binom{k_0}{2} = m − b_0.


Our third main theorem is the following:

Theorem 6 Let v be a positive integer.

1. If q_0(v) > 0, then

S(v, e) ≥ C(v, e) for all 0 ≤ e ≤ m, and S(v, e) ≤ C(v, e) for all m ≤ e ≤ \binom{v}{2}.

S(v, e) = C(v, e) if and only if e, e′ ∈ {0, 1, 2, 3, m}, or e or e′ equals e_0 and (2v − 3)² − 2(2k′ − 3)² = −1, 7.

2. If q_0(v) < 0, then

C(v, e) ≤ S(v, e) for all 0 ≤ e ≤ m − R_0,
C(v, e) ≥ S(v, e) for all m − R_0 ≤ e ≤ m,
C(v, e) ≤ S(v, e) for all m ≤ e ≤ m + R_0,
C(v, e) ≥ S(v, e) for all m + R_0 ≤ e ≤ \binom{v}{2}.

S(v, e) = C(v, e) if and only if e, e′ ∈ {0, 1, 2, 3, m − R_0, m}.

3. If q_0(v) = 0, then

S(v, e) ≥ C(v, e) for all 0 ≤ e ≤ m, and S(v, e) ≤ C(v, e) for all m ≤ e ≤ \binom{v}{2}.

S(v, e) = C(v, e) if and only if e, e′ ∈ {0, 1, 2, 3, e_0, . . . , m}.
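The quantities that decide which case of Theorem 6 applies are simple to compute. A sketch (our own helpers, following Equation (3) and the definitions of q_0 and R_0 above):

    from math import comb

    def k0_b0(v):
        # Equation (3): binom(k0, 2) <= m < binom(k0 + 1, 2), with m = binom(v, 2) / 2,
        # and b0 = m - binom(k0, 2).
        m = comb(v, 2) / 2
        k0 = 1
        while comb(k0 + 1, 2) <= m:
            k0 += 1
        return k0, m - comb(k0, 2)

    def q0(v):
        k0, _ = k0_b0(v)
        return (1 - 2 * (2 * k0 - 3) ** 2 + (2 * v - 5) ** 2) / 4

    def R0(v):
        # only used when q0(v) < 0; note that m - e0 = b0
        k0, b0 = k0_b0(v)
        return 8 * b0 * (k0 - 2) / (-1 - 2 * (2 * k0 - 4) ** 2 + (2 * v - 5) ** 2)

    # q0(25) > 0, q0(15) > 0, q0(23) == 0, and q0(17) < 0 with R0(17) == 4,
    # consistent with Figures 5-8 and the discussion above.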

The conditions in Theorem 6 involving the quantity q_0(v) simplify and refine the conditions in [AK] involving k_0 and b_0. The condition 2b_0 ≥ k_0 in Lemma 8 of [AK] can be removed and the result restated in terms of the sign of the quantity 2k_0 + 2b_0 − (2v − 1), which has the same sign as q_0(v). While [AK] considers only the two cases q_0(v) ≤ 0 and q_0(v) > 0, we analyze the case q_0(v) = 0 separately.

It is apparent from Theorem 6 that S(v, e) ≥ C(v, e) for 0 ≤ e ≤ m − αv if α > 0 is large enough. Indeed, Ahlswede and Katona [AK, Theorem 3] show this for α = 1/2, thus establishing an inequality that holds for all values of v regardless of the sign of q_0(v). We improve this result and show that the inequality holds when α = 1 − √2/2 ≈ 0.2929.

Corollary 1 Let α = 1 − √2/2. Then S(v, e) ≥ C(v, e) for all 0 ≤ e ≤ m − αv and S(v, e) ≤ C(v, e) for all m + αv ≤ e ≤ \binom{v}{2}. Furthermore, the constant α cannot be replaced by a smaller value.

Theorem 3 in [AK] can be improved in another way. The inequalities are actually strict.

Corollary 2 S(v, e) > C(v, e) for 4 ≤ e < m − v/2, and S(v, e) < C(v, e) for m + v/2 < e ≤ \binom{v}{2} − 4.


2.6 Asymptotics and density

We now turn to the questions asked in [AK]:

What is the relative density of the positive integers v for which max(v, e) = S(v, e) for 0 ≤ e < m?

Of course, max(v, e) = S(v, e) for 0 ≤ e ≤ m if and only if max(v, e) = C(v, e) for m ≤ e ≤ \binom{v}{2}.

Corollary 3 Let t be a positive integer and let n(t) denote the number of integers v in the interval [1, t] such that

max(v, e) = S(v, e) for all 0 ≤ e ≤ m. Then

lim_{t→∞} n(t)/t = 2 − √2 ≈ 0.5858.
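Corollary 3 can be illustrated numerically. Using C and S as in Equations (1) and (2) (repeated here so the sketch is self-contained), the fraction of v ≤ t for which S(v, e) ≥ C(v, e) for every integer e up to the midpoint approaches 2 − √2 as t grows; the function names are ours.

    from math import comb

    def C(v, e):
        k = 1
        while comb(k + 1, 2) <= e:
            k += 1
        j = comb(k + 1, 2) - e
        return j * (k - 1) ** 2 + (k - j) * k ** 2 + (k - j) ** 2

    def S(v, e):
        return C(v, comb(v, 2) - e) + (v - 1) * (4 * e - v * (v - 1))

    def density(t):
        # Fraction of v in [1, t] with max(v, e) = S(v, e) for all e <= m; by Theorem 2 [AK]
        # this holds exactly when S(v, e) >= C(v, e) for every integer e up to m.
        good = sum(1 for v in range(1, t + 1)
                   if all(S(v, e) >= C(v, e) for e in range(comb(v, 2) // 2 + 1)))
        return good / t

    # as t grows, density(t) tends to 2 - sqrt(2) ≈ 0.5858 (Corollary 3)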

2.7 Piecewise linearity of S(v, e) − C(v, e)

The diagonal sequence for a threshold graph helps explain the behavior of the difference S(v, e) − C(v, e) for fixed v and 0 ≤ e ≤ \binom{v}{2}. From Figures 5, 6, 7, and 8, we see that S(v, e) − C(v, e), regarded as a function of e, is piecewise linear, and the ends of the intervals on which the function is linear occur at e = \binom{j}{2} and e = \binom{v}{2} − \binom{j}{2} for j = 1, 2, . . . , v. We prove this fact in Lemma 10.

For now, we present an example.

Take v = 15, for example. Figure 6 shows linear behavior on the intervals [36, 39], [39, 45], [45, 50], [50, 55], [55, 60], [60, 66], and [66, 69]. There are 14 binomial coefficients \binom{j}{2} for 2 ≤ j ≤ 15:

1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105.

The complements with respect to \binom{15}{2} = 105 are

104, 102, 99, 95, 90, 84, 77, 69, 60, 50, 39, 27, 14, 0.

The union of these two sets of integers coincides with the end points for the intervals on which S(15, e) − C(15, e) is linear. In this case, the function is linear on the 27 intervals with end points:

0, 1, 3, 6, 10, 14, 15, 21, 27, 28, 36, 39, 45, 50, 55, 60, 66, 69, 77, 78, 84, 90, 91, 95, 99, 102, 104, 105.

These special values of e correspond to special types of quasi-star and quasi-complete graphs.

If e = \binom{j}{2}, then the quasi-complete graph QC(v, e) is the sum of a complete graph on j vertices and v − j isolated vertices. For example, if v = 15 and j = 9, and e = \binom{9}{2} = 36, then the upper-triangular part of the adjacency matrix for QC(15, 36) is shown on the left in Figure 9. And if e = \binom{v}{2} − \binom{j}{2}, then the quasi-star graph QS(v, e) has v − j dominant vertices and none of the other j vertices are adjacent to each other. For example, the lower-triangular part of the adjacency matrix for the quasi-star graph with v = 15, j = 12, and e = \binom{15}{2} − \binom{12}{2} = 39, is shown on the right in Figure 9.


Figure 9: Adjacency matrices for quasi-complete and quasi-star graphs with v = 15 and 36 ≤ e ≤ 39.

e = 36: quasi-complete partition π = (8, 7, 6, 5, 4, 3, 2, 1), quasi-star partition π = (14, 13, 9)
e = 37: quasi-complete partition π = (9, 7, 6, 5, 4, 3, 2, 1), quasi-star partition π = (14, 13, 10)
e = 38: quasi-complete partition π = (9, 8, 6, 5, 4, 3, 2, 1), quasi-star partition π = (14, 13, 11)
e = 39: quasi-complete partition π = (9, 8, 7, 5, 4, 3, 2, 1), quasi-star partition π = (14, 13, 12)

As additional dots are added to the adjacency matrices for the quasi-complete graphs with e = 37, 38, 39, the value of C(15, e) increases by 18, 20, 22. And the value of S(15, e) increases by 28, 30, 32. Thus, the difference increases by a constant amount of 10. Indeed, the diagonal lines are a distance of five apart. Hence the graph of S(15, e) − C(15, e) for 36 ≤ e ≤ 39 is linear with a slope of 10. But for e = 40, the adjacency matrix for the quasi-star graph has an additional dot on the diagonal corresponding to 14, whereas the adjacency matrix for the quasi-complete graph has an additional dot on the diagonal corresponding to 24. So S(15, 40) − C(15, 40) decreases by 10. The decrease of 10 continues until the adjacency matrix for the quasi-complete graph contains a complete column at e = 45. Then the next matrix for e = 46 has an additional dot in the first row and next column and the slope changes again.

3 Proof of Lemma 2

Returning for a moment to the threshold graph Th(π) from Figure 1, which corresponds to the distinct partition π = (6, 4, 3), we see the graph complement shown with the white dots. Counting white dots in the rows from bottom to top and from the left to the diagonal, we have 7, 5, 2, 1. These same numbers appear in columns reading from right to left and then top to the diagonal. So if Th(π) is the threshold graph associated with π, then the set-wise complement π^c of π in the set {1, 2, . . . , v − 1} corresponds to the threshold graph Th(π)^c, the complement of Th(π). That is,

Th(π^c) = Th(π)^c.

The diagonal sequence allows us to evaluate the sum of squares of the degree sequence of a threshold graph. Each black dot contributes a certain amount to the sum of squares. The amount depends on the location of the black dot in the adjacency matrix. In fact, all of the dots on a particular diagonal line contribute the same amount to the sum of squares. For v = 8, the value of a black dot in position (i, j) is given by the entry in the following matrix:











+  1  3  5  7  9 11 13
1  +  3  5  7  9 11 13
1  3  +  5  7  9 11 13
1  3  5  +  7  9 11 13
1  3  5  7  +  9 11 13
1  3  5  7  9  + 11 13
1  3  5  7  9 11  + 13
1  3  5  7  9 11 13  +











This follows from the fact that a sum of consecutive odd integers is a square. So to get the sum of squares P2(Th(π)) of the degrees of the threshold graph associated with the distinct partition π, sum the values in the numerical matrix above that occur in the positions with black dots. Of course, an adjacency matrix is symmetric. So if we use only the black dots in the upper triangular part, then we must replace the (i, j)-entry in the upper-triangular part of the matrix above with the sum of the (i, j)- and the (j, i)-entry, which gives the following matrix:

E =

+  2  4  6  8 10 12 14
   +  6  8 10 12 14 16
      + 10 12 14 16 18
         + 14 16 18 20
            + 18 20 22
               + 22 24
                  + 26
                     +        (6)

Thus, P2(Th(π)) = 2(1, 2, 3, . . .) · δ(π). Lemma 2 is proved.

4 Proofs of Theorems 1 and 2

Theorem 1 is an immediate consequence of Theorem 2 (and Lemmas 1 and 2). And Theorem 2 can be proved using the following central lemma:

Lemma 3 Let π = (v − 1, c, c − 1, . . . , ĵ, . . . , 2, 1) be an optimal partition in Dis(v, e), where e − (v − 1) = 1 + 2 + · · · + c − j ≥ 4 and 1 ≤ j ≤ c < v − 2. Then j = c and 2c ≥ v − 1, so that

π = (v − 1, c − 1, c − 2, . . . , 2, 1).

We defer the proof of Lemma 3 until Section 5 and proceed now with the proof of Theorem 2. The proof of Theorem 2 is an induction on v. Let π be an optimal partition in Dis(v, e); then π^c is optimal in Dis(v, e′). One of the partitions π, π^c contains the part v − 1. We may assume without loss of generality that π = (v − 1 : µ), where µ is a partition in Dis(v − 1, e − (v − 1)). The cases where µ is a decreasing partition of 0, 1, 2, and 3 will be considered later. For now we shall assume that e − (v − 1) ≥ 4.


Since π is optimal, it follows that µ is optimal and hence, by the induction hypothesis, µ is one of the following partitions in Dis(v − 1, e − (v − 1)):

1.1a µ1.1 = (v − 2, . . . , k′ + 1, j′), the quasi-star partition for e − (v − 1),
1.2a µ1.2 = (v − 2, . . . , \widehat{2k′ − j′ − 1}, . . . , k′ − 1), if k′ + 1 ≤ 2k′ − j′ − 1 ≤ v − 2,
1.3a µ1.3 = (v − 2, . . . , k′ + 1, 2, 1), if j′ = 3,
2.1a µ2.1 = (k_1, k_1 − 1, . . . , \widehat{j_1}, . . . , 2, 1), the quasi-complete partition for e − (v − 1),
2.2a µ2.2 = (2k_1 − j_1 − 1, k_1 − 2, k_1 − 3, . . . , 2, 1), if k_1 + 1 ≤ 2k_1 − j_1 − 1 ≤ v − 2,
2.3a µ2.3 = (k_1, k_1 − 1, . . . , 3), if j_1 = 3,

where

e − (v − 1) = 1 + 2 + · · · + k_1 − j_1 ≥ 4, with 1 ≤ j_1 ≤ k_1.

In symbols, π = (v − 1, µi.j), for one of the partitions µi.j above. For each partition, µi.j, we will show that (v − 1, µi.j) = πs.t for one of the six partitions, πs.t, in the statement of Theorem 2.

The first three cases are obvious:

(v − 1, µ1.1) = π1.1 (v − 1, µ1.2) = π1.2 (v − 1, µ1.3) = π1.3.

Next suppose that µ = µ2.1, µ2.2, or µ2.3. The partitions µ2.2 and µ2.3 do not exist unless certain conditions on k_1, j_1, and v are met. And whenever those conditions are met, the partition µ2.1 is also optimal. Thus π_1 = (v − 1, µ2.1) is optimal. Also, since e − (v − 1) ≥ 4, then k_1 ≥ 3. There are two cases: k_1 = v − 2 and k_1 ≤ v − 3. If k_1 = v − 2, then µ2.2 does not exist and

(v − 1, µ) = π2.1 if µ = µ2.1, and (v − 1, µ) = π1.1 if µ = µ2.3.

If k_1 ≤ v − 3, then by Lemma 3, π_1 = (v − 1, k_1 − 1, . . . , 2, 1), with j_1 = k_1 and 2k_1 ≥ v − 1. We will show that k = k_1 + 1 and v − 1 = 2k − j − 1. The above inequalities imply that

\binom{k_1+1}{2} = 1 + 2 + · · · + k_1 ≤ e = \binom{k_1+1}{2} − k_1 + (v − 1) < \binom{k_1+1}{2} + (k_1 + 1) = \binom{k_1+2}{2}.

But k is the unique integer satisfying \binom{k}{2} ≤ e < \binom{k+1}{2}. Thus k = k_1 + 1.


It follows that

e = (v − 1) + 1 + 2 + · · · + (k − 2) = \binom{k+1}{2} − j,

and so 2k − j = v.

We now consider the cases 2.1a, 2.2a, and 2.3a individually. Actually, µ2.2 does not exist since k_1 = j_1. If µ = µ2.3, then µ = (3) since k_1 = j_1 = 3. This contradicts the assumption that µ is a partition of an integer greater than 3. Therefore

µ = µ2.1 = (k_1, k_1 − 1, . . . , \widehat{j_1}, . . . , 2, 1) = (k − 2, k − 3, . . . , 2, 1),

since k_1 = j_1 and k = k_1 + 1. Now since 2k − j − 1 = v − 1 we have

π = (2k − j − 1, k − 2, k − 3, . . . , 2, 1), which equals π2.1 if e = \binom{v}{2} or e = \binom{v}{2} − (v − 2), and equals π2.2 otherwise.

Finally, if µ is a decreasing partition of 0, 1, 2, or 3, then either π = (v − 1, 2, 1) = π1.3, or π = (v − 1) = π1.1, or π = (v − 1, j′) = π1.1 for some 1 ≤ j′ ≤ 3.

Now, we prove that π1.2 and π1.3 (if they exist) have the same diagonal sequence as π1.1 (which always exists). This in turn implies (by using the duality argument mentioned in Section 3) that π2.2 and π2.3 also have the same diagonal sequence as π2.1 (which always exists). We use the following observation. If we index the rows and columns of the adjacency matrix Adj(π) starting at zero instead of one, then two positions (i, j) and (r, s) are in the same diagonal if and only if the sums of their coordinates are equal, that is, i + j = r + s. If π1.2 exists, then 2k′ − j′ ≤ v. Applying the previous argument to π1.1 and π1.2, we observe that the top row of the following list shows the positions where there is a black dot in Adj(π1.1) but not in Adj(π1.2), and the bottom row shows the positions where there is a black dot in Adj(π1.2) but not in Adj(π1.1):

(v − k′ − 2, v − 1) . . . (v − k′ − t, v − 1) . . . (v − k′ − (k′ − j′), v − 1)
(v − 1 − k′, v − 2) . . . (v − 1 − k′, v − t) . . . (v − 1 − k′, v − (k′ − j′)).

Each position in the top row is in the same diagonal as the corresponding position in the second row.

Thus the number of positions per diagonal is the same in π1.1 as in π1.2; that is, δ(π1.1) = δ(π1.2).

Similarly, if π1.3 exists, then k′ ≥ j′ = 3. To show that δ(π1.1) = δ(π1.3), note that the only position where there is a black dot in Adj(π1.1) but not in Adj(π1.3) is (v − 1 − k′, v − 1 − k′ + 3), and the only position where there is a black dot in Adj(π1.3) but not in Adj(π1.1) is (v − k′, v − 1 − k′ + 2).

Since these positions are in the same diagonal, δ(π1.1) = δ(π1.3).

Theorem 2 is proved.

5 Proof of Lemma 3

There is a variation of the formula for P2(Th(π)) in Lemma 2 that is useful in the proof of Lemma 3. We have seen that each black dot in the adjacency matrix for a threshold graph contributes a summand, depending on the location of the black dot in the matrix E in (6). For example, if π = (3, 1), then the part of (1/2)E that corresponds to the black dots in the adjacency matrix Adj(π) for π is

Adj((3, 1)) =

+ • • •
  + • ◦
    + ◦
      +

with corresponding (1/2)E values

+ 1 2 3
  + 3
    +
      +

Thus P2(Th(π)) = 2(1 + 2 + 3 + 3) = 18. Now if we index the rows and columns of the adjacency matrix starting with zero instead of one, then the integer appearing in the matrix (1/2)E at entry (i, j) is just i + j. So we can compute P2(Th(π)) by adding the values i + j over all positions (i, j) of black dots in the upper-triangular part of the adjacency matrix of Th(π) and doubling the result. What are the positions of the black dots in the adjacency matrix for the threshold graph corresponding to a partition π = (a_0, a_1, . . . , a_p)? The positions corresponding to a_0 are

(0, 1), (0, 2), . . . , (0, a_0),

and the positions corresponding to a_1 are

(1, 2), (1, 3), . . . , (1, 1 + a_1).

In general, the positions corresponding to a_t in π are

(t, t + 1), (t, t + 2), . . . , (t, t + a_t).

We use these facts in the proof of Lemma 3.
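In other words, P2(Th(π)) is twice the sum of i + j over the black-dot positions, and those positions are generated row by row from the parts of π. A short sketch of this computation (our own naming), which recovers the values 18 and 114 from the examples above:

    def P2_from_positions(parts):
        # parts = (a_0, ..., a_p): the black dots for a_t occupy (t, t+1), ..., (t, t+a_t),
        # and each dot at (i, j) contributes 2(i + j) to the sum of squared degrees.
        return 2 * sum(t + s for t, a in enumerate(parts) for s in range(t + 1, t + a + 1))

    # P2_from_positions((3, 1)) == 18 and P2_from_positions((6, 4, 3)) == 114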

Let µ = (c, c − 1, . . . , ĵ, . . . , 2, 1) be the quasi-complete partition in Dis(v, e − (v − 1)), where 1 ≤ j ≤ c < v − 2 and 1 + 2 + · · · + c − j ≥ 4. We deal with the cases j = 1, j = c, and 2 ≤ j ≤ c − 1 separately. Specifically, we show that if π = (v − 1 : µ) is optimal, then j = c and

π = (v − 1, c − 1, . . . , 2, 1),   (7)

with 2c ≥ v − 1.

Arguments for the cases are given below.


5.1 j = 1 : µ = (c, c − 1, . . . , 3, 2)

Since 2 + 3 + . . . + c ≥ 4 then c ≥ 3. We show that π = (v − 1 : µ) is not optimal. In this case, the adjacency matrix for π has the following form:

[Schematic of Adj(π): row 0 is full, rows 1 through c − 1 have black dots out to column c + 1, and rows c through v − 1 are empty.]

5.1.1 2c ≤ v − 1

Let

π′ = (v − 1, 2c − 1, c − 2, c − 3, . . . , 3, 2).

The parts of π′ are distinct and decreasing since 2c ≤ v − 1. Thus π′ ∈ Dis(v, e).

The adjacency matrices Adj(π) and Adj(π′) each have e black dots, many of which appear in the same positions. But there are differences. Using the fact that c − 1 ≥ 2, the first row of the following list shows the positions in which a black dot appears in Adj(π) but not in Adj(π′), and the second row shows the positions in which a black dot appears in Adj(π′) but not in Adj(π):

(2, c + 1) (3, c + 1) · · · (c − 1, c + 1) (c − 1, c)
(1, c + 2) (1, c + 3) · · · (1, 2c − 1) (1, 2c)

For each of the positions in the list, except the last ones, the sum of the coordinates is the same in the first row as it is in the second row. But the coordinates of the last pair in the first row sum to 2c − 1, whereas the coordinates of the last pair in the second row sum to 2c + 1. It follows that P2(π′) = P2(π) + 4. Thus, π is not optimal.

5.1.2 2c > v − 1

Let π′ = (v − 2, c, c − 1, . . . , 3, 2, 1). Since c < v − 2, the partition π′ is in Dis(v, e). The positions of the black dots in the adjacency matrices Adj(π) and Adj(π′) are the same with only two exceptions. There is a black dot in position (0, v − 1) in π but not in π′, and there is a black dot in position (c, c + 1) in π′ but not in π. Since c + (c + 1) > 0 + (v − 1), π is not optimal.


5.2 j = c : µ = (c − 1, . . . , 2, 1)

Since 1 + 2 + · · · + (c − 1) ≥ 4, then c ≥ 4. We will show that if 2c ≥ v − 1, then π has the same diagonal sequence as the quasi-complete partition. And if 2c < v − 1, then π is not optimal.

The adjacency matrix for π is of the following form:

[Schematic of Adj(π): row 0 is full, rows 1 through c − 1 have black dots out to column c, and rows c through v − 1 are empty.]

5.2.1 2c ≥ v − 1

The quasi-complete partition in G(v, e) is π′ = (c + 1, c, . . . , k̂, . . . , 2, 1), where k = 2c − v + 2. To see this, notice that

1 + 2 + · · · + c + (c + 1) − k = 1 + 2 + · · · + (c − 1) + (v − 1)

for k = 2c − v + 2. Since 2c ≥ v − 1 and c < v − 2, then 1 ≤ k < c and π′ ∈ Dis(v, e).

To see that π and π′ have the same diagonal sequence, we again make a list of the positions in which there is a black dot in Adj(π) but not in Adj(π′) (the top row below), and the positions in which there is a black dot in Adj(π′) but not in Adj(π) (the bottom row below):

(0, c + 2) (0, c + 3) · · · (0, c + t + 1) · · · (0, v − 1)
(1, c + 1) (2, c + 1) · · · (t, c + 1) · · · (v − c − 2, c + 1)

Each position in the top row is in the same diagonal as the corresponding position in the bottom row, that is, 0 + (c + t + 1) = t + (c + 1). Thus the diagonal sequences satisfy δ(π) = δ(π′).

5.2.2 2c < v − 1

In this case, let π′ = (v − 1, 2c − 2, c − 3, . . . , 3, 2). And since 2c − 2 ≤ v − 3, the parts of π′ are distinct and decreasing. That is, π′ ∈ Dis(v, e).


Using the fact that c − 2 ≥ 2, we again list the positions in which there is a black dot in Adj(π) but not in Adj(π′) (the top row below), and the positions in which there is a black dot in Adj(π′) but not in Adj(π):

(2, c) (3, c) · · · (c − 1, c) (c − 2, c − 1)
(1, c + 1) (1, c + 2) · · · (1, 2c − 2) (1, 2c − 1)

All of the positions but the last in the top row are on the same diagonal as the corresponding position in the bottom row: t + c = 1 + (c − 1 + t). But in the last positions we have (c − 2) + (c − 1) = 2c − 3 and 1 + (2c − 1) = 2c. Thus P2(π′) = P2(π) + 6, and so π is not optimal.

5.3 1 < j < c : µ = (c, c − 1, . . . , ĵ, . . . , 2, 1)

We will show that π = (v − 1, c, c − 1, . . . , ĵ, . . . , 2, 1) is not optimal. The adjacency matrix for π has the following form:

[Schematic of Adj(π): row 0 is full, rows 1 through c − j have black dots out to column c + 1, rows c − j + 1 through c − 1 have black dots out to column c, and rows c through v − 1 are empty.]

There are two cases.

5.3.1 2c > v − 1

Let π′ = (v − r, c, c − 1, . . . , \widehat{j + 1 − r}, . . . , 2, 1), where r = min(v − 1 − c, j). Then r > 1 because j > 1 and c < v − 2. We show that π′ ∈ Dis(v, e) and P2(π′) > P2(π).

In order for π′ to be in Dis(v, e), the sum of the parts in π′ must equal the sum of the parts in π:

1 + 2 + · · · + c + (v − r) − (j + 1 − r) = 1 + 2 + · · · + c + (v − 1) − j.

And the parts of π′ must be distinct and decreasing:

v − r > c > j + 1 − r > 1.

References
