
COMBINATORIAL FORMULAS CONNECTED TO DIAGONAL HARMONICS

AND MACDONALD POLYNOMIALS

Meesue Yoo

A Dissertation

in

Mathematics

Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment

of the Requirements for the Degree of Doctor of Philosophy

2009

James Haglund

Supervisor of Dissertation

Tony Pantev


Acknowledgments

My greatest and eternal thanks go to my academic advisor Jim Haglund who showed consistent

encouragement and boundless support. His mathematical instinct was truly inspiring and this

work was only possible with his guidance. My gratitude is everlasting.

I am grateful to Jerry Kazdan and Jason Bandlow for kindly agreeing to serve on my dissertation committee, and Jennifer Morse, Sami Assaf and Antonella Grassi for graciously agreeing to

write recommendation letters for me. I would also like to thank the faculty of the mathematics

department for providing such a stimulating environment to study. I owe many thanks to the

department secretaries, Janet, Monica, Paula and Robin. I was only able to finish the program

with their professional help and cordial caring.

I would like to thank my precious friends Elena and Enka for being on my side and encouraging

me whenever I had hard times. I owe thanks to Daeun for being a friend whom I can share my

emotions with, Tong for being a person whom I can share my little worries with, Tomoko, Sarah

and John for being such comfortable friends whom I shared cups of tea with, and Min, Sieun and

all my other college classmates for cheering me on. My thanks also go to my adorable

friends Soong and Jinhee for keeping unswerving friendship with me.

My most sincere thanks go to my family, Hyun Suk Lee, Young Yoo, Mi Jeong Yoo and Cheon

Gum Yoo, for their unconditional support and constant encouragement. I could not have made it through without them.


ABSTRACT

COMBINATORIAL FORMULAS CONNECTED TO DIAGONAL HARMONICS AND

MACDONALD POLYNOMIALS

Meesue Yoo

James Haglund, Advisor

We study bigraded $S_n$-modules introduced by Garsia and Haiman as an approach to proving the Macdonald positivity conjecture. We construct a combinatorial formula for the Hilbert series of

Garsia-Haiman modules as a sum over standard Young tableaux, and provide a bijection between a

group of fillings and the corresponding standard Young tableau in the hook shape case. This result

extends the known property of Hall-Littlewood polynomials by Garsia and Procesi to Macdonald

polynomials.

We also study the integral form of Macdonald polynomials and construct a combinatorial formula for its Schur coefficients in the one-row case and the hook shape case.


Contents

1 Introduction and Basic Definitions 1

1.1 Basic Combinatorial Objects . . . 4

1.2 Symmetric Functions . . . 8

1.2.1 Bases of Λ . . . 9

1.2.2 Quasisymmetric Functions . . . 19

1.3 Macdonald Polynomials . . . 21

1.3.1 Plethysm . . . 21

1.3.2 Macdonald Polynomials . . . 22

1.3.3 A Combinatorial Formula for Macdonald Polynomials . . . 31

1.4 Hilbert Series of $S_n$-Modules . . . 33

2 Combinatorial Formula for the Hilbert Series 41

2.1 Two Column Shape Case . . . 41

2.1.1 The Case When $\mu' = (n-1,1)$ . . . 44

2.1.2 General Case with Two Columns . . . 46

2.2 Hook Shape Case . . . 53

2.2.1 Proof of Theorem 2.2.1 . . . 55

2.2.2 Proof by Recursion . . . 63


2.2.4 Association with Fillings . . . 74

3 Schur Expansion of $J_\mu$ 94

3.1 Integral Form of the Macdonald Polynomials . . . 94

3.2 Schur Expansion of $J_\mu$ . . . 98

3.2.1 Combinatorial Formula for the Schur Coefficients of $J_{(r)}$ . . . 98

3.2.2 Schur Expansion of $J_{(r,1^s)}$ . . . 100

4 Further Research 109

4.1 Combinatorial formula of the Hilbert series of $M_\mu$ for $\mu$ with three or more columns . . . 109

4.2 Finding the basis for the Hilbert series of $S_n$-modules . . . 110


List of Tables

2.1 Association table for $\mu = (2,1^{n-2})$ . . . 76

2.2 Association table for $\mu = (n-1,1)$ . . . 79

2.3 The grouping table for $\mu = (4,1,1)$ . . . 84


List of Figures

1.1 The Young diagram for $\lambda = (4,4,2,1)$ . . . 5

1.2 The Young diagram for $(4,4,2,1)' = (4,3,2,2)$ . . . 5

1.3 The Young diagram for $(4,4,2,1)/(3,1,1)$ . . . 6

1.4 The arm $a$, leg $l$, coarm $a'$ and coleg $l'$ . . . 6

1.5 An SSYT of shape $(6,5,3,3)$ . . . 7

1.6 An example of the 4-corner case with corner cells $A_1, A_2, A_3, A_4$ and inner corner cells $B_0, B_1, B_2, B_3, B_4$ . . . 37

1.7 Examples of equivalent pairs . . . 38

2.1 (i) The first case (ii) The second case . . . 46

2.2 Diagram for $\mu = (2^b,1^{a-b})$ . . . 48

2.3 Garsia-Procesi tree for a partition $\mu = (2,1,1)$ . . . 80

2.4 Modified Garsia-Procesi tree for a partition $\mu = (2,1,1)$ . . . 81

2.5 Modified Garsia-Procesi tree for $\mu = (3,1,1)$ . . . 88

2.6 SYT corresponding to 1-lined set in grouping table . . . 89

2.7 SYT corresponding to 2-lined set in grouping table . . . 90

2.8 SYT corresponding to $k$-lined set in grouping table . . . 91


Chapter 1

Introduction and Basic Definitions

The theory of symmetric functions arises in various areas of mathematics such as algebraic combinatorics, representation theory, Lie algebras, algebraic geometry, and special function theory. In 1988, Macdonald introduced a unique family of symmetric functions with two parameters, characterized by certain triangularity and orthogonality conditions, which generalizes many well-known classical bases.

Macdonald polynomials have been in the core of intensive research since their introduction due

to their applications in many other areas such as algebraic geometry and commutative algebra.

However, given such an indirect definition of these polynomials satisfying certain conditions, the proof of existence does not give an explicit way of constructing them. Nonetheless, Macdonald conjectured that the integral form $J_\mu[X;q,t]$ of the Macdonald polynomials, obtained by multiplying the Macdonald polynomials by a certain polynomial, can be expanded in terms of the modified Schur functions $s_\lambda[X(1-t)]$ with coefficients in $\mathbb{N}[q,t]$, i.e.,

$$J_\mu[X;q,t] = \sum_{\lambda\vdash|\mu|} K_{\lambda\mu}(q,t)\, s_\lambda[X(1-t)], \qquad K_{\lambda\mu}(q,t)\in\mathbb{N}[q,t],$$


generalizing the corresponding property of the Hall-Littlewood polynomials $P_\lambda(X;t) = P_\lambda(X;0,t)$, for which Lascoux and Schützenberger [LS78] proved combinatorially that

$$K_{\lambda\mu}(0,t) = K_{\lambda\mu}(t) = \sum_T t^{\operatorname{ch}(T)},$$

summed over all SSYT of shape $\lambda$ and weight $\mu$, where $\operatorname{ch}(T)$ is the charge statistic. In 1992, Stembridge [Ste94] found a combinatorial interpretation of $K_{\lambda\mu}(q,t)$ when $\mu$ has a hook shape, and Fishel [Fis95] found statistics for the two-column case which also give a combinatorial formula for the two-row case.

To prove the Macdonald positivity conjecture, Garsia and Haiman [GH93] introduced certain bigraded $S_n$-modules and conjectured that the modified Macdonald polynomials $\tilde{H}_\mu(X;q,t)$ could be realized as the bigraded characters of those modules. This is the well-known $n!$ conjecture. Haiman proved this conjecture in 2001 [Hai01] by showing that it is intimately connected with the Hilbert scheme of $n$ points in the plane and with the variety of commuting matrices, and this result immediately proved the positivity conjecture.

In 2004, Haglund [Hag04] conjectured and Haglund, Haiman and Loehr [HHL05] proved a

combinatorial formula for the monomial expansion of the modified Macdonald polynomials. This

celebrated combinatorial formula brought a breakthrough in Macdonald polynomial theory. Unfortunately, it does not give a combinatorial description of $K_{\lambda\mu}(q,t)$, but it provides a shortcut to proving the positivity conjecture. In 2007, Assaf [Ass07] proved the positivity conjecture purely

combinatorially by showing Schur positivity of dual equivalence graphs and connecting them to

the modified Macdonald polynomials.

In this thesis, we study the bigraded $S_n$-modules introduced by Garsia and Haiman [GH93] in their approach to the Macdonald positivity conjecture and construct a combinatorial formula for

the Hilbert series of the Garsia-Haiman modules in the hook shape case. The monomial expansion

formula of Haglund, Haiman and Loehr for the modified Macdonald polynomials gives a way of


standard Young tableaux (SYT, from now on) of the given hook shape. Noting that there are only $n!/\prod_{c\in\lambda}h(c)$ many SYT of shape $\lambda$, where $h(c) = a(c)+l(c)+1$ (see Definition 1.1.6 for the descriptions of $a(c)$ and $l(c)$), this combinatorial formula gives a way of calculating the Hilbert series much faster and more easily than the monomial expansion formula.
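The SYT count just mentioned can be sketched in a few lines of Python (illustrative only, not part of the thesis; the names `hooks` and `count_syt` are ours). A partition is given as a tuple of row lengths, and the hook of a cell is its arm plus its leg plus one:

```python
from math import factorial

def hooks(shape):
    """Hook lengths h(c) = a(c) + l(c) + 1 for every cell of a partition."""
    # conj[j] = number of rows whose length exceeds column index j
    conj = [sum(1 for part in shape if part > j) for j in range(shape[0])]
    return [(shape[i] - j - 1) + (conj[j] - i - 1) + 1
            for i in range(len(shape)) for j in range(shape[i])]

def count_syt(shape):
    """Number of standard Young tableaux: n! divided by the hook product."""
    prod = 1
    for h in hooks(shape):
        prod *= h
    return factorial(sum(shape)) // prod
```

For instance, `count_syt((3, 2))` returns 5, the five standard Young tableaux of shape $(3,2)$.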

The construction was motivated by a similar formula for the two-column case which was

conjectured by Haglund and proved by Garsia and Haglund. To prove our formula, we apply the same strategy that Garsia and Haglund used for the two-column case: we derive a recursion satisfied by both the constructed combinatorial formula and the Hilbert series. In addition, we provide two independent proofs.

Also, we consider the integral form Macdonald polynomials $J_\mu[X;q,t]$ and introduce a combinatorial formula for the Schur coefficients of $J_\mu[X;q,t]$ in the one-row case and the hook shape case. As we mentioned in the beginning, Macdonald originally considered $J_\mu[X;q,t]$ in terms of the modified Schur functions $s_\lambda[X(1-t)]$, and the coefficients $K_{\lambda\mu}(q,t)$ have been studied extensively due to the positivity conjecture, but no research has been done so far concerning the Schur expansion of $J_\mu[X;q,t]$ itself. Along the way, Haglund noticed that the scalar product of $J_\mu[X;q,q^k]/(1-q)^n$ and $s_\lambda(x)$, for any nonnegative integer $k$, is a polynomial in $q$ with positive coefficients. Based on this observation, he conjectured that $J_\mu[X;q,t]$ has the following Schur expansion

$$J_\mu[X;q,t] = \sum_{\lambda\vdash n} \Bigg( \sum_{T\in\mathrm{SSYT}(\lambda'')} \prod_{c\in\mu} \big(1 - t^{l(c)+1} q^{\mathrm{qstat}(c,T)}\big)\, q^{\operatorname{ch}(T)} \Bigg) s_\lambda,$$

for certain unknown integers $\mathrm{qstat}(c,T)$. We define $\mathrm{qstat}(c,T)$ for the one-row case and the hook shape case and construct the explicit combinatorial formula for the Schur coefficients in those two cases.

The thesis is organized as follows: in Chapter 1, we give definitions of basic combinatorial objects and review the theory of symmetric functions and Macdonald polynomials.


In Section 1.4, we introduce the Garsia-Haiman modules and define their Hilbert series.

Chapter 2 is devoted to constructing and proving the combinatorial formula for the Hilbert

series of Garsia-Haiman modules as a sum over standard Young tableaux. In Section 2.1, we review Garsia’s proof for the two-column shape case, and in Section 2.2, we prove the combinatorial formula for the hook shape case. We provide three different proofs. The first one is by direct

calculation using the monomial expansion formula of Macdonald polynomials, the second one is

by deriving the recursive formula of Macdonald polynomials which is known by Garsia and Haiman,

and the third one is by applying the science fiction conjecture. In addition to the combinatorial

construction over SYTs, we provide a way of associating a group of fillings to one SYT, and prove

that this association is a bijection.

In Chapter 3, we construct the combinatorial formula for the coefficients in the Schur expansion of the integral form of Macdonald polynomials. In Section 3.2, we construct and prove the combinatorial formula in the one-row case and the hook case.

1.1 Basic Combinatorial Objects

Definition 1.1.1. A partition $\lambda$ of a nonnegative integer $n$ is a non-increasing sequence of positive integers $(\lambda_1,\lambda_2,\ldots,\lambda_k)\in\mathbb{N}^k$ satisfying

$$\lambda_1 \ge \cdots \ge \lambda_k \quad\text{and}\quad \sum_{i=1}^{k}\lambda_i = n.$$

We write $\lambda\vdash n$ to say $\lambda$ is a partition of $n$. For $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$ a partition of $n$, we say the length of $\lambda$ is $k$ (written $l(\lambda)=k$) and the size of $\lambda$ is $n$ (written $|\lambda|=n$). The numbers $\lambda_i$ are referred to as the parts of $\lambda$. We may also write

$$\lambda = (1^{m_1}, 2^{m_2}, \ldots),$$

where $m_i = m_i(\lambda)$ is the number of parts of $\lambda$ equal to $i$.


Definition 1.1.2. The Young diagram (also called a Ferrers diagram) of a partition $\lambda$ is a collection of boxes (or cells), left justified, with $\lambda_i$ cells in the $i$th row from the bottom. The cells are indexed by pairs $(i,j)$, with $i$ being the row index (the bottom row is row 1) and $j$ being the column index (the leftmost column is column 1). Abusing notation, we will write $\lambda$ for both the partition and its diagram.

Figure 1.1: The Young diagram for $\lambda = (4,4,2,1)$.

Definition 1.1.3. The conjugate $\lambda'$ of a partition $\lambda$ is defined by

$$\lambda'_j = \sum_{i\ge j} m_i.$$

The diagram of $\lambda'$ can be obtained by reflecting the diagram of $\lambda$ across the main diagonal. In particular, $\lambda'_1 = l(\lambda)$ and $\lambda_1 = l(\lambda')$. Obviously $\lambda'' = \lambda$.

Figure 1.2: The Young diagram for $(4,4,2,1)' = (4,3,2,2)$.
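Equivalently, $\lambda'_j$ counts the parts of $\lambda$ of size at least $j$, which gives a one-line computation (an illustrative Python sketch, not from the thesis; the name `conjugate` is ours):

```python
def conjugate(shape):
    """Conjugate partition: lambda'_j = number of parts of lambda that are >= j."""
    return tuple(sum(1 for part in shape if part >= j)
                 for j in range(1, shape[0] + 1))
```

On the running example, `conjugate((4, 4, 2, 1))` gives `(4, 3, 2, 2)`, and applying it twice recovers the original partition, matching $\lambda'' = \lambda$.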

Definition 1.1.4. For partitions $\lambda,\mu$, we say $\mu$ is contained in $\lambda$, and write $\mu\subset\lambda$, when the Young diagram of $\mu$ is contained within the diagram of $\lambda$, i.e., $\mu_i\le\lambda_i$ for all $i$. In this case, we define the skew diagram $\lambda/\mu$ by removing the cells of the diagram of $\mu$ from the diagram of $\lambda$.

A skew diagram $\theta = \lambda/\mu$ is a horizontal $m$-strip (resp. a vertical $m$-strip) if $|\theta| = m$ and $\theta$ has at most one cell in each column (resp. row). A necessary and sufficient


Figure 1.3: The Young diagram for $(4,4,2,1)/(3,1,1)$.

condition for $\theta$ to be a horizontal strip is that the sequences $\lambda$ and $\mu$ are interlaced, in the sense that $\lambda_1 \ge \mu_1 \ge \lambda_2 \ge \mu_2 \ge \cdots$.

A skew diagram $\lambda/\mu$ is a border strip (also called a skew hook, or a ribbon) if $\lambda/\mu$ is connected and contains no $2\times 2$ block of squares, so that successive rows (or columns) of $\lambda/\mu$ overlap by exactly one square.

Definition 1.1.5. The dominance order is a partial ordering, denoted by $\le$, defined on the set of partitions of $n$ by declaring $\lambda\le\mu$, for $|\lambda|=|\mu|=n$, if for all positive integers $k$,

$$\sum_{i=1}^{k}\lambda_i \le \sum_{i=1}^{k}\mu_i.$$
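The partial-sum condition is easy to test directly (an illustrative Python sketch, not from the thesis; the name `dominance_leq` is ours):

```python
def dominance_leq(lam, mu):
    """lam <= mu in dominance order: every partial sum of lam is at most the
    corresponding partial sum of mu, and both are partitions of the same n."""
    k = max(len(lam), len(mu))
    lam = list(lam) + [0] * (k - len(lam))   # pad with zero parts
    mu = list(mu) + [0] * (k - len(mu))
    s_lam = s_mu = 0
    for i in range(k):
        s_lam += lam[i]
        s_mu += mu[i]
        if s_lam > s_mu:
            return False
    return s_lam == s_mu                      # same total size n
```

Note that dominance is only a partial order: for $n=6$, the partitions $(3,1,1,1)$ and $(2,2,2)$ are incomparable, since neither sequence of partial sums stays weakly ahead of the other.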

Definition 1.1.6. Given a square $c\in\lambda$, define the leg (respectively coleg) of $c$, denoted $l(c)$ (resp. $l'(c)$), to be the number of squares in $\lambda$ that are strictly above (resp. below) $c$ and in the same column as $c$, and the arm (resp. coarm) of $c$, denoted $a(c)$ (resp. $a'(c)$), to be the number of squares in $\lambda$ strictly to the right (resp. left) of $c$ and in the same row as $c$. Also, if $c$ has coordinates $(i,j)$, we let $\mathrm{south}(c)$ denote the square with coordinates $(i-1,j)$.

Figure 1.4: The arm $a$, leg $l$, coarm $a'$ and coleg $l'$.

For each partition $\lambda$ we define

$$n(\lambda) = \sum_{i\ge 1} (i-1)\,\lambda_i,$$

so that each $n(\lambda)$ is the sum of the numbers obtained by attaching a zero to each node in the first (bottom) row of the diagram of $\lambda$, a 1 to each node in the second row, and so on. Adding up the numbers in each column, we see that

$$n(\lambda) = \sum_{i\ge 1} \binom{\lambda'_i}{2}.$$
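The arm, leg, and $n(\lambda)$ statistics above can be checked numerically (an illustrative Python sketch, not from the thesis; function names are ours, with rows indexed bottom-up as in the text):

```python
from math import comb

def arm(shape, i, j):
    """Squares strictly to the right of cell (i, j) in its row."""
    return shape[i] - j - 1

def leg(shape, i, j):
    """Squares strictly above cell (i, j): higher rows reaching column j."""
    return sum(1 for r in range(i + 1, len(shape)) if shape[r] > j)

def n_stat(shape):
    """n(lambda) = sum over rows of (row index - 1) * (row length)."""
    return sum(i * part for i, part in enumerate(shape))
```

For $\lambda = (4,4,2,1)$ we get $n(\lambda) = 0\cdot 4 + 1\cdot 4 + 2\cdot 2 + 3\cdot 1 = 11$, which agrees with $\sum_i \binom{\lambda'_i}{2} = \binom{4}{2}+\binom{3}{2}+\binom{2}{2}+\binom{2}{2}$, and the bottom-left cell has hook length $a+l+1 = 3+3+1 = 7$.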

Definition 1.1.7. Let $\lambda$ be a partition. A tableau of shape $\lambda\vdash n$ is a function $T$ from the cells of the Young diagram of $\lambda$ to the positive integers. The size of a tableau is its number of entries. If $T$ is of shape $\lambda$ then we write $\lambda = \mathrm{sh}(T)$. Hence the size of $T$ is just $|\mathrm{sh}(T)|$. A semistandard Young tableau (SSYT) of shape $\lambda$ is a tableau which is weakly increasing from left to right in every row and strictly increasing from bottom to top in every column. We may also think of an SSYT of shape $\lambda$ as the Young diagram of $\lambda$ whose boxes have been filled with positive integers (satisfying certain conditions). A semistandard Young tableau is standard (SYT) if it is a bijection from $\lambda$ to $[n]$, where $[n] = \{1,2,\ldots,n\}$.

6 9 9
5 5 7
2 4 4 5 5
1 1 1 3 4 4

Figure 1.5: An SSYT of shape $(6,5,3,3)$.

For a partition $\lambda$ of $n$ and a composition $\mu$ of $n$, we define

$$\mathrm{SSYT}(\lambda) = \{\text{semistandard Young tableaux } T:\lambda\to\mathbb{N}\},$$
$$\mathrm{SSYT}(\lambda,\mu) = \{\text{SSYT } T:\lambda\to\mathbb{N} \text{ with entries } 1^{\mu_1}, 2^{\mu_2}, \ldots\},$$
$$\mathrm{SYT}(\lambda) = \{\text{SSYT } T:\lambda\xrightarrow{\sim}[n]\} = \mathrm{SSYT}(\lambda,1^n).$$

For $T\in\mathrm{SSYT}(\lambda,\mu)$, we say $T$ is an SSYT of shape $\lambda$ and weight $\mu$. Note that if $T\in\mathrm{SSYT}(\lambda,\mu)$ for partitions $\lambda$ and $\mu$, then $\lambda\ge\mu$.

Definition 1.1.8. For a partition $\lambda$, we define the hook polynomial to be the polynomial in $q$

$$H_\lambda(q) = \prod_{c\in\lambda} \big(1 - q^{a(c)+l(c)+1}\big),$$

where $a(c)$ is the arm of $c$ and $l(c)$ is the leg of $c$.

We say that $T$ has type $\alpha = (\alpha_1,\alpha_2,\ldots)$, denoted $\alpha = \mathrm{type}(T)$, if $T$ has $\alpha_i = \alpha_i(T)$ parts equal to $i$. For any $T$ of type $\alpha$, write

$$x^T = x_1^{\alpha_1(T)} x_2^{\alpha_2(T)} \cdots.$$

Note that $\alpha_i = |T^{-1}(i)|$.

1.2 Symmetric Functions

Consider the ring $\mathbb{Z}[x_1,\ldots,x_n]$ of polynomials in $n$ independent variables $x_1,\ldots,x_n$ with rational integer coefficients. The symmetric group $S_n$ acts on this ring by permuting the variables, and a polynomial is symmetric if it is invariant under this action. The symmetric polynomials form a subring

$$\Lambda_n = \mathbb{Z}[x_1,\ldots,x_n]^{S_n}.$$

$\Lambda_n$ is a graded ring: we have

$$\Lambda_n = \bigoplus_{k\ge 0} \Lambda_n^k,$$

where $\Lambda_n^k$ consists of the homogeneous symmetric polynomials of degree $k$, together with the zero polynomial. If we add $x_{n+1}$, we can form $\Lambda_{n+1} = \mathbb{Z}[x_1,\ldots,x_{n+1}]^{S_{n+1}}$, and there is a natural surjection $\Lambda_{n+1}\to\Lambda_n$ defined by setting $x_{n+1}=0$. Note that the mapping $\Lambda_{n+1}^k\to\Lambda_n^k$ is a surjection for all $k\ge 0$, and a bijection if and only if $k\le n$. If we define $\Lambda^k$ as the inverse limit

$$\Lambda^k = \varprojlim_n \Lambda_n^k$$

for each $k\ge 0$, and let


$$\Lambda = \bigoplus_{k\ge 0} \Lambda^k,$$

then this graded ring is called the ring of symmetric functions. For any commutative ring $R$, we write

$$\Lambda_R = \Lambda\otimes_{\mathbb{Z}} R, \qquad \Lambda_{n,R} = \Lambda_n\otimes_{\mathbb{Z}} R$$

for the ring of symmetric functions (symmetric polynomials in $n$ indeterminates, respectively) with coefficients in $R$.

1.2.1 Bases of $\Lambda$

For each $\alpha = (\alpha_1,\ldots,\alpha_n)\in\mathbb{N}^n$, we denote by $x^\alpha$ the monomial

$$x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n}.$$

Let $x = (x_1,x_2,\ldots)$ be a set of indeterminates, and let $n\in\mathbb{N}$. Let $\lambda$ be any partition of size $n$. The polynomial

$$m_\lambda = \sum_\alpha x^\alpha,$$

summed over all distinct permutations $\alpha$ of $\lambda = (\lambda_1,\lambda_2,\ldots)$, is clearly symmetric, and the $m_\lambda$ (as $\lambda$ runs through all partitions $\lambda\vdash n$) form a $\mathbb{Z}$-basis of $\Lambda_n$. Moreover, the set $\{m_\lambda\}$ is a basis for $\Lambda$. They are called monomial symmetric functions.

For each integer $n\ge 0$, the $n$th elementary symmetric function $e_n$ is the sum of all products of $n$ distinct variables $x_i$, so that $e_0 = 1$ and

$$e_n = \sum_{i_1<i_2<\cdots<i_n} x_{i_1}x_{i_2}\cdots x_{i_n} = m_{(1^n)}$$

for $n\ge 1$. The generating function for the $e_n$ is

$$E(t) = \sum_{n\ge 0} e_n t^n = \prod_{i\ge 1}(1 + x_i t)$$

($t$ being another variable), as one sees by multiplying out the product on the right. For each partition $\lambda = (\lambda_1,\lambda_2,\ldots)$ define

$$e_\lambda = e_{\lambda_1} e_{\lambda_2}\cdots.$$


For each $n\ge 0$, the $n$th complete symmetric function $h_n$ is the sum of all monomials of total degree $n$ in the variables $x_1,x_2,\ldots$, so that

$$h_n = \sum_{|\lambda|=n} m_\lambda.$$

In particular, $h_0 = 1$ and $h_1 = e_1$. The generating function for the $h_n$ is

$$H(t) = \sum_{n\ge 0} h_n t^n = \prod_{i\ge 1}(1 - x_i t)^{-1}.$$

For each partition $\lambda = (\lambda_1,\lambda_2,\ldots)$ define

$$h_\lambda = h_{\lambda_1} h_{\lambda_2}\cdots.$$

For each $n\ge 1$ the $n$th power sum symmetric function is

$$p_n = \sum_i x_i^n = m_{(n)}.$$

For each partition $\lambda = (\lambda_1,\lambda_2,\ldots)$ define

$$p_\lambda = p_{\lambda_1} p_{\lambda_2}\cdots.$$

The generating function for the $p_n$ is

$$P(t) = \sum_{n\ge 1} p_n t^{n-1} = \sum_{i\ge 1}\sum_{n\ge 1} x_i^n t^{n-1} = \sum_{i\ge 1} \frac{x_i}{1-x_i t} = \sum_{i\ge 1} \frac{d}{dt}\log\frac{1}{1-x_i t}.$$

Note that

$$P(t) = \frac{d}{dt}\log\prod_{i\ge 1}(1-x_i t)^{-1} = \frac{d}{dt}\log H(t) = H'(t)/H(t),$$

and likewise

$$P(-t) = \frac{d}{dt}\log E(t) = E'(t)/E(t).$$

Proposition 1.2.1. We have

$$h_n = \sum_{\lambda\vdash n} z_\lambda^{-1} p_\lambda, \qquad e_n = \sum_{\lambda\vdash n} \varepsilon_\lambda z_\lambda^{-1} p_\lambda,$$

where $\varepsilon_\lambda = (-1)^{|\lambda|-l(\lambda)}$.


Proof. See [Mac98, I, (2.14)] or [Sta99, 7.7.6].
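Proposition 1.2.1 is easy to verify numerically on a small alphabet (an illustrative Python sketch, not from the thesis; all names, and the choice of test values $x = (2,3,5)$, are ours). Here $z_\lambda = \prod_i i^{n_i} n_i!$ as defined below in Definition 1.2.3:

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations_with_replacement
from math import factorial

def partitions(n, largest=None):
    """Yield the partitions of n as non-increasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z(lam):
    """z_lambda = prod_i i^{n_i} n_i!, with n_i the multiplicity of i in lambda."""
    out = 1
    for i in set(lam):
        ni = lam.count(i)
        out *= i ** ni * factorial(ni)
    return out

def eps(lam):
    """epsilon_lambda = (-1)^(|lambda| - l(lambda))."""
    return (-1) ** (sum(lam) - len(lam))

xs = [Fraction(2), Fraction(3), Fraction(5)]   # a small test alphabet
p = lambda k: sum(x ** k for x in xs)          # power sum p_k on xs
p_lam = lambda lam: reduce(lambda acc, part: acc * p(part), lam, Fraction(1))

# h_3 and e_3 computed directly from their definitions ...
h3 = sum(a * b * c for a, b, c in combinations_with_replacement(xs, 3))
e3 = xs[0] * xs[1] * xs[2]

# ... and from the power-sum expansions of Proposition 1.2.1
h3_from_p = sum(p_lam(lam) / z(lam) for lam in partitions(3))
e3_from_p = sum(eps(lam) * p_lam(lam) / z(lam) for lam in partitions(3))
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt: both sides agree on the nose.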

Proposition 1.2.2. The $m_\lambda$, $e_\lambda$ and $h_\lambda$ form $\mathbb{Z}$-bases of $\Lambda$, and the $p_\lambda$ form a $\mathbb{Q}$-basis for $\Lambda_{\mathbb{Q}}$, where $\Lambda_{\mathbb{Q}} = \Lambda\otimes_{\mathbb{Z}}\mathbb{Q}$.

Proof. See [Mac98] or [Sta99].

Schur Functions

Definition 1.2.3. We define a scalar product on $\Lambda$ so that the bases $(h_\lambda)$ and $(m_\lambda)$ become dual to each other:

$$\langle h_\lambda, m_\mu\rangle = \delta_{\lambda\mu}.$$

This is called the Hall inner product. Note that the power sum symmetric functions are orthogonal with respect to the Hall inner product:

$$\langle p_\lambda, p_\mu\rangle = z_\lambda \delta_{\lambda\mu},$$

where $z_\lambda = \prod_i i^{n_i} n_i!$ and $n_i = n_i(\lambda)$ is the number of $i$'s occurring as parts of $\lambda$.

Definition 1.2.4. Let $\lambda$ be a partition. The Schur function $s_\lambda$ of shape $\lambda$ in the variables $x = (x_1,x_2,\ldots)$ is the formal power series

$$s_\lambda(x) = \sum_T x^T,$$

summed over all SSYTs $T$ of shape $\lambda$.

Example 1.2.5. The SSYTs $T$ of shape $(2,1)$ with largest part at most three are given by

2
1 1

2
1 2

3
1 1

3
1 3

3
2 2

3
2 3

3
1 2

2
1 3

Hence

$$s_{21}(x_1,x_2,x_3) = x_1^2x_2 + x_1x_2^2 + x_1^2x_3 + x_1x_3^2 + x_2^2x_3 + x_2x_3^2 + 2x_1x_2x_3.$$
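Example 1.2.5 can be reproduced by brute-force enumeration of SSYT (an illustrative Python sketch, not from the thesis; the name `ssyt` is ours, with rows indexed bottom-up as in the text):

```python
from collections import Counter
from itertools import product

def ssyt(shape, max_entry):
    """Yield all SSYT of a partition shape with entries <= max_entry:
    rows weakly increase left to right, columns strictly increase bottom to top."""
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    for values in product(range(1, max_entry + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[(i, j)] <= T[(i, j + 1)]
                      for i, j in cells if (i, j + 1) in T)
        cols_ok = all(T[(i, j)] < T[(i + 1, j)]
                      for i, j in cells if (i + 1, j) in T)
        if rows_ok and cols_ok:
            yield T

tableaux = list(ssyt((2, 1), 3))
# group tableaux by the multiset of entries, i.e. by the monomial x^T
monomials = Counter(tuple(sorted(T.values())) for T in tableaux)
```

The enumeration finds the eight tableaux listed above, with the monomial $x_1x_2x_3$ occurring twice, matching the coefficient 2 in $s_{21}(x_1,x_2,x_3)$.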


We also define the skew Schur functions $s_{\lambda/\mu}$ by taking the sum over semistandard Young tableaux of shape $\lambda/\mu$. Note that the Schur functions are orthonormal with respect to the Hall inner product:

$$\langle s_\lambda, s_\mu\rangle = \delta_{\lambda\mu},$$

so the $s_\lambda$ form an orthonormal basis of $\Lambda$, and the $s_\lambda$ such that $|\lambda|=n$ form an orthonormal basis of $\Lambda^n$. In particular, the Schur functions $\{s_\lambda\}$ could be defined as the unique family of symmetric functions with the following two properties:

(i) $s_\lambda = m_\lambda + \sum_{\mu<\lambda} K_{\lambda\mu} m_\mu$,

(ii) $\langle s_\lambda, s_\mu\rangle = \delta_{\lambda\mu}$.

The coefficient $K_{\lambda\mu}$ is known as the Kostka number, and it is equal to the number of SSYT of shape $\lambda$ and weight $\mu$. The importance of Schur functions arises from their connections with many branches of mathematics such as the representation theory of symmetric groups and algebraic geometry.

Proposition 1.2.6. We have

$$h_\mu = \sum_\lambda K_{\lambda\mu} s_\lambda.$$

Proof. By the definition of the Hall inner product, $\langle h_\mu, m_\nu\rangle = \delta_{\mu\nu}$, and by property (i) of the above two conditions on Schur functions, $s_\lambda = \sum_\mu K_{\lambda\mu} m_\mu$. So,

$$\langle h_\mu, s_\lambda\rangle = \Big\langle h_\mu, \sum_\nu K_{\lambda\nu} m_\nu\Big\rangle = K_{\lambda\mu},$$

and the claim follows.


The Littlewood-Richardson Rule

The integer $\langle s_\lambda, s_\mu s_\nu\rangle = \langle s_{\lambda/\nu}, s_\mu\rangle = \langle s_{\lambda/\mu}, s_\nu\rangle$ is denoted $c^\lambda_{\mu\nu}$ and is called a Littlewood-Richardson coefficient. Thus

$$s_\mu s_\nu = \sum_\lambda c^\lambda_{\mu\nu}\, s_\lambda, \qquad s_{\lambda/\nu} = \sum_\mu c^\lambda_{\mu\nu}\, s_\mu, \qquad s_{\lambda/\mu} = \sum_\nu c^\lambda_{\mu\nu}\, s_\nu.$$

We have a nice combinatorial interpretation of $c^\lambda_{\mu\nu}$, and to introduce it we first make several definitions.

We consider a word $w$ as a sequence of positive integers. Let $T$ be an SSYT. Then we can derive a word $w$ from $T$ by reading the elements in $T$ from right to left, starting from bottom to top.

Definition 1.2.7. A word $w = a_1a_2\ldots a_N$ is said to be a lattice permutation if for $1\le r\le N$ and $1\le i\le n-1$, the number of occurrences of $i$ in $a_1a_2\ldots a_r$ is not less than the number of occurrences of $i+1$.

Now we are ready to state the Littlewood-Richardson rule.

Theorem 1.2.8. Let $\lambda,\mu,\nu$ be partitions. Then $c^\lambda_{\mu\nu}$ is equal to the number of SSYT $T$ of shape $\lambda/\mu$ and weight $\nu$ such that $w(T)$, the word obtained from $T$, is a lattice permutation.

Proof. See [Mac98, I.9].

Example 1.2.9. Let $\lambda = (4,4,2,1)$, $\mu = (2,1)$ and $\nu = (4,3,1)$. Then there are two SSYTs satisfying the Littlewood-Richardson rule (rows shown top to bottom, with the cells of $\mu$ omitted):

3
1 2
1 2 2
1 1

and

2
1 3
1 2 2
1 1
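Theorem 1.2.8 can be implemented directly by backtracking over the cells of the skew shape (an illustrative Python sketch, not from the thesis; the name `lr_coefficient` is ours, with rows indexed bottom-up and the reading word taken right to left, bottom row to top, as in the text):

```python
def lr_coefficient(lam, mu, nu):
    """Count SSYT of shape lam/mu and weight nu whose reading word
    is a lattice permutation (Theorem 1.2.8)."""
    lam, mu = list(lam), list(mu) + [0] * (len(lam) - len(mu))
    cells = [(r, c) for r in range(len(lam)) for c in range(mu[r], lam[r])]
    n, filling, count = len(nu), {}, 0

    def reading_word():
        return [filling[(r, c)] for r in range(len(lam))
                for c in range(lam[r] - 1, mu[r] - 1, -1)]

    def is_lattice(word):
        counts = [0] * (n + 1)
        for a in word:
            counts[a] += 1
            if a > 1 and counts[a] > counts[a - 1]:
                return False
        return True

    def fill(i, remaining):
        nonlocal count
        if i == len(cells):
            count += is_lattice(reading_word())
            return
        r, c = cells[i]
        for v in range(1, n + 1):
            if remaining[v - 1] == 0:
                continue
            if (r, c - 1) in filling and filling[(r, c - 1)] > v:
                continue          # rows weakly increase left to right
            if (r - 1, c) in filling and filling[(r - 1, c)] >= v:
                continue          # columns strictly increase bottom to top
            filling[(r, c)] = v
            remaining[v - 1] -= 1
            fill(i + 1, remaining)
            remaining[v - 1] += 1
            del filling[(r, c)]

    fill(0, list(nu))
    return count
```

Running it on Example 1.2.9 returns 2, and Pieri-type cases such as $c^{(3,1)}_{(2),(1,1)} = 1$ come out as expected.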


Classical Definition of Schur Functions

We consider a finite number of variables, say $x_1,x_2,\ldots,x_n$. Let $\alpha = (\alpha_1,\ldots,\alpha_n)\in\mathbb{N}^n$ and $\omega\in S_n$. As usual, we write $x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n}$, and define

$$\omega(x^\alpha) = x_1^{\alpha_{\omega(1)}}\cdots x_n^{\alpha_{\omega(n)}}.$$

Now we define the polynomial $a_\alpha$ obtained by antisymmetrizing $x^\alpha$, namely,

$$a_\alpha = a_\alpha(x_1,\ldots,x_n) = \sum_{\omega\in S_n} \epsilon(\omega)\,\omega(x^\alpha), \tag{1.2.1}$$

where $\epsilon(\omega)$ is the sign ($\pm$) of the permutation $\omega$. Note that the right hand side of (1.2.1) is the expansion of a determinant, that is to say,

$$a_\alpha = \det\big(x_i^{\alpha_j}\big)_{i,j=1}^{n}.$$

The polynomial $a_\alpha$ is skew-symmetric, i.e., we have

$$\omega(a_\alpha) = \epsilon(\omega)\, a_\alpha$$

for any $\omega\in S_n$, so $a_\alpha = 0$ unless all the $\alpha_i$'s are distinct. Hence we may assume that $\alpha_1 > \alpha_2 > \cdots > \alpha_n \ge 0$, and therefore we may write $\alpha = \lambda + \delta$ where $\lambda$ is a partition with $l(\lambda)\le n$ and $\delta = (n-1,n-2,\ldots,1,0)$. Since $\alpha_j = \lambda_j + n - j$, we have

$$a_\alpha = a_{\lambda+\delta} = \det\big(x_i^{\lambda_j+n-j}\big)_{i,j=1}^{n}. \tag{1.2.2}$$

This determinant is divisible in $\mathbb{Z}[x_1,\ldots,x_n]$ by each of the differences $x_i - x_j$ ($1\le i<j\le n$), and thus by their product, which is the Vandermonde determinant

$$a_\delta = \det\big(x_i^{n-j}\big) = \prod_{1\le i<j\le n}(x_i - x_j).$$

Moreover, since $a_\alpha$ and $a_\delta$ are skew-symmetric, the quotient is symmetric and homogeneous of degree $|\lambda| = |\alpha|-|\delta|$, i.e., $a_\alpha/a_\delta \in \Lambda_n^{|\lambda|}$.

Theorem 1.2.10. We have

$$s_\lambda(x_1,\ldots,x_n) = a_{\lambda+\delta}/a_\delta.$$

Proof. See [Mac98, I.3] or [Sta99, 7.15.1].

Proposition 1.2.11. For any partition $\lambda$, we have

$$s_\lambda(1,q,q^2,\ldots) = \frac{q^{n(\lambda)}}{H_\lambda(q)},$$

where $H_\lambda(q) = \prod_{c\in\lambda}(1-q^{a(c)+l(c)+1})$ is the hook polynomial.

Proof. By Theorem 1.2.10, we have

$$s_\lambda(1,q,q^2,\ldots,q^{n-1}) = \frac{\det\big(q^{(i-1)(\lambda_j+n-j)}\big)_{i,j=1}^{n}}{\det\big(q^{(i-1)(n-j)}\big)_{i,j=1}^{n}} = (-1)^{\binom{n}{2}} \prod_{1\le i<j\le n} \frac{q^{\lambda_i+n-i} - q^{\lambda_j+n-j}}{q^{i-1} - q^{j-1}}.$$

Let $\mu_i = \lambda_i + n - i$, and note the $q$-integers $[k]_q = 1-q^k$ and $[k]_q! = [1]_q[2]_q\cdots[k]_q$. Then (using the fact that $\prod_{1\le i<j\le n}[j-i]_q = \prod_{i=1}^{n}[n-i]_q!$)

$$s_\lambda(1,q,q^2,\ldots,q^{n-1}) = \frac{q^{\sum_{i<j}\mu_j} \prod_{i<j}[\mu_i-\mu_j]_q \cdot \prod_{i\ge1}[\mu_i]_q!}{q^{\sum_{i<j}(i-1)} \prod_{i<j}[j-i]_q \cdot \prod_{i\ge1}[\mu_i]_q!} = q^{n(\lambda)} \prod_{u\in\lambda} \frac{[n+c(u)]_q}{[h(u)]_q},$$

where $c(u) = a'(u) - l'(u)$ and $h(u) = a(u)+l(u)+1$, noting that

$$\prod_{u\in\lambda}[h(u)]_q = \frac{\prod_{i\ge1}[\mu_i]_q!}{\prod_{1\le i<j\le n}[\mu_i-\mu_j]_q}, \qquad \prod_{u\in\lambda}[n+c(u)]_q = \prod_{i=1}^{n} \frac{[\mu_i]_q!}{[n-i]_q!}.$$

If we now let $n\to\infty$, then the numerator $\prod_{u\in\lambda}(1-q^{n+c(u)})$ goes to 1, so we get

$$s_\lambda(1,q,q^2,\ldots) = \frac{q^{n(\lambda)}}{H_\lambda(q)}.$$

Zonal Symmetric Functions

These are symmetric functions $Z_\lambda$ characterized by the following two properties:

(i) $Z_\lambda = m_\lambda + \sum_{\mu<\lambda} c_{\lambda\mu} m_\mu$, for suitable coefficients $c_{\lambda\mu}$,

(ii) $\langle Z_\lambda, Z_\mu\rangle_2 = 0$ if $\lambda\ne\mu$,

where the scalar product $\langle\,,\,\rangle_2$ on $\Lambda$ is defined by

$$\langle p_\lambda, p_\mu\rangle_2 = \delta_{\lambda\mu}\, 2^{l(\lambda)} z_\lambda,$$

$l(\lambda)$ being the length of the partition $\lambda$ and $z_\lambda$ defined as before.

Jack's Symmetric Functions

The Jack symmetric functions $P_\lambda^{(\alpha)} = P_\lambda^{(\alpha)}(X;\alpha)$ are a generalization of the Schur functions and the zonal symmetric functions, and they are characterized by the following two properties:

(i) $P_\lambda^{(\alpha)} = m_\lambda + \sum_{\mu<\lambda} c_{\lambda\mu} m_\mu$, for suitable coefficients $c_{\lambda\mu}$,

(ii) $\langle P_\lambda^{(\alpha)}, P_\mu^{(\alpha)}\rangle_\alpha = 0$ if $\lambda\ne\mu$,

where the scalar product $\langle\,,\,\rangle_\alpha$ on $\Lambda$ is defined by

$$\langle p_\lambda, p_\mu\rangle_\alpha = \delta_{\lambda\mu}\, \alpha^{l(\lambda)} z_\lambda.$$

Remark 1.2.12. We have the following specializations:

(a) $\alpha = 1$: $P_\lambda^{(\alpha)} = s_\lambda$,

(b) $\alpha = 2$: $P_\lambda^{(\alpha)} = Z_\lambda$,

(c) $P_\lambda^{(\alpha)}(X;\alpha) \to e_{\lambda'}$ as $\alpha\to 0$,

(d) $P_\lambda^{(\alpha)}(X;\alpha) \to m_\lambda$ as $\alpha\to\infty$.

Hall-Littlewood Symmetric Functions

The Hall-Littlewood symmetric functions $P_\lambda(X;t)$ are characterized by the following two properties:

(i) $P_\lambda = m_\lambda + \sum_{\mu<\lambda} c_{\lambda\mu} m_\mu$, for suitable coefficients $c_{\lambda\mu}$,

(ii) $\langle P_\lambda, P_\mu\rangle_t = 0$ if $\lambda\ne\mu$,

where the scalar product $\langle\,,\,\rangle_t$ on $\Lambda$ is defined by

$$\langle p_\lambda, p_\mu\rangle_t = \delta_{\lambda\mu}\, z_\lambda \prod_{i=1}^{l(\lambda)}(1-t^{\lambda_i})^{-1}.$$

Note that when $t=0$ the $P_\lambda$ reduce to the Schur functions $s_\lambda$, and when $t=1$ to the monomial symmetric functions $m_\lambda$.

Definition 1.2.13. The Kostka numbers $K_{\lambda\mu}$ defined by

$$s_\lambda = \sum_{\mu\le\lambda} K_{\lambda\mu}\, m_\mu$$

generalize to the Kostka-Foulkes polynomials $K_{\lambda\mu}(t)$ defined as follows:

$$s_\lambda = \sum_{\mu\le\lambda} K_{\lambda\mu}(t)\, P_\mu(x;t),$$

where the $P_\mu(x;t)$ are the Hall-Littlewood functions.

Since $P_\lambda(x;1) = m_\lambda$, we have $K_{\lambda\mu}(1) = K_{\lambda\mu}$. Foulkes conjectured, and Hotta and Springer [HS77] proved using a cohomological interpretation, that $K_{\lambda\mu}(t)\in\mathbb{N}[t]$. Later Lascoux and Schützenberger [LS78] proved combinatorially that

$$K_{\lambda\mu}(t) = \sum_T t^{\operatorname{ch}(T)},$$

summed over all SSYT of shape $\lambda$ and weight $\mu$, where $\operatorname{ch}(T)$ is the charge statistic.

Charge Statistic

We consider words (or sequences) $w = a_1\cdots a_n$ in which each $a_i$ is a positive integer. The weight of $w$ is the sequence $\mu = (\mu_1,\mu_2,\ldots)$, where $\mu_i$ counts the number of times $i$ occurs in $w$. Assume that $\mu$ is a partition, i.e., $\mu_1\ge\mu_2\ge\cdots$. If $\mu = (1^n)$, so that $w$ is a rearrangement of $12\ldots n$, then we call $w$ a standard word.


(i) If $w$ is a standard word, then we assign the index 0 to the number 1, and if $r$ has index $i$, then $r+1$ has index $i$ if it lies to the right of $r$ and index $i+1$ if it lies to the left of $r$.

(ii) If $w$ is any word with repeated numbers, then we extract standard subwords from $w$ as follows. Reading from the left, choose the first 1 that occurs in $w$, then the first 2 to the right of the chosen 1, and so on. If at any stage there is no $s+1$ to the right of the chosen $s$, go back to the beginning. This procedure extracts a standard subword, say $w_1$, of $w$. We erase $w_1$ from $w$ and repeat the same procedure to obtain a standard subword $w_2$, and so on. For each standard subword $w_j$, we assign indices as we did for a standard word.

(iii) If an SSYT $T$ is given, we can extract a word $w(T)$ from $T$ by reading the elements from right to left, starting from the bottom row to the top.

Then we define the charge $\operatorname{ch}(w)$ of $w$ to be the sum of the indices. If $w$ has many subwords, then

$$\operatorname{ch}(w) = \sum_j \operatorname{ch}(w_j).$$

Example 1.2.14. Let $w = 21613244153$. Then we extract the first standard word $w_1 = 162453$ from $w$. If we erase $w_1$ from $w$, we are left with $21341$, from which we extract $w_2 = 2134$, and finally the last subword is $w_3 = 1$. The indices (attached as subscripts) of $w_1$ are $1_0\,6_2\,2_0\,4_1\,5_1\,3_0$, so $\operatorname{ch}(w_1) = 2+1+1 = 4$. The indices of $w_2$ are $2_1\,1_0\,3_1\,4_1$, so $\operatorname{ch}(w_2) = 1+1+1 = 3$, and finally the index of $w_3$ is $1_0$ and $\operatorname{ch}(w_3) = 0$. Thus,

$$\operatorname{ch}(w) = \operatorname{ch}(w_1) + \operatorname{ch}(w_2) + \operatorname{ch}(w_3) = 4+3+0 = 7.$$
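The extraction-and-index procedure of steps (i)-(ii) translates directly into code (an illustrative Python sketch, not from the thesis; the name `charge` is ours, and the input is assumed to have partition weight as in the text):

```python
def charge(word):
    """Charge of a word with partition weight, by repeatedly extracting
    standard subwords (wrapping to the beginning when no s+1 lies to the
    right of the chosen s) and summing their indices."""
    word = list(word)
    total = 0
    while word:
        # extract one standard subword: positions of 1, 2, 3, ... in turn
        chosen, pos, s = [], -1, 1
        while True:
            cands = [i for i, a in enumerate(word) if a == s and i > pos]
            if not cands:  # no s to the right: go back to the beginning
                cands = [i for i, a in enumerate(word) if a == s]
            if not cands:  # no letter s at all: subword complete
                break
            pos = cands[0]
            chosen.append(pos)
            s += 1
        if not chosen:     # guard against non-partition weight
            break
        # index of 1 is 0; the index increases by 1 whenever r+1 lies left of r
        idx = 0
        for prev, cur in zip(chosen, chosen[1:]):
            if cur < prev:
                idx += 1
            total += idx
        word = [a for i, a in enumerate(word) if i not in chosen]
    return total
```

On the example word this reproduces $\operatorname{ch}(21613244153) = 7$, and on the reversed standard word $n\cdots 21$ it gives $\binom{n}{2}$, e.g. $\operatorname{ch}(321)=3$.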

Then Lascoux and Schützenberger prove the following theorem in [LS78].

Theorem 1.2.15. (i) We have

$$K_{\lambda\mu}(t) = \sum_T t^{\operatorname{ch}(T)},$$

summed over all SSYT $T$ of shape $\lambda$ and weight $\mu$.


Proof. See [LS78].

1.2.2 Quasisymmetric Functions

Definition 1.2.16. A formal power series $f = f(x)\in\mathbb{Q}[[x_1,x_2,\ldots]]$ is quasisymmetric if for any composition $\alpha = (\alpha_1,\alpha_2,\ldots,\alpha_k)$, we have

$$f\big|_{x_{i_1}^{\alpha_1}\cdots x_{i_k}^{\alpha_k}} = f\big|_{x_{j_1}^{\alpha_1}\cdots x_{j_k}^{\alpha_k}}$$

whenever $i_1<\cdots<i_k$ and $j_1<\cdots<j_k$.

Clearly every symmetric function is quasisymmetric, and sums and products of quasisymmetric functions are also quasisymmetric. Let $\mathcal{Q}^n$ denote the set of all homogeneous quasisymmetric functions of degree $n$, and let $\mathrm{Comp}(n)$ denote the set of compositions of $n$.

Definition 1.2.17. Given $\alpha = (\alpha_1,\alpha_2,\ldots,\alpha_k)\in\mathrm{Comp}(n)$, define the monomial quasisymmetric function $M_\alpha$ by

$$M_\alpha = \sum_{i_1<\cdots<i_k} x_{i_1}^{\alpha_1}\cdots x_{i_k}^{\alpha_k}.$$

Then it is clear that the set $\{M_\alpha : \alpha\in\mathrm{Comp}(n)\}$ forms a basis for $\mathcal{Q}^n$. One can show that if $f\in\mathcal{Q}^m$ and $g\in\mathcal{Q}^n$, then $fg\in\mathcal{Q}^{m+n}$; thus if $\mathcal{Q} = \mathcal{Q}^0\oplus\mathcal{Q}^1\oplus\cdots$, then $\mathcal{Q}$ is a $\mathbb{Q}$-algebra, called the algebra of quasisymmetric functions (over $\mathbb{Q}$).

Note that there exists a natural one-to-one correspondence between compositions $\alpha$ of $n$ and subsets $S$ of $[n-1] = \{1,2,\ldots,n-1\}$: namely, we associate the set $S_\alpha = \{\alpha_1, \alpha_1+\alpha_2, \ldots, \alpha_1+\alpha_2+\cdots+\alpha_{k-1}\}$ with the composition $\alpha$, and the composition $\mathrm{co}(S) = (s_1, s_2-s_1, s_3-s_2, \ldots, n-s_{k-1})$ with the set $S = \{s_1,s_2,\ldots,s_{k-1}\}_<$. Then it is clear that $\mathrm{co}(S_\alpha) = \alpha$ and $S_{\mathrm{co}(S)} = S$. Using these relations, we define another basis for the quasisymmetric functions.

Definition 1.2.18. Given $\alpha\in\mathrm{Comp}(n)$, we define the fundamental quasisymmetric function $Q_\alpha$ by

$$Q_\alpha = \sum_{\substack{i_1\le\cdots\le i_n \\ i_j<i_{j+1}\ \text{if}\ j\in S_\alpha}} x_{i_1}\cdots x_{i_n}.$$


Proposition 1.2.19. For $\alpha\in\mathrm{Comp}(n)$ we have

$$Q_\alpha = \sum_{S_\alpha\subseteq T\subseteq[n-1]} M_{\mathrm{co}(T)}.$$

Hence the set $\{Q_\alpha : \alpha\in\mathrm{Comp}(n)\}$ is a basis for $\mathcal{Q}^n$.

Proof. See [Sta99, 7.19.1].
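The composition-subset correspondence underlying this proposition is mechanical to verify (an illustrative Python sketch, not from the thesis; the names `S_alpha` and `co` are ours):

```python
def S_alpha(alpha):
    """S_alpha = set of partial sums of alpha, excluding the last (total) sum."""
    out, total = set(), 0
    for a in alpha[:-1]:
        total += a
        out.add(total)
    return out

def co(subset, n):
    """Composition of n attached to a subset {s_1 < ... < s_{k-1}} of [n-1]:
    successive differences of 0 < s_1 < ... < s_{k-1} < n."""
    s = sorted(subset)
    return tuple(b - a for a, b in zip([0] + s, s + [n]))
```

The two maps are mutually inverse, e.g. $S_{(1,3,2)} = \{1,4\}$ and $\mathrm{co}(\{1,4\})=(1,3,2)$ for $n=6$, matching $\mathrm{co}(S_\alpha)=\alpha$ and $S_{\mathrm{co}(S)}=S$.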

The main purpose of introducing the quasisymmetric functions is the quasisymmetric expansion of the Schur functions. We define a descent of an SYT $T$ to be an integer $i$ such that $i+1$ appears in a higher row of $T$ than $i$, and define the descent set $D(T)$ to be the set of all descents of $T$.

Theorem 1.2.20. We have

$$s_{\lambda/\mu} = \sum_T Q_{\mathrm{co}(D(T))},$$

where $T$ ranges over all SYTs of shape $\lambda/\mu$.

Proof. See [Sta99, 7.19.7].

Example 1.2.21. For $n = 3$,

$$s_3 = Q_{\mathrm{co}(\emptyset)}, \qquad s_{21} = Q_{\mathrm{co}(1)} + Q_{\mathrm{co}(2)}, \qquad s_{111} = Q_{\mathrm{co}(1,2)}.$$

For $n = 4$,

$$s_4 = Q_{\mathrm{co}(\emptyset)}, \qquad s_{31} = Q_{\mathrm{co}(1)} + Q_{\mathrm{co}(2)} + Q_{\mathrm{co}(3)},$$
$$s_{22} = Q_{\mathrm{co}(2)} + Q_{\mathrm{co}(1,3)}, \qquad s_{211} = Q_{\mathrm{co}(1,2)} + Q_{\mathrm{co}(1,3)} + Q_{\mathrm{co}(2,3)},$$
$$s_{1111} = Q_{\mathrm{co}(1,2,3)},$$

where each summand $Q_{\mathrm{co}(D(T))}$ corresponds to a standard Young tableau $T$ of the given shape.

1.3 Macdonald Polynomials

1.3.1 Plethysm

In Proposition 1.2.2, we showed that the power sum functions $p_\lambda$ form a basis of the ring of symmetric functions. This implies that the ring of symmetric functions can be realized as the ring of polynomials in the power sums $p_1, p_2, \ldots$. With this in mind, we introduce an operation called plethysm which simplifies the notation for compositions of power sum functions and symmetric functions.

Definition 1.3.1. Let $E = E(t_1,t_2,\ldots)$ be a formal Laurent series with rational coefficients in $t_1,t_2,\ldots$. We define the plethystic substitution $p_k[E]$ by replacing each $t_i$ in $E$ by $t_i^k$, i.e.,

$$p_k[E] := E(t_1^k, t_2^k, \ldots).$$

For an arbitrary symmetric function $f$, the plethystic substitution of $E$ into $f$, denoted by $f[E]$, is obtained by extending the specialization $p_k\mapsto p_k[E]$ to $f$.

Note that if $X = x_1+x_2+\cdots$, then for $f\in\Lambda$, $f[X] = f(x_1,x_2,\ldots)$. For this reason, we consider this operation as a kind of substitution. In plethystic expressions, $X$ stands for $x_1+x_2+\cdots$, so that $f[X]$ is the same as $f(X)$. See [Hai99] for a fuller account.

Example 1.3.2. For a symmetric function $f$ of degree $d$,

(a) $f[tX] = t^d f[X]$,

(b) $f[-X] = (-1)^d\, \omega f[X]$,

(c) $p_k[X+Y] = p_k[X] + p_k[Y]$,

(d) $p_k[-X] = -p_k[X]$.
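For finite alphabets, identities (a) and (c) can be checked numerically (an illustrative Python sketch, not from the thesis; all names and the sample alphabets are ours — a sum of alphabets $X+Y$ simply means their union):

```python
from fractions import Fraction

def p(k, xs):
    """Power sum p_k evaluated on a finite alphabet xs."""
    return sum(x ** k for x in xs)

def e2(xs):
    """Elementary symmetric function e_2, homogeneous of degree d = 2."""
    return sum(xs[i] * xs[j]
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

X = [Fraction(1), Fraction(2)]
Y = [Fraction(3), Fraction(5), Fraction(7)]
t = Fraction(1, 2)
```

Identity (c) says `p(k, X + Y) == p(k, X) + p(k, Y)`, where `X + Y` is Python list concatenation, i.e. the union of alphabets; identity (a) for $f = e_2$ says that scaling every letter by $t$ multiplies $e_2$ by $t^2$.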

Remark 1.3.3. Note that in plethystic notation the indeterminates are not numeric variables but must be regarded as formal symbols.

1.3.2 Macdonald Polynomials

Macdonald introduced a family of symmetric polynomials which becomes a basis of the space of symmetric functions in infinitely many indeterminates $x_1,x_2,\ldots$ with coefficients in the field $\mathbb{Q}(q,t)$, namely $\Lambda_{\mathbb{Q}(q,t)}$. We first introduce a $q,t$-analog of the Hall inner product and define

$$\langle p_\lambda, p_\mu\rangle_{q,t} = \delta_{\lambda\mu}\, z_\lambda(q,t),$$

where

$$z_\lambda(q,t) = z_\lambda \prod_{i=1}^{l(\lambda)} \frac{1-q^{\lambda_i}}{1-t^{\lambda_i}}.$$

In [Mac88], Macdonald proved the existence of a unique family of symmetric functions indexed by partitions, $\{P_\lambda[X;q,t]\}$, with coefficients in $\mathbb{Q}(q,t)$, having triangularity with respect to the Schur functions and orthogonality with respect to the $q,t$-analog of the Hall inner product.

Theorem 1.3.4. There exists a unique family of symmetric polynomials indexed by partitions, $\{P_\lambda[X;q,t]\}$, such that

1. $P_\lambda[X;q,t] = m_\lambda + \sum_{\mu<\lambda} \xi_{\mu,\lambda}(q,t)\, m_\mu$,

2. $\langle P_\lambda, P_\mu\rangle_{q,t} = 0$ if $\lambda\ne\mu$,

where $\xi_{\mu,\lambda}(q,t)\in\mathbb{Q}(q,t)$.

Proof. See [Mac98].

Macdonald polynomials specialize to Schur functions, complete homogeneous, elementary and monomial symmetric functions, and Hall-Littlewood functions.

Proposition 1.3.5.
$$P_\lambda[X;t,t] = s_\lambda[X], \qquad P_\lambda[X;q,1] = m_\lambda[X],$$
$$P_\lambda[X;1,t] = e_{\lambda'}[X], \qquad P_{(1^n)}[X;q,t] = e_n[X].$$
Proof. See [Mac88] or [Mac98].

Proposition 1.3.6.
$$P_\lambda[X;q,t] = P_\lambda[X;q^{-1},t^{-1}].$$
Proof. Note that for $f, g$ homogeneous of degree $n$,
$$\langle f, g \rangle_{q^{-1},t^{-1}} = (q^{-1}t)^n\, \langle f, g \rangle_{q,t}.$$
Because of this property, $P_\lambda[X;q^{-1},t^{-1}]$ also satisfies the two characterizing properties of $P_\lambda[X;q,t]$ in Theorem 1.3.4. By the uniqueness of such polynomials, we get the identity
$$P_\lambda[X;q,t] = P_\lambda[X;q^{-1},t^{-1}].$$
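The inner-product scaling used in this proof follows from the identity $z_\lambda(q^{-1},t^{-1}) = (q^{-1}t)^{|\lambda|}\, z_\lambda(q,t)$ applied to each $\langle p_\lambda, p_\lambda\rangle_{q,t}$. The sketch below is our own illustration, checking this identity in exact rational arithmetic for $\lambda = (3,2,2,1)$:

```python
from fractions import Fraction
from math import factorial
from collections import Counter

def z(lam):
    """z_lambda = prod_i i^{m_i} m_i!, where m_i is the multiplicity of the part i."""
    out = 1
    for part, mult in Counter(lam).items():
        out *= part**mult * factorial(mult)
    return out

def z_qt(lam, q, t):
    """q,t-deformation z_lambda(q,t) = z_lambda * prod_i (1-q^{lambda_i})/(1-t^{lambda_i})."""
    out = Fraction(z(lam))
    for part in lam:
        out *= (1 - q**part) / (1 - t**part)
    return out

lam = (3, 2, 2, 1)                     # |lambda| = 8
q, t = Fraction(2, 3), Fraction(5, 7)
n = sum(lam)

# <p_lam, p_lam>_{q^{-1},t^{-1}} = (q^{-1} t)^n <p_lam, p_lam>_{q,t}
assert z_qt(lam, 1/q, 1/t) == (t/q)**n * z_qt(lam, q, t)
```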

Integral Forms

To simplify notation, we use the following common abbreviations:
$$h_\lambda(q,t) = \prod_{c \in \lambda} \left(1 - q^{a(c)} t^{l(c)+1}\right), \qquad h'_\lambda(q,t) = \prod_{c \in \lambda} \left(1 - t^{l(c)} q^{a(c)+1}\right), \qquad d_\lambda(q,t) = \frac{h'_\lambda(q,t)}{h_\lambda(q,t)}.$$
We now define the integral form of the Macdonald polynomials:
$$J_\mu[X;q,t] = h_\mu(q,t)\, P_\mu[X;q,t] = h'_\mu(q,t)\, Q_\mu[X;q,t],$$
where $Q_\lambda[X;q,t] = \frac{P_\lambda[X;q,t]}{d_\lambda(q,t)}$. Macdonald showed that the integral form of the Macdonald polynomials $J_\lambda$ has the following expansion in terms of $\{s_\lambda[X(1-t)]\}$:

$$J_\mu[X;q,t] := \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q,t)\, s_\lambda[X(1-t)],$$
where $K_{\lambda\mu}(q,t) \in \mathbb{Q}(q,t)$ satisfies $K_{\lambda\mu}(1,1) = K_{\lambda\mu}$. These functions are called the $q,t$-Kostka functions. Macdonald introduced the $J_\lambda[X;q,t]$ and the $q,t$-Kostka functions in [Mac88], and he conjectured that the $q,t$-Kostka functions are polynomials in $\mathbb{N}[q,t]$. This is the famous Macdonald positivity conjecture. It was proved by Mark Haiman in 2001, by showing that it is intimately connected with the Hilbert scheme of points in the plane and with the variety of commuting matrices; Sami Assaf later gave a combinatorial proof by introducing dual equivalence graphs [Ass07] in 2007.

We now introduce a $q,t$-analog of the $\omega$ involution.

Definition 1.3.7. We define the homomorphism $\omega_{q,t}$ on $\Lambda_{\mathbb{Q}(q,t)}$ by
$$\omega_{q,t}(p_r) = (-1)^{r-1}\, \frac{1-q^r}{1-t^r}\, p_r$$
for all $r \geq 1$, and so
$$\omega_{q,t}(p_\lambda) = (-1)^{|\lambda| - l(\lambda)}\, p_\lambda \prod_{i=1}^{l(\lambda)} \frac{1-q^{\lambda_i}}{1-t^{\lambda_i}}.$$

Proposition 1.3.8. We have
$$\omega_{q,t} P_\lambda(X;q,t) = Q_{\lambda'}(X;t,q),$$
$$\omega_{q,t} Q_\lambda(X;q,t) = P_{\lambda'}(X;t,q).$$
Note that since $\omega_{t,q} = \omega_{q,t}^{-1}$, these two assertions are equivalent.


We introduce two important properties of the $q,t$-Kostka polynomials.

Proposition 1.3.9.
$$K_{\lambda\mu}(q,t) = q^{n(\mu')}\, t^{n(\mu)}\, K_{\lambda'\mu}(q^{-1},t^{-1}). \tag{1.3.1}$$

Proof. Note that
$$h_\lambda(q^{-1},t^{-1}) = \prod_{c \in \lambda} \left(1 - q^{-a(c)} t^{-l(c)-1}\right) = (-1)^{|\lambda|}\, q^{-n(\lambda')}\, t^{-n(\lambda)-|\lambda|}\, h_\lambda(q,t),$$
since $\sum_{c \in \lambda} a(c) = n(\lambda')$ and $\sum_{c \in \lambda} l(c) = n(\lambda)$. Hence,
$$J_\mu[X;q^{-1},t^{-1}] = h_\mu(q^{-1},t^{-1})\, P_\mu[X;q^{-1},t^{-1}] = h_\mu(q^{-1},t^{-1})\, P_\mu[X;q,t] = (-1)^{|\mu|}\, q^{-n(\mu')}\, t^{-n(\mu)-|\mu|}\, J_\mu[X;q,t].$$
Also, we note that
$$s_\lambda[X(1-t^{-1})] = (-t)^{-|\lambda|}\, s_{\lambda'}[X(1-t)].$$
Then, on one hand,
$$J_\mu[X;q^{-1},t^{-1}] = \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q^{-1},t^{-1})\, s_\lambda[X(1-t^{-1})] = \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q^{-1},t^{-1})\, (-t)^{-|\lambda|}\, s_{\lambda'}[X(1-t)] = \sum_{\lambda' \vdash |\mu|} K_{\lambda'\mu}(q^{-1},t^{-1})\, (-t)^{-|\lambda'|}\, s_{\lambda}[X(1-t)].$$
On the other hand,
$$J_\mu[X;q^{-1},t^{-1}] = (-1)^{|\mu|}\, q^{-n(\mu')}\, t^{-n(\mu)-|\mu|}\, J_\mu[X;q,t] = (-1)^{|\mu|}\, q^{-n(\mu')}\, t^{-n(\mu)-|\mu|} \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q,t)\, s_\lambda[X(1-t)].$$
By comparing the coefficients of $s_\lambda[X(1-t)]$ in the two expansions, we get the desired identity
$$K_{\lambda\mu}(q,t) = q^{n(\mu')}\, t^{n(\mu)}\, K_{\lambda'\mu}(q^{-1},t^{-1}).$$
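The first identity in this proof, $h_\lambda(q^{-1},t^{-1}) = (-1)^{|\lambda|} q^{-n(\lambda')} t^{-n(\lambda)-|\lambda|} h_\lambda(q,t)$, can be verified numerically. The sketch below is our own illustration; it uses the convention that, with the parts of $\lambda$ listed in one order, the leg of a cell counts the cells of the same column in the later rows:

```python
from fractions import Fraction

def cells(lam):
    return [(i, j) for i, part in enumerate(lam) for j in range(part)]

def arm(lam, c):
    i, j = c
    return lam[i] - j - 1            # cells to the right in the same row

def leg(lam, c):
    i, j = c                         # cells in the same column, in later rows
    return sum(1 for k in range(i + 1, len(lam)) if lam[k] > j)

def h(lam, q, t):
    """h_lambda(q,t) = prod over cells (1 - q^{arm} t^{leg+1})."""
    out = Fraction(1)
    for c in cells(lam):
        out *= 1 - q**arm(lam, c) * t**(leg(lam, c) + 1)
    return out

lam = (3, 1)
q, t = Fraction(2, 5), Fraction(3, 7)
n = sum(lam)                                           # |lambda| = 4
n_lam = sum(i * part for i, part in enumerate(lam))    # n(lambda) = 1
n_conj = sum(arm(lam, c) for c in cells(lam))          # n(lambda') = 3

lhs = h(lam, 1/q, 1/t)
rhs = (-1)**n * q**(-n_conj) * t**(-n_lam - n) * h(lam, q, t)
assert lhs == rhs
```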

Proposition 1.3.10.
$$K_{\lambda\mu}(q,t) = K_{\lambda'\mu'}(t,q). \tag{1.3.2}$$
Proof. Note that
$$\omega_{q,t} J_\mu[X;q,t] = h_\mu(q,t)\, \omega_{q,t} P_\mu[X;q,t] = h_\mu(q,t)\, Q_{\mu'}[X;t,q]$$
by Proposition 1.3.8. Since $h'_{\mu'}(t,q) = h_\mu(q,t)$, we have
$$\omega_{q,t} J_\mu[X;q,t] = h'_{\mu'}(t,q)\, Q_{\mu'}[X;t,q] = J_{\mu'}[X;t,q].$$
Also, note that
$$\omega_{q,t}\, s_\lambda[X(1-t)] = s_{\lambda'}[X(1-q)].$$
So,
$$\omega_{q,t} J_\mu[X;q,t] = \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q,t)\, \omega_{q,t} s_\lambda[X(1-t)] = \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q,t)\, s_{\lambda'}[X(1-q)].$$
And
$$J_{\mu'}[X;t,q] = \sum_{\lambda \vdash |\mu|} K_{\lambda\mu'}(t,q)\, s_\lambda[X(1-q)] = \sum_{\lambda' \vdash |\mu|} K_{\lambda'\mu'}(t,q)\, s_{\lambda'}[X(1-q)].$$
Since $\omega_{q,t} J_\mu[X;q,t] = J_{\mu'}[X;t,q]$, comparing the coefficients of $s_{\lambda'}[X(1-q)]$ gives
$$K_{\lambda\mu}(q,t) = K_{\lambda'\mu'}(t,q).$$


If $a$ is an indeterminate, we define
$$(a;q)_r = (1-a)(1-aq)\cdots(1-aq^{r-1}),$$
and we define the infinite product, denoted $(a;q)_\infty$, by
$$(a;q)_\infty = \prod_{r=0}^{\infty} (1-aq^r),$$
regarded as a formal power series in $a$ and $q$. For two sequences of independent indeterminates $x = (x_1, x_2, \ldots)$ and $y = (y_1, y_2, \ldots)$, define
$$\Pi(x,y;q,t) = \prod_{i,j} \frac{(tx_iy_j;q)_\infty}{(x_iy_j;q)_\infty}.$$

Note that then we have
$$\Pi(x,y;q,t) = \sum_\lambda z_\lambda(q,t)^{-1}\, p_\lambda(x)\, p_\lambda(y).$$
Now let $g_n(x;q,t)$ denote the coefficient of $y^n$ in the power-series expansion of the infinite product
$$\prod_{i \geq 1} \frac{(tx_iy;q)_\infty}{(x_iy;q)_\infty} = \sum_{n \geq 0} g_n(x;q,t)\, y^n,$$
and for any partition $\lambda = (\lambda_1, \lambda_2, \ldots)$ define
$$g_\lambda(x;q,t) = \prod_{i \geq 1} g_{\lambda_i}(x;q,t).$$
Then we have
$$g_n(x;q,t) = \sum_{\lambda \vdash n} z_\lambda(q,t)^{-1}\, p_\lambda(x),$$
and hence
$$\Pi(x,y;q,t) = \prod_j \left( \sum_{n \geq 0} g_n(x;q,t)\, y_j^n \right) = \sum_\lambda g_\lambda(x;q,t)\, m_\lambda(y).$$

We note the following proposition.

Proposition 1.3.11. For $\mathbb{Q}(q,t)$-bases $\{u_\lambda\}$ and $\{v_\lambda\}$ of $\Lambda_{\mathbb{Q}(q,t)}$ indexed by partitions, the following conditions are equivalent:

(a) $\langle u_\lambda, v_\mu \rangle_{q,t} = \delta_{\lambda\mu}$ for all $\lambda, \mu$,

(b) $\sum_\lambda u_\lambda(x)\, v_\lambda(y) = \Pi(x,y;q,t)$.

Proof. See [Mac98].

Then by Proposition 1.3.11,
$$\langle g_\lambda(X;q,t), m_\mu(X) \rangle_{q,t} = \delta_{\lambda\mu}, \tag{1.3.3}$$
so the $g_\lambda$ form a basis of $\Lambda_{\mathbb{Q}(q,t)}$ dual to the basis $\{m_\lambda\}$.

Now we consider $P_\lambda$ when $\lambda = (n)$, i.e., when $\lambda$ has only one row, of size $n$. By (1.3.3), $g_n$ is orthogonal to $m_\mu$ for all partitions $\mu \neq (n)$, hence to all $P_\mu$ except $\mu = (n)$. So $g_n$ must be a scalar multiple of $P_{(n)}$; in fact,
$$P_{(n)} = \frac{(q;q)_n}{(t;q)_n}\, g_n.$$
By the definition of $J_\lambda$,
$$J_{(n)}(X;q,t) = (t;q)_n\, P_{(n)} = (q;q)_n\, g_n(X;q,t). \tag{1.3.4}$$

Proposition 1.3.12.
$$K_{\lambda,(n)}(q,t) = \frac{q^{n(\lambda)}\, (q;q)_n}{H_\lambda(q)},$$
and so by duality,
$$K_{\lambda,(1^n)}(q,t) = \frac{t^{n(\lambda')}\, (t;t)_n}{H_\lambda(t)},$$
where $H_\lambda(q)$ is the hook polynomial defined in Definition 1.1.8.

Proof. Note that we have
$$\sum_{n \geq 0} g_n(X;q,t) = \prod_{i,j} \frac{1-tx_iq^{j-1}}{1-x_iq^{j-1}} = \sum_\lambda s_\lambda(1,q,q^2,\ldots)\, s_\lambda[X(1-t)] = \sum_\lambda \frac{q^{n(\lambda)}}{H_\lambda(q)}\, s_\lambda[X(1-t)]$$
by Theorem 1.2.11. So, in (1.3.4),
$$J_{(n)}(X;q,t) = (q;q)_n\, g_n(X;q,t) = \sum_{\lambda \vdash n} \frac{q^{n(\lambda)}\, (q;q)_n}{H_\lambda(q)}\, s_\lambda[X(1-t)],$$
and therefore
$$K_{\lambda,(n)}(q,t) = \frac{q^{n(\lambda)}\, (q;q)_n}{H_\lambda(q)}.$$
Note that $K_{\lambda\mu}(q,t)$ has the duality property (1.3.2), $K_{\lambda\mu}(q,t) = K_{\lambda'\mu'}(t,q)$, and this property gives $K_{\lambda,(1^n)}(q,t) = t^{n(\lambda')}\, (t;t)_n / H_\lambda(t)$.
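As a small illustration of Proposition 1.3.12 (our own check, assuming the hook polynomial $H_\lambda(q) = \prod_{c \in \lambda}(1-q^{h(c)})$), for $\lambda = (2,1)$ and $n = 3$ the formula collapses to the polynomial $q + q^2$, consistent with positivity:

```python
from fractions import Fraction

def hooks(lam):
    """Hook lengths of the cells of lam."""
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def K_row(lam, q):
    """K_{lam,(n)}(q,t) = q^{n(lam)} (q;q)_n / H_lam(q)  (independent of t)."""
    n = sum(lam)
    n_lam = sum(i * part for i, part in enumerate(lam))
    qq = Fraction(1)
    for r in range(1, n + 1):        # (q;q)_n = (1-q)(1-q^2)...(1-q^n)
        qq *= 1 - q**r
    H = Fraction(1)
    for hl in hooks(lam):            # hook polynomial H_lam(q)
        H *= 1 - q**hl
    return q**n_lam * qq / H

q = Fraction(3, 4)
# For lam = (2,1), n = 3: q^1 (1-q)(1-q^2)(1-q^3) / ((1-q^3)(1-q)^2) = q(1+q).
assert K_row((2, 1), q) == q + q**2
```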

Modified Macdonald Polynomials

In many cases it is convenient to work with the modified Macdonald polynomials. We define
$$H_\mu[X;q,t] := J_\mu\!\left[\frac{X}{1-t};q,t\right] = \sum_{\lambda \vdash |\mu|} K_{\lambda\mu}(q,t)\, s_\lambda[X].$$
We make one final modification to obtain the modified Macdonald polynomials
$$\tilde H_\mu[X;q,t] := t^{n(\mu)}\, H_\mu[X;q,1/t] = \sum_{\lambda \vdash |\mu|} \tilde K_{\lambda\mu}(q,t)\, s_\lambda[X].$$
The coefficients $\tilde K_{\lambda\mu}(q,t) = t^{n(\mu)} K_{\lambda\mu}(q,t^{-1})$ are called the modified $q,t$-Kostka functions. Macdonald defined the coefficients $\tilde K_{\lambda\mu}(q,t)$ in such a way that setting $q=0$ yields the famous (modified) Kostka-Foulkes polynomials $\tilde K_{\lambda\mu}(t) = \tilde K_{\lambda\mu}(0,t)$.

Proposition 1.3.13.
$$\tilde K_{\lambda\mu}(q,t) \in \mathbb{N}[q,t].$$
Proof. See [Hai01] or [Ass07].

The polynomials $\tilde H_\mu[X;q,t]$ can be characterized independently of the $P_\mu[X;q,t]$.

Proposition 1.3.14. The functions $\tilde H_\mu[X;q,t]$ are the unique functions in $\Lambda_{\mathbb{Q}(q,t)}$ satisfying the following conditions:

(1) $\tilde H_\mu[X(1-q);q,t] = \sum_{\lambda \geq \mu} a_{\lambda\mu}(q,t)\, s_\lambda$,

(2) $\tilde H_\mu[X(1-t);q,t] = \sum_{\lambda \geq \mu'} b_{\lambda\mu}(q,t)\, s_\lambda$,

(3) $\langle \tilde H_\mu, s_{(n)} \rangle = 1$,

for suitable coefficients $a_{\lambda\mu}, b_{\lambda\mu} \in \mathbb{Q}(q,t)$.

Proof. See [Hai99], [Hai01].

Corollary 1.3.15. For all $\mu$, we have
$$\omega \tilde H_\mu[X;q,t] = t^{n(\mu)}\, q^{n(\mu')}\, \tilde H_\mu[X;q^{-1},t^{-1}],$$
and, consequently, $\tilde K_{\lambda'\mu}(q,t) = t^{n(\mu)}\, q^{n(\mu')}\, \tilde K_{\lambda\mu}(q^{-1},t^{-1})$.

Proof. One can show that $\omega\, t^{n(\mu)} q^{n(\mu')} \tilde H_\mu[X;q^{-1},t^{-1}]$ satisfies (1) and (2) of Proposition 1.3.14, so it is a scalar multiple of $\tilde H_\mu$. Condition (3) of Proposition 1.3.14 then requires that $\tilde K_{(1^n),\mu}(q,t) = t^{n(\mu)} q^{n(\mu')}$, which is equivalent to $K_{(1^n),\mu}(q,t) = q^{n(\mu')}$, and this is known from [Mac98].

Proposition 1.3.16. For all $\mu$, we have
$$\tilde H_\mu[X;q,t] = \tilde H_{\mu'}[X;t,q],$$
and consequently $\tilde K_{\lambda\mu}(q,t) = \tilde K_{\lambda\mu'}(t,q)$.

Proof. The left hand side is
$$\tilde H_\mu[X;q,t] = \sum_\lambda \tilde K_{\lambda\mu}(q,t)\, s_\lambda[X] = \sum_\lambda t^{n(\mu)}\, K_{\lambda\mu}(q,t^{-1})\, s_\lambda[X]. \tag{1.3.5}$$
And the right hand side is
$$\tilde H_{\mu'}[X;t,q] = \sum_\lambda \tilde K_{\lambda\mu'}(t,q)\, s_\lambda[X] = \sum_\lambda q^{n(\mu')}\, K_{\lambda\mu'}(t,q^{-1})\, s_\lambda[X]. \tag{1.3.6}$$
Comparing (1.3.5) and (1.3.6), we need to show that
$$t^{n(\mu)}\, K_{\lambda\mu}(q,t^{-1}) = q^{n(\mu')}\, K_{\lambda\mu'}(t,q^{-1}).$$
We recall the two properties (1.3.1) and (1.3.2) of the $q,t$-Kostka functions. Using them, $t^{n(\mu)} K_{\lambda\mu}(q,t^{-1})$ becomes
$$t^{n(\mu)}\, K_{\lambda\mu}(q,t^{-1}) = t^{n(\mu)}\, q^{n(\mu')}\, t^{-n(\mu)}\, K_{\lambda\mu'}(t,q^{-1}) = q^{n(\mu')}\, K_{\lambda\mu'}(t,q^{-1}),$$
and this finishes the proof.

1.3.3 A Combinatorial Formula for Macdonald Polynomials

In 2004, Haglund [Hag04] conjectured a combinatorial formula for the monomial expansion of the modified Macdonald polynomials $\tilde H_\mu[X;q,t]$, and this was proved by Haglund, Haiman and Loehr [HHL05] in 2005. This celebrated combinatorial formula accelerated research in symmetric function theory concerning Macdonald polynomials. Before we give a detailed description of the formula, we introduce some definitions.

Definition 1.3.17. A word of length $n$ is a function from $\{1,2,\ldots,n\}$ to the positive integers. The weight of a word is the vector
$$\mathrm{wt}(w) = \left(|w^{-1}(1)|, |w^{-1}(2)|, \ldots\right).$$
We will think of words as vectors
$$w = (w(1), w(2), \ldots) = (w_1, w_2, \ldots),$$
and we write the word $w = (w_1, w_2, \ldots)$ simply as $w_1 w_2 \cdots w_n$. A word with weight $(1,1,\ldots,1)$ is called a permutation.

The cells of $\mu$ are labeled row by row, left to right within rows, by $1$ to $n$ in order; a filling is then obtained by applying $\sigma$. We simply use $\sigma$ to denote a filled diagram.

Definition 1.3.19. The reading order is the total ordering on the cells of $\mu$ given by reading them row by row, from top to bottom, and from left to right within each row. More formally, $(i,j) < (i',j')$ in the reading order if $(-i,j)$ is lexicographically less than $(-i',j')$.

Definition 1.3.20. A descent of $\sigma$ is a pair of entries $\sigma(u)$ and $\sigma(v)$, where the cell $v$ is directly south of $u$, that is $v = (i,j)$, $u = (i+1,j)$, and the entries satisfy $\sigma(u) > \sigma(v)$. Define
$$\mathrm{Des}(\sigma) = \{u \in \mu : \sigma(u) > \sigma(v) \text{ is a descent}\},$$
$$\mathrm{maj}(\sigma) = \sum_{u \in \mathrm{Des}(\sigma)} \left(\mathrm{leg}(u) + 1\right).$$

Example 1.3.21. The filling below has three descents, as shown.
$$\sigma = \begin{matrix} 6 & 2 & 2 \\ 5 & 8 & 4 \\ 4 & 1 & 3 \end{matrix}, \qquad \mathrm{Des}(\sigma) = \begin{matrix} 6 & & \\ 5 & 8 & \end{matrix}$$
Then $\mathrm{maj}(\sigma) = (0+1) + (1+1) + (0+1) = 4$.

Definition 1.3.22. Three cells $u, v, w \in \mu$ are said to form a triple if they are situated as shown below,
$$\begin{matrix} u & \cdots & w \\ v & & \end{matrix}$$
namely, $v$ is directly below $u$, and $w$ is in the same row as $u$, to its right. Let $\sigma$ be a filling. When we order the entries $\sigma(u), \sigma(v), \sigma(w)$ from smallest to largest, if they make a counterclockwise rotation, then the triple is called an inversion triple. If the two cells $u$ and $w$ are in the first row (i.e., in the bottom row), then they contribute an inversion triple if $\sigma(u) > \sigma(w)$. Define
$$\mathrm{inv}(\sigma) = \text{the number of inversion triples in } \sigma.$$
We also call a triple a coinversion triple if the orientation from the smallest entry to the largest is clockwise, and write $\mathrm{coinv}(\sigma)$ for the number of coinversion triples.

Now the combinatorial formula of Haglund, Haiman and Loehr is as follows.

Theorem 1.3.23.
$$\tilde H_\mu(X;q,t) = \sum_{\sigma:\, \mu \to \mathbb{Z}_+} q^{\mathrm{inv}(\sigma)}\, t^{\mathrm{maj}(\sigma)}\, X^\sigma. \tag{1.3.7}$$
Proof. See [HHL05].
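For a small shape, the right-hand side of (1.3.7) can be enumerated directly. The sketch below (our own illustration, not part of the text) treats the standard fillings of $\mu = (2,1)$, where the definitions above reduce maj to the single possible descent (the top cell over the bottom-left cell, with leg 0) and inv to the degenerate bottom-row pair:

```python
from itertools import permutations

# mu = (2,1): a = entry of the top cell, b = bottom-left, c = bottom-right.
results = {}
for a, b, c in permutations((1, 2, 3)):
    maj = 1 if a > b else 0          # descent a > b contributes leg + 1 = 1
    inv = 1 if b > c else 0          # degenerate bottom-row pair: sigma(u) > sigma(w)
    key = (inv, maj)
    results[key] = results.get(key, 0) + 1

# Coefficient of x1 x2 x3 in H~_{(2,1)}(X;q,t): 1 + 2q + 2t + qt,
# which sums to 6 = 3! at q = t = 1.
assert results == {(0, 0): 1, (1, 0): 2, (0, 1): 2, (1, 1): 1}
```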

1.4 Hilbert Series of $S_n$-Modules

To prove the positivity conjecture for Macdonald polynomials, Garsia and Haiman introduced certain bigraded $S_n$-modules $M_\mu$ [GH93]. We give several important definitions and results of their research here.

Let $\mu$ be a partition. We shall identify $\mu$ with its Ferrers diagram. Let $(p_1,q_1), \ldots, (p_n,q_n)$ denote the pairs $(l',a')$ of the cells of the diagram of $\mu$ arranged in lexicographic order, and set
$$\Delta_\mu(X,Y) = \Delta_\mu(x_1,\ldots,x_n;\, y_1,\ldots,y_n) = \det \left\| x_i^{p_j}\, y_i^{q_j} \right\|_{i,j=1,\ldots,n}.$$

Example 1.4.1. For $\mu = (3,1)$, $\{(p_j,q_j)\} = \{(0,0),(0,1),(0,2),(1,0)\}$, and
$$\Delta_\mu = \det \begin{pmatrix} 1 & y_1 & y_1^2 & x_1 \\ 1 & y_2 & y_2^2 & x_2 \\ 1 & y_3 & y_3^2 & x_3 \\ 1 & y_4 & y_4^2 & x_4 \end{pmatrix}.$$

This given, we let $M_\mu[X,Y]$ be the space spanned by all the partial derivatives of $\Delta_\mu(x,y)$. In symbols,
$$M_\mu[X,Y] = \mathcal{L}\!\left[\partial_x^p \partial_y^q\, \Delta_\mu(x,y)\right],$$
where $\partial_x^p = \partial_{x_1}^{p_1} \cdots \partial_{x_n}^{p_n}$ and $\partial_y^q = \partial_{y_1}^{q_1} \cdots \partial_{y_n}^{q_n}$. The natural action of a permutation $\sigma = (\sigma_1,\ldots,\sigma_n)$ on a polynomial $P(x_1,\ldots,x_n;\, y_1,\ldots,y_n)$ is the so-called diagonal action, which is defined by setting
$$\sigma P = P(x_{\sigma_1},\ldots,x_{\sigma_n};\, y_{\sigma_1},\ldots,y_{\sigma_n}).$$

Since σ4µ =±4µ according to the sign of σ, the spaceMµ necessarily remains invariant under this action.

Note that, since $\Delta_\mu$ is bihomogeneous of degree $n(\mu)$ in $x$ and $n(\mu')$ in $y$, we have the direct sum decomposition
$$M_\mu = \bigoplus_{h=0}^{n(\mu)} \bigoplus_{k=0}^{n(\mu')} H_{h,k}(M_\mu),$$
where $H_{h,k}(M_\mu)$ denotes the subspace of $M_\mu$ spanned by its bihomogeneous elements of degree $h$ in $x$ and degree $k$ in $y$. Since the diagonal action clearly preserves bidegree, each of the subspaces $H_{h,k}(M_\mu)$ is also $S_n$-invariant. Thus we see that $M_\mu$ has the structure of a bigraded module. We can write a bivariate Hilbert series such as
$$F_\mu(q,t) = \sum_{h=0}^{n(\mu)} \sum_{k=0}^{n(\mu')} t^h q^k\, \dim\!\left(H_{h,k}(M_\mu)\right). \tag{1.4.1}$$

In dealing with graded $S_n$-modules, we will generally want to record not only the dimensions of the homogeneous components but their characters. The generating function of the characters of its bihomogeneous components, which we shall refer to as the bigraded character of $M_\mu$, may be written in the form
$$\chi_\mu(q,t) = \sum_{h=0}^{n(\mu)} \sum_{k=0}^{n(\mu')} t^h q^k\, \operatorname{char} H_{h,k}(M_\mu).$$
We also have an associated bigraded Frobenius characteristic $\mathcal{F}(M_\mu)$, which is simply the image of $\chi_\mu(q,t)$ under the Frobenius map. Note that the Frobenius map from $S_n$-characters to symmetric functions homogeneous of degree $n$ is defined by
$$\Phi(\chi) = \frac{1}{n!} \sum_{\omega \in S_n} \chi(\omega)\, p_{\tau(\omega)}(X),$$
where $\tau(\omega)$ is the partition whose parts are the lengths of the cycles of the permutation $\omega$. Since the Schur function $s_\lambda(X)$ is the Frobenius image of the irreducible $S_n$-character $\chi^\lambda$, i.e., $\Phi(\chi^\lambda) = s_\lambda(X)$, we have, for any character $\chi = \sum_\lambda c_\lambda \chi^\lambda$,
$$\Phi(\chi) = \sum_\lambda c_\lambda\, s_\lambda(X).$$

Then we can define the Frobenius series of a doubly graded $S_n$-module $M_\mu$ to be
$$\mathcal{F} M_\mu = C_{M_\mu}(X;q,t) = \sum_{\lambda \vdash n} s_\lambda(X)\, C_{\lambda\mu}(q,t),$$
where $C_{\lambda\mu}(q,t)$ is the bivariate generating function of the multiplicity of $\chi^\lambda$ in the various bihomogeneous components of $M_\mu$. M. Haiman proved [Hai02a] that the bigraded Frobenius series $\mathcal{F} M_\mu$ is equal to the modified Macdonald polynomial $\tilde H_\mu[X;q,t]$.

Theorem 1.4.2.
$$C_{M_\mu}(X;q,t) = \tilde H_\mu(X;q,t),$$
which forces the equality
$$C_{\lambda\mu}(q,t) = \tilde K_{\lambda\mu}(q,t).$$
In particular, we have $\tilde K_{\lambda\mu} \in \mathbb{N}[q,t]$.

Proof. See [Hai02a, Hai02b].

Theorem 1.4.2 proves Macdonald's positivity conjecture, and in particular it implies that the dimension of the Garsia-Haiman module $M_\mu$ is $n!$, a statement formerly known as the "$n!$ conjecture".

Corollary 1.4.3. The dimension of the space $M_\mu$ is $n!$.
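Corollary 1.4.3 can be checked by brute force for $\mu = (2,1)$: here $\Delta_{(2,1)} = \det\|1\ \ y_i\ \ x_i\|_{i=1,2,3}$ (cell pairs $(0,0),(0,1),(1,0)$), and the span of all of its partial derivatives should have dimension $3! = 6$. The sketch below is our own illustration, with the determinant expanded by hand:

```python
from fractions import Fraction

# Delta_{(2,1)} stored as {exponent vector (x1,x2,x3,y1,y2,y3): coefficient}:
# y2*x3 - y3*x2 - y1*x3 + y1*x2 + x1*y3 - x1*y2.
delta = {(0, 0, 1, 0, 1, 0): 1, (0, 1, 0, 0, 0, 1): -1, (0, 0, 1, 1, 0, 0): -1,
         (0, 1, 0, 1, 0, 0): 1, (1, 0, 0, 0, 0, 1): 1, (1, 0, 0, 0, 1, 0): -1}

def diff(poly, v):
    """Partial derivative with respect to variable index v."""
    out = {}
    for e, c in poly.items():
        if e[v]:
            e2 = e[:v] + (e[v] - 1,) + e[v + 1:]
            out[e2] = out.get(e2, 0) + c * e[v]
    return {e: c for e, c in out.items() if c}

# Close {delta} under repeated first partial derivatives.
span, frontier = [delta], [delta]
while frontier:
    new = [d for p in frontier for v in range(6)
           for d in [diff(p, v)] if d and d not in span]
    uniq = []
    for d in new:
        if d not in uniq:
            uniq.append(d)
    span, frontier = span + uniq, uniq

# Rank over Q of the coefficient matrix = dim M_{(2,1)}.
monos = sorted({e for p in span for e in p})
rows = [[Fraction(p.get(e, 0)) for e in monos] for p in span]
rank = 0
for c in range(len(monos)):
    piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
    if piv is None:
        continue
    rows[rank], rows[piv] = rows[piv], rows[rank]
    for i in range(len(rows)):
        if i != rank and rows[i][c]:
            f = rows[i][c] / rows[rank][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
    rank += 1

assert rank == 6   # = 3!, as the n! theorem predicts for mu = (2,1)
```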

For a symmetric function $f$, we write $\partial_{p_1} f$ to denote the result of differentiating $f$ with respect to $p_1$ after expanding $f$ in terms of the power sum symmetric function basis. It is known that for any Schur function $s_\lambda$ we have
$$\partial_{p_1} s_\lambda = \sum_{\nu \to \lambda} s_\nu,$$
where "$\nu \to \lambda$" means that the sum is carried out over partitions $\nu$ that are obtained from $\lambda$ by removing one of its corners. When $\lambda$ is a partition, the well-known branching rule
$$\chi^\lambda \downarrow^{S_n}_{S_{n-1}} = \sum_{\nu \to \lambda} \chi^\nu$$

implies
$$\partial_{p_1} \tilde H_\mu(X;q,t) = \sum_{h=0}^{n(\mu)} \sum_{k=0}^{n(\mu')} t^h q^k\, \Phi\!\left(\operatorname{char} H_{h,k}(M_\mu) \downarrow^{S_n}_{S_{n-1}}\right).$$
Namely, $\partial_{p_1} \tilde H_\mu(X;q,t)$ gives the bigraded Frobenius characteristic of $M_\mu$ restricted to $S_{n-1}$. In particular, we must have
$$F_\mu(q,t) = \partial_{p_1}^n\, \tilde H_\mu(X;q,t),$$
where $F_\mu(q,t)$ is the bivariate Hilbert series defined in (1.4.1). Noting that the operator $\partial_{p_1}$ is the Hall scalar product adjoint of multiplication by the elementary symmetric function $e_1$, we can transform one of the Pieri rules given by Macdonald in [Mac88] into an expansion of $\partial_{p_1} \tilde H_\mu(X;q,t)$ in terms of the polynomials $\tilde H_\nu(X;q,t)$ whose index $\nu$ immediately precedes $\mu$ in the Young partial order (which is defined simply by containment of Young diagrams). To explain this more precisely, we first introduce some notation.

Let $\mu$ be a partition of $n$ and let $\{\nu^{(1)}, \nu^{(2)}, \ldots, \nu^{(d)}\}$ be the collection of partitions obtained by removing one of the corners of $\mu$. For a cell $c \in \mu$, we define its weight to be the monomial $w(c) = t^{l'(c)} q^{a'(c)}$, where $a'(c)$ denotes the coarm of $c$ and $l'(c)$ denotes the coleg of $c$, as defined in Section 1.1. We call the weights of the corners
$$\mu/\nu^{(1)},\ \mu/\nu^{(2)},\ \ldots,\ \mu/\nu^{(d)}$$
respectively
$$x_1,\ x_2,\ \ldots,\ x_d.$$
Moreover, if $x_i = t^{l'_i} q^{a'_i}$, then we also let
$$u_i = t^{l'_{i+1}} q^{a'_i} \qquad (\text{for } i = 1, 2, \ldots, d-1)$$
be the weights of what we might refer to as the inner corners of $\mu$. Finally, we set
$$u_0 = t^{l'_1}/q, \qquad u_d = q^{a'_d}/t.$$

Figure 1.6: An example of the 4-corner case, with corner cells $A_1, A_2, A_3, A_4$ and inner corner cells $B_0, B_1, B_2, B_3, B_4$.

Proposition 1.4.4.
$$\partial_{p_1} \tilde H_\mu(X;q,t) = \sum_{i=1}^{d} c_{\mu\nu^{(i)}}(q,t)\, \tilde H_{\nu^{(i)}}(X;q,t), \tag{1.4.2}$$
where
$$c_{\mu\nu^{(i)}}(q,t) = \frac{1}{(1-1/t)(1-1/q)} \cdot \frac{1}{x_i} \cdot \frac{\prod_{j=0}^{d} (x_i - u_j)}{\prod_{j=1,\, j \neq i}^{d} (x_i - x_j)}.$$

Proof. See [BBG+99].
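As a consistency check on Proposition 1.4.4 (our own computation, not from the dissertation), take $\mu = (n)$: there is a single corner with weight $x_1 = q^{n-1}$ and phantom inner corners $u_0 = 1/q$, $u_1 = q^{n-1}/t$, and the coefficient should reduce to $[n]_q$, matching $\partial_{p_1} \tilde H_{(n)} = [n]_q\, \tilde H_{(n-1)}$:

```python
from fractions import Fraction

def c_single_row(n, q, t):
    """Coefficient c_{mu nu}(q,t) from Proposition 1.4.2 for mu = (n):
    one corner x1 = q^{n-1}; u0 = t^{l'_1}/q = 1/q and u1 = q^{a'_1}/t = q^{n-1}/t."""
    x1 = q**(n - 1)
    u0, u1 = 1 / q, q**(n - 1) / t
    pref = 1 / ((1 - 1/t) * (1 - 1/q))
    return pref * (x1 - u0) * (x1 - u1) / x1

q, t = Fraction(2, 7), Fraction(3, 5)
n = 4
# The t-dependence cancels and the coefficient is exactly [n]_q.
assert c_single_row(n, q, t) == sum(q**i for i in range(n))
```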

Science Fiction

While studying the modules $M_\mu$, Garsia and Haiman made a huge collection of conjectures based on representation-theoretic heuristics and computer experiments. This collection of conjectures is called "science fiction", and most of them are still open. In particular, they conjectured the existence of a family of polynomials $G_D(X;q,t)$, indexed by arbitrary lattice square diagrams $D$, in which the modified Macdonald polynomials $\tilde H_\mu(X;q,t)$ can be embedded. To be more precise, we give some definitions first.

We say that two lattice square diagrams $D_1$ and $D_2$ are equivalent, and write $D_1 \approx D_2$, if and only if they differ by a sequence of row and column rearrangements.

The conjugate of $D$, denoted $D'$, is the diagram obtained by reflecting the diagram $D$ about the diagonal.


Figure 1.7: Examples of equivalent pairs.

We say $D$ is decomposable into $D_1$ and $D_2$ if no cell of $D_1$ lies in the same row or column as any cell of $D_2$; if $D$ is decomposable into $D_1$ and $D_2$, then we write $D = D_1 \times D_2$. This given, Garsia and Haiman conjectured the existence of a family of polynomials $\{G_D(X;q,t)\}_D$, indexed by diagrams $D$ equivalent to skew Young diagrams, with the following properties:

(1) $G_D(X;q,t) = \tilde H_\mu(X;q,t)$ if $D$ is the Young diagram of $\mu$.

(2) $G_{D_1}(X;q,t) = G_{D_2}(X;q,t)$ if $D_1 \approx D_2$.

(3) $G_D(X;q,t) = G_{D'}(X;t,q)$.

(4) $G_D(X;q,t) = G_{D_1}(X;q,t)\, G_{D_2}(X;q,t)$ if $D = D_1 \times D_2$.

(5) The polynomials $G_D$ satisfy the following equation:
$$\partial_{p_1} G_D(X;q,t) = \sum_{c \in D} q^{a'(c)}\, t^{l(c)}\, G_{D \setminus c}(X;q,t).$$

Proposition 1.4.5. Properties (3) and (5) imply that for any lattice square diagram $D$ we have
$$\partial_{p_1} G_D(X;q,t) = \sum_{c \in D} q^{a(c)}\, t^{l'(c)}\, G_{D \setminus c}(X;q,t). \tag{1.4.3}$$
Proof. Property (5) gives
$$\partial_{p_1} G_{D'}(X;t,q) = \sum_{c \in D'} t^{a'(c)}\, q^{l(c)}\, G_{D' \setminus c}(X;t,q).$$
Applying property (3) to the left hand side of this equation gives
$$\partial_{p_1} G_D(X;q,t) = \partial_{p_1} G_{D'}(X;t,q) = \sum_{c \in D'} t^{a'(c)}\, q^{l(c)}\, G_{D' \setminus c}(X;t,q).$$
Note that conjugation exchanges arms with legs and coarms with colegs: $l(c)$ for $c \in D'$ equals $a(c)$ of the corresponding cell of $D$, and $a'(c)$ for $c \in D'$ equals $l'(c)$ of the corresponding cell of $D$. Hence, applying property (3) to the right hand side, we get
$$\partial_{p_1} G_D(X;q,t) = \sum_{c \in D} q^{a(c)}\, t^{l'(c)}\, G_{D \setminus c}(X;q,t).$$

By Proposition 1.3.16, we can at least see that conditions (1) and (3) are consistent. It is possible to use these properties to explicitly determine the polynomials $G_D(X;q,t)$ in special cases. In particular, we work out the details in the hook case, which will provide useful information for later sections.

Hook Case

For convenience, we set the following notation for diagrams of hooks and broken hooks:
$$[l,b,a] = \begin{cases} (a+1,\, 1^l) & \text{if } b = 1, \\ (1^l) \times (a) & \text{if } b = 0. \end{cases}$$
Namely, $[l,1,a]$ represents a hook with leg $l$ and arm $a$, and $[l,0,a]$ represents the product of a column of length $l$ by a row of length $a$. Then in the hook case, property (5) and (1.4.3) yield the recursions
$$\partial_{p_1} G_{[l,1,a]} = [l]_t\, G_{[l-1,1,a]} + t^l\, G_{[l,0,a]} + q[a]_q\, G_{[l,1,a-1]}, \tag{1.4.4}$$
$$\partial_{p_1} G_{[l,1,a]} = t[l]_t\, G_{[l-1,1,a]} + q^a\, G_{[l,0,a]} + [a]_q\, G_{[l,1,a-1]}, \tag{1.4.5}$$

where $[l]_t = (1-t^l)/(1-t)$ and $[a]_q = (1-q^a)/(1-q)$. Subtracting (1.4.5) from (1.4.4) and simplifying gives the following two recursions:
$$G_{[l,0,a]} = \frac{t^l-1}{t^l-q^a}\, G_{[l-1,1,a]} + \frac{1-q^a}{t^l-q^a}\, G_{[l,1,a-1]}, \tag{1.4.6}$$
$$G_{[l,1,a-1]} = \frac{1-t^l}{1-q^a}\, G_{[l-1,1,a]} + \frac{t^l-q^a}{1-q^a}\, G_{[l,0,a]}. \tag{1.4.7}$$

Transforming the Pieri rule ([Mac88]) gives the identity
$$\tilde H_{(1^l)}(X;q,t)\, \tilde H_{(a)}(X;q,t) = \frac{t^l-1}{t^l-q^a}\, \tilde H_{(a+1,1^{l-1})}(X;q,t) + \frac{1-q^a}{t^l-q^a}\, \tilde H_{(a,1^l)}(X;q,t),$$
which, by comparison with (1.4.6), implies $G_{[l,0,a]} = \tilde H_{(1^l)}(X;q,t)\, \tilde H_{(a)}(X;q,t)$ with the initial conditions $G_{[l,1,a]} = \tilde H_{(a+1,1^l)}(X;q,t)$. For $\mu = (a+1,1^l)$, the formula in Proposition 1.4.4 gives
$$\partial_{p_1} \tilde H_{(a+1,1^l)} = [l]_t\, \frac{t^{l+1}-q^a}{t^l-q^a}\, \tilde H_{(a+1,1^{l-1})} + [a]_q\, \frac{t^l-q^{a+1}}{t^l-q^a}\, \tilde H_{(a,1^l)}. \tag{1.4.8}$$

On the other hand, by using (1.4.6) in (1.4.4), we can eliminate the broken-hook term and get the following equation:
$$\partial_{p_1} G_{[l,1,a]} = [l]_t\, \frac{t^{l+1}-q^a}{t^l-q^a}\, G_{[l-1,1,a]} + [a]_q\, \frac{t^l-q^{a+1}}{t^l-q^a}\, G_{[l,1,a-1]}, \tag{1.4.9}$$
which is exactly consistent with (1.4.8). We set the Hilbert series for the hook shape diagram
$$F_{[l,b,a]} = n!\, G_{[l,b,a]}\big|_{p_1^n};$$
then (1.4.4) yields the Hilbert series recursion
$$F_{[l,1,a]} = [l]_t\, F_{[l-1,1,a]} + t^l\, F_{[l,0,a]} + q[a]_q\, F_{[l,1,a-1]}.$$

Note that we find in [Mac88] that
$$\tilde H_{(1^l)}(X;q,t) = (t)_l\, h_l\!\left[\frac{X}{1-t}\right] \qquad \text{and} \qquad \tilde H_{(a)}(X;q,t) = (q)_a\, h_a\!\left[\frac{X}{1-q}\right],$$
where $(q)_m = (1-q)(1-q^2)\cdots(1-q^m)$. Combining this with $G_{[l,0,a]} = \tilde H_{(1^l)}(X;q,t)\, \tilde H_{(a)}(X;q,t)$ gives
$$F_{[l,0,a]} = \binom{l+a}{a}\, [l]_t!\, [a]_q!, \tag{1.4.10}$$
where $[l]_t!$ and $[a]_q!$ are the $t$- and $q$-analogues of the factorials.

We finish this section by giving a formula for the Macdonald hook polynomials, which we can get by applying (1.4.5) recursively.

Theorem 1.4.6.
$$\tilde H_{(a+1,1^l)} = \sum_{i=0}^{l} \frac{(t)_l}{(t)_{l-i}} \cdot \frac{(q)_a}{(q)_{a+i}} \cdot \frac{t^{l-i}-q^{a+i+1}}{1-q^{a+i+1}}\, \tilde H_{(1^{l-i})}\, \tilde H_{(a+i+1)}.$$

Chapter 2

Combinatorial Formula for the Hilbert Series

In this chapter, we construct a combinatorial formula for the Hilbert series $F_\mu$ in (1.4.1) as a sum over standard Young tableaux (SYT) of shape $\mu$. We provide three different proofs of this result, and in Section 2.2.4 we introduce a way of associating fillings to the corresponding standard Young tableaux. We begin by recalling the definitions of $q$-analogs:
$$[n]_q = 1 + q + \cdots + q^{n-1}, \qquad [n]_q! = [1]_q [2]_q \cdots [n]_q.$$

2.1 Two-Column Shape Case

Let $\mu = (2^a, 1^b)$ (so $\mu' = (a+b,\, a)$). We use $\tilde F_{\mu'}(q,t)$ to denote the Hilbert series of the modified Macdonald polynomial $t^{n(\mu)} \tilde H_\mu(X;1/t,q)$. Note that from the combinatorial formula (1.3.7) for Macdonald polynomials, the coefficient of $x_1 x_2 \cdots x_n$ in $\tilde H_\mu\!\left(x;\frac{1}{t},q\right)$ is given

by
$$\tilde F_{\mu'}(q,t) := \sum_{\sigma \in S_n} q^{\mathrm{maj}(\sigma,\mu')}\, t^{\mathrm{coinv}(\sigma,\mu')}, \tag{2.1.1}$$
where $\mathrm{maj}(\sigma,\mu')$ and $\mathrm{coinv}(\sigma,\mu')$ are as defined in Section 1.3.3. Now we define
$$\tilde G_{\mu'}(q,t) := \sum_{T \in \mathrm{SYT}(\mu')} \prod_{i=1}^{n} [a_i(T)]_t \cdot \prod_{j=1}^{\mu'_2} \left(1 + q\, t^{b_j(T)}\right), \tag{2.1.2}$$
where $a_i(T)$ and $b_j(T)$ are determined in the following way: starting with the cell containing 1, add the cells containing $2, 3, \ldots, i$, one at a time. After adding the cell containing $i$, $a_i(T)$ counts the number of columns having the same height as the column containing the cell just added with $i$. And $b_j(T)$ counts the number of cells in the first row strictly to the right of the cell $(2,j)$ which contain a bigger entry than the entry in $(2,j)$. Then we have the following theorem:

Theorem 2.1.1. We have
$$\tilde F_{\mu'}(q,t) = \tilde G_{\mu'}(q,t).$$
Remark 2.1.2. The motivation for the conjectured formula comes from equation (5.11) in [Mac98], p. 229. Equation (5.11) is the first $t$-factor of our formula, i.e., the case $q = 0$. Hence our formula extends the formula for Hall-Littlewood polynomials to Macdonald polynomials.

Example 2.1.3. Let $\mu = (2,1)$. We calculate $\tilde F_\mu(q,t) = \sum_{\sigma \in S_3} q^{\mathrm{maj}(\sigma,\mu')}\, t^{\mathrm{coinv}(\sigma,\mu')}$ first. The possible fillings of shape $(2,1)$ are the following (second-row entry on top):
$$\begin{matrix}1\\2\ 3\end{matrix} \qquad \begin{matrix}1\\3\ 2\end{matrix} \qquad \begin{matrix}2\\1\ 3\end{matrix} \qquad \begin{matrix}2\\3\ 1\end{matrix} \qquad \begin{matrix}3\\1\ 2\end{matrix} \qquad \begin{matrix}3\\2\ 1\end{matrix}$$
From the above fillings, reading from the left, we get
$$\tilde F_{(2,1)}(q,t) = t + 1 + qt + 1 + qt + q = 2 + q + t + 2qt. \tag{2.1.3}$$

Now we consider $\tilde G_{(2,1)}(q,t)$ over the two standard Young tableaux
$$T_1 = \begin{matrix}2\\1\ 3\end{matrix}, \qquad T_2 = \begin{matrix}3\\1\ 2\end{matrix}.$$
For the first SYT $T_1$, when we add 1, there is only one column with height 1, so we have $a_1(T_1) = 1$.

Adding the cell containing 2 on top of the cell containing 1 makes the first column have height 2. There is only one column with height 2, which gives $a_2(T_1) = 1$ again. Adding the cell with 3 gives the factor 1 as well, since the column containing the cell with 3 has height 1 and there is only one column with height 1. Hence for the SYT $T_1$, the first factor is 1, i.e.,
$$a_i(T_1): \quad 1 \xrightarrow{[1]_t} \begin{matrix}2\\1\end{matrix} \xrightarrow{[1]_t} \begin{matrix}2\\1\ 3\end{matrix} \xrightarrow{[1]_t} \quad \Longrightarrow \quad \prod_{i=1}^{3} [a_i(T_1)]_t = 1.$$
To determine $b_j(T_1)$, we compare with the entries in the first row to the right of the cell in the second row. In $T_1$, we only have one cell in the second row, containing the entry 2. The entry 3 is bigger than 2, and the cell containing 3 is in the first row to the right of the first column, so we get $b_1(T_1) = 1$, i.e.,
$$\prod_{j=1}^{\mu'_2} \left(1 + q\, t^{b_j(T_1)}\right) = 1 + qt.$$
Hence the first standard Young tableau $T_1$ contributes $1 \cdot (1+qt)$.

We repeat the same procedure for $T_2$ to get $a_i(T_2)$ and $b_j(T_2)$. If we add the second cell, containing 2, to the right of the cell with 1, it makes two columns of height 1, so we get $a_2(T_2) = 2$, which gives the factor $[2]_t = 1+t$. Adding the last cell gives the factor 1, so the first factor is $1+t$:
$$a_i(T_2): \quad 1 \xrightarrow{[1]_t} 1\ 2 \xrightarrow{[2]_t} \begin{matrix}3\\1\ 2\end{matrix} \xrightarrow{[1]_t} \quad \Longrightarrow \quad \prod_{i=1}^{3} [a_i(T_2)]_t = 1+t.$$
Now we consider $b_j(T_2)$. Since 3 is the biggest entry in this case and it is in the second row, $b_1(T_2) = 0$, which makes the second factor $1+q$. Hence from the SYT $T_2$ we get
$$\prod_{i=1}^{3} [a_i(T_2)]_t \cdot \prod_{j=1}^{\mu'_2} \left(1 + q\, t^{b_j(T_2)}\right) = (1+t)(1+q).$$

We add the two polynomials from $T_1$ and $T_2$ to get $\tilde G_{(2,1)}(q,t)$, and the resulting polynomial is
$$\tilde G_{(2,1)}(q,t) = 1 \cdot (1+qt) + (1+t)(1+q) = 2 + q + t + 2qt. \tag{2.1.4}$$
We compare (2.1.3) and (2.1.4) and confirm that
$$\tilde F_{(2,1)}(q,t) = \tilde G_{(2,1)}(q,t).$$
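The six fillings of Example 2.1.3 can be generated programmatically. The sketch below is our own check: it uses the reduction of maj and coinv to single comparisons for this small shape (the lone possible descent, and the degenerate bottom-row coinversion), and it reproduces (2.1.3):

```python
from itertools import permutations

# Fillings of shape (2,1): a = entry of the second-row cell (above column 1),
# b, c = first-row entries left to right.
poly = {}
for a, b, c in permutations((1, 2, 3)):
    maj = 1 if a > b else 0          # descent: second-row entry over a smaller entry
    coinv = 1 if b < c else 0        # degenerate bottom-row coinversion
    poly[(maj, coinv)] = poly.get((maj, coinv), 0) + 1

# Sum of q^maj t^coinv over S_3: 2 + q + t + 2qt, matching (2.1.3).
assert poly == {(0, 0): 2, (0, 1): 1, (1, 1): 2, (1, 0): 1}
```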

2.1.1 The Case When $\mu' = (n-1,\, 1)$

We give a proof of Theorem 2.1.1 when $\mu = (2,1^{n-2})$, i.e., $\mu' = (n-1,1)$.

Proposition 2.1.4. We have
$$\tilde F_{(n-1,1)}(q,t) = \tilde G_{(n-1,1)}(q,t).$$

Proof. First of all, we consider the case $q = 1$. Then
$$\tilde F_{\mu'}(1,t) = \left.\sum_{\sigma \in S_n} q^{\mathrm{maj}(\sigma,\mu')}\, t^{\mathrm{coinv}(\sigma,\mu')}\right|_{q=1} = \sum_{\sigma \in S_n} t^{\mathrm{coinv}(\sigma,\mu')},$$
and for any entry $\sigma_1$ in the cell of the second row, the sum of $t^{\mathrm{coinv}(\sigma_2\sigma_3\cdots\sigma_n)}$ over the remaining entries is always $[n-1]_t!$. So we have
$$\tilde F_{(n-1,1)}(1,t) = \sum_{\sigma \in S_n} t^{\mathrm{coinv}(\sigma,\mu')} = n \cdot [n-1]_t!. \tag{2.1.5}$$

On the other hand, to calculate $\tilde G_{(n-1,1)}(1,t)$, we consider a standard Young tableau $T$ with $j$ in the second row. Then, by the properties of standard Young tableaux, the first row contains the elements $1, 2, \ldots, j-1, j+1, \ldots, n$, from left to right:
$$T = \begin{matrix} j & & & \\ 1\ \cdots\ j{-}1 & j{+}1 & \cdots & n \end{matrix}$$
Then the first factor $\prod_{i=1}^{n} [a_i(T)]_t$ becomes
$$[j-1]_t! \cdot [j-1]_t \cdots [n-2]_t = [n-2]_t! \cdot [j-1]_t,$$
and since there are $n-j$ numbers bigger than $j$ in the first row to the right of the first column, $b_1(T)$ is $n-j$. Thus we have
$$\tilde G_{(n-1,1)}(1,t) = \sum_{j=2}^{n} [n-2]_t! \cdot [j-1]_t \cdot \left(1 + t^{n-j}\right).$$

˜

G(n−1,1)(1, t) =

n X j=2

[n−2]t!·[j−1]t· 1 +tn−j

= [n−2]t!  

n X j=2

[j−1]t·(1 +tn−j)  

= [n−2]t!

t−1  

n X j=2

(tj−1−1)(1 +tn−j)  

= [n−2]t!

t−1 (t+· · ·+t

n−1) + (n1)(tn−11)(1 +· · ·+tn−2)

= [n−2]t!

t−1 ·(t

n−11)·n

= [n−2]t!·[n−1]t·n

= n·[n−1]t!. (2.1.6)

By comparing (2.1.5) and (2.1.6), we conclude that ˜F(n−1,1)(1, t) = ˜G(n−1,1)(1, t).

In general, to compute ˜G(n−1,1)(q, t), we only need to putqin front oftn−j and simply we get

˜

G(n−1,1)(q, t) =

n X j=2

[n−2]t![j−1]t(1 +qtn−j). (2.1.7)

Now we compute ˜F(n−1,1)(q, t). If 1 is the element of the cell in the second row, then there is no

descent which makes maj zero, so as we calculated before,tcoinv(σ,µ0)= [n1]t!. When 2 is in the

second row, there are two different cases when 1 is right below 2 and when 3 or bigger numbers

up to ncomes below 2. In the first case, since the square containing 2 contributes a descent, it gives a qfactor. So the whole factor becomes q(tn−2·[n2]

t!). In the second case, since there

are no descents, the factor has onlyt’s, and it gives usPn j=3t

n−j·[n2]

t! in the end. In general,

let’s say the element i comes in the second row. Then we consider two cases when the element below iis smaller thaniand when it is bigger thani. And these two different cases give us

q tn−2·[n−2]t! +· · ·+tn−i−1·[n−2]t!+ n X j=i+1

tn−j·[n−2]t!.
