### CUNY Academic Works

### Dissertations, Theses, and Capstone Projects | CUNY Graduate Center

### 5-2019

### A Differential Algebra Approach to Commuting Polynomial Vector Fields and to Parameter Identifiability in ODE Models

### Peter Thompson

The Graduate Center, City University of New York


More information about this work at: https://academicworks.cuny.edu/gc_etds/3229

Discover additional works at: https://academicworks.cuny.edu

This work is made publicly available by the City University of New York (CUNY).

Contact: AcademicWorks@cuny.edu

### A DIFFERENTIAL ALGEBRA APPROACH TO COMMUTING POLYNOMIAL VECTOR FIELDS AND TO PARAMETER IDENTIFIABILITY IN ODE MODELS

### by

### PETER A. THOMPSON

A dissertation submitted to the Graduate Faculty in Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York

### 2019

© 2019 PETER A. THOMPSON

All Rights Reserved

### This manuscript has been read and accepted by the Graduate Faculty in Mathematics in satisfaction of the dissertation requirement for the degree of Doctor of Philosophy.

### Professor Alexey Ovchinnikov

### Date Chair of Examining Committee

### Professor Ara Basmajian

### Date Executive Officer

### Professor Richard C. Churchill

### Professor Russell Miller

### Professor Alexey Ovchinnikov

### Supervisory Committee

### THE CITY UNIVERSITY OF NEW YORK

### Abstract

### A DIFFERENTIAL ALGEBRA APPROACH TO COMMUTING POLYNOMIAL VECTOR FIELDS AND TO PARAMETER IDENTIFIABILITY IN ODE MODELS

### by

### PETER A. THOMPSON

### Adviser: Professor Alexey Ovchinnikov

In the first part, we study the problem of characterizing polynomial vector fields that commute with a given polynomial vector field. One motivating factor is that we can write down solution formulas for an ODE that corresponds to a planar vector field that possesses a linearly independent commuting vector field. This problem is also central to the question of linearizability of vector fields. We first show that a linear vector field admits a full complement of commuting vector fields.

Then we study a type of planar vector field for which there exists an upper bound on the degree of a commuting polynomial vector field. Finally, we turn our attention to conservative Newton systems, which form a special class of Hamiltonian systems, and show the following result. Let f ∈ K[x], where K is a field of characteristic zero, and let d be the derivation that corresponds to the differential equation ẍ = f(x) in a standard way. We show that if deg f > 2, then any K-derivation commuting with d is equal to d multiplied by a conserved quantity. For example, the classical elliptic equation ẍ = 6x² + a, where a ∈ C, falls into this category.

In the second part, we study structural identifiability of parameterized ordinary differential equation models of physical systems, for example, systems arising in biology and medicine. A parameter is said to be structurally identifiable if its numerical value can be determined from perfect observation of the observable variables in the model. Structural identifiability is necessary for practical identifiability. We study structural identifiability via differential algebra. In particular, we use characteristic sets. A system of ODEs can be viewed as a set of differential polynomials in a differential ring, and the consequences of this system form a differential ideal. This differential ideal can be described by a finite set of differential equations called a characteristic set. The technique of studying identifiability via a set of special equations, sometimes called “input-output” equations, has been in use for the past thirty years. However, it is still a challenge to provide rigorous justification for some conclusions that have been drawn in published studies. Our main result is on linear systems, which are a topic of current interest. We show that for a linear system of ODEs with one output, the coefficients of a monic characteristic set are identifiable. This result is then generalized, with additional hypotheses, to nonlinear systems.

## Acknowledgments

The results of Chapter 1 are joint work with Alexey Ovchinnikov and Joel Nagloo. The results of Chapter 2 are joint work with Alexey Ovchinnikov and Gleb Pogudin.

This work was partially supported by the NSF grants CCF-1563942, CCF-0952591, DMS-1700336, DMS-1606334, and DMS-1760448, by the NSA grant #H98230-15-1-0245, by CUNY CIRG #2248, and by PSC-CUNY grants #69827-00 47, #60456-00 48, and #60098-00 48. I am grateful to the CCiS at CUNY Queens College for the computational resources.


## Contents

### 1 Commuting polynomial vector fields
### 1.1 Introduction
### 1.2 Basic terminology and related results
### 1.3 The linear case
### 1.4 A class of derivations admitting upper bounds on the degree of a commuting derivation
### 1.4.1 The utility of upper bounds
### 1.4.2 Main result
### 1.5 Conservative Newton Systems

### 2 Identifiability for polynomial ODE models
### 2.1 Introduction
### 2.2 Notation and definitions
### 2.3 Identifiability of coefficients of a characteristic set
### 2.3.1 Definitions and basic properties
### 2.3.2 Results on identifiability
### 2.4 Identifiability for input-output equations in systems with one output

### Bibliography

## Chapter 1

## Commuting polynomial vector fields

### 1.1 Introduction

We study the problem of characterizing polynomial vector fields that commute with a given polynomial vector field. One motivating factor is that we can write down solution formulas for an ODE that corresponds to a planar vector field that possesses a linearly independent (transversal) commuting vector field (see Theorem 1.2.1). This problem is also central to the question of linearizability of vector fields (cf. Giné and Grau (2006) and Sabatini (1997)). In what follows, we will use the standard correspondence between (polynomial) vector fields and derivations on (polynomial) rings.

In Section 1.3, we show that a K-derivation on K[x_1, …, x_n] defined by linear polynomials admits a full complement of commuting K-linearly independent K-derivations. In Section 1.4, we prove an upper bound on the degree of any derivation commuting with a K-derivation on K[x, y] of the form

d = f_1 · ∂/∂x + f_2 · ∂/∂y

satisfying f_1 f_2 ≠ 0, deg_y ∂f_2/∂x < deg_y f_2, deg_y(y f_1) < deg_y f_2, deg_x ∂f_1/∂y < deg_x f_1, and deg_x(x f_2) < deg_x f_1. In Section 1.5, we show that a nonlinear planar polynomial derivation corresponding to a conservative Newton system does not admit a linearly independent commuting derivation. Let

d = y ∂/∂x + f(x) ∂/∂y    (1.1)

be a K-derivation, where f is a polynomial with coefficients in a field K of characteristic zero. This derivation corresponds to a conservative Newton system, and so to the differential equation ẍ = f(x).

Observe that d is a special type of Hamiltonian derivation: d(x) = ∂H/∂y and d(y) = −∂H/∂x, where H = (1/2)y² − ∫ f(x) dx. It is shown in (Nowicki, 1994, Corollary 7.1.5) that the set M_d of all polynomial derivations that commute with d forms a K[H]-module. In this work, we show that, for every such d, the module M_d is of rank 1 if and only if deg f > 2. For example, the classical elliptic equation ẍ = 6x² + a, where a ∈ C, falls into this category.

A characterization of commuting planar derivations in terms of a common Darboux polynomial is given in Petravchuk (2010). This was generalized to higher dimensions in Li and Du (2012). In Choudhury and Guha (2013), Darboux polynomials are used to find linearly independent commuting vector fields and to construct linearizations of the vector fields. In the case in which K is the real numbers, our result generalizes a result on conservative Newton systems with a center to the case in which a center may or may not be present. A vector field has a center at a point P if there is a punctured neighborhood of P in which every solution curve is a closed loop. A center is called isochronous if every such loop has the same period. It was proven in (Villarini, 1992, Theorem 4.5) that, if D_1 and D_2 are commuting vector fields orthogonal at noncritical points, then any center of D_1 is isochronous. The hypothesis of this result can be relaxed to the case in which D_2 is transversal to D_1 at noncritical points (cf. (Sabatini, 1997, Theorem, p. 92)). In light of this result, one approach to showing the nonexistence of a vector field commuting with D is to show that D has a non-isochronous center. In fact, Amel’kin (Amel’kin, 1977, Theorem 11) has shown that if the system of ordinary differential equations (ODEs) corresponding to derivation (1.1) is not linear and has a center at the origin, then there is no transversal vector field that commutes with d.

As far as we are aware, there has not been a standard method to show the nonexistence of a transversal polynomial vector field in the absence of a non-isochronous center. We develop our own method to do this, which includes building a triangular system of differential equations. One technique we use in approaching this system involves constructing a family of pairs of commuting derivations on rings of the form K[x^{1/t}, x^{−1/t}, y] (see Lemma 1.5.7) and using recurrence relations.

### It is impossible to remove the condition deg f > 2 from the statement of our main result, as every non-zero derivation of degree less than 2 commutes with another transversal derivation (see Proposition 1.2.1). The form of d in our main result implies that d is divergence free (which is the same as Hamiltonian in the planar case). It is not possible to strengthen our result to the case in which d is merely assumed to be divergence free of degree at least 2, as shown in Example 1.2.1 and Proposition 1.2.2.

### 1.2 Basic terminology and related results

### We direct the reader to Kaplansky (1957) and Kolchin (1973) for the basics of a ring with a derivation.

Definition 1.2.1. An S-derivation on a commutative ring R with subring S is a map d : R → R such that d(S) = 0 and, for all a, b ∈ R,

d(a + b) = d(a) + d(b) and d(ab) = d(a) · b + a · d(b).

Definition 1.2.2. Let K be a field. A non-zero K-derivation d on K[x_1, …, x_n] is called integrable if there exist commuting K-derivations δ_1, …, δ_{n−1} on K[x_1, …, x_n] that are linearly independent from d over K(x_1, …, x_n) and commute with d, that is, for all a ∈ K[x_1, …, x_n] and all i, j with 1 ≤ i, j ≤ n − 1,

d(δ_i(a)) = δ_i(d(a)) and δ_i(δ_j(a)) = δ_j(δ_i(a)).
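Since a K-derivation is determined by its values on the generators, the commutation conditions above can be verified on x_1, …, x_n alone. A minimal sketch in Python with sympy (the helper names are ours, not from the text):

```python
import sympy as sp

x, y = sp.symbols("x y")

def apply_derivation(d, p):
    # A derivation is given by its images on the generators; on a polynomial p
    # it acts by the chain rule: d(p) = sum over generators v of (dp/dv) * d(v).
    return sum(sp.diff(p, v) * image for v, image in d.items())

def commute(d1, d2):
    # d1 and d2 commute iff d1(d2(v)) = d2(d1(v)) for every generator v.
    return all(
        sp.simplify(apply_derivation(d1, d2[v]) - apply_derivation(d2, d1[v])) == 0
        for v in d1
    )

# The Euler derivation d(x) = x, d(y) = y commutes with delta(x) = y, delta(y) = x.
d = {x: x, y: y}
delta = {x: y, y: x}
print(commute(d, delta))  # True
```

The same helper checks any pair of planar polynomial derivations given by their images on x and y.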

### The following is a result that follows easily from classical theory, although to the best of our knowledge it is not explicitly stated in this form.

Theorem 1.2.1. Let d and δ be R-derivations on R(x, y) defined by

d(x) = f_1(x, y), d(y) = f_2(x, y), δ(x) = g_1(x, y), δ(y) = g_2(x, y).

Let (x_0, y_0) ∈ R². Suppose that d and δ commute and there is no (λ_1, λ_2) ∈ R² \ {(0, 0)} such that

λ_1 (f_1(x_0, y_0), f_2(x_0, y_0))^T = λ_2 (g_1(x_0, y_0), g_2(x_0, y_0))^T.

Then the initial value problem

ẋ = f_1(x, y), ẏ = f_2(x, y), x(0) = x_0, y(0) = y_0

has a solution given by

(x(t), y(t)) = F^{−1}(t, 0),

where

F(x, y) = ( ∫_{x_0}^{x} (g_2(r, y)/Δ(r, y)) dr + ∫_{y_0}^{y} (−g_1(x_0, s)/Δ(x_0, s)) ds ,
            ∫_{x_0}^{x} (−f_2(r, y)/Δ(r, y)) dr + ∫_{y_0}^{y} (f_1(x_0, s)/Δ(x_0, s)) ds ),

and Δ(x, y) = f_1(x, y) g_2(x, y) − f_2(x, y) g_1(x, y).

Proof. Suppose (x(t), y(t)) is a solution to the initial value problem. A straightforward calculation shows that F(x(t), y(t)) = (t, 0). Observing that the Jacobian determinant of F does not vanish at (x_0, y_0), we see that F is a diffeomorphism in a neighborhood of (x_0, y_0). We conclude that (x(t), y(t)) = F^{−1}(t, 0).

Example 1.2.1. Consider the initial value problem

ẋ = 1 + x², ẏ = −2xy, x(0) = x_0, y(0) = y_0,

where x_0 and y_0 are real numbers and y_0 ≠ 0. The corresponding derivation is

d(x) = 1 + x², d(y) = −2xy,

and we observe that the derivation

δ(x) = 0, δ(y) = y

commutes with d, and that d and δ are independent at (x_0, y_0). Using the above formula, we obtain the solution

x(t) = tan(t + tan^{−1} x_0), y(t) = y_0 (1 + x_0²) cos²(t + tan^{−1} x_0).
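The closed-form solution of Example 1.2.1 can be checked symbolically; a quick verification sketch with sympy (not part of the text):

```python
import sympy as sp

t, x0, y0 = sp.symbols("t x0 y0", real=True)

# Claimed solution of x' = 1 + x^2, y' = -2*x*y with x(0) = x0, y(0) = y0.
x = sp.tan(t + sp.atan(x0))
y = y0 * (1 + x0**2) * sp.cos(t + sp.atan(x0)) ** 2

# The pair satisfies both differential equations and the initial conditions.
assert sp.simplify(sp.diff(x, t) - (1 + x**2)) == 0
assert sp.simplify(sp.diff(y, t) + 2 * x * y) == 0
assert sp.simplify(x.subs(t, 0) - x0) == 0
assert sp.simplify(y.subs(t, 0) - y0) == 0
```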

### We make some observations, in the form of the following propositions:

### Proposition 1.2.1. Let K be a field. Every non-zero K-derivation of degree less than or equal to 1 on K[x, y] is integrable.

A proof for n variables is given in Proposition 1.3.1. We give a more explicit proof for the case of two variables here.

Proof. We will consider the following cases. The symbols a, b, c, e, f, and g are taken to be elements of K.

Case 0: d(x) = c, d(y) = g. Observe that d commutes with any constant derivation.

Case 1: d(x) = ax, d(y) = ay, a ≠ 0. Observe that d commutes with δ, where δ(x) = y, δ(y) = x.

Case 2: d(x) = ax + by, d(y) = ex + fy, different from Case 1. Observe that d commutes with δ, where δ(x) = x, δ(y) = y.

Case 3: d(x) = ax + by + c, d(y) = ex + fy + g, af − be ≠ 0. In this case, d is equivalent to a derivation from Case 1 or Case 2 via a linear change of coordinates. Let (x_0, y_0) be the solution to the system ax + by + c = ex + fy + g = 0. Now let u = x − x_0 and v = y − y_0, so that d(u) = au + bv and d(v) = eu + fv.

Case 4: d(x) = ax + by + c, d(y) = ex + fy + g, af − be = 0.

(a) a = b = 0, different from Case 0. If e ≠ 0, then d commutes with and is transversal to δ given by δ(x) = −g/e, δ(y) = 0. If f ≠ 0, then d commutes with and is transversal to δ given by δ(x) = 0, δ(y) = −g/f.

(b) At least one of a and b is not 0. First assume a ≠ 0. If f = e = 0, then this is equivalent to Case 4a by swapping the roles of x and y. Assume at least one of f and e is not 0. By the condition af − be = 0, it must be that e ≠ 0. Using the coordinate z = ex − ay instead of x puts this into the form of Case 4a. Next, assume b ≠ 0. If f = e = 0, then this is equivalent to Case 4a. Assume at least one of f and e is not 0. By the condition af − be = 0, it must be that f ≠ 0. Using the coordinate z = fx − by instead of x puts this into the form of Case 4a.

Definition 1.2.3. Let K be a field and let d be a K-derivation on K[x_1, …, x_n]. We say d is divergence-free if

∑_{i=1}^{n} ∂/∂x_i (d(x_i)) = 0.
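For instance, the derivation of Example 1.2.1 is divergence-free; a one-line check with sympy (the helper name is ours):

```python
import sympy as sp

x, y = sp.symbols("x y")

def divergence(d, gens):
    # Definition 1.2.3: the sum over the generators x_i of the partial
    # derivative of d(x_i) with respect to x_i.
    return sp.simplify(sum(sp.diff(d[v], v) for v in gens))

# Example 1.2.1: d(x) = 1 + x^2, d(y) = -2*x*y.
d = {x: 1 + x**2, y: -2 * x * y}
print(divergence(d, (x, y)))  # 0
```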

Proposition 1.2.2. Let K be a field of characteristic 0. There exist integrable divergence-free K-derivations on K[x, y] that are not coordinate-change equivalent to a derivation of degree less than or equal to 1.

Proof. The K-derivation defined by the same equations as d from Example 1.2.1 is divergence-free and integrable. Note that the vector field corresponding to d vanishes only at the points (√−1, 0) and (−√−1, 0) in K². Since char K = 0, these points are distinct. After a coordinate change, the number of points in K² at which a vector field vanishes does not change. The vector field of any derivation of degree less than or equal to 1 vanishes at zero, one, or infinitely many points. We conclude that d is not coordinate-change equivalent to a derivation of degree no greater than 1.
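The vanishing-point count used in the proof can be checked directly over the complex numbers; a sketch with sympy (not from the text):

```python
import sympy as sp

x, y = sp.symbols("x y")

# Zeros of the vector field (1 + x^2, -2*x*y): exactly the two points (±i, 0).
zeros = sp.solve([1 + x**2, -2 * x * y], [x, y], dict=True)
print(len(zeros))  # 2
```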

### 1.3 The linear case

We show in Proposition 1.3.1 that every nonzero K-derivation defined by polynomials of degree no greater than 1 on K[x_1, …, x_n] is integrable (see Definition 1.2.2). We will make use of the following lemma.

Lemma 1.3.1. Let K be a field. Let ∂ be a non-zero K-derivation on the polynomial ring K[x_1, …, x_n] such that

∂(x) = Cx + a,

where x = (x_1, …, x_n)^T, C is the companion matrix of a polynomial over K of degree n, and a is an n × 1 matrix with entries in K. Then there exist K-derivations δ_2, …, δ_n such that

1. for all i, j, δ_i(x_j) has degree at most 1,
2. for all i, δ_i ∘ ∂ = ∂ ∘ δ_i,
3. for all i, j, δ_i ∘ δ_j = δ_j ∘ δ_i, and
4. {∂, δ_2, …, δ_n} is K-linearly independent.

Proof. Write

C = [ 0           c_0
      1  0        c_1
         ⋱  ⋱     ⋮
            1     c_{n−1} ],    a = (a_0, …, a_{n−1})^T.

Case 1: a_0 = 0 or c_0 ≠ 0.

If c_0 ≠ 0, let v = C^{−1}a. If c_0 = 0, let v = (a_1, a_2, …, a_{n−1}, 0)^T. Observe that in either case Cv = a. Now for i = 0, …, n − 1 define δ_i to be the K-derivation given by

δ_i(x) = C^i x + C^i v,

and note that ∂ = δ_1.

We first show that for all i and j, δ_i ∘ δ_j = δ_j ∘ δ_i. We have δ_i(δ_j(x)) = δ_i(C^j x + C^j v) = C^j(C^i x + C^i v) = C^{i+j} x + C^{i+j} v. We also have δ_j(δ_i(x)) = δ_j(C^i x + C^i v) = C^i(C^j x + C^j v) = C^{i+j} x + C^{i+j} v.

We now show that {δ_0, …, δ_{n−1}} is K-linearly independent. Suppose C^0 x, Cx, …, C^{n−1} x are not K-linearly independent. Then there exist b_0, …, b_{n−1} ∈ K, not all 0, such that b_0 C^0 x + … + b_{n−1} C^{n−1} x = (b_0 C^0 + … + b_{n−1} C^{n−1})x = (0, …, 0)^T. Since x_1, …, x_n are algebraically independent over K, the only way this could happen is if b_0 C^0 + … + b_{n−1} C^{n−1} is the zero matrix. Since C is the companion matrix of a degree n polynomial, the minimal polynomial of C has degree n (cf. (Hoffman and Kunze, 1971, Corollary, p. 230)). Therefore b_0 = … = b_{n−1} = 0. We conclude that {C^0 x, …, C^{n−1} x} is K-linearly independent. It follows that {C^0 x + C^0 v, …, C^{n−1} x + C^{n−1} v} is K-linearly independent.

Define δ_n to be δ_0. Now we have shown that δ_2, …, δ_n satisfy the properties in the statement of the lemma.

Case 2: a_0 ≠ 0 and c_0 = 0.

For i = 1, …, n let δ_i be the K-derivation defined by

δ_i(x) = C^i x + C^{i−1} a,

and note that δ_1 = ∂.

We show that for all i and j, δ_i ∘ δ_j = δ_j ∘ δ_i. We have δ_i(δ_j(x)) = δ_i(C^j x + C^{j−1} a) = C^j(C^i x + C^{i−1} a) = C^{i+j} x + C^{i+j−1} a. We also have δ_j(δ_i(x)) = δ_j(C^i x + C^{i−1} a) = C^i(C^j x + C^{j−1} a) = C^{i+j} x + C^{i+j−1} a.

Next we show that the set {δ_1, …, δ_n} is K-linearly independent. Suppose (b_1, …, b_n) ∈ K^n \ {(0, …, 0)} is such that

b_1(Cx + a) + b_2(C² x + Ca) + … + b_n(C^n x + C^{n−1} a) = 0^{n×1}.    (1.2)

It follows that

b_1 Cx + b_2 C² x + … + b_n C^n x = 0^{n×1}.

Since x_1, …, x_n are K-algebraically independent, and hence K-linearly independent, it follows that b_1 C + … + b_n C^n = 0^{n×n}. Since C is a companion matrix and c_0 = 0, the minimal polynomial of C is p(X) = X^n − c_{n−1} X^{n−1} − … − c_1 X. Hence there exists r ∈ K \ {0} such that b_n = r and, for i = 1, …, n − 1, b_i = −r c_i. It follows from this and (1.2) that

(−c_1 I − c_2 C − … + C^{n−1}) a = 0^{n×1},

where I is the n × n identity matrix. Let D = −c_1 I − c_2 C − … + C^{n−1}. Since CD is the 0 matrix, we see that the image of D, as a K-linear map from K^{n×1} to K^{n×1}, lies in the kernel of C. Observe that since c_0 = 0, the kernel of C has dimension 1. Because D is a nontrivial K-linear combination of C^0, …, C^{n−1}, D is not the zero matrix. Hence, the image of D has positive dimension, and thus the image of D has dimension 1. Therefore the kernel of D has dimension n − 1. Let e_1, …, e_n be the basis for K^{n×1} where e_i has 1 in the i-th entry and 0 elsewhere. Observe that since the first column of C^i has a 1 in the (i+1)-th entry and 0 in all other entries, De_1 = (−c_1, −c_2, …, −c_{n−1}, 1)^T ≠ 0^{n×1}. We now argue that De_i = 0^{n×1} for i = 2, …, n. To do this, we work over the field L := K(c̃_1, …, c̃_{n−1}), where c̃_1, …, c̃_{n−1} are K-algebraically independent, and consider the matrices C̃, defined as the companion matrix of X^n − c̃_{n−1} X^{n−1} − … − c̃_1 X, and D̃ := −c̃_1 I − c̃_2 C̃ − … + C̃^{n−1}. Viewing C̃ and D̃ as L-linear maps on L^n, we have that ker C̃ is the L-span of (−c̃_1, −c̃_2, …, −c̃_{n−1}, 1)^T and that im D̃ = ker C̃. Thus, each column of D̃ is of the form (−c̃_1 r, −c̃_2 r, …, −c̃_{n−1} r, r)^T, where r ∈ L.

Since for i ≥ 1 each element of the top row of C̃^i is 0, we see that the top row of D̃ is (−c̃_1, 0, …, 0). Thus, we have

D̃ = [ −c̃_1    0 ⋯ 0
      −c̃_2    0 ⋯ 0
        ⋮      ⋮    ⋮
        1      0 ⋯ 0 ].

Observing that D is the specialization of D̃ at c̃_1 = c_1, …, c̃_{n−1} = c_{n−1} gives us

D = [ −c_1    0 ⋯ 0
      −c_2    0 ⋯ 0
        ⋮     ⋮    ⋮
        1     0 ⋯ 0 ],

and therefore De_i = 0^{n×1} for i > 1. Writing a = a_0 e_1 + a_1 e_2 + … + a_{n−1} e_n and recalling that a_0 ≠ 0, we see that Da ≠ 0^{n×1}. This contradicts (1.2). Therefore {δ_1, …, δ_n} is K-linearly independent.
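The construction in Case 1 can be exercised on a small example; a sketch with sympy, using a concrete companion matrix of our choosing:

```python
import sympy as sp

# Case 1 of Lemma 1.3.1 with n = 3: C is the companion matrix of
# X^3 - c2*X^2 - c1*X - c0 with c0 != 0, so v = C^{-1} a satisfies Cv = a.
c0, c1, c2 = 2, 3, 5
C = sp.Matrix([[0, 0, c0], [1, 0, c1], [0, 1, c2]])
a = sp.Matrix([1, 1, 1])
v = C.inv() * a

x = sp.Matrix(sp.symbols("x1 x2 x3"))

def image(i):
    # delta_i(x) = C^i x + C^i v.
    return C**i * x + C**i * v

def bracket(i, j):
    # delta_i(delta_j(x)) - delta_j(delta_i(x)); applying a derivation to a
    # vector of polynomials multiplies its Jacobian by the derivation's images.
    return sp.expand(image(j).jacobian(x) * image(i) - image(i).jacobian(x) * image(j))

# All pairs delta_i, delta_j commute, as in the proof (matrix powers commute).
assert all(bracket(i, j) == sp.zeros(3, 1) for i in range(3) for j in range(3))
```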

Proposition 1.3.1. Let K be a field. Let ∂ be a non-zero K-derivation on the polynomial ring R = K[x_1, …, x_n] such that each ∂(x_i) has degree at most 1. Then there exist K-derivations δ_2, …, δ_n on R such that

1. for all i, j, δ_i(x_j) has degree at most 1,
2. for all i, δ_i ∘ ∂ = ∂ ∘ δ_i,
3. for all i, j, δ_i ∘ δ_j = δ_j ∘ δ_i, and
4. {∂, δ_2, …, δ_n} is K-linearly independent.

Proof. Write

∂(x) = Ax + a,

where A ∈ K^{n×n} and a ∈ K^{n×1}. First, we show that without loss of generality we can assume A is in rational canonical form. By (Hungerford, 1974, Theorem 4.6(ii), p. 360), there exists P ∈ GL_n(K) such that Â = PAP^{−1} is in rational canonical form. Letting x̂ = (x̂_1, …, x̂_n)^T = Px, we have K[x_1, …, x_n] = K[x̂_1, …, x̂_n] and ∂(x̂) = Â x̂ + Pa.

Henceforth, we assume that A is in rational canonical form. Write

A = diag(C_1, …, C_k),

where for all i, C_i is the companion matrix of a polynomial of degree d_i. For i = 1, …, k define l_i as follows: let l_1 = 0 and, for i > 1, let l_i = l_{i−1} + d_{i−1}. For i = 1, …, k and j = 1, …, d_i we define the K-derivation δ_{i,j} as follows. Lemma 1.3.1, applied to the ring K[x_{l_i+1}, …, x_{l_i+d_i}] and the K-derivation

∂_i(x_{l_i+1}, …, x_{l_i+d_i})^T = C_i (x_{l_i+1}, …, x_{l_i+d_i})^T + (a_{l_i+1}, …, a_{l_i+d_i})^T,

guarantees the existence of K-derivations δ_2, …, δ_{d_i} on K[x_{l_i+1}, …, x_{l_i+d_i}] such that the set {∂_i, δ_2, …, δ_{d_i}} is commutative and K-linearly independent. Let δ_{i,1} be the extension of ∂_i to K[x] defined by

δ_{i,1}(x_r) = ∂_i(x_r) if l_i < r ≤ l_i + d_i, and 0 otherwise,

and for j = 2, …, d_i let δ_{i,j} be the extension of δ_j to K[x] defined by

δ_{i,j}(x_r) = δ_j(x_r) if l_i < r ≤ l_i + d_i, and 0 otherwise.

Observe that ∂ = δ_{1,1} + … + δ_{k,1}. If k = 1, then the theorem is proven by Lemma 1.3.1. Assume k > 1. Now consider the set

S := {∂, δ_{1,1}, …, δ_{1,d_1}, δ_{2,1}, …, δ_{2,d_2}, …, δ_{k−1,1}, …, δ_{k−1,d_{k−1}}, δ_{k,2}, …, δ_{k,d_k}}
   = {∂} ∪ {δ_{i,j} | i = 1, …, k; j = 1, …, d_i} \ {δ_{k,1}}.

Observe that S contains n elements. We now show that S is commutative. Fix i, j, p, q, r such that 1 ≤ i ≤ k, 1 ≤ j ≤ d_i, 1 ≤ p ≤ k, 1 ≤ q ≤ d_p, and 1 ≤ r ≤ n. If i = p, then δ_{p,q} ∘ δ_{i,j} = δ_{i,j} ∘ δ_{p,q}. Suppose i ≠ p. Since δ_{i,j}(x_r) ∈ K[x_{l_i+1}, …, x_{l_i+d_i}], we have δ_{p,q}(δ_{i,j}(x_r)) = 0. Similarly, δ_{p,q}(x_r) ∈ K[x_{l_p+1}, …, x_{l_p+d_p}], and hence δ_{i,j}(δ_{p,q}(x_r)) = 0. We conclude that δ_{i,j} commutes with δ_{p,q}. Since ∂ = δ_{1,1} + … + δ_{k,1}, we see that ∂ commutes with each δ_{i,j}.

Now we show that S is K-linearly independent. Suppose b, b_{1,1}, …, b_{1,d_1}, b_{2,1}, …, b_{k,d_k} ∈ K are such that

b∂ + b_{1,1}δ_{1,1} + … + b_{k,d_k}δ_{k,d_k} = 0

(note that δ_{k,1} is not included). Since ∂ = δ_{1,1} + … + δ_{k,1}, so that b is added only to the coefficient of each δ_{i,1}, this implies

(b_{1,1} + b)δ_{1,1} + b_{1,2}δ_{1,2} + … + (b_{k−1,1} + b)δ_{k−1,1} + … + b_{k−1,d_{k−1}}δ_{k−1,d_{k−1}} + bδ_{k,1} + b_{k,2}δ_{k,2} + … + b_{k,d_k}δ_{k,d_k} = 0.    (1.3)

Equation (1.3) implies that for all i = 1, …, k − 1 and for all r such that l_i < r ≤ l_i + d_i,

(b_{i,1} + b)δ_{i,1}(x_r) + b_{i,2}δ_{i,2}(x_r) + … + b_{i,d_i}δ_{i,d_i}(x_r) = 0.

It follows that

for all i = 1, …, k − 1: (b_{i,1} + b)δ_{i,1} + b_{i,2}δ_{i,2} + … + b_{i,d_i}δ_{i,d_i} = 0.    (1.4)

Equation (1.3) also implies that for all r such that l_k < r ≤ l_k + d_k,

bδ_{k,1}(x_r) + b_{k,2}δ_{k,2}(x_r) + … + b_{k,d_k}δ_{k,d_k}(x_r) = 0.

It follows that

bδ_{k,1} + b_{k,2}δ_{k,2} + … + b_{k,d_k}δ_{k,d_k} = 0.    (1.5)

Since for each i the derivations δ_{i,1}, …, δ_{i,d_i} are K-linearly independent, (1.4) implies that b_{i,1} = −b and b_{i,j} = 0 for i = 1, …, k − 1 and j = 2, …, d_i, and (1.5) implies that b = 0 and b_{k,2} = … = b_{k,d_k} = 0. We conclude that b = 0 and b_{i,j} = 0 for all i and j. Therefore S is K-linearly independent.

### 1.4 A class of derivations admitting upper bounds on the degree of a commuting derivation

### 1.4.1 The utility of upper bounds

Let d(x, y) = (f_1, f_2) be a K-derivation on K[x, y]. Suppose b ∈ N is such that the following statement is true: “If δ(x, y) = (g_1, g_2) is a K-derivation on K[x, y] that commutes with and is transversal to d, then the degrees of g_1 and g_2 are no greater than b.” Such a b is sometimes called an upper bound.

We can use this information to determine whether d is integrable. Write g_i = ∑_{j,k : j+k ≤ b} a_{i,j,k} x^j y^k. Now the equations d(δ(x)) = δ(d(x)) and d(δ(y)) = δ(d(y)) form a system of two polynomial equations, and thus a finite system of equations on elements of K obtained by equating like coefficients. These equations are linear in the variables a_{i,j,k}. Hence the problem of determining whether d is integrable has been reduced to studying a finite system of linear equations over K.
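This reduction is easy to mechanize; a sketch with sympy that builds the linear system in the unknown coefficients a_{i,j,k} for a given d and bound b (function and symbol names are ours, not from the text):

```python
import sympy as sp

x, y = sp.symbols("x y")

def apply_d(fx, fy, p):
    # The derivation with images (fx, fy) applied to a polynomial p.
    return fx * sp.diff(p, x) + fy * sp.diff(p, y)

def commuting_system(f1, f2, b):
    # Ansatz g_i = sum_{j+k <= b} a_{i,j,k} x^j y^k; the commutation equations
    # d(delta(x)) = delta(d(x)), d(delta(y)) = delta(d(y)) are linear in a_{i,j,k}.
    unknowns, g = [], []
    for i in (1, 2):
        gi = sp.Integer(0)
        for j in range(b + 1):
            for k in range(b + 1 - j):
                a = sp.Symbol(f"a_{i}_{j}_{k}")
                unknowns.append(a)
                gi += a * x**j * y**k
        g.append(gi)
    g1, g2 = g
    eqs = [
        sp.expand(apply_d(f1, f2, g1) - apply_d(g1, g2, f1)),
        sp.expand(apply_d(f1, f2, g2) - apply_d(g1, g2, f2)),
    ]
    # Equating like coefficients gives one linear equation per monomial.
    system = []
    for e in eqs:
        system.extend(sp.Poly(e, x, y).coeffs())
    return system, unknowns

# d(x) = x, d(y) = y with bound b = 1: delta(x) = y, delta(y) = x solves the system.
system, unknowns = commuting_system(x, y, 1)
values = {a: 0 for a in unknowns}
values[sp.Symbol("a_1_0_1")] = 1  # coefficient of y in g_1
values[sp.Symbol("a_2_1_0")] = 1  # coefficient of x in g_2
assert all(e.subs(values) == 0 for e in system)
```

Feeding the resulting system to a linear solver over K then decides whether a transversal commuting derivation of degree at most b exists.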

### 1.4.2 Main result

We present a class of derivations and give an upper bound for each element of this class.

Notation.

• Define deg_y(0) := −∞, so that deg_y(0) < n for all n ∈ Z.

• Let P and Q be elements of K[x, y]. Define deg_y(P/Q) := deg_y(P/gcd(P, Q)) − deg_y(Q/gcd(P, Q)).

• Let U be a matrix with entries in K(x, y). Define deg_y(U) := max{deg_y(u) | u is an entry of U}.

Proposition 1.4.1. Let $K$ be a field of characteristic 0. Let $d$ be a $K$-derivation on $K[x, y]$ given by
\[ d\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \]
satisfying the conditions
• $f_2 \neq 0$,
• $\deg_y \frac{\partial f_2}{\partial x} < \deg_y f_2$, and
• $\deg_y(y f_1) < \deg_y f_2$.
If $\delta$ is a $K$-derivation on $K[x, y]$ defined by
\[ \delta\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \]
and $\delta$ commutes with $d$, then $\max\{\deg_y g_1, \deg_y g_2\} \le \deg_y f_2$.

Proof. The equations

\[ d(\delta(x)) = \delta(d(x)) \quad\text{and}\quad d(\delta(y)) = \delta(d(y)) \]
yield
\[ f_1 \frac{\partial g_1}{\partial x} + f_2 \frac{\partial g_1}{\partial y} = g_1 \frac{\partial f_1}{\partial x} + g_2 \frac{\partial f_1}{\partial y} \quad\text{and}\quad f_1 \frac{\partial g_2}{\partial x} + f_2 \frac{\partial g_2}{\partial y} = g_1 \frac{\partial f_2}{\partial x} + g_2 \frac{\partial f_2}{\partial y}, \tag{1.6} \]
which we rearrange as

\[ \begin{pmatrix} -\dfrac{y f_1}{f_2} \dfrac{\partial g_1}{\partial x} \\[6pt] -\dfrac{y f_1}{f_2} \dfrac{\partial g_2}{\partial x} \end{pmatrix} - y \frac{\partial}{\partial y} \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} + \begin{pmatrix} \dfrac{y}{f_2}\dfrac{\partial f_1}{\partial x} & \dfrac{y}{f_2}\dfrac{\partial f_1}{\partial y} \\[6pt] \dfrac{y}{f_2}\dfrac{\partial f_2}{\partial x} & \dfrac{y}{f_2}\dfrac{\partial f_2}{\partial y} \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]

For conciseness of notation, we define the matrices
• $g := \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}$,
• $N := \begin{pmatrix} -\frac{y f_1}{f_2} \frac{\partial g_1}{\partial x} \\ -\frac{y f_1}{f_2} \frac{\partial g_2}{\partial x} \end{pmatrix}$, and
• $M := \begin{pmatrix} \frac{y}{f_2}\frac{\partial f_1}{\partial x} & \frac{y}{f_2}\frac{\partial f_1}{\partial y} \\ \frac{y}{f_2}\frac{\partial f_2}{\partial x} & \frac{y}{f_2}\frac{\partial f_2}{\partial y} \end{pmatrix}$,
so that this equation is written
\[ N - y \cdot \frac{\partial}{\partial y} g + M \cdot g = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]

Let $M^{i}$ denote the $i$-th row of $M$, and let
\[ \alpha_i = \max\{\deg_y(M^{i}), 0\}. \]
Let
\[ D = \operatorname{diag}(y^{-\alpha_1}, y^{-\alpha_2}), \qquad A = D \cdot M, \quad\text{and}\quad B = D \cdot N. \]
Now we have
\[ B - D \cdot y \cdot \frac{\partial}{\partial y} g + A \cdot g = 0. \tag{1.7} \]

Note that by the construction of $D$, $\deg_y(A) \le 0$, so the entries of $D$ and $A$ both lie in $K(x)[[\frac{1}{y}]]$. Hence we can write
\[ D = D_0 + D_1 \frac{1}{y} + \ldots, \qquad A = A_0 + A_1 \frac{1}{y} + \ldots, \]
where each $D_i$ is in $M^{2 \times 2}(K)$, each $A_i$ is in $M^{2 \times 2}(K(x))$, and the series for $A$ is possibly infinite.

Let $\mu = \deg_y(g)$ and $\nu = \deg_y(B)$. Recall that since the entries of $g$ are polynomials, $\mu \ge 0$, whereas $\nu$ may be negative. Thus, we can write
\[ g = \begin{pmatrix} c_\mu \\ d_\mu \end{pmatrix} y^{\mu} + \text{lower degree terms}, \]
where $\begin{pmatrix} c_\mu \\ d_\mu \end{pmatrix} \in M^{2 \times 1}(K[x])$ and at least one of $c_\mu$ and $d_\mu$ is non-zero. Now equation (1.7) becomes
\[ \operatorname{lc}(B) \cdot y^{\nu} - (\mu \cdot D_0 - A_0) \cdot \begin{pmatrix} c_\mu \\ d_\mu \end{pmatrix} \cdot y^{\mu} + \text{terms of degree lower than } \max\{\nu, \mu\} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]

Let $\gamma = \deg_y \frac{y f_1}{f_2} = \deg_y(y f_1) - \deg_y(f_2)$. We see from the definition of $B$ that $\nu \le \gamma + \mu$. Since we have assumed $\gamma < 0$, we have that $\nu < \mu$. It follows that $(c_\mu, d_\mu)^{T}$ is a non-zero element of the null space of $\mu D_0 - A_0$, so $\det(\mu D_0 - A_0) = 0$. Therefore $\mu$ belongs to the set
\[ R = \{ n \in \mathbb{N} : \det(n \cdot D_0 - A_0) = 0 \}. \]
Observe that if $\det(\lambda D_0 - A_0)$ is not identically zero as a polynomial in $\lambda$, then $R$ is finite and $\deg_y g \in R$.

We first examine the first row of $M$. It follows from the hypotheses that
\[ \deg_y\!\left(\frac{y}{f_2} \cdot \frac{\partial f_1}{\partial x}\right) < 0 \quad\text{and}\quad \deg_y\!\left(\frac{y}{f_2} \cdot \frac{\partial f_1}{\partial y}\right) < 0. \]
Hence, $\alpha_1 = 0$.

Now we consider the second row. Observe that $\gamma < 0$ implies $\deg_y f_2 \ge 2$, so
\[ \deg_y\!\left(\frac{y}{f_2} \cdot \frac{\partial f_2}{\partial y}\right) = 0. \]
Since $\deg_y \frac{\partial f_2}{\partial x} < \deg_y f_2$, it follows that
\[ \deg_y\!\left(\frac{y}{f_2} \cdot \frac{\partial f_2}{\partial x}\right) \le 0. \]
Thus $\alpha_2 = 0$, and it follows that $D = \operatorname{diag}(1, 1)$ and $A = M$.

Write $f_2 = a y^{b} + \text{terms of lower degree in } y$, where $b \in \mathbb{N}$ and $a \in K$. We see that
\[ A_0 = \begin{pmatrix} 0 & 0 \\ * & b \end{pmatrix}. \]
Now
\[ \lambda D_0 - A_0 = \begin{pmatrix} \lambda & 0 \\ * & \lambda - b \end{pmatrix}. \]
Now $R = \{0, b\}$, so $\deg_y g = b$ or $0$; in either case $\deg_y g \le b = \deg_y f_2$, as claimed.
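The computation of $A_0$ and $R$ at the end of this proof can be reproduced for a concrete derivation, for instance the one appearing in Example 1.4.1 below, with SymPy (a sketch; since the hypotheses force $\alpha_1 = \alpha_2 = 0$, here $D$ is the identity and $A = M$):

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x**3 + y, x + y**3          # the derivation of Example 1.4.1

# the matrix M from the proof; all entries have deg_y <= 0
M = sp.Matrix([[y/f2*sp.diff(f1, x), y/f2*sp.diff(f1, y)],
               [y/f2*sp.diff(f2, x), y/f2*sp.diff(f2, y)]])

# A0 is the constant term of A = M as a series in 1/y,
# i.e. the limit of each entry as y -> oo
A0 = M.applyfunc(lambda e: sp.limit(e, y, sp.oo))

lam = sp.Symbol('lambda')
# with D0 = I, the set R consists of the natural roots of det(lam*I - A0)
R = sp.solve((lam*sp.eye(2) - A0).det(), lam)

assert A0 == sp.Matrix([[0, 0], [0, 3]])
assert set(R) == {0, 3}               # b = deg_y f2 = 3, so R = {0, b}
```

Here $b = 3$, so the proof predicts $R = \{0, 3\}$, which the computation confirms.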

Corollary 1.4.1. Let $K$ be a field of characteristic 0. Let $d$ be a $K$-derivation on $K[x, y]$ given by
\[ d\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \]
satisfying the conditions
• $f_1 \neq 0$,
• $\deg_x \frac{\partial f_1}{\partial y} < \deg_x f_1$, and
• $\deg_x(x f_2) < \deg_x f_1$.
If $\delta$ is a $K$-derivation on $K[x, y]$ defined by
\[ \delta\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \]
and $\delta$ commutes with $d$, then $\max\{\deg_x g_1, \deg_x g_2\} \le \deg_x f_1$.

Proof. This is identical to Proposition 1.4.1 but with the roles of $x$ and $y$ switched.

Corollary 1.4.2. Let $K$ be a field of characteristic 0. Let $d = f_1 \frac{\partial}{\partial x} + f_2 \frac{\partial}{\partial y}$ be a $K$-derivation on $K[x, y]$ satisfying the conditions
• $f_1 f_2 \neq 0$,
• $\deg_y \frac{\partial f_2}{\partial x} < \deg_y f_2$,
• $\deg_y(y f_1) < \deg_y f_2$,
• $\deg_x \frac{\partial f_1}{\partial y} < \deg_x f_1$, and
• $\deg_x(x f_2) < \deg_x f_1$.

Then there is no $r \in K[x, y] \setminus K$ such that $d(r) = 0$; that is, the system of ODEs
\[ \dot{x} = f_1(x, y), \qquad \dot{y} = f_2(x, y) \]
does not have a polynomial first integral.

Proof. Suppose there is an $r \in K[x, y] \setminus K$ such that $d(r) = 0$. Suppose without loss of generality that $\deg_y r = a > 0$. Now the derivation $r \cdot d$ is a derivation that commutes with $d$ and has degree in $y$ greater than $\deg_y d$, contradicting Proposition 1.4.1. Hence, no such $r$ exists.

Example 1.4.1. As an example, consider the $K$-derivation
\[ d\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x^3 + y \\ x + y^3 \end{pmatrix} \]
on the ring $K[x, y]$. We verify that we have satisfied the hypotheses above. First, $f_1, f_2 \neq 0$. Now
\[ \gamma_x = \deg_x \frac{x(x + y^3)}{x^3 + y} = -1 < 0, \qquad \gamma_y = \deg_y \frac{y(x^3 + y)}{x + y^3} = -1 < 0. \]
Next, we check that
\[ \deg_x \frac{\partial f_1}{\partial y} = 0 < 3 = \deg_x f_1, \qquad \deg_y \frac{\partial f_2}{\partial x} = 0 < 3 = \deg_y f_2. \]
We conclude that any $K$-derivation on $K[x, y]$ that commutes with $d$ is defined by polynomials of degree no greater than 3 in each of $x$ and $y$.
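These degree checks, and the commutation condition itself, can be verified mechanically. The SymPy sketch below (the helper `bracket` is ours, not from the text) confirms the hypotheses of Corollary 1.4.2 for this $d$, and contrasts a derivation that commutes with $d$ (namely $d$ itself) with one that does not:

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x**3 + y, x + y**3
d = (f1, f2)

def bracket(u, v):
    """Lie bracket [u, v] of the derivations u = (u1, u2), v = (v1, v2),
    returned as the pair of its values on x and on y."""
    def apply(w, p):
        return w[0]*sp.diff(p, x) + w[1]*sp.diff(p, y)
    return (sp.expand(apply(u, v[0]) - apply(v, u[0])),
            sp.expand(apply(u, v[1]) - apply(v, u[1])))

# the degree hypotheses of Corollary 1.4.2:
assert sp.degree(sp.diff(f2, x), gen=y) < sp.degree(f2, gen=y)   # 0 < 3
assert sp.degree(sp.expand(y*f1), gen=y) < sp.degree(f2, gen=y)  # 2 < 3
assert sp.degree(sp.diff(f1, y), gen=x) < sp.degree(f1, gen=x)   # 0 < 3
assert sp.degree(sp.expand(x*f2), gen=x) < sp.degree(f1, gen=x)  # 2 < 3

# d commutes with itself, while e.g. (y, x) does not commute with d
assert bracket(d, d) == (0, 0)
assert bracket(d, (y, x)) != (0, 0)
```

By Corollary 1.4.2, this $d$ also has no polynomial first integral.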

### 1.5 Conservative Newton Systems

Fix a field $K$ of characteristic 0. Suppose $\delta_f$ represents a second-order differential equation of the form
\[ \ddot{x} = f, \]
where $f \in K[x] \setminus K$, which corresponds to a conservative Newton system. That is,
\[ \delta_f \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} y \\ f \end{pmatrix}. \tag{1.8} \]

If $\deg f = 1$, then $\delta_f$ is integrable by Proposition 1.2.1. The following theorem, which is our main result, addresses the case of $\deg f \ge 2$.

Theorem 1.5.1. For every
• $f \in K[x]$ such that $\deg f \ge 2$ and
• $K$-derivation $\gamma$ on $K[x, y]$ that commutes with $\delta_f$, where $\delta_f$ is the $K$-derivation defined by (1.8),
there exists $q \in K[H]$ such that
\[ \gamma = q \cdot \delta_f, \]
where $H = y^2 - 2\int f\, dx$ and $\int f\, dx$ has 0 as the constant term.
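The easy direction of the theorem is that every $q(H) \cdot \delta_f$ does commute with $\delta_f$, since $H$ is a first integral of $\delta_f$. The SymPy sketch below checks this for the illustrative choices $f = x^2$ and $q(H) = H$ (our choices, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2                                  # sample f with deg f >= 2
H = y**2 - 2*sp.integrate(f, x)           # H = y^2 - 2*Int f dx, zero constant term

def apply(w, p):
    """Apply the derivation w = (w(x), w(y)) to the polynomial p."""
    return sp.expand(w[0]*sp.diff(p, x) + w[1]*sp.diff(p, y))

delta = (y, f)                            # delta_f from (1.8)
gamma = (sp.expand(H*y), sp.expand(H*f))  # gamma = q(H)*delta_f with q(H) = H

assert apply(delta, H) == 0               # H is a first integral of delta_f
# hence gamma commutes with delta_f: the bracket vanishes on x and on y
assert apply(delta, gamma[0]) - apply(gamma, delta[0]) == 0
assert apply(delta, gamma[1]) - apply(gamma, delta[1]) == 0
```

The substance of the theorem is the converse: for $\deg f \ge 2$, every commuting polynomial derivation arises this way.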

As a corollary, we recover the following result on conservative Newton systems with a center at the origin. This result was first proven in (Amel'kin, 1977, Theorem 11) and was given new proofs in (Chicone and Jacobs, 1989, Theorem 4.1) and (Cima et al., 1999, Corollary 2.6) (see also (Volokitin and Ivanov, 1999, p. 30)).

Corollary 1.5.1. The real system
\[ \dot{x} = -y, \qquad \dot{y} = f(x), \]
with $f(0) = 0$, $f'(0) = 1$, has a transversal commuting polynomial derivation if and only if $f(x) = x$.
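For the "if" direction, when $f(x) = x$ the radial field $(x, y)$ commutes with the rotation field $(-y, x)$ and is transversal to it away from the origin. A quick SymPy check (the fields are our illustration of the statement):

```python
import sympy as sp

x, y = sp.symbols('x y')

def apply(w, p):
    # apply the derivation w = (w(x), w(y)) to p
    return sp.expand(w[0]*sp.diff(p, x) + w[1]*sp.diff(p, y))

d = (-y, x)   # xdot = -y, ydot = f(x) with f(x) = x
r = (x, y)    # candidate commuting derivation: the radial field

# the Lie bracket [d, r] vanishes on both coordinates
assert apply(d, r[0]) - apply(r, d[0]) == 0
assert apply(d, r[1]) - apply(r, d[1]) == 0
# transversality away from the origin: the determinant of the two
# fields is -(x^2 + y^2), nonzero off the origin
assert sp.expand(sp.Matrix([[-y, x], [x, y]]).det() + x**2 + y**2) == 0
```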

Proof of Theorem 1.5.1. Fix $f \in K[x]$ such that $\deg f \ge 2$. Fix a $K$-derivation $\delta$ so that $\delta(x) = y$ and $\delta(y) = f$. Fix a $K$-derivation $\gamma$ such that $[\delta, \gamma] = 0$. First consider the case in which $\deg_y \gamma \le 1$.

Lemma 1.5.1. If
\[ \gamma\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} c_1 y + c_0 \\ d_1 y + d_0 \end{pmatrix}, \]
where $c_1, c_0, d_1, d_0 \in K[x]$, and $[\delta, \gamma] = 0$, then $\gamma = c_1 \delta$.

Proof. The equations $\delta(\gamma(x)) = \gamma(\delta(x))$ and $\delta(\gamma(y)) = \gamma(\delta(y))$ yield
\[ \begin{cases} c_1' y^2 + c_0' y + f c_1 = d_1 y + d_0, \\ d_1' y^2 + d_0' y + f d_1 = f' c_1 y + f' c_0. \end{cases} \]
Equating coefficients of like powers of $y$, we obtain the two independent systems
\[ c_1' = 0, \qquad d_0' = c_1 f', \qquad f c_1 = d_0 \tag{1.9} \]
and
\[ d_1' = 0, \qquad c_0' = d_1, \qquad f d_1 = c_0 f'. \tag{1.10} \]
The solution set of (1.9) is $c_1 = \text{constant}$, $d_0 = c_1 f$. System (1.10) has no non-zero solution, which we deduce as follows. We have
\[ \left(\frac{c_0}{f}\right)' = \frac{c_0' f - f' c_0}{f^2} = \frac{d_1 f - f' c_0}{f^2} = 0, \]
so $c_0 = (\text{const}) f$. Therefore, $d_1 = (\text{const}) f'$, which implies $d_1' = (\text{const}) f'' = 0$. Since we assume $\deg f \ge 2$, the constant must be 0. Therefore,
\[ \gamma\begin{pmatrix} x \\ y \end{pmatrix} = c_1 \begin{pmatrix} y \\ f \end{pmatrix}. \]

Now assume $\deg_y \gamma = M \ge 2$. Write
\[ \gamma\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} c_M y^{M} + \ldots + c_0 \\ d_M y^{M} + \ldots + d_0 \end{pmatrix}, \tag{1.11} \]
where for all $i$, $c_i, d_i \in K[x]$. Since $M = \deg_y \gamma$, at least one of $c_M$ and $d_M$ is non-zero. Now the system

\[ \begin{pmatrix} \delta(\gamma(x)) \\ \delta(\gamma(y)) \end{pmatrix} = \begin{pmatrix} \gamma(\delta(x)) \\ \gamma(\delta(y)) \end{pmatrix} \]
becomes
\[ \begin{pmatrix} c_M' y^{M+1} + c_{M-1}' y^{M} + \ldots + c_0' y \\ d_M' y^{M+1} + d_{M-1}' y^{M} + \ldots + d_0' y \end{pmatrix} + \begin{pmatrix} M f c_M y^{M-1} + \ldots + f c_1 \\ M f d_M y^{M-1} + \ldots + f d_1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ f' & 0 \end{pmatrix} \begin{pmatrix} c_M y^{M} + \ldots + c_0 \\ d_M y^{M} + \ldots + d_0 \end{pmatrix}. \tag{1.12} \]

Viewing these matrix entries as polynomials in $y$ and equating coefficients yields the following system of first-order ODEs:
\[ \begin{aligned}
c_M' &= 0, & d_M' &= 0, \\
c_{M-1}' &= d_M, & d_{M-1}' &= f' c_M, \\
c_{M-2}' + M f c_M &= d_{M-1}, & d_{M-2}' + M f d_M &= f' c_{M-1}, \\
c_{M-3}' + (M-1) f c_{M-1} &= d_{M-2}, & d_{M-3}' + (M-1) f d_{M-1} &= f' c_{M-2}, \\
c_{M-4}' + (M-2) f c_{M-2} &= d_{M-3}, & d_{M-4}' + (M-2) f d_{M-2} &= f' c_{M-3}, \\
c_{M-5}' + (M-3) f c_{M-3} &= d_{M-4}, & d_{M-5}' + (M-3) f d_{M-3} &= f' c_{M-4}, \\
&\ \,\vdots & &\ \,\vdots \\
c_0' + 2 f c_2 &= d_1, & d_0' + 2 f d_2 &= f' c_1, \\
f c_1 &= d_0, & f d_1 &= f' c_0,
\end{aligned} \]
as well as the condition $c_M \neq 0$ or $d_M \neq 0$.

In each equation, it is the case that if $c_i$ and $d_j$ both appear, then $i$ and $j$ have opposite parities. Thus, we see that this system consists of two independent systems. If $M$ is odd, these systems are:

$(\mathrm{Io})_M$:
\[ \begin{aligned}
c_M' &= 0, \\
d_{M-1}' &= f' c_M, \\
c_{M-2}' + M f c_M &= d_{M-1}, \\
d_{M-3}' + (M-1) f d_{M-1} &= f' c_{M-2}, \\
c_{M-4}' + (M-2) f c_{M-2} &= d_{M-3}, \\
d_{M-5}' + (M-3) f d_{M-3} &= f' c_{M-4}, \\
&\ \,\vdots \\
c_1' + 3 f c_3 &= d_2, \\
d_0' + 2 f d_2 &= f' c_1, \\
f c_1 &= d_0,
\end{aligned} \]
$(\mathrm{IIo})_M$:
\[ \begin{aligned}
d_M' &= 0, \\
c_{M-1}' &= d_M, \\
d_{M-2}' + M f d_M &= f' c_{M-1}, \\
c_{M-3}' + (M-1) f c_{M-1} &= d_{M-2}, \\
d_{M-4}' + (M-2) f d_{M-2} &= f' c_{M-3}, \\
c_{M-5}' + (M-3) f c_{M-3} &= d_{M-4}, \\
&\ \,\vdots \\
d_1' + 3 f d_3 &= f' c_2, \\
c_0' + 2 f c_2 &= d_1, \\
f d_1 &= f' c_0.
\end{aligned} \]

If $M$ is even, the systems are:
$(\mathrm{IIe})_M$:
\[ \begin{aligned}
c_M' &= 0, \\
d_{M-1}' &= f' c_M, \\
c_{M-2}' + M f c_M &= d_{M-1}, \\
d_{M-3}' + (M-1) f d_{M-1} &= f' c_{M-2}, \\
c_{M-4}' + (M-2) f c_{M-2} &= d_{M-3}, \\
d_{M-5}' + (M-3) f d_{M-3} &= f' c_{M-4}, \\
&\ \,\vdots \\
c_0' + 2 f c_2 &= d_1, \\
f d_1 &= f' c_0,
\end{aligned} \]
$(\mathrm{Ie})_M$:
\[ \begin{aligned}
d_M' &= 0, \\
c_{M-1}' &= d_M, \\
d_{M-2}' + M f d_M &= f' c_{M-1}, \\
c_{M-3}' + (M-1) f c_{M-1} &= d_{M-2}, \\
d_{M-4}' + (M-2) f d_{M-2} &= f' c_{M-3}, \\
c_{M-5}' + (M-3) f c_{M-3} &= d_{M-4}, \\
&\ \,\vdots \\
d_0' + 2 f d_2 &= f' c_1, \\
f c_1 &= d_0.
\end{aligned} \]

In light of these observations, let
\[ n = \max\{ i \mid i \text{ odd and } c_i \neq 0, \text{ or } i \text{ even and } d_i \neq 0 \}, \]
\[ p = \max\{ i \mid i \text{ even and } c_i \neq 0, \text{ or } i \text{ odd and } d_i \neq 0 \}. \]

Note that $n$ or $p$ may be undefined. Now write $\gamma = \gamma_1 + \gamma_2$, where $\gamma_1(x)$ contains the terms of $\gamma(x)$ of odd degree in $y$, $\gamma_1(y)$ contains the terms of $\gamma(y)$ of even degree in $y$, $\gamma_2(x)$ contains the terms of $\gamma(x)$ of even degree in $y$, and $\gamma_2(y)$ contains the terms of $\gamma(y)$ of odd degree in $y$. Explicitly,

\[ \gamma_1\begin{pmatrix} x \\ y \end{pmatrix} = \begin{cases} \begin{pmatrix} c_n y^{n} + c_{n-2} y^{n-2} + \ldots + c_1 y \\ d_{n-1} y^{n-1} + d_{n-3} y^{n-3} + \ldots + d_0 \end{pmatrix} & \text{if } n \text{ is odd}, \\[10pt] \begin{pmatrix} c_{n-1} y^{n-1} + c_{n-3} y^{n-3} + \ldots + c_1 y \\ d_n y^{n} + d_{n-2} y^{n-2} + \ldots + d_0 \end{pmatrix} & \text{if } n \text{ is even}, \\[10pt] \begin{pmatrix} 0 \\ 0 \end{pmatrix} & \text{if } n \text{ is undefined}, \end{cases} \]
and

\[ \gamma_2\begin{pmatrix} x \\ y \end{pmatrix} = \begin{cases} \begin{pmatrix} c_{p-1} y^{p-1} + c_{p-3} y^{p-3} + \ldots + c_0 \\ d_p y^{p} + d_{p-2} y^{p-2} + \ldots + d_1 y \end{pmatrix} & \text{if } p \text{ is odd}, \\[10pt] \begin{pmatrix} c_p y^{p} + c_{p-2} y^{p-2} + \ldots + c_0 \\ d_{p-1} y^{p-1} + d_{p-3} y^{p-3} + \ldots + d_1 y \end{pmatrix} & \text{if } p \text{ is even}, \end{cases} \]
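The decomposition $\gamma = \gamma_1 + \gamma_2$ is a purely syntactic split by the parity of the degree in $y$; a small SymPy sketch (the helper `split_parity` is our illustrative name, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

def split_parity(g1, g2):
    """Split gamma = (g1, g2) into gamma1 + gamma2 as in the text:
    gamma1 keeps the odd-degree-in-y terms of gamma(x) and the
    even-degree-in-y terms of gamma(y); gamma2 keeps the rest."""
    def part(p, parity):
        # sum of the terms of p whose degree in y has the given parity
        return sum((coeff * y**mono[0]
                    for mono, coeff in sp.Poly(p, y).terms()
                    if mono[0] % 2 == parity), sp.Integer(0))
    gamma1 = (part(g1, 1), part(g2, 0))
    gamma2 = (part(g1, 0), part(g2, 1))
    return gamma1, gamma2
```

For example, for $\gamma(x) = x y^3 + y + x^2$ and $\gamma(y) = y^2 + x y + 1$, the split gives $\gamma_1 = (x y^3 + y,\; y^2 + 1)$ and $\gamma_2 = (x^2,\; x y)$.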