Poisson equation (CGS units):

∇²ϕ(r) = −4πρ(r)

Finite volume method
General approach: divide the space into small volumes and look at the fluxes of some quantity between the volumes.
In electrostatics, we have Gauss's theorem:

∮_∂V E(r)·n dS = −∮_∂V ∇ϕ(r)·n dS = 4π ∫_V ρ(r) dV
Consider division into small cubic volumes Vijk.
∮_∂V_ijk ∇ϕ(r)·n dS = −4πQ_ijk

[figure: 2D slice of the cubic mesh, showing cell (i, j) and its neighbours i−1, i+1, j−1, j+1]
∮_∂V ∇ϕ·n dS ≈ h² [ ∂ϕ/∂x(x_{i+1/2}, y_j, z_k) − ∂ϕ/∂x(x_{i−1/2}, y_j, z_k)
                  + ∂ϕ/∂y(x_i, y_{j+1/2}, z_k) − ∂ϕ/∂y(x_i, y_{j−1/2}, z_k)
                  + ∂ϕ/∂z(x_i, y_j, z_{k+1/2}) − ∂ϕ/∂z(x_i, y_j, z_{k−1/2}) ]
Approximate by centred differences:
∮_∂V ∇ϕ·n dS ≈ h [ ϕ(x_{i+1}, y_j, z_k) + ϕ(x_{i−1}, y_j, z_k)
                 + ϕ(x_i, y_{j+1}, z_k) + ϕ(x_i, y_{j−1}, z_k)
                 + ϕ(x_i, y_j, z_{k+1}) + ϕ(x_i, y_j, z_{k−1})
                 − 6 ϕ(x_i, y_j, z_k) ] ≈ −4πQ_ijk

Q_ijk ≈ h³ ρ(x_i, y_j, z_k)

This gives the standard discretization of the Poisson equation. So what are the advantages?
1) the volumes don't need to be cubic – can have some arbitrary mesh, like in finite element methods;
2) easy treatment of discrete charges, surface charges, etc.
3) variable ε, including discontinuities:
∮_∂V ϵ(r) ∇ϕ(r)·n dS = −4πQ_ijk
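As a concrete illustration of point 3), here is a minimal NumPy sketch (all parameters assumed for illustration) of a 1D finite-volume discretization of d/dx[ϵ(x) dϕ/dx] = −4πρ with a jump in ϵ. Using the face values ϵ_{i+1/2} handles the discontinuity with no special-casing, and the discrete flux −ϵ dϕ/dx comes out continuous across the jump:

```python
import numpy as np

# Hypothetical example: 1D finite-volume Poisson solver,
#   d/dx [ eps(x) dphi/dx ] = -4*pi*rho(x),  phi(0) = 1, phi(1) = 0,
# with eps jumping from 1 to 4 at x = 0.5 and rho = 0.

M = 101                       # number of grid points (assumed)
h = 1.0 / (M - 1)
x = np.linspace(0.0, 1.0, M)
eps_face = np.where(x[:-1] + h / 2 < 0.5, 1.0, 4.0)   # eps at cell faces
rho = np.zeros(M)             # charge-free region

A = np.zeros((M, M))
b = -4.0 * np.pi * rho * h**2
for i in range(1, M - 1):
    # flux balance: eps_{i-1/2}(phi_{i-1}-phi_i) + eps_{i+1/2}(phi_{i+1}-phi_i) = -4*pi*rho_i*h^2
    A[i, i - 1] = eps_face[i - 1]
    A[i, i] = -(eps_face[i - 1] + eps_face[i])
    A[i, i + 1] = eps_face[i]
A[0, 0] = A[-1, -1] = 1.0     # Dirichlet boundary values
b[0], b[-1] = 1.0, 0.0

phi = np.linalg.solve(A, b)

# With rho = 0 the flux -eps * dphi/dx must be the same on every face:
flux = -eps_face * np.diff(phi) / h
print(flux.min(), flux.max())
```

Because the scheme balances fluxes exactly, the piecewise-linear exact solution is reproduced to rounding, with the flux constant (here 1.6) across the ϵ jump.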
Initial-value problems for PDEs
Parabolic equations

Diffusion equation:
∂c(r, t)/∂t = ∇·[D(r) ∇c(r, t)]
Can be obtained from Fick's first law for the flux,

J(r, t) = −D(r) ∇c(r, t),

where c is the particle concentration.
Total particle number:

N(t) = ∫ c(r, t) dV

dN(t)/dt = ∫ ∂c(r, t)/∂t dV = ∫ ∇·[D(r) ∇c(r, t)] dV = −∮_∂V J(r, t)·n dS = ∮_∂V D(r) ∂c(r, t)/∂n dS.
Boundary conditions for the space variables:

c(r, t)|_Γ = f(r, t)   (Dirichlet)    or    ∂c(r, t)/∂n|_Γ = f(r, t)   (Neumann)

For a Neumann BC with f = 0, N is conserved (reflecting BC). Dirichlet BCs (with f = 0) are also called absorbing.

Initial condition for the time variable:

c(r, 0) = c_0(r).
∂u(r, t)/∂t = ∇·[D(r) ∇u(r, t)]
Separation of variables:

u(r, t) = R(r) T(t).

R(r) dT(t)/dt = T(t) ∇·[D(r) ∇R(r)]  ⇒  T′(t)/T(t) = ∇·[D(r) ∇R(r)] / R(r) = −λ

T′(t) = −λ T(t)  ⇒  T(t) = T_0 e^{−λt}.    ∇·[D(r) ∇R(r)] = −λ R(r).
u(r, t)|_Γ = 0   or   ∂u(r, t)/∂n|_Γ = 0
Elliptic eigenvalue problem with the same boundary conditions. Suppose we know the solution. If the eigenvalues are λ_n and the corresponding eigenfunctions are R_n(r), then the general solution is

u(r, t) = Σ_n C_n R_n(r) e^{−λ_n t}.
The mode with the smallest λn ≡ λ1 decays the slowest. Conversely, if we solve the parabolic equation corresponding to the given elliptic equation
starting with an arbitrary initial condition, then, unless the initial condition has no “overlap” with R1, at long times
u(r, t) ∼ R_1(r) e^{−λ_1 t}.
The eigenfunctions are orthogonal:

∫ R_m(r) R_n(r) dV = 0,   λ_m ≠ λ_n

Numerical methods
∂u(r, t)/∂t = ∇·[D(r) ∇u(r, t)]
Discretize in space, but not in time. Semi-discrete problem.
For simplicity, 1D, D = const.
∂u(x, t)/∂t = D ∂²u(x, t)/∂x²

du_m(t)/dt = D [u_{m+1}(t) − 2u_m(t) + u_{m−1}(t)] / h²

u_0(t) = 0,   u_M(t) = 0
A set of M−1 ODEs. In principle, we can use standard methods for solving ODEs. This approach is called the method of lines.

General principles: 1) different methods have different orders of accuracy; 2) explicit methods have a limited range of stability; 3) some implicit methods are always stable.

These considerations are particularly important here, because the resulting system turns out to be stiff.
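A minimal sketch of the method of lines (parameters and the stiff solver choice are assumptions for illustration): discretize u_t = D u_xx in space and hand the resulting ODE system to SciPy's `solve_ivp` with the stiff `BDF` method, then compare against the exact decay of the slowest mode:

```python
import numpy as np
from scipy.integrate import solve_ivp

D, L, M = 1.0, 1.0, 50            # assumed parameters
h = L / M
x = np.linspace(0.0, L, M + 1)
u0 = np.sin(np.pi * x[1:-1] / L)  # interior points; u_0 = u_M = 0

def rhs(t, u):
    # semi-discrete right-hand side: D * (second difference) / h^2
    full = np.concatenate(([0.0], u, [0.0]))
    return D * (full[2:] - 2.0 * full[1:-1] + full[:-2]) / h**2

sol = solve_ivp(rhs, (0.0, 0.1), u0, method='BDF', rtol=1e-8, atol=1e-10)

# Exact solution of the PDE: sin(pi x/L) decaying at rate D (pi/L)^2
exact = u0 * np.exp(-D * (np.pi / L)**2 * 0.1)
err = np.max(np.abs(sol.y[:, -1] - exact))
print(err)   # dominated by the O(h^2) spatial error
```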
∂u(x, t)/∂t = D ∂²u(x, t)/∂x²

u(x, t) = u(x) e^{−λt}  ⇒  D d²u(x)/dx² = −λ u(x)  ⇒  u(x) = sin kx,   λ = Dk²

For u(0, t) = u(L, t) = 0:   u_n(x) = sin(πnx/L)

Semi-discrete equation:

du_m(t)/dt = D [u_{m+1}(t) − 2u_m(t) + u_{m−1}(t)] / h²
Since this is a linear system with constant coefficients, there should also be solutions of the form e^{−λt}. Guess:

u_m(t) = e^{ikhm − λt}.

−λ e^{ikhm − λt} = (D/h²) [e^{ikh(m+1) − λt} − 2 e^{ikhm − λt} + e^{ikh(m−1) − λt}]

−λ = (D/h²) (e^{ikh} − 2 + e^{−ikh})  ⇒  λ = (2D/h²)(1 − cos kh) = (4D/h²) sin²(kh/2)

λ ≈ (2D/h²) [(kh)²/2 − (kh)⁴/24 + …] = Dk² (1 + O((kh)²)).
All modes decay.
Largest k is π/h (beyond that, the modes repeat periodically); for it, λ = 4D/h². Smallest k is π/L; for it, λ ≈ π²D/L². Ratio of the largest and smallest λ is ~L²/h². Stiff!
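The stiffness estimate can be checked directly (assumed parameters): the eigenvalues of the semi-discrete operator, i.e. of the tridiagonal second-difference matrix scaled by D/h², range from ≈ π²D/L² up to ≈ 4D/h²:

```python
import numpy as np

D, L, M = 1.0, 1.0, 100          # assumed parameters
h = L / M

# (M-1) x (M-1) second-difference matrix times D/h^2
A = (np.diag(np.full(M - 2, 1.0), -1)
     - 2.0 * np.eye(M - 1)
     + np.diag(np.full(M - 2, 1.0), 1)) * D / h**2

lam = -np.linalg.eigvalsh(A)     # decay rates, all positive
print(lam.min(), lam.max(), lam.max() / lam.min())
```

The smallest rate is close to π²D/L², the largest close to 4D/h², and their ratio ~4L²/(π²h²), confirming the stiffness of the system.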
[figure: stability regions of ODE methods in the complex λτ plane; from Gustafsson, Fundamentals of Scientific Computing]

Even the Euler method can be stable, if the time step is not too large.
du_m(t)/dt = D [u_{m+1}(t) − 2u_m(t) + u_{m−1}(t)] / h²

Forward Euler for u̇ = f(u) is u^{n+1} = u^n + τ f(u^n), giving

u_m^{n+1} = u_m^n + (Dτ/h²) (u_{m+1}^n − 2u_m^n + u_{m−1}^n)

The forward-time centred-space (FTCS) scheme.

Von Neumann stability analysis: for continuous time we had u_m(t) = e^{ikhm − λt}, so substitute u_m^n = e^{ikhm − λτn}:

e^{ikhm − λτ(n+1)} = e^{ikhm − λτn} + (Dτ/h²) (e^{ikh(m+1) − λτn} − 2 e^{ikhm − λτn} + e^{ikh(m−1) − λτn})

e^{−λτ} = 1 + (Dτ/h²) (e^{ikh} − 2 + e^{−ikh})  ⇒  λ = −(1/τ) ln [1 − (2Dτ/h²){1 − cos(kh)}] = −(1/τ) ln [1 − (4Dτ/h²) sin²(kh/2)]

Stability requires Re λ ≥ 0 for all k:

|1 − (4Dτ/h²) sin²(kh/2)| ≤ 1  ⇒  τ ≤ τ_max = h²/(2D)

This stability criterion is rather stringent: τ_max is quadratic in the mesh step. Since τ = O(h²), the overall error is O(h²) + O(τ) = O(h²).
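The stability limit is easy to see in practice. A sketch (assumed parameters): run FTCS from a random initial condition, which excites all modes, once just below and once just above τ_max = h²/(2D):

```python
import numpy as np

def ftcs(tau, steps, D=1.0, M=50, L=1.0):
    """Run FTCS from a random initial condition; return final max |u|."""
    h = L / M
    rng = np.random.default_rng(0)
    u = rng.random(M + 1)
    u[0] = u[-1] = 0.0                  # absorbing ends
    r = D * tau / h**2
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return np.max(np.abs(u))

h = 1.0 / 50
tau_max = h**2 / 2.0                    # stability limit for D = 1
stable = ftcs(0.9 * tau_max, 2000)      # decays
unstable = ftcs(1.1 * tau_max, 2000)    # short-wavelength mode blows up
print(stable, unstable)
```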
It does not really make sense to use a higher-order explicit method for the time evolution: the stability ranges of all of them are not very different, so τ ~ h² anyway, and then the time scheme is more accurate than needed. In fact, trying to be 2nd order in time can make things worse: Richardson (1910) – the leapfrog method.
∂u(x, t)/∂t = D ∂²u(x, t)/∂x²

Take the centred difference in time:

[u_m^{n+1} − u_m^{n−1}] / (2τ) = D [u_{m+1}^n − 2u_m^n + u_{m−1}^n] / h²

Substituting u_m^n = e^{ikhm − λτn}:

[e^{−λτ} − e^{λτ}] / (2τ) = (2D/h²)(cos kh − 1)

Denote e^{−λτ} ≡ G,  b ≡ (4Dτ/h²)(1 − cos kh). Then

G − 1/G = −b  ⇒  G = [−b ± √(b² + 4)] / 2

For b ≠ 0, one of the two roots always has |G| > 1, i.e. Re λ < 0 for the corresponding mode.

Unconditionally unstable!
On the other hand, going back to the FTCS scheme: since the error is O(h²) + O(τ) and τ has to be O(h²) anyway, these terms may cancel out, giving a higher-order method.

λ = −(1/τ) ln [1 − (2Dτ/h²){1 − cos(kh)}]
  = −(1/τ) ln [1 − (2Dτ/h²){(kh)²/2 − (kh)⁴/24 + O((kh)⁶)}]
  = (2D/h²){(kh)²/2 − (kh)⁴/24 + O((kh)⁶)} + D²k⁴τ/2 + …
  = Dk² [1 + (kh)² (Dτ/(2h²) − 1/12) + O((kh)⁴)]

The (kh)² correction vanishes when τ = h²/(6D) = τ_max/3, and then the method is O(h⁴).
M.V. Chubynsky and G.W. Slater, Phys. Rev. E 85, 016709.
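The cancellation can be checked numerically. A sketch (assumed wavenumber and grid spacings): compare the FTCS decay rate λ(k) with the exact Dk² as h is halved, once at the "magic" step τ = h²/(6D) and once at a generic step τ = h²/(4D); the error should drop ~16× vs ~4×:

```python
import numpy as np

def lam_ftcs(k, h, tau, D=1.0):
    # decay rate of mode k under FTCS (from the von Neumann analysis)
    return -np.log(1.0 - (4.0 * D * tau / h**2) * np.sin(k * h / 2.0)**2) / tau

D, k = 1.0, 2.0 * np.pi              # assumed parameters
errs_magic, errs_half = [], []
for h in [0.02, 0.01, 0.005]:
    errs_magic.append(abs(lam_ftcs(k, h, h**2 / (6.0 * D)) - D * k**2))
    errs_half.append(abs(lam_ftcs(k, h, h**2 / (4.0 * D)) - D * k**2))

# halving h cuts the tau = h^2/6D error ~16x (4th order), the generic one ~4x
print(errs_magic[0] / errs_magic[1], errs_half[0] / errs_half[1])
```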
Recall the FTCS update:

u_m^{n+1} = u_m^n + (Dτ/h²) (u_{m+1}^n − 2u_m^n + u_{m−1}^n)
For τ = τ_max = h²/(2D), this becomes

u_m^{n+1} = (1/2) (u_{m+1}^n + u_{m−1}^n)

Suppose u_m^n is 1 for even m and 0 for odd m. Then u_m^{n+1} will be 0 for even m and 1 for odd m, and this mode will oscillate forever; for larger τ it will grow.

The small time step is needed to take care of the short-wavelength modes. They die down rapidly, so after a short while we don't care about them, but we cannot ignore them either: if the method is unstable, they will grow back. As we know from ODEs, the way to deal with this is implicit schemes that are stable for arbitrarily large steps.
Boundary conditions are implemented as before: fix u on the boundary for Dirichlet and introduce ghost sites for Neumann. FTCS does conserve the particle number for reflecting BC or in infinite space.
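A sketch of the reflecting (zero-flux Neumann) case with ghost sites (assumed parameters): the ghost values u_{−1} = u_1 and u_{M+1} = u_{M−1} make the centred boundary derivative vanish, and FTCS then conserves the discrete particle number exactly; note that the conserved quantity here carries trapezoidal half-weights at the two endpoints, a detail not spelled out in the text:

```python
import numpy as np

D, M, L = 1.0, 50, 1.0
h = L / M
tau = 0.4 * h**2 / D                       # below the stability limit
x = np.linspace(0.0, L, M + 1)
u = np.exp(-100.0 * (x - 0.3)**2)          # initial blob
w = np.ones(M + 1)
w[0] = w[-1] = 0.5                         # trapezoidal quadrature weights
n0 = h * np.sum(w * u)                     # discrete particle number

r = D * tau / h**2
for _ in range(1000):
    g = np.concatenate(([u[1]], u, [u[-2]]))   # ghosts: u_{-1}=u_1, u_{M+1}=u_{M-1}
    u = u + r * (g[2:] - 2.0 * g[1:-1] + g[:-2])

n1 = h * np.sum(w * u)
print(abs(n1 - n0))                        # conserved to rounding
```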
A trick that does not really work. Start from the unconditionally unstable leapfrog scheme:

[u_m^{n+1} − u_m^{n−1}] / (2τ) = D [u_{m+1}^n − 2u_m^n + u_{m−1}^n] / h²

Try to remove the instability by making it (partially) implicit:

[u_m^{n+1} − u_m^{n−1}] / (2τ) = D [u_{m+1}^n − (u_m^{n+1} + u_m^{n−1}) + u_{m−1}^n] / h²

It turns out this is unconditionally stable. Moreover, even though it looks implicit, it is a linear equation for u_m^{n+1} alone, so we can solve it explicitly. However,

[u_{m+1}^n − (u_m^{n+1} + u_m^{n−1}) + u_{m−1}^n] / h²
  = [u_{m+1}^n − 2u_m^n + u_{m−1}^n] / h² − [u_m^{n+1} − 2u_m^n + u_m^{n−1}] / h²
  → ∂²u/∂x² − (τ²/h²) ∂²u/∂t²,

so it is a correct representation of the diffusion equation only when τ ≪ h, and we have not gained anything.

DuFort–Frankel scheme.
du_m(t)/dt = D [u_{m+1}(t) − 2u_m(t) + u_{m−1}(t)] / h²

Backward Euler for u̇ = f(u) is u^{n+1} = u^n + τ f(u^{n+1}), giving

u_m^{n+1} = u_m^n + (Dτ/h²) (u_{m+1}^{n+1} − 2u_m^{n+1} + u_{m−1}^{n+1})

A tridiagonal set of equations for u_m^{n+1}. The backward-time centred-space (BTCS) scheme.

Von Neumann analysis with u_m^n = e^{ikhm − λτn}:

e^{ikhm − λτ(n+1)} = e^{ikhm − λτn} + (Dτ/h²) (e^{ikh(m+1) − λτ(n+1)} − 2 e^{ikhm − λτ(n+1)} + e^{ikh(m−1) − λτ(n+1)})

1 = e^{λτ} + (Dτ/h²) (e^{ikh} − 2 + e^{−ikh})  ⇒  λ = (1/τ) ln [1 + (2Dτ/h²){1 − cos(kh)}]

(For FTCS we had λ = −(1/τ) ln [1 − (2Dτ/h²){1 − cos(kh)}].)

Now λ ≥ 0 always: unconditionally stable. But the 1st-order accuracy is now an issue.
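The tridiagonal system is cheap to solve. A sketch (assumed parameters) of one BTCS step via the Thomas algorithm, run with a time step 100× the FTCS limit to show the unconditional stability:

```python
import numpy as np

def btcs_step(u, r):
    """Solve (1+2r) x_m - r x_{m-1} - r x_{m+1} = u_m (zero Dirichlet BC)."""
    n = len(u)
    a = np.full(n, -r)                 # subdiagonal (a[0] unused)
    b = np.full(n, 1.0 + 2.0 * r)      # diagonal
    c = np.full(n, -r)                 # superdiagonal (c[-1] unused)
    d = u.astype(float).copy()
    for i in range(1, n):              # forward elimination
        wgt = a[i] / b[i - 1]
        b[i] -= wgt * c[i - 1]
        d[i] -= wgt * d[i - 1]
    x = np.empty(n)                    # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

D, M, L = 1.0, 50, 1.0                 # assumed parameters
h = L / M
tau = 50.0 * h**2 / D                  # 100x the explicit stability limit
x = np.linspace(0.0, L, M + 1)
u = np.sin(np.pi * x / L)[1:-1]        # interior points, u_0 = u_M = 0
for _ in range(100):
    u = btcs_step(u, D * tau / h**2)
print(np.max(np.abs(u)))               # decays smoothly, no blow-up
```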
du_m(t)/dt = D [u_{m+1}(t) − 2u_m(t) + u_{m−1}(t)] / h²

Trapezoidal method for u̇ = f(u):   u^{n+1} = u^n + (τ/2) [f(u^n) + f(u^{n+1})], giving

u_m^{n+1} = u_m^n + (Dτ/2h²) (u_{m+1}^{n+1} − 2u_m^{n+1} + u_{m−1}^{n+1} + u_{m+1}^n − 2u_m^n + u_{m−1}^n)

Crank–Nicolson scheme (often misspelled Nicholson). 2nd order in time as well.

Von Neumann analysis:

e^{ikhm − λτ(n+1)} = e^{ikhm − λτn} + (Dτ/2h²) (e^{ikh(m+1) − λτ(n+1)} − 2 e^{ikhm − λτ(n+1)} + e^{ikh(m−1) − λτ(n+1)})
                   + (Dτ/2h²) (e^{ikh(m+1) − λτn} − 2 e^{ikhm − λτn} + e^{ikh(m−1) − λτn})

e^{−λτ} − 1 = (Dτ/h²)(cos kh − 1)(e^{−λτ} + 1)

With G ≡ e^{−λτ} and b ≡ (Dτ/h²)(1 − cos kh):

G − 1 = −b(G + 1)  ⇒  G = (1 − b)/(1 + b)

|G| < 1 for b > 0. Unconditionally stable.
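The amplification factor can be verified directly. A sketch (assumed parameters, periodic BC so that a single Fourier mode is an exact eigenvector; a dense solve stands in for the tridiagonal one): apply one Crank–Nicolson step to sin(kx) and compare with G = (1 − b)/(1 + b):

```python
import numpy as np

D, M, L = 1.0, 64, 1.0                    # assumed parameters
h = L / M
tau = 10.0 * h**2 / D                     # well above the explicit limit
x = np.arange(M) * h
k = 2.0 * np.pi * 5 / L                   # a periodic mode
u = np.sin(k * x)

r = D * tau / (2.0 * h**2)
# periodic second-difference matrix
lap = (np.roll(np.eye(M), 1, axis=1) - 2.0 * np.eye(M)
       + np.roll(np.eye(M), -1, axis=1))
A = np.eye(M) - r * lap                   # implicit side
B = np.eye(M) + r * lap                   # explicit side
u1 = np.linalg.solve(A, B @ u)            # one Crank-Nicolson step

b = (D * tau / h**2) * (1.0 - np.cos(k * h))
G = (1.0 - b) / (1.0 + b)
err = np.max(np.abs(u1 - G * u))
print(G, err)                             # err ~ machine precision
```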
Recall a nice property of the trapezoidal scheme: if an ODE has a solution ∼ e^{λt} with purely imaginary λ, the trapezoidal scheme preserves this property: the numerical solution neither grows nor decays. This is useful for solving the time-dependent Schrödinger equation.
Rescale t and x to get

i ∂ψ/∂t = −∂²ψ/∂x² + V(x) ψ

If V(x) = 0 and there are no boundaries, we can still look for a solution of the form

ψ(x, t) = exp(−λt + ikx)  ⇒  −iλ = k²  ⇒  λ = ik².
The wave function preserves its norm, and the evolution operator is unitary:

∫ |ψ(x, t)|² dx = const,   ψ(x, t) = e^{−iĤt} ψ(x, 0),   i ∂ψ/∂t = Ĥψ

We therefore need a one-step evolution operator that is unitary as well:

ψ^{n+1} = Û ψ^n,   Û†Û = Î,

so that

∫ (Ûψ)* (Ûψ) dx = ∫ ψ* (Û†Û ψ) dx = ∫ ψ*ψ dx.
∂ψ/∂t = −i (−∂²ψ/∂x² + V(x)ψ) = −i Ĥψ,   discretized:   H̃ψ_m^n = −(ψ_{m+1}^n − 2ψ_m^n + ψ_{m−1}^n)/h² + V_m ψ_m^n

The forward Euler method is unstable for purely imaginary λ. For FTCS,

ψ_m^{n+1} = ψ_m^n − iτ H̃ ψ_m^n  ⇒  ψ^{n+1} = (1 − iτH̃) ψ^n

(1 − iτH̃)†(1 − iτH̃) = (1 + iτH̃)(1 − iτH̃) = 1 + τ²H̃² ≠ 1
In the backward Euler method, modes with purely imaginary λ decay. For BTCS,

ψ_m^{n+1} = ψ_m^n − iτ H̃ ψ_m^{n+1}  ⇒  ψ^{n+1} = (1 + iτH̃)^{−1} ψ^n

[(1 + iτH̃)^{−1}]† (1 + iτH̃)^{−1} = [(1 + iτH̃)(1 − iτH̃)]^{−1} = (1 + τ²H̃²)^{−1} ≠ 1

In the trapezoidal method, the oscillations neither grow nor decay.
∂ψ/∂t = −i (−∂²ψ/∂x² + V(x)ψ) = −i Ĥψ,   discretized:   H̃ψ_m^n = −(ψ_{m+1}^n − 2ψ_m^n + ψ_{m−1}^n)/h² + V_m ψ_m^n

For Crank–Nicolson,

ψ_m^{n+1} = ψ_m^n − (iτH̃/2)(ψ_m^n + ψ_m^{n+1})

(1 + iτH̃/2) ψ^{n+1} = (1 − iτH̃/2) ψ^n  ⇒  ψ^{n+1} = (1 + iτH̃/2)^{−1} (1 − iτH̃/2) ψ^n

[(1 + iτH̃/2)^{−1}(1 − iτH̃/2)]† (1 + iτH̃/2)^{−1}(1 − iτH̃/2)
  = (1 − iτH̃/2)^{−1}(1 + iτH̃/2)(1 + iτH̃/2)^{−1}(1 − iτH̃/2) = 1

Can also see this from the von Neumann analysis: we had G = (1 − b)/(1 + b); substituting b → ib,

|(1 − ib)/(1 + ib)| = 1
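A sketch of this Cayley form of Crank–Nicolson in action (assumed parameters; an illustrative harmonic potential; dense matrices and a periodic Laplacian for brevity): since H̃ is Hermitian, the propagator is unitary and the norm is conserved to rounding over many steps:

```python
import numpy as np

M, L, tau = 128, 20.0, 0.01               # assumed parameters
h = L / M
x = (np.arange(M) - M / 2) * h
V = 0.5 * x**2                            # e.g. a harmonic well (assumed)
lap = (np.roll(np.eye(M), 1, axis=1) - 2.0 * np.eye(M)
       + np.roll(np.eye(M), -1, axis=1)) / h**2
H = -lap + np.diag(V)                     # discretized Hamiltonian (Hermitian)

A = np.eye(M) + 0.5j * tau * H
B = np.eye(M) - 0.5j * tau * H
U = np.linalg.solve(A, B)                 # (1 + i tau H/2)^{-1} (1 - i tau H/2)

psi = np.exp(-(x - 2.0)**2 + 1.0j * x)    # Gaussian wave packet
psi = psi / np.sqrt(h * np.sum(np.abs(psi)**2))
for _ in range(500):
    psi = U @ psi
norm = h * np.sum(np.abs(psi)**2)
print(abs(norm - 1.0))                    # conserved to rounding
```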
Another, explicit possibility is, surprisingly, the leapfrog method:

[ψ_m^{n+1} − ψ_m^{n−1}] / (2τ) = −i H̃ ψ_m^n  ⇒  ψ_m^{n+1} = ψ_m^{n−1} − 2iτ H̃ ψ_m^n

For diffusion we had G ≡ e^{−λτ}, b ≡ (4Dτ/h²)(1 − cos kh), and G = [−b ± √(b² + 4)]/2. Here b → ib, so

G = [−i|b| ± √(4 − |b|²)] / 2,   |G| = √[(4 − |b|²) + |b|²] / 2 = 1   for |b| < 2.

A separate starting step is needed, e.g.

ψ_m^1 = e^{−iτĤ} ψ_m^0 ≈ Σ_{j=0}^{2} [(−iτH̃)^j / j!] ψ_m^0

Higher dimensions
Of course, the explicit methods (FTCS for the diffusion equation and leapfrog for the Schrödinger equation) can be generalized straightforwardly.
In this case, for

∂u(x, y, t)/∂t = D [∂²u(x, y, t)/∂x² + ∂²u(x, y, t)/∂y²] + D f(x, y),

FTCS reads

u_{j,l}^{n+1} = u_{j,l}^n + (Dτ/h²) (u_{j+1,l}^n + u_{j−1,l}^n + u_{j,l+1}^n + u_{j,l−1}^n − 4u_{j,l}^n) + τD f_{j,l}

with τ_max = h²/(4D). For τ = τ_max,

u_{j,l}^{n+1} = (1/4) (u_{j+1,l}^n + u_{j−1,l}^n + u_{j,l+1}^n + u_{j,l−1}^n) + (h²/4) f_{j,l}

This is exactly the Jacobi iteration for the elliptic problem

∂²u(x, y)/∂x² + ∂²u(x, y)/∂y² = −f(x, y)
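A sketch of this equivalence (assumed parameters, zero Dirichlet BC, uniform source f = 1): iterating the τ = τ_max FTCS update to a steady state is literally Jacobi iteration, and converges to the solution of the discrete Poisson problem:

```python
import numpy as np

M = 40                                    # assumed grid size
h = 1.0 / M
f = np.ones((M + 1, M + 1))               # uniform source
u = np.zeros((M + 1, M + 1))              # u = 0 on the boundary
for _ in range(5000):
    # FTCS at tau = tau_max, i.e. one Jacobi sweep per "time step"
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]) \
                    + (h**2 / 4.0) * f[1:-1, 1:-1]

# residual of the discrete Poisson equation  u_xx + u_yy = -f
res = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4.0 * u[1:-1, 1:-1]) / h**2 + f[1:-1, 1:-1]
print(np.max(np.abs(res)), u[M // 2, M // 2])
```

The centre value approaches the known maximum ≈ 0.0737 of the solution of ∇²u = −1 on the unit square with zero boundary values.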
Higher dimensions
It is possible in principle to use the Crank–Nicolson method in higher dimensions, but the matrix will no longer be tridiagonal; instead it has a structure similar to the one we encountered for elliptic equations.
Alternating direction implicit (ADI) method: split each step into 2 substeps, treating 1 direction implicitly and the other explicitly in each substep.
u_{j,l}^{n+1/2} = u_{j,l}^n + (Dτ/2h²) (u_{j+1,l}^{n+1/2} − 2u_{j,l}^{n+1/2} + u_{j−1,l}^{n+1/2} + u_{j,l+1}^n − 2u_{j,l}^n + u_{j,l−1}^n)

u_{j,l}^{n+1} = u_{j,l}^{n+1/2} + (Dτ/2h²) (u_{j+1,l}^{n+1/2} − 2u_{j,l}^{n+1/2} + u_{j−1,l}^{n+1/2} + u_{j,l+1}^{n+1} − 2u_{j,l}^{n+1} + u_{j,l−1}^{n+1})
A particular case of general operator splitting methodology (if there are several terms, each can be treated in a separate substep), although with a slightly different twist.
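A sketch of one ADI step in this spirit (assumed parameters; zero Dirichlet BC; dense solves stand in for the per-row/per-column tridiagonal ones): each half-step is implicit along one direction only, and the slowest mode decays smoothly even with a time step far above the explicit limit:

```python
import numpy as np

D, M = 1.0, 32                            # assumed parameters
h = 1.0 / M
tau = 10.0 * h**2 / D                     # well above the explicit limit
r = D * tau / (2.0 * h**2)

n = M - 1                                 # interior points per direction
T = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1))  # 1D second-difference matrix
A = np.eye(n) - r * T                     # 1D implicit operator (symmetric)

x = np.linspace(0.0, 1.0, M + 1)[1:-1]
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # slowest mode

def adi_step(u):
    # substep 1: implicit in x (first index), explicit in y (second index)
    rhs = u + r * (u @ T)
    half = np.linalg.solve(A, rhs)
    # substep 2: implicit in y, explicit in x
    rhs = half + r * (T @ half)
    return np.linalg.solve(A.T, rhs.T).T

for _ in range(50):
    u = adi_step(u)
peak = np.max(np.abs(u))
print(peak)                               # smooth decay, no instability
```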
Nonlinear equations
Straightforward with explicit methods; harder with implicit ones, since the algebraic system to be solved at each step is now nonlinear. Remedies: lagging the nonlinear coefficients, fixed-point iteration, Newton's method.
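A sketch of the first remedy (all parameters and the model D(u) = 1 + u² are assumptions for illustration): for u_t = (D(u) u_x)_x, evaluate D(u) at the old time level, so each backward Euler step is still a linear tridiagonal solve (done densely here for brevity):

```python
import numpy as np

def lagged_step(u, tau, h):
    """One backward Euler step with D(u) lagged at the old time level."""
    n = len(u)
    Dn = 1.0 + u**2                         # assumed D(u), evaluated at u^n
    Df = 0.5 * (Dn[:-1] + Dn[1:])           # face values D_{m+1/2}
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0               # u = 0 at both ends
    for i in range(1, n - 1):
        A[i, i - 1] = -tau / h**2 * Df[i - 1]
        A[i, i] = 1.0 + tau / h**2 * (Df[i - 1] + Df[i])
        A[i, i + 1] = -tau / h**2 * Df[i]
    b = u.copy()
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

M = 50                                      # assumed parameters
h = 1.0 / M
x = np.linspace(0.0, 1.0, M + 1)
u = np.sin(np.pi * x)
for _ in range(20):
    u = lagged_step(u, tau=10.0 * h**2, h=h)   # step above the explicit limit
print(np.max(u))                            # decays smoothly
```

Iterating the solve within a step (or using Newton's method) would recover the fully implicit nonlinear update; lagging trades some accuracy for one linear solve per step.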