series, integral transforms, and Sturm–Liouville problems, necessary for a proper
understanding of the underlying foundations and technical details.
The book provides fundamental concepts, ideas, and terminology related to PDEs.
It then discusses d'Alembert's method as well as the separation of variables method for
the wave equation on rectangular and circular domains. Building on this, the book
studies the solution of the heat equation using Fourier and Laplace transforms,
and examines the Laplace and Poisson equations on different rectangular and circular
domains. The authors discuss finite difference methods for elliptic, parabolic, and
hyperbolic partial differential equations, important tools in applied mathematics
and engineering. This facilitates the proper understanding of numerical solutions
of the above-mentioned equations. In addition, applications using Mathematica®
are provided.
Features
• Covers basic theory, concepts, and applications of PDEs in engineering and
science
• Includes solutions to selected examples as well as exercises in each chapter
• Uses Mathematica along with graphics to visualize computations, improving
understanding and interpretation
• Provides adequate training for those who plan to continue studies in this area
Written for a one- or two-semester course, the text covers all the elements
encountered in theory and applications of PDEs and can be used with or without
computer software. The presentation is simple and clear, with no sacrifice of
rigor. Where proofs are beyond the mathematical background of students, a
short bibliography is provided for those who wish to pursue a given topic further.
Throughout the text, the illustrations, numerous solved examples, and projects
have been chosen to make the exposition as clear as possible.
INTRODUCTION TO
PARTIAL DIFFERENTIAL
EQUATIONS FOR SCIENTISTS
AND ENGINEERS USING
MATHEMATICA
Kuzman Adzievski • Abul Hasan Siddiqi
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2014 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222
Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
CONTENTS
Preface ix
Acknowledgments xiii
1 Fourier Series 1
1.1 Fourier Series of Periodic Functions . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Convergence of Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3 Integration and Differentiation of Fourier Series . . . . . . . . . . . 37
1.4 Fourier Sine and Cosine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
1.5 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2 Integral Transforms 83
2.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.1.1 Definition and Properties of the Laplace Transform . . . . . . . . 84
2.1.2 Step and Impulse Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.1.3 Initial-Value Problems and the Laplace Transform . . . . . . . . . 102
2.1.4 The Convolution Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
2.2 Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.2.1 Definition of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.2.2 Properties of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . 127
2.3 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.4.1 The Laplace Transform Method for the Heat Equation . . . . 366
6.4.2 The Fourier Transform Method for the Heat Equation . . . . . 371
6.5 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Appendices
A. Table of Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
B. Table of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
C. Series and Uniform Convergence Facts . . . . . . . . . . . . . . . . . . . . 526
D. Basic Facts of Ordinary Differential Equations . . . . . . . . . . . . . 529
E. Vector Calculus Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
F. A Summary of Analytic Function Theory . . . . . . . . . . . . . . . . . . 549
G. Euler Gamma and Beta Functions . . . . . . . . . . . . . . . . . . . . . . . . . 556
H. Basics of Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Bibliography 574
Index 631
PREFACE
The idea for writing this book was initiated while both authors were visiting
Sultan Qaboos University in 2009. We thank Dr. M. A-Lawatia, Head of the
Department of Mathematics & Statistics at Sultan Qaboos University, who
provided a congenial environment for this project.
The first author is grateful for the useful comments of many of his colleagues
at South Carolina State University, in particular: Sam McDonald, Ikhalfani
Solan, Jean-Michelet Jean Michel, and Guttalu Viswanath.
Many students of the first author who have taken applied mathematics
checked and tested the problems in the book, and their suggestions are appreciated.
The first author is especially grateful for the continued support and encouragement of his wife, Sally Adzievski, while writing this book, as well as
for her useful linguistic advice.
Special thanks also are due to Slavica Grdanovska, a graduate research
assistant at the University of Maryland, for checking the examples and answers
to the exercises for accuracy in several chapters of the book.
The second author is indebted to his wife, Dr. Azra Siddiqi, for her encouragement. The second author would like to thank his colleagues, particularly
Professor P. Manchanda of Guru Nanak Dev University, Amritsar, India. Appreciation is also given to six research scholars of Sharda University, NCR,
India, working under the supervision of the second author, who have read the
manuscript of the book carefully and have made valuable comments.
Special thanks are due to Ms. Marsha Pronin, project coordinator, Taylor
& Francis Group, Boca Raton, Florida, for her help in the final stages of
manuscript preparation.
We would like to thank also Mr. Shashi Kumar from the Help Desk of
Taylor & Francis for help in formatting the whole manuscript.
We acknowledge Ms. Michele Dimont, project editor, Taylor & Francis
Group, Boca Raton, Florida, for editing the entire manuscript.
We acknowledge the sincere efforts of Ms. Aastha Sharma, acquiring editor,
CRC Press, Taylor & Francis Group, Delhi office, whose constant persuasion
enabled us to complete the book in a timely manner.
This book was typeset by AMS-TeX, the TeX macro system of the American Mathematical Society.
Kuzman Adzievski
Abul Hasan Siddiqi
CHAPTER 1
FOURIER SERIES
The purpose of this chapter is to acquaint students with some of the most
important aspects of the theory and applications of Fourier series.
Fourier analysis is a branch of mathematics that was invented to solve some
partial differential equations modeling certain physical problems. The history
of the subject of Fourier series begins with d’Alembert (1747) and Euler (1748)
in their analysis of the oscillations of a violin string. The mathematical theory
of such vibrations, under certain simplified physical assumptions, comes down
to the problem of solving a particular class of partial differential equations.
Their ideas were further advanced by D. Bernoulli (1753) and Lagrange (1759).
Fourier’s contributions begin in 1807 with his study of the problem of heat
flow presented to the Académie des Sciences. He made a serious attempt to
show that an "arbitrary" function f of period T can be expressed as an infinite
linear combination of the trigonometric functions sine and cosine of the same
period T:
$$
f(t) = \sum_{n=0}^{\infty}\left[a_n \cos\left(\frac{2n\pi t}{T}\right) + b_n \sin\left(\frac{2n\pi t}{T}\right)\right].
$$
f (x + T ) = f (x)
for all x ∈ R.
(c) If f is T-periodic and integrable on [0, T], then for every x,
$$ \int_{x}^{x+T} f(t)\,dt = \int_{0}^{T} f(t)\,dt. $$
(d) If f is T-periodic and a > 0, then f(ax) has period T/a, since
$$ f\!\left(a\left(x + \frac{T}{a}\right)\right) = f(ax + T) = f(ax). $$
$$ \tilde{f}(x) = f(x - 2n\pi). $$
Note that if $-\pi < x \le \pi$, then $n = 0$ and hence $\tilde{f}(x) = f(x)$. The proof
that the resulting function $\tilde{f}$ is $2\pi$-periodic is left as an exercise. ■
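The reduction $x \mapsto x - 2n\pi$ is straightforward to implement. The book's projects use Mathematica; the following Python sketch (function and variable names here are illustrative, not from the text) computes the periodic extension of a function given on the base interval $(-\pi, \pi]$:

```python
import math

def periodic_extension(f, x, period=2 * math.pi):
    # Choose the integer n with x - n*period in (-period/2, period/2],
    # then evaluate the base function f there.
    n = math.ceil((x - period / 2) / period)
    return f(x - n * period)

# Example: extend f(x) = x from (-pi, pi] to all of R.
f = lambda x: x
print(periodic_extension(f, 7.0))  # same as f(7 - 2*pi)
```

The same idea works for any period; only the shift `n * period` changes.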
3. The product of two even functions and the product of two odd functions is
an even function.
Since integrable functions will play an important role throughout this sec-
tion we need the following definition.
Definition 1.1.3. A function f : R → R is said to be Riemann integrable
on an interval [a, b] (finite or infinite) if
$$ \int_{a}^{b} |f(x)|\,dx < \infty. $$
$$ \int_{-a}^{a} f(x)\,dx = 2\int_{0}^{a} f(x)\,dx. $$
$$ \int_{-a}^{a} f(x)\,dx = 0. $$
Proof. We will prove the above result only for even functions f , leaving the
case when f is an odd function as an exercise. Assume that f is an even
function. Then
$$
\begin{aligned}
\int_{-a}^{a} f(x)\,dx &= \int_{-a}^{0} f(x)\,dx + \int_{0}^{a} f(x)\,dx = \int_{-a}^{0} f(-x)\,dx + \int_{0}^{a} f(x)\,dx \\
&= -\int_{a}^{0} f(t)\,dt + \int_{0}^{a} f(x)\,dx = \int_{0}^{a} f(t)\,dt + \int_{0}^{a} f(x)\,dx \\
&= 2\int_{0}^{a} f(x)\,dx. \qquad\blacksquare
\end{aligned}
$$
Each term of the above series has period 2L, so if the sum of the series
exists, it will be a function of period 2L. With this expansion there are three
fundamental questions to be addressed:
(a) What values do the coefficients a0 , an , bn have?
(b) If the appropriate values are assigned to the coefficients, does the series converge at some points x ∈ R?
Notice that even if the coefficients $a_n$ and $b_n$ are well-defined numbers,
there is no guarantee that the resulting Fourier series converges, and even if
it converges, there is no guarantee that it converges to the original function
f. For these reasons, we use the symbol S f(x) instead of f(x) when writing
a Fourier series.
The notion introduced in the next definition will be very useful when we
discuss the key issue of convergence of a Fourier series.
Definition 1.1.5. Let N be a natural number. The N-th partial sum of the
Fourier series of a function f is the trigonometric polynomial
$$ S_N f(x) = \frac{a_0}{2} + \sum_{n=1}^{N}\left[a_n \cos\left(\frac{n\pi}{L}x\right) + b_n \sin\left(\frac{n\pi}{L}x\right)\right], $$
where $a_n$ and $b_n$ are the Fourier coefficients of the function f.
If f is an even function, then
$$ a_n = \frac{2}{L}\int_{0}^{L} f(x)\cos\left(\frac{n\pi}{L}x\right)dx, \quad n = 0, 1, 2, \ldots \quad\text{and}\quad b_n = 0, \; n = 1, 2, \ldots; $$
if f is an odd function, then
$$ b_n = \frac{2}{L}\int_{0}^{L} f(x)\sin\left(\frac{n\pi}{L}x\right)dx, \quad n = 1, 2, \ldots \quad\text{and}\quad a_n = 0, \; n = 0, 1, \ldots. $$
For n = 1, 2, . . . we have
$$
\begin{aligned}
a_n &= \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\cdot\cos nx\,dx + \frac{1}{\pi}\int_{0}^{\pi} 1\cdot\cos nx\,dx \\
&= 0 + \frac{1}{n\pi}\left[\sin(n\pi) - \sin(0)\right] = 0;
\end{aligned}
$$
and
$$
\begin{aligned}
b_n &= \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\cdot\sin nx\,dx + \frac{1}{\pi}\int_{0}^{\pi} 1\cdot\sin nx\,dx \\
&= 0 - \frac{1}{n\pi}\left[\cos n\pi - \cos 0\right] = \frac{1 - (-1)^n}{n\pi}.
\end{aligned}
$$
Therefore the Fourier series of the function is given by
$$ S f(x) = \frac{1}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{2n-1}\,\sin\,(2n-1)x. $$
Figure 1.1.1 shows the graphs of the function f , together with the partial
sums SN f (x) of the function f taking N = 1 and N = 3 terms.
Figure 1.1.1
Figure 1.1.2 shows the graphs of the function f , together with the partial
sums of the function f taking N = 6 and N = 14 terms.
Figure 1.1.2
From these graphs we see that, as N increases, the partial sums $S_N f(x)$
become better approximations to the function f. It appears that the graphs
of $S_N f(x)$ are approaching the graph of f(x), except at the points where
f is discontinuous. In other words, it looks like f is equal to the sum of its Fourier
series except at the points where f is discontinuous.
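This behavior is easy to reproduce numerically. The book's plots are produced with Mathematica; the following Python sketch (an illustrative equivalent, not from the text) evaluates the partial sums of the series derived above and exhibits convergence to f(x) at a point of continuity and to the midpoint 1/2 at the jump x = 0:

```python
import math

def partial_sum(N, x):
    # N-th partial sum of S f(x) = 1/2 + (2/pi) * sum sin((2n-1)x)/(2n-1)
    return 0.5 + (2 / math.pi) * sum(
        math.sin((2 * n - 1) * x) / (2 * n - 1) for n in range(1, N + 1))

for N in (1, 10, 100):
    print(N, partial_sum(N, math.pi / 2))  # approaches f(pi/2) = 1
print(partial_sum(100, 0.0))               # equals 1/2 at the jump
```

Increasing N improves the approximation away from the jumps, exactly as the figures suggest.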
Example 1.1.2. Let f : R → R be the 2π-periodic function which on
[−π, π) is defined by
$$ f(x) = \begin{cases} -1, & -\pi \le x < 0 \\ \phantom{-}1, & \phantom{-}0 \le x < \pi. \end{cases} $$
Find the Fourier series of the function f .
Solution. The function is an odd function, hence by Lemma 1.1.1 its Fourier
cosine coefficients an are all zero and for n = 1, 2, · · · we have
$$
b_n = \frac{2}{\pi}\int_{0}^{\pi} f(x)\sin(nx)\,dx = \frac{2}{\pi}\int_{0}^{\pi} \sin nx\,dx
= \frac{2}{n\pi}\left[-\cos n\pi + 1\right] = \frac{2}{n\pi}\left[-(-1)^n + 1\right].
$$
Therefore the Fourier series of f (x) is
$$ S f(x) = \sum_{n=1}^{\infty} b_n \sin nx = \sum_{n=1}^{\infty} b_{2n-1}\sin\,(2n-1)x = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{2n-1}\,\sin\,(2n-1)x. $$
Figure 1.1.3 shows the graphs of the function f , together with the partial
sums
$$ S_N f(x) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\,(2n-1)x $$
of the function f , taking, respectively, N = 1 and N = 3 terms.
Figure 1.1.3
Figure 1.1.4 shows the graphs of the function f , together with the partial
sums of the function f , taking, respectively, N = 10 and N = 25 terms.
Figure 1.1.4
Figure 1.1.5 shows the graphs of the function f , together with the partial
sum SN f of the function f , taking N = 50 terms.
Figure 1.1.5
For n = 0, 2, 3, . . . we have
$$
\begin{aligned}
a_n &= \frac{2}{\pi}\int_{0}^{\pi} f(x)\cos nx\,dx = \frac{2}{\pi}\int_{0}^{\pi}\sin x\cos nx\,dx \\
&= \frac{2}{\pi}\int_{0}^{\pi}\frac{1}{2}\left[\sin\,(n+1)x - \sin\,(n-1)x\right]dx \\
&= \frac{1}{\pi}\left[-\frac{\cos\,(n+1)\pi}{n+1} + \frac{\cos\,(n-1)\pi}{n-1} + \frac{1}{n+1} - \frac{1}{n-1}\right] \\
&= \begin{cases} 0, & n \text{ odd} \\[1mm] \dfrac{1}{\pi}\left(\dfrac{2}{n+1} - \dfrac{2}{n-1}\right), & n \text{ even} \end{cases}
= \begin{cases} 0, & n \text{ odd} \\[1mm] -\dfrac{4}{(n^2-1)\pi}, & n \text{ even}. \end{cases}
\end{aligned}
$$
Figure 1.1.6 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 1 and N = 3 terms.
Figure 1.1.6
Figure 1.1.7 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 10 and N = 30
terms.
Figure 1.1.7
Figure 1.1.8 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 1 and N = 3 terms.
Figure 1.1.8
Figure 1.1.9 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 10 and N = 30
terms.
Figure 1.1.9
Figure 1.1.10 shows the graphs of the function f together with the partial
sum SN f of the function f , taking N = 50 terms.
Figure 1.1.10
$$
\begin{aligned}
a_n &= \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\left\{\int_{-\pi}^{0}\left(\frac{\pi}{2}+x\right)\cos nx\,dx + \int_{0}^{\pi}\left(\frac{\pi}{2}-x\right)\cos nx\,dx\right\} \\
&= \frac{1}{\pi}\left\{\left[\left(\frac{\pi}{2}+x\right)\frac{\sin nx}{n} + \frac{\cos nx}{n^2}\right]_{x=-\pi}^{x=0} + \left[\left(\frac{\pi}{2}-x\right)\frac{\sin nx}{n} - \frac{\cos nx}{n^2}\right]_{x=0}^{x=\pi}\right\} \\
&= \frac{2}{n^2\pi}\,(1 - \cos n\pi).
\end{aligned}
$$
The computation of the coefficients $b_n$ is like that of the $a_n$, and we find
that $b_n = 0$ for n = 1, 2, .... Hence the Fourier series of f(x) is
$$ S f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{\cos\,(2n-1)x}{(2n-1)^2}. $$
Figure 1.1.11 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 1 and N = 3 terms.
Figure 1.1.11
Figure 1.1.12
Figure 1.1.12 shows the graphs of the function f , together with the partial
sum SN f of the function f , taking N = 10 terms.
So far we have discussed Fourier series whose terms are the trigonometric
functions sine and cosine. An alternative, and more convenient, approach to
Fourier series is to use complex exponentials. There are several reasons for
doing this. One of the reasons is that this is a more compact form, but the
main reason is that this complex form of a Fourier series will allow us in the
next chapter to introduce important extensions—the Fourier transforms.
Let f : R → R be a 2L-periodic function which is integrable on [−L, L].
First let us introduce the variable
$$ \omega_n = \frac{n\pi}{L}. $$
where
$$ c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\,e^{-i\omega_n x}\,dx, \qquad n = 0, \pm 1, \pm 2, \ldots. \tag{1.1.5} $$
This is the complex form of the Fourier series of the function f on the interval
[−L, L].
Remark. Notice that even though the complex Fourier coefficients cn are
generally complex numbers, the summation (1.1.4) always gives a real valued
function S f (x). Also notice that the Fourier coefficient c0 is the mean of
the function f on the interval [−L, L].
We need to emphasize that the real Fourier series (1.1.3) and the complex
Fourier series (1.1.4) are just two different ways of writing the same series.
Using the relations an = cn + c−n and bn = i(cn − c−n ) we can change
the complex form of the Fourier series (1.1.4) to the real form (1.1.3), and
vice versa. Also note that for an even function all coefficients cn will be real
numbers, and for an odd function they are purely imaginary.
If m and n are any integers, then from the formulas
$$ \frac{1}{2L}\int_{-L}^{L} e^{i\omega_n x}\, e^{-i\omega_m x}\,dx = \begin{cases} 0, & m \ne n \\ 1, & m = n \end{cases} $$
For n ̸= 0 we have
$$
\begin{aligned}
c_n &= \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = -\frac{1}{2\pi}\int_{-\pi}^{0} e^{-inx}\,dx + \frac{1}{2\pi}\int_{0}^{\pi} e^{-inx}\,dx \\
&= \frac{1}{2in\pi}\,e^{-inx}\Big|_{x=-\pi}^{x=0} - \frac{1}{2in\pi}\,e^{-inx}\Big|_{x=0}^{x=\pi} \\
&= -\frac{i}{2n\pi}\left(1 - e^{in\pi}\right) + \frac{i}{2n\pi}\left(e^{-in\pi} - 1\right) = \frac{i}{n\pi}\left(\cos n\pi - 1\right) = \frac{i}{n\pi}\left((-1)^n - 1\right).
\end{aligned}
$$
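This closed form is easy to cross-check numerically. The following Python sketch (an illustrative aid, not from the text; the midpoint rule and tolerances are ad hoc choices) approximates the coefficient integral for the square wave and compares it with the formula just derived:

```python
import cmath
import math

def c_numeric(n, M=20000):
    # Midpoint-rule approximation of (1/2pi) * int_{-pi}^{pi} f(x) e^{-inx} dx
    # for the square wave f = -1 on (-pi, 0) and f = +1 on (0, pi).
    h = 2 * math.pi / M
    total = 0j
    for k in range(M):
        x = -math.pi + (k + 0.5) * h
        fx = -1.0 if x < 0 else 1.0
        total += fx * cmath.exp(-1j * n * x) * h
    return total / (2 * math.pi)

def c_exact(n):
    # c_n = (i / (n pi)) * ((-1)^n - 1), as computed above
    return 1j / (n * math.pi) * ((-1) ** n - 1)

for n in (1, 2, 3):
    print(n, c_numeric(n), c_exact(n))  # the two columns agree closely
```

Note that the even coefficients vanish, consistent with the factor $(-1)^n - 1$.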
1. Let f be a function defined on the interval [−a, a], a > 0. Show that:
(a) If f is even, then $\displaystyle\int_{-a}^{a} f(x)\,dx = 2\int_{0}^{a} f(x)\,dx$.

(b) If f is odd, then $\displaystyle\int_{-a}^{a} f(x)\,dx = 0$.
2. Let f be a function whose domain is the interval [−L, L]. Show that
f can be expressed as a sum of an even and an odd function.
$$ \int_{0}^{L} f(x)\,dx = \int_{a}^{a+L} f(x)\,dx. $$
5. Find the Fourier series of the following functions (the functions are all
understood to be periodic):
7. Show that the Fourier series (1.1.3) can be written in the following
form:
$$ S f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi}{L}x + \alpha_n\right), $$
where
$$ A_n = \sqrt{a_n^2 + b_n^2}, \qquad \tan\alpha_n = \frac{a_n}{b_n}. $$
8. Find the complex Fourier series for the following functions (the func-
tions are all understood to be periodic):
9. Show that if a is any real number, then the Fourier series of the
function f (x) = eax , 0 < x < 2π, is
$$ S(f, x) = \frac{e^{2a\pi} - 1}{2\pi}\sum_{n=-\infty}^{\infty}\frac{1}{a - in}\,e^{inx}. $$
10. Show that if a is any real number, then the Fourier series of the
function f (x) = eax , −π < x < π, is
$$ S(f, x) = \frac{e^{a\pi} - e^{-a\pi}}{2\pi}\sum_{n=-\infty}^{\infty}\frac{(-1)^n}{a - in}\,e^{inx}. $$
Proof. Let n be any positive integer. Using $|z|^2 = z\bar{z}$ for any
complex number z ($\bar{z}$ is the complex conjugate of z), we have
$$
\begin{aligned}
0 &\le \left| f(x) - \sum_{k=-n}^{n} c_k e^{ikx}\right|^2 = \left(f(x) - \sum_{k=-n}^{n} c_k e^{ikx}\right)\overline{\left(f(x) - \sum_{k=-n}^{n} c_k e^{ikx}\right)} \\
&= |f(x)|^2 - \sum_{k=-n}^{n}\left[c_k f(x)\,e^{ikx} + \overline{c_k}\,f(x)\,e^{-ikx}\right] + \sum_{k=-n}^{n}\sum_{j=-n}^{n} c_k\,\overline{c_j}\,e^{ikx}e^{-ijx}.
\end{aligned}
$$
Dividing both sides of the above equation by 2π, integrating over [−π, π] and
taking into account formula (1.1.5) for the complex Fourier coefficients cj ,
the orthogonality property of the set (1.1.6) implies
$$
\begin{aligned}
0 &\le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx - \sum_{k=-n}^{n}\left[c_k\overline{c_k} + \overline{c_k}c_k\right] + \sum_{k=-n}^{n} c_k\overline{c_k} \\
&= \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx - 2\sum_{k=-n}^{n}|c_k|^2 + \sum_{k=-n}^{n}|c_k|^2 \\
&= \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx - \sum_{k=-n}^{n}|c_k|^2.
\end{aligned}
$$
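Bessel's inequality can be checked numerically on a concrete function. The sketch below (a Python illustration, not from the text) uses $f(x) = x$ on $(-\pi, \pi)$, whose classical Fourier coefficients are $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$, so the right-hand side of the inequality is $\frac{1}{2\pi}\int_{-\pi}^{\pi} x^2\,dx = \pi^2/3$:

```python
import math

rhs = math.pi ** 2 / 3  # (1/2pi) * integral of x^2 over [-pi, pi]
for N in (10, 100, 1000):
    # Bessel partial sum (1/4)a_0^2 + (1/2) sum (a_n^2 + b_n^2),
    # with a_n = 0 and b_n = 2(-1)^(n+1)/n for f(x) = x.
    lhs = 0.5 * sum((2.0 / n) ** 2 for n in range(1, N + 1))
    print(N, lhs, "<=", rhs)
```

The partial sums increase toward $\pi^2/3$ but never exceed it; in fact equality holds in the limit, which is Parseval's identity.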
Remark. Based on the relations for the complex Fourier coefficients cn and
the real Fourier coefficients an and bn , Bessel’s inequality can also be stated
in terms of an and bn :
$$ \frac{1}{4}a_0^2 + \frac{1}{2}\sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) \le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx. $$
In particular, the series
$$ \sum_{n=-\infty}^{\infty}|c_n|^2, \qquad \sum_{n=0}^{\infty} a_n^2 \qquad\text{and}\qquad \sum_{n=1}^{\infty} b_n^2 $$
all converge.
The following terminology is needed in order to state and prove our convergence results.
$$ f(c_j^-) = \lim_{\substack{h\to 0\\ h>0}} f(c_j - h) \qquad\text{and}\qquad f(c_j^+) = \lim_{\substack{h\to 0\\ h>0}} f(c_j + h), \qquad j = 1, 2, \ldots, n $$
Figure 1.2.1
Figure 1.2.2
Figure 1.2.3
Example 1.2.4. The function f defined by $f(x) = \sqrt[3]{x}$ (see Figure 1.2.4)
is piecewise continuous but not piecewise smooth on any interval containing
x = 0, since
$$ f'(0^-) = f'(0^+) = \infty. $$
Figure 1.2.4
$$ \int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = \int_{-\pi}^{\pi} g(x)e^{-inx}\,dx, \qquad n = 0, \pm 1, \pm 2, \ldots, \tag{1.2.1} $$
$$ S_N f(x) = \sum_{n=-N}^{N} c_n e^{inx} = \frac{1}{2}a_0 + \sum_{n=1}^{N}\left[a_n\cos nx + b_n\sin nx\right]. \tag{1.2.2} $$
$$ S_N f(x) = \sum_{n=-N}^{N}\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-iny} f(y)\,dy\right)e^{inx} = \int_{-\pi}^{\pi}\left(\frac{1}{2\pi}\sum_{n=-N}^{N} e^{in(x-y)}\right)f(y)\,dy. $$
$$ D_N(x) = \frac{1}{2\pi}\sum_{n=-N}^{N} e^{inx}. \tag{1.2.3} $$
$$ S_N f(x) = \int_{-\pi}^{\pi} D_N(x-y)f(y)\,dy = \int_{-\pi}^{\pi} D_N(y)f(x-y)\,dy. \tag{1.2.4} $$
We discuss below some properties of the kernel DN (x) which plays a crucial
role in obtaining convergence results for Fourier series.
Proof. To prove (a) we use the definition of DN (x). From (1.2.3) we have
$$ D_N(x) = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{n=1}^{N}\cos nx, $$
and therefore
$$ \int_{-\pi}^{\pi} D_N(x)\,dx = \left[\frac{x}{2\pi} + \frac{1}{\pi}\sum_{n=1}^{N}\frac{\sin nx}{n}\right]_{x=-\pi}^{x=\pi} = 1. $$
−π
$$
\begin{aligned}
D_N(x) &= \frac{1}{2\pi}e^{-iNx}\sum_{n=0}^{2N} e^{inx} = \frac{1}{2\pi}e^{-iNx}\,\frac{e^{i(2N+1)x} - 1}{e^{ix} - 1} \\
&= \frac{1}{2\pi}\,\frac{e^{i(N+1)x} - e^{-iNx}}{e^{ix} - 1} = \frac{1}{2\pi}\,\frac{e^{i\left(N+\frac{1}{2}\right)x} - e^{-i\left(N+\frac{1}{2}\right)x}}{e^{i\frac{x}{2}} - e^{-i\frac{x}{2}}} \\
&= \frac{1}{2\pi}\,\frac{\sin\left(N+\frac{1}{2}\right)x}{\sin\frac{x}{2}},
\end{aligned}
$$
$$ \int_{-\pi}^{0} D_N(x)\,dx = \int_{0}^{\pi} D_N(x)\,dx = \frac{1}{2}. \tag{1.2.5} $$
−π 0
Plots of the kernel DN (x) for several values of N are presented in Figure
1.2.5.
Now we are ready to state and prove the main convergence theorem.
Theorem 1.2.3. If f is a 2π-periodic and piecewise smooth function on the
real line R, then
$$ \lim_{N\to\infty} S_N f(x) = \frac{f(x^+) + f(x^-)}{2}. $$
Figure 1.2.5
$$
\begin{aligned}
S_N f(x) - \frac{f(x^-)+f(x^+)}{2} &= \int_{-\pi}^{\pi} D_N(y)f(x-y)\,dy - \frac{f(x^-)+f(x^+)}{2} \\
&= \int_{-\pi}^{0} D_N(y)\left[f(x-y) - f(x^+)\right]dy + \int_{0}^{\pi} D_N(y)\left[f(x-y) - f(x^-)\right]dy \\
&= \int_{0}^{\pi} D_N(y)\left[f(x-y) - f(x^-)\right]dy + \int_{0}^{\pi} D_N(y)\left[f(x+y) - f(x^+)\right]dy \\
&= \int_{0}^{\pi} D_N(y)\left[f(x-y) - f(x^-) + f(x+y) - f(x^+)\right]dy \\
&= \frac{1}{2\pi}\int_{0}^{\pi}\frac{\sin\left(N+\frac{1}{2}\right)y}{\sin\frac{y}{2}}\left[f(x-y) - f(x^-) + f(x+y) - f(x^+)\right]dy \\
&= \frac{1}{\pi}\int_{0}^{\pi}\sin\left[\left(N+\tfrac{1}{2}\right)y\right]\frac{\frac{y}{2}}{\sin\frac{y}{2}}\,\frac{f(x-y) - f(x^-)}{y}\,dy \\
&\qquad + \frac{1}{\pi}\int_{0}^{\pi}\sin\left[\left(N+\tfrac{1}{2}\right)y\right]\frac{\frac{y}{2}}{\sin\frac{y}{2}}\,\frac{f(x+y) - f(x^+)}{y}\,dy.
\end{aligned}
$$
For fixed x, set
$$ F(y) = \frac{\frac{y}{2}}{\sin\frac{y}{2}}\cdot\frac{f(x-y) - f(x^-) + f(x+y) - f(x^+)}{y}. $$
It is easy to see that F(y) is an odd function of y on [−π, π]. We claim that
for each fixed value of x this function is piecewise continuous on [−π, π].
Indeed, we need to check the behavior of F(y) only at the point y = 0. Since
f is a piecewise smooth function we have
$$ \lim_{y\to 0^+} \frac{\frac{y}{2}}{\sin\frac{y}{2}}\left[\frac{f(x-y) - f(x^-)}{y} + \frac{f(x+y) - f(x^+)}{y}\right] = f'(x^+) + f'(x^-). $$
Therefore,
$$
\begin{aligned}
S_N f(x) - \frac{1}{2}\left[f(x^-) + f(x^+)\right] &= \frac{1}{\pi}\int_{0}^{\pi} F(y)\,\sin\left[\left(N+\tfrac{1}{2}\right)y\right]dy \\
&= \frac{1}{\pi}\int_{0}^{\pi}\left\{F(y)\cos\frac{y}{2}\right\}\sin(Ny)\,dy + \frac{1}{\pi}\int_{0}^{\pi}\left\{F(y)\sin\frac{y}{2}\right\}\cos(Ny)\,dy \\
&= B_N + A_N.
\end{aligned}
$$
By Bessel's inequality, the Fourier coefficients of the piecewise continuous functions $F(y)\cos\frac{y}{2}$ and $F(y)\sin\frac{y}{2}$ tend to zero, so
$$ S_N f(x) - \frac{f(x^-) + f(x^+)}{2} = B_N + A_N \to 0 \quad\text{as } N\to\infty. \qquad\blacksquare $$
Find the Fourier series of f and find the sum of the series for x = kπ, k ∈ Z.
Plot f and several partial sums of the Fourier series of f .
Figure 1.2.6
Figure 1.2.7
Example 1.2.7. Examine the behavior of the partial sums of the 2π-periodic
function f , which on the interval (−π, π) is given by f (x) = ex .
Solution. The complex Fourier coefficients cn of the function f are
$$ c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-inx}e^{x}\,dx = \frac{\left(e^{\pi} - e^{-\pi}\right)(-1)^n}{2(1 - ni)\pi}. $$
Using the relations between the complex Fourier coefficients and the real
Fourier coefficients we find the real Fourier series of f is
$$ S f(x) = \frac{e^{\pi} - e^{-\pi}}{2\pi} + \frac{e^{\pi} - e^{-\pi}}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2+1}\left[\cos nx - n\sin nx\right]. $$
Figure 1.2.8
Figure 1.2.9
Figure 1.2.10
If we examine Figure 1.2.8, Figure 1.2.9, Figure 1.2.10 and Figure 1.2.11
more closely we notice the following:
1. The partial sums SN f (x) converge “nicely” to f (x) as N → ∞ at
all points x where f is continuous.
2. The partial sums SN f (x) exhibit some strange behavior near the
points x where f is discontinuous (has a jump) as N increases. We
can see in each case that the functions $S_N f(x)$ have a noticeable overshoot
just to the right (and left) of x = 0 (with similar behavior near
the points x = ±π, ±2π). We also see that these overshoots become
narrower and narrower while their magnitudes remain fairly large even
as N increases.
Figure 1.2.11
Figure 1.2.12
In Figure 1.2.13 we plotted the function f and the N th partial sums for
N = 10 and N = 25, respectively.
Figure 1.2.13
In Figure 1.2.14 we plotted the function f and the N th partial sum for
N = 50.
Figure 1.2.14
In Figure 1.2.15 we plotted the function f and the N th partial sum for
N = 150.
Figure 1.2.15
Let us now examine analytically the Gibbs phenomenon for this function.
Since f is continuous at the point x = 0, the Fourier series at x = 0 converges
to f(0) = 0. For small positive x < π, $S_N f(x)$ converges to 1; and for small
negative x > −π, $S_N f(x)$ converges to −1. We are especially interested in
the behavior of $S_N f(x)$ in a small neighborhood of 0 as N → ∞. Since both
f and $S_N f$ are odd functions, it suffices to consider the case when x is
positive and small.
First, we find the relative extrema of SN f . From
$$ S_N' f(x) = \frac{4}{\pi}\sum_{n=1}^{N}\cos\,(2n-1)x $$
we obtain
$$ S_N' f(x) = \frac{2}{\pi}\,\frac{\sin 2Nx}{\sin x}. $$
From the last expression it follows that $S_N f$ has relative extrema in the
interval (0, π) at the points $x = \frac{n\pi}{2N}$, n = 1, 2, ..., 2N − 1. It is easy to show
that at the point $x = \frac{\pi}{2N}$ the function $S_N f$ has its first maximum. The value
of this maximum is
$$ S_N f\left(\frac{\pi}{2N}\right) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\left(\frac{2n-1}{2N}\,\pi\right). $$
Further, notice that the above sum can be made to look like a Riemann sum
for the integral
$$ \frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx. $$
Indeed, consider the continuous function
$$ \frac{2}{\pi}\,\frac{\sin x}{x}. $$
If we partition the segment [0, π] into 2N equal sub-intervals, each of length
$$ \Delta x = \frac{\pi}{2N}, $$
then
$$
\begin{aligned}
\lim_{N\to\infty} S_N\left(\frac{\pi}{2N}\right) &= \lim_{N\to\infty}\frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\,(2n-1)\frac{\pi}{2N} \\
&= \frac{2}{\pi}\lim_{N\to\infty}\sum_{n=1}^{N}\frac{\sin\frac{(2n-1)\pi}{2N}}{\frac{(2n-1)\pi}{2N}}\cdot\frac{\pi}{N} \\
&= \frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx,
\end{aligned}
$$
that is,
$$ \lim_{N\to\infty} S_N\left(\frac{\pi}{2N}\right) = \frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx. $$
The value of the last integral is
$$ \frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx \approx 1.179. $$
Therefore
$$ \lim_{N\to\infty} S_N\left(\frac{\pi}{2N}\right) \approx 1.179. \tag{1.2.6} $$
Equation (1.2.6) shows that the maximum values of the partial sums $S_N f$
are always greater than 1 near the discontinuity of the function f at x = 0,
no matter how many terms we choose to include in the partial sums. But
1 is the maximum value of the original function that is being represented
by the Fourier series. It follows, therefore, that near the discontinuity, the
maximum difference between the values of the partial sums and the function
itself (sometimes called the overshoot or the bump) does not tend to zero as N → ∞.
This overshoot is approximately 0.18 in this example.
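The overshoot is easy to observe numerically. The Python sketch below (an illustrative aid; the book's own computations use Mathematica) evaluates $S_N f$ at its first maximum $x = \pi/(2N)$ and shows that the heights settle near 1.179 rather than 1:

```python
import math

def S_N(N, x):
    # Partial sum (4/pi) * sum_{n=1}^{N} sin((2n-1)x) / (2n-1)
    return (4 / math.pi) * sum(
        math.sin((2 * n - 1) * x) / (2 * n - 1) for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, S_N(N, math.pi / (2 * N)))  # tends to about 1.179, not 1
```

The limiting value $\frac{2}{\pi}\int_0^{\pi}\frac{\sin x}{x}\,dx \approx 1.179$ is sometimes called the Wilbraham–Gibbs constant.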
(d) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].
(c) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].
$$ \sum_{n=1}^{\infty}\frac{1}{n}\sin nx, \qquad x\in\mathbb{R}. $$
(c) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].
6. Using the Fourier expansion of the function f (x) = x(π − |x|), −π <
x < π, and choosing a suitable value of x, derive the following:
$$ \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{(2n-1)^3} = \frac{\pi^3}{32}. $$
(c) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].
$$ \frac{x^2}{2} - \pi x = -\frac{\pi^2}{3} + 2\sum_{n=1}^{\infty}\frac{\cos nx}{n^2}. $$
$$ x^2 = \frac{4\pi^2}{3} + 4\sum_{n=1}^{\infty}\left[\frac{1}{n^2}\cos nx - \frac{\pi}{n}\sin nx\right]. $$
(c) If −π ≤ x ≤ π, then
$$ x^2 = \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos nx. $$
(a) Plot the graph of f when 0 < α < 1 and when 1 < α < 2.
11. Let a be any number which is not an integer. Show that the following
is true
and
Proof. Assume that the function F is 2π-periodic, i.e., let F(x + 2π) = F(x)
for every x. If we take x = −π, then it follows that F(π) = F(−π). Therefore
$$ \int_{0}^{\pi} f(t)\,dt = \int_{0}^{-\pi} f(t)\,dt. $$
If in the right-hand side integral we introduce a new variable u by u = t + 2π,
then
$$ \int_{0}^{\pi} f(t)\,dt = \int_{2\pi}^{\pi} f(u - 2\pi)\,du. $$
Since f is 2π-periodic, we have that
$$ \int_{0}^{\pi} f(t)\,dt = \int_{2\pi}^{\pi} f(u)\,du, $$
and so
$$ \int_{0}^{\pi} f(t)\,dt - \int_{2\pi}^{\pi} f(u)\,du = 0. $$
Therefore
$$ \int_{0}^{\pi} f(t)\,dt + \int_{\pi}^{2\pi} f(t)\,dt = 0, $$
that is,
$$ \int_{0}^{2\pi} f(t)\,dt = 0. $$
From the last equation, again by the 2π-periodicity of f, it follows that
$$ \int_{-\pi}^{\pi} f(t)\,dt = 0, $$
as required.
The proof of the other direction is left as an exercise. ■
Using the above lemma we easily have the following result.
$$ F(x) = \int_{0}^{x} f(y)\,dy. $$
$$ F(x) = C_0 + \sum_{\substack{n=-\infty\\ n\ne 0}}^{\infty}\frac{c_n}{in}\,e^{inx} = \frac{A_0}{2} + \sum_{n=1}^{\infty}\left[-\frac{b_n}{n}\cos nx + \frac{a_n}{n}\sin nx\right], $$
where
$$ C_0 = \frac{A_0}{2} = \frac{1}{2\pi}\int_{-\pi}^{\pi} F(x)\,dx $$
$$ F(x) = c_0 x + C_0 + \sum_{\substack{n=-\infty\\ n\ne 0}}^{\infty}\frac{c_n}{in}\,e^{inx}, $$
where
$$ C_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} F(x)\,dx $$
Notice that the last series is not a Fourier series because it contains the
term c0 x.
Next we present two theorems regarding differentiation of Fourier series.
First we need the following result.
Theorem 1.3.3. Let cn be the complex Fourier coefficients (or the real coeffi-
cients an and bn ) of a 2π-periodic, continuous and piecewise smooth function
f and let c′n (a′n and b′n ) be the Fourier coefficients of the first derivative f ′
of f . Then
c′n = incn , a′n = nbn ; b′n = −nan .
$$ \int_{a}^{b} f'(x)\,dx = f(b) - f(a) $$
is valid not only for functions f which are continuously differentiable on the
interval [a, b], but also for functions f which are continuous and piecewise
smooth on [a, b].
Applying the integration by parts formula and the fundamental theorem
of calculus we have
$$
\begin{aligned}
c_n' &= \frac{1}{2\pi}\int_{-\pi}^{\pi} f'(x)e^{-inx}\,dx = \frac{1}{2\pi}f(x)e^{-inx}\Big|_{-\pi}^{\pi} - \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\left(-in\,e^{-inx}\right)dx \\
&= \frac{1}{2\pi}\left[f(\pi)e^{-\pi n i} - f(-\pi)e^{\pi n i}\right] + \frac{in}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx \\
&= \frac{1}{2\pi}\left[f(\pi)(-1)^n - f(-\pi)(-1)^n\right] + in\,c_n = 0 + in\,c_n = in\,c_n.
\end{aligned}
$$
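The relation $c_n' = in\,c_n$ can be checked numerically for a concrete continuous, piecewise smooth function, for instance $f(x) = \pi - |x|$. The Python sketch below is an illustration only (integration scheme and tolerances are ad hoc choices, not from the text):

```python
import cmath
import math

def fourier_coeff(g, n, M=20000):
    # Midpoint-rule approximation of c_n = (1/2pi) int_{-pi}^{pi} g(x) e^{-inx} dx
    h = 2 * math.pi / M
    total = 0j
    for k in range(M):
        x = -math.pi + (k + 0.5) * h
        total += g(x) * cmath.exp(-1j * n * x) * h
    return total / (2 * math.pi)

f = lambda x: math.pi - abs(x)          # continuous and piecewise smooth
df = lambda x: 1.0 if x < 0 else -1.0   # f' away from the corners

for n in (1, 2, 5):
    print(n, fourier_coeff(df, n), 1j * n * fourier_coeff(f, n))  # agree
```

The agreement depends on f being continuous and 2π-periodic; for a function with a jump the boundary term in the integration by parts no longer cancels.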
and converges to f′(x) for each x where f′ is continuous. If f′ is not
continuous at x, then the series converges to $\frac{1}{2}\left[f'(x^-) + f'(x^+)\right]$.
Proof. The result follows by combining the previous theorem and the Fourier
Convergence Theorem. ■
f (x) = π − |x|
$$ a_{2n-1} = \frac{8}{\pi}\,\frac{1}{(2n-1)^2} \quad\text{and}\quad a_{2n} = 0 \quad\text{for } n = 1, 2, \ldots. $$
obtain a new Fourier series and find the sum of the series
$$ \sum_{n=1}^{\infty}\frac{n^2}{(2n+1)^2(2n-1)^2}. $$
Since
$$ \lim_{x\to 0^+} f'(x) = \lim_{x\to 0^-} f'(x) = 0 $$
and
$$ \lim_{x\to \pi^-} f'(x) = \lim_{x\to -\pi^+} f'(x) = -\pi $$
$$ \sum_{n=1}^{\infty}\frac{1}{3^n}\cos nx. $$
Solution. Since
$$ \left|\frac{e^{ix}}{3}\right| = \frac{1}{3} < 1 \quad\text{for every } x\in\mathbb{R}, $$
by the complex geometric sum formula we obtain
$$
\begin{aligned}
\sum_{n=1}^{\infty}\frac{e^{inx}}{3^n} &= \sum_{n=1}^{\infty}\left(\frac{e^{ix}}{3}\right)^n = \frac{e^{ix}}{3}\cdot\frac{1}{1 - \frac{e^{ix}}{3}} = \frac{e^{ix}}{3 - e^{ix}}\cdot\frac{3 - e^{-ix}}{3 - e^{-ix}} \\
&= \frac{3e^{ix} - 1}{9 - 6\cos x + 1} = \frac{1}{2}\cdot\frac{3\cos x - 1}{5 - 3\cos x} + i\cdot\frac{1}{2}\cdot\frac{3\sin x}{5 - 3\cos x}.
\end{aligned}
$$
Hence
$$ \frac{1}{4} + \frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{3^n}\cos nx = \frac{1}{4} + \frac{1}{2}\,\mathrm{Re}\left\{\sum_{n=1}^{\infty}\frac{1}{3^n}e^{inx}\right\} = \frac{1}{4} + \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{3\cos x - 1}{5 - 3\cos x} = \frac{1}{5 - 3\cos x} = f(x), $$
and
$$ \frac{2}{3}\sum_{n=1}^{\infty}\frac{1}{3^n}\sin nx = \frac{2}{3}\,\mathrm{Im}\left\{\sum_{n=1}^{\infty}\frac{1}{3^n}e^{inx}\right\} = \frac{2}{3}\cdot\frac{1}{2}\cdot\frac{3\sin x}{5 - 3\cos x} = \frac{\sin x}{5 - 3\cos x} = g(x). $$
$$
\int_{0}^{x} g(t)\,dt = \int_{0}^{x}\frac{\sin t}{5 - 3\cos t}\,dt = \frac{1}{3}\ln(5 - 3\cos x) - \frac{1}{3}\ln 2
= \frac{2}{3}\sum_{n=1}^{\infty}\frac{1}{3^n}\int_{0}^{x}\sin nt\,dt = -\frac{2}{3}\sum_{n=1}^{\infty}\frac{1}{n3^n}\left(\cos nx - 1\right),
$$
hence by a rearrangement
$$\sum_{n=1}^{\infty}\frac{1}{n3^n}\cos nx = \sum_{n=1}^{\infty}\frac{1}{n3^n} + \frac{1}{2}\ln 2 - \frac{1}{2}\ln(5 - 3\cos x)$$
$$= \ln\frac{3}{2} + \frac{1}{2}\ln 2 - \frac{1}{2}\ln(5 - 3\cos x) = \ln 3 - \frac{1}{2}\ln 2 - \frac{1}{2}\ln(5 - 3\cos x).$$
In the above we used the following result:
$$\sum_{n=1}^{\infty}\frac{1}{n3^n} = \ln\frac{3}{2},$$
44 1. FOURIER SERIES
which easily follows if we take $x = \frac{1}{3}$ in the Maclaurin series
$$-\ln(1 - x) = \sum_{n=1}^{\infty}\frac{x^n}{n}, \qquad |x| < 1.$$
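Both the closed form for $f$ and the value of $\sum 1/(n3^n)$ can be confirmed numerically. The Python sketch below (not part of the book's Mathematica projects; truncation at 60 terms is an arbitrary choice) checks the two identities:

```python
import math

def f_series(x, N=60):
    # partial sum of 1/4 + (1/2) * sum 3^-n cos(nx)
    return 0.25 + 0.5 * sum(math.cos(n * x) / 3**n for n in range(1, N + 1))

# the series sums to 1/(5 - 3 cos x)
for x in (0.0, 1.0, 2.5):
    assert abs(f_series(x) - 1 / (5 - 3 * math.cos(x))) < 1e-12

# sum 1/(n 3^n) = ln(3/2)
log_series = sum(1 / (n * 3**n) for n in range(1, 60))
assert abs(log_series - math.log(1.5)) < 1e-12
```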
The Fourier convergence theorem gives conditions under which the Fourier series of a function f converges pointwise to f. Working with infinite series can be a delicate matter and we have to be careful. Since a uniformly convergent series can be integrated term by term, it would be much better if we had absolute and uniform convergence. Let us recall these definitions and a few related results.
Definition 1.3.1. An infinite series of functions $f_n$
$$\sum_{n=1}^{\infty} f_n(x)$$
converges uniformly on a set S if for every $\epsilon > 0$ there exists an integer $N_0$ such that the partial sums
$$S_N f(x) = \sum_{n=1}^{N} f_n(x)$$
are within $\epsilon$ of the sum of the series for every $N \ge N_0$ and every $x \in S$.
Remark. The reason for the term “uniform convergence” is that the integer
N depends only on ϵ and not on the choice of the point x ∈ S.
Important consequences of uniform convergence are the following:
$$\sum_{n=1}^{\infty}\big|n^k c_n\big|^2 < \infty,$$
$$\sum_{n=1}^{\infty}\big|n^k a_n\big|^2 < \infty, \qquad \sum_{n=1}^{\infty}\big|n^k b_n\big|^2 < \infty.$$
In particular,
$$\lim_{n\to\infty} n^k a_n = 0, \qquad \lim_{n\to\infty} n^k b_n = 0, \qquad \lim_{n\to\infty} n^k c_n = 0.$$
$$\sum_{n=1}^{\infty}\big|n^k c_n\big|^2 \le \sum_{n=-\infty}^{\infty}\big|c_n^{(k)}\big|^2 \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|f^{(k)}(x)\big|^2\,dx < \infty,$$
$$\sum_{\substack{n=-\infty\\ n\ne 0}}^{\infty}\big|n^j c_n\big| \le M\sum_{\substack{n=-\infty\\ n\ne 0}}^{\infty}\frac{n^j}{n^{k+\alpha}} \le 2M\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}} < \infty.$$
$$\sum_{n=-\infty}^{\infty}(in)^j c_n e^{inx}$$
Example 1.3.4. Discuss the rate of convergence of the Fourier series for the
2-periodic function f which on the interval [−1, 1] is defined by
$$f(x) = \begin{cases}(1+x)x, & -1 \le x \le 0\\ (1-x)x, & 0 \le x \le 1.\end{cases}$$
Solution. Notice that this function has a first derivative everywhere, but it
does not have a second derivative whenever x is an integer.
Since f is an odd function, $a_n = 0$ for every $n \ge 0$. By computation we have
$$b_n = \begin{cases}\dfrac{8}{\pi^3 n^3}, & \text{if } n \text{ is odd}\\[1mm] 0, & \text{if } n \text{ is even.}\end{cases}$$
Therefore the Fourier series of f is
$$f(x) = \frac{8}{\pi^3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^3}\sin{(2n-1)\pi x}.$$
This series converges very fast. If we plot the partial sum up to the third
harmonic, that is, the function
$$S_2 f(x) = \frac{8}{\pi^3}\sin(\pi x) + \frac{8}{27\pi^3}\sin(3\pi x),$$
Figure 1.3.1
from Figure 1.3.1 we see that the graphs of f and S2 f (x) are almost indis-
tinguishable.
8
In fact, the coefficient 27π 3 is already just 0.0096 (approximately). The
reason for this fast convergence is the n3 term in the denominator of the nth
coefficient bn , so the coefficients bn tend to zero as fast as n−3 tends to zero.
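How good $S_2 f$ already is can be checked directly. This Python sketch (the sampling grid of 201 points is an arbitrary choice) compares the two-term partial sum against the function:

```python
import math

def f(x):
    # the 2-periodic function, given on [-1, 1] by (1+x)x and (1-x)x
    x = (x + 1) % 2 - 1          # reduce to [-1, 1)
    return (1 + x) * x if x <= 0 else (1 - x) * x

def S(x, N):
    # N-term partial sum of the Fourier sine series
    return (8 / math.pi**3) * sum(
        math.sin((2 * n - 1) * math.pi * x) / (2 * n - 1)**3
        for n in range(1, N + 1))

worst = max(abs(S(k / 100, 2) - f(k / 100)) for k in range(-100, 101))
assert worst < 0.01   # S_2 is already within 0.01 of f everywhere
```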
Example 1.3.5. Discuss the convergence of the Fourier series of the 2π-
periodic function f defined by the Fourier series
$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n^3}\sin nx.$$
Solution. By the Weierstrass M -test it follows that the above series is a uni-
formly convergent series of continuous functions on R. Therefore f is a
continuous function on R. The convergence rate of this series is like n−3 .
Now, since the series obtained by term-by-term differentiation of the above
series is uniformly convergent we have
$$f'(x) = \sum_{n=1}^{\infty}\frac{1}{n^2}\cos(nx),$$
and differentiating this series term by term once more gives
$$\sum_{n=1}^{\infty}\Big(-\frac{1}{n}\sin nx\Big),$$
which converges pointwise, but to a function which is not continuous (has jumps). Finally, if we try to differentiate term by term the last series we would obtain the series
$$\sum_{n=1}^{\infty}\big(-\cos nx\big),$$
which diverges for every x.
Now let us revisit the Bessel inequality and discuss the question of equality
in it. We will need a few definitions first.
Definition 1.3.3. A function f is called square integrable on an interval [a, b] if
$$\int_a^b |f(x)|^2\,dx < \infty.$$
$$2xy \le x^2 + y^2, \qquad x, y \in \mathbb{R}.$$
For example, the function $f(x) = \frac{1}{\sqrt{x}}$ is not square integrable on (0, 1), since
$$\int_0^1 f^2(x)\,dx = \int_0^1 \frac{1}{x}\,dx = \infty.$$
$$\int_a^b |f(x)|^2\,dx.$$
$$t_N(x) = A_0 + \sum_{n=1}^{N}\big[A_n\cos nx + B_n\sin nx\big],$$
$$E_N := \int_{-\pi}^{\pi}\big(f(x) - t_N(x)\big)^2\,dx$$
is minimal.
Solution. To compute $E_N$, we first expand the integral:
$$(1.3.1)\qquad E_N = \int_{-\pi}^{\pi} f^2(x)\,dx - 2\int_{-\pi}^{\pi} f(x)t_N(x)\,dx + \int_{-\pi}^{\pi} t_N^2(x)\,dx.$$
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\big[a_n\cos nx + b_n\sin nx\big],$$
and
$$\int_{-\pi}^{\pi} f(x)t_N(x)\,dx = \pi\Big[A_0 a_0 + \sum_{n=1}^{N}\big(A_n a_n + B_n b_n\big)\Big].$$
Therefore
$$E_N = \int_{-\pi}^{\pi} f^2(x)\,dx - 2\pi\Big[A_0 a_0 + \sum_{n=1}^{N}\big(A_n a_n + B_n b_n\big)\Big] + \pi\Big[2A_0^2 + \sum_{n=1}^{N}\big(A_n^2 + B_n^2\big)\Big],$$
that is,
$$(1.3.2)\qquad E_N = \int_{-\pi}^{\pi} f^2(x)\,dx - \pi\Big[\frac{a_0^2}{2} + \sum_{n=1}^{N}\big(a_n^2 + b_n^2\big)\Big] + \pi\Big\{2\Big(A_0 - \frac{a_0}{2}\Big)^2 + \sum_{n=1}^{N}\big[(A_n - a_n)^2 + (B_n - b_n)^2\big]\Big\}.$$
Since
$$2\Big(A_0 - \frac{a_0}{2}\Big)^2 + \sum_{n=1}^{N}\big[(A_n - a_n)^2 + (B_n - b_n)^2\big] \ge 0,$$
the error $E_N$ is minimal exactly when $A_0 = \frac{a_0}{2}$, $A_n = a_n$ and $B_n = b_n$, that is, when
$$t_N(x) = \frac{a_0}{2} + \sum_{n=1}^{N}\big[a_n\cos nx + b_n\sin nx\big]$$
Even though the next theorem is valid for any square integrable function,
we will prove it only for continuous functions which are piecewise smooth.
The proof for the more general class of square integrable functions involves
several important results about approximation of square integrable functions
by trigonometric polynomials and for details the interested reader is referred
to the book by T. M. Apostol, Mathematical Analysis [12].
Proof. We prove the result only for the complex Fourier coefficients cn . The
case for the real Fourier coefficients an and bn is very similar.
Since f is a continuous and piecewise smooth function on [−π, π], by the
Fourier convergence theorem we have
$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}.$$
By Theorem 1.3.6 the above series can be integrated term by term and
using the orthogonality property of the system {einx : n ∈ Z} we obtain the
required formula
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx = \sum_{n=-\infty}^{\infty}|c_n|^2. \qquad\blacksquare$$
Using the above result now we have a complete answer to our original
question about minimizing the mean error EN . From (1.3.2) and Parseval’s
identity we have
$$\min E_N = \pi\sum_{n=N+1}^{\infty}\big(a_n^2 + b_n^2\big)$$
for all 2π-periodic, continuous and piecewise smooth functions f on [−π, π].
Now, by Bessel's inequality, both series $\sum_{n=1}^{\infty} a_n^2$ and $\sum_{n=1}^{\infty} b_n^2$ are convergent and therefore we have
$$\lim_{N\to\infty}\min E_N = 0.$$
The last equation can be restated as
$$\lim_{N\to\infty}\int_{-\pi}^{\pi}\big|f(x) - S_N f(x)\big|^2\,dx = 0,$$
and usually we say that the Fourier series of f converges to f in the mean
or in L2 .
Example 1.3.7. Using the Parseval identity for the function f (x) = x2 on
[−π, π] find the sum of the series
$$\sum_{n=1}^{\infty}\frac{1}{n^4}.$$
Solution. The Fourier series of $f(x) = x^2$ on $[-\pi, \pi]$ is $\frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos nx$, so that $a_0 = \frac{2\pi^2}{3}$, $a_n = \frac{4(-1)^n}{n^2}$ and $b_n = 0$. By the Parseval identity,
$$\frac{1}{\pi}\int_{-\pi}^{\pi} x^4\,dx = \frac{2\pi^4}{9} + 16\sum_{n=1}^{\infty}\frac{1}{n^4}.$$
Therefore, since the left-hand side equals $\frac{2\pi^4}{5}$,
$$16\sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{2\pi^4}{5} - \frac{2\pi^4}{9} = \frac{8\pi^4}{45},$$
and so
$$\sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{\pi^4}{90}.$$
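The value $\pi^4/90$ is easy to confirm by direct summation; the following Python sketch (truncation at $10^5$ terms is an arbitrary choice) does so:

```python
import math

# partial sums of sum 1/n^4 converge to pi^4/90
partial = 0.0
for n in range(1, 100001):
    partial += 1 / n**4

assert abs(partial - math.pi**4 / 90) < 1e-12
```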
and $x_p(t)$ is a particular solution of (1.3.3) (see Appendix D). For c > 0, the solution $x_h(t)$ will decay as time goes on. Therefore, we are mostly interested in finding a particular solution $x_p(t)$ of (1.3.3) which does not decay and is periodic with the same period as f.
For simplicity, let us suppose that c = 0. The problem with c > 0 is very similar. The general solution of the homogeneous equation $mx''(t) + kx = 0$ is given by
$$x(t) = A\cos(\omega_0 t) + B\sin(\omega_0 t),$$
where $\omega_0 = \sqrt{k/m}$. For the non-homogeneous equation we look for a periodic particular solution of the form
$$x_p(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\Big[a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t\Big],$$
where the coefficients $a_n$ and $b_n$ are unknown. We substitute $x_p(t)$ into the differential equation and solve for the coefficients $a_n$ and $b_n$. This process is perhaps best understood by an example.
Example 1.3.8. Suppose that k = 2 and m = 1. There is a jetpack
strapped to the mass which fires with a force of 1 Newton for the first time
period of 1 second and is off for the next time period of 1 second, then it
fires with a force of 1 Newton for 1 second and is off for 1 second, and so
on. We need to find that particular solution which is periodic and which does
not decay with time.
Solution. The differential equation describing this oscillation is given by
$$x''(t) + 2x = f(t),$$
where $f(t) = 1$ for $0 \le t < 1$ and $f(t) = 0$ for $1 \le t < 2$, extended periodically on the whole real line R. The Fourier series of f(t) is given by
$$f(t) = \frac{1}{2} + \sum_{n=1}^{\infty}\frac{2}{(2n-1)\pi}\sin{(2n-1)\pi t}.$$
Now we look for a particular solution $x_p(t)$ of the given differential equation of the form
$$x_p(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\big[a_n\cos(n\pi t) + b_n\sin(n\pi t)\big].$$
If we substitute xp (t) and the Fourier expansion of f (t) into the differential
equation
x′′ (t) + 2x = f (t)
by comparison, first we find an = 0 for n ≥ 1 since there are no corresponding
terms in the series for f (t). Similarly we find b2n = 0 for n ≥ 1. Therefore
$$x_p(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} b_{2n-1}\sin{(2n-1)\pi t}.$$
If we compare the above series with the Fourier series obtained for f(t) (the Uniqueness Theorem for Fourier series) we have that $a_0 = \frac{1}{2}$ and for $n \ge 1$
$$b_{2n-1} = \frac{2}{(2n-1)\pi\big[2 - (2n-1)^2\pi^2\big]}.$$
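Term-by-term, these coefficients really do balance the equation $x'' + 2x = f$. The Python sketch below (the truncation level and the sample points are arbitrary choices) differentiates the truncated $x_p$ exactly, term by term, and checks the residual against the matching truncation of the forcing series:

```python
import math

a0 = 0.5

def b(n):
    # coefficient of sin((2n-1) pi t) in x_p
    w = (2 * n - 1) * math.pi
    return 2 / (w * (2 - w**2))

def xp(t, N):
    return a0 / 2 + sum(b(n) * math.sin((2*n-1) * math.pi * t) for n in range(1, N+1))

def xp_dd(t, N):
    # exact termwise second derivative of the truncated x_p
    return -sum(((2*n-1) * math.pi)**2 * b(n) * math.sin((2*n-1) * math.pi * t)
                for n in range(1, N+1))

def f_partial(t, N):
    # matching truncation of the Fourier series of the forcing
    return 0.5 + sum(2 / ((2*n-1) * math.pi) * math.sin((2*n-1) * math.pi * t)
                     for n in range(1, N+1))

N = 6
for t in (0.1, 0.37, 1.9):
    assert abs(xp_dd(t, N) + 2 * xp(t, N) - f_partial(t, N)) < 1e-12
```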
where ωN = N π/L for some natural number N , then some of the terms
in the Fourier expansion of f (t) will coincide with the solution xh (t). In
this case we have to modify the form of the particular solution xp (t) in the
following way:
$$x_p(t) = \frac{a_0}{2} + t\big[a_N\cos\omega_0 t + b_N\sin\omega_0 t\big] + \sum_{\substack{n=1\\ n\ne N}}^{\infty}\Big[a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t\Big].$$
In other words, we multiply the duplicating term by t. Notice that the expan-
sion of xp (t) is no longer a Fourier series. After that we proceed as before.
Let us take an example.
Example 1.3.9. Find that particular solution which is periodic and does not
decay in time of the equation
2x′′ (t) + 18π 2 x = f (t),
where f(t) is the step function
$$f(t) = \begin{cases} 1, & 0 < t < 1\\ -1, & 1 < t < 2,\end{cases}$$
extended periodically with period 2 on the whole real line R.
Solution. The Fourier series of f(t) is
$$f(t) = \sum_{n=1}^{\infty}\frac{4}{(2n-1)\pi}\sin{(2n-1)\pi t}.$$
(a) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^4}$.
(b) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$.
(c) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{(2n-1)^4}$.
(d) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{(4n^2-1)^2}$.
$$\frac{\pi^6}{960} = \sum_{n=1}^{\infty}\frac{1}{(2n-1)^6}.$$
(b) $\displaystyle\sum_{n=1}^{\infty}\frac{\sin^2 na}{n^2}$, \quad $0 < |a| < \pi$.
5. Show that each of the following Fourier series expansions is valid in the
range indicated and, for each expansion, apply the Parseval identity.
(a) $x\cos x = -\dfrac{1}{2}\sin x + 2\displaystyle\sum_{n=2}^{\infty}(-1)^n\frac{n}{n^2-1}\sin nx$, \quad $-\pi < x < \pi$.
(b) $x\sin x = 1 - \dfrac{1}{2}\cos x - 2\displaystyle\sum_{n=2}^{\infty}(-1)^n\frac{1}{n^2-1}\cos nx$, \quad $-\pi \le x \le \pi$.
6. Let
$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n^3}\cos(nx).$$
(a) Is the function f continuous and differentiable everywhere?
(b) Find the derivative f ′ (wherever it exists) and justify your answer.
(c) Answer the same questions for the second derivative f ′′ .
7. Let
$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n}\cos(nx).$$
(a) Is the function f continuous and differentiable everywhere? (b)
Find the derivative f ′ (wherever it exists) and justify your answer.
$$a_0 + \sum_{n=1}^{\infty} e^{-nc}\big[a_n\cos nx + b_n\sin nx\big]$$
in Fourier series and, using the Parseval identity, find the sum of the
series
$$\sum_{n=1}^{\infty}\frac{(1 - \cos na)^2}{n^4}.$$
(c) At each point x find the sum of the Fourier series. Where does
the Fourier series converge uniformly?
(c) At what points does the Fourier series converge? Where is the
convergence uniform?
(d) Integrate the Fourier series for f term by term and thus find the
Fourier series of
$$F(x) = \int_0^x f(t)\,dt.$$
(a) Plot the function f when 0 < α < 1 and when 1 < α < 2.
(d) Let $x = \pi/2$, $t = \alpha/2$. Use the formula $\sin 2\pi t = 2\sin\pi t\cos\pi t$ in order to show that
$$\pi\sec\pi t = -4\sum_{n=1}^{\infty}\frac{(-1)^{n+1}(2n-1)}{4t^2 - (2n-1)^2}.$$
14. For which positive numbers a does the function f (x) = |x|−a , 0 <
|x| < π, have a Fourier series?
15. Does there exist an integrable function on the interval $(-\pi, \pi)$ that has the series
$$\sum_{n=1}^{\infty}\sin nx$$
as its Fourier series?
16. Solve the following differential equations by Fourier series. The forcing
function f is the periodic function given by
$$f(t) = \begin{cases} 1, & 0 < t < \pi\\ 0, & \pi < t < 2\pi.\end{cases}$$
1.4 FOURIER SINE AND COSINE SERIES 61
(a) y ′′ − y = f (t).
(b) y ′′ − 3y ′ + 2y = f (t).
(a) y ′′ + 9y = f (t).
(b) y ′′ + 2y = f (t).
where k is the heat loss coefficient and the Fourier series describes
the temporal variation of the atmospheric air temperature and the
effective sky temperature. Find y(t) if y(0) = T0 .
Figure 1.4.1 (f with its even extension $f_e$ and odd extension $f_o$)
Therefore the Fourier series of fe involves only cosines and the Fourier series
of fo involves only sines; moreover, the Fourier coefficients of these functions
can be computed in terms of the value of the original function f over the
interval [0, L].
The above discussion naturally leads to the following definition.
Definition 1.4.2. Suppose that f is an integrable function on [0, L]. The
series
$$\frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}x, \qquad a_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi}{L}x\,dx$$
is called the half-range Fourier cosine series (Fourier cosine series) of f on [0, L].
The series
$$\sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{L}, \qquad b_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi}{L}x\,dx$$
is called the half-range Fourier sine series (Fourier sine series) of f on [0, L].
Example 1.4.1. Find the Fourier sine and cosine series of the function
$$f(x) = \begin{cases} 1, & 0 < x < 1\\ 2, & 1 < x < 2\end{cases}$$
on the interval [0, 2].
Solution. In this example L = 2, and using very simple integration in evaluating $a_n$ and $b_n$ we obtain the Fourier sine series
$$\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1 + \cos\frac{n\pi}{2} - 2(-1)^n}{n}\sin\frac{n\pi x}{2},$$
and the Fourier cosine series
$$\frac{3}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{2n-1}\cos\frac{(2n-1)\pi x}{2}.$$
Example 1.4.2. Find the Fourier sine and Fourier cosine series of the func-
tion f (x) = sin x on the interval [0, π].
Solution. In this example L = π. It is obvious that the Fourier sine series
of this function is simply sin x. After simple calculations we obtain that the
Fourier cosine series is given by
$$\frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2-1}\cos 2nx.$$
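A quick numerical check of this cosine expansion of $\sin x$ is given below in Python (not the book's Mathematica; the truncation level and sample points are arbitrary choices):

```python
import math

def cosine_series(x, N=4000):
    # truncated Fourier cosine series of sin x on [0, pi]
    return 2 / math.pi - (4 / math.pi) * sum(
        math.cos(2 * n * x) / (4 * n**2 - 1) for n in range(1, N + 1))

# at interior points the series matches sin x
for x in (0.3, 1.0, 2.0, math.pi / 2):
    assert abs(cosine_series(x) - math.sin(x)) < 1e-3
```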
Since the Fourier sine and Fourier cosine series are only particular Fourier
series, all the theorems for convergence of Fourier series are also true for
Fourier sine and Fourier cosine series. Therefore we have the following results.
$$\frac{f(x^-) + f(x^+)}{2}$$
at every x ∈ (0, L). In particular, both series converge to f (x) at every point
x ∈ (0, L) where f is continuous.
The Fourier cosine series of f converges to f (0+ ) at x = 0 and to f (L− )
at x = L.
The Fourier sine series of f converges to 0 at both of these points.
$$f(x) = 1 = \frac{3}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{2n-1}\cos\frac{(2n-1)\pi x}{2}, \quad\text{for every } 0 < x < 1.$$
$$\int_0^{\pi} f^2(x)\,dx = \frac{\pi}{4}a_0^2 + \frac{\pi}{2}\sum_{n=1}^{\infty}a_n^2 = \frac{\pi}{2}\sum_{n=1}^{\infty}b_n^2.$$
0
Example 1.4.4. Find the Fourier sine and Fourier cosine series of the func-
tion f defined on [0, π] by
$$f(x) = \begin{cases} x, & 0 \le x \le \frac{\pi}{2}\\ \pi - x, & \frac{\pi}{2} \le x \le \pi.\end{cases}$$
By computation,
$$a_n = \frac{4}{n^2\pi}\cos\frac{n\pi}{2} - \frac{2}{n^2\pi}\big(1 + \cos n\pi\big).$$
Therefore the Fourier cosine series of f on [0, π] is given by
$$\frac{\pi}{4} + \sum_{n=1}^{\infty}\Big[\frac{4}{n^2\pi}\cos\Big(\frac{n\pi}{2}\Big) - \frac{2}{n^2\pi}\big(1 + \cos n\pi\big)\Big]\cos nx.$$
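The coefficient formula can be confirmed against a direct quadrature of $\frac{2}{\pi}\int_0^{\pi} f(x)\cos nx\,dx$. A Python sketch (the trapezoidal rule and its grid size are arbitrary choices) follows:

```python
import math

def f(x):
    return x if x <= math.pi / 2 else math.pi - x

def a_quad(n, M=20000):
    # trapezoidal approximation of (2/pi) * integral_0^pi f(x) cos(nx) dx
    h = math.pi / M
    s = 0.5 * (f(0.0) + f(math.pi) * math.cos(n * math.pi))
    s += sum(f(k * h) * math.cos(n * k * h) for k in range(1, M))
    return (2 / math.pi) * h * s

def a_formula(n):
    return (4 / (n**2 * math.pi)) * math.cos(n * math.pi / 2) \
         - (2 / (n**2 * math.pi)) * (1 + math.cos(n * math.pi))

for n in (1, 2, 3, 6):
    assert abs(a_quad(n) - a_formula(n)) < 1e-6
```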
The plot of the first two partial sums of the Fourier cosine series of f , along
with the plot of the function f , is given in Figure 1.4.2.
Figure 1.4.2
The plot of the 10th partial sum of the Fourier cosine series of f , along
with the plot of the function f , is given in Figure 1.4.3.
Figure 1.4.3
The plot of the first two partial sums of the Fourier sine series of f , along
with the plot of the function f , is given in Figure 1.4.4
Figure 1.4.4
The plot of the 10th partial sum of the Fourier sine series of f , along with
the plot of the function f , is given in Figure 1.4.5.
Figure 1.4.5
$$b_n = \frac{2}{\pi}\int_0^{\pi} 1\cdot\sin nx\,dx = -\frac{2}{n\pi}\big[(-1)^n - 1\big],$$
Example 1.4.6. Find the half-range Fourier cosine and sine expansions of
the function f (x) = x on the interval (0, 2).
Solution. For the Fourier cosine expansion of f we use the even extension
fe (x) = |x| of the function f . By computation we find the Fourier cosine
coefficients.
$$a_0 = \frac{2}{2}\int_0^2 x\,dx = 2,$$
$$a_n = \frac{2}{2}\int_0^2 x\cos\frac{n\pi x}{2}\,dx = \frac{4}{n^2\pi^2}\big[(-1)^n - 1\big].$$
The plot of the first two partial sums of the Fourier cosine series of f, along with the plot of the function f, is given in Figure 1.4.6.
Figure 1.4.6
The plot of the 10th and the 20th partial sums of the Fourier cosine series
of f , along with the plot of the function f , is given in Figure 1.4.7.
Figure 1.4.7
For the Fourier sine expansion of f we use the odd extension $f_o(x) = x$ of the function f. Using the integration by parts formula we find
$$b_n = \frac{2}{2}\int_0^2 x\sin\frac{n\pi x}{2}\,dx = \frac{4(-1)^{n+1}}{n\pi}.$$
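This value of $b_n$ can also be checked by quadrature; the Python sketch below (trapezoidal rule with an arbitrary grid size) compares it against the integral:

```python
import math

def b_quad(n, M=20000):
    # trapezoidal approximation of integral_0^2 x sin(n pi x / 2) dx
    h = 2 / M
    s = 0.5 * (0.0 + 2 * math.sin(n * math.pi))   # endpoint values at x = 0 and x = 2
    s += sum(k * h * math.sin(n * math.pi * k * h / 2) for k in range(1, M))
    return h * s

for n in (1, 2, 3, 7):
    assert abs(b_quad(n) - 4 * (-1)**(n + 1) / (n * math.pi)) < 1e-6
```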
The plot of the first two partial sums of the Fourier sine series of f, along with the plot of the function f, is given in Figure 1.4.8.
Figure 1.4.8
The plot of the 10th and the 20th partial sums of the Fourier sine series of f, along with the plot of the function f, is given in Figure 1.4.9.
Figure 1.4.9
Solution. We show only that the coefficients an have the required property,
and leave the other part as an exercise. Let n be a natural number. Then
$$a_{2n-1} = \frac{2}{\pi}\int_0^{\pi} f(x)\cos(2n-1)x\,dx$$
$$= \frac{2}{\pi}\int_0^{\pi/2} f(x)\cos(2n-1)x\,dx + \frac{2}{\pi}\int_{\pi/2}^{\pi} f(x)\cos(2n-1)x\,dx$$
$$= \frac{2}{\pi}\int_0^{\pi/2} f(x)\cos(2n-1)x\,dx + \frac{2}{\pi}\int_{\pi/2}^{\pi} f(\pi - x)\cos(2n-1)x\,dx.$$
$$\int_0^{\pi} f^2(x)\,dx \le \int_0^{\pi}\big(f'(x)\big)^2\,dx.$$
$$f(x) = \sum_{n=1}^{\infty} a_n\sin nx.$$
$$\int_0^{\pi} f^2(x)\,dx = \frac{\pi}{2}\sum_{n=1}^{\infty} a_n^2.$$
$$\frac{\pi^2}{3} - \pi + \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n(\pi - 1) + 1}{n^2}\cos nx.$$
Figure 1.4.10
The plot of the 10th and the 20th partial sums of the Fourier cosine series
of f , along with the plot of the function f , is given in Figure 1.4.11.
Figure 1.4.11
For the Fourier sine series of f we use the odd extension fo of f . Working
similarly as for the Fourier cosine series we obtain
$$x^2 - 2x = \sum_{n=1}^{\infty}\Big\{2(-1)^{n-1}\frac{\pi - 2}{n} - \frac{4}{\pi n^3}\big[1 - (-1)^n\big]\Big\}\sin nx$$
The plot of the first two partial sums of the Fourier sine series of f , along
with the plot of the function f , is given in Figure 1.4.12.
Figure 1.4.12
The plot of the 10th and the 50th partial sums of the Fourier sine series
of f , along with the plot of the function f , is given in Figure 1.4.13.
Figure 1.4.13
1. f (x) = π − x.
2. f (x) = sin x.
3. f (x) = cos x.
4. f (x) = x2 .
5. $f(x) = \begin{cases} x, & 0 \le x \le \frac{\pi}{2}\\ \pi - x, & \frac{\pi}{2} \le x \le \pi.\end{cases}$
In Exercise 6–11, find both the Fourier cosine series and the Fourier sine series
of the given function on the indicated interval.
8. $f(x) = \begin{cases} 1, & 0 < x < 2\\ 0, & 2 < x < 4\end{cases}$; \quad [0, 4].
9. $f(x) = \begin{cases} x, & 0 \le x \le 1\\ 1, & 1 \le x \le 2\end{cases}$; \quad [0, 2].
10. $f(x) = \begin{cases} 0, & 0 < x < 1\\ 1, & 1 < x < 2\end{cases}$; \quad [0, 2].
Figure 1.5.1
Figure 1.5.2
The plot of the partial sum of the Fourier series of f for N = 4, along
with the plot of the function f , is given in Figure 1.5.3.
Figure 1.5.3
1.5 PROJECTS USING MATHEMATICA 77
Figure 1.5.4
We define the Fourier basis consisting of the sine and cosine functions:
In[4]:= s[n_, x_] = Sin[n Pi x/2]
In[5]:= c[n_, x_] = Cos[n Pi x/2]
Next we define the inner product:
Next, compute the 1st , 3rd , 5th , 10th and the 50th partial sums:
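For readers without Mathematica, the same projection onto the basis $\{\sin(n\pi x/2), \cos(n\pi x/2)\}$ can be sketched in Python. This is only a rough analogue of the project (the quadrature rule, grid size, and test function $x^2$ are arbitrary choices), not the book's code:

```python
import math

L = 2  # half-period matching the book's basis Sin[n Pi x/2], Cos[n Pi x/2]

def inner(u, v, M=2000):
    # trapezoidal approximation of integral_{-L}^{L} u(x) v(x) dx
    h = 2 * L / M
    pts = [-L + k * h for k in range(M + 1)]
    vals = [u(p) * v(p) for p in pts]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def partial_sum(f, N):
    a0 = inner(f, lambda x: 1.0) / (2 * L)   # mean value of f
    coeffs = []
    for n in range(1, N + 1):
        c = inner(f, lambda x, n=n: math.cos(n * math.pi * x / L)) / L
        s = inner(f, lambda x, n=n: math.sin(n * math.pi * x / L)) / L
        coeffs.append((c, s))
    def S(x):
        return a0 + sum(c * math.cos(n * math.pi * x / L) +
                        s * math.sin(n * math.pi * x / L)
                        for n, (c, s) in enumerate(coeffs, start=1))
    return S

f = lambda x: x * x          # an arbitrary smooth test function on [-2, 2]
S = partial_sum(f, 50)
assert abs(S(0.5) - f(0.5)) < 0.05   # the 50th partial sum is close at interior points
```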
The plot of the partial sums of the Fourier series of f for N = 1 and
N = 3, along with the plot of the function f , is given in Figure 1.5.5.
Figure 1.5.5
Figure 1.5.6
Figure 1.5.7
The plot of the 5th and the 10th partial sums of the Fourier series of f ,
along with the plot of the function f , is given in Figure 1.5.6.
The plot of the 50th partial sum of the Fourier series of f , along with the
plot of the function f , is given in Figure 1.5.7.
Figure 1.5.8
Figure 1.5.9
The plot of the partial sums SN of the Fourier series of f for N = 9 and
N = 11, along with the plot of the function f , is given in Figure 1.5.9.
The plot of the partial sums of the Fourier series of f for N = 20 and
N = 30, along with the plot of the function f , is given in Figure 1.5.10.
Figure 1.5.10
The plot of the partial sum of the Fourier series of f for N = 150, along
with the plot of the function f , is given in Figure 1.5.11.
From these figures, in particular from Figure 1.5.12, we can clearly see the
Gibbs phenomenon in the neighborhood of the point x = 0.
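The roughly 9% overshoot near a jump can be reproduced with a short computation. The square wave below is a stand-in for the function in the figures (which is only shown graphically), so this Python sketch is an illustration of the phenomenon rather than the book's exact data:

```python
import math

def SN(x, N):
    # N-term partial sum of the Fourier series of the odd +-1 square wave
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * x) / (2 * k - 1)
                               for k in range(1, N + 1))

def overshoot(N):
    # scan 0 < x < 0.2, which contains the first (highest) peak near x = pi/(2N)
    return max(SN(j * 1e-4, N) for j in range(1, 2000))

# the peak height barely moves as N grows: this is the Gibbs phenomenon
for N in (50, 100):
    assert abs(overshoot(N) - (2 / math.pi) * 1.8519370) < 0.01  # (2/pi) Si(pi)
```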
Figure 1.5.11
Figure 1.5.12
CHAPTER 2
INTEGRAL TRANSFORMS
$$\big(\mathcal{F}\{f\}\big)(\omega) \equiv \hat{f}(\omega) = \int_{-\infty}^{\infty} e^{-ix\omega} f(x)\,dx.$$
$$\big(\mathcal{F}_c\{f\}\big)(\omega) \equiv \hat{f}_c(\omega) = \int_0^{\infty} \cos(\omega x) f(x)\,dx.$$
$$\big(\mathcal{F}_s\{f\}\big)(\omega) \equiv \hat{f}_s(\omega) = \int_0^{\infty} \sin(\omega x) f(x)\,dx.$$
84 2. INTEGRAL TRANSFORMS
In this part we will study the Laplace transform and some of its appli-
cations. Many more applications of the Laplace transform will be discussed
later in the chapters dealing with partial differential equations.
Usually we use capital letters for the Laplace transform of a given function f(t). For example, we write
$$\mathcal{L}\{f(t)\} = F(s), \qquad \mathcal{L}\{g(t)\} = G(s), \qquad \mathcal{L}\{y(t)\} = Y(s).$$
2.1.1 DEFINITION AND PROPERTIES OF THE LAPLACE TRANSFORM 85
Solution. Using the linearity property of the Laplace transform and the result of Example 2.1.2 we have
$$\big(\mathcal{L}\{3t^2 + 4t^{3/2}\}\big)(s) = 3\,\frac{2!}{s^3} + 4\,\frac{\Gamma(\frac{5}{2})}{s^{5/2}} = \frac{6}{s^3} + \frac{3\sqrt{\pi}}{s^{5/2}},$$
since
$$\Gamma\Big(\frac{5}{2}\Big) = \frac{3}{2}\cdot\frac{1}{2}\,\Gamma\Big(\frac{1}{2}\Big) = \frac{3}{4}\sqrt{\pi}.$$
(See Appendix G for the Gamma function.)
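Results like this can be checked by truncated numerical quadrature of the defining integral. A Python sketch (the truncation point, grid size, and the value $s = 2$ are arbitrary choices) follows:

```python
import math

def laplace(f, s, T=60.0, M=60000):
    # truncated trapezoidal approximation of integral_0^oo e^{-st} f(t) dt
    h = T / M
    vals = [math.exp(-s * k * h) * f(k * h) for k in range(M + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

s = 2.0
num = laplace(lambda t: 3 * t**2 + 4 * t**1.5, s)
exact = 6 / s**3 + 3 * math.sqrt(math.pi) / s**2.5
assert abs(num - exact) < 1e-4
```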
Proof. The proof of this theorem follows immediately from the definition of
the Laplace transform. ■
The proofs of the next two theorems follow directly from the definition of
the Laplace transform.
Theorem 2.1.3 The Second Shift Theorem. If F = L {f } and c is a
positive constant, then
$$\big(\mathcal{L}\{f(t-c)\}\big)(s) = e^{-cs} F(s).$$
In addition to the shifting theorems, there are two other useful theorems
that involve the derivative and integral of the Laplace transform F (s).
and in general,
$$F^{(n)}(s) = (-1)^n\big(\mathcal{L}\{t^n f(t)\}\big)(s),$$
$$F'(s) = \frac{d}{ds}\int_0^{\infty} e^{-st} f(t)\,dt = \int_0^{\infty}\frac{\partial}{\partial s}\big(e^{-st} f(t)\big)\,dt = -\int_0^{\infty} e^{-st}\,t f(t)\,dt = -\big(\mathcal{L}\{t f(t)\}\big)(s).$$
$$\int_s^{\infty} F(z)\,dz = \Big(\mathcal{L}\Big\{\frac{f(t)}{t}\Big\}\Big)(s).$$
Using these theorems, along with the Laplace transforms of some functions
listed in Table A of Appendix A, we can compute the Laplace transforms of
many other functions.
Example 2.1.4. Find the Laplace transform of
eat tn , n ∈ N.
Solution. Since
$$\big(\mathcal{L}\{t^n\}\big)(s) = \frac{n!}{s^{n+1}},$$
from Theorem 2.1.2 it follows that
$$\big(\mathcal{L}\{e^{at} t^n\}\big)(s) = \frac{n!}{(s-a)^{n+1}}, \qquad s > a.$$
Solution. Since
$$\big(\mathcal{L}\{\cos kt\}\big)(s) = \frac{s}{s^2 + k^2},$$
from Theorem 2.1.2 it follows that
$$\big(\mathcal{L}\{e^{at}\cos kt\}\big)(s) = \frac{s - a}{(s-a)^2 + k^2}, \qquad s > a.$$
Remark. The improper integral which defines the Laplace transform does
not have to converge.
For example, neither
$$\mathcal{L}\Big\{\frac{1}{t}\Big\} \quad\text{nor}\quad \mathcal{L}\{e^{t^2}\}$$
exists.
Sufficient conditions which guarantee the existence of L {f } are that f be
piecewise continuous on [0, ∞) and that f be of exponential order. Recall
that a function f is piecewise continuous on [0, ∞) if f is continuous on any
closed bounded interval [a, b] ⊂ [0, ∞) except at finitely many points. The
concept of exponential order is defined as follows:
$$\big(\mathcal{L}\{f(t)\}\big)(s) = \int_0^{T} e^{-st} f(t)\,dt + \int_T^{\infty} e^{-st} f(t)\,dt = I_1 + I_2.$$
By the comparison test for improper integrals, the integral $I_2$ converges for s > c. The integral $I_1$ exists because it can be written as a finite sum of integrals over intervals on which the function $e^{-st} f(t)$ is continuous. This proves the theorem. ■
The condition (2.1.2) restricts the functions that can be Laplace trans-
forms. For example, the functions s2 , es cannot be Laplace transforms of
any functions because their limits as s → ∞ are ∞, not 0.
On the other hand, the hypotheses of Theorem 2.1.7 are sufficient but not
necessary conditions for the existence of the Laplace transform. For example,
the function
1
f (t) = √ , t > 0
t
is not piecewise continuous on the interval [0, ∞) but nevertheless, from Ex-
ample 2.1.2 it follows that its Laplace transform
$$\Big(\mathcal{L}\Big\{\frac{1}{\sqrt{t}}\Big\}\Big)(s) = \frac{\Gamma\big(\frac{1}{2}\big)}{s^{1/2}} = \sqrt{\frac{\pi}{s}}$$
exists.
Up to this point, we were dealing with finding the Laplace transform of a
given function. Now we want to reverse the operation: For a given function
F (s) we want to find (if possible) a function f (t) such that
$$\big(\mathcal{L}\{f(t)\}\big)(s) = F(s).$$
Remark. For the inverse Laplace transform the linearity property holds.
The most common method of inverting the Laplace transform is by decom-
position into partial fractions, along with Table A.
Example 2.1.8. Find the inverse transform of
$$F(s) = \frac{4}{(s-1)^2(s^2+1)^2}.$$
Solution. The partial fraction decomposition of F(s) is
$$F(s) = -\frac{2}{s-1} + \frac{1}{(s-1)^2} + \frac{2s+1}{s^2+1} + \frac{2s}{(s^2+1)^2}.$$
If we recall the first shifting property (Theorem 2.1.2), and the familiar Laplace transforms of the functions sin t and cos t, then we find that the required inverse transform of the given function F(s) is
$$f(t) = -2e^t + te^t + 2\cos t + \sin t + t\sin t.$$
$$(2.1.3)\qquad f(t) = \frac{1}{2\pi i}\int_{b-i\infty}^{b+i\infty} F(z)\,e^{zt}\,dz := \lim_{y\to\infty}\frac{1}{2\pi i}\int_{b-iy}^{b+iy} F(z)\,e^{zt}\,dz,$$
$$F(s) = \frac{1}{s\sinh \pi s}.$$
Figure 2.1.1
On the semi-circle $C_R = \Big\{z = Re^{i\theta}: \dfrac{\pi}{2} \le \theta \le \dfrac{3\pi}{2}\Big\}$ we have
$$\Big|\int_{C_R} F(z)\,e^{tz}\,dz\Big| \le 2\int_{\pi/2}^{3\pi/2} \frac{e^{Rt\cos\theta}}{\big|e^{R\cos\theta} - e^{-R\cos\theta}\big|}\,d\theta.$$
Since $\cos\theta \le 0$ for $\frac{\pi}{2} \le \theta \le \frac{3\pi}{2}$ and since t > 0, from the above inequality it follows that
$$(2.1.5)\qquad \lim_{R\to\infty}\int_{C_R} F(z)\,e^{tz}\,dz = 0.$$
Now, taking R → ∞ in (2.1.4), from (2.1.5) and the residue theorem (see Appendix F) we obtain
$$(2.1.6)\qquad f(t) = \frac{1}{2\pi i}\int_{b-i\infty}^{b+i\infty} F(z)\,e^{zt}\,dz = \sum_{n=-\infty}^{\infty}\mathrm{Res}\big(F(z)e^{zt},\ z = z_n\big).$$
Since z = 0 is a double pole of F(z), using the formula for residues (given in Appendix F) and l'Hôpital's rule we have
$$(2.1.7)\qquad \mathrm{Res}\big(F(z)e^{zt},\ z = 0\big) = \lim_{z\to 0}\frac{d}{dz}\Big(\frac{z^2 e^{zt}}{z\sinh \pi z}\Big) = \frac{t}{\pi}.$$
At the simple poles $z_n = ni$, $n \ne 0$,
$$(2.1.8)\qquad \mathrm{Res}\big(F(z)e^{zt},\ z = z_n\big) = \lim_{z\to ni}\frac{(z - ni)e^{tz}}{z\sinh \pi z} = (-1)^n\frac{e^{nit}}{n\pi i}.$$
If we substitute (2.1.7) and (2.1.8) into (2.1.6) we obtain (after small rearrangements)
$$f(t) = \frac{t}{\pi} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\sin nt.$$
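As a sanity check, transforming this series back term by term (using $\mathcal{L}\{t/\pi\} = 1/(\pi s^2)$ and $\mathcal{L}\{\sin nt\} = n/(s^2+n^2)$) should recover $1/(s\sinh\pi s)$. The Python sketch below (truncation at 5000 terms and the sample values of s are arbitrary choices) verifies this:

```python
import math

def series_transform(s, N=5000):
    # termwise Laplace transform of  t/pi + (2/pi) sum (-1)^n sin(nt)/n
    total = 1 / (math.pi * s * s)
    for n in range(1, N + 1):
        total += (2 / math.pi) * (-1)**n / (s * s + n * n)
    return total

for s in (0.5, 1.0, 2.0):
    assert abs(series_transform(s) - 1 / (s * math.sinh(math.pi * s))) < 1e-6
```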
(a) f(t) = t.
(a) $f(t) = t^{-1/2}$.
(b) $f(t) = t^{1/2}$.
(b) $F(s) = \dfrac{s^2}{(s-2)^2}$.
(c) $F(s) = \dfrac{s^2}{s^2+9}$.
8. Show that the following functions are of exponential order:
(a) $f(t) = t^2$.
Figure 2.1.2
2.1.2 STEP AND IMPULSE FUNCTIONS 97
Remark. ua (t) is another notation for the Heaviside function H(t − a).
It is very easy to find the Laplace transform of H(t − a).
$$\big(\mathcal{L}\{H(t-a)\}\big)(s) = \int_0^{\infty} H(t-a)e^{-st}\,dt = \int_a^{\infty} e^{-st}\,dt = \frac{e^{-as}}{s}.$$
Theorem 2.1.10. Let F (s) = L{f (t)} exist for s > c ≥ 0, and let a > 0
be a constant. Then
$$\big(\mathcal{L}\{H(t-a)f(t-a)\}\big)(s) = e^{-as} F(s), \qquad s > c.$$
$$\big(\mathcal{L}\{H(t-a)f(t-a)\}\big)(s) = \int_0^{\infty} e^{-st} H(t-a)f(t-a)\,dt = \int_a^{\infty} e^{-st} f(t-a)\,dt.$$
Substituting v = t − a gives
$$\big(\mathcal{L}\{H(t-a)f(t-a)\}\big)(s) = \int_0^{\infty} e^{-s(v+a)} f(v)\,dv = e^{-as}\int_0^{\infty} e^{-sv} f(v)\,dv = e^{-as} F(s). \qquad\blacksquare$$
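The shift theorem can be tested numerically with any transform pair. The Python sketch below (using $f(t) = \sin t$, $a = 1$, $s = 2$ and a truncated trapezoidal quadrature; all arbitrary choices) compares both sides:

```python
import math

def laplace(f, s, T=60.0, M=60000):
    # truncated trapezoidal approximation of integral_0^oo e^{-st} f(t) dt
    h = T / M
    vals = [math.exp(-s * k * h) * f(k * h) for k in range(M + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

a, s = 1.0, 2.0
shifted = lambda t: math.sin(t - a) if t >= a else 0.0   # H(t-a) f(t-a), f = sin
lhs = laplace(shifted, s)
rhs = math.exp(-a * s) / (s * s + 1)                     # e^{-as} F(s), F = 1/(s^2+1)
assert abs(lhs - rhs) < 1e-4
```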
Therefore,
$$\big(\mathcal{L}\{f(t)\}\big)(s) = \big(\mathcal{L}\{t\}\big)(s) + \big(\mathcal{L}\{H(t-2)(t-2)^2\}\big)(s) = \big(\mathcal{L}\{t\}\big)(s) + e^{-2s}\big(\mathcal{L}\{t^2\}\big)(s).$$
$$F(s) = \frac{1 + e^{-5s}}{s^4}.$$
Figure 2.1.3
Let us consider a function f (t) which has relatively big values in a relatively
short interval around the origin t = 0 and it is zero outside that interval. For
natural numbers n, define the functions fn (t) by
$$(2.1.9)\qquad f_n(t) = \begin{cases} n, & |t| < \frac{1}{2n}\\[1mm] 0, & |t| \ge \frac{1}{2n}.\end{cases}$$
$$(2.1.10)\qquad I_n \equiv \int_{-\infty}^{\infty} f_n(t)\,dt = \int_{-1/2n}^{1/2n} f_n(t)\,dt = 1, \qquad n = 1, 2, \ldots$$
$$(2.1.11)\qquad \lim_{n\to\infty} I_n = 1.$$
Using equations (2.1.10) and (2.1.11) we can “define” the Dirac delta
function δ(t), concentrated at t = 0, by
$$\delta(t) = 0 \ \text{ for } t \ne 0; \qquad \int_{-\infty}^{\infty}\delta(t)\,dt = 1.$$
$$\big(\mathcal{L}\{f_n(t-a)\}\big)(s) = \int_{a-\frac{1}{2n}}^{a+\frac{1}{2n}} e^{-st} f_n(t-a)\,dt = n\int_{a-\frac{1}{2n}}^{a+\frac{1}{2n}} e^{-st}\,dt = n\,\frac{e^{-st}}{-s}\bigg|_{t=a-\frac{1}{2n}}^{t=a+\frac{1}{2n}} = n e^{-sa}\,\frac{e^{\frac{s}{2n}} - e^{-\frac{s}{2n}}}{s}.$$
Therefore,
$$(2.1.13)\qquad \mathcal{L}\{f_n(t-a)\} = n e^{-sa}\,\frac{e^{\frac{s}{2n}} - e^{-\frac{s}{2n}}}{s}.$$
L{δ(t)} = 1.
$$(2.1.15)\qquad \int_{-\infty}^{\infty}\delta(t-a)f(t)\,dt = f(a),$$
For the proof of (2.1.15) the interested reader is referred to the book by W. E.
Boyce and R. C. DiPrima [1].
(c) $f(t) = \begin{cases} 0, & 0 \le t \le 1\\ 1, & t > 1.\end{cases}$
(d) $f(t) = \begin{cases} 0, & t < \pi\\ t - \pi, & \pi \le t < 2\pi\\ 0, & t \ge 2\pi.\end{cases}$
(e) f (t) = H(t − 1) + 2H(t − 3) − H(t − 4).
(a) $F(s) = \dfrac{3!}{(s-2)^4}$.
(b) $F(s) = \dfrac{e^{-2s}}{s^2+s-2}$.
(c) $F(s) = \dfrac{2(s-1)e^{-2s}}{s^2-2s+2}$.
(d) $F(s) = \dfrac{2e^{-2s}}{s^2-4}$.
4. In the following exercises compute the Laplace transform of the given
function.
Let 0 < t1 < t2 < . . . < tn ≤ M be the points where the function f ′
is possibly discontinuous. Using the continuity of the function f and the
integration by parts formula on each of the intervals [tj−1 , tj ], t0 = 0, tn = M
we have
$$\int_0^M e^{-st} f'(t)\,dt = e^{-sM} f(M) - f(0) + s\int_0^M e^{-st} f(t)\,dt.$$
2.1.3 INITIAL-VALUE PROBLEMS 103
Continuing this process, we can find similar expressions for the Laplace
transform of higher-order derivatives. This leads to the following corollary to
Theorem 2.1.11.
Corollary 2.1.2 Laplace Transform of Higher Derivatives. Suppose
that the function f and its derivatives f ′ , f ′′ , . . . , f (n−1) are of exponential
order b on [0, ∞) and the function f (n) is piecewise continuous on any
closed subinterval of [0, ∞). Then for s > b we have
$$\int_0^{\infty} e^{-st} f^{(n)}(t)\,dt = s^n\,\mathcal{L}\{f(t)\}(s) - s^{n-1} f(0) - \ldots - f^{(n-1)}(0).$$
Now we show how the Laplace transform can be used to solve initial-value
problems. Typically, when we solve an initial-value problem that involves
y(t), we use the following steps:
1. Compute the Laplace transform of each term in the differential equation.
( )
2. Solve the resulting equation for L{y(t)} (s) = Y (s).
3. Find y(t) by computing the inverse Laplace transform of Y (s).
Example 2.1.12. Solve the initial-value problem
Solution. Let Y = L {y}. Taking the Laplace transform of both sides of the
differential equation, and using the formula in Corollary 2.1.2, we obtain
$$s^3 Y(s) - s^2 y(0) - s y'(0) - y''(0) + 4\big[s Y(s) - y(0)\big] = \frac{10}{s-1}.$$
Using the given initial conditions and solving the above equation for Y (s) we
have
$$Y(s) = \frac{2s^3 - 4s - 8}{s(s-1)(s^2+4)}.$$
The partial fraction decomposition of the above fraction is
$$\frac{2s^3 - 4s - 8}{s(s-1)(s^2+4)} = \frac{A}{s} + \frac{B}{s-1} + \frac{Cs + D}{s^2+4}.$$
The result is
$$\frac{2s^3 - 4s - 8}{s(s-1)(s^2+4)} = \frac{2}{s} + \frac{-2}{s-1} + \frac{2s+4}{s^2+4}.$$
Finding the inverse Laplace transform of both sides of the above equation and using the linearity of the inverse Laplace transform we obtain
$$y(t) = 2 - 2e^t + 2\cos 2t + 2\sin 2t.$$
Solution. Let $Y = \mathcal{L}\{y\}$. Taking the Laplace transform of both sides of the differential equation, and using the formula in Corollary 2.1.2, we obtain
$$s^2 Y(s) - s y(0) - y'(0) + 2\big[s Y(s) - y(0)\big] + Y(s) = \frac{6}{s}.$$
Using the given initial conditions and solving the above equation for Y(s) we have
$$Y(s) = \frac{5s^2 + 20s + 6}{s(s+1)^2}.$$
The partial fraction decomposition of the above fraction is
$$\frac{5s^2 + 20s + 6}{s(s+1)^2} = \frac{6}{s} + \frac{-1}{s+1} + \frac{9}{(s+1)^2}.$$
Finding the inverse Laplace transform of both sides of the above equation and using the linearity of the inverse Laplace transform we obtain
$$y(t) = 6 - e^{-t} + 9te^{-t}.$$
$$y''(t) + 9y = h(t),$$
Then
$$\big(\mathcal{L}\{h(t)\}\big)(s) = \big(\mathcal{L}\{1\}\big)(s) - \big(\mathcal{L}\{H(t-\pi)\}\big)(s) = \frac{1}{s} - \frac{e^{-\pi s}}{s}.$$
Let L{y} = Y and let us take the Laplace transform of both sides of the
given differential equation. Using the formula in Corollary 2.1.2 we have
$$s^2 Y(s) - s y(0) - y'(0) + 9Y(s) = \frac{1}{s} - \frac{e^{-\pi s}}{s}.$$
Using the prescribed initial conditions and solving for Y (s) we obtain
$$Y(s) = \frac{1}{s(s^2+9)} - \frac{e^{-\pi s}}{s(s^2+9)} = F(s) - e^{-\pi s} F(s),$$
where
$$F(s) = \frac{1}{s(s^2+9)} = \frac{1}{9}\cdot\frac{1}{s} - \frac{1}{9}\cdot\frac{s}{s^2+9}.$$
Therefore,
$$y(t) = \mathcal{L}^{-1}\{F(s)\} - \mathcal{L}^{-1}\{e^{-\pi s}F(s)\}.$$
First,
$$f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{9} - \frac{1}{9}\cos 3t,$$
and by Theorem 2.1.10 we have
$$\mathcal{L}^{-1}\{e^{-\pi s}F(s)\} = f(t-\pi)u_{\pi}(t) = \Big[\frac{1}{9} - \frac{1}{9}\cos 3(t-\pi)\Big]u_{\pi}(t) = \Big[\frac{1}{9} + \frac{1}{9}\cos 3t\Big]u_{\pi}(t).$$
Combining these results we find that the solution of the original problem is given by
$$y(t) = \frac{1}{9} - \frac{1}{9}\cos 3t - \Big[\frac{1}{9} + \frac{1}{9}\cos 3t\Big]u_{\pi}(t).$$
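On each branch the closed form can be differentiated exactly, so verifying $y'' + 9y = h$ reduces to arithmetic. The Python sketch below (the sample points are arbitrary choices) does the check:

```python
import math

def y(t):
    # the piecewise closed-form solution above
    base = (1 - math.cos(3 * t)) / 9
    if t >= math.pi:
        base -= (1 + math.cos(3 * t)) / 9
    return base

def y_dd(t):
    # exact second derivative on each branch
    return math.cos(3 * t) if t < math.pi else 2 * math.cos(3 * t)

for t in (0.5, 1.0, 2.0, 4.0, 6.0):
    h_t = 1.0 if t < math.pi else 0.0    # h(t) = 1 - H(t - pi)
    assert abs(y_dd(t) + 9 * y(t) - h_t) < 1e-12
```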
Solution. Again as in the previous example we write the function in the form
h(t) = 1 − H(t − 1). Let L{y} = Y and let us take the Laplace transform of
both sides of the given differential equation. Using the formula in Corollary
2.1.2 we have
$$s^2 Y(s) - s y(0) - y'(0) + s Y(s) - y(0) + Y(s) = \frac{1}{s} - \frac{e^{-s}}{s}.$$
Using the prescribed initial conditions and solving for Y(s) we obtain
$$Y(s) = \frac{1}{s(s^2+s+1)} - \frac{e^{-s}}{s(s^2+s+1)} = F(s) - e^{-s} F(s),$$
where
$$F(s) = \frac{1}{s(s^2+s+1)}.$$
Then, if $f = \mathcal{L}^{-1}\{F\}$, from Theorem 2.1.10, we find that the solution of the given initial-value problem is $y(t) = f(t) - H(t-1)f(t-1)$. The partial fraction decomposition of F(s) is
$$F(s) = \frac{1}{s} - \frac{s+1}{s^2+s+1} = \frac{1}{s} - \frac{s + \frac{1}{2}}{(s+\frac{1}{2})^2 + \frac{3}{4}} - \frac{1}{2}\cdot\frac{1}{(s+\frac{1}{2})^2 + \frac{3}{4}}.$$
Now, using the first shifting property of the Laplace transform and referring to Table A in Appendix A we obtain
$$f(t) = 1 - e^{-\frac{t}{2}}\cos\frac{\sqrt{3}}{2}t - \frac{1}{\sqrt{3}}\,e^{-\frac{t}{2}}\sin\frac{\sqrt{3}}{2}t.$$
$$\mathcal{L}\{t y'(t)\} = -\frac{d}{ds}\big(sY(s)\big), \qquad \mathcal{L}\{t y''(t)\} = -\frac{d}{ds}\big(s^2 Y(s) - s y(0) - y'(0)\big).$$
$$-\frac{d}{ds}\big(s^2 Y(s) - s y(0) - y'(0)\big) - \frac{d}{ds}\big(sY(s)\big) - 2Y(s) + Y(s) = 0,$$
or after simplification
$$Y(s) = \frac{C}{(s+1)^4},$$
In the next example we apply the Laplace transform to find the solution
of a harmonic oscillator equation with an impulse forcing.
Example 2.1.17. Find the solution of the initial-value problem
$$y''(t) + y'(t) + 3y = \delta(t-4), \qquad y(0) = 1, \quad y'(0) = 0.$$
Solution. Taking the Laplace transform of both sides we have
$$s^2 Y(s) - s + s Y(s) - 1 + 3Y(s) = e^{-4s}.$$
Solving the above equation for Y(s) we obtain
$$Y(s) = \frac{s+1}{s^2+s+3} + \frac{e^{-4s}}{s^2+s+3}.$$
Therefore,
$$y(t) = \mathcal{L}^{-1}\Big\{\frac{s+1}{s^2+s+3}\Big\} + \mathcal{L}^{-1}\Big\{\frac{e^{-4s}}{s^2+s+3}\Big\}$$
$$= \mathcal{L}^{-1}\Big\{\frac{s+\frac{1}{2}}{(s+\frac{1}{2})^2 + \frac{11}{4}}\Big\} + \mathcal{L}^{-1}\Big\{\frac{\frac{1}{2}}{(s+\frac{1}{2})^2 + \frac{11}{4}}\Big\} + \mathcal{L}^{-1}\Big\{\frac{e^{-4s}}{(s+\frac{1}{2})^2 + \frac{11}{4}}\Big\}.$$
Using the shifting theorems 2.1.2 and 2.1.3 and Table A in Appendix A we
obtain that the solution of the original initial-value problem is given
√ √
− 21 t
( 11 ) 1 −1t ( 11 )
y(t) = e cos t +√ e 2 sin t
2 11 2
√
2 ( 11 )
+ H(t − 4) √ e− 2 (t−4) sin
1
(t − 4) .
11 2
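An impulse forcing leaves a signature that is easy to test: the solution satisfies the homogeneous equation away from \(t = 4\), and its derivative jumps by exactly 1 there. The sketch below (Python; the transform algebra shown above fixes the closed form, which is assumed here) checks the initial data, the unit jump in \(y'\), and the ODE residual at sample points:

```python
import math

def y(t):
    # closed-form solution of y'' + y' + 3y = delta(t - 4), y(0)=1, y'(0)=0
    r = math.sqrt(11) / 2
    out = math.exp(-t/2) * (math.cos(r*t) + math.sin(r*t)/math.sqrt(11))
    if t >= 4:
        out += (2/math.sqrt(11)) * math.exp(-(t-4)/2) * math.sin(r*(t-4))
    return out

def dy(t, h=1e-6):
    # central-difference first derivative (valid away from t = 4)
    return (y(t + h) - y(t - h)) / (2*h)

def residual(t, h=1e-4):
    # y'' + y' + 3y, which should vanish away from the impulse at t = 4
    ypp = (y(t + h) - 2*y(t) + y(t - h)) / (h*h)
    return ypp + dy(t) + 3*y(t)

def slope_jump_at_4(h=1e-6):
    # one-sided slopes on either side of the impulse; their difference is the jump in y'
    right = (y(4 + h) - y(4)) / h
    left = (y(4) - y(4 - h)) / h
    return right - left
```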
\[ \mathcal{L}\{f\}(s) = \frac{1}{1-e^{-Ts}}\int_0^{T} f(t)e^{-st}\,dt. \]
\[ (f*g)(x) = \int_0^x f(x-y)g(y)\,dy, \]
\[ F(s)G(s) = \left(\int_0^{\infty} e^{-sx}f(x)\,dx\right)\cdot\left(\int_0^{\infty} e^{-sy}g(y)\,dy\right) = \int_0^{\infty}\!\!\int_0^{\infty} e^{-s(x+y)}f(x)g(y)\,dx\,dy. \]
If in the above double integral we replace the variable x by a new variable t through the transformation x = t − y and keep the variable y, then we obtain
\[ F(s)G(s) = \iint_R e^{-st}f(t-y)g(y)\,dt\,dy = \int_0^{\infty}\!\!\int_y^{\infty} e^{-st}f(t-y)g(y)\,dt\,dy, \]
2.1.4 THE CONVOLUTION THEOREM 111
Figure 2.1.4: the region of integration R in the (y, t)-plane, bounded below by the line t = y.
\[ (1*f)(t) = \int_0^t 1\cdot f(x)\,dx = \int_0^t x\,dx = \frac{t^2}{2}, \]
\[ (f*g)(t) = \int_0^t f(t-x)g(x)\,dx = \int_0^t e^{-(t-x)}\sin x\,dx = e^{-t}\int_0^t e^{x}\sin x\,dx = e^{-t}\left[\frac{e^{x}}{2}\bigl(\sin x - \cos x\bigr)\right]_{x=0}^{x=t} = \frac{1}{2}\bigl(\sin t - \cos t\bigr) + \frac{1}{2}e^{-t}. \]
Similarly, we find
\[ (g*f)(t) = \int_0^t g(t-x)f(x)\,dx = \int_0^t \sin(t-x)\,e^{-x}\,dx = \frac{1}{2}\bigl(\sin t - \cos t\bigr) + \frac{1}{2}e^{-t}, \]
\[ F(s) = \mathcal{L}\{f\} = \mathcal{L}\{e^{-t}\} = \frac{1}{s+1}, \qquad G(s) = \mathcal{L}\{g\} = \mathcal{L}\{\sin t\} = \frac{1}{s^2+1}, \]
and so
\[ \mathcal{L}^{-1}\bigl\{F(s)G(s)\bigr\} = \mathcal{L}^{-1}\left\{\frac{1}{s+1}\cdot\frac{1}{s^2+1}\right\}. \]
We compute this inverse transform by the partial fraction decomposition
\[ \frac{1}{s+1}\cdot\frac{1}{s^2+1} = \frac{1}{2}\cdot\frac{1}{s+1} + \frac{1}{2}\cdot\frac{-s+1}{s^2+1}. \]
Therefore,
\[ \mathcal{L}^{-1}\left\{\frac{1}{s+1}\cdot\frac{1}{s^2+1}\right\} = \frac{1}{2}\mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\} - \frac{1}{2}\mathcal{L}^{-1}\left\{\frac{s}{s^2+1}\right\} + \frac{1}{2}\mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\right\} = \frac{1}{2}e^{-t} - \frac{1}{2}\cos t + \frac{1}{2}\sin t, \]
which is the same result as that obtained for (f ∗ g)(t).
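The worked example is easy to confirm numerically: evaluate the convolution integral by quadrature and compare with the closed form. A small sketch in Python (the book's projects use Mathematica; Simpson's rule here is just one convenient quadrature choice):

```python
import math

def convolve(f, g, t, n=2000):
    # Simpson's-rule approximation of (f*g)(t) = \int_0^t f(t-x) g(x) dx; n must be even
    h = t / n
    s = f(t)*g(0) + f(0)*g(t)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(t - k*h) * g(k*h)
    return s * h / 3

f = lambda x: math.exp(-x)
g = math.sin
exact = lambda t: 0.5*(math.sin(t) - math.cos(t) + math.exp(-t))
```

The same routine also confirms the commutativity (f ∗ g) = (g ∗ f) used in the text.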
Example 2.1.19. Using the Convolution Theorem find the Laplace transform of the function h defined by
\[ h(t) = \int_0^t \sin x\,\cos(t-x)\,dx. \]
Solution. Notice that h(t) = (f ∗ g)(t), where f(t) = cos t and g(t) = sin t. Therefore, by the Convolution Theorem,
\[ \mathcal{L}\{h(t)\} = \mathcal{L}\bigl\{(f*g)(t)\bigr\} = \mathcal{L}\{f(t)\}\,\mathcal{L}\{g(t)\} = \mathcal{L}\{\cos t\}\,\mathcal{L}\{\sin t\} = \frac{s}{(s^2+1)^2}. \]
Example 2.1.20. Find the solution of the initial-value problem
\[ y''(t) + 16y = f(t), \qquad y(0) = 5, \quad y'(0) = -5, \]
where f is a given function.
Solution. Let L{y} = Y and L{f} = F. By taking the Laplace transform of both sides of the differential equation and using the initial conditions, we obtain
\[ s^2Y(s) - 5s + 5 + 16Y(s) = F(s). \]
Solving for Y(s) we obtain
\[ Y(s) = \frac{5s-5}{s^2+16} + \frac{F(s)}{s^2+16}. \]
Therefore, by linearity and the Convolution Theorem of the Laplace transform it follows that
\[ y(t) = 5\mathcal{L}^{-1}\left\{\frac{s}{s^2+16}\right\} - 5\mathcal{L}^{-1}\left\{\frac{1}{s^2+16}\right\} + \mathcal{L}^{-1}\left\{\frac{F(s)}{s^2+16}\right\} = 5\cos 4t - \frac{5}{4}\sin 4t + \left(\mathcal{L}^{-1}\left\{\frac{1}{s^2+16}\right\} * f\right)(t) \]
\[ = 5\cos 4t - \frac{5}{4}\sin 4t + \left(\frac{1}{4}\sin 4t\right)*f(t) = 5\cos 4t - \frac{5}{4}\sin 4t + \frac{1}{4}\int_0^t f(t-x)\sin 4x\,dx. \]
Taking the Laplace transform of both sides we obtain
\[ Y(s) = \frac{4}{s^2} + \frac{Y(s)}{s^2+1}. \]
Solving this algebraic equation for Y(s),
\[ Y(s) = 4\,\frac{s^2+1}{s^4} = \frac{4}{s^2} + \frac{4}{s^4}. \]
By computing the inverse Laplace transform, we find
\[ y(t) = 4t + \frac{2}{3}t^3. \]
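The transform algebra \(Y = 4/s^2 + Y/(s^2+1)\) corresponds to the integral equation \(y(t) = 4t + \int_0^t y(x)\sin(t-x)\,dx\) (an assumption here, since the equation's statement is not fully legible in this excerpt). The recovered solution can be substituted back numerically:

```python
import math

def simpson(func, a, b, n=2000):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = func(a) + func(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * func(a + k*h)
    return s * h / 3

def y(t):
    # solution recovered via the Laplace transform
    return 4*t + 2*t**3/3

def rhs(t):
    # right-hand side 4t + (y * sin)(t) of the assumed integral equation
    return 4*t + simpson(lambda x: y(x)*math.sin(t - x), 0.0, t)
```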
(a) \( f(t) = \int_0^t (t-x)^2\cos 2x\,dx \)
(b) \( f(t) = \int_0^t e^{-(t-x)}\sin x\,dx \)
(c) \( f(t) = \int_0^t (t-x)e^{x}\,dx \)
(d) \( f(t) = \int_0^t \sin(t-x)\cos x\,dx \)
(e) \( f(t) = \int_0^t e^{x}(t-x)^2\,dx \)
(f) \( f(t) = \int_0^t e^{-(t-x)}\sin^2 x\,dx \)
(a) \( y(t) - 4t = -3\int_0^t y(x)\sin(t-x)\,dx \)
(b) \( y(t) = \dfrac{t^2}{2} - \int_0^t (t-x)y(x)\,dx \)
(c) \( y(t) - e^{-t} = -\int_0^t y(x)\cos(t-x)\,dx \)
(d) \( y(t) = t^3 + \int_0^t y(t-x)\,x\sin x\,dx \)
(e) \( y(t) = 1 + 2\int_0^t e^{-2(t-x)}y(x)\,dx \)
(a) \( y'(t) - t = \int_0^t y(x)\cos(t-x)\,dx, \quad y(0) = 4. \)
(b) \( \int_0^t \dfrac{y'(\tau)}{\sqrt{t-\tau}}\,d\tau = 1 - 2\sqrt{t}, \quad y(0) = 0. \)
0
where
\[ c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\,e^{-i\frac{n\pi}{L}x}\,dx. \]
where
\[ F(\omega_n) = \int_{-L}^{L} f(x)\,e^{-i\omega_n x}\,dx. \]
Now, we expand the interval (−L, L) by letting L → ∞ in such a way that Δωₙ → 0. Notice that the sum in (2.2.1) is very similar to the Riemann sum of the improper integral
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega x}\,d\omega, \]
where
\[ (2.2.2)\qquad F(\omega) = \int_{-\infty}^{\infty} f(x)\,e^{-i\omega x}\,dx. \]
Therefore, we have
\[ (2.2.3)\qquad S(f, x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega x}\,d\omega. \]
The function F in (2.2.2) is called the Fourier transform of the function f on (−∞, ∞). The above discussion naturally leads to the following definition.
Definition 2.2.1. Let f : ℝ → ℝ be an integrable function. The Fourier transform of f, denoted by F = F{f}, is given by the integral
\[ (2.2.4)\qquad F(\omega) = \bigl(\mathcal{F}\{f\}\bigr)(\omega) = \int_{-\infty}^{\infty} f(x)\,e^{-i\omega x}\,dx, \]
for all ω ∈ ℝ for which the improper integral exists.
Remark. Notice that, if the integrable function f is even or odd, then the Fourier transform F{f} of f is even or odd, respectively, and
\[ \mathcal{F}\{f\}(\omega) = 2\int_0^{\infty} f(x)\cos\omega x\,dx, \quad \text{if } f \text{ is even,} \]
and
\[ \mathcal{F}\{f\}(\omega) = -2i\int_0^{\infty} f(x)\sin\omega x\,dx, \quad \text{if } f \text{ is odd.} \]
The Fourier cosine and Fourier sine transforms are defined by
\[ \mathcal{F}_c\{f\}(\omega) = \int_0^{\infty} f(x)\cos\omega x\,dx, \qquad \mathcal{F}_s\{f\}(\omega) = \int_0^{\infty} f(x)\sin\omega x\,dx. \]
The inversion formulas for the Fourier cosine and Fourier sine transforms are
\[ f(x) = \frac{2}{\pi}\int_0^{\infty} \mathcal{F}_c\{f\}(\omega)\cos\omega x\,d\omega = \frac{2}{\pi}\int_0^{\infty} \mathcal{F}_s\{f\}(\omega)\sin\omega x\,d\omega. \]
\[ \int_0^{\infty} \frac{\omega\sin(\omega x)}{1+\omega^2}\,d\omega = \frac{\pi}{2}e^{-x}, \quad x > 0. \]
2.2.1 DEFINITION OF FOURIER TRANSFORMS 119
\[ F(\omega) = \int_{-\infty}^{\infty} f(x)e^{-i\omega x}\,dx = \int_{-\infty}^{0} e^{x}e^{-i\omega x}\,dx + \int_{0}^{\infty} e^{-x}e^{-i\omega x}\,dx = \int_{-\infty}^{0} e^{x(1-i\omega)}\,dx + \int_{0}^{\infty} e^{-x(1+i\omega)}\,dx \]
\[ = \left[\frac{e^{x(1-i\omega)}}{1-i\omega}\right]_{x=-\infty}^{x=0} + \left[\frac{e^{-x(1+i\omega)}}{-(1+i\omega)}\right]_{x=0}^{x=\infty} = \frac{1}{1-i\omega} + \frac{1}{1+i\omega} = \frac{2}{1+\omega^2}. \]
Since the function f(x) = e^{−|x|} is continuous on the whole real line, by the inversion formula in Theorem 2.2.1 we have
\[ e^{-|x|} = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{i\omega x}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{2e^{i\omega x}}{1+\omega^2}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{0} \frac{2e^{i\omega x}}{1+\omega^2}\,d\omega + \frac{1}{2\pi}\int_{0}^{\infty} \frac{2e^{i\omega x}}{1+\omega^2}\,d\omega \]
\[ = \frac{1}{\pi}\int_{0}^{\infty} \frac{e^{-i\omega x}}{1+\omega^2}\,d\omega + \frac{1}{\pi}\int_{0}^{\infty} \frac{e^{i\omega x}}{1+\omega^2}\,d\omega = \frac{1}{\pi}\int_{0}^{\infty} \frac{e^{i\omega x}+e^{-i\omega x}}{1+\omega^2}\,d\omega = \frac{2}{\pi}\int_{0}^{\infty} \frac{\cos(\omega x)}{1+\omega^2}\,d\omega. \]
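Since \(e^{-|x|}\) is even, its transform reduces to a cosine integral, which is easy to check by direct quadrature. A Python sketch (a trapezoid rule with the tail truncated at L = 40, where \(e^{-x}\) is negligible):

```python
import math

def fourier_transform_exp(omega, L=40.0, n=200000):
    # trapezoid approximation of \int_{-L}^{L} e^{-|x|} e^{-i omega x} dx,
    # which by evenness equals 2 \int_0^L e^{-x} cos(omega x) dx
    h = L / n
    s = 0.5 * (1.0 + math.exp(-L)*math.cos(omega*L))
    for k in range(1, n):
        x = k * h
        s += math.exp(-x) * math.cos(omega*x)
    return 2 * s * h
```

The values match the closed form \(2/(1+\omega^2)\) derived above.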
\[ F(\omega) = \int_{-\infty}^{\infty} f(x)e^{-i\omega x}\,dx = \int_{-\infty}^{\infty} e^{-x^2}e^{-i\omega x}\,dx = \int_{-\infty}^{\infty} e^{-\left(x+\frac{i\omega}{2}\right)^2-\frac{\omega^2}{4}}\,dx \]
\[ = e^{-\frac{\omega^2}{4}}\int_{-\infty}^{\infty} e^{-\left(x+\frac{i\omega}{2}\right)^2}\,dx = e^{-\frac{\omega^2}{4}}\int_{-\infty}^{\infty} e^{-u^2}\,du = \sqrt{\pi}\,e^{-\frac{\omega^2}{4}}. \]
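The Gaussian transform pair can be verified the same way; the trapezoid rule converges extremely fast for a Gaussian integrand, so a modest grid already reproduces \(\sqrt{\pi}\,e^{-\omega^2/4}\) to high accuracy (Python sketch, truncating at L = 8 where \(e^{-x^2}\) is negligible):

```python
import math

def gauss_ft(omega, L=8.0, n=20000):
    # trapezoid approximation of \int_{-L}^{L} e^{-x^2} e^{-i omega x} dx
    # = 2 \int_0^L e^{-x^2} cos(omega x) dx by evenness
    h = L / n
    s = 0.5 * (1.0 + math.exp(-L*L)*math.cos(omega*L))
    for k in range(1, n):
        x = k * h
        s += math.exp(-x*x) * math.cos(omega*x)
    return 2 * s * h
```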
Example 2.2.3. Find the Fourier transform of the following step function:
\[ f(x) = \begin{cases} 1, & |x| < a \\ 0, & |x| > a. \end{cases} \]
Solution. Directly from the definition,
\[ F(\omega) = \int_{-a}^{a} e^{-i\omega x}\,dx = \frac{e^{\omega ai} - e^{-\omega ai}}{\omega i} = 2\,\frac{\sin\omega a}{\omega}. \]
Next we compute the Fourier transform of the function
\[ f(x) = \frac{1}{a^2+x^2}. \]
Figure 2.2.1: the closed semicircular contour in the upper half-plane, consisting of C_R and the interval [−R, R]; the pole z = ai lies inside.
To compute this transform we consider the complex function
\[ f(z) = \frac{e^{-i\omega z}}{a^2+z^2} \]
and suppose first that ω < 0. From
\[ \operatorname{Res}\left(\frac{e^{-i\omega z}}{a^2+z^2},\ z_0 = ai\right) = \frac{e^{-i\omega\cdot ai}}{2ai} = -\frac{i}{2a}e^{\omega a} \]
we have
\[ (2.2.5)\qquad I(R) = \frac{\pi}{a}e^{\omega a}. \]
We decompose the integral I(R) along the semicircle C_R and the interval [−R, R]:
\[ (2.2.6)\qquad I(R) = \int_{C_R}\frac{e^{-i\omega z}}{a^2+z^2}\,dz + \int_{-R}^{R}\frac{e^{-i\omega x}}{a^2+x^2}\,dx. \]
Denote the first integral in (2.2.6) by \(I_{C_R}\). For the integral \(I_{C_R}\) along the semicircle C_R we have
\[ \bigl|I_{C_R}\bigr| = \left|\int_0^{\pi}\frac{e^{-i\omega Re^{i\theta}}}{a^2+R^2e^{2i\theta}}\,Rie^{i\theta}\,d\theta\right| \le R\int_0^{\pi}\frac{\bigl|e^{-i\omega Re^{i\theta}}\bigr|}{\bigl|a^2+R^2e^{2i\theta}\bigr|}\,d\theta \le R\int_0^{\pi}\frac{e^{R\omega\sin\theta}}{R^2-a^2}\,d\theta. \]
Since ω < 0 implies
\[ \lim_{R\to\infty} R\,\frac{e^{R\omega\sin\theta}}{R^2-a^2} = 0, \]
we obtain
\[ \lim_{R\to\infty} I_{C_R} = 0. \]
On the other hand,
\[ \lim_{R\to\infty} I(R) = \int_{-\infty}^{\infty}\frac{e^{-i\omega x}}{a^2+x^2}\,dx, \]
so (2.2.5) implies
\[ (2.2.7)\qquad \int_{-\infty}^{\infty}\frac{e^{-i\omega x}}{a^2+x^2}\,dx = \frac{\pi}{a}e^{\omega a}, \quad \omega < 0. \]
For the case when ω > 0, working similarly as above, but integrating along the lower semicircle z = Re^{iθ}, π ≤ θ ≤ 2π, we obtain
\[ (2.2.8)\qquad \int_{-\infty}^{\infty}\frac{e^{-i\omega x}}{a^2+x^2}\,dx = \frac{\pi}{a}e^{-\omega a}, \quad \omega > 0. \]
Combining the two cases,
\[ \mathcal{F}\{f\}(\omega) = \frac{\pi}{a}e^{-|\omega|a}. \]
This function, introduced in Section 2.1.2, was defined as the "limit" of the following sequence of functions:
\[ f_n(x) = \begin{cases} \dfrac{n}{2}, & |x| \le \dfrac{1}{n} \\[4pt] 0, & |x| > \dfrac{1}{n}. \end{cases} \]
\[ \mathcal{F}\{f_n\}(\omega) = \int_{-\infty}^{\infty} f_n(x)e^{-i\omega x}\,dx = \frac{n}{2}\int_{-1/n}^{1/n} e^{-i\omega x}\,dx = \frac{\sin\frac{\omega}{n}}{\frac{\omega}{n}}. \]
Letting n → ∞ we obtain F{δ}(ω) = 1, and for the delta function concentrated at the point a,
\[ (2.2.11)\qquad \mathcal{F}\{\delta_a\}(\omega) = e^{-i\omega a}. \]
For the inverse Fourier transform, from (2.2.10) it follows that
\[ \int_{-\infty}^{\infty} \delta_{\omega_0}(\omega)e^{ix\omega}\,d\omega = e^{i\omega_0 x}. \]
Hence
\[ \mathcal{F}^{-1}\{\delta_{\omega_0}\}(x) = \frac{1}{2\pi}e^{i\omega_0 x}, \]
that is,
\[ (2.2.12)\qquad \mathcal{F}\{e^{i\omega_0 x}\}(\omega) = 2\pi\delta(\omega-\omega_0). \]
If we set ω₀ = 0 we have the following result:
\[ \mathcal{F}\{1\}(\omega) = 2\pi\delta(\omega). \]
Solution. We will compute the Fourier transform of sin ω0 x, leaving the other
function as an exercise. The function sin ω0 x is not integrable, and so we
need to use the Dirac delta function. From Euler’s formula, the linearity
property of the Fourier transform, and (2.2.12) we have
\[ \mathcal{F}\{\sin\omega_0 x\}(\omega) = \frac{1}{2i}\Bigl[\mathcal{F}\{e^{i\omega_0 x}\}(\omega) - \mathcal{F}\{e^{-i\omega_0 x}\}(\omega)\Bigr] = -\pi i\bigl[\delta(\omega-\omega_0) - \delta(\omega+\omega_0)\bigr]. \]
The function sgn is not integrable, but for every ϵ > 0 the function \(f_\epsilon(x) = e^{-\epsilon|x|}\operatorname{sgn}(x)\) is, since
\[ \int_{-\infty}^{\infty}\bigl|e^{-\epsilon|x|}\operatorname{sgn}(x)\bigr|\,dx = 2\int_0^{\infty}e^{-\epsilon x}\,dx = \frac{2}{\epsilon}. \]
\[ F_\epsilon(\omega) = \mathcal{F}\{f_\epsilon\}(\omega) = \int_{-\infty}^{0} e^{\epsilon x}\cdot(-1)\cdot e^{-i\omega x}\,dx + \int_0^{\infty} e^{-\epsilon x}\cdot 1\cdot e^{-i\omega x}\,dx \]
\[ = -\left[\frac{1}{\epsilon-i\omega}e^{(\epsilon-i\omega)x}\right]_{x=-\infty}^{x=0} + \left[\frac{1}{-\epsilon-i\omega}e^{(-\epsilon-i\omega)x}\right]_{x=0}^{x=\infty} = -\frac{1}{\epsilon-i\omega} - \frac{1}{-\epsilon-i\omega} = -\frac{2i\omega}{\epsilon^2+\omega^2}. \]
Therefore,
\[ \mathcal{F}\{\operatorname{sgn}\}(\omega) = \lim_{\epsilon\to0}F_\epsilon(\omega) = -\lim_{\epsilon\to0}\frac{2i\omega}{\epsilon^2+\omega^2}. \]
If ω = 0, the above limit equals 0, and if ω ≠ 0, the above limit equals \( \dfrac{2}{i\omega} \). Therefore we conclude that
\[ \mathcal{F}\{\operatorname{sgn}\}(\omega) = \begin{cases} \dfrac{2}{i\omega}, & \omega \ne 0 \\[4pt] 0, & \omega = 0. \end{cases} \]
1. Find the Fourier transform of the following functions and for each apply Theorem 2.2.1.
(a) \( f(x) = \begin{cases} 0, & x < -1 \\ -1, & -1 < x < 1 \\ 2, & x > 1. \end{cases} \)
(b) \( f(x) = \begin{cases} 0, & x < 0 \\ x, & 0 < x < 3 \\ 0, & x > 3. \end{cases} \)
(c) \( f(x) = \begin{cases} 0, & x < 0 \\ e^{-x}, & x > 0. \end{cases} \)
(d) \( f(x) = e^{-\frac{ax^2}{2}},\ a > 0. \)
Show that
\[ \mathcal{F}\{f\}(\omega) = \Gamma(a)(1+i\omega)^{-a}. \]
is
\[ \mathcal{F}\{f\}(\omega) = \frac{3}{(2-i\omega)(1+i\omega)}. \]
is
\[ \mathcal{F}\{f\}(\omega) = \frac{\sin(\omega-a)}{\omega-a} + \frac{\sin(\omega+a)}{\omega+a}. \]
5. Let g be a function defined on [0, ∞), and let its Laplace transform L{g} exist. On (−∞, ∞) define the function f by
\[ f(t) = \begin{cases} 0, & t < 0 \\ g(t), & t \ge 0. \end{cases} \]
Show formally that
\[ \mathcal{L}\{g\}(\omega) = F(-i\omega), \]
The next theorem summarizes some of the more important and useful prop-
erties of the Fourier transform.
Theorem 2.2.3. Suppose that f and g are integrable functions on R. Then
(a) For any c ∈ R, F {f (x − c)}(ω) = e−icω F {f }(ω) (Translation).
For the Fourier sine and Fourier cosine transform we have the following
theorem.
Theorem 2.2.4. Suppose that f is continuous and piecewise smooth and
that f and f ′ are integrable functions on (0, ∞). Then
(a) Fc {f ′ }(ω) = ωFs {f }(ω) − f (0).
\[ \lim_{x\to\infty} f(x) = 0. \]
\[ \mathcal{F}^{-1}\left\{\frac{e^{-\frac{\omega^2}{4}}}{1+\omega^2}\right\}(x) = \frac{1}{2\sqrt{\pi}}\int_{-\infty}^{\infty} e^{-(x-y)^2}\,e^{-|y|}\,dy. \]
2.2.2 PROPERTIES OF FOURIER TRANSFORMS 131
Example 2.2.9. Using properties (e) and (f) in Theorem 2.2.3 find the Fourier transform of the function y(x) = e^{−x²}.
Solution. The function y(x) = e^{−x²} satisfies the following differential equation:
\[ y'(x) + 2xy(x) = 0. \]
Applying the Fourier transform to both sides of this equation, with F{y} = Y, from (e) and (f) of Theorem 2.2.3 it follows that
\[ i\omega Y(\omega) + 2iY'(\omega) = 0, \]
i.e.,
\[ Y'(\omega) + \frac{\omega}{2}Y(\omega) = 0. \]
The general solution of the above equation is
\[ Y(\omega) = Ce^{-\frac{\omega^2}{4}}. \]
Since \( Y(0) = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi} \), we have \( C = \sqrt{\pi} \). Therefore,
\[ Y(\omega) = \sqrt{\pi}\,e^{-\frac{\omega^2}{4}}. \]
we have (for a real-valued function f)
\[ \int_{-\infty}^{\infty} |f(x)|^2\,dx = \int_{-\infty}^{\infty} f(x)\left(\frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{i\omega x}\,d\omega\right)dx = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\left(\int_{-\infty}^{\infty} f(x)e^{i\omega x}\,dx\right)d\omega \]
\[ = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\overline{F(\omega)}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\,d\omega, \]
since \(\int_{-\infty}^{\infty} f(x)e^{i\omega x}\,dx = \overline{F(\omega)}\) when f is real. ■
Example 2.2.10. Using the Fourier transform of the unit step function on
the interval (−a, a) show that
\[ \int_{-\infty}^{\infty} \frac{\sin^2(a\omega)}{\omega^2}\,d\omega = a\pi. \]
Next, using the definition of the function g(x) we compute the coefficients cₙ:
\[ (2.2.18)\qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} g(x)e^{-inx}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\sum_{m=-\infty}^{\infty} f(x+2m\pi)\right)e^{-inx}\,dx \]
\[ = \frac{1}{2\pi}\sum_{m=-\infty}^{\infty}\int_{-\pi}^{\pi} f(x+2m\pi)e^{-inx}\,dx \quad\text{(integration term by term)} \]
\[ = \frac{1}{2\pi}\sum_{m=-\infty}^{\infty}\int_{2m\pi-\pi}^{2m\pi+\pi} f(t)e^{-in(t-2m\pi)}\,dt \quad\text{(substitution } x = t-2m\pi\text{)} = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(t)e^{-int}\,dt = \frac{1}{2\pi}F(n), \]
where we used that \(e^{-in(t-2m\pi)} = e^{-int}\), since \(e^{2mn\pi i} = 1\).
Remark. Using the dilation property (c) of Theorem 2.2.3, Poisson's summation formula (2.2.14) in Theorem 2.2.6 can be written in the following form:
\[ (2.2.19)\qquad \sum_{n=-\infty}^{\infty} f(cn) = \frac{1}{c}\sum_{n=-\infty}^{\infty} F\!\left(\frac{2n\pi}{c}\right), \]
Therefore,
\[ \sum_{n=-\infty}^{\infty} \frac{1}{a^2+n^2} = \frac{\pi}{a}\cdot\frac{1+e^{-2a\pi}}{1-e^{-2a\pi}}, \]
and so
\[ \sum_{n=1}^{\infty} \frac{1}{a^2+n^2} = \frac{\pi}{2a}\cdot\frac{1+e^{-2a\pi}}{1-e^{-2a\pi}} - \frac{1}{2a^2}. \]
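The closed form produced by Poisson's summation formula is easy to sanity-check by truncating the series (the tail of \(\sum 1/(a^2+n^2)\) beyond N is of order 1/N), sketched in Python:

```python
import math

def series_sum(a, N=100000):
    # truncated two-sided sum \sum_{n=-N}^{N} 1/(a^2 + n^2); the tail is O(1/N)
    s = 1.0 / (a*a)
    for n in range(1, N + 1):
        s += 2.0 / (a*a + n*n)
    return s

def closed_form(a):
    # the value given by Poisson's summation formula
    return (math.pi / a) * (1 + math.exp(-2*a*math.pi)) / (1 - math.exp(-2*a*math.pi))
```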
Although in the next several chapters we will apply the Fourier transform
in solving partial differential equations, let us take a few examples of the
application of the Fourier transform in solving ordinary differential equations.
Example 2.2.12. Solve the following boundary value problem:
\[ y''(x) - a^2y(x) = -f(x), \quad -\infty < x < \infty, \qquad \lim_{x\to\pm\infty} y(x) = 0. \]
Taking the Fourier transform of both sides gives \(-\omega^2Y(\omega) - a^2Y(\omega) = -F(\omega)\), so that
\[ (2.2.20)\qquad Y(\omega) = \frac{F(\omega)}{a^2+\omega^2}. \]
\[ (2.2.21)\qquad y(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{F(\omega)}{a^2+\omega^2}\,e^{i\omega x}\,d\omega. \]
Alternatively, in order to find the solution y(x) from (2.2.21), we can use the convolution property of the Fourier transform:
\[ y(x) = \mathcal{F}^{-1}\left\{\frac{1}{a^2+\omega^2}\,F(\omega)\right\} = \mathcal{F}^{-1}\left\{\frac{1}{a^2+\omega^2}\right\}(x)*f(x) = \frac{1}{2a}e^{-a|x|}*f(x) = \frac{1}{2a}\int_{-\infty}^{\infty} e^{-a|x-y|}f(y)\,dy. \]
The improper integral (2.2.21) can also be evaluated using the Cauchy Residue Theorem.
Example 2.2.13. Find the solution y(x) of the boundary value problem in
Example 2.2.12 if the forcing function f is given by
f (x) = e−|x| ,
and a ̸= ±1.
Solution. From Example 2.2.1 we have
\[ \mathcal{F}\bigl\{e^{-|x|}\bigr\}(\omega) = \frac{2}{1+\omega^2}, \]
so by (2.2.21)
\[ y(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{2\,e^{i\omega x}}{(1+\omega^2)(a^2+\omega^2)}\,d\omega. \]
\[ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega x}\,d\omega, \quad x \in \mathbb{R}. \]
2. Given that
\[ \mathcal{F}\left\{\frac{1}{1+x^2}\right\}(\omega) = \pi e^{-|\omega|}, \]
find the Fourier transform of
(a) \( \dfrac{1}{1+a^2x^2} \), a a real constant.
(b) \( \dfrac{\cos(ax)}{1+x^2} \), a a real constant.
3. Let H(x) be the Heaviside unit step function and let a > 0. Use the modulation property of the Fourier transform and the fact that
\[ \mathcal{F}\bigl\{e^{-ax}H(x)\bigr\}(\omega) = \frac{1}{a+i\omega} \]
to show that
\[ \mathcal{F}\bigl\{e^{-ax}\sin(bx)H(x)\bigr\}(\omega) = \frac{b}{(a+i\omega)^2+b^2}. \]
\[ \int_{-\infty}^{\infty} \frac{dx}{(x^2+a^2-b^2)^2+4a^2b^2} = \frac{\pi}{2a(a^2+b^2)}. \]
\[ \int_{-\infty}^{\infty} \frac{dx}{(x^2+1)^2} = \frac{\pi}{2}. \]
7. By taking the appropriate closed contour, find the inverse of the following Fourier transforms by the Cauchy Residue Theorem. The parameter a is positive.
(a) \( \dfrac{\omega}{\omega^2+a^2} \).
2.3 PROJECTS USING MATHEMATICA 139
(b) \( \dfrac{3}{(2-i\omega)(1+i\omega)} \).
(c) \( \dfrac{\omega^2}{(\omega^2+a^2)^2} \).
10. Suppose that f is continuous and piecewise smooth and that f and f′ are integrable on (0, ∞). Show that
\[ \mathcal{F}_s\{f'\}(\omega) = -\omega\,\mathcal{F}_c\{f\}(\omega) \]
and
\[ \mathcal{F}_c\{f'\}(\omega) = \omega\,\mathcal{F}_s\{f\}(\omega) - f(0). \]
11. State and prove Parseval’s formulas for the Fourier cosine and Fourier
sine transforms.
In this section we will see how Mathematica can be used to evaluate the
Laplace and Fourier transforms, as well as their inverse transforms. For a
brief overview of the computer software Mathematica consult Appendix H.
Let us start with the Laplace transform.
Mathematica's commands for the Laplace transform and the inverse Laplace transform
\[ (2.3.1)\qquad F(s) = \int_0^{\infty} f(t)e^{-st}\,dt, \qquad f(t) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} F(s)e^{st}\,ds \]
are
In[] := LaplaceTransform[f[t], t, s];
In[] := InverseLaplaceTransform[F[s], s, t];
Project 2.3.1. Using Mathematica find the Laplace transform of the function
\[ f(t) = \begin{cases} t, & 0 \le t < 1 \\ 0, & 1 < t < \infty \end{cases} \]
in two ways:
(a) By (2.3.1).
Part (b):
In[3] := LaplaceTransform[f[t], t, s]
Out[3] = e^{−s}(−1 + e^{s} − s)/s²
Project 2.3.2. Using Mathematica find the inverse Laplace transform of the function
\[ F(s) = \frac{2s^2-3s+1}{s^3(s^2+9)} \]
in two ways:
(a) By Mathematica's command for the inverse Laplace transform.
(b) By the complex inversion formula
\[ (2.3.2)\qquad f(t) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} F(s)e^{st}\,ds = \sum_{n=0}^{\infty}\operatorname{Res}\bigl(F(z)e^{zt},\ z = z_n\bigr), \]
Part (b):
Find the singularities of the function F(z):
In[5] := Solve[z^3 (z^2 + 9) == 0, z]
Out[5] = {{z → 0}, {z → 0}, {z → 0}, {z → −3 i}, {z → 3 i}}
Now add the residues r0, r1 and r at z = 0, z = −3i and z = 3i, respectively, to get the inverse Laplace transform:
In[9] := r0 + r1 + r
Out[9] = (−17/162 + i/18) e^{−3 i t} + (−17/162 − i/18) e^{3 i t} + (1/162)(34 − 54 t + 9 t²)
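Collecting the conjugate residue terms gives the real function \(f(t) = -\tfrac{17}{81}\cos 3t + \tfrac19\sin 3t + \tfrac{1}{162}(34 - 54t + 9t^2)\). As a cross-check independent of Mathematica, one can verify numerically that the Laplace transform of this f reproduces F(s), sketched here in Python with Simpson quadrature (the truncation point T = 25 is an assumption chosen so the tail is negligible for the s values tested):

```python
import math

def f(t):
    # real form of the residue sum: inverse Laplace transform of F(s)
    return (-17/81)*math.cos(3*t) + (1/9)*math.sin(3*t) + (34 - 54*t + 9*t*t)/162

def laplace(s, T=25.0, n=50000):
    # composite Simpson approximation of \int_0^T f(t) e^{-st} dt
    h = T / n
    tot = f(0) + f(T)*math.exp(-s*T)
    for k in range(1, n):
        t = k*h
        tot += (4 if k % 2 else 2) * f(t)*math.exp(-s*t)
    return tot*h/3

def F(s):
    return (2*s*s - 3*s + 1) / (s**3 * (s*s + 9))
```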
The commands to find the Fourier transform and the inverse Fourier transform,
\[ \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt, \qquad \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{i\omega t}\,d\omega, \]
are
In[] := FourierTransform[f[t], t, ω]
In[] := InverseFourierTransform[F[ω], ω, t]
The next project shows how Mathematica can evaluate Fourier transforms and inverse Fourier transforms of a wide range of functions: algebraic, exponential and trigonometric functions, step and impulse functions.
nential and trigonometric functions, step and impulse functions.
Project 2.3.4. Find the Fourier transform of each of the following functions.
Take the constant in the Fourier transform to be 1. From the obtained Fourier
transforms find their inverses.
(a) \( e^{-t^2} \).
(b) \( \dfrac{1}{\sqrt{|t|}} \).
(c) \( \operatorname{sinc}(t) = \dfrac{\sin t}{t} \).
(d) \( \operatorname{sign}(t) = \begin{cases} 1, & t > 0 \\ -1, & t < 0. \end{cases} \)
(e) δ(t)—the Dirac Delta function, concentrated at t = 0.
(f) \( e^{-|t|} \).
Part (a).
In[2] := FourierTransform[Exp[−t^2], t, ω, FourierParameters → {1, −1}]
Out[2] = e^{−ω²/4} √π
In[3] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[3] = e^{−t²}
Part (b).
In[4] := FourierTransform[1/Sqrt[Abs[t]], t, ω, FourierParameters → {1, −1}]
Out[4] = Sqrt[2π]/Sqrt[Abs[ω]]
Part (c).
In[7] := FourierTransform[Sinc[t], t, ω, FourierParameters → {1, −1}]
Out[7] = (1/2) π Sign[1 − ω] + (1/2) π Sign[1 + ω]
In[8] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[8] = Sin[t]/t
Part (d).
In[9] := FourierTransform[Sign[t], t, ω, FourierParameters → {1, −1}]
Out[9] = −2i/ω
Part (e).
In[11] := FourierTransform[DiracDelta[t], t, ω, FourierParameters → {1, −1}]
Out[11] = 1
In[12] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[12] = DiracDelta[t]
Part (f).
In[13] := FourierTransform[Exp[−Abs[t]], t, ω, FourierParameters → {1, −1}]
Out[13] = 2/(1 + ω²)
STURM–LIOUVILLE PROBLEMS
In the first chapter we saw that the trigonometric functions sine and cosine
can be used to represent functions in the form of Fourier series expansions.
Now we will generalize these ideas.
The methods developed here will generally produce solutions of various
boundary value problems in the form of infinite function series. Technical
questions and issues, such as convergence, termwise differentiation and inte-
gration and uniqueness, will not be discussed in detail in this chapter. Interested readers may find these details in the advanced literature on these
topics, such as the book by G. B. Folland [6].
where the real functions p(x), p′ (x), r(x) are continuous on [a, b], and p(x)
and r(x) are positive on [a, b].
Remark. Any differential equation of the form (3.1.1) can be written in
Sturm–Liouville form.
Indeed, first divide (3.1.1) by a(x) to obtain
\[ y''(x) + \frac{b(x)}{a(x)}\,y'(x) + \left[\frac{c(x)}{a(x)} + \lambda\,\frac{r(x)}{a(x)}\right]y = 0. \]
146 3. STURM–LIOUVILLE PROBLEMS
we obtain
\[ \mu(x)y''(x) + \mu(x)\frac{b(x)}{a(x)}\,y'(x) + \left[\frac{\mu(x)c(x)}{a(x)} + \lambda\,\frac{\mu(x)r(x)}{a(x)}\right]y = 0. \]
where p(x), p′ (x) and r(x) are real continuous functions on a finite interval
[a, b], and p(x), r(x) > 0 on [a, b], together with the set of homogeneous
boundary conditions of the form
{
α1 y(a) + β1 y ′ (a) = 0,
(3.1.3)
α2 y(b) + β2 y ′ (b) = 0,
Using this operator the differential equation (3.1.2) can be expressed in the
following form.
(3.1.5) L[y] = −λ r y.
We use the term linear differential operator because of the following im-
portant linear property of the operator L.
cos(µl) = 0.
Notice that all the eigenvalues in Example 3.1.1 are real numbers. Ac-
tually, this is not an accident, and this fact is true for any regular Sturm–
Liouville problem.
We introduce a very useful shorthand notation:
For two square integrable complex functions f and g on an interval [a, b],
the expression
\[ (f, g) = \int_a^b f(x)\overline{g(x)}\,dx \]
(\(\overline{g(x)}\) is the complex conjugate of g(x)) is called the inner product of f and g.
The inner product satisfies the following useful properties:
\[ \int_a^b \left[\bigl(p(x)y'(x)\bigr)' + q(x)y(x)\right]\overline{y(x)}\,dx = -\lambda(y, ry), \]
i.e.,
\[ (3.1.6)\qquad p(b)y'(b)\overline{y(b)} - p(a)y'(a)\overline{y(a)} - \bigl(y', py'\bigr) + \bigl(y, qy\bigr) = -\lambda(y, ry). \]
Now, taking the complex conjugate of the above equation, and using the fact that p, q and r are real functions, along with properties (b) and (c) of the inner product, we have
\[ (3.1.7)\qquad p(b)\overline{y'(b)}\,y(b) - p(a)\overline{y'(a)}\,y(a) - \bigl(y', py'\bigr) + \bigl(y, qy\bigr) = -\overline{\lambda}(y, ry), \]
since the inner products \((y', py')\), \((y, qy)\) and \((y, ry)\) are real. The boundary condition at the point b together with its complex conjugate gives
\[ \begin{cases} \alpha_2 y(b) + \beta_2 y'(b) = 0, \\ \alpha_2\overline{y(b)} + \beta_2\overline{y'(b)} = 0. \end{cases} \]
The coefficients α₂ and β₂ cannot both be zero, for otherwise there would be no boundary condition at x = b. Therefore the determinant of the above homogeneous system vanishes:
\[ y(b)\overline{y'(b)} - \overline{y(b)}\,y'(b) = 0. \]
Now, since y(x) ≢ 0 on [a, b], the continuity of y(x) implies that \( |y(x)|^2 = y(x)\overline{y(x)} > 0 \) for every x in some interval (c, d) ⊆ [a, b]. Therefore, from r(x) > 0 on [a, b] and the continuity of r(x) it follows that \( r(x)|y(x)|^2 > 0 \) for every x ∈ (c, d). Hence (y, ry) > 0, and so (3.1.11) forces \( \lambda - \overline{\lambda} = 0 \), i.e., \( \lambda = \overline{\lambda} \). Therefore λ is a real number. ■
Regular Sturm–Liouville problems have several important properties. How-
ever, we will not prove them all here. An obvious question regarding a regular
Sturm–Liouville problem is that about the existence of eigenvalues and eigen-
functions. One property, related to this question and whose proof is beyond
the scope of this book, but can be found in more advanced books, is the
following theorem.
Theorem 3.1.2. A regular Sturm–Liouville problem has infinitely many real
and simple eigenvalues λₙ, n = 0, 1, 2, . . ., which can be arranged as a monotone increasing sequence
\[ \lambda_0 < \lambda_1 < \lambda_2 < \cdots \]
such that
\[ \lim_{n\to\infty}\lambda_n = \infty. \]
For each eigenvalue λn there exists only one eigenfunction yn (x) (up to a
multiplicative constant).
From the boundary condition y′(1) = 0 we have A = 0, and therefore y(x) = B cos(µ ln x). The other boundary condition y′(2) = 0 implies
\[ \sin(\mu\ln 2) = 0. \]
Hence the eigenvalues are
\[ \lambda_n = \mu_n^2 = \frac{n^2\pi^2}{\ln^2 2}, \quad n = 1, 2, \ldots \]
The corresponding eigenfunctions are functions of the form Aₙ cos(µₙ ln x), and ignoring the coefficients Aₙ, the eigenfunctions of the given problem are
\[ y_n(x) = \cos(\mu_n\ln x), \qquad \mu_n = \frac{n\pi}{\ln 2}, \quad n = 1, 2, \ldots \]
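For eigenfunctions of the form \(\cos(\mu_n \ln x)\) the natural weight is \(r(x) = 1/x\) (an assumption consistent with problems of Euler type on an interval like [1, 2]), and orthogonality can be checked by quadrature. A Python sketch:

```python
import math

def inner(m, n, N=20000):
    # Simpson approximation of \int_1^2 cos(mu_m ln x) cos(mu_n ln x) (1/x) dx,
    # with mu_k = k*pi/ln 2; N must be even
    mu = lambda k: k * math.pi / math.log(2)
    f = lambda x: math.cos(mu(m)*math.log(x)) * math.cos(mu(n)*math.log(x)) / x
    h = 1.0 / N
    s = f(1.0) + f(2.0)
    for k in range(1, N):
        s += (4 if k % 2 else 2) * f(1.0 + k*h)
    return s * h / 3
```

The substitution t = ln x turns these integrals into ordinary Fourier cosine integrals on [0, ln 2], which explains the exact values below.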
\( y = Ae^{x} + Bxe^{x}. \)
3.1 REGULAR STURM–LIOUVILLE PROBLEMS 153
Thus B = −A, and so
\[ A\Bigl[(1+\mu)e^{1+\mu} - (1-\mu)e^{1-\mu}\Bigr] = 0. \]
Now, since
\[ (1+\mu)e^{1+\mu} > (1-\mu)e^{1-\mu} \]
for all µ > 0, it follows that A = 0, and hence the given boundary value problem has only a trivial solution in this case also.
Case 3. λ > 1. In this case we have 1 − λ = −µ², with µ > 0. The differential equation in this case has a general solution
\[ y(x) = e^{x}\bigl[A\cos(\mu x) + B\sin(\mu x)\bigr]. \]
Figure 3.1.1: the graphs of tan µ and the line −µ; their intersection points δ₁ < δ₂ < δ₃ < δ₄ < · · ·, lying near π/2, 3π/2, 5π/2, 7π/2, . . ., give the solutions of (3.1.12).
Therefore, the eigenvalues for this problem are λ = 1 + µ2 , where the pos-
itive number µ satisfies the transcendental equation (3.1.12). Although the
solutions of (3.1.12) cannot be found explicitly, from the graphical sketch of
tan µ and −µ in Figure 3.1.1 we see that there are infinitely many solutions
µ1 < µ2 < . . .. Also, the following estimates are valid.
π 3π 2n − 1
< µ1 < π, < µ2 < 2π, . . . , π < µn < nπ, . . .
2 2 2
2n − 1
π < µn < nπ,
2
then (√ )
λn = 1 + µ2n , yn (x) = ex sin 1 + λn x
are the eigenvalues and the associated eigenfunctions.
From the estimates
\[ \frac{2n-1}{2}\pi < \mu_n < n\pi \]
it is clear that
\[ \lambda_1 < \lambda_2 < \cdots < \lambda_n < \cdots \qquad\text{and}\qquad \lim_{n\to\infty}\lambda_n = \infty. \]
Solution. The general solution y(x) of the given differential equation depends
on the parameter λ:
For the case when λ = 0 the solution of the equation is y(x) = A + Bx.
When we impose the two boundary conditions we find B = 0 and A+B−B =
0. But this means that both A and B must be zero, and therefore λ = 0 is
not an eigenvalue of the problem.
If λ = −µ2 < 0, with µ > 0, then the solution of the differential equation
is
\[ y(x) = A\sinh(\mu x) + B\cosh(\mu x). \]
Since
\[ y'(x) = A\mu\cosh(\mu x) + B\mu\sinh(\mu x), \]
the boundary condition at x = 0 implies that A = 0, and the remaining boundary condition then gives
\[ \tanh\mu = \frac{1}{\mu}, \quad \mu > 0. \]
As we see from the Figure 3.1.2, there is a single solution of the above equa-
tion, which we will denote by µ0 . Thus, in this case there is a single eigenvalue
λ0 = −µ20 and a corresponding eigenfunction y0 (x) = cosh(µ0 x).
Figure 3.1.2: the graphs of tanh µ and 1/µ; they intersect at a single point µ₀ > 0.
Since
\[ y'(x) = A\mu\cos(\mu x) - B\mu\sin(\mu x), \]
the boundary condition at x = 0 implies that A = 0, and the remaining boundary condition leads to
\[ (3.1.13)\qquad -\tan\mu = \frac{1}{\mu}, \quad \mu > 0. \]
Figure 3.1.3: the graphs of tan µ and −1/µ; the intersection points µ₁ < µ₂ < µ₃ < µ₄ < · · ·, lying near π/2, 3π/2, 5π/2, 7π/2, 9π/2, . . ., give the solutions of (3.1.13).
y ′′ + λy = 0, 0 < x < π,
y(x) = A + Bx.
From the last equation we obtain B = 0, and therefore any λ < 0 cannot be
an eigenvalue.
Now, let λ = µ2 > 0. In this case the general solution of the differential
equation is given by
y = A sin µx + B cos µx.
Therefore sin µπ = 0. From the last equation, in view of the fact that µ ≠ 0,
constants, to each eigenvalue λn = n2 there are two linearly independent
eigenfunctions
{ }
yn (x) = sin nx, cos nx, n = 1, 2, 3, . . . .
The negative integers n give the same eigenvalues and the same associated
eigenfunctions.
Show that the eigenvalues of this problem are all negative, λ = −µ², where µ satisfies the transcendental equation
\[ \tanh(2\mu) = \frac{4\mu}{3-4\mu^2}. \]
4. Find an equation that the eigenvalues for each of the following bound-
ary value problems satisfy.
are given by
\[ \lambda_n = \left(\frac{n\pi}{\ln 2}\right)^2 + \frac{1}{4}, \qquad y_n(x) = \frac{\sin\left(\frac{n\pi\ln(1+x)}{\ln 2}\right)}{\sqrt{1+x}}, \quad n = 1, 2, \ldots \]
Multiplying the first equation by y₂(x) and the second by y₁(x) and subtracting, we get
\[ (3.2.1)\qquad \bigl(p(x)y_1'(x)\bigr)'y_2(x) - \bigl(p(x)y_2'(x)\bigr)'y_1(x) = 0. \]
However, since
\[ \Bigl[p(x)\bigl(y_1'(x)y_2(x) - y_2'(x)y_1(x)\bigr)\Bigr]' = \bigl(p(x)y_1'(x)\bigr)'y_2(x) - \bigl(p(x)y_2'(x)\bigr)'y_1(x), \]
equation (3.2.1) implies
\[ (3.2.2)\qquad p(x)\bigl[y_1'(x)y_2(x) - y_2'(x)y_1(x)\bigr] = C = \text{constant}, \quad a \le x \le b. \]
To find C, we use the fact that y₁ and y₂ satisfy the boundary condition at the point x = a:
\[ \begin{cases} \alpha_1 y_1(a) + \beta_1 y_1'(a) = 0, \\ \alpha_1 y_2(a) + \beta_1 y_2'(a) = 0. \end{cases} \]
Since at least one of the coefficients α₁ and β₁ is not zero, from the above two equations it follows that
\[ y_1'(a)y_2(a) - y_2'(a)y_1(a) = 0, \]
and therefore C = 0, from which we conclude that y₂(x) = Ay₁(x) for some constant A. ■
Now, following the argument in Theorem 3.2.1, from the boundary conditions for the functions y₁(x) and y₂(x) at x = a and x = b we get
\[ \begin{cases} y_1(a)y_2'(a) - y_2(a)y_1'(a) = 0, \\ y_1(b)y_2'(b) - y_2(b)y_1'(b) = 0. \end{cases} \]
\[ y_n(x) = \sin\mu_n x, \qquad \mu_n = \frac{(2n-1)\pi}{2l}, \quad n = 1, 2, \ldots \]
Using the trigonometric identity
\[ \sin x\,\sin y = \frac{1}{2}\bigl(\cos(x-y) - \cos(x+y)\bigr) \]
we have, for m ≠ n,
\[ \int_0^l y_m(x)y_n(x)w(x)\,dx = \int_0^l \sin\frac{(2m-1)\pi x}{2l}\,\sin\frac{(2n-1)\pi x}{2l}\,dx = \frac{1}{2}\int_0^l \left[\cos\frac{(m-n)\pi x}{l} - \cos\frac{(m+n-1)\pi x}{l}\right]dx \]
\[ = \frac{l}{2(m-n)\pi}\left[\sin\frac{(m-n)\pi x}{l}\right]_0^l - \frac{l}{2(m+n-1)\pi}\left[\sin\frac{(m+n-1)\pi x}{l}\right]_0^l = 0. \]
Now, with the help of this theorem we can expand a given function f in
a series of eigenfunctions of a regular Sturm–Liouville problem. We have had
examples of such expansions in Chapter 1. Namely, if f is a continuous and
piecewise smooth function on the interval [0, 1], and satisfies the boundary
conditions f (0) = f (1) = 0, then f can be expanded in the Fourier sine series
\[ f(x) = \sum_{n=1}^{\infty} b_n\sin n\pi x, \qquad b_n = 2\int_0^1 f(x)\sin n\pi x\,dx. \]
3.2 EIGENFUNCTION EXPANSIONS 163
Since f (x) is continuous we have that the Fourier sine series converges to
f (x) for every x ∈ [0, 1]. Notice that the functions sin nπx, n = 1, 2 . . . are
the eigenfunctions of the boundary value problem
y ′′ + λy = 0, y(0) = y(1) = 0.
We have had similar examples for expanding functions in Fourier cosine series.
Let f (x) be a function defined on the interval [a, b]. For a sequence
{yn (x) : n ∈ N}
and let
\[ (3.2.5)\qquad f(x) = \sum_{k=1}^{\infty} c_k y_k(x), \quad x \in (a, b). \]
But the question is how to compute each of the coefficients cₖ in the series (3.2.5). To answer this question we work very similarly to the case of Fourier series. Let n be any natural number. If we multiply Equation (3.2.5) by r(x)yₙ(x), and if we assume that the series can be integrated term by term, we obtain that
\[ (3.2.6)\qquad \int_a^b f(x)y_n(x)r(x)\,dx = \sum_{k=1}^{\infty} c_k\int_a^b y_k(x)y_n(x)r(x)\,dx. \]
By the orthogonality of the eigenfunctions, every term on the right-hand side with k ≠ n vanishes, and therefore
\[ (3.2.7)\qquad c_n = \frac{\displaystyle\int_a^b f(x)y_n(x)r(x)\,dx}{\displaystyle\int_a^b \bigl(y_n(x)\bigr)^2 r(x)\,dx}. \]
Remark. The series in (3.2.5) is called the generalized Fourier series (also
called the eigenfunction expansions) of the function f with respect to the
eigenfunctions yk (x) and ck , given by (3.2.7), are called generalized Fourier
coefficients of f .
The study of pointwise and uniform convergence of this kind of generalized
Fourier series is a challenging problem. We present here the following theorem
without proof, dealing with the pointwise and uniform convergence of such
series.
Theorem 3.2.3. Let λn and yn (x), n = 1, 2, . . ., be the eigenvalues and
associated eigenfunctions, respectively, of the regular Sturm–Liouville problem
(3.1.2) and (3.1.3). Then
(i) If both f (x) and f ′ (x) are piecewise continuous on [a, b], then f can
be expanded in a convergent generalized Fourier series (3.2.5), whose
generalized Fourier coefficients cn are given by (3.2.7), and moreover
\[ \sum_{n=1}^{\infty} c_n y_n(x) = \frac{f(x^-) + f(x^+)}{2} \]
at any point x in the open interval (a, b).
(iii) If both f (x) and f ′ (x) are piecewise continuous on [a, b], and f is
continuous on a subinterval [α, β] ⊂ [a, b], then the generalized Fourier
series of f converges uniformly to f on the subinterval [α, β].
Therefore,
\[ c_n = \frac{\int_0^{\pi} f(x)y_n(x)r(x)\,dx}{\int_0^{\pi}\bigl(y_n(x)\bigr)^2 r(x)\,dx} = \frac{2}{\pi}\int_0^{\pi} x\sin nx\,dx = \frac{2}{\pi}\left[-x\,\frac{\cos nx}{n} + \frac{\sin nx}{n^2}\right]_{x=0}^{x=\pi} = \frac{2}{\pi}\Bigl[-\frac{\pi\cos n\pi}{n}\Bigr] = \frac{2}{n}(-1)^{n-1}. \]
Hence
\[ x = 2\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\sin nx, \quad \text{for every } x \in (0, \pi). \]
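The coefficient formula (3.2.7) can be tested directly: computing \( (2/\pi)\int_0^\pi x\sin nx\,dx \) by quadrature should reproduce \(2(-1)^{n-1}/n\). A Python sketch:

```python
import math

def c(n, N=20000):
    # Simpson approximation of c_n = (2/pi) \int_0^pi x sin(nx) dx; N must be even
    h = math.pi / N
    s = 0.0 + math.pi*math.sin(n*math.pi)   # integrand values at the endpoints
    for k in range(1, N):
        x = k * h
        s += (4 if k % 2 else 2) * x*math.sin(n*x)
    return (2/math.pi) * s * h / 3
```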
The eigenvalues and eigenfunctions of the Sturm–Liouville problem
\[ \bigl(xy'(x)\bigr)' + \frac{\lambda}{x}\,y = 0, \quad 1 < x < e, \qquad y(1) = y'(e) = 0, \]
are
\[ \lambda_n = \frac{(2n-1)^2\pi^2}{4}, \qquad y_n(x) = \sin\left(\frac{(2n-1)\pi}{2}\ln x\right), \quad n = 1, 2, \ldots \]
With the weight r(x) = 1/x we have
\[ \int_1^e y_n^2(x)r(x)\,dx = \int_1^e \sin^2\left(\frac{(2n-1)\pi}{2}\ln x\right)\frac{1}{x}\,dx \quad (t = \ln x) \]
\[ = \frac{2}{(2n-1)\pi}\int_0^{(2n-1)\pi/2}\sin^2 t\,dt = \frac{1}{(2n-1)\pi}\left[t - \frac{1}{2}\sin 2t\right]_{t=0}^{t=(2n-1)\pi/2} = \frac{1}{2}. \]
Therefore,
\[ c_n = \frac{\int_1^e f(x)y_n(x)r(x)\,dx}{\int_1^e y_n^2(x)r(x)\,dx} = 2\int_1^e \sin\left(\frac{(2n-1)\pi}{2}\ln x\right)\frac{1}{x}\,dx \]
\[ = \frac{4}{(2n-1)\pi}\int_0^{(2n-1)\pi/2}\sin t\,dt = \frac{4}{(2n-1)\pi}\Bigl[-\cos t\Bigr]_{t=0}^{t=(2n-1)\pi/2} = \frac{4}{(2n-1)\pi}. \]
Since the given function f = 1 does not satisfy the boundary conditions at x = 1 and x = e, we do not have uniform convergence of the Fourier series on the interval [1, e]. However, we have pointwise convergence to 1 for all x in the interval (1, e):
\[ 1 = \sum_{n=1}^{\infty}\frac{4}{(2n-1)\pi}\sin\left(\frac{(2n-1)\pi}{2}\ln x\right), \quad 1 < x < e. \]
Now, if, for example, we take \( x = \sqrt{e} \in (1, e) \) in the above series, we obtain
\[ 1 = \sum_{n=1}^{\infty}\frac{4}{(2n-1)\pi}\sin\frac{(2n-1)\pi}{4}, \]
and since the values \( \sin\frac{(2n-1)\pi}{4} \) for n = 1, 2, 3, 4, 5, . . . follow the repeating pattern \( \frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}, \ldots \), it follows that
\[ \frac{\pi}{4} = \frac{\sqrt{2}}{2}\left[1 + \frac{1}{3} - \frac{1}{5} - \frac{1}{7} + \frac{1}{9} + \frac{1}{11} - \cdots\right]. \]
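The slowly convergent series above is easy to test by partial summation (Python sketch; the sign pattern repeats in blocks of two):

```python
import math

def partial(N):
    # (sqrt(2)/2) * sum_{n=1}^{N} s_n/(2n-1), with signs +,+,-,-,+,+,...
    s = 0.0
    for n in range(1, N + 1):
        k = (n + 1) // 2            # n = 2k-1 or n = 2k share the same sign
        sign = 1 if k % 2 == 1 else -1
        s += sign / (2*n - 1)
    return (math.sqrt(2)/2) * s
```

The error of the partial sum is of order 1/N, so convergence to π/4 is visible but slow.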
\[ \bigl(xy'(x)\bigr)' + \frac{\lambda}{x}\,y = 0, \quad 1 < x < 2, \qquad y(1) = y(2) = 0 \]
are given by
\[ \lambda_n = \left(\frac{n\pi}{\ln 2}\right)^2, \qquad y_n(x) = \sin\left(\sqrt{\lambda_n}\,\ln x\right). \]
y ′′ + λy = 0, y(0) = y ′ (1) = 0,
(a) f (x) = 1, 0 ≤ x ≤ 1.
(b) f (x) = x, 0 ≤ x ≤ 1.
(c) \( f(x) = \begin{cases} 1, & 0 \le x < \frac12 \\ 0, & \frac12 \le x \le 1. \end{cases} \)
(d) \( f(x) = \begin{cases} 2x, & 0 \le x < \frac12 \\ 1, & \frac12 \le x \le 1. \end{cases} \)
\[ y'' + \lambda y = 0, \qquad y(0) = y(l) = 0 \]
has eigenfunctions
\[ y_n(x) = \sin\left(\frac{n\pi x}{l}\right), \quad n = 1, 2, \ldots \]
\[ y'' + \lambda y = 0, \qquad y(0) = y'(l) = 0 \]
has eigenfunctions
\[ y_n(x) = \sin\left(\frac{(2n-1)\pi x}{2l}\right), \quad n = 1, 2, \ldots \]
are
\[ y_n(x) = \cos\left(\sqrt{\lambda_n}\,x\right), \]
where λₙ are the positive solutions of \( \cos\sqrt{\lambda} - \sqrt{\lambda}\sin\sqrt{\lambda} = 0 \).
Find the eigenfunction expansion of the following functions in terms
of these eigenfunctions.
(a) f (x) = 1.
(b) f (x) = xc , c ∈ R.
(ii) p(x) = 0 for some x ∈ [a, b] or r(x) = 0 for some x ∈ [a, b].
\[ \int_a^b y_1(x)y_2(x)r(x)\,dx = 0. \]
in (3.3.1) we obtain
\[ (1-x^2)\sum_{k=0}^{\infty}k(k-1)a_kx^{k-2} - 2x\sum_{k=0}^{\infty}ka_kx^{k-1} + \lambda(\lambda+1)\sum_{k=0}^{\infty}a_kx^k = 0, \]
or after a rearrangement
\[ \sum_{k=0}^{\infty}k(k-1)a_kx^{k-2} + \sum_{k=0}^{\infty}\bigl[\lambda(\lambda+1) - k(k+1)\bigr]a_kx^k = 0. \]
Since the above equation holds for every x ∈ (−1, 1), each coefficient in the above power series must be zero, leading to the following recursive relation between the coefficients aₖ:
\[ (3.3.4)\qquad a_{k+2} = \frac{k(k+1)-\lambda(\lambda+1)}{(k+2)(k+1)}\,a_k, \quad k = 0, 1, 2, \ldots \]
\[ (3.3.6)\qquad a_{2m+1} = (-1)^m\,\frac{\displaystyle\left(\prod_{k=1}^{m}(\lambda-2k+1)\right)\left(\prod_{k=1}^{m}(\lambda+2k)\right)}{(2m+1)!}\,a_1. \]
Inserting (3.3.5) and (3.3.6) into (3.3.3) we obtain that the general solution of Legendre's equation is y(x) = a₀y₀(x) + a₁y₁(x), where
\[ (3.3.7)\qquad y_0(x) = \sum_{m=0}^{\infty}(-1)^m\,\frac{\left(\prod_{k=1}^{m}(\lambda-2k+2)\right)\left(\prod_{k=1}^{m}(\lambda+2k-1)\right)}{(2m)!}\,x^{2m} \]
and
\[ (3.3.8)\qquad y_1(x) = \sum_{m=0}^{\infty}(-1)^m\,\frac{\left(\prod_{k=1}^{m}(\lambda-2k+1)\right)\left(\prod_{k=1}^{m}(\lambda+2k)\right)}{(2m+1)!}\,x^{2m+1}. \]
Remark. The symbol ∏ (read "product") in the above formulas has the following meaning: for a sequence \(\{c_k : k = 1, 2, \ldots\}\) of numbers cₖ we define
\[ \prod_{k=1}^{m}c_k = c_1\cdot c_2\cdots c_m. \]
For example,
\[ \prod_{k=1}^{m}(\lambda-2k+2) = \lambda(\lambda-2)\cdots(\lambda-2m+2). \]
\[ (3.3.10)\qquad P_n(x) = \frac{1}{2^n}\sum_{k=0}^{m}(-1)^k\,\frac{(2n-2k)!}{k!\,(n-k)!\,(n-2k)!}\,x^{n-2k}, \]
The first six Legendre polynomials are
\[ P_0(x) = 1; \quad P_1(x) = x; \quad P_2(x) = \frac{1}{2}(3x^2-1); \quad P_3(x) = \frac{1}{2}(5x^3-3x); \]
\[ P_4(x) = \frac{1}{8}(35x^4-30x^2+3); \quad P_5(x) = \frac{1}{8}(63x^5-70x^3+15x). \]
The graphs of the first six polynomials are shown in Figure 3.3.1.
Figure 3.3.1: the graphs of the Legendre polynomials P₀(x), P₁(x), . . . , P₅(x) on the interval [−1, 1].
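The explicit forms listed above can be cross-checked against the standard three-term (Bonnet) recurrence \((k+1)P_{k+1}(x) = (2k+1)xP_k(x) - kP_{k-1}(x)\), which follows from the recurrences discussed below. A Python sketch:

```python
def legendre(n, x):
    # evaluate P_n(x) by the Bonnet recurrence:
    # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, with P_0 = 1, P_1 = x
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*x*p1 - k*p0) / (k + 1)
    return p1
```

Agreement with the closed forms at an arbitrary point, and with the normalization Pₙ(1) = 1, confirms the list.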
Proof. If y(x) = (x² − 1)ⁿ, then y′(x) = 2nx(x² − 1)ⁿ⁻¹ and therefore
\[ (x^2-1)y'(x) = 2nx\,y(x). \]
If we differentiate the last equation (n + 1) times, using the Leibnitz rule for differentiation, we obtain
\[ (x^2-1)y^{(n+2)}(x) + \binom{n+1}{1}2x\,y^{(n+1)}(x) + \binom{n+1}{2}2\,y^{(n)}(x) = 2nx\,y^{(n+1)}(x) + 2n\binom{n+1}{1}y^{(n)}(x). \]
If we introduce a new function w(x) by w(x) = y⁽ⁿ⁾(x), then from the last equation it follows that
\[ (x^2-1)w''(x) + 2(n+1)x\,w'(x) + (n+1)n\,w(x) = 2nx\,w'(x) + 2n(n+1)w(x), \]
or
\[ (1-x^2)w''(x) - 2x\,w'(x) + n(n+1)w(x) = 0. \]
So w(x) is a solution of Legendre's equation, and since w(x) is a polynomial, by Theorem 3.3.1 it follows that
\[ \frac{d^n}{dx^n}\bigl(x^2-1\bigr)^n = w(x) = c_nP_n(x) \]
for some constant cₙ. To find the constant cₙ, again we apply the Leibnitz rule:
\[ \frac{d^n}{dx^n}\bigl(x^2-1\bigr)^n = \frac{d^n}{dx^n}\bigl((x-1)(x+1)\bigr)^n = n!\,(x+1)^n + R_n, \]
where Rₙ denotes the sum of the n remaining terms, each having the factor (x − 1). Therefore,
\[ \left.\frac{d^n}{dx^n}\bigl(x^2-1\bigr)^n\right|_{x=1} = 2^n n!, \]
and so from \(2^n n! = c_nP_n(1)\) and \(P_n(1) = 1\) we obtain \(c_n = 2^n n!\), which establishes Rodrigues' formula (3.3.11). ■
3.3.2 LEGENDRE’S DIFFERENTIAL EQUATION 175
\[ P_2(x) = \frac{1}{2^2\,2!}\frac{d^2}{dx^2}\bigl(x^2-1\bigr)^2 = \frac{1}{8}\frac{d^2}{dx^2}\bigl(x^4-2x^2+1\bigr) = \frac{1}{2}(3x^2-1). \]
\[ (3.3.12)\qquad G(t, x) \equiv \frac{1}{\sqrt{1-2xt+t^2}}. \]
\[ (3.3.13)\qquad \frac{1}{\sqrt{1-2xt+t^2}} = \sum_{n=0}^{\infty}P_n(x)t^n. \]
Proof. If we use the binomial expansion for the function G(t, x), then we have
\[ G(t,x) = (1-2xt+t^2)^{-\frac12} = \sum_{n=0}^{\infty}\frac{1\cdot3\cdots(2n-1)}{2^n n!}\,(2xt-t^2)^n = \sum_{n=0}^{\infty}\frac{1\cdot3\cdots(2n-1)}{2^n n!}\sum_{k=0}^{n}\frac{n!}{k!(n-k)!}(2x)^{n-k}(-t^2)^k \]
\[ = \sum_{n=0}^{\infty}\left[\sum_{k=0}^{m}(-1)^k\,\frac{(2n-2k)!}{2^n\,k!\,(n-2k)!\,(n-k)!}\,x^{n-2k}\right]t^n, \]
where m = n/2 if n is even and m = (n − 1)/2 if n is odd. But the coefficient of tⁿ is exactly the Legendre polynomial Pₙ(x). ■
$$(3.3.15)\qquad P'_{n+1}(x) + P'_{n-1}(x) = 2x P'_n(x) + P_n(x).$$
$$(3.3.16)\qquad P'_{n+1}(x) - P'_{n-1}(x) = (2n+1) P_n(x).$$
$$(3.3.17)\qquad \int_{-1}^{1} P_n^2(x)\, dx = \frac{2}{2n+1}.$$
176 3. STURM–LIOUVILLE PROBLEMS
or, using again the expansion (3.3.13) for the generating function G(t, x),
$$(1 - 2xt + t^2) \sum_{n=0}^{\infty} n P_n(x)\, t^{n-1} - (x - t) \sum_{n=0}^{\infty} P_n(x)\, t^n = 0.$$
Since the left hand side vanishes for all t, the coefficients of each power of t must be zero, giving (3.3.14).
To prove (3.3.15) we differentiate Equation (3.3.13) with respect to x:
$$\frac{\partial G(t, x)}{\partial x} = \frac{t}{(1 - 2xt + t^2)^{\frac32}} = \sum_{n=0}^{\infty} P'_n(x)\, t^n,$$
or, in view of the expansion (3.3.13) for the generating function G(t, x),
$$(1 - 2xt + t^2) \sum_{n=0}^{\infty} P'_n(x)\, t^n - t \sum_{n=0}^{\infty} P_n(x)\, t^n = 0.$$
for all x ∈ (-1, 1). Now, if we multiply the first equation by $P_n(x)$ and the second by $P_{n-1}(x)$, integrate over the interval [-1, 1] and use the orthogonality property of Legendre's polynomials, we obtain
$$n \int_{-1}^{1} P_n^2(x)\, dx = (2n-1) \int_{-1}^{1} x P_{n-1}(x) P_n(x)\, dx,$$
$$n \int_{-1}^{1} P_{n-1}^2(x)\, dx = (2n+1) \int_{-1}^{1} x P_{n-1}(x) P_n(x)\, dx.$$
Remark. If the function f and its first n derivatives are continuous on the interval (-1, 1), then using Rodrigues' formula (3.3.11) and integration by parts, it follows that
$$(3.3.18)\qquad a_n = \frac{(-1)^n}{2^n n!} \int_{-1}^{1} (x^2 - 1)^n f^{(n)}(x)\, dx.$$
For n = 0, we have
$$a_0 = \frac{1}{2} \int_{0}^{1} P_0(x)\, dx = \frac{1}{2} \int_{0}^{1} 1\, dx = \frac{1}{2}.$$
Now, since
$$P_n(0) = \begin{cases} 0, & \text{if } n = 2k+1, \\[4pt] (-1)^k\, \dfrac{(2k-1)!!}{(2k)!!}, & \text{if } n = 2k, \end{cases}$$
we obtain
$$H(x) \sim \frac{1}{2} + \frac{1}{2} \sum_{k=1}^{\infty} (-1)^{k-1}\, \frac{(2k-3)!!}{(2k-2)!!} \cdot \frac{4k-1}{2k}\, P_{2k-1}(x).$$
The sum of the first 23 terms is shown in Figure 3.3.2. We note the slow
convergence of the series to the Heaviside function. Also, we see that the
Gibbs phenomenon is present due to the jump discontinuity at x = 0.
Figure 3.3.2: the sum of the first 23 terms of the Legendre series for the Heaviside function on [-1, 1].
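The coefficients of the Heaviside series above can be cross-checked against the defining integrals $a_n = \frac{2n+1}{2}\int_0^1 P_n(x)\,dx$. As an illustrative sketch in Python (assuming SciPy; the book itself would use Mathematica for this), the double factorials are computed by a simple ratio recurrence to avoid overflow.

```python
# Compare a_{2k-1} = (1/2)(-1)^{k-1}(4k-1)(2k-3)!!/(2k-2)!! * 1/(2k) ... ,
# i.e. the closed-form Heaviside coefficients, with direct quadrature.
from scipy.integrate import quad
from scipy.special import eval_legendre

coeffs, quads = [], []
d = 0.5  # d_k = (2k-3)!!/(2k)!!, starting at k = 1 where (-1)!! = 1
for k in range(1, 6):
    n = 2 * k - 1
    coeffs.append(0.5 * (-1) ** (k - 1) * (4 * k - 1) * d)
    quads.append((2 * n + 1) / 2 * quad(lambda x, n=n: eval_legendre(n, x), 0, 1)[0])
    d *= (2 * k - 1) / (2 * k + 2)  # ratio d_{k+1}/d_k
print(coeffs[:2])  # a_1 = 0.75, a_3 = -0.4375
```

For instance the first coefficient is $a_1 = \frac{3}{2}\int_0^1 x\,dx = \frac34$.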
$$(3.3.19)\qquad \frac{d}{dx}\left( (1 - x^2)\, \frac{dy}{dx} \right) + \left( \lambda - \frac{m^2}{1 - x^2} \right) y = 0, \qquad -1 < x < 1,$$
where m = 0, 1, 2, ... and λ is a real number. The nontrivial and bounded functions on the interval (-1, 1) which are solutions of Equation (3.3.19) are called associated Legendre functions of order m. Notice that if m = 0 and λ = n(n+1) for n ∈ N, then Equation (3.3.19) is the Legendre equation of order n, and so the associated Legendre functions in this case are the Legendre polynomials $P_n(x)$. To find the associated Legendre functions for any other m we use the following substitution:
$$(3.3.20)\qquad y(x) = (1 - x^2)^{\frac{m}{2}}\, v(x).$$
$$(3.3.22)\qquad (1 - x^2)\, \frac{d^2 w}{dx^2} - 2x\, \frac{dw}{dx} + \lambda w = 0.$$
If we differentiate Equation (3.3.22) m times (using the Leibnitz rule) we obtain
$$(1 - x^2)\, \frac{d^{m+2} w}{dx^{m+2}} - 2(m+1)x\, \frac{d^{m+1} w}{dx^{m+1}} + \left[ \lambda - m(m+1) \right] \frac{d^m w}{dx^m} = 0,$$
which can be written in the form
$$(3.3.23)\qquad (1 - x^2) \left( \frac{d^m w}{dx^m} \right)'' - 2(m+1)x \left( \frac{d^m w}{dx^m} \right)' + \left[ \lambda - m(m+1) \right] \frac{d^m w}{dx^m} = 0.$$
$$(3.3.24)\qquad v(x) = \frac{d^m w}{dx^m},$$
where w(x) is a particular solution of the Legendre equation (3.3.22). We have already seen that nontrivial bounded solutions w(x) of the Legendre equation (3.3.22) exist only if λ = n(n+1) for some n = 0, 1, 2, ..., and in this case $w(x) = P_n(x)$ is the Legendre polynomial of order n. Therefore from (3.3.20) and (3.3.24) it follows that
$$(3.3.25)\qquad y(x) = (1 - x^2)^{\frac{m}{2}}\, \frac{d^m P_n(x)}{dx^m}$$
are the bounded solutions of (3.3.19). Since $P_n(x)$ are polynomials of degree n, from (3.3.25) using the Leibnitz rule it follows that y(x) are nontrivial only if m ≤ n, and so the associated Legendre functions, usually denoted by $P_n^{(m)}(x)$, are given by
$$(3.3.26)\qquad P_n^{(m)}(x) = (1 - x^2)^{\frac{m}{2}}\, \frac{d^m P_n(x)}{dx^m}.$$
Theorem 3.3.5. For any m, n and k we have the following integral relation:
$$(3.3.27)\qquad \int_{-1}^{1} P_n^{(m)}(x)\, P_k^{(m)}(x)\, dx = \begin{cases} 0, & k \neq n, \\[4pt] \dfrac{2(n+m)!}{(2n+1)(n-m)!}, & k = n. \end{cases}$$
Proof. From the definition (3.3.26) of the associated Legendre polynomials $P_n^{(m)}$, let
$$A_{n,k}^{(m)} = \int_{-1}^{1} P_n^{(m)}\, P_k^{(m)}\, dx.$$
If we use relation (3.3.26) for the associated Legendre polynomials $P_n^{(m)}$ and $P_k^{(m)}$, using the integration by parts formula we obtain
$$A_{n,k}^{(m)} = \int_{-1}^{1} (1 - x^2)^m\, \frac{d^m P_n(x)}{dx^m}\, \frac{d^m P_k(x)}{dx^m}\, dx$$
$$= \left[ (1 - x^2)^m\, \frac{d^m P_n(x)}{dx^m}\, \frac{d^{m-1} P_k(x)}{dx^{m-1}} \right]_{x=-1}^{x=1} - \int_{-1}^{1} \frac{d^{m-1} P_k(x)}{dx^{m-1}}\, \frac{d}{dx}\left( (1 - x^2)^m\, \frac{d^m P_n(x)}{dx^m} \right) dx$$
$$= - \int_{-1}^{1} \frac{d^{m-1} P_k(x)}{dx^{m-1}}\, \frac{d}{dx}\left( (1 - x^2)^m\, \frac{d^m P_n(x)}{dx^m} \right) dx.$$
we obtain that
$$A_{n,k}^{(m)} = \frac{2(n+m)!}{(2n+1)(n-m)!}. \qquad\blacksquare$$
If we write out explicitly the terms for k = 0 and k = 1 in the first sum, and re-index the other sums, then we have
$$(r^2 - \mu^2)\, a_0 + \left[ (r+1)^2 - \mu^2 \right] a_1 x + \sum_{k=2}^{\infty} \left\{ \left[ (k+r)^2 - \mu^2 \right] a_k + a_{k-2} \right\} x^k = 0.$$
In order for this equation to be satisfied identically for every x > 0, the coefficients of all powers of x must vanish. Therefore,
$$a_{2k+1} = 0, \qquad k = 0, 1, 2, \ldots$$
and
$$a_{2k} = -\frac{1}{2^2\, k(k+\mu)}\, a_{2k-2} = \frac{1}{2^4\, k(k-1)(k+\mu)(k-1+\mu)}\, a_{2k-4}$$
$$= \frac{(-1)^k}{2^{2k}\, k!\, (k+\mu)(k-1+\mu) \cdots (1+\mu)}\, a_0 = (-1)^k\, \frac{\Gamma(1+\mu)}{2^{2k}\, k!\, \Gamma(k+1+\mu)}\, a_0.$$
The function Jµ (x), µ ≥ 0, is called the Bessel function of the first kind
of order µ.
If μ = n is a nonnegative integer, then using the properties of the Gamma function Γ, from (3.3.34) it follows that
$$J_n(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(k+n)!} \left( \frac{x}{2} \right)^{2k+n}.$$
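A quick numerical sanity check of this series is possible outside Mathematica as well. The following Python sketch (SciPy assumed, illustrative only) compares a truncated partial sum with SciPy's Bessel function of the first kind.

```python
# Partial sums of the power series for J_n(x) versus scipy.special.jv.
from math import factorial
from scipy.special import jv

def jn_series(n, x, terms=30):
    """Partial sum of the series for J_n(x), n a nonnegative integer."""
    return sum(
        (-1) ** k / (factorial(k) * factorial(k + n)) * (x / 2) ** (2 * k + n)
        for k in range(terms)
    )

err = max(abs(jn_series(n, 2.5) - jv(n, 2.5)) for n in range(4))
print(err)  # essentially round-off
```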
The plots of the Bessel functions J0 (x), J1 (x), J2 (x) and J3 (x) are given
in Figure 3.3.3.
Figure 3.3.3: the Bessel functions J_0(x), J_1(x), J_2(x) and J_3(x) on [0, 20].
3.3.3 BESSEL’S DIFFERENTIAL EQUATION 185
it follows that we have only one solution $J_\mu(x)$ of the Bessel equation, and we do not yet have another, linearly independent solution. One of the several methods to find another, linearly independent solution of the Bessel equation, in the case when μ is an integer, is the following.
If ν is not an integer number, then we define the function
$$Y_\nu(x) = \frac{J_\nu(x) \cos(\nu\pi) - J_{-\nu}(x)}{\sin(\nu\pi)}.$$
This function, called the Bessel function of the second kind of order ν, is a solution of the Bessel equation of order ν, and it is linearly independent of the other solution $J_\nu(x)$. Next, for an integer number n, we define $Y_n(x)$ by the limiting process:
$$(3.3.37)\qquad Y_n(x) = \lim_{\substack{\nu \to n \\ \nu \text{ not integer}}} Y_\nu(x).$$
The plots of the Bessel functions Y0 (x), Y1 (x), Y2 (x) and Y3 (x) are given in
Figure 3.3.4.
Figure 3.3.4: the Bessel functions Y_0(x), Y_1(x), Y_2(x) and Y_3(x) on (0, 20].
Similarly,
$$J_{-\frac12}(x) = \sum_{j=0}^{\infty} \frac{(-1)^j}{j!\, \Gamma\!\left( j + \frac12 \right)} \left( \frac{x}{2} \right)^{2j - \frac12} = \sqrt{\frac{2}{\pi x}}\, \sum_{j=0}^{\infty} (-1)^j\, \frac{2^j}{j!\,(2j-1)!!} \left( \frac{x}{2} \right)^{2j}$$
$$= \sqrt{\frac{2}{\pi x}}\, \sum_{j=0}^{\infty} \frac{(-1)^j}{(2j)!}\, x^{2j} = \sqrt{\frac{2}{\pi x}}\, \cos x.$$
Example 3.3.8. The Aging Spring. Consider a unit mass attached to a spring with spring constant c. If the spring oscillates for a long period of time, then clearly the spring will weaken. The linear differential equation that describes the displacement y = y(t) of the attached mass at moment t is given by
$$y''(t) + c\, e^{-at}\, y(t) = 0.$$
Find the general solution of the aging spring equation in terms of the Bessel functions.
Solution. If we introduce a new variable τ by
$$\tau = \frac{2}{a} \sqrt{c}\, e^{-\frac{a}{2} t},$$
then the differential equation of the aging spring is transformed into the dif-
ferential equation
τ 2 y ′′ (τ ) + τ y ′ (τ ) + τ 2 y = 0.
We see that the last equation is the Bessel equation of order µ = 0. The
general solution of this equation is
y(τ ) = AJ0 (τ ) + BY0 (τ ).
If we go back to the original time variable t, then the general solution of the
aging spring equation is given by
$$y(t) = A\, J_0\!\left( \frac{2}{a} \sqrt{c}\, e^{-\frac{a}{2} t} \right) + B\, Y_0\!\left( \frac{2}{a} \sqrt{c}\, e^{-\frac{a}{2} t} \right).$$
We state and prove some of the most elementary and useful recursive formulas for the Bessel functions.
Theorem 3.3.7. Suppose that μ is any real number. Then for every x > 0 the following recursive formulas for the Bessel functions hold:
$$(3.3.39)\qquad \frac{d}{dx}\left( x^\mu J_\mu(x) \right) = x^\mu J_{\mu-1}(x),$$
$$(3.3.40)\qquad \frac{d}{dx}\left( x^{-\mu} J_\mu(x) \right) = -x^{-\mu} J_{\mu+1}(x).$$
Proof. To prove (3.3.39) we use the power series (3.3.34) for $J_\mu(x)$:
$$\frac{d}{dx}\left( x^\mu J_\mu(x) \right) = \frac{d}{dx} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^{2k+\mu}\, k!\, \Gamma(k+\mu+1)}\, x^{2k+2\mu}$$
$$= \sum_{k=0}^{\infty} \frac{(-1)^k (2k + 2\mu)}{2^{2k+\mu}\, k!\, \Gamma(k+\mu+1)}\, x^{2k+2\mu-1}$$
$$= x^\mu \sum_{k=0}^{\infty} \frac{(-1)^k}{2^{2k+\mu-1}\, k!\, \Gamma(k+\mu)}\, x^{2k+\mu-1} = x^\mu J_{\mu-1}(x).$$
Proof. From the Taylor expansion of the exponential function we have that
$$e^{\frac{xt}{2}} = \sum_{j=0}^{\infty} \frac{t^j}{j!} \left( \frac{x}{2} \right)^j, \qquad e^{-\frac{x}{2t}} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\, t^k} \left( \frac{x}{2} \right)^k.$$
Since both power series are absolutely convergent, they can be multiplied and the terms in the resulting double series can be added in any order:
$$e^{\frac{x}{2}\left( t - \frac{1}{t} \right)} = e^{\frac{xt}{2}}\, e^{-\frac{x}{2t}} = \left( \sum_{j=0}^{\infty} \frac{t^j}{j!} \left( \frac{x}{2} \right)^j \right) \left( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\, t^k} \left( \frac{x}{2} \right)^k \right)$$
$$= \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{j!\, k!} \left( \frac{x}{2} \right)^{j+k} t^{j-k}.$$
Collecting the terms with equal powers of t, we obtain (3.3.45). ■
A consequence of the previous theorem is the following formula, which gives
integral representation of the Bessel functions.
Theorem 3.3.9. If n is any integer and x is any positive number, then
$$J_n(x) = \frac{1}{\pi} \int_{0}^{\pi} \cos(x \sin\varphi - n\varphi)\, d\varphi.$$
Therefore
$$(3.3.46)\qquad \cos(x \sin\varphi) = J_0(x) + 2 \sum_{k=1}^{\infty} J_{2k}(x) \cos(2k\varphi),$$
and
$$(3.3.47)\qquad \sin(x \sin\varphi) = 2 \sum_{k=1}^{\infty} J_{2k-1}(x) \sin\big( (2k-1)\varphi \big),$$
Since the systems $\{1, \cos 2k\varphi,\ k = 1, 2, \ldots\}$ and $\{\sin(2k-1)\varphi,\ k = 1, 2, \ldots\}$ are orthogonal on [0, π], from (3.3.46) and (3.3.47) it follows that
$$\frac{1}{\pi} \int_{0}^{\pi} \cos(x \sin\varphi) \cos n\varphi\, d\varphi = \begin{cases} J_n(x), & n \text{ is even,} \\ 0, & n \text{ is odd,} \end{cases}$$
and
$$\frac{1}{\pi} \int_{0}^{\pi} \sin(x \sin\varphi) \sin n\varphi\, d\varphi = \begin{cases} 0, & n \text{ is even,} \\ J_n(x), & n \text{ is odd.} \end{cases}$$
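The integral representation in Theorem 3.3.9 is convenient for numerical verification. Here is an illustrative Python cross-check (SciPy assumed; the test points n = 0, 2 and x = 1.7 are arbitrary), comparing the quadrature of the integrand against SciPy's $J_n$.

```python
# Quadrature check of J_n(x) = (1/pi) * integral_0^pi cos(x sin(phi) - n phi) dphi.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def bessel_via_integral(n, x):
    val, _ = quad(lambda phi: np.cos(x * np.sin(phi) - n * phi), 0, np.pi)
    return val / np.pi

print(bessel_via_integral(2, 1.7), jv(2, 1.7))  # should agree closely
```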
The following result, whose proof can be found in the book by G. B. Folland [6], will be used to show that there are infinitely many values of λ for which there exist pairwise orthogonal solutions of (3.3.38) which satisfy the boundary conditions (3.3.49).
$$(\lambda_2^2 - \lambda_1^2) \int_{0}^{1} x\, J_\mu(\lambda_1 x)\, J_\mu(\lambda_2 x)\, dx = 0$$
and therefore
$$\int_{0}^{1} x\, J_\mu(\lambda_1 x)\, J_\mu(\lambda_2 x)\, dx = 0. \qquad\blacksquare$$
$$(3.3.51)\qquad c_k = \frac{2}{J_{\mu+1}^2(\lambda_k)} \int_{0}^{1} x f(x)\, J_\mu(\lambda_k x)\, dx.$$
$$\frac{f(x^+) + f(x^-)}{2}$$
for any point x ∈ (0, 1).
$$(3.3.52)\qquad c_k = \frac{1}{\displaystyle\int_{0}^{1} x\, J_\mu^2(\lambda_k x)\, dx} \int_{0}^{1} x f(x)\, J_\mu(\lambda_k x)\, dx.$$
Integrating the above equation on the interval [0, 1] we obtain
$$\left[ x^2 u'^2 + (\lambda_k^2 x^2 - \mu^2)\, u^2 \right]_{x=0}^{x=1} - 2\lambda_k^2 \int_{0}^{1} x\, u^2(x)\, dx = 0,$$
or
$$u'^2(1) + (\lambda_k^2 - \mu^2)\, u^2(1) = 2\lambda_k^2 \int_{0}^{1} x\, u^2(x)\, dx.$$
If we substitute the function u(x) and use the fact that $\lambda_k$ are the zeros of $J_\mu(x)$, then it follows that
$$\lambda_k^2\, J_\mu'^2(\lambda_k) = 2\lambda_k^2 \int_{0}^{1} x\, J_\mu^2(\lambda_k x)\, dx,$$
i.e.,
$$\int_{0}^{1} x\, J_\mu^2(\lambda_k x)\, dx = \frac{1}{2}\, J_\mu'^2(\lambda_k).$$
Example 3.3.10. Expand the function f(x) = x, 0 < x < 1 in the series
$$x \sim \sum_{k=1}^{\infty} c_k\, J_1(\lambda_k x),$$
$$c_k = \frac{2}{J_2^2(\lambda_k)} \int_{0}^{1} x^2 J_1(\lambda_k x)\, dx = \frac{2}{\lambda_k^3\, J_2^2(\lambda_k)} \int_{0}^{\lambda_k} t^2 J_1(t)\, dt.$$
Using the identity
$$\frac{d}{dx}\left[ x^2 J_2(x) \right] = x^2 J_1(x),$$
which is (3.3.39) with μ = 2, it follows that
$$c_k = \frac{2}{\lambda_k^3\, J_2^2(\lambda_k)} \int_{0}^{\lambda_k} \frac{d}{dt}\left[ t^2 J_2(t) \right] dt = \frac{2}{\lambda_k^3\, J_2^2(\lambda_k)} \left. t^2 J_2(t) \right|_{t=0}^{t=\lambda_k} = \frac{2}{\lambda_k\, J_2(\lambda_k)}.$$
Example 3.3.11. Show that for any p ≥ 0, a > 0 and λ > 0, we have
$$\int_{0}^{a} (a^2 - t^2)\, t^{p+1}\, J_p\!\left( \frac{\lambda}{a} t \right) dt = \frac{2 a^{p+4}}{\lambda^2}\, J_{p+2}(\lambda).$$
Solution. If we introduce a new variable x by $x = \dfrac{\lambda}{a} t$, then
$$(3.3.53)\qquad \int_{0}^{a} (a^2 - t^2)\, t^{p+1}\, J_p\!\left( \frac{\lambda}{a} t \right) dt = \frac{a^{p+4}}{\lambda^{p+2}} \int_{0}^{\lambda} x^{p+1} J_p(x)\, dx - \frac{a^{p+4}}{\lambda^{p+4}} \int_{0}^{\lambda} x^2\, x^{p+1} J_p(x)\, dx.$$
For the first integral on the right hand side of (3.3.53), from identity (3.3.39) in Theorem 3.3.7 we have
$$(3.3.54)\qquad \int_{0}^{\lambda} x^{p+1} J_p(x)\, dx = \lambda^{p+1} J_{p+1}(\lambda).$$
For the second integral on the right hand side of (3.3.53) we apply the integration by parts formula:
$$\int_{0}^{\lambda} x^2\, x^{p+1} J_p(x)\, dx = \left. x^{p+3} J_{p+1}(x) \right|_{x=0}^{x=\lambda} - 2 \int_{0}^{\lambda} x^{p+2} J_{p+1}(x)\, dx$$
$$= \lambda^{p+3} J_{p+1}(\lambda) - 2 \int_{0}^{\lambda} x^{p+2} J_{p+1}(x)\, dx \quad \text{(use of (3.3.39))}$$
$$= \lambda^{p+3} J_{p+1}(\lambda) - 2 \lambda^{p+2} J_{p+2}(\lambda).$$
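The identity of Example 3.3.11 can also be confirmed by direct quadrature. An illustrative Python sketch (SciPy assumed; the values p = 1, a = 2, λ = 3.5 are arbitrary test parameters):

```python
# Numerical check of the identity in Example 3.3.11.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

p, a, lam = 1.0, 2.0, 3.5  # arbitrary test values
lhs, _ = quad(lambda t: (a**2 - t**2) * t ** (p + 1) * jv(p, lam * t / a), 0, a)
rhs = 2 * a ** (p + 4) / lam**2 * jv(p + 2, lam)
print(lhs, rhs)  # should agree closely
```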
$$P_n'(1) = \frac{n(n+1)}{2}, \qquad P_n'(-1) = (-1)^{n-1}\, \frac{n(n+1)}{2}.$$
3. If n ≥ 1 show that $\displaystyle\int_{-1}^{1} P_n(x)\, dx = 0.$
(a) $P_n(1) = 1.$
(d) $P_{2n}(0) = (-1)^n\, \dfrac{(2n)!}{2^{2n}\, n!\, n!}.$
8. Prove that
$$\int_{x}^{1} P_n(t)\, dt = \frac{1}{2n+1} \left[ P_{n-1}(x) - P_{n+1}(x) \right].$$
(a) $J_0(0) = 1.$
$$y'' + xy = 0, \qquad x > 0$$
12. Show that the function $x J_1(x)$ is a solution of the differential equation
$$x y'' - y' + x y = 0, \qquad x > 0.$$
(b) xy ′′ + 5y ′ + xy = 0, x > 0.
16. Suppose that x and y are positive numbers and n an integer. Show
that
$$J_n(x + y) = \sum_{k=-\infty}^{\infty} J_k(x)\, J_{n-k}(y).$$
(c) $f(x) = \dfrac{1 - x^2}{8}, \quad 0 < x < 1$
in the Fourier–Bessel series
$$\sum_{k=1}^{\infty} c_k\, J_0(\lambda_k x),$$
(a) $\displaystyle\int_{0}^{1} x\, J_0(\lambda x)\, dx = \frac{1}{\lambda}\, J_1(\lambda).$
(b) $\displaystyle\int_{0}^{1} x^3 J_0(\lambda x)\, dx = \frac{\lambda^2 - 4}{\lambda^3}\, J_1(\lambda) + \frac{2}{\lambda^2}\, J_0(\lambda).$
21. If λ is any solution of the equation $J_0(x) = 0$, show that
(a) $\displaystyle\int_{0}^{1} J_1(\lambda x)\, dx = \frac{1}{\lambda}.$
(b) $\displaystyle\int_{0}^{\lambda} J_1(x)\, dx = 1.$
Figure 3.4.1: the function f(x) and the partial sums N = 1, 5, 7, 9 of its series on [-1, 1].
Mathematica has many options to work with Bessel functions. It can plot
the Bessel functions of the first and second kind. It can evaluate numerically
the zeros of the Bessel functions. It can solve some differential equations whose
solutions are expressed in terms of the Bessel functions. It can evaluate some
integrals in terms of the Bessel functions.
In the next project we explore some of these options.
202 3. STURM–LIOUVILLE PROBLEMS
Figure 3.4.2: the Bessel functions J_0(x) and J_6(x) (top) and Y_0(x) and Y_7(x) (bottom) on (0, 50].
3.4 PROJECTS USING MATHEMATICA
The command
In[] := N[BesselJZero[n, k, x0]]
gives the k-th zero of the Bessel function J_n(x) which is greater than x0.
Example 2. Write out the first 10 positive zeros of J_1(x).
Solution. Although we can specify a desired precision, let us work with the Mathematica default.
In[1] := Table[N[BesselJZero[1, k, 0]], {k, 1, 10}]
Out[1] = {3.83171, 7.01559, 10.1735, 13.3237, 16.4706, 19.6159, 22.7601, 25.9037, 29.0468, 32.1897}
Solution. This is the Bessel equation of order 1, and the solution of this equation is expressed in terms of the Bessel functions, as can be seen from the following.
In[2] := DSolve[x^2 y''[x] + x y'[x] + (x^2 - 1) y[x] == 0, y[x], x]
Out[2] = {{y[x] -> BesselJ[1, x] C[1] + BesselY[1, x] C[2]}}
CHAPTER 4
$$u_x, \quad u_y,$$
instead of
$$\frac{\partial u}{\partial x}, \qquad \frac{\partial u}{\partial y}.$$
Similarly, for the partial derivatives of the second order we will use the notation
$$u_{xx}, \quad u_{yx}, \quad u_{yy},$$
instead of
$$\frac{\partial^2 u}{\partial x^2}, \qquad \frac{\partial^2 u}{\partial y\, \partial x}, \qquad \frac{\partial^2 u}{\partial y^2}.$$
4.1 BASIC CONCEPTS AND TERMINOLOGY 205
The order of a partial differential equation is the order of the highest partial
derivative that appears in the equation.
As in ordinary differential equations, a solution of the partial differen-
tial Equation (4.1.1) of order k is a function u = u(x, y, z, . . .) ∈ C k (Ω)
which together with its partial derivatives satisfies Equation (4.1.1) for all
(x, y, . . .) ∈ Ω. The general solution is the set of all solutions.
Example 4.1.1. Find the general solution of the differential equation
$$u_{yy} = 0.$$
Solution. The given equation simply means that $u_y(x, y)$ does not depend on y. Therefore, $u_y(x, y) = A(x)$, where A(x) is an arbitrary function. Integrating the last equation with respect to y (keeping x constant), it follows that the general solution of the equation is given by
$$u(x, y) = A(x)\, y + B(x),$$
where A and B are arbitrary functions.
$$u(x, y, z) = \frac{1}{\sqrt{x^2 + y^2 + z^2}}, \qquad (x, y, z) \neq (0, 0, 0)$$
Solution. If we let $r = \sqrt{x^2 + y^2 + z^2}$, then $u = \dfrac{1}{r}$. Using the chain rule we have
$$u_x = -\frac{1}{r^2}\, r_x = -\frac{x}{r^3}.$$
206 4. PARTIAL DIFFERENTIAL EQUATIONS
Differentiating once more and using the quotient and the chain rule again we have
$$u_{xx} = -\frac{r^3 - 3r^2 r_x\, x}{r^6} = -\frac{r^2 - 3x^2}{r^5}.$$
By symmetry, we have
$$u_{yy} = -\frac{r^2 - 3y^2}{r^5}, \qquad u_{zz} = -\frac{r^2 - 3z^2}{r^5}.$$
Therefore,
$$u_{xx} + u_{yy} + u_{zz} = -\frac{r^2 - 3x^2}{r^5} - \frac{r^2 - 3y^2}{r^5} - \frac{r^2 - 3z^2}{r^5} = -\frac{3r^2 - 3(x^2 + y^2 + z^2)}{r^5} = -\frac{3r^2 - 3r^2}{r^5} = 0.$$
$$u_{xy} + u_x = 0.$$
If we set $w = u_x$, the equation becomes
$$w_y + w = 0.$$
The general solution of the last equation is easily found, and it is given by
$$u_x = w = e^{-y}\, a(x).$$
yux − xuy = 0.
uuxy = ux uy .
ux + 2uy = 0.
(a) ux = f (x),
(b) ux = f (y),
where f is a continuous function, defined on an open interval.
(a) uyy = 0.
(b) uxy = 0.
208 4. PARTIAL DIFFERENTIAL EQUATIONS
(c) $u_{xx} + u = 0.$
(d) $u_{yy} + u = 0.$
10. Show that all surfaces of revolution around the u-axis, that is, all surfaces of the form
$$u(x, y) = f(x^2 + y^2),$$
where f is some differentiable function of one variable, satisfy the partial differential equation
$$y\, u_x - x\, u_y = 0.$$
4.2 PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER 209
From Equation (4.2.1) it follows that the vector (a, b, cu) is perpendicular to
the vector n, and therefore the vector (a, b, cu) must be in the tangent plane
to the surface S. Therefore, in order to solve Equation (4.2.1) we need to
find the surface S. It is obvious that S is the union of all curves which lie
on the surface S with the property that the vector
$$(4.2.2)\qquad \big( a(x, y),\ b(x, y),\ c(x, y)\, u(x, y) \big)$$
is tangent to each curve. Let us consider one such curve C, whose parametric equations are given by
$$(C)\qquad x = x(t), \quad y = y(t), \quad u = u(t).$$
The curves C given by (4.2.4) are called the characteristic curves for Equa-
tion (4.2.1). The system (4.2.4) is called the characteristic equations for
Equation (4.2.1). There are two linearly independent solutions of the system
(4.2.4), and any solution of Equation (4.2.1) is constant on such curves. To
summarize, we have the following theorem.
Theorem 4.2.1. The general solution u = u(x, y) of the partial differential equation (4.2.1) is given by
$$f\big( F(x, y, u),\ G(x, y, u) \big) = 0,$$
$$(4.2.5)\qquad \frac{dx}{a(x, y)} = \frac{dy}{b(x, y)} = \frac{du}{c(x, y)\, u(x, y)}.$$
$$x^3 = -\frac{1}{2y^2} + c.$$
Therefore
$$c = x^3 + \frac{1}{2y^2},$$
and so the general solution of the given partial differential equation is
$$u(x, y) = f\!\left( x^3 + \frac{1}{2y^2} \right),$$
$$x^2 + y^2 = c.$$
$$u(x, y) = f(x^2 + y^2),$$
$$u(x, y) = x^2 + y^2.$$
where f is any differentiable function. Notice that the solution can also be expressed as
$$u(x, y) = y^n\, g\!\left( \frac{y}{x} \right),$$
where g is any differentiable function of a single variable.
212 4. PARTIAL DIFFERENTIAL EQUATIONS
$$(4.2.7)\qquad \frac{dx}{\sqrt{x}} = -\frac{dy}{\sqrt{y}} = \frac{du}{\left( \sqrt{x} - \sqrt{y} \right) u}.$$
$$dx = \sqrt{x}\left( \sqrt{x} - \sqrt{y} \right) \frac{du}{u}, \qquad dy = -\sqrt{y}\left( \sqrt{x} - \sqrt{y} \right) \frac{du}{u}.$$
Thus,
$$\frac{dx - dy}{\sqrt{x} - \sqrt{y}} = \left( \sqrt{x} + \sqrt{y} \right) \frac{du}{u},$$
and so
$$\frac{d(x - y)}{x - y} = \frac{du}{u}.$$
Integrating the last equation we obtain
$$\frac{u}{x - y} = c_2, \qquad \text{where } c_2 \text{ is any constant,}$$
$$(4.2.9)\qquad G(x, y) = \frac{u}{x - y}.$$
Therefore, from (4.2.7) and (4.2.8) we obtain that the general solution of Equation (4.2.6) is given by
$$f\!\left( \sqrt{x} + \sqrt{y},\ \frac{u}{x - y} \right) = 0,$$
where f is any differentiable function. The last equation implies that the general solution can be expressed as
$$u(x, y) = (x - y)\, g\!\left( \sqrt{x} + \sqrt{y} \right),$$
where g is any differentiable function of a single variable. From the given condition $u(4x, x) = \sqrt{x}$ we have $3x\, g(3\sqrt{x}) = \sqrt{x}$, and so $g(x) = 1/x$. Therefore,
$$u(x, y) = (x - y)\, \frac{1}{\sqrt{x} + \sqrt{y}} = \sqrt{x} - \sqrt{y}$$
is the required particular solution.
Now, we will describe another method for solving Equation (4.2.1). The
idea is to replace the variables x and y with new variables ξ and η and
transform the given equation into a new equation which can be solved. We
will find a transformation to new variables
(4.2.10) ξ = ξ(x, y), η = η(x, y),
which transforms Equation (4.2.1) into an equation of the form
(4.2.11) wξ + g(ξ, η)w = h(ξ, η),
where $w = w(\xi, \eta) = u\big( x(\xi, \eta),\, y(\xi, \eta) \big)$. Equation (4.2.11) is a linear ordinary differential equation of the first order (with respect to ξ), and as such it can be solved.
So, the problem is to find such functions ξ and η with the above properties.
By the chain rule we have
ux = wξ ξx + wη ηx , uy = wξ ξy + wη ηy .
Substituting the above partial derivatives into Equation (4.2.1), after some
rearrangements, we obtain
$$\big( a\xi_x + b\xi_y \big) w_\xi + \big( a\eta_x + b\eta_y \big) w_\eta = c\big( x(\xi, \eta),\, y(\xi, \eta) \big).$$
In order for the last equation to be of the form (4.2.11) we need to choose
ξ = ξ(x, y) such that
a(x, y)ξx + b(x, y)ξy = 0.
But the last equation is homogeneous and therefore along its characteristic
curves
(4.2.12) −b(x, y) dx + a(x, y) dy = 0
the function ξ = ξ(x, y) is constant. Thus,
ξ(x, y) = F (x, y),
where F (x, y) = c, c is any constant, is the general solution of Equation
(4.2.12). For the function η = η(x, y) we can choose η(x, y) = y.
214 4. PARTIAL DIFFERENTIAL EQUATIONS
Figure 4.2.1
$$u_x + y\, u_y + u = 0, \qquad y > 0,$$
for which
$$u(x, e^{2x}) = 1.$$
$$-y\, dx + dy = 0,$$
whose general solution is given by $y = c\, e^x$. If $w(\xi, \eta) = u(x(\xi, \eta), y(\xi, \eta))$, then the new variables ξ and η are given by
$$\xi = y\, e^{-x}, \qquad \eta = y.$$
Substituting into the given equation we obtain $\eta\, w_\eta + w = 0$, i.e.,
$$\frac{dw(\xi, \eta)}{w} = -\frac{d\eta}{\eta}.$$
Integrating,
$$w(\xi, \eta) = \frac{f(\xi)}{\eta},$$
$$u(x, y) = \frac{f(y e^{-x})}{y}.$$
Now, from $u(x, e^{2x}) = 1$ it follows that $f(e^x) = e^{2x}$, i.e., $f(x) = x^2$. Therefore, the particular solution of the given partial differential equation is
$$u(x, y) = y\, e^{-2x}.$$
The last two examples are only particular examples of a much more general
problem, the so-called Cauchy Problem. We will state this problem without
any further discussion.
is called the transport equation. This equation can be applied to model pollution (contaminant) or dye dispersion, in which case u(t, x) represents the density of the pollutant or the dye at time t and position x. It can also be applied to traffic flow, in which case u(t, x) represents the density of the traffic at moment t and position x. In an initial value problem for the transport equation, we seek the function u(t, x) which satisfies the transport equation and the initial condition u(0, x) = u_0(x) for some given initial density u_0(x).
Let us illustrate one of these applications with a concrete example.
Example 4.2.7. Fluid flows with constant speed v in a thin, uniform,
straight tube (cylinder) whose cross sections have constant area A. Sup-
pose that the fluid contains a pollutant whose concentration at position x
and time t is denoted by u(t, x). If there are no other sources of pollution
in the tube and there is no loss of pollutant through the walls of the tube,
derive a partial differential equation for the function u(t, x).
Solution. Consider a part of the tube between any two points x1 and x2. At moment t, the amount of the pollutant in this part is equal to
$$\int_{x_1}^{x_2} A\, u(t, x)\, dx.$$
Similarly, the amount of the pollutant that flows through a fixed position x during any time interval (t1, t2) is given by
$$\int_{t_1}^{t_2} A\, v\, u(t, x)\, dt.$$
Now we apply the mass conservation principle, which states that the total
amount of the pollutant in the section (x1 , x2 ) at time t2 is equal to the total
amount of the pollutant in the section (x1 , x2 ) at time t1 plus the amount
of contaminant that flowed through the position x1 during the time interval
(t1 , t2 ) minus the amount of pollutant that flowed through the position x2
during the time interval (t1 , t2 ):
Similarly,
$$(4.2.15)\qquad \int_{t_1}^{t_2} u(t, x_2)\, A v\, dt - \int_{t_1}^{t_2} u(t, x_1)\, A v\, dt = \int_{t_1}^{t_2} \big( u(t, x_2) - u(t, x_1) \big) A v\, dt = \int_{t_1}^{t_2} \int_{x_1}^{x_2} \partial_x \big( u(t, x)\, A v \big)\, dx\, dt.$$
If we substitute (4.2.14) and (4.2.15) into the mass conservation law (4.2.13) and interchange the order of integration, then we obtain that
$$\int_{x_1}^{x_2} \int_{t_1}^{t_2} \left[ \partial_t \big( u(t, x)\, A \big) + \partial_x \big( u(t, x)\, A v \big) \right] dt\, dx = 0.$$
Since the last equation holds for every x1, x2, t1 and t2 (under the assumption that $u_t(t, x)$ and $u_x(t, x)$ are continuous functions), we have
$$u_t + v\, u_x = 0.$$
Example 4.2.8. Find the density function u = u(t, x) which satisfies the differential equation
$$u_t + x\, u_x = 0,$$
if the initial density $u_0(x) = u(0, x)$ is given by
$$u_0(x) = e^{-(x-3)^2}.$$
Therefore
$$u(t, x) = e^{-\left( x e^{-t} - 3 \right)^2}.$$
We can use Mathematica to check that this really is the solution:
In[2] := DSolve[{D[u[t, x], t] + x D[u[t, x], x] == 0, u[0, x] == e^{-(x-3)^2}}, u[t, x], {t, x}]
Out[2] = {{u[t, x] -> e^{-(-3 + e^{-t} x)^2}}}
Figure 4.2.2: plots (a) and (b) for Example 4.2.8.
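The same verification done above with Mathematica's DSolve can be reproduced symbolically in Python (assuming SymPy; an illustrative sketch, not from the text):

```python
# Symbolic check that u = exp(-(x e^{-t} - 3)^2) solves u_t + x u_x = 0.
import sympy as sp

t, x = sp.symbols("t x")
u = sp.exp(-(x * sp.exp(-t) - 3) ** 2)
residual = sp.simplify(sp.diff(u, t) + x * sp.diff(u, x))
print(residual)  # 0
```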
4.2 PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER 219
Example 4.2.9. Solve the partial differential equation $u\, u_x + 2x\, u_y = 2x\, u^2$.
Solution. It is obvious that u ≡ 0 is a solution of the above equation. If u ≠ 0, then the characteristic equations are
$$\frac{dx}{u} = \frac{dy}{2x} = \frac{du}{2x u^2}.$$
From $2x u^2\, dx = u\, du$ we have $2x\, dx = du/u$, hence $u = c_1 e^{x^2}$. Using this solution and the other differential equation $2x\, dx = u\, dy$ we obtain $c_1 y + e^{-x^2} = c_2$. Therefore, the general solution is given implicitly by
$$f\!\left( u\, e^{-x^2},\ y\, u\, e^{-x^2} + e^{-x^2} \right) = 0,$$
where f is any differentiable function.
(c) (1 + x2 )ux + uy = 0.
(d) ux + 2xy 2 uy = 0.
(h) x2 ux + uy = xu.
3. Find the general solution u(x, y) of each of the following partial differ-
ential equations. Find the particular solution up (x, y) which satisfies
the given condition.
(b) $\dfrac{1}{x}\, u_x + \dfrac{1}{y}\, u_y = \dfrac{1}{y}, \qquad u(x, 1) = \dfrac{1}{2}(3 - x^2).$
(c) xux + (x + y)uy = u + 1, u(x, 0) = x2 .
(d) $2xy\, u_x + (x^2 + y^2)\, u_y = 0, \qquad u(x, y) = e^{x-y} \text{ if } x + y = 1.$
(f) ut + ux − u = t, u(0, x) = ex .
respectively.
Figure 4.3.1: an element of the string between x and x + h, with tensions T_1 and T_2 acting at angles α_1 and α_2 at the points A and B.
Now if we apply Newton's Second Law of motion, which states that mass times acceleration equals force, then we have
$$\rho h\, u_{tt}(x, t) = T_2 \sin\alpha_2 - T_1 \sin\alpha_1.$$
If we divide the last equation by T and use Equation (4.3.1), we obtain that
$$\frac{\rho h}{T}\, u_{tt}(x, t) = \frac{T_2 \sin\alpha_2}{T_2 \cos\alpha_2} - \frac{T_1 \sin\alpha_1}{T_1 \cos\alpha_1} = \tan\alpha_2 - \tan\alpha_1.$$
Now since
$$\tan\alpha_1 = u_x(x, t), \qquad \tan\alpha_2 = u_x(x + h, t),$$
we have
$$\frac{\rho}{T}\, u_{tt}(x, t) = \frac{u_x(x + h, t) - u_x(x, t)}{h}.$$
Letting h → 0 and setting $a^2 = \dfrac{T}{\rho}$ we obtain
$$u_{tt} = a^2 u_{xx}.$$
then the wave equation in any dimension can be written in the form
utt = a2 ∆u.
According to another of Ohm's laws, the voltage decrease across the capacitor is given by
$$(4.3.4)\qquad v = \frac{1}{C} \int i(x, t)\, dt,$$
and the voltage decrease across the inductor is
$$(4.3.5)\qquad v = L\, \frac{\partial i(x, t)}{\partial t}.$$
4.3.1 IMPORTANT EQUATIONS OF MATHEMATICAL PHYSICS 225
therefore we have
$$(4.3.11)\qquad e_{xx} = LC\, e_{tt} + (RC + GL)\, e_t + RG\, e,$$
and
$$(4.3.12)\qquad i_{xx} = LC\, i_{tt} + (RC + GL)\, i_t + RG\, i.$$
Equations (4.3.11) and (4.3.12) are the telegraphic equations for e(x, t) and i(x, t).
226 4. PARTIAL DIFFERENTIAL EQUATIONS
where the constant c is the specific heat, which is the amount of heat that it
takes to raise one unit mass by one unit temperature.
3. Conservation Law of Energy. This law states that the rate of change of
heat in the element of the rod between the points x and x + h is equal to
the sum of the rate of change of heat that flows in and the rate of change of
heat that flows out, that is,
Since we assumed that $u_{xx}$ and $u_t$ are continuous, from the mean value theorem of integral calculus we obtain that
$$(4.3.17)\qquad \big[ \gamma u_x(x + h, \tau_*) - \gamma u_x(x, \tau_*) \big] \Delta t = c\rho \big[ u(\xi_*, t + \Delta t) - u(\xi_*, t) \big] h,$$
for some $\tau_* \in (t, t + \Delta t)$ and $\xi_* \in (x, x + h)$. If we apply the mean value theorem of differential calculus to (4.3.17), then we have
$$\frac{\partial}{\partial x}\big[ \gamma u_x(\xi_1, \tau_*) \big]\, \Delta t\, h = c\rho\, u_t(\xi_*, t_1)\, h\, \Delta t,$$
for some $\xi_1 \in (x, x + h)$ and $t_1 \in (t, t + \Delta t)$. If we divide the last equation by $h\,\Delta t$, and let Δt → 0 and h → 0, we obtain that
$$\frac{\partial}{\partial x}\big( \gamma u_x \big) = c\rho\, u_t.$$
Since the rod is homogeneous, γ, c and ρ are constants, and the last equation takes the form
$$(4.3.18)\qquad u_t = a^2 u_{xx}, \qquad a^2 = \frac{\gamma}{c\rho}.$$
Equation (4.3.18) is called the one dimensional heat equation or one di-
mensional heat-conduction equation.
As in the case of the wave equation, we can consider a two dimensional
heat equation for a two dimensional plate, or three dimensional object:
ut = a2 ∆u.
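Since the book later treats finite difference methods for parabolic equations, it is worth previewing how (4.3.18) can be solved numerically. The following is a minimal illustrative sketch in Python (NumPy assumed; the grid sizes, the time horizon, and the initial data sin(πx) are arbitrary choices, not from the text), using the classical explicit scheme and comparing with the exact decaying-mode solution.

```python
# Explicit finite-difference scheme for u_t = a^2 u_xx on [0, 1] with
# u(0, x) = sin(pi x) and u = 0 at both ends; the exact solution is
# u(t, x) = exp(-a^2 pi^2 t) sin(pi x).
import numpy as np

a = 1.0
nx, dx = 51, 1.0 / 50
dt = 0.4 * dx**2 / a**2  # satisfies the stability bound dt <= dx^2 / (2 a^2)
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)
t = 0.0
while t < 0.1:
    # second-difference update of the interior points
    u[1:-1] += a**2 * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt
exact = np.exp(-a**2 * np.pi**2 * t) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(err)  # small discretization error
```

The time step is deliberately kept below the stability bound of the explicit scheme; larger steps make the iteration blow up.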
The Laplace Equation. There are many physical problems which lead to
the Laplace equation. A detailed study of this equation will be taken up in
Chapter 7.
Let us consider some applications of this equation with a few examples.
Example 4.3.4. It was shown that the temperature of a nonstationary heat
flow without the presence of heat sources satisfies the heat differential equation
ut = a2 ∆ u.
If a stationary heat process occurs (the heat state does not change with time),
then the temperature distribution is constant with time; thus it is only a
function of position. Consequently, the heat equation in this case reduces to
∆ u = 0.
This equation is called the Laplace equation.
However, if heat sources are present, the heat equation in a stationary situation takes the form
$$(4.3.19)\qquad \Delta u = -f, \qquad f = \frac{F}{k},$$
where F is the heat density of the sources, and k is the heat conductivity coefficient. The nonhomogeneous Laplace equation (4.3.19) is usually called the Poisson equation.
$$\frac{1}{r} = \frac{1}{\sqrt{x^2 + y^2}}.$$
We will show that the work w performed only by the attraction force F toward O as the particle moves from M to N(1, 0) is path independent, i.e., w does not depend on the curve along which the particle moves, but only on the initial and terminal points M and N (see Figure 4.3.2).
Figure 4.3.2: the force F at the point M(x, y), the curve c, and the points N(1, 0) and P(r, 0).
Since the magnitude of F = (f, g) is $1/\sqrt{x^2 + y^2}$, we have
$$f = f(x, y) = -\frac{x}{x^2 + y^2}, \qquad g = g(x, y) = -\frac{y}{x^2 + y^2}.$$
Let c be any continuous and piecewise smooth curve in Ω from the point M
to the point N and let
x = x(t), y = y(t), a ≤ t ≤ b,
be the parametric equations of the curve c. The work $w_c$ done to move the particle from the point M to the point N along the curve c under the force F is given by the line integral
$$(4.3.22)\qquad w_c = \int_a^b \big[ f(x(t), y(t))\, x'(t) + g(x(t), y(t))\, y'(t) \big]\, dt.$$
From (4.3.21) and (4.3.22) it follows that
$$w_c = \int_a^b \big[ f(x(t), y(t))\, x'(t) + g(x(t), y(t))\, y'(t) \big]\, dt = \int_a^b \big[ v_x\, x'(t) + v_y\, y'(t) \big]\, dt = \int_a^b \frac{d}{dt}\big( v(x(t), y(t)) \big)\, dt$$
$$= v(x(b), y(b)) - v(x(a), y(a)) = v(N) - v(M).$$
Therefore, the work wc does not depend on c (it depends only on the initial
point M (x, y) and terminal point N (1, 0)). Thus
wc = u(x, y)
for some function u(x, y).
Now, since the work $w_c$ is path independent we have
$$w_c = w_{\overset{\frown}{MP}} + w_{PN}.$$
For the work $w_{\overset{\frown}{MP}}$ along the arc $\overset{\frown}{MP}$, from (4.3.21) and the path independence of the work we have
$$w_{\overset{\frown}{MP}} = v(P) - v(M) = -\frac{1}{2} \ln(r^2) + \frac{1}{2} \ln(r^2) = 0.$$
For the work $w_{PN}$ along the line segment PN we have
$$w_{PN} = \int_{PN} f(x, y)\, dx + g(x, y)\, dy = -\int_r^1 \frac{1}{x}\, dx = \ln r = \frac{1}{2} \ln\big( x^2 + y^2 \big).$$
Therefore,
$$u(x, y) = \frac{1}{2} \ln\big( x^2 + y^2 \big).$$
2
Differentiating the above function u(x, y) twice with respect to x and y we obtain
$$u_{xx} = \frac{y^2 - x^2}{(x^2 + y^2)^2}, \qquad u_{yy} = \frac{x^2 - y^2}{(x^2 + y^2)^2}.$$
Thus,
$$u_{xx} + u_{yy} = 0.$$
We can simplify Equation (4.3.26) if we can select ξ and η such that at least
one of the coefficients A, B and C is zero.
Consider the family of functions y = y(x) defined by ξ(x, y) = c1 , where
c1 is any numerical constant. Differentiating this equation with respect to x
we have
ξx + ξy y ′ (x) = 0,
from which it follows that
ξx = −ξy y ′ (x).
If we substitute $\xi_x$ from the last equation into the expression for A in (4.3.27), it follows that
$$(4.3.28)\qquad A = \xi_y^2 \left( a\, y'^2(x) - 2b\, y'(x) + c \right).$$
Equations (4.3.28) and (4.3.29) suggest that we need to consider the nonlin-
ear ordinary differential equation
then from (4.3.28) and (4.3.29) it follows that both coefficients A and C
will be zero, and so the transformed equation (4.3.26) takes the form
$$(4.3.32)\qquad u_{\xi\eta} + \frac{\tilde{A}}{2B}\, u_\xi + \frac{\tilde{C}}{2B}\, u_\eta + \frac{d\, u}{2B} = \frac{f}{2B}.$$
Therefore,
$$A = a\xi_x^2 + 2b\xi_x\xi_y + c\xi_y^2 = \left( \sqrt{a}\, \xi_x + \sqrt{c}\, \xi_y \right)^2 = 0,$$
and
$$B = a\xi_x\eta_x + b\left( \xi_x\eta_y + \eta_x\xi_y \right) + c\xi_y\eta_y = \left( \sqrt{a}\, \xi_x + \sqrt{c}\, \xi_y \right)\left( \sqrt{a}\, \eta_x + \sqrt{c}\, \eta_y \right) = 0,$$
and so, the canonical form of the given partial differential equation (after dividing (4.3.26) by C) is
$$u_{\eta\eta} = F\left( \xi, \eta, u, u_\xi, u_\eta \right).$$
If we separate the real and imaginary parts in the above equation, we obtain
$$(4.3.33)\qquad a\varphi_x^2 + 2b\varphi_x\varphi_y + c\varphi_y^2 = a\psi_x^2 + 2b\psi_x\psi_y + c\psi_y^2,$$
and
$$(4.3.34)\qquad a\varphi_x\psi_x + b\left( \varphi_x\psi_y + \varphi_y\psi_x \right) + c\varphi_y\psi_y = 0.$$
It follows that Equations (4.3.33) and (4.3.34) can be written in the form A = C and B = 0, where A, B and C are the coefficients that appear in Equation (4.3.26). Dividing both sides of Equation (4.3.26) by A (provided that A ≠ 0) we obtain
$$(4.3.35)\qquad u_{\xi\xi} + u_{\eta\eta} = F\left( \xi, \eta, u, u_\xi, u_\eta \right).$$
Equation (4.3.35) is called the canonical form of an equation of the elliptic type.
$$y^2 u_{xx} + 2xy\, u_{xy} + x^2 u_{yy} - \frac{y^2}{x}\, u_x - \frac{x^2}{y}\, u_y = 0$$
Substituting the above partial derivatives into the given equation we obtain
$$\eta\, u_{\eta\eta} = u_\eta.$$
$$y^2 u_{xx} - 4xy\, u_{xy} + 3x^2 u_{yy} - \frac{y^2}{x}\, u_x - \frac{3x^2}{y}\, u_y = 0$$
by reducing it to its canonical form.
Solution. For this equation, b2 − ac = 16x2 y 2 − 12x2 y 2 = 4x2 y 2 > 0 for every
x ̸= 0 and y ̸= 0. Therefore, the equation is hyperbolic in R2 \ {(0, 0)}. It is
parabolic on the coordinate axes Ox and Oy. For the hyperbolic case, the
characteristic equation (4.3.30) is
x2 + y 2 = C1 , and 3x2 + y 2 = C2 ,
4.3.2 CLASSIFICATION OF LINEAR PDES OF THE SECOND ORDER 235
ξ = 3x2 + y 2 and η = x2 + y 2 .
If we substitute the above partial derivatives into the given equation we obtain
uξη = 0.
$$5 y'^2(x) - 4 y'(x) + 4 = 0,$$
$$y'(x) = \frac{2}{5} \pm \frac{4}{5}\, i.$$
The general solutions of the last two ordinary differential equations are given by
$$y - \left( \frac{2}{5} + \frac{4}{5}\, i \right) x = C_1, \qquad y - \left( \frac{2}{5} - \frac{4}{5}\, i \right) x = C_2,$$
3. Determine the domains in the Oxy plane where each of the following
equations is hyperbolic, parabolic or elliptic.
(f) $(1 + \sin x)\, u_{xx} - 2\cos x\, u_{xy} + (1 - \sin x)\, u_{yy} + \dfrac{(1 + \sin x)^2}{2\cos x}\, u_x + \dfrac{1}{2}(1 - \sin x)\, u_y = 0.$
6. Classify each of the following partial differential equations as hyper-
bolic or parabolic and find the indicated particular solution.
(b) e2x uxx − 2ex+y uxy + e2y uyy + e2x ux + e2y uy = 0, u(0, y) = e−y ,
uy (0, y) = −e−y .
(c) $u_{xx} - 2u_{xy} + u_{yy} = 4e^{3y} + \cos x, \qquad u(x, 0) = 1 + \cos x - \dfrac{5}{9}, \qquad u(0, y) = \dfrac{4}{9}\, e^{3y}.$
conditions. Boundary conditions are constraints imposed on the unknown function with respect to the space variables on the boundary of the domain, while initial conditions are constraints imposed with respect to the time variable at the initial moment.
Consider a linear partial differential equation of the second order
Neumann Conditions. With these conditions, the value of the normal derivative ∂u(t, x)/∂n on the boundary ∂Ω is specified. Symbolically we write this as
(4.4.3) ∂u(t, x)/∂n = g(t, x) on ∂Ω.
Example 1. Solve the linear partial differential equation of the first order
Solution. The solution of the given partial differential equation is obtained by the commands
In[1] := e1 = x3 ∗ D[u[x, y], x] + x2 ∗ y ∗ D[u[x, y], y] − (x + y) ∗ u[x, y] == 0;
4.5 PROJECTS USING MATHEMATICA 241
Example 2. Solve the quasilinear partial differential equation of the first order
xux (x, y) + yuy (x, y) − u(x, y) − u2 (x, y) = 0.
Solution. The solution of the given partial differential equation is obtained by the commands
In[4] := e2 = x ∗ D[u[x, y], x] + y ∗ D[u[x, y], y] − u[x, y] − u[x, y]2 == 0;
In[5] := s2 = DSolve[e2, u, {x, y}]
Out[5] = {{u → Function[{x, y}, −(e^{C[1][y/x]} x)/(−1 + e^{C[1][y/x]} x)]}}
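As a cross-check of the reconstructed Out[5], one can verify with SymPy (Python, in place of the book's Mathematica) that the returned expression satisfies the PDE for an arbitrary function; here the arbitrary function C[1] is called F, and the snippet is only a sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F = sp.Function('F')   # plays the role of the arbitrary function C[1]

# The expression returned by DSolve above:
u = -sp.exp(F(y/x))*x/(-1 + sp.exp(F(y/x))*x)

# Plug into x u_x + y u_y - u - u^2 = 0:
residual = x*sp.diff(u, x) + y*sp.diff(u, y) - u - u**2
print(sp.simplify(residual))   # 0
```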
Example 3. Solve the nonlinear partial differential equation of the first order
Solution. The solution of the given partial differential equation is obtained by the commands
In[6] := e3 = D[u[x, y], x]2 + y ∗ D[u[x, y], y]2 − u[x, y]2 == 0;
In[7] := s3 = DSolve[e3, u, {x, y}]
Out[7] = {{u → Function[{x, y}, e^{−x C[1]/√(1+C[1]²) − 2√y/√(1+C[1]²) + C[2]}]}}
Example 4. Solve the linear partial differential equation of the second order
uxx + 2uxy − 5uyy = 0.
Solution. The solution of the given partial differential equation is obtained by the commands
In[8] := e4 = D[u[x, y], {x, 2}] + 2 ∗ D[u[x, y], x, y] − 5 ∗ D[u[x, y], {y, 2}] == 0;
In[9] := DSolve [e4, u, {x, y}]
Out[9] = {{u → Function[{x, y}, C[1][−(1/2)(2 + 2√6)x + y] + C[2][−(1/2)(2 − 2√6)x + y]]}}
In[11] := Plot3D[Evaluate[u[t, x]/.%], {t, 0, 10}, {x, 0, 5}, PlotRange -> All]
The plot of the solution of the given problem is displayed in Figure 4.5.1.
Figure 4.5.1
CHAPTER 5
THE WAVE EQUATION
The purpose of this chapter is to study the one dimensional wave equation
also known as the vibrating string equation, and its higher dimensional version
244 5. THE WAVE EQUATION
Recall from Chapter 4 that the variables ξ and η are called the characteristics
of the wave equation. If u(x, t) = w(ξ, η), then using the chain rule, the wave
equation (5.1.1) is transformed into the simple equation
wξη = 0,
where F and G are any twice differentiable functions, each of a single vari-
able. Therefore, the general solution of the wave equation (5.1.1) is
Using the initial condition u(x, 0) = f (x) from (5.1.2) and setting t = 0 in
(5.1.3) we obtain
(5.1.7) aF(x) − aG(x) = ∫_0^x g(s) ds + C,
(5.1.8) F(x) = f(x)/2 + (1/2a) ∫_0^x g(s) ds + C/2,
and
G(x) = f(x)/2 − (1/2a) ∫_0^x g(s) ds − C/2.
5.1 D’ALEMBERT’S METHOD 245
(5.1.9) u(x, t) = [f(x + at) + f(x − at)]/2 + (1/2a) ∫_{x−at}^{x+at} g(s) ds.
The term
[f(x + at) + f(x − at)]/2
represents the displacement of the string due to the initial displacement alone, i.e., when g(x) = 0, while the term
(1/2a) ∫_{x−at}^{x+at} g(s) ds
is the initial velocity (impulse) when we have zero initial displacement, i.e.,
when f (x) = 0.
A function of the form f(x − at) is known in physics as a forward propagating wave with velocity a, and f(x + at) as a backward propagating wave with velocity a.
The straight lines x − at = C and x + at = C (known as characteristics) give the propagation paths along which the wave profile f(x) propagates.
Remark. It is relatively easy to show that if f ∈ C 2 (R) and g ∈ C(R), then
the C 2 function u(x, t), given by d’Alembert’s formula (5.1.9), is the unique
solution of the problem (5.1.1), (5.1.2). (See Exercise 1 of this section.)
Example 5.1.1. Find the solution of the wave equation (5.1.1) satisfying
the initial conditions
Example 5.1.2. Find the solution of the wave equation (5.1.1) satisfying
the initial conditions
u(x, 0) = 0, ut(x, 0) = sin x, −∞ < x < ∞.
Solution. Using d’Alembert’s formula (5.1.9) we obtain
u(x, t) = (1/2a) ∫_{x−at}^{x+at} sin s ds = (1/a) sin x sin at.
Example 5.1.3. Illustrate the solution of the wave equation (5.1.1) taking
a = 2, subject to the initial velocity g(x) = 0 and initial displacement f (x)
given by
f(x) = 2 − |x| for −2 ≤ x ≤ 2, and f(x) = 0 otherwise.
Solution. Figure 5.1.1 shows the behavior of the string and the propagation
of the forward and backward waves at the moments t = 0, t = 1/2, t = 1,
t = 1.5, t = 2 and t = 2.5.
Figure 5.1.1
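The snapshots in Figure 5.1.1 are easy to reproduce numerically. A minimal NumPy sketch of d'Alembert's formula for this example (g = 0, a = 2; the function names are ours):

```python
import numpy as np

def f(x):
    # Initial displacement: triangular pulse of height 2 supported on [-2, 2].
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 2, 2 - np.abs(x), 0.0)

a = 2.0  # wave speed in Example 5.1.3

def u(x, t):
    # d'Alembert's formula (5.1.9) with zero initial velocity.
    return 0.5*(f(x + a*t) + f(x - a*t))

print(float(u(0.0, 0.0)))   # 2.0  (initial peak)
print(float(u(0.0, 0.5)))   # 1.0  (two half-height pulses meet at x = 0)
print(float(u(0.0, 2.0)))   # 0.0  (the pulses have separated)
```

Evaluating u on a grid of x values at t = 0, 0.5, 1, 1.5, 2, 2.5 reproduces the six panels of the figure.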
Example 5.1.4. Using d’Alembert’s method find the solution of the wave
equation (5.1.1), subject to the initial displacement f (x) = 0 and the initial
velocity ut (x, 0) = g(x), given by
g(x) = cos (2x), −∞ < x < ∞.
Example 5.1.5. Show that if both functions f (x) and g(x) in d’Alembert’s
solution of the wave equation are odd functions, then the solution u(x, t) is
an odd function of the spatial variable x. Also, if both functions f (x) and
g(x) in d’Alembert’s solution of the wave equation are even functions, then
the solution u(x, t) is an even function of the spatial variable x.
Solution. Let us check the case when both functions are odd. The other case
is left as an exercise. So, assume that
f (−x) = −f (x), g(−x) = −g(x), x ∈ R.
Then by d’Alembert’s formula (5.1.9) it follows that
u(−x, t) = [f(−x + at) + f(−x − at)]/2 + (1/2a) ∫_{−x−at}^{−x+at} g(s) ds   ( s = −v )
= −[f(x − at) + f(x + at)]/2 − (1/2a) ∫_{x+at}^{x−at} g(−v) dv
= −[f(x − at) + f(x + at)]/2 − (1/2a) ∫_{x−at}^{x+at} g(v) dv = −u(x, t).
u(0, t) = 0, t > 0.
In order to solve this problem we will use the result of Example 5.1.5.
First, we extend the functions f, g and u(x, t) to odd functions fo, go and w(x, t), respectively, by
fo(x) = −f(−x) for −∞ < x < 0, fo(x) = f(x) for 0 < x < ∞,
go(x) = −g(−x) for −∞ < x < 0, go(x) = g(x) for 0 < x < ∞,
and
w(x, t) = −u(−x, t) for −∞ < x < 0, w(x, t) = u(x, t) for 0 < x < ∞, t > 0.
Next, we consider the initial boundary value problem
{
wtt (x, t) = a2 wxx (x, t), −∞ < x < ∞, t > 0
w(x, 0) = fo (x), wt (x, 0) = go (x), −∞ < x < ∞.
(5.1.12) w(x, t) = [fo(x + at) + fo(x − at)]/2 + (1/2a) ∫_{x−at}^{x+at} go(s) ds.
Since fo and go are odd functions,
w(0, t) = [fo(at) + fo(−at)]/2 + (1/2a) ∫_{−at}^{at} go(s) ds = 0,
and so,
w(x, t) = u(x, t), for x > 0 and t > 0.
Therefore, the solution of the initial boundary value problem (5.1.11) on the
semi-infinite interval 0 < x < ∞ is given by
(5.1.13) u(x, t) =
{ [f(x + at) − f(at − x)]/2 + (1/2a) ∫_{at−x}^{x+at} g(s) ds, 0 < x < at,
{ [f(x + at) + f(x − at)]/2 + (1/2a) ∫_{x−at}^{x+at} g(s) ds, at ≤ x < ∞.
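The reflection mechanism behind formula (5.1.13) can also be checked numerically: build the odd extension and confirm that the fixed end x = 0 never moves. A sketch with our own test profile (g = 0, a = 1):

```python
import numpy as np

a = 1.0

def f(x):
    # Test initial displacement on 0 < x < infinity (our own choice).
    return np.exp(-(x - 3.0)**2)

def fo(x):
    # Odd extension of f to the whole real line.
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, f(x), -f(-x))

def u(x, t):
    # d'Alembert solution built from the odd extension (g = 0).
    return 0.5*(fo(x + a*t) + fo(x - a*t))

# The fixed end x = 0 stays at rest for all times:
for t in [0.5, 1.0, 2.5, 4.0]:
    assert abs(float(u(0.0, t))) < 1e-12
print("u(0, t) = 0 checked")
```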
Now, we consider the following problem for the wave equation on (−∞, ∞).
{
wtt (x, t) = a2 wxx (x, t), −∞ < x < ∞, t > 0
w(x, 0) = fp (x), wt (x, 0) = gp (x), −∞ < x < ∞.
Therefore,
w(x, t) = u(x, t), 0 < x < l, t > 0,
and so, the solution u(x, t) of the wave equation on the finite interval 0 <
x < l is given by
(5.1.16) u(x, t) = [fp(x + at) + fp(x − at)]/2 + (1/2a) ∫_{x−at}^{x+at} gp(s) ds.
u(x, 0) = f (x) = 2 sin x + 4 sin 2x, ut (x, 0) = g(x) = sin x, 0 < x < π,
u(x, t) = (1/2)[2 sin(x + t) + 4 sin 2(x + t) + 2 sin(x − t) + 4 sin 2(x − t)]
+ (1/2) ∫_{x−t}^{x+t} sin s ds = 2 sin x cos t + 4 sin 2x cos 2t − (1/2)[cos(x + t) − cos(x − t)]
= 2 sin x cos t + 4 sin 2x cos 2t + sin x sin t.
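The computation above can be verified symbolically. The following SymPy sketch (Python instead of the book's Mathematica) builds the d'Alembert solution for these data and checks the wave equation and both initial conditions:

```python
import sympy as sp

x, t, s = sp.symbols('x t s')
f = lambda z: 2*sp.sin(z) + 4*sp.sin(2*z)   # initial displacement
g = sp.sin(s)                                # initial velocity

u = (f(x + t) + f(x - t))/2 + sp.integrate(g, (s, x - t, x + t))/2

assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)) == 0   # u_tt = u_xx
assert sp.simplify(u.subs(t, 0) - f(x)) == 0                    # u(x,0) = f(x)
assert sp.simplify(sp.diff(u, t).subs(t, 0) - sp.sin(x)) == 0   # u_t(x,0) = sin x
print("d'Alembert solution verified")
```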
At what instances does the string return to its initial shape u(x, 0) = 0?
Solution. By d’Alembert’s formula (5.1.16) the solution of the given initial
boundary value problem is
(5.1.17) u(x, t) = (1/2) ∫_{x−t}^{x+t} gp(s) ds.
Let G denote the antiderivative
G(x) = ∫_0^x gp(s) ds.
From the above formula for the antiderivative G and from (5.1.17) we have
(5.1.18) u(x, t) = (1/2)[G(x + t) − G(x − t)].
Now, we will find explicitly the function G. Since gp is an odd and 2π-periodic function, it follows that G is also a 2π-periodic function. Indeed,
G(x + 2π) = ∫_0^{x+2π} gp(s) ds = ∫_0^x gp(s) ds + ∫_x^{x+2π} gp(s) ds = G(x),
since the integral of the odd, 2π-periodic function gp over any interval of length 2π is zero.
Let tr > 0 be the time moments when the string returns to its initial
position u(x, 0) = 0. From (5.1.18) it follows that this will happen only
when
G(x + tr) = G(x − tr), for every x.
Since G is 2π-periodic, this happens when
x + tr = x − tr + 2nπ, n = 1, 2, . . ..
Therefore,
tr = nπ, n = 1, 2, . . .
are the moments when the string will come back to its original position.
In the next example we discuss the important notion of energy of the string
in the wave equation.
Example 5.1.9. Consider the string equation on the finite interval 0 < x < l:
utt(x, t) = a²uxx(x, t), 0 < x < l, t > 0.
u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < l
u(0, t) = u(l, t) = 0, t > 0.
(5.1.19) E(t) = (1/2) ∫_0^l [ut²(x, t) + a²ux²(x, t)] dx.
The terms
Ek(t) = (1/2) ∫_0^l ut²(x, t) dx, Ep(t) = (1/2) ∫_0^l a²ux²(x, t) dx
(5.1.20) E′(t) = ∫_0^l ut(x, t)utt(x, t) dx + a² ∫_0^l ux(x, t)uxt(x, t) dx.
Using the integration by parts formula for the second integral in Equation
(5.1.20) and the fact that utt (x, t) = a2 uxx (x, t), from (5.1.20) we obtain
E′(t) = ∫_0^l ut(x, t)utt(x, t) dx + a²[ux(l, t)ut(l, t) − ux(0, t)ut(0, t)]
− a² ∫_0^l ut(x, t)uxx(x, t) dx = a²[ut(l, t)ux(l, t) − ut(0, t)ux(0, t)].
Therefore
(5.1.21) E′(t) = a²[ut(l, t)ux(l, t) − ut(0, t)ux(0, t)].
From the given boundary conditions u(0, t) = u(l, t) = 0 for all t > 0 it
follows that
ut (0, t) = ut (l, t) = 0, t > 0,
and hence from (5.1.21) we obtain that E ′ (t) = 0. Therefore E(t) is a
constant function. In fact, we can evaluate this constant. Indeed,
E(t) = E(0) = (1/2) ∫_0^l [ut²(x, 0) + a²ux²(x, 0)] dx
= (1/2) ∫_0^l [g²(x) + a²(f′(x))²] dx.
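Conservation of energy can be observed numerically for a single standing-wave mode. The sketch below (our choice of a, l and mode) approximates E(t) by the trapezoid rule and finds the same value at two different times:

```python
import numpy as np

a, l = 2.0, 1.0
xs = np.linspace(0.0, l, 20001)

def energy(t):
    # E(t) = (1/2) * integral of u_t^2 + a^2 u_x^2 over [0, l]
    # for the mode u(x, t) = sin(pi x / l) cos(pi a t / l).
    w = np.pi*a/l
    ut = -w*np.sin(np.pi*xs/l)*np.sin(w*t)
    ux = (np.pi/l)*np.cos(np.pi*xs/l)*np.cos(w*t)
    integrand = ut**2 + a**2*ux**2
    return 0.5*np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(xs))

E0, E1 = energy(0.0), energy(0.37)
print(E0, E1)   # both approximately pi^2 a^2 / (4 l) for this mode
```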
The result in Example 5.1.9 can be used to prove the following theorem.
Theorem 5.1.1. The initial boundary value problem
utt(x, t) = a²uxx(x, t), 0 < x < l, t > 0,
u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < l
u(0, t) = u(l, t) = 0, t > 0
If E(t) is the energy of U (x, t), then by the result in Example 5.1.9 it follows
that E(t) = 0 for every t > 0. Therefore,
∫_0^l [Ut²(x, t) + a²Ux²(x, t)] dx = 0,
Thus, U (x, t) = Ut (x, t) = 0 for every 0 < x < l and every t > 0 and so,
U(x, t) = U(0, t) + ∫_0^x Us(s, t) ds = 0,
for every 0 < x < l and every t > 0. Hence, u(x, t) = v(x, t) for every
0 < x < l and every t > 0. ■
Now, if v(x, t) is the solution of problem (5.1.23) and w(x, t) is the solution
of problem (5.1.24), then u(x, t) = v(x, t) + w(x, t) will be the solution of our
nonhomogeneous problem (5.1.22).
From Case 1° it follows that the solution v(x, t) of the homogeneous problem (5.1.23) is given by
v(x, t) = [f(x + at) + f(x − at)]/2 + (1/2a) ∫_{x−at}^{x+at} g(s) ds.
Solution. Using the formula in Duhamel's principle we find that the solution of the problem is
u(x, t) = (1/2) ∫_0^t ∫_{x−(t−ν)}^{x+(t−ν)} e^{µ−ν} dµ dν = (1/2) ∫_0^t (e^{x+t−ν} − e^{x−t+ν}) e^{−ν} dν
= (1/2) ∫_0^t (e^{x+t−2ν} − e^{x−t}) dν = (1/4)e^{x+t} − (1/4)e^{x−t} − (1/2)t e^{x−t}.
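Assuming (as the inner integrand e^{µ−ν} indicates) that the forcing term is F(x, t) = e^{x−t} and the wave speed is a = 1, the closed form just obtained can be checked with SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t')
# Solution found above via Duhamel's principle:
u = sp.exp(x + t)/4 - sp.exp(x - t)/4 - t*sp.exp(x - t)/2

# u_tt - u_xx = e^(x - t), u(x, 0) = 0, u_t(x, 0) = 0:
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.exp(x - t)) == 0
assert sp.simplify(u.subs(t, 0)) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0)) == 0
print("Duhamel solution verified")
```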
ξ = x + t, η = x − t.
uxx = wξξ + 2wξη + wηη , utt = wξξ − 2wξη + wηη , uxt = wξξ − wηη .
4. u(x, 0) = 1/(1 + x²), ut(x, 0) = sin x, −∞ < x < ∞.
7. u(x, 0) = 1/(1 + x²), ut(x, 0) = e^{−x}, −∞ < x < ∞.
8. u(x, 0) = cos(πx/2), ut(x, 0) = (e^{kx} − e^{−kx})/2, where k is a constant.
When does the string for the first time return to its initial position
u(x, 0) = 0?
17. Compute the potential, kinetic and total energy of the string equation
where X(x) and T (t) are functions of single variables x and t, respectively.
Differentiating (5.2.4) with respect to x and t and substituting the partial
derivatives in Equation (5.2.1) we obtain
(5.2.5) X″(x)/X(x) = (1/a²) T″(t)/T(t).
Equation (5.2.5) holds identically for every 0 < x < l and every t > 0.
Notice that the left side of this equation is a function which depends only on
x, while the right side is a function which depends on t. Since x and t are
independent variables this can happen only if each side of
(5.2.5) is equal to the same constant λ:
X″(x)/X(x) = (1/a²) T″(t)/T(t) = λ.
From the last equation we obtain the two ordinary differential equations
and
it follows that
X(0)T (t) = X(l)T (t) = 0, t > 0.
From the last equations we obtain the boundary conditions
X(0) = X(l) = 0,
5.2 SEPARATION OF VARIABLES FOR THE WAVE EQUATION 261
each of which satisfies the wave equation (5.2.1) and the boundary conditions
(5.2.3). Since the wave equation and the boundary conditions are linear and
homogeneous, a function u(x, t) of the form
(5.2.8) u(x, t) = Σ_{n=1}^∞ un(x, t) = Σ_{n=1}^∞ (an cos(nπat/l) + bn sin(nπat/l)) sin(nπx/l)
also will satisfy the wave equation and the boundary conditions. If we assume
that the above series is convergent and that it can be differentiated term by
term with respect to t, from (5.2.8) and the initial conditions (5.2.2) we
obtain
(5.2.9) f(x) = u(x, 0) = Σ_{n=1}^∞ an sin(nπx/l), 0 < x < l
and
(5.2.10) g(x) = ut(x, 0) = Σ_{n=1}^∞ bn (nπa/l) sin(nπx/l), 0 < x < l
Using the Fourier sine series (from Chapter 1) for the functions f (x) and
g(x), from (5.2.9) and (5.2.10) we obtain
(5.2.11) an = (2/l) ∫_0^l f(x) sin(nπx/l) dx, bn = (2/(nπa)) ∫_0^l g(x) sin(nπx/l) dx, n ∈ N.
Theorem 5.2.1. Suppose that the functions f and g are of class C²[0, l] and that f″(x) and g″(x) are piecewise continuous on [0, l]. If
f(0) = f″(0) = f(l) = f″(l) = 0 and g(0) = g(l) = 0,
then the function u(x, t) given by (5.2.8), where an and bn are given by
(5.2.11), is the unique solution of the problem (5.2.1), (5.2.2), (5.2.3).
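The series solution (5.2.8) with coefficients (5.2.11) is straightforward to evaluate numerically. The sketch below (our test data f(x) = x(l − x), g = 0) computes the sine coefficients by the trapezoid rule and checks that the partial sum at t = 0 reproduces f:

```python
import numpy as np

l = 1.0
xs = np.linspace(0.0, l, 4001)
f = xs*(l - xs)          # test initial displacement with f(0) = f(l) = 0

def sine_coeff(n):
    # a_n = (2/l) * integral of f(x) sin(n pi x / l) dx  (trapezoid rule).
    g = f*np.sin(n*np.pi*xs/l)
    return (2.0/l)*np.sum(0.5*(g[1:] + g[:-1])*np.diff(xs))

# Partial sum of (5.2.8) at t = 0 should reproduce f:
N = 50
u0 = sum(sine_coeff(n)*np.sin(n*np.pi*xs/l) for n in range(1, N + 1))
err = np.max(np.abs(u0 - f))
print(err)   # small truncation/quadrature error
```

For this f the exact coefficients are an = 8l²/(n³π³) for odd n and 0 for even n, which the numerical values match.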
Now, since fp (x) is a continuous function on the whole real line R, its Fourier
series, given by
(16/π²) Σ_{n=1}^∞ (sin(nπ/2)/n²) sin(nπx/4),
converges to fp (x). Therefore, the first and second terms in (5.2.13) are
equal to
1 1
fp (x + at) and fp (x − at),
2 2
respectively, which was to be shown.
In order to find the solution to problem (5.2.14) we split the problem into
the following two problems:
vtt(x, t) = a²vxx(x, t), 0 < x < l, t > 0,
(5.2.15) v(x, 0) = f(x), vt(x, 0) = g(x), 0 < x < l
v(0, t) = v(l, t) = 0,
and
wtt(x, t) = a²wxx(x, t) + F(x, t), 0 < x < l, t > 0,
(5.2.16) w(x, 0) = 0, wt(x, 0) = 0, 0 < x < l
w(0, t) = w(l, t) = 0.
Problem (5.2.15) has been considered in Case 1° and we already know how
to solve it. Let its solution be v(x, t). If w(x, t) is the solution of problem
(5.2.16), then
u(x, t) = v(x, t) + w(x, t)
will be the solution of the given problem (5.2.14). Therefore, we need to solve
only problem (5.2.16).
There are several approaches to solving problem (5.2.16). One of them,
presented without a rigorous justification, is the following.
For each fixed t > 0, we expand the function F (x, t) in the Fourier sine
series
(5.2.17) F(x, t) = Σ_{n=1}^∞ Fn(t) sin(nπx/l), 0 < x < l,
where
(5.2.18) Fn(t) = (2/l) ∫_0^l F(x, t) sin(nπx/l) dx, n = 1, 2, . . ..
Next, again for each fixed t > 0, we expand the unknown function w(x, t) in
the Fourier sine series
(5.2.19) w(x, t) = Σ_{n=1}^∞ wn(t) sin(nπx/l), 0 < x < l,
where
(5.2.20) wn(t) = (2/l) ∫_0^l w(x, t) sin(nπx/l) dx, n = 1, 2, . . ..
If we substitute (5.2.17) and (5.2.19) into the wave equation (5.2.16) and
compare the Fourier coefficients we obtain
(5.2.22) wn″(t) + (n²π²a²/l²) wn(t) = Fn(t).
The solution of the second order, linear differential equation (5.2.22), in view
of the initial conditions (5.2.21), is given by
(5.2.23) wn(t) = (l/(nπa)) ∫_0^t Fn(s) sin((nπa/l)(t − s)) ds.
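Formula (5.2.23) is the standard variation-of-parameters solution of (5.2.22) with zero initial data; writing ω = nπa/l it reads wn(t) = (1/ω) ∫_0^t Fn(s) sin ω(t − s) ds. A SymPy sketch with the sample forcing Fn(s) = s:

```python
import sympy as sp

t, s = sp.symbols('t s')
omega = sp.symbols('omega', positive=True)   # omega = n*pi*a/l
F = s                                        # sample Fourier coefficient F_n(s)

w = sp.integrate(F*sp.sin(omega*(t - s)), (s, 0, t))/omega

# w'' + omega^2 w = F(t), w(0) = w'(0) = 0:
assert sp.simplify(sp.diff(w, t, 2) + omega**2*w - t) == 0
assert sp.simplify(w.subs(t, 0)) == 0
assert sp.simplify(sp.diff(w, t).subs(t, 0)) == 0
# In this case w(t) = t/omega^2 - sin(omega t)/omega^3:
assert sp.simplify(w - (t/omega**2 - sp.sin(omega*t)/omega**3)) == 0
print("variation of parameters verified")
```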
Take a = 4 in the wave equation and plot the displacements u(x, t) of the
string at the moments t = 0, t = 0.15, t = 0.85 and t = 1.
Figure 5.2.1
where
Fe(x, t) = (x − l)φ′′ (t) − xψ ′′ (t),
fe(x) = (x − l)φ(0) − xψ(0),
ge(x) = (x − l)φ′ (0) − xψ ′ (0).
Notice that problem (5.2.28) has homogeneous boundary conditions and it was already considered in Case 1°; therefore we know how to solve it.
Case 4°. Homogeneous Wave Equation. Neumann Boundary Conditions.
Consider the initial boundary value problem
utt(x, t) = a²uxx(x, t), 0 < x < l, t > 0,
(5.2.29) u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l
ux(0, t) = ux(l, t) = 0, t > 0.
We will solve this problem by the separation of variables method. Let the
solution u(x, t) of the above problem be of the form
where X(x) and T (t) are functions of single variables x and t, respectively.
Differentiating (5.2.30) with respect to x and t and substituting the par-
tial derivatives in the wave equation in problem (5.2.29) we obtain
(5.2.31) X″(x)/X(x) = (1/a²) T″(t)/T(t).
Equation (5.2.31) holds identically for every 0 < x < l and every t > 0.
Notice that the left side of this equation is a function which depends only on
x, while the right side is a function which depends on t. Since x and t are
independent variables this can happen only if each function in both sides of
(5.2.31) is equal to the same constant λ.
X″(x)/X(x) = (1/a²) T″(t)/T(t) = λ.
From the last equation we obtain the two ordinary differential equations
and
it follows that
X ′ (0)T (t) = X ′ (l)T (t) = 0, t > 0.
To avoid the trivial solution, from the last equations we obtain
Solving the eigenvalue problem (5.2.32), (5.2.34) (see Chapter 3), we obtain
that its eigenvalues λ = λn and the corresponding eigenfunctions Xn (x) are
and
λn = −(nπ/l)², Xn(x) = cos(nπx/l), n = 0, 1, 2, . . . , 0 < x < l.
The solution of the differential equation (5.2.33), corresponding to the above
found λn , is given by
T0 (t) = a0 + b0 t
Tn(t) = an cos(nπat/l) + bn sin(nπat/l), n = 1, 2, . . .,
where an and bn are constants which will be determined.
Therefore we obtain a sequence of functions
given by
u0 (x, t) = a0 + b0 t,
un(x, t) = (an cos(nπat/l) + bn sin(nπat/l)) cos(nπx/l), n ∈ N
each of which satisfies the wave equation and the Neumann boundary condi-
tions in problem (5.2.29). Since the given wave equation and the boundary
conditions are linear and homogeneous, a function u(x, t) of the form
(5.2.35) u(x, t) = a0 + b0 t + Σ_{n=1}^∞ (an cos(nπat/l) + bn sin(nπat/l)) cos(nπx/l)
also will satisfy the wave equation and the boundary conditions. If we assume
that the above series is convergent and it can be differentiated term by term
with respect to t, then from (5.2.35) and the initial conditions in problem
(5.2.29) we obtain
(5.2.36) f(x) = u(x, 0) = a0 + Σ_{n=1}^∞ an cos(nπx/l), 0 < x < l
and
(5.2.37) g(x) = ut(x, 0) = Σ_{n=1}^∞ bn (nπa/l) cos(nπx/l), 0 < x < l.
Using the Fourier cosine series (from Chapter 1) for the functions f (x) and
g(x), or the fact, from Chapter 2, that the eigenfunctions
1, cos(πx/l), cos(2πx/l), . . . , cos(nπx/l), . . .
are pairwise orthogonal on the interval [0, l], from (5.2.36) and (5.2.37) we
obtain
a0 = (1/l) ∫_0^l f(x) dx, an = (2/l) ∫_0^l f(x) cos(nπx/l) dx, n = 1, 2, . . .
(5.2.38)
bn = (2/(nπa)) ∫_0^l g(x) cos(nπx/l) dx, n = 1, 2, . . ..
From the wave equation and the given boundary conditions we obtain the
eigenvalue problem
X″(x) − λX(x) = 0, 0 < x < π
(5.2.39)
X′(0) = X′(π) = 0,
From the initial condition ut (x, 0) = 0, 0 < x < π, and the orthogonality
property of the eigenfunctions
{1, cos x, cos 2x, . . . , cos nx, . . .}
on the interval [0, π] we obtain bn = 0 for every n = 0, 1, 2, . . .. From the
other initial condition u(x, 0) = π 2 − x2 , 0 < x < π, and again from the
orthogonality of the eigenfunctions, we obtain
a0 = (1/π) ∫_0^π (π² − x²) dx = 2π²/3.
For n = 1, 2, . . . we have
an = (1/∫_0^π cos² nx dx) ∫_0^π (π² − x²) cos nx dx = −(2/π) ∫_0^π x² cos nx dx
= −(2/π) · (2π/n²) cos nπ = (4/n²)(−1)^{n+1}.
Therefore, the solution u(x, t) of the problem is given by
u(x, t) = 2π²/3 + Σ_{n=1}^∞ (4/n²)(−1)^{n+1} cos nt cos nx, 0 < x < π, t > 0.
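The coefficients just computed can be confirmed symbolically with SymPy (Python in place of the book's Mathematica):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f = sp.pi**2 - x**2

a0 = sp.integrate(f, (x, 0, sp.pi))/sp.pi
an = 2*sp.integrate(f*sp.cos(n*x), (x, 0, sp.pi))/sp.pi

assert sp.simplify(a0 - 2*sp.pi**2/3) == 0
assert sp.simplify(an - 4*(-1)**(n + 1)/n**2) == 0
print("a0 and a_n match the computation above")
```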
Example 5.2.4. Using the separation of variables method solve the following
problem:
utt(x, t) = a²uxx(x, t) − c²u, 0 < x < l, t > 0,
u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l
u(0, t) = u(l, t) = 0, t > 0.
a² X″(x)/X(x) − c² = T″(t)/T(t).
Setting
a² X″(x)/X(x) − c² = −λ,
i.e.,
(5.2.41) X″(x) − ((c² − λ)/a²) X(x) = 0
and
(5.2.42) T″(t)/T(t) = −λ,
X(x) = A + Bx
The term −2kut(x, t) represents the frictional force that causes the damping.
Consider the following three cases:
(a) k < πa/l; (b) k = πa/l; (c) k > πa/l.
l l l
by expanding the functions f (x), g(x) and u(x, t) in the Fourier cosine
series
f(x) = a0/2 + Σ_{n=1}^∞ an cos(nπx/l),
g(x) = a0′/2 + Σ_{n=1}^∞ an′ cos(nπx/l),
u(x, t) = A0(t)/2 + Σ_{n=1}^∞ An(t) cos(nπx/l).
u(x, 0) = sin x,
u(x, 0) = sin(x/2),
In this section we will study the two and three dimensional wave equation
on rectangular domains. We begin by considering the homogeneous wave
equation on a rectangle.
5.3.1 Homogeneous Wave Equation on a Rectangle
Consider an infinitely thin, perfectly elastic membrane stretched across a
rectangular frame. Let
where ∆ is the Laplace operator. The last equation can be written in the
form
∆W(x, y)/W(x, y) = T″(t)/(c²T(t)), (x, y) ∈ D, t > 0.
5.3.1 HOMOGENEOUS WAVE EQUATION ON A RECTANGLE 277
The above equation is possible only when both of its sides are equal to the
same constant, denoted by −λ:
T″(t)/(c²T(t)) = −λ,
∆W(x, y)/W(x, y) = −λ.
W (x, y)
From the second equation it follows that
To avoid the trivial solution, from the boundary conditions for u(x, y, t), the
following boundary conditions for the function W (x, y) are obtained:
We have already solved the eigenvalue problems (5.3.5) (see Chapter 2).
Their eigenvalues and corresponding eigenfunctions are
µ = m²π²/a², X(x) = sin(mπx/a), m = 1, 2, . . .
λ − µ = n²π²/b², Y(y) = sin(nπy/b), n = 1, 2, . . ..
λmn = m²π²/a² + n²π²/b²
and
Wmn(x, y) = sin(mπx/a) sin(nπy/b).
A general solution of Equation (5.3.3), corresponding to the above found
λmn , is given by
Tmn(t) = amn cos(√λmn ct) + bmn sin(√λmn ct).
where the coefficients amn will be found from the initial conditions for the
function u(x, y, t) and the following orthogonality property of the eigenfunctions
fm,n(x, y) ≡ sin(mπx/a) sin(nπy/b), m, n = 1, 2, . . .
on the rectangle [0, a] × [0, b]:
∫_0^a ∫_0^b fm,n(x, y) fp,q(x, y) dx dy = 0, for every (m, n) ≠ (p, q).
(5.3.7) amn = (4/ab) ∫_0^a ∫_0^b f(x, y) sin(mπx/a) sin(nπy/b) dy dx
and
(5.3.8) bmn = (4/(abcπ√(m²/a² + n²/b²))) ∫_0^a ∫_0^b g(x, y) sin(mπx/a) sin(nπy/b) dy dx.
Remark. For a given function f(x, y) on a rectangle [0, a] × [0, b], the series of the form
Σ_{m=1}^∞ Σ_{n=1}^∞ Amn sin(mπx/a) sin(nπy/b),
Σ_{m=0}^∞ Σ_{n=0}^∞ Bmn cos(mπx/a) cos(nπy/b),
where
Amn = (4/ab) ∫_0^a ∫_0^b f(x, y) sin(mπx/a) sin(nπy/b) dy dx,
Bmn = (4/ab) ∫_0^a ∫_0^b f(x, y) cos(mπx/a) cos(nπy/b) dy dx
are called the Double Fourier Sine Series and the Double Fourier Cosine Series, respectively, of the function f(x, y) on the rectangle [0, a] × [0, b].
Notice that when √10 t = π/2, i.e., when t = π/(2√10) ≈ 0.4966, the vertical displacement of the membrane from the Oxy plane is zero for the first time. The shape of the membrane at several time instances is displayed in Figure 5.3.1.
Figure 5.3.1 (the membrane at t = 0, t = 0.4966, t = 0.7 and t = 1)
5.3.2 NONHOMOGENEOUS WAVE EQUATION ON A RECTANGLE 281
As in the one dimensional case, we split problem (5.3.9) into the following
two problems:
vtt = c²(vxx + vyy), (x, y) ∈ R, t > 0,
(5.3.10) v(x, y, 0) = f(x, y), vt(x, y, 0) = g(x, y), (x, y) ∈ R,
v(x, y, t) = 0, (x, y) ∈ ∂R, t > 0,
and
wtt = c²(wxx + wyy) + F(x, y, t), (x, y) ∈ R, t > 0,
(5.3.11) w(x, y, 0) = 0, wt(x, y, 0) = 0, (x, y) ∈ R,
w(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.
Problem (5.3.10) was considered in Case 1°. Let its solution be v(x, y, t). If
w(x, y, t) is the solution of problem (5.3.11), then
u(x, y, t) = v(x, y, t) + w(x, y, t)
will be the solution of the given problem (5.3.9). So, it remains to solve problem (5.3.11).
For each fixed t > 0, we expand the function F (x, y, t) in the double
Fourier sine series
(5.3.12) F(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ Fmn(t) sin(mπx/a) sin(nπy/b), (x, y) ∈ R, t > 0,
where
(5.3.13) Fmn(t) = (4/ab) ∫_0^a ∫_0^b F(x, y, t) sin(mπx/a) sin(nπy/b) dy dx.
Next, for each fixed t > 0, we expand the unknown function w(x, y, t) in the
double Fourier sine series
(5.3.14) w(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ wmn(t) sin(mπx/a) sin(nπy/b), (x, y) ∈ R, t > 0,
where
wmn(t) = (4/ab) ∫_0^a ∫_0^b w(x, y, t) sin(mπx/a) sin(nπy/b) dy dx.
If we substitute (5.3.12) and (5.3.14) into the wave equation (5.3.11) and
compare the Fourier coefficients we obtain
wmn″(t) + c²(m²π²/a² + n²π²/b²) wmn(t) = Fmn(t).
where
S = {(x, y) : 0 < x < 1, 0 < y < 1}
is the unit square and ∂S is its boundary.
Solution. The corresponding homogeneous problem is
vtt = (1/π²)(vxx + vyy), (x, y) ∈ S, t > 0,
(5.3.17) v(x, y, 0) = sin 3πx sin πy, vt(x, y, 0) = 0, (x, y) ∈ S,
v(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,
For each fixed t > 0, expand the functions xyt and w(x, y, t) in the double
Fourier sine series
(5.3.20) xyt = Σ_{m=1}^∞ Σ_{n=1}^∞ Fmn(t) sin mπx sin nπy, (x, y) ∈ S, t > 0,
where
Fmn(t) = 4 ∫_0^1 ∫_0^1 xyt sin mπx sin nπy dy dx
(5.3.21)
= 4t (∫_0^1 x sin mπx dx)(∫_0^1 y sin nπy dy)
= 4t (−1)^m(−1)^n/(mnπ²), m, n = 1, 2, . . .,
and
(5.3.22) w(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ wmn(t) sin mπx sin nπy, (x, y) ∈ S, t > 0,
where
wmn(t) = 4 ∫_0^1 ∫_0^1 w(x, y, t) sin mπx sin nπy dy dx, m, n = 1, 2, . . ..
If we substitute (5.3.20) and (5.3.22) into the wave equation (5.3.19) and
compare the Fourier coefficients we obtain
wmn″(t) + (m² + n²) wmn(t) = 4t (−1)^{m+n}/(mnπ²).
The solution of the last equation, subject to the initial conditions (5.3.23), is
wmn(t) = (4(−1)^m(−1)^n/(mn(m² + n²)π²)) t − (4(−1)^m(−1)^n/(mn(m² + n²)^{3/2} π²)) sin(√(m² + n²) t)
and so, from (5.3.22), it follows that the solution w = w(x, y, t) is given by
(5.3.24) w = Σ_{m=1}^∞ Σ_{n=1}^∞ (4(−1)^m(−1)^n/(mn λmn² π²)) [t − sin(λmn t)/λmn] sin mπx sin nπy,
where
λmn = √(m² + n²).
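For a fixed pair (m, n) the coefficient ODE and its solution can be checked symbolically; note the minus sign in t − sin(λmn t)/λmn, which is forced by the zero initial velocity. A SymPy sketch with m = 3, n = 1:

```python
import sympy as sp

t = sp.symbols('t')
m, n = 3, 1                       # any fixed mode indices
lam2 = m**2 + n**2
lam = sp.sqrt(lam2)
A = 4*(-1)**m*(-1)**n/(m*n*sp.pi**2*lam2)

w = A*(t - sp.sin(lam*t)/lam)     # w_mn(t) as in (5.3.24)
rhs = 4*t*(-1)**(m + n)/(m*n*sp.pi**2)

# w'' + (m^2 + n^2) w = forcing, with zero initial data:
assert sp.simplify(sp.diff(w, t, 2) + lam2*w - rhs) == 0
assert w.subs(t, 0) == 0
assert sp.simplify(sp.diff(w, t).subs(t, 0)) == 0
print("w_mn verified")
```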
Hence, the solution u(x, y, t) of the given problem (5.3.16) is u(x, y, t) = v(x, y, t) + w(x, y, t), where v(x, y, t) and w(x, y, t) are given by (5.3.18) and (5.3.24), respectively.
whose boundary is ∂V .
Consider the initial boundary value problem
utt = A²(uxx + uyy + uzz), (x, y, z) ∈ V, t > 0,
u(x, y, z, 0) = f(x, y, z), (x, y, z) ∈ V,
(5.3.25)
ut(x, y, z, 0) = g(x, y, z), (x, y, z) ∈ V,
u(x, y, z, t) = 0, (x, y, z) ∈ ∂V, t > 0.
If we differentiate this function twice with respect to the variables and sub-
stitute the partial derivatives in the wave equation, after a rearrangement we
obtain
T″(t)/(A²T(t)) = X″(x)/X(x) + Y″(y)/Y(y) + Z″(z)/Z(z).
From the last equation it follows that
and
X″(x)/X(x) = −Y″(y)/Y(y) − Z″(z)/Z(z) − λ
for some constant λ.
The above equation is possible only if
X″(x)/X(x) = −µ,
i.e.,
and
−Y″(y)/Y(y) − Z″(z)/Z(z) − λ = −µ,
where µ is a constant.
The last equation can be written in the form
−Y″(y)/Y(y) = Z″(z)/Z(z) + λ − µ,
which is possible only if
−Y″(y)/Y(y) = Z″(z)/Z(z) + λ − µ = ν,
where ν is a constant.
From the last equations we obtain
and
If we solve the differential equation (5.3.27) for the above obtained λijk we have
Tijk(t) = aijk cos(A√λijk t) + bijk sin(A√λijk t).
where the coefficients aijk and bijk are determined using the initial conditions
f(x, y, z) = Σ_{i=1}^∞ Σ_{j=1}^∞ Σ_{k=1}^∞ aijk sin(iπx/a) sin(jπy/b) sin(kπz/c),
g(x, y, z) = Σ_{i=1}^∞ Σ_{j=1}^∞ Σ_{k=1}^∞ A bijk √λijk sin(iπx/a) sin(jπy/b) sin(kπz/c).
bijk = (8/(abcA√λijk)) ∫_0^a ∫_0^b ∫_0^c g(x, y, z) sin(iπx/a) sin(jπy/b) sin(kπz/c) dz dy dx.
λijk = i²π²/a² + j²π²/b² + k²π²/c², i, j, k ∈ N.
utt = (1/π²)(uxx(x, y, t) + uyy(x, y, t)), 0 < x < 1, 0 < y < 1, t > 0,
subject to the boundary conditions
2. u(x, y, 0) = sin πx sin πy, ut (x, y, 0) = sin πx, 0 < x < 1, 0 < y < 1.
11. Let R be the rectangle R = {(x, y) : 0 < x < a, 0 < y < b} and
∂R be its boundary. Solve the following two dimensional damped
membrane vibration resonance problem.
utt = c²(uxx + uyy) − 2kut + A sin ωt, (x, y) ∈ R, t > 0,
u(x, y, 0) = 0, ut(x, y, 0) = 0, (x, y) ∈ R,
u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.
D = {(x, y) : x2 + y 2 < a2 },
In order to find the solution of this boundary value problem we take the
solution u(x, y, t) to be of the form u(x, y, t) = W (x, y)T (t). With this new
function W (x, y) the given membrane problem is reduced to solving the fol-
lowing eigenvalue problem:
∆x,y W(x, y) + λW(x, y) = 0, (x, y) ∈ D,
(5.4.3) W(x, y) = 0, (x, y) ∈ ∂D,
W(x, y) is continuous for (x, y) ∈ D ∪ ∂D.
Thus λ ≥ 0.
The parameter λ cannot be zero, since otherwise, from the above we would
have
∫∫_D ∇W · ∇W dx dy = − ∫∫_D W ∆W dx dy = λ ∫∫_D W² dx dy = 0,
Since the membrane is fixed for the frame, we impose Dirichlet boundary
conditions:
The last equation is possible only when both sides are equal to the same
constant, which will be denoted by −λ2 (we already established that the
eigenvalues of the Helmholtz equation are positive):
(5.4.11) wrr + (1/r)wr + (1/r²)wφφ + λ²w(r, φ) = 0, 0 < r ≤ a, −π ≤ φ < π,
and
From the boundary conditions (5.4.6), (5.4.7), and (5.4.8) for the function
u(r, φ, t) it follows that the functions R(r) and Φ(φ) satisfy the boundary
conditions
Φ(φ) = Φ(φ + 2π), −π ≤ φ < π,
(5.4.14)
R(a) = 0, | lim_{r→0+} R(r) | < ∞.
which is possible only if both sides are equal to the same constant µ. There-
fore, in view of the conditions (5.4.14), we obtain the eigenvalue problems
and
Φ(φ) = Aφ + B.
where Jm(·) and Ym(·) are the Bessel functions of the first and second kind,
respectively. Since R(r) is bounded at r = 0, and because Bessel functions
of the second kind have a singularity at r = 0, i.e.,
lim Ym (r) = ∞,
r→0
Jm (λa) = 0.
Since each Bessel function has infinitely many positive zeroes, the last equation has infinitely many solutions, which will be denoted by zmn. Using these zeroes, the eigenvalues are λmn = zmn/a and the eigenfunctions of the problem (5.4.16) are given by
Rmn(r) = Jm(λmn r), m = 0, 1, 2, . . . ; n = 1, 2, . . ..
For the above found eigenvalues λmn , the general solution of Equation
(5.4.12) is given by
Tmn(t) = amn cos((zmn/a) t) + bmn sin((zmn/a) t),
5.4.1 THE WAVE EQUATION IN POLAR COORDINATES 293
Table 5.4.1
Display the shapes of the membrane at the time instances t = 0 and t = 0.7.
b2 A2n = [∫_0^{2π} ∫_0^1 (1 − r²)r² sin 2φ J2(z2n r) r sin 2φ dr dφ] / [∫_0^{2π} ∫_0^1 J2²(z2n r) r sin² 2φ dr dφ]
(5.4.21)
= [∫_0^1 (1 − r²)r³ J2(z2n r) dr] / [∫_0^1 r J2²(z2n r) dr].
The numbers z2n in (5.4.21) are the zeroes of the Bessel function J2 (x).
Taking a = 1, p = 2 and λ = z2n in Example 3.2.12 of Chapter 3, for
the integral in the numerator in (5.4.21) we have
(5.4.22) ∫_0^1 (1 − r²)r³ J2(z2n r) dr = (2/z2n²) J4(z2n).
∫_0^1 x Jµ²(λk x) dx = (1/2) Jµ+1²(λk).
(5.4.23) ∫_0^1 r J2²(z2n r) dr = (1/2) J3²(z2n).
The right hand side of the last equation can be simplified using the following
recurrence formula from Chapter 3 for the Bessel functions.
Jµ+1(x) + Jµ−1(x) = (2µ/x) Jµ(x).
If we take µ = 3 and x = z2n in this formula and use the fact that J2(z2n) = 0, then it follows that
z2n J4(z2n) = 6J3(z2n)
and thus
b2 A2n = 24/(z2n³ J3(z2n)).
u(r, φ, t) = 24 Σ_{n=1}^∞ (1/(z2n³ J3(z2n))) J2(z2n r) sin 2φ cos(z2n t).
Figure 5.4.1 (the membrane at t = 0 and t = 0.7)
Solution. From (5.4.20) and the orthogonality of the sine and cosine functions
we have
am Amn = bm Bmn = 0
bm Amn = 0
For m = 1 we have
b1 A1n = [∫_0^{2π} ∫_0^2 (4 − r²)r² sin²φ J1((z1n/2)r) dr dφ] / [∫_0^{2π} ∫_0^2 J1²((z1n/2)r) r sin²φ dr dφ]
= [∫_0^2 (4 − r²)r² J1((z1n/2)r) dr] / [∫_0^2 J1²((z1n/2)r) r dr].
a0 B0n = [∫_0^{2π} ∫_0^2 J0((z0n/2)r) r dr dφ] / [∫_0^{2π} ∫_0^2 J0²((z0n/2)r) r dr dφ]
= [∫_0^2 J0((z0n/2)r) r dr] / [∫_0^2 J0²((z0n/2)r) r dr] = 2/(z0n J1(z0n)).
B = {(x, y, z) ∈ R3 : x2 + y 2 + z 2 < a2 },
then µ > 0.
Proof. We use the Divergence Theorem (see Appendix E). If F = F(x, y, z)
is a twice differentiable vector field on the closed ball B, then
∫∫∫_B ∇ · F dx dy dz = ∫∫_S F · n dσ,
where ∇ is the gradient vector operator and n is the outward unit normal
vector on the sphere S. If we take F = P ∇P in the divergence formula and
use the fact that P = 0 on the boundary S we obtain that
∫∫∫_B ∇ · (P ∇P) dx dy dz = ∫∫_S P ∇P · n dσ = 0.
it follows that
∫∫∫_B ∇P · ∇P dx dy dz = − ∫∫∫_B P ∆P dx dy dz.
F (r, φ, θ) = R(r)Θ(θ)Φ(φ).
(1/R) d/dr(r² dR/dr) + λ²r² = −(1/(Θ sin θ)) d/dθ(sin θ dΘ/dθ) − (1/(Φ sin²θ)) d²Φ/dφ².
The last equation is possible only when both sides are equal to the same
constant ν.
(5.4.31) \frac{d}{dr}\Big(r^2 \frac{dR}{dr}\Big) + (\lambda^2 r^2 - \nu) R = 0,

and

-\frac{1}{\Theta \sin\theta}\frac{d}{d\theta}\Big(\sin\theta \frac{d\Theta}{d\theta}\Big) - \frac{1}{\Phi \sin^2\theta}\frac{d^2\Phi}{d\varphi^2} = \nu.
From the last equation it follows that
-\frac{\sin\theta}{\Theta}\frac{d}{d\theta}\Big(\sin\theta \frac{d\Theta}{d\theta}\Big) - \nu \sin^2\theta = \frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}.
And again, this equation is possible only when both sides are equal to the
same constant −µ:
(5.4.32) \frac{d^2\Phi}{d\varphi^2} + \mu\Phi = 0,

and

(5.4.33) \frac{d^2\Theta}{d\theta^2} + \cot\theta\,\frac{d\Theta}{d\theta} + \Big(\nu - \frac{\mu}{\sin^2\theta}\Big)\Theta = 0.
Now, we turn our attention to Equation (5.4.33). This equation has two
singular points, at \theta = 0 and \theta = \pi. If we introduce a new variable x by
x = \cos\theta, then we obtain the differential equation

(5.4.36) (1 - x^2)\frac{d^2\Theta}{dx^2} - 2x\frac{d\Theta}{dx} + \Big[\nu - \frac{m^2}{1 - x^2}\Big]\Theta(x) = 0, \quad -1 < x < 1.
5.4.2 THE WAVE EQUATION IN SPHERICAL COORDINATES
P_n^{(m)}(x) = (1 - x^2)^{m/2} \frac{d^m}{dx^m} P_n(x),
Finally, for the above obtained \nu we are ready to solve Equation (5.4.31).
This equation has a singularity at the point r = 0. First, we rewrite the
equation in the form

(5.4.39) r^2 \frac{d^2R}{dr^2} + 2r\frac{dR}{dr} + \big[\lambda^2 r^2 - n(n+1)\big] R = 0.
From the boundary condition F (a, θ, φ) = 0, given in (5.4.28), and from the
fact that R(r) remains bounded for all 0 ≤ r < a, it follows that the function
R(r) should satisfy the conditions:
Equation (5.4.41) is the Bessel equation of order n + \tfrac{1}{2} and its solution is

y(r) = A J_{n+\frac{1}{2}}(\sqrt{\lambda}\, r) + B Y_{n+\frac{1}{2}}(\sqrt{\lambda}\, r),
where J and Y are the Bessel functions of the first and second kind, re-
spectively. From the boundary conditions (5.4.40) it follows that B = 0
and

(5.4.42) J_{n+\frac{1}{2}}(\sqrt{\lambda}\, a) = 0.
If z_{nj} are the zeroes of the Bessel function J_{n+\frac{1}{2}}(x), then from (5.4.42) it
follows that the eigenvalues \lambda_{nj} are given by

(5.4.43) \lambda_{nj} = \Big(\frac{z_{nj}}{a}\Big)^2, \quad n = 0, 1, 2, \ldots;\ j = 1, 2, \ldots.
where Φm (φ), Θmn (θ), Rnj (r) and Tnj (t) are the functions given by the
formulas (5.4.35), (5.4.38), (5.4.44) and (5.4.45).
The coefficients Amnj in (5.4.46) are determined using the initial condi-
tions given in (5.4.26) and the orthogonality properties of the eigenfunctions
involved in (5.4.46).
if it is given that
1. c = a = 1, f(r, \varphi) = 1 - r^2, g(r, \varphi) = 0.
2. c = a = 1, f(r, \varphi) = 0, g(r, \varphi) = \{ 1, 0 < r < 1/2;\ 0, 1/2 < r < 1 \}.
5. c = a = 1, f(r, \varphi) = 5J_4(z_{4,1} r)\cos 4\varphi - J_2(z_{2,3} r)\sin 2\varphi,
6. c = a = 1, f(r, \varphi) = J_0(z_{0,3} r), g(r, \varphi) = 1 - r^2 (z_{mn} is the nth
zero of J_m(x)).
7. c = 1, a = 2, f (r, φ) = 0, g(r, φ) = 1.
10. Use the separation of variables to find the solution u = u(r, t) of the
following boundary value problem:

u_{tt}(r, t) = c^2\Big(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}\Big) + A, \quad 0 < r < a, \ t > 0,

u(r, 0) = 0, \quad \frac{\partial u(r, t)}{\partial t}\Big|_{t=0} = 0, \quad 0 < r < a,

u(a, t) = 0, \quad t > 0.
11. Use the separation of variables to find the solution u = u(r, t) of the
following boundary value problem:

u_{tt}(r, t) = c^2\Big(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}\Big) + A f(r, t), \quad 0 < r < a, \ t > 0,

u(r, 0) = 0, \quad \frac{\partial u(r, t)}{\partial t}\Big|_{t=0} = 0, \quad 0 < r < a,

u(a, t) = 0, \quad t > 0.
12. Find the solution u = u(r, t) of the following boundary value problem:

u_{tt}(r, t) = c^2\Big(\frac{\partial^2 u}{\partial r^2} + \frac{2}{r}\frac{\partial u}{\partial r}\Big), \quad r_1 < r < r_2, \ t > 0,

u(r, 0) = 0, \quad \frac{\partial u(r, t)}{\partial t}\Big|_{t=0} = f(r), \quad r_1 < r < r_2,

\frac{\partial u(r, t)}{\partial r}\Big|_{r=r_1} = 0, \quad \frac{\partial u(r, t)}{\partial r}\Big|_{r=r_2} = 0, \quad t > 0.
13. Find the solution u = u(r, \varphi, t) of the following boundary value problem:

u_{tt}(r, \varphi, t) = c^2\Big(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \varphi^2}\Big), \quad 0 < r < a, \ 0 \le \varphi < 2\pi, \ t > 0,

u(r, \varphi, 0) = A r \cos\varphi, \quad \frac{\partial u(r, \varphi, t)}{\partial t}\Big|_{t=0} = 0, \quad 0 < r < a, \ 0 \le \varphi < 2\pi,

\frac{\partial u(r, \varphi, t)}{\partial r}\Big|_{r=a} = 0, \quad 0 \le \varphi < 2\pi, \ t > 0.
5.5.1 LAPLACE TRANSFORM FOR THE WAVE EQUATION
(5.5.1) L\big(u(x, t)\big) = U(x, s) = \int_0^\infty u(x, t) e^{-st}\,dt.

(5.5.2) L\big(u_t(x, t)\big) = s U(x, s) - u(x, 0).

(5.5.3) L\big(u_{tt}(x, t)\big) = s^2 U(x, s) - s u(x, 0) - u_t(x, 0).

(5.5.4) L\big(u_x(x, t)\big) = \frac{d}{dx} L\big(u(x, t)\big).

(5.5.5) L\big(u_{xx}(x, t)\big) = \frac{d^2}{dx^2} L\big(u(x, t)\big) = \frac{d^2 U(x, s)}{dx^2}.
Solution. If U = U(x, s) = L\big(u(x, t)\big), then from (5.5.3), (5.5.5) and the
initial conditions we have

\frac{d^2 U}{dx^2} - s^2 U = -\cos\pi x.

Using the boundary conditions for the function u(x, t) we obtain that the
function U(x, s) satisfies the conditions

U(0, s) = L\big(u(0, t)\big) = 0, \qquad U(1, s) = L\big(u(1, t)\big) = 0.

Therefore,

u(x, t) = L^{-1}\big(U(x, s)\big) = L^{-1}\Big(\frac{\cos\pi x}{s^2 + \pi^2}\Big) = \cos\pi x\, L^{-1}\Big(\frac{1}{s^2 + \pi^2}\Big) = \frac{1}{\pi}\cos\pi x \sin\pi t.
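The inverse transform used in the last step can be confirmed with SymPy (our check, not part of the book): the forward transform of sin(πt)/π must return 1/(s² + π²).

```python
# A check (ours) of the transform pair used above:
# L[sin(pi t)/pi] = 1/(s^2 + pi^2), so U(x,s) = cos(pi x)/(s^2 + pi^2)
# indeed corresponds to u(x,t) = (1/pi) cos(pi x) sin(pi t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
expr = sp.laplace_transform(sp.sin(sp.pi * t) / sp.pi, t, s, noconds=True)
assert sp.simplify(expr - 1 / (s**2 + sp.pi**2)) == 0
print(expr)
```

The same pattern (transform, solve the ODE in x, invert) applies to the remaining examples of this section.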
Example 5.5.2. Using the Laplace transform method solve the following
boundary value problem.
u_{tt}(x, t) = u_{xx}(x, t), \quad x > 0, \ t > 0,

u(0, t) = 0, \quad \lim_{x \to \infty} u_x(x, t) = 0, \quad t > 0,

u(x, 0) = x e^{-x}, \quad u_t(x, 0) = 0, \quad x > 0.
Solution. If U = U(x, s) = L\big(u(x, t)\big), then from (5.5.3) and (5.5.5) we have

s^2 U - s u(x, 0) - u_t(x, 0) = \frac{d^2 U}{dx^2}.

The last equation in view of the initial conditions becomes

\frac{d^2 U}{dx^2} - s^2 U = -s x e^{-x},

whose general solution is

U(x, s) = -2\frac{s e^{-x}}{(s^2 - 1)^2} + x\frac{s e^{-x}}{s^2 - 1} + c_1 e^{sx} + c_2 e^{-sx}.
The boundary condition \lim_{x \to \infty} u_x(x, t) = 0 implies \lim_{x \to \infty} U_x(x, s) = 0 and so
c_1 = 0. From the other boundary condition u(0, t) = 0 we have U(0, s) =
L\big(u(0, t)\big) = 0, which implies

c_2 = \frac{2s}{(s^2 - 1)^2}.
Therefore,

U(x, s) = 2\frac{s e^{-sx}}{(s^2 - 1)^2} - 2 e^{-x}\frac{s}{(s^2 - 1)^2} + x e^{-x}\frac{s}{s^2 - 1}.

Using the linearity and translation properties of the inverse Laplace transform
we have

u(x, t) = 2 L^{-1}\Big(\frac{s e^{-sx}}{(s^2 - 1)^2}\Big) - 2 e^{-x} L^{-1}\Big(\frac{s}{(s^2 - 1)^2}\Big) + x e^{-x} L^{-1}\Big(\frac{s}{s^2 - 1}\Big)

= (t - x)\sinh(t - x) H(t - x) - t e^{-x}\sinh t + x e^{-x}\cosh t,

where

H(t) = \{ 1, if t \ge 0;\ 0, if t < 0 \}

is the Heaviside unit step function.
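As a sketch of a verification (ours, with SymPy), the closed form u = (t−x)sinh(t−x)H(t−x) − te^{−x}sinh t + xe^{−x}cosh t can be tested branch by branch: on each side of the line t = x it must satisfy the wave equation, the initial data, and the boundary condition.

```python
# Branchwise verification (a sketch, not from the book) of the closed form.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u0 = -t*sp.exp(-x)*sp.sinh(t) + x*sp.exp(-x)*sp.cosh(t)      # branch t < x
u1 = u0 + (t - x)*sp.sinh(t - x)                             # branch t > x

for u in (u0, u1):
    assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)) == 0   # u_tt = u_xx
assert sp.simplify(u0.subs(t, 0) - x*sp.exp(-x)) == 0              # u(x,0) = x e^{-x}
assert sp.simplify(sp.diff(u0, t).subs(t, 0)) == 0                 # u_t(x,0) = 0
assert sp.simplify(u1.subs(x, 0)) == 0                             # u(0,t) = 0
print("wave equation and side conditions verified on both branches")
```

Since (t−x)sinh(t−x) is a function of t−x alone, adding it across the characteristic t = x preserves the wave equation.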
Example 5.5.3. Using the Laplace transform method solve the following
boundary value problem (the telegraphic equation).

u_{tt}(x, t) = \frac{1}{\pi^2} u_{xx}(x, t), \quad 0 < x < 1, \ t > 0,

u(0, t) = \sin\pi t, \quad t > 0,

u(1, t) = 0, \quad t > 0,

u(x, 0) = 0, \quad u_t(x, 0) = 0, \quad 0 < x < 1.
Solution. If U = U(x, s) = L\big(u(x, t)\big), then from (5.5.3) and (5.5.5), in view of
the initial conditions, we have

\frac{d^2 U}{dx^2} - \pi^2 s^2 U = 0.

The solution of the last equation satisfying the transformed boundary conditions
U(0, s) = \pi/(s^2 + \pi^2) and U(1, s) = 0 is

U(x, s) = \pi\,\frac{e^{s\pi(1-x)} - e^{-s\pi(1-x)}}{\big(e^{\pi s} - e^{-\pi s}\big)(s^2 + \pi^2)}.
In order to find the inverse Laplace transform of U(x, s) we use the formula
for the inverse Laplace transform by contour integration

u(x, t) = \frac{1}{2\pi i}\int_{b - i\infty}^{b + i\infty} e^{st} U(x, s)\,ds = \frac{1}{2\pi i}\int_{b - i\infty}^{b + i\infty} F(x, s)\,ds,

where

F(x, s) = \pi e^{st}\,\frac{e^{s\pi(1-x)} - e^{-s\pi(1-x)}}{\big(e^{\pi s} - e^{-\pi s}\big)(s^2 + \pi^2)}.
(See the section on the Inverse Laplace Transform by Contour Integration in
Chapter 2.) One way to compute the above contour integral is by the Cauchy
Theorem of Residues (see Appendix F).
The singularities of the function F(x, s) are found by solving the equation

\big(e^{\pi s} - e^{-\pi s}\big)(s^2 + \pi^2) = 0,

or equivalently,

\big(e^{2\pi s} - 1\big)(s^2 + \pi^2) = 0.

The solutions of the above equation are s = \pm\pi i and s = \pm n i, n = 0, 1, 2, \ldots. We
exclude s = 0 since it is a removable singularity of the function F(x, s). Notice that
all these singularities are simple poles. Using the formula for residues (see
Appendix F) we find
\mathrm{Res}\big(F(x, s), s = \pi i\big) = \lim_{s \to \pi i}\big[(s - \pi i)F(x, s)\big] = \frac{e^{\pi t i}\sin\pi^2(1 - x)}{2i\sin\pi^2},

\mathrm{Res}\big(F(x, s), s = -\pi i\big) = \lim_{s \to -\pi i}\big[(s + \pi i)F(x, s)\big] = -\frac{e^{-\pi t i}\sin\pi^2(1 - x)}{2i\sin\pi^2},

\mathrm{Res}\big(F(x, s), s = n i\big) = \lim_{s \to n i}\big[(s - n i)F(x, s)\big] = \frac{i e^{n t i}\sin n\pi(1 - x)}{(\pi^2 - n^2)\cos n\pi},

\mathrm{Res}\big(F(x, s), s = -n i\big) = \lim_{s \to -n i}\big[(s + n i)F(x, s)\big] = -\frac{i e^{-n t i}\sin n\pi(1 - x)}{(\pi^2 - n^2)\cos n\pi}.

Therefore, by the Residue Theorem we have

u(x, t) = \mathrm{Res}\big(F(x, s), \pi i\big) + \mathrm{Res}\big(F(x, s), -\pi i\big) + \sum_{n = -\infty,\ n \ne 0}^{\infty} \mathrm{Res}\big(F(x, s), n i\big)

= \frac{e^{\pi t i}\sin\pi^2(1 - x)}{2i\sin\pi^2} - \frac{e^{-\pi t i}\sin\pi^2(1 - x)}{2i\sin\pi^2} + \sum_{n=1}^{\infty}\Big(\frac{i e^{n t i}\sin n\pi(1 - x)}{(\pi^2 - n^2)\cos n\pi} - \frac{i e^{-n t i}\sin n\pi(1 - x)}{(\pi^2 - n^2)\cos n\pi}\Big)

= \frac{\sin\pi t\,\sin\pi^2(1 - x)}{\sin\pi^2} - 2\sum_{n=1}^{\infty}\frac{\sin n t\,\sin n\pi(1 - x)}{(\pi^2 - n^2)\cos n\pi}.
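A quick symbolic check (ours, not in the book) that the series is consistent with the equation u_tt = (1/π²)u_xx: both the first term and the general term sin(nt) sin(nπ(1−x)) must satisfy the PDE.

```python
# Symbolic check (ours) that each term of the series solves u_tt = u_xx / pi^2.
import sympy as sp

x, t = sp.symbols('x t')
n = sp.symbols('n', integer=True, positive=True)

first = sp.sin(sp.pi*t) * sp.sin(sp.pi**2 * (1 - x))
term  = sp.sin(n*t) * sp.sin(n*sp.pi*(1 - x))
for u in (first, term):
    assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)/sp.pi**2) == 0
print("each term satisfies the PDE")
```

At x = 0 every sum term vanishes (sin nπ = 0), so the boundary value sin πt is carried entirely by the first term, as required.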
The Convolution Theorem for the Laplace transform can be used to solve
some boundary value problems. The next example illustrates this method.
Example 5.5.4. Find the solution of the following problem.

u_{tt}(x, t) = u_{xx}(x, t) + \sin t, \quad x > 0, \ t > 0,

u(0, t) = 0, \quad \lim_{x \to \infty} u_x(x, t) = 0, \quad t > 0,

u(x, 0) = 0, \quad u_t(x, 0) = 0, \quad x > 0.
Solution. If U = U(x, s) = L\big(u(x, t)\big), then from (5.5.3) and (5.5.5) we have

s^2 U - s u(x, 0) - u_t(x, 0) = \frac{d^2 U}{dx^2} + \frac{1}{s^2 + 1}.

The last equation, in view of the initial conditions, becomes

\frac{d^2 U}{dx^2} - s^2 U = -\frac{1}{s^2 + 1},

whose general solution is

U(x, s) = c_1 e^{sx} + c_2 e^{-sx} + \frac{1}{s^2(s^2 + 1)}.
From the condition \lim_{x \to \infty} u_x(x, t) = 0 it follows that \lim_{x \to \infty} U_x(x, s) = 0 and
so c_1 = 0. From the other boundary condition u(0, t) = 0 it follows that
U(0, s) = L\big(u(0, t)\big) = 0, which implies that

c_2 = -\frac{1}{s^2(s^2 + 1)}.
Therefore,

U(x, s) = \frac{1}{s^2 + 1}\cdot\frac{1 - e^{-sx}}{s^2}.

From the last equation, by the Convolution Theorem for the Laplace transform
we have

u(x, t) = \sin t * L^{-1}\Big(\frac{1 - e^{-sx}}{s^2}\Big) = \sin t * \big(t - (t - x)H(t - x)\big)

= \int_0^t \sin(t - y)\big(y - (y - x)H(y - x)\big)\,dy

= \int_0^t y\sin(t - y)\,dy - \int_0^t (y - x)H(y - x)\sin(t - y)\,dy

= t - \sin t - \int_0^t (y - x)H(y - x)\sin(t - y)\,dy.
For the last integral we consider two cases. If t < x, then y - x < 0 for all
0 < y < t, and so H(y - x) = 0. Thus, the integral is zero in this case. If t > x, then

\int_0^t (y - x)H(y - x)\sin(t - y)\,dy = \int_x^t (y - x)\sin(t - y)\,dy = (t - x) - \sin(t - x).

Therefore,

u(x, t) = \{ t - \sin t, if 0 < t < x;\ x - \sin t + \sin(t - x), if t > x \}.
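The two-case answer can be sanity-checked numerically (our check, not the book's): evaluate the convolution integral directly with quadrature and compare with the closed form on both sides of t = x.

```python
# Numerical check (ours) of the convolution result above.
import math
from scipy.integrate import quad

def u_quad(x, t):
    H = lambda z: 1.0 if z >= 0 else 0.0
    val, _ = quad(lambda y: math.sin(t - y)*(y - (y - x)*H(y - x)), 0, t)
    return val

# t < x branch: u = t - sin t
assert abs(u_quad(2.0, 1.0) - (1.0 - math.sin(1.0))) < 1e-8
# t > x branch: u = x - sin t + sin(t - x)
assert abs(u_quad(1.0, 2.0) - (1.0 - math.sin(2.0) + math.sin(1.0))) < 1e-8
print("closed form matches the convolution integral")
```

For t < x the solution is independent of x, which is exactly the causality of the wave equation: the boundary at x = 0 has not yet influenced the point x.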
(5.5.6) F\big(u(x, t)\big) = U(\omega, t) = \int_{-\infty}^{\infty} u(x, t) e^{-i\omega x}\,dx.

(5.5.7) F\big(u_x(x, t)\big) = i\omega U(\omega, t).

(5.5.8) F\big(u_{xx}(x, t)\big) = (i\omega)^2 U(\omega, t) = -\omega^2 U(\omega, t).

(5.5.9) u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} U(\omega, t) e^{i\omega x}\,d\omega.

(5.5.10) F\big(u_t(x, t)\big) = \frac{d}{dt} F\big(u(x, t)\big).

(5.5.11) F\big(u_{tt}(x, t)\big) = \frac{d^2}{dt^2} F\big(u(x, t)\big).
Example 5.5.5. Using the Fourier transform solve the following transport
initial value problem.

u_t(x, t) + a u_x(x, t) = 0, \quad -\infty < x < \infty, \ t > 0,

u(x, 0) = e^{-x^2}, \quad -\infty < x < \infty.

Solution. If U = U(\omega, t) = F\big(u(x, t)\big), then from (5.5.7) and (5.5.10) we
have

\frac{dU(\omega, t)}{dt} + a i\omega U(\omega, t) = 0.

The general solution of this equation (keeping \omega constant) is

U(\omega, t) = C(\omega) e^{-i\omega a t}.

From the initial condition it follows that

U(\omega, 0) = F\big(u(x, 0)\big) = F\big(e^{-x^2}\big) = \sqrt{\pi} e^{-\frac{\omega^2}{4}} (see Table B in Appendix B).

Therefore, C(\omega) = \sqrt{\pi} e^{-\frac{\omega^2}{4}} and so,

U(\omega, t) = \sqrt{\pi} e^{-\frac{\omega^2}{4}} e^{-i\omega a t}.

By the translation property of the Fourier transform, inverting gives u(x, t) = e^{-(x - at)^2}.
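The Gaussian transform pair quoted from Table B can be checked by direct quadrature (our check): the transform of e^{−x²} is real by symmetry, so it reduces to integrating e^{−x²}cos(ωx).

```python
# Check (ours) of the pair F(e^{-x^2})(w) = sqrt(pi) e^{-w^2/4}.
import math
from scipy.integrate import quad

for w in (0.0, 1.0, 2.5):
    val, _ = quad(lambda x: math.exp(-x**2)*math.cos(w*x), -math.inf, math.inf)
    assert abs(val - math.sqrt(math.pi)*math.exp(-w**2/4)) < 1e-7
print("F(e^{-x^2}) = sqrt(pi) e^{-w^2/4} confirmed numerically")
```

Multiplying this by e^{−iωat} and inverting is exactly the translation property, giving the traveling profile e^{−(x−at)²}.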
Example 5.5.6. Using the Fourier transform solve the following initial
value problem.

u_{tt}(x, t) = c^2 u_{xx}(x, t) + f(x, t), \quad -\infty < x < \infty, \ t > 0,

u(x, 0) = 0, \quad u_t(x, 0) = 0, \quad -\infty < x < \infty.
Solution. If U = U(\omega, t) = F\big(u(x, t)\big) and F = F(\omega, t) = F\big(f(x, t)\big), then
from (5.5.6), (5.5.8) and (5.5.11) the equation becomes

(5.5.12) \frac{d^2 U(\omega, t)}{dt^2} + c^2\omega^2 U(\omega, t) = F(\omega, t).

Solving this equation with the zero initial conditions U(\omega, 0) = U_t(\omega, 0) = 0 gives

U(\omega, t) = \frac{1}{c\omega}\int_0^t F(\omega, \tau)\sin c\omega(t - \tau)\,d\tau.

By the inversion formula (5.5.9),

(5.5.13) u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\Big(\int_0^t F(\omega, \tau)\frac{\sin c\omega(t - \tau)}{c\omega}\,d\tau\Big) e^{i\omega x}\,d\omega.
To complete the problem, let us recall the following properties of the Fourier
transform:

F\big(f(x - a)\big) = e^{-i\omega a} F(f)(\omega), \qquad F\big(e^{iax} f(x)\big) = F(f)(\omega - a), \qquad F\big(\chi_a(x)\big)(\omega) = \frac{2\sin a\omega}{\omega},

where \chi_a is the characteristic function of the interval (-a, a), defined by

\chi_a(x) = \{ 1, |x| \le a;\ 0, |x| > a \},
u(x, t) = \frac{1}{2\pi}\int_0^t\Big(\int_{-\infty}^{\infty} F(\omega, \tau)\frac{\sin c\omega(t - \tau)}{c\omega} e^{i\omega x}\,d\omega\Big)\,d\tau = \frac{1}{2c}\int_0^t\Big(\int_{x - c(t - \tau)}^{x + c(t - \tau)} f(s, \tau)\,ds\Big)\,d\tau.
of the telegraph equation. This equation has many other applications in fluid
mechanics, acoustics and elasticity.

If U = U(\omega, t) = F\big(u(x, t)\big), then from (5.5.6), (5.5.8), (5.5.10) and
(5.5.11) the telegraph equation becomes

(5.5.14) \frac{d^2 U}{dt^2} + 2(a + b)\frac{dU}{dt} + (4ab + c^2\omega^2) U = 0.
The solutions of the characteristic equation of the above ordinary differential
equation (keeping \omega constant) are

r_{1,2} = -(a + b) \pm \sqrt{(a - b)^2 - c^2\omega^2}.
Case 1°. If D = (a - b)^2 - c^2\omega^2 > 0, then r_1 and r_2 are real and distinct,
and so the general solution of (5.5.14) is given by

U(\omega, t) = C_1(\omega) e^{r_1 t} + C_2(\omega) e^{r_2 t}.
Notice that r_1 < 0 and r_2 < 0 in this case. From the given initial condi-
tions u(x, 0) = f(x) and u_t(x, 0) = g(x) it follows that U(\omega, 0) = F(\omega) and
U_t(\omega, 0) = G(\omega), where F = F(f) and G = F(g). Using these initial conditions
we have

C_1(\omega) = \frac{G - r_2 F}{r_1 - r_2}, \qquad C_2(\omega) = \frac{G - r_1 F}{r_2 - r_1}.

Therefore,

U(\omega, t) = \frac{e^{r_1 t} - e^{r_2 t}}{r_1 - r_2}\, G(\omega) - \frac{r_2 e^{r_1 t} - r_1 e^{r_2 t}}{r_1 - r_2}\, F(\omega).

In general, it is not very easy to find the inverse transform from the last
equation.
Case 2°. If D = (a - b)^2 - c^2\omega^2 < 0, then

U(\omega, t) = e^{-(a + b)t}\Big(C_1(\omega) e^{\sqrt{-D}\, i t} + C_2(\omega) e^{-\sqrt{-D}\, i t}\Big).
In some particular cases we can find the inverse explicitly. For example, if
a = b, then r_{1,2} = -(a + b) \pm c\omega i = -2a \pm c\omega i, and so,

U(\omega, t) = e^{-2at}\Big(C_1(\omega) e^{c\omega t i} + C_2(\omega) e^{-c\omega t i}\Big).
From the last equation, by the formula for the inverse Fourier transform and
the translation property we obtain

u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-2at}\Big[C_1(\omega) e^{c\omega t i} + C_2(\omega) e^{-c\omega t i}\Big] e^{i\omega x}\,d\omega = e^{-2at}\big(c_1(x + ct) + c_2(x - ct)\big),

where c_1(x) = F^{-1}\big(C_1(\omega)\big) and c_2(x) = F^{-1}\big(C_2(\omega)\big).
If the wave equation is considered on the interval (0, ∞), then we use the
Fourier sine or Fourier cosine transform. Let us recall from Chapter 2 a few
properties of these transforms.
(5.5.15) F_s\big(u(x, t)\big) = \int_0^{\infty} u(x, t)\sin\omega x\,dx.

(5.5.16) F_c\big(u(x, t)\big) = \int_0^{\infty} u(x, t)\cos\omega x\,dx.

(5.5.17) F_s\big(u_{xx}(x, t)\big) = -\omega^2 F_s\big(u(x, t)\big) + \omega\, u(0, t).

(5.5.18) F_c\big(u_{xx}(x, t)\big) = -\omega^2 F_c\big(u(x, t)\big) - u_x(0, t).

(5.5.19) F_s\big(u_{tt}(x, t)\big) = \frac{d^2}{dt^2} F_s\big(u(x, t)\big).

(5.5.20) F_c\big(u_{tt}(x, t)\big) = \frac{d^2}{dt^2} F_c\big(u(x, t)\big).

(5.5.21) u(x, t) = \frac{2}{\pi}\int_0^{\infty} F_s\big(u(x, t)\big)(\omega, t)\sin\omega x\,d\omega.

(5.5.22) u(x, t) = \frac{2}{\pi}\int_0^{\infty} F_c\big(u(x, t)\big)(\omega, t)\cos\omega x\,d\omega.
The choice of whether to use the Fourier sine or Fourier cosine transform
depends on the nature of the boundary conditions at zero. Let us illustrate this
remark with a few examples of a wave equation on the semi-infinite interval
(0, ∞).
Example 5.5.8. Using the Fourier cosine or Fourier sine transform solve the
following boundary value problem.

u_{tt}(x, t) = c^2 u_{xx}(x, t), \quad 0 < x < \infty, \ t > 0,

u(x, 0) = f(x), \quad u_t(x, 0) = g(x), \quad 0 < x < \infty,

u(0, t) = 0, \quad t > 0.
Since u(0, t) is prescribed, we apply the Fourier sine transform. If U = U(\omega, t) = F_s\big(u(x, t)\big), then from (5.5.17), (5.5.19) and the condition u(0, t) = 0 we obtain

(5.5.23) \frac{d^2 U(\omega, t)}{dt^2} + c^2\omega^2 U(\omega, t) = 0.
5.5.2 FOURIER TRANSFORM METHOD FOR THE WAVE EQUATION
For the first integral in the above equation, by the inversion formula (5.5.21)
we have

\frac{1}{\pi}\int_0^{\infty} F(\omega)\big[\sin(x + ct)\omega + \sin(x - ct)\omega\big]\,d\omega = \frac{1}{\pi}\int_0^{\infty} F(\omega)\sin(x + ct)\omega\,d\omega + \frac{1}{\pi}\int_0^{\infty} F(\omega)\sin(x - ct)\omega\,d\omega = \frac{1}{2} f(x + ct) + \frac{1}{2} f(x - ct).
For the second integral in the equation, again by the inversion formula (5.5.21)
we have

\frac{1}{\pi c}\int_0^{\infty} G(\omega)\frac{\cos(x - ct)\omega - \cos(x + ct)\omega}{\omega}\,d\omega = \frac{1}{\pi c}\int_0^{\infty} G(\omega)\Big(\int_{x - ct}^{x + ct}\sin\omega s\,ds\Big)\,d\omega

= \frac{1}{\pi c}\int_{x - ct}^{x + ct}\Big(\int_0^{\infty} G(\omega)\sin\omega s\,d\omega\Big)\,ds = \frac{1}{2c}\int_{x - ct}^{x + ct} g(s)\,ds.
If x - ct < 0, then ct - x > 0 and, working as above and using the facts that
\sin(x - ct)\omega = -\sin(ct - x)\omega and \cos(x - ct)\omega = \cos(ct - x)\omega, we obtain

u(x, t) = \frac{f(x + ct) - f(ct - x)}{2} + \frac{1}{2c}\int_{ct - x}^{x + ct} g(s)\,ds.
Therefore,

u(x, t) = \frac{f(x + ct) + f(|x - ct|)\,\mathrm{sign}(x - ct)}{2} + \frac{1}{2c}\int_{|x - ct|}^{x + ct} g(s)\,ds.
Example 5.5.9. Using the Fourier cosine or Fourier sine transform solve the
following boundary value problem.

u_{tt}(x, t) = c^2 u_{xx}(x, t), \quad 0 < x < \infty, \ t > 0,

u(x, 0) = f(x), \quad u_t(x, 0) = g(x), \quad 0 < x < \infty,

u_x(0, t) = 0, \quad t > 0.
Since u_x(0, t) is prescribed, we apply the Fourier cosine transform. With U = U(\omega, t) = F_c\big(u(x, t)\big), from (5.5.18) and (5.5.20) we obtain

(5.5.24) \frac{d^2 U(\omega, t)}{dt^2} + c^2\omega^2 U(\omega, t) = 0,

whose solution, with U(\omega, 0) = F(\omega) = F_c(f) and U_t(\omega, 0) = G(\omega) = F_c(g), is

U(\omega, t) = F(\omega)\cos c\omega t + G(\omega)\frac{\sin c\omega t}{c\omega}.
By the inversion formula (5.5.22) it follows that

u(x, t) = \frac{2}{\pi}\int_0^{\infty}\Big(F(\omega)\cos c\omega t + G(\omega)\frac{\sin c\omega t}{c\omega}\Big)\cos\omega x\,d\omega

= \frac{2}{\pi}\int_0^{\infty} F(\omega)\cos c\omega t\cos\omega x\,d\omega + \frac{2}{\pi}\int_0^{\infty} G(\omega)\frac{\sin c\omega t\cos\omega x}{c\omega}\,d\omega

= \frac{1}{\pi}\int_0^{\infty} F(\omega)\big[\cos(x + ct)\omega + \cos(x - ct)\omega\big]\,d\omega + \frac{1}{\pi c}\int_0^{\infty} G(\omega)\frac{\sin(x + ct)\omega - \sin(x - ct)\omega}{\omega}\,d\omega.
For the first integral in the above equation, by the inversion formula (5.5.22)
we have

\frac{1}{\pi}\int_0^{\infty} F(\omega)\big[\cos(x + ct)\omega + \cos(x - ct)\omega\big]\,d\omega = \frac{1}{\pi}\int_0^{\infty} F(\omega)\cos(x + ct)\omega\,d\omega + \frac{1}{\pi}\int_0^{\infty} F(\omega)\cos(x - ct)\omega\,d\omega = \frac{1}{2} f(x + ct) + \frac{1}{2} f(x - ct).
For the second integral in the equation, again by the inversion formula (5.5.22)
we have

\frac{1}{\pi c}\int_0^{\infty} G(\omega)\frac{\sin(x + ct)\omega - \sin(x - ct)\omega}{\omega}\,d\omega = \frac{1}{\pi c}\int_0^{\infty} G(\omega)\Big(\int_{x - ct}^{x + ct}\cos\omega s\,ds\Big)\,d\omega

= \frac{1}{\pi c}\int_{x - ct}^{x + ct}\Big(\int_0^{\infty} G(\omega)\cos\omega s\,d\omega\Big)\,ds = \frac{1}{2c}\int_{x - ct}^{x + ct} g(s)\,ds.
Therefore,

u(x, t) = \frac{f(x + ct) + f(|x - ct|)}{2} + \frac{1}{2c}\Big(\int_0^{x + ct} g(s)\,ds - \mathrm{sign}(x - ct)\int_0^{|x - ct|} g(s)\,ds\Big).
In Problems 1–9, using the Laplace transform solve the indicated initial
boundary value problem on the interval (0, ∞), subject to the given condi-
tions.
1. ut (x, t) + 2ux (x, t) = 0, x > 0, t > 0, u(x, 0) = 3, u(0, t) = 5.
6. u_{tt}(x, t) = c^2 u_{xx}(x, t), x > 0, t > 0, u(0, t) = f(t), \lim_{x \to \infty} u(x, t) = 0,
t > 0, u(x, 0) = 0, u_t(x, 0) = 0, x > 0.

7. u_{tt}(x, t) = u_{xx}(x, t), x > 0, t > 0, u(0, t) = 0, \lim_{x \to \infty} u(x, t) = 0, t > 0,
u(x, 0) = x e^{-x}, u_t(x, 0) = 0, x > 0.

9. u_{tt}(x, t) = u_{xx}(x, t), x > 0, t > 0, u(0, t) = \sin t, t > 0, u(x, 0) = 0,
u_t(x, 0) = 0, x > 0.
In Problems 10–15, use the Laplace transform to solve the initial boundary
value problem on the indicated interval, subject to the given conditions.
10. u_{tt}(x, t) = c^2 u_{xx}(x, t) + k c^2 \sin\frac{\pi x}{a}, 0 < x < a, t > 0, u(x, 0) =
u_t(x, 0) = 0, u(0, t) = u(a, t) = 0, t > 0.
11. u_{tt}(x, t) = u_{xx}(x, t), 0 < x < 1, t > 0, u(x, 0) = 0, u_t(x, 0) = 0,
u(0, t) = 0, u(1, t) = 0.
12. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, with initial conditions
u(x, 0) = 0, ut (x, 0) = sin πx, and boundary conditions u(0, t) = 0,
ux (1, t) = 1.
[Hint: Expand \frac{1}{1 + e^{-2s}} in a geometric series.]
13. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, with initial conditions
u(x, 0) = 0, ut (x, 0) = 1, and boundary conditions u(0, t) = 0,
u(1, t) = 0.
14. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, with initial conditions
u(x, 0) = sin πx, ut (x, 0) = − sin πx and boundary conditions
u(0, t) = u(1, t) = 0.
15. tux (x, t) + ut (x, t) = 0, −∞ < x < ∞, t > 0 subject to the initial
condition u(x, 0) = f (x).
16. ux (x, t) + 3ut (x, t) = 0, −∞ < x < ∞, t > 0 subject to the initial
condition u(x, 0) = f (x).
18. utt (x, t) + ut (x, t) = −u(x, t), −∞ < x < ∞, t > 0 subject to the
initial conditions u(x, 0) = f (x) and ut (x, 0) = g(x).
21. utt (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0 subject to the initial
conditions u(x, 0) = 0, ut (x, 0) = 0 and the boundary condition
ux (0, t) = f (t), 0 < t < ∞.
[Hint: Use the Fourier cosine transform.]
22. utt (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0 subject to the ini-
tial conditions u(x, 0) = 0, ut (x, 0) = 0 and the boundary condition
u(0, t) = 0, 0 < t < ∞.
[Hint: Use the Fourier sine transform.]
23. utt (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0 subject to
the initial conditions u(x, 0) = 0, ut (x, 0) = 0 and the condition
ux (0, t) = 0, 0 < t < ∞.
[Hint: Use the Fourier cosine transform.]
24. u_{tt}(x, t) = c^2 u_{xx}(x, t) + 2a u_t(x, t) + b^2 u(x, t), 0 < x < \infty, t > 0, sub-
ject to the initial conditions u(x, 0) = u_t(x, 0) = 0, 0 < x < \infty, and
the boundary conditions u(0, t) = f(t), t > 0 and \lim_{x \to \infty} |u(x, t)| < \infty,
t > 0.
[Hint: Use the Fourier sine transform.]
25. u_{tt}(x, t) = c^2 u_{xx}(x, t), 0 < x < \infty, t > 0, subject to the initial
conditions u(x, 0) = u_t(x, 0) = 0, 0 < x < \infty, and the boundary
conditions u_x(0, t) - k u(0, t) = f(t), \lim_{x \to \infty} |u(x, t)| = 0, t > 0.
[Hint: Use the Fourier cosine transform.]
5.6 PROJECTS USING MATHEMATICA
Figure 5.6.1 (snapshots of the solution at t = 0, 2, 4, 6)
Project 5.6.2. Use the Laplace transform to solve the wave equation

u_{tt}(x, t) = u_{xx}(x, t), \quad 0 < x < 1, \ t > 0,

u(x, 0) = \sin\pi x, \quad u_t(x, 0) = -\sin\pi x, \quad 0 < x < 1,

u(0, t) = \sin t, \quad u_t(0, t) = 0, \quad t > 0.

Applying the Laplace transform in t gives

\frac{d^2 U(x, s)}{dx^2} - s^2 U(x, s) + s f(x) + g(x) = 0.
Figure 5.6.2 (snapshots of the solution at t = 0, 2, 3, 5)
Mathematica generates the plots for the solutions u(r, φ) of these prob-
lems, displayed in Figure 5.4.1 and Figure 5.4.2.
and the solution w(r, \varphi, t) of the problem found in Example 5.4.2 is given
by

w(r, \varphi, t) = 128\sum_{n=1}^{\infty} \frac{J_1(z_{1n} r/2)}{z_{1n}^3 J_2(z_{1n})}\sin\varphi\,\cos\frac{z_{1n} t}{2} + 4\sum_{n=1}^{\infty} \frac{J_0(z_{0n} r/2)}{z_{0n}^2 J_1(z_{0n})}\sin\frac{z_{0n} t}{2}.

In the above sums, z_{mn} (m = 0, 1, 2) denotes the nth zero of the Bessel
function J_m(\cdot).
First generate a list of zeroes of the mth Bessel function.

In[1] := Bz[m_, n_] := N[BesselJZero[m, n]];
(the nth positive zero of Jm(·))

Next define the Nth partial sums of the solutions u1(r, φ, t) and u2(r, φ, t).

In[2] := u1[r_, φ_, t_, N_] := 24 Sum[(BesselJ[2, Bz[2, n] r])/((Bz[2, n])^3
BesselJ[3, Bz[2, n]]) Sin[2 φ] Cos[Bz[2, n] t], {n, 1, N}];

In[3] := u2[r_, φ_, t_, N_] := 128 Sum[(BesselJ[1, Bz[1, n] r/2])
/((Bz[1, n])^3 BesselJ[2, Bz[1, n]]) Sin[φ] Cos[Bz[1, n] t/2], {n, 1, N}]
+ 4 Sum[(BesselJ[0, Bz[0, n] r/2])/((Bz[0, n])^2 BesselJ[1, Bz[0, n]])
Sin[Bz[0, n] t/2], {n, 1, N}];

In[4] := ParametricPlot3D[{r Cos[φ], r Sin[φ], u1[r, φ, 1, 3]}, {φ, 0, 2 Pi},
{r, 0, 1}, Ticks -> {{-1, 0, 1}, {-1, 0, 1}, {-.26, 0, 0.26}}, RegionFunction
-> Function[{r, φ, u}, r <= 1], BoxRatios -> Automatic,
AxesLabel -> {x, y, u}];

In[5] := ParametricPlot3D[{r Cos[φ], r Sin[φ], u2[r, φ, 1, 3]}, {φ, 0, 2 Pi},
{r, 0, 1}, Ticks -> {{-1, 0, 1}, {-1, 0, 1}, {-.26, 0, 0.26}}, RegionFunction
-> Function[{r, φ, u}, r <= 1], BoxRatios -> Automatic,
AxesLabel -> {x, y, u}];
CHAPTER 6

THE HEAT EQUATION

The purpose of this chapter is to study the one-dimensional heat equation,
also known as the diffusion equation, and its higher-dimensional versions.
In the first section we will find the fundamental solution of the initial value
problem for the heat equation. In the next sections we will apply the separa-
tion of variables method for constructing the solution of the one- and higher-
dimensional heat equation in rectangular, polar and spherical coordinates. In
the last section of this chapter we will apply the Laplace and Fourier trans-
forms to solve the heat equation.
As for the wave equation, we define the notion of energy E(t) of the heat
equation (6.1.1) by

(6.1.4) E(t) = \frac{1}{2}\int_{-\infty}^{\infty} u^2(x, t)\,dx.
6.1 FUNDAMENTAL SOLUTION OF THE HEAT EQUATION
Let v(x, t) and w(x, t) be two solutions of the problem (6.1.1), (6.1.2)
and assume that both v and w satisfy the boundary condition (6.1.3). If
u = v − w, then u satisfies the heat equation (6.1.1) and the initial and
boundary conditions
u(x, 0) = 0, \quad -\infty < x < \infty, \qquad \lim_{x \to \pm\infty} u_x(x, t) = 0, \quad t > 0.
From (6.1.4), using the heat equation (6.1.1) and the above boundary and
initial conditions, by the integration by parts formula we obtain that
E'(t) = \int_{-\infty}^{\infty} u(x, t) u_t(x, t)\,dx = c^2\int_{-\infty}^{\infty} u_{xx}(x, t) u(x, t)\,dx

= c^2\, u(x, t) u_x(x, t)\Big|_{x = -\infty}^{x = \infty} - c^2\int_{-\infty}^{\infty} u_x^2(x, t)\,dx = -c^2\int_{-\infty}^{\infty} u_x^2(x, t)\,dx \le 0.
Solution. From Theorem 6.1.1 it follows that the solution of the given prob-
lem is

u(x, t) = G(x, t) * f(x) = \frac{1}{\sqrt{\pi t}}\int_{-\infty}^{\infty} e^{-s^2} e^{-\frac{(x - s)^2}{t}}\,ds

= \frac{e^{-\frac{x^2}{t}}}{\sqrt{\pi t}}\int_{-\infty}^{\infty} e^{-\frac{(t + 1)s^2}{t} + \frac{2xs}{t}}\,ds = \frac{e^{-\frac{x^2}{t}} e^{\frac{x^2}{t(t + 1)}}}{\sqrt{\pi t}}\int_{-\infty}^{\infty} e^{-\frac{t + 1}{t}\big(s - \frac{x}{t + 1}\big)^2}\,ds = \frac{e^{-\frac{x^2}{1 + t}}}{\sqrt{1 + t}}.
The plots of the heat distribution at several time instances t are displayed in
Figure 6.1.1.
Figure 6.1.1 (snapshots of the heat distribution at t = 0, 1, 5, 10)
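The closed form can be checked symbolically (our verification, not in the book): with the kernel (πt)^{−1/2}e^{−x²/t} used above, the underlying equation is u_t = u_xx/4, and the Gaussian e^{−x²/(1+t)}/√(1+t) satisfies it with the initial data e^{−x²}.

```python
# Symbolic check (ours) of the closed-form solution above.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = sp.exp(-x**2/(1 + t))/sp.sqrt(1 + t)
assert sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)/4) == 0   # u_t = u_xx/4
assert sp.simplify(u.subs(t, 0) - sp.exp(-x**2)) == 0         # u(x,0) = e^{-x^2}
print("u_t = u_xx/4 and u(x,0) = e^{-x^2} verified")
```

The spreading of the Gaussian (variance growing linearly in t) is exactly what the plots in Figure 6.1.1 display.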
First, let us recall the following result from Chapter 2. If g(x) is a contin-
uous function on R which vanishes outside a bounded interval, then
(6.1.10) \int_{-\infty}^{\infty} g(x)\,\delta_a(x)\,dx = g(a),
For the first term in Equation (6.1.11), using the fact that the Green function
G(x - s, t - \tau) satisfies the heat equation, we have

\int_0^t\int_{-\infty}^{\infty} G_t(x - s, t - \tau) F(s, \tau)\,ds\,d\tau = c^2\int_0^t \frac{\partial^2}{\partial x^2}\Big(\int_{-\infty}^{\infty} G(x - s, t - \tau) F(s, \tau)\,ds\Big)\,d\tau,

and for the second term we have

\lim_{\tau \to t}\int_{-\infty}^{\infty} G(x - s, t - \tau) F(s, \tau)\,ds = F(x, t).

Thus, the function w(x, t) defined by (6.1.9) satisfies the heat equation
in (6.1.8). It remains to show that w(x, t) satisfies the initial condition in
(6.1.8). Ignoring some details we have

\lim_{t \to 0} w(x, t) = \lim_{t \to 0}\int_0^t\int_{-\infty}^{\infty} G(x - s, t - \tau) F(s, \tau)\,ds\,d\tau = 0. ■
As in d'Alembert's method for the wave equation, we will convert the initial
boundary value problem (6.1.12) into an initial value problem on the whole
real line (-\infty, \infty). In d'Alembert's method for the wave equation we have
used either the odd or the even extension of the function u(x, t). For the heat
equation let us work with an odd extension. Let \tilde{f}(x) be the odd extension
of the initial function f(x).

Now consider the following initial value problem on (-\infty, \infty):

(6.1.13) v_t(x, t) = c^2 v_{xx}(x, t), \quad -\infty < x < \infty, \ t > 0; \qquad v(x, 0) = \tilde{f}(x), \quad -\infty < x < \infty.
u(x, t) = v(x, t) = \int_{-\infty}^{\infty} G(x - s, t)\,\tilde{f}(s)\,ds

= \frac{1}{\sqrt{4\pi c^2 t}}\int_{-\infty}^{0} e^{-\frac{(x - s)^2}{4c^2 t}}\,\tilde{f}(s)\,ds + \frac{1}{\sqrt{4\pi c^2 t}}\int_0^{\infty} e^{-\frac{(x - s)^2}{4c^2 t}}\,f(s)\,ds

= -\frac{1}{\sqrt{4\pi c^2 t}}\int_0^{\infty} e^{-\frac{(x + \tau)^2}{4c^2 t}}\,f(\tau)\,d\tau + \frac{1}{\sqrt{4\pi c^2 t}}\int_0^{\infty} e^{-\frac{(x - s)^2}{4c^2 t}}\,f(s)\,ds

= \frac{1}{\sqrt{4\pi c^2 t}}\int_0^{\infty}\Big(e^{-\frac{(x - s)^2}{4c^2 t}} - e^{-\frac{(x + s)^2}{4c^2 t}}\Big) f(s)\,ds.
Remark. The heat equation on a bounded interval will be solved in the next
section using the separation of variables method and in the following section
of this chapter by the Laplace and Fourier transforms.
2. Show that the fundamental solution G(x, t) satisfies the following.
(c) G(x, t) → 0 as t → ∞.
Show that
We will find a nontrivial solution u(x, t) of the above initial boundary value
problem of the form
where X(x) and T (t) are functions of single variables x and t, respectively.
Differentiating (6.2.4) with respect to x and t and substituting the partial
derivatives in Equation (6.2.1) we obtain
(6.2.5) \frac{X''(x)}{X(x)} = \frac{1}{c^2}\frac{T'(t)}{T(t)}.
Equation (6.2.5) holds identically for every 0 < x < l and every t > 0. Since
x and t are independent variables, Equation (6.2.5) is possible only if each
function on both sides is equal to the same constant λ:
\frac{X''(x)}{X(x)} = \frac{1}{c^2}\frac{T'(t)}{T(t)} = \lambda
and
also will satisfy the heat equation and the boundary conditions. If we assume
that the above series is convergent, from (6.2.9) and the initial condition
(6.2.2) we obtain
(6.2.10) f(x) = u(x, 0) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi x}{l}, \quad 0 < x < l.
Theorem 6.2.1. Suppose that the function f is continuous and its derivative
f ′ is a piecewise continuous function on [0, l]. If f (0) = f (l) = 0, then
the function u(x, t) given in (6.2.8), where the coefficients an are given by
(6.2.10), is the unique solution of the problem defined by (6.2.1), (6.2.2),
(6.2.3).
Solution. For the coefficients an , applying Equation (6.2.11) and using the
integration by parts formula we have
a_n = \frac{2}{\pi}\int_0^{\pi} f(x)\sin nx\,dx = \frac{2}{\pi}\int_0^{\pi} x(\pi - x)\sin nx\,dx

= 2\int_0^{\pi} x\sin nx\,dx - \frac{2}{\pi}\int_0^{\pi} x^2\sin nx\,dx = \frac{4}{\pi}\cdot\frac{1 - (-1)^n}{n^3}.
u(x, t) = \frac{8}{\pi}\sum_{n=1}^{\infty} \frac{1}{(2n - 1)^3} e^{-(2n - 1)^2 c^2 t}\sin(2n - 1)x.
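The coefficients just computed are easy to confirm numerically (our check): a_n = (2/π)∫₀^π x(π−x) sin nx dx should equal 8/(πn³) for odd n and 0 for even n.

```python
# Numerical check (ours) of the Fourier sine coefficients above.
import math
from scipy.integrate import quad

for n in range(1, 7):
    val, _ = quad(lambda x: x*(math.pi - x)*math.sin(n*x), 0, math.pi)
    a_n = 2/math.pi*val
    expected = 4/math.pi*(1 - (-1)**n)/n**3
    assert abs(a_n - expected) < 1e-8
print("sine coefficients confirmed")
```

The rapid n⁻³ decay of the coefficients, combined with the factor e^{−(2n−1)²c²t}, is why a few terms of the series already give an excellent approximation for t > 0.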
6.2 SEPARATION OF VARIABLES METHOD FOR THE HEAT EQUATION

In order to find the solution of problem (6.2.12) we split the problem into
the following two problems:

(6.2.13) v_t(x, t) = c^2 v_{xx}(x, t), \ 0 < x < l, \ t > 0; \quad v(x, 0) = f(x), \ 0 < x < l; \quad v(0, t) = v(l, t) = 0,

and

(6.2.14) w_t(x, t) = c^2 w_{xx}(x, t) + F(x, t), \ 0 < x < l, \ t > 0; \quad w(x, 0) = 0, \ 0 < x < l; \quad w(0, t) = w(l, t) = 0.
Problem (6.2.13) was considered in Case 1° and it has been solved. Let
v(x, t) be its solution. If w(x, t) is the solution of problem (6.2.14), then
u(x, t) = v(x, t) + w(x, t) will be the solution of the given problem (6.2.12). So, the remaining problem
to be solved is (6.2.14).
In order to solve problem (6.2.14) we proceed exactly as in the nonho-
mogeneous wave equation of Chapter 5. For each fixed t > 0, expand the
nonhomogeneous term F (x, t) in the Fourier sine series
(6.2.15) F(x, t) = \sum_{n=1}^{\infty} F_n(t)\sin\frac{n\pi x}{l}, \quad 0 < x < l,

where

(6.2.16) F_n(t) = \frac{2}{l}\int_0^l F(\xi, t)\sin\frac{n\pi\xi}{l}\,d\xi, \quad n = 1, 2, \ldots.
Next, again for each fixed t > 0, expand the unknown function w(x, t) in
the Fourier sine series
(6.2.17) w(x, t) = \sum_{n=1}^{\infty} w_n(t)\sin\frac{n\pi x}{l}, \quad 0 < x < l,

where

(6.2.18) w_n(t) = \frac{2}{l}\int_0^l w(\xi, t)\sin\frac{n\pi\xi}{l}\,d\xi, \quad n = 1, 2, \ldots.

From the initial condition in (6.2.14) it follows that

(6.2.19) w_n(0) = 0.
If we substitute (6.2.15) and (6.2.17) into the heat equation (6.2.14) and
compare the Fourier coefficients we obtain
(6.2.20) w_n'(t) + \frac{n^2\pi^2 c^2}{l^2}\,w_n(t) = F_n(t).
Solving (6.2.20) subject to the initial condition (6.2.19) gives

(6.2.21) w_n(t) = \int_0^t F_n(\tau)\, e^{-\frac{c^2 n^2\pi^2}{l^2}(t - \tau)}\,d\tau.

Substituting (6.2.21) into (6.2.17) and interchanging the order of summation
and integration, we obtain

(6.2.22) w(x, t) = \int_0^t\int_0^l G(x, \xi, t - \tau) F(\xi, \tau)\,d\xi\,d\tau,

where

(6.2.23) G(x, \xi, t - \tau) = \frac{2}{l}\sum_{n=1}^{\infty} e^{-\frac{c^2 n^2\pi^2}{l^2}(t - \tau)}\sin\frac{n\pi}{l}x\,\sin\frac{n\pi}{l}\xi
is called the Green function of the heat equation on the interval [0, l].
Example 6.2.2. Solve the initial boundary value problem for the heat equa-
tion

u_t(x, t) = c^2 u_{xx}(x, t) + xt, \quad 0 < x < \pi, \ t > 0,

u(x, 0) = x(\pi - x), \quad 0 < x < \pi,

u(0, t) = u(\pi, t) = 0, \quad t > 0.
For c = 1 plot the heat distribution u(x, t) at several time instances.
Solution. The corresponding homogeneous heat equation was solved in Ex-
ample 6.2.1 and its solution is given by

(6.2.25) v(x, t) = \frac{8}{\pi}\sum_{n=1}^{\infty} \frac{1}{(2n - 1)^3} e^{-(2n - 1)^2 c^2 t}\sin(2n - 1)x.
For each fixed t > 0, expand the function xt and the unknown function
w(x, t) in the Fourier sine series on the interval (0, \pi):

xt = \sum_{n=1}^{\infty} F_n(t)\sin nx, \qquad w(x, t) = \sum_{n=1}^{\infty} w_n(t)\sin nx,

where the coefficients F_n(t) and w_n(t) in these expansions are given by

F_n(t) = \frac{2}{\pi}\int_0^{\pi} xt\sin nx\,dx = -\frac{2t\cos n\pi}{n},

w_n(t) = \int_0^t F_n(\tau)\, e^{-c^2 n^2(t - \tau)}\,d\tau = -\frac{2\cos n\pi}{n}\int_0^t \tau e^{-c^2 n^2(t - \tau)}\,d\tau = -\frac{2\cos n\pi}{c^4 n^5}\Big[-1 + e^{-c^2 n^2 t} + c^2 n^2 t\Big].

Thus,

(6.2.26) w(x, t) = -\frac{2}{c^4}\sum_{n=1}^{\infty} \frac{\cos n\pi}{n^5}\Big[-1 + e^{-c^2 n^2 t} + c^2 n^2 t\Big]\sin nx,

and so, the solution u(x, t) of the problem is given by u(x, t) = v(x, t) +
w(x, t), where v and w are given by (6.2.25) and (6.2.26), respectively.
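The closed form for w_n(t) can be validated by direct quadrature (our check, with c = 1): integrate F_n(τ)e^{−c²n²(t−τ)} numerically and compare.

```python
# Numerical check (ours) of the coefficients w_n(t) above, with c = 1.
import math
from scipy.integrate import quad

c, t = 1.0, 0.7
for n in range(1, 5):
    Fn = lambda tau: -2*tau*math.cos(n*math.pi)/n
    val, _ = quad(lambda tau: Fn(tau)*math.exp(-c**2*n**2*(t - tau)), 0, t)
    closed = -2*math.cos(n*math.pi)/(c**4*n**5) * (-1 + math.exp(-c**2*n**2*t) + c**2*n**2*t)
    assert abs(val - closed) < 1e-8
print("w_n closed form confirmed")
```

Note that w_n(0) = 0 automatically, as required by the initial condition (6.2.19).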
For c = 1 the plots of u(x, t) at 4 instances are displayed in Figure 6.2.1.
Figure 6.2.1 (snapshots of u(x, t) at t = 0, 0.5, 0.6, 0.8)
If we introduce a new function \tilde{w}(x, t), we obtain problem (6.2.29), where

\tilde{F}(x, t) = (x - l)\alpha'(t) - x\beta'(t), \qquad \tilde{f}(x) = (x - l)\alpha(0) - x\beta(0).
Notice that problem (6.2.29) has homogeneous boundary conditions and it
was considered in Case 1°, and therefore we know how to solve it.

4°. Homogeneous Heat Equation. Neumann Boundary Conditions. Consider
the initial boundary value problem

(6.2.30) u_t(x, t) = c^2 u_{xx}(x, t), \ 0 < x < l, \ t > 0; \quad u(x, 0) = f(x), \ 0 < x < l; \quad u_x(0, t) = u_x(l, t) = 0, \ t > 0.
We will solve this problem by the separation of variables method. Let the
solution u(x, t) (not identically zero) of the above problem be of the form
where X(x) and T (t) are functions of single variables x and t, respectively.
\frac{X''(x)}{X(x)} = \frac{1}{c^2}\frac{T'(t)}{T(t)} = \lambda
for some constant \lambda. From the last equation we obtain the two ordinary
differential equations

X''(x) - \lambda X(x) = 0 \quad and \quad T'(t) - \lambda c^2 T(t) = 0.

From the boundary conditions u_x(0, t) = u_x(l, t) = 0 it follows that

X'(0) T(t) = X'(l) T(t) = 0, \quad t > 0.

From the last equations we obtain X'(0) = X'(l) = 0, and solving the resulting
eigenvalue problem gives
\lambda_n = -\Big(\frac{n\pi}{l}\Big)^2, \quad X_n(x) = \cos\frac{n\pi x}{l}, \quad n = 0, 1, 2, \ldots, \ 0 < x < l.
also will satisfy the heat equation and the boundary conditions. If we assume
that the above series is convergent, from (6.2.35) and the initial condition in
problem (6.2.30) we obtain
(6.2.36) f(x) = u(x, 0) = \sum_{n=0}^{\infty} a_n\cos\frac{n\pi x}{l}, \quad 0 < x < l.
Using the Fourier cosine series (from Chapter 1) for the function f(x), or
the fact, from Chapter 2, that the eigenfunctions

1, \ \cos\frac{\pi x}{l}, \ \cos\frac{2\pi x}{l}, \ \cos\frac{3\pi x}{l}, \ \ldots, \ \cos\frac{n\pi x}{l}, \ \ldots
are pairwise orthogonal on the interval [0, l], from (6.2.36) we obtain that

(6.2.37) a_0 = \frac{1}{l}\int_0^l f(x)\,dx, \qquad a_n = \frac{2}{l}\int_0^l f(x)\cos\frac{n\pi x}{l}\,dx, \quad n = 1, 2, \ldots.
From the heat equation and the given boundary conditions we obtain the
eigenvalue problem

(6.2.38) X''(x) - \lambda X(x) = 0, \quad 0 < x < \pi, \qquad X'(0) = X'(\pi) = 0,
and

\lambda_n = -n^2, \quad X_n(x) = \cos nx, \quad n = 1, 2, \ldots, \ 0 < x < \pi.

The solution of the differential equation (6.2.39) corresponding to the above
found \lambda_n is given by

T_n(t) = a_n e^{-n^2 t}, \quad n = 0, 1, 2, \ldots.
and

a_n = \frac{1}{\int_0^{\pi}\cos^2 nx\,dx}\int_0^{\pi}(\pi^2 - x^2)\cos nx\,dx = -\frac{2}{\pi}\int_0^{\pi} x^2\cos nx\,dx

= -\frac{2}{\pi}\cdot\frac{2\pi}{n^2}\cos n\pi = \frac{4}{n^2}(-1)^{n+1},

for n = 1, 2, \ldots. Therefore, the solution u(x, t) of the problem is given by

u(x, t) = \frac{2\pi^2}{3} + \sum_{n=1}^{\infty} \frac{4}{n^2}(-1)^{n+1} e^{-n^2 t}\cos nx, \quad 0 < x < \pi, \ t > 0.
Find the temperature u(x, t) of the rod by solving the following system of
partial differential equations.

(6.2.40) v_t(x, t) = v_{xx}(x, t), \quad 0 < x < \tfrac{1}{2}, \ t > 0,

(6.2.41) w_t(x, t) = 4 w_{xx}(x, t), \quad \tfrac{1}{2} < x < 1, \ t > 0,

(6.2.42) v(0, t) = w(1, t) = 0, \quad t > 0,

(6.2.43) v\big(\tfrac{1}{2}, t\big) = w\big(\tfrac{1}{2}, t\big), \quad v_x\big(\tfrac{1}{2}, t\big) = w_x\big(\tfrac{1}{2}, t\big), \quad t > 0,

(6.2.44) u(x, 0) = f(x) = x(1 - x), \quad 0 < x < 1.
Solution. If v(x, t) = T1 (t)Y1 (x) and w(x, t) = T2 (t)Y2 (x), then using the
separation of variables we obtain the following system of ordinary differential
equations.
\[
(6.2.45)\qquad Y_1''(x) + \lambda Y_1(x) = 0, \quad 0 < x < \tfrac12,
\]
\[
(6.2.46)\qquad 4Y_2''(x) + \lambda Y_2(x) = 0, \quad \tfrac12 < x < 1,
\]
\[
(6.2.47)\qquad T_k'(t) + \lambda T_k(t) = 0, \quad t > 0, \ k = 1, 2.
\]
Solving the differential equations (6.2.45) and (6.2.46) and using the bound-
ary conditions Y_1(0) = Y_2(1) = 0 we obtain
\[
Y_1(x) = A\sin\big(\sqrt{\lambda}\,x\big), \qquad Y_2(x) = B\sin\Big(\tfrac12\sqrt{\lambda}\,(x - 1)\Big),
\]
for some constants A and B. From the conditions (6.2.43) we obtain the
following conditions for the functions Y1 (x) and Y2 (x):
\[
(6.2.48)\qquad A\sin\frac{\sqrt{\lambda}}{2} = -B\sin\frac{\sqrt{\lambda}}{2}, \qquad
A\lambda\cos\frac{\sqrt{\lambda}}{2} = 2B\lambda\cos\frac{\sqrt{\lambda}}{2}.
\]
One solution of the above system is λ = 0. If we eliminate A and B from
(6.2.48), then we find the other solution:
\[
(6.2.49)\qquad \cot\frac{\sqrt{\lambda}}{2} = 0.
\]
\[
\frac14 + \frac14 = \frac12.
\]
For the integral in the numerator of cn , we have
\[
\int_0^1 u_n(x)f(x)\,dx = \int_0^{1/2}(-1)^n x(1 - x)\sin\big((2n - 1)\pi x\big)\,dx
+ \int_{1/2}^{1}(-1)^n x(1 - x)\sin\big((2n - 1)\pi(1 - x)\big)\,dx
\]
\[
= (-1)^n\left[\frac{2}{\pi^3(2n - 1)^3} + \frac{2}{\pi^3(2n - 1)^3}\right] = \frac{4(-1)^n}{\pi^3(2n - 1)^3}.
\]
Therefore,
\[
c_n = \frac{8(-1)^n}{\pi^3(2n - 1)^3},
\]
and so the solution of the problem is given by
\[
u(x, t) = \frac{8}{\pi^3}\sum_{n=1}^{\infty}\frac{(-1)^n}{(2n - 1)^3}\,e^{-(2n - 1)^2\pi^2 t}\,u_n(x).
\]
Figure 6.2.2. The temperature u(x, t) at the times t = 0, 0.1, 0.3, 0.5; the maximum values are approximately 0.25, 0.095, 0.0135, and 0.00185, respectively.
In Exercises 1–12 use the separation of variables method to solve the heat
equation
ut (x, t) = a2 uxx (x, t), 0 < x < l, t > 0,
subject to the following boundary conditions and the following initial
conditions:
1. a = √3, l = π, u(x, 0) = x(π − x), u(0, t) = u(π, t) = 0.
\[
u(x, 0) = \begin{cases} 20, & 0 \le x < 1\\ 0, & 1 \le x \le 2. \end{cases}
\]
6. a = 2, l = π, u(x, 0) = x2 , u(0, t) = u(π, t) = 0.
subject to the given initial condition u(x, 0) = f (x) and the following
source functions F (x, t) and boundary conditions.
\[
F(x, t) = \begin{cases} x, & 0 < x < \frac{\pi}{2}\\ \pi - x, & \frac{\pi}{2} \le x < \pi. \end{cases}
\]
17. Solve the following heat equation with periodic boundary conditions.
22. u(x, 0) = \sin\frac{\pi x}{2l}, \quad u(0, t) = u_x(l, t) = 0.
One method to find the fundamental solution or Green function of the two
dimensional heat equation in (6.3.1) is to work similarly to the one dimen-
sional case. The other method is outlined in Exercise 1 of this section.
We will find a solution G(x, y, t) of the heat equation of the form
\[
G(x, y, t) = \frac{1}{t}\,g(\zeta), \qquad\text{where}\quad \zeta = \sqrt{\frac{x^2 + y^2}{t}},
\]
\[
G_{yy}(x, y, t) = \frac{1}{t^{3/2}}\left[\frac{y^2}{\sqrt{t}\,(x^2 + y^2)}\,g''(\zeta)
+ \frac{x^2}{(x^2 + y^2)\sqrt{x^2 + y^2}}\,g'(\zeta)\right].
\]
If we substitute the above derivatives into the heat equation, after re-
arrangement we obtain
\[
-g(\zeta) - \frac{\sqrt{x^2 + y^2}}{2\sqrt{t}}\,g'(\zeta) = c^2\Big(g''(\zeta) + \frac{1}{\zeta}\,g'(\zeta)\Big).
\]
The last equation can be written in the form
\[
c^2\,\frac{d}{d\zeta}\big(\zeta g'(\zeta)\big) + \frac12\,\frac{d}{d\zeta}\big(\zeta^2 g(\zeta)\big) = 0.
\]
If we integrate the last equation, then we obtain
\[
c^2 g'(\zeta) + \frac{\zeta}{2}\,g(\zeta) = 0.
\]
The general solution of the last equation is
\[
g(\zeta) = A e^{-\frac{\zeta^2}{4c^2}},
\]
and so
\[
G(x, y, t) = \frac{A}{t}\,e^{-\frac{x^2 + y^2}{4c^2 t}}.
\]
Usually the constant A is chosen such that
\[
\iint_{\mathbb{R}^2} G(x, y, t)\,dx\,dy = 1.
\]
Therefore,
\[
(6.3.2)\qquad G(x, y, t) = \frac{1}{4\pi c^2 t}\,e^{-\frac{x^2 + y^2}{4c^2 t}}.
\]
4πct
The function G(x, y, t) given by (6.3.2) is called the fundamental solution
or the Green function of the two dimensional heat equation on the plane R2 .
Some properties of the Green function G(x, y, t) are given in Exercise 2 of
this section.
The Green function is of fundamental importance because of the following
result, stated without a proof.
6.3.2 THE HEAT EQUATION ON A RECTANGLE 351
The Green function G(x, y, z, t) for the three dimensional heat equation
\[
u_t = c^2\big(u_{xx} + u_{yy} + u_{zz}\big), \quad (x, y, z) \in \mathbb{R}^3,\ t > 0
\]
is given by
\[
G(x, y, z, t) = \frac{1}{(4\pi c^2 t)^{3/2}}\,e^{-\frac{x^2 + y^2 + z^2}{4c^2 t}}.
\]
be the rectangular plate with boundary ∂R. Let the boundary of the plate
be held at zero temperature for all times and let the initial heat distribution
of the plate be given by f (x, y). The heat distribution u(x, y, t) of the plate
is described by the following initial boundary value problem.
\[
(6.3.4)\qquad
\begin{aligned}
&u_t = c^2\Delta_{x,y}u = c^2\big(u_{xx} + u_{yy}\big), && (x, y) \in R,\ t > 0\\
&u(x, y, 0) = f(x, y), && (x, y) \in R\\
&u(x, y, t) = 0, && (x, y) \in \partial R,\ t > 0.
\end{aligned}
\]
\[
\lambda_{mn} = \frac{m^2\pi^2}{a^2} + \frac{n^2\pi^2}{b^2}, \qquad
W_{mn}(x, y) = \sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b}, \quad (x, y) \in R.
\]
A general solution of Equation (6.3.6), corresponding to the above found λmn ,
is given by
\[
T_{mn}(t) = a_{mn}\,e^{-c^2\lambda_{mn} t}.
\]
Using the initial condition of the function u(x, y, t) and the orthogonality
property of the eigenfunctions
\[
W_{mn}(x, y) \equiv \sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b}, \quad m, n = 1, 2, \ldots
\]
on the rectangle [0, a] × [0, b], from (6.3.7) we find that the coefficients amn
are given by
\[
(6.3.8)\qquad a_{mn} = \frac{4}{ab}\int_0^a\!\!\int_0^b f(x, y)\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b}\,dy\,dx.
\]
Example 6.3.1. Solve the following heat plate initial boundary value prob-
lem.
\[
\begin{aligned}
&u_t(x, y, t) = \frac{1}{\pi^2}\big(u_{xx}(x, y, t) + u_{yy}(x, y, t)\big), && 0 < x < 1,\ 0 < y < 1,\ t > 0\\
&u(x, y, 0) = \sin 3\pi x\,\sin\pi y, && 0 < x < 1,\ 0 < y < 1\\
&u(0, y, t) = u(1, y, t) = u(x, 0, t) = u(x, 1, t) = 0, && t > 0.
\end{aligned}
\]
Solution. From the initial condition f (x, y) = sin 3πx sin πy and Equation
(6.3.8) for the coefficients amn , we find
\[
a_{mn} = 4\int_0^1\!\!\int_0^1 \sin 3\pi x\,\sin\pi y\,\sin m\pi x\,\sin n\pi y\,dy\,dx
= \begin{cases} 1, & m = 3,\ n = 1\\ 0, & \text{otherwise}. \end{cases}
\]
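The orthogonality computation can be confirmed numerically; the following Python sketch (illustrative, not from the book) evaluates the double integral with a trapezoidal rule:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 801)
y = np.linspace(0.0, 1.0, 801)
X, Y = np.meshgrid(x, y, indexing="ij")
f = np.sin(3 * np.pi * X) * np.sin(np.pi * Y)

def coeff(m, n):
    # a_mn = 4 * double integral of f sin(m pi x) sin(n pi y)
    g = 4.0 * f * np.sin(m * np.pi * X) * np.sin(n * np.pi * Y)
    gx = (g[:-1, :] + g[1:, :]) / 2.0 * (x[1] - x[0])   # integrate in x
    return float(np.sum((gx[:, :-1] + gx[:, 1:]) / 2.0 * (y[1] - y[0])))

a31, a22 = coeff(3, 1), coeff(2, 2)
print(round(a31, 4), round(a22, 4))   # approximately 1 and 0
```

Only the (m, n) = (3, 1) coefficient survives, so the series collapses to the single term e^{−t(9 + 1)} sin 3πx sin πy for this initial condition.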
Figure 6.3.1. The temperature u(x, y, t) at the times t = 0, 0.5, 1, 5.
Remark. For the solution of the three dimensional heat equation on a rect-
angular solid by the separation of variables see Exercise 3 of this section.
6.3.3 THE HEAT EQUATION IN POLAR COORDINATES 355
D = {(x, y) : x2 + y 2 < a2 }
and
\[
\Delta u(r, \varphi) = u_{rr} + \frac{1}{r}\,u_r + \frac{1}{r^2}\,u_{\varphi\varphi}.
\]
Let us have a circular plate which occupies the disc of radius a and centered
at the origin. If the temperature of the disc plate is denoted by u(r, φ, t), then
the heat initial boundary value problem in polar coordinates is given by
\[
u_t = c^2\Big(u_{rr} + \frac{1}{r}\,u_r + \frac{1}{r^2}\,u_{\varphi\varphi}\Big), \quad 0 < r \le a,\ 0 \le \varphi < 2\pi,\ t > 0,
\]
and so, the solution u = u(r, φ, t) of heat distribution problem (6.3.9) is given
by
\[
(6.3.12)\qquad u = \sum_{m=0}^{\infty}\sum_{n=1}^{\infty} A_{mn}\,e^{-\lambda_{mn}c^2 t}\,
J_m\Big(\frac{z_{mn}}{a}\,r\Big)\big[a_m\cos m\varphi + b_m\sin m\varphi\big].
\]
\[
f(r, \varphi) = u(r, \varphi, 0) = \sum_{m=0}^{\infty}\sum_{n=1}^{\infty}
\big[a_m\cos m\varphi + b_m\sin m\varphi\big]A_{mn}\,J_m\Big(\frac{z_{mn}}{a}\,r\Big).
\]
\[
\int_0^{2\pi}\sin 2\varphi\,\cos m\varphi\,d\varphi = 0, \quad m = 0, 1, 2, \ldots,
\]
and
\[
\int_0^{2\pi}\sin 2\varphi\,\sin m\varphi\,d\varphi = 0 \quad\text{for every } m \ne 2,
\]
\[
(6.3.14)\qquad b_2 A_{2n} =
\frac{\displaystyle\int_0^{2\pi}\!\!\int_0^1(1 - r^2)\,r^2\sin 2\varphi\,J_2\big(z_{2n}r\big)\,r\sin 2\varphi\,dr\,d\varphi}
{\displaystyle\int_0^{2\pi}\!\!\int_0^1 J_2^2\big(z_{2n}r\big)\,r\sin^2 2\varphi\,dr\,d\varphi}
= \frac{\displaystyle\int_0^1(1 - r^2)\,r^3 J_2\big(z_{2n}r\big)\,dr}{\displaystyle\int_0^1 r J_2^2\big(z_{2n}r\big)\,dr}.
\]
The numbers z2n in (6.3.14) are the zeros of the Bessel function J2 (x).
Taking a = 1, p = 2 and λ = z2n in Example 3.2.12 of Chapter 3, for
the integral in the numerator in (6.3.14) we have
\[
(6.3.15)\qquad \int_0^1(1 - r^2)\,r^3 J_2\big(z_{2n}r\big)\,dr = \frac{2}{z_{2n}^2}\,J_4\big(z_{2n}\big).
\]
\[
\int_0^1 x J_\mu^2(\lambda_k x)\,dx = \frac12\,J_{\mu+1}^2(\lambda_k).
\]
\[
(6.3.16)\qquad \int_0^1 r J_2^2\big(z_{2n}r\big)\,dr = \frac12\,J_3^2\big(z_{2n}\big).
\]
0
The right hand side of the last equation can be simplified using the following
recurrence formula for the Bessel functions.
\[
J_{\mu+1}(x) + J_{\mu-1}(x) = \frac{2\mu}{x}\,J_\mu(x).
\]
(See Section 3.3 of Chapter 3.) Taking µ = 3 and x = z_{2n} in this formula, and using the fact that J_2(z_{2n}) = 0, it follows that z_{2n} J_4(z_{2n}) = 6 J_3(z_{2n}).
Thus,
\[
b_2 A_{2n} = \frac{24}{z_{2n}^3\,J_3\big(z_{2n}\big)},
\]
and so the solution u(r, φ, t) of the problem is given by
\[
u(r, \varphi, t) = 24\left(\sum_{n=1}^{\infty} e^{-z_{2n}^2 t}\,
\frac{1}{z_{2n}^3\,J_3\big(z_{2n}\big)}\,J_2\big(z_{2n}r\big)\right)\sin 2\varphi.
\]
See Figure 6.3.2 for the temperature distribution of the disc at the given
instances.
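The Bessel-function identity used above, z_{2n} J_4(z_{2n}) = 6 J_3(z_{2n}) at the zeros of J_2, is easy to confirm numerically. A Python check with SciPy (an added illustration; the book itself uses Mathematica):

```python
import numpy as np
from scipy.special import jn_zeros, jv

z2 = jn_zeros(2, 5)   # first five positive zeros z_{2n} of J2
resid = np.max(np.abs(z2 * jv(4, z2) - 6.0 * jv(3, z2)))
print(resid)          # essentially zero
```

The same zeros z_{2n} drive the decay rates e^{−z_{2n}² t} in the series for u(r, φ, t).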
Figure 6.3.2. The temperature distribution of the disc at the times t = 0 and t = 0.05.
6.3.4 HEAT EQUATION IN CYLINDRICAL COORDINATES 359
x = r cos φ, y = r sin φ, z = z
and the fact that the Laplacian ∆ in the cylindrical coordinates is given by
\[
\Delta = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r}
+ \frac{1}{r^2}\frac{\partial^2}{\partial\varphi^2} + \frac{\partial^2}{\partial z^2},
\]
then the variables will be separated and in view of the boundary conditions
we will obtain the differential equation
and so from (6.3.20), (6.3.21), (6.3.22), (6.3.23) and (6.3.24) it follows that
the solution u = u(r, φ, z, t) of the heat problem for the cylinder is given by
\[
u = \sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\Big[
A_{jmn}\,e^{-c^2\big(\frac{z_{mn}^2}{a^2} + \varsigma_j^2\big)t}\,
J_n\Big(\frac{z_{mn}}{a}\,r\Big)\cos n\varphi\,\sin\big(\varsigma_j z + \alpha_j\big)
+ B_{jmn}\,e^{-c^2\big(\frac{z_{mn}^2}{a^2} + \varsigma_j^2\big)t}\,
J_n\Big(\frac{z_{mn}}{a}\,r\Big)\sin n\varphi\,\sin\big(\varsigma_j z + \alpha_j\big)\Big],
\]
where the coefficients Ajmn and Bjmn are determined using the initial con-
dition u(r, φ, 0, z) = f (r, φ, z) and the orthogonality property of the Bessel
function Jn (·) and the sine and cosine functions:
\[
A_{jmn} = \frac{4\displaystyle\int_0^a\!\!\int_0^{2\pi}\!\!\int_0^l r f(r, \varphi, z)\,
J_n\Big(\frac{z_{mn}}{a}\,r\Big)\cos n\varphi\,\sin\big(\varsigma_j z + \alpha_j\big)\,dz\,d\varphi\,dr}
{\pi a^2\,\epsilon_n\left[1 + \dfrac{(h_2 h_3 + \varsigma_j^2)(h_2 + h_3)}{(h_2^2 + \varsigma_j^2)(h_3^2 + \varsigma_j^2)}\right]
J_n^2\big(z_{mn}\big)\left[1 + \dfrac{a^2 h_1^2 - n^2}{z_{mn}^2}\right]},
\]
where
\[
\epsilon_n = \begin{cases} 2, & n = 0,\\ 1, & n \ne 0. \end{cases}
\]
6.3.5 THE HEAT EQUATION IN SPHERICAL COORDINATES 361
Solution. We look for the solution u(r, t) of the above problem in the form
Separating the variables and using the given boundary conditions we obtain
\[
u(r, t) = \sum_{n=1}^{\infty} A_n\,J_0\Big(\frac{z_{n0}}{a}\,r\Big)\,e^{-c^2\frac{z_{n0}^2}{a^2}t},
\]
where zn0 is the nth positive zero of the Bessel function J0 (·).
The coefficients An are evaluated using the initial condition and the or-
thogonality property of the Bessel function:
\[
\int_0^a r J_0\Big(\frac{z_{m0}}{a}\,r\Big)J_0\Big(\frac{z_{n0}}{a}\,r\Big)dr = 0, \quad\text{if } m \ne n,
\]
and
\[
\int_0^a r J_0^2\Big(\frac{z_{n0}}{a}\,r\Big)dr = \frac{a^2}{2}\,J_1^2\big(z_{n0}\big), \quad\text{if } m = n.
\]
then working exactly as in the vibrations of the ball problem we need to solve
the ordinary differential equation
Using the results of the vibrations of the ball problem we obtain that the
solution u = u(r, φ, θ) of the heat ball problem (6.3.25) is given by
\[
(6.3.26)\qquad u = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\sum_{j=1}^{\infty}
A_{mnj}\,e^{-c^2\frac{z_{nj}^2}{a^2}t}\,J_{n+\frac12}\Big(\frac{z_{nj}}{a}\,r\Big)
\big[a_m\cos m\varphi + b_m\sin m\varphi\big]P_n^{(m)}(\cos\theta),
\]
where z_{nj} is the j-th positive zero of the Bessel function J_{n+1/2}(x), and P_n^{(m)}(x) are the associated Legendre polynomials of order m (discussed in Chapter 3), given by
\[
P_n^{(m)}(x) = (1 - x^2)^{\frac{m}{2}}\,\frac{d^m}{dx^m}\big(P_n(x)\big),
\]
and Pn (x) are the Legendre polynomials of order n.
The coefficients Amnj am and Amnj bm in (6.3.26) are determined using
the initial conditions given in (6.3.25) and the orthogonality properties of the
eigenfunctions R(r), Φ(φ), Θ(θ).
1. Let v(x, t) and w(y, t) be solutions of the one dimensional heat equa-
tions
\[
v_t(x, t) = c^2 v_{xx}(x, t), \quad x \in \mathbb{R},\ t > 0,
\qquad
w_t(y, t) = c^2 w_{yy}(y, t), \quad y \in \mathbb{R},\ t > 0,
\]
respectively.
Show that u(x, y, t) = v(x, t) w(y, t) is a solution of the two dimen-
sional heat equation
\[
u_t = c^2\big(u_{xx} + u_{yy}\big), \quad (x, y) \in \mathbb{R}^2,\ t > 0
\]
and deduce that the Green function G(x, y, t) for the two dimensional
heat equation is given by
\[
G(x, y, t) = \frac{1}{4\pi c^2 t}\,e^{-\frac{x^2 + y^2}{4c^2 t}}.
\]
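A defining property of the fundamental solution is unit total heat. The following Python sketch (an added illustration, not from the book) checks numerically that G(x, y, t) = (4πc²t)⁻¹ e^{−(x²+y²)/(4c²t)} integrates to 1 over the plane:

```python
import numpy as np

c, t = 1.3, 0.7
L = 30.0                               # the kernel is negligible beyond |x|, |y| > L
x = np.linspace(-L, L, 1201)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.exp(-(X**2 + Y**2) / (4.0 * c**2 * t)) / (4.0 * np.pi * c**2 * t)
total = float(G.sum() * h * h)
print(total)                           # close to 1
```

This also illustrates the product structure of the exercise: the 2D kernel is the product of two 1D kernels, each of which carries total heat 1.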
if it is given that
5. a = 1, b = 2, f(x, y) = 3 sin 6πx sin 2πy + 7 sin πx sin(3πy/2).
6. a = b = π, f (x, y) = 1.
10. Use the separation of variables method to solve the following diffusion
initial boundary value problem.
ut = uxx + uyy + 2k1 ux + 2k2 uy − k3 u,
(x, y) ∈ S, t > 0,
u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,
u(x, y, 0) = f (x, y), (x, y) ∈ S.
Hint: F(x, y, t) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} F_{mn}(t)\sin mx\,\sin ny.
12. Find the solution u = u(r, t) of the following initial boundary heat
problem in polar coordinates.
\[
\begin{aligned}
&u_t = c^2\Big(u_{rr} + \frac{1}{r}\,u_r\Big), && 0 < r < a,\ t > 0,\\
&\lim_{r\to 0^+}|u(r, t)| < \infty, \quad u(a, t) = 0, && t > 0,\\
&u(r, 0) = T_0, && 0 < r < a.
\end{aligned}
\]
13. Find the solution u = u(r, φ, t) of the following initial boundary heat
problem in polar coordinates.
\[
\begin{aligned}
&u_t = u_{rr} + \frac{1}{r}\,u_r + \frac{1}{r^2}\,u_{\varphi\varphi}, && 0 < r < 1,\ 0 < \varphi < 2\pi,\ t > 0,\\
&\lim_{r\to 0^+}|u(r, \varphi, t)| < \infty, \quad u(1, \varphi, t) = 0, && 0 < \varphi < 2\pi,\ t > 0,\\
&u(r, \varphi, 0) = (1 - r^2)\,r\sin\varphi, && 0 < r < 1,\ 0 < \varphi < 2\pi.
\end{aligned}
\]
14. Find the solution u = u(r, φ, t) of the following initial boundary heat
problem in polar coordinates.
\[
\begin{aligned}
&u_t = u_{rr} + \frac{1}{r}\,u_r + \frac{1}{r^2}\,u_{\varphi\varphi}, && 0 < r < 1,\ 0 < \varphi < 2\pi,\ t > 0,\\
&\lim_{r\to 0^+}|u(r, \varphi, t)| < \infty, \quad u(1, \varphi, t) = \sin 3\varphi, && 0 < \varphi < 2\pi,\ t > 0,\\
&u(r, \varphi, 0) = 0, && 0 < r < 1,\ 0 < \varphi < 2\pi.
\end{aligned}
\]
15. Find the solution u = u(r, z, t) of the following initial boundary heat
problem in cylindrical coordinates.
\[
\begin{aligned}
&u_t = u_{rr} + \frac{1}{r}\,u_r + u_{zz}, && 0 < r < 1,\ 0 < z < 1,\ t > 0,\\
&u(r, 0, t) = u(r, 1, t) = 0, && 0 < r < 1,\ t > 0,\\
&u(1, z, t) = 0, && 0 < z < 1,\ t > 0,\\
&u(r, z, 0) = 1, && 0 < r < 1,\ 0 < z < 1.
\end{aligned}
\]
16. Find the solution u = u(r, t) of the following initial boundary heat
problem in polar coordinates.
\[
\begin{aligned}
&\frac{\partial u(r, t)}{\partial t} = \frac{c^2}{r^2}\,\frac{\partial}{\partial r}\Big(r^2\,\frac{\partial u}{\partial r}\Big), && 0 < r < 1,\ t > 0,\\
&\lim_{r\to 0^+}|u(r, t)| < \infty, \quad u(1, t) = 0, && t > 0,\\
&u(r, 0) = 1, && 0 < r < 1.
\end{aligned}
\]
17. Find the solution u = u(r, t) of the heat ball problem
\[
\begin{aligned}
&u_t = c^2\Big(\frac{\partial^2 u}{\partial r^2} + \frac{2}{r}\,\frac{\partial u}{\partial r}\Big), && 0 < r < 1,\ t > 0,\\
&u(r, 0) = f(r), && 0 < r < 1,\\
&\lim_{r\to 0^+}|u(r, t)| < \infty, \quad u(1, t) = 0, && t > 0.
\end{aligned}
\]
Solution. If U(s, t) = \mathcal{L}\big(u(x, t)\big) and F(s) = \mathcal{L}\big(f(x)\big), then the heat equation is transformed into
In view of the boundary conditions for the function u(x, t) the above equation becomes
\[
U_t(s, t) - (s^2 + a^2)U(s, t) = F(s).
\]
6.4.1 LAPLACE TRANSFORM METHOD FOR THE HEAT EQUATION 367
\[
U(s, t) = C e^{(s^2 + a^2)t} - \frac{F(s)}{s^2 + a^2}.
\]
From the fact that every Laplace transform tends to zero as s → ∞ it follows
that C = 0. Thus,
\[
U(s, t) = -\frac{F(s)}{s^2 + a^2},
\]
and so by the convolution theorem for the Laplace transform we have
\[
u(x, t) = -\frac{1}{a}\int_0^x f(x - y)\sin ay\,dy.
\]
Example 6.4.2. Using the Laplace transform method solve the initial bound-
ary value heat problem
ut (x, t) = uxx (x, t), 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 1, t > 0,
u(x, 0) = 0, 0 < x < 1.
Solution. If U = U(x, s) = \mathcal{L}\big(u(x, t)\big), then applying the Laplace transform
to both sides of the heat equation and using the initial condition we have
\[
\frac{d^2 U}{dx^2} - sU = 0.
\]
The general solution of the above ordinary differential equation is
\[
U(x, s) = c_1\cosh\big(\sqrt{s}\,x\big) + c_2\sinh\big(\sqrt{s}\,x\big).
\]
we obtain
\[
U(x, s) = \sum_{n=0}^{\infty}\left[\frac{e^{-(2n + 1 - x)\sqrt{s}}}{s} - \frac{e^{-(2n + 1 + x)\sqrt{s}}}{s}\right].
\]
\[
\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-t^2}\,dt.
\]
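Assuming the standard transform pair ℒ⁻¹[e^{−k√s}/s] = erfc(k/(2√t)) for k ≥ 0, the series for U(x, s) inverts term by term. A short Python check (an added illustration, not from the book) confirms that the resulting series reproduces the boundary values u(0, t) = 0 and u(1, t) = 1:

```python
import numpy as np
from scipy.special import erfc

# u(x,t) = sum over images: erfc((2n+1-x)/(2 sqrt t)) - erfc((2n+1+x)/(2 sqrt t))
def u(x, t, n_terms=200):
    n = np.arange(n_terms)
    return float(np.sum(erfc((2 * n + 1 - x) / (2.0 * np.sqrt(t)))
                        - erfc((2 * n + 1 + x) / (2.0 * np.sqrt(t)))))

print(u(0.0, 0.5), u(1.0, 0.5))   # 0, and (by telescoping) 1
```

At x = 1 the terms telescope to erfc(0) = 1, and at x = 0 they cancel in pairs, so both boundary conditions hold for every t > 0.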
Example 6.4.3. Using the Laplace transform method solve the boundary
value problem
\[
\begin{aligned}
&u_t(x, t) = u_{xx}(x, t), && -1 < x < 1,\ t > 0,\\
&u(x, 0) = 1, && -1 < x < 1,\\
&u_x(1, t) + u(1, t) = 0, \quad u_x(-1, t) - u(-1, t) = 0, && t > 0.
\end{aligned}
\]
Solution. If U = U(x, s) = \mathcal{L}\big(u(x, t)\big), then from the initial condition we
have
\[
\frac{d^2 U}{dx^2} - sU = -1.
\]
The solution of the last equation, in view of the boundary conditions
\[
\left[\frac{dU}{dx} + U(x, s)\right]_{x=1} = 0, \qquad \left[\frac{dU}{dx} - U(x, s)\right]_{x=-1} = 0,
\]
is
\[
(6.4.1)\qquad U(x, s) = \frac{1}{s} - \frac{\cosh\big(\sqrt{s}\,x\big)}{s\big[\sqrt{s}\,\sinh\sqrt{s} + \cosh\sqrt{s}\,\big]}.
\]
The first function in (6.4.1) is easily invertible. To find the inverse Laplace
transform of the second function is not as easy. Let
\[
F(s) = \frac{\cosh\big(\sqrt{s}\,x\big)}{s\big[\sqrt{s}\,\sinh\sqrt{s} + \cosh\sqrt{s}\,\big]}.
\]
Figure 6.4.1. The contour C_R, with vertices at ±Ri and the simple poles s = −z_n² on the negative real axis.
The singularities of this function are at s = 0 and all points s which are
solutions of the equation
\[
(6.4.2)\qquad \sqrt{s}\,\sinh\sqrt{s} + \cosh\sqrt{s} = 0.
\]
These singularities are simple poles (check !). Consider the contour integral
∫
F (z) etz dz,
CR
For the poles z = −z_n², using l'Hôpital's rule and Euler's formula we have
\[
(6.4.4)\qquad
\operatorname{Res}\big\{F(z)e^{zt},\ z = -z_n^2\big\}
= \lim_{z\to -z_n^2}(z + z_n^2)F(z)e^{zt}
= \cosh(-z_n i x)\,e^{-z_n^2 t}\lim_{z\to -z_n^2}\frac{z + z_n^2}{z\big[\sqrt{z}\,\sinh\sqrt{z} + \cosh\sqrt{z}\,\big]}
= -\frac{2\cos(z_n x)\,e^{-z_n^2 t}}{z_n\big[2\sin z_n + z_n\cos z_n\big]}.
\]
\[
u(x, t) = \mathcal{L}^{-1}\Big[\frac{1}{s}\Big] - \mathcal{L}^{-1}\big[F(s)\big]
= 1 - 1 + 2\sum_{n=1}^{\infty}\frac{\cos(z_n x)\,e^{-z_n^2 t}}{z_n\big[2\sin z_n + z_n\cos z_n\big]}
= 2\sum_{n=1}^{\infty}\frac{\cos(z_n x)\,e^{-z_n^2 t}}{z_n\big[2\sin z_n + z_n\cos z_n\big]}.
\]
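The numbers z_n can be computed directly: substituting √s = iz into (6.4.2) turns it into cos z − z sin z = 0, i.e. cot z = z. A short Python root search (an added illustration, not part of the book's derivation) locates the first few of them:

```python
import numpy as np
from scipy.optimize import brentq

def d(z):
    # (6.4.2) with sqrt(s) = i z:  cos(z) - z sin(z)
    return np.cos(z) - z * np.sin(z)

# one root in each interval (n pi, (n+1) pi)
roots = [brentq(d, n * np.pi + 1e-9, (n + 1) * np.pi - 1e-9) for n in range(3)]
print([round(z, 5) for z in roots])   # first root near 0.86033
```

These z_n then feed the decay rates e^{−z_n² t} in the residue series above.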
\[
\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0),
\]
\[
u(x, t) = \sum_{n=0}^{\infty} g\Big(\frac{2nl + x}{a},\ t\Big) - \sum_{n=0}^{\infty} g\Big(\frac{2nl - x}{a},\ t\Big).
\]
Since the function g(y, t) is odd with respect to the variable y, from the last
equation we obtain
∞
∑
1 (2nl+x)2
u(x, t) = √ 3 (2nl + x) e− 4a2 l
t
.
2 π t2 n=−∞
\[
\frac{dU}{dt} = -c^2\omega^2 U, \qquad U(\omega, 0) = F(\omega).
\]
\[
\begin{aligned}
u(x, t) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} U(\omega, t)\,e^{i\omega x}\,d\omega
= \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{-c^2\omega^2 t}\,e^{i\omega x}\,d\omega\\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty} f(\xi)\,e^{-i\omega\xi}\,d\xi\Big)e^{-c^2\omega^2 t}\,e^{i\omega x}\,d\omega\\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty} e^{-c^2\omega^2 t}\,e^{i\omega(x - \xi)}\,d\omega\Big)f(\xi)\,d\xi\\
&= \frac{1}{\pi}\int_{-\infty}^{\infty}\Big(\int_0^{\infty} e^{-c^2\omega^2 t}\cos\big(\omega(x - \xi)\big)\,d\omega\Big)f(\xi)\,d\xi\\
&= \frac{1}{2c\sqrt{\pi t}}\int_{-\infty}^{\infty} e^{-\frac{(x - \xi)^2}{4c^2 t}}\,f(\xi)\,d\xi.
\end{aligned}
\]
\[
(6.4.7)\qquad \int_0^{\infty} e^{-c^2\omega^2 t}\cos\big(\omega\lambda\big)\,d\omega
= \frac{1}{2c}\sqrt{\frac{\pi}{t}}\;e^{-\frac{\lambda^2}{4c^2 t}}.
\]
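Formula (6.4.7), including the Gaussian factor on the right-hand side, can be verified numerically; the Python check below (an added illustration, not from the book) compares both sides for sample parameter values:

```python
import numpy as np
from scipy.integrate import quad

c, t, lam = 1.5, 0.8, 2.0
left, _ = quad(lambda w: np.exp(-c**2 * w**2 * t) * np.cos(w * lam), 0.0, np.inf)
right = np.sqrt(np.pi / t) / (2.0 * c) * np.exp(-lam**2 / (4.0 * c**2 * t))
print(abs(left - right))   # essentially zero
```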
To verify (6.4.7), let
\[
g(\lambda) = \int_0^{\infty} e^{-c^2\omega^2 t}\cos\big(\omega\lambda\big)\,d\omega.
\]
Differentiating under the integral sign and integrating by parts gives
\[
\frac{dg(\lambda)}{d\lambda} = -\frac{\lambda}{2c^2 t}\,g(\lambda),
\]
Example 6.4.6. Using the Fourier cosine transform solve the initial bound-
ary value problem
ut (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0
ux (0, t) = f (t), 0 < t < ∞,
u(x, 0) = 0, 0 < x < ∞
lim u(x, t) = lim ux (x, t) = 0, t > 0.
x→∞ x→∞
Solution. If U = U(\omega, t) = \mathcal{F}_c\big(u(x, t)\big) and F(\omega) = \mathcal{F}_c\big(f(x)\big), then taking
the Fourier cosine transform from both sides of the heat equation and using the
integration by parts formula (twice) and the boundary condition ux (0, t) =
f (t) we obtain
\[
\begin{aligned}
\frac{dU(\omega, t)}{dt} &= c^2\int_0^{\infty} u_{xx}(x, t)\cos(\omega x)\,dx
= c^2\Big[u_x(x, t)\cos(\omega x)\Big]_{x=0}^{\infty} + c^2\omega\int_0^{\infty} u_x(x, t)\sin(\omega x)\,dx\\
&= -c^2 f(t) + c^2\omega\Big[u(x, t)\sin(\omega x)\Big]_{x=0}^{\infty} - c^2\omega^2\int_0^{\infty} u(x, t)\cos(\omega x)\,dx
= -c^2 f(t) - c^2\omega^2 U(\omega, t),
\end{aligned}
\]
0
i.e.,
\[
\frac{dU(\omega, t)}{dt} + c^2\omega^2 U(\omega, t) = -c^2 f(t).
\]
The solution of the last differential equation, in view of the initial condition
\[
U(\omega, 0) = \int_0^{\infty} u(x, 0)\cos(\omega x)\,dx = 0,
\]
is given by
\[
U(\omega, t) = -c^2\int_0^t f(\tau)\,e^{-c^2\omega^2(t - \tau)}\,d\tau.
\]
If we take the inverse Fourier cosine transform, then we obtain that the solution is given by
\[
u(x, t) = \frac{2}{\pi}\int_0^{\infty} U(\omega, t)\cos\omega x\,d\omega
= -\frac{2c^2}{\pi}\int_0^t\Big(\int_0^{\infty} e^{-c^2\omega^2(t - \tau)}\cos\omega x\,d\omega\Big)f(\tau)\,d\tau
= -\frac{c}{\sqrt{\pi}}\int_0^t\frac{f(\tau)}{\sqrt{t - \tau}}\;e^{-\frac{x^2}{4c^2(t - \tau)}}\,d\tau
\]
(by result (6.4.7) in Example 6.4.5).
2. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following con-
ditions: lim u(x, t) = u1 , u(0, t) = u0 , u(x, 0) = u1 , u0 and u1 are
x→∞
constants.
3. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following con-
ditions: lim u(x, t) = u0 , ux (0, t) = u(0, t), u(x, 0) = u0 , u0 is a
x→∞
constant.
4. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following condi-
tions: lim u(x, t) = u0 , u(0, t) = f (t), u(x, 0) = 0, u0 is a constant.
x→∞
5. ut(x, t) = uxx(x, t), 0 < x < ∞, t > 0, subject to the following conditions: lim_{x→∞} u(x, t) = 60, u(x, 0) = 60, u(0, t) = 60 + 40 U₂(t), where
\[
U_2(t) = \begin{cases} 1, & 0 \le t \le 2\\ 0, & 2 < t < \infty. \end{cases}
\]
6. ut (x, t) = uxx (x, t), −∞ < x < 1, t > 0, subject to the following
conditions: lim u(x, t) = u0 , ux (1, t) + u(1, t) = 100, u(x, 0) = 0,
x→−∞
u0 is a constant.
6.4.2 FOURIER TRANSFORM METHOD FOR THE HEAT EQUATION 375
7. ut (x, t) = uxx (x, t), 0 < x < 1, t > 0, subject to the following condi-
tions: u(x, 0) = u0 + u0 sin x, u(0, t) = u(1, t) = u0 , u0 is a constant.
8. ut (x, t) = uxx (x, t) − hu(x, t), 0 < x < ∞, t > 0, subject to the
following conditions: lim u(x, t) = 0, u(x, 0) = 0, u(0, t) = u0 , u0 is
x→∞
a constant.
9. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following con-
ditions: lim u(x, t) = 0, u(x, 0) = 10e−x , u(0, t) = 10.
x→∞
10. ut (x, t) = uxx (x, t) + 1, 0 < x < 1, t > 0, subject to the following
conditions: u(0, t) = u(1, t) = 0, u(x, 0) = 0.
11. ut(x, t) = c²(uxx(x, t) + (1 + k)ux(x, t) + k u(x, t)), 0 < x < ∞, t > 0,
subject to the following conditions: u(0, t) = u0 , lim | u(x, t) |< ∞,
x→∞
u(x, 0) = 0, u0 is a constant.
12. Find the solution u(r, t) of the initial boundary value problem
\[
\begin{aligned}
&u_t(r, t) = u_{rr}(r, t) + \frac{2}{r}\,u_r(r, t), && 0 < r < 1,\ t > 0,\\
&\lim_{r\to 0^+}|u(r, t)| < \infty, \quad u_r(1, t) = 1, && t > 0,\\
&u(r, 0) = 0, && 0 \le r < 1.
\end{aligned}
\]
In Problems 15–19, use the Fourier transform to solve the indicated initial
boundary value problem on the indicated interval, subject to the given
conditions.
376 6. THE HEAT EQUATION
15. ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0 subject to the following
condition: u(x, 0) = µ(x).
16. ut (x, t) = c2 uxx (x, t) + f (x, t), −∞ < x < ∞, t > 0 subject to the
following condition: u(x, 0) = 0.
17. ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0 subject to the condition
{
1, |x| < 1
u(x, 0) =
0, |x| > 1.
18. ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0 subject to the condition
u(x, 0) = e−|x| .
19. ut(x, t) = t² uxx(x, t), −∞ < x < ∞, t > 0 subject to the condition
u(x, 0) = f (x).
In Problems 20–25, use one of the Fourier transforms to solve the indicated
heat equation, subject to the given initial and boundary conditions.
20. ut (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0 subject to the conditions
ux (0, t) = µ(t), and u(x, 0) = 0.
21. ut (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0 subject to the
conditions u(0, t) = u(x, 0) = 0.
22. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0 subject to the initial conditions
{
1, 0 < x < 1
u(x, 0) =
0, 1 ≤ x < ∞.
23. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0 subject to the initial conditions
u(x, 0) = 0, ut (x, 0) = 1 and the boundary condition lim u(x, t) = 0.
x→∞
24. ut (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0, subject to the
initial condition u(x, 0) = 0, 0 < x < ∞, and the boundary condition
u(0, t) = 0.
25. ut (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0, subject to the ini-
tial condition ux (x, 0) = 0, 0 < x < ∞, and the boundary condition
u(0, t) = 0.
6.5 PROJECTS USING MATHEMATICA 377
All calculations should be done using Mathematica. Display the plot of the function u(r, φ, t) at several time instances t and find the hottest places on the disc plate at the specified instances.
Solution. The solution of the heat problem, as we derived in Section 6.3, is
given by
\[
u(r, \varphi, t) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} J_m\big(\lambda_{mn} r\big)\,e^{-\lambda_{mn}^2 t}\big[A_{mn}\cos m\varphi + B_{mn}\sin m\varphi\big],
\]
where
\[
A_{mn} = \frac{1}{\pi J_{m+1}^2\big(z_{mn}\big)}\int_0^1\!\!\int_0^{2\pi} f(r, \varphi)\cos m\varphi\,J_m\big(z_{mn}r\big)\,r\,d\varphi\,dr,
\]
\[
B_{mn} = \frac{1}{\pi J_{m+1}^2\big(z_{mn}\big)}\int_0^1\!\!\int_0^{2\pi} f(r, \varphi)\sin m\varphi\,J_m\big(z_{mn}r\big)\,r\,d\varphi\,dr.
\]
Define the k th partial sum which will be the approximation of the solution:
In[8] := u[r_, φ_, t_, k_] := Sum[(A[m, n] EFc[r, φ, m, n] + B[m, n] EFs[r, φ, m, n]) Exp[-λ[m, n]^2 t], {m, 0, k}, {n, 0, k}];
Let us find the points on the disc with maximal temperature u(r, φ) at
the specified time instances t.
At the initial moment t = 0 we have that
In[10] := FindMaximum[{u[r, φ, 0], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}]
Out[10] = {0.387802, {r -> 0.605459, φ -> 1.5708}}
At the moment t = 0.2 we have that
In[11] := FindMaximum[{u[r, φ, 0.2], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}]
Out[11] = {0.297189, {r -> 0.534556, φ -> 1.5708}}
At the moment t = 0.4 we have that
In[12] := FindMaximum[{u[r, φ, 0.4], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}]
Out[12] = {0.224781, {r -> 0.507249, φ -> 1.5708}}
At the moment t = 0.6 we have that
In[13] := FindMaximum[{u[r, φ, 0.6], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}]
Out[13] = {0.168849, {r -> 0.493909, φ -> 1.5708}}
Figure 6.5.1. The temperature of the disc plate at t = 0 and t = 0.02.
Project 6.5.2. Use the Laplace transform to solve the heat boundary value
problem
ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0,
u(x, 0) = 1, 0 < x < ∞,
u(0, t) = 10, t > 0.
From the boundary conditions and the bounded property of the Laplace
transform as s → ∞ we find the constants C[1] and C[2]:
In[9] := BC = LaplaceTransform[b[t], t, s]
Out[9] = 10/s
Out[12] = 1 + 9 Erfc[x/(2 Sqrt[t])]
Figure 6.5.2. The solution u(x, t) plotted against x at the times t = 0.02, 0.5, 2, 5, 10, 20.
Project 6.5.3. Use the Fourier transform to solve the heat boundary value
problem
\[
u_t(x, t) = \frac14\,u_{xx}(x, t), \quad -\infty < x < \infty,\ t > 0,
\]
\[
u(x, 0) = f(x) = \begin{cases} 10, & -1 < x < 1\\ 0, & \text{otherwise}, \end{cases} \qquad -\infty < x < \infty.
\]
\[
\frac{dU(z, t)}{dt} + \frac{z^2}{4}\,U(z, t) = 0
\]
In[9] := IC = (Sol /. t -> 0)
Out[9] = C[1]
Matching this with the Fourier transform of the initial data gives C[1] = 20 Sqrt[2/Pi] Sin[z]/z, so
In[10] := Final = Sol /. {C[1] -> 20 Sqrt[2/Pi] Sin[z]/z}
Out[10] = 20 E^(-t z^2/4) Sqrt[2/Pi] Sin[z]/z
To find the inverse Fourier transform we use the convolution theorem.
First define:
In[11] := g[x_, t_] := InverseFourierTransform[20 Sqrt[2/Pi] Sin[z]/z, z, x];
In[12] := h[x_, t_] := InverseFourierTransform[E^(-t z^2/4), z, x];
The solution u(x, t) is obtained by
In[13] := u[x_, t_] := 1/Sqrt[2 Pi] Convolve[h[y, t], g[y, t], y, x];
In[14] := FullSimplify[%]
Out[14] = 5 (-Erf[(-1 + x)/Sqrt[t]] + Erf[(1 + x)/Sqrt[t]])
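As an independent check of the closed form (here in Python rather than Mathematica), one can compare it with a direct numerical evaluation of the heat-kernel integral. For u_t = ¼u_xx the kernel is (πt)^{−1/2}e^{−x²/t}, and the prefactor of the error-function expression comes out as 5 = 10/2, consistent with u(0, 0⁺) = 10:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def u_numeric(x, t):
    # u(x,t) = integral of the heat kernel against f(xi) = 10 on (-1, 1)
    val, _ = quad(lambda xi: 10.0 / np.sqrt(np.pi * t)
                  * np.exp(-(x - xi)**2 / t), -1.0, 1.0)
    return val

def u_closed(x, t):
    return 5.0 * (erf((1.0 + x) / np.sqrt(t)) - erf((-1.0 + x) / np.sqrt(t)))

print(abs(u_numeric(0.3, 0.9) - u_closed(0.3, 0.9)))   # essentially zero
```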
CHAPTER 7
384 7. LAPLACE AND POISSON EQUATIONS
where F and g are given functions, and un is the derivative of the function
u in the direction of the outward unit normal vector n to the boundary ∂Ω.
The following theorem, the Maximum Principle for the Laplace equation, holds for both of the above problems in any dimension; we will prove it for the two dimensional homogeneous Dirichlet boundary value problem. The proof for the three dimensional Laplace equation is the same.
Theorem 7.1.1. Let Ω ⊂ R2 be a bounded domain with boundary ∂Ω and
let \overline{\Omega} = \Omega \cup \partial\Omega. Suppose that u(x, y) is a nonconstant, continuous function on \overline{\Omega} and that u(x, y) satisfies the Laplace equation u_{xx} + u_{yy} = 0 in \Omega.
Then, u(x, y) achieves its largest, and also its smallest value on the boundary
∂Ω.
Proof. We will prove the first assertion. The second assertion is obtained from
the first assertion, applied to the function −u(x, y).
For any ϵ > 0 consider the function
Thus,
\[
\max_{(x, y)\in\overline{\Omega}} u(x, y) \le A + \epsilon B,
\]
and, letting \epsilon \to 0,
\[
\max_{(x, y)\in\overline{\Omega}} u(x, y) = A. \qquad\blacksquare
\]
By Theorem 7.1.1, the function u achieves its maximum and minimum values
on the boundary ∂Ω. Since these boundary values are zero, u must be zero
in Ω. ■
∆ u(x) = 0, x∈Ω
is called harmonic in Ω.
The following are some properties of harmonic functions which can be easily
verified. See Exercise 1 of this section.
(a) A finite linear combination of harmonic functions is also a harmonic
function.
(b) The real and imaginary parts of any analytic function in a domain
Ω ⊆ R2 are harmonic functions in Ω.
Example 7.1.1. Show that the following functions are harmonic in the in-
dicated domain Ω.
(a) u(x, y) = x2 − y 2 , Ω = R2 .
(d) u(x, y, z) = √ 1
, Ω = R3 \ {(0, 0, 0)}.
x2 +y 2 +z 2
\[
\Delta u(x, y, z) = -\frac{r^3 - 3x^2 r}{r^6} - \frac{r^3 - 3y^2 r}{r^6} - \frac{r^3 - 3z^2 r}{r^6}
= -\frac{3r^3 - 3(x^2 + y^2 + z^2)r}{r^6} = -\frac{3r^3 - 3r^3}{r^6} = 0.
\]
The following mean value property is important in the study of the Laplace
equation. We will prove it only for the two dimensional case since the proof
for higher dimensions is very similar.
Theorem 7.1.2. Mean Value Property. If u(x, y) is a harmonic function
in a domain Ω ⊆ R2 , then for any point (a, b) ∈ Ω
\[
u(a, b) = \frac{1}{2\pi r}\int_{\partial D_r(z)} u(x, y)\,ds = \frac{1}{r^2\pi}\iint_{D_r(z)} u(x, y)\,dx\,dy,
\]
7.1 THE FUNDAMENTAL SOLUTION OF THE LAPLACE EQUATION 387
where Dr = Dr (z) is any open disc with its center at the point z = (a, b)
and radius r such that Dr (a, b) ⊆ Ω.
Proof. Let Dϵ (z) be the open disc with its center at the point z = (a, b)
and radius ϵ < r and let Aϵ = Dr (z) \ Dϵ (z). Let us consider the harmonic
function
\[
v(x, y) = \ln\frac{1}{\sqrt{(x - a)^2 + (y - b)^2}}
\]
in the annulus Aϵ . See Figure 7.1.1.
Figure 7.1.1. The annulus A_ϵ = D_r(z) \ D_ϵ(z), bounded by ∂D_r and the small circle ∂D_ϵ about z, inside the domain Ω.
Thus,
\[
\int_{\partial A_\epsilon} u v_n\,ds = 0.
\]
Taking lim as ϵ → 0 from both sides of the above equation and using the
fact that u is a continuous function it follows that
\[
2\pi u(a, b) = \frac{1}{r}\int_{\partial D_r} u\,ds,
\]
which is the first part of the assertions. The second assertion is left as an
exercise. ■
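The circle form of the mean value property is easy to test numerically. The Python sketch below (an added illustration) averages the harmonic function u(x, y) = x² − y² of Example 7.1.1(a) over a circle and compares with the value at the center:

```python
import numpy as np

def u(x, y):
    return x**2 - y**2   # harmonic in the plane

a, b, r = 1.2, -0.7, 2.5
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
avg = float(np.mean(u(a + r * np.cos(theta), b + r * np.sin(theta))))
print(abs(avg - u(a, b)))   # essentially zero
```

Since the integrand is a trigonometric polynomial, the uniform average over the circle is exact up to rounding.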
The converse of the mean value property for harmonic functions is also
true. Even though the property is true for any dimensional domains we will
state and prove it for the three dimensional case.
Theorem 7.1.3. If u is a twice continuously differentiable function on a
domain Ω ⊆ R3 and satisfies the mean value property
\[
u(x) = \frac{1}{\frac{4}{3}\pi r^3}\int_{B(x, r)} u(y)\,dy = \frac{1}{4\pi r^2}\int_{\partial B(x, r)} u(y)\,dS(y),
\]
for every ball B(x, r) with the property B(x, r) ⊂ Ω, then u is harmonic in
Ω.
Proof. Suppose contrarily that u is not harmonic in Ω. Then ∆ u(x0 ) ̸=
0 for some x0 ∈ Ω. We may assume that ∆ u(x0 ) > 0. Since u is a
twice continuously differentiable function in Ω there exists an r0 > 0 such
that ∆ u(x) > 0 for every x ∈ B(x0 , r) and every r < r0 . Now, define the
function
\[
f(r) = \frac{1}{\frac{4}{3}\pi r^3}\int_{B(x_0, r)} u(y)\,dy = \frac{1}{\frac{4}{3}\pi}\int_{B(0, 1)} u(x_0 + rz)\,dz, \quad r < r_0.
\]
Because u satisfies the mean value property we have f (r) = u(x0 ). Thus,
f ′ (r) = 0. On the other hand, from the Gauss–Ostrogradski formula we have
\[
f'(r) = \frac{1}{\frac{4}{3}\pi}\int_{B(0, 1)} \operatorname{grad}\big(u(x_0 + rz)\big)\cdot z\,dz
= \frac{1}{\frac{4}{3}\pi}\int_{B(0, 1)} \Delta u(x_0 + rz)\,dz > 0,
\]
which is a contradiction. ■
\[
u_x(x, y) = \frac{x}{r}\,f'(r), \qquad u_y(x, y) = \frac{y}{r}\,f'(r), \quad (x, y) \ne (0, 0),
\]
\[
u_{xx}(x, y) = \frac{1}{r}\,f'(r) - \frac{x^2}{r^3}\,f'(r) + \frac{x^2}{r^2}\,f''(r),
\qquad
u_{yy}(x, y) = \frac{1}{r}\,f'(r) - \frac{y^2}{r^3}\,f'(r) + \frac{y^2}{r^2}\,f''(r),
\]
so that
\[
\Delta u = \frac{1}{r}\,f'(r) + f''(r), \quad r \ne 0.
\]
f (r) = C1 ln |r| + C2 , r ̸= 0,
\[
(7.1.4)\qquad \Phi(x, y) = -\frac{1}{4\pi}\ln\big(x^2 + y^2\big)
\]
\[
(7.1.6)\qquad \Phi(x, y, z) = \frac{1}{4\pi}\,\frac{1}{\sqrt{x^2 + y^2 + z^2}}.
\]
∆ u(x) = −F (x), x ∈ Rn ,
and un (y) and Φn (x, y) are the derivatives of u and Φ, respectively, in the
direction of the unit outward vector n to the boundary ∂Ω at the boundary
point y ∈ ∂Ω.
Proof. First, from the Gauss–Ostrogradski formula (see Appendix E) applied
to two harmonic functions u and v in the domain Ω we obtain
∫
[ ]
(7.1.7) v(y) un(y) (y) − u(y)vn(y) (y) dS(y) = 0.
∂Ω
On the sphere \{y : |y - x| = \epsilon\} we have
\[
\Phi(x, y) = \begin{cases} -\frac{1}{2\pi}\ln\epsilon, & n = 2\\[2pt] \frac{1}{4\pi\epsilon}, & n = 3, \end{cases}
\]
and thus,
\[
\Phi_{n(y)}(x, y) = \begin{cases} -\frac{1}{2\pi\epsilon}, & n = 2\\[2pt] -\frac{1}{4\pi\epsilon^2}, & n = 3. \end{cases}
\]
From the above fact and the continuity of the function u(x) in Ω it follows
that
\[
\lim_{\epsilon\to 0}\int_{\{y\,:\,|y - x| = \epsilon\}}\big[u(y) - u(x)\big]\Phi_{n(y)}(x, y)\,dS(y) = 0.
\]
For some basic properties of the Green function see Exercise 2 of this
section.
Now, let us take a few examples.
Example 7.1.2. Verify that the expression
\[
G(x, y) = \Phi(x, y) - \Phi\Big(|x|\,y,\ \frac{x}{|x|}\Big),
\]
is actually the Green function for the unit ball | x |< 1 in Rn for n = 2, 3,
where Φ(·, ·) is the fundamental solution of the Laplace equation.
Solution. Let us consider the function h(x, y), defined by
\[
h(x, y) = \Phi\Big(|x|\,y,\ \frac{x}{|x|}\Big).
\]
and
\[
(7.1.10)\qquad \Big|\,|x|\,y - \frac{x}{|x|}\Big| = \Big|\,|y|\,x - \frac{y}{|y|}\Big|.
\]
From the first identity in (7.1.9) it follows that the function h(x, y) is har-
monic with respect to x in the unit ball, while from the second identity in
(7.1.9) we have h(x, y) is harmonic with respect to y in the unit ball. From
the definition of h(x, y) and identity (7.1.10) we have
Therefore, the above defined function G(· , ·) satisfies all the properties for a
function to be a Green function.
If we take n = 2, for example, then from the definition of the fundamental
solution of the Laplace equation in R2 \ {0} and the above discussion, it
follows that the Green function in the unit disc of the complex plane R2 is
given by
\[
G(x, y) = -\frac{1}{2\pi}\ln\frac{|x - y|}{\Big|\,|x|\,y - \dfrac{x}{|x|}\Big|}, \quad |x| < 1,\ |y| < 1.
\]
Example 7.1.3. Find the Green function for the upper half plane.
Solution. For a point x = (x, y), define its reflection by x∗ = (x, −y). If
U + = {x = (x, y) ∈ R2 : y > 0}
| x − y∗ |=| y − x∗ |, x, y ∈ R2
(b) The real and imaginary parts of any analytic function in a domain
Ω ⊆ R2 are harmonic functions in Ω.
4. Find the harmonic function u(x, y) in the upper half plane {(x, y) :
y > 0}, subject to the boundary condition
x
u(x, 0) = .
x2 + 1
5. Find the harmonic function u(x, y, z), (x, y, z) ∈ R3 , in the lower half
space {(x, y, z) : z < 0}, subject to the boundary condition
\[
u(x, y, 0) = \frac{1}{\big(1 + x^2 + y^2\big)^{3/2}}.
\]
6. Find the Green function of the right half plane RP = {(x, y) : x > 0}
and solve the Poisson equation
\[
\begin{cases} \Delta u(x) = f(x), & x \in RP\\ u(x) = h(x), & x \in \partial RP. \end{cases}
\]
7. Find the Green function of the upper half plane UP = {(x, y) : y > 0}
subject to the boundary condition
Gn (x, y) = 0, (x, y) ∈ ∂U P
8. Using the Green function for the unit ball in Rn , n = 2, 3, derive the
Poisson formula
\[
u(x) = \frac{1}{\omega_n}\int_{|y| = 1}\frac{1 - |x|^2}{|y - x|^n}\,f(y)\,dS(y),
\]
where ωn is the area of the unit sphere in Rn which gives the solution
of the Poisson boundary value problem
{
∆ u(x) = −f (x), | x |< 1
u(x) = g(x), | x |= 1.
and
wxx (x, y) + wyy (x, y) = 0, 0 < x < a, 0 < y < b
(W) w(x, 0) = w(x, b) = 0, 0 < x < a
w(0, y) = g1 (y), w(a, y) = g2 (y), 0 < y < b
Therefore,
\[
v(x, y) = \sum_{n=1}^{\infty}\big[A_n\cosh\lambda_n y + B_n\sinh\lambda_n y\big]\sin\lambda_n x.
\]
The above coefficients An and Bn are determined from the boundary con-
ditions in problem (V ):
\[
f_1(x) = v(x, 0) = \sum_{n=1}^{\infty} A_n\sin\lambda_n x,
\qquad
f_2(x) = v(x, b) = \sum_{n=1}^{\infty}\big[A_n\cosh\lambda_n b + B_n\sinh\lambda_n b\big]\sin\lambda_n x.
\]
The above two series are Fourier sine series and therefore,
\[
(7.2.3)\qquad A_n = \frac{2}{a}\int_0^a f_1(x)\sin\lambda_n x\,dx,
\qquad
A_n\cosh\lambda_n b + B_n\sinh\lambda_n b = \frac{2}{a}\int_0^a f_2(x)\sin\lambda_n x\,dx,
\]
where
\[
f(x) = \begin{cases} x, & 0 < x < \frac{\pi}{2}\\ \pi - x, & \frac{\pi}{2} < x < \pi. \end{cases}
\]
Display the plot of the solution u(x, y).
7.2 LAPLACE AND POISSON EQUATIONS ON RECTANGULAR DOMAINS 399
and
wxx (x, y) + wyy (x, y) = 0, 0 < x < π, 0 < y < π
w(x, 0) = w(x, π) = 0, 0 < x < π
w(0, y) = f (y), w(π, y) = f (y), 0 < y < π.
It is enough to solve only the first problem for the function v. The eigen-
values λn for this problem are λn = n and so from (7.2.3) we have
\[
A_n = \frac{2}{\pi}\int_0^{\pi} f(x)\sin nx\,dx,
\qquad
A_n\cosh n\pi + B_n\sinh n\pi = \frac{2}{\pi}\int_0^{\pi} f(x)\sin nx\,dx.
\]
Computing the integrals for the given f we obtain
\[
A_n = \frac{4\sin\frac{n\pi}{2}}{\pi n^2},
\qquad
A_n\cosh n\pi + B_n\sinh n\pi = \frac{4\sin\frac{n\pi}{2}}{\pi n^2},
\]
from which we obtain that the solution v(x, y) is given by
v(x, y) = (4/π) Σ_{n=1}^∞ [sin(nπ/2)/(n^2 sinh(nπ))] [sinh(ny) + sinh(n(π − y))] sin(nx).
Figure 7.2.1
and
Y ′′ (y) + λY (y) = 0.
Solving the eigenvalue problem we find that eigenvalues and corresponding
eigenfunctions are
Therefore,
u(x, y) = A0 + B0 y + Σ_{n=1}^∞ [An cosh(ny) + Bn sinh(ny)] cos(nx).
B0 = 1, Bn = 0.
The left hand side of the last equation is a Fourier cosine series and so, from
the uniqueness theorem for Fourier cosine expansion, it follows that
K = B0 = 1, An = Bn = 0, n = 1, 2, . . ..
u(x, y) = A0 + y,
The next example is about the Laplace equation on a rectangle with mixed
boundary conditions.
Example 7.2.3. Using the separation of variables method, solve the problem
Solution. Taking u(x, y) = X(x)Y (y) and separating the variables we have
Y″(y)/Y(y) = −X″(x)/X(x) = −λ,
and using the homogeneous boundary conditions (7.2.5) and (7.2.6) we have
If we examine all possible cases for λ: λ > 0, λ < 0 and λ = 0 in the Sturm–
Liouville problem (7.2.7), we find that the eigenvalues and corresponding
eigenfunctions of (7.2.7) are
λ0 = 0, Y0(y) = 1;   λn = n^2π^2/b^2, Yn(y) = cos(nπy/b), n = 1, 2, . . ..
Therefore,
u(x, y) = A0 x + Σ_{n=1}^∞ An sinh(nπx/b) cos(nπy/b).
f(y) = u(a, y) = A0 a + Σ_{n=1}^∞ An sinh(nπa/b) cos(nπy/b).
A0 = 1/(ab) ∫_0^b f(y) dy,   An = 2/(b sinh(nπa/b)) ∫_0^b f(y) cos(nπy/b) dy.
We cannot directly separate the variables, but as with the Laplace bound-
ary value problem with general Dirichlet boundary conditions, we can split
problem (P ) into the following two problems.
vxx (x, y) + vyy (x, y) = F (x, y), 0 < x < a, 0 < y < b
(PH) v(x, 0) = 0, v(x, b) = 0, 0 < x < a
v(0, y) = v(a, y) = 0, 0 < y < b,
and
wxx (x, y) + wyy (x, y) = 0, 0 < x < a, 0 < y < b
(L) w(x, 0) = f1 (x), w(x, b) = f2 (x), 0 < x < a
w(0, y) = g1 (y), w(a, y) = g2 (y), 0 < y < b.
If v(x, y) is the solution of problem (P H) and w(x, y) is the solution
of problem (L), then the solution u(x, y) of problem (P ) will be u(x, y) =
v(x, y) + w(x, y).
Problem (L), being a Laplace Dirichlet boundary value problem, has been
solved in Case 1°. For problem (PH), notice first that the boundary value on
every side of the rectangle is homogeneous. This suggests that v(x, y) may
be of the form
v(x, y) = Σ_{m=1}^∞ Σ_{n=1}^∞ Fmn sin(mπx/a) sin(nπy/b),
Recognizing the above equation as the double Fourier sine series of F (x, y)
we have
Fmn = −4ab/(π^2 (b^2 m^2 + a^2 n^2)) ∫_0^a ∫_0^b F(x, y) sin(mπx/a) sin(nπy/b) dy dx.
The solution of this problem is u(x, y) = v(x, y) + w(x, y), where v(x, y) is
the solution of the problem
(*)
vxx(x, y) + vyy(x, y) = 0, 0 < x < ∞, 0 < y < b
v(x, 0) = 0, v(x, b) = 0, 0 < x < ∞
v(0, y) = g(y), 0 < y < b
|lim_{x→∞} v(x, y)| < ∞, 0 < y < b,
v(x, y) = Σ_{n=1}^∞ An e^{−√λn x} sin(nπy/b),
and the orthogonality of the eigenfunctions sin(nπy/b) on the interval [0, b]:
An = (2/b) ∫_0^b g(y) sin(nπy/b) dy.
Now, let us turn our attention to problem (∗∗). If w(x, y) = X(x)Y (y)
is the solution of problem (∗∗), then, after separating the variables, from the
Laplace equation we obtain the problem
and the general solution of the second equation can be written in the special
form
Y(y) = A(µ)/sinh(µb) · sinh(µy) + B(µ)/sinh(µb) · sinh(µ(b − y)).
Therefore,
w(x, y) = sin(µx) [ A(µ)/sinh(µb) · sinh(µy) + B(µ)/sinh(µb) · sinh(µ(b − y)) ]
is a solution of the Laplace equation for any µ > 0, which satisfies all condi-
tions in problem (∗∗) except the boundary conditions at y = 0 and y = b.
Since the Laplace equation is homogeneous it follows that
w(x, y) = ∫_0^∞ sin(µx) [ A(µ)/sinh(µb) · sinh(µy) + B(µ)/sinh(µb) · sinh(µ(b − y)) ] dµ
will also be a solution of the Laplace equation. Now, it remains to find A(µ)
and B(µ) such that the above function will satisfy the boundary conditions
at y = 0 and y = b:
f1(x) = w(x, 0) = ∫_0^∞ B(µ) sin(µx) dµ,
f2(x) = w(x, b) = ∫_0^∞ A(µ) sin(µx) dµ.
We recognize the last equations as Fourier sine transforms and so A(µ) and
B(µ) can be found by taking the inverse Fourier transforms. Therefore,
we have solved problem (∗∗) and, consequently, the original problem of the
Laplace equation on the semi-infinite strip with the prescribed general Dirich-
let condition is completely solved.
We will return to this problem again in the last section of this chapter,
where we will discuss the integral transforms for solving the Laplace and
Poisson equations.
Now, we will consider an example of the Laplace equation on an unbounded
rectangular domain for which the use of the Fourier transform is not necessary.
Example 7.2.5. Solve the problem
uxx (x, y) + uyy (x, y) = 0, 0 < x < ∞, 0 < y < π,
u(x, 0) = 0, u(x, π) = 0, 0 < x < ∞,
u(0, y) = y(π − y), 0 < y < π,
lim u(x, y) = 0, 0 < y < π.
x→∞
Solution. Let u(x, y) = X(x)Y (y). After separating the variables and using
the boundary conditions at y = 0 and y = π, we obtain
For these λn the general solution of the differential equation for X(x) is
given by
Xn(x) = An e^{−√λn x} + Bn e^{√λn x}.
To ensure that u(x, y) is bounded as x → ∞, we need to take Bn = 0.
Therefore,
Xn (x) = An e−nx ,
and so,
u(x, y) = Σ_{n=1}^∞ An e^{−nx} sin(ny), 0 < x < ∞, 0 < y < π.
An = (2/π) ∫_0^π y(π − y) sin(ny) dy = 4(1 − cos(nπ))/(π n^3) = { 0, n even;  8/(π n^3), n odd. }
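The closed form for An can be spot-checked by numerical integration (an illustrative check of ours, using only the standard library and a composite trapezoid rule):

```python
import math

def A(n, pts=20001):
    # (2/pi) * integral_0^pi y (pi - y) sin(n y) dy, composite trapezoid rule
    h = math.pi / (pts - 1)
    total = 0.0
    for i in range(pts):
        y = i * h
        w = 0.5 if i in (0, pts - 1) else 1.0
        total += w * y * (math.pi - y) * math.sin(n * y)
    return 2.0 / math.pi * total * h
```

For odd n the value agrees with 8/(π n³), and for even n it vanishes.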
We need to find the solution (potential) u(x, y) of the above problem of the
form {
v(x, y), 0 < x < h, 0 < y < ∞
u(x, y) =
w(x, y), h < x < a, 0 < y < ∞,
where the functions v(x, y) and w(x, y) satisfy the boundary conditions
v(0, y) = 0, v(a, y) = 0, 0 < y < ∞
(7.2.11) w(x, 0) = V, 0 < x < a
lim v(x, y) = lim w(x, y) = 0, 0 < x < a.
y→∞ y→∞
Solution. Recall that for a vector field F(x, y) = (f(x, y), g(x, y)) and a scalar
function u(x, y), div and grad are defined by
div F(x, y) = fx(x, y) + gy(x, y),   grad u(x, y) = (ux(x, y), uy(x, y)).
We try to separate the variables. If u(x, y) = X(x)Y (y), then from (7.2.9)
after separating the variables we obtain that
(7.2.12)
(c(x) X′(x))′ + λ c(x) X(x) = 0,
X(0) = X(a) = 0
and
then, from (7.2.12) and the continuity conditions, we have the eigenvalue
problems
X1″(x) + λX1(x) = 0, X1(0) = 0
X2″(x) + λX2(x) = 0, X2(a) = 0
and thus, the general solution u(x, y) of the original problem has the form
(7.2.15)  u(x, y) = Σ_{n=1}^∞ An e^{−√λn y} Xn(x), 0 < x < a, 0 < y < ∞.
The solution of the problem now is obtained if the coefficients An are chosen
so that the above representation of u(x, y) satisfies the initial condition
V = u(x, 0) = Σ_{n=1}^∞ An Xn(x), 0 < x < a.
∫_0^a c(x) Xn(x) Xm(x) dx = 0, if m ≠ n,
then we find
An = V/∥Xn∥^2 ∫_0^a c(x) Xn(x) dx,
∥Xn∥^2 = ∫_0^a c(x) Xn^2(x) dx = c1 ∫_0^h X1n^2(x) dx + c2 ∫_h^a X2n^2(x) dx
       = c1 h/(2 sin^2(√λn h)) + c2 (a − h)/(2 sin^2(√λn (a − h))),
i.e.,
(7.2.16)  ∥Xn∥^2 = c1 h/(2 sin^2(√λn h)) + c2 (a − h)/(2 sin^2(√λn (a − h))).
(e) a = b = 1, f1 (x) = 7 sin 7πx, f2 (x) = sin πx, g1 (y) = sin 3πy,
g2 (y) = sin 6πy.
where
Amn = 4/sinh(λmn) ∫_0^1 ∫_0^1 g(y, z) sin(mπy) sin(nπz) dz dy
and
λmn = π√(m^2 + n^2).
Hint: Take u(x, y, z) = X(x)Y (y)Z(z) and separate the variables.
7.3.1 LAPLACE EQUATION IN POLAR COORDINATES 413
We will find the solution of the problem (7.3.1), (7.3.2) of the form
u(r, φ) = R(r)Φ(φ).
and
for every φ. Using this condition (as we did in the section on the wave
equation on circular domains), we have
√λ = n, n = 0, 1, 2, . . ..
Since u(r, φ) is a continuous function in the whole disc D, the function R(r)
cannot be unbounded for r = 0. Thus, B = 0, and so
R(r) = r^n, n = 0, 1, 2, . . ..
Therefore,
u(r, φ) = Σ_{n=0}^∞ r^n (An cos nφ + Bn sin nφ), 0 ≤ r ≤ a, 0 ≤ φ < 2π
are particular solutions of our problem, provided that the above series con-
verge.
We find the coefficients An and Bn so that the above u(r, φ) satisfies the
boundary condition
f(φ) = u(a, φ) = Σ_{n=0}^∞ a^n (An cos nφ + Bn sin nφ), 0 ≤ φ < 2π.
Recognizing that the above right hand side is the Fourier series of f (φ), we
have
An = 1/(a^n π) ∫_0^{2π} f(ψ) cos nψ dψ,
Bn = 1/(a^n π) ∫_0^{2π} f(ψ) sin nψ dψ;  n = 0, 1, 2, . . ..
Therefore, the solution u = u(r, φ) of our Dirichlet problem for the Laplace
equation on the disc is given by
(7.3.6)  u = a0/2 + Σ_{n=1}^∞ (r/a)^n (an cos nφ + bn sin nφ), 0 ≤ r ≤ a, 0 ≤ φ < 2π
where
(7.3.7)  an = (1/π) ∫_0^{2π} f(ψ) cos nψ dψ,   bn = (1/π) ∫_0^{2π} f(ψ) sin nψ dψ.
Using the assumption that f (φ) is a continuous and differentiable function
on the circle C and using the Weierstrass test for uniform convergence, it can
be easily shown that the series in (7.3.6) converges, it can be differentiated
term by term and it is a continuous function on the circle C. By construction,
every term of the above series satisfies the Laplace equation and therefore the
solution of the original problem is given by (7.3.6).
Now, using (7.3.6) and (7.3.7) we will derive the very important Poisson
integral formula, which gives the solution of the Dirichlet problem for the
Laplace equation on the disc D by means of an integral (Poisson integral).
If we substitute the coefficients an and bn , given by (7.3.7), into (7.3.6)
and interchange the order of summation and integration we obtain
u(r, φ) = (1/π) ∫_0^{2π} [ 1/2 + Σ_{n=1}^∞ (r/a)^n (cos nψ cos nφ + sin nψ sin nφ) ] f(ψ) dψ
        = (1/π) ∫_0^{2π} [ 1/2 + Σ_{n=1}^∞ (r/a)^n cos n(φ − ψ) ] f(ψ) dψ.
To simplify our further calculations we let z = r/a. Notice that |z| < 1. Using
Euler’s formula and the geometric sum formula
cos α = (e^{iα} + e^{−iα})/2,   Σ_{n=1}^∞ w^n = w/(1 − w), |w| < 1,
we obtain
u(r, φ) = (1/π) ∫_0^{2π} [ 1/2 + Σ_{n=1}^∞ (r/a)^n cos n(φ − ψ) ] f(ψ) dψ
        = (1/2π) ∫_0^{2π} [ 1 + Σ_{n=1}^∞ (z e^{(φ−ψ)i})^n + Σ_{n=1}^∞ (z e^{−(φ−ψ)i})^n ] f(ψ) dψ
        = (1/2π) ∫_0^{2π} [ 1 + z e^{(φ−ψ)i}/(1 − z e^{(φ−ψ)i}) + z e^{−(φ−ψ)i}/(1 − z e^{−(φ−ψ)i}) ] f(ψ) dψ
        = (1/2π) ∫_0^{2π} (1 − z^2)/(1 − 2z cos(φ − ψ) + z^2) f(ψ) dψ
        = (1/2π) ∫_0^{2π} (a^2 − r^2)/(r^2 − 2ar cos(φ − ψ) + a^2) f(ψ) dψ.
Hence,
(7.3.8)  u(r, φ) = (1/2π) ∫_0^{2π} (a^2 − r^2)/(r^2 − 2ar cos(φ − ψ) + a^2) f(ψ) dψ.
Equation (7.3.8) is called the Poisson formula for the harmonic function
u(r, φ) for the disc D with boundary values f (φ).
If we introduce the function
(7.3.9)  P(r, φ) = (a^2 − r^2)/(r^2 − 2ar cos φ + a^2),
known as the Poisson kernel, then the Poisson integral formula (7.3.8) can
be written in the form
(7.3.10)  u(r, φ) = (1/2π) ∫_0^{2π} P(r, φ − ψ) f(ψ) dψ.
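A standard property of the Poisson kernel is that its mean value over the circle equals 1, so (7.3.10) reproduces constant boundary data exactly. A short numeric check of (7.3.9)–(7.3.10) (our own sketch, not from the text):

```python
import math

def poisson_kernel(a, r, phi):
    # P(r, phi) = (a^2 - r^2) / (r^2 - 2 a r cos(phi) + a^2), 0 <= r < a
    return (a**2 - r**2) / (r**2 - 2 * a * r * math.cos(phi) + a**2)

def kernel_average(a, r, phi, pts=2000):
    # (1/2pi) * integral_0^{2pi} P(r, phi - psi) dpsi, midpoint rule;
    # for a smooth periodic integrand the midpoint rule converges rapidly.
    h = 2 * math.pi / pts
    return sum(poisson_kernel(a, r, phi - (k + 0.5) * h)
               for k in range(pts)) * h / (2 * math.pi)
```

The average is 1 for every interior point (r, φ), independent of a.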
Example 7.3.1. Let D = {(x, y) : x^2 + y^2 < 1} be the unit disc. Find
a harmonic function u(x, y) on D which is continuous on the closed disc D
and which on the boundary ∂D has value f(x, y) = x^2 − 3xy − 2y^2.
Solution. We pass to polar coordinates. From (7.3.6) it follows that the
required function u(r, φ) is given by
u(r, φ) = a0/2 + Σ_{n=1}^∞ r^n (an cos nφ + bn sin nφ), 0 ≤ r ≤ 1, 0 ≤ φ < 2π,
where an and bn are the Fourier coefficients of the function f (x, y), which
will be found from the boundary condition
cos^2 φ − 3 cos φ sin φ − 2 sin^2 φ = a0/2 + Σ_{n=1}^∞ (an cos nφ + bn sin nφ).
From
cos^2 φ − 3 cos φ sin φ − 2 sin^2 φ = (1 + cos 2φ)/2 − (3/2) sin 2φ − (1 − cos 2φ),
we find
a0 = −1,  a2 = 3/2,  b2 = −3/2
and the other coefficients an and bn are all zero. Therefore, the required
function is
u(r, φ) = −1/2 + (3/2) r^2 cos 2φ − (3/2) r^2 sin 2φ.
We can rewrite the function u(r, φ) as
u(r, φ) = −1/2 + (3/2) r^2 (cos^2 φ − sin^2 φ) − 3r^2 sin φ cos φ,
i.e.,
u(x, y) = −1/2 + (3/2)(x^2 − y^2) − 3xy.
R(r) = ln r, R(r) = 1.
Example 7.3.3. Find the harmonic function u(r, φ) on the unit disc such
that
u(1, φ) = { 1, 0 < φ < π;  0, π < φ < 2π. }
Solution. The Fourier coefficients an and bn of the function f are
a0 = (1/π) ∫_0^{2π} f(ψ) dψ = 1,
an = (1/π) ∫_0^{2π} f(ψ) cos nψ dψ = (1/π) ∫_0^π cos nψ dψ = 0,
bn = (1/π) ∫_0^{2π} f(ψ) sin nψ dψ = (1/π) ∫_0^π sin nψ dψ = (1/π)(1 − (−1)^n)/n.
Using the Poisson integral formula (7.3.8), the function u(r, φ) can be
represented in the form
u(r, φ) = (1/2π) ∫_0^{2π} (1 − r^2)/(r^2 − 2r cos(φ − ψ) + 1) f(ψ) dψ
        = (1 − r^2)/(2π) ∫_0^π dψ/(r^2 − 2r cos(φ − ψ) + 1).
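As an illustrative cross-check (ours, not from the book, assuming f ≡ 1 on (0, π) and 0 on (π, 2π)), the truncated Fourier series for this example can be compared with the Poisson integral representation at an interior point; the two must agree up to truncation and quadrature error.

```python
import math

def u_series(r, phi, terms=400):
    # a0/2 + sum b_n r^n sin(n phi) with a0 = 1, b_n = (1 - (-1)^n)/(pi n)
    s = 0.5
    for n in range(1, terms + 1):
        bn = (1 - (-1)**n) / (math.pi * n)
        s += bn * r**n * math.sin(n * phi)
    return s

def u_poisson(r, phi, pts=4000):
    # (1 - r^2)/(2 pi) * integral_0^pi dpsi / (r^2 - 2 r cos(phi-psi) + 1)
    h = math.pi / pts
    total = sum(1.0 / (r * r - 2 * r * math.cos(phi - (k + 0.5) * h) + 1.0)
                for k in range(pts)) * h
    return (1 - r * r) / (2 * math.pi) * total
```

Both representations give the same harmonic function on the open disc.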
Example 7.3.4. Solve the following Laplace equation boundary value prob-
lem on the semi-disc.
∆u(r, φ) = 0, 0 < r < 1, 0 < φ < π
u(1, φ) = c, 0 < φ < π
u(r, 0) = u(r, π) = 0, 0 < r < 1
|lim_{r→0+} u(r, φ)| < ∞, 0 < φ < π.
where v(r, φ) and w(r, φ) are the solutions of the following problems, respec-
tively,
(L)  ∆v(r, φ) = 0, 0 < r < 1, 0 ≤ φ < 2π;  v(1, φ) = f(φ), 0 ≤ φ < 2π
and
(P)  ∆w(r, φ) = −F(r, φ), 0 < r < 1, 0 ≤ φ < 2π;  w(1, φ) = 0, 0 ≤ φ < 2π.
The first problem (L) has been already solved in part 7.3.1. Therefore, it
is enough to solve the second problem (P ). We will look for the solution of
problem (P ) in the form
w(r, φ) = Σ_{m=0}^∞ Σ_{n=0}^∞ Wmn(r, φ),
where Wmn (r, φ) are the eigenfunctions of the Helmholtz eigenvalue problem
(H)  ∆w(r, φ) + λ^2 w(r, φ) = 0, 0 ≤ r < 1, 0 ≤ φ < 2π;  w(1, φ) = 0, 0 ≤ φ < 2π.
Thus,
(7.3.11)  w(r, φ) = Σ_{m=0}^∞ Σ_{n=0}^∞ Jm(λmn r) [amn cos mφ + bmn sin mφ].
Now, from
∆Wmn + λmn^2 Wmn = 0, m, n = 0, 1, 2, . . .
we have
(7.3.12)  ∆w(r, φ) = Σ_{m=0}^∞ Σ_{n=0}^∞ ∆Wmn(r, φ) = −Σ_{m=0}^∞ Σ_{n=0}^∞ λmn^2 Wmn(r, φ).
From the last expansion and from the orthogonality of the eigenfunction with
respect to the weight r we find
(7.3.13)
Amn = εm/(π λmn^2 J_{m+1}^2(λmn)) ∫_0^1 ∫_0^{2π} F(r, φ) Jm(λmn r) r cos mφ dφ dr,
Bmn = 2/(π λmn^2 J_{m+1}^2(λmn)) ∫_0^1 ∫_0^{2π} F(r, φ) Jm(λmn r) r sin mφ dφ dr,
where
εm = { 1, m = 0;  2, m > 0. }
Example 7.3.5. Solve the boundary value problem for the Poisson equation.
∆u(r, φ) = −2 − r^3 cos 3φ, 0 ≤ r < 1, 0 ≤ φ < 2π
u(1, φ) = 0, 0 ≤ φ < 2π.
For m = 0 we have
A0n = 1/(π λ0n^2 J1^2(λ0n)) ∫_0^1 ∫_0^{2π} (2 + r^3 cos 3φ) J0(λ0n r) r dφ dr
    = 4/(λ0n^2 J1^2(λ0n)) ∫_0^1 J0(λ0n r) r dr
    = 4/(λ0n^2 J1^2(λ0n)) · (1/λ0n) J1(λ0n) = 4/(λ0n^3 J1(λ0n)).
For m = 3 we have
A3n = 2/(π λ3n^2 J4^2(λ3n)) ∫_0^1 ∫_0^{2π} (2 + r^3 cos 3φ) cos 3φ J3(λ3n r) r dφ dr
    = 2/(λ3n^2 J4^2(λ3n)) ∫_0^1 r^4 J3(λ3n r) dr
    = 2/(λ3n^2 J4^2(λ3n)) · (1/λ3n) J4(λ3n) = 2/(λ3n^3 J4(λ3n)).
In the above calculations we have used the following fact from the Bessel
functions (see Chapter 3):
∫_0^1 r^{n+1} Jn(λr) dr = (1/λ) J_{n+1}(λ).
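This identity can be verified numerically, representing Jn by its integral form Jn(x) = (1/π)∫_0^π cos(nt − x sin t) dt, valid for integer n (a sketch of ours using only the standard library):

```python
import math

def J(n, x, pts=500):
    # Bessel function of the first kind (integer order n) via the
    # integral representation, midpoint rule on [0, pi].
    h = math.pi / pts
    return sum(math.cos(n * (k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(pts)) * h / math.pi

def lhs(n, lam, pts=500):
    # integral_0^1 r^{n+1} J_n(lam r) dr, midpoint rule
    h = 1.0 / pts
    return sum(((k + 0.5) * h)**(n + 1) * J(n, lam * (k + 0.5) * h)
               for k in range(pts)) * h
```

For each n and λ the left side agrees with J_{n+1}(λ)/λ.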
u(r, φ, z) = u(r, φ + 2π, z), 0 < r < a, 0 < φ < 2π, 0 < z < b
7.3.3 LAPLACE EQUATION IN CYLINDRICAL COORDINATES 423
and bounded at r = 0:
u(r, φ, z) = R(r)Φ(φ)Z(z),
then substituting it into Equation (7.3.14), after separating the variables and
using the boundary conditions (7.3.15), (7.3.16) and (7.3.17), we will obtain
the problems
For the values µn = n found above, the boundedness condition for the function
R(r) at r = 0 gives a solution of the Bessel equation in (7.3.20) of order n:
Rmn(r) = Jn(√λmn r),
where λmn are obtained by solving the equation
Jn(√λmn a) = 0,
where the harmonic functions u1 (r, φ, z) and u2 (r, φ, z) satisfy the conditions
u1(a, φ, z) = 0, u1(r, φ, 0) = f(r, φ), u1(r, φ, b) = 0
u2(a, φ, z) = 0, u2(r, φ, 0) = 0, u2(r, φ, b) = g(r, φ).
where the coefficients Amn , Bmn , Cmn and Dmn are found from the bound-
ary functions f (r, φ) and g(r, φ) and the orthogonality of the Bessel functions
Jn (·) with respect to the weight function r:
Amn = 2/(a^2 π εn J′n(√λmn a)^2) ∫_0^{2π} ∫_0^a f(r, φ) cos nφ Jn(√λmn r) r dr dφ,
Bmn = 2/(a^2 π εn J′n(√λmn a)^2) ∫_0^{2π} ∫_0^a f(r, φ) sin nφ Jn(√λmn r) r dr dφ,
Cmn = 2/(a^2 π εn J′n(√λmn a)^2) ∫_0^{2π} ∫_0^a g(r, φ) cos nφ Jn(√λmn r) r dr dφ,
Dmn = 2/(a^2 π εn J′n(√λmn a)^2) ∫_0^{2π} ∫_0^a g(r, φ) sin nφ Jn(√λmn r) r dr dφ,
and
εn = { 2, n = 0;  1, n ≠ 0. }
where f (θ, φ) is a given function. Let us recall from the previous chapters
that the Laplacian ∆ u(r, θ, φ) in spherical coordinates is given by
(7.3.23)  ∆u ≡ (1/r^2) ∂/∂r( r^2 ∂u/∂r ) + 1/(r^2 sin θ) ∂/∂θ( sin θ ∂u/∂θ ) + 1/(r^2 sin^2 θ) ∂^2u/∂φ^2.
As many times before, we look for the solution of this problem of the form
The function Y (θ, φ) also satisfies the periodic and boundary conditions
(7.3.26)
Y(θ, φ + 2π) = Y(θ, φ), 0 < θ < π, 0 < φ < 2π
|lim_{θ→0+} Y(θ, φ)| < ∞,  |lim_{θ→π−} Y(θ, φ)| < ∞.
Y (θ, φ) = P (θ)Φ(φ),
and
(7.3.28)
sin^2 θ [ 1/(sin θ P(θ)) d/dθ( sin θ dP/dθ ) + λ ] = µ,
|lim_{θ→0+} P(θ)| < ∞,  |lim_{θ→π−} P(θ)| < ∞.
where Pn^(m)(x) are the associated Legendre polynomials. Therefore, the
eigenfunctions of problem (7.3.28) are
{ Pn(cos θ), Pn^(m)(cos θ) cos mφ, Pn^(m)(cos θ) sin mφ : n ≥ 0, m ∈ N },
where Pn(·) is the Legendre polynomial of order n and Pn^(m)(·) is the
associated Legendre polynomial.
Now, let us solve Equation (7.3.24) for the found λ = n(n + 1). Two
independent solutions of this equation are
Since the above series is uniformly convergent for each fixed 0 ≤ r < 1 it can
be differentiated twice term by term and since each member of the series is
a harmonic function on the unit ball the series is a harmonic function on the
unit ball. Therefore, the general solution of the original problem is given by
(7.3.30)  u(r, θ, φ) = Σ_{n=0}^∞ Σ_{m=0}^n [Amn cos mφ + Bmn sin mφ] r^n Pn^(m)(cos θ).
f(θ, φ) = u(1, θ, φ) = Σ_{n=0}^∞ Σ_{m=0}^n [Amn cos mφ + Bmn sin mφ] Pn^(m)(cos θ),
where
εm = { 2, m = 0;  1, m > 0. }
Therefore, the solution of our original problem (7.3.19) is given by (7.3.27),
where Amn and Bmn are given by (7.3.31).
Example 7.3.6. Find the harmonic function u(r, θ) in the unit ball
{(r, θ, φ) : 0 ≤ r < 1}
such that
u(1, θ) = sin4 θ, 0 ≤ θ < π.
where Pn (·) are the Legendre polynomials of order n. To find the coeffi-
cients an we use the boundary condition at r = 1:
(7.3.32)  sin^4 θ = u(1, θ) = Σ_{n=0}^∞ an Pn(cos θ).
Using the orthogonality property of the Legendre polynomials, from the above
expansion we have
an ∫_0^π Pn^2(cos θ) sin θ dθ = ∫_0^π sin^4 θ Pn(cos θ) sin θ dθ,
and so
an = (2n + 1)/2 ∫_0^π sin^4 θ Pn(cos θ) sin θ dθ.
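Since sin⁴θ is a degree-4 polynomial in cos θ, only a0, a2 and a4 are nonzero, and the truncated expansion reproduces sin⁴θ exactly. A numeric check (our own sketch), computing Pn by the three-term recurrence:

```python
import math

def P(n, x):
    # Legendre polynomial P_n(x) via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def a(n, pts=2000):
    # a_n = (2n+1)/2 * integral_0^pi sin^4(t) P_n(cos t) sin t dt (midpoint)
    h = math.pi / pts
    s = sum(math.sin((k + 0.5) * h)**4 * P(n, math.cos((k + 0.5) * h))
            * math.sin((k + 0.5) * h) for k in range(pts)) * h
    return (2 * n + 1) / 2.0 * s
```

Summing a(n) P(n, cos θ) for n = 0, …, 5 recovers sin⁴θ at any θ; the odd coefficients vanish by parity.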
The Poisson equation on the unit ball can be solved in a similar way as
in the unit disc. For the solution of the Poisson boundary value problem see
Exercise 19 of this section.
where n is the outward unit normal vector to the circle, provided that
∫_0^{2π} f(φ) dφ = 0.
where n is the inward unit normal vector to the circle, provided that
∫_0^{2π} f(φ) dφ = 0.
In Exercises 3–7 find the harmonic function u(r, φ) in the unit disc which
on the boundary assumes the given function f (φ).
3. f (φ) = A sin φ.
4. f (φ) = A sin3 φ + B.
5. f(φ) = { A sin φ, 0 ≤ φ < π;  (A/3) sin^3 φ, π < φ < 2π. }
6. f(φ) = { 1, 0 ≤ φ < π;  0, π < φ < 2π. }
7. f(φ) = (1/2)(π − φ).
∆ u(r, φ) = 0, (r, φ) ∈ Ω
10. Ω = {(r, φ) : 0 < r < 2, 0 < φ < π}, uφ(r, 0) = uφ(r, π) = 0,
u(2, φ) = { c, 0 < φ < π/2;  0, π/2 < φ < π. }
u(1, θ, φ) = f (θ), 0 < θ < π.
16. f(θ) = { 4, 0 < θ < π/2;  0, π/2 < θ < π. }
17. f (θ) = 1 − cos θ, 0 < θ < π.
20. Solve the Laplace equation outside the unit ball with given Dirichlet
boundary values, i.e., solve the problem
∆ u(r, θ, φ) = 0, r > 1, 0 < φ < 2π, 0 < θ < π
u(1, θ, φ) = f (θ, φ), 0 < θ < π, 0 < φ < 2π
| lim u(r, θ, φ) |< ∞, 0 < θ < π, 0 < φ < 2π.
r→+∞
21. Solve the Laplace equation inside the unit ball with given Neumann
boundary values, i.e., solve the problem
∆ u(r, θ, φ) = 0, 0 < r < 1, 0 < φ < 2π, 0 < θ < π
un (1, θ, φ) = f (θ, φ), 0 < θ < π, 0 < φ < 2π
| lim u(r, θ, φ) |< ∞, 0 < θ < π, 0 < φ < 2π,
r→0+
where n is the outward unit normal vector to the sphere and f (θ, φ)
is a given function which satisfies the condition
∫_0^{2π} ∫_0^π f(θ, φ) sin θ dθ dφ = 0.
22. Show that the solution u(r, θ, φ) of the Laplace equation between two
concentric balls with given Dirichlet boundary values
∆ u(r, θ, φ) = 0, r1 < r < r2 , 0 < φ < 2π, 0 < θ < π
u(r2 , θ, φ) = f (θ, φ), 0 < θ < π, 0 < φ < 2π
u(r1 , θ, φ) = 0, 0 < θ < π, 0 < φ < 2π
| lim u(r, θ, φ) |< ∞, 0 < θ < π, 0 < φ < 2π
r→+∞
is given by
u(r, θ, φ) = Σ_{n=0}^∞ an [ (r/r2)^n − (r2/r)^{n+1} ] Pn(cos θ),
where
an = (2n + 1)/2 · 1/[ (r1/r2)^n − (r2/r1)^{n+1} ] ∫_0^π f(θ) Pn(cos θ) sin θ dθ.
Hint: Use the solution of the Laplace equation inside the ball and the
solution of the Laplace equation outside a ball. (see Exercise 20 of
this section.)
Solution. Let
U(x, ω) = F(u(x, y)) = ∫_{−∞}^∞ u(x, y) e^{−iωy} dy
and
U (0, ω) = F (f ) = F (ω).
If we apply the Fourier transform to the Laplace equation, then in view of the
boundary condition for the function u(x, y) at x = 0, we obtain the ordinary
differential equation
d^2U(x, ω)/dx^2 − ω^2 U(x, ω) = 0, x ≥ 0.
The general solution of the above equation is given by
U (x, ω) = F (ω)e−|ω| x .
Now, using the fact that
F^{−1}( e^{−|ω| x} )(y) = (1/π) x/(x^2 + y^2),
(see the Table B of Appendix B), from the convolution theorem for the Fourier
transform we have
(7.4.1)  u(x, y) = (1/π) ( f ∗ x/(x^2 + y^2) )(y) = (1/π) ∫_{−∞}^∞ f(t) x/(x^2 + (y − t)^2) dt.
The function
Px(y) = (1/π) x/(x^2 + y^2)
is called the Poisson kernel for the Laplace equation on the right half plane
{(x, y) : x > 0}, and Equation (7.4.1) is known as the Poisson integral
formula for harmonic functions on the right half plane.
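The Poisson kernels Pa(y) = (1/π) a/(a² + y²) form a semigroup under convolution, so the harmonic extension of the boundary data f(t) = 1/(1 + t²) into x > 0 has the closed form (1 + x)/((1 + x)² + y²). This gives a concrete numeric test of the integral formula (our illustrative check, not from the text):

```python
import math

def u(x, y, L=200.0, pts=40001):
    # (1/pi) * integral f(t) * x / (x^2 + (y - t)^2) dt with f(t) = 1/(1+t^2),
    # truncated to [-L, L]; composite trapezoid rule.
    h = 2 * L / (pts - 1)
    total = 0.0
    for i in range(pts):
        t = -L + i * h
        w = 0.5 if i in (0, pts - 1) else 1.0
        total += w * (1.0 / (1.0 + t * t)) * x / (x * x + (y - t)**2)
    return total * h / math.pi
```

At (x, y) = (1, 1) the exact value is (1 + 1)/((1 + 1)² + 1) = 0.4.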
Example 7.4.2. Solve the following Dirichlet boundary value problem in the
given semi-infinite strip
uxx (x, y) + uyy (x, y) = 0, 0 < x < ∞, 0 < y < b
u(x, 0) = f (x), u(x, b) = 0, 0 < x < ∞
u(0, y) = 0, lim u(x, y) = 0, 0 < y < b,
x→+∞
and taking b → ∞ solve the boundary value problem for the Laplace equation in
the first quadrant {(x, y) : 0 < x < ∞, 0 < y < ∞}.
Solution. Since the boundary conditions of the problem do not involve
derivatives, we use the Fourier sine transform. Let
U(ω, y) = Fs(u(x, y)) = ∫_0^∞ u(x, y) sin(ωx) dx,
and
U (ω, 0) = Fs (f ) = F (ω).
If we apply the Fourier sine transform to the Laplace equation, then in view
of the boundary condition for the function u(x, y) at x = 0, we obtain the
ordinary differential equation
d^2U(ω, y)/dy^2 − ω^2 U(ω, y) = 0, 0 ≤ y < ∞.
Now in (7.4.2) take b → ∞. Assuming that we can pass the limit inside the
integrals, from the following limit (obtained by l’Hospital’s rule)
lim_{b→∞} sinh(ω(b − y))/sinh(ωb) = e^{−ωy}
it follows that
u(x, y) = (2/π) ∫_0^∞ f(t) ( ∫_0^∞ sin(ωt) sin(ωx) e^{−ωy} dω ) dt.
Evaluating the inner integral we obtain
u(x, y) = (1/π) ∫_0^∞ f(t) [ y/((x − t)^2 + y^2) − y/((x + t)^2 + y^2) ] dt.
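The inner integral used in this step has the closed form ∫_0^∞ sin(ωt) sin(ωx) e^{−ωy} dω = (1/2)[ y/((x − t)² + y²) − y/((x + t)² + y²) ], which follows from sin a sin b = ½[cos(a − b) − cos(a + b)] and ∫_0^∞ cos(cω) e^{−yω} dω = y/(y² + c²). A quick numeric confirmation (our own sketch):

```python
import math

def lhs(x, t, y, L=60.0, pts=60001):
    # integral_0^L sin(w t) sin(w x) e^{-w y} dw, trapezoid rule;
    # the e^{-w y} factor makes the truncation error negligible.
    h = L / (pts - 1)
    total = 0.0
    for i in range(pts):
        w = i * h
        wt = 0.5 if i in (0, pts - 1) else 1.0
        total += wt * math.sin(w * t) * math.sin(w * x) * math.exp(-w * y)
    return total * h

def rhs(x, t, y):
    return 0.5 * (y / ((x - t)**2 + y * y) - y / ((x + t)**2 + y * y))
```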
and
U (0, ω) = Fc (f ) = F (ω).
If we apply the Fourier cosine transform to the Laplace equation, then in view
of the boundary condition for the function u(x, y) at y = 0, we obtain the
ordinary differential equation
d^2U(x, ω)/dx^2 − ω^2 U(x, ω) − uy(x, 0) = 0, 0 ≤ x ≤ π,
i.e.,
d^2U(x, ω)/dx^2 − ω^2 U(x, ω) = 0, 0 ≤ x ≤ π.
The general solution of the above equation is given by
From the inversion formula for the Fourier cosine transform we obtain
u(x, y) = (2/π) ∫_0^∞ U(x, ω) cos(ωy) dω
        = (2/π) ∫_0^∞ ∫_0^∞ f(t) sinh(ωx)/sinh(ωa) cos(ωt) cos(ωy) dt dω,
i.e.,
u(x, y) = (2/π) ∫_0^∞ ( ∫_0^∞ sinh(ωx)/sinh(ωa) cos(ωt) cos(ωy) dω ) f(t) dt.
Now, we will introduce the two dimensional Fourier transform which can
be applied to solve the Laplace equation on unbounded domains.
The Fourier transform of an integrable function f (x), x = (x, y) ∈ R2 is
defined by
f̂(w) = F( f(x) ) = ∫∫_{R^2} f(x) e^{−i w·x} dx,
where for x = (x, y) and w = (ξ, η), w·x is the usual dot product defined by
w·x = ξx + ηy.
The higher dimensional Fourier transform is defined similarly.
The basic properties of the two dimensional Fourier transform are just like
those of the one dimensional Fourier transform and they are summarized in
the following theorem.
Theorem 7.4.1. Suppose that f (x), x = (x, y), is integrable on R2 . Then
(a) For any fixed a ∈ R^2,
F( f(x − a) ) = e^{−i w·a} F( f(x) ),   F( e^{i a·x} f(x) )(w) = f̂(w − a).
(b) If the partial derivatives fx(x) and fy(x) exist and are integrable on
R^2, then
F( fx ) = iξ F( f ),   F( fy ) = iη F( f ).
The solution of the above differential equation, in view of the boundary con-
ditions, is given by
U(ξ, η, z) = F(ξ, η) e^{−√(ξ^2+η^2) z},
where
g(x, y, z) = F^{−1}( e^{−√(ξ^2+η^2) z} ).
To find the function g(x, y, z) is not a trivial matter. Using the inversion
Fourier transform formula, given in part (d) in the above theorem, we have
g(x, y, z) = (1/4π^2) ∫_{−∞}^∞ ∫_{−∞}^∞ e^{(xξ+yη)i} e^{−√(ξ^2+η^2) z} dξ dη.
(See Exercise 2, page 247, in the book by G. B. Folland, [6] for details.)
Therefore,
u(x, y, z) = ∫_{−∞}^∞ ∫_{−∞}^∞ f(ξ, η) g(x − ξ, y − η, z) dξ dη
           = (z/2π) ∫_{−∞}^∞ ∫_{−∞}^∞ f(ξ, η)/[ (x − ξ)^2 + (y − η)^2 + z^2 ]^{3/2} dξ dη.
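In polar coordinates the normalization of this kernel reduces to z ∫_0^∞ ρ (ρ² + z²)^{−3/2} dρ = 1, so the formula reproduces constant boundary data, as a Poisson kernel should. A numeric check (our own sketch):

```python
def kernel_mass(z, L=2000.0, pts=400001):
    # z * integral_0^L rho / (rho^2 + z^2)^{3/2} d rho, trapezoid rule;
    # the exact value of the full integral (L -> infinity) is 1.
    h = L / (pts - 1)
    total = 0.0
    for i in range(pts):
        rho = i * h
        w = 0.5 if i in (0, pts - 1) else 1.0
        total += w * rho / (rho * rho + z * z)**1.5
    return total * h * z
```

The truncation error is z/√(L² + z²), i.e. roughly z/L.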
where Jn (·) is the Bessel function of the first kind of order n, provided that
the improper integral exists.
For the Hankel transform we have the inversion formula
f(r) = Hn^{−1}( Fn(ω) ) = ∫_0^∞ ω Fn(ω) Jn(ωr) dω.
The Hankel transform can be defined of any order µ ≥ − 12 , but for our
purposes we will need the Hankel transform only of nonnegative integer order
n.
Important cases for our applications in solving partial differential equations
are n = 0 and n = 1 and the most important properties (for our applications)
of the Hankel transform are the following.
Property 1. Let f (r) be a function defined on r ≥ 0. If f (r) and f ′ (r)
are bounded at the origin r = 0 and satisfy the boundary conditions
lim_{r→∞} √r f(r) = lim_{r→∞} √r f′(r) = 0,
|lim_{r→0+} r f′(r) Jn(ωr)| < ∞,  |lim_{r→0+} r f(r) J′n(ωr)| < ∞,
7.4.2 THE HANKEL TRANSFORM METHOD 439
then
(7.4.3)  ∫_0^∞ r Jn(ωr) [ f″(r) + (1/r) f′(r) − (n^2/r^2) f(r) ] dr = −ω^2 Fn(ω).
Indeed, integrating by parts,
∫_0^∞ r Jn(ωr) [ f″(r) + (1/r) f′(r) − (n^2/r^2) f(r) ] dr
= ∫_0^∞ [ d/dr( r df/dr ) − (n^2/r) f(r) ] Jn(ωr) dr
= [ r f′(r) Jn(ωr) − ω r f(r) J′n(ωr) ]_{r=0}^{r=∞} − ω^2 ∫_0^∞ r f(r) Jn(ωr) dr = −ω^2 Fn(ω).
(7.4.4)  ∫_0^∞ r J0(ωr)/√(r^2 + z^2) dr = (1/ω) e^{−ωz}.
This property can be justified very easily using the definition of the Hankel
transform.
(7.4.5)  ∫_0^∞ J0(ωr) e^{−zω} dω = 1/√(r^2 + z^2).
The property can be verified by using the series expansion of the Bessel
function J0(ωr) (from Chapter 3) and integrating the series term by term.
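Identity (7.4.5) can also be confirmed numerically, again representing J0 by its integral form J0(x) = (1/π)∫_0^π cos(x sin t) dt (an illustrative sketch of ours):

```python
import math

def J0(x, pts=200):
    # J_0 via its integral representation, midpoint rule on [0, pi]
    h = math.pi / pts
    return sum(math.cos(x * math.sin((k + 0.5) * h))
               for k in range(pts)) * h / math.pi

def lhs(r, z, L=60.0, pts=8001):
    # integral_0^L J0(w r) e^{-z w} dw, trapezoid rule; the e^{-z w}
    # factor makes the tail beyond L negligible for z of order 1.
    h = L / (pts - 1)
    total = 0.0
    for i in range(pts):
        w = i * h
        wt = 0.5 if i in (0, pts - 1) else 1.0
        total += wt * J0(w * r) * math.exp(-z * w)
    return total * h
```

The result agrees with 1/√(r² + z²).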
Solution. For part (a) we need to solve the boundary value problem
(7.4.6)  ∆u(r, z) = (1/r) ∂/∂r( r ∂u/∂r ) + ∂^2u/∂z^2 = 0, r > 0, z > 0,
(7.4.7)  u(r, 0) = f(r),  lim_{z→∞} u(r, z) = 0, r > 0,
(7.4.8)  lim_{r→∞} u(r, z) = lim_{r→∞} ur(r, z) = 0, z > 0.
Let
U(ω, z) = H( u(r, z) ) = ∫_0^∞ r J0(ωr) u(r, z) dr,
F(ω) = H( f(r) ) = ∫_0^∞ r J0(ωr) f(r) dr.
Using the Laplace equation and integrating by parts, we have
∫_0^∞ r J0(ωr) ∂^2u/∂z^2 dr = −∫_0^∞ J0(ωr) ∂/∂r( r ∂u/∂r ) dr
= −[ r J0(ωr) ∂u/∂r ]_{r=0}^{r=∞} + ω ∫_0^∞ r J0′(ωr) ∂u/∂r dr = ω ∫_0^∞ r J0′(ωr) ∂u/∂r dr
= ω [ r u(r, z) J0′(ωr) ]_{r=0}^{r=∞} − ω ∫_0^∞ u(r, z) ∂/∂r( r J0′(ωr) ) dr
= −ω ∫_0^∞ u(r, z) ∂/∂r( r J0′(ωr) ) dr
= −ω ∫_0^∞ u(r, z) J0′(ωr) dr − ω^2 ∫_0^∞ r u(r, z) J0″(ωr) dr.
Thus,
(7.4.9)  ∫_0^∞ r J0(ωr) ∂^2u/∂z^2 dr = −ω ∫_0^∞ u(r, z) J0′(ωr) dr − ω^2 ∫_0^∞ r u(r, z) J0″(ωr) dr.
The solution of the transformed equation which is bounded as z → ∞ is
U(ω, z) = F(ω) e^{−ωz}.
Taking the inverse Hankel transform we obtain that the solution u = u(r, z)
of the original boundary value problem is given by
u = ∫_0^∞ ω J0(ωr) F(ω) e^{−ωz} dω = ∫_0^∞ ( ∫_0^∞ σ J0(ωσ) f(σ) dσ ) ω J0(ωr) e^{−ωz} dω.
The solution of (b) follows from (a) by substituting the given function f :
u(r, z) = ∫_0^∞ ( ∫_0^∞ σ J0(ωσ) f(σ) dσ ) ω J0(ωr) e^{−ωz} dω
        = T ∫_0^∞ ( ∫_0^a σ J0(ωσ) dσ ) ω J0(ωr) e^{−ωz} dω
        = T ∫_0^∞ ( ∫_0^{aω} (ρ/ω^2) J0(ρ) dρ ) ω J0(ωr) e^{−ωz} dω
        = T ∫_0^∞ (1/ω) ( ∫_0^{aω} ρ J0(ρ) dρ ) J0(ωr) e^{−ωz} dω.
(see the section on the Bessel functions in Chapter 3), then the above solution
u(r, z) can be written as
u(r, z) = aT ∫_0^∞ J0(ωr) J1(aω) e^{−ωz} dω.
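On the axis r = 0 this solution has a closed form: since J0(0) = 1 and ∫_0^∞ J1(aω) e^{−ωz} dω = (1/a)(1 − z/√(a² + z²)), we get u(0, z) = T(1 − z/√(a² + z²)), which equals T at z = 0, as the boundary data requires. A numeric check (our sketch; the helper name u_axis is ours):

```python
import math

def J1(x, pts=200):
    # J_1 via J_1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt
    h = math.pi / pts
    return sum(math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(pts)) * h / math.pi

def u_axis(a, T, z, L=40.0, pts=8001):
    # a*T * integral_0^L J1(a w) e^{-w z} dw, trapezoid rule (truncated)
    h = L / (pts - 1)
    total = 0.0
    for i in range(pts):
        w = i * h
        wt = 0.5 if i in (0, pts - 1) else 1.0
        total += wt * J1(a * w) * math.exp(-w * z)
    return a * T * total * h
```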
(a) ∆u = ∂^2u/∂r^2 + (1/r) ∂u/∂r + ∂^2u/∂z^2 = 0, 0 < r < ∞, z ∈ R,
(b) lim_{r→∞} u(r, z) = lim_{|z|→∞} u(r, z) = 0,
(c) lim_{r→0} r u(r, z) = 0,  lim_{r→0} r ur(r, z) = f(z).
Solution. Let
U(ω, z) = H( u(r, z) ) = ∫_0^∞ r J0(ωr) u(r, z) dr.
From the given boundary conditions (b) at r = ∞ and the boundary conditions
(c), in view of the following properties of the Bessel functions
J0(0) = 1,  J0′(r) = −J1(r),  lim_{r→0+} (1/r) J1(ωr) = ω/2,
it follows that
d^2U(ω, z)/dz^2 − ω^2 U(ω, z) = f(z), −∞ < z < ∞.
If we take the inverse Hankel transform of the above U(ω, z) and use
(7.4.5) (after changing the integration order), we obtain that the solution
u(r, z) of our original problem is given by
u(r, z) = ∫_0^∞ ω J0(ωr) U(ω, z) dω = −(1/2) ∫_{−∞}^∞ ( ∫_0^∞ e^{−ω|z−τ|} J0(ωr) dω ) f(τ) dτ
        = −(1/2) ∫_{−∞}^∞ f(τ)/√(r^2 + (z − τ)^2) dτ.
7. D = {(x, y) : −∞ < x < ∞, 0 < y < ∞}; u(x, 0) = cos x.
8. D = {(x, y) : 0 < x < 1, 0 < y < ∞}; u(0, y) = 0, u(1, y) = e^{−y},
for y > 0 and u(x, 0) = 0 for 0 < x < 1.
9. D = {(x, y) : 0 < x < ∞, 0 < y < ∞}; u(x, 0) = 0 for x > 0 and
u(0, y) = { 1, 0 < y < 1;  0, y > 1. }
10. Using the two dimensional Fourier transform show that the solution
u(x, y) of the equation
u(x, y) = (1/4π) ∫_0^∞ ( ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−((x−ξ)^2+(y−η)^2)/(4t)} f(ξ, η) dξ dη ) (e^{−t}/t) dt.
11. Using properties of the Bessel functions prove the following identities
for the Hankel transform.
(a) H0( 1/x )(ω) = 1/ω.
(b) H0( e^{−ax} )(ω) = a/(a^2 + ω^2)^{3/2}.
(c) H0( (1/x) e^{−ax} )(ω) = L( J0(ax) )(ω) = 1/√(a^2 + ω^2).
(d) H0( e^{−a^2x^2} )(ω) = (1/2a^2) e^{−ω^2/(4a^2)}.
(e) Hn( f(ax) )(ω) = (1/a^2) Hn( f(x) )(ω/a).
In Problems 12–13, use the Hankel transform to solve the partial differential
equation
(1/r) ∂/∂r( r ∂u/∂r ) + ∂^2u/∂z^2 = 0, 0 < r < ∞, 0 < z < ∞,
7.5 PROJECTS USING MATHEMATICA 445
13. lim_{z→∞} u(r, z) = 0,  lim_{r→∞} u(r, z) = 0,  u(r, 0) = 1/√(a^2 + r^2).
and
wxx(x, y) + wyy(x, y) = 0, 0 < x < π, 0 < y < π,
w(x, 0) = f(x) = { x, 0 < x < π/2;  π − x, π/2 < x < π, }
w(0, y) = w(π, y) = 0, 0 < y < π.
The solution of the original problem is u(x, y) = v(x, y) + w(x, y). Because
of symmetry we need to solve only the problem for the function w(x, y).
Let us use Mathematica to “derive” this result. Let w(x, y) = X(x) Y(y).
After separating the variables we obtain the equations
X″(x) + λ^2 X(x) = 0, X(0) = 0, X(π) = 0
Y″(y) − λ^2 Y(y) = 0.
In[1] := f[x_] := Piecewise[{{x, 0 < x < π/2}, {π − x, π/2 < x < π}}];
Figure 7.5.1
Figure 7.5.2
Project 7.5.3. Use the Fourier transform to solve the boundary value prob-
lem
uxx (x, y) + uyy (x, y) = 0, −∞ < x < ∞, 0 < y < ∞
u(x, 0) = f (x), −∞ < x < ∞
lim u(x, y) = 0, −∞ < x < ∞,
y→∞
lim u(x, y) = 0, 0 < y < ∞.
|x|→∞
Plot the solution u(x, y) of the problem for the boundary value functions
(a) f(x) = { 1/2, 0 < x < 1/2;  0, x ≤ 0 or x ≥ 1/2. }
(b) f(x) = x^2 e^{−x^4}, −∞ < x < ∞.
In each case plot the level sets of the solution.
Solution. (a) Define the boundary function f (x):
In[1] := f[x_] := Piecewise[{{0, x ≤ 0}, {1/2, 0 < x ≤ 1/2}, {0, x > 1/2}}];
Next, define the Laplace operator L u = uxx + uyy :
In[2] := Lapu = D[u[x, y], x, x] + D[u[x, y], y, y];
Find the Fourier transform of the Laplace operator with respect to x:
In[3] :=FourierTransform [D[u[x, y], x, x], x, ω, FourierParameters
→ {1, −1}]+ FourierTransform [D[u[x, y], y, y], x, ω,
FourierParameters → {1, −1}]
F( f )(ω) = (1/√(2π)) ∫_{−∞}^∞ f(x) e^{iωx} dx,
F^{−1}( F )(x) = (1/√(2π)) ∫_{−∞}^∞ F(ω) e^{−iωx} dω.
In[12] := Plot3D[u[x, y], {x, −4, 4}, {y, 0.005, 4}, PlotRange → All]
In[13] := ContourPlot[u[x, y], {x, −4, 4}, {y, 0.005, 4},
FrameLabel → {"x", "y"}, ContourLabels → True]
The plots of the solutions u(x, y) and their level sets for (a) and (b)
are displayed in Figure 7.5.3 and Figure 7.5.4, respectively.
Figure 7.5.3
Figure 7.5.4
CHAPTER 8
Despite the methods developed in the previous chapters for solving the
wave, heat and Laplace equations, very often these analytical methods are not
practical, and we are forced to find approximate solutions of these equations.
In this chapter we will discuss the finite difference numerical methods for
solving partial differential equations.
The first two sections are devoted to some mathematical preliminaries re-
quired for developing these numerical methods. Some basic facts of linear
algebra, numerical iterative methods for linear systems and finite differences
are described in these sections.
All the numerical results of this chapter and various graphics were produced
using the mathematical software Mathematica.
452 8. FINITE DIFFERENCE NUMERICAL METHODS
c_ij = Σ_{k=1}^{n} a_ik b_kj,   i = 1, 2, . . . , m;  j = 1, 2, . . . , p,

xᵀy = Σ_{k=1}^{n} x_k y_k = 0.
A square matrix is called diagonal if the only nonzero entries are on the
main diagonal.
The determinant of a square n × n matrix A, denoted by det(A) or |A|
can be defined
( )inductively:
If A = a11 is an 1 × 1 matrix, then det(A) = a11 . If
( )
a11 a12
A=
a21 a22
is a 2 × 2 matrix, then

det(A) = a11 a22 − a12 a21.
AB = BA = In .
Ax = λx.
det(λIn − A) = 0.
A = U DU T
xT Ax ≥ 0
∥A∥∞ = max_{1≤i≤n} Σ_{j=1}^{n} |a_ij|,   ∥A∥₁ = max_{1≤j≤n} Σ_{i=1}^{n} |a_ij|,   ∥A∥₂ = √λmax,
where λmax is the largest eigenvalue of the symmetric and positive definite
matrix AT A.
If A is a real symmetric and positive definite matrix, then
∥A∥2 = |λmax |,
Using these matrices, the linear system (8.1.2) can be written in the matrix
form
(8.1.3) Ax = b.
8.1 LINEAR ALGEBRA AND ITERATIVE METHODS 457
∥x^(k) − x⋆∥ ≤ ( q/(1 − q) ) ∥x^(k) − x^(k−1)∥,   k = 1, 2, . . .,

or equivalently

∥x^(k) − x⋆∥ ≤ ( q^k/(1 − q) ) ∥x^(1) − x^(0)∥,   k = 1, 2, . . ..
The proof of this theorem is far beyond the scope of this book, and the
interested student is referred to the book by M. Rosenlicht [11], or the book
by M. Reed and B. Simon [15].
Now we are ready to describe some iterative methods for solving a linear
system.
Ax = b
where C is some matrix (usually such that C is invertible). Then the (k+1)th
approximation x(k+1) of the exact solution x is determined recursively by
(8.1.4)   x^(k+1) = ( In − CA ) x^(k) + Cb,   k = 0, 1, . . ..
x(k+1) = Bx(k) + Cb
converges for any initial point x(0) if and only if the spectral radius
ρ(B) := max{ |λi| : 1 ≤ i ≤ n } < 1,
and

        0   a12   a13   . . .  a1,n−1   a1n
        0    0    a23   . . .  a2,n−1   a2n
        0    0     0    . . .  a3,n−1   a3n
U =     .    .     .    . . .    .       .
        0    0     0    . . .    0    an−1,n
        0    0     0    . . .    0       0
If we take B = D, then the matrix C in this case is given by

C = In − B⁻¹A = −D⁻¹( L + U ),

with entries

c_ij = { −a_ij / a_ii,  if i ≠ j;   0,  if i = j }.
and so the iterative process is given by

(8.1.5)   x^(k+1) = −D⁻¹( L + U ) x^(k) + D⁻¹ b,   k = 0, 1, 2, . . ..

The iterative process (8.1.5) in coordinate form is given by

(8.1.6)   x_i^(k+1) = (1/a_ii) ( b_i − Σ_{j=1, j≠i}^{n} a_ij x_j^(k) ),   i = 1, . . . , n;  k = 0, 1, 2, . . ..
or in matrix form

(8.1.8)   x^(k+1) = D⁻¹ ( b − L x^(k+1) − U x^(k) ),   k = 0, 1, 2, . . ..
An n × n matrix A = (a_ij) is called a strictly diagonally dominant matrix if

Σ_{j=1, j≠i}^{n} |a_ij| < |a_ii|,   1 ≤ i ≤ n.
= max_{1≤i≤n} (1/|a_ii|) Σ_{j=1, j≠i}^{n} |a_ij| < max_{1≤i≤n} (1/|a_ii|) |a_ii| = 1.
There is a more general result on the convergence of the Jacobi and Gauss–Seidel
iterations, which we state without proof.
Theorem 8.1.5. If the matrix A is strictly positive definite, then the Jacobi
and Gauss–Seidel iterative processes are convergent.
For[i = 1, i <= n, i++,
  X[[i]] = (1/A[[i, i]]) (B[[i]] + A[[i, i]]*Xin[[i]] − Sum[A[[i, j]]*Xin[[j]], {j, 1, n}]); ];
Xin = X;
k = k + 1; ];
Return[X]; ];
A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
B = {5, −2, 6, −4};
X = {0, −1, 1, −1};
X = Jac[A, B, X, 10]
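The same Jacobi sweep is easy to transcribe into Python (NumPy assumed; the function name is ours, and the iteration count is raised to 60 so the iterates settle to the digits shown below — the book's Jac call above uses 10 sweeps):

```python
import numpy as np

def jacobi(A, b, x0, iters):
    # x_i^(k+1) = (1/a_ii) * (b_i - sum_{j != i} a_ij x_j^(k)),  formula (8.1.6)
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.diag(A)                 # diagonal entries a_ii
    R = A - np.diagflat(D)         # off-diagonal part L + U
    x = np.asarray(x0, float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

A = [[9, 2, -1, 3], [2, 8, -3, 1], [-1, -3, 10, -2], [3, 1, -2, 8]]
b = [5, -2, 6, -4]
x = jacobi(A, b, [0, -1, 1, -1], 60)
```

Since this matrix is strictly diagonally dominant, the iterates converge to the solution (0.882469, −0.200988, 0.491358, −0.682963) regardless of the starting vector.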
A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
b = {5, −2, 6, −4};
{B, c} = generate[A, b];
ExtGS[A_List, b_List, x0_List, e_] := Module[{},
  norm1[xx_List, yy_List] :=
    Max[Sum[Abs[xx[[i]] − yy[[i]]], {i, 1, Length[xx]}]];
  norm[xx_List] := Max[Sum[Abs[xx[[i]]], {i, 1, Length[xx]}]];
  {B, c} = generate[A, b];
  z = Table[0, {i, 1, Length[x0]}];
  x[0] = x0;
  G[x_List] := B.x + c;
  x[k_ /; k > 0] := x[k] = G[x[k − 1]];
  Do[
    If[norm1[x[k], x[k − 1]] <= e, {z = x[k], savek = k,
      Break[]}], {k, 1, 100}]];
Taking ϵ = 10−8 and
A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
k x1 x2 x3 x4
23 0.882469 −0.200988 0.491358 −0.682963
k x1 x2 x3 x4
14 0.882469 −0.200988 0.491358 −0.682963
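A Python counterpart of the Gauss–Seidel routine, with the same ℓ¹-type stopping rule as ExtGS above (NumPy assumed; the names and loop structure are ours):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=100):
    # sweep i = 1..n, always using the most recently updated components,
    # i.e. x^(k+1) = D^{-1} (b - L x^(k+1) - U x^(k)), formula (8.1.8)
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # sum_{j != i} a_ij x_j (mixed old/new)
            x[i] = (b[i] - s) / A[i, i]
        if np.abs(x - x_old).sum() <= tol:  # same l1 stopping rule as ExtGS
            return x, k
    return x, max_iter

A = [[9, 2, -1, 3], [2, 8, -3, 1], [-1, -3, 10, -2], [3, 1, -2, 8]]
b = [5, -2, 6, -4]
x, k = gauss_seidel(A, b, [0, 0, 0, 0])
```

As the two tables above suggest, Gauss–Seidel reaches the tolerance in noticeably fewer sweeps than Jacobi on this system.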
Example 8.1.4. Apply the Jacobi iterative method to the linear system
⎛  1   2  −1   3 ⎞ ⎛ x1 ⎞   ⎛  1 ⎞
⎜  2   2  −3   1 ⎟ ⎜ x2 ⎟   ⎜ −2 ⎟
⎜ −1  −3   1  −2 ⎟ ⎜ x3 ⎟ = ⎜  3 ⎟ .
⎝  3   1  −2   6 ⎠ ⎝ x4 ⎠   ⎝  4 ⎠
Table 8.1.5
k x1 x2 x3 x4
1 1 −1. −1. 0.666667
2 0. −3.83333 −0.333333 0.
3 8.33333 −1.5 −8.5 1.19444
4 −8.08333 −22.6806 4.44444 −6.08333
5 69.0556 16.7917 −60.9583 9.96991
6 −123.45 −166.478 102.491 −56.9792
7 607.384 304.677 −505.927 124.302
8 −1487.19 −1429.43 1275.81 −522.447
9 5703.01 3661.13 −4727.57 1407.77
10 −16272.1 −13499.2 13873.9 −5036.88
( 35/43, −17/43, 51/43, 31/43 ) = ( 0.813953, −0.395349, 1.18605, 0.72093 ).
But from Table 8.1.5 we can see that the iterations given by the Jacobi method
are getting worse instead of better. Therefore we can conclude that this
iterative process diverges.
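The divergence seen in Table 8.1.5 can be checked against the spectral-radius criterion ρ(B) < 1 stated earlier: for the Jacobi iteration matrix B_J = −D⁻¹(L + U) of this system, ρ(B_J) exceeds 1. A quick check in Python (NumPy assumed; the book's own computations use Mathematica):

```python
import numpy as np

A = np.array([[1., 2., -1., 3.],
              [2., 2., -3., 1.],
              [-1., -3., 1., -2.],
              [3., 1., -2., 6.]])

D = np.diag(np.diag(A))                     # diagonal part of A
B_jacobi = -np.linalg.inv(D) @ (A - D)      # iteration matrix -D^{-1}(L + U)
rho = max(abs(np.linalg.eigvals(B_jacobi)))
```

Since ρ(B_J) > 1, the iterations must diverge for a generic starting vector, exactly as the table shows.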
The next example shows that the Gauss–Seidel iterative method is diver-
gent while the Jacobi iterative process is convergent.
Example 8.1.5. Discuss the convergence of the Jacobi and Gauss–Seidel
iterative methods for the system
A · x = b,
p_BJ(λ) = −λ³ − (1/3)λ + 2/3.
Using Mathematica (to simplify the calculations) we find that the character-
istic polynomial pBGS (λ) is given by
We find that the eigenvalues of BGS (the roots of pBGS (λ)) are
λ1 = 0, λ2 = 0, λ3 = 1.
(8.1.9) Ax = b
We will show that the solution x̃ of the system (8.1.9) minimizes the function
f(x) defined in (8.1.10).

Let x̃ be a solution of (8.1.9), i.e., let Ax̃ = b. Since A is symmetric we
have (Ax)ᵀx = xᵀAx, and since A is positive definite we have (Ax)ᵀx ≥ 0.
Then

f(x) − f(x̃) = xᵀAx − 2bᵀx − x̃ᵀAx̃ + 2bᵀx̃
             = xᵀAx − 2x̃ᵀAx + x̃ᵀAx̃
             = ( x − x̃ )ᵀ A ( x − x̃ ) ≥ 0.
i.e.,

(8.1.11)   f( x^(k) + h c ) = f( x^(k) ) + h² cᵀAc + 2h cᵀ( Ax^(k) − b ).
we have

d/dh f( x^(k) + h c ) |_{h=0} = 2cᵀ r^(k),

where

r^(k) = Ax^(k) − b.

So the direction c of the steepest descent of f is in the direction of the
vector r^(k) = Ax^(k) − b.
From (8.1.11), with c = r^(k), it follows that

d/dh f( x^(k) + h c ) = 2h ( Ar^(k) )ᵀ r^(k) + 2 ( r^(k) )ᵀ r^(k) = 0,

and so

h = − ( (r^(k))ᵀ r^(k) ) / ( (Ar^(k))ᵀ r^(k) ).
Therefore, the new approximation x^(k+1) is given by the formula

(8.1.12)   x^(k+1) = x^(k) − ( (r^(k))ᵀ r^(k) ) / ( (Ar^(k))ᵀ r^(k) ) · r^(k).
The approximation given by (8.1.12) is called the steepest descent iterative
method.
It can be shown that the convergence rate of the steepest descent iterative
method is of geometric order. More precisely, the following is true.
∥x^(k+1) − x̃∥₂ ≤ ( (λmax − λmin)/(λmax + λmin) )^(k+1) ∥x^(0) − x̃∥₂,
where λmax is the largest and λmin is the smallest eigenvalue of A.
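Formula (8.1.12) transcribes directly into Python (NumPy assumed; the SPD matrix used for the test is the one from the examples of this section):

```python
import numpy as np

def steepest_descent(A, b, x0, iters):
    # x^(k+1) = x^(k) - (r.r)/((A r).r) * r, with residual r = A x^(k) - b,
    # which is formula (8.1.12)
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = A @ x - b
        rr = r @ r
        if rr < 1e-30:             # already converged to machine precision
            break
        x = x - (rr / ((A @ r) @ r)) * r
    return x

A = np.array([[9., 2., -1., 3.], [2., 8., -3., 1.],
              [-1., -3., 10., -2.], [3., 1., -2., 8.]])
b = np.array([5., -2., 6., -4.])
x = steepest_descent(A, b, np.zeros(4), 100)
```

The geometric convergence factor (λmax − λmin)/(λmax + λmin) is well below 1 for this well-conditioned matrix, so 100 iterations are far more than enough.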
The conjugate gradient method is a modification of the steepest gradient
method. The algorithm for the conjugate gradient method can be summarized
as follows:
x(0) , initial approximation
by the conjugate gradient method using 7 iterations and display the differ-
ences between any two successive approximations.
Solution. First we check (with Mathematica) that A is a strictly positive
definite matrix:
In[1] := A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
In[2] := PositiveDefiniteMatrixQ[A]
Out[2] := True
In[4] := r[j_] := r[j] = b − A.Transpose[x[j]];
In[5] := v[1] := r[0];
In[6] := t[j_] := t[j] = (Transpose[v[j]].r[j − 1])/(Transpose[v[j]].(A.v[j]));
In[7] := x[j_] := x[j] = x[j − 1] + t[j][[1, 1]]*(Transpose[v[j]])[[1]];
In[8] := s[j_] := s[j] = −(Transpose[v[j]].(A.r[j]))/(Transpose[v[j]].(A.v[j]));
In[9] := v[j_] := v[j] = r[j − 1] + (s[j − 1][[1, 1]])*v[j − 1];
In[10] := d[j_] := d[j] = Sum[(x[j][[i]] − x[j − 1][[i]])^2, {i, 1, Length[x[j]]}];
In[11] := b = Transpose[{{5, −2, 6, −4}}];
In[12] := x[0] = {1, 1, 1, 1};
In[13] := Table[x[j], {j, 0, 6}];
In[14] := A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
In[15] := b = Transpose[{{5., −2., 6., −4.}}];
The results are given in Table 8.1.6.
In[17] := N [%]
Out[17] := {0.882469, −0.200988, 0.491358, −0.682963}
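The same system solved by conjugate gradients in Python; the sketch uses the standard r, p, α, β form of the recursion, which is equivalent in exact arithmetic to the In[4]–In[9] definitions above (NumPy assumed):

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters):
    # standard CG recursion for a symmetric positive definite matrix A
    x = np.asarray(x0, float)
    r = b - A @ x            # residual
    p = r.copy()             # first search direction
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if r_new @ r_new < 1e-30:       # converged; stop early
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[9., 2., -1., 3.], [2., 8., -3., 1.],
              [-1., -3., 10., -2.], [3., 1., -2., 8.]])
b = np.array([5., -2., 6., -4.])
x = conjugate_gradient(A, b, np.array([1., 1., 1., 1.]), 7)
```

For a 4 × 4 SPD system, CG reaches the exact solution in at most 4 steps (up to roundoff), matching Out[17] above.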
and

( 0.99999  0.99999 ) ( y1 )   ( 2.99999 )
( 0.99999  1       ) ( y2 ) = ( 2.99998 ).
is positive definite.
by
∥x(k+1) − x(k) ∥2 ≤ ϵ
is not strictly diagonally dominant but yet the Jacobi and Gauss–
Seidel iterative methods for the system
A·x=b
are convergent.
has derivatives of all orders, i.e., f ∈ C∞(R). First, let us recall the
definition of the first derivative:
f (x + h) − f (x)
f ′ (x) = lim .
h→0 h
then we write
f (x) = O(g(x)) as x → 0.
f (x + h) − f (x)
(8.2.2) f ′ (x) = + O(h).
h
f (x + h) − f (x)
(8.2.3) f ′ (x) ≈ , forward difference approximation.
h
f (x) − f (x − h)
f ′ (x) = + O(h).
h
f (x) − f (x − h)
(8.2.4) f ′ (x) ≈ , backward difference approximation.
h
(8.2.6)   f′(x) ≈ ( f(x + h) − f(x − h) ) / (2h),   central difference approximation.
Efor(h) ≡ ( f(x + h) − f(x) ) / h − f′(x) = O(h),

Eback(h) ≡ ( f(x) − f(x − h) ) / h − f′(x) = O(h),

Ecent(h) ≡ ( f(x + h) − f(x − h) ) / (2h) − f′(x) = O(h²).
If we use the Taylor series (8.2.1) and (8.2.5), then we obtain the following
approximations for the second derivative:
f (x + 2h) − 2f (x + h) + f (x)
(8.2.7) f ′′ (x) ≈ , forward approximation.
h2
f (x) − 2f (x − h) + f (x − 2h)
(8.2.8) f ′′ (x) ≈ , backward approximation.
h2
f (x + h) − 2f (x) + f (x − h)
(8.2.9) f ′′ (x) ≈ , central approximation.
h2
The approximations (8.2.7) and (8.2.8) for the second derivative are ac-
curate of first order, and the central difference approximation (8.2.9) for the
second derivative is accurate of second order.
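These error orders are easy to verify numerically; a small Python sketch (the test function sin x, the point x = 1 and the step sizes are arbitrary choices of ours):

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h            # O(h)

def backward(f, x, h):
    return (f(x) - f(x - h)) / h            # O(h)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2)

x0, exact = 1.0, math.cos(1.0)              # d/dx sin x = cos x
errs = {name: abs(rule(math.sin, x0, 0.01) - exact)
        for name, rule in [("forward", forward),
                           ("backward", backward),
                           ("central", central)]}
```

Halving h roughly halves the one-sided errors but divides the central-difference error by about four, as the orders O(h) and O(h²) predict.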
A · y = b,
8.2 FINITE DIFFERENCES 477
where

        −2 + 9h²    1         0      . . .    0         0         0
          1      −2 + 9h²     1      . . .    0         0         0
          0         1      −2 + 9h²  . . .    0         0         0
A =       .         .         .      . . .    .         .         .
          0         0         0      . . .    1      −2 + 9h²     1
          0         0         0      . . .    0         1      −2 + 9h²

and

y = ( y1, y2, . . . , yn−1 )ᵀ,   b = ( 0, 0, . . . , 0, −1 )ᵀ.
In[1] := n = 15;
In[2] := h = Pi/(2*n);
In[3] := yApprox = Inverse[A].b;
In[4] := N[%]
Out[4]:={{0.104576}, {0.208005}, {0.309154}, {0.406912}, {0.500208},
{0.588018}, {0.66938}, {0.743401}, {0.809271}, {0.866265},
{0.91376}, {0.951234}, {0.978277}, {0.994592}}
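The assembly of such a tridiagonal system is easy to script. The sketch below (Python, NumPy assumed) solves the two-point problem y′′ + 9y = 0, y(0) = 0, y(π/2) = 1 suggested by the matrix entries −2 + 9h² and the step h = π/(2n) above; treat the interval and the boundary data as our reading of the example, not a quotation. The exact solution of that problem is y(x) = −sin 3x:

```python
import numpy as np

n = 200
h = (np.pi / 2) / n
x = np.linspace(0.0, np.pi / 2, n + 1)

# interior equations: y_{i-1} + (-2 + 9 h^2) y_i + y_{i+1} = 0,  i = 1..n-1
main = (-2.0 + 9.0 * h ** 2) * np.ones(n - 1)
A = (np.diag(main)
     + np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1))
b = np.zeros(n - 1)
b[-1] = -1.0      # boundary value y(pi/2) = 1 moved to the right-hand side
# (the boundary value y(0) = 0 contributes nothing to b[0])

y_inner = np.linalg.solve(A, b)
y = np.concatenate(([0.0], y_inner, [1.0]))
err = np.max(np.abs(y - (-np.sin(3 * x))))
```

The central-difference discretization is second-order accurate, so the maximum error decreases like h² as n grows.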
have
f (x + h, y) − f (x, y)
fx (x, y) = + O(h), forward difference,
h
f (x, y) − f (x − h, y)
fx (x, y) = + O(h), backward difference,
h
f (x + h, y) − f (x − h, y)
fx (x, y) = + O(h2 ), central difference,
2h
f (x, y + k) − f (x, y)
fy (x, y) = + O(k), forward difference,
k
f (x, y) − f (x, y − k)
fy (x, y) = + O(k), backward difference,
k
f (x, y + k) − f (x, y − k)
fy (x, y) = + O(k 2 ), central difference.
2k
fxx(x, y) = ( f(x + 2h, y) − 2f(x + h, y) + f(x, y) ) / h² + O(h),   forward,

fxx(x, y) = ( f(x, y) − 2f(x − h, y) + f(x − 2h, y) ) / h² + O(h),   backward,

fxx(x, y) = ( f(x + h, y) − 2f(x, y) + f(x − h, y) ) / h² + O(h²),   central.
For the second partial derivatives with respect to y, as well as for the mixed
partial derivatives, we have similar expressions.
[Grid sketch: the node M = (xi, yj) and its neighbors xi−1, xi+1, yj−1, yj+1.]
Figure 8.2.1
f (M ) = f (xi , yj ) = fi,j .
With this notation we have the following approximations for the first order
partial derivatives.
fx(xi, yj) ≈ ( fi+1,j − fi,j ) / h,        fx(xi, yj) ≈ ( fi,j − fi−1,j ) / h,

fy(xi, yj) ≈ ( fi,j+1 − fi,j ) / k,        fy(xi, yj) ≈ ( fi,j − fi,j−1 ) / k,

fx(xi, yj) ≈ ( fi+1,j − fi−1,j ) / (2h),   fy(xi, yj) ≈ ( fi,j+1 − fi,j−1 ) / (2k).
For the second order partial derivatives we have the following approxima-
tions.
fxx(xi, yj) ≈ ( fi+2,j − 2fi+1,j + fi,j ) / h²,     fxx(xi, yj) ≈ ( fi,j − 2fi−1,j + fi−2,j ) / h²,

fyy(xi, yj) ≈ ( fi,j+2 − 2fi,j+1 + fi,j ) / k²,     fyy(xi, yj) ≈ ( fi,j − 2fi,j−1 + fi,j−2 ) / k²,

fxx(xi, yj) ≈ ( fi+1,j − 2fi,j + fi−1,j ) / h²,     fyy(xi, yj) ≈ ( fi,j+1 − 2fi,j + fi,j−1 ) / k²,

fxy(xi, yj) ≈ ( fi+1,j+1 − fi+1,j−1 − fi−1,j+1 + fi−1,j−1 ) / (4hk).
f(x, y) = e^(−π² y) sin πx.
1. For each of the following functions estimate the first derivative of the
function at x = π/4 taking h = 0.1 in the forward, backward and
central finite difference approximation. Compare your obtained re-
sults with the exact values.
2. Use the table to estimate the first derivative at each mesh point. Es-
timate the second derivative at x = 0.6.
3. Let f (x) have the first 4 continuous derivatives. Show that the follow-
ing higher degree approximation for the first derivative f ′ (x) holds.
f′(x) = (1/(12h)) [ f(x − 2h) − 8f(x − h) + 8f(x + h) − f(x + 2h) ] + O(h⁴).
y ′′ + xy = 0, 0<x<1
y(0) = 0, y(1) = 1.
8.3 FINITE DIFFERENCE METHODS FOR THE LAPLACE EQUATION 481
xi = a + ih, i = 0, 1, 2, . . . , m; yj = c + jk, j = 0, 1, 2, . . . , n,
where

h = (b − a)/m,   k = (d − c)/n.
(See Figure 8.3.1.)
[Grid on the rectangle a ≤ x ≤ b, c ≤ y ≤ d with spacings h and k; node M = (xi, yj).]
Figure 8.3.1
Then, using the central finite difference approximations for the second par-
tial derivatives, an approximation to the Poisson/Laplace equation (8.3.1) at
each interior point (node) (i, j), 1 ≤ i ≤ m − 1; 1 ≤ j ≤ n − 1 is given by
At the boundary points (nodes) (0, j), (m, j), j = 0, 1, . . . , n and (i, 0),
(i, n), i = 0, 1, . . . , m we have
(8.3.3) ui,j = B(xi , yj ) ≡ Bi,j .
In applications, usually we take h = k, and then Equations (8.3.2) become
(8.3.4) ui−1,j − 4ui,j + ui+1,j + ui,j−1 + ui,j+1 = h2 Fi,j ,
for 1 ≤ i ≤ m − 1; 1 ≤ j ≤ n − 1.
In order to write the matrix form of the linear system (8.3.2) for the
(m − 1)(n − 1) unknowns
u1,1 , u1,2 , . . . , u1,n−1 , u2,1 , u2,2 , . . . , u2,n−1 , . . . , um−1,1 , um−1,2 , . . . , um−1,n−1 ,
we label the nodes in the rectangle from left to right and from top to bottom.
With this enumeration, the (m − 1)(n − 1) × (m − 1)(n − 1) matrix A of the
system has the block form

         U  −I   O   O  . . .   O   O
        −I   U  −I   O  . . .   O   O
A =      O  −I   U  −I  . . .   O   O
         .   .   .   .  . . .   .   .
         O   O   O   O  . . .  −I   U

where I is the identity matrix of order n − 1, O is the zero matrix of order
n − 1, and the (n − 1) × (n − 1) matrix U is given by

         4  −1   0   0  . . .   0   0
        −1   4  −1   0  . . .   0   0
U =      0  −1   4  −1  . . .   0   0
         .   .   .   .  . . .   .   .
         0   0   0   0  . . .  −1   4
Even though the matrix A is a nice sparse matrix (formed by many iden-
tical matrix blocks that have many, many zeros) it is quite large even for
relatively small m and n, and therefore some iterative method is required to
solve the system.
We illustrate the finite difference method for the Poisson equation with the
following example.
Example 8.3.1. Using a finite difference method solve the boundary value
problem
{
∆ u (x, y) = −2, (x, y) ∈ D,
u(x, y) = 0, (x, y) ∈ ∂D,
where D is the square D = {(x, y) : −1 < x < 1, −1 < y < 1}, whose
boundary is denoted by ∂D.
Solution. Taking a uniform grid with increment h = 2/n in both coordinate
axes, from (8.3.3) it follows that
We label the nodes from left to right and from top to bottom. So, we start
with the top left corner of the rectangle. With this labeling, taking n = 5,
i.e., h = 0.4 in (8.3.5) and using the boundary conditions (8.3.6) we obtain
the linear system
A · u = b,
where A is the 16 × 16 block matrix

         U  −I   O   O
A =     −I   U  −I   O
         O  −I   U  −I
         O   O  −I   U

with 4 × 4 blocks: U = tridiag(−1, 4, −1), I the 4 × 4 identity matrix and
O the 4 × 4 zero matrix,
and

u = ( u1,1, . . . , u1,4, u2,1, . . . , u2,4, u3,1, . . . , u3,4, u4,1, . . . , u4,4 )ᵀ,
b = ( 0.32, 0.32, . . . , 0.32 )ᵀ   (16 × 1).
If we solve this system directly (using Mathematica), we obtain the following
approximation to the solution of the Poisson problem.
The exact solution of the given Poisson equation, obtained in Section 7.2, is
given by

u(x, y) = 1 − y² − (32/π³) Σ_{k=0}^{∞} [ (−1)^k / ( (2k + 1)³ cosh((2k + 1)π/2) ) ] cosh((2k + 1)πx/2) cos((2k + 1)πy/2).
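Example 8.3.1 can be reproduced by assembling the equations (8.3.4) node by node and solving the resulting system directly (a dense solve here, for simplicity; NumPy assumed, and the grid size is our choice). Evaluating the series at the origin gives u(0, 0) ≈ 0.589, which the grid values should approach as h → 0:

```python
import numpy as np

n = 40                        # subintervals per side; h = 2/n on [-1, 1]
h = 2.0 / n
m = n - 1                     # interior nodes per direction
idx = lambda i, j: i * m + j  # row-major numbering of interior node (i, j)

A = np.zeros((m * m, m * m))
b = np.full(m * m, 2.0 * h * h)   # h^2 * 2: F = -2 moved to the right side
for i in range(m):
    for j in range(m):
        r = idx(i, j)
        A[r, r] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < m:
                A[r, idx(ii, jj)] = -1.0   # interior neighbour
            # boundary neighbours carry u = 0, so they add nothing

u = np.linalg.solve(A, b).reshape(m, m)
center = u[m // 2, m // 2]        # grid value at (0, 0)
```

The discrete solution inherits the symmetry of the problem, and the center value agrees with the series to within the O(h²) discretization error.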
We make an n × n grid with grid size h = 1/n. Let us denote a node (xi, yj)
by (i, j).
At all interior nodes (i, j), for which i = 2, 3, . . . , n − 2, j = 1, 2, . . . , n − 1,
we apply the approximation scheme (8.3.4):
ui−1,j − 4ui,j + ui+1,j + ui,j−1 + ui,j+1 = h2 Fi,j .
For the interior nodes (1, j), and (n−1, j), j = 1, 2, . . . , n−1 the situation
is slightly more complicated since we do not know the values of the function
u(x, y) at the boundary nodes (i, 0), i = 1, 2, . . . , n − 1 and (n, j), j =
1, 2, . . . , n − 1. In order to approximate the values of the function u(x, y)
at these nodes we extend the grid with additional points (so called fictitious
points) (−1, j) and (n + 1, j) j = 1, 2, . . . , n − 1 to the left and right of the
boundary points x = 0 and x = 1, respectively. (See Figure 8.3.2.)
[Grid on the unit square with fictitious points (−1, j) and (n + 1, j);
boundary data u(x, 0) = f1(x), u(x, 1) = f2(x).]
Figure 8.3.2
If we apply the central finite difference approximation for ux(0, yj) and
ux(1, yj) we have

ux(0, yj) ≈ ( u(0 + h, yj) − u(0 − h, yj) ) / (2h),

ux(1, yj) ≈ ( u(1 + h, yj) − u(1 − h, yj) ) / (2h).
Therefore,

(8.3.7)   u1,j − u−1,j = 2h f1(yj),   un+1,j − un−1,j = 2h f2(yj).
We eliminate u−1,j and un+1,j by using the Poisson equation at the
nodes (0, j) and (n, j):
(8.3.8)   u−1,j + u1,j + u0,j−1 + u0,j+1 − 4u0,j = h² F0,j,
          un+1,j + un−1,j + un,j−1 + un,j+1 − 4un,j = h² Fn,j.
We solve the system given by (8.3.4) and (8.3.9) for the n2 −1 unknowns ui,j ,
i = 2, . . . n−2, j = 1, . . . n−1 and u0,j , u1,j , un−1,j , un,j , j = 1, 2, . . . , n−1.
We illustrate the above discussion by an example for the Laplace equation.
Example 8.3.2. Find an approximation of the boundary value problem
uxx + uyy = 0,   0 < x < 1,  0 < y < 1,
u(x, 0) = 0,   u(x, 1) = 1 + x²,   0 < x < 1,
ux(0, y) = 1,   ux(1, y) = 0,   0 < y < 1
on a 5 × 5 uniform grid.
Solution. For this example we have
F (x, y) = 0, f1 (x) = 0, f2 (x) = 1 + x2 ,
g1 (y) = 1, g2 (y) = 0,
and n = 5, h = 0.2.
Therefore, the system (8.3.4) and (8.3.9) is given by

4ui,j − ui−1,j − ui+1,j − ui,j−1 − ui,j+1 = 0,   i = 1, 2, 3, 4;  j = 1, 2, 3, 4,
ui,0 = 0,   i = 0, 1, 2, 3, 4, 5,
ui,5 = 1 + (ih)²,   i = 0, 1, 2, 3, 4, 5,
4u0,j − 2u1,j − u0,j+1 − u0,j−1 = −2h,   j = 1, 2, 3, 4,
4u5,j − 2u4,j − u5,j+1 − u5,j−1 = 0,   j = 1, 2, 3, 4.
If we enumerate the nodes (i, j) from bottom to top and left to right, then
we obtain that the system to be solved can be written in the matrix form
(8.3.10) A · u = b,
where the 24 × 24 matrix has the block form

         T  −2I   O    O    O    O
        −I   T   −I    O    O    O
A =      O  −I    T   −I    O    O
         O   O   −I    T   −I    O
         O   O    O   −I    T   −I
         O   O    O    O  −2I    T
The above system was solved by the Gauss–Seidel iterative method (taking
10 iterations) and the results are given in Table 8.3.1.
Table 8.3.1
j\i 0 1 2 3 4 5
1 −0.06946 0.049656 0.112271 0.169276 0.202971 0.206501
2 0.022854 0.155810 0.230152 0.361864 0.436107 0.420063
3 0.179795 0.320579 0.290663 0.611922 0.4361072 0.808039
4 0.478023 0.656047 0.785585 1.035632 1.182048 1.293034
We cover the domain D with a uniform grid of squares whose sides are
parallel to the coordinate axes. Let h be the grid size. To apply a finite
difference method, the given domain D is replaced by the set of those squares
which completely lie in the domain D. (See Figure 8.3.3.)
[Domain D with boundary Γ = ∂D and grid nodes N, E, W, M, S;
Δu = F in D, u|Γ = f.]
Figure 8.3.3
If an interior node of the squares is such that its four neighboring nodes
either lie entirely in the domain D or none of them is outside the domain,
then we use Equations (8.3.4) to approximate the Poisson equation. Special
consideration should be taken for those nodes which are close to the boundary
Γ, for which at least one of their neighboring nodes is outside the domain.
The node M(xi, yj) in Figure 8.3.4 is an example of such a node. Its neighboring
nodes E and N are outside the region D. To approximate the second partial
derivatives in the Poisson equation at the node M we proceed as follows. In
order to approximate uxx we denote by A(x, yj ) the point on the segment
M E which is exactly on the boundary Γ and let ∆ x = M A (see Figure
8.3.4).
[Node M with neighbors W and N, and the point A on the boundary Γ
between M and E.]
Figure 8.3.4
and

u(A) = u(M) + ux(M) Δx + (1/2) uxx(M) (Δx)² + · · · .
From the last two equations we obtain the following approximation for uxx
at the node (i, j).
(8.3.11)   uxx(M) ≈ 2 [ u(A)/( Δx(Δx + h) ) + u(W)/( h(Δx + h) ) − u(M)/( h Δx ) ].
Working similarly with the point B (see Figure 8.3.4) we have an approxi-
mation for uyy at the node (i, j):

(8.3.12)   uyy(M) ≈ 2 [ u(B)/( Δy(Δy + h) ) + u(S)/( h(Δy + h) ) − u(M)/( h Δy ) ],
where ∆ y = M B.
From (8.3.11) and (8.3.12) we have the following approximation of the
Poisson equation at the node M:
In (8.3.13), u(M ), u(W ) and u(S) are unknown and should be determined.
Example 8.3.3. Taking a uniform grid of step size h = 1/4, solve the Laplace
equation

uxx + uyy = 0,   x > 0,  y > 0,  x²/4 + y² < 1,

subject to the boundary conditions

u(0, y) = y²,   0 ≤ y ≤ 1,
u(x, 0) = x²/4,   0 ≤ x ≤ 2,
u(x, y) = 1,   x²/4 + y² = 1,  x > 0,  y > 0.
Solution. The uniform grid and the domain are displayed in Figure 8.3.5.
[Quarter-ellipse x²/4 + y² = 1 covered by the grid; boundary points B1–B7
and near-boundary interior points A1, A2, A3.]
Figure 8.3.5
For the nodes (5, 3), (6, 2) and (7, 1) we use the approximations (8.3.13):

           u(A3)/( Δx3(Δx3 + h) ) + u(B5)/( Δy5(Δy5 + h) ) + u4,3/( h(Δx3 + h) )
               + u5,2/( h(Δy5 + h) ) − ( (Δx3 + Δy5)/( h Δx3 Δy5 ) ) u5,3 = 0,

(8.3.15)   u(A2)/( Δx2(Δx2 + h) ) + u(B6)/( Δy6(Δy6 + h) ) + u5,2/( h(Δx2 + h) )
               + u6,1/( h(Δy6 + h) ) − ( (Δx2 + Δy6)/( h Δx2 Δy6 ) ) u6,2 = 0,

           u(A1)/( Δx1(Δx1 + h) ) + u(B7)/( Δy7(Δy7 + h) ) + u6,1/( h(Δx1 + h) )
               + u7,0/( h(Δy7 + h) ) − ( (Δx1 + Δy7)/( h Δx1 Δy7 ) ) u7,1 = 0.
If we enumerate the nodes (i, j) from bottom to top and left to right we
obtain that the system to be solved can be written in the matrix form
(8.3.17) A · u = b,
−1 0 0 −1 0 0
B1 = , B2 =
0 −1 0 0,
0 1
h(h+∆ x2 )
1
0 0 h(h+∆ x3 ) 0 0 0
4 −1 0
Ti =
−1 4 −1 , i = 1, 2, . . . , 5,
0 2h
h+∆ yi −2 h+∆
∆ yi
yi
4 −1 −1
T6 = −∆ .
1 x2 +∆ y6
h(h+∆ y6 ) h∆ x2 ∆ y6 0
1
h(h+∆ x1 ) 0 −∆ x1 +∆ y7
h∆ x1 ∆ y7
Solving the system for ui,j, we obtain the results given in Table 8.3.2
(the index i runs in the horizontal direction and the index j runs vertically).
Table 8.3.2
j\i 1 2 3 4 5 6 7
1 0.34519 0.44063 0.53730 0.63861 0.73899 0.83647 0.93431
2 0.56514 0.63002 0.69497 0.77816 0.85588 0.92259
3 0.78536 0.81932 0.83442 0.92316 0.98379
∆u(x, y) = 1, i.e.,

uxx + uyy = 1

on the rectangle 3 < x < 5, 4 < y < 6, with zero boundary condition,
taking h = 1/4 in the finite difference formulas.
5. The function u satisfies the Laplace equation on the square 0 < x < 1,
0 < y < 1 and satisfies
uxx + uyy = 0
u(x, 0) = 1, u(x, y) = x2 if x2 + y 2 = 1.
Figure 8.3.6
∆u = −1

on the L-shaped region obtained by removing from the unit square the top
left corner square of side 1/4, subject to zero boundary values on the
boundary of the L-shaped region (see Figure 8.3.7). Take the uniform grid
for which h = 1/4.
Figure 8.3.7
8.4 FINITE DIFFERENCE METHODS FOR THE HEAT EQUATION 495
xi = ih,   tj = j Δt,   i = 0, 1, . . . , m;  j = 0, 1, 2, . . ..

( ui,j+1 − ui,j ) / Δt = (c²/h²) ( ui+1,j − 2ui,j + ui−1,j ),   1 ≤ i ≤ m − 1;  1 ≤ j ≤ n − 1,
or
The above system is solved in the following way. First, we take j = 0 and,
using the given initial values, we solve the system (8.4.3) for the unknowns
u1,1, u2,1, . . . , um−1,1. Using these values, we solve the system for the next
unknowns u1,2, u2,2, . . . , um−1,2, and so on.
The advantage of the implicit Crank–Nicolson approximation over the ex-
plicit one is that its stability doesn’t depend on the parameter λ.
It is important to note that for a fixed value of λ, the order of the approx-
imation schemes (8.4.2) and (8.4.3) is O(h2 ).
Let us take a few examples to illustrate all of this.
Example 8.4.1. Consider the problem
where

u^(j+1) = ( u1,j+1  u2,j+1  . . .  um−1,j+1 )ᵀ,
u^(0) = ( f(h)  f(2h)  . . .  f((m−1)h) )ᵀ,
First, let us take h = 0.1, ∆ t = 0.4. In this case λ = 0.4 < 0.5. Solving
the system (8.4.4) at t = 1.2, t = 4.8 and t = 14.4 we obtain approximations
that are presented graphically in Figure 8.4.1 and in Table 8.4.1. From Figure
8.4.1 we notice pretty good agreement of the numerical and exact solutions.
Table 8.4.1
tj \xi 0.1 0.2 0.3 0.4 0.5 0.6 0.7
0.0 2.0000 4.0000 6.0000 8.0000 9.9899 8.0000 6.0000
1.2 1.9692 3.8617 5.5262 6.7106 7.1454 6.7106 5.5262
4.8 1.5494 2.9545 4.0793 4.8075 5.0598 4.8075 4.0793
14.4 0.5587 1.0628 1.4629 1.7198 1.8083 1.7198 1.4629
[Plot: exact vs. approximate solutions at t = 0, 1.2, 4.8, 14.4;
λ = 0.4, h = 0.1, Δt = 0.4.]
Figure 8.4.1
Next we take h = 0.1, ∆ t = 0.6. In this case λ = 0.6. Solving the system
(8.4.4) at t = 1.2, t = 2.4, t = 4.8 and t = 14.4 we obtain approximations
that are presented in Table 8.4.2 and graphically in Figure 8.4.2. Big oscilla-
tions appear in the solution. These oscillations grow larger and larger as time
increases. This instability is due to the fact that λ = 0.6 > 0.5.
Table 8.4.2
tj \xi 0.1 0.2 0.3 0.4 0.5 0.6 0.7
0.0 2.0000 4.0000 6.0000 8.0000 10.0000 8.0000 6.0000
1.2 2.0000 4.0000 6.0000 6.5600 8.0800 6.5600 6.0000
4.8 2.1244 1.8019 5.7749 2.6826 7.3010 2.6826 5.7749
14.4 73.374 −137.33 192.138 −222.257 237.528 −222.257 192.138
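The λ-dependence seen in Tables 8.4.1 and 8.4.2 can be reproduced with a few lines of Python (NumPy assumed). The test initial profile sin πx and the small seeded perturbation are our choices; the perturbation merely makes the onset of the λ > 1/2 instability independent of rounding noise:

```python
import numpy as np

def explicit_heat(u0, lam, steps):
    # u_{i,j+1} = u_{i,j} + lam * (u_{i+1,j} - 2 u_{i,j} + u_{i-1,j}),
    # with u = 0 held at both endpoints
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + lam * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0
    return u

h = 0.05
x = np.linspace(0.0, 1.0, 21)
u0 = np.sin(np.pi * x)                 # exact solution: exp(-pi^2 t) sin(pi x)

stable = explicit_heat(u0, 0.4, 400)   # dt = 0.4 h^2, so t = 400 dt = 0.4
pert = 1e-6 * (-1.0) ** np.arange(x.size)   # high-frequency seed
unstable = explicit_heat(u0 + pert, 0.6, 400)
exact = np.exp(-np.pi ** 2 * 0.4) * np.sin(np.pi * x)
```

With λ = 0.4 ≤ 1/2 the numerical solution tracks the exact one; with λ = 0.6 the highest grid mode is amplified by the factor |1 − 4λ| = 1.4 at every step and the oscillations grow without bound, just as in Table 8.4.2.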
[Plot: exact vs. approximate solutions at t = 1.2, 2.4, 4.8, 14.4;
λ = 0.6, h = 0.1, Δt = 0.6; the numerical solution oscillates.]
Figure 8.4.2
where

u^(j+1) = ( u1,j+1  u2,j+1  . . .  um−1,j+1 )ᵀ,   u^(0) = ( f(h)  f(2h)  . . .  f((m−1)h) )ᵀ,

and the (m − 1) × (m − 1) matrices A and B are given by

      2 + 2λ    −λ       0     . . .    0       0        0
       −λ     2 + 2λ    −λ     . . .    0       0        0
A =     .        .       .     . . .    .       .        .
        0        0       0     . . .   −λ    2 + 2λ     −λ
        0        0       0     . . .    0      −λ     2 + 2λ

and

      2 − 2λ     λ       0     . . .    0       0        0
        λ     2 − 2λ     λ     . . .    0       0        0
B =     .        .       .     . . .    .       .        .
        0        0       0     . . .    λ    2 − 2λ      λ
        0        0       0     . . .    0       λ     2 − 2λ
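Each Crank–Nicolson step solves the tridiagonal system A u^(j+1) = B u^(j) with the matrices displayed above. A Python sketch for c = 1 (NumPy assumed; the initial profile sin πx and the step sizes are our test choices, picked because the exact solution e^(−π²t) sin πx is known):

```python
import numpy as np

def crank_nicolson(u0, lam, steps):
    # A u^{j+1} = B u^j with A = tridiag(-lam, 2 + 2 lam, -lam),
    #                         B = tridiag( lam, 2 - 2 lam,  lam)
    m = len(u0) - 2                        # number of interior nodes
    A = (np.diag((2 + 2 * lam) * np.ones(m))
         + np.diag(-lam * np.ones(m - 1), 1)
         + np.diag(-lam * np.ones(m - 1), -1))
    B = (np.diag((2 - 2 * lam) * np.ones(m))
         + np.diag(lam * np.ones(m - 1), 1)
         + np.diag(lam * np.ones(m - 1), -1))
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])  # boundary values stay 0
    return u

h = 0.05
lam = 2.0                                  # well above the explicit limit 1/2
dt = lam * h ** 2                          # = 0.005
x = np.linspace(0.0, 1.0, 21)
u = crank_nicolson(np.sin(np.pi * x), lam, 80)     # t = 80 dt = 0.4
exact = np.exp(-np.pi ** 2 * 0.4) * np.sin(np.pi * x)
```

Even with λ = 2, four times the explicit stability limit, the scheme remains stable and second-order accurate, illustrating the unconditional stability claimed above.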
u(x, y, 0) = f (x, y)
where

λ = c² Δt / h².
Figure 8.4.3
[Profiles u(x, t) at t = 0.01, 0.1, 0.4 on 0 ≤ x ≤ 10.]
Figure 8.4.4
Figure 8.4.5
[Profiles u(x, t) at t = 0.01, 0.4, 0.9 on 0 ≤ x ≤ π.]
Figure 8.4.6
u(x, y, t) = 0, (x, y) ∈ ∂R, 0 < t < 4.
Figure 8.4.7                Figure 8.4.8
4. Write down the explicit finite difference approximation for the initial
boundary value problem

ut = ux + uxx,   0 < x < 1,  t > 0,
u(x, 0) = { x,  0 ≤ x ≤ 1/2;   1 − x,  1/2 < x ≤ 1 },
u(0, t) = u(1, t) = 0,   0 < t < 1.

Perform the first 4 iterations taking h = 0.25 and Δt = 1/16.
Compare the obtained results with the exact solution

u(x, t) = (2/π) Σ_{n=1}^{∞} ( (−1)^n / n ) e^(−n²π²t) sin(nπx).

Take h = 1/4 and Δt = 1/32.
7. Write down and solve the explicit finite difference approximation of
the equation

ut = x uxx,   0 < x < 1/2,  t > 0,
u(x, t) = f (x − t).
Figure 8.5.1
[Plot: exact vs. approximate solution; λ = 0.8, h = 0.5, t = 0.4.]
Figure 8.5.2
[Plot: exact vs. approximate solution; λ = 0.8, h = 0.25, t = 0.4.]
Figure 8.5.3
not getting better. The reason for this behavior is that the stability condition
of the Lax–Friedrichs approximation scheme is not satisfied.
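The Lax–Friedrichs step u_{i,j+1} = (u_{i+1,j} + u_{i−1,j})/2 − (λ/2)(u_{i+1,j} − u_{i−1,j}) and its stability threshold λ ≤ 1 can be reproduced in a few lines of Python (NumPy assumed; the Gaussian initial profile, the boundary treatment and the seeded perturbation are our choices — the perturbation only makes the onset of the λ > 1 instability independent of rounding noise):

```python
import numpy as np

def lax_friedrichs(u0, lam, steps):
    # u_{i,j+1} = (u_{i+1,j} + u_{i-1,j})/2 - (lam/2)(u_{i+1,j} - u_{i-1,j})
    u = u0.copy()
    for _ in range(steps):
        un = u.copy()
        u[1:-1] = 0.5 * (un[2:] + un[:-2]) - 0.5 * lam * (un[2:] - un[:-2])
        u[0], u[-1] = un[0], un[-1]        # freeze far-field values
    return u

h = 0.05
x = np.arange(0.0, 20.0 + h / 2, h)
f = lambda s: 1.0 + 2.0 * np.exp(-((s - 5.0) ** 2))   # bump travelling right

# stable run: lam = dt/h = 0.8 <= 1 (unit speed), advance to t = 2
lam = 0.8
steps = int(round(2.0 / (lam * h)))        # t = steps * dt = 2
u_stable = lax_friedrichs(f(x), lam, steps)
exact = f(x - 2.0)

# unstable run: lam = 1.6 > 1, with a tiny quarter-wavelength seed
pert = 1e-8 * np.sin(0.5 * np.pi * np.arange(x.size))
u_unstable = lax_friedrichs(f(x) + pert, 1.6, 100)
```

For λ ≤ 1 the bump is transported correctly, although the scheme's numerical diffusion visibly flattens the peak; for λ = 1.6 the mode seeded by the perturbation is amplified by the factor λ at every step and swamps the solution.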
[Plot: exact vs. approximate solution; λ = 1.6, h = 1, t = 1.6.]
Figure 8.5.4
[Plot: exact vs. approximate solution; λ = 1.6, h = 0.5, t = 1.6.]
Figure 8.5.5
[Plot: exact vs. approximate solution; λ = 1.6, h = 0.25, t = 1.6.]
Figure 8.5.6
Example 8.5.2. Solve the transport equation in the previous example using
the Leap-Frog approximation. Take λ = 1.6, h = 0.5 and t = 1.6.
Solution. The Leap-Frog approximation is given by
( )
ui,j+1 = ui,j−1 − λ ui+1,j − ui−1,j .
Let us note that the Leap-Frog method is a three level method. To find the
values of the function at one time step, it is necessary to know the values of
the function at the previous two time steps. As a result, to get started on the
method we need initial values at the first two time levels. For the initial level
we use the given initial condition ui,0 = f (ih). For the next level we use the
Forward-Time Central-Space (see Exercise 1 of this section).
ui,j+1 = ui,j − (λ/2) ( ui+1,j − ui−1,j ),
[Plot: Leap-Frog approximation at t = 1.6 on 0 ≤ x ≤ 20.]
Figure 8.5.7
The above approximation schemes for the transport equation (8.5.1) with a
constant coefficient a can be modified to a transport equation with a variable
coefficient a = a(x, t, u). For simplicity we consider the case when a = a(x, t):
We will discuss only the Lax–Wendroff approximation scheme for this equa-
tion. The other schemes can be derived similarly. If we assume that u(x, t) is
sufficiently smooth with respect to both variables, from Taylor’s formula we
have
(8.5.3)   u(x, t + Δt) ≈ u(x, t) + ut(x, t) Δt + ( (Δt)²/2 ) utt(x, t).
If we differentiate Equation (8.5.2) with respect to t we obtain
utt = −at(x, t) ux(x, t) − a(x, t) utx(x, t) = −at(x, t) ux(x, t) − a(x, t) uxt(x, t)
    = −at(x, t) ux(x, t) + a(x, t) ( ax(x, t) ux(x, t) + a(x, t) uxx(x, t) )
    = ( −at(x, t) + a(x, t) ax(x, t) ) ux(x, t) + a(x, t)² uxx(x, t),
Now using the central approximations for the partial derivatives ux (x, t) and
uxx (x, t), from (8.5.5) we obtain the Lax–Wendroff approximation scheme for
the transport equation:
ui,j+1 = ( 1 − λ² a²i,j ) ui,j
       + (λ/2) (  ai,j + λ a²i,j − (1/2) ai,j a^(x)i,j Δt + (1/2) a^(t)i,j Δt ) ui−1,j
       + (λ/2) ( −ai,j + λ a²i,j + (1/2) ai,j a^(x)i,j Δt − (1/2) a^(t)i,j Δt ) ui+1,j,

where a^(x)i,j and a^(t)i,j are the values of ax(x, t) and at(x, t), respectively, at
the node (i, j).
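As an illustration, the scheme can be coded directly. The sketch below (Python with NumPy; the book's own computations use Mathematica) applies it to the variable-coefficient problem ut + x ux = 0 of Example 8.5.3 below, for which a(x, t) = x gives ax = 1 and at = 0; integrating along the characteristics dx/dt = x gives the exact solution u(x, t) = f(x e^(−t)). The outflow treatment at x = 1 is a crude choice of ours:

```python
import numpy as np

h, dt = 0.04, 0.02
lam = dt / h                        # = 0.5, and |a| <= 1 keeps |a| lam <= 1
x = np.arange(0.0, 1.0 + h / 2, h)
f = lambda s: 1.0 + np.exp(-40.0 * (s - 0.3) ** 2)

u = f(x)
steps = int(round(0.2 / dt))        # advance to t = 0.2
for _ in range(steps):
    un = u.copy()
    ai = x[1:-1]                    # a(x, t) = x at the interior nodes
    # Lax-Wendroff coefficients of u_{i-1}, u_i, u_{i+1} (a_x = 1, a_t = 0)
    cm = 0.5 * lam * (ai + lam * ai ** 2 - 0.5 * ai * dt)
    c0 = 1.0 - lam ** 2 * ai ** 2
    cp = 0.5 * lam * (-ai + lam * ai ** 2 + 0.5 * ai * dt)
    u[1:-1] = cm * un[:-2] + c0 * un[1:-1] + cp * un[2:]
    u[0] = f(0.0)                   # the characteristic through x = 0 stays put
    u[-1] = u[-2]                   # crude outflow condition at x = 1

exact = f(x * np.exp(-0.2))
err = np.max(np.abs(u - exact)[x < 0.8])   # measure away from the outflow edge
```

The bump centered at x = 0.3 drifts to x ≈ 0.3 e^(0.2) ≈ 0.37 while spreading slightly, and the second-order scheme tracks the exact profile closely.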
Let us take an example to illustrate this numerical method.
Example 8.5.3. Use the Lax–Wendroff approximation method with h =
0.04 and Δt = 0.02 to solve the problem

ut(x, t) + x ux(x, t) = 0,   0 < x < 1,  0 < t < 1,
u(x, 0) = f(x) = 1 + e^(−40(x−0.3)²),   0 < x < 1.
Display the graphs of the approximate solutions, along with the analytical
solution, at the time instances t = 0.02, t = 0.10 and t = 0.20.
[Plots (a)–(d): exact vs. approximate solution at increasing times,
including t = 0.10 (c) and t = 0.20 (d).]
Figure 8.5.8
u(x, t) = ( f_odd(x − ct) + f_odd(x + ct) ) / 2 + (1/(2c)) ∫_{x−ct}^{x+ct} g_odd(s) ds,
where f odd and g odd are the odd, 2c periodic extensions of f and g, re-
spectively.
Now we will present finite difference methods for solving the wave equation.
Using the central differences for both time and space partial derivatives in the
wave equation we obtain that
∆t
λ=c .
h
(8.5.8)   u(xi, t1) ≈ u(xi, 0) + ut(xi, 0) Δt + utt(xi, 0) (Δt²/2).
From the wave equation and the initial condition u(x, 0) = f (x) we have
Using the other initial condition ut (x, 0) = g(x), Equation (8.5.8) becomes
(8.5.9)   u(xi, t1) ≈ u(xi, 0) + g(xi) Δt + f′′(xi) ( c² Δt²/2 ).
Now if we use the central finite difference approximation for f ′′ (x), then
Equation (8.5.9) takes the form
u(xi, t1) ≈ u(xi, 0) + g(xi) Δt + ( f(xi+1) − 2f(xi) + f(xi−1) ) c² Δt² / (2h²).
u(xi, t1) ≈ (1 − λ²) ui,0 + (λ²/2) ( ui−1,0 + ui+1,0 ) + g(xi) Δt.
The last approximation allows us to have the required first step
(8.5.10)   ui,1 = g(xi) Δt + (1 − λ²) ui,0 + (λ²/2) ( ui−1,0 + ui+1,0 ).
The approximation, defined by (8.5.7) and (8.5.10), is known as the explicit
finite difference approximation of the wave equation. The order of this ap-
proximation is O(h2 + ∆ t2 ).
It can be shown that this method is stable if λ ≤ 1.
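A short Python sketch (an illustration, not the book's Mathematica code) of the explicit scheme: the march uses the update u_{i,j+1} = 2(1 − λ²)u_{i,j} + λ²(u_{i+1,j} + u_{i−1,j}) − u_{i,j−1} referred to as (8.5.7), with the starting step (8.5.10). The test problem u_tt = u_xx on (0, 1) with u(x, 0) = sin πx, u_t(x, 0) = 0 has exact solution cos(πt) sin(πx).

```python
import numpy as np

N = 50
h = 1.0 / N
lam = 0.5                               # lambda = c*dt/h with c = 1, so stable
dt = lam * h
x = np.linspace(0.0, 1.0, N + 1)

u_prev = np.sin(np.pi * x)              # u_{i,0} = f(x_i)
u = np.empty_like(u_prev)               # first step (8.5.10) with g = 0
u[1:-1] = (1 - lam**2) * u_prev[1:-1] + lam**2 / 2 * (u_prev[:-2] + u_prev[2:])
u[0] = u[-1] = 0.0

for _ in range(49):                     # march to t = 50*dt = 0.5
    nxt = np.empty_like(u)
    nxt[1:-1] = (2 * (1 - lam**2) * u[1:-1]
                 + lam**2 * (u[:-2] + u[2:]) - u_prev[1:-1])
    nxt[0] = nxt[-1] = 0.0
    u_prev, u = u, nxt

err = np.max(np.abs(u - np.cos(np.pi * 0.5) * np.sin(np.pi * x)))
```

With λ = 0.5 ≤ 1 the computation is stable and the error reflects the O(h² + ∆t²) accuracy of the approximation.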
We can avoid the conditional stability in the above explicit scheme if we
consider the following implicit (Crank–Nicolson) scheme.
(1 + λ²) u_{i,j+1} − (λ²/2) u_{i+1,j+1} − (λ²/2) u_{i−1,j+1}
= 2u_{i,j} − (1 + λ²) u_{i,j−1} + (λ²/2) u_{i+1,j−1} + (λ²/2) u_{i−1,j−1}.
The implicit Crank–Nicolson scheme is stable for every grid parameter λ.
The following example illustrates the explicit numerical scheme.
Example 8.5.4. Consider the initial boundary value problem
utt(x, t) = uxx(x, t), 0 < x < 3, 0 < t < 2,
u(x, 0) = { 2x(x − 1), 0 ≤ x ≤ 1;  0, 1 ≤ x ≤ 3, }
ut(x, 0) = 0, 0 < x < 3,
u(0, t) = u(3, t) = 0, t > 0.
Using the explicit scheme (8.5.7) with different space and time steps we
approximate the solution of the problem at several time instances. Compare
the obtained numerical solutions with the analytical solution.
Solution. The following Mathematica program generates the numerical solu-
tion of the above initial boundary value problem.
In[1] := Wave[n_, m_] :=
 Module[{i},
  uApp = Table[0, {n}, {m}];
  For[i = 1, i <= n, i++, uApp[[i, 0]] = f[i];];
  For[i = 2, i <= n - 1, i++,
8.5 FINITE DIFFERENCE METHODS FOR THE WAVE EQUATION 515
Figure 8.5.9: Exact and approximate (circles) solutions at t = 0.08, 0.12, 0.16
and 0.20 (panels (a)–(d)).
Table 8.5.1
            x1           x2           x3           x4            x5
t = 0.00    3.258×10⁻⁸   6.718×10⁻⁶   1.379×10⁻⁴   1.0737×10⁻³   4.943×10⁻³
t = 0.08    0.000        1.400×10⁻⁵   1.400×10⁻⁴   8.284×10⁻⁴    3.357×10⁻³
t = 0.16    0.000        4.922×10⁻⁴   2.128×10⁻³   6.857×10⁻³    1.769×10⁻²
t = 0.24    0.000        4.293×10⁻³   1.255×10⁻²   2.871×10⁻²    5.599×10⁻²
t = 0.50    0.000        8.908×10⁻²   1.657×10⁻¹   2.187×10⁻¹    2.382×10⁻¹
Figure 8.5.10: Exact and approximate (circles) solutions at t = 0.08, 0.24, 0.32
and 0.40 (panels (a)–(d)).
u(x, t) = (8/π³) Σ_{n=0}^∞ ( sin((2n+1)πx)/(2n+1)³ ) cos((2n+1)πt) + (1/π) sin πx sin πt
(see the separation of variables method for the wave equation described in
Section 5.3 of Chapter 5).
If we take n = 1/h and m = 1/∆t in the Crank–Nicolson approximation
method, then the following system is obtained.
u_{i,0} = f(ih), i = 0, 1, . . . , n,
u_{i,1} = (1 − λ²) f(ih) + ∆t g(ih) + (λ²/2)( f((i+1)h) + f((i−1)h) ), 1 ≤ i ≤ n,
u_{i,j} = 2(1 − λ²) u_{i,j−1} + λ²( u_{i+1,j−1} + u_{i−1,j−1} ) − u_{i,j−2}, 1 ≤ i ≤ n; 2 ≤ j ≤ m.
Figure 8.5.11: Exact and approximate (circles) solutions at t = 0.10 and t = 0.80.
Use the Lax–Friedrichs method with h = 0.04 and λ = 0.8. Display the
results at t = 0 and t = 0.24.
u(0, t) = 0, t > 0.
Compute the numerical values at the first 8 space points and the first
10 time instances. Compare the results obtained with the analytical
solution.
8. Apply the explicit scheme to the following initial boundary value
problem.
utt (x, t) = uxx (x, t), 0 < x < 1, 0 < t < 1,
u(x, 0) = 1 + sin (πx) + 3 sin (2πx), 0 < x < 1,
u(0, t) = u(1, t) = 1, 0 < t < 1,
ut(x, 0) = sin πx, 0 < x < 1.
9. Derive the matrix forms of the systems of equations for the solution
of the one dimensional wave equation
A TABLE OF LAPLACE TRANSFORMS 523
38. f (n) (t), n = 0, 1, 2, . . . sn L{f }(s) − sn−1 f (0) − sn−2 f ′ (0) − . . . − f (n−1) (0)
3. e^{−a|x|}  —  2a/(ω² + a²)
4. e^{−ax} H(x) ‡  —  1/(a + iω)
5. sech(ax)  —  (π/a) sech( πω/(2a) )
6. 1  —  2πδ₀(ω)
7. δ(x) ‡  —  1
8. cos(ax)  —  π( δ(ω − a) + δ(ω + a) )
9. sin(ax)  —  −πi( δ(ω − a) − δ(ω + a) )
10. sin(ax)/(1 + x²)  —  (π/2)( e^{−|ω−a|} − e^{−|ω+a|} )
11. cos(ax)/(1 + x²)  —  (π/2)( e^{−|ω−a|} + e^{−|ω+a|} )
12. rect(ax) †  —  (1/|a|) sinc( ω/(2πa) )
13. cos(ax²)  —  √(π/a) cos( (ω² − aπ)/(4a) )
14. sin(ax²)  —  −√(π/a) sin( (ω² − aπ)/(4a) )
15. |x|ⁿ e^{−a|x|}  —  Γ(n + 1)( 1/(a − iω)^{n+1} + 1/(a + iω)^{n+1} )
16. sgn(x) †  —  2/(iω)
17. H(x) ‡  —  π( 1/(iπω) + δ₀(ω) )
18. 1/xⁿ  —  −iπ ( (−iω)^{n−1}/(n − 1)! ) sgn(ω)
19. J₀(x)  —  ( 2/√(1 − ω²) ) rect(ω/2) †
20. e^{iax}  —  2πδ(ω − a)
21. xⁿ  —  2πiⁿ δ^{(n)}(ω) ‡
22. 1/(1 + a²x²)  —  (π/|a|) e^{−|ω/a|}
23. x/(a² + x²)  —  −πi e^{−a|ω|} sgn(ω) †
B TABLE OF FOURIER TRANSFORMS 525
‡ The Heaviside function H(x) and the Dirac delta “function” δ(x):
H(x) = { 0, −∞ < x < 0;  1, 0 ≤ x < ∞ };   δ(x) = { ∞, x = 0;  0, x ≠ 0 }.
lim_{n→∞} ∫_a^b f_n(x) dx = ∫_a^b f(x) dx.
C SERIES AND UNIFORM CONVERGENCE FACTS 527
Theorem C.5. Let (f_n) be a sequence of differentiable functions on [a, b]
such that the numerical sequence (f_n(x₀)) converges at some point x₀ ∈ [a, b].
If the sequence of derivatives (f_n′) converges uniformly on [a, b], then (f_n)
converges uniformly on [a, b] to a differentiable function f and moreover,
f′(x) = lim_{n→∞} f_n′(x) for every x ∈ [a, b].
S_n(x) = Σ_{k=1}^n f_k(x)
is convergent, then
Σ_{n=1}^∞ f_n
is uniformly convergent on S.
in which (an ) is a sequence of real numbers and x a real variable. The basic
convergence properties of power series are described by the following theorem.
528 APPENDICES
let R be the largest limit point of the sequence ( |a_n|^{1/n} ). Then
a. If R = 0, then the power series is absolutely convergent on the whole
real line and it is uniformly convergent on any bounded set of the real
line.
e^x = Σ_{n=0}^∞ xⁿ/n!, x ∈ R;   ln(1 + x) = Σ_{n=0}^∞ (−1)ⁿ x^{n+1}/(n + 1), −1 < x ≤ 1.
sin x = Σ_{n=0}^∞ (−1)ⁿ x^{2n+1}/(2n + 1)!, x ∈ R;   cos x = Σ_{n=0}^∞ (−1)ⁿ x^{2n}/(2n)!, x ∈ R.
D BASIC FACTS OF ORDINARY DIFFERENTIAL EQUATIONS 529
(D.1) F(x, y, y′) = 0,
An equation of the form
(D.7) a(x)y″ + b(x)y′ + c(x)y = r(x)
is called a second order linear equation. The coefficients a(x), b(x) and c(x)
are assumed to be continuous on an interval I.
If a(x) ≠ 0 for every x ∈ I, then we can divide by a(x) and equation
(D.7) takes the normal form
y″ + p(x)y′ + q(x)y = f(x).
When f(x) ≡ 0, the resulting equation
(D.8) y″ + p(x)y′ + q(x)y = 0
is called homogeneous.
Theorem D.2. If the function yp(x) is any particular solution of the non-
homogeneous equation (D.7) and yh(x) = c₁y₁(x) + c₂y₂(x) is the general
solution of the homogeneous equation (D.8), then y(x) = yh(x) + yp(x) is
the general solution of the nonhomogeneous equation.
For second order linear equations we have the following existence and
uniqueness theorem.
Since the system of the last two equations has a nontrivial solution (c1 , c2 ),
its determinant W (y1 , y2 ; x) must be zero for every x ∈ I.
The converse of Theorem D.6 is not true. In other words, two differen-
tiable functions y₁ and y₂ can be linearly independent on an interval even
though their Wronskian vanishes on that interval.
Example D.3. The functions y1 (x) = x3 and y2 (x) = x2 |x| are linearly in-
dependent on the interval (−1, 1), but their Wronskian W (y1 , y2 ; x) is iden-
tically zero on (−1, 1).
Solution. Indeed, if c1 x3 + c2 x2 |x| = 0 for some constants c1 and c2 and
every x ∈ (−1, 1), then taking x = −1 and x = 1 in this equation we obtain
that −c1 +c2 = 0 and c1 +c2 = 0. From the last two equations it follows that
c1 = c2 = 0. Therefore y1 and y2 are linearly independent on (−1, 1). Also,
it is easily checked that y1′ (x) = 3x2 and y2′ (x) = 3x|x| for every x ∈ (−1, 1).
Therefore y₁ and y₂ are differentiable on (−1, 1) and
W(y₁, y₂; x) = x³ · 3x|x| − 3x² · x²|x| = 0 for every x ∈ (−1, 1).
The last equation can be integrated by separation of variables and one
solution of this equation is
v(x) = (1/y₁²) e^{−∫ p(x) dx}.
Thus,
u(x) = ∫ e^{−∫ p(x) dx} / y₁²(x) dx
y ′′ + p(x)y ′ + q(x)y = 0,
i.e.,
yh (x) = c1 y1 (x) + c2 y2 (x)
is a general solution of the homogeneous equation, then a particular solution
yp (x) of the nonhomogeneous equation
is given by
yp (x) = c1 (x)y1 (x) + c2 (x)y2 (x),
where the two differentiable functions c1 (x) and c2 (x) are determined by
solving the system
{
c′1 (x)y1 (x) + c′2 (x)y2 (x) = 0
c′1 (x)y1′ (x) + c′2 (x)y2′ (x) = r(x)
(D.12) ar2 + br + c = 0.
D BASIC FACTS OF ORDINARY DIFFERENTIAL EQUATIONS 535
we have three possibilities: two real distinct roots when b2 − 4ac > 0, one
real repeated root when b2 − 4ac = 0, and two complex conjugate roots when
b2 − 4ac < 0. We consider each case separately.
Two Real Distinct Roots. Let (D.12) have two real and distinct roots r₁
and r₂. Using the Wronskian it can be easily checked that the functions
y₁(x) = e^{r₁x} and y₂(x) = e^{r₂x} are linearly independent on the whole real
line, so the general solution of (D.11) is
y(x) = c₁e^{r₁x} + c₂e^{r₂x}.
One Real Repeated Root. Suppose that the characteristic equation (D.12)
has a real repeated root r = r₁ = r₂. In this case we have only one solution
y₁(x) = e^{rx} of the equation. We use this solution and (D.10) in order to
obtain a second linearly independent solution y₂(x) = xe^{rx}. Therefore, a
general solution of (D.11) is
y(x) = c₁e^{rx} + c₂xe^{rx}.
y″ − 2y′ + y = eˣ ln x, x > 0.
The characteristic equation has the repeated root r = 1, and so y₁(x) = eˣ
and y₂(x) = xeˣ are two linearly independent solutions of the homogeneous
equation. Therefore,
yh (x) = c1 ex + c2 xex
Solving the last system for c′1 (x) and c′2 (x) we obtain
Using the integration by parts formula, from the last two equations it follows
that
c₁(x) = x²/4 − (x²/2) ln x  and  c₂(x) = x ln x − x.
Therefore,
yp(x) = (x²/2) eˣ ln x − (3/4) x² eˣ
and the general solution is
y(x) = yh(x) + yp(x) = c₁eˣ + c₂xeˣ + (x²/2) eˣ ln x − (3/4) x² eˣ.
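A quick numerical sanity check (an illustration, not from the book) that the particular solution found above really satisfies the equation: approximate the derivatives of y_p(x) = (x²/2)eˣ ln x − (3/4)x²eˣ by central finite differences at a sample point and verify that y″ − 2y′ + y − eˣ ln x is essentially zero.

```python
import math

def yp(x):
    # the particular solution obtained by variation of parameters
    return 0.5 * x * x * math.exp(x) * math.log(x) - 0.75 * x * x * math.exp(x)

x, h = 2.0, 1e-5
d1 = (yp(x + h) - yp(x - h)) / (2 * h)              # y_p'
d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2     # y_p''
residual = d2 - 2 * d1 + yp(x) - math.exp(x) * math.log(x)
```

The residual is limited only by the finite-difference truncation and rounding error.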
where a, b and c are real constants. To solve this equation we assume that
it has a solution of the form y = xr . After substituting y and its first and
second derivative into the equation we obtain
(D.14) ar(r − 1) + br + c = 0.
Two Complex Conjugate Roots. If (D.14) has two complex conjugate roots
r₁ = α + βi and r₂ = α − βi, then
y(x) = c₁ x^α cos(β ln x) + c₂ x^α sin(β ln x)
For each value of x either the series converges or it does not. The set of all
x for which the series converges is an interval (open or not, bounded or not),
centered at the point x0 . The largest R, 0 ≤ R ≤ ∞ for which the series
converges for every x ∈ (x0 − R, x0 + R) is called the radius of convergence
of the power series and the interval (x0 − R, x0 + R) is called the interval of
convergence. Within the interval of convergence of a power series, the function
that it represents can be differentiated and integrated by differentiating and
integrating the power series term by term.
A function f is said to be analytic in some open interval centered at x0 if
for each x in that interval the function can be represented by a power series
f(x) = Σ_{n=0}^∞ c_n (x − x₀)ⁿ.
where a(x), b(x) and c(x) are analytic functions in some open interval con-
taining the point x0 . If x0 is an ordinary point for (D.7) (a(x0 ) ̸= 0), then a
general solution of Equation (D.7) can be expressed in form of a power series
y(x) = Σ_{n=0}^∞ c_n (x − x₀)ⁿ.
Example D.5. Find the general solution in the form of a power series about
x0 = 0 of the equation
(D.15) y ′′ − 2xy ′ − y = 0.
Since a(x) ≡ 1 does not have any singularities, the interval of convergence
of the above power series is (−∞, ∞). Differentiating twice the above power
series we have that
y′(x) = Σ_{n=1}^∞ n c_n x^{n−1},   y″(x) = Σ_{n=2}^∞ n(n − 1) c_n x^{n−2}.
We substitute the power series for y(x), y ′ (x) and y ′′ (x) into Equation
(D.15) and we obtain
Σ_{n=2}^∞ n(n − 1) c_n x^{n−2} − 2x Σ_{n=1}^∞ n c_n x^{n−1} − Σ_{n=0}^∞ c_n xⁿ = 0.
If we insert the term x inside the second power series, after re-indexing the
first power series we obtain
Σ_{n=0}^∞ (n + 2)(n + 1) c_{n+2} xⁿ − 2 Σ_{n=1}^∞ n c_n xⁿ − Σ_{n=0}^∞ c_n xⁿ = 0.
If we break the first and the last power series into two parts, the first terms
and the rest of these power series, we obtain
2 · 1 · c₂ + Σ_{n=1}^∞ (n + 2)(n + 1) c_{n+2} xⁿ − 2 Σ_{n=1}^∞ n c_n xⁿ − c₀ − Σ_{n=1}^∞ c_n xⁿ = 0.
Combining the three power series above into one power series, it follows that
2c₂ − c₀ + Σ_{n=1}^∞ [ (n + 2)(n + 1) c_{n+2} − (2n + 1) c_n ] xⁿ = 0.
Since a power series is identically zero in its interval of convergence only if all
the coefficients of the power series are zero, we obtain
Therefore,
c₂ = (1/2) c₀  and  c_{n+2} = ( (2n + 1)/((n + 2)(n + 1)) ) c_n, n ≥ 1.
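As a small check (not from the book), the recursion can be run numerically: build the coefficients from c₂ = c₀/2 and c_{n+2} = (2n+1)c_n/((n+2)(n+1)) — note that the n = 0 case of the recursion reproduces c₂ = c₀/2 — and verify that the truncated series satisfies y″ − 2xy′ − y = 0 at a sample point.

```python
# coefficients of the power-series solution with c0 = 1, c1 = 0
c = [1.0, 0.0]
for n in range(40):
    c.append((2 * n + 1) * c[n] / ((n + 2) * (n + 1)))

x = 0.3
y   = sum(ck * x**k for k, ck in enumerate(c))
yp  = sum(k * ck * x**(k - 1) for k, ck in enumerate(c) if k >= 1)
ypp = sum(k * (k - 1) * ck * x**(k - 2) for k, ck in enumerate(c) if k >= 2)
residual = ypp - 2 * x * yp - y    # should vanish up to truncation error
```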
From this equation, recursively we obtain
c₂ = (1/2) c₀,
c₄ = (1·5/4!) c₀,
c₆ = (1·5·9/6!) c₀,
...
c_{2n} = ( 1·5·9···(4n−3)/(2n)! ) c₀, n ≥ 1
and
c₃ = (3/3!) c₁,
c₅ = (3·7/5!) c₁,
c₇ = (3·7·11/7!) c₁,
...
c_{2n−1} = ( 3·7·11···(4n−5)/(2n−1)! ) c₁, n ≥ 2.
Therefore the general solution of Equation (D.15) is
y(x) = c₀ [ 1 + Σ_{n=1}^∞ ( 1·5···(4n−3)/(2n)! ) x^{2n} ] + c₁ [ x + Σ_{n=2}^∞ ( 3·7···(4n−5)/(2n−1)! ) x^{2n−1} ]
where a(x), b(x) and c(x) are analytic functions in some open interval con-
taining the point x₀. If x₀ is a singular point for (D.7) (a(x₀) = 0), then a
general solution of Equation (D.7) cannot always be expressed in the form of
a power series. In order to deal with this situation, we distinguish two types
of singular points: regular and irregular singular points.
A singular point x0 is called a regular singular point of Equation (D.7) if
both functions
(x − x₀) b(x)/a(x)   and   (x − x₀)² c(x)/a(x)
are analytic at x = x0 . A point which is not a regular singular point is called
an irregular singular point.
Example D.6. Consider the equation
Now, we explain the method of Frobenius for solving a second order linear
equation (D.7) when x₀ is a regular singular point. For convenience, we
assume that x₀ = 0 is a regular singular point of (D.7) and we may consider
the equation of the form
and
x² q(x) = b₀ + Σ_{n=1}^∞ b_n xⁿ.
If we substitute these power series and the series for y, y ′ and y ′′ in (D.17),
after rearrangements we obtain
Σ_{n=0}^∞ [ (r + n)(r + n − 1) c_n + Σ_{k=0}^n ( a_{n−k}(r + k) c_k + b_{n−k} c_k ) ] xⁿ = 0.
For n = 0 we have
(D.18) r(r − 1) + a0 r + b0 = 0.
If the roots r1 and r2 of the indicial equation (D.18) are complex conju-
gate numbers, then r1 −r2 is not an integer number. Therefore, two solutions
of the forms as in Case 1 of the theorem are obtained by taking the real and
imaginary parts of them.
Many important equations in mathematical physics have solutions obtained
by the Frobenius method. One of them, the Bessel equation, is discussed in
detail in Chapter 3.
are the three unit vectors in the Euclidean space R³, then any vector
x = (x₁, x₂, x₃)
can be written as
x = x₁ i + x₂ j + x₃ k.
x · y = x1 y1 + x2 y2 + x3 y3 .
where α is the oriented angle between x and y and n is the unit vector
perpendicular to both vectors x and y and whose direction is given by the
right-hand rule. If x = x₁i + x₂j + x₃k and y = y₁i + y₂j + y₃k, then
(E.3) x × y = | i j k; x₁ x₂ x₃; y₁ y₂ y₃ |
= (x₂y₃ − x₃y₂) i − (x₁y₃ − x₃y₁) j + (x₁y₂ − x₂y₁) k.
is called the gradient of F (x, y, z) and is sometimes denoted by grad F (x, y, z).
An important fact for the gradient is the following.
If x = f (t), y = g(t) and z = h(t) are parametric equations of a smooth
curve on a smooth surface F (x, y, z) = c, then for the scalar-valued function
F (x, y, z) and the vector-valued function r(t) = f (t)i + g(t)j + h(t)k we have
(E.8) curl(F) = ∇ × F,
and is a measure of the tendency of rotation around a point in the vector field.
If curl(F) = 0, then the field F is called irrotational.
( )
The divergence of a vector field F(x, y, z) = f (x, y, z), g(x, y, z), h(x, y, z)
is defined by
(E.9) div(F) = ∇ · F = fx + gy + hz .
∇ · (f F) = f ∇ · F + F · ∇f,
∇ × (f F) = f ∇ × F + ∇f × F,
(E.10) ∇2 f = ∇ · ∇f = fxx + fyy + fzz ,
∇ × ∇f = 0,
∇ · (∇ × F) = 0,
F = ∇f and ∇2 f = 0.
Figure E.1

Figure E.2
Spherical Coordinates. From Figure E.3 we see that the spherical coordinates
(ρ, φ, θ) in R3 and the Cartesian coordinates (x, y, z) are related by the
formulas
Figure E.3
where n is the outward normal vector on the boundary C and ∇ = ( ∂/∂x, ∂/∂y )
is the gradient.
z₁ + z₂ = (x₁ + x₂, y₁ + y₂),   z₁ · z₂ = (x₁x₂ − y₁y₂, x₁y₂ + x₂y₁).
√
|z| = x2 + y 2 .
For any two complex numbers z1 and z2 the triangle inequality holds:
1/z = (1/|z|²) z̄.
where
eix = cos x + i sin x.
If z = r(cos θ + i sin θ) is a complex number and n a natural number,
then there are n complex numbers z_k, k = 1, 2, · · · , n (called the nth roots
of z) such that z_kⁿ = z, and for each k = 1, 2, · · · , n we have
z_k = r^{1/n} ( cos( (θ + 2kπ)/n ) + i sin( (θ + 2kπ)/n ) ).
f′(z) = lim_{∆z→0} ( f(z + ∆z) − f(z) )/∆z
∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x.
Conversely, if the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x and ∂v/∂y exist and are
continuous on an open set U and the Cauchy–Riemann conditions are satis-
fied, then f is analytic on U .
The functions ez , z n , n ∈ N, sin z, cos z are analytic in the whole
complex plane. The principal logarithmic function ln z = ln |z| + i arg(z),
0 < arg(z) ≤ 2π is analytic in the set C \ {x ∈ R : x ≥ 0}—the whole
complex plane cut along the positive part of the Ox–axis.
With the principal branch of the logarithm we define complex powers by
z a = e(a ln z) .
which converges absolutely and uniformly in every open disc D(z0 , R) whose
closure D(z0 , r) lies entirely in the open set U . The coefficients an are
uniquely determined by f and z0 and they are given by
a_n = f^{(n)}(z₀)/n!.
e^z = Σ_{n=0}^∞ zⁿ/n!,   sin z = Σ_{n=1}^∞ (−1)^{n−1} z^{2n−1}/(2n−1)!,
cos z = Σ_{n=0}^∞ (−1)ⁿ z^{2n}/(2n)!,   1/(1 − z) = Σ_{n=0}^∞ zⁿ, |z| < 1.
If f and g are analytic functions on some open and connected set U and
f = g on some open disc D ⊂ U , then f ≡ g on the whole U .
If f is analytic on some open and connected set U and not identically
zero, then the zeros of f are isolated, i.e., for any zero a of f there exists
an open disc D(a, r) ⊂ U such that f (z) ̸= 0 for every z ∈ D(a, r) \ {a}.
If f is analytic in a punctured disc D̃(z0 , r) = {z ∈ C : 0 < |z − z0 | < r}
but not analytic at z0 , then the point z0 is called an isolated singularity of
f . In this case f can be expanded in a Laurent series about z0 :
f(z) = Σ_{n=−∞}^∞ a_n (z − z₀)ⁿ, 0 < |z − z₀| < r.
∫_γ f(z) dz = ∫_a^b f(γ(t)) γ′(t) dt.
F A SUMMARY OF ANALYTIC FUNCTION THEORY 553
Figure F.1
Solution. We cut the complex plane along the positive real axis and consider
the region bounded by the Bromwich contour in Figure F.2.
Figure F.2
F A SUMMARY OF ANALYTIC FUNCTION THEORY 555
and therefore
| z^{−y}/(1 + z) | = |z|^{−y}/|1 + z| ≤ |z|^{−y} / | 1 − |z| |.
Hence, for small ϵ and large R we have
| ∫_{C_R} z^{−y}/(1 + z) dz | ≤ 2πR · R^{−y}/(R − 1),   | ∫_{c_ϵ} z^{−y}/(1 + z) dz | ≤ 2πϵ · ϵ^{−y}/(1 − ϵ).
Therefore
( e^{πiy} − e^{−πiy} ) ∫₀^∞ t^{−y}/(1 + t) dt = 2πi
and finally
∫₀^∞ t^{−y}/(1 + t) dt = π/sin(πy).
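A numeric spot-check of the final formula (a sketch, not from the book) for y = 1/2, where π/sin(πy) = π. Splitting the integral at t = 1, mapping (1, ∞) back to (0, 1) via t → 1/t, and substituting t = u² reduces it to 4∫₀¹ du/(1 + u²), which a midpoint rule evaluates easily:

```python
import math

N = 20000
total = 0.0
for i in range(N):
    u = (i + 0.5) / N
    total += 4.0 / (1.0 + u * u)   # reduced integrand for y = 1/2
total /= N

expected = math.pi / math.sin(math.pi * 0.5)   # = pi
```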
∫₀^∞ tⁿ e^{−t} dt = n!, n ∈ N.
This formula suggests that for x > 0 we define the gamma function by the
improper integral
Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt.
Γ(x + 1) = xΓ(x).
Since Γ(1) = ∫₀^∞ e^{−t} dt = 1, it follows recursively that
Γ(n + 1) = n!
for all natural numbers n. Thus Γ(x) is a function that continuously extends
the factorial function from the natural numbers to all of the positive numbers.
We can extend the domain of the gamma function to include all negative
real numbers that are not integers. To begin, suppose that −1 < x < 0. Then
x + 1 > 0 and so Γ(x + 1) is defined. Now set
Γ(x) = Γ(x + 1)/x.
Continuing in this way we see that for every natural number n we have
(G.1) Γ(x) = Γ(x + n) / ( x(x + 1) · · · (x + n − 1) ), x > −n,
and so we can define Γ(x) for every x in R except the nonpositive integers.
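A quick check (an illustration, not from the book) of the extension formula with the denominator product running up to x + n − 1, using Python's `math.gamma`: for x = −0.5 and n = 2 it should reproduce Γ(−0.5) = Γ(1.5)/((−0.5)(0.5)) = −2√π.

```python
import math

x, n = -0.5, 2
denom = 1.0
for k in range(n):          # x (x+1) ... (x+n-1)
    denom *= x + k
value = math.gamma(x + n) / denom    # should equal Gamma(-0.5)
```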
A plot of the Gamma function Γ(x) for real values x is given in Figure G.1.
G EULER GAMMA AND BETA FUNCTIONS 557
Figure G.1
Res(Γ(z), −n) = (−1)ⁿ/n!.
The Euler Gamma function Γ(x) for x > 0 can also be defined by
Γ(x) = lim_{n→∞} Γ_n(x),
where
Γ_n(x) = n! nˣ / ( x(x + 1) · · · (x + n) ).
The functional equation
Γ(z + 1) = zΓ(z)
holds for all z ∈ C at which Γ is defined.
In particular, for z = 1/2 we have that
Γ(1/2) = ∫₀^∞ t^{−1/2} e^{−t} dt = √π.
The previous formula together with the recursive formula for the Gamma
function implies that
Γ(n + 1/2) = ( 1·3·5· · · ·(2n−1)/2ⁿ ) √π
and
Γ(−n + 1/2) = ( (−1)ⁿ 2ⁿ / (1·3·5· · · ·(2n−1)) ) √π
for all nonnegative integers n.
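Both half-integer formulas can be verified against `math.gamma` (a quick Python cross-check, not part of the book's text):

```python
import math

for n in range(1, 6):
    odd = 1
    for k in range(1, 2 * n, 2):        # 1*3*5*...*(2n-1)
        odd *= k
    plus = odd / 2**n * math.sqrt(math.pi)               # Gamma(n + 1/2)
    minus = (-1)**n * 2**n * math.sqrt(math.pi) / odd    # Gamma(-n + 1/2)
    assert abs(math.gamma(n + 0.5) - plus) < 1e-9
    assert abs(math.gamma(-n + 0.5) - minus) < 1e-9
```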
The function Γ does not have zeroes, and 1/Γ(z) is analytic on the entire
complex plane.
The Gamma function Γ(z) is infinitely differentiable on Re(z) > 0
and for all x > 0 we have that
Γ′(x)/Γ(x) = −γ − 1/x + Σ_{n=1}^∞ x/( n(n + x) ),
Euler discovered another function, called the Beta function, which is closely
related to the Gamma function Γ(x). For x > 0 and y > 0, we define the
Beta function B(x, y) by
B(x, y) = ∫₀¹ t^{x−1} (1 − t)^{y−1} dt.
If 0 < x < 1, the integral is improper (singularity at the left end point of
integration a = 0). If 0 < y < 1, the integral is improper (singularity at the
right end point of integration b = 1).
For all x, y > 0 the improper integral converges and the following formula
holds.
B(x, y) = Γ(x) Γ(y) / Γ(x + y).
H BASICS OF MATHEMATICA 559
H. Basics of Mathematica
−5 + 3 · (−2) + ( 2^{−3} + 3 · (5 − 7) ) / ( −2/3 + (3/4) · (−9 + 3) )
If we want to get a numerical value for the last expression with, say, 20
digits, then in the cell type
In[ ]:=N[%, 20]
Out[ ]=−9.8629032258064516129
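The same arithmetic can be cross-checked exactly in Python with the `fractions` module (an illustrative aside — the book's session is in Mathematica): the expression evaluates to the rational −1223/124, whose decimal expansion begins −9.86290322580645…, matching the displayed output.

```python
from fractions import Fraction as F

# 2^{-3} = 1/8; the remaining pieces are entered as exact fractions
value = -5 + 3 * (-2) + (F(1, 8) + 3 * (5 - 7)) / (F(-2, 3) + F(3, 4) * (-9 + 3))
```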
One single input cell may consist of several commands. A new line within the
current cell is obtained by pressing the Enter key. Commands on the same line
within a cell must be separated by semicolons. The output of any command
that ends with a semicolon is not displayed.
In[ ]:=x = 5/4 − 3/7; y = 9/5 + 4/9 − 9 x; z = 2 x^2 − 3 y^2
Out[ ]=−2999/392
h(x) = { x, −π ≤ x < 0;  x², 0 ≤ x < π;  0, elsewhere }
we use
In[ ]:=h[x_]=Piecewise[{{x, −Pi <= x < 0}, {x^2, 0 <= x < Pi}}, 0];
If we now type
In[ ]:=h[2]
we obtain
Out[ ]=4
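The same piecewise function is easy to express in Python for comparison (an illustration; the displayed output h[2] = 4 comes from the x² branch, since 0 ≤ 2 < π):

```python
import math

def h(x):
    # piecewise definition mirroring the Mathematica Piecewise above
    if -math.pi <= x < 0:
        return x
    if 0 <= x < math.pi:
        return x * x
    return 0
```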
To plot many points on the plane that are on a given curve we use the
ListPlot command.
Example.
In[ ]:=list=Table[{x, Sin[x]}, {x, −2 P i, 2 P i, 0.1}];
In[ ]:=p1=ListPlot[list]
r = 2,   r = 2 + (1/3) sin 10φ,   r = sin 5φ,   0 ≤ φ ≤ 2π.
In[ ]:=PolarPlot[{2, 2 + 1/3 Sin[10 t], Sin[5 t]}, {t, 0, 2 Pi},
PlotStyle −> {Black, Dashed, Black}, Ticks −> {{−2, −1, 0, 1, 2},
{−2, −1, 0, 1, 2}}, AxesLabel −> {x, y}]
The command for partial derivatives is the same. For example, the com-
mand for the mixed partial derivative of f (x, y) is D[f [x, y], x, y], or the
command for the second partial derivative of f (x, y) with respect to y is
D[f [x, y], y, 2].
Integrate[f[x], x] is the command for the indefinite integral ∫ f(x) dx. For
the definite integral ∫_a^b f(x) dx the command is Integrate[f[x], {x, a, b}].
In[ ]:=Solve[a x^2 + x + 1 == 0, x]
Out[ ]={{x −> (−1 − √(1 − 4a))/(2a)}, {x −> (−1 + √(1 − 4a))/(2a)}}
Examples.
Approximate solutions to a polynomial equation:
In[ ]:=NSolve[x^5 − 2 x + 1 == 0, x]
Out[ ]={{x− > −1.29065}, {x− > −0.114071 − 1.21675 i},
{x− > −0.114071 + 1.21675 i}, {x− > 0.51879}, {x− > 1.}}
Examples.
Find all roots of the function f(x) = x − 2 sin x near x = π.
In[ ]:=FindRoot[x − 2 Sin[x], {x, Pi}]
Out[ ] = {x −> 1.89549}
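The same root can be found in plain Python (an illustrative alternative to Mathematica's root finder): f(x) = x − 2 sin x changes sign on [1, 2], so bisection converges to the root near π.

```python
import math

f = lambda x: x - 2 * math.sin(x)
a, b = 1.0, 2.0           # f(1) < 0 < f(2), so a sign change is bracketed
for _ in range(60):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
root = (a + b) / 2        # approximately 1.89549, as in the output above
```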
Find the general solution of the second order partial differential equation
In[ ]:=DSolve[3 D[z[x, t], {x, 2}] − 2 D[z[x, t], {t, 2}] == 1, z, {x, t}]
Out[ ] = {{z −> Function[{x, t}, C[1][t − √(2/3) x] + C[2][t + √(2/3) x] + x²/6]}}
Examples.
In[ ]:=nsol=NDSolve[{y ′ [t] == y[t] Cos[t2 + y[t]2 ], y[0] == 1}, y, {t, 0, 20}]
In[ ]:=Plot[Evaluate [y[t]/. nsol], {t, 0, 20}, Ticks − > {{0, 5, 10, 15, 20},
{0, 1}}, PlotRange − > All, AxesLabel − > {x, t}]
Plot the solution of the initial value problem (partial differential equation)
Out[ ] = {{1/(1+1), 1/(1+2), 1/(1+3)}, {1/(2+1), 1/(2+2), 1/(2+3)}}
In[ ]:=MatrixForm[B]
Out[ ]= ( 1/(1+1)  1/(1+2)  1/(1+3)
          1/(2+1)  1/(2+2)  1/(2+3) ).
The matrix operations addition and subtraction are represented with the
usual + and − keys. Matrix multiplication is represented using the dot (.)
key.
Examples.
Let
In[ ] := X = {{1, 3, 4}, {3, 0, 2}, {1, 1, 1}}; Y = {{2, 1, 4}, {1, 1, 3}, {1, 2, 1}};
Then
In[ ] := X + Y
Out[ ] = {{3, 4, 8}, {4, 1, 5}, {2, 3, 2}}
In[ ] := X.Y
Out[ ] = {{9, 12, 17}, {8, 7, 14}, {4, 4, 8}}
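A plain-Python cross-check of the two operations (an illustration mirroring the Mathematica session; note that the (1,1) entry of the product is 1·2 + 3·1 + 4·1 = 9):

```python
X = [[1, 3, 4], [3, 0, 2], [1, 1, 1]]
Y = [[2, 1, 4], [1, 1, 3], [1, 2, 1]]

# entrywise sum, matching X + Y above
add = [[X[i][j] + Y[i][j] for j in range(3)] for i in range(3)]

# matrix product, matching X.Y above
dot = [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]
```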
The commands Det[A] and Inverse[A] are the commands for the deter-
minant and the inverse matrix of a square matrix A. Transpose[M] is the
command for the transpose matrix of any matrix M.
Examples.
In[ ] := Det[{{1, 3, 4}, {3, 0, 2}, {1, 1, 1}}]
Out[ ] = 7
Solution. We form the coefficient matrix and the right side matrix.
In[ ] := A = {{3, −2, 1}, {2, 1, −1}, {3, −2, −3}}; b = {−1, 0, 5};
Next we find the inverse matrix B of the matrix A:
In[ ]:=B=Inverse[A];
We find the solution of the system by
In[ ]:=Solution=B.b
Out[ ] = {−5/14, −11/14, −3/2}
Notice that Mathematica gives the exact solution of the system. If, instead
of the matrix A, we use the matrix
In[ ] := A1 = {{3., −2., 1.}, {2., 1., −1.}, {3., −2., −3.}};
then the inverse matrix of A1 will be the matrix
In[ ] := B1 = Inverse[A1 ]
Out[ ] := {{0.178571, 0.285714, −0.0357143},
{−0.107143, 0.428571, −0.178571}, {0.25, 0., −0.25}}
and the numerical solution of the system will be
In[ ]:=NumerSolution=B1 .b
Out[ ] := {−0.357143, −0.785714, −1.5}
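A small consistency check (in Python, as an aside to the Mathematica session): multiplying the displayed numerical inverse by b reproduces the numerical solution, which agrees with the exact values −5/14, −11/14, −3/2 to the printed precision.

```python
B1 = [[0.178571, 0.285714, -0.0357143],
      [-0.107143, 0.428571, -0.178571],
      [0.25, 0.0, -0.25]]
b = [-1.0, 0.0, 5.0]

# sol = B1 . b, mirroring NumerSolution = B1.b above
sol = [sum(B1[i][j] * b[j] for j in range(3)) for i in range(3)]
exact = [-5 / 14, -11 / 14, -3 / 2]
```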
ANSWERS TO EXERCISES
Section 1.1.
5. (a) 2 Σ_{n=1}^∞ ( (−1)^{n+1}/n ) sin nx.
(b) 2/π − (4/π) Σ_{n=1}^∞ cos 2nx/(4n² − 1).
(c) 2 Σ_{n=1}^∞ sin nx/n.
(d) (8/π) Σ_{n=1}^∞ sin(2n−1)x/(2n−1)³.
(e) (e^{2aπ} − 1)/(2aπ) + (1/π) Σ_{n=1}^∞ [ a(e^{2aπ} − 1)/(a² + n²) cos nx − n(e^{2aπ} − 1)/(a² + n²) sin nx ].
(f) sinh aπ/(aπ) + (2 sinh aπ/π) Σ_{n=1}^∞ [ a(−1)ⁿ/(a² + n²) cos nx − n(−1)ⁿ/(a² + n²) sin nx ].
(g) −3 + Σ_{n=1}^∞ [ a_n cos(2nπx/3) + b_n sin(2nπx/3) ], where
a_n = −(6/(n²π²)) ( nπ + 2nπ cos(2nπ/3) − 3 sin(2nπ/3) );
b_n = 3(−nπ cos nπ + 3 sin nπ)/(n²π²).
(h) 3/8 + Σ_{n=1}^∞ [ a_n cos(nπx/2) + b_n sin(nπx/2) ], where
a_n = 2(−2 + 2 cos nπ)/(n²π²);
b_n = 2(−nπ cos nπ + 2 sin nπ)/(n²π²).
8. (a) π/2 − (2/π) Σ_{n=−∞, n≠0}^∞ e^{(2n−1)ix}/(2n−1)².
(b) 1 + (i/π) Σ_{n=−∞, n≠0}^∞ e^{nπix}/n.
(c) 1/2 − (i/π) Σ_{n=−∞, n≠0}^∞ e^{2(2n−1)ix}/(2n−1).
Section 1.2.
2. (a) 3 derivatives.
(c) None.
(c) The sum is 1/2.
4. (b) The sum is π/2.
6. Take x = π/2.
7. (b) The sum is 3π/4.
Section 1.3.
1. (a) x/2 = Σ_{n=1}^∞ ( (−1)^{n+1}/n ) sin nx.   Σ_{n=1}^∞ 1/n² = π²/6.
(b) f(x) = x³ − π²x = 12 Σ_{n=1}^∞ ( (−1)ⁿ/n³ ) sin nx.   Σ_{n=1}^∞ 1/n⁶ = π⁶/945.
n=1 n=1
2. (a) f(x) = x/2.   f(x) = Σ_{n=1}^∞ ( (−1)^{n+1}/n ) sin nx.   Σ_{n=1}^∞ 1/n² = π²/6.
(b) f(x) = { −1, −π < x < 0;  1, 0 < x < π }.
f(x) = (4/π) Σ_{n=1}^∞ ( 1/(2n−1) ) sin(2n−1)x.   Σ_{n=1}^∞ 1/(2n−1)² = π²/8.
(c) f(x) = |x|.   f(x) = π/2 − (4/π) Σ_{n=1}^∞ ( 1/(2n−1)² ) cos(2n−1)x, −π < x < π.
Σ_{n=1}^∞ 1/(2n−1)⁴ = π⁴/96.
n=1
∑
∞
(d) f (x) = | sin x| = 2
π − 4
π
1
4n2 −1 cos 2nx.
n=1
∑
∞
1 π 2 −8
(4n2 −1)2 = 16 .
n=1
∑
∞
(−1)n+1 sin nx
4. (a) f (x) = sinh x = sinh π
π n2 +1 , |x| < π.
n=1
4 sinh2 π
∑
∞
sin2 na −π+cosh pi sinh pi
π2 n2 = π .
n=1
{ 1
2a , |x| < a 1 1
∑
∞
sin na
(b) f (x) = f (x) = + cos nx.
0, a < |x| < π. 2π π
n=1
na
∑
∞
sin2 na
( )
n2
a
= 2π π−a .
n=1
5. (a) 1/4 + 4 Σ_{n=2}^∞ 1/(n²−1)² = (1/π) ∫_{−π}^π x² cos²x dx = (3 + 2π²)/6.
(b) 1/2 + 1/4 + 4 Σ_{n=2}^∞ 1/(n²−1)² = (1/π) ∫_{−π}^π x² sin²x dx = (−3 + 2π²)/6.
9. f(x) = 1/(2π) + (2/π) Σ_{n=1}^∞ ( (1 − cos na)/(n²a²) ) cos nx.
1/(2π²) + (4/π²) Σ_{n=1}^∞ (1 − cos na)²/(n⁴a⁴) = (1/π) ∫_{−π}^π f²(x) dx.
10. (b), (c) f(x) = 2/π − (4/π) Σ_{n=1}^∞ ( (−1)ⁿ/(4n²−1) ) cos 2nx, x ∈ R.
11. (b) f(x) = 1/π + (1/2) sin x − (2/π) Σ_{n=1}^∞ ( 1/(4n²−1) ) cos 2nx, x ∈ R.
(d) F(x) = −(1/2) cos x + (2/π) Σ_{n=1}^∞ ( 2n/(4n²−1) ) sin 2nx,
13. α > 1.
15. No. For square integrable functions f the answer follows from the
Parseval identity. For integrable functions f the answer follows by
approximation with square integrable functions.
16. (a) y(t) = C₁ cosh t + C₂ sinh t − 1/2 − (2/π) Σ_{n=1}^∞ ( 1/((2n−1) + (2n−1)³) ) sin(2n−1)t.
(b) y(t) = C₁e^{−2t} + C₂e^{−t} + 1/4
+ (6/π) Σ_{n=1}^∞ cos(2n−1)t / ( (2n−1)[ (2 − (2n−1)²)² + 9(2n−1)² ] )
+ (2/π) Σ_{n=1}^∞ ( 2 − (2n−1)² ) sin(2n−1)t / ( (2n−1)[ (2 − (2n−1)²)² + 9(2n−1)² ] ).
Section 1.4.
1. Fourier Cosine Series: f(x) = π/2 + (4/π) Σ_{n=1}^∞ ( 1/(2n−1)² ) cos(2n−1)x.
Fourier Sine Series: f(x) = 2 Σ_{n=1}^∞ ( 1/n ) sin nx.
2. Fourier Cosine Series: f(x) = 2/π − (4/π) Σ_{n=1}^∞ ( 1/(4n²−1) ) cos 2nx.
Fourier Sine Series: f(x) = (4/π) Σ_{n=1}^∞ ( n/(4n²−1) ) sin 2nx.
4. Fourier Cosine Series: f(x) = π²/3 + 4 Σ_{n=1}^∞ ( (−1)ⁿ/n² ) cos nx.
Fourier Sine Series: f(x) = (2/π) Σ_{n=1}^∞ ( (−2 + (2 − n²π²)(−1)ⁿ)/n³ ) sin nx.
5. Fourier Cosine Series: f(x) = π/4 + (8/π) Σ_{n=1}^∞ ( cos(nπ/2) sin²(nπ/4)/n² ) cos nx.
Fourier Sine Series: f(x) = (4/π) Σ_{n=1}^∞ ( sin(nπ/2)/n² ) sin nx.
6. Fourier Cosine Series: f(x) = π/2 − (4/π) Σ_{n=1}^∞ ( 1/(2n−1)² ) cos(2n−1)x.
Fourier Sine Series: f(x) = 2 Σ_{n=1}^∞ ( (−1)^{n−1}/n ) sin nx.
8. Fourier Cosine Series: f(x) = 1/2 + (2/π) Σ_{n=1}^∞ ( (−1)^{n−1}/(2n−1) ) cos((2n−1)πx/4).
Fourier Sine Series: f(x) = (2/π) Σ_{n=1}^∞ ( (1 − cos(nπ/2))/n ) sin(nπx/4).
9. Fourier Cosine Series: f(x) = 3/4 − (8/π²) Σ_{n=1}^∞ ( sin²(nπ/4)/n² ) cos(nπx/2).
Fourier Sine Series: f(x) = (4/π²) Σ_{n=1}^∞ ( (−1)^{n−1}/(2n−1)² ) sin((2n−1)πx/2)
− (2/π) Σ_{n=1}^∞ ( (−1)ⁿ/n ) sin(nπx/2).
10. Fourier Cosine Series: f(x) = 1/2 − (2/π) Σ_{n=1}^∞ ( sin(nπ/2)/n ) cos nx.
Fourier Sine Series: f(x) = (2/π) Σ_{n=1}^∞ ( (cos(nπ/2) − cos nπ)/n ) sin nx.
11. Fourier Cosine Series: f(x) = (e^π − 1)/π + (2/π) Σ_{n=1}^∞ ( ((−1)ⁿe^π − 1)/(n² + 1) ) cos nx.
Fourier Sine Series: f(x) = −(2/π) Σ_{n=1}^∞ ( n((−1)ⁿe^π − 1)/(n² + 1) ) sin nx.
Section 2.1.1.
1. (a) F(s) = 1/s².
(b) F(s) = e/(s − 3).
(c) F(s) = s/(4 + s²).
(d) F(s) = 2/( s(4 + s²) ).
(e) F(s) = e^{−2s}(−1 + e^{2s})/s.
(f) F(s) = e^{−2s}(−1 + e^{2s} + s)/s².
(g) F(s) = e^{−s}/s.
2. (a) F(s) = 1/(4 + (s−1)²).
(b) F(s) = (1 + s)/(4 + (s+1)²).
(c) F(s) = (s − 1)/(9 + (s−1)²).
(d) F(s) = (s + 2)/(16 + (s+2)²).
(e) F(s) = 5/(25 + (s+1)²).
(f) F(s) = 1/(s − 1)².
(b) F(s) = 2/(s + 3)³.
(c) F(s) = 12s/(s² + 4)².
(d) F(s) = (s + 2)/(49 + (s+2)²).
(e) F(s) = 1/s + s/(s² + 4).
(f) F(s) = 48s(s² − 4)/(s² + 4)⁴.
4. (a) F(s) = √π/√s.
(b) F(s) = √π/(2s^{3/2}).
5. (a) f (t) = t.
6. (a) f(t) = (1/250)( −34eᵗ + 160teᵗ − 75t²eᵗ + 34 cos 2t − 63 sin 2t ).
(b) f(t) = (1/216)( 72t − 120t − 81 cos t + 9 cos 3t + 135 sin t − 5 sin 3t ).
Section 2.1.2.
2. (a) F(s) = 2e^{−2s}/s³.
(b) F(s) = e^{−s}(2 − s + eˢs)/s³.
(c) F(s) = e^{−s}/s.
(d) F(s) = e^{−2πs}(−1 − πs + e^{πs})/s².
(e) F(s) = −6e^{−4s}/s + 2e^{−3s}/s + e^{−s}/s.
(f) F(s) = −e^{−s}/s.
(d) F(s) = e^{−πs/2} − e^{−2πs}/(1 + s²).
(e) F(s) = e^{−πs} + e^{−πs} s/(s² + 4).
Section 2.1.3.
1. (a) f(t) = e^{−2t}( −1 + 3eᵗ ).
(b) f(t) = (1/7) e^{−7t}[ 21 + ( −e^{14} + e^{7t} ) H(t − 2) ].
(c) f(t) = (1/5) e^{−7t}[ 5 + ( −e^{14} + e^{4+5t} ) H(t − 2) ].
(d) f(t) = (1/7) e^{−7t}[ 21 + ( −e^{14} + e^{7t} ) H(t − 2) ].
(e) f(t) = (1/5) e^{−7t}[ 5 + ( −e^{14} + e^{4+5t} ) H(t − 2) ].
2. (a) f(t) = (1/13)( −3 cos 3t − 2 sin 3t )
+ (√3 e^{−t/2}/87)( 39 cos(3√3t/2) + 4 sin(3√3t/2) ).
(b) f(t) = (1/22)( cos √5(t − 4) + cos √3(t − 4) ) H(t − 4) − (2/√3) sin √3 t.
(c) f(t) = (1/18)[ 36 cos 3t + ( −3(t − 4) cos 3(t − 4) + sin 3(t − 4) ) H(t − 4) ].
(d) f(t) = (1/2)[ −( −2 + cos(t − 2) + cosh(t − 2) ) H(t − 2)
+ ( −2 + cos(t − 1) + cosh(t − 1) ) H(t − 1) ].
(e) f(t) = (1/9)[ 3t + H(t − 1)( 3 − 3t + √3 sin √3(t − 1) ) − 7√3 sin √3 t ].
4. F(s) = (1/(2s)) tanh(s/2).
Section 2.1.4.
(b) y(t) = e^t − t − 1.
(c) y(t) = t²/2 − (1/2) sin² t.
(e) y(t) = t²/2 − (1/2)(t − 2)² H(t − 2).
3. (a) F(s) = 2/(s²(s² + 4)).
(b) F(s) = 1/((s + 1)(s² + 1)).
(c) F(s) = 1/((s − 1)s²).
(d) F(s) = s/(s² + 1)².
(e) F(s) = 2/((s − 1)s³).
(f) F(s) = 2/(9(s + 1)(s² + 4)).
(e) y(t) = 1/5 + (1/5)(cos 2t + 2 sin 2t).
5. (a) y(t) = t + (3/2) sin 2t.
(d) y(t) = t³ + (1/20) t⁵.
6. (a) y(t) = 4 + (5/2) t² + (1/24) t⁴.
(b) y(t) = (2/√π) √t − t.
Section 2.2.1.
1. (a) F(ω) = (2i cos ω)/ω + 2πδ(ω) − (4 sin ω)/ω.
(b) F(ω) = e^{−3iω}(1 − e^{3iω} + 3iω)/ω².
(c) F(ω) = i/(i − ω).
(d) F(ω) = (√(2π)/a) e^{−ω²/2}.
7. (a) F_c{f}(ω) = 2a/(ω² + a²).
(b) F_s{f}(ω) = 2ω/(ω² + a²).
8. F(ω) = −i/ω + πδ(ω).
Section 2.2.2.
2. (a) F(ω) = (π/|ω|) e^{−|ω/a|}.
(b) F(ω) = (π/2)(e^{−|ω−a|} + e^{−|ω+a|}).
6. (a) f(t) = −(1/(2a)) e^{−bt} sin at.
7. (a) f(t) = (i/2) sgn(t) e^{−a|t|}.
(b) f(t) = e^{2t} for t > 0, and f(t) = e^{−t} for t < 0.
(c) f(t) = (1/(4a))(1 − a|t|) e^{−a|t|}.
8. f(t) = (1/(2a)) e^{−a|t−1|}.
Section 3.1.
2. (a) λ_n = (2n−1)²π²/4, y_n(x) = cos((2n−1)πx/2).
(c) The eigenvalues are λ_n = μ_n², where the μ_n are the positive solutions of the equation cot μ = μ; y_n(x) = sin μ_n x.
(d) The eigenvalues are λ_n = μ_n², where the μ_n are the positive solutions of tan 2μ = −μ; y_n(x) = sin μ_n x + μ_n cos μ_n x.
5. (a) λ_n = n²π² + 1, y_n(x) = (1/x) sin(nπ ln x).
7. λ_n = 1/4 + (nπ/ln 2)², y_n(x) = (1/√x) sin((nπ/ln 2) ln x).
8. This is not a regular Sturm–Liouville problem. λ_n = n², y_n(x) ∈ {sin nx, cos nx}.
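Transcendental eigenvalue equations like cot μ = μ in 2(c) have no closed-form roots, but they are easy to bracket and bisect. A small sketch in Python (the bracket [0.1, 1.5] is chosen by inspecting signs, not taken from the text):

```python
import math

def bisect(g, a, b, tol=1e-12):
    # Plain bisection; assumes g(a) and g(b) have opposite signs.
    fa = g(a)
    for _ in range(200):
        m = 0.5*(a + b)
        fm = g(m)
        if fa*fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5*(a + b)

# First positive solution of cot(mu) = mu, rewritten as cos(mu) - mu*sin(mu) = 0.
mu1 = bisect(lambda m: math.cos(m) - m*math.sin(m), 0.1, 1.5)
lam1 = mu1**2  # first eigenvalue lambda_1 = mu_1^2
```

Subsequent roots live between consecutive multiples of π and can be bracketed the same way.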
Section 3.2.
1. f(x) = (4/π) ∑_{n=1}^∞ (1/(2n−1)) sin((2n−1)π ln x/ln 2).
2. (a) f(x) = (1/π) ∑_{n=1}^∞ (1/(2n−1)) cos((2n−1)πx/2).
(b) f(x) = (8/π²) ∑_{n=1}^∞ ((−1)^{n−1}/(2n−1)²) cos((2n−1)πx/2).
(c) f(x) = (4/π) ∑_{n=1}^∞ ((1 − cos²((2n−1)π/4))/(2n−1)) cos((2n−1)πx/2).
(d) f(x) = (16/π²) ∑_{n=1}^∞ (sin((2n−1)π/4)/(2n−1)²) cos((2n−1)πx/2).
3. f(x) = (l/π) ∑_{n=1}^∞ ((−1)^{n−1}/n) sin(nπx/l).
4. f(x) = (4l/π²) ∑_{n=1}^∞ ((−1)^{n−1}/(2n−1)²) sin((2n−1)πx/(2l)).
5. (a) f(x) = 2 ∑_{n=1}^∞ (sin √λ_n/(√λ_n(1 + sin² √λ_n))) cos(√λ_n x), where the λ_n are the positive solutions of cos √λ − √λ sin √λ = 0.
(b) f(x) = 2 ∑_{n=1}^∞ ((2 cos √λ_n − 1)/(√λ_n(1 + sin² √λ_n))) cos(√λ_n x), with the same equation for the λ_n.
Section 3.2.1.
5. (1/2) ln((1 + x)/(1 − x)).
Section 3.2.2.
7. y = −√(2/(πx)) (sin x + (cos x)/x).
9. (a) 1 = 2 ∑_{k=1}^∞ J₀(λ_k x)/(λ_k J₁(λ_k)).
(b) x² = 2 ∑_{k=1}^∞ ((λ_k² − 4)/(λ_k³ J₁(λ_k))) J₀(λ_k x).
(c) (1 − x²)/8 = ∑_{k=1}^∞ J₀(λ_k x)/(λ_k³ J₁(λ_k)).
Section 4.1.
(c) u(x, y) = F(y) sin x + G(y) cos x, where F and G are arbitrary functions of one variable.
(d) u(x, y) = F(x) sin y + G(x) cos y, where F and G are arbitrary functions of one variable.
Section 4.2.
1. (a) u(x, y) = f(ln x + 1/y), f an arbitrary differentiable function of one variable.
(b) u(x, y) = f(xy − (cos xy)/2), f an arbitrary differentiable function of one variable.
(c) u(x, y) = f(y − arctan x), f an arbitrary differentiable function of one variable.
(d) u(x, y) = f(x² + 1/y), f an arbitrary differentiable function of one variable.
(b) u(x, y) = f(x² + y²) e^{c/y}, f an arbitrary differentiable function of one variable.
(c) u(x, y) = f(e^{−x}(x + y + 1)) e^{x²/2}, f an arbitrary differentiable function of one variable.
(d) u(x, y) = f(xy) e^{y²/2} + 1, f an arbitrary differentiable function of one variable.
(e) u(x, y) = x f(y/x), f any differentiable function of one variable.
(f) u(x, y) = xy f((x − y)/(xy)), f an arbitrary differentiable function of one variable.
(g) u(x, y) = e^{−(x+y)}(x² + f(4x − 3y)), f an arbitrary differentiable function of one variable.
(h) u(x, y) = x f((1 + xy)/x), f an arbitrary differentiable function of one variable.
3. (a) u(x, y) = f(5x + 3y); u_p(x, y) = sin((5x + 3y)/3).
4. (a) u(x, y) = c(x² − y²)².
(b) u(x, y) = (1/2)[2y − (x² − y²)].
(c) u(x, y) = x² e^{−y/x} + e^{y/x} − 1.
(d) u(x, y) = e^{x/(x²−y²)}.
(e) u(x, y) = √(xy).
(f) u(x, y) = e^{1/(1−√(x²+y²))} f(x/√(x²+y²) + y/√(x²+y²)).
(g) u(x, y) = e^y f(−ln(y + e^{−x})).
5. (a) u(t, x) = x e^{t−t²}.
(b) u(t, x) = f(x e^{−t}).
(c) u(t, x) = f(((xt − 1) + √((xt − 1)² + 4x²))/(2x)).
6. u(t, x) = t − x for 0 < x < t; u(t, x) = (x − t)²/2 for t < x < t + 2; u(t, x) = x − t for x > t + 2.
Section 4.3.
1. x u_{xy} − y u_{yy} − u_y + 9y² = 0.
(b) D = 0, parabolic.
(d) hyperbolic.
(e) parabolic.
(f) elliptic.
elliptic: nowhere.
(g) D = x² − a²: hyperbolic if |x| > |a|, parabolic if |x| = |a|, elliptic if |x| < |a|.
The canonical form is u_{αα} + u_{ββ} = (1/β) u_β + β²/4.
If x = 0, then the equation reduces to u_{yy} = 0.
The canonical form is u_{ξη} = (1/3)(u_η − 8/3).
u_{ξη} = u_η − u − η.
u_{ξη} = (u_η − u_ξ)/(2(ξ − η)).
For the elliptic case, α = x, β = 2√y. The canonical form is u_{αα} + u_{ββ} = (1/β) u_β.
u_{ξξ} + u_{ηη} = 0.
The canonical form is 9u_{ξη} + 2 = 0. The general solution is u(x, y) = −(2/9)(x + y/2)(x + y) + f(x + y/2) + g(x + y). From the boundary conditions we obtain u(x, y) = x + xy + y²/2.
The canonical form is u_{ξη} = 0, so u(ξ, η) = f(ξ) + η g(ξ) and u(x, y) = f(e^{−x} + e^{−y}) + (e^{−x} − e^{−y}) g(e^{−x} + e^{−y}). Using the boundary conditions we find f(t) = t/2 and g(t) = (2 − t)²/2.
(c) t′ = (1/2)t + (1/2)x − (1/2)y − (1/2)z, ξ = (1/2)t + (1/2)x + (1/2)y + (1/2)z, η = −(1/(2√3))t + (1/(2√3))x + (1/(2√3))y − (1/(2√3))z, ζ = −(1/(2√5))t + (1/(2√5))x − (1/(2√5))y + (1/(2√5))z. The canonical form is
(d) t′ = (1/(2√3))t + (1/(2√3))x − (1/(2√3))y − (1/(2√3))z, ξ = (1/√2)x + (1/√2)y, η = (1/√2)t + (1/(2√2))z, ζ = −(1/2)t + (1/2)x − (1/2)y − (1/2)z. The canonical form is
Section 5.1.
3. u(x, t) = sin 5t cos x + (1/15) sin 3x sin 15t.
4. u(x, t) = (1/2)[1/(1 + (x + at)²) + 1/(1 + (x − at)²)] + (1/a) sin x sin at.
5. u(x, t) = −(1/(2a))[e^{−(x+at)²} − e^{−(x−at)²}].
6. u(x, t) = sin 2x cos 2at + (1/a) cos x sin at.
7. u(x, t) = (1 + x² + a²t²)/((1 + x² − a²t²)² + 4a²x²t²) + (1/(2a)) e^x (e^{at} − e^{−at}).
8. u(x, t) = cos(πx/2) cos(πat/2) + (1/(2ak))(e^{kx} − e^{−kx})(e^{akt} − e^{−akt}).
10. u(x, t) = (1/a)[e^{−(x+at)²} − e^{−(x−at)²}] + t/2 + (1/(4a)) cos 2x sin 2at.
11. u(x, t) = 1 if x − 5t < 0 and x + 5t < 0; u(x, t) = 1/2 if x − 5t < 0 and x + 5t > 0; u(x, t) = 0 if x − 5t > 0.
12. u(x, t) = (1/2)[sin(x + at) + sin(x − at)] + (1/(2a))[arctan(x + at) + arctan(x − at)], if x − at > 0;
u(x, t) = (1/2)[sin(x + at) − sin(x − at)] + (1/(2a))[arctan(x + at) − arctan(x − at)], if 0 < x < at.
13. (1/2)[f(x + at) + f(x − at)].
14. sin 2πx cos 2πt + (2/π) sin πx cos πt.
15. u(x, t) = (1/2)[G(x + t) − G(x − t)], where G is an antiderivative of g. The string returns to its original position when G(x + t_r) = G(x − t_r), i.e. x + t_r = x − t_r + 2n, n = 1, 2, . . .. Therefore t_r = n, n = 1, 2, . . ., are the moments when the string comes back to its original position.
16. u(x, t) = (1/(a(1 + a²))) sin x (sin at − a cos at + a e^{−t}).
17. E_p(t) = (π/4) cos² t, E_k(t) = (π/4) sin² t. The total energy is E(t) = π/4.
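The d'Alembert answers above can be checked directly against the formula u = (1/2)[f(x+at) + f(x−at)] + (1/(2a))∫_{x−at}^{x+at} g(s) ds. As a sketch in Python, assuming the data behind Exercise 4 are f(x) = 1/(1 + x²) and g(x) = sin x (which is what the printed answer corresponds to), and taking a = 2 as a sample wave speed:

```python
import math

a = 2.0  # sample wave speed for the check (the text leaves a symbolic)

def u_closed(x, t):
    # Printed answer of Exercise 4.
    return 0.5*(1/(1 + (x + a*t)**2) + 1/(1 + (x - a*t)**2)) \
           + (1/a)*math.sin(x)*math.sin(a*t)

def u_dalembert(x, t):
    # d'Alembert's formula with f(x) = 1/(1+x^2), g(x) = sin x.
    f = lambda z: 1/(1 + z*z)
    integral = -math.cos(x + a*t) + math.cos(x - a*t)  # antiderivative of sin
    return 0.5*(f(x + a*t) + f(x - a*t)) + integral/(2*a)

diff = max(abs(u_closed(x, t) - u_dalembert(x, t))
           for x in (-1.3, 0.4, 2.0) for t in (0.1, 0.7, 1.9))
```

The two expressions agree to machine precision, since (1/(2a))(cos(x−at) − cos(x+at)) = (1/a) sin x sin at identically.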
Section 5.2.
1. (a) u(x, t) = (4/π) ∑_{n=1}^∞ (1/(2n−1)²) sin((2n−1)x) sin((2n−1)t).
(b) u(x, t) = (9/π) ∑_{n=1}^∞ (1/n²) sin(2nπ/3) sin nx cos nt.
(c) u(x, t) = sin x sin t + (4/π) ∑_{n=1}^∞ ((−1)^{n+1}/(2n−1)²) sin((2n−1)π/4) sin((2n−1)x) sin((2n−1)t).
(d) u(x, t) = (4/π) ∑_{n=1}^∞ ((−1)^{n+1}/(2n−1)²) sin((2n−1)x) cos((2n−1)t).
(e) u(x, t) = (8/π) ∑_{n=1}^∞ (1/(2n−1)³) sin((2n−1)x) sin((2n−1)t).
2. (a) u(x, t) = e^{−kt} ∑_{n=1}^∞ (a_n cos λ_n t + b_n sin λ_n t) sin(nπx/l), where λ_n, a_n and b_n are given by
λ_n² = a²n²π²/l² − k², a_n = (2/l) ∫₀^l f(x) sin(nπx/l) dx;
−k a_n + λ_n b_n = (2/l) ∫₀^l g(x) sin(nπx/l) dx.
3. u(x, t) = (4/π) e^{−t}[sin x cosh(√3 t) + ∑_{n=2}^∞ (1/(2n−1)) sin((2n−1)x) cos(√(4n² − 4n − 2) t)].
4. u(x, t) = e^{−t}[t sin x + (1/√3) sin 2x sin(√3 t)].
5. u(x, t) = ∑_{n=1}^∞ [a_n cos((2n−1)πat/(2l)) + (2l b_n/((2n−1)πa)) sin((2n−1)πat/(2l))] sin((2n−1)πx/(2l)),
where
a_n = (2/l) ∫₀^l f(x) sin((2n−1)πx/(2l)) dx, b_n = (2/l) ∫₀^l g(x) sin((2n−1)πx/(2l)) dx.
6. u(x, t) = ∑_{n=1}^∞ [(4/π³)((2 − 2(−1)ⁿ)/n³) cos 2nπt + (1/(4n³ − n)) sin 2nπt] sin nπx.
7. u(x, t) = 2/π + (4/π) ∑_{n=1}^∞ (1/(1 − 4n²)) cos 2nx cos 4nt.
8. u(x, t) = −(8/π³) ∑_{n=1}^∞ (1/(2n−1)³) cos((2n−1)πx) sin((2n−1)πt).
9. u(x, t) = (1/2)(a₀ + a₀′t) + ∑_{n=1}^∞ [a_n cos(nπat/l) + (l/(nπa)) a_n′ sin(nπat/l)] cos(nπx/l).
10. u(x, t) = (Al²/(l² + a²π²))(e^{−t} − cos(aπt/l) + (l/(aπ)) sin(aπt/l)).
11. u(x, t) = (2Al³/π) ∑_{n=1}^∞ ((−1)^{n+1}/(n(l² + a²π²n²))) sin(nπx/l) (e^{−t} − cos(anπt/l) + (l/(aπn)) sin(aπnt/l)).
12. u(x, t) = (16Al²/π) ∑_{n=1}^∞ sin((2n−1)πx/(2l)) sin t/((2n−1)(a²π²(2n−1)² − 4l²))
− (32Al³/(aπ²)) ∑_{n=1}^∞ sin((2n−1)πx/(2l)) sin(a(2n−1)πt/(2l))/((2n−1)²(a²π²(2n−1)² − 4l²)).
13. u(x, t) = (4Al²/(4l² + a²π²))(e^{−t} − cos(aπt/(2l)) + (2l/(aπ)) sin(aπt/(2l))) cos(πx/(2l)).
14. u(x, t) = (1 − x/π)t² + (x/π)t³ + sin x cos t + (4/π) ∑_{n=1}^∞ (1/n³)[3(−1)ⁿ t − 1 + cos nt − (3/n)(−1)ⁿ sin nt] sin nx.
15. u(x, t) = (1 − x/π)e^{−t} + xt/π + (1/2) sin 2x cos 2t − (2/π) ∑_{n=1}^∞ (1/(n(1 + n²)))[e^{−t} + n² cos nt − (2n + 1/n) sin nt] sin nx.
16. u(x, t) = x + t + sin(x/2) cos(t/2) − (8/π) ∑_{n=1}^∞ ((−1)ⁿ/(2n+1)²) cos((2n+1)t/2) sin((2n+1)x/2).
Section 5.3.
1. u(x, y, t) = (64/π⁶) ∑_{m=1}^∞ ∑_{n=1}^∞ (1/((2m−1)³(2n−1)³)) sin((2m−1)πx) sin((2n−1)πy) cos(√((2m−1)² + (2n−1)²) t).
2. u(x, y, t) = sin πx sin πy cos(√2 t) + (4/π) ∑_{n=1}^∞ (sin πx sin((2n−1)πy)/((2n−1)√(1 + (2n−1)²))) sin(√(1 + (2n−1)²) t).
3. u(x, y, t) = v(x, y, t) + (2/√5) sin πx sin 2πy sin(√5 t), where
v(x, y, t) = (64/π⁶) ∑_{m=1}^∞ ∑_{n=1}^∞ (1/((2m−1)³(2n−1)³)) sin((2m−1)πx) sin((2n−1)πy) cos(√((2m−1)² + (2n−1)²) t).
4. u(x, y, t) = (16/π²) ∑_{m=1}^∞ ∑_{n=1}^∞ (sin((2m−1)πx) sin((2n−1)πy)/((2m−1)(2n−1)√((2m−1)² + (2n−1)²))) sin(√((2m−1)² + (2n−1)²) t).
5. u(x, y, t) = (16/π⁶) ∑_{m=1}^∞ ∑_{n=1}^∞ (1/((2m−1)³(2n−1)³)) sin((2m−1)πx) sin((2n−1)πy) sin(√((2m−1)² + (2n−1)²) t).
6. (a) u(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ [a_mn cos(√λ_mn t) + b_mn sin(√λ_mn t)] sin(mπx/a) sin((2n−1)πy/(2b)), where λ_mn = m²π²/a² + (2n−1)²π²/(4b²).
(b) u(x, y, t) = ∑_{m=0}^∞ ∑_{n=1}^∞ [a_mn cos(√λ_mn t) + b_mn sin(√λ_mn t)] cos(mπx/a) sin(nπy/b), where λ_mn = m²π²/a² + n²π²/b².
7. u(x, y, t) = (7/√3) cos 4t sin 4y − 5 cos(√5 t) cos 2x sin y + √10 sin(√10 t) cos x sin 3y.
8. u(x, y, t) = e^{−k²t} ∑_{m=1}^∞ ∑_{n=1}^∞ (a_mn cos λ_mn t + b_mn sin λ_mn t) sin(mπx/a) sin(nπy/b), where λ_mn = √(m²π²c²/a² + n²π²c²/b² − k⁴).
9. Let ω_mn = √(m²π²c²/a² + n²π²c²/b²). You need to consider two cases:
a_mn = (4/((ω_mn² − ω²)ab)) ∫₀^a ∫₀^b F(x, y) sin(mπx/a) sin(nπy/b) dy dx.
If there are several pairs (m₀, n₀) for which ω = ω_{m₀n₀}, then instead of one resonance term in the solution there will be several resonance terms of the form specified in Case 2°.
10. (a) ω = 3 and ω_mn = √(m² + n²); √(m² + n²) ≠ 3 for every m and n. Therefore, by Case 1° we have
u(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ a_mn (sin 3t − (3/√(m² + n²)) sin(√(m² + n²) t)) sin mx sin ny
= (4/π²)(sin 3t − (3/√2) sin(√2 t)) sin x sin y + ∑_{m,n≥2} a_mn (sin 3t − (3/√(m² + n²)) sin(√(m² + n²) t)) sin mx sin ny,
where for m, n ≥ 2, a_mn = 16mn(1 + (−1)^m)(1 + (−1)^n)/(π²(m² + n² − 3)(m² − 1)²(n² − 1)²).
(b) ω = √5 and ω_mn = √(m² + n²); √(m² + n²) = √5 for m = 1, n = 2 or m = 2, n = 1. Therefore, by Case 2° we have
u(x, y, t) = ∑_{m,n=1; m²+n²≠5}^∞ a_mn (sin √5 t − (√5/√(m² + n²)) sin(√(m² + n²) t)) sin mx sin ny
+ a_{1,2}(sin √5 t − √5 t cos √5 t) sin x sin 2y + a_{2,1}(sin √5 t − √5 t cos √5 t) sin 2x sin y, where
a_{1,2} = (2/(√5π²)) ∫₀^π ∫₀^π xy sin²x sin y sin 2y dy dx = a_{2,1} = (2/(√5π²)) ∫₀^π ∫₀^π xy sin x sin²y sin 2x dx dy = −4/9.
For m = n = 1 we have a_{1,1} = (2/(√5π²)) ∫₀^π ∫₀^π x sin²x · y sin²y dy dx = π²/(8√5).
For every other m and n we have
a_mn = (4/((m² + n² − 5)π²)) ∫₀^π ∫₀^π x sin x sin mx · y sin y sin ny dy dx = 16mn(1 + (−1)^m)(1 + (−1)^n)/((m² + n² − 5)(m² − 1)²(n² − 1)² π²).
11. u(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ [((ω_mn² + ω²) sin ωt + 2kωt cos ωt)/((2m−1)(2n−1)((ω_mn² − ω²)² + 4k²ω²))] sin(mπx/a) sin(nπy/b),
where ω_mn = √(m²π²/a² + n²π²/b²).
Section 5.4.
1. u(r, φ, t) = 8 ∑_{n=1}^∞ (1/(z_{0n}³ J₁(z_{0n}))) J₀(z_{0n} r) cos(z_{0n} t).
2. u(r, φ, t) = 2 ∑_{n=1}^∞ (J₁(z_{0n}/2)/(z_{0n}² (J₁(z_{0n}))²)) J₀(z_{0n} r) sin(z_{0n} t).
3. u(r, φ, t) = 16 sin φ ∑_{n=1}^∞ (1/(z_{1n}³ J₂(z_{1n}))) J₁(z_{1n} r) cos(z_{1n} t).
4. u(r, φ, t) = 16 sin φ ∑_{n=1}^∞ (1/(z_{1n}³ J₂(z_{1n}))) J₁(z_{1n} r) cos(z_{1n} t)
+ 24 sin 2φ ∑_{n=1}^∞ (1/(z_{2n}⁴ J₃(z_{2n}))) J₂(z_{2n} r) sin(z_{2n} t).
5. u(r, φ, t) = 5J₄(z_{4,1} r) cos 4φ cos(z_{4,1} t) − J₂(z_{2,3} r) sin 2φ cos(z_{2,3} t).
6. u(r, φ, t) = J₀(z_{03} r) cos(z_{03} t) + 8 ∑_{n=1}^∞ (1/(z_{0n}⁴ J₁(z_{0n}))) J₀(z_{0n} r) sin(z_{0n} t).
7. u(r, φ, t) = 4 ∑_{n=1}^∞ (1/(z_{0n}² J₁(z_{0n}))) J₀(z_{0n} r/2) sin(z_{0n} t/2).
8. u(r, t) = ∑_{n=1}^∞ [a_n cos(c z_{0n} t/a) + b_n sin(c z_{0n} t/a)] J₀(z_{0n} r/a), where
a_n = (2/(a² (J₁(z_{0n}))²)) ∫₀^a r f(r) J₀(z_{0n} r/a) dr,
b_n = (2/(a c z_{0n} (J₁(z_{0n}))²)) ∫₀^a r g(r) J₀(z_{0n} r/a) dr.
9. u(r, t) = (2/a²) ∫₀^a (f(r) + t g(r)) r dr
+ ∑_{n=1}^∞ [a_n cos(c z_{1n} t/a) + b_n sin(c z_{1n} t/a)] J₀(z_{1n} r/a), where
a_n = (2/(a² (J₀(z_{1n}))²)) ∫₀^a r f(r) J₀(z_{1n} r/a) dr,
b_n = (2/(a c z_{1n} (J₀(z_{1n}))²)) ∫₀^a r g(r) J₀(z_{1n} r/a) dr.
10. u(r, t) = (A/c²)[(a² − r²)/4 − 2a² ∑_{n=1}^∞ (1/(z_{0n}³ J₁(z_{0n}))) J₀(z_{0n} r/a) cos(c z_{0n} t/a)].
11. u(r, t) = ∑_{n=1}^∞ a_n(t) J₀(z_{0n} r/a),
a_n(t) = (1/λ_n) ∫₀^t ∫₀^a f(ξ, η) J₀(z_{0n} ξ/a) sin(λ_n(t − η)) dξ dη, λ_n = c z_{0n}/a.
12. u(r, t) = (1/r) ∑_{n=1}^∞ [a_n cos(λ_n t) + μ_n sin(λ_n t)], where λ_n = (2n−1)π/(2(r₂ − r₁)).
13. u(r, φ, t) = A cos φ ∑_{n=1}^∞ a_n J₁(μ_n r/a) cos(μ_n t), where the μ_n are the positive roots of J₁′(x) = 0,
and a_n = (2μ_n²/(a(μ_n² − 1)(J₁(μ_n))²)) ∫₀^a J₁(μ_n r/a) dr.
Section 5.5.
1. u(x, t) = 3 + 2H(t − x/2), where H(·) is the unit step (Heaviside) function.
2. u(x, t) = sin(x − t) − H(t − x) sin(x − t).
3. u(x, t) = [sin(x − t) − H(t − x) sin(x − t)] e^{−t}.
4. u(x, t) = [sin(x − t) − H(t − x) sin(x − t)] e^{t}.
6. u(x, t) = f(t − x/c) H(t − x/c).
8. u(x, t) = t³/6 − (1/6)(t − x)³ H(t − x).
10. u(x, t) = (k/(c²π²))(1 − cos(πct/a)) sin(πx/a).
11. u(x, t) = (1/π) sin πx sin πt.
12. u(x, t) = ∑_{n=1}^∞ (−1)ⁿ [t − (2n + 1 − x)] H(t − (2n + 1 − x)).
13. u(x, t) = (4/π²) ∑_{n=1}^∞ (1/(2n−1)²) sin((2n−1)πx) sin((2n−1)πt).
15. u(x, t) = f(x − t²/2).
16. u(x, t) = f(x − 3t).
17. u(x, t) = 3 cos(x + t³/3).
18. u(x, t) = (e^{−t} + t e^{−t}) f(x) + t e^{−t} g(x).
19. u(x, t) = (√π/2) e^{−|x+t|}.
20. u(x, t) = (1/2) ∫_{−∞}^∞ e^{−|ω|} cos ωt e^{iωx} dω.
21. u(x, t) = −c ∫₀^{t−x/c} f(s) ds.
22. u(x, t) = (1/(2c)) ∫₀^t (∫_{|x−c(t−τ)|}^{x+c(t−τ)} f(s, τ) ds) dτ.
23. u(x, t) = (1/(2c)) ∫₀^t (∫₀^{x+c(t−τ)} f(s, τ) ds) dτ
− (1/(2c)) ∫₀^t (∫₀^{|x−c(t−τ)|} f(s, τ) ds) sign[x − c(t − τ)] dτ.
25. u(x, t) = −c e^{k(x−ct)} (∫₀^{t−x/c} e^{ckτ} f(τ) dτ) H(ct − x).
Section 6.1.
3. u(x, t) = (100/√(4aπt)) ∫₀^∞ e^{−(ξ−x)²/(4at)} dξ = 50(1 + erf(x/√(4at))), where
erf(x) = (2/√π) ∫₀^x e^{−s²} ds.
4. u(x, t) = e^t (1/√(4aπt)) ∫_{−∞}^∞ e^{−(ξ−x)²/(4at)} dξ.
Section 6.2.
1. u(x, t) = (8/π) ∑_{n=1}^∞ (1/(2n−1)²) e^{−(2n−1)²t} sin((2n−1)x).
2. u(x, t) = 3 − (12/π²) ∑_{n=1}^∞ (1/(2n−1)²) e^{−(2n−1)²π²t/9} cos((2n−1)πx/3).
3. u(x, t) = (4/π²) ∑_{n=1}^∞ (((2n−1)π − 8(−1)ⁿ)/(2n−1)²) e^{−(2n−1)²π²t/64} sin((2n−1)πx/8).
4. u(x, t) = (80/π) ∑_{n=1}^∞ (1/(2n−1)) e^{−2(2n−1)²t} sin((2n−1)x).
5. u(x, t) = (40/π) ∑_{n=1}^∞ ((1 − cos nπ)/n) e^{−n²π²t/4} sin(nπx/2).
6. u(x, t) = π²/3 + 4 ∑_{n=1}^∞ ((−1)ⁿ/n²) e^{−3n²t} cos nx.
7. u(x, t) = (400/π) ∑_{n=1}^∞ (1/(2n−1)) e^{−(2n−1)²t/4} sin((2n−1)x/2).
8. u(x, t) = (4/π) ∑_{n=1}^∞ ((−1)^{n−1}/(2n−1)²) e^{−a²(2n−1)²t} sin((2n−1)x).
9. u(x, t) = ∑_{n=1}^∞ [4/(2n−1) − 8(−1)^{n+1}/((2n−1)²π²)] e^{−a²(2n−1)²t/4} sin((2n−1)x/2).
10. u(x, t) = (T₀/π) x + (2T₀/π) ∑_{n=1}^∞ (1/n) e^{−n²t} sin nx.
11. u(x, t) = (π/2) e^{−t} sin x − (16/π) ∑_{n=1}^∞ (1/(4n² − 1)²) e^{−4n²t} sin 2nx.
12. u(x, t) = 1/2 − (1/2) e^{−4t} cos 2x.
13. u(x, t) = (4rl²/(a²π³)) ∑_{n=1}^∞ (1/(2n−1)³)(1 − e^{−(2n−1)²a²π²t/l²}) sin((2n−1)πx/l).
14. u(x, t) = 1/3 − t − (2/π²) ∑_{n=1}^∞ ((−1)ⁿ/n²) e^{−a²n²π²t} cos nπx.
15. u(x, t) = (4/π) ∑_{n=1}^∞ ((−1)^{n−1}/(2n−1)⁴)(1 − e^{−a²(2n−1)²π²t}) sin((2n−1)x).
16. u(x, t) = (1 − e^{−t})/2 + e^{−4t} cos 2x.
17. u(x, t) = ∑_{n=0}^∞ e^{−n²t}(a_n cos nx + b_n sin nx), where
a_n = (1/π) ∫_{−π}^π f(x) cos nx dx, b_n = (1/π) ∫_{−π}^π f(x) sin nx dx, n = 1, 2, . . . .
18. u(x, t) = ∑_{n=1}^∞ u_n(t) sin(nπx/l), where u_n(t) = ∫₀^t e^{−a²n²π²(t−τ)/l²} f_n(τ) dτ,
f_n(τ) = (2/l) ∫₀^l F(s, τ) sin(nπs/l) ds.
20. u(x, t) = ∑_{n=1}^∞ a_n e^{−z_n²t} sin z_n x, where z_n is the nth positive solution of the equation x + tan x = 0,
and a_n = 2 ∫₀¹ f(x) sin z_n x dx, n = 1, 2, . . . .
21. u(x, t) = ∑_{n=1}^∞ c_n e^{−(a²n²π²/l² + b)t} sin(nπx/l), c_n = (2/l) ∫₀^l f(x) sin(nπx/l) dx.
22. u(x, t) = e^{−(a²π²/(4l²) + b)t} sin(πx/(2l)).
23. u(x, t) = ∑_{n=0}^∞ c_n e^{−(a²n²π²/l² + b)t} cos(nπx/l), c_n = (2/l) ∫₀^l f(x) cos(nπx/l) dx.
24. u(x, t) = (aA/cos(l/a)) e^{−t} sin(x/a) + (2Aa²/l) ∑_{n=0}^∞ ((−1)ⁿ ω_n/(1 − a²ω_n²)) e^{−a²ω_n²t} sin ω_n x,
where ω_n = (2n+1)π/(2l), ω_n ≠ 1/a.
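A closed-form answer like that of Exercise 12 above, u = 1/2 − (1/2)e^{−4t} cos 2x, can be verified against the heat equation u_t = u_xx by finite differences. A minimal Python sketch (sample point and step size are arbitrary choices):

```python
import math

def u(x, t):
    # Exercise 12: u = 1/2 - (1/2) e^{-4t} cos 2x, a solution of u_t = u_xx.
    return 0.5 - 0.5*math.exp(-4*t)*math.cos(2*x)

h = 1e-4
x0, t0 = 0.7, 0.3
ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2*h)                       # central in t
uxx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2         # central in x
residual = abs(ut - uxx)
```

Both difference quotients carry O(h²) truncation error, so the residual is far below the size of either term.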
Section 6.3.
3. u(x, y, z, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ ∑_{j=1}^∞ A_mnj e^{−k²λ_mnj t} sin(mπx/a) sin(nπy/b) sin(jπz/c), where
A_mnj = (8/(abc)) ∫₀^a ∫₀^b ∫₀^c f(x, y, z) sin(mπx/a) sin(nπy/b) sin(jπz/c) dz dy dx.
4. u(x, y, t) = ∑_{n=1}^∞ a_n e^{−n²π²t/a²} sin(nπx/a)
+ ∑_{m=1}^∞ ∑_{n=1}^∞ a_mn e^{−π²(m²/a² + n²/b²)t} sin(mπx/a) cos(nπy/b).
6. u(x, y, t) = (8/π²) ∑_{m=1}^∞ ∑_{n=1}^∞ (1/((2m−1)(2n−1))) e^{−((2m−1)² + (2n−1)²)t} sin((2m−1)x) sin((2n−1)y).
7. u(x, y, t) = e^{−2t} sin x sin y + 3e^{−5t} sin 2x sin y + e^{−13t} sin 3x sin 2y.
8. u(x, y, t) = (16/π²) ∑_{m=1}^∞ ∑_{n=1}^∞ ((sin mx sin ny)/(m²n²)) sin(mπ/2) sin(nπ/2) e^{−(m²+n²)t}.
9. u(x, y, t) = (1/13)(1 − e^{−13t}) sin 2x sin 3y + e^{−65t} sin 4x sin 7y.
10. u(x, y, t) = e^{−(k₁x + k₂y)} ∑_{m=1}^∞ ∑_{n=1}^∞ A_mn e^{−(m² + n² + k₁² + k₂² + k₃)t} sin mx sin ny, where
A_mn = (4/π²) ∫₀^π ∫₀^π f(x, y) e^{k₁x + k₂y} sin mx sin ny dy dx.
11. u(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ e^{−(m²+n²)t} (∫₀^t F_mn(s) e^{(m²+n²)s} ds) sin mx sin ny, where
F_mn(t) = (4/π²) ∫₀^π ∫₀^π F(x, y, t) sin mx sin ny dy dx.
12. u(r, t) = 2T₀ ∑_{n=1}^∞ (1/(z_{0n} J₁(z_{0n}))) J₀(z_{0n} r/a) e^{−z_{0n}²c²t/a²}, where z_{0n} is the nth positive zero of the Bessel function J₀(·).
13. u(r, t) = 16 ∑_{n=1}^∞ (1/(z_{1n}³ J₂(z_{1n}))) J₁(z_{1n} r) e^{−z_{1n}²t}; z_{1n} is the nth positive zero of the Bessel function J₁(·).
14. u(r, φ, t) = r³ sin 3φ − 2 sin 3φ ∑_{n=1}^∞ (1/(z_{3n} J₄(z_{3n}))) J₃(z_{3n} r) e^{−z_{3n}²t}, where z_{3n} is the nth positive zero of the Bessel function J₃(·).
15. u(r, z, t) = (4/π) ∑_{m=1}^∞ ∑_{n=1}^∞ ((1 − (−1)^m)/(m z_{0n} J₁(z_{0n}))) J₀(z_{0n} r) sin mπz e^{−(z_{0n}² + m²π²)t},
where z_{0n} is the nth positive zero of the Bessel function J₀(·).
16. u(r, t) = (2/(πr)) ∑_{n=1}^∞ ((−1)^{n−1}/n) e^{−c²n²π²t} sin nπr.
17. u(r, t) = (1/r) ∑_{n=1}^∞ A_n e^{−c²n²π²t} sin nπr, A_n = 2 ∫₀¹ r f(r) sin nπr dr.
18. u(r, θ, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ A_mn (1/√r) e^{−z_{mn}²t} J_{n+1/2}(z_{mn} r) P_n(cos θ),
where z_{mn} is the mth positive zero of the Bessel function J_{n+1/2}(·).
Section 6.4.
1. u(x, t) = u₀ ∑_{n=0}^∞ [erf((2n+1+x)/(2√t)) − erf((2n+1−x)/(2√t))], where
erfc(x) = (2/√π) ∫_x^∞ e^{−u²} du is the complementary error function.
2. u(x, t) = u₁ + (u₀ − u₁) erfc(x/(2√t)).
3. u(x, t) = u₀[1 − erfc(x/(2√t)) + e^{x+t} erfc(√t + x/(2√t))].
4. u(x, t) = (x/(2√π)) ∫₀^t (f(t−τ)/τ^{3/2}) e^{−x²/(4τ)} dτ.
5. u(x, t) = 60 + 40 erfc(x/(2√(t−2))) H(t − 2).
6. u(x, t) = 100[−e^{1−x+t} erfc(√t + (1−x)/(2√t)) + erfc((1−x)/(2√t))].
7. u(x, t) = u₀ + u₀ e^{−π²t} sin πx.
8. u(x, t) = (u₀ x/(2√π)) ∫₀^t e^{−hτ − x²/(4τ)} τ^{−3/2} dτ.
9. u(x, t) = 10t − x + 10 ∫₀^t (1 + t − τ + e^{t−τ}) erfc(x/(2√τ)) dτ.
10. u(x, t) = (1/2)x(1 − x) − (4/π³) ∑_{n=1}^∞ (1/(2n−1)³) e^{−(2n−1)²π²t} sin((2n−1)πx).
11. u(x, t) = (u₀/2) e^{−kx}[erfc(x/(2c√t) + (c(1−k)/2)√t) + erfc((1−x)/(2√t))]
+ (u₀/2) e^{−x}[erfc(x/(2c√t) − (c(1−k)/2)√t) + erfc((1−x)/(2√t))].
12. u(r, t) = 2 + 3t + r²/2 − 3/10 − (2/r) ∑_{n=1}^∞ (sin(z_n r)/(z_n² sin z_n)) e^{−z_n²t}, tan z_n = z_n.
14. (a) u(x, t) = (x/(2c√π t^{3/2})) e^{−x²/(4c²t)}.
(b) u(x, t) = (x/(2c√π)) ∫₀^t f(t−τ) e^{−x²/(4c²τ)} τ^{−3/2} dτ.
15. u(x, t) = (x/(2c√π)) ∫₀^t μ(t−τ) e^{−x²/(4c²τ)} τ^{−3/2} dτ.
16. u(x, t) = (1/(2c√π)) ∫₀^t ∫_{−∞}^∞ (e^{−(x−η)²/(4c²(t−τ))}/√(t−τ)) f(η, τ) dη dτ.
17. u(x, t) = (1/2) erf((1−x)/(2c√t)) + (1/2) erf((1+x)/(2c√t)).
18. u(x, t) = (1/π) ∫_{−∞}^∞ (cos ωx/(1 + ω²)) e^{−c²ω²t} dω.
19. u(x, t) = (1/(t√(2π))) ∫_{−∞}^∞ f(τ) e^{−(x−τ)²/(2t²)} dτ.
20. u(x, t) = ∫₀^t μ(τ) e^{−x²/(4c²(t−τ))} dτ.
21. u(x, t) = (1/(2c√π)) ∫₀^t (1/√(t−τ)) (∫₀^∞ [e^{−(x−ξ)²/(4c²(t−τ))} − e^{−(x+ξ)²/(4c²(t−τ))}] f(ξ, τ) dξ) dτ.
22. u(x, t) = (2/π) ∫₀^∞ e^{−ω²t} (sin ω/ω) cos xω dω.
23. u(x, t) = (2/π) ∫₀^∞ ((1 − e^{−ω²t})/ω) sin xω dω.
24. u(x, t) = (1/(2c√π)) ∫₀^t ∫₀^∞ f(η, τ) ([e^{−(x−η)²/(4c²(t−τ))} − e^{−(x+η)²/(4c²(t−τ))}]/√(t−τ)) dη dτ.
25. u(x, t) = (1/(2c√π)) ∫₀^t ∫₀^∞ f(η, τ) ([e^{−(x−η)²/(4c²(t−τ))} + e^{−(x+η)²/(4c²(t−τ))}]/√(t−τ)) dη dτ.
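The erfc-type answers in this section can be checked the same way as the series solutions: Python's `math.erfc` matches the complementary error function defined above, and a finite-difference residual confirms that, e.g., the answer to Exercise 2 solves u_t = u_xx (here with sample values u₀ = 3, u₁ = 1, which are placeholders, not from the text):

```python
import math

def u(x, t):
    # Exercise 2: u = u1 + (u0 - u1) erfc(x/(2 sqrt t)), sample u0 = 3, u1 = 1.
    u0, u1 = 3.0, 1.0
    return u1 + (u0 - u1)*math.erfc(x/(2*math.sqrt(t)))

h = 1e-4
x0, t0 = 0.8, 0.5
ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2*h)
uxx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2
residual = abs(ut - uxx)           # u solves u_t = u_xx
boundary = u(0.0, 0.3)             # erfc(0) = 1, so u(0, t) = u0
```

The boundary value u(0, t) = u₀ holds for all t > 0 because erfc(0) = 1.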
Section 7.1.
4. u(x, y) = x/(x² + (y+1)²).
5. u(x, y, z) = (z − 1)/(x² + y² + (z − 1)²)^{3/2}.
G(x, y) = −(1/(2π))[ln|x − y| − ln|x − y*|], where y* = (−x′, y′).
The solution of the Poisson equation is given by
u(x) = ∫∫_{RP} f(y) G(x, y) dy − ∫_{∂(RP)} g(y) G_n(x, y) dS(y).
G(x, y) = −(1/(2π))[ln|x − y| − ln|x − y*|], where y* = (x′, −y′).
The solution of the Poisson equation is given by
u(x) = ∫∫_{RP} f(y) G(x, y) dy − ∫_{∂(RP)} g(y) G(x, y) dS(y).
9. u(x) = −(1/π) ∫_{|y|=R} ln|x − y| g(y) dS(y) + C.
10. u(x, y) = 1 − (1/π)[arctan((1−x)/y) − arctan((x² + y² − x)/y)].
Section 7.2.
1. (a) u(x, y) = (200/π) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n) cosh nπy sin nπx
+ (200/π) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n)((2 − cosh nπ)/sinh nπ) sinh nπy sin nπx.
(b) u(x, y) = (2/π) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/(n sinh nπ)) sinh ny sin nx
+ (2/π) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/(n sinh nπ))(sinh nx + sinh n(π − x)) sin ny.
(c) u(x, y) = (2/π) ∑_{n=1}^∞ ((−1)^{n−1}/(n sinh 2nπ)) sin nπx sinh nπy.
(d) u(x, y) = (400/π) ∑_{n=1}^∞ (sin((2n−1)πx/2)/((2n−1) sinh((2n−1)π/2))) sinh((2n−1)π(1−y)/2)
+ (200/π) ∑_{n=1}^∞ (1/(n sinh 2nπ)) sinh nπx sin nπy.
(e) u(x, y) = sin 7πx sinh 7π(1−y)/sinh 7π + sin πx sinh πy/sinh π
+ sin 3πy sinh 3π(1−x)/sinh 3π + sinh 6πx sin 6πy/sinh 6π.
(f) u(x, y) = (400/π) ∑_{n=1}^∞ sin((2n−1)πx) sinh((2n−1)πy)/((2n−1) sinh((2n−1)π)).
(g) u(x, y) = (2/a) ∑_{n=1}^∞ A_n sin(nπx/a) sinh(nπy/a)/sinh(nπb/a), A_n = ∫₀^a f(x) sin(nπx/a) dx.
2. u(x, y) = (2/a) ∑_{n=1}^∞ A_n sin(nπx/a) sinh(nπ(b−y)/a)/sinh(nπb/a), A_n = ∫₀^a f(x) sin(nπx/a) dx.
3. u(x, y) = x/2 + (2/π²) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/(n² sinh nπ)) sinh nπx cos nπy.
4. u(x, y) = (2/π) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/(n(n cosh nπ + sinh nπ)))(n cosh nx + sinh nx) sin ny.
5. u(x, y) = A₀y + ∑_{n=1}^∞ A_n sinh(nπy/a) cos(nπx/a); A₀ = (1/(ab)) ∫₀^a f(x) dx,
A_n = (2/(a sinh(nπb/a))) ∫₀^a f(x) cos(nπx/a) dx.
6. u(x, y) = 1.
7. u(x, y) = 1 − (4/π) ∑_{n=1}^∞ (sin((2n−1)πx/a)/((2n−1) cosh((2n−1)πb/a))) cosh((2n−1)πy/a).
8. u = (4/π) ∑_{n=1}^∞ (sin((2n−1)x)/(2n−1)) [cosh((2n−1)y) + ((4 − cosh(2n−1))/sinh(2n−1)) sinh((2n−1)y)].
9. u(x, y) = ∑_{n=1}^∞ [A_n cosh((n − 1/2)πx) + B_n sinh((n − 1/2)πx)] sin((n − 1/2)πy),
where A_n = 4(−1)^{n−1}/(2n−1)², B_n = −A_n cosh((n − 1/2)π)/sinh((n − 1/2)π).
10. ∫₀¹ f(y) dy = 0.
11. u(x, y) = (2/π) ∑_{n=1}^∞ (∫₀^π f(t) sin nt dt) e^{−ny} sin nx.
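Each separated-variables term above is harmonic, which is easy to confirm numerically. A Python sketch using a five-point discrete Laplacian on the single mode sin(πx) sinh(πy)/sinh(π), which satisfies u = 0 on three sides of the unit square and u(x, 1) = sin πx on the fourth:

```python
import math

def u(x, y):
    # One harmonic mode: sin(pi x) sinh(pi y)/sinh(pi).
    return math.sin(math.pi*x)*math.sinh(math.pi*y)/math.sinh(math.pi)

h = 1e-4
x0, y0 = 0.3, 0.6
# Five-point discrete Laplacian; should vanish to O(h^2) for a harmonic function.
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4*u(x0, y0)) / h**2
boundary_err = abs(u(0.25, 1.0) - math.sin(math.pi*0.25))
```

Any finite partial sum of such modes is again harmonic, so the same check applies to the truncated series.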
Section 7.3.
1. u(r, φ) = ∑_{n=1}^∞ (rⁿ/(n aⁿ⁻¹))(a_n cos nφ + b_n sin nφ) + C, where C is any constant;
a_n = (1/π) ∫₀^{2π} f(φ) cos nφ dφ, b_n = (1/π) ∫₀^{2π} f(φ) sin nφ dφ.
2. u(r, φ) = −∑_{n=1}^∞ (a^{n+1}/(n rⁿ))(a_n cos nφ + b_n sin nφ) + C, where C is any constant;
a_n = (1/π) ∫₀^{2π} f(φ) cos nφ dφ, b_n = (1/π) ∫₀^{2π} f(φ) sin nφ dφ.
3. u(r, φ) = Ar sin φ.
8. u(r, φ) = B₀ ln r + A₀ + ∑_{n=1}^∞ [(A_n rⁿ + B_n/rⁿ) cos nφ + (C_n rⁿ + D_n/rⁿ) sin nφ], where
A_n = (bⁿ g_n^{(c)} − aⁿ f_n^{(c)})/(b^{2n} − a^{2n}), B_n = aⁿbⁿ(bⁿ f_n^{(c)} − aⁿ g_n^{(c)})/(b^{2n} − a^{2n}), A₀ = (f₀ − g₀)/(ln a − ln b),
C_n = (bⁿ g_n^{(s)} − aⁿ f_n^{(s)})/(b^{2n} − a^{2n}), D_n = aⁿbⁿ(bⁿ f_n^{(s)} − aⁿ g_n^{(s)})/(b^{2n} − a^{2n}), B₀ = (g₀ ln a − f₀ ln b)/(ln a − ln b);
f_n^{(c)}, f_n^{(s)}, g_n^{(c)}, g_n^{(s)} are the Fourier coefficients of f(φ) and g(φ):
f_n^{(c)} = (1/(2π)) ∫₀^{2π} f(φ) cos nφ dφ, f_n^{(s)} = (1/(2π)) ∫₀^{2π} f(φ) sin nφ dφ,
g_n^{(c)} = (1/(2π)) ∫₀^{2π} g(φ) cos nφ dφ, g_n^{(s)} = (1/(2π)) ∫₀^{2π} g(φ) sin nφ dφ.
If b = 1, f(φ) = 0 and g(φ) = 1 + 2 sin φ, then u(r, φ) = 1 + 2((a² + r²)/((1 + a²)r)) sin φ.
9. u(r, φ) = 1/2 + (4/π) ∑_{n=1}^∞ ((−1)^{n−1}/(2n−1)) (r/a)^{2(2n−1)} cos(2(2n−1)φ).
10. u(r, φ) = c/2 + (4c/π) ∑_{n=1}^∞ ((−1)^{n−1}/(2n−1)) (r/2)^{2n−1} cos((2n−1)φ).
11. u(r, φ) = −2 ∑_{n=1}^∞ (1/(λ_{0n}³ J₁(λ_{0n}))) J₀(λ_{0n} r), where λ_{0n} is the nth positive zero of J₀(·).
12. u(r, φ) = −2 ∑_{n=1}^∞ [J₀(λ_{0n} r)/(λ_{0n}³ J₁(λ_{0n})) + (J₃(λ_{3n} r)/(λ_{3n}³ J₄(λ_{3n}))) cos 3φ], where λ_{0n} is the nth positive zero of J₀(·) and λ_{3n} is the nth positive zero of J₃(·).
13. u(r, φ) = r² sin 2φ − 2 ∑_{n=1}^∞ (1/(λ_{0n}³ J₁(λ_{0n}))) J₀(λ_{0n} r), where λ_{0n} is the nth positive zero of J₀(·).
14. u(r, φ, z) = u(r, z) = 2 ∑_{n=1}^∞ (1/(λ_{0n} J₁(λ_{0n}))) J₀(λ_{0n} r) (sinh(λ_{0n} z)/sinh(λ_{0n})),
where λ_{0n} is the nth positive zero of J₀(·).
15. u(r, z) = ∑_{n=1}^∞ (J₀(λ_{0n} r)/sinh(λ_{0n})) [A_n sinh(λ_{0n}(1−z)) + B_n sinh(λ_{0n} z)], where
λ_{0n} is the nth positive zero of J₀(·). The coefficients are determined by the formula
A_n = (2/(J₁(λ_{0n}))²) ∫₀¹ G(r) J₀(λ_{0n} r) r dr, B_n = (2/(J₁(λ_{0n}))²) ∫₀¹ H(r) J₀(λ_{0n} r) r dr.
16. u(r, θ) = 2 + ∑_{n=1}^∞ (4n − 1)((−1)^{n−1}(2n−2)!/(n 2^{2n−2}((n−1)!)²)) r^{2n−1} P_{2n−1}(cos θ).
A_mn = ((2m+1)(m−n)!/(2π(m+n)!)) ∫₀^{2π} ∫₀^π f(θ, φ) P_m^{(n)}(cos θ) cos nφ sin θ dθ dφ,
B_mn = ((2m+1)(m−n)!/(2π(m+n)!)) ∫₀^{2π} ∫₀^π f(θ, φ) P_m^{(n)}(cos θ) sin nφ sin θ dθ dφ.
21. u(r, θ, φ) = ∑_{m=1}^∞ ∑_{n=0}^m (rᵐ/m)(A_mn cos nφ + B_mn sin nφ) P_m^{(n)}(cos θ) + C,
where C is any constant and A_mn and B_mn are as in Exercise 18.
Section 7.4.
1. u(x, y) = (2/π) ∫₀^∞ (sinh ωx/((1 + ω²) sinh ωπ)) sin ωy dω.
2. u(x, y) = (2/π) ∫₀^∞ F(ω) (sinh ω(2−y)/((1 + ω²) sinh 2ω)) sin ωx dω.
3. u(x, y) = (2/π) ∫₀^∞ (ω/(1 + ω²)) [e^{−ωx} sin ωy + e^{−ωy} sin ωx] dω.
4. u(x, y) = (y/π) ∫_{−∞}^∞ f(ω)/(y² + (x − ω)²) dω.
5. u(x, y) = (1/π)[arctan((1+x)/y) + arctan((1−x)/y)].
6. u(x, y) = (1/2)(2 + y)/(x² + (2 + y)²).
7. u(x, y) = (y/π) ∫_{−∞}^∞ cos ω/((x − ω)² + y²) dω.
8. u(x, y) = (2/π) ∫₀^∞ (sinh ωx/((1 + ω²) sinh ω)) cos ωy dω.
9. u(x, y) = (2/π) ∫₀^∞ ((1 − cos ω)/ω) e^{−ωx} sin ωy dω.
12. u(r, z) = a ∫₀^∞ J₀(ωr) J₁(ω) e^{−ωz} dω.
13. u(r, z) = ∫₀^∞ J₀(ωr) e^{−ω(z+a)} dω = 1/√((z + a)² + r²).
0
Section 8.1.
1. λ₁ = 5, x₁ = (−4, 0, −3, 5)ᵀ; λ₂ = 9, x₂ = (0, 0, −3, 0)ᵀ;
λ₃ = −√10, x₃ = (−(1 + √10)/3, 0, 1, 0)ᵀ; λ₄ = √10, x₄ = (−(1 − √10)/3, 0, 1, 0)ᵀ.
The solution is x₁ = 2 and x₂ = 1;
the solution is y₁ = 4.00002 and y₂ = −1.
4. Let x = (x₁, x₂, x₃)ᵀ be any nonzero vector. Then
xᵀ · A · x = (x₁ x₂ x₃) · [2 −1 0; −1 2 −1; 0 −1 2] · (x₁, x₂, x₃)ᵀ
= x₁² + (x₁ − x₂)² + (x₂ − x₃)² + x₃² ≥ 0.
5. First notice that the matrix of the system is strictly diagonally dominant (check this). Therefore the Jacobi and Gauss–Seidel iterative methods converge for every initial point x₀. Now if we take ϵ = 10⁻⁶ and
In[1] := A = {{4, 1, −1, 1}, {1, 4, −1, −1}, {−1, −1, 5, 1}, {1, −1, 1, 3}};
In[2] := b = {5, −2, 6, −4};
in the modules “ImprovedJacobi” and “ImprovedGaussSeidel”
In[3] := ImprovedJacobi[A, b, {0, 0, 0, 0}, 0.000001]
In[4] := ImprovedGaussSeidel[A, b, {0, 0, 0, 0}, 0.000001]
we obtain
(a) Out[3] := k = 23 x = {−0.75341, 0.041092, −0.28081, 0.69177}
(b) Out[4] := k = 12 x = {−0.75341, 0.04109, −0.28081, 0.69177}.
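The modules “ImprovedJacobi” and “ImprovedGaussSeidel” above are Mathematica code from the text. A rough Python counterpart, exercised here on a small strictly diagonally dominant test system of my own (not the book's), looks like:

```python
def jacobi(A, b, x0, eps=1e-6, max_iter=500):
    # Jacobi: x_i^{k+1} = (b_i - sum_{j != i} a_ij x_j^k) / a_ii.
    n = len(b)
    x = list(x0)
    for k in range(1, max_iter + 1):
        xn = [(b[i] - sum(A[i][j]*x[j] for j in range(n) if j != i)) / A[i][i]
              for i in range(n)]
        if max(abs(xn[i] - x[i]) for i in range(n)) < eps:
            return xn, k
        x = xn
    return x, max_iter

def gauss_seidel(A, b, x0, eps=1e-6, max_iter=500):
    # Gauss-Seidel: same update, but uses freshly updated components immediately.
    n = len(b)
    x = list(x0)
    for k in range(1, max_iter + 1):
        old = list(x)
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j]*x[j] for j in range(n) if j != i)) / A[i][i]
        if max(abs(x[i] - old[i]) for i in range(n)) < eps:
            return x, k
    return x, max_iter

A = [[10.0, 1.0, 2.0], [1.0, 8.0, 1.0], [2.0, 1.0, 9.0]]
b = [13.0, 10.0, 12.0]          # exact solution is (1, 1, 1)
xj, kj = jacobi(A, b, [0.0, 0.0, 0.0])
xg, kg = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```

For this system Gauss–Seidel typically needs roughly half as many sweeps as Jacobi, mirroring the k = 23 versus k = 12 counts reported above.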
k | x₁ | x₂ | x₃ | x₄ | d_k
0 | 0 | 0 | 0 | 0 |
1 | 4.0854 | 2.7636 | 1.4419 | 0.1202 | 5.3606
2 | 4.0000 | 3.0000 | 2.0000 | 1.0000 | 0.0112
7. The matrix A is not strictly diagonally dominant since, for example, in its first row we have |2| + |−2| = 4 = |4|. To show that the Jacobi and Gauss–Seidel iterative methods are convergent we use Theorem 8.1.3.
For the Jacobi iterative method, the matrix B = B_J in (8.1.17) is given by
B_J = −D⁻¹(L + U) = [0 1/2 −1/2; −1/3 0 1/3; 3/4 −1/4 0],
with characteristic polynomial
p_{B_J}(λ) = −λ³ − (5/8)λ + 1/12.
For the Gauss–Seidel method,
B_GS = −(D + L)⁻¹U = [0 1/2 −1/2; 0 1/6 1/6; 0 1/3 5/12],
with characteristic polynomial
p_{B_GS}(λ) = −λ³ + (7/12)λ² − (1/72)λ.
We find that the eigenvalues of B_J and B_GS (the roots of these polynomials) all have modulus less than 1, so both methods converge.
Section 8.2.
1. (a) We know from calculus that f′(x) = cos x, and so the exact value of f′(π/4) is √2/2 ≈ 0.707.
With the forward finite difference approximation for h = 0.1 we have
f′(π/4) ≈ [f(π/4 + 0.1) − f(π/4)]/0.1 = [sin(π/4 + 0.1) − sin(π/4)]/0.1 ≈ 0.671.
With the backward finite difference approximation for h = 0.1 we have
f′(π/4) ≈ [f(π/4) − f(π/4 − 0.1)]/0.1 = [sin(π/4) − sin(π/4 − 0.1)]/0.1 ≈ 0.741.
With the central finite difference approximation for h = 0.1 we have
f′(π/4) ≈ [sin(π/4 + 0.1) − sin(π/4 − 0.1)]/0.2 ≈ 0.706.
2. For the mesh points x = 0.5 and x = 0.7 we use the forward and backward finite difference approximations, respectively:
f′(0.5) ≈ (0.56464 − 0.47943)/0.1 = 0.8521,
f′(0.7) ≈ (0.64422 − 0.56464)/0.1 = 0.7958.
For the mesh point x = 0.6 we can use either finite difference approximation. The central approximation gives
f′(0.6) ≈ (0.64422 − 0.47943)/(2 · 0.1) = 0.82395.
For the second derivative with the central approximation we obtain
f′′(0.6) ≈ (0.64422 − 2 · 0.56464 + 0.47943)/0.1² = −0.563.
4. We find from calculus that f′(x) = 2x e^{x²} and f′′(x) = (2 + 4x²) e^{x²}.
x_i = ih, i = 0, 1, 2, . . . , n; h = 1/n.
Section 8.3.
Table E. 8.3.1
j\i 1 2 3 4 5 6 7
1 125.8 141.2 145.4 144.0 137.5 122.6 88.61
2 102.10 113.50 116.50 113.10 103.30 84.48 51.79
3 89.17 94.05 93.92 88.76 77.97 60.24 34.05
4 80.53 79.65 76.40 70.00 59.63 44.47 24.17
5 73.30 67.62 62.03 55.22 46.08 33.82 18.18
6 65.05 55.52 48.87 42.76 36.65 26.55 14.73
7 51.39 40.52 35.17 31.29 27.23 21.99 14.18
4u_{i,j} − u_{i+1,j} − u_{i−1,j} − u_{i,j+1} − u_{i,j−1} = −1/64, i, j = 1, 2, 3.
Table E. 8.3.2.
j\i 1 2 3
1 −0.043 −0.055 −0.043
2 −0.055 −0.070 −0.055
3 −0.043 −0.055 −0.043
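The discrete system above is readily solved by Gauss–Seidel sweeps over the 3 × 3 interior grid. A self-contained Python sketch (it verifies the discrete equations themselves, rather than the table values):

```python
# Gauss-Seidel for the five-point scheme on the 3x3 interior of a 5x5 grid:
#   4 u_ij - u_{i+1,j} - u_{i-1,j} - u_{i,j+1} - u_{i,j-1} = -1/64,
# with zero boundary values.
rhs = -1.0/64.0
u = [[0.0]*5 for _ in range(5)]   # 5x5 grid including the boundary

for sweep in range(200):
    for i in range(1, 4):
        for j in range(1, 4):
            u[i][j] = (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1] + rhs) / 4.0

# Residual of the discrete equations after iterating.
res = max(abs(4*u[i][j] - u[i+1][j] - u[i-1][j] - u[i][j+1] - u[i][j-1] - rhs)
          for i in range(1, 4) for j in range(1, 4))
```

By symmetry the converged solution has the center value −9/512 and the four edge-midpoint values −7/512, which the iteration reproduces to machine precision.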
Table E. 8.3.3.
j\i 1 2 3 4 5 6 7
1 126.5 142.3 146.8 145.5 138.8 123.6 89.1
2 103.5 116.0 119.6 116.3 106.0 86.47 52.81
3 91.66 98.41 99.21 94.05 82.49 63.47 35.71
4 84.72 86.79 84.83 78.21 66.46 49.21 26.55
5 80.44 79.21 75.12 67.49 55.92 40.37 21.29
6 77.84 74.47 68.97 60.69 49.36 35.04 18.25
7 76.42 71.88 65.58 56.96 45.70 32.20 16.65
4. Let u(3 + 0.25i, 4 + 0.25j) ≈ u_{i,j}, i = 1, 2, 3, j = 1, 2, 3, 4. The results
are given in Table E. 8.3.4.
Table E. 8.3.4.
j\i 1 2 3
1 −0.017779 −0.044663 −0.065229
2 −0.0277746 −0.053768 −0.068876
3 −0.032916 −0.056641 −0.072783
4 −0.034524
4 −1 0 −1 0 0 0 0 0
−1 4 −1 0 −1 0 0 0 0
0 −1 4 0 0 −1 0 0 0
−1 0 0 4 −1 0 −1 0 0
A = 0 −1 0 −1 4 −1 0 −1 0,
0 0 −1 0 −1 4 0 0 −1
0 0 0 −1 0 0 4 −1 0
0 0 0 0 −1 0 −1 4 −1
0 0 0 0 0 −1 0 −1 4
and
( )T
u= u1,1 u1,2 u1,3 u2,1 u2,2 u2,3 u3,1 u3,2 u3,3
Table E. 8.3.5.
j\i 1 2 3
1 0.35355339 0 −0.35355339
2 0 0 0
3 −0.35355339 0 0.35355339
Section 8.4.
Table E.8.4.1
tj h = 1/8 h = 1/16 u(1/4, tj )
0.01 0.68 0.62 0.67
0.02 0.47 0.42 0.45
0.03 0.32 0.28 0.31
0.04 0.22 0.19 0.21
0.05 0.15 0.13 0.14
0.06 0.10 0.09 0.09
0.07 0.07 0.06 0.06
0.08 0.05 0.04 0.04
0.09 0.03 0.03 0.03
0.10 0.02 0.02 0.02
0.11 0.01 0.01 0.01
0.12 0.01 0.01 0.01
0.13 0.01 0.01 0.01
0.14 0.00 0.00 0.00
0.15 0.00 0.00 0.00
4. Using the central finite difference approximations for u_x and u_xx, the explicit finite difference approximation of the given initial boundary value problem is
u_{i,j+1} = (1 − 2Δt/h²) u_{i,j} + (Δt/(2h) + Δt/h²) u_{i+1,j} + (−Δt/(2h) + Δt/h²) u_{i−1,j}.
With the given h = 0.1 and Δt = 0.0025 the results are presented in Table E.8.4.4.
Table E.8.4.4.
t\x x1 x2 x3 x4 x5 x6 x7
0.0025 0.1025 0.2025 0.3025 0.5338 0.7000 0.5162 0.2975
0.0050 0.1044 0.2050 0.3395 0.5225 0.6123 0.5025 0.3232
0.0075 0.1060 0.2163 0.3556 0.5026 0.5621 0.4815 0.3321
0.0100 0.1098 0.2267 0.3611 0.4832 0.5268 0.4614 0.3328
0.0125 0.1144 0.2342 0.3612 0.4657 0.4993 0.4432 0.3293
0.0150 0.1187 0.2391 0.3585 0.4497 0.4766 0.4266 0.3239
0.0175 0.1221 0.2419 0.3541 0.4351 0.4571 0.4114 0.3172
0.0200 0.1245 0.2429 0.3487 0.4216 0.4399 0.3976 0.3103
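An explicit scheme of this Forward-Time Central-Space type is a few lines of code. As a sketch, here is the pure heat-equation case u_t = u_xx on (0, 1) with u(x, 0) = sin πx and zero boundary values (my own test problem, chosen so the exact solution e^{−π²t} sin πx is available), using the same h = 0.1 and Δt = 0.0025, i.e. r = Δt/h² = 0.25 ≤ 1/2 for stability:

```python
import math

h, dt, steps = 0.1, 0.0025, 20
n = round(1/h)
u = [math.sin(math.pi*i*h) for i in range(n + 1)]  # initial condition
r = dt/h**2                                         # must satisfy r <= 1/2

for _ in range(steps):
    # FTCS update on interior points; boundary values stay 0.
    u = [0.0] + [u[i] + r*(u[i+1] - 2*u[i] + u[i-1]) for i in range(1, n)] + [0.0]

t = steps*dt
exact_mid = math.exp(-math.pi**2*t)*math.sin(math.pi*0.5)
err = abs(u[n//2] - exact_mid)
```

After 20 steps (t = 0.05) the midpoint value agrees with the exact solution to about 0.1%, consistent with the O(Δt + h²) accuracy of the scheme.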
Section 8.5.
Figure E.8.5.1: (a) the initial condition u(x, 0) = f(x) at t = 0; (b) the exact and approximate solutions at t = 0.24.
3. The numerical results are given in Table E.8.5.1. The numbers are
rounded to four digits.
Table E.8.5.1.
t\x 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00
0.2 1.000 1.000 1.000 0.985 1.024 1.270 1.641 1.922
0.4 1.000 1.000 1.000 0.993 0.976 1.063 1.352 1.715
0.6 1.000 0.999 1.000 1.000 0.981 0.977 1.115 1.434
0.8 1.000 1.000 1.000 1.000 0.995 0.969 0.991 1.177
1.0 1.000 1.000 1.000 1.000 1.000 0.986 0.960 1.015
1.2 1.000 1.000 1.000 1.000 1.000 1.000 0.974 0.957
ANSWERS TO EXERCISES 625
Table E.8.5.2.
t\x 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00
0.2 1.000 1.000 1.000 1.191 1.562 1.979 2.000 2.000
0.4 1.000 1.000 1.000 1.000 1.036 1.038 1.039 1.706
0.6 1.000 1.000 1.000 1.000 1.000 1.000 1.077 1.410
0.8 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.130
1.0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
1.2 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
Figure E.8.5.2: the exact and approximate solutions at t = 1.0, panels (a) and (b).
4. For the initial level, use the given initial condition $u_{i,0} = f(ih)$. For
the next level, use the Forward-Time Central-Space scheme
$$
u_{i,1} = u_{i,0} - \frac{\lambda}{2}\left(u_{i+1,0} - u_{i-1,0}\right).
$$
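The two levels above can be computed in a few lines. A minimal sketch in Python with NumPy (the function f, the spacing h, and λ are placeholders to be supplied by the particular problem):

```python
import numpy as np

def first_level(f, h, lam, n):
    """Compute the initial level u_{i,0} = f(i h) and the first time level
    u_{i,1} = u_{i,0} - (lam/2) (u_{i+1,0} - u_{i-1,0})
    at the interior points i = 1, ..., n-1; the endpoints are kept
    from the initial data (boundary treatment is problem-dependent)."""
    x = h * np.arange(n + 1)
    u0 = f(x)
    u1 = u0.copy()
    u1[1:-1] = u0[1:-1] - 0.5 * lam * (u0[2:] - u0[:-2])
    return u0, u1
```

For constant initial data the central difference vanishes and u1 equals u0, while for linear data f(x) = x the interior values are shifted by exactly λh, both easy checks on the formula.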
Figure E.8.5.3: the exact and approximate solutions at t = 0.3, panels (a) and (b).
Figure E.8.5.4: (a) the initial condition u(x, 0) = f(x) at t = 0; (b), (c), (d) the exact and approximate solutions at t = 0.0250, t = 0.0750, and t = 0.200.
Table E.8.5.4.
t\x 0.1 0.2 0.3 0.4 0.5 0.6 0.7
0.05 0.079 0.104 0.119 0.124 0.119 0.104 0.079
0.10 0.075 0.100 0.115 0.120 0.115 0.100 0.075
0.15 0.069 0.094 0.109 0.113 0.109 0.094 0.069
0.20 0.060 0.085 0.100 0.105 0.100 0.085 0.060
0.25 0.050 0.074 0.089 0.094 0.089 0.074 0.050
$$
G_{\mathrm{odd}}(x) = \begin{cases} -g(-x), & -1 \le x \le 0, \\ g(x), & 0 \le x \le 1. \end{cases}
$$
To find $G_{\mathrm{ext}}$, first find an antiderivative $G(x)$ for $-1 < x < 1$:
$$
G(x) = \int_{-1}^{x} g(s)\, ds = \int_{-1}^{x} \sin \pi s \, ds = -\frac{1}{\pi}\left(1 + \cos \pi x\right).
$$
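This antiderivative is easy to verify numerically: G′(x) should recover g(x) = sin πx, and the lower limit gives G(−1) = 0. A quick check in Python:

```python
import math

def G(x):
    # Antiderivative G(x) = -(1 + cos(pi x)) / pi of g(x) = sin(pi x),
    # normalized so that G(-1) = 0.
    return -(1.0 + math.cos(math.pi * x)) / math.pi

def g(x):
    return math.sin(math.pi * x)

# Central-difference check that G' matches g at a few points of (-1, 1).
eps = 1e-6
for x in [-0.75, -0.3, 0.0, 0.4, 0.9]:
    deriv = (G(x + eps) - G(x - eps)) / (2.0 * eps)
    assert abs(deriv - g(x)) < 1e-6
```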
Figure E.8.5.7: (a) the initial condition u(x, 0) = f(x) at t = 0; (b), (c), (d) the exact and approximate solutions at t = 0.050, t = 0.100, and t = 0.275.
where
$$
A = \begin{pmatrix}
2(1-\lambda^2) & \lambda^2 & \cdots & 0 & 0 \\
\lambda^2 & 2(1-\lambda^2) & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 2(1-\lambda^2) & \lambda^2 \\
0 & 0 & \cdots & \lambda^2 & 2(1-\lambda^2)
\end{pmatrix},
$$
$$
\mathbf{u}_{j} = \begin{pmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{i,j} \end{pmatrix}, \qquad
\mathbf{b}_{j-1} = \begin{pmatrix} \alpha(t_{j-1}) \\ 0 \\ \vdots \\ 0 \\ \beta(t_{j-1}) \end{pmatrix},
$$
and
$$
\lambda = a\,\frac{\Delta t}{h}.
$$
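The tridiagonal matrix and the boundary vector above can be assembled directly. A minimal sketch in Python with NumPy (the function names are illustrative, and the boundary data α, β are passed in as plain numbers for a single time level):

```python
import numpy as np

def wave_matrix(n, lam):
    """Tridiagonal n x n matrix with 2(1 - lam^2) on the diagonal and
    lam^2 on the sub- and super-diagonals, as in the scheme above."""
    d = 2.0 * (1.0 - lam**2)
    A = np.diag(np.full(n, d))
    A += np.diag(np.full(n - 1, lam**2), 1)    # super-diagonal
    A += np.diag(np.full(n - 1, lam**2), -1)   # sub-diagonal
    return A

def boundary_vector(n, alpha, beta):
    """Vector b = (alpha, 0, ..., 0, beta)^T carrying the boundary
    values at one time level into the interior update."""
    b = np.zeros(n)
    b[0], b[-1] = alpha, beta
    return b
```

With λ = aΔt/h computed from the given wave speed and grid, these two pieces supply the right-hand side of the matrix form of the scheme.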
Mathematics
INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS FOR SCIENTISTS AND ENGINEERS
USING MATHEMATICA
Kuzman Adzievski • Abul Hasan Siddiqi
The book provides the fundamental concepts, ideas, and terminology related to PDEs,
together with the Fourier series, integral transforms, and Sturm–Liouville problems
necessary for a proper understanding of the underlying foundations and technical details.
It then discusses d'Alembert's method as well as the separation of variables method for
the wave equation on rectangular and circular domains. Building on this, the book
studies the solution of the heat equation using Fourier and Laplace transforms,
and examines the Laplace and Poisson equations on different rectangular and circular
domains. The authors discuss finite difference methods for elliptic, parabolic, and
hyperbolic partial differential equations, important tools in applied mathematics
and engineering. This facilitates the proper understanding of numerical solutions
of the above-mentioned equations. In addition, applications using Mathematica®
are provided.
Features
• Covers basic theory, concepts, and applications of PDEs in engineering and science
• Includes solutions to selected examples as well as exercises in each chapter
• Uses Mathematica along with graphics to visualize computations, improving understanding and interpretation
• Provides adequate training for those who plan to continue studies in this area
Written for a one- or two-semester course, the text covers all the elements
encountered in theory and applications of PDEs and can be used with or without
computer software. The presentation is simple and clear, with no sacrifice of
rigor. Where proofs are beyond the mathematical background of students, a
short bibliography is provided for those who wish to pursue a given topic further.
Throughout the text, the illustrations, numerous solved examples, and projects
have been chosen to make the exposition as clear as possible.
K14786