Mathematics

INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS
FOR SCIENTISTS AND ENGINEERS USING MATHEMATICA

Kuzman Adzievski • Abul Hasan Siddiqi

Developed specifically for students in engineering and the physical sciences, Introduction to
Partial Differential Equations for Scientists and Engineers Using Mathematica covers
phenomena and processes represented by partial differential equations (PDEs) and their
solutions, both analytical and numerical. In addition, the book details the mathematical
concepts and ideas, namely Fourier series, integral transforms, and Sturm–Liouville
problems, necessary for a proper understanding of the underlying foundation and technical
details.

The book provides fundamental concepts, ideas, and terminology related to PDEs. It then
discusses d’Alembert’s method as well as the separation of variables method for the wave
equation on rectangular and circular domains. Building on this, the book studies the solution
of the heat equation using Fourier and Laplace transforms, and examines the Laplace and
Poisson equations on different rectangular and circular domains. The authors discuss finite
difference methods for elliptic, parabolic, and hyperbolic partial differential equations,
important tools in applied mathematics and engineering. This facilitates the proper
understanding of numerical solutions of the above-mentioned equations. In addition,
applications using Mathematica® are provided.

Features
• Covers basic theory, concepts, and applications of PDEs in engineering and science
• Includes solutions to selected examples as well as exercises in each chapter
• Uses Mathematica along with graphics to visualize computations, improving
understanding and interpretation
• Provides adequate training for those who plan to continue studies in this area

Written for a one- or two-semester course, the text covers all the elements encountered in
theory and applications of PDEs and can be used with or without computer software. The
presentation is simple and clear, with no sacrifice of rigor. Where proofs are beyond the
mathematical background of students, a short bibliography is provided for those who wish
to pursue a given topic further. Throughout the text, the illustrations, numerous solved
examples, and projects have been chosen to make the exposition as clear as possible.

K14786


INTRODUCTION TO
PARTIAL DIFFERENTIAL
EQUATIONS FOR SCIENTISTS
AND ENGINEERS USING
MATHEMATICA
Kuzman Adzievski • Abul Hasan Siddiqi
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2014 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20130820

International Standard Book Number-13: 978-1-4665-1057-9 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information stor-
age or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copy-
right.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222
Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that pro-
vides licenses and registration for a variety of users. For organizations that have been granted a pho-
tocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
CONTENTS

Preface ix

Acknowledgments xiii

1 Fourier Series 1
1.1 Fourier Series of Periodic Functions . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Convergence of Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3 Integration and Differentiation of Fourier Series . . . . . . . . . . . 37
1.4 Fourier Sine and Cosine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
1.5 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

2 Integral Transforms 83
2.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.1.1 Definition and Properties of the Laplace Transform . . . . . . . . 84
2.1.2 Step and Impulse Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.1.3 Initial-Value Problems and the Laplace Transform . . . . . . . . . 102
2.1.4 The Convolution Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
2.2 Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.2.1 Definition of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.2.2 Properties of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . 127
2.3 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

3 Sturm–Liouville Problems 145


3.1 Regular Sturm–Liouville Problems . . . . . . . . . . . . . . . . . . . . . . . . 145
3.2 Eigenfunction Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
3.3 Singular Sturm–Liouville Problems . . . . . . . . . . . . . . . . . . . . . . . . 168
3.3.1 Definition of Singular Sturm–Liouville Problems . . . . . . . . . . . 168
3.3.2 Legendre’s Differential Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 170


3.3.3 Bessel’s Differential Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182


3.4 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

4 Partial Differential Equations 204


4.1 Basic Concepts and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . 204
4.2 Partial Differential Equations of the First Order . . . . . . . . . . . 209
4.3 Linear Partial Differential Equations of the Second Order . . 221
4.3.1 Important Equations of Mathematical Physics . . . . . . . . . . . . . 222
4.3.2 Classification of Linear PDEs of the Second Order . . . . . . . . . 230
4.4 Boundary and Initial Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 238
4.5 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240

5 The Wave Equation 243


5.1 d’Alembert’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
5.2 Separation of Variables Method for the Wave Equation . . . . 259
5.3 The Wave Equation on Rectangular Domains . . . . . . . . . . . . . . 276
5.3.1 Homogeneous Wave Equation on a Rectangle . . . . . . . . . . . . . . 276
5.3.2 Nonhomogeneous Wave Equation on a Rectangle . . . . . . . . . . 281
5.3.3 The Wave Equation on a Rectangular Solid . . . . . . . . . . . . . . . 284
5.4 The Wave Equation on Circular Domains . . . . . . . . . . . . . . . . . 288
5.4.1 The Wave Equation in Polar Coordinates . . . . . . . . . . . . . . . . . . 288
5.4.2 The Wave Equation in Spherical Coordinates . . . . . . . . . . . . . . 297
5.5 Integral Transform Methods for the Wave Equation . . . . . . . 305
5.5.1 The Laplace Transform Method for the Wave Equation . . . . 305
5.5.2 The Fourier Transform Method for the Wave Equation . . . . 310
5.6 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

6 The Heat Equation 326


6.1 The Fundamental Solution of the Heat Equation . . . . . . . . . . 326
6.2 Separation of Variables Method for the Heat Equation . . . . . 334
6.3 The Heat Equation in Higher Dimensions . . . . . . . . . . . . . . . . . 349
6.3.1 Green Function of the Higher Dimensional Heat Equation . 349
6.3.2 The Heat Equation on a Rectangle . . . . . . . . . . . . . . . . . . . . . . . . 351
6.3.3 The Heat Equation in Polar Coordinates . . . . . . . . . . . . . . . . . . 355
6.3.4 The Heat Equation in Cylindrical Coordinates . . . . . . . . . . . . . 359
6.3.5 The Heat Equation in Spherical Coordinates . . . . . . . . . . . . . . 361
6.4 Integral Transform Methods for the Heat Equation . . . . . . . . . 366

6.4.1 The Laplace Transform Method for the Heat Equation . . . . 366
6.4.2 The Fourier Transform Method for the Heat Equation . . . . . 371
6.5 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

7 Laplace and Poisson Equations 383


7.1 The Fundamental Solution of the Laplace Equation . . . . . . . . 383
7.2 Laplace and Poisson Equations on Rectangular Domains . . . 397
7.3 Laplace and Poisson Equations on Circular Domains . . . . . . 413
7.3.1 Laplace Equation in Polar Coordinates . . . . . . . . . . . . . . . . . . . . 413
7.3.2 Poisson Equation in Polar Coordinates . . . . . . . . . . . . . . . . . . . . 420
7.3.3 Laplace Equation in Cylindrical Coordinates . . . . . . . . . . . . . . 422
7.3.4 Laplace Equation in Spherical Coordinates . . . . . . . . . . . . . . . . 424
7.4 Integral Transform Methods for the Laplace Equation . . . . . 432
7.4.1 The Fourier Transform Method for the Laplace Equation . . 432
7.4.2 The Hankel Transform Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
7.5 Projects Using Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445

8 Finite Difference Numerical Methods 451


8.1 Basics of Linear Algebra and Iterative Methods . . . . . . . . . . . 451
8.2 Finite Differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.3 Finite Difference Methods for Laplace & Poisson Equations 481
8.4 Finite Difference Methods for the Heat Equation . . . . . . . . . . 495
8.5 Finite Difference Methods for the Wave Equation . . . . . . . . . . 506

Appendices
A. Table of Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
B. Table of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
C. Series and Uniform Convergence Facts . . . . . . . . . . . . . . . . . . . . 526
D. Basic Facts of Ordinary Differential Equations . . . . . . . . . . . . . 529
E. Vector Calculus Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
F. A Summary of Analytic Function Theory . . . . . . . . . . . . . . . . . . 549
G. Euler Gamma and Beta Functions . . . . . . . . . . . . . . . . . . . . . . . . . 556
H. Basics of Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559

Bibliography 574

Answers to the Exercises 575

Index of Symbols 630

Index 631
PREFACE

There is a wide class of physical, real-world problems, such as the distribution of
heat, vibrating strings, blowing winds, the movement of traffic, stock market prices
and brain activity, which can be represented by partial differential equations.
The goal of this book is to provide an introduction to the phenomena and processes
represented by partial differential equations and their solutions, both analyt-
ical and numerical. In addition, we introduce those mathematical concepts
and ideas, namely, Fourier series, integral transforms, Sturm–Liouville prob-
lems, necessary for a proper understanding of the underlying foundation and
technical details.
This book is written for a one- or two-semester course in partial differen-
tial equations. It was developed specifically for students in engineering and
physical sciences. A knowledge of multi-variable calculus and ordinary differ-
ential equations is assumed. Results from power series, uniform convergence,
complex analysis, ordinary differential equations and vector calculus that are
used here are summarized in the appendices of the book.
The text introduces systematically all basic mathematical concepts and
ideas that are needed for construction of analytic and numerical solutions of
partial differential equations. The material covers all the elements that are
encountered in theory and applications of partial differential equations which
are taught in any standard university.
We have attempted to keep the writing style simple and clear, with no
sacrifice of rigor. On many occasions, students are introduced to proofs.
Where proofs are beyond the mathematical background of students, a short
bibliography is provided for those who wish to pursue a given topic further.
Throughout the text, the illustrations, numerous solved examples and projects
have been chosen to make the exposition as clear as possible.
This textbook can be used with or without the use of computer software.
We have chosen to use the Wolfram computer software Mathematica because
of its easy introduction to computer computations and its excellent graphics
capabilities. The use of this software allows students, among other things, to
do fast and accurate computations, modeling, experimentation and visualiza-
tion.
The organization of the book is as follows:
The book is divided into 8 chapters and 8 appendices. Chapters 1, 2
and 3 lay out the foundation for the solution methods of partial differential
equations, developed in later chapters.
Chapter 1 is devoted to basic results of Fourier series, which is nothing
but a linear combination of trigonometric functions. Visualization of Fourier


series of a function is presented using Mathematica. The subject matter of this


chapter is very useful for solutions of partial differential equations presented
in Chapters 5 to 8. Topics of Chapter 1 include Fourier series of periodic
functions and their convergence, integration and differentiation, Fourier Sine
and Fourier Cosine series and computation of partial sums of a Fourier series
and their visualization by Mathematica.
Integral transforms are discussed in Chapter 2. Fourier transforms and
their basic properties are discussed in this chapter. The Laplace transform
and its elementary properties as well as its differentiation and integration are
introduced. The inversion formula and convolution properties of the Laplace
and Fourier transform, Heaviside step and the Dirac Delta functions are pre-
sented. Applications of the Laplace and Fourier transforms to initial value
problems are indicated in this chapter. Parseval’s identity and the Riemann-
Lebesgue Lemma for Fourier transforms are also discussed.
The concept of the Fourier series, discussed in Chapter 1, is expanded in
Chapter 3. Regular and singular Sturm–Liouville Problems and their prop-
erties are introduced and discussed in this chapter. The Legendre and Bessel
differential equations and their solutions, the Legendre polynomials and Bessel
functions, respectively, are discussed in the major part of this chapter.
Chapter 4 provides fundamental concepts, ideas and terminology related
to partial differential equations. In addition, in this chapter there is an in-
troduction to partial differential equations of the first order and linear partial
differential equations of the second order, along with the classification of lin-
ear partial differential equations of the second order. The partial differential
equations known as the heat equation, the wave equation, the Laplace and
Poisson equations and the transport equation are derived in this chapter.
Analytical (exact) solutions of the wave, heat, Laplace and Poisson equa-
tions are discussed, respectively, in Chapters 5, 6 and 7.
Chapter 5 covers d’Alembert’s method and the separation of variables method for
the wave equation on rectangular and circular domains. Solutions of the
wave equation applying the Laplace and the Fourier transforms are obtained.
Chapter 6 is mainly devoted to the solution of the heat equation using
Fourier and Laplace transforms. The multi-dimensional heat equation on
rectangular and circular domains is also studied. Projects using Mathematica
for simulation of the heat equation are presented at the end of the chapter.
In Chapter 7 we discuss the Laplace and Poisson equations on different
rectangular and circular domains. Applications of the Fourier and Hankel
transform to the Laplace equation on unbounded domains are presented. Ex-
amples of applications of Mathematica to the Laplace and Poisson equations
are given.
In Chapter 8 we study finite difference methods for elliptic, parabolic and
hyperbolic partial differential equations. In recent years these methods have
become an important tool in applied mathematics and engineering. A systematic
review of linear algebra, iterative methods and finite differences is
presented in this chapter. This will facilitate the proper understanding of

numerical solutions of the above mentioned equations. Applications using


Mathematica are provided.
Projects using Mathematica are included throughout the textbook.
Tables of the Laplace and Fourier transforms, for ready reference, are given
in Appendix A and Appendix B, respectively.
Introduction and basics of Mathematica are given in Appendix H.
Answers to most of the exercises are provided.
An Index of Symbols used in the textbook is included.
A subject index is also included, providing easy access to page numbers.
ACKNOWLEDGMENTS

The idea for writing this book was initiated while both authors were visiting
Sultan Qaboos University in 2009. We thank Dr. M. A-Lawatia, Head of the
Department of Mathematics & Statistics at Sultan Qaboos University, who
provided a congenial environment for this project.
The first author is grateful for the useful comments of many of his colleagues
at South Carolina State University, in particular: Sam McDonald, Ikhalfani
Solan, Jean-Michelet Jean Michel and Guttalu Viswanath.
Many students of the first author who have taken applied mathematics
checked and tested the problems in the book and their suggestions are appre-
ciated.
The first author is especially grateful for the continued support and en-
couragement of his wife, Sally Adzievski, while writing this book, as well as
for her useful linguistic advice.
Special thanks also are due to Slavica Grdanovska, a graduate research
assistant at the University of Maryland, for checking the examples and answers
to the exercises for accuracy in several chapters of the book.
The second author is indebted to his wife, Dr. Azra Siddiqi, for her encour-
agement. The second author would like to thank his colleagues, particularly
Professor P. Manchanda of Guru Nanak Dev University, Amritsar India. Ap-
preciation is also given to six research scholars of Sharda University, NCR,
India, working under the supervision of the second author, who have read the
manuscript of the book carefully and have made valuable comments.
Special thanks are due to Ms. Marsha Pronin, project coordinator, Taylor
& Francis Group, Boca Raton, Florida, for her help in the final stages of
manuscript preparation.
We would like to thank also Mr. Shashi Kumar from the Help Desk of
Taylor & Francis for help in formatting the whole manuscript.
We acknowledge Ms. Michele Dimont, project editor, Taylor & Francis
Group, Boca Raton, Florida, for editing the entire manuscript.
We acknowledge the sincere efforts of Ms. Aastha Sharma, acquiring editor,
CRC Press, Taylor & Francis Group, Delhi office, whose constant persuasion
enabled us to complete the book in a timely manner.
This book was typeset by AMS-TeX, the TeX macro system of the American
Mathematical Society.

Kuzman Adzievski
Abul Hasan Siddiqi

CHAPTER 1

FOURIER SERIES

The purpose of this chapter is to acquaint students with some of the most
important aspects of the theory and applications of Fourier series.
Fourier analysis is a branch of mathematics that was invented to solve some
partial differential equations modeling certain physical problems. The history
of the subject of Fourier series begins with d’Alembert (1747) and Euler (1748)
in their analysis of the oscillations of a violin string. The mathematical theory
of such vibrations, under certain simplified physical assumptions, comes down
to the problem of solving a particular class of partial differential equations.
Their ideas were further advanced by D. Bernoulli (1753) and Lagrange (1759).
Fourier’s contributions begin in 1807 with his study of the problem of heat
flow presented to the Académie des Sciences. He made a serious attempt to
show that an “arbitrary” function f of period T can be expressed as an infinite
linear combination of the trigonometric functions sine and cosine of the same
period T :
    f(t) = ∑_{n=0}^{∞} [ a_n cos(2nπt/T) + b_n sin(2nπt/T) ].

Fourier’s attempt later turned out to be incorrect. Dirichlet (1829), Riemann


(1867) and Lebesgue (1902) and later many other mathematicians made im-
portant contributions to the subject of Fourier series.
Fourier analysis is a powerful tool for many problems, and particularly for
solving various differential equations arising in science and engineering. Ap-
plications of Fourier series in physics, chemistry and engineering are enormous
and almost endless: from the analysis of vibrations of the air in wind tunnels
or the description of the heat distribution in solids, to electrocardiography, to
magnetic resonance imaging, to the propagation of light, to the oscillations
of the ocean tides and meteorology. Fourier analysis lies also at the heart of
signal and image processing, including audio, speech, images, videos, radio
transmissions, seismic data, and so on. Many modern technological advances,
including television, music CDs, DVDs, video movies, computer graphics and
image processing, are, in one way or another, founded upon the many results
of Fourier analysis. Furthermore, a fairly large part of pure mathematics
was invented in connection with the development of Fourier series. This re-
markable range of applications qualifies Fourier’s discovery as one of the most
important in all of mathematics.


1.1 Fourier Series of Periodic Functions.


In this chapter we will properly develop the basic theory of Fourier analysis,
and in the following chapter, a number of important extensions. Then we will
be in a position for our main task—solving partial differential equations.
A Fourier series is an expansion of a periodic function on the real line R
in terms of trigonometric functions. It can also be used to give expansions of
functions defined on a finite interval in terms of trigonometric functions on
that interval. In contrast to Taylor series, which can only be used to represent
functions which have many derivatives, Fourier series can be used to represent
functions which are continuous as well as those that are discontinuous. The
two theories of Fourier series and Taylor series are profoundly different. A
power series either converges everywhere, or on an interval centered at some
point c, or nowhere except at the point c. On the other hand, a Fourier
series can converge on very complicated and strange sets. Second, when a
power series converges, it converges to an analytic function, which is infinitely
differentiable, and whose derivatives are represented again by power series. On
the other hand, Fourier series may converge, not only to periodic continuous
functions, but also to a wide variety of discontinuous functions.
After reviewing periodic, even and odd functions, in this section we will
focus on representing a function by Fourier series.
Definition 1.1.1. Let T > 0. A function f : R → R is called T -periodic if

f (x + T ) = f (x)

for all x ∈ R.

Remark. The number T is called a period of f . If f is non-constant, the


smallest number T with the above property is called the fundamental period
or simply period of f .
Classical examples of periodic functions are sin x, cos x and other trigono-
metric functions. The functions sin x and cos x have period 2π, while tan x
and cot x have period π.
Periodic functions appear in many physical situations, such as the oscilla-
tions of a spring, the motion of the planets about the sun, the rotation of the
earth about its axis, the motion of a pendulum and musical sounds.
Next we prove an important property concerning integration of periodic
functions.
Theorem 1.1.1. Suppose that f is T -periodic. Then for any real number
a, we have

    ∫_0^T f(x) dx = ∫_a^{a+T} f(x) dx.

Proof. Define the function F by

    F(x) = ∫_x^{x+T} f(t) dt.

By the fundamental theorem of calculus, F′(x) = f(x + T) − f(x) = 0, since
f is T -periodic. Hence, F is a constant function. In particular, F(0) = F(a),
which implies the theorem. ■
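Theorem 1.1.1 is easy to check numerically. The following sketch is written in Python rather than the book's Mathematica, with a test function chosen here purely for illustration; it integrates a 2π-periodic function over several intervals of length T with a simple midpoint rule:

```python
import math

def midpoint_integral(f, a, b, n=10000):
    # Composite midpoint rule; very accurate for smooth integrands.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# A smooth 2*pi-periodic test function (our choice, for illustration only).
f = lambda x: math.sin(x) + 0.3 * math.cos(2 * x)
T = 2 * math.pi

i0 = midpoint_integral(f, 0, T)
for a in (-1.7, 0.5, 4.0):
    # By Theorem 1.1.1, the integral over [a, a + T] equals that over [0, T].
    assert abs(midpoint_integral(f, a, a + T) - i0) < 1e-8
```

The starting point a is irrelevant: each shifted integral agrees with ∫_0^T f(x) dx to within the quadrature error.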

Theorem 1.1.2. The following result holds.


(a) If f_1, f_2, . . . , f_n are all T -periodic functions, then c_1 f_1 + c_2 f_2 + . . . + c_n f_n
is also T -periodic.

(b) If f and g are T -periodic functions, so is their product fg.

(c) If f and g are T -periodic functions, so is their quotient f/g, provided
g ≠ 0.

(d) If f is T -periodic and a > 0, then f(ax) has period T/a.

(e) If f is T -periodic and g is any function, then the composition g ◦ f is
also T -periodic. (It is understood that f and g are such that the composition
g ◦ f is well defined.)
Proof. We prove only part (d) of the theorem, leaving the other parts as an
exercise. Let f be a T -periodic function and let a > 0. Then

    f(a(x + T/a)) = f(ax + T) = f(ax).

Therefore f(ax) has period T/a. ■

The following result is required in the study of convergence of Fourier series.


Lemma 1.1.1. If f is any function defined on −π < x ≤ π, then there is
a unique 2π periodic function f˜, known as the 2π periodic extension of f ,
that satisfies f˜(x) = f (x) for all −π < x ≤ π.
Proof. For a given x ∈ R, there is a unique integer n so that (2n − 1)π <
x ≤ (2n + 1)π. Define f˜ by

f˜(x) = f (x − 2nπ).

Note that if −π < x ≤ π, then n = 0 and hence f˜(x) = f (x). The proof
that the resulting function f˜ is 2π periodic is left as an exercise. ■
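The construction in the proof of Lemma 1.1.1 is explicit enough to compute with. Below is a minimal sketch in Python (the function name is our own invention; the book's software is Mathematica), assuming f is given on (−π, π]:

```python
import math

def periodic_extension(f, x):
    # 2*pi-periodic extension of f, where f is defined on (-pi, pi]:
    # find the unique integer n with (2n - 1)*pi < x <= (2n + 1)*pi
    # and return f(x - 2*n*pi), as in the proof of Lemma 1.1.1.
    n = math.ceil((x - math.pi) / (2 * math.pi))
    return f(x - 2 * n * math.pi)

f = lambda x: x  # f(x) = x on (-pi, pi]; its extension is a sawtooth wave
assert periodic_extension(f, 0.5) == 0.5  # n = 0 inside the base interval
assert abs(periodic_extension(f, 0.5 + 2 * math.pi) - 0.5) < 1e-12
assert abs(periodic_extension(f, 0.5 - 4 * math.pi) - 0.5) < 1e-12
```

The ceiling expression is just a closed form for the integer n of the proof: x ≤ (2n + 1)π and x > (2n − 1)π together force n = ⌈(x − π)/(2π)⌉.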

Definition 1.1.2. Let f be a function defined on an interval I (finite or


infinite) centered at the origin x = 0.
1. f is said to be even if f (−x) = f (x) for every x in I.

2. f is said to be odd if f (−x) = −f (x) for every x in I.

Remark. The graph of an even function is symmetric with respect to the


y-axis, and the graph of an odd function is symmetric with respect to the
origin. For example, 5, x2 , x4 , cos x are even functions, while x, x3 , sin x
are odd functions.
The proof of the following lemma is left as an exercise.
Lemma 1.1.2. The following results hold.
1. The sum of even functions is an even function.

2. The sum of odd functions is an odd function.

3. The product of even functions and the product of two odd functions is
an even function.

4. The product of an even and an odd function is an odd function.

Since integrable functions will play an important role throughout this sec-
tion we need the following definition.
Definition 1.1.3. A function f : R → R is said to be Riemann integrable
on an interval [a, b] (finite or infinite) if

    ∫_a^b |f(x)| dx < ∞.

Lemma 1.1.3. If f is an even and integrable function on [−a, a], then

    ∫_{-a}^{a} f(x) dx = 2 ∫_0^{a} f(x) dx.

If f is an odd and integrable function on the interval [−a, a], then

    ∫_{-a}^{a} f(x) dx = 0.

Proof. We will prove the above result only for even functions f , leaving the
case when f is an odd function as an exercise. Assume that f is an even
function. Then

    ∫_{-a}^{a} f(x) dx = ∫_{-a}^{0} f(x) dx + ∫_{0}^{a} f(x) dx.

Substituting t = −x in the first integral and using f(−t) = f(t), we get

    ∫_{-a}^{0} f(x) dx = ∫_{0}^{a} f(−t) dt = ∫_{0}^{a} f(t) dt,

so that

    ∫_{-a}^{a} f(x) dx = ∫_{0}^{a} f(t) dt + ∫_{0}^{a} f(x) dx = 2 ∫_{0}^{a} f(x) dx. ■

Suppose that a given function f : R → R is Riemann integrable on the
interval [−L, L]. We wish to expand the function f in a (trigonometric)
series

(1.1.1)    f(x) = a_0/2 + ∑_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ].

Each term of the above series has period 2L, so if the sum of the series
exists, it will be a function of period 2L. With this expansion there are three
fundamental questions to be addressed:
(a) What values do the coefficients a0 , an , bn have?

(b) If the appropriate values are assigned to the coefficients, does the se-
ries converge at some points x ∈ R?

(c) If the trigonometric series does converge at a point x, does it actually


represent the given function f (x)?
The answer to the first question can be found easily by using the following
important result from calculus, known as the orthogonality property. If m
and n are any integers, then the following is true:

    ∫_{-L}^{L} sin(mπx/L) cos(nπx/L) dx = 0,

    ∫_{-L}^{L} cos(mπx/L) cos(nπx/L) dx = { 0, m ≠ n;  L, m = n ≠ 0 },

    ∫_{-L}^{L} sin(mπx/L) sin(nπx/L) dx = { 0, m ≠ n;  L, m = n ≠ 0 }.
6 1. FOURIER SERIES

The above orthogonality relations suggest that we multiply both sides of the
equation (1.1.1) by cos(nπx/L) and sin(nπx/L), respectively, and integrate
over [−L, L]. If for a moment we ignore convergence issues (if term-by-term
integration of the above series is allowed), we find

(1.1.2)    a_n = (1/L) ∫_{-L}^{L} f(x) cos(nπx/L) dx,  n = 0, 1, 2, . . . ,

           b_n = (1/L) ∫_{-L}^{L} f(x) sin(nπx/L) dx,  n = 1, 2, . . . .

Therefore, if the proposed equality (1.1.1) holds, then the coefficients an


and bn must be chosen according to the formula (1.1.2).
In general, the answers to both questions (b) and (c) are “no.” Ever since
Fourier’s time, enormous literature has been published addressing these two
questions. We will see later in this chapter that the convergence or divergence
of the Fourier series at a particular point depends only on the behavior of the
function in an arbitrarily small neighborhood of the point.
The above discussion naturally leads us to the following fundamental defi-
nition.
Definition 1.1.4. Let f : R → R be a 2L periodic function which is inte-
grable on [−L, L]. Let the Fourier coefficients a_n and b_n be defined by the
above formulas (1.1.2). The infinite series

(1.1.3)    S f(x) = a_0/2 + ∑_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

is called the Fourier series of the function f .

Notice that even though the coefficients a_n and b_n are well-defined numbers,
there is no guarantee that the resulting Fourier series converges, and even if
it converges, there is no guarantee that it converges to the original function
f . For these reasons, we use the symbol S f(x) instead of f(x) when writing
a Fourier series.
The notion introduced in the next definition will be very useful when we
discuss the key issue of convergence of a Fourier series.
Definition 1.1.5. Let N be a natural number. The N th partial sum of the
Fourier series of a function f is the trigonometric polynomial
$$S_N f(x) = \frac{a_0}{2} + \sum_{n=1}^{N}\Big[a_n \cos\Big(\frac{n\pi}{L}x\Big) + b_n \sin\Big(\frac{n\pi}{L}x\Big)\Big],$$
where an and bn are the Fourier coefficients of the function f .

A useful observation for computational purposes is the following result.


1.1 FOURIER SERIES OF PERIODIC FUNCTIONS 7

Lemma 1.1.4. If f is an even function, then

$$a_n = \frac{2}{L}\int_{0}^{L} f(x)\cos\Big(\frac{n\pi}{L}x\Big)\,dx, \quad n = 0, 1, 2, \ldots \qquad \text{and} \qquad b_n = 0, \quad n = 1, 2, \ldots$$

If f is an odd function, then

$$b_n = \frac{2}{L}\int_{0}^{L} f(x)\sin\Big(\frac{n\pi}{L}x\Big)\,dx, \quad n = 1, 2, \ldots \qquad \text{and} \qquad a_n = 0, \quad n = 0, 1, \ldots.$$

Proof. The lemma follows easily from Lemma 1.1.1 and Lemma 1.1.3. ■

Remark. For normalization reasons, the first Fourier coefficient is written as a0/2, i.e., with the factor 1/2. Also notice that this first coefficient a0/2 is nothing but the mean (average) of the function f on the interval [−L, L].
In the next section we will show that every periodic function of x satisfying
certain very general conditions can be represented in the form (1.1.1), that
is, as a Fourier series.
Now let us take several examples.
Example 1.1.1. Let f : R → R be the 2π-periodic function which on the
interval [−π, π] is defined by
$$f(x) = \begin{cases} 0, & -\pi \le x < 0 \\ 1, & 0 \le x < \pi. \end{cases}$$
Find the Fourier series of the function f .
Solution. Using the formulas (1.1.2) for the Fourier coefficients in Definition
1.1.4 we have
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\,dx + \frac{1}{\pi}\int_{0}^{\pi} 1\,dx = 1.$$

For n = 1, 2, . . . we have
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\cdot\cos nx\,dx + \frac{1}{\pi}\int_{0}^{\pi} 1\cdot\cos nx\,dx = 0 + \frac{1}{n\pi}\big[\sin(n\pi) - \sin(0)\big] = 0;$$


and
$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\cdot\sin nx\,dx + \frac{1}{\pi}\int_{0}^{\pi} 1\cdot\sin nx\,dx = 0 - \frac{1}{n\pi}\big[\cos n\pi - \cos 0\big] = \frac{1 - (-1)^n}{n\pi}.$$
Therefore the Fourier series of the function is given by
$$S f(x) = \frac{1}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{2n-1}\,\sin\big((2n-1)x\big).$$

Figure 1.1.1 shows the graphs of the function f , together with the partial
sums SN f (x) of the function f taking N = 1 and N = 3 terms.

Figure 1.1.1

Figure 1.1.2 shows the graphs of the function f , together with the partial
sums of the function f taking N = 6 and N = 14 terms.

Figure 1.1.2

From these graphs we see that, as N increases, the partial sums SN f(x) become better approximations to the function f. It appears that the graphs of SN f(x) are approaching the graph of f(x), except at x = 0 and the other integer multiples of π. In other words, it looks like f is equal to the sum of its Fourier series except at the points where f is discontinuous.
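This behavior can be observed numerically as well as graphically. A small Python sketch (illustrative only; the choice of evaluation points is ours) evaluates the partial sums of the series above at a point of continuity, where they approach f, and at the jump x = 0, where every partial sum equals 1/2, the midpoint of the jump:

```python
import math

# Partial sums S_N f for the step function of Example 1.1.1, whose Fourier
# series is 1/2 + (2/pi) * sum sin((2n-1)x)/(2n-1).
def partial_sum(x, N):
    s = 0.5
    for n in range(1, N + 1):
        s += (2 / math.pi) * math.sin((2 * n - 1) * x) / (2 * n - 1)
    return s

x0 = 1.0                                   # a point of continuity, f(x0) = 1
errs = [abs(partial_sum(x0, N) - 1.0) for N in (5, 50, 500)]
assert errs[2] < errs[0]                   # the approximation improves with N
assert errs[2] < 0.01
assert abs(partial_sum(0.0, 40) - 0.5) < 1e-12   # midpoint of the jump at x = 0
```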
Example 1.1.2. Let f : R → R be the 2π-periodic function which on
[−π, π) is defined by
{
−1, −π ≤ x < 0
f (x) =
1, 0 ≤ x < π.
Find the Fourier series of the function f .
Solution. The function is an odd function, hence by Lemma 1.1.1 its Fourier
cosine coefficients an are all zero and for n = 1, 2, · · · we have
$$b_n = \frac{2}{\pi}\int_{0}^{\pi} f(x)\sin(nx)\,dx = \frac{2}{\pi}\int_{0}^{\pi} \sin nx\,dx = \frac{2}{n\pi}\big[-\cos n\pi + 1\big] = \frac{2}{n\pi}\big[-(-1)^n + 1\big].$$
Therefore the Fourier series of f(x) is
$$S f(x) = \sum_{n=1}^{\infty} b_n \sin nx = \sum_{n=1}^{\infty} b_{2n-1}\,\sin\big((2n-1)x\big) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{2n-1}\,\sin\big((2n-1)x\big).$$

Figure 1.1.3 shows the graphs of the function f , together with the partial
sums
$$S_N f(x) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\big((2n-1)x\big)$$
of the function f , taking, respectively, N = 1 and N = 3 terms.

Figure 1.1.3

Figure 1.1.4 shows the graphs of the function f , together with the partial
sums of the function f , taking, respectively, N = 10 and N = 25 terms.

Figure 1.1.4

Figure 1.1.5 shows the graphs of the function f , together with the partial
sum SN f of the function f , taking N = 50 terms.

Figure 1.1.5

And again we see that, as N increases, SN f(x) becomes a better approximation to the function f. It appears that the graphs of SN f(x) are approaching the graph of f(x), except at x = 0 or where x is an integer
multiple of π. In other words, it looks as if f is equal to the sum of its Fourier
series except at the points where f is discontinuous.
Example 1.1.3. Let f : R → R be the 2π-periodic function defined by
f (x) = | sin(x)|, x ∈ [−π, π]. Find the Fourier series of the function f .
Solution. This function is obviously an even function, hence by Lemma 1.1.1 its Fourier sine coefficients bn are all zero. For n = 1 we have that
$$a_1 = \frac{2}{\pi}\int_{0}^{\pi} f(x)\cos x\,dx = \frac{2}{\pi}\int_{0}^{\pi} \sin x\cos x\,dx = 0.$$

For n = 0, 2, 3, . . . we have
$$a_n = \frac{2}{\pi}\int_{0}^{\pi} f(x)\cos nx\,dx = \frac{2}{\pi}\int_{0}^{\pi} \sin x\cos nx\,dx = \frac{1}{\pi}\int_{0}^{\pi}\big[\sin\big((n+1)x\big) - \sin\big((n-1)x\big)\big]\,dx$$
$$= \frac{1}{\pi}\Big[\frac{1 - \cos\big((n+1)\pi\big)}{n+1} - \frac{1 - \cos\big((n-1)\pi\big)}{n-1}\Big] = \begin{cases} 0, & n \text{ odd} \\ \dfrac{1}{\pi}\Big[\dfrac{2}{n+1} - \dfrac{2}{n-1}\Big], & n \text{ even} \end{cases} = \begin{cases} 0, & n \text{ odd} \\ -\dfrac{4}{(n^2-1)\pi}, & n \text{ even.} \end{cases}$$

Therefore the Fourier series of f(x) is
$$S f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nx = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_{2n}\cos 2nx = \frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2-1}\,\cos 2nx.$$

Figure 1.1.6 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 1 and N = 3 terms.

Figure 1.1.6

Figure 1.1.7 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 10 and N = 30
terms.

Figure 1.1.7

Remark. A useful observation for computing the Fourier coefficients is the


following: instead of integrating from −L to L in (1.1.2), we can integrate
over any interval of length 2L. This follows from Theorem 1.1.1 and the fact
that the integrands are all 2L-periodic. (See Exercise 1 of this section.)
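The shift-invariance in the Remark can be checked numerically; the following Python sketch (our own illustration, with a simple midpoint rule) computes b1 of the square wave of Example 1.1.2 over [−π, π] and over [0, 2π] and confirms both equal 4/π:

```python
import math

def f(x):
    """2*pi-periodic square wave: -1 on [-pi, 0), +1 on [0, pi)."""
    x = math.fmod(x, 2 * math.pi)
    if x < -math.pi: x += 2 * math.pi
    if x >= math.pi: x -= 2 * math.pi
    return -1.0 if x < 0 else 1.0

def b1_over(a, steps=100000):
    """b_1 = (1/pi) * integral of f(x) sin(x) over [a, a + 2*pi]."""
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        x = a + (k + 0.5) * h
        total += f(x) * math.sin(x) * h
    return total / math.pi

exact = 4 / math.pi                 # b_1 of the square wave
assert abs(b1_over(-math.pi) - exact) < 1e-3
assert abs(b1_over(0.0) - exact) < 1e-3
```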
Example 1.1.4. Let f : R → R be the 2π-periodic function defined on
[−π, π] by {
0, −π ≤ x < 0
f (x) =
x, 0 ≤ x < π.
Find the Fourier series of the function f .
Solution. The Fourier coefficients are
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\,dx + \frac{1}{\pi}\int_{0}^{\pi} x\,dx = \frac{1}{\pi}\Big(0 + \frac{\pi^2}{2}\Big) = \frac{\pi}{2}.$$

For n = 1, 2, . . . by the integration by parts formula we have
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0} 0\cdot\cos nx\,dx + \frac{1}{\pi}\int_{0}^{\pi} x\cos nx\,dx$$
$$= 0 + \frac{1}{\pi}\Big[\frac{x\sin nx}{n} + \frac{\cos nx}{n^2}\Big]_{x=0}^{x=\pi} = \frac{1}{n^2\pi}\big[(-1)^n - 1\big].$$
Similarly, for n = 1, 2, . . . we obtain
(−1)n+1
bn = .
n
Therefore the Fourier series of f(x) is
$$S f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_{2n-1}\cos\big((2n-1)x\big) + \sum_{n=1}^{\infty} b_n \sin nx = \frac{\pi}{4} - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}\cos\big((2n-1)x\big) + \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin nx.$$

Figure 1.1.8 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 1 and N = 3 terms.

Figure 1.1.8

Figure 1.1.9 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 10 and N = 30
terms.

Figure 1.1.9

Figure 1.1.10 shows the graphs of the function f together with the partial
sum SN f of the function f , taking N = 50 terms.

Figure 1.1.10

Example 1.1.5. Let f : R → R be the 2π-periodic function defined on


[−π, π] by
$$f(x) = \begin{cases} \dfrac{\pi}{2} + x, & -\pi \le x < 0 \\ \dfrac{\pi}{2} - x, & 0 \le x < \pi. \end{cases}$$
Find the Fourier series of the function f .
Solution. The Fourier coefficients are
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{\pi}\int_{-\pi}^{0}\Big(\frac{\pi}{2} + x\Big)dx + \frac{1}{\pi}\int_{0}^{\pi}\Big(\frac{\pi}{2} - x\Big)dx = \frac{1}{\pi}\big(0 + 0\big) = 0.$$
For n = 1, 2, . . . by the integration by parts formula we have
$$a_n = \frac{1}{\pi}\Big\{\int_{-\pi}^{0}\Big(\frac{\pi}{2} + x\Big)\cos nx\,dx + \int_{0}^{\pi}\Big(\frac{\pi}{2} - x\Big)\cos nx\,dx\Big\}$$
$$= \frac{1}{\pi}\Big\{\Big[\Big(\frac{\pi}{2} + x\Big)\frac{\sin nx}{n} + \frac{\cos nx}{n^2}\Big]_{x=-\pi}^{x=0} + \Big[\Big(\frac{\pi}{2} - x\Big)\frac{\sin nx}{n} - \frac{\cos nx}{n^2}\Big]_{x=0}^{x=\pi}\Big\} = \frac{2}{n^2\pi}\big(1 - \cos n\pi\big).$$
The computation of the coefficients bn is like that of the an, and we find that bn = 0 for n = 1, 2, . . .. Hence the Fourier series of f(x) is
$$S f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{\cos\big((2n-1)x\big)}{(2n-1)^2}.$$

Figure 1.1.11 shows the graphs of the function f , together with the partial
sums SN f of the function f , taking, respectively, N = 1 and N = 3 terms.

Figure 1.1.11

Figure 1.1.12

Figure 1.1.12 shows the graphs of the function f , together with the partial
sum SN f of the function f , taking N = 10 terms.

So far we have discussed Fourier series whose terms are the trigonometric
functions sine and cosine. An alternative, and more convenient, approach to
Fourier series is to use complex exponentials. There are several reasons for
doing this. One of the reasons is that this is a more compact form, but the
main reason is that this complex form of a Fourier series will allow us in the
next chapter to introduce important extensions—the Fourier transforms.
Let f : R → R be a 2L-periodic function which is integrable on [−L, L]. First let us introduce the notation
$$\omega_n = \frac{n\pi}{L}.$$

In view of the formulas
$$\cos\alpha = \frac{e^{i\alpha} + e^{-i\alpha}}{2}, \qquad \sin\alpha = \frac{e^{i\alpha} - e^{-i\alpha}}{2i},$$
easily obtained from Euler's formula
$$e^{i\alpha} = \cos\alpha + i\sin\alpha,$$

the Fourier series (1.1.3) can be written in the following form:
$$\text{(1.1.4)} \qquad S f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i\omega_n x},$$

where
$$\text{(1.1.5)} \qquad c_n = \frac{1}{2L}\int_{-L}^{L} f(x)e^{-i\omega_n x}\,dx, \qquad n = 0, \pm 1, \pm 2, \ldots.$$

This is the complex form of the Fourier series of the function f on the interval
[−L, L].
Remark. Notice that even though the complex Fourier coefficients cn are
generally complex numbers, the summation (1.1.4) always gives a real valued
function S f (x). Also notice that the Fourier coefficient c0 is the mean of
the function f on the interval [−L, L].
We need to emphasize that the real Fourier series (1.1.3) and the complex
Fourier series (1.1.4) are just two different ways of writing the same series.
Using the relations an = cn + c−n and bn = i(cn − c−n ) we can change
the complex form of the Fourier series (1.1.4) to the real form (1.1.3), and
vice versa. Also note that for an even function all coefficients cn will be real numbers, while for an odd function they are all purely imaginary.
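The relations an = cn + c−n and bn = i(cn − c−n) are easy to verify numerically. The sketch below (illustrative only; the sample function f(x) = x and the midpoint quadrature are our assumptions) computes the complex coefficients of f(x) = x on (−π, π) and recovers the known real coefficients an = 0, bn = 2(−1)^{n+1}/n:

```python
import cmath
import math

def c(n, steps=20000):
    """Complex Fourier coefficient c_n of f(x) = x on (-pi, pi), L = pi."""
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        total += x * cmath.exp(-1j * n * x) * h
    return total / (2 * math.pi)

for n in range(1, 4):
    a_n = c(n) + c(-n)               # should be 0 (odd function)
    b_n = 1j * (c(n) - c(-n))        # should be 2*(-1)**(n+1)/n
    assert abs(a_n) < 1e-6
    assert abs(b_n - 2 * (-1) ** (n + 1) / n) < 1e-3
```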
If m and n are any integers, then from the formulas
$$\int_{-L}^{L} e^{i\omega_n x}\,\overline{e^{i\omega_m x}}\,dx = \int_{-L}^{L} e^{i(\omega_n - \omega_m)x}\,dx = \begin{cases} 0, & m \neq n \\ 2L, & m = n \end{cases}$$

it follows that the following set of functions
$$\text{(1.1.6)} \qquad \big\{1,\, e^{i\omega_1 x},\, e^{-i\omega_1 x},\, e^{i\omega_2 x},\, e^{-i\omega_2 x},\, \ldots,\, e^{i\omega_n x},\, e^{-i\omega_n x},\, \ldots\big\}$$
has the orthogonality property.
Example 1.1.6. Let f : R → R be the 2π-periodic function defined on
[−π, π] by
$$f(x) = \begin{cases} -1, & -\pi \le x < 0 \\ 1, & 0 \le x < \pi. \end{cases}$$
Find the complex Fourier series of the function f .
Solution. In this problem, we have L = π and ωn = n. Therefore,
$$c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{2\pi}\int_{-\pi}^{0}(-1)\,dx + \frac{1}{2\pi}\int_{0}^{\pi} 1\,dx = 0.$$

For n ̸= 0 we have
$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = -\frac{1}{2\pi}\int_{-\pi}^{0} e^{-inx}\,dx + \frac{1}{2\pi}\int_{0}^{\pi} e^{-inx}\,dx$$
$$= -\frac{i}{2n\pi}\big(1 - e^{in\pi}\big) + \frac{i}{2n\pi}\big(e^{-in\pi} - 1\big) = \frac{i}{n\pi}\big(\cos(n\pi) - 1\big) = \frac{i}{n\pi}\big((-1)^n - 1\big).$$


Therefore, the complex Fourier series is
$$S f(x) = -\frac{2i}{\pi}\sum_{n=-\infty}^{\infty}\frac{e^{i(2n-1)x}}{2n-1}.$$

Exercises for Section 1.1.

1. Let f be a function defined on the interval [−a, a], a > 0. Show that:

(a) If f is even, then $\displaystyle\int_{-a}^{a} f(x)\,dx = 2\int_{0}^{a} f(x)\,dx$.

(b) If f is odd, then $\displaystyle\int_{-a}^{a} f(x)\,dx = 0$.

2. Let f be a function whose domain is the interval [−L, L]. Show that
f can be expressed as a sum of an even and an odd function.

3. Let f be an L-periodic function and a any number. Show that
$$\int_{0}^{L} f(x)\,dx = \int_{a}^{a+L} f(x)\,dx.$$

4. Show that the following results are true.

(a) If f1 , f2 , . . . fn are L-periodic functions and c1 , c2 , . . . , cn are any


constants, then c1 f1 + c2 f2 + . . . + cn fn is also L-periodic.

(b) If f and g are L-periodic functions, so is their product f g.

(c) If f and g are L-periodic functions, so is their quotient f /g (at the points where g does not vanish).


(d) If f is L-periodic and a > 0, then f(x/a) has period aL and f(ax) has period L/a.

(e) If f is L-periodic and g is any function (not necessarily periodic),


then the composition g ◦ f is also L-periodic.

5. Find the Fourier series of the following functions (the functions are all
understood to be periodic):

(a) f (x) = x, −π < x < π.



(b) f (x) = | sin(x)|, −π < x < π.

(c) f (x) = π − x, 0 < x < 2π.

(d) f (x) = x(π − |x|), −π < x < π.

(e) f (x) = eax , 0 < x < 2π.

(f) f (x) = eax , −π < x < π.


(g) $f(x) = \begin{cases} x, & -\frac{3}{2} \le x < \frac{1}{2} \\ 1 - x, & \frac{1}{2} \le x \le \frac{3}{2}. \end{cases}$

(h) $f(x) = \begin{cases} 0, & -2 \le x < 0 \\ x, & 0 \le x \le 1 \\ 1, & 1 < x \le 2. \end{cases}$

6. Complete the proof of Lemma 1.1.

7. Show that the Fourier series (1.1.3) can be written in the following form:
$$S f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} A_n \sin\Big(\frac{n\pi}{L}x + \alpha_n\Big),$$
where
$$A_n = \sqrt{a_n^2 + b_n^2}, \qquad \tan\alpha_n = \frac{a_n}{b_n}.$$
bn
8. Find the complex Fourier series for the following functions (the func-
tions are all understood to be periodic):

(a) f (x) = |x|, −π < x < π.

(b) f (x) = x, 0 < x < 2.


(c) $f(x) = \begin{cases} 0, & -\frac{\pi}{2} < x < 0 \\ 1, & 0 < x < \frac{\pi}{2}. \end{cases}$

9. Show that if a is any real number, then the Fourier series of the function f(x) = e^{ax}, 0 < x < 2π, is
$$S(f, x) = \frac{e^{2a\pi} - 1}{2\pi}\sum_{n=-\infty}^{\infty}\frac{1}{a - in}\,e^{inx}.$$

10. Show that if a is any real number, then the Fourier series of the function f(x) = e^{ax}, −π < x < π, is
$$S(f, x) = \frac{e^{a\pi} - e^{-a\pi}}{2\pi}\sum_{n=-\infty}^{\infty}\frac{(-1)^n}{a - in}\,e^{inx}.$$

1.2 Convergence of Fourier Series.


In this section we will show that the Fourier series S f(x) of a periodic function f converges, under certain reasonably general conditions, to the function f. For convenience only, we will consider the interval [−π, π] and assume that the function f : R → R is 2π-periodic. We begin this section by deriving an important estimate on the Fourier coefficients.
Theorem 1.2.1 (Bessel’s Inequality). If f : R → R is a 2π-periodic and
Riemann integrable function on the interval [−π, π] and the complex Fourier
coefficients ck are defined by (1.1.5), then

∑ ∫π
1
|ck | ≤
2
|f (x)|2 dx.
−∞

−π

Proof. Let n be any positive integer. Using |z|² = z z̄ for any complex number z (z̄ is the complex conjugate of z) and the fact that f is real valued, we have
$$0 \le \Big|f(x) - \sum_{k=-n}^{n} c_k e^{ikx}\Big|^2 = \Big(f(x) - \sum_{k=-n}^{n} c_k e^{ikx}\Big)\Big(f(x) - \sum_{k=-n}^{n} \overline{c_k}\,e^{-ikx}\Big)$$
$$= |f(x)|^2 - \sum_{k=-n}^{n}\big[c_k f(x)e^{ikx} + \overline{c_k}\,f(x)e^{-ikx}\big] + \sum_{k=-n}^{n}\sum_{j=-n}^{n} c_k \overline{c_j}\,e^{ikx}e^{-ijx}.$$
Dividing both sides of the above equation by 2π, integrating over [−π, π] and taking into account formula (1.1.5) for the complex Fourier coefficients, the orthogonality property of the set (1.1.6) implies
$$0 \le \frac{1}{2\pi}\int_{-\pi}^{\pi} |f(x)|^2\,dx - \sum_{k=-n}^{n}\big[c_k\overline{c_k} + \overline{c_k}\,c_k\big] + \sum_{k=-n}^{n} c_k\overline{c_k} = \frac{1}{2\pi}\int_{-\pi}^{\pi} |f(x)|^2\,dx - \sum_{k=-n}^{n} |c_k|^2.$$

Letting n → ∞ in the above inequality we obtain the result. ■

Remark. Based on the relations between the complex Fourier coefficients cn and the real Fourier coefficients an and bn, Bessel's inequality can also be stated in terms of an and bn:
$$\frac{a_0^2}{4} + \frac{1}{2}\sum_{n=1}^{\infty}\big(a_n^2 + b_n^2\big) \le \frac{1}{2\pi}\int_{-\pi}^{\pi} |f(x)|^2\,dx.$$
−π

Later in this chapter we will see that Bessel's inequality is actually an equality. But for now, this inequality implies that the series
$$\sum_{n=-\infty}^{\infty} |c_n|^2, \qquad \sum_{n=0}^{\infty} a_n^2 \qquad \text{and} \qquad \sum_{n=1}^{\infty} b_n^2$$

are all convergent, and as a consequence we obtain the following important


result about the Fourier coefficients:

Corollary 1.2.1 (Riemann–Lebesgue Lemma). If f is a 2π-periodic and Riemann integrable function on the interval [−π, π] and cn (an and bn) are the Fourier coefficients of f, then
$$\lim_{|n|\to\infty} c_n = 0, \qquad \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = 0.$$

The following terminology is needed in order to state and prove our con-
vergence results.

Definition 1.2.1. A function f is called piecewise continuous on an interval


[a, b] if it is continuous everywhere on that interval except at finitely many
points c1 , c2 , . . . , cn ∈ [a, b] and the left-hand and right-hand limits

$$f(c_j^-) = \lim_{h\to 0^+} f(c_j - h) \qquad \text{and} \qquad f(c_j^+) = \lim_{h\to 0^+} f(c_j + h), \qquad j = 1, 2, \ldots, n$$

of f exist and they are finite.


A function f is called piecewise smooth on an interval [a, b] if f and its
first derivative f ′ are piecewise continuous functions on [a, b].
A function f is called piecewise continuous (piecewise smooth) on R if it
is piecewise continuous (piecewise smooth) on every bounded interval.

Representative graphs of piecewise continuous and piecewise smooth func-


tions are provided in the next several examples.

Figure 1.2.1

Example 1.2.1. The function f : [−2, 2] → R defined by
$$f(x) = \begin{cases} 3, & -2 < x < 0 \\ 1, & 0 < x \le 2 \\ 2, & x = -2,\, 0 \end{cases}$$

is piecewise continuous on [−2, 2] since f is discontinuous only at the points


c = −2, 0. See Figure 1.2.1.

Example 1.2.2. The function f : [−2, 1] → R defined by
$$f(x) = \begin{cases} -x, & -2 \le x \le 0 \\ x^2, & 0 < x \le 1 \end{cases}$$
is piecewise smooth on [−2, 1] since it is continuous on that interval and fails to be differentiable only at the point c = 0. See Figure 1.2.2.

Figure 1.2.2

Example 1.2.3. The function f : [−4, 4] → R defined by
$$f(x) = \begin{cases} -x, & -4 \le x < 1 \\ -1, & 1 < x \le 3 \\ x - 4, & 3 < x \le 4 \end{cases}$$
is piecewise smooth on [−4, 4] since it is continuous on that interval, and it is not differentiable at the points c = 1, 3. See Figure 1.2.3.

Figure 1.2.3

Example 1.2.4. The function f defined by $f(x) = \sqrt[3]{x}$ (see Figure 1.2.4) is piecewise continuous but not piecewise smooth on any interval containing x = 0, since
$$f'(0^-) = f'(0^+) = \infty.$$

Figure 1.2.4

Before addressing the question of convergence of Fourier series we address


the following question of uniqueness of Fourier series:

If two functions f and g have the same Fourier coefficients:
$$\text{(1.2.1)} \qquad \int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = \int_{-\pi}^{\pi} g(x)e^{-inx}\,dx, \qquad n = 0, \pm 1, \pm 2, \ldots,$$

are the functions necessarily identical? In other words, is a function uniquely


determined by its Fourier coefficients? The answer is affirmative and for its
proof the interested reader can consult the book by G. B. Folland [6].
Theorem 1.2.2 (Uniqueness Theorem). Let f and g be piecewise con-
tinuous on the interval −π ≤ x ≤ π and satisfy (1.2.1), i.e., the two functions
have the same Fourier coefficients. Then f (x) = g(x) for all x ∈ [−π, π]
except perhaps at points of discontinuity.

Let us recall the definition of a partial sum of a Fourier series. If N is a natural number, then we defined the N th partial sum of the Fourier series of a 2π-periodic function f by
$$\text{(1.2.2)} \qquad S_N f(x) = \sum_{n=-N}^{N} c_n e^{inx} = \frac{1}{2}a_0 + \sum_{n=1}^{N}\big[a_n \cos nx + b_n \sin nx\big].$$

If we substitute cn from formula (1.1.5) into (1.2.2) we have
$$S_N f(x) = \sum_{n=-N}^{N}\Big(\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-iny} f(y)\,dy\Big)e^{inx} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Big(\sum_{n=-N}^{N} e^{in(x-y)}\Big)f(y)\,dy.$$

Definition 1.2.2. If N is a natural number, then the function
$$\text{(1.2.3)} \qquad D_N(x) = \frac{1}{2\pi}\sum_{n=-N}^{N} e^{inx}$$
is called the N th Dirichlet kernel.

Notice that DN(x) is a 2π-periodic function. By a change of variable and the periodicity of the function DN(x − y)f(y) it follows that
$$\text{(1.2.4)} \qquad S_N f(x) = \int_{-\pi}^{\pi} D_N(x-y)f(y)\,dy = \int_{-\pi}^{\pi} D_N(y)f(x-y)\,dy.$$

We discuss below some properties of the kernel DN (x) which plays a crucial
role in obtaining convergence results for Fourier series.

Lemma 1.2.1. The Dirichlet kernel DN satisfies the following properties:

(a) $\displaystyle\int_{-\pi}^{\pi} D_N(x)\,dx = 1$,

(b) $\displaystyle D_N(x) = \frac{1}{2\pi}\,\frac{\sin\big(N + \frac{1}{2}\big)x}{\sin\frac{x}{2}}$.

Proof. To prove (a) we use the definition of DN(x). From (1.2.3) we have
$$D_N(x) = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{n=1}^{N}\cos nx,$$
and therefore
$$\int_{-\pi}^{\pi} D_N(x)\,dx = \Big[\frac{x}{2\pi} + \frac{1}{\pi}\sum_{n=1}^{N}\frac{\sin nx}{n}\Big]_{x=-\pi}^{x=\pi} = 1.$$

Using the geometric sum formula, from (1.2.3) for x ≠ 0 we have
$$D_N(x) = \frac{1}{2\pi}\,e^{-iNx}\sum_{n=0}^{2N} e^{inx} = \frac{1}{2\pi}\,e^{-iNx}\,\frac{e^{i(2N+1)x} - 1}{e^{ix} - 1} = \frac{1}{2\pi}\,\frac{e^{i(N+1)x} - e^{-iNx}}{e^{ix} - 1}$$
$$= \frac{1}{2\pi}\,\frac{e^{i(N+\frac{1}{2})x} - e^{-i(N+\frac{1}{2})x}}{e^{i\frac{x}{2}} - e^{-i\frac{x}{2}}} = \frac{1}{2\pi}\,\frac{\sin\big(N + \frac{1}{2}\big)x}{\sin\frac{x}{2}},$$
which establishes (b). ■

From (a) in Lemma 1.2.1 and the evenness of DN it follows that
$$\text{(1.2.5)} \qquad \int_{-\pi}^{0} D_N(x)\,dx = \int_{0}^{\pi} D_N(x)\,dx = \frac{1}{2}.$$

Plots of the kernel DN (x) for several values of N are presented in Figure
1.2.5.
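The closed form (b) of Lemma 1.2.1 can be verified numerically against the defining sum (1.2.3); a small Python check (illustrative only, with arbitrarily chosen test points):

```python
import math

def dirichlet_sum(x, N):
    """D_N(x) from the definition, in the real form 1/(2pi) + (1/pi) sum cos(nx)."""
    return 1 / (2 * math.pi) + sum(math.cos(n * x) for n in range(1, N + 1)) / math.pi

def dirichlet_closed(x, N):
    """Closed form (b): sin((N + 1/2)x) / (2*pi*sin(x/2)), valid for x != 2*pi*k."""
    return math.sin((N + 0.5) * x) / (2 * math.pi * math.sin(x / 2))

for N in (3, 8, 12):
    for x in (0.3, 1.0, 2.5, -1.7):
        assert abs(dirichlet_sum(x, N) - dirichlet_closed(x, N)) < 1e-12
```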
Now we are ready to state and prove the main convergence theorem.
Theorem 1.2.3. If f is a 2π-periodic and piecewise smooth function on the
real line R, then
$$\lim_{N\to\infty} S_N f(x) = \frac{f(x^+) + f(x^-)}{2}$$

Figure 1.2.5 (N = 3, 8, 12)

at every point x. Hence
$$\lim_{N\to\infty} S_N f(x) = f(x)$$
at every point x of continuity of the function f.


Proof. Using the integral representation (1.2.4) for SN f, Lemma 1.2.1 and formula (1.2.5) for the Dirichlet kernel DN(y), we have
$$\begin{aligned}
S_N f(x) - \frac{f(x^-) + f(x^+)}{2} &= \int_{-\pi}^{\pi} D_N(y)f(x-y)\,dy - \frac{f(x^-) + f(x^+)}{2} \\
&= \int_{-\pi}^{0} D_N(y)\big[f(x-y) - f(x^+)\big]\,dy + \int_{0}^{\pi} D_N(y)\big[f(x-y) - f(x^-)\big]\,dy \\
&= \int_{0}^{\pi} D_N(y)\big[f(x-y) - f(x^-)\big]\,dy + \int_{0}^{\pi} D_N(y)\big[f(x+y) - f(x^+)\big]\,dy \\
&= \int_{0}^{\pi} D_N(y)\big[f(x-y) - f(x^-) + f(x+y) - f(x^+)\big]\,dy \\
&= \frac{1}{2\pi}\int_{0}^{\pi} \frac{\sin\big(N + \frac{1}{2}\big)y}{\sin\frac{y}{2}}\,\big[f(x-y) - f(x^-) + f(x+y) - f(x^+)\big]\,dy \\
&= \frac{1}{\pi}\int_{0}^{\pi} \sin\Big(N + \frac{1}{2}\Big)y\;\frac{\frac{y}{2}}{\sin\frac{y}{2}}\,\Big[\frac{f(x-y) - f(x^-)}{y} + \frac{f(x+y) - f(x^+)}{y}\Big]\,dy,
\end{aligned}$$
where in the third equality we substituted y → −y in the first integral and used the evenness of DN(y).

For a fixed value x define the function F(y) by
$$F(y) = \frac{\frac{y}{2}}{\sin\frac{y}{2}}\,\Big[\frac{f(x-y) - f(x^-)}{y} + \frac{f(x+y) - f(x^+)}{y}\Big].$$
It is easy to see that F(y) is an odd function of y on [−π, π]. We claim that for each fixed value of x this function is piecewise continuous on [−π, π]. Indeed, we need to check the behavior of F(y) only at the point y = 0. Since f is a piecewise smooth function, we have
$$\lim_{y\to 0^+} F(y) = f'(x^+) - f'(x^-),$$
where f′(x⁺) and f′(x⁻) denote the one-sided derivatives of f at x; in particular the limit exists and is finite.

Therefore,
$$S_N f(x) - \frac{1}{2}\big[f(x^-) + f(x^+)\big] = \frac{1}{\pi}\int_{0}^{\pi} F(y)\sin\Big(N + \frac{1}{2}\Big)y\,dy$$
$$= \frac{1}{\pi}\int_{0}^{\pi}\Big\{F(y)\cos\frac{y}{2}\Big\}\sin(Ny)\,dy + \frac{1}{\pi}\int_{0}^{\pi}\Big\{F(y)\sin\frac{y}{2}\Big\}\cos(Ny)\,dy = B_N + A_N,$$
where BN and AN are the N th Fourier coefficients of the piecewise continuous functions
$$\frac{1}{2}F(y)\cos\frac{y}{2}, \qquad \frac{1}{2}F(y)\sin\frac{y}{2},$$
respectively. From Corollary 1.2.1 (Riemann–Lebesgue Lemma) it follows that BN → 0 and AN → 0 as N → ∞. Therefore
$$S_N f(x) - \frac{f(x^-) + f(x^+)}{2} = B_N + A_N \to 0 \quad \text{as } N \to \infty. \ \blacksquare$$

This result, besides its theoretical significance, provides a useful method


for finding the sums of certain numerical series.
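The behavior at a jump predicted by Theorem 1.2.3 can also be observed numerically. The sketch below (illustrative only; it uses the expansion of e^x on (−π, π) worked out in Example 1.2.7, evaluated at the jump x = π, where cos nπ = (−1)^n and sin nπ = 0) checks that the partial sums approach the jump midpoint (e^π + e^{−π})/2 = cosh π:

```python
import math

def SN_at_pi(N):
    """Partial sum of the Fourier series of e^x on (-pi, pi), evaluated at x = pi."""
    s = math.sinh(math.pi) / math.pi
    for n in range(1, N + 1):
        s += (2 * math.sinh(math.pi) / math.pi) / (n * n + 1)
    return s

midpoint = math.cosh(math.pi)        # (f(pi-) + f(pi+)) / 2
errs = [abs(SN_at_pi(N) - midpoint) for N in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]   # the error decreases monotonically
assert errs[2] < 1e-2
```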
Example 1.2.5. Let f : R → R be the 2π-periodic function which on
[−π, π] is defined by
{
0, −π ≤ x < 0
f (x) =
sin x, 0 ≤ x ≤ π.

Find the Fourier series of f and find the sum of the series for x = kπ, k ∈ Z.
Plot f and several partial sums of the Fourier series of f .

Solution. The Fourier series of f is given by
$$S f(x) = \frac{1}{\pi} + \frac{1}{2}\sin x - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1}\cos(2nx).$$
Since f is continuous on R, by Theorem 1.2.3 we have
$$f(x) = \frac{1}{\pi} + \frac{1}{2}\sin x - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1}\cos(2nx)$$
for every x ∈ R. In particular, for x = kπ, k ∈ Z we have
$$f(k\pi) = 0 = \frac{1}{\pi} - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1},$$
and after rearrangement
$$\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1} = \frac{1}{2}.$$
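This value of the sum can also be confirmed directly, since the series telescopes: 1/(4n² − 1) = (1/2)(1/(2n − 1) − 1/(2n + 1)), so the N th partial sum is (1/2)(1 − 1/(2N + 1)) → 1/2. A quick numerical check (our own illustration):

```python
# Partial sums of sum 1/(4n^2 - 1), compared with the telescoping closed form.
def partial(N):
    return sum(1.0 / (4 * n * n - 1) for n in range(1, N + 1))

for N in (10, 100, 1000):
    assert abs(partial(N) - 0.5 * (1 - 1 / (2 * N + 1))) < 1e-12

assert abs(partial(100000) - 0.5) < 1e-5   # the partial sums approach 1/2
```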
In Figure 1.2.6 we plot f and the partial sums SN f of f for N = 1 and
N = 4.

Figure 1.2.6

In Figure 1.2.7 we plot f and the partial sum SN f of f for N = 10.

Figure 1.2.7

Example 1.2.6. Show that
$$\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1} = \frac{1}{2}.$$
Solution. The Fourier series of the 2π-periodic function f(x) = |sin(x)|, x ∈ [−π, π], is
$$S f(x) = \frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1}\cos 2nx.$$
Since x = 0 is a point of continuity of f, the Fourier series of f at x = 0 converges to f(0) = 0 and thus
$$0 = \frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1}.$$

Example 1.2.7. Examine the behavior of the partial sums of the 2π-periodic function f, which on the interval (−π, π) is given by f(x) = e^x.
Solution. The complex Fourier coefficients cn of the function f are
$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-inx}e^{x}\,dx = \frac{(-1)^n\big(e^{\pi} - e^{-\pi}\big)}{2\pi(1 - in)}.$$
Using the relations between the complex Fourier coefficients and the real Fourier coefficients we find that the real Fourier series of f is
$$S f(x) = \frac{e^{\pi} - e^{-\pi}}{2\pi} + \frac{e^{\pi} - e^{-\pi}}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2 + 1}\big[\cos nx - n\sin nx\big].$$

In Figure 1.2.8 we compare the graphs of SN f (x) for N = 1, 3 with the


graph of the function f .
Figure 1.2.8

Figure 1.2.9

Figure 1.2.10

In Figure 1.2.9 we compare the graphs of SN f (x) for N = 6 and N = 14


with the graph of the function f .
In Figure 1.2.10 we compare the graph of SN f (x) for N = 30 with the
graph of the function f .

In Figure 1.2.11 we compare the graph of SN f (x) for N = 90 with the


graph of the function f in the small interval (49/50π, 51/50π) around the
point x = π.

If we examine Figure 1.2.8, Figure 1.2.9, Figure 1.2.10 and Figure 1.2.11
more closely we notice the following:
1. The partial sums SN f (x) converge “nicely” to f (x) as N → ∞ at
all points x where f is continuous.

2. The partial sums SN f(x) exhibit some strange behavior near the points x where f is discontinuous (has a jump) as N increases. We can see in each case that the functions SN f(x) have a noticeable overshoot just to the right (and left) of x = 0 (with similar behavior near the points x = ±π, ±2π). We also see that these overshoots become narrower and narrower while their magnitudes remain fairly large even

Figure 1.2.11

as N gets larger and larger. This curious behavior is known as the Gibbs phenomenon; it is a manifestation of the non-uniform convergence of the Fourier series, which will be discussed later.
Example 1.2.8. Discuss the Gibbs phenomenon for the 2π-periodic function which on [−π, π] is defined by
$$f(x) = \begin{cases} 0, & x = -\pi \\ -1, & -\pi < x < 0 \\ 0, & x = 0 \\ 1, & 0 < x < \pi \\ 0, & x = \pi. \end{cases}$$
Solution. The Fourier series of this odd function is
$$S f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{2n-1}\,\sin\big((2n-1)x\big).$$
In Figure 1.2.12 we plotted the function f and the N th partial sums
$$S_N f(x) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\big((2n-1)x\big)$$
for N = 1 and N = 3, respectively.

Figure 1.2.12

In Figure 1.2.13 we plotted the function f and the N th partial sums for
N = 10 and N = 25, respectively.

Figure 1.2.13

In Figure 1.2.14 we plotted the function f and the N th partial sum for
N = 50.

Figure 1.2.14

In Figure 1.2.15 we plotted the function f and the N th partial sum for
N = 150.

Figure 1.2.15

Let us now examine the Gibbs phenomenon analytically for this function. Since f is continuous at the point x = 0, the Fourier series at x = 0 converges to f(0) = 0. For each fixed x with 0 < x < π, SN f(x) converges to 1, and for each fixed x with −π < x < 0, SN f(x) converges to −1. We are especially interested in the behavior of SN f(x) in a small neighborhood of 0 as N → ∞. Since both f and SN f are odd functions, it suffices to consider the case when x is positive and small.
First, we find the relative extrema of SN f. From
$$S_N' f(x) = \frac{4}{\pi}\sum_{n=1}^{N}\cos\big((2n-1)x\big)$$
and the summation formula (b) in Lemma 1.2.1 we have
$$S_N' f(x) = \frac{2}{\pi}\,\frac{\sin 2Nx}{\sin x}.$$
From the last expression it follows that SN f has relative extrema in the interval (0, π) at the points x = nπ/(2N), n = 1, 2, . . . , 2N − 1. It is easy to show that at the point x = π/(2N) the function SN f attains its first maximum. The value of this maximum is
$$S_N f\Big(\frac{\pi}{2N}\Big) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\Big(\frac{2n-1}{2N}\pi\Big).$$

Further, notice that the above sum can be made to look like a Riemann sum for the integral
$$\frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx.$$
Indeed, consider the continuous function
$$\frac{2}{\pi}\,\frac{\sin x}{x}.$$
If we partition the segment [0, π] into N equal sub-intervals, each of length
$$\Delta x = \frac{\pi}{N},$$
then the midpoints of the sub-intervals are the points $x_n = \frac{(2n-1)\pi}{2N}$, and
$$S_N f\Big(\frac{\pi}{2N}\Big) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{1}{2n-1}\,\sin\Big(\frac{(2n-1)\pi}{2N}\Big) = \frac{2}{\pi}\sum_{n=1}^{N}\frac{\sin x_n}{x_n}\,\Delta x$$
is precisely a midpoint Riemann sum for this integral,

that is,
$$\lim_{N\to\infty} S_N f\Big(\frac{\pi}{2N}\Big) = \frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx.$$

The last integral is non-elementary, but it can be approximated easily. Using a Maclaurin series we have
$$\frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx = 2 - \frac{\pi^2}{9} + \frac{\pi^4}{300} - \frac{\pi^6}{17640} + \cdots,$$
and so, accurate to three decimal places,
$$\frac{2}{\pi}\int_{0}^{\pi}\frac{\sin x}{x}\,dx \approx 1.179.$$

Therefore
$$\text{(1.2.6)} \qquad \lim_{N\to\infty} S_N f\Big(\frac{\pi}{2N}\Big) \approx 1.179.$$

Equation (1.2.6) shows that the maximum values of the partial sums SN f near the discontinuity of the function f at x = 0 are always greater than 1, no matter how many terms we choose to include in the partial sums. But 1 is the maximum value of the original function that is being represented by the Fourier series. It follows, therefore, that near the discontinuity the maximum difference between the values of the partial sums and the function itself (sometimes called the overshoot, or the bump) does not tend to zero as N → ∞; in this example the overshoot is approximately 0.18.
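The overshoot (1.2.6) is easy to reproduce numerically; a Python sketch (illustrative only, with the number of quadrature steps chosen by us) approximates the sine integral and evaluates the first maximum of the partial sums for increasing N:

```python
import math

def SN(x, N):
    """Partial sum of the Fourier series of the square wave in Example 1.2.8."""
    return (4 / math.pi) * sum(math.sin((2 * n - 1) * x) / (2 * n - 1)
                               for n in range(1, N + 1))

def si_pi(steps=100000):
    """Midpoint rule for the sine integral Si(pi) = int_0^pi sin(x)/x dx."""
    h = math.pi / steps
    return sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h) for k in range(steps)) * h

gibbs = 2 / math.pi * si_pi()            # the Gibbs limit, about 1.179
assert abs(gibbs - 1.179) < 1e-3
for N in (50, 500):
    peak = SN(math.pi / (2 * N), N)      # first maximum, at x = pi/(2N)
    assert abs(peak - gibbs) < 0.01      # the overshoot persists as N grows
```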

Exercises for Section 1.2.

1. Which of the following functions are continuous, piecewise continuous,


or piecewise smooth on the interval [−π, π]?

(a) f (x) = csc x.



(b) $f(x) = \sqrt[3]{\sin x}$.

(c) $f(x) = \sqrt[4]{\sin^5 x}$.
(d) $f(x) = \begin{cases} -\cos x, & -\pi \le x \le 0 \\ \cos x, & 0 < x \le \pi. \end{cases}$
(e) $f(x) = \begin{cases} \sin 2x, & -\pi \le x \le 0 \\ \sin x, & 0 < x \le \pi. \end{cases}$
2. How smooth are the following functions? That is, how many derivatives can you guarantee them to have?

(a) $f(x) = \displaystyle\sum_{n=-\infty}^{\infty}\frac{1}{n^{4.2} + 2n^6 - 1}\,e^{inx}$.

(b) $f(x) = \displaystyle\sum_{n=-\infty}^{\infty}\frac{1}{n + \frac{1}{2}}\,\cos nx$.

(c) $f(x) = \displaystyle\sum_{n=1}^{\infty}\frac{1}{2^n}\,\cos(2^n x)$.
3. Let f be the 2π-periodic function which on the interval [−π, π] is
given by {
0, −π < x ≤ 0
f (x) =
sin x, 0 < x ≤ π.
(a) Show that the Fourier series of f is given by

$$\frac{1}{\pi} + \frac{1}{2}\sin x - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1}\cos 2nx, \qquad x \in R.$$

(b) Find the Fourier series at x = kπ, k ∈ Z.

(c) Find the sum of the series
$$\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1}.$$

(d) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].

4. Let f be the 2π-periodic function, which on the interval (0, 2π] is given by
$$f(x) = \begin{cases} x, & 0 < x \le \pi \\ 0, & \pi < x \le 2\pi. \end{cases}$$
(a) Show that the Fourier series of f is given by
$$\frac{\pi}{4} - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\cos\big((2n-1)x\big)}{(2n-1)^2} + \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\sin nx, \qquad x \in R.$$

(b) Find the Fourier series at x = π.



(c) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].

(d) Examine the Gibbs phenomenon near the point x = π.

5. Let f be the 2π-periodic function which on the interval (0, 2π] is given by
$$f(x) = \frac{\pi - x}{2}.$$
(a) Show that the Fourier series of f is given by
$$\sum_{n=1}^{\infty}\frac{1}{n}\sin nx, \qquad x \in R.$$

(b) Find the sums of the Fourier series at x = 0 and x = 2π.

(c) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].

(d) Examine the Gibbs phenomenon near the point x = 0.

6. Using the Fourier expansion of the function f(x) = x(π − |x|), −π < x < π, and choosing a suitable value of x, derive the following:
$$\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{(2n-1)^3} = \frac{\pi^3}{32}.$$

7. Let f be the 2π-periodic function which on the interval [−π, π] is given by
$$f(x) = \begin{cases} \dfrac{\pi}{2} - \dfrac{x}{2} - \dfrac{x^2}{4\pi}, & -\pi < x \le 0 \\ \dfrac{\pi}{2} + \dfrac{x}{2} - \dfrac{x^2}{4\pi}, & 0 < x \le \pi. \end{cases}$$
(a) Show that the Fourier series of f is given by
$$\frac{2\pi}{3} - \frac{1}{\pi}\sum_{n=1}^{\infty}\frac{\cos nx}{n^2}, \qquad x \in R.$$

(b) Find the Fourier series at x = π.

(c) Plot the graph of f and the Fourier partial sums S3 f , S10 f ,
S30 f and S100 f on the interval [−2π, 2π].

8. Show that each of the following Fourier expansions is valid in the indicated range.

(a) If 0 ≤ x ≤ 2π, then
$$\frac{x^2}{2} - \pi x = -\frac{\pi^2}{3} + 2\sum_{n=1}^{\infty}\frac{\cos nx}{n^2}.$$

(b) If 0 < x < 2π, then
$$x^2 = \frac{4\pi^2}{3} + 4\sum_{n=1}^{\infty}\Big[\frac{1}{n^2}\cos nx - \frac{\pi}{n}\sin nx\Big].$$

(c) If −π ≤ x ≤ π, then
$$x^2 = \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos nx.$$
n2

9. Let α be any positive number that is not an integer. Define f(x) = sin(αx) for x ∈ (−π, π). Let f(−π) = f(π) = 0 and extend f to be a 2π-periodic function.

(a) Plot the graph of f when 0 < α < 1 and when 1 < α < 2.

(b) Show that the Fourier series of f is
$$S(f, x) = \frac{2\sin(\alpha\pi)}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\,n}{n^2 - \alpha^2}\,\sin nx.$$

(c) Identify all points x in R at which the series converges.

(d) Prove that the Fourier series converges uniformly to f on any


interval [a, b] that does not contain an odd multiple of π.

(e) Take x = π/2 and y = α/2. Use the trigonometric formula


sin 2πy = 2 sin πy cos πy in order to show that

$$\pi\sec \pi y = -4\sum_{n=1}^{\infty}\frac{(-1)^{n+1}(2n-1)}{4y^2-(2n-1)^2}.$$

10. Let a be a non-zero constant. Show that



$$e^{ax} = \frac{e^{a\pi}-e^{-a\pi}}{2a\pi} + \frac{e^{a\pi}-e^{-a\pi}}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2+a^2}\left(a\cos nx - n\sin nx\right)$$

for every x, −π < x < π.

11. Let a be any number which is not an integer. Show that the following
is true

(a) For any −π ≤ x ≤ π we have



$$\cos ax = \frac{\sin a\pi}{a\pi} + \frac{2\sin a\pi}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\,a}{n^2-a^2}\cos nx$$

and

(b) For any −π < x < π we have



$$\sin ax = \frac{2\sin a\pi}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\,n}{n^2-a^2}\sin nx.$$

What happens if a is an integer?

1.3 Integration and Differentiation of Fourier Series.


In this section we will study the questions of integration and differentiation
of Fourier series. If a series of functions converges “nicely” to some function
f , then we expect to be able to integrate and differentiate the series term by
term and the resulting series should converge to the integral and derivative of
the given function f. For example, term-by-term integration and differentiation of a power series are always valid inside the interval of convergence. In many situations,
both operations of integration and differentiation term-by-term of Fourier
series lead to valid results, and are quite useful for constructing new Fourier
series of more complicated functions. However, in all these situations, the
question is, to what do the resulting series converge?
From calculus we know that integration is a smoothing operation, that is,
the resulting function is always smoother than the original function. There-
fore, we would expect that we would be able to integrate Fourier series with-
out any difficulties. There is, however, one problem: the integral of a periodic
function is not necessarily periodic. For example, the constant function 1
is certainly periodic, but its integral is not. On the other hand, integrals
of all sine and cosine functions appearing in the Fourier series are periodic.
Therefore, only the constant term might cause some difficulty when we try
to integrate a Fourier series. Recall that functions which have no constant
terms in their Fourier series expansions are exactly those functions which have
zero means. Now, we will show that only functions which have zero means
remain periodic upon integration. The results discussed below for 2π-periodic
functions defined on [−π, π] can be easily extended to 2L periodic functions
defined on any interval [−L, L].

Lemma 1.3.1. If f is a 2π-periodic function, then its indefinite integral


$$F(x) = \int_0^x f(t)\,dt$$
is 2π-periodic if and only if
$$c_0 = \int_{-\pi}^{\pi} f(x)\,dx = 0.$$

Proof. Assume that the function F is 2π-periodic, i.e., let F(x + 2π) = F(x) for every x. If we take x = −π, then it follows that F(π) = F(−π). Therefore
$$\int_0^{\pi} f(t)\,dt = \int_0^{-\pi} f(t)\,dt.$$
If in the right-hand side integral we introduce a new variable u by u = t + 2π, then
$$\int_0^{\pi} f(t)\,dt = \int_{2\pi}^{\pi} f(u - 2\pi)\,du.$$
Since f is 2π-periodic, we have that
$$\int_0^{\pi} f(t)\,dt = \int_{2\pi}^{\pi} f(u)\,du,$$
and so
$$\int_0^{\pi} f(t)\,dt - \int_{2\pi}^{\pi} f(u)\,du = 0.$$
Therefore
$$\int_0^{\pi} f(t)\,dt + \int_{\pi}^{2\pi} f(t)\,dt = 0,$$
that is,
$$\int_0^{2\pi} f(t)\,dt = 0.$$
From the last equation, again by the 2π-periodicity of f, it follows that
$$\int_{-\pi}^{\pi} f(t)\,dt = 0,$$
as required.
The proof of the other direction is left as an exercise. ■
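The lemma is easy to check numerically. Below is a minimal sketch in Python (the book's own computations use Mathematica; the quadrature routine and the two sample functions are our own choices): a zero-mean periodic integrand yields a periodic F, while a nonzero-mean one makes F drift by c₀ per period.

```python
import math

def integral(f, a, b, n=20000):
    # Composite trapezoidal rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def F(f, x):
    # Indefinite integral F(x) = int_0^x f(t) dt.
    return integral(f, 0.0, x)

f1 = math.sin                       # zero mean over a period
f2 = lambda x: 1.0 + math.sin(x)    # mean c0 = 2*pi over [-pi, pi]

x = 0.7
print(F(f1, x + 2 * math.pi) - F(f1, x))   # ~0: F is 2*pi-periodic
print(F(f2, x + 2 * math.pi) - F(f2, x))   # ~2*pi: F drifts by c0 per period
```

The second difference equals the integral of f₂ over one full period, exactly as in the proof above.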
Using the above lemma we easily have the following result.

Theorem 1.3.1. Let f be a 2π-periodic, piecewise continuous function with


complex Fourier coefficients cn (real Fourier coefficients an and bn ) and let

$$F(x) = \int_0^x f(y)\,dy.$$
If c₀ = a₀ = 0, then for all x,
$$F(x) = C_0 + \sum_{\substack{n=-\infty\\ n\neq 0}}^{\infty}\frac{c_n}{in}\,e^{inx} = \frac{A_0}{2} + \sum_{n=1}^{\infty}\left[-\frac{b_n}{n}\cos nx + \frac{a_n}{n}\sin nx\right],$$
where
$$C_0 = \frac{A_0}{2} = \frac{1}{2\pi}\int_{-\pi}^{\pi}F(x)\,dx$$
is the average of the function F(x) on the interval [−π, π].


Proof. Integrate term-by-term the Fourier series and use Lemma 1.3.1. ■

If c0 ̸= 0, considering the function F (x) − c0 x, from the previous theorem


we have the following result.
Theorem 1.3.2. Let f be a 2π-periodic, piecewise continuous function with
Fourier coefficients cn and let
$$F(x) = \int_0^x f(y)\,dy.$$
Then for all x we have that
$$F(x) = c_0 x + C_0 + \sum_{\substack{n=-\infty\\ n\neq 0}}^{\infty}\frac{c_n}{in}\,e^{inx},$$
where
$$C_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi}F(x)\,dx$$
is the average of F on [−π, π].

Notice that the last series is not a Fourier series because it contains the
term c0 x.
Next we present two theorems regarding differentiation of Fourier series.
First we need the following result.

Theorem 1.3.3. Let cn be the complex Fourier coefficients (or the real coeffi-
cients an and bn ) of a 2π-periodic, continuous and piecewise smooth function
f and let c′n (a′n and b′n ) be the Fourier coefficients of the first derivative f ′
of f . Then
$$c'_n = in\,c_n, \qquad a'_n = n b_n, \qquad b'_n = -n a_n.$$

Proof. The fundamental theorem of calculus

$$\int_a^b f'(x)\,dx = f(b) - f(a)$$

is valid not only for functions f which are continuously differentiable on the
interval [a, b], but also for functions f which are continuous and piecewise
smooth on [a, b].
Applying the integration by parts formula and the fundamental theorem
of calculus we have
$$c'_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f'(x)e^{-inx}\,dx = \frac{1}{2\pi}f(x)e^{-inx}\Big|_{-\pi}^{\pi} - \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\left(-in\,e^{-inx}\right)dx$$
$$= \frac{1}{2\pi}\left[f(\pi)e^{-\pi n i} - f(-\pi)e^{\pi n i}\right] + \frac{in}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}\,dx$$
$$= \frac{1}{2\pi}\left[f(\pi)(-1)^n - f(-\pi)(-1)^n\right] + in\,c_n = 0 + in\,c_n = in\,c_n.$$

A similar proof works for a′n = nbn , b′n = −nan . ■
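The relations c′ₙ = in cₙ, a′ₙ = n bₙ, b′ₙ = −n aₙ can also be observed numerically. A minimal Python sketch (the sample function f(x) = e^{sin x} is our own choice; the trapezoidal rule is spectrally accurate for smooth periodic integrands):

```python
import math

def coeffs(g, n, m=4096):
    # Real Fourier coefficients a_n, b_n by the trapezoidal rule.
    h = 2 * math.pi / m
    a = b = 0.0
    for k in range(m):
        x = -math.pi + k * h
        a += g(x) * math.cos(n * x)
        b += g(x) * math.sin(n * x)
    return a * h / math.pi, b * h / math.pi

f = lambda x: math.exp(math.sin(x))                  # smooth 2*pi-periodic sample
fp = lambda x: math.cos(x) * math.exp(math.sin(x))   # its derivative

for n in (1, 2, 5):
    an, bn = coeffs(f, n)
    apn, bpn = coeffs(fp, n)
    print(n, apn - n * bn, bpn + n * an)   # both differences ~0
```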


Theorem 1.3.4. Let f be a 2π-periodic, continuous and piecewise smooth
function, with piecewise smooth derivative f ′ . If cn (an and bn ) are the
Fourier coefficients of f , then the Fourier series S f ′ of f ′ is given by

$$S f'(x) = \sum_{n=-\infty}^{\infty}in\,c_n e^{inx} = \sum_{n=1}^{\infty}\left(n b_n\cos nx - n a_n\sin nx\right)$$
and converges to f′(x) for each x where f′ is continuous. If f′ is not continuous at x, then the series converges to $\frac12\left[f'(x-) + f'(x+)\right]$.
Proof. The result follows by combining the previous theorem and the Fourier
Convergence Theorem. ■

Let us illustrate the above results with several examples.



Example 1.3.1. Integrate term-by-term the Fourier series of the 2π-periodic function f: R → R which for x ∈ [−π, π] is given by
$$f(x) = \pi - 2|x|$$
and obtain a new Fourier series.


Solution. Since the given 2π-periodic function f: R → R is continuous on R, its Fourier series S f(x) converges to f(x) for every x. Since f is even, its Fourier coefficients b_n are all zero. By computation we find a₀ = 0 and
$$a_{2n-1} = \frac{8}{\pi(2n-1)^2} \quad\text{and}\quad a_{2n} = 0 \quad\text{for } n = 1, 2, \ldots.$$
Therefore the Fourier series of f is given by
$$\pi - 2|x| = \frac{8}{\pi}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}\cos(2n-1)x.$$
Since a₀ = 0, by Theorem 1.3.1, integrating both sides of the above equation we obtain
$$x(\pi - |x|) = \frac{8}{\pi}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^3}\sin(2n-1)x.$$

Example 1.3.2. Differentiating term-by-term the Fourier series of the 2π-periodic function f: R → R, which on the interval [−π, π] is given by
$$f(x) = \begin{cases} -x\sin x, & -\pi < x < 0 \\ x\sin x, & 0 < x < \pi, \end{cases}$$
obtain a new Fourier series and find the sum of the series
$$\sum_{n=1}^{\infty}\frac{n^2}{(2n+1)^2(2n-1)^2}.$$

Solution. The given 2π-periodic, odd function f is continuous and piecewise smooth, with Fourier series
$$f(x) = \frac{\pi}{2}\sin x - \frac{16}{\pi}\sum_{n=1}^{\infty}\frac{n}{(2n-1)^2(2n+1)^2}\sin 2nx.$$

By direct calculation we have
$$f'(x) = \begin{cases} -\sin x - x\cos x, & -\pi < x < 0 \\ \sin x + x\cos x, & 0 < x < \pi. \end{cases}$$
Since
$$\lim_{x\to0^+} f'(x) = \lim_{x\to0^-} f'(x) = 0 \quad\text{and}\quad \lim_{x\to\pi^-} f'(x) = \lim_{x\to-\pi^+} f'(x) = -\pi,$$
the 2π-periodic extension of f′ is continuous on R. Therefore, term-by-term differentiation of the Fourier series of f gives us
$$f'(x) = \frac{\pi}{2}\cos x - \frac{32}{\pi}\sum_{n=1}^{\infty}\frac{n^2}{(2n-1)^2(2n+1)^2}\cos 2nx.$$

If we rearrange the terms of the Fourier series of f′, for x ∈ [0, π] we obtain
$$\sum_{n=1}^{\infty}\frac{n^2}{(4n^2-1)^2}\cos 2nx = \frac{\pi^2}{64}\cos x - \frac{\pi}{32}f'(x) = \frac{\pi^2}{64}\cos x - \frac{\pi}{32}\sin x - \frac{\pi x}{32}\cos x.$$
Inserting x = 0 in the last series we have that
$$\sum_{n=1}^{\infty}\frac{n^2}{(4n^2-1)^2} = \frac{\pi^2}{64}.$$
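A quick partial-sum check of the value just obtained (a Python sketch; the truncation level is our own, arbitrary choice — the terms behave like 1/(16n²), so the tail after N terms is about 1/(16N)):

```python
import math

# Partial sum of sum_{n>=1} n^2 / (4n^2 - 1)^2, which should approach pi^2/64.
s = sum(n * n / (4 * n * n - 1) ** 2 for n in range(1, 200001))
print(s, math.pi ** 2 / 64)
```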

Example 1.3.3. Let f: R → R be the function given by
$$f(x) = \frac{1}{5-3\cos x}, \quad x \in \mathbb{R}.$$
Show that
$$f(x) = \frac14 + \frac12\sum_{n=1}^{\infty}\frac{1}{3^n}\cos nx, \quad x \in \mathbb{R}.$$

Further, if the function g: R → R is given by
$$g(x) = \frac{\sin x}{5-3\cos x}, \quad x \in \mathbb{R},$$
show that
$$g(x) = \frac23\sum_{n=1}^{\infty}\frac{1}{3^n}\sin nx, \quad x \in \mathbb{R}.$$

Finally, find the sum of the series
$$\sum_{n=1}^{\infty}\frac{1}{n\,3^n}\cos nx.$$

Solution. Since
$$\left|\frac{e^{ix}}{3}\right| = \frac13 < 1 \quad\text{for every } x \in \mathbb{R},$$
by the complex geometric sum formula we obtain
$$\sum_{n=1}^{\infty}\frac{e^{inx}}{3^n} = \sum_{n=1}^{\infty}\left(\frac{e^{ix}}{3}\right)^n = \frac{e^{ix}}{3}\cdot\frac{1}{1-\frac{e^{ix}}{3}} = \frac{e^{ix}}{3-e^{ix}}\cdot\frac{3-e^{-ix}}{3-e^{-ix}} = \frac{3e^{ix}-1}{9-6\cos x+1} = \frac12\cdot\frac{3\cos x-1}{5-3\cos x} + \frac{i}{2}\cdot\frac{3\sin x}{5-3\cos x}.$$

Hence
$$\frac14 + \frac12\sum_{n=1}^{\infty}\frac{1}{3^n}\cos nx = \frac14 + \frac12\,\mathrm{Re}\left\{\sum_{n=1}^{\infty}\frac{1}{3^n}e^{inx}\right\} = \frac14 + \frac12\cdot\frac12\cdot\frac{3\cos x-1}{5-3\cos x} = \frac{1}{5-3\cos x} = f(x),$$
and
$$\frac23\sum_{n=1}^{\infty}\frac{1}{3^n}\sin nx = \frac23\,\mathrm{Im}\left\{\sum_{n=1}^{\infty}\frac{1}{3^n}e^{inx}\right\} = \frac23\cdot\frac12\cdot\frac{3\sin x}{5-3\cos x} = g(x).$$

By termwise integration we obtain
$$\int_0^x g(t)\,dt = \int_0^x \frac{\sin t}{5-3\cos t}\,dt = \frac13\ln(5-3\cos x) - \frac13\ln 2$$
$$= \frac23\sum_{n=1}^{\infty}\frac{1}{3^n}\int_0^x \sin nt\,dt = -\frac23\sum_{n=1}^{\infty}\frac{1}{n\,3^n}(\cos nx - 1),$$
0

hence by a rearrangement
$$\sum_{n=1}^{\infty}\frac{1}{n\,3^n}\cos nx = \sum_{n=1}^{\infty}\frac{1}{n\,3^n} + \frac12\ln 2 - \frac12\ln(5-3\cos x)$$
$$= \ln\frac32 + \frac12\ln 2 - \frac12\ln(5-3\cos x) = \ln 3 - \frac12\ln 2 - \frac12\ln(5-3\cos x).$$
2 2
In the above we used the following result:
$$\sum_{n=1}^{\infty}\frac{1}{n\,3^n} = \ln\frac32,$$

which easily follows if we take x = 1/3 in the Maclaurin series
$$-\ln(1-x) = \sum_{n=1}^{\infty}\frac{x^n}{n}, \quad |x| < 1.$$
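The closed form just derived can be checked numerically; a small Python sketch (the evaluation points are our own, arbitrary choices — the series converges geometrically, so a few dozen terms suffice):

```python
import math

def lhs(x, terms=60):
    # Partial sum of sum_{n>=1} cos(nx) / (n 3^n).
    return sum(math.cos(n * x) / (n * 3 ** n) for n in range(1, terms))

def rhs(x):
    # Closed form from Example 1.3.3.
    return math.log(3) - 0.5 * math.log(2) - 0.5 * math.log(5 - 3 * math.cos(x))

for x in (0.0, 1.0, math.pi):
    print(lhs(x) - rhs(x))   # ~0 at every x
```

At x = 0 both sides reduce to ln(3/2), recovering the Maclaurin-series result above.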

The Fourier convergence theorem gives conditions under which the Fourier
series of a function f converges point-wise to f . Working with infinite series
can be a delicate matter and we have to be careful. Since a uniformly convergent
series can be integrated term by term, it would be much better if we had
absolute and uniform convergence. Let us recall these definitions and a few
related results.
Definition 1.3.1. An infinite series of functions f_n,
$$\sum_{n=1}^{\infty}f_n(x),$$
converges absolutely on a set S ⊆ R if the series
$$\sum_{n=1}^{\infty}|f_n(x)|$$
converges for every x ∈ S.

Definition 1.3.2. An infinite series of functions f_n,
$$\sum_{n=1}^{\infty}f_n(x),$$
converges uniformly on a set S ⊆ R to a function f on S if the sequence (S_N f) of the partial sums
$$S_N f(x) = \sum_{n=1}^{N}f_n(x)$$
converges uniformly to f on S, i.e., for every ε > 0, there exists an integer N = N(ε) such that
$$|S_n f(x) - f(x)| < \varepsilon$$
for all x ∈ S and all n ≥ N.

Remark. The reason for the term “uniform convergence” is that the integer
N depends only on ϵ and not on the choice of the point x ∈ S.
Important consequences of uniform convergence are the following:

Theorem 1.3.5. If an infinite series of continuous functions fn (x) on a set


S ⊆ R converges uniformly on S to a function f , then f is also continuous
on S.

Theorem 1.3.6. If an infinite series of functions fn (x) on a set S ⊆ R


converges uniformly on S to a function f , then we can integrate the series
term by term and the resulting integrated series
$$\int_a^x\left(\sum_{n=1}^{\infty}f_n(y)\right)dy = \sum_{n=1}^{\infty}\int_a^x f_n(y)\,dy = \int_a^x f(y)\,dy$$

is also uniformly convergent on S.

Theorem 1.3.7. If an infinite series of differentiable functions fn (x) on a


set S ⊆ R is such that the series
$$\sum_{n=1}^{\infty}f'_n(x)$$
converges uniformly on S, then the series
$$\sum_{n=1}^{\infty}f_n(x)$$
is also uniformly convergent on S and
$$\left(\sum_{n=1}^{\infty}f_n(x)\right)' = \sum_{n=1}^{\infty}f'_n(x), \quad\text{for every } x \in S.$$

A useful criterion for absolute and uniform convergence is the following


test:
Theorem 1.3.8 (Weierstrass M -test). If there are positive constants Mn
such that
|fn (x)| ≤ Mn for n = 1, 2, . . . and every x ∈ S,
and the series


Mn
n=1

converges, then the series




fn (x)
n=1

converges absolutely and uniformly on S.
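A small Python sketch of the M-test in action for Σ sin(nx)/n² with Mₙ = 1/n² (the cutoffs and the grid of sample points are our own choices): the tail of the series beyond N is bounded by the tail of ΣMₙ, uniformly in x.

```python
import math

N, CUTOFF = 50, 20000
# Uniform bound for the tail: sum of M_n = 1/n^2 over n > N.
tail_bound = sum(1.0 / n ** 2 for n in range(N + 1, CUTOFF))

def tail(x):
    # Tail of sum sin(nx)/n^2 beyond the N-th term.
    return sum(math.sin(n * x) / n ** 2 for n in range(N + 1, CUTOFF))

# The worst case over a grid of points never exceeds the uniform bound.
worst = max(abs(tail(k * 0.2)) for k in range(32))
print(worst, "<=", tail_bound)
```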

The notion of uniform convergence will help us answer the question raised earlier: what are “nice” Fourier coefficients? How quickly the Fourier coefficients decay to zero determines how many terms of the Fourier series we need to take in order to get a good approximation: the faster the coefficients decay, the “nicer” they are. The next theorem shows precisely how the rate of decay of the Fourier coefficients is related to the smoothness of the function.
Theorem 1.3.9. Let f be a 2π-periodic function on R and let k ∈ N.
If f and its first k − 1 derivatives f ′ , . . . , f (k−1) are all continuous on R
and the k th derivative f (k) is piecewise continuous on R (f (k) is piecewise
continuous on each bounded interval), then the Fourier coefficients cn , (an ,
bn ) of f satisfy the following:



$$\sum_{n=1}^{\infty}|n^k c_n|^2 < \infty, \qquad \sum_{n=1}^{\infty}|n^k a_n|^2 < \infty, \qquad \sum_{n=1}^{\infty}|n^k b_n|^2 < \infty.$$
In particular,
$$\lim_{n\to\infty}n^k a_n = 0, \qquad \lim_{n\to\infty}n^k b_n = 0, \qquad \lim_{n\to\infty}n^k c_n = 0.$$
On the other hand, suppose that the Fourier coefficients c_n (n ≠ 0) (respectively a_n, b_n) satisfy
$$|c_n| \le \frac{M}{|n|^{k+\alpha}},$$
or equivalently
$$|a_n| \le \frac{C}{n^{k+\alpha}} \quad\text{and}\quad |b_n| \le \frac{C}{n^{k+\alpha}},$$
for some constants M, C > 0 and α > 1. Then f and its first k derivatives f′, ..., f^{(k)} are all continuous on R.
Proof. We will consider only the complex coefficients cn . The real Fourier
coefficients an and bn are treated similarly.
For the first part, applying Theorem 1.3.3 k times we have that the Fourier coefficients c_n^{(k)} of the kth derivative f^{(k)} are given by c_n^{(k)} = (in)^k c_n. From Bessel's inequality applied to the function f^{(k)} it follows that
$$\sum_{n=1}^{\infty}|n^k c_n|^2 \le \sum_{n=-\infty}^{\infty}|c_n^{(k)}|^2 \le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f^{(k)}(x)|^2\,dx < \infty,$$
which establishes the first part.



For the second part, let j ≤ k. Since α > 1 we have
$$\sum_{\substack{n=-\infty\\ n\neq 0}}^{\infty}|n^j c_n| \le M\sum_{\substack{n=-\infty\\ n\neq 0}}^{\infty}\frac{1}{|n|^{k+\alpha-j}} \le 2M\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}} < \infty.$$
By the Weierstrass M-test it follows that the series
$$\sum_{n=-\infty}^{\infty}(in)^j c_n e^{inx}$$
are absolutely and uniformly convergent on R for every j ≤ k. Therefore these series define continuous functions, which are the jth derivatives f^{(j)} of the function
$$f(x) = \sum_{n=-\infty}^{\infty}c_n e^{inx}. \;\blacksquare$$
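The decay/smoothness trade-off can be seen on two standard expansions on [−π, π] (the coefficients quoted are the classical ones for the sign function and for |x|); a minimal Python sketch:

```python
import math

# Known coefficients on [-pi, pi]:
#   sign(x)  (jump discontinuity):       b_n = 4/(n*pi)     for odd n
#   |x|      (continuous, corner only):  a_n = -4/(pi*n^2)  for odd n
# One extra degree of smoothness buys one extra power of n in the decay.
for n in (1, 11, 101):
    b_sign = 4 / (n * math.pi)
    a_abs = -4 / (math.pi * n * n)
    print(n, n * b_sign, n * a_abs)
# n * b_n stays at 4/pi (does not tend to 0), while n * a_n -> 0.
```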

The next example illustrates the above theorem.

Example 1.3.4. Discuss the rate of convergence of the Fourier series for the
2-periodic function f which on the interval [−1, 1] is defined by
$$f(x) = \begin{cases} (1+x)x, & -1 \le x \le 0 \\ (1-x)x, & 0 \le x \le 1. \end{cases}$$

Solution. Notice that this function has a first derivative everywhere, but it
does not have a second derivative whenever x is an integer.
Since f is an odd function, a_n = 0 for every n ≥ 0. By computation we have
$$b_n = \begin{cases} \dfrac{8}{\pi^3 n^3}, & \text{if } n \text{ is odd} \\[1mm] 0, & \text{if } n \text{ is even}. \end{cases}$$
Therefore the Fourier series of f is
$$f(x) = \frac{8}{\pi^3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^3}\sin(2n-1)\pi x.$$

This series converges very fast. If we plot the partial sum up to the third harmonic, that is, the function
$$S_2 f(x) = \frac{8}{\pi^3}\sin(\pi x) + \frac{8}{27\pi^3}\sin(3\pi x),$$
[Figure 1.3.1: graphs of f and the partial sum S₂f (N = 2) on the interval [−2, 2].]

from Figure 1.3.1 we see that the graphs of f and S2 f (x) are almost indis-
tinguishable.
In fact, the coefficient 8/(27π³) is already just 0.0096 (approximately). The reason for this fast convergence is the n³ term in the denominator of the nth coefficient b_n, so the coefficients b_n tend to zero as fast as n⁻³ tends to zero.
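The near-indistinguishability of f and S₂f can be quantified; a small Python sketch (the grid resolution is our own choice — the worst-case error is bounded by the tail sum (8/π³)·Σ_{n≥3}(2n−1)⁻³):

```python
import math

def f(x):
    # 2-periodic extension of x(1 - |x|) from [-1, 1].
    x = (x + 1) % 2 - 1
    return x * (1 - abs(x))

def S2f(x):
    # Partial sum up to the third harmonic (Example 1.3.4).
    return 8 / math.pi ** 3 * math.sin(math.pi * x) \
        + 8 / (27 * math.pi ** 3) * math.sin(3 * math.pi * x)

worst = max(abs(f(k / 500 - 1) - S2f(k / 500 - 1)) for k in range(1001))
print(worst)   # already far below the size of the function itself
```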

Example 1.3.5. Discuss the convergence of the Fourier series of the 2π-
periodic function f defined by the Fourier series

$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n^3}\sin nx.$$

Solution. By the Weierstrass M -test it follows that the above series is a uni-
formly convergent series of continuous functions on R. Therefore f is a
continuous function on R. The convergence rate of this series is like n−3 .
Now, since the series obtained by term-by-term differentiation of the above
series is uniformly convergent we have

$$f'(x) = \sum_{n=1}^{\infty}\frac{1}{n^2}\cos(nx),$$

and f′ is a continuous function on R. The convergence rate of this series is like n⁻². If we differentiate again (wherever we can) we obtain the following series:
$$-\sum_{n=1}^{\infty}\frac{1}{n}\sin(nx).$$

Even though the last series converges at every point, its convergence is no longer uniform near the points x = 2kπ, k ∈ Z. At these points the function f′ is not differentiable, so the function f′′ fails to be continuous (it has jumps). Finally, if we try to differentiate the last series term by term, we would obtain the series
$$-\sum_{n=1}^{\infty}\cos(nx),$$
which converges nowhere, since its general term cos nx does not tend to zero for any x.

Now let us revisit the Bessel inequality and discuss the question of equality
in it. We will need a few definitions first.
Definition 1.3.3. A function f is called square integrable on an interval
[a, b] if
$$\int_a^b |f(x)|^2\,dx < \infty.$$

Continuous and piecewise continuous functions are examples of integrable


and square integrable functions on any finite interval.
Remark. From the obvious inequality (x − y)² ≥ 0, which is valid for all x, y ∈ R, follows the inequality
$$2xy \le x^2 + y^2, \quad x, y \in \mathbb{R}.$$
Therefore, every function which is square integrable on a finite interval [a, b] is also integrable on that interval.
Example 1.3.6. The function f(x) = x^{−1/2} is a Riemann integrable function on [0, 1], but it is not a square integrable function on [0, 1].


Solution. From
$$\int_0^1 f(x)\,dx = \int_0^1 x^{-1/2}\,dx = 2\sqrt{x}\,\Big|_0^1 = 2$$
it follows that f(x) is Riemann integrable on [0, 1].
On the other hand, from
$$\int_0^1 f^2(x)\,dx = \int_0^1 \frac{1}{x}\,dx = \infty$$
it follows that f(x) is not square integrable on [0, 1].

In engineering and some physical sciences one of the fundamental quantities


is power (energy).

Definition 1.3.4. The energy or energy content of a square integrable func-


tion f on [a, b] is defined by

$$\int_a^b |f(x)|^2\,dx.$$

Before addressing the question of equality in the Bessel inequality, we will


consider the following problem:
Problem. Let f be a 2π-periodic function which is continuous and piece-
wise smooth on [−π, π] and let N be a given natural number. Among all
trigonometric polynomials tN of the form


$$t_N(x) = A_0 + \sum_{n=1}^{N}\left[A_n\cos nx + B_n\sin nx\right],$$
find that trigonometric polynomial t_N, i.e., find those coefficients A_n and B_n, for which the energy content
$$E_N := \int_{-\pi}^{\pi}\left(f(x) - t_N(x)\right)^2 dx$$
is minimal.
Solution. To compute E_N, we first expand the integral:
$$(1.3.1)\qquad E_N = \int_{-\pi}^{\pi}f^2(x)\,dx - 2\int_{-\pi}^{\pi}f(x)t_N(x)\,dx + \int_{-\pi}^{\pi}t_N^2(x)\,dx.$$
Since f is continuous on [−π, π], the Fourier series S f(x) of f is equal to f(x) at every point x ∈ [−π, π]:
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos nx + b_n\sin nx\right],$$
where a_n and b_n are the Fourier coefficients of the function f. By the orthogonality of the sine and cosine functions and by direct integration we have
$$\int_{-\pi}^{\pi}t_N^2(x)\,dx = \pi\left[2A_0^2 + \sum_{n=1}^{N}(A_n^2 + B_n^2)\right]$$
and
$$\int_{-\pi}^{\pi}f(x)t_N(x)\,dx = \pi\left[A_0 a_0 + \sum_{n=1}^{N}(A_n a_n + B_n b_n)\right].$$
Therefore
$$E_N = \int_{-\pi}^{\pi}f^2(x)\,dx - 2\pi\left[A_0 a_0 + \sum_{n=1}^{N}(A_n a_n + B_n b_n)\right] + \pi\left[2A_0^2 + \sum_{n=1}^{N}(A_n^2 + B_n^2)\right],$$
that is,
$$E_N = \int_{-\pi}^{\pi}f^2(x)\,dx - \pi\left[\frac{a_0^2}{2} + \sum_{n=1}^{N}(a_n^2 + b_n^2)\right] + \pi\left\{2\left(A_0 - \frac{a_0}{2}\right)^2 + \sum_{n=1}^{N}\left[(A_n - a_n)^2 + (B_n - b_n)^2\right]\right\}.$$
Since
$$2\left(A_0 - \frac{a_0}{2}\right)^2 + \sum_{n=1}^{N}\left[(A_n - a_n)^2 + (B_n - b_n)^2\right] \ge 0,$$
the energy E_N takes its minimum value if we choose A₀ = a₀/2, A_n = a_n for n = 1, ..., N, and B_n = b_n for n = 1, ..., N.
Therefore, the energy E_N becomes minimal if the coefficients A_n and B_n in t_N are chosen to be the corresponding Fourier coefficients a_n and b_n of the function f. Thus the trigonometric polynomial t_N should be chosen to be the Nth partial sum of the Fourier series of f:
$$t_N(x) = \frac{a_0}{2} + \sum_{n=1}^{N}\left[a_n\cos nx + b_n\sin nx\right]$$
in order to minimize E_N. Now that we know which choice of A_n and B_n minimizes E_N, we can find its minimum value. After a little algebra we find
$$(1.3.2)\qquad \min E_N = \int_{-\pi}^{\pi} f^2(x)\,dx - \pi\left[\frac{a_0^2}{2} + \sum_{n=1}^{N}(a_n^2 + b_n^2)\right]. \;\blacksquare$$
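The minimizing property of the Fourier coefficients is easy to observe numerically. A Python sketch for f(x) = |x| (an even sample function of our own choice, so only cosine terms appear; the quadrature is a simple midpoint rule):

```python
import math

def EN(A0, A, m=4000):
    # Energy integral of (f - t_N)^2 over [-pi, pi] for f(x) = |x|,
    # with t_N(x) = A0 + sum_n A[n-1] cos(nx).
    h = 2 * math.pi / m
    total = 0.0
    for k in range(m):
        x = -math.pi + (k + 0.5) * h
        tN = A0 + sum(c * math.cos((n + 1) * x) for n, c in enumerate(A))
        total += (abs(x) - tN) ** 2 * h
    return total

# Fourier choice for |x|: a0/2 = pi/2 and a_n = -4/(pi n^2) for odd n.
fourier = [-4 / math.pi, 0.0, -4 / (9 * math.pi)]
best = EN(math.pi / 2, fourier)
worse = EN(math.pi / 2, [fourier[0] + 0.1, 0.0, fourier[2]])
print(best, worse)   # the Fourier coefficients give the smaller energy
```

Perturbing any single coefficient away from its Fourier value strictly increases the energy, exactly as the completed square in the derivation predicts.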

Even though the next theorem is valid for any square integrable function,
we will prove it only for continuous functions which are piecewise smooth.
The proof for the more general class of square integrable functions involves
several important results about approximation of square integrable functions
by trigonometric polynomials and for details the interested reader is referred
to the book by T. M. Apostol, Mathematical Analysis [12].

Theorem 1.3.10 (Parseval's Identity). Let f be a 2π-periodic, continuous and piecewise smooth function on [−π, π]. If c_n are the complex Fourier coefficients of f (a_n, b_n are the real Fourier coefficients), then
$$\frac{1}{\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx = 2\sum_{n=-\infty}^{\infty}|c_n|^2 = \frac{a_0^2}{2} + \sum_{n=1}^{\infty}(a_n^2 + b_n^2).$$

Proof. We prove the result only for the complex Fourier coefficients c_n. The case for the real Fourier coefficients a_n and b_n is very similar.
Since f is a continuous and piecewise smooth function on [−π, π], by the Fourier convergence theorem we have
$$f(x) = \sum_{n=-\infty}^{\infty}c_n e^{inx}$$
for every x ∈ [−π, π], and thus
$$|f(x)|^2 = \sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}c_n\overline{c_m}\,e^{i(n-m)x}.$$
By Theorem 1.3.6 the above series can be integrated term by term, and using the orthogonality property of the system {e^{inx} : n ∈ Z} we obtain the required formula
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx = \sum_{n=-\infty}^{\infty}|c_n|^2. \;\blacksquare$$

Using the above result we now have a complete answer to our original question about minimizing the mean error E_N. From (1.3.2) and Parseval's identity we have
$$\min E_N = \pi\sum_{n=N+1}^{\infty}(a_n^2 + b_n^2)$$
for all 2π-periodic, continuous and piecewise smooth functions f on [−π, π]. Now, by Bessel's inequality, both series $\sum_{n=1}^{\infty}a_n^2$ and $\sum_{n=1}^{\infty}b_n^2$ are convergent, and therefore we have
$$\lim_{N\to\infty}\min E_N = 0.$$
The last equation can be restated as
$$\lim_{N\to\infty}\int_{-\pi}^{\pi}|f(x) - S_N f(x)|^2\,dx = 0,$$
and usually we say that the Fourier series of f converges to f in the mean or in L².

Example 1.3.7. Using the Parseval identity for the function f(x) = x² on [−π, π], find the sum of the series
$$\sum_{n=1}^{\infty}\frac{1}{n^4}.$$
Solution. The Fourier series of the function x² is given by
$$x^2 = \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos nx.$$
By the Parseval identity we have
$$\frac{1}{\pi}\int_{-\pi}^{\pi}x^4\,dx = \frac{2\pi^4}{9} + 16\sum_{n=1}^{\infty}\frac{1}{n^4}.$$
Therefore
$$\frac{2\pi^4}{5} = \frac{2\pi^4}{9} + 16\sum_{n=1}^{\infty}\frac{1}{n^4},$$
and so
$$\sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{\pi^4}{90}.$$
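A quick partial-sum check of this value (a Python sketch; the truncation level is our own, arbitrary choice — the tail after N terms is smaller than 1/(3N³)):

```python
import math

# Partial sum of sum 1/n^4, which should approach pi^4/90.
s = sum(1.0 / n ** 4 for n in range(1, 10001))
print(s, math.pi ** 4 / 90)
```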

We close this section with an application of the Fourier series in determining


a particular solution of a differential equation describing a physical system in
which the input driving force is a periodic function.
Periodically Forced Oscillation. A spring/mass system with mass m, spring constant k and damping constant c is driven by a 2L-periodic external force f(t) (think, for example, of pushing a child on a swing). The differential equation which models this oscillation is given by
$$(1.3.3)\qquad m x''(t) + c x'(t) + k x(t) = f(t).$$

We know that the general solution x(t) of (1.3.3) is of the form

x(t) = xh (t) + xp (t),

where xh (t) is the general solution of the corresponding homogeneous equa-


tion
mx′′ (t) + cx′ (t) + kx(t) = 0,

and xp (t) is a particular solution of (1.3.3) (see Appendix D). For c > 0, the
solution x_h(t) will decay as time goes on. Therefore, we are mostly interested in finding a particular solution x_p(t) of (1.3.3) which does not decay and is periodic with the same period as f.
For simplicity, let us suppose that c = 0. The problem with c > 0 is very
similar. The general solution of the equation

mx′′ (t) + kx(t) = 0

is given by
$$x(t) = A\cos(\omega_0 t) + B\sin(\omega_0 t),$$
where $\omega_0 = \sqrt{k/m}$. Any solution of the non-homogeneous equation

mx′′ (t) + kx(t) = f (t)

will be of the form A cos(ω₀t) + B sin(ω₀t) + x_p(t). To find x_p we expand f(t) in a Fourier series
$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t\right].$$
We look for a particular solution x_p(t) of the form
$$x_p(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty}\left[c_n\cos\frac{n\pi}{L}t + d_n\sin\frac{n\pi}{L}t\right],$$
where the coefficients c_n and d_n are unknown. We substitute x_p(t) into the differential equation and solve for the coefficients c_n and d_n. This process is perhaps best understood by an example.
Example 1.3.8. Suppose that k = 2 and m = 1. There is a jetpack
strapped to the mass which fires with a force of 1 Newton for the first time
period of 1 second and is off for the next time period of 1 second, then it
fires with a force of 1 Newton for 1 second and is off for 1 second, and so
on. We need to find that particular solution which is periodic and which does
not decay with time.
Solution. The differential equation describing this oscillation is given by
$$x''(t) + 2x(t) = f(t),$$
where f(t) is the step function
$$f(t) = \begin{cases} 1, & 0 < t < 1 \\ 0, & 1 < t < 2, \end{cases}$$

extended periodically on the whole real line R. The Fourier series of f(t) is given by
$$f(t) = \frac12 + \sum_{n=1}^{\infty}\frac{2}{(2n-1)\pi}\sin(2n-1)\pi t.$$

Now we look for a particular solution x_p(t) of the given differential equation of the form
$$x_p(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(n\pi t) + b_n\sin(n\pi t)\right].$$
If we substitute x_p(t) and the Fourier expansion of f(t) into the differential equation x″(t) + 2x(t) = f(t), by comparison we first find a_n = 0 for n ≥ 1, since there are no corresponding cosine terms in the series for f(t). Similarly we find b_{2n} = 0 for n ≥ 1. Therefore
$$x_p(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}b_{2n-1}\sin(2n-1)\pi t.$$

Differentiating x_p(t) twice, after rearrangement we find
$$x_p''(t) + 2x_p(t) = a_0 + \sum_{n=1}^{\infty}b_{2n-1}\left[2-(2n-1)^2\pi^2\right]\sin(2n-1)\pi t.$$
If we compare the above series with the Fourier series obtained for f(t) (the Uniqueness Theorem for Fourier series), we have a₀ = 1/2 and, for n ≥ 1,
$$b_{2n-1} = \frac{2}{(2n-1)\pi\left[2-(2n-1)^2\pi^2\right]}.$$

So the required particular solution x_p(t) has the Fourier series
$$x_p(t) = \frac14 + \sum_{n=1}^{\infty}\frac{2}{(2n-1)\pi\left[2-(2n-1)^2\pi^2\right]}\sin(2n-1)\pi t.$$
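The particular solution can be sanity-checked by evaluating x_p″ + 2x_p at points where f is smooth; a Python sketch using a truncated series and central finite differences (the truncation level and step size are our own choices):

```python
import math

def xp(t, M=400):
    # Truncated particular solution from Example 1.3.8.
    s = 0.25
    for n in range(1, M + 1):
        k = 2 * n - 1
        s += 2 / (k * math.pi * (2 - k ** 2 * math.pi ** 2)) * math.sin(k * math.pi * t)
    return s

def residual(t, h=1e-5):
    # x'' by central differences; x'' + 2x should reproduce f(t).
    xdd = (xp(t - h) - 2 * xp(t) + xp(t + h)) / h ** 2
    return xdd + 2 * xp(t)

print(residual(0.3), residual(1.6))   # ~1 on (0, 1), ~0 on (1, 2)
```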

Resonance of Periodically Forced Oscillation. Again let us consider the


equation
mx′′ (t) + kx(t) = f (t).
If it happens that the general solution of the corresponding homogeneous
equation is of the form

xh (t) = A cos ωN t + B sin ωN t



where ωN = N π/L for some natural number N , then some of the terms
in the Fourier expansion of f (t) will coincide with the solution xh (t). In
this case we have to modify the form of the particular solution xp (t) in the
following way:
$$x_p(t) = \frac{a_0}{2} + t\left[a_N\cos\omega_N t + b_N\sin\omega_N t\right] + \sum_{\substack{n=1\\ n\neq N}}^{\infty}\left[a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t\right].$$

In other words, we multiply the duplicating term by t. Notice that the expan-
sion of xp (t) is no longer a Fourier series. After that we proceed as before.
Let us take an example.
Example 1.3.9. Find that particular solution which is periodic and does not decay in time of the equation
$$2x''(t) + 18\pi^2 x(t) = f(t),$$
where f(t) is the step function
$$f(t) = \begin{cases} 1, & 0 < t < 1 \\ -1, & 1 < t < 2, \end{cases}$$
extended periodically with period 2 on the whole real line R.
Solution. The Fourier series of f(t) is
$$f(t) = \sum_{n=1}^{\infty}\frac{4}{(2n-1)\pi}\sin(2n-1)\pi t.$$

The general solution of the given nonhomogeneous differential equation is of the form
$$x(t) = A\cos 3\pi t + B\sin 3\pi t + x_p(t),$$
where x_p(t) is a particular solution of the nonhomogeneous equation.
If we try, as before, an x_p given by a Fourier series, it will not work, since the sin 3πt term of the forcing duplicates a solution of the homogeneous equation. Therefore we look for a particular solution x_p of the form
$$x_p(t) = t\left(a_3\cos 3\pi t + b_3\sin 3\pi t\right) + \sum_{\substack{n=0\\ n\neq 1}}^{\infty}b_{2n+1}\sin(2n+1)\pi t.$$

Differentiating x_p twice and substituting into the nonhomogeneous equation, along with the Fourier expansion of f(t), we get
$$\sum_{\substack{n=0\\ n\neq 1}}^{\infty}\left[-2(2n+1)^2\pi^2 + 18\pi^2\right]b_{2n+1}\sin(2n+1)\pi t - 12 a_3\pi\sin 3\pi t + 12 b_3\pi\cos 3\pi t = \sum_{n=1}^{\infty}\frac{4}{(2n-1)\pi}\sin(2n-1)\pi t.$$

Comparing the corresponding coefficients we find
$$a_3 = -\frac{1}{9\pi^2}, \qquad b_3 = 0,$$
$$b_{2n+1} = \frac{2}{\pi^3(2n+1)\left[9-(2n+1)^2\right]}, \qquad n = 0, 2, 3, \ldots.$$
Therefore,
$$x_p(t) = -\frac{1}{9\pi^2}\,t\cos(3\pi t) + \sum_{\substack{n=0\\ n\neq 1}}^{\infty}\frac{2}{\pi^3(2n+1)\left[9-(2n+1)^2\right]}\sin(2n+1)\pi t.$$
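The resonant term can be checked in isolation: the function −t cos(3πt)/(9π²) should reproduce exactly the sin 3πt component of the forcing, whose amplitude is 4/(3π). A Python sketch using central finite differences (the step size and sample points are our own choices):

```python
import math

def xr(t):
    # The resonant (secular) term of the particular solution.
    return -t * math.cos(3 * math.pi * t) / (9 * math.pi ** 2)

def lhs(t, h=1e-5):
    # 2 x'' + 18 pi^2 x, with x'' by central differences.
    xdd = (xr(t - h) - 2 * xr(t) + xr(t + h)) / h ** 2
    return 2 * xdd + 18 * math.pi ** 2 * xr(t)

for t in (0.25, 1.1):
    print(lhs(t) - 4 / (3 * math.pi) * math.sin(3 * math.pi * t))   # ~0
```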

Exercises for Section 1.3.


1. Expand the given function in a Fourier series and, using the Parseval identity, find the sum of the given series.
(a) $f(x) = \dfrac{x}{2}$, $|x| < \pi$; $\quad\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^2}$.
(b) $f(x) = x^3 - \pi^2 x$, $|x| < \pi$; $\quad\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^6}$.
2. Evaluate the following series by applying the Parseval equation to an
appropriate Fourier expansion:

(a) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^4}$.
(b) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$.
(c) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{(2n-1)^4}$.
(d) $\displaystyle\sum_{n=1}^{\infty}\frac{1}{(4n^2-1)^2}$.

3. Let f be the 4-periodic function defined on (−2, 2) by


$$f(x) = \begin{cases} 2x + x^2, & -2 < x < 0 \\ 2x - x^2, & 0 < x < 2. \end{cases}$$
(a) Show that f is an odd function.
(b) Show that the Fourier series of f is
$$f(x) = \frac{32}{\pi^3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^3}\sin\frac{(2n-1)\pi x}{2}, \quad x \in \mathbb{R}\setminus\{\pm2, \pm4, \pm6, \ldots\}.$$

(c) Use the Parseval identity to show that
$$\frac{\pi^6}{960} = \sum_{n=1}^{\infty}\frac{1}{(2n-1)^6}.$$

4. Evaluate the following series by applying the Parseval identity to certain Fourier expansions.
(a) $\displaystyle\sum_{n=1}^{\infty}\frac{n^2}{(n^2+1)^2}$.
(b) $\displaystyle\sum_{n=1}^{\infty}\frac{\sin^2 na}{n^2}$, $0 < |a| < \pi$.

5. Show that each of the following Fourier series expansions is valid in the range indicated and, for each expansion, apply the Parseval identity.
(a) $x\cos x = -\dfrac12\sin x + 2\displaystyle\sum_{n=2}^{\infty}(-1)^n\frac{n}{n^2-1}\sin nx$, $-\pi < x < \pi$.
(b) $x\sin x = 1 - \dfrac12\cos x - 2\displaystyle\sum_{n=2}^{\infty}(-1)^n\frac{1}{n^2-1}\cos nx$, $-\pi \le x \le \pi$.

6. Let
$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n^3}\cos(nx).$$

(a) Is the function f continuous and differentiable everywhere?

(b) Find the derivative f ′ (wherever it exists) and justify your answer.
(c) Answer the same questions for the second derivative f ′′ .

7. Let
$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n}\cos(nx).$$
(a) Is the function f continuous and differentiable everywhere?
(b) Find the derivative f′ (wherever it exists) and justify your answer.

8. Suppose that (a_n) and (b_n) are sequences that tend to 0 as n → ∞. If c is any positive number, show that the series
$$a_0 + \sum_{n=1}^{\infty}e^{-nc}\left[a_n\cos nx + b_n\sin nx\right]$$
converges uniformly and absolutely on the whole real line R.

9. Expand the function
$$f(x) = \begin{cases} a^{-2}(a - |x|), & |x| < a \\ 0, & a < |x| < \pi \end{cases}$$
in a Fourier series and, using the Parseval identity, find the sum of the series
$$\sum_{n=1}^{\infty}\frac{(1-\cos na)^2}{n^4}.$$

10. For x ∈ R, let f (x) = | sin x|.

(a) Plot the graph of f .

(b) Find the Fourier series of f .

(c) At each point x find the sum of the Fourier series. Where does
the Fourier series converge uniformly?

(d) Compute f ′ and find the Fourier series of f ′ .

(e) Show that, for 0 ≤ x ≤ π,
$$\frac{\pi}{4}\left(\cos x - 1\right) + \frac{x}{2} = \sum_{n=1}^{\infty}\frac{\sin 2nx}{(2n-1)2n(2n+1)}.$$
(f) Show that
$$\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{(4n-3)(4n-2)(4n-1)} = \frac{\pi}{8}\left(\sqrt{2}-1\right).$$

11. Define the function f by
$$f(x) = \begin{cases} 0, & -\pi \le x \le 0 \\ \sin x, & 0 \le x \le \pi. \end{cases}$$

(a) Plot the function f .

(b) Find the Fourier series of f .

(c) At what points does the Fourier series converge? Where is the
convergence uniform?

(d) Integrate the Fourier series for f term by term and thus find the Fourier series of
$$F(x) = \int_0^x f(t)\,dt.$$

12. Let α be any positive number which is not an integer. Define f by


f (x) = sin(αx) for x ∈ (−π, π). Let f (−π) = f (π) = 0 and extend
f periodically.

(a) Plot the function f when 0 < α < 1 and when 1 < α < 2.

(b) Show that the Fourier series of f is
$$f(x) = \frac{2\sin(\alpha\pi)}{\pi}\sum_{n=1}^{\infty}(-1)^n\frac{n\sin(nx)}{\alpha^2-n^2}.$$

(c) Prove that the Fourier series converges uniformly to f on any


interval [a, b] that does not contain an odd multiple of π.

(d) Let x = π/2, t = α/2. Use the formula sin 2πt = 2 sin πt cos πt in order to show that
$$\pi\sec\pi t = -4\sum_{n=1}^{\infty}\frac{(-1)^{n+1}(2n-1)}{4t^2-(2n-1)^2}.$$

13. For which real numbers α is the series
$$\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}}\cos nx$$
uniformly convergent on each interval?

14. For which positive numbers a does the function f (x) = |x|−a , 0 <
|x| < π, have a Fourier series?

15. Does there exist an integrable function on the interval (−π, π) that
has the series
$$\sum_{n=1}^{\infty}\sin nx$$

as its Fourier series?

16. Solve the following differential equations by Fourier series. The forcing
function f is the periodic function given by
{
1, 0 < t < π
f (t) =
0, π < t < 2π

and f (t) = f (t + 2π) for every other t.

(a) y ′′ − y = f (t).

(b) y ′′ − 3y ′ + 2y = f (t).

17. Solve the following differential equations by complex Fourier series.


The forcing function f is the periodic function given by

f (t) = |t|, −π < t < π

and f (t) = f (t + 2π) for any other t.

(a) y ′′ + 9y = f (t).

(b) y ′′ + 2y = f (t).

18. An object radiating into its surroundings has a temperature y(t)


governed by the equation

$$y'(t) + k y(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega t) + b_n \sin(n\omega t) \right],$$

where k is the heat loss coefficient and the Fourier series describes
the temporal variation of the atmospheric air temperature and the
effective sky temperature. Find y(t) if y(0) = T0 .

1.4 Fourier Sine and Cosine Series.


Very often, a function f is given only on an interval [0, L] but still we
want the function f to be represented in the form of a Fourier series. There
are many ways of doing this. One way, for example, is to define a new function
which is equal to 0 on [−L, 0] and coincides with the function f on [0, L].
But two other ways are especially simple and useful: we extend the given
function to a function which is defined on the interval [−L, L] by making the
extended function either odd or even.
Definition 1.4.1. Let f be a given function on (0, L). The odd extension
fo of f is defined by

$$f_o(x) = \begin{cases} -f(-x), & -L \le x < 0 \\ f(x), & 0 \le x \le L. \end{cases}$$

The even extension fe of f is defined by

$$f_e(x) = \begin{cases} f(-x), & -L \le x < 0 \\ f(x), & 0 \le x \le L. \end{cases}$$

Graphically, the even extension is made by reflecting the graph of the


original function around the vertical axis, and the odd extension is made by
reflecting the graph of the original function around the coordinate origin. See
Figure 1.4.1.

Figure 1.4.1 (the odd extension fo and the even extension fe of f)

After we extend the original function f to an even or odd function, we


extend these new functions fo and fe on the whole real line to 2L-periodic
functions by
fe (x + 2nL) = fe (x), x ∈ [−L, L], n = 0, ±1, ±2, . . .,
fo (x + 2nL) = fo (x), x ∈ [−L, L], n = 0, ±1, ±2, . . ..
Notice that if the original function f is piecewise continuous or piecewise
smooth on the interval [0, L], then the 2L-periodic extensions fe and fo
are piecewise continuous or piecewise smooth (possibly with extra points of
discontinuity at nL) on the whole real line R.
Now we expand the functions fo and fe in Fourier series. The advantage
of using these functions is that the Fourier coefficients are simple. Indeed, it
follows from Lemma 1.1.3 that
$$\int_{-L}^{L} f_e(x) \cos\frac{n\pi x}{L}\, dx = 2 \int_{0}^{L} f(x) \cos\frac{n\pi x}{L}\, dx, \qquad \int_{-L}^{L} f_e(x) \sin\frac{n\pi x}{L}\, dx = 0,$$

$$\int_{-L}^{L} f_o(x) \sin\frac{n\pi x}{L}\, dx = 2 \int_{0}^{L} f(x) \sin\frac{n\pi x}{L}\, dx, \qquad \int_{-L}^{L} f_o(x) \cos\frac{n\pi x}{L}\, dx = 0.$$

Therefore the Fourier series of fe involves only cosines and the Fourier series
of fo involves only sines; moreover, the Fourier coefficients of these functions
can be computed in terms of the value of the original function f over the
interval [0, L].
The above discussion naturally leads to the following definition.
Definition 1.4.2. Suppose that f is an integrable function on [0, L]. The
series

$$\frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi x}{L}, \qquad a_n = \frac{2}{L} \int_{0}^{L} f(x) \cos\frac{n\pi x}{L}\, dx,$$

is called the half-range Fourier cosine series (Fourier cosine series) of f on
[0, L].
The series

$$\sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}, \qquad b_n = \frac{2}{L} \int_{0}^{L} f(x) \sin\frac{n\pi x}{L}\, dx,$$

is called the half-range Fourier sine series (Fourier sine series) of f on [0, L].
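The coefficient formulas can be checked numerically. As an illustrative sketch (our own midpoint-rule helper in Python; the book itself works in Mathematica):

```python
import math

def half_range_coeff(f, L, n, kind, steps=20000):
    """Midpoint-rule approximation of a half-range coefficient on [0, L]:
    kind "cos": a_n = (2/L) * integral of f(x) cos(n pi x / L),
    kind "sin": b_n = (2/L) * integral of f(x) sin(n pi x / L)."""
    h = L / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        w = (math.cos(n * math.pi * x / L) if kind == "cos"
             else math.sin(n * math.pi * x / L))
        total += f(x) * w
    return 2.0 / L * total * h

# For f(x) = x on [0, pi], the sine coefficients are b_n = 2(-1)^(n+1)/n,
# so b_1 should come out close to 2 and b_2 close to -1.
b1 = half_range_coeff(lambda x: x, math.pi, 1, "sin")
```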

Example 1.4.1. Find the Fourier sine and cosine series of the function

$$f(x) = \begin{cases} 1, & 0 < x < 1 \\ 2, & 1 < x < 2 \end{cases}$$

on the interval [0, 2].
Solution. In this example L = 2, and using very simple integration in evalu-
ating an and bn we obtain the Fourier sine series

$$\frac{2}{\pi} \sum_{n=1}^{\infty} \frac{1 + \cos\frac{n\pi}{2} - 2(-1)^n}{n} \sin\frac{n\pi x}{2},$$

and the Fourier cosine series

$$\frac{3}{2} + \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n}{2n-1} \cos\frac{(2n-1)\pi x}{2}.$$

Example 1.4.2. Find the Fourier sine and Fourier cosine series of the func-
tion f (x) = sin x on the interval [0, π].
Solution. In this example L = π. It is obvious that the Fourier sine series
of this function is simply sin x. After simple calculations we obtain that the
Fourier cosine series is given by

$$\frac{2}{\pi} - \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{1}{4n^2 - 1} \cos 2nx.$$

Since the Fourier sine and Fourier cosine series are only particular Fourier
series, all the theorems for convergence of Fourier series are also true for
Fourier sine and Fourier cosine series. Therefore we have the following results.

Theorem 1.4.1. If f is a piecewise smooth function on the interval [0, L],


then the Fourier sine and Fourier cosine series of f converge to

$$\frac{f(x^-) + f(x^+)}{2}$$

at every x ∈ (0, L). In particular, both series converge to f (x) at every point
x ∈ (0, L) where f is continuous.
The Fourier cosine series of f converges to f (0+ ) at x = 0 and to f (L− )
at x = L.
The Fourier sine series of f converges to 0 at both of these points.

Example 1.4.3. Apply Theorem 1.4.1 to the function in Example 1.4.1.


Solution. Since the function f (x) in this example is continuous on the set
(0, 1) ∪ (1, 2), from Theorem 1.4.1, applied to the Fourier cosine series, we
obtain the following:

$$f(x) = 1 = \frac{3}{2} + \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n}{2n-1} \cos\frac{(2n-1)\pi x}{2} \quad \text{for every } 0 < x < 1,$$

$$f(x) = 2 = \frac{3}{2} + \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n}{2n-1} \cos\frac{(2n-1)\pi x}{2} \quad \text{for every } 1 < x < 2.$$

Theorem 1.4.2 (Parseval’s Formula). If f is a square integrable function


on [0, π], and an and bn are the Fourier coefficients of the Fourier cosine
and Fourier sine expansion of f , respectively, then

$$\int_{0}^{\pi} f^2(x)\, dx = \frac{\pi}{4} a_0^2 + \frac{\pi}{2} \sum_{n=1}^{\infty} a_n^2 = \frac{\pi}{2} \sum_{n=1}^{\infty} b_n^2.$$
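Parseval's formula is easy to sanity-check numerically. For instance, for f(x) = x on [0, π] the sine coefficients are the classical b_n = 2(−1)^{n+1}/n, and the two sides agree (an illustrative Python sketch):

```python
import math

# Parseval check for f(x) = x on [0, pi]: its sine coefficients are
# b_n = 2(-1)^(n+1)/n, so the integral of f^2, which is pi^3/3,
# should equal (pi/2) * sum of b_n^2 = (pi/2) * sum of 4/n^2.
lhs = math.pi ** 3 / 3
rhs = math.pi / 2 * sum((2.0 / n) ** 2 for n in range(1, 200001))
```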

Example 1.4.4. Find the Fourier sine and Fourier cosine series of the func-
tion f defined on [0, π] by
$$f(x) = \begin{cases} x, & 0 \le x \le \frac{\pi}{2} \\ \pi - x, & \frac{\pi}{2} \le x \le \pi. \end{cases}$$

Solution. For a0 we have

$$a_0 = \frac{2}{\pi} \int_{0}^{\pi} f(x)\, dx = \frac{2}{\pi} \left[ \int_{0}^{\pi/2} x\, dx + \int_{\pi/2}^{\pi} (\pi - x)\, dx \right] = \frac{\pi}{2}.$$

For n = 1, 2, . . . by the integration by parts formula we have

$$a_n = \frac{2}{\pi} \int_{0}^{\pi} f(x) \cos nx\, dx = \frac{2}{\pi} \int_{0}^{\pi/2} x \cos nx\, dx + \frac{2}{\pi} \int_{\pi/2}^{\pi} (\pi - x) \cos nx\, dx$$
$$= \frac{4}{n^2 \pi} \cos\frac{n\pi}{2} - \frac{2}{n^2 \pi} (1 + \cos n\pi).$$

Therefore the Fourier cosine series of f on [0, π] is given by

$$\frac{\pi}{4} + \sum_{n=1}^{\infty} \left[ \frac{4}{n^2 \pi} \cos\frac{n\pi}{2} - \frac{2}{n^2 \pi} (1 + \cos n\pi) \right] \cos nx.$$
The plot of the first two partial sums of the Fourier cosine series of f , along
with the plot of the function f , is given in Figure 1.4.2.

Figure 1.4.2 (partial sums N = 1 and N = 2, with f, on [0, π])

The plot of the 10th partial sum of the Fourier cosine series of f , along
with the plot of the function f , is given in Figure 1.4.3.

Figure 1.4.3 (partial sum N = 10, with f, on [0, π])

The computation of the coefficients bn in the Fourier sine series is like


that of the coefficients an , and we find that the Fourier sine series of f (x) is

$$\sum_{n=1}^{\infty} \frac{4}{n^2 \pi} \sin\frac{n\pi}{2} \sin nx.$$
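A short numerical check of these sine coefficients (illustrative Python; helper names are ours):

```python
import math

def triangle(x):
    # f(x) = x on [0, pi/2] and pi - x on [pi/2, pi]
    return x if x <= math.pi / 2 else math.pi - x

def bn_numeric(n, steps=20000):
    # b_n = (2/pi) * integral_0^pi f(x) sin(nx) dx, midpoint rule
    h = math.pi / steps
    s = sum(triangle((k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
            for k in range(steps))
    return 2.0 / math.pi * s * h

def bn_closed(n):
    # coefficient in the sine series above: (4 / (pi n^2)) sin(n pi / 2)
    return 4.0 / (math.pi * n * n) * math.sin(n * math.pi / 2)
```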

The plot of the first two partial sums of the Fourier sine series of f , along
with the plot of the function f , is given in Figure 1.4.4

Figure 1.4.4 (partial sums N = 1 and N = 2, with f, on [0, π])

The plot of the 10th partial sum of the Fourier sine series of f , along with
the plot of the function f , is given in Figure 1.4.5.

Figure 1.4.5 (partial sum N = 10, with f, on [0, π])

The following question is quite natural: How do we know whether and


when to use the Fourier cosine or the Fourier sine series?

The answer to this question is related to Theorem 1.3.8. In order to have


better (faster) approximation of a given function, the periodic extension of
the function needs to be as smooth as possible. The smoother the extension,
the faster the Fourier series will converge.
In Example 1.4.4, the odd periodic extension did not produce a sharp turn
at x = 0 (the extension is smooth at the origin), while the even periodic
extension is not smooth at the origin.
Example 1.4.5. Find the half-range Fourier cosine and sine expansions of
the function f (x) = 1 on the interval (0, π).
Solution. It is obvious that the Fourier cosine series of the given function is 1.
For any n ∈ N we find

$$b_n = \frac{2}{\pi} \int_{0}^{\pi} \sin nx\, dx = -\frac{2}{n\pi} \left[ (-1)^n - 1 \right],$$

and so the Fourier sine expansion of the function is



$$\frac{4}{\pi} \sum_{n=1}^{\infty} \frac{1}{2n-1} \sin\big( (2n-1)x \big).$$

Example 1.4.6. Find the half-range Fourier cosine and sine expansions of
the function f (x) = x on the interval (0, 2).
Solution. For the Fourier cosine expansion of f we use the even extension
fe (x) = |x| of the function f . By computation we find the Fourier cosine
coefficients.
$$a_0 = \frac{2}{2} \int_{0}^{2} x\, dx = 2.$$

For n ∈ N, by the integration by parts formula, we have

$$a_n = \frac{2}{2} \int_{0}^{2} x \cos\frac{n\pi x}{2}\, dx = \frac{4}{n^2 \pi^2} \left[ (-1)^n - 1 \right],$$

and so the Fourier cosine expansion of the function is



$$1 - \frac{8}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{(2n-1)^2} \cos\frac{(2n-1)\pi x}{2}.$$

The plot of the first two partial sums of the Fourier cosine series of f , along
with the plot of the function f , is given in Figure 1.4.6.

Figure 1.4.6 (partial sums N = 1 and N = 2)

The plot of the 10th and the 20th partial sums of the Fourier cosine series
of f , along with the plot of the function f , is given in Figure 1.4.7.

Figure 1.4.7 (partial sums N = 10 and N = 20)

For the Fourier sine expansion of f we use the odd extension

fo (x) = x

of the function f .
Using the integration by parts formula we find
$$b_n = \frac{2}{2} \int_{0}^{2} x \sin\frac{n\pi x}{2}\, dx = \frac{4(-1)^{n+1}}{n\pi}.$$

The Fourier sine series of f is then


$$\sum_{n=1}^{\infty} \frac{4(-1)^{n+1}}{n\pi} \sin\frac{n\pi x}{2}.$$

Figure 1.4.8 (partial sums N = 1 and N = 2)

The plot of the first two partial sums of the Fourier sine series of f , along
with the plot of the function f , is given in Figure 1.4.8.

The plot of the 10th and the 20th partial sums of the Fourier sine series
of f , along with the plot of the function f , is given in Figure 1.4.9.

Figure 1.4.9 (partial sums N = 10 and N = 20)

Example 1.4.7. Suppose f is a piecewise continuous function on [0, π], such


that f (x) = f (π − x) for every x ∈ [0, π]. In other words, the graph of f
is symmetric with respect to the vertical line x = π/2. Let an and bn be
the Fourier cosine and Fourier sine coefficients of f , respectively. Show that
an = 0 if n is odd, and bn = 0 if n is even.

Solution. We show only that the coefficients an have the required property,

and leave the other part as an exercise. Let n be a natural number. Then

$$a_{2n-1} = \frac{2}{\pi} \int_{0}^{\pi} f(x) \cos(2n-1)x\, dx$$
$$= \frac{2}{\pi} \int_{0}^{\pi/2} f(x) \cos(2n-1)x\, dx + \frac{2}{\pi} \int_{\pi/2}^{\pi} f(x) \cos(2n-1)x\, dx$$
$$= \frac{2}{\pi} \int_{0}^{\pi/2} f(x) \cos(2n-1)x\, dx + \frac{2}{\pi} \int_{\pi/2}^{\pi} f(\pi - x) \cos(2n-1)x\, dx.$$

If in the last integral we introduce a new variable t by t = π − x, then we


have
$$a_{2n-1} = \frac{2}{\pi} \int_{0}^{\pi/2} f(x) \cos(2n-1)x\, dx + \frac{2}{\pi} \int_{0}^{\pi/2} f(t) \cos\big( (2n-1)(\pi - t) \big)\, dt$$
$$= \frac{2}{\pi} \int_{0}^{\pi/2} f(x) \cos(2n-1)x\, dx - \frac{2}{\pi} \int_{0}^{\pi/2} f(t) \cos(2n-1)t\, dt = 0.$$

Example 1.4.8. Let f be a function which has a continuous first derivative


on [0, π]. Further, let f (0) = f (π) = 0. Show that

$$\int_{0}^{\pi} f^2(x)\, dx \le \int_{0}^{\pi} \left( f'(x) \right)^2 dx.$$
0 0

For what functions f does equality hold?


Solution. Since f is continuous on [0, π] and f (0) = f (π) = 0, the
Fourier sine expansion of f on [0, π] is

$$f(x) = \sum_{n=1}^{\infty} a_n \sin nx.$$

By the Parseval identity we have

$$\int_{0}^{\pi} f^2(x)\, dx = \frac{\pi}{2} \sum_{n=1}^{\infty} a_n^2.$$
0

Since the Fourier series of the first derivative f ′ is

$$\sum_{n=1}^{\infty} n\, a_n \cos nx,$$

again from the Parseval identity we have

$$\int_{0}^{\pi} \left( f'(x) \right)^2 dx = \frac{\pi}{2} \sum_{n=1}^{\infty} n^2 a_n^2.$$
Now from the obvious inequality

$$\sum_{n=1}^{\infty} a_n^2 \le \sum_{n=1}^{\infty} n^2 a_n^2$$

it follows that

$$\int_{0}^{\pi} f^2(x)\, dx \le \int_{0}^{\pi} \left( f'(x) \right)^2 dx.$$
Equality in the above inequality holds if and only if

$$\sum_{n=1}^{\infty} a_n^2 = \sum_{n=1}^{\infty} n^2 a_n^2.$$

From the last equation it follows that an = 0 for n ≥ 2. Therefore
f (x) = a1 sin x.
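The inequality is easy to test numerically for a particular admissible f. Here is an illustrative Python sketch with the test function f(x) = sin x + 0.3 sin 3x (our choice; it satisfies f(0) = f(π) = 0):

```python
import math

def integral(g, a, b, steps=20000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

f = lambda x: math.sin(x) + 0.3 * math.sin(3 * x)
fp = lambda x: math.cos(x) + 0.9 * math.cos(3 * x)   # f'
lhs = integral(lambda x: f(x) ** 2, 0.0, math.pi)    # = (pi/2)(1 + 0.3^2) by Parseval
rhs = integral(lambda x: fp(x) ** 2, 0.0, math.pi)   # = (pi/2)(1 + 0.9^2) by Parseval
```

The gap between the two sides comes exactly from the n = 3 term, where n² a_n² > a_n².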

Example 1.4.9. Let f : [0, π] → R be the function given by


f (x) = x2 − 2x.
Find the Fourier cosine and Fourier sine series of the function f .
Solution. For the Fourier cosine series we have

$$a_0 = \frac{2}{\pi} \int_{0}^{\pi} f(x)\, dx = \frac{2}{\pi} \int_{0}^{\pi} (x^2 - 2x)\, dx = \frac{2}{3}\pi^2 - 2\pi,$$

and for n ≥ 1, by the integration by parts formula,

$$a_n = \frac{2}{\pi} \int_{0}^{\pi} (x^2 - 2x) \cos nx\, dx$$
$$= \frac{2}{n\pi} \left[ (x^2 - 2x) \sin nx \Big|_{x=0}^{x=\pi} - 2 \int_{0}^{\pi} (x - 1) \sin nx\, dx \right]$$
$$= \frac{4}{n^2 \pi} \Big[ (x - 1) \cos nx \Big]_{x=0}^{x=\pi} - \frac{4}{n^2 \pi} \int_{0}^{\pi} \cos nx\, dx$$
$$= \frac{4}{n^2 \pi} \left[ (-1)^n (\pi - 1) + 1 \right].$$

Therefore the Fourier cosine series of f on [0, π] is given by

$$\frac{\pi^2}{3} - \pi + \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n (\pi - 1) + 1}{n^2} \cos nx.$$

Since f is continuous on [0, π], by the Fourier cosine convergence theorem,
it follows that

$$x^2 - 2x = \frac{\pi^2}{3} - \pi + \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n (\pi - 1) + 1}{n^2} \cos nx$$

for every x ∈ [0, π].
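A numerical check of this expansion at a few interior points (illustrative Python; the function name is ours):

```python
import math

def cos_series(x, terms=20000):
    """Partial sum of pi^2/3 - pi + (4/pi) * sum ((-1)^n (pi-1) + 1)/n^2 * cos(nx),
    the Fourier cosine series of x^2 - 2x on [0, pi]."""
    s = math.pi ** 2 / 3 - math.pi
    for n in range(1, terms + 1):
        s += (4.0 / math.pi) * ((-1) ** n * (math.pi - 1) + 1.0) / (n * n) * math.cos(n * x)
    return s
```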


The plot of the first two partial sums of the Fourier cosine series of f , along
with the plot of the function f , is given in Figure 1.4.10.

Figure 1.4.10 (partial sums N = 1 and N = 2)

The plot of the 10th and the 20th partial sums of the Fourier cosine series
of f , along with the plot of the function f , is given in Figure 1.4.11.

Figure 1.4.11 (partial sums N = 10 and N = 20)

For the Fourier sine series of f we use the odd extension fo of f . Working
similarly as for the Fourier cosine series we obtain

$$x^2 - 2x = \sum_{n=1}^{\infty} \left\{ \frac{2(-1)^{n-1}(\pi - 2)}{n} - \frac{4}{\pi n^3} \left[ 1 - (-1)^n \right] \right\} \sin nx$$

for every x ∈ [0, π). (At x = π the sine series converges to 0, not to f (π).)

The plot of the first two partial sums of the Fourier sine series of f , along
with the plot of the function f , is given in Figure 1.4.12.

Figure 1.4.12 (partial sums N = 1 and N = 2)

The plot of the 10th and the 50th partial sums of the Fourier sine series
of f , along with the plot of the function f , is given in Figure 1.4.13.

Figure 1.4.13 (partial sums N = 10 and N = 20)

Exercises for Section 1.4.


In Exercises 1–5, find both the Fourier cosine series and the Fourier sine
series of the given function on the interval [0, π]. Find the values of these
series when x = 0 and x = π.

1. f (x) = π − x.

2. f (x) = sin x.

3. f (x) = cos x.

4. f (x) = x2 .
5. $f(x) = \begin{cases} x, & 0 \le x \le \pi/2 \\ \pi - x, & \pi/2 \le x \le \pi. \end{cases}$

In Exercises 6–11, find both the Fourier cosine series and the Fourier sine series
of the given function on the indicated interval.

6. f (x) = x; [0, 1].

7. f (x) = 3; [0, π].

8. $f(x) = \begin{cases} 1, & 0 < x < 2 \\ 0, & 2 < x < 4 \end{cases}$ ;  [0, 4].

9. $f(x) = \begin{cases} x, & 0 \le x \le 1 \\ 1, & 1 \le x \le 2 \end{cases}$ ;  [0, 2].

10. $f(x) = \begin{cases} 0, & 0 < x < 1 \\ 1, & 1 < x < 2 \end{cases}$ ;  [0, 2].

11. f (x) = ex ; [0, π].

1.5 Projects Using Mathematica.


In this section we will see how Mathematica can be used to solve many
problems involving Fourier series. In particular, we will develop several Math-
ematica notebooks which automate the computation of partial sums of Fourier
series. For a brief overview of the computer software Mathematica consult
Appendix H.
1.5 PROJECTS USING MATHEMATICA 75

Project 1. Let f be the 2π-periodic function which on the interval [−π, π]


is given by f (x) = |x|. Using Mathematica solve the following problems.
(a) Plot the function f on the interval [−3π, 3π].
(b) Find the Fourier coefficients of f .
(c) Plot several Fourier partial sums SN f .

Solution. First define an expression for the function we want to expand in a


Fourier series.
In[1] := ab[x_] := Piecewise[{{Abs[x], Abs[x] <= Pi}}]
We also want to plot the periodic extension. The following function will
replicate a function over several periods.
In[2] := periodicExtension[func_, nPeriods_] := Sum[func[x + 2 k Pi],
    {k, -nPeriods, nPeriods}]
In[3] := Plot[periodicExtension[ab, 4], {x, -4 Pi, 4 Pi}, PlotRange -> All]
In Figure 1.5.1 we plot the function on the interval [−4π, 4π].
Figure 1.5.1

Further, we define the Fourier basis consisting of the cosine and


sine functions:
In[4] := s[n_, x_] = Sin[n x]
In[5] := c[n_, x_] = Cos[n x]
Next we define the inner product (for this 2π-periodic example the integration
runs over [−π, π]):
In[6] := IP[f_, g_] := 1/Pi Integrate[f g, {x, -Pi, Pi}]
Next we define the Fourier coefficients an and bn :
In[7] := a0[func_] := IP[func[x], c[0, x]]
In[8] := aFC[func_, n_] := IP[func[x], c[n, x]]
In[9] := bFC[func_, n_] := IP[func[x], s[n, x]]
We define the N th Fourier partial sum.

In[10] := fourierSeries[func_, m_, x_] := a0[func]/2
    + Sum[aFC[func, n] c[n, x] + bFC[func, n] s[n, x], {n, 1, m}]
(The symbol N is reserved in Mathematica, so we use m for the number of terms.)
If an expression contains sin nπ, cos nπ, or sin((2n − 1)π/2), where n is an
integer, then we must tell Mathematica that n is an integer. This is done
as follows:
In[11] := Simplify[Sin[n Pi], Element[n, Integers]]
In[12] := Simplify[Cos[n Pi], Element[n, Integers]]
In[13] := Simplify[Sin[(2n - 1) Pi/2], Element[n, Integers]]
Next, compute the 1st , 5th , and the 50th partial sums:
In[14] := fs1[x_] = fourierSeries[ab, 1, x]
In[15] := fs5[x_] = fourierSeries[ab, 5, x]
In[16] := fs50[x_] = fourierSeries[ab, 50, x]
The plot of the partial sums of the Fourier series of f for N = 1 and N = 3,
along with the plot of the function f , is given in Figure 1.5.2.
Figure 1.5.2 (partial sums N = 1 and N = 3)

The plot of the partial sum of the Fourier series of f for N = 4, along
with the plot of the function f , is given in Figure 1.5.3.
Figure 1.5.3 (partial sum N = 4)

Project 2. Let f be the 4-periodic function which on the interval [−2, 2]


is given by

$$f(x) = \begin{cases} 2, & x = -2, -1, 0, 1, 2 \\ 3, & x \in (-2, -1) \cup (0, 1) \\ 1, & x \in (-1, 0) \cup (1, 2). \end{cases}$$
Using Mathematica solve the following problems.

(a) Plot the function f on the interval [−6, 6].

(b) Find the Fourier coefficients of f .

(c) Plot the Fourier partial sums SN f for several values of N .

(d) Investigate the Gibbs phenomenon at the point x = 1.

Solution. First we define our function f (matching the piecewise definition above):

In[1] := f[x_] := Piecewise[{{3, -2 < x < -1 || 0 < x < 1},
    {1, -1 < x < 0 || 1 < x < 2},
    {2, x == -2 || x == -1 || x == 0 || x == 1 || x == 2}}]
We also want to plot the periodic extension (displayed in Figure 1.5.4).
In[2] := periodicExtension[func_, nPeriods_] := Sum[func[x + 4 k],
    {k, -nPeriods, nPeriods}]
In[3] := Plot[periodicExtension[f, 4], {x, -6, 6},
    PlotRange -> {{-6, 6}, {0, 3.5}}]

-6 -4 -2 0 2 4 6

Figure 1.5.4

We define the Fourier basis consisting of the cosine and sine functions:
In[4] := s[n_, x_] = Sin[n Pi x/2]
In[5] := c[n_, x_] = Cos[n Pi x/2]
Next we define the inner product:

In[6] := IP[f_, g_] := 1/2 Integrate[f g, {x, -2, 2}]

Next we define the Fourier coefficients an and bn :

In[7] := a0[func_] := IP[func[x], c[0, x]]

In[8] := aFC[func_, n_] := IP[func[x], c[n, x]]

In[9] := bFC[func_, n_] := IP[func[x], s[n, x]]

We define the N th Fourier partial sum:

In[10] := fourierSeries[func_, m_, x_] := a0[func]/2
    + Sum[aFC[func, n] c[n, x] + bFC[func, n] s[n, x], {n, 1, m}]

In[11] := Simplify[Sin[n Pi], Element[n, Integers]]

In[12] := Simplify[Cos[n Pi], Element[n, Integers]]

In[13] := Simplify[Sin[(2n - 1) Pi/2], Element[n, Integers]]

Next, compute the 1st , 3rd , 5th , 10th and the 50th partial sums:

In[14] := fs1[x_] = fourierSeries[f, 1, x]

In[15] := fs3[x_] = fourierSeries[f, 3, x]

In[16] := fs5[x_] = fourierSeries[f, 5, x]

In[17] := fs10[x_] = fourierSeries[f, 10, x]

In[18] := fs50[x_] = fourierSeries[f, 50, x]

The plot of the partial sums of the Fourier series of f for N = 1 and
N = 3, along with the plot of the function f , is given in Figure 1.5.5.

N=1
N=3

-6 -4 -2 0 2 4 6

Figure 1.5.5

Figure 1.5.6 (partial sums N = 5 and N = 10)

Figure 1.5.7 (partial sum N = 50, near x = 2)

The plot of the 5th and the 10th partial sums of the Fourier series of f ,
along with the plot of the function f , is given in Figure 1.5.6.

The plot of the 50th partial sum of the Fourier series of f , along with the
plot of the function f , is given in Figure 1.5.7.

Project 3. Let f be the 2π-periodic function which on the interval [−π, π]


is defined by

$$f(x) = \begin{cases} 0, & x = 0, \frac{\pi}{2}, \pi \\ 0, & x \in (-\pi, 0) \cup (\frac{\pi}{2}, \pi) \\ 1, & x \in (0, \frac{\pi}{2}). \end{cases}$$
Using Mathematica solve the following problems:

(a) Plot the function f on the interval [−3π, 3π].

(b) Find the Fourier coefficients of f .

(c) Plot the Fourier partial sums SN f for several values of N .



(d) Investigate the Gibbs phenomenon at the point x = 0.

Solution. First we define the function by

In[1] := f[x_] := Piecewise[{{0, -Pi < x < 0}, {1, 0 < x < Pi/2},
    {0, x == 0}, {0, x == Pi/2}, {0, x == Pi}}]
Next we define the periodic extension of the function f (which has period 2π):
In[2] := periodicExtension[func_, nPeriods_] := Sum[func[x + 2 Pi k],
    {k, -nPeriods, nPeriods}]
We plot the function f on the segment [−3π, 3π].
See Figure 1.5.8.
In[3] := Plot[periodicExtension[f, 4], {x, -3 Pi, 3 Pi},
    PlotRange -> {{-3 Pi, 3 Pi}, {0, 1}}]

Figure 1.5.8

We compute the Fourier coefficients of the function f :


In[4] := IP[f_, g_] := 1/Pi Integrate[f g, {x, -Pi, Pi}]
In[5] := a0 = IP[f[x], 1]
Out[5] = 1/2
In[6] := a[n_] = IP[f[x], Cos[n x]]
Out[6] = Sin[n Pi/2]/(n Pi)
In[7] := b[n_] = IP[f[x], Sin[n x]]
Out[7] = 2 Sin[n Pi/4]^2/(n Pi)

We form the N th partial sum:

In[8] := SN = Table[a0/2 + Sum[a[n] Cos[n x] + b[n] Sin[n x],
    {n, 1, m}], {m, 1, 200}]

Figure 1.5.9 (partial sums N = 9 and N = 11)

The plot of the partial sums SN of the Fourier series of f for N = 9 and
N = 11, along with the plot of the function f , is given in Figure 1.5.9.

The plot of the partial sums of the Fourier series of f for N = 20 and
N = 30, along with the plot of the function f , is given in Figure 1.5.10.

Figure 1.5.10 (partial sums N = 20 and N = 30)

The plot of the partial sum of the Fourier series of f for N = 150, along
with the plot of the function f , is given in Figure 1.5.11.

The plot of the partial sum of the Fourier series of f for N = 500, along
with the function f , on the interval (−π/100, π/100) is given in Figure 1.5.12.

From these figures, in particular from Figure 1.5.12, we can clearly see the
Gibbs phenomenon in the neighborhood of the point x = 0.

Figure 1.5.11 (partial sum N = 150)

Figure 1.5.12 (partial sum N = 500 on (−π/100, π/100))
CHAPTER 2

INTEGRAL TRANSFORMS

Many functions in analysis are defined and expressed as improper integrals


of the form

$$F(y) = \int_{-\infty}^{\infty} K(x, y)\, f(x)\, dx.$$

A function F defined in this way (y may be either a real or a complex variable)


is called an integral transform of f . The function K which appears in the
integrand is referred to as the kernel of the transform.
Integral transforms are used very extensively in both pure and applied
mathematics, as well as in science and engineering. They are especially useful
in solving certain boundary value problems, partial differential equations and
some types of integral equations. An integral transform is a linear and in-
vertible transformation, and a partial differential equation can be reduced to
a system of algebraic equations by application of an integral transform. The
algebraic problem is easy to solve for the transformed function F , and the
function f can be recovered from F by some inversion formula.
Some of the more commonly used integral transforms are the following:

1. The Fourier Transform F{f } ≡ f̂ of a function f is defined by

$$\big(\mathcal{F}\{f\}\big)(\omega) \equiv \hat{f}(\omega) = \int_{-\infty}^{\infty} e^{-ix\omega} f(x)\, dx.$$

2. The Fourier Cosine Transform Fc {f } ≡ f̂c of a function f is defined by

$$\big(\mathcal{F}_c\{f\}\big)(\omega) \equiv \hat{f}_c(\omega) = \int_{0}^{\infty} \cos(\omega x)\, f(x)\, dx.$$

3. The Fourier Sine Transform Fs {f } ≡ f̂s of a function f is defined by

$$\big(\mathcal{F}_s\{f\}\big)(\omega) \equiv \hat{f}_s(\omega) = \int_{0}^{\infty} \sin(\omega x)\, f(x)\, dx.$$

84 2. INTEGRAL TRANSFORMS

4. The Laplace Transform L{f } ≡ f̂ of a function f is defined by

$$\big(\mathcal{L}\{f\}\big)(s) \equiv \hat{f}(s) = \int_{0}^{\infty} e^{-xs} f(x)\, dx.$$

5. The Hankel Transform Hn {f } of order n of a function f is defined by

$$\big(\mathcal{H}_n\{f\}\big)(s) \equiv \hat{f}_{H_n}(s) = \int_{0}^{\infty} x f(x) J_n(sx)\, dx,$$

where Jn is the Bessel function of order n of the first kind.
Sometimes, the integral transforms are defined with different constants (for
normalization purposes), particularly in the case of the Fourier transforms.
Since
e−ixy = cos (xy) − i sin (xy),
the Fourier sine and Fourier cosine transforms are special cases of the Fourier
transform in which the function f vanishes on the negative real axis.
The Laplace transform is also related to the Fourier transform. Indeed, if
we consider the complex number s = u + iv, where u and v are real, then
we have

$$\int_{0}^{\infty} e^{-xs} f(x)\, dx = \int_{0}^{\infty} e^{-ixv} e^{-xu} f(x)\, dx = \int_{0}^{\infty} e^{-ixv} \phi_u(x)\, dx,$$

where φu(x) = e^{−xu} f(x). Therefore the Laplace transform can also be re-
garded as a special case of the Fourier transform.

2.1 The Laplace Transform.

In this part we will study the Laplace transform and some of its appli-
cations. Many more applications of the Laplace transform will be discussed
later in the chapters dealing with partial differential equations.

2.1.1 Definition and Properties of the Laplace Transform.


Definition 2.1.1. Given a function f (t) defined for all t ≥ 0, the Laplace
transform of f is the function F (s) defined by
$$(2.1.1) \qquad \big(\mathcal{L}\{f\}\big)(s) = F(s) = \int_{0}^{\infty} e^{-st} f(t)\, dt$$
for all values of s for which the improper integral converges.

Usually we use the capital letters for the Laplace transform of a given
function f (t). For example, we write
L f (t) = F (s), L g(t) = G(s), L y(t) = Y (s).
2.1.1 DEFINITION AND PROPERTIES OF THE LAPLACE TRANSFORM 85

Example 2.1.1. Find the Laplace transform of the function


f (t) = 1, t ≥ 0.

Solution. By the definition of the Laplace transform we have

$$\big(\mathcal{L}\{1\}\big)(s) = \int_{0}^{\infty} e^{-st} \cdot 1\, dt = -\frac{1}{s} e^{-st} \Big|_{t=0}^{t=\infty} = \lim_{t\to\infty} \left( -\frac{1}{s} e^{-st} \right) + \frac{1}{s} = \frac{1}{s} \quad \text{for } s > 0,$$

and therefore,

$$\big(\mathcal{L}\{1\}\big)(s) = \frac{1}{s}, \quad \text{for } s > 0.$$
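The definition can be checked by direct numerical integration. An illustrative Python sketch (the helper is ours; the integral is truncated at a finite upper limit, which is harmless here because of the exponential decay):

```python
import math

def laplace_numeric(f, s, upper=40.0, steps=80000):
    """Midpoint-rule approximation of the Laplace integral
    of f, i.e. integral_0^inf e^(-st) f(t) dt, truncated at t = upper."""
    h = upper / steps
    return sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h)
               for k in range(steps)) * h

# L{1}(s) = 1/s: at s = 2 the integral should be close to 0.5
approx = laplace_numeric(lambda t: 1.0, 2.0)
```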

Example 2.1.2. Find the Laplace transform of the function


f (t) = ta , t ≥ 0, and a > −1.

Solution. By the definition of the Laplace transform we have

$$\big(\mathcal{L}\{t^a\}\big)(s) = \int_{0}^{\infty} e^{-st} t^a\, dt.$$

If we use the substitution u = st, then from the above equation we obtain

$$\big(\mathcal{L}\{t^a\}\big)(s) = \frac{1}{s^{a+1}} \int_{0}^{\infty} e^{-u} u^a\, du = \frac{\Gamma(a+1)}{s^{a+1}}, \quad s > 0,$$

where Γ is the Euler Gamma function. For a discussion of the Gamma function Γ
see Appendix G.
Since Γ(n + 1) = n! if n ∈ N ∪ {0}, from Example 2.1.2 we obtain

$$\big(\mathcal{L}\{t^n\}\big)(s) = \frac{n!}{s^{n+1}}, \quad \text{for } s > 0.$$

Below are some basic Laplace transforms.


$$\mathcal{L}\{\sin kt\}(s) = \frac{k}{s^2 + k^2}, \qquad \mathcal{L}\{\cos kt\}(s) = \frac{s}{s^2 + k^2},$$
$$\mathcal{L}\{\cosh kt\}(s) = \frac{s}{s^2 - k^2}, \qquad \mathcal{L}\{\sinh kt\}(s) = \frac{k}{s^2 - k^2},$$
$$\mathcal{L}\{e^{at}\}(s) = \frac{1}{s - a}.$$
A more comprehensive table of Laplace transforms is given in Table A of
Appendix A.
There are a number of useful operational properties of the Laplace trans-
form.

Theorem 2.1.1 Linear Property. If a1 and a2 are any constants, then

$$\mathcal{L}\{a_1 f_1 + a_2 f_2\} = a_1\, \mathcal{L}\{f_1\} + a_2\, \mathcal{L}\{f_2\}$$

for all functions f1 and f2 whose Laplace transforms exist.
Proof. The proof of this theorem follows immediately from the linearity prop-
erty of the operations integration and taking the limit. ■

Example 2.1.3. Find the Laplace transform of the function

$$f(t) = 3t^2 + 4t^{3/2}, \quad t \ge 0.$$

Solution. Using the linearity property of the Laplace transform and the result
of Example 2.1.2 we have

$$\big(\mathcal{L}\{3t^2 + 4t^{3/2}\}\big)(s) = 3\,\frac{2!}{s^3} + 4\,\frac{\Gamma(\frac{5}{2})}{s^{5/2}} = \frac{6}{s^3} + \frac{3\sqrt{\pi}}{s^{5/2}},$$

since

$$\Gamma\Big(\frac{5}{2}\Big) = \frac{3}{2} \cdot \frac{1}{2}\, \Gamma\Big(\frac{1}{2}\Big) = \frac{3}{4}\sqrt{\pi}.$$
(See Appendix G for the Gamma function).

Theorem 2.1.2 The First Shift Theorem. If F = L{f } and c is a
constant, then

$$\big(\mathcal{L}\{e^{-ct} f(t)\}\big)(s) = F(s + c).$$

Proof. The proof of this theorem follows immediately from the definition of
the Laplace transform. ■

The proofs of the next two theorems follow directly from the definition of
the Laplace transform.
Theorem 2.1.3 The Second Shift Theorem. If F = L{f } and c is a
positive constant, then

$$\big(\mathcal{L}\{f(t - c)\}\big)(s) = e^{-cs} F(s).$$

Theorem 2.1.4. If the Laplace transform of a function f is F , and c is a
positive constant, then

$$\big(\mathcal{L}\{f(ct)\}\big)(s) = \frac{1}{c}\, F\Big(\frac{s}{c}\Big).$$

In addition to the shifting theorems, there are two other useful theorems
that involve the derivative and integral of the Laplace transform F (s).

Theorem 2.1.5. If F = L{f }, then

$$F'(s) = -\big(\mathcal{L}\{t f(t)\}\big)(s),$$

and in general,

$$F^{(n)}(s) = (-1)^n \big(\mathcal{L}\{t^n f(t)\}\big)(s),$$

for any natural number n.

Proof. If we use the definition of the Laplace transform, then

$$F'(s) = \frac{d}{ds} \left( \int_{0}^{\infty} e^{-st} f(t)\, dt \right) = \int_{0}^{\infty} \frac{\partial}{\partial s} \left( e^{-st} f(t) \right) dt = -\int_{0}^{\infty} e^{-st}\, t f(t)\, dt = -\big(\mathcal{L}\{t f(t)\}\big)(s).$$

For n ∈ N use the principle of mathematical induction. ■

Theorem 2.1.6. If F = L{f }, then

$$\int_{s}^{\infty} F(z)\, dz = \left( \mathcal{L}\left\{ \frac{f(t)}{t} \right\} \right)(s).$$

Using these theorems, along with the Laplace transforms of some functions
listed in Table A of Appendix A, we can compute the Laplace transforms of
many other functions.
Example 2.1.4. Find the Laplace transform of

eat tn , n ∈ N.

Solution. Since

$$\big(\mathcal{L}\{t^n\}\big)(s) = \frac{n!}{s^{n+1}},$$

from Theorem 2.1.2 it follows that

$$\big(\mathcal{L}\{e^{at} t^n\}\big)(s) = \frac{n!}{(s - a)^{n+1}}, \quad s > a.$$

Example 2.1.5. Find the Laplace transform of


eat cos kt.

Solution. Since

$$\big(\mathcal{L}\{\cos kt\}\big)(s) = \frac{s}{s^2 + k^2},$$

from Theorem 2.1.2 it follows that

$$\big(\mathcal{L}\{e^{at} \cos kt\}\big)(s) = \frac{s - a}{(s - a)^2 + k^2}, \quad s > a.$$

Example 2.1.6. Find the Laplace transform of


t sin at.

Solution. Let f (t) = sin at and F = L{f }. From Theorem 2.1.5 we have

$$\big(\mathcal{L}\{t \sin at\}\big)(s) = -F'(s) = -\frac{d}{ds}\left( \frac{a}{s^2 + a^2} \right) = \frac{2as}{(s^2 + a^2)^2}.$$

Example 2.1.7. Find the Laplace transform of the following function.


$$f(t) = \frac{1 - \cos at}{t}.$$

Solution. If F = L{f }, then from Theorem 2.1.6 we have

$$F(s) = \int_{s}^{\infty} \big(\mathcal{L}\{1 - \cos at\}\big)(z)\, dz = \int_{s}^{\infty} \left( \frac{1}{z} - \frac{z}{z^2 + a^2} \right) dz$$
$$= \ln \frac{z}{\sqrt{z^2 + a^2}} \Big|_{z=s}^{z=\infty} = -\ln \frac{s}{\sqrt{s^2 + a^2}}.$$

Remark. The improper integral which defines the Laplace transform does
not have to converge. For example, neither

$$\mathcal{L}\Big\{\frac{1}{t}\Big\} \quad \text{nor} \quad \mathcal{L}\big\{e^{t^2}\big\}$$

exists.
Sufficient conditions which guarantee the existence of L {f } are that f be
piecewise continuous on [0, ∞) and that f be of exponential order. Recall
that a function f is piecewise continuous on [0, ∞) if f is continuous on any
closed bounded interval [a, b] ⊂ [0, ∞) except at finitely many points. The
concept of exponential order is defined as follows:

Definition 2.1.2. A function f is said to be of exponential order c if there
exist constants c > 0, M > 0 and T > 0 such that

$$|f(t)| \le M e^{ct} \quad \text{for all } t > T.$$

Remark. If f is an increasing function, then the statement that f is of


exponential order means that the graph of the function f on the interval
[T, ∞) does not grow faster than the graph of the exponential function M ect .
Every polynomial function, e^{−t}, and 2 cos t are a few examples of functions
which are of exponential order. The function e^{t²} is an example of a function
which is not of exponential order.
Theorem 2.1.7 Existence of Laplace Transform. If f is a piecewise
continuous function on [0, ∞) and of exponential order c, then the Laplace
transform ( )
F (s) = L {f } (s)
exists for s > c.
Proof. By the definition of the Laplace transform and the additive property
of definite integrals we have

$$\big(\mathcal{L}\{f(t)\}\big)(s) = \int_{0}^{T} e^{-st} f(t)\, dt + \int_{T}^{\infty} e^{-st} f(t)\, dt = I_1 + I_2.$$

Since f is of exponential order c, there exist constants c > 0, M > 0 and
T > 0 such that |f (t)| ≤ M e^{ct} for all t > T . Therefore

$$|I_2| \le \int_{T}^{\infty} |e^{-st} f(t)|\, dt \le M \int_{T}^{\infty} e^{-st} e^{ct}\, dt = M\, \frac{e^{-(s-c)T}}{s - c}, \quad s > c.$$

By the comparison test for improper integrals, the integral I2 converges for
s > c.
The integral I1 exists because it can be written as a finite sum of integrals
over intervals on which the function e^{−st} f (t) is continuous. This proves the
theorem. ■

In the proof of the above theorem we have also shown that

$$|F(s)| \le \int_{0}^{\infty} |e^{-st} f(t)|\, dt \le \frac{M}{s - c}, \quad s > c.$$
0

If we take limits from both sides of the above inequality as s → ∞, then


we obtain the following result.

Corollary 2.1.1. If f (t) satisfies the hypotheses of Theorem 2.1.7, then

$$(2.1.2) \qquad \lim_{s\to\infty} F(s) = 0.$$

The condition (2.1.2) restricts the functions that can be Laplace trans-
forms. For example, the functions s2 , es cannot be Laplace transforms of
any functions because their limits as s → ∞ are ∞, not 0.
On the other hand, the hypotheses of Theorem 2.1.7 are sufficient but not
necessary conditions for the existence of the Laplace transform. For example,
the function

$$f(t) = \frac{1}{\sqrt{t}}, \quad t > 0,$$

is not piecewise continuous on the interval [0, ∞) but nevertheless, from Ex-
ample 2.1.2 it follows that its Laplace transform

$$\left( \mathcal{L}\Big\{ \frac{1}{\sqrt{t}} \Big\} \right)(s) = \frac{\Gamma(\frac{1}{2})}{s^{1/2}} = \sqrt{\frac{\pi}{s}}$$

exists.
Up to this point, we were dealing with finding the Laplace transform of a
given function. Now we want to reverse the operation: For a given function
F (s) we want to find (if possible) a function f (t) such that
( )
L{f (t)} (s) = F (s).

Definition 2.1.3. The Inverse Laplace Transform of a given function F (s)


is a function f (t), if such a function exists, such that
L{f(t)}(s) = F(s).

We denote the inverse Laplace transform of F (s) by


L^{−1}{F(s)}(t).

Remark. For the inverse Laplace transform the linearity property holds.
The most common method of inverting the Laplace transform is by decom-
position into partial fractions, along with Table A.
Example 2.1.8. Find the inverse transform of
F(s) = 4/((s − 1)²(s² + 1)²).

Solution. We decompose F (s) into partial fractions as follows:


4/((s − 1)²(s² + 1)²) = A/(s − 1) + B/(s − 1)² + (Cs + D)/(s² + 1) + (Es + F)/(s² + 1)².
2.1.1 DEFINITION AND PROPERTIES OF THE LAPLACE TRANSFORM 91

By comparison, we find that the coefficients are A = −2, B = 1, C = 2,


D = 1, E = 2 and F = 0. Therefore

F(s) = −2/(s − 1) + 1/(s − 1)² + (2s + 1)/(s² + 1) + 2s/(s² + 1)².

If we recall the first shifting property (Theorem 2.1.2), and the familiar
Laplace transforms of the functions sin t and cos t, then we find that the
required inverse transform of the given function F (s) is

−2e^t + te^t + 2 cos t + sin t + t sin t.
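The partial fraction coefficients and the inverse transform can be confirmed by transforming the answer back; a sketch in Python's sympy (illustrative, not part of the text):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = -2*sp.exp(t) + t*sp.exp(t) + 2*sp.cos(t) + sp.sin(t) + t*sp.sin(t)
F = sp.laplace_transform(f, t, s, noconds=True)

# The transform of the claimed inverse reproduces the original F(s)
assert sp.simplify(F - 4/((s - 1)**2*(s**2 + 1)**2)) == 0
```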

Another method to recover a function f from its Laplace transform F is


to use the inversion formula with contour integration. The inversion formula
is given by the following theorem.
Theorem 2.1.8 The Laplace Inversion Formula. Suppose that f (t) is
a piecewise smooth function of exponential order c which vanishes for t < 0.
If F = L {f } and b > c, then

(2.1.3)  f(t) = (1/2πi) ∫_{b−i∞}^{b+i∞} F(z) e^{zt} dz := lim_{y→∞} (1/2πi) ∫_{b−iy}^{b+iy} F(z) e^{zt} dz,

at every point t of continuity of the function f .


Later in this chapter we will prove this theorem by an easy application of
the Fourier transform. This theorem can also be proved by Cauchy’s theorem
for analytic functions. (See Appendix F).
The line integral in the inversion formula (2.1.3) is called the Bromwich
integral. We choose b large enough so that the straight line Re(z) = b lies in
the half-plane where F (z) is an analytic function. Usually, the line integral
in the inversion formula is transformed into an integral along a closed contour
(Bromwich contour) and after that the Cauchy residue theorem is applied.
(See Appendix F for the Cauchy residue theorem.)
Let us take an example to illustrate the inversion formula.
Example 2.1.9. Find the inverse Laplace transform of

F(s) = 1/(s sinh πs).

Solution. Consider the function


F(z) = 1/(z sinh πz).

This function is analytic everywhere except at the point z = 0 and at all


points z for which
sinh πz ≡ (e^{πz} − e^{−πz})/2 = 0.
There are infinitely many solutions zn of the above equation, zn = ni, n =
0, ±1, ±2, . . .. The singularity at z = 0 is a double pole of F (z), and the
other singularities are simple poles.
Now, consider the contour integral

(2.1.4)  (1/2πi) ∮_C F(z) e^{tz} dz,

where C is the closed contour shown in Figure 2.1.1.

[Figure 2.1.1: the Bromwich contour — the vertical line Re(z) = b closed by the semicircle C_R of radius R, enclosing the poles 0, ±i, ±2i, ±3i, . . . on the imaginary axis.]

On the semicircle C_R = { z = R e^{iθ} : π/2 ≤ θ ≤ 3π/2 } we have

| ∫_{C_R} F(z) e^{tz} dz | ≤ 2 ∫_{π/2}^{3π/2} e^{Rt cos θ} / |e^{R cos θ} − e^{−R cos θ}| dθ.

Since cos θ ≤ 0 for π/2 ≤ θ ≤ 3π/2 and since t > 0, from the above inequality it follows that

(2.1.5)  lim_{R→∞} ∫_{C_R} F(z) e^{tz} dz = 0.

Now, taking R → ∞ in (2.1.4), from (2.1.5) and the residue theorem (see
Appendix F) we obtain

(2.1.6)  f(t) = (1/2πi) ∫_{b−i∞}^{b+i∞} F(z) e^{zt} dz = Σ_{n=−∞}^{∞} Res(F(z) e^{zt}, z = z_n).

Since z = 0 is a double pole of F (z), using the formula for residues (given
in Appendix F) and l’Hôpital’s rule we have
(2.1.7)  Res(F(z) e^{zt}, z = 0) = lim_{z→0} d/dz [ (z − 0)² e^{zt} / (z sinh πz) ] = t/π.

For the other simple poles z = zn = ±ni, n = 1, 2, . . ., using l’Hôpital’s rule


we find

(2.1.8)  Res(F(z) e^{zt}, z = z_n) = lim_{z→ni} (z − ni) e^{tz} / (z sinh πz) = (−1)^n e^{nit} / (πni).

If we substitute (2.1.7) and (2.1.8) into (2.1.6) we obtain (after small rear-
rangements)

f(t) = t/π + (2/π) Σ_{n=1}^{∞} ((−1)^n / n) sin nt.
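A quick numerical sanity check (Python, illustrative and not part of the text): expanding 1/sinh πs = 2(e^{−πs} + e^{−3πs} + · · ·) shows that f is the staircase 2 Σ_{k≥0} H(t − (2k+1)π), so for instance f(π/2) = 0 and f(2π) = 2; partial sums of the series above agree with these values.

```python
import math

def f_series(t, N=200000):
    # Partial sum of  t/pi + (2/pi) * sum_{n>=1} (-1)^n sin(nt)/n
    s = sum((-1)**n*math.sin(n*t)/n for n in range(1, N + 1))
    return t/math.pi + 2*s/math.pi

assert abs(f_series(math.pi/2)) < 1e-2       # f = 0 on (0, pi)
assert abs(f_series(2*math.pi) - 2) < 1e-2   # f = 2 on (pi, 3*pi)
```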

The next theorem, which is a consequence of Theorem 2.1.3, has theoretical


importance, particularly when solving initial boundary value problems using
a Laplace transform.
Theorem 2.1.9 (Uniqueness of the Inverse Laplace Transform). Let
the functions f (t) and g(t) satisfy the hypotheses of Theorem 2.1.7, so that
their Laplace transforms F (s) and G(s) exist for s > c. If F (s) = G(s),
then f (t) = g(t) at all points t where f and g are both continuous.

As a consequence of the previous theorem it follows that there is a one-


to-one correspondence between the continuous functions of exponential order
and their Laplace transforms.

Exercises for Section 2.1.1.

1. In Problems (a)–(g) apply the definition 2.1.1 to find directly the


Laplace transforms of the given functions.

(a) f (t) = t.

(b) f(t) = e^{3t+1}.

(c) f (t) = cos 2t.

(d) f(t) = sin² t.


{
1, 0 ≤ t ≤ 2
(e) f (t) =
0, t > 2.
{
t, 0 ≤ t < 2
(f) f (t) =
3, t > 2.
{
0, 0 ≤ t ≤ 1
(g) f (t) =
1, t > 1.
2. Use the shifting property of the Laplace transform to compute the
following:

(a) L {et sin 2t}.

(b) L{e−t cos 2t}.

(c) L {et cos 3t}.

(d) L {e−2t cos 4t}.

(e) L {e−t sin 5t}.

(f) L {et t}.

3. Using properties of the Laplace transform compute the Laplace trans-


forms of the following functions:

(a) f (t) = −18e3t .

(b) f (t) = t2 e−3t .

(c) f (t) = 3t sin 2t.



(d) f (t) = e−2t cos 7t.

(e) f (t) = 1 + cos 2t.

(f) f (t) = t3 sin 2t.

4. Use properties of the Gamma function (see Appendix G) to find the


Laplace transform of:

(a) f(t) = t^{−1/2}.

(b) f(t) = t^{1/2}.

5. In the following exercises, compute the inverse Laplace transform of


the given function.
(a) F(s) = 1/s⁶.

(b) F(s) = 1/(s² − 4s + 4).

(c) F(s) = 1/(s² + 9)².

(d) F(s) = 4/(s² − 6s + 9).

(e) F(s) = 1/(s² + 15s + 56).

(f) F(s) = 1/(s² + 16s + 36).

(g) F(s) = (2s − 7)/(2s² − 14s + 55).

(h) F(s) = (s + 2)/(s² + 4s + 12).
s + 4s + 12
6. Find the following inverse Laplace transforms.
(a) L^{−1}{ (2s − 5) / ((s − 1)³(s² + 4)) }.

(b) L^{−1}{ (3s − 5) / (s²(s² + 9)(s² + 1)) }.
7. Determine if F(s) is the Laplace transform of a piecewise continuous function of exponential order:

(a) F(s) = s/(4 − s).

(b) F(s) = s²/(s − 2)².

(c) F(s) = s²/(s² + 9).
8. Show that the following functions are of exponential order:

(a) f (t) = t2 .

(b) f (t) = tn , where n is any positive integer.

9. Show that the following functions are not of exponential order:


(a) f(t) = e^{t²}.

(b) f(t) = e^{t^n}, where n is any positive integer.

10. Show that

L^{−1}{ e^{−a√s} }(t) = (a / (2√(πt³))) e^{−a²/(4t)}.

2.1.2 Step and Impulse Functions.


In many applications in electrical and mechanical engineering functions
which are discontinuous or quite large in very small intervals are frequently
present. An important and very useful function in such physical situations is
the Heaviside function or unit step function. This function, denoted by H, is
defined by
{
0, t < 0
H(t) =
1, t ≥ 0.

The graph of the Heaviside function y = H(t − 2) is shown in Figure 2.1.2.

Figure 2.1.2

Remark. ua (t) is another notation for the Heaviside function H(t − a).
It is very easy to find the Laplace transform of H(t − a):

L{H(t − a)}(s) = ∫₀^∞ H(t − a) e^{−st} dt = ∫_a^∞ e^{−st} dt = e^{−as}/s.
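This formula can be checked numerically at sample values; a sketch using Python with scipy (the choices of s and a are our own, not from the text):

```python
import numpy as np
from scipy.integrate import quad

s, a = 1.3, 2.0  # illustrative sample values

# L{H(t-a)}(s) reduces to the integral of e^{-st} from a to infinity
val, _ = quad(lambda t: np.exp(-s*t), a, np.inf)
assert abs(val - np.exp(-a*s)/s) < 1e-8
```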

A useful observation for the Heaviside function is that the function


{
0, t<a
H(t − a)f (t − a) =
f (t − a), t ≥ a

is simply a translation of the function f (t).


The Heaviside step function can be used to express the Laplace transform
of a translation of a function f (t) and the Laplace transform of the function
f.

Theorem 2.1.10. Let F (s) = L{f (t)} exist for s > c ≥ 0, and let a > 0
be a constant. Then
L{H(t − a)f(t − a)}(s) = e^{−as} F(s),  s > c.

Proof. We apply the definition of the Laplace transform:

L{H(t − a)f(t − a)}(s) = ∫₀^∞ e^{−st} H(t − a)f(t − a) dt = ∫_a^∞ e^{−st} f(t − a) dt.

If we make the substitution v = t − a in the last integral, then we have

L{H(t − a)f(t − a)}(s) = ∫₀^∞ e^{−s(v+a)} f(v) dv = e^{−as} ∫₀^∞ e^{−sv} f(v) dv = e^{−as} F(s). ■

Example 2.1.10. Find the Laplace transform of the function f defined by


f(t) = { t,              0 ≤ t < 2,
       { t + (t − 2)²,   2 ≤ t < ∞.

Solution. Consider the following translation of the function t2 :


H(t − 2)(t − 2)² = { 0,          0 ≤ t < 2,
                   { (t − 2)²,   t ≥ 2.

Using this translation we have

f (t) = t + H(t − 2)(t − 2)2 .

Therefore,

L{f(t)}(s) = L{t}(s) + L{H(t − 2)(t − 2)²}(s) = L{t}(s) + e^{−2s} L{t²}(s).

Using the Laplace transforms of t and t2 from Table A in Appendix A, we


obtain

L{f(t)}(s) = 1/s² + e^{−2s} · 2/s³ = (s + 2e^{−2s})/s³.
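A numerical check of this transform at a sample point s (Python with scipy; illustrative only, not part of the text):

```python
import numpy as np
from scipy.integrate import quad

s = 1.5  # illustrative sample value

f = lambda t: t if t < 2 else t + (t - 2)**2
# Split the Laplace integral at the corner t = 2
I1, _ = quad(lambda t: np.exp(-s*t)*f(t), 0, 2)
I2, _ = quad(lambda t: np.exp(-s*t)*f(t), 2, np.inf)

# Compare with (s + 2 e^{-2s})/s^3
assert abs(I1 + I2 - (s + 2*np.exp(-2*s))/s**3) < 1e-8
```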

Example 2.1.11. Find the inverse Laplace transform of the function

F(s) = (1 + e^{−5s})/s⁴.

Solution. Since the inverse Laplace transform is linear we have


f(t) = L^{−1}{F(s)} = L^{−1}{1/s⁴} + L^{−1}{e^{−5s}/s⁴} = t³/3! + H(t − 5)(t − 5)³/3!.

Now, we will introduce (only intuitively) a “function” which is of great


importance in many disciplines, such as quantum physics, electrostatics and
mathematics.

Figure 2.1.3

Let us consider a function f (t) which has relatively big values in a relatively
short interval around the origin t = 0 and it is zero outside that interval. For
natural numbers n, define the functions fn (t) by
(2.1.9)  f_n(t) = { n,   |t| < 1/(2n),
                  { 0,   |t| ≥ 1/(2n).

Figure 2.1.3 shows the graphs of fn for several values of n.


Each of the functions f_n(t) has the following properties:

lim_{n→∞} f_n(t) = 0,  t ≠ 0;

(2.1.10)  I_n ≡ ∫_{−∞}^{∞} f_n(t) dt = ∫_{−1/(2n)}^{1/(2n)} f_n(t) dt = 1,  n = 1, 2, . . .

Now, since In = 1 for every n ∈ N, it follows that

(2.1.11)  lim_{n→∞} I_n = 1.

Using equations (2.1.10) and (2.1.11) we can “define” the Dirac delta function δ(t), concentrated at t = 0, by

δ(t) = 0,  t ≠ 0;   ∫_{−∞}^{∞} δ(t) dt = 1.

The Dirac delta function, concentrated at any other point t = a is defined


by δ(t − a).
It is obvious that there is no function that satisfies both of the above
conditions.
Even though the Dirac function δ is not at all an ordinary function, its
Laplace transform can be defined. Namely, one of the ways to define the
Laplace transform of the Dirac delta function δ(t − a) is to use the limiting
process:
(2.1.12)  L{δ(t − a)} = lim_{n→∞} L{f_n(t − a)},

where fn are the functions defined by (2.1.9).


If we suppose that a > 0, then a − 1/(2n) > 0 for some n ∈ N and so from (2.1.9) and the definition of the Laplace transform, for these values of n we have
L{f_n(t − a)} = ∫₀^∞ e^{−st} f_n(t − a) dt = ∫_{a−1/(2n)}^{a+1/(2n)} e^{−st} f_n(t − a) dt = n ∫_{a−1/(2n)}^{a+1/(2n)} e^{−st} dt

= n [ e^{−st}/(−s) ]_{t=a−1/(2n)}^{t=a+1/(2n)} = n e^{−sa} (e^{s/(2n)} − e^{−s/(2n)})/s.

Therefore,

(2.1.13)  L{f_n(t − a)} = n e^{−sa} (e^{s/(2n)} − e^{−s/(2n)})/s.

If we take the limit in (2.1.13) as n → ∞, then from (2.1.12) we obtain


that

(2.1.14) L{δ(t − a)} = e−sa .

If we take the limit as a → 0 in (2.1.14) we obtain

L{δ(t)} = 1.
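The limit in (2.1.13) is easy to observe numerically; a sketch in Python (the sample values of s and a are our own, not from the text):

```python
import math

s, a = 1.7, 2.0  # illustrative sample values

def L_fn(n):
    # Right-hand side of (2.1.13): n e^{-sa} (e^{s/2n} - e^{-s/2n}) / s
    return n*math.exp(-s*a)*(math.exp(s/(2*n)) - math.exp(-s/(2*n)))/s

# As n grows, L{f_n(t-a)} approaches e^{-sa} = L{delta(t-a)}
assert abs(L_fn(10**6) - math.exp(-s*a)) < 1e-9
```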

If f is any continuous function, then it can be shown that

(2.1.15)  ∫_{−∞}^{∞} δ(t − a) f(t) dt = f(a),

where the improper integral is defined by


∫_{−∞}^{∞} δ(t − a) f(t) dt = lim_{n→∞} ∫_{−∞}^{∞} f_n(t − a) f(t) dt.

For the proof of (2.1.15) the interested reader is referred to the book by W. E.
Boyce and R. C. DiPrima [1].

Exercises for Section 2.1.2.


1. Sketch the graphs of the following functions on the interval t ≥ 0.

(a) H(t − 1) + 2H(t − 3) − 6H(t − 4).

(b) (t − 3)H(t − 2) − (t − 2)H(t − 3).

(c) (t − π)2 H(t − π).

(d) sin(t − 3) H(t − 3).

(e) 2(t − 1)H(t − 2).

2. Find the Laplace transform of the following functions:


(a) f(t) = { 0,          t < 2,
           { (t − 2)²,   t ≥ 2.

(b) f(t) = { t,              t < 1,
           { t² − 2t + 2,   t ≥ 1.

{
0, 0 ≤ t ≤ 1
(c) f (t) =
1, t > 1.


 0, t<π
(d) f (t) = t − π, π ≤ t < 2π


0, t ≥ 2π.
(e) f (t) = H(t − 1) + 2H(t − 3) − H(t − 4).

(f) f (t) = (t − 3)H(t − 3) − (t − 2)H(t − 2).

(g) f (t) = t − (t − 1)H(t − 1).

3. In the following exercises compute the inverse Laplace transform of


the given function.

(a) F(s) = 3!/(s − 2)⁴.

(b) F(s) = e^{−2s}/(s² + s − 2).

(c) F(s) = 2(s − 1)e^{−2s}/(s² − 2s + 2).

(d) F(s) = 2e^{−2s}/(s² − 4).
4. In the following exercises compute the Laplace transform of the given
function.

(a) δ(t − π).

(b) δ(t − 3).

(c) 2δ(t − 3) − δ(t − 1).

(d) sin t · u_{2π}(t) + δ(t − π/2).

(e) cos 2t · uπ (t) + δ(t − π).

2.1.3 Initial-Value Problems and the Laplace Transform.


In this section we will apply the Laplace transform to solve initial-value
problems. For this purpose we first need the following theorem.
Theorem 2.1.11. Suppose that f is continuous and is of exponential order
c for t > T . Also, suppose that f ′ is piecewise continuous on any closed
subinterval of [0, ∞). Then, for s > c
L{f′(t)}(s) = s L{f(t)}(s) − f(0).

Proof. Let M be any positive number. Consider the integral


∫₀^M e^{−st} f′(t) dt.

Let 0 < t1 < t2 < . . . < tn ≤ M be the points where the function f ′
is possibly discontinuous. Using the continuity of the function f and the
integration by parts formula on each of the intervals [tj−1 , tj ], t0 = 0, tn = M
we have
∫₀^M e^{−st} f′(t) dt = e^{−sM} f(M) − f(0) + s ∫₀^M e^{−st} f(t) dt.

Since f is of exponential order c we have e−sM f (M ) → 0 as M → ∞,


whenever s > c. Therefore, for s > c,
∫₀^∞ e^{−st} f′(t) dt = s L{f(t)}(s) − f(0),

which establishes the theorem. ■

Continuing this process, we can find similar expressions for the Laplace
transform of higher-order derivatives. This leads to the following corollary to
Theorem 2.1.11.
Corollary 2.1.2 Laplace Transform of Higher Derivatives. Suppose
that the function f and its derivatives f ′ , f ′′ , . . . , f (n−1) are of exponential
order b on [0, ∞) and the function f (n) is piecewise continuous on any
closed subinterval of [0, ∞). Then for s > b we have
∫₀^∞ e^{−st} f^{(n)}(t) dt = s^n L{f(t)}(s) − s^{n−1} f(0) − · · · − f^{(n−1)}(0).
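The derivative rules can be verified on a concrete function with sympy (Python sketch; the test function is our own choice, not from the text):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.exp(-t)*sp.cos(2*t)  # smooth and of exponential order
F = sp.laplace_transform(f, t, s, noconds=True)
F2 = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)

# L{f''}(s) = s^2 L{f}(s) - s f(0) - f'(0)
assert sp.simplify(F2 - (s**2*F - s*f.subs(t, 0) - sp.diff(f, t).subs(t, 0))) == 0
```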

Now we show how the Laplace transform can be used to solve initial-value
problems. Typically, when we solve an initial-value problem that involves
y(t), we use the following steps:
1. Compute the Laplace transform of each term in the differential equation.
2. Solve the resulting equation for L{y(t)}(s) = Y(s).
3. Find y(t) by computing the inverse Laplace transform of Y (s).
Example 2.1.12. Solve the initial-value problem

y′′′ + 4y′ = −10e^t,  y(0) = 2,  y′(0) = 2,  y′′(0) = −10.

Solution. Let Y = L {y}. Taking the Laplace transform of both sides of the
differential equation, and using the formula in Corollary 2.1.2, we obtain
[s³Y(s) − s²y(0) − sy′(0) − y′′(0)] + 4[sY(s) − y(0)] = −10/(s − 1).
s−1
Using the given initial conditions and solving the above equation for Y (s) we
have
Y(s) = (2s³ − 4s − 8)/(s(s − 1)(s² + 4)).
The partial fraction decomposition of the above fraction is

(2s³ − 4s − 8)/(s(s − 1)(s² + 4)) = A/s + B/(s − 1) + (Cs + D)/(s² + 4).

We find the coefficients: A = 2, B = −2, C = 2 and D = 4 and so

(2s³ − 4s − 8)/(s(s − 1)(s² + 4)) = 2/s + (−2)/(s − 1) + (2s + 4)/(s² + 4).

Finding the inverse Laplace transform of both sides of the above equation and
using the linearity of the inverse Laplace transform we obtain

y(t) = 2 − 2e^t + 2 cos 2t + 2 sin 2t.
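The solution can be verified by direct substitution, e.g. with sympy (Python sketch, illustrative):

```python
import sympy as sp

t = sp.symbols('t')
y = 2 - 2*sp.exp(t) + 2*sp.cos(2*t) + 2*sp.sin(2*t)

# y''' + 4y' = -10 e^t, with y(0) = 2, y'(0) = 2, y''(0) = -10
assert sp.simplify(sp.diff(y, t, 3) + 4*sp.diff(y, t) + 10*sp.exp(t)) == 0
assert (y.subs(t, 0), sp.diff(y, t).subs(t, 0), sp.diff(y, t, 2).subs(t, 0)) == (2, 2, -10)
```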

Let us take another example.


Example 2.1.13. Solve the initial-value problem

y ′′ + 2y ′ + y = 6, y(0) = 5, y ′ (0) = 10.

Solution. Let Y = L{y}. Taking the Laplace transform of both sides of the
differential equation, and using the formula in Corollary 2.1.2, we obtain
[s²Y(s) − sy(0) − y′(0)] + 2[sY(s) − y(0)] + Y(s) = 6/s.
Using the given initial conditions and solving the above equation for Y (s) we
have
Y(s) = (5s² + 20s + 6)/(s(s + 1)²).
The partial fraction decomposition of the above fraction is

(5s² + 20s + 6)/(s(s + 1)²) = 6/s + (−1)/(s + 1) + 9/(s + 1)².

Finding the inverse Laplace transform of both sides of the above equation and
using the linearity of the inverse Laplace transform we obtain

y(t) = 6 − e^{−t} + 9te^{−t}.
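Again the answer can be checked by transforming it back and substituting into the equation (sympy sketch, illustrative):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

y = 6 - sp.exp(-t) + 9*t*sp.exp(-t)
Y = sp.laplace_transform(y, t, s, noconds=True)
assert sp.simplify(Y - (5*s**2 + 20*s + 6)/(s*(s + 1)**2)) == 0

# y also satisfies the initial-value problem
assert sp.simplify(sp.diff(y, t, 2) + 2*sp.diff(y, t) + y - 6) == 0
assert (y.subs(t, 0), sp.diff(y, t).subs(t, 0)) == (5, 10)
```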

In the next few examples we consider initial-value problems in which the


nonhomogeneous term, or forcing function, is discontinuous.
Example 2.1.14. Find the solution of

y ′′ (t) + 9y = h(t),

subject to the conditions y(0) = y ′ (0) = 0, where


{
1, 0 ≤ t < π,
h(t) =
0, t ≥ π.

Solution. Because h(t) is a piecewise continuous function, it is more conve-


nient to write it in terms of the Heaviside functions as

h(t) = H(t) − H(t − π) = 1 − H(t − π).

Then

L{h(t)}(s) = L{1}(s) − L{H(t − π)}(s) = 1/s − e^{−πs}/s.
Let L{y} = Y and let us take the Laplace transform of both sides of the
given differential equation. Using the formula in Corollary 2.1.2 we have

s²Y(s) − sy(0) − y′(0) + 9Y(s) = 1/s − e^{−πs}/s.

Using the prescribed initial conditions and solving for Y (s) we obtain

Y(s) = 1/(s(s² + 9)) − e^{−πs}/(s(s² + 9)) = F(s) − e^{−πs} F(s),

where

F(s) = 1/(s(s² + 9)) = (1/9)·(1/s) − (1/9)·s/(s² + 9).
Therefore,
y(t) = L^{−1}{F(s)} − L^{−1}{e^{−πs} F(s)}.

First,
f(t) = L^{−1}{F(s)} = 1/9 − (1/9) cos 3t,
and by Theorem 2.1.10 we have
L^{−1}{e^{−πs} F(s)} = f(t − π) u_π(t) = [1/9 − (1/9) cos 3(t − π)] u_π(t) = [1/9 + (1/9) cos 3t] u_π(t).

Combining these results we find that the solution of the original problem is
given by
y(t) = 1/9 − (1/9) cos 3t − [1/9 + (1/9) cos 3t] u_π(t).

Example 2.1.15. Find the solution of

y ′′ (t) + y ′ (t) + y = h(t),

subject to the conditions y(0) = y ′ (0) = 0, where


{
1, 0 ≤ t < 1,
h(t) =
0, t ≥ 1.

Solution. Again as in the previous example we write the function in the form
h(t) = 1 − H(t − 1). Let L{y} = Y and let us take the Laplace transform of
both sides of the given differential equation. Using the formula in Corollary
2.1.2 we have

s²Y(s) − sy(0) − y′(0) + sY(s) − y(0) + Y(s) = 1/s − e^{−s}/s.

Using the prescribed initial conditions and solving for Y (s) we obtain

Y(s) = 1/(s(s² + s + 1)) − e^{−s}/(s(s² + s + 1)) = F(s) − e^{−s} F(s),

where
F(s) = 1/(s(s² + s + 1)).
Then, if f = L−1 {F }, from Theorem 2.1.10, we find the solution of the given
initial-value problem is

y(t) = f (t) − f (t − 1)H(t − 1).

To determine f(t) we use the partial fraction decomposition of F(s):

F(s) = 1/s − (s + 1)/(s² + s + 1) = 1/s − (s + 1)/((s + 1/2)² + 3/4)
     = 1/s − [ (s + 1/2)/((s + 1/2)² + 3/4) + (1/2)·1/((s + 1/2)² + 3/4) ].

Now, using the first shifting property of the Laplace transform and referring to Table A in Appendix A we obtain

f(t) = 1 − e^{−t/2} cos(√3 t/2) − (1/√3) e^{−t/2} sin(√3 t/2).

Example 2.1.16. Find a nontrivial solution of the following linear differen-


tial equation of second order, with non-constant coefficients.

ty ′′ (t) + (t − 2)y ′ (t) + y(t) = 0, y(0) = 0.

Solution. If Y = L {y}, then


L{y′(t)} = sY(s) − y(0) = sY(s),  L{y′′(t)} = s²Y(s) − sy(0) − y′(0) = s²Y(s) − y′(0),

and so by Theorem 2.1.5 we have that

L{ty′(t)} = −d/ds [sY(s)],  L{ty′′(t)} = −d/ds [s²Y(s) − y′(0)].

The result is the following differential equation.

−d/ds [s²Y(s) − y′(0)] − d/ds [sY(s)] − 2sY(s) + Y(s) = 0,

or after simplification

(s + 1)Y ′ (s) + 4Y (s) = 0.

The general solution of the above differential equation is

Y(s) = C/(s + 1)⁴,

where C is any numerical constant. Taking the inverse Laplace transform of


the above Y (s) we obtain
y(t) = Ct³e^{−t}.
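Substituting y = Ct³e^{−t} back into the equation confirms the answer (sympy sketch, illustrative):

```python
import sympy as sp

t, C = sp.symbols('t C')
y = C*t**3*sp.exp(-t)

# t y'' + (t - 2) y' + y should vanish identically, and y(0) = 0
residual = t*sp.diff(y, t, 2) + (t - 2)*sp.diff(y, t) + y
assert sp.simplify(residual) == 0 and y.subs(t, 0) == 0
```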

In the next example we apply the Laplace transform to find the solution
of a harmonic oscillator equation with an impulse forcing.
Example 2.1.17. Find the solution of the initial-value problem

y ′′ (t) + y ′ (t) + 3y = δ4 (t), y(0) = 1, y ′ (0) = 0,

where δ4 (t) = δ(t − 4) is the Dirac delta function.


Solution. As usual, let L {y} = Y , and take the Laplace transform of both
sides of the equation. Using the formula for the Laplace transform of a de-
rivative, taking into account the given initial conditions and using the fact
that

L{δ₄(t)}(s) = e^{−4s},

we have

s²Y(s) − s + sY(s) − 1 + 3Y(s) = e^{−4s}.
Solving the above equation for Y (s) we obtain

Y(s) = (s + 1)/(s² + s + 3) + e^{−4s}/(s² + s + 3).

Therefore,

y(t) = L^{−1}{ (s + 1)/(s² + s + 3) } + L^{−1}{ e^{−4s}/(s² + s + 3) }
     = L^{−1}{ (s + 1/2)/((s + 1/2)² + 11/4) } + L^{−1}{ (1/2)/((s + 1/2)² + 11/4) }
       + L^{−1}{ e^{−4s}/((s + 1/2)² + 11/4) }.

Using the shifting theorems 2.1.2 and 2.1.3 and Table A in Appendix A we obtain that the solution of the original initial-value problem is given by

y(t) = e^{−t/2} cos(√11 t/2) + (1/√11) e^{−t/2} sin(√11 t/2) + H(t − 4)·(2/√11) e^{−(t−4)/2} sin(√11 (t − 4)/2).

Exercises for Section 2.1.3.

1. Using the Laplace transform solve the initial-value problems:

(a) y ′ (t) + y = e−2t , y(0) = 2.

(b) y ′ (t) + 7y = H(t − 2), y(0) = 3.

(c) y ′ (t) + 7y = H(t − 2)e−2(t−2) , y(0) = 1.

2. Using the Laplace transform solve the initial-value problems:

(a) y ′′ (t) + y ′ (t) + 7y = sin 3t, y(0) = 2, y ′ (0) = 0.

(b) y ′′ (t) + 3y = H(t − 4) cos 5(t − 4), y(0) = 0, y ′ (0) = −2.

(c) y ′′ (t) + 9y = H(t − 5) sin 3(t − 5), y(0) = 2, y ′ (0) = 0.



(d) y iv (t)−y = H(t−1)−H(t−2), y(0) = y ′ (0) = y ′′ (0) = y ′′′ (0) = 0.


{
t, 0 ≤ t < 1
(e) y ′′ (t)+3y = w(t), y(0) = 0, y ′ (0) = −2, w(t) =
1, t ≥ 1.

3. Suppose that f is a periodic function with period T . If the Laplace


transform of f exists show that

L{f}(s) = (1/(1 − e^{−Ts})) ∫₀^T f(t) e^{−st} dt.

4. Find the Laplace transform of the 2-periodic function w defined by


w(t) = { 1,    2n ≤ t < 2n + 1 for some integer n,
       { −1,   2n + 1 ≤ t < 2n + 2 for some integer n.

5. Find the solution for each of the following initial-value problems.

(a) y ′′ (t) − 2y ′ (t) + y = 3δ2 (t), y(0) = 0, y ′ (0) = 1.

(b) y′′(t) + 2y′(t) + 6y = 3δ₂(t) − 4δ₅(t), y(0) = 0, y′(0) = 1.

(c) y ′′ (t) + 2y ′ (t) + y = δπ (t), y(0) = 1, y ′ (0) = 0.

(d) y ′′ (t) + 2y ′ (t) + 3y = sin t + δπ (t), y(0) = 0, y ′ (0) = 1.

(e) y ′′ (t) + 4y = δπ (t) − δ2π (t), y(0) = 0, y ′ (0) = 0.

(f) y ′′ (t) + y = δπ (t) cos t, y(0) = 0, y ′ (0) = 1.

2.1.4 The Convolution Theorem for the Laplace Transform.


In this section we prove the Convolution Theorem, which is a fundamental
result for the Laplace transform.
First we introduce the concept of convolution. This concept is inherent in fields of the physical sciences and engineering. For example, in mechanics it is known as the superposition or Duhamel integral; in system theory it plays a crucial role as the impulse response; and in optics it appears as the point-spread or smearing function.

Definition 2.1.4. If f and g are functions on R, their convolution is the


function, denoted by f ∗ g , defined by

(f ∗ g)(x) = ∫₀^x f(x − y) g(y) dy,

provided that the integral exists.


Remark. The convolution of f and g exists for every x ∈ R if the functions
satisfy certain conditions. For example, it exists in the following situations:
(1) If f is an integrable function on R and g is a bounded function.

(2) If g is an integrable function on R and f is a bounded function.

(3) If f and g are both square integrable functions on R.

(4) If f is a piecewise continuous function on R and g is a bounded


function which vanishes outside a closed bounded interval.

Theorem 2.1.12. The Convolution Theorem. Suppose that f (t) and


g(t) are piecewise continuous functions on [0, ∞) and both are of exponential
order. If L{f } = F and L{g} = G, then
F(s)G(s) = L{f ∗ g}(s).

Proof. If we compute the product F (s)G(s) by the definition of the Laplace


transform; then we have that

F(s)G(s) = ( ∫₀^∞ e^{−sx} f(x) dx ) · ( ∫₀^∞ e^{−sy} g(y) dy ),

which can be written as the following iterated integral:

∫₀^∞ ∫₀^∞ e^{−s(x+y)} f(x) g(y) dx dy.

If in the above double integral we change the variable x with a new variable
t by the transformation x = t − y and keep the variable y, then we obtain

F(s)G(s) = ∬_R e^{−st} f(t − y) g(y) dt dy = ∫₀^∞ ∫_y^∞ e^{−st} f(t − y) g(y) dt dy,

[Figure 2.1.4: the region of integration R — the wedge 0 ≤ y ≤ t in the (t, y)-plane, bounded by the line t = y.]

where the region of integration R is the unbounded region shown in Figure


2.1.4.
If we interchange the order of integration in the last iterated integral, then
we obtain
F(s)G(s) = ∫₀^∞ ∫₀^t e^{−st} f(t − y) g(y) dy dt,
which can be written as
F(s)G(s) = ∫₀^∞ e^{−st} ( ∫₀^t f(t − y) g(y) dy ) dt = L{ ∫₀^t f(t − y) g(y) dy } = L{f ∗ g}. ■

Remark. The convolution f ∗ g has many of the properties of ordinary mul-


tiplication. For example, it is easy to show that
f ∗g =g∗f (commutative law).
f ∗ (g + h) = f ∗ g + f ∗ h (distributive law).
(f ∗ g) ∗ h = f ∗ (g ∗ h) (associative law).
f ∗ 0 = 0.
The proofs of these properties are left as an exercise.
But, there are other properties of ordinary multiplication of functions which
the convolution does not have. For example, it is not true that
1∗f =f

for every function f .


Indeed, if f (t) = t, then

(1 ∗ f)(t) = ∫₀^t 1 · f(x) dx = ∫₀^t x dx = t²/2,

and so, for this function we have 1 ∗ f ̸= f .


Example 2.1.18. Compute f ∗ g and g ∗ f if f (t) = e−t and g(t) = sin t.
Verify the Convolution Theorem for these functions.
Solution. By using the definition of convolution and the integration by parts
formula, we obtain

(f ∗ g)(t) = ∫₀^t f(t − x) g(x) dx = ∫₀^t e^{−(t−x)} sin x dx = e^{−t} ∫₀^t e^x sin x dx

= e^{−t} [ (e^x/2)(sin x − cos x) ]_{x=0}^{x=t} = (1/2)(sin t − cos t) + (1/2) e^{−t}.

Similarly, we find

(g ∗ f)(t) = ∫₀^t g(t − x) f(x) dx = ∫₀^t sin(t − x) e^{−x} dx = (1/2)(sin t − cos t) + (1/2) e^{−t},

which shows that f ∗ g = g ∗ f .


To check the result in the Convolution Theorem we let F = L {f } and
G = L {g}. Then

F(s) = L{f} = L{e^{−t}} = 1/(s + 1),  G(s) = L{g} = L{sin t} = 1/(s² + 1)

and so

L^{−1}{F(s)G(s)} = L^{−1}{ 1/((s + 1)(s² + 1)) }.
We compute L^{−1}{ 1/((s + 1)(s² + 1)) } by the partial fraction decomposition

1/((s + 1)(s² + 1)) = (1/2)·1/(s + 1) + (1/2)·(−s + 1)/(s² + 1).

Therefore,

L^{−1}{ 1/((s + 1)(s² + 1)) } = (1/2) L^{−1}{ 1/(s + 1) } − (1/2) L^{−1}{ s/(s² + 1) } + (1/2) L^{−1}{ 1/(s² + 1) }

= (1/2) e^{−t} − (1/2) cos t + (1/2) sin t,
which is the same result as that obtained for (f ∗ g) (t).
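The whole example can be reproduced with sympy (Python sketch, illustrative):

```python
import sympy as sp

t, x, s = sp.symbols('t x s', positive=True)

conv = sp.integrate(sp.exp(-(t - x))*sp.sin(x), (x, 0, t))  # (f*g)(t)
assert sp.simplify(conv - (sp.sin(t) - sp.cos(t))/2 - sp.exp(-t)/2) == 0

# Convolution Theorem: L{f*g}(s) = F(s) G(s)
L_conv = sp.laplace_transform(conv, t, s, noconds=True)
assert sp.simplify(L_conv - 1/((s + 1)*(s**2 + 1))) == 0
```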

Example 2.1.19. Using the Convolution Theorem find the Laplace trans-
form of the function h defined by
h(t) = ∫₀^t sin x cos(t − x) dx.
Solution. Notice that h(t) = (f ∗ g)(t), where f(t) = cos t and g(t) = sin t. Therefore, by the Convolution Theorem,

L{h(t)} = L{(f ∗ g)(t)} = L{f(t)} L{g(t)} = L{cos t} L{sin t} = s/(s² + 1)².
Example 2.1.20. Find the solution of the initial-value problem
y ′′ (t) + 16y = f (t), y(0) = 5, y ′ (0) = −5,
where f is a given function.
Solution. Let L {y} = Y and L {f } = F . By taking the Laplace transform
of both sides of the differential equation and using the initial conditions, we
obtain
s2 Y (s) − 5s + 5 + 16Y (s) = F (s).
Solving for Y (s) we obtain
Y(s) = (5s − 5)/(s² + 16) + F(s)/(s² + 16).
Therefore, by linearity and the Convolution Theorem of the Laplace transform it follows that

y(t) = 5 L^{−1}{ s/(s² + 16) } − 5 L^{−1}{ 1/(s² + 16) } + L^{−1}{ F(s)/(s² + 16) }
     = 5 cos 4t − (5/4) sin 4t + ( L^{−1}{ 1/(s² + 16) } ∗ f )(t)
     = 5 cos 4t − (5/4) sin 4t + ( (1/4) sin 4t ∗ f )(t)
     = 5 cos 4t − (5/4) sin 4t + (1/4) ∫₀^t f(t − x) sin 4x dx.
0

The Convolution Theorem is very helpful also in solving integral equations.


Example 2.1.21. Use the Convolution Theorem to solve the following inte-
gral equation.
y(t) = 4t + ∫₀^t y(t − x) sin x dx.

Solution. If f (t) = sin t, then the integral equation can be written as


y(t) = 4t + (y ∗ f)(t).

Therefore, if Y = L {y} and we apply the Laplace transform to both sides of


the equation, and using the Convolution Theorem, we obtain

Y(s) = 4/s² + Y(s)/(s² + 1).

Solving for Y (s), we have

Y(s) = 4(s² + 1)/s⁴ = 4/s² + 4/s⁴.
By computing the inverse Laplace transform, we find

y(t) = 4t + (2/3) t³.
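The answer can be substituted back into the integral equation (sympy sketch, illustrative):

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)

y = lambda u: 4*u + sp.Rational(2, 3)*u**3

# Right-hand side 4t + (y * sin)(t) should reproduce y(t)
rhs = 4*t + sp.integrate(y(t - x)*sp.sin(x), (x, 0, t))
assert sp.simplify(rhs - y(t)) == 0
```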

Exercises for Section 2.1.4.

1. Establish the commutative, distributive and associative properties of


the convolution.

2. Compute the convolution f ∗ g of the following functions.

(a) f (t) = 1, g(t) = t2 .

(b) f (t) = e−3t , g(t) = 2.

(c) f (t) = t, g(t) = e−t .

(d) f (t) = cos t, g(t) = sin t.

(e) f (t) = t, g(t) = H(t − 1) − H(t − 2).



3. Find the Laplace transform of each of the following functions.

(a) f(t) = ∫₀^t (t − x)² cos 2x dx

(b) f(t) = ∫₀^t e^{−(t−x)} sin x dx

(c) f(t) = ∫₀^t (t − x) e^x dx

(d) f(t) = ∫₀^t sin(t − x) cos x dx

(e) f(t) = ∫₀^t e^x (t − x)² dx

(f) f(t) = ∫₀^t e^{−(t−x)} sin² x dx

4. Find the inverse Laplace transform of each of the following functions by using the Convolution Theorem.

(a) F(s) = 1/(s²(s + 1))

(b) F(s) = s²/(s² + 1)²

(c) F(s) = 1/((s² + 4)(s + 1))

(d) F(s) = 1/(s⁴(s² + 16))

(e) F(s) = s/((s² + 4)(s + 1))
5. Solve the given integral equations using Laplace transforms.

(a) y(t) − 4t = −3 ∫₀^t y(x) sin(t − x) dx

(b) y(t) = t²/2 − ∫₀^t (t − x) y(x) dx

(c) y(t) − e^{−t} = −∫₀^t y(x) cos(t − x) dx

(d) y(t) = t³ + ∫₀^t y(t − x) x sin x dx

(e) y(t) = 1 + 2 ∫₀^t e^{−2(t−x)} y(x) dx

6. Solve the given integro-differential equations using Laplace transforms.

(a) y′(t) − t = ∫₀^t y(x) cos(t − x) dx,  y(0) = 4.

(b) ∫₀^t y′(τ)/√(t − τ) dτ = 1 − 2√t,  y(0) = 0.
0

2.2 Fourier Transforms.


In this section we study Fourier transforms, which provide integral repre-
sentations of functions defined on either the whole real line (−∞, ∞) or on
the half-line (0, ∞). In Chapter 1, Fourier series were used to represent a
function f defined on a bounded interval (−L, L) or (0, L). When f and
f ′ are piecewise continuous on an interval, the Fourier series represents the
function on that interval and converges to the periodic extension of f outside
that interval. We will now derive, in a non-rigorous manner, a method of
representing some non-periodic functions defined either on the whole line or
on the half-line by some integrals.
Fourier transforms are of fundamental importance in a broad range of ap-
plications, including both ordinary and partial differential equations, quantum
physics, signal and image processing and control theory.
We begin this section by investigating how the Fourier series of a given
function behaves as the length 2L of the interval (−L, L) goes to infinity.
2.2.1 Definition of Fourier Transforms.
Suppose that a “reasonable” function f , defined on R, is 2L-periodic.
From Chapter 1 we know that the complex Fourier series expansion of f on
the interval (−L, L) is

S(f, x) = Σ_{n=−∞}^{∞} c_n e^{inπx/L},

where

c_n = (1/2L) ∫_{−L}^{L} f(x) e^{−inπx/L} dx.

If we introduce the quantity

ω_n = nπ/L,  Δω_n = ω_{n+1} − ω_n = π/L,
then

(2.2.1)  S(f, x) = (1/2π) Σ_{n=−∞}^{∞} F(ω_n) e^{iω_n x} Δω_n,

where

F(ω_n) = ∫_{−L}^{L} f(x) e^{−iω_n x} dx.
Now, we expand the interval (−L, L) by letting L → ∞ in such a way so
that ∆ωn → 0. Notice that the sum in (2.2.1) is very similar to the Riemann
sum of the improper integral
(1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω,
where

(2.2.2)  F(ω) = ∫_{−∞}^{∞} f(x) e^{−iωx} dx.
Therefore, we have

(2.2.3)  S(f, x) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω.
The function F in (2.2.2) is called the Fourier transform of the function f
on (−∞, ∞). The above discussion naturally leads to the following definition.
Definition 2.2.1. Let f : R → R be an integrable function. The Fourier
transform of f , denoted by F = F {f }, is given by the integral
(2.2.4)  F(ω) = (F{f})(ω) = ∫_{−∞}^{∞} f(x) e^{−iωx} dx,
for all ω ∈ R for which the improper integral exists.

Remark. Notice that the placement of the factor 1/(2π) in (2.2.3)–(2.2.4) is arbitrary. In some literature, instead, it is attached to the transform itself:

F(ω) = (F{f})(ω) = (1/2π) ∫_{−∞}^{∞} f(x) e^{−iωx} dx,  S(f, x) = ∫_{−∞}^{∞} F(ω) e^{iωx} dω.
In this book we will use the notation given in (2.2.4).
Remark. Other common notations for F {f } are f˜ and fb.
For a given Fourier transform F , the function (if it exists) given by
(1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω

is called the inverse Fourier transform, and it is denoted by F −1 .


Similarly as in a Fourier series we have the following Dirichlet condition
for existence of the inverse Fourier transform.
118 2. INTEGRAL TRANSFORMS

Theorem 2.2.1. Suppose that f : R → R is an integrable function and satisfies the Dirichlet condition: on any finite interval, the functions f and f′ are piecewise continuous with finitely many maxima and minima. If F = F{f}, then

( f(x⁺) + f(x⁻) )/2 = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω.

Remark. Notice that, if the integrable function f is even or odd, then the Fourier transform F{f} of f is even or odd, respectively, and

F{f}(ω) = 2 ∫_0^∞ f(x) cos ωx dx, if f is even,

and

F{f}(ω) = −2i ∫_0^∞ f(x) sin ωx dx, if f is odd.

This suggests the following definition.


Definition 2.2.2. Suppose that f is an integrable function on (0, ∞). Define the Fourier cosine transform Fc{f} and the Fourier sine transform Fs{f} by

Fc{f}(ω) = ∫_0^∞ f(x) cos ωx dx,   Fs{f}(ω) = ∫_0^∞ f(x) sin ωx dx.

The inversion formulas for the Fourier cosine and Fourier sine transforms are

f(x) = (2/π) ∫_0^∞ Fc{f}(ω) cos ωx dω   and   f(x) = (2/π) ∫_0^∞ Fs{f}(ω) sin ωx dω.

Let us take several examples.


Example 2.2.1. Find the Fourier transform of f(x) = e^{−|x|} and hence, using Theorem 2.2.1, deduce that

∫_0^∞ cos(ωx)/(1 + ω²) dω = (π/2) e^{−x},   x > 0.

Solution. By the definition of the Fourier transform we have

F(ω) = ∫_{−∞}^{∞} f(x) e^{−iωx} dx = ∫_{−∞}^{0} e^{x} e^{−iωx} dx + ∫_{0}^{∞} e^{−x} e^{−iωx} dx

     = ∫_{−∞}^{0} e^{x(1−iω)} dx + ∫_{0}^{∞} e^{−x(1+iω)} dx

     = [ e^{x(1−iω)}/(1 − iω) ]_{x=−∞}^{x=0} + [ e^{−x(1+iω)}/(−(1 + iω)) ]_{x=0}^{x=∞} = 2/(1 + ω²).

Since the function f(x) = e^{−|x|} is continuous on the whole real line, by the inversion formula in Theorem 2.2.1 we have

e^{−|x|} = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω = (1/2π) ∫_{−∞}^{∞} 2 e^{iωx}/(1 + ω²) dω

        = (1/2π) ∫_{−∞}^{0} 2 e^{iωx}/(1 + ω²) dω + (1/2π) ∫_{0}^{∞} 2 e^{iωx}/(1 + ω²) dω

        = (1/π) ∫_{0}^{∞} e^{−iωx}/(1 + ω²) dω + (1/π) ∫_{0}^{∞} e^{iωx}/(1 + ω²) dω

        = (1/π) ∫_{0}^{∞} ( e^{iωx} + e^{−iωx} )/(1 + ω²) dω = (2/π) ∫_{0}^{∞} cos(ωx)/(1 + ω²) dω,

and the stated integral follows upon multiplying both sides by π/2.
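The transform just computed can be checked numerically. The following Python sketch (our own illustration; the function name and the truncation limit X are arbitrary choices, not from the text) approximates F(ω) = ∫ e^{−|x|} e^{−iωx} dx by a midpoint Riemann sum and compares it with 2/(1 + ω²):

```python
import math

# Midpoint Riemann-sum check of F{e^{-|x|}}(omega) = 2/(1 + omega^2).
# Since e^{-|x|} is even, the imaginary part cancels and
# F(omega) = 2 * Integral_0^oo e^{-x} cos(omega x) dx; the tail beyond
# X = 40 is below e^{-40} and is ignored.
def ft_exp_abs(omega, X=40.0, n=200_000):
    h = X / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += math.exp(-x) * math.cos(omega * x)
    return 2.0 * total * h

for omega in (0.0, 0.5, 1.0, 3.0):
    exact = 2.0 / (1.0 + omega * omega)
    assert abs(ft_exp_abs(omega) - exact) < 1e-4
```

The agreement to four decimal places at several frequencies gives an independent confirmation of the closed form derived above.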

Example 2.2.2. Find the Fourier transform of f(x) = e^{−x²}.

Solution. By the definition of the Fourier transform we have

F(ω) = ∫_{−∞}^{∞} f(x) e^{−iωx} dx = ∫_{−∞}^{∞} e^{−x²} e^{−iωx} dx = ∫_{−∞}^{∞} e^{−x² − iωx} dx

     = ∫_{−∞}^{∞} e^{−(x + iω/2)² − ω²/4} dx = e^{−ω²/4} ∫_{−∞}^{∞} e^{−(x + iω/2)²} dx

     = e^{−ω²/4} ∫_{−∞}^{∞} e^{−u²} du = √π e^{−ω²/4}.
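The Gaussian transform can be verified in the same numerical way; the sketch below (names ours) sums the real part of the integrand over [−X, X]:

```python
import math

# Midpoint-rule check of F{e^{-x^2}}(omega) = sqrt(pi) * e^{-omega^2/4}.
# The integrand's odd imaginary part vanishes, so we sum the real part
# e^{-x^2} cos(omega x) over [-X, X]; the tail beyond X = 8 is ~e^{-64}.
def ft_gaussian(omega, X=8.0, n=100_000):
    h = 2.0 * X / n
    s = 0.0
    for k in range(n):
        x = -X + (k + 0.5) * h
        s += math.exp(-x * x) * math.cos(omega * x)
    return s * h

for w in (0.0, 1.0, 2.0):
    exact = math.sqrt(math.pi) * math.exp(-w * w / 4.0)
    assert abs(ft_gaussian(w) - exact) < 1e-6
```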
−∞

Example 2.2.3. Find the Fourier transform of the following step function:

f(x) = { 1, |x| < a
       { 0, |x| > a.

Solution. Let F = F{f}. From the definition of the Fourier transform we have

F(ω) = ∫_{−∞}^{∞} f(x) e^{−iωx} dx = ∫_{−a}^{a} e^{−iωx} dx = ( e^{iωa} − e^{−iωa} )/(iω) = 2 sin ωa/ω.

Example 2.2.4. Find the Fourier transform of

f(x) = 1/(a² + x²),

where a is a positive constant.

Solution. Let F be the Fourier transform of f. From the definition of the Fourier transform,

F(ω) = ∫_{−∞}^{∞} e^{−iωx}/(a² + x²) dx.

To evaluate the above improper integral, we evaluate the following contour integral:

I(R) = ∫_γ e^{−iωz}/(a² + z²) dz,

where γ = CR ∪ [−R, R] is the closed contour formed by the upper semicircle CR = { z ∈ C : z = Re^{iθ}, 0 ≤ θ ≤ π } and the interval [−R, R]; see Figure 2.2.1. Let us consider first the case when ω < 0.

Figure 2.2.1. The closed contour γ = CR ∪ [−R, R] in the upper half-plane, enclosing the pole at z = ai.

If R is large enough, the only singularity (a simple pole) of the function

f(z) = e^{−iωz}/(a² + z²)

inside the contour CR ∪ [−R, R] is the point z0 = ai. Therefore, by the Cauchy Residue Theorem (see Appendix F) we have

I(R) = 2πi Res( f(z), z0 = ai ).

From

Res( e^{−iωz}/(a² + z²), z0 = ai ) = e^{−iω·ai}/(2ai) = −(i/2a) e^{ωa}

we have

(2.2.5)  I(R) = (π/a) e^{ωa}.

We decompose the integral I(R) along the semicircle CR and the interval [−R, R]:

(2.2.6)  I(R) = ∫_{CR} e^{−iωz}/(a² + z²) dz + ∫_{−R}^{R} e^{−iωx}/(a² + x²) dx.

Denote the first integral in (2.2.6) by I_{CR}. For the integral I_{CR} along the semicircle CR we have

|I_{CR}| = | ∫_0^π ( e^{−iωRe^{iθ}}/(a² + R²e^{2iθ}) ) Rie^{iθ} dθ | ≤ R ∫_0^π |e^{−iωRe^{iθ}}|/|a² + R²e^{2iθ}| dθ ≤ R ∫_0^π e^{Rω sin θ}/(R² − a²) dθ.

Since ω < 0 and sin θ > 0 for 0 < θ < π we have

lim_{R→∞} R e^{Rω sin θ}/(R² − a²) = 0

and so from the above inequality it follows that

lim_{R→∞} I_{CR} = 0.

Therefore from (2.2.6) we have

lim_{R→∞} I(R) = ∫_{−∞}^{∞} e^{−iωx}/(a² + x²) dx,

so (2.2.5) implies

(2.2.7)  ∫_{−∞}^{∞} e^{−iωx}/(a² + x²) dx = (π/a) e^{ωa}.

For the case when ω > 0, working similarly as above, but integrating along
the lower semicircle z = Reiθ , π ≤ θ ≤ 2π, we obtain

(2.2.8)  ∫_{−∞}^{∞} e^{−iωx}/(a² + x²) dx = (π/a) e^{−ωa}.

Therefore, from (2.2.8) and (2.2.7) it follows that

F{f}(ω) = (π/a) e^{−|ω|a}.
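The residue computation can be cross-checked numerically for a = 1; the Python sketch below (illustrative, names ours) truncates the slowly decaying integral at X = 300, which is accurate to a few parts in 10⁴:

```python
import math

# Numerical check (a = 1) of F{1/(a^2+x^2)}(omega) = (pi/a) e^{-|omega| a}.
# By evenness F(omega) = 2 * Integral_0^X cos(omega x)/(1 + x^2) dx;
# the truncated tail beyond X = 300 is O(1/(omega X^2)).
def ft_lorentzian(omega, X=300.0, n=60_000):
    h = X / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += math.cos(omega * x) / (1.0 + x * x)
    return 2.0 * s * h

for omega in (0.5, 1.0, 2.0):
    assert abs(ft_lorentzian(omega) - math.pi * math.exp(-omega)) < 1e-3
```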
a

In the treatment of Fourier transforms until now we considered only func-


tions that are integrable on the real line R. It is an obvious fact that such
simple functions as cosine, sine and the Heaviside or the constant functions,
even though bounded, are not integrable. Does this mean that these functions
do not possess a Fourier transform? In the rest of this section we will try to
overcome this difficulty and make some sense of Fourier transforms of such
functions. We start with the Dirac delta “function” δ(x):
δ(x) = { 0, x ≠ 0
       { ∞, x = 0.

This function, introduced in Section 2.1.2, was defined as the "limit" of the following sequence of functions:

fn(x) = { n/2, |x| ≤ 1/n
        { 0,   |x| > 1/n.

For these integrable functions fn we have

F{fn}(ω) = ∫_{−∞}^{∞} fn(x) e^{−iωx} dx = (n/2) ∫_{−1/n}^{1/n} e^{−iωx} dx = n sin(ω/n)/ω.

Notice that

lim_{n→∞} n sin(ω/n)/ω = 1.

Therefore, it is natural to define

(2.2.9)  F{δ}(ω) = lim_{n→∞} F{fn}(ω) = 1.

In Section 2.1.2, when we discussed the Dirac delta function we “stated”


that
∫_{−∞}^{∞} δ(x − a) f(x) dx = ∫_{−∞}^{∞} δ_a(x) f(x) dx = f(a),

for every continuous function f . In particular, we have


(2.2.10)  ∫_{−∞}^{∞} δ_a(x) e^{−iωx} dx = e^{−iωa},

and therefore
(2.2.11) F {δa }(ω) = e−iωa .
For the inverse Fourier transform, from (2.2.10) it follows that
∫_{−∞}^{∞} δ_{ω0}(ω) e^{ixω} dω = e^{iω0 x}.

Hence

F^{−1}{δ_{ω0}}(x) = (1/2π) e^{iω0 x},

that is,

(2.2.12)  F{e^{iω0 x}}(ω) = 2π δ(ω − ω0).
If we set ω0 = 0 we have the following result.
F {1}(ω) = 2πδ(ω).

Example 2.2.5. Find the Fourier transforms of the functions


sin ω0 x, cos ω0 x, ω0 ̸= 0, x ∈ R.

Solution. We will compute the Fourier transform of sin ω0 x, leaving the other
function as an exercise. The function sin ω0 x is not integrable, and so we
need to use the Dirac delta function. From Euler’s formula, the linearity
property of the Fourier transform, and (2.2.12) we have
F{sin ω0 x}(ω) = (1/2i) [ F{e^{iω0 x}}(ω) − F{e^{−iω0 x}}(ω) ] = −πi [ δ(ω − ω0) − δ(ω + ω0) ].

Example 2.2.6. By approximation, find the Fourier transforms of the sign


function sgn(x), where

sgn(x) = {  1, x > 0
         {  0, x = 0
         { −1, x < 0.

Solution. The function sgn(x) is not integrable on R, but the function

fϵ (x) = e−ϵ|x| sgn(x),

where ϵ > 0, is integrable. Indeed,

∫_{−∞}^{∞} | e^{−ϵ|x|} sgn(x) | dx = 2 ∫_0^∞ e^{−ϵx} dx = 2/ϵ.

Also, for the function fϵ we have

lim_{ϵ→0} fϵ(x) = sgn(x),   x ∈ R.

First, let us find the Fourier transform Fϵ of the function fϵ:

Fϵ(ω) = F{fϵ}(ω) = ∫_{−∞}^{0} e^{ϵx} · (−1) · e^{−iωx} dx + ∫_{0}^{∞} e^{−ϵx} · 1 · e^{−iωx} dx

      = −[ e^{(ϵ−iω)x}/(ϵ − iω) ]_{x=−∞}^{x=0} + [ e^{(−ϵ−iω)x}/(−ϵ − iω) ]_{x=0}^{x=∞}

      = −1/(ϵ − iω) − 1/(−ϵ − iω) = −2iω/(ϵ² + ω²).

Therefore,

F{sgn}(ω) = lim_{ϵ→0} Fϵ(ω) = −lim_{ϵ→0} 2iω/(ϵ² + ω²).

If ω = 0, the above limit equals 0, and if ω ≠ 0, the above limit equals 2/(iω). Therefore we conclude that

F{sgn}(ω) = { 2/(iω), ω ≠ 0
            { 0,      ω = 0.
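The intermediate formula Fϵ(ω) = −2iω/(ϵ² + ω²) can itself be checked numerically before the limit is taken. The Python sketch below (names and parameter values ours) approximates the purely imaginary transform of the damped sign function:

```python
import math

# Check of F_eps(omega) = -2 i omega / (eps^2 + omega^2) for the damped
# sign function e^{-eps|x|} sgn(x).  The real part vanishes by oddness,
# so we approximate Im F_eps(omega) = -2 * Integral_0^oo e^{-eps x} sin(omega x) dx.
def im_ft_damped_sign(omega, eps, X=60.0, n=120_000):
    h = X / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += math.exp(-eps * x) * math.sin(omega * x)
    return -2.0 * s * h

eps = 0.2
for omega in (0.5, 1.0, 2.0):
    exact = -2.0 * omega / (eps * eps + omega * omega)
    assert abs(im_ft_damped_sign(omega, eps) - exact) < 1e-3
```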

Exercises for Section 2.2.1.

1. Find the Fourier transform of the following functions and for each ap-
ply Theorem 2.2.1.


(a) f(x) = {  0, x < −1
           { −1, −1 < x < 1
           {  2, x > 1.

(b) f(x) = { 0, x < 0
           { x, 0 < x < 3
           { 0, x > 3.

(c) f(x) = { 0, x < 0
           { e^{−x}, x > 0.

(d) f(x) = e^{−ax²/2},  a > 0.

2. For given a > 0, let


f(x) = { e^{−x} x^{a−1}, x > 0
       { 0,              x ≤ 0.

Show that

F{f}(ω) = Γ(a)(1 + iω)^{−a}.

3. Show that the Fourier transform of

f(x) = { e^{2x}, x < 0
       { e^{−x}, x > 0

is

F{f}(ω) = 3/((2 − iω)(1 + iω)).

4. Show that the Fourier transform of

f(x) = { cos(ax), |x| < 1
       { 0,       |x| > 1

is

F{f}(ω) = sin(ω − a)/(ω − a) + sin(ω + a)/(ω + a).

5. Let g be a function defined on [0, ∞), and let its Laplace transform L{g} exist. On (−∞, ∞) define the function f by

f(t) = { 0,    t < 0
       { g(t), t ≥ 0.

Show formally that

L{g}(ω) = F(−iω),

where F(ω) = F{f}(ω).

6. Find the Fourier transform of

f(x) = { cos x, |x| < π
       { 0,     |x| > π,

and, using the result in Theorem 2.2.1, evaluate the integral

∫_0^∞ x sin(πx) cos(xy)/(1 − x²) dx.

7. If a > 0 is a constant, then compute the following Fourier transforms.


(a) Fc{e^{−ax}}.

(b) Fs{e^{−ax}}.

(c) Fs{x e^{−ax}}.

(d) Fc{(1 + x) e^{−ax}}.

8. Find the Fourier transform of the Heaviside unit step function u0 :


{
1, x > 0
u0 (x) =
0, x < 0.
9. Verify that

(a) F{sin(ω0 x) u0(x)}(ω) = ω0/(ω0² − ω²) + (πi/2) [ δ(ω + ω0) − δ(ω − ω0) ],

and

(b) F{cos(ω0 x) u0(x)}(ω) = iω/(ω0² − ω²) + (π/2) [ δ(ω − ω0) + δ(ω + ω0) ].

2.2.2. Properties of Fourier Transforms.


In principle, we can compute the Fourier transform of any function from
the definition. However, it is much more efficient to derive some properties of
the Fourier transform. This is the purpose of this section.
The following theorem establishes the linearity property of the Fourier
transform.
Theorem 2.2.2 (Linearity). If f1 and f2 are integrable functions on R, then

F{c1 f1 + c2 f2} = c1 F{f1} + c2 F{f2}

for any constants c1 and c2.
Proof. This result follows directly from the definition of the Fourier transform and the linearity property of integrals. ■

The next theorem summarizes some of the more important and useful prop-
erties of the Fourier transform.
Theorem 2.2.3. Suppose that f and g are integrable functions on R. Then

(a) For any c ∈ R, F{f(x − c)}(ω) = e^{−icω} F{f}(ω) (Translation).

(b) For any c ∈ R, F{e^{icx} f(x)}(ω) = F{f}(ω − c) (Modulation).

(c) If c > 0 and fc(x) = (1/c) f(x/c), then F{fc}(ω) = F{f}(cω) (Dilation).

(d) F{f ∗ g} = (F{f})(F{g}) (Convolution).

(e) If f is continuous and piecewise smooth and f′ is integrable, then F{f′}(ω) = iω F{f}(ω) (Differentiation).

(f) If also xf(x) is integrable, then F{xf(x)}(ω) = i (F{f})′(ω).
Proof. We leave the proof of part (a) as an exercise.
For part (b), we have

F{e^{icx} f(x)}(ω) = ∫_{−∞}^{∞} e^{icx} f(x) e^{−iωx} dx = ∫_{−∞}^{∞} e^{−ix(ω−c)} f(x) dx = F{f}(ω − c).

For part (c), by direct computation we find

F{fc}(ω) = ∫_{−∞}^{∞} (1/c) f(x/c) e^{−iωx} dx   (substitution x = cy)

         = ∫_{−∞}^{∞} f(y) e^{−icωy} dy = F{f}(cω).

For part (d) we have

F{f ∗ g}(ω) = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f(x − y) g(y) dy ) e^{−ixω} dx

            = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f(x − y) g(y) e^{−ixω} dx ) dy

            = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f(z) e^{−i(z+y)ω} dz ) g(y) dy   (z = x − y)

            = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f(z) e^{−izω} dz ) g(y) e^{−iyω} dy

            = F{f}(ω) ∫_{−∞}^{∞} g(y) e^{−iyω} dy = (F{f}(ω))(F{g}(ω)).
In the proof of (e) we will use the fact that

lim_{x→±∞} f(x) = 0

for every integrable function f on R. Now, based on this fact, by direct computation and the integration by parts formula we have

F{f′}(ω) = ∫_{−∞}^{∞} f′(x) e^{−iωx} dx = [ f(x) e^{−iωx} ]_{x=−∞}^{x=∞} − ∫_{−∞}^{∞} (−iω) f(x) e^{−iωx} dx

         = 0 + iω ∫_{−∞}^{∞} f(x) e^{−iωx} dx = iω F{f}(ω).

For the last part (f),

F{xf(x)}(ω) = ∫_{−∞}^{∞} x f(x) e^{−iωx} dx = ∫_{−∞}^{∞} i f(x) (d/dω)( e^{−iωx} ) dx

            = i (d/dω) ∫_{−∞}^{∞} f(x) e^{−iωx} dx = i (F{f})′(ω). ■

Remark. Parts (e) and (f) in the above theorem can be generalized as follows:

(e′) If f and the derivatives f′, …, f^{(n−1)} are continuous on R and f^{(n)} is piecewise continuous and integrable on R, then

F{f^{(n)}}(ω) = (iω)^n F{f}(ω).

(f′) If f is an integrable function on R, such that the function x^n f(x) is also integrable, then

F{x^n f(x)}(ω) = i^n (F{f})^{(n)}(ω).
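The differentiation rule (e) can be illustrated numerically with the Gaussian of Example 2.2.2; the Python sketch below (our construction) compares a direct quadrature of F{f′} with iω F{f}:

```python
import math

# Check of the differentiation rule F{f'}(omega) = i omega F{f}(omega)
# for f(x) = e^{-x^2}, whose derivative is f'(x) = -2x e^{-x^2}.
# f' is odd, so F{f'} is purely imaginary with
# Im F{f'}(omega) = -Integral f'(x) sin(omega x) dx,
# and the rule predicts Im F{f'}(omega) = omega * sqrt(pi) * e^{-omega^2/4}.
def im_ft_fprime(omega, X=8.0, n=80_000):
    h = 2.0 * X / n
    s = 0.0
    for k in range(n):
        x = -X + (k + 0.5) * h
        s += (-2.0 * x * math.exp(-x * x)) * math.sin(omega * x)
    return -s * h

for w in (0.5, 1.0, 2.0):
    expected = w * math.sqrt(math.pi) * math.exp(-w * w / 4.0)
    assert abs(im_ft_fprime(w) - expected) < 1e-5
```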

For the Fourier sine and Fourier cosine transform we have the following
theorem.
Theorem 2.2.4. Suppose that f is continuous and piecewise smooth and
that f and f ′ are integrable functions on (0, ∞). Then
(a) Fc {f ′ }(ω) = ωFs {f }(ω) − f (0).

(b) Fs {f ′ }(ω) = −ωFc {f }(ω).


Proof. We prove only (a), leaving (b) as an exercise.
Since f is an integrable function on (0, ∞) we have

lim_{x→∞} f(x) = 0.

Using this condition and the integration by parts formula we have


Fc{f′}(ω) = ∫_0^∞ f′(x) cos ωx dx = [ f(x) cos ωx ]_{x=0}^{x=∞} + ω ∫_0^∞ f(x) sin ωx dx

          = −f(0) + ω Fs{f}(ω). ■

As a consequence of this theorem we have the following.


Corollary 2.2.1. Suppose that the functions f and f ′ are continuous and
piecewise smooth and that f and f ′ are integrable on (0, ∞). Then
(a) Fs {f ′′ }(ω) = −ω 2 Fs {f }(ω) + ωf (0).

(b) Fc {f ′′ }(ω) = −ω 2 Fc {f }(ω) − f ′ (0).

Let us take now several examples.



Example 2.2.7. Let

f(x) = { 1, |x| ≤ 1
       { 0, |x| > 1

and

g(x) = { |x|, |x| ≤ 2
       { 0,   |x| > 2.
Compute the Fourier transform F {f ∗ g}.
Solution. First we find F{f} and F{g}. In Example 2.2.3 we have already found that

F{f}(ω) = 2 sin ω/ω.

For F{g} we have

F{g}(ω) = ∫_{−∞}^{∞} g(x) e^{−iωx} dx = ∫_{−2}^{0} (−x) e^{−iωx} dx + ∫_{0}^{2} x e^{−iωx} dx

        = −[ (1/ω² + ix/ω) e^{−iωx} ]_{x=−2}^{x=0} + [ (1/ω² + ix/ω) e^{−iωx} ]_{x=0}^{x=2}

        = (2 cos 2ω − 2)/ω² + (4 sin 2ω)/ω.

Thus, by the convolution property of the Fourier transform in Theorem 2.2.3,

F{f ∗ g}(ω) = (F{f}(ω))(F{g}(ω)) = 2 (sin ω/ω) [ (2 cos 2ω − 2)/ω² + (4 sin 2ω)/ω ].
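The closed form for F{g} can be verified numerically; the Python sketch below (function name ours) approximates 2∫₀² x cos(ωx) dx by a midpoint sum and compares it with the expression above:

```python
import math

# Check of F{g}(omega) = (2 cos 2w - 2)/w^2 + (4 sin 2w)/w for
# g(x) = |x| on [-2, 2].  Since g is even,
# F{g}(omega) = 2 * Integral_0^2 x cos(omega x) dx.
def ft_g(omega, n=40_000):
    h = 2.0 / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += x * math.cos(omega * x)
    return 2.0 * s * h

for w in (0.5, 1.0, 3.0):
    closed = (2.0 * math.cos(2 * w) - 2.0) / w**2 + 4.0 * math.sin(2 * w) / w
    assert abs(ft_g(w) - closed) < 1e-6
```

As a consistency check, note that F{g}(ω) → 4 as ω → 0, which equals ∫_{−2}^{2} |x| dx, as it must.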

Example 2.2.8. Find

F^{−1}{ e^{−ω²/4}/(1 + ω²) }.

Solution. From Examples 2.2.1 and 2.2.2 we have

F{e^{−|x|}}(ω) = 2/(1 + ω²)   and   F{e^{−x²}}(ω) = √π e^{−ω²/4}.

Therefore, from the convolution property of the Fourier transform

F^{−1}{ e^{−ω²/4}/(1 + ω²) }(x) = F^{−1}{ e^{−ω²/4} · 1/(1 + ω²) }(x)

 = ( F^{−1}{ e^{−ω²/4} }(x) ) ∗ ( F^{−1}{ 1/(1 + ω²) }(x) )

 = ( (1/√π) e^{−x²} ) ∗ ( (1/2) e^{−|x|} ) = (1/(2√π)) ∫_{−∞}^{∞} e^{−(x−y)²} e^{−|y|} dy.

Example 2.2.9. Using properties (e) and (f) in Theorem 2.2.3 find the Fourier transform of the function

y(x) = e^{−x²}.

Solution. The function y(x) = e^{−x²} satisfies the following differential equation:

y′(x) + 2x y(x) = 0.

Applying the Fourier transform to both sides of this equation, with F{y} = Y, from (e) and (f) of Theorem 2.2.3 it follows that

iω Y(ω) + 2i Y′(ω) = 0,

i.e.,

Y′(ω) + (ω/2) Y(ω) = 0.

The general solution of the above equation is

Y(ω) = C e^{−ω²/4}.

For the constant C we have

C = Y(0) = ∫_{−∞}^{∞} y(x) e^{−ix·0} dx = ∫_{−∞}^{∞} e^{−x²} dx = √π.   (See Appendix G.)

Therefore,

Y(ω) = √π e^{−ω²/4}.

In the definition of the Fourier transform of a function f we required that


the function was integrable on the real line. We mention only (without any
further discussion) that theory has been developed that allows us to apply
the Fourier transform also to functions that are square integrable. The next
theorem, even though valid for square integrable functions, will be stated only
for functions that are both integrable and square integrable.
Theorem 2.2.5 (Parseval's Formula). Suppose that f is a function which is both integrable and square integrable on R. If F = F{f} is the Fourier transform of f, then

∫_{−∞}^{∞} |f(x)|² dx = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω.

Proof. Since |f(x)|² = f(x) f̄(x), from the conjugate of the inverse Fourier transform formula

f(x) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω

we have

∫_{−∞}^{∞} |f(x)|² dx = ∫_{−∞}^{∞} f(x) ( (1/2π) ∫_{−∞}^{∞} F̄(ω) e^{−iωx} dω ) dx

 = (1/2π) ∫_{−∞}^{∞} F̄(ω) ( ∫_{−∞}^{∞} f(x) e^{−iωx} dx ) dω

 = (1/2π) ∫_{−∞}^{∞} F̄(ω) F(ω) dω = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω. ■

Example 2.2.10. Using the Fourier transform of the unit step function on the interval (−a, a) show that

∫_{−∞}^{∞} sin²(aω)/ω² dω = aπ.

Solution. Let 1a be the unit step function on (−a, a):

1a(x) = { 1, |x| < a
        { 0, |x| > a.

In Example 2.2.3 we showed that

F{1a}(ω) = 2 sin(ωa)/ω.

Therefore, by Parseval's formula we have

(2/π) ∫_{−∞}^{∞} sin²(aω)/ω² dω = (1/2π) ∫_{−∞}^{∞} |F{1a}(ω)|² dω = ∫_{−∞}^{∞} |1a(x)|² dx = ∫_{−a}^{a} 1 dx = 2a,

and the stated formula follows.
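The value aπ can be checked by direct quadrature. In the Python sketch below (a rough illustration; the tail correction 1/X is our own estimate from (1 − cos 2aω)/(2ω²)) the slowly converging integral is truncated at X and corrected:

```python
import math

# Numerical check of Integral sin^2(a w)/w^2 dw = a*pi over the real line.
# We sum 2*Integral_0^X by the midpoint rule; the truncated tail of
# (1 - cos 2aw)/(2 w^2) contributes approximately 1/X, which we add back.
def parseval_integral(a, X=200.0, n=80_000):
    h = X / n
    s = 0.0
    for k in range(n):
        w = (k + 0.5) * h
        s += (math.sin(a * w) / w) ** 2
    return 2.0 * s * h + 1.0 / X

for a in (0.5, 1.0, 2.0):
    assert abs(parseval_integral(a) - a * math.pi) < 1e-3
```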

Now we examine Poisson’s summation theorem, which provides a beau-


tiful and important connection between the Fourier series and the Fourier
transform.

Theorem 2.2.6. Let f be absolutely integrable and continuous on the real line R with Fourier transform F = F{f}. Assume that there exist constants p > 1, M > 0 and A > 0 such that

|f(x)| ≤ M |x|^{−p} for |x| > A.

Also assume that the series

(2.2.13)  ∑_{n=−∞}^{∞} F(n)

converges. Then

(2.2.14)  ∑_{n=−∞}^{∞} f(2πn) = (1/2π) ∑_{n=−∞}^{∞} F(n),

and both series are absolutely convergent.
Proof. First we define the function g by

(2.2.15)  g(x) = ∑_{n=−∞}^{∞} f(x + 2nπ).

If we replace x by x + 2π in (2.2.15) we see that we obtain the same series; therefore g is 2π-periodic. Without proof we state that, based on the conditions on the function f, the function g(x) exists for every x and is continuous on R. Now, since g is a continuous and periodic function with period 2π, it can be represented by its complex Fourier series:

(2.2.16)  g(x) = ∑_{n=−∞}^{∞} c_n e^{inx}.

If we evaluate g(0) from (2.2.15) and (2.2.16), then we obtain

(2.2.17)  ∑_{n=−∞}^{∞} f(2nπ) = ∑_{n=−∞}^{∞} c_n.

Next, using the definition of the function g(x) we compute the coefficients c_n:

(2.2.18)
c_n = (1/2π) ∫_{−π}^{π} g(x) e^{−inx} dx = (1/2π) ∫_{−π}^{π} ( ∑_{m=−∞}^{∞} f(x + 2mπ) ) e^{−inx} dx

    = (1/2π) ∑_{m=−∞}^{∞} ∫_{−π}^{π} f(x + 2mπ) e^{−inx} dx   (integration term by term)

    = (1/2π) ∑_{m=−∞}^{∞} ∫_{2mπ−π}^{2mπ+π} f(t) e^{−in(t−2mπ)} dt   (substitution t = x + 2mπ)

    = (1/2π) ∫_{−∞}^{∞} f(t) e^{−int} dt = (1/2π) F(n).

If we substitute (2.2.18) into (2.2.17), then we obtain

∑_{n=−∞}^{∞} f(2πn) = (1/2π) ∑_{n=−∞}^{∞} F(n). ■

Remark. Using the dilation property (c) of Theorem 2.2.3, Poisson’s sum-
mation formula (2.2.14) in Theorem 2.2.6 can be written in the following
form.

(2.2.19)  ∑_{n=−∞}^{∞} f(cn) = (1/c) ∑_{n=−∞}^{∞} F(2nπ/c),

where c is any nonzero number.


Among other applications, Poisson’s summation formula can be used to
evaluate some infinite series.
Example 2.2.11. Using Poisson's summation formula for the function

f(x) = a/(a² + x²),   a > 0,

evaluate the following infinite series:

∑_{n=−∞}^{∞} 1/(a² + n²).

Solution. From the result of Example 2.2.4 it follows that

F{f}(ω) = π e^{−|ω|a}.

First, we check the assumptions of Theorem 2.2.6 (in the dilated form (2.2.19) with c = 1):

∑_{n=−∞}^{∞} F(2nπ) = ∑_{n=−∞}^{∞} π e^{−2π|n|a} = π + 2π ∑_{n=1}^{∞} e^{−2nπa} = π + 2π e^{−2πa}/(1 − e^{−2πa}) < ∞.

Therefore, the convergence condition (2.2.13) of Theorem 2.2.6 is satisfied. The condition on the growth of the function f is easily verified. Indeed, for x ≠ 0, we have

f(x) = a/(a² + x²) ≤ a/x²,

and so this condition is satisfied taking p = 2, M = a, and A > 0 arbitrary.

Now, if we substitute the expressions

f(n) = a/(a² + n²)   and   F(2nπ) = π e^{−2π|n|a}

into the Poisson summation formula (2.2.19) (with c = 1) we obtain

∑_{n=−∞}^{∞} a/(a² + n²) = π ∑_{n=−∞}^{∞} e^{−2π|n|a} = π ( 1 + 2 ∑_{n=1}^{∞} e^{−2nπa} )

 = π ( 1 + 2 e^{−2aπ}/(1 − e^{−2aπ}) ) = π (1 + e^{−2aπ})/(1 − e^{−2aπ}).

Therefore,

∑_{n=−∞}^{∞} 1/(a² + n²) = (π/a) (1 + e^{−2aπ})/(1 − e^{−2aπ}).

We rewrite the last formula as follows:

1/a² + 2 ∑_{n=1}^{∞} 1/(a² + n²) = (π/a) (1 + e^{−2aπ})/(1 − e^{−2aπ}),

and so

∑_{n=1}^{∞} 1/(a² + n²) = (π/2a) (1 + e^{−2aπ})/(1 − e^{−2aπ}) − 1/(2a²).
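The closed form can be checked against a partial sum; the Python sketch below (names ours) compares them for several values of a:

```python
import math

# Partial-sum check of
# sum_{n>=1} 1/(a^2+n^2) = (pi/2a)(1+e^{-2 a pi})/(1-e^{-2 a pi}) - 1/(2 a^2).
# The tail of the partial sum beyond N terms is of order 1/N.
def series_lhs(a, N=200_000):
    return sum(1.0 / (a * a + n * n) for n in range(1, N + 1))

def series_closed(a):
    q = math.exp(-2.0 * math.pi * a)
    return (math.pi / (2.0 * a)) * (1.0 + q) / (1.0 - q) - 1.0 / (2.0 * a * a)

for a in (0.5, 1.0, 2.0):
    assert abs(series_lhs(a) - series_closed(a)) < 1e-4
```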

Although in the next several chapters we will apply the Fourier transform
in solving partial differential equations, let us take a few examples of the
application of the Fourier transform in solving ordinary differential equations.
Example 2.2.12. Solve the following boundary value problem:

y′′(x) − a² y(x) = −f(x),   −∞ < x < ∞,    lim_{x→±∞} y(x) = 0,

where a > 0 is a constant and f is a given function.


Solution. Let F {y} = Y and F {f } = F . Applying the Fourier transform
to both sides of the differential equation we obtain

ω 2 Y (ω) + a2 Y (ω) = F (ω).

Solving for Y (ω), it follows that

(2.2.20)  Y(ω) = F(ω)/(a² + ω²).

Therefore, we can reconstruct the solution by applying the inverse Fourier transform to (2.2.20):

(2.2.21)  y(x) = (1/2π) ∫_{−∞}^{∞} ( F(ω)/(a² + ω²) ) e^{iωx} dω.

Alternatively, in order to find the solution y(x) from (2.2.21), we can use the convolution property of the Fourier transform:

y(x) = F^{−1}{ F(ω)/(a² + ω²) }(x) = F^{−1}{ 1/(a² + ω²) }(x) ∗ f(x)

     = (1/2a) e^{−a|x|} ∗ f(x) = (1/2a) ∫_{−∞}^{∞} e^{−a|x−y|} f(y) dy.

The improper integral (2.2.21) can also be evaluated using the Cauchy Residue Theorem.
Example 2.2.13. Find the solution y(x) of the boundary value problem in
Example 2.2.12 if the forcing function f is given by

f (x) = e−|x| ,

and a ≠ 1.
Solution. From Example 2.2.1 we have

F{e^{−|x|}}(ω) = 2/(1 + ω²),

and so from (2.2.21) it follows that

y(x) = (1/2π) ∫_{−∞}^{∞} 2 e^{iωx}/((1 + ω²)(a² + ω²)) dω.

Using partial fraction decomposition in the above integral, we obtain

∫_{−∞}^{∞} 2 e^{iωx}/((1 + ω²)(a² + ω²)) dω = (2/(a² − 1)) ( ∫_{−∞}^{∞} e^{iωx}/(1 + ω²) dω − ∫_{−∞}^{∞} e^{iωx}/(a² + ω²) dω )

 = (2/(a² − 1)) ( π e^{−|x|} − (π/a) e^{−|x|a} )   (from Examples 2.2.1 and 2.2.4).

Therefore, the solution of the given boundary value problem is

y(x) = (1/2π) · (2/(a² − 1)) ( π e^{−|x|} − (π/a) e^{−|x|a} ) = (1/(a² − 1)) ( e^{−|x|} − e^{−|x|a}/a ).

Two more basic properties of Fourier transforms of integrable functions


should be mentioned. First, it is relatively easy to show that if f is integrable,
then F{f } is a bounded and continuous function on R (see Exercise 1 of this
section). But something more is true, which we state without a proof.
Theorem 2.2.7 (The Riemann–Lebesgue Lemma). If f is an integrable function, then

lim_{|ω|→∞} F{f}(ω) = 0.

Theorem 2.2.8 (The Fourier Transform Inversion Formula). Suppose that f is an integrable function on R. If the Fourier transform F = F{f} is also integrable on R, then f is continuous on R and

f(x) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωx} dω,   x ∈ R.

Remark. A consequence of the Fourier Inversion Formula is the Laplace


Inversion Formula which was given by Theorem 2.1.7.

Exercises for Section 2.2.2.

1. Prove that the Fourier transform of an integrable function is a bounded


and continuous function.

2. Given that

F{1/(1 + x²)}(ω) = π e^{−|ω|},

find the Fourier transform of

(a) 1/(1 + a²x²), where a is a real constant.

(b) cos(ax)/(1 + x²), where a is a real constant.

3. Let H(x) be the Heaviside unit step function and let a > 0. Use the modulation property of the Fourier transform and the fact that

F{e^{−ax} H(x)}(ω) = 1/(a + iω)

to show that

F{e^{−ax} sin(bx) H(x)}(ω) = b/((a + iω)² + b²).

4. Let a > 0. Use the function

f(x) = e^{−ax} sin(bx) H(x)

and Parseval's formula to show that

∫_{−∞}^{∞} dx/((x² − a² − b²)² + 4a²x²) = π/(2a(a² + b²)).

5. Use the fact that

F{1/(1 + x²)}(ω) = π e^{−|ω|}

and Parseval's formula to show that

∫_{−∞}^{∞} dx/(x² + 1)² = π/2.

6. Find the inverses of the following Fourier transforms:

(a) 1/(ω² − 2ibω − a² − b²).

(b) 1/((1 + iω)(1 − iω)).

(c) 1/((1 + iω)(1 + 2iω)²).

7. By taking the appropriate closed contour, find the inverses of the following Fourier transforms by the Cauchy Residue Theorem. The parameter a is positive.

(a) ω/(ω² + a²).

(b) 3/((2 − iω)(1 + iω)).

(c) ω²/((ω² + a²)²).

8. Find the inverse Fourier transform of

F(ω) = cos(iω)/(ω² + a²),   a > 0.

Hint: Use the Cauchy Residue Theorem.

9. Using the Fourier transform, find particular solutions of the following differential equations:

(a) y′(x) + y = (1/2) e^{−|x|}.

(b) y′(x) + y = f(x), where we assume that the given function f has the Fourier transform F.

(c) y′′(x) + 4y′(x) + 4y = (1/2) e^{−|x|}.

(d) y′′(x) + 3y′(x) + 2y = e^{−x} H(x), where H is the Heaviside step function.

10. Suppose that f is continuous and piecewise smooth and that f and f′ are integrable on (0, ∞). Show that

Fs{f′}(ω) = −ω Fc{f}(ω)

and

Fc{f′}(ω) = ω Fs{f}(ω) − f(0).
11. State and prove Parseval’s formulas for the Fourier cosine and Fourier
sine transforms.

2.3. Projects Using Mathematica.

In this section we will see how Mathematica can be used to evaluate the
Laplace and Fourier transforms, as well as their inverse transforms. For a
brief overview of the computer software Mathematica consult Appendix H.
Let us start with the Laplace transform.

Mathematica's commands for the Laplace transform and the inverse Laplace transform

(2.3.1)  F(s) = ∫_0^∞ f(t) e^{−st} dt,   f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s) e^{st} ds

are

In[] := LaplaceTransform[f[t], t, s];
In[] := InverseLaplaceTransform[F[s], s, t];

Project 2.3.1. Using Mathematica find the Laplace transform of the function

f(t) = { t, 0 ≤ t < 1
       { 0, 1 < t < ∞
in two ways:
(a) By (2.3.1).

(b) By Mathematica’s command for the Laplace transform.

Solution. (a). First define the function f(t):

In[1] := f[t_] := Piecewise[{{t, 0 ≤ t < 1}, {0, 1 < t}}];

Now define the Laplace transform F(s) of f(t):

In[2] := F[s_] := Integrate[f[t] e^{−s t}, {t, 0, ∞}, Assumptions → {Im[s] == 0, s > 0}]
Out[2] = e^{−s}(−1 + e^{s} − s)/s²

Part (b):

In[3] := LaplaceTransform[f[t], t, s]
Out[3] = e^{−s}(−1 + e^{s} − s)/s²

Project 2.3.2. Using Mathematica find the inverse Laplace transform of the function

F(s) = (2s² − 3s + 1)/(s³ (s² + 9))
in two ways:
(a) By Mathematica’s command for the inverse Laplace transform.

(b) By the following formula for the inverse Laplace transform:

(2.3.2)  f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s) e^{st} ds = ∑_n Res( F(z) e^{zt}, z = z_n ),

where z_n are all the singularities of F(z) in the half-plane {z ∈ C : Re(z) < c} and c > 0 is any fixed number.

Solution. (a). First clear all previous f and F:

In[1] := Clear[f, F];

Next define the function F(s):

In[2] := F[s_] := (2s² − 3s + 1)/(s³ (s² + 9));

In[3] := InverseLaplaceTransform[F[s], s, t]
Out[3] = (1/162)(34 − 54 t + 9 t² − 34 Cos[3 t] + 18 Sin[3 t])

In[4] := Expand[%]
Out[4] = 17/81 − t/3 + t²/18 − (17/81) Cos[3 t] + (1/9) Sin[3 t]

Part (b):
Find the singularities of the function F(z):

In[5] := Solve[z³ (z² + 9) == 0, z]
Out[5] = {{z → 0}, {z → 0}, {z → 0}, {z → −3 i}, {z → 3 i}}

Find the residues at these singularities:

In[6] := r0 = Residue[F[z] e^{z t}, {z, 0}]
Out[6] = (1/162)(34 − 54 t + 9 t²)

In[7] := r1 = Residue[F[z] e^{z t}, {z, −3 i}]
Out[7] = (−17/162 + i/18) e^{−3 i t}

In[8] := r2 = Residue[F[z] e^{z t}, {z, 3 i}]
Out[8] = (−17/162 − i/18) e^{3 i t}

Now add the above residues to get the inverse Laplace transform:

In[9] := r0 + r1 + r2
Out[9] = (−17/162 + i/18) e^{−3 i t} + (−17/162 − i/18) e^{3 i t} + (1/162)(34 − 54 t + 9 t²)

We can simplify the above expression:



In[10] := FullSimplify[%]
Out[10] = (1/162)(34 + 9 (−6 + t) t − 34 Cos[3 t] + 18 Sin[3 t])

In[11] := Expand[%]
Out[11] = 17/81 − t/3 + t²/18 − (17/81) Cos[3 t] + (1/9) Sin[3 t]

Using the Laplace transform we can solve differential equations.

Project 2.3.3. Using Mathematica solve the differential equation

y ′′ (t) + 9y(t) = cos t.

Solution. Take the Laplace transform of both sides of the equation:

In[1] := LaplaceTransform[y′′[t] + 9 y[t] == Cos[t], t, s]
Out[1] = −s y[0] + 9 LaplaceTransform[y[t], t, s] + s² LaplaceTransform[y[t], t, s] − y′[0] == s/(1 + s²)

Solve for the Laplace transform:

In[2] := Solve[%, LaplaceTransform[y[t], t, s]]
Out[2] = {{LaplaceTransform[y[t], t, s] → (s + s y[0] + s³ y[0] + y′[0] + s² y′[0])/((1 + s²)(9 + s²))}}

Find the inverse Laplace transform:

In[3] := InverseLaplaceTransform[%, s, t]
Out[3] = {{y[t] → (1/24)(3 Cos[t] − 3 Cos[3 t] + 24 Cos[3 t] y[0] + 8 Sin[3 t] y′[0])}}

In[4] := Expand[%]
Out[4] = {{y[t] → Cos[t]/8 − (1/8) Cos[3 t] + Cos[3 t] y[0] + (1/3) Sin[3 t] y′[0]}}

Now we discuss the Fourier transform in Mathematica.


In Mathematica the Fourier transform of a function f(t) and its inverse are by default defined to be

(1/√(2π)) ∫_{−∞}^{∞} f(t) e^{iωt} dt   and   (1/√(2π)) ∫_{−∞}^{∞} F(ω) e^{−iωt} dω,

respectively.
The command to find the Fourier transform is
In[] := FourierTransform [f [t], t, ω]

The default Mathematica command


In[] := InverseFourierTransform [f [t], t, ω]
gives the inverse Fourier transform of f (t).
The Fourier transform and its inverse, defined by

∫_{−∞}^{∞} f(t) e^{−iωt} dt   and   (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωt} dω,

in Mathematica are evaluated by

In[] := FourierTransform[f[t], t, ω, FourierParameters → {1, −1}]
In[] := InverseFourierTransform[F[ω], ω, t, FourierParameters → {1, −1}],

respectively.

The next project shows how Mathematica can evaluate Fourier transforms
and inverse Fourier transforms of a big range of functions: algebraic, expo-
nential and trigonometric functions, step and impulse functions.
Project 2.3.4. Find the Fourier transform of each of the following functions. Take the constant in the Fourier transform to be 1. From the obtained Fourier transforms find their inverses.

(a) e^{−t²}.

(b) 1/√|t|.

(c) sinc(t) = sin t/t.

(d) sign(t) = {  1, t > 0
             { −1, t < 0.

(e) δ(t), the Dirac delta function, concentrated at t = 0.

(f) e^{−|t|}.

Solution. First we clear any previous variables and functions.


In[1] := Clear [t, f, g, h, s];

Part (a).
In[2] := FourierTransform[e^{−t²}, t, ω, FourierParameters → {1, −1}]
Out[2] = e^{−ω²/4} √π
In[3] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[3] = e^{−t²}

Part (b).
In[4] := FourierTransform[1/Sqrt[Abs[t]], t, ω, FourierParameters → {1, −1}]
Out[4] = Sqrt[2π]/Sqrt[Abs[ω]]
In[5] := FourierTransform[HeavisideTheta[t], t, ω, FourierParameters → {1, −1}]
Out[5] = −i/ω + π DiracDelta[ω]
In[6] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[6] = (1/2)(1 + Sign[t])

Part (c).
In[7] := FourierTransform[Sinc[t], t, ω, FourierParameters → {1, −1}]
Out[7] = (π/2) Sign[1 − ω] + (π/2) Sign[1 + ω]
In[8] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[8] = Sin[t]/t

Part (d).
In[9] := FourierTransform[Sign[t], t, ω, FourierParameters → {1, −1}]
Out[9] = −2i/ω
In[10] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[10] = Sign[t]

Part (e).
In[11] := FourierTransform[DiracDelta[t], t, ω, FourierParameters → {1, −1}]
Out[11] = 1
In[12] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[12] = DiracDelta[t]

Part (f).
In[13] := FourierTransform[Exp[−Abs[t]], t, ω, FourierParameters → {1, −1}]
Out[13] = 2/(1 + ω²)
In[14] := InverseFourierTransform[%, ω, t, FourierParameters → {1, −1}]
Out[14] = e^{−Abs[t]}.
CHAPTER 3

STURM–LIOUVILLE PROBLEMS

In the first chapter we saw that the trigonometric functions sine and cosine
can be used to represent functions in the form of Fourier series expansions.
Now we will generalize these ideas.
The methods developed here will generally produce solutions of various
boundary value problems in the form of infinite function series. Technical
questions and issues, such as convergence, termwise differentiation and inte-
gration and uniqueness, will not be discussed in detail in this chapter. Interested readers may acquire these details from advanced literature on these topics, such as the book by G. B. Folland [6].

3.1 Regular Sturm–Liouville Problems.


In mathematical physics and other disciplines fairly large numbers of prob-
lems are defined in the form of boundary value problems involving second
order ordinary differential equations. Therefore, let us consider the differen-
tial equation

(3.1.1)  a(x) y′′(x) + b(x) y′(x) + [ c(x) + λ r(x) ] y = 0,   a < x < b,

subject to some boundary conditions on a bounded interval [a, b]. We suppose that the real functions a(x) and r(x) are continuous on the interval [a, b], that λ is a parameter, and that a(x) does not vanish for any x ∈ [a, b]. It turns out that it is much more convenient to rewrite the differential equation (3.1.1) in its equivalent form, the so-called Sturm–Liouville form

( p(x) y′(x) )′ + [ q(x) + λ r(x) ] y = 0,

where the real functions p(x), p′(x), r(x) are continuous on [a, b], and p(x) and r(x) are positive on [a, b].
Remark. Any differential equation of the form (3.1.1) can be written in
Sturm–Liouville form.
Indeed, first divide (3.1.1) by a(x) to obtain
[ ]
b(x) ′ c(x) r(x)
y ′′ (x) + y (x) + +λ y = 0.
a(x) a(x) a(x)


Multiplying the last equation by


∫ b(x)
dx ( )
µ(x) = e a(x) ignore the integration constant

we obtain
[ ]
b(x) ′ µ(x)c(x) µ(x)r(x)
µ(x)y ′′ (x) + µ(x) y (x) + +λ y = 0.
a(x) a(x) a(x)

Now, using the fact that


b(x)
µ′ (x) = µ(x) ,
a(x)
it follows that
[ ]
′′ ′ µ(x)c(x)
′ µ(x)r(x)
µ(x)y (x) + µ (x)y (x) + +λ y = 0,
a(x) a(x)

and thus, by the product rule for differentiation we have

(µ(x)y′(x))′ + [µ(x)c(x)/a(x) + λ µ(x)r(x)/a(x)] y = 0.

But the last equation is indeed in Sturm–Liouville form.
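As a concrete instance of this remark (the computation below is ours, following the recipe just given), consider the constant-coefficient equation y′′ − 2y′ + λy = 0, which appears again in Example 3.1.3. Here a(x) = 1, b(x) = −2, c(x) = 0 and r(x) = 1, so the integrating factor is µ(x) = e^{−2x}, and the Sturm–Liouville form is

```latex
\left(e^{-2x}\,y'(x)\right)' + \lambda e^{-2x}\,y = 0,
\qquad p(x) = e^{-2x},\quad q(x) = 0,\quad r(x) = e^{-2x}.
```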


Definition 3.1.1. A regular Sturm–Liouville problem is a second order
homogeneous linear differential equation of the form

(3.1.2) (p(x)y′(x))′ + [q(x) + λr(x)]y(x) = 0, a < x < b

where p(x), p′(x) and r(x) are real continuous functions on a finite interval
[a, b], and p(x), r(x) > 0 on [a, b], together with the set of homogeneous
boundary conditions of the form

(3.1.3) α1 y(a) + β1 y′(a) = 0,
        α2 y(b) + β2 y′(b) = 0,

where the αi and βi are constants such that α1, β1 are not both zero and
α2, β2 are not both zero. We regard λ as an undetermined constant parameter.

Notation. Sometimes we will denote by L the following linear differential
operator:

(3.1.4) L[y] = (p(x)y′(x))′ + q(x)y(x).

Using this operator the differential equation (3.1.2) can be expressed in the
following form.

(3.1.5) L[y] = −λ r y.

We use the term linear differential operator because of the following im-
portant linear property of the operator L.

L[c1 y1 + c2 y2 ] = c1 L[y1 ] + c2 L[y2 ]

for any constants c1 and c2 and any twice differentiable functions y1 and y2.


Remark. For particular values of the constants α's and β's in the boundary
conditions (3.1.3) we obtain special types of boundary conditions.
For β1 = β2 = 0, the conditions (3.1.3) are called Dirichlet boundary conditions.
For α1 = α2 = 0, the conditions (3.1.3) are called Neumann boundary conditions.
Other types of boundary conditions that are often encountered are periodic
boundary conditions when

y(a) = y(b), y ′ (a) = y ′ (b).

Our goal is to find all solutions of the Sturm–Liouville problem. It is clear
that y ≡ 0, the trivial solution, solves (3.1.2) for every λ and satisfies
the boundary conditions (3.1.3). However, we are interested in finding
those parameters λ for which the Sturm–Liouville problem has non-trivial
solutions y. Those values of λ and the corresponding solutions y(x) have
special names:
Definition 3.1.2. If y(x) ̸≡ 0 is a solution of a regular Sturm–Liouville
problem (3.1.2), (3.1.3) corresponding to some constant λ, then this solution
is called an eigenfunction corresponding (associated) to the eigenvalue λ.

Let us now take an example.


Example 3.1.1. Find all eigenvalues and the corresponding eigenfunctions
of the following problem.

y ′′ (x) + λy(x) = 0, 0 < x < l,

subject to the boundary conditions y(0) = 0 and y ′ (l) = 0.


Solution. First we observe that this is a regular Sturm–Liouville problem.
Indeed, the differential equation can be written in the form

(1 · y′(x))′ + [0 + 1 · λ]y = 0,

so p(x) = 1, q(x) = 0 and r(x) = 1.



The given differential equation is a simple homogeneous linear differential


equation of second order with constant coefficients. Its characteristic equation
is
m² + λ = 0,

whose solutions are m = ±√(−λ).

The general solution y(x) of the differential equation depends on whether
λ = 0, λ < 0 or λ > 0.
Case 1°. λ = 0. In this case the differential equation is simply y′′ = 0,
which has general solution y = A + Bx. The two boundary conditions
y(0) = y′(l) = 0 imply A = 0 and B = 0, which yields y ≡ 0 on the interval
[0, l], and so λ = 0 is not an eigenvalue of the given problem.
Case 2°. λ < 0. In this case we can write λ = −µ², where µ > 0. The
solutions of the characteristic equation in this case are m = ±µ, and so the
general solution of the differential equation is

y(x) = A1 e^{µx} + B1 e^{−µx} = A cosh(µx) + B sinh(µx), A, B constants.

The boundary condition y(0) = 0 implies A = 0, so y(x) = B sinh (µx). The


other boundary condition y ′ (l) = 0 implies that Bµ cosh(µl) = 0, and since
cosh(µl) > 0 we have B = 0. Therefore, y(x) ≡ 0, and so any λ < 0 cannot
be an eigenvalue of the problem.
Case 3°. λ > 0. In this case we have λ = µ², with µ > 0. The differential
equation in this case has general solution

y(x) = A sin(µx) + B cos(µx), A and B constants.

The boundary condition y(0) = 0 implies B = 0, so y(x) = A sin(µx). The


other boundary condition y ′ (l) = 0 implies that Aµ cos(µl) = 0. To avoid
triviality, we want A to be nonzero, and so we must have

cos(µl) = 0.

If the solutions µ of the last equation are denoted by µn, then we have

µn = (2n − 1)π/(2l), n ∈ N.

Therefore, the eigenvalues for this problem are

λn = µn² = (2n − 1)²π²/(4l²), n = 1, 2, . . ..

The corresponding eigenfunctions are functions of the form An sin(µn x), and
ignoring the factors An, the eigenfunctions of the given problem are

yn(x) = sin(µn x), µn = (2n − 1)π/(2l), n = 1, 2, . . ..
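The eigenpairs just obtained can be checked numerically. The sketch below is ours, not part of the text; the helper name check_eigenpair and the choice l = 2 are illustrative. It verifies by finite differences that yn(x) = sin(µn x) satisfies the equation y′′ + λy = 0 and both boundary conditions:

```python
import math

def check_eigenpair(n, l=2.0, h=1e-5):
    """Check that y(x) = sin(mu_n x), mu_n = (2n-1)pi/(2l), satisfies
    y'' + lambda_n y = 0, y(0) = 0, y'(l) = 0 (Example 3.1.1)."""
    mu = (2*n - 1) * math.pi / (2*l)
    lam = mu**2
    y = lambda x: math.sin(mu * x)
    # boundary condition at x = 0
    assert abs(y(0.0)) < 1e-12
    # boundary condition y'(l) = 0, via a central difference
    assert abs((y(l + h) - y(l - h)) / (2*h)) < 1e-6
    # ODE residual y'' + lam*y at a few interior points
    for x in (0.3, 1.1, 1.7):
        ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2
        assert abs(ypp + lam * y(x)) < 1e-4
    return lam

print([check_eigenpair(n) for n in (1, 2, 3)])   # the first three eigenvalues
```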

Notice that all the eigenvalues in Example 3.1.1 are real numbers. Actually,
this is not an accident: this fact is true for any regular Sturm–Liouville
problem.
We introduce a very useful shorthand notation:
For two square integrable complex functions f and g on an interval [a, b],
the expression

(f, g) = ∫_a^b f(x) ḡ(x) dx

(here ḡ(x) denotes the complex conjugate of g(x)) is called the inner product
of f and g.
The inner product satisfies the following useful properties:

(a) (f, f) ≥ 0 for any square integrable f, and (f, f) = 0 only if f ≡ 0.

(b) (f, g) equals the complex conjugate of (g, f).
(c) (αf + βg, h) = α(f, h) + β(g, h), for any scalars α and β.
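These three properties are easy to confirm numerically. The sketch below is our illustration (the function name inner, the midpoint-rule quadrature, and the sample functions are all arbitrary choices): it approximates the inner product on [0, 1] and checks (a), (b) and (c) for a pair of complex-valued functions:

```python
import cmath

def inner(f, g, a=0.0, b=1.0, n=4000):
    """Midpoint-rule approximation of the inner product of f and g on [a, b]:
    the integral of f(x) * conj(g(x))."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5)*h) * g(a + (k + 0.5)*h).conjugate()
               for k in range(n)) * h

f = lambda x: cmath.exp(1j * x)          # f(x) = e^{ix}
g = lambda x: complex(x, x*x)            # g(x) = x + i x^2

# (b): (f, g) equals the conjugate of (g, f)
assert abs(inner(f, g) - inner(g, f).conjugate()) < 1e-9
# (c): linearity in the first slot
al, be = 2.0 - 1.0j, 0.5j
lhs = inner(lambda x: al*f(x) + be*g(x), g)
assert abs(lhs - (al*inner(f, g) + be*inner(g, g))) < 1e-9
# (a): (f, f) is real and nonnegative
ff = inner(f, f)
assert abs(ff.imag) < 1e-12 and ff.real > 0
```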

The next theorem shows that the eigenvalues of a regular Sturm–Liouville


problem are real numbers.
Theorem 3.1.1. Every eigenvalue of the regular Sturm–Liouville problem
(3.1.2), (3.1.3) is a real number.
Proof. Let λ be an eigenvalue of the boundary value problem (3.1.2), (3.1.3),
with a corresponding eigenfunction y = y(x). If we multiply both sides of the
equation

L[y] = −λ r y

by ȳ(x) (the complex conjugate of y(x)) and integrate from a to b, then we
obtain

∫_a^b [(p(x)y′(x))′ + q(x)y] ȳ(x) dx = −λ(y, ry).

If we integrate by parts, then from the last equation we have

[p(x)ȳ(x)y′(x)]_{x=a}^{x=b} − ∫_a^b p(x)y′(x)ȳ′(x) dx + ∫_a^b q(x)y(x)ȳ(x) dx = −λ(y, ry),

i.e.,

(3.1.6) p(b)ȳ(b)y′(b) − p(a)ȳ(a)y′(a) − (y′, py′) + (y, qy) = −λ(y, ry).

Now, taking the complex conjugate of the above equation, and using the fact
that p, q and r are real functions, along with properties (b) and (c) of the
inner product, we have

(3.1.7) p(b)y(b)ȳ′(b) − p(a)y(a)ȳ′(a) − (y′, py′) + (y, qy) = −λ̄(y, ry).

If we subtract (3.1.7) from (3.1.6) we obtain

(3.1.8) p(b)[ȳ(b)y′(b) − y(b)ȳ′(b)] − p(a)[ȳ(a)y′(a) − y(a)ȳ′(a)] = (λ̄ − λ)(y, ry).

The boundary condition at the point b together with its complex conjugate
gives

α2 y(b) + β2 y′(b) = 0,
α2 ȳ(b) + β2 ȳ′(b) = 0.

The constants α2, β2 cannot both be zero, otherwise there would be no
boundary condition at x = b. Therefore,

(3.1.9) ȳ(b)y′(b) − y(b)ȳ′(b) = 0.

Similar considerations (working with the boundary condition at x = a) give

(3.1.10) ȳ(a)y′(a) − y(a)ȳ′(a) = 0.

Inserting conditions (3.1.9) and (3.1.10) into (3.1.8) we have

(3.1.11) (λ̄ − λ)(y, ry) = 0.

Now, since y(x) ̸≡ 0 on [a, b], the continuity of y(x) at some point of
[a, b] implies that y(x)ȳ(x) > 0 for every x in some interval (c, d) ⊆ [a, b].
Therefore, from r(x) > 0 on [a, b] and the continuity of r(x) it follows that
y(x)r(x)ȳ(x) > 0 for every x ∈ (c, d). Hence (y, ry) > 0, and so (3.1.11)
forces λ̄ − λ = 0, i.e., λ̄ = λ. Therefore λ is a real number. ■
Regular Sturm–Liouville problems have several important properties; however,
we will not prove them all here. An obvious question regarding a regular
Sturm–Liouville problem is the existence of eigenvalues and eigenfunctions.
One property related to this question, whose proof is beyond the scope of
this book but can be found in more advanced books, is the following theorem.
Theorem 3.1.2. A regular Sturm–Liouville problem has infinitely many real
and simple eigenvalues λn , n = 0, 1, 2, . . ., which can be arranged as a mono-
tone increasing sequence

λ0 < λ1 < λ2 < . . . < λn < . . . ,



such that
lim λn = ∞.
n→∞

For each eigenvalue λn there exists only one eigenfunction yn (x) (up to a
multiplicative constant).

Several other properties will be presented in the next section.


Prior to that we take up a few more illustrative examples.
Example 3.1.2. Find the eigenvalues and the corresponding eigenfunctions
of the following problem.

x2 y ′′ (x) + xy ′ (x) + (λ + 2)y = 0, 1 < x < 2,

subject to the boundary conditions y ′ (1) = 0 and y ′ (2) = 0.


Solution. First we observe that this is a regular Sturm–Liouville problem.
Indeed, the differential equation can be written in the form

(x · y′)′ + [2/x + (1/x) · λ] y = 0,

so

p(x) = x, q(x) = 2/x, r(x) = 1/x.
The given differential equation is an Euler–Cauchy equation. Its
characteristic equation is

m² + λ + 2 = 0,

whose solutions are m = ±√(−λ − 2). (See Appendix D.) The general solution
y(x) of the differential equation depends on whether λ = −2, λ < −2 or λ > −2.
Case 1°. λ = −2. In this case the differential equation is simply
xy′′ + y′(x) = 0, whose general solution is given by y = A + B ln(x). The two
boundary conditions give B = 0, which yields y(x) ≡ A, and so λ = −2 is an
eigenvalue and the corresponding eigenfunction is y(x) = 1.
Case 2°. λ < −2. In this case we have λ + 2 = −µ² with µ > 0. The
solutions of the characteristic equation in this case are m = ±µ, and so a
general solution of the differential equation is

y(x) = Ax^µ + Bx^{−µ}, A and B constants.

The boundary condition y′(1) = 0 implies that Aµ − Bµ = 0. The other
boundary condition y′(2) = 0 implies Aµ2^{µ−1} − Bµ2^{−µ−1} = 0. Solving the
linear system for A and B we obtain A = B = 0. Therefore y(x) ≡ 0,
and so any λ < −2 cannot be an eigenvalue of the problem.

Case 3°. λ > −2. In this case we have λ + 2 = µ², with µ > 0. The
differential equation in this case has general solution

y(x) = A sin(µ ln x) + B cos(µ ln x), A and B constants.

From the boundary condition y′(1) = 0 we have A = 0, and therefore
y(x) = B cos(µ ln x). The other boundary condition y′(2) = 0 implies

sin(µ ln 2) = 0.

From the last equation it follows that µ ln 2 = nπ, and so, denoting µ by µn,
we have

µn = nπ/ln 2, n ∈ N.

Therefore, since λ + 2 = µ², the eigenvalues of this problem are

λn = µn² − 2 = n²π²/ln²2 − 2, n = 1, 2, . . ..

The corresponding eigenfunctions are functions of the form An cos(µn ln x),
and ignoring the coefficients An, the eigenfunctions of the given problem are

yn(x) = cos(µn ln x), µn = nπ/ln 2, n = 1, 2, . . ..

Example 3.1.3. Find the eigenvalues and the corresponding eigenfunctions


of the following boundary value problem.

y ′′ (x) − 2y ′ (x) + λy = 0, 0 < x < 1, y(0) = y ′ (1) = 0.

Solution. The given differential equation is a second order homogeneous
linear differential equation with constant coefficients. Its characteristic
equation is m² − 2m + λ = 0, whose solutions are

m = 1 ± √(1 − λ).

The general solution y(x) depends on whether λ = 1, λ < 1 or λ > 1.


Case 1°. λ = 1. The differential equation in this case has general solution

y = Ae^x + Bxe^x.

The two boundary conditions imply that

0 = y(0) = A,
0 = y′(1) = (A + 2B)e.

Thus A = B = 0 and so λ = 1 is not an eigenvalue.


Case 2°. λ < 1. In this case we have 1 − λ = µ² with µ > 0. The solutions
of the characteristic equation in this case are m = 1 ± µ, and so a general
solution of the differential equation is

y(x) = Ae^{(1+µ)x} + Be^{(1−µ)x}, A and B constants.

The boundary conditions imply

0 = y(0) = A + B,
0 = y′(1) = A(1 + µ)e^{1+µ} + B(1 − µ)e^{1−µ}.

Thus B = −A and so

A[(1 + µ)e^{1+µ} − (1 − µ)e^{1−µ}] = 0.

Now, since

(1 + µ)e^{1+µ} > (1 − µ)e^{1−µ}

for all µ > 0, it follows that A = 0, and hence the given boundary value
problem has only the trivial solution in this case also.
Case 3°. λ > 1. In this case we have 1 − λ = −µ², with µ > 0. The
differential equation in this case has general solution

y(x) = e^x [A cos(µx) + B sin(µx)].

The boundary condition y(0) = 0 implies A = 0, and since

y′(x) = Be^x sin(µx) + Bµe^x cos(µx),

the other boundary condition y′(1) = 0 implies

B(sin µ + µ cos µ) = 0.

To avoid again the trivial solution, we want B to be nonzero, and so we must
have

sin µ + µ cos µ = 0,

i.e.,

(3.1.12) tan µ = −µ.



[Figure 3.1.1: the graphs of tan µ and −µ; their intersections δ1, δ2, δ3, . . .
lie in the intervals (π/2, π), (3π/2, 2π), (5π/2, 3π), . . ..]

Therefore, the eigenvalues for this problem are λ = 1 + µ², where the
positive number µ satisfies the transcendental equation (3.1.12). Although
the solutions of (3.1.12) cannot be found explicitly, from the graphical
sketch of tan µ and −µ in Figure 3.1.1 we see that there are infinitely many
solutions µ1 < µ2 < . . .. Also, the following estimates are valid:

π/2 < µ1 < π, 3π/2 < µ2 < 2π, . . . , (2n − 1)π/2 < µn < nπ, . . .

If µ = µn is the solution of (3.1.12) such that (2n − 1)π/2 < µn < nπ, then

λn = 1 + µn², yn(x) = e^x sin(√(λn − 1) x)

are the eigenvalues and the associated eigenfunctions. From the estimates
(2n − 1)π/2 < µn < nπ it is clear that

λ1 < λ2 < . . . < λn < . . . and lim_{n→∞} λn = ∞.
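The transcendental equation (3.1.12) is easy to solve numerically using the bracketing intervals ((2n − 1)π/2, nπ) above. The sketch below is our illustration (the helper name mu_n and the plain bisection are our choices); to avoid the singularities of the tangent, it works with the equivalent equation sin µ + µ cos µ = 0:

```python
import math

def f(mu):
    # roots of f correspond to tan(mu) = -mu (equation (3.1.12)),
    # rewritten to avoid the singularities of the tangent
    return math.sin(mu) + mu * math.cos(mu)

def mu_n(n, tol=1e-12):
    """n-th positive solution of tan(mu) = -mu, bracketed in ((2n-1)pi/2, n*pi)."""
    a = (2*n - 1) * math.pi / 2 + 1e-9
    b = n * math.pi
    fa = f(a)
    while b - a > tol:                 # plain bisection
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

for n in (1, 2, 3):
    mu = mu_n(n)
    print(n, mu, 1 + mu*mu)            # mu_n and lambda_n = 1 + mu_n^2
```

For example, the first root comes out near µ1 ≈ 2.0288, which indeed lies between π/2 and π.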

Example 3.1.4. Find the eigenvalues and the corresponding eigenfunctions


of the following boundary value problem.

y ′′ (x) + λy = 0, 0 < x < 1,



subject to the boundary conditions

y ′ (0) = 0, y(1) − y ′ (1) = 0.

Solution. The general solution y(x) of the given differential equation depends
on the parameter λ:
For the case when λ = 0 the solution of the equation is y(x) = A + Bx.
When we impose the two boundary conditions we find B = 0 and A+B−B =
0. But this means that both A and B must be zero, and therefore λ = 0 is
not an eigenvalue of the problem.
If λ = −µ² < 0, with µ > 0, then the solution of the differential equation is

y(x) = A sinh(µx) + B cosh(µx).
Since
y ′ (x) = Aµ cosh (µ x) + Bµ sinh (µ x)
the boundary condition at x = 0 implies that A = 0, and so

y(x) = B cosh(µ x).

The other boundary condition at x = 1 yields

µ sinh(µ) = cosh(µ),

i.e.,

tanh(µ) = 1/µ, µ > 0.
As we see from Figure 3.1.2, there is a single solution of the above
equation, which we will denote by µ0. Thus, in this case there is a single
eigenvalue λ0 = −µ0² and a corresponding eigenfunction y0(x) = cosh(µ0 x).

[Figure 3.1.2: the graphs of tanh µ and 1/µ, which intersect at a single
point µ0.]

Finally, if λ = µ² > 0, with µ > 0, then the solution of the differential
equation is

y(x) = A sin(µx) + B cos(µx).

Since
y ′ (x) = Aµ cos (µ x) − Bµ sin (µ x)

the boundary condition at x = 0 implies A = 0, and so

y(x) = B cos (µ x).

The other boundary condition at x = 1 yields

−µB sin (µ) = B cos (µ),

i.e.,

(3.1.13) −tan(µ) = 1/µ, µ > 0.

As before, the positive solutions of this equation correspond to the
intersections of the curves y = tan µ and y = −1/µ.
As we see from Figure 3.1.3, there are infinitely many solutions, which we
will denote by µ1 < µ2 < . . .. So, the eigenvalues in this case are λn = µn²
and the associated eigenfunctions are given by

yn(x) = cos(µn x), n = 1, 2, . . .,

where the µn are the positive solutions of Equation (3.1.13).

[Figure 3.1.3: the graphs of tan µ and −1/µ; the intersections
µ1 < µ2 < µ3 < . . . lie between successive odd multiples of π/2.]
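Both transcendental equations of this example can be solved numerically. The sketch below is ours, not from the text; the helper name bisect and the bracketing intervals are our choices. It locates the single negative-eigenvalue root µ0 of tanh µ = 1/µ and the first positive root µ1 of −tan µ = 1/µ:

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Plain bisection for a sign change of f on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

# negative eigenvalue: tanh(mu) = 1/mu, i.e. mu*sinh(mu) - cosh(mu) = 0
mu0 = bisect(lambda m: m*math.sinh(m) - math.cosh(m), 0.5, 2.0)
# positive eigenvalues: -tan(mu) = 1/mu, i.e. cos(mu) + mu*sin(mu) = 0,
# the first root bracketed between pi/2 and pi (see Figure 3.1.3)
mu1 = bisect(lambda m: math.cos(m) + m*math.sin(m), math.pi/2 + 1e-9, math.pi)

print(mu0, -mu0**2)    # mu0 and lambda_0 = -mu0^2
print(mu1, mu1**2)     # mu1 and lambda_1 = mu1^2
```

The root µ0 comes out near 1.1997, so λ0 = −µ0² ≈ −1.44.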

Next, we examine an example of a boundary value problem which is not a


regular Sturm–Liouville problem.
Example 3.1.5. Find all eigenvalues and the corresponding eigenfunctions
of the following boundary value problem.

y′′ + λy = 0, −π < x < π,

subject to the periodic boundary conditions

y(−π) = y(π), y′(−π) = y′(π).

Solution. First we observe that this is not a regular Sturm–Liouville
problem, because the boundary conditions are periodic and not of the
canonical form (3.1.3). Yet this boundary value problem has infinitely many
eigenvalues and eigenfunctions.
The general solution y(x) of the given differential equation depends on the
parameter λ:
If λ = 0, then the solution of the equation is

y(x) = A + Bx.

The boundary condition y(−π) = y(π) implies A − Bπ = A + Bπ, and so


B = 0.
The other boundary condition is satisfied, and therefore λ = 0 is an eigen-
value of the problem, and y(x) = 1 is the corresponding eigenfunction.
If λ = −µ² < 0, then the solution of the differential equation is

y(x) = A sinh(µx) + B cosh(µx).

The boundary condition y(−π) = y(π) implies that

−A sinh µπ + B cosh µπ = A sinh µπ + B cosh µπ,

from which it follows that A = 0. Now, the other boundary condition


y ′ (−π) = y ′ (π) implies

−Bµ sinh µπ = Bµ sinh µπ.

From the last equation we obtain B = 0, and therefore any λ < 0 cannot be
an eigenvalue.
Now, let λ = µ² > 0. In this case the general solution of the differential
equation is given by

y = A sin µx + B cos µx.

From the boundary condition, y(−π) = y(π) it follows that A sin µπ = 0.


From the second boundary condition, y ′ (−π) = y ′ (π), we obtain

Aµ cos µπ + Bµ sin µπ = Aµ cos µπ − Bµ sin µπ.

From the last equation we obtain B sin µπ = 0, and since A and B cannot
both be zero (otherwise y is trivial), sin µπ = 0. In view of the fact that
µ ̸= 0, it follows that µ = n, n = ±1, ±2, . . .. Now since A and B are
arbitrary constants, to each eigenvalue λn = n² there correspond two linearly
independent eigenfunctions,

yn(x) = sin nx and yn(x) = cos nx, n = 1, 2, 3, . . ..

The negative integers n give the same eigenvalues and the same associated
eigenfunctions.
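The fact that each eigenvalue λn = n² carries two independent eigenfunctions can be checked numerically. The sketch below is ours (the helper name check_periodic is illustrative); it verifies, by finite differences, that both sin nx and cos nx satisfy the equation and the periodic boundary conditions:

```python
import math

def check_periodic(n, h=1e-5):
    """For lambda = n^2, both sin(nx) and cos(nx) satisfy y'' + lambda*y = 0
    and the periodic conditions y(-pi) = y(pi), y'(-pi) = y'(pi)."""
    lam = n * n
    for y in (lambda x: math.sin(n*x), lambda x: math.cos(n*x)):
        # periodicity of y and (via central differences) of y'
        assert abs(y(-math.pi) - y(math.pi)) < 1e-12
        d = lambda x: (y(x + h) - y(x - h)) / (2*h)
        assert abs(d(-math.pi) - d(math.pi)) < 1e-9
        # ODE residual at sample points
        for x in (-2.0, 0.3, 1.7):
            ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2
            assert abs(ypp + lam * y(x)) < 1e-3 * max(1, lam)
    return lam

print([check_periodic(n) for n in (1, 2, 3)])
```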

Exercises for Section 3.1.

1. Show that the differential equation (3.1.1) is equivalent to the Sturm-


Liouville form.

2. Find the eigenvalues and eigenfunctions for each of the following.

(a) y ′′ + λy = 0, y ′ (0) = 0, y(1) = 0.

(b) y ′′ + λy = 0, y(0) + y ′ (0) = 0, y(π) + y ′ (π) = 0.

(c) y ′′ + λy = 0, y ′ (0) = 0, y(π) − y ′ (π) = 0.

(d) y^(iv) + λy = 0, y(0) = y′′(0) = 0, y(π) = y′′(π) = 0.

3. Consider the following boundary value problem.

y ′′ + λy = 0, y(0) + 2y ′ (0) = 0, 3y(2) + 2y ′ (2) = 0.

Show that the eigenvalues of this problem are all negative, λ = −µ²,
where µ satisfies the transcendental equation

tanh(2µ) = 4µ/(3 − 4µ²).

4. Find an equation that the eigenvalues for each of the following bound-
ary value problems satisfy.

(a) y ′′ + λy = 0, y(0) = 0, y(π) + y ′ (π) = 0.



(b) y ′′ + λy = 0, y(0) + y ′ (0) = 0, y ′ (π) = 0.

(c) y ′′ + λy = 0, y(0) + y ′ (0) = 0, y(π) − y ′ (π) = 0.

5. Find the eigenvalues and corresponding eigenfunctions for each of the


following Sturm–Liouville problems.
(a) (x³y′)′ + λx y = 0, y(1) = y(e^π) = 0, 1 ≤ x ≤ e^π.

(b) ((1/x)y′)′ + λx y = 0, y(1) = y(e) = 0, 1 ≤ x ≤ e.

(c) (xy′)′ + (λ/x) y = 0, y(1) = y′(e) = 0, 1 ≤ x ≤ e.

6. Show that the eigenvalues λn and the associated eigenfunctions of the
boundary value problem

((1 + x)²y′(x))′ + λy = 0, 0 < x < 1, y(0) = y(1) = 0

are given by

λn = (nπ/ln 2)² + 1/4, yn(x) = sin(nπ ln(1 + x)/ln 2)/√(1 + x), n = 1, 2, . . .

7. Find the eigenvalues of the following boundary value problem.

x²y′′ + 2λxy′ + λy = 0, 1 < x < 2, y(1) = y(2) = 0.

8. Find all eigenvalues and the corresponding eigenfunctions of the fol-


lowing periodic boundary value problem.

y ′′ + (λ − 2)y = 0, y(−π) = y(π), y ′ (−π) = y ′ (π).

3.2 Eigenfunction Expansions.


In this section we will investigate additional properties of regular
Sturm–Liouville problems. We begin with the following result, already
mentioned in the previous section.
Theorem 3.2.1. Every eigenvalue of the regular Sturm–Liouville problem
(3.1.2) and (3.1.3) is simple, i.e., if λ is an eigenvalue and y1 and y2 are

two corresponding eigenfunctions to λ, then y1 and y2 are linearly dependent


(one function is a scalar multiple of the other function).
Proof. Since y1 and y2 are both solutions of (3.1.2), we have

(p(x)y1′(x))′ + q(x)y1(x) + λr(x)y1(x) = 0,
(p(x)y2′(x))′ + q(x)y2(x) + λr(x)y2(x) = 0.

Multiplying the first equation by y2(x) and the second by y1(x) and
subtracting, we get

(3.2.1) (p(x)y1′(x))′ y2(x) − (p(x)y2′(x))′ y1(x) = 0.

However, since

[p(x)y1′(x)y2(x) − p(x)y2′(x)y1(x)]′ = (p(x)y1′(x))′ y2(x) − (p(x)y2′(x))′ y1(x),

from (3.2.1) it follows that

[p(x)y1′(x)y2(x) − p(x)y2′(x)y1(x)]′ = 0, a ≤ x ≤ b,

and hence

(3.2.2) p(x)[y1′(x)y2(x) − y2′(x)y1(x)] = C = constant, a ≤ x ≤ b.

To find C, we use the fact that y1 and y2 satisfy the boundary condition at
the point x = a:

α1 y1(a) + β1 y1′(a) = 0,
α1 y2(a) + β1 y2′(a) = 0.

Since at least one of the coefficients α1 and β1 is not zero, from the above
two equations it follows that

y1(a)y2′(a) − y2(a)y1′(a) = 0.

Thus, from (3.2.2) we have C = 0 and hence

p(x)[y1′(x)y2(x) − y2′(x)y1(x)] = 0, a ≤ x ≤ b.

Since p(x) > 0 for x ∈ [a, b], we must have

y1′(x)y2(x) − y2′(x)y1(x) = 0, a ≤ x ≤ b,

from which we conclude that y2(x) = Ay1(x) for some constant A. ■

The next result is about the “orthogonality” property of the eigenfunctions.



Theorem 3.2.2. If λ1 and λ2 are two distinct eigenvalues of the regular
Sturm–Liouville problem (3.1.2) and (3.1.3), with corresponding
eigenfunctions y1 and y2, then

∫_a^b y1(x)y2(x)r(x) dx = 0.

Proof. Since y1 and y2 are both solutions of (3.1.2), we have

(p(x)y1′(x))′ + q(x)y1(x) + λ1 r(x)y1(x) = 0,
(p(x)y2′(x))′ + q(x)y2(x) + λ2 r(x)y2(x) = 0.

Multiplying the first equation by y2(x) and the second by y1(x) and
subtracting, we get

(3.2.3) (p(x)y1′(x))′ y2(x) − (p(x)y2′(x))′ y1(x) + (λ1 − λ2)r(x)y1(x)y2(x) = 0.

Using the identity

[p(x)y1′(x)y2(x) − p(x)y2′(x)y1(x)]′ = (p(x)y1′(x))′ y2(x) − (p(x)y2′(x))′ y1(x)

of the previous theorem, Equation (3.2.3) becomes

[p(x)y1′(x)y2(x) − p(x)y2′(x)y1(x)]′ + (λ1 − λ2)r(x)y1(x)y2(x) = 0.

Integration of the last equation implies

(3.2.4) [p(x)(y1′ y2 − y2′ y1)]_{x=a}^{x=b} + (λ1 − λ2) ∫_a^b r(x)y1(x)y2(x) dx = 0.

Now, following the argument in Theorem 3.2.1, from the boundary conditions
for the functions y1(x) and y2(x) at x = a and x = b we get

y1(a)y2′(a) − y2(a)y1′(a) = 0,
y1(b)y2′(b) − y2(b)y1′(b) = 0.

Inserting the last two equations in (3.2.4) we have

(λ1 − λ2) ∫_a^b r(x)y1(x)y2(x) dx = 0,

and since λ1 ̸= λ2, it follows that

∫_a^b r(x)y1(x)y2(x) dx = 0. ■

Example 3.2.1. Show that the eigenfunctions of the Sturm–Liouville prob-


lem in Example 3.1.1 are orthogonal.
Solution. In Example 3.1.1 we found that the eigenfunctions of the boundary
value problem

y′′(x) + λy = 0, 0 < x < l,

subject to the boundary conditions y(0) = 0 and y′(l) = 0, are given by

yn(x) = sin(µn x), µn = (2n − 1)π/(2l), n = 1, 2, . . ..

The weight function in this example is r(x) ≡ 1. Observe that, if m and n
are two distinct natural numbers, then using the trigonometric identity

sin x sin y = (1/2)[cos(x − y) − cos(x + y)]

we have

∫_0^l ym(x)yn(x)r(x) dx = ∫_0^l sin((2m − 1)πx/2l) sin((2n − 1)πx/2l) dx

= (1/2) ∫_0^l [cos((m − n)πx/l) − cos((m + n − 1)πx/l)] dx

= [l/(2(m − n)π)] [sin((m − n)πx/l)]_0^l − [l/(2(m + n − 1)π)] [sin((m + n − 1)πx/l)]_0^l

= 0,

since sin((m − n)π) = sin((m + n − 1)π) = 0.
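The same orthogonality can be observed numerically. Below is a small midpoint-rule check (our sketch; the function name ip, the choice l = 1, and the grid size are all illustrative) of the integrals just computed:

```python
import math

def ip(m, n, l=1.0, N=20000):
    """Midpoint rule for the integral over [0, l] of
    sin(mu_m x) sin(mu_n x), mu_k = (2k-1) pi / (2l)  (weight r(x) = 1)."""
    mu = lambda k: (2*k - 1) * math.pi / (2*l)
    h = l / N
    s = 0.0
    for j in range(N):
        x = (j + 0.5) * h
        s += math.sin(mu(m) * x) * math.sin(mu(n) * x)
    return s * h

print(ip(1, 2))    # distinct indices: essentially zero
print(ip(2, 2))    # equal indices: the squared norm, l/2
```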

Now, with the help of this theorem we can expand a given function f in
a series of eigenfunctions of a regular Sturm–Liouville problem. We have had
examples of such expansions in Chapter 1. Namely, if f is a continuous and
piecewise smooth function on the interval [0, 1], and satisfies the boundary
conditions f(0) = f(1) = 0, then f can be expanded in the Fourier sine series

f(x) = Σ_{n=1}^∞ bn sin nπx,

where the coefficients bn are given by

bn = 2 ∫_0^1 f(x) sin nπx dx.

Since f (x) is continuous we have that the Fourier sine series converges to
f (x) for every x ∈ [0, 1]. Notice that the functions sin nπx, n = 1, 2 . . . are
the eigenfunctions of the boundary value problem

y ′′ + λy = 0, y(0) = y(1) = 0.

We have had similar examples for expanding functions in Fourier cosine series.
Let f(x) be a function defined on the interval [a, b]. For a sequence

{yn(x) : n ∈ N}

of eigenfunctions of a given Sturm–Liouville problem (3.1.2) and (3.1.3),
we wish to express f as an infinite linear combination of the eigenfunctions
yn(x). Let the function f be such that

∫_a^b f(x)yn(x)r(x) dx < ∞ for each n ∈ N,

and let

(3.2.5) f(x) = Σ_{k=1}^∞ ck yk(x), x ∈ (a, b).

But the question is how to compute each of the coefficients ck in the series
(3.2.5). To answer this question we work very similarly to the case of
Fourier series. Let n be any natural number. If we multiply Equation (3.2.5)
by r(x)yn(x), and if we assume that the series can be integrated term by
term, we obtain

(3.2.6) ∫_a^b f(x)yn(x)r(x) dx = Σ_{k=1}^∞ ck ∫_a^b yk(x)yn(x)r(x) dx.

From the orthogonality property of the eigenfunctions {yk(x) : k = 1, 2, . . .}
given in Theorem 3.2.2, it follows that

∫_a^b yk(x)yn(x)r(x) dx = ∫_a^b (yn(x))² r(x) dx if k = n, and 0 if k ̸= n,

and therefore, from (3.2.6) we have

(3.2.7) cn = [∫_a^b f(x)yn(x)r(x) dx] / [∫_a^b (yn(x))² r(x) dx].

Remark. The series in (3.2.5) is called the generalized Fourier series (also
called the eigenfunction expansions) of the function f with respect to the
eigenfunctions yk (x) and ck , given by (3.2.7), are called generalized Fourier
coefficients of f .
The study of pointwise and uniform convergence of this kind of generalized
Fourier series is a challenging problem. We present here the following theorem
without proof, dealing with the pointwise and uniform convergence of such
series.
Theorem 3.2.3. Let λn and yn (x), n = 1, 2, . . ., be the eigenvalues and
associated eigenfunctions, respectively, of the regular Sturm–Liouville problem
(3.1.2) and (3.1.3). Then
(i) If both f(x) and f′(x) are piecewise continuous on [a, b], then f can
be expanded in a convergent generalized Fourier series (3.2.5), whose
generalized Fourier coefficients cn are given by (3.2.7); moreover,

Σ_{n=1}^∞ cn yn(x) = [f(x−) + f(x+)]/2

at any point x in the open interval (a, b).

(ii) If f is continuous and f ′ (x) is piecewise continuous on the inter-


val [a, b], and if f satisfies both boundary conditions (3.1.3) of the
Sturm–Liouville problem, then the series converges uniformly on [a, b].

(iii) If both f (x) and f ′ (x) are piecewise continuous on [a, b], and f is
continuous on a subinterval [α, β] ⊂ [a, b], then the generalized Fourier
series of f converges uniformly to f on the subinterval [α, β].

Let us illustrate this theorem with a few examples.


Example 3.2.2. Expand the function f(x) = x, 0 ≤ x ≤ π, in terms of the
eigenfunctions of the regular Sturm–Liouville problem

y′′ + λy = 0, 0 < x < π, y(0) = y(π) = 0,

and discuss the convergence of the associated generalized Fourier series.
Solution. The eigenfunctions of this boundary value problem (obtained
exactly as in Example 3.1.1, with the condition y′(l) = 0 replaced by
y(l) = 0 and l = π) are

yn(x) = sin nx, n = 1, 2, . . ..
To compute the generalized Fourier coefficients cn in (3.2.7) of the given
function, first we compute

∫_0^π (yn(x))² r(x) dx = ∫_0^π sin² nx dx = [x/2 − sin 2nx/(4n)]_{x=0}^{x=π} = π/2.

Therefore,

cn = [∫_0^π f(x)yn(x)r(x) dx] / [∫_0^π (yn(x))² r(x) dx] = (2/π) ∫_0^π x sin nx dx

= (2/π) [−x cos nx/n + sin nx/n²]_{x=0}^{x=π} = (2/π)(−π cos nπ/n) = (2/n)(−1)^{n−1}.

Now, since f(x) = x is continuous on [0, π], by the above theorem we have

x = 2 Σ_{n=1}^∞ ((−1)^{n−1}/n) sin nx, for every x ∈ (0, π).
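The slow pointwise convergence of this expansion is easy to observe numerically. The sketch below is ours (the helper name partial_sum is illustrative); it evaluates truncations of the series at an interior point, where the theorem guarantees convergence to x:

```python
import math

def partial_sum(x, N):
    """N-term partial sum of the expansion x = 2 * sum (-1)^(n-1) sin(n x) / n."""
    return 2.0 * sum((-1)**(n - 1) * math.sin(n * x) / n
                     for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, partial_sum(1.0, N))    # approaches 1.0 as N grows
```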

Example 3.2.3. Expand the constant function f(x) = 1, 1 ≤ x ≤ e, in
terms of the eigenfunctions of the regular Sturm–Liouville problem

(xy′(x))′ + (λ/x) y = 0, 1 < x < e, y(1) = y′(e) = 0,

and discuss the convergence of the associated generalized Fourier series.

Solution. The eigenvalues and corresponding eigenfunctions, respectively, of
the given eigenvalue problem are given by

λn = (2n − 1)²π²/4, yn(x) = sin((2n − 1)π ln x / 2), n = 1, 2, . . .

(See Exercise 5 (c) of Section 3.1.)


To compute the generalized Fourier coefficients cn in (3.2.7) of the given
function, first we compute

∫e ∫e ( )
(2n − 1)π 1 (2n − 1)π
yn2 (x) r(x) dx = sin2 ln x dx (t = ln x)
2 x 2
1 1


(2n−1)π/2 [ ]t=(2n−1)π/2
2 1 1 1
= 2
sin t dt = t − sin (2t) = .
(2n − 1)π (2n − 1)π 2 t=0 2
0

Therefore,

cn = [∫_1^e f(x)yn(x)r(x) dx] / [∫_1^e yn²(x) r(x) dx]
= 2 ∫_1^e sin((2n − 1)π ln x / 2) (1/x) dx

= (4/((2n − 1)π)) ∫_0^{(2n−1)π/2} sin t dt
= (4/((2n − 1)π)) [−cos t]_{t=0}^{t=(2n−1)π/2} = 4/((2n − 1)π).

Since the given function f = 1 does not satisfy the boundary condition at
x = 1, we do not have uniform convergence of the Fourier series on the
interval [1, e]. However, we have pointwise convergence to 1 for all x in the
interval (1, e):

1 = Σ_{n=1}^∞ (4/((2n − 1)π)) sin((2n − 1)π ln x / 2), 1 < x < e.


Now, if, for example, we take x = √e ∈ (1, e) in the above series, we obtain

1 = Σ_{n=1}^∞ (4/((2n − 1)π)) sin((2n − 1)π/4),

and since

sin((2n − 1)π/4) = (−1)^{k+1} (√2/2) both if n = 2k and if n = 2k − 1, k = 1, 2, . . .

(the signs follow the pattern +, +, −, −, +, +, . . .), it follows that

1 = (4/π)(√2/2)[1 + 1/3 − 1/5 − 1/7 + 1/9 + 1/11 − · · ·].

Therefore, we have the following result:

1 + 1/3 − 1/5 − 1/7 + 1/9 + 1/11 − · · · = (√2/4)π.
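This series identity can be verified numerically. The sketch below is ours (the helper name S is illustrative); it sums the series with the sign pattern +, +, −, −, . . . and compares it with √2 π/4 ≈ 1.1107:

```python
import math

def S(N):
    """Partial sum of 1 + 1/3 - 1/5 - 1/7 + 1/9 + 1/11 - ... (signs ++--)."""
    total = 0.0
    for n in range(1, N + 1):
        sign = -1.0 if ((n - 1) // 2) % 2 == 1 else 1.0
        total += sign / (2 * n - 1)
    return total

print(S(200000), math.sqrt(2) * math.pi / 4)
```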

Exercises for Section 3.2.

1. The eigenvalues and eigenfunctions of the problem

(xy′(x))′ + (λ/x) y = 0, 1 < x < 2, y(1) = y(2) = 0

are given by

λn = (nπ/ln 2)², yn(x) = sin(√λn ln x).

Find the expansion of the function f(x) = 1 in terms of these
eigenfunctions. What values does the series converge to at the points x = 1
and x = 2?

2. Using the eigenfunctions of the boundary value problem

y ′′ + λy = 0, y(0) = y ′ (1) = 0,

find the eigenfunction expansion of each of the following functions.

(a) f (x) = 1, 0 ≤ x ≤ 1.

(b) f (x) = x, 0 ≤ x ≤ 1.
(c) f(x) = 1 for 0 ≤ x < 1/2, and f(x) = 0 for 1/2 ≤ x ≤ 1.

(d) f(x) = 2x for 0 ≤ x < 1/2, and f(x) = 1 for 1/2 ≤ x ≤ 1.

3. The Sturm–Liouville problem

y ′′ + λy = 0, y(0) = y(l) = 0

has eigenfunctions

yn(x) = sin(nπx/l), n = 1, 2, . . ..

Find the eigenfunction expansion of the function f (x) = x using these


eigenfunctions.

4. The Sturm–Liouville problem

y ′′ + λy = 0, y(0) = y ′ (l) = 0

has eigenfunctions

yn(x) = sin((2n − 1)πx/(2l)), n = 1, 2, . . ..

Find the eigenfunction expansion of the function f (x) = x using these


eigenfunctions.

5. The eigenfunctions of the Sturm–Liouville problem

y ′′ + λy = 0, 0 < x < 1, y ′ (0) = 0, y(1) + y ′ (1) = 0

are

yn(x) = cos(√λn x),

where the λn are the positive solutions of cos √λ − √λ sin √λ = 0.
Find the eigenfunction expansion of the following functions in terms
of these eigenfunctions.

(a) f (x) = 1.

(b) f(x) = x^c, c ∈ R.

3.3 Singular Sturm–Liouville Problems.


In this section we study singular Sturm–Liouville problems.
3.3.1 Definition of Singular Sturm–Liouville Problems.
In the previous sections we have discussed regular Sturm–Liouville prob-
lems and we have mentioned an example of a boundary value problem which
is not regular. Because of their importance in applications, it is worthwhile
to discuss Sturm–Liouville problems which are not regular. A problem which
is not a regular Sturm–Liouville problem is called a singular Sturm–Liouville
problem. A systematic study of such Sturm–Liouville problems is quite lengthy
and technical and so we restrict ourselves to some special cases such as Le-
gendre’s and Bessel’s equations. This class of boundary value problems has
found significant physical applications. First we define precisely the notion of
a singular Sturm–Liouville problem.

Definition 3.3.1. A Sturm–Liouville differential equation

(p(x)y′(x))′ + [q(x) + λr(x)]y = 0

is called a singular Sturm–Liouville problem on an interval [a, b] if at least


one of the following conditions is satisfied.
(i) The interval [a, b] is unbounded, i.e., either a = −∞ and/or b = ∞.

(ii) p(x) = 0 for some x ∈ [a, b] or r(x) = 0 for some x ∈ [a, b].

(iii) |p(x)| → ∞ and/or r(x) → ∞ as x → a and/or x → b.

For singular Sturm–Liouville problems, appropriate boundary conditions
need to be specified. In particular, if p(x) vanishes at x = a or x = b, we
should require that y(x) and y′(x) remain bounded as x → a or x → b,
respectively.

As for regular Sturm–Liouville problems, the orthogonality property of the
eigenfunctions with respect to the weight function r(x) holds:
if y_1(x) and y_2(x) are any two linearly independent eigenfunctions of a
singular Sturm–Liouville problem (with appropriate boundary conditions), then

    ∫_a^b y_1(x) y_2(x) r(x) dx = 0.

There are many differences between regular and singular Sturm–Liouville
problems. The most profound is that the spectrum (the set of all eigenvalues)
of a singular Sturm–Liouville problem need not be a sequence of numbers.
In other words, every number λ in some interval can be an eigenvalue of a
singular Sturm–Liouville problem.
Example 3.3.1. Solve the following singular Sturm–Liouville problem.

    y′′(x) + λy = 0,   0 < x < ∞,
    y′(0) = 0,   |y(x)| is bounded on (0, ∞).
Solution. For λ = 0, the general solution of the given differential equation
is y(x) = A + Bx. The boundary condition y′(0) = 0 implies that B = 0.
Therefore λ = 0 is an eigenvalue and y(x) ≡ 1 on [0, ∞) is the associated
eigenfunction of the problem.
For λ = −µ² < 0, the general solution of the given differential equation is
given by

    y(x) = Ae^{µx} + Be^{−µx}.

From the boundary condition y′(0) = 0 we obtain A − B = 0, and so
y(x) = A(e^{µx} + e^{−µx}). If A ≠ 0, then from e^{µx} + e^{−µx} → ∞ as x → ∞ it
follows that |y(x)| is not bounded on (0, ∞). Therefore, no λ < 0 is an
eigenvalue of the problem.
If λ > 0, then the general solution of the given differential equation is

    y(x) = A sin(√λ x) + B cos(√λ x).

The boundary condition y′(0) = 0 implies A = 0, and therefore y(x) =
B cos(√λ x). Since |cos(√λ x)| ≤ 1 for every x, it follows that any λ > 0
is an eigenvalue of the given boundary value problem, with corresponding
eigenfunction y(x) = cos(√λ x). Therefore, the set of all eigenvalues of the
given problem is the half-line [0, ∞).

Since the most frequently encountered singular Sturm–Liouville problems
in mathematical physics are those involving Legendre's and Bessel's differential
equations, the next two parts of this section are devoted to these differen-
tial equations. The functions which are the solutions of these two equations
are only two examples of so-called special functions of mathematical physics.
Besides the Legendre and Bessel functions, there are other important special
functions (which will not be discussed in this book), such as the Tchebychev,
Hermite, Laguerre and Jacobi functions.

3.3.2 Legendre's Differential Equation.

In this section we will discuss in some detail Legendre’s differential equation
and its solutions.
Definition 3.3.2. For a number λ with λ > −1/2, the differential equation

(3.3.1)    (1 − x²)y′′(x) − 2xy′(x) + λ(λ + 1)y = 0,   −1 ≤ x ≤ 1,

or in the Sturm–Liouville form

(3.3.2)    ( (1 − x²)y′(x) )′ + λ(λ + 1)y = 0,

is called Legendre's differential equation of order λ.
Since p(x) = 1 − x² and p(−1) = p(1) = 0, Legendre's differential equation
is the equation of a singular Sturm–Liouville problem. There is no simple
closed-form solution of (3.3.1). Therefore we look for a solution y(x) in the
form of a power series

(3.3.3)    y(x) = ∑_{k=0}^∞ a_k x^k.

If we substitute the derivatives

    y′(x) = ∑_{k=0}^∞ k a_k x^{k−1}   and   y′′(x) = ∑_{k=0}^∞ k(k − 1) a_k x^{k−2}

in (3.3.1), we obtain

    (1 − x²) ∑_{k=0}^∞ k(k − 1) a_k x^{k−2} − 2x ∑_{k=0}^∞ k a_k x^{k−1} + λ(λ + 1) ∑_{k=0}^∞ a_k x^k = 0,

or after a rearrangement

    ∑_{k=0}^∞ k(k − 1) a_k x^{k−2} + ∑_{k=0}^∞ [ λ(λ + 1) − k(k + 1) ] a_k x^k = 0.

If we re-index the first sum, after a rearrangement we obtain

    ∑_{k=0}^∞ { (k + 2)(k + 1) a_{k+2} + [ λ(λ + 1) − k(k + 1) ] a_k } x^k = 0.

Since the above equation holds for every x ∈ (−1, 1), each coefficient in the
above power series must be zero, leading to the following recursive relation
between the coefficients a_k:

(3.3.4)    a_{k+2} = [ k(k + 1) − λ(λ + 1) ] / [ (k + 2)(k + 1) ] · a_k,   k = 0, 1, 2, . . ..

If k is an even number, then from the recursive formula (3.3.4), by induction,
we have that

(3.3.5)    a_{2m} = (−1)^m [ ∏_{k=1}^m (λ − 2k + 2) ][ ∏_{k=1}^m (λ + 2k − 1) ] / (2m)! · a_0.

If k is an odd number, then from the recursive formula (3.3.4), again by
induction, we have that

(3.3.6)    a_{2m+1} = (−1)^m [ ∏_{k=1}^m (λ − 2k + 1) ][ ∏_{k=1}^m (λ + 2k) ] / (2m + 1)! · a_1.

Inserting (3.3.5) and (3.3.6) into (3.3.3), we obtain that the general solution
of Legendre's equation is

    y(x) = a_0 y_0(x) + a_1 y_1(x),

where the two particular solutions y_0(x) and y_1(x) are given by

(3.3.7)    y_0(x) = ∑_{m=0}^∞ (−1)^m [ ∏_{k=1}^m (λ − 2k + 2) ][ ∏_{k=1}^m (λ + 2k − 1) ] / (2m)! · x^{2m},

and

(3.3.8)    y_1(x) = ∑_{m=0}^∞ (−1)^m [ ∏_{k=1}^m (λ − 2k + 1) ][ ∏_{k=1}^m (λ + 2k) ] / (2m + 1)! · x^{2m+1}.

Remark. The symbol ∏ (read "product") in the above formulas has the
following meaning: for a sequence { c_k : k = 1, 2, . . . } of numbers c_k we define

    ∏_{k=1}^m c_k = c_1 · c_2 · . . . · c_m.

For example,

    ∏_{k=1}^m (λ − 2k + 2) = λ(λ − 2) · . . . · (λ − 2m + 2).

Remark. It is important to observe that if n is a nonnegative even integer
and λ = n, then the power series for y0 (x) in (3.3.7) terminates with the
term involving xn , and so y0 (x) is a polynomial of degree n. Similarly, if
n is an odd integer and λ = n, then the power series for y1 (x) in (3.3.8)
terminates with the term involving xn .
The solutions of the differential equation (3.3.2) when λ = n is a natural
number are called Legendre functions.
Therefore, if n is a nonnegative integer, the polynomial solution Pn (x) of
the equation

(3.3.9)    (1 − x²)y′′(x) − 2xy′(x) + n(n + 1)y = 0,   −1 ≤ x ≤ 1,
such that P_n(1) = 1 is called Legendre's polynomial of degree n, or Legendre's
function of the first kind, and is given by

(3.3.10)    P_n(x) = (1/2ⁿ) ∑_{k=0}^m (−1)^k (2n − 2k)! / [ k!(n − k)!(n − 2k)! ] · x^{n−2k},

where m = n/2 if n is even, and m = (n − 1)/2 if n is odd.

The other solution of (3.3.9) which is linearly independent of P_n(x) is
called Legendre's function of the second kind.
The first six Legendre polynomials are listed below.

    P_0(x) = 1;   P_1(x) = x;   P_2(x) = (1/2)(3x² − 1);   P_3(x) = (1/2)(5x³ − 3x);
    P_4(x) = (1/8)(35x⁴ − 30x² + 3);   P_5(x) = (1/8)(63x⁵ − 70x³ + 15x).
The graphs of the first six polynomials are shown in Figure 3.3.1.

Figure 3.3.1. [Graphs of the polynomials P_0(x) through P_5(x) on the interval [−1, 1].]

Now we state and prove several important properties of Legendre's
polynomials.
Theorem 3.3.1. Let n be a nonnegative even integer. Then any polyno-
mial solution y(x) which has only even powers of x of Legendre’s differential
equation (3.3.1) is a constant multiple of Pn (x). Similarly, if n is a nonnega-
tive odd integer, then any polynomial solution y(x) of Legendre’s differential
equation (3.3.1) which has only odd powers of x is a constant multiple of
Pn (x).
Proof. Suppose that n is a nonnegative even integer. Let y(x) be a polyno-
mial solution of (3.3.1) with only even powers of x. Then for some constants
c_0 and c_1 we have

    y(x) = c_0 y_0(x) + c_1 y_1(x),

where y_0(x) = P_n(x) is a polynomial of degree n with even powers of x and
y1 (x) is a power series solution of (3.3.1) with odd powers of x. Now, since
y(x) is a polynomial we must have c1 = 0, and so y(x) = c0 Pn (x).
The case when n is an odd integer is treated similarly. ■

Theorem 3.3.2. Rodrigues' Formula. Legendre's polynomials P_n(x) for
n = 0, 1, 2, . . . are given by

(3.3.11)    P_n(x) = 1/(2ⁿ n!) · dⁿ/dxⁿ (x² − 1)ⁿ.

Proof. If y(x) = (x² − 1)ⁿ, then y′(x) = 2nx(x² − 1)^{n−1} and therefore

    (x² − 1)y′(x) = 2nxy(x).

If we differentiate the last equation (n + 1) times, using the Leibnitz rule for
differentiation, we obtain

    (x² − 1)y^{(n+2)}(x) + 2(n + 1)x y^{(n+1)}(x) + n(n + 1) y^{(n)}(x)
        = 2nx y^{(n+1)}(x) + 2n(n + 1) y^{(n)}(x).
If we introduce a new function w(x) by w(x) = y (n) (x), then from the last
equation it follows that
(x2 − 1)w′′ (x) + 2(n + 1)xw′ (x) + (n + 1)nw(x) = 2nxw′ (x) + 2n(n + 1)w(x),
or
(1 − x2 )w′′ (x) − 2xw′ (x) + n(n + 1)w(x) = 0.
So w(x) is a solution of Legendre's equation, and since w(x) is a polynomial,
by Theorem 3.3.1 it follows that

    P_n(x) = c_n w(x) = c_n dⁿ/dxⁿ (x² − 1)ⁿ

for some constant c_n. To find the constant c_n, again we apply the Leibnitz
rule:

    dⁿ/dxⁿ (x² − 1)ⁿ = dⁿ/dxⁿ [ (x − 1)ⁿ(x + 1)ⁿ ] = n!(x + 1)ⁿ + R_n,

where R_n denotes the sum of the n remaining terms, each having the factor
(x − 1). Therefore,

    dⁿ/dxⁿ (x² − 1)ⁿ |_{x=1} = 2ⁿ n!,

and so from P_n(1) = c_n 2ⁿ n! and P_n(1) = 1 we obtain

    c_n = 1/(2ⁿ n!).  ■

Example 3.3.2. Compute P_2(x) using Rodrigues' formula (3.3.11).

Solution. From (3.3.11) with n = 2,

    P_2(x) = 1/(2² 2!) d²/dx² (x² − 1)² = (1/8) d²/dx² (x⁴ − 2x² + 1) = (1/2)(3x² − 1).
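Rodrigues' formula is easy to check symbolically. The book works in Mathematica; the sketch below uses Python with SymPy instead (an assumption of this note), comparing (3.3.11) against SymPy's built-in Legendre polynomials.

```python
import sympy as sp

x = sp.symbols('x')

def legendre_rodrigues(n):
    # Rodrigues' formula (3.3.11): P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 - 1)^n
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

P2 = legendre_rodrigues(2)   # gives (3x^2 - 1)/2, as in Example 3.3.2
```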

Legendre's polynomials can be defined through their generating function

(3.3.12)    G(t, x) ≡ 1 / √(1 − 2xt + t²).

Theorem 3.3.3. Legendre's polynomials P_n(x), n = 0, 1, 2, · · · are exactly
the coefficients in the expansion

(3.3.13)    1 / √(1 − 2xt + t²) = ∑_{n=0}^∞ P_n(x) tⁿ.

Proof. If we use the binomial expansion for the function G(t, x), then we have

    G(t, x) = (1 − 2xt + t²)^{−1/2} = ∑_{n=0}^∞ [ 1 · 3 · · · · · (2n − 1) / (2ⁿ n!) ] (2xt − t²)ⁿ

        = ∑_{n=0}^∞ [ 1 · 3 · · · · · (2n − 1) / (2ⁿ n!) ] ∑_{k=0}^n [ n! / (k!(n − k)!) ] (2xt)^{n−k} (−t²)^k

        = ∑_{n=0}^∞ [ ∑_{k=0}^m (−1)^k (2n − 2k)! / ( 2ⁿ k!(n − 2k)!(n − k)! ) x^{n−2k} ] tⁿ,

where m = n/2 if n is even and m = (n − 1)/2 if n is odd. But the coefficient
of tⁿ is exactly the Legendre polynomial P_n(x) given by (3.3.10). ■
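The expansion (3.3.13) can be checked numerically by truncating the series. A minimal sketch in Python with SciPy (assumed here in place of the book's Mathematica); the point (x, t) = (0.3, 0.4) is an arbitrary test value with |t| < 1 so the series converges quickly.

```python
import numpy as np
from scipy.special import eval_legendre

# Compare the closed form of G(t, x) in (3.3.12) with a truncation of (3.3.13).
x, t = 0.3, 0.4
lhs = 1.0 / np.sqrt(1.0 - 2.0*x*t + t*t)
rhs = sum(eval_legendre(n, x) * t**n for n in range(60))
```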

Theorem 3.3.4. Legendre polynomials satisfy the recursive relations

(3.3.14)    (n + 1)P_{n+1}(x) = (2n + 1)xP_n(x) − nP_{n−1}(x).

(3.3.15)    P′_{n+1}(x) + P′_{n−1}(x) = 2xP′_n(x) + P_n(x).

(3.3.16)    P′_{n+1}(x) − P′_{n−1}(x) = (2n + 1)P_n(x).

(3.3.17)    ∫_{−1}^{1} P_n²(x) dx = 2 / (2n + 1).

Proof. We leave the proof of identity (3.3.16) as an exercise (Exercise 4 of
this section).
First we prove (3.3.14). If we differentiate Equation (3.3.13) with respect
to t, we have that

    ∂G(t, x)/∂t = (x − t) / (1 − 2xt + t²)^{3/2} = ∑_{n=0}^∞ n P_n(x) t^{n−1},

which can be rewritten as

    (1 − 2xt + t²) ∑_{n=0}^∞ n P_n(x) t^{n−1} − (x − t) / √(1 − 2xt + t²) = 0,

or, using again the expansion (3.3.13) for the generating function G(t, x),

    (1 − 2xt + t²) ∑_{n=0}^∞ n P_n(x) t^{n−1} − (x − t) ∑_{n=0}^∞ P_n(x) tⁿ = 0.

After multiplication, the last equation becomes

    ∑_{n=0}^∞ n P_n(x) t^{n−1} − 2x ∑_{n=0}^∞ n P_n(x) tⁿ + ∑_{n=0}^∞ n P_n(x) t^{n+1}
        − x ∑_{n=0}^∞ P_n(x) tⁿ + ∑_{n=0}^∞ P_n(x) t^{n+1} = 0,

or, after re-indexing, we obtain

    ∑_{n=0}^∞ [ (n + 1)P_{n+1}(x) − (2n + 1)xP_n(x) + nP_{n−1}(x) ] tⁿ = 0.

Since the left hand side vanishes for all t, the coefficients of each power of t
must be zero, giving (3.3.14).
To prove (3.3.15) we differentiate Equation (3.3.13) with respect to x:

    ∂G(t, x)/∂x = t / (1 − 2xt + t²)^{3/2} = ∑_{n=0}^∞ P′_n(x) tⁿ.

The last equation can be written as

    (1 − 2xt + t²) ∑_{n=0}^∞ P′_n(x) tⁿ − t / √(1 − 2xt + t²) = 0,

or, in view of the expansion (3.3.13) for the generating function G(t, x),

    (1 − 2xt + t²) ∑_{n=0}^∞ P′_n(x) tⁿ − t ∑_{n=0}^∞ P_n(x) tⁿ = 0.

Setting to zero the coefficient of each power of t gives (3.3.15).

Formula (3.3.17) is easily verified for n = 0 and n = 1. Suppose n ≥ 2.
From (3.3.14) we have

    nP_n(x) − (2n − 1)xP_{n−1}(x) + (n − 1)P_{n−2}(x) = 0,
    (n + 1)P_{n+1}(x) − (2n + 1)xP_n(x) + nP_{n−1}(x) = 0,

for all x ∈ (−1, 1). Now, if we multiply the first equation by P_n(x) and the
second by P_{n−1}(x), integrate over the interval [−1, 1] and use the orthogo-
nality property of Legendre's polynomials, we obtain

    n ∫_{−1}^{1} P_n²(x) dx = (2n − 1) ∫_{−1}^{1} x P_{n−1}(x) P_n(x) dx,
    n ∫_{−1}^{1} P_{n−1}²(x) dx = (2n + 1) ∫_{−1}^{1} x P_{n−1}(x) P_n(x) dx.

From the last two equations it follows that

    (2n + 1) ∫_{−1}^{1} P_n²(x) dx = (2n − 1) ∫_{−1}^{1} P_{n−1}²(x) dx,

from which, recursively, we obtain

    (2n − 1) ∫_{−1}^{1} P_{n−1}²(x) dx = (2n − 3) ∫_{−1}^{1} P_{n−2}²(x) dx = . . . = 3 ∫_{−1}^{1} P_1²(x) dx = 2,

which is the required assertion (3.3.17). ■

Example 3.3.3. Compute P_3(x) using a recursive formula.

Solution. From (3.3.14) with n = 2, we have

    3P_3(x) = 5xP_2(x) − 2P_1(x).

But, from P_1(x) = x and P_2(x) = (1/2)(3x² − 1), computed in Example 3.3.2,
we have

    3P_3(x) = 5x · (1/2)(3x² − 1) − 2x = (1/2)(15x³ − 9x),

and so P_3(x) = (1/2)(5x³ − 3x).
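The three-term recursion (3.3.14) also gives a convenient way to generate the polynomials, as in Example 3.3.3. A sketch in Python with SymPy (an assumption of this note, since the book itself uses Mathematica):

```python
import sympy as sp

x = sp.symbols('x')
P = [sp.Integer(1), x]   # P_0 and P_1
for n in range(1, 5):
    # formula (3.3.14): (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
    P.append(sp.expand(((2*n + 1)*x*P[n] - n*P[n - 1]) / (n + 1)))
```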

Using the orthogonality property of Legendre's polynomials we can
represent a piecewise continuous function f(x) on the interval (−1, 1) by the
series

    f(x) = ∑_{n=0}^∞ a_n P_n(x),   −1 ≤ x ≤ 1.

Indeed, using the orthogonality property and (3.3.17), we have that

    a_n = (2n + 1)/2 ∫_{−1}^{1} f(x) P_n(x) dx,   n = 0, 1, . . ..

Remark. If the function f and its first n derivatives are continuous on the
interval (−1, 1), then using Rodrigues' formula (3.3.11) and integration by
parts, it follows that

(3.3.18)    a_n = (2n + 1)(−1)ⁿ / (2^{n+1} n!) ∫_{−1}^{1} (x² − 1)ⁿ f^{(n)}(x) dx.

Example 3.3.4. Represent f(x) = x² as a series of Legendre's polynomials.

Solution. Since f is a polynomial of degree 2, we need to calculate only a_0,
a_1 and a_2. From (3.3.18) we can easily calculate these coefficients and obtain

    x² = 1/3 + 0 · x + (1/3)(3x² − 1).
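The coefficients a_n can also be computed by numerical quadrature. A hedged sketch in Python with SciPy (not the book's Mathematica), applied to f(x) = x² as in Example 3.3.4; note a_2 = 2/3 because (1/3)(3x² − 1) = (2/3)P_2(x).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def fourier_legendre_coeff(f, n):
    # a_n = (2n+1)/2 * integral_{-1}^{1} f(x) P_n(x) dx
    val, _ = quad(lambda x: f(x) * eval_legendre(n, x), -1.0, 1.0)
    return 0.5 * (2*n + 1) * val

coeffs = [fourier_legendre_coeff(lambda x: x**2, n) for n in range(4)]
```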

Example 3.3.5. Expand the Heaviside function in a Fourier–Legendre series.

Solution. Recall that the Heaviside function is defined as

    H(x) = 1 if x > 0,   H(x) = 0 if x < 0.
We need to compute

    a_n = (2n + 1)/2 ∫_{−1}^{1} H(x) P_n(x) dx = (2n + 1)/2 ∫_0^1 P_n(x) dx,   n = 0, 1, 2, . . ..

For n = 0, we have

    a_0 = (1/2) ∫_0^1 P_0(x) dx = (1/2) ∫_0^1 1 dx = 1/2.

For n ≥ 1, from the identity (3.3.16) we have that

    a_n = (2n + 1)/2 ∫_0^1 P_n(x) dx = (1/2) ∫_0^1 [ P′_{n+1}(x) − P′_{n−1}(x) ] dx
        = (1/2)[ P_{n+1}(1) − P_{n−1}(1) ] − (1/2)[ P_{n+1}(0) − P_{n−1}(0) ]
        = (1/2)[ P_{n−1}(0) − P_{n+1}(0) ],

since P_{n+1}(1) = P_{n−1}(1) = 1.

Now, since

    P_n(0) = 0 if n = 2k + 1,   P_n(0) = (−1)^k (2k − 1)!!/(2k)!! if n = 2k,

(see Exercise 7 of this section), we have a_{2k} = 0 for k = 1, 2, . . ., and

    a_{2k−1} = (1/2)[ P_{2k−2}(0) − P_{2k}(0) ]
        = (1/2)[ (−1)^{k−1} (2k − 3)!!/(2k − 2)!! − (−1)^k (2k − 1)!!/(2k)!! ]
        = (−1)^{k−1} (2k − 3)!!(4k − 1) / [ 2 (2k)!! ].

Remark. The symbol !! is defined as follows:

    (2k − 1)!! = 1 · 3 · . . . · (2k − 1)   and   (2k)!! = 2 · 4 · . . . · (2k),

with the convention (−1)!! = 1.

Therefore, the Fourier–Legendre expansion of the Heaviside function is

    H(x) ∼ 1/2 + ∑_{k=1}^∞ (−1)^{k−1} (2k − 3)!!(4k − 1) / [ 2 (2k)!! ] P_{2k−1}(x).

The sum of the first 23 terms is shown in Figure 3.3.2. We note the slow
convergence of the series to the Heaviside function. Also, we see that the
Gibbs phenomenon is present due to the jump discontinuity at x = 0.

Figure 3.3.2. [Partial sum of the first 23 terms of the Fourier–Legendre series of H(x) on [−1, 1].]

We conclude this section with a description of the associated Legendre
polynomials, which are related to the Legendre polynomials and which have
important applications in solving some partial differential equations.

Consider the differential equation

(3.3.19)    d/dx[ (1 − x²) dy/dx ] + [ λ − m²/(1 − x²) ] y = 0,   −1 < x < 1,
where m = 0, 1, 2, . . . and λ is a real number. The nontrivial and bounded
functions on the interval (−1, 1) which are solutions of Equation (3.3.19) are
called associated Legendre functions of order m. Notice that if m = 0 and
λ = n(n + 1) for n ∈ N, then equation (3.3.19) is the Legendre equation of
order n and so the associated Legendre functions in this case are the Legendre
polynomials Pn (x). To find the associated Legendre functions for any other
m we use the following substitution:

(3.3.20)    y(x) = (1 − x²)^{m/2} v(x).

If we substitute this y(x) in Equation (3.3.19), we obtain the following equation:

(3.3.21)    (1 − x²)v′′(x) − 2(m + 1)xv′(x) + [ λ − m(m + 1) ]v = 0,   −1 < x < 1.

To find a particular solution v(x) of Equation (3.3.21) we consider the
following Legendre equation:

(3.3.22)    (1 − x²) d²w/dx² − 2x dw/dx + λw = 0.

If we differentiate Equation (3.3.22) m times (using the Leibnitz rule), we
obtain

    (1 − x²) d^{m+2}w/dx^{m+2} − 2(m + 1)x d^{m+1}w/dx^{m+1} + [ λ − m(m + 1) ] dᵐw/dxᵐ = 0,

which can be written in the form

(3.3.23)    (1 − x²) ( dᵐw/dxᵐ )′′ − 2(m + 1)x ( dᵐw/dxᵐ )′ + [ λ − m(m + 1) ] dᵐw/dxᵐ = 0.

Comparing (3.3.21) and (3.3.23), we have that v(x) is given by

(3.3.24)    v(x) = dᵐw/dxᵐ,

where w(x) is a particular solution of the Legendre equation (3.3.22). We
have already seen that nontrivial bounded solutions w(x) of the Legendre
equation (3.3.22) exist only if λ = n(n + 1) for some n = 0, 1, 2, . . ., and in
this case w(x) = P_n(x) is the Legendre polynomial of order n.
Therefore, from (3.3.20) and (3.3.24) it follows that

(3.3.25)    y(x) = (1 − x²)^{m/2} dᵐP_n(x)/dxᵐ

are the bounded solutions of (3.3.19). Since P_n(x) are polynomials of degree
n, from (3.3.25) using the Leibnitz rule it follows that y(x) is nontrivial
only if m ≤ n, and so the associated Legendre functions, usually denoted by
P_n^{(m)}(x), are given by

(3.3.26)    P_n^{(m)}(x) = (1 − x²)^{m/2} dᵐP_n(x)/dxᵐ.

For the associated Legendre functions we have the following orthogonality
property.

Theorem 3.3.5. For any m, n and k we have the following integral relation:

(3.3.27)    ∫_{−1}^{1} P_n^{(m)}(x) P_k^{(m)}(x) dx = 0 if k ≠ n,
            and = 2(n + m)! / [ (2n + 1)(n − m)! ] if k = n.

Proof. From the definition of the associated Legendre functions P_n^{(m)} we
have

    (1 − x²) d^{m+2}P_n(x)/dx^{m+2} − 2(m + 1)x d^{m+1}P_n(x)/dx^{m+1}
        + [ n(n + 1) − m(m + 1) ] dᵐP_n(x)/dxᵐ = 0.

If we multiply the last equation by (1 − x²)ᵐ, then we obtain

    (1 − x²)^{m+1} d^{m+2}P_n(x)/dx^{m+2} − 2(m + 1)x(1 − x²)ᵐ d^{m+1}P_n(x)/dx^{m+1}
        + [ n(n + 1) − m(m + 1) ] (1 − x²)ᵐ dᵐP_n(x)/dxᵐ = 0,

which can be written as

(3.3.28)    d/dx[ (1 − x²)^{m+1} d^{m+1}P_n(x)/dx^{m+1} ]
                = −[ n(n + 1) − m(m + 1) ] (1 − x²)ᵐ dᵐP_n(x)/dxᵐ.

Now let

    A_{n,k}^{(m)} = ∫_{−1}^{1} P_n^{(m)}(x) P_k^{(m)}(x) dx.

If we use relation (3.3.26) for the associated Legendre functions P_n^{(m)} and
P_k^{(m)}, then integration by parts gives

    A_{n,k}^{(m)} = ∫_{−1}^{1} (1 − x²)ᵐ (dᵐP_n(x)/dxᵐ)(dᵐP_k(x)/dxᵐ) dx
        = [ (d^{m−1}P_k(x)/dx^{m−1}) (1 − x²)ᵐ dᵐP_n(x)/dxᵐ ]_{x=−1}^{x=1}
            − ∫_{−1}^{1} (d^{m−1}P_k(x)/dx^{m−1}) d/dx[ (1 − x²)ᵐ dᵐP_n(x)/dxᵐ ] dx
        = −∫_{−1}^{1} (d^{m−1}P_k(x)/dx^{m−1}) d/dx[ (1 − x²)ᵐ dᵐP_n(x)/dxᵐ ] dx.

The last integral, in view of (3.3.28) with m replaced by m − 1, can be
written in the form

    A_{n,k}^{(m)} = [ n(n + 1) − m(m − 1) ] ∫_{−1}^{1} (1 − x²)^{m−1} (d^{m−1}P_n(x)/dx^{m−1})(d^{m−1}P_k(x)/dx^{m−1}) dx
        = (n + m)(n − m + 1) A_{n,k}^{(m−1)}.

By recursion, from the above formula we obtain

    A_{n,k}^{(m)} = (n + m)(n − m + 1)(n + m − 1)(n − m + 2) A_{n,k}^{(m−2)} = . . . = (n + m)!/(n − m)! · A_{n,k}^{(0)}.

Therefore, using the already known orthogonality property (3.3.17) for the
Legendre polynomials,

    A_{n,k}^{(0)} = ∫_{−1}^{1} P_n(x) P_k(x) dx = 0 if n ≠ k, and = 2/(2n + 1) if n = k,

we obtain that

    A_{n,n}^{(m)} = 2(n + m)! / [ (2n + 1)(n − m)! ].  ■
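The orthogonality relation (3.3.27) can be checked numerically. The sketch below uses SciPy's lpmv for the associated Legendre functions (a choice of this note, not the book's Mathematica; SciPy's convention carries the Condon-Shortley phase (−1)^m, which cancels in the product of two functions of the same order m).

```python
from math import factorial
from scipy.integrate import quad
from scipy.special import lpmv

def alp_inner(m, n, k):
    # left-hand side of (3.3.27); lpmv(m, n, x) is the associated Legendre
    # function of order m and degree n
    val, _ = quad(lambda x: lpmv(m, n, x) * lpmv(m, k, x), -1.0, 1.0)
    return val

m, n = 2, 3
off_diag = alp_inner(m, n, 5)    # k != n: should vanish
diag = alp_inner(m, n, n)        # k == n: the norm in (3.3.27)
expected = 2 * factorial(n + m) / ((2*n + 1) * factorial(n - m))
```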

3.3.3 Bessel's Differential Equation.

Bessel's differential equation is one of the most important equations in
mathematical physics. It has applications in electricity, hydrodynamics, heat
propagation, wave phenomena and many other disciplines.

Definition 3.3.3. Suppose that µ is a nonnegative real number. Then the
differential equation

(3.3.29)    x²y′′(x) + xy′(x) + (x² − µ²)y = 0,   x > 0,

or in the Sturm–Liouville form

(3.3.30)    ( xy′(x) )′ + ( x − µ²/x )y = 0,

is called Bessel's differential equation of order µ.

We will find the solution of Bessel's equation in the form of a Frobenius
series (see Appendix D)

    y(x) = x^r ∑_{k=0}^∞ a_k x^k = ∑_{k=0}^∞ a_k x^{k+r},

with a_0 ≠ 0 and a constant r to be determined. If we substitute y(x) and
its derivatives into the Bessel equation (3.3.29), we obtain

    ∑_{k=0}^∞ (k + r)(k + r − 1)a_k x^{k+r} + ∑_{k=0}^∞ (k + r)a_k x^{k+r}
        + ∑_{k=0}^∞ a_k x^{k+r+2} − µ² ∑_{k=0}^∞ a_k x^{k+r} = 0.

If we write out explicitly the terms for k = 0 and k = 1 in the first sum, and
re-index the other sums, then we have

    (r² − µ²)a_0 + [ (r + 1)² − µ² ]a_1 x + ∑_{k=2}^∞ { [ (k + r)² − µ² ]a_k + a_{k−2} } x^k = 0.

In order for this equation to be satisfied identically for every x > 0, the
coefficients of all powers of x must vanish. Therefore,

(3.3.31)    (r² − µ²)a_0 = 0,

(3.3.32)    [ (r + 1)² − µ² ]a_1 = 0,

(3.3.33)    [ (k + r)² − µ² ]a_k + a_{k−2} = 0,   k ≥ 2.

Since a_0 ≠ 0, from (3.3.31) we have r² − µ² = 0, i.e., r = ±µ.

Case 1°. r = µ. Since (r + 1)² − µ² ≠ 0 in this case, from (3.3.32) we have
a_1 = 0, and therefore from (3.3.33), iteratively, it follows that

    a_{2k+1} = 0,   k = 0, 1, 2, . . .

and

    a_{2k} = −a_{2k−2} / [ 2² k(k + µ) ] = a_{2k−4} / [ 2⁴ k(k − 1)(k + µ)(k − 1 + µ) ]
        = (−1)^k a_0 / [ 2^{2k} k!(k + µ)(k − 1 + µ) · · · (1 + µ) ]
        = (−1)^k Γ(1 + µ) a_0 / [ 2^{2k} k! Γ(k + 1 + µ) ].

(See Appendix G for the Gamma function Γ.)

Therefore, one solution of the Bessel equation, denoted by J_µ, is given by

    J_µ(x) = a_0 x^µ ∑_{k=0}^∞ (−1)^k Γ(1 + µ) / [ 2^{2k} k! Γ(k + 1 + µ) ] x^{2k}.

For normalization reasons we choose

    a_0 = 1 / [ 2^µ Γ(µ + 1) ],

and so we have that the solution J_µ(x) is given by

(3.3.34)    J_µ(x) = ∑_{k=0}^∞ (−1)^k / [ k! Γ(k + µ + 1) ] (x/2)^{2k+µ}.

The function J_µ(x), µ ≥ 0, is called the Bessel function of the first kind
of order µ.
If µ = n is a nonnegative integer, then using the properties of the Gamma
function Γ, from (3.3.34) it follows that

    J_n(x) = ∑_{k=0}^∞ (−1)^k / [ k!(k + n)! ] (x/2)^{2k+n}.

The plots of the Bessel functions J_0(x), J_1(x), J_2(x) and J_3(x) are given
in Figure 3.3.3.

Figure 3.3.3. [Graphs of J_0, J_1, J_2 and J_3 on the interval (0, 20).]

Case 2°. r = −µ. Consider the coefficient (k + r)² − µ² = k(k − 2µ) of
a_k in the iterative formula (3.3.33). So we have two cases to consider in this
situation: either k − 2µ ≠ 0 for every k ≥ 2, or k − 2µ = 0 for some k ≥ 2.

Case 2°a. k − 2µ ≠ 0 for every k ≥ 2. In this case, (k + r)² − µ² ≠ 0 for
every k = 2, 3, . . ., and so from a_1 = 0 and (3.3.33), similarly as in Case 1°,
it follows that

    a_{2k+1} = 0,   k = 0, 1, 2, . . .

and

    a_{2k} = (−1)^k Γ(1 − µ) a_0 / [ 2^{2k} k! Γ(k − µ + 1) ],   k = 1, 2, . . ..
Again, if we choose

    a_0 = 1 / [ 2^{−µ} Γ(1 − µ) ],

then

    J_{−µ}(x) = ∑_{k=0}^∞ (−1)^k / [ k! Γ(k − µ + 1) ] (x/2)^{2k−µ}

is a solution of the Bessel equation in this case. Since

    lim_{x→0+} J_µ(x) = 0,   lim_{x→0+} J_{−µ}(x) = ∞,

the Bessel functions J_µ and J_{−µ} are linearly independent, and so

    y(x) = c_1 J_µ(x) + c_2 J_{−µ}(x)
is the general solution to the Bessel equation (3.3.29) in this case.

Case 2°b. k − 2µ = 0 for some k ≥ 2. Working similarly as in Case 2°a, it can
be shown that the general solution to the Bessel equation (3.3.29) is a linear
combination of the linearly independent Bessel functions J_µ and J_{−µ} if µ is
not an integer. But if µ is an integer, then from

(3.3.35)    J_{−µ}(x) = (−1)^µ J_µ(x)

it follows that we have only one solution J_µ(x) of the Bessel equation, and
we do not yet have another, linearly independent solution. One of the several
methods to find a second, linearly independent solution of the Bessel equation
in the case when µ is an integer is the following.
If ν is not an integer, then we define the function

(3.3.36)    Y_ν(x) = [ J_ν(x) cos(νπ) − J_{−ν}(x) ] / sin(νπ).

This function, called the Bessel function of the second kind of order ν, is a
solution of the Bessel equation of order ν, and it is linearly independent of

the other solution J_ν(x). Next, for an integer number n, we define Y_n(x) by
the limiting process:

(3.3.37)    Y_n(x) = lim_{ν→n, ν not an integer} Y_ν(x).

It can be shown that Y_n(x) is a solution of the Bessel equation of order n,
linearly independent of J_n(x). Using l'Hôpital's rule in (3.3.36) with respect
to ν, it follows that

    Y_n(x) = (1/π) [ ∂J_ν(x)/∂ν − (−1)ⁿ ∂J_{−ν}(x)/∂ν ]_{ν=n}.

The plots of the Bessel functions Y_0(x), Y_1(x), Y_2(x) and Y_3(x) are given in
Figure 3.3.4.

Figure 3.3.4. [Graphs of Y_0, Y_1, Y_2 and Y_3 on the interval (0, 20).]

The following theorem summarizes the above discussion.

Theorem 3.3.6. Suppose that µ is a nonnegative real number. If µ is not
an integer, then J_µ(x), defined by

    J_µ(x) = ∑_{k=0}^∞ (−1)^k / [ k! Γ(k + µ + 1) ] (x/2)^{2k+µ},

is one solution of the Bessel differential equation (3.3.29) of order µ. Another
solution of the Bessel equation, independent of J_µ(x), is J_{−µ}(x), obtained
when µ in the above series is replaced by −µ.
If µ = n is an integer, then J_n is one solution of the Bessel differen-
tial equation (3.3.29) of order n. Another solution of the Bessel equation,
independent of J_n(x), is Y_n(x), which is defined by

    Y_n(x) = lim_{µ→n, µ not an integer} Y_µ(x),

where for a noninteger number µ, the function Y_µ(x) is defined by

    Y_µ(x) = [ J_µ(x) cos(µπ) − J_{−µ}(x) ] / sin(µπ).

Let us take a few examples.
Example 3.3.6. Find explicitly the Bessel functions J_{1/2}(x) and J_{−1/2}(x).

Solution. From equation (3.3.34) and the results in Appendix G for the
Gamma function Γ, it follows that

    J_{1/2}(x) = ∑_{j=0}^∞ (−1)^j / [ j! Γ(j + 3/2) ] (x/2)^{2j+1/2}
        = √(2/(πx)) ∑_{j=0}^∞ (−1)^j x^{2j+1} / [ 2^j j! (2j + 1)!! ]
        = √(2/(πx)) ∑_{j=0}^∞ (−1)^j x^{2j+1} / (2j + 1)! = √(2/(πx)) sin x.

Similarly,

    J_{−1/2}(x) = ∑_{j=0}^∞ (−1)^j / [ j! Γ(j + 1/2) ] (x/2)^{2j−1/2}
        = √(2/(πx)) ∑_{j=0}^∞ (−1)^j x^{2j} / [ 2^j j! (2j − 1)!! ]
        = √(2/(πx)) ∑_{j=0}^∞ (−1)^j x^{2j} / (2j)! = √(2/(πx)) cos x.

Example 3.3.7. Parametric Bessel Equation of Order µ. Express the gen-
eral solution of the parametric Bessel equation of order µ

(3.3.38)    x²y′′(x) + xy′(x) + (λ²x² − µ²)y = 0

in terms of the Bessel functions.

Solution. If we use the substitution t = λx in (3.3.38), then by the chain
rule for differentiation

    y′(x) = λ y′(t),   y′′(x) = λ² y′′(t),

it follows that (3.3.38) becomes

    t²y′′(t) + ty′(t) + (t² − µ²)y = 0.

The last equation is the Bessel equation of order µ, whose general solution is

    y(t) = c_1 J_µ(t) + c_2 Y_µ(t).

Therefore, the general solution of Equation (3.3.38) is

    y(x) = c_1 J_µ(λx) + c_2 Y_µ(λx).

Example 3.3.8. The Aging Spring. Consider a unit mass attached to a spring
with spring constant c. If the spring oscillates for a long period of time, then
clearly the spring weakens. The linear differential equation that describes the
displacement y = y(t) of the attached mass at moment t is given by

    y′′(t) + ce^{−at} y(t) = 0.

Find the general solution of the aging spring equation in terms of the Bessel
functions.

Solution. If we introduce a new variable τ by

    τ = (2/a) √c e^{−at/2},

then the differential equation of the aging spring is transformed into the dif-
ferential equation

    τ²y′′(τ) + τy′(τ) + τ²y = 0.

We see that the last equation is the Bessel equation of order µ = 0. The
general solution of this equation is

    y(τ) = A J_0(τ) + B Y_0(τ).

If we go back to the original time variable t, then the general solution of the
aging spring equation is given by

    y(t) = A J_0( (2/a)√c e^{−at/2} ) + B Y_0( (2/a)√c e^{−at/2} ).
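As a sanity check, one can integrate the aging-spring equation numerically and compare with the J_0 branch of the Bessel-function solution. The sketch below uses Python with SciPy (an assumption of this note, not the book's Mathematica); the values a = 0.5 and c = 4.0 are arbitrary test parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

a, c = 0.5, 4.0
tau = lambda t: (2.0/a) * np.sqrt(c) * np.exp(-a*t/2.0)
exact = lambda t: jv(0, tau(t))        # the A = 1, B = 0 branch of the solution

# initial data matching the exact branch; note y'(t) = (a/2) tau(t) J_1(tau(t))
t0 = tau(0.0)
sol = solve_ivp(lambda t, y: [y[1], -c*np.exp(-a*t)*y[0]],
                (0.0, 5.0), [jv(0, t0), np.sqrt(c)*jv(1, t0)],
                rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0.0, 5.0, 20)
max_err = float(np.max(np.abs(sol.sol(ts)[0] - exact(ts))))
```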

We state and prove some of the most elementary and useful recursive formulas
for the Bessel functions.

Theorem 3.3.7. Suppose that µ is any real number. Then for every x > 0
the following recursive formulas for the Bessel functions hold.

(3.3.39)    d/dx ( x^µ J_µ(x) ) = x^µ J_{µ−1}(x),

(3.3.40)    d/dx ( x^{−µ} J_µ(x) ) = −x^{−µ} J_{µ+1}(x),

(3.3.41)    xJ_{µ+1}(x) = µJ_µ(x) − xJ′_µ(x),

(3.3.42)    xJ_{µ−1}(x) = µJ_µ(x) + xJ′_µ(x),

(3.3.43)    2µJ_µ(x) = xJ_{µ−1}(x) + xJ_{µ+1}(x),

(3.3.44)    2J′_µ(x) = J_{µ−1}(x) − J_{µ+1}(x).

Proof. To prove (3.3.39) we use the power series (3.3.34) for J_µ(x):

    d/dx ( x^µ J_µ(x) ) = d/dx ∑_{k=0}^∞ (−1)^k / [ 2^{2k+µ} k! Γ(k + µ + 1) ] x^{2k+2µ}
        = ∑_{k=0}^∞ (−1)^k (2k + 2µ) / [ 2^{2k+µ} k! Γ(k + µ + 1) ] x^{2k+2µ−1}
        = x^µ ∑_{k=0}^∞ (−1)^k / [ 2^{2k+µ−1} k! Γ(k + µ) ] x^{2k+µ−1} = x^µ J_{µ−1}(x).

The proof of (3.3.40) is similar and is left as an exercise.
From (3.3.39) and (3.3.40), after differentiation we obtain

    µx^{µ−1} J_µ(x) + x^µ J′_µ(x) = x^µ J_{µ−1}(x),
    −µx^{−µ−1} J_µ(x) + x^{−µ} J′_µ(x) = −x^{−µ} J_{µ+1}(x).

If we multiply the first equation by x^{1−µ}, we obtain (3.3.42). To obtain
(3.3.41), multiply the second equation by x^{1+µ}. By adding and subtracting
(3.3.41) and (3.3.42) we obtain (3.3.43) and (3.3.44), respectively. ■

Example 3.3.9. Find J_{3/2} in terms of elementary functions.

Solution. From (3.3.43) with µ = 1/2 we have

    J_{3/2}(x) = (1/x) J_{1/2}(x) − J_{−1/2}(x).

In Example 3.3.6 we found

    J_{1/2}(x) = √(2/(πx)) sin x,   J_{−1/2}(x) = √(2/(πx)) cos x,

and therefore

    J_{3/2}(x) = √(2/(πx)) ( sin x / x − cos x ).
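Identities (3.3.43) and (3.3.44) are easy to check numerically; the order µ = 3/2 below is an arbitrary test value (Python with SciPy, an assumption of this note rather than the book's Mathematica):

```python
import numpy as np
from scipy.special import jv, jvp

mu = 1.5
x = np.linspace(0.5, 8.0, 40)
lhs43 = 2.0 * mu * jv(mu, x)
rhs43 = x * (jv(mu - 1, x) + jv(mu + 1, x))   # formula (3.3.43)
lhs44 = 2.0 * jvp(mu, x)                      # jvp gives the derivative of J_mu
rhs44 = jv(mu - 1, x) - jv(mu + 1, x)         # formula (3.3.44)
```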

Theorem 3.3.8. If x is a positive number and t is a nonzero complex num-
ber, then

(3.3.45)    e^{(x/2)(t − 1/t)} = ∑_{n=−∞}^∞ J_n(x) tⁿ.

Proof. From the Taylor expansion of the exponential function we have that

    e^{xt/2} = ∑_{j=0}^∞ (tʲ/j!) (x/2)ʲ,   e^{−x/(2t)} = ∑_{k=0}^∞ [ (−1)^k / (k! t^k) ] (x/2)^k.

Since both power series are absolutely convergent, they can be multiplied and
the terms in the resulting double series can be added in any order:

    e^{(x/2)(t − 1/t)} = e^{xt/2} e^{−x/(2t)}
        = ( ∑_{j=0}^∞ (tʲ/j!) (x/2)ʲ ) ( ∑_{k=0}^∞ [ (−1)^k / (k! t^k) ] (x/2)^k )
        = ∑_{j=0}^∞ ∑_{k=0}^∞ [ (−1)^k / (j! k!) ] (x/2)^{j+k} t^{j−k}.

If we introduce in the last double series a new index n by j − k = n, then
j = k + n and j + k = 2k + n, and therefore we have

    e^{(x/2)(t − 1/t)} = ∑_{n=−∞}^∞ { ∑_{k=0}^∞ (−1)^k / [ (n + k)! k! ] (x/2)^{2k+n} } tⁿ.

Since the Bessel function J_n(x) is given by

    J_n(x) = ∑_{k=0}^∞ (−1)^k / [ (n + k)! k! ] (x/2)^{2k+n},

we obtain (3.3.45). ■
A consequence of the previous theorem is the following formula, which gives
an integral representation of the Bessel functions.

Theorem 3.3.9. If n is any integer and x is any positive number, then

    J_n(x) = (1/π) ∫_0^π cos(x sin φ − nφ) dφ.

Proof. If we introduce a new variable t by t = e^{iφ}, then

    t − 1/t = e^{iφ} − e^{−iφ} = 2i sin φ,

and formula (3.3.45) implies

    cos(x sin φ) + i sin(x sin φ) = e^{ix sin φ} = e^{(x/2)(t − 1/t)}
        = ∑_{n=−∞}^∞ J_n(x) e^{inφ} = J_0(x) + ∑_{n=1}^∞ [ J_n(x) e^{inφ} + J_{−n}(x) e^{−inφ} ]
        = J_0(x) + ∑_{n=1}^∞ J_n(x) [ e^{inφ} + (−1)ⁿ e^{−inφ} ]
        = J_0(x) + 2 ∑_{k=1}^∞ J_{2k}(x) cos(2kφ) + 2i ∑_{k=1}^∞ J_{2k−1}(x) sin((2k − 1)φ).

Therefore

(3.3.46)    cos(x sin φ) = J_0(x) + 2 ∑_{k=1}^∞ J_{2k}(x) cos(2kφ),

and

(3.3.47)    sin(x sin φ) = 2 ∑_{k=1}^∞ J_{2k−1}(x) sin((2k − 1)φ).

Since the systems { 1, cos(2kφ) : k = 1, 2, . . . } and { sin((2k − 1)φ) : k = 1, 2, . . . }
are orthogonal on [0, π], from (3.3.46) and (3.3.47) it follows that

    (1/π) ∫_0^π cos(x sin φ) cos(nφ) dφ = J_n(x) if n is even, and 0 if n is odd,

and

    (1/π) ∫_0^π sin(x sin φ) sin(nφ) dφ = 0 if n is even, and J_n(x) if n is odd.

If we add the last two equations, we obtain

    J_n(x) = (1/π) ∫_0^π [ cos(x sin φ) cos(nφ) + sin(x sin φ) sin(nφ) ] dφ
        = (1/π) ∫_0^π cos(x sin φ − nφ) dφ. ■
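Theorem 3.3.9 can be tested directly by quadrature (Python with SciPy, assumed here; the orders and arguments are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def bessel_integral(n, x):
    # Theorem 3.3.9: J_n(x) = (1/pi) * integral over [0, pi] of cos(x sin(phi) - n phi)
    val, _ = quad(lambda phi: np.cos(x*np.sin(phi) - n*phi), 0.0, np.pi)
    return val / np.pi

max_err = max(abs(bessel_integral(n, x) - jv(n, x))
              for n in (0, 1, 5) for x in (0.7, 3.2))
```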

Now we will examine an eigenvalue problem for the Bessel equation. Con-
sider again the parametric Bessel differential equation

(3.3.48)    x²y′′ + xy′ + (λ²x² − µ²)y = 0

on the interval [0, 1]. This is a singular Sturm–Liouville problem because the
leading coefficient x² vanishes at x = 0. One of several ways to impose
boundary values at the boundary points x = 0 and x = 1 is to consider the
problem of finding the solutions y(x) of (3.3.48) which satisfy the boundary
conditions

(3.3.49)    | lim_{x→0+} y(x) | < ∞,   y(1) = 0.

The following result, whose proof can be found in the book by G. B. Fol-
land [6], will be used to show that there are infinitely many values of λ for
which there exist pairwise orthogonal solutions of (3.3.48) which satisfy the
boundary conditions (3.3.49).

Theorem 3.3.10. If µ is a real number, then the Bessel function J_µ(x) of
order µ has infinitely many positive zeros λ_n. These zeros are simple and
can be arranged in an increasing sequence

    λ_1 < λ_2 < · · · < λ_n < · · · ,

such that

    lim_{n→∞} λ_n = ∞.
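For integer orders the zeros λ_n are available numerically; a sketch with SciPy (an assumption of this note) for µ = 0, whose first zero is the classical value 2.40482. . .:

```python
import numpy as np
from scipy.special import jn_zeros, jv

zeros = jn_zeros(0, 5)   # first five positive zeros lambda_1 < ... < lambda_5 of J_0
```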

As discussed in Example 3.3.7, the general solution of the parametric Bessel
differential equation (3.3.48) is

    y(x) = c_1 J_µ(λx) + c_2 Y_µ(λx).

We must have c_2 = 0, since otherwise the solutions y(x) would become
infinite at x = 0.
For these functions we have the following result.
Theorem 3.3.11. Suppose that µ is a nonnegative real number, and that
J_µ(x) is the Bessel function of order µ. If λ_1, λ_2, . . . , λ_n, . . . are the positive
zeros of J_µ(x), then each function J_µ(λ_n x) of the system

    J_µ(λ_1 x), J_µ(λ_2 x), . . . , J_µ(λ_n x), . . . ,   x ∈ (0, 1),

satisfies the parametric Bessel differential equation (3.3.48) with λ = λ_n, and
the boundary conditions (3.3.49). Furthermore, this system is orthogonal on
(0, 1) with respect to the weight function r(x) = x.
Proof. That each function Jµ (λn x) satisfies Equation (3.3.48) with λ = λn
was shown in Example 3.3.7. It remains to show the orthogonality property.
Suppose that λ1 and λ2 are any two distinct and positive zeroes of Jµ (x).
If y1 (x) = Jµ (λ1 x) and y2 (x) = Jµ (λ2 x), then

$$x^2y_1''(x) + xy_1'(x) + (\lambda_1^2x^2 - \mu^2)y_1(x) = 0, \qquad 0 < x < 1,$$
$$x^2y_2''(x) + xy_2'(x) + (\lambda_2^2x^2 - \mu^2)y_2(x) = 0, \qquad 0 < x < 1.$$
If we multiply the first equation by $y_2$ and the second by $y_1$, and subtract, then
$$x\bigl(y_2y_1'' - y_1y_2''\bigr) + y_2y_1' - y_1y_2' = (\lambda_2^2 - \lambda_1^2)\,xy_1y_2.$$
Notice that the last equation can be written in the form
$$\frac{d}{dx}\Bigl[x\bigl(y_2y_1' - y_1y_2'\bigr)\Bigr] = (\lambda_2^2 - \lambda_1^2)\,xy_1y_2.$$
If we integrate the last equation and substitute the functions y1 and y2 we
obtain that
$$\Bigl[x\bigl(\lambda_1J_\mu(\lambda_2x)J_\mu'(\lambda_1x) - \lambda_2J_\mu(\lambda_1x)J_\mu'(\lambda_2x)\bigr)\Bigr]_0^1 = (\lambda_2^2 - \lambda_1^2)\int_0^1 xJ_\mu(\lambda_1x)J_\mu(\lambda_2x)\,dx.$$
3.3.3 BESSEL’S DIFFERENTIAL EQUATION 193

Since $\lambda_1$ and $\lambda_2$ are such that $J_\mu(\lambda_1) = J_\mu(\lambda_2) = 0$, it follows that
$$(\lambda_2^2 - \lambda_1^2)\int_0^1 xJ_\mu(\lambda_1x)J_\mu(\lambda_2x)\,dx = 0,$$
and therefore
$$\int_0^1 xJ_\mu(\lambda_1x)J_\mu(\lambda_2x)\,dx = 0. \;\blacksquare$$
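The orthogonality just proved can also be observed numerically. The Python sketch below (standard library only; all helper names are ours) takes $\mu = 0$, locates the first two positive zeros of $J_0$ by bisection, and evaluates the weighted integral by Simpson's rule:

```python
import math

def J(n, x, terms=60):
    # power-series Bessel function J_n; adequate for the range of x used here
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (n + 2 * m) for m in range(terms))

def bisect_zero(f, a, b, iters=100):
    # simple bisection on a sign-changing bracket [a, b]
    for _ in range(iters):
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2

def simpson(f, a, b, steps=1000):
    h = (b - a) / steps
    return (f(a) + f(b)
            + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, steps))) * h / 3

# first two positive zeros of J_0; they lie in (2, 3) and (5, 6)
lam1 = bisect_zero(lambda x: J(0, x), 2.0, 3.0)
lam2 = bisect_zero(lambda x: J(0, x), 5.0, 6.0)

# orthogonality on (0, 1) with respect to the weight r(x) = x
cross = simpson(lambda x: x * J(0, lam1 * x) * J(0, lam2 * x), 0.0, 1.0)
assert abs(cross) < 1e-9
```

The cross integral comes out at roundoff level, as the theorem predicts.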

This theorem can be used to construct a Fourier–Bessel expansion of a function $f(x)$ on the interval $[0, 1]$.

Theorem 3.3.12. Suppose that $f$ is a piecewise smooth function on the interval $[0, 1]$. Then $f$ has a Fourier–Bessel expansion on the interval $[0, 1]$ of the form

(3.3.50) $\quad \displaystyle f(x) \sim \sum_{k=1}^{\infty} c_kJ_\mu(\lambda_kx),$

where the $\lambda_k$ are the positive zeros of $J_\mu(x)$.


The coefficients $c_k$ in (3.3.50) can be computed by the formula

(3.3.51) $\quad \displaystyle c_k = \frac{2}{J_{\mu+1}^2(\lambda_k)}\int_0^1 xf(x)J_\mu(\lambda_kx)\,dx.$

The series in (3.3.50) converges to
$$\frac{f(x^+) + f(x^-)}{2}$$
for any point $x \in (0, 1)$.

Proof. From the orthogonality property of $J_\mu(\lambda_kx)$ it follows that

(3.3.52) $\quad \displaystyle c_k = \frac{1}{\int_0^1 xJ_\mu^2(\lambda_kx)\,dx}\int_0^1 xf(x)J_\mu(\lambda_kx)\,dx.$

To evaluate the integral
$$\int_0^1 xJ_\mu^2(\lambda_kx)\,dx$$

we proceed as in Theorem 3.3.11. If $u(x) = J_\mu(\lambda_kx)$, then
$$x^2u'' + xu' + (\lambda_k^2x^2 - \mu^2)u = 0.$$
If we multiply the above equation by $2u'$, then we obtain
$$2x^2u''u' + 2xu'^2 + 2(\lambda_k^2x^2 - \mu^2)uu' = 0,$$
which can be written in the form
$$\frac{d}{dx}\Bigl[x^2u'^2 + (\lambda_k^2x^2 - \mu^2)u^2\Bigr] - 2\lambda_k^2xu^2 = 0.$$
Integrating the above equation on the interval $[0, 1]$ we obtain
$$\Bigl[x^2u'^2 + (\lambda_k^2x^2 - \mu^2)u^2\Bigr]_{x=0}^{x=1} - 2\lambda_k^2\int_0^1 xu^2(x)\,dx = 0,$$
or
$$u'^2(1) + (\lambda_k^2 - \mu^2)u^2(1) = 2\lambda_k^2\int_0^1 xu^2(x)\,dx.$$

If we substitute the function $u(x)$ and use the fact that the $\lambda_k$ are zeros of $J_\mu(x)$, then it follows that
$$\lambda_k^2J_\mu'^2(\lambda_k) = 2\lambda_k^2\int_0^1 xJ_\mu^2(\lambda_kx)\,dx,$$
i.e.,
$$\int_0^1 xJ_\mu^2(\lambda_kx)\,dx = \frac{1}{2}J_\mu'^2(\lambda_k).$$
From Equation (3.3.41) in Theorem 3.3.7, with $x = \lambda_k$ we have
$$J_\mu'^2(\lambda_k) = J_{\mu+1}^2(\lambda_k).$$
Thus,
$$\int_0^1 xJ_\mu^2(\lambda_kx)\,dx = \frac{1}{2}J_{\mu+1}^2(\lambda_k).$$
Therefore, from (3.3.52) it follows that
$$c_k = \frac{2}{J_{\mu+1}^2(\lambda_k)}\int_0^1 xf(x)J_\mu(\lambda_kx)\,dx. \;\blacksquare$$

Let us illustrate this theorem with an example.



Example 3.3.10. Expand the function $f(x) = x$, $0 < x < 1$, in the series
$$x \sim \sum_{k=1}^{\infty} c_kJ_1(\lambda_kx),$$
where $\lambda_k$, $k = 1, 2, \ldots$, are the positive zeros of $J_1(x)$.

Solution. From (3.3.51) we have
$$c_k = \frac{2}{J_2^2(\lambda_k)}\int_0^1 x^2J_1(\lambda_kx)\,dx = \frac{2}{\lambda_k^3J_2^2(\lambda_k)}\int_0^{\lambda_k} t^2J_1(t)\,dt.$$

However, from (3.3.39) with $\mu = 2$,
$$\frac{d}{dx}\bigl[x^2J_2(x)\bigr] = x^2J_1(x),$$
it follows that
$$c_k = \frac{2}{\lambda_k^3J_2^2(\lambda_k)}\int_0^{\lambda_k}\frac{d}{dt}\bigl[t^2J_2(t)\bigr]\,dt = \frac{2}{\lambda_k^3J_2^2(\lambda_k)}\Bigl[t^2J_2(t)\Bigr]_{t=0}^{t=\lambda_k} = \frac{2}{\lambda_kJ_2(\lambda_k)}.$$
Therefore, the resulting series is
$$x = 2\sum_{k=1}^{\infty}\frac{J_1(\lambda_kx)}{\lambda_kJ_2(\lambda_k)}.$$
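As a numerical cross-check of this example (a Python sketch using only the standard library; all helper names are ours), the closed form $c_k = 2/\bigl(\lambda_kJ_2(\lambda_k)\bigr)$ can be compared with a direct quadrature of formula (3.3.51), and a partial sum of the series can be evaluated at an interior point:

```python
import math

def J(n, x, terms=60):
    # power-series Bessel function J_n; adequate for the range of x used here
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (n + 2 * m) for m in range(terms))

def simpson(f, a, b, steps=1000):
    h = (b - a) / steps
    return (f(a) + f(b)
            + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, steps))) * h / 3

def j1_zero(k):
    # k-th positive zero of J_1 by bisection; (k + 1/4)*pi is within ~0.1 of it
    g = (k + 0.25) * math.pi
    a, b = g - 1.0, g + 1.0
    for _ in range(80):
        c = (a + b) / 2
        if J(1, a) * J(1, c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2

zeros = [j1_zero(k) for k in range(1, 9)]

# the closed form of the example agrees with formula (3.3.51) computed directly
c1_closed = 2 / (zeros[0] * J(2, zeros[0]))
c1_direct = 2 / J(2, zeros[0]) ** 2 * simpson(lambda x: x * x * J(1, zeros[0] * x), 0.0, 1.0)
assert abs(c1_closed - c1_direct) < 1e-8

# partial sums of 2 * sum_k J_1(lam_k x) / (lam_k J_2(lam_k)) approach f(x) = x
def S(x, K):
    return 2 * sum(J(1, lam * x) / (lam * J(2, lam)) for lam in zeros[:K])

assert abs(S(0.5, 8) - 0.5) < 0.1
```

Convergence at a fixed interior point is slow (comparable to an ordinary Fourier series), which is why the last tolerance is loose.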

Example 3.3.11. Show that for any $p \ge 0$, $a > 0$ and $\lambda > 0$, we have
$$\int_0^a (a^2 - t^2)\,t^{p+1}J_p\Bigl(\frac{\lambda}{a}t\Bigr)\,dt = \frac{2a^{p+4}}{\lambda^2}J_{p+2}(\lambda).$$

Solution. If we introduce a new variable $x$ by $x = \dfrac{\lambda}{a}t$, then

(3.3.53) $\quad \displaystyle \int_0^a (a^2 - t^2)\,t^{p+1}J_p\Bigl(\frac{\lambda}{a}t\Bigr)\,dt = \frac{a^{p+4}}{\lambda^{p+2}}\int_0^\lambda x^{p+1}J_p(x)\,dx - \frac{a^{p+4}}{\lambda^{p+4}}\int_0^\lambda x^2\,x^{p+1}J_p(x)\,dx.$

For the first integral on the right hand side of (3.3.53), from identity (3.3.39) in Theorem 3.3.7 we have

(3.3.54) $\quad \displaystyle \int_0^\lambda x^{p+1}J_p(x)\,dx = \lambda^{p+1}J_{p+1}(\lambda).$

For the second integral on the right hand side of (3.3.53) we apply the integration by parts formula with
$$u = x^2, \qquad dv = x^{p+1}J_p(x)\,dx.$$
Then, by identity (3.3.39), we have $v = x^{p+1}J_{p+1}(x)$, and so
$$\int_0^\lambda x^2x^{p+1}J_p(x)\,dx = \Bigl[x^{p+3}J_{p+1}(x)\Bigr]_{x=0}^{x=\lambda} - 2\int_0^\lambda x^{p+2}J_{p+1}(x)\,dx = \lambda^{p+3}J_{p+1}(\lambda) - 2\lambda^{p+2}J_{p+2}(\lambda),$$
where we used (3.3.39) once more in the last step.

If we substitute (3.3.54) and the last expression in (3.3.53) we obtain the required formula. $\blacksquare$

Exercises for Section 3.3.

1. Verify the expressions for the first six Legendre polynomials.

2. Show by means of Rodrigues' formula that
$$P_n'(1) = \frac{n(n+1)}{2}, \qquad P_n'(-1) = (-1)^{n-1}\frac{n(n+1)}{2}.$$
3. If $n \ge 1$, show that $\displaystyle\int_{-1}^1 P_n(x)\,dx = 0.$

4. Prove the recursive formula (3.3.16) in Theorem 3.3.4:
$$P_{n+1}'(x) - P_{n-1}'(x) = (2n + 1)P_n(x).$$

5. Find the sum of the series
$$\sum_{n=0}^{\infty}\frac{x^{n+1}}{n+1}P_n(x).$$

6. Find the first three nonvanishing coefficients in the Legendre polynomial expansion for the following functions.
(a) $f(x) = \begin{cases} 0, & -1 < x < 0,\\ x, & 0 < x < 1. \end{cases}$
(b) $f(x) = |x|$, $-1 < x < 1$.
(c) $f(x) = \begin{cases} -1, & -1 < x < 0,\\ 0, & 0 < x < 1. \end{cases}$
7. Show that
(a) $P_n(1) = 1$.
(b) $P_n(-1) = (-1)^n$.
(c) $P_{2n-1}(0) = 0$.
(d) $P_{2n}(0) = (-1)^n\dfrac{(2n)!}{2^{2n}\,n!\,n!}$.
8. Prove that
$$\int_x^1 P_n(t)\,dt = \frac{1}{2n+1}\bigl[P_{n-1}(x) - P_{n+1}(x)\bigr].$$

9. Establish the following properties of the Bessel functions.
(a) $J_0(0) = 1$.
(b) $J_\mu(0) = 0$ for $\mu > 0$.
(c) $J_0'(x) = -J_1(x)$.
(d) $\dfrac{d}{dx}\bigl(xJ_1(x)\bigr) = xJ_0(x)$.
10. Show that the differential equation
$$y'' + xy = 0, \qquad x > 0,$$
has the general solution
$$y = c_1x^{\frac12}J_{\frac13}\Bigl(\tfrac{2}{3}x^{\frac32}\Bigr) + c_2x^{\frac12}J_{-\frac13}\Bigl(\tfrac{2}{3}x^{\frac32}\Bigr).$$
11. Show that the differential equation
$$x^2y'' + (x^2 - 2)y = 0, \qquad x > 0,$$
has the general solution
$$y = c_1x^{\frac12}J_{\frac32}(x) + c_2x^{\frac12}J_{-\frac32}(x).$$

12. Show that the function $xJ_1(x)$ is a solution of the differential equation
$$xy'' - y' + xy = 0, \qquad x > 0.$$

13. Solve the following differential equations.

(a) x2 y ′′ + 3xy ′ + (µ2 x2 + 1)y = 0, µ > 0, x > 0.

(b) xy ′′ + 5y ′ + xy = 0, x > 0.

(c) xy ′′ + y ′ + µ2 x2µ−1 y = 0, µ ̸= 0, x > 0.

(d) x2 y ′′ + xy ′ + (x2 − 9)y = 0, x > 0.

14. Show that
$$\int_0^{\pi/2} J_0(x\cos\theta)\cos\theta\,d\theta = \frac{\sin x}{x}, \qquad x > 0.$$

15. Find $J_{-\frac32}(x)$ in terms of elementary functions.

16. Suppose that $x$ and $y$ are positive numbers and $n$ an integer. Show that
$$J_n(x + y) = \sum_{k=-\infty}^{\infty} J_k(x)J_{n-k}(y).$$

17. Expand the following functions
(a) $f(x) = 1$, $0 < x < 1$,
(b) $f(x) = x^2$, $0 < x < 1$,
(c) $f(x) = \dfrac{1 - x^2}{8}$, $0 < x < 1$,
in the Fourier–Bessel series
$$\sum_{k=1}^{\infty} c_kJ_0(\lambda_kx),$$
where $\lambda_k$ denote the positive zeros of $J_0(x)$.


3.4 PROJECTS USING MATHEMATICA 199

18. Show that
$$x^3 = 2\sum_{k=1}^{\infty}\frac{\lambda_k^2 - 8}{\lambda_k^3J_2(\lambda_k)}J_1(\lambda_kx), \qquad 0 < x < 1,$$
where $\lambda_k$ denote the positive zeros of $J_1(x)$.

19. From the recursive formulas, show the following relations.
(a) $2J_0''(x) = J_2(x) - J_0(x)$.
(b) $J_2(x) = J_0''(x) - \dfrac{J_0'(x)}{x}$.
(c) $4J_n''(x) = J_{n-2}(x) - 2J_n(x) + J_{n+2}(x)$.

20. Show that
(a) $\displaystyle\int_0^1 xJ_0(\lambda x)\,dx = \frac{1}{\lambda}J_1(\lambda)$.
(b) $\displaystyle\int_0^1 x^3J_0(\lambda x)\,dx = \frac{\lambda^2 - 4}{\lambda^3}J_1(\lambda) + \frac{2}{\lambda^2}J_0(\lambda)$.
21. If $\lambda$ is any solution of the equation $J_0(x) = 0$, show that
(a) $\displaystyle\int_0^1 J_1(\lambda x)\,dx = \frac{1}{\lambda}$.
(b) $\displaystyle\int_0^\lambda J_1(x)\,dx = 1$.

3.4 Projects Using Mathematica.


In this section we will see how Mathematica can be used to solve some eigenvalue problems. Legendre polynomials and Bessel functions are also investigated in this section using Mathematica. For a brief overview of the computer software Mathematica consult Appendix H.

Project 3.4.1. Using Mathematica solve the eigenvalue problem
$$x^2y''(x) + xy'(x) + \lambda y = 0, \quad 1 < x < e, \qquad y(1) = y(e) = 0.$$

Solution. We find the general solution of the Cauchy–Euler differential equation:

In[1] := GS = First[DSolve[x^2 y''[x] + x y'[x] + λ y[x] == 0, y, x]]
Out[1] = {y -> Function[{x}, C[1] Cos[Sqrt[λ] Log[x]] + C[2] Sin[Sqrt[λ] Log[x]]]}
Next define the boundary conditions at x = 1 and x = e:
In[2] := bc1 = (y[1] == 0) /. GS
Out[2] = C[1] == 0
In[3] := bce = (y[Exp[1]] == 0) /. GS
Out[3] = C[1] Cos[Sqrt[λ]] + C[2] Sin[Sqrt[λ]] == 0
Now substitute C[1] = 0 and find the eigenvalues:
In[4] := Reduce[Sin[Sqrt[λ]] == 0, λ, Reals]
Out[4] = λ == 0 || (C[1] ∈ Integers && ((C[1] >= 1 && λ == 4 π^2 C[1]^2) || (C[1] >= 0 && λ == (π + 2 π C[1])^2)))
Notice that the eigenvalues are given by $\lambda_n = n^2\pi^2$, $n = 1, 2, \ldots$. The eigenfunctions are therefore
$$y_n(x) = \sin\bigl(n\pi\ln x\bigr).$$
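The eigenfunctions can also be verified independently of Mathematica. The small Python sketch below (standard library only; all names are ours) checks by central finite differences that $y_n(x) = \sin(n\pi\ln x)$ satisfies $x^2y'' + xy' + n^2\pi^2y = 0$ on $(1, e)$ together with the boundary conditions:

```python
import math

def y(n, x):
    # candidate eigenfunction y_n(x) = sin(n*pi*ln x)
    return math.sin(n * math.pi * math.log(x))

def residual(n, x, h=1e-4):
    # residual of x^2 y'' + x y' + lambda y with lambda = (n*pi)^2,
    # derivatives approximated by second-order central differences
    d1 = (y(n, x + h) - y(n, x - h)) / (2 * h)
    d2 = (y(n, x + h) - 2 * y(n, x) + y(n, x - h)) / h ** 2
    return x * x * d2 + x * d1 + (n * math.pi) ** 2 * y(n, x)

for n in (1, 2, 3):
    assert y(n, 1.0) == 0.0 and abs(y(n, math.e)) < 1e-12
    for x in (1.3, 1.9, 2.5):
        assert abs(residual(n, x)) < 1e-4
```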

In the next project we use Mathematica to solve a problem involving the Legendre polynomials.
Project 3.4.2. Let $f(x)$ be the function defined on $[-1, 1]$ by
$$f(x) = \begin{cases} -2x - 2, & -1 \le x < -\tfrac12,\\[2pt] 2x, & -\tfrac12 \le x \le \tfrac12,\\[2pt] -2x + 2, & \tfrac12 \le x \le 1. \end{cases}$$

Expand the function $f(x)$ into the Legendre series
$$f(x) = \sum_{n=0}^{\infty} c_nP_n(x),$$
where $P_n(x)$ is the Legendre polynomial of degree $n$. Using Mathematica plot the function $f(x)$ and several partial sums of the Legendre series.
Solution. First define the function f(x):
In[1] := f[x_] := Piecewise[{{-2x - 2, -1 <= x < -1/2}, {2x, -1/2 <= x <= 1/2}, {-2x + 2, 1/2 <= x <= 1}}];
Next define the Legendre polynomial Pn(x):
In[2] := L[x_, n_] := LegendreP[n, x];

Evaluate the integral $\displaystyle\int_{-1}^1 P_n^2(x)\,dx$:
In[3] := a[n_] := Integrate[(L[x, n])^2, {x, -1, 1}];
Now we define the nth coefficient in the Legendre expansion:
In[4] := c[n_] := (1/a[n]) Integrate[f[x] L[x, n], {x, -1, 1}];
Now we define the Nth Legendre partial sum:
In[5] := S[x_, N_] := Sum[c[n] L[x, n], {n, 0, N}];
Plots of the function f(x) and several of its partial Legendre sums (N = 1, 5, 7 and 9) are displayed in Figure 3.4.1.

Figure 3.4.1

Mathematica has many options to work with Bessel functions. It can plot
the Bessel functions of the first and second kind. It can evaluate numerically
the zeros of the Bessel functions. It can solve some differential equations whose
solutions are expressed in terms of the Bessel functions. It can evaluate some
integrals in terms of the Bessel functions.
In the next project we explore some of these options.

Project 3.4.3. Explore some of the capabilities of Mathematica in solving problems related to Bessel functions.
Solution. The Mathematica commands
In[] := BesselJ[n, x];
In[] := BesselY[n, x];
give the Bessel function Jn(x) of the first kind of order n and the Bessel function Yn(x) of the second kind of order n.
Example 1. We can plot several Bessel functions.
In[1] := Plot[{BesselJ[0, x], BesselJ[1, x]}, {x, 0, 50}, Ticks -> {{0, 10, 20, 40, 50}, {-0.4, -0.2, 0, 0.2, 0.4}}, AxesLabel -> {"x"}]
In[2] := Plot[{BesselY[0, x], BesselY[1, x]}, {x, 0, 50}, Ticks -> {{0, 10, 20, 40, 50}, {-0.51, 0, 0.51}}, AxesLabel -> {"x"}]
The plots are displayed in Figure 3.4.2.

0.4 J0 HxL

10 20 40 50

J6 HxL
-0.4

0.51
Y0 HxL

10 20 40 50

Y7 HxL
-0.51

Figure 3.4.2

The command
In[] := N[BesselJZero[n, k, x0]]
gives the kth zero of the Bessel function Jn(x) which is greater than x0.
Example 2. Write out the first 10 positive zeros of J1(x).
Solution. Although we can specify a desired precision, let us work with the Mathematica default.
In[1] := Table[N[BesselJZero[1, k, 0]], {k, 1, 10}]
Out[1] = {3.83171, 7.01559, 10.1735, 13.3237, 16.4706, 19.6159, 22.7601, 25.9037, 29.0468, 32.1897}

Example 3. Solve the differential equation
$$x^2y''(x) + xy'(x) + (x^2 - 1)y(x) = 0.$$
Solution. This is the Bessel equation of order 1, and the solution of this equation is expressed in terms of Bessel functions, as can be seen from the following.
In[2] := DSolve[x^2 y''[x] + x y'[x] + (x^2 - 1) y[x] == 0, y[x], x]
Out[2] = {{y[x]− > BesselJ[1, x]C[1]+ BesselY [1, x]C[2]}}
CHAPTER 4

PARTIAL DIFFERENTIAL EQUATIONS

Many problems in physics and engineering are described by partial differential equations (PDEs) with appropriate boundary and initial conditions. Partial differential equations play a major role in electromagnetic theory, fluid dynamics, traffic flow and many other disciplines. Therefore there is a need to study the theory of partial differential equations, and the rest of the book is devoted entirely to this important topic.
A major part of this chapter is devoted to the study of a particular class of first order partial differential equations, namely, linear partial differential equations. We will start with basic concepts and terminology, and finally we will give some generalities on linear partial differential equations of the second order and their classification.
In this chapter, as well as in the remaining chapters, all functions will be
real-valued functions of one or more real variables unless we indicate otherwise.

4.1 Basic Concepts and Terminology.


A domain $\Omega$ in the plane $\mathbb{R}^2$ or in the space $\mathbb{R}^3$ is a nonempty, open and connected set. A set $\Omega$ is called open if for every point $P \in \Omega$ there is an open disc (in $\mathbb{R}^2$) or open ball (in $\mathbb{R}^3$) centered at $P$ which is a subset of $\Omega$. A set $\Omega$ is called connected if any two points in $\Omega$ can be joined by a polygonal line which lies entirely in $\Omega$.
If $u = u(x, y, \ldots)$, $(x, y, \ldots) \in \Omega$, is a function of two or more variables, then the partial derivatives of $u$ of the first order will be denoted by
$$u_x, \quad u_y,$$
instead of
$$\frac{\partial u}{\partial x}, \quad \frac{\partial u}{\partial y}.$$
Similarly, for the partial derivatives of the second order we will use the notation
$$u_{xx}, \quad u_{yx}, \quad u_{yy},$$
instead of
$$\frac{\partial^2u}{\partial x^2}, \quad \frac{\partial^2u}{\partial y\,\partial x}, \quad \frac{\partial^2u}{\partial y^2}.$$

4.1 BASIC CONCEPTS AND TERMINOLOGY 205

We say that a function $u = u(x, y)$, $(x, y) \in \Omega \subseteq \mathbb{R}^2$, belongs to the class $C^k(\Omega)$, $k = 1, 2, \ldots$, if all partial derivatives of $u$ up to order $k$ are continuous functions in $\Omega$.
These concepts can be generalized to functions $u = u(x, y, z, \ldots)$ of three or more variables $x, y, z, \ldots$
A partial differential equation for a function u = u(x, y, z, . . .) is an equa-
tion which contains the function u and some of its partial derivatives:

(4.1.1) F (x, y, u, ux , uy , uxx , uxy , uyy , . . .) = 0, (x, y, . . .) ∈ Ω.

The order of a partial differential equation is the order of the highest partial
derivative that appears in the equation.
As in ordinary differential equations, a solution of the partial differen-
tial Equation (4.1.1) of order k is a function u = u(x, y, z, . . .) ∈ C k (Ω)
which together with its partial derivatives satisfies Equation (4.1.1) for all
(x, y, . . .) ∈ Ω. The general solution is the set of all solutions.
Example 4.1.1. Find the general solution of the differential equation

uyy (x, y) = 0, (x, y) ∈ R2 .

Solution. The given equation simply means that uy (x, y) does not depend
on y. Therefore, uy (x, y) = A(x), where A(x) is an arbitrary function.
Integrating the last equation with respect to y (keeping x constant), it follows
that the general solution of the equation is given by

u(x, y) = A(x)y + B(x),

where A(x) and B(x) are arbitrary functions.

Example 4.1.2. Show that the function

$$u(x, y, z) = \frac{1}{\sqrt{x^2 + y^2 + z^2}}, \qquad (x, y, z) \ne (0, 0, 0),$$

is a solution of the differential equation

uxx (x, y, z) + uyy (x, y, z) + uzz (x, y, z) = 0, (x, y, z) ∈ R3 \ {(0, 0, 0)}.

Solution. If we let $r = \sqrt{x^2 + y^2 + z^2}$, then $u = \dfrac{1}{r}$. Using the chain rule we have
$$u_x = -\frac{1}{r^2}r_x = -\frac{x}{r^3}.$$
206 4. PARTIAL DIFFERENTIAL EQUATIONS

Differentiating once more and using the quotient rule and the chain rule again, we have
$$u_{xx} = -\frac{r^3 - 3r^2r_xx}{r^6} = -\frac{r^2 - 3x^2}{r^5}.$$
By symmetry, we have
$$u_{yy} = -\frac{r^2 - 3y^2}{r^5}, \qquad u_{zz} = -\frac{r^2 - 3z^2}{r^5}.$$

Therefore,
$$u_{xx} + u_{yy} + u_{zz} = -\frac{r^2 - 3x^2}{r^5} - \frac{r^2 - 3y^2}{r^5} - \frac{r^2 - 3z^2}{r^5} = -\frac{3r^2 - 3(x^2 + y^2 + z^2)}{r^5} = -\frac{3r^2 - 3r^2}{r^5} = 0.$$
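A quick numerical check of this computation (a Python sketch, standard library only; the names are ours): approximating the Laplacian of $u = 1/r$ by second-order central differences at a few points away from the origin indeed gives values near zero:

```python
import math

def u(x, y, z):
    # u = 1 / sqrt(x^2 + y^2 + z^2)
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-4):
    # u_xx + u_yy + u_zz by second-order central differences
    c = f(x, y, z)
    return ((f(x + h, y, z) - 2 * c + f(x - h, y, z))
            + (f(x, y + h, z) - 2 * c + f(x, y - h, z))
            + (f(x, y, z + h) - 2 * c + f(x, y, z - h))) / h ** 2

for p in [(1.0, 0.5, -0.3), (2.0, 1.0, 1.0), (-0.7, 0.2, 0.9)]:
    assert abs(laplacian(u, *p)) < 1e-5
```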

Example 4.1.3. Find the general solution u = u(x, y) of the equation

uxy + ux = 0.

Solution. We can consider this partial differential equation as an ordinary


differential equation. Indeed, if we introduce a new function w = w(x, y) =
ux , then the given equation takes the form

wy + w = 0.

The general solution of the above equation is easily found; it is given by
$$w(x, y) = a(x)e^{-y},$$

where a(x) is any differentiable function. Therefore,

ux = e−y a(x).

Integrating the above equation with respect to x (keeping y constant), we


obtain that the general solution of the given partial differential equation is

u(x, y) = e−y f (x) + g(y),

where f is any differentiable function of a single variable and g an arbitrary


function of a single variable.

Example 4.1.4. The equations

ux + xuy + u = 0, uy − uux uy + u2 = 0, ux − yuy + zuz = x − y − z

are all of the first order, while the equations

uxx + uyy = 0, uy − xyuxy + (x − y)uyy = 0, uxx + uyy + uzz = 0

are of the second order.

Exercises for Section 4.1.

1. Show that the function u = u(x, y) defined by


(√ )
u(x, y) = f x2 + y 2 ,

where f is any differentiable function on [a, b], a ≥ 0, satisfies the


differential equation

yux − xuy = 0.

2. Let u(x, y) = f (x)g(y), where f and g are any differentiable functions


of one variable. Verify that the function u = u(x, y) is a solution of
the partial differential equation

uuxy = ux uy .

3. Let u(x, y) = ex f (2x − y), where f is any differentiable function


of one variable. Show that u = u(x, y) is a solution of the partial
differential equation

ux + 2uy = 0.

4. Find the general solution u(x, y) of the following differential equations.

(a) ux = f (x),

(b) ux = f (y),
where f is a continuous function, defined on an open interval.

5. Find the general solution u(x, y) of the following differential equations.

(a) uyy = 0.

(b) uxy = 0.

(c) $u_{xx} + u = 0$.

(d) $u_{yy} + u = 0$.

6. Show that the function
$$u(x, t) = \frac{1}{\sqrt{t}}\,e^{-\frac{x^2}{16t}}, \qquad t > 0,$$
satisfies the differential equation $u_t = 4u_{xx}$.
7. Show that the function
u(x, t) = ϕ(x − at) + ψ(x + at),
where a is any constant, and ϕ and ψ are any differentiable func-
tions, each of a single variable, satisfies the differential equation
utt = a2 uxx .
8. Find a partial differential equation which is satisfied by each of the following families of functions.

(a) u = x + y + f (xy), f is any differentiable function of one variable.

(b) u = (x + a)(y + b), a and b are any constants.

(c) ax2 + by 2 + cz 2 = 1, a, b and c are any constants.

(d) u = xy+f (x2 +y 2 ), f is any differentiable function of one variable.


(e) $u = f\Bigl(\dfrac{xy}{u}\Bigr)$, $f$ is any differentiable function of one variable.
9. Show that the function $u = u(x, y)$ defined by
$$u(x, y) = \begin{cases} \dfrac{xy}{x^2 + y^2} & \text{if } (x, y) \ne (0, 0),\\[4pt] 0 & \text{if } (x, y) = (0, 0), \end{cases}$$
is discontinuous only at the point $(0, 0)$, but still satisfies the equation
$$xu_x + yu_y = 0$$
in the entire plane.

10. Show that every surface of revolution around the u-axis, that is, every surface of the form
$$u(x, y) = f(x^2 + y^2),$$
where $f$ is some differentiable function of one variable, satisfies the partial differential equation $yu_x - xu_y = 0$.
4.2 PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER 209

4.2 Partial Differential Equations of the First Order.


In this section we will study partial differential equations of the first order.
We will restrict our discussion to equations in $\mathbb{R}^2$. These types of equations arise in many physical problems, such as traffic flow, shock waves, and convection–diffusion processes in chemistry and chemical engineering.
We start with the linear equations.
Linear Partial Differential Equations of the First Order. A differential
equation of the form

(4.2.1) a(x, y)ux (x, y) + b(x, y)uy (x, y) = c(x, y)u(x, y)

is called a linear partial differential equation of the first order. We suppose


that the functions a = a(x, y), b = b(x, y) and c = c(x, y) are continuous in
some domain in the plane.
Let $u = u(x, y)$ be a solution of (4.2.1). Geometrically, the set $(x, y, u)$ represents a surface $S$ in $\mathbb{R}^3$. We know from calculus that a normal vector $\mathbf{n}$ to the surface $S$ at a point $(x, y, u)$ of the surface is given by
$$\mathbf{n} = (u_x, u_y, -1).$$
From Equation (4.2.1) it follows that the vector $(a, b, cu)$ is perpendicular to the vector $\mathbf{n}$, and therefore the vector $(a, b, cu)$ must lie in the tangent plane to the surface $S$. Therefore, in order to solve Equation (4.2.1) we need to find the surface $S$. It is obvious that $S$ is the union of all curves which lie on the surface $S$ with the property that the vector

(4.2.2) $\quad \bigl(a(x, y),\; b(x, y),\; c(x, y)u(x, y)\bigr)$

is tangent to each curve. Let us consider one such curve C, whose parametric
equations are given by


$$(C)\qquad x = x(t), \quad y = y(t), \quad u = u(t).$$
A tangent vector to the curve $C$ is given by

(4.2.3) $\quad \Bigl(\dfrac{dx}{dt},\; \dfrac{dy}{dt},\; \dfrac{du}{dt}\Bigr).$
If we compare (4.2.2) and (4.2.3), then we obtain

(4.2.4) $\quad \begin{cases} \dfrac{dx}{dt} = a(x, y),\\[6pt] \dfrac{dy}{dt} = b(x, y),\\[6pt] \dfrac{du}{dt} = c(x, y)u(x, y). \end{cases}$

The curves $C$ given by (4.2.4) are called the characteristic curves for Equation (4.2.1), and the system (4.2.4) is called the characteristic equations for Equation (4.2.1). The system (4.2.4) has two linearly independent first integrals $F(x, y, u) = c_1$ and $G(x, y, u) = c_2$, each of which is constant along the characteristic curves. To summarize, we have the following theorem.
Theorem 4.2.1. The general solution $u = u(x, y)$ of the partial differential equation (4.2.1) is given by
$$f\bigl(F(x, y, u), G(x, y, u)\bigr) = 0,$$
where $f$ is any differentiable function of two variables and $F(x, y, u) = c_1$ and $G(x, y, u) = c_2$ are two linearly independent solutions of the characteristic equations

(4.2.5) $\quad \dfrac{dx}{a(x, y)} = \dfrac{dy}{b(x, y)} = \dfrac{du}{c(x, y)u(x, y)}.$

Example 4.2.1. Solve the partial differential equation $u_x + 3x^2y^3u_y = 0$.

Solution. The characteristic curves of this equation are given by the general solution of
$$3x^2y^3\,dx - dy = 0.$$
After separating the variables in the above equation we obtain its general solution
$$x^3 = -\frac{1}{2y^2} + c.$$
Therefore
$$c = x^3 + \frac{1}{2y^2},$$
and so the general solution of the given partial differential equation is
$$u(x, y) = f\Bigl(x^3 + \frac{1}{2y^2}\Bigr),$$
where $f$ is any differentiable function of a single variable.

Example 4.2.2. Find the particular solution u = u(x, y) of the differential


equation
−yux + xuy = 0
which satisfies the condition u(x, x) = 2x2 .
Solution. The characteristic curves of this equation are given by the general
solution of
x dx + y dy = 0.

Integrating the above equation we obtain

x2 + y 2 = c.

Therefore, the general solution of the given partial differential equation is

u(x, y) = f (x2 + y 2 ),

where f is any differentiable function of one variable. From the condition


u(x, x) = 2x2 we have f (2x2 ) = 2x2 and so f (x) = x. Therefore, the partic-
ular solution of the given equation that satisfies the given condition is

u(x, y) = x2 + y 2 .
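As a check (a short Python sketch, standard library only; the names are ours), the particular solution can be substituted back into the equation and the side condition:

```python
def u(x, y):
    # particular solution u(x, y) = x^2 + y^2 from Example 4.2.2
    return x * x + y * y

def residual(x, y, h=1e-6):
    # -y u_x + x u_y, approximated by central differences; should vanish
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return -y * ux + x * uy

for x, y in [(1.0, 2.0), (-0.5, 1.5), (3.0, -2.0)]:
    assert abs(residual(x, y)) < 1e-6
for x in (0.5, 1.0, 2.0):
    assert u(x, x) == 2 * x * x  # side condition u(x, x) = 2x^2
```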

Example 4.2.3. Find the general solution of

xux + yuy = nu.

Solution. The characteristic equations (4.2.5) of the given differential equation are
$$\frac{dx}{x} = \frac{dy}{y} = \frac{du}{nu}.$$
From the first equation we obtain
$$\frac{y}{x} = c_1, \qquad c_1 \text{ is any constant}.$$
From the other equation
$$\frac{dy}{y} = \frac{du}{nu}$$
we find
$$\frac{u}{y^n} = c_2, \qquad c_2 \text{ is any constant}.$$
Therefore, the general solution of the equation is given by
$$f\Bigl(\frac{y}{x}, \frac{u}{y^n}\Bigr) = 0,$$
where $f$ is any differentiable function. Notice that the solution can also be expressed as
$$u(x, y) = y^n\,g\Bigl(\frac{y}{x}\Bigr),$$
where $g$ is any differentiable function of a single variable.
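This family of solutions is easy to test numerically. The Python sketch below (standard library only) picks $n = 3$ and $g = \sin$ — both arbitrary choices of ours — and checks the equation $xu_x + yu_y = nu$ by central differences:

```python
import math

def u(x, y, n=3, g=math.sin):
    # u(x, y) = y^n g(y/x) should solve x u_x + y u_y = n u for any differentiable g
    return y ** n * g(y / x)

def check(x, y, n=3, h=1e-6):
    # residual x u_x + y u_y - n u via central differences
    ux = (u(x + h, y, n) - u(x - h, y, n)) / (2 * h)
    uy = (u(x, y + h, n) - u(x, y - h, n)) / (2 * h)
    return x * ux + y * uy - n * u(x, y, n)

for x, y in [(1.0, 2.0), (2.5, -1.0), (0.5, 0.8)]:
    assert abs(check(x, y)) < 1e-6
```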

Example 4.2.4. Find the particular solution $u = u(x, y)$ of the equation

(4.2.6) $\quad \sqrt{x}\,u_x - \sqrt{y}\,u_y = \dfrac{u}{\sqrt{x} - \sqrt{y}},$

which satisfies the condition $u(4x, x) = \sqrt{x}$.

Solution. The characteristic equations (4.2.5) in this case are

(4.2.7) $\quad \dfrac{dx}{\sqrt{x}} = -\dfrac{dy}{\sqrt{y}} = \bigl(\sqrt{x} - \sqrt{y}\bigr)\dfrac{du}{u}.$

The first equation in (4.2.7) implies
$$\sqrt{x} + \sqrt{y} = c_1, \qquad \text{where } c_1 \text{ is any constant},$$
and so $F(x, y)$ in Theorem 4.2.1 is given by

(4.2.8) $\quad F(x, y) = \sqrt{x} + \sqrt{y}.$

Further, from the characteristic equations (4.2.7) it follows that
$$dx = \sqrt{x}\bigl(\sqrt{x} - \sqrt{y}\bigr)\frac{du}{u}, \qquad dy = -\sqrt{y}\bigl(\sqrt{x} - \sqrt{y}\bigr)\frac{du}{u}.$$
Thus,
$$dx - dy = \bigl(\sqrt{x} + \sqrt{y}\bigr)\bigl(\sqrt{x} - \sqrt{y}\bigr)\frac{du}{u},$$
and so
$$\frac{d(x - y)}{x - y} = \frac{du}{u}.$$
Integrating the last equation we obtain
$$\frac{u}{x - y} = c_2, \qquad \text{where } c_2 \text{ is any constant},$$
and so $G(x, y)$ in Theorem 4.2.1 is given by

(4.2.9) $\quad G(x, y) = \dfrac{u}{x - y}.$

Therefore, from (4.2.8) and (4.2.9) we obtain that the general solution of Equation (4.2.6) is given by
$$f\Bigl(\sqrt{x} + \sqrt{y},\; \frac{u}{x - y}\Bigr) = 0,$$
where $f$ is any differentiable function. The last equation implies that the general solution can be expressed as
$$u(x, y) = (x - y)\,g\bigl(\sqrt{x} + \sqrt{y}\bigr),$$
where $g$ is any differentiable function of a single variable. From the given condition $u(4x, x) = \sqrt{x}$ we have $3x\,g(3\sqrt{x}) = \sqrt{x}$, and so $g(x) = 1/x$. Therefore,
$$u(x, y) = (x - y)\frac{1}{\sqrt{x} + \sqrt{y}} = \sqrt{x} - \sqrt{y}$$
is the required particular solution.

Now we will describe another method for solving Equation (4.2.1). The idea is to replace the variables $x$ and $y$ with new variables $\xi$ and $\eta$ and transform the given equation into a new equation which can be solved. We will find a transformation to new variables

(4.2.10) $\quad \xi = \xi(x, y), \qquad \eta = \eta(x, y),$

which transforms Equation (4.2.1) into an equation of the form

(4.2.11) $\quad w_\eta + g(\xi, \eta)w = h(\xi, \eta),$

where $w = w(\xi, \eta) = u\bigl(x(\xi, \eta), y(\xi, \eta)\bigr)$. Equation (4.2.11) is a linear ordinary differential equation of the first order (with respect to $\eta$), and as such it can be solved.
So, the problem is to find such functions $\xi$ and $\eta$ with the above properties. By the chain rule we have
$$u_x = w_\xi\xi_x + w_\eta\eta_x, \qquad u_y = w_\xi\xi_y + w_\eta\eta_y.$$
Substituting the above partial derivatives into Equation (4.2.1), after some rearrangement, we obtain
$$\bigl(a\xi_x + b\xi_y\bigr)w_\xi + \bigl(a\eta_x + b\eta_y\bigr)w_\eta = c\bigl(x(\xi, \eta), y(\xi, \eta)\bigr)w.$$
In order for the last equation to be of the form (4.2.11) we need to choose $\xi = \xi(x, y)$ such that
$$a(x, y)\xi_x + b(x, y)\xi_y = 0.$$
But the last equation is homogeneous, and therefore along its characteristic curves

(4.2.12) $\quad -b(x, y)\,dx + a(x, y)\,dy = 0$

the function $\xi = \xi(x, y)$ is constant. Thus,
$$\xi(x, y) = F(x, y),$$
where $F(x, y) = c$, $c$ any constant, is the general solution of Equation (4.2.12). For the function $\eta = \eta(x, y)$ we can choose $\eta(x, y) = y$.

Example 4.2.5. Find the particular solution $u = u(x, y)$ of the equation
$$\frac{1}{x}u_x + \frac{1}{y}u_y = \frac{1}{y}, \qquad x \ne 0,\; y \ne 0,$$
for which $u(x, 1) = 3 - x^2$.
Solution. Equation (4.2.12) in this case is
$$-\frac{1}{y}\,dx + \frac{1}{x}\,dy = 0,$$
whose general solution is given by $x^2 - y^2 = c$. If $w(\xi, \eta) = u\bigl(x(\xi, \eta), y(\xi, \eta)\bigr)$, then the new variables $\xi$ and $\eta$ are given by
$$\xi = x^2 - y^2, \qquad \eta = y.$$
Now, if we substitute the partial derivatives $u_x = w_\xi\xi_x + w_\eta\eta_x = 2xw_\xi$ and $u_y = w_\xi\xi_y + w_\eta\eta_y = -2yw_\xi + w_\eta$ in the given partial differential equation, we obtain the simple equation $w_\eta = 1$. The general solution of this equation is given by $w(\xi, \eta) = \eta + f(\xi)$, where $f$ is any differentiable function. Returning to the original variables $x$ and $y$ we have
$$u(x, y) = y + f(x^2 - y^2).$$
From the condition $u(x, 1) = 3 - x^2$ it follows that $3 - x^2 = 1 + f(x^2 - 1)$, i.e., $f(x^2 - 1) = 2 - x^2$. If we let $s = x^2 - 1$, then $f(s) = 2 - (s + 1) = 1 - s$. Therefore, the particular solution of the given equation is
$$u(x, y) = y + 1 + y^2 - x^2.$$
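As a check (a short Python sketch, standard library only; the names are ours), the particular solution can be substituted back into the equation $\frac{1}{x}u_x + \frac{1}{y}u_y = \frac{1}{y}$:

```python
def u(x, y):
    # particular solution found in Example 4.2.5
    return y + 1 + y * y - x * x

def residual(x, y, h=1e-6):
    # (1/x) u_x + (1/y) u_y - 1/y via central differences; should vanish
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return ux / x + uy / y - 1.0 / y

for x, y in [(1.0, 2.0), (-1.5, 0.5), (2.0, 3.0)]:
    assert abs(residual(x, y)) < 1e-6
assert u(2.0, 1.0) == 3 - 2.0 ** 2  # side condition u(x, 1) = 3 - x^2
```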

The plot of the particular solution u(x, y) is given in Figure 4.2.1.

Figure 4.2.1

Example 4.2.6. Find the solution $u = u(x, y)$ of the equation
$$u_x + yu_y + u = 0, \qquad y > 0,$$
for which
$$u(x, e^{2x}) = 1.$$

Solution. Equation (4.2.12) in this case is
$$-y\,dx + dy = 0,$$
whose general solution is given by $y = ce^x$. If $w(\xi, \eta) = u\bigl(x(\xi, \eta), y(\xi, \eta)\bigr)$, then the new variables $\xi$ and $\eta$ are given by
$$\xi = ye^{-x}, \qquad \eta = y.$$
Now, substituting the partial derivatives $u_x = w_\xi\xi_x + w_\eta\eta_x = -ye^{-x}w_\xi$ and $u_y = w_\xi\xi_y + w_\eta\eta_y = e^{-x}w_\xi + w_\eta$ in the given partial differential equation we obtain
$$\eta w_\eta + w = 0.$$
This equation can be solved by the method of separation of variables:
$$\frac{dw}{w} = -\frac{d\eta}{\eta}.$$
The general solution of the last equation is
$$w(\xi, \eta) = \frac{f(\xi)}{\eta},$$
where $f$ is an arbitrary differentiable function. Therefore, the general solution of the given partial differential equation is
$$u(x, y) = \frac{f(ye^{-x})}{y}.$$
Now, from $u(x, e^{2x}) = 1$ it follows that $f(e^x) = e^{2x}$, i.e., $f(x) = x^2$. Therefore, the particular solution of the given partial differential equation is
$$u(x, y) = ye^{-2x}.$$

The last two examples are only particular examples of a much more general
problem, the so-called Cauchy Problem. We will state this problem without
any further discussion.

Cauchy Problem. Consider a curve $C$ in a plane domain $\Omega$. The Cauchy problem is to find a solution $u = u(x, y)$ of a partial differential equation of the first order
$$F(x, y, u, u_x, u_y) = 0, \qquad (x, y) \in \Omega,$$
such that $u$ takes prescribed values $u_0$ on $C$:
$$u(x, y)\big|_C = u_0(x, y).$$

In applications, a linear partial differential equation of the first order for a function $u = u(t, x)$ of the form
$$u_t(t, x) + cu_x(t, x) = f(t, x)$$
is called the transport equation. This equation can be applied to model pollution (contaminant) or dye dispersion, in which case $u(t, x)$ represents the density of the pollutant or the dye at time $t$ and position $x$. It can also be applied to traffic flow, in which case $u(t, x)$ represents the density of the traffic at time $t$ and position $x$. In an initial value problem for the transport equation, we seek the function $u(t, x)$ which satisfies the transport equation together with the initial condition $u(0, x) = u_0(x)$ for some given initial density $u_0(x)$.
Let us illustrate one of these applications with a concrete example.
Example 4.2.7. Fluid flows with constant speed v in a thin, uniform,
straight tube (cylinder) whose cross sections have constant area A. Sup-
pose that the fluid contains a pollutant whose concentration at position x
and time t is denoted by u(t, x). If there are no other sources of pollution
in the tube and there is no loss of pollutant through the walls of the tube,
derive a partial differential equation for the function u(t, x).
Solution. Consider the part of the tube between any two points $x_1$ and $x_2$. At time $t$, the amount of the pollutant in this part is equal to
$$\int_{x_1}^{x_2} Au(t, x)\,dx.$$
Similarly, the amount of the pollutant flowing through a fixed position $x$ during a time interval $(t_1, t_2)$ is given by
$$\int_{t_1}^{t_2} Avu(t, x)\,dt.$$

Now we apply the mass conservation principle, which states that the total amount of the pollutant in the section $(x_1, x_2)$ at time $t_2$ is equal to the total amount of the pollutant in the section $(x_1, x_2)$ at time $t_1$, plus the amount of pollutant that flowed through the position $x_1$ during the time interval $(t_1, t_2)$, minus the amount of pollutant that flowed through the position $x_2$ during the time interval $(t_1, t_2)$:

(4.2.13) $\quad \displaystyle \int_{x_1}^{x_2} u(t_2, x)A\,dx = \int_{x_1}^{x_2} u(t_1, x)A\,dx + \int_{t_1}^{t_2} u(t, x_1)Av\,dt - \int_{t_1}^{t_2} u(t, x_2)Av\,dt.$

From the fundamental theorem of calculus we have

(4.2.14) $\quad \displaystyle \int_{x_1}^{x_2} u(t_2, x)A\,dx - \int_{x_1}^{x_2} u(t_1, x)A\,dx = \int_{x_1}^{x_2}\bigl(u(t_2, x) - u(t_1, x)\bigr)A\,dx = \int_{x_1}^{x_2}\int_{t_1}^{t_2}\partial_t\bigl(u(t, x)A\bigr)\,dt\,dx.$

Similarly,

(4.2.15) $\quad \displaystyle \int_{t_1}^{t_2} u(t, x_2)Av\,dt - \int_{t_1}^{t_2} u(t, x_1)Av\,dt = \int_{t_1}^{t_2}\bigl(u(t, x_2) - u(t, x_1)\bigr)Av\,dt = \int_{t_1}^{t_2}\int_{x_1}^{x_2}\partial_x\bigl(u(t, x)Av\bigr)\,dx\,dt.$

If we substitute (4.2.14) and (4.2.15) into the mass conservation law (4.2.13) and interchange the order of integration, then we obtain that
$$\int_{x_1}^{x_2}\int_{t_1}^{t_2}\Bigl[\partial_t\bigl(u(t, x)A\bigr) + \partial_x\bigl(u(t, x)Av\bigr)\Bigr]\,dt\,dx = 0.$$
Since the last equation holds for every $x_1$, $x_2$, $t_1$ and $t_2$ (under the assumption that $u_t(t, x)$ and $u_x(t, x)$ are continuous functions), we have
$$u_t(t, x) + vu_x(t, x) = 0.$$
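For constant speed $v$, the equation just derived is solved by any profile transported rigidly, $u(t, x) = u_0(x - vt)$ — a standard fact that the short Python sketch below (standard library only; the speed and the initial profile are arbitrary choices of ours) confirms by central differences:

```python
import math

V = 2.0  # assumed constant flow speed

def u0(x):
    # assumed initial pollutant profile
    return math.exp(-x * x)

def u(t, x):
    # the transport equation u_t + v u_x = 0 carries the profile: u(t, x) = u0(x - v t)
    return u0(x - V * t)

def residual(t, x, h=1e-5):
    # u_t + v u_x via central differences; should vanish
    ut = (u(t + h, x) - u(t - h, x)) / (2 * h)
    ux = (u(t, x + h) - u(t, x - h)) / (2 * h)
    return ut + V * ux

for t, x in [(0.0, 0.3), (1.0, 1.5), (2.0, -0.5)]:
    assert abs(residual(t, x)) < 1e-6
```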



Example 4.2.8. Find the density function $u = u(t, x)$ which satisfies the differential equation
$$u_t + xu_x = 0,$$
if the initial density $u_0(x) = u(0, x)$ is given by
$$u_0(x) = e^{-(x-3)^2}.$$
Solution. The characteristic curves of this partial differential equation are given by the general solution of the equation
$$-x\,dt + dx = 0.$$
The general solution of the above ordinary differential equation is given by
$$xe^{-t} = c,$$
where $c$ is any constant. Therefore the general solution of the partial differential equation is given by
$$u(t, x) = f\bigl(xe^{-t}\bigr),$$
where $f$ is any differentiable function of one variable. Using the initial density $u_0$ we have
$$f(x) = e^{-(x-3)^2}.$$
Therefore
$$u(t, x) = e^{-\left(xe^{-t} - 3\right)^2}.$$
We can use Mathematica to check that really this is the solution:
In[2] := DSolve[{D[u[t, x], t] + xD[u[t, x], x] == 0,
u[0, x] == e−(x−3) }, u[t, x], {t, x}]
2

−t
Out[2] = {{u[t, x]− > e−(−3+e x) }}
2

Using Mathematica, plots of the solution u(t, x) and several characteristic
curves are given in Figure 4.2.2 (a) and (b), respectively.

Figure 4.2.2

The method of characteristics, described for linear partial differential equa-
tions of the first order, can also be applied to some nonlinear equations.
Quasi-linear Equation of the First Order. A differential equation of the
form

(4.2.16) a(x, y, u)ux (x, y) + b(x, y, u)uy (x, y) = c(x, y, u),

where u = u(x, y), (x, y) ∈ Ω ⊆ R2 , is called a quasi-linear differential


equation of the first order. We suppose that a(x, y, u), b(x, y, u) and c(x, y, u)
are continuous functions in Ω.
A similar discussion as for the linear equation gives the following theorem.
Theorem 4.2.2. The general solution u = u(x, y) of Equation (4.2.16) is
given by
f (F (x, y, u), G(x, y, u)) = 0,
where f is any differentiable function of two variables and F (x, y, u) = c1
and G(x, y, u) = c2 are two linearly independent solutions of the characteris-
tic equations
dx/a(x, y, u) = dy/b(x, y, u) = du/c(x, y, u) .

Example 4.2.9. Solve the partial differential equation uux + 2xuy = 2xu2 .
Solution. It is obvious that u ≡ 0 is a solution of the above equation. If
u ̸= 0, then the characteristic equations are

dx/u = dy/(2x) = du/(2xu²) .
From 2xu² dx = u du, i.e., du/u = 2x dx, we have u = c1 e^{x²}. Using this
solution and the other characteristic equation 2x dx = u dy, we obtain
c1 y + e^{−x²} = c2 . Therefore, the general solution of the given nonlinear
equation is given implicitly by

f ( ue^{−x²}, yue^{−x²} + e^{−x²} ) = 0,

where f is any differentiable function of two variables.
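One can verify directly that F = ue^{−x²} and G = yue^{−x²} + e^{−x²} are constant along the characteristic direction field (dx, dy, du) = (u, 2x, 2xu²); a SymPy sketch of this check (illustrative, not from the text):

```python
import sympy as sp

x, y, u = sp.symbols('x y u')

# Characteristic direction field of u*u_x + 2x*u_y = 2x*u^2:
# (dx, dy, du) = (u, 2x, 2x*u^2)
a, b, c = u, 2*x, 2*x*u**2

F = u*sp.exp(-x**2)                    # first invariant
G = y*u*sp.exp(-x**2) + sp.exp(-x**2)  # second invariant

# Directional derivative of each invariant along the characteristics
for inv in (F, G):
    d = a*sp.diff(inv, x) + b*sp.diff(inv, y) + c*sp.diff(inv, u)
    print(sp.simplify(d))  # 0 for both invariants
```

A zero directional derivative along the characteristic field is exactly what Theorem 4.2.2 requires of F and G.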

Exercises for Section 4.2.


1. Find the general solution of each of the following partial differential
equations.

(a) xux + y²uy = 0, x > 0, y > 0.

(b) xux + yuy = xy sin (xy), y > 0.



(c) (1 + x2 )ux + uy = 0.

(d) ux + 2xy²uy = 0.

2. Find the general solution of each of the following partial differential


equations.

(a) aux + buy + cu = 0, a ≠ 0, b and c are numerical constants.

(b) y³ux − xy²uy = cxu, c ≠ 0 is a constant, y > 0.

(c) ux + (x + y)uy = xu.

(d) xux − yuy + y²u = y².

(e) xux + yuy = u, x ≠ 0.

(f) x²ux + y²uy = (x + y)u, x ≠ 0 and y ≠ 0.

(g) 3ux + 4uy + 14(x + y)u = 6xe^{−(x+y)²}.

(h) x²ux + uy = xu.

3. Find the general solution u(x, y) of each of the following partial differ-
ential equations. Find the particular solution up (x, y) which satisfies
the given condition.

(a) 3ux − 5uy = 0, u(0, y) = sin y.

(b) 2ux − 3uy = cos x, u(x, x) = x².

(c) xux + yuy = 2xy, u(x, x²) = 2.

(d) ux + xuy = ( y − (1/2)x² )², u(0, y) = e^y .

(e) yux − 2xyuy = 2xu, u(0, y) = y³ .


(f) 2xux + yuy − x − u = 0, u(1, y) = 1/y .
4. Find the particular solution of each of the following partial differential
equations which satisfies the given condition.
(a) (1/x) ux + (1/y) uy = 0, u(x, 0) = cx⁴ , where c is a constant.

(b) (1/x) ux + (1/y) uy = 1/y , u(x, 1) = (1/2)(3 − x²).
(c) xux + (x + y)uy = u + 1, u(x, 0) = x².

(d) 2xyux + (x² + y²)uy = 0, u(x, y) = e^x − y if x + y = 1.

(e) xux − yuy = u, u(y, y) = y².

(f) xux + yuy = 2xyu, u(x, y) = f (x, y) on the circle x² + y² = 1, f


is a given differentiable function.

(g) ex ux + uy = y, u(x, 0) = g(x), g is a given differentiable function.

5. Solve the following equations with the given initial conditions.

(a) ut + 2xtux = u, u(0, x) = x.

(b) ut + xux = 0, u(0, x) = f (x), where f is a given differentiable


function.

(c) ut = x2 ux , u(x, x) = f (x), where f is a given differentiable


function.

(d) ut + cux = 0, u(0, x) = f (x), where f is a given differentiable


function.

(e) ut + ux = x cos t, u(0, x) = sin x.

(f) ut + ux − u = t, u(0, x) = ex .

6. Solve the transport equation ut + ux = 0, 0 < x < ∞, t > 0, subject


to the boundary condition u(t, 0) = t, and the initial condition
u(0, x) = x²  for 0 ≤ x ≤ 2,   u(0, x) = x  for x > 2.

4.3 Linear Partial Differential Equations of the Second Order.


In this section we will study linear equations of the second order in one
unknown function. The following are examples of this type of equations:

ut = c²uxx ,     the Heat equation.

utt = c²uxx ,    the Wave equation.

uxx + uyy = 0,   the Laplace equation.

A fairly large class of important problems of physics and engineering can be


written in the form of the above listed equations. The heat equation is used
to model heat distribution (flow) along a rod or wire. It is easily generalized
to describe heat flow in planar or three dimensional objects. This equation
can also be used to describe many diffusion processes. The wave equation is
used to describe many physical processes, such as the vibrations of a guitar
string, vibrations of a stretched membrane, waves in water, sound waves, light
waves and radio waves. Laplace’s equation is applied to problems involving
electrostatic and gravitational potentials.

4.3.1 Important Equations of Mathematical Physics.


In this part we study several physical problems which lead to second order
linear partial differential equations.
The Wave Equation. Chapter 5 is entirely devoted to the wave equation.
Now, we will only derive this important equation.
Example 4.3.1. A very thin, uniform and perfectly flexible string of length l
is placed on the Ox-axis. Let us assume that the string is fastened at its end
points x = 0 and x = l. We also assume that the string is free to vibrate only
in the vertical direction. Initially, the string is in a horizontal position. At the
initial moment t = 0, we displace the string slightly and the string will start
to vibrate. By u(t, x) we denote the vertical displacement of the point x of
the string from its equilibrium position at moment t. We assume that the
function u(t, x) is twice differentiable with respect to both variables x and
t, i.e., u ∈ C²( (0, l) × (0, ∞) ). We will also assume that the density ρ of the
string is constant. Let us consider a small segment of the string between the
points A and B (see Figure 4.3.1). This segment has mass which is equal to
ρh and acceleration utt . The only forces acting along this string segment are
the string tension T . We ignore gravity g since it is very small relative to the
tension forces. From physics we know that tension is the magnitude of the
pulling force exerted by the string, and the tension force is in the direction
of the tangent line. According to Hooke’s law, the tension of the string at
the point x and time t is equal to ux (x, t). The vertical components of the
tension forces at the points A and B are

−T1 sin α1 and T2 sin α2

respectively.

Since there is no horizontal motion, the horizontal components of the ten-


sion forces at the points A and B must be the same:

(4.3.1) T1 cos α1 = T2 cos α2 = T.



Figure 4.3.1

Now if we apply Newton’s Second Law of motion, which states that mass
times acceleration equals force, then we have

ρhutt = T2 sin α2 − T1 sin α1 .

If we divide the last equation by T and use Equation (4.3.1), we obtain that

(ρh/T ) utt (x, t) = (T2 sin α2)/(T2 cos α2) − (T1 sin α1)/(T1 cos α1)
                   = tan α2 − tan α1 .

Now since
tan α1 = ux (x, t), tan α2 = ux (x + h, t)
we have
(ρ/T ) utt (x, t) = ( ux (x + h, t) − ux (x, t) ) / h .
Letting h → 0 and setting a² = T /ρ we obtain

utt (x, t) = a2 uxx (x, t).

This equation is called the wave equation.

We can consider the vibration of a two dimensional elastic membrane


(drumhead) over a plane domain Ω. In a similar way as for the vibrat-
ing string we find that the equation of the deviation u(x, y, t) at a point
(x, y) ∈ Ω and at time t has the form
[ ]
utt (x, y, t) = a2 uxx (x, y, t) + uyy (x, y, t) , (x, y) ∈ Ω, t > 0,

where a is a constant which depends on the physical characteristics of the


membrane.
Three dimensional vibrations can be treated in the same way, resulting in
the three dimensional wave equation
( )
utt = a2 uxx + uyy + uzz .

Remark. If we introduce the Laplace operator or Laplacian ∆ by

(4.3.2) ∆ = ∂x2 + ∂y2 + ∂z2 ,

then the wave equation in any dimension can be written in the form

utt = a2 ∆u.

Example 4.3.2. The Telegraph Equation. A variation of the wave equa-


tion is the telegraph equation

utt − c2 uxx + aut + bu = 0, 0 < x < l, t > 0,

where a, b and c are constants. This equation is applied in problems related


to transmission of electrical signals in telephone or electrical lines (cables). A
mathematical derivation for the telegraph equation in terms of voltage and
current for a section of a transmission line will be investigated.
Consider a small element of a telegraph cable as an electrical circuit of
length h at a point x, where x is the distance from one of the ends of the
cable. Let i(x, t) be the current in the cable, v the voltage in the cable,
e(x, t) the potential at any point x of the cable. Further, let R be the
coefficient of resistance of the cable, L the coefficient of inductance of the
cable, G the coefficient of conductance to the ground and C the capacitance
to the ground. According to Ohm’s Law, the voltage change in the cable is
given by

(4.3.3) v = Ri(x, t).

According to another Ohm’s Laws the voltage decrease across the capacitor
is given by

1
(4.3.4) v= i(x, t) dt
C

and the voltage decrease across the inductor is given by

(4.3.5)  v = L ∂i(x, t)/∂t .

The electrical potential e(x + h, t) at the point x + h is equal to the potential


e(x, t) at x minus the decrease in potential along the cable element [x, x+h].
Therefore, from (4.3.3), (4.3.4) and (4.3.5) it follows that

e(x + h, t) − e(x, t) = −Rhi(x, t) − Lhit (x, t).

If we divide the last equation by h and let h → 0 we obtain that

(4.3.6) ex (x, t) = −Ri(x, t) − Lit (x, t).

For the current i we have the following. The current i(x + h, t) at x + h
is equal to the current i(x, t) at x minus the current lost to the ground
through the leakage conductance G and through the capacitance C; therefore
we have

(4.3.7)  i(x + h, t) − i(x, t) = −Ghe(x, t) − Chet (x, t).

Dividing both sides of (4.3.7) by h and letting h → 0 we obtain

(4.3.8)  ix (x, t) = −Ge(x, t) − Cet (x, t).

If now we differentiate (4.3.6) with respect to x and (4.3.8) with respect to


t, the following equations are obtained.

(4.3.9)  exx (x, t) = −Rix (x, t) − Lixt (x, t),

and

(4.3.10)  ixt (x, t) = −Get (x, t) − Cett (x, t).

From (4.3.9) and (4.3.10) it easily follows that

(4.3.11) exx (x, t) − LCett (x, t) − (RC + GL)et (x, t) − GRe(x, t) = 0,

and

(4.3.12) ixx (x, t) − LCitt (x, t) − (RC + GL)it (x, t) − GRi(x, t) = 0.

Equations (4.3.11) and (4.3.12) are the telegraphic equations for e(x, t) and
i(x, t).
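As a spot check of (4.3.11), consider the distortionless case RC = GL (the Heaviside condition), under which a damped traveling wave is a solution. The SymPy sketch below uses the illustrative values R = G = L = C = 1, chosen here only for simplicity:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')

# With R = G = L = C = 1 (so RC = GL), equation (4.3.11) reads
#   e_xx - e_tt - 2 e_t - e = 0
e = sp.exp(-t)*f(x - t)   # damped traveling wave, f arbitrary

residual = (sp.diff(e, x, 2) - sp.diff(e, t, 2)
            - 2*sp.diff(e, t) - e)
print(sp.simplify(residual))  # 0
```

The exponential damping factor e^{−t} is what the extra lower-order terms aut + bu contribute compared with the pure wave equation.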


Example 4.3.3. The Heat/Diffusion Equation. In this example we will


derive the heat equation using some well known physical laws and later, in
Chapter 6, we will solve this equation.
Let us consider a homogeneous cylindrical metal rod of length l which has
constant cross sectional area A and constant density ρ. We place the rod
along the x-axis from x = 0 to x = l. We assume that the lateral surface of
the rod is heat insulated, that is, we assume that the heat can escape or enter
the rod only at either end. Let u(t, x) denote the temperature of the rod at
position x at time t. We also assume that the temperature is constant at
any point of the cross section. For derivation of the equation we will use the
following physical laws of heat conduction:
1. The Fourier/Fick Law. When a body is heated, heat flows in the direction
of decreasing temperature, from points of higher temperature to points of
lower temperature. The amount of heat which flows through the section at
a position x during the time interval (t, t + ∆t) is equal to

(4.3.13)  Q = −A ∫_t^{t+∆t} γux (x, t) dt,

where the constant γ is the thermal conductivity.


2. The rate of change of heat in a section of the rod between x and x + h
is given by

(4.3.14)  ∫_x^{x+h} cρAut (x, t) dx,

where the constant c is the specific heat, which is the amount of heat that it
takes to raise one unit mass by one unit temperature.
3. Conservation Law of Energy. This law states that the rate of change of
heat in the element of the rod between the points x and x + h is equal to
the sum of the rate of change of heat that flows in and the rate of change of
heat that flows out, that is,

(4.3.15)  cρAhut (x, t) = γAux (x + h, t) − γAux (x, t).

From (4.3.13), (4.3.14) and (4.3.15) it follows that



(4.3.16)  ∫_t^{t+∆t} [ γux (x + h, τ ) − γux (x, τ ) ] dτ = cρ ∫_x^{x+h} [ u(ξ, t + ∆t) − u(ξ, t) ] dξ.

Since we assumed that uxx and ut are continuous, from the mean value
theorem of integral calculus we obtain that
(4.3.17)  [ γux (x + h, τ∗ ) − γux (x, τ∗ ) ] ∆t = cρ [ u(ξ∗ , t + ∆t) − u(ξ∗ , t) ] h,

for some τ∗ ∈ (t, t + ∆t) and ξ∗ ∈ (x, x + h). If we apply the mean value
theorem of differential calculus to (4.3.17), then we have
(∂/∂x) [ γux (ξ1 , τ∗ ) ] ∆t h = cρut (ξ∗ , t1 ) h ∆t,
for some ξ1 ∈ (x, x + h) and t1 ∈ (t, t + ∆t). If we divide the last equation
by h∆t, and let ∆t → 0 and h → 0, we obtain that

(∂/∂x) ( γux ) = cρut .

Since the rod is homogeneous, γ, c and ρ are constants, and the last equation
takes the form

(4.3.18)  ut = a²uxx ,   a² = γ/(cρ).

Equation (4.3.18) is called the one dimensional heat equation or one di-
mensional heat-conduction equation.
As in the case of the wave equation, we can consider a two dimensional
heat equation for a two dimensional plate, or three dimensional object:
ut = a2 ∆u.
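As a quick plausibility check of (4.3.18), one classical separated solution (such solutions are derived systematically in Chapter 6) can be substituted directly; a SymPy sketch:

```python
import sympy as sp

x, t, a = sp.symbols('x t a')

# A classical separated solution of u_t = a^2 u_xx
u = sp.exp(-a**2*t)*sp.sin(x)

residual = sp.diff(u, t) - a**2*sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```

The exponential decay in t against oscillation in x is typical of heat-conduction solutions, in contrast with the traveling waves of the wave equation.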

The Laplace Equation. There are many physical problems which lead to
the Laplace equation. A detailed study of this equation will be taken up in
Chapter 7.
Let us consider some applications of this equation with a few examples.
Example 4.3.4. It was shown that the temperature of a nonstationary heat
flow without the presence of heat sources satisfies the heat differential equation
ut = a2 ∆ u.
If a stationary heat process occurs (the heat state does not change with time),
then the temperature distribution is constant with time; thus it is only a
function of position. Consequently, the heat equation in this case reduces to
∆ u = 0.
This equation is called the Laplace equation.
However, if heat sources are present, the heat equation in a stationary
situation takes the form
F
(4.3.19) ∆ u = −f, f= ,
k
where F is the heat density of the sources, and k is the heat conductivity
coefficient. The nonhomogeneous Laplace equation (4.3.19) is usually called
the Poisson equation.

Example 4.3.5. As a second example, we will investigate the work in a con-


servative vector field. Gravitational force, spring force, electrical and magnetic
force are several examples of conservative forces.
Let Ω be the whole plane R2 except the origin O. Suppose that a particle
starts at any point M (x, y) of the plane Ω and it is attracted toward the
origin O by a force F = (f, g) of magnitude

1/r = 1/√( x² + y² ) .

We will show that the work w performed only by the attraction force F
toward O as the particle moves from M to N (1, 0) is path independent, i.e.,
w does not depend on the curve along which the particle moves, but only on
the initial and terminal points M and N (see Figure 4.3.2.)

Figure 4.3.2


Since the magnitude of F = (f, g) is 1/√(x² + y²), we have

f = f (x, y) = −x/(x² + y²) ,   g = g(x, y) = −y/(x² + y²) .

Consider the function


(4.3.20)  v(x, y) = −(1/2) ln( x² + y² ).
It is easily seen that

(4.3.21) f (x, y) = vx (x, y), g(x, y) = vy (x, y).

Let c be any continuous and piecewise smooth curve in Ω from the point M
to the point N and let

x = x(t), y = y(t), a ≤ t ≤ b,

be the parametric equations of the curve c. The work wc done to move the
particle from the point M to point N along the curve c under the force F
is given by the line integral
(4.3.22)  wc = ∫_a^b [ f (x(t), y(t)) x′(t) + g(x(t), y(t)) y′(t) ] dt.

From (4.3.21) and (4.3.22) it follows that

wc = ∫_a^b [ f (x(t), y(t)) x′(t) + g(x(t), y(t)) y′(t) ] dt
   = ∫_a^b [ vx x′(t) + vy y′(t) ] dt = ∫_a^b (d/dt) ( v(x(t), y(t)) ) dt
   = v(x(b), y(b)) − v(x(a), y(a)) = v(N ) − v(M ).
Therefore, the work wc does not depend on c (it depends only on the initial
point M (x, y) and terminal point N (1, 0)). Thus
wc = u(x, y)
for some function u(x, y).
Now, since the work wc is path independent we have

wc = w_{MP} + w_{PN} ,

where w_{MP} is the work along the circular arc MP from M to P (r, 0) and
w_{PN} is the work along the line segment PN. For the work w_{MP}, from
(4.3.21) and the path independence of the work we have

w_{MP} = v(P ) − v(M ) = −(1/2) ln(r²) + (1/2) ln(r²) = 0.
For the work wP N along the line segment P N we have
w_{PN} = ∫_{PN} f (x, y) dx + g(x, y) dy = ∫_{PN} f (x, y) dx
       = − ∫_r^1 (1/x) dx = ln r = (1/2) ln( x² + y² ).
Therefore,

u(x, y) = (1/2) ln( x² + y² ).
Differentiating the above function u(x, y) twice with respect to x and y we
obtain
uxx = (y² − x²)/(x² + y²)² ,   uyy = (x² − y²)/(x² + y²)² .
Thus,
uxx + uyy = 0.
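The last computation can also be carried out symbolically; a SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# The work function found above
u = sp.log(x**2 + y**2)/2

laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(laplacian))  # 0
```

The check holds away from the origin, where u is not defined; in fact (1/2) ln(x² + y²) is the classical fundamental solution of the two dimensional Laplace equation.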

4.3.2 Classification of Linear PDEs of the Second Order.


In this section we will consider the most general linear partial differential
equation of the second order of a function u(x, y) of two independent variables
x and y in a domain Ω ⊆ R2 :
(4.3.23) L u ≡ auxx + 2buxy + cuyy + a1 ux + c1 uy + du = f (x, y), (x, y) ∈ Ω.
We assume that the coefficients a = a(x, y), b = b(x, y), c = c(x, y),
a1 = a1 (x, y), c1 = c1 (x, y) and d = d(x, y) are functions of x and y,
and the functions a(x, y), b(x, y) and c(x, y) are twice continuously differen-
tiable in Ω. We assume that the function u(x, y) is also twice continuously
differentiable. If the function f (x, y) in (4.3.23) is zero in Ω, then the equa-
tion
(4.3.24) auxx + 2buxy + cuyy + a1 ux + c1 uy + du = 0, (x, y) ∈ Ω
is called homogeneous. We will find an invertible transformation of the vari-
ables x and y to new variables ξ and η such that the transformed differential
equation has a much simpler form than (4.3.23) or (4.3.24). Let the functions
(4.3.25) ξ = ξ(x, y), η = η(x, y)
be twice differentiable, and let the Jacobian

ξx ηy − ξy ηx

be different from zero in the domain Ω of consideration. Then the transfor-


mation defined by (4.3.25) is invertible in some domain. Using the chain rule
we compute the first partial derivatives
ux = uξ ξx + uη ηx , uy = uξ ξy + uη ηy ,
and after that the second order partial derivatives
uxx = uξξ ξx2 + 2uξη ξx ηx + uηη ηx2 + uξ ξxx + uη ηxx ,
( )
uxy = uξξ ξx ξy + uξη ξx ηy + ξy ηx + uηη ηx ηy + uξ ξxy + uη ηxy ,
uyy = uξξ ξy2 + 2uξη ξy ηy + uηη ηy2 + uξ ξyy + uη ηyy .
If we substitute the above derivatives into Equation (4.3.23), we obtain
(4.3.26) Auξξ + 2Buξη + Cuηη + Ãuξ + C̃uη + du = f,
where


(4.3.27)
A = aξx² + 2bξx ξy + cξy² ,
B = aξx ηx + b( ξx ηy + ηx ξy ) + cξy ηy ,
C = aηx² + 2bηx ηy + cηy² ,
Ã = a1 ξx + c1 ξy ,
C̃ = a1 ηx + c1 ηy .

We can simplify Equation (4.3.26) if we can select ξ and η such that at least
one of the coefficients A, B and C is zero.
Consider the family of functions y = y(x) defined by ξ(x, y) = c1 , where
c1 is any numerical constant. Differentiating this equation with respect to x
we have
ξx + ξy y ′ (x) = 0,
from which it follows that
ξx = −ξy y ′ (x).
If we substitute ξx from the last equation into the expression for A in (4.3.27)
it follows that
(4.3.28)  A = ξy² ( a y′(x)² − 2b y′(x) + c ).

Working similarly, if we consider the family of functions y = y(x) defined by
η(x, y) = numerical constant, we obtain

(4.3.29)  C = ηy² ( a y′(x)² − 2b y′(x) + c ).

Equations (4.3.28) and (4.3.29) suggest that we need to consider the nonlin-
ear ordinary differential equation

ay ′ (x) − 2by ′ (x) + c = 0,


2
(4.3.30)

Equation (4.3.30) is called the characteristic equation of the given partial


differential equation. If a ̸= 0, from (4.3.30) we obtain

(4.3.31)  y′(x) = ( b ± √(b² − ac) ) / a .
From (4.3.31) it follows that we have to consider the following three cases:
b2 − ac > 0, b2 − ac < 0 and b2 − ac = 0.
Case 1°. b² − ac > 0. In this case the linear partial differential equation
(4.3.23) is called hyperbolic. The wave equation is a prototype of a hyperbolic
equation.
Let φ(x, y) = C1 and ψ(x, y) = C2 be the two (linearly independent)
general solutions of Equations (4.3.31). If we select now

ξ = φ(x, y) and η = ψ(x, y),

then from (4.3.28) and (4.3.29) it follows that both coefficients A and C
will be zero, and so the transformed equation (4.3.26) takes the form

(4.3.32)  uξη + (Ã/(2B)) uξ + (C̃/(2B)) uη + (d/(2B)) u = f /(2B).

Equation (4.3.32) is the canonical form of the equation of hyperbolic type.


Very often, another canonical form is used. If we introduce new variables α
and β by
ξ = α + β, η = α − β,
then from

uξ = (1/2)( uα + uβ ),   uη = (1/2)( uα − uβ ),   uξη = (1/4)( uαα − uββ ),

Equation (4.3.32) takes the form

uαα − uββ = F ( u, uα , uβ ).

Case 2°. b² − ac = 0. An equation for which b² − ac = 0 is said to be


of parabolic type or simply parabolic equation. A prototype of this type of
equation is the heat equation.
In this case, the two equations (4.3.31) coincide and we have a single
differential equation. Let the general solution of this equation be given by
ξ(x, y) = constant. Now, select the variables ξ and η by

ξ = ξ(x, y), η = η(x, y),


where η(x, y) can be taken to be any function linearly independent of ξ.
Because of this choice of the variable ξ, from b2 = ac and (4.3.31) we have
√a ξx + √c ξy = (1/√a) ( aξx + bξy ) = 0.

Therefore,

A = aξx² + 2bξx ξy + cξy² = ( √a ξx + √c ξy )² = 0,

and

B = aξx ηx + b( ξx ηy + ηx ξy ) + cξy ηy
  = ( √a ξx + √c ξy )( √a ηx + √c ηy ) = 0,
and so, the canonical form of the given partial differential equation (after
dividing (4.3.26) by C) is
uηη = F ( ξ, η, u, uξ , uη ).

Case 3°. b² − ac < 0. In this case, the linear partial differential equation is


called elliptic. The Laplace and Poisson equations are prototypes of this type
of equation. The right hand sides of Equations (4.3.31) in this case will be
complex functions. Let the general solution of one of Equations (4.3.31) be
given by ϕ(x, y) = constant, where

ϕ(x, y) = φ(x, y) + iψ(x, y).



It is easy to check that the function ϕ(x, y) satisfies the equation

a( ϕx )² + 2bϕx ϕy + c( ϕy )² = 0.

If we choose ξ = φ(x, y) and η = ψ(x, y) and separate the real and imaginary
parts in the above equation, we obtain

(4.3.33)  a( φx )² + 2bφx φy + c( φy )² = a( ψx )² + 2bψx ψy + c( ψy )²,

and

(4.3.34)  aφx ψx + b( φx ψy + φy ψx ) + cφy ψy = 0.

Equations (4.3.33) and (4.3.34) say precisely that A = C and B = 0, where
A, B and C are the coefficients defined in (4.3.27). Dividing both sides of
Equation (4.3.26) by A (provided that A ≠ 0) we obtain

(4.3.35)  uξξ + uηη = F ( ξ, η, u, uξ , uη ).

Equation (4.3.35) is called the canonical form of equation of the elliptic type.

Let us take a few examples.


Example 4.3.6. Find the domains in which the equation

yuxx − 2xuxy + yuyy = 0

is hyperbolic, parabolic or elliptic.


Solution. The discriminant for this equation is D = b2 − ac = x2 − y 2 . There-
fore the equation is hyperbolic in the region Ω of points (x, y) for which
y 2 < x2 , i.e., for which |y| < |x|. The equation is parabolic when D = 0, i.e.,
when y = ±x. The equation is elliptic when D < 0, i.e., when |y| > |x|.
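The pointwise discriminant test used in this example can be packaged in a few lines of Python (an illustrative helper written for this discussion, not taken from the text):

```python
def classify(a, b, c):
    """Classify a*u_xx + 2*b*u_xy + c*u_yy + (lower order) = f
    at a point by the sign of the discriminant b^2 - a*c."""
    d = b*b - a*c
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# Example 4.3.6 at sample points: y*u_xx - 2x*u_xy + y*u_yy = 0,
# so a = c = y and b = -x
print(classify(1.0, -2.0, 1.0))   # point (2, 1): |y| < |x| -> hyperbolic
print(classify(2.0, -1.0, 2.0))   # point (1, 2): |y| > |x| -> elliptic
print(classify(1.0, -1.0, 1.0))   # point (1, 1): y = x    -> parabolic
```

Because a, b, c depend on (x, y), the type of a variable-coefficient equation must be decided point by point, as the helper makes explicit.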

Example 4.3.7. Solve the equation

y²uxx + 2xyuxy + x²uyy − (y²/x) ux − (x²/y) uy = 0

by reducing it to its canonical form.


Solution. For this equation, b2 − ac = x2 y 2 − x2 y 2 = 0 and so the equation
is parabolic in the whole plane. The characteristic equation (4.3.30) for this
equation is
y² y′(x)² − 2xy y′(x) + x² = 0.

Solving for y ′ (x) we obtain


y′(x) = x/y .

The general solution of the above equation is given by x2 − y 2 = c, where c


is an arbitrary constant. Therefore, we introduce the new variable η to be
an arbitrary function (for example, we can take η = x) and the variable ξ
is introduced by
ξ = x2 − y 2 , η = x.
Then, using the chain rule we obtain
ux = 2xuξ + uη , uy = −2yuξ ,
uxx = 2uξ + 4x2 uξξ + 4xuξη + uηη ,
uxy = −4xyuξξ − 2yuξη , uyy = −2uξ + 4y 2 uξξ .

Substituting the above partial derivatives into the given equation we obtain

ηuηη = uη .

This equation can be treated as an ordinary differential equation. By separa-


tion of variables we find that its general solution is

u(ξ, η) = η 2 f (ξ) + g(ξ),

where f and g are arbitrary twice differentiable functions. Returning to the


variables x and y it follows that the solution of the equation is

u(x, y) = x2 f (x2 − y 2 ) + g(x2 − y 2 ).
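The solution can be verified by substituting it back into the original equation for arbitrary twice differentiable f and g; a SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')

# General solution found above, with arbitrary profiles f and g
u = x**2*f(x**2 - y**2) + g(x**2 - y**2)

residual = (y**2*sp.diff(u, x, 2) + 2*x*y*sp.diff(u, x, y)
            + x**2*sp.diff(u, y, 2)
            - y**2/x*sp.diff(u, x) - x**2/y*sp.diff(u, y))
print(sp.simplify(residual))  # 0
```

Because f and g stay symbolic, the cancellation confirms the whole two-parameter family of solutions at once.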

Example 4.3.8. Solve the equation

y²uxx − 4xyuxy + 3x²uyy − (y²/x) ux − (3x²/y) uy = 0
by reducing it to its canonical form.
Solution. For this equation, b² − ac = 4x²y² − 3x²y² = x²y² > 0 for every
x ≠ 0 and y ≠ 0. Therefore, the equation is hyperbolic off the coordinate
axes, and parabolic on the coordinate axes Ox and Oy. For the hyperbolic case, the
characteristic equation (4.3.30) is

y² y′(x)² + 4xy y′(x) + 3x² = 0.

Solving for y ′ (x) we obtain


y′(x) = −x/y ,   y′(x) = −3x/y .
The general solution of the last two ordinary differential equations is given by

x2 + y 2 = C1 , and 3x2 + y 2 = C2 ,

where C1 and C2 are arbitrary numerical constants. Therefore, we introduce


new variables ξ and η by

ξ = 3x2 + y 2 and η = x2 + y 2 .

Therefore, using the chain rule we obtain

ux = 6xuξ + 2xuη , uy = 2yuξ + 2yuη ,


uxx = 6uξ + 2uη + 36x2 uξξ + 24x2 uξη + 4x2 uηη ,
uxy = 12xyuξξ + 16xyuξη + 4xyuηη ,  uyy = 2uξ + 2uη + 4y²uξξ + 8y²uξη + 4y²uηη .

If we substitute the above partial derivatives into the given equation we obtain

uξη = 0.

It is easy to find that the general solution of the above equation is

u(ξ, η) = f (ξ) + g(η),

where f and g are arbitrary twice differentiable functions. Returning to the


variables x and y it follows that the solution of the equation is

u(x, y) = f (x2 + y 2 ) + g(3x2 + y 2 ).
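Again the general solution can be checked by direct substitution; a SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')

# General solution found above
u = f(x**2 + y**2) + g(3*x**2 + y**2)

residual = (y**2*sp.diff(u, x, 2) - 4*x*y*sp.diff(u, x, y)
            + 3*x**2*sp.diff(u, y, 2)
            - y**2/x*sp.diff(u, x) - 3*x**2/y*sp.diff(u, y))
print(sp.simplify(residual))  # 0
```

Each of the two families of characteristics x² + y² = C1 and 3x² + y² = C2 contributes one arbitrary function, as is typical for hyperbolic equations.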

Example 4.3.9. Reduce to a canonical form the equation

5uxx + 4uxy + 4uyy = 0.

Solution. For this equation, b² − ac = 4 − 20 = −16 < 0 for every (x, y) ∈ R² and


therefore, the equation is elliptic in the whole plane R2 . The characteristic
equation (4.3.30) in this case is

5 y′(x)² − 4 y′(x) + 4 = 0,

and solving for y ′ (x) we obtain

y′(x) = 2/5 ± (4/5) i .

The general solutions of the last two ordinary differential equations are given
by
y − ( 2/5 + (4/5)i ) x = C1 ,  and  y − ( 2/5 − (4/5)i ) x = C2 ,

where C1 and C2 are arbitrary numerical constants. Therefore, we introduce


new variables ξ and η by
ξ = 5y − (2 + 4i)x   and   η = 5y − (2 − 4i)x.
In order to avoid working with complex numbers we introduce two new vari-
ables α and β by
α=ξ+η and β = i(ξ − η).
Then
α = 10y − 4x and β = 8x.
Using the chain rule we obtain
ux = −4uα + 8uβ ,  uy = 10uα ,  uxx = 16uαα − 64uαβ + 64uββ ,
uxy = −40uαα + 80uαβ ,  uyy = 100uαα .
If we substitute the above partial derivatives in the given equation, then its
canonical form is
uαα + uββ = 0.
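Since any function harmonic in (α, β) must satisfy the original equation, a concrete choice such as α² − β² = (10y − 4x)² − (8x)² gives a quick consistency check of the computation; a SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')

# alpha = 10y - 4x, beta = 8x; alpha^2 - beta^2 is harmonic in (alpha, beta)
u = (10*y - 4*x)**2 - (8*x)**2

residual = 5*sp.diff(u, x, 2) + 4*sp.diff(u, x, y) + 4*sp.diff(u, y, 2)
print(sp.expand(residual))  # 0
```

Any other harmonic function of α and β, for example αβ or e^α cos β, could be used in the same way.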

Example 4.3.10. Find the general solution of the wave equation


utt − c2 uxx = 0, c > 0.

Solution. The characteristic equation is given by


x′(t)² − c² = 0,

whose solutions are x + ct = c1 and x − ct = c2 . Therefore, we introduce the


new variables ξ and η by ξ = x + ct and η = x − ct. From

uxx = uξξ + 2uξη + uηη ,   utt = c²uξξ − 2c²uξη + c²uηη ,
the wave equation takes the form
uξη = 0.
The general solution of the last equation is given by
u(ξ, η) = f (ξ) + g(η),
where f and g are twice differentiable functions, each of a single variable.
Therefore, the general solution of the wave equation is given by
u(x, t) = f (x + ct) + g(x − ct).
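d'Alembert's form can be confirmed for arbitrary twice differentiable f and g; a SymPy sketch:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')

# Superposition of a left- and a right-moving wave
u = f(x + c*t) + g(x - c*t)

residual = sp.diff(u, t, 2) - c**2*sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```

The two terms are waves traveling left and right with speed c, which is why the characteristics x ± ct = const carry the solution.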

The classification of linear partial differential equations of the second order


in more than two variables is much more complicated and involves elements of
multi-variable differential calculus and linear algebra. For these reasons this
topic is not discussed in this book and the interested reader can consult more
advanced books.

Exercises for Section 4.3.


1. Suppose that f and g are functions of class C 2 (−∞, ∞). Find a sec-
ond order partial differential equation that is satisfied by all functions
of the form u(x, y) = y 3 + f (xy) + g(x).

2. Classify the following linear partial differential equations with con-


stant coefficients as hyperbolic, parabolic or elliptic.

(a) uxx − 6uxy = 0.

(b) 4uxx − 12uxy + ux − uy = 0.

(c) 4uxx + 6uxy + 9uyy + uy = 0.

(d) uxx + 4uxy + 5uyy − ux + uy = 0.

(e) uxx − 4uxy + 4uyy + 3ux − 5uy = 0.

(f) uxx + 2uxy − 3uyy + uy = 0.

3. Determine the domains in the Oxy plane where each of the following
equations is hyperbolic, parabolic or elliptic.

(a) uxx + 2yuxy + uy = 0.

(b) (1 + x)yuxx + 2xyuxy − y²uyy = 0.

(c) yuxx + yuyy + ux − uy = 0.

(d) x2 uxx + 4yuxy + uyy + 2ux − 3uy = 0.

(e) yuxx − 2uxy + e^x uyy + x²uy − u = 0.

(f) 3yuxx − xuxy + u = 0.

(g) uxx + 2xyuxy + a2 uyy + u = 0.

4. Classify each of the following equations as hyperbolic, parabolic or


elliptic, and reduce it to its canonical form equations.

(a) uxx + yuxy + uyy = 0.

(b) xuxx + uyy = x2 .

(c) 4uxx + 5uxy + uyy + ux + uy = 0.



(d) 2uxx − 3yuxy + uyy = y.

(e) uxx + yuyy = 0.

(f) y²uxx + x²uyy = 0.

(g) (a² + x²)uxx + (a² + y²)uyy + xux + yuy = 0. Hint: ξ =
ln( x + √(a² + x²) ), η = ln( y + √(a² + y²) ).

5. Classify each of the following partial differential equations as hyper-


bolic or parabolic and find its general solution.

(a) 9uxx + 12uxy + 4uyy = 0.

(b) uxx + 8uxy + 16uyy = 0.

(c) uxx + 2uxy − 3uyy = 0.

(d) 2xyuxx + x2 uxy − ux = 0.

(e) x²uxx + 2xyuxy + y²uyy = 4x².

(f) (1 + sin x)uxx − 2 cos x uxy + (1 − sin x)uyy + ( (1 + sin x)²/(2 cos x) ) ux
+ (1/2)(1 − sin x)uy = 0.
6. Classify each of the following partial differential equations as hyper-
bolic or parabolic and find the indicated particular solution.

(a) uxx + uxy − 2uyy + 1 = 0, u(0, y) = uy (0, y) = y.

(b) e2x uxx − 2ex+y uxy + e2y uyy + e2x ux + e2y uy = 0, u(0, y) = e−y ,
uy (0, y) = −e−y .
(c) uxx − 2uxy + uyy = 4e^{3y} + cos x, u(x, 0) = 1 + cos x − 5/9,
u(0, y) = (4/9) e^{3y} .

4.4 Boundary and Initial Conditions.


In the previous sections we have seen that partial differential equations
have infinitely many solutions. Therefore, additional conditions are required
in order for a partial differential equation to have a unique solution. In gen-
eral, we have two types of conditions, boundary value conditions and initial

conditions. Boundary conditions are constraints on the function and the space
variable, while initial conditions are constraints on the unknown function and
the time variable.
Consider a linear partial differential equation of the second order

(4.4.1) L u = G(t, x), x∈Ω

where L is a linear differential operator of the second order (in L u partial


derivatives up to the second order are involved). An example of such an
operator was the operator given by Equation (4.3.23). Ω is a domain in Rn ,
n = 1, 2, 3. The boundary of Ω is denoted by ∂Ω.
Related to differential equation (4.4.1) we introduce the following terms:
Boundary conditions are a set of constraints that describe the nature of
the unknown function u(t, x), x ∈ Ω, on the boundary ∂Ω of the domain Ω.
There are three important types of boundary conditions:
Dirichlet Conditions. These conditions specify prescribed values f (t, x),
x ∈ ∂Ω, of the unknown function u(t, x) on the boundary ∂Ω. We usually
write these conditions in the form


(4.4.2) u(t, x)|∂Ω = f(t, x).

Neumann Conditions. With these conditions, the value of the normal derivative ∂u(t, x)/∂n on the boundary ∂Ω is specified. Symbolically we write this as

(4.4.3) ∂u(t, x)/∂n |∂Ω = g(t, x).

Robin (Mixed) Conditions. These conditions are linear combinations of the Dirichlet and Neumann conditions:

(4.4.4) a u(t, x)|∂Ω + b ∂u(t, x)/∂n |∂Ω = h(t, x),

for some nonzero constants or functions a and b and a given function h,


defined on ∂Ω.
Let us note that different portions of the boundary ∂Ω can have different
types of boundary conditions.
When a partial differential equation involves the time variable t, we also have to consider initial (Cauchy) conditions. These
conditions specify the value of the unknown function and its higher order
derivatives at the initial time t = t0 .
If the functions f (t, x), g(t, x) and h(t, x) in Equations (4.4.2), (4.4.3)
and (4.4.4), respectively, are identically zero in the domain, then we have so

called homogeneous boundary conditions; otherwise, the boundary conditions


are nonhomogeneous.
The choice of boundary and initial conditions depends on the given partial
differential equation and the physical problem that the equation describes.
For example, in the case of a vibrating string, described by the one space
dimensional wave equation
utt − uxx = 0
in the domain (0, ∞) × (0, l), the initial conditions are given by specifying
the initial position and velocity of the string. If, for example, we impose the
boundary conditions
u(0, t) = u(l, t) = 0, t > 0,
then it means that the two ends x = 0 and x = l of the string are fixed.
If we consider heat distribution problems, then the Dirichlet conditions
specify the temperature of the body (a bar, a rectangular plate or a ball) on
the boundary. For this type of problem the Neumann conditions specify the
heat flux across the boundary.
A similar discussion can be applied to the Laplace/Poisson equation. The
questions of choice of boundary and initial conditions for concrete partial
differential equations (the heat, the wave and the Laplace equation) will be
studied in more detail in the next chapters.

4.5 Projects Using Mathematica.


In this section we will see how Mathematica can be used to solve many
partial differential equations. Mathematica can find symbolic (analytic) so-
lutions of many types of partial differential equations (linear of the first and
second order, semi linear and quasi linear of the first order). This can be done
by the command
In[] := DSolve[expression, u, {x, y, . . .}];
Mathematica can also solve many partial differential equations numerically. The command for solving differential equations numerically is

In[] := NDSolve[expression, u, {x, y, . . .}];
Project 4.5.1. In this project we solve several types of partial differential
equations.

Example 1. Solve the linear partial differential equation of the first order

x3 ux (x, y) + x2 yuy (x, y) − (x + y)u(x, y) = 0.

Solution. The solution of the given partial differential equation is obtained by the commands

In[1] := e1 = x^3*D[u[x, y], x] + x^2*y*D[u[x, y], y] − (x + y)*u[x, y] == 0;

In[2] := s1 = DSolve[e1, u, {x, y}]

Out[2] = {{u → Function[{x, y}, e^(−1/x − y/x²) C[1][y/x]]}}

By specifying the arbitrary function C[1], we can find a particular solution.

In[3] := u[x, y] /. s1[[1]] /. {C[1][x_] → x^2}

Out[3] = (e^(−1/x − y/x²) y²)/x²
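The book's sessions use Mathematica; as an independent cross-check (our addition, written in Python with sympy rather than Mathematica), one can verify that the particular solution in Out[3] really satisfies the equation:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Particular solution from Out[3]: u = e^(-1/x - y/x^2) * y^2 / x^2
u = sp.exp(-1/x - y/x**2) * y**2 / x**2

# Left-hand side of x^3 u_x + x^2 y u_y - (x + y) u = 0
lhs = x**3*sp.diff(u, x) + x**2*y*sp.diff(u, y) - (x + y)*u
assert sp.simplify(lhs) == 0
```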

Example 2. Solve the quasi linear partial differential equation of the first
order
xux (x, y) + yuy (x, y) − u(x, y) − u2 (x, y) = 0.
Solution. The solution of the given partial differential equation is obtained by the commands

In[4] := e2 = x*D[u[x, y], x] + y*D[u[x, y], y] − u[x, y] − u[x, y]^2 == 0;

In[5] := s2 = DSolve[e2, u, {x, y}]

Out[5] = {{u → Function[{x, y}, −(e^(C[1][y/x]) x)/(−1 + e^(C[1][y/x]) x)]}}

Example 3. Solve the nonlinear partial differential equation of the first order

ux²(x, y) + uy²(x, y) − u²(x, y) = 0.

Solution. The solution of the given partial differential equation is obtained by the commands

In[6] := e3 = D[u[x, y], x]^2 + D[u[x, y], y]^2 − u[x, y]^2 == 0;

In[7] := s3 = DSolve[e3, u, {x, y}]

Out[7] = {{u → Function[{x, y}, e^(−x/√(1 + C[1]²) − C[1] y/√(1 + C[1]²) + C[2])]}}

Example 4. Solve the second order linear partial differential equation

uxx (x, y) + 2uxy (x, y) − 5uyy (x, y) = 0.

Solution. The solution of the given partial differential equation is obtained by the commands

In[8] := e4 = D[u[x, y], {x, 2}] + 2*D[u[x, y], x, y] − 5*D[u[x, y], {y, 2}] == 0;

In[9] := DSolve[e4, u, {x, y}]

Out[9] = {{u → Function[{x, y}, C[1][−(1/2)(2 + 2√6)x + y] + C[2][−(1/2)(2 − 2√6)x + y]]}}

Example 5. Plot the solution of the initial boundary value problem



  ut(t, x) = 9uxx(t, x), 0 < x < 5, t > 0
  u(0, x) = 0, 0 < x < 5
  u(t, 0) = 9 sin⁴ t, u(t, 5) = 0, t > 0.

Solution. We use NDSolve to plot the solution of the given problem.


In[10] := NDSolve[{Derivative[1, 0][u][t, x] == 9 Derivative[0, 2][u][t, x], u[0, x] == 0, u[t, 0] == 9 Sin[t]^4, u[t, 5] == 0}, u, {t, 0, 10}, {x, 0, 5}]

Out[10] = {{u → InterpolatingFunction[{{0., 10.}, {0., 5.}}, <>]}}

In[11] := Plot3D[Evaluate[u[t, x] /. %], {t, 0, 10}, {x, 0, 5}, PlotRange -> All]
The plot of the solution of the given problem is displayed in Figure 4.5.1.

Figure 4.5.1
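For readers without Mathematica, the same initial boundary value problem can be sketched with an explicit finite-difference scheme in Python (our addition; the grid size and time step are illustrative choices, with the step picked to satisfy the usual stability bound dt ≤ dx²/(2·9)):

```python
import numpy as np

# Explicit finite differences for u_t = 9 u_xx on 0 < x < 5,
# with u(0, x) = 0, u(t, 0) = 9 sin^4 t, u(t, 5) = 0.
nx = 51
dx = 5.0 / (nx - 1)
dt = 0.4 * dx**2 / 9.0          # satisfies the stability bound dt <= dx^2 / (2 * 9)
u = np.zeros(nx)                # initial condition u(0, x) = 0
t = 0.0
while t < 2.0:
    u[0] = 9.0 * np.sin(t)**4   # left boundary condition
    u[-1] = 0.0                 # right boundary condition
    u[1:-1] += 9.0 * dt / dx**2 * (u[2:] - 2.0*u[1:-1] + u[:-2])
    t += dt

# By the discrete maximum principle the values stay between 0 and 9.
assert np.all(u >= -1e-12) and np.all(u <= 9.0 + 1e-12)
```

Plotting u over a grid of t values reproduces the qualitative shape of Figure 4.5.1.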
CHAPTER 5

THE WAVE EQUATION

The purpose of this chapter is to study the one dimensional wave equation

utt (x, t) = a2 uxx (x, t) + h(x, t), x ∈ (a, b) ⊆ R, t > 0,

also known as the vibrating string equation, and its higher dimensional version

utt (x, t) = a2 ∆ u(x, t) + h(x, t), x ∈ Ω ⊆ Rn , n = 2, 3.

This equation is important in many applications and describes many phys-


ical phenomena, e.g., sound waves, ocean waves and mechanical and electro-
magnetic waves. As mentioned in the previous chapter, the wave equation
is an important representative of the very large class of hyperbolic partial
differential equations.
In the first section of this chapter we will derive d’Alembert’s formula,
a general method for the solution of the one dimensional homogeneous and
nonhomogeneous wave equation. In the next several sections we will apply the
Fourier Method, or so called Separation of Variables Method, for constructing
the solution of the one and higher dimensional wave equation in rectangular,
polar and spherical coordinates. In the last section of this chapter we will
apply the Laplace and Fourier transforms to solve the wave equation.

5.1 d’Alembert’s Method.


In this section we will use d’Alembert’s formula to solve the wave equation
on an infinite, semi-infinite and finite interval.
Case 1°. Infinite String. Homogeneous Equation. Let us consider first the
homogeneous wave equation for an infinite string

(5.1.1) utt (x, t) = a2 uxx (x, t), −∞ < x < ∞, t > 0,

which satisfies the initial conditions

(5.1.2) u(x, 0) = f (x), ut (x, 0) = g(x), −∞ < x < ∞.

We introduce new variables ξ and η by

ξ = x + at, and η = x − at.


Recall from Chapter 4 that the variables ξ and η are called the characteristics
of the wave equation. If u(x, t) = w(ξ, η), then using the chain rule, the wave
equation (5.1.1) is transformed into the simple equation

wξ η = 0,

whose general solution is given by

w(ξ, η) = F (ξ) + G(η),

where F and G are any twice differentiable functions, each of a single vari-
able. Therefore, the general solution of the wave equation (5.1.1) is

(5.1.3) u(x, t) = F (x + at) + G(x − at).

Using the initial condition u(x, 0) = f (x) from (5.1.2) and setting t = 0 in
(5.1.3) we obtain

(5.1.4) F (x) + G(x) = f (x).

If we differentiate (5.1.3) with respect to t we obtain

(5.1.5) ut (x, t) = aF ′ (x + at) − aG′ (x − at).

If we substitute t = 0 in (5.1.5) and use the initial condition ut (x, 0) = g(x),


then we obtain

(5.1.6) aF ′ (x) − aG′ (x) = g(x).

From (5.1.6), by integration it follows that

(5.1.7) aF(x) − aG(x) = ∫_0^x g(s) ds + C,

where C is any integration constant. From (5.1.4) and (5.1.7) we obtain

(5.1.8) F(x) = f(x)/2 + (1/(2a)) ∫_0^x g(s) ds + C/2,

and

G(x) = f(x)/2 − (1/(2a)) ∫_0^x g(s) ds − C/2.

If we substitute the obtained formulas for F and G into (5.1.3) we obtain


that the solution u(x, t) of the wave equation (5.1.1) subject to the initial
conditions (5.1.2) is given by


(5.1.9) u(x, t) = (f(x + at) + f(x − at))/2 + (1/(2a)) ∫_{x−at}^{x+at} g(s) ds.

Formula (5.1.9) is known as d’Alembert’s formula.


Remark. The first term

(f(x + at) + f(x − at))/2

in Equation (5.1.9) represents the propagation of the initial displacement


without the initial velocity, i.e., when g(x) = 0. The second term


(1/(2a)) ∫_{x−at}^{x+at} g(s) ds

represents the effect of the initial velocity (impulse) when the initial displacement is zero, i.e., when f(x) = 0.
In physics, a function of the form f(x − at) is known as a forward propagating wave with velocity a, and f(x + at) as a backward propagating wave with velocity a.
The straight lines x + at = const and x − at = const (known as characteristics) give the propagation paths along which the wave profile f(x) propagates.
Remark. It is relatively easy to show that if f ∈ C 2 (R) and g ∈ C(R), then
the C 2 function u(x, t), given by d’Alembert’s formula (5.1.9), is the unique
solution of the problem (5.1.1), (5.1.2). (See Exercise 1 of this section.)
Example 5.1.1. Find the solution of the wave equation (5.1.1) satisfying
the initial conditions

u(x, 0) = cos x, ut (x, 0) = 0, −∞ < x < ∞.

Solution. Using d'Alembert's formula (5.1.9) we obtain

u(x, t) = (cos(x + at) + cos(x − at))/2 = cos x cos at.

Example 5.1.2. Find the solution of the wave equation (5.1.1) satisfying
the initial conditions
u(x, 0) = 0 ut (x, 0) = sin x, −∞ < x < ∞.
Solution. Using d'Alembert's formula (5.1.9) we obtain

u(x, t) = (1/(2a)) ∫_{x−at}^{x+at} sin s ds = (1/a) sin x sin at.
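Both examples can be checked symbolically (our addition, a quick Python/sympy verification of d'Alembert's formula (5.1.9) applied to these initial data):

```python
import sympy as sp

x, t, a, s = sp.symbols('x t a s', positive=True)

def dalembert(f, g):
    # d'Alembert's formula (5.1.9)
    return (f(x + a*t) + f(x - a*t))/2 + sp.integrate(g(s), (s, x - a*t, x + a*t))/(2*a)

# Example 5.1.1: f = cos, g = 0  ->  u = cos x cos at
u1 = dalembert(sp.cos, lambda v: sp.Integer(0))
assert sp.simplify(sp.expand_trig(u1 - sp.cos(x)*sp.cos(a*t))) == 0

# Example 5.1.2: f = 0, g = sin  ->  u = (1/a) sin x sin at
u2 = dalembert(lambda v: sp.Integer(0), sp.sin)
assert sp.simplify(sp.expand_trig(u2 - sp.sin(x)*sp.sin(a*t)/a)) == 0
```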

Example 5.1.3. Illustrate the solution of the wave equation (5.1.1) taking
a = 2, subject to the initial velocity g(x) = 0 and initial displacement f (x)
given by {
2 − |x|, −2 ≤ x ≤ 2
f (x) =
0, otherwise.
Solution. Figure 5.1.1 shows the behavior of the string and the propagation
of the forward and backward waves at the moments t = 0, t = 1/2, t = 1,
t = 1.5, t = 2 and t = 2.5.
[Figure 5.1.1 consists of six panels (a)–(f) showing the string profile u(x) at the times t = 0, t = 0.5, t = 1.0, t = 1.5, t = 2.0 and t = 2.5: the initial triangular pulse of height 2 splits into forward and backward pulses of height 1 traveling with speed 2.]

Figure 5.1.1

The solution u(x, t) of this wave equation is given by

u(x, t) = (f(x + 2t) + f(x − 2t))/2.
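A short numerical check (our addition, in Python) evaluates this solution and confirms the behavior seen in Figure 5.1.1: the initial peak of height 2 splits into two pulses of height 1 moving with speed a = 2:

```python
import numpy as np

def f(x):
    # initial displacement: a triangular pulse of height 2 supported on [-2, 2]
    return np.maximum(2.0 - np.abs(x), 0.0)

def u(x, t, a=2.0):
    # d'Alembert solution with zero initial velocity
    return 0.5 * (f(x + a*t) + f(x - a*t))

assert np.isclose(u(0.0, 0.0), 2.0)   # initial peak at the origin
assert np.isclose(u(2.0, 1.0), 1.0)   # forward pulse peak has moved to x = 2
assert np.isclose(u(-2.0, 1.0), 1.0)  # backward pulse peak at x = -2
assert np.isclose(u(0.0, 2.0), 0.0)   # the two pulses have separated

xs = np.linspace(-10, 10, 2001)
assert np.max(u(xs, 2.5)) <= 1.0 + 1e-12  # each traveling pulse has height 1
```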

Example 5.1.4. Using d’Alembert’s method find the solution of the wave
equation (5.1.1), subject to the initial displacement f (x) = 0 and the initial
velocity ut (x, 0) = g(x), given by
g(x) = cos (2x), −∞ < x < ∞.

Solution. From d'Alembert's formula (5.1.9) it follows that

u(x, t) = (1/(2a)) ∫_{x−at}^{x+at} g(s) ds = (1/(2a)) ∫_{x−at}^{x+at} cos 2s ds
= (1/(4a)) [sin 2s]_{s=x−at}^{s=x+at} = (1/(2a)) cos 2x sin 2at.

Example 5.1.5. Show that if both functions f (x) and g(x) in d’Alembert’s
solution of the wave equation are odd functions, then the solution u(x, t) is
an odd function of the spatial variable x. Also, if both functions f (x) and
g(x) in d’Alembert’s solution of the wave equation are even functions, then
the solution u(x, t) is an even function of the spatial variable x.
Solution. Let us check the case when both functions are odd. The other case
is left as an exercise. So, assume that
f (−x) = −f (x), g(−x) = −g(x), x ∈ R.
Then by d'Alembert's formula (5.1.9) it follows that

u(−x, t) = (f(−x + at) + f(−x − at))/2 + (1/(2a)) ∫_{−x−at}^{−x+at} g(s) ds    (substitute s = −v)

= −(f(x − at) + f(x + at))/2 − (1/(2a)) ∫_{x+at}^{x−at} g(−v) dv

= −(f(x − at) + f(x + at))/2 − (1/(2a)) ∫_{x−at}^{x+at} g(v) dv = −u(x, t).

Case 2°. Semi-Infinite String. Homogeneous Equation. Let us consider now


the homogeneous wave equation for a semi-infinite string
(5.1.10) utt (x, t) = a2 uxx (x, t), 0 < x < ∞, t > 0,

which satisfies the initial conditions

(5.1.11) u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < ∞

and the boundary condition

u(0, t) = 0, t > 0.

In order to solve this problem we will use the result of Example 5.1.5. First, we extend the functions f, g and u(x, t) to the odd functions fo, go and w(x, t), respectively, by

fo(x) = { −f(−x), −∞ < x < 0;  f(x), 0 < x < ∞ },   go(x) = { −g(−x), −∞ < x < 0;  g(x), 0 < x < ∞ },

and {
−u(−x, t), −∞ < x < 0, t > 0
w(x, t) =
u(x, t), 0 < x < ∞, t > 0.
Next, we consider the initial boundary value problem
{
wtt (x, t) = a2 wxx (x, t), −∞ < x < ∞, t > 0
w(x, 0) = fo (x), wt (x, 0) = go (x), −∞ < x < ∞.

By d’Alembert’s formula (5.1.9), the solution of the last problem is given by


(5.1.12) w(x, t) = (fo(x + at) + fo(x − at))/2 + (1/(2a)) ∫_{x−at}^{x+at} go(s) ds.

Since fo and go are odd functions, from (5.1.12) it follows that

w(0, t) = (fo(at) + fo(−at))/2 + (1/(2a)) ∫_{−at}^{at} go(s) ds = 0,

and so,
w(x, t) = u(x, t), for x > 0 and t > 0.
Therefore, the solution of the initial boundary value problem (5.1.11) on the
semi-infinite interval 0 < x < ∞ is given by
(5.1.13) u(x, t) =
  (f(x + at) − f(at − x))/2 + (1/(2a)) ∫_{at−x}^{x+at} g(s) ds,   0 < x < at,
  (f(x + at) + f(x − at))/2 + (1/(2a)) ∫_{x−at}^{x+at} g(s) ds,   at ≤ x < ∞.

Example 5.1.6. Find the solution of the wave equation

utt (x, t) = 4uxx (x, t), x > 0, t > 0,

subject to the conditions


{
u(x, 0) = 3e−x , ut (x, 0) = 0, x > 0
u(0, t) = 0, t > 0.

Solution. Applying Equation (5.1.13) to this problem we obtain

u(x, t) =
  (3/2)(e^(−x−2t) − e^(x−2t)),   0 < x < 2t,
  (3/2)(e^(−x−2t) + e^(−x+2t)),  2t ≤ x < ∞.
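The two branches can be confirmed numerically from the odd extension (our addition, a Python check of the reflection principle for this example):

```python
import numpy as np

a = 2.0

def fo(x):
    # odd extension of the initial displacement f(x) = 3 e^(-x), x > 0
    return np.where(x >= 0, 3.0*np.exp(-x), -3.0*np.exp(x))

def u(x, t):
    # d'Alembert's formula applied to the odd extension (zero initial velocity)
    return 0.5 * (fo(x + a*t) + fo(x - a*t))

# Region x >= 2t: no reflected wave yet
x, t = 3.0, 1.0
assert np.isclose(u(x, t), 1.5*(np.exp(-x - 2*t) + np.exp(-x + 2*t)))
# Region 0 < x < 2t: the reflected wave is present
x, t = 1.0, 1.0
assert np.isclose(u(x, t), 1.5*(np.exp(-x - 2*t) - np.exp(x - 2*t)))
# The end x = 0 stays fixed
assert np.isclose(u(0.0, 0.7), 0.0)
```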

Case 3°. Homogeneous Wave Equation for a Bounded Interval. We will consider now a bounded interval (0, l) and find the solution of the homogeneous problem

  utt(x, t) = a²uxx(x, t), 0 < x < l, t > 0
  u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l
  u(0, t) = u(l, t) = 0, t > 0.
As in the case of the semi-infinite interval, we extend both functions f and
g to the odd functions fo and go , respectively, on the interval (−l, l). After
that, we periodically extend the functions fo and go on the whole real line
R to new, 2l-periodic functions fp and gp , defined by
fp(x) = { fo(x), −l < x < l;  fp(x + 2l), otherwise },   gp(x) = { go(x), −l < x < l;  gp(x + 2l), otherwise }.
Notice that the functions fp and gp have the following properties.
{
fp (−x) = −fp (x), fp (l − x) = −fp (l + x)
(5.1.14)
gp (−x) = −gp (x), gp (l − x) = −gp (l + x).

Now, we consider the following problem for the wave equation on (−∞, ∞).
{
wtt (x, t) = a2 wxx (x, t), −∞ < x < ∞, t > 0
w(x, 0) = fp (x), wt (x, 0) = gp (x), −∞ < x < ∞.

By d’Alembert’s formula (5.1.9) the solution of the above problem is given


by

(5.1.15) w(x, t) = (fp(x + at) + fp(x − at))/2 + (1/(2a)) ∫_{x−at}^{x+at} gp(s) ds.

From (5.1.15) and properties (5.1.14) of the functions fp and gp it follows that

w(0, t) = w(l, t) = 0, t > 0.

Therefore,
w(x, t) = u(x, t), 0 < x < l, t > 0,

and so, the solution u(x, t) of the wave equation on the finite interval 0 <
x < l is given by


(5.1.16) u(x, t) = (fp(x + at) + fp(x − at))/2 + (1/(2a)) ∫_{x−at}^{x+at} gp(s) ds.

Example 5.1.7. Using d'Alembert's formula (5.1.16) find the solution of


the problem
utt (x, t) = uxx (x, t), 0 < x < π, t > 0,

subject to the initial conditions

u(x, 0) = f (x) = 2 sin x + 4 sin 2x, ut (x, 0) = g(x) = sin x, 0 < x < π,

and the boundary conditions

u(0, t) = u(π, t) = 0, t > 0.

Solution. The odd periodic extensions fp and gp of f and g, respectively, of period 2π, are given by

fp (x) = 2 sin x + 4 sin 2x, gp (x) = sin x, x ∈ R.

Therefore, by (5.1.16) the solution of the given problem is

u(x, t) = (1/2)[2 sin(x + t) + 4 sin 2(x + t) + 2 sin(x − t) + 4 sin 2(x − t)] + (1/2) ∫_{x−t}^{x+t} sin s ds
= 2 sin x cos t + 4 sin 2x cos 2t + (1/2)[cos(x − t) − cos(x + t)]
= 2 sin x cos t + 4 sin 2x cos 2t + sin x sin t.

Example 5.1.8. Using d'Alembert's formula (5.1.16) find the solution of


the string problem

utt (x, t) = uxx (x, t), 0 < x < π, t > 0,

subject to the initial conditions


{
u(x, 0) = f (x) = 0, 0 < x < π,
ut (x, 0) = g(x) = 2x, 0 < x < π,

and the boundary conditions

u(0, t) = u(π, t) = 0, t > 0.

At what instances does the string return to its initial shape u(x, 0) = 0?
Solution. By d’Alembert’s formula (5.1.16) the solution of the given initial
boundary value problem is


(5.1.17) u(x, t) = (1/2) ∫_{x−t}^{x+t} gp(s) ds.

Let G be the antiderivative of gp on R with G(0) = 0, i.e.,

G(x) = ∫_0^x gp(s) ds,  so that G′(x) = gp(x), x ∈ R.

From the above formula for the antiderivative G and from (5.1.17) we have

(5.1.18) u(x, t) = (1/2)[G(x + t) − G(x − t)].
Now, we will find the function G explicitly. Since gp is an odd, 2π-periodic function, it follows that G is also 2π-periodic. Indeed,

G(x + 2π) = ∫_0^{x+2π} gp(s) ds = ∫_0^x gp(s) ds + ∫_x^{x+2π} gp(s) ds
= ∫_0^x gp(s) ds + ∫_{−π}^{π} gp(s) ds = ∫_0^x gp(s) ds + 0 = G(x),

since the integral of the 2π-periodic function gp over any interval of length 2π equals its integral over (−π, π), which vanishes because gp is odd.

For −π ≤ x ≤ π we have gp(x) = 2x and thus, for −π ≤ x ≤ π,

G(x) = x².

Let tr > 0 be the time moments when the string returns to its initial
position u(x, 0) = 0. From (5.1.18) it follows that this will happen only
when
G(x + tr ) = G(x − tr ).

Since G is 2π periodic, from the last equation we obtain

x + tr = x − tr + 2nπ, n = 1, 2, . . ..

Therefore,
tr = nπ, n = 1, 2, . . .

are the moments when the string will come back to its original position.
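This conclusion is easy to test numerically (our addition, a Python check of formula (5.1.18) with the 2π-periodic antiderivative G constructed above):

```python
import numpy as np

def G(x):
    # 2*pi-periodic function equal to x^2 on [-pi, pi)
    y = np.mod(x + np.pi, 2.0*np.pi) - np.pi
    return y**2

def u(x, t):
    # formula (5.1.18)
    return 0.5 * (G(x + t) - G(x - t))

xs = np.linspace(0.1, np.pi - 0.1, 50)
# At t_r = pi and t_r = 2*pi the string is back at u = 0 ...
assert np.allclose(u(xs, np.pi), 0.0)
assert np.allclose(u(xs, 2.0*np.pi), 0.0)
# ... while at an intermediate time it is not
assert np.max(np.abs(u(xs, 1.0))) > 0.1
```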

In the next example we discuss the important notion of energy of the string
in the wave equation.
Example 5.1.9. Consider the string equation on the finite interval 0 < x < l:
 2

 utt (x, t) = a uxx (x, t), 0 < x < l, t > 0.
u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < l


u(0, t) = u(l, t) = 0, t > 0.

The energy of the string at moment t, denoted by E(t), is defined by

(5.1.19) E(t) = (1/2) ∫_0^l [ut²(x, t) + a²ux²(x, t)] dx.

The terms

Ek(t) = (1/2) ∫_0^l ut²(x, t) dx,   Ep(t) = (1/2) ∫_0^l a²ux²(x, t) dx

are called kinetic and potential energy of the string, respectively.


Show that the energy E(t) of the string is a constant function of t.
Solution. Since u(x, t) is a twice differentiable function from (5.1.19) it fol-
lows that

(5.1.20) E′(t) = ∫_0^l ut(x, t)utt(x, t) dx + a² ∫_0^l ux(x, t)uxt(x, t) dx.

Using the integration by parts formula for the second integral in Equation (5.1.20) and the fact that utt(x, t) = a²uxx(x, t), from (5.1.20) we obtain

E′(t) = ∫_0^l ut(x, t)utt(x, t) dx + a²[ux(l, t)ut(l, t) − ux(0, t)ut(0, t)] − a² ∫_0^l ut(x, t)uxx(x, t) dx
= a²[ut(l, t)ux(l, t) − ut(0, t)ux(0, t)].

Therefore

(5.1.21) E′(t) = a²[ut(l, t)ux(l, t) − ut(0, t)ux(0, t)].

From the given boundary conditions u(0, t) = u(l, t) = 0 for all t > 0 it
follows that
ut (0, t) = ut (l, t) = 0, t > 0,
and hence from (5.1.21) we obtain that E ′ (t) = 0. Therefore E(t) is a
constant function. In fact, we can evaluate this constant. Indeed,

E(t) = E(0) = (1/2) ∫_0^l [ut²(x, 0) + a²ux²(x, 0)] dx
= (1/2) ∫_0^l [g²(x) + a²(f′(x))²] dx.
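For a concrete standing wave this conservation can also be observed numerically (our addition; a Python check with a = 1, l = 1 and the solution u = sin πx cos πt, whose energy works out to π²/4):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)

def trapz(y, xs):
    # simple trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0

def energy(t):
    # E(t) = (1/2) * integral of (u_t^2 + u_x^2) over (0, 1)
    # for u = sin(pi x) cos(pi t)
    ut = -np.pi * np.sin(np.pi*x) * np.sin(np.pi*t)
    ux = np.pi * np.cos(np.pi*x) * np.cos(np.pi*t)
    return 0.5 * trapz(ut**2 + ux**2, x)

E = [energy(t) for t in (0.0, 0.3, 0.7, 1.2)]
assert np.allclose(E, np.pi**2/4, rtol=1e-6)  # constant in t
```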

The result in Example 5.1.9 can be used to prove the following theorem.
Theorem 5.1.1. The initial boundary value problem
 2

 utt (x, t) = a uxx (x, t), 0 < x < l, t > 0,
u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < l


u(0, t) = u(l, t) = 0, t > 0

has only one solution.


Proof. If v(x, t) and w(x, t) are solutions of the above problem, then the
function U (x, t) = v(x, t) − w(x, t) satisfies the problem
 2

 Utt (x, t) = a Uxx (x, t), 0 < x < l, t > 0
U (x, 0) = 0, Ut (x, 0) = 0, 0 < x < l


U (0, t) = U (l, t) = 0, t > 0.

If E(t) is the energy of U (x, t), then by the result in Example 5.1.9 it follows
that E(t) = 0 for every t > 0. Therefore,

∫l
[ 2 ]
Ut (x, t) + a2 Ux2 (x, t) dx = 0,
0

from which it follows that

Ut²(x, t) + a²Ux²(x, t) = 0, 0 < x < l, t > 0.

Thus, Ux(x, t) = Ut(x, t) = 0 for every 0 < x < l and every t > 0 and so,

U(x, t) = U(0, t) + ∫_0^x Us(s, t) ds = 0,

for every 0 < x < l and every t > 0. Hence, v(x, t) = w(x, t) for every 0 < x < l and every t > 0. ■

Case 4°. Nonhomogeneous Wave Equation. Duhamel's Principle. Consider


now the nonhomogeneous problem
{
utt (x, t) = a2 uxx (x, t) + h(x, t), −∞ < x < ∞, t > 0
(5.1.22)
u(x, 0) = f (x), ut (x, 0) = g(x), −∞ < x < ∞.

First, we know already how to solve the homogeneous problem


{
vtt (x, t) = a2 vxx (x, t), −∞ < x < ∞, t > 0
(5.1.23)
v(x, 0) = f (x), vt (x, 0) = g(x), −∞ < x < ∞.

Next, we will solve the problem


{
wtt (x, t) = a2 wxx (x, t) + h(x, t), −∞ < x < ∞, t > 0
(5.1.24)
w(x, 0) = 0, wt (x, 0) = 0, −∞ < x < ∞.

Now, if v(x, t) is the solution of problem (5.1.23) and w(x, t) is the solution
of problem (5.1.24), then u(x, t) = v(x, t) + w(x, t) will be the solution of our
nonhomogeneous problem (5.1.22).
From Case 1° it follows that the solution v(x, t) of the homogeneous problem (5.1.23) is given by


x+at
f (x + at) + f (x − at) 1
v(x, t) = + g(s) ds.
2 2a
x−at

Next, let us consider problem (5.1.24). If we introduce new variables ξ and η by

ξ = x + at,   η = x − at,

then problem (5.1.24) is reduced to the differential equation

(5.1.25) w̃ξη(ξ, η) = −(1/(4a²)) H(ξ, η),

subject to the conditions

(5.1.26) w̃(ξ, ξ) = 0,   w̃ξ(ξ, ξ) = 0,   w̃η(ξ, ξ) = 0,

where

w̃(ξ, η) = w((ξ + η)/2, (ξ − η)/(2a)),   H(ξ, η) = h((ξ + η)/2, (ξ − η)/(2a)).
If we integrate (5.1.25) with respect to η, from η to ξ, we obtain

w̃ξ(ξ, ξ) − w̃ξ(ξ, η) = −(1/(4a²)) ∫_η^ξ H(ξ, s) ds.

From the last equation and conditions (5.1.26) it follows that

w̃ξ(ξ, η) = (1/(4a²)) ∫_η^ξ H(ξ, s) ds.

Integrating the last equation with respect to ξ we have

w̃(ξ, η) = (1/(4a²)) ∫_η^ξ ∫_η^y H(y, s) ds dy.

If in the last iterated integral we introduce new variables µ and ν by the transformation

s = µ − aν,   y = µ + aν,

then we obtain that the solution w(x, t) of problem (5.1.24) is given by

w(x, t) = (1/(2a)) ∫_0^t ∫_{x−a(t−ν)}^{x+a(t−ν)} h(µ, ν) dµ dν.

Therefore, the solution u(x, t) of the original problem (5.1.22) is given by

u(x, t) = (f(x + at) + f(x − at))/2 + (1/(2a)) ∫_{x−at}^{x+at} g(s) ds + (1/(2a)) ∫_0^t ∫_{x−a(t−ν)}^{x+a(t−ν)} h(µ, ν) dµ dν.

Example 5.1.10. Find the solution of the problem


{
utt (x, t) = uxx (x, t) + ex−t , −∞ < x < ∞, t > 0
u(x, 0) = 0, ut (x, 0) = 0, −∞ < x < ∞.

Solution. Using the formula in Duhamel's principle we find that the solution of the problem is

u(x, t) = (1/2) ∫_0^t ∫_{x−(t−ν)}^{x+(t−ν)} e^(µ−ν) dµ dν = (1/2) ∫_0^t (e^(x+t−ν) − e^(x−t+ν)) e^(−ν) dν
= (1/2) ∫_0^t (e^(x+t−2ν) − e^(x−t)) dν = (1/4)(e^(x+t) − e^(x−t)) − (1/2) t e^(x−t).
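One can verify this result directly (our addition, a Python/sympy check that the function above solves the nonhomogeneous problem):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Candidate solution obtained from Duhamel's principle
u = sp.exp(x + t)/4 - sp.exp(x - t)/4 - t*sp.exp(x - t)/2

# It satisfies u_tt = u_xx + e^(x - t) with zero initial data
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.exp(x - t)) == 0
assert sp.simplify(u.subs(t, 0)) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0)) == 0
```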

d'Alembert's method can be used to solve other types of equations.


Example 5.1.11. Consider the partial differential equation

(5.1.27) uxx(x, t) − 2uxt(x, t) + utt(x, t) = 0.

(a) Find the general solution of (5.1.27).

(b) Find the particular solution of (5.1.27) in the domain x ≥ 0, t ≥ 0,


subject to the following initial and boundary conditions

(5.1.28) u(x, 0) = 2x2 , u(0, t) = 0, x ≥ 0, t ≥ 0.

Solution. (a) If u(x, t) is a solution of (5.1.27), then it belongs to the class


C 2 (R2 ). Let us introduce new variables ξ and η by

ξ = x + t, η = x − t.

With these new variables we have a new function


( )
ξ+η ξ−η
w(ξ, η) = u , .
2 2

Using the chain rule we have

uxx = wξξ + 2wξη + wηη , utt = wξξ − 2wξη + wηη , uxt = wξξ − wηη .

If we substitute the above partial derivatives in Equation (5.1.27), then we


obtain the simple equation
wηη = 0.

From the last equation it follows that


wη = f (ξ)
where f is any twice differentiable function on R. Integrating the last equa-
tion with respect to η we obtain
w(ξ, η) = ηf (ξ) + g(ξ),
where g is any twice differentiable function on R.
Therefore, the general solution of Equation (5.1.27) is given by
u(x, t) = (x − t)f (x + t) + g(x + t).
(b) From the initial and boundary conditions (5.1.28) it follows that
xf (x) + g(x) = 2x2 , −tf (t) + g(t) = 0.
From the last two equations (taking t = x in the second equation) it follows
that
g(x) = x2 and f (x) = x.
Therefore, the solution of the given initial boundary value problem is
u(x, t) = (x − t)(x + t) + (x + t)2 = 2x2 + 2xt.
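Again, the particular solution is easy to verify (our addition, a Python/sympy check against the equation and the conditions (5.1.28)):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = 2*x**2 + 2*x*t

# u_xx - 2 u_xt + u_tt = 0
assert sp.simplify(sp.diff(u, x, 2) - 2*sp.diff(u, x, t) + sp.diff(u, t, 2)) == 0
# initial and boundary conditions (5.1.28)
assert sp.simplify(u.subs(t, 0) - 2*x**2) == 0
assert sp.simplify(u.subs(x, 0)) == 0
```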

Exercises for Section 5.1.


1. Show that if f ∈ C 2 (R) and g ∈ C(R), then the C 2 function u(x, t),
defined by

x+at
f (x + at) + f (x − at) 1
u(x, t) = + g(s) ds
2 2a
x−at

is the unique solution of the problem


utt (x, t) = a2 uxx (x, t), −∞ < x < ∞, t > 0,
subject to the initial conditions
u(x, 0) = f (x), ut (x, 0) = g(x), −∞ < x < ∞.
2. Show that the wave equation
utt (x, t) = a2 uxx (x, t)
satisfies the following properties:
(a) If u(x, t) is a solution of the equation, then u(x − y, t) is also a
solution for any constant y.
(b) If u(x, t) is a solution of the equation, then the derivative ux (x, t)
is also a solution.
(c) If u(x, t) is a solution of the equation, then u(kx, kt) is also a
solution for any constant k.

In Exercises 3–10 using d’Alembert’s formula solve the wave equation


utt (x, t) = a2 uxx (x, t), −∞ < x < ∞, t > 0,
subject to the indicated initial conditions.

3. a = 5, u(x, 0) = sin x , ut (x, 0) = sin 3x, −∞ < x < ∞.

4. u(x, 0) = 1/(1 + x²), ut(x, 0) = sin x, −∞ < x < ∞.

5. u(x, 0) = 0, ut(x, 0) = 2xe^(−x²), −∞ < x < ∞.

6. u(x, 0) = 2 sin x cos x, ut(x, 0) = cos x, −∞ < x < ∞.

7. u(x, 0) = 1/(1 + x²), ut(x, 0) = e^(−x), −∞ < x < ∞.

8. u(x, 0) = cos πx², ut(x, 0) = (e^(kx) − e^(−kx))/2, k is a constant.

9. u(x, 0) = e^x, ut(x, 0) = sin x, −∞ < x < ∞.

10. u(x, 0) = e^(−x²), ut(x, 0) = cos² x, −∞ < x < ∞.
11. Solve the wave equation on the semi-infinite interval


utt (x, t) = 25uxx (x, t), −∞ < x < ∞, t > 0,
subject to the initial conditions
{
1, x<0
u(x, 0) = 0, ut (x, 0) =
0, x ≥ 0.
12. Solve the wave equation
utt (x, t) = a2 uxx (x, t), 0 < x < ∞, t > 0,
subject to the initial conditions
u(x, 0) = sin x, ut(x, 0) = 1/(1 + x²), 0 < x < ∞.
13. Solve the wave equation
utt (x, t) = a2 uxx (x, t), 0 < x < ∞, t > 0,
subject to the initial conditions


 0, 0<x<2
u(x, 0) = f (x) = 1, 2<x<3,


0, x>3

ut (x, 0) = 0, x > 0, ux (0, t) = 0, t > 0.



14. Using d’Alembert’s method solve the wave equation

utt (x, t) = uxx (x, t), 0 < x < 2, t > 0,

subject to the initial conditions

u(x, 0) = sin 2πx, ut (x, 0) = sin πx, 0 < x < 2.

15. Using d’Alembert’s method solve the wave equation

utt (x, t) = uxx (x, t), 0 < x < 1, t > 0,

subject to the initial conditions

u(x, 0) = 0, ut (x, 0) = 1, 0 < x < 1.

When does the string for the first time return to its initial position
u(x, 0) = 0?

16. Solve the initial value problem

utt (x, t) = a2 uxx (x, t) + e−t sin x, −∞ < x < ∞, t > 0


u(x, 0) = 0, ut (x, 0) = 0, −∞ < x < ∞.

17. Compute the potential, kinetic and total energy of the string equation

utt (x, t) = uxx (x, t), 0 < x < π, t > 0,

subject to the conditions


{
u(x, 0) = sin x, ut (x, 0) = 0, 0 < x < π,
u(0, t) = u(π, t) = 0, t > 0.

5.2 Separation of Variables Method for the Wave Equation.


In this section we will discuss the Method of Separation of Variables, also
known as the Fourier Method or the Eigenfunctions Expansion Method for
solving the one dimensional wave equation. We will discuss homogeneous and
non-homogeneous wave equations with homogeneous and nonhomogeneous
boundary conditions on a bounded interval. This method can be applied to
wave equations in higher dimensions and it can be used to solve some other
partial differential equations, as we will see in the next sections of this chapter
and the following chapters.

Case 1°. Homogeneous Wave Equation. Homogeneous Boundary Conditions.


Consider the initial boundary value problem

(5.2.1) utt (x, t) = a2 uxx (x, t), 0 < x < l, t > 0,


(5.2.2) u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < l
(5.2.3) u(0, t) = u(l, t) = 0, t > 0,

where a is a positive constant.


We will look for a nontrivial solution u(x, t) of the above problem of the
form

(5.2.4) u(x, t) = X(x)T (t),

where X(x) and T (t) are functions of single variables x and t, respectively.
Differentiating (5.2.4) with respect to x and t and substituting the partial
derivatives in Equation (5.2.1) we obtain

(5.2.5) X″(x)/X(x) = (1/a²) T″(t)/T(t).

Equation (5.2.5) holds identically for every 0 < x < l and every t > 0.
Notice that the left side of this equation is a function which depends only on
x, while the right side is a function which depends on t. Since x and t are
independent variables this can happen only if each function in both sides of
(5.2.5) is equal to the same constant λ:

X″(x)/X(x) = (1/a²) T″(t)/T(t) = λ.

From the last equation we obtain the two ordinary differential equations

(5.2.6) X ′′ (x) − λX(x) = 0,

and

(5.2.7) T ′′ (t) − a2 λT (t) = 0.

From the boundary conditions

u(0, t) = u(l, t) = 0, t>0

it follows that
X(0)T (t) = X(l)T (t) = 0, t > 0.
From the last equations we obtain the boundary conditions

X(0) = X(l) = 0,

since T (t) ≡ 0 would imply u(x, t) ≡ 0. Solving the eigenvalue problem


(5.2.6) with the above boundary conditions, just as in Chapter 3, we find
that the eigenvalues λn and corresponding eigenfunctions Xn (x) are given
by
λn = −(nπ/l)²,   Xn(x) = sin(nπx/l),   n = 1, 2, . . ..
The solution of the differential equation (5.2.7), corresponding to the above
found λn is given by
Tn(t) = an cos(nπat/l) + bn sin(nπat/l),   n = 1, 2, . . .,
where an and bn are constants which will be determined. Therefore, we
obtain a sequence of functions
un(x, t) = (an cos(nπat/l) + bn sin(nπat/l)) sin(nπx/l),   n = 1, 2, . . .

each of which satisfies the wave equation (5.2.1) and the boundary conditions
(5.2.3). Since the wave equation and the boundary conditions are linear and
homogeneous, a function u(x, t) of the form

(5.2.8) u(x, t) = Σ_{n=1}^∞ un(x, t) = Σ_{n=1}^∞ (an cos(nπat/l) + bn sin(nπat/l)) sin(nπx/l)

also will satisfy the wave equation and the boundary conditions. If we assume
that the above series is convergent and that it can be differentiated term by
term with respect to t, from (5.2.8) and the initial conditions (5.2.2) we
obtain

(5.2.9) f(x) = u(x, 0) = Σ_{n=1}^∞ an sin(nπx/l),   0 < x < l

and
(5.2.10) g(x) = ut(x, 0) = Σ_{n=1}^∞ bn (nπa/l) sin(nπx/l),   0 < x < l.

Using the Fourier sine series (from Chapter 1) for the functions f (x) and
g(x), from (5.2.9) and (5.2.10) we obtain

(5.2.11) an = (2/l) ∫_0^l f(x) sin(nπx/l) dx,   bn = (2/(nπa)) ∫_0^l g(x) sin(nπx/l) dx,   n ∈ N.

A formal justification that the obtained function u(x, t) is the solution of


the wave equation is given by the following theorem, stated without a proof.

Theorem 5.2.1. Suppose that the functions f and g are of class C 2 [0, l]
and that f ′′ (x) and g ′′ (x) are piecewise continuous on [0, l]. If
f (0) = f ′′ (0) = f (l) = f ′′ (l) = 0
g(0) = g(l),
then the function u(x, t) given by (5.2.8), where an and bn are given by
(5.2.11), is the unique solution of the problem (5.2.1), (5.2.2), (5.2.3).

Example 5.2.1. Solve the following initial boundary value problem.




  utt(x, t) = a²uxx(x, t), 0 < x < 4, t > 0,
  u(x, 0) = f(x) = { x, 0 < x < 2;  4 − x, 2 < x < 4 },
  ut(x, 0) = g(x) = 0, 0 < x < 4,
  u(0, t) = u(4, t) = 0, t > 0.

Solution. Since g(x) ≡ 0 on (0, 4), bn = 0 for every n = 1, 2, . . .. For the coefficients an, by formulas (5.2.11), using the integration by parts formula we have

an = (1/2) ∫_0^2 x sin(nπx/4) dx + (1/2) ∫_2^4 (4 − x) sin(nπx/4) dx = (16/(π²n²)) sin(nπ/2).
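The closed form for an can be checked by numerical quadrature (our addition, a short Python verification of the first few coefficients):

```python
import numpy as np

x = np.linspace(0.0, 4.0, 4001)          # grid containing the corner x = 2
f = np.where(x < 2.0, x, 4.0 - x)        # the triangular initial displacement

def trapz(y, xs):
    # simple trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0

for n in range(1, 8):
    an_num = (2.0/4.0) * trapz(f * np.sin(n*np.pi*x/4.0), x)
    an_closed = 16.0/(np.pi**2 * n**2) * np.sin(n*np.pi/2.0)
    assert np.isclose(an_num, an_closed, atol=1e-4)
```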

Therefore, the solution u(x, t) of the problem is given by


(5.2.12) u(x, t) = (16/π²) Σ_{n=1}^∞ (sin(nπ/2)/n²) sin(nπx/4) cos(nπat/4).

The solution u(x, t) of the given problem, using d’Alembert’s method, is
given by

u(x, t) = ( fp(x + at) + fp(x − at) ) / 2,

where fp(x) is the odd, 8-periodic extension of the function f(x). This
solution can be easily derived from the solution u(x, t), given by (5.2.12).
Indeed, using the trigonometric formula

sin α cos β = (1/2)[ sin(α + β) + sin(α − β) ]
in (5.2.12), we can write the function u(x, t), given by (5.2.12), as

(5.2.13) u(x, t) = (1/2)·(16/π²) ∑_{n=1}^∞ (sin(nπ/2)/n²) sin(nπ(x + at)/4)
                 + (1/2)·(16/π²) ∑_{n=1}^∞ (sin(nπ/2)/n²) sin(nπ(x − at)/4).
5.2 SEPARATION OF VARIABLES FOR THE WAVE EQUATION 263

Now, since fp(x) is a continuous function on the whole real line R, its Fourier
sine series, given by

(16/π²) ∑_{n=1}^∞ (sin(nπ/2)/n²) sin(nπx/4),

converges to fp(x). Therefore, the first and second terms in (5.2.13) are
equal to

(1/2) fp(x + at) and (1/2) fp(x − at),

respectively, which was to be shown.
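The equivalence just shown can also be observed numerically. A short, illustrative Python check (not from the text; a = 1 and the sample point are assumptions) compares a partial sum of (5.2.12) against (1/2)(fp(x + at) + fp(x − at)), where the odd extension of f has period 2l = 8:

```python
import numpy as np

def f(x):
    # initial displacement of Example 5.2.1 on [0, 4]
    return np.where(x < 2.0, x, 4.0 - x)

def fp(x):
    # odd extension of f, periodic with period 2l = 8
    y = (x + 4.0) % 8.0 - 4.0        # reduce into [-4, 4)
    return np.sign(y) * f(np.abs(y))

def series(x, t, a=1.0, N=4000):
    # partial sum of (5.2.12)
    n = np.arange(1, N + 1)
    return 16.0 / np.pi**2 * np.sum(
        np.sin(n * np.pi / 2) / n**2
        * np.sin(n * np.pi * x / 4) * np.cos(n * np.pi * a * t / 4))

x0, t0 = 1.3, 0.6
s_val = series(x0, t0)
d_val = 0.5 * (fp(x0 + t0) + fp(x0 - t0))
```

The two values agree to within the 1/N truncation error of the series.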

Case 2°. Nonhomogeneous Wave Equation. Homogeneous Boundary Conditions.
Consider the initial boundary value problem

(5.2.14) utt(x, t) = a² uxx(x, t) + F(x, t), 0 < x < l, t > 0,
         u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l,
         u(0, t) = u(l, t) = 0.

In order to find the solution to problem (5.2.14) we split the problem into
the following two problems:

(5.2.15) vtt(x, t) = a² vxx(x, t), 0 < x < l, t > 0,
         v(x, 0) = f(x), vt(x, 0) = g(x), 0 < x < l,
         v(0, t) = v(l, t) = 0,

and

(5.2.16) wtt(x, t) = a² wxx(x, t) + F(x, t), 0 < x < l, t > 0,
         w(x, 0) = 0, wt(x, 0) = 0, 0 < x < l,
         w(0, t) = w(l, t) = 0.

Problem (5.2.15) has been considered in Case 1° and we already know how
to solve it. Let its solution be v(x, t). If w(x, t) is the solution of problem
(5.2.16), then
u(x, t) = v(x, t) + w(x, t)
will be the solution of the given problem (5.2.14). Therefore, we need to solve
only problem (5.2.16).
There are several approaches to solving problem (5.2.16). One of them,
presented without a rigorous justification, is the following.
For each fixed t > 0, we expand the function F(x, t) in the Fourier sine
series

(5.2.17) F(x, t) = ∑_{n=1}^∞ Fn(t) sin(nπx/l), 0 < x < l,

where

(5.2.18) Fn(t) = (2/l) ∫₀^l F(x, t) sin(nπx/l) dx, n = 1, 2, . . ..

Next, again for each fixed t > 0, we expand the unknown function w(x, t) in
the Fourier sine series

(5.2.19) w(x, t) = ∑_{n=1}^∞ wn(t) sin(nπx/l), 0 < x < l,

where

(5.2.20) wn(t) = (2/l) ∫₀^l w(x, t) sin(nπx/l) dx, n = 1, 2, . . ..

From the initial conditions w(x, 0) = wt (x, 0) = 0 it follows that

(5.2.21) wn (0) = wn′ (0) = 0.

If we substitute (5.2.17) and (5.2.19) into the wave equation (5.2.16) and
compare the Fourier coefficients we obtain

(5.2.22) wn″(t) + (n²π²a²/l²) wn(t) = Fn(t).
The solution of the second order, linear differential equation (5.2.22), in view
of the initial conditions (5.2.21), is given by

(5.2.23) wn(t) = ( l/(nπa) ) ∫₀^t Fn(s) sin( nπa(t − s)/l ) ds.
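This is the Duhamel (variation of parameters) solution of wn″ + ω²wn = Fn with ω = nπa/l and zero initial data. A hedged numerical sketch in Python (the forcing F(t) = t·e^{−t} and ω = 3 are arbitrary choices for illustration, not from the text) compares the formula with direct RK4 integration of the initial value problem:

```python
import numpy as np

def trap(v, dx):
    # composite trapezoidal rule
    return (v[0] + v[-1] + 2.0 * v[1:-1].sum()) * dx / 2.0

omega = 3.0                          # stands for n*pi*a/l (assumption)

def F(t):
    return t * np.exp(-t)            # sample forcing (assumption)

def w_formula(t_end, m=4001):
    # (1/omega) * integral_0^t F(s) sin(omega (t - s)) ds, cf. (5.2.23)
    s = np.linspace(0.0, t_end, m)
    return trap(F(s) * np.sin(omega * (t_end - s)), s[1] - s[0]) / omega

# Direct RK4 integration of w'' + omega^2 w = F(t), w(0) = w'(0) = 0
def rhs(t, y):
    return np.array([y[1], F(t) - omega**2 * y[0]])

t, h, y = 0.0, 2.0 / 4000, np.zeros(2)
for _ in range(4000):
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h / 2 * k1)
    k3 = rhs(t + h / 2, y + h / 2 * k2)
    k4 = rhs(t + h, y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

diff = abs(y[0] - w_formula(2.0))
```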

Let us take an example.


Example 5.2.2. Solve the wave equation

utt(x, t) = a² uxx(x, t) + xt, 0 < x < 4, t > 0,
u(x, 0) = f(x) = { x, 0 < x < 2; 4 − x, 2 < x < 4 },
ut(x, 0) = g(x) = 0, 0 < x < 4,
u(0, t) = u(4, t) = 0, t > 0.

Take a = 4 in the wave equation and plot the displacements u(x, t) of the
string at the moments t = 0, t = 0.15, t = 0.85 and t = 1.

Solution. The corresponding homogeneous wave equation with homogeneous
boundary conditions was solved in Example 5.2.1 and its solution is given by

(5.2.24) v(x, t) = (16/π²) ∑_{n=1}^∞ (sin(nπ/2)/n²) sin(nπx/4) cos(nπat/4).
Now, we solve the nonhomogeneous problem

wtt(x, t) = a² wxx(x, t) + xt, 0 < x < 4, t > 0,
w(x, 0) = 0, wt(x, 0) = 0, 0 < x < 4,
w(0, t) = w(4, t) = 0.
First, expand the function xt (for each fixed t > 0) in the Fourier sine series
on the interval (0, 4):

xt = ∑_{n=1}^∞ Fn(t) sin(nπx/4).
The coefficients Fn(t) in this expansion are given by

Fn(t) = (1/2) ∫₀⁴ xt sin(nπx/4) dx = −8t cos(nπ)/(nπ).
Next, again for each fixed t > 0, we expand the unknown function w(x, t) in
the Fourier sine series

w(x, t) = ∑_{n=1}^∞ wn(t) sin(nπx/4), 0 < x < 4,
where the coefficients wn(t) are determined using Equation (5.2.23), with l = 4:

wn(t) = ( 4/(nπa) ) ∫₀^t Fn(s) sin( nπa(t − s)/4 ) ds

      = −( 32 cos nπ/(a n²π²) ) ∫₀^t s sin( nπa(t − s)/4 ) ds

      = −( 128 cos nπ/(a³n⁴π⁴) ) ( nπat − 4 sin(nπat/4) ).
Thus,

(5.2.25) w(x, t) = −(128/(a³π⁴)) ∑_{n=1}^∞ (cos nπ/n⁴) ( nπat − 4 sin(nπat/4) ) sin(nπx/4).
Therefore, the solution u(x, t) of the given problem is u(x, t) = v(x, t) +
w(x, t), where v(x, t) and w(x, t) are given by (5.2.24) and (5.2.25), respec-
tively.
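As a sanity check (illustrative Python, not from the text): with l = 4 the closed form wn(t) = −(128 cos nπ/(a³n⁴π⁴))(nπat − 4 sin(nπat/4)) should satisfy wn″ + (nπa/4)² wn = −8t cos nπ/(nπ) with zero initial data; here wn″ is approximated by a central difference:

```python
import numpy as np

a = 4.0                               # wave speed used in the plots
residuals = []
for n in (1, 2, 3):
    omega = n * np.pi * a / 4.0
    C = -128.0 * np.cos(n * np.pi) / (a**3 * n**4 * np.pi**4)
    w = lambda t, n=n, C=C, omega=omega: C * (n * np.pi * a * t - 4.0 * np.sin(omega * t))
    Fn = lambda t, n=n: -8.0 * t * np.cos(n * np.pi) / (n * np.pi)
    t0, h = 0.7, 1e-4
    wpp = (w(t0 + h) - 2.0 * w(t0) + w(t0 - h)) / h**2   # central difference
    residuals.append(abs(wpp + omega**2 * w(t0) - Fn(t0)))

max_residual = max(residuals)
```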
The plots of u(x, t) at the given time instances are displayed in Figure
5.2.1.
Figure 5.2.1. The displacements u(x, t) of the string at t = 0, t = 0.15, t = 0.85 and t = 1 (panels (a)–(d)).

Case 3°. Nonhomogeneous Wave Equation with Nonhomogeneous Dirichlet
Boundary Conditions. Consider the initial boundary value problem

(5.2.26) utt(x, t) = a² uxx(x, t) + F(x, t), 0 < x < l, t > 0,
         u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l,
         u(0, t) = φ(t), u(l, t) = ψ(t).
The solution of this problem can be found by superposition of the solution
v(x, t) of the homogeneous problem (5.2.15), the solution of problem (5.2.16)
from Case 2°, and the solution w(x, t) of the problem

(5.2.27) wtt(x, t) = a² wxx(x, t), 0 < x < l, t > 0,
         w(x, 0) = 0, wt(x, 0) = 0, 0 < x < l,
         w(0, t) = φ(t), w(l, t) = ψ(t).
If we introduce a new function w̃(x, t) by

w̃(x, t) = w(x, t) − (x/l)ψ(t) + ((x − l)/l)φ(t),

then problem (5.2.27) is transformed to the problem:

(5.2.28) w̃tt(x, t) = a² w̃xx(x, t) + F̃(x, t), 0 < x < l, t > 0,
         w̃(0, t) = w̃(l, t) = 0, t > 0,
         w̃(x, 0) = f̃(x), w̃t(x, 0) = g̃(x),

where
F̃(x, t) = ((x − l)/l) φ″(t) − (x/l) ψ″(t),
f̃(x) = ((x − l)/l) φ(0) − (x/l) ψ(0),
g̃(x) = ((x − l)/l) φ′(0) − (x/l) ψ′(0).
Notice that problem (5.2.28) has homogeneous boundary conditions and a
nonhomogeneous equation; it was already considered in Case 2°, so we know how to solve it.
Case 4°. Homogeneous Wave Equation. Neumann Boundary Conditions.
Consider the initial boundary value problem

(5.2.29) utt(x, t) = a² uxx(x, t), 0 < x < l, t > 0,
         u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l,
         ux(0, t) = ux(l, t) = 0, t > 0.

We will solve this problem by the separation of variables method. Let the
solution u(x, t) of the above problem be of the form

(5.2.30) u(x, t) = X(x)T (t),

where X(x) and T (t) are functions of single variables x and t, respectively.
Differentiating (5.2.30) with respect to x and t and substituting the par-
tial derivatives in the wave equation in problem (5.2.29) we obtain

X ′′ (x) 1 T ′′ (t)
(5.2.31) = 2 .
X(x) a T (t)

Equation (5.2.31) holds identically for every 0 < x < l and every t > 0.
Notice that the left side of this equation is a function which depends only on
x, while the right side is a function which depends on t. Since x and t are
independent variables this can happen only if each function in both sides of
(5.2.31) is equal to the same constant λ.

X ′′ (x) 1 T ′′ (t)
= 2 =λ
X(x) a T (t)

From the last equation we obtain the two ordinary differential equations

(5.2.32) X ′′ (x) − λX(x) = 0,

and

(5.2.33) T ′′ (t) − a2 λT (t) = 0.

From the boundary conditions

ux (0, t) = ux (l, t) = 0, t>0



it follows that
X ′ (0)T (t) = X ′ (l)T (t) = 0, t > 0.
To avoid the trivial solution, from the last equations we obtain

(5.2.34) X ′ (0) = X ′ (l) = 0.

Solving the eigenvalue problem (5.2.32), (5.2.34) (see Chapter 3), we obtain
that its eigenvalues λ = λn and the corresponding eigenfunctions Xn(x) are

λ0 = 0, X0(x) = 1, 0 < x < l

and

λn = −(nπ/l)², Xn(x) = cos(nπx/l), n = 1, 2, . . ., 0 < x < l.
The solutions of the differential equation (5.2.33), corresponding to the above
found λn, are given by

T0(t) = a0 + b0 t,
Tn(t) = an cos(nπat/l) + bn sin(nπat/l), n = 1, 2, . . .,
where an and bn are constants which will be determined.
Therefore we obtain a sequence of functions

{ un(x, t) = Xn(x)Tn(t), n = 0, 1, . . . },

given by

u0(x, t) = a0 + b0 t,
un(x, t) = ( an cos(nπat/l) + bn sin(nπat/l) ) cos(nπx/l), n ∈ N,

each of which satisfies the wave equation and the Neumann boundary condi-
tions in problem (5.2.29). Since the given wave equation and the boundary
conditions are linear and homogeneous, a function u(x, t) of the form

(5.2.35) u(x, t) = a0 + b0 t + ∑_{n=1}^∞ ( an cos(nπat/l) + bn sin(nπat/l) ) cos(nπx/l)

also will satisfy the wave equation and the boundary conditions. If we assume
that the above series is convergent and it can be differentiated term by term
with respect to t, then from (5.2.35) and the initial conditions in problem
(5.2.29) we obtain

(5.2.36) f(x) = u(x, 0) = a0 + ∑_{n=1}^∞ an cos(nπx/l), 0 < x < l

and

(5.2.37) g(x) = ut(x, 0) = b0 + ∑_{n=1}^∞ bn (nπa/l) cos(nπx/l), 0 < x < l.

Using the Fourier cosine series (from Chapter 1) for the functions f(x) and
g(x), or the fact, from Chapter 2, that the eigenfunctions

1, cos(πx/l), cos(2πx/l), . . ., cos(nπx/l), . . .

are pairwise orthogonal on the interval [0, l], from (5.2.36) and (5.2.37) we
obtain


(5.2.38) a0 = (1/l) ∫₀^l f(x) dx, an = (2/l) ∫₀^l f(x) cos(nπx/l) dx, n = 1, 2, . . .,
         b0 = (1/l) ∫₀^l g(x) dx, bn = (2/(nπa)) ∫₀^l g(x) cos(nπx/l) dx, n = 1, 2, . . ..

Remark. The problem (5.2.29) can be solved in a similar way to problem
(5.2.1)–(5.2.3), except we use the Fourier cosine series instead of the Fourier sine
series (see Exercise 9 of this section).
Example 5.2.3. Using the separation of variables method solve the following
problem:

utt(x, t) = uxx(x, t), 0 < x < π, t > 0,
u(x, 0) = π² − x², ut(x, 0) = 0, 0 < x < π,
ux(0, t) = ux(π, t) = 0, t > 0.

Solution. Let the solution u(x, t) of the problem be of the form

u(x, t) = X(x)T (t).

From the wave equation and the given boundary conditions we obtain the
eigenvalue problem
(5.2.39) X″(x) − λX(x) = 0, 0 < x < π; X′(0) = X′(π) = 0,

and the ordinary differential equation

(5.2.40) T ′′ (t) − λT (t) = 0, t > 0.



Solving the eigenvalue problem (5.2.39) we obtain that its eigenvalues λ = λn


and the corresponding eigenfunctions Xn (x) are
λ0 = 0, X0 (x) = 1, 0 < x < π
and
λn = −n2 , Xn (x) = cos nx, n = 1, 2, . . . , 0 < x < π.
The solutions of the differential equation (5.2.40), corresponding to the above
found λn , are given by
T0 (t) = a0 + b0 t
Tn (t) = an cos nt + bn sin nt, n = 1, 2, . . .,
where an and bn are constants which will be determined.
Hence, the solution of the given problem will be of the form

u(x, t) = a0 + b0 t + ∑_{n=1}^∞ ( an cos nt + bn sin nt ) cos nx, 0 < x < π, t > 0.

From the initial condition ut (x, 0) = 0, 0 < x < π, and the orthogonality
property of the eigenfunctions
{1, cos x, cos 2x, . . . , cos nx, . . .}
on the interval [0, π] we obtain bn = 0 for every n = 0, 1, 2, . . .. From the
other initial condition u(x, 0) = π 2 − x2 , 0 < x < π, and again from the
orthogonality of the eigenfunctions, we obtain
a0 = (1/π) ∫₀^π (π² − x²) dx = 2π²/3.

For n = 1, 2, . . . we have

an = ( 1/∫₀^π cos² nx dx ) ∫₀^π (π² − x²) cos nx dx = −(2/π) ∫₀^π x² cos nx dx
   = −(2/π) · (2π/n²) cos nπ = (4/n²)(−1)^{n+1}.
Therefore, the solution u(x, t) of the problem is given by

u(x, t) = 2π²/3 + ∑_{n=1}^∞ (4/n²)(−1)^{n+1} cos nt cos nx, 0 < x < π, t > 0.
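The coefficients a0 = 2π²/3 and an = 4(−1)^{n+1}/n² can be confirmed by numerical quadrature (an illustrative Python check, not part of the text):

```python
import numpy as np

def trap(v, dx):
    # composite trapezoidal rule
    return (v[0] + v[-1] + 2.0 * v[1:-1].sum()) * dx / 2.0

x = np.linspace(0.0, np.pi, 4001)
dx = x[1] - x[0]
f = np.pi**2 - x**2

a0 = trap(f, dx) / np.pi
an = [2.0 / np.pi * trap(f * np.cos(n * x), dx) for n in range(1, 6)]
expected = [4.0 * (-1) ** (n + 1) / n**2 for n in range(1, 6)]

a0_err = abs(a0 - 2.0 * np.pi**2 / 3.0)
an_err = max(abs(c - e) for c, e in zip(an, expected))
```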

The next example describes vibrations of a string in a medium (air or


water).

Example 5.2.4. Using the separation of variables method solve the following
problem:

utt(x, t) = a² uxx(x, t) − c² u, 0 < x < l, t > 0,
u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < l,
u(0, t) = u(l, t) = 0, t > 0.

The term −c²u represents the force of reaction of the medium.


Solution. Let the solution u(x, t) of the problem be of the form

u(x, t) = X(x)T (t).

If we substitute the partial derivatives of u into the string equation, then we
obtain

X(x)T″(t) = a² X″(x)T(t) − c² X(x)T(t), 0 < x < l, t > 0.

From the last equation it follows that

a² X″(x)/X(x) − c² = T″(t)/T(t).

Since x and t are independent, from the above equation we have

a² X″(x)/X(x) − c² = −λ,

i.e.,

(5.2.41) X″(x) − ((c² − λ)/a²) X(x) = 0

and

(5.2.42) T″(t)/T(t) = −λ,

where λ is a constant to be determined. From u(0, t) = u(l, t) = 0 it follows


that X(x) satisfies the boundary conditions

(5.2.43) X(0) = X(l) = 0.

Now, we will solve the eigenvalue problem (5.2.41), (5.2.43). If c² − λ = 0,
then the general solution of the differential equation (5.2.41) is given by

X(x) = A + Bx,

where A and B are arbitrary constants. From the conditions (5.2.43) it
follows that A = B = 0. Thus, X(x) ≡ 0, and so λ = c² can’t be an
eigenvalue of the problem.
If c² − λ > 0, i.e., if λ < c², then the general solution of (5.2.41) is

X(x) = A e^{(√(c² − λ)/a) x} + B e^{−(√(c² − λ)/a) x},

where A and B are arbitrary constants. From the conditions (5.2.43) it again
follows that A = B = 0. Thus, X(x) ≡ 0, and so no λ < c² can be
an eigenvalue of the problem.
Finally, if c² − λ < 0, i.e., λ > c², then the general solution of (5.2.41) is

X(x) = A sin( (√(λ − c²)/a) x ) + B cos( (√(λ − c²)/a) x ),

where A and B are arbitrary constants. From the boundary condition
X(0) = 0 it follows that B = 0. From the other boundary condition X(l) = 0
it follows that

sin( (√(λ − c²)/a) l ) = 0.

From the last equation we have

(√(λ − c²)/a) l = nπ, n ∈ N.
Therefore, the eigenvalues of the eigenvalue problem are

(5.2.44) λn = c² + a²n²π²/l², n ∈ N.

The corresponding eigenfunctions are

Xn(x) = sin(nπx/l), n ∈ N.
The general solution of the differential equation (5.2.42), which corresponds
to the above found eigenvalues λn, is

Tn(t) = an cos(√λn t) + bn sin(√λn t).
Therefore, the solution of the given initial value problem is

u(x, t) = ∑_{n=1}^∞ ( an cos(√λn t) + bn sin(√λn t) ) sin(nπx/l),

where an and bn can be found using the orthogonality property of the functions

sin(πx/l), sin(2πx/l), . . ., sin(nπx/l), . . .

and the initial conditions for the function u(x, t):

an = (2/l) ∫₀^l f(x) sin(nπx/l) dx, bn = (2/(l√λn)) ∫₀^l g(x) sin(nπx/l) dx, n ∈ N.
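Each mode can be checked against the PDE directly. The following illustrative Python snippet (the sample values a = 2, c = 1.5, l = 3, n = 2 are assumptions) evaluates a finite-difference residual of utt = a²uxx − c²u for un(x, t) = cos(√λn t) sin(nπx/l), with λn from (5.2.44):

```python
import numpy as np

a, c, l, n = 2.0, 1.5, 3.0, 2        # sample parameters (assumption)
lam = c**2 + a**2 * n**2 * np.pi**2 / l**2     # eigenvalue (5.2.44)

def u(x, t):
    return np.cos(np.sqrt(lam) * t) * np.sin(n * np.pi * x / l)

# central-difference residual of u_tt = a^2 u_xx - c^2 u at a sample point
x0, t0, h = 1.1, 0.4, 1e-4
utt = (u(x0, t0 + h) - 2.0 * u(x0, t0) + u(x0, t0 - h)) / h**2
uxx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h**2
resid = abs(utt - (a**2 * uxx - c**2 * u(x0, t0)))
```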

Exercises for Section 5.2.

1. Using separation of variables solve the equation

utt (x, t) = uxx (x, t), 0 < x < π, t > 0,

subject to the Dirichlet boundary conditions

u(0, t) = u(π, t) = 0, t>0

and the following initial conditions:

(a) u(x, 0) = 0, ut (x, 0) = 1, 0 < x < π.

(b) u(x, 0) = πx − x2 , ut (x, 0) = 0, 0 < x < π.


(c) u(x, 0) = { (3/2)x, 0 < x < 2π/3; 3(π − x), 2π/3 < x < π }, ut(x, 0) = 0, 0 < x < π.

(d) u(x, 0) = sin x, ut(x, 0) = { 0, 0 < x < π/4; 1, π/4 < x < 3π/4; 0, 3π/4 < x < π }.

(e) u(x, 0) = { x, 0 < x < π/2; π − x, π/2 < x < π }, ut(x, 0) = 0, 0 < x < π.

2. Use the separation of variables method to solve the damped wave equation

u_tt(x, t) = a² u_xx(x, t) − 2k u_t(x, t), 0 < x < l, t > 0,
u(0, t) = u(l, t) = 0, t > 0,
u(x, 0) = f(x), u_t(x, 0) = g(x), 0 < x < l.

The term −2ku_t(x, t) represents the frictional forces that cause the damping.
Consider the following three cases:

(a) k < πa/l; (b) k = πa/l; (c) k > πa/l.

3. Solve the damped wave equation

u_tt(x, t) = u_xx(x, t) − 4u_t(x, t), 0 < x < π, t > 0,
u(0, t) = u(π, t) = 0, t > 0,
u(x, 0) = 1, u_t(x, 0) = 0, 0 < x < π.

4. Solve the damped wave equation

u_tt(x, t) = u_xx(x, t) − 2u_t(x, t), 0 < x < π, t > 0,
u(0, t) = u(π, t) = 0, t > 0,
u(x, 0) = 0, u_t(x, 0) = sin x + sin 2x, 0 < x < π.

5. Solve the initial boundary value problem

u_tt(x, t) = a² u_xx(x, t), 0 < x < l, t > 0,
u(x, 0) = f(x), u_t(x, 0) = g(x), 0 < x < l,
u(0, t) = u_x(l, t) = 0, t > 0.

6. Solve the problem

u_tt(x, t) = 4u_xx(x, t), 0 < x < 1, t > 0,
u(x, 0) = x(1 − x), u_t(x, 0) = cos(πx/2), 0 < x < 1,
u(0, t) = u(1, t) = 0, t > 0.

7. Solve the following wave equation with Neumann boundary conditions:

u_tt(x, t) = 4u_xx(x, t), 0 < x < π, t > 0,
u_x(0, t) = u_x(π, t) = 0, t > 0,
u(x, 0) = sin x, u_t(x, 0) = 0, 0 < x < π.

8. Solve the following wave equation with Neumann boundary conditions:

u_tt(x, t) = u_xx(x, t), 0 < x < 1, t > 0,
u_x(0, t) = u_x(1, t) = 0, t > 0,
u(x, 0) = 0, u_t(x, 0) = 2x − 1, 0 < x < 1.

9. Solve the initial boundary value problem

u_tt(x, t) = a² u_xx(x, t), 0 < x < l, t > 0,
u(x, 0) = f(x), u_t(x, 0) = g(x), 0 < x < l,
u_x(0, t) = u_x(l, t) = 0, t > 0

by expanding the functions f(x), g(x) and u(x, t) in the Fourier cosine
series

f(x) = a0/2 + ∑_{n=1}^∞ an cos(nπx/l),

g(x) = a′0/2 + ∑_{n=1}^∞ a′n cos(nπx/l),

u(x, t) = A0(t)/2 + ∑_{n=1}^∞ An(t) cos(nπx/l).

In Exercises 10–13 solve the wave equation

utt (x, t) = a2 uxx (x, t) + F (x, t), 0 < x < l, t > 0,

subject to the initial conditions

u(x, 0) = ut (x, 0) = 0, 0<x<l

and the following forcing functions F (x, t) and boundary conditions.

10. F(x, t) = Ae^{−t} sin(πx/l), u(0, t) = u(l, t) = 0.

11. F(x, t) = Axe^{−t}, u(0, t) = u(l, t) = 0.

12. F(x, t) = A sin t, u(0, t) = u_x(l, t) = 0.

13. F(x, t) = Ae^{−t} cos(πx/2l), u_x(0, t) = u(l, t) = 0.

In Exercises 14–16 solve the wave equation

utt (x, t) = uxx (x, t), 0 < x < π, t > 0,

subject to the following initial and boundary conditions.

14. u(0, t) = t2 , u(π, t) = t3 ,

u(x, 0) = sin x,

ut (x, 0) = 0, 0 < x < π, t > 0.



15. u(0, t) = e−t , u(π, t) = t,

u(x, 0) = sin x cos x,

ut (x, 0) = 1, 0 < x < π, t > 0.

16. u(0, t) = t, u_x(π, t) = 1,
u(x, 0) = sin(x/2),
u_t(x, 0) = 1, 0 < x < π, t > 0.

5.3 The Wave Equation on Rectangular Domains.

In this section we will study the two and three dimensional wave equation
on rectangular domains. We begin by considering the homogeneous wave
equation on a rectangle.
5.3.1 Homogeneous Wave Equation on a Rectangle
Consider an infinitely thin, perfectly elastic membrane stretched across a
rectangular frame. Let

D = {(x, y) : 0 < x < a, 0 < y < b}

be the rectangle, whose boundary ∂D is the frame.


The following initial boundary value problem describes the vibration of the
membrane:

(5.3.1) utt = c² ∆u ≡ c² ( uxx + uyy ), (x, y) ∈ D, t > 0,
        u(x, y, 0) = f(x, y), (x, y) ∈ D,
        ut(x, y, 0) = g(x, y), (x, y) ∈ D,
        u(x, y, t) = 0, (x, y) ∈ ∂D, t > 0.

If a solution u(x, y, t) of the wave equation is of the form

u(x, y, t) = W (x, y)T (t), (x, y) ∈ D, t > 0,

then the wave equation becomes

W (x, y)T ′′ (t) = c2 ∆ W (x, y) T (t),

where ∆ is the Laplace operator. The last equation can be written in the
form

∆W(x, y)/W(x, y) = T″(t)/(c² T(t)), (x, y) ∈ D, t > 0.

The above equation is possible only when both of its sides are equal to the
same constant, denoted by −λ:

T″(t)/(c² T(t)) = −λ,
∆W(x, y)/W(x, y) = −λ.
From the second equation it follows that

(5.3.2) ∆ W (x, y) + λW (x, y) = 0, (x, y) ∈ D,

and from the first equation we have

(5.3.3) T ′′ (t) + c2 λT (t) = 0, t > 0.

To avoid the trivial solution, from the boundary conditions for u(x, y, t), the
following boundary conditions for the function W(x, y) are obtained:

(5.3.4) W(x, y) = 0, (x, y) ∈ ∂D.

Equation (5.3.2) is called the Helmholtz Equation.


In order to find the eigenvalues λ of Equation (5.3.2) (eigenvalues of the
Laplace operator ∆), subject to the boundary conditions (5.3.4), we separate
the variables x and y. We write W (x, y) in the form

W (x, y) = X(x)Y (y).

If we substitute W(x, y) in Equation (5.3.2) we obtain

X″(x)Y(y) + X(x)Y″(y) + λX(x)Y(y) = 0, (x, y) ∈ D,

which can be written in the form

X″(x)/X(x) = −Y″(y)/Y(y) − λ, (x, y) ∈ D.

The last equation is possible only when both sides are equal to the same
constant, denoted by −µ:

X″(x)/X(x) = −µ, −Y″(y)/Y(y) − λ = −µ, (x, y) ∈ D.
The boundary conditions (5.3.4) for the function W (x, y) imply X(0) =
X(a) = 0 and Y (0) = Y (b) = 0. Therefore, we have the following boundary
value problems:
(5.3.5) X″(x) + µX(x) = 0, X(0) = X(a) = 0, 0 < x < a,
        Y″(y) + (λ − µ)Y(y) = 0, Y(0) = Y(b) = 0, 0 < y < b.

We have already solved the eigenvalue problems (5.3.5) (see Chapter 2).
Their eigenvalues and corresponding eigenfunctions are

µ = m²π²/a², X(x) = sin(mπx/a), m = 1, 2, . . .,

λ − µ = n²π²/b², Y(y) = sin(nπy/b), n = 1, 2, . . ..

Therefore, the eigenvalues λmn and corresponding eigenfunctions Wmn (x, y)


of the eigenvalue problem (5.3.2), (5.3.4) are given by

λmn = m²π²/a² + n²π²/b²

and

Wmn(x, y) = sin(mπx/a) sin(nπy/b).
A general solution of Equation (5.3.3), corresponding to the above found
λmn , is given by
Tmn(t) = amn cos(√λmn ct) + bmn sin(√λmn ct).

Therefore, the solution u = u(x, y, t) of the problem (5.3.1) will be of the
form

(5.3.6) u(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ sin(mπx/a) sin(nπy/b) ( amn cos(√λmn ct) + bmn sin(√λmn ct) ),

where the coefficients amn and bmn will be found from the initial conditions
for the function u(x, y, t) and the following orthogonality property of the
eigenfunctions

f_{m,n}(x, y) ≡ sin(mπx/a) sin(nπy/b), m, n = 1, 2, . . .

on the rectangle [0, a] × [0, b]:

∫₀^a ∫₀^b f_{m,n}(x, y) f_{p,q}(x, y) dx dy = 0, for every (m, n) ̸= (p, q).

Using this property, we find

(5.3.7) amn = (4/ab) ∫₀^a ∫₀^b f(x, y) sin(mπx/a) sin(nπy/b) dy dx

and

(5.3.8) bmn = ( 4/(abcπ √(m²/a² + n²/b²)) ) ∫₀^a ∫₀^b g(x, y) sin(mπx/a) sin(nπy/b) dy dx.

Remark. For a given function f(x, y) on a rectangle [0, a] × [0, b], the series
of the form

∑_{m=1}^∞ ∑_{n=1}^∞ Amn sin(mπx/a) sin(nπy/b),

∑_{m=0}^∞ ∑_{n=0}^∞ Bmn cos(mπx/a) cos(nπy/b),

where

Amn = (4/ab) ∫₀^a ∫₀^b f(x, y) sin(mπx/a) sin(nπy/b) dy dx,

Bmn = (4/ab) ∫₀^a ∫₀^b f(x, y) cos(mπx/a) cos(nπy/b) dy dx,

are called the Double Fourier Sine Series and the Double Fourier Cosine
Series, respectively, of the function f(x, y) on the rectangle [0, a] × [0, b].

Example 5.3.1. Solve the following membrane problem.

utt(x, y, t) = (1/π²) ( uxx(x, y, t) + uyy(x, y, t) ), (x, y) ∈ S, t > 0,
u(x, y, 0) = sin 3πx sin πy, (x, y) ∈ S,
ut(x, y, 0) = 0, (x, y) ∈ S,
u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,

where S is the unit square

S = {(x, y) : 0 < x < 1, 0 < y < 1}

and ∂S is its boundary.
Display the shape of the membrane at the moment t = 0.7.
Solution. From g(x, y) = 0 and (5.3.8) we have bmn = 0 for all m, n =
1, 2, . . .. From f(x, y) = sin 3πx sin πy and (5.3.7) we have

amn = 4 ∫₀¹ ∫₀¹ sin 3πx sin πy sin mπx sin nπy dy dx
    = 4 ∫₀¹ sin 3πx sin mπx dx · ∫₀¹ sin πy sin nπy dy
    = { 1, m = 3, n = 1; 0, otherwise }.

Therefore, the solution of this problem, by formula (5.3.6), is given by

u(x, y, t) = sin 3πx sin πy cos(√10 t).

Notice that when √10 t = π/2, i.e., when t = π/(2√10) ≈ 0.4966, for the first
time the vertical displacement of the membrane from the Oxy plane is zero.
The shape of the membrane at several time instances is displayed in Figure
5.3.1.
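The orthogonality computation for amn can be reproduced numerically (an illustrative Python check using 2-D trapezoidal quadrature; not from the text):

```python
import numpy as np

def trap(v, dx):
    # composite trapezoidal rule along the last axis
    return (v[..., 0] + v[..., -1] + 2.0 * v[..., 1:-1].sum(axis=-1)) * dx / 2.0

x = np.linspace(0.0, 1.0, 801)
y = np.linspace(0.0, 1.0, 801)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")
f = np.sin(3 * np.pi * X) * np.sin(np.pi * Y)

def a_mn(m, n):
    integrand = f * np.sin(m * np.pi * X) * np.sin(n * np.pi * Y)
    # integrate over y (last axis), then over x
    return float(4.0 * trap(trap(integrand, dx), dx))

a31, a11, a32 = a_mn(3, 1), a_mn(1, 1), a_mn(3, 2)
```

Only a31 is (numerically) 1; all other coefficients vanish, as found above.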

Figure 5.3.1. The shape of the membrane at t = 0, t = 0.4966, t = 0.7 and t = 1.

5.3.2 Nonhomogeneous Wave Equation on a Rectangle.


Let R be the rectangle given by

R = {(x, y) : 0 < x < a, 0 < y < b}

whose boundary is ∂R.


Consider the following two dimensional initial boundary value problem for
the wave equation

(5.3.9) utt = c² ( uxx + uyy ) + F(x, y, t), (x, y) ∈ R, t > 0,
        u(x, y, 0) = f(x, y), ut(x, y, 0) = g(x, y), (x, y) ∈ R,
        u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.

As in the one dimensional case, we split problem (5.3.9) into the following
two problems:

(5.3.10) vtt = c² ( vxx + vyy ), (x, y) ∈ R, t > 0,
         v(x, y, 0) = f(x, y), vt(x, y, 0) = g(x, y), (x, y) ∈ R,
         v(x, y, t) = 0, (x, y) ∈ ∂R, t > 0,

and

(5.3.11) wtt = c² ( wxx + wyy ) + F(x, y, t), (x, y) ∈ R, t > 0,
         w(x, y, 0) = 0, wt(x, y, 0) = 0, (x, y) ∈ R,
         w(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.

Problem (5.3.10) was considered in Section 5.3.1. Let its solution be v(x, y, t). If
w(x, y, t) is the solution of problem (5.3.11), then

u(x, y, t) = v(x, y, t) + w(x, y, t)

will be the solution of the given problem (5.3.9). So, it remains to solve
problem (5.3.11).
For each fixed t > 0, we expand the function F(x, y, t) in the double
Fourier sine series

(5.3.12) F(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ Fmn(t) sin(mπx/a) sin(nπy/b), (x, y) ∈ R, t > 0,

where

(5.3.13) Fmn(t) = (4/ab) ∫₀^a ∫₀^b F(x, y, t) sin(mπx/a) sin(nπy/b) dy dx.

Next, for each fixed t > 0, we expand the unknown function w(x, y, t) in the
double Fourier sine series

(5.3.14) w(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ wmn(t) sin(mπx/a) sin(nπy/b), (x, y) ∈ R, t > 0,

where

wmn(t) = (4/ab) ∫₀^a ∫₀^b w(x, y, t) sin(mπx/a) sin(nπy/b) dy dx.

From the initial conditions w(x, y, 0) = wt(x, y, 0) = 0 it follows that

(5.3.15) wmn(0) = w′mn(0) = 0.

If we substitute (5.3.12) and (5.3.14) into the wave equation (5.3.11) and
compare the Fourier coefficients we obtain

w″mn(t) + c² ( m²π²/a² + n²π²/b² ) wmn(t) = Fmn(t).

The last equation is a linear, second order, nonhomogeneous ordinary differential
equation with constant coefficients and given initial conditions (5.3.15),
and it can be solved.
Example 5.3.2. Solve the following forced membrane problem

(5.3.16) utt = (1/π²) ( uxx + uyy ) + xyt, (x, y) ∈ S, t > 0,
         u(x, y, 0) = sin 3πx sin πy, ut(x, y, 0) = 0, (x, y) ∈ S,
         u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,

where
S = {(x, y) : 0 < x < 1, 0 < y < 1}
is the unit square and ∂S is its boundary.
Solution. The corresponding homogeneous problem is

(5.3.17) vtt = (1/π²) ( vxx + vyy ), (x, y) ∈ S, t > 0,
         v(x, y, 0) = sin 3πx sin πy, vt(x, y, 0) = 0, (x, y) ∈ S,
         v(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,

and its solution v(x, y, t), found in Example 5.3.1, is given by

(5.3.18) v(x, y, t) = sin 3πx sin πy cos(√10 t).

The corresponding nonhomogeneous problem is

(5.3.19) wtt = (1/π²) ( wxx + wyy ) + xyt, (x, y) ∈ S, t > 0,
         w(x, y, 0) = 0, wt(x, y, 0) = 0, (x, y) ∈ S,
         w(x, y, t) = 0, (x, y) ∈ ∂S, t > 0.

For each fixed t > 0, expand the functions xyt and w(x, y, t) in the double
Fourier sine series

(5.3.20) xyt = ∑_{m=1}^∞ ∑_{n=1}^∞ Fmn(t) sin mπx sin nπy, (x, y) ∈ S, t > 0,

where

(5.3.21) Fmn(t) = 4 ∫₀¹ ∫₀¹ xyt sin mπx sin nπy dy dx
                = 4t ( ∫₀¹ x sin mπx dx ) ( ∫₀¹ y sin nπy dy )
                = 4t (−1)^m (−1)^n / (mnπ²), m, n = 1, 2, . . .,
and

(5.3.22) w(x, y, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ wmn(t) sin mπx sin nπy, (x, y) ∈ S, t > 0,

where

wmn(t) = 4 ∫₀¹ ∫₀¹ w(x, y, t) sin mπx sin nπy dy dx, m, n = 1, 2, . . ..

From the initial conditions w(x, y, 0) = wt(x, y, 0) = 0 it follows that

(5.3.23) wmn(0) = w′mn(0) = 0.

If we substitute (5.3.20) and (5.3.22) into the wave equation (5.3.19) and
compare the Fourier coefficients we obtain

w″mn(t) + ( m² + n² ) wmn(t) = 4t (−1)^{m+n} / (mnπ²).

The solution of the last equation, subject to the initial conditions (5.3.23), is

wmn(t) = ( 4(−1)^m(−1)^n / (mn(m² + n²)π²) ) t − ( 4(−1)^m(−1)^n / (mn(m² + n²)^{3/2} π²) ) sin( √(m² + n²) t )

and so, from (5.3.22), it follows that the solution w = w(x, y, t) is given by

(5.3.24) w = ∑_{m=1}^∞ ∑_{n=1}^∞ ( 4(−1)^m(−1)^n / (mn λ²mn π²) ) ( t − sin(λmn t)/λmn ) sin mπx sin nπy,

where
λmn = √(m² + n²).
Hence, the solution u(x, y, t) of the given problem (5.3.16) is

u(x, y, t) = v(x, y, t) + w(x, y, t),

where v(x, y, t) and w(x, y, t) are given by (5.3.18) and (5.3.24), respectively.
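The closed form for wmn can be verified against its ODE (illustrative Python, not from the text; w″ is approximated by a central difference, here for (m, n) = (1, 2)):

```python
import numpy as np

m, n = 1, 2
lam = np.sqrt(m**2 + n**2)
c = 4.0 * (-1) ** (m + n) / (m * n * np.pi**2)   # right-hand side is c*t

def w(t):
    # closed form with zero initial data: (c/lam^2) (t - sin(lam t)/lam)
    return c / lam**2 * (t - np.sin(lam * t) / lam)

t0, h = 0.9, 1e-4
wpp = (w(t0 + h) - 2.0 * w(t0) + w(t0 - h)) / h**2
resid = abs(wpp + lam**2 * w(t0) - c * t0)
```

The residual of w″mn + (m² + n²)wmn = ct is at the level of the finite-difference error, and w(0) = 0 exactly.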

5.3.3 The Wave Equation on a Rectangular Solid.


Let V be the three dimensional solid given by

V = {(x, y, z) : 0 < x < a, 0 < y < b, 0 < z < c}

whose boundary is ∂V .
Consider the initial boundary value problem

(5.3.25) utt = A² ( uxx + uyy + uzz ), (x, y, z) ∈ V, t > 0,
         u(x, y, z, 0) = f(x, y, z), (x, y, z) ∈ V,
         ut(x, y, z, 0) = g(x, y, z), (x, y, z) ∈ V,
         u(x, y, z, t) = 0, (x, y, z) ∈ ∂V, t > 0.

This problem can be solved by the separation of variables method.


Let the solution u(x, y, z, t) of the problem be of the form

u(x, y, z, t) = X(x)Y (y)Z(z)T (t), (x, y, z) ∈ V, t > 0.

From the boundary conditions for the function u(x, y, z, t) we have

(5.3.26) X(0) = Y (0) = Z(0) = X(a) = Y (b) = Z(c) = 0.

If we differentiate this function twice with respect to the variables and
substitute the partial derivatives in the wave equation, after a rearrangement
we obtain

T″(t)/(A²T(t)) = X″(x)/X(x) + Y″(y)/Y(y) + Z″(z)/Z(z).
From the last equation it follows that

(5.3.27) T ′′ (t) + λA2 T (t) = 0, t>0



and

X″(x)/X(x) = −Y″(y)/Y(y) − Z″(z)/Z(z) − λ
for some constant λ.
The above equation is possible only if

X″(x)/X(x) = −µ,

i.e.,

(5.3.28) X″(x) + µX(x) = 0, 0 < x < a

and

−Y″(y)/Y(y) − Z″(z)/Z(z) − λ = −µ,

where µ is a constant.
The last equation can be written in the form

−Y″(y)/Y(y) = Z″(z)/Z(z) + λ − µ,

which is possible only if

−Y″(y)/Y(y) = Z″(z)/Z(z) + λ − µ = ν,

where ν is a constant.
From the last equations we obtain

(5.3.29) Y ′′ (y) + νY (y) = 0, 0<y<b

and

(5.3.30) Z ′′ (z) + (λ − µ − ν)Z(z) = 0, 0 < z < c.

Solving the eigenvalue problems (5.3.28), (5.3.29), (5.3.30), (5.3.26) we find

µi = i²π²/a², Xi(x) = sin(iπx/a), 0 < x < a, i ∈ N,
νj = j²π²/b², Yj(y) = sin(jπy/b), 0 < y < b, j ∈ N,
λijk − µi − νj = k²π²/c², Zk(z) = sin(kπz/c), 0 < z < c, i, j, k ∈ N.

If we solve the differential equation (5.3.27) for the above obtained λijk
we have

Tijk(t) = aijk cos(A√λijk t) + bijk sin(A√λijk t).

Therefore, the solution u = u(x, y, z, t) of the three dimensional problem
(5.3.25) is given by

u = ∑_{i=1}^∞ ∑_{j=1}^∞ ∑_{k=1}^∞ sin(iπx/a) sin(jπy/b) sin(kπz/c) ( aijk cos(A√λijk t) + bijk sin(A√λijk t) ),

where the coefficients aijk and bijk are determined using the initial conditions

f(x, y, z) = ∑_{i=1}^∞ ∑_{j=1}^∞ ∑_{k=1}^∞ aijk sin(iπx/a) sin(jπy/b) sin(kπz/c),

g(x, y, z) = A ∑_{i=1}^∞ ∑_{j=1}^∞ ∑_{k=1}^∞ bijk √λijk sin(iπx/a) sin(jπy/b) sin(kπz/c).

Using the orthogonality property of the sine functions

{ sin(iπx/a), sin(jπy/b), sin(kπz/c), i, j, k = 1, 2, . . . }
on 0 ≤ x ≤ a, 0 ≤ y ≤ b, 0 ≤ z ≤ c we find

aijk = (8/abc) ∫₀^a ∫₀^b ∫₀^c f(x, y, z) sin(iπx/a) sin(jπy/b) sin(kπz/c) dz dy dx,

bijk = ( 8/(abcA√λijk) ) ∫₀^a ∫₀^b ∫₀^c g(x, y, z) sin(iπx/a) sin(jπy/b) sin(kπz/c) dz dy dx.

The coefficients λijk in the above formulas are given by

λijk = i²π²/a² + j²π²/b² + k²π²/c², i, j, k ∈ N.
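For a product-form initial displacement the triple integral for aijk factors into three one-dimensional integrals, which makes a quick numerical check possible (illustrative Python; the sample f(x, y, z) = sin(2πx/a) sin(πy/b) sin(3πz/c) and the box dimensions are assumptions):

```python
import numpy as np

def trap(v, dx):
    # composite trapezoidal rule
    return (v[0] + v[-1] + 2.0 * v[1:-1].sum()) * dx / 2.0

a, b, c = 1.0, 2.0, 3.0
x = np.linspace(0.0, a, 2001)
y = np.linspace(0.0, b, 2001)
z = np.linspace(0.0, c, 2001)

def a_ijk(i, j, k):
    # f = sin(2*pi*x/a) sin(pi*y/b) sin(3*pi*z/c), so the integral factors
    Ix = trap(np.sin(2 * np.pi * x / a) * np.sin(i * np.pi * x / a), x[1] - x[0])
    Iy = trap(np.sin(np.pi * y / b) * np.sin(j * np.pi * y / b), y[1] - y[0])
    Iz = trap(np.sin(3 * np.pi * z / c) * np.sin(k * np.pi * z / c), z[1] - z[0])
    return 8.0 / (a * b * c) * Ix * Iy * Iz

v213 = a_ijk(2, 1, 3)   # picks out the single mode present in f
v111 = a_ijk(1, 1, 1)   # every other coefficient vanishes
```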

Exercises for Section 5.3.


In Exercises 1–5 solve the two dimensional wave equation

1( )
utt = 2
uxx (x, y, t) + uyy (x, y, t) , 0 < x < 1, 0 < y < 1, t > 0,
π
subject to the boundary conditions

u(0, y, t) = u(1, y, t) = u(x, 0, t) = u(x, 1, t) = 0, 0 < x < 1, 0 < y < 1, t > 0



and the following initial conditions.


1. u(x, y, 0) = x(1 − x)y(1 − y), ut (x, y, 0) = 0, 0 < x < 1, 0 < y < 1.

2. u(x, y, 0) = sin πx sin πy, ut (x, y, 0) = sin πx, 0 < x < 1, 0 < y < 1.

3. u(x, y, 0) = x(1 − x)y(1 − y), ut(x, y, 0) = 2 sin πx sin 2πy, 0 < x < 1,
0 < y < 1.

4. u(x, y, 0) = 0, ut (x, y, 0) = 1, 0 < x < 1, 0 < y < 1.

5. u(x, y, 0) = 0, ut (x, y, 0) = x(1 − x)y(1 − y), 0 < x < 1, 0 < y < 1.

6. Let S = {(x, y) : 0 < x < a, 0 < y < b} be a rectangle with boundary
∂S. Find the general solution of the two dimensional wave equation

utt = uxx + uyy, (x, y) ∈ S, t > 0,

subject to the following boundary conditions.

(a) u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0.

(b) ux(x, y, t) = u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0.

7. Solve the two dimensional wave initial boundary value problem

utt = uxx + uyy, (x, y) ∈ S, t > 0,
u(x, y, 0) = 3 sin 4y − 5 cos 2x sin y, (x, y) ∈ S,
ut(x, y, 0) = 7 cos x sin 3y, (x, y) ∈ S,
ux(0, y, t) = ux(π, y, t) = 0, 0 < y < π, t > 0,
u(x, 0, t) = u(x, π, t) = 0, 0 < x < π, t > 0,

where S = {(x, y) : 0 < x < π, 0 < y < π} is a square and ∂S its
boundary.

8. Let R be the rectangle R = {(x, y) : 0 < x < a, 0 < y < b} and ∂R
be its boundary. Use the separation of variables method to solve the
following two dimensional damped membrane vibration problem.

utt = c² ( uxx + uyy ) − 2k² ut, (x, y) ∈ R, t > 0,
u(x, y, 0) = f(x, y), ut(x, y, 0) = g(x, y), (x, y) ∈ R,
u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.
288 5. THE WAVE EQUATION

9. Let R be the rectangle R = {(x, y) : 0 < x < a, 0 < y < b} and ∂R be its boundary. Solve the following two dimensional forced membrane vibration resonance problem.

 utt = c²(uxx + uyy) + F(x, y) sin ωt, (x, y) ∈ R, t > 0,
 u(x, y, 0) = 0, ut(x, y, 0) = 0, (x, y) ∈ R,
 u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.

10. Solve the previous problem taking a = b = π, c = 1, F(x, y) = xy sin x sin y, and ω = 5.

11. Let R be the rectangle R = {(x, y) : 0 < x < a, 0 < y < b} and ∂R be its boundary. Solve the following two dimensional damped membrane vibration resonance problem.

 utt = c²(uxx + uyy) − 2k ut + A sin ωt, (x, y) ∈ R, t > 0,
 u(x, y, 0) = 0, ut(x, y, 0) = 0, (x, y) ∈ R,
 u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.

5.4 The Wave Equation on Circular Domains.


In this section we will discuss and solve problems of vibrations of a circular
drum and a three dimensional ball.
5.4.1 The Wave Equation in Polar Coordinates.
Consider a membrane stretched over a circular frame of radius a. Let D
be the disc inside the frame:

D = {(x, y) : x2 + y 2 < a2 },

whose boundary is ∂D.


We assume that the membrane has uniform tension. If u(x, y, t) is the displacement of the point (x, y) of the membrane from the Oxy plane at moment t, then the vibrations of the membrane are modeled by the wave
equation
( )
(5.4.1) utt = c2 ∆x,y u ≡ c2 uxx + uyy , (x, y) ∈ D, t > 0,

subject to the Dirichlet boundary condition

(5.4.2) u(x, y, t) = 0, (x, y) ∈ ∂D, t > 0.

In order to find the solution of this boundary value problem we take the
solution u(x, y, t) to be of the form u(x, y, t) = W (x, y)T (t). With this new
function W (x, y) the given membrane problem is reduced to solving the fol-
lowing eigenvalue problem:

 ∆x,y W (x, y) + λW (x, y) = 0, (x, y) ∈ D,
(5.4.3) W (x, y) = 0, (x, y) ∈ ∂D,
 W (x, y) is continuous for (x, y) ∈ D ∪ ∂D.

It is difficult to find the eigenvalues λ and corresponding eigenfunctions


W (x, y) of the above eigenvalue problem stated in this form. But, it is rela-
tively easy to show that all eigenvalues are positive. To prove this we will use
the Green’s formula (see Appendix E):
∫∫_D p ∆q dx dy = ∮_∂D p ∇q · n ds − ∫∫_D ∇p · ∇q dx dy,

and if, in particular, p and q vanish on the boundary ∂ D, then


(5.4.4) ∫∫_D p ∆q dx dy = − ∫∫_D ∇p · ∇q dx dy.

If we take p = q = W in formula (5.4.4), then we obtain


0 ≤ ∫∫_D ∇W · ∇W dx dy = − ∫∫_D W ∆W dx dy = λ ∫∫_D W² dx dy.

Thus λ ≥ 0.
The parameter λ cannot be zero, since otherwise, from the above we would
have

∫∫_D ∇W · ∇W dx dy = − ∫∫_D W ∆W dx dy = λ ∫∫_D W² dx dy = 0,

which would imply ∇ W = 0 on D ∪ ∂D. So, W would be a constant


function on D ∪ ∂D. But, from W = 0 on ∂D and the continuity of W on

D ∪ ∂D we would have W = 0 on D ∪ ∂D, which is impossible since W is


an eigenfunction.

Instead of considering cartesian coordinates when solving the circular mem-


brane problem, it is much more convenient to use polar coordinates.
Let us recall that the polar coordinates are given by

x = r cos φ, y = r sin φ, −π ≤ φ < π, 0 ≤ r < ∞,

and the Laplace operator in polar coordinates is given by


c²∆r,φ u ≡ c²(urr + (1/r) ur + (1/r²) uφφ).
(See Appendix E for the Laplace operator in polar coordinates.)
Let us have a circular membrane fixed along its boundary, which occupies
the disc of radius a, centered at the origin. If the displacement of the mem-
brane is denoted by u(r, φ, t), then the wave equation in polar coordinates is
given by

(5.4.5) utt(r, φ, t) = c²∆r,φ u(r, φ, t), 0 < r ≤ a, −π ≤ φ < π, t > 0.

Since the membrane is fixed to the frame, we impose Dirichlet boundary
conditions:

(5.4.6) u(a, φ, t) = 0, −π ≤ φ < π, t > 0.

Since u(r, φ, t) is a single valued and differentiable function, we have to require


that u(r, φ, t) is a periodic function of φ, i.e., we have the following
periodic condition:

(5.4.7) u(r, φ + 2π, t) = u(r, φ, t), 0 < r ≤ a, −π ≤ φ < π, t > 0.

Further, since the solution u(r, φ, t) is a continuous function on the whole


disc D we have that the solution is bounded (finite) at the origin. Therefore
we have the boundary condition

(5.4.8) | u(0, φ, t) | = | lim_{r→0+} u(r, φ, t) | < ∞, −π ≤ φ < π, t > 0.

Also we impose the following initial conditions:


(5.4.9) u(r, φ, 0) = f(r, φ), ut(r, φ, 0) = g(r, φ), 0 < r < a, −π ≤ φ < π.

Now we can separate out the variables. Let

(5.4.10) u(r, φ, t) = w(r, φ)T (t), 0 < r ≤ a, −π ≤ φ < π.



If we substitute (5.4.10) into (5.4.5) we obtain

∆r,φ w(r, φ) / w(r, φ) = T″(t) / (c² T(t)).

The last equation is possible only when both sides are equal to the same
constant, which will be denoted by −λ2 (we already established that the
eigenvalues of the Helmholtz equation are positive):

(5.4.11) wrr + (1/r) wr + (1/r²) wφφ + λ² w(r, φ) = 0, 0 < r ≤ a, −π ≤ φ < π,

and

(5.4.12) T ′′ (t) + λ2 c2 T (t) = 0, t > 0.

Now we separate the Helmholtz equation (5.4.11) by letting

(5.4.13) w(r, φ) = R(r)Φ(φ).

From the boundary conditions (5.4.6), (5.4.7), and (5.4.8) for the function
u(r, φ, t) it follows that the functions R(r) and Φ(φ) satisfy the boundary
conditions
(5.4.14) Φ(φ) = Φ(φ + 2π), −π ≤ φ < π; R(a) = 0, | lim_{r→0+} R(r) | < ∞.

Substituting (5.4.13) into Equation (5.4.11) it follows that

r² R″(r)/R(r) + r R′(r)/R(r) + λ²r² = − Φ″(φ)/Φ(φ),

which is possible only if both sides are equal to the same constant µ. There-
fore, in view of the conditions (5.4.14), we obtain the eigenvalue problems

(5.4.15) Φ′′ (φ) + µΦ(φ) = 0, Φ(φ) = Φ(φ + 2π), −π ≤ φ < π

and

(5.4.16) r²R″ + rR′ + (λ²r² − µ)R = 0, R(a) = 0, | lim_{r→0+} R(r) | < ∞.

Let us consider first the problem (5.4.15). If µ = 0, then the general


solution of the differential equation in (5.4.15) is

Φ(φ) = Aφ + B.

The periodicity of Φ(φ) implies A = 0, and so µ = 0 is an eigenvalue of


(5.4.15) with corresponding eigenfunction Φ(φ) = 1.
If µ > 0, then the general solution of the differential equation in (5.4.15)
is
Φ(φ) = A cos(√µ φ) + B sin(√µ φ).

Using the periodic condition Φ(π) = Φ(−π), from the above equation it follows that

A cos(√µ π) + B sin(√µ π) = A cos(√µ π) − B sin(√µ π),

from which we have

sin(√µ π) = 0,

that is, √µ π = mπ, m = 1, 2, . . .. Therefore,

(5.4.17) µm = m², Φm(φ) = am cos mφ + bm sin mφ, m = 0, 1, 2, . . .

are the eigenvalues and corresponding eigenfunctions of problem (5.4.15).


As before, for the values µm = m² found above we will solve problem (5.4.16). The
equation in (5.4.16) is the Bessel equation of order m, discussed and solved
in Section 3.3. of Chapter 3, and its general solution is given by

R(r) = CJm (λr) + DYm (λr),

where Jm(·) and Ym(·) are the Bessel functions of the first and second kind,
respectively. Since R(r) is bounded at r = 0, and because Bessel functions
of the second kind have singularity at r = 0, i.e.,

lim_{r→0+} | Ym(r) | = ∞,

we have to choose D = 0. From the other boundary condition R(a) = 0 it


follows that the eigenvalue λ must be chosen such that

Jm (λa) = 0.

Since each Bessel function Jm has infinitely many positive zeroes, denoted by zmn, n = 1, 2, . . ., the last equation has infinitely many solutions, namely λmn = zmn/a. Using these zeroes we have that the eigenfunctions of the problem (5.4.16) are given by

Rmn(r) = Jm(zmn r/a), m = 0, 1, 2, . . . ; n = 1, 2, . . ..
For the above found eigenvalues λmn, the general solution of Equation (5.4.12) is given by

Tmn(t) = amn cos(zmn ct/a) + bmn sin(zmn ct/a),

and the general solution w(r, φ) of Equation (5.4.11) is a linear combination of the following, called modes of the membrane:

(5.4.18) w(c)mn(r, φ) = Jm(zmn r/a) cos mφ, w(s)mn(r, φ) = Jm(zmn r/a) sin mφ.
Therefore, the solution of our original problem (5.4.5), which was to find the vibration of the circular membrane with given initial displacement and initial velocity, is given by

(5.4.19) u(r, φ, t) = Σ_{m=0}^∞ Σ_{n=1}^∞ Jm(zmn r/a) [am cos mφ + bm sin mφ] [Amn cos(zmn ct/a) + Bmn sin(zmn ct/a)].

The coefficients in (5.4.19) are found by using the initial conditions (5.4.9):

f(r, φ) = u(r, φ, 0) = Σ_{m=0}^∞ Σ_{n=1}^∞ [am cos mφ + bm sin mφ] Amn Jm(zmn r/a)

and

g(r, φ) = ut(r, φ, 0) = Σ_{m=0}^∞ Σ_{n=1}^∞ [am cos mφ + bm sin mφ] (zmn c/a) Bmn Jm(zmn r/a).
Using the orthogonality property of the Bessel eigenfunctions {Jm (·), m =
0, 1, . . .}, as well as the orthogonality property of
{cos mφ, sin mφ, m = 0, 1, . . .}
we find

(5.4.20)

am Amn = [∫₀^{2π} ∫₀^a f(r, φ) Jm(zmn r/a) r cos mφ dr dφ] / [∫₀^{2π} ∫₀^a Jm²(zmn r/a) r cos² mφ dr dφ],

(zmn c/a) am Bmn = [∫₀^{2π} ∫₀^a g(r, φ) Jm(zmn r/a) r cos mφ dr dφ] / [∫₀^{2π} ∫₀^a Jm²(zmn r/a) r cos² mφ dr dφ],

bm Amn = [∫₀^{2π} ∫₀^a f(r, φ) Jm(zmn r/a) r sin mφ dr dφ] / [∫₀^{2π} ∫₀^a Jm²(zmn r/a) r sin² mφ dr dφ],

(zmn c/a) bm Bmn = [∫₀^{2π} ∫₀^a g(r, φ) Jm(zmn r/a) r sin mφ dr dφ] / [∫₀^{2π} ∫₀^a Jm²(zmn r/a) r sin² mφ dr dφ].
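The denominators in these formulas rely on the orthogonality of the Bessel eigenfunctions on (0, a) with weight r; this is easy to confirm numerically. A sketch (Python with SciPy; the radius a and the index m are arbitrary picks for the check):

```python
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

a, m = 2.0, 1           # arbitrary radius and angular index for the check
z = jn_zeros(m, 3)      # first three positive zeroes of J_m

def inner(n, k):
    # ∫_0^a J_m(z_mn r/a) J_m(z_mk r/a) r dr
    val, _ = quad(lambda r: jv(m, z[n]*r/a) * jv(m, z[k]*r/a) * r, 0, a)
    return val
```

Different indices n give an inner product of zero, while equal indices give the positive norm appearing in the denominators of (5.4.20).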

Remark. As mentioned in Chapter 3, the zeroes zmn, n = 1, 2, . . . of the Bessel function Jm(·) are not easy to find. In Table 5.4.1 we list the first seven zeroes of the Bessel functions of the first kind of order m = 0, 1, . . . , 9.

Table 5.4.1

m\n n=1 n=2 n=3 n=4 n=5 n=6 n=7


m=0 2.40482 5.52007 8.65372 11.7915 14.9309 18.0710 21.2116
m=1 3.83170 7.01558 10.1734 13.3236 16.4706 19.6158 22.7600
m=2 5.13562 8.41724 11.6198 14.7959 17.9598 21.1170 24.2701
m=3 6.38016 9.76102 13.0152 16.2234 19.4094 22.5827 25.7481
m=4 7.58834 11.0647 14.3725 17.6159 20.8263 24.0190 27.1990
m=5 8.77148 12.3386 15.7001 18.9801 22.2178 25.4303 28.6266
m=6 9.93611 13.5892 17.0038 20.3207 23.5860 26.8201 30.0337
m=7 11.0863 14.8212 18.2875 21.6415 24.9349 28.1911 31.4227
m=8 12.2250 16.0377 19.5545 22.9451 26.2668 29.5456 32.7958
m=9 13.3543 17.2412 20.8070 24.2338 27.5837 30.8853 34.1543
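The entries of Table 5.4.1 (printed there truncated rather than rounded) can be regenerated with SciPy; `jn_zeros(m, n)` returns the first n positive zeroes of Jm:

```python
from scipy.special import jn_zeros

# first seven positive zeroes of J_0 and J_1, cf. the first two rows of Table 5.4.1
z0 = jn_zeros(0, 7)
z1 = jn_zeros(1, 7)
```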

Example 5.4.1. Solve the vibrating membrane problem (5.4.5), (5.4.6), (5.4.9) if a = c = 1 and the initial displacement f(r, φ) is given by

f (r, φ) = (1 − r2 )r2 sin 2φ, 0 ≤ r ≤ 1, 0 ≤ φ < 2π

and the initial velocity is

g(r, φ) = 0, 0 ≤ r ≤ 1, 0 ≤ φ < 2π.

Display the shapes of the membrane at the time instances t = 0 and t = 0.7.

Solution. Since g(r, φ) = 0, from (5.4.20) it follows that the coefficients


am Bmn and bm Bmn are all zero.
Since

∫₀^{2π} sin 2φ cos mφ dφ = 0, m = 0, 1, 2, . . .,

∫₀^{2π} sin 2φ sin mφ dφ = 0, for every m ≠ 2,

from (5.4.20) we have am Amn = 0 for every m = 0, 1, 2, . . . and every


n = 1, 2, . . .; and bm Amn = 0 for every m ̸= 2 and every n = 1, 2, . . ..

For m = 2 and n ∈ N, from (5.4.20) we have

(5.4.21) b2 A2n = [∫₀^{2π} ∫₀^1 (1 − r²)r² sin 2φ J2(z2n r) r sin 2φ dr dφ] / [∫₀^{2π} ∫₀^1 J2²(z2n r) r sin² 2φ dr dφ] = [∫₀^1 (1 − r²)r³ J2(z2n r) dr] / [∫₀^1 r J2²(z2n r) dr].

The numbers z2n in (5.4.21) are the zeroes of the Bessel function J2 (x).
Taking a = 1, p = 2 and λ = z2n in Example 3.2.12 of Chapter 3, for
the integral in the numerator in (5.4.21) we have

(5.4.22) ∫₀^1 (1 − r²)r³ J2(z2n r) dr = (2/z2n²) J4(z2n).

In Theorem 3.3.11 of Chapter 3 we proved the following result.


If λk are the zeroes of the Bessel function Jµ (x), then

∫₀^1 x Jµ²(λk x) dx = (1/2) J²µ+1(λk).

If we take µ = 2 in this formula, then for the integral in the denominator in


(5.4.21) we have

(5.4.23) ∫₀^1 r J2²(z2n r) dr = (1/2) J3²(z2n).
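Both integral identities are easy to confirm by direct quadrature. A sketch (Python with SciPy; z is the first positive zero of J2):

```python
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

z = jn_zeros(2, 1)[0]  # first positive zero of J_2

# (5.4.22): ∫_0^1 (1 - r^2) r^3 J_2(z r) dr = (2/z^2) J_4(z)
lhs1, _ = quad(lambda r: (1 - r**2) * r**3 * jv(2, z*r), 0, 1)
rhs1 = 2 / z**2 * jv(4, z)

# (5.4.23): ∫_0^1 r J_2^2(z r) dr = (1/2) J_3^2(z)
lhs2, _ = quad(lambda r: r * jv(2, z*r)**2, 0, 1)
rhs2 = 0.5 * jv(3, z)**2
```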

If we substitute (5.4.22) and (5.4.23) into (5.4.21), then we obtain


b2 A2n = 4 J4(z2n) / (z2n² J3²(z2n)).

The right hand side of the last equation can be simplified using the following recurrence formula from Chapter 3 for the Bessel functions:

Jµ+1(x) + Jµ−1(x) = (2µ/x) Jµ(x).

If we take µ = 3 and x = z2n in this formula and use the fact that J2(z2n) = 0, then it follows that

z2n J4(z2n) = 6 J3(z2n)

and thus

b2 A2n = 24 / (z2n³ J3(z2n)).

Therefore, from (5.4.19), the solution u(r, φ, t) of the vibration membrane


problem is given by

u(r, φ, t) = 24 Σ_{n=1}^∞ [1 / (z2n³ J3(z2n))] J2(z2n r) sin 2φ cos(z2n t).

The shapes of the membrane at t = 0 and t = 0.7 are displayed in Figure


5.4.1.

Figure 5.4.1. The shapes of the membrane at t = 0 and t = 0.7.
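Since the coefficients decay like z2n⁻³, a modest partial sum already reproduces the initial shape. A sketch (Python with SciPy) evaluating the series at t = 0 and comparing with the initial displacement f(r, φ) = (1 − r²)r² sin 2φ at a sample point:

```python
import numpy as np
from scipy.special import jv, jn_zeros

z2 = jn_zeros(2, 60)  # first 60 positive zeroes of J_2

def u(r, phi, t, zeros=z2):
    # u(r, φ, t) = 24 Σ_n J_2(z_2n r) sin 2φ cos(z_2n t) / (z_2n^3 J_3(z_2n))
    terms = 24 * jv(2, zeros*r) * np.cos(zeros*t) / (zeros**3 * jv(3, zeros))
    return np.sin(2*phi) * terms.sum()

r, phi = 0.5, np.pi/4
f_exact = (1 - r**2) * r**2 * np.sin(2*phi)  # initial displacement at (r, φ)
```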

Example 5.4.2. Solve the vibrating circular membrane problem (5.4.5), (5.4.6), (5.4.9) if a = 2, c = 1 and

f (r, φ) = (4 − r2 )r sin φ, 0 ≤ r ≤ 2, 0 ≤ φ < 2π,


g(r, φ) = 1, 0 ≤ r ≤ 2, 0 ≤ φ < 2π.

Solution. From (5.4.20) and the orthogonality of the sine and cosine functions
we have
am Amn = bm Bmn = 0

for every m ≥ 0, n ≥ 1. For the same reason, we have

bm Amn = 0

for every m ̸= 1, n ≥ 1, and am Bmn = 0 for every m ̸= 0, n ≥ 1.



For m = 1 we have

b1 A1n = [∫₀^{2π} ∫₀^2 (4 − r²)r² sin² φ J1(z1n r/2) dr dφ] / [∫₀^{2π} ∫₀^2 J1²(z1n r/2) r sin² φ dr dφ] = [∫₀^2 (4 − r²)r² J1(z1n r/2) dr] / [∫₀^2 J1²(z1n r/2) r dr].

Using the same argument as in the previous example we obtain

b1 A1n = 128 / (z1n³ J2(z1n)).

Similarly, for m = 0 we have

(z0n/2) a0 B0n = [∫₀^{2π} ∫₀^2 J0(z0n r/2) r dr dφ] / [∫₀^{2π} ∫₀^2 J0²(z0n r/2) r dr dφ] = [∫₀^2 J0(z0n r/2) r dr] / [∫₀^2 J0²(z0n r/2) r dr] = 2 / (z0n J1(z0n)),

so that a0 B0n = 4 / (z0n² J1(z0n)).

Therefore, the solution u(r, φ, t) of the vibration problem is given by

u(r, φ, t) = 128 Σ_{n=1}^∞ [J1(z1n r/2) / (z1n³ J2(z1n))] sin φ cos(z1n t/2) + 4 Σ_{n=1}^∞ [J0(z0n r/2) / (z0n² J1(z0n))] sin(z0n t/2).
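The coefficient 128/(z1n³ J2(z1n)) can be cross-checked against the defining ratio of integrals from (5.4.20). A sketch (Python with SciPy, for n = 1):

```python
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

z = jn_zeros(1, 1)[0]  # first positive zero of J_1

# b_1 A_11 = ∫_0^2 (4 - r^2) r^2 J_1(z r/2) dr / ∫_0^2 J_1^2(z r/2) r dr
num, _ = quad(lambda r: (4 - r**2) * r**2 * jv(1, z*r/2), 0, 2)
den, _ = quad(lambda r: jv(1, z*r/2)**2 * r, 0, 2)
closed_form = 128 / (z**3 * jv(2, z))
```

The quadrature ratio num/den agrees with the closed form to machine accuracy.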

5.4.2 The Wave Equation in Spherical Coordinates.


Let B be the three dimensional ball

B = {(x, y, z) ∈ R3 : x2 + y 2 + z 2 < a2 },

whose boundary will be denoted by S. The wave equation for a function u = u(x, y, z, t), (x, y, z) ∈ B, t > 0 is given by

utt = c²(uxx + uyy + uzz) = c²∆x,y,z u(x, y, z, t), (x, y, z) ∈ B, t > 0,

subject to a zero Dirichlet boundary condition and given initial conditions.


This equation describes many physical phenomena, such as the propagation of

light, magnetic and X-ray waves. Also, in electromagnetism, the components


of electric and magnetic fields satisfy the wave equation. Vibrations of the ball
are also modeled by the wave equation. The solution u(x, y, z, t) of the wave
equation represents the radial displacement of the ball at position (x, y, z) ∈ B
and time t.
It is much more convenient if we write the wave equation in spherical
coordinates:
(5.4.24) utt = c²∆u, 0 ≤ r < a, 0 ≤ θ < π, −π ≤ φ < π, t > 0,

where

(5.4.25) ∆u = (1/r²) ∂/∂r(r² ∂u/∂r) + (1/(r² sin θ)) ∂/∂θ(sin θ ∂u/∂θ) + (1/(r² sin² θ)) ∂²u/∂φ² ≡ ∂²u/∂r² + (2/r) ∂u/∂r + (1/r²)(∂²u/∂θ² + cot θ ∂u/∂θ + csc² θ ∂²u/∂φ²).
(See Appendix E for the Laplacian in spherical coordinates.)
Along with this equation, we incorporate the boundary and initial condi-
tions

 u(a, φ, θ, t) = 0, 0 ≤ θ < π, −π ≤ φ < π, t > 0,
(5.4.26) u(r, φ, θ, 0) = f(r, φ, θ), 0 ≤ r < a, 0 ≤ θ < π, −π ≤ φ < π,
 ut(r, φ, θ, 0) = g(r, φ, θ), 0 ≤ r < a, 0 ≤ θ < π, −π ≤ φ < π.
In order to solve equation (5.4.24), subject to the boundary and initial
conditions (5.4.26), we work similarly as in the wave equation, describing the
vibrations of a circular membrane.
First we write the solution u(r, θ, φ, t) of the above initial boundary value
problem in the form
(5.4.27) u(r, φ, θ, t) = F (r, φ, θ)T (t).
From the boundary condition u(a, φ, θ, t) = 0 we obtain that the function
F (r, φ, θ) satisfies the boundary condition
(5.4.28) F (a, φ, θ) = 0, 0 ≤ θ < π, −π ≤ φ < π.
If we substitute (5.4.27) in the wave equation (5.4.24), after separating
the variables, we obtain
(5.4.29) T″(t) + λc² T(t) = 0, t > 0

and again, the Helmholtz equation

(5.4.30) ∆F(r, φ, θ) + λF = 0, 0 ≤ r < a, 0 ≤ θ < π, −π ≤ φ < π,

where ∆F(r, φ, θ) is given by (5.4.25) and λ is a positive parameter to be determined. The fact that λ in (5.4.30) must be positive follows from the following result.

Theorem 5.4.1. Suppose that P = P (x, y, z) is a twice differentiable func-


tion in the ball B, continuous on the closed ball B and vanishes on the
boundary S. If P is not identically zero in the ball B and satisfies the
Helmholtz equation

∆ P (x, y, z) + µP (x, y, z) = 0, (x, y, z) ∈ B,

then µ > 0.
Proof. We use the Divergence Theorem (see Appendix E). If F = F(x, y, z)
is a twice differentiable vector field on the closed ball B, then
∫∫∫_B ∇ · F dx dy dz = ∫∫_S F · n dσ,

where ∇ is the gradient vector operator and n is the outward unit normal
vector on the sphere S. If we take F = P ∇P in the divergence formula and
use the fact that P = 0 on the boundary S we obtain that
∫∫∫_B ∇ · (P ∇P) dx dy dz = ∫∫_S P ∇P · n dσ = 0.

From this equation and from


∇ · (P ∇P) = ∇P · ∇P + P ∇ · ∇P = ∇P · ∇P + P ∆P

it follows that
∫∫∫_B ∇P · ∇P dx dy dz = − ∫∫∫_B P ∆P dx dy dz.

Using the above formula and Helmholtz equation we have


0 ≤ ∫∫∫_B ∇P · ∇P dx dy dz = − ∫∫∫_B P ∆P dx dy dz = µ ∫∫∫_B P² dx dy dz.

Therefore, µ ≥ 0. If µ = 0, then using the same argument as above, the


continuity of P on B and the fact that P vanishes on S would imply that
P ≡ 0 on B, contrary to the assumption that P is not identically zero.
Therefore µ > 0. ■

In order to solve (5.4.30) we assume a solution of the form

F (r, φ, θ) = R(r)Θ(θ)Φ(φ).

If we substitute this function into (5.4.30) we obtain (after rearrangement)

(1/R) d/dr(r² dR/dr) + λr² = − (1/(Θ sin θ)) d/dθ(sin θ dΘ/dθ) − (1/(Φ sin² θ)) d²Φ/dφ².

The last equation is possible only when both sides are equal to the same
constant ν.
(5.4.31) d/dr(r² dR/dr) + (λr² − ν)R = 0,
and
− (1/(Θ sin θ)) d/dθ(sin θ dΘ/dθ) − (1/(Φ sin² θ)) d²Φ/dφ² = ν.
From the last equation it follows that

− (sin θ/Θ) d/dθ(sin θ dΘ/dθ) − ν sin² θ = (1/Φ) d²Φ/dφ².

And again, this equation is possible only when both sides are equal to the
same constant −µ:

(5.4.32) d²Φ/dφ² + µΦ = 0,

and

(5.4.33) d²Θ/dθ² + cot θ dΘ/dθ + (ν − µ/sin² θ) Θ = 0.

For physical reasons, the function Φ(φ) must be 2π periodic, i.e., it


should satisfy the periodic condition

(5.4.34) Φ(φ + 2π) = Φ(φ).

We already solved the eigenvalue problem (5.4.32), (5.4.34) when we stud-


ied the vibrations of a circular membrane. We found that the eigenvalues and
corresponding eigenfunctions of this problem are

(5.4.35) µm = m2 , Φm (φ) = am cos mφ + bm sin mφ, m = 0, 1, . . ..

Now, we turn our attention to Equation (5.4.33). This equation has two singular points at θ = 0 and θ = π. If we introduce a new variable x by x = cos θ, then we obtain the differential equation

(5.4.36) (1 − x²) d²Θ/dx² − 2x dΘ/dx + [ν − m²/(1 − x²)] Θ(x) = 0, −1 < x < 1.

Equation (5.4.36) has two singularities at the points x = −1 and x = 1.


Let us recall that this equation was discussed in Section 3.3 of Chapter 3,
where we found out that in order for the solutions of this equation to remain
bounded for all x we are required to have

(5.4.37) ν = n(n + 1), n = 0, 1, 2, . . ..

The corresponding bounded solutions of Equation (5.4.36) are the associated Legendre functions Pn^(m)(x) of order m, given by

Pn^(m)(x) = (1 − x²)^{m/2} (d^m/dx^m) Pn(x),

where Pn (x) are the Legendre polynomials of order n.


Therefore, the bounded solutions of Equation (5.4.33) are given (up to a
multiplicative constant) by

(5.4.38) Θnm (θ) = Pn(m) (cos θ).
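That Pn^(m)(x) solves (5.4.36) with ν = n(n + 1) can be verified symbolically. A SymPy sketch for the arbitrary choice n = 3, m = 2:

```python
import sympy as sp

x = sp.symbols('x')
n, m = 3, 2                          # arbitrary degree and order for the check
Theta = sp.assoc_legendre(n, m, x)   # associated Legendre function P_n^m(x)

# left-hand side of (5.4.36) with ν = n(n + 1)
lhs = ((1 - x**2) * sp.diff(Theta, x, 2) - 2*x*sp.diff(Theta, x)
       + (n*(n + 1) - m**2/(1 - x**2)) * Theta)
```

Simplifying `lhs` returns 0, confirming the equation is satisfied.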

Finally, for the above obtained ν we are ready to solve Equation (5.4.31).
This equation has a singularity at the point r = 0. First, we rewrite the equation in the form

(5.4.39) r² d²R/dr² + 2r dR/dr + [λr² − n(n + 1)] R = 0.

From the boundary condition F (a, θ, φ) = 0, given in (5.4.28), and from the
fact that R(r) remains bounded for all 0 ≤ r < a, it follows that the function
R(r) should satisfy the conditions:

(5.4.40) R(a) = 0, | lim_{r→0+} R(r) | < ∞.

If we introduce a new function y = y(r) by the substitution

R(r) = y(r)/√r,

then Equation (5.4.39) is transformed into the following equation


(5.4.41) r² d²y/dr² + r dy/dr + [λr² − (n + 1/2)²] y = 0.

Equation (5.4.41) is the Bessel equation of order n + 1/2 and its solution is

y(r) = A Jn+1/2(√λ r) + B Yn+1/2(√λ r),

where J and Y are the Bessel functions of the first and second kind, respectively. From the boundary conditions (5.4.40) it follows that B = 0 and

(5.4.42) Jn+1/2(√λ a) = 0.

If znj are the zeroes of the Bessel function Jn+1/2(x), then from (5.4.42) it follows that the eigenvalues λnj are given by

(5.4.43) λnj = (znj/a)², n = 0, 1, 2, . . . ; j = 1, 2, . . ..

So, by the substitution R = y/√r, the radial eigenfunctions of (5.4.31) are

(5.4.44) Rnj(r) = (1/√r) Jn+1/2(znj r/a).
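Half-integer Bessel functions reduce to elementary functions; in particular J1/2(x) = √(2/(πx)) sin x, so for n = 0 the zeroes are z0j = jπ and λ0j = (jπ/a)². A quick SciPy check of both facts:

```python
import numpy as np
from scipy.special import jv

# J_{1/2}(x) = sqrt(2/(πx)) sin x, hence its positive zeroes are x = jπ
x = 2.0
elementary = np.sqrt(2 / (np.pi * x)) * np.sin(x)
first_zero = jv(0.5, np.pi)  # should vanish
```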
For the above found λnj we have that the general solution of Equation (5.4.29) is given by

(5.4.45) Tnj(t) = anj cos(√λnj ct) + bnj sin(√λnj ct).

Therefore, the solution u(r, φ, θ, t) of the vibrating ball problem (5.4.24), (5.4.26) is given by

(5.4.46) u(r, φ, θ, t) = Σ_{m=0}^∞ Σ_{n=0}^∞ Σ_{j=1}^∞ Amnj Φm(φ) Θnm(θ) Rnj(r) Tnj(t),

where Φm(φ), Θnm(θ), Rnj(r) and Tnj(t) are the functions given by the formulas (5.4.35), (5.4.38), (5.4.44) and (5.4.45).
The coefficients Amnj in (5.4.46) are determined using the initial condi-
tions given in (5.4.26) and the orthogonality properties of the eigenfunctions
involved in (5.4.46).

Exercises for Section 5.4.

In Exercises 1–7 solve the following circular membrane problem:

 utt(r, φ, t) = c²(urr + (1/r) ur + (1/r²) uφφ), 0 ≤ r < a, −π ≤ φ < π, t > 0,
 u(a, φ, t) = 0, −π ≤ φ < π, t > 0,
 u(r, φ, 0) = f(r, φ), ut(r, φ, 0) = g(r, φ), 0 < r ≤ a, −π ≤ φ < π,

if it is given that

1. c = a = 1, f (r, φ) = 1 − r2 , g(r, φ) = 0.

2. c = a = 1, f(r, φ) = 0, g(r, φ) = { 1 for 0 < r < 1/2; 0 for 1/2 < r < 1 }.

3. c = a = 1, f (r, φ) = (1 − r2 )r sin φ, g(r, φ) = 0.

4. c = a = 1, f (r, φ) = (1 − r2 )r sin φ, g(r, φ) = (1 − r2 )r2 sin 2φ.

5. c = a = 1, f(r, φ) = 5J4(z4,1 r) cos 4φ − J2(z2,3 r) sin 2φ, g(r, φ) = 0 (zmn is the nth zero of Jm(x)).

6. c = a = 1, f(r, φ) = J0(z0,3 r), g(r, φ) = 1 − r² (zmn is the nth zero of Jm(x)).

7. c = 1, a = 2, f (r, φ) = 0, g(r, φ) = 1.

8. Use the separation of variables method to find the solution u = u(r, t) of the following boundary value problem:

 utt(r, t) = c²(∂²u/∂r² + (1/r) ∂u/∂r), 0 < r < a, t > 0,
 u(r, 0) = f(r), ∂u(r, t)/∂t |_{t=0} = g(r), 0 < r < a,
 u(a, t) = 0, t > 0.

9. Use the separation of variables method to find the solution u = u(r, t) of the following boundary value problem:

 utt(r, t) = c²(∂²u/∂r² + (1/r) ∂u/∂r), 0 < r < a, t > 0,
 u(r, 0) = f(r), ∂u(r, t)/∂t |_{t=0} = g(r), 0 < r < a,
 | u(0, t) | < ∞, ∂u(r, t)/∂r |_{r=a} = 0, t > 0.

10. Use the separation of variables method to find the solution u = u(r, t) of the following boundary value problem:

 utt(r, t) = c²(∂²u/∂r² + (1/r) ∂u/∂r) + A, 0 < r < a, t > 0,
 u(r, 0) = 0, ∂u(r, t)/∂t |_{t=0} = 0, 0 < r < a,
 u(a, t) = 0, t > 0.

11. Use the separation of variables method to find the solution u = u(r, t) of the following boundary value problem:

 utt(r, t) = c²(∂²u/∂r² + (1/r) ∂u/∂r) + A f(r, t), 0 < r < a, t > 0,
 u(r, 0) = 0, ∂u(r, t)/∂t |_{t=0} = 0, 0 < r < a,
 u(a, t) = 0, t > 0.

12. Find the solution u = u(r, t) of the following boundary value problem:

 utt(r, t) = c²(∂²u/∂r² + (2/r) ∂u/∂r), r1 < r < r2, t > 0,
 u(r, 0) = 0, ∂u(r, t)/∂t |_{t=0} = f(r), r1 < r < r2,
 ∂u(r, t)/∂r |_{r=r1} = 0, ∂u(r, t)/∂r |_{r=r2} = 0, t > 0.

13. Find the solution u = u(r, φ, t) of the following boundary value problem:

 utt(r, φ, t) = c²(∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂φ²), 0 < r < a, 0 ≤ φ < 2π, t > 0,
 u(r, φ, 0) = Ar cos φ, ∂u(r, φ, t)/∂t |_{t=0} = 0, 0 < r < a, 0 ≤ φ < 2π,
 ∂u(r, φ, t)/∂r |_{r=a} = 0, 0 ≤ φ < 2π, t > 0.

5.5 Integral Transform Methods for the Wave Equation.


In this section we will apply the Laplace and Fourier transforms to solve
the wave equation and similar partial differential equations. We begin with
the Laplace transform method.
5.5.1 The Laplace Transform Method for the Wave Equation.
Besides the separation of variables method, the Laplace transform method
is used very often to solve linear partial differential equations.
As usual, we use capital letters for the Laplace transform with respect to
t of a given function g(x, t). For example, we write

L u(x, t) = U (x, s), L y(x, t) = Y (x, s), L f (x, t) = F (x, s).

Let us recall from Chapter 2 several properties of the Laplace transform.

(5.5.1) L(u(x, t)) = U(x, s) = ∫₀^∞ u(x, t) e^{−st} dt.

(5.5.2) L(ut(x, t)) = sU(x, s) − u(x, 0).

(5.5.3) L(utt(x, t)) = s²U(x, s) − su(x, 0) − ut(x, 0).

(5.5.4) L(ux(x, t)) = (d/dx) L(u(x, t)) = dU(x, s)/dx.

(5.5.5) L(uxx(x, t)) = (d²/dx²) L(u(x, t)) = d²U(x, s)/dx².
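Property (5.5.2) is easy to spot-check by direct quadrature. A sketch (Python with SciPy; the sample function u and the value of s are arbitrary choices for the check):

```python
import numpy as np
from scipy.integrate import quad

s = 3.0
u  = lambda t: np.exp(-t) * np.cos(t)                  # sample function, u(0) = 1
du = lambda t: -np.exp(-t) * (np.cos(t) + np.sin(t))   # its derivative u'(t)

laplace = lambda g: quad(lambda t: g(t) * np.exp(-s*t), 0, np.inf)[0]

lhs = laplace(du)            # L(u')
rhs = s * laplace(u) - u(0)  # s U(s) - u(0)
```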

Let us take a few examples.


Example 5.5.1. Using the Laplace transform method solve the following boundary value problem.

 utt(x, t) = uxx(x, t), 0 < x < 1, t > 0,
 ux(0, t) = ux(1, t) = 0, t > 0,
 u(x, 0) = 0, ut(x, 0) = cos πx, 0 < x < 1.
u(x, 0) = 0, ut (x, 0) = cos πx, 0 < x < 1.

Solution. If U = U(x, s) = L(u(x, t)), then from (5.5.3), (5.5.5) and the initial conditions we have

d²U/dx² − s²U = − cos πx.

The general solution of the above ordinary differential equation is


cos πx
U (x, s) = + c1 esx + c2 e−sx .
s2 + π 2

Using the boundary conditions for the function u(x, t) we obtain that the function U(x, s) satisfies the conditions

Ux(0, s) = L(ux(0, t)) = 0, Ux(1, s) = L(ux(1, t)) = 0.

These conditions for U(x, s) imply c1 = c2 = 0 and so

U(x, s) = cos πx/(s² + π²).

Therefore,

u(x, t) = L⁻¹(U(x, s)) = L⁻¹(cos πx/(s² + π²)) = cos πx · L⁻¹(1/(s² + π²)) = (1/π) cos πx sin πt.
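The answer is easy to verify symbolically: it satisfies the wave equation and both initial conditions. A SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.cos(sp.pi*x) * sp.sin(sp.pi*t) / sp.pi

pde = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2))             # u_tt - u_xx
ic1 = u.subs(t, 0)                                                  # u(x, 0)
ic2 = sp.simplify(sp.diff(u, t).subs(t, 0) - sp.cos(sp.pi*x))       # u_t(x,0) - cos πx
```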

Example 5.5.2. Using the Laplace transform method solve the following
boundary value problem.

 utt (x, t) = uxx (x, t), x > 0, t > 0,

u(0, t) = 0, lim ux (x, t) = 0, t > 0,


x→∞
u(x, 0) = xe−x , ut (x, 0) = 0, x > 0.

Solution. If U = U(x, s) = L(u(x, t)), then from (5.5.3) and (5.5.5) we have

s²U − su(x, 0) − ut(x, 0) = d²U/dx².
The last equation in view of the initial conditions becomes

d2 U
− s2 U = −sxe−x ,
dx2
whose general solution is

se−sx se−sx
U (x, s) = −2 + x + c1 esx + c2 e−xs .
(s2 − 1)2 s2 − 1

The boundary condition lim_{x→∞} ux(x, t) = 0 implies lim_{x→∞} Ux(x, s) = 0 and so c1 = 0. From the other boundary condition u(0, t) = 0 we have U(0, s) = L(u(0, t)) = 0, which implies

c2 = 2s/(s² − 1)².

Therefore,

U(x, s) = (2s/(s² − 1)²) e^{−sx} − (2s/(s² − 1)²) e^{−x} + (sx/(s² − 1)) e^{−x}.

Using the linearity and translation properties of the inverse Laplace transform, together with L⁻¹(s/(s² − 1)²) = (t/2) sinh t and L⁻¹(s/(s² − 1)) = cosh t, we have

u(x, t) = 2L⁻¹(s e^{−sx}/(s² − 1)²) − 2e^{−x} L⁻¹(s/(s² − 1)²) + x e^{−x} L⁻¹(s/(s² − 1))

= (t − x) sinh(t − x) H(t − x) − t e^{−x} sinh t + x e^{−x} cosh t,

where

H(t) = { 1, if t ≥ 0; 0, if t < 0 }

is the Heaviside unit step function.
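A useful sanity check: the particular part of U(x, s) carries the factor e^{−x} inherited from the data xe^{−x} (only the homogeneous part carries e^{±sx}). SymPy confirms it solves the transformed ODE:

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
U_p = (s*x/(s**2 - 1) - 2*s/(s**2 - 1)**2) * sp.exp(-x)

# check U'' - s^2 U = -s x e^{-x}
residual = sp.simplify(sp.diff(U_p, x, 2) - s**2*U_p + s*x*sp.exp(-x))
```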

Example 5.5.3. Find the solution of the following problem.

 utt(x, t) = (1/π²) uxx(x, t), 0 < x < 1, t > 0,
 u(0, t) = sin πt, t > 0,
 u(1, t) = 0, t > 0,
 u(x, 0) = 0, ut(x, 0) = 0, 0 < x < 1.
Solution. If U = U(x, s) = L(u(x, t)), then from (5.5.3), (5.5.5), in view of the initial conditions, we have

d²U/dx² − π²s²U = 0.
The general solution of the last equation is

U (x, s) = c1 eπsx + c2 e−πsx .

From the boundary condition u(1, t) = 0 it follows that U(1, s) = 0. From the other boundary condition u(0, t) = sin πt we have

U(0, s) = L(sin πt) = π/(s² + π²).

Using these boundary conditions for U (x, s) we find

U(x, s) = π (e^{sπ(1−x)} − e^{−sπ(1−x)}) / [(e^{πs} − e^{−πs})(s² + π²)].

In order to find the inverse Laplace transform of U(x, s) we use the formula for the inverse Laplace transform by contour integration

u(x, t) = (1/2πi) ∫_{b−i∞}^{b+i∞} e^{st} U(x, s) ds = (1/2πi) ∫_{b−i∞}^{b+i∞} F(x, s) ds,

where

F(x, s) = e^{st} U(x, s) = π e^{st} (e^{sπ(1−x)} − e^{−sπ(1−x)}) / [(e^{πs} − e^{−πs})(s² + π²)].
(See the section the Inverse Laplace Transform by Contour Integration in
Chapter 2). One way to compute the above contour integral is by the Cauchy
Theorem of Residues (see Appendix F).
The singularities of the function F(x, s) are found by solving the equation

(e^{πs} − e^{−πs})(s² + π²) = 0,

or equivalently,

(e^{2πs} − 1)(s² + π²) = 0.

The solutions of the above equation are s = ±πi and s = ±ni, n = 1, 2, . . .; the point s = 0 is a removable singularity of F(x, s) and is excluded. Notice that
all these singularities are simple poles. Using the formula for residues (see
Appendix F) we find
Res(F(x, s), s = πi) = lim_{s→πi} (s − πi)F(x, s) = e^{iπt} sin(π²(1 − x)) / (2i sin π²),

Res(F(x, s), s = −πi) = lim_{s→−πi} (s + πi)F(x, s) = − e^{−iπt} sin(π²(1 − x)) / (2i sin π²),

Res(F(x, s), s = ni) = lim_{s→ni} (s − ni)F(x, s) = i e^{int} sin(nπ(1 − x)) / ((π² − n²) cos nπ),

Res(F(x, s), s = −ni) = lim_{s→−ni} (s + ni)F(x, s) = − i e^{−int} sin(nπ(1 − x)) / ((π² − n²) cos nπ).

Therefore, by the Residue Theorem we have

u(x, t) = Res(F(x, s), πi) + Res(F(x, s), −πi) + Σ_{n=1}^∞ [Res(F(x, s), ni) + Res(F(x, s), −ni)]

= sin πt sin(π²(1 − x)) / sin π² − 2 Σ_{n=1}^∞ sin nt sin(nπ(1 − x)) / ((π² − n²) cos nπ).
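A quick check on the leading (s = ±πi) contribution: it satisfies the equation utt = (1/π²)uxx and supplies the boundary value sin πt at x = 0, while every series term vanishes there because sin nπ = 0. A SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
u0 = sp.sin(sp.pi*t) * sp.sin(sp.pi**2*(1 - x)) / sp.sin(sp.pi**2)

pde = sp.simplify(sp.diff(u0, t, 2) - sp.diff(u0, x, 2)/sp.pi**2)  # u_tt - u_xx/π²
bc  = sp.simplify(u0.subs(x, 0) - sp.sin(sp.pi*t))                  # u0(0, t) - sin πt
```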

The Convolution Theorem for the Laplace transform can be used to solve
some boundary value problems. The next example illustrates this method.
Example 5.5.4. Find the solution of the following problem.


 utt(x, t) = uxx(x, t) + sin t, x > 0, t > 0,
 u(0, t) = 0, lim_{x→∞} ux(x, t) = 0, t > 0,
 u(x, 0) = 0, ut(x, 0) = 0, x > 0.
Solution. If U = U(x, s) = L(u(x, t)), then from (5.5.3) and (5.5.5) we have

s²U − su(x, 0) − ut(x, 0) = d²U/dx² + 1/(s² + 1).

The last equation, in view of the initial conditions, becomes

d²U/dx² − s²U = − 1/(s² + 1),

whose general solution is

U(x, s) = c1 e^{sx} + c2 e^{−sx} + 1/(s²(s² + 1)).

From the condition lim_{x→∞} ux(x, t) = 0 it follows that lim_{x→∞} Ux(x, s) = 0 and so c1 = 0. From the other boundary condition u(0, t) = 0 it follows that U(0, s) = L(u(0, t)) = 0, which implies that

c2 = − 1/(s²(s² + 1)).

Therefore,

U(x, s) = (1/(s² + 1)) · (1 − e^{−sx})/s².
From the last equation, by the Convolution Theorem for the Laplace transform, we have

u(x, t) = sin t ∗ L⁻¹((1 − e^{−sx})/s²) = sin t ∗ (t − (t − x)H(t − x))

= ∫₀^t sin(t − y) (y − (y − x)H(y − x)) dy

= ∫₀^t y sin(t − y) dy − ∫₀^t (y − x)H(y − x) sin(t − y) dy

= t − sin t − ∫₀^t (y − x)H(y − x) sin(t − y) dy.

For the last integral we consider two cases. If t < x, then y − x < 0 for all 0 < y < t, so H(y − x) = 0 and the integral is zero. If t > x, then

∫₀^t (y − x)H(y − x) sin(t − y) dy = ∫ₓ^t (y − x) sin(t − y) dy = (t − x) − sin(t − x).

Therefore,

u(x, t) = { t − sin t, if 0 < t < x; t − sin t − (t − x) + sin(t − x), if t > x }.
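The two convolution integrals can be evaluated symbolically as a cross-check; in particular sin t ∗ t = t − sin t. A SymPy sketch:

```python
import sympy as sp

t, y, x = sp.symbols('t y x', positive=True)

# sin t * t: ∫_0^t y sin(t - y) dy
first = sp.integrate(y * sp.sin(t - y), (y, 0, t))
# the contribution for t > x: ∫_x^t (y - x) sin(t - y) dy
second = sp.integrate((y - x) * sp.sin(t - y), (y, x, t))
```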

5.5.2 The Fourier Transform Method for the Wave Equation.


The Fourier transform, like the Laplace transform, can be used to solve
certain classes of partial differential equations. The wave equation, and like
equations, in one and higher spatial dimensions can be solved using the Fourier
transform.
The fundamentals of the Fourier transform were developed in Chapter 2.
As before, we use capital letters for the Fourier transform of a given
function u(x, t) with respect to the variable x ∈ R. For example, we write

F(u(x, t)) = U(ω, t), F(y(x, t)) = Y(ω, t), F(f(x, t)) = F(ω, t).

Let us recall several properties of the Fourier transform.

(5.5.6) F(u(x, t)) = U(ω, t) = ∫_{−∞}^{∞} u(x, t)e^{−iωx} dx.

(5.5.7) F(ux(x, t)) = iωU(ω, t).

(5.5.8) F(uxx(x, t)) = (iω)²U(ω, t) = −ω²U(ω, t).

(5.5.9) u(x, t) = (1/(2π)) ∫_{−∞}^{∞} U(ω, t)e^{iωx} dω.

(5.5.10) F(ut(x, t)) = d/dt (F(u(x, t))).

(5.5.11) F(utt(x, t)) = d²/dt² (F(u(x, t))).

Let us take first an easy example.



Example 5.5.5. Using the Fourier transform solve the following transport
boundary value problem.
{
ut (x, t) + aux (x, t) = 0, −∞ < x < ∞, t > 0
−x2
u(x, 0) = e , −∞ < x < ∞.

( )
Solution. If U = U (ω, t) = F u(x, t) , then from (5.5.7) and (5.5.10) we
have
dU(ω, t)/dt + aiωU(ω, t) = 0.
The general solution of this equation (keeping ω constant) is

U (ω, t) = C(ω)e−iωat .

From the initial condition u(x, 0) = e^{−x²} it follows that

U(ω, 0) = F(u(x, 0)) = F(e^{−x²}) = √π e^{−ω²/4} (see Table B in Appendix B).

Therefore, C(ω) = √π e^{−ω²/4} and so,

U(ω, t) = √π e^{−ω²/4} e^{−iωat}.

Using the shift (translation) property of the Fourier transform (see Chapter 2) we have

u(x, t) = e^{−(x−at)²}.
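A direct check of this answer (a SymPy sketch; a is the transport speed from the problem):

```python
import sympy as sp

x, t, a = sp.symbols('x t a', real=True)

u = sp.exp(-(x - a*t)**2)  # candidate solution of Example 5.5.5

# Transport equation u_t + a u_x = 0, and the initial condition u(x, 0) = e^{-x^2}.
residual = sp.simplify(sp.diff(u, t) + a*sp.diff(u, x))
initial_gap = sp.simplify(u.subs(t, 0) - sp.exp(-x**2))
```

Both quantities vanish, confirming that the initial profile simply travels with speed a.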

Example 5.5.6. Using the Fourier transform solve the following initial value problem for the nonhomogeneous wave equation.
{
utt (x, t) = c2 uxx (x, t) + f (x, t), −∞ < x < ∞, t > 0,
u(x, 0) = 0, ut (x, 0) = 0, −∞ < x < ∞.

( ) ( )
Solution. If U = U (ω, t) = F u(x, t) and F = F (ω, t) = F f (x, t) , then
from (5.5.6), (5.5.8) and (5.5.11) the equation becomes

(5.5.12) d²U(ω, t)/dt² + c²ω²U(ω, t) = F(ω, t).

From the initial conditions u(x, 0) = 0 and ut(x, 0) = 0 it follows that U(ω, 0) = F(u(x, 0)) = 0 and Ut(ω, 0) = F(ut(x, 0)) = 0. Using these initial

conditions and the method of variation of parameters (see the Appendix D)


the solution of (5.5.12) is

U(ω, t) = (1/(cω)) ∫₀ᵗ F(ω, τ) sin cω(t − τ) dτ.

Therefore, by the inversion formula we have

(5.5.13) u(x, t) = (1/(2π)) ∫_{−∞}^{∞} ( ∫₀ᵗ F(ω, τ) (sin cω(t − τ))/(cω) dτ ) e^{iωx} dω.

To complete the problem, let us recall the following properties of the Fourier
transform.
F(f(x − a))(ω) = e^{−iωa} Ff(ω), F(e^{iax}f(x))(ω) = Ff(ω − a),

F(χa(x))(ω) = (2 sin aω)/ω,

where χa is the characteristic function of the interval (−a, a), defined by

χa(x) = 1 for |x| ≤ a, and χa(x) = 0 for |x| > a.

Based on these properties, from (5.5.13) it follows that

u(x, t) = (1/(2π)) ∫₀ᵗ ( ∫_{−∞}^{∞} F(ω, τ) ((sin cω(t − τ))/(cω)) e^{iωx} dω ) dτ

= (1/(2c)) ∫₀ᵗ ∫_{x−c(t−τ)}^{x+c(t−τ)} f(s, τ) ds dτ.
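The variation-of-parameters step above can be spot-checked for a concrete forcing transform; F(ω, τ) = e^{−τ} below is a hypothetical choice made only for the illustration (a SymPy sketch, not the book's Mathematica):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
c, w = sp.symbols('c omega', positive=True)

F = sp.exp(-tau)  # hypothetical forcing transform F(omega, tau)

# Variation of parameters: U = (1/(c w)) int_0^t F(omega, tau) sin(c w (t - tau)) dtau
U = sp.integrate(F*sp.sin(c*w*(t - tau)), (tau, 0, t))/(c*w)

# U must satisfy U'' + (c w)^2 U = F(omega, t), with U(0) = U'(0) = 0.
residual = sp.simplify(sp.diff(U, t, 2) + (c*w)**2*U - sp.exp(-t))
initial_value = sp.simplify(U.subs(t, 0))
```

The residual and the initial value both simplify to zero, as the Duhamel formula requires.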

In the next example we consider the important telegraph equation.


Example 5.5.7. Using the Fourier transform solve the following problem.

utt(x, t) + 2(a + b)ut(x, t) + 4ab u(x, t) = c²uxx(x, t), x ∈ R, t > 0,

u(x, 0) = f (x), x ∈ R,


ut (x, 0) = g(x), x ∈ R,

where a and b are positive given constants.


Solution. The current and voltage in a transmission line are governed by equa-
tions of this type. If a = b = 0, the telegraph equation reduces to the wave equation; thus the wave equation is a particular case

of the telegraph equation. This equation has many other applications in fluid
mechanics, acoustics and elasticity.
If U = U(ω, t) = F(u(x, t)), then from (5.5.6), (5.5.8), (5.5.10) and (5.5.11) the telegraph equation becomes
(5.5.14) d²U/dt² + 2(a + b) dU/dt + (4ab + c²ω²)U = 0.
The solutions of the characteristic equation of the above ordinary differential
equation (keeping ω constant) are

r1,2 = −(a + b) ± √((a − b)² − c²ω²).
Case 1°. If D = (a − b)² − c²ω² > 0, then r1 and r2 are real and distinct,
and so the general solution of (5.5.14) is given by
U (ω, t) = C1 (ω)er1 t + C2 (ω)er2 t .
Notice that r1 < 0 and r2 < 0 in this case. From the given initial condi-
tions u(x, 0) = f (x) and ut (x, 0) = g(x) it follows that U (ω, 0) = F (ω) and
Ut (ω, 0) = G(ω), where F = F f and G = F g. Using these initial conditions
we have
C1(ω) = (G − r2F)/(r1 − r2), C2(ω) = (G − r1F)/(r2 − r1).

Therefore,

U(ω, t) = ((e^{r1t} − e^{r2t})/(r1 − r2)) G(ω) + ((r1e^{r2t} − r2e^{r1t})/(r1 − r2)) F(ω).
r2 − r1 r2 − r1
In general, it is not very easy to find the inverse transform from the last
equation.
Case 2°. If D = (a − b)² − c²ω² < 0, then

U(ω, t) = e^{−(a+b)t} (C1(ω)e^{i√(−D)t} + C2(ω)e^{−i√(−D)t}).

In some particular cases we can find the inverse explicitly. For example, if a = b, then r1,2 = −(a + b) ± cωi = −2a ± cωi, and so,
U(ω, t) = e^{−2at}(C1(ω)e^{icωt} + C2(ω)e^{−icωt}).

From the last equation, by the formula for the inverse Fourier transform and
the translation property we obtain
u(x, t) = (1/(2π)) ∫_{−∞}^{∞} e^{−2at} [C1(ω)e^{icωt} + C2(ω)e^{−icωt}] e^{iωx} dω

= e^{−2at}(c1(x − ct) + c2(x + ct)),
where c1(x) = F⁻¹(C1(ω)) and c2(x) = F⁻¹(C2(ω)).

If the wave equation is considered on the interval (0, ∞), then we use the
Fourier sine or Fourier cosine transform. Let us recall from Chapter 2 a few
properties of these transforms.

(5.5.15) Fs(u(x, t)) = ∫₀^∞ u(x, t) sin ωx dx.

(5.5.16) Fc(u(x, t)) = ∫₀^∞ u(x, t) cos ωx dx.

(5.5.17) Fs(uxx(x, t)) = −ω²Fs(u(x, t)) + ω u(0, t).

(5.5.18) Fc(uxx(x, t)) = −ω²Fc(u(x, t)) − ux(0, t).

(5.5.19) Fs(utt(x, t)) = d²/dt² (Fs(u(x, t))).

(5.5.20) Fc(utt(x, t)) = d²/dt² (Fc(u(x, t))).

(5.5.21) u(x, t) = (2/π) ∫₀^∞ Fs(u(x, t))(ω, t) sin ωx dω.

(5.5.22) u(x, t) = (2/π) ∫₀^∞ Fc(u(x, t))(ω, t) cos ωx dω.

The choice of whether to use the Fourier sine or Fourier cosine transform
depends on the nature of the boundary conditions at zero. Let us illustrate this
remark with a few examples of a wave equation on the semi-infinite interval
(0, ∞).
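Property (5.5.17) is easy to sanity-check on a concrete function of x alone (one fixed time slice); u(x) = e^{−x}, with u(0) = 1, is an arbitrary choice made for this SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
w = sp.symbols('omega', positive=True)

u = sp.exp(-x)  # test function: u(0) = 1, decays at infinity

Fs_u = sp.integrate(u*sp.sin(w*x), (x, 0, sp.oo))
Fs_uxx = sp.integrate(sp.diff(u, x, 2)*sp.sin(w*x), (x, 0, sp.oo))

# (5.5.17): Fs(u'') = -omega^2 Fs(u) + omega u(0)
residual = sp.simplify(Fs_uxx - (-w**2*Fs_u + w*u.subs(x, 0)))
```

The residual vanishes; note how the boundary term ω u(0, t) is exactly what distinguishes the sine from the cosine transform, and hence which boundary data each can absorb.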
Example 5.5.8. Using the Fourier cosine or Fourier sine transform solve the
following boundary value problem.

utt(x, t) = c²uxx(x, t), 0 < x < ∞, t > 0,
u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < ∞,
u(0, t) = 0, t > 0.

Solution. Let us apply the Fourier sine transform to solve this problem. If U = U(ω, t) = Fs(u(x, t)), then from (5.5.17) and (5.5.19) the wave equation becomes

(5.5.23) d²U(ω, t)/dt² + c²ω²U(ω, t) = 0.

From the initial conditions u(x, 0) = f(x) and ut(x, 0) = g(x) it follows that U(ω, 0) = Fs(u(x, 0)) = F(ω) and Ut(ω, 0) = Fs(ut(x, 0)) = G(ω). Using these initial conditions, the solution of (5.5.23) is
U(ω, t) = F(ω) cos cωt + G(ω) (sin cωt)/(cω).
To find the inverse Fourier transform we consider first the case when x−ct > 0.
In this case we have x + ct > 0 and so from the inversion formula (5.5.21) it
follows that
u(x, t) = (2/π) ∫₀^∞ ( F(ω) cos cωt + G(ω) (sin cωt)/(cω) ) sin ωx dω

= (2/π) ∫₀^∞ F(ω) cos cωt sin ωx dω + (2/π) ∫₀^∞ G(ω) (sin cωt sin ωx)/(cω) dω

= (1/π) ∫₀^∞ F(ω) [sin (x + ct)ω + sin (x − ct)ω] dω

+ (1/(πc)) ∫₀^∞ G(ω) (cos (x − ct)ω − cos (x + ct)ω)/ω dω.

For the first integral in the above equation, by the inversion formula (5.5.21)
we have
(1/π) ∫₀^∞ F(ω) [sin (x + ct)ω + sin (x − ct)ω] dω

= (1/π) ∫₀^∞ F(ω) sin (x + ct)ω dω + (1/π) ∫₀^∞ F(ω) sin (x − ct)ω dω

= (1/2) f(x + ct) + (1/2) f(x − ct).

For the second integral in the equation, again by the inversion formula (5.5.21)
we have
(1/(πc)) ∫₀^∞ G(ω) (cos (x − ct)ω − cos (x + ct)ω)/ω dω = (1/(πc)) ∫₀^∞ G(ω) ( ∫_{x−ct}^{x+ct} sin ωs ds ) dω

= (1/(πc)) ∫_{x−ct}^{x+ct} ( ∫₀^∞ G(ω) sin ωs dω ) ds = (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds.

Thus, u(x, t), in this case, is given by

u(x, t) = (f(x + ct) + f(x − ct))/2 + (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds.

If x − ct < 0, then ct − x > 0 and, working as above and using the facts that sin (x − ct)ω = −sin (ct − x)ω and cos (x − ct)ω = cos (ct − x)ω, we obtain

u(x, t) = (f(x + ct) − f(ct − x))/2 + (1/(2c)) ∫_{ct−x}^{x+ct} g(s) ds.

Therefore,

u(x, t) = (f(x + ct) + f(|x − ct|) sign(x − ct))/2 + (1/(2c)) ∫_{|x−ct|}^{x+ct} g(s) ds,

where the function sign(·) is defined by


{
−1, x<0
sign(x) =
1, x > 0.
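In the region x − ct > 0 the formula reduces to d'Alembert's solution, which can be checked for arbitrary f and g (a SymPy sketch):

```python
import sympy as sp

x, t, s = sp.symbols('x t s', real=True)
c = sp.symbols('c', positive=True)
f, g = sp.Function('f'), sp.Function('g')

# d'Alembert form valid for x - ct > 0:
u = (f(x + c*t) + f(x - c*t))/2 + sp.integrate(g(s), (s, x - c*t, x + c*t))/(2*c)

residual = sp.simplify(sp.diff(u, t, 2) - c**2*sp.diff(u, x, 2))
```

SymPy applies the Leibniz rule to the unevaluated integral with moving limits, and the residual of u_tt − c²u_xx simplifies to zero.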

Example 5.5.9. Using the Fourier cosine or Fourier sine transform solve the
following boundary value problem.

utt(x, t) = c²uxx(x, t), 0 < x < ∞, t > 0,
u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < ∞,
ux(0, t) = 0, t > 0.

Solution. We apply the Fourier cosine transform to solve this problem. If U = U(ω, t) = Fc(u(x, t)), then from (5.5.18) and (5.5.20) the wave equation becomes

(5.5.24) d²U(ω, t)/dt² + c²ω²U(ω, t) = 0.

From the initial conditions u(x, 0) = f(x) and ut(x, 0) = g(x) it follows that U(ω, 0) = Fc(u(x, 0)) = F(ω) and Ut(ω, 0) = Fc(ut(x, 0)) = G(ω). Using
these initial conditions, the solution of (5.5.24) is

U(ω, t) = F(ω) cos cωt + G(ω) (sin cωt)/(cω).

If x − ct > 0, then x + ct > 0 and so from the inversion formula (5.5.22) it



follows that

u(x, t) = (2/π) ∫₀^∞ ( F(ω) cos cωt + G(ω) (sin cωt)/(cω) ) cos ωx dω

= (2/π) ∫₀^∞ F(ω) cos cωt cos ωx dω + (2/π) ∫₀^∞ G(ω) (sin cωt cos ωx)/(cω) dω

= (1/π) ∫₀^∞ F(ω) [cos (x + ct)ω + cos (x − ct)ω] dω

+ (1/(πc)) ∫₀^∞ G(ω) (sin (x + ct)ω − sin (x − ct)ω)/ω dω.

For the first integral in the above equation, by the inversion formula (5.5.22) we have

(1/π) ∫₀^∞ F(ω) [cos (x + ct)ω + cos (x − ct)ω] dω = (1/2) f(x + ct) + (1/2) f(x − ct).

For the second integral in the equation, again by the inversion formula (5.5.22) we have

(1/(πc)) ∫₀^∞ G(ω) (sin (x + ct)ω − sin (x − ct)ω)/ω dω = (1/(πc)) ∫₀^∞ G(ω) ( ∫_{x−ct}^{x+ct} cos ωs ds ) dω

= (1/(πc)) ∫_{x−ct}^{x+ct} ( ∫₀^∞ G(ω) cos ωs dω ) ds = (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds.

Thus, u(x, t) in this case is given by

u(x, t) = (f(x + ct) + f(x − ct))/2 + (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds.

If x − ct < 0, then ct − x > 0, and working as above we obtain that u(x, t) in this case is given by

u(x, t) = (f(x + ct) + f(ct − x))/2 + (1/(2c)) ( ∫₀^{x+ct} g(s) ds + ∫₀^{ct−x} g(s) ds ).

Therefore,

u(x, t) = (f(x + ct) + f(|x − ct|))/2 + (1/(2c)) ( ∫₀^{x+ct} g(s) ds − sign(x − ct) ∫₀^{|x−ct|} g(s) ds ).
0 0

Exercises for Section 5.5.

In Problems 1–9, using the Laplace transform solve the indicated initial
boundary value problem on the interval (0, ∞), subject to the given condi-
tions.
1. ut (x, t) + 2ux (x, t) = 0, x > 0, t > 0, u(x, 0) = 3, u(0, t) = 5.

2. ut (x, t) + ux (x, t) = 0, x > 0, t > 0, u(x, 0) = sin x, u(0, t) = 0.

3. ut (x, t) + ux (x, t) = −u, x > 0, t > 0, u(x, 0) = sin x, u(0, t) = 0.

4. ut (x, t) − ux (x, t) = u, x > 0, t > 0, u(x, 0) = e−5x , u(0, t) = 0.

5. ut (x, t) + ux (x, t) = t, x > 0, t > 0, u(x, 0) = 0, u(0, t) = t2 .

6. utt (x, t) = c2 uxx (x, t), x > 0, t > 0 u(0, t) = f (t), lim u(x, t) = 0
x→∞
t > 0, u(x, 0) = 0, ut (x, 0) = 0, x > 0.

7. utt (x, t) = uxx (x, t), x > 0, t > 0 u(0, t) = 0, lim u(x, t) = 0 t > 0,
x→∞
u(x, 0) = xe−x , ut (x, 0) = 0, x > 0.

8. utt (x, t) = uxx (x, t) + t, x > 0, t > 0 u(0, t) = 0, t > 0, u(x, 0) = 0,


ut (x, 0) = 0, x > 0.

9. utt (x, t) = uxx (x, t), x > 0, t > 0 u(0, t) = sin t, t > 0, u(x, 0) = 0,
ut (x, 0) = 0, x > 0.

In Problems 10–14, use the Laplace transform to solve the initial boundary
value problem on the indicated interval, subject to the given conditions.
10. utt (x, t) = c2 uxx (x, t) + kc2 sin πx
a , 0 < x < a, t > 0, u(x, 0) =
ut (x, 0) = 0, u(0, t) = u(a, t) = 0, t > 0 .

11. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, u(x, 0) = 0, ut (x, 0) = 0, u(0, t) = 0, u(1, t) = 0.

12. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, with initial conditions
u(x, 0) = 0, ut (x, 0) = sin πx, and boundary conditions u(0, t) = 0,
ux (1, t) = 1.
[Hint: Expand 1/(1 + e^{−2s}) in a geometric series.]

13. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, with initial conditions
u(x, 0) = 0, ut (x, 0) = 1, and boundary conditions u(0, t) = 0,
u(1, t) = 0.

14. utt (x, t) = uxx (x, t), 0 < x < 1, t > 0, with initial conditions
u(x, 0) = sin πx, ut (x, 0) = − sin πx and boundary conditions
u(0, t) = u(1, t) = 0.

In Problems 15–19, use a Fourier transform to solve the given boundary


value problem on the indicated interval, subject to the given conditions.

15. tux (x, t) + ut (x, t) = 0, −∞ < x < ∞, t > 0 subject to the initial
condition u(x, 0) = f (x).

16. ux (x, t) + 3ut (x, t) = 0, −∞ < x < ∞, t > 0 subject to the initial
condition u(x, 0) = f (x).

17. t2 ux (x, t) − ut (x, t) = 0, −∞ < x < ∞, t > 0 subject to the initial


condition u(x, 0) = 3 cos x.

18. utt (x, t) + ut (x, t) = −u(x, t), −∞ < x < ∞, t > 0 subject to the
initial conditions u(x, 0) = f (x) and ut (x, 0) = g(x).

19. uxt (x, t) = uxx (x, t), −∞ < x < ∞, t > 0 subject to the initial condition u(x, 0) = (√π/2) e^{−|x|}.
In Problems 20–25, use one of the Fourier transforms to solve the indicated
wave equation, subject to the given initial and boundary conditions.
Find explicitly the inverse Fourier transform where it is possible.
20. utt (x, t) = uxx (x, t), −∞ < x < ∞, t > 0 subject to the initial con-
1
ditions u(x, 0) = and ut (x, 0) = 0.
1 + x2

21. utt (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0 subject to the initial
conditions u(x, 0) = 0, ut (x, 0) = 0 and the boundary condition
ux (0, t) = f (t), 0 < t < ∞.
[ ]
Hint: Use the Fourier cosine transform.

22. utt (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0 subject to the ini-
tial conditions u(x, 0) = 0, ut (x, 0) = 0 and the boundary condition
u(0, t) = 0, 0 < t < ∞.
[ ]
Hint: Use the Fourier sine transform.

23. utt (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0 subject to
the initial conditions u(x, 0) = 0, ut (x, 0) = 0 and the condition
ux (0, t) = 0, 0 < t < ∞.
[ ]
Hint: Use the Fourier cosine transform.

24. utt (x, t) = c2 uxx (x, t) + 2aut (x, t) + b2 u(x, t), 0 < x < ∞, t > 0, sub-
ject to the initial conditions u(x, 0) = ut (x, 0) = 0, 0 < x < ∞ and
the boundary conditions u(0, t) = f (t), t > 0 and lim | u(x, t) |<
x→∞
∞,
[ t > 0. ]
Hint: Use the Fourier sine transform.

25. utt (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0, subject to the initial
conditions u(x, 0) = ut (x, 0) = 0, 0 < x < ∞, and the boundary
conditions ux (0, t) − ku(0, t) = f (t), lim | u(x, t) |= 0, t > 0.
[ x→∞]
Hint: Use the Fourier cosine transform.

5.6 Projects Using Mathematica.


In this section we will see how Mathematica can be used to solve several
problems involving the wave equation. In particular, we will develop several
Mathematica notebooks which automate the computation, the solution of this
equation. For a brief overview of the computer software Mathematica consult
Appendix H.
Project 5.6.1. Using d’Alembert’s formula solve the wave equation

utt(x, t) = uxx(x, t), −∞ < x < ∞, t > 0,
u(x, 0) = 1/(x² + 1), ut(x, 0) = x e^{−x²}.

All calculations should be done using Mathematica. Display the plot of u(x, t) at several time instances t.
Solution. Use the DSolve command to find the general solution of the given
partial differential equation.
In[1] := DSolve[D[u[x, t], {t, 2}] == D[u[x, t], {x, 2}], u[x, t], {t, x}];
Out[1] = u[x, t] -> C[1][-t + x] + C[2][t + x];
Next, define the functions f (x) and g(x) and apply the initial conditions.
In[3] := f[x_] := 1/(1 + x^2);
In[4] := g[x_] := x Exp[-x^2];
In[5] := C[1][x] + C[2][x] == f[x];
In[6] := -C[1]'[x] + C[2]'[x] == g[x];
In[7] := Solve[C[1]'[x] + C[2]'[x] == f'[x] && -C[1]'[x] + C[2]'[x] == g[x], {C[1]'[x], C[2]'[x]}]
Out[7] = {{C[1]'[x] -> (1/2)(-g[x] + f'[x]), C[2]'[x] -> (1/2)(g[x] + f'[x])}}
Next, find C1[x] and C2[x].
In[8] := DSolve[C[1]'[x] == (1/2)(-g[x] + f'[x]), C[1][x], x];
In[9] := DSolve[C[2]'[x] == (1/2)(g[x] + f'[x]), C[2][x], x];
Substituting the obtained C[1][x] and C[2][x] into the solution, we obtain
In[10] := u[x, t] = (f[x - t] + f[x + t])/2 + (1/2) Integrate[g[u], {u, x - t, x + t}]
Out[10] = (f[-t + x] + f[t + x])/2 + (1/2) ∫_{x−t}^{x+t} g[u] du.

Plots of u[x, t] at several time instances t are displayed in Figure 5.6.1.



Figure 5.6.1 (plots of u(x, t) at t = 0, 2, 4, 6)
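The same computation can be reproduced outside Mathematica. The following Python/SymPy sketch carries out d'Alembert's formula for this project (with c = 1) and checks the PDE and both initial conditions:

```python
import sympy as sp

x, t, s = sp.symbols('x t s', real=True)

f = 1/(x**2 + 1)        # u(x, 0)
g = x*sp.exp(-x**2)     # u_t(x, 0), as defined in the notebook

# d'Alembert's formula with c = 1:
u = (f.subs(x, x - t) + f.subs(x, x + t))/2 \
    + sp.integrate(g.subs(x, s), (s, x - t, x + t))/2

check_pde = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2))
check_pos = sp.simplify(u.subs(t, 0) - f)                 # u(x, 0) = f(x)
check_vel = sp.simplify(sp.diff(u, t).subs(t, 0) - g)     # u_t(x, 0) = g(x)
```

All three checks simplify to zero; `u` can then be lambdified and plotted over x for the chosen times.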

Project 5.6.2. Use the Laplace transform to solve the wave equation


 utt (x, t) = uxx (x, t), 0 < x < 1, t > 0,
u(x, 0) = sin πx, ut (x, 0) = − sin πx, 0 < x < 1,


u(0, t) = sin t, ut (0, t) = 0, t > 0.

Do all calculation using Mathematica. Display the plots of u(x, t) at sev-


eral time instances t.
Solution. First we define the wave expression:
In[1] := wave = D[u[x, t], {t, 2}] - c^2 D[u[x, t], {x, 2}];
Out[1] = u^(0,2)[x, t] - c^2 u^(2,0)[x, t];
Next, take the Laplace transform
In[3] := LaplaceTransform[wave, t, s];
Out[3] = s^2 LaplaceTransform[u[x, t], t, s] - c^2 LaplaceTransform[u^(2,0)[x, t], t, s] - s u[x, 0] - u^(0,1)[x, 0];
Further, define
In[5] := U[x_, s_] := LaplaceTransform[u[x, t], t, s];

Next, define the initial conditions


In[6] := f[x_] = Sin[Pi x];
In[7] := g[x_] = -Sin[Pi x];
Using the initial conditions define the expression
In[8] := eq = s^2 U[x, s] - c^2 D[U[x, s], {x, 2}] - s f[x] - g[x];
Next find the general solution of

c² d²U(x, s)/dx² − s²U(x, s) + s f(x) + g(x) = 0

In[9] := gensol = DSolve[eq == 0, U[x, s], x]
Out[9] = {{U[x, s] -> e^(s x/c) C[1] + e^(-s x/c) C[2] + (-Sin[π x] + s Sin[π x])/(c² π² + s²)}}
In[11] := Solution = U [x, s]/.gensol[[1]];
Define the boundary condition
In[12] := b[t_] = Sin[t];
Since the solution must remain bounded as x → ∞, we must have C[1] = 0. From the boundary condition we then find C[2]:
In[13] := BC = LaplaceTransform[b[t], t, s];
Out[13] = 1/(1 + s²)
In[15] := Solution /. {x -> 0, C[1] -> 0}
Out[15] = C[2]
Equating the last output with BC gives C[2] = 1/(1 + s²), and so
In[17] := FinalExpression = Solution /. {C[1] -> 0, C[2] -> 1/(1 + s²)}
Out[17] = e^(-s x/c)/(1 + s²) + (-Sin[π x] + s Sin[π x])/(c² π² + s²);
Find the inverse Laplace transform of the last expression:
In[19] := InverseLaplaceTransform[e^(-s x/c)/(1 + s²) + (-Sin[π x] + s Sin[π x])/(c² π² + s²), s, t]
Out[19] = ((c π Cos[c π t] - Sin[c π t]) Sin[π x])/(c π) + HeavisideTheta[t - x/c] Sin[t - x/c].
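The two pieces of the final output can be verified against the wave equation; the second term is active only where t > x/c, i.e. where the Heaviside factor equals 1 (a SymPy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
c = sp.symbols('c', positive=True)

term1 = (c*sp.pi*sp.cos(c*sp.pi*t) - sp.sin(c*sp.pi*t))*sp.sin(sp.pi*x)/(c*sp.pi)
term2 = sp.sin(t - x/c)   # carries the HeavisideTheta[t - x/c] factor in the output

res1 = sp.simplify(sp.diff(term1, t, 2) - c**2*sp.diff(term1, x, 2))
res2 = sp.simplify(sp.diff(term2, t, 2) - c**2*sp.diff(term2, x, 2))
ic_u = sp.simplify(term1.subs(t, 0) - sp.sin(sp.pi*x))               # u(x, 0)
ic_ut = sp.simplify(sp.diff(term1, t).subs(t, 0) + sp.sin(sp.pi*x))  # u_t(x, 0)
```

Both terms solve the wave equation, and the first term alone carries the initial data sin πx and −sin πx.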

Plots of u[x, t] at several time instances t are displayed in Figure 5.6.2.

Figure 5.6.2 (plots of u(x, t) at t = 0, 2, 3, 5)
Figure 5.6.2

Project 5.6.3. In Example 5.4.1 we solved the circular membrane problem


utt(r, φ, t) = c² ( urr + (1/r) ur + (1/r²) uφφ ), 0 < r ≤ a, 0 ≤ φ < 2π, t > 0
for a = c = 1, subject to the initial conditions
{
u(r, φ, 0) = f (r, φ) = (1 − r2 )r2 sin 2φ, 0 < r ≤ a, 0 ≤ φ < 2π,
ut (r, φ, 0) = 0, 0 < r ≤ a, 0 ≤ φ < 2π

and the boundary condition

u(a, φ, t) = 0, 0 ≤ φ < 2π, t > 0.

In Example 5.4.2 we solved the same wave equation for a = 2, c = 1, subject to the initial conditions

u(r, φ, 0) = f(r, φ) = (4 − r²)r sin φ, ut(r, φ, 0) = 1, 0 < r ≤ 2, 0 ≤ φ < 2π.

Using Mathematica, generate the plots of the solutions u(r, φ, t) of these problems, displayed in Figure 5.4.1 and Figure 5.4.2.

Solution. The solution v(r, φ, t) of the problem found in Example 5.4.1 is given by

v(r, φ, t) = 24 Σ_{n=1}^{∞} (1/(z_{2n}³ J3(z_{2n}))) J2(z_{2n} r) sin 2φ cos (z_{2n} t)

and the solution w(r, φ, t) of the problem found in Example 5.4.2 is given by

w(r, φ, t) = 128 Σ_{n=1}^{∞} (J1(z_{1n} r/2)/(z_{1n}³ J2(z_{1n}))) sin φ cos (z_{1n} t/2) + 4 Σ_{n=1}^{∞} (J0(z_{0n} r/2)/(z_{0n}² J1(z_{0n}))) sin (z_{0n} t/2).

In the above sums, zmn (m = 0, 1, 2) denotes the nth zero of the Bessel
function Jm (·).
First generate the zeroes of the mth Bessel function.
In[1] := Bz[m_, n_] := N[BesselJZero[m, n]];
(the nth positive zero of Jm(·))
Next define the N th partial sums of the solutions u1 (r, φ, t) and u2 (r, φ, t).
In[2] := u1[r_, φ_, t_, N_] := 24 Sum[(BesselJ[2, Bz[2, n] r]/((Bz[2, n])^3
BesselJ[3, Bz[2, n]])) Sin[2 φ] Cos[Bz[2, n] t], {n, 1, N}];
In[3] := u2[r_, φ_, t_, N_] := 128 Sum[(BesselJ[1, Bz[1, n] r/2]
/((Bz[1, n])^3 BesselJ[2, Bz[1, n]])) Sin[φ] Cos[Bz[1, n] t/2], {n, 1, N}]
+ 4 Sum[(BesselJ[0, Bz[0, n] r/2]/((Bz[0, n])^2 BesselJ[1, Bz[0, n]]))
Sin[Bz[0, n] t/2], {n, 1, N}];
In[4] := ParametricPlot3D[{r Cos[φ], r Sin[φ], u1[r, φ, 1, 3]}, {φ, 0, 2 Pi},
{r, 0, 1}, Ticks -> {{-1, 0, 1}, {-1, 0, 1}, {-.26, 0, 0.26}}, RegionFunction
-> Function[{r, φ, u}, r <= 1], BoxRatios -> Automatic,
AxesLabel -> {x, y, u}];
In[5] := ParametricPlot3D[{r Cos[φ], r Sin[φ], u2[r, φ, 1, 3]}, {φ, 0, 2 Pi},
{r, 0, 2}, Ticks -> {{-1, 0, 1}, {-1, 0, 1}, {-.26, 0, 0.26}}, RegionFunction
-> Function[{r, φ, u}, r <= 2], BoxRatios -> Automatic,
AxesLabel -> {x, y, u}];
CHAPTER 6

THE HEAT EQUATION

The purpose of this chapter is to study the one dimensional heat equation

ut (x, t) = c2 uxx (x, t) + F (x, t), x ∈ (a, b) ⊆ R, t > 0,

also known as the diffusion equation, and its higher dimensional version

ut (x, t) = a2 ∆ u(x, t) + F (x, t), x ∈ Ω ⊆ Rn , n = 2, 3.

In the first section we will find the fundamental solution of the initial value
problem for the heat equation. In the next sections we will apply the separa-
tion of variables method for constructing the solution of the one and higher
dimensional heat equation in rectangular, polar and spherical coordinates. In
the last section of this chapter we will apply the Laplace and Fourier trans-
forms to solve the heat equation.

6.1 The Fundamental Solution of the Heat Equation.


In this section we will solve the homogeneous and nonhomogeneous heat
equation on the whole real line and on the half line.
10 . The Homogeneous Heat Equation on R. Consider the following homoge-
neous Cauchy problem.

(6.1.1) ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0


(6.1.2) u(x, 0) = f (x), −∞ < x < ∞.

First we will show that the Cauchy problem (6.1.1), (6.1.2) has a unique solution under the assumption that u ∈ C²(R × [0, ∞)) and

(6.1.3) lim_{|x|→∞} ux(x, t) = 0, t > 0.

As for the wave equation we define the notion of energy E(t) of the heat
equation (6.1.1) by

(6.1.4) E(t) = (1/2) ∫_{−∞}^{∞} u²(x, t) dx.


Let v(x, t) and w(x, t) be two solutions of the problem (6.1.1), (6.1.2)
and assume that both v and w satisfy the boundary condition (6.1.3). If
u = v − w, then u satisfies the heat equation (6.1.1) and the initial and
boundary conditions
{
u(x, 0) = 0, −∞ < x < ∞
lim ux (x, t) = 0, t > 0.
x→±∞

From (6.1.4), using the heat equation (6.1.1), the above boundary and initial conditions, and integration by parts, we obtain

E′(t) = ∫_{−∞}^{∞} u(x, t)ut(x, t) dx = c² ∫_{−∞}^{∞} uxx(x, t)u(x, t) dx

= c² [u(x, t)ux(x, t)]_{x=−∞}^{x=∞} − c² ∫_{−∞}^{∞} ux²(x, t) dx = −c² ∫_{−∞}^{∞} ux²(x, t) dx ≤ 0.

Thus, E(t) is a decreasing function. Therefore, since E(t) ≥ 0 and E(0) = 0


we have E(t) = 0 for every t > 0. Hence, u(x, t) = 0 for every t > 0 and so
v(x, t) = w(x, t). ■
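The energy argument has a discrete analogue: for a stable explicit finite-difference scheme the discrete energy is nonincreasing. A small NumPy sketch (the grid, coefficient, and time step are arbitrary choices satisfying the stability condition):

```python
import numpy as np

c2 = 1.0                         # diffusion coefficient c^2
L, n = 20.0, 401                 # interval [-L/2, L/2] and number of grid points
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]
dt = 0.4*dx**2/c2                # explicit scheme is stable for dt <= dx^2/(2 c^2)

u = np.exp(-x**2)                # rapidly decaying initial data
energies = []
for _ in range(200):
    energies.append(0.5*np.sum(u**2)*dx)   # discrete E(t) = (1/2) \int u^2 dx
    u[1:-1] += c2*dt*(u[2:] - 2*u[1:-1] + u[:-2])/dx**2

energies = np.array(energies)
```

The recorded energies decrease monotonically, mirroring E′(t) ≤ 0 in the proof above.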
Now we will solve the heat equation (6.1.1). We look for a solution G(x, t) of the form

G(x, t) = (1/√t) g(x/(c√t))

for some function g of a single variable. If we introduce the new variable ξ by

ξ = x/(c√t),

then by the chain rule, the heat equation (6.1.1) is transformed into the ordinary differential equation

2g′′(ξ) + ξg′(ξ) + g(ξ) = 0.

This equation can be written in the form

(2g′(ξ) + ξg(ξ))′ = 0.

One solution of the last equation is

g(ξ) = e^{−ξ²/4}.

Therefore, after normalizing by the constant factor 1/√(4πc²) (chosen so that property (e) below holds), one solution of the heat equation (6.1.1) is given by

(6.1.5) G(x, t) = (1/√(4πc²t)) e^{−x²/(4c²t)}.
The function G(x, t), defined by (6.1.5), is called the fundamental solution
or Green function of the heat equation (6.1.1).
Some properties of the function G(x, t) are given by the following theorem.
Theorem 6.1.1. The fundamental solution G(x, t) satisfies the following
properties.
(a) G(x, t) > 0 for all x ∈ R and all t > 0.

(b) G(x, t) is infinitely differentiable with respect to x and t.

(c) G(x − a, t) is a solution of the heat equation (6.1.1) for every a.



(d) G(√k x, kt) is a solution of the heat equation (6.1.1) for every k > 0.
∫∞
(e) G(x, t) dx = 1 for every t > 0.
−∞
∫∞
(f) u(x, t) = G(x, t) ∗ f (x) = G(x − y, t)f (y) dy is the solution of the
−∞
Cauchy problem (6.1.1), (6.1.2) for every bounded and continuous
function f on R.
Proof. Properties (a) and (b) are obvious. Properties (c) and (d) can be checked by direct differentiation. For property (e) we have

∫_{−∞}^{∞} G(x, t) dx = (1/√(4πc²t)) ∫_{−∞}^{∞} e^{−x²/(4c²t)} dx (change of variable x = √(4c²t) y)

= (1/√π) ∫_{−∞}^{∞} e^{−y²} dy = (1/√π) √π = 1.

To prove (f) let

u(x, t) = G(x, t) ∗ f(x) = (1/√(4πc²t)) ∫_{−∞}^{∞} e^{−(x−y)²/(4c²t)} f(y) dy.

By direct calculation of the partial derivatives we can check (see Exercise


1 of this section) that the above function u(x, t) satisfies the heat equation
(6.1.1). We omit the proof of the fact that the function u(x, t) satisfies the
initial condition u(x, 0) = f(x). We cannot simply substitute t = 0 directly in the definition of u(x, t), since the kernel function G(x, t) has a singularity (is not defined) at t = 0. For the proof of this omitted fact the reader is referred to the book by G. B. Folland [6]. ■
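The basic properties of G can also be confirmed symbolically; the following SymPy sketch verifies that the Gaussian kernel with the standard normalization solves the heat equation and has total mass 1 (properties of Theorem 6.1.1 in spirit):

```python
import sympy as sp

x = sp.symbols('x', real=True)
t, c = sp.symbols('t c', positive=True)

# Fundamental solution of u_t = c^2 u_xx:
G = sp.exp(-x**2/(4*c**2*t))/sp.sqrt(4*sp.pi*c**2*t)

heat_residual = sp.simplify(sp.diff(G, t) - c**2*sp.diff(G, x, 2))
total_mass = sp.integrate(G, (x, -sp.oo, sp.oo))
```

The residual simplifies to zero and the Gaussian integral evaluates to 1 for all t > 0, consistent with properties (a) and (e).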

Example 6.1.1. Solve the Cauchy problem.

ut(x, t) = (1/4) uxx(x, t), −∞ < x < ∞, t > 0
u(x, 0) = e^{−x²}, −∞ < x < ∞.

Solution. Here c² = 1/4, and so 4c²t = t. From Theorem 6.1.1 it follows that the solution of the given problem is

u(x, t) = G(x, t) ∗ f(x) = (1/√(πt)) ∫_{−∞}^{∞} e^{−s²/t} e^{−(x−s)²} ds

= (e^{−x²}/√(πt)) ∫_{−∞}^{∞} e^{−((t+1)/t)s² + 2xs} ds

= (e^{−x²}/√(πt)) e^{tx²/(t+1)} ∫_{−∞}^{∞} e^{−((t+1)/t)(s − tx/(t+1))²} ds

= e^{−x²/(1+t)}/√(1 + t).

The plots of the heat distribution at several time instances t are displayed in
Figure 6.1.1.

Figure 6.1.1 (the heat distribution u(x, t) at t = 0, 1, 5, 10)
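The Gaussian convolution carried out above can be spot-checked numerically (a SymPy sketch; the evaluation point (x, t) = (0.7, 1.3) is arbitrary):

```python
import sympy as sp

x, s = sp.symbols('x s', real=True)
t = sp.symbols('t', positive=True)

# Convolution integral of Example 6.1.1 (kernel written with 4c^2 t = t):
u = sp.integrate(sp.exp(-s**2/t)*sp.exp(-(x - s)**2), (s, -sp.oo, sp.oo))/sp.sqrt(sp.pi*t)

closed_form = sp.exp(-x**2/(1 + t))/sp.sqrt(1 + t)

# Numerical spot check at (x, t) = (0.7, 1.3):
gap = (u - closed_form).subs({x: sp.Rational(7, 10), t: sp.Rational(13, 10)}).evalf()
```

The gap is numerically zero, confirming the completion-of-the-square computation.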

20 . Nonhomogeneous Heat Equation on R. Consider the nonhomogeneous


problem
{
ut (x, t) = c2 uxx (x, t) + F (x, t), −∞ < x < ∞, t > 0
(6.1.6)
u(x, 0) = f (x), −∞ < x < ∞.

As in the wave equation, the solution u(x, t) of Problem (6.1.6) is given


by
u(x, t) = v(x, t) + w(x, t),
where v(x, t) and w(x, t) are the solutions of the following two problems.
{
vt (x, t) = c² vxx (x, t), −∞ < x < ∞, t > 0
(6.1.7)
v(x, 0) = f (x), −∞ < x < ∞
and
{
wt (x, t) = c2 wxx (x, t) + F (x, t), −∞ < x < ∞, t > 0
(6.1.8)
w(x, 0) = 0, −∞ < x < ∞.

From Case 10 , the solution of Problem (6.1.7) is given by


v(x, t) = ∫_{−∞}^{∞} G(x − s, t)f(s) ds,

where G(x, t) is the Green function, given by (6.1.5).


We will “show” that the solution w(x, t) of problem (6.1.8) is given by
(6.1.9) w(x, t) = ∫₀ᵗ ∫_{−∞}^{∞} G(x − s, t − τ)F(s, τ) ds dτ.

First, let us recall the following result from Chapter 2. If g(x) is a contin-
uous function on R which vanishes outside a bounded interval, then
(6.1.10) ∫_{−∞}^{∞} g(x)δa(x) dx = g(a),

where δa (x) is the Dirac impulse function concentrated at the point x = a.


If we ignore the question about differentiation under the integral sign, then from (6.1.9) it follows that

(6.1.11) wt(x, t) = ∫₀ᵗ ∫_{−∞}^{∞} Gt(x − s, t − τ)F(s, τ) ds dτ + lim_{τ→t} ∫_{−∞}^{∞} G(x − s, t − τ)F(s, τ) ds.

For the first term in Equation (6.1.11), using the fact that the Green function G(x − s, t − τ) satisfies

Gt(x − s, t − τ) = c² Gxx(x − s, t − τ),

we have

∫₀ᵗ ∫_{−∞}^{∞} Gt(x − s, t − τ)F(s, τ) ds dτ = c² (∂²/∂x²) ( ∫₀ᵗ ∫_{−∞}^{∞} G(x − s, t − τ)F(s, τ) ds dτ )

= c² wxx(x, t).


For the second term in (6.1.11), from (6.1.10) and the fact that

lim_{t→0⁺} G(x, t) = δ0(x),

we have

lim_{τ→t} ∫_{−∞}^{∞} G(x − s, t − τ)F(s, τ) ds = F(x, t).

Therefore, Equation (6.1.11) becomes

wt (x, t) = c² wxx (x, t) + F (x, t).

Thus, the function w(x, t), defined by (6.1.9) satisfies the heat equation
in (6.1.8). It remains to show that w(x, t) satisfies the initial condition in
(6.1.8). Ignoring some details we have

lim_{t→0} w(x, t) = lim_{t→0} ∫₀ᵗ ∫_{−∞}^{∞} G(x − s, t − τ)F(s, τ) ds dτ = 0. ■

30 . Homogeneous Heat Equation on (0, ∞). Consider the homogeneous prob-


lem

 ut (x, t) = c uxx (x, t), 0 < x < ∞, t > 0
2

(6.1.12) u(x, 0) = f (x), 0 < x < ∞


u(0, t) = 0, t > 0.

As in d’Alembert’s method for the wave equation, we will convert the initial
boundary value problem (6.1.12) into a boundary value problem on the whole
real line (−∞, ∞). In d’Alembert’s method for the wave equation we have

used either the odd or even extension of the function u(x, t). For the heat equation let us work with an odd extension. Let f̃(x) be the odd extension of the initial function f(x).
Now consider the following initial value problem on (−∞, ∞).
{
vt (x, t) = c2 vxx (x, t), −∞ < x < ∞, t > 0
(6.1.13)
v(x, 0) = f̃(x), −∞ < x < ∞.

From Case 1° it follows that the solution v(x, t) of problem (6.1.13) is given by

v(x, t) = ∫_{−∞}^{∞} G(x − s, t)f̃(s) ds.

Since f̃ is an odd function, the function v(x, t) is also an odd function


with respect to x, i.e., v(−x, t) = −v(x, t) for all x ∈ R and all t > 0 (see
Exercise 5 of this section). Therefore, v(0, t) = 0 and so v(x, t) satisfies
the same boundary condition as the function u(x, t). Now, since problems
(6.1.12) and (6.1.13), coupled with the condition v(0, t) = 0, have unique
solutions, for x > 0 we have

u(x, t) = v(x, t) = ∫_{−∞}^{∞} G(x − s, t)f̃(s) ds

= (1/√(4πc²t)) ∫_{−∞}^{0} e^{−(x−s)²/(4c²t)} f̃(s) ds + (1/√(4πc²t)) ∫₀^{∞} e^{−(x−s)²/(4c²t)} f̃(s) ds

= −(1/√(4πc²t)) ∫₀^{∞} e^{−(x+τ)²/(4c²t)} f(τ) dτ + (1/√(4πc²t)) ∫₀^{∞} e^{−(x−s)²/(4c²t)} f(s) ds

= (1/√(4πc²t)) ∫₀^{∞} ( e^{−(x−s)²/(4c²t)} − e^{−(x+s)²/(4c²t)} ) f(s) ds,

where in the first integral we substituted s = −τ and used f̃(−τ) = −f(τ).

Remark. The heat equation on a bounded interval will be solved in the next
section using the separation of variables method and in the following section
of this chapter by the Laplace and Fourier transforms.

Exercises for Section 6.1.

1. Complete the proof of Theorem 6.1.1.

2. Show that the fundamental solution G(x, t) satisfies the following.

(a) G(x, t) → +∞ as t → 0+ with x = 0.

(b) G(x, t) → 0 as t → 0+ with x ̸= 0.

(c) G(x, t) → 0 as t → ∞.

3. Solve the heat equation (6.1.1) subject to the initial condition


{
0, x<0
u(x, 0) = f (x) =
10 − x, x ≥ 0.

4. Solve the heat equation


{
ut (x, t) = c2 uxx (x, t) + u(x, t), −∞ < x < ∞, t > 0,
u(x, 0) = f (x), −∞ < x < ∞.

5. Let u(x, t) be the solution of the Cauchy problem


{
ut (x, t) = c² uxx (x, t), −∞ < x < ∞, t > 0,
u(x, 0) = f (x), −∞ < x < ∞.

Show that

(a) If f (x) is an odd function, then u(−x, t) = −u(x, t) for all x


and all t > 0. Conclude that u(0, t) = 0.

(b) If f (x) is an even function, then u(−x, t) = u(x, t) for all x


and all t > 0.

6.2 Separation of Variables Method for the Heat Equation.


In this section we will discuss the separation of variables method for solving
the one dimensional heat equation. We will consider homogeneous and nonho-
mogeneous heat equations with homogeneous and nonhomogeneous boundary
conditions on a bounded interval. This method can be applied to the heat
equation in higher dimensions and it can be used to solve some other partial
differential equations, as we will see in the next sections of this chapter and
the following chapters.
10 . Homogeneous Heat Equation with Homogeneous Boundary Conditions.
Consider the homogeneous heat equation

(6.2.1) ut (x, t) = c2 uxx (x, t), 0 < x < l, t > 0,

which satisfies the initial condition

(6.2.2) u(x, 0) = f (x), 0<x<l

and the homogeneous Dirichlet boundary conditions

(6.2.3) u(0, t) = u(l, t) = 0.

We will find a nontrivial solution u(x, t) of the above initial boundary value
problem of the form

(6.2.4) u(x, t) = X(x)T (t),

where X(x) and T (t) are functions of single variables x and t, respectively.
Differentiating (6.2.4) with respect to x and t and substituting the partial
derivatives in Equation (6.2.1) we obtain

(6.2.5) X′′(x)/X(x) = (1/c²) (T′(t)/T(t)).

Equation (6.2.5) holds identically for every 0 < x < l and every t > 0. Since x and t are independent variables, this is possible only if both sides of Equation (6.2.5) are equal to the same constant λ:

X′′(x)/X(x) = (1/c²) (T′(t)/T(t)) = λ.

From the last equations we obtain two ordinary differential equations.

(6.2.6) X ′′ (x) − λX(x) = 0,

and

(6.2.7) T ′ (t) − c2 λT (t) = 0.


6.2 SEPARATION OF VARIABLES METHOD FOR THE HEAT EQUATION 335

From the boundary conditions


u(0, t) = u(l, t) = 0, t > 0
it follows that
X(0)T (t) = X(l)T (t) = 0, t > 0.
From the last equations we have
(6.2.8) X(0) = X(l) = 0,
since T (t) = 0 for every t > 0 would imply u(x, t) ≡ 0. Solving the eigen-
value problem (6.2.6), (6.2.8), as in Chapter 3, it follows that the eigenvalues
λn and the corresponding eigenfunctions Xn (x) are given by
λn = −(nπ/l)²,  Xn (x) = sin(nπx/l),  n = 1, 2, . . ..
The solution of the differential equation (6.2.7) corresponding to the above
found λn is given by
Tn (t) = an e^(−n²π²c²t/l²), n = 1, 2, . . .,

where an are constants which will be determined. Therefore we obtain a
sequence of functions

un (x, t) = an e^(−n²π²c²t/l²) sin(nπx/l), n = 1, 2, . . .,
each of which satisfies the heat equation (6.2.1) and the boundary conditions
(6.2.3). Since the heat equation and the boundary conditions are linear and
homogeneous, a function u(x, t) of the form

(6.2.9) u(x, t) = Σ_{n=1}^{∞} un (x, t) = Σ_{n=1}^{∞} an e^(−n²π²c²t/l²) sin(nπx/l)

also will satisfy the heat equation and the boundary conditions. If we assume
that the above series is convergent, from (6.2.9) and the initial condition
(6.2.2) we obtain

(6.2.10) f (x) = u(x, 0) = Σ_{n=1}^{∞} an sin(nπx/l), 0 < x < l.

Using the orthogonality of sine functions, from (6.2.10) we obtain


(6.2.11) an = (2/l) ∫_0^l f (x) sin(nπx/l) dx, n = 1, 2, . . ..

A formal justification of the above solution of the heat equation is given


by the following theorem, stated without a proof.

Theorem 6.2.1. Suppose that the function f is continuous and its derivative
f ′ is a piecewise continuous function on [0, l]. If f (0) = f (l) = 0, then
the function u(x, t) given in (6.2.9), where the coefficients an are given by
(6.2.11), is the unique solution of the problem defined by (6.2.1), (6.2.2),
(6.2.3).
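The construction above is easy to test numerically. The sketch below (in Python rather than the book's Mathematica, purely as an illustration) takes l = c = 1 and the sample profile f(x) = x(1 − x), computes the coefficients (6.2.11) by quadrature, and checks that the truncated series (6.2.9) reproduces f at t = 0 and vanishes at the endpoints.

```python
import numpy as np

# Illustrative check of (6.2.9)-(6.2.11) with l = c = 1 and f(x) = x(1 - x).
l, c, N = 1.0, 1.0, 60
x = np.linspace(0.0, l, 2001)

def trap(y, xs):
    """Composite trapezoid rule."""
    return float(np.sum((y[1:] + y[:-1]) * (xs[1:] - xs[:-1])) / 2.0)

f = x * (l - x)
# Coefficients a_n from (6.2.11).
a = np.array([(2.0 / l) * trap(f * np.sin(n * np.pi * x / l), x)
              for n in range(1, N + 1)])

def u(xv, t):
    """Truncated series (6.2.9)."""
    n = np.arange(1, N + 1)
    return float(np.sum(a * np.exp(-(n * np.pi * c / l) ** 2 * t)
                        * np.sin(n * np.pi * xv / l)))
```

With 60 terms, u(0.3, 0) agrees with f(0.3) = 0.21 to about three decimal places, and u(0, t) = u(l, t) = 0 up to round-off, as the boundary conditions require.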

Example 6.2.1. Solve the initial boundary value problem

ut (x, t) = c² uxx (x, t), 0 < x < π, t > 0,
u(x, 0) = f (x) = x(π − x), 0 < x < π,
u(0, t) = u(π, t) = 0, t > 0.

Solution. For the coefficients an , applying Equation (6.2.11) and using the
integration by parts formula we have

an = (2/π) ∫_0^π f (x) sin nx dx = (2/π) ∫_0^π x(π − x) sin nx dx

= 2 ∫_0^π x sin nx dx − (2/π) ∫_0^π x² sin nx dx = (4/π) · (1 − (−1)ⁿ)/n³.

Therefore, the solution u(x, t) of the given heat problem is


u(x, t) = (8/π) Σ_{n=1}^{∞} (1/(2n − 1)³) e^(−(2n−1)²c²t) sin (2n − 1)x.

Case 2°. Nonhomogeneous Heat Equation. Homogeneous Boundary Condi-
tions. Consider the initial boundary value problem

(6.2.12) ut (x, t) = c² uxx (x, t) + F (x, t), 0 < x < l, t > 0,
u(x, 0) = f (x), 0 < x < l,
u(0, t) = u(l, t) = 0.

In order to find the solution of problem (6.2.12) we split the problem into
the following two problems:

(6.2.13) vt (x, t) = c² vxx (x, t), 0 < x < l, t > 0,
v(x, 0) = f (x), 0 < x < l,
v(0, t) = v(l, t) = 0,

and

(6.2.14) wt (x, t) = c² wxx (x, t) + F (x, t), 0 < x < l, t > 0,
w(x, 0) = 0, 0 < x < l,
w(0, t) = w(l, t) = 0.

Problem (6.2.13) was considered in Case 1° and it has been solved. Let
v(x, t) be its solution. If w(x, t) is the solution of problem (6.2.14), then

u(x, t) = v(x, t) + w(x, t)

will be the solution of the given problem (6.2.12). So, the remaining problem
to be solved is (6.2.14).
In order to solve problem (6.2.14) we proceed exactly as in the nonho-
mogeneous wave equation of Chapter 5. For each fixed t > 0, expand the
nonhomogeneous term F (x, t) in the Fourier sine series

(6.2.15) F (x, t) = Σ_{n=1}^{∞} Fn (t) sin(nπx/l), 0 < x < l,

where
(6.2.16) Fn (t) = (2/l) ∫_0^l F (ξ, t) sin(nπξ/l) dξ, n = 1, 2, . . ..
0

Next, again for each fixed t > 0, expand the unknown function w(x, t) in
the Fourier sine series

(6.2.17) w(x, t) = Σ_{n=1}^{∞} wn (t) sin(nπx/l), 0 < x < l,

where
(6.2.18) wn (t) = (2/l) ∫_0^l w(ξ, t) sin(nπξ/l) dξ, n = 1, 2, . . ..
0

From the initial condition w(x, 0) = 0 it follows that

(6.2.19) wn (0) = 0.

If we substitute (6.2.15) and (6.2.17) into the heat equation (6.2.14) and
compare the Fourier coefficients we obtain

(6.2.20) wn′(t) + (n²π²c²/l²) wn (t) = Fn (t).

The solution of Equation (6.2.20) in view of the initial condition (6.2.19) is

(6.2.21) wn (t) = ∫_0^t Fn (τ) e^(−c²n²π²(t−τ)/l²) dτ.

Therefore, from (6.2.17) and (6.2.21) the solution of problem (6.2.14) is
given by

(6.2.22) w(x, t) = ∫_0^t ∫_0^l G(x, ξ, t − τ) F (ξ, τ) dξ dτ,

where

(6.2.23) G(x, ξ, t − τ) = (2/l) Σ_{n=1}^{∞} e^(−c²n²π²(t−τ)/l²) sin(nπx/l) sin(nπξ/l).

Remark. The function



(6.2.24) G(x, ξ, t) = (2/l) Σ_{n=1}^{∞} e^(−c²n²π²t/l²) sin(nπx/l) sin(nπξ/l)

is called the Green function of the heat equation on the interval [0, l].
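Formula (6.2.21) is just Duhamel's principle for the first order ODE (6.2.20). A quick numerical illustration (a Python sketch; the choices n = c = l = 1 and Fn(t) = sin t are arbitrary sample data) compares the integral formula with a direct Runge–Kutta integration of the ODE:

```python
import numpy as np

# Compare (6.2.21) with a direct numerical solve of (6.2.20),
# taking n = c = l = 1 and the (arbitrary) source F_n(t) = sin t.
k = np.pi ** 2            # c^2 n^2 pi^2 / l^2
Fn = np.sin

def wn_integral(t, m=20000):
    """w_n(t) from (6.2.21), by the trapezoid rule."""
    tau = np.linspace(0.0, t, m + 1)
    g = Fn(tau) * np.exp(-k * (t - tau))
    return float(np.sum((g[1:] + g[:-1]) * np.diff(tau)) / 2.0)

def wn_ode(t, m=20000):
    """Solve w' + k w = F_n(t), w(0) = 0 with classical RK4."""
    h, w, s = t / m, 0.0, 0.0
    rhs = lambda s, w: Fn(s) - k * w
    for _ in range(m):
        k1 = rhs(s, w)
        k2 = rhs(s + h / 2, w + h * k1 / 2)
        k3 = rhs(s + h / 2, w + h * k2 / 2)
        k4 = rhs(s + h, w + h * k3)
        w += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        s += h
    return w
```

At t = 0.5 both agree with the elementary closed form (k sin t − cos t + e^(−kt))/(1 + k²) to many decimal places.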
Example 6.2.2. Solve the initial boundary value problem for the heat equa-
tion

ut (x, t) = c² uxx (x, t) + xt, 0 < x < π, t > 0,
u(x, 0) = x(π − x), 0 < x < π,
u(0, t) = u(π, t) = 0, t > 0.
For c = 1 plot the heat distribution u(x, t) at several time instances.
Solution. The corresponding homogeneous heat equation was solved in Ex-
ample 6.2.1 and its solution is given by

(6.2.25) v(x, t) = (8/π) Σ_{n=1}^{∞} (1/(2n − 1)³) e^(−(2n−1)²c²t) sin (2n − 1)x.

Now, we solve the nonhomogeneous heat equation with homogeneous bound-


ary conditions:

wt (x, t) = c² wxx (x, t) + xt, 0 < x < π, t > 0,
w(x, 0) = 0, 0 < x < π,
w(0, t) = w(π, t) = 0.

For each fixed t > 0, expand the function xt and the unknown function
w(x, t) in the Fourier sine series on the interval (0, π):


xt = Σ_{n=1}^{∞} Fn (t) sin nx,  w(x, t) = Σ_{n=1}^{∞} wn (t) sin nx,

where the coefficients Fn (t) and wn (t) in these expansions are given by

Fn (t) = (2/π) ∫_0^π xt sin nx dx = −(2t cos nπ)/n,

wn (t) = ∫_0^t Fn (τ) e^(−c²n²(t−τ)) dτ = −(2 cos nπ/n) ∫_0^t τ e^(−c²n²(t−τ)) dτ

= −(2 cos nπ/(c⁴n⁵)) [c²n²t − 1 + e^(−c²n²t)].
Thus,

(6.2.26) w(x, t) = −(2/c⁴) Σ_{n=1}^{∞} (cos nπ/n⁵) [c²n²t − 1 + e^(−c²n²t)] sin nx,

and so, the solution u(x, t) of the problem is given by u(x, t) = v(x, t) +
w(x, t), where v and w are given by (6.2.25) and (6.2.26), respectively.
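The coefficient functions wn(t) can be double-checked numerically (an illustrative Python sketch; with c = 1 and n = 2 the coefficient ODE is w′ + 4w = −t, w(0) = 0, since F₂(t) = −2t cos 2π/2 = −t):

```python
import numpy as np

# For c = 1, n = 2 the coefficient ODE of Example 6.2.2 reads
#   w' + 4w = -t,  w(0) = 0,
# whose solution by elementary integration is
#   w(t) = -(1/16) * (4t - 1 + exp(-4t)).
# We compare this against a small forward-Euler integration.
def w_closed(t):
    return -(4.0 * t - 1.0 + np.exp(-4.0 * t)) / 16.0

def w_euler(t, m=200000):
    h, w, s = t / m, 0.0, 0.0
    for _ in range(m):
        w += h * (-s - 4.0 * w)   # w' = -t - 4w
        s += h
    return w
```

At t = 1 both give approximately −0.1886, confirming the bracketed expression and its overall sign.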
For c = 1 the plots of u(x, t) at 4 instances are displayed in Figure 6.2.1.

Figure 6.2.1. Plots of u(x, t) at t = 0, 0.5, 0.6 and 0.8.

3°. Nonhomogeneous Heat Equation. Nonhomogeneous Dirichlet Boundary
Conditions. Consider the initial boundary value problem

(6.2.27) ut (x, t) = c² uxx (x, t) + F (x, t), 0 < x < l, t > 0,
u(x, 0) = f (x), 0 < x < l,
u(0, t) = α(t), u(l, t) = β(t), t ≥ 0.

The solution of this problem can be found by superposition of the solution
v(x, t) of problem (6.2.12) with homogeneous boundary conditions (considered
in Case 2°) with the solution w(x, t) of the problem

(6.2.28) wt (x, t) = c² wxx (x, t), 0 < x < l, t > 0,
w(x, 0) = 0, 0 < x < l,
w(0, t) = α(t), w(l, t) = β(t).

If we introduce a new function w̃(x, t) by

w̃(x, t) = w(x, t) − (x/l) β(t) + ((x − l)/l) α(t),

then problem (6.2.28) is transformed into the problem

(6.2.29) w̃t (x, t) = c² w̃xx (x, t) + F̃ (x, t), 0 < x < l, t > 0,
w̃(0, t) = w̃(l, t) = 0, t > 0,
w̃(x, 0) = f̃ (x), 0 < x < l,

where
F̃ (x, t) = ((x − l)/l) α′(t) − (x/l) β′(t),
f̃ (x) = ((x − l)/l) α(0) − (x/l) β(0).
Notice that problem (6.2.29) is a nonhomogeneous heat equation with homo-
geneous boundary conditions; it was considered in Case 2° and therefore we
know how to solve it.
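The role of the shift term is only to absorb the boundary values, which is immediate to verify (illustrative Python; the sample data α(t) = sin t, β(t) = t², l = 2 are arbitrary choices):

```python
import numpy as np

# The shift s(x, t) = (x/l) beta(t) - ((x - l)/l) alpha(t) carries the
# boundary data, so that the difference w - s satisfies homogeneous
# boundary conditions.  Sample (arbitrary) boundary data:
l = 2.0
alpha = np.sin
beta = lambda t: t ** 2

def s(x, t):
    return (x / l) * beta(t) - ((x - l) / l) * alpha(t)

for t in (0.0, 0.7, 1.3):
    assert abs(s(0.0, t) - alpha(t)) < 1e-14   # s(0, t) = alpha(t)
    assert abs(s(l, t) - beta(t)) < 1e-14      # s(l, t) = beta(t)
```

Since s is linear in x, it contributes nothing to uxx; its only trace in (6.2.29) is the time derivatives α′, β′ in the new source term F̃.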
4°. Homogeneous Heat Equation. Neumann Boundary Conditions. Consider
the initial boundary value problem

(6.2.30) ut (x, t) = c² uxx (x, t), 0 < x < l, t > 0,
u(x, 0) = f (x), 0 < x < l,
ux (0, t) = ux (l, t) = 0, t > 0.

We will solve this problem by the separation of variables method. Let the
solution u(x, t) (not identically zero) of the above problem be of the form

(6.2.31) u(x, t) = X(x)T (t),

where X(x) and T (t) are functions of single variables x and t, respectively.

Differentiating (6.2.31) with respect to x and t and substituting the par-


tial derivatives in the heat equation in problem (6.2.30) we obtain that

X′′(x)/X(x) = (1/c²) T′(t)/T(t).

The last equation is possible only if

X′′(x)/X(x) = (1/c²) T′(t)/T(t) = λ

for some constant λ. From the last equation we obtain the two ordinary
differential equations

(6.2.32) X ′′ (x) − λX(x) = 0,

and

(6.2.33) T ′ (t) − c2 λT (t) = 0.

From the boundary conditions

ux (0, t) = ux (l, t) = 0, t > 0

it follows that
X ′ (0)T (t) = X ′ (l)T (t) = 0, t > 0.
From the last equations we obtain

(6.2.34) X ′ (0) = X ′ (l) = 0.

Solving the eigenvalue problem (6.2.32), (6.2.34) (see Chapter 3) we obtain


that its eigenvalues λ = λn and the corresponding eigenfunctions Xn (x) are

λ0 = 0, X0 (x) = 1, 0 < x < l

and

λn = −(nπ/l)², Xn (x) = cos(nπx/l), n = 1, 2, . . . , 0 < x < l.

The solution of the differential equation (6.2.33) corresponding to the above


found λn is given by
Tn (t) = an e^(c²λn t), n = 0, 1, 2, . . .,

where an are constants which will be determined.



From (6.2.31) we obtain a sequence {un (x, t) = Xn (x)Tn (t), n = 0, 1, . . .}


given by

un (x, t) = an e^(c²λn t) cos(nπx/l), n = 0, 1, 2, . . .,
each of which satisfies the heat equation and the Neumann boundary con-
ditions involved in problem (6.2.30). Since the given heat equation and the
boundary conditions are linear and homogeneous, a function u(x, t) of the
form

(6.2.35) u(x, t) = Σ_{n=0}^{∞} un (x, t) = Σ_{n=0}^{∞} an e^(c²λn t) cos(nπx/l)

also will satisfy the heat equation and the boundary conditions. If we assume
that the above series is convergent, from (6.2.35) and the initial condition in
problem (6.2.30) we obtain

(6.2.36) f (x) = u(x, 0) = Σ_{n=0}^{∞} an cos(nπx/l), 0 < x < l.

Using the Fourier cosine series (from Chapter 1) for the function f (x) or
the fact, from Chapter 2, that the eigenfunctions

1, cos(πx/l), cos(2πx/l), . . . , cos(nπx/l), . . .

are pairwise orthogonal on the interval [0, l], from (6.2.36) we obtain that

(6.2.37) a0 = (1/l) ∫_0^l f (x) dx,  an = (2/l) ∫_0^l f (x) cos(nπx/l) dx, n = 1, 2, . . ..

Remark. The nonhomogeneous heat equation with Neumann boundary con-
ditions can be solved in a similar way to Example 6.2.2, except we use the
Fourier cosine series instead of the Fourier sine series.
Example 6.2.3. Using the separation of variables method solve the problem

ut (x, t) = uxx (x, t), 0 < x < π, t > 0,
u(x, 0) = π² − x², 0 < x < π,
ux (0, t) = ux (π, t) = 0, t > 0.

Solution. Let the solution u(x, t) of the problem be of the form

u(x, t) = X(x)T (t).

From the heat equation and the given boundary conditions we obtain the
eigenvalue problem
(6.2.38) X′′(x) − λX(x) = 0, 0 < x < π,
X′(0) = X′(π) = 0,

and the ordinary differential equation

(6.2.39) T ′ (t) − λT (t) = 0, t > 0.

Solving the eigenvalue problem (6.2.38) we obtain that its eigenvalues λ = λn


and the corresponding eigenfunctions Xn (x) are

λ0 = 0, X0 (x) = 1, 0 < x < π

and
λn = −n2 , Xn (x) = cos nx, n = 1, 2, . . . , 0 < x < π.
The solution of the differential equation (6.2.39) corresponding to the above
found λn is given by

Tn (t) = an e^(−n²t), n = 0, 1, 2, . . ..

Hence, the solution of the given problem will be of the form




u(x, t) = Σ_{n=0}^{∞} an e^(−n²t) cos nx, 0 < x < π, t > 0,

where an are coefficients to be determined.


From the initial condition u(x, 0) = π² − x², 0 < x < π, and the
orthogonality property of the eigenfunctions

{1, cos x, cos 2x, . . . , cos nx, . . .}

on the interval [0, π] we obtain


a0 = (1/π) ∫_0^π (π² − x²) dx = 2π²/3

and
an = (2/π) ∫_0^π (π² − x²) cos nx dx = −(2/π) ∫_0^π x² cos nx dx

= −(2/π) · (2π/n²) cos nπ = (4/n²)(−1)^(n+1),
for n = 1, 2, . . .. Therefore, the solution u(x, t) of the problem is given by

u(x, t) = 2π²/3 + Σ_{n=1}^{∞} (4/n²)(−1)^(n+1) e^(−n²t) cos nx, 0 < x < π, t > 0.
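The cosine coefficients of Example 6.2.3 are easily confirmed by quadrature (an illustrative Python check, not part of the text):

```python
import numpy as np

# Quadrature check of the coefficients a_0 and a_n in Example 6.2.3.
x = np.linspace(0.0, np.pi, 4001)
fx = np.pi ** 2 - x ** 2

def trap(y):
    """Composite trapezoid rule on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

a0 = trap(fx) / np.pi                      # expected: 2*pi^2/3

def an(n):
    return (2.0 / np.pi) * trap(fx * np.cos(n * x))
```

Numerically a0 ≈ 6.5797 ≈ 2π²/3 and an(n) ≈ 4(−1)^(n+1)/n², e.g. an(1) ≈ 4, an(2) ≈ −1.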

Example 6.2.4. Let the temperature u(x, t) of a rod, composed of two


different metals, be defined by
u(x, t) = { v(x, t), 0 < x < 1/2, t > 0,
w(x, t), 1/2 < x < 1, t > 0.

Find the temperature u(x, t) of the rod by solving the following system of
partial differential equations.
(6.2.40) vt (x, t) = vxx (x, t), 0 < x < 1/2, t > 0,

(6.2.41) wt (x, t) = 4wxx (x, t), 1/2 < x < 1, t > 0,

(6.2.42) v(0, t) = w(1, t) = 0, t > 0,

(6.2.43) v(1/2, t) = w(1/2, t), vx (1/2, t) = wx (1/2, t),

(6.2.44) u(x, 0) = f (x) = x(1 − x), 0 < x < 1.

Solution. If v(x, t) = T1 (t)Y1 (x) and w(x, t) = T2 (t)Y2 (x), then using the
separation of variables we obtain the following system of ordinary differential
equations.
(6.2.45) Y1′′(x) + λY1 (x) = 0, 0 < x < 1/2,

(6.2.46) 4Y2′′(x) + λY2 (x) = 0, 1/2 < x < 1,

(6.2.47) Tk′(t) + λTk (t) = 0, t > 0, k = 1, 2.

Solving the differential equations (6.2.45) and (6.2.46) and using the bound-
ary conditions Y1 (0) = Y2 (1) = 0 we obtain

Y1 (x) = A sin(√λ x),  Y2 (x) = B sin((√λ/2)(x − 1)),
for some constants A and B. From the conditions (6.2.43) we obtain the
following conditions for the functions Y1 (x) and Y2 (x):
(6.2.48) A sin(√λ/2) = −B sin(√λ/2),
A√λ cos(√λ/2) = 2B√λ cos(√λ/2).
One solution of the above system is λ = 0. If we eliminate A and B from
(6.2.48), then we find the other solution:

(6.2.49) cot(√λ/2) = 0.

If λn , n = 1, 2, . . . are the solutions of Equation (6.2.49), then


λn = π 2 (2n − 1)2 , n = 1, 2, . . ..
For the found λn , the coefficients A and B in (6.2.48) can be chosen to
be
A = −1/sin((2n − 1)π/2) = (−1)ⁿ,  B = 1/sin((2n − 1)π/2) = −(−1)ⁿ.
Therefore the eigenfunctions un (x), corresponding to the eigenvalues λn , are
given by
(6.2.50) un (x) = { (−1)ⁿ sin((2n − 1)πx), 0 < x < 1/2,
(−1)ⁿ sin((2n − 1)π(1 − x)), 1/2 < x < 1.
The solution of Equation (6.2.47) for the above found λn is given by

Tn (t) = e^(−(2n−1)²π²t),
and so the solution of the given problem is

u(x, t) = Σ_{n=1}^{∞} cn e^(−(2n−1)²π²t) un (x),

where un (x) are given by (6.2.50).


The coefficients cn are found as usual from the initial condition u(x, 0) =
f (x) and the orthogonality of the functions un (x) on the interval (0, 1):
cn = (1 / ∫_0^1 un²(x) dx) ∫_0^1 un (x) f (x) dx.

For the integral in the denominator, we have


∫_0^1 un²(x) dx = ∫_0^(1/2) sin²((2n − 1)πx) dx + ∫_(1/2)^1 sin²((2n − 1)π(1 − x)) dx = 1/4 + 1/4 = 1/2.
For the integral in the numerator of cn , we have
∫_0^1 un (x) f (x) dx = (−1)ⁿ ∫_0^(1/2) x(1 − x) sin((2n − 1)πx) dx

+ (−1)ⁿ ∫_(1/2)^1 x(1 − x) sin((2n − 1)π(1 − x)) dx

= (−1)ⁿ [2/(π³(2n − 1)³) + 2/(π³(2n − 1)³)] = 4(−1)ⁿ/(π³(2n − 1)³).

Therefore,
cn = 8(−1)ⁿ/(π³(2n − 1)³),
and so the solution of the problem is given by

u(x, t) = (8/π³) Σ_{n=1}^{∞} ((−1)ⁿ/(2n − 1)³) e^(−(2n−1)²π²t) un (x),

where un (x) are given by (6.2.50).
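The two integrals behind cn can be confirmed numerically (illustrative Python; un is the piecewise eigenfunction (6.2.50)):

```python
import numpy as np

# Numerical check of the norm and projection integrals in Example 6.2.4.
def un(x, n):
    """The piecewise eigenfunction (6.2.50)."""
    k = 2 * n - 1
    return np.where(x < 0.5,
                    (-1) ** n * np.sin(k * np.pi * x),
                    (-1) ** n * np.sin(k * np.pi * (1.0 - x)))

x = np.linspace(0.0, 1.0, 20001)

def trap(y):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

for n in (1, 2, 3):
    norm = trap(un(x, n) ** 2)               # expected: 1/2
    proj = trap(un(x, n) * x * (1.0 - x))    # expected: 4(-1)^n/(pi^3 (2n-1)^3)
    assert abs(norm - 0.5) < 1e-4
    assert abs(proj - 4 * (-1) ** n / (np.pi ** 3 * (2 * n - 1) ** 3)) < 1e-6
```

Both checks pass because the second piece of un is the mirror image of the first about x = 1/2, so each half-interval contributes the same amount.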


The plots of the heat distribution u(x, t) of the rod at several time in-
stances t are displayed in Figure 6.2.2.

Figure 6.2.2. Plots of u(x, t) at t = 0, 0.1, 0.3 and 0.5.

Exercises for Section 6.2.

In Exercises 1–12 use the separation of variables method to solve the heat
equation
ut (x, t) = a2 uxx (x, t), 0 < x < l, t > 0,
subject to the following boundary conditions and the following initial
conditions:

1. a = 3, l = π, u(x, 0) = x(π − x), u(0, t) = u(π, t) = 0.

2. a = 1, l = 3, u(x, 0) = x, ux (0, t) = ux (3, t) = 0.



3. a = 2, l = 4, u(x, 0) = x + 1, u(0, t) = ux (4, t) = 0.

4. a = 2, l = π, u(x, 0) = 20, u(0, t) = u(π, t) = 0.

5. a = 2, l = 2, u(0, t) = u(2, t) = 0, and

{
20, 0 ≤ x < 1
u(x, 0) =
0, 1 ≤ x ≤ 2.
6. a = 2, l = π, u(x, 0) = x2 , u(0, t) = u(π, t) = 0.

7. a = 1, l = π, u(x, 0) = 100, u(0, t) = ux (π, t) = 0.


8. l = π, u(0, t) = ux (π, t) = 0, u(x, 0) = { x, 0 < x < π/2,
π − x, π/2 < x < π.
9. l = π, u(x, 0) = π − x, u(0, t) = ux (π, t) = 0.

10. l = π, u(x, 0) = T0 , u(0, t) = 0, u(π, t) = T0 .

11. a = 1, l = π, u(x, 0) = x sin x, u(0, t) = u(π, t) = 0.

12. a = 1, l = π, u(x, 0) = sin² x, u(0, t) = ux (π, t) = 0.

13. Suppose that a rod of length l is radioactive and produces heat at a


constant rate r. Solve the rod heat distribution equation

ut (x, t) = a2 uxx (x, t) + r, u(x, 0) = 0, u(0, t) = u(l, t) = 0.

In Exercises 14–17 solve the nonhomogeneous heat equation

ut (x, t) = a2 uxx (x, t) + F (x, t), 0 < x < l, t > 0,

subject to the given initial condition u(x, 0) = f (x) and the following
source functions F (x, t) and boundary conditions.

14. l = 1, F (x, t) = −1, ux (0, t) = ux (1, t) = 0, f (x) = (1/2)(1 − x²).

15. l = π, f (x) = 0, u(0, t) = u(π, t) = 0,

F (x, t) = { x, 0 < x < π/2,
π − x, π/2 ≤ x < π.

16. a = 1, l = π, F (x, t) = e−t , ux (0, t) = ux (π, t) = 0, f (x) = cos 2x.



17. Solve the following heat equation with periodic boundary conditions.

ut (x, t) = uxx (x, t), −π < x < π, t > 0,
u(−π, t) = u(π, t), ux (−π, t) = ux (π, t), t > 0,
u(x, 0) = f (x), −π < x < π.

18. Solve the nonhomogeneous heat equation

ut (x, t) = a2 uxx (x, t) + F (x, t), 0 < x < l, t > 0

subject to the initial condition

u(x, 0) = 0, 0 < x < l

and the boundary conditions

u(0, t) = u(l, t) = 0, t > 0.

19. The equation

ut (x, t) − uxx (x, t) + bu(x, t) = F (x, t), 0 < x < l, b ∈ R, t > 0

describes heat conduction in a rod taking into account the radioactive


losses if b > 0. Verify that the equation reduces to

vt (x, t) − vxx (x, t) = ebt F (x, t)

with the transformation u(x, t) = e−bt v(x, t).

20. Solve the heat equation

ut (x, t) = uxx (x, t), 0 < x < 1, t > 0

subject to the initial condition

u(x, 0) = f (x), 0 < x < 1

and the boundary conditions

u(0, t) = 0, u(1, t) + ux (1, t) = 0, t > 0.

In Exercises 21–23 solve the heat equation

ut (x, t) = a² uxx (x, t) − bu(x, t), 0 < x < l, t > 0

subject to the given initial condition and given boundary conditions.



21. u(x, 0) = f (x), u(0, t) = u(l, t) = 0.

22. u(x, 0) = sin(πx/2l), u(0, t) = ux (l, t) = 0.

23. u(x, 0) = f (x), ux (0, t) = ux (l, t) = 0.

24. Solve the heat equation

ut (x, t) = a² uxx (x, t), 0 < x < l, t > 0

subject to the initial condition u(x, 0) = T (a constant) and the boundary
conditions

u(0, t) = 0, ux (l, t) = Ae^(−t), t > 0.

Hint: Look for a solution u(x, t) of the type

u(x, t) = f (x)e−t + v(x, t),

where v(x, t) satisfies a homogeneous heat equation and homogeneous


boundary conditions.

6.3 The Heat Equation in Higher Dimensions.


In this section, we will study the two and three dimensional heat equation in
rectangular and circular domains. We begin this section with the fundamental
solution of the heat equation on the whole spaces R2 and R3 .

6.3.1 Green Function of the Higher Dimensional Heat Equation.


Consider the two dimensional heat equation on the whole plane
(6.3.1) ut (x, y, t) = c² (uxx (x, y, t) + uyy (x, y, t)), (x, y) ∈ R², t > 0,
u(x, y, 0) = f (x, y), (x, y) ∈ R².

One method to find the fundamental solution or Green function of the two
dimensional heat equation in (6.3.1) is to work similarly to the one dimen-
sional case. The other method is outlined in Exercise 1 of this section.
We will find a solution G(x, y, t) of the heat equation of the form

G(x, y, t) = (1/t) g(ζ), where ζ = (x² + y²)/t,

and g is a function to be determined.



Using the chain rule we find

Gt (x, y, t) = −(1/t²) g(ζ) − ((x² + y²)/t³) g′(ζ),

and

Gxx (x, y, t) = (2/t²) g′(ζ) + (4x²/t³) g′′(ζ),

Gyy (x, y, t) = (2/t²) g′(ζ) + (4y²/t³) g′′(ζ).

If we substitute the above derivatives into the heat equation, after re-
arrangement we obtain

−g(ζ) − ζ g′(ζ) = 4c² (g′(ζ) + ζ g′′(ζ)).

The last equation can be written in the form

d/dζ [4c² ζ g′(ζ) + ζ g(ζ)] = 0.

If we integrate the last equation (the constant of integration must be zero
if G is to decay at infinity), then we obtain

c² g′(ζ) + (1/4) g(ζ) = 0.

The general solution of the last equation is

g(ζ) = Ae^(−ζ/(4c²)),

and so

G(x, y, t) = (A/t) e^(−(x² + y²)/(4c²t)).

Usually the constant A is chosen such that

∫∫_R² G(x, y, t) dx dy = 1.

Therefore,

(6.3.2) G(x, y, t) = (1/(4πc²t)) e^(−(x² + y²)/(4c²t)).
The function G(x, y, t) given by (6.3.2) is called the fundamental solution
or the Green function of the two dimensional heat equation on the plane R2 .
Some properties of the Green function G(x, y, t) are given in Exercise 2 of
this section.
The Green function is of fundamental importance because of the following
result, stated without a proof.

Theorem 6.3.1. If f (x, y) is a bounded and continuous function on the


whole plane R2 , then
(6.3.3) u(x, y, t) = ∫∫_R² G(x − ξ, y − η, t) f (ξ, η) dξ dη

is the unique solution of problem (6.3.1).

The Green function G(x, y, z, t) for the three dimensional heat equation

ut = c² (uxx + uyy + uzz), (x, y, z) ∈ R³, t > 0

is given by

G(x, y, z, t) = (1/(4πc²t)^(3/2)) e^(−(x² + y² + z²)/(4c²t)).
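The normalization of the two dimensional Green function is easy to verify numerically (illustrative Python; for c = 1, t = 0.1 the Gaussian tail beyond |x|, |y| = 5 is negligible):

```python
import numpy as np

# Numerical check that the 2-D Green function integrates to 1 over the plane.
c, t, L, m = 1.0, 0.1, 5.0, 801
xs = np.linspace(-L, L, m)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
G = np.exp(-(X ** 2 + Y ** 2) / (4 * c ** 2 * t)) / (4 * np.pi * c ** 2 * t)
total = float(np.sum(G) * h * h)   # ~ 1
```

The same computation with the prefactor 1/(4πc²t) replaced by anything else fails the check, which is how the constant A is pinned down.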

6.3.2 The Heat Equation on a Rectangle.


Consider a very thin, metal rectangular plate which is heated. Let

R = {(x, y) : 0 < x < a, 0 < y < b}

be the rectangular plate with boundary ∂R. Let the boundary of the plate
be held at zero temperature for all times and let the initial heat distribution
of the plate be given by f (x, y). The heat distribution u(x, y, t) of the plate
is described by the following initial boundary value problem.

(6.3.4) ut = c² ∆x,y u = c² (uxx + uyy), (x, y) ∈ R, t > 0,
u(x, y, 0) = f (x, y), (x, y) ∈ R,
u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.

If a solution u(x, y, t) of the above problem (6.3.4) is of the form

u(x, y, t) = W (x, y)T (t), 0 < x < a, 0 < y < b, t > 0,

then the heat equation becomes

W (x, y)T ′ (t) = c2 ∆ W (x, y) T (t).

Working exactly as in Chapter 5 we will be faced with solving the Helmholtz


eigenvalue problem
{
∆ W (x, y) + λW (x, y) = 0, (x, y) ∈ R
(6.3.5)
W (x, y) = 0, (x, y) ∈ ∂R

and the first order ordinary differential equation

(6.3.6) T ′ (t) + c2 λT (t) = 0, t > 0.

The eigenvalue problem (6.3.5) was solved in Chapter 5, where we found


that its eigenvalues λmn and corresponding eigenfunctions Wmn (x, y) are
given by

λmn = m²π²/a² + n²π²/b²,  Wmn (x, y) = sin(mπx/a) sin(nπy/b), (x, y) ∈ R.

A general solution of Equation (6.3.6), corresponding to the above found λmn ,
is given by

Tmn (t) = amn e^(−c²λmn t).

Therefore, the solution of problem (6.3.4) will be of the form


(6.3.7) u(x, y, t) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} amn e^(−c²λmn t) sin(mπx/a) sin(nπy/b).

Using the initial condition of the function u(x, y, t) and the orthogonality
property of the eigenfunctions

Wmn (x, y) = sin(mπx/a) sin(nπy/b), m, n = 1, 2, . . .
on the rectangle [0, a] × [0, b], from (6.3.7) we find that the coefficients amn
are given by

(6.3.8) amn = (4/ab) ∫_0^a ∫_0^b f (x, y) sin(mπx/a) sin(nπy/b) dy dx.
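Formulas (6.3.7)-(6.3.8) translate directly into a small numerical solver. The sketch below (illustrative Python; the profile f(x, y) = x(1 − x)y(1 − y) and a = b = c = 1 are arbitrary sample choices) computes the coefficients by a 2-D trapezoid rule and evaluates the truncated series:

```python
import numpy as np

# Illustrative sketch of (6.3.7)-(6.3.8) on the unit square.
c, M = 1.0, 15
xs = np.linspace(0.0, 1.0, 201)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
F = X * (1 - X) * Y * (1 - Y)          # sample initial profile f(x, y)

# trapezoid weights: 1/2 on edges, 1/4 on corners
W = np.ones_like(F)
W[0, :] = W[-1, :] = W[:, 0] = W[:, -1] = 0.5
W[0, 0] = W[0, -1] = W[-1, 0] = W[-1, -1] = 0.25

def coeff(m, n):
    """a_mn from (6.3.8) with a = b = 1."""
    g = F * np.sin(m * np.pi * X) * np.sin(n * np.pi * Y)
    return 4.0 * float(np.sum(g * W)) * h * h

A = {(m, n): coeff(m, n) for m in range(1, M + 1) for n in range(1, M + 1)}

def u(x, y, t):
    """Truncated series (6.3.7)."""
    return sum(A[m, n]
               * np.exp(-c ** 2 * ((m * np.pi) ** 2 + (n * np.pi) ** 2) * t)
               * np.sin(m * np.pi * x) * np.sin(n * np.pi * y)
               for m in range(1, M + 1) for n in range(1, M + 1))
```

At t = 0 the series reproduces f (for example u(0.5, 0.5, 0) ≈ 0.0625), it vanishes on the boundary, and it decays in t.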

Example 6.3.1. Solve the following heat plate initial boundary value prob-
lem.

ut (x, y, t) = (1/π²)(uxx (x, y, t) + uyy (x, y, t)), 0 < x < 1, 0 < y < 1, t > 0,
u(x, y, 0) = sin 3πx sin πy, 0 < x < 1, 0 < y < 1,
u(0, y, t) = u(1, y, t) = u(x, 0, t) = u(x, 1, t) = 0, t > 0.

Solution. From the initial condition f (x, y) = sin 3πx sin πy and Equation
(6.3.8) for the coefficients amn , we find

amn = 4 ∫_0^1 ∫_0^1 sin 3πx sin πy sin mπx sin nπy dy dx = { 1, m = 3, n = 1,
0, otherwise.

Figure 6.3.1. Plots of u(x, y, t) at t = 0, 0.5, 1 and 5.

Therefore, the solution of this problem, using Equation (6.3.7), is given by


u(x, y, t) = e−10t sin 3πx sin πy.
The plots of the heat distribution of the plate at several time instances are
displayed in Figure 6.3.1.

Other types of the two dimensional heat equation on a rectangle (nonho-


mogeneous equation with homogeneous or nonhomogeneous Dirichlet or Neu-
mann boundary conditions) can be solved exactly as in the one dimensional
case. Therefore we will take only one example.
Example 6.3.2. Let R be the square
R = {(x, y) : 0 < x < 1, 0 < y < 1}
and ∂R its boundary. Solve the initial boundary value heat problem:
 1( )

 ut (x, y, t) = π 2 uxx (x, y, t) + uyy (x, y, t) + xyt, (x, y) ∈ R, t > 0

 u(x, y, 0) = sin 3πx sin πy, (x, y) ∈ R, t > 0




u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.

Solution. The corresponding homogeneous heat equation is

vt (x, y, t) = (1/π²)(vxx (x, y, t) + vyy (x, y, t)), (x, y) ∈ R, t > 0,
v(x, y, 0) = sin 3πx sin πy, (x, y) ∈ R,
v(x, y, t) = 0, (x, y) ∈ ∂R, t > 0,

and it was solved in Example 6.3.1:

v(x, y, t) = e^(−10t) sin 3πx sin πy.
Now we solve the corresponding nonhomogeneous heat equation

wt (x, y, t) = (1/π²)(wxx (x, y, t) + wyy (x, y, t)) + xyt, (x, y) ∈ R, t > 0,
w(x, y, 0) = 0, (x, y) ∈ R,
w(x, y, t) = 0, (x, y) ∈ ∂R, t > 0.
For each fixed t > 0 expand the functions xyt and w(x, y, t) in a double
Fourier sine series:

xyt = 4t Σ_{m=1}^{∞} Σ_{n=1}^{∞} ((−1)ᵐ(−1)ⁿ/(mnπ²)) sin mπx sin nπy, (x, y) ∈ R, t > 0,

and

w(x, y, t) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} wmn (t) sin mπx sin nπy,

where

wmn (t) = 4 ∫_0^1 ∫_0^1 w(x, y, t) sin mπx sin nπy dy dx.
If we substitute the above expansions into the heat equation and compare
the Fourier coefficients we obtain

wmn′(t) + (m² + n²) wmn (t) = 4t (−1)^(m+n)/(mnπ²).
The solution of the last equation, subject to the initial condition wmn (0) = 0,
is

wmn (t) = (4(−1)^(m+n)/(mnπ²(m² + n²)²)) [e^(−(m²+n²)t) − 1 + (m² + n²)t],

and so

w(x, y, t) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} (4(−1)^(m+n)/(mnπ²(m² + n²)²)) [e^(−(m²+n²)t) − 1 + (m² + n²)t] sin mπx sin nπy.
Therefore, the solution of the original problem is given by
u(x, y, t) = v(x, y, t) + w(x, y, t).

Remark. For the solution of the three dimensional heat equation on a rect-
angular solid by the separation of variables see Exercise 3 of this section.
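The coefficient ODE that appears in Example 6.3.2 has the generic form w′ + kw = At with w(0) = 0, and elementary integration gives w(t) = (A/k²)(e^(−kt) − 1 + kt). A quick numerical comparison (illustrative Python, with the sample indices m = 1, n = 2) confirms this closed form:

```python
import numpy as np

# Check of the coefficient ODE w' + k w = A t, w(0) = 0, arising in
# Example 6.3.2 with k = m^2 + n^2 and A = 4(-1)^(m+n)/(m n pi^2).
m, n = 1, 2
k = m ** 2 + n ** 2
A = 4.0 * (-1) ** (m + n) / (m * n * np.pi ** 2)

def w_closed(t):
    """w(t) = (A/k^2)(e^(-kt) - 1 + kt), by elementary integration."""
    return (A / k ** 2) * (np.exp(-k * t) - 1.0 + k * t)

def w_euler(t, steps=200000):
    """Forward-Euler solve of w' = A t - k w, w(0) = 0."""
    h, w, s = t / steps, 0.0, 0.0
    for _ in range(steps):
        w += h * (A * s - k * w)
        s += h
    return w
```

At t = 0.5 the two values agree to about four decimal places.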

6.3.3 The Heat Equation in Polar Coordinates.


In this part we will solve the problem of heating a circular plate.
Consider a circular plate D of radius a,

D = {(x, y) : x2 + y 2 < a2 }

with boundary ∂D.


If u(x, y, t) is the temperature of the plate at moment t and a point (x, y)
of the plate, then the heat distribution of the plate is modeled by the heat
initial boundary value problem
 ( )
 ut (x, y, t) = c uxx (x, y, t) + uyy (x, y, t) , (x, y) ∈ D, t > 0,
2

u(x, y, t) = 0, (x, y) ∈ ∂D, t > 0,


u(x, y, 0) = f (x, y), (x, y) ∈ D.

Instead of working with cartesian coordinates when solving this problem,


it is much more convenient to use polar coordinates.
Recall that the polar coordinates and the Laplacian in polar coordinates
are given by

x = r cos φ, y = r sin φ, −π ≤ φ < π, 0 ≤ r < ∞

and
1 1
∆ u(r, φ) = urr + ur + 2 uφφ .
r r
Let us have a circular plate which occupies the disc of radius a and centered
at the origin. If the temperature of the disc plate is denoted by u(r, φ, t), then
the heat initial boundary value problem in polar coordinates is given by

(6.3.9) ut = c² (urr + (1/r)ur + (1/r²)uφφ), 0 < r < a, 0 ≤ φ < 2π, t > 0,
u(a, φ, t) = 0, 0 ≤ φ < 2π, t > 0,
u(r, φ, 0) = f (r, φ), 0 < r < a, 0 ≤ φ < 2π.

Now, we can separate out the variables. Let

u(r, φ, t) = Φ(r, φ)T (t), 0 < r ≤ a, −π ≤ φ < π.

Working exactly in the same way as in the oscillations of a circular mem-


brane problem, discussed in Chapter 5, we will be faced with the eigenvalue
problem

(6.3.10) Φrr + (1/r)Φr + (1/r²)Φφφ + λ² Φ(r, φ) = 0, 0 < r ≤ a, 0 ≤ φ < 2π,
Φ(a, φ) = 0, | lim_(r→0+) Φ(r, φ) | < ∞, 0 ≤ φ < 2π,
Φ(r, φ) = Φ(r, φ + 2π), 0 < r ≤ a, 0 ≤ φ < 2π,

and the differential equation

(6.3.11) T ′ (t) + λ2 c2 T (t) = 0, t > 0.

After separating the variables in the Helmholtz equation in problem (6.3.10),


we find that its eigenvalues and corresponding eigenfunctions are given by
λmn = zmn/a, m = 0, 1, 2, . . . ; n = 1, 2, . . .

and

Φ(c)mn (r, φ) = Jm (zmn r/a) cos mφ,  Φ(s)mn (r, φ) = Jm (zmn r/a) sin mφ,

where zmn is the nth zero of the Bessel function Jm (·) of order m.
For the above parameters λmn , the general solution of (6.3.11) is given by

Tmn (t) = Amn e^(−λ²mn c²t),

and so, the solution u = u(r, φ, t) of heat distribution problem (6.3.9) is given
by

(6.3.12) u = Σ_{m=0}^{∞} Σ_{n=1}^{∞} Amn e^(−λ²mn c²t) Jm (zmn r/a) [am cos mφ + bm sin mφ].

The coefficients in (6.3.12) are found by using the initial condition in


(6.3.9):

f (r, φ) = u(r, φ, 0) = Σ_{m=0}^{∞} Σ_{n=1}^{∞} [am cos mφ + bm sin mφ] Amn Jm (zmn r/a).

From the orthogonality property of the Bessel eigenfunctions {Jm (·), m =


0, 1, . . .}, as well as the orthogonality of {sin mφ, cos mφ, m = 0, 1, . . .} we
obtain

(6.3.13) am Amn = [∫_0^(2π) ∫_0^a f (r, φ) Jm (zmn r/a) r cos mφ dr dφ] / [∫_0^(2π) ∫_0^a Jm²(zmn r/a) r cos² mφ dr dφ],

bm Amn = [∫_0^(2π) ∫_0^a f (r, φ) Jm (zmn r/a) r sin mφ dr dφ] / [∫_0^(2π) ∫_0^a Jm²(zmn r/a) r sin² mφ dr dφ].

Let us take an example.



Example 6.3.3. Solve the heat disc problem (6.3.9) if a = c = 1 and

f (r, φ) = (1 − r2 )r2 sin 2φ, 0 ≤ r ≤ 1, 0 ≤ φ < 2π.

Display the solution u(r, φ, t) of the temperature at several time moments t.


Solution. Since

∫2π
sin 2φ cos mφ dφ = 0, m = 0, 1, 2, . . .,
0

and
∫2π
sin 2φ sin mφ dφ = 0, for every m ̸= 2,
0

from (6.3.13) we have am Amn = 0 for every m = 0, 1, 2, . . . and every


n = 1, 2, . . .; and bm Amn = 0 for every m ̸= 2 and every n = 1, 2, . . ..
For m = 2 and n ∈ N, from (6.3.13) we have

(6.3.14) b2 A2n = [∫_0^(2π) ∫_0^1 (1 − r²)r² sin 2φ J2 (z2n r) r sin 2φ dr dφ] / [∫_0^(2π) ∫_0^1 J2²(z2n r) r sin² 2φ dr dφ]

= [∫_0^1 (1 − r²)r³ J2 (z2n r) dr] / [∫_0^1 r J2²(z2n r) dr].

The numbers z2n in (6.3.14) are the zeros of the Bessel function J2 (x).
Taking a = 1, p = 2 and λ = z2n in Example 3.2.12 of Chapter 3, for
the integral in the numerator in (6.3.14) we have

(6.3.15) ∫_0^1 (1 − r²)r³ J2 (z2n r) dr = (2/z2n²) J4 (z2n).

In Theorem 3.3.11 of Chapter 3 we proved the following formula. If λk are


the zeros of the Bessel function Jµ (x), then

∫_0^1 x Jµ²(λk x) dx = (1/2) J²_(µ+1)(λk).

If µ = 2 in this formula, then for the denominator in (6.3.14) we have

(6.3.16) ∫_0^1 r J2²(z2n r) dr = (1/2) J3²(z2n).

If we substitute (6.3.15) and (6.3.16) into (6.3.14) we obtain


b2 A2n = 4J4 (z2n) / (z2n² J3²(z2n)).

The right hand side of the last equation can be simplified using the following
recurrence formula for the Bessel functions.

Jµ+1 (x) + Jµ−1 (x) = (2µ/x) Jµ (x).
(See Section 3.3 of Chapter 3.) Taking µ = 3 and x = z2n in this formula,
and using the fact that J2 (z2n) = 0, it follows that z2n J4 (z2n) = 6J3 (z2n).
Thus,

b2 A2n = 24 / (z2n³ J3 (z2n)),
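This simplification can be verified numerically (illustrative Python; for integer order, Jn admits the integral representation Jn(x) = (1/π) ∫_0^π cos(nθ − x sin θ) dθ, which we use here instead of a Bessel library):

```python
import numpy as np

# Numerical check that 4 J4(z)/(z^2 J3(z)^2) = 24/(z^3 J3(z)) at a zero of J2.
def J(nn, x, m=4000):
    """Integer-order Bessel J via (1/pi) * int_0^pi cos(n*th - x*sin th) dth."""
    th = np.linspace(0.0, np.pi, m + 1)
    g = np.cos(nn * th - x * np.sin(th))
    return float(np.sum((g[1:] + g[:-1]) * np.diff(th)) / (2.0 * np.pi))

# Locate the first positive zero z21 of J2 by bisection on [5, 6].
lo, hi = 5.0, 6.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if J(2, lo) * J(2, mid) <= 0:
        hi = mid
    else:
        lo = mid
z = (lo + hi) / 2.0          # z21 ~ 5.1356

lhs = 4.0 * J(4, z) / (z ** 2 * J(3, z) ** 2)
rhs = 24.0 / (z ** 3 * J(3, z))
```

Since J4(z) = (6/z)J3(z) − J2(z) and J2(z) = 0 at the zero, lhs and rhs agree to quadrature accuracy.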
and so the solution u(r, φ, t) of the problem is given by

u(r, φ, t) = 24 (Σ_{n=1}^{∞} e^(−z2n²t) (1/(z2n³ J3 (z2n))) J2 (z2n r)) sin 2φ.

See Figure 6.3.2 for the temperature distribution of the disc at the given
instances.

Figure 6.3.2. Plots at t = 0 and t = 0.05.

6.3.4 The Heat Equation in Cylindrical Coordinates.


In this section we will solve the problem of heat distribution in a cylinder.
We will use the cylindrical coordinates

x = r cos φ, y = r sin φ, z = z

and the fact that the Laplacian ∆ in the cylindrical coordinates is given by

∆ = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂φ² + ∂²/∂z²

(see Appendix E).


In the cylinder C, defined in cylindrical coordinates (r, φ, z) by

C = {(r, φ, z) : 0 < r < a, 0 ≤ φ < 2π, 0 < z < l}

consider the initial boundary value heat problem

(6.3.17) ut (r, φ, z, t) = c² ∆ u(r, φ, z, t), (r, φ, z) ∈ C, t > 0,
u(r, φ, z, 0) = f (r, φ, z), (r, φ, z) ∈ C,
[ur (r, φ, z, t) + k1 u(r, φ, z, t)]_(r=a) = 0,
[uz (r, φ, z, t) − k2 u(r, φ, z, t)]_(z=0) = 0,
[uz (r, φ, z, t) + k3 u(r, φ, z, t)]_(z=l) = 0.

If the function u(r, φ, z, t) is of the form

u(r, \varphi, z, t) = R(r)\,\Phi(\varphi)\,Z(z)\,T(t),

then the variables separate and, in view of the boundary conditions, we obtain the differential equation

(6.3.18) T'(t) + c^2 \lambda T(t) = 0,

and the eigenvalue problems

(6.3.19)
\begin{cases}
Z''(z) + (\lambda - \mu) Z(z) = 0, \\
Z'(0) - k_2 Z(0) = 0, \quad Z'(l) + k_3 Z(l) = 0, \\[4pt]
r^2 R''(r) + r R'(r) + (\mu r^2 - \nu) R(r) = 0, \\
\left[ R'(r) + k_1 R(r) \right]_{r=a} = 0, \quad \big|\lim_{r \to 0^+} R(r)\big| < \infty, \\[4pt]
\Phi''(\varphi) + \nu \Phi(\varphi) = 0, \quad \Phi(\varphi) = \Phi(\varphi + 2\pi), \quad 0 \le \varphi < 2\pi.
\end{cases}

The eigenvalues ν and the corresponding eigenfunctions of the last eigenvalue problem in (6.3.19) are given by

(6.3.20) \nu = n^2, \quad \Phi_n(\varphi) \in \{ \sin n\varphi,\ \cos n\varphi \}, \quad n = 0, 1, 2, \ldots.

For the above values of ν, the eigenvalues of the second eigenvalue problem for the function R(r) and the corresponding eigenfunctions are given by

(6.3.21) \mu = \frac{z_{mn}^2}{a^2}, \quad R_{mn}(r) = J_n\!\left( \frac{z_{mn}}{a} r \right), \quad m = 1, 2, \ldots, \ n = 0, 1, 2, \ldots,

where z_{mn} is the mth positive solution of the equation

(6.3.22) z_{mn} J_n'(z_{mn}) + a k_1 J_n(z_{mn}) = 0.

The eigenvalues λ − µ and the corresponding eigenfunctions of the first eigenvalue problem in (6.3.19) are given by

(6.3.23) \lambda - \mu = \varsigma_j^2, \quad Z_j(z) = \sin(\varsigma_j z + \alpha_j), \quad j = 1, 2, \ldots,

where ς_j are the positive solutions of the equation

\cot(l \varsigma_j) = \frac{\varsigma_j^2 - k_2 k_3}{\varsigma_j (k_2 + k_3)},

and α_j are given by

\alpha_j = \arctan\!\left( \frac{\varsigma_j}{k_2} \right).
For all of these eigenvalues, a solution of the differential equation (6.3.18) is given by

(6.3.24) T_{mnj}(t) = e^{-c^2 \left( \frac{z_{mn}^2}{a^2} + \varsigma_j^2 \right) t},

and so from (6.3.20), (6.3.21), (6.3.22), (6.3.23) and (6.3.24) it follows that the solution u = u(r, φ, z, t) of the heat problem for the cylinder is given by

u = \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} \sum_{n=0}^{\infty} \left[ A_{jmn}\, e^{-c^2 \left( \frac{z_{mn}^2}{a^2} + \varsigma_j^2 \right) t} J_n\!\left( \frac{z_{mn}}{a} r \right) \cos n\varphi\, \sin(\varsigma_j z + \alpha_j) + B_{jmn}\, e^{-c^2 \left( \frac{z_{mn}^2}{a^2} + \varsigma_j^2 \right) t} J_n\!\left( \frac{z_{mn}}{a} r \right) \sin n\varphi\, \sin(\varsigma_j z + \alpha_j) \right],

where the coefficients A_{jmn} and B_{jmn} are determined using the initial condition u(r, φ, z, 0) = f(r, φ, z) and the orthogonality properties of the Bessel function J_n(·) and the sine and cosine functions:

A_{jmn} = \frac{4 \displaystyle\int_0^a \int_0^{2\pi} \int_0^l r f(r, \varphi, z)\, J_n\!\left( \tfrac{z_{mn}}{a} r \right) \cos n\varphi\, \sin(\varsigma_j z + \alpha_j)\, dz\, d\varphi\, dr}{\pi a^2 \epsilon_n \left[ 1 + \dfrac{(k_2 k_3 + \varsigma_j^2)(k_2 + k_3)}{(k_2^2 + \varsigma_j^2)(k_3^2 + \varsigma_j^2)} \right] J_n^2(z_{mn}) \left[ 1 + \dfrac{a^2 k_1^2 - n^2}{z_{mn}^2} \right]},

where

\epsilon_n = \begin{cases} 2, & n = 0, \\ 1, & n \ne 0. \end{cases}

Example 6.3.4. Consider an infinitely long circular cylinder in which the temperature is a function of the time t and the radius r only, i.e., u = u(r, t). Solve the heat problem

\begin{cases}
u_t(r, t) = c^2 \left( \dfrac{\partial^2 u}{\partial r^2} + \dfrac{1}{r} \dfrac{\partial u}{\partial r} \right), & 0 < r < a, \ t > 0, \\
u(r, 0) = f(r), & 0 < r < a, \\
\big|\lim_{r \to 0^+} u(r, t)\big| < \infty, \quad u(a, t) = 0, & t > 0.
\end{cases}

Solution. We look for the solution u(r, t) of the above problem in the form

u(r, t) = R(r)T (t).

Separating the variables and using the given boundary conditions we obtain

u(r, t) = \sum_{n=1}^{\infty} A_n J_0\!\left( \frac{z_{n0}}{a} r \right) e^{-\left( \frac{c z_{n0}}{a} \right)^2 t},

where z_{n0} is the nth positive zero of the Bessel function J_0(·).
The coefficients A_n are evaluated using the initial condition and the orthogonality property of the Bessel function:

\int_0^a r\, J_0\!\left( \tfrac{z_{m0}}{a} r \right) J_0\!\left( \tfrac{z_{n0}}{a} r \right) dr = 0, \quad \text{if } m \ne n,

\int_0^a r\, J_0^2\!\left( \tfrac{z_{n0}}{a} r \right) dr = \frac{a^2}{2} J_1^2(z_{n0}).

Using the above formulas we obtain

A_n = \frac{2}{a^2 J_1^2(z_{n0})} \int_0^a r f(r)\, J_0\!\left( \tfrac{z_{n0}}{a} r \right) dr.
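The two orthogonality relations used above can be verified numerically (here with a = 1). The sketch below, in Python rather than the book's Mathematica, computes J₀ by its power series, locates its first two positive zeros by bisection, and evaluates the integrals by Simpson's rule; all names are illustrative:

```python
import math

def J(n, x, terms=40):
    # Bessel function of the first kind J_n via its power series
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(2*k + n) for k in range(terms))

def bisect(f, lo, hi, steps=80):
    # simple bisection for a sign change of f on [lo, hi]
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def simpson(f, a, b, n=1000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, n))
    return s * h / 3

z1 = bisect(lambda x: J(0, x), 2.0, 3.0)   # first positive zero of J0
z2 = bisect(lambda x: J(0, x), 5.0, 6.0)   # second positive zero of J0

cross = simpson(lambda r: r * J(0, z1*r) * J(0, z2*r), 0.0, 1.0)
diag = simpson(lambda r: r * J(0, z1*r)**2, 0.0, 1.0)
print(abs(cross))                    # ~0 (orthogonality)
print(abs(diag - J(1, z1)**2 / 2))   # ~0 (normalization (a^2/2) J1^2 with a = 1)
```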

6.3.5 The Heat Equation in Spherical Coordinates.


In this section we will solve the heat equation in a ball. The derivation of
the solution is exactly the same as in the problem of vibrations of a ball in
spherical coordinates, so we will omit many of the details.
Consider the initial boundary value heat problem

(6.3.25)
\begin{cases}
u_t = c^2 \left[ \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\!\left( r^2 \dfrac{\partial u}{\partial r} \right) + \dfrac{1}{r^2 \sin\theta} \dfrac{\partial}{\partial \theta}\!\left( \sin\theta \dfrac{\partial u}{\partial \theta} \right) + \dfrac{1}{r^2 \sin^2\theta} \dfrac{\partial^2 u}{\partial \varphi^2} \right], \\
u(a, \varphi, \theta, t) = 0, \quad 0 \le \varphi < 2\pi, \ 0 \le \theta \le \pi, \\
u(r, \varphi, \theta, 0) = f(r, \varphi, \theta), \quad 0 \le r < a, \ 0 \le \varphi < 2\pi, \ 0 \le \theta \le \pi.
\end{cases}

If we assume a solution of the above problem of the form

u(r, φ, θ, t) = R(r) Φ(φ) Θ(θ) T (t),

then working exactly as in the vibrations of the ball problem we need to solve
the ordinary differential equation

T'(t) + \lambda c^2 T(t) = 0, \quad t > 0,

and the three eigenvalue problems

\begin{cases}
\dfrac{d}{dr}\!\left( r^2 \dfrac{dR}{dr} \right) + (\lambda r^2 - \nu) R = 0, \quad 0 < r < a, \\
R(a) = 0, \quad \big|\lim_{r \to 0^+} R(r)\big| < \infty,
\end{cases}

\begin{cases}
\dfrac{d^2 \Phi}{d\varphi^2} + \mu \Phi = 0, \quad 0 < \varphi < 2\pi, \\
\Phi(\varphi + 2\pi) = \Phi(\varphi), \quad 0 \le \varphi \le 2\pi,
\end{cases}

\begin{cases}
\dfrac{d^2 \Theta}{d\theta^2} + \cot\theta\, \dfrac{d\Theta}{d\theta} + \left( \nu - \dfrac{\mu}{\sin^2\theta} \right) \Theta = 0, \\
\big|\lim_{\theta \to 0^+} \Theta(\theta)\big| < \infty, \quad \big|\lim_{\theta \to \pi^-} \Theta(\theta)\big| < \infty.
\end{cases}

Using the results of the vibrations of the ball problem we obtain that the solution u = u(r, φ, θ, t) of the heat ball problem (6.3.25) is given by

(6.3.26)
u = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{j=1}^{\infty} A_{mnj}\, e^{-c^2 \left( \frac{z_{nj}}{a} \right)^2 t}\, \frac{1}{\sqrt{r}}\, J_{n+\frac{1}{2}}\!\left( \frac{z_{nj}}{a} r \right) \left[ a_m \cos m\varphi + b_m \sin m\varphi \right] P_n^{(m)}(\cos\theta),

where z_{nj} is the jth positive zero of the Bessel function J_{n+1/2}(x), P_n^{(m)}(x) are the associated Legendre functions of order m (discussed in Chapter 3), given by

P_n^{(m)}(x) = (1 - x^2)^{\frac{m}{2}} \frac{d^m}{dx^m} P_n(x),

and P_n(x) are the Legendre polynomials of degree n.

The coefficients A_{mnj} a_m and A_{mnj} b_m in (6.3.26) are determined using the initial condition given in (6.3.25) and the orthogonality properties of the eigenfunctions R(r), Φ(φ), Θ(θ).

Exercises for Section 6.3.

1. Let v(x, t) and w(y, t) be solutions of the one dimensional heat equations

v_t(x, t) = c^2 v_{xx}(x, t), \quad x \in \mathbb{R}, \ t > 0,
w_t(y, t) = c^2 w_{yy}(y, t), \quad y \in \mathbb{R}, \ t > 0,

respectively. Show that u(x, y, t) = v(x, t)\, w(y, t) is a solution of the two dimensional heat equation

u_t = c^2 (u_{xx} + u_{yy}), \quad (x, y) \in \mathbb{R}^2, \ t > 0,

and deduce that the Green function G(x, y, t) for the two dimensional heat equation is given by

G(x, y, t) = \frac{1}{4\pi c^2 t}\, e^{-\frac{x^2 + y^2}{4 c^2 t}}.

Generalize the method for the three dimensional heat equation.
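Part of this exercise can be sanity-checked numerically: with the product construction, the two dimensional Green function must integrate to 1 over the plane. A small Python check (c and t are arbitrary sample values):

```python
import math

c, t = 1.0, 0.1  # illustrative sample values

def G(x, y):
    # 2D heat kernel: product of two 1D kernels, prefactor 1/(4*pi*c^2*t)
    return math.exp(-(x*x + y*y) / (4*c*c*t)) / (4*math.pi*c*c*t)

L, n = 4.0, 400            # truncate R^2 to [-L, L]^2; midpoint rule
h = 2*L / n
total = sum(G(-L + (i + 0.5)*h, -L + (j + 0.5)*h)
            for i in range(n) for j in range(n)) * h * h
print(round(total, 4))     # ~1.0
```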

2. Show that the Green function G(x, y, t) satisfies the following:

(a) \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} G(x, y, t)\, dy\, dx = 1 for every t > 0.

(b) \lim_{t \to 0^+} G(x, y, t) = 0 for every fixed (x, y) ≠ (0, 0).

(c) G(x, y, t) is an infinitely differentiable function on \mathbb{R}^2.

3. Let V be the three dimensional box defined by V = {(x, y, z) :


0 < x < a, 0 < y < b, 0 < z < c} with boundary ∂V . Use the
separation of variables method to find the solution of the following
three dimensional heat initial boundary value problem.
 ( )

2
 ut = k uxx + uyy + uzz , (x, y, z) ∈ V, t > 0,
u(x, y, z, 0) = f (x, y, z), (x, y, z) ∈ V ,


u(x, y, z, t) = 0, (x, y, z) ∈ ∂V .

4. Let R be the rectangle defined by R = {(x, y) : 0 < x < a, 0 < y < b}


with boundary ∂R. Use the separation of variables method to find
the general solution of the following two dimensional boundary value
heat problem.
\begin{cases}
u_t(x, y, t) = u_{xx}(x, y, t) + u_{yy}(x, y, t), & (x, y) \in R, \ t > 0, \\
u(x, y, t) = 0, & (x, y) \in \partial R, \ t > 0.
\end{cases}

Let R be the rectangle defined by R = {(x, y) : 0 < x < a, 0 <


y < b} with boundary ∂R.
In Exercises 5–8 solve the following two dimensional initial bound-
ary heat problem.

 ut (x, y, t) = uxx (x, y, t) + uyy (x, y, t), (x, y) ∈ R, t > 0,

u(x, y, t) = 0, (x, y) ∈ ∂R, t > 0,


u(x, y, 0) = f (x, y), (x, y) ∈ R

if it is given that


5. a = 1, b = 2, f (x, y) = 3 sin 6πx sin 2πy + 7 sin πx sin 2 y.

6. a = b = π, f (x, y) = 1.

7. a = b = π, f (x, y) = sin x sin y + 3 sin 2x sin y + 7 sin 3x sin 2y.


{
x, 0 ≤ x ≤ π2
8. a = b = π, f (x, y) = g(x)h(y), g(x) = h(x) =
π − x, π2 ≤ x ≤ π.
For Exercises 9–11, let S be the square defined by S = {(x, y) :
0 < x < π, 0 < y < π} whose boundary is ∂S.

9. Solve the following nonhomogeneous initial boundary value problem.



 ut = uxx + uyy + sin 2x sin 3y, (x, y) ∈ S, t > 0,

u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,


u(x, y, 0) = sin 4x sin 7y, (x, y) ∈ S.

10. Use the separation of variables method to solve the following diffusion
initial boundary value problem.

 ut = uxx + uyy + 2k1 ux + 2k2 uy − k3 u,
 (x, y) ∈ S, t > 0,
u(x, y, t) = 0, (x, y) ∈ ∂S, t > 0,


u(x, y, 0) = f (x, y), (x, y) ∈ S.

11. Solve the following nonhomogeneous problem.

\begin{cases}
u_t = u_{xx} + u_{yy} + F(x, y, t), & (x, y) \in S, \ t > 0, \\
u(x, y, t) = 0, & (x, y) \in \partial S, \ t > 0, \\
u(x, y, 0) = f(x, y), & (x, y) \in S.
\end{cases}

Hint: expand F(x, y, t) = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} F_{mn}(t) \sin mx \sin ny.

12. Find the solution u = u(r, t) of the following initial boundary heat
problem in polar coordinates.
 (
 1 )

 ut = c2 urr + ur , 0 < r < a, t > 0,
 r
 | lim u(r, t) |< ∞, u(a, t) = 0, t > 0,

 r→0+

u(r, 0) = T0 , 0 < r < a.
13. Find the solution u = u(r, φ, t) of the following initial boundary heat
problem in polar coordinates.

 1 1

 u = urr + ur + 2 uφφ , 0 < r < 1, 0 < φ < 2π, t > 0,
 t r r
| lim u(r, φ, t) |< ∞, u(1, φ, t) = 0, 0 < φ < 2π, t > 0,

 r→0+


u(r, φ, 0) = (1 − r2 )r sin φ, 0 < r < 1, 0 < φ < 2π.
14. Find the solution u = u(r, φ, t) of the following initial boundary heat
problem in polar coordinates.

 1 1

 u = urr + ur + 2 uφφ , 0 < r < 1, 0 < φ < 2π, t > 0,
 t r r
 | lim u(r, φ, t) |< ∞, u(1, φ, t) = sin 3φ, 0 < φ < 2π, t > 0,

 r→0+

u(r, φ, 0) = 0, 0 < r < 1, 0 < φ < 2π.
15. Find the solution u = u(r, z, t) of the following initial boundary heat
problem in cylindrical coordinates.

 1

 ut = urr + r ur + uzz , 0 < r < 1, 0 < z < 1, t > 0,



u(r, 0, t) = u(r, 1, t) = 0, 0 < r < 1, t > 0,



 u(1, z, t) = 0, 0 < z < 1, t > 0,


u(r, z, 0) = 1, 0 < r < 1.
16. Find the solution u = u(r, t) of the following initial boundary heat problem in spherical coordinates.

\begin{cases}
\dfrac{\partial u(r, t)}{\partial t} = \dfrac{c^2}{r^2} \dfrac{\partial}{\partial r}\!\left( r^2 \dfrac{\partial u}{\partial r} \right), & 0 < r < 1, \ t > 0, \\
\big|\lim_{r \to 0^+} u(r, t)\big| < \infty, \quad u(1, t) = 0, & t > 0, \\
u(r, 0) = 1, & 0 < r < 1.
\end{cases}
17. Find the solution u = u(r, t) of the heat ball problem

\begin{cases}
u_t = c^2 \left( \dfrac{\partial^2 u}{\partial r^2} + \dfrac{2}{r} \dfrac{\partial u}{\partial r} \right), & 0 < r < 1, \ t > 0, \\
u(r, 0) = f(r), & 0 < r < 1, \\
\big|\lim_{r \to 0^+} u(r, t)\big| < \infty, \quad u(1, t) = 0, & t > 0.
\end{cases}

Hint: Introduce a new function v(r, t) by v(r, t) = ru(r, t).

18. Find the solution u = u(r, φ, θ, t) = u(r, θ, t) (u is independent of φ) of the heat ball problem in spherical coordinates.

\begin{cases}
u_t = \dfrac{\partial^2 u}{\partial r^2} + \dfrac{2}{r} \dfrac{\partial u}{\partial r} + \dfrac{1}{r^2} \dfrac{\partial^2 u}{\partial \theta^2} + \dfrac{\cot\theta}{r^2} \dfrac{\partial u}{\partial \theta}, & 0 < r < 1, \ 0 < \theta < \pi, \ t > 0, \\
u(r, \theta, 0) = \begin{cases} 1, & 0 \le \theta < \frac{\pi}{2}, \\ 0, & \frac{\pi}{2} \le \theta \le \pi, \end{cases} & 0 < r < 1, \\
u(1, \theta, t) = 0, & 0 \le \theta \le \pi, \ t > 0.
\end{cases}

6.4 Integral Transform Methods for the Heat Equation.


In this section we will apply the Laplace and Fourier transforms to solve
the heat equation and similar partial differential equations. We begin with
the Laplace transform method.
6.4.1 The Laplace Transform Method for the Heat Equation.
The Laplace transform method for solving the heat equation is the same as the one used to solve the wave equation.
As usual, we use capital letters for the Laplace transform with respect to
t > 0 for a given function g(x, t). For example, we write

L u(x, t) = U (x, s), L y(x, t) = Y (x, s), L f (x, t) = F (x, s).

Let us take several examples to illustrate the Laplace method.


Example 6.4.1. Solve the initial boundary value heat problem
{
ut (x, t) = uxx (x, t) + a2 u(x, t) + f (x), 0 < x < ∞, t > 0,
u(0, t) = ux (0, t) = 0, t > 0.

Solution. Let U(s, t) = L(u(x, t)) and F(s) = L(f(x)), where the Laplace transforms are taken with respect to the spatial variable x. Then the heat equation is transformed into

U_t(s, t) = s^2 U(s, t) - s\, u(0, t) - u_x(0, t) + a^2 U(s, t) + F(s).

In view of the boundary conditions for the function u(x, t) the above equation becomes

U_t(s, t) - (s^2 + a^2) U(s, t) = F(s).

The general solution of the last equation (treating it as an ordinary differential equation with respect to t) is given by

U(s, t) = C e^{(s^2 + a^2)t} - \frac{F(s)}{s^2 + a^2}.
s2 + a2
From the fact that every Laplace transform tends to zero as s → ∞ it follows
that C = 0. Thus,
F (s)
U (s, t) = − 2 ,
s + a2
and so by the convolution theorem for the Laplace transform we have

u(x, t) = -\frac{1}{a} \int_0^x f(x - y) \sin(ay)\, dy.

Example 6.4.2. Using the Laplace transform method solve the initial bound-
ary value heat problem


 ut (x, t) = uxx (x, t), 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 1, t > 0,


u(x, 0) = 0, 0 < x < 1.

( )
Solution. If U = U (x, s) = L u(x, t) , then applying the Laplace transform
to both sides of the heat equation and using the initial condition we have

d2 U
− sU = 0.
dx2
The general solution of the above ordinary differential equation is
(√ ) (√ )
U (x, s) = c1 cosh sx + c2 sinh sx .

From the boundary conditions for the function u(x, t) we have


( ) ( ) 1
U (0, s) = L u(0, t) = 0, U (1, s) = L u(1, t) = .
s
The above boundary conditions for U(x, s) imply

U(x, s) = \frac{1}{s}\, \frac{\sinh(\sqrt{s}\, x)}{\sinh(\sqrt{s})} = \frac{1}{s} \cdot \frac{e^{-(1-x)\sqrt{s}} - e^{-(1+x)\sqrt{s}}}{1 - e^{-2\sqrt{s}}}.

Using the geometric series


∑∞ √
1 −2n s
√ = e
1 − e−2 s n=0

we obtain

U(x, s) = \sum_{n=0}^{\infty} \left[ \frac{e^{-(2n+1-x)\sqrt{s}}}{s} - \frac{e^{-(2n+1+x)\sqrt{s}}}{s} \right].

Therefore (from Table A of Laplace transforms given in Appendix A) we have

u(x, t) = \sum_{n=0}^{\infty} \left[ L^{-1}\!\left( \frac{e^{-(2n+1-x)\sqrt{s}}}{s} \right) - L^{-1}\!\left( \frac{e^{-(2n+1+x)\sqrt{s}}}{s} \right) \right]
= \sum_{n=0}^{\infty} \left[ \operatorname{erf}\!\left( \frac{2n+1-x}{2\sqrt{t}} \right) - \operatorname{erf}\!\left( \frac{2n+1+x}{2\sqrt{t}} \right) \right],

where the error function erf(·) is defined here by

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\, dt.

(With this convention, erf coincides with what is more commonly denoted erfc, the complementary error function.)
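The convention can be confirmed numerically: computing the integral from x to ∞ directly and comparing with the standard complementary error function (`math.erfc` in Python) gives agreement to high accuracy. A minimal sketch, with illustrative names:

```python
import math

def book_erf(x, upper=10.0, n=20000):
    # (2/sqrt(pi)) * integral from x to `upper` of e^{-t^2} dt (midpoint rule);
    # the tail beyond upper = 10 is negligible
    h = (upper - x) / n
    s = sum(math.exp(-(x + (i + 0.5)*h)**2) for i in range(n)) * h
    return 2.0 / math.sqrt(math.pi) * s

for x in (0.0, 0.5, 1.0, 2.0):
    print(abs(book_erf(x) - math.erfc(x)))  # all ~0
```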

Example 6.4.3. Using the Laplace transform method solve the boundary value problem

\begin{cases}
u_t(x, t) = u_{xx}(x, t), & -1 < x < 1, \ t > 0, \\
u(x, 0) = 1, & -1 < x < 1, \\
u_x(1, t) + u(1, t) = 0, \quad u_x(-1, t) + u(-1, t) = 0, & t > 0.
\end{cases}

( )
Solution. If U = U (x, s) = L u(x, t) , then from the initial condition we
have
d2 U
− sU = −1.
dx2
The solution of the last equation, in view of the boundary conditions
[ ] [ ]
dU dU
+ U (x, s) = 0, + U (x, s) = 0,
dx x=1 dx x=−1

is

(6.4.1) U(x, s) = \frac{1}{s} - \frac{\cosh(\sqrt{s}\, x)}{s \left[ \sqrt{s} \sinh(\sqrt{s}) + \cosh(\sqrt{s}) \right]}.

The first function in (6.4.1) is easily invertible. To find the inverse Laplace transform of the second function is not as easy. Let

F(s) = \frac{\cosh(\sqrt{s}\, x)}{s \left[ \sqrt{s} \sinh(\sqrt{s}) + \cosh(\sqrt{s}) \right]}.

Figure 6.4.1 (the contour C_R, with simple poles at 0 and at the points −z_n² on the negative real axis)

The singularities of this function are at s = 0 and all points s which are
solutions of the equation
√ √ √
(6.4.2) s sinh s + cosh s = 0.

These singularities are simple poles (check!). Consider the contour integral

\oint_{C_R} F(z)\, e^{tz}\, dz,

where the contour CR is given in Figure 6.4.1.

Letting R → ∞, from the formula for the inverse Laplace transform by


residues (see Appendix F) we have

(6.4.3) L^{-1}(F(s)) = \operatorname{Res}\left( F(z)e^{zt}, z = 0 \right) + \sum_{n=1}^{\infty} \operatorname{Res}\left( F(z)e^{zt}, z = -z_n^2 \right),

where −zn2 are the negative solutions of Equation (6.4.2).


To calculate the above residues we use a result from Appendix F for evaluating residues. For the pole z = 0 we have

\operatorname{Res}\left\{ F(z)e^{zt}, z = 0 \right\} = \lim_{z \to 0} z F(z) e^{zt} = \lim_{z \to 0} \frac{\cosh(\sqrt{z}\, x)\, e^{zt}}{\sqrt{z}\sinh(\sqrt{z}) + \cosh(\sqrt{z})} = 1.

For the poles z = -z_n^2, using l'Hôpital's rule and Euler's formula

e^{i\alpha} = \cos\alpha + i \sin\alpha


we have

(6.4.4)
\operatorname{Res}\left\{ F(z)e^{zt}, z = -z_n^2 \right\} = \lim_{z \to -z_n^2} (z + z_n^2) F(z) e^{zt}
= \cosh(i z_n x)\, e^{-z_n^2 t} \lim_{z \to -z_n^2} \frac{z + z_n^2}{z \left[ \sqrt{z} \sinh(\sqrt{z}) + \cosh(\sqrt{z}) \right]}
= -\frac{2 \cos(z_n x)\, e^{-z_n^2 t}}{z_n \left( 2 \sin z_n + z_n \cos z_n \right)}.

Therefore, from (6.4.1), (6.4.3) and (6.4.4) it follows that

u(x, t) = L^{-1}\!\left( \frac{1}{s} \right) - L^{-1}(F(s)) = 1 - \left[ 1 - 2 \sum_{n=1}^{\infty} \frac{\cos(z_n x)\, e^{-z_n^2 t}}{z_n \left( 2 \sin z_n + z_n \cos z_n \right)} \right]
= 2 \sum_{n=1}^{\infty} \frac{\cos(z_n x)\, e^{-z_n^2 t}}{z_n \left( 2 \sin z_n + z_n \cos z_n \right)}.

Example 6.4.4. Find the solution of the problem

(6.4.5)
\begin{cases}
u_t(x, t) = a^2 u_{xx}(x, t), & 0 < x < l, \ t > 0, \\
\lim_{x \to 0^+} u(x, t) = \delta(t), \quad u(l, t) = 0, & t > 0, \\
u(x, 0) = 0, & 0 < x < l,
\end{cases}

where δ(t) is the Dirac impulse “function” concentrated at t = 0.


( )
Solution. If U = U (x, s) = L u(x, t) , then using the result

∫∞
f (x)δ(x) dx = f (0),
−∞

for every continuous function f on R which vanishes outside an open interval,


the initial boundary value problem (6.4.5) is reduced to the problem
 s
 Uxx (x, s) − U (x, s) = 0, 0 < x < l, s > 0
a2
 lim U (x, s) = 1, U (l, s) = 0, s > 0.
x→0+

The solution of the last problem is given by

U(x, s) = \frac{\sinh\!\left( \frac{l-x}{a} \sqrt{s} \right)}{\sinh\!\left( \frac{l}{a} \sqrt{s} \right)}.

To find the inverse Laplace transform of the above function we proceed as in Example 6.4.2. We write the function U(x, s) in the form

(6.4.6)
U(x, s) = \frac{\sinh\!\left( \frac{l-x}{a}\sqrt{s} \right)}{\sinh\!\left( \frac{l}{a}\sqrt{s} \right)} = \frac{e^{-\frac{x}{a}\sqrt{s}} - e^{-\frac{2l-x}{a}\sqrt{s}}}{1 - e^{-\frac{2l}{a}\sqrt{s}}}
= \left( e^{-\frac{x}{a}\sqrt{s}} - e^{-\frac{2l-x}{a}\sqrt{s}} \right) \sum_{n=0}^{\infty} e^{-\frac{2ln}{a}\sqrt{s}}
= \sum_{n=0}^{\infty} e^{-\frac{2nl+x}{a}\sqrt{s}} - \sum_{n=1}^{\infty} e^{-\frac{2nl-x}{a}\sqrt{s}}.

From Table A in Appendix A we have

L^{-1}\!\left( e^{-y\sqrt{s}} \right) = g(y, t) = \frac{y}{2\sqrt{\pi}\, t^{3/2}}\, e^{-\frac{y^2}{4t}}.

Therefore, from (6.4.6) it follows that

u(x, t) = \sum_{n=0}^{\infty} g\!\left( \frac{2nl+x}{a},\, t \right) - \sum_{n=1}^{\infty} g\!\left( \frac{2nl-x}{a},\, t \right).

Since the function g(y, t) is odd with respect to the variable y, from the last equation we obtain

u(x, t) = \frac{1}{2a\sqrt{\pi}\, t^{3/2}} \sum_{n=-\infty}^{\infty} (2nl + x)\, e^{-\frac{(2nl+x)^2}{4a^2 t}}.

6.4.2 The Fourier Transform Method for the Heat Equation.


Fourier transforms, like the Laplace transform, can be used to solve the
heat equation, and similar equations, in one and higher spatial dimensions.
The fundamentals of Fourier transforms were developed in Chapter 2. As
usual, we use capital letters for the Fourier transform with respect to x ∈ R
for a given function g(x, t). For example, we write

F u(x, t) = U (ω, t), F y(x, t) = Y (ω, t), F f (x, t) = F (ω, t).

Let us take first an easy example.


Example 6.4.5. Using the Fourier transform solve the initial value problem for the heat equation on the line
{
ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0
u(x, 0) = f (x), −∞ < x < ∞.
Solution. If U = U(ω, t) = F(u(x, t)) and F(ω) = F(f(x)), then the above problem is reduced to the problem

\frac{dU}{dt} = -c^2 \omega^2 U, \quad U(\omega, 0) = F(\omega).

The solution of the last problem is U(ω, t) = F(ω) e^{-c^2 \omega^2 t}.

Now, from the inverse Fourier transform formula it follows that

u(x, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} U(\omega, t)\, e^{i\omega x}\, d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{-c^2\omega^2 t}\, e^{i\omega x}\, d\omega
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(\xi)\, e^{-i\omega\xi}\, d\xi \right) e^{-c^2\omega^2 t}\, e^{i\omega x}\, d\omega
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} e^{-c^2\omega^2 t}\, e^{i\omega(x-\xi)}\, d\omega \right) f(\xi)\, d\xi
= \frac{1}{\pi} \int_{-\infty}^{\infty} \left( \int_0^{\infty} e^{-c^2\omega^2 t} \cos\!\left( \omega(x-\xi) \right) d\omega \right) f(\xi)\, d\xi
= \frac{1}{2c\sqrt{\pi t}} \int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4c^2 t}}\, f(\xi)\, d\xi.

In the above we used the following result:

(6.4.7) \int_0^{\infty} e^{-c^2\omega^2 t} \cos(\omega\lambda)\, d\omega = \frac{1}{2c} \sqrt{\frac{\pi}{t}}\, e^{-\frac{\lambda^2}{4c^2 t}}.

This result can be derived by differentiating the function

g(\lambda) = \int_0^{\infty} e^{-c^2\omega^2 t} \cos(\omega\lambda)\, d\omega

with respect to λ and integrating by parts to obtain

\frac{dg(\lambda)}{d\lambda} = -\frac{\lambda}{2c^2 t}\, g(\lambda),

which, together with g(0) = \frac{1}{2c}\sqrt{\frac{\pi}{t}}, gives (6.4.7).
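Formula (6.4.7), including the Gaussian factor on the right hand side, can also be checked by direct numerical integration; the values of c, t and λ below are arbitrary samples:

```python
import math

c, t, lam = 1.0, 0.5, 1.3  # illustrative sample values
W, n = 15.0, 150000        # truncate the integral at omega = W; midpoint rule
h = W / n
lhs = sum(math.exp(-c*c*t*((i + 0.5)*h)**2) * math.cos((i + 0.5)*h * lam)
          for i in range(n)) * h
rhs = (1/(2*c)) * math.sqrt(math.pi/t) * math.exp(-lam*lam/(4*c*c*t))
print(abs(lhs - rhs))      # ~0
```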

Example 6.4.6. Using the Fourier cosine transform solve the initial bound-
ary value problem



 ut (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0


 ux (0, t) = f (t), 0 < t < ∞,

 u(x, 0) = 0, 0 < x < ∞


 lim u(x, t) = lim ux (x, t) = 0, t > 0.
x→∞ x→∞

( ) ( )
Solution. If U = U (ω, t) = Fc u(x, t) and F (ω) = Fc f (x) , then taking
the Fourier cosine transform from both sides of the heat equation and using the
integration by parts formula (twice) and the boundary condition ux (0, t) =
f (t) we obtain

\frac{dU(\omega, t)}{dt} = c^2 \int_0^{\infty} u_{xx}(x, t) \cos(\omega x)\, dx = c^2 \left[ u_x(x, t) \cos(\omega x) \right]_{x=0}^{\infty} + c^2 \omega \int_0^{\infty} u_x(x, t) \sin(\omega x)\, dx
= -c^2 f(t) + c^2 \omega \left[ u(x, t) \sin(\omega x) \right]_{x=0}^{\infty} - c^2 \omega^2 \int_0^{\infty} u(x, t) \cos(\omega x)\, dx = -c^2 f(t) - c^2 \omega^2 U(\omega, t),

i.e.,
dU (ω, t)
+ c2 ω 2 U (ω, t) = −c2 f (t).
dt

The solution of the last differential equation, in view of the initial condition

∫∞
( )
U (ω, 0) = u(x, 0) cos ω x dx = 0,
0

is given by
∫t
f (τ ) e−c
2
ω 2 (t−τ )
U (ω, t) = −c2 dτ.
0

If we take the inverse Fourier cosine transform, then we obtain that the solution u(x, t) of the given problem is

u(x, t) = \frac{2}{\pi} \int_0^{\infty} U(\omega, t) \cos(\omega x)\, d\omega
= -\frac{2c^2}{\pi} \int_0^{\infty} \left( \int_0^t f(\tau)\, e^{-c^2\omega^2(t-\tau)}\, d\tau \right) \cos(\omega x)\, d\omega
= -\frac{2c^2}{\pi} \int_0^t \left( \int_0^{\infty} e^{-c^2\omega^2(t-\tau)} \cos(\omega x)\, d\omega \right) f(\tau)\, d\tau
= -\frac{c}{\sqrt{\pi}} \int_0^t \frac{f(\tau)}{\sqrt{t-\tau}}\, e^{-\frac{x^2}{4c^2(t-\tau)}}\, d\tau \quad \text{(by result (6.4.7) in Example 6.4.5)}.
0

Exercises for Section 6.4.


In Problems 1–11, use the Laplace transform to solve the indicated initial
boundary value problem on (0, ∞), subject to the given conditions.
1. ut (x, t) = uxx (x, t), 0 < x < 1, t > 0, subject to the following condi-
tions: u(x, 0) = 0, u(0, t) = 1, u(1, t) = u0 , u0 is a constant.

2. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following con-
ditions: lim u(x, t) = u1 , u(0, t) = u0 , u(x, 0) = u1 , u0 and u1 are
x→∞
constants.

3. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following con-
ditions: lim u(x, t) = u0 , ux (0, t) = u(0, t), u(x, 0) = u0 , u0 is a
x→∞
constant.

4. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following condi-
tions: lim u(x, t) = u0 , u(0, t) = f (t), u(x, 0) = 0, u0 is a constant.
x→∞

5. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following
conditions: lim u(x, t) = 60, u(x, 0) = 60, u(0, t) = 60 + 40 U2 (t),
x→∞{
1, 0 ≤ t ≤ 2
where U2 (t) =
0, 2 < t < ∞.
6. ut (x, t) = uxx (x, t), −∞ < x < 1, t > 0, subject to the following
conditions: lim u(x, t) = u0 , ux (1, t) + u(1, t) = 100, u(x, 0) = 0,
x→−∞
u0 is a constant.

7. ut (x, t) = uxx (x, t), 0 < x < 1, t > 0, subject to the following condi-
tions: u(x, 0) = u0 + u0 sin x, u(0, t) = u(1, t) = u0 , u0 is a constant.

8. ut (x, t) = uxx (x, t) − hu(x, t), 0 < x < ∞, t > 0, subject to the
following conditions: lim u(x, t) = 0, u(x, 0) = 0, u(0, t) = u0 , u0 is
x→∞
a constant.

9. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0, subject to the following con-
ditions: lim u(x, t) = 0, u(x, 0) = 10e−x , u(0, t) = 10.
x→∞

10. ut (x, t) = uxx (x, t) + 1, 0 < x < 1, t > 0, subject to the following
conditions: u(0, t) = u(1, t) = 0, u(x, 0) = 0.
( )
11. ut (x, t) = c2 uxx (x, t) + (1 + k)ux (x, t) + ku(x, t) , 0 < x < ∞, t > 0,
subject to the following conditions: u(0, t) = u0 , lim | u(x, t) |< ∞,
x→∞
u(x, 0) = 0, u0 is a constant.

12. Find the solution u(r, t) of the initial boundary value problem

2
ut (r, t) = urr (r, t) + ur (r, t), 0 < r < 1, t > 0
r
lim | u(r, t) |< ∞, ur (1, t) = 1, t > 0
r→0+
u(r, 0) = 0, 0 ≤ r < 1.

13. Solve the boundary value problem

ut (x, t) = uxx (x, t) + u(x, t) + A cos x, 0 < x < ∞, t > 0


−3t
u(0, t) = Be , ux (0, t) = 0, t > 0.

14. Solve the heat equation

ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0,

subject to the following conditions:

(a) lim u(x, t) = δ(t), lim u(x, t) = 0, u(x, 0) = 0, 0 < x < ∞,


x→0+ x→∞
where δ(t) is the Dirac impulse function, concentrated at t = 0.
(b) lim u(x, t) = f (t), lim u(x, t) = 0, u(x, 0) = 0, 0 < x < ∞.
x→0+ x→∞

In Problems 15–19, use the Fourier transform to solve the indicated initial
boundary value problem on the indicated interval, subject to the given
conditions.

15. ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0 subject to the following
condition: u(x, 0) = µ(x).

16. ut (x, t) = c2 uxx (x, t) + f (x, t), −∞ < x < ∞, t > 0 subject to the
following condition: u(x, 0) = 0.

17. ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0 subject to the condition
{
1, |x| < 1
u(x, 0) =
0, |x| > 1.
18. ut (x, t) = c2 uxx (x, t), −∞ < x < ∞, t > 0 subject to the condition
u(x, 0) = e−|x| .

19. ut(x, t) = t² uxx(x, t), −∞ < x < ∞, t > 0 subject to the condition
u(x, 0) = f (x).

In Problems 20–25, use one of the Fourier transforms to solve the indicated
heat equation, subject to the given initial and boundary conditions.

20. ut (x, t) = c2 uxx (x, t), 0 < x < ∞, t > 0 subject to the conditions
ux (0, t) = µ(t), and u(x, 0) = 0.

21. ut (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0 subject to the
conditions u(0, t) = u(x, 0) = 0.

22. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0 subject to the initial conditions

{
1, 0 < x < 1
u(x, 0) =
0, 1 ≤ x < ∞.
23. ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0 subject to the initial conditions
u(x, 0) = 0, ut (x, 0) = 1 and the boundary condition lim u(x, t) = 0.
x→∞

24. ut (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0, subject to the
initial condition u(x, 0) = 0, 0 < x < ∞, and the boundary condition
u(0, t) = 0.

25. ut (x, t) = c2 uxx (x, t) + f (x, t), 0 < x < ∞, t > 0, subject to the ini-
tial condition ux (x, 0) = 0, 0 < x < ∞, and the boundary condition
u(0, t) = 0.

6.5 Projects Using Mathematica.


In this section we will see how Mathematica can be used to solve several
problems involving the heat equation. In particular, we will develop several
Mathematica notebooks which automate the computation of the solution of
this equation. For a brief overview of the computer software Mathematica
consult Appendix H.
Project 6.5.1. Solve the following heat distribution problem for the unit
disc.

 1 1

 ut (r, φ, t) = urr + r ur + r2 uφφ , 0 < r ≤ 1, 0 ≤ φ < 2π, t > 0,

| lim u(r, φ, t) |< ∞, u(1, φ, t) = 0, 0 ≤ φ < 2π, t > 0,

 r→0+


u(r, φ, 0) = f (r, φ) = (1 − r2 )r sin φ, 0 < r < 1, 0 ≤ φ < 2π.

All calculation should be done using Mathematica. Display the plot of the
function u(r, φ, t) at several time instances t and find the hottest places on
the disc plate for the specified instances.
Solution. The solution of the heat problem, as we derived in Section 6.3, is given by

u(r, \varphi, t) = \sum_{m=0}^{\infty} \sum_{n=1}^{\infty} J_m(z_{mn} r)\, e^{-z_{mn}^2 t} \left[ A_{mn} \cos m\varphi + B_{mn} \sin m\varphi \right],

where z_{mn} is the nth positive zero of J_m(·) and

A_{mn} = \frac{1}{\pi J_{m+1}^2(z_{mn})} \int_0^1 \int_0^{2\pi} f(r, \varphi) \cos m\varphi\, J_m(z_{mn} r)\, r\, d\varphi\, dr,

B_{mn} = \frac{1}{\pi J_{m+1}^2(z_{mn})} \int_0^1 \int_0^{2\pi} f(r, \varphi) \sin m\varphi\, J_m(z_{mn} r)\, r\, d\varphi\, dr.

Define the nth positive zero of the Bessel function Jm(·):

In[1] := z[m_, n_] := N[BesselJZero[m, n]];

Now define the square roots of the eigenvalues of the Helmholtz equation:

In[2] := λ[m_, n_] := z[m, n];

Next, define the eigenfunctions:

In[3] := EFc[r_, φ_, m_, n_] := BesselJ[m, λ[m, n] r] Cos[m φ];

In[4] := EFs[r_, φ_, m_, n_] := BesselJ[m, λ[m, n] r] Sin[m φ];

Define the initial condition:

In[5] := f [r− , φ− ] := (1 − r2 ) r Sin[φ];


Define the coefficients Amn and Bmn:

In[6] := A[m_, n_] := 1/(Pi BesselJ[m + 1, λ[m, n]]^2) Integrate[f[r, φ] EFc[r, φ, m, n] r, {r, 0, 1}, {φ, 0, 2 Pi}];

In[7] := B[m_, n_] := 1/(Pi BesselJ[m + 1, λ[m, n]]^2) Integrate[f[r, φ] EFs[r, φ, m, n] r, {r, 0, 1}, {φ, 0, 2 Pi}];

Define the kth partial sum which will be the approximation of the solution:

In[8] := u[r_, φ_, t_, k_] := Sum[(A[m, n] EFc[r, φ, m, n] + B[m, n] EFs[r, φ, m, n]) Exp[-λ[m, n]^2 t], {m, 0, k}, {n, 1, k}];

To plot the solution u(r, φ, t) at t = 0 (taking k = 5) we use


In[9] := ParametricPlot3D[{r Cos[φ], r Sin[φ], u[r, φ, 0, 5]}, {φ, 0, 2 Pi}, {r, 0, 1}, Ticks -> {{-1, 0, 1}, {-1, 0, 1}, {-0.26, 0, 0.26}}, RegionFunction -> Function[{r, φ, u}, r <= 1], BoxRatios -> Automatic, AxesLabel -> {x, y, u}];

Let us find the points on the disc with maximal temperature u(r, φ) at
the specified time instances t.
At the initial moment t = 0 we have that
[ ]
In[10]:= FindMaximum {u[r, φ, 0], 0 <= r < 1 && 0 <= φ < 2 Pi }, {r, φ}
Out[10] = {0.387802, {r− > 0.605459, φ− > 1.5708}}
At the moment t = 0.2 we have that
In[11] := FindMaximum[{u[r, φ, 0.2], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}]
Out[11] = {0.297189, {r− > 0.534556, φ− > 1.5708}}
At the moment t = 0.4 we have that
[ ]
In[12]:= FindMaximum {u[r, φ, 0.4], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}
Out[12] = {0.224781, {r− > 0.507249, φ− > 1.5708}}
At the moment t = 0.6 we have that
In[13] := FindMaximum[{u[r, φ, 0.6], 0 <= r < 1 && 0 <= φ < 2 Pi}, {r, φ}]
Out[13] = {0.168849, {r− > 0.493909, φ− > 1.5708}}

Figure 6.5.1 (plots of u(r, φ, t) at t = 0 and t = 0.02)

The plots of u(r, φ, t) at the time instances t = 0 and t = 0.02 are


displayed in Figure 6.5.1.

Project 6.5.2. Use the Laplace transform to solve the heat boundary value
problem 
 ut (x, t) = uxx (x, t), 0 < x < ∞, t > 0,

u(x, 0) = 1, 0 < x < ∞,


u(0, t) = 10, t > 0.

Solution. First we define the heat expression:


In[1] := heat = D[u[x, t], {t, 1}] − D[u[x, t], {x, 2}];
Out[1] = u(0,1) [x, t] − u(2,0) [x, t];
Next, take the Laplace transform:
In[2] := LaplaceTransform[heat, t, s];

Out[2] = s LaplaceTransform[u[x, t], t, s] - LaplaceTransform[u^(2,0)[x, t], t, s] - u[x, 0];
Define:
In[3] := U[x_, s_] := LaplaceTransform[u[x, t], t, s];
Define the initial condition:
In[4] := f [x− ] := 1;
Using the initial condition define the expression:
In[5] := eq = s U [x, s] − D[U [x, s], {x, 2}] − f [x];
Next find the general solution of
d2 U (x, s)
− sU (x, s) − f (x) = 0
dx2

In[6] := gensol = DSolve[eq == 0, U[x, s], x]

Out[6] := {{U[x, s] -> 1/s + e^(Sqrt[s] x) C[1] + e^(-Sqrt[s] x) C[2]}}

In[7] := Sol = U [x, s]/.gensol[[1]];

Define the boundary condition:

In[8] := b[t− ] = 10;

From the boundary condition and the requirement that the Laplace transform remain bounded as s → ∞ we find the constants C[1] and C[2]:

In[9] := BC = LaplaceTransform[b[t], t, s];

Out[9] = 10/s

In[10] := (Sol /. {x -> 0, C[1] -> 0})

Out[10] = 1/s + C[2]

Equating Out[9] with Out[10] gives C[2] = 9/s. Thus

In[11] := FE = Sol /. {C[2] -> 9/s, C[1] -> 0}

Out[11] = 1/s + 9 e^(-Sqrt[s] x)/s;

Find the inverse Laplace transform of the last expression:

In[12] := InverseLaplaceTransform[1/s + 9 e^(-Sqrt[s] x)/s, s, t]

Out[12] = 1 + 9 Erfc[x/(2 Sqrt[t])].

Plots of u[x, t] at several time instances t are displayed in Figure 6.5.2.

Figure 6.5.2 (plots of u(x, t) for t = 0.02, 0.5, 2, 5, 10, 20)

Project 6.5.3. Use the Fourier transform to solve the heat boundary value
problem

 1

 ut (x, t) = 4 uxx (x, t), −∞ < x < ∞, t > 0,
{

 10, −1 < x < 1
 u(x, 0) = f (x) = −∞ < x < ∞.
0, otherwise,

Do all calculations using Mathematica.


Solution. First we define the heat expression:
In[1] := heat = D[u[x, t], {t, 1}] - 1/4 D[u[x, t], {x, 2}];

Out[1] = u^(0,1)[x, t] - 1/4 u^(2,0)[x, t];
Next, take the Fourier transform:

In[2] := FourierTransform[heat, x, z];

Out[2] = FourierTransform[u^(0,1)[x, t], x, z] + z^2/4 FourierTransform[u[x, t], x, z];

Further define:

In[3] := U[z_, t_] := FourierTransform[u[x, t], x, z];

Define the initial condition:

In[4] := f[x_] := Piecewise[{{10, -1 < x < 1}, {0, x >= 1}, {0, x <= -1}}];

In[5] := eq = D[U[z, t], {t, 1}] + z^2/4 U[z, t];
Next find the general solution of

\frac{dU(z, t)}{dt} + \frac{z^2}{4} U(z, t) = 0

In[6] := gensol = DSolve[eq == 0, U[z, t], t]

Out[6] := {{U[z, t] -> e^(-t z^2/4) C[1]}}

In[7] := Sol = U[z, t] /. gensol[[1]];

From the initial condition we find the constant C[1]:

In[8] := IC = FourierTransform[f[x], x, z];

Out[8] = 10 Sqrt[2/Pi] Sin[z]/z

In[9] := (Sol /. {t -> 0})

Out[9] = C[1]

so C[1] = 10 Sqrt[2/Pi] Sin[z]/z, and

In[10] := Final = Sol /. {C[1] -> 10 Sqrt[2/Pi] Sin[z]/z}

Out[10] = 10 e^(-t z^2/4) Sqrt[2/Pi] Sin[z]/z;

To find the inverse Fourier transform we use the convolution theorem. First define:

In[11] := g[x_, t_] := InverseFourierTransform[10 Sqrt[2/Pi] Sin[z]/z, z, x];

In[12] := h[x_, t_] := InverseFourierTransform[e^(-t z^2/4), z, x];

The solution u(x, t) is obtained by

In[13] := u[x_, t_] := 1/Sqrt[2 Pi] Convolve[h[y, t], g[y, t], y, x];

In[14] := FullSimplify[%]

Out[14] = 5 (Erf[(1 + x)/Sqrt[t]] - Erf[(-1 + x)/Sqrt[t]])
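As a cross-check of the closed form (taking the initial value 10 from the problem statement, so the amplitude of the answer is 5), the result can be compared with a direct numerical convolution of the heat kernel for c² = 1/4 with the initial data; the Python below is purely illustrative:

```python
import math

def u_exact(x, t):
    # closed form: 5*(erf((1+x)/sqrt(t)) - erf((x-1)/sqrt(t)))
    return 5 * (math.erf((1 + x) / math.sqrt(t)) - math.erf((x - 1) / math.sqrt(t)))

def u_numeric(x, t, n=4000):
    # u(x,t) = int_{-1}^{1} 10 * (1/sqrt(pi t)) * exp(-(x-xi)^2/t) dxi
    # (1D heat kernel with c = 1/2, midpoint rule)
    h = 2.0 / n
    return sum(10 / math.sqrt(math.pi * t)
               * math.exp(-(x - (-1 + (i + 0.5) * h))**2 / t)
               for i in range(n)) * h

for x in (0.0, 0.5, 2.0):
    print(abs(u_exact(x, 0.8) - u_numeric(x, 0.8)))  # all ~0
```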
CHAPTER 7

LAPLACE AND POISSON EQUATIONS

The purpose of this chapter is to study the two dimensional equation

\Delta u(x, y) \equiv u_{xx}(x, y) + u_{yy}(x, y) = -f(x, y), \quad (x, y) \in \Omega \subseteq \mathbb{R}^2,

and its higher dimensional version

\Delta u \equiv u_{xx} + u_{yy} + u_{zz} = -f(x, y, z), \quad (x, y, z) \in \Omega \subseteq \mathbb{R}^3.

When f ≡ 0 in the domain Ω the homogeneous equation is called the Laplace equation. The nonhomogeneous equation is usually called the Poisson equation.
The Laplace equation is one of the most important equations in mathemat-
ics, physics and engineering. This equation is important in many applications
and describes many physical phenomena, e.g., gravitational and electromag-
netic potentials, and fluid flows. The Laplace equation can also be interpreted
as a two or three dimensional heat equation when the temperature does not
change in time, in which case the corresponding solution is called the steady-
state solution. The Laplace equation plays a big role in several mathematical
disciplines such as complex analysis, harmonic analysis and Fourier analysis.
The Laplace equation is an important representative of the very large class of
elliptic partial differential equations.
The first section of this chapter is an introduction, in which we state and
prove several important properties of the Laplace equation, such as the maxi-
mum principle and the uniqueness property. The fundamental solution of the
Laplace equation is discussed in this section. In the next several sections we
will apply the Fourier method of separation of variables for constructing the
solution of the Laplace equation in rectangular, polar and spherical coordi-
nates. In the last section of this chapter we will apply the Fourier and Hankel
transforms to solve the Laplace equation.

7.1 The Fundamental Solution of the Laplace Equation.


Let Ω be a domain in the plane R2 or in the space R3 . As usual, we denote
the boundary of Ω by ∂Ω. The following two boundary value problems will
be investigated throughout this chapter:
The first problem, called the Dirichlet boundary value problem, is to find a
function u(x), x ∈ Ω with the properties
    (D)   ∆ u(x) = −F(x),   x ∈ Ω,
          u(z) = f(z),       z ∈ ∂Ω,

384 7. LAPLACE AND POISSON EQUATIONS

where F and f are given functions.


The second problem, called the Neumann boundary value problem, is to
find a function u(x), x ∈ Ω with the properties
    (N)   ∆ u(x) = −F(x),   x ∈ Ω,
          un(z) = g(z),      z ∈ ∂Ω,

where F and g are given functions, and un is the derivative of the function
u in the direction of the outward unit normal vector n to the boundary ∂Ω.
The following theorem, the Maximum Principle for the Laplace equation,
holds for either of the above two problems in any dimension; we will prove it
for the two dimensional homogeneous Dirichlet boundary value problem. The
proof for the three dimensional Laplace equation is the same.
Theorem 7.1.1. Let Ω ⊂ R² be a bounded domain with boundary ∂Ω and
let Ω̄ = Ω ∪ ∂Ω. Suppose that u(x, y) is a nonconstant, continuous function
on Ω̄ and that u(x, y) satisfies the Laplace equation
on Ω and that u(x, y) satisfies the Laplace equation

(7.1.1) uxx (x, y) + uyy (x, y) = 0, (x, y) ∈ Ω.

Then, u(x, y) achieves its largest, and also its smallest value on the boundary
∂Ω.
Proof. We will prove the first assertion. The second assertion is obtained from
the first assertion, applied to the function −u(x, y).
For any ϵ > 0 consider the function

w(x, y) = u(x, y) + ϵ(x2 + y 2 ).

Then, from (7.1.1) we obtain

(7.1.2)   wxx(x, y) + wyy(x, y) = 4ϵ > 0,   (x, y) ∈ Ω.

Since the function w(x, y) is continuous on the closed bounded region Ω̄, w
achieves its maximum value at some point (x0, y0) ∈ Ω̄. If (x0, y0) ∈ Ω,
then from calculus of functions of several variables it follows that

    wx(x0, y0) = wy(x0, y0) = 0,   wxx(x0, y0) ≤ 0,   wyy(x0, y0) ≤ 0,

which contradicts (7.1.2). Therefore, (x0, y0) ∈ ∂Ω.


Let

    A = max{u(x, y) : (x, y) ∈ ∂Ω},   B = max{x² + y² : (x, y) ∈ ∂Ω}.

Then, from u(x, y) ≤ w(x, y) on Ω̄ and ∂Ω ⊆ Ω̄ it follows that


    max_{(x,y)∈Ω̄} u(x, y) ≤ max_{(x,y)∈Ω̄} w(x, y) = max_{(x,y)∈∂Ω} w(x, y)
                           ≤ max_{(x,y)∈∂Ω} u(x, y) + ϵ max_{(x,y)∈∂Ω} (x² + y²) = A + ϵB.

7.1 THE FUNDAMENTAL SOLUTION OF THE LAPLACE EQUATION               385

Thus,
    max_{(x,y)∈Ω̄} u(x, y) ≤ A + ϵB,

and since ϵ > 0 was arbitrary we have

    max_{(x,y)∈Ω̄} u(x, y) = A. ■

With this theorem, it is very easy to establish the following uniqueness


result.
Corollary 7.1.1. If Ω ⊂ R2 is a bounded domain, then the Dirichlet problem
(D) has a unique solution.
Proof. If u1 and u2 are two solutions of the Dirichlet problem (D), then the
function u = u1 − u2 is a solution of the problem

    ∆ u(x, y) = 0,   (x, y) ∈ Ω,
    u(x, y) = 0,     (x, y) ∈ ∂Ω.

By Theorem 7.1.1, the function u achieves its maximum and minimum values
on the boundary ∂Ω. Since these boundary values are zero, u must be zero
in Ω. ■

Remark. A function u with continuous second order partial derivatives in


Ω that satisfies the Laplace equation

∆ u(x) = 0, x∈Ω

is called harmonic in Ω.
The following are some properties of harmonic functions which can be easily
verified. See Exercise 1 of this section.
(a) A finite linear combination of harmonic functions is also a harmonic
function.

(b) The real and imaginary parts of any analytic function in a domain
Ω ⊆ R2 are harmonic functions in Ω.

(c) If f(z) is an analytic function in a domain Ω ⊆ R², then ln | f(z) | is

a harmonic function in that part of Ω where f(z) ̸= 0.

(d) Any harmonic function in a domain Ω is infinitely differentiable in


that domain.

Example 7.1.1. Show that the following functions are harmonic in the in-
dicated domain Ω.
(a) u(x, y) = x2 − y 2 , Ω = R2 .

(b) u(x, y) = ex sin y, Ω = R2 .

(c) u(x, y) = ln (x2 + y 2 ), Ω = R2 \ {(0, 0)}.

(d) u(x, y, z) = 1/√(x² + y² + z²),   Ω = R³ \ {(0, 0, 0)}.

Solution. (a) u(x, y) is the real part of the function z², z = x + iy, which
is analytic in the whole plane.

(b) u(x, y) is the real part of the function ez, z = x + iy, which is
analytic in the whole plane.

(c) u(x, y) = 2 ln |z| is twice the real part of (any local analytic branch
of) ln z, z = x + iy, in R² \ {(0, 0)}; since harmonicity is a local property,
u is harmonic there.

(d) We write the function u(x, y, z) in the form

    u = 1/r,   r = √(x² + y² + z²).

Using the chain rule we obtain

    ux = −x/r³,   uy = −y/r³,   uz = −z/r³,

    uxx = −(r³ − 3x²r)/r⁶,   uyy = −(r³ − 3y²r)/r⁶,   uzz = −(r³ − 3z²r)/r⁶.

Therefore,

    ∆ u(x, y, z) = −(r³ − 3x²r)/r⁶ − (r³ − 3y²r)/r⁶ − (r³ − 3z²r)/r⁶
                 = −(3r³ − 3(x² + y² + z²)r)/r⁶ = −(3r³ − 3r³)/r⁶ = 0.  ■
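Parts (a) through (d) are easy to cross-check numerically; the following is our own Python sketch (not from the book), using standard five- and seven-point finite-difference Laplacians that should be near zero at any point of the stated domains:

```python
import math

def lap2(u, x, y, h=1e-4):
    # five-point approximation of u_xx + u_yy
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4*u(x, y))/h**2

def lap3(u, x, y, z, h=1e-4):
    # seven-point approximation of u_xx + u_yy + u_zz
    return (u(x + h, y, z) + u(x - h, y, z) + u(x, y + h, z) + u(x, y - h, z)
            + u(x, y, z + h) + u(x, y, z - h) - 6*u(x, y, z))/h**2

r_a = lap2(lambda x, y: x*x - y*y, 0.5, 0.3)                              # (a)
r_b = lap2(lambda x, y: math.exp(x)*math.sin(y), 0.5, 0.3)                # (b)
r_c = lap2(lambda x, y: math.log(x*x + y*y), 0.7, -0.4)                   # (c)
r_d = lap3(lambda x, y, z: 1/math.sqrt(x*x + y*y + z*z), 1.0, 2.0, 2.0)   # (d)
```

All four residuals are at the level of the discretization and rounding error.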

The following mean value property is important in the study of the Laplace
equation. We will prove it only for the two dimensional case since the proof
for higher dimensions is very similar.
Theorem 7.1.2. Mean Value Property. If u(x, y) is a harmonic function
in a domain Ω ⊆ R², then for any point (a, b) ∈ Ω

    u(a, b) = (1/(2πr)) ∫_{∂Dr(z)} u(x, y) ds = (1/(πr²)) ∫∫_{Dr(z)} u(x, y) dx dy,

where Dr = Dr (z) is any open disc with its center at the point z = (a, b)
and radius r such that Dr (a, b) ⊆ Ω.
Proof. Let Dϵ(z) be the open disc with its center at the point z = (a, b)
and radius ϵ < r and let Aϵ = Dr(z) \ Dϵ(z). Let us consider the harmonic
function

    v(x, y) = ln ( 1/√((x − a)² + (y − b)²) )

in the annulus Aϵ. See Figure 7.1.1.

Figure 7.1.1

We will apply Green's formula (see Appendix E):

    ∫_{∂Aϵ} ( u vn − v un ) ds = ∫∫_{Aϵ} ( u ∆v − v ∆u ) dx dy.

Since u and v are harmonic in Aϵ, from the above formula we have

    ∫_{∂Aϵ} u vn ds = ∫_{∂Aϵ} v un ds.

Applying Green's formula again, now for u and v ≡ 1 in the discs Dϵ(z)
and Dr(z), we obtain

    ∫_{∂Dϵ} un ds = ∫_{∂Dr} un ds = 0,

and since v is constant on each of these two circles, it follows that

    ∫_{∂Aϵ} u vn ds = 0.

Computing vn on the two circles and passing to polar coordinates, from the
above we have

    ∫_0^{2π} u(a + ϵ cos φ, b + ϵ sin φ) dφ = ∫_0^{2π} u(a + r cos φ, b + r sin φ) dφ.

Taking the limit as ϵ → 0 on both sides of the above equation and using the
fact that u is a continuous function, it follows that

    2π u(a, b) = (1/r) ∫_{∂Dr} u ds,

which is the first part of the assertions. The second assertion is left as an
exercise. ■
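The circle average in Theorem 7.1.2 is easy to check numerically for a concrete harmonic function; this is our own sketch (the trapezoid rule on a smooth periodic integrand is essentially exact):

```python
import math

u = lambda x, y: math.exp(x)*math.sin(y)   # harmonic in the plane
a, b, r, M = 0.3, 0.4, 0.5, 2000

# average of u over the circle of radius r centered at (a, b)
avg = sum(u(a + r*math.cos(2*math.pi*k/M),
            b + r*math.sin(2*math.pi*k/M)) for k in range(M))/M
```

The computed average agrees with u(a, b) to machine precision.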

The converse of the mean value property for harmonic functions is also
true. Even though the property is true for any dimensional domains we will
state and prove it for the three dimensional case.
Theorem 7.1.3. If u is a twice continuously differentiable function on a
domain Ω ⊆ R³ and satisfies the mean value property

    u(x) = (3/(4πr³)) ∫_{B(x,r)} u(y) dy = (1/(4πr²)) ∫_{∂B(x,r)} u(y) dS(y),

for every ball B(x, r) with the property B(x, r) ⊂ Ω, then u is harmonic in
Ω.
Proof. Suppose, to the contrary, that u is not harmonic in Ω. Then ∆u(x0) ̸=
0 for some x0 ∈ Ω. We may assume that ∆u(x0) > 0. Since u is a
twice continuously differentiable function in Ω there exists an r0 > 0 such
that ∆u(x) > 0 for every x ∈ B(x0, r) and every r < r0. Now, define the
function

    f(r) = (1/(4πr²)) ∫_{∂B(x0,r)} u(y) dS(y) = (1/(4π)) ∫_{∂B(0,1)} u(x0 + rz) dS(z),   r < r0.

Because u satisfies the mean value property we have f(r) = u(x0). Thus,
f′(r) = 0. On the other hand, differentiating under the integral sign and
applying the Gauss–Ostrogradski formula (on ∂B(0,1) the outward normal at
z is z itself) we have

    f′(r) = (1/(4π)) ∫_{∂B(0,1)} ( grad u(x0 + rz) · z ) dS(z)
          = (r/(4π)) ∫_{B(0,1)} ∆u(x0 + rz) dz > 0,

which is a contradiction. ■

Now, we will discuss the fundamental solution of the Laplace equation.


Consider the Laplace equation in the whole plane without the origin.

(7.1.3) uxx (x, y) + uyy (x, y) = 0, (x, y) ∈ R2 \ {(0, 0)}.



We will find a solution u(x, y) of (7.1.3) of the form

    u(x, y) = f(r),   r = √(x² + y²),

where f(r) is a twice differentiable function which will be determined.
Using the chain rule we have

    ux(x, y) = (x/r) f′(r),   uy(x, y) = (y/r) f′(r),   (x, y) ̸= (0, 0),

    uxx(x, y) = (1/r) f′(r) − (x²/r³) f′(r) + (x²/r²) f′′(r),

    uyy(x, y) = (1/r) f′(r) − (y²/r³) f′(r) + (y²/r²) f′′(r),   (x, y) ̸= (0, 0),

and so the Laplace equation becomes

    f′′(r) + (1/r) f′(r) = 0,   r ̸= 0.

The general solution of the last equation is given by

    f(r) = C1 ln r + C2,   r > 0,

where C1 and C2 are arbitrary constants. For a particular choice of these
constants, the function

(7.1.4)   Φ(x, y) = −(1/(4π)) ln (x² + y²)

is called the fundamental solution or Green function of the two dimensional
Laplace equation (7.1.3) in R² \ {(0, 0)}.
For the three dimensional Laplace equation in the whole space without the
origin

(7.1.5) ∆ u(x, y, z) = 0, (x, y, z) ∈ R3 \ {(0, 0, 0)},

working similarly as in the two dimensional Laplace equation we obtain that


the fundamental solution of (7.1.5) is given by

(7.1.6)   Φ(x, y, z) = (1/(4π)) · 1/√(x² + y² + z²).

The significance of the fundamental solution of the Laplace equation stems
from the following two integral representation theorems.

Theorem 7.1.4. If F is a twice continuously differentiable function on Rⁿ
(n = 2, 3) which vanishes outside a closed and bounded subset of Rⁿ, then

    u(x) = ∫_{Rⁿ} Φ(x, y) F(y) dy

is the unique solution of the Poisson equation

    ∆ u(x) = −F(x),   x ∈ Rⁿ,

where Φ(·, ·) is the fundamental solution of the Laplace equation on Rⁿ \ {0},
given by

    Φ(x, y) = (1/(2π)) ln ( 1/|x − y| ),   x = (x, y),    if n = 2,
    Φ(x, y) = (1/(4π)) · 1/|x − y|,        x = (x, y, z),  if n = 3.

Theorem 7.1.5. Let Ω be a domain in Rⁿ, n = 2, 3, with a smooth bound-
ary ∂Ω. If u(x) is a continuously differentiable function on Ω̄ = Ω ∪ ∂Ω and
harmonic in Ω, then

    u(x) = ∫_{∂Ω} [ Φ(x, y) un(y) − u(y) Φ_{n(y)}(x, y) ] dS(y),   x ∈ Ω,

where Φ(·, ·) is the fundamental solution of the Laplace equation on Rⁿ \ {0},
given by

    Φ(x, y) = (1/(2π)) ln ( 1/|x − y| ),   x = (x, y),    if n = 2,
    Φ(x, y) = (1/(4π)) · 1/|x − y|,        x = (x, y, z),  if n = 3,

and un(y) and Φ_{n(y)}(x, y) are the derivatives of u and Φ, respectively, in the
direction of the unit outward normal vector n to the boundary ∂Ω at the boundary
point y ∈ ∂Ω.
Proof. First, from the Gauss–Ostrogradski formula (see Appendix E) applied
to two harmonic functions u and v in the domain Ω we obtain

(7.1.7)   ∫_{∂Ω} [ v(y) u_{n(y)}(y) − u(y) v_{n(y)}(y) ] dS(y) = 0.

For a fixed point x ∈ Ω and sufficiently small ϵ > 0 consider the closed
ball Bϵ(x) = { y ∈ Rⁿ : | y − x | ≤ ϵ } ⊂ Ω. Let Ωϵ = Ω \ Bϵ(x) (remove the
closed ball Bϵ(x) from Ω). Since Φ(x, y) is harmonic in Ωϵ we can apply
Equation (7.1.7) to the functions u(y) and v(y) = Φ(x, y) in the domain
Ωϵ with boundary ∂Ωϵ = ∂Ω ∪ { y : | y − x | = ϵ } and obtain

(7.1.8)   ∫_{∂Ω} [ Φ(x, y) un(y) − u(y) Φ_{n(y)}(x, y) ] dS(y)
        = ∫_{ y : |y−x|=ϵ } [ Φ(x, y) un(y) − u(y) Φ_{n(y)}(x, y) ] dS(y).

On the sphere { y : | y − x | = ϵ } we have

    Φ(x, y) = −(1/(2π)) ln ϵ   if n = 2,
    Φ(x, y) = 1/(4πϵ)          if n = 3,

and thus,

    Φ_{n(y)}(x, y) = −1/(2πϵ)    if n = 2,
    Φ_{n(y)}(x, y) = −1/(4πϵ²)   if n = 3.
From the above fact and the continuity of the function u(x) in Ω it follows
that

    lim_{ϵ→0} ∫_{ y : |y−x|=ϵ } [ u(y) − u(x) ] Φ_{n(y)}(x, y) dS(y) = 0,

while ∫_{ y : |y−x|=ϵ } Φ_{n(y)}(x, y) dS(y) = −1 for every ϵ and the term
containing Φ(x, y) un(y) tends to 0. Therefore, the right hand side of (7.1.8)
approaches u(x) as ϵ → 0, which concludes the proof. ■
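Theorem 7.1.5 can be verified numerically on the unit disc for, say, the harmonic function u = x² − y². This is our own sketch: Φ and Φ_{n(y)} are written out for n = 2, and the boundary integral is taken with the trapezoid rule in the angle:

```python
import math

def u(x, y):
    return x*x - y*y                # harmonic in the plane

a, b = 0.3, 0.2                     # interior point x = (a, b) of the unit disc
M = 4000
total = 0.0
for k in range(M):
    th = 2*math.pi*k/M
    y1, y2 = math.cos(th), math.sin(th)       # boundary point y; here n(y) = y
    d1, d2 = y1 - a, y2 - b
    r2 = d1*d1 + d2*d2
    Phi  = -math.log(r2)/(4*math.pi)          # (1/2π) ln(1/|x−y|)
    un   = 2*y1*y1 - 2*y2*y2                  # grad u · n on the unit circle
    Phin = -(d1*y1 + d2*y2)/(2*math.pi*r2)    # normal derivative of Φ in y
    total += (Phi*un - u(y1, y2)*Phin)*(2*math.pi/M)
```

The boundary integral reproduces u(0.3, 0.2) = 0.05.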

Now, we will introduce the Green function for a domain.


Let Ω be a domain with smooth boundary ∂Ω. The Green function
G(x, y) for the domain Ω is defined by

G(x, y) = Φ(x, y) + h(x, y), x, y ∈ Ω,

where h(x, y) is a harmonic function on Ω in each variable x, y ∈ Ω


separately and it is such that the function G(· , ·) satisfies the boundary
condition
G(x, y) = 0

if at least one of the points x and y is on the boundary ∂Ω.


Using the Divergence Theorem of Gauss–Ostrogradski, just as in Theorem
7.1.5, it can be shown that the solutions of the Dirichlet and Neumann bound-
ary value problems (D) and (N ), respectively, are given by the following two
theorems.

Theorem 7.1.6. Let Ω be a domain in R² or R³ with a smooth boundary
∂Ω. If F(x) is a continuous function with continuous first order partial
derivatives on Ω and f(z) is a continuous function on ∂Ω, then

    u(x) = ∫_Ω G(x, y) F(y) dy − ∫_{∂Ω} f(z) Gn(x, z) dS(z),   x ∈ Ω

is the unique solution of the Dirichlet boundary value problem (D)

    ∆ u(x) = −F(x),   x ∈ Ω,
    u(z) = f(z),      z ∈ ∂Ω.

Theorem 7.1.7. Let Ω be a domain in R² or R³ with a smooth boundary
∂Ω. If F(x) is a continuous function with continuous first order partial
derivatives on Ω and g(z) is a continuous function on ∂Ω, then

    u(x) = ∫_Ω G(x, y) F(y) dy − ∫_{∂Ω} g(z) G(x, z) dS(z) + C,   x ∈ Ω

is a solution of the Neumann boundary value problem (N)

    ∆ u(x) = −F(x),   x ∈ Ω,
    un(x) = g(x),     x ∈ ∂Ω,

where C is any numerical constant, provided that the following compatibility
condition is satisfied:

    ∫_Ω F(y) dy = −∫_{∂Ω} g(z) dS(z).

For some basic properties of the Green function see Exercise 2 of this
section.
Now, let us take a few examples.
Example 7.1.2. Verify that the expression

    G(x, y) = Φ(x, y) − Φ( |x| y, x/|x| )

is actually the Green function for the unit ball | x | < 1 in Rⁿ for n = 2, 3,
where Φ(·, ·) is the fundamental solution of the Laplace equation.
Solution. Let us consider the function h(x, y), defined by

    h(x, y) = Φ( |x| y, x/|x| ).

It is straightforward to verify that for every x and y the following identities
hold:

(7.1.9)   | |x| y − x/|x| | = |y| · | x − y/|y|² |,
          | |x| y − x/|x| | = |x| · | y − x/|x|² |,

and

(7.1.10)  | |x| y − x/|x| | = | |y| x − y/|y| |.

From the first identity in (7.1.9) it follows that the function h(x, y) is har-
monic with respect to x in the unit ball, while from the second identity in
(7.1.9) we have h(x, y) is harmonic with respect to y in the unit ball. From
the definition of h(x, y) and identity (7.1.10) we have

h(x, y) = Φ(x, y), when either | x |= 1 or | y |= 1.

Therefore, the above defined function G(· , ·) satisfies all the properties for a
function to be a Green function.
If we take n = 2, for example, then from the definition of the fundamental
solution of the Laplace equation in R² \ {0} and the above discussion, it
follows that the Green function in the unit disc of the plane R² is
given by

    G(x, y) = −(1/(2π)) ln ( |x − y| / | |x| y − x/|x| | ),   | x | < 1,  | y | < 1.
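The two defining properties of this Green function, symmetry and vanishing when one argument reaches the boundary, can be checked numerically with complex arithmetic; this is our own sketch, not the book's:

```python
import cmath
import math

def G_disc(x, y):
    # G(x, y) = -(1/2π) ln( |x − y| / | |x| y − x/|x| | ), points as complex numbers
    num = abs(x - y)
    den = abs(abs(x)*y - x/abs(x))
    return -math.log(num/den)/(2*math.pi)

x = 0.3 + 0.2j
y = -0.1 + 0.5j
on_boundary = cmath.exp(0.7j)            # a point with |y| = 1
g_boundary = G_disc(x, on_boundary)      # should vanish
g_symmetry = G_disc(x, y) - G_disc(y, x) # should vanish, by identity (7.1.10)
```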

Example 7.1.3. Find the Green function for the upper half plane.
Solution. For a point x = (x, y), define its reflection by x∗ = (x, −y). If

U + = {x = (x, y) ∈ R2 : y > 0}

is the upper half plane, then define the function g(x, y) on U + by

g(x, y) = Φ(x∗ , y), x, y ∈ U + ,

where Φ(·, ·) is the fundamental solution of the Laplace equation in R2 \ {0}.


Using the obvious identity

| x − y∗ |=| y − x∗ |, x, y ∈ R2

and the facts that x* ∉ U+ if x ∈ U+ (and similarly for y), it follows that

    g(x, y) = g(y, x),   x, y ∈ U+,

and also that g(x, y) is harmonic with respect to x in U+ for every fixed
y ∈ U+ and harmonic with respect to y in U+ for every fixed x ∈ U+.
Further, from the definitions of the functions g(x, y) and Φ(x, y) it follows
that g(x, y) = Φ(x, y) for every x ∈ ∂U+ or every y ∈ ∂U+.
Therefore,

    G(x, y) = Φ(x, y) − Φ(x*, y),   x, y ∈ U+

is the Green function for the domain U+. ■
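The same two properties hold for this half-plane Green function and are equally easy to check; again this is our own sketch, with points written as pairs:

```python
import math

def Phi2(p, q):
    # fundamental solution in the plane: (1/2π) ln(1/|p − q|)
    return -math.log(math.hypot(p[0] - q[0], p[1] - q[1]))/(2*math.pi)

def G_half(x, y):
    xs = (x[0], -x[1])                 # reflection x* of x across the x-axis
    return Phi2(x, y) - Phi2(xs, y)

g_boundary = G_half((0.5, 1.0), (2.0, 0.0))   # y on the boundary, should vanish
g_symmetry = G_half((0.5, 1.0), (-1.0, 2.0)) - G_half((-1.0, 2.0), (0.5, 1.0))
```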

In the next several examples we apply the above results to harmonic functions
in domains in R² or R³, giving important integral representations of
harmonic functions in such domains.
Example 7.1.4. If u and v are two harmonic functions in a domain Ω in
R² or R³ with a smooth boundary ∂Ω and continuously differentiable on
Ω̄ = Ω ∪ ∂Ω, then

    ∫_{∂Ω} u(z) vn(z) ds(z) = ∫_{∂Ω} v(z) un(z) ds(z),

where un and vn are the derivatives of the functions u and v, respectively,
in the direction of the outward unit normal vector n to the boundary ∂Ω.
Proof. For n = 3 apply the Gauss–Ostrogradski formula (see Appendix E)

    ∫∫∫_Ω div F(x) dx = ∫∫_{∂Ω} F(z) · n(z) dS(z)

to F(x) = u(x) grad v(x) − v(x) grad u(x) and use the fact that u and v
are harmonic in Ω to obtain the result. ■

Example 7.1.5. If u is a harmonic function in a domain Ω in R² or R³
with a smooth boundary ∂Ω and continuously differentiable on Ω̄ = Ω ∪ ∂Ω,
then

    u(x) = −∫_{∂Ω} Gn(x, z) f(z) ds(z),   x ∈ Ω

is the solution of the Dirichlet boundary value problem

    ∆ u(x) = 0,    x ∈ Ω,
    u(z) = f(z),   z ∈ ∂Ω.

Proof. Apply Theorem 7.1.6. ■



Exercises for Section 7.1.

1. Show that the following are true.

(a) Any finite linear combination of harmonic functions is a harmonic


function.

(b) The real and imaginary parts of any analytic function in a domain
Ω ⊆ R2 are harmonic functions in Ω.

(c) If f(z) is an analytic function in a domain Ω, then ln | f(z) | is a

harmonic function in the set of points z ∈ Ω where f(z) ̸= 0.

(d) Any harmonic function in a domain Ω is infinitely differentiable


in that domain.

2. Show that the Green function G for any domain Ω in R2 or R3 is


symmetric:
G(x, y) = G(y, x), x, y ∈ Ω.
Hint: Use the Gauss–Ostrogradski formula.

3. Find the maximum value of the harmonic function u(x, y) = x2 − y 2


in the closed unit disc x2 + y 2 ≤ 1.

4. Find the harmonic function u(x, y) in the upper half plane {(x, y) :
y > 0}, subject to the boundary condition
x
u(x, 0) = .
x2 + 1

5. Find the harmonic function u(x, y, z), (x, y, z) ∈ R3 , in the lower half
space {(x, y, z) : z < 0}, subject to the boundary condition

    u(x, y, 0) = 1/(1 + x² + y²)^{3/2}.

6. Find the Green function of the right half plane RP = {(x, y) : x > 0}
and solve the Poisson equation

    ∆ u(x) = f(x),   x ∈ RP,
    u(x) = h(x),     x ∈ ∂RP.

7. Find the Green function of the upper half plane UP = {(x, y) : y > 0}
subject to the boundary condition

    Gn(x, y) = 0,   (x, y) ∈ ∂UP

and solve the Poisson equation

    ∆ u(x) = f(x),   x ∈ UP,
    un(x) = h(x),    x ∈ ∂UP.

8. Using the Green function for the unit ball in Rⁿ, n = 2, 3, derive the
Poisson formula

    u(x) = ∫_{|y|<1} G(x, y) f(y) dy + (1/ωn) ∫_{|y|=1} ( (1 − |x|²) / |y − x|ⁿ ) g(y) dS(y),

where ωn is the area of the unit sphere in Rⁿ, which gives the solution
of the Poisson boundary value problem

    ∆ u(x) = −f(x),   |x| < 1,
    u(x) = g(x),      |x| = 1.

9. Using the Green function for the disc DR = {x = (x, y) : x² + y² < R²},
solve the following Poisson equation with Neumann boundary values

    ∆ u(x) = −f(x),   x ∈ DR,
    un(x) = g(x),     |x| = R,

provided that f and g satisfy the compatibility condition

    ∫_{DR} f(y) dy + ∫_{∂DR} g(z) dS(z) = 0.

10. In the domain {(x, y) : x² + y² ≤ 1, y ≥ 0} solve the boundary value
problem

    ∆ u(x, y) = 0,   x² + y² < 1,  y > 0,
    u(x, y) = 1 for x² + y² = 1,   u(x, y) = 0 for y = 0.

Hint: Use the Green function.


7.2 LAPLACE AND POISSON EQUATIONS ON RECTANGULAR DOMAINS 397

7.2 Laplace and Poisson Equations on Rectangular Domains.


In this section we will discuss the Separation of Variables Method for solv-
ing the Laplace and Poisson equations. We will consider the Laplace and
Poisson equations with homogeneous and nonhomogeneous Dirichlet and Neu-
mann boundary conditions on rectangular domains. We already discussed this
method in the previous two chapters when we solved the wave and heat equa-
tions. The idea and procedure of the separation of variables method is the
same and therefore we will omit many details in our discussions.

1°. Laplace Equation on a Rectangle. First let us consider Dirichlet boundary
conditions.
Consider the following Laplace boundary value problem with Dirichlet
boundary values:


    (U)   uxx(x, y) + uyy(x, y) = 0,   0 < x < a,  0 < y < b,
          u(x, 0) = f1(x),  u(x, b) = f2(x),   0 < x < a,
          u(0, y) = g1(y),  u(a, y) = g2(y),   0 < y < b.

We split the above problem (U) into the two problems

    (V)   vxx(x, y) + vyy(x, y) = 0,   0 < x < a,  0 < y < b,
          v(x, 0) = f1(x),  v(x, b) = f2(x),   0 < x < a,
          v(0, y) = v(a, y) = 0,   0 < y < b,

and

    (W)   wxx(x, y) + wyy(x, y) = 0,   0 < x < a,  0 < y < b,
          w(x, 0) = w(x, b) = 0,   0 < x < a,
          w(0, y) = g1(y),  w(a, y) = g2(y),   0 < y < b,

with homogeneous Dirichlet boundary values on two parallel sides. If v and
w are the solutions of problems (V) and (W), respectively, then u = v + w
is the solution of problem (U).
For symmetry reasons, it is enough to solve either problem (V) or
(W). Let us consider problem (V).
If we assume that the solution of (V ) is of the form v(x, y) = X(x)Y (y),
then after substituting the derivatives into the Laplace equation and using the
homogeneous boundary conditions in (V ), we have the eigenvalue problem

(7.2.1) X ′′ (x) + λ2 X(x) = 0, X(0) = X(a) = 0

and the differential equation

(7.2.2) Y ′′ (y) − λ2 Y (y) = 0.


398 7. LAPLACE AND POISSON EQUATIONS

The eigenvalues and corresponding eigenfunctions of (7.2.1) are

    λn = nπ/a,   Xn(x) = sin (nπx/a),   n = 1, 2, . . ..

For these eigenvalues λn, the general solution of (7.2.2) is given by

    Yn(y) = An cosh λn y + Bn sinh λn y.

Therefore,

    v(x, y) = Σ_{n=1}^{∞} [ An cosh λn y + Bn sinh λn y ] sin λn x.

The above coefficients An and Bn are determined from the boundary
conditions in problem (V):

    f1(x) = v(x, 0) = Σ_{n=1}^{∞} An sin λn x,

    f2(x) = v(x, b) = Σ_{n=1}^{∞} [ An cosh λn b + Bn sinh λn b ] sin λn x.

The above two series are Fourier sine series and therefore,

(7.2.3)   An = (2/a) ∫_0^a f1(x) sin λn x dx,

          An cosh λn b + Bn sinh λn b = (2/a) ∫_0^a f2(x) sin λn x dx,

from which An and Bn are determined.


Let us take an example.
Example 7.2.1. Find the solution u(x, y) of the Laplace equation in the
square 0 < x < π, 0 < y < π, subject to the boundary conditions

    u(x, 0) = f(x),  u(x, π) = f(x),   0 < x < π,
    u(0, y) = f(y),  u(π, y) = f(y),   0 < y < π,

where

    f(x) = x for 0 < x < π/2,   f(x) = π − x for π/2 < x < π.
Display the plot of the solution u(x, y).

Solution. Consider the following two problems:

    vxx(x, y) + vyy(x, y) = 0,   0 < x < π,  0 < y < π,
    v(x, 0) = f(x),  v(x, π) = f(x),   0 < x < π,
    v(0, y) = v(π, y) = 0,   0 < y < π,

and

    wxx(x, y) + wyy(x, y) = 0,   0 < x < π,  0 < y < π,
    w(x, 0) = w(x, π) = 0,   0 < x < π,
    w(0, y) = f(y),  w(π, y) = f(y),   0 < y < π.

It is enough to solve only the first problem, for the function v. The eigen-
values λn for this problem are λn = n and so from (7.2.3) we have

    An = (2/π) ∫_0^π f(x) sin nx dx,

    An cosh nπ + Bn sinh nπ = (2/π) ∫_0^π f(x) sin nx dx.

If we evaluate the integrals for the given function f(x) we obtain

    An = (4 sin (nπ/2))/(π n²),   An cosh nπ + Bn sinh nπ = (4 sin (nπ/2))/(π n²),

from which we obtain that the solution v(x, y) is given by

    v(x, y) = (4/π) Σ_{n=1}^{∞} ( sin (nπ/2) / (n² sinh nπ) ) [ sinh ny + sinh n(π − y) ] sin nx.

Because of symmetry reasons, we can interchange the roles of x and y and
obtain the solution of the problem for the function w:

    w(x, y) = (4/π) Σ_{n=1}^{∞} ( sin (nπ/2) / (n² sinh nπ) ) [ sinh nx + sinh n(π − x) ] sin ny.

Therefore, the solution of the original problem is given by

    u(x, y) = (4/π) Σ_{n=1}^{∞} ( sin (nπ/2) / (n² sinh nπ) ) { [ sinh ny + sinh n(π − y) ] sin nx
                                                               + [ sinh nx + sinh n(π − x) ] sin ny }.

The plot of the solution u(x, y) is displayed in Figure 7.2.1.



Figure 7.2.1
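A numerical check of the series just obtained, in our own Python sketch rather than the book's Mathematica; the hyperbolic ratio is rewritten with exponentials so that large n does not overflow:

```python
import math

def ratio(n, y):
    # [sinh(ny) + sinh(n(π − y))]/sinh(nπ), computed overflow-safely for 0 ≤ y ≤ π
    return (math.exp(n*(y - math.pi)) - math.exp(-n*(y + math.pi))
            + math.exp(-n*y) - math.exp(n*(y - 2*math.pi)))/(1 - math.exp(-2*n*math.pi))

def u(x, y, N=400):
    s = 0.0
    for n in range(1, N + 1):
        c = 4*math.sin(n*math.pi/2)/(math.pi*n*n)
        s += c*(ratio(n, y)*math.sin(n*x) + ratio(n, x)*math.sin(n*y))
    return s
```

On the edge y = 0 the partial sum reproduces the boundary data f, and by construction u is symmetric in x and y.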

Remark. The solution of the Laplace equation on a rectangle, or on any planar
or space domain with piecewise smooth boundary, subject to Neumann
boundary conditions does not always exist, and even if it exists it is not unique
(recall Theorem 7.1.7 of the previous section).
Example 7.2.2. Consider the problem


    uxx(x, y) + uyy(x, y) = 0,   0 < x < π,  0 < y < π,
    ux(0, y) = ux(π, y) = 0,   0 < y < π,
    uy(x, 0) = 1,  uy(x, π) = K,   0 < x < π,

where K is a constant. Find a solution of this problem.


Solution. Looking for a solution of the form u(x, y) = X(x)Y (y) we find that

X ′′ (x) + λX(x) = 0, X ′ (0) = X ′ (π) = 0

and
Y ′′ (y) + λY (y) = 0.
Solving the eigenvalue problem we find that eigenvalues and corresponding
eigenfunctions are

λ0 = 0, X0 (x) = 1; λn = n2 , Xn (x) = cos nx.

The corresponding solutions Y (y) are given by

Y0 (y) = A0 + B0 y, Yn (y) = An cosh ny + Bn sinh ny.

Therefore,

    u(x, y) = A0 + B0 y + Σ_{n=1}^{∞} [ An cosh ny + Bn sinh ny ] cos nx.

From the boundary condition uy(x, 0) = 1 it follows that

    B0 + Σ_{n=1}^{∞} n Bn cos nx = 1,   0 < x < π,

from which we find (using the Fourier cosine series)

B0 = 1, Bn = 0.

From the other boundary condition uy(x, π) = K it follows that

    B0 + Σ_{n=1}^{∞} [ n An sinh nπ + n Bn cosh nπ ] cos nx = K,   0 < x < π.

The left hand side of the last equation is a Fourier cosine series and so, from
the uniqueness theorem for Fourier cosine expansions, it follows that

    B0 = K,   n An sinh nπ + n Bn cosh nπ = 0,   n = 1, 2, . . ..

Thus, we must have

K = B0 = 1, An = Bn = 0, n = 1, 2, . . ..

Therefore, a solution of the given problem exists only if K = 1, in which case


we have the following infinitely many solutions

    u(x, y) = A0 + y,

where A0 is any constant.

The next example is about the Laplace equation on a rectangle with mixed
boundary conditions.
Example 7.2.3. Using the separation of variables method, solve the problem

(7.2.4) uxx (x, y) + uyy (x, y) = 0, 0 < x < a, 0 < y < b


(7.2.5) u(0, y) = 0, u(a, y) = f (y), 0 < y < b
(7.2.6) uy (x, 0) = 0, uy (x, b) = 0, 0 < x < a.

Solution. Taking u(x, y) = X(x)Y (y) and separating the variables we have

Y ′′ (y) X ′′ (x)
=− = −λ,
Y (y) X(x)

and using the homogeneous boundary conditions (7.2.5) and (7.2.6) we have

(7.2.7)   Y′′(y) + λY(y) = 0,   Y′(0) = Y′(b) = 0,

(7.2.8)   X′′(x) − λX(x) = 0,   X(0) = 0.

If we examine all possible cases for λ: λ > 0, λ < 0 and λ = 0 in the Sturm–
Liouville problem (7.2.7), we find that the eigenvalues and corresponding
eigenfunctions of (7.2.7) are

n2 π 2 nπ
λ0 = 0; Y0 (y) = 1, λn = , Yn (y) = cos y, n = 1, 2, . . ..
b2 b

For the found λn, the solutions of (7.2.8) are

    X0(x) = A0 x,   Xn(x) = An sinh (nπx/b).

Therefore,

    u(x, y) = A0 x + Σ_{n=1}^{∞} An sinh (nπx/b) cos (nπy/b).

From the boundary condition

    f(y) = u(a, y) = A0 a + Σ_{n=1}^{∞} An sinh (nπa/b) cos (nπy/b)

and the orthogonality of the functions { cos (nπy/b) : n = 0, 1, . . . } on the interval
[0, b] we find

    A0 = (1/(ab)) ∫_0^b f(y) dy,   An = ( 2 / (b sinh (nπa/b)) ) ∫_0^b f(y) cos (nπy/b) dy.
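For a concrete f these coefficient formulas are easy to exercise numerically. The following is our own sketch (names are ours) with f(y) = y, a = 1, b = 2; the inner integral is evaluated in closed form by integration by parts:

```python
import math

a, b = 1.0, 2.0
A0 = (1/(a*b))*(b*b/2)                  # (1/(ab)) ∫₀ᵇ y dy = b/(2a) here

def u_right_edge(y0, N=201):
    # u(a, y0) = A0·a + Σ An sinh(nπa/b) cos(nπy0/b); should reproduce f(y0) = y0
    s = A0*a
    for n in range(1, N + 1):
        integral = (b/(n*math.pi))**2*((-1)**n - 1)    # ∫₀ᵇ y cos(nπy/b) dy
        An = 2/(b*math.sinh(n*math.pi*a/b))*integral
        s += An*math.sinh(n*math.pi*a/b)*math.cos(n*math.pi*y0/b)
    return s
```

At an interior boundary point, say y0 = 0.7, the truncated series returns the boundary value to within the cosine-series truncation error.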

2°. Poisson Equation on a Rectangle. Now, we consider the following Poisson
boundary value problem with general Dirichlet boundary values:


    (P)   uxx(x, y) + uyy(x, y) = F(x, y),   0 < x < a,  0 < y < b,
          u(x, 0) = f1(x),  u(x, b) = f2(x),   0 < x < a,
          u(0, y) = g1(y),  u(a, y) = g2(y),   0 < y < b.

We cannot directly separate the variables, but as with the Laplace bound-
ary value problem with general Dirichlet boundary conditions, we can split
problem (P ) into the following two problems.


    (PH)   vxx(x, y) + vyy(x, y) = F(x, y),   0 < x < a,  0 < y < b,
           v(x, 0) = v(x, b) = 0,   0 < x < a,
           v(0, y) = v(a, y) = 0,   0 < y < b,

and

    (L)   wxx(x, y) + wyy(x, y) = 0,   0 < x < a,  0 < y < b,
          w(x, 0) = f1(x),  w(x, b) = f2(x),   0 < x < a,
          w(0, y) = g1(y),  w(a, y) = g2(y),   0 < y < b.
If v(x, y) is the solution of problem (P H) and w(x, y) is the solution
of problem (L), then the solution u(x, y) of problem (P ) will be u(x, y) =
v(x, y) + w(x, y).
Problem (L), being a Laplace Dirichlet boundary value problem, has been
solved in Case 1°. For problem (PH), notice first that the boundary value on
every side of the rectangle is homogeneous. This suggests that v(x, y) may
be of the form

    v(x, y) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} Fmn sin (mπx/a) sin (nπy/b),

where Fmn are coefficients that will be determined. If we substitute v(x, y)


into the Poisson equation in problem (P H) (we assume that we can differen-
tiate term by term the double Fourier series for v(x, y)) we obtain
    F(x, y) = −(π²/(a²b²)) Σ_{m=1}^{∞} Σ_{n=1}^{∞} Fmn ( b²m² + a²n² ) sin (mπx/a) sin (nπy/b).

Recognizing the above equation as the double Fourier sine series of F(x, y)
we have

    Fmn = −( 4ab / (π² (b²m² + a²n²)) ) ∫_0^a ∫_0^b F(x, y) sin (mπx/a) sin (nπy/b) dy dx.

Example 7.2.4. Solve the problem

    uxx(x, y) + uyy(x, y) = xy,   0 < x < π,  0 < y < π,
    u(x, 0) = 0,  u(x, π) = 0,   0 < x < π,
    u(0, y) = u(π, y) = 0,   0 < y < π.

Solution. In this case we have a = b = π and F(x, y) = xy. Therefore,

    Fmn = −( 4 / (π² (m² + n²)) ) ∫_0^π ∫_0^π xy sin mx sin ny dy dx

        = −( 4 / (π² (m² + n²)) ) ( ∫_0^π x sin mx dx ) ( ∫_0^π y sin ny dy )

        = −( 4 / (π² (m² + n²)) ) · ( π (−1)^{m−1} / m ) · ( π (−1)^{n−1} / n )

        = −( 4 (−1)^{m+n} ) / ( mn (m² + n²) ).

The solution of the problem is given by

    u(x, y) = −4 Σ_{m=1}^{∞} Σ_{n=1}^{∞} ( (−1)^{m+n} / ( mn (m² + n²) ) ) sin mx sin ny.
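The only nontrivial ingredient in this computation is the one dimensional integral ∫₀^π x sin mx dx = π(−1)^{m+1}/m; a quadrature sanity check, in our own Python sketch with a hypothetical helper name:

```python
import math

def trap(g, lo, hi, M=20000):
    # composite trapezoid rule
    h = (hi - lo)/M
    return h*(0.5*(g(lo) + g(hi)) + sum(g(lo + k*h) for k in range(1, M)))

checks = []
for m in (1, 2, 3):
    exact = math.pi*(-1)**(m + 1)/m          # ∫₀^π x sin(mx) dx, by parts
    approx = trap(lambda x, m=m: x*math.sin(m*x), 0.0, math.pi)
    checks.append(abs(approx - exact))
```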

3°. Laplace Equation on an Infinite Strip. Consider the boundary value
problem

    uxx(x, y) + uyy(x, y) = 0,   0 < x < ∞,  0 < y < b,
    u(x, 0) = f1(x),  u(x, b) = f2(x),   0 < x < ∞,
    u(0, y) = g(y),   | lim_{x→∞} u(x, y) | < ∞,   0 < y < b.

The solution of this problem is u(x, y) = v(x, y) + w(x, y), where v(x, y) is
the solution of the problem

 vxx (x, y) + vyy (x, y) = 0, 0 < x < ∞, 0 < y < b



 v(x, 0) = 0, v(x, b) = 0, 0 < x < ∞
(*)

 v(0, y) = g(y), 0 < y < b


 | lim v(x, y) |< ∞, 0 < y < b,
x→∞

and w(x, y) is the solution of the problem



 wxx (x, y) + wyy (x, y) = 0, 0 < x < ∞, 0 < y < b



 w(x, 0) = f1 (x), w(x, b) = f2 (x), 0 < x < ∞
(**)

 w(0, y) = 0, 0 < y < b


 | lim w(x, y) |< ∞, 0 < y < b.
x→∞

Problem (∗) can be solved by the separation of variables method. Indeed,


if v(x, y) = X(x)Y (y), then after separating the variables in the Laplace
equation and using the boundary conditions at y = 0 and y = b we obtain
the eigenvalue problem

(EY) Y ′′ (y) + λY (y) = 0, Y (0) = Y (b) = 0

and the differential equation

(EX)   X′′(x) − λX(x) = 0,   x > 0,   | lim_{x→∞} X(x) | < ∞.

The eigenvalues and corresponding eigenfunctions of problem (EY) are given
by

    λn = n²π²/b²,   Yn(y) = sin √λn y,   n = 1, 2, . . ..

The solution of the differential equation (EX) corresponding to the above
eigenvalues, in view of the fact that X(x) remains bounded as x → ∞, is
given by

    Xn(x) = e^{−√λn x}.

Therefore, the solution v(x, y) of problem (∗) is given by

    v(x, y) = Σ_{n=1}^{∞} An e^{−√λn x} sin (nπy/b),

where the coefficients An are determined from the boundary condition

    g(y) = v(0, y) = Σ_{n=1}^{∞} An sin (nπy/b)

and the orthogonality of the eigenfunctions sin (nπy/b) on the interval [0, b]:

    An = (2/b) ∫_0^b g(y) sin (nπy/b) dy.

Now, let us turn our attention to problem (∗∗). If w(x, y) = X(x)Y (y)
is the solution of problem (∗∗), then, after separating the variables, from the
Laplace equation we obtain the problem

    X′′(x) + µ²X(x) = 0,   X(0) = 0,   | lim_{x→∞} X(x) | < ∞

and the equation


Y ′′ (y) − µ2 Y (y) = 0, 0 < y < b,
where µ is any nonzero parameter.
A solution of the first problem is given by

    Xµ(x) = sin µx,

and the general solution of the second equation can be written in the special
form

    Y(y) = ( A(µ) / sinh µb ) sinh µy + ( B(µ) / sinh µb ) sinh µ(b − y).

Therefore,

    w(x, y) = sin µx [ ( A(µ) / sinh µb ) sinh µy + ( B(µ) / sinh µb ) sinh µ(b − y) ]

is a solution of the Laplace equation for any µ > 0, which satisfies all condi-
tions in problem (∗∗) except the boundary conditions at y = 0 and y = b.
Since the Laplace equation is homogeneous it follows that
    w(x, y) = ∫_0^∞ sin µx [ ( A(µ) / sinh µb ) sinh µy + ( B(µ) / sinh µb ) sinh µ(b − y) ] dµ

will also be a solution of the Laplace equation. Now, it remains to find A(µ)
and B(µ) such that the above function will satisfy the boundary conditions
at y = 0 and y = b:

    f1(x) = w(x, 0) = ∫_0^∞ B(µ) sin µx dµ,

    f2(x) = w(x, b) = ∫_0^∞ A(µ) sin µx dµ.

We recognize the last equations as Fourier sine transforms and so A(µ) and
B(µ) can be found by taking the inverse Fourier transforms. Therefore,
we have solved problem (∗∗) and, consequently, the original problem of the
Laplace equation on the semi-infinite strip with the prescribed general Dirich-
let condition is completely solved.
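The inversion step can be illustrated numerically. The sketch below (a Python illustration; the sample datum f₂(x) = e^(−x) is an assumption) approximates the Fourier sine transform coefficient (2/π)∫₀^∞ f₂(x) sin µx dx by quadrature and compares it with the known closed form (2/π)·µ/(1 + µ²) for this datum:

```python
import numpy as np

# Fourier sine coefficient used to determine A(mu) and B(mu):
#   coeff(mu) = (2/pi) * int_0^inf f2(x) sin(mu x) dx,
# which for the illustrative datum f2(x) = e^{-x} equals (2/pi) * mu/(1+mu^2).
x = np.linspace(0.0, 60.0, 60001)   # truncate the infinite range; e^{-60} ~ 0
dx = x[1] - x[0]

def sine_coeff(mu):
    w = np.exp(-x) * np.sin(mu * x)
    return (2.0 / np.pi) * (w[:-1] + w[1:]).sum() * dx / 2

mu = 1.0
print(sine_coeff(mu), (2.0 / np.pi) * mu / (1.0 + mu**2))
```

The truncation of the infinite integral is harmless here because the datum decays exponentially; for slowly decaying data a larger cutoff or a dedicated oscillatory quadrature would be needed.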
We will return to this problem again in the last section of this chapter,
where we will discuss the integral transforms for solving the Laplace and
Poisson equations.
Now we will consider an example of the Laplace equation on an unbounded
rectangular domain for which the use of the Fourier transform is not necessary.
Example 7.2.5. Solve the problem
    uxx(x, y) + uyy(x, y) = 0,    0 < x < ∞, 0 < y < π,
    u(x, 0) = 0,  u(x, π) = 0,    0 < x < ∞,
    u(0, y) = y(π − y),           0 < y < π,
    lim_{x→∞} u(x, y) = 0,        0 < y < π.

Solution. Let u(x, y) = X(x)Y (y). After separating the variables and using
the boundary conditions at y = 0 and y = π, we obtain
    Y′′(y) + λY(y) = 0,   Y(0) = Y(π) = 0

and the differential equation

    X′′(x) − λX(x) = 0,   x > 0,   lim_{x→∞} X(x) = 0.
If we solve the eigenvalue problem for Y (y) we obtain
    λn = n²,   Yn(y) = sin ny.

For these λn the general solution of the differential equation for X(x) is
given by

    Xn(x) = An e^(−√λn x) + Bn e^(√λn x).
To ensure that u(x, y) is bounded as x → ∞, we need to take Bn = 0.
Therefore,
Xn (x) = An e−nx ,
and so,

    u(x, y) = ∑_{n=1}^∞ An e^(−nx) sin ny,   0 < x < ∞, 0 < y < π.

The coefficients An are determined from the boundary condition
    y(π − y) = u(0, y) = ∑_{n=1}^∞ An sin ny,   0 < y < π
and the orthogonality of the system { sin ny : n = 1, 2, . . . } on [0, π]:
    An = (2/π) ∫₀^π y(π − y) sin ny dy = 4(1 − cos nπ)/(πn³) = { 0,         n even
                                                               { 8/(πn³),   n odd.

Therefore, the solution u(x, y) of the problem is given by

    u(x, y) = (8/π) ∑_{n=1}^∞ [ 1/(2n − 1)³ ] e^(−(2n−1)x) sin (2n − 1)y.
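A quick numerical check of this solution (a Python sketch; the truncation level N is an assumption) confirms that the series recovers the boundary data at x = 0 and decays as x → ∞:

```python
import numpy as np

# Check of Example 7.2.5: partial sums of the series solution
# recover u(0, y) = y(pi - y) and decay for large x.
def u(x, y, N=200):
    k = 2 * np.arange(1, N + 1) - 1          # odd integers 2n - 1
    return (8.0 / np.pi) * np.sum(np.exp(-k * x) * np.sin(k * y) / k**3)

print(u(0.0, 1.0), 1.0 * (np.pi - 1.0))      # boundary values agree
print(u(10.0, 1.0))                          # solution decays as x -> infinity
```

The 1/(2n − 1)³ decay of the coefficients makes the convergence fast enough that two hundred terms suffice for four-digit agreement.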

The following example is from the field of electrostatics.


Example 7.2.6. Let V be the potential of an electrostatic field in the
domain S = {(x, y) : 0 < x < a, 0 < y < ∞}. For given 0 < h < a, define
the function c = c(x) by

    c(x) = { c₁,  0 < x < h
           { c₂,  h < x < a,

where c1 and c2 are given constants. Consider the problem
(7.2.9)     div ( c grad u(x, y) ) = 0,   (x, y) ∈ S
(7.2.10)    lim_{y→∞} u(x, y) = 0,        0 < x < a.
We need to find the solution (potential) u(x, y) of the above problem of the
form
    u(x, y) = { v(x, y),  0 < x < h, 0 < y < ∞
              { w(x, y),  h < x < a, 0 < y < ∞,

where the functions v(x, y) and w(x, y) satisfy the boundary conditions
             v(0, y) = 0,   w(a, y) = 0,   0 < y < ∞
(7.2.11)     v(x, 0) = V, 0 < x < h;   w(x, 0) = V, h < x < a
             lim_{y→∞} v(x, y) = lim_{y→∞} w(x, y) = 0.

Solution. Recall that for a vector field F(x, y) = ( f(x, y), g(x, y) ) and a scalar
function u(x, y), div and grad are defined by

    div F(x, y) = fx(x, y) + gy(x, y),   grad u(x, y) = ( ux(x, y), uy(x, y) ).

Now, since c(x) is piecewise constant and u(x, y) is a continuous function,
problem (7.2.9), (7.2.10), in view of the boundary conditions (7.2.11), is
reduced to the problems

    ∆ v(x, y) = 0,  ∆ w(x, y) = 0,   0 < x < a, 0 < y < ∞
    v(h, y) = w(h, y),   c₁ vx(h, y) = c₂ wx(h, y),   0 < y < ∞
    lim_{y→∞} v(x, y) = lim_{y→∞} w(x, y) = 0,   0 < x < a.

We try to separate the variables. If u(x, y) = X(x)Y (y), then from (7.2.9)
after separating the variables we obtain that
(7.2.12)    ( cX′(x) )′ + c λX(x) = 0,   X(0) = X(a) = 0

and
(7.2.13)    Y′′(y) − λ Y(y) = 0,   lim_{y→∞} Y(y) = 0.

If we write the function X(x) in the form

    X(x) = { X₁(x),  0 < x < h
           { X₂(x),  h < x < a,

then, from (7.2.12) and the continuity conditions, we have the eigenvalue
problems
    X₁′′(x) + λX₁(x) = 0,   X₁(0) = 0
    X₂′′(x) + λX₂(x) = 0,   X₂(a) = 0
and the continuity conditions

X1 (h) = X2 (h), c1 X1′ (h) = c2 X2′ (h).

The solution of these problems, in view of the continuity conditions, is

    X₁ₙ(x) = sin (√λn x) / sin (√λn h),   X₂ₙ(x) = sin (√λn (a − x)) / sin (√λn (a − h)),

where λn are the positive solutions of the equation

(7.2.14)    c₁ cot (√λ h) + c₂ cot (√λ (a − h)) = 0.

For the above found λn, the general solution of (7.2.13) is given by

    Yn(y) = An e^(−√λn y),

and thus, the general solution u(x, y) of the original problem has the form

(7.2.15)    u(x, y) = ∑_{n=1}^∞ An e^(−√λn y) Xn(x),   0 < x < a, 0 < y < ∞.

The solution of the problem now is obtained if the coefficients An are chosen
so that the above representation of u(x, y) satisfies the boundary condition
at y = 0:

    V = u(x, 0) = ∑_{n=1}^∞ An Xn(x),   0 < x < a.

If we use the orthogonality condition

    ∫₀ᵃ c(x) Xn(x)Xm(x) dx = 0   if m ≠ n,

then we find
    An = [ V / ∥Xn∥² ] ∫₀ᵃ c(x) Xn(x) dx,

where, based on Equation (7.2.14), we have

    ∥Xn∥² = ∫₀ᵃ c(x)Xn²(x) dx = c₁ ∫₀ʰ X₁ₙ²(x) dx + c₂ ∫ₕᵃ X₂ₙ²(x) dx

          = c₁h / ( 2 sin² (√λn h) ) + c₂(a − h) / ( 2 sin² (√λn (a − h)) ),
i.e.,

(7.2.16)    ∥Xn∥² = c₁h / ( 2 sin² (√λn h) ) + c₂(a − h) / ( 2 sin² (√λn (a − h)) ).

Therefore, the solution of the original problem is given by (7.2.15), where
An are determined using (7.2.16), λn are the positive solutions of Equation
(7.2.14) and Xn(x) are given by

    Xn(x) = { sin (√λn x) / sin (√λn h),              0 < x < h
            { sin (√λn (a − x)) / sin (√λn (a − h)),  h < x < a.
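The transcendental equation (7.2.14) has no closed-form solutions, but its roots are easy to bracket and compute numerically. The sketch below (Python; the constants a, h, c₁, c₂ are illustrative assumptions) finds the smallest positive root of c₁ cot(s·h) + c₂ cot(s·(a − h)) = 0 in s = √λ:

```python
import numpy as np
from scipy.optimize import brentq

# Roots of c1*cot(s*h) + c2*cot(s*(a-h)) = 0, where s = sqrt(lambda).
# Sample constants (assumptions): a = 1, h = 0.4, c1 = 1, c2 = 2.
a, h, c1, c2 = 1.0, 0.4, 1.0, 2.0

def g(s):
    return c1 / np.tan(s * h) + c2 / np.tan(s * (a - h))

# g decreases from +inf near s = 0 to -inf just before the first
# cotangent pole, which here is at s = pi/(a - h); bracket accordingly.
s1 = brentq(g, 0.1, np.pi / (a - h) - 1e-6)
lam1 = s1**2
print(s1, abs(g(s1)))   # root and residual (residual ~ 0)
```

Subsequent roots lie between consecutive poles of the two cotangents and can be found with the same bracketing idea.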

Exercises for Section 7.2

1. Using the separation of variables method solve the problem

       uxx(x, y) + uyy(x, y) = 0,   0 < x < a, 0 < y < b
       u(x, 0) = f₁(x),  u(x, b) = f₂(x),   0 < x < a
       u(0, y) = g₁(y),  u(a, y) = g₂(y),   0 < y < b,

   for the given data.

(a) a = b = 1, f1 (x) = 100, f2 (x) = 200, g1 (y) = g2 (y) = 0.

(b) a = b = π, f1 (x) = 0, f2 (x) = 1, g1 (y) = g2 (y) = 1.

(c) a = 1, b = 2, f1 (x) = 0, f2 (x) = x, g1 (y) = g2 (y) = 0.

(d) a = 2, b = 1, f1 (x) = 100, f2 (x) = 0, g1 (y) = 0, g2 (y) =


100y(1 − y).

(e) a = b = 1, f1 (x) = 7 sin 7πx, f2 (x) = sin πx, g1 (y) = sin 3πy,
g2 (y) = sin 6πy.

(f) a = b = 1, f1 (x) = 0, f2 (x) = 100, g1 (y) = 0, g2 (y) = 0.

(g) u(x, 0) = f1 (x) = 0, u(x, b) = f2 (x) = f (x), u(0, y) = g1 (y) = 0,


u(a, y) = g2 (y) = 0.

In problems 2–9 solve the Laplace equation in [0, a] × [0, b] subject
to the given boundary conditions.

2. u(x, 0) = f (x), u(x, b) = 0, 0 < x < a; u(0, y) = 0, u(a, y) = 0,


0 < y < b.

3. uy (x, 0) = 0, uy (x, 1) = 0, 0 < x < a; u(0, y) = 0, u(1, y) = 1 − y,


0 < y < b.

4. ux (0, y) = u(0, y), u(0, y) = 1, 0 < y < b; u(x, 0) = 0, u(x, π) = 0,


0 < x < a.

5. ux (0, y) = u(0, y), ux (a, y) = 0, 0 < y < b; u(x, 0) = 0, u(x, b) =


f (x), 0 < x < a.

6. ux (0, y) = 0, u(a, y) = 1, 0 < y < b; u(x, 0) = 1, u(x, b) = 1,


0 < x < a.

7. u(0, y) = 1, u(a, y) = 1, 0 < y < b; uy (x, 0) = 0, u(x, b) = 0,


0 < x < a.

8. u(x, 0) = 1, u(x, 1) = 4, 0 < x < π; u(0, y) = 0, u(π, y) = 0.

9. u(x, 0) = 0, uy (x, 1) = 0, 0 < x < 1; u(0, y) = 0, u(1, y) = 0,


0 < y < 1.

10. Determine conditions on the function f (x) on the segment [0, 1]


which guarantee a solution of the Neumann problem
uxx (x, y) + uyy (x, y) = 0, 0 < x < 1, 0 < y < 1
uy (x, 0) = 0, uy (x, 1) = 0, 0 < x < 1
ux (0, y) = f (y), ux (1, y) = 0, 0 < y < 1.
In Problems 11–14 solve the Laplace equation ∆ u(x, y) = 0 on the
indicated semi-infinite strip with the given boundary conditions.

11. 0 < x < π, 0 < y < ∞; u(0, y) = u(π, y) = 0; u(x, 0) = f (x);


| lim u(x, y) |< ∞.
y→∞

12. 0 < x < π2 , 0 < y < ∞; u(0, y) = u( π2 , y) = e−y ; u(x, 0) = 1,


| lim u(x, y) |< ∞.
y→∞
13. 0 < x < ∞, 0 < y < b; u(x, 0) = uy (x, b) = 0; u(0, y) = f (y);


| lim u(x, y) |< ∞.
x→∞

14. 0 < x < ∞, 0 < y < b; uy (x, 0) = uy (x, b) + h u(x, b) = 0;


u(0, y) = f (y); | lim u(x, y) |= 0, 0 < y < b.
x→∞

In Problems 15–20 solve the Poisson equation ∆ u(x, y) = −F(x, y)
on the indicated rectangle 0 < x < a, 0 < y < b, with the given
boundary values and given function F(x, y).

15. a = b = 1, F (x, y) = −1, u(x, y) = 0 on all sides of the rectangle.

16. a = b = 1, F (x, y) = −x, u(x, y) = 0 on all sides of the rectangle.

17. a = b = 1, F (x, y) = − sin πx, u(x, 0) = u(0, y) = u(1, y) = 0,


u(x, 1) = x.

18. a = b = 1, F (x, y) = −xy, u(x, 0) = u(0, y) = u(1, y) = 0, u(x, 1) = x.

19. a = b = π, F (x, y) = e2y sin x, u(x, 0) = u(0, y) = u(π, y) = 0


u(x, π) = f (x).
20. Let C = {(x, y, z) : 0 < x < 1, 0 < y < 1, 0 < z < 1} be the unit
    cube. Show that the solution of the Laplace equation

        uxx(x, y, z) + uyy(x, y, z) + uzz(x, y, z) = 0,   (x, y, z) ∈ C,

    subject to the Dirichlet boundary conditions u(1, y, z) = g(y, z), 0 <
    y < 1, 0 < z < 1, and u(x, y, z) = 0 on all other sides of the cube, is
    given by

        u(x, y, z) = ∑_{m=1}^∞ ∑_{n=1}^∞ Amn sinh λmn x sin mπy sin nπz,

    where

        Amn = [ 4 / sinh λmn ] ∫₀¹ ∫₀¹ g(y, z) sin mπy sin nπz dz dy

    and
        λmn = π √(m² + n²).

    Hint: Take u(x, y, z) = X(x)Y(y)Z(z) and separate the variables.
7.3 Laplace and Poisson Equations on Circular Domains.

In this section we will study the two and three dimensional Laplace
equation on circular domains. We begin by considering the Dirichlet problem
for the Laplace equation on a disc.
7.3.1 Laplace Equation in Polar Coordinates.
We will consider and solve the following boundary value problem: Find a
twice differentiable function u(r, φ) in the disc
    D = { (r, φ) : r < a }

that is continuous on the closed disc D = D ∪ ∂D, such that

(7.3.1)    ∆ u(r, φ) = 0,      0 < r < a, 0 ≤ φ < 2π
(7.3.2)    u(a, φ) = f(φ),     0 ≤ φ < 2π,

where f(φ) is a prescribed function on the circle C = ∂D. We will assume
that the given function f is continuous and differentiable on the circle C,
even though the results which will be proved are valid for a more general class
of functions f.
As usual, when working in a disc, we use the polar coordinates (r, φ) in
which, as we have seen in the previous chapters, the Laplacian is given by
(7.3.3)    ∆ u(r, φ) = (1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂φ² ≡ ∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂φ².

We will find the solution of the problem (7.3.1), (7.3.2) of the form

u(r, φ) = R(r)Φ(φ).

If we substitute u(r, φ) into Equation (7.3.1), after separating the variables,
we obtain
    r ( rR′′(r) + R′(r) ) / R(r) = − Φ′′(φ)/Φ(φ) = λ,
where λ is a constant. Therefore, we obtain the two differential equations

(7.3.4) Φ′′ (φ) + λΦ(φ) = 0, 0 ≤ φ < 2π,

and

(7.3.5) r2 R′′ (r) + rR′ (r) − λR(r) = 0, 0 < r ≤ a.

Since u(r, φ) is a 2π-periodic function of φ we have

Φ(φ) = Φ(φ + 2π)

for every φ. Using this condition (as we did in the section on the wave
equation on circular domains), we have

    λ = λn = n²,   n = 0, 1, 2, . . ..

Consequently, the general solution of Equation (7.3.4) is given by

Φn (φ) = An cos nφ + Bn sin nφ.

For these λn = n², the general solution of Equation (7.3.5), as a Cauchy–Euler
equation (see Appendix D), is

    R(r) = Arⁿ + Br⁻ⁿ.

Since u(r, φ) is a continuous function in the whole disc D, the function R(r)
cannot be unbounded for r = 0. Thus, B = 0, and so

R(r) = rn , n = 0, 1, 2, . . ..

Therefore,

∑ ( )
u(r, φ) = rn An cos nφ + Bn sin nφ , 0 ≤ r ≤ a, 0 ≤ φ < 2π
n=0

are particular solutions of our problem, provided that the above series con-
verge.
We find the coefficients An and Bn so that the above u(r, φ) satisfies the
boundary condition
    f(φ) = u(a, φ) = ∑_{n=0}^∞ aⁿ ( An cos nφ + Bn sin nφ ),   0 ≤ φ < 2π.

Recognizing that the above right hand side is the Fourier series of f(φ), we
have
    An = [ 1/(aⁿπ) ] ∫₀^{2π} f(ψ) cos nψ dψ,

    Bn = [ 1/(aⁿπ) ] ∫₀^{2π} f(ψ) sin nψ dψ;   n = 0, 1, 2, . . ..

Therefore, the solution u = u(r, φ) of our Dirichlet problem for the Laplace
equation on the disc is given by

(7.3.6)    u = a₀/2 + ∑_{n=1}^∞ (r/a)ⁿ ( an cos nφ + bn sin nφ ),   0 ≤ r ≤ a, 0 ≤ φ < 2π,
where

(7.3.7)    an = (1/π) ∫₀^{2π} f(ψ) cos nψ dψ,   bn = (1/π) ∫₀^{2π} f(ψ) sin nψ dψ.
Using the assumption that f(φ) is a continuous and differentiable function
on the circle C and using the Weierstrass test for uniform convergence, it can
easily be shown that the series in (7.3.6) converges, can be differentiated
term by term, and defines a continuous function on the closed disc D. By
construction, every term of the above series satisfies the Laplace equation and
therefore the solution of the original problem is given by (7.3.6).
Now, using (7.3.6) and (7.3.7) we will derive the very important Poisson
integral formula, which gives the solution of the Dirichlet problem for the
Laplace equation on the disc D by means of an integral (Poisson integral).
If we substitute the coefficients an and bn, given by (7.3.7), into (7.3.6)
and interchange the order of summation and integration we obtain

    u(r, φ) = (1/π) ∫₀^{2π} [ 1/2 + ∑_{n=1}^∞ (r/a)ⁿ ( cos nψ cos nφ + sin nψ sin nφ ) ] f(ψ) dψ

            = (1/π) ∫₀^{2π} [ 1/2 + ∑_{n=1}^∞ (r/a)ⁿ cos n(φ − ψ) ] f(ψ) dψ.
0
To simplify our further calculations we let z = r/a. Notice that |z| < 1. Using
Euler's formula and the geometric sum formula

    cos α = ( e^{iα} + e^{−iα} ) / 2,   ∑_{n=1}^∞ wⁿ = w/(1 − w),   |w| < 1,
we obtain

    u(r, φ) = (1/π) ∫₀^{2π} [ 1/2 + ∑_{n=1}^∞ (r/a)ⁿ cos n(φ − ψ) ] f(ψ) dψ

            = (1/2π) ∫₀^{2π} [ 1 + ∑_{n=1}^∞ ( z e^{i(φ−ψ)} )ⁿ + ∑_{n=1}^∞ ( z e^{−i(φ−ψ)} )ⁿ ] f(ψ) dψ

            = (1/2π) ∫₀^{2π} [ 1 + z e^{i(φ−ψ)}/(1 − z e^{i(φ−ψ)}) + z e^{−i(φ−ψ)}/(1 − z e^{−i(φ−ψ)}) ] f(ψ) dψ

            = (1/2π) ∫₀^{2π} (1 − z²) / ( 1 − 2z cos (φ − ψ) + z² ) f(ψ) dψ

            = (1/2π) ∫₀^{2π} (a² − r²) / ( r² − 2ar cos (φ − ψ) + a² ) f(ψ) dψ.
Hence,

(7.3.8)    u(r, φ) = (1/2π) ∫₀^{2π} (a² − r²) / ( r² − 2ar cos (φ − ψ) + a² ) f(ψ) dψ.

Equation (7.3.8) is called the Poisson formula for the harmonic function
u(r, φ) for the disc D with boundary values f (φ).
If we introduce the function
(7.3.9)    P(r, φ) = (a² − r²) / ( r² − 2ar cos φ + a² ),

known as the Poisson kernel, then the Poisson integral formula (7.3.8) can
be written in the form
(7.3.10)    u(r, φ) = (1/2π) ∫₀^{2π} P(r, φ − ψ) f(ψ) dψ.
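A basic sanity check on the Poisson kernel: taking f ≡ 1 gives the harmonic function u ≡ 1, so the kernel must average to 1 over the circle. The Python sketch below (the interior point r, φ is an arbitrary choice) verifies this numerically:

```python
import numpy as np

# With f = 1 the Poisson integral must return 1, i.e.
# (1/2pi) * int_0^{2pi} P(r, phi - psi) dpsi = 1 for every r < a.
def P(r, phi, a=1.0):
    # Poisson kernel (7.3.9)
    return (a**2 - r**2) / (r**2 - 2 * a * r * np.cos(phi) + a**2)

psi = np.linspace(0.0, 2 * np.pi, 4001)
r, phi = 0.7, 1.2                         # an arbitrary interior point
vals = P(r, phi - psi)
mean = (vals[:-1] + vals[1:]).sum() * (psi[1] - psi[0]) / 2 / (2 * np.pi)
print(mean)   # 1.0 up to quadrature error
```

Because the integrand is smooth and periodic, the trapezoid rule converges spectrally here, so the agreement is essentially to machine precision.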

Example 7.3.1. Let D = { (x, y) : x² + y² < 1 } be the unit disc. Find
a harmonic function u(x, y) on D which is continuous on the closed disc D
and which on the boundary ∂D has value f(x, y) = x² − 3xy − 2y².
Solution. We pass to polar coordinates. From (7.3.6) it follows that the
required function u(r, φ) is given by
    u(r, φ) = a₀/2 + ∑_{n=1}^∞ rⁿ ( an cos nφ + bn sin nφ ),   0 ≤ r ≤ 1, 0 ≤ φ < 2π,

where an and bn are the Fourier coefficients of the function f (x, y), which
will be found from the boundary condition
    cos²φ − 3 cos φ sin φ − 2 sin²φ = a₀/2 + ∑_{n=1}^∞ ( an cos nφ + bn sin nφ ).

From

    cos²φ − 3 cos φ sin φ − 2 sin²φ = (1 + cos 2φ)/2 − (3/2) sin 2φ − ( 1 − cos 2φ ),

by comparison of the coefficients we obtain

    a₀ = −1,   a₂ = 3/2,   b₂ = −3/2
and the other coefficients an and bn are all zero. Therefore, the required
function is
    u(r, φ) = −1/2 + (3/2) r² cos 2φ − (3/2) r² sin 2φ.
We can rewrite the function u(r, φ) as

    u(r, φ) = −1/2 + (3/2) r² ( cos²φ − sin²φ ) − 3r² sin φ cos φ,

from which we find that the solution u in Cartesian coordinates is given by

    u(x, y) = −1/2 + (3/2)(x² − y²) − 3xy.
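When matching Fourier coefficients by hand, a machine check is a good habit. The Python sketch below recomputes the Fourier coefficients of the boundary datum f(φ) = cos²φ − 3 cos φ sin φ − 2 sin²φ by quadrature and confirms a₀ = −1, a₂ = 3/2, b₂ = −3/2:

```python
import numpy as np

# Fourier coefficients of the boundary datum of Example 7.3.1,
# computed with the standard formulas a_n = (1/pi) int f cos(n p) dp, etc.
f = lambda p: np.cos(p)**2 - 3 * np.cos(p) * np.sin(p) - 2 * np.sin(p)**2
p = np.linspace(0.0, 2 * np.pi, 4001)
dp = p[1] - p[0]

def trap(w):
    # trapezoid rule on [0, 2*pi] (spectrally accurate for periodic data)
    return (w[:-1] + w[1:]).sum() * dp / 2

a0 = trap(f(p)) / np.pi
a2 = trap(f(p) * np.cos(2 * p)) / np.pi
b2 = trap(f(p) * np.sin(2 * p)) / np.pi
print(a0, a2, b2)   # expect -1, 3/2, -3/2
```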

Example 7.3.2. Let A be the annulus defined by

    A = { (r, φ) : a < r < b, 0 ≤ φ < 2π }.

Find the solution u(r, φ) of the problem

    ∆ u(r, φ) = 0,     (r, φ) ∈ A
    u(a, φ) = f(φ),    0 ≤ φ < 2π
    u(b, φ) = g(φ),    0 ≤ φ < 2π.

Solution. If we take u(r, φ) = R(r)Φ(φ) and work similarly as in the Laplace
equation on the disc, we will obtain Equation (7.3.5) for R(r), a < r < b,
and Equation (7.3.4) for Φ(φ). Since Φ is 2π-periodic we obtain that the
general solution of (7.3.4) is given by

    Φn(φ) = An cos nφ + Bn sin nφ,   n = 0, 1, 2, . . ..

For n = 1, 2, . . . we have two independent solutions of the Euler–Cauchy
equation (7.3.5):
    R(r) = rⁿ,   R(r) = r⁻ⁿ,
while for n = 0 the two independent solutions of (7.3.5) are

    R(r) = ln r,   R(r) = 1.

Therefore, the solution of the Laplace equation on the annulus A will be

    u(r, φ) = a₀ ln r + b₀ + ∑_{n=1}^∞ [ (an rⁿ + bn r⁻ⁿ) cos nφ + (cn rⁿ + dn r⁻ⁿ) sin nφ ],
where the coefficients a₀, b₀, an, bn, cn and dn are determined by comparing
the equations obtained from the boundary conditions at r = a and r = b:

    a₀ ln a + b₀ + ∑_{n=1}^∞ [ (an aⁿ + bn a⁻ⁿ) cos nφ + (cn aⁿ + dn a⁻ⁿ) sin nφ ] = f(φ)

    a₀ ln b + b₀ + ∑_{n=1}^∞ [ (an bⁿ + bn b⁻ⁿ) cos nφ + (cn bⁿ + dn b⁻ⁿ) sin nφ ] = g(φ)

with the Fourier expansions of f(φ) and g(φ):

    f(φ) = α₀/2 + ∑_{n=1}^∞ ( αn cos nφ + βn sin nφ ),

    g(φ) = γ₀/2 + ∑_{n=1}^∞ ( γn cos nφ + δn sin nφ ).
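For each mode n the comparison above yields a 2 × 2 linear system for the pair of radial coefficients. The Python sketch below (with illustrative data a = 1, b = 2, f(φ) = sin φ, g(φ) = 0, so that only the n = 1 sine mode survives) solves that system and checks the boundary values:

```python
import numpy as np

# Annulus 1 < r < 2 with u(1, phi) = sin(phi), u(2, phi) = 0 (sample data).
# Only the n = 1 sine mode appears: its radial part is c1*r + d1/r, and
# matching the two boundary values gives a 2x2 linear system.
a, b = 1.0, 2.0
M = np.array([[a, 1.0 / a],
              [b, 1.0 / b]])
rhs = np.array([1.0, 0.0])          # sine coefficients of f at r=a and g at r=b
c1, d1 = np.linalg.solve(M, rhs)

u = lambda r, phi: (c1 * r + d1 / r) * np.sin(phi)
print(c1, d1)                        # expect -1/3 and 4/3
print(u(2.0, 0.7))                   # outer boundary value is 0
```

For general data one solves such a system for every n, with the logarithmic pair (ln r, 1) playing the same role for n = 0.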

Example 7.3.3. Find the harmonic function u(r, φ) on the unit disc such
that
    u(1, φ) = { 1,  0 < φ < π
              { 0,  π < φ < 2π.
Solution. The Fourier coefficients an and bn of the function f are

    a₀ = (1/π) ∫₀^{2π} f(ψ) dψ = 1,

    an = (1/π) ∫₀^{2π} f(ψ) cos nψ dψ = (1/π) ∫₀^π cos nψ dψ = 0,

    bn = (1/π) ∫₀^{2π} f(ψ) sin nψ dψ = (1/π) ∫₀^π sin nψ dψ = (1/π) · (1 − (−1)ⁿ)/n.

Therefore, by (7.3.6) the function u(r, φ) is given by

    u(r, φ) = a₀/2 + ∑_{n=1}^∞ rⁿ ( an cos nφ + bn sin nφ )

            = 1/2 + (2/π) ∑_{n=1}^∞ [ r^{2n−1} / (2n − 1) ] sin (2n − 1)φ.

Using the Poisson integral formula (7.3.8), the function u(r, φ) can be
represented in the form

    u(r, φ) = (1/2π) ∫₀^{2π} (1 − r²) / ( r² − 2r cos (φ − ψ) + 1 ) f(ψ) dψ

            = [ (1 − r²)/(2π) ] ∫₀^π dψ / ( r² − 2r cos (φ − ψ) + 1 ).
0
Example 7.3.4. Solve the following Laplace equation boundary value prob-
lem on the semi-disc.

    ∆ u(r, φ) = 0,                 0 < r < 1, 0 < φ < π
    u(1, φ) = c,                   0 < φ < π
    u(r, 0) = u(r, π) = 0,         0 < r < 1
    | lim_{r→0+} u(r, φ) | < ∞,    0 < φ < π.

Solution. If we let u(r, φ) = R(r)Φ(φ) and substitute in the Laplace equation,
(after separating the variables) we obtain

    Φ′′(φ) + λΦ(φ) = 0,   Φ(0) = Φ(π) = 0,

    r²R′′(r) + rR′(r) − λR(r) = 0,   0 < r < 1,   | lim_{r→0+} R(r) | < ∞.

Solving the above eigenvalue problem for Φ(φ) we obtain that

    λn = n²,   Φn(φ) = sin nφ.

For these λn, in view of the fact that R(r) is bounded in a neighborhood
of r = 0, a particular solution of the Cauchy–Euler differential equation for
R(r) is
    R(r) = rⁿ,   0 ≤ r ≤ 1.
Therefore, the general solution of the Laplace equation on the semi-disc is
given by
    u(r, φ) = ∑_{n=1}^∞ an rⁿ sin nφ.

We find the coefficients an from the boundary condition

    c = u(1, φ) = ∑_{n=1}^∞ an sin nφ.
n=1

From the orthogonality property of the functions sin nφ on [0, π] it follows
that
    an = (2/π) ∫₀^π c sin nφ dφ = (2c/π) · (1 − (−1)ⁿ)/n.
0

Therefore, the solution of the original boundary value problem is given by

    u(r, φ) = (4c/π) ∑_{n=1}^∞ [ r^{2n−1} / (2n − 1) ] sin (2n − 1)φ.
7.3.2. Poisson Equation in Polar Coordinates.


We consider the following Poisson boundary value problem on the unit disc:

    ∆ u(r, φ) = −F(r, φ),    0 < r < 1, 0 ≤ φ < 2π
    u(1, φ) = f(φ),           0 ≤ φ < 2π.

Let the solution u(r, φ) of the problem be of the form

u(r, φ) = v(r, φ) + w(r, φ),

where v(r, φ) and w(r, φ) are the solutions of the following problems, respec-
tively,
(L)    ∆ v(r, φ) = 0,    0 < r < 1, 0 ≤ φ < 2π
       v(1, φ) = f(φ),   0 ≤ φ < 2π

and

(P)    ∆ w(r, φ) = −F(r, φ),    0 < r < 1, 0 ≤ φ < 2π
       w(1, φ) = 0,              0 ≤ φ < 2π.

The first problem (L) has been already solved in part 7.3.1. Therefore, it
is enough to solve the second problem (P). We will look for the solution of
problem (P) in the form

    w(r, φ) = ∑_{m=0}^∞ ∑_{n=0}^∞ Wmn(r, φ),

where Wmn (r, φ) are the eigenfunctions of the Helmholtz eigenvalue problem
(H)    ∆ w(r, φ) + λ²w(r, φ) = 0,    0 ≤ r < 1, 0 ≤ φ < 2π
       w(1, φ) = 0,                   0 ≤ φ < 2π.

We solved problem (H) for the vibrations of a circular drum in Chapter 5,
where we found that its eigenvalues λ²mn are obtained by solving the equation

    Jm(λmn) = 0,

and the corresponding eigenfunctions are given by

    Wmn(r, φ) = Jm(λmn r) cos mφ,   W̃mn(r, φ) = Jm(λmn r) sin mφ.

Thus,

(7.3.11)    w(r, φ) = ∑_{m=0}^∞ ∑_{n=0}^∞ Jm(λmn r) [ Amn cos mφ + Bmn sin mφ ].
Now, from
    ∆ Wmn + λ²mn Wmn = 0,   m, n = 0, 1, 2, . . .
we have

(7.3.12)    ∆ w(r, φ) = ∑_{m=0}^∞ ∑_{n=0}^∞ ∆ Wmn(r, φ) = − ∑_{m=0}^∞ ∑_{n=0}^∞ λ²mn Wmn(r, φ).

Therefore, from (7.3.11), (7.3.12) and from the Poisson equation in (P) we
have

    ∑_{m=0}^∞ ∑_{n=0}^∞ λ²mn Jm(λmn r) [ Amn cos mφ + Bmn sin mφ ] = F(r, φ).

From the last expansion and from the orthogonality of the eigenfunctions with
respect to the weight r we find

(7.3.13)    Amn = [ ϵm / ( π λ²mn J²ₘ₊₁(λmn) ) ] ∫₀¹ ∫₀^{2π} F(r, φ) Jm(λmn r) cos mφ · r dφ dr,

            Bmn = [ 2 / ( π λ²mn J²ₘ₊₁(λmn) ) ] ∫₀¹ ∫₀^{2π} F(r, φ) Jm(λmn r) sin mφ · r dφ dr,

where
    ϵm = { 1,  m = 0,
         { 2,  m > 0.

Example 7.3.5. Solve the boundary value problem for the Poisson equation

    ∆ u(r, φ) = −2 − r³ cos 3φ,    0 ≤ r < 1, 0 ≤ φ < 2π
    u(1, φ) = 0,                    0 ≤ φ < 2π.

Solution. From Equation (7.3.13) we have Bmn = 0 for every m and n,
and Amn = 0 for every m ≠ 3, m ≠ 0 and every n. For m = 0, from
(7.3.13) we have

    A0n = [ 1 / ( π λ²₀ₙ J₁²(λ₀ₙ) ) ] ∫₀¹ ∫₀^{2π} ( 2 + r³ cos 3φ ) J₀(λ₀ₙ r) r dφ dr

        = [ 4 / ( λ²₀ₙ J₁²(λ₀ₙ) ) ] ∫₀¹ J₀(λ₀ₙ r) r dr

        = [ 4 / ( λ²₀ₙ J₁²(λ₀ₙ) ) ] · (1/λ₀ₙ) J₁(λ₀ₙ) = 4 / ( λ³₀ₙ J₁(λ₀ₙ) ).
For m = 3 we have

    A3n = [ 2 / ( π λ²₃ₙ J₄²(λ₃ₙ) ) ] ∫₀¹ ∫₀^{2π} ( 2 + r³ cos 3φ ) cos 3φ J₃(λ₃ₙ r) r dφ dr

        = [ 2 / ( λ²₃ₙ J₄²(λ₃ₙ) ) ] ∫₀¹ r⁴ J₃(λ₃ₙ r) dr

        = [ 2 / ( λ²₃ₙ J₄²(λ₃ₙ) ) ] · (1/λ₃ₙ) J₄(λ₃ₙ) = 2 / ( λ³₃ₙ J₄(λ₃ₙ) ).

In the above calculations we have used the following fact about the Bessel
functions (see Chapter 3):

    ∫₀¹ r^{n+1} Jn(λ r) dr = (1/λ) Jₙ₊₁(λ).
0

Therefore, the solution of the original problem is given by

    u(r, φ) = ∑_{n=1}^∞ [ ( 4 / (λ³₀ₙ J₁(λ₀ₙ)) ) J₀(λ₀ₙ r) + ( 2 / (λ³₃ₙ J₄(λ₃ₙ)) ) J₃(λ₃ₙ r) cos 3φ ].
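The Bessel integral identity used above is easy to verify numerically; the Python sketch below checks it for n = 0 at the first zero λ₀₁ of J₀ (scipy's `jv` and `jn_zeros` are used for the Bessel function and its zeros):

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Check of int_0^1 r^{n+1} J_n(lam r) dr = J_{n+1}(lam)/lam for n = 0.
lam = jn_zeros(0, 1)[0]              # first positive zero of J_0, i.e. lambda_{01}
r = np.linspace(0.0, 1.0, 20001)
w = r * jv(0, lam * r)               # integrand r^{n+1} J_n(lam r) with n = 0
lhs = (w[:-1] + w[1:]).sum() * (r[1] - r[0]) / 2
rhs = jv(1, lam) / lam
print(lhs, rhs)                      # the two sides agree
```

The same check with n = 3 and the zeros of J₃ validates the A3n computation above.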

7.3.3 Laplace Equation in Cylindrical Coordinates.


In this part we will solve the Laplace equation in cylindrical coordinates
x = r cos φ, y = r sin φ, z = z. Recall that the Laplacian in these coordinates
is given by

    ∆ u(r, φ, z) ≡ urr + (1/r) ur + (1/r²) uφφ + uzz.
Let C be the cylinder given by

C = {(r, φ, z) : 0 < r < a, 0 < φ < 2π, 0 < z < b}.

We will find a unique, twice differentiable function u = u(r, φ, z) on C which


satisfies the boundary value problem
(7.3.14)    urr + (1/r) ur + (1/r²) uφφ + uzz = 0,   (r, φ, z) ∈ C,
(7.3.15) u(a, φ, z) = 0, 0 < φ < 2π, 0 < z < b,
(7.3.16) u(r, φ, 0) = f (r, φ), 0 < r < a, 0 < φ < 2π,
(7.3.17) u(r, φ, b) = g(r, φ), 0 < r < a, 0 < φ < 2π.

Naturally, the function u(r, φ, z) is 2π-periodic with respect to φ:

u(r, φ, z) = u(r, φ + 2π, z), 0 < r < a, 0 < φ < 2π, 0 < z < b
and bounded at r = 0:

    | lim_{r→0+} u(r, φ, z) | < ∞,   0 < φ < 2π, 0 < z < b.

If we search for the solution of this problem of the form

u(r, φ, z) = R(r)Φ(φ)Z(z),

then substituting it into Equation (7.3.14), after separating the variables and
using the boundary conditions (7.3.15), (7.3.16) and (7.3.17), we will obtain
the problems
(7.3.18)    Z′′(z) − λZ(z) = 0,   0 < z < b,

(7.3.19)    Φ′′(φ) + µ²Φ(φ) = 0,   Φ(φ) = Φ(φ + 2π),

(7.3.20)    r²R′′(r) + rR′(r) + (λr² − µ²)R(r) = 0,   | lim_{r→0+} R(r) | < ∞.

Because Φ(φ) is a periodic function we have µ = µn = n, n = 0, 1, 2, . . .,
and so two independent solutions of Equation (7.3.19) are given by

    Φn(φ) = cos nφ,   Φ̃n(φ) = sin nφ.

For these found µn = n, from the boundedness condition for the function R(r)
at r = 0, we find a solution of the Bessel equation in (7.3.20) of order n,
given by
    Rmn(r) = Jn(√λmn r),
where λmn are obtained by solving the equation

    Jn(√λmn a) = 0,

found from the boundary condition at r = a.

Therefore, the general solution u = u(r, φ, z) of our original problem is of
the form

    u = ∑_{m=0}^∞ ∑_{n=0}^∞ [ Amn Jn(√λmn r) cos nφ + Bmn Jn(√λmn r) sin nφ ] Zmn(z),

where Zmn (z) are solutions of (7.3.18).


As usual, we split the original problem into two boundary value problems:

    u(r, φ, z) = u₁(r, φ, z) + u₂(r, φ, z),

where the harmonic functions u₁(r, φ, z) and u₂(r, φ, z) satisfy the conditions

    u₁(a, φ, z) = 0,   u₁(r, φ, 0) = f(r, φ),   u₁(r, φ, b) = 0
    u₂(a, φ, z) = 0,   u₂(r, φ, 0) = 0,         u₂(r, φ, b) = g(r, φ).
It is enough to consider only u₁(r, φ, z). In this case we have

    Zmn(z) = sinh √λmn (b − z).

Therefore the solution u = u(r, φ, z) of the given original problem is

    u = ∑_{m=0}^∞ ∑_{n=0}^∞ [ Amn cos nφ + Bmn sin nφ ] Jn(√λmn r) · sinh (√λmn (b − z)) / sinh (√λmn b)

      + ∑_{m=0}^∞ ∑_{n=0}^∞ [ Cmn cos nφ + Dmn sin nφ ] Jn(√λmn r) · sinh (√λmn z) / sinh (√λmn b),

where the coefficients Amn, Bmn, Cmn and Dmn are found from the bound-
ary functions f(r, φ) and g(r, φ) and the orthogonality of the Bessel functions
Jn(·) with respect to the weight function r:

    Amn = [ 2 / ( a²πϵn J²ₙ₊₁(√λmn a) ) ] ∫₀^{2π} ∫₀ᵃ f(r, φ) cos nφ Jn(√λmn r) r dr dφ

    Bmn = [ 2 / ( a²πϵn J²ₙ₊₁(√λmn a) ) ] ∫₀^{2π} ∫₀ᵃ f(r, φ) sin nφ Jn(√λmn r) r dr dφ

    Cmn = [ 2 / ( a²πϵn J²ₙ₊₁(√λmn a) ) ] ∫₀^{2π} ∫₀ᵃ g(r, φ) cos nφ Jn(√λmn r) r dr dφ

    Dmn = [ 2 / ( a²πϵn J²ₙ₊₁(√λmn a) ) ] ∫₀^{2π} ∫₀ᵃ g(r, φ) sin nφ Jn(√λmn r) r dr dφ,

and
    ϵn = { 2,  n = 0
         { 1,  n ≠ 0.
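In practice the separation constants √λmn are obtained from the positive zeros of Jn, which are tabulated in scientific libraries. The Python sketch below (with a = 1) computes the first few values and verifies that Jn vanishes there:

```python
import numpy as np
from scipy.special import jv, jn_zeros

# sqrt(lambda_mn) * a runs through the positive zeros of J_n, so that
# J_n(sqrt(lambda_mn) * a) = 0; here with a = 1 for illustration.
a = 1.0
max_resid = 0.0
for n in range(3):
    s = jn_zeros(n, 4) / a           # first four values of sqrt(lambda_mn)
    max_resid = max(max_resid, np.abs(jv(n, s * a)).max())
    print(n, s)
print(max_resid)                     # J_n indeed vanishes at these points
```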

7.3.4 Laplace Equation in Spherical Coordinates.


Let us consider the following problem: Find a function u which is harmonic
inside a three dimensional ball, continuous on the closure of the ball and
which assumes on the boundary sphere a given continuous function f. For
convenience only, we will take the ball to be the unit ball, centered at the
origin. As in many cases when working with a three dimensional ball, we use
spherical coordinates.
As usual, by (r, θ, φ) we denote the spherical coordinates. Let B =
{(r, θ, φ) : 0 ≤ r < 1} be the unit ball with boundary S. We will find a
function u(r, θ, φ) which satisfies the boundary value problem

(7.3.21) ∆ u(r, θ, φ) = 0, (r, θ, φ) ∈ B,


(7.3.22) u(r, θ, φ) = f (θ, φ), (r, θ, φ) ∈ S,
where f (θ, φ) is a given function. Let us recall from the previous chapters
that the Laplacian ∆ u(r, θ, φ) in spherical coordinates is given by
(7.3.23)    ∆ u ≡ (1/r²) ∂/∂r ( r² ∂u/∂r ) + (1/(r² sin θ)) ∂/∂θ ( sin θ ∂u/∂θ ) + (1/(r² sin²θ)) ∂²u/∂φ².

As many times before, we look for the solution of this problem of the form

u(r, θ, φ) = R(r)Y (θ, φ).

If we now substitute u(r, θ, φ) into (7.3.21), using (7.3.23), after separating
the variables, we obtain the Euler–Cauchy equation

(7.3.24)    r²R′′(r) + 2rR′(r) − λR(r) = 0,   0 < r ≤ 1,   | lim_{r→0+} R(r) | < ∞,

and the Helmholtz equation

(7.3.25)    (1/sin θ) ∂/∂θ ( sin θ ∂Y/∂θ ) + (1/sin²θ) ∂²Y/∂φ² + λY = 0.

The function Y(θ, φ) also satisfies the periodicity and boundedness conditions

(7.3.26)    Y(θ, φ + 2π) = Y(θ, φ),   0 < θ < π, 0 < φ < 2π,
            | lim_{θ→0+} Y(θ, φ) | < ∞,   | lim_{θ→π−} Y(θ, φ) | < ∞.

The bounded solutions Y (θ, φ) of the eigenvalue problem (7.3.25), (7.3.26)


are called spherical functions. An excellent analysis of these important func-
tions is given in the book by N. Asmar [13]. To solve this eigenvalue problem
we use again the separation of variables method. If Y (θ, φ) is of the form

Y (θ, φ) = P (θ)Φ(φ),

then we obtain the two problems

(7.3.27)    Φ′′(φ) + µΦ(φ) = 0,   Φ(φ + 2π) = Φ(φ),

and

(7.3.28)    (1/sin θ) d/dθ ( sin θ dP/dθ ) + ( λ − µ/sin²θ ) P(θ) = 0,
            | lim_{θ→0+} P(θ) | < ∞,   | lim_{θ→π−} P(θ) | < ∞.

The eigenvalues of (7.3.27) are µm = m², m = 0, 1, 2, . . .; the corresponding
eigenfunctions are

    { 1, cos φ, sin φ, . . . , cos mφ, sin mφ, . . . }.
If we now introduce in problem (7.3.28) the new variable x by x = cos θ and
write X(x) = P(θ), then for the found µm = m², problem (7.3.28) becomes

(7.3.29)    d/dx [ (1 − x²) dX/dx ] + ( λ − m²/(1 − x²) ) X(x) = 0,   −1 < x < 1,
            | lim_{x→−1+} X(x) | < ∞,   | lim_{x→1−} X(x) | < ∞.

As we saw in the section on vibrations of the ball in Chapter 5, as well as in
the section on heat distribution in the ball in Chapter 6, problem (7.3.29) has
a bounded solution only if λ = n(n + 1) for some nonnegative integer n, and
in this case the bounded solutions Xmn(x) are given by

    Xmn(x) = Pn^(m)(x),   −1 < x < 1,

where Pn^(m)(x) are the associated Legendre polynomials. Therefore, the
eigenfunctions of problem (7.3.28) are

    { Pn(cos θ), Pn^(m)(cos θ) cos mφ, Pn^(m)(cos θ) sin mφ : n ≥ 0, m ∈ N },

where Pn(·) is the Legendre polynomial of order n and Pn^(m)(·) is the asso-
ciated Legendre polynomial.
Now, let us solve Equation (7.3.24) for the found λ = n(n + 1). Two
independent solutions of this equation are

    R(r) = rⁿ,   R(r) = r^{−(n+1)},   0 < r ≤ 1.

Since R(r) must be bounded,
    R(r) = rⁿ
is the only solution (up to a multiplicative constant) of problem (7.3.24).
Now, consider the series

    ∑_{n=0}^∞ ∑_{m=0}^n [ Amn cos mφ + Bmn sin mφ ] rⁿ Pn^(m)(cos θ).

Since the above series is uniformly convergent for each fixed 0 ≤ r < 1 it can
be differentiated twice term by term and since each member of the series is
a harmonic function on the unit ball the series is a harmonic function on the
unit ball. Therefore, the general solution of the original problem is given by
(7.3.30)    u(r, θ, φ) = ∑_{n=0}^∞ ∑_{m=0}^n [ Amn cos mφ + Bmn sin mφ ] rⁿ Pn^(m)(cos θ).
From the boundary condition

    f(θ, φ) = u(1, θ, φ) = ∑_{n=0}^∞ ∑_{m=0}^n [ Amn cos mφ + Bmn sin mφ ] Pn^(m)(cos θ),

using the orthogonality property of the associated Legendre polynomials, dis-
cussed in Section 3.3.2 of Chapter 3, we obtain that the coefficients Amn and
Bmn in (7.3.30) are determined by the formulas

(7.3.31)    Amn = (1/Nmn) ∫₀^{2π} ∫₀^π f(θ, φ) Pn^(m)(cos θ) cos mφ sin θ dθ dφ,

            Bmn = (1/Nmn) ∫₀^{2π} ∫₀^π f(θ, φ) Pn^(m)(cos θ) sin mφ sin θ dθ dφ,

            Nmn = [ 2πcm / (2n + 1) ] · (n + m)! / (n − m)!,

where
    cm = { 2,  m = 0
         { 1,  m > 0.
Therefore, the solution of our original problem (7.3.21), (7.3.22) is given by
(7.3.30), where Amn and Bmn are given by (7.3.31).
Example 7.3.6. Find the harmonic function u(r, θ) in the unit ball

    { (r, θ, φ) : 0 ≤ r < 1 }

such that
    u(1, θ) = sin⁴θ,   0 ≤ θ < π.

Solution. Notice that we assumed that the function u(r, θ) is independent of
the variable φ. Therefore, the general solution of the Laplace equation in the
ball is given by

    u(r, θ) = ∑_{n=0}^∞ an rⁿ Pn(cos θ),
n=0

where Pn (·) are the Legendre polynomials of order n. To find the coeffi-
cients an we use the boundary condition at r = 1:
(7.3.32)    sin⁴θ = u(1, θ) = ∑_{n=0}^∞ an Pn(cos θ).
Using the orthogonality property of the Legendre polynomials, from the above
expansion we have

    an ∫₀^π Pn²(cos θ) sin θ dθ = ∫₀^π sin⁴θ Pn(cos θ) sin θ dθ.

From Chapter 3 we know that

    ∫₀^π Pn²(cos θ) sin θ dθ = 2/(2n + 1),

and so

    an = [ (2n + 1)/2 ] ∫₀^π sin⁴θ Pn(cos θ) sin θ dθ.

It is not straightforward to evaluate the above integrals directly. In this
situation it is much easier to use only (7.3.32) if we introduce the variable x
by x = cos θ. Then from (7.3.32) we have

(7.3.33)    1 − 2x² + x⁴ = a₀P₀(x) + a₁P₁(x) + a₂P₂(x) + a₃P₃(x) + a₄P₄(x) + . . . .

If we recall from Chapter 3 that the first five Legendre polynomials are

    P₀(x) = 1,   P₁(x) = x,   P₂(x) = (3x² − 1)/2,
    P₃(x) = (5x³ − 3x)/2,   P₄(x) = (35x⁴ − 30x² + 3)/8,

then from (7.3.33), by comparison we obtain

    a₀ = 8/15,   a₁ = a₃ = 0,   a₂ = −16/21,   a₄ = 8/35,   and an = 0 for n ≥ 5.
Therefore, the solution u(r, θ) of the Laplace equation is given by

    u(r, θ) = 8/15 − (16/21) r² P₂(cos θ) + (8/35) r⁴ P₄(cos θ),

from which we obtain

    u(r, θ) = 8/15 − (8/21) r² (3 cos²θ − 1) + (1/35) r⁴ (35 cos⁴θ − 30 cos²θ + 3).
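Hand expansion into Legendre polynomials is easy to slip on, so a machine cross-check is worthwhile. The Python sketch below converts the power-basis coefficients of 1 − 2x² + x⁴ into Legendre-series coefficients (note that the expansion must give 1 at x = 0, i.e. θ = π/2, and 0 at x = 1):

```python
import numpy as np
from numpy.polynomial import legendre

# sin^4(theta) = (1 - x^2)^2 = 1 - 2x^2 + x^4 with x = cos(theta).
# poly2leg converts power-basis coefficients (lowest degree first)
# into Legendre-series coefficients a_n with sum a_n P_n(x).
a_n = legendre.poly2leg([1.0, 0.0, -2.0, 0.0, 1.0])
print(a_n)                             # a0, a1, a2, a3, a4
print([8/15, 0.0, -16/21, 0.0, 8/35])  # expected values
```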

The Poisson equation on the unit ball can be solved in a similar way as
in the unit disc. For the solution of the Poisson boundary value problem see
Exercise 19 of this section.
Exercises for Section 7.3.

1. Use the separation of variables method to solve the Neumann prob-
   lem for the Laplace equation in a disc, i.e., solve the boundary value
   problem

       ∆ u(r, φ) = 0,    0 < r < a, 0 ≤ φ < 2π,
       un(a, φ) = f(φ),  0 ≤ φ < 2π,

   where n is the outward unit normal vector to the circle, provided that

       ∫₀^{2π} f(φ) dφ = 0.

2. Use the separation of variables method to solve the Neumann problem
   for the Laplace equation outside a disc, i.e., solve the boundary value
   problem

       ∆ u(r, φ) = 0,    a < r < ∞, 0 ≤ φ < 2π,
       un(a, φ) = f(φ),  0 ≤ φ < 2π,
       | lim_{r→∞} u(r, φ) | < ∞,  0 ≤ φ < 2π,

   where n is the inward unit normal vector to the circle, provided that

       ∫₀^{2π} f(φ) dφ = 0.

In Exercises 3–7 find the harmonic function u(r, φ) in the unit disc which
on the boundary assumes the given function f (φ).
3. f (φ) = A sin φ.

4. f (φ) = A sin3 φ + B.
5. $f(\varphi) = \begin{cases} A\sin\varphi, & 0 \le \varphi < \pi,\\[2pt] \dfrac{A}{3}\sin^3\varphi, & \pi < \varphi < 2\pi.\end{cases}$

6. $f(\varphi) = \begin{cases} 1, & 0 \le \varphi < \pi,\\ 0, & \pi < \varphi < 2\pi.\end{cases}$

7. $f(\varphi) = \frac{1}{2}(\pi - \varphi)$.

8. Solve the Dirichlet problem for the Laplace equation in an annulus,


i.e., solve the problem
$$\Delta u(r,\varphi) = 0,\quad a < r < b,\ 0 \le \varphi < 2\pi,$$
$$u(a,\varphi) = f(\varphi),\quad u(b,\varphi) = g(\varphi),\quad 0 \le \varphi < 2\pi.$$

What is the solution if b = 1, f (φ) = 0 and g(φ) = 1 + 2 sin φ?


430 7. LAPLACE AND POISSON EQUATIONS

In Exercises 9–10 solve the Laplace equation

∆ u(r, φ) = 0, (r, φ) ∈ Ω

on the indicated domain Ω and the indicated boundary conditions.


9. $\Omega = \big\{(r,\varphi) : 0 < r < a,\ 0 < \varphi < \frac{\pi}{2}\big\}$; $u_\varphi(r,0) = u_\varphi(r,\frac{\pi}{2}) = 0$,
$$u(a,\varphi) = \begin{cases} 1, & 0 < \varphi < \frac{\pi}{4},\\ 0, & \frac{\pi}{4} < \varphi < \frac{\pi}{2}.\end{cases}$$

10. $\Omega = \big\{(r,\varphi) : 0 < r < 2,\ 0 < \varphi < \pi\big\}$; $u_\varphi(r,0) = u_\varphi(r,\pi) = 0$,
$$u(2,\varphi) = \begin{cases} c, & 0 < \varphi < \frac{\pi}{2},\\ 0, & \frac{\pi}{2} < \varphi < \pi.\end{cases}$$

In Exercises 11–13 solve the Poisson boundary value problem


$$\Delta u(r,\varphi) = F(r,\varphi),\quad 0 < r < 1,\ 0 < \varphi < 2\pi,$$
$$u(1,\varphi) = f(\varphi),\quad 0 < \varphi < 2\pi.$$

11. F (r, φ) = 1, f (φ) = 0.

12. F (r, φ) = 2 + r3 cos 3φ, f (φ) = 0.

13. F (r, φ) = 1, f (φ) = sin 2φ.


In Exercises 14–15 solve the Laplace boundary value problem on a cylinder, i.e., find the function $u(r,\varphi,z)$ which satisfies the problem
$$\Delta u(r,\varphi,z) = 0,\quad 0 < r < 1,\ 0 < \varphi < 2\pi,\ 0 < z < 2,$$
$$u(1,\varphi,z) = f(\varphi,z),\quad u(r,\varphi,0) = g(r,\varphi),\quad u(r,\varphi,2) = h(r,\varphi).$$

14. $f(\varphi,z) = 0$, $g(r,\varphi) = 1$, $h(r,\varphi) = 0$.

15. $f(\varphi,z) = 0$, $g(r,\varphi) = G(r)$, $h(r,\varphi) = H(r)$.


In Exercises 16–19 find the function $u(r,\theta,\varphi) = u(r,\theta)$ which satisfies the problem
$$\Delta u \equiv \frac{1}{r^2}\frac{\partial}{\partial r}\Big(r^2\frac{\partial u}{\partial r}\Big) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\Big(\sin\theta\,\frac{\partial u}{\partial\theta}\Big) = 0,\quad 0 < r < 1,\ 0 < \theta < \pi,$$
$$u(1,\theta,\varphi) = f(\theta),\quad 0 < \theta < \pi.$$
16. $f(\theta) = \begin{cases} 4, & 0 < \theta < \frac{\pi}{2},\\ 0, & \frac{\pi}{2} < \theta < \pi.\end{cases}$

17. $f(\theta) = 1 - \cos\theta$, $0 < \theta < \pi$.

18. $f(\theta) = 2 + \cos^2\theta$, $0 < \theta < \pi$.

19. $f(\theta) = \begin{cases} 4\cos\theta, & 0 < \theta < \frac{\pi}{2},\\ 0, & \frac{\pi}{2} < \theta < \pi.\end{cases}$

20. Solve the Laplace equation outside the unit ball with given Dirichlet boundary values, i.e., solve the problem
$$\Delta u(r,\theta,\varphi) = 0,\quad r > 1,\ 0 < \varphi < 2\pi,\ 0 < \theta < \pi,$$
$$u(1,\theta,\varphi) = f(\theta,\varphi),\quad 0 < \theta < \pi,\ 0 < \varphi < 2\pi,$$
$$\Big|\lim_{r\to+\infty} u(r,\theta,\varphi)\Big| < \infty,\quad 0 < \theta < \pi,\ 0 < \varphi < 2\pi.$$

21. Solve the Laplace equation inside the unit ball with given Neumann boundary values, i.e., solve the problem
$$\Delta u(r,\theta,\varphi) = 0,\quad 0 < r < 1,\ 0 < \varphi < 2\pi,\ 0 < \theta < \pi,$$
$$u_n(1,\theta,\varphi) = f(\theta,\varphi),\quad 0 < \theta < \pi,\ 0 < \varphi < 2\pi,$$
$$\Big|\lim_{r\to 0^+} u(r,\theta,\varphi)\Big| < \infty,\quad 0 < \theta < \pi,\ 0 < \varphi < 2\pi,$$
where $n$ is the outward unit normal vector to the sphere and $f(\theta,\varphi)$ is a given function which satisfies the condition
$$\int_0^{2\pi}\!\!\int_0^{\pi} f(\theta,\varphi)\,\sin\theta\,d\theta\,d\varphi = 0.$$

22. Show that the solution $u(r,\theta,\varphi)$ of the Laplace equation between two concentric spheres with the given Dirichlet boundary values
$$\Delta u(r,\theta,\varphi) = 0,\quad r_1 < r < r_2,\ 0 < \varphi < 2\pi,\ 0 < \theta < \pi,$$
$$u(r_2,\theta,\varphi) = f(\theta),\quad 0 < \theta < \pi,\ 0 < \varphi < 2\pi,$$
$$u(r_1,\theta,\varphi) = 0,\quad 0 < \theta < \pi,\ 0 < \varphi < 2\pi,$$
is given by
$$u(r,\theta,\varphi) = \sum_{n=0}^{\infty} a_n\left[\Big(\frac{r}{r_1}\Big)^n - \Big(\frac{r_1}{r}\Big)^{n+1}\right]P_n(\cos\theta),$$
where the coefficients $a_n$ are determined by
$$a_n = \frac{2n+1}{2}\cdot\frac{1}{\big(\frac{r_2}{r_1}\big)^n - \big(\frac{r_1}{r_2}\big)^{n+1}}\int_0^{\pi} f(\theta)\,P_n(\cos\theta)\,\sin\theta\,d\theta.$$
Hint: Combine the solution of the Laplace equation inside a ball with the solution of the Laplace equation outside a ball (see Exercise 20 of this section).

7.4 Integral Transform Methods for the Laplace Equation.

In this section we will use the Fourier and Hankel transforms to solve the Laplace equation on unbounded domains.
7.4.1 The Fourier Transform Method for the Laplace Equation.
Let us apply the Fourier transform to several examples of elliptic boundary
value problems.
Example 7.4.1. Use the Fourier transform to solve the following boundary
value problem on the right half plane.

 uxx (x, y) + uyy (x, y) = 0, x > 0, −∞ < y < ∞

u(0, y) = f (y), −∞ < y < ∞

 lim u(x, y) = 0 −∞ < y < ∞.
x→+∞

Solution. Let
$$U(x,\omega) = \mathcal{F}\big(u(x,y)\big) = \int_{-\infty}^{\infty} u(x,y)\,e^{-i\omega y}\,dy$$
and
$$U(0,\omega) = \mathcal{F}(f) = F(\omega).$$
If we apply the Fourier transform to the Laplace equation, then in view of the
boundary condition for the function u(x, y) at x = 0, we obtain the ordinary
differential equation
$$\frac{d^2U(x,\omega)}{dx^2} - \omega^2 U(x,\omega) = 0,\quad x \ge 0.$$
The general solution of the above equation is given by
$$U(x,\omega) = C_1(\omega)\,e^{|\omega|x} + C_2(\omega)\,e^{-|\omega|x}.$$


7.4.1 FOURIER TRANSFORM METHOD FOR THE LAPLACE EQUATION 433

In view of the condition $\lim_{x\to+\infty} U(x,\omega) = 0$ we have $C_1(\omega) = 0$, while from $U(0,\omega) = F(\omega)$ it follows that $C_2(\omega) = F(\omega)$. Therefore,
$$U(x,\omega) = F(\omega)\,e^{-|\omega|x}.$$
Now, using the fact that
$$\mathcal{F}^{-1}\big(e^{-|\omega|x}\big)(y) = \frac{1}{\pi}\,\frac{x}{x^2 + y^2}$$
(see Table B of Appendix B), from the convolution theorem for the Fourier transform we have
$$(7.4.1)\qquad u(x,y) = \frac{1}{\pi}\Big(f * \frac{x}{x^2 + (\,\cdot\,)^2}\Big)(y) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(t)\,\frac{x}{x^2 + (y-t)^2}\,dt.$$
The function
$$P_x(y) = \frac{1}{\pi}\,\frac{x}{x^2 + y^2}$$
is called the Poisson kernel for the Laplace equation on the right half plane $\{(x,y) : x > 0\}$, and Equation (7.4.1) is known as the Poisson integral formula for harmonic functions on the right half plane.
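The harmonicity of the Poisson kernel can be sanity-checked numerically; the following Python sketch (function names are ours) applies a central-difference Laplacian to $P_x(y)$ at a few interior points:

```python
import math

def P(x, y):
    # Poisson kernel for the right half plane x > 0
    return x / (math.pi * (x * x + y * y))

def laplacian(f, x, y, h=1e-4):
    # second-order central-difference approximation of u_xx + u_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

# P_x(y) should be (numerically) harmonic away from the boundary x = 0
for x, y in [(1.0, 0.5), (2.0, -1.0), (0.3, 3.0)]:
    assert abs(laplacian(P, x, y)) < 1e-5
```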

Example 7.4.2. Solve the following Dirichlet boundary value problem in the given semi-infinite strip
$$u_{xx}(x,y) + u_{yy}(x,y) = 0,\quad 0 < x < \infty,\ 0 < y < b,$$
$$u(x,0) = f(x),\quad u(x,b) = 0,\quad 0 < x < \infty,$$
$$u(0,y) = 0,\quad \lim_{x\to+\infty} u(x,y) = 0,\quad 0 < y < b,$$
and, by letting $b \to \infty$, solve the boundary value problem for the Laplace equation in the first quadrant $\{(x,y) : 0 < x < \infty,\ 0 < y < \infty\}$.
Solution. Since the boundary conditions prescribe the values of the function itself rather than of its derivatives, we use the Fourier sine transform (with respect to $x$). Let
$$U(\omega,y) = \mathcal{F}_s\big(u(x,y)\big) = \int_0^{\infty} u(x,y)\,\sin(\omega x)\,dx,$$
and
$$U(\omega,0) = \mathcal{F}_s(f) = F(\omega).$$
If we apply the Fourier sine transform to the Laplace equation, then in view of the boundary condition for the function $u(x,y)$ at $x = 0$, we obtain the ordinary differential equation
$$\frac{d^2U(\omega,y)}{dy^2} - \omega^2 U(\omega,y) = 0,\quad 0 \le y \le b.$$

The general solution of the above equation is given by
$$U(\omega,y) = C_1(\omega)\sinh(\omega y) + C_2(\omega)\cosh(\omega y).$$
In view of the boundary conditions $U(\omega,0) = F(\omega)$ and $U(\omega,b) = 0$ we have
$$C_2(\omega) = F(\omega),\qquad C_1(\omega)\sinh(\omega b) + C_2(\omega)\cosh(\omega b) = 0.$$
Solving the above system for $C_1(\omega)$ and $C_2(\omega)$ we obtain
$$U(\omega,y) = F(\omega)\,\frac{\sinh\big(\omega(b-y)\big)}{\sinh(\omega b)}.$$
The inverse Fourier sine transform implies that the solution of the original problem is given by
$$u(x,y) = \frac{2}{\pi}\int_0^{\infty} U(\omega,y)\,\sin(\omega x)\,d\omega$$
$$(7.4.2)\qquad = \frac{2}{\pi}\int_0^{\infty} \sin(\omega x)\,F(\omega)\,\frac{\sinh\big(\omega(b-y)\big)}{\sinh(\omega b)}\,d\omega = \frac{2}{\pi}\int_0^{\infty}\!\!\int_0^{\infty} f(t)\,\sin(\omega t)\,\sin(\omega x)\,\frac{\sinh\big(\omega(b-y)\big)}{\sinh(\omega b)}\,dt\,d\omega.$$

Now in (7.4.2) let $b \to \infty$. Assuming that we can pass the limit inside the integrals, from the following limit (obtained by l'Hospital's rule)
$$\lim_{b\to\infty}\frac{\sinh\big(\omega(b-y)\big)}{\sinh(\omega b)} = e^{-\omega y}$$
it follows that
$$u(x,y) = \frac{2}{\pi}\int_0^{\infty} f(t)\left(\int_0^{\infty}\sin(\omega t)\,\sin(\omega x)\,e^{-\omega y}\,d\omega\right)dt.$$

If for the inside integral we use the product-to-sum formula
$$\sin(\omega t)\sin(\omega x) = \frac{1}{2}\Big[\cos\big(\omega(t-x)\big) - \cos\big(\omega(t+x)\big)\Big]$$
and integration by parts, then we obtain
$$\int_0^{\infty}\sin(\omega t)\,\sin(\omega x)\,e^{-\omega y}\,d\omega = \frac{1}{2}\int_0^{\infty}\Big[\cos\big(\omega(t-x)\big) - \cos\big(\omega(t+x)\big)\Big]e^{-\omega y}\,d\omega$$
$$= \frac{1}{2}\left[\frac{y}{(x-t)^2 + y^2} - \frac{y}{(x+t)^2 + y^2}\right].$$

Therefore, the solution of the given problem is
$$u(x,y) = \frac{y}{\pi}\int_0^{\infty} f(t)\left[\frac{1}{(x-t)^2 + y^2} - \frac{1}{(x+t)^2 + y^2}\right]dt.$$
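As a numerical sanity check of this formula (a Python sketch with our own helper names; the boundary data $f(t) = e^{-t}$ and the quadrature parameters are illustrative assumptions), the solution should approach the boundary data as $y \to 0^+$:

```python
import math

def f(t):
    # sample boundary data; any integrable f works
    return math.exp(-t)

def u(x, y, T=60.0, n=200000):
    # composite trapezoid rule for the quadrant formula on [0, T]
    h = T / n
    s = 0.0
    for k in range(n + 1):
        t = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * f(t) * (1.0 / ((x - t)**2 + y**2) - 1.0 / ((x + t)**2 + y**2))
    return y / math.pi * s * h

# as y -> 0+ the solution recovers the boundary data f(x)
assert abs(u(1.0, 0.01) - math.exp(-1)) < 0.02
```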

Example 7.4.3. Solve the boundary value problem
$$u_{xx}(x,y) + u_{yy}(x,y) = 0,\quad 0 < x < a,\ 0 < y < \infty,$$
$$u(0,y) = 0,\quad u(a,y) = f(y),\quad 0 < y < \infty,$$
$$u_y(x,0) = 0,\quad 0 < x < a.$$

Solution. Since the boundary condition at $y = 0$ involves a derivative of the required function we will use the Fourier cosine transform (with respect to $y$). Let
$$U(x,\omega) = \mathcal{F}_c\big(u(x,y)\big) = \int_0^{\infty} u(x,y)\,\cos(\omega y)\,dy,$$
and
$$F(\omega) = \mathcal{F}_c(f).$$
If we apply the Fourier cosine transform to the Laplace equation, then in view of the boundary condition for the function $u(x,y)$ at $y = 0$, we obtain the ordinary differential equation
$$\frac{d^2U(x,\omega)}{dx^2} - \omega^2 U(x,\omega) - u_y(x,0) = 0,\quad 0 \le x \le a,$$
i.e., since $u_y(x,0) = 0$,
$$\frac{d^2U(x,\omega)}{dx^2} - \omega^2 U(x,\omega) = 0,\quad 0 \le x \le a.$$
The general solution of the above equation is given by
$$U(x,\omega) = C_1(\omega)\sinh(\omega x) + C_2(\omega)\cosh(\omega x).$$
From the boundary condition $u(0,y) = 0$ it follows that $U(0,\omega) = 0$, and therefore $C_2(\omega) = 0$. From the other boundary condition $u(a,y) = f(y)$ we have
$$C_1(\omega)\sinh(\omega a) = F(\omega),$$
and so
$$U(x,\omega) = F(\omega)\,\frac{\sinh(\omega x)}{\sinh(\omega a)}.$$

From the inversion formula for the Fourier cosine transform we obtain
$$u(x,y) = \frac{2}{\pi}\int_0^{\infty} U(x,\omega)\,\cos(\omega y)\,d\omega = \frac{2}{\pi}\int_0^{\infty}\!\!\int_0^{\infty} f(t)\,\cos(\omega t)\,\cos(\omega y)\,\frac{\sinh(\omega x)}{\sinh(\omega a)}\,dt\,d\omega,$$
and after interchanging the order of integration it follows that
$$u(x,y) = \frac{2}{\pi}\int_0^{\infty}\left(\int_0^{\infty}\cos(\omega t)\,\cos(\omega y)\,\frac{\sinh(\omega x)}{\sinh(\omega a)}\,d\omega\right)f(t)\,dt.$$

Now, we will introduce the two dimensional Fourier transform, which can be applied to solve the Laplace equation on unbounded domains.
The Fourier transform of an integrable function $f(\mathbf{x})$, $\mathbf{x} = (x,y) \in \mathbb{R}^2$, is defined by
$$\hat{f}(\mathbf{w}) = \mathcal{F}\big(f(\mathbf{x})\big) = \iint_{\mathbb{R}^2} f(\mathbf{x})\,e^{-i\mathbf{w}\cdot\mathbf{x}}\,d\mathbf{x},$$
where for $\mathbf{x} = (x,y)$ and $\mathbf{w} = (\xi,\eta)$, $\mathbf{w}\cdot\mathbf{x}$ is the usual dot product defined by
$$\mathbf{w}\cdot\mathbf{x} = \xi x + \eta y.$$
The higher dimensional Fourier transform is defined similarly.
The basic properties of the two dimensional Fourier transform are just like
those of the one dimensional Fourier transform and they are summarized in
the following theorem.
Theorem 7.4.1. Suppose that $f(\mathbf{x})$, $\mathbf{x} = (x,y)$, is integrable on $\mathbb{R}^2$. Then
(a) For any fixed $\mathbf{a} \in \mathbb{R}^2$,
$$\mathcal{F}\big(f(\mathbf{x}-\mathbf{a})\big) = e^{-i\mathbf{w}\cdot\mathbf{a}}\,\mathcal{F}\big(f(\mathbf{x})\big),\qquad \mathcal{F}\big(e^{i\mathbf{a}\cdot\mathbf{x}}f(\mathbf{x})\big) = \hat{f}(\mathbf{w}-\mathbf{a}).$$
(b) If the partial derivatives $f_x(\mathbf{x})$ and $f_y(\mathbf{x})$ exist and are integrable on $\mathbb{R}^2$, then
$$\mathcal{F}(f_x) = i\xi\,\mathcal{F}(f),\qquad \mathcal{F}(f_y) = i\eta\,\mathcal{F}(f).$$
(c) If $g$ and the convolution $f*g$ are integrable on $\mathbb{R}^2$, then
$$\mathcal{F}(f*g) = \mathcal{F}(f)\,\mathcal{F}(g).$$

(The convolution $f*g$ of the functions $f$ and $g$ is defined by
$$\big(f*g\big)(\mathbf{x}) = \iint_{\mathbb{R}^2} f(\mathbf{y})\,g(\mathbf{x}-\mathbf{y})\,d\mathbf{y}.\,)$$
(d) If $f$ is integrable and continuous on $\mathbb{R}^2$ and $F(\mathbf{w}) = \mathcal{F}(f)$ is also integrable on $\mathbb{R}^2$, then
$$f(\mathbf{x}) = \frac{1}{4\pi^2}\iint_{\mathbb{R}^2} e^{i\mathbf{w}\cdot\mathbf{x}}\,F(\mathbf{w})\,d\mathbf{w}.$$

Example 7.4.4. Solve the three dimensional boundary value problem
$$u_{xx}(x,y,z) + u_{yy}(x,y,z) + u_{zz}(x,y,z) = 0,\quad x,y \in \mathbb{R},\ 0 < z < \infty,$$
$$u(x,y,0) = f(x,y),\quad -\infty < x < \infty,\ -\infty < y < \infty,$$
$$\lim_{|\mathbf{x}|\to\infty} u(x,y,z) = 0,\quad\text{where } |\mathbf{x}| = \sqrt{x^2 + y^2 + z^2}.$$

Solution. We apply the two dimensional Fourier transform with respect to $x$ and $y$. Let $U(\xi,\eta,z) = \mathcal{F}(u)$ and $F(\xi,\eta) = \mathcal{F}(f)$. If we use part (b) of Theorem 7.4.1, then the three dimensional Laplace equation and the boundary conditions become
$$U_{zz}(\xi,\eta,z) - (\xi^2 + \eta^2)\,U(\xi,\eta,z) = 0,$$
$$U(\xi,\eta,0) = F(\xi,\eta),\qquad \lim_{z\to\infty} U(\xi,\eta,z) = 0.$$
The solution of the above differential equation, in view of the boundary conditions, is given by
$$U(\xi,\eta,z) = F(\xi,\eta)\,e^{-\sqrt{\xi^2+\eta^2}\,z}.$$

Using the convolution property (c) of Theorem 7.4.1 we obtain
$$u(x,y,z) = \big(f * g\big)(x,y,z),$$
where
$$g(x,y,z) = \mathcal{F}^{-1}\Big(e^{-\sqrt{\xi^2+\eta^2}\,z}\Big).$$
To find the function $g(x,y,z)$ is not a trivial matter. Using the inversion Fourier transform formula, given in part (d) of the above theorem, we have
$$g(x,y,z) = \frac{1}{4\pi^2}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} e^{i(x\xi+y\eta)}\,e^{-\sqrt{\xi^2+\eta^2}\,z}\,d\xi\,d\eta.$$

It can be shown that
$$g(x,y,z) = \frac{1}{2\pi}\,\frac{z}{(x^2 + y^2 + z^2)^{3/2}}.$$
(See Exercise 2, page 247, in the book by G. B. Folland, [6] for details.)
Therefore,
$$u(x,y,z) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} f(\xi,\eta)\,g(x-\xi,\,y-\eta,\,z)\,d\xi\,d\eta = \frac{z}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{f(\xi,\eta)}{\big[(x-\xi)^2 + (y-\eta)^2 + z^2\big]^{3/2}}\,d\xi\,d\eta.$$
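A quick check that $g$ behaves like a Poisson kernel for the half space: its integral over the whole plane equals 1 for every $z > 0$, since in polar coordinates $\int_0^\infty z\,r\,(r^2+z^2)^{-3/2}\,dr = 1$. The following Python sketch (our own helper, with an illustrative truncation radius) confirms this numerically:

```python
import math

def mass(z, R=1000.0, n=200000):
    # integral of g over R^2 in polar coordinates: ∫_0^R z r (r^2 + z^2)^(-3/2) dr
    h = R / n
    s = 0.0
    for k in range(n + 1):
        r = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * z * r / (r * r + z * z) ** 1.5
    return s * h

# the truncated mass is 1 - z / sqrt(R^2 + z^2), which tends to 1 as R grows
assert abs(mass(2.0) - 1.0) < 5e-3
assert abs(mass(0.5) - 1.0) < 5e-3
```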

7.4.2 The Hankel Transform Method.


In this section we will introduce the Hankel transform and we will apply it
to solve the Laplace equation on some unbounded domains.
Let $f(r)$ be a function defined on $[0,\infty)$. The Hankel transform of order $n$, $n$ a nonnegative integer, of the function $f$, denoted by $F_n = H_n(f)$, is defined by
$$H_n(f)(\omega) = \int_0^{\infty} r f(r)\,J_n(\omega r)\,dr,$$
where $J_n(\cdot)$ is the Bessel function of the first kind of order $n$, provided that the improper integral exists.
For the Hankel transform we have the inversion formula
$$f(r) = H_n^{-1}\big(F_n(\omega)\big) = \int_0^{\infty} \omega F_n(\omega)\,J_n(\omega r)\,d\omega.$$
The Hankel transform can be defined for any order $\mu \ge -\frac{1}{2}$, but for our purposes we will need the Hankel transform only of nonnegative integer order $n$.
Important cases for our applications in solving partial differential equations
are n = 0 and n = 1 and the most important properties (for our applications)
of the Hankel transform are the following.
Property 1. Let $f(r)$ be a function defined on $r \ge 0$. If $f(r)$ and $f'(r)$ are bounded at the origin $r = 0$ and satisfy the boundary conditions
$$\lim_{r\to\infty}\sqrt{r}\,f(r) = \lim_{r\to\infty}\sqrt{r}\,f'(r) = 0,$$
$$\Big|\lim_{r\to 0^+} r f'(r)\,J_n(\omega r)\Big| < \infty,\qquad \Big|\lim_{r\to 0^+} r f(r)\,J_n'(\omega r)\Big| < \infty,$$
7.4.2 THE HANKEL TRANSFORM METHOD 439

then
$$(7.4.3)\qquad \int_0^{\infty} rJ_n(\omega r)\left[f''(r) + \frac{1}{r}f'(r) - \frac{n^2}{r^2}f(r)\right]dr = -\omega^2 F_n(\omega),$$
where $F_n(\cdot)$ is the Hankel transform of $f$ of order $n$.

Proof. We integrate by parts twice:
$$\int_0^{\infty} rJ_n(\omega r)\left[f''(r) + \frac{1}{r}f'(r) - \frac{n^2}{r^2}f(r)\right]dr = \int_0^{\infty}\left[\frac{d}{dr}\Big(r\frac{df(r)}{dr}\Big) - \frac{n^2}{r}f(r)\right]J_n(\omega r)\,dr$$
$$= \Big[r f'(r)\,J_n(\omega r) - \omega r f(r)\,J_n'(\omega r)\Big]_{r=0}^{r=\infty} - \omega^2\int_0^{\infty} r f(r)\,J_n(\omega r)\,dr = -\omega^2 F_n(\omega).$$
Notice that in the above we have used Bessel's equation for $J_n$, together with the facts that the Bessel function $J_n(\omega r)$ is bounded at zero and that $\lim_{r\to\infty}\sqrt{r}\,J_n(\omega r) = 0$. ■

Property 2. For the Bessel function of order 0 the following is true:
$$(7.4.4)\qquad \int_0^{\infty} \frac{rJ_0(\omega r)}{\sqrt{r^2 + z^2}}\,dr = \frac{1}{\omega}\,e^{-\omega z}.$$
This property can be justified easily using the definition of the Hankel transform.

Property 3. For the Bessel function of order 0 the following is true:
$$(7.4.5)\qquad \int_0^{\infty} J_0(\omega r)\,e^{-z\omega}\,d\omega = \frac{1}{\sqrt{r^2 + z^2}}.$$
The property can be verified by using the series expansion of the Bessel function $J_0(\omega r)$ (from Chapter 3) and integrating the series term by term.
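Identity (7.4.5) can also be checked numerically. The sketch below (Python; the quadrature-based $J_0$ and the truncation parameters are our own illustrative choices) uses the integral representation $J_0(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin t)\,dt$:

```python
import math

def J0(x, n=200):
    # Bessel J_0 via the integral representation (1/pi) * integral of cos(x sin t), t in [0, pi]
    h = math.pi / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.cos(x * math.sin(k * h))
            for k in range(n + 1))
    return s * h / math.pi

def laplace_J0(r, z, W=30.0, n=2000):
    # truncated integral of J0(w r) e^{-z w} over [0, W]; e^{-z w} kills the tail
    h = W / n
    s = sum((0.5 if k in (0, n) else 1.0) * J0(k * h * r) * math.exp(-z * k * h)
            for k in range(n + 1))
    return s * h

r, z = 0.5, 1.0
assert abs(laplace_J0(r, z) - 1.0 / math.sqrt(r * r + z * z)) < 1e-2
```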

For some additional properties of the Hankel transform see Exercise 11 of


this section.

Example 7.4.5. Find the steady temperature function $u(r,\varphi,z) = u(r,z)$ in the semi-infinite cylinder $\{(r,\varphi,z) : 0 \le r < \infty,\ 0 \le \varphi < 2\pi,\ 0 \le z < \infty\}$ in the following two cases:
(a) The temperature on the boundary $z = 0$ is equal to $f(r)$.
(b) The temperature on the boundary $z = 0$ is equal to $T$ for $r < a$, and it is equal to $0$ for $r > a$.

Solution. For part (a) we need to solve the boundary value problem
$$(7.4.6)\qquad \Delta u(r,z) = \frac{1}{r}\frac{\partial}{\partial r}\Big(r\frac{\partial u}{\partial r}\Big) + \frac{\partial^2 u}{\partial z^2} = 0,\quad r > 0,\ z > 0,$$
$$(7.4.7)\qquad u(r,0) = f(r),\qquad \lim_{z\to\infty} u(r,z) = 0,\quad r > 0,$$
$$(7.4.8)\qquad \lim_{r\to\infty} u(r,z) = \lim_{r\to\infty} u_r(r,z) = 0,\quad z > 0.$$

Let
$$U(\omega,z) = H_0\big(u(r,z)\big) = \int_0^{\infty} rJ_0(\omega r)\,u(r,z)\,dr,\qquad F(\omega) = H_0\big(f(r)\big) = \int_0^{\infty} rJ_0(\omega r)\,f(r)\,dr$$
be the Hankel transforms of order 0 of $u(r,z)$ and $f(r)$, respectively.
If we multiply both sides of (7.4.6) by $rJ_0(\omega r)$ and integrate with respect to $r$, then for the radial part, using integration by parts and the boundary conditions (7.4.8), we have
$$\int_0^{\infty} rJ_0(\omega r)\,\frac{1}{r}\frac{\partial}{\partial r}\Big(r\frac{\partial u}{\partial r}\Big)dr = \int_0^{\infty} J_0(\omega r)\,\frac{\partial}{\partial r}\Big(r\frac{\partial u}{\partial r}\Big)dr$$
$$= \Big[rJ_0(\omega r)\frac{\partial u}{\partial r}\Big]_{r=0}^{r=\infty} - \omega\int_0^{\infty} rJ_0'(\omega r)\,\frac{\partial u}{\partial r}\,dr = -\omega\Big[r u(r,z)J_0'(\omega r)\Big]_{r=0}^{r=\infty} + \omega\int_0^{\infty} u(r,z)\,\frac{\partial}{\partial r}\big(rJ_0'(\omega r)\big)dr$$
$$= \omega\int_0^{\infty} u(r,z)\Big[J_0'(\omega r) + \omega r J_0''(\omega r)\Big]dr.$$
Since $J_0(\cdot)$ is the Bessel function of order zero,
$$J_0''(\omega r) + \frac{1}{\omega r}J_0'(\omega r) + J_0(\omega r) = 0,\quad\text{i.e.}\quad J_0'(\omega r) + \omega r J_0''(\omega r) = -\omega r J_0(\omega r),$$
and substituting this above we obtain
$$(7.4.9)\qquad \int_0^{\infty} J_0(\omega r)\,\frac{\partial}{\partial r}\Big(r\frac{\partial u}{\partial r}\Big)dr = -\omega^2\int_0^{\infty} r u(r,z)\,J_0(\omega r)\,dr = -\omega^2\,U(\omega,z),$$
which, together with the $z$-derivative term, implies the boundary value problem
$$\frac{d^2U(\omega,z)}{dz^2} - \omega^2 U(\omega,z) = 0,$$
$$U(\omega,0) = F(\omega),\qquad \lim_{z\to\infty} U(\omega,z) = 0,\quad 0 < z < \infty.$$

The solution of the last boundary value problem is given by
$$U(\omega,z) = F(\omega)\,e^{-\omega z}.$$
Taking the inverse Hankel transform we obtain that the solution $u = u(r,z)$ of the original boundary value problem is given by
$$u(r,z) = \int_0^{\infty} \omega J_0(\omega r)\,F(\omega)\,e^{-\omega z}\,d\omega = \int_0^{\infty}\left(\int_0^{\infty} \sigma J_0(\omega\sigma)\,f(\sigma)\,d\sigma\right)\omega J_0(\omega r)\,e^{-\omega z}\,d\omega.$$

The solution of (b) follows from (a) by substituting the given function $f$:
$$u(r,z) = \int_0^{\infty}\left(\int_0^{\infty} \sigma J_0(\omega\sigma)\,f(\sigma)\,d\sigma\right)\omega J_0(\omega r)\,e^{-\omega z}\,d\omega = T\int_0^{\infty}\left(\int_0^{a} \sigma J_0(\omega\sigma)\,d\sigma\right)\omega J_0(\omega r)\,e^{-\omega z}\,d\omega$$
$$= T\int_0^{\infty}\left(\frac{1}{\omega^2}\int_0^{a\omega} \rho J_0(\rho)\,d\rho\right)\omega J_0(\omega r)\,e^{-\omega z}\,d\omega = T\int_0^{\infty}\left(\int_0^{a\omega} \rho J_0(\rho)\,d\rho\right)\frac{1}{\omega}\,J_0(\omega r)\,e^{-\omega z}\,d\omega.$$
0 0

If we use the following identity for the Bessel function
$$\int_0^{c} \rho J_0(\rho)\,d\rho = cJ_1(c)$$
(see the section on the Bessel functions in Chapter 3), then the above solution $u(r,z)$ can be written as
$$u(r,z) = aT\int_0^{\infty} J_0(\omega r)\,J_1(a\omega)\,e^{-\omega z}\,d\omega.$$
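The identity $\int_0^c \rho J_0(\rho)\,d\rho = cJ_1(c)$ used above can be verified numerically; in this Python sketch (helper names and quadrature parameters are ours) $J_0$ and $J_1$ are computed from their integral representations:

```python
import math

def J0(x, n=200):
    # J_0 via (1/pi) * integral of cos(x sin t) over t in [0, pi]
    h = math.pi / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.cos(x * math.sin(k * h))
            for k in range(n + 1))
    return s * h / math.pi

def J1(x, n=200):
    # J_1 via (1/pi) * integral of cos(t - x sin t) over t in [0, pi]
    h = math.pi / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.cos(k * h - x * math.sin(k * h))
            for k in range(n + 1))
    return s * h / math.pi

def int_rho_J0(c, n=1000):
    # integral of rho * J0(rho) over [0, c] by the trapezoid rule
    h = c / n
    s = sum((0.5 if k in (0, n) else 1.0) * (k * h) * J0(k * h)
            for k in range(n + 1))
    return s * h

for c in (0.5, 2.0, 5.0):
    assert abs(int_rho_J0(c) - c * J1(c)) < 1e-3
```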

Example 7.4.6. Find the solution $u = u(r,\varphi,z) = u(r,z)$ of the boundary value problem
$$\text{(a)}\quad \Delta u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{\partial^2 u}{\partial z^2} = 0,\quad 0 < r < \infty,\ z \in \mathbb{R},$$
$$\text{(b)}\quad \lim_{r\to\infty} u(r,z) = \lim_{|z|\to\infty} u(r,z) = 0,$$
$$\text{(c)}\quad \lim_{r\to 0} r^2 u(r,z) = 0,\qquad \lim_{r\to 0} r\,u_r(r,z) = f(z).$$

Solution. Let
$$U(\omega,z) = H_0\big(u(r,z)\big) = \int_0^{\infty} rJ_0(\omega r)\,u(r,z)\,dr$$
be the Hankel transform of order 0 of $u(r,z)$ with respect to $r$. If we take $n = 0$ in Property 1, keeping the boundary terms, then we obtain
$$\frac{d^2U(\omega,z)}{dz^2} - \omega^2 U(\omega,z) + \Big[r\,u_r(r,z)\,J_0(\omega r) - \omega r\,u(r,z)\,J_0'(\omega r)\Big]_{r=0}^{r=\infty} = 0.$$
From the given boundary conditions (b) at $r = \infty$ and the boundary conditions (c), in view of the following properties of the Bessel functions
$$J_0(0) = 1,\qquad J_0'(r) = -J_1(r),\qquad \lim_{r\to 0^+}\frac{1}{r}\,J_1(\omega r) = \frac{\omega}{2},$$
it follows that
$$\frac{d^2U(\omega,z)}{dz^2} - \omega^2 U(\omega,z) = f(z),\quad -\infty < z < \infty.$$

Using $U(\omega,z) \to 0$ as $|z| \to \infty$ we obtain that the solution of the above differential equation is given by
$$U(\omega,z) = -\frac{1}{2\omega}\int_{-\infty}^{\infty} e^{-\omega|z-\tau|}\,f(\tau)\,d\tau.$$

If we take the inverse Hankel transform of the above $U(\omega,z)$ and use (7.4.5) (after changing the integration order) we obtain that the solution $u(r,z)$ of our original problem is given by
$$u(r,z) = \int_0^{\infty} \omega J_0(\omega r)\,U(\omega,z)\,d\omega = -\frac{1}{2}\int_{-\infty}^{\infty}\left(\int_0^{\infty} e^{-\omega|z-\tau|}\,J_0(\omega r)\,d\omega\right)f(\tau)\,d\tau$$
$$= -\frac{1}{2}\int_{-\infty}^{\infty}\frac{f(\tau)}{\sqrt{r^2 + (z-\tau)^2}}\,d\tau.$$
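One can check numerically that this $U$ indeed satisfies $U'' - \omega^2 U = f$; the following Python sketch (our own helpers, with $f$ a Gaussian as an illustrative choice) compares a finite-difference second derivative against $\omega^2 U + f$:

```python
import math

def f(t):
    # illustrative source term
    return math.exp(-t * t)

def U(z, w, L=8.0, n=20000):
    # U(w, z) = -(1/2w) * integral of e^{-w|z - tau|} f(tau), truncated to [-L, L]
    h = 2 * L / n
    s = 0.0
    for k in range(n + 1):
        tau = -L + k * h
        wt = 0.5 if k in (0, n) else 1.0
        s += wt * math.exp(-w * abs(z - tau)) * f(tau)
    return -s * h / (2 * w)

w, z, h = 1.5, 0.3, 0.05
Upp = (U(z + h, w) - 2 * U(z, w) + U(z - h, w)) / h**2  # finite-difference U''
assert abs(Upp - w * w * U(z, w) - f(z)) < 1e-2
```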

Exercises for Section 7.4.


In Problems 1–9, use one of the Fourier transforms to solve the Laplace
equation
uxx (x, y) + uyy (x, y) = 0, (x, y) ∈ D
on the indicated domain D, subject to the given boundary value conditions.
1. $D = \{(x,y) : 0 < x < \pi,\ 0 < y < \infty\}$; $u(0,y) = 0$, $u(\pi,y) = e^{-y}$ for $y > 0$ and $u_y(x,0) = 0$ for $0 < x < \pi$.

2. $D = \{(x,y) : 0 < x < \infty,\ 0 < y < 2\}$; $u(0,y) = 0$ for $0 < y < 2$ and $u(x,0) = f(x)$, $u(x,2) = 0$ for $0 < x < \infty$.

3. $D = \{(x,y) : 0 < x < \infty,\ 0 < y < \infty\}$; $u(0,y) = e^{-y}$ for $0 < y < \infty$ and $u(x,0) = e^{-x}$ for $0 < x < \infty$.

4. $D = \{(x,y) : -\infty < x < \infty,\ 0 < y < \infty\}$; $u(x,0) = f(x)$ for $-\infty < x < \infty$, $\lim_{|x|,\,y\to\infty} u(x,y) = 0$.

5. $D = \{(x,y) : -\infty < x < \infty,\ 0 < y < \infty\}$; $u(x,0) = \begin{cases} 1, & |x| < 1,\\ 0, & |x| > 1.\end{cases}$

6. $D = \{(x,y) : -\infty < x < \infty,\ 0 < y < \infty\}$; $u(x,0) = \dfrac{1}{4 + x^2}$.

7. $D = \{(x,y) : -\infty < x < \infty,\ 0 < y < \infty\}$; $u(x,0) = \cos x$.

8. $D = \{(x,y) : 0 < x < 1,\ 0 < y < \infty\}$; $u(0,y) = 0$, $u(1,y) = e^{-y}$ for $y > 0$ and $u(x,0) = 0$ for $0 < x < 1$.

9. $D = \{(x,y) : 0 < x < \infty,\ 0 < y < \infty\}$; $u(x,0) = 0$ for $x > 0$ and
$$u(0,y) = \begin{cases} 1, & 0 < y < 1,\\ 0, & y > 1.\end{cases}$$
10. Using the two dimensional Fourier transform show that the solution $u(x,y)$ of the equation
$$u_{xx}(x,y) + u_{yy}(x,y) - u(x,y) = -f(x,y),\quad x,y \in \mathbb{R},$$
where $f$ is a square integrable function on $\mathbb{R}^2$, is given by
$$u(x,y) = \frac{1}{4\pi}\int_0^{\infty}\frac{e^{-t}}{t}\left(\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2+(y-\eta)^2}{4t}}\,f(\xi,\eta)\,d\xi\,d\eta\right)dt.$$
Hint: Use the result
$$\mathcal{F}^{-1}\left(\frac{1}{1+\xi^2+\eta^2}\right)(x,y) = \frac{1}{4\pi}\int_0^{\infty}\frac{e^{-t}}{t}\,e^{-\frac{x^2+y^2}{4t}}\,dt.$$

11. Using properties of the Bessel functions prove the following identities for the Hankel transform.
$$\text{(a)}\quad H_0\Big(\frac{1}{x}\Big)(\omega) = \frac{1}{\omega}.$$
$$\text{(b)}\quad H_0\big(e^{-ax}\big)(\omega) = \frac{a}{\sqrt{(a^2+\omega^2)^3}}.$$
$$\text{(c)}\quad H_0\Big(\frac{1}{x}\,e^{-ax}\Big)(\omega) = \mathcal{L}\big(J_0(ax)\big)(\omega) = \frac{1}{\sqrt{a^2+\omega^2}}.$$
$$\text{(d)}\quad H_0\big(e^{-a^2x^2}\big)(\omega) = \frac{1}{2a^2}\,e^{-\frac{\omega^2}{4a^2}}.$$
$$\text{(e)}\quad H_n\big(f(ax)\big)(\omega) = \frac{1}{a^2}\,H_n\big(f(x)\big)\Big(\frac{\omega}{a}\Big).$$

In Problems 12–13, use the Hankel transform to solve the partial differential
equation
$$\frac{1}{r}\frac{\partial}{\partial r}\Big(r\frac{\partial u}{\partial r}\Big) + \frac{\partial^2 u}{\partial z^2} = 0,\quad 0 < r < \infty,\ 0 < z < \infty,$$
7.5 PROJECTS USING MATHEMATICA 445

subject to the given boundary conditions.

12. $\lim_{z\to\infty} u(r,z) = 0$, $\lim_{r\to\infty} u(r,z) = 0$, $u(r,0) = \begin{cases} 1, & 0 \le r < a,\\ 0, & a < r < \infty.\end{cases}$

13. $\lim_{z\to\infty} u(r,z) = 0$, $\lim_{r\to\infty} u(r,z) = 0$, $u(r,0) = \dfrac{1}{\sqrt{a^2 + r^2}}$.

7.5 Projects Using Mathematica.


In this section we will see how Mathematica can be used to solve several
problems involving the Laplace and Poisson equations. In particular, we will
develop several Mathematica notebooks which automate the computations of
the solutions of these equations.
Project 7.5.1. Use the separation of variables method to solve the Laplace equation on the given square:
$$u_{xx}(x,y) + u_{yy}(x,y) = 0,\quad 0 < x < \pi,\ 0 < y < \pi,$$
$$u(x,0) = u(x,\pi) = f(x) = \begin{cases} x, & 0 < x < \frac{\pi}{2},\\ \pi - x, & \frac{\pi}{2} < x < \pi,\end{cases}$$
$$u(0,y) = u(\pi,y) = f(y),\quad 0 < y < \pi.$$

All calculation should be done using Mathematica. Display the plot of


u(x, y).
Solution. In Example 7.2.1 we solved the subproblem with the boundary values $f(x)$ on the horizontal sides and zero on the vertical sides, and we found that its solution is given by
$$w(x,y) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{\sin\frac{n\pi}{2}}{n^2\sinh n\pi}\Big[\sinh(ny) + \sinh\big(n(\pi-y)\big)\Big]\sin nx.$$
Accordingly, we split the problem into the two problems
$$v_{xx}(x,y) + v_{yy}(x,y) = 0,\quad v(0,y) = v(\pi,y) = f(y),\quad v(x,0) = v(x,\pi) = 0,$$
and
$$w_{xx}(x,y) + w_{yy}(x,y) = 0,\quad w(x,0) = w(x,\pi) = f(x),\quad w(0,y) = w(\pi,y) = 0.$$
The solution of the original problem is $u(x,y) = v(x,y) + w(x,y)$. Because of the symmetry $v(x,y) = w(y,x)$, we need to solve only the problem for the function $w(x,y)$.
Let us use Mathematica to "derive" this result. Let $w[x,y] = X[x]\,Y[y]$. After separating the variables we obtain the equations
$$X''(x) + \lambda^2 X(x) = 0,\quad X(0) = 0,\quad X(\pi) = 0,$$
$$Y''(y) - \lambda^2 Y(y) = 0.$$

First define the boundary function f(x):

In[1] := f[x_] := Piecewise[{{x, 0 < x < Pi/2}, {Pi - x, Pi/2 <= x < Pi}}];

Solve the eigenvalue problem for X(x):

In[2] := DSolve[{X''[x] + λ^2*X[x] == 0, X[0] == 0}, X, x]
Out[2] = {{X -> Function[{x}, C[2] Sin[x λ]]}}

Use the boundary condition X(π) = 0:

In[3] := Reduce[Sin[λ*Pi] == 0, λ, Integers]
Out[3] = C[1] ∈ Integers && λ == C[1]

Define the eigenvalues λn = n:

In[4] := λ[n_] := n;

Now solve the equation for Y(y):

In[5] := DSolve[Y''[y] - λ[n]^2*Y[y] == 0, Y, y]
Out[5] = {{Y -> Function[{y}, E^(n y) C[1] + E^(-n y) C[2]]}}

Define the eigenfunctions Xn(x) and the functions Yn(y) by

In[6] := X[x_, n_] := Sin[n*x];
In[7] := Y[y_, n_] := A[n]*E^(-n*y) + B[n]*E^(n*y);
Now find A[n], B[n] from the conditions w(x,0) = w(x,π) = f(x):

In[8] := a[n_] = FullSimplify[Integrate[f[x]*Sin[λ[n]*x], {x, 0, Pi}], Assumptions -> {Element[n, Integers]}]
Out[8] = 2 Sin[n π/2]/n^2

In[9] := b[n_] = FullSimplify[Integrate[(Sin[λ[n]*x])^2, {x, 0, Pi}]]
Out[9] = π/2

In[10] := Solve[(A[n] + B[n])*b[n] == a[n] && (A[n]*E^(-n*Pi) + B[n]*E^(n*Pi))*b[n] == a[n], {A[n], B[n]}, Reals];
In[11] := FullSimplify[%]
Out[11] = {{A[n] -> 4 E^(n π) Sin[n π/2]/((1 + E^(n π)) n^2 π), B[n] -> 4 Sin[n π/2]/((1 + E^(n π)) n^2 π)}}
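Out[11] can be cross-checked outside Mathematica; the Python sketch below (our own function names) verifies that A[n] and B[n] satisfy both boundary equations from In[10]:

```python
import math

def a(n):
    # Fourier sine coefficient integral from Out[8]
    return 2 * math.sin(n * math.pi / 2) / n**2

def A(n):
    return 4 * math.exp(n * math.pi) * math.sin(n * math.pi / 2) / ((1 + math.exp(n * math.pi)) * n**2 * math.pi)

def B(n):
    return 4 * math.sin(n * math.pi / 2) / ((1 + math.exp(n * math.pi)) * n**2 * math.pi)

b = math.pi / 2  # normalization from Out[9]
for n in range(1, 8):
    # w(x, 0) = f(x) and w(x, pi) = f(x), coefficient by coefficient
    assert abs((A(n) + B(n)) * b - a(n)) < 1e-9
    assert abs((A(n) * math.exp(-n * math.pi) + B(n) * math.exp(n * math.pi)) * b - a(n)) < 1e-9
```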

Now, define the N th partial sum of the solution w(x, y):



In[12] := w[x_, y_, N_] := Sum[X[x, n]*Y[y, n], {n, 1, N}];

The partial sum of the solution v(x,y) is obtained by exchanging the roles of x and y:

In[13] := v[x_, y_, N_] := Sum[X[y, n]*Y[x, n], {n, 1, N}];

Thus, the partial sums of the solution of the original problem are given by

In[14] := u[x_, y_, N_] := v[x, y, N] + w[x, y, N];

The plot of u[x, y, 100] is displayed in Figure 7.5.1.

Figure 7.5.1

Project 7.5.2. Plot the solution of the boundary value problem
$$u_{xx}(x,y) + u_{yy}(x,y) = 0,\quad 0 < x < \pi,\ 0 < y < \pi,$$
$$u(x,0) = 0,\quad 0 < x < \pi,$$
$$u(0,y) = 0,\quad 0 < y < \pi.$$

Solution. In Example 7.2.4 we found that the solution of this problem is given by
$$u(x,y) = \frac{1}{\pi^2}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{(-1)^{m+n}}{mn(m^2+n^2)}\,\sin mx\,\sin ny.$$

Define the Nth double partial sum:

In[1] := u[x_, y_, N_] := (1/Pi^2)*Sum[(-1)^(m + n)/(m*n*(m^2 + n^2))*Sin[m*x]*Sin[n*y], {m, 1, N}, {n, 1, N}];

Figure 7.5.2

The plot of u[x, y, 100] is displayed in Figure 7.5.2.

Project 7.5.3. Use the Fourier transform to solve the boundary value problem
$$u_{xx}(x,y) + u_{yy}(x,y) = 0,\quad -\infty < x < \infty,\ 0 < y < \infty,$$
$$u(x,0) = f(x),\quad -\infty < x < \infty,$$
$$\lim_{y\to\infty} u(x,y) = 0,\quad -\infty < x < \infty,$$
$$\lim_{|x|\to\infty} u(x,y) = 0,\quad 0 < y < \infty.$$
Plot the solution $u(x,y)$ of the problem for the boundary value functions
$$\text{(a)}\quad f(x) = \begin{cases} \frac{1}{2}, & 0 < x < \frac{1}{2},\\ 0, & x \le 0 \text{ or } x \ge \frac{1}{2}.\end{cases}$$
$$\text{(b)}\quad f(x) = x^2 e^{-x^4},\quad -\infty < x < \infty.$$
In each case plot the level sets of the solution.
Solution. (a) Define the boundary function f(x):

In[1] := f[x_] := Piecewise[{{0, x <= 0}, {1/2, 0 < x <= 1/2}, {0, x > 1/2}}];
Next, define the Laplace operator L u = uxx + uyy:

In[2] := Lapu = D[u[x, y], x, x] + D[u[x, y], y, y];

Find the Fourier transform of the Laplace operator with respect to x:

In[3] := FourierTransform[D[u[x, y], x, x], x, ω, FourierParameters -> {1, -1}] + FourierTransform[D[u[x, y], y, y], x, ω, FourierParameters -> {1, -1}]
Out[3] = -ω^2 FourierTransform[u[x, y], x, ω] + FourierTransform[u^(0,2)[x, y], x, ω, FourierParameters -> {1, -1}]

Remark. In Mathematica, by default the Fourier transform of a function f and the inverse Fourier transform of a function F are defined by
$$\mathcal{F}(f)(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{i\omega x}\,dx,\qquad \mathcal{F}^{-1}(F)(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} F(\omega)\,e^{-i\omega x}\,d\omega,$$
and are computed by

In[] := F[ω] = FourierTransform[f[x], x, ω];
In[] := InverseFourierTransform[F[ω], ω, x];
Define the differential expression:

In[4] := de = -ω^2*U[ω, y] + D[U[ω, y], {y, 2}]
Out[4] = -ω^2 U[ω, y] + U^(0,2)[ω, y]

Next solve the differential equation:

In[5] := ftu = DSolve[de == 0, U[ω, y], y]
Out[5] = {{U[ω, y] -> E^(y Abs[ω]) C[1] + E^(-y Abs[ω]) C[2]}}

Write the solution in the standard form:

In[6] := sol = U[ω, y] /. ftu[[1]]
Out[6] = E^(y Abs[ω]) C[1] + E^(-y Abs[ω]) C[2]

Use the boundary conditions to obtain:

In[7] := U[ω, y] = F[ω]*E^(-y*Abs[ω]);

Find the inverse Fourier transform of e^(-y|ω|):

In[8] := p[x_] = InverseFourierTransform[Exp[-y*Abs[ω]], ω, x, FourierParameters -> {1, -1}]
Out[8] = y/(π(x^2 + y^2))

Use the convolution theorem to find the solution u(x,y):

In[9] := u[x_, y_] = Convolve[f[t], p[t], t, x]
Out[9] = (ArcCot[2y/(1 - 2x)] + ArcTan[x/y])/(2π)

Alternatively, we integrate numerically:

In[10] := Clear[u];
In[11] := u[x_, y_] = If[y > 0, NIntegrate[(1/Pi)*y*f[t]/((x - t)^2 + y^2), {t, -Infinity, Infinity}], f[x]];
In[12] := Plot3D[u[x, y], {x, -4, 4}, {y, 0.005, 4}, PlotRange -> All]
In[13] := ContourPlot[u[x, y], {x, -4, 4}, {y, 0.005, 4}, FrameLabel -> {"x", "y"}, ContourLabels -> True]
The plots of the solutions u(x,y) and their level sets for (a) and (b) are displayed in Figures 7.5.3 and 7.5.4, respectively.

Figure 7.5.3. (1a) the surface plot of u(x,y); (2a) its level sets.

Figure 7.5.4. (1b) the surface plot of u(x,y); (2b) its level sets.
CHAPTER 8

FINITE DIFFERENCE NUMERICAL METHODS

Despite the analytical methods developed in the previous chapters for solving the wave, heat and Laplace equations, these methods are often impractical, and we are forced to find approximate solutions of these equations. In this chapter we will discuss finite difference numerical methods for solving partial differential equations.
The first two sections are devoted to some mathematical preliminaries re-
quired for developing these numerical methods. Some basic facts of linear
algebra, numerical iterative methods for linear systems and finite differences
are described in these sections.
All the numerical results of this chapter and various graphics were produced
using the mathematical software Mathematica.

8.1 Basics of Linear Algebra and Iterative Methods.


Because of their importance in the study of systems of linear equations,
we briefly review some facts from linear algebra. More detailed properties
of matrices discussed in this section can be found in many linear algebra
textbooks, such as the book by J. W. Demmel [9].
A matrix A of type m × n is a table of the form
$$(8.1.1)\qquad A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \dots & a_{mn}\end{pmatrix}$$
( )
with m rows and n columns. The matrix A will (be denoted
) by A = aij .
The( transpose
) matrix of the m × n matrix A = aij is the n × m matrix
AT = aji .
An n × n matrix A is called symmetric if AT = A.
 
a1
 a2 
An n × 1 matrix a =  
 ...  usually is called a column vector.
an
If $A = (a_{ij})$ and $B = (b_{ij})$ are two m × n matrices and c is a scalar, then cA and A + B are defined by
$$cA = (c\,a_{ij}),\qquad A + B = (a_{ij} + b_{ij}).$$

452 8. FINITE DIFFERENCE NUMERICAL METHODS

A set $S = \{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_k\}$ of finitely many n × 1 vectors is said to be linearly independent if none of the vectors of S can be expressed as a linear combination of the other vectors in S.
If $A = (a_{ij})$ is an m × n matrix and $B = (b_{ij})$ is an n × p matrix, then their product AB is the m × p matrix $C = (c_{ij})$ defined by
$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj},\qquad i = 1, 2, \dots, m,\quad j = 1, 2, \dots, p.$$

In general, matrix multiplication is not a commutative operation.


Example 8.1.1. Compute the products AB and BA for the matrices
$$A = \begin{pmatrix} 0 & 4 & 5\\ 1 & 2 & 3\end{pmatrix},\qquad B = \begin{pmatrix} 2 & 4\\ 2 & 2\\ 3 & 2\end{pmatrix}.$$

Solution. Because A is a 2 × 3 matrix and B is a 3 × 2 matrix, AB is a 2 × 2 matrix and BA is a 3 × 3 matrix:
$$AB = \begin{pmatrix} 23 & 18\\ 15 & 14\end{pmatrix},\qquad BA = \begin{pmatrix} 4 & 16 & 22\\ 2 & 12 & 16\\ 2 & 16 & 21\end{pmatrix},$$
and so the products are not equal.
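The definition of the product can be checked directly; the short Python sketch below (a hypothetical matmul helper) reproduces both products of Example 8.1.1:

```python
def matmul(A, B):
    # c_ij = sum over k of a_ik * b_kj
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[0, 4, 5],
     [1, 2, 3]]
B = [[2, 4],
     [2, 2],
     [3, 2]]

assert matmul(A, B) == [[23, 18], [15, 14]]
assert matmul(B, A) == [[4, 16, 22], [2, 12, 16], [2, 16, 21]]
# matrix multiplication is not commutative: AB is 2x2 while BA is 3x3
```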


  
Two vectors $\mathbf{x} = (x_1, x_2, \dots, x_n)^T$ and $\mathbf{y} = (y_1, y_2, \dots, y_n)^T$ are said to be orthogonal if
$$\mathbf{x}^T\mathbf{y} = \sum_{k=1}^{n} x_k y_k = 0.$$

An n × n matrix is called a square matrix.


The n × n matrix
$$I_n = \begin{pmatrix} 1 & 0 & \dots & 0\\ 0 & 1 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & 1\end{pmatrix}$$

is called the n × n identity matrix.


For any square n × n matrix A we have that AIn = In A = A.
8.1 LINEAR ALGEBRA AND ITERATIVE METHODS 453

A square matrix is called diagonal if the only nonzero entries are on the
main diagonal.
The determinant of a square n × n matrix A, denoted by det(A) or |A|, can be defined inductively:
If $A = (a_{11})$ is a 1 × 1 matrix, then det(A) = $a_{11}$. If
$$A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix}$$
is a 2 × 2 matrix, then
$$\det(A) = \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}.$$

For an n × n matrix A given by (8.1.1) we proceed as follows: Let $A_{ij}$ be the (n−1) × (n−1) matrix obtained by deleting the ith row and the jth column of A. Then, for any fixed row index i (cofactor expansion along the ith row), we define
$$\det(A) = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn}\end{vmatrix} = \sum_{j=1}^{n} (-1)^{i+j}\,a_{ij}\,\det(A_{ij}).$$
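The inductive definition translates directly into a recursive routine; the Python sketch below (our own det helper) expands along the first row, the case i = 1 of the formula above:

```python
def det(M):
    # cofactor expansion along the first row, following the inductive definition
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # delete row 0, column j
        total += (-1) ** j * M[0][j] * det(minor)
    return total

assert det([[5]]) == 5
assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 1], [1, 3, -1], [0, 5, 2]]) == 27
```

This exponential-time recursion is fine for illustrating the definition; in practice determinants are computed by Gaussian elimination.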

A square n × n matrix A is said to be invertible or nonsingular if there


exists an n × n matrix B such that

AB = BA = In .

In this case, the matrix B is called the inverse of A and it is denoted by


A−1 .
A matrix A is invertible if and only if det(A) ̸= 0.
A nonzero n × 1 column vector x is called an eigenvector of an n × n
matrix A if there is a number (real or complex) λ such that

Ax = λx.

The number λ is called an eigenvalue of the matrix A with corresponding


eigenvector x.
The eigenvalues λ of a matrix A can be found by solving the nth
degree polynomial equation, called the characteristic equation of A,

det(λIn − A) = 0.

Example 8.1.2. Find the eigenvalues and corresponding eigenvectors of the matrix
$$A = \begin{pmatrix} 4 & -6\\ 3 & -7\end{pmatrix}.$$

Solution. The eigenvalues of A can be found by solving the characteristic equation
$$\det(\lambda I_2 - A) = \begin{vmatrix} \lambda - 4 & 6\\ -3 & \lambda + 7\end{vmatrix} = \lambda^2 + 3\lambda - 10 = 0.$$

The solutions of the last equation are λ = 2 and λ = −5.
If $\mathbf{x} = (x_1, x_2)^T$ is an eigenvector of A corresponding to the eigenvalue λ = 2, then
$$\begin{pmatrix} 4 & -6\\ 3 & -7\end{pmatrix}\begin{pmatrix} x_1\\ x_2\end{pmatrix} = 2\begin{pmatrix} x_1\\ x_2\end{pmatrix}.$$
From the last equation we obtain the linear system
$$2x_1 - 6x_2 = 0,\qquad 3x_1 - 9x_2 = 0.$$
This system has infinitely many solutions given by
$$\begin{pmatrix} x_1\\ x_2\end{pmatrix} = x_2\begin{pmatrix} 3\\ 1\end{pmatrix}.$$

Taking $x_2 \ne 0$ we obtain the eigenvectors of A corresponding to λ = 2. Similarly, the eigenvectors of A which correspond to λ = −5 are given by
$$\begin{pmatrix} x_1\\ x_2\end{pmatrix} = t\begin{pmatrix} 2\\ 3\end{pmatrix},\qquad t \ne 0.$$
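The eigenpairs found above can be verified by direct multiplication; a short Python sketch (helper names are ours):

```python
def matvec(A, x):
    # matrix-vector product A x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4, -6],
     [3, -7]]

# eigenpair (lambda = 2, x = (3, 1)) and (lambda = -5, x = (2, 3))
for lam, x in [(2, [3, 1]), (-5, [2, 3])]:
    assert matvec(A, x) == [lam * xi for xi in x]
```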

For real symmetric matrices we have the following result.


Theorem 8.1.1. If A is a square and real symmetric matrix, then
1. The eigenvalues of A are real numbers.

2. The eigenvectors corresponding to distinct eigenvalues are orthogonal


to each other.

3. The matrix A is diagonalizable, i.e.,

A = U DU T

where D is the diagonal matrix with the eigenvalues of the matrix A


along the main diagonal, and U is the orthogonal matrix (U T U = I)
whose columns are the corresponding eigenvectors.
8.1 LINEAR ALGEBRA AND ITERATIVE METHODS 455

A real n × n matrix A is called positive definite if

    xT Ax > 0

for every nonzero n × 1 vector x.
    All eigenvalues of a real positive definite matrix are positive numbers.
The notions of a norm of vectors and a norm of square matrices are im-
portant in the investigation of iterative methods for solving linear systems of
algebraic equations.
A norm on the space Rn of all n × 1 vectors, denoted by ∥ ∥, is a real
valued function on Rn which satisfies the following properties.
(1) ∥x∥ > 0 if x ̸= 0 and ∥x∥ = 0 only if x = 0.
(2) ∥αx∥ = |α| ∥x∥ for every α ∈ R and every x ∈ Rn .
(3) ∥x + y∥ ≤ ∥x∥ + ∥y∥ for all x, y ∈ Rn .

For a column vector x = ( x1 , x2 , . . . , xn )T ∈ Rn , the following three norms
are used most often:

    ∥x∥∞ = max_{1≤k≤n} |xk | ,   ∥x∥1 = ∑_{k=1}^{n} |xk | ,   ∥x∥2 = ( ∑_{k=1}^{n} |xk |^2 )^{1/2} .
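Each of these norms takes only a line of code; an illustrative Python sketch (the book's computations are done in Mathematica):

```python
import math

def norm_inf(x):
    """Maximum absolute component."""
    return max(abs(v) for v in x)

def norm_1(x):
    """Sum of absolute components."""
    return sum(abs(v) for v in x)

def norm_2(x):
    """Euclidean length."""
    return math.sqrt(sum(v * v for v in x))
```

For example, for x = (3, −4, 1) these give 4, 8, and √26 respectively.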

A matrix norm ∥ ∥ is a real valued function on the space of all n × n


matrices with the following properties.
(1) ∥A∥ ≥ 0 and ∥A∥ = 0 only if A is the zero matrix.

(2) ∥α A∥ = |α|∥A∥ for every matrix A and every scalar α.

(3) ∥A + B∥ ≤ ∥A∥ + ∥B∥ for every n × n matrix A and B.

(4) ∥A B∥ ≤ ∥A∥ ∥B∥ for every n × n matrix A and B.

The norm of an n × n matrix A, induced by a vector norm ∥ · ∥ in Rn , is
defined by
    ∥A∥ = max { ∥Ax∥ : ∥x∥ = 1 } .

If A = (aij ) is an n × n real matrix, then the matrix norms of A, induced
by the above three vector norms, are given by

    ∥A∥∞ = max_{1≤i≤n} ∑_{j=1}^{n} |aij | ,   ∥A∥1 = max_{1≤j≤n} ∑_{i=1}^{n} |aij | ,   ∥A∥2 = √λmax ,

where λmax is the largest eigenvalue of the symmetric and positive definite
matrix AT A.
If A is a real symmetric and positive definite matrix, then

∥A∥2 = |λmax |,

where λmax is the eigenvalue of A of largest absolute value.
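The induced ∞- and 1-norms are just maximum absolute row and column sums, so they are cheap to evaluate; the 2-norm, by contrast, requires the largest eigenvalue of AT A. An illustrative Python sketch for the first two, using the 4 × 4 matrix that appears in the examples of this section:

```python
def matnorm_inf(A):
    """Induced infinity-norm: maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def matnorm_1(A):
    """Induced 1-norm: maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[9, 2, -1, 3], [2, 8, -3, 1], [-1, -3, 10, -2], [3, 1, -2, 8]]
# row sums of |aij|: 15, 14, 16, 14; column sums: 15, 14, 16, 14
```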


The following simple lemma will be needed.
Lemma 8.1.1. If ∥·∥ is any matrix norm on the space of all n×n matrices,
then
∥A∥ ≥ |λ|,
where λ is any eigenvalue of the matrix A.
Proof. If λ is an eigenvalue of the matrix A, then there exists a nonzero
column vector x such that A x = λx. Therefore,

    |λ| ∥x∥ = ∥λ x∥ = ∥A x∥ ≤ ∥A∥ ∥x∥,

and dividing by ∥x∥ > 0 gives |λ| ≤ ∥A∥. ■

Most of the numerical methods for solving partial differential equations


lead to systems of linear algebraic equations, very often of very large order.
Therefore, we will review some facts about iterative methods for solving linear
systems.
Associated with a linear system of n equations in the n unknowns x1 , x2 , . . . , xn

              a11 x1 + a12 x2 + . . . + a1n xn = b1
              a21 x1 + a22 x2 + . . . + a2n xn = b2
    (8.1.2)                  ..
                              .
              an1 x1 + an2 x2 + . . . + ann xn = bn

are the following square matrix A and column vectors b and x:

        ( a11  a12  ...  a1n )       ( b1 )        ( x1 )
        ( a21  a22  ...  a2n )       ( b2 )        ( x2 )
    A = (  ..   ..  ...   .. ) , b = (  .. ) , x = (  .. ) .
        ( an1  an2  ...  ann )       ( bn )        ( xn )

Using these matrices, the linear system (8.1.2) can be written in the matrix
form

    (8.1.3)    Ax = b.

In our further discussion, the matrix A will be invertible, so the linear


system (8.1.3) has a unique solution.
In general, there are two classes of methods for solving linear systems:
direct elimination methods and iterative methods.
The direct methods are based on the well-known Gauss elimination tech-
nique, which consists of applying row operations to reduce the given system
of equations to an equivalent system which is easier to solve.
For very large linear systems, the direct methods are not very practical,
and so iterative methods are almost always used. Therefore, our main focus
in this section will be on the iterative methods for solving large systems of
linear equations.
All of the iterative methods for the numerical solution of any system of
equations (not only linear) are based on the following very important result
from mathematical analysis, known as the Banach Fixed Point Theorem or
Contraction Mapping Theorem.
Theorem 8.1.2. Contraction Mapping Theorem. Let K be a closed
and bounded subset of Rn , equipped with a norm ∥ ∥. Suppose that f : K →
K is a function such that there is a positive constant q < 1 with the property

∥f (x) − f (y)∥ ≤ q∥x − y∥

for all x, y ∈ K. Then there is a unique point x⋆ ∈ K such that x⋆ = f(x⋆ ).
Moreover, the sequence { x(k) : k = 0, 1, 2, . . . }, defined recursively by

    x(k+1) = f(x(k) ),   k = 0, 1, . . .,

converges to x⋆ for any initial point x(0) ∈ K.
    The following inequalities hold and describe the rate of convergence of the
sequence { x(k) : k = 0, 1, 2, . . . }:

    ∥x(k) − x⋆ ∥ ≤ ( q/(1 − q) ) ∥x(k) − x(k−1) ∥,   k = 1, 2, . . .,

or equivalently

    ∥x(k) − x⋆ ∥ ≤ ( q^k /(1 − q) ) ∥x(1) − x(0) ∥,   k = 1, 2, . . ..

The proof of this theorem is far beyond the scope of this book, and the
interested student is referred to the book by M. Rosenlicht [11], or the book
by M. Reed and B. Simon [15].
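A simple one-dimensional illustration of the theorem in Python (an illustrative sketch, not from the book): f(x) = cos x maps the interval [0, 1] into itself and |f ′(x)| = |sin x| ≤ sin 1 < 1 there, so the iteration x(k+1) = cos x(k) is a contraction and converges to the unique fixed point x⋆ ≈ 0.739085 from any starting point in the interval.

```python
import math

def fixed_point(f, x0, tol=1e-10, maxit=200):
    """Iterate x_{k+1} = f(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(maxit):
        x_new = f(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

x_star = fixed_point(math.cos, 0.5)
```

The a posteriori bound of the theorem guarantees that stopping when consecutive iterates agree to tol leaves an error of at most q/(1 − q) times tol.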
Now we are ready to describe some iterative methods for solving a linear
system.

Usually, a given system of linear equations

Ax = b

is transformed to an equivalent linear system


    x = x − C( Ax − b ) = ( In − CA )x + Cb = Bx + Cb,

where C is some matrix (usually chosen to be invertible). Then the (k + 1)th
approximation x(k+1) of the exact solution x is determined recursively by

    (8.1.4)    x(k+1) = ( In − CA )x(k) + Cb,   k = 0, 1, . . ..

For convergence of this iterative process we have the following theorem.


Theorem 8.1.3. The iterative process

x(k+1) = Bx(k) + Cb

converges for any initial point x(0) if and only if the spectral radius
    ρ(B) := max { |λi | : 1 ≤ i ≤ n } < 1,

where λ1 , . . . , λn are the eigenvalues of the matrix B.

In practice, to find the eigenvalues of matrices is not a simple matter.


Therefore, other sufficient conditions for convergence of the iterative process
are used.
Different choices of the matrix C lead to different iterative methods.
The Jacobi Iteration. Let us assume that the diagonal entries aii of the
matrix A = (aij ) are all nonzero; otherwise we can exchange some rows of
the nonsingular matrix A, if necessary. Next, we split the matrix A into its
strictly lower triangular part L, its diagonal part D and its strictly upper
triangular part U :

        (    0       0       0     ...     0       0 )
        (   a21      0       0     ...     0       0 )
        (   a31     a32      0     ...     0       0 )
    L = (    ..      ..      ..    ...     ..      .. ) ,
        ( an−1,1  an−1,2  an−1,3   ...     0       0 )
        (   an1     an2     an3    ...  an,n−1     0 )

        ( a11   0    ...      0       0  )
        (  0   a22   ...      0       0  )
    D = (  ..   ..   ...      ..      .. )
        (  0    0    ...  an−1,n−1    0  )
        (  0    0    ...      0      ann )

and
        (  0   a12   a13   ...   a1,n−1     a1n   )
        (  0    0    a23   ...   a2,n−1     a2n   )
        (  0    0     0    ...   a3,n−1     a3n   )
    U = (  ..   ..    ..   ...     ..        ..   ) .
        (  0    0     0    ...     0      an−1,n  )
        (  0    0     0    ...     0        0     )

If we take C = D−1 , then the iteration matrix B = In − CA is given by

    B = In − D−1 A = −D−1 ( L + U ),

with entries
           { −aij /aii ,   if i ̸= j,
    bij =  {
           {  0,           if i = j,

and so the iterative process is given by

    (8.1.5)    x(k+1) = −D−1 ( L + U ) x(k) + D−1 b,   k = 0, 1, 2, . . ..

The iterative process (8.1.5) in coordinate form is given by

    (8.1.6)    xi(k+1) = ( bi − ∑_{j̸=i} aij xj(k) ) / aii ,   i = 1, . . . , n;   k = 0, 1, 2, . . ..

The iterative sequence given by (8.1.5) or (8.1.6) is called the Jacobi


Iteration.
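The coordinate form (8.1.6) is straightforward to program. Below is a minimal illustrative Python version (the book's own Mathematica modules appear later in this section), applied to a diagonally dominant 4 × 4 system used in the examples below:

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration (8.1.6): every update uses only the previous iterate."""
    n = len(A)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[9, 2, -1, 3], [2, 8, -3, 1], [-1, -3, 10, -2], [3, 1, -2, 8]]
b = [5, -2, 6, -4]
x = jacobi(A, b, [0, 0, 0, 0], 200)
```

Since A is strictly diagonally dominant, the iterates converge to the exact solution for any starting vector.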

The Gauss–Seidel Iteration. The Gauss–Seidel iterative method is an
improvement of the Jacobi method.
    If the components xj(k+1) , 1 ≤ j ≤ i − 1, already computed in the current
sweep are used in evaluating xi(k+1) , then we obtain the Gauss–Seidel iterative
method:

    (8.1.7)    xi(k+1) = ( bi − ∑_{j=1}^{i−1} aij xj(k+1) − ∑_{j=i+1}^{n} aij xj(k) ) / aii ,   1 ≤ i ≤ n,   k ∈ N,

or in matrix form

    (8.1.8)    x(k+1) = D−1 ( b − Lx(k+1) − U x(k) ),   k ∈ N.
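Formula (8.1.7) differs from the Jacobi sweep only in that each newly computed component is used immediately, which in code amounts to updating the solution vector in place. An illustrative Python sketch (the book's Mathematica version appears later in this section):

```python
def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel iteration (8.1.7): new components are used at once."""
    n = len(A)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            # x[0..i-1] already hold the (k+1)th components
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[9, 2, -1, 3], [2, 8, -3, 1], [-1, -3, 10, -2], [3, 1, -2, 8]]
b = [5, -2, 6, -4]
x = gauss_seidel(A, b, [0, 0, 0, 0], 100)
```

On this system Gauss-Seidel reaches a given accuracy in noticeably fewer sweeps than Jacobi, as Tables 8.1.1 and 8.1.2 below illustrate.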

An n × n matrix A = (aij ) is called a strictly diagonally dominant matrix
if
    ∑_{j̸=i} |aij | < |aii | ,   1 ≤ i ≤ n.

For convergence of the Jacobi and Gauss–Seidel iterative methods we have


the following theorem.
Theorem 8.1.4. Let A = (aij ) be an n × n nonsingular, strictly diagonally
dominant matrix. Then the Jacobi iteration (8.1.5) and the Gauss–Seidel
iteration (8.1.8) converge for every initial point x(0) .
Proof. Let λ be any eigenvalue of the Jacobi iteration matrix B = −D−1 ( L + U )
used in (8.1.5). If we use the ∞ matrix norm of B, then by Lemma 8.1.1
and the strict diagonal dominance of the matrix A it follows that

    |λ| ≤ ∥B∥∞ = max_{1≤i≤n} ∑_{j=1}^{n} |bij | = max_{1≤i≤n} ∑_{j̸=i} |aij | / |aii |
        = max_{1≤i≤n} ( 1/|aii | ) ∑_{j̸=i} |aij | < max_{1≤i≤n} ( 1/|aii | ) |aii | = 1.

Hence ρ(B) < 1, and the conclusion follows from Theorem 8.1.3; an analogous
estimate holds for the Gauss–Seidel iteration matrix.

There is a more general result for convergence of the Jacobi and Gauss–Seidel
iterations, which we state without proof.
Theorem 8.1.5. If the matrix A is symmetric and positive definite, then the
Gauss–Seidel iterative process converges; if, in addition, the matrix 2D − A
is positive definite, then the Jacobi iterative process converges as well.

Example 8.1.3. Solve the linear system


    
    (  9   2  −1   3 ) ( x1 )   (  5 )
    (  2   8  −3   1 ) ( x2 ) = ( −2 )
    ( −1  −3  10  −2 ) ( x3 )   (  6 )
    (  3   1  −2   8 ) ( x4 )   ( −4 )
by the Jacobi and Gauss–Seidel method using 10 iterations.
Solution. First, we check that the matrix of the system is a diagonally domi-
nant matrix. This can be done using Mathematica:
In[1] := A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}}
Out[1] = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}}
In[2] := Table[−Sum[Abs[A[[i, j]]], {j, 1, i − 1}] − Sum[Abs[A[[i, j]]],
            {j, i + 1, 4}] + Abs[A[[i, i]]], {i, 1, 4}]
Out[2] = {3, 2, 4, 2}
Since A is diagonally dominant, the Jacobi and Gauss–Seidel iterations,
given by

     9x1(k+1) = 5 − 2x2(k) + x3(k) − 3x4(k)
     8x2(k+1) = −2 − 2x1(k) + 3x3(k) − x4(k)
    10x3(k+1) = 6 + x1(k) + 3x2(k) + 2x4(k)
     8x4(k+1) = −4 − 3x1(k) − x2(k) + 2x3(k)

and

     9x1(k+1) = 5 − 2x2(k) + x3(k) − 3x4(k)
     8x2(k+1) = −2 − 2x1(k+1) + 3x3(k) − x4(k)
    10x3(k+1) = 6 + x1(k+1) + 3x2(k+1) + 2x4(k)
     8x4(k+1) = −4 − 3x1(k+1) − x2(k+1) + 2x3(k+1) ,

respectively, will converge for any initial point ( x1(0) , x2(0) , x3(0) , x4(0) ).
Taking the initial point ( 0, −1, 1, −1 ) we have the results displayed in
Table 8.1.1 (Jacobi) and Table 8.1.2 (Gauss–Seidel).

Table 8.1.1 (Jacobi)


k\xk x1 x2 x3 x4
1 1.22222 0.25 0.1 −0.12
2 0.552778 −0.502431 0.772222 −0.964583
3 1.07454 0.0219618 0.311632 −0.451432
4 0.735778 −0.345343 0.623756 −0.827789
5 0.977534 −0.0965626 0.404417 −0.57681
6 0.814219 −0.270626 0.553423 −0.753401
7 0.92832 −0.151846 0.449554 −0.633148
8 0.850299 −0.234354 0.520648 −0.716751
9 0.904401 −0.177738 0.471374 −0.659406
10 0.86723 −0.216909 0.505238 −0.69909

Table 8.1.2 (Gauss–Seidel)


k\xk x1 x2 x3 x4
1 0.555556 −0.138889 0.813889 −0.4875
2 0.839352 −0.0936921 0.558328 −0.663464
3 0.859567 −0.172586 0.501488 −0.675392
4 0.87476 −0.196208 0.493535 −0.680125
5 0.880703 −0.200084 0.49202 −0.682248
6 0.882104 −0.200737 0.49154 −0.682812
7 0.882383 −0.200917 0.491401 −0.682929
8 0.882447 −0.20097 0.491368 −0.682954
9 0.882463 −0.200984 0.49136 −0.682961
10 0.882468 −0.200987 0.491359 −0.682962

The above tables were generated by the following Mathematica programs.


Jac[A0_, B0_, X0_, max_] :=
  Module[{A = N[A0], B = N[B0], i, j, k = 0, n = Length[X0], X = X0,
      Xin = X0},
    While[k < max,
      For[i = 1, i <= n, i++,
        X[[i]] = (B[[i]] + A[[i, i]]*Xin[[i]] −
            Sum[A[[i, j]]*Xin[[j]], {j, 1, n}])/A[[i, i]]];
      Xin = X;
      k = k + 1;];
    Return[X];];
A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
B = {5, −2, 6, −4};
X = {0, −1, 1, −1};
X = Jac[A, B, X, 10]

GS[A0_, B0_, X0_, max_] :=
  Module[{A = N[A0], B = N[B0], i, j, k = 0, n = Length[X0], X = X0},
    While[k < max,
      For[i = 1, i <= n, i++,
        X[[i]] = (B[[i]] + A[[i, i]]*X[[i]] −
            Sum[A[[i, j]]*X[[j]], {j, 1, n}])/A[[i, i]]];
      k = k + 1;];
    Return[X];];
X = {0, −1, 1, −1};
X = GS[A, B, X, 10]
If we want the error to be incorporated in the program, which will allow us
to stop the iteration when some required accuracy of the iteration sequence has
been achieved, then we can use the extended Jacobi module and the extended
Gauss–Seidel module. For tracking the error we can use, for example, the
norm
    ∥x(k+1) − x(k) ∥∞ = max_{1≤i≤n} |xi(k+1) − xi(k) | ≤ ϵ.

generate[A_List, b_List] := Module[{B, c, n},
  flag = True;
  n = Length[A];
  Do[If[A[[i, i]] == 0, flag = False], {i, 1, n}];
  If[flag, {B = Table[0, {i, 1, n}, {j, 1, n}];
    c = Table[0, {i, 1, n}];
    Do[
      {Do[If[i != j, B[[i, j]] = −A[[i, j]]/A[[i, i]]], {j, 1, n}];
       c[[i]] = b[[i]]/A[[i, i]]}, {i, 1, n}];}];
  If[!flag, Print["An error has occurred in the construction."]];
  If[flag, {B, c}]
];
A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
b = {5, −2, 6, −4};
{B, c} = generate[A, b];

ExtJac[A_List, b_List, x0_List, e_] := Module[{},
  norm1[xx_List, yy_List] :=
    Max[Table[Abs[xx[[i]] − yy[[i]]], {i, 1, Length[xx]}]];
  norm[xx_List] := Max[Table[Abs[xx[[i]]], {i, 1, Length[xx]}]];
  {B, c} = generate[A, b];
  z = Table[0, {i, 1, Length[x0]}];
  x[0] = x0;
  G[x_List] := B.x + c;
  x[k_ /; k > 0] := x[k] = G[x[k − 1]];
  Do[
    If[norm1[x[k], x[k − 1]] <= e, {z = x[k], savek = k,
      Break[]}], {k, 1, 100}];
];

generate[A_List, b_List] := Module[{B, c, n},
  flag = True;
  n = Length[A];
  Do[If[A[[i, i]] == 0, flag = False], {i, 1, n}];
  If[flag, {B = Table[0, {i, 1, n}, {j, 1, n}];
    c = Table[0, {i, 1, n}];
    Do[
      {Do[If[i != j, B[[i, j]] = −A[[i, j]]/A[[i, i]]], {j, 1, n}];
       c[[i]] = b[[i]]/A[[i, i]]}, {i, 1, n}];}];
  If[flag, {B, c}]
];

A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
b = {5, −2, 6, −4};
{B, c} = generate[A, b];
ExtGS[A_List, b_List, x0_List, e_] := Module[{},
  norm1[xx_List, yy_List] :=
    Max[Table[Abs[xx[[i]] − yy[[i]]], {i, 1, Length[xx]}]];
  norm[xx_List] := Max[Table[Abs[xx[[i]]], {i, 1, Length[xx]}]];
  {B, c} = generate[A, b];
  z = Table[0, {i, 1, Length[x0]}];
  x[0] = x0;
  G[x_List] := B.x + c;
  x[k_ /; k > 0] := x[k] = G[x[k − 1]];
  Do[
    If[norm1[x[k], x[k − 1]] <= e, {z = x[k], savek = k,
      Break[]}], {k, 1, 100}]];
Taking ϵ = 10−8 and

A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
b = {5, −2, 6, −4};

ExtJac[A, b, {0, 0, 0, 0}, 0.00000001]

The result is given in Table 8.1.3 (Extended Jacobi).

Table 8.1.3 Extended Jacobi

k x1 x2 x3 x4
23 0.882469 −0.200988 0.491358 −0.682963

For Gauss–Seidel we have

ExtGS[A, b, {0, 0, 0, 0}, 0.00000001]

The result is given in Table 8.1.4 (Extended Gauss–Seidel).

Table 8.1.4 Extended Gauss–Seidel

k x1 x2 x3 x4
14 0.882469 −0.200988 0.491358 −0.682963

If A is not a diagonally dominant matrix, then the Jacobi iteration process


for the system Ax = b may not converge.

Example 8.1.4. Apply the Jacobi iterative method to the linear system

    
    (  1   2  −1   3 ) ( x1 )   (  1 )
    (  2   2  −3   1 ) ( x2 ) = ( −2 )
    ( −1  −3   1  −2 ) ( x3 )   (  3 )
    (  3   1  −2   6 ) ( x4 )   (  4 )

Solution. Taking the initial approximation (0, 0, 0, 0) in the Jacobi iterative


method, the Jacobi module described above will generate the following ap-
proximations.

Table 8.1.5
k x1 x2 x3 x4
1 1 −1. −1. 0.666667
2 0. −3.83333 −0.333333 0.
3 8.33333 −1.5 −8.5 1.19444
4 −8.08333 −22.6806 4.44444 −6.08333
5 69.0556 16.7917 −60.9583 9.96991
6 −123.45 −166.478 102.491 −56.9792
7 607.384 304.677 −505.927 124.302
8 −1487.19 −1429.43 1275.81 −522.447
9 5703.01 3661.13 −4727.57 1407.77
10 −16272.1 −13499.2 13873.9 −5036.88

This particular system has the exact solution

    ( 35/43, −17/43, 51/43, 31/43 ) = ( 0.813953, −0.395349, 1.18605, 0.72093 ).

But from Table 8.1.5 we can see that the iterations given by the Jacobi method
are getting worse instead of better. Therefore we can conclude that this
iterative process diverges.
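The blow-up seen in Table 8.1.5 is easy to reproduce: running the same Jacobi sweep in Python on this system (an illustrative sketch, not the book's code) shows the iterates growing without bound, consistent with a spectral radius of the iteration matrix larger than 1.

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration: every update uses only the previous iterate."""
    n = len(A)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[1, 2, -1, 3], [2, 2, -3, 1], [-1, -3, 1, -2], [3, 1, -2, 6]]
b = [1, -2, 3, 4]
x5 = jacobi(A, b, [0, 0, 0, 0], 5)
x10 = jacobi(A, b, [0, 0, 0, 0], 10)
growth = max(abs(v) for v in x10)   # far larger than the exact solution
```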

The next example shows that the Gauss–Seidel iterative method is diver-
gent while the Jacobi iterative process is convergent.
Example 8.1.5. Discuss the convergence of the Jacobi and Gauss–Seidel
iterative methods for the system

A · x = b,

where A is the matrix given by


 
        (  1   0   1 )
    A = ( −1   1   0 ) .
        (  1   2  −3 )

Solution. It is obvious that the matrix A is not strictly diagonally dominant.


For the Jacobi iterative method, the iteration matrix B = BJ in (8.1.5) is
−D−1 ( L + U ); since an overall change of sign does not affect the moduli of
the eigenvalues, we may work with

                         ( 1  0   0 )−1 (  0  0  1 )   (   0     0    1 )
    BJ = D−1 ( L + U ) = ( 0  1   0 )   ( −1  0  0 ) = (  −1     0    0 ) .
                         ( 0  0  −3 )   (  1  2  0 )   ( −1/3  −2/3   0 )

Using Mathematica, in order to simplify the calculations, we find that the
characteristic polynomial pBJ (λ) is given by

    pBJ (λ) = −λ^3 − (1/3)λ + 2/3.

Again using Mathematica, we find that the eigenvalues of BJ (the roots of
pBJ (λ)) are

    λ1 = 0.747415,   λ2 = −0.373708 + 0.867355 i,   λ3 = −0.373708 − 0.867355 i.

Their moduli are

    |λ1 | = 0.747415,   |λ2 | = 0.944438,   |λ3 | = 0.944438.

Therefore, the spectral radius ρ(BJ ) = 0.944438 < 1 and so by Theorem
8.1.3 the Jacobi iterative method converges.
For the Gauss–Seidel iterative method, the matrix B = BGS in (8.1.8) is
given by

                            (  1  0   0 )−1 ( 0  0  1 )   ( 0  0  1 )
    BGS = ( L + D )−1 · U = ( −1  1   0 )   ( 0  0  0 ) = ( 0  0  1 ) .
                            (  1  2  −3 )   ( 0  0  0 )   ( 0  0  1 )

Using Mathematica (to simplify the calculations) we find that the character-
istic polynomial pBGS (λ) is given by

pBGS (λ) = −λ3 + λ2 .

We find that the eigenvalues of BGS (the roots of pBGS (λ)) are

λ1 = 0, λ2 = 0, λ3 = 1.

Their moduli are


|λ1 | = 0, |λ2 | = 0, |λ3 | = 1.
Therefore, the spectral radius ρ( BGS ) = 1 and so by Theorem 8.1.3 the
Gauss–Seidel iterative method does not converge for every initial point.

Strict diagonal dominance is not a necessary condition for convergence
of the Jacobi iterative process. There are linear systems whose matrices are
not strictly diagonally dominant but for which the Jacobi iterative method
nevertheless produces convergent iterations. See Exercise 7 of this section.

The Conjugate Gradient Iteration. The Conjugate Gradient Method is


the most popular iterative method for solving large linear systems. It is based
on solving a minimization problem of a specific functional.
Let

(8.1.9) Ax = b

be a given linear system, where A is a symmetric and positive definite matrix.


For an arbitrary column vector x ∈ Rn consider the real valued function

(8.1.10) f (x) = xT Ax − 2bT x.

We will show that the solution x of the system (8.1.9) minimizes the function
f (x) defined in (8.1.10).
Let x̃ be a solution of (8.1.9), i.e., let Ax̃ = b. Since A is symmetric we
have ( Ax )T x = xT Ax, and since A is positive definite we have ( Ax )T x ≥ 0
for every vector x. Therefore,

    f (x) − f (x̃) = xT Ax − 2bT x − x̃T Ax̃ + 2bT x̃
                  = xT Ax − 2x̃T Ax + x̃T Ax̃
                  = ( x − x̃ )T A( x − x̃ ) ≥ 0.

Equality in the last inequality holds only if x = x̃, and so the minimum of
the function (8.1.10) is achieved only at the solution of the system (8.1.9).
There are different methods to find the minimizer of the function (8.1.10).
One of the most popular methods is the steepest descent method. With this
method, from a given approximation x(k) , the new approximation x(k+1) is
chosen, such that it assures the largest decrease of the function (8.1.10).
The new approximation x(k+1) is obtained from the old approximation
(k)
x by the formula
x(k+1) = x(k) + h c,
where the vector c is in the direction of the largest decrease of f (x) (in the
direction of the gradient grad f (x)), and the coefficient h is chosen such that
it ensures the largest decrease of f in that direction. The direction c of the
largest decrease of f (x(k) + h c) is such that the derivative of f (x(k) + h c)
with respect to h when h = 0 has largest modulus. From

    f (x(k) + h c) = ( x(k) + h c )T A( x(k) + h c ) − 2bT ( x(k) + h c )
                   = f (x(k) ) + h^2 cT Ac + 2h cT ( Ax(k) − b ),

i.e.,

    (8.1.11)    f (x(k) + h c) − f (x(k) ) = h^2 cT Ac + 2h cT ( Ax(k) − b ),

we have
    (d/dh) f (x(k) + h c) |_{h=0} = 2 cT r(k) ,

where
    r(k) = Ax(k) − b.

So the direction c of the steepest descent of f is in the direction of the vector
r(k) = Ax(k) − b.
    From (8.1.11), taking c = r(k) , it follows that

    (d/dh) f (x(k) + h c) = 2h ( r(k) )T A r(k) + 2 ( r(k) )T r(k) = 0,

and so
    h = − ( r(k) )T r(k) / ( Ar(k) )T r(k) .
Therefore, the new approximation x(k+1) is given by the formula

    (8.1.12)    x(k+1) = x(k) − [ ( r(k) )T r(k) / ( Ar(k) )T r(k) ] r(k) .

The approximation given by (8.1.12) is called the steepest descent iterative
method.
It can be shown that the convergence rate of the steepest descent iterative
method is of geometric order. More precisely, the following is true:

    ∥x(k+1) − x̃ ∥2 ≤ ( (λmax − λmin )/(λmax + λmin ) )^{k+1} ∥x(0) − x̃ ∥2 ,

where λmax is the largest and λmin is the smallest eigenvalue of A.
The conjugate gradient method is a modification of the steepest descent
method. The algorithm for the conjugate gradient method can be summarized
as follows:
    x(0) ,  initial approximation
    v(0) = r(0) = b − Ax(0)

    hk = ( v(k) )T r(k) / ( v(k) )T Av(k)
    x(k+1) = x(k) + hk v(k)
    r(k+1) = r(k) − hk Av(k)
    tk = − ( v(k) )T Ar(k+1) / ( v(k) )T Av(k)
    v(k+1) = r(k+1) + tk v(k)
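The algorithm above can be transcribed almost line by line. Below is a pure-Python illustrative version (the book's Mathematica implementation follows in Example 8.1.6); in exact arithmetic it reaches the solution of an n × n symmetric positive definite system in at most n steps.

```python
def conjugate_gradient(A, b, x0, iters):
    """Conjugate gradient iteration following the algorithm in the text."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    x = list(x0)
    r = [bi - ax for bi, ax in zip(b, matvec(A, x))]   # r(0) = b - A x(0)
    v = list(r)                                        # v(0) = r(0)
    for _ in range(iters):
        Av = matvec(A, v)
        h = dot(v, r) / dot(v, Av)                     # h_k
        x = [xi + h * vi for xi, vi in zip(x, v)]      # x(k+1) = x(k) + h_k v(k)
        r = [ri - h * avi for ri, avi in zip(r, Av)]   # r(k+1) = r(k) - h_k A v(k)
        t = -dot(v, matvec(A, r)) / dot(v, Av)         # t_k
        v = [ri + t * vi for ri, vi in zip(r, v)]      # v(k+1) = r(k+1) + t_k v(k)
    return x

A = [[9, 2, -1, 3], [2, 8, -3, 1], [-1, -3, 10, -2], [3, 1, -2, 8]]
b = [5, -2, 6, -4]
x = conjugate_gradient(A, b, [1, 1, 1, 1], 4)
```

After n = 4 steps the iterate already agrees with the exact solution to within rounding error, as Table 8.1.6 below also shows.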



Example 8.1.6. Solve the linear system


    
    (  9   2  −1   3 ) ( x1 )   (  5 )
    (  2   8  −3   1 ) ( x2 ) = ( −2 )
    ( −1  −3  10  −2 ) ( x3 )   (  6 )
    (  3   1  −2   8 ) ( x4 )   ( −4 )

by the conjugate gradient method using 7 iterations and display the differ-
ences between any two successive approximations.
Solution. First we check (with Mathematica) that A is a positive definite
matrix:
In[1] := A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
In[2] := PositiveDefiniteMatrixQ[A]
Out[2] = True
In[3] := r[j_] := r[j] = b − A.Transpose[x[j]];
In[4] := v[1] := r[0];
In[5] := t[j_] := t[j] = (Transpose[v[j]].r[j − 1])
            /(Transpose[v[j]].(A.v[j]));
In[6] := x[j_] := x[j] = x[j − 1] + t[j][[1, 1]] ∗ (Transpose[v[j]])[[1]];
In[7] := s[j_] := s[j] = −(Transpose[v[j]].(A.r[j]))
            /(Transpose[v[j]].(A.v[j]));
In[8] := v[j_] := v[j] = r[j − 1] + (s[j − 1][[1, 1]]) ∗ v[j − 1];
In[9] := d[j_] := d[j] = Sum[(x[j][[i]] − x[j − 1][[i]])^2, {i, 1, Length[x[j]]}];
In[10] := b = Transpose[{{5., −2., 6., −4.}}];
In[11] := x[0] = {1, 1, 1, 1};
In[12] := Table[x[j], {j, 0, 6}];
The results are given in Table 8.1.6.

Table 8.1.6 Conjugate Gradient


k\xk x1 x2 x3 x4 dk
0 1. 1. 1. 1.
1 0.355752 0.19469 1.16106 −0.127434 2.36063
2 0.686608 −0.0650599 0.429966 −0.645698 0.980036
3 0.881608 −0.203023 0.490418 −0.681147 0.0619698
4 0.882469 −0.200988 0.491358 −0.682963 9.06913 ∗ 10−6
5 0.882469 −0.200988 0.491358 −0.682963 3.08149 ∗ 10−32

Compare the obtained results with the exact solution:


In[16] := EXACT = Inverse[A].b
Out[16] = {1787/2025, −407/2025, 199/405, −461/675}
In[17] := N[%]
Out[17] = {0.882469, −0.200988, 0.491358, −0.682963}

The following Mathematica program solves a system of linear equations


by the conjugate gradient iterative method using prescribed error to stop the
iterations.
In[1] := ConjGrad[A_List, b_List, x0_List, d_] := Module[{},
  norm[xx_List, yy_List] :=
    Max[Table[Abs[xx[[i]] − yy[[i]]], {i, 1, Length[xx]}]];
  z = Table[0, {i, 1, Length[x0]}];
  x[0] = x0;
  r[0] = b − A.x[0];
  v[1] = r[0];
  r[k_ /; k > 0] := r[k] = r[k − 1] − t[k] ∗ A.v[k];
  t[k_ /; k > 0] := t[k] = Dot[r[k − 1], r[k − 1]]/Dot[v[k], A.v[k]];
  s[k_ /; k > 0] := s[k] = Dot[r[k], r[k]]/Dot[r[k − 1], r[k − 1]];
  v[k_ /; k > 1] := v[k] = r[k − 1] + s[k − 1] ∗ v[k − 1];
  x[k_ /; k > 0] := x[k] = x[k − 1] + t[k] ∗ v[k];
  Do[
    If[norm[x[k], x[k − 1]] <= d, {appr = x[k], savek = k,
      Break[]}], {k, 1, 100}];
];
If we execute the module by
In[2] := A = {{9, 2, −1, 3}, {2, 8, −3, 1}, {−1, −3, 10, −2}, {3, 1, −2, 8}};
In[3] := b = {5, −2, 6, −4};
In[4] := ConjGrad[A, b, {1, 0.4, 1, −1}, 0.00000001];
taking d = 10−8 we obtain the result
appr = {0.882469, −0.200988, 0.491358, −0.682963}.

Exercises for Section 8.1.

1. Find the eigenvalues and the corresponding eigenvectors of the matrix


 
        ( −1  0  3  0 )
        (  0  9  0  0 )
    A = (  3  0  1  0 ) .
        (  0  0  0  5 )
2. The Euclidean (Frobenius) norm ∥A∥E of an n × n matrix A = (aij )
   is defined by
       ∥A∥E = ( ∑_{i,j} aij^2 )^{1/2} .

   For the matrix
           ( 1  0  1 )
       A = ( 2  3  0 )
           ( 2  1  4 )

   compute the norms ∥A∥E , ∥A∥1 , ∥A∥2 , ∥A∥∞ and verify the inequalities

       ∥A∥2 ≤ ∥A∥E ≤ √n ∥A∥2 ,   ∥A∥2 ≤ ( ∥A∥1 ∥A∥∞ )^{1/2} .

3. The condition number of an invertible matrix A is defined by

       κ(A) = ∥A∥ ∥A−1 ∥.

   In the case of a symmetric matrix A, the condition number of the
   matrix is given by

       κ(A) = ∥A∥2 ∥A−1 ∥2 .

   Compute the condition number for the matrix

           (    1      0.99999 )
       A = ( 0.99999      1    ) .

   A “large” condition number of a matrix strongly affects the results of any
   procedure that involves the matrix. Verify this by solving the systems

       (    1      0.99999 ) ( x1 )   ( 2.99999 )
       ( 0.99999      1    ) ( x2 ) = ( 2.99998 )

   and
       ( 0.99999   0.99999 ) ( y1 )   ( 2.99999 )
       ( 0.99999      1    ) ( y2 ) = ( 2.99998 ) .

4. Show that the matrix


 
        (  2  −1   0 )
    A = ( −1   2  −1 )
        (  0  −1   2 )

is positive definite.

5. Solve the system


    
    (  4   1  −1   1 ) ( x1 )   ( −2 )
    (  1   4  −1  −1 ) ( x2 ) = ( −1 )
    ( −1  −1   5   1 ) ( x3 )   (  0 )
    (  1  −1   1   3 ) ( x4 )   (  1 )

by

(a) the Jacobi iterative method


(b) the Gauss–Seidel iterative method
using the condition

∥x(k+1) − x(k) ∥2 ≤ ϵ

to stop the iteration when ϵ = 10−6 .

6. Solve the system


    
    ( 10  −1  −1  −1 ) ( x1 )   ( 34 )
    ( −1  10  −1  −1 ) ( x2 ) = ( 23 )
    ( −1  −1  10  −1 ) ( x3 )   ( 12 )
    ( −1  −1  −1  10 ) ( x4 )   (  1 )

by the conjugate gradient method taking only 2 iterations.

7. Show that the matrix


 
        ( 4   2  −2 )
    A = ( 1  −3  −1 )
        ( 3  −1   4 )

is not strictly diagonally dominant but yet the Jacobi and Gauss–
Seidel iterative methods for the system

A·x=b

are convergent.

8.2 Finite Differences.


When no analytical expression for the solution of a partial differential
equation is available, numerical methods are used. One among many such
numerical methods is the finite difference method.
With this method, a particular differential equation is replaced with a differ-
ence equation, i.e., a system of linear equations which can be solved by many
numerical methods. This method is very often used because of its simplicity
and its easy computer implementation.
The numerical methods which will be discussed in the next several sections
are developed by approximating derivatives.
Let us begin with a real-valued function f of a single variable x. We
assume that f is sufficiently smooth; in most cases we will assume that f
has derivatives of all orders, i.e., f ∈ C ∞ (R). First, let us recall the definition
of the first derivative:

f (x + h) − f (x)
f ′ (x) = lim .
h→0 h

This means that


f (x + h) − f (x)
h
can be a good candidate for an approximation of f ′ (x) for sufficiently small
h. But, it is not obvious how good this approximation might be. The answer
lies in the well-known Taylor’s Formula.
Taylor’s Formula. Let a function f have n + 1 continuous derivatives in
the interval (x − l, x + l). Then for any h such that x + h ∈ (x − l, x + l)
there is a number 0 < θ < 1 such that

    f (x + h) = f (x) + (f ′ (x)/1!) h + (f ′′ (x)/2!) h^2 + . . . + (f (n) (x)/n!) h^n
                + (f (n+1) (x + θh)/(n + 1)!) h^{n+1} .

If the derivative f (n+1) is bounded, then we write

    (8.2.1)    f (x + h) = f (x) + (f ′ (x)/1!) h + (f ′′ (x)/2!) h^2 + . . . + (f (n) (x)/n!) h^n + O(h^{n+1} ).

The above notation O (read “big-oh”) has the following meaning:


If f (x) and g(x) are two functions such that there exist constants M > 0
and δ > 0 with the property

|f (x)| ≤ M |g(x)| for |x| < δ,

then we write
f (x) = O(g(x)) as x → 0.

If we take n = 1 in (8.2.1), then we obtain

    f (x + h) = f (x) + f ′ (x)h + O(h^2 ),

which can be written as

    (8.2.2)    f ′ (x) = ( f (x + h) − f (x) )/h + O(h).

From the last expression we have one approximation of f ′ (x):

    (8.2.3)    f ′ (x) ≈ ( f (x + h) − f (x) )/h ,   forward difference approximation.

Replacing h by −h in (8.2.2) we obtain

    f ′ (x) = ( f (x) − f (x − h) )/h + O(h).

From the last expression we have another approximation of f ′ (x):

    (8.2.4)    f ′ (x) ≈ ( f (x) − f (x − h) )/h ,   backward difference approximation.

Replacing h by −h in (8.2.1) we have

    (8.2.5)    f (x − h) = f (x) − (f ′ (x)/1!) h + . . . + (−1)^n (f (n) (x)/n!) h^n + O(h^{n+1} ).

If we take n = 2 in (8.2.1) and (8.2.5) and subtract (8.2.5) from (8.2.1),
then the second-order terms cancel and

    f ′ (x) = ( f (x + h) − f (x − h) )/(2h) + O(h^2 ).

Therefore, we have another approximation of the first derivative:

    (8.2.6)    f ′ (x) ≈ ( f (x + h) − f (x − h) )/(2h) ,   central difference approximation.

The truncation errors for the above approximations are given by

    Efor (h) ≡ ( f (x + h) − f (x) )/h − f ′ (x) = O(h),
    Eback (h) ≡ ( f (x) − f (x − h) )/h − f ′ (x) = O(h),
    Ecent (h) ≡ ( f (x + h) − f (x − h) )/(2h) − f ′ (x) = O(h^2 ).

Example 8.2.1. Let f (x) = ln x. Approximate f ′ (2) taking h = 0.1 and


h = 0.001 in

(a) the forward difference approximation

(b) the backward difference approximation

(c) the central difference approximation


and compare the obtained results with the exact value.

Solution. From calculus we know that f ′ (x) = 1/x and so the exact value of
f ′ (2) is 0.5.

(a) For h = 0.1 we have

    f ′ (2) ≈ ( f (2 + 0.1) − f (2) )/0.1 = ( ln(2.1) − ln(2) )/0.1 ≈ 0.4879.

For h = 0.001 we have

    f ′ (2) ≈ ( f (2 + 0.001) − f (2) )/0.001 = ( ln(2.001) − ln(2) )/0.001 ≈ 0.49988.

(b) For h = 0.1 we have

    f ′ (2) ≈ ( f (2) − f (2 − 0.1) )/0.1 = ( ln(2) − ln(1.9) )/0.1 ≈ 0.5129.

For h = 0.001 we have

    f ′ (2) ≈ ( f (2) − f (2 − 0.001) )/0.001 = ( ln(2) − ln(1.999) )/0.001 ≈ 0.50013.
(c) For h = 0.1 we have

    f ′ (2) ≈ ( f (2 + 0.1) − f (2 − 0.1) )/( 2(0.1) ) = ( ln(2.1) − ln(1.9) )/0.2 ≈ 0.500417.

For h = 0.001 we have

    f ′ (2) ≈ ( f (2 + 0.001) − f (2 − 0.001) )/( 2(0.001) ) = ( ln(2.001) − ln(1.999) )/0.002 ≈ 0.500000.
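The three difference quotients of this example can be checked in a few lines of Python (an illustrative sketch, not the book's code):

```python
import math

f = math.log
x, h = 2.0, 0.1

forward = (f(x + h) - f(x)) / h            # (8.2.3)
backward = (f(x) - f(x - h)) / h           # (8.2.4)
central = (f(x + h) - f(x - h)) / (2 * h)  # (8.2.6)
```

As the truncation error estimates predict, the central quotient is already correct to three decimals at h = 0.1, while the one-sided quotients are off in the second decimal.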

If we use the Taylor series (8.2.1) and (8.2.5), then we obtain the following
approximations for the second derivative:

    (8.2.7)    f ′′ (x) ≈ ( f (x + 2h) − 2f (x + h) + f (x) )/h^2 ,   forward approximation,

    (8.2.8)    f ′′ (x) ≈ ( f (x) − 2f (x − h) + f (x − 2h) )/h^2 ,   backward approximation,

    (8.2.9)    f ′′ (x) ≈ ( f (x + h) − 2f (x) + f (x − h) )/h^2 ,   central approximation.

The approximations (8.2.7) and (8.2.8) for the second derivative are accurate
of first order, and the central difference approximation (8.2.9) for the second
derivative is accurate of second order.

Example 8.2.2. Approximate f ′′ (2) for the function f (x) = ln x taking
h = 0.1 with the central difference approximation.
Solution. We find that f ′′ (x) = −1/x^2 and so f ′′ (2) = −0.25. With the
central difference approximation for the second derivative we have

    f ′′ (2) ≈ ( f (2 + 0.1) − 2f (2) + f (2 − 0.1) )/0.1^2 = ( ln 2.1 − 2 ln 2 + ln 1.9 )/0.01 ≈ −0.25031.
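The same check in Python for the central second-difference quotient (illustrative):

```python
import math

x, h = 2.0, 0.1
# central difference approximation (8.2.9) of the second derivative
second = (math.log(x + h) - 2 * math.log(x) + math.log(x - h)) / h**2
```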

When solving a problem described by an ordinary differential equation,


usually we partition a part of the line R into many equal intervals, defined
by the points
xi = ih, i = 0, ±1, ±2, . . .,
and then apply some finite difference approximation scheme to the derivatives
involved in the differential equation.
Let us apply the finite differences to a particular boundary value problem.
Example 8.2.3. Solve the boundary value problem

    (8.2.10)    y ′′ (x) = −y(x),   0 ≤ x ≤ π/2,
                y(0) = 0,   y(π/2) = 1,

by the central difference approximation.

Solution. The exact solution of the given boundary value problem is given by
y(x) = sin x. Divide the interval [0, π/2] into n equal subintervals with the
points

    xj = jh,   h = π/(2n),   j = 0, 1, 2, . . . , n.
If yj is an approximation of y(xj ) (yj ≈ y(xj )), then using the central
difference approximations for the second derivative, Equation (8.2.10) can be
approximated by

y(xj−1 ) − 2y(xj ) + y(xj+1 )


= −yj , j = 1, 2, . . . , n − 1.
h2

From the last equations we obtain the n − 1 linear equations

(8.2.11) y_{j−1} − 2y_j + y_{j+1} = −h² y_j, j = 1, 2, . . . , n − 1,

which together with the boundary conditions y_0 = y(0) = 0 and y_n = y(π/2) = 1 can be written in the matrix form

A · y = b,
where A is the (n − 1) × (n − 1) tridiagonal matrix

A =
[ −2 + h²    1         0       . . .    0         0      ]
[  1        −2 + h²    1       . . .    0         0      ]
[  0         1        −2 + h²  . . .    0         0      ]
[                      . . .                             ]
[  0         0         0       . . .   −2 + h²    1      ]
[  0         0         0       . . .    1        −2 + h² ]

and

y = (y_1, y_2, . . . , y_{n−1})^T,  b = (0, 0, . . . , 0, −1)^T.

Now let us take n = 15. Using Mathematica (defining the matrix A and the column b displayed above) we have

In[1]:= n = 15;
In[2]:= h = Pi/(2*n);
In[3]:= A = Table[Which[i == j, -2 + h^2, Abs[i - j] == 1, 1, True, 0], {i, n - 1}, {j, n - 1}];
In[4]:= b = Table[{If[j == n - 1, -1, 0]}, {j, 1, n - 1}];
In[5]:= yApprox = Inverse[A].b;
In[6]:= N[%]
Out[6]:= {{0.104576}, {0.208005}, {0.309154}, {0.406912}, {0.500208},
{0.588018}, {0.66938}, {0.743401}, {0.809271}, {0.866265},
{0.91376}, {0.951234}, {0.978277}, {0.994592}}

Compare this approximate solution with the exact solution.

In[7]:= Table[Sin[j*h], {j, 1, 15}];
In[8]:= yExact = N[%]
Out[8]:= {0.104528, 0.207912, 0.309017, 0.406737, 0.5, 0.587785,
0.669131, 0.743145, 0.809017, 0.866025, 0.913545,
0.951057, 0.978148, 0.994522, 1.}

The approximations of the derivatives of a function of a single variable can


be extended to functions of several variables. For example, if f (x, y) is a given
function of two variables (x, y) ∈ D ⊆ R2 , for the first partial derivatives we
have

f_x(x, y) = [f(x + h, y) − f(x, y)]/h + O(h), forward difference,
f_x(x, y) = [f(x, y) − f(x − h, y)]/h + O(h), backward difference,
f_x(x, y) = [f(x + h, y) − f(x − h, y)]/(2h) + O(h²), central difference,
f_y(x, y) = [f(x, y + k) − f(x, y)]/k + O(k), forward difference,
f_y(x, y) = [f(x, y) − f(x, y − k)]/k + O(k), backward difference,
f_y(x, y) = [f(x, y + k) − f(x, y − k)]/(2k) + O(k²), central difference.

For the partial derivatives of second order we have

f_xx(x, y) = [f(x + 2h, y) − 2f(x + h, y) + f(x, y)]/h² + O(h), forward,
f_xx(x, y) = [f(x, y) − 2f(x − h, y) + f(x − 2h, y)]/h² + O(h), backward,
f_xx(x, y) = [f(x + h, y) − 2f(x, y) + f(x − h, y)]/h² + O(h²), central.

For the second partial derivatives with respect to y, as well as for the mixed partial derivatives, we have similar expressions.

In order to approximate the partial derivatives on a relatively large set of discrete points in R² we proceed in the following way. We partition a part of the plane R² into equal rectangles by a grid of lines parallel to the coordinate axes Ox and Oy, defined by the points

x_i = ih, i = 0, ±1, ±2, . . . ;  y_j = jk, j = 0, ±1, ±2, . . . (see Figure 8.2.1).

[Figure 8.2.1: grid lines through x_{i−1}, x_i, x_{i+1} and y_{j−1}, y_j, y_{j+1}, with the node M = (x_i, y_j).]
The following notation will be used:

f(M) = f(x_i, y_j) = f_{i,j}.

With this notation we have the following approximations for the first order partial derivatives:

f_x(x_i, y_j) ≈ (f_{i+1,j} − f_{i,j})/h,   f_x(x_i, y_j) ≈ (f_{i,j} − f_{i−1,j})/h,
f_y(x_i, y_j) ≈ (f_{i,j+1} − f_{i,j})/k,   f_y(x_i, y_j) ≈ (f_{i,j} − f_{i,j−1})/k,
f_x(x_i, y_j) ≈ (f_{i+1,j} − f_{i−1,j})/(2h),   f_y(x_i, y_j) ≈ (f_{i,j+1} − f_{i,j−1})/(2k).

For the second order partial derivatives we have the following approximations:

f_xx(x_i, y_j) ≈ (f_{i+2,j} − 2f_{i+1,j} + f_{i,j})/h²,   f_xx(x_i, y_j) ≈ (f_{i,j} − 2f_{i−1,j} + f_{i−2,j})/h²,
f_yy(x_i, y_j) ≈ (f_{i,j+2} − 2f_{i,j+1} + f_{i,j})/k²,   f_yy(x_i, y_j) ≈ (f_{i,j} − 2f_{i,j−1} + f_{i,j−2})/k²,
f_xx(x_i, y_j) ≈ (f_{i+1,j} − 2f_{i,j} + f_{i−1,j})/h²,   f_yy(x_i, y_j) ≈ (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/k²,
f_xy(x_i, y_j) ≈ (f_{i+1,j+1} − f_{i+1,j−1} − f_{i−1,j+1} + f_{i−1,j−1})/(4hk).

Example 8.2.4. Using one of the above approximation formulas find an approximation of f_xx(0.05, 0.05) for the function

f(x, y) = e^{−π² y} sin πx.

Solution. If we take h = k = 0.05, then by the approximation formula

f_xx(x_i, y_j) ≈ (f_{i+1,j} − 2f_{i,j} + f_{i−1,j})/h²

we have

f_xx(0.05, 0.05) ≈ (f_{2,1} − 2f_{1,1} + f_{0,1})/h²,

where f_{2,1} = f(0.1, 0.05), f_{1,1} = f(0.05, 0.05) and f_{0,1} = f(0, 0.05). Calculating these f_{i,j} for the given function we obtain

f_xx(0.05, 0.05) ≈ −0.9407.

Among the important characteristics of a finite difference scheme applied to the solution of ordinary or partial differential equations (convergence, consistency and stability), we will consider only the notions of convergence and stability. Convergence means that the finite difference solution approaches the true (exact) solution of the ordinary or partial differential equation as the grid size approaches zero. A finite difference scheme is called stable if the error caused by a small change (due to truncation or round-off) in the numerical solution remains bounded. By a theorem which is beyond the scope of this textbook, for a consistent scheme the stability of a finite difference scheme is a necessary and sufficient condition for its convergence.

Exercises for Section 8.2.

1. For each of the following functions estimate the first derivative of the
function at x = π/4 taking h = 0.1 in the forward, backward and
central finite difference approximation. Compare your obtained re-
sults with the exact values.

(a) f (x) = sin x.

(b) f (x) = tan x.

2. Use the table to estimate the first derivative at each mesh point. Estimate the second derivative at x = 0.6.

x       0.5       0.6       0.7
f(x)    0.47943   0.56464   0.64422

3. Let f(x) have the first 4 continuous derivatives. Show that the following higher degree approximation for the first derivative f′(x) holds:

f′(x) = [f(x − 2h) − 8f(x − h) + 8f(x + h) − f(x + 2h)]/(12h) + O(h⁴).

4. Using the central finite difference approximation with h = 0.001, estimate the second derivative of the function f(x) = e^{x²}. Compare your result with the “exact” value.

5. Partition [0, 1] into 10 equal subintervals to approximate the solution of the following problem:

y″ + xy = 0, 0 < x < 1;  y(0) = 0, y(1) = 1.

8.3 Finite Difference Methods for Laplace and Poisson Equations.


In this section we will apply the finite difference method to the Laplace
and Poisson equations on domains with different shapes, subject to Dirichlet
and Neumann conditions.
1°. Rectangular Domain. Dirichlet Boundary Condition.
Let us consider the Poisson equation, subject to the Dirichlet boundary
conditions on the two dimensional rectangular domain D
(8.3.1)  Δu(x, y) = F(x, y), (x, y) ∈ D = {a < x < b, c < y < d};
         u(x, y) = B(x, y), (x, y) ∈ S = ∂D, the boundary of D.

We partition the rectangle D into equal rectangles by a grid of lines parallel


to the coordinate axes Ox and Oy, defined by the points

xi = a + ih, i = 0, 1, 2, . . . , m; yj = c + jk, j = 0, 1, 2, . . . , n,

where

h = (b − a)/m,  k = (d − c)/n.

[Figure 8.3.1: the rectangle [a, b] × [c, d] covered by a grid with spacings h and k; node M = (x_i, y_j).]

Then, using the central finite difference approximations for the second partial derivatives, an approximation to the Poisson/Laplace equation (8.3.1) at each interior point (node) (i, j), 1 ≤ i ≤ m − 1, 1 ≤ j ≤ n − 1, is given by

(8.3.2) (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h² + (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/k² = F_{i,j},

where F_{i,j} = F(x_i, y_j), 1 ≤ i ≤ m − 1; 1 ≤ j ≤ n − 1.



At the boundary points (nodes) (0, j), (m, j), j = 0, 1, . . . , n and (i, 0),
(i, n), i = 0, 1, . . . , m we have
(8.3.3) ui,j = B(xi , yj ) ≡ Bi,j .
In applications, usually we take h = k, and then Equations (8.3.2) become
(8.3.4) ui−1,j − 4ui,j + ui+1,j + ui,j−1 + ui,j+1 = h2 Fi,j ,
for 1 ≤ i ≤ m − 1; 1 ≤ j ≤ n − 1.
In order to write the matrix form of the linear system (8.3.2) for the (m − 1)(n − 1) unknowns

u_{1,1}, u_{1,2}, . . . , u_{1,n−1}, u_{2,1}, u_{2,2}, . . . , u_{2,n−1}, . . . , u_{m−1,1}, u_{m−1,2}, . . . , u_{m−1,n−1},

we label the nodes in the rectangle from left to right and from top to bottom. With this enumeration, the (m − 1)(n − 1) × (m − 1)(n − 1) matrix A of the system has the block form

A =
[  U  −I   O  . . .   O   O ]
[ −I   U  −I  . . .   O   O ]
[  O  −I   U  . . .   O   O ]
[           . . .           ]
[  O   O   O  . . .   U  −I ]
[  O   O   O  . . .  −I   U ]

where I is the identity matrix of order n − 1, O is the zero matrix of order n − 1, and the (n − 1) × (n − 1) matrix U is the tridiagonal matrix

U =
[  4  −1   0  . . .   0   0 ]
[ −1   4  −1  . . .   0   0 ]
[  0  −1   4  . . .   0   0 ]
[           . . .           ]
[  0   0   0  . . .  −1   4 ]

Even though the matrix A is a nicely structured sparse matrix (formed by identical blocks that contain mostly zeros), it is quite large even for relatively small m and n, and therefore an iterative method is usually required to solve the system.
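To make the block structure concrete, the following Python sketch (illustrative, not from the text) assembles the matrix of the system 4u_{i,j} − u_{i−1,j} − u_{i+1,j} − u_{i,j−1} − u_{i,j+1} for m = n = 4, i.e. a 9 × 9 matrix, so its pattern of U and −I blocks can be printed and inspected:

```python
def laplacian_matrix(m, n):
    """(m-1)(n-1) x (m-1)(n-1) matrix of the 5-point scheme,
    interior nodes ordered with the index j varying fastest."""
    N = (m - 1) * (n - 1)
    A = [[0] * N for _ in range(N)]
    idx = lambda i, j: (i - 1) * (n - 1) + (j - 1)
    for i in range(1, m):
        for j in range(1, n):
            r = idx(i, j)
            A[r][r] = 4
            # each interior neighbor contributes a -1 entry
            for (p, q) in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 1 <= p <= m - 1 and 1 <= q <= n - 1:
                    A[r][idx(p, q)] = -1
    return A

A = laplacian_matrix(4, 4)
for row in A:
    print(row)
```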
We illustrate the finite difference method for the Poisson equation with the
following example.

Example 8.3.1. Using a finite difference method solve the boundary value problem

Δu(x, y) = −2, (x, y) ∈ D;  u(x, y) = 0, (x, y) ∈ ∂D,

where D is the square D = {(x, y) : −1 < x < 1, −1 < y < 1}, whose boundary is denoted by ∂D.

Solution. Taking a uniform grid with increment h = 2/n in both coordinate directions, from (8.3.4) it follows that

(8.3.5) ui−1,j − 4ui,j + ui+1,j + ui,j−1 + ui,j+1 = −2h2 ,

with boundary values

(8.3.6) u0,j = un,j = 0, j = 0, 1, . . . , n; ui,0 = ui,n = 0, 0 ≤ i ≤ n.

We label the nodes from left to right and from top to bottom, starting with the top left corner of the square. With this labeling, taking n = 5, i.e., h = 0.4 in (8.3.5), and using the boundary conditions (8.3.6), we obtain the linear system

A · u = b,

where (after multiplying (8.3.5) by −1) the 16 × 16 matrix A has the block form

A =
[  T  −I   O   O ]
[ −I   T  −I   O ]
[  O  −I   T  −I ]
[  O   O  −I   T ]

in which I and O are the identity and zero matrices of order 4 and

T =
[  4  −1   0   0 ]
[ −1   4  −1   0 ]
[  0  −1   4  −1 ]
[  0   0  −1   4 ]
and

u = (u_{1,1}, u_{1,2}, u_{1,3}, u_{1,4}, u_{2,1}, . . . , u_{2,4}, u_{3,1}, . . . , u_{3,4}, u_{4,1}, . . . , u_{4,4})^T,
b = (0.32, 0.32, . . . , 0.32)^T (a 16 × 1 column, since 2h² = 0.32).
If we solve this system directly (using Mathematica) we obtain the following approximation of the solution of the Poisson equation:

0.266667, 0.373333, 0.373333, 0.266667, 0.373333, 0.533333, 0.533333, 0.373333,
0.373333, 0.533333, 0.533333, 0.373333, 0.266667, 0.373333, 0.373333, 0.266667.

Compare this approximate solution with the exact solution evaluated at the interior nodes:

0.277176, 0.385245, 0.385245, 0.277176, 0.385245, 0.549954, 0.549954, 0.385245,
0.385245, 0.549954, 0.549954, 0.385245, 0.277176, 0.385245, 0.385245, 0.277176.

The exact solution of the given Poisson equation, obtained in Section 7.2, is given by

u(x, y) = 1 − y² − (32/π³) Σ_{k=0}^{∞} [(−1)^k/(2k + 1)³] · [cosh((2k + 1)πx/2) / cosh((2k + 1)π/2)] · cos((2k + 1)πy/2).
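The numbers above can be checked independently with a short Gauss–Seidel iteration written directly on the grid form (8.3.5) (a Python sketch; the book solves the linear system in Mathematica instead):

```python
n = 5
h = 2.0 / n                                    # grid step on [-1, 1] x [-1, 1]
u = [[0.0] * (n + 1) for _ in range(n + 1)]    # zero boundary values
for _ in range(200):                           # Gauss-Seidel sweeps on (8.3.5)
    for i in range(1, n):
        for j in range(1, n):
            u[i][j] = (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]
                       + 2 * h**2) / 4.0
print(u[1][1], u[1][2], u[2][2])  # ~ 0.266667, 0.373333, 0.533333
```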

2°. Rectangular Domain. Neumann Boundary Condition.


Now we will approximate the solution of the Poisson equation on the unit
square, subject to mixed boundary conditions on the boundary of the square.
Let D be the unit square

D = {(x, y) : 0 < x < 1, 0 < y < 1}.

Consider the boundary value problem

Δu(x, y) = F(x, y), (x, y) ∈ D,
u(x, 0) = f1(x), u(x, 1) = f2(x), 0 < x < 1,
u_x(0, y) = g1(y), u_x(1, y) = g2(y), 0 < y < 1.
We make an n × n grid with grid size h = 1/n. Let us denote a node (x_i, y_j) by (i, j).
At all interior nodes (i, j) with i = 2, 3, . . . , n − 2, j = 1, 2, . . . , n − 1, we apply the approximation scheme (8.3.4):

u_{i−1,j} − 4u_{i,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} = h² F_{i,j}.

For the interior nodes (1, j) and (n − 1, j), j = 1, 2, . . . , n − 1, the situation is slightly more complicated, since we do not know the values of the function u(x, y) at the boundary nodes (0, j) and (n, j), j = 1, 2, . . . , n − 1. In order to approximate the values of u(x, y) at these nodes we extend the grid with additional points (so called fictitious points) (−1, j) and (n + 1, j), j = 1, 2, . . . , n − 1, to the left and right of the boundary lines x = 0 and x = 1, respectively. (See Figure 8.3.2.)

[Figure 8.3.2: the unit square with boundary data u(x, 0) = f1(x), u(x, 1) = f2(x), u_x(0, y) = g1(y), u_x(1, y) = g2(y), and the fictitious points (−1, j) and (n + 1, j) outside the boundary nodes (0, j) and (n, j).]

If we apply the central finite difference approximation for u_x(0, y_j) and u_x(1, y_j) we have

u_x(0, y_j) ≈ [u(0 + h, y_j) − u(0 − h, y_j)]/(2h),
u_x(1, y_j) ≈ [u(1 + h, y_j) − u(1 − h, y_j)]/(2h).

Therefore,

(8.3.7) u_{1,j} − u_{−1,j} = 2h g1(y_j),  u_{n+1,j} − u_{n−1,j} = 2h g2(y_j).

We can eliminate u_{−1,j} and u_{n+1,j} if we use the difference form of the Poisson equation at the nodes (0, j) and (n, j):

(8.3.8) u_{−1,j} + u_{1,j} + u_{0,j−1} + u_{0,j+1} − 4u_{0,j} = h² F_{0,j},
        u_{n+1,j} + u_{n−1,j} + u_{n,j−1} + u_{n,j+1} − 4u_{n,j} = h² F_{n,j}.
From (8.3.7) and (8.3.8) it follows that

(8.3.9) 4u_{0,j} = 2u_{1,j} + u_{0,j+1} + u_{0,j−1} − 2h g1(y_j) − h² F_{0,j},
        4u_{n,j} = 2u_{n−1,j} + u_{n,j+1} + u_{n,j−1} + 2h g2(y_j) − h² F_{n,j}.

We solve the system given by (8.3.4) and (8.3.9) for the n² − 1 unknowns u_{i,j}, i = 2, . . . , n − 2, j = 1, . . . , n − 1, and u_{0,j}, u_{1,j}, u_{n−1,j}, u_{n,j}, j = 1, 2, . . . , n − 1.
We illustrate the above discussion by an example for the Laplace equation.
Example 8.3.2. Find an approximation of the solution of the boundary value problem

u_xx + u_yy = 0, 0 < x < 1, 0 < y < 1,
u(x, 0) = 0, u(x, 1) = 1 + x², 0 < x < 1,
u_x(0, y) = 1, u_x(1, y) = 0, 0 < y < 1

on a 5 × 5 uniform grid.
Solution. For this example we have

F(x, y) = 0, f1(x) = 0, f2(x) = 1 + x², g1(y) = 1, g2(y) = 0,

and n = 5, h = 0.2. Therefore, the system (8.3.4) and (8.3.9) is given by

4u_{i,j} − u_{i−1,j} − u_{i+1,j} − u_{i,j−1} − u_{i,j+1} = 0, i = 1, 2, 3, 4; j = 1, 2, 3, 4,
u_{i,0} = 0, i = 0, 1, . . . , 5,
u_{i,5} = 1 + (ih)², i = 0, 1, . . . , 5,
4u_{0,j} − 2u_{1,j} − u_{0,j+1} − u_{0,j−1} = −2h, j = 1, 2, 3, 4,
4u_{5,j} − 2u_{4,j} − u_{5,j+1} − u_{5,j−1} = 0, j = 1, 2, 3, 4.
If we enumerate the nodes (i, j) from bottom to top and left to right, then the system to be solved can be written in the matrix form

(8.3.10) A · u = b,

where the 24 × 24 matrix A has the block form

A =
[  T  −2I   O    O    O    O ]
[ −I    T  −I    O    O    O ]
[  O   −I   T   −I    O    O ]
[  O    O  −I    T   −I    O ]
[  O    O   O   −I    T   −I ]
[  O    O   O    O  −2I    T ]
where I is the 4 × 4 identity matrix, O is the 4 × 4 zero matrix, and the 4 × 4 matrix T is given by

T =
[  4  −1   0   0 ]
[ −1   4  −1   0 ]
[  0  −1   4  −1 ]
[  0   0  −1   4 ]
The column u of the 24 unknowns is given by

u = (u_{0,1}, u_{0,2}, . . . , u_{0,4}, . . . , u_{4,1}, u_{4,2}, . . . , u_{4,4}, u_{5,1}, u_{5,2}, . . . , u_{5,4})^T

and the 24 × 1 column b is given by

b = (−2h, −2h, −2h, 1 − 2h, 0, 0, 0, 1 + h², 0, 0, 0, 1 + 4h², . . . , 0, 0, 0, 1 + 25h²)^T.

The above system was solved by the Gauss–Seidel iterative method (taking 10 iterations) and the results are given in Table 8.3.1.

Table 8.3.1
j\i      0          1         2         3         4         5
1    −0.06946   0.049656  0.112271  0.169276  0.202971  0.206501
2     0.022854  0.155810  0.230152  0.361864  0.436107  0.420063
3     0.179795  0.320579  0.290663  0.611922  0.436107  0.808039
4     0.478023  0.656047  0.785585  1.035632  1.182048  1.293034

3°. Nonrectangular Domain. Dirichlet Boundary Condition.

Now we will approximate the Poisson equation on a nonrectangular domain D with a curved boundary Γ = ∂D, subject to Dirichlet boundary conditions. For problems with Neumann boundary conditions the interested reader is referred to the book [16] by G. Evans, J. Blackledge and P. Yardley.
Consider the boundary value problem

Δu(x, y) = F(x, y), (x, y) ∈ D;  u(x, y) = f(x, y), (x, y) ∈ Γ.

We cover the domain D with a uniform grid of squares whose sides are
parallel to the coordinate axes. Let h be the grid size. To apply a finite
difference method, the given domain D is replaced by the set of those squares
which completely lie in the domain D. (See Figure 8.3.3.)
[Figure 8.3.3: the domain D with curved boundary Γ = ∂D, on which Δu = F inside D and u = f on Γ; a grid square with node M, its neighbors W and S inside D, and the nodes E and N outside.]

If all four neighboring nodes of an interior node lie in the domain D, then we use Equations (8.3.4) to approximate the Poisson equation there. Special consideration is needed for those nodes close to the boundary Γ for which at least one neighboring node lies outside the domain. The node M(x_i, y_j) in Figure 8.3.4 is an example of such a node: its neighboring nodes E and N are outside the region D. To approximate the second partial derivatives in the Poisson equation at the node M we proceed as follows. In order to approximate u_xx we denote by A(x, y_j) the point of the segment ME which lies exactly on the boundary Γ, and let Δx = MA (see Figure 8.3.4).

[Figure 8.3.4: the node M with west neighbor W inside D, the boundary point A on the segment ME, and the node N outside D.]

By Taylor's formula for the function u(x, y) at the points A and W we have

u(W) = u(M) − u_x(M)h + (1/2)u_xx(M)h² + . . .

and

u(A) = u(M) + u_x(M)Δx + (1/2)u_xx(M)(Δx)² + . . . .

From the last two equations we obtain the following approximation for u_xx at the node (i, j):

(8.3.11) u_xx(M) ≈ 2[ u(A)/(Δx(Δx + h)) + u(W)/(h(Δx + h)) − u(M)/(hΔx) ].

Working similarly in the y direction, with the boundary point B on the segment MN and the south neighbor S, we obtain an approximation for u_yy at the node (i, j):

(8.3.12) u_yy(M) ≈ 2[ u(B)/(Δy(Δy + h)) + u(S)/(h(Δy + h)) − u(M)/(hΔy) ],

where Δy = MB.
From (8.3.11) and (8.3.12) we have the following approximation of the Poisson equation at the node M:

(8.3.13) u(A)/(Δx(Δx + h)) + u(B)/(Δy(Δy + h)) + u(W)/(h(Δx + h)) + u(S)/(h(Δy + h)) − [(Δx + Δy)/(hΔxΔy)] u(M) ≈ (1/2)F(M).

In (8.3.13), u(M), u(W) and u(S) are unknown and are to be determined.
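A useful sanity check of (8.3.11) (a Python sketch, illustrative only): since the formula comes from second order Taylor expansions, it must reproduce u_xx exactly for a quadratic such as u(x) = x², for any choice of Δx and h:

```python
def uxx_irregular(uA, uW, uM, dx, h):
    # formula (8.3.11): unequal spacings dx to the right, h to the left of M
    return 2 * (uA / (dx * (dx + h)) + uW / (h * (dx + h)) - uM / (h * dx))

u = lambda x: x**2            # u_xx = 2 everywhere
h, dx = 0.25, 0.1             # node M at x = 0, W at -h, boundary point A at dx
val = uxx_irregular(u(dx), u(-h), u(0.0), dx, h)
print(val)  # ~ 2, up to rounding
```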
Example 8.3.3. Taking a uniform grid of step size h = 1/4, solve the Laplace equation

u_xx + u_yy = 0,  x > 0, y > 0, x²/4 + y² < 1,

subject to the boundary conditions

u(0, y) = y², 0 ≤ y ≤ 1;  u(x, 0) = x²/4, 0 ≤ x ≤ 2;  u(x, y) = 1 on x²/4 + y² = 1, y > 0.

Solution. The uniform grid and the domain are displayed in Figure 8.3.5.

For the nodes (6, 1) and (i, j), i = 1, 2, . . . , 5, j = 1, 2, we use (8.3.4):

(8.3.14) 4u_{i,j} − u_{i,j+1} − u_{i,j−1} − u_{i−1,j} − u_{i+1,j} = 0.

[Figure 8.3.5: the quarter of the ellipse x²/4 + y² = 1 in the first quadrant covered by a grid of step 1/4; the boundary points B1, . . . , B7 lie on the ellipse above the top rows of nodes and A1, A2, A3 on the ellipse to the right of the last columns.]

For the nodes (5, 3), (6, 2) and (7, 1) we use approximations (8.3.13):

(8.3.15)
u(A3)/(Δx3(Δx3 + h)) + u(B5)/(Δy5(Δy5 + h)) + u_{4,3}/(h(Δx3 + h)) + u_{5,2}/(h(Δy5 + h)) − [(Δx3 + Δy5)/(hΔx3Δy5)] u_{5,3} = 0,

u(A2)/(Δx2(Δx2 + h)) + u(B6)/(Δy6(Δy6 + h)) + u_{5,2}/(h(Δx2 + h)) + u_{6,1}/(h(Δy6 + h)) − [(Δx2 + Δy6)/(hΔx2Δy6)] u_{6,2} = 0,

u(A1)/(Δx1(Δx1 + h)) + u(B7)/(Δy7(Δy7 + h)) + u_{6,1}/(h(Δx1 + h)) + u_{7,0}/(h(Δy7 + h)) − [(Δx1 + Δy7)/(hΔx1Δy7)] u_{7,1} = 0.

Applying the same second order Taylor approximation at the nodes (i, 3), i = 1, 2, 3, 4, whose north neighbors lie outside D, we obtain the following 4 equations:

(8.3.16) u_{i−1,3} + u_{i+1,3} + [2h/((h + Δy_i)Δy_i)] (h u(B_i) + Δy_i u_{i,2}) − [2(h + Δy_i)/Δy_i] u_{i,3} = 0, i = 1, 2, 3, 4.

From the given boundary conditions we have

u_{0,j} = y_j² = j²/16, j = 1, . . . , 4;  u_{i,0} = x_i²/4 = i²/64;
u(B_i) = 1, i = 1, . . . , 7;  u(A_i) = 1, i = 1, 2, 3.
In order to solve the system (8.3.14), (8.3.15), (8.3.16) in the 18 unknowns u_{i,j}, i = 1, . . . , 6, j = 1, 2; u_{i,3}, i = 1, . . . , 5; u_{7,1}, first we evaluate the quantities Δx_i and Δy_i:

Δy_i = √(1 − x_i²/4) − 3h = √(1 − i²/64) − 3h, i = 1, 2, . . . , 5,
Δy_i = √(1 − x_i²/4) − (8 − i)h = √(1 − i²/64) − (8 − i)h, i = 6, 7,
Δx_i = 2√(1 − y_i²) − (8 − i)h = 2√(1 − i²/16) − (8 − i)h, i = 1, 2, 3.

If we enumerate the nodes (i, j) from bottom to top and left to right we obtain that the system to be solved can be written in the matrix form

(8.3.17) A · u = b,

where the 18 × 18 matrix has the block form

A =
[ T1   D   O   O   O   O ]
[  D  T2   D   O   O   O ]
[  O   D  T3   D   O   O ]
[  O   O   D  T4   D   O ]
[  O   O   O  B1  T5   C ]
[  O   O   O   O  B2  T6 ]

where O is the 3 × 3 zero matrix, and the 3 × 3 matrices D, C, B1 and B2 are given by

D =
[ −1   0   0 ]
[  0  −1   0 ]
[  0   0   1 ]

C =
[ −1   0   0 ]
[  0  −1   0 ]
[  0   0   0 ]

B1 =
[ −1   0   0 ]
[  0  −1   0 ]
[  0   0   1/(h(h + Δx3)) ]

B2 =
[ −1   0   0 ]
[  0   1/(h(h + Δx2))   0 ]
[  0   0   0 ]
T_i =
[  4   −1    0 ]
[ −1    4   −1 ]
[  0   2h/(h + Δy_i)   −2(h + Δy_i)/Δy_i ],  i = 1, 2, . . . , 5,

T6 =
[  4   −1   −1 ]
[ 1/(h(h + Δy6))   −(Δx2 + Δy6)/(hΔx2Δy6)   0 ]
[ 1/(h(h + Δx1))   0   −(Δx1 + Δy7)/(hΔx1Δy7) ].

The column b in the system (8.3.17) is given by

b = ( u_{1,0} + u_{0,1},  u_{0,2},  −u_{0,3} − 2h²/((h + Δy1)Δy1),
u_{2,0},  0,  −2h²/((h + Δy2)Δy2),
u_{3,0},  0,  −2h²/((h + Δy3)Δy3),
u_{4,0},  0,  −2h²/((h + Δy4)Δy4),
u_{5,0},  0,  −1/((Δx3 + h)Δx3) − 1/((Δy5 + h)Δy5),
u_{6,0},  −1/((Δx2 + h)Δx2) − 1/((Δy6 + h)Δy6),
−1/((Δx1 + h)Δx1) − 1/((Δy7 + h)Δy7) − u_{7,0}/(h(h + Δy7)) )^T.

Solving the system for u_{i,j}, we obtain the results given in Table 8.3.2 (the index i runs in the horizontal direction and the index j runs vertically).

Table 8.3.2
j\i     1        2        3        4        5        6        7
1    0.34519  0.44063  0.53730  0.63861  0.73899  0.83647  0.93431
2    0.56514  0.63002  0.69497  0.77816  0.85588  0.92259
3    0.78536  0.81932  0.83442  0.92316  0.98379

Exercises for Section 8.3.

1. Using the finite difference method find an approximate solution of the Laplace equation Δu(x, y) = 0 on the unit square 0 < x < 1, 0 < y < 1, on a 9 × 9 uniform grid (h = 1/9), subject to the Dirichlet conditions

u(x, 0) = 20, u(0, y) = 80, u(x, 1) = 180, u(1, y) = 0.



2. Using the finite difference method find an approximate solution of the Poisson equation

Δu(x, y) = 1

on the unit square 0 < x < 1, 0 < y < 1, on a 4 × 4 uniform grid (h = 1/4), subject to the Dirichlet conditions

u(x, 0) = u(0, y) = u(x, 1) = u(1, y) = 0.

3. Using the finite difference method find an approximate solution of the Laplace equation Δu(x, y) = 0 on the unit square 0 < x < 1, 0 < y < 1, on a 9 × 9 uniform grid with h = 1/9, subject to the boundary conditions

u_y(x, 0) = 0, u(0, y) = 80, u(x, 1) = 180, u(1, y) = 0.

4. Solve the Poisson equation

u_xx + u_yy = 1

on the rectangle 3 < x < 5, 4 < y < 6, with zero boundary condition, taking h = 1/4 in the finite difference formulas.

5. The function u satisfies the Laplace equation on the square 0 < x < 1, 0 < y < 1 and satisfies

u(x, y) = cos(πx + πy)

on the boundary of the square. Take a uniform grid of side h = 1/4 and order the nodes from bottom to top and left to right. Determine and solve the matrix equation for the 9 internal nodes.

6. Consider the Laplace equation

u_xx + u_yy = 0

on the semicircular region shown in Figure 8.3.6, subject to the Dirichlet boundary conditions

u(x, 0) = 1, u(x, y) = x² if x² + y² = 1.

[Figure 8.3.6: the upper half disk x² + y² ≤ 1, y ≥ 0, with the diameter from −1 to 1 on the x-axis.]

Approximate the solution u at the 10 interior nodes of the uniform grid with size h = 1/3.
7. Solve the Poisson equation

Δu = −1

on the L-shaped region obtained by removing the top left corner square of side 1/4 from the unit square, subject to zero boundary values on the boundary of the L-shaped region (see Figure 8.3.7). Take the uniform grid with h = 1/4.

[Figure 8.3.7: the unit square [0, 1] × [0, 1] with the top left quarter square removed.]

8.4 Finite Difference Methods for the Heat Equation.


In this section we will apply finite difference methods to solve the heat
equation in one and two spatial dimensions on a rectangular domain, subject
to Dirichlet boundary conditions. Parabolic partial differential equations sub-
ject to Neumann boundary conditions are treated in the same way as in the
previous section by introducing new “fictitious” points.
1°. One Dimensional Heat Equation. Dirichlet Boundary Condition.
Consider the heat equation on the domain D = (0, a) × (0, T), subject to an initial condition and Dirichlet boundary conditions:

(8.4.1) u_t(x, t) = c² u_xx(x, t), (x, t) ∈ D = {0 < x < a, 0 < t < T};
        u(x, 0) = f(x), 0 < x < a;
        u(0, t) = g0(t), u(a, t) = ga(t), 0 < t < T.

First we partition the rectangle D with the points

x_i = ih, t_j = jΔt, i = 0, 1, . . . , m; j = 0, 1, 2, . . . .

If u_{i,j} ≈ u(x_i, t_j), then we approximate the derivative u_t(x, t) with the forward finite difference

u_t(x_i, t_j) ≈ (u_{i,j+1} − u_{i,j})/Δt

and the derivative u_xx(x, t) with the central finite difference

u_xx(x_i, t_j) ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h².

Then the finite difference approximation of the heat equation (8.4.1) is

(u_{i,j+1} − u_{i,j})/Δt = (c²/h²)(u_{i+1,j} − 2u_{i,j} + u_{i−1,j}), 1 ≤ i ≤ m − 1; j = 0, 1, 2, . . . ,

or

(8.4.2) ui,j+1 = λui−1,j + (1 − 2λ)ui,j + λui+1,j ,

where the parameter

λ = c² Δt/h²

is called the grid ratio parameter. Equation (8.4.2) is called the explicit approximation of the parabolic equation (8.4.1). The parameter λ plays a role in the stability of the explicit approximation scheme (8.4.2). It can be shown that we have a stable approximation scheme only if 0 < λ ≤ 1/2, i.e., if 2c²Δt ≤ h².
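The stability condition can be observed directly. The Python sketch below (illustrative; the triangular initial data is chosen here only for the experiment) advances the explicit scheme (8.4.2) many steps for λ = 0.4 and λ = 0.6 and reports the largest value on the grid:

```python
def explicit_heat(lam, m, steps):
    # explicit scheme (8.4.2) with zero boundary values
    u = [min(i, m - i) / (m / 2) for i in range(m + 1)]  # triangular start
    for _ in range(steps):
        new = u[:]
        for i in range(1, m):
            new[i] = lam * u[i-1] + (1 - 2*lam) * u[i] + lam * u[i+1]
        u = new
    return max(abs(v) for v in u)

print(explicit_heat(0.4, 10, 200))  # lambda <= 1/2: the solution decays
print(explicit_heat(0.6, 10, 200))  # lambda > 1/2: the solution blows up
```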

Now, if we write the approximations for the derivatives in Equation (8.4.1) at the node (i, j + 1/2), then we obtain the following implicit, so called Crank–Nicolson approximation:

(8.4.3) −λu_{i−1,j+1} + (2 + 2λ)u_{i,j+1} − λu_{i+1,j+1} = λu_{i−1,j} + (2 − 2λ)u_{i,j} + λu_{i+1,j}.

The above system is solved in the following way. First, we take j = 0 and, using the given initial values

u_{1,0}, u_{2,0}, . . . , u_{m−1,0},

we solve the system (8.4.3) for the unknowns u_{1,1}, u_{2,1}, . . . , u_{m−1,1}. Using these values, we solve the system for the next unknowns u_{1,2}, u_{2,2}, . . . , u_{m−1,2}, and so on.
The advantage of the implicit Crank–Nicolson approximation over the explicit one is that its stability doesn't depend on the parameter λ.
It is important to note that for a fixed value of λ, the order of the approximation schemes (8.4.2) and (8.4.3) is O(h²).
Let us take a few examples to illustrate all of this.
Example 8.4.1. Consider the problem

u_t(x, t) = 10⁻² u_xx(x, t), 0 < x < 1, t > 0,

subject to the boundary conditions

u(0, t) = u(1, t) = 0, t > 0

and the initial condition

u(x, 0) = 20x for 0 ≤ x ≤ 1/2,  u(x, 0) = 20(1 − x) for 1/2 < x ≤ 1.

Using the explicit finite difference approximation find an approximation of the solution at several locations and several time instants and compare the obtained values with the exact solution.
Solution. Using the separation of variables method, given in Chapter 5, it can be shown (show this!) that the exact (analytical) solution of this problem is given by

u(x, t) = (80/π²) Σ_{n=0}^{∞} [(−1)^n/(2n + 1)²] sin((2n + 1)πx) e^{−0.01(2n+1)²π²t}.

Since we have homogeneous boundary conditions, system (8.4.2) can be written in the matrix form

(8.4.4) u^{(j+1)} = A · u^{(j)}, so that u^{(j)} = A^j · u^{(0)},

where

u^{(j)} = (u_{1,j}, u_{2,j}, . . . , u_{m−1,j})^T,  u^{(0)} = (f(h), f(2h), . . . , f((m − 1)h))^T,

and the (m − 1) × (m − 1) matrix A is given by

A =
[ 1 − 2λ    λ       0     . . .    0       0     ]
[   λ     1 − 2λ    λ     . . .    0       0     ]
[                   . . .                        ]
[   0       0       0     . . .  1 − 2λ    λ     ]
[   0       0       0     . . .    λ     1 − 2λ  ]

First, let us take h = 0.1, Δt = 0.4. In this case λ = 0.4 < 0.5. Solving the system (8.4.4) at t = 1.2, t = 4.8 and t = 14.4 we obtain approximations that are presented graphically in Figure 8.4.1 and in Table 8.4.1. From Figure 8.4.1 we notice pretty good agreement of the numerical and exact solutions.

Table 8.4.1
t_j\x_i   0.1     0.2     0.3     0.4     0.5     0.6     0.7
0.0      2.0000  4.0000  6.0000  8.0000  9.9899  8.0000  6.0000
1.2      1.9692  3.8617  5.5262  6.7106  7.1454  6.7106  5.5262
4.8      1.5494  2.9545  4.0793  4.8075  5.0598  4.8075  4.0793
14.4     0.5587  1.0628  1.4629  1.7198  1.8083  1.7198  1.4629

[Figure 8.4.1: exact and approximate solutions (λ = 0.4, h = 0.1, Δt = 0.4) at t = 0, 1.2, 4.8 and 14.4.]

Next we take h = 0.1, Δt = 0.6. In this case λ = 0.6. Solving the system (8.4.4) at t = 1.2, t = 2.4, t = 4.8 and t = 14.4 we obtain approximations that are presented in Table 8.4.2 and graphically in Figure 8.4.2. Big oscillations appear in the solution. These oscillations grow larger and larger as time increases. This instability is due to the fact that λ = 0.6 > 0.5.

Table 8.4.2
t_j\x_i   0.1      0.2       0.3       0.4       0.5      0.6       0.7
0.0      2.0000   4.0000    6.0000    8.0000   10.0000   8.0000    6.0000
1.2      2.0000   4.0000    6.0000    6.5600    8.0800   6.5600    6.0000
4.8      2.1244   1.8019    5.7749    2.6826    7.3010   2.6826    5.7749
14.4    73.374  −137.33   192.138  −222.257   237.528  −222.257  192.138

[Figure 8.4.2: exact and approximate solutions (λ = 0.6, h = 0.1, Δt = 0.6) at t = 1.2, 2.4, 4.8 and 14.4, showing the growing oscillations.]

Example 8.4.2. Solve the problem in Example 8.4.1 by the Crank–Nicolson approximation.
Solution. Let us write the system (8.4.3) in matrix form

(8.4.5) A · u^{(j+1)} = B · u^{(j)},

where

u^{(j)} = (u_{1,j}, u_{2,j}, . . . , u_{m−1,j})^T,  u^{(0)} = (f(h), f(2h), . . . , f((m − 1)h))^T,

and the (m − 1) × (m − 1) matrices A and B are given by

A =
[ 2 + 2λ   −λ       0     . . .    0       0     ]
[  −λ     2 + 2λ   −λ     . . .    0       0     ]
[                   . . .                        ]
[   0       0       0     . . .  2 + 2λ   −λ     ]
[   0       0       0     . . .   −λ     2 + 2λ  ]

and

B =
[ 2 − 2λ    λ       0     . . .    0       0     ]
[   λ     2 − 2λ    λ     . . .    0       0     ]
[                   . . .                        ]
[   0       0       0     . . .  2 − 2λ    λ     ]
[   0       0       0     . . .    λ     2 − 2λ  ]
Now, if we take λ = 0.6, which can be obtained for h = 0.1 (m = 10) and Δt = 0.6, then first we need to solve the 9 × 9 linear system (8.4.5) for u_{1,1}, u_{2,1}, . . . , u_{9,1}. In this step we use the given initial condition u_{i,0} = f(ih), i = 1, 2, . . . , 9. The results obtained are used in (8.4.5) to obtain the next values u_{i,2}, i = 1, 2, . . . , 9, and so on. The results obtained at the first 8 nodes are summarized in Table 8.4.3.

Table 8.4.3
t_j\x_i   0.1     0.2     0.3     0.4      0.5     0.6     0.7     0.8
0.0      2.0000  4.0000  6.0000  8.0000  10.0000  8.0000  6.0000  4.0000
1.2      1.9849  3.9346  5.7456  7.1174   7.6464  7.1174  5.7456  3.9346
4.8      1.5646  2.9861  4.1271  4.8680   5.1250  4.8680  4.1271  2.9861
14.4     0.6165  1.1728  1.6142  1.8970   1.9953  1.8970  1.6142  1.1728
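One Crank–Nicolson step amounts to solving the tridiagonal system A·u^{(j+1)} = B·u^{(j)}. The Python sketch below (illustrative, not the book's code) does this with the Thomas algorithm for λ = 0.6, starting from the t = 0 row of Table 8.4.3; no oscillations develop even though λ > 1/2:

```python
def crank_nicolson_step(u, lam):
    """One step of (8.4.5) with zero boundary values:
    A = tridiag(-lam, 2 + 2*lam, -lam), B = tridiag(lam, 2 - 2*lam, lam)."""
    n = len(u)
    rhs = [lam * (u[i-1] if i > 0 else 0.0)
           + (2 - 2*lam) * u[i]
           + lam * (u[i+1] if i < n - 1 else 0.0) for i in range(n)]
    beta = 2 + 2*lam
    c = [0.0] * n          # modified superdiagonal (Thomas algorithm)
    d = [0.0] * n          # modified right-hand side
    c[0] = -lam / beta
    d[0] = rhs[0] / beta
    for i in range(1, n):
        m = beta + lam * c[i-1]
        c[i] = -lam / m
        d[i] = (rhs[i] + lam * d[i-1]) / m
    v = [0.0] * n
    v[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        v[i] = d[i] - c[i] * v[i+1]
    return v

u = [2.0, 4.0, 6.0, 8.0, 10.0, 8.0, 6.0, 4.0, 2.0]  # t = 0 row of Table 8.4.3
for _ in range(24):                                  # 24 steps of 0.6 -> t = 14.4
    u = crank_nicolson_step(u, 0.6)
print(u)  # smooth, symmetric, decaying profile (compare the t = 14.4 row)
```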

2°. Two Dimensional Heat Equation on a Rectangle.


In this section we will describe the numerical solution of the two dimensional heat equation

u_t(x, y, t) = c² [u_xx(x, y, t) + u_yy(x, y, t)], 0 < x < a, 0 < y < b, t > 0,

subject to the initial condition

u(x, y, 0) = f(x, y)

and the Dirichlet boundary conditions

u(0, y, t) = f1(y), u(a, y, t) = f2(y);  u(x, 0, t) = g1(x), u(x, b, t) = g2(x).

With Δx = a/m, Δy = b/n we partition the rectangle with the points (iΔx, jΔy), i = 0, 1, 2, . . . , m, j = 0, 1, 2, . . . , n, and we denote the discrete time points by t_k = kΔt. If u^{(k)}_{i,j} denotes an approximation of the exact solution u(x, y, t) of the heat equation at the point (iΔx, jΔy) at time kΔt, then using the finite difference approximations for the partial derivatives in the heat equation we obtain the explicit approximation scheme

u^{(k+1)}_{i,j} = [1 − 2c²Δt(1/Δx² + 1/Δy²)] u^{(k)}_{i,j} + c²Δt [ (u^{(k)}_{i+1,j} + u^{(k)}_{i−1,j})/Δx² + (u^{(k)}_{i,j+1} + u^{(k)}_{i,j−1})/Δy² ].

In practical applications usually we take Δx = Δy = h, and the above explicit finite difference approximation becomes

(8.4.6) u^{(k+1)}_{i,j} = (1 − 4λ) u^{(k)}_{i,j} + λ (u^{(k)}_{i+1,j} + u^{(k)}_{i−1,j} + u^{(k)}_{i,j+1} + u^{(k)}_{i,j−1}),

where

λ = c² Δt/h².

In order to have a stable, and thus convergent, approximation, the incre-
ments ∆t and h should be selected such that

λ = c^2 ∆t/h^2 ≤ 1/4.
When solving the above system (8.4.6) for i = 1, 2, . . . , m − 1, j = 1, 2, . . . , n − 1,
we use the given initial conditions

u_{i,j}^{(0)} = f (ih, jh), i = 0, 1, 2, . . . , m; j = 0, 1, 2, . . . , n

and the given boundary conditions u_{i,0}^{(k)} , u_{i,n}^{(k)} , u_{0,j}^{(k)} and u_{m,j}^{(k)} .
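Scheme (8.4.6) is straightforward to program. The following Python sketch (ours — the book itself works in Mathematica; the function name and data layout are our own choices) advances the grid one time level, with the Dirichlet data stored in the boundary entries:

```python
def heat2d_step(u, lam):
    """One explicit step of scheme (8.4.6).

    u   -- 2-D list of grid values at time level k; the first and last
           rows/columns hold the Dirichlet boundary data and are copied over.
    lam -- grid ratio c^2*dt/h^2, which must satisfy lam <= 1/4 for stability.
    """
    m, n = len(u) - 1, len(u[0]) - 1
    new = [row[:] for row in u]            # boundary values carried over
    for i in range(1, m):
        for j in range(1, n):
            new[i][j] = ((1 - 4 * lam) * u[i][j]
                         + lam * (u[i + 1][j] + u[i - 1][j]
                                  + u[i][j + 1] + u[i][j - 1]))
    return new
```

Repeated calls advance the solution level by level; for λ = 1/4 the first term vanishes and every interior value becomes the average of its four neighbours.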
Example 8.4.3. Using the explicit finite difference method solve the two
dimensional heat equation
ut = uxx + uyy
in the rectangle 0 < x < 1.25, 0 < y < 1, subject to the initial and boundary
conditions
u(x, y, 0) = 1, u(x, 0, t) = u(x, 1, t) = u(0, y, t) = u(1.25, y, t) = 0.

Solution. If we take h = 1/4 (m = 5, n = 4) and ∆t = 1/64, then λ = 1/4 and
so the stability condition is satisfied. For this choice of m, n and ∆t the
system (8.4.6) becomes

u_{i,j}^{(k+1)} = (1/4) ( u_{i+1,j}^{(k)} + u_{i−1,j}^{(k)} + u_{i,j+1}^{(k)} + u_{i,j−1}^{(k)} ), 1 ≤ i ≤ 4; j = 1, 2, 3; k = 0, 1, . . .
The solutions of the above system taking k = 0, 1, 2 are presented in Table
8.4.4.
Table 8.4.4
j\i      u_{1,j}^{(1)}   u_{2,j}^{(1)}   u_{3,j}^{(1)}   u_{4,j}^{(1)}
j = 1    1/2      3/4      3/4      1/2
j = 2    3/4      1        1        3/4
j = 3    1/2      3/4      3/4      1/2
j\i      u_{1,j}^{(2)}   u_{2,j}^{(2)}   u_{3,j}^{(2)}   u_{4,j}^{(2)}
j = 1    3/8      9/16     9/16     3/8
j = 2    1/2      13/16    13/16    1/2
j = 3    3/8      9/16     9/16     3/8
j\i      u_{1,j}^{(3)}   u_{2,j}^{(3)}   u_{3,j}^{(3)}   u_{4,j}^{(3)}
j = 1    17/64    7/16     7/16     17/64
j = 2    25/64    39/64    39/64    25/64
j = 3    17/64    7/16     7/16     17/64
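The computation behind Table 8.4.4 can be repeated in exact rational arithmetic. A short Python check (ours, independent of the book's Mathematica code), using the grid of Example 8.4.3:

```python
from fractions import Fraction

# Grid of Example 8.4.3: h = 1/4, so i = 0,...,5 (x direction) and
# j = 0,...,4 (y direction); interior values 1, boundary values 0.
u = [[Fraction(0) if i in (0, 5) or j in (0, 4) else Fraction(1)
      for j in range(5)] for i in range(6)]

levels = []
for _ in range(3):                         # time levels k = 1, 2, 3
    new = [row[:] for row in u]
    for i in range(1, 5):
        for j in range(1, 4):
            new[i][j] = Fraction(1, 4) * (u[i + 1][j] + u[i - 1][j]
                                          + u[i][j + 1] + u[i][j - 1])
    u = new
    levels.append(u)
```

This gives, for instance, u_{1,1}^{(1)} = 1/2, u_{2,2}^{(2)} = 13/16, u_{1,1}^{(3)} = 17/64 and u_{2,2}^{(3)} = 39/64.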

We conclude this section with a few illustrations of Mathematica’s dif-


ferential equation solving functions, applied to parabolic partial differential
equations.
8.4 FINITE DIFFERENCE METHODS FOR THE HEAT EQUATION 501

Example 8.4.4. Solve the initial boundary value problem

ut (x, t) = uxx (x, t), 0 < x < 10, 0 < t < 0.5,
u(x, 0) = (1/10^5 ) x^2 (x − 10)^2 e^x , 0 < x < 10,
u(0, t) = u(10, t) = 0, 0 < t < 0.5.

Solution. The following Mathematica commands solve the problem.


In[1]:= U = NDSolve[{Derivative[2, 0][u][x, t] == Derivative[0, 1][u][x, t],
    u[x, 0] == 1/(10^5) x^2 (x - 10)^2 Exp[x], u[0, t] == 0, u[10, t] == 0},
    u, {x, 0, 10}, {t, 0, 0.5}];
Out[1]:= {{u -> InterpolatingFunction[{{0., 10.}, {0., 0.5}}, <>]}}
In[2]:= Plot3D[Evaluate[u[x, t] /. U[[1]]], {t, 0, 0.5}, {x, 0, 10},
    Mesh -> None, PlotRange -> All]
The plot of the solution of the problem is given in Figure 8.4.3, and a plot
at different time instances is given in Figure 8.4.4.

Figure 8.4.3  [surface plot of the solution u(x, t)]

Figure 8.4.4  [the solution at the times t = 0.01, t = 0.1 and t = 0.4]

Example 8.4.5. Solve the heat equation


ut (x, t) = uxx (x, t) + x cos 3x − 30t cos 9x, 0 < x < π, 0 < t < 1,
subject to the initial and Neumann boundary conditions
u(x, 0) = cos 5x, 0 < x < π, ux (0, t) = ux (π, t) = 0, 0 < t < 1.
Solution. The following Mathematica commands solve the problem.
In[1]:= U = NDSolve[{D[u[x, t], t] == D[u[x, t], x, x] + x Cos[3x]
    - 30 t Cos[9x], u[x, 0] == Cos[5x], Derivative[1, 0][u][0, t] == 0,
    Derivative[1, 0][u][Pi, t] == 0}, u, {x, 0, Pi}, {t, 0, 1},
    Method -> {"MethodOfLines", "SpatialDiscretization" ->
    {"TensorProductGrid", "MinPoints" -> 500}}];
Out[1]:= {{u -> InterpolatingFunction[{{0., 3.14159}, {0., 1.}}, <>]}}
In[2]:= Plot3D[Evaluate[u[x, t] /. U[[1]]], {t, 0, 1}, {x, 0, Pi},
    Mesh -> None, PlotRange -> All]
In[3]:= Plot[Evaluate[{u[x, .01] /. U[[1]], u[x, .4] /. U[[1]], u[x, .9] /. U[[1]]}],
    {x, 0, Pi}, PlotRange -> All, Ticks -> {{0, Pi/10, 3Pi/10, Pi/2, 7Pi/10,
    9Pi/10, Pi}, {-1, -1/2, 0, 1/2}}, Frame -> False]
The plot of the solution is given in Figure 8.4.5, and plots of the solution
at different time instances are given in Figure 8.4.6.

Figure 8.4.5  [surface plot of the solution u(x, t)]

Figure 8.4.6  [the solution at the times t = 0.01, t = 0.4 and t = 0.9]

Example 8.4.6. Let R be the square

R = {(x, y) : −5 < x < 5, −5 < y < 5}

and ∂R its boundary.
Using Mathematica graphically display at several time instances the solu-
tion of the two dimensional heat equation

ut (x, y, t) = uxx (x, y, t) + uyy (x, y, t), (x, y) ∈ R, 0 < t < 4,
u(x, y, 0) = 1.5 e^{−(x^2 + y^2)} , (x, y) ∈ R,
u(x, y, t) = 0, (x, y) ∈ ∂R, 0 < t < 4.

Solution. The following Mathematica commands solve the problem.


In[1]:= heat = First[NDSolve[{D[u[x, y, t], t] == D[u[x, y, t], x, x]
    + D[u[x, y, t], y, y], u[x, y, 0] == 1.5 Exp[-(x^2 + y^2)],
    u[-5, y, t] == 0, u[5, y, t] == 0, u[x, -5, t] == 0, u[x, 5, t] == 0},
    u, {x, -5, 5}, {y, -5, 5}, {t, 0, 4}]];
Out[1]:= {{u -> InterpolatingFunction[{{-5., 5.}, {-5., 5.}, {0., 4.}}, <>]}}
To get the plots of the solution of the heat equation at t = 0 (the initial
heat distribution) and t = 4 use
In[2]:= Plot3D[u[x, y, 0] /. heat, {x, -5, 5}, {y, -5, 5},
    PlotRange -> All, Mesh -> 20]
In[3]:= Plot3D[u[x, y, 4] /. heat, {x, -5, 5}, {y, -5, 5},
    PlotRange -> All, Mesh -> 20]
Their plots are displayed in Figure 8.4.7 and Figure 8.4.8, respectively.
Their plots are displayed in Figure 8.4.7 and Figure 8.4.8, respectively.

Figure 8.4.7  [initial heat distribution, t = 0]    Figure 8.4.8  [the solution at t = 4]

Exercises for Section 8.4.

1. Using the explicit finite difference method find a numerical solution of


the heat equation

ut (x, t) = uxx (x, t), 0 < x < 1,

subject to the initial and Dirichlet conditions

u(x, 0) = sin (2πx), u(0, t) = u(1, t) = 0

at the points (x = 1/4, tj = j · 0.01), j = 1, 2, . . . , 15. Consider
the following increments: h = 1/8, h = 1/16 and ∆t = 10^{−3} . Compare the
obtained results with the exact solution

u(x, t) = e^{−4π^2 t} sin (2πx).

2. Approximate the initial boundary value problem in the previous ex-
ercise by the Crank–Nicolson method taking h = 1/8, h = 1/16 and
∆t = 10^{−3} .

3. Apply the explicit method to the equation

ut = 0.1uxx , 0 < x < 20, t > 0,

subject to the initial and boundary conditions

u(x, 0) = 2, u(0, t) = 0, u(20, t) = 10.

Take h = 4.0 and ∆ t = 40.0 (λ = 0.25). Perform four iterations.

4. Write down the explicit finite difference approximation for the initial
boundary value problem

ut = ux + uxx , 0 < x < 1, t > 0,

u(x, 0) = x for 0 ≤ x ≤ 1/2;  u(x, 0) = 1 − x for 1/2 < x ≤ 1,

u(0, t) = u(1, t) = 0, 0 < t < 1.

Perform 8 iterations taking h = 0.1 and ∆t = 0.0025.



5. Write down the Crank–Nicolson approximation scheme for the initial
boundary value problem

ut (x, t) = uxx (x, t), 0 < x < 1, t > 0,
u(x, 0) = x, 0 < x < 1,
u(0, t) = u(1, t) = 0, t > 0.

Perform the first 4 iterations taking h = 0.25 and ∆t = 1/16.
Compare the obtained results with the exact solution

u(x, t) = (2/π) Σ_{n=1}^∞ ( (−1)^{n+1}/n ) e^{−n^2 π^2 t} sin (nπx),

which was derived by the separation of variables method in Chapter 5.

6. Approximate the equation

ut = uxx + 1, 0 < x < 1, t > 0,

subject to the initial and boundary conditions

u(x, 0) = 0, 0 < x < 1; u(0, t) = u(1, t) = 0.

Take h = 1/4 and ∆t = 1/32.
7. Write down and solve the explicit finite difference approximation of
the equation

ut = x uxx , 0 < x < 1/2, t > 0,

subject to the initial and Neumann boundary conditions

u(x, 0) = x(1 − x), ux (1/2, t) = −u(1/2, t)/2.

Take h = 0.1 and ∆t = 0.005.

Hint: Use fictitious points.

8.5 Finite Difference Methods for the Wave Equation.


In this section we will apply the finite difference methods to hyperbolic
partial differential equations of the first and second order in one and two
spatial dimensions on a rectangular domain, subject to Dirichlet boundary
conditions.
1°. Hyperbolic Equations of the First Order. In Section 4.2 of Chapter 4, we
considered the simple constant coefficient transport equation

(8.5.1) ut (x, t) + a ux (x, t) = 0, 0 < x < l, t > 0,

subject to the initial and boundary conditions

u(x, 0) = f (x), 0 ≤ x < l,
u(0, t) = g(t), t > 0.
The unique solution of this initial boundary value problem is given by

u(x, t) = f (x − at) for x − at > 0;  u(x, t) = g(t − x/a) for x − at < 0,

and this solution is constant along the characteristic lines x − at = const. Despite the fact
that we have an exact solution of the problem, a numerical approximation
method of this equation will be discussed, because it can serve as a model
for approximation of transport equations with variable coefficients a(x, t) or
even more complicated nonlinear equations.
As usual, we partition the spatial and time interval with the grid points
xi = ih, tj = j∆ t, i = 0, 1, . . . , m; j = 0, 1, 2, . . . , n.
Let ui,j ≈ u(xi , tj ) be the approximation to the exact solution u(x, t) at
the node (xi , tj ). Now, using the forward, backward and central approxima-
tions for the partial derivatives with respect to x and t the following finite
difference approximations of Equation (8.5.1) are obtained.
ui,j+1 = ui,j−1 − λ (ui+1,j − ui−1,j ),                                     Leap-Frog
ui,j+1 = (1/2)(1 − λ) ui+1,j + (1/2)(1 + λ) ui−1,j ,                        Lax–Friedrichs
ui,j+1 = (1/2)(λ^2 − λ) ui+1,j + (1 − λ^2 ) ui,j + (1/2)(λ^2 + λ) ui−1,j ,  Lax–Wendroff,

where
λ = a ∆t/h.
It can be shown that for stability of the Lax–Friedrichs and Lax–Wendroff
approximation schemes it is necessary that

|λ| = |a| ∆t/h ≤ 1.

The Leap-Frog scheme is likewise stable only for grid ratios with |λ| ≤ 1.
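All three one-step updates are single lines of code. A Python sketch of the Lax–Friedrichs and Lax–Wendroff steps (ours, not from the book; a periodic wrap-around index replaces the inflow boundary condition purely to keep the sketch short):

```python
def lax_friedrichs_step(u, lam):
    """One Lax-Friedrichs step for u_t + a u_x = 0, with lam = a*dt/h."""
    n = len(u)
    return [0.5 * (1 - lam) * u[(i + 1) % n] + 0.5 * (1 + lam) * u[(i - 1) % n]
            for i in range(n)]

def lax_wendroff_step(u, lam):
    """One Lax-Wendroff step for u_t + a u_x = 0, with lam = a*dt/h."""
    n = len(u)
    return [0.5 * (lam * lam - lam) * u[(i + 1) % n]
            + (1 - lam * lam) * u[i]
            + 0.5 * (lam * lam + lam) * u[(i - 1) % n]
            for i in range(n)]
```

At λ = 1 both updates reduce to the exact shift ui,j+1 = ui−1,j , as is clear from the coefficients.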
Let us take a few examples.

Example 8.5.1. Solve the transport equation

ut (x, t) + ux (x, t) = 0, 0 < x < 20, 0 < t < 1.6,

subject to the initial condition

f (x) = u(x, 0) =
    1,       0 ≤ x < 2,
    x − 1,   2 ≤ x < 4,
    7 − x,   4 ≤ x < 6,
    1,       6 ≤ x < 10,
    x − 9,   10 ≤ x < 11,
    13 − x,  11 ≤ x < 12,
    1,       12 ≤ x ≤ 20,

and the boundary condition

u(0, t) = u(20, t) = 1, 0 < t < 1.6.

Use the Lax–Friedrichs scheme with


(a) λ = 0.8, h = 0.5 and h = 0.25 at t = 0.4.

(b) λ = 1.6, h = 1, h = 0.5 and h = 0.25 at t = 1.6.

Solution. The exact solution of the problem is given by

u(x, t) = f (x − t).

Plots of u(x, t) at t = 0, t = 0.4 and t = 1.6 are given in Figure 8.5.1.


Figure 8.5.1  [plots of u(x, 0), u(x, 0.4) and u(x, 1.6)]
Figure 8.5.2  [exact and approximate solutions: λ = 0.8, h = 0.5, t = 0.4]

In order to start the Lax–Friedrichs approximation we use the initial con-
dition ui,0 = f (ih). For j ≥ 0 the approximations are obtained by

ui,j+1 = (1/2)(1 − λ) ui+1,j + (1/2)(1 + λ) ui−1,j .

Using Mathematica, an approximation ui,1 is obtained for λ = 0.8, h =
0.5 (∆t = 0.4) at t = 0.4. This solution is shown in Figure 8.5.2.
In Figure 8.5.3 the solution is shown for λ = 0.8, h = 0.25 (∆t = 0.2) at
the time moment t = 0.4.

Figure 8.5.3  [exact and approximate solutions: λ = 0.8, h = 0.25, t = 0.4]

From these figures we see that we have reasonable approximations except
at the sharp corners, due to the fact that f (x) is not smooth at these points.
For λ = 1.6 the situation is different, which can be seen from Figures 8.5.4,
8.5.5 and 8.5.6. Even though the grid size h is decreasing, the approximations are
not getting better. The reason for this behavior is that the stability condition of
the Lax–Friedrichs approximation scheme is not satisfied.
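The blow-up is easy to reproduce numerically. In the following Python sketch (ours; periodic grid and sine initial data, chosen only for simplicity) the maximum of the Lax–Friedrichs solution decays for λ = 0.8 but grows without bound for λ = 1.6:

```python
import math

def lf_step(u, lam):
    # one Lax-Friedrichs step on a periodic grid
    n = len(u)
    return [0.5 * (1 - lam) * u[(i + 1) % n] + 0.5 * (1 + lam) * u[(i - 1) % n]
            for i in range(n)]

def max_after(lam, steps=50, n=20):
    """Largest |u| after `steps` Lax-Friedrichs steps on sine initial data."""
    u = [math.sin(2 * math.pi * i / n) for i in range(n)]
    for _ in range(steps):
        u = lf_step(u, lam)
    return max(abs(v) for v in u)
```

Here max_after(0.8) stays below the initial maximum 1, while max_after(1.6) already exceeds 10 after only fifty steps.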
Figure 8.5.4  [exact and approximate solutions: λ = 1.6, h = 1, t = 1.6]

Figure 8.5.5  [exact and approximate solutions: λ = 1.6, h = 0.5, t = 1.6]

Figure 8.5.6  [exact and approximate solutions: λ = 1.6, h = 0.25, t = 1.6]

Example 8.5.2. Solve the transport equation in the previous example using
the Leap-Frog approximation. Take λ = 1.6, h = 0.5 and t = 1.6.
Solution. The Leap-Frog approximation is given by

ui,j+1 = ui,j−1 − λ (ui+1,j − ui−1,j ).

Let us note that the Leap-Frog method is a three level method. To find the
values of the function at one time step, it is necessary to know the values of
the function at the previous two time steps. As a result, to get started on the
method we need initial values at the first two time levels. For the initial level
we use the given initial condition ui,0 = f (ih). For the next level we use the
Forward-Time Central-Space scheme (see Exercise 1 of this section)

ui,j+1 = ui,j − (λ/2) (ui+1,j − ui−1,j ),

with j = 0. For j ≥ 1 use the Leap-Frog method. Using Mathematica, the


approximation ui,2 is obtained for λ = 1.6, h = 0.5 (∆ t = 0.8) at time
t = 1.6. This solution along with the exact solution f (x − 1.6) is shown in
Figure 8.5.7.
Figure 8.5.7  [Leap-Frog: exact and approximate solutions, λ = 1.6, h = 0.5, t = 1.6]
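The start-up procedure just described can be sketched compactly in Python (our code, not the book's; periodic wrap-around is used instead of the inflow boundary condition, just to keep the sketch short):

```python
import math

def leap_frog(f_vals, lam, steps):
    """Advance u_t + a u_x = 0 by `steps` time levels, lam = a*dt/h.
    The Forward-Time Central-Space scheme supplies the second start-up
    level; afterwards the three-level Leap-Frog formula takes over."""
    n = len(f_vals)
    prev = list(f_vals)
    cur = [prev[i] - 0.5 * lam * (prev[(i + 1) % n] - prev[(i - 1) % n])
           for i in range(n)]
    for _ in range(steps - 1):
        nxt = [prev[i] - lam * (cur[(i + 1) % n] - cur[(i - 1) % n])
               for i in range(n)]
        prev, cur = cur, nxt
    return cur

# demo: advect sin(x) on a periodic grid; the exact solution is sin(x - t)
n, lam, steps = 100, 0.5, 50
h = 2 * math.pi / n
approx = leap_frog([math.sin(i * h) for i in range(n)], lam, steps)
exact = [math.sin(i * h - steps * lam * h) for i in range(n)]
err = max(abs(p - q) for p, q in zip(approx, exact))
```

With smooth data and |λ| well below 1 the computed profile tracks the exact translate closely; constants are transported exactly.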

The above approximation schemes for the transport equation (8.5.1) with a
constant coefficient a can be modified to a transport equation with a variable
coefficient a = a(x, t, u). For simplicity we consider the case when a = a(x, t):

(8.5.2) ut (x, t) + a(x, t) ux (x, t) = 0, 0 < x < l, t > 0,

subject to the initial and boundary conditions

u(x, 0) = f (x), 0 ≤ x ≤ l; u(0, t) = g(t), t > 0.


We will discuss only the Lax–Wendroff approximation scheme for this equa-
tion. The other schemes can be derived similarly. If we assume that u(x, t) is
sufficiently smooth with respect to both variables, from Taylor’s formula we
have

(8.5.3) u(x, t + ∆t) ≈ u(x, t) + ut (x, t) ∆t + utt (x, t) (∆t)^2 /2.
If we differentiate Equation (8.5.2) with respect to t we obtain

utt = −at (x, t) ux (x, t) − a(x, t) utx (x, t) = −at (x, t) ux (x, t) − a(x, t) uxt (x, t)
    = −at (x, t) ux (x, t) + a(x, t) ( ax (x, t) ux (x, t) + a(x, t) uxx (x, t) )
    = ( −at (x, t) + a(x, t) ax (x, t) ) ux (x, t) + a(x, t)^2 uxx (x, t),

from which it follows that

(8.5.4) utt (x, t) = ( −at (x, t) + a(x, t) ax (x, t) ) ux (x, t) + a(x, t)^2 uxx (x, t).

From (8.5.3) and (8.5.4) we obtain

(8.5.5) u(x, t + ∆t) ≈ u(x, t) − a(x, t) ux (x, t) ∆t
        + (1/2) [ ( −at (x, t) + a(x, t) ax (x, t) ) ux (x, t) + a(x, t)^2 uxx (x, t) ] (∆t)^2 .

Now using the central approximations for the partial derivatives ux (x, t) and
uxx (x, t), from (8.5.5) we obtain the Lax–Wendroff approximation scheme for
the transport equation:

ui,j+1 = (1 − λ^2 a_{i,j}^2 ) ui,j
         + (λ/2) ( a_{i,j} + λ a_{i,j}^2 − (1/2) a_{i,j} a_{i,j}^{(x)} ∆t + (1/2) a_{i,j}^{(t)} ∆t ) ui−1,j
         + (λ/2) ( −a_{i,j} + λ a_{i,j}^2 + (1/2) a_{i,j} a_{i,j}^{(x)} ∆t − (1/2) a_{i,j}^{(t)} ∆t ) ui+1,j ,

where λ = ∆t/h and a_{i,j} , a_{i,j}^{(x)} and a_{i,j}^{(t)} are the values of a(x, t), ax (x, t)
and at (x, t), respectively, at the node (i, j).
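A Python sketch of this scheme (ours, under our reading of the coefficients; we take the common special case a = a(x), so that a_t = 0, and leave the end points untouched since the boundary treatment is problem-dependent):

```python
def lax_wendroff_var_step(u, a, ax, h, dt, x0=0.0):
    """One Lax-Wendroff step for u_t + a(x) u_x = 0 (so a_t = 0).

    a, ax -- callables returning a(x) and a'(x); the grid ratio is dt/h.
    The two end values are left unchanged.
    """
    lam = dt / h
    n = len(u)
    new = list(u)
    for i in range(1, n - 1):
        x = x0 + i * h
        ai, axi = a(x), ax(x)
        c0 = 1 - (lam * ai) ** 2
        cm = 0.5 * lam * (ai + lam * ai * ai - 0.5 * ai * axi * dt)
        cp = 0.5 * lam * (-ai + lam * ai * ai + 0.5 * ai * axi * dt)
        new[i] = c0 * u[i] + cm * u[i - 1] + cp * u[i + 1]
    return new
```

For a constant coefficient (ax ≡ 0) this collapses to the constant-coefficient Lax–Wendroff step; at λ|a| = 1 it shifts the interior profile by exactly one cell.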
Let us take an example to illustrate this numerical method.
Example 8.5.3. Use the Lax–Wendroff approximation method with h =
0.04 and ∆t = 0.02 to solve the problem

ut (x, t) + x ux (x, t) = 0, 0 < x < 1, 0 < t < 1,
u(x, 0) = f (x) = 1 + e^{−40(x−0.3)^2} , 0 < x < 1.

Display the graphs of the approximate solutions, along with the analytical
solution, at the time instances t = 0.02, t = 0.10 and t = 0.20.

Solution. Using the method of characteristics, explained in Chapter 4, we
have that the solution u(x, t) of the given initial value problem is

u(x, t) = f (x e^{−t}) = 1 + e^{−40(x e^{−t} − 0.3)^2} .

For the Lax–Wendroff approximation scheme we have the following. From
a(x, t) = x it follows that a_{i,j} = ih, a_{i,j}^{(x)} = 1 and a_{i,j}^{(t)} = 0 for every i and
every j. Therefore the Lax–Wendroff approximation scheme for our problem
is given by

ui,j+1 = (1 − λ^2 i^2 h^2 ) ui,j + (λ/2) ( ih + λ i^2 h^2 − (1/2) ih ∆t ) ui−1,j
         + (λ/2) ( −ih + λ i^2 h^2 + (1/2) ih ∆t ) ui+1,j .

The results obtained are displayed in Figure 8.5.8.
Figure 8.5.8  [exact and approximate solutions: (a) t = 0, (b) t = 0.02, (c) t = 0.10, (d) t = 0.20]

2°. One Dimensional Wave Equation of the Second Order. Now we consider
the initial boundary value problem for the one dimensional wave equation

utt (x, t) = c^2 uxx (x, t), 0 < x < a, t > 0,
u(x, 0) = f (x), ut (x, 0) = g(x), 0 < x < a,
u(0, t) = u(a, t) = 0, t > 0.

In Chapter 5 we showed that the analytical solution of the above initial
boundary value problem is given by d’Alembert’s formula

u(x, t) = ( f_odd (x − ct) + f_odd (x + ct) )/2 + (1/(2c)) ∫_{x−ct}^{x+ct} g_odd (s) ds,

where f_odd and g_odd are the odd, 2a-periodic extensions of f and g, re-
spectively.
Now we will present finite difference methods for solving the wave equation.
Using the central differences for both the time and space partial derivatives in
the wave equation we obtain

(8.5.6) ( ui,j+1 − 2ui,j + ui,j−1 )/∆t^2 = c^2 ( ui+1,j − 2ui,j + ui−1,j )/h^2 ,

or

(8.5.7) ui,j+1 = −ui,j−1 + 2(1 − λ^2 ) ui,j + λ^2 (ui+1,j + ui−1,j ), i, j = 1, 2, . . . ,

where λ is the grid ratio given by

λ = c ∆t/h.

The question now is how to start the approximation scheme (8.5.7). We
accomplish this in the following way. Using Taylor’s formula for u(x, t) with
respect to t we have that

(8.5.8) u(xi , t1 ) ≈ u(xi , 0) + ut (xi , 0) ∆t + utt (xi , 0) ∆t^2 /2.

From the wave equation and the initial condition u(x, 0) = f (x) we have

utt (xi , 0) = c^2 uxx (xi , 0) = c^2 f ′′ (xi ).

Using the other initial condition ut (x, 0) = g(x), Equation (8.5.8) becomes

(8.5.9) u(xi , t1 ) ≈ u(xi , 0) + g(xi ) ∆t + f ′′ (xi ) c^2 ∆t^2 /2.

Now if we use the central finite difference approximation for f ′′ (x), then
Equation (8.5.9) takes the form

u(xi , t1 ) ≈ u(xi , 0) + g(xi ) ∆t + ( f (xi+1 ) − 2f (xi ) + f (xi−1 ) ) c^2 ∆t^2 /(2h^2 )

and since f (xi ) = u(xi , 0) = ui,0 we have

u(xi , t1 ) ≈ (1 − λ^2 ) ui,0 + (λ^2 /2) ( ui−1,0 + ui+1,0 ) + g(xi ) ∆t.

The last approximation allows us to have the required first step

(8.5.10) ui,1 = g(xi ) ∆t + (1 − λ^2 ) ui,0 + (λ^2 /2) ( ui−1,0 + ui+1,0 ).
The approximation, defined by (8.5.7) and (8.5.10), is known as the explicit
finite difference approximation of the wave equation. The order of this ap-
proximation is O(h2 + ∆ t2 ).
It can be shown that this method is stable if λ ≤ 1.
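The explicit scheme (8.5.7) with the starting step (8.5.10) takes only a few lines of Python (our sketch, for homogeneous Dirichlet data; names and layout are our own):

```python
import math

def wave_explicit(f, g, c, h, dt, n, steps):
    """Explicit scheme (8.5.7) with starting step (8.5.10) for
    u_tt = c^2 u_xx on [0, n*h], u = 0 at both end points.
    Returns the grid values at time steps*dt."""
    lam = c * dt / h
    u0 = [f(i * h) for i in range(n + 1)]
    u0[0] = u0[n] = 0.0
    u1 = list(u0)
    for i in range(1, n):                       # starting step (8.5.10)
        u1[i] = (g(i * h) * dt + (1 - lam ** 2) * u0[i]
                 + 0.5 * lam ** 2 * (u0[i - 1] + u0[i + 1]))
    for _ in range(steps - 1):                  # scheme (8.5.7)
        u2 = list(u1)
        for i in range(1, n):
            u2[i] = (-u0[i] + 2 * (1 - lam ** 2) * u1[i]
                     + lam ** 2 * (u1[i + 1] + u1[i - 1]))
        u0, u1 = u1, u2
    return u1

# demo: at lam = 1 the standing wave sin(pi x) cos(pi t) is reproduced
u = wave_explicit(lambda x: math.sin(math.pi * x), lambda x: 0.0,
                  1.0, 0.05, 0.05, 20, 20)      # the solution at t = 1
```

At λ = 1 the scheme is exact at the grid points for this standing wave (up to round-off), a well-known property of the so-called magic time step.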
We can avoid the conditional stability in the above explicit scheme if we
consider the following implicit (Crank–Nicolson) scheme:

(1 + λ^2 ) ui,j+1 − (λ^2 /2) ui+1,j+1 − (λ^2 /2) ui−1,j+1
    = 2ui,j − (1 + λ^2 ) ui,j−1 + (λ^2 /2) ui+1,j−1 + (λ^2 /2) ui−1,j−1 .

The implicit Crank–Nicolson scheme is stable for every grid parameter λ.
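Each time level of the implicit scheme requires solving a tridiagonal linear system in the unknowns ui,j+1 . Such systems are handled efficiently by the standard Thomas algorithm; a generic Python sketch (ours, not tied to the book's notation):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
    c = super-diagonal, d = right-hand side (lists of equal length n;
    a[0] and c[n-1] are not used). Returns the solution vector."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the scheme above one would take diagonal entries 1 + λ², off-diagonal entries −λ²/2, and assemble the right-hand side from the two previous time levels.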
The following example illustrates the explicit numerical scheme.
Example 8.5.4. Consider the initial boundary value problem

utt (x, t) = uxx (x, t), 0 < x < 3, 0 < t < 2,
u(x, 0) = 2^8 x^8 (x − 1)^4 for 0 ≤ x ≤ 1;  u(x, 0) = 0 for 1 ≤ x ≤ 3,
ut (x, 0) = 0, 0 < x < 3,
u(0, t) = u(3, t) = 0, t > 0.

Using the explicit scheme (8.5.7) with different space and time steps we
approximate the solution of the problem at several time instances. Compare
the obtained numerical solutions with the analytical solution.
Solution. The following Mathematica program generates the numerical solu-
tion of the above initial boundary value problem.

In[1]:= Wave[n_, m_] :=
  Module[{i},
    uApp = Table[0, {n}, {m}];
    For[i = 1, i <= n, i++,
      uApp[[i, 1]] = f[i]];
    For[i = 2, i <= n - 1, i++,
      uApp[[i, 2]] = (1 - λ^2) f[i] + (λ^2/2) (f[i + 1] + f[i - 1])]];

EFDS[n_, m_] :=
  Module[{i, j},
    For[j = 3, j <= m, j++,
      For[i = 2, i <= n - 1, i++,
        uApp[[i, j]] = (2 - 2 λ^2) uApp[[i, j - 1]] +
          λ^2 (uApp[[i + 1, j - 1]] + uApp[[i - 1, j - 1]]) - uApp[[i, j - 2]]]]];
First let us take h = 0.02 and ∆ t = 0.02. For these chosen steps we have
λ = 0.2 < 1 and so we can expect a good convergence of the approximation
scheme.
At the given times, the numerical and analytical solutions are displayed
in Figure 8.5.9. We see that the numerical and analytical solutions almost
coincide.

Figure 8.5.9  [exact and approximate solutions: (a) t = 0.08, (b) t = 0.12, (c) t = 0.16, (d) t = 0.20]

In Table 8.5.1 the numerical approximations are given at the time instances
t = 0, t = 0.08, t = 0.16, t = 0.24 and t = 0.50 for the first 5 space points
x1 , x2 , . . . , x5 .

Table 8.5.1
           x1           x2           x3           x4            x5
t = 0.00   3.258×10^−8  6.718×10^−6  1.379×10^−4  1.0737×10^−3  4.943×10^−3
t = 0.08   0.000        1.400×10^−5  1.400×10^−4  8.284×10^−4   3.357×10^−3
t = 0.16   0.000        4.922×10^−4  2.128×10^−3  6.857×10^−3   1.769×10^−2
t = 0.24   0.000        4.293×10^−3  1.255×10^−2  2.871×10^−2   5.599×10^−2
t = 0.50   0.000        8.908×10^−2  1.657×10^−1  2.187×10^−1   2.382×10^−1

If we take h = 0.05 and ∆t = 0.08, then λ = 1.6 > 1, and so instability
can be expected. Indeed, this can be observed from Figure 8.5.10.

Figure 8.5.10  [exact and approximate solutions: (a) t = 0.08, (b) t = 0.24, (c) t = 0.32, (d) t = 0.40]

Example 8.5.5. Consider the initial boundary value problem

utt (x, t) = uxx (x, t), 0 < x < 1, 0 < t < 1,
u(0, t) = u(1, t) = 0, 0 < t < 1,
u(x, 0) = f (x) = x(1 − x), 0 < x < 1,
ut (x, 0) = g(x) = sin (πx), 0 < x < 1.

Using the Crank–Nicolson approximation method solve numerically the
above problem. Take h = 0.02 and ∆t = 0.01 and the following time in-
stances: t = 0.03, t = 0.10 and t = 0.80. Display graphically the results
obtained.
Solution. The analytical solution of the above problem is given by

u(x, t) = (8/π^3 ) Σ_{n=0}^∞ ( sin ((2n + 1)πx)/(2n + 1)^3 ) cos ((2n + 1)πt) + (1/π) sin (πx) sin (πt)

(see the separation of variables method for the wave equation described in
Section 5.3 of Chapter 5).
If we take n = 1/h and m = 1/∆t in the Crank–Nicolson approximation
method, then the following system is obtained:

ui,0 = f (ih), i = 0, 1, . . . , n,
ui,1 = (1 − λ^2 ) f (ih) + ∆t g(ih) + (λ^2 /2) ( f ((i + 1)h) + f ((i − 1)h) ), 1 ≤ i ≤ n − 1,
ui,j = 2(1 − λ^2 ) ui,j−1 + λ^2 ( ui+1,j−1 + ui−1,j−1 ) − ui,j−2 , 1 ≤ i ≤ n − 1; 2 ≤ j ≤ m.

The results obtained are displayed in Figure 8.5.11.


Figure 8.5.11  [exact and approximate solutions: (a) t = 0.00, (b) t = 0.02, (c) t = 0.10, (d) t = 0.80]

Exercises for Section 8.5.

1. Derive the following finite difference approximations of the transport
equation (8.5.1).

ui,j+1 = ui,j − λ (ui+1,j − ui,j ),
ui,j+1 = ui,j − λ (ui,j − ui−1,j ),
ui,j+1 = ui,j − (λ/2) (ui+1,j − ui−1,j ).

2. Solve the one dimensional wave equation

ut (x, t) + ux (x, t) = 0, 0 < x < 2, 0 < t < 1,

with the initial condition

u(x, 0) = 1 for 0 ≤ x ≤ 1/4;  1 + (cos 2πx)^2 for 1/4 ≤ x ≤ 3/4;  1 for 3/4 ≤ x ≤ 2,

and the boundary conditions

u(0, t) = u(2, t) = 0, 0 < t < 2.4.

Use the Lax–Friedrichs scheme with h = 0.04 and λ = 0.8. Display the re-
sults at t = 0 and t = 0.24.

3. Using the Lax–Wendroff method with h = 0.25 and ∆t = 0.20 solve
numerically the following problem.

ut (x, t) + ux (x, t) = 0, 0 < x < ∞, t > 0,

subject to the initial condition

u(x, 0) = f (x) = 1 for 0 ≤ x ≤ 1;  1 + (x − 1)^2 (x − 3)^2 for 1 ≤ x ≤ 3;  1 for 3 ≤ x < ∞,

and the boundary condition

u(0, t) = 0, t > 0.

Compute the numerical values at the first 8 space points and the first
10 time instances. Compare the results obtained with the analytical
solution.

4. Using the Leap-Frog scheme with h = 0.1 and λ = 0.5 approximate
the solution at t = 0.3 of the initial boundary value problem

ut (x, t) + ux (x, t) = 0, 0 < x < 1, t > 0,
u(x, 0) = f (x) = 1 + sin (πx), 0 < x < 1,
u(0, t) = g(t) = 1 + sin (πt), t > 0.

Compare your results with the exact solution.

5. Consider the first order wave equation

ut (x, t) + ( (1 + x^2 )/(1 + 2xt + 2x^2 + x^4 ) ) ux (x, t) = 0, 0 < x < 2, t > 0,

subject to the initial and boundary conditions

u(x, 0) = f (x) = 1 + e^{−100(x−0.5)^2} , 0 < x < 2;
u(0, t) = g(t) = 0, t > 0.

Use the Lax–Wendroff scheme to approximate its solution. Use h =
0.005 and ∆t = 0.00125. Compare your results with the exact
solution.

6. For the given initial boundary value problem

utt (x, t) = uxx (x, t), 0 < x < 3, 0 < t < 3,
u(x, 0) = sin (πx), 0 < x < 3,
ut (x, 0) = 0, 0 < x < 3,
u(0, t) = u(3, t) = 0,

use the explicit scheme (8.5.7) with h = 0.05, ∆t = 0.04 to approx-
imate the solution and compare with the analytical solution.

7. For the initial boundary value problem

utt (x, t) = uxx (x, t), 0 < x < 1, 0 < t < 1,
u(x, 0) = f (x) = (1/2) x(1 − x), 0 < x < 1,
ut (x, 0) = 0, 0 < x < 1, u(0, t) = u(1, t) = 0,

use the explicit scheme (8.5.7) with h = 0.1, ∆ t = 0.05 to approx-


imate the solution. Calculate the numerical solution at the first 9
space grid points and the first 5 time instances. Compare the numer-
ical solution with the analytical solution.

8. Apply the explicit scheme to the following initial boundary value
problem.

utt (x, t) = uxx (x, t), 0 < x < 1, 0 < t < 1,
u(x, 0) = 1 + sin (πx) + 3 sin (2πx), 0 < x < 1,
u(0, t) = u(1, t) = 1, 0 < t < 1,
ut (x, 0) = sin (πx), 0 < x < 1.

Use h = 0.05, ∆t = 0.0025 to approximate the solution. Calculate
the numerical solution at the first 8 space grid points and the first 6
time instances. Compare the numerical solution with the analytical
solution obtained by d’Alembert’s method.

9. Derive the matrix forms of the systems of equations for the solution
of the one dimensional wave equation

utt (x, t) = a^2 uxx (x, t), 0 < x < l, t > 0,

subject to the initial conditions

u(x, 0) = f (x), 0 < x < l,
ut (x, 0) = g(x), 0 < x < l,

and the boundary conditions

u(0, t) = α(t), t > 0,
u(l, t) = β(t), t > 0,

by the explicit approximation method.

10. Let R be the unit square

R = {(x, y) : 0 < x < 1, 0 < y < 1}

with boundary ∂R and let ∆x,y be the Laplace operator

∆x,y u = uxx + uyy .

Use the central approximations for the second partial derivatives
with respect to x, y and t to derive an explicit finite difference ap-
proximation of the following two dimensional wave boundary value
problem

utt (x, y, t) = a^2 ∆x,y u(x, y, t), (x, y) ∈ R, 0 < t < T ,
u(x, y, 0) = f (x, y), (x, y) ∈ R,
ut (x, y, 0) = g(x, y), (x, y) ∈ R,
u(x, y, t) = 0, (x, y) ∈ ∂R, 0 < t < T .
APPENDICES

A. Table of Laplace Transforms

f (t)                                     F (s) = L{f } (s)

1.  1                                     1/s, s > 0
2.  e^{at}                                1/(s − a), s > a
3.  sin (at)                              a/(s^2 + a^2 ), s > 0
4.  cos (at)                              s/(s^2 + a^2 ), s > 0
5.  sinh (at)                             a/(s^2 − a^2 ), s > |a|
6.  cosh (at)                             s/(s^2 − a^2 ), s > |a|
7.  t^n e^{at} , n ∈ N                    n!/(s − a)^{n+1} , s > a
8.  e^{at} sin (bt)                       b/((s − a)^2 + b^2 ), s > a
9.  e^{at} cos (bt)                       (s − a)/((s − a)^2 + b^2 ), s > a
10. t^n , n ∈ N                           n!/s^{n+1}
11. H(t − a) ‡                            e^{−as}/s, s > 0
12. H(t − a) f (t − a) ‡                  e^{−as} L{f }(s)
13. t^ν e^{at} , Re ν > −1                Γ(ν + 1)/(s − a)^{ν+1}
14. t cos (at)                            (s^2 − a^2 )/(s^2 + a^2 )^2 , s > 0
15. t sin (at)                            2as/(s^2 + a^2 )^2 , s > 0
16. sin (at)/t                            arctan (a/s), s > 0
17. Jn (at), n > −1                       (1/√(s^2 + a^2 )) ( a/(s + √(s^2 + a^2 )) )^n
18. In (at), n > −1                       (1/√(s^2 − a^2 )) ( a/(s + √(s^2 − a^2 )) )^n
19. t^n Jn (at), n > −1/2                 2^n a^n Γ(n + 1/2) / ( √π (s^2 + a^2 )^{n+1/2} )
20. sin (2√(at))/√(πa)                    e^{−a/s}/s^{3/2}
21. cos (2√(at))/√(πt)                    e^{−a/s}/√s
22. e^{−a^2/(4t)}/√(πt)                   e^{−a√s}/√s
23. δ(t)                                  1
24. δ(t − a)                              e^{−as} , a > 0


A. Table of Laplace Transforms (Continued)

f (t)                                     F (s) = L{f } (s)

25. t^{−1/2} e^{−a/t}                     √(π/s) e^{−2√(as)}
26. t^{−3/2} e^{−a/t}                     √(π/a) e^{−2√(as)}
27. (1/√(πt)) (1 + 2at) e^{at}            s/((s − a)√(s − a))
28. (1/(2√(πt^3 ))) (e^{bt} − e^{at})     √(s − a) − √(s − b)
29. e^{−a^2 t^2}                          (√π/(2a)) e^{s^2/(4a^2)} erfc (s/(2a))
30. erf (at)                              (1/s) e^{s^2/(4a^2)} erfc (s/(2a))
31. (1/t) sin (√(at))                     π erf (√(a/(4s)))
32. erf (√t)                              1/(s√(s + 1))
33. e^{at} f (t)                          L{f }(s − a)
34. f (t + T ) = f (t), T > 0             (1/(1 − e^{−T s})) ∫_0^T e^{−st} f (t) dt
35. (f ∗ g)(t)                            L{f }(s) L{g}(s)
36. f (at)                                (1/a) L{f }(s/a)
37. ∫_0^t f (τ ) dτ                       (1/s) L{f }(s)
38. f^{(n)} (t), n = 0, 1, 2, . . .       s^n L{f }(s) − s^{n−1} f (0) − s^{n−2} f ′ (0) − . . . − f^{(n−1)} (0)
39. f ′ (t)                               s L{f }(s) − f (0)
40. f ′′ (t)                              s^2 L{f }(s) − s f (0) − f ′ (0)
41. t^n f (t)                             (−1)^n ( L{f } )^{(n)} (s)
42. f (t)/t                               ∫_s^∞ L{f }(x) dx
43. f (t)/t^2                             ∫_s^∞ ∫_y^∞ L{f }(x) dx dy
44. δ^{(n)} (t − a), n = 0, 1, 2, . . .   s^n e^{−as}

B. Table of Fourier Transforms

f (x)                              F{f } (ω)

1.  sinc (ax) ✠                    (1/|a|) rect (ω/(2πa)) †
2.  e^{−ax^2}                      √(π/a) e^{−ω^2/(4a)}
3.  e^{−a|x|}                      2a/(ω^2 + a^2 )
4.  e^{−ax} H(x) ‡                 1/(a + iω)
5.  sech (ax)                      (π/a) sech (πω/(2a))
6.  1                              2π δ(ω)
7.  δ(x)                           1
8.  cos (ax)                       π ( δ(ω − a) + δ(ω + a) )
9.  sin (ax)                       −πi ( δ(ω − a) − δ(ω + a) )
10. sin (ax)/(1 + x^2 )            −(πi/2) ( e^{−|ω−a|} − e^{−|ω+a|} )
11. cos (ax)/(1 + x^2 )            (π/2) ( e^{−|ω−a|} + e^{−|ω+a|} )
12. rect (ax) †                    (1/|a|) sinc (ω/(2πa)) ✠
13. cos (ax^2 )                    √(π/a) cos ( (ω^2 − aπ)/(4a) )
14. sin (ax^2 )                    −√(π/a) sin ( (ω^2 − aπ)/(4a) )
15. |x|^n e^{−a|x|}                Γ(n + 1) ( 1/(a − iω)^{n+1} + 1/(a + iω)^{n+1} )
16. sgn (x) †                      2/(iω)
17. H(x) ‡                         π ( 1/(iπω) + δ(ω) )
18. 1/x^n                          −iπ ( (−iω)^{n−1}/(n − 1)! ) sgn (ω) †
19. J0 (x)                         ( 2/√(1 − ω^2 ) ) rect (ω/2) †
20. e^{iax}                        2π δ(ω − a)
21. x^n                            2π i^n δ^{(n)} (ω) ‡
22. 1/(1 + a^2 x^2 )               (π/|a|) e^{−|ω/a|}
23. x/(a^2 + x^2 )                 −πi e^{−a|ω|} sgn (ω) †

B. Table of Fourier Transforms (Continued)

f (x)                              F{f } (ω)

24. e^{iax} f (x)                  F{f }(ω − a)
25. cos (ax) f (x)                 (1/2) ( F{f }(ω − a) + F{f }(ω + a) )
26. sin (ax) f (x)                 −(i/2) ( F{f }(ω − a) − F{f }(ω + a) )
27. f (x/a)                        |a| F{f }(aω)
28. f (x − a)                      e^{−iaω} F{f }(ω)
29. a f (x) + b g(x)               a F{f }(ω) + b F{g}(ω)
30. f ′ (x)                        iω F{f }(ω)
31. f ′′ (x)                       −ω^2 F{f }(ω)
32. f^{(n)} (x), n = 0, 1, . . .   (iω)^n F{f }(ω)
33. x f (x)                        i ( F{f } )′ (ω)
34. x^n f (x), n = 0, 1, . . .     i^n ( F{f } )^{(n)} (ω)
35. (f ∗ g)(x)                     F{f }(ω) F{g}(ω)
36. f (x) g(x)                     (1/(2π)) ( F{f } ∗ F{g} )(ω)

Notations for the Laplace/Fourier Transform Tables

† The box function rect (x) and the signum function sgn (x):

rect (x) = 0 for |x| > 1/2;  1/2 for x = ±1/2;  1 for |x| < 1/2;
sgn (x) = −1 for x < 0;  0 for x = 0;  1 for x > 0.

‡ The Heaviside function H(x) and the Dirac delta “function” δ(x):

H(x) = 0 for −∞ < x < 0;  1 for 0 ≤ x < ∞;
δ(x) = ∞ for x = 0;  0 for x ̸= 0.

✠ The sinc function sinc (x):

sinc (x) = sin (πx)/(πx).

C. Series and Uniform Convergence Facts

Suppose S ⊆ R and fn : S → R are real-valued functions for every natural
number n. We say that the sequence (fn ) is pointwise convergent on S with
limit f : S → R if for every ϵ > 0 and every x ∈ S there exists a natural
number n0 (ϵ, x) such that for all n > n0 (ϵ, x),

|fn (x) − f (x)| < ϵ.

Suppose S ⊆ R and fn : S → R are real-valued functions for every natural


number n. We say that the sequence (fn ) is uniformly convergent on S with
limit f : S → R if for every ϵ > 0, there exists a natural number n0 (ϵ) such
that for all x ∈ S and all n > n0 (ϵ),

|fn (x) − f (x)| < ϵ.

Let us compare uniform convergence with pointwise convergence. In the
case of uniform convergence, n0 may depend only on ϵ, while in the case
of pointwise convergence n0 may depend on both ϵ and x. It is clear that
uniform convergence implies pointwise convergence.
The following are well known results about uniform convergence.
Theorem C.1. Cauchy Criterion for Uniform Convergence. A sequence
of functions (fn ) converges uniformly on a set S ⊆ R if and only if
for every ϵ > 0 there exists a natural number n0 (ϵ) such that for all x ∈ S
and all natural numbers n, m > n0 (ϵ),

|fn (x) − fm (x)| < ϵ.


Theorem C.2. Suppose that (fn ) is a sequence of continuous functions on
a set S ⊆ R and fn converges uniformly on S to a function f . Then f is
continuous on S.

Theorem C.3. Let (fn ) be an increasing sequence of continuous functions
defined on a closed interval [a, b]. If the sequence is pointwise convergent on
[a, b], then it converges uniformly on [a, b].

Theorem C.4. Let (fn ) be a sequence of integrable functions on [a, b].
Assume that fn converges uniformly on [a, b] to a function f . Then f is
integrable and moreover,

lim_{n→∞} ∫_a^b fn (x) dx = ∫_a^b f (x) dx.
Theorem C.5. Let (fn ) be a sequence of differentiable functions on [a, b]
such that the numerical sequence ( fn (x0 ) ) converges at some point x0 ∈ [a, b].
If the sequence of derivatives ( fn′ ) converges uniformly on [a, b], then (fn )
converges uniformly on [a, b] to a differentiable function f and moreover,

lim_{n→∞} fn′ (x) = f ′ (x)

for every x ∈ [a, b].

Convergence properties of an infinite series

(C.1)  ∑_{k=1}^∞ fk (x)

of functions are identified with those of the corresponding sequence

Sn (x) = ∑_{k=1}^n fk (x)

of partial sums. So, the infinite series (C.1) is pointwise, absolutely or
uniformly convergent on S ⊆ R if the sequence ( Sn (x) ) of partial sums is
pointwise, absolutely or uniformly convergent, respectively, on S.
The following criterion for uniform convergence of infinite series is useful.
Theorem C.6. Weierstrass M -Test. Let (fn ) be a sequence of functions
defined on a set S such that |fn (x)| ≤ mn for every x ∈ S and every n ∈ N.
If

∑_{n=1}^∞ mn < ∞,

then

∑_{n=1}^∞ fn

is uniformly convergent on S.
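As a numerical illustration (a hypothetical example in Python, not from the book), take fn(x) = sin(nx)/n² with dominating constants mn = 1/n². By the Cauchy criterion, the supremum of |S2N − SN| over any set of points cannot exceed the tail ∑_{n=N+1}^{2N} mn:

```python
# Partial sums of the series of sin(nx)/n^2, dominated by m_n = 1/n^2.
import numpy as np

xs = np.linspace(-10.0, 10.0, 2001)

def partial_sum(N):
    n = np.arange(1, N + 1)
    # rows correspond to x-values, columns to the terms sin(nx)/n^2
    return (np.sin(np.outer(xs, n)) / n**2).sum(axis=1)

N = 100
gap = float(np.max(np.abs(partial_sum(2 * N) - partial_sum(N))))
tail = float((1.0 / np.arange(N + 1, 2 * N + 1)**2).sum())  # sum of m_n
```

The computed gap is bounded by the tail of the convergent dominating series, which is the mechanism behind the uniform convergence asserted by the M-test.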

An important class of infinite series of functions is the power series

∑_{n=0}^∞ an x^n

in which (an ) is a sequence of real numbers and x a real variable. The basic
convergence properties of power series are described by the following theorem.
Theorem C.7. For the power series

∑_{n=0}^∞ an x^n

let R be the largest limit point of the sequence ( |an |^{1/n} ). Then:

a. If R = 0, then the power series is absolutely convergent on the whole
real line and it is uniformly convergent on any bounded set of the real
line.

b. If R = ∞, then the power series is convergent only for x = 0.

c. If 0 < R < ∞, then the power series is absolutely convergent for every
x in the open interval (−1/R, 1/R) and uniformly convergent on every
set { x : |x| ≤ r < 1/R }. For |x| > 1/R the power series is divergent.
A point a is called a limit point of a sequence (an ) if every open interval
containing the point a contains infinitely many terms of the sequence.
The following are several examples of power series:

1/(1 − x) = ∑_{n=0}^∞ x^n ,  |x| < 1;    arctan x = ∑_{n=0}^∞ (−1)^n x^{2n+1}/(2n + 1),  −1 ≤ x ≤ 1.

e^x = ∑_{n=0}^∞ x^n /n! ,  x ∈ R;    ln (1 + x) = ∑_{n=0}^∞ (−1)^n x^{n+1}/(n + 1),  −1 < x ≤ 1.

sin x = ∑_{n=0}^∞ (−1)^n x^{2n+1}/(2n + 1)! ,  x ∈ R;    cos x = ∑_{n=0}^∞ (−1)^n x^{2n}/(2n)! ,  x ∈ R.
D. Basic Facts of Ordinary Differential Equations

1. First Order Differential Equations


We will review some basic facts of ordinary differential equations of the
first and second order.
A differential equation of the first order is an equation of the form

(D.1) F (x, y, y ′ ) = 0,

where F is a given function of three variables.


A solution of (D.1) is a differentiable function f (x) such that

F (x, f (x), f ′ (x)) = 0,

for all x in the domain (interval) where f is defined.


The general solution of (D.1) is the set of all solutions of (D.1).
An equation of the form

(D.2) g(y) dy = f (x) dx

is said to be separable. A separable equation can be integrated. Simply


integrate both sides of equation (D.2).
An equation of the form

(D.3) M (x, y) dx + N (x, y) dy = 0,

where M = M (x, y) and N = N (x, y) are C 1 (R) functions (functions which


have continuous partial derivatives in R) in a rectangle R = [a, b]×[c, d] ⊆ R2
is said to be exact if
My (x, y) = Nx (x, y)
at each point (x, y) ∈ R.
If an equation is exact, then there exists a function u = u(x, y) ∈ C 1 (R)
such that
ux = M, uy = N in R
and the general solution of (D.3) is given by u(x, y) = C, where C is any
constant.
Example D.1. Find the general solution of the equation
( ) ( )
1 + y cos xy dx + x cos xy + 2y dy = 0.

Solution. In this example

M = 1 + y cos (xy), N = x cos (xy) + 2y.

Since My = cos xy − xy sin xy and Nx = cos xy − xy sin xy we have My =


Nx , and so the equation is exact. Therefore, there exists a function u =
u(x, y) such that
ux = M = 1 + y cos xy and uy = x cos xy + 2y.
Integrating the first equation with respect to x we have

u(x, y) = ∫ ( 1 + y cos xy ) dx + f (y) = x + sin xy + f (y).

Now, from uy = x cos xy + f ′ (y) = x cos xy + 2y it follows that f ′ (y) = 2y
and so f (y) = y². Thus,

u(x, y) = x + sin xy + y²

and so the general solution of the equation is given by

x + sin xy + y² = C.
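The computations of Example D.1 can be verified symbolically. The following SymPy sketch (an added illustration; the book itself works in Mathematica) checks the exactness condition My = Nx and that u(x, y) = x + sin xy + y² satisfies ux = M and uy = N:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 1 + y * sp.cos(x * y)
N = x * sp.cos(x * y) + 2 * y
u = x + sp.sin(x * y) + y**2

exact_gap = sp.simplify(sp.diff(M, y) - sp.diff(N, x))  # M_y - N_x, should be 0
check_x = sp.simplify(sp.diff(u, x) - M)                # u_x - M, should be 0
check_y = sp.simplify(sp.diff(u, y) - N)                # u_y - N, should be 0
```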

Sometimes, even though equation (D.3) is not exact, there is a function


µ = µ(x, y), not identically zero, such that the equation
(µM ) dx + (µN ) dy = 0
is exact. In this case the function µ is called an integrating factor for (D.3).
Example D.2. (Linear Equation of the First Order). Consider the first order
linear equation

(D.4) y′ + p(x)y = q(x),

where p = p(x) and q = q(x) are continuous functions in some interval. It
can be easily checked that this equation has an integrating factor e^{∫ p(x) dx} .
Using this integrating factor, the general solution of (D.4) is obtained to be

y = e^{−∫ p(x) dx} [ ∫ q(x) e^{∫ p(x) dx} dx + C ],

where C is any constant.
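The formula can be exercised on a concrete equation, say y′ + y = x (a hypothetical example chosen for this sketch); SymPy confirms that the resulting y solves the equation:

```python
import sympy as sp

x, C = sp.symbols('x C')
p, q = sp.Integer(1), x                  # the equation y' + y = x
mu = sp.exp(sp.integrate(p, x))          # integrating factor e^{∫ p dx} = e^x
y = (sp.integrate(q * mu, x) + C) / mu   # the general-solution formula above
residual = sp.simplify(sp.diff(y, x) + p * y - q)  # should vanish identically
```

Here y simplifies to x − 1 + C e^{−x}, the familiar general solution of this equation.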

A differential equation of the form


(D.5) y ′ = f (x, y),
subject to an initial condition
(D.6) y(x0 ) = y0
is usually called an initial value problem.
The following theorem is of fundamental importance in the theory of dif-
ferential equations.
Theorem D.1. Existence and Uniqueness. Suppose that f (x, y) and


its partial derivative fy (x, y) are continuous functions on the rectangle R =
{(x, y) : a < x < b, c < y < d} containing the point (x0 , y0 ). Then in some
interval (x0 − h, x0 + h) ⊂ [a, b] there exists a unique solution y = ϕ(x) of
the initial value problem (D.5), (D.6).

2. Second Order Linear Differential Equations


A differential equation of the form

(D.7) a(x)y ′′ + b(x)y ′ + c(x)y = f (x)

is called a second order linear equation. The coefficients a(x), b(x) and c(x)
are assumed to be continuous on an interval I.
If a(x) ̸= 0 for every x ∈ I, then we can divide by a(x) and equation
(D.7) takes the normal form

(D.8) y ′′ + p(x)y ′ + q(x)y = r(x).

If a(x0 ) ̸= 0 at some point x0 ∈ I, then x0 is called an ordinary point for
(D.7). If a(x0 ) = 0, then x0 is called a singular point for (D.7).
If f (x) ≡ 0, then the equation

(D.9) a(x)y ′′ + b(x)y ′ + c(x)y = 0

is called homogeneous.
Theorem D.2. If the function yp (x) is any particular solution of the
nonhomogeneous equation (D.7) and yh (x) is the general solution of the
homogeneous equation (D.9), then

y(x) = yh (x) + yp (x)

is the general solution of the nonhomogeneous equation (D.7).


For homogeneous equations the following principle of superposition holds.
Theorem D.3. If y1 (x) and y2 (x) are both solutions to the homogeneous,
second order equation (D.9), then any linear combination

c1 y1 (x) + c2 y2 (x),

where c1 and c2 are constants, is also a solution to (D.9).

For second order linear equations we have the following existence and
uniqueness theorem.
Theorem D.4. Existence and Uniqueness Theorem. Let p(x), q(x)


and r(x) be continuous functions on the interval I = (a, b) and let x0 ∈
(a, b). Then the initial value problem

y ′′ + p(x)y ′ + q(x)y = r(x), y(x0 ) = y0 , y ′ (x0 ) = y1

has a unique solution on I, for any numbers y0 and y1 .

Now we introduce linearly independent and linearly dependent functions.
Two functions y1 (x) and y2 (x) (neither identically zero) are said to be
linearly independent on an interval I if the condition

c1 y1 (x) + c2 y2 (x) = 0 for every x ∈ I,

for constants c1 and c2 , is satisfied only when c1 = c2 = 0. In other words,
two functions are linearly independent if neither of them is a scalar multiple
of the other. If two functions are not linearly independent, then we say they
are linearly dependent. The importance of linearly independent functions
comes from the following theorem.
Theorem D.5. Let the coefficient functions p(x) and q(x) in the homoge-
neous equation (D.9) be continuous on the open interval I. If y1 = y1 (x)
and y2 = y2 (x) are two linearly independent solutions on the interval I of
the homogeneous equation (D.9), then any solution y = y(x) of Equation
(D.9) is of the form
y = c1 y1 + c2 y2 ,
for some constants c1 and c2 .
In this case we say that the system {y1 , y2 } is a fundamental set or basis
of the solutions of the homogeneous equation (D.9).
The question of linear independence of two functions can be examined by
their Wronskian. The Wronskian W (y1 , y2 ; x) of two differentiable functions
y1 (x) and y2 (x) is defined by

W (y1 , y2 ; x) = det [ y1 (x)  y2 (x) ; y1′ (x)  y2′ (x) ] = y1 (x)y2′ (x) − y2 (x)y1′ (x).

Theorem D.6. If y1 and y2 are linearly dependent differentiable functions


on an interval I, then their Wronskian W (y1 , y2 ; x) is identically zero on I.
Equivalently, if y1 and y2 are differentiable functions on an interval I and
their Wronskian satisfies W (y1 , y2 ; x0 ) ̸= 0 for some x0 ∈ I, then y1 and
y2 are linearly independent on this interval.
Proof. Let y1 and y2 be linearly dependent differentiable functions on an
interval I. Then there are constants c1 and c2 , not both zero, such that

c1 y1 (x) + c2 y2 (x) = 0 for every x ∈ I.

Differentiating the last equation with respect to x we find

c1 y1′ (x) + c2 y2′ (x) = 0 for every x ∈ I.

Since the system of the last two equations has a nontrivial solution (c1 , c2 ),
its determinant W (y1 , y2 ; x) must be zero for every x ∈ I.

The converse of Theorem D.6 is not true. In other words, two differentiable
functions y1 and y2 can be linearly independent on an interval even though
their Wronskian vanishes at some points of that interval, or even identically.
Example D.3. The functions y1 (x) = x3 and y2 (x) = x2 |x| are linearly in-
dependent on the interval (−1, 1), but their Wronskian W (y1 , y2 ; x) is iden-
tically zero on (−1, 1).
Solution. Indeed, if c1 x3 + c2 x2 |x| = 0 for some constants c1 and c2 and
every x ∈ (−1, 1), then taking x = −1 and x = 1 in this equation we obtain
that −c1 +c2 = 0 and c1 +c2 = 0. From the last two equations it follows that
c1 = c2 = 0. Therefore y1 and y2 are linearly independent on (−1, 1). Also,
it is easily checked that y1′ (x) = 3x2 and y2′ (x) = 3x|x| for every x ∈ (−1, 1).
Therefore y1 and y2 are differentiable on (−1, 1) and

W (y1 , y2 ; x) = y1 (x)y2′ (x) − y2 (x)y1′ (x) = 3x4 |x| − 3x4 |x| = 0

for every x ∈ (−1, 1).
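A quick numerical check of Example D.3 (an added Python sketch): the Wronskian of y1 = x³ and y2 = x²|x| vanishes at sample points of (−1, 1), with y2′(x) = 3x|x| computed piecewise by hand as in the example:

```python
# W(y1, y2; x) = y1 y2' - y2 y1' for y1 = x^3, y2 = x^2 |x| on (-1, 1).
y1 = lambda t: t**3
y1p = lambda t: 3 * t**2
y2 = lambda t: t**2 * abs(t)
y2p = lambda t: 3 * t * abs(t)   # valid on all of (-1, 1), including t = 0

points = [-0.9, -0.5, -0.1, 0.0, 0.3, 0.8]
wronskians = [y1(t) * y2p(t) - y2(t) * y1p(t) for t in points]
```

Every sampled value is (up to rounding) zero, even though the two functions are linearly independent.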

If one particular solution y1 = y1 (x) of the homogeneous linear differential
equation (D.9), written in the normal form y′′ + p(x)y′ + q(x)y = 0, is known,
then introducing a new function u = u(x) by y = y1 (x)u(x), the equation
takes the form

y1 u′′ + ( 2y1′ + p(x)y1 ) u′ = 0.

Now, if in the last equation we introduce a new function v by v = u′ we
obtain the linear first order equation

y1 v′ + ( 2y1′ + p(x)y1 ) v = 0.

The last equation can be integrated by separation of variables, and one
solution of this equation is

v(x) = (1/y1²) e^{−∫ p(x) dx} .

Thus,

u(x) = ∫ ( e^{−∫ p(x) dx} / y1² (x) ) dx
and so a second linearly independent solution y2 (x) of (D.9) is given by

(D.10)  y2 (x) = y1 (x) ∫ ( e^{−∫ p(x) dx} / y1² (x) ) dx.
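Formula (D.10) is easy to test symbolically. For the hypothetical test equation y′′ − 2y′ + y = 0 (normal form with p(x) = −2) and the known solution y1 = e^x, the formula should produce y2 = x e^x:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Integer(-2)                 # y'' - 2y' + y = 0 has p(x) = -2
y1 = sp.exp(x)                     # one known solution
# second solution by formula (D.10)
y2 = y1 * sp.integrate(sp.exp(-sp.integrate(p, x)) / y1**2, x)
residual = sp.simplify(sp.diff(y2, x, 2) - 2 * sp.diff(y2, x) + y2)
```

SymPy returns y2 = x e^x with vanishing residual, matching the repeated-root case discussed below for constant-coefficient equations.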

From Theorem D.2 we have that a general solution of a nonhomogeneous


linear equation of second order is the sum of a general solution yh (x) of the
corresponding homogeneous equation and a particular solution yp (x) of the
nonhomogeneous equation. To find a particular solution yp (x) of the nonho-
mogeneous equation from the general solution of the homogeneous equation
we use the following theorem.
Theorem D.7. Method of Variation of Parameters. Let the functions
p(x), q(x) and r(x) be continuous on an interval I. If {y1 (x), y2 (x)} is a
fundamental system of the homogeneous equation

y ′′ + p(x)y ′ + q(x)y = 0,

i.e.,
yh (x) = c1 y1 (x) + c2 y2 (x)
is a general solution of the homogeneous equation, then a particular solution
yp (x) of the nonhomogeneous equation

y ′′ + p(x)y ′ + q(x)y = r(x)

is given by
yp (x) = c1 (x)y1 (x) + c2 (x)y2 (x),
where the two differentiable functions c1 (x) and c2 (x) are determined by
solving the system

c1′ (x)y1 (x) + c2′ (x)y2 (x) = 0,
c1′ (x)y1′ (x) + c2′ (x)y2′ (x) = r(x).

A second order linear homogeneous equation

(D.11) ay′′ (x) + by′ (x) + cy(x) = 0

with real constant coefficients a, b and c can be solved by assuming a solution
of the form y = e^{rx} for some value of r. We find r by substituting this
solution and its first and second derivatives into the differential equation
(D.11) and obtain the quadratic characteristic equation

(D.12) ar² + br + c = 0.
For the roots of the characteristic equation

r = ( −b ± √(b² − 4ac) ) / (2a)

we have three possibilities: two real distinct roots when b2 − 4ac > 0, one
real repeated root when b2 − 4ac = 0, and two complex conjugate roots when
b2 − 4ac < 0. We consider each case separately.
Two Real Distinct Roots. Let (D.12) have two real and distinct roots r1
and r2 . Using the Wronskian it can be easily checked that the functions
y1 (x) = e^{r1 x} and y2 (x) = e^{r2 x} are linearly independent on the whole
real line, so the general solution of (D.11) is

y(x) = c1 e^{r1 x} + c2 e^{r2 x} .

One Real Repeated Root. Suppose that the characteristic equation (D.12)
has a real repeated root r = r1 = r2 . In this case we have only one solution
y1 (x) = e^{rx} of the equation. We use this solution and (D.10) in order to
obtain a second linearly independent solution y2 (x) = x e^{rx} . Therefore, a
general solution of (D.11) is

y(x) = c1 e^{rx} + c2 x e^{rx} .

Complex Conjugate Roots. Suppose that the characteristic equation (D.12)
has the complex conjugate roots r1,2 = α ± β i. In this case we can verify
that { e^{αx} cos (βx), e^{αx} sin (βx) } is a fundamental system of the
differential equation and so its general solution is given by

y(x) = e^{αx} ( c1 cos (βx) + c2 sin (βx) ).

Example D.4. Find the general solution of the nonhomogeneous equation

y ′′ − 2y ′ + y = ex ln x, x > 0.

Solution. The corresponding homogeneous equation has the characteristic


equation
r2 − 2r + 1 = 0.

This equation has repeated root r = 1 and so y1 (x) = ex and y2 (x) = xex
are two linearly independent solutions of the homogeneous equation. There-
fore,
yh (x) = c1 ex + c2 xex
is the general solution of the homogeneous equation. To find a particular
solution yp (x) = c1 (x)e^x + c2 (x)xe^x of the given nonhomogeneous equation
we apply the method of variation of parameters:

c1′ (x)e^x + c2′ (x)xe^x = 0,
c1′ (x)e^x + c2′ (x)( e^x + xe^x ) = e^x ln x.

Solving the last system for c1′ (x) and c2′ (x) we obtain

c1′ (x) = −x ln x,    c2′ (x) = ln x.

Using integration by parts, from the last two equations it follows that

c1 (x) = (1/4)x² − (1/2)x² ln x    and    c2 (x) = x ln x − x.

Therefore,

yp (x) = (1/2)x² e^x ln x − (3/4)x² e^x

and the general solution is

y(x) = yh (x) + yp (x) = c1 e^x + c2 xe^x + (1/2)x² e^x ln x − (3/4)x² e^x .
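The particular solution found in Example D.4 can be verified symbolically; the following SymPy sketch (an added check, not part of the book's text) substitutes yp into the equation and confirms the residual vanishes:

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # x > 0, as in the example
yp = sp.Rational(1, 2) * x**2 * sp.exp(x) * sp.log(x) \
     - sp.Rational(3, 4) * x**2 * sp.exp(x)
# yp'' - 2 yp' + yp should equal e^x ln x
residual = sp.simplify(
    sp.diff(yp, x, 2) - 2 * sp.diff(yp, x) + yp - sp.exp(x) * sp.log(x)
)
```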

The Cauchy–Euler equation is a linear, second order homogeneous equation


of the form

(D.13) ax2 y ′′ + bxy ′ + cy = 0,

where a, b and c are real constants. To solve this equation we assume that
it has a solution of the form y = xr . After substituting y and its first and
second derivative into the equation we obtain

(D.14) ar(r − 1) + br + c = 0.

Equation (D.14) is called the indicial equation of (D.13).


Two Real Distinct Roots. If (D.14) has two real and distinct roots r1 and
r2 , then

y(x) = c1 x^{r1} + c2 x^{r2}

is a general solution of (D.13).
One Real Repeated Root. If (D.14) has one real and repeated root r = r1 = r2 ,
then

y(x) = c1 x^r + c2 x^r ln x

is a general solution of (D.13).
Two Complex Conjugate Roots. If (D.14) has two complex conjugate roots
r1 = α + β i and r2 = α − β i, then

y(x) = c1 x^α cos ( β ln x ) + c2 x^α sin ( β ln x )

is a general solution of (D.13).
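As an illustration (a hypothetical example added here), take a = b = c = 1: the indicial equation r(r − 1) + r + 1 = r² + 1 = 0 has roots ±i, so α = 0, β = 1 and the general solution is y = c1 cos(ln x) + c2 sin(ln x). SymPy confirms it:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
c1, c2 = sp.symbols('c1 c2')
y = c1 * sp.cos(sp.log(x)) + c2 * sp.sin(sp.log(x))
# substitute into the Cauchy–Euler equation x^2 y'' + x y' + y = 0
residual = sp.simplify(x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x) + y)
```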

3. Series Solutions of Linear Differential Equations


Recall that a power series about x0 is an infinite series of the form

∑_{n=0}^∞ cn (x − x0 )^n .

For each value of x either the series converges or it does not. The set of all
x for which the series converges is an interval (open or not, bounded or not),
centered at the point x0 . The largest R, 0 ≤ R ≤ ∞ for which the series
converges for every x ∈ (x0 − R, x0 + R) is called the radius of convergence
of the power series and the interval (x0 − R, x0 + R) is called the interval of
convergence. Within the interval of convergence of a power series, the function
that it represents can be differentiated and integrated by differentiating and
integrating the power series term by term.
A function f is said to be analytic in some open interval centered at x0 if
for each x in that interval the function can be represented by a power series

f (x) = ∑_{n=0}^∞ cn (x − x0 )^n .

Our interest in series solutions is mainly in second order linear equations.


The basic result about series solutions of linear differential equations of the
second order is the following theorem.
Theorem D.8. Consider the differential equation (D.7)

a(x)y′′ + b(x)y′ + c(x)y = 0,

where a(x), b(x) and c(x) are analytic functions in some open interval
containing the point x0 . If x0 is an ordinary point for (D.7) (a(x0 ) ̸= 0),
then a general solution of Equation (D.7) can be expressed in the form of a
power series

y(x) = ∑_{n=0}^∞ cn (x − x0 )^n .

The radius of convergence of the power series is at least d, where d is the
distance from x0 to the nearest singular point of the equation, that is, the
nearest zero of a(x).
The proof of this theorem can be found in several texts, such as the book
by Coddington [2].
Example D.5. Find the general solution in the form of a power series about
x0 = 0 of the equation

(D.15) y ′′ − 2xy ′ − y = 0.

Solution. We seek the solution of (D.15) of the form

y(x) = ∑_{n=0}^∞ cn x^n .

Since a(x) ≡ 1 does not have any singularities, the interval of convergence
of the above power series is (−∞, ∞). Differentiating twice the above power
series we have that

y′ (x) = ∑_{n=1}^∞ n cn x^{n−1} ,    y′′ (x) = ∑_{n=2}^∞ n(n − 1) cn x^{n−2} .

We substitute the power series for y(x), y ′ (x) and y ′′ (x) into Equation
(D.15) and we obtain

∑_{n=2}^∞ n(n − 1) cn x^{n−2} − 2x ∑_{n=1}^∞ n cn x^{n−1} − ∑_{n=0}^∞ cn x^n = 0.

If we insert the term x inside the second power series, after re-indexing the
first power series we obtain

∑_{n=0}^∞ (n + 2)(n + 1) c_{n+2} x^n − 2 ∑_{n=1}^∞ n cn x^n − ∑_{n=0}^∞ cn x^n = 0.

If we break the first and the last power series into two parts, the first terms
and the rest of these power series, we obtain

2 · 1 · c2 + ∑_{n=1}^∞ (n + 2)(n + 1) c_{n+2} x^n − 2 ∑_{n=1}^∞ n cn x^n − c0 − ∑_{n=1}^∞ cn x^n = 0.

Combining the three power series above into one power series, it follows that

2c2 − c0 + ∑_{n=1}^∞ [ (n + 2)(n + 1) c_{n+2} − (2n + 1) cn ] x^n = 0.

Since a power series is identically zero in its interval of convergence if and
only if all of its coefficients are zero, we obtain

2c2 − c0 = 0,    (n + 2)(n + 1) c_{n+2} − (2n + 1) cn = 0,  n ≥ 1.

Therefore,

c2 = (1/2) c0    and    c_{n+2} = ( (2n + 1)/((n + 2)(n + 1)) ) cn ,  n ≥ 1.

From this equation, recursively we obtain

c2 = (1/2!) c0 ,
c4 = ( (1 · 5)/4! ) c0 ,
c6 = ( (1 · 5 · 9)/6! ) c0 ,
. . .
c_{2n} = ( (1 · 5 · 9 · · · (4n − 3))/(2n)! ) c0 ,  n ≥ 1

and

c3 = (3/3!) c1 ,
c5 = ( (3 · 7)/5! ) c1 ,
c7 = ( (3 · 7 · 11)/7! ) c1 ,
. . .
c_{2n−1} = ( (3 · 7 · 11 · · · (4n − 5))/(2n − 1)! ) c1 ,  n ≥ 2.

Therefore the general solution of Equation (D.15) is

y(x) = c0 [ 1 + ∑_{n=1}^∞ ( (1 · 5 · · · (4n − 3))/(2n)! ) x^{2n} ] + c1 [ x + ∑_{n=2}^∞ ( (3 · 7 · · · (4n − 5))/(2n − 1)! ) x^{2n−1} ],

where c0 and c1 are arbitrary constants.
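The recursion can be checked mechanically. The SymPy sketch below (an added illustration) builds the degree-12 partial sum with c0 = 1, c1 = 0 and verifies that y′′ − 2xy′ − y has no terms below the truncation order:

```python
import sympy as sp

x = sp.symbols('x')
# c_{n+2} = (2n+1) c_n / ((n+2)(n+1)); with n = 0 this also gives c2 = c0/2.
c = [sp.Integer(1), sp.Integer(0)]        # c0 = 1, c1 = 0
for n in range(11):
    c.append(sp.Rational(2 * n + 1, (n + 2) * (n + 1)) * c[n])

y = sum(c[n] * x**n for n in range(13))   # partial sum through x^12
residual = sp.expand(sp.diff(y, x, 2) - 2 * x * sp.diff(y, x) - y)
low_order = [residual.coeff(x, n) for n in range(11)]  # should all vanish
```

Only terms of order x^11 and higher survive, which is exactly the truncation error of the partial sum.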


There are many differential equations important in mathematical physics
whose solutions are in the form of power series. One such equation is the
Legendre equation considered in detail in Chapter 3.
Now, consider again the differential equation (D.7)

a(x)y ′′ + b(x)y ′ + c(x)y = 0,

where a(x), b(x) and c(x) are analytic functions in some open interval
containing the point x0 . If x0 is a singular point for (D.7) (a(x0 ) = 0), then a
general solution of Equation (D.7) cannot always be expressed in the form of
a power series. In order to deal with this situation, we distinguish two types
of singular points: regular and irregular singular points.
A singular point x0 is called a regular singular point of Equation (D.7) if
both functions
b(x) c(x)
(x − x0 ) and (x − x0 )2
a(x) a(x)
are analytic at x = x0 . A point which is not a regular singular point is called
an irregular singular point.
Example D.6. Consider the equation

(x − 1)² x y′′ + x y′ + 2(x − 1)y = 0.

In this example, since

(x − 1) · x / ( (x − 1)² x ) = 1/(x − 1)

is not analytic at the point x = 1, x = 1 is an irregular singular point for
the equation. The point x = 0 is a regular singular point.

Now, we explain the method of Frobenius for solving a second order linear
equation (D.7) when x0 is a regular singular point. For convenience, we
assume that x0 = 0 is a regular singular point of (D.7) and we may consider
the equation in the form

(D.16) y′′ + p(x)y′ + q(x)y = 0,

where xp(x) and x² q(x) are analytic at x = 0.


We seek a solution y(x) of (D.16) of the form

(D.17)  y(x) = x^r ∑_{n=0}^∞ cn x^n = ∑_{n=0}^∞ cn x^{n+r} ,

where r is a constant to be determined in the following way. Since xp(x)
and x² q(x) are analytic at x = 0,

xp(x) = a0 + ∑_{n=1}^∞ an x^n

and

x² q(x) = b0 + ∑_{n=1}^∞ bn x^n .

If we substitute these power series and the series (D.17) for y, y′ and y′′
into (D.16), after rearrangement we obtain

∑_{n=0}^∞ [ (r + n)(r + n − 1) cn + ∑_{k=0}^n ( a_{n−k} (r + k) + b_{n−k} ) ck ] x^n = 0.
For n = 0 we have

(D.18) r(r − 1) + a0 r + b0 = 0.

Equation (D.18) is called the indicial equation.


For each n ≥ 1 we obtain an equation expressing cn in terms of c0 , c1 , . . . ,
c_{n−1} . Solving these equations recursively, we obtain a solution of Equation
(D.16).
Depending on the nature of the roots r1 and r2 of the indicial equation
(D.18) we have different forms of particular solutions.
Theorem D.9. Frobenius. Let r1 and r2 be the roots of the indicial
equation (D.18). Then:

1. If r1 ̸= r2 are real and r1 − r2 is not an integer, then there are two
linearly independent solutions of (D.16) of the form

y1 (x) = x^{r1} ∑_{n=0}^∞ an x^n    and    y2 (x) = x^{r2} ∑_{n=0}^∞ bn x^n .

2. If r1 ̸= r2 are real and r1 − r2 is an integer, then there are two
linearly independent solutions of (D.16) of the form

y1 (x) = x^{r1} ∑_{n=0}^∞ an x^n    and    y2 (x) = C y1 (x) ln x + x^{r2} ∑_{n=0}^∞ bn x^n ,

where the constant C may turn out to be zero.

3. If r1 = r2 is real, then there are two linearly independent solutions
of (D.16) of the form

y1 (x) = x^{r1} ∑_{n=0}^∞ an x^n    and    y2 (x) = y1 (x) ln x + x^{r1} ∑_{n=0}^∞ bn x^n .

If the roots r1 and r2 of the indicial equation (D.18) are complex conjugate
numbers, then r1 − r2 is not an integer. Therefore two solutions of the form
in Case 1 of the theorem are obtained, and real-valued solutions follow by
taking their real and imaginary parts.
Many important equations in mathematical physics have solutions obtained
by the Frobenius method. One of them, the Bessel equation, is discussed in
detail in Chapter 3.
E. Vector Calculus Facts

Vectors in R2 or R3 are physical quantities that have norm (magnitude)


and direction. Examples include force, velocity and acceleration. The vectors
will be denoted by boldface letters, x, y.
There is a convenient way to express a three dimensional vector x in terms
of its components. If

i = (1, 0, 0),    j = (0, 1, 0),    k = (0, 0, 1)

are the three unit vectors in the Euclidean space R³, then any vector
x = (x1 , x2 , x3 ) in R³ can be expressed in the form

x = x1 i + x2 j + x3 k.

The norm (magnitude) of a vector x = x1 i + x2 j + x3 k is defined to be the
nonnegative number

∥x∥ = √(x1² + x2² + x3²).

Addition of two vectors x = x1 i + x2 j + x3 k and y = y1 i + y2 j + y3 k is
defined by

x + y = (x1 + y1 ) i + (x2 + y2 ) j + (x3 + y3 ) k.

Multiplication of a vector x = x1 i + x2 j + x3 k by a scalar c is defined by

c · x = cx1 i + cx2 j + cx3 k.

The dot product of two vectors x and y in R² or R³ is defined by

(E.1) x · y = ∥x∥ ∥y∥ cos α,

where α is the angle between the vectors x and y. If x = x1 i + x2 j + x3 k
and y = y1 i + y2 j + y3 k, then

x · y = x1 y1 + x2 y2 + x3 y3 .

The cross product of two vectors x and y in R³ is defined to be the vector

(E.2) x × y = ( ∥x∥ ∥y∥ sin α ) n,

where α is the oriented angle between x and y and n is the unit vector
perpendicular to both vectors x and y whose direction is given by the
right-hand rule. If x = x1 i + x2 j + x3 k and y = y1 i + y2 j + y3 k, then

(E.3)  x × y = det [ i j k ; x1 x2 x3 ; y1 y2 y3 ]
             = (x2 y3 − x3 y2 ) i − (x1 y3 − x3 y1 ) j + (x1 y2 − x2 y1 ) k.

A vector-valued function or a vector function is a function of one or more


variables whose range is a set of two or three dimensional vectors. We can
write a vector-valued function r = r(t) of a variable t as

r(t) = f (t)i + g(t)j + h(t)k.

The derivative r′ (t) is given by

r′ (t) = f ′ (t)i + g ′ (t)j + h′ (t)k,

and it gives the tangent vector to the curve r(t).


We can write a vector function F(x, y, z) of several variables x, y and z
as
F(x, y, z) = f (x, y, z) i + g(x, y, z) j + h(x, y, z) k.
A vector function F(x, y, z) usually is called a vector field.
The Laplace operator is one of the most important operators. For a given
function u(x, y), the function

(E.4) ∇2 u ≡ ∆ u = uxx + uyy

is called the two dimensional Laplace operator or simply two dimensional


Laplacian.
Similarly, the three dimensional Laplacian is given by

(E.5) ∇2 u ≡ ∆ u = uxx + uyy + uzz .

If F (x, y, z) is a real-valued function, then the vector-valued function

(E.6)  ∇F (x, y, z) = (∂F/∂x) i + (∂F/∂y) j + (∂F/∂z) k

is called the gradient of F (x, y, z) and is sometimes denoted by grad F (x, y, z).
An important fact for the gradient is the following.
If x = f (t), y = g(t) and z = h(t) are parametric equations of a smooth
curve on a smooth surface F (x, y, z) = c, then for the scalar-valued function
F (x, y, z) and the vector-valued function r(t) = f (t)i + g(t)j + h(t)k we have

(E.7) ∇F (x, y, z) · r′ (t) = 0,

i.e., ∇F (x, y, z) is perpendicular to the level sets of F (x, y, z) = c.


Other differential operations on a vector field are the curl and divergence.
The curl of a vector field F(x, y, z) is defined by

(E.8) curl(F) = ∇ × F,

and is a measure of the tendency of rotation around a point in the vector field.
If curl(F) = 0, then the field F is called irrotational.
The divergence of a vector field F(x, y, z) = ( f (x, y, z), g(x, y, z), h(x, y, z) )
is defined by

(E.9) div(F) = ∇ · F = fx + gy + hz .

Some useful formulas for these differential operators are

∇ · (f F) = f ∇ · F + F · ∇f,
∇ × (f F) = f ∇ × F + ∇f × F,
(E.10) ∇2 f = ∇ · ∇f = fxx + fyy + fzz ,
∇ × ∇f = 0,
∇ · (∇ × F) = 0,

for every scalar function f (x, y, z) and vector-function F(x, y, z).
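The identity ∇ · (∇ × F) = 0 can be confirmed for a generic smooth field with SymPy (an added sketch; the symmetry of mixed partial derivatives does all the work):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f, g, h = (sp.Function(name)(x, y, z) for name in ('f', 'g', 'h'))

curl = (sp.diff(h, y) - sp.diff(g, z),    # first component of curl(F)
        sp.diff(f, z) - sp.diff(h, x),    # second component
        sp.diff(g, x) - sp.diff(f, y))    # third component
div_curl = sp.simplify(
    sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
)
```

All six mixed partials cancel in pairs, so the divergence of the curl is identically zero for any smooth F.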


If a vector field F is an irrotational field (curl(F) = 0) and nondivergent
(div(F) = 0) in a domain in R3 , then there exists a real-valued function
f (x, y, z), defined in the domain, with the properties

F = ∇f and ∇2 f = 0.

The above function f is called a potential.
If a vector field F is such that ∇ × F = 0 in the whole (simply connected)
domain, then this vector field is called a conservative field. For any
conservative field F there exists a potential function f such that F = ∇f .
It is often important to consider the Laplace operator in other coordinate
systems, such as the polar coordinates in the plane and spherical and cylin-
drical coordinates in space.
Polar Coordinates. Polar coordinates (r, φ) in R2 and the Cartesian coordi-
nates (x, y) in R2 are related by the formulas (see Figure E.1)

(E.11) x = r cos φ, y = r sin φ.

Figure E.1

If u is a function of class C²(Ω), where Ω is a domain in R², then by the
chain rule we have

(E.12)  ur = ux cos φ + uy sin φ,
        uφ = −r ux sin φ + r uy cos φ.

If we differentiate once more and again use the chain rule, we obtain

(E.13)  urr = uxx cos²φ + 2uxy sin φ cos φ + uyy sin²φ,
        uφφ = r² ( uxx sin²φ − 2uxy sin φ cos φ + uyy cos²φ ) − r ur .

From Equations (E.12) and (E.13) we obtain

uxx + uyy = urr + r⁻¹ ur + r⁻² uφφ ,

and therefore,

(E.14) ∇²u ≡ ∆u = urr + r⁻¹ ur + r⁻² uφφ .
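Formula (E.14) can be tested on a concrete function, say u = x³y (a hypothetical choice for this sketch): SymPy compares uxx + uyy, expressed in polar coordinates, with urr + r⁻¹ur + r⁻²uφφ:

```python
import sympy as sp

x, y = sp.symbols('x y')
r, phi = sp.symbols('r phi', positive=True)
polar = {x: r * sp.cos(phi), y: r * sp.sin(phi)}

u = x**3 * y                                  # arbitrary smooth test function
lap_cart = (sp.diff(u, x, 2) + sp.diff(u, y, 2)).subs(polar)
up = u.subs(polar)                            # u written in polar coordinates
lap_polar = sp.diff(up, r, 2) + sp.diff(up, r) / r + sp.diff(up, phi, 2) / r**2
# rewrite trig in exponential form so the cancellation is exact
difference = sp.simplify(sp.expand((lap_cart - lap_polar).rewrite(sp.exp)))
```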

Cylindrical Coordinates. Cylindrical coordinates (r, φ, z) in R³ and the
Cartesian coordinates (x, y, z) are related by the formulas (see Figure E.2)

(E.15) x = r cos φ,  y = r sin φ,  z = z.

Figure E.2
If u is a function of class C²(Ω), where Ω is a domain in R³, then using
the result for the Laplacian in polar coordinates we have that the Laplacian
in cylindrical coordinates is

(E.16) ∇²u ≡ ∆u = urr + r⁻¹ ur + r⁻² uφφ + uzz .

Spherical Coordinates. From Figure E.3 we see that the spherical coordinates
(ρ, φ, θ) in R3 and the Cartesian coordinates (x, y, z) are related by the
formulas

(E.17) x = ρ cos φ sin θ, y = ρ sin φ sin θ, z = ρ cos θ.

Figure E.3

The Laplacian in spherical coordinates has the form

(E.18)  ∇²u = uρρ + (2/ρ) uρ + (1/ρ²) ( uθθ + cot θ uθ ) + ( 1/(ρ² sin²θ) ) uφφ
            = (1/ρ²) ( ρ² uρ )ρ + ( 1/(ρ² sin θ) ) ( uθ sin θ )θ + ( 1/(ρ² sin²θ) ) uφφ ,

where θ is the polar angle and φ the azimuthal angle of (E.17). These
equations can be obtained directly by a long and tedious calculation of the
second partial derivatives, or by applying the transformation of the
Laplacian from cylindrical to spherical coordinates.
Now we present several theorems that are important in the theory of partial
differential equations.
Theorem E.1. Stokes's Theorem. Let S be a bounded smooth surface in
R³ and C the smooth closed curve bounding S. If F(x, y, z) = f (x, y, z)i +
g(x, y, z)j + h(x, y, z)k is a smooth vector field on S, then

∫∫_S (∇ × F) · n dS = ∮_C f dx + g dy + h dz,

where n is the unit normal vector to the surface S, oriented consistently
with the direction in which C is traversed.
Theorem E.2. Gauss/Ostrogradski Divergence Theorem. Let Ω be


a bounded domain in R3 with a smooth boundary surface S = ∂Ω. If F is a
smooth vector field in Ω and continuous on Ω ∪ S, then
∫∫∫ ∫∫
∇ · F dV = F · n dS
Ω S

where n is the unit outward normal vector to the surface S.

Theorem E.3. Green's Theorem. Let Ω be a domain in R² with a smooth
boundary curve C = ∂Ω, traversed counterclockwise. If p = p(x, y) and
q = q(x, y) are smooth functions on Ω and continuous on C, then

∮_C ( p dx + q dy ) = ∫∫_Ω ( qx − py ) dx dy.
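Green's theorem, in the counterclockwise form ∮(p dx + q dy) = ∬(qx − py) dx dy, can be checked on the unit disk with p = −y, q = x (a hypothetical example): both sides should equal 2π. A SymPy sketch:

```python
import sympy as sp

t, r, phi = sp.symbols('t r phi')
# boundary: x = cos t, y = sin t, traversed counterclockwise, 0 <= t <= 2*pi
xb, yb = sp.cos(t), sp.sin(t)
line = sp.integrate((-yb) * sp.diff(xb, t) + xb * sp.diff(yb, t),
                    (t, 0, 2 * sp.pi))
# q_x - p_y = 1 - (-1) = 2, integrated over the disk in polar coordinates
area = sp.integrate(2 * r, (r, 0, 1), (phi, 0, 2 * sp.pi))
```

This is also the standard trick of computing twice the area of a region as a boundary line integral.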

Theorem E.4. The First Green Theorem. Let Ω be a domain in R²
with a smooth boundary curve C = ∂Ω. If p = p(x, y) and q = q(x, y) are
smooth functions on Ω and continuous on Ω ∪ C, then

∫∫_Ω p ∆q dx dy = ∮_C p ∇q · n ds − ∫∫_Ω ∇p · ∇q dx dy,

where n is the outward unit normal vector on the boundary C and
∇ = ( ∂/∂x , ∂/∂y ) is the gradient.

Theorem E.5. The Second Green Theorem. Let Ω be a bounded domain
in R³ with a smooth boundary surface S = ∂Ω. If u = u(x, y, z) and
v = v(x, y, z) are smooth functions on Ω and continuous on Ω ∪ S, then

∫∫∫_Ω ( u ∇²v − v ∇²u ) dx dy dz = ∫∫_S ( u vn − v un ) dS,

where n is the unit outward normal vector to the surface S and un = ∇u · n
and vn = ∇v · n are the normal derivatives of u and v, respectively.

Multiple integrals must often be transformed to polar or spherical
coordinates. The following formulas hold.
548 APPENDICES

Theorem E.6. Double Integrals in Polar Coordinates. If Ω ⊆ R2 is


a bounded domain with boundary ∂Ω and f (x, y) a continuous function on
the closed domain Ω, (Ω = Ω ∪ ∂Ω), then
∫∫_Ω f (x, y) dx dy = ∫∫_{Ω′} f (r cos φ, r sin φ) r dr dφ,

where Ω′ is the image of Ω under the transformation from Cartesian coor-


dinates (x, y) to polar coordinates (r, φ).
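As a quick numerical sketch (not from the book), Theorem E.6 can be checked by computing the area of the unit disc in the polar form, integrating f = 1 with the extra Jacobian factor r:

```python
import math

# Approximate the area of the unit disc Omega = {x^2 + y^2 <= 1} using the
# polar-coordinate form of Theorem E.6 with f = 1; the midpoint rule in r
# and phi should reproduce pi.
def disc_area_polar(n=400):
    dr, dphi = 1.0 / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr            # midpoint in r
        for j in range(n):
            total += r * dr * dphi    # integrand f * r dr dphi with f = 1
    return total

print(disc_area_polar())  # close to pi = 3.14159...
```

Since the integrand r is linear, the midpoint rule is exact here up to rounding.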

Theorem E.7. Triple Integrals in Spherical Coordinates. If Ω is a


bounded domain in R3 with boundary ∂Ω and f (x, y, z) a continuous func-
tion on the closed domain Ω = Ω ∪ ∂Ω, then
∫∫∫_Ω f (x, y, z) dV = ∫∫∫_{Ω′} f (ρ cos φ sin θ, ρ sin φ sin θ, ρ cos θ) ρ² sin θ dρ dφ dθ,

where Ω′ is the image of Ω under the transformation from Cartesian coor-


dinates (x, y, z) to spherical coordinates (ρ, φ, θ).

F. A Summary of Analytic Function Theory

A complex number z is an ordered pair (x, y) of real numbers x and y.


The first number, x, is called the real part of z, and the second component,
y, is called the imaginary part of z.
Addition and multiplication of two complex numbers z1 = (x1 , y1 ) and
z2 = (x2 , y2 ) are defined by

z1 + z2 = (x1 + x2 , y1 + y2 ),   z1 · z2 = (x1 x2 − y1 y2 , x1 y2 + x2 y1 ).

The complex number (0, 1) is denoted by i and is called the imaginary


unit.
Every complex number of the form (x, 0) is identified with the real number x.
Every complex number z = (x, y) can be represented in the form z =
x + iy.
The set of all complex numbers usually is denoted by C.
For a complex number z = x + iy, the nonnegative number |z| (called the
modulus of z), is defined by


|z| = √(x² + y²).

For any two complex numbers z1 and z2 the triangle inequality holds:

|z1 + z2 | ≤ |z1 | + |z2 |.

If z = x + iy is a complex number, then the number z̄ = x − iy is called
the conjugate of z. The following hold: the conjugate of z1 + z2 is z̄1 + z̄2 ,
the conjugate of z1 · z2 is z̄1 · z̄2 , and |z|² = z · z̄.
If z is a nonzero complex number, then we define

1/z = z̄/|z|².

If z1 and z2 are complex numbers, then |z1 − z2 | is the distance between


the numbers z1 and z2 .
If z0 is a given complex number and r a positive number, then the set of
all complex numbers z such that |z − z0 | = r represents a circle with center
at z0 and radius r.
If z0 is a given complex number and r a positive number, then the set of
all complex numbers z such that |z − z0 | < r represents an open disc with
center at z0 and radius r and it is denoted by D(z0 , r).
550 APPENDICES

If z0 is a given complex number and r a positive number, then the set of


all complex numbers z such that |z − z0 | ≤ r represents a closed disc with
center at z0 and radius r and it is denoted by D(z0 , r).
A set U in the complex plane C is called an open set if for every point
a ∈ U there exists a number r > 0 such that D(a, r) ⊂ U . In other words, U is
open if any point of U is a center of an open disc which is entirely in the set
U.
A set U in the complex plane C is called a connected set if every two
points in U can be connected by a polygonal line which lies entirely in U .
Let z = x + iy be a nonzero complex number. The unique real number θ
which satisfies the conditions

x = |z| cos θ, y = |z| sin θ, −π < θ ≤ π

is called the argument of z, and is denoted by θ = arg(z).


Every complex number z ̸= 0 can be represented in the trigonometric form
z = r(cos θ + i sin θ), where r = |z| and θ = arg(z).
If θ is a real number and r a rational number, then

(e^{iθ})^r = e^{irθ},

where e^{ix} = cos x + i sin x.
If z = r(cos θ + i sin θ) is a complex number and n a natural number,
then there are n complex numbers zk , k = 1, 2, · · · , n (called the nth roots
of z) such that zkn = z, k = 1, 2, · · · , n and for each k = 1, 2, · · · , n we have


z_k = r^{1/n} ( cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n) ).
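The root formula above is easy to exercise numerically; the sketch below (not from the book) computes the n-th roots of a sample number and confirms that each one, raised to the n-th power, returns the original number:

```python
import cmath
import math

# Compute the n-th roots of z from the trigonometric formula
# z_k = r^(1/n) (cos((theta + 2k pi)/n) + i sin((theta + 2k pi)/n)).
def nth_roots(z, n):
    r, theta = abs(z), cmath.phase(z)   # cmath.phase gives arg(z) in (-pi, pi]
    return [r ** (1.0 / n) * complex(math.cos((theta + 2 * k * math.pi) / n),
                                     math.sin((theta + 2 * k * math.pi) / n))
            for k in range(n)]

for w in nth_roots(8j, 3):      # the three cube roots of 8i
    print(w, abs(w ** 3 - 8j))  # each w satisfies w^3 = 8i up to rounding
```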

If z = x + iy is a complex number, then ez is defined by

ez = ex (cos y + i sin y).

The trigonometric functions sin z and cos z are defined as follows:

sin z = (e^{iz} − e^{−iz})/(2i),   cos z = (e^{iz} + e^{−iz})/2.

If z ̸= 0 is a complex number, then there exists a unique complex number
w with Im(w) = arg(z) such that ew = z. The number w is called the principal
logarithm of z and is given by

w = ln |z| + i arg(z).

If U is an open set in the complex plane, a function f : U → C is called


analytic on U if its derivative

f ′(z) = lim_{∆z→0} ( f (z + ∆z) − f (z) )/∆z

exists at every point z ∈ U . f is said to be analytic at a point z0 if it is


analytic in some open set containing z0 .
Analytic functions f (z) = u(x, y) + iv(x, y), z = x + iy must satisfy the
Cauchy–Riemann equations

∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x.

Conversely, if the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, and ∂v/∂y exist
and are continuous on an open set U and the Cauchy–Riemann conditions are
satisfied, then f is analytic on U .
The functions ez , z n , n ∈ N, sin z, cos z are analytic in the whole
complex plane. The principal logarithmic function ln z = ln |z| + i arg(z),
0 < arg(z) ≤ 2π is analytic in the set C \ {x ∈ R : x ≥ 0}—the whole
complex plane cut along the positive part of the Ox–axis.
With the principal branch of the logarithm we define complex powers by

z a = e(a ln z) .

A point z0 is said to be zero of order n of an analytic function f if

f (z0 ) = f ′ (z0 ) = · · · = f (n−1) (z0 ) = 0, f (n) (z0 ) ̸= 0.

If z0 is a zero of an analytic function f , then there is an open set U


containing the point z0 such that f (z) ̸= 0 for every z ∈ U \ {z0 }.
If f is analytic on an open set U , then at every point z0 ∈ U the function
f can be expanded in a power series


f (z) = Σ_{n=0}^{∞} a_n (z − z0 )^n

which converges absolutely and uniformly in every open disc D(z0 , r) whose
closure D(z0 , r) lies entirely in the open set U . The coefficients an are
uniquely determined by f and z0 and they are given by

a_n = f^{(n)}(z0 )/n! .

Some power series expansions are

e^z = Σ_{n=0}^∞ z^n/n!,   sin z = Σ_{n=1}^∞ (−1)^{n−1} z^{2n−1}/(2n − 1)!,

cos z = Σ_{n=0}^∞ (−1)^n z^{2n}/(2n)!,   1/(1 − z) = Σ_{n=0}^∞ z^n,  |z| < 1.
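As a hedged numerical sketch (not part of the book), the first few terms of these expansions can be summed at a sample point and compared with the built-in complex functions:

```python
import cmath
import math

# Sum the leading terms of the Maclaurin series for exp, sin, cos at a
# sample complex point and compare with cmath's implementations.
z = 0.7 + 0.3j
exp_sum = sum(z ** n / math.factorial(n) for n in range(30))
sin_sum = sum((-1) ** (n - 1) * z ** (2 * n - 1) / math.factorial(2 * n - 1)
              for n in range(1, 30))
cos_sum = sum((-1) ** n * z ** (2 * n) / math.factorial(2 * n) for n in range(30))

print(abs(exp_sum - cmath.exp(z)))  # all three differences are tiny
print(abs(sin_sum - cmath.sin(z)))
print(abs(cos_sum - cmath.cos(z)))
```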

If f and g are analytic functions on some open and connected set U and
f = g on some open disc D ⊂ U , then f ≡ g on the whole U .
If f is analytic on some open and connected set U and not identically
zero, then the zeros of f are isolated, i.e., for any zero a of f there exists
an open disc D(a, r) ⊂ U such that f (z) ̸= 0 for every z ∈ D(a, r) \ {a}.
If f is analytic in a punctured disc D̃(z0 , r) = {z ∈ C : 0 < |z − z0 | < r}
but not analytic at z0 , then the point z0 is called an isolated singularity of
f . In this case f can be expanded in a Laurent series about z0 :



f (z) = Σ_{n=−∞}^{∞} a_n (z − z0 )^n ,   0 < |z − z0 | < r.

If in the Laurent series of f about z0 we have an = 0 for all n < 0, then


z0 is called a removable singularity of f .
If in the Laurent series of f about z0 we have an = 0 for all n < −N
but a−N ̸= 0 for some N ∈ N, then z0 is called a pole of order N of f .
If in the Laurent series of f about z0 we have an ̸= 0 for infinitely many
negative n, then z0 is called an essential singularity of f .
A function which is analytic everywhere except at poles is called meromorphic.
The residue of a function f at an isolated singularity z0 is the coefficient
a−1 in the Laurent expansion of f around the point z0 ; it is denoted by
Res(f, z0 ). If the singularity z0 is a pole of order n, then the residue is given
by

Res(f, z0 ) = (1/(n − 1)!) lim_{z→z0} d^{n−1}/dz^{n−1} [ (z − z0 )^n f (z) ].
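A hedged numerical sketch (not from the book): the residue equals (1/2πi) times the integral of f over a small circle around z0, so the derivative formula above can be cross-checked by quadrature. The example function e^z/(z + 1)², with a pole of order 2 at z0 = −1, has residue d/dz e^z at −1, i.e. e^{−1}:

```python
import cmath
import math

# Numerically evaluate (1/2 pi i) * contour integral of f over a small
# circle around z0; for smooth periodic integrands this rectangle rule
# converges very fast.
def residue_numeric(f, z0, radius=0.5, n=2000):
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        z = z0 + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * math.pi / n)
        total += f(z) * dz
    return total / (2j * math.pi)

# f(z) = e^z / (z + 1)^2 has a pole of order 2 at -1; the formula with
# n = 2 gives Res = (d/dz e^z)|_{z=-1} = e^(-1).
f = lambda z: cmath.exp(z) / (z + 1) ** 2
print(residue_numeric(f, -1), math.exp(-1))  # the two values agree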

A contour in C is defined to be a continuous map γ : [a, b] → C, [a, b] ⊂ R


whose derivative γ ′ (t) exists and is nonzero at all but finitely many values of
t and it is piecewise continuous. If γ is a contour we say that a function f
is analytic on γ if it is analytic on an open set containing the range {γ(t) :
a ≤ t ≤ b}. In this case we define

∫_γ f (z) dz = ∫_a^b f (γ(t)) γ ′(t) dt.

If ϕ : [c, d] → [a, b] is a function with a continuous first derivative and


ϕ(c) = b, ϕ(d) = a, and ϕ is a decreasing function, then we say that the
contours γ and γ ◦ ϕ have opposite orientation and in this case we have that
∫_{γ◦ϕ} f (z) dz = − ∫_γ f (z) dz.

A simple closed contour is a contour γ : [a, b] → C such that γ(a) = γ(b)
but γ(s) ̸= γ(t) for all other pairs s < t in [a, b].
Any simple closed contour γ divides the complex plane in two regions, the
interior and the exterior region of γ. γ is said to be positively oriented if the
interior region lies on the left with respect to the direction of motion along γ
as t increases.
Cauchy’s Theorem. If f(z) is analytic inside some simple closed contour γ
and continuous on the contour γ, then

∮_γ f (z) dz = 0.

Suppose that Ω is a connected open set in C that is bounded by finitely


many simple closed contours, as in Figure F.1. Ω will be in the interior of
one of those contours, which will be called γ0 , and the exterior of the others,
which will be called γ1 , · · · , γk .

Figure F.1: the domain Ω bounded by the outer contour γ0 and the inner contours γ1 , . . . , γk .

Cauchy Theorem. Let Ω and γ0 , γ1 , · · · , γk be as described above. If f
is analytic on Ω and continuous on each of the γj , then

∫_{γ0} f (z) dz = Σ_{j=1}^{k} ∫_{γj} f (z) dz.

Cauchy Residue Theorem. Suppose γ is a simple closed contour with


interior region Ω. If f is continuous on γ and analytic on Ω except for
singularities at z1 , · · · , zn ∈ Ω, then
∫_γ f (z) dz = 2πi Σ_{k=1}^{n} Res(f, zk ).

Cauchy Integral Formula. Suppose γ is a simple closed contour with in-


terior region Ω and exterior region V . If f is analytic on Ω and continuous
on γ, then for all n = 0, 1, 2, · · · ,
(n!/(2πi)) ∫_γ f (z)/(z − a)^{n+1} dz = f^{(n)}(a), if a ∈ Ω;   = 0, if a ∈ V .

Let us illustrate the residue theorem by the following example.


Example. For y ∈ (0, 1) we have

∫_0^∞ t^{−y}/(1 + t) dt = π/sin(πy).

Solution. We cut the complex plane along the positive real axis and consider
the region bounded by the keyhole contour in Figure F.2.

Figure F.2: the keyhole contour, with the pole at z = −1 inside and the large circle of radius R.

On this region we define the function

f (z) = z^{−y}/(1 + z),

with arg(z^{−y}) = 0 on the upper side of the cut. This function has a simple
pole at z = −1 and it is easy to find that

Res( z^{−y}/(1 + z), −1 ) = e^{−πiy}.
If we integrate this function along a path which goes along the upper side of
the cut from ϵ > 0 to R, then along the circle CR of radius R centered
at the origin, then along the lower side of the cut from R to ϵ and finally
around the origin along the circle cϵ of radius ϵ, by the residue theorem we
get
∫_ϵ^R t^{−y}/(1 + t) dt + ∫_{CR} z^{−y}/(1 + z) dz − e^{−2πiy} ∫_ϵ^R t^{−y}/(1 + t) dt − ∫_{cϵ} z^{−y}/(1 + z) dz = 2πi e^{−πiy}.

First let us notice that for z ̸= 0 we have that

|z^{−y}| = |e^{−y ln z}| = e^{−y Re(ln z)} = e^{−y ln |z|} = |z|^{−y},

and therefore

| z^{−y}/(1 + z) | = |z|^{−y}/|1 + z| ≤ |z|^{−y}/| 1 − |z| |.
Hence, for small ϵ and large R we have

| ∫_{CR} z^{−y}/(1 + z) dz | ≤ 2π R^{1−y}/(R − 1),   | ∫_{cϵ} z^{−y}/(1 + z) dz | ≤ 2π ϵ^{1−y}/(1 − ϵ).

Clearly this implies that

lim_{R→∞} ∫_{CR} z^{−y}/(1 + z) dz = 0   and   lim_{ϵ→0} ∫_{cϵ} z^{−y}/(1 + z) dz = 0.

Therefore

( e^{πiy} − e^{−πiy} ) ∫_0^∞ t^{−y}/(1 + t) dt = 2πi

and finally

∫_0^∞ t^{−y}/(1 + t) dt = π/sin(πy).
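A numerical cross-check (a sketch, not from the book) for the case y = 1/3, where the right-hand side is π/sin(π/3) = 2π/√3: substituting t = u^{3/2} on [0, 1] and t = v^{−3} on [1, ∞) removes the endpoint singularities, leaving two smooth integrands:

```python
import math

# Midpoint-rule quadrature on [0, 1].
def midpoint(f, n=20000):
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

# For y = 1/3: t = u^(3/2) maps [0,1] to t in [0,1] and gives the smooth
# integrand (3/2)/(1 + u^(3/2)); t = v^(-3) maps [0,1] to t in [1,inf)
# and gives 3/(1 + v^3).
part1 = midpoint(lambda u: 1.5 / (1 + u ** 1.5))  # integral over t in [0, 1]
part2 = midpoint(lambda v: 3.0 / (1 + v ** 3))    # integral over t in [1, oo)
print(part1 + part2, math.pi / math.sin(math.pi / 3))  # both close to 3.6276
```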

G. Euler Gamma and Beta Functions

Among Euler’s many remarkable discoveries is the gamma function, which


for more than two centuries has become extremely important in the theory of
probability and statistics and elsewhere in mathematics and applied sciences.
Historically, it was of interest to search for a function generalizing the
factorial function for the natural numbers. In dealing with this problem one
will come upon the well-known formula

∫_0^∞ t^n e^{−t} dt = n!,   n ∈ N.

This formula suggests that for x > 0 we define the gamma function by the
improper integral
Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt.

This improper integral converges and it is a continuous function on (0, ∞).


By integration by parts, for x > 0 we have that

Γ(x + 1) = xΓ(x).

Since Γ(1) = ∫_0^∞ e^{−t} dt = 1, it follows recursively that

Γ(n + 1) = n!

for all natural numbers n. Thus Γ(x) is a function that continuously extends
the factorial function from the natural numbers to all of the positive numbers.
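A quick check (a sketch, not from the book) using the standard-library implementation math.gamma confirms both Γ(n + 1) = n! and the recursion Γ(x + 1) = xΓ(x):

```python
import math

# Gamma(n + 1) = n! for small natural numbers.
for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-12)

# The recursion Gamma(x + 1) = x Gamma(x) at an arbitrary positive point.
x = 2.37
print(math.gamma(x + 1), x * math.gamma(x))  # the two values agree
```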
We can extend the domain of the gamma function to include all negative
real numbers that are not integers. To begin, suppose that −1 < x < 0. Then
x + 1 > 0 and so Γ(x + 1) is defined. Now set

Γ(x) = Γ(x + 1)/x.

Continuing in this way we see that for every natural number n we have

(G.1)   Γ(x) = Γ(x + n)/( x(x + 1) · · · (x + n − 1) ),   x > −n,

and so we can define Γ(x) for every x in R except the nonpositive integers.
A plot of the Gamma function Γ(x) for real values x is given in Figure G.1.

Figure G.1: the graph of the Gamma function Γ(x) for real x.

Using the relation (G.1), the function Γ can be extended to a function
which is meromorphic on the whole complex plane C and has simple poles
at the points 0, −1, −2, · · · , −n, · · · , with residue at −n given by

Res(Γ(z), −n) = (−1)^n/n! .

The Euler Gamma function Γ(x) for x > 0 can also be defined by

Γ(x) = lim Γn (x)


n→∞

where

Γn (x) = n! n^x/( x(x + 1) · · · (x + n) ) = n^x/( x(1 + x/1)(1 + x/2) · · · (1 + x/n) ).

For any z ∈ C we have

Γ(z + 1) = zΓ(z).

The function Γ also satisfies

Γ(z)Γ(1 − z) = π/sin(πz)

for all noninteger z ∈ C.
In particular, for z = 1/2 we have that

Γ(1/2) = ∫_0^∞ t^{−1/2} e^{−t} dt = √π.

The previous formula together with the recursive formula for the Gamma
function implies that

Γ(n + 1/2) = ( 1 · 3 · 5 · · · (2n − 1)/2^n ) √π

and

Γ(−n + 1/2) = ( (−1)^n 2^n/( 1 · 3 · 5 · · · (2n − 1) ) ) √π

for all nonnegative integers n.
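The half-integer formula is easy to verify numerically (a sketch, not from the book) against math.gamma:

```python
import math

# Gamma(n + 1/2) = (1 * 3 * 5 * ... * (2n - 1)) / 2^n * sqrt(pi).
def gamma_half(n):
    odd_product = 1
    for k in range(1, 2 * n, 2):   # 1 * 3 * 5 * ... * (2n - 1)
        odd_product *= k
    return odd_product / 2 ** n * math.sqrt(math.pi)

for n in range(6):
    print(n, gamma_half(n), math.gamma(n + 0.5))  # the two columns agree
```

For n = 0 the empty product is 1, recovering Γ(1/2) = √π.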
The function Γ has no zeros, and 1/Γ(z) is analytic on the entire
complex plane.
The Gamma function Γ(z) is infinitely differentiable on Re(z) > 0,
and for all x > 0 we have that

Γ′(x)/Γ(x) = −γ − 1/x + Σ_{n=1}^{∞} x/( n(n + x) ),

where γ is the Euler constant given by

γ = lim_{n→∞} ( 1 + 1/2 + 1/3 + · · · + 1/n − ln n )

and

Γ^{(n)}(x) = ∫_0^∞ t^{x−1} e^{−t} (ln t)^n dt.
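The defining limit for Euler's constant converges slowly (the error behaves like 1/(2n)), which this sketch (not from the book) makes visible:

```python
import math

# Partial sums of the harmonic series minus ln n approach Euler's constant
# gamma = 0.5772156649..., with error roughly 1/(2n).
def euler_gamma(n):
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

print(euler_gamma(10 ** 6))  # about 0.5772162, close to gamma
```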

For all z ∈ C we have


Γ(2z) = ( 2^{2z−1}/√π ) Γ(z) Γ(z + 1/2).

Euler discovered another function, called the Beta function, which is closely
related to the Gamma function Γ(x). For x > 0 and y > 0, we define the
Beta function B(x, y) by
B(x, y) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt.

If 0 < x < 1, the integral is improper (singularity at the left end point of
integration a = 0). If 0 < y < 1, the integral is improper (singularity at the
right end point of integration b = 1).
For all x, y > 0 the improper integral converges and the following formula
holds:

B(x, y) = Γ(x) Γ(y)/Γ(x + y).
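The Beta–Gamma relation can be checked numerically (a sketch, not from the book); choosing x, y > 1 keeps the integrand continuous on [0, 1], so no improper-integral care is needed:

```python
import math

# Evaluate the Beta integral B(x, y) with the midpoint rule and compare
# with Gamma(x) Gamma(y) / Gamma(x + y).
def beta_integral(x, y, n=200000):
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** (x - 1) * (1 - (k + 0.5) * h) ** (y - 1)
                   for k in range(n))

x, y = 2.5, 3.5
print(beta_integral(x, y))
print(math.gamma(x) * math.gamma(y) / math.gamma(x + y))  # matches the above
```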

H. Basics of Mathematica

Mathematica is a large computer software package for performing mathe-


matical computations. It has a huge collection of mathematical functions and
commands for producing two and three dimensional graphics. This appendix
is only a brief introduction to Mathematica and for more information about
this software the reader is referred to The Mathematica Book, Version 4 by
S. Wolfram [14]. There is a very convenient help system available through the
“Help” menu button at the top of the notebook window. The window “Find
Selected Function” gives a brief summary of what each Mathematica function
does and provides references to the section of the book where the function is
explained.

1. Arithmetic Operations. Commands for expressions which should be


evaluated are entered into the input cells, displayed in bold. The commands
are evaluated by pressing the Shift-Enter key. The arithmetic operations
addition and subtraction are as usual the keys + and −, respectively. Multi-
plication and division are performed by ∗ and /, respectively. Multiplication
of two quantities can be done by leaving a space between the quantities. Or-
dinary round brackets ( ) have a grouping effect as in algebra notation, but
not other types of brackets. In order to compute x^y the exponentiation symbol ˆ
is used: xˆy.
For example, in order to evaluate

−5 + 3 · (−2) + ( 2^{−3} + 3 · (5 − 7) )/( −2/3 + (3/4) · (−9 + 3) )

in the cell type


In[ ]:=−5 + 3 ∗ (−2) + (2ˆ(−3) + 3 ∗ (5 − 7))/(−2/3 + (3/4) ∗ (−9 + 3))
the following result will be displayed:
Out[ ]=−1223/124.

If we want to get a numerical value for the last expression with, say, 20
digits, then in the cell type
In[ ]:=N[%, 20]
Out[ ]=−9.8629032258064516129

Mathematica can do algebraic calculations.


In[ ]:=Expand[(x − 1) (2x − 3) (x + 9)]
Out[ ]=27 − 42x + 13x2 + 2x3

In[ ]:=Factor[xˆ3 + 2 xˆ2 − 5 x − 6]


560 APPENDICES

Out[ ]=(−2 + x)(1 + x)(3 + x)

In[ ]:=Simplify[(x2 − 3 x) (6 x − 7) − (3 x2 − 2 x − 1) (2 x − 3)]


Out[ ]=−3 + 17 x − 12 x2

In[ ]:=Together[2 x − 3 + (3 x + 5)/(xˆ2 + x + 1)]


Out[ ]=(2 + 2 x − x2 + 2 x3 )/(1 + x + x2 )

In[ ]:=Apart[(3 x + 1)/(xˆ3 − 3 xˆ2 − 2 x + 4)]


Out[ ]=−4/(5 (−1 + x)) + (11 + 4 x)/(5 (−4 − 2 x + x2 ))

A single input cell may consist of several commands. A new line within the
current cell is obtained by pressing the Enter key. Commands on the same line
within a cell must be separated by semicolons. The output of any command
that ends with a semicolon is not displayed.
In[ ]:=x = 5/4 − 3/7; y = 9/5 + 4/9 − 9 x; z = 2 x2 − 3 y 2
Out[ ]=−2999/392

If we have to evaluate an expression with different values each time, then


the substitution command symbolized by a slanted bar and a period is useful.
In[ ]:=z = 2 x2 − 3x y + 2 y 2 /.{x− > 1, y− > 1}
Out[ ]=1
When we set a value to a symbol, that value will be used for the symbol for
the entire Mathematica session. Since symbols no longer in use can introduce
confusion when used in new computations, clearing previous definitions is
extremely important. To remove a variable from the kernel, use the Clear
command.
In[ ]:=Clear[x, y, z]

2. Functions. All built-in mathematical functions and constants have full


names that begin with a capital letter. The arguments of a function are
enclosed by brackets. For example, the familiar functions from calculus sin x,
cos x, ex , ln x in Mathematica are Sin[x], Cos[x], Exp[x], Log[x]. The
constant π in Mathematica is Pi, while the Euler number e in Mathematica
is E.
In Mathematica the function f (x) = x2 + 2x is defined by
In[ ]:=f [x− ] := x2 + 2 x;
We can evaluate f (a + b) + f (a − b) by typing
In[ ]:=f [a + b] + f [a − b]
H BASICS OF MATHEMATICA 561

Out[ ]=2a2 + 2b2 + 4a.

The command for defining functions of several variables is similar:

In[ ]:=g[x− , t− ] := Sin[P i x] Cos[2 P i t];

To define the piecewise function

h(x) = x for −π ≤ x < 0,   h(x) = x2 for 0 ≤ x < π,   h(x) = 0 elsewhere,

we use

In[ ]:=h[x− ] := Piecewise[{{x, −Pi <= x < 0}, {xˆ2, 0 <= x < Pi}}, 0];

If we now type

In[ ]:=h[2]

we obtain

Out[ ]=4

3. Graphics. Mathematica is exceptionally good at creating two and three


dimensional graphs.
The plot of the function h(x) above on the interval [−2π, 2π] is obtained
by typing

In[ ]:=Plot[h[x], {x, −2 P i, 2 P i}, PlotRange − > All, Ticks − >


{{−2 P i, −P i, 0, P i, 2 P i}, {−P i, P i2 }}, AspectRatio − >
Automatic, PlotStyle − > Thick, AxesLabel − > {x, y}]

The plot is displayed below.


562 APPENDICES

[Plot of h(x) on the interval [−2π, 2π].]

Mathematica can do multiple plots.


In[ ]:=Plot[{Sqrt[x], x, xˆ2}, {x, 0, 2}, PlotRange − > {0, 1.5},
Ticks − > {{0, 1}, {0, 1}}, PlotStyle − > {{Dashing[{0.02}]}, {Black},
{Thickness[0.007]}}, AspectRatio − > Automatic, AxesLabel − > {x, y}]
The plot is displayed below.
[Plot of √x, x, and x2 on the interval [0, 2].]

To plot many points on the plane that are on a given curve we use the
ListPlot command.
Example.
In[ ]:=list=Table[{x, Sin[x]}, {x, −2 P i, 2 P i, 0.1}];
In[ ]:=p1=ListPlot[list]
[ListPlot of the sampled points of sin x on [−2π, 2π].]

If we want to join the points, we use ListLinePlot.


Example.
In[ ]:=p2=ListLinePlot[list]
H BASICS OF MATHEMATICA 563

[ListLinePlot of the same points, joined.]

To plot both graphs p1 and p2 together we use the command Show.


Example.
In[ ]:=Show[p1, p2]
[The plots p1 and p2 shown together.]

To plot the graph p2 next to the graph p1 use Show[GraphicsGrid[. . .]].


Example.
In[ ]:=Show[GraphicsGrid[{{p1, p2}}]]
[The plots p1 and p2 displayed side by side.]

To plot implicitly given functions we use ContourPlot.


Example.
In[ ]:=ContourPlot[{xˆ3 + yˆ3 − 9 x y == 0,
x + y + 3 == 0}, {x, −6, 6}, {y, −6, 6}]
[Contour plot of the curve x3 + y 3 − 9xy = 0 and the line x + y + 3 = 0.]
564 APPENDICES

To plot in polar coordinates use PolarPlot.


Example.
Plot the following functions.

r = 2,   r = 2 + (1/3) sin 10φ,   r = sin 5φ,   0 ≤ φ ≤ 2π.

In[ ]:=PolarPlot[{2, 2 + 1/3 Sin[10 t], Sin[5 t]}, {t, 0, 2 P i}, PlotStyle
− > {Black, Dashed, Black}, Ticks − > {{−2, −1, 0, 1, 2},
{−2, −1, 0, 1, 2}}, AxesLabel − > {x, y}]

[Polar plot of the three curves r = 2, r = 2 + (1/3) sin 10φ, and r = sin 5φ.]

To plot curves given with parametric equations use ParametricPlot.


Plot the curve given by the parametric equations x = sin 2t, y = sin 3t,
0 ≤ t ≤ 2π.
In[ ]:=ParametricPlot[{Sin[2 t], Sin[3 t]}, {t, 0, 2 P i}, Ticks
− > {{−1, 0, 1}, {−1, 0, 1}}, AxesLabel − > {x, y}]

[Parametric plot of the curve x = sin 2t, y = sin 3t.]
H BASICS OF MATHEMATICA 565

Three dimensional plots are generated using the Plot3D command.


In[ ]:=Plot3D[Sqrt[xˆ2 + yˆ2] Exp[−xˆ2 − yˆ2], {x, −2, 2}, {y, −2, 2},
Ticks − > {{−2, 0, 2}, {−2, 0, 2}, {0, 1}}, AxesLabel − > {x, y, z}]

4. Differentiation and Integration. Mathematica can perform differen-


tiation and integration of functions of a single variable as well as multiple
variables. The command for the nth derivative of a function f (x) with re-
spect to x is D[f [x], {x, n}].
To find the second derivative of the function x3 cos2 x type
In[ ]:=D[x3 (Cos[x])2 , {x, 2}]
Out[ ]=6x cos2 x − 12x2 cos x sin x + x3 (−2 cos2 x + 2 sin2 x)

The command for partial derivatives is the same. For example, the com-
mand for the mixed partial derivative of f (x, y) is D[f [x, y], x, y], or the
command for the second partial derivative of f (x, y) with respect to y is
D[f [x, y], y, 2].

To find the mixed partial derivative of the function x3 y 2 cos x − x2 y 3 cos y


type
In[ ]:=D[x3 y 2 Cos[x] − x2 y 3 Cos[y], x, y]
Out[ ]=6x2 y cos x − 6xy 2 cos y − 2x3 y sin x + 2xy 3 sin y


Integrate[f [x], x] is the command for the indefinite integral ∫ f (x) dx. For
the definite integral ∫_a^b f (x) dx the command is Integrate[f [x], {x, a, b}].

In[ ]:=Integrate[1/(x3 + 8), x]

Out[ ]= ArcTan[(−1 + x)/√3]/(4√3) + (1/12) ln (2 + x) − (1/24) ln (4 − 2x + x2 )
566 APPENDICES

In[ ]:=Integrate[1/(x3 + 8), {x, 0, 1}]


Out[ ]= (1/72)( π√3 + ln 27 )
Integrate[f [x, y], {x, a, b}, {y, c[x], d[x]}] is the command for the double
integral ∫_a^b ( ∫_{c(x)}^{d(x)} f (x, y) dy ) dx.

In[ ]:=Integrate[Sin[x] Sin[y], {x, 0, Pi}, {y, 0, x}]

Out[ ]=2

5. Solving Equations. Mathematica can solve many equations: linear,


quadratic, cubic and quartic equations (symbolically and numerically), any
algebraic and transcendental equations (numerically), ordinary and partial
differential equations (numerically and some of them symbolically). It can
solve numerically systems of equations and in some cases exactly.

5a. Solving Equations Exactly. The commands for solving equations


exactly are Solve[expr,vars] (tries to solve the equation exactly) and
Solve[expr,vars, dom] solves the equation (or the system) over the domain
dom (usually real, integer or complex numbers).
Examples.
Solve a quadratic equation:

In[ ]:=Solve[a x2 + x + 1 == 0, x]
Out[ ]={{x− > (−1 − √(1 − 4 a))/(2 a)}, {x− > (−1 + √(1 − 4 a))/(2 a)}}

In[ ]:=Solve[a x2 + x + 1 == 0, x, Reals]


Out[ ]= {{x− > ConditionalExpression[(−1 − √(1 − 4 a))/(2 a), a < 1/4]},
{x− > ConditionalExpression[(−1 + √(1 − 4 a))/(2 a), a < 1/4]}}

Solve a cubic equation (one solution):


In[ ]:=Solve[x3 + x + 1 == 0, x][[1]] (∗ the 1st solution ∗)
Out[ ]= {x− > −( 2/( 3(−9 + √93) ) )^{1/3} + ( (−9 + √93)/2 )^{1/3}/3^{2/3}}

Solving a linear system for x and y:


In[ ]:=Solve[a x + 2 y == −1 && b x − y == 1, {x, y}]
Out[ ]= {{x− > 1/(a + 2 b), y− > −(a + b)/(a + 2 b)}}


5b. Solving Equations Numerically. The Mathematica commands for


solving equations numerically are NSolve[expr,vars] (tries to solve a given
equation numerically), while NSolve[expr,vars, Reals] solves numerically the
equation (or the system) over the domain real numbers.

Examples.
Approximate solutions to a polynomial equation:
In[ ]:=NSolve[x5 − 2 x + 1 == 0, x]
Out[ ]={{x− > −1.29065}, {x− > −0.114071 − 1.21675 i},
{x− > −0.114071 + 1.21675 i}, {x− > 0.51879}, {x− > 1.}}

In[ ]:=NSolve[x5 − 2 x + 1 == 0, x, Reals]


Out[ ]={{x− > −1.29065}, {x− > 0.51879}, {x− > 1.}}

Approximate solutions to a system of polynomial equations:


In[ ]:=NSolve[{x3 + x y 2 + y 3 == 1, 3 x + 2 y == 4}, {x, y}]
Out[ ]={{x− > 58.0885, y− > −85.1328}, {x− > 0.955748 + 0.224926 i,
y− > 0.566378 − 0.337389 i}, {x− > 0.955748 − 0.224926 i,
y− > 0.566378 + 0.337389 i}}

In[ ]:=NSolve[{x3 + x y 2 + y 3 == 1, 3 x + 2 y == 4}, {x, y}, Reals]


Out[ ]={{x− > 58.0885, y− > −85.1328}}

For numerical solutions of transcendental equations the Mathematica com-


mands are
FindRoot[f [x], {x, x0 }] (searches for a numerical root of f (x), starting
from the point x = x0 );
FindRoot[f [x] == 0, {x, x0 }] (searches for a numerical solution of the equa-
tion f (x) = 0, starting from the point x = x0 );
FindRoot[{f [x, y] == 0, g[x, y] == 0}, {x, x0 }, {y, y0 }] searches for a numerical
solution of the system of equations f (x, y) = 0, g(x, y) = 0, starting from
the point (x0 , y0 ).
568 APPENDICES

Examples.
Find the root of the function f (x) = x − 2 sin x near x = π.
In[ ]:=FindRoot[x − 2 Sin[x], {x, Pi}]
Out[ ] = {x− > 1.89549}

Find the solution of ex = −x near x = 0.


In[ ]:=FindRoot[Exp[x] == −x, {x, 0}]
Out[ ] = {x− > −0.567143}

Solve the following nonlinear system of equations sin (x − y) + 2y − 1 =
sin x − sin y, y = sin (x + y) + sin x + x + 3y near the point (0, 0.5).
In[ ]:=FindRoot[{Sin[x − y] + 2 y − 1 == Sin[x] − Sin[y], y == Sin[x +
y] + Sin[x] + x + 3 y}, {{x, 0}, {y, 0.5}}]
Out[ ] = {x− > 0.0042256, y− > 0.500257}

5c. Solving Differential Equations Symbolically. The Mathematica


command DSolve[eqn,y, x] solves the differential equation for the function
y(x). The command DSolve[{eqn1 , eqn2 , . . .}, {y1 , y2 , . . . , x}] solves a system
of differential equations. The command DSolve[eqn, y, {x1 , x2 , . . .}] solves a
partial differential equation for the function y = y(x1 , x2 , . . .).
Examples.
Find the general solution of y ′′ (x) + y(x) = ex .
In[ ]:=DSolve[y ′′ [x] + y[x] == Exp[x], y[x], x]
Out[ ] = {{y[x]− > c1 cos x + c2 sin x + (1/2) ex (cos2 x + sin2 x)}}

Find the solution of y ′′ (x) + y(x) = ex subject to the initial
conditions y(0) = 1, y ′ (0) = 1.
In[ ]:=DSolve[{y ′′ [x] + y[x] == Exp[x], y[0] == 1, y ′ [0] == 1}, y, x]
Out[ ] = {{y− >Function[{x}, (1/2)(cos x + ex cos2 x + sin x + ex sin2 x)]}}

Find the general solution of the partial differential equation

2zx (x, t) + 5zt (x, t) = z(x, t) + 1.

In[ ]:=DSolve[2 D[z[x, t], x] + 5 D[z[x, t], t] == z[x, t] + 1, z, {x, t}]

Out[ ] = {{z− > ({x, t}− > e^{x/2} c1 ( (2t − 5x)/2 ) − 1)}}

Find the general solution of the second order partial differential equation

3zxx (x, t) − 2ztt (x, t) = 1.

In[ ]:=DSolve[3 D[z[x, t], {x, 2}] − 2 D[z[x, t], {t, 2}] == 1, z, {x, t}]

Out[ ] = {{z− > ({x, t}− > c1 ( t − √(2/3) x ) + c2 ( t + √(2/3) x ) + x2 /6)}}

5d. Solving Differential Equations Numerically. The Mathematica


command NDSolve[eqns, y, {x, a, b}] finds a numerical solution to the ordi-
nary differential equations eqns for the function y(x) in the interval [a, b].
The command NDSolve[eqns, z, {x, a, b}, {t, c, d}] finds a numerical solution
of the partial differential equation eqns.

Examples.

Solve numerically the initial value problem (ordinary differential equation)

y ′ (t) = y cos (t2 + y 2 ), y(0) = 1.

In[ ]:=nsol=NDSolve[{y ′ [t] == y[t] Cos[t2 + y[t]2 ], y[0] == 1}, y, {t, 0, 20}]

Out[ ] = {{y− > InterpolatingFunction [{{0., 20.}}, <>]}}

We plot this solution with

In[ ]:=Plot[Evaluate [y[t]/. nsol], {t, 0, 20}, Ticks − > {{0, 5, 10, 15, 20},
{0, 1}}, PlotRange − > All, AxesLabel − > {x, t}]

[Plot of the numerical solution y(t) on the interval [0, 20].]

Plot the solution of the initial-boundary value problem (partial differential
equation)

ut (x, t) = 9uxx (x, t),   0 < x < 5,  0 < t < 10,
u(x, 0) = 0,   0 < x < 5,
u(0, t) = sin 2t,   u(5, t) = 0,   0 < t < 10.

In[ ]:=NDSolve[{D[u[x, t], t] == 9 D[u[x, t], x, x], u[x, 0] == 0, u[0, t]
== Sin[2 t], u[5, t] == 0}, u, {t, 0, 10}, {x, 0, 5}]
Out[ ] = {{u− > InterpolatingFunction [{{0., 5.}, {0., 10.}}, <>]}}

We plot the solution with


In[ ]:=Plot3D[Evaluate [u[x, t]/.%], {t, 0, 10}, {x, 0, 5}, PlotRange − > All]

6. Matrices. In Mathematica a matrix is a list in which each component is


a row of the matrix.
Examples.
In[ ] := A = {{1, 0, 3, 4}, {3, 2, 0, 2}, {1, 1, 1, 1}}
Out[ ] = {{1, 0, 3, 4}, {3, 2, 0, 2}, {1, 1, 1, 1}}

To print the matrix A in the traditional form use


In[ ]:=MatrixForm[A]
 
1 0 3 4
Out[ ]= 3 2 0 2 .
1 1 1 1

We can construct matrices whose entries are defined by a function.


In[ ]:=B=Table[1/(i + j), {i, 2}, {j, 3}]

Out[ ] = {{1/2, 1/3, 1/4}, {1/3, 1/4, 1/5}}
In[ ]:=MatrixForm[B]
Out[ ]=
( 1/2 1/3 1/4 )
( 1/3 1/4 1/5 )

The matrix operations addition and subtraction are represented with the
usual + and − keys. Matrix multiplication is represented using the dot (.)
key.
Examples.
Let
In[ ] := X = {{1, 3, 4}, {3, 0, 2}, {1, 1, 1}}; Y = {{2, 1, 4}, {1, 1, 3}, {1, 2, 1}};
Then
In[ ] := X + Y
Out[ ] = {{3, 4, 8}, {4, 1, 5}, {2, 3, 2}}

In[ ] := X.Y
Out[ ] = {{9, 12, 17}, {8, 7, 14}, {4, 4, 8}}

The commands Det[A] and Inverse[A] are the commands for the deter-
minant and the inverse matrix of a square matrix A. Transpose[M ] is the
command for the transpose matrix of any matrix M .
Examples.
In[ ] := Det[{{1, 3, 4}, {3, 0, 2}, {1, 1, 1}}]
Out[ ] = 7

In[ ]:=MatrixForm[Inverse[{{1, 3, 4}, {3, 0, 2}, {1, 1, 1}}]]

Out[ ] =
( −2/7   1/7   6/7 )
( −1/7  −3/7  10/7 )
(  3/7   2/7  −9/7 )

In[ ]:=MatrixForm[Transpose[{{1, 0, 3, 4}, {3, 2, 0, 2}, {1, 1, 1, 1}}]]


 
1 3 1
0 2 1
Out[ ] =  
3 0 1
4 2 1

Using inverse matrices we can solve linear systems.



Example. Solve the system

3x1 − 2x2 + x3 = −1,
2x1 + x2 − x3 = 0,
3x1 − 2x2 − 3x3 = 5.

Solution. We form the coefficient matrix and the right side vector.
In[ ] := A = {{3, −2, 1}, {2, 1, −1}, {3, −2, −3}}; b = {−1, 0, 5};
Next we find the inverse matrix B of the matrix A:
In[ ]:=B=Inverse[A];
We find the solution of the system by
In[ ]:=Solution=B.b
Out[ ] = {−5/14, −11/14, −3/2}

Alternatively, we can use Mathematica command LinearSolve.


In[ ]:=Sol=LinearSolve[A, b]
Out[ ] = {−5/14, −11/14, −3/2}

Notice that Mathematica gives the exact solution of the system. If, instead
of the matrix A, we use the matrix
In[ ] := A1 = {{3., −2., 1.}, {2., 1., −1.}, {3., −2., −3.}};
then the inverse matrix of A1 will be the matrix
In[ ] := B1 = Inverse[A1 ]
Out[ ] := {{0.178571, 0.285714, −0.0357143},
{−0.107143, 0.428571, −0.178571}, {0.25, 0., −0.25}}
and the numerical solution of the system will be
In[ ]:=NumerSolution=B1 .b
Out[ ] := {−0.357143, −0.785714, −1.5}

To find the eigenvalues and corresponding eigenvectors of a given matrix


A we use the commands Eigenvalues[A] and Eigenvectors[A], respectively.
Example. Find all eigenvalues and eigenvectors of the matrix A given by
 
2 −1 −1
A =  −1 2 −1 
−1 −1 2

Solution. First define the matrix A.


In[ ] := A = {{2, −1, −1}, {−1, 2, −1}, {−1, −1, 2}};
The eigenvalues of A are found by
In[ ]: = eigval=Eigenvalues[A]
Out[ ] := {3, 3, 0}
Next we find the eigenvectors
In[ ]: = eigvect=Eigenvectors[A]
Out[ ] := {{−1, 0, 1}, {−1, 1, 0}, {1, 1, 1}}
BIBLIOGRAPHY

[1] W. E. Boyce, R. C. DiPrima, Elementary Differential Equations and Boundary Value


Problems, Fourth Edition, John Wiley & Sons, New York, 1986.

[2] E. A. Coddington, An Introduction to Ordinary Differential Equations, Academic


Press, New York, 1966.

[3] M. P. Coleman, An Introduction to Partial Differential Equations with MATLAB,


CRC Applied Mathematics and Nonlinear Science Series, Chapman & Hall/CRC,
Boca Raton, London, New York, 2005.

[4] D. G. Duffy, Advanced Engineering Mathematics, 2nd ed., CRC Press, Boca Raton,
Florida, 1998.

[5] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, vol. 19,
American Mathematical Society, Providence, Rhode Island, 1991.

[6] G. B. Folland, Fourier Series and Its Applications, Mathematics Series, Wadsworth
& Brooks/Cole Publishing Company, Pacific Grove, California, 1992.

[7] P. Garabedian, Partial Differential Equations, 2nd ed., Chelsea, New York, 1998.

[8] G. D. Smith, Numerical Solution of Partial Differential Equations, 3rd ed., Claren-
don, Oxford, 1985.

[9] J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.

[10] E. C. Titchmarsh, The Theory of Functions, Second Edition, Oxford University Press,
London, 1939.

[11] M. Rosenlicht, Introduction to Analysis, Dover Publications, Inc., New York, 1968.

[12] T. M. Apostol, Mathematical Analysis, Second Edition, Addison-Wesley, New York,


1974.

[13] N. Asmar, Partial Differential Equations and Boundary Value Problems, Prentice-
Hall, Upper Saddle River, New Jersey, 2000.

[14] S. Wolfram, The Mathematica Book, Fourth Edition, Cambridge University Press,
New York, 1999.

[15] M. Reed, B. Simon, Methods of Modern Mathematical Physics, I: Functional Analy-


sis, Academic Press, New York, 1972.

[16] G. Evans, J. Blackledge and P. Yardley, Numerical Methods for Partial Differential
Equations, Springer-Verlag, Berlin, Heidelberg, New York, 2000.

ANSWERS TO EXERCISES

Section 1.1.

5. (a) 2 Σ_{n=1}^∞ ((−1)^{n+1}/n) sin nx.

(b) 2/π − (4/π) Σ_{n=1}^∞ (1/(4n²−1)) cos 2nx.

(c) 2 Σ_{n=1}^∞ (1/n) sin nx.

(d) (8/π) Σ_{n=1}^∞ (1/(2n−1)³) sin (2n−1)x.

(e) (e^{2aπ}−1)/(2aπ) + ((e^{2aπ}−1)/π) Σ_{n=1}^∞ [ (a/(a²+n²)) cos nx − (n/(a²+n²)) sin nx ].

(f) sinh aπ/(aπ) + (2 sinh aπ/π) Σ_{n=1}^∞ (−1)^n [ (a/(a²+n²)) cos nx − (n/(a²+n²)) sin nx ].

(g) −1/3 + Σ_{n=1}^∞ [ a_n cos (2nπx/3) + b_n sin (2nπx/3) ], where
a_n = −(3/(2n²π²)) ( nπ + 2nπ cos (2nπ/3) − 3 sin (2nπ/3) );
b_n = ( −nπ cos nπ + 3 sin (nπ/3) ) / (n²π²).

(h) 3/8 + Σ_{n=1}^∞ [ a_n cos (nπx/2) + b_n sin (nπx/2) ], where
a_n = ( −2 + 2 cos (nπ/2) ) / (n²π²);
b_n = ( −nπ cos nπ + 2 sin (nπ/2) ) / (n²π²).

8. (a) π/2 − (2/π) Σ_{n=−∞, n≠0}^{∞} e^{(2n−1)ix}/(2n−1)².

(b) 1 + (i/π) Σ_{n=−∞, n≠0}^{∞} e^{nπix}/n.

(c) 1/2 − (i/π) Σ_{n=−∞, n≠0}^{∞} e^{2(2n−1)ix}/(2n−1).


Section 1.2.

1. (a) Not continuous; not piecewise continuous.

(b) Continuous; not piecewise smooth.

(c) Continuous; piecewise smooth.

(d) Not continuous; piecewise continuous; piecewise smooth.

(e) Continuous; piecewise smooth.

2. (a) 3 derivatives.

(b) Derivative of any order.

(c) None.

3. (b) The sum is 0.

(c) The sum is 1/2.

4. (b) The sum is π/2.

5. (b) The sums are 0.

6. Take x = π/2.

7. (b) The sum is π/4.

9. (c) Converges at every point x.

Section 1.3.

1. (a) x/2 = Σ_{n=1}^∞ ((−1)^{n+1}/n) sin nx. Σ_{n=1}^∞ 1/n² = π²/6.

(b) f(x) = x³ − π²x = 12 Σ_{n=1}^∞ ((−1)^n/n³) sin nx. Σ_{n=1}^∞ 1/n⁶ = π⁶/945.

2. (a) f(x) = x/2. f(x) = Σ_{n=1}^∞ ((−1)^{n+1}/n) sin nx. Σ_{n=1}^∞ 1/n² = π²/6.

(b) f(x) = −1 for −π < x < 0 and f(x) = 1 for 0 < x < π.
f(x) = (4/π) Σ_{n=1}^∞ (1/(2n−1)) sin (2n−1)x. Σ_{n=1}^∞ 1/(2n−1)² = π²/8.

(c) f(x) = |x|. f(x) = π/2 − (4/π) Σ_{n=1}^∞ (1/(2n−1)²) cos (2n−1)x, −π < x < π.
Σ_{n=1}^∞ 1/(2n−1)⁴ = π⁴/96.

(d) f(x) = |sin x| = 2/π − (4/π) Σ_{n=1}^∞ (1/(4n²−1)) cos 2nx.
Σ_{n=1}^∞ 1/(4n²−1)² = (π²−8)/16.

4. (a) f(x) = sinh x = (2 sinh π/π) Σ_{n=1}^∞ ((−1)^{n+1} n/(n²+1)) sin nx, |x| < π.
(4 sinh²π/π²) Σ_{n=1}^∞ n²/(n²+1)² = (−π + cosh π sinh π)/π.

(b) f(x) = 1/(2a) for |x| < a and f(x) = 0 for a < |x| < π.
f(x) = 1/(2π) + (1/π) Σ_{n=1}^∞ (sin na/(na)) cos nx. Σ_{n=1}^∞ sin²na/(n²a²) = (π−a)/(2a).

5. (a) 1/4 + 4 Σ_{n=2}^∞ n²/(n²−1)² = (1/π) ∫_{−π}^{π} x² cos²x dx = (3+2π²)/6.

(b) 2 + 1/4 + 4 Σ_{n=2}^∞ 1/(n²−1)² = (1/π) ∫_{−π}^{π} x² sin²x dx = (−3+2π²)/6.

6. (a) By the Dirichlet test for convergence of series of complex numbers, the function f and its derivative are continuous everywhere.

7. (a) By the Dirichlet test for convergence of series of complex numbers, the series converges for every x ≠ 2nπ, n = 0, ±1, ±2, . . .. For x = 2nπ we obtain the harmonic series, which is divergent.

9. f(x) = 1/(2π) + (2/π) Σ_{n=1}^∞ ((1−cos na)/(n²a²)) cos nx.
1/(2π²) + (4/π²) Σ_{n=1}^∞ (1−cos na)²/(n⁴a⁴) = (1/π) ∫_{−π}^{π} f²(x) dx.

10. (b), (c) f(x) = 2/π − (4/π) Σ_{n=1}^∞ ((−1)^n/(4n²−1)) cos 2nx, x ∈ R.

11. (b) f(x) = 1/π + (1/2) sin x − (2/π) Σ_{n=1}^∞ (1/(4n²−1)) cos 2nx, x ∈ R.

(d) F(x) = −(1/2) cos x + (2/π) Σ_{n=1}^∞ (2n/(4n²−1)) sin 2nx.

13. α > 1.

14. 0 < a < 1.

15. No. For square integrable functions f the answer follows from the Parseval identity. For integrable functions f the answer follows by approximation with square integrable functions.

16. (a) y(t) = C1 cosh t + C2 sinh t − 1/2 − (2/π) Σ_{n=1}^∞ (1/((2n−1)+(2n−1)³)) sin (2n−1)t.

(b) y(t) = C1 e^{2t} + C2 e^{t} + 1/4 + (6/π) Σ_{n=1}^∞ cos (2n−1)t / ( [2−(2n−1)²]² + 9(2n−1)² )
+ (2/π) Σ_{n=1}^∞ (2−(2n−1)²) sin (2n−1)t / ( (2n−1) { [2−(2n−1)²]² + 9(2n−1)² } ).

17. (a) y(t) = C1 cos 3t + C2 sin 3t + π/18 − (2/(27π)) t sin 3t
− (4/π) Σ_{n=1, n≠2}^∞ cos (2n−1)t / ( (2n−1)² (9−(2n−1)²) ).
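Several of the closed-form series values quoted in this section (π²/6, π⁶/945, π⁴/96) can be spot-checked numerically with partial sums. A small Python sketch (not from the book; tolerances reflect the size of the truncated tails):

```python
import math

N = 200000  # number of terms in each partial sum

s2 = sum(1 / n**2 for n in range(1, N))             # compare with pi^2 / 6
s6 = sum(1 / n**6 for n in range(1, N))             # compare with pi^6 / 945
sodd4 = sum(1 / (2 * n - 1)**4 for n in range(1, N))  # compare with pi^4 / 96

# The tail of sum 1/n^2 beyond N is roughly 1/N, so 1e-4 is a safe tolerance;
# the other two series converge much faster.
assert abs(s2 - math.pi**2 / 6) < 1e-4
assert abs(s6 - math.pi**6 / 945) < 1e-10
assert abs(sodd4 - math.pi**4 / 96) < 1e-10
print("series values agree")
```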

Section 1.4.


1. Fourier Cosine Series: f(x) = π/2 + (4/π) Σ_{n=1}^∞ (1/(2n−1)²) cos (2n−1)x.
Fourier Sine Series: f(x) = 2 Σ_{n=1}^∞ (1/n) sin nx.

2. Fourier Cosine Series: f(x) = 2/π − (4/π) Σ_{n=1}^∞ (1/(4n²−1)) cos 2nx.
Fourier Sine Series: f(x) = sin x.

3. Fourier Cosine Series: f(x) = cos x.
Fourier Sine Series: f(x) = (8/π) Σ_{n=1}^∞ (n/(4n²−1)) sin 2nx.

4. Fourier Cosine Series: f(x) = π²/3 + 4 Σ_{n=1}^∞ ((−1)^n/n²) cos nx.
Fourier Sine Series: f(x) = (2/π) Σ_{n=1}^∞ ((−2 + (2−n²π²)(−1)^n)/n³) sin nx.

5. Fourier Cosine Series: f(x) = π/4 + (8/π) Σ_{n=1}^∞ (cos (nπ/2) sin²(nπ/4)/n²) cos nx.
Fourier Sine Series: f(x) = (4/π) Σ_{n=1}^∞ (sin (nπ/2)/n²) sin nx.

6. Fourier Cosine Series: f(x) = π/2 − (4/π) Σ_{n=1}^∞ (1/(2n−1)²) cos (2n−1)x.
Fourier Sine Series: f(x) = 2 Σ_{n=1}^∞ ((−1)^{n−1}/n) sin nx.

7. Fourier Cosine Series: f(x) = 3.
Fourier Sine Series: f(x) = (12/π) Σ_{n=1}^∞ (1/(2n−1)) sin (2n−1)x.

8. Fourier Cosine Series: f(x) = 1/2 + (2/π) Σ_{n=1}^∞ ((−1)^{n−1}/(2n−1)) cos ((2n−1)πx/4).
Fourier Sine Series: f(x) = (2/π) Σ_{n=1}^∞ ((1−cos (nπ/2))/n) sin (nπx/4).

9. Fourier Cosine Series: f(x) = 3/4 − (8/π²) Σ_{n=1}^∞ (sin²(nπ/4)/n²) cos (nπx/2).
Fourier Sine Series: f(x) = (4/π²) Σ_{n=1}^∞ ((−1)^{n−1}/(2n−1)²) sin ((2n−1)πx/2)
− (2/π) Σ_{n=1}^∞ ((−1)^n/n) sin (nπx/2).

10. Fourier Cosine Series: f(x) = 1/2 − (2/π) Σ_{n=1}^∞ (sin (nπ/2)/n) cos nx.
Fourier Sine Series: f(x) = (2/π) Σ_{n=1}^∞ ((cos (nπ/2) − cos nπ)/n) sin nx.

11. Fourier Cosine Series: f(x) = (e^π−1)/π + (2/π) Σ_{n=1}^∞ (((−1)^n e^π − 1)/(n²+1)) cos nx.
Fourier Sine Series: f(x) = −(2/π) Σ_{n=1}^∞ (n((−1)^n e^π − 1)/(n²+1)) sin nx.

Section 2.1.1.

1. (a) F(s) = 1/s².
(b) F(s) = e/(s−3).
(c) F(s) = s/(s²+4).
(d) F(s) = 2/(s(s²+4)).
(e) F(s) = e^{−2s}(−1+e^{2s})/s.
(f) F(s) = e^{−2s}(−1+e^{2s}+s)/s².
(g) F(s) = e^{−s}/s.

2. (a) F(s) = 1/(4+(s−1)²).
(b) F(s) = (1+s)/(4+(s+1)²).
(c) F(s) = (s−1)/(9+(s−1)²).
(d) F(s) = (s+2)/(16+(s+2)²).
(e) F(s) = 5/(25+(s+1)²).
(f) F(s) = 1/(s−1)².

3. (a) F(s) = −18/(s−3)².
(b) F(s) = 2/(s+3)³.
(c) F(s) = 12s/(s²+4)².
(d) F(s) = (s+2)/(49+(s+2)²).
(e) F(s) = 1/s + s/(s²+4).
(f) F(s) = 48s(s²−4)/(s²+4)⁴.

4. (a) F(s) = √(π/s).
(b) F(s) = √π/(2s^{3/2}).

5. (a) f(t) = t.
(b) f(t) = e^{2t} t.
(c) f(t) = (1/54)(−3t cos 3t + sin 3t).
(d) f(t) = e^{3t} t.
(e) f(t) = e^{−8t}(−1 + e^t).
(f) f(t) = (1/(4√7)) ( e^{(−8−2√7)t} − e^{(−8+2√7)t} ).
(g) f(t) = e^{t/2} cos (√61 t/2).
(h) f(t) = e^{−2t} cos 2√2 t.

6. (a) f(t) = (1/250)(−34e^t + 160e^t t − 75e^t t² + 34 cos 2t − 63 sin 2t).
(b) f(t) = (1/216)(72t² − 120t − 81 cos t + 9 cos 3t + 135 sin t − 5 sin 3t).

7. (a), (b), (c) - No.
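Entries in this table can be spot-checked against the defining integral F(s) = ∫_0^∞ f(t) e^{−st} dt. A Python sketch (not from the book) approximates the integral with a fine trapezoidal rule on [0, T], where T is large enough that the truncated tail is negligible; entries 1(a) and 1(c) are checked at the arbitrary point s = 2:

```python
import math

def laplace(f, s, T=40.0, n=200000):
    # Composite trapezoidal approximation of the Laplace integral on [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
# 1(a): L{t}(s) = 1/s^2
assert abs(laplace(lambda t: t, s) - 1 / s**2) < 1e-6
# 1(c): L{cos 2t}(s) = s/(s^2 + 4)
assert abs(laplace(lambda t: math.cos(2 * t), s) - s / (s**2 + 4)) < 1e-6
print("transform entries agree")
```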

Section 2.1.2.
2e−2s
2. (a) F (s) = s3 .

2−s+es s
(b) F (s) = e−s s3 .

e−s
(c) F (s) = s .

−1−πs+eπ s
(d) F (s) = e−2πs s2 .
−4s −3s
e−s
(e) F (s) = −6 e s + 2 e s + s .
−s
(f) F (s) = − e s .

e−s e−s (s+1)


(g) F (s) = 1
s2 + s − s2 .

3. (a) f (t) = e2t t3 .


( )
(b) f (t) = 13 e−2(t−2) −1 + e3(t−2) H(t − 2).

(c) f (t) = 2et−2 cos (t − 2) H(t − 2).


( )
(d) f (t) = − 12 e−2(t−2) −1 + e4(t−2) H(t − 2).

4. (a) F (s) = 2e−πs .

(b) F (s) = e−3s .

(c) F (s) = 2e−3s − e−s .

e2πs
(d) F (s) = e−
πs
2 − 1+s2 .

e−πs s
(e) F (s) = e−πs + s2 +4 .

Section 2.1.3.
( )
1. (a) f (t) = e−2t −1 + 3et .
[ ( ) ]
(b) f (t) = 17 e−7t 21+ − e14 + e7t H(t − 2) .
[ ( ) ]
(c) f (t) = 15 e−7t 5+ − e14 + e4+5t H(t − 2) .
[ ( ) ]
(d) f (t) = 17 e−7t 21+ − e14 + e7t H(t − 2) .
[ ( ) ]
(e) f (t) = 15 e−7t 5+ − e14 + e4+5t H(t − 2) .

( )
2. (a) f (t) = 1
13 −3 cos 3t − 2 sin 3t
t (
e−√2
√ √
3 3t
√ )
3 3t
+ 39 3
87 3 cos 2 + 4 sin 2 .
( √ ) √
(b) f (t) = 1
22 cos 5(t − 4) + cos 3 (t − 4) H(t − 4) − √2
3
sin 3 t.
[ (
1
(c) f (t) = 18 36)])
cos 3t + H(t
] − 4) − 3(t − 4) cos 3(t − 4)
+ sin 3(t − 4) H(t − 2) .
[ ( )
(d) f (t) = 1
2 − − 2 + cos (t − 2) + cosh (t − 2) H(t − 2)
ANSWERS TO EXERCISES 583
( ) ]
+ −2 + cos (t − 1) + cosh (t − 1) H(t − 1) .
[ ( √ √ ) √ √ ]
(e) f (t) = 1
9 3t + H(t − 1) 3 − 3t + 3 sin 3 (ti1) − 7 3 sin 3 t .

4. F (s) 2s tanh 2s .

5. (a) y(t) = tet + 3(t − 2)H(t − 2).


( ) ( )
(b) y(t) = 3 e−2(t−2) − e−3(t−2) H(t−2) + 4 e−3(t−5)−e−2(t−25) H(t−5).

(c) y(t) = eπ−t − sin (t − π) H(t − π).


√ √ √ ( )
(d) y(t) = 2
e−t sin
2 t) + 14 e−t cos 2 t+ 14 sin t − cos t
√ 2

+ 22 H(t − π) e−(t−π) sin 2 (t − π).

(e) y(t) = 12 H(t − π) sin 2t − 21 H(t − 2π) sin 2t.


( )
(f) y(t) = 1 + H(t − 1) sin t.

Section 2.1.4.

2. (a) y(t) = t − sin t.

(b) y(t) = et − t − 1.

(c) y(t) = t²/2 − (1/2) sin² t.

(d) y(t) = (1/2) t sin t.

(e) y(t) = t²/2 − (1/2)(t − 2)² H(t − 2).

2
3. (a) F (s) = s2 (s2 +4) .

1
(b) F (s) = (s+1)(s2 +1) .

1
(c) F (s) = (s−1)s2 .

(d) F(s) = s/(s²+1)².

2
(e) F (s) = (s−1)s3 .

2
(f) F (s) = 9(s+1)(s2 +4) .

4. (a) y(t) = −1 + e−t + t.


(b) y(t) = (1/2)(t cos t − sin t).
(c) y(t) = (1/5) e^{−t} + (1/10)(−2 cos 2t + sin 2t).
(d) y(t) = (1/256)(−1 + 8t² + cos 4t).

(e) y(t) = 1/5 + (1/5)(cos 2t + 2 sin 2t).

5. (a) y(t) = t + (3/2) sin 2t.

(b) y(t) = 1 − cos t.

(c) y(t) = (1 − t)2 e−t .

(d) y(t) = t³ + (1/20) t⁵.

(e) y(t) = 1 + 2t.

6. (a) y(t) = 4 + (5/2) t² + (1/24) t⁴.

(b) y(t) = (2/π) t − t.

Section 2.2.1.
1. (a) F (ω) = 2i cos ω
ω + 2πδ(ω) − 4 sinω ω .

1−e3iω +3iω
(b) F (ω) = e−3iω ω2 .

i
(c) F (ω) = i−ω .
√ ω2
(d) F (ω) = 2π
a e− 2 .

∫∞ y sin(πy) cos(xy) ∫∞ y sin(πy) cos(xy)


6. 1−y 2 dy = cos |x| for |x| < π; 1−y 2 dy = 0
0 0
for |x| > π.

7. (a) Fc {f }(ω) = 2a
ω 2 +a2 .

(b) Fs {f }(ω) = 2ω
ω 2 +a2 .

(c) Fs{xe^{−ax}}(ω) = 4aω/(ω²+a²)².

(d) Fc{(1 + x)e^{−ax}}(ω) = 2(a³ + a² + aω² − ω²)/(ω²+a²)².

8. F (ω) = − ωi + πδ(ω).

Section 2.2.2.
π −| ω
a |.
2. (a) F (ω) = |ω| e
( )
(b) F (ω) = π
2 e−|ω−a| + e−|ω+a| .

1 −bt
6. (a) F (ω) = − 2a e sin at.

(b) F (ω) = 21 e−tH(t) + 12 et H(−t).

(c) F (ω) = e−tH(t) − e 2 H(t) + 2t e− 2 tH(t).


t t

7. (a) f (t) = i
2sgn(t) e−a|t| .
{ 2t
e , t>0
(b) f (t) =
e−t , t < 0.
(c) f (t) = 1
4a (1 − a|t|)e−a|t| .

1 −a|t−1|
8. f (t) = 2a e .

9. (a) y(t) = 41 e−|t| + 12 te−t H(t).


{ 1 −t
e , t>0
(b) y(t) = 91 2t 1 2t
9 e − 3 te , t < 0.
(c) y(t) = g(t) ∗ f (t), g(t) = F −1 { ω21+1 } = e−|t| H(t).
[ ]
(d) y(t) = H(t) e−t + e−2t .

Section 3.1.

2. (a) λn = (2n−1)²π²/4, yn(x) = cos ((2n−1)πx/2).

(b) λ0 = −1, y0(x) = e^{−x}; λn = n², yn(x) = sin nx − n cos nx.

(c) The eigenvalues are λn = µn², where µn are the positive solutions of the equation cot µ = µ; yn(x) = sin µn x.

(d) λn = −n⁴, yn(x) = sin nx.

4. (a) λn = µn², where µn are the positive solutions of tan µ = µ; yn(x) = sin µn x.

(b) λ0 = −µ0², where µ0 = coth µ0 π; y0(x) = sinh µ0 x − µ0 cosh µ0 x;
λn = µn², yn(x) = sin µn x − µn cos µn x.

(c) The eigenvalues are λn = µn², where µn are the positive solutions of tan 2µ = −µ; yn(x) = sin µn x + µn cos µn x.

5. (a) λn = n²π² + 1, yn(x) = (1/x) sin (nπ ln x).

(b) λn = n²π², yn(x) = sin (nπ ln x).

(c) λn = (2n−1)²π²/4, yn(x) = sin ((2n−1)π ln x / 2).

7. λn = 1/4 + (nπ/ln 2)², yn(x) = (1/√x) sin ((nπ/ln 2) ln x).

8. This is not a regular Sturm–Liouville problem. λn = n², yn(x) ∈ {sin nx, cos nx}.

Section 3.2.
4


1
( 2n−1 )
1. f (x) = π 2n−1 sin ln 2 π ln x .
n=1

1


1 2n−1
2. (a) f (x) = π 2n−1 cos 2 πx.
n=1
8


(−1)n−1 2n−1
(b) π2 2n−1 cos 2 πx.
n=1
4


1−cos2 2n−1 π 2n−1
(c) f (x) = π 2n−1
4
cos 2 πx.
n=1
)
16
∑∞ sin 2n−1 π
4 2n−1
(d) f (x) = π2 (2n−1)2 cos 2 πx.
n=1

l


(−1)n−1 nπ
3. f (x) = π n sin l x.
n=1

4l


(−1)n−1 (2n−1)π
4. f (x) = π2 (2n−1)2 sin 2l x.
n=1

(√ )
√ ∑∞ sin λn (√ )
5. (a) f (x) = 2 √ √ √ cos λn x ,
λ 1+sin 2 λ
√ √ n=1 √n n

cos λn − λn cos λn = 0.
(√ )
√ ∑
∞ 2 cos λn −1 (√
(b) f (x) = 2 √ √ √ cos λn x,
λ 1+sin 2 λ
√ √ n=1 √ n n

cos λn − λn cos λn = 0.

Section 3.2.1.
5. (1/2) ln ((1+x)/(1−x)).

6. (a) f (x) = 14 P0 (x) + 12 P1 (x) + 5


16 P2 (x) + ···.

(b) f (x) = 12 P0 (x) + 58 P2 (x) − 3


16 P4 (x) + ···.

(c) f (x) = 23 P1 (x) − 73 P3 (x) + 11


6 P5 (x).

Section 3.2.2.

5. (a) y = c1 x−1 J0 (µx) + c2 x−1 Y0 (µx).

(b) y = c1 x−2 J2 (x) + c2 x−2 Y2 (x).

(c) y = c1 J0 (xµ ) + c2 Y0 (xµ ).

7. y = −√(2/(πx)) ( sin x + (cos x)/x ).


9. (a) 1 = 2 Σ_{k=1}^∞ (1/(λk J1(λk))) J0(λk x).

(b) x² = 2 Σ_{k=1}^∞ ((λk² − 4)/(λk³ J1(λk))) J0(λk x).

(c) (1−x²)/8 = Σ_{k=1}^∞ (1/(λk³ J1(λk))) J0(λk x).

Section 4.1.

4. (a) u(x, y) = F (x) + G(y), where F is any differentiable function and


G is any function.

(b) u(x, y) = xF (y) + G(y), where F and G are any functions of


one variable.

5. (a) u(x, y) = yF (x) + G(x), where F and G are any functions of


one variable.

(b) u(x, y) = F (x)+G(y), where F is any differentiable function and


G is any function of one variable.

(c) u(x, y) = F (y) sin x + G(y) cos x, where where F and G are any
functions of one variable.

(d) u(x, y) = F (x) sin y + G(x) cos y, where where F and G are any
functions of one variable.

8. (a) xux − yuy = x − y.


( )
(b) u = ux uy . (c) u2 − u ux + yuy = 1.

(d) xuy − yux = x2 − y 2 .

(e) xux − yuy = 0.

Section 4.2.
( )
1. (a) u(x, y) = f ln x + y1 , f is an arbitrary differentiable function of
one variable.
( ) ( )
(b) u(x, y) = f xy −cos xy
2 , f is an arbitrary differentiable function
of one variable.
( )
(c) u(x, y) = f y −arctan x , f is an arbitrary differentiable function
of one variable.
( )
(d) u(x, y) = f x2 + y1 , f is an arbitrary differentiable function of
one variable.

2. (a) u(x, y) = f (bx−ay)e− a x , f is an arbitrary differentiable function


c

of one variable.
( ) c
(b) u(x, y) = f x2 + y 2 e y , f is an arbitrary differentiable function
of one variable.
( ) x2
(c) u(x, y) = f e−x(x+y+1) e 2 , f is an arbitrary differentiable func-
tion of one variable.
y2
(d) u(x, y) = f (xy)e 2 + 1, f is an arbitrary differentiable function
of one variable.
(y)
(e) u(x, y) = xf x , f is any differentiable function of one variable.
( x−y )
(f) u(x, y) = xyf xy , f is an arbitrary differentiable function of
one variable.
2( )
(g) u(x, y) = e−(x+y) x2 + f (4x − 3y) , f is an arbitrary differen-
tiable function of one variable.
( 1+xy )
(h) u(x, y) = xf x , f is an arbitrary an differentiable function
of one variable.

( 5x+3y )
3. (a) u(x, y) = f (5x + 3y); up (x, y) = sin 3 .

(b) u(x, y) = f (3x + 2y) + 12 sin x; up (x, y) = 1


2
1
sin x + 25 (3x + 2y)2 −
1 3x+2y
2 sin 5 .
(y) ( y )3
(c) u(x, y) = xy + f x ; up (x, y) = xy + 2 − x .
( )
x2 2
( x2
) ( )
x2 2 x2
(d) u(x, y) = x y − 2 +f y− 2 ; up (x, y) = x y − 2 + ey− 2 .
( ) )4
(e) u(x, y) = y1 f x2 + y ; up (x, y) = y1 bigl(x2 + y .
( y2 )
(f) u(x, y) = x + yf x ; up (x, y) = x + xy .

( )2
4. (a) u(x, y) = c x2 − y 2 .
[ ]
(b) u(x, y) = 1
2 2y − (x2 − y 2 ) .
y y
(c) u(x, y) = x2 e− x + e x − 1.
x
(d) u(x, y) = e x2 −y2 .


(e) u(x, y) = xy.
( )
1
f √ +√
1− x2 +y 2 x y
(f) u(x, y) = e .
x2 +y 2 x2 +y 2
( )
(g) u(x, y) = ey f − ln(y + e−x ) .

2
5. (a) u(t, x) = xet−t .
( )
(b) u(t, x) = f xe−t .
( √ )
(xt−1)+ (xt−1)2 +4x2
(c) u(t, x) = f 2x .

(d) u(t, x) = f (x − ct)

(e) u(t, x) = sin(x − t) + (x − t) sin t − x + t + cos t + t sin t − 1.


 t − x,
 0 < x < t,
6. u(t, x) = (x − t) , t < x < t + 2,
2


x − t, x > t + 2.

Section 4.3.
1. xuxy − yuyy − uy + 9y 2 = 0.

2. (a) D = b2 − ac = 8 > 0, hyperbolic.

(b) D = 0, parabolic.

(c) D = −27 < 0, elliptic.

(d) hyperbolic.

(e) parabolic.

(f) elliptic.

3. (a) D = −2y: hyperbolic if y < 0, parabolic if y = 0, elliptic if y > 0.

(b) D = 4y²(x² + x + 1): hyperbolic if y ≠ 0, parabolic if y = 0,



elliptic - nowhere.

(c) D = −y 2 . elliptic if y ̸= 0, parabolic if y = 0, hyperbolic -


nowhere.

(d) D = 4y 2 − x2 . hyperbolic if 2|y| > |x|, parabolic if 2|y| = |x|,


elliptic if 2|y| < |x|.

(e) D = 1 − yex . hyperbolic if y < e−x , parabolic if y = e−x , elliptic


if y > e−x .

(f) D = 3xy. hyperbolic in quadrants I and III, parabolic on the


coordinate axes, elliptic in quadrants II and IV .

(g) D = x2 −a2 . hyperbolic if |x| > |a|, parabolic if |x| = |a|, elliptic
if |x| < |a|.

4. (a) D = −3/4 < 0. The equation is elliptic. The characteristic


′2 ′

equation
√ is y + y + 1 = 0. ξ = y − 1/2(1 + 3 i)x, η = y − 1/2(1 −
3 i)x; α = ξ + η, β = (ξ − η)i. The canonical form is uαα + uββ = 0.

(b) D = −4x. hyperbolic if x < 0, parabolic if x = 0, elliptic if


′2
x > 0. √
The characteristic
√ equation is xy + 1 = 0. If x < 0, then
ξ = 2 + −x, η = y − −x. The canonical form in this case is
1 1 uξ − uη
uξη = − (ξ − η)2 − .
4 162 2(ξ − η
√ √
If x < 0, xi = y + 2 x i, η = y − 2 x i, α = 1/2(ξ + η), β =
−i/2(ξ − η). The canonical form is

1 β2
uαα + uββ = + .
β 4
If x = 0, then the equation reduces to uyy = 0.

(c) D = 9 > 0. The equation is hyperbolic. The characteristic


equation is 4y ′ + 5y ′ + 1 = 0. ξ = x + y, η = 1/4x − y. The canonical
2

form is
1( 8)
uξη = uη −
3 3

(d) D = 1/4. The equation is hyperbolic. The characteristic equation


is 2y ′ − 3y ′ + 1 = 0. ξ = x − y, η = 1/2x − y. The canonical form is
2

uξη = uη − u − η.

(e) D = −y. The equation is hyperbolic if y < 0, parabolic if y = 0



elliptic if y > 0. For the hyperbolic case, ξ = x + −y, η =
and √
x − −y. The canonical form is

uη − uξ
uξη = .
2(ξ − η)

For the elliptic case, α = x, β = 2 y. The canonical form is

1
uαα + uββ = uβ .
β

In the parabolic case we have uxx = 0.

(f) D = −x²y². The equation is elliptic if xy ≠ 0, and it is
parabolic if x = 0 or y = 0. For the elliptic case, ξ = y², η = x² and
the canonical form is
1 1
uξξ + uηη = uξ + uη .
2ξ 2η

In the parabolic case we have uxx = 0 or uyy = 0.

(g) D = −(a2 + x2 )(a2 + y 2 ) < 0. The equation is elliptic in the whole


plane. The canonical form is

uξξ + uηη = 0.

5. (a) D = 0. The equation is parabolic. The characteristic equation


is 9y ′ + 12y ′ + 4 = 0. ξ = 2x + 3y, η = x. The canonical form is
2

uηη = 0. u(x, y) = xf (3y − 2x) + g(3y − x).

(b) D = 0. Parabolic. The characteristic equation is y ′ +8y ′ +16 = 0.


2

ξ = 4x − y, η = x. The canonical form is uηη = 0. u(x, y) =


xf (4x − y) + g(4x − y).

(c) D = 4 > 0. The equation is hyperbolic. The characteristic


equation is y ′ + 2y ′ − 3 = 0. ξ = y − 3x, η = x + y. u(x, y) =
2

f (y − 3x) + g(x + y).

(d) D = 1/4. The equation is hyperbolic. The characteristic equation


is 2y ′ − 3y ′ + 1 = 0. ξ = x2 − y 2 , η = y. The canonical form is
2

uξη = 0. u(x, y) = f (x2 − y 2 ) + g(y).

(e) D = 0. The equation is parabolic. The characteristic equa-


tion is xy ′ = y. ξ = x, η = xy . The canonical form is uξξ = 4.
(y) (y)
u(x, y) = 2x2 + xf x +g x .
( )
(f) D = 0.( The equation ) is parabolic. ξ = y + ln 1 + sin x ,
η = y + ln 1 + sin x and the canonical form is uηη + uη = 0.
η
u(ξ, η) = f (ξ) + e− 4 g(ξ).

6. (a) D = 49 > 0. The equation is hyperbolic. The characteris-


tic equation is y ′ + y ′ − 2 = 0. ξ = x − y2 , η = x + y. The
2

canonical
( )form is 9uξη( + 2 =
) 0. The general solution is u(x, y) =
− 92 x + y2 (x + y) + f x + y2 + g(x + y). From the boundary condi-
y2
tions we obtain that u(x, y) = x + xy + 2 .

(b) D = 0. The equation is parabolic. The characteristic equa-


tion is e2x y ′ − ex+y y ′ + e2y = 0. ξ = e−x + e−y , η = e−x − e−y .
2

The
( canonical ) form
( is uξη) =( 0. u(ξ, η)
) = f (ξ) + ηg(ξ). u(x, y) =
f e−x + e−y + e−x − e−y g e−x + e−y . Using the boundary condi-
t (2−t)2
tions we have f (t) = 2 and g(t(= 2 .

(c) D = 0. The equation is parabolic. ξ = x+y, η = x. General solu-


tion u(x, y) = f (x+y)+xg(x+y)+ 49 e3y −cos y. From u(0, y) = 94 e3y
we obtain that f (y) = cos y. From u(x, 0) = − 49 we obtain that
x
g(x) = x1 . Therefore u(x, y) = cos (x + y) + x+y + 49 e3y − cos y.

7. (a) ξ = x, η = 2x + y, ζ = 2x − 2y + z. The canonical form is

uξξ + uηη + uζζ + uξ = 0.

(b) ξ = x + 21 y − z, η = − 12 y, ζ = z. The canonical form is

uξξ = uηη + uζζ .

(c) t′ = 12 t+ 12 x− 12 y − 12 z, ξ = 12 t+ 12 x+ 21 y + 12 z, η = − 2√
1
3
1
t+ 2√ 3
x+
√1
2 3
y − 2 3 z, ζ = − 2 5 t+ 2 5 x− 2 5 y + 2 5 z. The canonical form is
√1 √1 √1 √1 √1

ut′ t′ = uξξ + uηη + uζζ .

(d) t′ = 2√ 1
3
t+ 2√1
3
x− 2√ 1
3
y − 2√
1
3
z, ξ = √12 x+ √12 y, η = √1 t+ √
2
1
2 2
z,
ζ = − 12 t + 12 x − 12 y − 12 z. The canonical form is

ut′ t′ = uξξ + uηη + uζζ .



Section 5.1.

3. u(x, t) = sin 5t cos x + (1/15) sin 3x sin (15t/2).

4. u(x, t) = (1/2)[ 1/(1+(x+at)²) + 1/(1+(x−at)²) ] + (1/a) sin x sin at.

5. u(x, t) = −(1/a)[ e^{−(x+at)²} − e^{−(x−at)²} ].

6. u(x, t) = sin 2x cos 2at + (1/a) cos x sin at.
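Answers of this d'Alembert type are easy to verify numerically. A Python sketch (not from the book) checks that the solution of Exercise 6 satisfies u_tt = a² u_xx together with u(x, 0) = sin 2x and u_t(x, 0) = cos x, using finite-difference derivatives; a = 3 is an arbitrary choice:

```python
import math

a = 3.0
u = lambda x, t: math.sin(2 * x) * math.cos(2 * a * t) \
    + math.cos(x) * math.sin(a * t) / a

h = 1e-3  # finite-difference step
for x, t in [(0.3, 0.7), (1.1, 0.2), (-0.5, 1.4)]:
    # Central second differences for u_tt and u_xx.
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    assert abs(u_tt - a**2 * u_xx) < 1e-2      # wave equation residual
    assert abs(u(x, 0) - math.sin(2 * x)) < 1e-12  # initial displacement
    ut0 = (u(x, h) - u(x, -h)) / (2 * h)
    assert abs(ut0 - math.cos(x)) < 1e-3       # initial velocity
print("d'Alembert solution verified")
```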

7. u(x, t) = (1+x²+a²t²)/((1+x²+a²t²)² − 4a²x²t²) + (1/(2a)) e^x (e^{at} − e^{−at}).

8. u(x, t) = cos (πx/2) cos (πat/2) + (1/(2ak)) (e^{kx} − e^{−kx})(e^{akt} − e^{−akt}).

9. u(x, t) = e^{x+t} + e^{x−t} + 2 sin x sin t.

10. u(x, t) = (1/a)[ e^{−(x+at)²} − e^{−(x−at)²} ] + t/2 + (1/(4a)) cos 2x sin 2at.

11. u(x, t) = 1, if x − 5t < 0 and x + 5t < 0; u(x, t) = 1/2, if x − 5t < 0 and x + 5t > 0; u(x, t) = 0, if x − 5t > 0.

12. u(x, t) = (1/2)[sin (x+at) + sin (x−at)] + (1/(2a))[arctan (x+at) + arctan (x−at)], if x − at > 0;
u(x, t) = (1/2)[sin (x+at) − sin (x−at)] + (1/(2a))[arctan (x+at) − arctan (x−at)], if 0 < x < at.

13. (1/2)[f(x + at) + f(x − at)].

14. sin 2πx cos 2πt + (2/π) sin πx cos πt.

15. u(x, t) = (1/2)[G(x + t) − G(x − t)], where G is the anti-derivative of g_p and g_p is the 2-periodic extension of
g_0(x) = −1 for −1 < x < 0, g_0(x) = 1 for 0 < x < 1.
The function G is 2-periodic, and for −1 ≤ x ≤ 1 we have G(x) = |x|.
Let t_r > 0 be the times when the string returns to its initial position u(x, 0) = 0. This happens only when
G(x + t_r) = G(x − t_r).
Since G is 2-periodic, the last equation gives
x + t_r = x − t_r + 2n, n = 1, 2, . . .,
so that
t_r = n, n = 1, 2, . . .
are the moments when the string comes back to its original position.

16. u(x, t) = (1/(a(1+a²))) sin x (sin at − a cos at + a e^{−t}).

17. E_p(t) = (π/4) cos²t. E_k(t) = (π/4) sin²t. The total energy is E(t) = π/4.

Section 5.2.


1. (a) u(x, t) = 4
π
1
(2n−1)2 sin (2n − 1)x sin (2n − 1)t.
n=1
9


1 2nπ
(b) u(x, t) = π n2 sin 3 sin nx cos nt.
n=1
4


(−1)n+1 (2n−1)π
(c) u(x, t) = sin x sin t + π (2n−1)2 sin 4
n=1
· sin (2n − 1)x sin (2n − 1)t.


(−1)n+1
(d) u(x, t) = 4
π (2n−1)2 sin (2n − 1)x cos (2n − 1)t.
n=1


(e) u(x, t) = 8
π
1
(2n−1)3 sin (2n − 1)x sin (2n − 1)t.
n=1


∞ ( )
2. (a) u(x, t) = e−kt an cos λn t + bn sin λn t sin nπx
l , where λn , an
n=1
and bn are given by

a2 n2 π 2
∫l
λ2n = l2 − k2 , an = 2
l f (x) sin nπx
l dx;
0

2
∫l nπx
l (−kan + λn bn ) = g(x) sin l dx.
0

4 −t
[ √
3. u(x, t) = πe sin x cosh ( 3t)

∞ √ ]
+ 1
2n−1 sin (2n − 1)x cos ( 4n2 − 4n − 2t) .
n=2

[ √ ]
4. u(x, t) = e−t t sin x + √1
3
sin 2x sin 3t .

∞ [
∑ ( ) ( 2n−1 )] ( 2n−1 )
2lbn
5. u(x, t) = an cos 2n−1
2l πat + (2n−1) sin 2l πat sin 2l πx ,
n=1
where
∫l ∫l
2 ( 2n − 1 ) 2 ( 2n − 1 )
an = f (x) sin πx dx, bn = g(x) sin πx dx.
l 2l l 2l
0 0

∞ [
∑ ]
4 2−2(−1)n 1
6. u(x, t) = π3 n3 cos 2nπt + 4n3 −n sin 2nπt sin nπx.
n=1

2 4


1
7. u(x, t) = π + π 1−4n2 cos 2nx cos 4nt.
n=1



8. u(x, t) = − π83 1
(2n−1)3 cos (2n − 1)πx sin (2n − 1)πt.
n=1

( )


9. u(x, t) = 12 (a0 + a′0 t) + an cos nπa
l t + l ′
nπa an sin nπa
l t sin nπa
l x.
n=1

( )
Al2
10. u(x, t) = l2 +a2 π 2 e−t − cos aπ
l t + l
aπ sin aπ
l t .

( )
2Al3


(−1)n+1 sin anπ l x
11. u(x, t) = π n(l2 +a2 π 2 n2 ) e−t − cos anπ
l t + l
aπn sin aπn
l t .
n=1


∞ a(2n−1)π
12. u(x, t) = 16Al2
π
[sin 2l x
] sin t
n=1 (2n−1) a π n (2n−1) −4l
2 2 2 2 2


∞ a(2n−1)π
[sin ] sin
3 x a(2n−1)π
− 32Al
aπ 2
2l
2l t.
n=1 (2n−1)2 a2 π 2 n2 (2n−1)2 −4l2

∞ (
)
4Al2

13. u(x, t) = 4l2 +a2 π 2 e−t − cos aπ
2l t + 2l
aπ sin aπ
2l t cos π
2l x.
n=1

( )
14. u(x, t) = 1 − πx t2 + πx t3 + sin x cos t
[ ]


+ π4 1
n3 3(−1) n
t − 1 + cos nt − 3
n (−1) n
sin nt sin nx.
n=1

( )
15. u(x, t) = 1 − πx e−t + xt 1
π + 2 sin 2x cos 2t
[ ]


−t
( )
− π2 1
n(1+n2 e + n 2
cos nt − 2n + 1
n sin nt sin nx.
n=1



(−1)n
16. u(x, t) = x + t + sin x
2 cos t
2 − 8
π (2n+1)2 cos 2n+1
2 t sin 2n+1
2 x.
n=1

Section 5.3.
∑∞ ∑ ∞
1. u(x, y, t) = π646 (2m−1)3 (2n−1)3 sin (2m − 1)πx sin (2n − 1)πy
1

√ m=1 n=1
cos (2m − 1)2 + (2n − 1)2 t.

√ ∑

2. u(x, y, t) = sin πx sin πy cos 2t + 4
π

sin (2n−1)πy
n=1 (2n−1) 1+(2n−1)2

sin 1 + (2n − 1)2 t.


3. u(x, y, t) = v(x, y, t) + √2 sin πx sin 2πy sin 5 t, where
5

∑∞ ∑ ∞
v(x, y, t) = π646 (2m−1)3 (2n−1)3 sin (2m − 1)πx sin (2n − 1)πy
1

√ m=1 n=1
cos (2m − 1)2 + (2n − 1)2 t.

∑∞ ∑ ∞
4. u(x, y, t) = π162 √
sin (2m−1)πx sin (2n−1)πy
(2m−1)(2n−1) (2m−1)2 +(2n−1)2
√ m=1 n=1
sin (2m − 1)2 + (2n − 1)2 t.

∑∞ ∑ ∞
5. u(x, y, t) = π166 (2m−1)3 (2n−1)3 sin (2m − 1)πx sin (2n − 1)πy
1

√ m=1 n=1
sin (2m − 1)2 + (2n − 1)2 t.

∑ ∞ [
∞ ∑ √ √ ] mπx
6. (a) u(x, y, t) = amn cos λmn t + bmn sin λmn t sin a
m=1 n=1
(2n−1)πy m2 π 2 (2n−1)2 π 2
sin 2b , where λmn = a2 + 4b2 .
∑ ∞ [
∞ ∑ √ √ ]
(b) u(x, y, t) = amn cos λmn t + bmn sin λmn t
m=0 n=1
mπx nπy m2 π 2 n2 π 2
cos a sin b , where λmn = a2 + b2 .


7
√3 cos 4t sin 4y − 5 cos
7. u(x, y, t) = 5t cos 2x sin y

+ 10 sin 10t cos x sin 3y.

( )

∞ ∑ ∞ ( ) ( )
8. u(x, y, t) = e−k t
2
amn cos λmn t + bmn sin λmn t
( ) ( m=1 ) n=1
sin mπx
a sin nπyb , where

λmn = m aπ2 c + n πb2 c − k 4 .
2 2 2 2 2 2


m2 π 2 c2 n2 π 2 c2
9. Let ωmn = aπ a2 + b2 . You need to consider two cases:

Case 10 . ω ̸= ωmn for every m, n = 1, 2, . . .. In this case



∞ ∑
∞ ( ) ( ) ( )
u(x, y, t) = amn sin ωt − ω
ωmn sin ωmn t sin mπx
a sin nπy
b ,
m=1 n=1
where

4
∫a ∫b ( mπx ) ( nπy )
amn = 2 −ω 2 )ab
(ωmn F (x, y) sin a sin b dy dx.
0 0

Case 20 . ω = ωm0 n0 for some m0 and n0 (resonance), then



∞ ∑
∞ ( ) ( ) ( )
u(x, y, t) = amn sin ωt − ω
ωmn sin ωmn t sin mπx
a sin nπy
b
m=1 n=1
m̸=m0 n̸=n0
( ) ( ) ( )
+am0 n0 sin ωt − ωt cos ωt sin m0aπx sin n0bπy , where for m ̸= m0
and n ̸= n0 amn are determined as in Case 10 , and
∫a ∫b ( ) ( )
2
am0 n0 = ωab F (x, y) sin m0aπx sin n0bπy dy dx.
0 0

If there are several couples (m0 , n0 ) for which ω = ωm0 n0 , then in-
stead of one resonance term in the solution there will be several reso-
nance terms of the form specified in Case 20 .
√ √
10. (a) ω = 3 and ωmn = m2 + n2 . m2 + n2 ̸= 3 for every m and n.
Therefore, by Case 10 we have that
∑∞ ∑ ∞ ( √ )
u(x, y, t) = amn sin 3t−√m23+n2 sin m2 + n2 t sin mx sin ny
2(
m=1 n=1
√ )
= π4 sin 3t − √32 sin 2t sin x sin y

∞ ∑ ∞ ( √ )
+ amn sin 3t − √m23+n2 sin m2 + n2 t sin mx sin ny,
m=2 n=2 ( )( )
16mn 1+(−1)m 1+(−1)n
where for m n ≥ 2, amn = π2 (m2 +n2 −3)(m2 −1)2 (n2 −1)2 .
√ √ √
(b) ω = 5 and ωmn = m2 + n2 . m2 + n2 = 5 for m = 1 and
n = 2 or m = 1 and n = 3. Therefore, by Case 20 we have


∑ ( √ 5 √ )
u(x, y, t) = amn sin 5 t − √ sin m2 + n2 t
2
m +n 2
m, n=1
m2 +n2 ̸=5
( √ √ √ )
sin mx sin ny + a1,2 sin 5 t − 5t cos 5 t sin x sin 2y
( √ √ √ )
+a2,1 sin 5 t − 5 t cos 5 t sin 2x sin y, where
∫π ∫π
2
a1,2 = √5π 2
xy sin2 x sin y sin 2y dy dx = a2,1
0 0
∫π ∫π
= √2
5π 2
xy sin x sin2 y sin 2x dx dy = − 49 .
0 0
∫π ∫π
For m = n = 1 we have a1,1 = √2
5π 2
x sin2 x y sin2 ydy dx =
0 0
2
π
√ .
8 5
For every other m and n we have

4
∫π ∫π
amn = (m2 +n2 −5)π 2 x sin x sin mx y sin y sin ny dy dx
0 0
16mn(1+(−1)m )(1+(−1)n )
= (m2 +n2 −5)(m2 −1)2 (n2 −1)2 π 2 .


∞ ∑
∞ 2
+ω 2 ) sin ω t++2kωt cos ω
11. u(x, y, t) =
(ωmn
[ ] sin mπx
a sin nπy
b ,
m=1 n=1 (2m−1)(2n−1) (ωmn −ω) +4k ω
2 2 2

m2 π 2 n2 π 2
where ωmn = a2 + b2 .

Section 5.4.


8
1. u(r, φ, t) = 3 J (z
z0n 1 0n )
J0 (z0n r) cos (z0n t).
n=1

(z )

∞ J1 0n 2
2. u(r, φ, t) = 2
z0n (J1 (z0n ))2
J0 (z0n r) sin (z0n t).
n=1



1
3. u(r, φ, t) = 16 sin φ 3 J (z
z1n 2 1n )
J1 (z1n r) cos (z1n t).
n=1



1
4. u(r, φ, t) = 16 sin φ 3 J (z
z1n 2 1n )
J1 (z1n r) cos (z1n t)
n=1


1
+24 sin 2φ 4 J (z
z2n 3 2n )
J2 (z2n r) sin (z2n t).
n=1

5. u(r, φ, t) = 5J4 (z4,1 r) cos 4φ cos (z4,1 t) − J2 (z2,3 r) sin 2φ cos (z2,3 t).


1
6. u(r, φ, t) = J0 (z03 r) cos (z03 t) + 8 4 J (z
z0n 1 1n )
J0 (z0n r) sin (z0n t).
n=1


∞ (z ) ( )
7. u(r, φ, t) = 4 1
2 J (z
z0n 1 0n )
J0 0n
2 r sin z0n
2 t .
n=1

[ ]

∞ ( ) ( ) ( z0n )
8. u(r, t) = an cos c z0n
a t + bn sin c z0n
a t J0 a r , where
n=1
∫a (z )
an = ( (2 ))2 rf (r)J0 0n
a r dr,
a2 J1 z0n 0
∫a ( z0n )
bn = ( 2( ))2 rg(r)J0 a r dr.
a c z0n J1 z0n 0

∫a
9. u(r, t) = a22 (f (r) + tg(r))r dr
[ 0
]
∑∞ ( z ) ( z ) ( )
+ an cos c a t + bn sin c a t J0 z1n
1n 1n
a r , where
n=1
∫a ( z1n )
an = ( (2 ))2 rf (r)J0 a r dr,
a2 J0 z1n 0
∫a ( z1n )
bn = ( 2( ))2 rg(r)J0 a r dr.
a c z1n J0 z1n 0

[ ]


1(
( z0n ) ( z0n )
10. u(r, t) = A
c2
a2 −r 2
4 − 2a2 ) J0 a r cos c a t .
2
n=1 z0n J1 z0n


∞ ( )
11. u(r, t) = an (t)J0 z0na r ,
n=1
( )
∫t ∫a ( z0n ) ( )
1
an (t) = λn f (ξ, η)J0 a ξ sin λn (t − η) dξ dη, λn = cz0n
a .
0 0

1

∞ [ ] 2n−1
12. u(r, t) = r an cos (λn t) + µn sin (λn t) , where λn = 2(r2 −r1 ) π,
n=1

λn sin (λn r2 )+r2−1 cos (λn r2 )


µn = λn cos (λn r2 )−r2−1 sin (λn r2 )
,
∫r2 ( )
an = cos (λn r) + µn sin (λn r) rf (r) dr.
r1


∞ ( µ1n ) ( aµn )
13. u(r, φ, t) = A cos φ an J1 a r cos a t , where µn are the pos-
n=1
itive roots of J1′ (x) = 0,

2µ2n ∫a ( µn )
and an = ( )2 J1 a r dr.
a(µ2n −1) J1 (µn ) 0

Section 5.5.
1. u(x, t) = 3 + 2H(t − x/2), H(·) is the unit step Heaviside function.

2. u(x, t) = sin (x − t) − H(t − x) sin (x − t), H(·) is the unit step Heaviside function.

3. u(x, t) = [sin (x − t) − H(t − x) sin (x − t)] e^{−t}, H(·) is the unit step Heaviside function.

4. u(x, t) = [sin (x − t) − H(t − x) sin (x − t)] e^{t}, H(·) is the unit step Heaviside function.

5. u(x, t) = t2 e−x − te−x + t.

( ) ( )
6. u(x, t) = f t − xc H t − xc , H(·) is the unit step Heaviside function.

7. u(x, t) = (t − x) sinh (t − x) H(t − x) + xe−x cosh t − te−t sinh t.

8. u(x, t) = t³/6 − (1/6)(t − x)³ H(t − x), H(·) is the unit step Heaviside function.

9. u(x, t) = t + sin (t − x) H(t − x) − (t − x)H(t − x), H(·) is the unit


step Heaviside function.

10. u(x, t) = (k/(c²π²)) [1 − cos (πct/a)] sin (πx/a).

11. u(x, t) = (1/π) sin πx sin πt.


∞ [ ] ( )
12. u(x, t) = (−1)n t − (2n + 1 − x) H t − (2n + 1 − x) .
n=1


∞ ( ) ( )
13. u(x, t) = (4/π²) Σ_{n=1}^∞ (1/(2n−1)²) sin ((2n−1)πx) sin ((2n−1)πt).

14. u(x, t) = sin πx cos πt − (1/π) sin πx sin πt.

( t2
)
15. u(x, t) = f x − 2 .

( )
16. u(x, t) = f x − 3t .

17. u(x, t) = cos (x + t³/3).

( )
18. u(x, t) = e−t + te−t f (x) + te−t g(x).

19. u(x, t) = (√π/2) e^{−|x+t|}.

20. u(x, t) = (1/2) ∫_{−∞}^{∞} e^{−|ω|} cos ωt e^{iωx} dω.

21. u(x, t) = −c ∫_0^{t−x/c} f(s) ds.

1
∫t ( ∫ )
x+c(t−τ )
22. u(x, t) = 2c f (s, τ ) ds dτ .
0 |x−c(t−τ )|

1
∫t ( x+c(t−τ
∫ ) )
23. u(x, t) = 2c f (s, τ ) ds dτ
0 0

∫t ( |x−c(t−τ
∫ )| ) [ ]
− 2c
1
f (s, τ ) ds sign x − c(t − τ ) dτ .
0 0

24. u(x, t) = e−cmx f (t − cx)H(t − cx), where m = a


c2 .

25. u(x, t) = −c e^{k(x−ct)} ( ∫_0^{t−x/c} e^{ckτ} f(τ) dτ ) H(ct − x).

Section 6.1.
3. u(x, t) = (100/√(4aπt)) ∫_0^∞ e^{−(ξ−x)²/(4at)} dξ = 50 (1 + erf(x/√(4at))), where
erf(x) = (2/√π) ∫_0^x e^{−s²} ds.

4. u(x, t) = e^t (1/√(4aπt)) ∫_{−∞}^{∞} e^{−(ξ−x)²/(4at)} dξ.
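The error-function solution of Exercise 3 can be checked against the heat equation u_t = a u_xx directly, since Python's standard library provides erf. A sketch (not from the book) evaluates a finite-difference residual at a few points, with the arbitrary choice a = 1:

```python
import math

a = 1.0
# Solution of Exercise 3: u(x,t) = 50 (1 + erf(x / sqrt(4 a t))).
u = lambda x, t: 50.0 * (1.0 + math.erf(x / math.sqrt(4.0 * a * t)))

hx, ht = 1e-3, 1e-5  # space and time finite-difference steps
for x, t in [(0.4, 0.3), (-0.7, 0.5), (1.2, 1.0)]:
    u_t = (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
    u_xx = (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx**2
    # Residual of the heat equation should vanish up to discretization error.
    assert abs(u_t - a * u_xx) < 1e-3
print("heat kernel solution verified")
```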

Section 6.2.


e−(2n−1)
2
1. u(x, t) = 8
π
1
(2n−1)2
t
sin (2n − 1)x.
n=1


∞ (2n−1)2 π 2
2. u(x, t) = 3 − 12
π2
1
(2n−1)2 e− 9 t
cos nπ
3 x.
n=1



3. u(x, t) = (4/π²) Σ_{n=1}^∞ [(2n − 1)π − 8(−1)ⁿ]/(2n − 1)² e^{−(2n−1)²π²t/64} sin ((2n − 1)πx/8).

4. u(x, t) = (80/π) Σ_{n=1}^∞ 1/(2n − 1) e^{−2(2n−1)²t} sin (2n − 1)x.



5. u(x, t) = (40/π) Σ_{n=1}^∞ (1 − cos nπ)/n e^{−n²π²t/2} sin (nπx/2).

6. u(x, t) = π²/3 + 4 Σ_{n=1}^∞ (−1)ⁿ/n² e^{−3n²t} cos nx.

7. u(x, t) = (400/π) Σ_{n=1}^∞ 1/(2n − 1) e^{−(2n−1)²t/4} sin ((2n − 1)x/2).



8. u(x, t) = (4/π) Σ_{n=1}^∞ (−1)^{n−1}/(2n − 1)² e^{−a²(2n−1)²t} sin (2n − 1)x.

9. u(x, t) = Σ_{n=1}^∞ [4/(2n − 1) − 8(−1)^{n+1}/((2n − 1)²π²)] e^{−a²(2n−1)²t/4} sin ((2n − 1)x/2).

10. u(x, t) = (T₀/π) x + (2T₀/π) Σ_{n=1}^∞ (1/n) e^{−n²t} sin nx.

11. u(x, t) = (π/2) e^{−t} sin x − (16/π) Σ_{n=1}^∞ 1/(4n² − 1)² e^{−4n²t} sin 2nx.

12. u(x, t) = 1/2 − (1/2) e^{−4t} cos 2x.

13. u(x, t) = (4rl²/(a²π³)) Σ_{n=1}^∞ 1/(2n − 1)³ (1 − e^{−(2n−1)²a²π²t/l²}) sin ((2n − 1)πx/l).



14. u(x, t) = 1/3 − t − (2/π²) Σ_{n=1}^∞ (−1)ⁿ/n² e^{−a²n²π²t} cos nπx.

15. u(x, t) = (4/π) Σ_{n=1}^∞ (−1)^{n−1}/(2n − 1)⁴ [1 − e^{−a²(2n−1)²t}] sin (2n − 1)x.

16. u(x, t) = (1 − e^{−t})/2 + e^{−4t} cos 2x.


17. u(x, t) = Σ_{n=0}^∞ e^{−n²t} (a_n cos nx + b_n sin nx), where
    a_n = (1/π) ∫_{−π}^π f(x) cos nx dx, b_n = (1/π) ∫_{−π}^π f(x) sin nx dx, n = 1, 2, . . ..

18. u(x, t) = Σ_{n=1}^∞ u_n(t) sin (nπx/l), where u_n(t) = ∫_0^t e^{−a²n²π²(t−τ)/l²} f_n(τ) dτ,
    f_n(τ) = (2/l) ∫_0^l F(s, τ) sin (nπs/l) ds.



20. u(x, t) = Σ_{n=1}^∞ c_n e^{−z_n²t} sin z_n x, where z_n is the nth positive solution
    of the equation x + tan x = 0, and c_n = 2 ∫_0^1 f(x) sin z_n x dx, n = 1, 2, . . ..

21. u(x, t) = Σ_{n=1}^∞ c_n e^{−(a²n²π²/l² + b)t} sin (nπx/l), c_n = (2/l) ∫_0^l f(x) sin (nπx/l) dx.

22. u(x, t) = e^{−(a²π²/(4l²) + b)t} sin (πx/(2l)).

23. u(x, t) = Σ_{n=0}^∞ c_n e^{−(a²n²π²/l² + b)t} cos (nπx/l), c_n = (2/l) ∫_0^l f(x) cos (nπx/l) dx.

24. u(x, t) = (aA/cos (l/a)) e^{−t} sin (x/a)
    + (2Aa²/l) Σ_{n=0}^∞ (ω_n + (−1)ⁿ/a)/(1 − a²ω_n²) e^{−a²ω_n²t} sin ω_n x,
    where ω_n = (2n + 1)π/(2l), ω_n ≠ 1/a.

Section 6.3.

3. u(x, y, z, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ Σ_{j=1}^∞ A_{mnj} e^{−k²λ_{mnj}t} sin (mπx/a) sin (nπy/b) sin (jπz/c), where
   A_{mnj} = (8/(abc)) ∫_0^a ∫_0^b ∫_0^c f(x, y, z) sin (mπx/a) sin (nπy/b) sin (jπz/c) dz dy dx.

4. u(x, y, t) = Σ_{n=1}^∞ a_n e^{−n²π²t/a²} sin (nπx/a)
   + Σ_{m=1}^∞ Σ_{n=1}^∞ a_{mn} e^{−π²(m²/a² + n²/b²)t} sin (mπx/a) cos (nπy/b).

5. u(x, y, t) = 3 e^{−44π²t} sin 6πx sin 2πy − 7 e^{−(11/2)π²t} sin πx sin (3πy/2).

6. u(x, y, t) = (8/π²) Σ_{m=1}^∞ Σ_{n=1}^∞ 1/((2m − 1)(2n − 1)) e^{−((2m−1)² + (2n−1)²)t}
   sin (2m − 1)x sin (2n − 1)y.

7. u(x, y, t) = e−2t sin x sin y + 3e−5t sin 2x sin y + e−13t sin 3x sin 2y.

8. u(x, y, t) = (16/π²) Σ_{m=1}^∞ Σ_{n=1}^∞ [sin mx sin ny/(m²n²)] sin (mπ/2) sin (nπ/2) e^{−(m² + n²)t}.

9. u(x, y, t) = (1/13)(1 − e^{−13t}) sin 2x sin 3y + e^{−65t} sin 4x sin 7y.

10. u(x, y, t) = e^{−(k₁x + k₂y)} Σ_{m=1}^∞ Σ_{n=1}^∞ A_{mn} e^{−(m² + n² + k₁² + k₂² + k₃)t} sin mx sin ny, where
    A_{mn} = (4/π²) ∫_0^π ∫_0^π f(x, y) e^{k₁x + k₂y} sin mx sin ny dy dx.

11. u(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ e^{−(m² + n²)t} ( ∫_0^t F_{mn}(s) e^{(m² + n²)s} ds ) sin mx sin ny, where
    F_{mn}(t) = (4/π²) ∫_0^π ∫_0^π F(x, y, t) sin mx sin ny dy dx.



12. u(r, t) = 2T₀ Σ_{n=1}^∞ 1/(z_{0n} J₁(z_{0n})) J₀(z_{0n} r/a) e^{−z_{0n}²c²t/a²}, where z_{0n} is the nth
    positive zero of the Bessel function J₀(·).

13. u(r, t) = 16 Σ_{n=1}^∞ 1/(z_{1n}³ J₂(z_{1n})) J₁(z_{1n} r) e^{−z_{1n}²t}; z_{1n} is the nth positive
    zero of the Bessel function J₁(·).

14. u(r, φ, t) = r³ sin 3φ − 2 sin 3φ Σ_{n=1}^∞ 1/(z_{3n} J₄(z_{3n})) J₃(z_{3n} r) e^{−z_{3n}²t}, where
    z_{3n} is the nth positive zero of the Bessel function J₃(·).

15. u(r, z, t) = (4/π) Σ_{m=1}^∞ Σ_{n=1}^∞ (1 − (−1)^m)/(m z_{0n} J₁(z_{0n})) J₀(z_{0n} r) e^{−(z_{0n}² + m²π²)t} sin mπz,
    where z_{0n} is the nth positive zero of the Bessel function J₀(·).



16. u(r, t) = (2/(πr)) Σ_{n=1}^∞ (−1)^{n−1}/n e^{−c²n²π²t} sin nπr.

17. u(r, t) = Σ_{n=1}^∞ A_n e^{−c²n²π²t} (1/r) sin nπr, A_n = 2 ∫_0^1 r f(r) sin nπr dr.

18. u(r, θ, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ A_{mn} (1/√(z_{nm} r)) J_{n+1/2}(z_{nm} r) e^{−z_{nm}²t} P_n(cos θ),
    where z_{nm} is the mth positive zero of the Bessel function J_n(·), and P_n(·) is the
    Legendre polynomial of order n. The coefficients A_{mn} are evaluated using the formula
    A_{mn} = [2(2n + 1) P′_n(0) √(z_{nm}³)] / [n(n + 1) J²_{n−1/2}(z_{nm})] ∫_0^1 r^{3/2} J_{n+1/2}(z_{nm} r) dr.

Section 6.4.
1. u(x, t) = u₀ Σ_{n=0}^∞ [erfc((2n + 1 + x)/(2√t)) − erfc((2n + 1 − x)/(2√t))], where
   erfc(x) = (2/√π) ∫_x^∞ e^{−u²} du is the complementary error function.

2. u(x, t) = u₁ + (u₀ − u₁) erfc(x/(2√t)).

3. u(x, t) = u₀ [1 − erfc(x/(2√t)) + e^{x+t} erfc(√t + x/(2√t))].

4. u(x, t) = (x/(2√π)) ∫_0^t f(t − τ)/τ^{3/2} e^{−x²/(4τ)} dτ.

5. u(x, t) = 60 + 40 erfc(x/(2√(t − 2))) U₂(t).

6. u(x, t) = 100 [−e^{1−x+t} erfc(√t + (1 − x)/(2√t)) + erfc((1 − x)/(2√t))].

7. u(x, t) = u₀ + u₀ e^{−π²t} sin πx.

8. u(x, t) = (u₀ x/(2√π)) ∫_0^t e^{−hτ − x²/(4τ)}/τ^{3/2} dτ.

9. u(x, t) = 10t − x + 10 ∫_0^t (1 + t − τ + e^{t−τ}) erfc(x/(2√τ)) dτ.



10. u(x, t) = (1/2) x(1 − x) − (4/π³) Σ_{n=1}^∞ 1/(2n − 1)³ e^{−(2n−1)²π²t} sin (2n − 1)πx.

11. u(x, t) = (u₀/2) e^{−kx} [erfc(x/(2c√t) + (c(1 − k)/2)√t) + erfc((1 − x)/(2√t))]
    + (u₀/2) e^{−x} [erfc(x/(2c√t) − (c(1 − k)/2)√t) + erfc((1 − x)/(2√t))].

12. u(r, t) = 2 + 3t + r²/2 − 3/10 − (2/r) Σ_{n=1}^∞ sin (z_n r)/(z_n² sin z_n) e^{−z_n²t}, where tan z_n = z_n.

13. u(x, t) = B e^{−3t} cos 2x − (A/2) x sin x.

14. (a) u(x, t) = x/(2c√π t^{3/2}) e^{−x²/(4c²t)}.

    (b) u(x, t) = (x/(2c√π)) ∫_0^t f(t − τ) e^{−x²/(4c²τ)}/τ^{3/2} dτ.

15. u(x, t) = (x/(2c√π)) ∫_{−∞}^∞ μ(t − τ) e^{−x²/(4c²τ)} dτ.

16. u(x, t) = (1/(2c√π)) ∫_0^t ∫_{−∞}^∞ e^{−(x−η)²/(4c²(t−τ))}/√(t − τ) f(η, τ) dη dτ.

17. u(x, t) = (1/2) erf((1 − x)/(2c√t)) + (1/2) erf((1 + x)/(2c√t)).

18. u(x, t) = (1/π) ∫_{−∞}^∞ cos ωx/(1 + ω²) e^{−c²ω²t} dω.

19. u(x, t) = (1/(t√(2π))) ∫_{−∞}^∞ f(τ) e^{−(x−τ)²/(2t²)} dτ.

20. u(x, t) = −(c/√π) ∫_0^t μ(τ)/√(t − τ) e^{−x²/(4c²(t−τ))} dτ.

21. u(x, t) = (1/(2c√π)) ∫_0^t (1/√(t − τ)) ∫_0^∞ [e^{−(x−ξ)²/(4c²(t−τ))} − e^{−(x+ξ)²/(4c²(t−τ))}] f(ξ, τ) dξ dτ.

22. u(x, t) = (2/π) ∫_0^∞ (sin ω/ω) e^{−ω²t} cos xω dω.

23. u(x, t) = (2/π) ∫_0^∞ (1 − e^{−ω²t})/ω sin xω dω.

24. u(x, t) = (1/(2c√π)) ∫_0^t ∫_0^∞ f(η, τ) [e^{−(x−η)²/(4c²(t−τ))} − e^{−(x+η)²/(4c²(t−τ))}]/√(t − τ) dη dτ.

25. u(x, t) = (1/(2c√π)) ∫_0^t ∫_0^∞ f(η, τ) [e^{−(x−η)²/(4c²(t−τ))} + e^{−(x+η)²/(4c²(t−τ))}]/√(t − τ) dη dτ.

Section 7.1.

3. The maximum value of u(x, y) is 1, attained at the boundary points
   (1, 0) and (−1, 0).

x
4. u(x, y) = x2 +(y+1)2 .

5. u(x, y, z) = (z − 1)/(x² + y² + (z − 1)²)^{3/2}.

6. For x = (x, y) and y = (x′, y′) in the right half plane RP

   G(x, y) = −(1/(2π)) [ln |x − y| − ln |x − y*|],

   where y* = (−x′, y′). The solution of the Poisson equation is given by

   u(x) = ∫∫_{RP} f(y) G(x, y) dy − ∫_{∂(RP)} g(y) G_n(x, y) dS(y).

7. For x = (x, y) and y = (x′, y′) in the upper half plane UP

   G(x, y) = −(1/(2π)) [ln |x − y| − ln |x − y*|],

   where y* = (x′, −y′). The solution of the Poisson equation is given by

   u(x) = ∫∫_{UP} f(y) G(x, y) dy − ∫_{∂(UP)} g(y) G_n(x, y) dS(y).
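The image formulas in answers 6 and 7 can be checked numerically: G must vanish whenever the observation point lies on the boundary, and a Green's function is symmetric in its two arguments. A small Python check (not from the book; the sample points are arbitrary), written for the upper-half-plane case of answer 7:

```python
import math

def G_upper(x, y):
    # Method-of-images Green's function for the upper half plane:
    # G(x, y) = -(1/(2 pi)) (ln|x - y| - ln|x - y*|), with y* = (y1, -y2).
    ystar = (y[0], -y[1])
    return -(math.log(math.dist(x, y)) - math.log(math.dist(x, ystar))) / (2.0 * math.pi)

# vanishes on the boundary y = 0, and is symmetric inside the domain
print(G_upper((0.7, 0.0), (0.3, 2.0)))
print(G_upper((1.0, 2.0), (3.0, 1.0)), G_upper((3.0, 1.0), (1.0, 2.0)))
```

The boundary value is exactly zero because a boundary point is equidistant from a source and its mirror image.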

9. u(x) = −(1/π) ∫_{|y|=R} ln |x − y| g(y) dS(y) + C.

10. u(x, y) = 1 − (1/π) [arctan((1 − x)/y) − arctan((x² + y² − x)/y)].

Section 7.2.

1. (a) u(x, y) = (200/π) Σ_{n=1}^∞ (1 − (−1)ⁿ)/n cosh nπy sin nπx
       + (200/π) Σ_{n=1}^∞ (1 − (−1)ⁿ)/n (2 − cosh nπ)/sinh nπ sinh nπy sin nπx.

   (b) u(x, y) = (2/π) Σ_{n=1}^∞ (1 − (−1)ⁿ)/(n sinh nπ) sinh ny sin nx
       + (2/π) Σ_{n=1}^∞ (1 − (−1)ⁿ)/(n sinh nπ) (sinh nx + sinh n(π − x)) sin ny.


   (c) u(x, y) = (2/π) Σ_{n=1}^∞ (−1)^{n−1}/(n sinh 2nπ) sin nπx sinh nπy.

   (d) u(x, y) = (400/π) Σ_{n=1}^∞ sin ((2n − 1)πx/2) sinh ((2n − 1)π(1 − y)/2) / ((2n − 1) sinh ((2n − 1)π/2))
       + (200/π) Σ_{n=1}^∞ 1/(n sinh 2nπ) sinh nπx sin nπy.

   (e) u(x, y) = sin 7πx sinh 7π(1 − y)/sinh 7π + sin πx sinh πy/sinh π
       + sin 3πy sinh 3π(1 − x)/sinh 3π + sinh 6πx sin 6πy/sinh 6π.

   (f) u(x, y) = (400/π) Σ_{n=1}^∞ sin (2n − 1)πx sinh (2n − 1)πy / ((2n − 1) sinh (2n − 1)π).

   (g) u(x, y) = Σ_{n=1}^∞ A_n sin (nπx/a) sinh (nπy/a)/sinh (nπb/a), A_n = (2/a) ∫_0^a f(x) sin (nπx/a) dx.

2. u(x, y) = Σ_{n=1}^∞ A_n sin (nπx/a) sinh (nπ(b − y)/a)/sinh (nπb/a), A_n = (2/a) ∫_0^a f(x) sin (nπx/a) dx.


3. u(x, y) = (1/2) x + (2/π²) Σ_{n=1}^∞ (1 − (−1)ⁿ)/(n² sinh nπ) sinh nπx cos nπy.

4. u(x, y) = (2/π) Σ_{n=1}^∞ (1 − (−1)ⁿ)/(n(n cosh nπ + sinh nπ)) (n cosh nx + sinh nx) sin ny.

5. u(x, y) = A₀ y + Σ_{n=1}^∞ A_n sinh (nπy/a) cos (nπx/a); A₀ = (1/(ab)) ∫_0^a f(x) dx,
   A_n = (2/(a sinh (nπb/a))) ∫_0^a f(x) cos (nπx/a) dx.

6. u(x, y) = 1.
7. u(x, y) = 1 − (4/π) Σ_{n=1}^∞ sin ((2n − 1)πx/a) cosh ((2n − 1)πy/a) / ((2n − 1) cosh ((2n − 1)πb/a)).

8. u(x, y) = (4/π) Σ_{n=1}^∞ [cosh (2n − 1)y/(2n − 1)
   + (4 − cosh (2n − 1))/((2n − 1) sinh (2n − 1)) sinh (2n − 1)y] sin (2n − 1)x.

9. u(x, y) = Σ_{n=1}^∞ [A_n cosh (n − 1/2)πx + B_n sinh (n − 1/2)πx] sin (n − 1/2)πy,
   where A_n = 4(−1)^{n−1}/(2n − 1)², B_n = −A_n cosh (n − 1/2)π / sinh (n − 1/2)π.
10. ∫_0^1 f(y) dy = 0.

11. u(x, y) = Σ_{n=1}^∞ ( (2/π) ∫_0^π f(t) sin nt dt ) e^{−ny} sin nx.

12. u(x, y) = (sin x + cos x) e^{−y}.

13. u(x, y) = Σ_{n=1}^∞ A_n e^{−(2n−1)πx/(2b)} sin ((2n − 1)πy/(2b)),
    A_n = (2/b) ∫_0^b f(y) sin ((2n − 1)πy/(2b)) dy.

14. u(x, y) = 2 Σ_{n=1}^∞ A_n ( ∫_0^b f(ξ) cos λ_n ξ dξ ) e^{−λ_n x} cos λ_n y, A_n = (λ_n² + h²)/(b(λ_n² + h²) + h),
    where λ_n are the positive solutions of the equation λ tan bλ = h.

15. u(x, y) = −(16/π⁴) Σ_{m=1}^∞ Σ_{n=1}^∞ sin (2m − 1)πx sin (2n − 1)πy / ((2m − 1)(2n − 1)((2m − 1)² + (2n − 1)²)).

16. u(x, y) = (8/π⁴) Σ_{m=1}^∞ Σ_{n=1}^∞ (−1)^m sin mπx sin (2n − 1)πy / (m(2n − 1)(m² + (2n − 1)²)).

17. u(x, y) = −(4/π³) sin πx Σ_{n=1}^∞ sin (2n − 1)πy / ((2n − 1)(1 + (2n − 1)²))
    + (2/π) Σ_{n=1}^∞ (−1)^{n−1} sin nπx sinh nπy / (n sinh nπ).

18. u(x, y) = (4/π⁴) Σ_{m=1}^∞ Σ_{n=1}^∞ (−1)^{m+n−1} sin mπx sin nπy / (mn(m² + n²))
    + (2/π) Σ_{n=1}^∞ (−1)^{n−1} sin nπx sinh nπy / (n sinh nπ).

19. u(x, y) = (a sinh y − (1/3) cosh y + (1/3) e^{2y}) sin x + Σ_{n=2}^∞ A_n sinh ny sin nx / sinh nπ,
    where a = (A₁ + (1/3) sinh π − (1/3) e^{2π}) / sinh π, A_n = ∫_0^π f(x) sin nx dx.

Section 7.3.


1. u(r, φ) = Σ_{n=1}^∞ rⁿ/(n a^{n−1}) (a_n cos nφ + b_n sin nφ) + C, where C is any constant;
   a_n = (1/π) ∫_0^{2π} f(φ) cos nφ dφ, b_n = (1/π) ∫_0^{2π} f(φ) sin nφ dφ.

2. u(r, φ) = −Σ_{n=1}^∞ a^{n+1}/(n rⁿ) (a_n cos nφ + b_n sin nφ) + C, where C is any constant;
   a_n = (1/π) ∫_0^{2π} f(φ) cos nφ dφ, b_n = (1/π) ∫_0^{2π} f(φ) sin nφ dφ.

3. u(r, φ) = Ar sin φ.

4. u(r, φ) = B + 3Ar sin φ − 4Ar3 sin 3φ.




5. u(r, φ) = A r sin φ − (8A/π) Σ_{n=1}^∞ r^{2n}/(4n² − 9) sin 2nφ.

6. u(r, φ) = 1/2 + (2/π) Σ_{n=1}^∞ (−1)^{n−1}/(2n − 1) r^{2n−1} sin (2n − 1)φ.

7. u(r, φ) = Σ_{n=1}^∞ (1/n) rⁿ sin nφ.

8. u(r, φ) = B₀ ln r + A₀ + Σ_{n=1}^∞ [(A_n rⁿ + B_n/rⁿ) cos nφ + (C_n rⁿ + D_n/rⁿ) sin nφ], where

   A_n = (bⁿ g_n^{(c)} − aⁿ f_n^{(c)})/(b^{2n} − a^{2n}),  B_n = aⁿbⁿ (bⁿ f_n^{(c)} − aⁿ g_n^{(c)})/(b^{2n} − a^{2n}),
   C_n = (bⁿ g_n^{(s)} − aⁿ f_n^{(s)})/(b^{2n} − a^{2n}),  D_n = aⁿbⁿ (bⁿ f_n^{(s)} − aⁿ g_n^{(s)})/(b^{2n} − a^{2n}),
   B₀ = (f₀ − g₀)/(ln a − ln b),  A₀ = (g₀ ln a − f₀ ln b)/(ln a − ln b);

   f_n^{(c)}, f_n^{(s)}, g_n^{(c)}, g_n^{(s)} are the Fourier coefficients of f(φ) and g(φ),
   f_n^{(c)} = (1/π) ∫_0^{2π} f(φ) cos nφ dφ, f_n^{(s)} = (1/π) ∫_0^{2π} f(φ) sin nφ dφ,
   g_n^{(c)} = (1/π) ∫_0^{2π} g(φ) cos nφ dφ, g_n^{(s)} = (1/π) ∫_0^{2π} g(φ) sin nφ dφ.

   If b = 1, f(φ) = 0 and g(φ) = 1 + 2 sin φ, then u(r, φ) = 1 + 2(a² + r²)/((1 + a²)r) sin φ.



9. u(r, φ) = 1/2 + (4/π) Σ_{n=1}^∞ (−1)^{n−1}/(2n − 1) (r/a)^{2(2n−1)} cos 2(2n − 1)φ.

10. u(r, φ) = c/2 + (4c/π) Σ_{n=1}^∞ (−1)^{n−1}/(2n − 1) (r/2)^{2n−1} cos (2n − 1)φ.

11. u(r, φ) = −2 Σ_{n=1}^∞ 1/(λ_{0n}³ J₁(λ_{0n})) J₀(λ_{0n} r), where λ_{0n} is the nth positive zero of J₀(·).

12. u(r, φ) = −2 Σ_{n=1}^∞ [J₀(λ_{0n} r)/(λ_{0n}³ J₁(λ_{0n})) + J₃(λ_{3n} r)/(λ_{3n}³ J₄(λ_{3n})) cos 3φ], where λ_{0n} is
    the nth positive zero of J₀(·) and λ_{3n} is the nth positive zero of J₃(·).

13. u(r, φ) = r² sin 2φ − 2 Σ_{n=1}^∞ 1/(λ_{0n}³ J₁(λ_{0n})) J₀(λ_{0n} r), where λ_{0n} is the nth
    positive zero of J₀(·).

14. u(r, φ, z) = u(r, z) = 2 Σ_{n=1}^∞ 1/(λ_{0n} J₁(λ_{0n})) J₀(λ_{0n} r) sinh (λ_{0n} z)/sinh (λ_{0n}),
    where λ_{0n} is the nth positive zero of J₀(·).

15. u(r, z) = Σ_{n=1}^∞ J₀(λ_{0n} r)/sinh (λ_{0n}) [A_n sinh (λ_{0n}(1 − z)) + B_n sinh (λ_{0n} z)], where
    λ_{0n} is the nth positive zero of J₀(·). The coefficients are determined by the formula
    A_n = 2/J₁²(λ_{0n}) ∫_0^1 G(r) J₀(λ_{0n} r) r dr, B_n = 2/J₁²(λ_{0n}) ∫_0^1 H(r) J₀(λ_{0n} r) r dr.



16. u(r, θ) = 2 + Σ_{n=1}^∞ (4n − 1) (−1)^{n−1}(2n − 2)!/(n 2^{2n−2} ((n − 1)!)²) r^{2n−1} P_{2n−1}(cos θ).

17. u(r, θ) = 1 − r cos θ.

18. u(r, θ) = 7/3 + (1/3) r² (3 cos²θ − 1).

19. u(r, θ) = 1 + 2r cos θ + Σ_{n=1}^∞ 4(−1)^{n−1} (4n + 1)/(n + 1) · n(2n − 2)!/(2ⁿ n!)² r^{2n} P_{2n}(cos θ).

20. u(r, θ, φ) = Σ_{m=0}^∞ Σ_{n=0}^m r^{−m−1} [A_{mn} cos nφ + B_{mn} sin nφ] P_m^{(n)}(cos θ), where

    A_{mn} = ((2m + 1)(m − n)!)/(2π(m + n)!) ∫_0^{2π} ∫_0^π f(θ, φ) P_m^{(n)}(cos θ) cos nφ sin θ dθ dφ,
    B_{mn} = ((2m + 1)(m − n)!)/(2π(m + n)!) ∫_0^{2π} ∫_0^π f(θ, φ) P_m^{(n)}(cos θ) sin nφ sin θ dθ dφ.

21. u(r, θ, φ) = Σ_{m=1}^∞ Σ_{n=0}^m (r^m/m) [A_{mn} cos nφ + B_{mn} sin nφ] P_m^{(n)}(cos θ) + C,
    where C is any constant and A_{mn} and B_{mn} are as in Exercise 20.

Section 7.4.

1. u(x, y) = (2/π) ∫_0^∞ sinh ωx/((1 + ω²) sinh ωπ) sin ωy dω.
2. u(x, y) = (2/π) ∫_0^∞ F(ω) sinh ω(2 − y)/((1 + ω²) sinh 2ω) sin ωx dω.

3. u(x, y) = (2/π) ∫_0^∞ ω/(1 + ω²) [e^{−ωx} sin ωy + e^{−ωy} sin ωx] dω.

4. u(x, y) = (y/π) ∫_{−∞}^∞ f(ω)/(y² + (x − ω)²) dω.

5. u(x, y) = (1/π) [arctan((1 + x)/y) + arctan((1 − x)/y)].

6. u(x, y) = (1/2) (2 + y)/(x² + (2 + y)²).

7. u(x, y) = (y/π) ∫_{−∞}^∞ cos ω/((x − ω)² + y²) dω.

8. u(x, y) = (2/π) ∫_0^∞ sinh ωx/((1 + ω²) sinh ω) cos ωy dω.

9. u(x, y) = (2/π) ∫_0^∞ (1 − cos ω)/ω e^{−ωx} sin ωy dω.

12. u(r, z) = a ∫_0^∞ J₀(ωr) J₁(ω) e^{−ωz} dω.

13. u(r, z) = ∫_0^∞ J₀(ωr) e^{−ω(z+a)} dω = 1/√((z + a)² + r²).
0

Section 8.1.
   
1. λ₁ = 5, x₁ = (−4, 0, −3, 5)ᵀ; λ₂ = 9, x₂ = (0, 0, −3, 0)ᵀ;
   λ₃ = −√10, x₃ = (−(1 + √10)/3, 0, 1, 0)ᵀ; λ₄ = √10, x₄ = (−(1 − √10)/3, 0, 1, 0)ᵀ.

2. ∥A∥₁ = 5, ∥A∥∞ = 7, ∥A∥_E = 6. The characteristic equation of

   AᵀA = [ 2  2  6
           2 13  7
           6  7 21 ]

   is λ³ − 36λ² + 252λ − 64 = 0. Numerically we find that λ_max ≈ 26.626.

3. We find that the eigenvalues of A and A⁻¹ are

   λ₁(A) = 1.99999, λ₂(A) = 0.00001,
   λ₁(A⁻¹) = 0.500003, λ₂(A⁻¹) = 100000.

   Therefore ∥A∥₂ = 1.99999, ∥A⁻¹∥₂ = 100000 and hence

   κ(A) = 199999 ≈ 200000.

   The exact solution of the system

   [ 1        0.99999 ] [ x₁ ]   [ 2.99999 ]
   [ 0.99999  1       ] [ x₂ ] = [ 2.99998 ]

   is x₁ = 2 and x₂ = 1, but the exact solution of the system

   [ 0.99999  0.99999 ] [ y₁ ]   [ 2.99999 ]
   [ 0.99999  1       ] [ y₂ ] = [ 2.99998 ]

   is y₁ = 4.00002 and y₂ = −1.

   We observe that the solution of the second system differs considerably
   from the solution of the original system despite a very small change
   in the coefficients of the original system. This is due to the large
   condition number of the matrix A.

 
4. Let x = (x₁, x₂, x₃)ᵀ be any nonzero vector. Then

   xᵀ · A · x = (x₁ x₂ x₃) · [ 2 −1  0
                              −1  2 −1
                               0 −1  2 ] · (x₁, x₂, x₃)ᵀ
              = x₁² + (x₁ − x₂)² + (x₂ − x₃)² + x₃² ≥ 0.

   The above expression is zero only if x₁ = x₂ = x₃ = 0 and so the
   matrix is indeed positive definite.
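The completing-the-squares identity can also be verified numerically. The Python sketch below (a check of convenience; the book works in Mathematica) compares xᵀAx with the sum of squares at random vectors:

```python
import random

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]

def quad_form(x):
    # x^T A x for the tridiagonal matrix above
    return sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))

def completed_squares(x):
    # the algebraic identity used in answer 4
    x1, x2, x3 = x
    return x1**2 + (x1 - x2)**2 + (x2 - x3)**2 + x3**2

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(3)]
    assert abs(quad_form(x) - completed_squares(x)) < 1e-9
    assert quad_form(x) > 0.0  # positive for nonzero x
print("identity verified")
```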

5. First notice that the matrix of the system is strictly diagonally dom-
inant (check this). Therefore the Jacobi and Gauss–Seidel iterative
methods converge for every initial point x0 . Now if we take ϵ = 10−6
and
In[1] := A = {{4, 1, −1, 1}, {1, 4, −1, −1}, {−1, −1, 5, 1}
, {1, −1, 1, 3}};
In[2] := b = {5, −2, 6, −4};
in the modules “ImprovedJacobi” and “ImprovedGaussSeidel”
In[3] := ImprovedJacobi[A, b, {0, 0, 0, 0}, 0.000001]
In[4] := ImprovedGaussSeidel[A, b, {0, 0, 0, 0}, 0.000001]
we obtain
(a) Out[4] := k = 23 x = {−0.75341, 0.041092, −0.28081, 0.69177}
(b) Out[5] := k = 12 x = {−0.75341, 0.04109, −0.28081, 0.69177}.
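A plain Python version of the Jacobi iteration is sketched below. It stands in for the book's "ImprovedJacobi" Mathematica module (whose code is not reproduced here); the stopping rule is the same max-norm difference test, with a smaller tolerance, and A and b are copied from the exercise data as printed.

```python
def jacobi(A, b, x0, eps=1e-8, max_iter=500):
    # Jacobi iteration: x_i^{(k+1)} = (b_i - sum_{j != i} a_ij x_j^{(k)}) / a_ii,
    # stopped when successive iterates differ by less than eps in the max norm.
    n = len(b)
    x = list(x0)
    for k in range(1, max_iter + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
            return x_new, k
        x = x_new
    return x, max_iter

A = [[4, 1, -1, 1], [1, 4, -1, -1], [-1, -1, 5, 1], [1, -1, 1, 3]]
b = [5, -2, 6, -4]
x, k = jacobi(A, b, [0.0, 0.0, 0.0, 0.0])
print(k, x)
```

Because of the diagonal dominance noted above, the iterates converge regardless of the starting point; the residual of the returned vector is checked rather than any particular printed digits.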

6. It is obvious that the matrix


 
10 −1 −1 −1
 −1 10 −1 −1 
A= 
−1 −1 10 −1
−1 −1 −1 10
is symmetric. Next we show that this matrix is strictly positive definite.
Indeed, if x = (x₁, x₂, x₃, x₄)ᵀ is any vector, then

xᵀ · A · x = (x₁ − x₂)² + (x₁ − x₃)² + (x₁ − x₄)² + (x₂ − x₃)²
           + (x₂ − x₄)² + (x₃ − x₄)² + 7(x₁² + x₂² + x₃² + x₄²) > 0.

We work as in Example 8.1.5. If we take


In[1] := A = {{10, −1, −1, −1}, {−1, 10, −1, −1}, {−1, −1, 10, −1}
, {−1, −1, −1, 10}};
In[2] := b = T ranspose[{{34, 23, 12, 1}}];
In[3] := x[0] = {0, 0, 0, 0};
then we obtain the following table.

k x1 x2 x3 x4 dk
0 0 0 0 0
1 4.0854 2.7636 1.4419 0.1202 5.3606
2 4.0000 3.0000 2.0000 1.0000 0.0112

The exact solution is obtained by


In[4] := XExact = Inverse[A].b;
Out[4] := {4.0, 3.0, 2.0, 1.0}.

7. The matrix A is not strictly diagonally dominant since, for example, for
   its first row we have |2| + |−2| = 4 = |4|. To show that the Jacobi
   and Gauss–Seidel iterative methods are convergent we use Theorem
   8.1.3.
   For the Jacobi iterative method, the matrix B = B_J in (8.1.17) is given by

   B_J = D⁻¹ (L + U)
       = [ 4  0  0 ]⁻¹   ( [ 0  0  0 ]   [ 0  2 −2 ] )   [  0    1/2  −1/2 ]
         [ 0 −3  0 ]   · ( [ 1  0  0 ] + [ 0  0 −1 ] ) = [ −1/3   0    1/3 ]
         [ 0  0  4 ]     ( [ 3 −1  0 ]   [ 0  0  0 ] )   [  3/4 −1/4    0  ].
ANSWERS TO EXERCISES 617

   Using Mathematica to simplify the calculations we find that the
   characteristic polynomial p_{B_J}(λ) is given by

   p_{B_J}(λ) = −λ³ − (5/8)λ + 1/12.

   Again using Mathematica we find that the eigenvalues of B_J (the
   roots of p_{B_J}(λ)) are

   λ₁ = 0.129832, λ₂ = −0.0649159 + 0.798525 i, λ₃ = −0.0649159 − 0.798525 i.

   Their moduli are

   |λ₁| = 0.129832, |λ₂| = 0.801159, |λ₃| = 0.801159.

   Therefore the spectral radius ρ(B_J) = 0.801159 < 1 and so by
   Theorem 8.1.3 the Jacobi iterative method converges.
   For the Gauss–Seidel iterative method, the matrix B = B_GS in
   (8.1.18) is given by

   B_GS = (L + D)⁻¹ · U
        = ( [ 0  0  0 ]   [ 4  0  0 ] )⁻¹   [ 0  2 −2 ]   [ 0   1/2  −1/2 ]
          ( [ 1  0  0 ] + [ 0 −3  0 ] )   · [ 0  0 −1 ] = [ 0   1/6   1/6 ]
          ( [ 3 −1  0 ]   [ 0  0  4 ] )     [ 0  0  0 ]   [ 0  −1/3  5/12 ].

   Using Mathematica (to simplify the calculations) we find that the
   characteristic polynomial p_{B_GS}(λ) is given by

   p_{B_GS}(λ) = −λ³ + (7/12)λ² − (1/8)λ.

   We find that the eigenvalues of B_GS (the roots of p_{B_GS}(λ)) are

   λ₁ = 0, λ₂ = 0.291667 + 0.199826 i, λ₃ = 0.291667 − 0.199826 i.

   Their moduli are

   |λ₁| = 0, |λ₂| = 0.353553, |λ₃| = 0.353553.

   Therefore the spectral radius ρ(B_GS) = 0.353553 < 1 and so by
   Theorem 8.1.3 the Gauss–Seidel iterative method converges.
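The two spectral radii can be confirmed without Mathematica. The Python sketch below (the matrices are those obtained above) estimates ρ(B) from the average logarithmic growth rate of ∥Bᵏv∥∞, which, unlike plain power iteration, also works when the dominant eigenvalues form a complex conjugate pair:

```python
import math

B_J  = [[0.0, 1/2, -1/2], [-1/3, 0.0, 1/3], [3/4, -1/4, 0.0]]
B_GS = [[0.0, 1/2, -1/2], [0.0, 1/6, 1/6], [0.0, -1/3, 5/12]]

def mat_vec(B, v):
    return [sum(B[i][j] * v[j] for j in range(len(v))) for i in range(len(B))]

def spectral_radius(B, iters=4000):
    # rho(B) = lim (1/k) log ||B^k v||, for a generic starting vector v
    v = [1.0, 0.3, -0.7]
    log_growth = 0.0
    for _ in range(iters):
        v = mat_vec(B, v)
        norm = max(abs(c) for c in v)
        log_growth += math.log(norm)
        v = [c / norm for c in v]
    return math.exp(log_growth / iters)

print(spectral_radius(B_J), spectral_radius(B_GS))
```

Both estimates come out below 1, matching 0.801159 and 0.353553 respectively.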

Section 8.2.


1. (a) We know from calculus that f′(x) = cos x and so the exact value
       of f′(π/4) is √2/2 ≈ 0.707.

       With the forward finite difference approximation for h = 0.1 we have

       f′(π/4) ≈ [f(π/4 + 0.1) − f(π/4)]/0.1 = [sin (π/4 + 0.1) − sin (π/4)]/0.1 ≈ 0.671.

       With the backward finite difference approximation for h = 0.1 we have

       f′(π/4) ≈ [f(π/4) − f(π/4 − 0.1)]/0.1 = [sin (π/4) − sin (π/4 − 0.1)]/0.1 ≈ 0.741.

       With the central finite difference approximation for h = 0.1 we have

       f′(π/4) ≈ [sin (π/4 + 0.1) − sin (π/4 − 0.1)]/(2 · 0.1) ≈ 0.706.

   (b) f′(x) = sec²x so the exact value of f′(π/4) is 2.

       With the forward approximation f′(π/4) ≈ 2.23049.

       With the backward approximation f′(π/4) ≈ 1.82371.

       With the central approximation f′(π/4) ≈ 2.0271.
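In Python the three difference quotients of part (a) (with f(x) = sin x and h = 0.1) are one-liners:

```python
import math

f = math.sin
x0, h = math.pi / 4, 0.1
exact = math.cos(x0)

forward  = (f(x0 + h) - f(x0)) / h
backward = (f(x0) - f(x0 - h)) / h
central  = (f(x0 + h) - f(x0 - h)) / (2 * h)

print(exact, forward, backward, central)
```

As expected for a second-order formula, the central quotient is much closer to cos(π/4) than either one-sided quotient.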

2. For the mesh points x = 0.5 and x = 0.7 we use the forward and
   backward finite difference approximations, respectively:

   f′(0.5) ≈ (0.56464 − 0.47943)/0.1 = 0.8521,

   f′(0.7) ≈ (0.64422 − 0.56464)/0.1 = 0.7958.

   For the mesh point x = 0.6 we can use either finite difference approximation.
   The central approximation gives

   f′(0.6) ≈ (0.64422 − 0.47943)/(2 · 0.1) = 0.82395.

   For the second derivative with the central approximation we obtain

   f′′(0.6) ≈ (0.64422 − 2 · 0.56464 + 0.47943)/0.1² = −0.563.

3. Apply Taylor’s formula to f (x+h), f (x−h), f (x+2h) and f (x−2h).

4. We find from calculus that f′(x) = 2x e^{x²} and f′′(x) = 2e^{x²} + 4x² e^{x²}.
   Therefore the exact value of f′′(1) is 6e ≈ 16.30969097.

   With the central finite difference approximation taking h = 0.001 we have

   f′′(1) ≈ (e^{(1+0.001)²} − 2e^{1²} + e^{(1−0.001)²})/0.001² ≈ 16.30970817.

5. Partition [0, 1] into n equal subintervals:

   x_i = ih, i = 0, 1, 2, . . . , n; h = 1/n.

   If we discretize the given boundary value problem we obtain the linear system

   y_{i+1} + (h²x_i − 2)y_i + y_{i−1} = 0, i = 1, 2, . . . , n − 1,

   where y₀ = y(0) = 0 and y_n = y(1) = 1.

   Taking n = 6 and using Mathematica (as in Example 8.2.3) you should
   obtain the following results.

   x      0.1      0.2      0.3      0.4      0.5      0.6
   y(x)   0.013382 0.046762 0.100124 0.173396 0.260390 0.378719
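The discretized boundary value problem is a tridiagonal linear system, so it can be solved directly with the Thomas algorithm. A Python sketch follows (the book uses Mathematica; the underlying equation y′′ + x y = 0 with y(0) = 0, y(1) = 1 is inferred from the difference scheme above):

```python
def solve_bvp(n):
    # Discretization y_{i+1} + (h^2 x_i - 2) y_i + y_{i-1} = 0, y_0 = 0, y_n = 1,
    # solved for the interior values y_1..y_{n-1} by the Thomas algorithm.
    h = 1.0 / n
    sub = [1.0] * (n - 1)                                  # sub-diagonal
    diag = [h * h * (i * h) - 2.0 for i in range(1, n)]    # main diagonal
    sup = [1.0] * (n - 1)                                  # super-diagonal
    rhs = [0.0] * (n - 1)
    rhs[-1] -= 1.0 * sup[-1]        # move the known value y_n = 1 to the RHS
    for i in range(1, n - 1):       # forward elimination
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * (n - 1)             # back substitution
    y[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        y[i] = (rhs[i] - sup[i] * y[i + 1]) / diag[i]
    return [0.0] + y + [1.0]

y = solve_bvp(6)
print(y)
```

The returned values satisfy the difference equations to machine precision; the tabulated digits above depend on the book's own mesh convention.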

Section 8.3.

1. The 81 unknowns u_{i,j}, i, j = 1, 2, . . . , 9 are given in Table E. 8.3.1.

Table E. 8.3.1
j\i 1 2 3 4 5 6 7
1 125.8 141.2 145.4 144.0 137.5 122.6 88.61
2 102.10 113.50 116.50 113.10 103.30 84.48 51.79
3 89.17 94.05 93.92 88.76 77.97 60.24 34.05
4 80.53 79.65 76.40 70.00 59.63 44.47 24.17
5 73.30 67.62 62.03 55.22 46.08 33.82 18.18
6 65.05 55.52 48.87 42.76 36.65 26.55 14.73
7 51.39 40.52 35.17 31.29 27.23 21.99 14.18

2. The following 3 × 3 system in the unknowns u_{i,j}, i, j = 1, 2, 3 should be solved:

   4u_{i,j} − u_{i+1,j} − u_{i−1,j} − u_{i,j+1} − u_{i,j−1} = −1/64, i, j = 1, 2, 3.

   From the boundary conditions use u_{i,0} = u_{0,j} = 0, i, j = 1, 2, 3. The
   results are given in Table E. 8.3.2.

Table E. 8.3.2.
j\i 1 2 3
1 −0.043 −0.055 −0.043
2 −0.055 −0.070 −0.055
3 −0.043 −0.055 −0.043

3. The 81 unknowns u_{i,j}, i, j = 1, 2, . . . , 9 are given in Table E. 8.3.3.

Table E. 8.3.3.
j\i 1 2 3 4 5 6 7
1 126.5 142.3 146.8 145.5 138.8 123.6 89.1
2 103.5 116.0 119.6 116.3 106.0 86.47 52.81
3 91.66 98.41 99.21 94.05 82.49 63.47 35.71
4 84.72 86.79 84.83 78.21 66.46 49.21 26.55
5 80.44 79.21 75.12 67.49 55.92 40.37 21.29
6 77.84 74.47 68.97 60.69 49.36 35.04 18.25
7 76.42 71.88 65.58 56.96 45.70 32.20 16.65

( )
4. Let u 3 + 0.25i, 4 + 0.25j ≈ ui,j , i = 1, 2, 3, j = 1, 2, 3, 4. The results
are given in Table E. 8.3.4.

Table E. 8.3.4.
j\i 1 2 3
1 −0.017779 −0.044663 −0.065229
2 −0.0277746 −0.053768 −0.068876
3 −0.032916 −0.056641 −0.072783
4 −0.034524

5. The matrix equation is Au = b, where

 
4 −1 0 −1 0 0 0 0 0
 −1 4 −1 0 −1 0 0 0 0
 
 0 −1 4 0 0 −1 0 0 0
 
 −1 0 0 4 −1 0 −1 0 0
 
A =  0 −1 0 −1 4 −1 0 −1 0,
 
 0 0 −1 0 −1 4 0 0 −1 
 
 0 0 0 −1 0 0 4 −1 0
 
0 0 0 0 −1 0 −1 4 −1
0 0 0 0 0 −1 0 −1 4

and

u = (u₁,₁ u₁,₂ u₁,₃ u₂,₁ u₂,₂ u₂,₃ u₃,₁ u₃,₂ u₃,₃)ᵀ

is the unknown vector, and

b = (2 cos (π/4)  0  2 cos (3π/4)  0  0  0  2 cos (5π/4)  0  2 cos (7π/4))ᵀ.

Solving the system we obtain the results given in Table E. 8.3.5.

Table E. 8.3.5.
j\i 1 2 3
1 0.35355339 0 −0.35355339
2 0 0 0
3 −0.35355339 0 0.35355339
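The 9 × 9 system can be solved with any linear solver. The Python sketch below (naive Gaussian elimination with partial pivoting; the book would use Mathematica's LinearSolve) reproduces u_{1,1} = cos(π/4)/2 ≈ 0.35355339 from Table E. 8.3.5:

```python
import math

A9 = [[4,-1,0,-1,0,0,0,0,0],
      [-1,4,-1,0,-1,0,0,0,0],
      [0,-1,4,0,0,-1,0,0,0],
      [-1,0,0,4,-1,0,-1,0,0],
      [0,-1,0,-1,4,-1,0,-1,0],
      [0,0,-1,0,-1,4,0,0,-1],
      [0,0,0,-1,0,0,4,-1,0],
      [0,0,0,0,-1,0,-1,4,-1],
      [0,0,0,0,0,-1,0,-1,4]]
b9 = [2*math.cos(math.pi/4), 0, 2*math.cos(3*math.pi/4), 0, 0, 0,
      2*math.cos(5*math.pi/4), 0, 2*math.cos(7*math.pi/4)]

def gauss_solve(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

u = gauss_solve(A9, b9)
print(u[0], u[1], u[2])
```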

6. The solution is given in Table E. 8.3.6.


Table E. 8.3.6.
j\i 1 2 3 4 5
1 0.79142 0.73105 0.71259 0.73105 0.79142
2 0.52022 0.42019 0.38823 0.42019 0.52022
7. Let u(0.25i, 0.25j) ≈ u_{i,j}, i = 1, 2, 3, j = 1, 2, and u(0.25i, 0.75) ≈ u_{i,3},
   i = 2, 3. The results are given in Table E. 8.3.7.

   Table E. 8.3.7.
   j\i   1        2        3
   1     165/268  218/268  176/268
   2     174/268  263/268  218/268
   3              174/268  165/268

Section 8.4.

1. The results are given in Table E.8.4.1.

Table E.8.4.1
tj h = 1/8 h = 1/16 u(1/4, tj )
0.01 0.68 0.62 0.67
0.02 0.47 0.42 0.45
0.03 0.32 0.28 0.31
0.04 0.22 0.19 0.21
0.05 0.15 0.13 0.14
0.06 0.10 0.09 0.09
0.07 0.07 0.06 0.06
0.08 0.05 0.04 0.04
0.09 0.03 0.03 0.03
0.10 0.02 0.02 0.02
0.11 0.01 0.01 0.01
0.12 0.01 0.01 0.01
0.13 0.01 0.01 0.01
0.14 0.00 0.00 0.00
0.15 0.00 0.00 0.00

2. The results are given in Table E.8.4.2.


Table E.8.4.2.
tj h = 1/8 h = 1/16 u(1/4, tj )
0.01 0.68 0.63 0.67
0.02 0.47 0.42 0.45
0.03 0.32 0.29 0.31
0.04 0.22 0.19 0.21
0.05 0.15 0.13 0.14
0.06 0.10 0.09 0.09
0.07 0.07 0.06 0.06
0.08 0.05 0.04 0.04
0.09 0.03 0.03 0.03
0.10 0.02 0.02 0.02
0.11 0.02 0.01 0.01
0.12 0.01 0.01 0.01
0.13 0.01 0.01 0.01
0.14 0.01 0.00 0.00
0.15 0.00 0.00 0.00

3. The results are given in Table E.8.4.3.


Table E.8.4.3.
tj \xi 4.0 8.0 12.0
40.0 1.5 2.0 4.0
80.0 1.25 2.375 5.0
120.0 1.2188 2.75 5.5939
160.0 1.2970 3.0780 5.9843

4. Using the central finite difference approximations for u_x and u_{xx} the
   explicit finite difference approximation of the given initial boundary
   value problem is

   u_{i,j+1} = (1 − 2∆t/h²) u_{i,j} + (∆t/(2h) + ∆t/h²) u_{i+1,j} + (−∆t/(2h) + ∆t/h²) u_{i−1,j}.

   With the given h = 0.1 and ∆t = 0.0025 the results are presented
   in Table E.8.4.4.

   Table E.8.4.4.
   t\x     x1     x2     x3     x4     x5     x6     x7
   0.0025  0.1025 0.2025 0.3025 0.5338 0.7000 0.5162 0.2975
   0.0050  0.1044 0.2050 0.3395 0.5225 0.6123 0.5025 0.3232
   0.0075  0.1060 0.2163 0.3556 0.5026 0.5621 0.4815 0.3321
   0.0100  0.1098 0.2267 0.3611 0.4832 0.5268 0.4614 0.3328
   0.0125  0.1144 0.2342 0.3612 0.4657 0.4993 0.4432 0.3293
   0.0150  0.1187 0.2391 0.3585 0.4497 0.4766 0.4266 0.3239
   0.0175  0.1221 0.2419 0.3541 0.4351 0.4571 0.4114 0.3172
   0.0200  0.1245 0.2429 0.3487 0.4216 0.4399 0.3976 0.3103

5. Since λ = 1/2, the Crank–Nicolson finite difference scheme becomes

   u_{i+1,j+1} − 6u_{i,j+1} + u_{i−1,j+1} = −u_{i+1,j} − 2u_{i,j} − u_{i−1,j},

   for i = 1, 2, 3; j = 0, 1, 2, . . ..

   The boundary conditions imply that u_{0,j} = u_{4,j} = 0 for every j.
   The initial condition implies that u_{i,0} = i/4. Solving the system we
   obtain the results presented in Table E.8.4.5.

   Table E.8.4.5.
   t\x   0.00  0.25  0.50  0.75   1.00
   1/32  0     0.25  0.50  0.75   0
   2/32  0     0.25  0.50  0.25   0
   3/32  0     0.25  0.25  0.25   0
   4/32  0     0.25  0.25  0.125  0
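A generic Crank–Nicolson step for u_t = u_xx with zero boundary values can be sketched as follows (Python rather than the book's Mathematica; each time step requires one tridiagonal solve, done here with the Thomas algorithm):

```python
def crank_nicolson_step(u, lam):
    # (1+lam) w_i - (lam/2)(w_{i-1}+w_{i+1}) = (1-lam) u_i + (lam/2)(u_{i-1}+u_{i+1}),
    # zero boundary values; u holds interior nodes only.
    n = len(u)
    rhs = [(1 - lam) * u[i]
           + (lam / 2) * ((u[i-1] if i > 0 else 0.0) + (u[i+1] if i < n-1 else 0.0))
           for i in range(n)]
    off, diag = -lam / 2, [1 + lam] * n
    d = rhs[:]
    for i in range(1, n):            # Thomas forward sweep
        m = off / diag[i-1]
        diag[i] -= m * off
        d[i] -= m * d[i-1]
    w = [0.0] * n                    # back substitution
    w[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        w[i] = (d[i] - off * w[i+1]) / diag[i]
    return w

# initial data u(x,0) = x at the interior nodes x = 0.25, 0.5, 0.75 (h = 0.25, lam = 1/2)
u = [0.25, 0.50, 0.75]
for _ in range(4):
    u = crank_nicolson_step(u, 0.5)
print(u)
```

Unlike the explicit scheme, each step is unconditionally stable; with λ = 1/2 the iterates stay positive and decay toward zero, consistent with the qualitative behavior in Table E.8.4.5.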

6. The results are presented in Table E.8.4.6.

   Table E.8.4.6.
   t\x   0.00  0.25       0.50       0.75       1.00
   1/32  0     0.0312500  0.0312500  0.0312500  0
   2/32  0     0.0468750  0.0625000  0.0468750  0
   3/32  0     0.0625000  0.0781250  0.0625000  0
   4/32  0     0.0703125  0.0937500  0.0703125  0

Section 8.5.

1. Use finite difference approximations for the partial derivatives.

2. The analytical solution of the problem is given by u(x, t) = f(x − t).
   The graphical results are presented in Figure E.8.5.1 (a), (b).

   [Figure E.8.5.1: (a) the initial condition u(x, 0) = f(x); (b) the exact
   and approximate solutions at t = 0.24.]

3. The numerical results are given in Table E.8.5.1. The numbers are
rounded to 4 digits.
Table E.8.5.1.
t\x 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00
0.2 1.0 1.0 1.0 0.985 1.024 1.270 1.641 1.922
0.4 1.000 1.000 1.000 0.993 0.976 1.063 1.352 1.715
0.6 1.000 0.999 1.000 1.000 0.981 0.977 1.115 1.434
0.8 1.000 1.000 1.000 1.000 0.995 0.969 0.991 1.177
1.0 1.000 1.000 1.000 1.000 1.000 0.986 0.960 1.015
1.2 1.000 1.000 1.000 1.000 1.000 1.000 0.974 0.957

The values of the analytical solution are presented in Table E.8.5.2.

Table E.8.5.2.
t\x 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00
0.2 1.000 1.000 1.000 1.191 1.562 1.979 2.00 2.00
0.4 1.000 1.000 1.000 1.000 1.036 1.038 1.039 1.706
0.6 1.000 1.000 1.000 1.000 1.000 1.000 1.077 1.410
0.8 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.130
1.0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
1.2 1.0000 1.000 1.000 1.000 1.000 1.000 1.000 1.000

   A graphical solution of the problem is displayed in Figure E.8.5.2 (a), (b).

   [Figure E.8.5.2: exact and approximate solutions; panel (b) shows t = 1.0.]

4. For the initial level use the given initial condition u_{i,0} = f(ih). For
   the next level use the Forward-Time Central-Space approximation

   u_{i,1} = u_{i,0} − (λ/2)(u_{i+1,0} − u_{i−1,0}).

   For j ≥ 1 use the Leap-Frog method

   u_{i,j+1} = u_{i,j−1} − λ(u_{i+1,j} − u_{i−1,j}).

   Using Mathematica, the approximation u_{i,6} is obtained for λ = 0.5,
   h = 0.1 (∆t = 0.05) at time t = 0.3. This solution along with the
   solution f(x − 0.3) is shown in Figure E.8.5.3 (b). In Figure E.8.5.3
   (a) the initial condition u(x, 0) is displayed.

   [Figure E.8.5.3: (a) the initial condition u(x, 0); (b) the exact and
   approximate solutions at t = 0.3.]
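The two-stage recipe above (one FTCS step, then leap-frog) is easy to code. The Python sketch below advects a smooth profile with the same λ = 0.5 and h = 0.1 and compares against the exact solution f(x − t); the Gaussian bump and the interval [0, 4] are illustrative assumptions, not the exercise's data.

```python
import math

def leap_frog_advection(f, lam, h, steps):
    # u_t + u_x = 0 on [0, 4]; FTCS for the first level, then
    # u_{i,j+1} = u_{i,j-1} - lam (u_{i+1,j} - u_{i-1,j}); boundaries held fixed.
    n = int(round(4.0 / h))
    xs = [i * h for i in range(n + 1)]
    prev = [f(x) for x in xs]
    curr = [prev[i] - (lam / 2) * (prev[i+1] - prev[i-1]) if 0 < i < n else prev[i]
            for i in range(n + 1)]
    for _ in range(steps - 1):
        nxt = [curr[i] if i in (0, n) else
               prev[i] - lam * (curr[i+1] - curr[i-1]) for i in range(n + 1)]
        prev, curr = curr, nxt
    return xs, curr

f = lambda x: 1.0 + math.exp(-10.0 * (x - 1.0) ** 2)
xs, u = leap_frog_advection(f, 0.5, 0.1, 6)   # dt = lam*h = 0.05, final t = 0.3
err = max(abs(u[i] - f(xs[i] - 0.3)) for i in range(len(xs)))
print(err)
```

The scheme is second order in space and time, so the error after six steps is small but nonzero, dominated by leap-frog phase (dispersion) error.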

   [Figure E.8.5.4: exact and approximate solutions at t = 0, t = 0.0250,
   t = 0.0750 and t = 0.200.]

5. The analytical solution of the given initial value problem is

   u(x, t) = f(x − t/(1 + x²)) = 1 + e^{−100(x − t/(1+x²) − 0.5)²}.

   To obtain the numerical results, use h = 0.05 and ∆t = 0.0125.
   The obtained numerical results at several time instances t are
   displayed in Figure E.8.5.4 (a), (b), (c) and (d).

6. The numerical solution is displayed in Figure E.8.5.5 and the analyt-


ical solution is presented in Figure E.8.5.6. It is important to point
out that the x and t axes are scaled: value 60 on the x-axis corre-
sponds to position x = 3, and the value 75 on the t-axis corresponds
to time t = 3.

Figure E.8.5.5 Figure E.8.5.6

7. The analytical solution uan (x, t) is given by


1( )
uan (x, t) = Fext (x + t) + Fext (x − t) ,
2
   where Fext(x) is the 2-periodic extension of the odd function

   f_odd(x) = −f(−x) for −1 ≤ x ≤ 0,  f_odd(x) = f(x) for 0 ≤ x ≤ 1.
The numerical results are given in Table E.8.5.3 and the results
obtained from the analytical solution are given in Table E.8.5.4. The
numbers are rounded to 4 digits.
Table E.8.5.3.
t\x 0.1 0.2 0.3 0.4 0.5 0.6 0.7
0.05 0.079 0.104 0.119 0.124 0.119 0.104 0.079
0.10 0.074 0.100 0.115 0.120 0.115 0.100 0.075
0.15 0.068 0.091 0.109 0.113 0.109 0.094 0.069
0.20 0.062 0.084 0.099 0.105 0.100 0.085 0.061
0.25 0.04 0.070 0.085 0.093 0.089 0.075 0.054

   Table E.8.5.4.
   t\x   0.1    0.2    0.3    0.4    0.5    0.6    0.7
   0.05  0.079  0.104  0.119  0.124  0.119  0.104  0.079
   0.10  0.075  0.100  0.115  0.120  0.115  0.100  0.075
   0.15  0.069  0.094  0.109  0.113  0.109  0.094  0.069
   0.20  0.060  0.085  0.100  0.105  0.100  0.085  0.060
   0.25  0.050  0.074  0.089  0.094  0.089  0.074  0.050

8. The analytical solution ua (x, t) is given by


1 1
ua (x, t) = Fext (x + t) + Fext (x − t)
2 2
1 1
+ Gext (x + t) − Gext (x − t),
2 2
   where Fext(x) and Gext(x) are the 2-periodic extensions of the odd
   functions
{
−f (−x), −1 ≤ x ≤ 0,
Fodd (x) =
f (x), 0 ≤ x ≤ 1;

{
−g(−x), −1 ≤ x ≤ 0,
Godd (x) =
g(x), 0 ≤ x ≤ 1.
To find Gext first find an antiderivative G(x) for −1 < x < 1:
∫x ∫x
1( )
G(x) = g(x) dx = sin πx dx = − 1 + cos πx .
π
−1 −1

After that extend periodically G(x).


The numerical solution of the problem along with the analytical
solution at the specified time instances is presented in Figure E.8.5.7.

   [Figure E.8.5.7: exact and approximate solutions at t = 0, t = 0.050,
   t = 0.100 and t = 0.275.]

9. The system has the following matrix form:

   U_{j+1} = A · U_j − U_{j−1} + λ² b_{j−1},

   where

   A = [ 2(1 − λ²)   λ²          . . .   0            0
         λ²          2(1 − λ²)   . . .   0            0
                                 .
                                 .
                                 .
         0           0           . . .   2(1 − λ²)    λ²
         0           0           . . .   λ²           2(1 − λ²) ],

   U_j = (u_{1,j}, u_{2,j}, . . . , u_{n,j})ᵀ,  b_{j−1} = (α(t_{j−1}), 0, . . . , 0, β(t_{j−1}))ᵀ.

10. Divide the unit square by the grid points x_i = ih, i = 0, 1, . . . , n,
    y_j = jh, j = 0, 1, . . . , n, for some positive integer n, where h = ∆x =
    ∆y = 1/n, and divide the time interval [0, T] with the points t_k =
    k∆t, k = 0, 1, . . . , m for some positive integer m, where ∆t = T/m.
    Let u_{i,j}^{(k)} ≈ u(x_i, y_j, t_k) be the approximation to the exact solution
    u(x, y, t) at the point (x_i, y_j, t_k). Substituting the central finite
    difference approximations for the second partial derivatives into the wave
    equation, the following approximation is obtained:

    u_{i,j}^{(k+1)} = 2(1 − 2λ²) u_{i,j}^{(k)} + λ² (u_{i+1,j}^{(k)} + u_{i−1,j}^{(k)} + u_{i,j+1}^{(k)} + u_{i,j−1}^{(k)}) − u_{i,j}^{(k−1)},

    where λ = a ∆t/h.
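One step of this explicit scheme is a stencil sweep over the interior grid points. A Python sketch follows (zero boundary data, the standing-mode initial condition, and the grid size are illustrative assumptions; λ = 0.5 satisfies the two-dimensional stability requirement λ ≤ 1/√2):

```python
import math

def wave_step(u_curr, u_prev, lam):
    # One explicit step of the scheme above on the interior, zero boundary values.
    n = len(u_curr)
    nxt = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            nxt[i][j] = (2 * (1 - 2 * lam**2) * u_curr[i][j]
                         + lam**2 * (u_curr[i+1][j] + u_curr[i-1][j]
                                     + u_curr[i][j+1] + u_curr[i][j-1])
                         - u_prev[i][j])
    return nxt

n, lam = 11, 0.5
h = 1.0 / (n - 1)
u0 = [[math.sin(math.pi * i * h) * math.sin(math.pi * j * h) for j in range(n)]
      for i in range(n)]
u_prev, u_curr = u0, u0        # zero initial velocity: take u^{(1)} ~ u^{(0)}
for _ in range(20):
    u_prev, u_curr = u_curr, wave_step(u_curr, u_prev, lam)
amp = max(abs(v) for row in u_curr for v in row)
print(amp)
```

With λ below the stability limit the amplitude stays bounded (the mode simply oscillates); taking λ > 1/√2 in the same code makes it blow up, which is a quick way to see the CFL restriction at work.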