Blog on Edwards and Penney
Differential Equations and Boundary Value Problems:
Computing and Modeling, 3rd edition
Here are my personal opinions on what is good and what is not, and what is
important and what is not, for a first course in Ordinary Differential
Equations using the text by Edwards and Penney. Your feedback is welcome. Names
in [brackets] indicate people who have contributed ideas.
- Richard Laugesen,
Department of Mathematics, University of Illinois, Urbana-Champaign
Page numbers refer to the 3rd edition, and might have changed in the
current edition.
Chapter 1 - First-Order Differential Equations
- 1.1 - Differential Equations and Mathematical Models
- 1.2 - Integrals as General and Particular Solutions
- Page 11: The remark distinguishing "a" general solution from "the" general
solution is useless clutter - it will interest or help only the most
logically-inclined student. It would be more helpful to show an example
where a general solution method fails to find some particular solution
because of some hidden assumption in the general method.
- The section is theoretically weak. In class, I like to show the students
that even if they cannot find an antiderivative of f explicitly,
the differential equation dy/dx=f(x) can still be solved by writing
down y = y0 + Int_(x0)^x f(u) du
and recalling the Fundamental Theorem of Calculus (the part of it
that most students don't remember). You have to emphasize that
the definite integral is a number (the signed area under the graph of f),
so that this formula gives an actual value for y(x), for each x. Thus you
really do have (in principle) a function y=y(x) that solves the differential
equation. The problem is that the book uses the traditional lousy
"indefinite integral" notation for antiderivatives, which confuses the matter
because just writing down y=Int f(x) dx + C does not give you any actual
values for the solution y, and thus does not solve the problem in the same
sense that the definite integral formula does.
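To make the definite-integral point concrete, here is a minimal sketch in
Python (the choice f(x)=e^(-x^2), which has no elementary antiderivative,
and the use of scipy are my own, not the book's):

    # Solve dy/dx = f(x), y(x0) = y0, via y(x) = y0 + Int_(x0)^x f(u) du.
    import math
    from scipy.integrate import quad

    def f(u):
        return math.exp(-u**2)   # no elementary antiderivative exists

    def y(x, x0=0.0, y0=1.0):
        value, _err = quad(f, x0, x)   # the definite integral is just a number
        return y0 + value

    print(y(1.0))   # an actual numerical value of the solution at x = 1

This is exactly the sense in which the definite integral formula "solves"
the equation: it produces a number y(x) for each x.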
- The "velocity and acceleration" problems and methods should be familiar
already to students from calculus (or physics) courses.
- Overall, this seems a rather weak section with which to begin the course.
- 1.3 - Slope Fields and Solution Curves
- I use the Iode
software to help teach about direction fields, and so I de-emphasize the
sketching of direction fields by hand. In particular, I ignore the topic of
"isoclines" completely.
- The Existence and Uniqueness Theorem is certainly worth discussing in
class, although for those students of a practical mind-set, it will be
"clear" already from the direction field that a solution should exist
for any reasonable first-order differential equation. The fact that a solution
might only exist locally, and might (for example) blow up in finite time,
is certainly important. But the other pathologies that the text treats,
such as the equation x(dy/dx)=2y which has infinitely many solution curves
through the origin, can be skipped without harm.
- [Aldo Manfroi]
The Existence and Uniqueness Theorem has been simplified in the
3rd edition, by assuming throughout that f and its y-derivative are
continuous (whereas in the 2nd edition, only the uniqueness statement
assumed that the y-derivative of f is continuous). Probably this makes
life simpler for the students, but exercises 11-20 need to be updated
accordingly, and no longer make sense as stated, since the hypotheses
are now the same for the existence and uniqueness parts of the theorem.
- 1.4 - Separable Equations and Applications
- I tell my students to write the y-antiderivative of 1/y as ln(y),
not ln(|y|). The fussiness of using the absolute value is useless in
practice (any counterexamples?!). And anyway, from a more advanced viewpoint
one can always use the complex logarithm and add an imaginary constant of
integration (i Pi), instead of using the absolute value inside the logarithm;
note that after exponentiation, this constant becomes a factor of -1.
So in my experience, there is no need to bother with the absolute value in
the logarithm.
- 1.5 - Linear First-Order Equations
- The steps on page 47 should start with Step 0: rearrange the
differential equation into the standard form y'+P(x)y=Q(x). See the
First Order Linear handout.
- The method would be easier for students to understand if the author
wrote "antidifferentiate" instead of writing "integrate".
- A good example to treat is x'(t)+cx(t)=a cos(kt) + b sin(kt), which is
a first-order analogue of the kind of second-order equation
treated extensively in Chapter 3. The solution consists of two parts:
an exponentially decaying response to the initial condition, and an
oscillatory response to the periodic forcing.
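Here is a minimal sympy sketch of that example (the constants c=1, k=2,
a=1, b=0 and the initial condition are my own illustrative choices):

    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')
    c, k, a, b = 1, 2, 1, 0

    sol = sp.dsolve(sp.Eq(x(t).diff(t) + c*x(t), a*sp.cos(k*t) + b*sp.sin(k*t)),
                    x(t), ics={x(0): 1})
    print(sp.expand(sol.rhs))   # a decaying exp(-t) term plus cos(2t), sin(2t) terms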
- I like to do one of the mixture problems, to show students another
example of how to arrive at a differential equation modeling a real world
situation. (They are weak at that skill.)
- 1.6 - Substitution Methods and Exact Equations
- I omit Exact Equations, because I've never seen them arise
naturally except from "conservation of energy" (in which
case the level surfaces of the energy already give implicit solutions to the
differential equation). Why is this topic in the book? Just for historical
reasons, or are there some important applications I don't know about?
If there are no applications that are really important, then the
author should say so, and relegate the method to the Exercises.
- Due to lack of time, I also omit "reducible" equations where the
independent variable is missing. It is a cute reduction, but doesn't seem
to arise too often (although a professor at Courant did once ask me, in
conversation, how to solve such an equation, which had arisen in his research).
- A couple of points that the text ought to make but does not are: if the
DE involves yy' then the substitution v=y^2 might be useful (since
then v'=2yy'), and similarly if the
DE involves (cos y)y' then the substitution v=sin y might be useful (since
then v'=(cos y)y'), and so on. Students should observe closely the
form of the differential equation, and look for useful patterns!
Chapter 2 - Mathematical Models and Numerical Methods
- 2.1 - Population Models
- This section is interesting enough, but I skip it and instead work some
of the "population" ideas into my treatment of Section 2.2.
- 2.2 - Equilibrium Solutions and Stability
- Phase diagrams should always be drawn vertically, not horizontally
as in the textbook. Reason: students find it much easier to understand the
direction of the arrows on a phase diagram if they can imagine it as the
vertical axis for a sketch of solution curves (with an "up" arrow in the
phase diagram corresponding to solution curves that are increasing, and
similarly for "down" arrows). For example, look at Figure 2.2.1 (solution
curves) and Figure 2.2.2 (phase diagram), and imagine how much easier it
would be to explain these figures to students if you could put the phase
diagram vertically alongside the sketch of solution curves.
- [Aldo Manfroi] The way the diagrams are drawn, it makes it look like
the equilibrium solutions are reached in a finite amount of time - so I always
tell the students not to draw different curves touching each other.
- The definition of stability on page 92 is correct, but this
formal epsilon-delta definition of the general concept of stability is
precisely the wrong thing to state in this class. Every example we
encounter actually satisfies the more intuitive notion of
asymptotic stability (see page 372). In fact I see no harm at this
level in simplifying a little and just defining asymptotic stability to mean
that if the solution x(t) ever gets close to the value c, then it must
tend to c as t -> infinity. The graphical examples make the meaning
perfectly clear. (If I were teaching a course to math majors, I would be
more precise in my definition.)
- The treatment of bifurcation that culminates in the bifurcation
diagram (Figure 2.2.12) at the end of the section is very nice,
if you have time.
- A good in-class example is to sketch the graph of a function f(x)
that crosses the x-axis a few times (that is, for which f(x)=0 has a few
roots), and then ask the students to draw the phase line for the
differential equation dx/dt=f(x).
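A sketch of how I would check that exercise mechanically (the cubic f and
the probe points are my own illustrative choices):

    # Phase line for dx/dt = f(x): read off arrow directions from the sign
    # of f between consecutive roots.
    def f(x):
        return x * (x - 1) * (x - 2)   # roots at 0, 1, 2

    probes = [-0.5, 0.5, 1.5, 2.5]     # one test point in each interval
    for p in probes:
        arrow = 'up (x increasing)' if f(p) > 0 else 'down (x decreasing)'
        print(f'interval containing x = {p}: {arrow}')
    # Arrows pointing in toward a root mean stable; pointing out, unstable.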
- 2.3 - Acceleration-Velocity Models
- This section is not worth covering on its own, although it is a handy
reference for the air resistance problems. I cover the "resistance
proportional to velocity" problem when I cover Section 1.4, and then I
challenge students to investigate the "resistance proportional to the
square of velocity" problem on their own, for falling objects (note the
constant of proportionality would have the opposite sign for a rising object).
- 2.4 - Numerical Approximation: Euler's Method
- I take students to the computer lab and have them work through Iode Lab 2,
rather than lecturing on Euler's method. Then I follow up in class with a
discussion of error behavior (which is of order h, so that halving the step
size cuts the error by a factor of 2),
and examples where the method does not work well
e.g. highly oscillatory direction fields, or solutions that blow up in finite
time.
- The text material is good, although Example 2 on the dropping baseball is
too artificial to be helpful.
- A good in-class exercise is to have the students compute the first
few iterations of the Euler method by hand, for some simple example.
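- A minimal sketch of that exercise in Python (the example y'=y, y(0)=1 is
my own choice; the exact solution is e^t):

    def euler(f, t0, y0, h, n):
        t, y = t0, y0
        for _ in range(n):
            y = y + h * f(t, y)   # one Euler step
            t = t + h
        return y

    import math
    f = lambda t, y: y
    for h in [0.1, 0.05, 0.025]:
        approx = euler(f, 0.0, 1.0, h, round(1.0 / h))
        print(h, abs(math.e - approx))   # the error roughly halves with h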
- 2.5 - A Closer Look at the Euler Method
- The Improved Euler Method is covered in Iode Project 2. I don't cover this
material in class. The text ought to emphasize more strongly that the error
is of order h^2, so that halving the step size cuts the error by a factor of
4 (in general).
- 2.6 - The Runge-Kutta Method
- There is no time to cover this, so I just point interested
students to the textbook.
Chapter 3 - Linear Equations of Higher Order
- 3.1 - Introduction: Second-Order Linear Equations
- The text is too enamoured of the general theory, and should instead
start by emphasizing the fundamental examples y''+k^2 y=0 and
y''-k^2 y=0, showing how to solve them using sin/cos and
exponentials. Then remind students about the definition and properties
of sinh/cosh, including values at the origin, derivatives, hyperbolic
Pythagoras identity cosh^2-sinh^2=1, and their graphs;
maybe mention the connection to hyperbolas.
Do NOT assume your students already know the hyperbolic trig functions.
Then show how y''-k^2 y=0 can also be solved using sinh/cosh.
- These same examples should be used to illustrate the existence and
uniqueness theorem; point out to students that the initial conditions work
out much more nicely for sinh/cosh than for the exponentials.
- The superposition principle for linear homogeneous equations should be
immediately followed by the corresponding principle for linear
nonhomogeneous equations. This shows students that adding two solutions of a
DE need not give a solution of the same DE.
- Wronskians are a needless
distraction to students in a "methods"-oriented
course, and should not be mentioned at all. That is, we should skip over
Theorem 3 (Wronskians of Solutions) and go straight to Theorem 4
(General Solutions of
Homogeneous Equations). It is Theorem 4 that we really care about, and
we only use Theorem 3 to prove Theorem 4. So Theorem 3 should
be relegated to the Appendices.
- As a further warning to anyone thinking of proving these theorems,
notice that the proof of General Solutions Theorem 4 depends on the
uniqueness part of the
Existence and Uniqueness Theorem 2, which is proved only in the appendix
(and indeed the uniqueness part is only sketched there). So even if you
go to the trouble of teaching your students about Wronskians, they still
will not get the complete theoretical picture until you have proved the
Existence and Uniqueness Theorem. Given this rather complicated chain of
dependences, I can't see why the author thinks it is so crucial to
introduce Wronskians. If I wrote the book, I'd relegate Wronskians to the
exercises.
- Note that in practice, we never compute a Wronskian in order to check
linear independence. It is one of the travesties of the traditional
ODE course that students end up believing they should compute some
meaningless (to them) determinant in order to see if a collection of functions
is independent. In practice, if you have two functions then you simply check
whether or not one is a multiple of the other; and if you have more than two
functions arising as solutions of an ODE, then they will almost
certainly have some structure that allows you to see linear independence
quite readily. For example, when studying constant coefficient, linear
homogeneous ODEs, we find exponential solutions for which
linear independence is easily proved directly, e.g. to show that
{e^x, e^(2x), e^(3x)} are linearly independent, just assume they
are linearly dependent and deduce a contradiction (using the differing
growth rates at infinity).
- When I teach linear systems, I will say a bit more about
Wronskians; but not much more.
- If I were teaching math majors, then I'd certainly teach the full
theory of linear
independence and Wronskians. But my target audience consists of engineers,
not math majors, and Wronskians seem a waste of time for engineers.
(Counterexamples?)
- For the case of "repeated roots" for constant coefficient, linear
homogeneous ODEs, I like to show the students where the "x" comes from
in the solution
x e^(rx). For example, consider two distinct roots r and s, so that
(e^(sx)-e^(rx))/(s-r) solves the ODE; letting s tend to r
yields x e^(rx). Thus if s=r then we would expect
x e^(rx) to be a solution of the ODE. This is not a proof, but it is
close enough. It also gives me an opportunity to talk about why
roots are "generically" distinct, and why we should regard the case of equal
roots as a limiting case of the distinct root case.
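The limit can be checked in sympy, for anyone who wants to confirm the
computation (a sketch, substituting u = s - r):

    import sympy as sp

    x, r, u = sp.symbols('x r u')
    expr = (sp.exp((r + u)*x) - sp.exp(r*x)) / u   # (e^(sx) - e^(rx))/(s - r)
    print(sp.limit(expr, u, 0))                    # x*exp(r*x)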
- I treat here also the case of complex roots for a second order linear
constant coefficient homogeneous equation. (Strangely, the
textbook delays treatment of complex roots until Section 3.3.)
- I definitely spend half an hour discussing complex numbers with my
students before doing the case of complex roots. Reason: an astounding
number of students are either ignorant of complex numbers, or afraid of them,
or are deeply suspicious of their validity. I stress that "there is
nothing imaginary about complex numbers!" Here's a good way to do it.
Define a complex number to be a pair (a,b) of real numbers, and
define the multiplication rule for complex numbers to be:
(a,b)(c,d)=(ac-bd,ad+bc). Then
for notational simplicity I write (a,b)=a+bi, where the "i" is just a symbol
that indicates that "a" is the first real number in the pair and "b" is the
second number. Then you observe (0,1)=i. And the multiplication rule implies
i^2=(0,1)(0,1)=(-1,0)=-1+0i. That is,
i^2=-1.
The deeper point here is that by enlarging our idea of what is a "number"
(in fact, by considering pairs of real numbers, which we call complex numbers)
we have found a "number" whose square is -1.
Then I explain that multiplication by i corresponds to rotation by
90 degrees: i(a+bi)=-b+ai, and I draw a picture to illustrate. In particular,
i i = i^2 = -1 just says that rotating the point (0,1) by 90 degrees
gives the point (-1,0).
After these foundational remarks, I define the complex exponential
using the Taylor series, and assert that all the usual algebraic rules
like e^(z+w)=e^z e^w still hold. Then I
check Euler's formula
e^(it) = cos(t) + i sin(t) by using the Taylor series of the three
functions. Then you can express polar coordinates in complex form as
a+bi = r e^(it) where r=(a^2+b^2)^(1/2)
is the magnitude of the complex number a+bi, equivalently of the vector (a,b).
Finally, I advertise Math 446 (or Math 448) as a good course for
learning more about the wonderful properties of complex numbers.
Note: This treatment of complex numbers got some unusually positive reaction
from the better students. They really seemed to appreciate having the complex
theory demystified.
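Here is a minimal sketch of the "pairs of real numbers" construction in
Python, checking i^2 = -1 and Euler's formula numerically (the truncation
at 30 Taylor terms is my own choice):

    import math

    class Pair:
        """A complex number as a pair (a, b) of real numbers."""
        def __init__(self, a, b):
            self.a, self.b = a, b
        def __mul__(self, w):
            # the defining rule: (a,b)(c,d) = (ac - bd, ad + bc)
            return Pair(self.a*w.a - self.b*w.b, self.a*w.b + self.b*w.a)
        def __repr__(self):
            return f'({self.a}, {self.b})'

    i = Pair(0, 1)
    print(i * i)   # (-1, 0), i.e. i^2 = -1

    # Euler's formula via partial sums of the Taylor series of e^(it):
    t = 1.0
    z, term = Pair(1, 0), Pair(1, 0)
    for n in range(1, 30):
        term = term * Pair(0, t)               # multiply by it
        term = Pair(term.a / n, term.b / n)    # divide by n, building (it)^n/n!
        z = Pair(z.a + term.a, z.b + term.b)
    print(z, (math.cos(t), math.sin(t)))       # the pair matches (cos t, sin t)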
- 3.2 - General Solutions of Linear Equations
- Sometimes I don't cover this material until after Section 3.4, because
Sections 3.3 and 3.4 are mostly about the second order case anyway.
- This section should only be treated briefly, because it is so
similar to the second order case in Section 3.1. I give a handout
and put it up as an overhead, and talk the students through it.
- Just like in the second order case above, I omit Wronskians.
"Down with Wronskians!"
- The new concept in this section is linear independence for
more than two functions. Students find this tricky to understand, and it
is worth spending time on the definition, and examples. Actually,
when I'm teaching Math 385, which does not cover systems of ODEs,
I just define linear independence to mean that no one of the functions
can be written as a linear combination of the others. This definition works
well in practice, even though it is not the most elegant definition,
because students can easily remember and check it.
But when I teach Math 386, which includes systems, because the
linear algebra is more important
I go the extra step and show how the above definition of linear independence
is equivalent to the usual one about the only linear
combination that equals zero being the trivial combination.
- If you still don't believe me about Wronskians, then try the
following with your class. Have half the class check linear independence of
{e^x, e^(2x), e^(3x)} using the Wronskian and
the other half by checking that no one of these functions can be written as a
linear combination of the other two functions. See which half of the class
has a more satisfactory understanding of the meaning of linear
independence.
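Both halves of that experiment can be sketched in sympy (my code, for the
instructor's benefit rather than the students'):

    import sympy as sp

    x, c1, c2, c3 = sp.symbols('x c1 c2 c3')
    fns = [sp.exp(x), sp.exp(2*x), sp.exp(3*x)]

    # Half 1: the Wronskian (nonzero, hence independence -- but opaque)
    W = sp.Matrix([[fn.diff(x, k) for fn in fns] for k in range(3)])
    print(sp.simplify(W.det()))   # 2*exp(6*x), never zero

    # Half 2: the direct check -- the only combination c1 e^x + c2 e^(2x)
    # + c3 e^(3x) vanishing identically is the trivial one:
    combo = c1*fns[0] + c2*fns[1] + c3*fns[2]
    eqs = [combo.subs(x, v) for v in (0, sp.log(2), sp.log(3))]
    print(sp.solve(eqs, [c1, c2, c3]))   # {c1: 0, c2: 0, c3: 0}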
- One should not get too carried away with the higher-order material in
Sections 3.2 and 3.3,
because the most-used differential operators in the world are
second order. Of course, there are some physical situations that yield higher
order operators (e.g. vibrating beams give a fourth order operator),
but students should first thoroughly understand the second order case and
all its ramifications e.g. mechanical vibrations in the next section.
One might argue that higher order equations are important because
they motivate the study of systems of first order equations.
But systems of first order equations arise perfectly
naturally all by themselves, and don't need motivational help from
artificially complicated higher order equations.
- It is a deficiency of the textbook that it fails to advise the
reader properly, with comments like my "don't get carried away with the
higher-order material in Sections 3.2 and 3.3, because the most-used
differential operators in the world are second order".
Not every topic is equally important! The author
ought to guide the reader better by offering pungent opinions. (Or does the
publisher prefer bland inoffensiveness?!)
- 3.3 - Homogeneous Equations with Constant Coefficients
- The "operator" notation for differential equations is a good
conceptual step in this section, and the idea that you can factor a
differential operator is important. But this topic can wait till after
Section 3.4 Mechanical Vibrations. I think it's more important to
get students quickly to some applications.
- 3.4 - Mechanical Vibrations
- There are three good real-world examples: the mass-spring system, the
RLC electrical circuit (which should be taken from Section 3.7 and inserted
here, with a very brief statement of the analogies m=L, c=R, k=1/C), and the
pendulum (which is nonlinear, but becomes approximately linear for
small oscillations). Incidentally, the period of a pendulum is not
independent of the amplitude when you use the full nonlinear equation
- independence only holds after you make the linearizing approximation.
- Students have a surprising amount of difficulty with
converting solutions from the form Acos(wt)+Bsin(wt) to the form Ccos(wt-g),
particularly if the phase shift g is in the second or third quadrant (so that
one must add pi to arctan(B/A)).
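- The quadrant issue disappears if you compute the phase with atan2, which
is worth showing students who will eventually do this on a computer (a
sketch; the test values are mine):

    import math

    def amplitude_phase(A, B):
        # A cos(wt) + B sin(wt) = C cos(wt - g), with C cos g = A, C sin g = B
        C = math.hypot(A, B)
        g = math.atan2(B, A)   # lands in the correct quadrant automatically
        return C, g

    print(amplitude_phase(-1.0, 1.0))   # g is about 2.356, second quadrant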
- You might want to tell your class what a
dashpot is.
- 3.5 - Nonhomogeneous Equations and Undetermined Coefficients
- Rule 2 on page 202 is good, but why on earth does the author
confuse matters with the different-looking Rule 1 on page 198?!
It is much better to state Rule 1 exactly the same way as Rule 2
except without the factor of x^s that is needed in Rule 2
to handle the duplication. Then students only have to remember one
coherent approach, not the mishmash of different approaches given in the book.
- See my handout on undetermined coefficients, where I state Rule 1
and Rule 2 in a consistent fashion and give examples for students to
work in class.
- See also my handout on variation of parameters, where I state the
method and give examples for students to work in class. (In class, I
also give a proof that the method works.)
- 3.6 - Forced Oscillations and Resonance
- The students need lots of practice to help them understand and distinguish
the different phenomena (beating, resonance, practical resonance). And of
course some get confused by amplitude response graphs, because they are
not used to thinking of the forcing frequency as a variable.
- Overall the
textbook does a good job, although I think it would help to make up a
summary diagram encapsulating the material of Section 3.4 (free motion)
and Section 3.6 (forced motion). This diagram could show the four cases
(free, undamped; free, damped; forced, undamped; forced, damped), along with
the behavior of the complementary and particular solution in each case,
and brief reminders of the qualitative behavior associated with that case
(e.g. beating, practical resonance).
- 3.7 - Electrical Circuits
- I don't cover this section. Instead I just briefly mention the
electrical-mechanical analogy while teaching Section 3.4. I think Section 3.7
should be moved to some kind of "appendix" at the end of the chapter,
or into some kind of "application" section. Because there really is no
new mathematics developed here.
- 3.8 - Endpoint Problems and Eigenvalues
- I defer this till after Chapters 4 and 5, since it does not seem to be
needed until Chapter 9.
- Page 231 is the natural place to state the Fredholm Alternative; but the
author does not, curiously.
- What is lacking here is any mention of Orthogonality of Eigenfunctions,
which of course is the crucial fact later for Fourier series in Chapter 9.
I work through a handout with my class, to show that eigenfunctions
that satisfy the same type of BC
and have different eigenvalues are automatically orthogonal. Orthogonality
is then deduced for the standard trigonometric system.
Chapter 4 - Introduction to Systems of Differential Equations
- 4.1 - First-Order Systems and Applications
- When it comes to phase portraits, I think the book needs to do more
to help students understand the connection between the x- and y-solution
curves and the phase portrait and the underlying physical situation.
For example, if you analyze a simple mass-spring system, then you want
students to be able to look at the circular trajectory in the phase plane and
identify which points of the trajectory correspond to maximum rightwards
displacement of the mass, which points correspond to the mass passing
through the equilibrium position while at maximum speed, and so on.
- The basic pedagogical principle here is that when you introduce a new
method of representing information (such as a phase portrait), you ought to
help students make mental connections between this new
representation and the old representation (such as x- and y-solution plots).
- 4.2 - The Method of Elimination
- 4.3 - Numerical Methods for Systems
Chapter 5 - Linear Systems of Differential Equations
- 5.1 - Matrices and Linear Systems
- In-class worksheet
- The treatment of determinants is deficient because it fails to say what
determinants mean! (I ascribe this to the undue influence of algebraists
on the teaching of linear algebra...)
The determinant should be defined as the (signed) volume of the
parallelepiped
spanned by the column vectors of the matrix (or the row vectors). Then in order
to compute this obviously useful quantity, one develops the usual determinant
formulas. Since time is short in this class, I only justify the determinant
formula in 2 dimensions: draw the parallelogram with edges [a c]^T and [b d]^T
and compute the area: you find it is ad-bc, which is therefore our definition
of determinant for a 2x2 matrix [a b \\ c d].
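A numerical sanity check of the signed-area picture (illustrative column
vectors of my own choosing):

    import numpy as np

    a, c = 2.0, 1.0    # first column [a c]^T
    b, d = 0.5, 3.0    # second column [b d]^T
    M = np.array([[a, b], [c, d]])
    print(np.linalg.det(M), a*d - b*c)   # both give 5.5, the signed area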
- The whole
section is poorly structured. The first part consists of a rapid-fire
survey of basic matrix algebra. (This should be a separate section.)
Then it transitions to a somewhat rambling restatement of
material on superposition and general solutions, complementary and particular
solutions, all of which we are familiar with from the scalar case in
Chapter 3. The text then ought to (but does not) say the following. "So just
like in Chapter 3, we face three separate issues. 1. Finding n linearly
independent complementary solutions of the homogeneous equation.
2. Finding a particular solution of the nonhomogeneous equation.
3. Finding the constants c_1,...,c_n so that the initial
conditions are satisfied."
Instead of saying this and then laying out a plan for dealing with all
three issues, the text just launches into dealing with issue 3 (satisfying the
initial conditions), showing how to solve the system of linear
equations by row reduction.
- Row reduction summary
- See my comments on Wronskians in Section 3.1 above. The same comments
apply here.
- 5.2 - The Eigenvalue Method for Homogeneous Systems
- The examples are good, but the overall feeling of the section
is "I'll tell you how to find a solution, but I won't ask you to think about
the phase portrait". There should really be some clear statement, in this
section, of what the phase portraits look like in the standard 2 dimensional
examples: two positive real roots (source), two negative real roots (sink),
one positive and one negative (saddle), a complex pair of roots with positive
real part (spiral source), a complex pair of roots with negative real part
(spiral sink). One can do all this in the easiest possible canonical cases,
and just say "the general cases look like distorted versions of these phase
portraits" - this statement about distortion is readily understood if you do
just one example where the eigenvectors are not in the coordinate directions,
such as Example 1 in the text. An excellent source for this (standard)
material is Hirsch, Smale and Devaney "Differential Equations,
Dynamical Systems, and an Introduction to Chaos".
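A sketch of the classification, which instructors could use to generate
examples (my code; it handles only the generic real-distinct and complex
cases listed above):

    import numpy as np

    def classify(A):
        lam1, lam2 = np.linalg.eigvals(A)
        if abs(lam1.imag) > 1e-12:          # complex conjugate pair
            if lam1.real > 0: return 'spiral source'
            if lam1.real < 0: return 'spiral sink'
            return 'center'
        lam1, lam2 = lam1.real, lam2.real
        if lam1 > 0 and lam2 > 0: return 'source'
        if lam1 < 0 and lam2 < 0: return 'sink'
        return 'saddle'

    print(classify(np.array([[1.0, 0.0], [0.0, -2.0]])))    # saddle
    print(classify(np.array([[-1.0, 2.0], [-2.0, -1.0]])))  # spiral sink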
- The section curiously fails to mention that if a matrix is real
and symmetric, then all the eigenvalues are real. (This fact is useful
in Section 5.3, for example.)
- Another odd omission is the failure to prove that distinct eigenvalues
have linearly independent eigenvectors. This very important fact is
more-or-less stated on page 302, but without proof.
Why is this not proved, when the text investigates less-fundamental ideas
in gory detail (e.g. Section 5.4)?! The proof is not long, and could easily be
done here.
- 5.3 - Second-Order Systems and Mechanical Applications
- This is a very nice section, with a lovely interplay of theory and
physical intuition.
- In Example 2, the assumption that the buffer springs disengage when
stretched should not be mentioned until the end, where it is used.
- The final subsection "Periodic and Transient Solutions" is not much use,
since damping terms are not considered anywhere else in the section.
And note formula (39) needs to be rephrased as x(t)-x_p(t) -> 0,
in order to make sense.
- The author could make mass-spring systems seem more appealing by
showing how the wave equation for compressional waves can arise as a limit
of mass-spring systems as the number of oscillators goes to infinity. Examples:
sound waves, seismic P-waves.
- 5.4 - Multiple Eigenvalue Solutions
- This section seems like overkill. The theory is too deep for the students
to understand (if the students have not seen linear algebra before),
and the applications are not sufficiently compelling to make
the reader believe that the work involved is worthwhile.
- I go lightly: first show a simple example of a deficient eigenvalue,
then explain the general algorithm on page 337, and then apply the algorithm
to the example.
- 5.5 - Matrix Exponentials and Linear Systems
- I ignore the material on the matrix differential equation X'=AX. The
students have enough to think about already with the vector equation x'=Ax,
and I don't want to burden them further. So I skip the first 4 pages of the
section, the fundamental matrix etc.
- Instead I begin with the following motivation: the familiar equation
x'=ax with initial condition x(0)=x0 has solution
x(t)=e^(at) x0, and so by analogy, we expect
the vector equation
x'=Ax with initial condition x(0)=x0 to have solution
x(t)=e^(At) x0. But clearly this means we need to
find a satisfactory definition of the exponential of a matrix.
- After defining the matrix exponential by the Taylor series, I
show e^0=I, and compute the exponential of a diagonal matrix.
Then I consider the examples A=[0 1 // 0 0] and B=[0 0 // 1 0], and directly
compute e^(At) and e^(Bt) and e^((A+B)t) (the last
one involves cosh and sinh). This provides dramatic proof that the
familiar laws of exponentials can fail for matrices, since
e^(At) e^(Bt) does not equal e^((A+B)t) here.
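The computation can be mirrored in scipy for anyone who wants to check it
(a sketch using scipy.linalg.expm):

    import numpy as np
    from scipy.linalg import expm

    t = 1.0
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0, 0.0], [1.0, 0.0]])

    print(expm(A*t) @ expm(B*t))   # [[1 + t^2, t], [t, 1]]
    print(expm((A + B)*t))         # [[cosh t, sinh t], [sinh t, cosh t]]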
- Then I prove the theorem that if A and B commute then equality does hold.
It's not hard, and I don't understand why the book relegates this beautiful
result to the Problems. The key point of the proof is that the binomial
expansion depends on commutativity
e.g. (A+B)^2 = A^2+AB+BA+B^2 =
A^2+2AB+B^2 provided AB=BA.
- Another curious omission is diagonalization, which provides both a
method for computing the exponential of a matrix, and enables one to
see the connection between the solution x(t)=e^(At) x0
and the solution in terms of eigenvectors and eigenvalues that we found in
Section 5.2.
- At the end of the section, I show the derivative of e^(At)
is Ae^(At), by using the Taylor series, so that our original
motivation is correct: the vector equation
x'=Ax with initial condition x(0)=x0 does have solution
x(t)=e^(At) x0.
- 5.6 - Nonhomogeneous Linear Systems
- I don't like the content or organization of this section. The section
fails to adequately explain the parallels with the scalar case, and it
does not treat the second order vector case, and worst of all, in my view,
it fails to use the fundamental idea of decomposing the forcing term and the
solution into eigenvector series. The eigenvector decomposition idea is
fundamental in differential equations, and was used already in Section 3.8
and will be used again in Chapter 9 (partial differential equations). Surely
it should be used here too!
- Here is what I cover in class, instead.
1. First order linear constant coefficient nonhomogeneous.
(a) Explain the integrating factor method (the
integrating factor is e^(-At)). This works when A is a constant
matrix, and not in general when A depends on t (because the derivative of
e^(B(t)) need not equal e^(B(t)) B'(t), when B(t) and B'(t)
do not commute).
(b) Explain eigenvector decomposition: decompose the forcing term into a linear
combination of eigenvectors, with variable coefficients f_j(t),
and guess that the solution can be written as a linear combination of
eigenvectors, with variable coefficients g_j(t).
Substitute this guess into the differential
equation to obtain a scalar first order equation for each g_j.
Solve that scalar equation by the usual integrating factor method.
2. First order linear variable coefficient nonhomogeneous.
Here I follow the nice treatment in the text.
3. Second order linear constant coefficient nonhomogeneous.
(a) Explain Undetermined Coefficients very briefly: it is like in the scalar
case (Section 3.5) except with vector coefficients. And note the slight
difference in the case of duplication, noted on page 360.
(b) Explain eigenvector decomposition. Like in the first order case above,
except solve for g_j using scalar Undetermined Coefficients
(Section 3.5).
4. Second order linear variable coefficient nonhomogeneous.
Mention Variation of Parameters. Omitted.
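Here is a minimal sympy sketch of the eigenvector decomposition in 1(b),
with an illustrative symmetric matrix and forcing term of my own choosing:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[0, 1], [1, 0]])     # eigenvalues -1 and 1
    f = sp.Matrix([sp.cos(t), 0])       # forcing term

    # Decompose f(t) in the eigenvector basis: f = sum_j f_j(t) v_j
    eigs = A.eigenvects()
    lams = [e[0] for e in eigs]
    vecs = [e[2][0] for e in eigs]
    P = sp.Matrix.hstack(*vecs)
    fj = P.solve(f)                     # the coefficients f_j(t)

    # Guess x = sum_j g_j(t) v_j, so each g_j solves the scalar equation
    # g_j' = lam_j g_j + f_j (here with g_j(0) = 0, for definiteness):
    x = sp.zeros(2, 1)
    for lam, fj_t, v in zip(lams, fj, vecs):
        g = sp.Function('g')
        sol = sp.dsolve(sp.Eq(g(t).diff(t), lam*g(t) + fj_t), g(t), ics={g(0): 0})
        x = x + sol.rhs * v
    print(sp.simplify(x))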
- Then give them lots of practice at these methods!
Chapter 9 - Fourier Series Methods
- 9.1 - Periodic Functions and Trigonometric Series
- Fourier series are presented in a rather low-level way. For instance,
the trigonometric orthogonality formulas on page 574 are proved using
trig identities, with no mention of the fact that orthogonality is to be
expected because the sines and cosines
are eigenfunctions of y''+lambda y=0 satisfying periodic boundary conditions.
(See my comments above on Section 3.8.)
- Also, there is no description of
the vector analogy, where we expand a vector v=ai+bj+ck in terms of orthogonal
vectors i,j,k, and find the coefficients a,b,c by taking dot products with
i,j,k and using orthogonality. I do this example before even writing down the
Fourier series formula, because then students can see the analogy in the
Fourier formula, as we develop it.
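The analogy can even be demonstrated numerically, computing a Fourier
coefficient as a quotient of inner products, exactly like a = v.i (a
sketch; f(x) = x^2 on [-pi, pi] is my illustrative choice, with known
coefficients a_n = 4(-1)^n/n^2):

    import numpy as np

    x = np.linspace(-np.pi, np.pi, 20001)
    f = x**2

    def inner(u, v):
        return np.trapz(u * v, x)   # the integral plays the role of the dot product

    for n in range(1, 4):
        cn = np.cos(n * x)
        a_n = inner(f, cn) / inner(cn, cn)
        print(n, a_n, 4 * (-1)**n / n**2)   # numerical and exact values agree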
- 9.2 - General Fourier Series and Convergence
- Regarding the Convergence Theorem,
I wish the book emphasized the importance of sketching the graph
over at least two full periods, so that any jumps at the "endpoints" become
clearly visible.
- 9.3 - Fourier Sine and Cosine Series
- I point out in class that a sine series can be obtained either by
odd extension and a full Fourier series, or by simply using that the sine
functions on the original interval are orthogonal (because they are
eigenfunctions for Dirichlet boundary conditions). This dispels the
feeling that sine series are just some kind of trick.
- The material on Termwise Differentiation of Fourier Series fails to
clearly state the main point: we want to differentiate the series of
x(t), and x(t) is smooth because it solves a differential equation.
Thus termwise differentiation is valid.
[The discussion in the textbook of counterexamples to term-by-term
differentiation is irrelevant to our needs.]
- 9.4 - Applications of Fourier Series
- After teaching this section, I give students a
summary
of the methods in Sections 9.3 and 9.4 for solving ODEs by Fourier series.
Particularly, I explain how to detect and deal with resonance.
- 9.5 - Heat Conduction and Separation of Variables
- Consider the brief comment on page 613: "the series solution...usually
converges quite rapidly, unless t is quite small, because of the presence
of the negative exponential factors." This is correct, but it fails to
emphasize the main point in a form students can readily understand. The
comment should say that "one finds in practice that after a short time t,
only the first one or two terms in the solution u(x,t) need be added up,
because the other terms will be negligibly small. In other words,
the high modes (terms with large n) decay extremely fast and do not
contribute much to the solution, in practice."
This comment should
be accompanied by snapshots of a solution that is very wiggly at t=0,
but which has become basically an arch of a sine curve by time t=1.
Unfortunately, the book has a lot of this kind of writing -
insights are presented using declarative language rather than
action-oriented or algorithmic language. The reader is left wondering what
he or she is actually supposed to do with these statements presented
by the author.
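In the spirit of "action-oriented" writing, here is a sketch producing
those snapshots numerically (the initial coefficients, mixing low and high
modes, are my own choices; diffusivity 1 on 0 < x < pi):

    import numpy as np

    x = np.linspace(0, np.pi, 9)
    b = {1: 1.0, 5: 0.8, 15: 0.6}   # modes present at t = 0

    def u(x, t):
        return sum(bn * np.exp(-n**2 * t) * np.sin(n * x) for n, bn in b.items())

    for t in [0.0, 0.1, 1.0]:
        print(t, np.round(u(x, t), 4))   # by t = 1 only the n = 1 arch survives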
- 9.6 - Vibrating Strings and the One-Dimensional Wave Equation
- The D'Alembert solution is derived by trig identities from a series
solution on the interval 0 < x < L. Aaaaagggghhhhh!!!!!!!
Please derive it instead
by changing variable (to characteristic coordinates) in the wave equation on
the whole line -Infty < x < Infty, in the standard way, to get
u=F(x+ct)+G(x-ct). Then do some examples with F=G (initial velocity zero)
and with F=-G (initial displacement zero).
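For reference, the derivation I have in mind is the standard one (sketched
here in LaTeX, not taken from the book):

    % Characteristic coordinates for u_tt = c^2 u_xx on -Infty < x < Infty:
    \[
      \xi = x + ct, \qquad \eta = x - ct
      \quad\Longrightarrow\quad
      u_{tt} - c^2 u_{xx} = -4c^2\, u_{\xi\eta} = 0,
    \]
    % so u_{\xi\eta} = 0; integrating in \eta and then \xi gives
    \[
      u(x,t) = F(\xi) + G(\eta) = F(x+ct) + G(x-ct).
    \]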
- 9.7 - Steady-State Temperature and Laplace's Equation