Newton's method [163], [167, p. 143]
finds the minimum of a nonlinear (scalar) function of several
variables by locally approximating the function by a quadratic
surface, and then stepping to the bottom of that ``bowl'', which
generally requires a matrix inversion. Newton's method therefore
requires the function to be ``close to quadratic'', and its
effectiveness is directly tied to the accuracy of that assumption.
For smooth functions, Newton's method gives very rapid quadratic
convergence in the last stages of iteration. Quadratic convergence
implies, for example, that the number of significant digits in the
minimizer approximately doubles each iteration.
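As an informal numerical illustration of this digit-doubling behavior (a minimal sketch, not part of the original text; the example function $J(x) = e^x - 2x$, with minimizer $x^\ast = \ln 2$, is chosen here for convenience), the following Python fragment applies the scalar Newton update derived below and prints the error after each iteration:

```python
import math

# Minimize J(x) = exp(x) - 2x, so J'(x) = exp(x) - 2 and J''(x) = exp(x).
# The minimizer is x* = ln(2).
x, x_star = 0.0, math.log(2)
for n in range(1, 6):
    x = x - (math.exp(x) - 2) / math.exp(x)  # scalar Newton step: x - J'(x)/J''(x)
    print(n, abs(x - x_star))                # error roughly squares each iteration
```

Running this shows the error falling from about $3\times 10^{-1}$ to $4\times 10^{-2}$, $9\times 10^{-4}$, $4\times 10^{-7}$, and then $10^{-13}$: the number of correct digits approximately doubles each time.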
Newton's method may be derived as follows: Suppose we wish to minimize
the real, positive function $J(\underline{x})$ with respect to
$\underline{x}$. The vector Taylor expansion [550] of $J$ about
$\underline{\hat{x}}$ gives

$\displaystyle J(\underline{\hat{x}}^\ast) = J(\underline{\hat{x}}) + J^\prime(\underline{\hat{x}})(\underline{\hat{x}}^\ast - \underline{\hat{x}}) + \frac{1}{2}(\underline{\hat{x}}^\ast - \underline{\hat{x}})^T J''(\underline{\zeta})(\underline{\hat{x}}^\ast - \underline{\hat{x}})$

for some $\underline{\zeta}$ between $\underline{\hat{x}}$ and
$\underline{\hat{x}}^\ast$, where $J^\prime$ denotes the gradient and
$J''$ the Hessian matrix of $J$. It is now necessary to assume that
$J''(\underline{\zeta}) \approx J''(\underline{\hat{x}})$.

Differentiating with respect to $\underline{\hat{x}}^\ast$, where
$J(\underline{\hat{x}}^\ast)$ is presumed to be minimized (so that the
derivative must equal zero), this becomes

$\displaystyle 0 = J^\prime(\underline{\hat{x}}) + J''(\underline{\hat{x}})(\underline{\hat{x}}^\ast - \underline{\hat{x}}).$

Solving for $\underline{\hat{x}}^\ast$ yields
$\displaystyle \underline{\hat{x}}^\ast = \underline{\hat{x}} - [J''(\underline{\hat{x}})]^{-1} J^\prime(\underline{\hat{x}}).$   (8.14)
Applying Eq.(8.14) iteratively, we obtain Newton's method:
$\displaystyle \underline{\hat{x}}_{n+1} = \underline{\hat{x}}_n - [J''(\underline{\hat{x}}_n)]^{-1} J^\prime(\underline{\hat{x}}_n), \quad n=1,2,\ldots\,,$   (8.15)
where $\underline{\hat{x}}_1$ is given as an initial condition.
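Eq.(8.15) translates directly into code. The following Python sketch is an illustration, not from the text; the function name `newton_minimize` and the Rosenbrock test function are choices made here. It solves a linear system at each step rather than forming the explicit matrix inverse, which is numerically equivalent to applying $[J''(\underline{\hat{x}}_n)]^{-1}$:

```python
import numpy as np

def newton_minimize(grad, hess, x1, n_iter=20, tol=1e-12):
    """Iterate Eq.(8.15): x_{n+1} = x_n - [J''(x_n)]^{-1} J'(x_n)."""
    x = np.asarray(x1, dtype=float)
    for _ in range(n_iter):
        step = np.linalg.solve(hess(x), grad(x))  # solve, don't invert
        x = x - step
        if np.linalg.norm(step) < tol:            # stop when the step is tiny
            break
    return x

# Example: the Rosenbrock function J(x,y) = (1-x)^2 + 100(y - x^2)^2,
# minimized at (1,1); grad and hess are its exact derivatives.
grad = lambda v: np.array([-2*(1 - v[0]) - 400*v[0]*(v[1] - v[0]**2),
                           200*(v[1] - v[0]**2)])
hess = lambda v: np.array([[2 + 1200*v[0]**2 - 400*v[1], -400*v[0]],
                           [-400*v[0],                    200.0   ]])
print(newton_minimize(grad, hess, [0.0, 0.0]))  # -> approximately [1. 1.]
```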
When $J(\underline{x})$ is any quadratic form in $\underline{x}$, then
$J''(\underline{\zeta}) = J''(\underline{\hat{x}})$ for all
$\underline{\zeta}$, so the quadratic approximation is exact, and
Newton's method produces $\underline{\hat{x}}^\ast$ in one iteration;
that is, $\underline{\hat{x}}_2 = \underline{\hat{x}}^\ast$ for every
initial condition $\underline{\hat{x}}_1$.
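To make the one-iteration claim concrete (a sketch under the stated quadratic assumption; all variable names here are invented for illustration), take $J(\underline{x}) = \frac{1}{2}\underline{x}^T A\underline{x} - \underline{b}^T\underline{x}$ with $A$ symmetric positive definite, so that $J^\prime(\underline{x}) = A\underline{x} - \underline{b}$ and $J''(\underline{x}) = A$ everywhere:

```python
import numpy as np

# J(x) = 1/2 x^T A x - b^T x, a quadratic form with SPD Hessian A.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric positive definite
b = rng.standard_normal(4)

grad = lambda x: A @ x - b           # J'(x); J''(x) = A for all x
x1 = rng.standard_normal(4)          # arbitrary initial condition
x2 = x1 - np.linalg.solve(A, grad(x1))        # one Newton step, Eq.(8.15)
print(np.allclose(x2, np.linalg.solve(A, b))) # True: x2 is the exact minimizer
```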