The simplest case to study first is the sample mean:
$$\hat{\mu}_x(n) \triangleq \frac{1}{M}\sum_{m=0}^{M-1} x(n-m) \qquad \text{(C.29)}$$
Here we have defined the sample mean at time $n$ as the average of the $M$ successive samples up to time $n$ -- a ``running average''. The true mean $\mu_x$ is assumed to be the average over any infinite number of samples, such as
$$\mu_x = \lim_{M\to\infty} \frac{1}{M}\sum_{m=0}^{M-1} x(n-m) \qquad \text{(C.30)}$$
or
$$\mu_x = \lim_{M\to\infty} \frac{1}{2M+1}\sum_{m=-M}^{M} x(n+m). \qquad \text{(C.31)}$$
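As a quick numerical illustration, here is a minimal Python/NumPy sketch of the running average in Eq. (C.29); the function name `running_mean` and the parameter values are our own illustrative choices, not from the text:

```python
import numpy as np

def running_mean(x, M):
    """M-sample running average of Eq. (C.29):
    mu_hat(n) = (1/M) * sum_{m=0}^{M-1} x(n-m)."""
    # Convolve with a length-M boxcar of height 1/M; mode="valid" keeps only
    # the times n for which a full window of M past samples is available.
    return np.convolve(x, np.ones(M) / M, mode="valid")

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)    # zero-mean, unit-variance white noise
print(running_mean(x, 100)[:3])    # estimates hover near the true mean, 0
```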
Now assume $\mu_x = 0$, and let $\sigma_x^2$ denote the variance of the process $x(\cdot)$, i.e.,

$$\operatorname{Var}\left\{x(n)\right\} \triangleq {\cal E}\left\{[x(n)-\mu_x]^2\right\} = {\cal E}\left\{x^2(n)\right\} = \sigma_x^2. \qquad \text{(C.32)}$$
Then the variance of our sample-mean estimator $\hat{\mu}_x(n)$ can be calculated as follows:

$$\begin{aligned}
\operatorname{Var}\left\{\hat{\mu}_x(n)\right\}
&\triangleq {\cal E}\left\{\left[\hat{\mu}_x(n)-\mu_x\right]^2\right\}
= {\cal E}\left\{\hat{\mu}_x^2(n)\right\}\\
&= {\cal E}\left\{\frac{1}{M}\sum_{m_1=0}^{M-1}x(n-m_1)\,\frac{1}{M}\sum_{m_2=0}^{M-1}x(n-m_2)\right\}\\
&= \frac{1}{M^2}\sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1}{\cal E}\left\{x(n-m_1)\,x(n-m_2)\right\}\\
&= \frac{1}{M^2}\sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1} r_x(|m_1-m_2|),
\end{aligned}$$

where we used the fact that the time-averaging operator ${\cal E}\{\cdot\}$ is linear, and $r_x(l)$ denotes the unbiased autocorrelation of $x(n)$.
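The double-sum formula above holds for any stationary process, not only white noise. As a sanity check, the following sketch compares it against simulation for a first-order autoregressive (AR(1)) process, whose autocorrelation $r_x(l) = \sigma_x^2 a^{|l|}$ is known in closed form; the test process and all parameter values are our illustrative choices, not from the text:

```python
import numpy as np

# Illustrative test process (our choice): AR(1), x(n) = a*x(n-1) + e(n),
# with sigma_x^2 = sigma_e^2 / (1 - a^2) and r_x(l) = sigma_x^2 * a^|l|.
a, sigma_e, M = 0.8, 1.0, 50
sigma_x2 = sigma_e**2 / (1 - a**2)

# Predicted Var{mu_hat} = (1/M^2) * sum_{m1} sum_{m2} r_x(|m1 - m2|)
lags = np.abs(np.arange(M)[:, None] - np.arange(M)[None, :])
predicted = (sigma_x2 * a**lags).sum() / M**2

# Simulate the AR(1) process, starting from its stationary distribution.
rng = np.random.default_rng(1)
e = rng.normal(0.0, sigma_e, size=200_000)
x = np.empty_like(e)
x[0] = rng.normal(0.0, np.sqrt(sigma_x2))
for n in range(1, len(e)):
    x[n] = a * x[n - 1] + e[n]

mu_hat = np.convolve(x, np.ones(M) / M, mode="valid")
print(predicted, mu_hat.var())   # the two numbers should agree closely
```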
If $x(n)$ is white noise, then $r_x(|m_1-m_2|) = \sigma_x^2\,\delta(m_1-m_2)$, and we obtain

$$\operatorname{Var}\left\{\hat{\mu}_x(n)\right\}
= \frac{1}{M^2}\sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1}\sigma_x^2\,\delta(m_1-m_2)
= \frac{\sigma_x^2}{M}.$$
We have derived that the variance of the $M$-sample running average of a white-noise sequence $x(n)$ is given by $\sigma_x^2/M$, where $\sigma_x^2$ denotes the variance of $x(n)$. We found that the variance is inversely proportional to the number of samples used to form the estimate. This is how averaging reduces variance in general: when averaging $M$ independent (or merely uncorrelated) random variables having a common variance $\sigma^2$, the variance of the average is $\sigma^2/M$, i.e., the variance of each individual random variable divided by $M$.
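To see the $1/M$ law empirically, here is a short Monte Carlo sketch (our own illustration, assuming unit-variance Gaussian white noise):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)   # white noise with sigma_x^2 = 1
for M in (10, 100, 1000):
    mu_hat = np.convolve(x, np.ones(M) / M, mode="valid")
    # Measured variance of the running average vs. the predicted sigma_x^2 / M
    print(M, mu_hat.var(), 1 / M)
```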