Class 3: unavoidable error, numerically stable and numerically unstable algorithms
Introduction
We want to compute
$$y = f(x)$$
for a given input value $x$. Assume that there are no measurement errors. In the ideal algorithm
- the given value is rounded to the closest machine number: $\tilde{x} = \mathrm{fl}(x)$ with $\left|\tilde{x}-x\right|/\left|x\right| \le \varepsilon_M$
- then use $\tilde{x}$ and compute $f(\tilde{x})$ "exactly" (with some extra precision)
- then round this extra precision result to the closest machine number: $\tilde{y} = \mathrm{fl}\left(f(\tilde{x})\right)$

As a result we obtain the unavoidable error
$$\frac{\left|\tilde{y}-y\right|}{\left|y\right|} \lesssim (c_f + 1)\,\varepsilon_M,$$
where
$$c_f = \left|\frac{x\,f'(x)}{f(x)}\right|$$
denotes the condition number of $f$ and $\varepsilon_M$ denotes the machine epsilon (which is about $10^{-16}$ for the double precision numbers used in Matlab). In a practical computation we break up the computation of $f(x)$ into a sequence of operations which are available on our machine (an "algorithm"). Then we run the algorithm on our computer using machine numbers and machine arithmetic.
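Before the example, a quick illustration of the machine epsilon (our own sketch, not part of the original notes; the computations below use the rounded value epsM = 1e-16, while Matlab's built-in eps equals $2^{-52} \approx 2.2\cdot10^{-16}$): relative perturbations below $\varepsilon_M$ are invisible in machine arithmetic.
eps               % built-in: spacing of the doubles near 1, about 2.2e-16
(1 + eps/2) - 1   % a perturbation below the resolution near 1 is rounded away: returns 0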
Example
We want to compute
$$y = 1 - \cos x$$
for $x = 5\cdot10^{-4}$.
Algorithm 1
The obvious way to compute y is
format compact; format long g
x = 5*10^-4;   % the given input value
yalg1 = 1-cos(x)
yalg1 = 1.24999997352937e-07
We call this Algorithm 1: first compute $y_1 = \cos x$ and then $y = 1 - y_1$. Matlab uses machine numbers with 16 digits accuracy and displays 15 digits.
- How accurate is this computed result?
- Is Algorithm 1 numerically stable, i.e., "optimal"?
- Can we find an algorithm which gives a much smaller error?
Finding the relative error
In order to find the relative error we need the "exact result". One way is to use higher precision: if the original computation was done in single precision, we could get a more accurate result using double precision.
Matlab uses double precision by default. One way to obtain a higher precision result is to use the symbolic toolbox and vpa which uses 30 digits accuracy by default:
X = sym(5*10^-4); % symbolic input
Y = vpa(1-cos(X)) % evaluate using 30 digit accuracy
Y = 0.00000012499999739583335503472212534102
relerr_alg1 = double((yalg1-Y)/Y) % double() converts to machine number
relerr_alg1 = -3.4317095325145e-10
We see that the relative error is about $3.4\cdot10^{-10}$.
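As a cross-check of the reference value (our own addition, not part of the original notes): the Taylor series $1-\cos x = \frac{x^2}{2} - \frac{x^4}{24} + \cdots$ reproduces Y to about 15 digits, since the first neglected term $x^6/720$ has relative size about $10^{-16}$:
yser = x^2/2 - x^4/24   % series value, agrees with Y in the leading digits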
Unavoidable error
Here $f(x) = 1 - \cos x$ with $f'(x) = \sin x$, so the condition number is
$$c_f = \left|\frac{x\,f'(x)}{f(x)}\right| = \left|\frac{x\,\sin x}{1-\cos x}\right|,$$
giving the unavoidable error $(c_f + 1)\,\varepsilon_M$:
epsM = 1e-16;                    % machine epsilon for 16 digit accuracy
cf = abs(x*sin(x)/(1-cos(x)));   % condition number of f at x
unav_err = cf*epsM + epsM
unav_err = 2.99999995901967e-16
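Why is $c_f \approx 2$ here? A short Taylor argument (our own addition): for small $x$ we have $x\sin x \approx x^2$ and $1-\cos x \approx x^2/2$, so
$$c_f = \left|\frac{x\,\sin x}{1-\cos x}\right| \approx \frac{x^2}{x^2/2} = 2.$$
Since $c_f$ is of moderate size, the problem itself is well conditioned: a large error can only come from the algorithm.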
The error of Algorithm 1 is about $3.4\cdot10^{-10}$, which is much larger than the unavoidable error of about $3\cdot10^{-16}$. Therefore Algorithm 1 is numerically unstable.
Why do we get such a large error for Algorithm 1?
Note that we first compute $y_1 = \cos x$. The computed result is rounded to the closest machine number; this results in a relative error which can be as large as $\varepsilon_M$. Then we compute $y = 1 - y_1$. Note that $y_1 \approx 1$, so we have subtractive cancellation, and the relative error present in $y_1$ will be multiplied by $\left|\frac{y_1}{1-y_1}\right|$:
y1 = cos(x);   % intermediate result of Algorithm 1
factor = abs(y1/(1-y1))
factor = 7999999.16941204
factor*epsM
ans = 7.99999916941204e-10
This means that the roundoff error in $y_1$ may cause an error as large as $8\cdot10^{-10}$ in $y$. This is the order of magnitude of the error we observed for Algorithm 1.
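We can see this amplification directly (our own sketch, not part of the original notes) by perturbing $y_1$ with a relative error of one eps and looking at the resulting relative change of $1-y_1$:
y1p = y1*(1 + eps);                          % y1 with a relative error of about 2.2e-16
relchange = abs(((1-y1p) - (1-y1))/(1-y1))   % roughly factor*eps, i.e. about 2e-9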
Algorithm 2
How can we compute a more accurate result in machine arithmetic? The idea is to use mathematical identities to rewrite the expression $1-\cos x$ so that the new expression does not suffer from subtractive cancellation. A useful trick is to multiply and divide by $1+\cos x$:
$$y = 1-\cos x = \frac{(1-\cos x)(1+\cos x)}{1+\cos x} = \frac{1-\cos^2 x}{1+\cos x}.$$
So far this does not help: evaluating $1-\cos^2 x$ still causes subtractive cancellation. But using $\sin^2 x + \cos^2 x = 1$ we obtain
$$y = \frac{\sin^2 x}{1+\cos x}.$$
The new expression is now free from subtractive cancellation. This gives Algorithm 2:
yalg2 = sin(x)^2/(1+cos(x))
yalg2 = 1.24999997395833e-07
relerr_alg2 = double((yalg2-Y)/Y)
relerr_alg2 = 5.55505114272964e-18
Note that this is of the order of the unavoidable error or less. Hence Algorithm 2 is numerically stable.
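As a final illustration (our own sketch, not part of the original notes): the cancellation in Algorithm 1 gets worse as $x$ decreases, while Algorithm 2 remains accurate. Taking the Algorithm 2 value as the reference:
for xk = [1e-2 1e-4 1e-6]           % test values chosen for illustration
    a1 = 1 - cos(xk);               % Algorithm 1: suffers subtractive cancellation
    a2 = sin(xk)^2/(1 + cos(xk));   % Algorithm 2: free of cancellation
    relerr1 = abs(a1 - a2)/a2       % grows roughly like epsM/xk^2
end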