Applications of Interval Computations: General
Data Processing
One of the main functions of a computer is crunching numbers or, to
use a fancier term, data processing. These data come either
from measurements, from expert estimates, or from the results of
previous processing.
Expert estimates are often very important, but
a computer can easily crunch (and does crunch) thousands and even
millions of numbers per second, and there are not enough experts in the world
to supply that many expert estimates. Therefore, the majority of data
processing algorithms do not use expert estimates at all:
they either directly process the results of measurements, or they
process the results of pre-processing those measurements. In the second
case, we can consider the entire chain of processors, starting from the
sensors and eventually producing the desired result, as one huge data
processing algorithm.
So, in the majority of cases,
data processing means processing the results of measurements.
Why Do We Process Data At All?
- First of all, we want to know the world
that we live in. In particular, we want to know the values
of the physical quantities that characterize this world. In a few
cases, we can simply measure them (e.g., body temperature or weight).
But for more complicated characteristics, like the amount of oil in a
well, or the distance to a star, there is no way to measure them directly.
So, we measure what we can, and then, from the results of these
measurements, we try to reconstruct the desired value.
- Second, we want to change this world: we want to
control the objects that already exist, or to change them, or even to
create (synthesize) new objects.
Main Problem
And here comes the problem. Measurements are never absolutely precise. The
result x~ of measuring a physical quantity x (e.g.,
temperature) may differ from the actual value x of that quantity
(here and below, x~ denotes the measured value and x the actual one).
E.g., if you have weighed yourself, and the result is 125 pounds,
this does not mean that your weight equals exactly 125. If the scale has
an accuracy of +/- 2 pounds, then the actual weight can be any number from
123 (= 125 - 2) to 127 (= 125 + 2).
So, the data that we process are not absolutely
precise. This inaccuracy
leads to the inaccuracy in the result of data processing. The
problem is to estimate the resulting inaccuracy.
If we do not know this accuracy, then the result of data processing is
of little practical use: e.g., suppose that we estimated that
a given location contains 100 million
tons of oil, but we do not know the accuracy of this estimate. Then,
if the estimate is 100 +/- 5, this location is worth developing, but if it is
100 +/- 100, then we need further analysis to decide what to do.
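To see, on a toy example, how input inaccuracy turns into output
inaccuracy, here is a minimal sketch in Python (the quantities, the
accuracies, and the function name are invented for this illustration):

    # Hypothetical example: the two sides of a rectangular plot are measured
    # as 50 m and 30 m, each with a guaranteed accuracy of +/- 1 m.
    # How accurate is the computed area?

    def range_of_product(a, b):
        # For a product x * y with x in the interval a and y in the interval b,
        # the extreme values are attained at the endpoints of a and b.
        corners = [x * y for x in a for y in b]
        return (min(corners), max(corners))

    length = (50 - 1, 50 + 1)   # any value from 49 m to 51 m is possible
    width = (30 - 1, 30 + 1)    # any value from 29 m to 31 m is possible

    print(range_of_product(length, width))
    # (1421, 1581): the nominal area 50 * 30 = 1500 m^2 may be off by ~80 m^2

So even a 1-meter inaccuracy in each side leads to an inaccuracy of
about 80 square meters in the computed area; the problem is to compute
such bounds automatically.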
In Many Cases, We Know Probabilities Of Errors
In many cases, the manufacturer of a measuring instrument provides us
with the probabilities of different values of a measurement error.
For such cases, there exist numerous methods that compute statistical
characteristics of the resulting error.
-
W. A. Fuller, Measurement error models, J. Wiley
& Sons, New York, 1987.
-
H. M. Wadsworth, Jr. (editor), Handbook of statistical
methods for engineers and scientists, McGraw-Hill Publishing Co.,
N.Y., 1990.
In Many Cases, We Do Not Know Probabilities
In many
other cases, however, the values of these probabilities are not known.
Instead, the manufacturer provides us with a guaranteed accuracy
D, i.e., with a guaranteed upper bound D on the (absolute value of the)
measurement error d = x~ - x
(e.g., ``the error cannot exceed 0.1'').
If our measurement result is x~, then the
possible values of the actual quantity x = x~ - d form an
interval [x~ - D, x~ + D].
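In code, turning a measurement result and a guaranteed accuracy into
this interval is a one-liner (a sketch; the function name is ours):

    def measurement_to_interval(x_tilde, D):
        # Guaranteed enclosure [x~ - D, x~ + D] for the actual value x.
        return (x_tilde - D, x_tilde + D)

    print(measurement_to_interval(125, 2))  # (123, 127): the weighing example above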
Historical comment.
The first person to describe intervals as a result of
measurement was Norbert Wiener: in 1914, he applied intervals
to measuring distances, and in 1921, to measuring time.
-
N. Wiener, ``A contribution to the theory of relative position'',
Proc. Cambridge Philos. Soc., 1914, Vol. 17, pp. 441-449.
-
N. Wiener, ``A new theory of measurement: a study in the logic of
mathematics'', Proceedings of the London Mathematical Society, 1921.
Since we are dealing with
intervals, the entire area is called interval computations.
In this case, our problem takes the following form:
Main Problem (In Case We Do Not Know
Probabilities)
Suppose that we are interested in the value of a
physical quantity y that is difficult or impossible to measure
directly. To find the value of y, we
measure several other quantities x1,...,xn that
are related to y, and then we reconstruct the value of
y from the results x~i of measuring the xi.
In other words, we have an algorithm f that takes the values
x~i and returns an estimate
y~ = f(x~1,...,x~n)
(this estimate is called the result of indirect measurement).
In some cases, this algorithm consists of simply applying known
formulas to the x~i. In other cases, the algorithm implements a
numerical method for solving a system of equations that connect the xi
and y. These equations can be algebraic, differential, integral,
etc., and the resulting algorithm can be rather complicated.
The problem is to estimate the error of the
estimate y~:
We know:
- n intervals Xi = [x~i - Di, x~i + Di], and
- an algorithm f that transforms n real numbers
x1,...,xn into a real number y = f(x1,...,xn).
We are interested in: estimating the interval
Y = f(X1,...,Xn) of possible values of
y = f(x1,...,xn) when each xi is in Xi = [x~i - Di, x~i + Di].
This is the basic problem of interval computations, with
which the entire area started; see, e.g., the following early papers
on interval computations:
-
R. E. Moore, Automatic error analysis in digital
computation, Lockheed Missiles and Space Co. Technical Report
LMSD-48421, Palo Alto, CA, 1959.
-
R. E. Moore, C. T. Yang, Interval analysis,
Lockheed Missiles and Space Co. Technical Report
LMSD-285875, Palo Alto, CA, 1959.
-
R. E. Moore, "The automatic analysis and control of error in
digital computation based on the use of interval numbers", In: L. B.
Rall (ed.), Error in Digital Computation. Proceedings of an
Advanced Seminar Conducted by the Mathematics Research Center, U.S.
Army, at the University of Wisconsin, Madison, October 5-7, 1964,
Vol. 1, John Wiley, N. Y., 1964, pp. 61-130.
-
R. E. Moore, Interval Analysis, Prentice Hall, Englewood Cliffs,
NJ, 1966.
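To make this basic problem concrete, here is a minimal Python sketch of
the classical idea: replace every arithmetic operation inside f by its
interval counterpart, so that the computed interval is guaranteed to
contain the true range (though it may overestimate it). This
illustration is ours, not code from the papers above, and it ignores
outward rounding, which a careful implementation would add:

    class Interval:
        # A closed interval [lo, hi] with the standard interval-arithmetic rules.
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            corners = [self.lo * other.lo, self.lo * other.hi,
                       self.hi * other.lo, self.hi * other.hi]
            return Interval(min(corners), max(corners))

        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    # Example: f(x1, x2) = x1 * x2 - x1, with x1 in [1, 2] and x2 in [3, 4].
    def f(x1, x2):
        return x1 * x2 - x1

    print(f(Interval(1, 2), Interval(3, 4)))
    # [1, 7]: this encloses the true range [2, 6] but overestimates it,
    # because x1 occurs twice in f (the "dependency" effect)

The overestimation in this example comes from the fact that x1 occurs
twice in f; much of the interval computations literature is devoted to
techniques (e.g., splitting the input intervals into subintervals) that
make such enclosures tighter.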
Where Does This Main Problem Fit In Traditional Mathematics?
- For many algorithms f, methods of estimating the
accuracy of the result y have been developed in
numerical mathematics. These methods often produce exact
error bounds.
- Numerical mathematics does not always give us a solution
to our problem. Its error estimation methods
are often very complicated and
require sophisticated mathematical techniques. Because of that, the
algorithms f for which such methods are known form a small subset of
the set of all algorithms that are used for data processing. New data
processing problems appear every day, and new data processing
algorithms are being designed to solve them. For a
new data processing algorithm, no error estimation method is typically
known for a considerable period of time.
To handle these algorithms, we need a general tool that would, given
f, x~i, and Di, automatically estimate the error
bound for y.
-
Problems that appear when we do not know the exact distribution,
but we know instead that a distribution belongs to a given class
P of distributions, are called problems of robust
statistics.
-
P. J. Huber, Robust statistics, Wiley, N.Y., 1981.
So,
from the statistical viewpoint, interval computations is a particular case
of robust statistics, in which P is the class of all
distributions located on a given interval.
- Robust statistics does not always give us the solution
to our problem because robust methods have been developed only for
some functions f.
- The case when, instead of a number, we have an interval
(or, more generally, a set) of possible values has been studied in
mathematics under the name of set-valued analysis:
-
J.-P. Aubin, A. Cellina, Differential inclusions,
Springer-Verlag, Grundlehren der math. Wiss., Vol. 264, 1984.
-
J.-P. Aubin, H. Frankowska, Set-valued analysis, Birkhauser,
Boston, MA, 1990.
In particular, differential equations with an
uncertain (set-valued) right-hand side (called differential
inclusions) are used, e.g., in control problems.
- Set-valued analysis does not always give us the solution
to our problem because its methods have been developed only for
specific f, e.g., when
f is a solution of a differential or an
integral equation of a certain type.
A Brief History of Interval Computations
- It started here, in the US.
-
The main ideas of Interval Computations appeared in the USA, in the Ph.D.
dissertation of R. E. Moore, defended at Stanford in 1962.
The first application of interval computations
was presented by R. E. Moore in 1959 (see the technical reports cited above).
The first monograph, also by R. E. Moore,
appeared in the USA in 1966.
- Moved to Europe.
-
Later, the center of interval computations moved to Europe, mainly to Germany.
One of the reasons was that in the US, manufacturers were, in
average, less
cost-conscious, and they were thus less
worried about inaccuracy of sensors: ``if a sensor is not good enough,
let's spend some more money and buy a better one''.
The main users of this techniques were scientists, for whom this
solution did not work, because they were working at
the cutting edge of accuracy, and they were already using the best
possible sensors to measure their micro-quantities.
- As a result, Interval Computations is not widely
known in the US, while in Germany, it is a part of the standard
qualifying exam for several areas of
Numerical Mathematics. Germany was also where
the first specialized journal appeared, and it still hosts regular
conferences on interval computations.
- A recent outburst of activity
-
Recently, there has been an outburst of activity in the USA and
internationally, related to Interval Computations:
-
A new international journal, Interval Computations, was launched in
1991 (since 1995, it has been published under the new title
Reliable Computing).
-
In 1993, a well-represented International Conference on Interval
Computations was held in Lafayette, LA.
-
In 1995, an International Workshop on Applications of Interval
Computations was held in El Paso, Texas.
Applications
Numerous applications of interval computations are described in the
Proceedings of the International Workshop on Applications of Interval
Computations, held in El Paso, Texas, on February 23-25, 1995.