E. C. Zeeman, Catastrophe theory. Selected papers, 1972-1977, Addison-Wesley, London, 1977.
T. Poston and I. N. Stewart, Catastrophe theory and its applications, Pitman, London, San Francisco, 1978.
J. Guckenheimer, "Comments on catastrophe and chaos", Some mathematical questions in biology. IX, Lectures on Mathematics in the Life Sciences, Amer. Math. Soc., Providence, RI, 1978, pp. 1 ff.
V. Kreinovich, "Letter on catastrophe theory", Notices Amer. Math. Soc., 1979, Vol. 26, No. 5.
P. T. Saunders, An introduction to catastrophe theory, Cambridge Univ. Press, Cambridge, England, 1980.
Catastrophe theory and applications, Wiley, N.Y., 1981.
R. Gilmore, Catastrophe theory for scientists and engineers, Wiley, N.Y., 1981.
V. I. Arnold, Catastrophe theory, Springer-Verlag, Berlin, Heidelberg, N.Y., 1984.
Laws of physics are typically given in the form of a variational principle, i.e., each of these laws states that some characteristic $S$ of the fields and particle coordinates, called the action, must take an optimal (usually, minimal) value. These laws can be used to predict the values of the desired physical quantities based on the measurement results. The action $S$ is usually a smooth function of its variables; therefore, e.g., in field theory, we can usually extract a system of differential equations from the variational principle and thus predict the future values of the fields, provided that we have measured their current values at all spatial points. There are infinitely many points in space, and, in reality, we cannot measure infinitely many values; so, we measure finitely many values and use an approximate version of the variational principle (or of the resulting differential equations) to predict the values of the desired physical quantities.
In general, let us assume that we measure the quantities $m_1,\ldots,m_n$, and we are interested in the values of the quantities $x_1,\ldots,x_k$. To determine $x_i$ from $m_j$, we use the variational principle $$S(m_1,\ldots,m_n,x_1,\ldots,x_k)\to\min_{x_1,\ldots,x_k}$$ for a known function $S$. Differentiating with respect to $x_i$, we get a non-linear system of equations: $S_i(m_1,\ldots,m_n,x_1,\ldots,x_k)=0$, $1\le i\le k$, where $S_i$ denotes the partial derivative of $S$ with respect to $x_i$.
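As a minimal numerical sketch of this step (the toy action $S$ and all names below are our own illustration, not from the text), one can determine $x$ from $m$ by direct minimization:

\begin{verbatim}
from scipy.optimize import minimize

# Toy action S(m, x): a hypothetical smooth function of one
# measured quantity m and one desired quantity x.
def S(m, x):
    return (x - m)**2 + 0.1 * x**4

def x_of_m(m):
    # Minimizing S over x solves the equation S_x(m, x) = 0.
    return minimize(lambda x: S(m, x[0]), x0=[0.0]).x[0]

print(x_of_m(1.0))  # the value of x predicted from m = 1.0
\end{verbatim}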
Since the functions $S_i$ are smooth (differentiable), for almost all values of $m=(m_1,\ldots,m_n)$, the dependency of $x_i$ on $m_j$ is also smooth. Thus, in a small neighborhood of a point $m$, there exists a constant $C>0$ such that, to determine $x_i$ with a desired accuracy $\varepsilon>0$, it is sufficient to measure $m_j$ with an accuracy $C\cdot\varepsilon$. In terms of the number of digits: to find $d$ digits of $x_i$, we must, for some appropriate constant $c$, know $m_j$ with an accuracy of $d+c$ binary (or decimal) digits.
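In the smooth case, the constant $C$ is, in effect, a bound on the derivative of $x_i$ with respect to $m_j$. A minimal numerical sketch (the toy dependency is our own choice):

\begin{verbatim}
import numpy as np

# Toy smooth dependency x(m) = sin(m): perturbing m by delta
# changes x by approximately |x'(m)| * delta, i.e., the required
# input accuracy grows only linearly with the desired accuracy.
m, delta = 1.0, 1e-8
print(abs(np.sin(m + delta) - np.sin(m)))  # approx. 5.4e-9
print(abs(np.cos(m)) * delta)              # first-order bound C * delta
\end{verbatim}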
The dependency of $x_i$ on $m_j$ is, however, not always smooth: e.g., in state equations, we have {\it phase transitions}, in which the dependency changes in a non-smooth manner (sometimes even discontinuously). This non-smoothness drastically decreases the accuracy of the result: e.g., if the dependency is of the type $x_i=\sqrt{m_j}$ (i.e., $x_i^2-m_j=0$), then, for $m_j\approx 0$, to find $d$ digits of $x_i$, we must know $2d$ digits of $m_j$ (in terms of accuracy: to compute $x_i$ with an accuracy $\varepsilon$, we must know $m_j$ with an accuracy $C\cdot\varepsilon^2$ for some $C$). For cubic roots, the situation is even worse. In general, the higher the degree of the equation that determines $x_i$, the worse the accuracy. How bad can it be?
Of course, even for a single measurement result $m$ and a single desired variable $x$, we can have, in principle, an equation $x^N-m=0$ for an arbitrarily large $N$; as $N$ grows, this equation requires better and better accuracy in $m$ to achieve the same accuracy in $x$. So, in principle, the loss of accuracy can be arbitrarily bad.
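A quick numerical illustration of this accuracy loss (our own toy computation): near $m\approx 0$, an accuracy $\delta$ in $m$ yields only an accuracy of about $\delta^{1/N}$ in the root $x=m^{1/N}$:

\begin{verbatim}
# For x**N = m near m = 0, an input accurate to 1e-12 yields
# an output accurate only to about (1e-12)**(1/N).
delta = 1e-12
for N in (2, 3, 5):
    print(N, delta ** (1.0 / N))
# N = 2: 1e-06 (half the digits), N = 3: 1e-04, N = 5: ~4e-03
\end{verbatim}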
The next natural question is: how frequent are these bad cases? Are almost all situations bad, or are bad situations (in some reasonable sense) rare?
Catastrophe theory is a formalism that provides a partial answer to this question. Its basic result, first proven by R. Thom and E. C. Zeeman, states that if the number $k$ of unknowns is 5 or less, then for almost all functions $S$ (in some reasonable topological sense), the solution $x_i(m_1,\ldots,m_n)$ of the equations $S_j=0$ can be locally represented as a composition of three mappings:
a smooth mapping $m_1,\ldots,m_n\to m'_1,\ldots,m'_n$;
a mapping $m'_j\to x'_i$ described by the condition $$S_a(m'_1,\ldots,m'_n,x'_1,\ldots,x'_k)\to\min,$$ where $S_a$ is a function from a finite list of polynomial functions called {\it elementary catastrophes};
a smooth mapping $x'_1,\ldots,x'_k,m_1,\ldots,m_n\to x_1,\ldots,x_k$.
Thus, for {\it almost all} functions $S$, the only part of the algorithm that requires a non-linear increase in the accuracy is the second one, and in this second part, the degree of the polynomial $S_a$ (and hence the needed increase in accuracy) is bounded.
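For illustration, one of these standard polynomials is the cusp catastrophe $S_a(m'_1,m'_2,x')={x'}^4/4+m'_1\,{x'}^2/2+m'_2\,x'$ (this standard form is well known; the code below is our own minimal sketch of the second step). At the cusp point $m'_1=0$, we recover exactly the cubic-root accuracy loss discussed above:

\begin{verbatim}
import numpy as np

# Cusp catastrophe: S_a(m1, m2, x) = x**4/4 + m1*x**2/2 + m2*x;
# setting dS_a/dx = 0 gives the cubic x**3 + m1*x + m2 = 0.
def x_of_m(m1, m2):
    roots = np.roots([1.0, 0.0, m1, m2])
    real = roots[np.abs(roots.imag) < 1e-6].real
    S_a = real**4 / 4 + m1 * real**2 / 2 + m2 * real
    return real[np.argmin(S_a)]  # global minimizer of S_a

# At m1 = 0, x depends on m2 as a cube root: an accuracy of
# 1e-9 in m2 yields only an accuracy of about 1e-3 in x.
print(x_of_m(0.0, -1e-9))  # approx. 1e-3
\end{verbatim}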
This result is proven for {\it almost all} functions $S$; it might, in principle, happen that it is not true for the actual physical action functions $S$; luckily, however, this decomposition result holds for the known physical action functions as well.
When we represent a mapping in the form of this composition, the only part of this mapping in which we must take special care of accuracy (because linearization estimates do not work) is the second one, in which we actually follow one of the {\it standard} systems $S_a$. For these finitely many standard systems, we can pre-compute the desired accuracy.
So far, we have talked about situations in which we already know $S$. In many real-life problems, however, we know that there is a variational principle, but we do not know the exact function $S$. In such situations, we must determine $S$ from the experiments. Usually, to determine $S$, we expand $S$ into a power series (cut after a certain power), and determine the (unknown) coefficients of this series from the results of the experiments. We have mentioned that it is computationally advantageous to find $x_i$ using the three-mapping representation. Therefore, rather than finding $S$ and then determining the mappings, it is easier to find the mappings directly from the experiments: namely, we expand the formulas for these mappings into Taylor series, and find the unknown coefficients of these expansions directly from the experimental results.
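A minimal sketch of this fitting step (the one-dimensional toy data below are our own illustration): we cut the Taylor series after the quadratic term and find its coefficients by least squares:

\begin{verbatim}
import numpy as np

# Toy 'experimental' data: measurements m and observed values x(m),
# generated here from a known quadratic plus noise.
rng = np.random.default_rng(0)
m = np.linspace(-1.0, 1.0, 50)
x_obs = 0.5 + 2.0 * m - 0.3 * m**2 + 0.01 * rng.standard_normal(50)

# Taylor expansion x(m) = c0 + c1*m + c2*m**2, with the coefficients
# determined directly from the experimental results.
A = np.vstack([np.ones_like(m), m, m**2]).T
c, *_ = np.linalg.lstsq(A, x_obs, rcond=None)
print(c)  # approx. [0.5, 2.0, -0.3]
\end{verbatim}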
This methodology forms the basis of {\it applied catastrophe theory}. There exist many successful applications of this methodology (and even more suggestions and speculations about the possibility of other applications).
Comment. Our description of catastrophe theory is aimed at the interval computations community and is, therefore, different from the usual expositions of this theory: although the computational (including computational accuracy) aspects of catastrophe theory are (implicitly or explicitly) present in the papers and monographs, these aspects are usually overshadowed either by complicated mathematics, or by the descriptions of successful applications, or (as in the original papers of Thom) by philosophy.
Thomas Swenson and V. Kreinovich