I-Cheng Chang, Cheng-Ching Yu, and Ching-Tien Liou, "Model-based approach for fault diagnosis. 2. Extension to interval systems", Ind. Eng. Chem. Res., 1995, Vol. 34, pp. 828-844.
In process control, it is important to detect faults as soon as they appear. Faults usually mean serious deviations of process parameters $a_j$ from their desired values $a^{(0)}_j$. For example, in chemical engineering, we must keep track of the feed flow, feed concentration, sensor failures, etc. The problem is that many of these important parameters $a_1,...,a_n$ are difficult to measure directly. Fortunately, these parameters are related to other quantities $m_1,...,m_k$ that we can continuously trace: $m_i=f_i(a_1,...,a_n)$ for some known functions $f_i$. These dependencies $f_i$ form a model.
Usually, the differences $\Delta a_j=a_j-a^{(0)}_j$ are small, so we can neglect quadratic and higher order terms in the formulas that express the observable deviations $\Delta m_i$ in terms of the parameter deviations $\Delta a_j$: $$\Delta m_i=m_i-m^{(0)}_i=f_i(a_1,...,a_n)-f_i(a^{(0)}_1,...,a_n^{(0)})=$$ $$f_i(a^{(0)}_1+\Delta a_1,...,a^{(0)}_n+\Delta a_n)- f_i(a^{(0)}_1,...,a_n^{(0)})\approx\sum_{j=1}^n {\partial f_i\over\partial a_j}(a^{(0)}_1,...,a_n^{(0)})\,\Delta a_j.$$ As a result, we get the following system of linear equations: $\sum_{j=1}^n p_{ij}\Delta a_j=\Delta m_i,$ where $$p_{ij}= {\partial f_i\over\partial a_j}(a^{(0)}_1,...,a_n^{(0)}).$$ Since faults are usually rare, we can safely assume that at any moment of time, at most one fault occurs, i.e., at most one deviation $\Delta a_j$ differs from 0. If the $j$-th fault occurs, then $\Delta m_1=p_{1j}\Delta a_j$ and $\Delta m_2=p_{2j}\Delta a_j$, and therefore, $\Delta m_1/\Delta m_2=p_{1j}/p_{2j}$.
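To make the linearization and the ratio test concrete, here is a minimal sketch in Python; the model $f$, the nominal point, the fault magnitude, and the helper name `jacobian` are all hypothetical, chosen only for illustration.

```python
import numpy as np

def jacobian(f, a0, eps=1e-6):
    """Numerically estimate p_ij = df_i/da_j at the nominal point a0."""
    a0 = np.asarray(a0, dtype=float)
    m0 = np.asarray(f(a0))
    P = np.empty((m0.size, a0.size))
    for j in range(a0.size):
        a = a0.copy()
        a[j] += eps
        P[:, j] = (np.asarray(f(a)) - m0) / eps
    return P

# Hypothetical two-measurement, three-parameter model m = f(a).
f = lambda a: np.array([a[0] * a[1] + a[2], a[0] + 2.0 * a[1] ** 2])
a0 = np.array([1.0, 2.0, 0.5])
P = jacobian(f, a0)

# Suppose fault j = 1 occurs (only a_1 deviates); then dm_i ~ p_i1 * da_1,
# so the measured ratio dm_1/dm_2 matches the fault signature p_11/p_21.
da = np.array([0.0, 0.1, 0.0])
dm = P @ da
print(dm[0] / dm[1], P[0, 1] / P[1, 1])  # the two ratios agree
```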
In the ideal situation, when all measurements are absolutely precise and all the coefficients $p_{ij}$ are precisely known, we can thus uniquely diagnose the fault as the $j$ for which $\Delta m_1/\Delta m_2=p_{1j}/p_{2j}$ (provided that these ratios are different for different $j$).
In reality, measurements are not precise, and we do not know the exact values of the coefficients $p_{ij}$. Besides, in addition to a drastic deviation $\Delta a_j$ that leads to a fault, other parameters can also differ (slightly) from their desired values $a^{(0)}_j$. As a result, we only know intervals of possible values of $\Delta m_i$ and $p_{ij}$.
A natural way to handle this problem is to consider intervals of possible values of $\Delta m_1/\Delta m_2$ and $p_{1j}/p_{2j}$. Then, all faults $j$ for which $\Delta m_1/\Delta m_2$ can be equal to $p_{1j}/p_{2j}$ (i.e., for which the corresponding intervals overlap) are considered possible.
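A minimal sketch of this overlap test, under the traditional worst-case rules of interval arithmetic; the interval bounds, the three candidate signatures, and the helper names `idiv` and `overlap` are invented for illustration (the denominator interval is assumed not to contain 0).

```python
def idiv(x, y):
    """Traditional interval division [x]/[y], assuming 0 not in [y]."""
    q = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(q), max(q))

def overlap(x, y):
    """Two intervals overlap iff neither lies entirely to one side."""
    return x[0] <= y[1] and y[0] <= x[1]

dm1, dm2 = (0.08, 0.12), (0.70, 0.90)  # measured deviation intervals
ratio = idiv(dm1, dm2)                 # interval for dm_1/dm_2

# Hypothetical signature intervals p_1j/p_2j for three candidate faults.
signatures = {1: (0.05, 0.09), 2: (0.10, 0.16), 3: (0.30, 0.40)}
possible = [j for j, s in signatures.items() if overlap(ratio, s)]
print(possible)  # -> [1, 2]: two faults remain possible
```

On these hypothetical numbers, two of the three candidate faults survive the test, which illustrates the difficulty addressed next.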
This approach may leave too many faults marked as possible. Can we reduce the number of possible faults and thus make a more definite fault diagnosis? This can happen if we can somehow make the intervals narrower. To make them narrower, the authors propose the following idea: intervals generated by traditional interval arithmetic are not always very narrow, because we must consider all possible cases, including the worst ones, in which the uncertainties of different components happen to combine into the largest possible resulting error. In real life, such worst-case combinations are relatively rare, so we can simply consider it impossible that both a fault and such an unfortunate error combination occur at the same time. Under this assumption, in fault diagnosis, we can use a version of interval arithmetic that results in narrower intervals. In particular, the authors propose the following formulas for $*\in\{+,-,\cdot,/\}$: $[a^-,a^+]*[b^-,b^+]=[\min(a^-*b^-,a^+*b^+),\max(a^-*b^-,a^+*b^+)]$.
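A minimal sketch of this endpoint-paired arithmetic next to the traditional formula, shown for division (the operation used in the ratio test); the interval bounds reuse the hypothetical numbers above.

```python
def traditional(op, x, y):
    """Worst-case rule: take extremes over all four endpoint combinations."""
    vals = [op(x[0], y[0]), op(x[0], y[1]), op(x[1], y[0]), op(x[1], y[1])]
    return (min(vals), max(vals))

def paired(op, x, y):
    """Authors' formula: combine only matching endpoints a^-*b^-, a^+*b^+."""
    lo, hi = op(x[0], y[0]), op(x[1], y[1])
    return (min(lo, hi), max(lo, hi))

div = lambda a, b: a / b
dm1, dm2 = (0.08, 0.12), (0.70, 0.90)
print(traditional(div, dm1, dm2))  # ~(0.089, 0.171): worst-case bounds
print(paired(div, dm1, dm2))       # ~(0.114, 0.133): strictly narrower
```

On these hypothetical numbers, the paired interval no longer overlaps the signature interval $(0.05, 0.09)$ of fault 1, so only fault 2 remains possible; this is exactly the narrowing effect that makes the diagnosis more definite.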
The resulting algorithm is illustrated using the example of a chemical reactor, for which the authors' algorithm successfully locates the faults.