Jie Chen, "On computing the maximal delay intervals for stability of linear delay systems", IEEE Transactions on Automatic Control, 1995, Vol. 40, No. 6, pp. 1087-1093.
Control theory usually considers systems with instantaneous reaction, in which the control $u(t)$ applied at a moment $t$ is uniquely determined by the state $x(t)=(x_1(t),\ldots,x_n(t))$ at this same moment $t$. The dynamics of systems with such controllers is thus described by differential equations of the type $\dot x(t)=f(x(t))$. In real life, there is usually a time delay $h>0$ between the measurement and the formation of the control response. For systems with such delayed controllers, the change in the state $\dot x(t)$ at the moment $t$ is determined not only by the state $x(t)$ at this very moment of time, but also by the state $x(t-h)$ at the "previous" moment of time $t-h$.
The control may depend not only on the state $x(t-h)$ at the previous moment of time, but also on the control applied at that previous moment. This control, in its turn, depends on the state of the controlled system at a "pre-previous" moment of time $(t-h)-h$, etc. As a result, in general, the dynamics $\dot x(t)$ of the controlled system depends on $x(t)$, $x(t-h)$, $x(t-2h)$, etc.: $\dot x(t)=f(x(t),x(t-h),\ldots,x(t-qh))$. Alternatively, we can also explicitly describe the dependence on the rates of change at the previous moments of time: $\dot x(t)=f(x(t),x(t-h),\ldots,x(t-qh),\dot x(t-h),\ldots,\dot x(t-ph))$.
The idea of control is to keep the values of the controlled parameters close to the desired ones. So, if we take, as the variables $x(t)$ describing the system, the deviations from the desired values, we can say that the goal of control is to keep the values $x(t)$ small. Since these values are small, we can expand the function $f$ in a power series and neglect quadratic and higher-order terms. As a result, we get a linear formula $\dot x(t)=A_0x(t)+A_1x(t-h)+\ldots+A_qx(t-qh)$ or, in the second case, $\dot x(t)=A_0x(t)+A_1x(t-h)+\ldots+A_qx(t-qh)+B_1\dot x(t-h)+\ldots+B_p\dot x(t-ph)$ for some matrices $A_i$ and $B_j$.
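To make the linearized model concrete, here is a minimal simulation sketch (not from the paper) for the simplest case of a single delay term, $\dot x(t)=A_0x(t)+A_1x(t-h)$; the forward-Euler scheme, the step size, and the sample matrices below are assumptions chosen purely for illustration:
\begin{verbatim}
import numpy as np

def simulate_dde(A0, A1, h, x0, t_end, dt=1e-3):
    """Forward-Euler simulation of x'(t) = A0 x(t) + A1 x(t - h)
    with the constant history x(t) = x0 for t <= 0 (illustration only;
    a production code would use a dedicated DDE solver)."""
    n = len(x0)
    steps = int(round(t_end / dt))
    delay_steps = int(round(h / dt))
    xs = np.empty((steps + 1, n))
    xs[0] = x0
    for k in range(steps):
        # delayed state x(t - h); before t = h it is the initial history
        x_delayed = xs[k - delay_steps] if k >= delay_steps else x0
        xs[k + 1] = xs[k] + dt * (A0 @ xs[k] + A1 @ x_delayed)
    return xs

# Scalar example x'(t) = -x(t - h): the amplitude decays for h = 1.0
# (which is below pi/2) and grows for h = 2.0 (which is above pi/2).
for h in (1.0, 2.0):
    traj = simulate_dde(np.zeros((1, 1)), np.array([[-1.0]]),
                        h=h, x0=np.array([1.0]), t_end=40.0)
    print("h =", h, " amplitude over the last 10 time units:",
          np.abs(traj[-10000:]).max())
\end{verbatim}
For this scalar example, the simulated trajectory indeed dies out for the shorter delay and oscillates with growing amplitude for the longer one.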
If the delay is too long, then it takes too long to react to the deviations and, as a result, an initially stable system can become unstable. From this viewpoint, it is desirable to use small delays $h$. On the other hand, the smaller the delay $h$ we require, the more difficult it is to satisfy this requirement and, therefore, the more complicated and expensive the control system becomes. It is therefore necessary to choose the largest $h$ that still guarantees stability (i.e., to find the maximal delay interval for which the resulting system remains stable).
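To get a feel for how a long delay destroys stability, it is useful to recall a standard textbook example (not taken from the paper): the scalar system $\dot x(t)=-a\,x(t-h)$ with a feedback gain $a>0$. Its characteristic equation is $s+a\,e^{-sh}=0$. Stability is lost when a root reaches the imaginary axis, i.e., when $s=i\omega$ for some $\omega>0$: $$i\omega=-a\,e^{-i\omega h}.$$ Comparing the absolute values of both sides, we get $\omega=a$; comparing the phases, we get $\omega h=\pi/2$. Hence this system is stable exactly when $h<\pi/(2a)$: the stronger the feedback (i.e., the larger $a$), the shorter the delay that the system can tolerate.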
In this paper, an algorithm is presented that, for given $A_i$ and $B_j$, finds the desired maximal delay $h$.

Suhuan Chen, Zhiping Qiu, and Datong Song, "A new method for computing the upper and lower bounds on frequencies of structures with interval parameters", Mechanics Research Communications, 1995, Vol. 22, No. 5, pp. 431-439.
The natural frequencies $\omega$ of a structure can be determined as the square roots $\omega=\sqrt{\lambda}$ of the solutions $\lambda$ of the so-called generalized eigenvalue problem $Ku=\lambda Mu$, where $K=\|k_{ij}\|$ is the {\it stiffness matrix} and $M=\|m_{ij}\|$ is the {\it mass matrix}. In real life, we often know only the intervals of possible values of $k_{ij}$ and $m_{ij}$; in such situations, we want to know the interval of possible values of $\lambda$.
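To make the setting concrete, here is a small sketch (not from the paper) that computes the nominal frequencies of a made-up two-degree-of-freedom structure and then brackets the largest eigenvalue by naive random sampling of a $\pm 5\%$ interval uncertainty in the matrix entries; such sampling only gives an inner estimate of the true range, and it is not the method proposed in the paper:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def sym_perturbation(shape, rel, rng):
    """Symmetric matrix of random relative factors in [-rel, +rel]."""
    r = rng.uniform(-rel, rel, shape)
    return np.triu(r) + np.triu(r, 1).T

# Made-up 2-degree-of-freedom example: nominal stiffness and mass matrices.
K0 = np.array([[ 2.0, -1.0],
               [-1.0,  2.0]])
M0 = np.eye(2)

# Nominal case: solve K u = lambda M u, then omega = sqrt(lambda).
lam0 = eigh(K0, M0, eigvals_only=True)
print("nominal frequencies:", np.sqrt(lam0))

# Naive bracketing of the largest eigenvalue when every entry of K and M
# is only known to within +/-5% (random sampling gives an inner estimate
# of the true interval; this is NOT the method of the paper).
rng = np.random.default_rng(0)
samples = []
for _ in range(5000):
    K = K0 * (1.0 + sym_perturbation(K0.shape, 0.05, rng))
    M = M0 * (1.0 + sym_perturbation(M0.shape, 0.05, rng))
    samples.append(eigh(K, M, eigvals_only=True)[-1])
print("sampled range of the largest eigenvalue: [%.3f, %.3f]"
      % (min(samples), max(samples)))
\end{verbatim}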
There exist several interval methods for solving the generalized interval eigenvalue problem; these methods are mainly {\it algebraic}, i.e., based directly on the equation $Ku=\lambda Mu$. In this paper, a new method is proposed that is based on the known representation of the eigenvalue problem as an optimization problem (via the so-called {\it Rayleigh quotient}): the largest eigenvalue is equal to $$\lambda=\max_{u\ne 0}{u^T Ku\over u^T Mu};$$ similar formulas describe the other eigenvalues (the corresponding formulas are slightly more complicated, with a $\min\max$ instead of a $\max$).
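As a quick numerical illustration of this representation (again, not from the paper), one can check that a standard generalized eigensolver and the Rayleigh quotient give the same largest eigenvalue; the matrices $K$ and $M$ below are randomly generated stand-ins for a stiffness and a mass matrix:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Random symmetric K and positive-definite M standing in for the
# stiffness and mass matrices (made-up data, for illustration only).
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
K = A + A.T                       # symmetric "stiffness" matrix
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)       # symmetric positive definite "mass" matrix

# Largest generalized eigenvalue and its eigenvector from a direct solver.
lam, U = eigh(K, M)
u_star = U[:, -1]

def rq(u):                        # Rayleigh quotient u^T K u / u^T M u
    return (u @ K @ u) / (u @ M @ u)

print("largest eigenvalue:         ", lam[-1])
print("quotient at its eigenvector:", rq(u_star))    # the same number
print("best of 10000 random u's:   ",
      max(rq(u) for u in rng.standard_normal((10000, n))))  # never exceeds it
\end{verbatim}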
The method is illustrated on the example of a multi-story structure.