## Abstract

Many robust control design methods require a linear model consisting of a nominal model augmented by an uncertainty description. A general form for such a model is

(1) *G* = *G*_{0} + *H*_{21}Δ(*I* - *H*_{11}Δ)^{-1}*H*_{12}

where *G* is the transfer function of the true system, *G*_{0} is a nominal model, and Δ is a perturbation causing uncertainty about the true system. Depending on the particular type of uncertainty (additive, input or output multiplicative, inverse types of uncertainty, combinations of various types of uncertainty), *H*_{11}, *H*_{12} and *H*_{21} can contain combinations of (known) constant matrices and the nominal model.
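As a small numerical sketch (not from the paper, with hypothetical matrices), the following shows how particular choices of the *H* blocks in (1) recover two standard uncertainty types:

```python
import numpy as np

# Hypothetical 2x2 example at a single frequency point: evaluate the
# uncertainty model G = G0 + H21 @ Delta @ inv(I - H11 @ Delta) @ H12
# of eq. (1) for two standard choices of the H blocks.
rng = np.random.default_rng(0)
G0 = rng.standard_normal((2, 2))
Delta = 0.1 * rng.standard_normal((2, 2))
I = np.eye(2)

def lft(G0, H11, H12, H21, Delta):
    """Uncertainty model of eq. (1)."""
    return G0 + H21 @ Delta @ np.linalg.inv(I - H11 @ Delta) @ H12

# Additive uncertainty: H11 = 0, H12 = H21 = I  ->  G = G0 + Delta
G_add = lft(G0, np.zeros((2, 2)), I, I, Delta)
assert np.allclose(G_add, G0 + Delta)

# Output multiplicative: H11 = 0, H12 = G0, H21 = I  ->  G = (I + Delta) G0
G_mult = lft(G0, np.zeros((2, 2)), G0, I, Delta)
assert np.allclose(G_mult, (I + Delta) @ G0)
```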

Assume that we have information about the true system in the form of a number of possible transfer functions *G*_{k}, *k* = 1,...,*N*. The nominal model and the perturbation Δ_{k} associated with *G*_{k} are unknown, but they have to satisfy

(2) *G*_{k} = *G*_{0} + *H*_{21}Δ_{k}(*I* - *H*_{11}Δ_{k})^{-1}*H*_{12}, *k* = 1,...,*N*.

How should *G*_{0} be determined? It has been shown that ||Δ||_{∞} is a control relevant measure of the distance between *G* and *G*_{0} for models of the form (1) and that the achievable stability margin by feedback control is inversely proportional to this distance. For a given type of uncertainty model, this suggests that *G*_{0} should be determined by solving the optimization problem

(3) min_{G_{0}} max_{k} ||Δ_{k}||_{∞}

subject to the appropriate data matching condition (2). Obviously, the type of uncertainty model giving the smallest minimum is the best one according to this measure.

If information about the system is obtained through identification, input-output data are available. An attractive way of removing noise from the output is to fit a model *G*_{k} to the data and to calculate a noise-free output *y*_{k} by *y*_{k} = *G*_{k}*u*_{k}, where *u*_{k} is the input in experiment *k*. Since the purpose of the experiments in this context is to excite the system in various ways, the inputs do not tend to be persistently exciting in all individual experiments. Thus, *G*_{k} only applies to the particular input *u*_{k}, and the relevant information is the input-output data {*u*_{k}, *y*_{k}}, *k* = 1,...,*N*. This means that the model matching condition (2) should be replaced by the input-output matching condition

(4) *y*_{k} = *G*_{0}*u*_{k} + *H*_{21}Δ_{k}(*I* - *H*_{11}Δ_{k})^{-1}*H*_{12}*u*_{k}.

It can be shown that the use of (4) instead of (2) results in a less conservative uncertainty model.

Our modeling approach is to model *G*_{0} in the frequency domain using sampled frequency responses of the input-output data. Because of the availability of *G*_{k}, these are easy to calculate for standardized inputs. The available information is thus {*u*_{k}(jω), *y*_{k}(jω), ω ∈ Ω}, *k* = 1,...,*N*, and we solve the optimization problem frequency-by-frequency, i.e.

(5) min_{G_{0}(jω)} max_{k} ||Δ_{k}(jω)||_{2} s.t. (4), ∀ω ∈ Ω.

The uncertainty Δ* _{k}* is assumed to be unstructured.
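As a hypothetical scalar illustration (data and grid are invented for the demo), the frequency-wise problem (5) with additive uncertainty has a simple geometric reading: since |*y*_{k} - *g u*_{k}|/|*u*_{k}| = |*y*_{k}/*u*_{k} - *g*|, minimizing the worst-case perturbation over *g* = *G*_{0}(jω) amounts to finding the Chebyshev center of the raw estimates *y*_{k}/*u*_{k}:

```python
import numpy as np

# Hypothetical input-output samples at one frequency point.
u = np.array([1.0 + 0.0j, 0.5 + 0.5j, 2.0j])
y = np.array([2.0 + 1.0j, 1.0 + 1.5j, -1.0 + 4.0j])
p = y / u   # raw frequency-response estimates y_k / u_k

def worst_delta(g):
    """max_k ||Delta_k|| for the scalar additive model y_k = g*u_k + Delta_k*u_k."""
    return np.max(np.abs(y - g * u) / np.abs(u))

# Crude grid search over a box around the raw estimates (illustration only;
# the paper solves this as a convex optimization problem instead).
re = np.linspace(p.real.min() - 1, p.real.max() + 1, 101)
im = np.linspace(p.imag.min() - 1, p.imag.max() + 1, 101)
g_opt = min((r + 1j * i for r in re for i in im), key=worst_delta)
gamma_opt = worst_delta(g_opt)

# g_opt approximates G0(jw) at this frequency; gamma_opt is the min-max
# perturbation norm, i.e. the radius of the smallest disk covering p.
assert worst_delta(g_opt) <= worst_delta(p.mean())
```

For these three points the smallest enclosing disk passes through all of them, and the grid happens to contain its exact center, so `gamma_opt` matches the analytic radius 0.25·√2.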

For some types of uncertainty, the optimization problem can easily be formulated as a convex optimization problem. Additive uncertainty, for example, which in its basic form is described by

(6) *y*_{k} = *G*_{0}*u*_{k} + Δ_{k}*u*_{k}, *k* = 1,...,*N*,

results in the optimization problem

(7) min_{G_{0}} γ s.t. [γ*I*, *y*_{k} - *G*_{0}*u*_{k}; (*y*_{k} - *G*_{0}*u*_{k})^{*}, *u*_{k}^{*}*u*_{k}] ≥ 0, *k* = 1,...,*N*, ∀ω ∈ Ω

which is a convex optimization problem. Here *A*^{*} denotes the complex conjugate transpose of *A* and *P* ≥ 0 denotes that *P* is positive semidefinite.
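A numerical sketch (with invented single-frequency data) of the connection between (6) and the block-matrix condition in (7): the smallest-norm Δ_{k} matching the data is the least-squares solution, and the Schur complement of the block matrix certifies the bound:

```python
import numpy as np

# Hypothetical data at one frequency point for the additive model (6).
rng = np.random.default_rng(1)
n = 3
G0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

r = y - G0 @ u                        # residual y_k - G0 u_k
# Smallest (spectral-norm) Delta with Delta @ u = r:
Delta = np.outer(r, u.conj()) / (u.conj() @ u).real
assert np.allclose(Delta @ u, r)      # data matching condition (6)

norm_Delta = np.linalg.norm(Delta, 2)
assert np.isclose(norm_Delta, np.linalg.norm(r) / np.linalg.norm(u))

# Schur complement: [[gamma*I, r], [r^*, u^*u]] >= 0 iff
# gamma >= ||r||^2 / ||u||^2, so the optimal gamma equals ||Delta||^2.
gamma = norm_Delta**2
M = np.block([[gamma * np.eye(n), r[:, None]],
              [r.conj()[None, :], np.array([[(u.conj() @ u).real]])]])
eigs = np.linalg.eigvalsh(M)          # M is Hermitian by construction
assert eigs.min() >= -1e-9            # positive semidefinite at the optimum
```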

Many types of uncertainty descriptions do not readily give a convex optimization problem. For example, an output multiplicative uncertainty described by

(8) *y*_{k} = *G*_{0}*u*_{k} + Δ_{k}*G*_{0}*u*_{k}, *k* = 1,...,*N*,

results in the optimization problem

(9) min_{G_{0}} γ s.t. [γ*I*, *y*_{k} - *G*_{0}*u*_{k}; (*y*_{k} - *G*_{0}*u*_{k})^{*}, *u*_{k}^{*}*G*_{0}^{*}*G*_{0}*u*_{k}] ≥ 0, *k* = 1,...,*N*, ∀ω ∈ Ω

which is non-convex due to the appearance of *G*_{0}^{*}*G*_{0}. An iterative solution that keeps *G*_{0}^{*}*G*_{0} fixed during each iteration tends to produce non-global local minima. However, we show how this optimization problem, and similar ones for some other types of uncertainty, can be reformulated as a convex optimization problem.
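A hypothetical scalar example makes the non-convexity concrete: for output multiplicative uncertainty at one frequency, the smallest Δ matching (*y*, *u*) has norm |*y* - *G*_{0}*u*|/|*G*_{0}*u*|, and with *y* = *u* = 1 this objective is not even quasiconvex:

```python
import math

# f(g) = |y - g*u| / |g*u| is the smallest perturbation norm for the
# scalar output-multiplicative model (8).  With y = u = 1 its sublevel
# sets {f <= t} for t > 1 are exteriors of disks, hence not convex.
def f(g, y=1.0, u=1.0):
    return abs(y - g * u) / abs(g * u)

g1, g2 = -1.0, 1.0 / 3.0     # both endpoints give f = 2
mid = (g1 + g2) / 2.0        # their midpoint gives f = 4
assert math.isclose(f(g1), 2.0) and math.isclose(f(g2), 2.0)
assert math.isclose(f(mid), 4.0)
assert f(mid) > max(f(g1), f(g2))   # quasiconvexity fails
```

This is why the naive iteration over a fixed *G*_{0}^{*}*G*_{0} can get stuck, and why the convex reformulation claimed above is of interest.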

For control design, *G*_{0} is needed as a transfer function or a state-space model. In principle, we should determine such a model by replacing *G*_{0} in the appropriate consistency relations, like those appearing in (7) and (9), by a suitable parameterization of *G*_{0}. However, so far we have not been able to obtain a satisfactory solution in that way. Instead, we have fitted a model to the calculated frequency responses *G*_{0}(jω), ω ∈ Ω. A drawback of this approach is that min ||Δ(jω)||_{2} will increase, and usually also min ||Δ||_{∞}, sometimes even drastically. We have studied various approaches to reducing the effects of this drawback.
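One simple way to perform such a fit (a sketch only; the paper does not specify the fitting method, and the model order and frequency grid here are invented) is linearized least squares in the style of Levy's method, choosing numerator and denominator coefficients to minimize |*b*(jω) - *a*(jω)*G*_{0}(jω)| over the grid:

```python
import numpy as np

# Hypothetical frequency grid Omega and first-order "computed" responses;
# in practice G would be the frequency-wise optimal G0(jw) from (5).
w = np.linspace(0.1, 10.0, 50)
s = 1j * w
G = (2.0 + 0.5 * s) / (1.0 + 0.2 * s)   # demo data, exactly rational here

# Fit G(s) ~ (b0 + b1*s) / (1 + a1*s): the residual
# b0 + b1*s - G*(1 + a1*s) = 0 is linear in the unknowns x = [b0, b1, a1].
A = np.column_stack([np.ones_like(s), s, -G * s])
rhs = G
# Solve the complex least-squares problem by stacking real and imaginary parts.
A_ri = np.vstack([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
x, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
b0, b1, a1 = x

G_fit = (b0 + b1 * s) / (1.0 + a1 * s)
assert np.allclose(G_fit, G, atol=1e-8)   # exact recovery for rational data
```

For real data the fit is not exact, which is precisely the source of the increase in min ||Δ(jω)||_{2} noted above.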

| Original language | Undefined/Unknown |
|---|---|
| Title of host publication | Preprints 16th Nordic Process Control Workshop |
| Editors | Tore Hägglund |
| Publisher | Lund University |
| Pages | – |
| Publication status | Published - 2010 |
| MoE publication type | B3 Non-refereed article in conference proceedings |


## Keywords

- Convex optimization
- Distillation columns
- LFT uncertainty
- Linear matrix inequalities
- Linear multivariable systems
- Robust control
- Uncertainty modeling