Adaptive Backstepping Flight Control for Modern Fighter Aircraft

ISBN 978-90-8570-573-4
Printed by Wöhrmann Print Service, Zutphen, The Netherlands.
Typeset by the author with the LaTeX documentation system.
Cover design based on an F-16 image by James Dale.
Copyright © 2010 by L. Sonneveldt. All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without the prior permission of the author.

Adaptive Backstepping Flight Control for Modern Fighter Aircraft

DISSERTATION

for the purpose of obtaining the degree of doctor at Delft University of Technology, by authority of the Rector Magnificus, Prof. ir. K.Ch.A.M. Luyben, chairman of the Board for Doctorates, to be defended in public on Wednesday 7 July 2010 at 15.00 hours

by

Lars SONNEVELDT

aerospace engineer, born in Rotterdam.

This dissertation has been approved by the promotor: Prof. dr. ir. J.A. Mulder
Copromotor: Dr. Q.P. Chu

Composition of the doctoral committee:
Rector Magnificus, chairman
Prof. dr. ir. J.A. Mulder, Technische Universiteit Delft, promotor
Dr. Q.P. Chu, Technische Universiteit Delft, copromotor
Prof. lt. gen. b.d. B.A.C. Droste, Technische Universiteit Delft
Prof. dr. ir. M. Verhaegen, Technische Universiteit Delft
Prof. dr. A. Zolghadri, Université de Bordeaux
Prof. Dr.-Ing. R. Luckner, Technische Universität Berlin
Ir. W.F.J.A. Rouwhorst, Nationaal Lucht- en Ruimtevaartlaboratorium
Prof. dr. ir. Th. van Holten, Technische Universiteit Delft, reserve member

To Rianne

Summary

Over the last few decades, pushed by developments in aerospace technology, the performance requirements of modern fighter aircraft have become more and more challenging throughout an ever increasing flight envelope. Extreme maneuverability is achieved by designing the aircraft with multiple redundant control actuators and by allowing static instabilities in certain modes. A good example is the Lockheed Martin F-22 Raptor, which makes use of thrust vectored control to increase maneuverability. Furthermore, the survivability requirements in modern warfare are constantly evolving for both manned and unmanned combat aircraft. Taking all these requirements into account when designing the control systems for modern fighter aircraft poses a huge challenge for flight control designers.

Traditionally, aircraft control systems were designed using linearized aircraft models at multiple trimmed flight conditions throughout the flight envelope. For each of these operating points a corresponding linear controller is derived using well-established linear control design methods. One of the many gain-scheduling methods can then be applied to derive a single flight control law for the entire flight envelope. A problem of this approach, however, is that good performance and robustness properties cannot be guaranteed for a highly nonlinear fighter aircraft.

Nonlinear control methods have been developed to overcome the shortcomings of linear design approaches. The theoretically established nonlinear dynamic inversion (NDI) approach is the best known and most widely used of these methods. NDI is a control design method that can explicitly handle systems with known nonlinearities. By using nonlinear feedback and exact state transformations rather than linear approximations, the nonlinear system is transformed into a constant linear system. This linear system can in principle be controlled by just a single linear controller. However, to perform perfect dynamic inversion all nonlinearities have to be precisely known. This is generally not the case for modern fighter aircraft, since it is very difficult to precisely know and model their complex nonlinear aerodynamic characteristics. Empirical data is usually obtained from wind tunnel experiments and flight tests, augmented by computational fluid dynamics (CFD) results, and thus is not 100% accurate.

The problem of model deficiencies can be dealt with by closing the control loop with a linear, robust controller. However, even then the desired performance cannot be expected in the case of gross errors, due to large, sudden changes in the aircraft dynamics that could result from structural damage, control effector failures or adverse environmental conditions. A more sophisticated way of dealing with large model uncertainties is to introduce an adaptive control system with some form of online model identification. In recent years, the increase in available onboard computational power has made it possible to implement more complex adaptive flight control designs. It is clear that a nonlinear adaptive flight control system with onboard model identification can do more than just compensate for inaccuracies in the nominal aircraft model. It is also possible to identify any sudden changes in the dynamic behavior of the aircraft. Such changes will in general lead to an increase in pilot workload or can even result in a complete loss of control. If the post-failure aircraft dynamics can be identified correctly by the online model identification, the redundancy in control effectors and the fly-by-wire system of modern fighter planes can be exploited to reconfigure the flight control system.

There are several methods available to design an identifier that updates the onboard model of the NDI controller online, e.g. neural networks or least-squares techniques. A disadvantage of an adaptive design with a separate identifier is that the certainty equivalence property does not hold for nonlinear systems, i.e. the identifier is not fast enough to cope with the potentially faster-than-linear growth of instabilities in nonlinear systems. To overcome this problem a controller with strong parametric robustness properties is needed. An alternative solution is to design the controller and identifier as a single integrated system using the adaptive backstepping design method. By systematically constructing a Lyapunov function for the closed-loop system, adaptive backstepping offers the possibility to synthesize a controller for a wide class of nonlinear systems with parametric uncertainties.

The main goal of this thesis is to investigate the potential of the nonlinear adaptive backstepping control technique in combination with online model identification for the design of a reconfigurable flight control (RFC) system for a modern fighter aircraft. The following features are aimed for:

• the RFC system uses a single nonlinear adaptive flight controller for the entire domain of operation (flight envelope), which has provable theoretical performance and stability properties;
• the RFC system enhances performance and survivability of the aircraft in the presence of disturbances related to failures and structural damage;
• the algorithms on which the RFC system is based possess excellent numerical stability properties and their computational costs are low (real-time implementation is feasible).

Adaptive backstepping is a recursive, Lyapunov-based, nonlinear design method that makes use of dynamic parameter update laws to deal with parametric uncertainties. The idea of backstepping is to design a controller recursively by considering some of the state variables as 'virtual controls' and designing intermediate control laws for these. Backstepping achieves the goals of global asymptotic stabilization of the closed-loop states and tracking. The proof of these properties is a direct consequence of the recursive procedure, since a Lyapunov function is constructed for the entire system, including the parameter estimates. The tracking errors drive the adaptation process of the procedure. Furthermore, it is possible to take magnitude and rate constraints on the control inputs and system states into account in such a way that the identification process is not corrupted during periods of control effector saturation.

A disadvantage of the integrated adaptive backstepping method is that it only yields pseudo-estimates of the uncertain system parameters. There is no guarantee that the real values of the parameters are found, since the adaptation only tries to satisfy a total system stability criterion, i.e. the Lyapunov function. Increasing the adaptation gain will not necessarily improve the response of the closed-loop system, due to the strong coupling between the controller and the estimator dynamics. The immersion and invariance (I&I) approach provides an alternative way of constructing a nonlinear estimator. This approach allows prescribed stable dynamics to be assigned to the parameter estimation error. The resulting estimator is combined with a backstepping controller to form a modular adaptive control scheme. The I&I based estimator is fast enough to capture the potential faster-than-linear growth of nonlinear systems. The resulting modular scheme is much easier to tune than the schemes resulting from the standard adaptive backstepping approaches with a tracking-error-driven adaptation process. In fact, the closed-loop system resulting from the application of the I&I based adaptive backstepping controller can be seen as a cascaded interconnection of two stable systems with prescribed asymptotic properties. As a result, the performance of the closed-loop system with the adaptive controller can be improved significantly.

To make a real-time implementation of the adaptive controllers feasible, the computational complexity has to be kept to a minimum. As a solution, a flight envelope partitioning method is proposed to capture the globally valid aerodynamic model in multiple locally valid aerodynamic models. The estimator only has to update a few local models at each time step, thereby decreasing the computational load of the algorithm. An additional advantage of using multiple local models is that the information of the models that are not updated at a certain time step is retained, thereby giving the approximator memory capabilities. B-spline networks are selected for their nice numerical properties to ensure smooth transitions between the different regions.

The adaptive backstepping flight controllers developed in this thesis have been evaluated in numerical simulations on a high-fidelity F-16 dynamic model for several control problems. The adaptive designs have been compared with the gain-scheduled baseline flight control system and a non-adaptive NDI design. The performance has been compared in simulation scenarios at several flight conditions with the aircraft model suffering from actuator failures, longitudinal center of gravity shifts and changes in aerodynamic coefficients.
All numerical simulations can easily be performed in real-time on an ordinary desktop computer. Results of the simulations demonstrate that the adaptive flight controllers provide a significant performance improvement over the non-adaptive NDI design for the simulated failure cases. Of the evaluated adaptive flight controllers, the I&I based modular adaptive backstepping design has the best overall performance and is also the easiest to tune, at the cost of a small increase in computational load and design complexity compared to the integrated adaptive backstepping control designs. Moreover, the flight controllers designed with the I&I based modular adaptive backstepping approach have even stronger provable stability and convergence properties than the integrated adaptive backstepping flight controllers, while at the same time achieving modularity in the design of the controller and identifier. On the basis of the research performed in this thesis, it can be concluded that an RFC system based on the I&I based modular adaptive backstepping method shows a lot of potential, since it possesses all the features aimed at in the thesis goal.

Further research that explores the performance of the RFC system based on the I&I based modular adaptive backstepping method in other simulation scenarios is suggested. The evaluation of the adaptive flight controllers in this thesis is limited to simulation scenarios with actuator failures, symmetric center of gravity shifts and uncertainties in individual aerodynamic coefficients. The research would be more valuable if scenarios with asymmetric failures, such as partial surface loss, were also considered. Generating the necessary realistic aerodynamic data for the F-16 model would take a separate study in itself. Still an open issue is the development of an adaptive flight envelope protection system that can estimate the reduced flight envelope of an aircraft post-failure and that can feed this information back to the controller, the pilot and the guidance system. Another important research direction would be to perform a piloted evaluation and validation of the proposed RFC framework in a simulator. Post-failure workload and handling qualities should be compared with those of the baseline flight control system. Simultaneously, a study of the interactions between the pilot's reactions to a failure and the actions taken by the adaptive element in the flight control system can be performed.

Contents

Summary  i

1 Introduction  1
1.1 Background  1
1.2 Problem Definition  3
1.3 Reconfigurable Flight Control  4
1.3.1 Reconfigurable Flight Control Approaches  5
1.3.2 Reconfigurable Flight Control in Practice  9
1.4 Thesis Goal and Research Approach  10
1.4.1 Nonlinear Adaptive Backstepping Control  11
1.4.2 Flight Envelope Partitioning  11
1.4.3 The F-16 Dynamic Model  12
1.5 Thesis Outline  12

2 Aircraft Modeling  17
2.1 Introduction  17
2.2 Aircraft Dynamics  17
2.2.1 Reference Frames  18
2.2.2 Aircraft Variables  19
2.2.3 Equations of Motion for a Rigid Body Aircraft  20
2.2.4 Gathering the Equations of Motion  24
2.3 Control Variables and Engine Modeling  26
2.4 Geometry and Aerodynamic Data  28
2.5 Baseline Flight Control System  31
2.5.1 Longitudinal Control  31
2.5.2 Lateral Control  31
2.5.3 Directional Control  31
2.6 MATLAB/Simulink Implementation  32

3 Backstepping  33
3.1 Introduction  33
3.2 Lyapunov Theory and Stability Concepts  34
3.2.1 Lyapunov Stability Definitions  34
3.2.2 Lyapunov's Direct Method  36
3.2.3 Lyapunov Theory and Control Design  38
3.3 Backstepping Basics  41
3.3.1 Integrator Backstepping  41
3.3.2 Extension to Higher Order Systems  44
3.3.3 Example: Longitudinal Missile Control  47

4 Adaptive Backstepping  53
4.1 Introduction  53
4.2 Tuning Functions Adaptive Backstepping  54
4.2.1 Dynamic Feedback  55
4.2.2 Extension to Higher Order Systems  58
4.2.3 Robustness Considerations  63
4.2.4 Example: Adaptive Longitudinal Missile Control  66
4.3 Constrained Adaptive Backstepping  68
4.3.1 Command Filtering Approach  69
4.3.2 Example: Constrained Adaptive Longitudinal Missile Control  73

5 Inverse Optimal Adaptive Backstepping  77
5.1 Introduction  77
5.2 Nonlinear Control and Optimality  78
5.2.1 Direct Optimal Control  78
5.2.2 Inverse Optimal Control  80
5.3 Adaptive Backstepping and Optimality  80
5.3.1 Inverse Optimal Design Procedure  81
5.3.2 Transient Performance Analysis  85
5.3.3 Example: Inverse Optimal Adaptive Longitudinal Missile Control  86
5.4 Conclusions  89

6 Comparison of Integrated and Modular Adaptive Flight Control  93
6.1 Introduction  93
6.2 Modular Adaptive Backstepping  94
6.2.1 Problem Statement  95
6.2.2 Input-to-state Stable Backstepping  97
6.2.3 Least-Squares Identifier  98
6.3 Aircraft Model Description  101
6.4 Flight Control Design  103
6.4.1 Feedback Control Design  103
6.4.2 Integrated Model Identification  105
6.4.3 Modular Model Identification  106
6.5 Control Allocation  107
6.5.1 Weighted Pseudo-inverse  108
6.5.2 Quadratic Programming  108
6.6 Numerical Simulation Results  110
6.6.1 Tuning the Controllers  110
6.6.2 Simulation Scenarios  111
6.6.3 Controller Comparison  112
6.7 Conclusions  116

7 F-16 Trajectory Control Design  119
7.1 Introduction  119
7.2 Flight Envelope Partitioning  120
7.2.1 Partitioning the F-16 Aerodynamic Model  121
7.2.2 B-spline Networks  124
7.2.3 Resulting Approximation Model  128
7.3 Trajectory Control Design  128
7.3.1 Motivation  129
7.3.2 Aircraft Model Description  130
7.3.3 Adaptive Control Design  131
7.3.4 Model Identification  139
7.4 Numerical Simulation Results  141
7.4.1 Controller Parameter Tuning  142
7.4.2 Maneuver 1: Upward Spiral  143
7.4.3 Maneuver 2: Reconnaissance  145
7.5 Conclusions  146

8 F-16 Stability and Control Augmentation Design  149
8.1 Introduction  149
8.2 Flight Control Design  150
8.2.1 Outer Loop Design  151
8.2.2 Inner Loop Design  152
8.2.3 Update Laws and Stability Properties  153
8.3 Integrated Model Identification  154
8.4 Modular Model Identification  155
8.5 Controller Tuning and Command Filter Design  157
8.6 Numerical Simulations and Results  159
8.6.1 Simulation Scenarios  162
8.6.2 Simulation Results with Cmq = 0  162
8.6.3 Simulation Results with Longitudinal c.g. Shifts  163
8.6.4 Simulation Results with Aileron Lock-ups  164
8.7 Conclusions  165

9 Immersion and Invariance Adaptive Backstepping  167
9.1 Introduction  167
9.2 The Immersion and Invariance Concept  168
9.3 Extension to Higher Order Systems  173
9.3.1 Estimator Design  173
9.3.2 Control Design  175
9.4 Dynamic Scaling and Filters  177
9.4.1 Estimator Design with Dynamic Scaling  177
9.4.2 Command Filtered Control Law Design  180
9.5 Adaptive Flight Control Example  182
9.5.1 Adaptive Control Design  183
9.5.2 Numerical Simulation Results  185
9.6 F-16 Stability and Control Augmentation Design  187
9.6.1 Adaptive Control Design  187
9.6.2 Numerical Simulation Results  189
9.7 Conclusions  191

10 Conclusions and Recommendations  193
10.1 Conclusions  193
10.2 Recommendations  200

A F-16 Model  203
A.1 F-16 Geometry  203
A.2 ISA Atmospheric Model  204
A.3 Flight Control System  205

B System and Stability Concepts  207
B.1 Lyapunov Stability and Convergence  207
B.2 Input-to-state Stability  211
B.3 Invariant Manifolds and System Immersion  211

C Command Filters  213

D Additional Figures  215
D.1 Simulation Results of Chapter 6  216
D.2 Simulation Results of Chapter 7  221
D.3 Simulation Results of Chapter 8  227
D.4 Simulation Results of Chapter 9  234

Bibliography  239

Samenvatting  263

Acknowledgements  269

Curriculum Vitae  271

Chapter 1

Introduction

This chapter provides an introduction to modern high performance fighter aircraft and their flight control systems. It describes the current situation, the ongoing research and the challenges for these systems. The position of the work performed in this thesis in relation to existing research on control methods is explained. Furthermore, the solution proposed in this thesis, as well as the research approach and scope, are discussed. The thesis outline is clarified in the final part of the chapter by means of a short topic description for each chapter and an explanation of the interconnections between the different chapters.

1.1 Background

At the moment, most Western countries, including the Netherlands, have started to replace or are considering replacing their current fleet of fighter aircraft with aircraft of the new generation. Some of the better known examples of this new generation of fighter aircraft are the F-22 Raptor, the JAS-39 Gripen, the Eurofighter and the F-35 Lightning II (better known as the Joint Strike Fighter). Pushed by air force requirements and by developments in aerospace technology, the performance specifications for modern fighter aircraft have become ever more challenging. Extreme maneuverability over a large flight envelope is achieved by designing the aircraft unstable in certain modes and by using multiple redundant control effectors for control. Examples include the F-22 Raptor (Figure 1.1(a)), which makes use of thrust vectored control to achieve extreme angles of attack, and the highly unstable Su-47 prototype (Figure 1.1(b)) with its forward swept wings and thrust vectoring.

Figure 1.1: Two examples of modern high performance fighter aircraft: (a) the F-22 Raptor, (b) the Su-47 Berkut. The F-22 picture is by courtesy of the USAF and the Su-47 picture is a photo by Andrey Zinchuk.

Human pilots are not able to control these highly complex nonlinear systems without some kind of assistance for their various tasks. Modern fighter aircraft require digital flight control systems to ensure that the aircraft possess the flying qualities pilots desire. In fact, flight control systems had been considered by inventors even before the first flight of the Wright brothers in 1903 [139]. In 1893 Sir Hiram Maxim already made a working model of a steam-powered gyroscope and servo cylinder to maintain the longitudinal attitude of an aircraft. The pitch controller weighed over 130 kg, which is still only a fraction of the 3.5 ton total weight of his self-developed steam-powered flying machine depicted in Figure 1.2. In 1913 Lawrence Sperry demonstrated hands-off flight when he and his co-pilot each stood on a wing of his biplane as it passed the exuberant crowd. Sperry used a lightweight version of his father's gyroscope to control the pitch and roll motion of his aircraft with compressed air. Both world wars not only stimulated the development of more advanced flight control systems; it was also in this period that the fundamentals of classical control theory were laid.

Figure 1.2: Sir Hiram Maxim’s ‘heavier-than-air’ steam-powered aircraft.

In the early 1950s it was found that constant-gain, linear feedback controllers struggled to perform well over the whole flight regime of new high-performance prototype aircraft such as the X-15. After a considerable development effort it was found that gain-scheduling was a suitable technique to achieve good performance over a wide range of operating conditions [9]. Even today, modern fighter aircraft still make use of flight control systems based on various types of linear control algorithms and gain-scheduling. The main benefit of this strategy is that it is based on the well-developed classical linear control theory. However, nonlinear effects, occurring in particular at high angles of incidence, and the cross-couplings between longitudinal and lateral motion are neglected in the control design. Furthermore, it is difficult to guarantee stability and performance of the gain-scheduled controller in between the operating points for which a linear controller has been designed. This motivates the use of nonlinear control techniques for the flight control system design of high performance aircraft.

In the early 1990s a new nonlinear control methodology called feedback linearization (FBL) emerged [88, 187]. Nonlinear dynamic inversion (NDI) is a special form of FBL especially suited for flight control applications; see e.g. [48, 123, 129]. The main idea behind NDI is to use an accurate model of the system to cancel all system nonlinearities in such a way that a single linear system valid over the entire flight envelope remains. A classical linear controller can be used to close the outer loop of the system under NDI control. The F-35 Lightning II will be the first production fighter aircraft equipped with such an NDI based flight control system [20, 205]. The control law structure, presented in Figure 1.3, permits a decoupling of the flying-qualities-dependent portions of the design from those that depend on the airframe and engine dynamics.

Figure 1.3: The nonlinear dynamic inversion controller structure of the F-35 (stick input, precompensation and command shaping on the flying-qualities-dependent side; nonlinear dynamic inversion with onboard model, control allocation and sensor processing on the airframe/engine-dependent side): the onboard model is used to cancel all system nonlinearities and a single linear controller which enforces the flying qualities closes the outer loop.
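To make the inversion idea concrete, consider the following first-order sketch (a generic textbook illustration, not the F-35 control law itself). For a system with state $x$, control input $u$ and known model functions $f$ and $g$, with $g(x)$ invertible,
\[
\dot{x} = f(x) + g(x)\,u, \qquad u = g(x)^{-1}\bigl(\nu - f(x)\bigr) \quad\Longrightarrow\quad \dot{x} = \nu ,
\]
so the dynamics from the virtual input $\nu$ to the state reduce to a simple integrator that is the same at every flight condition, and $\nu$ can be supplied by a single linear outer-loop controller. If only estimates $\hat{f}$ and $\hat{g}$ of the true dynamics are available, the cancellation is imperfect and residual nonlinear terms remain in the closed loop, which is exactly the model-accuracy problem addressed in the next section.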

1.2 Problem Definition

The main weakness of the NDI technique is that an accurate model of the aircraft dynamics is required. A dynamic aircraft model is costly to obtain, since it takes a large number of (virtual) wind tunnel experiments and an intensive flight testing program. Small uncertainties can be dealt with by designing a robust, linear outer loop controller. However, especially for larger model uncertainties, robust control methods tend to yield rather conservative control laws and, consequently, result in poor closed-loop performance [106, 202, 215].

A more sophisticated way of dealing with large model uncertainties is to introduce an adaptive control system with some form of online model identification. Adaptive control was originally studied in the 1950s as an alternative to gain-scheduling methods for flight control and there has been a lot of theoretical development over the past decades [8]. In recent years, the increase in available onboard computational power has made it possible to implement adaptive flight control designs. There are several methods available to design an identifier that updates the onboard model of the NDI controller online, e.g. neural networks or least-squares techniques. A disadvantage of a nonlinear adaptive design with a separate identifier is that the certainty equivalence property [106] does not hold for nonlinear systems, i.e. the identifier is not fast enough to cope with potentially explosive instabilities of nonlinear systems. To overcome this problem a controller with strong parametric robustness properties is needed [119]. An alternative solution is to design the controller and identifier as a single integrated system using the adaptive backstepping design method [101, 117, 118]. By systematically constructing a Lyapunov function for the closed-loop system, adaptive backstepping offers the possibility to synthesize a controller for a wide class of nonlinear systems with parametric uncertainties.

Obviously, a nonlinear adaptive (backstepping based) flight control system with onboard model identification has the potential to do more than just compensate for inaccuracies in the nominal aircraft model. It is also possible to identify sudden changes in the dynamic behavior of the aircraft that could result from structural damage, control effector failures or adverse environmental conditions. Such changes will in general lead to an increase in pilot workload or can even result in a complete loss of control. If the post-failure aircraft dynamics can be identified correctly by the online model identification, the redundancy in control effectors and the fly-by-wire system of modern fighter planes can be exploited to reconfigure the flight control system.

1.3 Reconfigurable Flight Control

The idea of control reconfiguration can be traced back throughout the history of flight in cases where pilots had to manually exploit the remaining control capability of a degraded aircraft. In 1971 an early theoretical basis for control reconfiguration appeared in [13], where the number of control effectors needed for the controllability of a linear system for failure accommodation was considered. In fact, most of the studies in the 1970s were based on the idea of introducing backup flight control effectors to compensate for the failure of a primary control surface. Many of these studies are also relevant for control reconfiguration. Two early studies that first showed the value of control reconfiguration were performed by the Grumman Aerospace Corporation for the United States Air Force (USAF) [23] and by the United States Navy [72]. The study done by Grumman demonstrated the importance of considering reconfiguration during the initial design process. One of the aircraft studied at the time was the F-16, which would become a focus of later USAF studies as it appeared to be well suited for reconfiguration. Flight control reconfiguration became an important research subject in the 1980s and has remained a major field of study ever since.

This section will try to provide an overview of the many different reconfigurable flight control (RFC) approaches that have been proposed in the literature over the past decades. Methods for accommodating sensor failures, software failures or for switching among redundant hardware will not be considered, although they are sometimes referred to as flight control reconfiguration. Here 'reconfigurable flight control' is only used to refer to software algorithms designed specifically to compensate for failures or damage to the flight control effectors or structure of the aircraft (e.g. lifting surfaces). This section is based on the survey papers on reconfigurable flight control by Huzmezan [84], Jones [92] and Steinberg [195]. Other relevant articles are the more general fault-tolerant control surveys by Stengel [197] and Patton [162].

1.3.1 Reconfigurable Flight Control Approaches

Most of the control reconfiguration methods developed in the 1980s required a separate system for explicit failure detection, isolation and estimation (FDIE). An important early example of this type of approach was developed by General Electric Aircraft Controls [55]. This design used a single extended Kalman estimator to perform all FDIE, and a pseudo-inverse approach based on a linearized model of the aircraft was used to determine control effector commands, so that the degraded aircraft would generate the same accelerations as the nominal aircraft. The single Kalman estimator approach turned out to be impractical, but the pseudo-inverse methods would become a major focus of research, even resulting in some limited flight testing at the end of the 1980s [140].

By the beginning of the 1990s a set of flight-tested techniques was available, which could be used to add limited reconfigurable control capability to otherwise conventional flight control laws for fixed-wing aircraft. FDIE was the main limiting factor and required complicated tuning based on known failure models, particularly for surface damage detection and isolation. Similarly, the control approaches could require quite a bit of design tuning and there was a lack of theoretical proofs of stability and robustness. However, these approaches were shown to be quite effective when optimized for a small number of failure cases [195].

The increase of onboard computational power and advanced control development software packages in the 1990s led to a rapid increase in the number and types of approaches applied to RFC problems. It became much easier and cheaper to experiment with complex nonlinear design approaches. Furthermore, there had been considerable theoretical advances in the areas of adaptive [9] and nonlinear control methods [187] throughout the 1980s. The late 1980s also saw a renewed interest in the use of emerging machine intelligence techniques, such as neural networks and fuzzy logic [148]. These approaches could potentially improve FDIE or support new control architectures that do not use explicit FDIE at all.




An attempt is now made to organize the various RFC methods developed during the 1990s and up until now. This has become increasingly difficult, because many combinations of different methods have been attempted over the years. In [92] the RFC methods are subdivided into four categories. A short overview of each category will now be given. Note that this overview is by no means complete, but only serves to illustrate the advantages and disadvantages of the different methodologies. Also note that many combinations of different methodologies have emerged over the years.

Multiple Model Control

Multiple model control basically involves a control law consisting of several fault models and their corresponding controllers. Three types of multiple model control exist in the literature: multiple model switching and tuning (MMST), interacting multiple model (IMM) and propulsion controlled aircraft (PCA). In the first two cases all expected failure scenarios are collected during a failure modes and effects analysis, where fault models are constructed that cover each situation. When a failure occurs MMST switches to a precomputed control law corresponding to the current fault situation. Some examples of MMST approaches can be found in [24, 25, 26, 71]. IMM removes the extensive fault modeling limitation of MMST by considering fault models which are a convex combination of models in a predetermined model set. Again the control law can be based on a variety of methods. In [137, 138] a fixed controller is used, while in [99, 100] an MPC scheme with minimization of the past tracking error is used. PCA is a special case of MMST, where the only anticipated fault is a total hydraulics failure and only the engines can be used for control. There have been some successful flight tests with PCA on an F-15 and an MD-11 in the beginning of the 1990s [32, 33]. The advantage of multiple model methods is that they are fast and provably stable. The main disadvantages are the lack of correct models when dealing with failures that were not considered during the control design, and the exponential increase of the number of required models with the number of considered failures for large systems.

Controller Synthesis

Controller synthesis methods make use of a fault model provided by some form of FDIE. FDIE provides information about the onset, location and severity of any faults and hence the reconfiguration problem is reduced to finding a proper FDIE. Many FDIE approaches can be found in the literature, see e.g. [41, 169, 217, 216] and the references therein. Eigenstructure assignment (EA), the pseudo-inverse method (PIM) and model predictive control (MPC) are three of the methodologies which can be used in this reconfigurable control framework.

• The main idea of EA is to design a stabilizing output feedback law such that the eigenstructure of the closed-loop system based on the linear fault model provided by the FDIE unit is as close as possible to that of the original closed-loop system. The limitations when applying EA to reconfigurable flight control are obvious: only linear models are considered and actuator dynamics are not taken into account. Also, a perfect fault model is assumed, and the effect of the eigenvectors in the failed system not being exactly equal to those in the nominal system is not well understood. Despite these problems, some examples of EA and reconfigurable flight control exist in the literature, see e.g. [112, 214].

• A method which closely resembles EA is the pseudo-inverse method. The idea of the PIM is to recover the closed-loop behavior by calculating an output feedback law which minimizes the difference in closed-loop dynamics between the fault model and the nominal model; a sketch of this reconstruction step is given after this list. The PIM was popular in the 1980s and the early 1990s, but has fallen out of favor due to difficulties in ensuring stability. A survey with several attempts to make this method stable can be found in [162].

• MPC is an interesting method to use for RFC due to its ability to handle constraints and changing model dynamics systematically when a failure occurs. MPC also requires the use of a fault model, since it relies on an internal model of the system. Several methods for changing the internal model have been proposed, such as the multiple model method in [99]. More examples of RFC using MPC can be found in [94, 95]. In [74] a combination of MPC and a subspace predictor is suggested and demonstrated on a reconfigurable flight control problem for a damaged Boeing 747 model. A disadvantage of MPC is that the method requires a computationally intensive online optimization at each time step, which makes it difficult to implement MPC as an aircraft controller. There is no guarantee that a solution to the optimization problem exists at all times.
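The reconstruction step behind the PIM can be illustrated with the following generic state-feedback formulation (a sketch under the assumption of full state feedback; the cited studies differ in detail). With nominal dynamics $\dot{x} = Ax + Bu$, nominal gain $K$, and a fault model $\dot{x} = A_f x + B_f u$ supplied by the FDIE unit, the reconfigured gain is chosen to make the failed closed loop resemble the nominal one:
\[
K_f = \arg\min_{K_f} \bigl\| (A + BK) - (A_f + B_f K_f) \bigr\|_F = B_f^{+}\bigl(A + BK - A_f\bigr),
\]
where $B_f^{+}$ denotes the Moore-Penrose pseudo-inverse. The difficulty mentioned above is visible here: nothing in this least-squares fit guarantees that $A_f + B_f K_f$ is actually stable, which is why the modifications surveyed in [162] were proposed.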

8

INTRODUCTION

1.3

the actuators may not be capable of achieving this. Another problem is that the system will not necessarily be stable, even with a stabilizing control law, as the input seen by the system may not be equal to that intended by the controller. Several extensions to the basic CA method have been proposed in literature, see e.g. [76] for an overview. Adaptive Control Adaptive control approaches are by far the largest research area in RFC, especially in recent decades. An adaptive controller is a controller with adjustable parameters and a mechanism for adjusting these parameters. A ‘good’ adaptive control law removes the need for a FDIE system. However, it is often difficult or impossible to proof robustness and/or stability of such an algorithm. All the previously mentioned methods are also somewhat adaptive, but all require FDIE or use pre-computed control laws and fault models. The bulk of the adaptive flight control approaches in literature can be roughly divided in two categories: model reference adaptive control (MRAC) and model inversion based control, e.g. NDI or backstepping (BS), in combination with an online parameter estimation method, e.g. neural networks (NN), recursive least squares (RLS). • MRAC is usually used as a final stage in another algorithm. The goal of MRAC is to force the output of the system to track a reference model with the desired specifications. Adaptation is used to estimate the controller parameters needed to track the model when failure occurs. There exists direct adaptation and indirect adaptation. Both these methods are compared in [18, 19]. Other publications about RFC using MRAC are [73, 108, 107, 142]. In [85] a discrete version of this method is proposed. Several modifications of standard MRAC have been proposed to provide stable adaptation in the presence of input constraints [91, 123]. • NDI/BS in combination with NN/RLS basically uses nonlinear control for reference tracking and NN/RLS to compensate for all modeling errors. In [30, 31, 37, 38, 39] a controller using NDI in combination with NN is designed and (limited) flight tested on a tailless fighter aircraft under the USAF RESTORE program and on the unmanned X-36 (Figure 1.4(a)). NDI with NN was also used on the F-15 ACTIVE (Figure 1.4(b)) under the intelligent flight control system program of the NASA [21, 22]. A Boeing 747 fitted with a NDI controller combined with RLS for the online model identification was successfully tested in a moving base simulator [131, 132, 133]. In recent years adaptive backstepping flight control in combination with some form of neural networks has become a popular research subject, see e.g. [58, 125, 161, 176, 177, 196]. The main advantages of adaptive backstepping over NDI are its strong stability and convergence properties. • Some other approaches suggested in literature over the years include the adaptive LQR methods in [2, 69]. In [62] a linear matrix inequalities framework for a robust, adaptive nonlinear flight control system is proposed. In [81, 185] a RFC for the NASA F-18/HARV based on a QFT compensator and an adaptive filter is

1.3

RECONFIGURABLE FLIGHT CONTROL

9

used. Flight control based on reinforcement learning is the subject of [89, 90, 126]. Indirect adaptive control using a moving window/batch estimation for partial loss of the horizontal tail surface is studied in [157].

(a) The X-36

(b) The F-15 ACTIVE

Figure 1.4: Two examples of aircraft used for reconfigurable flight control testing. Pictures by courtesy of NASA.

1.3.2 Reconfigurable Flight Control in Practice In 1998, an F-18E/F Super Hornet (Figure 1.5) was in the middle of a flutter test flight when the right stabilizer actuator experienced a failure [53]. This failure would have triggered a reversion to a mechanical control mode in previous versions of the F-18, which usually caused substantial transients and slightly degraded handling qualities. However, the E/F design included the replacement of the mechanical backup system with a reconfigurable control law. For this particular failure, the left stabilizer and rudder toe-in can be used to restore some of the lost pitching moment and the flaps, ailerons and rudders can be used to compensate for the coupling in lateral/directional axis caused by asymmetric stabilizer deflection. Although this control reconfiguration approach had been demonstrated with simulated failures in flight tests, this was the first successful demonstration with an actual failure. In 1999 the F-18E/F was the first production aircraft delivered with a reconfigurable flight control law, which can only compensate for a single stabilizer actuator failure mode. Several more advanced RFC systems have been flight tested on the X-36 and the F-15 ACTIVE, but manufacturers are cautious to implement them in production aircraft. One reason for this has been the difficulty of certifying RFC approaches for safety of flight. Therefore, part of the current research is focusing on the development of tools for the analysis of RFC laws and adaptive control algorithms that are easier to certify and implement. For instance, in [27, 141] a ‘retrofit’ RFC law using a modified sequential least-squares algorithm for online model identification is proposed, which does not alter

10

INTRODUCTION

1.4

Figure 1.5: The F-18E/F Super Hornet with RFC Law. Picture by courtesy of Boeing.

the baseline inner loop control and could be treated more like an autopilot for certification purposes. A limited flight test program has been performed by Boeing and the Naval Air Systems Command [158]. However, again only certain types of actuator failures are considered.

1.4 Thesis Goal and Research Approach The main goal of this thesis is to investigate the potential of the nonlinear adaptive backstepping control technique in combination with online model identification for the design of a reconfigurable flight control system for a modern fighter aircraft. The following features are aimed for: • the RFC system uses a single nonlinear adaptive flight controller for the entire domain of operation (flight envelope), which has provable theoretical performance and stability properties. • the RFC system enhances performance and survivability of the aircraft in the presence of disturbances related to failures and structural damage. • the algorithms, on which the RFC system is based, possess excellent numerical stability properties and their computational costs are low (real-time implementation is feasible). As a study model the Lockheed Martin F-16 is selected, since it is the current fighter aircraft of the Royal Netherlands Air Force and an accurate high-fidelity aerodynamic c model has been obtained. The MATLAB/Simulink software package will be used to design, refine and evaluate the RFC system. A short discussion on the motivation of the methods and the aircraft model used in this thesis is now presented.

1.4

THESIS GOAL AND RESEARCH APPROACH

11

1.4.1 Nonlinear Adaptive Backstepping Control Adaptive backstepping is a recursive, Lyapunov-based, nonlinear design method, which makes use of dynamic parameter update laws to deal with parametric uncertainties. The idea of backstepping is to design a controller recursively by considering some of the state variables as ‘virtual controls’ and designing intermediate control laws for these. Backstepping achieves the goals of global asymptotic stabilization and tracking. The proof of these properties is a direct consequence of the recursive procedure, since a Lyapunov function is constructed for the entire system including the parameter estimates. The tracking errors drive the adaptation process of the procedure. Furthermore, it is possible to take magnitude and rate constraints on the control inputs and system states into account such that the identification process is not corrupted during periods of control effector saturation [58, 61]. A disadvantage of the integrated adaptive backstepping method is that it only yields pseudo-estimates of the uncertain system parameters. There is no guarantee that the real values of the parameters are found, since the adaptation only tries to satisfy a total system stability criterion, i.e. the Lyapunov function. Furthermore, since the controller and identifier are designed as one integrated system it is very difficult to tune the performance of one subsystem without influencing the performance of the other. In this thesis several possible improvements to the basic adaptive backstepping approach are introduced and evaluated.

1.4.2 Flight Envelope Partitioning To simplify the online approximation of a full nonlinear dynamic aircraft model and thereby reducing computational load, the flight envelope can be partitioned into multiple connecting operating regions called hyperboxes or clusters [152, 153]. This can be done manually using a priori knowledge of the nonlinearity of the system, automatically using nonlinear optimization algorithms that cluster the data into hyperplanar or hyperellipsoidal clusters [10] or a combination of both. In each hyperbox a locally valid linear-in-the-parameters nonlinear model is defined, which can be updated using the update laws of the Lyapunov-based adaptive backstepping control law. For an aircraft, the aerodynamic model can be partitioned using different state variables, the choice of which depends on the expected nonlinearities of the system. Fuzzy logic or some form of neural network can be used to interpolate between the local nonlinear models, ensuring smooth transitions. Because only a small number of local models is updated at any given time step, the computational expense is relatively low. Another advantage is that storing of the local models means retaining information of all flight conditions, because the local adaptation does not interfere with the models outside the closed neighborhood. Hence, the estimator has memory capabilities and learns instead of continuously adapting one global nonlinear model.

12

INTRODUCTION

1.5

1.4.3 The F-16 Dynamic Model Throughout this thesis the theoretical results are illustrated, where possible, by means of numerical simulation examples. The most accurate dynamic aircraft model available for this research is that of the Lockheed Martin F-16 single-seat fighter aircraft. This aircraft model has been constructed using high-fidelity aerodynamic data obtained from [149] which is valid over the entire subsonic flight envelope of the F-16. Detailed engine and actuator models are also available, as well as a simplified version of the baseline flight control system. However, structural failure models are not available, which poses a limitation on the reconfigurable flight control research in this thesis. In other words, the simulation scenarios with the F-16 model are limited to actuator hard-overs or lock-ups, longitudinal center of gravity shifts and uncertainties in one or more aerodynamic coefficients. Without any form of FDIE, these limited failure scenarios still pose a challenge, especially the actuator failures, and can be used to evaluate the theoretical results in this thesis work. Therefore, an FDIE system, such as sensor feedback of actuator positions or actuator health monitoring systems, is not included in the investigated adaptive control designs. In this way, the actuator failures are used as a substitute for more complex (a)symmetric structural failure scenarios. Note that the baseline flight control system of the F-16 model makes use of full state measurement and hence these measurements are also assumed to be available for the nonlinear adaptive control designs developed in this thesis.

1.5 Thesis Outline The outline of the thesis is as follows: In Chapter 2 the high-fidelity dynamic model of the F-16 is constructed. The model c is implemented as a C S-function in MATLAB/Simulink . The available aerodynamic data is valid over a large, subsonic flight envelope. Furthermore, the characteristics of the classical baseline flight control system of the F-16 are discussed. The dynamic aircraft model and baseline controller are needed to evaluate and compare the performance of the nonlinear adaptive control designs in later chapters. Chapter 3 starts with a discussion on stability concepts and the concept of Lyapunov functions. Lyapunov’s direct method forms the basis for the recursive backstepping procedure, which is highlighted in the second part of the chapter. Simple control examples are used to clarify the design procedure. In Chapter 4 nonlinear systems with parametric uncertainty are introduced. The backstepping method is extended with a dynamic feedback part, i.e. a parameter update law, that constantly updates the static control part. The parameter adaptation part is designed recursively and simultaneously with the static feedback part using a single control Lyapunov function. This approach is referred to as tuning functions adaptive backstepping. Techniques to robustify the adaptive design against non-parametric uncertainties are also

1.5

THESIS OUTLINE

13

discussed. Finally, command filters are introduced in the design to simplify the tuning functions adaptive backstepping design and to make the parameter adaption more robust to actuator saturation. This approach is referred to as constrained adaptive backstepping. Chapter 5 explores the possibilities of combining (inverse) optimal control theory and adaptive backstepping. The standard adaptive backstepping designs are mainly focused on achieving stability and convergence, the transient performance and optimality are not taken explicitly into account. The inverse optimal adaptive backstepping technique resulting from combining the tuning functions approach and inverse optimal control theory is validated with a simple flight control example. In Chapter 6 the constrained adaptive backstepping technique is applied to the design of a flight control system for a simplified, nonlinear over-actuated fighter aircraft model valid at two flight conditions. It is demonstrated that the extension of the method to multi-input multi-output systems is straightforward. A comparison with a modular adaptive controller that employs a least squares identifier is made. Furthermore, the interactions between several control allocation algorithms and the online model identification for simulations with actuator failures are studied. Chapter 7 extends the results of Chapter 6 to nonlinear adaptive control for the F-16 dynamic model of Chapter 2, which is valid for the entire subsonic flight envelope. A flight envelope partitioning method to simplify the online model identification is introduced. The flight envelope is partitioned into multiple connecting operating regions and locally valid models are defined in each region. B-spline networks are used for smooth interpolation between the models. As a study case a trajectory control autopilot is designed, after which it is evaluated in several maneuvers with actuator failures and uncertainties in the onboard aerodynamic model. Chapter 8 again considers constrained adaptive backstepping flight control for the highfidelity F-16 model. A stability and control augmentation system is designed in such a way that it has virtually the same handling qualities as the baseline F-16 flight control system. A comparison is made between the performance of the baseline control system, a modular adaptive controller with least-squares identifier and the constrained adaptive backstepping controller in several realistic failure scenarios. Chapter 9 introduces the immersion and invariance method to construct a new type of nonlinear adaptive estimator. The idea behind the immersion and invariance approach is to assign prescribed stable dynamics to the estimation error. The resulting estimator in combination with a backstepping controller is shown to improve transient performance and to radically simplify the tuning process of the integrated adaptive backstepping designs of the earlier chapters. In Chapter 10 the concluding remarks and recommendations for further research are discussed. Figure 1.6 depicts a flow chart of the thesis illustrating the connections between the different chapters. Although this thesis is written as a monograph, Chapters 5 to 9 can be viewed as a collection of edited versions of previously published papers. An (approximate) overview of the papers on which these chapters are based is given below.


Chapter 5:

• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Comparison of Inverse Optimal and Tuning Functions Design for Adaptive Missile Control", Journal of Guidance, Control and Dynamics, Vol. 31, No. 4, July-Aug 2008, pp. 1176-1182
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Comparison of Inverse Optimal and Tuning Function Designs for Adaptive Missile Control", Proc. of the 2007 AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton Head, South Carolina, AIAA-2007-6675

Chapter 6:

• E.R. van Oort, L. Sonneveldt, Q.P. Chu and J.A. Mulder, "Full Envelope Modular Adaptive Control of a Fighter Aircraft using Orthogonal Least Squares", Journal of Guidance, Control and Dynamics, accepted for publication
• E.R. van Oort, L. Sonneveldt, Q.P. Chu and J.A. Mulder, "Modular Adaptive Input-to-State Stable Backstepping of a Nonlinear Missile Model", Proc. of the 2007 AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton Head, South Carolina, AIAA-2007-6676
• E.R. van Oort, L. Sonneveldt, Q.P. Chu and J.A. Mulder, "A Comparison of Adaptive Nonlinear Control Designs for an Over-Actuated Fighter Aircraft Model", Proc. of the 2008 AIAA Guidance, Navigation, and Control Conference and Exhibit, Honolulu, Hawaii, AIAA-2008-6786

Chapter 7:

• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Nonlinear Adaptive Backstepping Trajectory Control", Journal of Guidance, Control and Dynamics, Vol. 32, No. 1, Jan-Feb 2009, pp. 25-39
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Nonlinear Adaptive Trajectory Control Applied to an F-16 Model", Proc. of the 2008 AIAA Guidance, Navigation, and Control Conference and Exhibit, Honolulu, Hawaii, AIAA-2008-6788

Chapter 8:

• L. Sonneveldt, Q.P. Chu and J.A. Mulder, "Nonlinear Flight Control Design Using Constrained Adaptive Backstepping", Journal of Guidance, Control and Dynamics, Vol. 30, No. 2, Mar-Apr 2007, pp. 322-336
• L. Sonneveldt, Q.P. Chu and J.A. Mulder, "Constrained Nonlinear Adaptive Backstepping Flight Control: Application to an F-16/MATV Model", Proc. of the 2006 AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, Colorado, AIAA-2006-6413


• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Nonlinear Adaptive Flight Control Law Design and Handling Qualities Evaluation", Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, 2009
• L. Sonneveldt, et al., "Lyapunov-based Fault Tolerant Flight Control Designs for a Modern Fighter Aircraft Model", Proc. of the 2009 AIAA Guidance, Navigation, and Control Conference and Exhibit, Chicago, Illinois, AIAA-2009-6172

Chapter 9:

• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Immersion and Invariance Adaptive Backstepping Flight Control", Journal of Guidance, Control and Dynamics, under review
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, "Immersion and Invariance Based Nonlinear Adaptive Flight Control", Proc. of the 2010 AIAA Guidance, Navigation, and Control Conference and Exhibit, to be presented


Figure 1.6: Flow chart of the thesis chapters.

Chapter 2

Aircraft Modeling

This chapter utilizes basic flight dynamics theory to construct a nonlinear dynamical model of the Lockheed Martin F-16, which is the main study model in this thesis. The available geometric and aerodynamic aircraft data, as well as the assumptions made, are discussed in detail. Furthermore, a description of the baseline flight control system of the F-16, which can be used for comparison purposes, is also included. The final part of the chapter discusses the implementation of the model and the baseline control system in the MATLAB/Simulink software package.

2.1 Introduction

In this chapter a nonlinear dynamical model of the Lockheed-Martin F-16 is constructed. The F-16 is a single-seat, supersonic, multi-role tactical aircraft with a blended wing-fuselage that has been in production since 1976. Over 4,400 have been produced for 24 countries, making it the most common fighter type in the world. A three-view of the single-engined F-16 aircraft is depicted in Figure 2.1.

This chapter starts with a derivation of the equations of motion for a general rigid-body aircraft. After that, the available control variables and the engine model of the F-16 are discussed. The geometry and the aerodynamic data are given in Section 2.4. In Section 2.5 a simplified version of the baseline F-16 flight control system is discussed. The implementation in MATLAB/Simulink of the complete F-16 dynamic model with flight control system is detailed in the last part of the chapter.

2.2 Aircraft Dynamics

In this section the equations of motion for the F-16 model are derived; the derivation is based on [16, 45, 127]. A very thorough discussion on flight dynamics can be found in the course notes [143].


Figure 2.1: Three-view of the Lockheed-Martin F-16.


2.2.1 Reference Frames

Before the equations of motion can be derived, some frames of reference in which to describe the motion are needed. The reference frames used in this thesis are

• the earth-fixed reference frame FE, used as the inertial frame, and the vehicle-carried local earth reference frame FO, with its origin fixed in the center of gravity of the aircraft, which is assumed to have the same orientation as FE;
• the wind-axes reference frame FW, obtained from FO by three successive rotations of flight path heading angle χ, flight path climb angle γ and aerodynamic bank angle µ;
• the stability-axes reference frame FS, obtained from FW by a rotation of minus the sideslip angle β;
• and finally the body-fixed reference frame FB, obtained from FS by a rotation of the angle of attack α.

The body-fixed reference frame FB can also be obtained directly from FO by three successive rotations of yaw angle ψ, pitch angle θ and roll angle φ. All reference frames are right-handed and orthogonal. In the earth-fixed reference frame the zE-axis points to the center of the earth, the xE-axis points in some arbitrary direction, e.g. the north, and the


yE-axis is perpendicular to the xE-axis. The transformation matrices from FB to FS and from FB to FW are defined as

Ts/b = [  cos α   0   sin α ]        Tw/b = [  cos α cos β    sin β    sin α cos β ]
       [    0     1     0   ]               [ −cos α sin β    cos β   −sin α sin β ]
       [ −sin α   0   cos α ]               [   −sin α          0         cos α    ]
2.2.2 Aircraft Variables

Before proceeding with the derivation of the equations of motion, a number of assumptions have to be made:

1. The aircraft is a rigid body, which means that any two points on or within the airframe remain fixed with respect to each other. This assumption is quite valid for a small fighter aircraft.
2. The earth is flat and non-rotating and regarded as an inertial reference. This assumption is valid when dealing with control design of aircraft, but not when analyzing inertial guidance systems.
3. Wind gust effects are not taken into account, hence the undisturbed air is assumed to be at rest w.r.t. the surface of the earth. In other words, the kinematic velocity is equal to the aerodynamic velocity of the aircraft.
4. The mass is constant during the time interval over which the motion is considered; the fuel consumption is neglected during this time interval. This assumption is necessary to apply Newton's laws of motion.
5. The mass distribution of the aircraft is symmetric relative to the XBOZB-plane; this implies that the products of inertia Iyz and Ixy are equal to zero. This assumption is valid for most aircraft.

Note that the last assumption is no longer valid when the aircraft gets asymmetrically damaged. However, the aerodynamic effects resulting from such damage will, in general, be much larger than the influence of the center of gravity shift for a small fighter aircraft. A derivation of the equations of motion without this last assumption can be found in [11].

Under the above assumptions the motion of the aircraft has six degrees of freedom (rotation and translation in three dimensions). The aircraft dynamics can be described by its position, orientation, velocity and angular velocity over time. pE = (xE, yE, zE)ᵀ is the position vector expressed in an earth-fixed coordinate system. V is the velocity vector given by V = (u, v, w)ᵀ, where u is the longitudinal velocity, v the lateral velocity and w the normal velocity. The orientation vector is given by Φ = (φ, θ, ψ)ᵀ, where φ is the roll angle, θ the pitch angle and ψ the yaw angle, and the angular velocity vector is given by ω = (p, q, r)ᵀ, where p, q and r are the roll, pitch and yaw angular velocities, respectively. Various components of the aircraft motion are illustrated in Figure 2.2.


Figure 2.2: Aircraft orientation angles φ, θ and ψ, aerodynamic angles α and β, and the angular rates p, q and r. The frame of reference is body-fixed and all angles and rates are defined positive in the figure [178].

The relation between the attitude vector Φ and the angular velocity vector ω is given as

Φ̇ = [ 1   sin φ tan θ    cos φ tan θ  ]
     [ 0      cos φ         −sin φ     ] ω    (2.1)
     [ 0   sin φ/cos θ    cos φ/cos θ  ]

Defining VT as the total velocity and using Figure 2.2, the following relations can be derived:

VT = √(u² + v² + w²),   α = arctan(w/u),   β = arcsin(v/VT)    (2.2)

Furthermore, when β = φ = 0, the flight path angle γ can be defined as

γ = θ − α    (2.3)
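As a side note, relations (2.2) and (2.3) translate directly into code. The Python fragment below is only an illustrative sketch (the thesis model itself is implemented as a C S-function); the numerical values are arbitrary examples and the pitch angle theta is an assumed input.

import math

def airdata_from_body_velocity(u, v, w):
    # Total velocity, angle of attack and sideslip angle from body-axes velocities, Eq. (2.2)
    VT = math.sqrt(u**2 + v**2 + w**2)
    alpha = math.atan2(w, u)        # arctan(w/u)
    beta = math.asin(v / VT)        # arcsin(v/VT)
    return VT, alpha, beta

# Example: mostly-forward flight with a small normal velocity component and no sideslip
VT, alpha, beta = airdata_from_body_velocity(u=150.0, v=0.0, w=8.0)
theta = 0.10                        # assumed pitch angle [rad]
gamma = theta - alpha               # Eq. (2.3), valid here since beta = phi = 0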

2.2.3 Equations of Motion for a Rigid Body Aircraft

The equations of motion for the aircraft can be derived from Newton's Second Law of motion, which states that the summation of all external forces acting on a body must be equal to the time rate of change of its momentum, and the summation of the external moments acting on a body must be equal to the time rate of change of its angular momentum. In the inertial, earth-fixed reference frame FE, Newton's Second Law can be expressed by two vector equations [143]

F = d(mV)/dt |E    (2.4)
M = dH/dt |E    (2.5)

where F represents the sum of all externally applied forces, m is the mass of the aircraft, M represents the sum of all applied torques and H is the angular momentum.

Force Equation

First, to further evaluate the force equation (2.4) it is necessary to obtain an expression for the time rate of change of the velocity vector with respect to the earth. This process is complicated by the fact that the velocity vector may be rotating while it is changing in magnitude. Using the equation of Coriolis in appendix A of [16] results in

F = d(mV)/dt |B + ω × mV,    (2.6)

where ω is the total angular velocity of the aircraft with respect to the earth (inertial reference frame). Expressing the vectors as the sum of their components with respect to the body-fixed reference frame FB gives

V = iu + jv + kw    (2.7)
ω = ip + jq + kr    (2.8)

where i, j and k are unit vectors along the aircraft's xB, yB and zB axes, respectively. Expanding (2.6) using (2.7), (2.8) results in

Fx = m(u̇ + qw − rv)
Fy = m(v̇ + ru − pw)    (2.9)
Fz = m(ẇ + pv − qu)

where the external forces Fx, Fy and Fz depend on the weight vector W, the aerodynamic force vector R and the thrust vector E. It is assumed that the thrust produced by the engine, FT, acts parallel to the aircraft's xB-axis. Hence,

Ex = FT,   Ey = 0,   Ez = 0.    (2.10)

The components of W and R along the body axes are

Wx = −mg sin θ
Wy = mg sin φ cos θ    (2.11)
Wz = mg cos φ cos θ

and

Rx = X̄,   Ry = Ȳ,   Rz = Z̄    (2.12)

where g is the gravity constant. The size of the aerodynamic forces X̄, Ȳ and Z̄ is determined by the amount of air diverted by the aircraft in different directions. The amount of air diverted by the aircraft mainly depends on the following factors:

• the total velocity VT (or Mach number M) and the density of the airflow ρ;
• the geometry of the aircraft: wing area S, wing span b and mean aerodynamic chord c̄;
• the orientation of the aircraft relative to the airflow: angle of attack α and sideslip angle β;
• the control surface deflections δ;
• the angular rates p, q, r.

There are other variables, such as the time derivatives of the aerodynamic angles, that also play a role, but these effects are less prominent, since it is assumed that the aircraft is a rigid body. This motivates the standard way of modeling the aerodynamic forces:

X̄ = q̄ S CXT(α, β, p, q, r, δ, ...)
Ȳ = q̄ S CYT(α, β, p, q, r, δ, ...)    (2.13)
Z̄ = q̄ S CZT(α, β, p, q, r, δ, ...)

where q̄ = ½ρVT² is the dynamic pressure. The air density ρ is calculated according to the International Standard Atmosphere (ISA) as given in Appendix A.2. The coefficients CXT, CYT and CZT are usually obtained from (virtual) wind tunnel data and flight tests. Combining equations (2.11) and (2.12) and the thrust components (2.10) with (2.9) results in the complete body-axes force equation:

X̄ + FT − mg sin θ    = m(u̇ + qw − rv)
Ȳ + mg sin φ cos θ   = m(v̇ + ru − pw)    (2.14)
Z̄ + mg cos φ cos θ   = m(ẇ + pv − qu)

Moment Equation

To obtain the equations for angular motion, consider again Equation (2.5). The time rate of change of H is required and, since H can change in magnitude and direction, (2.5) can be written as

M = dH/dt |B + ω × H    (2.15)

In the body-fixed reference frame, under the rigid body and constant mass assumptions, the angular momentum H can be expressed as

H = Iω    (2.16)

where, under the symmetrical aircraft assumption, the inertia matrix is defined as

I = [  Ix     0   −Ixz ]
    [   0     Iy     0 ]    (2.17)
    [ −Ixz    0     Iz ]

Expanding (2.15) using (2.16) results in

Mx = ṗIx − ṙIxz + qr(Iz − Iy) − pqIxz
My = q̇Iy + pr(Ix − Iz) + (p² − r²)Ixz    (2.18)
Mz = ṙIz − ṗIxz + pq(Iy − Ix) + qrIxz.

The external moments Mx, My and Mz are those due to aerodynamics and engine angular momentum. As a result the moment components are

Mx = L̄
My = M̄ − rHeng    (2.19)
Mz = N̄ + qHeng

where L̄, M̄ and N̄ are the aerodynamic moments and Heng is the engine angular momentum. Note that the engine angular momentum is assumed to act parallel to the body x-axis of the aircraft. The aerodynamic moments can be expressed in a similar way as the aerodynamic forces in Equation (2.13):

L̄ = q̄ S b ClT(α, β, p, q, r, δ, ...)
M̄ = q̄ S c̄ CmT(α, β, p, q, r, δ, ...)    (2.20)
N̄ = q̄ S b CnT(α, β, p, q, r, δ, ...)

Combining (2.18) and (2.19), the complete body-axes moment equation is formed as

L̄           = ṗIx − ṙIxz + qr(Iz − Iy) − pqIxz
M̄ − rHeng   = q̇Iy + pr(Ix − Iz) + (p² − r²)Ixz    (2.21)
N̄ + qHeng   = ṙIz − ṗIxz + pq(Iy − Ix) + qrIxz.


2.2.4 Gathering the Equations of Motion

Euler Angles

The equations of motion derived in the previous sections are now collected and written as a system of twelve scalar first order differential equations.

u̇ = rv − qw − g sin θ + (1/m)(X̄ + FT)    (2.22)
v̇ = pw − ru + g sin φ cos θ + (1/m)Ȳ    (2.23)
ẇ = qu − pv + g cos φ cos θ + (1/m)Z̄    (2.24)
ṗ = (c1 r + c2 p)q + c3 L̄ + c4(N̄ + qHeng)    (2.25)
q̇ = c5 pr − c6(p² − r²) + c7(M̄ − rHeng)    (2.26)
ṙ = (c8 p − c2 r)q + c4 L̄ + c9(N̄ + qHeng)    (2.27)
φ̇ = p + tan θ (q sin φ + r cos φ)    (2.28)
θ̇ = q cos φ − r sin φ    (2.29)
ψ̇ = (q sin φ + r cos φ)/cos θ    (2.30)
ẋE = u cos ψ cos θ + v(cos ψ sin θ sin φ − sin ψ cos φ) + w(cos ψ sin θ cos φ + sin ψ sin φ)    (2.31)
ẏE = u sin ψ cos θ + v(sin ψ sin θ sin φ + cos ψ cos φ) + w(sin ψ sin θ cos φ − cos ψ sin φ)    (2.32)
żE = −u sin θ + v cos θ sin φ + w cos θ cos φ    (2.33)

where

Γc1 = (Iy − Iz)Iz − Ixz²        c5 = (Iz − Ix)/Iy        Γc8 = Ix(Ix − Iy) + Ixz²
Γc2 = (Ix − Iy + Iz)Ixz         c6 = Ixz/Iy              Γc9 = Ix
Γc3 = Iz                        c7 = 1/Iy
Γc4 = Ixz

with Γ = IxIz − Ixz².
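To make the role of the inertia constants c1–c9 concrete, the Python fragment below evaluates them from a given inertia matrix and uses them in the rotational equations (2.25)–(2.27). This is only an illustrative sketch with representative fighter-sized inertia values, not the actual F-16 data of Appendix A.

def inertia_constants(Ix, Iy, Iz, Ixz):
    # Constants c1..c9 as defined below Eqs. (2.22)-(2.33)
    Gamma = Ix * Iz - Ixz**2
    c1 = ((Iy - Iz) * Iz - Ixz**2) / Gamma
    c2 = (Ix - Iy + Iz) * Ixz / Gamma
    c3 = Iz / Gamma
    c4 = Ixz / Gamma
    c5 = (Iz - Ix) / Iy
    c6 = Ixz / Iy
    c7 = 1.0 / Iy
    c8 = (Ix * (Ix - Iy) + Ixz**2) / Gamma
    c9 = Ix / Gamma
    return (c1, c2, c3, c4, c5, c6, c7, c8, c9)

def rotational_dynamics(p, q, r, L, M, N, Heng, c):
    # Body-axes angular accelerations, Eqs. (2.25)-(2.27)
    c1, c2, c3, c4, c5, c6, c7, c8, c9 = c
    p_dot = (c1 * r + c2 * p) * q + c3 * L + c4 * (N + q * Heng)
    q_dot = c5 * p * r - c6 * (p**2 - r**2) + c7 * (M - r * Heng)
    r_dot = (c8 * p - c2 * r) * q + c4 * L + c9 * (N + q * Heng)
    return p_dot, q_dot, r_dot

# Illustrative inertia values [kg m^2] and an arbitrary flight condition
c = inertia_constants(Ix=12875.0, Iy=75674.0, Iz=85552.0, Ixz=1331.0)
pdot, qdot, rdot = rotational_dynamics(0.1, 0.05, -0.02, 1.0e4, 2.0e4, 5.0e3, 216.9, c)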

Quaternions

The above equations of motion make use of the Euler angle approach for the orientation model. The disadvantage of the Euler angle method is that the differential equations for φ̇ and ψ̇ become singular when the pitch angle θ passes through ±π/2. To avoid these singularities, quaternions are used for the aircraft orientation representation. A detailed explanation about quaternions and their properties can be found in [127]. With the quaternion representation the aircraft system consists of 13 scalar first order differential equations:

u̇ = rv − qw + (1/m)(X̄ + FT) + 2(q1q3 − q0q2)g    (2.34)
v̇ = pw − ru + (1/m)Ȳ + 2(q2q3 + q0q1)g    (2.35)
ẇ = qu − pv + (1/m)Z̄ + (q0² − q1² − q2² + q3²)g    (2.36)
ṗ = (c1 r + c2 p)q + c3 L̄ + c4(N̄ + qHeng)    (2.37)
q̇ = c5 pr − c6(p² − r²) + c7(M̄ − rHeng)    (2.38)
ṙ = (c8 p − c2 r)q + c4 L̄ + c9(N̄ + qHeng)    (2.39)

     [q̇0]         [ 0  −p  −q  −r ] [q0]
q̇ =  [q̇1]  = ½ ·  [ p   0   r  −q ] [q1]    (2.40)
     [q̇2]         [ q  −r   0   p ] [q2]
     [q̇3]         [ r   q  −p   0 ] [q3]

[ẋE]   [ q0² + q1² − q2² − q3²    2(q1q2 − q0q3)           2(q1q3 + q0q2)        ] [u]
[ẏE] = [ 2(q1q2 + q0q3)           q0² − q1² + q2² − q3²    2(q2q3 − q0q1)        ] [v]    (2.41)
[żE]   [ 2(q1q3 − q0q2)           2(q2q3 + q0q1)           q0² − q1² − q2² + q3² ] [w]

where

[q0]       [ cos φ/2 cos θ/2 cos ψ/2 + sin φ/2 sin θ/2 sin ψ/2 ]
[q1]  = ±  [ sin φ/2 cos θ/2 cos ψ/2 − cos φ/2 sin θ/2 sin ψ/2 ]
[q2]       [ cos φ/2 sin θ/2 cos ψ/2 + sin φ/2 cos θ/2 sin ψ/2 ]
[q3]       [ cos φ/2 cos θ/2 sin ψ/2 − sin φ/2 sin θ/2 cos ψ/2 ]
Using (2.40) to describe the attitude dynamics means that the four differential equations are integrated as if all quaternion components were independent. Therefore, the normalization condition |q| = √(q0² + q1² + q2² + q3²) = 1 and the derivative constraint q0q̇0 + q1q̇1 + q2q̇2 + q3q̇3 = 0 may not be satisfied after performing an integration step due to numerical round-off errors. After each integration step the constraint may be re-established by subtracting the discrepancy from the quaternion derivatives. The corrected quaternion dynamics are [170]

q̇′ = q̇ − δq,   where δ = q0q̇0 + q1q̇1 + q2q̇2 + q3q̇3.    (2.42)
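As a sketch of how (2.40) and (2.42) could be combined in practice, the Python fragment below performs one explicit Euler step of the quaternion dynamics, applies the derivative correction of (2.42) and additionally renormalizes the result. This is only an illustration of the idea, not the thesis implementation; the step size and body rates are arbitrary example values.

import numpy as np

def quat_derivative(quat, p, q, r):
    # Quaternion kinematics, Eq. (2.40), with the derivative correction of Eq. (2.42)
    omega = np.array([[0.0, -p, -q, -r],
                      [p, 0.0, r, -q],
                      [q, -r, 0.0, p],
                      [r, q, -p, 0.0]])
    qdot = 0.5 * omega @ quat
    delta = float(quat @ qdot)          # discrepancy q0*q0dot + q1*q1dot + q2*q2dot + q3*q3dot
    return qdot - delta * quat           # corrected derivative q_dot' of Eq. (2.42)

def euler_step(quat, p, q, r, dt):
    quat_new = quat + dt * quat_derivative(quat, p, q, r)
    # explicit renormalization as an extra safeguard against round-off drift
    return quat_new / np.linalg.norm(quat_new)

quat = np.array([1.0, 0.0, 0.0, 0.0])    # level attitude
quat = euler_step(quat, p=0.2, q=0.05, r=-0.1, dt=0.01)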


Wind-axes Force Equations

For control design it is more convenient to transform the force equations (2.34)-(2.36) to the wind-axes reference frame. Taking the derivative of (2.2) results in [127]

V̇T = (1/m)(−D + FT cos α cos β + m g1)    (2.43)
α̇ = q − (p cos α + r sin α) tan β − (1/(m VT cos β))(L + FT sin α − m g3)    (2.44)
β̇ = p sin α − r cos α + (1/(m VT))(Y − FT cos α sin β + m g2)    (2.45)

where the drag force D, the side force Y and the lift force L are defined as

D = −X̄ cos α cos β − Ȳ sin β − Z̄ sin α cos β
Y = −X̄ cos α sin β + Ȳ cos β − Z̄ sin α sin β
L = X̄ sin α − Z̄ cos α

and the gravity components as

g1 = g(−cos α cos β sin θ + sin β sin φ cos θ + sin α cos β cos φ cos θ)
g2 = g(cos α sin β sin θ + cos β sin φ cos θ − sin α sin β cos φ cos θ)
g3 = g(sin α sin θ + cos α cos φ cos θ).

2.3 Control Variables and Engine Modeling

The F-16 model allows control over thrust, elevator, ailerons and rudder. The thrust is measured in Newtons. All deflections are defined positive in the conventional way, i.e. positive thrust causes an increase in acceleration along the xB-axis, a positive elevator deflection results in a decrease in pitch rate, a positive aileron deflection gives a decrease in roll rate and a positive rudder deflection decreases the yaw rate. The F-16 also has a leading edge flap, which helps to fly the aircraft at high angles of attack. The deflection of the leading edge flap δlef is not controlled directly by the pilot, but is governed by the following transfer function, dependent on the angle of attack α and the static and dynamic pressures:

δlef = 1.38 (2s + 7.25)/(s + 7.25) α − 9.05 q̄/pstat + 1.45.    (2.46)

The differential elevator deflection, trailing edge flap, landing gear and speed brakes are not included in the model, since no data is publicly available. The control surfaces of the F-16 are driven by servo-controlled actuators to produce the deflections commanded by the flight control system. The actuators of the control surfaces are modeled as first-order low-pass filters with certain gains and saturation limits in range and deflection rate. These limits can be found in Table 2.1. The gains of the actuators are 1/0.136 for the leading edge flap and 1/0.0495 for the other control surfaces. The maximum values and units for all control variables are given in Table 2.1.


Table 2.1: The control input units and maximum values

Control              Units   Min.    Max.   Rate limit
Elevator             deg     -25     25     ±60 deg/s
Ailerons             deg     -21.5   21.5   ±80 deg/s
Rudder               deg     -30     30     ±120 deg/s
Leading edge flap    deg     0       25     ±25 deg/s

The Lockheed Martin F-16 is powered by an after-burning turbofan jet engine, which is modeled taking into account throttle gearing and engine power level lag. The thrust response is modeled with a first order lag, where the lag time constant is a function of the current engine power level and the commanded power. The commanded power level is related to the throttle position by a linear relationship, apart from a change in slope when the military power level is reached at the 0.77 throttle setting [149]:

Pc*(δth) = 64.94 δth                  if δth ≤ 0.77
           217.38 δth − 117.38        if δth > 0.77    (2.47)

Note that the throttle position is limited to the range 0 ≤ δth ≤ 1. The derivative of the actual power level Pa is given by [149]

Ṗa = (1/τeng)(Pc − Pa),    (2.48)

where

Pc = Pc*    if Pc* ≥ 50 and Pa ≥ 50
     60     if Pc* ≥ 50 and Pa < 50
     40     if Pc* < 50 and Pa ≥ 50
     Pc*    if Pc* < 50 and Pa < 50

1/τeng = 5.0        if Pc* ≥ 50 and Pa ≥ 50
         1/τeng*    if Pc* ≥ 50 and Pa < 50
         5.0        if Pc* < 50 and Pa ≥ 50
         1/τeng*    if Pc* < 50 and Pa < 50

1/τeng* = 1.0                     if (Pc − Pa) ≤ 25
          0.1                     if (Pc − Pa) ≥ 50
          1.9 − 0.036(Pc − Pa)    if 25 < (Pc − Pa) < 50.

The engine thrust data is available in tabular form as a function of actual power, altitude and Mach number over the ranges 0 ≤ h ≤ 15240 m and 0 ≤ M ≤ 1 for idle, military and maximum power settings [149]. The thrust is computed as

FT = Tidle + (Tmil − Tidle) Pa/50            if Pa < 50
     Tmil + (Tmax − Tmil) (Pa − 50)/50       if Pa ≥ 50.    (2.49)

The engine angular momentum is assumed to be acting along the xB -axis with a constant value of 216.9 kg.m2 /s.
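A minimal Python sketch of the engine power lag and thrust blend described by (2.47)–(2.49) is given below. It is a simplified reading of the model in [149]; the idle, military and maximum thrust values are assumed to come from the (not shown) thrust tables and are passed in directly here.

def commanded_power(throttle):
    # Throttle gearing, Eq. (2.47), with the throttle limited to [0, 1]
    throttle = min(max(throttle, 0.0), 1.0)
    if throttle <= 0.77:
        return 64.94 * throttle
    return 217.38 * throttle - 117.38

def power_rate(Pa, Pc_star):
    # Power level derivative, Eq. (2.48), with the switching logic for Pc and 1/tau
    if Pc_star >= 50.0:
        Pc = Pc_star if Pa >= 50.0 else 60.0
    else:
        Pc = 40.0 if Pa >= 50.0 else Pc_star
    if Pa >= 50.0:
        inv_tau = 5.0
    else:
        dp = Pc - Pa
        inv_tau = 1.0 if dp <= 25.0 else (0.1 if dp >= 50.0 else 1.9 - 0.036 * dp)
    return inv_tau * (Pc - Pa)

def thrust(Pa, T_idle, T_mil, T_max):
    # Thrust blend, Eq. (2.49); the three thrust values come from table lookups in altitude and Mach
    if Pa < 50.0:
        return T_idle + (T_mil - T_idle) * Pa / 50.0
    return T_mil + (T_max - T_mil) * (Pa - 50.0) / 50.0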

2.4 Geometry and Aerodynamic Data

The relevant geometry data of the F-16 can be found in Table A.1 of Appendix A. The aerodynamic data of the F-16 model have been derived from low-speed static and dynamic (force oscillation) wind-tunnel tests conducted with sub-scale models in wind-tunnel facilities at the NASA Ames and Langley Research Centers [149]. The aerodynamic data in [149] are given in tabular form and are valid for the following subsonic flight envelope:

• −20 ≤ α ≤ 90 degrees;
• −30 ≤ β ≤ 30 degrees.

Two examples of the aerodynamic data for the F-16 model can be found in Figure 2.3. The pitch moment coefficient Cm and the normal force coefficient CZ both depend on three variables: angle of attack, sideslip angle and elevator deflection.

Figure 2.3: Two examples of the aerodynamic coefficient data for the F-16 obtained from wind-tunnel tests: (a) Cm as a function of α and β for δe = 0; (b) CZ as a function of α and β for δe = 0.

The various aerodynamic contributions to a given force or moment coefficient as given


in [149] are summed as follows. For the X-axis force coefficient CXT:

CXT = CX(α, β, δe) + δCXlef (1 − δlef/25)
      + (q c̄ / 2VT) [CXq(α) + δCXqlef(α) (1 − δlef/25)]    (2.50)

where δCXlef = CXlef(α, β) − CX(α, β, δe = 0°).

For the Y-axis force coefficient CYT:

CYT = CY(α, β) + δCYlef (1 − δlef/25)
      + [δCYδa + δCYδalef (1 − δlef/25)] (δa/20)
      + δCYδr (δr/30)
      + (r b / 2VT) [CYr(α) + δCYrlef(α) (1 − δlef/25)]
      + (p b / 2VT) [CYp(α) + δCYplef(α) (1 − δlef/25)]    (2.51)

where

δCYlef   = CYlef(α, β) − CY(α, β)
δCYδa    = CYδa(α, β) − CY(α, β)
δCYδalef = CYδalef(α, β) − CYlef(α, β) − δCYδa
δCYδr    = CYδr(α, β) − CY(α, β).

For the Z-axis force coefficient CZT:

CZT = CZ(α, β, δe) + δCZlef (1 − δlef/25)
      + (q c̄ / 2VT) [CZq(α) + δCZqlef(α) (1 − δlef/25)]    (2.52)

where δCZlef = CZlef(α, β) − CZ(α, β, δe = 0°).

For the rolling-moment coefficient ClT:

ClT = Cl(α, β, δe) + δCllef (1 − δlef/25)
      + [δClδa + δClδalef (1 − δlef/25)] (δa/20)
      + δClδr (δr/30)
      + (r b / 2VT) [Clr(α) + δClrlef(α) (1 − δlef/25)]
      + (p b / 2VT) [Clp(α) + δClplef(α) (1 − δlef/25)]
      + δClβ(α) β    (2.53)

where

δCllef   = Cllef(α, β) − Cl(α, β, δe = 0°)
δClδa    = Clδa(α, β) − Cl(α, β, δe = 0°)
δClδalef = Clδalef(α, β) − Cllef(α, β) − δClδa
δClδr    = Clδr(α, β) − Cl(α, β, δe = 0°).

For the pitching-moment coefficient CmT:

CmT = Cm(α, β, δe) + CZT (xcgr − xcg) + δCmlef (1 − δlef/25)
      + (q c̄ / 2VT) [Cmq(α) + δCmqlef(α) (1 − δlef/25)]
      + δCm(α) + δCmds(α, δe)    (2.54)

where δCmlef = Cmlef(α, β) − Cm(α, β, δe = 0°).

For the yawing-moment coefficient CnT:

CnT = Cn(α, β, δe) + δCnlef (1 − δlef/25) − CYT (xcgr − xcg)(c̄/b)
      + [δCnδa + δCnδalef (1 − δlef/25)] (δa/20)
      + δCnδr (δr/30)
      + (r b / 2VT) [Cnr(α) + δCnrlef(α) (1 − δlef/25)]
      + (p b / 2VT) [Cnp(α) + δCnplef(α) (1 − δlef/25)]
      + δCnβ(α) β    (2.55)

where

δCnlef   = Cnlef(α, β) − Cn(α, β, δe = 0°)
δCnδa    = Cnδa(α, β) − Cn(α, β, δe = 0°)
δCnδalef = Cnδalef(α, β) − Cnlef(α, β) − δCnδa
δCnδr    = Cnδr(α, β) − Cn(α, β, δe = 0°).
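To illustrate how a total coefficient is assembled from the tabulated data, the Python fragment below mirrors the structure of (2.50) for CXT. The lookup functions passed in (CX_table, CX_lef_table, CXq_table, CXq_lef_table) are hypothetical stand-ins for interpolation in the wind-tunnel tables of [149]; only the combination logic follows the equation.

def cx_total(alpha, beta, d_e, d_lef, q, cbar, VT,
             CX_table, CX_lef_table, CXq_table, CXq_lef_table):
    # X-axis force coefficient build-up, Eq. (2.50)
    lef_factor = 1.0 - d_lef / 25.0
    # leading-edge-flap increment relative to the clean configuration at zero elevator
    dCX_lef = CX_lef_table(alpha, beta) - CX_table(alpha, beta, 0.0)
    cx_static = CX_table(alpha, beta, d_e) + dCX_lef * lef_factor
    # pitch-rate damping contribution
    cx_damping = (q * cbar / (2.0 * VT)) * (CXq_table(alpha)
                                            + CXq_lef_table(alpha) * lef_factor)
    return cx_static + cx_damping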


2.5 Baseline Flight Control System

The NASA technical report [149] also contains a description of a stability and control augmentation system for the F-16 model. This flight control system is a simplified version of the actual baseline F-16 flight controller, which retains its main characteristics. A description of the different control loops of the system is given in this section; for more details see [149].

2.5.1 Longitudinal Control

A diagram of the longitudinal flight control system can be found in Figure A.2 of Appendix A.3. It is a command augmentation system where the pilot commands normal acceleration with a longitudinal stick input. Washed-out pitch rate and filtered normal acceleration are fed back to achieve the desired response. A forward-loop integration is included to make the steady-state acceleration response match the commanded acceleration. At low Mach numbers the F-16 model has a minor negative static longitudinal stability; therefore angle of attack feedback is used to provide artificial static stability. The pitch control system incorporates an angle of attack limiting system, where again angle of attack feedback is used to modify the pilot-commanded normal acceleration. The resulting angle of attack limit is about 25 deg in 1g flight. Finally, the system also makes sure that the pitch control is deflected in the proper direction to oppose the nose-up coupling moment generated by rapid rolling at high angles of attack.

2.5.2 Lateral Control

The lateral flight control system is depicted in the block diagram given in Figure A.3. The pilot can command roll rates up to 308 deg/s through the lateral stick movement. Above angles of attack of 29 deg, an automatic departure-prevention system is activated. This system disengages the roll-rate control augmentation system and uses yaw rate feedback to drive the roll control surfaces to oppose any yaw rate buildup. At high angles of attack the pilot-commanded roll rate is limited to prevent pitch-out departures. The roll rate limiting is scheduled on angle of attack, elevator deflection and dynamic pressure.

2.5.3 Directional Control

A scheme of the directional control system can be found in Figure A.4. The pilot rudder input is computed directly from pedal force and is limited to ±30 deg. Between 20 and 30 deg angle of attack this command signal is gradually reduced to zero to prevent departures from excessive pilot rudder usage at high angles of attack. Also, between 20 and 40 deg/s roll rate the command signal is gradually reduced to zero to prevent pitch-out departures. Yaw stability augmentation consists of lateral acceleration and approximated stability yaw rate feedback. The stability-axis yaw damper provides increased lateral-directional damping in addition to reducing sideslip during high angle of attack roll maneuvers.


An aileron-rudder interconnection exists to improve coordination and roll performance. At low speeds the gain for the interconnection is scheduled as a linear function of angle of attack. As in the lateral control system, above angles of attack of 29 deg a departure/spin-prevention mode is activated, which uses the rudder to oppose any yaw rate buildup.

2.6 MATLAB/Simulink Implementation

The F-16 dynamic model is written as a C S-function in MATLAB/Simulink. The inputs of the model are the control surface deflections and the throttle setting. The outputs are the aircraft states and the dimensionless normal accelerations ny and nz. The aerodynamic data, interpolation functions, the engine model and the ISA atmosphere model are obtained from separate C files. A rudimentary trim function obtained from [173] is included. The baseline flight control system and the leading edge flap control system are constructed with Simulink blocks. Sensor models have been obtained from ADMIRE [63] and are also included in the Simulink model. Full state measurement is assumed to be available for the control systems. Note that wind or turbulence effects are not taken into account in the simulation model. Figure 2.4 depicts the resulting Simulink model of the closed-loop system. The FlightGear block can be used to fly the aircraft on a desktop computer with a joystick in the open-source FlightGear flight simulator in real-time. All simulation model files are included on cd-rom, but can also be downloaded from www.mathworks.com. Descriptions are included in the header of each file.

Figure 2.4: The MATLAB/Simulink F-16 model with baseline flight control system.

Chapter 3

Backstepping

In this chapter the backstepping approach to control design is introduced. Since all the adaptive design methods discussed throughout the chapters of this thesis are based on the backstepping technique, this chapter, together with the next chapter about adaptive backstepping, forms the theoretical basis of the thesis. First, the Lyapunov theory and stability concepts on which backstepping is based are reviewed. After that, the design approach itself is introduced and its characteristics are explained with illustrative examples.

3.1 Introduction

Backstepping is a systematic, Lyapunov-based method for nonlinear control design. The backstepping method can be applied to a broad class of systems. The name 'backstepping' refers to the recursive nature of the design procedure. The design procedure starts at the scalar equation which is separated by the largest number of integrations from the control input and 'steps back' toward the control input. In each step an intermediate or 'virtual' control law is calculated, and in the last step the real control law is found. Two comprehensive textbooks that deal with backstepping and Lyapunov theory are [106] and especially [118]. The origins of the backstepping method are traced in the survey paper by Kokotović [110]. An important feature of backstepping is the flexibility of the method; for instance, dealing with nonlinearities is a designer choice. If a nonlinearity acts stabilizing, i.e. it is useful in a sense, it can be retained in the closed-loop system. This is in contrast with the NDI and FBL methods. An additional advantage is that the controller relies on less precise model information: the designer does not need to know the size of a stabilizing nonlinearity. In [75, 77, 78] this notion is used to design a robust nonlinear controller for a fighter aircraft model. Other examples of backstepping control designs where the cancellation of useful nonlinearities is avoided can be found in [116, 118]. However, it is often difficult to ascertain if a nonlinearity in the aircraft dynamics acts


stabilizing over the entire flight envelope, especially with model uncertainties or sudden changes in the aircraft’s dynamic behavior. Therefore, this feature of backstepping is not exploited in this thesis. Instead, the research focuses on more advanced adaptive backstepping techniques that guarantee stability and convergence even in the presence of unknown parameters. Nevertheless, this chapter serves as an introduction before the more complex adaptive backstepping techniques are introduced in Chapter 4. This chapter starts with a discussion on Lyapunov theory and stability concepts. Lyapunov’s direct method, which forms the basis of the backstepping technique, is outlined. In Section 3.3 the idea behind backstepping is introduced on a general second order nonlinear system and extended to a recursive procedure for higher order systems. The chapter closes with an example where the backstepping procedure is applied to the pitch autopilot design for a longitudinal missile model.

3.2 Lyapunov Theory and Stability Concepts

3.2.1 Lyapunov Stability Definitions

Consider the nonlinear dynamical system

ẋ = f(x(t), t),   x(t0) = x0    (3.1)

where x(t) ∈ Rⁿ and f : Rⁿ × R⁺ → Rⁿ is locally Lipschitz in x and piecewise continuous in t.

Definition 3.1 (Lipschitz condition). A function f(x, t) satisfies a Lipschitz condition on D with Lipschitz constant L if

|f(x, t) − f(y, t)| ≤ L|x − y|    (3.2)

for all points (x, t) and (y, t) in D. (Note that Lipschitz continuity is a stronger condition than continuity. For example, the function f(x) = √x is continuous on D = [0, ∞), but it is not Lipschitz continuous on D, since its slope approaches infinity as x approaches zero.)

An equilibrium point xe ∈ Rⁿ of (3.1) is such that f(xe) = 0. It can be assumed, without loss of generality, that the system (3.1) has an equilibrium point xe = 0. The following definition gives the stability of this equilibrium point [106].

Definition 3.2 (Stability in the sense of Lyapunov). The equilibrium point xe = 0 of the system (3.1) is

• stable if for each ε > 0 and any t0 > 0, there exists a δ(ε, t0) > 0 such that |x(t0)| < δ(ε, t0) ⇒ |x(t)| < ε, ∀t ≥ t0;


• uniformly stable if for each ε > 0 and any t0 > 0, there exists a δ(ε) > 0 such that |x(t0)| < δ(ε) ⇒ |x(t)| < ε, ∀t ≥ t0;
• unstable if it is not stable;
• asymptotically stable if it is stable, and for any t0 > 0, there exists an η(t0) > 0 such that |x(t0)| < η(t0) ⇒ |x(t)| → 0 as t → ∞;
• uniformly asymptotically stable if it is uniformly stable, and there exists a δ > 0 independent of t such that ∀ε > 0 there exists a T(ε) > 0 such that |x(t0)| < δ ⇒ |x(t)| < ε, ∀t ≥ t0 + T(ε);
• exponentially stable if for any ε > 0 there exists a δ(ε) > 0 such that |x(t0)| < δ ⇒ |x(t)| < ε e^(−α(t−t0)), ∀t > t0 ≥ 0

for some α > 0. Stability in the sense of Lyapunov is a very mild requirement on equilibrium points. In particular, it includes the idea that solutions are bounded, but at the same time requires that the bound on the solution can be made arbitrarily small by restriction of the size of the initial condition. The main difference between stability and uniform stability is that in the latter case δ is independent of t0 . Asymptotic stability additionally requires solutions to converge to the origin, while exponential stability requires this convergence rate to be exponential. Lyapunov stability can be further illustrated in R2 by Figure 3.1. All trajectories that start in the inner disc will remain in the outer disc forever (bounded).

Figure 3.1: Different types of stability illustrated in R2 [136].

The set of initial conditions D = {x0 ∈ Rⁿ | x(t0) = x0 and |x(t)| → 0 as t → ∞}


is the domain of attraction of the origin. If D is equal to Rⁿ, then the origin is said to be globally asymptotically stable. A globally asymptotically stable equilibrium point implies that xe is the unique equilibrium point, i.e. all solutions, regardless of their starting point, converge to this point. In some relevant cases it may not be possible to prove stability of xe, but it may still be possible to use Lyapunov analysis to show boundedness of the solution [106].

Definition 3.3 (Boundedness). The equilibrium point xe = 0 of the system (3.1) is

• uniformly ultimately bounded if there exist positive constants R, T(R), and b such that |x(t0)| ≤ R implies that |x(t)| < b, ∀t > t0 + T;
• globally uniformly ultimately bounded if it is uniformly ultimately bounded and R = ∞.

The constant b is referred to as the ultimate bound.

3.2.2 Lyapunov's Direct Method

To be of practical interest the stability conditions must not require that the differential equation (3.1) has to be explicitly solved, since this is in general not possible analytically. The Russian A. M. Lyapunov [135] found another way of proving stability, nowadays referred to as Lyapunov's direct method (or Lyapunov's second method). The method is a generalization of the idea that if there is some 'measure of energy' in a system, then studying the rate of change of the energy in the system is a way to ascertain stability. To make this more precise, this 'measure of energy' has to be defined in a more formal way. Let B(r) be a ball of size r around the origin, B(r) = {x ∈ Rⁿ : |x| < r}.

Definition 3.4. A continuous function V(x) is

• positive definite on B(r) if V(0) = 0 and V(x) > 0, ∀x ∈ B(r) such that x ≠ 0;
• positive semi-definite on B(r) if V(0) = 0 and V(x) ≥ 0, ∀x ∈ B(r) such that x ≠ 0;
• negative (semi-)definite on B(r) if −V(x) is positive (semi-)definite;
• radially unbounded if V(0) = 0, V > 0 on Rⁿ − {0}, and V(x) → ∞ as |x| → ∞.

A continuous function V(x, t) is


• positive definite on R × B(r) if there exists a positive definite function α(x) on B(r) such that V(0, t) = 0, ∀t ≥ 0 and V(x, t) ≥ α(x), ∀t ≥ 0, x ∈ B(r);
• radially unbounded if there exists a radially unbounded function α(x) such that V(0, t) = 0, ∀t ≥ 0 and V(x, t) ≥ α(x), ∀t ≥ 0, x ∈ Rⁿ;
• decrescent on R × B(r) if there exists a positive definite function α(x) on B(r) such that V(x, t) ≤ α(x), ∀t ≥ 0, x ∈ B(r).

Using these definitions, the following theorem can be used to determine stability for a system by studying an appropriate Lyapunov (energy) function V(x, t). The time derivative of V(x, t) is taken along the trajectories of the system (3.1):

V̇ |ẋ=f(x,t) = ∂V/∂t + (∂V/∂x) f(x, t).

Theorem 3.5 (Lyapunov's Direct Method). Let V(x, t) : R⁺ × D → R⁺ be a continuously differentiable and positive definite function, where D is an open region containing the origin.

• If V̇ |ẋ=f(x,t) is negative semi-definite for x ∈ D, then the equilibrium xe = 0 is stable.
• If V(x, t) is decrescent and V̇ |ẋ=f(x,t) is negative semi-definite for x ∈ D, then the equilibrium xe = 0 is uniformly stable.
• If V̇ |ẋ=f(x,t) is negative definite for x ∈ D, then the equilibrium xe = 0 is asymptotically stable.
• If V(x, t) is decrescent and V̇ |ẋ=f(x,t) is negative definite for x ∈ D, then the equilibrium xe = 0 is uniformly asymptotically stable.
• If there exist three positive constants c1, c2 and c3 such that c1|x|² ≤ V(x, t) ≤ c2|x|² and V̇ |ẋ=f(x,t) ≤ −c3|x|² for all t ≥ 0 and for all x ∈ D, then the equilibrium xe = 0 is exponentially stable.

Proof: The proof can be found in chapter 4 of [106].


The requirement for negative definiteness of the derivative of the Lyapunov function to guarantee asymptotic convergence is quite stringent. It may still be possible to conclude asymptotic convergence when this derivative is only negative semi-definite using LaSalle's invariance theorem (Theorem B.7 in Appendix B.1). However, this theorem is only valid for autonomous systems. For time-varying systems Barbalat's useful lemma can be used [118].

Lemma 3.6 (Barbalat's Lemma). Let φ : R⁺ → R be a uniformly continuous function on [0, ∞). If lim_(t→∞) ∫₀ᵗ φ(τ) dτ exists and is finite, then

lim_(t→∞) φ(t) = 0.

Combining this lemma with Lyapunov's direct method leads to the powerful theorem by LaSalle and Yoshizawa.

Theorem 3.7 (LaSalle-Yoshizawa). Let xe = 0 be an equilibrium point of (3.1) and suppose that f is locally Lipschitz in x uniformly in t. Let V : Rⁿ × R⁺ → R⁺ be a continuously differentiable function such that

• γ1(x) ≤ V(x, t) ≤ γ2(x)
• V̇ = ∂V/∂t + (∂V/∂x) f(x, t) ≤ −W(x) ≤ 0

∀t ≥ 0, ∀x ∈ Rⁿ, where γ1 and γ2 are continuous positive definite functions and where W is a continuous function. Then all solutions of (3.1) satisfy

lim_(t→∞) W(x(t)) = 0.

In addition, if W (x) is positive definite, then the equilibrium xe = 0 is globally uniformly asymptotically stable. Proof: The detailed proof can be found in Appendix B.1. The key advantage of this theorem is that it can be applied without finding the solutions of (3.1). Unfortunately, Theorem 3.7 does not give an actual prescription for determining the Lyapunov function V (x, t). Since the theorem only gives sufficient conditions, it can be tedious to find the correct Lyapunov function to establish the stability of an equilibrium point. However, the converse of the theorem also exists: if an equilibrium point is stable, then there exists a function V (x, t) satisfying the conditions of the theorem. A more formal explanation of Lyapunov stability theory can be found in Appendix B.1.

3.2.3 Lyapunov Theory and Control Design

In this section the Lyapunov function concept is extended to control design, i.e. Lyapunov theory will now be applied to create a closed-loop system with desirable stability properties. Consider the nonlinear system to be controlled

ẋ = f(x, u),   x ∈ Rⁿ,   u ∈ R,   f(0, 0) = 0    (3.3)

where x is the system state and u the control input. The control objective is to design a feedback control law α(x) for the control input u such that the equilibrium x = 0 is globally asymptotically stable. To prove stability a function V(x) is needed as a Lyapunov candidate, and it is required that its derivative along the solutions of (3.3) satisfies V̇(x) ≤ −W(x), where W(x) is a positive semi-definite function. The straightforward approach for finding α(x) would be to pick a positive definite, radially unbounded function V(x) and then choose α(x) such that

(∂V/∂x)(x) f(x, α(x)) ≤ −W(x)   ∀x ∈ Rⁿ.    (3.4)

Careful selection is needed: while there may exist a stabilizing control law for (3.3), it may fail to satisfy (3.4). This problem motivated [5] and [190] to introduce the control Lyapunov function (CLF) concept.

Definition 3.8 (Control Lyapunov function). A smooth positive definite and radially unbounded function V : Rⁿ → R⁺ is called a control Lyapunov function (CLF) for the system (3.3) if

inf_(u∈R) { (∂V/∂x)(x) f(x, u) } < 0   ∀x ≠ 0.    (3.5)

Given a CLF for a system, a globally stabilizing control law can thus be found. In fact, in [5] it was demonstrated that the existence of such a globally stabilizing control law is equivalent to the existence of a CLF. This means that for each globally stabilizing control law, a corresponding CLF can be found and vice versa. This is illustrated in the following example [75].

Example 3.1 (A scalar system) Consider the feedback linearizable system

ẋ = −x³ + x + u    (3.6)

and let x = 0 be the desired equilibrium. Consider the simplest choice of CLF, the quadratic CLF

V(x) = ½x²,    (3.7)

and its time derivative along the solutions of (3.6)

V̇ = xẋ = x(−x³ + x + u).    (3.8)

There exist multiple choices of control law to render the above expression negative (semi-)definite. The most obvious choice is the control law

u = x³ − cx,   c > 1,    (3.9)


which is equivalent to applying FBL, since it cancels all nonlinearities, thus resulting in the linear feedback system ẋ = −(c − 1)x. Obviously, this control law does not recognize the fact that −x³ is a useful nonlinearity when stabilizing around x = 0 and thereby wastes control effort canceling this term. Also, the presence of x³ in the control law (3.9) is dangerous from a robustness perspective. Suppose that the true system is equal to ẋ = −0.99x³ + x + u; applying control law (3.9) could lead to an unstable closed-loop system. As an alternative the much simpler feedback

u = −cx,   c > 1    (3.10)

is selected. This results in V̇ = −x⁴ − (c − 1)x² < 0 for x ≠ 0. By Theorem 3.7 this control law again renders the origin globally asymptotically stable. However, the new control is more efficient and also more robust to model uncertainty as compared to the previous control (3.9). This can be illustrated using numerical simulations. Plots of the closed-loop system response for both controllers can be found in Figure 3.2. The first plot of Figure 3.2 shows the regulation of the states for both controllers for x(0) = 5 and control gain c = 2. As expected, the system with the second 'smart' controller (3.10) has a more rapid convergence because it makes use of the stabilizing nonlinearity. The bottom plot of Figure 3.2 illustrates that far less control effort is required when the stabilizing nonlinearity is not canceled.


Figure 3.2: Regulation of x and control effort u for both stabilizing controllers with x(0) = 5 and c = 2.
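The comparison in Figure 3.2 can be reproduced with a few lines of code. The Python sketch below integrates the scalar system (3.6) with the feedback-linearizing law (3.9) and the simpler law (3.10) using forward Euler; plotting is omitted, and the step size is an arbitrary illustrative choice rather than the exact setting used for the figure.

def simulate(u_law, x0=5.0, c=2.0, dt=1e-3, t_end=5.0):
    # Forward-Euler simulation of x_dot = -x**3 + x + u for a given feedback law
    x, xs, us = x0, [], []
    for _ in range(int(t_end / dt)):
        u = u_law(x, c)
        xs.append(x)
        us.append(u)
        x = x + dt * (-x**3 + x + u)
    return xs, us

fbl = lambda x, c: x**3 - c * x      # Eq. (3.9): cancels the stabilizing -x^3 term
smart = lambda x, c: -c * x          # Eq. (3.10): keeps the stabilizing nonlinearity
x_fbl, u_fbl = simulate(fbl)
x_smart, u_smart = simulate(smart)
# max(u_fbl) is large (it contains x**3 at x(0) = 5), while max(abs(u_smart)) stays small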


The main deficiency of the CLF concept as a design tool is that for more complex nonlinear systems a CLF is in general not known and the task of finding one may be as difficult as that of designing a stabilizing feedback law. At the end of the 1980’s backstepping was introduced in a number of papers, e.g. [111, 191, 201], as a recursive design tool to solve this problem for several important classes of nonlinear systems.

3.3 Backstepping Basics

The previous section dealt with the general Lyapunov theory and introduced the concept of the CLF. It was stated that if a CLF exists, a control law which makes the closed-loop system globally asymptotically stable can be found. However, it can be a problem to find a CLF or the corresponding control law. Using the backstepping procedure a CLF and a control law can be found simultaneously, as will be illustrated in this section.

3.3.1 Integrator Backstepping

Consider the second order system

ẋ1 = f(x1) + g(x1)x2    (3.11)
ẋ2 = u    (3.12)

where (x1, x2) ∈ R² are the states, u ∈ R is the control input and g(x1) ≠ 0. The control objective is to track the smooth reference signal yr(t) (all derivatives known and bounded) with the state x1. This tracking control problem can be transformed to a regulation problem by introducing the tracking error variable z1 = x1 − yr and rewriting the x1-subsystem in terms of this variable as

ż1 = f(x1) + g(x1)x2 − ẏr    (3.13)

The idea behind backstepping is to regard the state x2 as a control input for the z1-subsystem. By a correct choice of x2 the z1-subsystem can be made globally asymptotically stable. Since x2 is just a state variable and not the real control input, x2 is called a virtual control and its desired value x2^des ≜ α(x1, yr, ẏr) a stabilizing function. For the z1-subsystem a CLF V1(z1) can be selected such that the stabilizing virtual control law renders its time derivative along the solutions of (3.13) negative (semi-)definite, i.e.

V̇1 = (∂V1/∂z1)[f(x1) + g(x1)α(x1, yr, ẏr) − ẏr] ≤ −W(z1),    (3.14)

where W(z1) is positive definite. The difference between the virtual control x2 and its desired value α(x1, yr, ẏr) is defined as the tracking error variable

z2 = x2 − x2^des = x2 − α(x1, yr, ẏr).    (3.15)


The system can now be rewritten in terms of the new state z2 as

ż1 = f + g(z2 + α) − ẏr    (3.16)
ż2 = u − (∂α/∂x1)[f + g(z2 + α)] − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr,    (3.17)

where the time derivative of α can be computed analytically, since it is a known expression. The task is now to find a control law for u that ensures that z2 converges to zero, i.e. x2 converges to its desired value α. To help find this stabilizing control law, a CLF for the complete (z1, z2)-system is needed. The most obvious solution is to augment the CLF of the first design step, V1, with an additional quadratic term that penalizes the error z2 as

V2(z1, z2) = V1(z1) + ½z2².    (3.18)

Taking the derivative of V2 results in

V̇2 = V̇1 + z2ż2
   = V̇1 + z2[u − (∂α/∂x1)(f + g(z2 + α)) − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr]
   = (∂V1/∂z1)[f + g(z2 + α) − ẏr] + z2[u − (∂α/∂x1)(f + g(z2 + α)) − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr]
   = (∂V1/∂z1)[f + gα − ẏr] + z2[(∂V1/∂z1)g + u − (∂α/∂x1)(f + g(z2 + α)) − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr]
   ≤ −W(z1) + z2[(∂V1/∂z1)g + u − (∂α/∂x1)(f + g(z2 + α)) − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr],

where the cross term (∂V1/∂z1)gz2 due to the presence of z2 in (3.16) is grouped together with u. The first term of the above expression is already negative definite by the choice of the stabilizing function α, and the bracketed term can be made negative semi-definite by selecting the control law

u = −cz2 − (∂V1/∂z1)g + (∂α/∂x1)[f + g(z2 + α)] + (∂α/∂yr)ẏr + (∂α/∂ẏr)ÿr,    (3.19)

where the gain c > 0. This control law yields

V̇2 ≤ −W(z1) − cz2²,

and thus by Theorem 3.7 renders the equilibrium (z1, z2) = 0 globally stable. Furthermore, the tracking problem is solved, since lim_(t→∞) x1 → yr. Note that selecting the


CLF quadratic with a corresponding (virtual) feedback control law is usually the most straightforward choice. However, other choices of CLF are also possible and in some cases may even result in a more efficient controller by e.g. not canceling stabilizing nonlinearities. This is demonstrated in the following example [75].

Example 3.2 (A second order system) Consider the scalar system of Example 3.1 augmented with an integrator

ẋ1 = −x1³ + x1 + x2    (3.20)
ẋ2 = u.    (3.21)

The control objective is to regulate x1 to zero. A control law for the x1-subsystem was already found in Example 3.1. This control law is now used as a virtual control law for x2 with c = 2:

x2^des = −2x1 ≜ α.    (3.22)

The error between x2 and its desired value α is defined as the tracking error z

z = x2 − α = x2 + 2x1.    (3.23)

Rewriting the system in terms of the state x1 and z results in

ẋ1 = −x1³ − x1 + z    (3.24)
ż = u + 2(−x1³ − x1 + z).    (3.25)

Now the CLF of Example 3.1 is augmented for the (x1, z)-system with an extra term that penalizes the tracking error z as

V2(x1, z) = ½x1² + ½z².    (3.26)

Taking the derivative of V2 results in

V̇2 = x1ẋ1 + zż = −x1⁴ − x1² + z(u − 2x1³ − x1 + 2z).    (3.27)

Examining (3.27) reveals that all indefinite terms can be canceled by the control law

u = −c2z + 2x1³ + x1,   c2 > 2.

By Theorem 3.7 this control law stabilizes the (x1, z)-system. However, it may be possible to find another, more efficient controller that recognizes the naturally stabilizing dynamics of the x1-subsystem. In order to find this efficient controller the definition of the CLF V2 is postponed. Consider the CLF

V2(x1, z) = Q(x1) + ½z²,    (3.28)


where Q(x1) is a CLF for the x1-subsystem. Taking the derivative of V2 results in

V̇2 = −Q′(x1³ + x1) + z(Q′ + u − 2x1³ − 2x1 + 2z).

The extended design freedom can now be used to cancel the indefinite terms by selecting Q′ = 2x1³ + 2x1, i.e.

Q(x1) = ½x1⁴ + x1²    (3.29)

which is positive definite and thus a valid choice of CLF. This reduces the derivative of V2 to

V̇2 = −2x1⁶ − 4x1⁴ − 2x1² + z(u + 2z).

A much simpler control law

u = −c2z,   c2 > 2    (3.30)

3.3.2 Extension to Higher Order Systems The backstepping procedure demonstrated on second order systems in the previous section can be applied recursively to higher order systems. The only difference is that there are more virtual states to ‘backstep’ through. Starting with the state ‘furthest’ from the actual control, each step of the backstepping technique can be divided into three parts: 1. Introduce a virtual control and an error state, and rewrite the current state equation in terms of these, 2. Choose a CLF for the system, treating it as a final stage, 3. Choose a stabilizing feedback term for the virtual control that makes the CLF stabilizable. The CLF is augmented at subsequent steps to reflect the presence of new virtual states, but the same three stages are followed at each step.



Figure 3.3: Response of x1 , x2 and control effort u for both backstepping controllers with x1 (0) = 2, x2 (0) = −2 and c = 2, c2 = 3.
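For completeness, a small Python sketch of the two backstepping controllers of this example is given below; it is a simplified re-implementation using forward Euler, not the simulation setup used for Figure 3.3. Controller 1 is the full-cancellation law found with the quadratic CLF (3.26); controller 2 is the simpler law (3.30) obtained with the non-quadratic CLF (3.28).

def dynamics(x1, x2, u):
    # System (3.20)-(3.21)
    return -x1**3 + x1 + x2, u

def bs1(x1, x2, c=2.0, c2=3.0):
    # Backstepping controller 1 (quadratic CLF): u = -c2*z + 2*x1**3 + x1
    z = x2 + c * x1
    return -c2 * z + 2.0 * x1**3 + x1

def bs2(x1, x2, c=2.0, c2=3.0):
    # Backstepping controller 2 (non-quadratic CLF (3.28)): u = -c2*z, Eq. (3.30)
    z = x2 + c * x1
    return -c2 * z

def simulate(controller, x1=2.0, x2=-2.0, dt=1e-3, t_end=5.0):
    trajectory = []
    for _ in range(int(t_end / dt)):
        u = controller(x1, x2)
        trajectory.append((x1, x2, u))
        dx1, dx2 = dynamics(x1, x2, u)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return trajectory

traj1 = simulate(bs1)
traj2 = simulate(bs2)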

The backstepping procedure for general strict feedback systems is now stated more formally. Consider the nonlinear system

ẋ1 = f1(x1) + g1(x1)x2
ẋ2 = f2(x1, x2) + g2(x1, x2)x3
⋮
ẋi = fi(x1, x2, ..., xi) + gi(x1, x2, ..., xi)x_(i+1)    (3.31)
⋮
ẋn = fn(x1, x2, ..., xn) + gn(x1, x2, ..., xn)u

where xi ∈ R, u ∈ R and gi ≠ 0. The control objective is to force the output y = x1 to asymptotically track the reference signal yr(t), whose first n derivatives are assumed to be known and bounded. The backstepping procedure starts by defining the tracking errors as

z1 = x1 − yr
zi = xi − α_(i−1),   i = 2, ..., n.    (3.32)


The system (3.31) can be rewritten in terms of these new variables as

ż1 = f1(x1) + g1(x1)x2 − ẏr
ż2 = f2(x1, x2) + g2(x1, x2)x3 − α̇1
⋮
żi = fi(x1, x2, ..., xi) + gi(x1, x2, ..., xi)x_(i+1) − α̇_(i−1)    (3.33)
⋮
żn = fn(x1, x2, ..., xn) + gn(x1, x2, ..., xn)u − α̇_(n−1)

The CLFs are selected as

Vi = V_(i−1) + ½zi²,   i = 1, ..., n,    (3.34)

and the (virtual) feedback controls as

α1 = (1/g1)[−c1z1 − f1 + ẏr]
αi = (1/gi)[−g_(i−1)z_(i−1) − cizi − fi + α̇_(i−1)],   i = 2, ..., n    (3.35)
u = αn

with gains ci > 0.

Theorem 3.9 (Backstepping Design for Tracking). If Vn is radially unbounded and gi ≠ 0 holds globally, then the closed-loop system, consisting of the tracking error dynamics of (3.33) and the control u specified according to (3.35), has a globally stable equilibrium at (z1, z2, ..., zn) = 0 and lim_(t→∞) zi = 0. In particular, this means that global asymptotic tracking is achieved:

lim_(t→∞) [x1 − yr] = 0.

Proof: The time derivative of Vn along the solutions of (3.33) is

V̇n = −Σ_(i=1..n) ci zi²,

which proves that the equilibrium (z1 , z2 , ..., zn ) = 0 is globally uniformly stable. By Theorem 3.7 it follows further that limt→∞ zi = 0. A block scheme of the resulting closed-loop system for n = 3 and a constant reference signal yr is shown in Figure 3.4. The recursive nature of the procedure is clearly visible. This concludes the discussion of the theory behind backstepping. In [118] it is demonstrated that the procedure can be applied to all nonlinear systems of a lower triangular form, including multivariable systems.



Figure 3.4: Closed-loop dynamics of a general strict feedback control system with backstepping controller for n = 3. It is assumed that yr^(i) = 0, i = 1, 2, 3.
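The recursion (3.32)–(3.35) and the Lyapunov derivative of Theorem 3.9 can be verified symbolically for a concrete case. The Python/SymPy sketch below does this for a hypothetical third-order strict-feedback system (the f_i and g_i chosen here are illustrative and do not come from the thesis) with a constant reference, as in Figure 3.4.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
c1, c2, c3 = sp.symbols('c1 c2 c3', positive=True)

# Hypothetical strict-feedback system of the form (3.31); yr is constant and set to 0,
# so all its derivatives vanish and z1 = x1.
f1, g1 = -x1**3, sp.Integer(1)
f2, g2 = x1 * x2, sp.Integer(1)
f3, g3 = sp.sin(x1), 2 + sp.cos(x1)

# Recursive design following (3.32) and (3.35)
z1 = x1
alpha1 = (-c1 * z1 - f1) / g1
z2 = x2 - alpha1
x1dot = f1 + g1 * x2
alpha1dot = sp.diff(alpha1, x1) * x1dot
alpha2 = (-g1 * z1 - c2 * z2 - f2 + alpha1dot) / g2
z3 = x3 - alpha2
x2dot = f2 + g2 * x3
alpha2dot = sp.diff(alpha2, x1) * x1dot + sp.diff(alpha2, x2) * x2dot
u = (-g2 * z2 - c3 * z3 - f3 + alpha2dot) / g3

# Check the Lyapunov derivative of Theorem 3.9: V_n_dot = -c1*z1**2 - c2*z2**2 - c3*z3**2
x3dot = f3 + g3 * u
Vdot = z1 * x1dot + z2 * (x2dot - alpha1dot) + z3 * (x3dot - alpha2dot)
print(sp.simplify(sp.expand(Vdot + c1 * z1**2 + c2 * z2**2 + c3 * z3**2)))   # prints 0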

3.3.3 Example: Longitudinal Missile Control

In this section the backstepping method is demonstrated in a first flight control example: the tracking control design for a longitudinal missile model. A second order nonlinear model of a generic surface-to-air missile has been obtained from [109]. The model is nonlinear, but not overly complex. The model consists of the longitudinal force and moment equations representative of a missile traveling at an altitude of approximately 6000 meters, with aerodynamic coefficients represented as third order polynomials in angle of attack α and Mach number M. The nonlinear equations of motion in the pitch plane are given by

α̇ = q + (q̄S/(m VT)) [Cz(α, M) + bz(M)δ]    (3.36)
q̇ = (q̄S d/Iyy) [Cm(α, M) + bm(M)δ],    (3.37)

while the aerodynamic coefficients of the model are approximated by

bz(M) = 1.6238M − 6.7240,
bm(M) = 12.0393M − 48.2246,
Cz(α, M) = ϕz1(α) + ϕz2(α)M,
Cm(α, M) = ϕm1(α) + ϕm2(α)M,

where

ϕz1(α) = −288.7α³ + 50.32α|α| − 23.89α,
ϕz2(α) = −13.53α|α| + 4.185α,
ϕm1(α) = 303.1α³ − 246.3α|α| − 37.56α,
ϕm2(α) = 71.51α|α| + 10.01α.

These approximations are valid for the flight envelope −10o < α < 10o and 1.8 < M < 2.6. To facilitate the control design, the nonlinear missile model (3.36) and (3.37) is rewritten in the more general state-space form as x˙ 1

=

x2 + f1 (x1 ) + g1 u

(3.38)

x˙ 2

=

f2 (x1 ) + g2 u,

(3.39)

where x1 = α, x2 = q,   f1 (x1 ) = C1 ϕz1 (x1 ) + ϕz2 (x1 )M ,   f2 (x1 ) = C2 ϕm1 (x1 ) + ϕm2 (x1 )M , g1 = C1 bz , g2 = C2 bm , q¯S q¯Sd C1 = , C2 = . mVT Iyy

The control objective considered here is to design an autopilot with the backstepping method that tracks a commanded reference yr (all derivatives known and bounded) with the angle of attack x1 . It is assumed that the aerodynamic force and moment functions are exactly known and the Mach number M is treated as a parameter available for measurement. Furthermore, the contribution of the fin deflection on the right-hand side of the force equation (3.38) is ignored during the control design, since the backstepping method can only handle nonlinear systems of lower-triangular form, i.e. the assumption is made that the fin surface is a pure moment generator. This a valid assumption for most types of aircraft and aerodynamically controlled missiles, often made in flight control system design, see e.g. [56, 76]. The backstepping procedure starts by defining the tracking errors as z1

=

z2

=

x1 − yr

x2 − α1

where α1 is the virtual control to be designed in this first design step. Step 1: The z1 -dynamics satisfy z˙1 = x2 + f1 − y˙ r = z2 + α1 + f1 − y˙ r .

(3.40)

Consider a candidate CLF V1 for the z1 -subsystem defined as V1 (z1 ) =

 1 2 z + k1 λ21 , 2 1

(3.41)

3.3

BACKSTEPPING BASICS

49

Rt where the gain k1 > 0 and the integrator term λ1 = 0 z1 dt are introduced to robustify the control design against the effect of the neglected control term. The derivative of V1 along the solutions of (3.40) is given by V˙ 1 = z1 z˙1 + k1 λ1 z1 = z1 [z2 + α1 + f1 − y˙ r + k1 λ1 ] . The virtual control α1 is selected as α1 = −c1 z1 − k1 λ1 − f1 + y˙ r ,

c1 > 0

(3.42)

to render the derivative V˙ 1 = −c1 z12 + z1 z2 . The cross term z1 z2 will be dealt with in the second design step. Step 2: The z2 -dynamics are given by z˙2 = f2 + g2 u − α˙ 1 ,

(3.43)

where α˙ 1 = −c1 (x2 + f1 − y˙ r ) − k1 z1 − f˙1 + y¨r . The CLF V1 is augmented with an additional term to penalize z2 as 1 V2 (z1 , z2 ) = V1 + z22 . 2

(3.44)

The derivative of V2 along the solutions of (3.40) and (3.43) satisfies V˙ 2 = −c1 z12 + z1 z2 + z2 [f2 + g2 u − α˙ 1 ] = −c1 z12 + z2 [z1 + f2 + g2 u − α˙ 1 ] . A control law for u can now be defined to cancel all indefinite terms, the most straightforward choice is given by u=

1 [−c2 z2 − z1 − f2 + α˙ 1 ] . g2

By Theorem 3.7 limt→∞ z1 , z2 = 0, which means that the reference signal yr is asymptotically tracked with x1 . Numerical simulations of the longitudinal missile model with the backstepping controller c have been performed in MATLAB/Simulink . A third order fixed time step solver with sample time 0.01s was used. First, consider the simulations using the ‘idealized’ missile model, i.e. the lower triangular model as used for the control design with g1 = 0. Figure 3.5 shows the response of the system states and the control input for a series of angle of attack doublets at Mach 2.0. The red line represents the reference signal, while the closed-loop response of the system for three different gain selections is plotted in blue. As can be seen in the plots perfect tracking is achieved by increasing the control gains. However, when the full missile model is used, with g1 6= 0, the controllers without integral gain only achieve bounded tracking as can be seen in Figure 3.6. Setting the integral gain k1 = 10 removes the bounded tracking error. Many other methods for robustifying

50

3.3

BACKSTEPPING

pitch rate (deg/s)

angle of attack (deg)

the backstepping design against unmodeled dynamics can be found in literature. However, for large uncertainties these robust methods fail to give adequate performance or they tend to lead to conservative control laws. Adaptive backstepping is a more sophisticated method of dealing with large model uncertainties and is the subject of the next chapter.

10 0

0

c1,c2=10,k1=0 c1,c225 =10,k1=10 reference

5

10

15

20

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

40 20 0 −20 −40 0

control deflection (deg)

c1,c2=1,k1=0

−10

10 0 −10 0

Figure 3.5: Numerical Simulations at Mach 2.0 of the idealized longitudinal missile model with backstepping control law for 3 different gain selections.

3.3

angle of attack (deg)

BACKSTEPPING BASICS

51

10 0 c1,c2=1,k1=0

−10 0

c1,c2=10,k1=0 5

10

15

20

c ,c =10,k =10 1 2 25 1

30

pitch rate (deg/s)

reference 50

0

control deflection (deg)

−50 0

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

10 0 −10 0

Figure 3.6: Numerical Simulations at Mach 2.0 of the full longitudinal missile model with backstepping control law for 3 different gain selections.

Chapter

4

Adaptive Backstepping In the previous chapter the basic ideas of the backstepping control design approach for nonlinear systems were explained. The backstepping approach allows the designer to construct controllers for a wide range of nonlinear systems in a structured, recursive way. However, the method assumes that an accurate system model is available and this may not be the case for real world physical systems. In this chapter the backstepping framework is extended with a dynamic feedback part that constantly updates the static feedback control part to deal with nonlinear systems with parametric uncertainties. In the first part of the chapter the concept of dynamic feedback is explained in a simple example and after that the standard tuning functions adaptive backstepping method is derived. An overview of methods to deal with non-parametric uncertainties such as measurement noise is also presented. In the second part command filters are introduced to simplify the adaptive backstepping method and to make the dynamic update laws more robust to input saturation.

4.1 Introduction Backstepping can be used to stabilize a large class of nonlinear systems in a structured manner, while giving the control designer a lot of freedom. However, the true potential of backstepping was discovered only when the approach was developed for nonlinear systems with structured uncertainty. With adaptive backstepping [101, 117] global stabilization is achieved in the presence of unknown parameters, and with robust backstepping [64, 66, 87] it is achieved in the presence of disturbances. The ease with which uncertainties and unknown parameters can be incorporated in the backstepping procedure is what makes the method so interesting. Robust backstepping and other robust nonlinear control techniques have been studied extensively in literature. However, these methods tend to yield rather conservative control laws, especially for cases where the uncertainties are large. Furthermore, nonlinear 53

54

ADAPTIVE BACKSTEPPING

4.2

damping terms and switching control functions are often used to guarantee robustness in the presence of uncertainties, which may result in undesirable high gain control or chattering in the control signal. High gain feedback may cause several problems, such as saturation of the control (actuators), high sensitivity to measurement noise, excitation of unmodeled dynamics and large transient errors. Adaptive backstepping control has a more sophisticated way of dealing with large uncertainties. Adaptive backstepping controllers do not only employ static feedback like the controllers designed in the previous section, but also contain a dynamic feedback part. This dynamic part of the control law is used as a parameter update law to continuously adapt the static part to new parameter estimates. Adaptive backstepping achieves boundedness of the closed-loop states and convergence of the tracking error to zero for nonlinear systems with parametric uncertainties. The first adaptive backstepping method [101] employed overparametrization, i.e. more than one update law was used for each parameter. Overparametrization is not necessarily disadvantageous from a performance point of view, but it is not very efficient in a numerical implementation of the controller due to the resulting higher dynamical order. With the introduction of the tuning functions adaptive backstepping method [117] the overparametrization was removed so that only one dynamic update law for each unknown parameter is needed. The first part of this chapter, Section 4.2, discusses the tuning functions adaptive backstepping approach. Dynamic feedback is introduced on a second order system, after which the method is extended to higher order systems. The tuning functions approach has a number of shortcomings, two of the most important being its analytical complexity and its sensitivity to input saturation. In Section 4.3 the constrained adaptive backstepping method is introduced, which makes use of command filters to completely remove these drawbacks. The use of filters in the backstepping framework was first proposed as dynamic surface control in [212, 213] to remove the tedious analytical calculation of the time derivatives of the virtual control laws at each design step. In [58] the idea of using command filters is extended in such a way that the dynamic update laws of adaptive backstepping are robustified against the effects of the input saturation, resulting in the constrained adaptive backstepping approach.

4.2 Tuning Functions Adaptive Backstepping In this section the tuning functions adaptive backstepping method as conceived in [117] is discussed. The ideas of the recursive backstepping approach of the previous chapter are extended to nonlinear systems with parametric uncertainties. Dynamic feedback is employed as parameter update law to continuously adapt the static feedback control to new parameter estimates. The controller is still constructed in a recursive manner, introducing a virtual control law and intermediate update laws at each design step, while extending the CLF, until the control law and the dynamic update laws are found in the last design step.

4.2

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

55

4.2.1 Dynamic Feedback The difference between a static and a dynamic nonlinear design will be illustrated using the scalar system of Example 3.1 augmented with an unknown constant parameter θ in front of the nonlinear term. Example 4.1 (An uncertain scalar system) Consider the feedback linearizable system x˙ = θx3 + x + u

(4.1)

where θ ∈ R is an unknown constant parameter. The control objective is regulation of x to zero. If θ where known, the control u = −θx3 − cx,

c > 1,

ˆ 3 − cx, u = −θx

c > 1.

(4.2)

would render the derivative of V0 (x) = 12 x2 negative definite: V˙ 0 = −(c−1)x2 . Since θ is not known, its certainty equivalence form is employed in which θ is replaced by ˆ the parameter estimate θ: (4.3)

Substituting (4.3) into (4.1) gives ˜ 3 − (c − 1)x, x˙ = θx

(4.4)

ˆ θ˜ = θ − θ.

(4.5)

˜ 4 − (c − 1)x2 . V˙ 0 = θx

(4.6)

where θ˜ is the parameter estimation error, defined as The derivative of V0 (x) = 12 x2 now satisfies

It is not possible to conclude anything about the stability of (4.4), since the first term ˜ The idea is now to extend the control of (4.6) contains the unknown parameter error θ. ˆ To design this update law, V0 is augmented with law with a dynamic update law for θ. a quadratic term to penalize the parameter estimation error θ˜ as ˜ = 1 x2 + 1 θ˜2 , V1 (x, θ) 2 2γ

(4.7)

where γ > 0 is the adaptation gain. The derivative of this function is V˙ 1

= = =

xx˙ +

1 ˜˜˙ θθ γ

˜ 4 − (c − 1)x2 + 1 θ˜θ˜˙ θx γ   1 ˜˙ 2 4 ˜ −(c − 1)x + θ x + θ . γ

(4.8)

56

4.2

ADAPTIVE BACKSTEPPING

˜ However, the The above equation still contains an indefinite term with the unknown θ. ˙ ˙ dynamics of θ˜ = −θˆ can now be utilized, which means that the indefinite term can be ˆ˙ Choosing the update law canceled with an appropriate choice of θ. ˙ θˆ = −θ˜˙ = γx4

(4.9)

V˙ 1 = −(c − 1)x2 ≤ 0.

(4.10)

yields ˜ = 0 is globally stable and by Theorem It can be concluded that the equilibrium (x, θ) 3.7 the regulation property limt→∞ x = 0 is satisfied. Note that since the parameter estimation error term in (4.8) is completely canceled, it cannot be concluded that the parameter estimation error θ˜ converges to zero. This is a characteristic of this type of Lyapunov based adaptive controllers: the idea is to satisfy a total system stability criterion, the CLF, rather than to optimize the error in estimation. The advantage is that global asymptotic stability of the closed-loop system is guaranteed. This is in contrast with a traditional estimation-based design, where the identifiers are too slow to deal with nonlinear system dynamics [118]. The resulting adaptive system consists of (4.1) with control law (4.3) and update law (4.9). The response of the closed-loop system with θ = 1 for several values of update gain γ can be found in Figure 4.1. The initial state of the system is x(0) = 2, the ˆ control gain c = 2 and the initial parameter estimate θ(0) = 0 . As can be seen from the figure, the adaptive controller manages to stabilize the uncertain nonlinear system. The parameter estimate converges to a constant value for each of the update gain selections, but never converges to the true parameter value. The adaptive design of the above example is very simple because the uncertainty is in the span of the control , i.e. matched. Adaptive backstepping extends the design approach of the example to a recursive procedure that can deal with nonlinear systems containing parametric uncertainties that are separated by one or more integrators from the control input. Consider the second order system x˙ 1 x˙ 2

= =

ϕ(x1 )T θ + x2 u

(4.11) (4.12)

where (x1 , x2 ) ∈ R2 are the states, u ∈ R is the control input, ϕ(x1 ) is a smooth, nonlinear function vector, i.e. the regressor vector, and θ is a vector of unknown constant parameters. The control objective is to track the smooth reference signal yr (t) (all derivatives known and bounded) with the state x1 . The adaptive backstepping procedure starts by introducing the tracking errors z1 = x1 − yr and z2 = x2 − α. The virtual control α is now defined in terms of the parameter estimate θˆ as ˆ yr , y˙ r ) = −c1 z1 − ϕT θˆ + y˙ r , α(x1 , θ,

c1 > 0.

(4.13)

4.2

57

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

state x

3 2 1 0 0

0.5

1

1.5

2

2.5 time (s)

3

3.5

4

4.5

5

0 gamma = 0.1 gamma = 1 gamma = 10

input u

−10 −20 −30 −40

parameter estimate

−50 0

0.5

1

1.5

2

2.5 time (s)

3

3.5

4

4.5

5

0.5

1

1.5

2

2.5 time (s)

3

3.5

4

4.5

5

6 4 2 0 0

ˆ Figure 4.1: State x, control effort u and parameter estimate θˆ for initial values x(0) = 2, θ(0) =0 and control gain c = 2 with different values of update gain γ. The parameter estimate does not converge to the true parameter value θ = 1.

This virtual control reduces the (z1 , z2 )-dynamics to z˙1

=

z˙2

=

ϕT θ˜ + z2 − c1 z1 ∂α ∂α ∂α ∂α ˆ˙ u− x˙ 1 − y˙ r − y¨r − θ, ∂x1 ∂yr ∂ y˙ r ∂ θˆ

(4.14) (4.15)

where θ˜ = θ − θˆ is the parameter estimation error. A CLF is defined that not only penalizes the tracking errors, but also the estimation error as   ˜ = 1 z 2 + z 2 + θ˜T Γ−1 θ˜ (4.16) V (z1 , z2 , θ) 1 2 2 with Γ = ΓT > 0. The time derivative of V along the solutions of (4.14) is V˙

˜1 = −c1 z12 + z1 z2 + ϕT θz   ∂α ∂α ∂α ∂α ˆ˙ ˙ +z2 u − x˙ 1 − y˙ r − y¨r − θ − θ˜T Γ−1 θˆ ∂x1 ∂yr ∂ y˙ r ∂ θˆ   ∂α T ˆ ∂α ∂α ∂α ∂α ˆ˙ 2 = −c1 z1 + z2 z1 + u + ϕ θ− x2 − y˙ r − y¨r − θ ∂x1 ∂x1 ∂yr ∂ y˙ r ∂ θˆ    ∂α ˙ −θ˜T Γ−1 θˆ − Γϕ z1 − z2 . ∂x1

58

4.2

ADAPTIVE BACKSTEPPING

In order to render the derivative of the CLF V negative definite, a control law for u and a dynamic update law for θˆ are selected as u = ˙ θˆ =

∂α T ˆ ∂α ∂α ∂α ∂α ˆ˙ −c2 z2 − z1 − θ ϕ θ+ x2 + y˙ r + y¨r + ˆ ∂x1 ∂x1 ∂yr ∂ y˙ r ∂ θ   ∂α Γϕ z1 − z2 ∂x1

(4.17) (4.18)

where c2 > 0. This results in V˙

=

−c1 z12 − c2 z22

˜ = 0 is globally uniformly stable. Furtherand it follows that the equilibrium (z1 , z2 , θ) more, limt→∞ z1 , z2 → 0, i.e. global asymptotic tracking is achieved. Note again that convergence of the parameter estimate θˆ is guaranteed, but not necessarily convergence to the real value of θ. In this adaptive backstepping design the choice of parameter update law was postponed until the second design step. This will become a lot more complicated for higher order systems as considered in the next part of the chapter.

4.2.2 Extension to Higher Order Systems The adaptive backstepping method is now extended to higher order systems. Consider the strict feedback system x˙ i x˙ n

= fi (¯ xi ) + gi (¯ xi )xi+1 , = fn (x) + gn (x)u

i = 1, ..., n − 1

(4.19)

where xi ∈ R, u ∈ R and x ¯i = (x1 , x2 , ..., xi ). Unlike before, the smooth functions fi and gi now contain the unknown dynamics of the system and will have to be approximated. It is assumed that gi does not change sign, i.e. gi > 0 or gi < 0, in the domain of operation. For most physical systems at least the sign of these functions is known. It is assumed that there exist vectors θfi and θgi such that fi (¯ xi ) = gi (¯ xi ) =

ϕfi (¯ xi )T θfi ϕgi (¯ xi )T θgi ,

where ϕ∗ are the regressors and θ∗ are vectors of unknown constant parameters. Then the estimates of the nonlinear functions fi and gi are defined as fˆi (¯ xi , θˆfi ) = gˆi (¯ xi , θˆgi ) =

ϕfi (¯ xi )T θˆfi ϕg (¯ xi )T θˆg i

i

and the parameter estimation errors as θ˜fi = θfi − θˆfi and θ˜gi = θgi − θˆgi . The system (4.20) can be rewritten as x˙ i

=

ϕfi (¯ xi )T θfi + ϕgi (¯ xi )T θgi xi+1

x˙ n

=

ϕfn (¯ xn )T θfn + ϕgn (¯ xn )T θgn u.

4.2

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

59

The control objective is to force the output y = x1 to asymptotically track the reference signal yr (t) whose first n derivatives are assumed to be known and bounded. The adaptive backstepping procedure is initiated by defining the tracking errors as z1 zi

= =

x1 − yr xi − αi−1 ,

i = 2, ..., n.

(4.20)

Step 1: The task in the first design step is to stabilize the z1 -subsystem given by z˙1

= ϕTf1 θf1 + ϕTg1 θg1 (z2 + α1 ) − y˙ r .

(4.21)

Consider the CLF V1 given by V1 =

1 1 2 1 ˜T −1 ˜ ˜ z1 + θf1 Γf1 θf1 + θ˜gT1 Γ−1 g1 θg1 , 2 2 2

(4.22)

where Γ∗ = ΓT∗ > 0 and whose derivative along the solutions of (4.21) is h i V˙ 1 = z1 ϕTf1 θˆf1 + ϕTg1 θˆg1 (z2 + α1 ) − y˙ r     ˆ˙f − Γf ϕf z1 − θ˜T Γ−1 θˆ˙g − Γg ϕg x2 z1 . θ −θ˜fT1 Γ−1 1 1 1 1 1 1 g1 g1 f1

To cancel the indefinite terms the virtual control α1 and the intermediate update laws τf11 , τg11 are defined as  1  α1 = −c1 z1 − ϕTf1 θˆf1 + y˙ r (4.23) ϕTg1 θˆg1 τf11 = Γf1 ϕf1 z1 (4.24) τg11 = Γg1 ϕg1 x2 z1 , (4.25) where c1 > 0. Similar to the construction of the control law, the parameter update laws are build up recursively in the adaptive backstepping design to prevent overparametrization. These intermediate update functions τ are called tuning functions and therefore this method is often referred to as the tuning functions approach in literature [117]. Substituting these expressions in the derivative of V1 leads to     ˜T Γ−1 θˆ˙g − τg ˆ˙f − τf . − θ V˙ 1 = −c1 z12 + ϕTg1 θˆg1 z1 z2 − θ˜fT1 Γ−1 θ 1 11 g g 1 11 f1 1 1

If this would be the final design step, the update laws would cancel the last two indefinite terms and z2 ≡ 0, reducing the derivative to V˙ 1 = −c1 z12 Hence, the z1 -system would be stabilized. Hence, the task in the next design step is to make sure that z2 converges to zero. Step 2: The z2 -dynamics satisfy z˙2

=

ϕTf2 θf2 + ϕTg2 θg2 (z3 + α2 ) − α˙ 1 .

(4.26)

60

4.2

ADAPTIVE BACKSTEPPING

The CLF V1 is now augmented with additional terms penalizing z2 and the parameter estimation errors θ˜f2 , θ˜g2 , i.e. 1 1 1 ˜T −1 ˜ ˜ V2 = V1 + z22 + θ˜fT2 Γ−1 f2 θf2 + θg2 Γg2 θg2 . 2 2 2

(4.27)

Taking the time derivative of V2 along the solutions of (4.21), (4.26) results in V˙ 2

=



−c1 z12 + ϕTg1 θˆg1 z1 z2   ∂α1 ˙ T −1 ˆ ˜ −θf1 Γf1 θf1 − τf11 + Γf1 ϕf1 z2 ∂x1   ∂α1 ˙ T −1 ˆ ˜ −θg1 Γg1 θg1 − τg11 + Γg1 ϕg1 x2 z2 ∂x1 h i +z2 ϕTf2 θˆf2 + ϕTg2 θˆg2 (z3 + α2 ) − µ1     ˙ ˙ θ˜fT2 Γ−1 θˆf2 − Γf2 ϕf2 z2 − θ˜gT2 Γ−1 θˆg2 − Γg2 ϕg2 x3 z2 , g2 f2

where µ1 represents the known parts of the dynamics of α˙ 1 and is defined as  ∂α ∂α1 ∂α1 ˆ˙ ∂α1 ∂α1  T ˆ 1 ˆ ˙ ϕf1 θf1 + ϕTg1 θˆg1 x2 + y˙ r + y¨r . µ1 = θf1 + θg1 + ∂x1 ∂yr ∂ y˙ r ∂ θˆf ∂ θˆg 1

1

The virtual control and intermediate update laws are selected as  1  α2 = −c2 z2 − ϕTg1 θˆg1 z1 − ϕTf2 θˆf2 + µ1 ϕTg2 θˆg2   ∂α1 ∂α1 τf12 = τf11 − Γf1 ϕf1 z2 = Γf1 ϕf1 z1 − z2 ∂x1 ∂x1   ∂α1 ∂α1 τg12 = τg11 − Γg1 ϕg1 x2 z2 = Γg1 ϕg1 x3 z1 − z2 ∂x1 ∂x1 τf22 = Γf2 ϕf2 z2 τg22 = Γg2 ϕg2 x3 z2 .

(4.28) (4.29) (4.30) (4.31) (4.32)

Substituting the above expressions in the derivative of V2 gives V˙ 2

= −c1 z12 − c2 z22 + ϕTg2 θˆg2 z2 z3     ˆ˙f − τf ˜T Γ−1 θˆ˙g − τg −θ˜fT1 Γ−1 θ − θ g1 g1 1 12 1 12 f1     ˙ ˙ −θ˜fT2 Γ−1 θˆf2 − τf22 − θ˜gT2 Γ−1 θˆg2 − τg22 . g2 f2

This concludes the second design step. Step i: The design steps until step n (where the real control u enters) are identical. The zi -dynamics are given by z˙i

=

ϕTfi θfi + ϕTgi θgi (zi+1 − αi ) − α˙ i−1 .

(4.33)

4.2

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

61

The CLF for step i is defined as 1 1 1 ˜T −1 ˜ ˜ Vi = Vi−1 + zi2 + θ˜fTi Γ−1 fi θfi + θgi Γgi θgi . 2 2 2

(4.34)

The time derivative of Vi along the solutions of (4.33) satisfies V˙ i

= −

i−1 X

cj zj2 + ϕTgi−1 θˆgi−1 zi−1 zi

j=1

i−1 X

  ∂αi−1 ˙ T −1 ˆ ˜ θfk Γfk θfk − τfk(i−1) + Γfk ϕfk zi − ∂xk k=1   i−1 X ∂αi−1 ˙ T −1 ˆ ˜ − θgk Γgk θgk − τgk(i−1) + Γgk ϕgk xk+1 zi ∂xk k=1 h i +zi ϕTfi θˆfi + ϕTgi θˆgi (zi+1 + αi ) − µi−1     ˙ ˙ −θ˜fTi Γ−1 θˆfi − Γfi ϕfi zi − θ˜gTi Γ−1 θˆgi − Γgi ϕgi xi+1 zi , gi fi

where µi−1 is given by µi−1

=

i−1  X ∂αi−1  T ˆ ϕfk θfk + ϕTgk θˆgk xk+1 ∂xk k=1 ! i−1 i X X ∂αi−1 ˆ˙ ∂αi−1 ˆ˙ ∂αi−1 (k) + y . θfk + θgk + (k−1) r ˆ ˆ ∂ θfk ∂ θgk k=1 k=1 ∂yr

Now the intermediate update laws and the virtual control αi are selected as  1  αi = −ci zi − ϕTgi−1 θˆgi−1 zi−1 − ϕTfi θˆfi + µi−1 ϕTgi θˆgi ∂αi−1 τfki = τfk(i−1) − Γfk ϕfk zi ∂xk ∂αi−1 τgki = τgk(i−1) − Γgk ϕgk xk+1 zi ∂xk τfii = Γfi ϕfi zi τgii = Γgi ϕgi xi+1 zi , for k = 1, 2, ..., i − 1. This renders the derivative of Vi equal to V˙ i

= − −

i X

cj zj2 + ϕTgi θˆgi zi zi+1

j=1

i X

k=1

i   X   ˆ˙f − τf ˜T Γ−1 θˆ˙g − τg θ˜fTk Γ−1 θ − θ . k ki gk gk k ki fk k=1

(4.35) (4.36) (4.37) (4.38) (4.39)

62

4.2

ADAPTIVE BACKSTEPPING

Step n: In the final step the control law and the complete update laws are defined. Consider the final Lyapunov function Vn

= =

1 1 1 ˜T −1 ˜ ˜ Vn−1 + zn2 + θ˜fTn Γ−1 fn θfn + θgn Γgn θgn 2 2 2 n  1 X  2 ˜T −1 ˜ ˜g . zk + θfk Γfk θfk + θ˜gTk Γ−1 θ gk k 2

(4.40)

k=1

To render the derivative of Vn negative semi-definite, the real control and update laws are selected as   1 u = −cn zn − ϕTgn−1 θˆgn−1 zn−1 − ϕTfn θˆfn + µn−1 (4.41) ϕTgn θˆgn ∂αn−1 ˙ zn θˆfk = τfk(n−1) − Γfk ϕfk ∂xk   n−1 X ∂αj = Γfk ϕfk zk − zj+1  (4.42) ∂xk j=k   ∂αn−1 ˙ ˆ θgk = P τgk(n−1) − Γgk ϕgk xk+1 zn ∂xk    n−1 X ∂αj = P Γgk ϕgk xk+1 zk − zj+1  (4.43) ∂xk j=k

˙ θˆfn ˙ θˆgn

= Γfn ϕfn zn

(4.44)

= P (Γgn ϕgn uzn ) ,

(4.45)

where P represents the parameter projection operator to prevent singularity problems, i.e. zero crossings, in the domain of operation. While the functions gi 6= 0, the update laws for θˆi can still cross through zero if this modification is not made. Parameter projection can be used to keep the parameter estimate within a desired bounded and convex region. In section 4.2.3 the parameter projection method is discussed in more detail. Substituting (4.41)-(4.45) in the derivative of Vn renders it equal to V˙ n

=



n X

cj zj2 .

j=1

Theorem 4.1. The closed-loop system consisting of the system (4.20), the control (4.41) and the dynamic update laws (4.42)-(4.45) has a globally uniformly stable equilibrium at (zi , θ˜fi , θ˜gi ) = 0 and limt→∞ zi = 0, i = 1, ..., n. Proof: The closed-loop stability result follows directly from Theorem 3.7.

4.2

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

63

A block scheme of the resulting closed-loop system with tuning functions controller for n = 3 and a constant reference signal yr is shown in Figure 4.2. It is clear that controller and update laws are part of one integrated system. f3 ( x ) f 2 ( x1 , x2 ) f1 ( x1 )



g3 ( x )

x3

g 2 ( x1 , x2 )



-



g1 ( x1 )

x2

α1 ( x1 , yr )

x1 -

yr

α 2 ( x1 , x2 , yr )

u ( x , yr ) ɺ ɺ θˆf , θˆg 1

1

ɺ ɺ θˆ f 2 , θˆg2

ɺ ɺ θˆf ,θˆg 3

3

Adaptive backstepping controller

Figure 4.2: Closed-loop dynamics of an uncertain strict feedback control system with adaptive (i) backstepping controller for n = 3. It is assumed that yr = 0, i = 1, 2, 3.

4.2.3 Robustness Considerations The adaptive backstepping control design of Theorem 4.1 is based on ideal plant models with parametric uncertainties. However, in practice the controllers will be designed for real world physical systems, which means they have to deal with non-parametric uncertainties such as • low-frequency unmodeled dynamics, e.g. structural vibrations; • measurement noise; • computational round-off errors and sampling delays; • time variations of the unknown parameters.

64

4.2

ADAPTIVE BACKSTEPPING

When the input signal (or the reference signal) of the system is persistently exciting (PE) [3], i.e. the reference signal is sufficiently rich [28], these uncertainties will hardly affect the robustness of the adaptive backstepping design. The PE property guarantees exponential stability in the absence of modeling errors which in turn guarantees bounded states in the presence of bounded modeling error inputs provided the modeling error term does not destroy the PE property of the input. However, when the reference signal is not persistently exciting even very small uncertainties may already lead to problems. For example, the estimated parameters will, in general, not converge to their true values. Although a parameter estimation error of zero can be useful (e.g. for system health monitoring), it is not a necessary condition to guarantee stability of the adaptive backstepping design. A more serious problem is that the adaptation process will have difficulty to distinguish between parameter information and noise. This may cause the estimated parameters to drift slowly. More examples of instability phenomena in adaptive systems can be found in [87]. The lack of robustness is primarily due to the adaptive law which is nonlinear in general and therefore more susceptible to modeling error effects. Several methods of robustifying the update laws have been suggested in literature over the years, an overview is given in [87]. These techniques have in common that they all aim to guarantee that the properties of the modified adaptive laws are as close as possible to the ideal properties despite the presence of the non-parametric uncertainties. The different methods are now discussed briefly for the general parameter update law ˙ θˆ = γϕz.

(4.46)

Dead-Zones The dead-zone modification method is based on the observation that small tracking errors are mostly due to noise and disturbances. The most obvious solution is to turn off the adaptation process if the tracking errors are within certain bounds. This gives a closedloop system with bounded tracking errors. Modifying the update law (4.46) using the dead-zone technique results in  0 if |z| ≥ η0 ˙ θˆ = γϕ(z + η), η = (4.47) −z if |z| < η0 or ˙ θˆ = γϕ(z + η),

  η0 −η0 η=  −z

if if if

z < η0 z > η0 |z| ≤ η0

(4.48)

for a continuous version to prevent computational problems. Leakage Terms The idea behind leakage terms is to modify the update laws so that the time derivative of the Lyapunov function used to analyze closed-loop stability becomes negative in the

4.2

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

65

space of the parameter estimates when these parameters exceed certain bounds. Basically, leakage terms add damping to the update laws: ˙ ˆ θˆ = γ(ϕz − ω θ),

(4.49)

where the term ω θˆ with ω > 0 converts the pure integral action of the update law (4.46) to a ‘leaky’ integration and is therefore referred to as the leakage. Several choices are possible for the leakage term ω, the most widely used choices are called σ-modification [86] and e-modification [147]. These modifications are as follows • σ-modification: ˙ ˆ θˆ = γ(ϕz − σ θ)

(4.50)

˙ ˆ θˆ = γ(ϕz − σ|z|θ)

(4.51)

• e-modification:

where σ > 0 is a small constant. The advantage of e-modification is that the leakage term will go to zero as the tracking error converges to zero. Parameter Projection A last effective method for eliminating parameter drift and keeping the parameter estimates within some designer defined bounds is to use the projection method to constrain the parameter estimates to lie inside a bounded convex set in the parameter space. Let this convex region S be defined as S , {θ ∈ Rpθ |g(θ) ≤ 0},

(4.52)

where g : Rpθ → R is a smooth function. Applying the projection algorithm the standard update law (4.46) becomes ˙ θˆ = P (γϕz) =

   γϕz

  γϕz − γ ∇g∇gT γϕz ∇gT Γ∇g

if θˆ ∈ S 0 or if θˆ ∈ δS and ∇g T γϕz ≤ 0 otherwise

(4.53)

where S 0 is the interior of S, δS the boundary of S and ∇g = dg . If the parameter dθˆ ˆ estimate θ is inside the desired region S, then the standard adaptive law is implemented. If θˆ is on the boundary of S and its derivative is directed outside the region, then the derivative is projected onto the hyperplane tangent to δS. Hence, the projection keeps the parameter estimation vector within the desired convex region S at all time.

66

4.2

ADAPTIVE BACKSTEPPING

4.2.4 Example: Adaptive Longitudinal Missile Control In this section the missile example of Chapter 3 is revisited. The generalized dynamics of the missile (3.38), (3.39) are repeated here for convenience sake: x˙ 1 x˙ 2

= =

x2 + f1 (x1 ) + g1 u f2 (x1 ) + g2 u,

(4.54) (4.55)

where f1 , f2 , g1 and g2 are now unknown nonlinear functions containing the aerodynamic stability and control derivatives. For the control design the g1 u1 -term is again neglected so that the system is in a lower triangular form. It is assumed that the sign of g2 is known and fixed. The unknown functions are rewritten in a parametric form with unknown parameter vectors θf1 , θf2 and θg2 as f1 (x1 ) = f2 (x1 ) = g2

=

ϕf1 (x1 )T θf1 ϕf2 (x1 )T θf2 ϕTg2 θg2

where the regressors ϕ∗ are given by ϕf1 ϕf2 ϕg2

 T = C1 x31 , x1 |x1 |, x1  T = C2 x31 , x1 |x1 |, x1 = C2 .

Then the estimates of the nonlinear functions are defined as fˆ1 (x1 , θˆf1 ) = ϕf1 (x1 )T θˆf1 fˆ2 (x1 , θˆf2 ) = ϕf2 (x1 )T θˆf2 gˆ2 (θˆg ) = ϕT θˆg g2

2

2

and the parameter estimation errors as θ˜∗ = θˆ∗ − θ∗ . The tracking control objective remains the same, hence z1

=

z2

=

x1 − yr

x2 − α1

where α1 is the virtual control to be designed in the first design step. The tuning functions adaptive backstepping method is now used to solve this control problem. Step 1: The z1 -dynamics satisfy z˙1 = z2 + α1 + ϕTf1 θf1 − y˙ r . Consider the candidate CLF V1 for the z1 -subsystem defined as i 1h 2 ˜ V1 (z1 , θ˜f1 ) = z1 + k1 λ21 + θ˜fT1 Γ−1 f1 θf1 , 2

(4.56)

(4.57)

4.2

TUNING FUNCTIONS ADAPTIVE BACKSTEPPING

67

Rt where the gain k1 > 0 and the integrator term λ1 = 0 z1 dt are again introduced to robustify the design against the neglected g1 u1 -term. The derivative of V1 along the solutions of (4.56) is given by   h i ˆ˙f − Γf ϕf z1 . V˙ 1 = z1 z2 + α1 + ϕTf1 θˆf1 − y˙ r + k1 λ1 − θ˜fT1 Γ−1 θ 1 1 1 f1 To cancel all indefinite terms, the virtual control α1 is selected as α1 = −c1 z1 − k1 λ1 − ϕTf1 θˆf1 + y˙ r ,

c1 > 0

(4.58)

and the intermediate update law for θˆf1 as τf11 = Γf1 ϕf1 z1 .

(4.59)

This renders the derivative equal to   ˙ θˆf1 − τf11 . V˙ 1 = −c1 z12 + z1 z2 − θ˜fT1 Γ−1 f1

This concludes the outer loop design. Step 2: The z2 -dynamics are given by

z˙2 = ϕTf2 θf2 + ϕTg2 θg2 u − α˙ 1 .

(4.60)

Consider the CLF V2 for the complete system i 1 h 2 ˜T −1 ˜ ˜ V2 (z1 , z2 , θ˜f1 , θ˜f2 , θ˜g2 ) = V1 (z1 , θ˜f1 ) + z2 + θf2 Γf2 θf2 + θ˜gT2 Γ−1 g2 θg2 . (4.61) 2 The derivative of V2 along the solutions of (4.56) and (4.60) satisfies   ∂α1 ˙ 2 T −1 ˆ ˙ ˜ V2 = −c1 z1 + z1 z2 − θf1 Γf1 θf1 − τf11 + Γf1 ϕf1 z2 ∂x1 h i + z2 ϕTf2 θˆf2 + ϕTg2 θˆg2 u − µ1     ˙ ˙ − θ˜fT2 Γ−1 θˆf2 − Γf2 ϕf2 z2 − θ˜gT2 Γ−1 θˆg2 − Γg2 ϕg2 uz2 , g2 f2

where µ1 is given by µ1

=

∂α1 T ˆ ∂α1 ˆ˙ ∂α1 ∂α1 θf + ϕ θf + y˙ r + y¨r . ∂x1 f1 1 ∂ θˆf1 1 ∂yr ∂ y˙ r

The control law and the update laws are selected as  1  u = −c2 z2 − z1 − ϕTf2 θˆf2 + µ1 ϕTg2 θˆg2 ∂α1 ˙ θˆf1 = τf11 − Γf1 ϕf1 z2 ∂x1 ˙ θˆf2 = Γf2 ϕf2 z2 ˙ θˆg2

= P (Γg2 ϕg2 uz2 ) ,

(4.62) (4.63) (4.64) (4.65)

68

4.3

ADAPTIVE BACKSTEPPING

where the projection operator P is introduced to ensure that the estimate of g2 does not change sign. The above adaptive control law renders the derivative of V2 equal to V˙ 2

= −c1 z12 − c2 z22 .

By Theorem 3.7 limt→∞ z1 , z2 = 0, which means that the reference signal yr is again asymptotically tracked with x1 . c The resulting closed-loop system has been implemented in MATLAB/Simulink . In Figure 4.3 the response of the system with 4 different gain selections to a number of angle of attack doublets at Mach 2.2 is shown. The onboard model contains the data of the missile at Mach 2.0. The control gains are selected as c1 = c2 = 10 for all simulations, the integral gain k1 is either 0 (‘noint’) or 10 (‘int’) and the update gains Γf1 = Γf2 = 0I, Γg2 = 0 (‘nonad’) or Γf1 = Γf2 = 10I, Γg2 = 0.01 (‘ad’). As can be seen from Figure 4.3, the modeling error is severe enough to render the system unstable, when adaptation is turned off and no integral gain is sued. Adding an integral gain ensures that the missile follows its reference again, but the transient performance is not acceptable. Turning adaptation on instead gives a much better response, but there is still a very small tracking error in the outer loop. This is due to the neglected g1 u-term. The regressors are not defined rich enough to fully cancel the effect of these unmodeled dynamics. Therefore, the final simulation with adaptation turned on and an integral gain shows the best response. The parameter estimation errors of the two simulations with adaptation turned on are plotted in Figure 4.4. The errors can be seen to converge to constant values. However, the true values are not found. This is a characteristic of the integrated adaptive approaches: the estimation is performed to meet a total system stability criterion, the control Lyapunov function, rather than to optimize the error in the estimation. Hence, convergence of the parameters to their true values is not guaranteed. Note that dead-zones can be added to the update laws to prevent the parameter drift due to numerical round-off errors.

4.3 Constrained Adaptive Backstepping In the previous section the tuning functions adaptive backstepping method was derived. The complexity of the design procedure is mainly due to the calculation of the derivatives of the virtual controls at each intermediate design step. Especially for high order systems or complex multivariable systems such as aircraft dynamics, it becomes very tedious to calculate the derivatives analytically. In this section an alternative approach involving command filters is introduced to reduce the algebraic complexity of the adaptive backstepping control law formulated in Theorem 4.1. This approach is sometimes referred to as dynamic surface control in literature [212, 213]. An additional advantage of this approach is that it also eliminates the method’s restriction to nonlinear systems of a lower triangular form. Finally, the command filters can also be used to incorporate magnitude and rate limits on the input and states used as virtual controls in the design [58, 60, 61, 163]. For example, when a magnitude limit on

4.3

pitch rate (deg/s)

angle of attack (deg)

CONSTRAINED ADAPTIVE BACKSTEPPING

20 10 0 −10 −20 0

nonad,noint nonad,int ad,noint ad,int 25 reference

5

10

15

20

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

50

0

−50 0 control deflection (deg)

69

20 10 0 −10 −20 0

Figure 4.3: Numerical Simulations at Mach 2.2 of the longitudinal missile model with adaptive backstepping control law with uncertainty in the onboard model. Results are shown for 4 different gain selections, including 2 with adaptation turned off.

the input is in effect and the desired control cannot be achieved, then the tracking errors will in general become larger and will no longer be the result of function approximation errors exclusively. Since the dynamic parameter update laws of the adaptive backstepping method are driven by the tracking errors, care must be taken that they do not ‘unlearn’ when the limits on the control input are in effect. The command filtered approach for preventing corruption of the parameter estimation process can be seen as a combination of training signal hedging [4, 105] and pseudocontrol hedging [91, 206]. Training signal hedging involves modifying the tracking error definitions used in the parameter update laws to remove the effects of the saturation. In the pseudo-control hedging method the commanded input to the next control loop is altered so that the generated control signal is implementable without exceeding the constraints.

4.3.1 Command Filtering Approach Consider the non-triangular, feedback passive system x˙ i

= fi (x) + gi (x)xi+1 ,

x˙ n

= fn (x) + gn (x)u,

i = 1, ..., n − 1

(4.66)

70

4.3

ADAPTIVE BACKSTEPPING

−3

15

20

25

14.4 14.2 14 13.8 13.6 0

10

15

20

−2.72 −2.73

30

0.7 5

−2.71

0

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

5

10

15

20

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

0 −0.05 −0.1 0 thetatildef23

thetatildef13

10

0.8

0 thetatildef22

5

thetatildef12

−4 0

thetatildeg2

ad,noint ad,int

−2

thetatildef21

thetatildef11

x 10 0

2 0 −2 0

2.41 2.4 2.39 0

Figure 4.4: The parameter estimation errors for the two simulations of the longitudinal missile model with adaptive backstepping control law at Mach 2.2 with adaptation turned on.

where x = (x1 , ..., xn ) is the state, xi ∈ R and u ∈ R the control signal. The smooth functions fi and gi are again unknown. The sign of all gi (x) is known and gi (x) 6= 0. The control objective is to asymptotically track the reference signal x1,r (t) with first derivative known. The tracking errors are defined as zi

= xi − xi,r ,

(4.67)

where xi,r , i = 2, ..., n will be defined by the backstepping controller. Step 1: As with the standard adaptive backstepping procedure, the first virtual control is defined as  1  α1 = −c1 z1 − ϕTf1 θˆf1 + x˙ 1,r (4.68) ϕT θˆg g1

1

where c1 > 0. However, instead of directly applying this virtual control, a new signal x02,r is defined as x02,r = α1 − χ2 ,

(4.69)

where χ2 will be defined in design step 2. The signal x02,r is filtered with a second order command filter to produce x2,r and its derivative x˙ 2,r . It is possible to enforce magnitude and rate limits with this filter, see Appendix C for details. The effect that the use of this command filter has on the tracking error z1 is estimated by the stable linear filter  χ˙ 1 = −c1 χ1 + ϕTg1 θˆg1 x2,r − x02,r . (4.70)

4.3

CONSTRAINED ADAPTIVE BACKSTEPPING

71

Note that by design of the second order command filter, the signal (x2,r − x02,r ) is bounded and, when no limits are in effect, small. It is now possible to introduce the compensated tracking errors as z¯i

=

z i − χi ,

i = 1, ..., n.

(4.71)

Select the first CLF V1 as a quadratic function of the compensated tracking error z¯1 and the estimation errors: V1 =

1 2 1 ˜T −1 ˜ 1 z¯ + θ Γ θf + θ˜T Γ−1 θ˜g . 2 1 2 f1 f1 1 2 g1 g1 1

(4.72)

Taking the derivative of V1 results in h i V˙ 1 = z¯1 ϕTf1 θˆf1 + ϕTg1 θˆg1 (z2 + x2,r ) − x˙ 1,r − χ˙ 1     ˆ˙f − Γf ϕf z¯1 − θ˜T Γ−1 θˆ˙g − Γg ϕg x2 z¯1 −θ˜fT1 Γ−1 θ 1 1 1 1 1 1 g g f1 1 1 i h 0 T ˆ T ˆ = z¯1 ϕf1 θf1 + ϕg1 θg1 (z2 + x2,r ) − x˙ 1,r + c1 χ1     ˆ˙f − Γf ϕf z¯1 − θ˜T Γ−1 θˆ˙g − Γg ϕg x2 z¯1 θ −θ˜fT1 Γ−1 g1 g1 1 1 1 1 1 1 f1

= −c1 z¯12 + ϕTg1 θˆg1 z¯1 z¯2     ˆ˙f − Γf ϕf z¯1 − θ˜T Γ−1 θˆ˙g − Γg ϕg x2 z¯1 . θ −θ˜fT1 Γ−1 1 1 1 1 1 1 g g f1 1 1

Selecting the dynamic update laws as ˙ θˆf1 ˙ θˆg1

=

Γf1 ϕf1 z¯1

(4.73)

=

P (Γg1 ϕg1 x2 z¯1 )

(4.74)

finishes the first design step. The update laws for θˆf1 and θˆf1 are defined immediately, since there will be no additional derivative terms in the next steps due to the command filters. Note that the update laws are driven by the compensated tracking error. Step i: (i = 2, ..., n − 1) The virtual controls are defined as αi

=

 1  −ci zi − ϕTgi−1 θˆgi−1 z¯i−1 − ϕTfi θˆfi + x˙ i,r ϕT θˆg gi

(4.75)

i

where ci > 0 and the command filter inputs as x0i,r = αi−1 − χi .

(4.76)

The effect that the use of the command filters has on the tracking errors is estimated by  χ˙ i = −ci χi + ϕTgi θˆgi xi+1,r − x0i+1,r . (4.77)

72

4.3

ADAPTIVE BACKSTEPPING

Finally, the update laws are given by ˙ θˆfi ˙ θˆgi

= Γfi ϕfi z¯i

(4.78)

= P (Γgi ϕgi xi+1 z¯i ) .

(4.79)

Step n: In the final design step the actual controller is found by filtering   1 u0 = αn = −cn zn − ϕTgn−1 θˆgn−1 z¯n−1 − ϕTfn θˆfn + x˙ n,r , ϕTgn θˆgn

(4.80)

to generate u. The effect that the use of this filter has on the tracking error zn is estimated by  χ˙ n = −cn χn + ϕTgn θˆgn u − u0 (4.81) and the update laws are defined as ˙ θˆfn ˙ θˆgn

=

Γfn ϕfn z¯n

(4.82)

=

P (Γgn ϕgn u¯ zn ) .

(4.83)

Theorem 4.2. The closed-loop system consisting of the system (4.66), the control (4.80) and update laws (4.73), (4.74), (4.78), (4.79), (4.82), (4.83) has a globally uniformly stable equilibrium at (¯ zi , θ˜fi , θ˜gi ) = 0, i = 1, ..., n. Furthermore, limt→∞ z¯i = 0. Proof: Consider the CLF n

Vn

=

 1 X  2 ˜T −1 ˜ ˜g , z¯i + θfi Γfi θfi + θ˜gTi Γ−1 θ i gi 2 i=1

(4.84)

which, along the solutions of the closed-loop system with the control (4.80) and update laws (4.73), (4.78), (4.82), has the time derivative V˙ n = −

n X

ci z¯i2 .

i=1

Hence, by Theorem 3.7 the stated stability properties follow. The above theorem guarantees desirable properties for the compensated tracking errors z¯i . The difference between z¯i and the real tracking errors zi is χi , which is the output of the stable filters  χ˙ i = −ci χi + ϕTgi θˆgi xi+1,r − x0i+1,r . The magnitude of the input to this filter is determined by the design of the command filter for x0i+1,r . If there are no magnitude or rate limits in effect on the command filters and

4.3

73

CONSTRAINED ADAPTIVE BACKSTEPPING

 their bandwidth is selected sufficiently high, the error xi+1,r − x0i+1,r will be small during transients and zero under steady-state conditions. Hence, the performance of the command filtered adaptive backstepping approach can be made arbitrarily close to that of the standard adaptive backstepping approach of Section 4.2. A formal proof of this statement can be found in [59]. This rigorous proof is based on singular perturbation theory and makes use of Tikhonov’s Theorem as given in [106]. If the limits on the command filter are in effect, the real tracking errors zi may increase, but the compensated tracking errors z¯i that drive the estimation process are unaffected. Hence, the dynamic update laws will not unlearn due to magnitude or rate limits on the input and states used for virtual control.

4.3.2 Example: Constrained Adaptive Longitudinal Missile Control In this section the command filtered adaptive backstepping approach is applied to the tracking control design for the longitudinal missile model (3.38), (3.39) of the earlier examples. The nonlinear functions containing the aerodynamic stability and control derivatives f1 , f2 , g1 and g2 are again unknown. Furthermore, it is again assumed that the sign of g2 is known and fixed. Since the command filtered adaptive backstepping method can deal with non-triangular nonlinear systems the g1 u1 -term does not have to be neglected during the control design. The tracking errors are defined as z1

= x1 − yr

z2

= x2 − x2,r .

(4.85)

where x2,r is the command filtered virtual control. The virtual controls are defined as α1

=

α2

=

−c1 z1 − ϕTf1 θˆf1 − ϕTg1 θˆg1 u + y˙ r , c1 > 0  1  −c2 z2 − z¯1 − ϕTf2 θˆf2 + x˙ 2,r , c2 > 0, ϕT θˆg g2

(4.86) (4.87)

2

where z¯i

= z i − χi ,

i = 1, 2

(4.88)

are the compensated tracking errors. The signals x02,r

=

0

=

u

α1 − χ2

α2

(4.89) (4.90)

are filtered with second order command filters to produce x2,r , its derivative x˙ 2,r and u. The effect that the use of these command filters has on the tracking errors is measured by  (4.91) χ˙ 1 = −c1 χ1 + x2,r − x02,r  T ˆ 0 χ˙ 2 = −c2 χ2 + ϕ θg u − u . (4.92) g2

2

74

4.3

ADAPTIVE BACKSTEPPING

Finally, the update laws are given by ˙ θˆf1 ˙ θˆg1 ˙ θˆf2 ˙ θˆg2

= Γf1 ϕf1 z¯1

(4.93)

= Γg1 ϕg1 u¯ z1

(4.94)

= Γf2 ϕf2 z¯2

(4.95)

z2 ) , = P (Γg2 ϕg2 u¯

(4.96)

where Γ∗ = ΓT∗ > 0 are the update gains. The adaptive controller renders the derivative of the CLF 2

V

=

 1 X  2 ˜T −1 ˜ ˜ z¯i + θfi Γfi θfi + θ˜gTi Γ−1 gi θgi 2 i=1

(4.97)

equal to V˙ = −c1 z¯12 − c2 z¯22 .

(4.98)

By Theorem 3.7 the equilibrium (¯ zi , θ˜fi , θ˜gi ) = 0 for i = 1, 2 is globally stable and the compensated tracking errors z¯1 , z¯2 converge asymptotically to zero. The resulting constrained adaptive backstepping controller can be compared with the c standard adaptive backstepping controller of Section 4.2.4 in MATLAB/Simulink simulations. For the tuning functions controller the control gains are selected as c1 = c2 = k1 = 10 and the update gains as Γf1 = Γf2 = 10I, Γg2 = 0.01. The gains of the command filtered controller are selected the same, except that the update gains of the outer loop are selected as Γf1 = 1000I and Γg1 = 1. The outer loop update laws of both designs differ, but with these update gain selections the response of both controllers is nearly identical. Of course, the command filtered controller does not need the integral term to achieve perfect tracking since it does not neglect the effect of the control surface deflections on the aerodynamic forces. The results of a simulation with an upper magnitude limit of 9.5 degrees on the control input are more interesting, as can be seen in Figure 4.5. The maneuver has been performed at Mach 2.2 with onboard model for Mach 2.0. The performance of the standard adaptive backstepping degrades severely when compared to the performance without saturation in Figure 4.3 of Section 4.2.4. The reason for this loss in performance can be found in Figure 4.6 where the parameter estimation errors are plotted. During periods of control saturation the tracking errors increase, since the parameter update laws are driven by the tracking errors (which are now no longer the result of the function approximation errors exclusively) so they tend to ‘unlearn’. The update laws of the command filtered controller are driven by the compensated tracking errors, where the effect of the magnitude limit has been removed by proper definition of the command filters. As a result the performance of the constrained adaptive backstepping controller is much better.

4.3

CONSTRAINED ADAPTIVE BACKSTEPPING

75

angle of attack (deg)

20 10 0 tuning cabs reference

−10 −20 0

5

10

15

20

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

pitch rate (deg/s)

50

0

control deflection (deg)

−50 0 20 10 0 −10 −20 0

10

15

20

25

1

thetatildef22

5

10

15

20

25

15 10 5

10

15

20

25

0.3248 0.3247 0

5

10

15 time (s)

20

25

30

15

20

25

30

5

10

15

20

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

0 −2

30 20 10 0

g2

0.3249

10

−1

−10 0

30

5

1

−3 0

30

20

0

−2.8 0 thetatildef21

2 1.5 0.5 0

−2.6

30

2.5

−2.4

thetatildef23

f13

thetatilde

5

thetatilde

0 −0.02 −0.04 0

thetatildeg1

tuning cabs

0.02

thetatilde

thetatilde

f11

0.04

f12

Figure 4.5: Numerical simulations at Mach 2.2 of the longitudinal missile model with the tuning functions versus the constrained adaptive backstepping (cabs) control law and an upper magnitude limit on the control input of 9.5 deg.

2.4 2.3 2.2 2.1 0

Figure 4.6: The parameter estimation errors for both adaptive backstepping designs. The update laws of the tuning functions adaptive backstepping controller ‘unlearn’ during periods when the upper limit on the input is in effect.

Chapter

5

Inverse Optimal Adaptive Backstepping The static and dynamic parts of the adaptive backstepping controllers of the previous chapter are designed simultaneously in a recursive manner. The very strong stability and convergence properties of the controllers can be proved using a single control Lyapunov function. A drawback of this approach is that, because there is strong coupling between the static and dynamic parts, it is unclear how changes in the adaptation gain affect the tracking performance. This makes tuning of the controllers a very tedious and nonintuitive process. In this chapter an attempt is made to develop an adaptive backstepping control approach that is optimal with respect to some meaningful cost functional. Besides optimal control being an intuitively appealing approach, the resulting control laws inherently possess certain robustness properties.

5.1 Introduction The adaptive backstepping designs of Chapter 4 are focused on achieving stability and convergence rather than performance or optimality. Some performance bounds can be derived for the tracking errors, the system states and the estimated parameters, but those bounds do not contain any estimates of the necessary control effort [121]. Furthermore, increasing the update gains results in more rapid parameter convergence, but it is unclear how the transient tracking performance is affected. The advantages of a control law that is optimal with respect to some ‘meaningful’ cost functional1 are its inherent robustness properties with respect to external disturbances and model uncertainties, as in the case of linear quadratic control or H∞ control [215]. This would suggest combining or extend1 A meaningful cost functional is one that places a suitable penalty on both the tracking error and the control effort, so that useless conclusions such as ‘every stabilizing control law is optimal’ can be avoided [67].

77

78

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

5.2

ing the Lyapunov based control with some form of optimal control theory. Naturally, many attempts have been made to extend linear optimal control results to nonlinear control, see e.g. [130, 166, 167, 175]. However, the difficulty lies in the fact that the direct optimal control problem for nonlinear systems requires the solving of a Hamilton-Jacobi-Bellman (HJB) equation which is in general not feasible. Optimal adaptive control is even more challenging, since the certainty equivalence combination of a standard parameter estimation scheme with linear quadratic optimal control does not even give any optimality properties [113]. The problems with direct nonlinear optimal control motivated the development of inverse optimal design methods [65, 66]. In the inverse approach a positive definite Lyapunov function is given and the task is to determine if a feedback control law minimizes some meaningful cost functional. The term inverse refers to the fact that the cost functional is determined after the design of the stabilizing feedback control law, instead of being selected beforehand by the control designer. In [128] the inverse optimal control theory for nonlinear systems was combined with the tuning functions approach, to develop an inverse optimal adaptive backstepping control design for a general class of nonlinear systems with parametric uncertainties. This adaptive controller compensates for the effect of the parameter estimation transients in order to achieve optimality of the overall system. In [134] this result is extended to a nonlinear multivariable system with external disturbances. This chapter starts with a discussion on the differences between direct and inverse optimal control for nonlinear systems. The inverse optimal control theory is combined with the tuning function adaptive backstepping method in Section 5.3 following the approach of [128]. The transient performance is analyzed, after which the method is applied to the pitch autopilot design for a longitudinal missile model and compared with a design based on the standard tuning functions adaptive backstepping approach.

5.2 Nonlinear Control and Optimality This section discusses general optimal control theory. The difficulties with optimal control theory in the context of nonlinear control are explained and as an alternative inverse optimal control theory is introduced.

5.2.1 Direct Optimal Control Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. Given the general nonlinear system x˙ = f (x) + g(x)u

(5.1)

where x ∈ Rn is the state vector and u ∈ Rm is the control input, the aim is to find a control u(x) that stabilizes system (5.1) while minimizing the cost functional Z ∞  J= l(x) + uT R(x)u dt (5.2) 0

5.2

NONLINEAR CONTROL AND OPTIMALITY

79

with l(x) ≥ 0 and R(x) > 0 for all x. For a given feedback control u(x), the value of J, if finite, is a function of the initial state x(0): J(x). When J is at its minimum, J(x) is called the optimal value function. The optimal control law is denoted by u∗ (x). When this optimal control law is applied, J(x) will decrease along the trajectory, since the cost-to-go must continuously decrease by the principle of optimality [15]. This means that J(x) is a Lyapunov function for the controlled system: V (x) = J(x). The functions V (x) and u∗ (x) are related to each other by the following optimality condition [175, 194]. Theorem 5.1 (Optimality and Stability). Suppose that there exists a continuously differentiable positive semi-definite function V (x) which satisfies the Hamilton-JacobiBellman equation [14] 1 l(x) + Lf V (x) − Lg V (x)R−1 (x)(Lg V (x))T = 0, 4

V (0) = 0

(5.3)

such that the feedback control 1 u∗ (x) = − R−1 (x)(Lg V (x))T 2

(5.4)

achieves asymptotic stability of the equilibrium x = 0. Then u∗ (x) is the optimal stabilizing control which minimizes the cost functional (5.2) over all u guaranteeing limt→∞ x(t) = 0, and V (x) is the optimal value function. Proof: Substituting 1 v = u − u∗ = u + R−1 (x)(Lg V (x))T 2

(5.5)

into (5.2) and using the HJB-identity results in: Z ∞  1 J = l + v T Rv − v T (Lg V )T + Lg V R−1 (Lg V )T dt 4 Z0 ∞ Z ∞  1 = − Lf V + Lg V R−1 (Lg V )T − Lg V v dt + v T Rv dt 2 0 0 Z ∞ Z ∞ ∂V T = − (f + gu)dt + v Rv dt ∂x 0 Z0 ∞ Z ∞ dV = − dt + v T Rv dt dt 0 0 Z ∞ = V (x(0)) − lim V (x(T )) + v T Rv dt. T →∞

0

The above limit of V (x(T )) is zero since the cost functional (5.2) is only minimized over those u which achieve limt→∞ x(t) = 0, thus Z ∞ J = V (x(0)) + v T Rv dt. 0

80

5.3

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

It is easy to see that the minimum of J is V (x(0)). This minimum is reached for v(t) ≡ 0, which proves that u∗ (x) given by (5.4) is optimal and that V (x) is the optimal value function. In [70] and [175] it is shown that, besides optimal control being an intuitively appealing approach, optimal control laws inherently possess certain robustness properties for the closed-loop system, including stability margins. However, a direct optimal control approach requires the solving of the Hamilton-Jacobi-Bellman equation which is in general not feasible.

5.2.2 Inverse Optimal Control The fact that the robustness achieved as a result of optimality is largely independent of the choice of functions l(x) ≥ 0 and R(x) > 0 motivated the development of inverse optimal control design methods [65, 66]. In the inverse approach a Lyapunov function V (x) is given and the task is to determine whether a control law such as (5.4) is optimal for a cost functional of the form (5.2). The term ‘inverse’ refers to the fact that the functions l(x) and R(x) are determined after the design of the stabilizing feedback control instead of being selected beforehand by the designer. Definition 5.2. A stabilizing control law u(x) solves an inverse optimal control problem for the system x˙ = f (x) + g(x)u

(5.6)

if it can be expressed as 1 u(x) = −k(x) = − R−1 (x)(Lg V (x))T , 2

R(x) > 0,

(5.7)

where V (x) is a positive semi-definite function, such that the negative semi-definiteness of V˙ is achieved with the control (5.7), that is 1 V˙ = Lf V (x) − Lg V (x)k(x) ≤ 0. 2

(5.8)

When the function l(x) is selected equal to −V˙ : 1 l(x) := −Lf V (x) + Lg V (x)k(x) ≥ 0 2

(5.9)

then V (x) is a solution of the HJB equation 1 l(x) + Lv V (x) − (Lg V (x))R−1 (x)(Lg V (x))T = 0. 4

(5.10)

5.3 Adaptive Backstepping and Optimality Since the introduction of adaptive backstepping in the beginning of the 1990’s, there have been numerous publications that consider the inverse optimal problem and control

5.3

ADAPTIVE BACKSTEPPING AND OPTIMALITY

81

Lyapunov function designs, e.g. [57, 122] and [128]. Textbooks that deal with the subject are [115] and [175]. However, inverse optimal adaptive backstepping control is only considered in [128] and [134]. In [128] an inverse optimal adaptive tracking control design for a general class of nonlinear systems is derived and [134] extends the results to a nonlinear multi-input multi-output system with external disturbances. In this section the approach of [128] is repeated in an organized manner and theoretical transient performance bounds are given. The section concludes with an evaluation of the performance and numerical sensitivity of the inverse optimal design approach applied to the longitudinal missile pitch autopilot example as discussed in the earlier chapters.

5.3.1 Inverse Optimal Design Procedure Consider the class of parametric strict feedback systems x˙ i

=

xi+1 + ϕi (¯ xi )T θ,

x˙ n

=

u + ϕn (x)T θ

i = 1, ..., n − 1

(5.11)

where xi ∈ R, u ∈ R and x ¯i = (x1 , x2 , ..., xi ). The vector θ contains the unknown constant parameters of the system. The control objective is to force the output y = x1 to asymptotically track the reference signal yr (t) whose first n derivatives are assumed to be known and bounded. To simplify the control design, the tracking control problem is first transformed to a regulation problem. For any given smooth function yr (t) there exist functions ρ1 (t), ρ2 (t, θ), ..., ρn (t, θ) and αr (t, θ) such that ρ˙ i ρ˙ n yr (t)

= ρi+1 + ϕi (¯ ρi )T θ, T

= αr (t, θ) + ϕn (ρ) θ = ρ1 (t).

i = 1, ..., n − 1

(5.12)

p 1 ˆ Since ∂ρ ∂θ = 0 for all t ≥ 0 and for all θ ∈ R , θ can be replaced by its estimate θ. ˆ Consider the signal xr (t) = ρ(t, θ(t)), which is governed by

x˙ ri

=

x˙ rn

=

yr (t) =

∂ρi ˆ˙ θ, ∂ θˆ ˆ + ϕrn (xr )T θˆ + ∂ρn θˆ˙ αr (t, θ) ∂ θˆ ρ1 (t). xr(i+1) + ϕri (¯ xri )T θˆ +

i = 1, ..., n − 1 (5.13)

The dynamics of the tracking error e = x − xr satisfy e˙ i

=

e˙ n

=

ˆ T θ + ϕri (xr1 , ..., xrn , θ) ˆ T θ˜ − ∂ρi θ, ˆ˙ ei+1 + ϕ˜i (e1 , ..., ei , θ) ˆ ∂θ ∂ρ n ˙ T T ˆ θ + ϕrn (xr , θ) ˆ θ˜ − u ˜ + ϕ˜n (e1 , ..., ei , θ) θˆ ∂ θˆ

i = 1, ..., n − 1 (5.14)

82

5.3

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

ˆ and ϕ˜i = ϕi (x1 , ..., xi )−ϕri (xr1 , ..., xri ), i = 1, ..., n. Now the where u ˜ = u−αr (t, θ) inverse optimal tracking problem has been transformed into an inverse optimal regulation problem. Define the error states as zi = ei − α ˜ i−1 ,

i = 1, ..., n

(5.15)

where α ˜ i−1 are the virtual controls to be designed by applying the tuning functions adaptive backstepping method of Theorem 4.1. After that, the real control u˜ is chosen in a form that is inverse optimal. Step i: (i = 1,...,n-1) i−1

ˆ α ˜ i (t, e¯i θ)

= −ci zi − zi−1 + − ω ˜ iT θˆ −

i−1 X

k=1

∂α ˜ i−1 X ∂ α ˜ i−1 + ek+1 ∂t ∂ek k=1

(σki + σik ) zk − σii zi ,

ci > 0

(5.16)

where for notational convenience ˆ ω ˜ (t, e¯i θ)

σik

= ϕ˜i − 

= −

i−1 X ∂α ˜ i−1 k=1

∂ek

ϕ˜k

(5.17) i−1 X ∂α ˜ i−1 ∂ρi

∂α ˜ i−1 ∂ρi + − ∂ θˆ ∂ θˆ j=2 ∂ej

∂ θˆ



 Γωk .

(5.18)

Step n: Consider the control Lyapunov function n

Vn =

1 X 2 1 ˜T −1 ˜ zn + θ Γ θ. 2 2

(5.19)

k=1

Taking the derivative of Vn and substituting (5.16) gives V˙ n

=





n−1 X

"

ck zk2 + zn zn−1 + u ˜+

k=1

n−1 X

(σkn + σnk ) zk + σnn zn

k=1

# i−1 ∂α ˜ i−1 X ∂ α ˜ i−1 Tˆ − ek+1 + ω ˜i θ ∂t ∂ek

(5.20)

k=1

ˆ˙ +θ˜T (τn − Γ−1 θ), where

τi = τi−1 + ω ˜ i zi ,

i = 1, ..., n.

(5.21)

5.3

ADAPTIVE BACKSTEPPING AND OPTIMALITY

83

To eliminate the parameter estimation error θ˜ = θ − θˆ from V˙ n , the update law ˙ θˆ = Γτn

(5.22)

is selected. Now the actual control u can be defined. Following the standard adaptive backstepping procedure of Theorem 4.1 it is possible to define a control u˜ which cancels all indefinite terms and render V˙ n negative semi-definite. However, this controller is not designed in a way that it can be guaranteed to be optimal. By Theorem 5.1 a control law of the form ˆ ∂V g, u = −r−1 (z, θ) ∂e

ˆ > 0 ∀ t, e, θˆ r(t, e, θ)

(5.23)

is suggested. For this control problem (5.23) simplifies to ˆ n, u = −r−1 (z, θ)z

(5.24)

i.e. zn has to be a factor of the control. In order to get rid of the indefinite terms without canceling them, damping terms [118] are introduced. Since the expressions P nonlinear ∂αn−1 − ∂ α˜∂ti−1 − n−1 e and ω ˜ nT θˆ vanish at z = 0 there exist smooth functions φk k+1 k=1 ∂ek such that −

n−1

n

k=1

k=1

X ∂α ˜ i−1 X ∂αn−1 φk zk , − ek+1 + ω ˜ nT θˆ = ∂t ∂ek

k = 1, ..., n.

(5.25)

Thus (5.20) becomes V˙ n = −

n−1 X

ck zk2 + zn u ˜+

k=1

n X

z k Φk z n ,

(5.26)

k=1

where Φk Φn−1 Φn

= φk + σkn + σnk , k = 1, ..., n − 2 = 1 + φn−1 + σ(n−1)n + σn(n−1)

(5.27)

= φn + σnn .

A control law of the form (5.24) with ˆ = r(t, e, θ)

n X Φ2k cn + 2ck k=1

!−1

> 0,

cn > 0,

∀ t, e, θˆ

(5.28)

results in 1 V˙ n = − 2

n X

k=1

ck zk2 −

 2 n X ck Φk zk − zn . 2 ck k=1

(5.29)

84

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

5.3

Note that incorporating command filters in this inverse optimal technique is not possible, since the filtered derivatives of the virtual controls cannot be damped out in the same way. By Theorem 3.7, it can be concluded that the tracking control problem is solved, since V˙ n is negative semi-definite. The properties of the controller are summarized in the following theorem. Theorem 5.3 (Inverse optimal adaptive backstepping). The dynamic feedback control law u∗

=

˙ θˆ =

ˆ n, β ≥ 2 −βr−1 (t, e, θ)z n X Γτn = Γ ω ˜ j zj ,

(5.30)

j=1

does not only stabilize the system (5.14) with respect to the control Lyapunov function (5.19), but is also optimal with respect to the cost functional

J

=

ˆ 2 −1 + β lim |θ − θ(t)| Γ t→∞

Z

0



h

i ˆ + r(t, e, θ)˜ ˆ u2 dt, ∀ θ ∈ Rp (5.31) l(t, e, θ)

where ˆ = l(z, θ)

−2β V˙ n + β(β − 2)r−1 zn2

(5.32)

with a value function ˆ 2 −1 + β|z|2 . J ∗ = β|θ − θ| Γ

(5.33)

ˆ > 0, and V˙ n negative definite, it is clear that l(t, e, θ) ˆ is Proof: Since β ≥ 2, r(t, e, θ) positive definite. Therefore J defined in (5.31) is a ‘meaningful’ cost functional which puts an integral penalty on both z and u ˜ (with complicated nonlinear scaling in terms ˜ Note that an integral of the parameter estimate), as well as on the terminal value of |θ|. ˜ penalty on θ is not included, since adaptive backstepping controllers in general do not guarantee parameter convergence to a true value. ˆ and Substituting l(t, e, θ) ˆ n v=u ˜ − u∗ = u ˜ + βr−1 (t, e, θ)z

(5.34)

5.3

85

ADAPTIVE BACKSTEPPING AND OPTIMALITY

into J together with (5.26) gives Z ∞" n  n−1  X X 2 ˜ J = β lim |θ|Γ−1 + − 2β − ck zk2 − r−1 zn2 + z k Φk z n t→∞

0

k=1

k=1

#

+rv 2 − 2βvzn + β 2 r−1 zn2 dt ˜ 2 −1 − 2β = β lim |θ| Γ

Z

˜ 2 −1 − 2β = β lim |θ| Γ

Z

t→∞

t→∞



0 ∞

h



n−1 X

ck zk2 + zn u ˜+

k=1

dVn +

0

n X

k=1

Z



Z i zk Φk zn dt +



rv 2 dt

0

rv 2 dt

(5.35)

0

ˆ ˜ 2 −1 − 2β lim Vs (z(t)) + = 2βVn (z(0), θ(0)) + β|θ(0)| Γ t→∞

Z



rv 2 dt,

0

P where Vs = 12 nk=1 zn2 . It was already shown that the control law u˜ together with the update law for θˆ stabilizes the closed-loop system, which means limt→∞ z(t) = 0 and thus limt→∞ Vs (z(t)) = 0. Therefore the minimum of (5.35) is reached only if v = 0 and thus the control u = u∗ minimizes the cost functional (5.31).

5.3.2 Transient Performance Analysis A L2 transient performance bound on the error state z and the control u ˜ can be found for the inverse optimal design. By Theorem 5.3 the control law (5.24) for β = 2 is optimal with respect to the cost functional J

= +

˜ 2 −1 2 lim |θ| Γ t→∞ Z ∞"X n n  X Φk  2 u˜2 ck z k − 2 ck zk2 + zn +  P n ck 0 2 cn + k=1 k=1 k=1

(5.36)

Φ2k 2ck

with a value function

 dt

˜ 2 −1 + 2|z|2 . J ∗ = 2|θ| Γ Therefore Z 2

0



"

n X

ck zk2

k=1

≤2

Z

0



"

+  2 cn +

n X

u˜2 Pn

Φ2k k=1 2ck

n X

(5.37)

#

 dt

 Φk 2 u ˜2 ck zk2 + ck z k − zn +  P n ck 2 cn + k=1 k=1 k=1

˜ 2 −1 + 2|z(0)|2 ≤ J ∗ = 2|θ(0)| Γ

#

Φ2k 2ck

#

 dt

(5.38)

86

5.3

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

which yields the inequality  Z ∞ X n u ˜2  ck zk2 +  Pn 0 2 cn + k=1 k=1

Φ2k 2ck



˜ 2 −1 + |z(0)|2 .   dt ≤ |θ(0)| Γ

(5.39)

The dependency on z(0) can be eliminated by employing trajectory initialization: z(0) = 0. This results in the L2 performance bound   Z ∞ X n 2 u ˜ ˜ 2   ck zk2 +  (5.40) Pn Φ2k  dt ≤ |θ(0)|Γ−1 . 0 2 cn + k=1 2ck k=1

5.3.3 Example: Inverse Optimal Adaptive Longitudinal Missile Control The nonlinear adaptive controller developed in this chapter is inverse optimal with respect to a cost functional that penalizes the tracking errors and the control effort. However, nonlinear damping terms are used to achieve this inverse optimality. In [164] the numerical sensitivity of the tuning functions adaptive backstepping method, with added nonlinear damping terms to robustify the controller against unknown external disturbances, is studied. Increasing the nonlinear damping gains improves tracking performance, but leads to undesirable high frequency components in the control signal. This illustrates that using nonlinear damping in the feedback controller must be done with care, since it can easily result in high gain feedback. The effect of the nonlinear damping terms used in the inverse optimal design will become more clear in the example outlined in this section. The inverse optimal nonlinear adaptive control approach is applied to the longitudinal missile control example of Sections 3.3.3 and 4.2.4. The generalized dynamics of the missile (3.38), (3.39) are repeated here for convenience sake: x˙ 1

=

x2 + f1 (x1 ) + g1 u

(5.41)

x˙ 2

=

f2 (x1 ) + g2 u,

(5.42)

where f1 , f2 , g1 and g2 are unknown nonlinear functions containing the aerodynamic stability and control derivatives. For the control design the g1 u1 -term has to be neglected so that the system is of a lower triangular form. The control objective is to track the reference signal yr (t) with the state x1 . According to the inverse optimal adaptive backstepping procedure, the functions ρ1 (t), ρ2 (t, θ) and αr (t, θ) have to be selected such that ρ˙ 1

= ρ2 + ϕTf1 (ρ1 )θf1

ρ˙ 2

= αr (t, θf1 , θf2 ) + ϕTf2 (ρ1 )θf2

yr (t)

= ρ1 (t).

(5.43)

5.3

ADAPTIVE BACKSTEPPING AND OPTIMALITY

87

Hence, ρ1

=

yr

ρ2

=

αr

=

y˙ r − ϕTf1 (ρ1 )θf1  ∂ y¨r − ϕTf1 (ρ1 )θf1 y˙ r − ϕTf2 (ρ1 )θf2 . ∂ρ1

∂ρ1 = 0 it follows that ∂θ = 0 for all t ≥ 0 and for all θ∗ ∈ R2 , θ∗ can be replaced ∗ by its estimate θˆ∗ . Consider the signal xr (t) = ρ(t, θˆf (t), θˆf (t)), which satisfies

Since

∂yr ∂θ∗

(5.44)

1

x˙ r1

=

x˙ r2

=

yr (t) =

2

xr2 + ϕTrf1 (xr1 )θˆf1 ∂ρ2 ˆ˙ αr + ϕTrf2 (xr1 )θˆf2 + θf1 ∂ θˆf1 xr1 (t).

(5.45)

Defining the tracking error e = x − xr , the system can be rewritten as e˙ 1

=

e2 + ϕ˜Tf1 θf1 + ϕTrf1 θ˜f1

(5.46)

e˙ 2

=

∂ρ2 ˆ˙ u ˜ + ϕ˜Tf2 θf2 + ϕTrf θ˜f2 − θf1 2 ∂ θˆf1

(5.47)

where u ˜ = ϕTg1 θg1 u − αr and ϕ˜∗ = ϕ∗ − ϕr∗ . Now the tracking problem has been transformed into a regulation problem. The error states are defined as z1

=

e1

z2

=

e2 − α ˜1 ,

(5.48)

where the standard adaptive backstepping approach is used to find the virtual control α ˜1 as α ˜1 (e1 , θˆf1 ) =

−c1 z1 − ϕ˜Tf1 θˆf1 ,

(5.49)

where c1 > 0 and the update laws as ˙ θˆf1 ˙ θˆf2 ˙ θˆg2



ϕ˜f1 z1 − ϕ˜f1

∂α ˜1 z2 ∂e1



=

Γf1

=

Γf2 ϕ˜f2 z2

(5.51)

=

P (Γg2 ϕg2 uz2 ) .

(5.52)

(5.50)

Consider the CLF V2 =

i 1h 2 ˜ ˜T −1 ˜ ˜T −1 ˜ z1 + z22 + θ˜fT1 Γ−1 f1 θf1 + θf2 Γf2 θf2 + θg2 Γg2 θg2 . 2

(5.53)

88

5.3

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

Taking the derivative of V2 along the solutions of (5.49)-(5.52) results in " ∂α ˜1 T ˆ 2 ˙ V2 = −c1 z1 + z2 z1 + ϕ˜Tf2 θˆf2 + u ˜− ϕ˜ θf ∂e1 f1 1  #  ∂α ˜1 ∂ρ2  ∂α ˜1 − + Γ ϕ˜f1 z1 − ϕ˜f1 z2 . ∂e1 ∂ θˆf1 ∂ θˆf1

(5.54)

Instead of canceling all indefinite terms, scaling nonlinear damping terms are introduced as  ∂α ˜1 ∂ρ2  Φ1 = 1 − + Γϕ˜f1 + φ1 (5.55) ∂ θˆf1 ∂ θˆf1  ∂α ∂ρ2  ∂ α ˜1 ˜1 Φ2 = + Γ ϕ˜f2 + φ2 , (5.56) ˆ ˆ ∂e 1 ∂ θf1 ∂ θf1 where −

∂α ˜1 ∂α ˜1 T ˆ e2 − ϕ˜ θf = φ1 z1 + φ2 z2 . ∂e1 ∂e1 f1 1

(5.57)

This renders (5.54) equal to V˙ 2 = −c1 z12 + z2 u˜ + z1 Φ1 z2 + z2 Φ2 z2 . Finally, substituting the control law   Φ2 Φ2 u˜ = − c2 + 1 + 2 z2 , 2c1 2c2

c2 > 0,

(5.58)

(5.59)

gives 1 1 c1 V˙ 2 = − c1 z12 − c2 z22 − 2 2 2

 2  2 Φ1 c2 Φ2 z1 − z2 − z2 − z2 . c1 2 c2

(5.60)

By Theorem 5.3 the inverse optimal tracking control problem is solved. An integral term with gain k1 ≥ 0 can be added to the outer loop design to compensate for the neglected control effectiveness term as was done with the tuning functions autopilot design of Section 4.2.4. The resulting inverse optimal closed-loop system is implemented in the MATLAB/ c Simulink environment to evaluate the performance and the numerical sensitivity. The gains are selected as c1 = 18, k1 = c2 = 10, Γf1 = Γf2 = 10I, Γg2 = 0.01. The simulation is again performed with a third order fixed step solver with a sample time of 0.01s. The control signal is fed through a low pass filter to remove high frequency components that crash the solver. The controller is very sensitive to variations in the control gain c1 . The response of the system for a simulation at Mach 2.2 with onboard model data for

5.4

CONCLUSIONS

89

Mach 2.0 can be found in Figure 5.1. Tracking performance is excellent, there is not even a bad transient at the start of the first doublet as was the case with the tuning functions design of Section 4.2.4. However, some high frequency components are visible in the control signal at 5, 10, 15, 20 and 25 seconds, despite the use of the low pass filter. This aggressive behavior is further illustrated in Figure 5.2, where the parameter estimation errors are plotted. There is hardly any adaptation, since the controller already forces the tracking errors rapidly to zero. In fact, turning adaptation off does not influence the tracking performance. The control law of the inverse optimal design contains the large nonlinear damping terms Φ21 Φ22 2c1 and 2c2 . Especially the first term can grow very large and vary in size rapidly as it contains the derivatives of the virtual control law, as is illustrated in Figure 5.3. The control law is numerically very sensitive due the fast nonlinear growth resulting from these Φ2 terms. It is not possible to reduce 2c11 , since Φ1 is also dependent on c1 . For other control applications where the derivatives of the intermediate control law are much smaller, such as attitude control problems, the design approach may be beneficial, because the nonlinear growth will be more restricted.

5.4 Conclusions In this chapter inverse optimal control theory is used to modify the last step of the tuning functions adaptive backstepping approach of Chapter 4. The goal is to introduce a cost functional to simplify the closed-loop performance tuning of the adaptive controller and to exploit the inherent robustness properties of optimal controllers. However, nonlinear damping terms were utilized to achieve the inverse optimality, resulting in high gain feedback terms in the design. The numerical sensitivity due to the high gain feedback terms makes the inverse optimal approach less suitable than the adaptive designs of the previous chapter for the complex flight control design problems considered in this thesis. Furthermore, the complexity of the cost functional associated with the inverse optimal design does not make performance tuning any easier.

5.4

INVERSE OPTIMAL ADAPTIVE BACKSTEPPING

10 0 −10 0

10

15

20

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

0

−50 0 control deflection (deg)

5

50

pitch rate (deg/s)

angle of attack (deg)

90

20 10 0 −10 −20 0

Figure 5.1: Numerical simulations at Mach 2.2 of the longitudinal missile model using an inverse optimal adaptive backstepping control law with uncertainty in the onboard model.

x 10

thetatildef12

thetatildef11

−4

0 −2

−4 0

5

10

15

20

25

−2.706 −2.707

30

0

5

10

15

20

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

0.835

thetatildef22

0

5

10

15

20

25

30

14.3 14.29 0

thetatildeg2

thetatildef21

0.84

thetatildef23

thetatildef13

−3

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

0

x 10

−2 −4 0

2

1.95 0

2.41 2.405 2.4 0

Figure 5.2: The parameter estimation errors for the inverse optimal adaptive backstepping design. The aggressive control law prevents the update laws from any serious adaptation.

5.4

CONCLUSIONS

91

2

Phi1 /2/c1

200 150 100 50 0 0

5

10

15

20

25

30

5

10

15

20

25

30

5

10

15 time (s)

20

25

30

z2

0.05

0

−0.05 0 250

inv(r)

200 150 100 50 0 0

Figure 5.3: The size and variations of the nonlinear damping terms and the error state z2 during the missile simulation.

Chapter

6

Comparison of Integrated and Modular Adaptive Flight Control The constrained adaptive backstepping approach of Chapter 4 is applied to the design of a flight control system for a simplified, nonlinear over-actuated fighter aircraft model valid at two flight conditions. It is demonstrated that the extension of the adaptive control method to multi-input multi-output systems is straightforward. A comparison with a more traditional modular adaptive controller that employs a least squares identifier is made to illustrate the advantages and disadvantages of an integrated adaptive design. Furthermore, the interactions between several control allocation algorithms and the online model identification for simulations with actuator failures are studied. The control design for this simplified aircraft model will provide valuable insights before attempting the more complex flight control design for the high-fidelity F-16 dynamic model of Chapter 2.

6.1 Introduction In this chapter a nonlinear adaptive backstepping based reconfigurable flight control system is designed for a simplified aircraft model, before attempting the more complex F-16 model of Chapter 2. As a study case the control design problem for a nonlinear over-actuated fighter aircraft model is selected. The key simplifications made here are constant velocity and no lift or drag effects of the control surfaces. Furthermore, aerodynamic data is only available for two flight conditions. Since the aircraft model considered in this chapter is over-actuated, some form of control allocation has to be applied to distribute the desired control moments over the actuators. However, a characteristic of the adaptive backstepping designs as discussed in Chapter 4 is that the Lyapunov-based identifiers of the method only yield pseudo-estimates of the 93

94

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.2

unknown parameters, since the estimation is performed to satisfy a total system stability criterion rather than to optimize the error in estimation. As a result the parameter estimates are not guaranteed to converge to their true values over time and it is not clear what effect this will have on the control allocation. Therefore, as an interesting side study, the combination of constrained adaptive backstepping with two common types of control allocation methods with different weightings will also be examined. Furthermore, the integrated adaptive backstepping flight controller will be compared with a more traditional modular adaptive design which makes use of a separate least-squares identifier. This type of modular adaptive controller is referred to as ‘estimation-based’ designs in literature. An estimation-based adaptive control design does not suffer from the restriction of a Lyapunov update law, since it achieves modularity of controller and identifier: any stabilizing controller can be combined with any identifier. Especially a least-squares based identifier is of interest, since this type of identifier possesses excellent convergence properties and guaranteed parameter convergence to constant values. In [131, 132, 188] an adaptive NDI design with recursive least-squares identifier is used for the design of a reconfigurable flight control system for a fly-by-wire Boeing 747. However, theoretical stability and convergence results for the closed-loop system are not provided, since the least-squares identifier, like all traditional identifiers, is not fast enough to capture the potential faster-than-linear growth of nonlinear systems. Hence, the certainty equivalence principle does not hold and an alternative solution will have to be found. In [119, 120] a robust backstepping controller is introduced which achieves input-to-state stability (ISS) with respect to the parameter estimation errors and the derivative of the parameter estimate. Nonlinear state filters are used to compensate for the time varying nature of the parameter estimation errors so that standard gradient or least-squares identifiers can be applied. The resulting identifier module guarantees boundedness of the parameter estimation errors. The modular nonlinear adaptive flight controller will be designed using this approach in combination with the different control allocation methods so that a comparison can be made. This chapter starts with a discussion on the problem of applying classical estimationbased adaptive control designs to uncertain nonlinear systems. After that, the theory behind modular adaptive backstepping with a least-squares identifier is explained. In the second part of the chapter the aircraft model is introduced and the integrated and modular adaptive backstepping flight control designs are constructed. The concept of control allocation is explained and three common types of algorithms are introduced in both design frameworks. Finally, the aircraft model with the adaptive flight controllers is evaluated in numerical simulations where several types of actuator lockup failure scenarios are performed.

6.2 Modular Adaptive Backstepping One of the goals in this chapter is to compare a reconfigurable flight controller based on the constrained adaptive backstepping technique with one based on a more traditional

6.2

MODULAR ADAPTIVE BACKSTEPPING

95

modular adaptive design where the controller and identifier are separate modules. However, the latter adaptive design method fails to achieve any global stability results for systems whose nonlinearities are not linearly bounded. In this section a robust backstepping design with least-squares identifier is developed with strong provable stability and convergence properties.

6.2.1 Problem Statement Before the modular adaptive backstepping approach is derived, the problem of applying traditional estimation-based adaptive control designs to nonlinear systems is illustrated in the following simple example. Example 6.1 Consider the scalar nonlinear system x˙ = u + θx2 ,

(6.1)

where θ is an unknown constant parameter. A stabilizing certainty equivalence controller is given by ˆ 2, u = −x − θx

(6.2)

where θˆ the parameter estimate of θ. The parameter estimation error is defined as ˆ Selecting the Lyapunov update law θ˜ = θ − θ. ˙ θˆ = x3 renders the derivative of the control Lyapunov function V = semi-definite, i.e. V˙ = −x2 .

(6.3) 1 2 2x

+ 12 θ˜2 negative (6.4)

An alternative solution to this adaptive control problem is to employ a standard identifier to provide the estimate for the certainty equivalence controller (6.2). However, in general, the signal x˙ is not available for measurement and thus (6.1) cannot be solved 1 for unknown θ. This problem is solved by filtering both sides of (6.1) by s+1 : s x s+1

=

1 1 u+θ x2 . s+1 s+1

(6.5)

Introducing the filters x˙ f u˙ f

= =

−xf + x2

ˆ 2 = −uf + u + x = −uf − θx

(6.6) (6.7)

96

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.2

makes it possible to rewrite (6.5) as x(t) = θ(t)xf (t) + uf (t).

(6.8)

Since θ is unknown its estimate θˆ has to be used. The corresponding predicted value of x is ˆ x ˆ(t) = θ(t)x f (t) + uf (t),

(6.9)

and the prediction error e is defined as ˜ f. e=x−x ˆ = θx

(6.10)

To achieve the minimum of e2 a parameter update law for θˆ has to be defined. A standard normalized gradient update law is selected: ˙ θˆ =

xf e. 1 + xTf xf

(6.11)

˙ ˙ Substituting (6.10) and θˆ = −θ˜ results in ˙ θ˜ =



˜ 2 θx f 1 + xTf xf

.

(6.12)

Hence, the parameter estimation error converges to zero. However, since this is a linear differential equation the error cannot converge faster than exponentially. Consider the most favorable case where ˜ θ˜ = e−t θ(0).

(6.13)

The closed-loop system with controller (6.2) is ˜ 2. x˙ = −x + θx

(6.14)

Substitution of (6.13) into (6.14) yields the equation ˜ x˙ = −x + x2 e−t θ(0),

(6.15)

whose explicit solution is x(t) =

2x(0) h i . −t + 2 − x(0)θ(0) ˜ ˜ x(0)θ(0)e et

(6.16)

˜ < 2 then x(t) will converge to zero as t → ∞. However, if x(0)θ(0) ˜ >2 If x(0)θ(0) the solution escapes to infinity in finite time, that is x(t) → ∞ as t →

˜ 1 x(0)θ(0) ln . ˜ −2 2 x(0)θ(0)

(6.17)

6.2

MODULAR ADAPTIVE BACKSTEPPING

97

This is illustrated in Figure 6.1, where the response of the system (6.1) with both the Lyapunov- and the estimation-based adaptive control design is plotted. The identifier of the estimation-based design is not fast enough to cope with the potential faster-thanlinear growth of nonlinear systems and converges to infinity resulting in a simulation crash.

state x

5

0

−5 0

0.5

1

1.5

2

2.5 time (s)

3

3.5

4

4.5

5

0 est−based Lyap−based

input u

−5

−10

−15

−20 0

0.5

1

1.5

2

2.5 time (s)

3

3.5

4

4.5

5

Figure 6.1: State x and control effort u of the Lyapunov- and estimation-based adaptive controllers ˆ for initial values x(0) = 2 and θ(0) = 0. The real value of θ is 2. The normalized gradient based identifier of the estimation-based controller is not fast enough to cope with the nonlinear growth.

The above simple example illustrates the notion that to achieve stability either a faster identifier is needed, such as the adaptive backstepping designs of Chapter 4, or a robust controller that can deal with disturbances such as large transient parameter estimation errors resulting from a slower identifier.

6.2.2 Input-to-state Stable Backstepping In this section a robust backstepping controller which is input-to-state stable (ISS) with respect to the parameter estimation error is constructed. In other words, the states of the closed-loop system remain bounded when the parameter estimation error is bounded, and when the parameter estimation error converges to zero the closed-loop system states will also converge to zero. A formal definition of input-to-state stability is given in Appendix B.2. The ISS backstepping design procedure is largely identical to the static feedback design

98

6.2

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

part of the command filtered adaptive backstepping approach as given by Theorem 4.2. The only difference is that the virtual and real control laws are augmented with additional nonlinear damping terms, i.e.  1  αi = −ci zi − si z¯i − ϕTgi−1 θˆgi−1 z¯i−1 − ϕTfi θˆfi + x˙ i,r , i = 1, ..., n ϕT θˆg gi

u0

=

i

αn ,

(6.18)

where si , i = 1, ..., n are nonlinear damping terms defined as si

=

κ1i ϕTfi ϕfi + κ2i ϕTgi ϕgi x2i+1 ,

i = 1, ..., n,

(6.19)

with κ∗ > 0 and u , xn+1 for the ease of notation. Note that when compared to the complex nonlinear damping terms used in the inverse optimal design of Chapter 5, the size of the above damping terms is much easier to control. Pn Consider again the general system (4.66) and the control Lyapunov function V = 21 i=1 z¯i2 . Applying the approach of Theorem 4.2, excluding the update laws but including the nonlinear damping terms si defined above, reduces the derivative of V to V˙

=

n h i X −(ci + si )¯ zi2 + ϕTfi θ˜fi z¯i + ϕTgi θ˜gi xi+1 z¯i i=1

=

n X i=1



−ci z¯i2 − κ1i



−κ2i



n  X i=1

θ˜f ϕfi z¯i − i 2κ1i

θ˜g ϕgi xi+1 z¯i − i 2κ2i

−ci z¯i2

!T

!T

θ˜f ϕfi z¯i − i 2κ1i

θ˜g ϕgi xi+1 z¯i − i 2κ2i

 1 ˜T ˜ 1 ˜T ˜ + θ θf + θ θg . 4κ1i fi i 4κ2i gi i

!

!

+

+

1 ˜T ˜ θ θf 4κ1i fi i 

1 ˜T ˜  θ θg 4κ2i gi i

If the parameter estimation errors θ˜∗ are bounded V˙ , is negative outside a compact set, which demonstrates that the modified tracking errors z¯i are decreasing outside the compact set and are hence bounded. The size of the bounds is determined by the damping gains κ∗ . Furthermore, if the parameter estimation errors are converging to zero, then the modified tracking errors will also converge to zero. From an input-output point of view, the nonlinear damping terms render the closed-loop system input-to-state stable with respect to the parameter estimation errors. The values of κ∗ should be selected very small, since nonlinear damping terms may result in high gain control for large disturbance signals if not tuned carefully.

6.2.3 Least-Squares Identifier In this section a least-squares identifier that guarantees boundedness of the parameter estimation error and its derivative is developed for the ISS backstepping design of the

6.2

MODULAR ADAPTIVE BACKSTEPPING

99

previous section. The idea of [119] is to use nonlinear regressor filtering to convert the dynamic parametric system into a static form in such a way that a standard least-squares estimation algorithm can be used. The system (4.66) can be rewritten in a general parametric form as x˙ = h(x, u) + F (x, u)T θ,

(6.20)

where h(x, u) represents the known system dynamics, F (x, u) the known regressor matrix, θ ∈ Rp the unknown parameter vector and x = (x1 , ..., xn )T the system states. Consider the x-swapping filter from [120], which is defined as   ˙ 0 = A0 − ρF (x, u)T F (x, u)P (Ω0 + x) − h(x, u), Ω0 ∈ Rn Ω (6.21)   ˙ΩT = A0 − ρF (x, u)T F (x, u)P ΩT + F (x, u)T , Ω ∈ Rp×n , (6.22) where ρ > 0 and A0 is an arbitrary constant matrix such that P A0 + AT0 P = −I,

P = P T > 0.

(6.23)

The estimation error vector is defined as ˆ = x + Ω0 − ΩT θ,

ǫ ∈ Rn ,

(6.24)

ǫ˜ = x + Ω0 − ΩT θ,

ǫ˜ ∈ Rn .

(6.25)

ǫ along with

Then ǫ˜ is governed by ǫ˜˙ =

 A0 − ρF (x, u)T F (x, u)P ǫ˜,

(6.26)

which is exponentially decaying. The least-squares update law for θˆ and the covariance update are defined as ˙ θˆ = ˆ˙ Γ

=

Ωǫ 1 + νtrace (ΩT ΓΩ) ΓΩΩT Γ − , 1 + νtrace (ΩT ΓΩ)

Γ

(6.27) Γ(0) = Γ(0)T > 0,

(6.28)

where ν ≥ 0 is the normalization coefficient. The properties of the least-squares identifier are given by the following Lemma from [118]. Lemma 6.1. Let the maximal interval of existence of solutions of (6.20), (6.21)-(6.22) with (6.27)-(6.28) be [0, tf ). Then for ν ≥ 0, the following identifier properties hold: 1. θ˜ ∈ L∞ [0, tf ) 2. ǫ ∈ L2 [0, tf ) ∩ L∞ [0, tf )

100

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.3

˙ 3. θˆ ∈ L2 [0, tf ) ∩ L∞ [0, tf ) Proof: Along the solutions of (6.22) the following holds:    d  ΩP ΩT = Ω P A0 + AT0 P ΩT − 2ρΩP F T F P ΩT + ΩP F T + F P ΩT dt  T   1 1 1 Ip F P ΩT − Ip + Ip . (6.29) = −ΩΩT − 2ρ F P ΩT − 2ρ 2ρ 2ρ

Taking the Frobenius norm results in 2  d 1 1 T 2 T trace ΩP Ω = −|Ω|F − 2ρ F P Ω − I + trace {Ip } dt 2ρ F 2ρ p 2 ≤ −|Ω|F + . 2ρ

(6.30)

This proves that Ω ∈ L∞ [0, tf ). From (6.26) it follows that  d ǫ|2 , |˜ ǫ|2P ≤ −|˜ dt which implies that ǫ˜ ∈ L2 [0, tf ) ∩ L∞ [0, tf ). Consider the function U=

1 ˜2 |θ| −1 + |˜ ǫ|2P 2 Γ(t)

(6.31)

(6.32)

which is positive definite because Γ(t)−1 is positive definite for each t. The derivative of U after some manipulations satisfies U˙ ≤ −

|ǫ|2 . 1 + νtrace {ΩT ΓΩ}

The fact that U˙ is non-positive proves that θ˜ ∈ L∞ [0, tf ) Integration of the above inequality yields ǫ p ∈ L2 [0, tf ). 1 + νtrace {ΩT ΓΩ}

Since Ω is bounded, then ǫ ∈ L2 [0, tf ). Due to ǫ = ΩT θ˜ + ǫ˜ and the boundedness ˙ Ωǫ of Ω it follows that ǫ ∈ L∞ [0, tf ), which in turn proves that θˆ = Γ 1+νtrace(Ω T ΓΩ) ∈ L∞ [0, tf ). Finally, the square-integrability of ǫ and the boundedness of Ω prove that ˙ Ωǫ θˆ = Γ 1+νtrace(Ω T ΓΩ) ∈ L2 [0, tf ). The robust backstepping controller of Section 6.2.2 allows the use of any identifier which can independently guarantee that the parameter estimation errors and their derivatives are bounded. The least-squares identifier with x-swapping filter as introduced in this section has these properties. This concludes the dicussion on the theory behind the modular adaptive backstepping approach in which the controller and identifier are designed separately.

6.3

AIRCRAFT MODEL DESCRIPTION

101

6.3 Aircraft Model Description Before the adaptive flight control designs are discussed, the aircraft dynamic model for which the controllers are designed is introduced in this section. The simplified nonlinear aircraft dynamic model has been obtained from [159]. The aircraft dynamic model (6.33) somewhat resembles that of an F-18 model.          

α˙ β˙ φ˙ θ˙ p˙ q˙ r˙

         

 =

        

 +

        

 q − pβ + zα ∆α + (g0 /V )(cosθ cos φ − cos θ0 )  yβ + p(sin α0 + ∆α) − r cos α0 + (g0 /V ) cos θ sin φ   p + q tan θ sin φ + r tan θ cos φ   q cos φ − r sin θ   lβ β + lq q + lr r + (lβα β + lrα r)∆α + lp p − i1 qr  mα ∆α + mq q + i2 pr − mα˙ pβ + mα˙ (g0 /V )(cos θ cos φ − cos θ0 )  nβ β + nr r + np p + npα p∆α − i3 pq + nq q   0 0 0 0 0 0 0 δel   0 0 0 0 0 0 0    δer   δal  0 0 0 0 0 0 0      0 0 0 0 0 0 0    δar (6.33)  δlef  lδel lδer lδal lδar 0 0 lδr    mδel mδer mδal mδar mδlef mδtef mδr   δtef  δr nδel nδer nδal nδar 0 0 nδr

Aerodynamic data are available in Tables 6.1 and 6.2 for two trimmed flight conditions: flight condition 1 at an altitude of 30000 ft and a Mach number of 0.7, and flight condition 2 at 40000 ft altitude and a Mach number of 0.6. The model has seven independent control surfaces, i.e. left and right elevators, left and right ailerons, leading and trailing edge flaps, and collective rudders. A layout of the aircraft and its control surfaces can be seen in Figure 6.2. The main simplifications made in the dynamic model are constant airspeed and no lift or drag effects on the control surfaces. The latter simplifications have been made to get the system into a lower triangular form required for standard adaptive backstepping and feedback linearization designs. The designs considered in this chapter do not suffer from this shortcoming since command filters are used to generate the intermediate control laws. The aircraft model includes second order actuator dynamics. The magnitude, rate and bandwidth limits of the actuators are specified in Table 6.3. Table 6.1: Aircraft model parameters for trim condition I, h = 30000 ft and M = 0.7. lβ = −11.04 lp = −1.4096 mq = −0.3373 nq = 0 lδr = 1.8930 mδer = −4.5176 mδr = 0 nδar = −0.0698

lq = 0 zα = −0.6257 nβ = 2.558 lδel = 6.3176 i1 = 0.7966 mδal = −0.8368 g0 = 9.80665 nδr = −1.7422

lr = 0.4164 yβ = −0.1244 nr = −0.1122 lδer = −6.3176 i2 = 0.9595 mδar = 0.8368 nδel = 0.2814 V = 212.14

lβα = −19.72 mα = −5.432 np = −0.0328 lδal = 7.9354 i3 = 0.6914 mδlef = −1.2320 nδer = −0.2814 α0 = 0.0681

lrα = 4.709 mα˙ = −0.1258 npα = −0.0026 lδar = −7.9354 mδel = −4.5176 mδtef = 0.9893 nδal = −0.0698 θ0 = 0.0681

All stability and control derivatives introduced in (6.33) are considered to be unknown

102

6.3

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

Table 6.2: Aircraft model parameters for trim condition II, h = 40000 ft and M = 0.6. lβ = −7.0104 lp = −0.7331 mq = −0.1286 nq = 0 lδr = 0.8920 mδer = −1.9782 mδr = 0 nδar = −0.0963

lq = 0 zα = −0.2876 nβ = 1.3612 lδel = 2.7203 i1 = 0.7966 mδal = −0.3183 g0 = 9.80665 nδr = −0.8018

lr = 0.3529 yβ = −0.0700 nr = −0.0619 lδer = −2.7203 i2 = 0.9595 mδar = −0.3183 nδel = 0.1262 V = 177.09

lβα = −16.4015 mα = −1.4592 np = −0.0177 lδal = 4.2438 i3 = 0.6914 mδlef = −0.4048 nδer = −0.1262 α0 = 0.1447

lrα = 1.0461 mα˙ = −0.0177 npα = 0.0696 lδar = −4.2438 mδel = −1.9782 mδtef = 0.3034 nδal = −0.0963 θ0 = 0.1447

Figure 6.2: The control surfaces of the fighter aircraft model. The control surfaces which will lock in place during the various simulation scenarios are indicated.

and will be estimated online by the parameter estimation process of the adaptive control laws. The system (6.33) is rewritten in a more suitable form for the control design as X˙ 1 X˙ 2

= =

H1 (X1 , Xu ) + Φ1 (X1 , Xu )T Θ1 + B1 (X1 , Xu )X2 H2 (X1 , X2 , Xu ) + Φ2 (X1 , X2 , Xu )T Θ2 + B2 U

X˙ u

=

Hu (X1 , X2 , Xu )

(6.34)

where X1 = (φ, α, β)T , X2 = (p, q, r)T , U = (δel , δer , δal , δar , δlef , δtef , δr )T and the uncontrolled state Xu = θ. The known nonlinear aircraft dynamics are represented by the vector functions H1 (X1 , Xu ), H2 (X1 , X2 , Xu ) and Hu (X1 , X2 , Xu ) and the matrix function B1 (X1 , Xu ). The functions Φ1 (X1 , Xu ) and Φ2 (X1 , X2 , Xu ) are the regressor matrices, while Θ1 , Θ2 and B2 are vectors and a matrix containing the unknown

6.4

FLIGHT CONTROL DESIGN

103

Table 6.3: Aircraft model actuator specifications.

Surface Horizontal Stabilizer Ailerons Leading Edge Flaps Trailing Edge Flaps Rudder

Deflection Limit [deg] [-24, 10.5] [-25, 45] [-3, 33] [-8, 45] [-30, 30]

Rate Limit [deg/s] ± 40 ± 100 ± 15 ± 18 ± 82

Bandwidth [rad/s] 50 50 50 50 50

parameters of the system, defined as Θ1

=

(zα , yβ )T

Θ2

=

( lβ , lp , lq , lr , lβα , lrα , l0 , mα , mq , mα˙ , m0 , nβ , np , nq , nr , npα , n0 )T



lδel B2 =  mδel nδel

lδer mδer nδer

lδal mδal nδal

lδar mδar nδar

0

0

mδlef 0

mδtef 0

 lδr mδr  . nδr

Note that the parameters l0 , m0 and n0 have been added to the vector Θ to compensate for additional trim moments caused by locked actuators.

6.4 Flight Control Design Now that the system has been rewritten in a structured form, the actual control design methods can be discussed. The control objective is to track a smooth reference signal X1,r with state vector X1 . The reference X1,r and its derivative X˙ 1,r are generated by linear second order filters, which can also be used to enforce the desired transient response of the controllers. The static feedback loops of the integrated and modular adaptive controllers are designed identical for comparison purposes and are therefore derived first. After that, the dynamic part of both controllers is introduced and their closed-loop stability properties are discussed.

6.4.1 Feedback Control Design The static feedback control design can be divided in two parts, an outer loop to control the aerodynamic angles and the roll angle using the angular rates, and an inner loop to control the angular rates using the control surfaces. The design procedure starts by

104

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.4

defining the tracking errors as Z1

Z2



  φ =  α − β    p =  q − r

 φr αr  = X1 − X1,r βr  pr qr  = X2 − X2,r , rr

(6.35)

(6.36)

where X2,r is the virtual control law to be defined. Step 1: The Z1 -dynamics satisfy

Z˙ 1 = B1 Z2 + B1 X2,r + H1 + ΦT1 Θ1 . 0 To stabilize (6.37), a stabilizing function X2,r is defined as   0 ˆ 1 + X˙ 1,r − Ξ2 , X2,r = B1−1 −C1 Z1 − S1 Z¯1 − H1 − ΦT1 Θ

(6.37)

(6.38)

ˆ 1 is the estimate of Θ1 , C1 is a positive definite gain matrix and where Θ S1

= κ1 ΦT1 Φ1 .

(6.39)

The compensated tracking error Z¯1 and Ξ2 are to be defined. The stabilizing function (6.38) is now fed through second order low pass filters as defined in Appendix C to produce the virtual control law X2,r and its derivative. These filters can also be used to enforce rate and magnitude limits on the signals. The magnitude and rate limits can be selected equal to the physical limits of the actual actuators or states of the aircraft. The effect that the use of these filters has on the tracking errors can be captured with the stable linear filter  0 Ξ˙ 1 = −C1 Ξ1 + B1 X2,r − X2,r . (6.40) The compensated tracking error Z¯1 is defined as Z¯1

= Z1 − Ξ1 .

(6.41)

This concludes the outer loop design. Step 2: The inner loop design starts with the Z2 -dynamics, which are given by Z˙ 2 = B2 U + H2 + ΦT2 Θ2 − X˙ 2,r .

(6.42)

0 To stabilize (6.42), the stabilizing function Mdes is defined as 0 ˆ2 U 0 = −C2 Z2 − S2 Z¯2 − B T Z¯1 − H2 − ΦT Θ ˆ ˙ B 1 2 2 + X2,r = Mdes ,

(6.43)

ˆ2 is the estimate of B2 and where C2 is a positive definite gain matrix, B S2

=

κ2 ΦT2 Φ2 +

3 X i=1

κ2i Ui2 .

(6.44)

6.4

105

FLIGHT CONTROL DESIGN

ˆ2 is a 3 × 7 matrix. In Section 6.5 several control allocation Note that the matrix B ˆ2 U = Mdes is found by algorithms are introduced to determine U 0 . The real control B ˆ2 U 0 . Finally, the stable linear filter filtering B 0 Ξ˙ 2 = −C2 Ξ2 + Mdes − Mdes

(6.45)

is defined. The derivative of the control Lyapunov function V =

1 ¯T ¯ 1 Z Z1 + Z¯2T Z¯2 2 1 2

(6.46)

along the trajectories of the closed-loop system is reduced to 7 X 1 ˜T ˜ 1 ˜T ˜ 1 ˜T ˜ V˙ ≤ −Z¯1T C1 Z¯1 − Z¯2T C2 Z¯2 + Θ1 Θ1 + Θ2 Θ2 + B B2j , 4κ1 4κ2 4κ2j 2j j=1

˜2j represents the j-th column of the matrix B ˜2 . From the above expression it can where B ¯ be deduced that the compensated tracking errors Z1 , Z¯2 are globally uniformly bounded if the parameter estimation errors are bounded. The size of the bounds is determined by the damping gains κ∗ . Furthermore, if the parameter estimation errors are converging to zero, than the compensated tracking errors will also converge to zero. This concludes the static feedback design for both adaptive controllers.

6.4.2 Integrated Model Identification To design the Lyapunov update laws of the integrated adaptive design method the control Lyapunov function V (6.46) is augmented with additional terms that penalize the estimation errors as   2 7   X   1 X ˜ T Γ−1 Θ ˜i + ˜ T Γ−1 B ˜  , (6.47) Va = V + trace Θ trace B i i 2j B2j 2j 2 i=1 j=1

where Γ∗ = ΓT∗ > 0 are the update gain matrices. Selecting the update laws ˆ˙ 1 Θ ˆ˙ 2 Θ ˆ˙ 2j B

= = =

Γ1 Φ1 Z¯1 Γ2 Φ2 Z¯2 PB2j ΓB2j Z¯2 Uj

(6.48) 

where Uj represents the j-th element of the control vector U , reduces the derivative of Va along the trajectories of the closed-loop system to V˙ a

=

−Z¯1T (C1 + S1 ) Z¯1 − Z¯2T (C2 + S2 ) Z¯2 ,

which is negative semi-definite. Hence, the modified tracking errors Z¯1 , Z¯2 converge asymptotically to zero. Note that the nonlinear damping gains are not needed to guarantee stability of this integrated adaptive design. However, for the purpose of comparison

106

6.4

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

the static feedback parts of both controllers are kept the same. Furthermore, the damping terms can be used to improve transient performance bounds of the integrated design as demonstrated in [118], although selecting them too large will result in high gain control and related numerical problems. The update laws (6.48) are driven by the compensated tracking errors Z¯i . If the magnitude or rate limits of the command filters (selected equal to limits of the actuators or states) are reached, the real tracking errors Zi may increase. However, the modified tracking errors Z¯i will still converge to zero, since the effect of these constraints has been filtered out. In this way unlearning of the update laws is prevented. Note that the update ˆ2 include a projection operator to ensure that certain elements of the matrix laws for B do not change sign and full rank is maintained at all times. For most elements the sign is known based on physical principles. The update laws are also robustified against parameter drift with continuous dead-zones and e-modification. A scheme of the integrated adaptive control law can be found in Figure 6.3. _ Z

Online Model Identification

U

Θ

Pilot Commands

Prefilters

Y

Z

_ Z

Backstepping Control Law (Onboard Model)

Command Filters Control Allocation

Mdes

U0

Constraint Effect Estimator Ξ

X

Sensor Processing

U

Figure 6.3: Integrated adaptive control framework.

6.4.3 Modular Model Identification An alternative approach to the control design with the Lyapunov-based adaptive laws of the previous section is to separate the identifier and control law designs. The theory behind this approach, referred to as modular adaptive backstepping control, was discussed in Section 6.2. A least-squares identification method is selected as the identifier module for its excellent convergence properties. An advantage of the least-squares method is that, in theory, the true system parameters can be found since the estimation is not driven by the tracking error but rather by the state of the system. The system (6.34) can be written as the general affine parametric model X˙ = H(X, U ) + F T (X, U )Θ,

(6.49)

where X = (X1T , X2T , XuT )T represents the system states, H(X, U ) are the known sysT T T tem dynamics, Θ = (ΘT1 , ΘT2 , B21 , ..., B27 ) is a vector containing the unknown con-

6.5

CONTROL ALLOCATION

107

stant parameters and F (X, U ) the known regressor matrix. The x-swapping filter and prediction error are defined as   Ω˙ 0 = A0 − ρF T (X, U )F (X, U )P (Ω0 + X) − H(X, U ) (6.50)   Ω˙ T = A0 − ρF T (X, U )F (X, U )P ΩT + F T (X, U ) (6.51) T ˆ ǫ = X + Ω0 − Ω Θ, (6.52) where ρ > 0 and A0 is an arbitrary constant matrix such that P A0 + AT0 P = −I,

P = P T > 0.

(6.53)

ˆ and the covariance update are defined as The least-squares update law for Θ ˆ˙ Θ ˆ˙ Γ

= Γ

Ωǫ 1 + νtrace (ΩT ΓΩ)

(6.54)

ΓΩΩT Γ − Γλ , 1 + νtrace (ΩT ΓΩ)

(6.55)

= −

where ν ≥ 0 is the normalization coefficient and λ ≥ 0 is a forgetting factor. By Lemma 6.1 the modular controller with x-swapping filters and least-squares update law achieves global asymptotic tracking of the modified tracking errors. Despite using a mild forgetting factor in (6.55), the covariance matrix can become small after a period of tracking, and hence reduces the ability of the identifier to adjust to abrupt changes in the system parameters. A possible solution to this problem can be found by resetting the covariance matrix Γ when a sudden change is detected. After an abrupt change in the system parameters, the estimation error will be large. Therefore a good monitoring candidate is the ratio between the current estimation error and the mean estimation error over an interval tǫ . After a failure, the estimation error will be large compared to the mean estimation error, and thus an abrupt change is declared when ǫ − ǫ¯ (6.56) ǫ¯ > Tǫ where Tǫ is a predefined threshold. Moreover, this threshold should be chosen large enough such that measurement noise and other disturbances do not trigger the resetting. However, it should also be sufficiently small such that failures will trigger resetting. The modular scheme is depicted in Figure 6.4.

6.5 Control Allocation The control designs discussed in the preceding section provide the desired body frame 0 moments Mdes . The problem of control allocation is to distribute these moments over the available control effectors U 0 . For the control design of this chapter the control allocation problem can be summarized as 0 ˆ2 U 0 = Mdes B , (6.57)

108

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.5

ˆ2 is a 3 × 7 matrix obtained from the identifiers. Without constraints on U 0 , the where B expression (6.57) has infinite solutions. In the presence of magnitude and rate constraints on U 0 , this equation has either an infinite number of solution, an unique solution, or no solution at all. Two different control allocation methods will be discussed in this section, one based on the weighted pseudo-inverse and one based on quadratic programming. The control allocation methods applied in this section are quite basic methods, many more sophisticated methods exist. Overviews of the numerous control allocation techniques can be found in [17, 52, 76, 154].

6.5.1 Weighted Pseudo-inverse A simple and computationally efficient solution to the control allocation problem is found by utilizing the weighted pseudo-inverse (WPI). Consider the following quadratic cost function J = (U 0 )T W U 0

(6.58)

where W is a weighting matrix. The solution of (6.58) is given by h i−1 0 ˆ2 )T B ˆ2 W −1 (B ˆ2 )T U 0 = W −1 (B Mdes .

(6.59)

The above equation provides an unique solution to (6.57), but it does not take any constraints on the control effectors into account. The WPI approach can therefore be interpreted as a very crude approach to control allocation. When W = I, the solution of (6.59) is referred to as the pseudo-inverse (PI) solution.

6.5.2 Quadratic Programming The main disadvantage of the WPI method is that it does not take magnitude and rate constraints on the control effectors into account. When online solving of an optimization problem is allowed, these constraints can be taken into account. Quadratic optimization problems, or quadratic programs, can be solved very efficiently and are therefore interesting for online applications. The quadratic programming (QP) solution will be feasible

Pilot Commands

Prefilters

Y

Backstepping Control Law (Onboard Model)

Z

Control Allocation

Mdes

Θ

U

U

Least Squares Identifier

ΩT , Ω0

ε

x-Swapping Filter

Online Model Identification X

Figure 6.4: Modular adaptive control framework.

X

Sensor Processing

6.5

CONTROL ALLOCATION

109

when the desired moment vector is within the attainable moment set (AMS), and infeasible if it is outside. In [181] two approaches to modify the QP solution are proposed to guarantee that the solution will always be feasible: direction preserving and sign preserving. The direction preserving method scales down the magnitude of the desired moment with a scaling factor σ such that it falls within the attainable moment set. The sign preserving method is 0 very similar, but allows the scaling σ to be split amongst the three components of Mdes individually as σroll , σpitch and σyaw . The difference between the scaling methods is illustrated in Figure 6.5.

Figure 6.5: Illustration of two quadratic programming solutions: (a) Direction preserving method (b) Sign preserving method [181].

The sign preserving control allocation method makes more effective use of the available control authority, and therefore this method is implemented in the flight control designs. The QP is formulated as [181] min 0

U ,σ

s.t.

where

1 T x Hx + cTU x 2 0 ˆ2 U 0 − ΣT Mdes B =0      Ulb U0 Uub  0   σroll   1       0  ≤  σpitch  ≤  1 0 σyaw 1

(6.60)

   

x = ((U 0 )T , 1 − σroll , 1 − σpitch , 1 − σyaw )T ,   σroll 0 0 σpitch 0 , Σ= 0 0 0 σyaw   QU 0 H= . 0 Qσ

110

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.6

The weighting matrices QU , Qσ and cU are user specified. The scaling factors are more heavily weighted than the control inputs to make sure that all the available control authority is used: Qσ ≫ QU .

6.6 Numerical Simulation Results The control designs are evaluated on their tracking performance and parameter estimation accuracy for several failure scenarios during two separate maneuvers of 60 seconds. The task given to the controllers is to track roll angle and angle of attack reference signals, while the sideslip angle is regulated to zero. The simulations are performed in c MATLAB/Simulink with a third order solver and 0.01s sampling time. The controllers, identifiers and aircraft model are all written as M S-functions.

6.6.1 Tuning the Controllers The gains of both controllers are selected as C1 = I, C2 = 2I and all damping terms κ∗ are taken equal to 0.01. These gains were selected after a trial-and-error procedure in order to get an acceptable nominal tracking response. Note that Lyapunov stability theory only requires the control gains to be larger than zero, but it is natural to select the gains of the inner loop largest. The dynamics and limits of the outer loop command filters are selected equal to the actuator dynamics of the aircraft model. The inner loop command filters do not contain any limits on the virtual control signals. With the tuning of the static feedback designs finished, the identifiers can be tuned. Again the theory for the integrated adaptive design only requires the update gains to be larger than zero. Selecting larger gains results in a more rapid parameter convergence at the cost of more control effort. However, the effect of the update gains on the transient performance of the closed-loop system is unclear, since the dynamic behavior of the tracking error driven update laws can be quite unexpected. As such, it turns out to be very time consuming to find an unique set of update gains of the Lyapunov-based identifier for a range of failure types and two different flight conditions. This is a clear disadvantage of the integrated adaptive design. All tracking error driven update laws are normalized and the update gains related to the symmetric coefficients are selected equal to 10 and the gains related to the asymmetric coefficients equal to 3. The constant σ related to the e-modification (see Section 4.2.3) is taken equal to 0.01 and the continuous dead-zone bounds are taken equal to 0.01 deg in the outer loop and 0.1 deg/s in the inner loop. The tuning of the least-squares identifier is much more straight-forward, since the gain scaling is more or less automated and the dynamic behavior is similar to the aircraft model. However, the selection of a proper resetting threshold may take some time. All diagonal elements of the update gain matrix are initialized at 10 and the resetting threshold is selected as Tǫ = 20. A disadvantage of the modular adaptive design is that the least-squares identifier in combination with regressor filtering has a much higher dynamical order (more states) than the Lyapunov identifier and hence the simulations with the modular adaptive design take up more time.

6.6

NUMERICAL SIMULATION RESULTS

111

6.6.2 Simulation Scenarios The simulated failure scenarios are limited to individual locked control surfaces at different offsets from zero. As indicated in Figure 6.2, failures of the left aileron and the left elevator surfaces are considered at both flight conditions: the left aileron locks at −25, −10, 0, 10, 25 and 45 degrees, and the left elevator locks at −20, −10, −5, 0, 5 and 10 degrees. A positive deflection means trailing edge down for both control surfaces. All simulations are started from the trimmed flight condition; scenarios 1 and 2 at flight condition I, scenarios 3 and 4 at flight condition II. The simulated failures are initiated 1 second into the simulation and the failed surface is deflecting to the failure position subject to the rate limit of the corresponding effector, i.e. 100 deg/s for the aileron and 40 deg/s for the elevator surface. Second order command filters are used to generate the reference signals on angle of attack α and roll angle φ. The following two maneuvers are considered: 1. three angle of attack doublets of ±15+α0 deg are flown, while a roll angle doublet of ±90 degrees is commanded. 2. three multi axis doublets are flown, with angle of attack and roll angle of ±15 + α0 and ±60 deg, respectively. Figure 6.6 shows the reference signals for the maneuvers, which have been generated with second order command filters. The failure scenarios are summarized in Table 6.4.

Figure 6.6: The two simulated maneuvers.

Each scenario and failure case is simulated for each of the adaptive control laws and a non-adaptive backstepping controller, used as a baseline, combined with each of the three control allocation methods discussed in Section 6.5. Two different weight matrices are used for the weighted pseudo-inverse and QP control allocation methods   1 1 20 20 10 10 5 WU1 = diag , (6.61)   20 20 1 1 10 10 5 . WU2 = diag

112

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.6

The first weight matrix favors deflections of the horizontal tail surfaces over the ailerons, while the second weight matrix favors the use of ailerons over the horizontal tail surfaces.

Table 6.4: Definition of the simulation scenarios. Scenario 1 2 3 4

Maneuver 1 2 1 2

Trim Condition I I II II

Failed Effector Left Aileron Left Elevator Left Aileron Left Elevator

Lock Positions 45, 25, 10, 0, -10, -25 degrees 10.5, 5, 0, -5, -10, -24 degrees 45, 25, 10, 0, -10, -25 degrees 10.5, 5, 0, -5, -10, -24 degrees

6.6.3 Controller Comparison Nominal Performance First of all, the results of the simulated maneuvers without any failures are presented. The root mean square (RMS) error, or quadratic mean of the tracking errors over the whole duration of the simulation for the different control approaches combined with the control allocation methods is presented in Table 6.5. As a reference, the results for a backstepping controller without adaptation, but with robustifying damping terms, are included. The results show that the choice of control allocation method does not have a significant impact on the nominal performance. When the estimate of the control effectiveness matrix B2 is good, and the control moments commanded by the controller are within the attainable moment set, the control allocation methods are able to generate the commanded moment. The performance of the integrated design is better than the nominal and modular designs, since the tracking error driven update laws will adapt the system parameters even if their values are correctly initialized and the dead-zones in place. This is a general property of Lyapunov-based update laws for these type of adaptive designs. The modular design on the other hand recognizes that the parameter estimates are at their correct value, and therefore does not adapt the parameters. Table 6.5: Tracking performance, nominal case.

Controller NOMINAL INTEGRATED MODULAR

PI 1.0013 0.7692 1.0013

Control Allocation WPI WU1 WPI WU2 QP WU1 1.0139 1.0007 0.9964 0.8717 0.7402 0.8198 1.0139 1.0007 0.9964

QP WU2 0.9964 0.7953 0.9964

6.6

NUMERICAL SIMULATION RESULTS

113

Performance in the Presence of Actuator Failures The same reference tracking problem is considered with failures. To be able to present some meaningful statistics on performance, simulation cases which were terminated due to excessive tracking errors are not included in the comparison. The amount of excluded cases is given in Table 6.6. For each scenario, 6 failure case were performed for every control allocation method, resulting in 120 failure simulations per controller. It is clear that all the adaptive control laws reduce the number of failed simulations considerably with respect to the non-adaptive control law. The robust non-adaptive control law only results in satisfactory tracking for the mildest failure cases. Another striking fact is that the integrated adaptive control law in combination with the control allocation weight matrix WU1 performs poorly, especially for the WPI method. Weight matrix WU1 gives priority to the horizontal stabilizers. If one of these surfaces fails, and its loss of effectiveness is poorly estimated, the difference between the desired moments and the actually generated moments will be large, resulting in performance degradation. This effect is much larger when the weighted pseudo-inverse is used instead of the more sophisticated QP control allocation method which can incorporate constraints on the input. The modular adaptive design is less sensitive to the choice of control allocations algorithm. The few terminated failure cases that occur for the adaptive designs are the most extreme failure cases. For example, in scenario 4 with an elevator hard-over failure of 10.5 degrees, the simulation is terminated for all flight control laws. Stability can still be maintained at straight, level flight, but the commanded maneuver is too demanding for this failure at flight condition II. The RMS of the tracking errors over the whole duTable 6.6: Number of terminated simulation cases.

Controller NOMINAL INTEGRATED MODULAR

PI 15 2 4

WPI WU1 19 14 15

Control Allocation WPI WU2 QP WU1 18 15 4 8 5 3

QP WU2 18 3 2

ration of the simulations have been generated for a controller performance comparison. The tabulated results of the numerical simulations can be found in Table 6.7. Note that the results for the damaged aircraft are averaged over all the successful failure scenarios. The average tracking performance of the modular approach is better than that of the integrated design, although it should be noted that for the PI control allocation the results of the integrated design include two of the more severe failure cases. However, when a weighted control allocation method is included, the performance of the modular adaptive controller is clearly superior. The performance of the nominal controller is included for comparison, note that the tracking performance for the mild failure cases already degrades when compared to the

114

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.6

nominal performance. Unsurprisingly, the average performance with the QP control allocation is better than for the (weighted) pseudo-inverse methods, and the number of successful simulations is also higher. The WPI method does not take constraints on the surface deflections into account, which can result in suboptimal use of the available control effectiveness, and thus reduced performance. In [159] similar simulations were performed for a tuning function adaptive backstepping design in combination with the weighted pseudo-inverse and direct control allocation. Their results show that the weighted pseudo-inverse control allocation gave the best results due to the artificial lead it generates, although it is pointed out that this lead can also result in poor performance during maneuvers. However, [159] did not consider realistic failures, the investigation was limited to maneuvers with wrong initial estimates of the aerodynamic parameters. With control surface failures, a more sophisticated control allocation method is clearly more beneficial. A possible source for the better performance

Table 6.7: Post-failure tracking error RMS (terminated cases removed).

Controller NOMINAL INTEGRATED MODULAR

PI 5.7142 2.0363 1.7437

Control Allocation WPI WU1 WPI WU2 QP WU1 4.8384 4.4557 5.7652 2.6949 2.3331 2.4066 2.0602 2.4741 1.7660

QP WU2 4.2899 2.3583 1.8176

of the modular controller in combination with weighted control allocation is the accurateness of its parameter estimation. To verify this hypothesis, the average errors over the last 5 seconds of the simulation between the estimate of the parameter and the true values of the post-failure parameters are calculated. The average estimation errors of parameters not related to the control surfaces are shown in Table 6.8, Table 6.9 presents the estimation errors of the elements of the control effectiveness matrix that did not change due to the failure. Finally, the estimation errors in the elements of the control effectiveness matrix related to the failed surfaces are shown in Table 6.10. From these tables it becomes clear that the identifier of the modular design estimates the parameters closest to their true values. In fact, if the simulations are continued the estimates of the least squares algorithm keep converging closer to the true parameters. For the integrated adaptive design the opposite is often true, which is why parameter projection methods are usually introduced to bound the values of the parameter estimation errors for an adaptive backstepping design. Most crucial for the weighted control allocation are the estimation errors in the effectiveness of the failed surfaces as shown in Table 6.10. It is evident that the estimation quality of the modular design is superior for the parameters, which explains why this control law has the most successful reconfigurations when a weighted control allocation method is used.

6.6

NUMERICAL SIMULATION RESULTS

115

Table 6.8: Average parameter estimation error over last 5 seconds (terminated cases removed).

Controller INTEGRATED MODULAR

PI 0.2064 0.0589

Control Allocation WPI WU1 WPI WU2 QP WU1 0.2187 0.1950 0.2805 0.0242 0.0413 0.0739

QP WU2 0.2272 0.0516

Table 6.9: Average estimation error over last 5 seconds of the elements of B2 that did not change due to a failure.

Controller INTEGRATED MODULAR

PI 0.3034 0.2125

Control Allocation WPI WU1 WPI WU2 QP WU1 0.3294 0.2796 0.2484 0.0691 0.1508 0.1481

QP WU2 0.2628 0.1310

Table 6.10: Average estimation error over last 5 seconds of B2 elements relating to failed surfaces.

Controller INTEGRATED MODULAR

PI 1.7776 0.4046

Control Allocation WPI WU1 WPI WU2 QP WU1 1.0579 1.5359 2.0979 0.7117 0.2266 0.2208

QP WU2 1.8685 0.1005

116

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.7

Specific Failure Cases The response of the aircraft during one of the maneuvers with a hard-over of the left aileron is shown in Figure D.1 of Appendix D.1 for the integrated controller combined with PI control allocation. Despite the 45 degree lock of the left aileron after 1 second, stability is maintained and tracking performance is reasonable. The realized control surface deflections are shown in Figure D.1(b). The remaining control surfaces compensate for the trim moment introduced by the locked aileron, saturating both elevators during the roll doublets. Figure D.1(c) shows the realized total control moment coefficients versus the commanded control moment coefficients adjusted with the estimated trim moment introduced by the failure. Finally, the results of the parameter estimation of the elevator and aileron related control derivatives can be found in Figure D.1(d). It can be seen that the parameter estimates converge, but, as expected, not to their true values. Although the estimates do not converge to the true values, the plots of Figure D.1(c) demonstrate that the difference between the commanded control moment and the realized control moment is relatively small. The results of the same simulation scenario with the modular adaptive controller can be found in Figure D.2. Tracking performance of this controller is even better than for the integrated design. As can be seen in Figure D.2(d), the parameter estimates generated by the least-squares identifier converge to their true values. The results of a comparison between both controllers for simulation scenario 4 are shown in Figures D.3 and D.4. After 1 second of simulation time the aircraft experiences a left stabilizer hard-over to 10.5 degrees. This failure results in even more coupling between the longitudinal and lateral motions then the aileron hard-over. In this simulation the controllers make use of the QP control allocation with weighting WU2 . Again, both controllers manage to stabilize the aircraft after the failure. Tracking performance is restored close the the nominal performance, after some large initial tracking errors. The estimated parameters of both designs converge, but only the estimated parameters of the modular design converge to their true values. As discussed earlier, the performance of both adaptive designers with weighting WU2 used in the control allocation is much better then with WU1 . This performance difference is more pronounced when the unsophisticated weighted pseudo-inverse control allocation is used. Furthermore, the modular adaptive design is less sensitive to the weighting used than the integrated design due to its true parameter estimates. These statements are illustrated in Figure D.5, where the tracking performance of both controllers, with the WPI control allocation method, is compared in simulation scenario 2, where the aircraft suffers a left stabilizer lockup at 0 degrees after 1 second.

6.7 Conclusions Two nonlinear adaptive flight control designs for an over-actuated fighter aircraft model have been studied. The first controller is a constrained adaptive backstepping design with control law and dynamic update law designed simultaneously using a control Lyapunov

6.7

CONCLUSIONS

117

function, while the second design is an ISS-backstepping controller with a separate recursive least-squares identifier. In addition, two control allocation methods with different weighings have been used to distribute the desired moments over the available control surfaces. The controllers have been compared in numerical simulations involving several types of aileron and horizontal stabilizer failures. Several important observations can be made based on this comparison: 1. Results of numerical simulations show that both adaptive controllers provide a significant improvement over a non adaptive NDI/backstepping design in the presence of actuator lockup failures. The success rate and performance of both adaptive designs with the pseudo inverse control allocation is comparable for most failure cases. However, in combination with weighted control allocation methods the success rate and also the performance of the modular adaptive design is shown to be superior. This is mainly due to the better parameter estimates obtained by the least squares identification method. The Lyapunov-based update laws of the integrated adaptive backstepping designs, in general, do not estimate the true value of the unknown parameters. It is shown that especially the estimate of the control effectiveness of the damaged surfaces is much more accurate using the modular adaptive design. It can be concluded that the constrained adaptive backstepping approach is best used in combination with the simple pseudo inverse control allocation to prevent unexpected results. 2. The computational load of the integrated adaptive design is much lower than for the modular design. This is due to the higher dynamic order of the estimator of the latter approach. The number of states of the Lyapunov-based estimator is equal to the number of parameters to be estimated p. The least-squares identifier used by the modular design has p × p + p states, while the x-swapping filter has an additional p × n + n states, with n being the number of states of the system to be controlled. This is a critical advantage of the integrated adaptive design when considering real-time implementation. 3. The integrated adaptive design does not require the nonlinear damping terms used in this experiment to compensate for the slowness of the identifier. The nonlinear damping terms can easily result in high gain feedback control and numerical instability. 4. The tuning of the update laws of the integrated design turns out to be quite time consuming for this simplified aircraft model. Increasing the adaptation gain may lead to unwanted transients in the closed-loop tracking performance. This tuning process may have to be improved when attempting the control design for the highfidelity full envelope F-16 model or an alternative identifier may have to be found. 5. For some simulated failure cases, the adaptive controller managed to stabilize the aircraft, but the commanded maneuver proved too challenging for the damaged aircraft. Hence, an adaptive controller by itself may not be sufficient for a good reconfigurable flight control system. The pilot or guidance system also needs to

118

COMPARISON OF INTEGRATED AND MODULAR ADAPTIVE FLIGHT CONTROL

6.7

be aware of the characteristics of the failure, since the post-failure flight envelope might be a lot smaller. This statement has resulted in a whole new area of research, usually referred to as adaptive flight envelope estimation and/or protection, see e.g. [198, 211]. Using the adaptive controllers developed in this thesis, it is possible to indicate to the pilot which axes have suffered a failure, so that he is made aware that there is a failure and that he should fly more carefully. However, a fully adaptive flight envelope protection system is beyond the scope of this thesis work.

Chapter

7

F-16 Trajectory Control Design The results of the previous chapter demonstrated that the constrained adaptive backstepping flight control system improved the closed-loop performance in the case of sudden changes in the dynamic behavior of the aircraft. In this chapter the control system design framework of the previous chapter is extended to nonlinear adaptive control for the complex high-fidelity F-16 dynamic model of Chapter 2, which is valid over a large, subsonic flight envelope. A flight envelope partitioning method using B-spline networks is introduced to simplify the online model identification and make real-time implementation feasible. As a study case a trajectory control autopilot is designed, which is evaluated in several maneuvers with actuator failures and uncertainties in the onboard aerodynamic model. The trajectory control problem is relatively challenging since the uncertain system to be controlled has a high relative degree. It will be shown that the constrained adaptive backstepping approach is well suited to tackle this problem.

7.1 Introduction In this chapter the command filtered adaptive backstepping design method is applied to the control design for the F-16 dynamic model of Chapter 2, thereby extending the results of Chapter 6 which are limited to a single point in the flight envelope. The size of the aerodynamic forces and moments of the F-16 model varies nonlinearly with the flight condition. Approximating uncertainties, resulting from modeling errors or sudden changes due to failures, in these complex force and moment functions means that the regressors of the identifier will have to be selected very large in order to capture the dynamic behavior of the complete aircraft model. On the other hand, a real-time implementation of the adaptive control method is still an important goal, hence the computational complexity should be kept at a minimum. As a solution, a flight envelope partitioning method [152, 153, 203] is proposed to capture the globally valid aerodynamic model into multiple locally valid aerodynamic models. 119

120

F-16 TRAJECTORY CONTROL DESIGN

7.2

The Lyapunov-based update laws of the adaptive backstepping method only update a few local models at each time step, thereby decreasing the computational load of the algorithm. B-spline networks are used to ensure smooth transitions between the different regions. In Section 7.2 the flight envelope partitioning method and resulting local approximation scheme is further explained. In the second part of the chapter an inertial trajectory controller in three-dimensional air space for the F-16 model is designed using the adaptive backstepping approach combined with multiple model approach in the parameter update laws. The trajectory control problem is quite challenging, since the system to be controlled has a high relative degree, resulting in a multivariable, four loop adaptive feedback design. The performance of the autopilot is evaluated in numerical simulation scenarios involving several types of trajectories and uncertainties in the onboard aerodynamic model. The conclusions are presented in Section 7.5.

7.2 Flight Envelope Partitioning In the previous chapter a reconfigurable flight control system based on the constrained adaptive backstepping method was designed for a simplified fighter aircraft model. The aircraft model was only valid at a single operating point and hence the identifier only had to estimate the aerodynamic model error at that point. As discussed before, this model error can be the result of modeling inaccuracies or sudden changes in the dynamic behavior of the aircraft, e.g. due to structural damage or control surface failures. The high-fidelity F-16 model, as detailed in Chapter 2, contains aerodynamic data valid over the entire subsonic flight envelope. Hence, the model error in this case is, in general, a complex nonlinear function dependent on the states and inputs of the aircraft valid over a large domain of operation. To ensure that the constrained adaptive backstepping method can also be applied to the control design for this more complex aircraft model the regressors of the parameter update laws will have to be selected in an appropriate way, i.e. in such a way that they are ‘rich’ enough to accurately identify the model error. One possible approach is to view the model error as a black box and introduce some form of neural network for which the weights are updated by the adaptive backstepping update laws, see e.g. [125, 176, 196]. The main motivation for this approach is that neural networks are universal approximators and can be used to approximate continuous functions at any arbitrary accuracy as long as the network is large enough [79]. However, exactly how large the network should be selected is very difficult to determine since it has no real physical meaning. Hence, the trade-off between estimation accuracy and computational load is not transparent. In this thesis a different and more intuitive approach based on [144, 153] is used. The idea is to ‘partition’ the flight envelope into multiple connecting operating regions called hyperboxes or clusters. In each hyperbox a locally valid linear-in-the-parameter (polynomial) aerodynamic model is defined, for which the parameters can be updated with the Lyapunov based update laws of the adaptive backstepping control law. An additional advantage of using multiple, local models is that information of the models that are not

7.2

FLIGHT ENVELOPE PARTITIONING

121

updated at a certain time step is retained, thereby giving the approximator memory capabilities. The partitioning can be done using multiple state variables, the choice of which depends on the expected nonlinearities of the system. In this thesis B-spline networks are employed to ensure smooth transitions between the local aerodynamic models [42, 208], but fuzzy sets or radial basis function networks could also be used. More advanced local learning algorithms can be found in literature, see e.g. [145, 204] and related works, where nonlinear function approximation with automatic growth of the learning network according to the nonlinearities and the working domain of the control system is proposed. However, the computational load of these type of methods may well be too large for a real-time implementation. Therefore the implementation of these methods is not investigated in this thesis.

7.2.1 Partitioning the F-16 Aerodynamic Model An earlier study at the Delft University of Technology [203] already examined the possibilities of partitioning the F-16 aerodynamic model for modeling and identification purposes. The study focused on fuzzy sets for the smooth transitions between the multiple models and several manual and automatic methods for the partitioning were evaluated. For the F-16 model it is not necessary to focus on such complex automatic partitioning methods, since the aerodynamic data are already given in tabular form and have a polynomial model structure. As an example, consider the normal force coefficient CZT and the pitch moment coefficient Cmt as given in Section 2.4, but repeated here for convenience sake  δlef  CZT = CZ (α, β, δe ) + δCZlef 1 − 25  δlef i q¯ c h + CZq (α) + δCZqlef (α) 1 − 2VT 25 and CmT

= +

where

h i  δlef  Cm (α, β, δe ) + CZT xcgr − xcg + δCmlef 1 − 25 i δlef  q¯ c h Cmq (α) + δCmqlef (α) 1 − + δCm (α) + δCmds (α, δe ) 2VT 25 δCZlef δCmlef

= =

CZlef (α, β) − CZ (α, β, δe = 0o ) Cmlef (α, β) − Cm (α, β, δe = 0o ).

As can be seen, the effects of altitude or Mach number are not included in the aerodynamic database itself. This is because the aerodynamic data is only valid at subsonic flight conditions. All static and rotational coefficient terms in the above expressions are determined from 1, 2 or 3-dimensional look-up tables depending on a combination of the angle of attack α, the sideslip angle β and the elevator deflection δe . The density of the

122

F-16 TRAJECTORY CONTROL DESIGN

7.2

data points in the look-up tables for the angle of attack data varies between 5o and 10o for angles of attack above 55o . For the sideslip angle tables the grid points are spaced 2o between −10o and 10o , while they points are 5o apart outside this range. Finally, the grid points of the tables dependent on the elevator deflection are 5o degrees apart. As explained in Section 2.3, the leading edge flap is controlled automatically by a separate system. It is possible to translate this polynomial aerodynamic model to the proposed multiple model form directly and even use the same grid, i.e. partitioning. However, this would not be very realistic. In reality the aerodynamic model is never perfect, since it is obtained from (virtual) wind tunnel experiments and flight tests. To make the experiments more realistic, the onboard model used by the controller and the multiple polynomial models used by the identifier are selected to be of a more basic structure. In [127] the aerodynamic data of the F-16 was already simplified by integrating all leading edge flap dependent tables into the rest of the tables and approximating some sideslip dependencies. However, the range of the data has been reduced to −10o ≤ α ≤ 45o in the process. These steps have greatly reduced the size of the database, but the response of the approximate model constructed from this new data is still close to the response with the full aerodynamic model in the reduced flight envelope. This aerodynamic data will be referred to as the low-fidelity set. The onboard model used by the backstepping controller will use the low-fidelity data to simulate the modeling error. At angle of attacks outside the range of the low-fidelity model the data of the nearest known point will be used, e.g. at 75 degrees angle of attack the low fidelity model will use the data from 45 degrees. The identifier should be able to compensate for the large modeling errors in this region. The structure of the normal force coefficient CZT and the pitch moment coefficient CmT for the low fidelity model, in an affine form suitable for control, are given by CZl T

=

CZl (α, β) +

q¯ c l C (α) + CZl δe (α, δe )δe + CˆZl T 2VT Zq

and l Cm T

=

h i q¯ c l l l l Cm (α, β) + CZl T xcgr − xcg + C (α) + Cm (α, δe )δe + Cˆm , T δe 2VT mq

l where CˆZl T and Cˆm are the estimates of the modeling errors. The other force and T moment coefficients of the low-fidelity model are similarly defined. All coefficient terms are again given in tabular form. Note that the higher order elevator deflection dependent l terms are contained in the base terms CZl and Cm . l The next step is to further specify the estimates of the modeling errors CˆZl T and Cˆm in T such a way that they can account for all possible uncertainties. If the failure scenarios are limited to symmetric damage and/or control surface failures the polynomial structure of the estimates can be selected identical to the known onboard model structure, i.e.

CˆZl T

q¯ c ˆl = CˆZl (α, β) + C (α) + CˆZl δe (α, δe )δe 2VT Zq

(7.1)

7.2

FLIGHT ENVELOPE PARTITIONING

123

and l Cˆm T

=

q¯ c ˆl l l C (α) + Cˆm (α, δe )δe , Cˆm (α, β) + δe 2VT mq

(7.2)

where each polynomial coefficient term Cˆ∗l varies with the flight condition. It should be possible to model all possible errors with parameter estimate definitions. However, in the case of asymmetric damage to an aircraft the longitudinal force and moment coefficient will become dependent of more lateral states and vice versa. The dependency of the total aerodynamic force and moment coefficients on the aircraft states for the nominal (undamaged) aircraft model is given in the second column of Table 7.1. It can be seen that for the most part the aerodynamic model for the undamaged aircraft is almost decoupled with respect to the aircraft states. If the aerodynamic characteristics of the aircraft change, due to a structural failure, then the dependencies of the aerodynamic coefficients on the aircraft states change and dependencies on additional states are possibly established. In [51] some specific asymmetric structural failures are Table 7.1: Aerodynamic force and moment coefficients - dependency on aircraft states for the nominal and damaged aircraft model [51].

Coefficient CXT CYT CZT ClT CmT CnT

nominal aircraft model [α, β, VT , q, δe ] [α, β, VT , p, r, δa , δr ] [α, β, VT , q, δe ] [α, β, VT , p, r, δa , δr ] [α, β, VT , q, δe ] [α, β, VT , p, r, δa , δr ]

damaged aircraft model [α, β, VT , p, q, r, δe , δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ]

discussed, i.e. wing, fuselage and vertical stabilizer damage. It is concluded that for each failure the aircraft aerodynamic characteristics become more coupled in case of an asymmetric failure. This means that all aerodynamic coefficients become dependent on all longitudinal and lateral aircraft states, see the third column of Table 7.1. Some failures (wing damage) will cause stronger coupling than others (fuselage damage) because it directly depends on the measure of asymmetry introduced into the aerodynamic characteristics of the aircraft. This means the parameter estimation structures (7.1), (7.2) would have to be extended with more coefficient terms to accurately estimate the aerodynamic model after such failures. However, as discussed in Chapter 2, no aerodynamic data is available for any asymmetric damage cases for the F-16 model. Therefore, the research in this thesis will be limited to symmetric structural damage or actuator failure scenarios only. Hence, the polynomial approximation structures (7.1) and (7.2) are sufficiently rich.

124

F-16 TRAJECTORY CONTROL DESIGN

7.2

7.2.2 B-spline Networks In the previous section the flight envelope was subdivided and a low order approximating polynomial for the model error was defined on each of the resulting subregions. Numeric splines are very suitable to connect such a set of polynomials in a continuous fashion to fit a more complex nonlinear function over a certain domain. In this thesis B-splines are used, which are computationally efficient and possess good numeric properties [46, 174]. In this section the properties of B-splines and their use in B-spline networks is discussed. An adaptive B-spline network can be used to relate k inputs and a single output y on a restricted domain of the input space. One-dimensional B-spline Networks First, consider the following network showing a realization with one input: The net-

Figure 7.1: One-dimensional network [44].

work has two hidden layers. One hidden layer would be enough for a one-dimensional network, but multi-dimensional networks use two hidden layers as will be shown later on. The first hidden layer is used to distribute the inputs over the nodes. In the onedimensional network of Figure 7.1 one input is distributed over n nodes in the first hidden layer, so each node has only one input. To this input a basis function F is applied. These basis functions are B-splines of any desired order. An n-th order B-spline function consists of pieces of (n-1)th order polynomials, such that the resulting function is (n-1) times differentiable. B-spline basis function have the interesting property that they are non-zero on a few adjacent subintervals, which makes them ‘local’ as a result. B-spline basis functions can be defined in the following way [49]: Definition 7.1 (B-spline basis function). Let U be a set of m + 1 non-decreasing numbers, u0 ≤ u2 ≤ u3 ≤ ... ≤ um . The ui ’s are called knots, the set U the knot vector, and the half-open interval [ui , ui + 1) the ith knot span. Note that since some ui ’s may be equal, some knot spans may not exist. If a knot ui appears k times (i.e., ui = ui + 1 = ... = ui + k − 1), where k > 1, ui is a multiple knot of multiplicity k, written as ui (k). Otherwise, if ui appears only once, it is a simple knot. If the knots are equally spaced (i.e., ui + 1 − ui is a constant for 0 ≤ i ≤ m − 1), the knot vector or the knot sequence is said uniform; otherwise, it is non-uniform.

7.2

FLIGHT ENVELOPE PARTITIONING

125

The knots can be considered as division points that subdivide the interval [u0 , um ] into knot spans. All B-spline basis functions are supposed to have their domain on [u0 , um ]. To define B-spline basis functions, we need one more parameter, the degree of these basis functions, p. The ith B-spline basis function of degree p, written as Ni,p (u), is defined recursively as follows1 :  1 if ui ≤ u < ui+1 Fi,0 (u) = 0 otherwise ui+p+1 − u u − ui Fi,p−1 (u) + Fi+1,p−1 (u) Fi,p (u) = ui+p − ui ui+p+1 − ui+1 B-splines of order 2 through 6 are depicted as an example in Figure 7.2. Note that a spline function differs from zero on a finite interval.

Figure 7.2: B-splines order 2 through 6.

The second hidden layer of Figure 7.1 also consists of n nodes and each node of this layer also has only one input. To this input a function G is applied which is merely a multiplication of this input with a weight w. The results of all second hidden layer nodes are summed in the output node. When the spline functions of the various nodes are properly spaced, every one-dimensional function can be approximated. This is shown in the Figure 7.3, where the various splines (F1 to Fn ) combined with the various weights (w1 to wn ), together form an output function: y=

n X

wi Fi (u).

i=1

1 This

formula is usually referred to as the Cox-De Boor recursion formula

(7.3)

126

7.2

F-16 TRAJECTORY CONTROL DESIGN

As an example, the input could be the angle of attack over an input space of 0 to 10 degrees and the output one of the coefficients of the polynomial approximators (7.1) and (7.2). Note that (7.3) can also be written in the standard notation used throughout this thesis as ˆ y = ϕ(u)T θ, (7.4) where ϕ(u) = (F1 (u), ..., Fn (u))T is the known regressor and θ = [w1 , ..., wn ]T a vector of unknown constant parameters.

Figure 7.3: The output function y as a combination of third order B-splines and weights.

Two-dimensional B-spline Networks Two-dimensional B-spline networks have two input nodes. The first hidden layer, as with the one-dimensional network, consists of nodes, to which a basis function F is applied. This is shown in Figure 7.4 below. To the first input a group of n nodes are applied, and to the second input a group of m nodes are applied. The second hidden layer now consists of nodes which each have two inputs u1 and u2 . For every combination of a node from one group and and a node from the second group, a node exists. To each node of the second hidden layer, a function G is applied which is now a multiplication of the two inputs multiplied by a weight w. Again the output node sums the results of all second hidden layer nodes: y=

n X m X i=1 j=1

wi+n(j−1) F1i (u1 )F2j (u2 ).

(7.5)

7.2

FLIGHT ENVELOPE PARTITIONING

127

Figure 7.4: Two-dimensional network [44].

When the spline functions of the various nodes are properly spaced, any two-dimensional function can be approximated. The extension to n-dimensional networks is evidently straightforward. B-spline Network Learning Learning of B-spline networks can be done in several ways, the most common way to adapt is after each sample: ∆wi = γeFi (u) (7.6) where ∆wi is the adaptation of weight i, Fi is B-spline function i and u is the input. Given a certain input u, only a limited number of splines Fi (u) are nonzero. Therefore only a few weights are adapted after each sample, i.e. the adaptation is local. There are two practical methods for the network learning process available: • offline learning, where a previously obtained set of data is available and the network learns from this environment. The complete data set can be presented at the same time, i.e. batch learning, but it is also possible to use only a part of the set at each time step for training, i.e. stochastic learning. The learning phase is separated from the simulation phase, i.e. offline learning; • online learning, when no data set is available to train the network, the network can be trained during a simulation. The network learns to include the new data points in the network. Since the learning phase takes place during the simulation phase, this is called online learning. Application of B-spline Networks Based on the definition of the B-spline networks and the properties of B-splines, it can be concluded that B-spline networks have several characteristics that make them very suitable for online adaptive control:

128

F-16 TRAJECTORY CONTROL DESIGN

7.3

• Because only a small number of B-spline basis functions is non-zero at any given time step, the weight updating scheme is local. This has the advantage that only a few update laws are used at the same time, resulting in a lower computational load. Another advantage is that the network retains information of all flight conditions, since the local adaptation does not interfere with points outside the closed neighborhood. This means the approximator has memory capabilities, and hence learns instead of simply adapting the weights. • The spline outputs are always positive and normalized, which provides numerical stability.

7.2.3 Resulting Approximation Model For the F-16 aerodynamic model error approximation each of the coefficient terms in (7.1) and (7.2) are represented by a B-spline network. Third order B-spline basis functions are used and the grid for each of scheduling parameters α, β and δe is selected as 2.5 degrees. In earlier work this combination provided enough accuracy to estimate the model errors, even in the case of an aircraft model with sudden changes in the dynamic behavior [189]. Note, however, that [203] demonstrated that less partitions are needed to accurately identify the nominal aerodynamic F-16 model. Since sudden, unexpected changes in the model are considered in this work, more partitions are used. Note that using more partitions does not mean that more models are updated at a certain time step; this is determined by approximation structures, the order of the B-spline functions and the order of the B-spline networks. The local behavior of the approximation process with B-spline networks is illustrated in one of the simulation scenarios in Section 8.6.

7.3 Trajectory Control Design In this section, a nonlinear adaptive autopilot is designed for the inertial trajectory control of the six-degrees-of-freedom, high-fidelity F-16 aircraft model as introduced in Chapter 2. The control system is decomposed in four backstepping feedback loops, see Figure 7.5, constructed using a single control Lyapunov function. The aerodynamic force and moment functions of the aircraft model are assumed not to be exactly known during the control design phase and will be approximated online. B-spline networks are used to partition the flight envelope into multiple connecting regions in the manner that was discussed in the previous section. In each partition a locally valid linear-in-the-parameters nonlinear aircraft model is defined, of which the unknown parameters are adapted online by Lyapunov based update laws. These update laws take aircraft state and input constraints into account so that they do not corrupt the parameter estimation process. The performance of the proposed control system will be assessed in numerical simulations of several types of trajectories at different flight conditions. Simulations with a locked control surface and uncertainties in the aerodynamic forces and moments are also included.

7.3

TRAJECTORY CONTROL DESIGN

129

The section is outlined as follows. First, a motivation for applying the proposed control approach to this problem is given. After that, the nonlinear dynamics of the aircraft model are written in a suitable form for the control design in Section 7.3.2. In Section 7.3.3 the adaptive control design is presented as decomposed in four feedback loops, after which the identification process with the B-spline neural networks is discussed in Section 7.3.4. Section 7.4 validates the performance of the control law using numerical c . Finally, a summary of the results and simulations performed in MATLAB/Simulink the conclusions are given in Section 7.5.

7.3.1 Motivation In recent years the advancements in micro-electronics and precise navigation systems have led to an enormous rise of interest [43] in (partially) automated unmanned air vehicle (UAV) designs for a large variety of missions in both civil [160, 209] and military aviation [200]. Inertial trajectory control is essential for these UAVs, since they are usually required to follow predetermined paths through certain target points in the threedimensional air space [29, 96, 97, 151, 171, 172, 184]. Other situations where trajectory control is desired include formation control, aerial refueling and autonomous landing maneuvers [68, 155, 156, 168, 182, 207]. This has lead to a lot of literature dedicated to formation and flight path control for UAVs, but also for other types of (un)manned vehicles [80, 146]. Two different approaches can be distinguished in the design of these trajectory control systems. The most popular approach is to separate the guidance and control laws: A given reference trajectory is converted by the guidance laws to velocity and attitude commands for the actual flight controller which in turn generates the actuator signals [155, 156, 172]. For example, in [172] it is assumed a flight path angle control autopilot exists and a guidance law is constructed that takes heading rate and velocity constraints of the vehicle into account. The same holds for the formation control schemes of [155, 156]. Usually the assumption is made that the autopilot response to heading and airspeed commands is first order in nature to simplify the design. The other design approach is to integrate the guidance and control laws into one system to achieve better stability guarantees and improve performance. For instance, [96] utilizes an integrated guidance and control approach to trajectory tracking where the trimmed flight conditions along the reference trajectory are the command input to the tracking controllers. In [184] a combination of sliding mode control and adaptive control is used

Figure 7.5: Four loop feedback design for flight path control.

130

F-16 TRAJECTORY CONTROL DESIGN

7.3

for flight path control of an F/A-18 model. In this section, a Lyapunov-based adaptive backstepping approach is used to design a flight path controller for a nonlinear, high-fidelity F-16 model in three-dimensional air space. It is assumed that the aerodynamic force and moment functions of the model are not known exactly and that they can change during flight due to structural damage or control surface failures. There is plenty of literature available on adaptive backstepping designs for the control of aircraft and missiles; see e.g. [58, 61, 76, 109, 176, 183]. However, most of these designs consider control of the aerodynamic angles µ, α and β or the angular rates. The design of a trajectory controller is much more complicated since the system to be controlled is of a higher relative degree. This presents difficulties for a standard adaptive backstepping design since the derivatives of the intermediate control variables have to be calculated analytically in each design step. Calculating the derivatives of the intermediate control variables in each design step leads to a rapid ‘explosion of terms’. This phenomenon is the main motivation for the authors of [184] to select a sliding mode design for the outer feedback loops: It simplifies the design considerably. Another disadvantage of standard backstepping designs and indeed most feedback linearizing designs is that the contribution of the control surface deflections to the aerodynamic forces cannot be taken into account. For these reasons the constrained adaptive backstepping approach as explained in Section 4.3 is used in this chapter. Furthermore, to simplify the approximation of the unknown aerodynamic force and moment functions and to reduce computational load, the flight envelope is partitioned into multiple, connecting operating regions as discussed in the previous section.

7.3.2 Aircraft Model Description The aircraft model used in this study is that of an F-16 fighter aircraft with geometry and aerodynamic data as reported in Section 2.4. The control inputs of the model are the elevator, ailerons, rudder and leading edge flaps, as well as the throttle setting. The leading edge flaps are controlled separately and will not be used for the control design. The control surface actuators are modeled as first-order low pass filters with rate and magnitude limits as given in Table 2.1. In Section 2.2.4 a representation of the equations of motion for the F-16 model was given. These differential equations can be rewritten in the following form, which is more suitable for the trajectory control problem: X˙ 0

1 m

=



 VT cos χ cos γ  VT sin χ cos γ  −VT sin γ

(7.7)

 (−D + FT cos α cos β) − g sin γ 1  X˙ 1 =  mVT cos γ (L sin µ + Y cos µ + FT (sin α sin µ − cos α sin β cos µ)) g 1 mVT (L cos µ − Y sin µ + FT (cos α sin β sin µ + sin α cos µ)) − VT cos γ (7.8) 

7.3

TRAJECTORY CONTROL DESIGN

X˙ 2

=

+



cos α cos β

131

 sin α 0 cos β 1 − sin α tan β  X3 0 − cos α

 − cos α tan β sin α  0 sin γ + cos γ sin µ tan β γ sin µ  0 − coscos β 0 cos γ cos µ

¯ + c4 (c1 r + c2 p) q + c3L 2 2 ˙  X3 = c5 pr − c6 p − r q + c7 ¯ + c9 (c8 p − c2 r) q + c4 L 

 cos µ tan β cos µ  X˙ 1 − cos β − sin µ

  ¯ + qHeng N  ¯ − rHeng  M  ¯ + qHeng N

(7.9)

(7.10)

where X0 = (x, y, z)T , X1 = (VT , χ, γ)T , X2 = (µ, α, β)T , X3 = (p, q, r)T and the definition of the inertia terms ci , i = 1, ..., 9 is given in Section 2.2.4. These twelve differential equations are sufficient to describe the complete motion of the rigid-body aircraft. Other states such as the attitude angles φ, θ and ψ are functions of X = (X0T , X1T , X2T , X3T )T .

7.3.3 Adaptive Control Design In this section the aim is to develop an adaptive guidance and control system that asymptotically tracks a smooth, prescribed inertial trajectory Y ref = (xref , y ref , z ref )T with position states X0 = (x, y, z)T . Furthermore, the sideslip angle β has to be kept at zero to enable coordinated turning. It is assumed that the reference trajectory Y ref = (xref , y ref , z ref )T satisfies x˙ ref

=

V ref cos χref

ref

=

V ref sin χref



(7.11)

with V ref , χref , z ref and their derivatives continuous and bounded. It is also assumed ¯ M ¯,N ¯ are that the components of the total aerodynamic forces L, Y, D and moments L, uncertain, so these will have to be estimated. The available controls are the control surface deflections (δe , δa , δr )T and the engine thrust FT . The Lyapunov-based control design based on Section 4.3 is done in four feedback loops, starting at the outer loop. Inertial Position Control The outer loop feedback control design is initiated by transforming the tracking control problem into a regulation problem:     z01 cos χ sin χ 0  Z0 =  z02  =  − sin χ cos χ 0  X0 − Y ref , (7.12) z03 0 0 1

132

F-16 TRAJECTORY CONTROL DESIGN

7.3

where a new rotating reference frame for control, that is fixed to the aircraft and aligned with the horizontal component of the velocity vector, is introduced [168, 172]. Differentiating (7.12) gives   VT + z02 χ˙ − V ref cos(χ − χref ) (7.13) Z˙ 0 =  −z01 χ˙ + V ref sin(χ − χref )  . z˙ ref − VT sin γ

The idea is to design virtual control laws for the flight path angles χ, γ and the total airspeed VT to control the position errors Z0 . However, from (7.13) it is clear that it is not yet possible to do something about z02 in this design step. The virtual control laws are selected as  V des,0 = V ref cos χ − χref − c01 z01 (7.14)   ref c03 z03 − z˙ γ des,0 = arcsin , −π/2 < γ < π/2, (7.15) VT

where c01 , c03 > 0 are the control gains. The actual, implementable virtual control signals V des and γ des as well as their derivatives V˙ des and γ˙ des are obtained by filtering the virtual signals with a second order low pass filter with optional magnitude and rate limits in place. As an example the state space representation of the filter for V des,0 is given by " #   q2 q˙1 (t)    (7.16) = ω2 q˙2 (t) 2ζV ωV SR 2ζVVωV [SM (V des,0 ) − q1 ] − q2  des    V q1 = (7.17) q2 V˙ des where SM (·) and SR (·) represent the magnitude and rate limit functions as given in Appendix C. These functions enforce the state VT to stay within the defined limits. Note that if the signal V des,0 is bounded, then V des and V˙ des are also bounded and continuous signals. When the magnitude and rate limits are not in effect the transfer function from V des,0 to V des is given by V des (s) ωv2 = V des,0 (s) s2 + 2ζv ωv + ωv2

(7.18)

and the error V des,0 − V des can be made arbitrarily small by selecting the bandwidth of the filter sufficiently large. Flight Path Angle and Airspeed Control In the second loop the objective is to steer VT and γ to their desired values as determined in the previous section. Furthermore, the heading angle χ has to track the reference signal χref , while the tracking error z02 is also regulated to zero. The available (virtual) controls

7.3

TRAJECTORY CONTROL DESIGN

133

in this step are the aerodynamic angles µ and α as well as the thrust FT . Note that the aerodynamic forces also depend on the control surface deflections U = (δe , δa , δr )T . These forces are quite small, since the surfaces are primarily moment generators. However, since the current control surface deflections will be available from the command filters that are used in the inner design loop, they can be taken into account in the control design. The relevant equations of motion for this design step are given by X˙ 1

=

A1 F1 (X, U ) + B1 G1 (X, U, X2 ) + H1 (X)

(7.19)

where A1

B1

=

=

   0 0 −VT 1  cos µ 0 0  , H1 =  cos γ mVT 0 − sin µ 0   V cos α cos β 0 0 1  T 1 0 , 0 cos γ mVT 0 0 1

 −g sin γ , cos α sin β cos µ g cos α sin β sin µ − VT cos γ

T mVT cos γ

T mVT

are known (matrix) functions, and     FT L(X, U ) F1 =  Y (X, U )  , G1 =  (L(X, U ) + FT sin α) sin µ  D(X, U ) (L(X, U ) + FT sin α) cos µ

are functions containing the uncertain aerodynamic forces. Note that the intermediate control variables α and µ do not appear affine in the X1 -subsystem, which complicates the design somewhat. Since the control objective in this step is to track the smooth reference signal X1des = (V des , χref , γ des )T with X1 = (VT , χ, γ)T , the tracking errors are defined as   z11 Z1 =  z12  = X1 − X1des . (7.20) z13 To regulate Z1 and z02 to zero simultaneously, the following equation needs to be satisfied [98]   −c11 z11 ˆ 1 (X, U, X2 ) =  −V ref (c02 z02 + c12 sin z12 )  − A1 Fˆ1 − H1 + X˙ des , (7.21) B1 G 1 −c13 z13

where Fˆ1 is the estimate of F1 and where  FT    ˆ ˆ L (X, U ) + L (X, U )α + F sin α sin µ  ˆ α T G1 (X, U, X2 ) =   0  ˆ 0 (X, U ) + L ˆ α (X, U )α + FT sin α cos µ L

   

(7.22)

134

7.3

F-16 TRAJECTORY CONTROL DESIGN

ˆ ˆ 0 (X, U ) + L ˆ α (X, U )α. with the estimate of the lift force decomposed as L(X, U) = L The estimate of the aerodynamic forces Fˆ1 is defined as Fˆ1

ˆ F1 = ΦTF1 (X, U )Θ

(7.23)

ˆ F1 is a vector with unknown constant where ΦTF1 is the known regressor function and Θ parameters. It is assumed that there exists a vector ΘF1 such that F1

=

ΦTF1 (X, U )ΘF1 .

(7.24)

˜ F1 = ΘF1 − Θ ˆ F1 . The next step This means the estimation error can be defined as Θ is to determine the desired values αdes and µdes . The right-hand side of (7.21) is entirely known, so the left-hand side can be determined and the desired values extracted. Introducing the coordinate transformation   ˆ 0 (X, U ) + L ˆ α (X, U )α + FT sin α cos µ x ≡ L (7.25)   ˆ 0 (X, U ) + L ˆ α (X, U )α + FT sin α sin µ, y ≡ L (7.26) which can be seen as a transformation from the two-dimensional polar coordinates   ˆ ˆ L0 (X, U ) + Lα (X, U )α + T sin α and µ to cartesian coordinates x and y. The de-

sired signals (FTdes,0 , y0 , x0 )T are given by

   −c11 z11 FTdes,0 B1  y0  =  −V ref (c02 z02 + c12 sin z12 )  − A1 Fˆ1 − H1 + X˙ 1des , (7.27) −c13 z13 x0 

thus the virtual control signals are equal to q ˆ α (X, U )αdes,0 = x2 + y 2 − L ˆ 0 (X, U ) − FT sin α L 0 0

(7.28)

and

µdes,0

   y0  arctan     x0      arctan xy00 + π   = y0 arctan   x0 − π   π    2π  −2

if

x0 > 0

if

x0 < 0

and y0 ≥ 0

if if if

x0 < 0 x0 = 0 x0 = 0

and y0 < 0 and y0 > 0 and y0 < 0

.

(7.29)

Filtering the virtual signals to account for magnitude, rate and bandwidth limits will give the implementable virtual controls αdes , µdes and their derivatives. The sideslip angle command was already defined as β ref = 0, thus X2des = (µdes , αdes , 0)T and its derivative are completely defined.

7.3

TRAJECTORY CONTROL DESIGN

135

However, care must be taken since the desired virtual control µdes,0 is undefined when both x0 and y0 are uncontrollable. This  equal to zero making the system momentarily  ˆ ˆ sign change of L0 (X, U ) + Lα (X, U )α + FT sin α can only occur at very low or negative angles of attack. This situation was not encountered during the maneuvers simulated in this study. To solve the problem altogether, the designer could measure the rate of change for x0 and y0 and device a rule base set to change sign when these terms approach zero. Furthermore, problems will also occur at high angles of attack when the ˆ α will become smaller and eventually change sign. Possible control effectiveness term L solutions include limiting the angle of attack commands using the command filters or proper trajectory planning to avoid high angle of attack maneuvers. Aerodynamic Angle Control Now that the reference signal X2des = (µdes , αdes , β ref )T and its derivative have been found, the next feedback loop can be designed. The available virtual controls in this step are the angular rates X3 . The relevant equations of motion for this part of the design are given by X˙ 2

=

A2 F1 (X, U ) + B2 (X)X3 + H2 (X)

(7.30)

where A2

B2

H2

=



(tan β + tan γ sin µ)

1  mVT  cos α

−1 cos β

0

 tan γ cos µ 0 0 0  1 0 

sin α 0 cos β cos β  = − cos α tan β 1 − sin α tan β  sin α 0 cos α   g T0 − VT tan β cos γ cos µ 1  g sin α , FT cos = β + VT cos γ cos µ mVT −FT cos α cos β + VgT cos γ sin µ

are known (matrix) functions with

T0 = FT (sin α tan γ sin µ + sin α tan β − cos α sin β tan γ cos µ) . The tracking errors are defined as Z2 = X2 − X2des .

(7.31)

To stabilize the Z2 -subsystem a virtual feedback control X3des,0 is defined as B2 X3des,0 = −C2 Z2 − A2 Fˆ1 − H2 + X˙ 2des ,

C2 = C2T > 0.

(7.32)

The implementable virtual control, i.e. the reference signal for the inner loop, X3des and its derivative are again obtained by filtering the virtual control signal X3des,0 with a second order command limiting filter.

136

7.3

F-16 TRAJECTORY CONTROL DESIGN

Angular Rate Control In the fourth step, an inner feedback loop for the control of the body-axis angular rates X3 = (p, q, r)T is constructed. The control inputs for the inner loop are the control surface deflections U = (δe , δa , δr )T . The dynamics of the angular rates can be written as X˙ 3 = A3 (F3 (X, U ) + B3 (X)U ) + H3 (X)

(7.33)

where 

c3 A3 =  0 c4

0 c7 0

 c4 0 , c9

are known (matrix) functions, and   ¯0 L ¯0  , F3 =  M ¯0 N



 (c1 r + c2 p) q H3 =  c5 pr − c6 (p2 − r2 )  (c8 p − c2 r) q ¯ δe L ¯ δe B3 =  M ¯δe N 

¯ δa L ¯ δa M ¯δa N

 ¯ δr L ¯ δr  M ¯δr N

are unknown (matrix) functions that have to be approximated. Note that for a more convenient presentation the aerodynamic moments have been decomposed, e.g. ¯ (X, U ) = M

¯ 0 (X, U ) + M ¯ δe δe + M ¯ δa δa + M ¯ δr δr M

(7.34)

¯ 0 (X, U ). where the higher order control surface dependencies are still contained in M The control objective in this feedback loop is to track the reference signal X3des = (pref , q ref , rref )T with the angular rates X3 . Defining the tracking errors Z3 = X3 − X3des

(7.35)

and taking the derivatives results in Z˙ 3 = A3 (F3 (X, U ) + B3 (X)U ) + H3 (X) − X˙ 3des .

(7.36)

To stabilize the system of (7.36) the desired control U 0 is defined as ˆ3 U 0 = −C3 Z3 − A3 Fˆ3 − H3 + X˙ des , A3 B 3

C3 = C3T > 0

(7.37)

ˆ3 are the estimates of the unknown nonlinear aerodynamic moment funcwhere Fˆ3 and B tions F3 and B3 , respectively. The F-16 model is not over-actuated, i.e. the B3 matrix is square. If this is not the case some form of control allocation would be required, for instance the QP method used in the flight control problem discussed in the previous chapter. The estimates are defined as Fˆ3 ˆ3j B

ˆ F3 = ΦTF3 (X, U )Θ ˆ B3j for j = 1, ..., 3 = ΦTB3j (X)Θ

(7.38)

7.3

137

TRAJECTORY CONTROL DESIGN

ˆ F3 , Θ ˆ B3j are vectors with unwhere ΦTF3 , ΦTB3j are the known regressor functions and Θ ˆ3j represents the ith column of B ˆ3 . It is known constant parameters, also note that B assumed that there exist vectors ΘF3 , ΘB3j such that F3

= ΦTF3 (X, U )ΘF3

B3j

= ΦTB3j (X)ΘB3j .

(7.39)

˜ F3 = ΘF3 − Θ ˆ F3 and Θ ˜ B3j = This means the estimation errors can be defined as Θ ˆ B3j . The actual control signal U is found by applying a command filter similar ΘB3j − Θ to (7.16) to U 0 . Update Laws and Stability Properties The static part of the trajectory control design has been completed. In this section the stability properties of the control law are discussed and dynamic update laws for the unknown parameters are derived. Define the control Lyapunov function V

= + +

  1 2 − 2 cos z12 T 2 2 T T Z0 Z0 + z11 + + z13 + Z2 Z2 + Z3 Z3 2 c02     1 ˜ TF Γ−1 Θ ˜ F1 + trace Θ ˜ TF Γ−1 Θ ˜ F3 trace Θ 1 F1 3 F3 2 3   1X ˜ T Γ−1 Θ ˜ trace Θ B3j B3j B3j , 2 j=1

(7.40)

with the update gains matrices ΓF1 = ΓTF1 > 0, ΓF3 = ΓTF3 > 0 and ΓB3j = ΓTB3j > 0. Taking the derivative of V along the trajectories of the closed-loop system gives V˙

=

    2 −c01 z01 + z02 z01 χ˙ + VT − V des,0 z01 + z02 −z01 χ˙ + V ref sin z12     c12 2 2 −c03 z03 − VT sin γ − sin γ des,0 z03 − c11 z11 − V ref sin z12 z02 + sin2 z12 c02    2 ˜ F1 + B1 G1 (X2 ) − G ˆ 1 (X2 ) −c13 z13 + Z1T A1 ΦTF1 Θ   ˆ 1 (X2 ) − G ˆ 1 (X2des,0 ) +Z1T B1 G   ˜ F1 + Z2T B2 X3 − X3des,0 −Z2T C2 Z2 + Z2T A2 ΦTF1 Θ (7.41) ! 3 X T  ˜ F3 + ˆ3 U − U 0 ˜ B Ui + Z3T A3 B −Z3T C3 Z3 + Z3T A3 ΦTF Θ ΦB Θ 3

3j

3j

j=1

3     X   ˙ T −1 ˜ ˆ˙ TF Γ−1 ˜ ˆ ˆ˙ TB Γ−1 ˜ B3j . −trace Θ Θ − trace Θ Γ Θ − trace Θ Θ F F F3 F3 1 3 1 F1 3j B3j j=1

138

7.3

F-16 TRAJECTORY CONTROL DESIGN

To cancel the terms depending on the estimation errors in (7.41), the update laws are selected as ˆ˙ F1 Θ ˆ˙ F3 Θ ˆ˙ B3j Θ

= ΓF1 ΦF1 AT1a Z1 + AT2 Z2 = ΓF3 ΦF3 AT3 Z3



 = PB3j ΓB3j ΦB3j AT3 Z3 Uj ,

(7.42)

  ˜ F1 = A1 ΦT Θ ˜ F1 + B1 G1 (X2 ) − G ˆ 1 (X2 ) . The update laws for B ˆ3 with A1a ΦTF1 Θ F1 include a projection operator to ensure that certain elements of the matrix do not change sign and full rank is maintained always. For most elements the sign is known based on physical principles. Substituting the update laws in (7.41) leads to V˙

=

c12 2 2 2 2 −c01 z01 − c03 z03 − c11 z11 − V ref sin2 z12 − c13 z13 − Z2T C2 Z2 − Z3T C3 Z3 c02       ˆ 1 (X2 ) − G ˆ 1 (X2des,0 ) + VT − V des,0 z01 − VT sin γ − sin γ des,0 z03 + Z1T B1 G    ˆ3 U − U 0 , +Z2T B2 X3 − X3des,0 + Z3T A3 B (7.43)

where the first line is already negative semi-definite which is needed to prove stability in the sense of Lyapunov. Since the Lyapunov function V (7.40) is not radially unbounded, only local asymptotic stability can be guaranteed [98]. This is sufficient for the domain of operation considered here if the control law is properly initialized to ensure z12 ≤ ±π/2. However, the derivative expression of V also includes indefinite error terms due to the tracking errors and due to the command filters used in the design. As mentioned before, when no rate or magnitude limits are in effect the difference between the input and output of the filters can be made small by selecting the bandwidth of the filters sufficiently larger than the bandwidth of the input signal. Also, when no limits are in effect and the small, bounded difference between the input and output of the command filters is neglected, the feedback controller designed in the previous sections will converge the tracking errors to zero. Naturally, when control or state limits are in effect the system will in general not track the reference signal asymptotically. A problem with adaptive control is that this can lead to corruption of the parameter estimation process, since the tracking errors that are driving this process are no longer caused by the function approximation errors alone. To solve this problem a modified definition of the tracking errors is used in the update laws where the effect of the magnitude and rate limits has been removed. Define the modified tracking errors Z¯1 = Z1 − Ξ1 Z¯2 = Z2 − Ξ2 Z¯3 = Z3 − Ξ3

(7.44)

7.3

TRAJECTORY CONTROL DESIGN

139

with the linear filters Ξ˙ 1

=

Ξ˙ 2

=

Ξ˙ 3

=

  ˆ 1 (X, U, X2 ) − G ˆ 1 (X, U, X des,0 ) −C1 Ξ1 + B1 G 2   des,0 −C2 Ξ2 + B2 X3 − X3  ˆ3 U − U 0 . −C3 Ξ3 + A3 B

(7.45)

The modified errors will still converge to zero when the constraints are in effect. The resulting update laws are given by ˆ˙ F1 Θ ˆ˙ F3 Θ ˆ˙ B3j Θ

=

ΓF1 ΦF1 AT1a Z¯1 + AT2 Z¯2

=

ΓF3 ΦF3 AT3 Z¯3

=

PB3j

ΓB3j ΦB3j AT3 Z¯3 Uj

 

(7.46) .

To better illustrate the structure of the control system a scheme of the adaptive inner loop controller is shown in Figure 7.6.

X 3ref

Z3

A3 Bˆ3U 0 = −C3 Z 3 − A3 Fˆ3 − H 3 + Xɺ 3ref

U0

Command Limiting Filter

U

Xɺ 3 = A3 ( F3 + B3U ) + H 3

Ξɺ 3 = −C3Ξ 3 + A3 Bˆ3 (U − U 0 )

ˆɺ = Γ Φ AT Z Θ F3 F3 F3 3 3 ˆɺ = Γ Φ AT Z U Θ B 3i F 3i F 3i 3 3 i

Figure 7.6: Inner loop control system

7.3.4 Model Identification To simplify the approximation of the unknown aerodynamic force and moment functions, and thereby reducing computational load, the flight envelope is partitioned into multiple, connecting operating regions with a locally valid linear-in-the-parameters model defined in each region. B-spline networks are used to interpolate between the local nonlinear models to ensure smooth transitions. In the previous section parameter update laws (7.46)

140

F-16 TRAJECTORY CONTROL DESIGN

7.3

were defined for the unknown aerodynamic functions which were written as Fˆ1 Fˆ3 ˆ3j B

ˆ F1 = ΦTF1 (X, U )Θ ˆ F3 = ΦT (X, U )Θ =

F3 ˆ B3j . ΦTB3j (X)Θ

(7.47)

These unknown vectors and known regressor vectors can now be further defined. The total force approximations are defined as   c ˆ = L0 + q¯S CˆL0 (α, β) + CˆLα (β, δe )α + CˆLq (α) q¯ L + CˆLδe (α, δe )δe 2VT  rb pb Yˆ = Y0 + q¯S CˆY0 (α, β) + CˆYp (α) + CˆYr (α) 2VT 2VT  + CˆYδa (α, β)δa + CˆYδr (α, β)δr (7.48)   c ˆ = D0 + q¯S CˆD0 (α, β) + CˆDq (α) q¯ D + CˆDδe (α, δe )δe , 2VT and the moment approximations  pb rb ˆ ¯ ¯ L = L0 + q¯S CˆL¯ 0 (α, β) + CˆL¯ p (α) + CˆL¯ r (α) 2VT 2VT  +CˆL¯ δa (α, β)δa + CˆL¯ δr (α, β)δr   c ˆ¯ = M ¯ 0 + q¯S CˆM¯ (α, β) + CˆM¯ (α) q¯ ˆM¯ (α, δe )δe M + C 0 q δe 2VT  ˆ¯ = N ¯0 + q¯S CˆN¯ (α, β) + CˆN¯ (α) pb + CˆN¯ (α) rb N r 0 p 2VT 2VT  +CˆN¯δa (α, β)δa + CˆN¯δr (α, β)δr ,

(7.49)

¯ 0, M ¯ 0 and N ¯0 represent the known, nominal values of the aerodywhere L0 , Y0 , D0 , L namic forces and moments. Note that the approximation polynomial structures are somewhat different from two example structures in Section 7.2. Estimating the aerodynamic forces in a wind-axes reference frame is more natural for this control problem. Furthermore, an additional term can be found in the lift force approximation for the lift curve, since this term is needed in the flight path control loop. These approximations do not account for asymmetric failures that will introduce coupling of the longitudinal and lateral motions of the aircraft. If a failure occurs which introduces a parameter dependency that is not included in the approximation, stability can no longer be guaranteed. However, the failure scenarios considered in the next section are limited to symmetric structural damage or actuator failure scenarios. Therefore, these uncertainties can all be modeled with the above approximation structures. The

7.4

NUMERICAL SIMULATION RESULTS

141

total nonlinear function approximations are divided into simpler linear-in-the-parameter nonlinear coefficient approximations, e.g. CˆL0 (α, β) = ϕTCL0 (α, β)θˆCL0 ,

(7.50)

where the unknown parameter vector θˆCL0 contains the B-spline network weights, i.e. the unknown parameters, and ϕCL0 is a regressor vector containing the B-spline basis functions. All other coefficient estimates are defined in similar fashion. In this case a two-dimensional network is used with input nodes for α and β. Different scheduling parameters can be selected for each unknown coefficient. Third order B-splines spaced 2.5 degrees and three scheduling variables, α, β and δe , have been used to partitioning the flight envelope. With these approximators a sufficient model accuracy was obtained. Following the notation of (7.50) the estimates of the aerodynamic forces and moments can be written as ˆ L

ˆ L, = ΦTL (α, β, δe )Θ



ˆY , = ΦTY (α, β, δe )Θ

ˆ D

ˆ D, ΦTD (α, β, δe )Θ

=

ˆ¯ = ΦT (α, β, δ )Θ ˆ L, L ¯ ¯ e L ¯ˆ = ΦT (α, β, δ )Θ ˆ ¯ M ¯ M

ˆ¯ = N

e

M,

(7.51)

ˆ N¯ , ΦTN¯ (α, β, δe )Θ

which is a notation equivalent to the one used in (7.47). Therefore, the update laws (7.46) can be used adapt the B-spline network weights. However, the update laws have not yet been robustified against non-parametric uncertainties. In this study dead zones and e-modification are used to protect the estimated parameters from drifting.

7.4 Numerical Simulation Results This section presents the simulation results from the application of the adaptive flight path controller to the high-fidelity, six-degrees-of-freedom F-16 model of Section 7.3.2. Both the adaptive flight control law and the aircraft model are written as C S-functions c in MATLAB/Simulink . C S-functions are much more efficient than the Matlab Sfunctions used for the simplified F-18 model of Chapter 6, which means that the simulations can easily be performed in real-time despite the increase complexity of aircraft model and controller. The tracking error driven update laws now have around 17000 states, but only a small number of updates is non-zero at each time step due to the local model approximation structure. The simulations are performed at three different starting flight conditions with the following trim conditions: 1. h = 5000 m, VT = 200 m/s, α = θ = 2.774 deg; 2. h = 0 m, VT = 250 m/s, α = θ = 2.406 deg; 3. h = 2500 m, VT = 150 m/s, α = θ = 0.447 deg; where h is the altitude of the aircraft, and all other trim states are equal to zero. Furthermore, two maneuvers are considered:

142

F-16 TRAJECTORY CONTROL DESIGN

7.4

1. a climbing helical path; 2. a reconnaissance and surveillance maneuver. This last maneuver involves turns in both directions and some altitude changes. The simulations of both maneuvers last 300 seconds. The reference trajectories are generated with second order linear filters to ensure smooth trajectories. The onboard model in the nominal case contains the low-fidelity data, which means the online model identification has to compensate for any (small) differences between the low-fidelity data of the onboard model and high-fidelity data of the aircraft model. To properly evaluate the effectiveness of the online model identification, all maneuvers will also be performed with a ±30% deviation in all aerodynamic stability and control derivatives used by the controller, i.e. it is assumed that the onboard model is very inaccurate. Finally, the same maneuvers are also simulated with a lockup at ±10 degrees of the left aileron.

7.4.1 Controller Parameter Tuning The tuning process starts with the selection of the gains of the static control law and the bandwidths of the command filters. Lyapunov stability theory only requires the control gains to be larger than zero, but it is natural to select the gains of the inner loop largest. Larger gains will of course result in smaller tracking errors, but at the cost of more control effort. It is possible to derive certain performance bounds that can serve as guidelines for tuning, see e.g. [121]. However, getting the desired closed-loop response is still an extensive trial-and-error procedure. The control gains were selected as c01 = 0.1, c02 = 1.10−5, c03 = 0.5, c11 = 0.01, c12 = 2.5, c13 = 0.5, C2 = diag(1, 1, 1) and C3 = diag(2, 2, 2). The bandwidths of the command filters for the actual control variables δe , δa , δr are chosen equal to the bandwidths of the F-16 model actuators. The outer loop filters have the smallest bandwidths. The selection of the other bandwidths is again trial-and-error. A higher bandwidth in a certain feedback loop will result in more aggressive commands to the next feedback loop. All damping ratios are equal to 1.0. It is possible to add magnitude and rate limits to each of the filters. In this study magnitude limits on the aerodynamic bank angle µ and the flight path angle γ are used to avoid singularities in the control laws. Rate and magnitude limits, equal to the ones of the actuators, are enforced on the actual control variables. The selected command filter parameters can be found in Table 7.2. As soon as the controller gains and command filters parameters have been defined, the update law gains can be selected. Again the theory only requires that the gains should be larger than zero. Larger update gains means higher learning rates and thus more rapid changes in the B-spline network weights. It is not difficult to find a gain selection that results in a good performance at all flight conditions and with the failures considered in this section. This is probably because all flight path maneuvers are relatively slow and smooth.

7.4

NUMERICAL SIMULATION RESULTS

143

Table 7.2: Command filter parameters.

Command variable V des γ des µdes αdes pdes q des rdes δe δa δr

ωn (rad/s) 5 3 8 8 20 20 10 40.4 40.4 40.4

mag. limit − ± 80 deg ± 80 deg − − − − ± 25 deg ± 21.5 deg ± 30 deg

rate limit − − − − − − − ± 60 deg/s ± 80 deg/s ± 120 deg/s

7.4.2 Maneuver 1: Upward Spiral In this section the results of the numerical simulations of the first test maneuver, the climbing helical path, are discussed. For each of the three flight conditions five cases are considered: nominal, the aerodynamic stability and control derivatives used in the control law perturbed with +30%, and with −30% w.r.t. to the real values of the model, a lockup of the left aileron at +10 degrees, and a lockup at −10 degrees. No actuator sensor information is used. In Figure D.6 of Appendix D.2 the results of the simulation without uncertainty starting at flight condtion 1 are plotted. The maneuver involves a climbing spiral to the left with an increase in airspeed. It can be seen that the control law manages to track the reference signal very well and that closed-loop tracking is achieved. The sideslip angle does not become any larger than ±0.02 deg. The aerodynamic bank angle µ does reach the limit set by the command filter, but this has no consequences for the performance. The use of dead-zones ensures that the parameter update laws are indeed not updating during this maneuver without any uncertainties. The responses at the two other flight conditions are virtually the same, although less thrust is needed due to the lower altitude of flight condition 2 and the lower airspeed of flight condition 3. The other control surfaces are also more efficient. This is illustrated in Tables 7.3 to 7.5, where the mean absolute values (MAVs) of the outer loop tracking errors, control surface deflections and thrust can be found. Plots of the parameter estimation errors are not included. However, the errors converge to constant values, but not to zero as is common with Lyapunov based update laws. The response of the closed-loop system during the same maneuver starting at flight condition 1, but with +30% uncertainty in the aerodynamic coefficients, is shown in Figure D.7. It can be observed that the tracking errors of the outer loop are now much larger, but in the end the steady-state tracking error converges to zero. The sideslip angle still

144

7.4

F-16 TRAJECTORY CONTROL DESIGN

Table 7.3: Maneuver 1 at flight condition 1: Mean absolute values of the tracking errors and control inputs. Case nominal +30% uncertainty −30% uncertainty +10% deg. locked left aileron −10% deg. locked left aileron

(z01 , z02 , z03 )M AV (m) (0.33,0.24,0.24) (4.56,3.75,1.07) (5.15,3.88,1.10) (0.39,0.32,0.78) (0.31,0.25,1.12)

(δe , δa , δr )M AV (deg) (4.63, 0.12, 0.10) (4.59, 0.13, 0.11) (4.68, 0.16, 0.11) (4.63, 0.56, 0.74) (4.63, 0.46, 1.16)

T M AV (N) 5.59e+04 5.57e+04 5.62e+04 5.59e+04 5.59e+04

remains within 0.02 degrees. Some small oscillations are visible in Figure D.7J, but these stay well within the rate and magnitude limits of the actuators. In Tables 7.3 to 7.5 the MAVs of the tracking errors and control inputs are shown for all flight conditions with this uncertainty. As was already seen in the plots, the average tracking errors increase, but the magnitude of the control inputs stays approximately the same. The same simulations have been performed for a −30% perturbation in the stability and control derivatives used by the control law, the results are also shown in the tables. It appears that underestimated initial values of the unknown parameters lead to larger tracking errors than overestimates for this maneuver. Finally, the maneuver is performed with the left aileron locked at ±10 degrees, i.e. δadamaged = 0.5(δa ± 10π 180 ). Figure D.8 shows the response at flight condition 3 with the aileron locked at −10 degrees. Except for some small oscillations in the response of roll rate p and aileron deflection δa at the start of the simulation, there is no real change in performance visible. This is confirmed by the numbers of Table 7.5. However, from Tables 7.3 and 7.4 it can be observed that aileron and rudder deflections become larger for both locked aileron failure cases, while tracking performance does hardly decline. Table 7.4: Maneuver 1 at flight condition 2: Mean absolute values of the tracking errors and control inputs. Case nominal +30% uncertainty −30% uncertainty +10% deg. locked left aileron −10% deg. locked left aileron

(z01 , z02 , z03 )M AV (m) (0.30,0.23,0.21) (1.55,1.33,0.41) (2.01,1.53,0.52) (0.36,0.33,0.72) (0.30,0.28,1.01)

(δe , δa , δr )M AV (deg) (3.97, 0.14, 0.21) (3.96, 0.15, 0.23) (3.98, 0.15, 0.20) (3.97, 0.25, 1.20) (3.96, 0.40, 1.52)

T M AV (N) 3.14e+04 3.14e+04 3.14e+04 3.14e+04 3.14e+04

Table 7.5: Maneuver 1 at flight condition 3: Mean absolute values of the tracking errors and control inputs. Case nominal +30% uncertainty −30% uncertainty +10% deg. locked left aileron −10% deg. locked left aileron

(z01 , z02 , z03 )M AV (m) (0.33,0.22,0.27) (2.01,1.43,0.61) (2.16,1.49,0.77) (0.32,0.33,0.29) (0.34,0.24,0.30)

(δe , δa , δr )M AV (deg) (3.37, 0.08, 0.08) (3.40, 0.10, 0.08) (3.38, 0.09, 0.08) (3.38, 0.08, 0.09) (3.38, 0.08, 0.09)

T M AV (N) 4.41e+04 4.44e+04 4.41e+04 4.41e+04 4.41e+04

7.5

NUMERICAL SIMULATION RESULTS

145

7.4.3 Maneuver 2: Reconnaissance The second maneuver, called reconnaissance and surveillance, involves turns in both directions and altitude changes, but airspeed is kept constant. Plots of the simulation at flight condition 3 with −30% uncertainty are shown in Figure D.9. Tracking performance is again excellent and the steady-state tracking errors converge to zero. There are some small oscillations in the rudder deflection, but these are within the limits of the actuator. To provide some insight in the online estimation process, the time histories of the estimated coefficient errors are plotted in Figure D.10. The errors in the individual components of the force and moment coefficients do in general not converge to the true error values, as is expected with Lyapunov based update laws. However, the total force and moment coefficients are identified correctly which explains the good tracking performance. The MAVs of the tracking errors and control inputs are compared with the ones for the nominal case in Table 7.8. It can be observed that the average tracking errors have not increased much for this uncertainty case. The degradation of performance for the uncertainty cases is somewhat worse at the other two flight conditions as can be seen in Tables 7.6 and 7.7. The sideslip angle always remains within 0.05 degrees for all flight conditions and uncertainties. Corresponding with the results of maneuver 1 overestimation of the unknown parameters again leads to smaller tracking errors.

Table 7.6: Maneuver 2 at flight condition 1: Mean absolute values of the tracking errors and control inputs. Case nominal +30% uncertainty −30% uncertainty +10% deg. locked left aileron −10% deg. locked left aileron

(z01 , z02 , z03 )M AV (m) (0.42,0.39,0.46) (2.69,2.30,1.13) (3.02,2.40,1.12) (0.43,0.40,0.45) (0.42,0.39,0.46)

(δe , δa , δr )M AV (deg) (3.17, 0.16, 0.13) (3.16, 0.16, 0.14) (3.19, 0.18, 0.14) (3.17, 0.17, 0.16) (3.17, 0.17, 0.15)

T M AV (N) 2.25e+04 2.25e+04 2.25e+04 2.25e+04 2.25e+04

Simulations of maneuver 2 with the locked aileron are also performed. Figure D.11 shows the results for flight condition 1 with a locked aileron at +10 degrees. Some very small oscillations are again visible in the roll rate, aileron and rudder responses, but tracking performance is good and the steady-state convergence is achieved. Table 7.6 confirms that the results of the simulations with actuator failure hardly differ from the nominal one. There is only a small increase in the use of the lateral control surfaces. The same holds at the other flight conditions as can be seen in Tables 7.7 and 7.8.

146

7.5

F-16 TRAJECTORY CONTROL DESIGN

Table 7.7: Maneuver 2 at flight condition 2: Mean absolute values of the tracking errors and control inputs. Case nominal +30% uncertainty −30% uncertainty +10% deg. locked left aileron −10% deg. locked left aileron

(z01 , z02 , z03 )M AV (m) (0.58,0.49,0.34) (1.27,1.10,0.48) (1.73,1.24,0.55) (0.58,0.50,0.35) (0.59,0.51,0.34)

(δe , δa , δr )M AV (deg) (2.95, 0.18, 0.21) (2.95, 0.19, 0.22) (2.97, 0.19, 0.21) (2.95, 0.20, 0.22) (2.95, 0.22, 0.22)

T M AV (N) 1.62e+04 1.62e+04 1.61e+04 1.62e+04 1.62e+04

Table 7.8: Maneuver 2 at flight condition 3: Mean absolute values of the tracking errors and control inputs. Case nominal +30% uncertainty −30% uncertainty +10% deg. locked left aileron −10% deg. locked left aileron

(z01 , z02 , z03 )M AV (m) (0.49,0.40,0.56) (0.97,0.78,0.54) (0.97,0.56,0.85) (0.48,0.40,0.58) (0.49,0.40,0.56)

(δe , δa , δr )M AV (deg) (2.39, 0.12, 0.12) (2.39, 0.12, 0.13) (2.40, 0.13, 0.12) (2.39, 0.12, 0.13) (2.40, 0.13, 0.13)

T M AV (N) 2.33e+04 2.33e+04 2.33e+04 2.33e+04 2.33e+04

7.5 Conclusions In this chapter, a nonlinear adaptive flight path control system is designed for a highfidelity F-16 model. The controller is based on a backstepping approach with four feedback loops which are designed using a single control Lyapunov function to guarantee stability. The uncertain aerodynamic forces and moments of the aircraft are approximated online with B-spline neural networks for which the weights are adapted by Lyapunov based update laws. Numerical simulations of two test maneuvers were performed at several flight conditions to verify the performance of the control law. Actuator failures and uncertainties in the stability and control derivatives were introduced to evaluate the parameter estimation process. Several observations can be made based on the simulation results: 1. The results show that trajectory control can still be accomplished with the investigated uncertainties and failures, while good tracking performance is maintained. Compared to other nonlinear adaptive trajectory control designs found in literature, such as standard adaptive backstepping or sliding mode control in combination with feedback linearization, the approach is much simpler to apply, while the online estimation process is more robust to saturation effects. 2. The flight envelope partitioning approach used to simplify the estimation process makes real-time implementation of the adaptive control system feasible, while it also keeps the estimation process more transparent. All performed simulations c easily run real-time in MATLAB/Simulink with a standard third order solver at 100 Hz. 3. In the general case, a detailed design study is needed to define the necessary partitions and approximator structure. For the F-16 aerodynamic model earlier mod-

7.5

CONCLUSIONS

147

eling studies have already been performed and the data is already available in a suitable tabular form. 4. Tuning of the integrated update laws of the backstepping controller is, in general, a time consuming trial-and-error process, since increasing the gains can lead to unexpected closed-loop system behavior. However, the maneuvers flown with the trajectory controller are relatively slow and smooth, especially for this fighter aircraft model. This smooth maneuvering simplified the tuning of the update gains, since it was not hard to find a gain selection that provided adequate performance for all considered failure scenarios and flight conditions. However, in Chapter 6 more aggressive maneuvering with a much simpler aircraft model was considered, while finding an update gain selection that gave good performance at both flight conditions for all failure types was much more difficult, if not impossible. In the next chapter the stability and control augmentation system design for the F-16 model is considered and simulations involving more aggressive maneuvering will again be performed. Hence, update gain tuning is expected to be much more time consuming.

Chapter

8

F-16 Stability and Control Augmentation Design This chapter once again considers an adaptive flight control design for the high-fidelity F-16 model, but here a stability and control augmentation system (SCAS) is developed instead of a trajectory autopilot. This means that the flight control system must provide the pilot with the handling qualities he or she desires. Command filters are used to enforce these handling qualities and a frequency response analysis is included to verify that they have been satisfied in the nominal case. The flight envelope partitioning method, which results in multiple local models, is again used to simplify the online model identification. In the final part of the chapter the constrained adaptive backstepping based SCAS is compared with the baseline F-16 flight control system and an adaptive flight control system that makes use of a least squares identifier in several realistic maneuvers and failure scenarios. Furthermore, sensor models and time delays are introduced in the numerical simulations.

8.1 Introduction Nowadays most modern fighter aircraft are designed statically relaxed stable or even unstable in certain modes to allow for extreme maneuverability. As a result these aircraft have to be equipped with a stability and control augmentation system (SCAS) that artificially stabilizes the aircraft and provides the pilot with desirable flying and handling qualities. Briefly stated, the flying and handling qualities of an aircraft are those properties which describe the ease and effectiveness with which it responds to pilot commands in the execution of a flight task [45]. Flying qualities can be seen as being task related, while handling qualities are response related. In this chapter the constrained adaptive backstepping approach with B-spline networks 149

150

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

8.2

is used to design a SCAS for a nonlinear, high-fidelity F-16 model which satisfies the handling qualities requirements [1] across the entire flight envelope of the model. It is assumed that the aerodynamic force and moment functions of the model are not known exactly and that they can change during flight due to structural damage or control surface failures. There is plenty of literature available on adaptive backstepping designs for the control of aircraft and missiles, see e.g. [107, 183]. However, none of these publications considers the flying qualities during the controller design phase or performs a handling qualities evaluation after the design is finished. An exception is [93], where a longitudinal adaptive backstepping controller is designed for a simplified supersonic aircraft model. The controller parameters are tuned explicitly via short period handling qualities specifications [1]. The work in this chapter considers a full six degrees-of-freedom highfidelity aircraft model and enforces the handling qualities requirements with command filters during the control design process. A second adaptive SCAS is designed using the modular adaptive backstepping method with recursive least squares as detailed in Section 6.2. In this method the control law and identifier are designed as separate models as so often done for adaptive control for linear systems. Since the certainty equivalence principle does not hold in general for nonlinear systems, the modular control law has to be robustified against the time-varying character of the parameter estimates. The estimation error and the derivative of the parameter estimate are viewed as an unknown disturbance input, which are attenuated by adding nonlinear damping terms to the control law. As identifier the well-established recursive least-squares method in combination with an abrupt change detection algorithm is used. As was illustrated in Chapter 6, a potential advantage of the modular method is that the true values of the uncertain parameters can be found since the estimation is not driven by the tracking error but rather by the state of the system. Both fault tolerant SCAS systems are compared with the baseline F-16 flight control system in numerical simulations where the F-16 model suffers several types of sudden changes in the dynamic behavior. The comparison focuses on the performance, estimation accuracy, computation time and controller tuning. In the first part of the chapter both adaptive flight control designs are derived. In the second part the tuning of the controllers and the handling qualities analysis are discussed, followed by the results of the numerical c simulations in MATLAB/Simulink .

8.2 Flight Control Design A full description of the F-16 model together with all necessary data can be found in Chapter 2, the relevant equations of motion are repeated here for convenience sake: 1 V˙ T = (−D + FT cos α cos β + mg1 ) (8.1) m (−L − FT sin α + mg3 ) α˙ = qs − ps tan β + (8.2) mVT cos β (Y − FT cos α sin β + mg2 ) β˙ = −rs + (8.3) mVT

8.2

FLIGHT CONTROL DESIGN

p˙ q˙ r˙

 ¯ + c4 N ¯ + Heng q = (c1 r + c2 p) q + c3 L   ¯ − Heng r = c5 pr − c6 p2 − r2 + c7 M  ¯ + c9 N ¯ + Heng q = (c8 p − c2 r) q + c4 L

151 (8.4) (8.5) (8.6)

The goal of this study is to design a SCAS that tracks pilot commands with responses that satisfy the handling qualities, across the entire flight envelope of the aircraft, in the presence of uncertain aerodynamic parameters. The pilot commands should control the responses as follows: Longitudinal stick deflection commands angle of attack α0com , lateral stick deflection commands stability-axis roll rate p0s,com and the pedals command 0 0 the sideslip angle βcom . The total velocity command VT,com is achieved with the total engine thrust FT , which is in turn controlled with the throttle lever deflection. The commanded signals are fed through command filters to produce the signals αcom , βcom , ps,com , VT,com and their derivatives. The command filters are also used for specifying the desired aircraft handling qualities.

8.2.1 Outer Loop Design The control design procedure starts by defining the new tracking error states as     VT VT,com Z1 =  α  −  αcom  = X1 − X1,com β βcom     ps ps,com Z2 =  qs  −  qs,des  = X2 − X2,com , rs rs,des

(8.7)

(8.8)

with qs,des and rs,des the intermediate control laws that will be defined by the adaptive backstepping controller. The time derivative of Z1 can be written as   FT Z˙ 1 = A1 F1 + H1 + B11 X2 + B12  0  − X˙ 1,com , (8.9) 0 where

A1

=

H1

=



0

 0 −VT 0 0 , 1 0

1  − cos1 β mVT 0   mg1 1  α+mg3  −ps tan β + −FTVsin , T cos β m −FT cos α sin β+mg2 V

B11



0 =  0 0

 T  0 0 cos α cos β 1 0  , B12 =  0 0 −1 0

0 0 0

 0 0 , 0

152

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

8.2

are known (matrix) functions, and F1 = [L, Y, D]T is a vector containing the uncertain aerodynamic forces. Furthermore, let   FT0   0  qs,des  = B1−1 − C1 Z1 −K1 Λ1 −A1 Fˆ1 −H1 + X˙ 1,com −B11 Ξ2 , 0 rs,des (8.10) Rt where B1 = B11 + B12 and Λ1 = 0 Z¯1 (t)dt be a feedback control law with C1 = C1T > 0, K1 = K1T ≥ 0, Fˆ1 the estimate of F1 , Ξ2 and Z¯1 to be defined later. The estimate of the aerodynamic forces Fˆ1 is defined as Fˆ1

=

ˆ F1 , ΦTF1 (X, U )Θ

(8.11)

ˆ F1 is a vector with unknown constant where ΦTF1 is the known regressor function and Θ parameters. It is assumed that there exists a vector ΘF1 F1

=

ΦTF1 (X, U )ΘF1 ,

(8.12)

˜ F1 = ΘF1 − Θ ˆ F1 . Part of the feedback so that the estimation error can be defined as Θ control law (8.10) is now fed through second order low pass filters to produce the signals FT , qs,des , rs,des and their derivatives. These filters can also be used to enforce rate and magnitude limits on the signals, see the appendix of [61]. The effect that the use of these command filters has on the tracking errors can be captured with the stable linear filter   FT − FT0  0 . 0 Ξ˙ 1 = −C1 Ξ1 + B11 X2,com − X2,com + B12  (8.13) 0 Define the modified tracking errors as

Z¯i = Zi − Ξi ,

i = 1, 2.

(8.14)

8.2.2 Inner Loop Design Taking the derivative of Z2 results in Z˙ 2 = A2 (F2 + G2 U ) + H2 − X˙ 2,com T

where U = (δe , δa , δr ) is the control vector,   c3 0 c4 A2 = Ts/b  0 c7 0  , c4 0 c9     (c1 r + c2 p) q +c4 he q rs H2 =  0  α˙ + Ts/b  c5 pr − c6 p2 − r2 − c7 he r  , (c8 p − c2 r) q + c9 he q ps

(8.15)

8.2

FLIGHT CONTROL DESIGN

are known (matrix) functions, and   ¯0 L ¯0  , F2 =  M ¯0 N

¯ δe L ¯ δe  G2 = M ¯δe N 

¯ δa L ¯ δa M ¯δa N

153

 ¯ δr L ¯ δr  , M ¯δr N

are unknown (matrix) functions containing the aerodynamic moment components. Note that for a more convenient presentation the aerodynamic moments have been decomposed, e.g. ¯ (X, U ) = M ¯ 0 (X, U ) + M ¯ δe δe + M ¯ δa δa + M ¯ δr δr M (8.16) ¯ 0 (X, U ). To where the higher order control surface dependencies are still contained in M stabilize the system (8.15) the desired control U 0 is defined as ˆ 2 U 0 = −C2 Z2 −K2 Λ2 −B T Z¯1 −A2 Fˆ2 −H2 + X˙ 2,com, A2 G (8.17) 11 Rt ˆ2 where Λ2 = 0 Z¯2 (t)dt with C2 = C2T > 0, K2 = K2T ≥ 0 and where Fˆ2 and G are the estimates of the unknown nonlinear aerodynamic moment functions F2 and G2 , respectively. The estimates are defined as Fˆ2 ˆ 2j G

ˆ F2 = ΦTF2 (X, U )Θ ˆ G2j for j = 1, 2, 3 = ΦTG2j (X)Θ

(8.18) (8.19)

ˆ F2 , Θ ˆ G2j are vectors with unwhere ΦTF2 , ΦTG2j are the known regressor functions and Θ ˆ 2j represents the jth column of G ˆ 2 . It is known constant parameters, also note that G assumed that there exist vectors ΘF2 , ΘG2j such that F2

= ΦTF2 (X, U )ΘF2

G2j

= ΦTG2j (X)ΘG2j .

(8.20)

˜ F2 = ΘF2 − Θ ˆ F2 and Θ ˜ G2j = This means the estimation errors can be defined as Θ ˆ G2j . The actual control U is found by again applying command filters, as was ΘG2j − Θ also done in the outer loop design. Finally, with the definition of the stable linear filter  ˆ2 U − U 0 , Ξ˙ 2 = −C2 Ξ2 + A1 G (8.21) the static part of the control design is finished.

8.2.3 Update Laws and Stability Properties In this section the stability properties of the control law are discussed and dynamic update laws for the unknown parameters are derived. Define the control Lyapunov function V

= +

2    1 1 X ¯T ¯ ˜ TF Γ−1 Θ ˜ F1 Zi Zi + ΛTi Ki Λi + trace Θ 1 F1 2 i=1 2 ! 3   X   −1 ˜ −1 ˜ T T ˜ ˜ trace ΘF Γ ΘF2 + trace ΘG Γ ΘG2j 2

F2

2j

i=1

G2j

154

8.3

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

with the update gains matrices ΓF1 = ΓTF1 > 0, ΓF2 = ΓTF2 > 0 and ΓG2j = ΓTG2j > 0. Selecting the update laws ˆ˙ F1 Θ ˆ˙ F2 Θ ˆ˙ G2j Θ

=

ΓF1 ΦF1 AT1 Z¯1

=

ΓF2 ΦF2 AT2 Z¯2

=

PG2j ΓG2j ΦG2j AT2 Z¯2 Uj

(8.22) 

and substituting (8.10), (8.13), (8.21) and (7.37) reduces the derivative of V along the trajectories of the closed-loop system to V˙

= −Z¯1T C1 Z¯1 − Z¯2T C2 Z¯2 ,

(8.23)

which is negative semi-definite. By using Theorem 3.7 it can be shown that Z¯ → 0 as t → ∞. When the command filters are properly designed and the limits on the filters are not in effect, Z¯i will converge to the close neighborhood of Zi . If the limits are in effect the actual tracking errors Zi may increase, but the modified tracking errors Z¯i will still converge to zero and the update laws will not unlearn, since they are driven by the ˆ 2 include a projection modified tracking error definitions. Note that the update law for G operator to ensure that certain elements of the matrix do not change sign and full rank is maintained always. For most elements the sign is known based on physical principles.

8.3 Integrated Model Identification As was explained in Chapter 7, to simplify the approximation of the unknown aerodynamic force and moment functions, and thereby reducing computational load to make real-time implementation feasible, the flight envelope is partitioned into multiple, connecting operating regions. In the previous section parameter update laws (8.22) for the unknown aerodynamic functions (8.11)-(8.19) were defined. Now these unknown vectors and known regressor vectors will be further specified. The total force and moment approximations are written in the standard coefficient notation. The total nonlinear function approximations are divided into simpler linear-in-the-parameter nonlinear coefficient approximations, e.g. CˆL0 (α, β) = ϕTCL0 (α, β)θˆCL0 ,

(8.24)

where the unknown parameter vector θˆCL0 contains the network weights, i.e. the unknown parameters, and ϕCL0 is a regressor vector containing the B-spline basis functions. All other coefficient estimates are defined in similar fashion. In this case a twodimensional network is used with input nodes for α and β. Different scheduling parameters can be selected for each unknown coefficient. In this chapter third order B-splines spaced 2.5 degrees and up to three scheduling variables, α, β, δe , depending on coefficient are once again used. With these approximators sufficient model accuracy is

8.4

MODULAR MODEL IDENTIFICATION

155

obtained. Following the notation of (7.50) the estimates of the aerodynamic forces and moments can be written as ˆ¯ = ΦT (α, β, δ )Θ ˆ = ΦT (α, β, δe )Θ ˆ L, L ˆ L, L ¯ ¯ e L

Yˆ ˆ D

L

ˆ¯ = ΦT (α, β, δ )Θ ˆY ,M ˆ M, = (α, β, δe )Θ ¯ ¯ e M ˆ D, N ¯ˆ = ΦT¯ (α, β, δe )Θ ˆ N¯ , = ΦTD (α, β, δe )Θ N ΦTY

(8.25)

which is a notation equivalent to the one used in (8.11)-(8.19). Therefore, the update laws (8.22) can be used to adapt the B-spline network weights. A scheme of the integrated adaptive backstepping controller can be found in Figure 6.4 of Chapter 6.

8.4 Modular Model Identification An alternative to the Lyapunov-based indirect adaptive laws of the previous section is to separately design the identifier and the control law. This approach is referred to as the modular control design and was discussed in Section 6.2. The modular adaptive design is not limited to Lyapunov-based identifiers, but allows for more freedom in the selection of model identification. Especially (recursive) least-squares identification is of interest, since it is considered to have good convergence properties and its parameter estimates converge to true, constant values if the system is sufficiently excited. A comparison of Lyapunov and least-squares model identification for a simplified aircraft model in Chapter 6 demonstrated the more accurate approximation potential of the latter approach. A disadvantage of the design is that nonlinear damping terms have to be used to robustify the controller against the slowness of the parameter estimation method. These nonlinear damping terms can lead to high gain control and related numerical problems. Another disadvantage is that the least-squares identifier with nonlinear regressor filter is of a much higher dynamical order than the Lyapunov identifier of the integrated model identification method. First, the intermediate control (8.10) and the control (7.37) are augmented with the additional nonlinear damping terms −S1 Z¯1 and −S2 Z¯2 respectively, where S1 S2

= =

κ1 A1 ΦTF1 ΦF1 AT1

(8.26) 

κ2 A2 ΦTF2 ΦF2 AT2 + A2 

3 X j=1



κ2j ΦTG2j ΦG2j Uj2  AT2

(8.27)

with the scalar gains κ1 , κ2 , κ11 , κ12 , κ13 > 0. With these additional terms the derivative of the control Lyapunov function V (6.46) becomes V˙

=

˜ F1 −Z¯1T (C1 + S1 ) Z¯1 − Z¯2T (C2 + S2 ) Z¯2 + Z¯1T A1 ΦF1 Θ ! 3 X ˜ G Uj ˜ F2 + ΦG Θ +Z¯2T A2 ΦF2 Θ 2j

(8.28)

2j

j=1



−Z¯1T C1 Z¯1 − Z¯2T C2 Z¯2 +

3

X 1 1 ˜T ˜ 1 ˜T ˜ ˜ TG Θ ˜ G2j , ΘF 1 ΘF 1 + ΘF 2 ΘF 2 + Θ 2j 4κ1 4κ2 4κ 2j j=1

156

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

8.4

which demonstrates that the controller achieves boundedness of the modified tracking errors Z¯i if the parameter estimation errors are bounded. The size of the bounds is determined by the damping gains κ∗ . Nonlinear damping terms have to be used with care, since they may result in large control effort for large signals and thereby have an adverse effect on the robustness of the control scheme. An alternative is the use of so-called composite update laws that include both a tracking error and and estimation-based update term [186]. The resulting input-to-state stable controller allows the use of any identifier which can independently guarantee that the parameter estimation errors are bounded. However, to be able to use recursive least-squares techniques a swapping scheme is needed to account for the time-varying behavior of the parameter estimates. The idea behind the swapping technique is to use regressor filtering to convert the dynamic parametric system into a static form in such a way that standard parameter estimation algorithms can be used. In this study a x-swapping filter is used, which is defined as   Ω˙ 0 = A0 − ρF T (X, U )F (X, U )P (Ω0 + X) − H(X, U ) (8.29)   T T T T ˙ Ω = A0 − ρF (X, U )F (X, U )P Ω + F (X, U ) (8.30) ǫ

ˆ = X + Ω0 − ΩT Θ,

(8.31)

where H(X, U ) are the known dynamics, F (X, U ) is the known regressor matrix, ρ > 0 and A0 is an arbitrary constant matrix such that P A0 + AT0 P = −I,

P = P T > 0.

(8.32)

ˆ and the covariance update are defined as The least-squares update law for Θ ˆ˙ = Θ ˆ˙ Γ

=

Γ

Ωǫ 1 + νtrace (ΩT ΓΩ)

(8.33)

ΓΩΩT Γ − Γλ , 1 + νtrace (ΩT ΓΩ)

(8.34)



where ν ≥ 0 is the normalization coefficient and λ ≥ 0 is the forgetting factor. By Lemma 6.1 the modular controller with x-swapping filters and least-squares update law achieve global asymptotic tracking of the modified tracking errors. Note that flight envelope partitioning is again used for the modular design, only the parameters of the locally valid nonlinear linear-in-the-parameter models in the current partitions are updated at each time step. Although the whole updating process is slightly different, the same Bspline neural networks are used. In this way the modular adaptive design has the same memory capabilities as the integrated design. Note that for the modular adaptive design the covariance matrix also has to be stored in each partition, which leads to a significant increase in identifier states. However, again only a few partitions are updated at each time step. Despite using a mild forgetting factor in (8.34), the covariance matrix can become small after a period of tracking, and hence reduces the ability of the identifier to adjust to abrupt

8.5

CONTROLLER TUNING AND COMMAND FILTER DESIGN

157

changes in the system parameters. A possible solution to this is by resetting the covariance matrix Γ when a sudden change is detected. After an abrupt change in the system parameters, the estimation error will be large. Therefore a good monitoring candidate is the ratio between the current estimation error and the mean estimation error over an interval tǫ . After a failure, the estimation error will be large compared to the mean estimation error, and thus an abrupt change is declared when ǫ − ǫ¯ (8.35) ǫ¯ > Tǫ

where Tǫ is a predefined threshold. This threshold should be chosen large enough such that measurement noise and other disturbances do not trigger the resetting, and sufficiently small such that failures will trigger resetting. For the B-spline partitioned identifier, Tǫ is weighted by the degree of membership of the partition. Due to this modification partitions with low degree of membership, hence relatively inactive partitions, are more unlikely to reset, while the active partitions will reset normally if required. The modular scheme was already depicted in Figure 6.4.

8.5 Controller Tuning and Command Filter Design In this section the gains of the adaptive controllers are tuned and the handling qualities for the undamaged aircraft model are investigated. The goal of the control laws is to provide the pilot with Level 1 handling qualities throughout the whole flight envelope of the aircraft model as specified in MIL-STD-1797B [1]. The reference command filters can be used to convert the commands of the pilot into smooth reference signals for the control law as ps,com 1 = , ps,com,0 Tp s + 1 βcom βcom,0

=

αcom ωα2 = 2 αcom,0 s + 2ζα ωα s + ωα2 2 ωβ , 2 s + 2ζβ ωβ s + ωβ2

where Tp = 0.5, ζα = ζβ = 0.8, ωβ = 1.25 and ωα is a linear function of the dynamic pressure q¯ with value 2.5 for low q¯ and 6.5 for high q¯. After a trial-and error procedure, the controller gains are selected as C1 = 0.5I, C2 = I and the integral gains as     0.2 0 0 0.5 0 0 K1 =  0 0.2 0  , K2 =  0 0 0  . 0 0 0.2 0 0 0 The nonlinear damping gains for the modular adaptive controller are all taken equal to 0.01. The update laws (8.22) for the integrated design are robustified against parameter drift with continuous dead-zones and leakage terms. The update gains are all selected positive definite and tuned in a trial-and-error procedure. As expected, tuning the update

158

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

8.5

laws of the integrated adaptive controller such that they give a good performance at all flight conditions is again a very difficult and time consuming process. Selecting update gains too large can easily result in undesired oscillatory behavior. Low Order Equivalent System (LOES) analysis of frequency responses, obtained from frequency sweeps (0.2→12 rad/s) performed at twenty flight conditions over the entire operating range, were used as the primary means to verify the handling qualities in the nominal case. The flight conditions used for verification are shown in Figure 8.1.

Figure 8.1: Flight conditions for handling qualities analysis.

The transforming of the time history data from the sweeps into the frequency domain and the transfer function fitting was done with the commercially available software package c CIFER . Good fitting results were achieved at all test flight conditions. The following LOES are considered: h i 2 2 −τp s K s s + 2ζ ω s + ω φ φ φ φ e p = δroll (s + 1/Ts ) (s + 1/Tr ) [s2 + 2ζd ωd s + ωd2 ] q δpitch β δyaw

= =



Kθ s (s + 1/Tθ1 ) (s + 1/Tθ2 ) e−τq s   2 s2 + 2ζp ωp s + ωp2 s2 + 2ζsp ωsp s + ωsp

Aβ (s + 1/Tβ1 ) (s + 1/Tβ2 ) (s + 1/Tβ3 ) e−τβ s . (s + 1/Ts ) (s + 1/Tr ) [s2 + 2ζd ωd s + ωd2 ]

For level 1 handling qualities the LOES parameters must satisfy the ranges 0.28 ≤ CAP ≤ 3.6, Tr ≤ 1.0 s, ωsp > 1.0 rad/s, ζd ≥ 0.4, 0.35 ≤ ζsp ≤ 1.3, ζd ωd ≥ 0.4 rad/s, 2 where CAP = ωsp /(nz /α) is the Control Anticipation Parameter and the equivalent time delays τ∗ must be less than 0.10 seconds. Guidelines for estimating the substantial number of parameters in the LOES transfer functions are given in [1, 47]. For the longitudinal

8.6

NUMERICAL SIMULATIONS AND RESULTS

159

response the pitch attitude bandwidth versus phase delay criterion [1] is also taken into account as recommended by [199]. Plots of the CAP versus the short period frequency ωsp can be found in Figure 8.2, while the bandwidth criterion plot appears in Figure 8.3. It can be seen that both criteria predict level 1 handling qualities. Short period damping ζsp values were between 0.60 and 0.82, while the largest effective time delay was 0.084 s. The gain margin was larger than 6 dB and the phase margin larger than 45 deg at all test conditions. Finally, the Neal-Smith criterion [12] also predicts level 1 handling qualities. The Neal-Smith method estimates the amount of pilot compensation required to prevent pilot-in-the-loop resonance. Category A Flight Phases

2

10

1

ωsp (rad/s)

10

Level 2

Level 1 0

10

Level 2

Level 3 −1

10

0

10

1

10 nz/a

2

10

Figure 8.2: LOES short period frequency estimates.

Plots of the LOES roll mode time constant and effective time delay requirements can be found in Figure 8.4 and the LOES Dutch roll frequency ωd and damping ζd requirements in Figure 8.5. The figures demonstrate that also for lateral maneuvering all criteria for level 1 handling qualities are met.

8.6 Numerical Simulations and Results This section presents numerical simulation results from the application of the control systems developed in the previous sections to the high-fidelity, six-degrees-of-freedom F-16 model in a number of failure scenarios and maneuvers. The controllers are evaluated on

8.6

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

Pitch Attitude Bandwidth vs. Phase Delay Criterion

Phase Delay τ (sec)

0.25

Cat. C

0.2

p

Cat. A 0.15 Level 2/3

Cat. A

0.1

Cat. C

0.05

0

0

1

Level 1/2

2 3 Pitch Attitude Bandwidth ω

BW

4 (rad/s)

5

6

Roll Mode Time Constant (sec)

Figure 8.3: Pitch attitude bandwidth vs.phase delay.

Roll Requirements 2 Level 3 1.5 Level 2 1 0.5 Level 1 0

0

1

2

3

4

5

6 4

x 10 Effective Time Delay (sec)

160

0.25

Level 3

0.2 Level 2

0.15 0.1 0.05 0

Level 1 0

1

2

3

4

Dynamic Pressure (N/m2)

5

6 4

x 10

Figure 8.4: Roll mode time constant and effective time delay.

8.6

NUMERICAL SIMULATIONS AND RESULTS

161

their tracking performance and parameter estimation accuracy. Both the control laws and c the aircraft model are written as C S-functions in MATLAB/Simulink . Sensor models taken from [63] and transport delays of 20 ms have been added to the controller to model an onboard computer implementation of the control laws. The analysis in the previous part demonstrates that it is quite straightforward to use the command filters to enforce desired handling qualities of the adaptive backstepping controllers. However, one of the goals in this section is to compare the adaptive designs directly with the baseline F-16 control system of Section 2.5. For the purpose of this comparison, the command filters, stick shaping functions and command limiting functions in the numerical simulations are selected in such a way that the response of the adaptive designs on the nominal F-16 model will be approximately the same as the baseline control system response over the entire flight envelope. One problem is that a longitudinal stick command to the baseline controller generates a mixed pitch rate and load factor response, while the adaptive designs generate an angle of attack response. The desired mixed response is transformed to an angle of attack command for the adaptive controllers using the nominal aircraft model data. To verify whether the baseline control system achieves level 1 handling qualities or not, simulations of frequency sweeps were again performed. The small amplitude responses have been matched to LOES models. As expected the baseline control system also satisfies these criteria over the entire F-16 model flight envelope.

Dutch Roll Data 6

5

3

d

ω (rad/s)

4

2 Level 1 1 Level 3 0

0

0.2

Level 2

0.4

ζd

0.6

0.8

Figure 8.5: Dutch roll frequency vs. damping.

1

162

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

8.6

Table 8.1: Flight conditions used for evaluation. Flight condition FC1 FC2 FC3 FC4 FC5

Mach number 0.8 0.6 0.6 0.4 0.8

Altitude (m) 8000 12000 5000 10000 2000

dynamic pressure (kN/m2 ) 15.95 4.87 13.61 2.96 35.61

α (deg) 1.80 9.40 2.46 14.99 0.04

8.6.1 Simulation Scenarios The simulated failure scenarios are limited to locked right ailerons at zero, four different offsets from zero, two longitudinal center of gravity shifts and a sudden change in the pitch damping (Cmq ) for a total of eight different failure cases. Each simulation lasts between 150 and 200 seconds, after 20 seconds a failure is introduced. All simulation runs start at one of five different trimmed flight conditions as given in Table 8.1. This gives a total of forty failure scenarios for each controller, the simulation results of three typical ones are discussed in detail in the next sections.

8.6.2 Simulation Results with Cmq = 0 The first series of simulations considers a sudden reduction of the longitudinal damping coefficient Cmq to zero at all flight conditions. This is not a very critical change, since the tracking performance of both the baseline and the backstepping controller with adaptation disabled is hardly affected. It does however serve as a nice example to evaluate the ability of the adaptation schemes to accurately estimate inaccuracies in the onboard model. Figure D.12 of Appendix D.3 contains the simulation results for the integrated design starting at flight condition 2 with the longitudinal stick commanding a series of pitch doublets, after 20 seconds of simulation the sudden change in Cmq takes place. The left hand side plots show the inputs and response of the aircraft in solid lines, while the dotted lines are the reference trajectories. Tracking performance both before and after the change in pitch damping is excellent. The solid lines in the right hand side plots of Figure D.12 show the changes in aerodynamic coefficients w.r.t. the nominal values divided by the maximum absolute value of the real aerodynamic coefficients to normalize them. The dotted lines are the normalized real differences between the altered and nominal aircraft model. The change in Cmq is clearly visible in the plots. However, the tracking error based update laws of the integrated controller compensate by estimating changes in Cm0 and Cmδe instead, which leads to the same total pitching moment. The time histories drag estimation and the total airspeed are not depicted in the figure. The flight control system is not able to follow this maneuver and at the same time hold the aircraft at the correct airspeed, hence there is some estimation of a non-existing drag coefficient error.

8.6

NUMERICAL SIMULATIONS AND RESULTS

163

It is expected that the estimation-based update laws of the modular design will manage to find the correct parameter values, since the reference signal flown should be rich enough with information. The results of the same simulation scenario for the F-16 with the modular controller can be seen in Figure D.13. The tracking performance of this controller is also excellent, and, as can be seen from the right hand side plots, the correct change in parameter value is found by the model identification. The results of other simulations of this failure scenario are in correspondence with the single case discussed above. Tracking performance is always good, but as expected only the modular controller manages to find the true aerodynamic coefficient values. Naturally, the speed at which the true values are found depends on the richness of information in the reference signal.

8.6.3 Simulation Results with Longitudinal c.g. Shifts The second series of simulations considers a more complex failure: Longitudinal center of gravity shifts. Especially backward shifts can be quite critical, since they work destabilizing and can even result in a loss of static stability margin. All pitching and yawing aerodynamic moment coefficients will change as a result of a longitudinal c.g. shift. The baseline classical controller is designed to deal with longitudinal c.g. shifts and, as is demonstrated in [149], can even deal with shifts of ±0.06¯ c. The tracking performance degrades somewhat, but is still acceptable. However, for a non-adaptive model inversion based design the changes are far more critical and stability loss often occurs for destabilizing shifts, even with the integral gains. Figure D.14 contains the simulation results for F-16 model with the integrated adaptive controller starting at flight condition 1 with the longitudinal stick commanding a series of small amplitude pitch doublets, after 20 seconds the c.g. instantly shifts backward 0.06¯ c and the small positive static margin is lost. Without adaptation stability is lost immediately, but as can be seen in the left hand side plots with adaptation turned on the tracking performance of the integrated design is acceptable although small tracking errors remain. The right hand side plots demonstrate that the estimates again do not converge to their true values, and the change in yawing moment is not estimated at all, since it does not result in large enough tracking errors. In Figure D.15 the total pitch moment coefficient is plotted against the angle of attack with a pitch rate and elevator deflection of zero, both before (blue line) and after the failure occurs (red line). The difference or the error is plotted in Figure D.16 together with the estimated error generated by the adaptive backstepping controller at the end of the simulation. It is interesting to note that the pitch moment coefficient error is only learned over the portion of the flight envelope over which training samples have been accumulated. This is due to the local nature of the B-spline networks used for the flight envelope partitioning. The plots of the results for the modular design for the same scenario can be found in Figure D.17. The tracking performance of the modular design is somewhat disappointing; even after 200 seconds of simulation there still remains a significant tracking error. Also the parameter estimates do not converge to their true values and the total recon-

164

F-16 STABILITY AND CONTROL AUGMENTATION DESIGN

8.7

structed pitching moment is not equal to the real moment. However, if the same simulation is performed without flight envelope partitioning with B-splines on a semi-linear aircraft model the tracking performance and parameter convergence is excellent. It seems the flight envelope partitioning negatively affects the estimation capabilities of the leastsquares algorithm for this failure scenario. The simulation results of the rest of the c.g. shift failure scenarios correspond to this single case: The tracking performance of the integrated design is better than for the modular design, with the modular design struggling to estimate the correct parameter values. Tracking performance of both controllers is better for stabilizing c.g. shifts.

8.6.4 Simulation Results with Aileron Lock-ups In the last series of simulations right aileron lockups or hard-overs are considered. At 20 seconds simulation time the right aileron suddenly moves to a certain offset: -21.5, -10.75, 0, 10 or 21.5 degrees. Note that the public domain F-16 model does not contain a differential elevator, hence only the rudder and the left aileron can be used to compensate for these failures. Both the baseline control system and the adaptive SCAS designs with adaptation turned off cannot compensate for the additional rolling and yawing moments themselves, which means a very high workload for the pilot. The results of a simulation performed with the integrated controller at flight condition 4 with a right aileron lock up at -10.75 degrees can be seen in Figure D.18. One lateral stick doublet is performed before the failure occurs and three more 60 seconds after. As can be seen the controller manages to compensate for most of the additional rolling moment and after that the stability roll rate tracking error slowly converges to zero. Additional sideslip is generated in the doublets and tracking performance improves over time. The other plots of Figure D.18 demonstrate that parameter convergence to the true values is not achieved. The change in yawing moment is even estimated as having an opposite sign. However, tracking performance is adequate and improving over time. Figure D.19 contains the results of the same scenario using the modular controller. It can be seen that the aileron failure is quickly compensated for by the modular adaptive controller: All tracking errors quickly converge to zero. However, the controller again fails to identify the true aerodynamic coefficient changes. The total reconstructed forces and moments are correct, but the individual coefficients do not match their true values. This is partly because the reference signal is not rich enough, but also due to the flight envelope partitioning. The same simulation without partitioning on the semi-linear F-16 model gives much better estimates. The results of the above simulations were again characteristic for all scenarios with aileron lockup failures. Tracking performance of the modular controller is excellent, but parameter convergence to the true values is seldom achieved. The adaptation of the integrated design is less aggressive, mainly due to the use of the continuous dead-zones, but tracking performance is still good.

8.7 Conclusions In this chapter two Lyapunov-based nonlinear adaptive stability and control augmentation systems are designed for a high-fidelity F-16 model. The first controller is an integrated design in which the feedback control and the dynamic, tracking error based update law are designed simultaneously using a single control Lyapunov function. The second design is an ISS-backstepping controller with a separate recursive least-squares identifier. In order to make real-time implementation of the controllers feasible, the flight envelope is partitioned into locally valid linear-in-the-parameters models using B-spline networks. Only a few local aerodynamic models are updated at each time step, while the information of the other local models is stored. The controllers are designed in such a way that they have nearly identical handling qualities to the baseline F-16 control system over the entire subsonic flight envelope for the nominal, undamaged aircraft model. Numerical simulations with several types of failures were performed to verify the robust performance of the control laws. The results show that good tracking performance can still be accomplished with these failures and that pilot workload is reduced. Several important observations can be made based on the simulation results and the comparison:

1. Results of numerical simulations show that the adaptive flight controllers provide a significant improvement over a non-adaptive NDI design with integral gains for the simulated failure cases. Both adaptive designs show no degradation in performance with the added sensor dynamics and time delays. The flight envelope partitioning method makes real-time implementation of both controllers feasible, although the difference in required computational load and storage space is quite significant: for the least-squares identifier each locally valid aerodynamic model has its own covariance matrix.

2. In general, the modular adaptive design provides the best estimates of the individual aerodynamic coefficients. However, the nonlinear damping gains of the modular design should be tuned with care to avoid high gain feedback signals.

3. The gain tuning of the update laws of the integrated adaptive controller is a very time consuming process, since changing the gains can give very unexpected transients in the closed-loop tracking performance. This is especially true for aggressive maneuvering. In the next chapter an alternative Lyapunov based parameter estimation method is investigated.

4. Tuning of the modular identifier is a much less involved task. However, the recursive least-squares identifier combined with flight envelope partitioning has unexpected problems estimating the true parameters. A different parametrization of the approximator structure or another tuning setting may solve these problems.

5. Enforcing desired handling qualities using the command filters is a trivial task in the nominal case, since most specifications can be implemented directly. The handling qualities can be verified using frequency sweeps and lower order model fits.

Measurements of the handling qualities when a sudden aerodynamic change occurs and the adaptation becomes active have not been obtained, since the dynamic behavior of the closed-loop system is constantly changing, making it impossible to fit a lower order equivalent model.

Chapter 9

Immersion and Invariance Adaptive Backstepping

The earlier chapters have shown that the dynamic part of integrated adaptive backstepping designs is very difficult to tune, since it is unclear how a higher update gain affects the closed-loop tracking performance of the control system. Furthermore, the dynamic behavior of the controllers is very unpredictable. In this chapter the dynamic part of the controller is replaced with a new kind of estimator based on the immersion and invariance approach. This approach allows prescribed stable dynamics to be assigned to the parameter estimation error and is therefore much easier to tune. The new immersion and invariance backstepping technique is used to design a new stability and control augmentation system for the F-16, which is compared to the designs of the previous chapter. This chapter can be seen as a follow-up to Chapter 5, where an attempt was made to simplify the performance tuning of the controllers by designing an inverse optimal adaptive backstepping controller.

9.1 Introduction In the past two decades a considerable amount of literature has been devoted to nonlinear adaptive control design methods for a variety of flight control problems involving parametric uncertainties in the system dynamics, see e.g. [61, 124, 132, 150]. Recursive, Lyapunov-based adaptive backstepping is among the most widely studied of these methods. The main attractions of adaptive backstepping based control laws lie in their provable convergence and stability properties, as well as in the fact that they can be applied to a broad class of nonlinear systems. However, despite a number of refinements over the years, the adaptive backstepping method also has a number of shortcomings. The most important of these is that the
parameter estimation error is only guaranteed to be bounded and to converge to an unknown constant value; little can be said about its dynamical behavior. Unexpected dynamical behavior of the parameter update laws may lead to an undesired transient response of the closed-loop system. Furthermore, increasing the adaptation gain will lead to faster parameter convergence, but will not necessarily improve the response of the closed-loop system. This makes it impossible to properly tune an adaptive backstepping controller, especially for large and complex systems such as the high-fidelity F-16 model. One solution to this problem is to introduce a modular input-to-state stable backstepping approach with a separate identifier that is not of the Lyapunov type, e.g. the well known recursive least-squares identifier. Since the certainty equivalence principle does not hold in general for nonlinear systems, the control law has to be robustified against the time-varying character of the parameter estimates. However, the nonlinear damping terms introduced to achieve this robustness can lead to undesirable high gain control. Furthermore, the controller loses some of the strong stability properties of the integrated adaptive backstepping approach. Finally, in a real-time application for a complex system, the high dynamic order resulting from using a least-squares identifier with the necessary regressor filtering may be undesirable. In [40, 102, 103], a different class of Lyapunov-based adaptive controllers has been developed based on the immersion and invariance (I&I) methodology [7]. This approach allows prescribed stable dynamics to be assigned to the parameter estimation error, thus leading to a modular control scheme which is much easier to tune than an adaptive backstepping controller. However, this shaping of the dynamics relies on the solution of a partial differential matrix inequality, which is difficult to solve for multivariable systems. This limitation is removed in [104] using a dynamic extension consisting of output filters and dynamic scaling factors added to the estimator dynamics. In this study, the approach of [104] is used to derive a nonlinear adaptive estimator which, in combination with a static backstepping feedback controller, results in a nonlinear adaptive control framework with guaranteed global asymptotic stability of the closed-loop system. The new design technique is applied to the flight control design problem for the over-actuated, six-degrees-of-freedom fighter aircraft model of Chapter 6 and after that to the SCAS design for the F-16 model of Chapter 2. The results of the numerical simulations are compared directly to the results for the integrated and modular adaptive backstepping controllers of Chapters 6 and 8.

9.2 The Immersion and Invariance Concept Immersion and invariance is a relatively new approach to designing nonlinear controllers or estimators for (uncertain) nonlinear systems [6]. As the name suggests, the method relies on the well known notions of system immersion and manifold invariance (formal definitions of immersion and invariant manifolds can be found in Appendix B.3), but used from another perspective. The idea behind the I&I approach is to capture the desired behavior of the system to be controlled with a target dynamical system. This way the
control problem is reduced to the design of a control law which guarantees that the controlled system asymptotically behaves like the target system. The I&I method is applicable to a variety of control problems, but it is easiest to illustrate the approach with a basic stabilization problem of an equilibrium point of a nonlinear system. Consider the general system

\dot{x} = f(x) + g(x)u,   (9.1)

where x ∈ R^n and u ∈ R^m. The control problem is to find a state feedback control law u = v(x) such that the closed-loop system has a globally asymptotically stable equilibrium at the origin. The first step of the I&I approach is to find a target dynamical system

\dot{\xi} = \alpha(\xi),   (9.2)

where ξ ∈ R^p, p < n, which has a globally asymptotically stable equilibrium at the origin, a smooth mapping x = π(ξ), and a control law v(x) such that

f(\pi(\xi)) + g(\pi(\xi))\,v(\pi(\xi)) = \frac{\partial \pi}{\partial \xi}\,\alpha(\xi).   (9.3)

If these conditions hold, then any trajectory x(t) of the closed-loop system

\dot{x} = f(x) + g(x)v(x),   (9.4)

is the image through the mapping π(ξ) of a trajectory ξ(t) of the target system (9.2). Note that the rank of π is equal to the dimension of ξ. The second step is to find a control law that renders the manifold x = π(ξ) attractive and keeps the closed-loop trajectories bounded. This way the closed-loop system will asymptotically behave like the desired target system and hence stability is ensured. From the above discussion, it follows that the control problem has been transformed into the problem of the selection of a target dynamical system. This is, in general, a nontrivial task, since the solvability of the underlying control design problem depends on this selection. However, in many cases of practical interest it is possible to identify natural target dynamics. Examples of different applications are given in [6]. In this thesis, the focus lies on adaptive control, hence the I&I approach is used to develop a framework for adaptive stabilization of nonlinear systems with parametric uncertainties. Consider again the system (9.1) with an equilibrium x_e to be stabilized, but where the functions f(x) and g(x) now depend on an unknown parameter vector θ ∈ R^q. The goal is to find an adaptive state feedback control law of the form

u = v(x, \hat{\theta}), \qquad \dot{\hat{\theta}} = w(x, \hat{\theta}),   (9.5)

such that all trajectories of the closed-loop system (9.1), (9.5) are bounded and lim_{t→∞} x = x_e. To this end it is assumed that a full-information control law v(x, θ) exists. The I&I adaptive control problem is then defined as follows [7].

Definition 9.1. The system (9.1) is said to be adaptively I&I stabilizable if there exist functions β(x) and w(x, θ̂) such that all trajectories of the extended system

\dot{x} = f(x) + g(x)\,v(x, \hat{\theta} + \beta(x)), \qquad \dot{\hat{\theta}} = w(x, \hat{\theta}),   (9.6)

are bounded and satisfy

\lim_{t\to\infty}\Big[ g(x(t))\,v\big(x(t), \hat{\theta}(t) + \beta(x(t))\big) - g(x(t))\,v(x(t), \theta) \Big] = 0.   (9.7)

It is not difficult to see that condition (9.7) holds for all trajectories staying on the manifold

M = \big\{ (x, \hat{\theta}) \in \mathbb{R}^n \times \mathbb{R}^q \;\big|\; \hat{\theta} - \theta + \beta(x) = 0 \big\}.

Moreover, by Definition 9.1, adaptive I&I stabilizability implies that

\lim_{t\to\infty} x = x_e.   (9.8)

Note that the adaptive controller designed with the I&I approach is not of the certainty equivalence type in the strict sense, i.e. the parameter estimate is not used directly by the static feedback controller. Furthermore, note that, in general, f(x) and g(x) depend on the unknown θ and therefore the parameter estimate θ̂ does not necessarily converge to the true parameter values. However, in many cases it is also possible to establish global stability of the equilibrium (x, θ̂) = (x_e, θ). This is illustrated in the following example.

Example 9.1 (Adaptive controller design) Consider the feedback linearizable system

\dot{x} = \theta x^3 + x + u,   (9.9)

where θ ∈ R is an unknown constant parameter. If θ were known, the equilibrium point x = 0 would be globally asymptotically stabilized by the control law

u = -\theta x^3 - cx, \qquad c > 1.   (9.10)

Since θ is not known, it is replaced by its estimate θ̂ in the certainty equivalence controller

u = -\hat{\theta} x^3 - cx, \qquad \dot{\hat{\theta}} = w,

where w is the parameter update law. As before, the control Lyapunov function is selected as

V(x, \hat{\theta}) = \tfrac{1}{2} x^2 + \tfrac{1}{2\gamma}(\theta - \hat{\theta})^2,   (9.11)

with γ > 0. Selecting the update law

w = \gamma x^4   (9.12)

renders the derivative of V equal to

\dot{V} = -(c-1)x^2.   (9.13)

By Theorem 3.7 the equilibrium (x, θ̃) = 0, with θ̃ = θ − θ̂, is globally stable and lim_{t→∞} x = 0. However, no conclusions can be drawn about the behavior of the parameter estimation error θ − θ̂, except that it converges to a constant value. The dynamical behavior of the estimation error may be unacceptable in terms of the transient response of the closed-loop system. Alternatively, the adaptive control problem can be placed in the I&I framework by considering the augmented system

\dot{x} = \theta x^3 + x + u, \qquad \dot{\hat{\theta}} = w,   (9.14)

and by defining the one-dimensional manifold

M = \big\{ (x, \hat{\theta}) \in \mathbb{R}^2 \;\big|\; \hat{\theta} - \theta + \beta(x) = 0 \big\}

in the extended space (x, θ̂), where β(x) is a continuous function yet to be specified. If the manifold M is invariant, the dynamics of the x-subsystem of (9.14) restricted to this manifold can be written as

\dot{x} = (\hat{\theta} + \beta(x))\,x^3 + x + u.   (9.15)

Hence, the dynamics of the system are completely known and the equilibrium x = 0 can be asymptotically stabilized by the control law

u = -cx - (\hat{\theta} + \beta(x))\,x^3, \qquad c > 1.   (9.16)

To render this design feasible, the first step of the I&I approach consists of finding an update law w that renders the manifold M invariant. To this end, consider the dynamics of the 'off-the-manifold' coordinate, i.e. the estimation error

\sigma \triangleq \hat{\theta} - \theta + \beta(x),   (9.17)

which are given by

\dot{\sigma} = w + \frac{\partial \beta}{\partial x}\Big[ (\hat{\theta} + \beta(x) - \sigma)\,x^3 + x + u \Big].   (9.18)

If the update law w is selected as

w = -\frac{\partial \beta}{\partial x}\Big[ (\hat{\theta} + \beta(x))\,x^3 + x + u \Big] = \frac{\partial \beta}{\partial x}(c-1)x,   (9.19)
the manifold M is invariant and the off-the-manifold dynamics are described by

\dot{\sigma} = -\frac{\partial \beta}{\partial x}\,x^3 \sigma.   (9.20)

Consider the Lyapunov function V = \frac{1}{2\gamma}\sigma^2, whose time derivative along the trajectories of (9.20) satisfies

\dot{V} = -\frac{1}{\gamma}\frac{\partial \beta}{\partial x}\,x^3 \sigma^2,   (9.21)

where γ > 0. To render this expression negative semi-definite, a possible choice for the function β(x) is given as

\beta(x) = \gamma\,\frac{x^2}{2}.   (9.22)

An alternative solution with dead-zones is given as

\beta(x) = \begin{cases} \frac{\gamma}{2}(x - \eta_0)^2 & \text{if } x > \eta_0 \\ \frac{\gamma}{2}(x + \eta_0)^2 & \text{if } x < -\eta_0 \\ 0 & \text{if } |x| \le \eta_0 \end{cases}   (9.23)

with η_0 > 0 the dead-zone constant. It can be concluded that the system (9.20) has a globally stable equilibrium at zero and lim_{t→∞} x^3 σ = 0. The resulting closed-loop system can be written in (x, σ)-coordinates as

\dot{x} = -(c-1)x - x^3\sigma, \qquad \dot{\sigma} = -\gamma x^4 \sigma,   (9.24)

which has a globally stable equilibrium at the origin, and x converges to zero. Moreover, the extra β(x)x^3 term in the control law (9.16) renders the closed-loop system input-to-state stable with respect to the parameter estimation error θ − θ̂. The response of the closed-loop system with the I&I adaptive controller is compared to the response of the system with the standard adaptive controller designed at the beginning of this example. The tuning parameters of both designs are selected to be the same. The real θ is equal to 2, but the initial parameter estimate is 0. As can be seen in Figure 9.1, both controllers manage to regulate the state to zero. Note that it is not guaranteed that the estimate of the I&I design converges to the true value, only that lim_{t→∞} x^3 σ = 0. The closed-loop system (9.24) can be regarded as a cascaded interconnection of two stable systems which can be tuned via the constants c and γ. This modularity makes the I&I adaptive controller much easier to tune than the standard adaptive design. As a result, the performance of the adaptive system can be significantly improved.

Figure 9.1: State x, control effort u and parameter estimate θ̂ for initial values x(0) = 2, θ̂(0) = 0, control gain c = 2 and update gain γ = 1, for the closed-loop system with the standard adaptive design and with the I&I adaptive design.
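As an illustration (not part of the thesis), the scalar example can be reproduced in a few lines of Python; the sketch below integrates the plant (9.9) with both controllers using the values quoted in Figure 9.1 and a simple forward Euler scheme, and prints the final state and parameter estimate of each design.

# Minimal sketch: Example 9.1 with theta = 2, x(0) = 2, theta_hat(0) = 0, c = 2,
# gamma = 1 -- comparing the standard certainty-equivalence controller (9.10)-(9.12)
# with the I&I controller (9.16), (9.19), (9.22). Forward Euler integration.
import numpy as np

theta, c, gamma = 2.0, 2.0, 1.0
dt, T = 1e-4, 5.0
steps = int(T / dt)

def simulate(use_ii):
    x, theta_hat = 2.0, 0.0
    for _ in range(steps):
        if use_ii:
            beta = 0.5 * gamma * x**2                      # (9.22)
            u = -c * x - (theta_hat + beta) * x**3         # (9.16)
            theta_hat_dot = gamma * x * (c - 1.0) * x      # (9.19): (d beta/dx)(c-1)x
        else:
            u = -theta_hat * x**3 - c * x                  # certainty equivalence law
            theta_hat_dot = gamma * x**4                   # update law (9.12)
        x_dot = theta * x**3 + x + u                       # plant (9.9)
        x += dt * x_dot
        theta_hat += dt * theta_hat_dot
    return x, theta_hat

for use_ii, name in [(False, "standard"), (True, "I&I")]:
    x_T, th_T = simulate(use_ii)
    print(f"{name:8s}: x(5) = {x_T: .2e}, theta_hat(5) = {th_T: .3f}")

With these particular initial values the I&I design happens to start exactly on the manifold σ = 0, so its reconstructed parameter θ̂ + β(x) equals the true θ throughout the simulation; in general only x³σ → 0 is guaranteed.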

9.3 Extension to Higher Order Systems Extending the I&I approach outlined in the last section to higher-order nonlinear systems with unmatched uncertainties is by no means straightforward. In [102] an attempt is made for the class of lower-triangular nonlinear systems of the form

\dot{x}_i = x_{i+1} + \varphi_i(x_1, ..., x_i)^T \theta, \qquad i = 1, ..., n-1,
\dot{x}_n = u + \varphi_n(x)^T \theta,   (9.25)

where x_i ∈ R, i = 1, ..., n are the states, u ∈ R is the control input, φ_i are the smooth regressors and θ ∈ R^p is a vector of unknown constant parameters. The control problem is to track the smooth reference signal y_r(t) (all derivatives known and bounded) with the state x_1. The adaptive control design is done in two steps. First, an overparametrized estimator of order np for the unknown parameter vector θ is designed. In the second step a controller is designed that ensures that lim_{t→∞} x_1 = y_r and all other states are bounded.

9.3.1 Estimator Design The estimator design starts by defining the estimation errors as

\sigma_i = \hat{\theta}_i - \theta + \beta_i(x_1, ..., x_i), \qquad i = 1, ..., n,   (9.26)

where θ̂_i are the estimator states and β_i are continuously differentiable functions to be defined later. The dynamics of σ_i are given by

\dot{\sigma}_i = \dot{\hat{\theta}}_i + \sum_{k=1}^{i} \frac{\partial \beta_i}{\partial x_k}\Big( x_{k+1} + \varphi_k(x_1, ..., x_k)^T \theta \Big)
             = \dot{\hat{\theta}}_i + \sum_{k=1}^{i} \frac{\partial \beta_i}{\partial x_k}\Big( x_{k+1} + \varphi_k(x_1, ..., x_k)^T \big[ \hat{\theta}_i + \beta_i(x_1, ..., x_i) - \sigma_i \big] \Big),

where x_{n+1} = u for ease of notation. Update laws for θ̂_i can be defined as

\dot{\hat{\theta}}_i = -\sum_{k=1}^{i} \frac{\partial \beta_i}{\partial x_k}\Big( x_{k+1} + \varphi_k(x_1, ..., x_k)^T \big[ \hat{\theta}_i + \beta_i(x_1, ..., x_i) \big] \Big)   (9.27)

to cancel all the known parts of the σ_i dynamics, resulting in

\dot{\sigma}_i = -\Big[ \sum_{k=1}^{i} \frac{\partial \beta_i}{\partial x_k}\,\varphi_k(x_1, ..., x_k)^T \Big] \sigma_i.   (9.28)

The system (9.28) for i = 1, ..., n can be seen as a linear time-varying system with a block diagonal dynamic matrix. Hence, the problem of designing an estimator θ̂_i is now reduced to the problem of finding functions β_i such that the diagonal blocks are rendered negative semi-definite. In [102] the functions β_i are selected as

\beta_i(x_1, ..., x_i) = \gamma_i \int_0^{x_i} \varphi_i(x_1, ..., x_{i-1}, \chi)\,d\chi + \epsilon_i(x_i), \qquad \gamma_i > 0,   (9.29)

where ε_i are continuously differentiable functions that satisfy the partial differential matrix inequality

F_i(x_1, ..., x_i)^T + F_i(x_1, ..., x_i) \ge 0, \qquad i = 2, ..., n,   (9.30)

where

F_i(x_1, ..., x_i) = \sum_{k=1}^{i-1} \frac{\partial}{\partial x_k}\Big( \gamma_i \int_0^{x_i} \varphi_i(x_1, ..., x_{i-1}, \chi)\,d\chi \Big)\,\varphi_k(x_1, ..., x_k)^T + \frac{\partial \epsilon_i}{\partial x_i}\,\varphi_i(x_1, ..., x_i)^T.

Note that the solvability of (9.30) strongly depends on the structure of the regressors φ_i. For instance, in the case that φ_i only depends on x_i, the trivial solution ε_i(x_i) = 0 satisfies the inequality. If (9.30) is solvable, the following lemma can be established [6].

Lemma 9.2. Consider the system (9.25), where the functions β_i are given by (9.29) and functions ε_i exist which satisfy (9.30). Then the estimation error dynamics (9.28) have a globally uniformly stable equilibrium at the origin, σ_i(t) ∈ L_∞ and φ_i(x_1(t), ..., x_i(t))^T σ_i(t) ∈ L_2, for all i = 1, ..., n and for all x_1(t), ..., x_i(t). If, in addition, φ_i and its time derivative are bounded, then φ_i(x_1, ..., x_i)^T σ_i converges to zero.

Proof: Consider the Lyapunov function W(\sigma) = \sum_{i=1}^{n} \sigma_i^T \sigma_i, whose time derivative along the trajectories of (9.25) is given as

\dot{W} = -2\sum_{i=1}^{n} \sigma_i^T \Big[ \sum_{k=1}^{i} \frac{\partial \beta_i}{\partial x_k}\,\varphi_k(x_1, ..., x_k)^T \Big] \sigma_i
        = -\sum_{i=1}^{n} \sigma_i^T \big( 2\gamma_i \varphi_i \varphi_i^T + F_i + F_i^T \big) \sigma_i
        \le -\sum_{i=1}^{n} 2\gamma_i (\varphi_i^T \sigma_i)^2,

where (9.30) was used to obtain the last inequality. The stability properties follow directly from Theorem 3.7. Note that the above inequality holds for any u. Furthermore, by definition (9.26) an asymptotically converging estimate of each unknown term φ_i^T θ of the system (9.25) is given by

\varphi_i(x_1, ..., x_i)^T \big( \hat{\theta}_i + \beta_i(x_1, ..., x_i) \big).   (9.31)

Note that an estimate of the φ_i^T θ terms is obtained, instead of only an estimate of the parameter θ as with the Lyapunov based update laws of the earlier chapters.
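To make the structure of the estimator (9.26)-(9.29) concrete, the following sketch (illustrative, not thesis code) implements it for a hypothetical second-order system whose regressors depend only on their own state, so that the trivial choice ε_i = 0 satisfies (9.30); the linear feedback plus sinusoid used as control input is an arbitrary choice that merely keeps the trajectories bounded and exciting.

# Minimal sketch: overparametrized I&I estimator (9.26)-(9.29) for
#     x1_dot = x2 + theta * sin(x1),   x2_dot = u + theta * sin(x2),
# i.e. phi1 = sin(x1), phi2 = sin(x2), scalar unknown theta.
import numpy as np

theta = 1.5                       # true parameter, unknown to the estimator
g1, g2 = 5.0, 5.0                 # adaptation gains gamma_i
dt, T = 1e-3, 30.0

x = np.array([1.0, 0.0])
th1, th2 = 0.0, 0.0               # estimator states theta_hat_1, theta_hat_2

for k in range(int(T / dt)):
    t = k * dt
    u = -x[0] - 2.0 * x[1] + np.sin(0.7 * t)        # bounded, exciting input
    phi1, phi2 = np.sin(x[0]), np.sin(x[1])
    beta1 = g1 * (1.0 - np.cos(x[0]))               # (9.29): gamma1 * int_0^x1 sin(s) ds
    beta2 = g2 * (1.0 - np.cos(x[1]))
    # update laws (9.27); beta_i depends only on x_i here, so each sum has one term
    th1_dot = -g1 * phi1 * (x[1] + phi1 * (th1 + beta1))
    th2_dot = -g2 * phi2 * (u + phi2 * (th2 + beta2))
    x_dot = np.array([x[1] + theta * phi1, u + theta * phi2])
    x, th1, th2 = x + dt * x_dot, th1 + dt * th1_dot, th2 + dt * th2_dot

# phi_i*(theta_hat_i + beta_i) is the asymptotic estimate (9.31) of phi_i*theta
print("theta_hat_1 + beta_1 =", round(th1 + g1 * (1 - np.cos(x[0])), 3))
print("theta_hat_2 + beta_2 =", round(th2 + g2 * (1 - np.cos(x[1])), 3))
print("true theta           =", theta)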

9.3.2 Control Design The properties of the estimator will now be exploited with a backstepping control law. The design procedure starts by defining the tracking errors as

z_1 = x_1 - y_r, \qquad z_i = x_i - \alpha_{i-1}, \qquad i = 2, ..., n,   (9.32)

where α_* are the intermediate control laws to be defined. The dynamics of z_1 satisfy

\dot{z}_1 = z_2 + \alpha_1 + \varphi_1^T \theta - \dot{y}_r.   (9.33)

Introducing the virtual control

\alpha_1 = -\kappa_1(z_1) - \varphi_1^T (\hat{\theta}_1 + \beta_1) + \dot{y}_r,   (9.34)

where κ_1(z_1) is a stabilizing function to be defined, reduces the z_1-dynamics to

\dot{z}_1 = z_2 - \kappa_1 - \varphi_1^T \sigma_1.   (9.35)

Assume for the moment that z_2 ≡ 0, i.e. α_1 is the real control. Then the above expression can be seen as a stable system perturbed by an L_2 signal. Consider now the Lyapunov
function V_1(z_1, \sigma_1) = z_1^2 + \sigma_1^T \sigma_1. Taking the derivative along the trajectories of (9.35) and (9.28) results in

\dot{V}_1 = -2\kappa_1 z_1 - 2\varphi_1^T\sigma_1 z_1 - 2\gamma_1(\varphi_1^T\sigma_1)^2
          = -2\kappa_1 z_1 + \epsilon z_1^2 - \Big( \tfrac{1}{\sqrt{\epsilon}}\varphi_1^T\sigma_1 + \sqrt{\epsilon}\,z_1 \Big)^2 - \Big( 2\gamma_1 - \tfrac{1}{\epsilon} \Big)(\varphi_1^T\sigma_1)^2
          \le -2\kappa_1 z_1 + \epsilon z_1^2 - \Big( 2\gamma_1 - \tfrac{1}{\epsilon} \Big)(\varphi_1^T\sigma_1)^2,

where ε > 0 is a constant. Substituting κ_1 = (c_1 + ε/2)z_1, with gain c_1 > 0, reduces the derivative of V_1 to

\dot{V}_1 \le -2c_1 z_1^2 - \Big( 2\gamma_1 - \tfrac{1}{\epsilon} \Big)(\varphi_1^T\sigma_1)^2.

By Theorem 3.7 it follows that if γ_1 ≥ 1/(2ε) the closed-loop (z_1, σ_1)-subsystem has a globally uniformly stable equilibrium at the origin and lim_{t→∞} z_1 = 0, lim_{t→∞} φ_1^T σ_1 = 0. Since z_2 is not identically zero, the approach is extended to design a backstepping controller for the complete system, i.e.

\alpha_{i+1} = -\kappa_i - \varphi_i^T(\hat{\theta}_i + \beta_i) + \sum_{k=1}^{i-1}\frac{\partial \alpha_i}{\partial x_k}\Big[ x_{k+1} + \varphi_k^T(\hat{\theta}_k + \beta_k) \Big] + \sum_{k=1}^{i-1}\frac{\partial \alpha_i}{\partial \hat{\theta}_k}\dot{\hat{\theta}}_k + y_r^{(i)}, \qquad u = \alpha_{n+1},   (9.36)

where

\kappa_i = \Big( c_i + \frac{\epsilon}{2} + \frac{\epsilon}{2}\sum_{k=1}^{i-1}\Big(\frac{\partial \alpha_i}{\partial x_k}\Big)^2 \Big) z_i + z_{i-1}, \qquad i = 1, ..., n,

where c_i > 0 and ε > 0. Note that nonlinear damping terms have to be introduced to compensate for the derivative terms of the virtual controls. This is necessary, since command filters are not used in this backstepping design. To prove stability of the closed-loop system with the above backstepping control law and the I&I based estimator designed in the previous section, the Lyapunov function V(z, \sigma) = W(\sigma) + \sum_{k=1}^{n} z_k^2 is introduced. Taking the derivative of V results in

\dot{V} = -2\sum_{i=1}^{n} c_i z_i^2 - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{n-i+1}{\epsilon} \Big)(\varphi_i^T\sigma_i)^2.

It can be concluded that, if 2γ_i ≥ (n − i + 1)/ε and the inequality (9.30) is satisfied, the system (9.25), (9.36), (9.27), (9.29) has a globally stable equilibrium. Furthermore, by Theorem 3.7, lim_{t→∞} z_i = 0 and lim_{t→∞} φ_i^T σ_i = 0. This concludes the overparametrized nonlinear adaptive control design, which can be used as an alternative to the tuning functions adaptive backstepping approach if functions ε_i(x_i) can be found that satisfy (9.30), as is demonstrated for a wing rock example in [102].

9.4 Dynamic Scaling and Filters In the previous section a first attempt was made to design an adaptive backstepping controller with an I&I based estimator. The estimator allows prescribed dynamics to be assigned to the parameter estimation error, which leads to a modular adaptive backstepping design that is much easier to tune than the integrated approaches discussed in earlier chapters. Furthermore, the modular design does not suffer from the weaknesses of the certainty equivalence modular design of Section 6.2. However, the shaping of the dynamics relies on the solution of a partial differential matrix inequality, which is, in general, very difficult to solve for most physical systems. This limitation of the estimator design was removed in [104] with the introduction of a dynamic scaling factor in the estimator dynamics and by adding an output filter to the design. Dynamic scaling has been widely used in the design of high-gain observers, see e.g. [165]. In this section an I&I estimator with dynamic scaling and output filters is combined with a command filtered backstepping control design approach to arrive at a modular adaptive control framework. Consider the class of linearly parametrized systems of the form

\dot{x}_i = x_{i+1} + \varphi_i(x)^T \theta_i, \qquad i = 1, ..., n,   (9.37)

with states x_i ∈ R, i = 1, ..., n, and control input u ∈ R. Note that for notational convenience x_{n+1} = u. The functions φ_i(x, u) are the known, smooth regressors and θ_i ∈ R^{p_i} are vectors of unknown constant parameters. The control objective is to track a smooth reference signal x_{1,r}, for which the first derivative is known and bounded, with the state x_1.

9.4.1 Estimator Design with Dynamic Scaling The construction of an estimator for θ_i starts by defining the scaled estimation errors as

\sigma_i = \frac{\hat{\theta}_i - \theta_i + \beta_i(x_i, \hat{x})}{r_i}, \qquad i = 1, ..., n,   (9.38)

where r_i are scalar dynamic scaling factors, θ̂_i are the estimator states and β_i(x_i, x̂) are continuously differentiable vector functions yet to be specified. Let e_i = x̂_i − x_i; then the filtered states x̂_i are obtained from

\dot{\hat{x}}_i = x_{i+1} + \varphi_i(x)^T \big( \hat{\theta}_i + \beta_i(x_i, \hat{x}) \big) - k_i(x, r, e)\,e_i,   (9.39)
where k_i(x, r, e) are positive functions. Using the above definitions, the dynamics of σ_i are given by

\dot{\sigma}_i = \frac{1}{r_i}\Big[ \dot{\hat{\theta}}_i + \frac{\partial \beta_i}{\partial x_i}\big( x_{i+1} + \varphi_i(x)^T\theta_i \big) + \sum_{j=1}^{n}\frac{\partial \beta_i}{\partial \hat{x}_j}\dot{\hat{x}}_j \Big] - \frac{\dot{r}_i}{r_i}\sigma_i
             = \frac{1}{r_i}\Big[ \dot{\hat{\theta}}_i + \frac{\partial \beta_i}{\partial x_i}\big( x_{i+1} + \varphi_i(x)^T\big[ \hat{\theta}_i + \beta_i(x_i, \hat{x}) - r_i\sigma_i \big] \big) + \sum_{j=1}^{n}\frac{\partial \beta_i}{\partial \hat{x}_j}\dot{\hat{x}}_j \Big] - \frac{\dot{r}_i}{r_i}\sigma_i.

By selecting the update laws for θ̂_i as

\dot{\hat{\theta}}_i = -\frac{\partial \beta_i}{\partial x_i}\big( x_{i+1} + \varphi_i(x)^T\big[ \hat{\theta}_i + \beta_i(x_i, \hat{x}) \big] \big) - \sum_{j=1}^{n}\frac{\partial \beta_i}{\partial \hat{x}_j}\dot{\hat{x}}_j,   (9.40)

the dynamics of σ_i are reduced to

\dot{\sigma}_i = -\frac{\partial \beta_i}{\partial x_i}\varphi_i(x)^T\sigma_i - \frac{\dot{r}_i}{r_i}\sigma_i = -\Big( \frac{\partial \beta_i}{\partial x_i}\varphi_i(x)^T + \frac{\dot{r}_i}{r_i} \Big)\sigma_i.   (9.41)

The system (9.41) can again be seen as a linear time-varying system with a block diagonal dynamic matrix. In order to render the diagonal blocks negative semi-definite, the functions β_i(x_i, x̂) are selected as

\beta_i(x_i, \hat{x}) = \gamma_i \int_0^{x_i} \varphi_i(\hat{x}_1, ..., \hat{x}_{i-1}, \sigma, \hat{x}_{i+1}, ..., \hat{x}_n)\,d\sigma,   (9.42)

where γ_i > 0. Since the regressors φ_i(x) are continuously differentiable, the expression

\sum_{j=1}^{n} e_j \delta_{ij}(x, e) = \varphi_i(x) - \varphi_i(\hat{x}_1, ..., \hat{x}_{i-1}, x_i, \hat{x}_{i+1}, ..., \hat{x}_n), \qquad \delta_{ii} \equiv 0,   (9.43)

holds for some functions δ_ij(x, e). Substituting (9.42) and (9.43) into (9.41) yields the σ_i-dynamics

\dot{\sigma}_i = -\gamma_i \varphi_i(x)\varphi_i(x)^T\sigma_i + \gamma_i \sum_{j=1}^{n} e_j \delta_{ij}(x, e)\,\varphi_i(x)^T\sigma_i - \frac{\dot{r}_i}{r_i}\sigma_i.   (9.44)

Furthermore, from (9.37) and (9.39), the dynamics of e_i = x̂_i − x_i are given by

\dot{e}_i = -k_i(x, r, e)\,e_i + r_i \varphi_i(x)^T\sigma_i.   (9.45)

The system consisting of (9.44) and (9.45) has an equilibrium at zero, which can be rendered globally uniformly stable by selecting the dynamics of the scaling factors r_i and the functions k_i(x, r, e) as defined in the following lemma [104].

Lemma 9.3. Consider the system (9.37) and let

\dot{r}_i = c_i r_i \sum_{j=1}^{n} e_j^2\,|\delta_{ij}(x, e)|^2, \qquad r_i(0) = 1,   (9.46)

with c_i ≥ γ_i n/2, where |·| denotes the 2-norm, and

k_i(x, r, e) = \lambda_i r_i^2 + \epsilon \sum_{j=1}^{n} c_j r_j^2\,|\delta_{ji}(x, e)|^2,   (9.47)

where λ_i > 0 and ε > 0 are constants. Then the system consisting of (9.44), (9.45) and (9.46) has a globally uniformly stable manifold of equilibria defined by M = {(σ, r, e) | σ = e = 0}. Moreover, σ_i(t) ∈ L_∞, r_i(t) ∈ L_∞, e_i(t) ∈ L_2 ∩ L_∞ and φ_i(x(t))^T σ_i(t) ∈ L_2 for all i = 1, ..., n. If, in addition, φ_i(x(t)) and its time derivative are bounded, it follows that lim_{t→∞} φ_i(x(t))^T σ_i = 0.

Proof: Consider the Lyapunov function V_i(\sigma_i) = \frac{1}{2\gamma_i}\sigma_i^T\sigma_i. Taking the time derivative of V_i along the trajectories of (9.44) results in

\dot{V}_i = -(\varphi_i^T\sigma_i)^2 + \sum_{j=1}^{n} e_j\,\sigma_i^T\delta_{ij}\,\varphi_i^T\sigma_i - \frac{\dot{r}_i}{\gamma_i r_i}|\sigma_i|^2
          \le -\frac{1}{2}(\varphi_i^T\sigma_i)^2 + \frac{n}{2}\sum_{j=1}^{n} e_j^2 (\delta_{ij}^T\sigma_i)^2 - \frac{\dot{r}_i}{\gamma_i r_i}|\sigma_i|^2.

Substituting the dynamic scaling term ṙ_i given by (9.46), applying the inequality |δ_ij^T σ_i| ≤ |δ_ij||σ_i| and using c_i ≥ γ_i n/2, the remaining indefinite term can be canceled such that

\dot{V}_i \le -\frac{1}{2}(\varphi_i^T\sigma_i)^2 \le 0.

Hence, the system (9.44) has a globally uniformly stable equilibrium at the origin, σ_i(t) ∈ L_∞ and φ_i(x(t))^T σ_i(t) ∈ L_2 for all i = 1, ..., n. If φ_i(x(t)) and its time derivative are bounded, it follows from Barbalat's lemma that lim_{t→∞} φ_i(x(t))^T σ_i = 0. This implies that an asymptotic estimate of each parametric uncertainty term φ_i(x)^T θ_i in (9.37) is given by the term φ_i(x)^T (θ̂_i + β_i(x_i, x̂)).

The next design step is to select the positive functions k_i(x, r, e) in such a way that the
dynamics of e_i, given by (9.45), become globally asymptotically stable. Taking the time derivative of the augmented Lyapunov function W_i(\sigma_i, e_i) = \frac{1}{2}e_i^2 + \frac{1}{\lambda_i}V_i, with constant λ_i > 0, results in

\dot{W}_i \le -k_i e_i^2 + r_i\varphi_i^T\sigma_i e_i - \frac{1}{2\lambda_i}(\varphi_i^T\sigma_i)^2
          = -k_i e_i^2 + \frac{\lambda_i}{2}r_i^2 e_i^2 - \Big( \sqrt{\tfrac{\lambda_i}{2}}\,r_i e_i - \tfrac{1}{\sqrt{2\lambda_i}}\varphi_i^T\sigma_i \Big)^2
          \le -\Big( k_i - \frac{\lambda_i}{2}r_i^2 \Big) e_i^2.

It is clear that selecting k_i(x, r, e) > \frac{\lambda_i}{2}r_i^2 renders the above expression negative definite in e_i; thus the equilibrium (σ_i, e_i) = (0, 0) is globally uniformly stable and e_i(t) ∈ L_2 ∩ L_∞. A final design step has to be made to ensure that the dynamic scalings r_i remain bounded. Consider the Lyapunov function V_e(e, \sigma, r) = \sum_{i=1}^{n}\big( W_i(\sigma_i, e_i) + \frac{\epsilon}{2}r_i^2 \big) with ε > 0, for which the time derivative satisfies

\dot{V}_e \le -\sum_{i=1}^{n}\Big( k_i(x, r, e) - \frac{\lambda_i}{2}r_i^2 \Big) e_i^2 + \epsilon\sum_{i=1}^{n} c_i r_i^2 \sum_{j=1}^{n} e_j^2\,|\delta_{ij}(x, e)|^2.

Selecting k_i(x, r, e) as given by (9.47) to cancel the indefinite terms ensures \dot{V}_e \le -\sum_{i=1}^{n}\frac{\lambda_i}{2}r_i^2 e_i^2, which proves that r_i(t) ∈ L_∞ and lim_{t→∞} e_i(t) = 0. The functions k_i(x, r, e) contain a nonlinear damping term to achieve boundedness of r_i, but the constant ε multiplying the damping term can be chosen arbitrarily small.

This completes the design of the estimator, which consists of the output filters (9.39), the update laws (9.40) and the dynamic scalings (9.46). Note that the estimator, in general, employs overparametrization, which is not necessarily disadvantageous from a performance point of view. However, in a numerical implementation it can lead to a higher computational load. The total order of the estimator is \sum_{i=1}^{n} p_i + 2n.
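A minimal sketch of the dynamically scaled estimator (9.38)-(9.47) is given below for a hypothetical two-state system in which each regressor depends on the other state, so that the δ_ij functions are nonzero and the scaling is actually exercised; all numerical values (gains, true parameters, the stabilizing input) are arbitrary choices for illustration only and are not from the thesis.

# Minimal sketch: dynamically scaled I&I estimator for
#     x1_dot = x2 + theta1 * x2,   x2_dot = u + theta2 * x1,
# i.e. phi1(x) = x2 and phi2(x) = x1, so delta_12 = delta_21 = -1 from (9.43).
import numpy as np

theta1, theta2 = 0.3, 0.5          # true parameters, unknown to the estimator
g1, g2 = 2.0, 2.0                  # adaptation gains gamma_i
c1, c2 = g1, g2                    # c_i >= gamma_i * n / 2 with n = 2
lam, eps = 0.5, 0.05               # lambda_i and epsilon of (9.47)
dt, T = 1e-3, 40.0

x = np.array([1.0, 0.0])           # plant state
xh = x.copy()                      # filtered states x_hat, eq. (9.39)
th = np.array([0.0, 0.0])          # estimator states theta_hat_i
r = np.array([1.0, 1.0])           # dynamic scalings, r_i(0) = 1

for k in range(int(T / dt)):
    t = k * dt
    u = -2.0 * x[0] - 3.0 * x[1] + np.sin(0.5 * t)  # any input keeping x bounded/exciting
    e = xh - x
    beta = np.array([g1 * x[0] * xh[1],             # (9.42): gamma1 * int_0^x1 xh2 ds
                     g2 * xh[0] * x[1]])            #         gamma2 * int_0^x2 xh1 ds
    phi = np.array([x[1], x[0]])
    k_gain = np.array([lam * r[0]**2 + eps * c2 * r[1]**2,   # (9.47), |delta_21| = 1
                       lam * r[1]**2 + eps * c1 * r[0]**2])  #         |delta_12| = 1
    # output filters (9.39), with x3 = u
    xh_dot = np.array([x[1] + phi[0] * (th[0] + beta[0]) - k_gain[0] * e[0],
                       u    + phi[1] * (th[1] + beta[1]) - k_gain[1] * e[1]])
    # update laws (9.40): d(beta_i)/dx_i and d(beta_i)/dxh_j terms written out
    th_dot = np.array([-g1 * xh[1] * (x[1] + phi[0] * (th[0] + beta[0])) - g1 * x[0] * xh_dot[1],
                       -g2 * xh[0] * (u    + phi[1] * (th[1] + beta[1])) - g2 * x[1] * xh_dot[0]])
    r_dot = np.array([c1 * r[0] * e[1]**2,          # (9.46) with delta_12 = -1
                      c2 * r[1] * e[0]**2])         #        and delta_21 = -1
    x_dot = np.array([x[1] + theta1 * x[1], u + theta2 * x[0]])
    x, xh, th, r = x + dt * x_dot, xh + dt * xh_dot, th + dt * th_dot, r + dt * r_dot

print("theta_hat + beta :", np.round(th + np.array([g1 * x[0] * xh[1], g2 * xh[0] * x[1]]), 3))
print("true parameters  :", np.array([theta1, theta2]))

Because convergence is only guaranteed for the reconstructed terms φ_i^T(θ̂_i + β_i), the printed estimates should approach the true parameters only as long as the state remains exciting; with the sinusoidal input above this is the case.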

9.4.2 Command Filtered Control Law Design

In this section the command filtered backstepping approach is used to close the loop and complete the adaptive control design. The procedure starts by defining the tracking errors as

z_i = x_i - x_{i,r}, \qquad i = 1, ..., n,   (9.48)

where x_{i,r} are the intermediate control laws to be designed. The modified tracking errors are defined as

\bar{z}_i = z_i - \chi_i,   (9.49)
with the signals χ_i to be defined. The dynamics of z̄_i can be written as

\dot{\bar{z}}_i = z_{i+1} + x_{i+1,r} + \varphi_i^T\theta_i - \dot{\chi}_i, \qquad i = 1, ..., n-1,
\dot{\bar{z}}_n = u + \varphi_n^T\theta_n - \dot{x}_{n,r} - \dot{\chi}_n.   (9.50)

The idea is now to design a control law that renders the closed-loop system L_2 stable from the 'perturbation' inputs φ_i^T σ_i to the output z̄_1 and keeps all signals bounded. To stabilize (9.50) the following desired (intermediate) controls are proposed:

x^0_{i+1,r} = -\kappa_i - \bar{z}_{i-1} - \varphi_i^T\big( \hat{\theta}_i + \beta_i \big) - \chi_{i+1}, \qquad i = 1, ..., n-1,
u^0 = -\kappa_n - \bar{z}_{n-1} - \varphi_n^T\big( \hat{\theta}_n + \beta_n \big) + \dot{x}_{n,r},   (9.51)

with the stabilizing functions κ_i given as

\kappa_i = \bar{c}_i z_i + \frac{\mu r_i^2}{2}\bar{z}_i + \bar{k}_i \bar{\lambda}_i,

for i = 1, ..., n, where c̄_i > 0, μ > 0 and k̄_i ≥ 0 are constants. The integral terms are defined as

\bar{\lambda}_i = \int_0^t \bar{z}_i(\tau)\,d\tau.

The desired (intermediate) control laws (9.51) are fed through second order low pass filters to produce the actual intermediate controls x_{i+1,r}, u and their derivatives. The effect that the use of these filters has on the tracking errors can be captured with the stable linear filters

\dot{\chi}_i = -\bar{c}_i \chi_i + \big( x_{i+1,r} - x^0_{i+1,r} \big), \qquad i = 1, ..., n-1,
\dot{\chi}_n = -\bar{c}_n \chi_n + \big( u - u^0 \big).   (9.52)

The stability properties of the adaptive control framework based on this command filtered backstepping controller in combination with the I&I based estimator design of Lemma 9.3 can be proved using the control Lyapunov function

V_c(\bar{z}, \sigma) = \sum_{i=1}^{n}\Big( \bar{z}_i^2 + \frac{\bar{k}_i}{2}\bar{\lambda}_i^2 + \sigma_i^T\sigma_i \Big).   (9.53)

Taking the time derivative of V_c, substituting (9.50)-(9.52), and following some of the steps used in the proof of Lemma 9.3 (completing squares with the constant μ > 0) results in

\dot{V}_c \le -2\sum_{i=1}^{n}\bar{c}_i\bar{z}_i^2 - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{1}{2} - \frac{1}{\mu} \Big)(\varphi_i^T\sigma_i)^2.

It can be concluded that, if γ_i ≥ (μ + 2)/(4μ), the closed-loop system consisting of (9.37), (9.51) and the I&I based estimator of the previous section, which consists of the output filters (9.39), the update laws (9.40) and the dynamic scalings (9.46), has a globally stable equilibrium. Furthermore, by Theorem 3.7, lim_{t→∞} z̄_i = 0 and lim_{t→∞} φ_i^T σ_i = 0 (if φ_i and its time derivative are bounded). When the command filters are properly designed, i.e. with sufficiently high bandwidths, and no rate or magnitude limits are in effect, the modified tracking errors z̄_i will remain in a close neighborhood of the real tracking errors z_i. This concludes the discussion on the modular I&I based adaptive backstepping control design.
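The second order command filters and the first order compensation filters (9.52) are simple to realize in discrete time. The sketch below is an illustration only: the natural frequency, damping, limits and gain are assumed values, and the rate limit is applied in a simplified way by clipping the filter rate state. It produces a magnitude- and rate-limited command with its derivative and accumulates the corresponding χ state.

import numpy as np

# Minimal sketch (assumed parameters): a second order command filter producing
# (x_r, x_r_dot) from a raw desired command x0_r, plus the chi-filter of (9.52).
class CommandFilter:
    def __init__(self, wn=20.0, zeta=0.8, mag_limit=np.inf, rate_limit=np.inf):
        self.wn, self.zeta = wn, zeta
        self.mag_limit, self.rate_limit = mag_limit, rate_limit
        self.x_r, self.x_r_dot = 0.0, 0.0

    def step(self, x0_r, dt):
        # saturate the commanded magnitude, then drive the filter toward it
        cmd = np.clip(x0_r, -self.mag_limit, self.mag_limit)
        acc = self.wn**2 * (cmd - self.x_r) - 2.0 * self.zeta * self.wn * self.x_r_dot
        self.x_r_dot = np.clip(self.x_r_dot + dt * acc, -self.rate_limit, self.rate_limit)
        self.x_r += dt * self.x_r_dot
        return self.x_r, self.x_r_dot

class ChiFilter:                       # (9.52): chi_dot = -c_bar*chi + (x_r - x0_r)
    def __init__(self, c_bar=5.0):
        self.c_bar, self.chi = c_bar, 0.0

    def step(self, x_r, x0_r, dt):
        self.chi += dt * (-self.c_bar * self.chi + (x_r - x0_r))
        return self.chi

cf, chif = CommandFilter(rate_limit=2.0), ChiFilter()
dt = 1e-3
for _ in range(2000):
    x0_r = 1.0                         # raw (unfiltered) desired virtual control
    x_r, x_r_dot = cf.step(x0_r, dt)
    chi = chif.step(x_r, x0_r, dt)
print(f"x_r = {x_r:.3f}, x_r_dot = {x_r_dot:.3f}, chi = {chi:.4f}")

While the rate limit is active the χ state absorbs the difference between the raw and the realizable command, which is exactly what keeps the compensated tracking errors, and hence the update laws, from 'unlearning'.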

9.5 Adaptive Flight Control Example In this section the approach discussed in Section 9.4 is used to construct a nonlinear adaptive flight control law for the simplified aircraft model of Chapter 6 with the equations of motion given by (6.33). It will be demonstrated that the I&I estimator design with dynamic scalings can be applied directly to a multivariable system. The control objective is to track smooth reference signals with φ, α and β. It is assumed that all stability and control derivatives are unknown. A scheme of the proposed modular adaptive flight controller is depicted in Figure 9.2. Before the adaptive control design procedure begins, the aircraft dynamic model (6.33) is rewritten in a more general form. Define the states x_1 = φ, x_2 = α, x_3 = β, x_4 = θ, x_5 = p, x_6 = q, x_7 = r and the control inputs u = (δ_el, δ_er, δ_al, δ_ar, δ_lef, δ_tef, δ_r)^T; then the system (6.33) can be rewritten as

\dot{x}_i = f_i(x) + \varphi_i(x, u)^T \theta_i, \qquad i = 1, ..., 7,   (9.54)

Figure 9.2: Modular adaptive I&I backstepping control framework (block diagram: pilot commands, prefilters, backstepping control law with onboard model, control allocation, sensor processing, and a nonlinear adaptive estimator consisting of the parameter update laws, output filters and dynamic scaling).

with f_i(x) the known functions, the unknown parameter vectors

\theta_1 = 0, \qquad \theta_2 = z_\alpha, \qquad \theta_3 = y_\beta, \qquad \theta_4 = 0,
\theta_5 = \big( l_p, l_q, l_r, l_{\beta\alpha}, l_{r\alpha}, l_0, l_{\delta_{el}}, l_{\delta_{er}}, l_{\delta_{al}}, l_{\delta_{ar}}, l_{\delta_r} \big)^T,
\theta_6 = \big( m_\alpha, m_q, m_{\dot{\alpha}}, m_0, m_{\delta_{el}}, m_{\delta_{er}}, m_{\delta_{al}}, m_{\delta_{ar}}, m_{\delta_{lef}}, m_{\delta_{tef}}, m_{\delta_r} \big)^T,
\theta_7 = \big( n_\beta, n_p, n_q, n_r, n_{p\alpha}, n_0, n_{\delta_{el}}, n_{\delta_{er}}, n_{\delta_{al}}, n_{\delta_{ar}}, n_{\delta_r} \big)^T,

and the regressors

\varphi_1(x, u) = 0, \qquad \varphi_2(x, u) = x_2 - \alpha_0, \qquad \varphi_3(x, u) = x_3, \qquad \varphi_4(x, u) = 0,
\varphi_5(x, u) = \big( x_5, x_6, x_7, x_3(x_2 - \alpha_0), x_7(x_2 - \alpha_0), 1, u_1, u_2, u_3, u_4, u_7 \big)^T,
\varphi_6(x, u) = \big( x_2 - \alpha_0, x_6, x_5 x_3 + \tfrac{g_0}{V}(\cos x_4 \cos x_1 - \cos\theta_0), 1, u_1, ..., u_7 \big)^T,
\varphi_7(x, u) = \big( x_3, x_5, x_6, x_7, x_5(x_2 - \alpha_0), 1, u_1, u_2, u_3, u_4, u_7 \big)^T.

9.5.1 Adaptive Control Design The design of the command filtered backstepping feedback control design is identical to the static backstepping part of the flight control design of Chapter 6. Note that the nonlinear damping terms are not needed for the combination with an I&I based estimator to guarantee stability, but they are kept in for the sake of comparison. The I&I estimator design of Section 9.4.1 can be applied directly to the rewritten aircraft equations of motion (9.54). Following the estimator design procedure of Section 9.4.1, the scaled estimation errors

184

9.5

IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING

are defined as σi =

θˆi − θi + βi (xi , x ˆ) , ri

i = 2, 3, 5, 6, 7.

(9.55)

Let the output errors be given by ei = x ˆi − xi , then the output filters are defined as x ˆ˙ i

  = fi + ϕTi θˆi + βi − ki ei ,

i = 2, 3, 5, 6, 7.

Note that no output filters are needed for x1 - and x4 -dynamics, since they contain no uncertainties. The estimator dynamics are given by 7 7 X X ∂βi ˙ ∂βi ∂βi ˙ (ˆ xi + ki ei ) − x ˆj − u˙ k , θˆi = − ∂xi ∂x ˆj ∂uk j=1 k=1

where the functions βi (xi , xˆ) are obtained from (9.42), i.e. β2

=

β5

=

β6

=

β7

=



 1 2 1 γ2 x2 − α0 x2 , β3 = γ3 x23 , 2 2  T 1 γ5 x5 xˆ3 , x ˆ6 , x ˆ7 , xˆ3 (ˆ x2 − α0 ), x ˆ7 (ˆ x2 − α0 ), x5 , 1, u1 , u2 , u3 , u4 , u7 , 2  T 1 g0 γ6 x6 xˆ2 − α0 , x6 , −ˆ x5 x ˆ3 + (cos x4 cos x1 − cos θ0 ) , 1, u1 , ..., u7 , 2 V  T 1 γ7 x7 xˆ3 , x7 , x ˆ5 , x ˆ5 (ˆ x2 − α0 ), xˆ6 , 1, u1 , u2 , u3 , u4 , u7 , 2

with γi > 0. Note that the derivative of the control vector is required in the estimator design. This derivative is obtained directly from the command filters used in the last step of the static backstepping control design. Taking the time derivative of the functions β results in β˙ 2 β˙ 5

= γ2 ϕ2 ,

β˙ 3 = γ3 ϕ3 , T

= γ5 ϕ3 − γ5 e2 (0, 0, 0, 0, −x3, −x7 , 0, 0, 0, 0, 0, 0)

T

− γ5 e3 (0, −1, 0, 0, α0 − x2 − e2 , 0, 0, 0, 0, 0, 0, 0) T

− γ5 e6 (0, 0, −1, 0, 0, 0, 0, 0, 0, 0, 0, 0)

T

− γ5 e7 (0, 0, 0, −1, 0, α0 − x2 − e2 , 0, 0, 0, 0, 0, 0) + = γ5 ϕ5 − γ5 e2 δ52 − γ5 e3 δ53 − γ5 e6 δ56 − γ5 e7 δ57 +

7 X ∂β5 u˙ k ∂uk

k=1 7 X k=1

∂β5 u˙ k , ∂uk

9.5

ADAPTIVE FLIGHT CONTROL EXAMPLE

β˙ 6

β˙ 7

=

T

γ6 ϕ6 − γ6 e2 (0, −1, 0, 0, 0, 0, 0, 0, 0, 0, 0) T



γ6 e3 (0, 0, 0, x5 + e5 , 0, 0, 0, 0, 0, 0, 0)



γ6 e5 (0, 0, 0, x3 , 0, 0, 0, 0, 0, 0, 0) +

=

γ6 ϕ6 − γ6 e2 δ62 − γ6 e3 δ63 − γ6 e5 δ65 +

=

T

7 X ∂β6 u˙ k ∂uk k=1

7 X ∂β6 u˙ k , ∂uk k=1 T

γ7 ϕ7 − γ7 e2 (0, 0, 0, 0, −x5, 0, 0, 0, 0, 0, 0) T



γ7 e3 (0, −1, 0, 0, 0, 0, 0, 0, 0, 0, 0)



γ7 e6 (0, 0, 0, 0, 0, −1, 0, 0, 0, 0, 0) +

=

γ7 ϕ7 − γ7 e2 δ72 − γ7 e3 δ73 − γ7 e5 δ75 − γ7 e6 δ76 +



185

T

γ7 e5 (0, 0, 0, −1, α0 − x2 − e2 , 0, 0, 0, 0, 0, 0) T

7 X ∂β7 u˙ k ∂uk

k=1

7 X ∂β7 u˙ k , ∂uk

k=1

where the bracketed terms correspond to the functions δij (x, e) of (9.43). Finally, from (9.46) and (9.47) the dynamic scaling parameters ri and the gains ki are given by 7 X 5 r˙i = γi ri e2j |δij (x, e)|2 2 j=1

and ki (x, r, e) = λi ri2 + ǫ

7 X j=1

cj rj2 |δji (x, e)|2 ,

with λi > 0, ǫ > 0 and ri (0) = 1. This completes the nonlinear estimator design for the over-actuated aircraft model. The tracking performance and parameter estimation capabilities of the adaptive controller resulting from combining this nonlinear estimator with the command filtered adaptive backstepping approach can now be evaluated in numerical simulations.
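Once the prediction errors e_j and the δ_ij(x, e) vectors are available, the scaling and gain expressions above amount to a few array operations. The sketch below is purely illustrative: the e and |δ_ij|² values are random placeholders, and only the update of r_i and the evaluation of k_i are shown.

import numpy as np

# Minimal sketch (illustrative): one step of the dynamic scalings r_i and the gains k_i
# for the aircraft estimator, with placeholder prediction errors and |delta_ij|^2 values.
n = 7
gamma = np.full(n, 10.0)
c = 2.5 * gamma                       # c_i = (5/2) * gamma_i as used in this design
lam, eps, dt = 0.01, 0.01, 0.01

r = np.ones(n)                        # r_i(0) = 1
e = 0.01 * np.random.randn(n)         # placeholder prediction errors x_hat - x
delta_sq = np.abs(np.random.randn(n, n))   # placeholder |delta_ij|^2 values
np.fill_diagonal(delta_sq, 0.0)            # delta_ii = 0 by definition

r_dot = c * r * (delta_sq @ (e**2))                 # r_i_dot = c_i r_i sum_j e_j^2 |delta_ij|^2
k = lam * r**2 + eps * (delta_sq.T @ (c * r**2))    # k_i = lam r_i^2 + eps sum_j c_j r_j^2 |delta_ji|^2
r = r + dt * r_dot

print("k gains:", np.round(k, 4))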

9.5.2 Numerical Simulation Results This section presents the simulation results from the application of the adaptive flight controller developed in the previous section to the over-actuated fighter aircraft model of Section 6.3, implemented in the MATLAB/Simulink environment. The simulations are performed at two flight conditions for which the aerodynamic data can be found in Tables 6.1 and 6.2. The command filtered, static backstepping controller is tuned in a trial-and-error procedure on the nominal aircraft model. The final control and nonlinear damping gains were
chosen identical to the ones used for the adaptive controllers in Chapter 6. Tuning of the I&I estimator is relatively straightforward, since increasing the adaptation gains γ_i not only increases the adaptation rate but also improves the closed-loop performance. This is in contrast with the integrated adaptive backstepping approach used in Chapter 6, where increasing the adaptation gains can lead to a worsened transient performance. The influence of the other estimator parameters, λ_i and ε, on the tracking performance is very limited; simply selecting them within the bounds defined in Section 9.4 is enough to guarantee convergence of the filtered states to the true states and boundedness of the dynamic scaling parameters. The final gain and parameter selection is: γ_i = 10, λ_i = 0.01, i = 1, ..., 7 and ε = 0.01.

Simulation with Left Aileron Runaway In this first simulation a mixed maneuver involving a series of angle of attack and roll angle doublets is considered. The aircraft model starts at flight condition 2 in trimmed horizontal flight; after 1 second of simulation time the left aileron suffers a hard-over failure and moves to its limit of 45 degrees. This failure results in a large additional rolling moment and minor additional pitching and yawing moments. Note that the adaptive controller does not use any sensor measurements of the control surface position or any other form of fault detection. The results of this simulation can be found in Figure D.20 of Appendix D. Note that this maneuver is identical to scenario 3 of Chapter 6, which means the results in Figure D.20 can be compared directly with the plots of Figures D.1 and D.2. The adaptive controller manages to rapidly return the states to their reference values after the failure. Of course, the coupling between longitudinal and lateral motion is more prominent in the response after the failure. It can be seen in Figure D.20(c) that the total moment coefficients post-failure are estimated rapidly and accurately. However, the individual parameters have not converged to their true values, since this maneuver alone does not provide the estimator with enough information. In Figure D.20(d) some additional signals of the I&I estimator are plotted, i.e. the dynamic scaling parameters r_*, the output filter states x̂ and the prediction errors e_*. All signals behave as they should: the dynamic scalings converge to constant values, the filter states follow the aircraft model states and the prediction errors converge to zero. In the control surface plots it can be seen that most of the additional moment is compensated by the right aileron and the left elevator. The simple pseudo-inverse control allocation scheme does not give any preference to certain control surfaces or axes. The tracking performance and parameter convergence of the adaptive controller are very good for this failure case.

Simulation with Left Elevator Runaway The second simulation is again a mixed maneuver involving a series of angle of attack and roll angle doublets at flight condition 2. The aircraft model starts in straight, horizontal flight; after 1 second of simulation time the left elevator suffers a hard-over
failure and moves to its limit of 10.5 degrees. The simulation results of this maneuver can be found in Figure D.21, which is again divided into four subplots. The results of this same simulation scenario for the adaptive controllers of Chapter 6 can be found in Figures D.3 and D.4. Note, however, that a more sophisticated control allocation approach was used there. The results demonstrate again that the adaptive controller performs excellently. The total moment coefficients are rapidly found by the estimator and tracking performance is excellent. However, the individual components of the parameter estimate vectors do not converge to their true values. It is interesting to note that the new adaptive design manages to recover good performance without saturating any of the other control surfaces for this failure scenario, unlike the adaptive flight controllers of Chapter 6.

9.6 F-16 Stability and Control Augmentation Design The next step is to apply the I&I adaptive backstepping approach to the problem of designing a SCAS for the high-fidelity F-16 model of Chapter 2 and to compare its performance with the integrated and modular SCAS designs of the previous chapter. First, the I&I estimator for the dynamic F-16 model uncertainties is derived; after that, the simulation scenarios of the previous chapter are performed once again for the adaptive backstepping flight controller with the new estimator.

9.6.1 Adaptive Control Design The static nonlinear backstepping SCAS design has already been discussed in Section 8.2. This flight controller will be used again, but the tracking error driven adaptation process is replaced by an I&I based estimator. The I&I estimator with dynamic scaling of Section 9.4 can be applied directly to the F-16 model if the multiple model approach with B-spline networks is once again selected to simplify the approximation process. The size and number of the networks are selected identical to the ones used for the adaptive backstepping flight control laws of Chapter 8. Before the design of the estimator starts, the relevant equations of motion are written in the more general form

\dot{x}_i = f_i(x, u) + \varphi_i(x, u)^T \theta_i, \qquad i = 1, ..., 6,   (9.56)

with the states x_1 = V_T, x_2 = α, x_3 = β, x_4 = p, x_5 = q, x_6 = r and the inputs u_1 = δ_e, u_2 = δ_a, u_3 = δ_r. Here f_i(x) represent the known parts of the F-16 model dynamics, given by

f_1(x) = \frac{1}{m}\big[ D_0 + F_T \cos x_2 \cos x_3 + m g_1 \big],
f_2(x) = x_5 - (x_4 \cos x_2 + x_6 \sin x_2)\tan x_3 + \frac{-L_0 - F_T \sin x_2 + m g_3}{m x_1 \cos x_3},
f_3(x) = -(x_6 \cos x_2 - x_4 \sin x_2) + \frac{Y_0 - F_T \cos x_2 \sin x_3 + m g_2}{m x_1},
f_4(x) = (c_1 x_6 + c_2 x_4)\,x_5 + c_3 \bar{L}_0 + c_4\big( \bar{N}_0 + H_{eng} x_5 \big),
f_5(x) = c_5 x_4 x_6 - c_6\big( x_4^2 - x_6^2 \big) + c_7\big( \bar{M}_0 - H_{eng} x_6 \big),
f_6(x) = (c_8 x_4 - c_2 x_6)\,x_5 + c_4 \bar{L}_0 + c_9\big( \bar{N}_0 + H_{eng} x_5 \big),

where L_0, Y_0, D_0, L̄_0, M̄_0 and N̄_0 are the known, nominal values of the aerodynamic forces and moments. The second term of (9.56) describes the uncertainties in the aircraft model. As an example, the approximation of the uncertainty in the total drag is given as

\varphi_1(x, u)^T \theta_1 = \frac{\bar{q}S}{m}\Big( \hat{C}_{D_0}(x_2, x_3) + \hat{C}_{D_q}(x_2)\frac{x_5\bar{c}}{2x_1} + \hat{C}_{D_{\delta_e}}(x_2, u_1)\,u_1 \Big)
 = \frac{\bar{q}S}{m}\Big( \varphi_{C_{D_0}}^T(x_2, x_3),\; \varphi_{C_{D_q}}^T(x_2)\frac{x_5\bar{c}}{2x_1},\; \varphi_{C_{D_{\delta_e}}}^T(x_2, u_1)\,u_1 \Big)\begin{pmatrix} \theta_{C_{D_0}} \\ \theta_{C_{D_q}} \\ \theta_{C_{D_{\delta_e}}} \end{pmatrix},

where φ^T_{C_{D*}}(·) are vectors containing the third-order B-spline basis functions that form a first or second order B-spline network, and θ_{C_{D*}} are vectors containing the unknown parameters, i.e. the B-spline network weights, that will have to be estimated online. The other approximation terms are defined as

\varphi_2(x, u)^T\theta_2 = \frac{\bar{q}S}{m x_1 \cos x_3}\Big( \hat{C}_{L_0}(x_2, x_3) + \hat{C}_{L_q}(x_2)\frac{x_5\bar{c}}{2x_1} + \hat{C}_{L_{\delta_e}}(x_2, u_1)\,u_1 \Big),
\varphi_3(x, u)^T\theta_3 = \frac{\bar{q}S}{m x_1}\Big( \hat{C}_{Y_0}(x_2, x_3) + \hat{C}_{Y_p}(x_2)\frac{x_4 b}{2x_1} + \hat{C}_{Y_r}(x_2)\frac{x_6 b}{2x_1} + \hat{C}_{Y_{\delta_a}}(x_2, x_3)\,u_2 + \hat{C}_{Y_{\delta_r}}(x_2, x_3)\,u_3 \Big),
\varphi_4(x, u)^T\theta_4 = c_3\bar{q}Sb\Big( \hat{C}_{\bar{L}_0}(x_2, x_3) + \hat{C}_{\bar{L}_p}(x_2)\frac{x_4 b}{2x_1} + \hat{C}_{\bar{L}_r}(x_2)\frac{x_6 b}{2x_1} + \hat{C}_{\bar{L}_{\delta_a}}(x_2, x_3)\,u_2 + \hat{C}_{\bar{L}_{\delta_r}}(x_2, x_3)\,u_3 \Big),
\varphi_5(x, u)^T\theta_5 = c_7\bar{q}S\bar{c}\Big( \hat{C}_{\bar{M}_0}(x_2, x_3) + \hat{C}_{\bar{M}_q}(x_2)\frac{x_5\bar{c}}{2x_1} + \hat{C}_{\bar{M}_{\delta_e}}(x_2, u_1)\,u_1 \Big),
\varphi_6(x, u)^T\theta_6 = c_9\bar{q}Sb\Big( \hat{C}_{\bar{N}_0}(x_2, x_3) + \hat{C}_{\bar{N}_p}(x_2)\frac{x_4 b}{2x_1} + \hat{C}_{\bar{N}_r}(x_2)\frac{x_6 b}{2x_1} + \hat{C}_{\bar{N}_{\delta_a}}(x_2, x_3)\,u_2 + \hat{C}_{\bar{N}_{\delta_r}}(x_2, x_3)\,u_3 \Big),

where all the coefficients are again approximated with B-spline networks. Note that, to avoid overparametrization, the roll and yaw moment error approximators are not designed to estimate the real errors, but rather pseudo-estimates. It is possible to estimate the real errors, but this would result in additional update laws and thus increase the dynamic order of the adaptation process. Now that the system is rewritten in the standard form, the I&I estimator design of Section
9.4 can be followed directly. The scaled estimation errors are again defined as

\sigma_i = \frac{\hat{\theta}_i - \theta_i + \beta_i(x_i, \hat{x})}{r_i}, \qquad i = 1, ..., 6.   (9.57)

In Section 9.4, the functions β_i in the above expression were selected as

\beta_i(x_i, \hat{x}) = \gamma_i \int_0^{x_i} \varphi_i(\hat{x}_1, ..., \hat{x}_{i-1}, \sigma, \hat{x}_{i+1}, ..., \hat{x}_n)\,d\sigma,   (9.58)

where γ_i are the adaptation gains. The analytic calculation of β_i(x_i, x̂) for the F-16 model is relatively time-consuming, since the regressors φ_* are quite large and contain the B-spline basis functions. Furthermore, the expression

\sum_{j=1}^{n} e_j \delta_{ij}(x, e) = \varphi_i(x) - \varphi_i(\hat{x}_1, ..., \hat{x}_{i-1}, x_i, \hat{x}_{i+1}, ..., \hat{x}_n), \qquad \delta_{ii} \equiv 0,   (9.59)

has to be solved for the functions δ_ij(x, e). This is an even more tedious process due to the B-spline basis functions, but it is still possible to solve the above expression. This concludes the discussion on the I&I estimator design for the high-fidelity F-16 dynamic model.
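The locality of the B-spline regressors is what keeps the estimator affordable: at any operating point only a few basis functions, and hence only a few weights, are nonzero. The sketch below (an illustration; the knot vector, degree convention and evaluation point are assumptions, not the thesis implementation) evaluates a univariate B-spline basis vector of polynomial degree two with the Cox-de Boor recursion, the kind of vector that appears inside φ_{CD0} and the other regressors.

import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of the given polynomial degree at the
    scalar x using the Cox-de Boor recursion; returns len(knots) - degree - 1 values."""
    m = len(knots) - 1
    # degree-0 basis: indicators of the knot spans (right-continuous at the last knot)
    B = np.array([1.0 if (knots[i] <= x < knots[i + 1]) else 0.0 for i in range(m)])
    if x >= knots[-1]:
        B[-1] = 1.0
    for d in range(1, degree + 1):
        Bn = np.zeros(m - d)
        for i in range(m - d):
            left = right = 0.0
            if knots[i + d] > knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            if knots[i + d + 1] > knots[i + 1]:
                right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
            Bn[i] = left + right
        B = Bn
    return B

# Example: degree-2 basis ("third-order" in the order = degree + 1 convention) over an
# angle-of-attack grid; only degree + 1 = 3 entries are nonzero at any alpha, so only a
# few B-spline weights (local model parameters) need to be updated at each time step.
alpha_knots = np.concatenate(([0.0, 0.0], np.linspace(0.0, 0.8, 9), [0.8, 0.8]))
phi_alpha = bspline_basis(0.33, alpha_knots, degree=2)
print(np.round(phi_alpha, 3), "sum =", phi_alpha.sum())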

9.6.2 Numerical Simulation Results In this section the numerical simulation results are presented for the application of the flight control system with I&I based estimator, derived in the previous section, to the high-fidelity, six-degrees-of-freedom F-16 model in a number of failure scenarios and maneuvers. The scenarios are identical to the ones considered in the previous chapter, so that closed-loop responses for the new adaptive flight control design can be compared directly with the earlier results. For that same reason, the control gains and command filter parameters of the backstepping SCAS design are selected the same as in the previous chapter. The I&I estimator is tuned in a trial and error procedure, using the bounds derived in Section 9.4. Tuning is quite intuitive as expected and it is not difficult to find an adaptation gain selection that provides good results in all considered failure scenarios. The final gain and parameter selection for the estimator is: γ2 , γ5 = 0.1, γ1 , γ3 , γ4 , γ6 = 0.01, λi = 0.01, i = 1, ..., 6 and ǫ = 0.01.

Simulation Results with Cmq = 0 The first series of simulations considers again the sudden reduction of the longitudinal damping coefficient Cmq to zero. As discussed before, this is not a very critical change, but it does serve as a nice example to evaluate the ability of the adaptation scheme to accurately estimate inaccuracies in the onboard model. Figure D.22 of Appendix D.4 contains the simulation results for the I&I backstepping design starting at
flight condition 2 with the longitudinal stick commanding a series of pitch doublets; after 20 seconds of simulation the sudden change in Cmq takes place. The left hand side plots show the inputs and response of the aircraft in solid lines, while the dotted lines are the reference trajectories. Tracking performance both before and after the change in pitch damping is excellent. The time histories of the dynamic scalings and the output errors of the filters used by the I&I estimator are not shown, but the scalings all converge to constant values and the errors converge to zero as expected. The solid lines in the right hand side plots of Figure D.22 represent the changes in aerodynamic coefficients with respect to the nominal values, divided by the maximum absolute value of the real aerodynamic coefficients to normalize them. The dotted lines are the normalized true errors between the altered and nominal aircraft model. The change in Cmq is clearly visible in the plots. It can be seen that the estimator does not succeed in estimating the individual components of the pitch moment correctly. This is to be expected for such an insignificant error. The simulation results of this failure at the other flight conditions exhibit the same characteristics.

Simulation Results with Longitudinal c.g. Shifts A second series of simulations considers a much more complex failure scenario in which the longitudinal center of gravity of the aircraft model is shifted. Backward shifts in particular can be quite critical, since they are destabilizing and can even result in a loss of static stability margin. All pitching and yawing aerodynamic moment coefficients will change as a result of a longitudinal c.g. shift. For a model inversion based design the changes are far more critical, and loss of stability will often occur for destabilizing shifts without robust or adaptive compensation. Figure D.23 depicts the simulation results for the F-16 model with the I&I based backstepping controller starting at flight condition 1 with the longitudinal stick commanding a series of small amplitude pitch doublets; after 20 seconds the c.g. instantly shifts backward by 0.06c̄ and the small positive static margin is lost. As can be seen in the left hand side plots, the tracking performance of the I&I based backstepping design is very good. However, once again the right hand side plots demonstrate that the individual components are not estimated correctly. Compared to the results of Chapter 8, the tracking performance of the new adaptive design in this simulation scenario is superior to the performance of the other two adaptive designs. The integrated adaptive design of Chapter 8 is also more aggressive in its response, resulting from the non-ideal adaptation gains selected after the difficult tuning process.

Simulation Results with Aileron Lock-ups The last series of simulations considers controlling the aircraft model with right aileron lock-ups or hard-overs. At 20 seconds of simulation time the right aileron suddenly moves to a fixed offset of -21.5, -10.75, 0, 10 or 21.5 degrees. It should again be noted that the public domain F-16 model does not contain a differential elevator, hence only the rudder
and the left aileron can be used to compensate for these failures. The pilot would be able to compensate for this failure, but it would result in a very high workload. The results of a simulation performed with the I&I based controller at flight condition 4 with a right aileron lock-up at -10.75 degrees can be seen in Figure D.24. One lateral stick doublet is performed before the failure occurs, and three more are performed after 60 seconds of further simulation. The response of the I&I adaptive design resembles the response of the modular adaptive design of Chapter 8, i.e. it is much better than the response of the integrated adaptive backstepping controller. The I&I based adaptive design still achieves good tracking performance after the failure and even the sideslip angle is regulated back to zero. The additional forces and moments resulting from the failure are identified correctly, but the individual components are not.

9.7 Conclusions In this chapter, the immersion and invariance technique is combined with backstepping and the resulting adaptive control scheme is applied to the flight control problems of Chapter 6 and 8. The control scheme makes use of an invariant manifold based estimator with dynamic scalings and output filters to help guarantee attractivity of the manifold. The controller itself is based on the backstepping approach with command filters to avoid the analytic computation of the virtual control derivatives. Global asymptotic stability of the closed-loop system and parameter convergence of the complete adaptive controller can be proved with a single Lyapunov function. The controllers have been evaluated in numerical simulations and the results have been compared with the integrated and modular adaptive designs of Chapters 6 and 8. Based on the simulation results several observations can be made: 1. The main advantage of the invariant manifold approach over a conventional adaptive backstepping controller with tracking error driven update laws is that it allows for prescribed stable dynamics to be assigned to the parameter estimation error. Furthermore, this approach does not suffer from undesired transient performance resulting from unexpected dynamical behavior of parameter update laws that are strongly coupled with the static feedback control part. As a result the adaptive controller is much easier to tune, since a large update gain will improve the closed-loop transient performance. Therefore, it is possible to achieve a better performance of the closed-loop system, as is demonstrated in several simulation scenarios. In fact, the closed-loop system resulting from the application of the I&I based adaptive backstepping controller can be seen as a cascade interconnection between two stable systems with prescribed asymptotic properties. 2. The new I&I based modular adaptive controller does not require nonlinear damping terms, that could potentially result in high gain feedback, to proof closed-loop stability. This is a big advantage over the modular backstepping control design with least-squares identifier. Obviously, least-squares still has the appeal that it


has the capability of automatically adjusting the adaptation gain matrix. However, this comes at the cost of a higher dynamic order of the estimator.

3. A minor disadvantage of the I&I based modular adaptive backstepping approach is that the estimator employs overparametrization, which means that, in general, the dynamic order of the estimator is higher than for an integrated adaptive backstepping controller. Hence, the computational load is also higher. However, this does not play a role in the aircraft control design problems considered in Chapters 6 and 8, although for the trajectory control problem of Chapter 7 the I&I estimator would require more states than the tracking error driven update laws of the constrained adaptive backstepping controller.

4. Another disadvantage is that the analytical derivation of the I&I estimator in combination with the B-spline networks used for the partitioning of the F-16 model is relatively time-consuming.
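As announced in the first observation above, the core idea of prescribed estimation error dynamics can be illustrated with a minimal scalar sketch. The plant, regressor and gains below are illustrative assumptions; this is not the Chapter 9 estimator, which additionally uses dynamic scalings, output filters and B-spline regressors.

```python
# Minimal scalar illustration of an I&I parameter estimator (sketch only;
# the plant, phi(x), the input and all gains are illustrative assumptions).
import numpy as np

theta = 1.5          # unknown constant parameter (used only by the simulated plant)
gamma = 5.0          # estimator gain
dt, T = 1e-3, 10.0

def phi(x):          # known regressor
    return x

def beta(x):         # chosen such that d(beta)/dx = gamma * phi(x)
    return 0.5 * gamma * x**2

x = 1.0              # plant state
theta_p = 0.0        # estimator state (the full estimate is theta_p + beta(x))
for _ in range(int(T / dt)):
    u = -2.0 * x                         # any stabilizing input; not part of the estimator
    theta_hat = theta_p + beta(x)        # full I&I estimate
    # update law chosen so that the estimation error z = theta_hat - theta
    # obeys the prescribed dynamics  dz/dt = -gamma * phi(x)**2 * z
    theta_p += dt * (-gamma * phi(x) * (theta_hat * phi(x) + u))
    x += dt * (theta * phi(x) + u)       # true plant

print(f"final estimation error: {theta_p + beta(x) - theta:.2e}")
```

The essential point is that the off-the-manifold error $z = \theta_p + \beta(x) - \theta$ obeys $\dot{z} = -\gamma\,\varphi(x)^2 z$: dynamics chosen by the designer and independent of the tracking error, so increasing $\gamma$ directly speeds up the estimation.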

Chapter 10

Conclusions and Recommendations

This thesis describes the development of adaptive flight control systems for a modern fighter aircraft. Adaptive backstepping techniques in combination with online model identification, based on multiple models connected with B-spline networks, have been used as the main design tools. Several algorithms have been considered for the online model adaptation. In this chapter the main conclusions of the research are provided. New research questions arising from these conclusions are formulated as recommendations for further research.

10.1 Conclusions

This thesis has aimed to contribute to the development of computationally efficient reconfigurable or adaptive flight control systems using nonlinear control design techniques and online model identification, all based on well founded mathematical proofs. The adaptive backstepping technique was investigated as the main design framework; this choice was based on the strong stability and convergence properties of the method as discussed in the introduction. For the online model identification a multiple model approach based on flight envelope partitioning was proposed to keep the required computational load at an acceptable level and to create a numerically stable algorithm. The considered methods have been investigated and adapted throughout the thesis to remedy their weaknesses for the considered flight control problems. Finally, numerical simulations involving a high-fidelity F-16 dynamic model with several types of uncertainties and failures have been used to validate the proposed adaptive flight control designs. The main conclusions and results of the thesis are summarized below.


Constrained Adaptive Backstepping

The standard adaptive backstepping approach has a number of shortcomings, two of the most important being its analytical complexity and its sensitivity to input saturation. The analytical complexity of the design procedure is mainly due to the calculation of the derivatives of the virtual controls at each intermediate design step. Especially for high order systems or complex multivariable systems such as aircraft dynamics, it becomes very tedious to calculate the derivatives analytically. The parameter update laws of the standard adaptive backstepping procedure are driven by the tracking errors, which makes them sensitive to input saturation. If input saturation is in effect and the desired control cannot be achieved, the tracking errors will in general become larger and no longer be the result of function approximation errors exclusively. As a result the parameter update laws may start to 'unlearn'. In Chapter 4 both shortcomings are solved by introducing command filters in the design approach. The idea is to filter the virtual controls to calculate the derivatives and at the same time enforce the input or state limits. The effect that these limits have on the tracking errors is measured using a set of first order linear filters. Compensated tracking errors, in which the effect of the limits has been removed, are defined and used to drive the parameter update laws. If there are no magnitude or rate limits in effect on the command filters and their bandwidth is selected sufficiently high, the performance of the constrained adaptive backstepping approach can be made arbitrarily close to that of the standard adaptive backstepping approach. If the limits on the command filters are in effect, the real tracking errors may increase, but the compensated tracking errors that drive the estimation process are unaffected. Hence, the dynamic update laws will not unlearn due to magnitude or rate limits on the inputs and states used for (virtual) control. An additional advantage of the command filters in the design is that the application is no longer restricted to uncertain nonlinear systems of a lower triangular form. For these reasons, the constrained adaptive backstepping approach serves as a basis for all the control designs developed in this thesis.

Inverse Optimal Adaptive Backstepping

The tuning functions and constrained adaptive backstepping designs are both focused on achieving stability and convergence rather than performance or optimality. To this end the static and dynamic parts of the adaptive backstepping controllers are designed simultaneously in a recursive manner. This way the very strong stability and convergence properties of the controllers can be proved using a single control Lyapunov function. A drawback of this design approach is that, because there is strong coupling between the static and dynamic feedback parts, it is unclear how changes in the adaptation gain affect the tracking performance. In an attempt to solve this problem, inverse optimal control theory was combined with the tuning functions backstepping approach in Chapter 5 to develop an inverse optimal adaptive backstepping control design for a general class of nonlinear systems with parametric uncertainties.


An additional advantage of a control law that is (inverse) optimal with respect to some 'meaningful' cost functional is its inherent robustness with respect to external disturbances and model uncertainties. However, nonlinear damping terms were utilized to achieve the inverse optimality, resulting in high gain feedback terms in the design. These nonlinear damping terms resulted in a very robust control design, but also in a numerically very sensitive design. The nonlinear damping terms even removed the need for parameter adaptation. Furthermore, the complexity of the cost functional associated with the inverse optimal design did not make performance tuning any more transparent. It can be concluded that the inverse optimal adaptive backstepping approach is unsuitable for the type of control problems considered in this thesis.

Integrated Versus Modular Adaptive Backstepping Flight Control

In Chapter 6 the constrained adaptive backstepping approach was applied to the design of a flight control system for a simplified, nonlinear, over-actuated fighter aircraft model valid at two flight conditions. It is demonstrated that the extension of the adaptive backstepping control method to multi-input multi-output systems is straightforward. A comparison with a more traditional modular adaptive controller that employs a least-squares identifier was made to illustrate the advantages and disadvantages of an integrated adaptive design. The modular controller employs regressor filtering and nonlinear damping terms to guarantee closed-loop stability and to robustify the design against potential faster-than-linear growth of the system nonlinearities. Furthermore, the interactions between several control allocation algorithms and the online model identification were studied in simulations with actuator failures. The results of numerical simulations demonstrated that both adaptive flight controllers provide a significant improvement over a non-adaptive NDI/backstepping design in the presence of actuator lockup failures. The success rate and performance of both adaptive designs with a simple pseudo-inverse control allocation are comparable for most failure cases. However, in combination with weighted control allocation methods the success rate and also the performance of the modular adaptive design are shown to be superior. This is mainly due to the better parameter estimates obtained by the least-squares identification method. The Lyapunov-based update laws of the constrained adaptive backstepping design, in general, do not estimate the true values of the unknown parameters. It is shown that especially the estimate of the control effectiveness of the damaged surfaces is much more accurate with the modular adaptive design. It can be concluded that the constrained adaptive backstepping approach is best used in combination with the simple pseudo-inverse control allocation to prevent unexpected results. An advantage of the constrained adaptive backstepping design is that even for this simple example the computational load is much lower, since the gradient based identifier has fewer states than the least-squares identifier and does not require any regressor filtering. Furthermore, the modular adaptive design requires regressor filtering and nonlinear damping terms to compensate for the fact that the least-squares identifier is too slow to deal with nonlinear growth, i.e. the certainty equivalence principle does not hold. The high gain associated with the nonlinear damping terms can lead to numerical instability


problems. The identifier of the constrained adaptive backstepping design is much faster and does not suffer from this problem. For these reasons the integrated adaptive backstepping approach is deemed more suitable than the modular approach for the design of a reconfigurable flight control system, and it is therefore applied to the high-fidelity F-16 model in the subsequent chapters.

Full Envelope Adaptive Backstepping Flight Control

In Chapters 7 and 8 two control design problems for the high-fidelity, subsonic F-16 dynamic model were considered: trajectory control and SCAS design. The trajectory control problem is quite challenging, since the system to be controlled has a high relative degree, resulting in a multivariable, four-loop adaptive feedback design. The SCAS design, on the other hand, can be compared directly with the baseline flight control system of the F-16. A flight envelope partitioning method is used to decompose the globally valid nonlinear aerodynamic model into multiple locally valid aerodynamic models. The Lyapunov-based update laws of the adaptive backstepping method only update a few local models at each time step, thereby keeping the computational load of the algorithm at a minimum and making real-time implementation feasible. An additional advantage of using multiple, local models is that the information of the models that are not updated at a certain time step is retained, thereby giving the approximator memory capabilities. B-spline networks are used to ensure smooth transitions between the different regions and have been selected for their excellent numerical properties. The partitioning for the F-16 has been done manually, based on earlier modeling studies and the fact that the aerodynamic data is already available in a suitable tabular form. Numerical simulation results of several maneuvers demonstrate that trajectory control can still be accomplished with the investigated uncertainties and failures, while good tracking performance is maintained. Compared to the other nonlinear adaptive trajectory control designs in the literature, such as standard adaptive backstepping or sliding mode control in combination with feedback linearization, the approach is much simpler to apply, while the online estimation process is more robust to saturation effects. Results of numerical simulations for the SCAS design demonstrate that the adaptive controller provides a significant improvement over a non-adaptive NDI design for the simulated failure cases. The adaptive design shows no degradation in performance with the added sensor dynamics and time delays. Feeding the reference signal through command filters makes it trivial to enforce desired handling qualities for the constrained adaptive backstepping controller in the nominal case. The handling qualities were verified using frequency sweeps and lower order equivalent model analysis. However, the adaptation gain tuning for the update laws of the constrained adaptive backstepping controller is a very time consuming and unintuitive process, since changing the identifier gains can result in unexpected transients in the closed-loop tracking performance. This is especially true for the SCAS design, since more aggressive maneuvering is considered. It is very difficult to find a set of gains that gives an adequate performance for all considered failure cases at the selected flight conditions. These results demonstrate that an alternative to the tracking error driven identifier has to be found when complex


flight control problems are considered.

I&I Adaptive Backstepping

Despite a number of refinements introduced in this thesis, the adaptive backstepping method with tracking error driven gradient update laws still has a major shortcoming. The estimation error is only guaranteed to be bounded and to converge to an unknown constant. However, not much can be said about its dynamical behavior, which may be unacceptable in terms of the closed-loop transient performance. Increasing the adaptation gain will not necessarily improve the response of the system, due to the strong coupling between the system and estimator dynamics. The modular adaptive backstepping designs with least-squares identifier as derived in Chapter 6 do not suffer from this problem. However, this type of design requires unwanted nonlinear damping terms to compensate for the slowness of the estimation based identifier. In Chapter 9 an alternative way of constructing a nonlinear estimator is introduced, based on the I&I approach. This approach allows prescribed stable dynamics to be assigned to the parameter estimation error. The resulting estimator is combined with the command filtered backstepping approach to form a modular adaptive control scheme. Robust nonlinear damping terms are not required in the backstepping design, since the I&I based estimator is fast enough to capture the potential faster-than-linear growth of nonlinear systems. The new modular scheme is much easier to tune than the ones resulting from the constrained adaptive backstepping approach. In fact, the closed-loop system resulting from the application of the I&I based adaptive backstepping controller can be seen as a cascaded interconnection between two stable systems with prescribed asymptotic properties. As a result, the performance of the closed-loop system with the adaptive controller can be improved significantly. The flight control problems of Chapters 6 and 8 have been tackled again using the new I&I based modular adaptive backstepping scheme. A comparison of the simulation results has demonstrated that it is indeed possible to achieve a much higher level of tracking performance with the new design technique. Moreover, the I&I based modular adaptive backstepping approach has even stronger provable stability and convergence properties than the integrated adaptive backstepping approaches discussed in this thesis, while at the same time achieving modularity in the design of the controller and identifier modules. It can be concluded that the I&I based modular adaptive backstepping design has great potential for this type of control problem: the resulting adaptive flight control systems perform excellently in the considered failure scenarios, while the identifier tuning process is relatively straightforward. A minor disadvantage of the I&I based modular adaptive backstepping approach is that the estimator employs overparametrization, i.e. in general more than one update law is used to estimate each unknown parameter. Overparametrization is not necessarily disadvantageous from a performance point of view, but it is less efficient in a numerical implementation of the controller. Overparametrization does not play a role in the aerodynamic angle control design problems considered in Chapters 6 and 8. However, for the trajectory control problem of Chapter 7 the I&I estimator would require more states


than the tracking error driven update laws of the constrained adaptive backstepping approach. Another minor disadvantage is that the analytical derivation of the I&I estimator in combination with the B-spline networks used for the partitioning of the F-16 model is relatively time consuming, but this additional effort is marginal compared to the effort required to perform the tuning process of the integrated adaptive backstepping update laws.

Comparison of Adaptive Flight Control Frameworks

The overall performance of the three methods used for flight control design in this thesis is now compared using several important criteria, such as design complexity and tracking performance. A table with the results of this comparison can be found in Figure 10.1. It can be seen that the modular adaptive backstepping design with I&I identifier outperforms the other methods, while also being the only method that does not display an unacceptable performance for any of the criteria.

Figure 10.1: Comparison of the overall performance of integrated adaptive backstepping control, modular adaptive backstepping control with RLS identifier and modular adaptive backstepping control with I&I identifier. Green indicates the best performing method, yellow the second best and orange the worst. A red table cell indicates an unacceptable performance.

A short explanation of each criterion is given.

1. Design complexity: The static feedback design of the controllers is nearly identical, but the identifier designs for the modular approaches are more complex than for the integrated design. The analytical derivation of the I&I estimator is the most time consuming, especially in combination with flight envelope partitioning.


2. Dynamic order: The dynamic order of the RLS identifier is by far the highest. The I&I estimator requires a couple of extra filter states when compared to the tracking error driven update laws of the integrated adaptive backstepping. Furthermore, due to the overparametrization in the design, the dynamic order of the I&I estimator can increase for some control problems.

3. Estimation quality: With sufficient excitation the RLS identifier will find the true parameters of the system. The I&I identifier will find the total force and moment coefficients, but not the individual parameters. Finally, the parameter estimates of the integrated adaptive backstepping controller will in general never converge to their true values.

4. Numerical stability: The nonlinear damping terms used for the modular design with RLS identifier can lead to numerical problems. The integrated adaptive design is the simplest and therefore the most numerically stable, although it should be noted that no problems were encountered for the modular design with I&I estimator.

5. Tracking performance: The tracking performance of the modular designs is better in general, with the I&I designs just outperforming the RLS designs.

6. Transient performance: Unexpected behavior of the update laws can lead to bad transient performance for the integrated design. The nonlinear damping terms sometimes result in unwanted oscillations with the modular RLS design.

7. Tuning complexity: Integrated backstepping designs are very hard or impossible to tune for complex systems due to the strong coupling between controller and identifier. Tuning of the I&I estimator is quite straightforward, while tuning of the RLS identifier is almost automatic; however, finding the correct resetting algorithm and nonlinear damping gains for the RLS design requires additional effort. Therefore, the tuning of the modular adaptive backstepping controller with I&I identifier is the least time consuming and the most transparent.

Final Conclusions

On the basis of the research performed in this thesis, it can be concluded that an RFC system based on the modular adaptive backstepping method with I&I estimator shows a lot of potential, since it possesses all the features aimed at in the thesis goal:

• A single nonlinear backstepping controller with an I&I estimator is used for the entire flight envelope. The stability and convergence properties of the resulting closed-loop system are guaranteed by Lyapunov theory and have been verified in numerical simulations. Due to the modularity of the design, systematic gain tuning can be performed to achieve the desired closed-loop performance.

• The numerical simulation results with the F-16 model suffering from various types of sudden actuator failures and large aerodynamic uncertainties demonstrate that


the performance of the RFC system is superior to that of a non-adaptive NDI based control system or the baseline gain-scheduled control system for the considered situations. By extending the regressors, i.e. the local aerodynamic model polynomial structures, of the identifier, the adaptive controller should also be able to take asymmetric (structural) failures, which introduce additional coupling between the longitudinal and lateral motion of the aircraft, into account.

• By making use of a multiple model approach based on flight envelope partitioning with B-spline networks, the computational load of the numerically stable adaptive control algorithm is relatively low (a minimal sketch of such a locally updated, B-spline blended approximator is given after this list). The algorithm can easily run in real time on a budget desktop computer. However, the current processors of onboard computers are sized for the current generation of flight controllers and are not powerful enough to run the proposed adaptive control algorithm in real time. Manufacturers and clients will have to be convinced that the benefits of RFC are worth the additional hardware cost and weight.
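To make the locality argument above concrete, the sketch below blends local linear-in-the-parameters models with first order B-spline (hat) weights over a single scheduling variable; only the two local models whose weights are nonzero are updated at each step, so all other local models retain what they have learned. The partition, regressor and gains are illustrative assumptions, and the sketch is one-dimensional, unlike the multivariable F-16 partitioning used in the thesis.

```python
# Minimal 1-D sketch of a B-spline blended multiple-model approximator with
# local gradient updates (illustrative assumptions; not the F-16 partitioning).
import numpy as np

knots = np.linspace(-10.0, 30.0, 9)        # partition of the scheduling variable (e.g. alpha in deg)

def hat_weights(a):
    """First order B-spline (hat) weights: at most two are nonzero and they sum to one."""
    a = np.clip(a, knots[0], knots[-1])
    i = min(np.searchsorted(knots, a, side="right") - 1, len(knots) - 2)
    t = (a - knots[i]) / (knots[i + 1] - knots[i])
    w = np.zeros(len(knots))
    w[i], w[i + 1] = 1.0 - t, t
    return w

theta = np.zeros(len(knots))               # one local parameter per knot (scalar model for clarity)

def predict(a):
    return hat_weights(a).dot(theta)

def update(a, error, gain=0.5):
    """Gradient update: only the (at most two) active local models change."""
    global theta
    theta += gain * error * hat_weights(a)

# usage sketch: learn an unknown coefficient curve from prediction errors
truth = lambda a: 0.05 * a - 0.001 * a**2
rng = np.random.default_rng(0)
for _ in range(5000):
    a = rng.uniform(-10.0, 30.0)
    update(a, truth(a) - predict(a))
print(max(abs(truth(a) - predict(a)) for a in np.linspace(-8.0, 28.0, 50)))
```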

10.2 Recommendations

New questions and research directions can be formulated based on the research presented in this thesis. These recommendations are formulated in this section:

• As already discussed in the introduction, accurate failure models of realistic (structural) damage are lacking for the high-fidelity F-16 model used as the main study object in this thesis. For this reason, the evaluation of the adaptive flight controllers was limited to simulation scenarios with actuator failures, symmetric center of gravity shifts and uncertainties in individual aerodynamic coefficients. If more realistic aerodynamic data for asymmetric failures such as partial surface loss could be obtained, the results of the study would be more valuable. Furthermore, the adaptive controllers could be extended with an FDIE module that performs actuator health monitoring, thereby simplifying the task of online model identification. This was not done in this thesis work in order to make the limited failure scenarios more challenging.

• A multiple local model approach resulting from flight envelope partitioning was used to simplify the online model approximation and thereby reduce the computational load. The aerodynamic model of the F-16 considered in this thesis has already been examined in many earlier studies and is already in a form that lends itself to partitioning into locally valid models. However, in the more general case finding a proper local approximation structure and partitioning may be a time consuming study in itself. Many more advanced local approximation and learning algorithms are currently being developed, see e.g. [50, 145, 204]. In [145] an algorithm is proposed that employs nonlinear function approximation with automatic growth of the learning network according to the nonlinearities and the working domain of the control system. The unknown function in the dynamical system is approximated by piecewise


linear models using a nonparametric regression technique. Local models are allocated as necessary and their parameters are optimized online. Such an advanced technique eliminates the need for manual partitioning and the structure automatically adapts itself in the case of failures. However, it is unclear whether a real-time implementation would be feasible for the F-16 model or similar high-fidelity aircraft models.

• For some simulations with sudden failure cases, the adaptive controllers managed to stabilize the aircraft, but the commanded maneuver proved too challenging for the damaged aircraft. Hence, an adaptive controller by itself may not be sufficient for a good reconfigurable flight control system. The pilot or guidance system also needs to be aware of the characteristics of the failure, since the post-failure flight envelope might be a lot smaller. This observation has resulted in a whole new area of research, usually referred to as adaptive flight envelope estimation and/or protection, see e.g. [198, 211]. With the adaptive controllers developed in this thesis it is possible to indicate to the pilot which axes have suffered a failure, so that he is made aware that there is a failure and that he should fly more carefully. However, the development of fully adaptive flight envelope protection systems, or at least systems that help make the pilot aware of the type and the size of the failure, should be a key research focus for the coming years.

• Two main reasons for the gap between the research and the application of adaptive or reconfigurable flight control methods can be identified. Firstly, many of the adaptive control techniques cannot be applied to existing aircraft without replacing the already certified flight control laws. Secondly, the verification and validation procedures needed to certify the novel reconfiguration methods have not received the necessary attention. For these reasons, some designers have been developing 'retrofit' adaptive control systems which leave the baseline flight control system intact, see e.g. [141, 158]. The nonlinear adaptive designs developed in this thesis cannot be used in a retrofit manner, since all current flight control systems are based on linear design techniques. However, this may change when the first aircraft with NDI based flight control systems become available. Nevertheless, more research should be devoted to verification and validation procedures that can be applied directly to nonlinear and even adaptive control designs; the linear analysis tools currently used have many shortcomings.

• The contributions of this thesis are mainly of a theoretical nature, since all results were obtained from numerical flight simulations with preprogrammed maneuvers. No piloted simulations were performed. However, the adaptive flight control systems developed in the thesis have been test flown by the author on a desktop computer using a joystick and a flight simulator. Compared to a normal NDI controller and the baseline controller, the workload was indeed lowered for most of the failures considered. Results of these simulated test flights are not included in the thesis, since the author is not a professional pilot. Nevertheless, simulations with


actual test pilots should be performed to examine the interactions between pilots and the adaptive control systems. The fast reaction of the pilot to the unexpected movements caused by an unknown, sudden change in the dynamic behavior of the aircraft, in combination with the immediate online adaptation, may lead to unexpected results. Pilots may need to learn to trust the adaptive element in the flight control system, as was already observed in an earlier study involving a damaged large passenger aircraft [133].

• As discussed in the conclusions, the I&I based estimator used in this thesis employs overparametrization. From a numerical point of view it would be beneficial to obtain a single estimate of each unknown parameter. Moreover, should this be achieved, it may be possible to combine the I&I based estimator with least-squares adaptation. The ability of least-squares to even out the adaptation rates would almost completely automate the tuning of the adaptive control design. The regressor filters employed in the modular control designs with least-squares, as derived in Chapter 6, result in a high dynamic order of the estimators and may also make the estimator less responsive, thus affecting the performance. The combination with I&I can possibly remove the need for these filters, as demonstrated for linearly parametrized nonlinear control systems in the 'normal form' in [114]. Furthermore, the need for nonlinear damping is also removed. However, extending the suggested approach of [114] to the broader class of strict-feedback systems would require overparametrization. This would mean employing multiple Riccati differential equations, which is of course unacceptable. Hence, the use of overparametrization should certainly be avoided if least-squares adaptation is considered.

• Control engineering is a broad field of study that encompasses many applications. The adaptive backstepping techniques discussed in this thesis were studied and evaluated purely for their usefulness in flight control design problems. Obviously, most of the techniques studied in this thesis can be and have been used for other types of control system problems in the literature and sometimes even in practice. However, the shortcomings and modifications discussed in this thesis may not be relevant in other control design problems.

Appendix A

F-16 Model

A.1 F-16 Geometry

Figure A.1: F-16 of the Royal Netherlands Air Force Demo Team. Picture by courtesy of the F-16 Demo Team.


Table A.1: F-16 parameters.

Parameter                             Symbol   Value
aircraft mass (kg)                    m        9295.44
wing span (m)                         b        9.144
wing area (m2)                        S        27.87
mean aerodynamic chord (m)            c̄        3.45
roll moment of inertia (kg.m2)        Ix       12874.8
pitch moment of inertia (kg.m2)       Iy       75673.6
yaw moment of inertia (kg.m2)         Iz       85552.1
product moment of inertia (kg.m2)     Ixz      1331.4
product moment of inertia (kg.m2)     Ixy      0.0
product moment of inertia (kg.m2)     Iyz      0.0
c.g. location (m)                     xcg      0.3c̄
reference c.g. location (m)           xcgr     0.35c̄
engine angular momentum (kg.m2/s)     Heng     216.9

A.2 ISA Atmospheric Model

For the atmospheric data an approximation of the International Standard Atmosphere (ISA) is used [143]:

$$T = \begin{cases} T_0 + \lambda h & \text{if } h \leq 11000 \\ T_{(h=11000)} & \text{if } h > 11000 \end{cases}$$

$$p = \begin{cases} p_0 \left(1 + \lambda \dfrac{h}{T_0}\right)^{-\frac{g_0}{R\lambda}} & \text{if } h \leq 11000 \\[2ex] p_{(h=11000)}\; e^{-\frac{g_0 (h-11000)}{R\, T_{(h=11000)}}} & \text{if } h > 11000 \end{cases}$$

$$\rho = \frac{p}{RT}, \qquad a = \sqrt{\gamma R T},$$

where T0 = 288.15 K is the temperature at sea level, p0 = 101325 N/m2 the pressure at sea level, R = 287.05 J/kg.K the gas constant of air, g0 = 9.80665 m/s2 the gravity constant at sea level, λ = dT /dh = −0.0065 K/m the temperature gradient and γ = 1.41 the isentropic expansion factor for air. Given the aircraft’s altitude (h in meters) it returns the current temperature (T in Kelvin), the current air pressure (p in N/m2 ), the current air density (ρ in kg/m3 ) and the speed of sound (a in m/s).
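For completeness, a minimal implementation of this ISA approximation is sketched below; function and variable names are illustrative, not taken from the thesis software.

```python
# Minimal sketch of the ISA approximation above (names are illustrative).
from math import exp, sqrt

T0, P0 = 288.15, 101325.0      # sea-level temperature (K) and pressure (N/m^2)
R, G0 = 287.05, 9.80665        # gas constant of air (J/kg.K), gravity constant (m/s^2)
LAM, GAMMA = -0.0065, 1.41     # temperature gradient (K/m), isentropic expansion factor (as in the text)

def isa(h):
    """Return (T, p, rho, a) for altitude h in meters."""
    T11 = T0 + LAM * 11000.0
    p11 = P0 * (1.0 + LAM * 11000.0 / T0) ** (-G0 / (R * LAM))
    if h <= 11000.0:
        T = T0 + LAM * h
        p = P0 * (1.0 + LAM * h / T0) ** (-G0 / (R * LAM))
    else:
        T = T11
        p = p11 * exp(-G0 * (h - 11000.0) / (R * T11))
    rho = p / (R * T)
    a = sqrt(GAMMA * R * T)
    return T, p, rho, a

# e.g. isa(5000.0) gives roughly (255.7 K, 5.4e4 N/m^2, 0.74 kg/m^3, 322 m/s)
```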


A.3 Flight Control System

These figures contain the schemes of the baseline flight control system of the F-16 model. More details can be found in [149].

Figure A.2: Baseline pitch axis control loop of the F-16 model.


Figure A.3: Baseline roll axis control loop of the F-16 model.

Figure A.4: Baseline yaw axis control loop of the F-16 model.

Appendix B

System and Stability Concepts

This appendix clarifies certain system and stability concepts that are used in the main body of the thesis. Most proofs are not included, but can be found in the main references for this appendix: [106, 118, 192].

B.1 Lyapunov Stability and Convergence

For completeness, the main results of Lyapunov stability theory as discussed in Section 3.2 are reviewed. More comprehensive accounts can be found in [106] and [118]. Consider the non-autonomous system
$$\dot{x} = f(x, t) \tag{B.1}$$
where $f : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$ is locally Lipschitz in $x$ and piecewise continuous in $t$.

Definition B.1. The origin $x = 0$ is the equilibrium point for (B.1) if
$$f(0, t) = 0, \quad \forall t \geq 0. \tag{B.2}$$

The following comparison functions are useful tools to create more transparent stability definitions.

Definition B.2. A continuous function $\alpha : [0, a) \to \mathbb{R}_+$ is said to be of class $\mathcal{K}$ if it is strictly increasing and $\alpha(0) = 0$. It is said to be of class $\mathcal{K}_\infty$ if $a = \infty$ and $\lim_{r\to\infty} \alpha(r) = \infty$.

Definition B.3. A continuous function $\beta : [0, a) \times \mathbb{R}_+ \to \mathbb{R}_+$ is said to be of class $\mathcal{KL}$ if, for each fixed $s$, the mapping $\beta(r, s)$ is of class $\mathcal{K}$ with respect to $r$ and, for each fixed $r$, the mapping $\beta(r, s)$ is decreasing with respect to $s$ and $\lim_{s\to\infty} \beta(r, s) = 0$. It is said to be of class $\mathcal{KL}_\infty$ if, in addition, for each fixed $s$ the mapping $\beta(r, s)$ belongs to class $\mathcal{K}_\infty$ with respect to $r$.

Using these comparison functions the stability definitions of Chapter 3 are restated.

Definition B.4. The equilibrium point $x = 0$ of (B.1) is

• uniformly stable, if there exist a class $\mathcal{K}$ function $\gamma(\cdot)$ and a positive constant $c$, independent of $t_0$, such that
$$|x(t)| \leq \gamma(|x(t_0)|), \quad \forall t \geq t_0 \geq 0, \quad \forall x(t_0): |x(t_0)| < c; \tag{B.3}$$

• uniformly asymptotically stable, if there exist a class $\mathcal{KL}$ function $\beta(\cdot, \cdot)$ and a positive constant $c$, independent of $t_0$, such that
$$|x(t)| \leq \beta(|x(t_0)|, t - t_0), \quad \forall t \geq t_0 \geq 0, \quad \forall x(t_0): |x(t_0)| < c; \tag{B.4}$$

• exponentially stable, if (B.4) is satisfied with $\beta(r, s) = k r e^{-\alpha s}$, $k > 0$, $\alpha > 0$;

• globally uniformly stable, if (B.3) is satisfied with $\gamma \in \mathcal{K}_\infty$ for any initial state $x(t_0)$;

• globally uniformly asymptotically stable, if (B.4) is satisfied with $\beta \in \mathcal{KL}_\infty$ for any initial state $x(t_0)$;

• globally exponentially stable, if (B.4) is satisfied for any initial state $x(t_0)$ and with $\beta(r, s) = k r e^{-\alpha s}$, $k > 0$, $\alpha > 0$.

Based on these definitions, the main Lyapunov stability theorem is then formulated as follows.

Theorem B.5. Let $x = 0$ be an equilibrium point of (B.1) and $D = \{x \in \mathbb{R}^n : |x| < r\}$. Let $V : D \times \mathbb{R}_+ \to \mathbb{R}_+$ be a continuously differentiable function such that $\forall t \geq 0$, $\forall x \in D$,
$$\gamma_1(|x|) \leq V(x, t) \leq \gamma_2(|x|) \tag{B.5}$$
$$\frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, t) \leq -\gamma_3(|x|). \tag{B.6}$$

Then the equilibrium $x = 0$ is

• uniformly stable, if $\gamma_1$ and $\gamma_2$ are class $\mathcal{K}$ functions on $[0, r)$ and $\gamma_3(\cdot) \geq 0$ on $[0, r)$;

• uniformly asymptotically stable, if $\gamma_1$, $\gamma_2$ and $\gamma_3$ are class $\mathcal{K}$ functions on $[0, r)$;

• exponentially stable, if $\gamma_i(\rho) = k_i \rho^\alpha$ on $[0, r)$, $k_i > 0$, $\alpha > 0$, $i = 1, 2, 3$;

• globally uniformly stable, if $D = \mathbb{R}^n$, $\gamma_1$ and $\gamma_2$ are class $\mathcal{K}_\infty$ functions, and $\gamma_3(\cdot) \geq 0$ on $\mathbb{R}_+$;

• globally uniformly asymptotically stable, if $D = \mathbb{R}^n$, $\gamma_1$ and $\gamma_2$ are class $\mathcal{K}_\infty$ functions, and $\gamma_3$ is a class $\mathcal{K}$ function on $\mathbb{R}_+$; and

• globally exponentially stable, if $D = \mathbb{R}^n$ and $\gamma_i(\rho) = k_i \rho^\alpha$ on $\mathbb{R}_+$, $k_i > 0$, $\alpha > 0$, $i = 1, 2, 3$.

The key advantage of this theorem is that it can be applied without solving the differential equation (B.1). However, analysis of dynamic systems can result in situations where the derivative of the Lyapunov function is only negative semi-definite. For autonomous systems it may still be possible to conclude asymptotic stability in these situations via the concept of invariant sets, i.e. LaSalle's Invariance Theorem.

Definition B.6. A set $\Gamma$ is a positively invariant set of a dynamic system if every trajectory starting in $\Gamma$ at $t = 0$ remains in $\Gamma$ for all $t > 0$.

For instance, any equilibrium of a system is an invariant set, but also the set of all equilibria of a system is an invariant set. Using the concept of invariant sets, the following invariant set theorem can be stated.

Theorem B.7. For an autonomous system $\dot{x} = f(x)$, with $f$ continuous on the domain $D$, let $V(x) : D \to \mathbb{R}$ be a function with continuous first partial derivatives on $D$. If

1. the compact set $\Omega \subset D$ is a positively invariant set of the system;
2. $\dot{V} \leq 0$, $\forall x \in \Omega$;

then every solution $x(t)$ originating in $\Omega$ converges to $M$ as $t \to \infty$, where $R = \{x \in \Omega : \dot{V}(x) = 0\}$ and $M$ is the union of all invariant sets in $R$.

LaSalle's Theorem is only applicable to the analysis of autonomous systems, since it may be unclear how to define the sets $M$ and $R$ for non-autonomous systems. For non-autonomous systems Barbalat's Lemma can be used.

Lemma B.8. Let $\phi(t) : \mathbb{R}_+ \to \mathbb{R}$ be uniformly continuous on $[0, \infty)$.¹ If $\lim_{t\to\infty} \int_0^t \phi(\tau)\,d\tau$ exists and is finite, then
$$\lim_{t\to\infty} \phi(t) = 0.$$

Note that the uniform continuity of $\phi$ can be proven by showing either that $\dot{\phi} \in \mathcal{L}_\infty([0, \infty))$ or that $\phi(t)$ is Lipschitz on $[0, \infty)$. Finally, the theorem due to LaSalle and Yoshizawa is stated.

¹ A function $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is uniformly continuous if, for any $\epsilon > 0$, there exists $\delta(\epsilon) > 0$ such that $|x - y| < \delta \Rightarrow |f(x) - f(y)| < \epsilon$, for all $x, y \in D$.

Theorem B.9. Let $x = 0$ be an equilibrium point of (B.1) and suppose that $f$ is locally Lipschitz in $x$ uniformly in $t$. Let $V : \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}_+$ be a continuously differentiable function such that
$$\gamma_1(|x|) \leq V(x, t) \leq \gamma_2(|x|) \tag{B.7}$$
$$\dot{V} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, t) \leq -W(x) \leq 0, \tag{B.8}$$
$\forall t \geq 0$, $\forall x \in \mathbb{R}^n$, where $\gamma_1$ and $\gamma_2$ are class $\mathcal{K}_\infty$ functions and $W$ is a continuous function. Then all solutions of (B.1) are globally uniformly bounded and satisfy
$$\lim_{t\to\infty} W(x(t)) = 0. \tag{B.9}$$

In addition, if $W(x)$ is positive definite, then the equilibrium $x = 0$ is globally uniformly asymptotically stable.

Proof: Since $\dot{V} \leq 0$, $V$ is non-increasing. Hence, in view of (B.7), it can be concluded that $x$ is globally uniformly bounded, i.e. $|x(t)| \leq B$, $\forall t \geq 0$. Furthermore, since $V(x(t), t)$ is non-increasing and bounded from below by zero, it can be concluded that it has a limit $V_\infty$ as $t \to \infty$. Integrating (B.8) gives
$$\lim_{t\to\infty} \int_{t_0}^{t} W(x(\tau))\,d\tau \;\leq\; -\lim_{t\to\infty} \int_{t_0}^{t} \dot{V}(x(\tau), \tau)\,d\tau \;=\; \lim_{t\to\infty} \{V(x(t_0), t_0) - V(x(t), t)\} \;=\; V(x(t_0), t_0) - V_\infty, \tag{B.10}$$
which means that $\int_{t_0}^{\infty} W(x(\tau))\,d\tau$ exists and is finite. It remains to show that $W(x(t))$ is also uniformly continuous. Since $|x(t)| \leq B$ and $f$ is locally Lipschitz in $x$ uniformly in $t$, it can be observed that for any $t \geq t_0 \geq 0$,
$$|x(t) - x(t_0)| = \left| \int_{t_0}^{t} f(x(\tau), \tau)\,d\tau \right| \;\leq\; L \int_{t_0}^{t} |x(\tau)|\,d\tau \;\leq\; L B |t - t_0|, \tag{B.11}$$
where $L$ is the Lipschitz constant of $f$ on $\{|x| \leq B\}$. Selecting $\delta(\epsilon) = \frac{\epsilon}{LB}$ results in
$$|x(t) - x(t_0)| < \epsilon, \quad \forall |t - t_0| \leq \delta(\epsilon), \tag{B.12}$$
which means that $x(t)$ is uniformly continuous. Since $W$ is continuous, it is uniformly continuous on the compact set $\{|x| \leq B\}$. It can be concluded that $W(x(t))$ is uniformly continuous from the uniform continuity of $W(x)$ and $x(t)$. Hence, it satisfies the conditions of Lemma B.8, which in turn guarantees that $W(x(t)) \to 0$ as $t \to \infty$. If, in addition, $W(x)$ is positive definite, there exists a class $\mathcal{K}$ function $\gamma_3$ such that $W(x) \geq \gamma_3(|x|)$. By Theorem B.7 it can be concluded that $x = 0$ is globally uniformly asymptotically stable.
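As a standard textbook-style illustration of how Theorem B.9 is used in adaptive designs (the example below is generic and not a result from the thesis), consider the scalar system with an unknown constant parameter
$$\dot{x} = \theta\,\varphi(x) + u, \qquad u = -kx - \hat{\theta}\,\varphi(x), \qquad \dot{\hat{\theta}} = \gamma\, x\,\varphi(x), \qquad k, \gamma > 0.$$
With $\tilde{\theta} = \theta - \hat{\theta}$ and $V = \tfrac{1}{2}x^2 + \tfrac{1}{2\gamma}\tilde{\theta}^2$ one obtains
$$\dot{V} = x\left(-kx + \tilde{\theta}\varphi(x)\right) - \tfrac{1}{\gamma}\tilde{\theta}\,\dot{\hat{\theta}} = -k x^2 \leq 0,$$
so with $W(x, \tilde{\theta}) = k x^2$ Theorem B.9 guarantees that $x$ and $\tilde{\theta}$ are bounded and that $x(t) \to 0$, even though $\dot{V}$ is only negative semi-definite and nothing is claimed about convergence of $\hat{\theta}$ to $\theta$. This is precisely the situation encountered with the tracking error driven update laws in this thesis: the tracking error converges, while the parameter estimates are only guaranteed to remain bounded.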


B.2 Input-to-state Stability

This section recalls the notion of input-to-state stability (ISS) [192, 193]. The ISS concept plays an important role in the modular backstepping design technique as derived in Section 6.2.2. Consider the system
$$\dot{x} = f(t, x, u), \tag{B.13}$$
where $f$ is piecewise continuous in $t$ and locally Lipschitz in $x$ and $u$.

Definition B.10. The system (B.13) is said to be input-to-state stable (ISS) if there exist a class $\mathcal{KL}$ function $\beta$ and a class $\mathcal{K}$ function $\gamma$, such that, for any $x(t_0)$ and for any input $u$ that is continuous and bounded on $[0, \infty)$, the solution exists for all $t \geq 0$ and satisfies
$$|x(t)| \leq \beta(|x(t_0)|, t - t_0) + \gamma\!\left(\sup_{\tau \in [t_0, t]} |u(\tau)|\right) \tag{B.14}$$
for all $t \geq t_0 \geq 0$.

The function $\gamma(\cdot)$ is often referred to as an ISS gain for the system (B.13). The above definition implies that an ISS system is bounded-input bounded-state stable and has a globally uniformly asymptotically stable equilibrium at zero when $u(t) = 0$. The ISS property can be equivalently characterized in terms of Lyapunov functions, as the following theorem shows.

Theorem B.11. The system (B.13) is ISS if and only if there exists a continuously differentiable function $V : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}_+$ such that for all $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$,
$$\gamma_1(|x|) \leq V(x, t) \leq \gamma_2(|x|) \tag{B.15}$$
$$|x| \geq \rho(|u|) \;\Rightarrow\; \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(t, x, u) \leq -\gamma_3(|x|), \tag{B.16}$$
where $\gamma_1$, $\gamma_2$ and $\rho$ are class $\mathcal{K}_\infty$ functions and $\gamma_3$ is a class $\mathcal{K}$ function. Note that an ISS gain for the system (B.13) can be obtained from the above theorem as $\gamma = \gamma_1^{-1} \circ \gamma_2 \circ \rho$.
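As a simple illustration of Definition B.10 and Theorem B.11 (a generic textbook-style example, not taken from the thesis), consider
$$\dot{x} = -x^3 + u, \qquad V(x) = \tfrac{1}{2}x^2.$$
Then $\dot{V} = -x^4 + xu$, and whenever $|x| \geq \rho(|u|) := (2|u|)^{1/3}$ one has $|x|^3 \geq 2|u|$, so that $xu \leq |x|\,|u| \leq \tfrac{1}{2}|x|^4$ and therefore $\dot{V} \leq -\tfrac{1}{2}x^4$. Conditions (B.15)–(B.16) hold with $\gamma_1(r) = \gamma_2(r) = \tfrac{1}{2}r^2$ and $\gamma_3(r) = \tfrac{1}{2}r^4$, so the system is ISS with gain $\gamma = \gamma_1^{-1} \circ \gamma_2 \circ \rho = \rho$, i.e. $\gamma(r) = (2r)^{1/3}$.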

B.3 Invariant Manifolds and System Immersion

This section gives the definition of an invariant manifold [210] and of system immersion [36], since these notions are used in Chapter 9. Consider the autonomous system
$$\dot{x} = f(x), \qquad y = h(x), \tag{B.17}$$
with state $x \in \mathbb{R}^n$ and output $y \in \mathbb{R}^m$.

Definition B.12. The manifold $M = \{x \in \mathbb{R}^n : s(x) = 0\}$, with $s(x)$ smooth, is said to be (positively) invariant for $\dot{x} = f(x)$ if $s(x(0)) = 0$ implies $s(x(t)) = 0$ for all $t \geq 0$.

Consider now the (target) system
$$\dot{\xi} = \alpha(\xi), \qquad \zeta = \beta(\xi), \tag{B.18}$$
with state $\xi \in \mathbb{R}^p$, $p < n$, and output $\zeta \in \mathbb{R}^m$.

Definition B.13. The system (B.18) is said to be immersed into the system (B.17) if there exists a smooth mapping $\pi : \mathbb{R}^p \to \mathbb{R}^n$ satisfying $x(0) = \pi(\xi(0))$ and
$$\beta(\xi_1) \neq \beta(\xi_2) \;\Rightarrow\; h(\pi(\xi_1)) \neq h(\pi(\xi_2))$$
and such that
$$f(\pi(\xi)) = \frac{\partial \pi}{\partial \xi}\,\alpha(\xi) \qquad \text{and} \qquad h(\pi(\xi)) = \beta(\xi)$$
for all $\xi \in \mathbb{R}^p$. Hence, roughly stated, a system $\Sigma_1$ is said to be immersed into a system $\Sigma_2$ if the input-output mapping of $\Sigma_2$ is a restriction of the input-output mapping of $\Sigma_1$, i.e. any output response generated by $\Sigma_2$ is also an output response of $\Sigma_1$ for a restricted set of initial conditions.
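As a simple constructed illustration of Definitions B.12 and B.13 (not an example from the thesis), consider the target system $\dot{\xi} = -\xi$, $\zeta = \xi$ and the system
$$\dot{x}_1 = -x_1, \qquad \dot{x}_2 = -x_2 + x_1^2, \qquad y = x_1.$$
With $\pi(\xi) = (\xi, -\xi^2)$ one verifies $h(\pi(\xi)) = \xi = \beta(\xi)$ and
$$f(\pi(\xi)) = \begin{pmatrix} -\xi \\ 2\xi^2 \end{pmatrix} = \begin{pmatrix} 1 \\ -2\xi \end{pmatrix}(-\xi) = \frac{\partial \pi}{\partial \xi}\,\alpha(\xi),$$
so the target system is immersed into the two-dimensional system. The associated manifold $M = \{x : s(x) = x_2 + x_1^2 = 0\}$ is invariant in the sense of Definition B.12, since $\dot{s} = \dot{x}_2 + 2x_1\dot{x}_1 = -x_2 - x_1^2 = -s$, so $s(x(0)) = 0$ implies $s(x(t)) = 0$ for all $t \geq 0$; in fact $s \to 0$ from any initial condition, which is the attractivity property exploited by the I&I estimator in Chapter 9.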

Appendix C

Command Filters

This appendix covers the second order command filters which are used for reference signal generation and in the intermediate steps of the constrained adaptive backstepping approach (taken from [61]).

Figure C.1: Filter that generates the command and command derivative while enforcing magnitude, bandwidth and rate limit constraints [61].

Figure C.1 shows an example of a filter which produces a magnitude, rate and bandwidth limited signal $x_c$ and its derivative $\dot{x}_c$ by filtering a signal $x_c^0$. The state space representation of this filter is
$$\begin{bmatrix} \dot{q}_1(t) \\ \dot{q}_2(t) \end{bmatrix} = \begin{bmatrix} q_2 \\[1ex] 2\zeta\omega_n \left[ S_R\!\left( \dfrac{\omega_n^2}{2\zeta\omega_n} \left[ S_M(x_c^0) - q_1 \right] \right) - q_2 \right] \end{bmatrix} \tag{C.1}$$
$$\begin{bmatrix} x_c \\ \dot{x}_c \end{bmatrix} = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \tag{C.2}$$


where $S_M(\cdot)$ and $S_R(\cdot)$ represent the magnitude and rate limit functions, respectively. The functions $S_M$ and $S_R$ are defined similarly:
$$S_M(x) = \begin{cases} M & \text{if } x \geq M \\ x & \text{if } |x| < M \\ -M & \text{if } x \leq -M. \end{cases}$$
Note that if the signal $x_c^0$ is bounded, then $x_c$ and $\dot{x}_c$ are also bounded and continuous signals. Note also that $\dot{x}_c$ is computed without differentiation. When the state must remain in some operating envelope defined by the magnitude limit $M$ and the rate limit $R$, the command filter ensures that the commanded trajectory and its derivative satisfy these same constraints. If the only objective in the design of the command filter is to compute $x_c$ and its derivative, then $M$ and $R$ are infinitely large and the limiters do not need to be included in the filter implementation. In the linear range of the functions $S_M$ and $S_R$ the filter dynamics are
$$\begin{bmatrix} \dot{q}_1(t) \\ \dot{q}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix} \begin{bmatrix} q_1 \\ q_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \omega_n^2 \end{bmatrix} x_c^0 \tag{C.3}$$
$$\begin{bmatrix} x_c \\ \dot{x}_c \end{bmatrix} = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \tag{C.4}$$
with the transfer function from the input to the first output defined as
$$\frac{X_c(s)}{X_c^0(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}. \tag{C.5}$$

When command limiting is not in effect, the error xc − x0c can be made arbitrarily small by selecting ωn sufficiently larger than the bandwidth of the signal x0c . When command filtering is in effect, the error xc − x0c will be bounded since both xc and x0c are bounded.
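A minimal discrete-time sketch of this command filter is given below (explicit Euler integration; the bandwidth, damping, limits and test signal are illustrative assumptions, not values from the thesis).

```python
# Minimal sketch of the second order command filter (C.1)-(C.2) with
# magnitude and rate limiting; parameter values are illustrative only.
import numpy as np

def sat(x, limit):
    """Symmetric saturation used for both S_M and S_R."""
    return np.clip(x, -limit, limit)

def command_filter(x0c, dt, wn=20.0, zeta=0.7, M=np.inf, R=np.inf):
    """Filter the raw command sequence x0c and return (x_c, x_c_dot)."""
    q1, q2 = 0.0, 0.0
    xc = np.zeros_like(x0c)
    xc_dot = np.zeros_like(x0c)
    for k, u in enumerate(x0c):
        # eq. (C.1): the magnitude-limited command error produces a rate command,
        # which is rate-limited before driving the second integrator state
        q1_dot = q2
        q2_dot = 2.0 * zeta * wn * (
            sat((wn**2 / (2.0 * zeta * wn)) * (sat(u, M) - q1), R) - q2)
        q1 += dt * q1_dot
        q2 += dt * q2_dot
        xc[k], xc_dot[k] = q1, q2          # eq. (C.2)
    return xc, xc_dot

# usage sketch: a step command limited to |x_c| <= 10 and |x_c_dot| <= 15
t = np.arange(0.0, 3.0, 0.001)
raw = 20.0 * (t > 0.5)
xc, xc_dot = command_filter(raw, dt=0.001, M=10.0, R=15.0)
```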

Appendix D

Additional Figures

This appendix contains the results for some of the numerical simulations performed in Chapters 6 to 9.



D.1 Simulation Results of Chapter 6

(a) Reference tracking. (b) Surface deflections. (c) Control moment. (d) Parameter estimation.

Figure D.1: Simulation scenario 3 results for the integrated adaptive controller combined with PI control allocation where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second.

(a) Reference tracking. (b) Surface deflections. (c) Control moment. (d) Parameter estimation.

Figure D.2: Simulation scenario 3 results for the modular adaptive controller combined with PI control allocation where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second.

(a) Reference tracking. (b) Surface deflections. (c) Control moment. (d) Parameter estimation.

Figure D.3: Simulation scenario 4 results for the integrated adaptive controller combined with QP WU2 control allocation where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees.

(a) Reference tracking. (b) Surface deflections. (c) Control moment. (d) Parameter estimation.

Figure D.4: Simulation scenario 4 results for the modular adaptive controller combined with QP WU2 control allocation where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees.

(a) Integrated adaptive control with WPI WU1. (b) Integrated adaptive control with WPI WU2. (c) Modular adaptive control with WPI WU1. (d) Modular adaptive control with WPI WU2.

Figure D.5: Simulation scenario 2 results for both controllers with WPI control allocation where the aircraft experiences a left horizontal stabilizer locked at 0 degrees.


D.2 Simulation Results of Chapter 7


Figure D.6: Maneuver 1: Climbing helical path performed at flight condition 1 without any uncertainty or actuator failures.


Figure D.7: Maneuver 1: Climbing helical path performed at flight condition 2 with +30% uncertainty in the aerodynamic coefficients.


Figure D.8: Maneuver 1: Climbing helical path performed at flight condition 3 with left aileron locked at −10 deg.


Figure D.9: Maneuver 2: Reconnaissance and surveillance performed at flight condition 3 with −30% uncertainty in the aerodynamic coefficients.


Figure D.10: Maneuver 2: Estimated errors for the reconnaissance and surveillance performed at flight condition 3 with −30% uncertainty in the aerodynamic coefficients.


Figure D.11: Maneuver 2: Reconnaissance and surveillance path performed at flight condition 1 with left aileron locked at +10 deg.


D.3 Simulation Results of Chapter 8


Figure D.12: Simulation results for the integrated adaptive controller at flight condition 2 and failure scenario 1: Cmq = 0 after 20 seconds.


Figure D.13: Simulation results for the modular adaptive controller at flight condition 2 and failure scenario 1: Cmq = 0 after 20 seconds.

Figure D.14: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)


Figure D.15: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Body pitch moment coefficient versus angle of attack. The blue line represents the nominal values, the red line the post-failure values.

Figure D.16: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Body pitch moment coefficient error versus angle of attack. The blue line represents the actual error, the red line the estimated error at the end of the simulation.

Figure D.17: Simulation results for the modular adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)

Figure D.18: Simulation results for the integrated adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)

Figure D.19: Simulation results for the modular adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)

D.4 Simulation Results of Chapter 9

Figure D.20: Simulation scenario 3 results for the modular adaptive controller with I&I estimator where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second. (Panels: (a) reference tracking, (b) surface deflections, (c) control moment, (d) estimator parameters.)

Figure D.21: Simulation scenario 4 results for the modular adaptive controller with I&I estimator where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees after 1 second. (Panels: (a) reference tracking, (b) surface deflections, (c) control moment, (d) parameter estimation.)

Figure D.22: Simulation results for the I&I based adaptive controller at flight condition 2 and failure scenario 1: Cmq = 0 after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)

Figure D.23: Simulation results for the I&I based adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)

Figure D.24: Simulation results for the I&I based adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds. (Panels: stick and rudder deflections, normal accelerations, angle of attack and sideslip, angular rates, control surface deflections, and the estimated ∆CL, ∆Cm, ∆CY, ∆Cl and ∆Cn components versus time.)


Nomenclature

Abbreviations

ABS       Adaptive Backstepping
ADMIRE    Aerodata Model in Research Environment
AIAA      American Institute of Aeronautics and Astronautics
AMS       Attainable Moment Set
BS        Backstepping
CA        Control Allocation
CABS      Constrained Adaptive Backstepping
CAP       Control Anticipation Parameter
CFD       Computational Fluid Dynamics
CG        Center of Gravity
CLF       Control Lyapunov Function
DOF       Degrees of Freedom
DUT       Delft University of Technology
EA        Eigenstructure Assignment
FBL       Feedback Linearization
FBW       Fly-By-Wire
FDIE      Fault Detection, Isolation and Estimation
HJB       Hamilton-Jacobi-Bellman
I&I       Immersion and Invariance
IEEE      Institute of Electrical and Electronics Engineers
IMM       Interacting Multiple Model
ISS       Input-to-State Stable
LOES      Lower Order Equivalent System
LQR       Linear Quadratic Regulator
MAV       Mean Absolute Value
MMST      Multiple Model Switching and Tuning
MPC       Model Predictive Control
MRAC      Model Reference Adaptive Control
NASA      National Aeronautics and Space Administration
NDI       Nonlinear Dynamic Inversion
NLR       National Aerospace Laboratory
NN        Neural Network
PCA       Propulsion Controlled Aircraft
PE        Persistently Exciting
PIM       Pseudo-Inverse Method
QFT       Quantitative Feedback Theory
QP        Quadratic Programming
RFC       Reconfigurable Flight Control
RLS       Recursive Least-Squares
RMS       Root Mean Square
SCAS      Stability and Control Augmentation System
SMC       Sliding Mode Control
UAV       Unmanned Aerial Vehicle
USAF      United States Air Force
WPI       Weighted Pseudo-Inverse

Greek Symbols

α         Aerodynamic angle of attack
α∗        Virtual control
β         Aerodynamic angle of sideslip
β∗        Continuously differentiable function vector
χ         Flight path heading angle
δa        Aileron deflection angle
δe        Elevator deflection angle
δr        Rudder deflection angle
δal       Left aileron deflection angle
δar       Right aileron deflection angle
δel       Left elevator deflection angle
δer       Right elevator deflection angle
δlef      Leading edge flap deflection angle
δtef      Trailing edge flap deflection angle
δth       Throttle position
ǫ∗        Continuously differentiable function vector
γ         Flight path climb angle
γ∗        Update gain
θ̂∗        Parameter estimate (vector)
κ∗        Nonlinear damping gain
µ         Aerodynamic bank angle
φ         Aircraft body axis roll angle
ψ         Aircraft body axis yaw angle
ρ         Air density
σ∗        Invariant manifold
τeng      Engine lag time constant
θ         Aircraft body axis pitch angle
θ∗        Unknown parameter vector
θ̃∗        Parameter estimation error (vector)
ϕ∗        Regressor vector

Roman Symbols

c̄             Mean aerodynamic chord length
L̄             Total rolling moment
M̄             Total pitching moment
N̄             Total yawing moment
q̄             Dynamic air pressure
X̄             Total force in body x-direction
Ȳ             Total force in body y-direction
Z̄             Total force in body z-direction
z̄∗            Compensated tracking error
b             Reference wing span
C∗            Non-dimensional aerodynamic coefficient
c∗            Control gain
e∗            Prediction error
FB            Body-fixed reference frame
FE            Earth-fixed reference frame
FO            Vehicle carried local earth reference frame
FS            Stability axes reference frame
FT            Total thrust
FW            Wind axes reference frame
g             Gravity acceleration
g1, g2, g3    Wind axes gravity components
h             Altitude
Heng          Engine angular momentum
Ix            Roll moment of inertia
Iy            Pitch moment of inertia
Iz            Yaw moment of inertia
Ixy, Ixz, Iyz Product moments of inertia
k∗            Integral gain
M             Mach number
m             Total aircraft mass
ny            Normal acceleration in body y-axis
nz            Normal acceleration in body z-axis
p             Body axis roll rate
Pa            Engine power, percent of maximum power
Pc            Commanded engine power to the engine, percent of maximum power
Pc∗           Commanded engine power based on throttle position, percent of maximum power
ps            Stability axis roll rate
pstat         Static air pressure
q             Body axis pitch rate
q0, q1, q2, q3  Quaternion components
qs            Stability axis pitch rate
r             Body axis yaw rate
r∗            Dynamic scaling parameter
rs            Stability axis yaw rate
S             Reference wing area
T             Air temperature
Tidle         Idle thrust
Tmax          Maximum thrust
Tmil          Military thrust
u             Aircraft velocity in body x-direction
u             System input
v             Aircraft velocity in body y-direction
V∗            (Control) Lyapunov function
VT            Total velocity
w             Aircraft velocity in body z-direction
x             System states
x∗            System state
xE, yE, zE    Aircraft position w.r.t. reference point
xcgr          Reference center of gravity location
xcg           Center of gravity location
y             System output
yr            Reference signal
z∗            Tracking error

Samenvatting

Driven by technological developments in aerospace engineering, the performance requirements for modern fighter aircraft have become ever more demanding over the past decades, while at the same time the size of the desired operational flight envelope has grown considerably. To achieve extreme maneuverability, these aircraft are often designed to be aerodynamically unstable and are equipped with redundant control actuators. A good example is the Lockheed Martin F-22 Raptor, which uses a so-called thrust vectored control system to obtain a higher degree of maneuverability. In addition, the survivability requirements of modern warfare are becoming ever stricter for both manned and unmanned combat aircraft. Accounting for all of these requirements in the design of the control systems for this type of aircraft is an enormous challenge for control engineers.

To date, most aircraft flight control systems are designed using linearized aircraft models, each valid at one trim condition in the operational flight envelope. Using well-established classical control techniques, a linear controller can be derived for each local model. The gains of the individual linear controllers can be stored in tables, and by interpolating between them a single controller is in effect obtained that is valid throughout the operational flight envelope. A problem with this approach, however, is that high performance and robustness requirements cannot be guaranteed for complex nonlinear systems such as modern fighter aircraft.

Nonlinear control methods have been developed to resolve the shortcomings of this classical approach. The theoretically well-founded nonlinear dynamic inversion (NDI) method is the best known and most widely used of these techniques. NDI is a control method that can explicitly deal with systems containing nonlinearities. By applying nonlinear feedback and state transformations, the nonlinear system can be transformed into a constant linear system without making use of linear approximations of the system. A classical controller can then be designed for the resulting linear system. However, performing a perfect nonlinear dynamic inversion requires a very accurate system model. Deriving such a model for a fighter aircraft is a very costly and time-consuming process, since it requires wind tunnel experiments, computational fluid dynamics (CFD) computations and an extensive flight test program. The resulting empirical aircraft model will never be 100% accurate. The deficiencies in the model can be compensated by deriving a robust linear controller for the NDI-linearized system. But even then the desired flight performance cannot be maintained in the case of gross faults caused by large, sudden changes in the aircraft dynamics, for example as a result of structural damage or an actuator failure.

A more elegant way of dealing with large model uncertainties is to consider an adaptive control system with some form of real-time model identification. Recent developments in computers and available computing power have made it possible to implement more complex, adaptive flight control systems. Naturally, an adaptive control system has the potential to do more than compensate for model uncertainties; it can also identify sudden changes in the dynamic behavior of the aircraft. Such changes will generally lead to an increased pilot workload or even to complete loss of control of the aircraft. If the post-failure system dynamics of the aircraft can be estimated correctly by the model identification system, the surplus of control actuators and the fly-by-wire structure of modern fighter aircraft can be exploited to reconfigure the control system. Several methods are available to design an estimator that updates the aircraft model used by the control system, for example neural networks or least-squares techniques.

A drawback of an adaptive design with a separate estimator is that the certainty equivalence principle is not valid for nonlinear systems. In other words, the dynamics of the estimator are not fast enough to cope with the possibly faster-than-linear growth of instabilities in nonlinear systems. Overcoming this problem requires a controller with strong parametric robustness properties. As an alternative solution, the controller and estimator can be designed as one integrated system using the adaptive backstepping method. Adaptive backstepping makes it possible to derive a controller for a broad class of nonlinear systems with parametric uncertainties by systematically constructing a Lyapunov function for the closed-loop system.
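As a concrete illustration of this construction, consider a deliberately simple scalar example (a generic, textbook-style sketch that is not taken from the thesis itself; the system, the regressor $\varphi(x)$ and the gains are purely illustrative, although the symbols $z$, $c$, $\gamma$ and $\hat{\theta}$ follow the nomenclature used here). For $\dot{x} = \theta\varphi(x) + u$ with unknown constant parameter $\theta$, tracking error $z = x - y_r$ and Lyapunov function $V = \tfrac{1}{2}z^2 + \tfrac{1}{2\gamma}\tilde{\theta}^2$, where $\tilde{\theta} = \hat{\theta} - \theta$, the control and update laws

\[
u = \dot{y}_r - c\,z - \hat{\theta}\,\varphi(x), \qquad
\dot{\hat{\theta}} = \gamma\,\varphi(x)\,z, \qquad c, \gamma > 0,
\]

give $\dot{V} = -c\,z^2 \le 0$, so the tracking error converges to zero while the parameter estimate remains bounded. In the flight control designs considered here the same construction is applied recursively, via intermediate virtual controls, through the cascaded blocks of the aircraft dynamics, as described below.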

prestatie- en stabiliteitseigenschappen theoretisch aantoonbaar zijn. • Het besturingssysteem verbetert de prestaties en de overlevingskansen van het vliegtuig wanneer er verstoringen optreden als gevolg van schade. • De algoritmes die het regelsysteem beschrijven bezitten uitstekende numerieke stabiliteitseigenschappen en de benodigde rekenkracht is klein (een real-time implementatie is haalbaar). Adaptive backstepping is een recursieve, niet-lineaire ontwerpmethode die is gebaseerd op Lyapunovs stabiliteitstheorie en gebruik maakt van dynamische parameter-updatewetten om te compenseren voor parametrische onzekerheden. De gedachte achter backstepping is om een regelaar recursief af te leiden door sommige toestandsvariabelen als ‘virtuele’ systeeminputs te beschouwen en hier tussenliggende, virtuele regelaars voor te ontwerpen. Backstepping realiseert globale asymptotische stabiliteit voor de toestandsvariabelen van het gesloten lus systeem. Het bewijs van deze eigenschappen is een direct gevolg van de recursieve procedure, aangezien op deze manier een Lyapunov functie wordt geconstrueerd voor het gehele systeem, inclusief de parameterschattingen. De tracking-fouten drijven het parameterschattingsproces van de procedure. Tevens is het mogelijk om fysieke beperkingen van systeeminputs en toestandsvariabelen mee te nemen in het ontwerp, zodat het identificatieproces niet wordt verstoord tijdens periodes van actuatorsaturatie. Een keerzijde van de ge¨ıntegreerde adaptive backstepping methode is dat de geschatte parameters slechts pseudo-schattingen zijn van de echte onzekere parameters. Er is geen enkele garantie dat de echte waardes van de onzekere parameters worden gevonden, aangezien de adaptatie alleen probeert te voldoen aan een totaal systeemstabiliteitscriterium, oftewel de Lyapunov functie. Verder is het zo dat het verhogen van de adaptatieversterkingsfactoren niet noodzakelijkerwijs de responsie van het gesloten lus systeem verbetert, doordat er een sterke koppeling is tussen de regelaar en dynamica van de schatter. De immersion en invariance (I&I) methode biedt een alternatieve manier om een nietlineaire schatter te construeren. Met deze aanpak is het mogelijk om voorgeschreven stabiele dynamica toe te wijzen aan de parameterschattingsfout. De resulterende schatter wordt gecombineerd met een backstepping regelaar om tot een modulaire, adaptieve regelmethode te komen. De op basis van I&I ontworpen schatter is snel genoeg om om te gaan met de potenti¨ele sneller-dan-lineaire groei van niet-lineaire systemen. De resulterende modulaire regelmethode is veel makkelijker te tunen dan de standaard adaptive backstepping methode waarbij de schatter wordt aangepast op basis van de trackingfouten. Het is zelfs zo dat het gesloten lus systeem, verkregen door de toepassing van de op I&I gebaseerde adaptive backstepping regelaar, kan worden gezien als een meertrapsverbinding tussen twee stabiele systemen met voorgeschreven asymptotische karakteristieken. Het gevolg is dat de prestaties van het gesloten lus systeem met de nieuwe adaptieve regelaar significant kunnen worden verbeterd. Om een real-time implementatie van adaptieve regelaars mogelijk te maken moet de complexiteit zoveel mogelijk beperkt worden. Als oplossing wordt het operationele vlieg-

A downside of the integrated adaptive backstepping method is that the estimated parameters are only pseudo-estimates of the true uncertain parameters. There is no guarantee that the true values of the uncertain parameters will be found, since the adaptation only tries to satisfy an overall system stability criterion, namely the Lyapunov function. Furthermore, increasing the adaptation gains does not necessarily improve the response of the closed-loop system, because of the strong coupling between the controller and the dynamics of the identifier.

The immersion and invariance (I&I) method offers an alternative way to construct a nonlinear identifier. With this approach it is possible to assign prescribed, stable dynamics to the parameter estimation error. The resulting identifier is combined with a backstepping controller to arrive at a modular, adaptive control method. The I&I-based identifier is fast enough to cope with the potential faster-than-linear growth of nonlinear systems. The resulting modular control method is much easier to tune than the standard adaptive backstepping method, in which the identifier is adjusted on the basis of the tracking errors. In fact, the closed-loop system obtained by applying the I&I-based adaptive backstepping controller can be viewed as a cascade interconnection of two stable systems with prescribed asymptotic characteristics. As a consequence, the performance of the closed-loop system with the new adaptive controller can be improved significantly.

To make a real-time implementation of adaptive controllers possible, their complexity must be limited as much as possible. As a solution, the operational flight envelope is divided into multiple regions, each with a locally valid aircraft model. In this way, the identifier only has to update a few local models at each time step, which reduces the computational load of the algorithm. Another advantage of using multiple local models is that information from models that are not updated at a given time step is retained. In other words, the identifier has memory capabilities. B-spline networks, selected for their excellent numerical properties, are used to provide smooth transitions between the local models in the different regions of the flight envelope.
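
As a purely illustrative sketch of this local-model scheme (not the implementation, data or software used in the thesis), the Python fragment below blends locally valid parameter vectors with first-order (triangular) B-spline weights over a single scheduling variable; the region centers, parameter values, update gain and all names are hypothetical placeholders.

import numpy as np

# Illustrative only: region centers, parameter values and the update gain are made up.
alpha_knots = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])  # region centers, e.g. angle of attack [deg]
theta_local = np.zeros((len(alpha_knots), 4))            # one local parameter vector per region

def bspline_weights(alpha, knots):
    """First-order (triangular) B-spline weights: a partition of unity with at most
    two non-zero weights, giving smooth transitions between neighboring regions."""
    w = np.zeros(len(knots))
    i = int(np.clip(np.searchsorted(knots, alpha) - 1, 0, len(knots) - 2))
    t = float(np.clip((alpha - knots[i]) / (knots[i + 1] - knots[i]), 0.0, 1.0))
    w[i], w[i + 1] = 1.0 - t, t
    return w

def blended_parameters(alpha):
    """Parameter vector supplied to the control law at the current operating point."""
    return bspline_weights(alpha, alpha_knots) @ theta_local

def update_local_models(alpha, correction, gain=0.1):
    """Only the active local models receive a (weighted) update; inactive models
    keep their previous estimates, which gives the identifier its memory."""
    w = bspline_weights(alpha, alpha_knots)
    for i in np.flatnonzero(w):
        theta_local[i] += gain * w[i] * correction

update_local_models(7.5, np.ones(4))
print(blended_parameters(7.5))

Because the weights form a partition of unity and only a few of them are non-zero at any operating point, only the active local models are updated at a given time step while the others retain their previously identified values, which corresponds to the memory property mentioned above.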

The adaptive backstepping flight control systems developed in this thesis have been applied to a high-fidelity dynamic F-16 model and evaluated in numerical simulations focused on various control problems. The adaptive flight controllers are compared with the baseline F-16 control system, which is based on classical control methods, and with a non-adaptive NDI design. Their performance is compared in simulation scenarios at various flight conditions in which the aircraft model is suddenly confronted with an actuator failure, longitudinal center-of-gravity shifts and changes in the aerodynamic coefficients. All numerical simulations can be run in real time on a standard desktop computer without any problems. The results of the numerical simulations show that the various adaptive controllers yield a significant performance improvement over an NDI-based control system for the simulated damage cases. The modular adaptive backstepping design with I&I identifier gives the best performance and is the easiest to tune of all the adaptive flight control systems investigated. Furthermore, the controller with I&I identifier possesses the strongest stability and convergence properties. Compared with the standard adaptive backstepping controllers, the design complexity and the required computational load are somewhat higher, but the controller and identifier can be designed and tuned separately.

Based on the research performed for this thesis, it can be concluded that a reconfigurable flight control (RFC) system based on the modular adaptive backstepping design with I&I identifier has great potential, since it possesses all the properties listed in the objectives. It is recommended to investigate the performance of the RFC system based on the modular adaptive backstepping design with I&I identifier in additional simulation scenarios. The evaluation of the adaptive flight control systems in this thesis is limited to simulation scenarios with actuator failures, symmetric center-of-gravity shifts and uncertainties in the aerodynamic coefficients. The research would be of more value if simulations with asymmetric disturbances, such as partial wing loss, had also been performed. A separate study would first be required to obtain the realistic aerodynamic data needed for the F-16 model. It remains an open problem to develop an adaptive flight envelope protection system that can estimate the reduced flight envelope of the damaged aircraft and pass it on to the controller, the pilot and the guidance system. Finally, it is important to evaluate and validate the proposed RFC system with test pilots. The pilot workload and the handling qualities after a damage case with the RFC system should be compared with those of the baseline controller. At the same time, a study can be performed into the interaction between the reactions of the pilot and the actions of the adaptive element of the control system when damage or an actuator failure occurs suddenly.

Acknowledgements

This thesis is the result of four years of research within the Aerospace Software and Technology Institute (ASTI) at the Delft University of Technology. During this period, many people contributed to the realization of this work. I am very grateful to all of these people, but I would like to mention some of them in particular.

First of all, I would like to thank my supervisor Dr. Ping Chu, my colleague Eddy van Oort and my promotor Prof. Bob Mulder. Dr. Ping Chu convinced me to pursue a Ph.D. degree and I am indebted to him for his enthusiastic scientific support that has kept me motivated in these past years. Moreover, I always enjoyed our social discussions on practically anything. I want to thank Eddy van Oort for his cooperation and the many inspiring discussions we have had. Eddy started his related Ph.D. research a few months after me; the modular adaptive backstepping flight control designs with least-squares identifier, used for comparison in this thesis, were mainly designed by him. I will always have many fond memories of the trips we made to conference meetings around the world. I am very grateful to Prof. Bob Mulder for his scientific support, his expert advice and for being my promotor. Thanks to Prof. Bob Mulder's extensive knowledge and experience in the field of aerospace control and simulation, he could always provide me with a fresh perspective on my work.

This research would not have been possible without the efforts of Prof. Lt. Gen. (ret.) Ben Droste, former commander of the Royal Netherlands Air Force and former dean of the Faculty of Aerospace Engineering, and the support of the National Aerospace Laboratory (NLR). I would like to thank the people at the NLR, and especially Jan Breeman, for their scientific input and support. I am also indebted to my thesis committee for taking the time to read this book and making the (long) trip to The Netherlands.

I would like to thank all of my colleagues at ASTI, in particular Erikjan van Kampen, Elwin de Weerdt, Meine Oosten and Vera van Bragt. I am also grateful to the people at the Control and Simulation Division of the Delft University of Technology, especially to Thomas Lombaerts and Bertine Markus for their assistance with the administrative aspects of the thesis. I would like to express my gratitude to the people at Lockheed Martin and the Royal Netherlands Air Force, as well as to the many reviewers that read the journal papers containing parts of this research, for providing me with valuable scientific input and practical expertise. Last but certainly not least, I am truly grateful to my family, especially my parents, my brother Rutger and my girlfriend Rianne for their love and continuous support.

Rotterdam, May 2010

Lars Sonneveldt

Curriculum Vitae

Lars Sonneveldt was born in Rotterdam, The Netherlands, on July 29, 1982. From 1994 to 2000 he attended the Emmaus College in Rotterdam, obtaining the Gymnasium certificate. In 2000 he started his studies at the Delft University of Technology, Faculty of Aerospace Engineering. In 2004 he completed an internship at the Command and Control department of TNO-FEL in The Hague and obtained his B.Sc. degree. After that, he enrolled with the Control and Simulation Division for his master's program, specializing in flight control problems. In June 2006 he received his M.Sc. degree for his study on the suitability of new nonlinear adaptive control techniques for flight control design. In 2006 he started as a Ph.D. student at the Delft University of Technology within the Aerospace Software and Technology Institute (ASTI). His Ph.D. research was conducted in cooperation with the National Aerospace Laboratory (NLR) in Amsterdam and under the supervision of the Control and Simulation Division at the Faculty of Aerospace Engineering.
