Multi-Objective Optimization for Flexible Transport Aircraft

Distributed Autonomous System Laboratory

Collaborators and Students: 
Project Description: 

A Research Initiation Grant through NASA EPSCoR funded work on a multiple-objective optimization (MO-Op) framework using a NASA model of a flexible-wing Generic Transport (fGTM) aircraft based on the Boeing 757. Multiple, disparate objectives do not necessarily combine well into a single reward function for standard optimal control frameworks. Other methods are needed to decide which objective is most important at a given instant so that the operation can be optimized. In some cases objectives are mutually exclusive, so that meeting one means the system cannot meet another. We showed that a couple of the problems at the forefront of this kind of control problem can be handled with reinforcement learning techniques and online learning.
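The difficulty with simply adding objectives together can be seen in a toy example. The sketch below is purely illustrative (the objective functions and the scalar decision variable are invented for this page, not taken from the fGTM work): two conflicting costs are combined in a fixed weighted sum, and the "optimal" decision is dictated entirely by the chosen weights rather than by the situation, which is why an adaptive prioritization mechanism is needed.

```python
import numpy as np

# Hypothetical illustration (not the paper's model): two conflicting
# objectives over a scalar decision u in [0, 1], e.g. tracking error
# versus structural (flex-mode) load. Improving one degrades the other.
def tracking_cost(u):
    return (1.0 - u) ** 2       # minimized at u = 1

def load_cost(u):
    return u ** 2               # minimized at u = 0

u = np.linspace(0.0, 1.0, 101)

# A fixed weighted sum commits to one trade-off in advance, so the
# weights, not the flight condition, decide which objective "wins".
for w in (0.1, 0.5, 0.9):
    total = w * tracking_cost(u) + (1 - w) * load_cost(u)
    u_star = u[np.argmin(total)]
    print(f"weight on tracking = {w:.1f} -> optimizer picks u = {u_star:.2f}")
# prints u = 0.10, 0.50, 0.90 for the three weightings
```

Because no single weighting is right at every instant, the work described below instead adapts the prioritization online.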
In the general case, the MO-Op problem comes down to prioritizing the objectives through a mechanism that a computer can understand and work with. In the paper, Development of an Adaptive-Optimal Multi-Objective Optimization Algorithm, we cast the problem as an adaptive multi-objective optimization flight control problem, in which a control policy is sought that attempts to optimize over multiple, sometimes conflicting objectives. A solution strategy using Gaussian Process (GP) based adaptive-optimal control is presented, in which the system uncertainties are learned with an online, budgeted GP. The mean of the GP is used with feedback to linearize the system, and reference model shaping is used for optimization. To make the MO-Op problem realizable online, a relaxation strategy is proposed that poses some objectives as adaptively updated soft constraints. The strategy is validated on a nonlinear roll dynamics model with simulated state-dependent flexible-rigid mode interaction.
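The core loop of the GP-based approach can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the plant, kernel length scale, gains, and the oldest-point budgeting rule are all assumptions made for the sketch, and the soft-constraint relaxation is omitted. It shows only the central idea that a budgeted GP learns an unknown state-dependent term f(x) from measured residuals, and its posterior mean is subtracted in the control law to approximately feedback-linearize the plant.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

class BudgetedGP:
    """GP regression on a capped data set (oldest point dropped when full)."""
    def __init__(self, budget=30, noise=1e-3):
        self.X = np.empty(0)
        self.y = np.empty(0)
        self.budget, self.noise = budget, noise

    def add(self, x, y):
        self.X = np.append(self.X, x)
        self.y = np.append(self.y, y)
        if self.X.size > self.budget:       # enforce the budget
            self.X, self.y = self.X[1:], self.y[1:]

    def mean(self, x):
        if self.X.size == 0:
            return 0.0
        K = rbf(self.X, self.X) + self.noise * np.eye(self.X.size)
        k = rbf(np.atleast_1d(x), self.X)
        return float((k @ np.linalg.solve(K, self.y))[0])

# Assumed scalar plant: x_dot = f(x) + u, with f unknown to the controller
f_true = lambda x: 2.0 * np.sin(3.0 * x)
gp, dt, k_fb = BudgetedGP(), 0.01, 4.0
x, x_ref = 0.5, 0.0
for _ in range(2000):
    u = -k_fb * (x - x_ref) - gp.mean(x)    # cancel the learned uncertainty
    x_dot = f_true(x) + u
    gp.add(x, x_dot - u)                    # measured residual approximates f(x)
    x += dt * x_dot
print(f"final tracking error: {abs(x - x_ref):.4f}")
```

Without the GP term this plant stalls near a spurious equilibrium where f(x) balances the feedback; with the learned mean subtracted, the closed loop behaves approximately like the linear reference dynamics and the tracking error decays.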
The flexible wings of the NASA fGTM can introduce uncertainty in control allocation due to wing twisting and oscillation. In the paper, Concurrent Learning Adaptive Control for Systems with Unknown Sign of Control Effectiveness, concurrent learning model reference adaptive control (CL-MRAC) was implemented to show that this uncertainty can be handled and that purging is necessary for proper model tracking. The method is developed for linear uncertain dynamical systems in which the sign of the control effectiveness, and the parameters of the control allocation matrix, are unknown. The approach relies on estimating the control allocation matrix from recorded and instantaneous data concurrently, while the system is actively controlled using the online-updated estimate. The tracking error and weight error convergence depend on the accuracy of the estimates of the unknown parameters, which establishes the necessity of purging the concurrent learning history stacks.
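The recorded-plus-instantaneous structure of concurrent learning can be sketched for a scalar case. This sketch is illustrative only: it assumes a known, positive control effectiveness (so it omits the unknown-sign problem the paper actually addresses), and it uses a simple oldest-first purge of the history stack. It shows why the stack matters: the recorded pairs keep the parameter update informative even after the state has settled and the instantaneous term alone carries no information.

```python
import numpy as np

# Assumed scalar plant: x_dot = theta * x + u, with theta unknown.
# The adaptive law combines the instantaneous regressor with a
# history stack of recorded (phi_j, theta*phi_j) pairs.
theta_true, theta_hat = -1.5, 0.0
gamma, dt = 5.0, 0.005          # adaptation gain and step size (assumed)
stack = []                      # history stack of (phi, measured theta*phi)
x = 1.0
for _ in range(4000):
    u = -2.0 * x - theta_hat * x            # adaptive feedback control
    x_dot = theta_true * x + u
    phi = x
    # Record only informative points; purge the oldest entry when full.
    if abs(phi) > 0.05:
        stack.append((phi, x_dot - u))      # x_dot - u isolates theta*phi
        if len(stack) > 20:
            stack.pop(0)
    # Concurrent-learning update: instantaneous term + recorded-data terms
    cl = sum(p * (y - theta_hat * p) for p, y in stack)
    theta_hat += dt * gamma * (x * (x_dot - u - theta_hat * x) + cl)
    x += dt * x_dot
print(f"parameter estimation error: {abs(theta_hat - theta_true):.4f}")
```

In this noise-free sketch every recorded pair is exact, so a stale stack is harmless; with time-varying or poorly estimated parameters, stale entries bias the update, which is the situation that motivates purging the history stacks in the paper.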