ABSTRACT OF THE DISSERTATION

SQP Methods for Large-Scale Optimization

by Alexander Barclay

Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. SQP methods solve a sequence of quadratic programming (QP) subproblems, where each iteration of a QP subproblem requires the solution of a large linear system for the search direction. For some problems, the number of degrees of freedom, $n_z$, is of moderate size (say, up to 1000), and the large system can be solved using two smaller dense systems: one for the QP search direction and one for the QP multipliers. In this case, the system for the search direction is $Z^T H Z p = -Z^T g$, where $Z^T H Z$ is an $n_z \times n_z$ reduced Hessian.

This thesis is concerned with the formulation and analysis of two SQP algorithms designed to be efficient when $n_z$ is large ($n_z \gg 1000$).

The first method uses the conjugate-gradient method to solve the linear systems. The reduced system is shown to be equivalent to a certain least-squares problem that can be solved using the conjugate-gradient algorithm LSQR. Two preconditioners are proposed to accelerate convergence. The conjugate-gradient algorithm is implemented in a large-scale SQP method based on the optimization code SNOPT, and numerical results are presented to demonstrate the efficiency of the algorithm for problems with many degrees of freedom.

The second method uses a Schur-complement approach to solve a large sparse system derived directly from the optimality conditions. The search directions are calculated using a fixed large sparse system together with a small dense system that incorporates the changes to the active set at each QP iteration.
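The reduced system $Z^T H Z p_z = -Z^T g$ can be solved iteratively using only matrix-vector products, so the reduced Hessian never needs to be formed. The following is a minimal NumPy sketch of this idea using plain conjugate gradients; it is illustrative only (the dissertation's method uses LSQR with preconditioning inside SNOPT), and the dense null-space basis `Z`, the tolerance, and the function name `reduced_cg` are assumptions made for the example.

```python
import numpy as np

def reduced_cg(H, Z, g, tol=1e-10, maxiter=200):
    """Solve (Z^T H Z) p_z = -Z^T g by conjugate gradients, using only
    products with H and Z, then return the full-space step p = Z p_z.
    Illustrative sketch, not the dissertation's implementation."""
    b = -Z.T @ g
    p_z = np.zeros_like(b)
    r = b.copy()                      # residual b - (Z^T H Z) p_z
    d = r.copy()                      # search direction
    rs = r @ r
    for _ in range(maxiter):
        Hd = Z.T @ (H @ (Z @ d))      # reduced-Hessian product, never formed
        alpha = rs / (d @ Hd)
        p_z += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return Z @ p_z

# Small example: Z spans the null space of a constraint matrix A,
# and H is made positive definite so Z^T H Z is SPD.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
_, _, Vt = np.linalg.svd(A)
Z = Vt[2:].T                          # orthonormal basis for null(A)
M = rng.standard_normal((5, 5))
H = M @ M.T + 5 * np.eye(5)
g = rng.standard_normal(5)
p = reduced_cg(H, Z, g)
```

The computed step stays in the null space of the constraints ($Ap = 0$) and satisfies the reduced optimality condition $Z^T(Hp + g) = 0$.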
A general algorithm for convex quadratic programming is discussed, and a Schur-complement method is formulated for use with the algorithm. In addition, a new Schur-complement method is derived for minimizing an $\ell_2$ composite objective function when the initial point is not feasible. Modifications to the standard QP algorithm are presented that allow the use of a composite objective.
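The core of a Schur-complement method is that the large "fixed" system is factorized once, and each active-set change adds only a border that is handled through a small dense system. The sketch below shows the generic bordered-system calculation; it is not taken from the dissertation, and the names `schur_solve` and `solveK` are illustrative. A real implementation would reuse a sparse factorization of $K$ rather than calling a dense solver repeatedly.

```python
import numpy as np

def schur_solve(K, B, f, h, solveK):
    """Solve the bordered system
        [ K   B ] [x]   [f]
        [ B'  0 ] [y] = [h]
    via the Schur complement C = -B' K^{-1} B: all solves involve the
    fixed matrix K, plus one small dense system in y."""
    KinvB = solveK(B)                 # one K-solve per border column
    Kinvf = solveK(f)
    C = -B.T @ KinvB                  # small dense Schur complement
    y = np.linalg.solve(C, h - B.T @ Kinvf)
    x = Kinvf - KinvB @ y
    return x, y

# Small example with a dense stand-in for the fixed sparse matrix K.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
K = M @ M.T + 6 * np.eye(6)           # symmetric positive definite
B = rng.standard_normal((6, 2))       # border from active-set changes
f = rng.standard_normal(6)
h = rng.standard_normal(2)
x, y = schur_solve(K, B, f, h, lambda r: np.linalg.solve(K, r))
```

When a constraint enters or leaves the active set, only `B` and the small system in `y` change, so the expensive work on `K` is done once.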
