Research Overview
I'm interested in the development of practical numerical algorithms for computational optimization. This includes the design and theoretical analysis of new methods, as well as their software implementation and practical application.
My research concentrates on optimization problems that involve nonlinear objective and constraint functions. The broad themes are:
- Development and implementation of general-purpose algorithms.
- Specialized algorithms that exploit specific problem structures.
- Introduction of nonlinear optimization techniques and concepts into settings where they have not yet been exploited.
Research Projects
Interior point methods for nonconvex nonlinear constrained optimization
Starting with my PhD thesis advised by Larry Biegler, I developed Ipopt, a general-purpose open-source solver for large-scale nonlinear optimization. Ipopt was first released as Fortran code in 2000 and then reimplemented in C++ in 2005-2006 at IBM by Carl Laird and myself. Ipopt has been widely used in numerous applications and has been integrated into various commercial software packages. My work on Ipopt was awarded the 2009 INFORMS Computing Society Prize and the 2011 Wilkinson Prize for Numerical Software.
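As background for the topics listed below, the central idea of the interior point approach in Ipopt can be summarized by the barrier subproblem. The formulation below uses standard notation (objective f, equality constraints c, barrier parameter mu) and is a generic statement of the method, not an excerpt from the cited papers.

```latex
% Original problem:  min f(x)  s.t.  c(x) = 0,  x >= 0.
% Barrier subproblem, solved for a decreasing sequence of barrier parameters mu > 0:
\min_{x \in \mathbb{R}^n} \; f(x) \;-\; \mu \sum_{i=1}^{n} \ln x_i
\quad \text{subject to} \quad c(x) = 0 .
```

Solutions of the barrier subproblems converge to a solution of the original problem as mu tends to zero; at each iterate, a Newton-type step for the corresponding primal-dual equations is computed, which is where the linear solvers listed below enter.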
Various topics related to this research:
- Global and local convergence properties of filter methods. [1] [2] [3] [4] [5]
- Efficient software implementations. [1]
- Adaptive updates of barrier parameters. [1]
- Efficient linear solvers for computation of search directions. [1] [2] [3]
- Line search methods with inexact step computations. [1] [2] [3]
- Applications. [1] [2] [3] [4]
Nonlinear two-stage optimization problems
- Decomposition algorithm for nonlinear two-stage problems that is based on smoothing the recourse function and allows the use of existing nonlinear optimization algorithms (see the sketch below). [1]
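To fix ideas, a schematic statement of the two-stage setting follows. The log-barrier smoothing shown is meant to illustrate the general mechanism under standard assumptions and is my paraphrase rather than the exact construction of the cited paper.

```latex
% Two-stage problem with scenarios xi_1, ..., xi_K and recourse function Q:
\min_{x} \; f(x) + \sum_{k=1}^{K} p_k \, Q(x, \xi_k),
\qquad
Q(x,\xi) \;=\; \min_{y} \; \{\, q(y;\xi) \;:\; c(x,y;\xi) \le 0 \,\}.
% Smoothed recourse: solve the second stage with a barrier on slacks s = -c(x,y;xi) >= 0,
%   Q_mu(x,xi) = min_{y,s} { q(y;xi) - mu * sum_j ln s_j  :  c(x,y;xi) + s = 0 },
% so that Q_mu is differentiable in x and the master problem can be handed to a standard NLP solver.
```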
Optimization problems in electrical power systems
- Decomposition algorithm for optimal power flow in large-scale power grids using a smoothed second-stage response function. [1]
- Solving chance-constrained DC optimal power flow problems using a smooth approximation of the chance constraints. [1]
- Description of our GO-SNIP team algorithm for the ARPA-E Grid Optimization Competition. [1]
Simulation optimization
- Computational methods for simulation optimization using Gaussian Markov random fields that exploit sparse linear algebra (a small sketch follows this list). [1] [2]
- Ranking and selection algorithm using the gradient of conditional expected improvement.
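As an illustration of the sparse linear algebra involved, the sketch below computes the conditional mean of a Gaussian Markov random field from its sparse precision matrix. The function name and the chain-graph example are made up for this illustration and are not taken from the cited software.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gmrf_conditional_mean(Q, mu, obs_idx, obs_val):
    """Conditional mean of a GMRF given observed components.

    Q       : sparse precision matrix of the joint Gaussian (n x n)
    mu      : prior mean vector (n,)
    obs_idx : indices of observed components
    obs_val : observed values at those indices

    Uses the identity  mu_{A|B} = mu_A - Q_AA^{-1} Q_AB (x_B - mu_B),
    which only requires a sparse factorization/solve with the block Q_AA.
    """
    n = Q.shape[0]
    obs_idx = np.asarray(obs_idx)
    free_idx = np.setdiff1d(np.arange(n), obs_idx)

    Q = Q.tocsr()
    Q_AA = Q[free_idx][:, free_idx]   # precision block of unobserved nodes
    Q_AB = Q[free_idx][:, obs_idx]    # coupling block to observed nodes

    rhs = Q_AB @ (np.asarray(obs_val) - mu[obs_idx])
    correction = spla.spsolve(Q_AA.tocsc(), rhs)   # sparse solve exploits sparsity of Q
    return mu[free_idx] - correction, free_idx

# Tiny example: a chain-graph GMRF (tridiagonal precision matrix).
n = 5
Q = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
mu = np.zeros(n)
cond_mean, free_idx = gmrf_conditional_mean(Q, mu, obs_idx=[0, 4], obs_val=[1.0, -1.0])
print(dict(zip(free_idx.tolist(), cond_mean.round(3))))
```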
Optimization problems with nonlinear chance constraints
- Algorithm for nonlinear chance-constrained problems based on the sequential solution of their linearizations. [1]
- Approximation of chance constraints by a differentiable nonlinear constraint (see the sketch after this list). [1]
- Solving chance-constrained DC optimal power flow problems using the smooth approximation. [1]
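One common way to obtain such a differentiable approximation, stated here generically and not necessarily in the exact form used in the cited work, is to smooth the indicator function inside a sample-average estimate of the probability:

```latex
% Chance constraint and a smoothed sample-based surrogate (samples xi_1, ..., xi_N):
\Pr\big[\, c(x,\xi) \le 0 \,\big] \;\ge\; 1 - \alpha
\quad \leadsto \quad
\frac{1}{N} \sum_{i=1}^{N} \phi_\tau\big( c(x,\xi_i) \big) \;\ge\; 1 - \alpha,
\qquad
\phi_\tau(t) = \frac{1}{1 + e^{t/\tau}} .
% The sigmoid phi_tau approximates the indicator 1[t <= 0] as tau -> 0, so the left-hand side
% becomes a single differentiable constraint that standard nonlinear solvers can handle.
```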
Optimization problems with complementarity constraints
- New method based on a difference-of-convex-functions algorithm. [1]
- New interpretation of logical Benders decomposition as a reversed branch-and-bound method. [1]
- Formulation of l0-norm constraints as complementarity constraints (see the sketch below). [1]
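As an illustration of the reformulation idea (a standard construction; the cited paper may use a different but related form), a cardinality constraint can be written with an auxiliary variable and a complementarity condition:

```latex
% Cardinality (l0-norm) constraint expressed via complementarity with y in [0,1]^n:
\|x\|_0 \le k
\quad \Longleftrightarrow \quad
\exists\, y \in [0,1]^n : \;
\sum_{i=1}^{n} y_i \ge n - k,
\qquad
x_i \, y_i = 0 \quad (i = 1, \dots, n).
% y_i can be positive only where x_i = 0, so the sum condition forces at least n-k zeros in x.
```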
Derivative-free optimization
Nonsmooth optimization
- Limited-memory quasi-Newton methods for nonsmooth optimization. [1] [2]
- Second-order method for convex l1-regularized problems using active-set prediction (see the sketch following this list). [1]
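To make the second item concrete, the generic setting and a typical active-set prediction rule are sketched below. The specific rule shown, based on a proximal-gradient step, is a common choice that stands in for, rather than reproduces, the mechanism of the cited paper.

```latex
% l1-regularized problem and a proximal-gradient-based prediction of the zero (active) set:
\min_{x \in \mathbb{R}^n} \; F(x) = f(x) + \lambda \|x\|_1,
\qquad
\mathcal{A}(x) = \Big\{\, i \;:\; \big[ \operatorname{prox}_{\alpha\lambda\|\cdot\|_1}\!\big( x - \alpha \nabla f(x) \big) \big]_i = 0 \,\Big\},
% where prox is componentwise soft-thresholding: prox_{t|.|}(z)_i = sign(z_i) * max(|z_i| - t, 0).
% Variables in A(x) are fixed at zero, and a (quasi-)Newton step is computed on the remaining
% variables, where F is smooth as long as their signs do not change.
```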
Sequential quadratic programming methods
- Acceleration obtained by reusing matrix factorizations across several iterations. [1] [2]
- Exploitation of block-diagonal structure in a quasi-Newton SQP method (see the sketch below). [1]
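For reference, the quadratic subproblem solved at each SQP iterate x_k is stated below in standard form. The block-diagonal Hessian approximation B_k indicates the kind of structure exploited; it is my paraphrase of the setting, not the exact formulation of the cited work.

```latex
% SQP subproblem at iterate x_k with quasi-Newton Hessian approximation B_k:
\min_{d \in \mathbb{R}^n} \; \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d
\quad \text{s.t.} \quad
c(x_k) + \nabla c(x_k)^{\top} d = 0,
\qquad
B_k = \operatorname{diag}\big( B_k^{(1)}, \dots, B_k^{(m)} \big),
% so that quasi-Newton updates and the associated linear algebra decouple block by block.
```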
Algorithms for mixed-integer nonlinear optimization
During my time at IBM, I was part of a team that developed the Bonmin and Couenne open-source solvers for mixed-integer nonlinear optimization.
- Branch-and-bound algorithm implemented in Bonmin (a schematic sketch follows this list). [1] [2]
- Branching and bounds-tightening procedures implemented in Couenne. [1]
- Global solution of mixed-integer nonlinear optimization problems with separable nonconvexity. [1]
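A minimal, self-contained sketch of NLP-based branch and bound, for illustration only: it uses scipy's general-purpose bound-constrained solver for the node relaxations, prunes by the relaxation value (rigorous when node problems are solved to global optimality, e.g. in the convex case), and branches on the most fractional variable. The function name and the toy problem are invented for this example; this is not Bonmin's implementation.

```python
import numpy as np
from scipy.optimize import Bounds, minimize

def nlp_branch_and_bound(objective, x0, lb, ub, int_vars, tol=1e-6):
    """Branch and bound where each node relaxation is a box-constrained NLP.

    objective : smooth function of x (the node relaxation drops integrality)
    lb, ub    : lower/upper bound arrays defining the root node
    int_vars  : indices of variables required to be integer
    """
    best_val, best_x = np.inf, None
    stack = [(np.asarray(lb, float), np.asarray(ub, float))]
    while stack:
        node_lb, node_ub = stack.pop()
        if np.any(node_lb > node_ub + tol):
            continue                          # bounds crossed: node infeasible
        start = np.clip(x0, node_lb, node_ub)
        res = minimize(objective, start, bounds=Bounds(node_lb, node_ub))
        if not res.success or res.fun >= best_val - tol:
            continue                          # prune by infeasibility or by bound
        frac = [i for i in int_vars
                if abs(res.x[i] - round(res.x[i])) > tol]
        if not frac:                          # integer feasible: new incumbent
            best_val, best_x = res.fun, res.x.copy()
            continue
        # Branch on the most fractional integer variable.
        i = max(frac, key=lambda j: abs(res.x[j] - round(res.x[j])))
        down_ub, up_lb = node_ub.copy(), node_lb.copy()
        down_ub[i] = np.floor(res.x[i])       # child 1: x_i <= floor(x_i)
        up_lb[i] = np.ceil(res.x[i])          # child 2: x_i >= ceil(x_i)
        stack.append((node_lb, down_ub))
        stack.append((up_lb, node_ub))
    return best_val, best_x

# Toy example: minimize a convex quadratic with x[0] restricted to integers.
f = lambda x: (x[0] - 2.4) ** 2 + (x[1] - 1.3) ** 2 + 0.5 * x[0] * x[1]
val, x = nlp_branch_and_bound(f, x0=np.zeros(2), lb=[0, 0], ub=[5, 5], int_vars=[0])
print(val, x)
```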