Stochastic gradient and related methods for solving stochastic optimization problems have been studied extensively in recent years. It has also been shown that these algorithms and many of their convergence and complexity guarantees extend in straightforward ways to problems with simple constraints, such as when projections onto the feasible region can be computed efficiently. However, settings with general nonlinear constraints have received less attention, and many of the approaches proposed for such problems resort to penalty or (augmented) Lagrangian methods, which are often not the most effective strategies. In this work, we propose and analyze stochastic optimization methods based on the sequential quadratic optimization (commonly known as SQP) methodology, and we discuss the advantages and disadvantages of these techniques. This is joint work with Albert Berahas, Daniel P. Robinson, and Baoyu Zhou.
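For concreteness, here is a minimal sketch, not the authors' algorithm, of the kind of step an SQP method takes when the true gradient is replaced by a stochastic estimate. For the equality-constrained problem min f(x) subject to c(x) = 0, each iteration solves the quadratic subproblem min_d g^T d + (1/2) d^T H d subject to J d + c = 0, where g is a stochastic gradient estimate, H a Hessian approximation, and J the constraint Jacobian. The fixed step size and identity Hessian below are placeholder assumptions for illustration only.

```python
import numpy as np

def stochastic_sqp_step(grad_est, H, c, J):
    """One SQP step for min f(x) s.t. c(x) = 0, with a stochastic
    gradient estimate grad_est used in place of the true gradient.

    Solves the KKT system of the quadratic subproblem
        min_d  grad_est^T d + 0.5 d^T H d   s.t.  J d + c = 0.
    """
    n, m = H.shape[0], c.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_est, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]  # search direction, multiplier estimate

# Toy usage: min 0.5*||x||^2  s.t.  x1 + x2 - 1 = 0, with noisy gradients.
rng = np.random.default_rng(0)
x = np.array([2.0, -1.0])
for _ in range(50):
    grad_est = x + 0.1 * rng.standard_normal(2)  # stochastic gradient of 0.5*||x||^2
    H = np.eye(2)                                # placeholder Hessian approximation
    c = np.array([x[0] + x[1] - 1.0])
    J = np.array([[1.0, 1.0]])
    d, _ = stochastic_sqp_step(grad_est, H, c, J)
    x = x + 0.5 * d                              # fixed step size (assumption)
print(x)  # approaches the solution (0.5, 0.5)
```

In practice, adaptive step-size rules and merit-function machinery replace the fixed step used here; the sketch only illustrates how a stochastic gradient enters the SQP subproblem.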
Please use this link to attend the virtual seminar: