In this talk, we introduce HPR-QP, a dual Halpern Peaceman-Rachford (HPR) method designed for solving large-scale convex composite quadratic programming. One distinctive feature of HPR-QP is that, instead of working with the primal formulation, it builds on the restricted Wolfe dual introduced in recent years. It also leverages the symmetric Gauss-Seidel technique to simplify the subproblem updates without introducing the auxiliary slack variables that typically slow convergence. By restricting updates to the range space of the Hessian of the quadratic objective, HPR-QP employs proximal operators with smaller spectral norms to accelerate convergence, and shadow sequences are carefully constructed to handle the resulting range-space constraints. Additionally, HPR-QP incorporates adaptive restart and penalty-parameter update strategies, derived from the HPR method's O(1/k) convergence rate in terms of the Karush-Kuhn-Tucker residual, to further enhance its performance and robustness. Extensive numerical experiments on benchmark data sets using a GPU demonstrate that our Julia implementation of HPR-QP significantly outperforms state-of-the-art solvers in both speed and scalability.
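As a point of reference for the Halpern scheme underlying HPR, below is a minimal Julia sketch of the generic Halpern iteration x^{k+1} = (1/(k+2)) x^0 + ((k+1)/(k+2)) T(x^k), the anchored fixed-point update that yields the O(1/k) rate mentioned above. It is an illustration only, not the HPR-QP solver: the operator T here is a hypothetical placeholder, not the Peaceman-Rachford operator applied to the restricted Wolfe dual.

```julia
# Minimal sketch of the generic Halpern iteration
#   x^{k+1} = 1/(k+2) * x^0 + (k+1)/(k+2) * T(x^k),
# the anchored fixed-point scheme underlying the HPR method. The operator T
# below is a toy placeholder, not the Peaceman-Rachford operator of HPR-QP.
function halpern_iteration(T, x0; iters::Int = 1000)
    x = copy(x0)
    for k in 0:(iters - 1)
        x = x0 ./ (k + 2) .+ ((k + 1) / (k + 2)) .* T(x)
    end
    return x
end

# Toy usage: T(x) = x/2 is the proximal map of f(x) = ||x||^2 / 2 (unit step),
# a firmly nonexpansive operator whose unique fixed point is the origin.
T(x) = x ./ 2
x_star = halpern_iteration(T, [1.0, -2.0, 3.0])
```

In HPR-QP itself, the same O(1/k) bound on the Karush-Kuhn-Tucker residual is what drives the adaptive restart and penalty-parameter update strategies described in the abstract.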
Yancheng Yuan is an Assistant Professor at the Department of Applied Mathematics, The Hong Kong Polytechnic University. His research focuses on continuous optimization, the mathematical foundations of data science, and data-driven applications. His research has been published in prestigious academic journals and conferences, including SIAM Journal on Optimization, Mathematical Programming Computation, Journal of the American Statistical Association, Journal of Machine Learning Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, NeurIPS, ICML, ICLR, ACM WWW, and ACM SIGIR. His papers were selected as Best Paper Award finalists at ACM WWW 2021 and ACM SIGIR 2024.