Hartman, Marie [UCL]
De Poorter, Alexia [UCL]
Glineur, François [UCL]
Hendrickx, Julien [UCL]
This master thesis analyzes and improves the exact worst-case performance of iterative black-box first-order methods for the unconstrained optimization of smooth strongly convex functions. To characterize and compare optimization methods, we evaluate the exact worst-case performance of their iterates with respect to given performance measures using the toolbox PESTO. This toolbox solves instances of the performance estimation problem by recasting them as semidefinite programs.

In this master thesis, we focus on the gradient method and on the fast gradient method developed by Nesterov. Using PESTO, we observe that, unlike the gradient method, the fast gradient method derives little benefit from the strong convexity property. To overcome this, several versions of the fast gradient method specifically designed for smooth strongly convex functions exist, but these are not robust to a wrong estimate of the strong convexity parameter. It is therefore necessary to find versions of the fast gradient method that improve on the performance of the classical fast gradient method for smooth strongly convex functions while either not depending on the strong convexity parameter or remaining robust to a wrong estimate of it.

To this end, restarting has been introduced: applying a restart consists of stopping the algorithm, erasing its memory, and taking the last iterate as the new initial point. This master thesis implements three restarting schemes: the fixed restart, which applies a restart after a fixed number of iterations; the adaptive restart, which applies a restart whenever a certain restarting condition is satisfied; and the soft restart, which applies a fraction of a restart at each iteration.

First, using an optimal restarting interval, we show that the fixed restart improves the performance of the fast gradient method by 32% to 60% and is robust, especially to underestimation of the strong convexity parameter. Then, we show that the adaptive restart does not improve the worst-case performance of the fast gradient method. Finally, we design several soft restart strategies; one of them improves by a further 12% to 25% on the performance of the fast gradient method with the optimal restarting interval and is also robust to wrong estimates of the strong convexity parameter.
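As a concrete illustration of how a performance estimation problem is posed, the short sketch below uses PEPit, the Python counterpart of the PESTO toolbox, to compute the exact worst-case accuracy of the gradient method on a smooth strongly convex function. The constants mu, L and n and the step size 1/L are placeholder choices for the illustration, not values from the thesis.

# Illustrative sketch with PEPit (the Python counterpart of PESTO);
# mu, L and n are placeholders, not values from the thesis.
from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction

mu, L, n = 0.1, 1.0, 5

problem = PEP()
func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
xs = func.stationary_point()                       # the (unknown) minimizer x*
fs = func(xs)                                      # optimal value f*

x = problem.set_initial_point()
problem.set_initial_condition((x - xs) ** 2 <= 1)  # ||x0 - x*||^2 <= 1

for _ in range(n):                                 # n steps of the gradient method
    x = x - (1 / L) * func.gradient(x)

problem.set_performance_metric(func(x) - fs)       # worst case of f(x_n) - f*
tau = problem.solve()                              # solves the underlying semidefinite program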
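The restart mechanism itself is simple enough to sketch. The following minimal Python illustration (an assumed implementation, not code from the thesis) wraps Nesterov's fast gradient method for an L-smooth convex function in a fixed restart: every k iterations the scheme is stopped, the momentum memory is erased, and the last iterate becomes the new initial point. Here grad_f, L, k and n_restarts are placeholders.

import numpy as np

def fgm(grad_f, x0, L, k):
    # k iterations of Nesterov's fast gradient method for an L-smooth convex f.
    x, y, t = x0, x0, 1.0
    for _ in range(k):
        x_next = y - grad_f(y) / L                      # gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2      # momentum parameter update
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # Nesterov extrapolation
        x, t = x_next, t_next
    return x

def fgm_fixed_restart(grad_f, x0, L, k, n_restarts):
    # Fixed restart: after every k iterations, erase the memory and
    # take the last iterate as the new initial point.
    x = x0
    for _ in range(n_restarts):
        x = fgm(grad_f, x, L, k)
    return x

For a quadratic f(x) = x^T A x / 2, for instance, grad_f would be lambda x: A @ x; the interval k plays the role of the restarting interval discussed above.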
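An adaptive restart replaces the fixed interval by a restarting condition checked at every iteration. Continuing the sketch above, the variant below uses the gradient-based test of O'Donoghue and Candès (restart when the gradient makes an acute angle with the last momentum step); this condition is a standard example from the literature, shown for illustration only, and is not claimed to be the condition analyzed in the thesis.

def fgm_adaptive_restart(grad_f, x0, L, n_iter):
    # Adaptive restart sketch: reset the momentum whenever the restarting
    # condition fires. The test g @ (x_next - x) > 0 is one standard choice
    # (O'Donoghue and Candes), used here purely as an illustration.
    x, y, t = x0, x0, 1.0
    for _ in range(n_iter):
        g = grad_f(y)
        x_next = y - g / L
        if g @ (x_next - x) > 0:              # momentum works against the gradient: restart
            x, y, t = x_next, x_next, 1.0     # erase memory, restart from the last iterate
            continue
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x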


Bibliographic reference: Hartman, Marie ; De Poorter, Alexia. Automated estimation of performance of optimization methods. Ecole polytechnique de Louvain, Université catholique de Louvain, 2021. Prom. : Glineur, François ; Hendrickx, Julien.
Permanent URL: http://hdl.handle.net/2078.1/thesis:30516