
Linear convergence of first order methods for non-strongly convex optimization

Bibliographic reference: Necoara, Ion; Nesterov, Yurii; Glineur, François. Linear convergence of first order methods for non-strongly convex optimization. In: Mathematical Programming (2018)
  1. Leventhal, D., Lewis, A.S.: Randomized methods for linear constraints: convergence rates and conditioning. Math. Oper. Res. 35(3), 641–654 (2010). doi:10.1287/moor.1100.0456
  2. Liu, J., Wright, S., Re, C., Bittorf, V., Sridhar, S.: An asynchronous parallel stochastic coordinate descent algorithm. J. Mach. Learn. Res. 16(1), 285–322 (2015)
  3. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Boston (2004). doi:10.1007/978-1-4419-8853-9
  4. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009). doi:10.1137/070704277
  5. Wright, S.J.: Coordinate descent algorithms. Math. Program. 151(1), 3–34 (2015). doi:10.1007/s10107-015-0892-3
  6. Burke, J.V., Deng, S.: Weak sharp minima revisited, Part III: error bounds for differentiable convex inclusions. Math. Program. 116(1–2), 37–56 (2009). doi:10.1007/s10107-007-0130-8
  7. Lewis, A.S., Pang, J.S.: Error bounds for convex inequality systems. In: Generalized Convexity, Generalized Monotonicity: Recent Results, Nonconvex Optimization and Its Applications, pp. 75–110. Kluwer Academic Publishers (1998). doi:10.1007/978-1-4613-3341-8_3
  8. Yang, T., Lin, Q.: A stochastic gradient method with linear convergence rate for a class of non-smooth non-strongly convex optimization. Tech. rep. (2015)
  9. Luo, Z.Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46(1), 157–178 (1993). doi:10.1007/BF02096261
  10. Necoara, I., Clipici, D.: Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds. SIAM J. Optim. 26(1), 197–226 (2016). doi:10.1137/130950288
  11. Wang, P.W., Lin, C.J.: Iteration complexity of feasible descent methods for convex optimization. J. Mach. Learn. Res. 15(4), 1523–1548 (2014)
  12. Zhang, H., Cheng, L.: Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization. Optim. Lett. 9(5), 961–979 (2015). doi:10.1007/s11590-014-0795-x
  13. Beck, A., Shtern, S.: Linearly convergent away-step conditional gradient for non-strongly convex functions. Math. Program. 164(1–2), 1–27 (2017). doi:10.1007/s10107-016-1069-4
  14. Drusvyatskiy, D., Lewis, A.: Error bounds, quadratic growth, and linear convergence of proximal methods. Tech. rep., arXiv:1602.06661 (2016)
  15. Zhou, Z., So, A.M.C.: A unified approach to error bounds for structured convex optimization problems. Math. Program. 165(2), 689–728 (2017). doi:10.1007/s10107-016-1100-9
  16. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49(4), 263–265 (1952). doi:10.6028/jres.049.027
  17. Klatte, D., Thiere, G.: Error bounds for solutions of linear equations and inequalities. Z. Oper. Res. 41(2), 191–214 (1995). doi:10.1007/BF01432655
  18. Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140(1), 125–161 (2013). doi:10.1007/s10107-012-0629-5
  19. O’Donoghue, B., Candès, E.: Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 15(3), 715–732 (2015). doi:10.1007/s10208-013-9150-3
  20. Bubeck, S.: Convex optimization: algorithms and complexity. Found. Trends Mach. Learn. 8(3–4), 231–357 (2015). doi:10.1561/2200000050
  21. O’Donoghue, B., Chu, E., Parikh, N., Boyd, S.: Conic optimization via operator splitting and homogeneous self-dual embedding. J. Optim. Theory Appl. 169(3), 1042–1068 (2016). doi:10.1007/s10957-016-0892-3
  22. Lin, H., Mairal, J., Harchaoui, Z.: A universal Catalyst for first-order optimization. In: Advances in Neural Information Processing Systems, pp. 3384–3392 (2015)