# Encyclopedia
Over the years, numerous remarkable results have been discovered through the PEP framework. However, these works often adopt different notations and proof strategies, making it challenging for newcomers to navigate the literature and connect ideas. Now, with PEPFlow, we are able to provide a unified workflow for reproducing and presenting those convergence proofs in a consistent language. The first goal of this page is to gather existing PEP-based convergence results and showcase them, along with their proofs, within the coherent and standardized framework of PEPFlow.
We roughly categorize existing results into two main classes: convex optimization algorithms and fixed-point iterations (equivalently, methods for solving monotone inclusion problems). For each class, we showcase several complete examples, each including
- a brief introduction to the problem setup and algorithm,
- the full PEP workflow, especially the sparsity pattern of the nonnegative dual variables, and
- a symbolic convergence proof,
all presented in a consistent and unified language.
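To fix ideas, here is the generic shape of a performance estimation problem in standard PEP notation (the textbook formulation, not PEPFlow-specific syntax): the worst-case performance of an algorithm run for \(N\) steps on a function class \(\mathcal{F}\) is itself an optimization problem over functions and iterates,

\[
\begin{array}{ll}
\underset{f \in \mathcal{F},\; x_0, \dots, x_N}{\text{maximize}} & \text{performance measure, e.g. } f(x_N) - f(x_\star)\\[4pt]
\text{subject to} & x_{k+1} = \text{algorithm step}(x_0, \dots, x_k; f), \quad k = 0, \dots, N-1,\\[2pt]
& \text{initial condition, e.g. } \|x_0 - x_\star\|^2 \le R^2.
\end{array}
\]

Interpolation conditions for the class \(\mathcal{F}\) reduce this infinite-dimensional problem to a finite semidefinite program. By weak duality, any set of nonnegative multipliers on the interpolation inequalities whose weighted sum dominates the performance measure certifies a convergence bound; the sparsity pattern mentioned above records which of these multipliers are nonzero, and writing out that weighted sum explicitly is exactly what a symbolic convergence proof does.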
## Convex optimization
PEPFlow enables the analysis of a broad class of convex optimization algorithms that rely on (sub)gradient evaluations and proximal operators. Following the structure of standard optimization courses (e.g., Stanford EE364b and UCLA ECE236C), this section showcases how PEPFlow systematically analyzes many first-order methods for solving convex optimization problems of the form

\[
\underset{x}{\text{minimize}} \quad f(x) + g(Ax) + h(x),
\]

where \(f\) and \(g\) are proper closed convex functions with inexpensive proximal operators, \(h\) is \(L\)-smooth and (strongly) convex, and \(A\) is a (bounded) linear operator.
- \(f = 0\) and \(g = 0\): gradient-based methods (a worked PEP for this case is sketched after this list)
- \(g = 0\): prox-grad-type methods
  - FISTA
  - OptISTA
- \(f = 0\)
  - Loris-Verhoeven algorithm (a.k.a. PDFP\(^2\)O, PAPC)
- \(h = 0\) and \(A = I\)
- \(h = 0\)
  - PDHG (a.k.a. Chambolle-Pock)
- \(A = I\)
  - Davis-Yin splitting (DYS) method
- most general setup
  - PD3O
  - PDDY
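To make the first item concrete, below is a minimal sketch of the PEP for the simplest case (\(f = g = 0\): plain gradient descent on the smooth term, denoted generically by \(f\) in the code), written directly in CVXPY. It only illustrates the shape of the underlying SDP that PEP tooling builds and solves automatically; it does not use PEPFlow's actual API, and the choices of algorithm, step size \(1/L\), and horizon \(N\) are ours for illustration. Its optimal value matches the known tight rate \(L\|x_0 - x_\star\|^2/(4N + 2)\) of Drori and Teboulle.

```python
# A minimal, self-contained PEP for N steps of gradient descent with
# step size 1/L on an L-smooth convex function, written directly in
# CVXPY. Illustrative only: this is NOT PEPFlow's API.
import numpy as np
import cvxpy as cp

N, L = 5, 1.0

# Gram basis: [x0 - xs, g0, g1, ..., gN], where xs is the minimizer
# (so its gradient is 0) and gk is the gradient at iterate xk.
dim = N + 2
G = cp.Variable((dim, dim), PSD=True)  # Gram matrix of the basis vectors
F = cp.Variable(N + 1)                 # F[i] = f(x_i) - f(xs)

def x(i):
    """Coefficients of x_i - xs in the Gram basis."""
    v = np.zeros(dim)
    if i == "star":
        return v
    v[0] = 1.0
    v[1 : i + 1] -= 1.0 / L            # x_i = x_0 - (1/L) * sum_{k<i} g_k
    return v

def g(i):
    v = np.zeros(dim)
    if i != "star":                    # gradient vanishes at xs
        v[i + 1] = 1.0
    return v

def fval(i):
    return 0.0 if i == "star" else F[i]

def dot(u, v):
    """<u, v> expressed through the Gram matrix: u^T G v."""
    return cp.sum(cp.multiply(np.outer(u, v), G))

# Interpolation conditions for L-smooth convex functions, imposed on
# every ordered pair of points:
#   f_i >= f_j + <g_j, x_i - x_j> + ||g_i - g_j||^2 / (2L)
points = ["star"] + list(range(N + 1))
cons = []
for i in points:
    for j in points:
        if i != j:
            cons.append(
                fval(i) >= fval(j) + dot(g(j), x(i) - x(j))
                + dot(g(i) - g(j), g(i) - g(j)) / (2 * L)
            )
cons.append(dot(x(0), x(0)) <= 1.0)    # initial condition ||x0 - xs||^2 <= 1

prob = cp.Problem(cp.Maximize(F[N]), cons)  # worst-case f(x_N) - f(xs)
prob.solve()
print(prob.value, "vs. the known tight bound", L / (4 * N + 2))
# After solving, cons[k].dual_value gives the nonnegative multipliers
# whose sparsity pattern underlies the symbolic proofs discussed above.
```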
NB: Feel free to explore the hyperlinks to view detailed information about each algorithm.
The connection between some of the above methods is depicted in the following diagram:
More to come… As PEPFlow continues to grow, we plan to expand this list with more examples, showcasing PEPFlow’s ability to derive convergence guarantees for an ever broader class of optimization algorithms.
## Monotone inclusion
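As a brief reminder of the setting (standard definitions, not specific to PEPFlow): given maximally monotone operators \(A\) and \(B\) on a Hilbert space \(\mathcal{H}\), one seeks

\[
x \in \mathcal{H} \quad \text{such that} \quad 0 \in A(x) + B(x),
\]

and most splitting methods for such problems are fixed-point iterations \(x_{k+1} = T(x_k)\) of a nonexpansive or averaged operator \(T\) built from resolvents \(J_{\alpha A} = (\mathrm{Id} + \alpha A)^{-1}\), which is why this class and the fixed-point viewpoint mentioned above are treated as equivalent.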
## Looking ahead
Algorithm analysis is not the limit of PEP, nor is it the limit of PEPFlow. We envision PEPFlow as a platform that not only verifies existing convergence results but also inspires new research directions. Looking ahead, we aspire for PEPFlow to contribute to

- designing new algorithms,
- revealing deep relationships among algorithms,
- advancing our theoretical understanding of optimization, monotone inclusion, and variational inequality problems,
- and beyond…
Ultimately, we hope PEPFlow becomes a tool for both research and education, one that helps students learn foundational concepts in optimization through advanced tools, and empowers scholars to experiment with and validate new ideas.
Let’s contribute to the growing body of work on PEP and optimization!