Sebastian, E., Franceschelli, M., Gasparri, A., Montijano, E., Sagues, C. (2024). Accelerated Alternating Direction Method of Multipliers Gradient Tracking for Distributed Optimization. IEEE Control Systems Letters. doi: 10.1109/LCSYS.2024.3400699
Accelerated Alternating Direction Method of Multipliers Gradient Tracking for Distributed Optimization
Gasparri, A.
2024-01-01
Abstract
This paper presents a novel accelerated distributed algorithm for unconstrained consensus optimization over static undirected networks. The proposed algorithm combines the benefits of acceleration from momentum, the robustness of the alternating direction method of multipliers, and the computational efficiency of gradient tracking to surpass existing state-of-the-art methods in convergence speed, while preserving their computational and communication cost. First, we prove that, by applying momentum on the average dynamic consensus protocol over the estimates and gradient, we can study the algorithm as an interconnection of two singularly perturbed systems: the outer system connects the consensus variables and the optimization variables, and the inner system connects the estimates of the optimum and the auxiliary optimization variables. Next, we prove that, by adding momentum to the auxiliary dynamics, our algorithm always achieves faster convergence than the achievable linear convergence rate for the non-accelerated alternating direction method of multipliers gradient tracking algorithm case. Through simulations, we numerically show that our accelerated algorithm surpasses the existing accelerated and non-accelerated distributed consensus first-order optimization protocols in convergence speed.
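The abstract describes combining momentum-based acceleration with gradient tracking over a static undirected network. The snippet below is only a minimal, generic sketch of gradient tracking with heavy-ball momentum on a ring of agents, not the paper's ADMM-based update; the mixing matrix W, step size alpha, momentum beta, and the quadratic local costs are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's ADMM-tracking algorithm):
# gradient tracking with heavy-ball momentum over a static undirected ring of n agents.
n = 5
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, n)          # local curvatures of f_i(x) = 0.5*a_i*x^2 - b_i*x
b = rng.uniform(-1.0, 1.0, n)         # local linear terms
x_star = b.sum() / a.sum()            # consensus minimizer of sum_i f_i

# Metropolis weights for a ring graph (symmetric, doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

def grad(x):
    """Stacked local gradients of the assumed quadratic costs."""
    return a * x - b

alpha, beta = 0.1, 0.2                # step size and momentum (assumed values)
x = np.zeros(n)
x_prev = x.copy()
y = grad(x)                           # trackers initialized at the local gradients

for _ in range(300):
    # consensus mixing + descent along the tracked gradient + heavy-ball momentum
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    # dynamic average consensus on the gradients (gradient tracking step)
    y = W @ y + grad(x_new) - grad(x)
    x_prev, x = x, x_new

print("max distance to optimum:", np.max(np.abs(x - x_star)))
```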
File | Access | Type | License | Size | Format
---|---|---|---|---|---
10530190.pdf | Open access | Post-print | Not specified | 514.41 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.