Jorge Cortés


Distributed algorithm via continuously differentiable exact penalty method for network optimization
P. Srivastava, J. Cortés
Proceedings of the IEEE Conference on Decision and Control, Miami Beach, Florida, 2018, pp. 975-980


This paper proposes a distributed optimization framework for solving nonlinear programming problems with a separable objective function and local constraints. Our approach first reformulates the original problem as an unconstrained optimization problem using continuously differentiable exact penalty function methods. This reformulation replaces the Lagrange multipliers of the original problem with Lagrange multiplier functions. Evaluating the gradient of the penalty function requires computing these multiplier functions and their gradients, a task that is in general non-distributed even when the original problem is. We show that this computation can itself be recast as a distributed, unconstrained convex optimization problem. The proposed framework opens new opportunities for distributed algorithms that apply only to unconstrained continuous optimization. It is especially useful in the special case of convex objective functions under regularity assumptions on the constraints: there, the continuous differentiability of the penalty function allows a direct implementation of distributed gradient descent to find the global optimizers. We also characterize the robustness of the proposed approach. Simulations illustrate our results.
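To illustrate the penalty idea behind the abstract, here is a minimal, centralized sketch on a toy equality-constrained problem. It uses a Fletcher-style least-squares multiplier function and a fixed penalty parameter; this is an assumed simplification for illustration, not the paper's distributed construction, and all problem data (the quadratic objective, the single linear constraint) are hypothetical.

```python
import numpy as np

# Toy problem (hypothetical, for illustration only):
#   minimize  f(x) = x1^2 + x2^2
#   subject to g(x) = x1 + x2 - 2 = 0        (optimum: x* = (1, 1))

def grad_f(x):
    return 2.0 * x

def g(x):
    return x[0] + x[1] - 2.0

grad_g = np.array([1.0, 1.0])  # constant for this linear constraint

def multiplier(x):
    # Least-squares multiplier function: lam(x) = -(dg dg^T)^{-1} dg grad_f(x).
    # Replacing the multiplier with this function of x is what makes the
    # penalty continuously differentiable.
    return -(grad_g @ grad_f(x)) / (grad_g @ grad_g)

def penalty_grad(x, eps=0.5):
    # Gradient of the exact penalty W(x) = f(x) + lam(x) g(x) + (1/eps) g(x)^2.
    # For this toy problem lam(x) = -(x1 + x2), so grad lam = (-1, -1).
    grad_lam = np.array([-1.0, -1.0])
    return (grad_f(x) + multiplier(x) * grad_g + g(x) * grad_lam
            + (2.0 / eps) * g(x) * grad_g)

# Plain gradient descent on the (now unconstrained) penalty function.
x = np.array([3.0, -1.0])
for _ in range(2000):
    x = x - 0.05 * penalty_grad(x)

print(np.round(x, 4))  # converges to the constrained optimum (1, 1)
```

In the paper's setting, the multiplier functions and their gradients cannot be evaluated locally in general; the contribution is showing that their computation can itself be posed as a distributed, unconstrained convex problem, after which updates like the gradient step above can run over the network.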


Mechanical and Aerospace Engineering, University of California, San Diego
9500 Gilman Dr, La Jolla, California, 92093-0411

Ph: 1-858-822-7930
Fax: 1-858-822-3107

cortes at
Skype id: jorgilliyo