Abstract: The field of distributed optimization algorithms is wide and growing, and new techniques for categorizing and analyzing existing algorithms are continually being developed. In this talk, we leverage control-theoretic analysis and block-diagram interpretations of existing algorithms to synthesize entirely new families of algorithms. These new algorithm families have a notably useful property: they are self-healing, meaning that their convergence to the correct optimizer is guaranteed even if they are initialized randomly, agents join or leave the network, local cost functions change, or packets are lost. In the setting of dynamic average consensus, we exploit the self-healing properties of our algorithm to guarantee both accuracy (i.e., exact asymptotic computation of the global average) and privacy (i.e., no node can estimate another node’s reference value). To achieve this, we require that the digraph modeling the communication between nodes satisfy certain topological conditions. We also describe a parameterized family of first-order distributed optimization algorithms that enable a network of agents to collaboratively compute a decision variable minimizing the sum of the agents' local cost functions. Our algorithms are the first single-Laplacian methods for distributed convex optimization that are entirely self-healing; we achieve this by sacrificing internal stability, a fundamental trade-off for single-Laplacian gradient-tracking methods. Finally, we extend our self-healing methods to algorithms that can potentially minimize non-convex or non-smooth objectives.
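To make the self-healing property concrete, consider the classical proportional-integral (PI) dynamic average consensus estimator of Freeman, Yang, and Lynch (CDC 2006), which converges to the average of the reference values from any initialization. The following minimal NumPy sketch (an undirected cycle graph, illustrative gains, and a forward-Euler discretization are assumptions for illustration; this is not the speaker's algorithm) shows every node's estimate reaching the global average despite fully random initial states:

    import numpy as np

    n = 5                                     # number of nodes
    A = np.zeros((n, n))                      # adjacency of an undirected cycle
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

    r = np.array([2.0, -1.0, 4.0, 0.5, 3.0])  # local reference values
    x = np.random.randn(n)                    # estimates: random initialization
    w = np.random.randn(n)                    # integral states: also random
    gamma, kP, kI, dt = 1.0, 1.0, 1.0, 0.01   # illustrative gains, Euler step

    for _ in range(20000):                    # integrate the PI dynamics
        x_dot = gamma * (r - x) - kP * (L @ x) + kI * (L @ w)
        w_dot = -kI * (L @ x)
        x, w = x + dt * x_dot, w + dt * w_dot

    print(x, r.mean())                        # every entry of x is near 1.7

By contrast, a standard gradient-tracking iteration such as DIGing (Nedić, Olshevsky, and Shi, 2017) relies on its tracking variable starting at the local gradients; a reset, a dropped packet, or a newly joined agent breaks the conserved quantity that this initialization establishes and shifts the limit point. A sketch under assumed quadratic local costs f_i(z) = (z - b_i)^2 / 2:

    import numpy as np

    n = 5
    A = np.zeros((n, n))                      # undirected cycle again
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    W = (np.eye(n) + A) / 3.0                 # doubly stochastic mixing matrix

    b = np.array([2.0, -1.0, 4.0, 0.5, 3.0])  # minimizer of the sum is b.mean()

    def grad(x):                              # local gradients: grad f_i = x_i - b_i
        return x - b

    alpha = 0.1                               # illustrative step size
    x = np.random.randn(n)                    # x may start anywhere...
    y = grad(x)                               # ...but y MUST start at the gradients
    for _ in range(2000):
        x_next = W @ x - alpha * y            # consensus step plus descent
        y = W @ y + grad(x_next) - grad(x)    # track the average gradient
        x = x_next

    print(x, b.mean())                        # each entry approaches 1.7

Re-running the second sketch with y initialized randomly drives the iterates to the wrong point: exactly the fragility that the talk's self-healing methods remove.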
Bio: Israel Donato-Ridgley received his Ph.D. in Electrical Engineering from Northwestern University (Evanston, Illinois) in summer 2023. He is an affiliate of the Center for Robotics and Biosystems. His research interests include distributed control and optimization, multi-agent systems, and privacy in multi-agent systems.
See all upcoming talks at https://www.anl.gov/mcs/lans-seminars