arXiv Analytics

Search Results: Showing 1-20 of 24

  1. arXiv:2505.03468 (Published 2025-05-06)

    Multi-Class Stackelberg Games for the Co-Design of Networked Systems

    Julian Barreiro-Gomez, Ye Wang

    We investigate a co-design problem, encompassing simultaneous design of system infrastructure and control, through a game-theoretical framework. To this end, we propose the co-design problem as a two-layer hierarchical strategic interaction. At the upper layer, a leader (or multiple leaders) determines system design parameters, while at the lower layer, a follower (or multiple followers) optimizes the control strategy. To capture this hierarchy, we propose four novel classes of Stackelberg games that integrate diverse strategic behaviors, including combinations of cooperative and non-cooperative interactions across two different layers. Notably, the leaders' interactions are represented using a normal-form game, whereas the followers' interactions are modeled by different games (dynamic games in discrete time). These distinct game structures result in a Stackelberg game that accommodates different game types per layer, and/or supports heterogeneous strategic behaviors involving cooperation and non-cooperation simultaneously. Learning algorithms using the best-response dynamics are used to solve the game problems when considering a discrete strategic space for the leaders. The efficacy of the proposed approach is demonstrated through an application to the co-design of the Barcelona drinking water network.
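
    As an illustration of best-response learning over a discrete leader strategy space, a minimal sketch follows; the strategy sets, payoff functions, and follower-response stub are hypothetical placeholders, not the paper's co-design model.

    ```python
    # Minimal sketch of best-response dynamics for two leaders with discrete
    # strategy sets; all payoffs and the follower response are hypothetical.
    strategies = {0: [0.5, 1.0, 1.5], 1: [10, 20, 30]}

    def follower_cost(design):
        # Placeholder for the lower-layer (followers') dynamic game: return the
        # equilibrium control cost induced by the leaders' design choice.
        return design[0] * design[1]

    def leader_payoff(i, design):
        # Hypothetical upper-layer payoff: trade design cost against follower cost.
        return -(design[i] ** 2 + follower_cost(design))

    def best_response_dynamics(max_iters=50):
        profile = [strategies[0][0], strategies[1][0]]
        for _ in range(max_iters):
            changed = False
            for i in (0, 1):
                best = max(strategies[i],
                           key=lambda s: leader_payoff(i, profile[:i] + [s] + profile[i + 1:]))
                if best != profile[i]:
                    profile[i], changed = best, True
            if not changed:   # fixed point: mutual best responses among the leaders
                return profile
        return profile

    print(best_response_dynamics())
    ```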

  2. arXiv:2304.06667 (Published 2023-04-13)

    D-SVM over Networked Systems with Non-Ideal Linking Conditions

    Mohammadreza Doostmohammadian, Alireza Aghasi, Houman Zarrabi

    This paper considers distributed optimization algorithms, with application to binary classification via distributed support-vector machines (D-SVM) over multi-agent networks subject to link nonlinearities. The agents cooperatively solve a consensus-constrained distributed optimization problem via continuous-time dynamics, while the links are subject to strongly sign-preserving odd nonlinearities. Logarithmic quantization and clipping (saturation) are two examples of such nonlinearities. In contrast to the existing literature, which mostly considers ideal links and perfect information exchange over linear channels, we show how general sector-bounded models affect convergence to the optimizer (i.e., the SVM classifier) over dynamic balanced directed networks. In general, any odd sector-bounded nonlinear mapping can be applied to our dynamics. The main challenge is to show that the proposed system dynamics always have one zero eigenvalue (associated with consensus), while all other eigenvalues have negative real parts. This is done by recalling arguments from matrix perturbation theory. The solution is then shown to converge to the agreement state under certain conditions; for example, the gradient-tracking (GT) step size is tighter than in the linear case by factors related to the upper/lower sector bounds. To the best of our knowledge, no existing work in the distributed optimization and learning literature considers non-ideal link conditions.
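
    As an illustration (not the paper's D-SVM dynamics), the sketch below applies a logarithmic quantizer, an odd, strongly sign-preserving, sector-bounded map, to the relative states exchanged in a toy continuous-time consensus loop over a balanced directed cycle.

    ```python
    import math
    import numpy as np

    def log_quantize(z, rho=0.5):
        """Odd, sign-preserving logarithmic quantizer: snaps |z| to the nearest
        level rho**k on a log scale and keeps the sign of z (sector-bounded)."""
        if z == 0.0:
            return 0.0
        k = round(math.log(abs(z)) / math.log(rho))
        return math.copysign(rho ** k, z)

    # Toy consensus over a balanced directed cycle with nonlinear links.
    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
    x = np.array([3.0, -1.0, 0.5, 2.0])
    dt = 0.05
    for _ in range(4000):
        dx = np.array([sum(A[i, j] * log_quantize(x[j] - x[i]) for j in range(4))
                       for i in range(4)])
        x = x + dt * dx
    print(x)   # states approach agreement despite the link nonlinearity
    ```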

  3. arXiv:2210.03712 (Published 2022-10-07)

    Gain-Scheduling Controller Synthesis for Networked Systems with Full Block Scalings

    Christian A. Rösinger, Carsten W. Scherer

    This work presents a framework to synthesize structured gain-scheduled controllers for structured plants that are affected by time-varying parametric scheduling blocks. Using a so-called lifting approach, we are able to handle several structured gain-scheduling problems arising from a nested inner- and outer-loop configuration with partial or full dependence on the scheduling block. Our resulting design conditions are formulated in terms of convex linear matrix inequalities and permit handling multiple performance objectives.

  4. arXiv:2207.14770 (Published 2022-07-29)

    Informativity for centralized design of distributed controllers for networked systems

    Jaap Eising, Jorge Cortes

    Recent work in data-driven control has led to methods that find stabilizing controllers directly from measurements of an unknown system. However, for multi-agent systems we are often interested in finding controllers that take their distributed nature into account. For instance, the full state might not be available for feedback at every agent. In order to deal with such information, we consider the problem of finding a feedback controller with a given block structure based on measured data. Moreover, we provide an algorithm that, if it converges, leads to a maximally sparse controller.

  5. arXiv:2207.06559 (Published 2022-07-13)

    Fully Decentralized Model-based Policy Optimization for Networked Systems

    Yali Du, Chengdong Ma, Yuchen Liu, Runji Lin, Hao Dong, Jun Wang, Yaodong Yang
    Comments: 8 pages, 7 figures, accepted by The 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)
    Categories: cs.LG, cs.AI, cs.MA, math.OC, stat.ML

    Reinforcement learning algorithms require a large number of samples; this often limits their real-world applications even on simple tasks. The challenge is even more pronounced in multi-agent tasks, since each step of operation is more costly, requiring communication or the shifting of resources. This work aims to improve the data efficiency of multi-agent control by model-based learning. We consider networked systems where agents are cooperative and communicate only locally with their neighbors, and propose the decentralized model-based policy optimization framework (DMPO). In our method, each agent learns a dynamics model to predict future states and broadcasts its predictions via communication, and the policies are then trained on model rollouts. To alleviate the bias of model-generated data, we restrict model usage to generating myopic rollouts, thus reducing the compounding error of model generation. To preserve the independence of policy updates, we introduce an extended value function and theoretically prove that the resulting policy gradient is a close approximation to the true policy gradient. We evaluate our algorithm on several benchmarks for intelligent transportation systems, namely connected autonomous vehicle control tasks (Flow and CACC) and adaptive traffic signal control (ATSC). Empirical results show that our method achieves superior data efficiency and matches the performance of model-free methods using true models.
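
    A minimal sketch of the idea of restricting model usage to short (myopic) rollouts so that one-step model errors cannot compound far; the dynamics model, policy, and rollout length below are hypothetical stand-ins, not the DMPO implementation.

    ```python
    import numpy as np

    def learned_model(state, action):
        # Placeholder learned dynamics model (a neural network in practice);
        # here a noisy linear map stands in for an imperfect prediction.
        return 0.9 * state + 0.1 * action + np.random.normal(0, 0.01, size=state.shape)

    def policy(state):
        # Placeholder local policy for one agent.
        return -0.5 * state

    def short_rollout(state, horizon=3):
        """Generate a myopic model rollout of limited length; keeping `horizon`
        small limits how far one-step model errors can compound."""
        trajectory = [state]
        for _ in range(horizon):
            action = policy(state)
            state = learned_model(state, action)
            trajectory.append(state)
        return trajectory

    traj = short_rollout(np.array([1.0, -2.0]), horizon=3)
    print(len(traj) - 1, "model-generated steps")
    ```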

  6. arXiv:2207.05950 (Published 2022-07-13)

    Decentralized Online Convex Optimization in Networked Systems

    Yiheng Lin, Judy Gan, Guannan Qu, Yash Kanoria, Adam Wierman

    We study the problem of networked online convex optimization, where each agent individually decides on an action at every time step and agents cooperatively seek to minimize the total global cost over a finite horizon. The global cost is made up of three types of local costs: convex node costs, temporal interaction costs, and spatial interaction costs. In deciding their individual action at each time, an agent has access to predictions of local cost functions for the next $k$ time steps in an $r$-hop neighborhood. Our work proposes a novel online algorithm, Localized Predictive Control (LPC), which generalizes predictive control to multi-agent systems. We show that LPC achieves a competitive ratio of $1 + \tilde{O}(\rho_T^k) + \tilde{O}(\rho_S^r)$ in an adversarial setting, where $\rho_T$ and $\rho_S$ are constants in $(0, 1)$ that increase with the relative strength of temporal and spatial interaction costs, respectively. This is the first competitive ratio bound on decentralized predictive control for networked online convex optimization. Further, we show that the dependence on $k$ and $r$ in our results is near optimal by lower bounding the competitive ratio of any decentralized online algorithm.
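
    Ignoring constants and logarithmic factors, the scaling of the stated competitive-ratio bound $1 + \tilde{O}(\rho_T^k) + \tilde{O}(\rho_S^r)$ can be tabulated as in the sketch below; the numerical values of $\rho_T$ and $\rho_S$ are arbitrary illustrations, not values from the paper.

    ```python
    # Illustrative scaling of the bound 1 + rho_T**k + rho_S**r
    # (constants and log factors suppressed); rho values chosen arbitrarily.
    rho_T, rho_S = 0.6, 0.7
    for k, r in [(1, 1), (3, 2), (5, 4), (10, 8)]:
        bound = 1 + rho_T ** k + rho_S ** r
        print(f"k={k:2d}, r={r:2d}: competitive ratio <= ~{bound:.4f}")
    ```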

  7. arXiv:2204.05551 (Published 2022-04-12)

    Near-Optimal Distributed Linear-Quadratic Regulator for Networked Systems

    Sungho Shin, Yiheng Lin, Guannan Qu, Adam Wierman, Mihai Anitescu

    This paper studies the trade-off between the degree of decentralization and the performance of a distributed controller in a linear-quadratic control setting. We study a system of interconnected agents over a graph and a distributed controller, called $\kappa$-distributed control, which lets the agents make control decisions based on the state information within distance $\kappa$ on the underlying graph. This controller can tune its degree of decentralization using the parameter $\kappa$ and thus allows a characterization of the relationship between decentralization and performance. We show that under mild assumptions, including stabilizability, detectability, and a polynomially growing graph condition, the performance difference between $\kappa$-distributed control and centralized optimal control becomes exponentially small in $\kappa$. This result reveals that distributed control can achieve near-optimal performance with a moderate degree of decentralization, and thus it is an effective controller architecture for large-scale networked systems.
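
    The $\kappa$-distributed information structure can be encoded as a sparsity mask on the feedback gain: agent $i$ may use the state of agent $j$ only if their graph distance is at most $\kappa$. A minimal sketch with a hypothetical path graph follows.

    ```python
    import numpy as np

    def khop_mask(adjacency, kappa):
        """Return a boolean mask M with M[i, j] True iff dist(i, j) <= kappa,
        i.e. the sparsity pattern allowed for a kappa-distributed feedback gain."""
        n = adjacency.shape[0]
        reach = np.eye(n)
        hop = (adjacency > 0).astype(float)
        for _ in range(kappa):
            reach = np.minimum(reach + reach @ hop, 1.0)   # expand reach by one hop
        return reach > 0

    # Hypothetical path graph of 6 agents.
    A = np.zeros((6, 6))
    for i in range(5):
        A[i, i + 1] = A[i + 1, i] = 1.0
    print(khop_mask(A, kappa=2).astype(int))
    ```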

  8. arXiv:2204.01779 (Published 2022-04-04)

    Model-free Learning for Risk-constrained Linear Quadratic Regulator with Structured Feedback in Networked Systems

    Kyung-bin Kwon, Lintao Ye, Vijay Gupta, Hao Zhu

    We develop a model-free learning algorithm for the infinite-horizon linear quadratic regulator (LQR) problem. Specifically, (risk) constraints and structured feedback are considered in order to reduce the state deviation while allowing for a sparse communication graph in practice. By reformulating the dual problem as a nonconvex-concave minimax problem, we adopt the gradient descent max-oracle (GDmax) method and, for the model-free setting, the stochastic GDmax (SGDmax) using zero-order policy gradients. By bounding the Lipschitz and smoothness constants of the LQR cost using specifically defined sublevel sets, we can design the step size and related parameters to establish convergence to a stationary point with high probability. Numerical tests on a networked microgrid control problem validate the convergence of the proposed SGDmax algorithm while demonstrating the effectiveness of the risk constraints. The SGDmax algorithm attains a satisfactory optimality gap compared to classical LQR control, especially in the full-feedback case.
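
    The zero-order policy gradient can be illustrated with a standard two-point estimator; the cost function, smoothing radius, and sample count below are hypothetical, and this is only a sketch, not the paper's exact estimator.

    ```python
    import numpy as np

    def zeroth_order_grad(cost, theta, radius=1e-2, num_samples=50, rng=None):
        """Two-point zero-order gradient estimate: average of
        d * (cost(x + r*u) - cost(x - r*u)) / (2r) * u over random unit
        directions u, using function evaluations only."""
        rng = np.random.default_rng(0) if rng is None else rng
        d = theta.size
        grad = np.zeros(d)
        for _ in range(num_samples):
            u = rng.normal(size=d)
            u /= np.linalg.norm(u)
            grad += (cost(theta + radius * u) - cost(theta - radius * u)) / (2 * radius) * u
        return d * grad / num_samples

    # Hypothetical smooth cost standing in for the (unknown) LQR cost.
    cost = lambda th: float(np.sum((th - 1.0) ** 2))
    theta = np.zeros(3)
    print(zeroth_order_grad(cost, theta))   # roughly the true gradient [-2, -2, -2]
    ```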

  9. arXiv:2109.06343 (Published 2021-09-13)

    Data-based Online Optimization of Networked Systems with Infrequent Feedback

    Ana M. Ospina, Nicola Bastianello, Emiliano Dall'Anese

    We consider optimization problems for (networked) systems, where we minimize a cost that includes a known time-varying function associated with the system's outputs and an unknown function of the inputs. We focus on a data-based online projected gradient algorithm where: i) the input-output map of the system is replaced by measurements of the output whenever available (thus leading to a "closed-loop" setup); and ii) the unknown function is learned based on functional evaluations that may occur infrequently. Accordingly, the feedback-based online algorithm operates in a regime with inexact gradient knowledge and with random updates. We show that the online algorithm generates points that are within a bounded error from the optimal solution of the problem; in particular, we provide error bounds in expectation and in high-probability, where the latter is given when the gradient error follows a sub-Weibull distribution and when missing measurements are modeled as Bernoulli random variables. We also provide results in terms of input-to-state stability in expectation and in probability. Numerical results are presented in the context of a demand response task in power systems.
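
    A minimal sketch of an online projected-gradient update in which measurements (and hence gradient information) arrive only with some probability; the cost, projection set, and probabilities are illustrative stand-ins, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def project_box(u, lo=-1.0, hi=1.0):
        # Projection onto a box (illustrative feasible input set).
        return np.clip(u, lo, hi)

    def measured_gradient(u, t):
        # Stand-in for a gradient assembled from noisy output measurements;
        # the slowly moving target emulates the time-varying part of the cost.
        target = 0.5 * np.sin(0.01 * t) * np.ones_like(u)
        return 2.0 * (u - target) + rng.normal(0.0, 0.05, size=u.shape)

    u = np.zeros(2)
    step, p_feedback = 0.1, 0.7   # Bernoulli probability that feedback arrives
    for t in range(500):
        if rng.random() < p_feedback:          # measurement available: update
            u = project_box(u - step * measured_gradient(u, t))
        # otherwise the update is skipped (random-update regime)
    print(u)   # tracks the moving optimizer up to a bounded error
    ```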

  10. arXiv:2103.13470 (Published 2021-03-24)

    Time-Varying Optimization of Networked Systems with Human Preferences

    Ana M. Ospina, Andrea Simonetto, Emiliano Dall'Anese

    This paper considers a time-varying optimization problem associated with a network of systems, with each of the systems shared by (and affecting) a number of individuals. The objective is to minimize cost functions associated with the individuals' preferences, which are unknown, subject to time-varying constraints that capture physical or operational limits of the network. To this end, the paper develops a distributed online optimization algorithm with concurrent learning of the cost functions. The cost functions are learned on-the-fly based on the users' feedback (provided at irregular intervals) by leveraging tools from shape-constrained Gaussian Processes. The online algorithm is based on a primal-dual method, and acts effectively in a closed-loop fashion where: i) users' feedback is utilized to estimate the cost, and ii) measurements from the network are utilized in the algorithmic steps to bypass the need for sensing of (unknown) exogenous inputs of the network. The performance of the algorithm is analyzed in terms of dynamic network regret and constraint violation. Numerical examples are presented in the context of real-time optimization of distributed energy resources.

  11. arXiv:2010.08447 (Published 2020-10-16)

    Design of periodic scheduling and control for networked systems under random data loss

    Atreyee Kundu, Daniel E. Quevedo

    This paper deals with Networked Control Systems (NCSs) whose shared networks have limited communication capacity and are prone to data losses. We assume that among $N$ plants, only $M < N$ plants can communicate with their controllers at any time instant. In addition, a control input, at any time instant, is lost in a channel with a probability $p$. Our contributions are threefold. First, we identify necessary and sufficient conditions on the open-loop and closed-loop dynamics of the plants that ensure existence of purely time-dependent periodic scheduling sequences under which stability of each plant is preserved for all admissible data loss signals. Second, given the open-loop and closed-loop dynamics of the plants, relevant parameters of the shared network and a period for the scheduling sequence, we present an algorithm that verifies our stability conditions and if satisfied, designs stabilizing scheduling sequences. Otherwise, the algorithm reports non-existence of a stabilizing periodic scheduling sequence with the given period and stability margins. Third, given the plant matrices, the parameters of the network and a period for the scheduling sequence, we present an algorithm that designs static state-feedback controllers such that our stability conditions are satisfied. The main apparatus for our analysis is a switched systems representation of the individual plants in an NCS whose switching signals are time-inhomogeneous Markov chains. Our stability conditions rely on the existence of sets of symmetric and positive definite matrices that satisfy certain (in)equalities.
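
    The communication constraint (only M < N plants served per instant, each control input lost with probability p) can be emulated as below; the round-robin schedule and parameter values are illustrative only, and this sketch does not check the paper's stability conditions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, M, p, period = 5, 2, 0.2, 5   # illustrative parameters only

    # Purely time-dependent periodic schedule: at instant t, serve M plants round-robin.
    schedule = [tuple((t * M + m) % N for m in range(M)) for t in range(period)]

    received = np.zeros(N, dtype=int)
    T = 1000
    for t in range(T):
        for plant in schedule[t % period]:
            if rng.random() > p:      # the control packet survives the lossy channel
                received[plant] += 1

    # Each plant is scheduled M/N of the time and receives its input in a (1-p) fraction of those instants.
    print(received / T)
    ```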

  12. arXiv:2010.00268 (Published 2020-10-01)

    Encrypted control for networked systems -- An illustrative introduction and current challenges

    M. Schulze Darup, A. B. Alexandru, D. E. Quevedo, G. J. Pappas
    Comments: The paper is a preprint of an accepted paper in the IEEE Control Systems Magazine
    Categories: eess.SY, cs.CR, cs.SY, math.OC

    Cloud computing and distributed computing are becoming ubiquitous in many modern control systems such as smart grids, building automation, robot swarms or intelligent transportation systems. Compared to "isolated" control systems, the advantages of cloud-based and distributed control systems are, in particular, resource pooling and outsourcing, rapid scalability, and high performance. However, these capabilities do not come without risks. In fact, the involved communication and processing of sensitive data via public networks and on third-party platforms promote, among other cyberthreats, eavesdropping and manipulation of data. Encrypted control addresses this security gap and provides confidentiality of the processed data in the entire control loop. This paper presents a tutorial-style introduction to this young but emerging field in the framework of secure control for networked dynamical systems.

  13. arXiv:2009.04289 (Published 2020-09-09)

    A scalable controller synthesis method for the robust control of networked systems

    Pieter Appeltans, Wim Michiels

    This manuscript discusses a scalable controller synthesis method for networked systems with a large number of identical subsystems based on the H-infinity control framework. The dynamics of the individual subsystems are described by identical linear time-invariant delay differential equations and the effect of transport and communication delay is explicitly taken into account. The presented method is based on the result that, under a particular assumption on the graph describing the interconnections between the subsystems, the H-infinity norm of the overall system is upper bounded by the robust H-infinity norm of a single subsystem with an additional uncertainty. This work will therefore briefly discuss a recently developed method to compute this last quantity. The resulting controller is then obtained by directly minimizing this upper bound in the controller parameters.

  14. arXiv:2006.06626 (Published 2020-06-11)

    Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward

    Guannan Qu, Yiheng Lin, Adam Wierman, Na Li

    It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues because the sizes of the state and action spaces are exponentially large in the number of agents. In this paper, we identify a rich class of networked MARL problems where the model exhibits a local dependence structure that allows it to be solved in a scalable manner. Specifically, we propose a Scalable Actor-Critic (SAC) method that can learn a near-optimal localized policy for optimizing the average reward, with complexity scaling with the state-action space size of local neighborhoods, as opposed to the entire network. Our result centers around identifying and exploiting an exponential decay property which ensures that the effect of agents on each other decays exponentially fast in their graph distance.

  15. arXiv:2004.10153 (Published 2020-04-21)

    Control of Networked Systems by Clustering: The Degree of Freedom Concept

    Andrea Martinelli, John Lygeros

    We address the problem of local flux redistribution in networked systems. The aim is to detect a suitable cluster which is able to locally absorb a disturbance by means of an appropriate redistribution of control load among its nodes, such that no external node is affected. Traditional clustering measures are not suitable for our purpose, since they do not explicitly take into account the structural conditions for disturbance containment. We propose a new measure based on the concept of degree of freedom for a cluster, and we introduce a heuristic procedure to quickly select a set of nodes according to this measure. Finally, we show an application of the method in the context of DC microgrids voltage control.

  16. arXiv:2001.04906 (Published 2020-01-14)

    A unified analytical framework for a class of optimal control problems on networked systems

    Mingwu Li, Harry Dankowicz

    We consider a class of optimal control problems on networks that generically permits a reduction to a universal set of reference problems without differential constraints that may be solved analytically. The derivation shows that input homogeneity across the network results in universally constant optimal control inputs. These predictions are validated using numerical analysis of problems of synchronization of coupled phase oscillators and spreading dynamics on time-varying networks.

  17. arXiv:1908.03588 (Published 2019-08-09)

    A Data-Driven and Model-Based Approach to Fault Detection and Isolation in Networked Systems

    Miel Sharf, Daniel Zelazo

    Fault detection and isolation is a field of engineering dealing with designing on-line protocols for systems that allow one to identify the existence of faults, pinpoint their exact location, and overcome them. We consider the case of multi-agent systems, where faults correspond to the disappearance of links in the underlying graph, simulating a communication failure between the corresponding agents. We study the case in which the agents and controllers are maximal equilibrium-independent passive (MEIP), and use the known connection between steady-states of these multi-agent systems and network optimization theory. We first study asymptotic methods of differentiating the faultless system from its faulty versions by studying their steady-state outputs. We explain how to apply the asymptotic differentiation to fault detection and isolation, with graph-theoretic guarantees on the number of faults that can be isolated, assuming the existence of a "convergence assertion protocol", a data-driven method of asserting that a multi-agent system converges to a conjectured limit. We then construct two data-driven model-based convergence assertion protocols. We demonstrate our results by case studies.

  18. arXiv:1710.08115 (Published 2017-10-23)

    Distributed Constrained Optimization over Networked Systems via A Singular Perturbation Method

    Phuong Huu Hoang, Hyo-Sung Ahn

    This paper studies a constrained optimization problem over networked systems with an undirected and connected communication topology. The algorithm proposed in this work combines singular perturbation, dynamic average consensus, and saddle-point dynamics methods to tackle the problem for a general class of objective functions and affine constraints in a fully distributed manner. It is shown that the privacy of the agents' information in the interconnected network is preserved under our proposed strategy. Theoretical guarantees on the optimality of the solution are provided through rigorous analysis. We apply the proposed solution to energy networks and demonstrate it in two simulations.
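
    As an illustration of the saddle-point dynamics component only, the sketch below runs gradient descent-ascent on the Lagrangian of a small equality-constrained problem via Euler integration; the problem data are hypothetical, and the consensus and singular-perturbation layers of the paper are omitted.

    ```python
    import numpy as np

    # Hypothetical problem: minimize 0.5 * ||x - c||^2  subject to  A x = b.
    c = np.array([1.0, 2.0, 3.0])
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([3.0])

    x = np.zeros(3)
    lam = np.zeros(1)
    dt = 0.01
    for _ in range(20000):
        # Saddle-point dynamics: gradient descent in the primal variable x,
        # gradient ascent in the dual variable lam, on the Lagrangian.
        dx = -((x - c) + A.T @ lam)
        dlam = A @ x - b
        x, lam = x + dt * dx, lam + dt * dlam

    print(x, lam)   # approaches the constrained optimum x = [0, 1, 2], lam = [1]
    ```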

  19. arXiv:1706.01792 (Published 2017-06-06)

    Sparse and Constrained Stochastic Predictive Control for Networked Systems

    Prabhat K. Mishra, Debasish Chatterjee, Daniel E. Quevedo

    This article presents a novel class of control policies for networked control of Lyapunov-stable linear systems with bounded inputs. The control channel is assumed to have i.i.d. Bernoulli packet dropouts and the system is assumed to be affected by additive stochastic noise. Our proposed class of policies is affine in the past dropouts and saturated values of the past disturbances. We further consider a regularization term in a quadratic performance index to promote sparsity in control. We demonstrate how to augment the underlying optimization problem with a constant negative drift constraint to ensure mean-square boundedness of the closed-loop states, yielding a convex quadratic program to be solved periodically online. The states of the closed-loop plant under the receding horizon implementation of the proposed class of policies are mean square bounded for any positive bound on the control and any non-zero probability of successful transmission.
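
    A heavily simplified sketch of the kind of convex program involved: a finite-horizon quadratic cost with an l1 regularizer promoting sparse control and hard input bounds, solved here with the external cvxpy library; the drift constraint, the dropout-feedback policy parametrization, and all numerical values are omitted or hypothetical.

    ```python
    import cvxpy as cp
    import numpy as np

    # Hypothetical stable plant, horizon, and weights (not the paper's setup).
    A = np.array([[0.9, 0.2], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    T, umax, gamma = 10, 1.0, 0.5
    x0 = np.array([2.0, -1.0])

    x = cp.Variable((2, T + 1))
    u = cp.Variable((1, T))
    cost = 0
    constraints = [x[:, 0] == x0]
    for t in range(T):
        cost += cp.sum_squares(x[:, t]) + cp.sum_squares(u[:, t])
        cost += gamma * cp.norm1(u[:, t])          # l1 term promoting sparse control
        constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                        cp.abs(u[:, t]) <= umax]   # bounded inputs
    prob = cp.Problem(cp.Minimize(cost), constraints)
    prob.solve()
    print(np.round(u.value, 3))   # the l1 term typically drives later inputs exactly to zero
    ```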

  20. arXiv:1508.06018 (Published 2015-08-25)

    A combination of small-gain and density propagation inequalities for stability analysis of networked systems

    Humberto Stein Shiromoto, Petro Feketa, Sergey Dashkovskiy

    In this paper, the problem of stability analysis of a large-scale interconnection of nonlinear systems for which the small-gain condition does not hold globally is considered. A combination of the small-gain and density propagation inequalities is employed to prove almost input-to-state stability of the network.
