arXiv:2312.16699 [math.OC]

Computational Tradeoffs of Optimization-Based Bound Tightening in ReLU Networks

Fabian Badilla, Marcos Goycoolea, Gonzalo Muñoz, Thiago Serra

Published 2023-12-27 (Version 1)

The use of Mixed-Integer Linear Programming (MILP) models to represent neural networks with Rectified Linear Unit (ReLU) activations has become increasingly widespread in the last decade. This has enabled the use of MILP technology to test, or stress, their behavior, to adversarially improve their training, and to embed them in optimization models that leverage their predictive power. Many of these MILP models rely on activation bounds, that is, bounds on the input values of each neuron. In this work, we explore the tradeoff between the tightness of these bounds and the computational effort of solving the resulting MILP models. We provide guidelines for implementing these models based on the impact of network structure, regularization, and rounding.
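To make the role of activation bounds concrete, the following is a minimal sketch (not code from the paper) contrasting simple interval propagation with optimization-based bound tightening (OBBT) on a toy ReLU network. It uses the standard big-M MILP encoding of a ReLU unit; the toy weights, the input box, the choice of PuLP as the modeling layer, and all identifiers are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) contrasting interval-propagation bounds
# with optimization-based bound tightening (OBBT) on a tiny ReLU network, via PuLP.
# The toy network, the input box, and all variable names are illustrative assumptions.
import pulp

def add_relu(model, a, L, U, name):
    """Standard big-M MILP encoding of y = max(0, a), given pre-activation bounds L <= a <= U."""
    y = pulp.LpVariable(f"y_{name}", lowBound=0)
    z = pulp.LpVariable(f"z_{name}", cat="Binary")
    model += y >= a                 # y >= a
    model += y <= a - L * (1 - z)   # y <= a - L(1 - z)
    model += y <= U * z             # y <= U z
    return y

def layer1_milp(sense):
    """MILP of the first layer: h1 = ReLU(x), h2 = ReLU(-x), with x in [-1, 1]."""
    model = pulp.LpProblem("obbt", sense)
    x = pulp.LpVariable("x", lowBound=-1, upBound=1)
    h1 = add_relu(model, x, -1.0, 1.0, "h1")    # interval bounds on x and -x are [-1, 1]
    h2 = add_relu(model, -x, -1.0, 1.0, "h2")
    return model, h1, h2

# Second-layer pre-activation: a2 = h1 + h2 (= |x|, so its true range is [0, 1]).
# Naive interval propagation only sees h1 in [0, 1] and h2 in [0, 1], giving a2 <= 2.
interval_U = 1.0 + 1.0

# OBBT: maximize a2 over the MILP encoding of the first layer, which captures that
# h1 and h2 cannot both be large at once. More work per bound, but a tighter result.
model, h1, h2 = layer1_milp(pulp.LpMaximize)
model += h1 + h2
model.solve(pulp.PULP_CBC_CMD(msg=False))
obbt_U = pulp.value(model.objective)

print(f"interval upper bound: {interval_U}, OBBT upper bound: {obbt_U}")  # 2.0 vs 1.0
```

Tighter values of L and U shrink the big-M constraints and strengthen the LP relaxation of the MILP built for the next layer, but each tightened bound requires solving an additional optimization problem per neuron, which is the computational tradeoff the abstract refers to.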

Related articles:
arXiv:1907.03140 [math.OC] (Published 2019-07-06)
ReLU Networks as Surrogate Models in Mixed-Integer Linear Programs
arXiv:1809.04565 [math.OC] (Published 2018-09-12)
Optimization-Based Bound Tightening using a Strengthened QC-Relaxation of the Optimal Power Flow Problem