1 Introduction

Artificial neural networks [7, 32] have emerged as a promising approach for creating scalable and robust systems. Applications include speech recognition [9], image classification [23], game playing [33], and many others. It is now clear that software that may be extremely difficult for humans to implement can instead be created by training deep neural networks (DNNs), and that the performance of these DNNs is often comparable to, or even surpasses, the performance of manually crafted software. DNNs are becoming widespread, and this trend is likely to continue and intensify.

Great effort is now being put into using DNNs as controllers for safety-critical systems such as autonomous vehicles [4] and airborne collision avoidance systems for unmanned aircraft (ACAS Xu) [13]. DNNs are trained over a finite set of inputs and outputs and are expected to generalize, i.e. to behave correctly for previously-unseen inputs. However, it has been observed that DNNs can react in unexpected and incorrect ways to even slight perturbations of their inputs [34]. This unexpected behavior of DNNs is likely to result in unsafe systems, or restrict the usage of DNNs in safety-critical applications. Hence, there is an urgent need for methods that can provide formal guarantees about DNN behavior. Unfortunately, manual reasoning about large DNNs is impossible, as their structure renders them incomprehensible to humans. Automatic verification techniques are thus sorely needed, but here, the state of the art is a severely limiting factor.

Verifying DNNs is a difficult problem. DNNs are large, non-linear, and non-convex, and verifying even simple properties about them is an NP-complete problem (see Sect. I of the supplementary material [15]). DNN verification is experimentally beyond the reach of general-purpose tools such as linear programming (LP) solvers or existing satisfiability modulo theories (SMT) solvers [3, 10, 31], and thus far, dedicated tools have only been able to handle very small networks (e.g. a single hidden layer with only 10 to 20 hidden nodes [30, 31]).

The difficulty in proving properties about DNNs is caused by the presence of activation functions. A DNN is comprised of a set of layers of nodes, and the value of each node is determined by computing a linear combination of values from nodes in the preceding layer and then applying an activation function to the result. These activation functions are non-linear and render the problem non-convex. We focus here on DNNs with a specific kind of activation function, called a Rectified Linear Unit (ReLU) [27]. When the ReLU function is applied to a node with a positive value, it returns the value unchanged (the active case), but when the value is negative, the ReLU function returns 0 (the inactive case). ReLUs are very widely used [23, 25], and it has been suggested that their piecewise linearity allows DNNs to generalize well to previously unseen inputs [6, 7, 11, 27]. Past efforts at verifying properties of DNNs with ReLUs have had to make significant simplifying assumptions [3, 10]—for instance, by considering only small input regions in which all ReLUs are fixed at either the active or inactive state [3], hence making the problem convex but at the cost of being able to verify only an approximation of the desired property.

We propose a novel, scalable, and efficient algorithm for verifying properties of DNNs with ReLUs. We address the issue of the activation functions head-on, by extending the simplex algorithm—a standard algorithm for solving LP instances—to support ReLU constraints. This is achieved by leveraging the piecewise linear nature of ReLUs and attempting to gradually satisfy the constraints that they impose as the algorithm searches for a feasible solution. We call the algorithm Reluplex, for “ReLU with Simplex”.

The problem’s NP-completeness means that we must expect the worst-case performance of the algorithm to be poor. However, as is often the case with SAT and SMT solvers, the performance in practice can be quite reasonable; in particular, our experiments show that during the search for a solution, many of the ReLUs can be ignored or even discarded altogether, reducing the search space by an order of magnitude or more. Occasionally, Reluplex will still need to split on a specific ReLU constraint—i.e., guess that it is either active or inactive, and possibly backtrack later if the choice leads to a contradiction.

We evaluated Reluplex on a family of 45 real-world DNNs, developed as an early prototype for the next-generation airborne collision avoidance system for unmanned aircraft ACAS Xu [13]. These fully connected DNNs have 8 layers and 300 ReLU nodes each, and are intended to be run onboard aircraft. They take in sensor data indicating the speed and present course of the aircraft (the ownship) and that of any nearby intruder aircraft, and issue appropriate navigation advisories. These advisories indicate whether the aircraft is clear-of-conflict, in which case the present course can be maintained, or whether it should turn to avoid collision. We successfully proved several properties of these networks, e.g. that a clear-of-conflict advisory will always be issued if the intruder is sufficiently far away or that it will never be issued if the intruder is sufficiently close and on a collision course with the ownship. Additionally, we were able to prove certain robustness properties [3] of the networks, meaning that small adversarial perturbations do not change the advisories produced for certain inputs.

Our contributions can be summarized as follows. We (i) present Reluplex, an SMT solver for a theory of linear real arithmetic with ReLU constraints; (ii) show how DNNs and properties of interest can be encoded as inputs to Reluplex; (iii) discuss several implementation details that are crucial to performance and scalability, such as the use of floating-point arithmetic, bound derivation for ReLU variables, and conflict analysis; and (iv) conduct a thorough evaluation on the DNN implementation of the prototype ACAS Xu system, demonstrating the ability of Reluplex to scale to DNNs that are an order of magnitude larger than those that can be analyzed using existing techniques.

The rest of the paper is organized as follows. We begin with some background on DNNs, SMT, and simplex in Sect. 2. The abstract Reluplex algorithm is described in Sect. 3, with key implementation details highlighted in Sect. 4. We then describe the ACAS Xu system and its prototype DNN implementation that we used as a case-study in Sect. 5, followed by experimental results in Sect. 6. Related work is discussed in Sect. 7, and we conclude in Sect. 8.

2 Background

Neural Networks. Deep neural networks (DNNs) are comprised of an input layer, an output layer, and multiple hidden layers in between. A layer is comprised of multiple nodes, each connected to nodes from the preceding layer using a predetermined set of weights (see Fig. 1). Weight selection is crucial, and is performed during a training phase (see, e.g., [7] for an overview). By assigning values to inputs and then feeding them forward through the network, values for each layer can be computed from the values of the previous layer, finally resulting in values for the outputs.

The value of each hidden node in the network is determined by calculating a linear combination of node values from the previous layer, and then applying a non-linear activation function [7]. Here, we focus on the Rectified Linear Unit (ReLU) activation function [27]. When a ReLU activation function is applied to a node, that node’s value is calculated as the maximum of the linear combination of nodes from the previous layer and 0. We can thus regard ReLUs as the function \(\text {ReLU} {}{}(x) = \max {}(0, x)\).

Fig. 1. A fully connected DNN with 5 input nodes (in green), 5 output nodes (in red), and 4 hidden layers containing a total of 36 hidden nodes (in blue).

Formally, for a DNN N, we use n to denote the number of layers and \(s_i\) to denote the size of layer i (i.e., the number of its nodes). Layer 1 is the input layer, layer n is the output layer, and layers \(2,\ldots ,n-1\) are the hidden layers. The value of the j-th node of layer i is denoted \(v_{i,j}\) and the column vector \([v_{i,1},\ldots ,v_{i,s_i}]^T\) is denoted \(V_i\). Evaluating N entails calculating \(V_n\) for a given assignment \(V_1\) of the input layer. This is performed by propagating the input values through the network using predefined weights and biases, and applying the activation functions—ReLUs, in our case. Each layer \(2\le i\le n\) has a weight matrix \(W_i\) of size \(s_{i}\times s_{i-1}\) and a bias vector \(B_i\) of size \( s_i\), and its values are given by \( V_i = \text {ReLU} {}{}(W_i V_{i-1} + B_i), \) with the ReLU function being applied element-wise. This rule is applied repeatedly for each layer until \(V_n\) is calculated. When the weight matrices \(W_2,\ldots ,W_n\) do not have any zero entries, the network is said to be fully connected (see Fig. 1 for an illustration).
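To make this evaluation rule concrete, the following minimal sketch (our illustration, not the implementation used in the paper) evaluates a fully connected ReLU DNN with NumPy, given hypothetical weight matrices \(W_2,\ldots ,W_n\) and bias vectors \(B_2,\ldots ,B_n\):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied element-wise
    return np.maximum(0.0, x)

def evaluate_dnn(weights, biases, v1):
    """Evaluate a fully connected ReLU DNN.

    weights[k] and biases[k] play the role of W_{k+2} and B_{k+2}
    in the paper's notation; v1 is the input assignment V_1.
    """
    v = np.asarray(v1, dtype=float)
    for W, B in zip(weights, biases):
        v = relu(W @ v + B)  # V_i = ReLU(W_i V_{i-1} + B_i), element-wise
    return v                 # V_n, the values of the output layer
```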

Fig. 2. A small neural network.

Figure 2 depicts a small network that we will use as a running example. The network has one input node, one output node and a single hidden layer with two nodes. The bias vectors are set to 0 and are ignored, and the weights are shown for each edge. The ReLU function is applied to each of the hidden nodes. It is possible to show that, due to the effect of the ReLUs, the network’s output is always identical to its input: \(v_{31}\equiv v_{11}\).

Satisfiability Modulo Theories. We present our algorithm as a theory solver in the context of satisfiability modulo theories (SMT). A theory is a pair \(T = (\varSigma , \mathbf {I})\) where \(\varSigma \) is a signature and \(\mathbf {I}\) is a class of \(\varSigma \)-interpretations, the models of T, that is closed under variable reassignment. A \(\varSigma \)-formula \(\varphi \) is T-satisfiable (resp., T-unsatisfiable) if it is satisfied by some (resp., no) interpretation in \(\mathbf {I}\). In this paper, we consider only quantifier-free formulas. The SMT problem is the problem of determining the T-satisfiability of a formula for a given theory T.

Given a theory T with signature \(\varSigma \), the DPLL(T) architecture [28] provides a generic approach for determining the T-satisfiability of \(\varSigma \)-formulas. In DPLL(T), a Boolean satisfiability (SAT) engine operates on a Boolean abstraction of the formula, performing Boolean propagation, case-splitting, and Boolean conflict resolution. The SAT engine is coupled with a dedicated theory solver, which checks the T-satisfiability of the decisions made by the SAT engine. Splitting-on-demand [1] extends DPLL(T) by allowing theory solvers to delegate case-splitting to the SAT engine in a generic and modular way. In Sect. 3, we present our algorithm as a deductive calculus (with splitting rules) operating on conjunctions of literals. The DPLL(T) and splitting-on-demand mechanisms can then be used to obtain a full decision procedure for arbitrary formulas.

Linear Real Arithmetic and Simplex. In the context of DNNs, a particularly relevant theory is that of real arithmetic, which we denote as \(\mathcal {T}_{\mathbb {R}}\). \(\mathcal {T}_{\mathbb {R}}{}\) consists of the signature containing all rational number constants and the symbols \(\{+,-,\cdot ,\le ,\ge \}\), paired with the standard model of the real numbers. We focus on linear formulas: formulas over \(\mathcal {T}_{\mathbb {R}}{}\) with the additional restriction that the multiplication symbol \(\cdot \) can only appear if at least one of its operands is a rational constant. Linear atoms can always be rewritten into the form \(\sum _{x_i\in \mathcal {X}}c_ix_i\bowtie d\), for \(\bowtie \ \in \{=,\le ,\ge \}\), where \(\mathcal {X}\) is a set of variables and \(c_i,d\) are rational constants.

The simplex method [5] is a standard and highly efficient decision procedure for determining the \(\mathcal {T}_{\mathbb {R}}{}\)-satisfiability of conjunctions of linear atoms. Our algorithm extends simplex, and so we begin with an abstract calculus for the original algorithm (for a more thorough description see, e.g., [35]). The rules of the calculus operate over data structures we call configurations. For a given set of variables \(\mathcal {X}= \{x_1,\ldots ,x_n\}\), a simplex configuration is either one of the distinguished symbols \(\{\texttt {SAT}{},\texttt {UNSAT}{}\}\) or a tuple \(\langle \mathcal {B}, T, l, u, \alpha {}\rangle \), where: \(\mathcal {B}\subseteq \mathcal {X}\) is a set of basic variables; T, the tableau, contains for each \(x_i\in \mathcal {B}\) an equation \(x_i = \sum _{x_j\notin \mathcal {B}} c_j x_j\); \(l, u\) are mappings that assign each variable \(x\in \mathcal {X}\) a lower and an upper bound, respectively; and \(\alpha {}\), the assignment, maps each variable \(x\in \mathcal {X}\) to a real value. The initial configuration (and in particular the initial tableau \(T_0\)) is derived from a conjunction of input atoms as follows: for each atom \(\sum _{x_i\in \mathcal {X}}c_ix_i\bowtie d\), a new basic variable b is introduced, the equation \(b=\sum _{x_i\in \mathcal {X}}c_ix_i\) is added to the tableau, and d is added as a bound for b (either upper, lower, or both, depending on \(\bowtie \)). The initial assignment is set to 0 for all variables, ensuring that all tableau equations hold (though variable bounds may be violated).

The tableau T can be regarded as a matrix expressing each of the basic variables (variables in \(\mathcal {B}{}\)) as a linear combination of non-basic variables (variables in \(\mathcal {X}{}\setminus \mathcal {B}{}\)). The rows of T correspond to the variables in \(\mathcal {B}{}\) and its columns to those of \(\mathcal {X}{}\setminus \mathcal {B}{}\). For \(x_i\in \mathcal {B}{}\) and \(x_j\notin \mathcal {B}{}\) we denote by \(T_{i,j}\) the coefficient \(c_j\) of \(x_j\) in the equation \(x_i=\sum _{x_j\notin \mathcal {B}} c_j x_j\). The tableau is changed via pivoting: the switching of a basic variable \(x_i\) (the leaving variable) with a non-basic variable \(x_j\) (the entering variable) for which \(T_{i,j}\ne 0\). A pivot(Tij) operation returns a new tableau in which the equation \(x_i=\sum _{x_k\notin \mathcal {B}} c_k x_k\) has been replaced by the equation \(x_j = \frac{x_i}{c_j} - \sum _{x_k\notin \mathcal {B}, k\ne j}\frac{c_k}{c_j}x_k\), and in which every occurrence of \(x_j\) in each of the other equations has been replaced by the right-hand side of the new equation (the resulting expressions are also normalized to retain the tableau form). The variable assignment \(\alpha {}{}\) is changed via update operations that are applied to non-basic variables: for \(x_j\notin \mathcal {B}{}\), an update(\(\alpha {}, x_j,\delta \)) operation returns an updated assignment \(\alpha {}'\) identical to \(\alpha {}\), except that \(\alpha {}'(x_j)=\alpha {}(x_j)+\delta \) and for every \(x_i\in \mathcal {B}\), we have \(\alpha {}'(x_i)=\alpha {}(x_i)+\delta \cdot T_{i,j}.\) To simplify later presentation we also denote:

$$\begin{aligned} \text {slack} {}^+(x_i)&= \{ x_j\notin \mathcal {B}{}\ |\ (T_{i,j}>0 \wedge \alpha {}(x_j)< u(x_j)) \vee (T_{i,j}<0 \wedge \alpha {}(x_j)> l(x_j)) \} \\ \text {slack} {}^-(x_i)&= \{ x_j\notin \mathcal {B}{}\ |\ (T_{i,j}<0 \wedge \alpha {}(x_j) < u(x_j)) \vee (T_{i,j}>0 \wedge \alpha {}(x_j) > l(x_j)) \} \end{aligned}$$

The rules of the simplex calculus are provided in Fig. 3 in guarded assignment form. A rule applies to a configuration S if all of the rule’s premises hold for S. A rule’s conclusion describes how each component of S is changed, if at all. When \(S'\) is the result of applying a rule to S, we say that S derives \(S'\). A sequence of configurations \(S_i\) where each \(S_i\) derives \(S_{i+1}\) is called a derivation.

Fig. 3. Derivation rules for the abstract simplex algorithm.

The \(\mathsf {Update}\) rule (with appropriate values of \(\delta \)) is used to enforce that non-basic variables satisfy their bounds. Basic variables cannot be directly updated. Instead, if a basic variable \(x_i\) is too small or too great, either the \(\mathsf {Pivot}_{1}\) or the \(\mathsf {Pivot}_{2}\) rule is applied, respectively, to pivot it with a non-basic variable \(x_j\). This makes \(x_i\) non-basic so that its assignment can be adjusted using the \(\mathsf {Update}\) rule. Pivoting is only allowed when \(x_j\) affords slack, that is, the assignment for \(x_j\) can be adjusted to bring \(x_i\) closer to its bound without violating its own bound. Of course, once pivoting occurs and the \(\mathsf {Update}\) rule is used to bring \(x_i\) within its bounds, other variables (such as the now basic \(x_j\)) may be sent outside their bounds, in which case they must be corrected in a later iteration. If a basic variable is out of bounds, but none of the non-basic variables affords it any slack, then the \(\mathsf {Failure}\) rule applies and the problem is unsatisfiable. Because the tableau is only changed by scaling and adding rows, the set of variable assignments that satisfy its equations is always kept identical to that of \(T_0\). Also, the update operation guarantees that \(\alpha {}{}\) continues to satisfy the equations of T. Thus, if all variables are within bounds then the \(\mathsf {Success}\) rule can be applied, indicating that \(\alpha \) constitutes a satisfying assignment for the original problem.
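For concreteness, the update and pivot operations described above can be sketched as follows (a simplified illustration over a dictionary-based tableau, not the data structures of our implementation):

```python
class SimplexState:
    def __init__(self, tableau, lower, upper, assignment):
        # tableau: maps each basic variable x_i to a dict {x_j: T_ij}
        #          over the non-basic variables x_j
        self.T = tableau
        self.l, self.u = lower, upper
        self.alpha = assignment

    def update(self, xj, delta):
        """update(alpha, x_j, delta) for a non-basic x_j: shift x_j by delta
        and adjust every basic x_i by delta * T_ij, so that the tableau
        equations remain satisfied."""
        self.alpha[xj] += delta
        for xi, row in self.T.items():
            self.alpha[xi] += delta * row.get(xj, 0.0)

    def pivot(self, xi, xj):
        """pivot(T, i, j): basic x_i leaves, non-basic x_j enters (T_ij != 0)."""
        row = self.T.pop(xi)
        cj = row.pop(xj)
        # Solve x_i's equation for x_j: x_j = x_i/c_j - sum_{k != j} (c_k/c_j) x_k
        new_row = {xi: 1.0 / cj}
        for xk, ck in row.items():
            new_row[xk] = -ck / cj
        # Substitute the new expression for x_j into the remaining equations
        for r in self.T.values():
            coeff = r.pop(xj, 0.0)
            if coeff:
                for xk, c in new_row.items():
                    r[xk] = r.get(xk, 0.0) + coeff * c
        self.T[xj] = new_row
```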

It is well-known that the simplex calculus is sound [35] (i.e. if a derivation ends in \(\texttt {SAT}{}\) or \(\texttt {UNSAT}{}\), then the original problem is satisfiable or unsatisfiable, respectively) and complete (there always exists a derivation ending in either \(\texttt {SAT}{}\) or \(\texttt {UNSAT}{}\) from any starting configuration). Termination can be guaranteed if certain strategies are used in applying the transition rules—in particular in picking the leaving and entering variables when multiple options exist [35]. Variable selection strategies are also known to have a dramatic effect on performance [35]. We note that the version of simplex described above is usually referred to as phase one simplex, and is usually followed by a phase two in which the solution is optimized according to a cost function. However, as we are only considering satisfiability, phase two is not required.

3 From Simplex to Reluplex

The simplex algorithm described in Sect. 2 is an efficient means for solving problems that can be encoded as a conjunction of atoms. Unfortunately, while the weights, biases, and certain properties of DNNs can be encoded this way, the non-linear ReLU functions cannot.

When a theory solver operates within an SMT solver, input atoms can be embedded in arbitrary Boolean structure. A naïve approach is then to encode ReLUs using disjunctions, which is possible because ReLUs are piecewise linear. However, this encoding requires the SAT engine within the SMT solver to enumerate the different cases. In the worst case, for a DNN with n ReLU nodes, the solver ends up splitting the problem into \(2^n\) sub-problems, each of which is a conjunction of atoms. As observed by us and others [3, 10], this theoretical worst-case behavior is also seen in practice, and hence this approach is practical only for very small networks. A similar phenomenon occurs when encoding DNNs as mixed integer problems (see Sect. 6).
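Concretely, such a disjunctive encoding would replace each ReLU constraint \(y=\text {ReLU} {}{}(x)\) by the disjunction

$$ (x \le 0 \wedge y = 0) \ \vee \ (x \ge 0 \wedge y = x), $$

forcing the SAT engine to branch on one disjunct per ReLU node, which is the source of the \(2^n\) blow-up.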

We take a different route and extend the theory \(\mathcal {T}_{\mathbb {R}}{}\) to a theory \(\mathcal {T}_{\mathbb {R}R}{}\) of reals and ReLUs. \(\mathcal {T}_{\mathbb {R}R}{}\) is almost identical to \(\mathcal {T}_{\mathbb {R}}{}\), except that its signature additionally includes the binary predicate \(\text {ReLU} {}{}\) with the interpretation: \(\text {ReLU} {}{}(x,y)\) iff \(y=\max {}(0,x)\). Formulas are then assumed to contain atoms that are either linear inequalities or applications of the \(\text {ReLU} {}{}\) predicate to linear terms.

DNNs and their (linear) properties can be directly encoded as conjunctions of \(\mathcal {T}_{\mathbb {R}R}{}\)-atoms. The main idea is to encode a single ReLU node v as a pair of variables, \(v^b\) and \(v^f\), and then assert \(\text {ReLU} {}{}(v^b,v^f)\). \(v^b\), the backward-facing variable, is used to express the connection of v to nodes from the preceding layer; whereas \(v^f\), the forward-facing variable, is used for the connections of v to the following layer (see Fig. 4). The rest of this section is devoted to presenting an efficient algorithm, Reluplex, for deciding the satisfiability of a conjunction of such atoms.
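For example, under this encoding a node \(v_{i,j}\) in hidden layer i contributes one linear atom and one ReLU atom (a schematic restatement of the scheme just described; for layer \(i=2\), the \(v^f\) variables of the preceding layer are simply the input variables \(v_{1,k}\)):

$$ v^b_{i,j} = \sum _{k=1}^{s_{i-1}} (W_i)_{j,k}\, v^f_{i-1,k} + (B_i)_j, \qquad \text {ReLU} {}{}\big (v^b_{i,j},\ v^f_{i,j}\big ). $$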

Fig. 4. The network from Fig. 2, with ReLU nodes split into backward- and forward-facing variables.

The Reluplex Procedure. As with simplex, Reluplex allows variables to temporarily violate their bounds as it iteratively looks for a feasible variable assignment. However, Reluplex also allows variables that are members of ReLU pairs to temporarily violate the ReLU semantics. Then, as it iterates, Reluplex repeatedly picks variables that are either out of bounds or that violate a ReLU, and corrects them using \(\mathsf {Pivot}_{}\) and \(\mathsf {Update}\) operations.

For a given set of variables \(\mathcal {X}= \{x_1,\ldots ,x_n\}\), a Reluplex configuration is either one of the distinguished symbols \(\{\texttt {SAT}{},\texttt {UNSAT}{}\}\) or a tuple \(\langle \mathcal {B}, T, l, u, \alpha {}, R\rangle \), where \(\mathcal {B}, T,l,u\) and \(\alpha {}\) are as before, and \(R\subset \mathcal {X}\times \mathcal {X}\) is the set of ReLU connections. The initial configuration for a conjunction of atoms is also obtained as before except that \(\langle x,y\rangle \in R\) iff \(\text {ReLU} {}{}(x,y)\) is an atom. The simplex transition rules \(\mathsf {Pivot}_{1}\), \(\mathsf {Pivot}_{2}\) and \(\mathsf {Update}\) are included also in Reluplex, as they are designed to handle out-of-bounds violations. We replace the \(\mathsf {Success}\) rule with the \(\mathsf {ReluSuccess}\) rule and add rules for handling ReLU violations, as depicted in Fig. 5. The \(\mathsf {Update}_{b}\) and \(\mathsf {Update}_{f}\) rules allow a broken ReLU connection to be corrected by updating the backward- or forward-facing variables, respectively, provided that these variables are non-basic. The \(\mathsf {PivotForRelu}\) rule allows a basic variable appearing in a ReLU to be pivoted so that either \(\mathsf {Update}_{b}\) or \(\mathsf {Update}_{f}\) can be applied (this is needed to make progress when both variables in a ReLU are basic and their assignments do not satisfy the ReLU semantics). The \(\mathsf {ReluSplit}\) rule is used for splitting on certain ReLU connections, guessing whether they are active (by setting \(l(x_i):=0\)) or inactive (by setting \(u(x_i):=0\)).

Fig. 5. Additional derivation rules for the abstract Reluplex algorithm.

Introducing splitting means that derivations are no longer linear. Using the notion of derivation trees, we can show that Reluplex is sound and complete (see Sect. II of the supplementary material [15]). In practice, splitting can be managed by a SAT engine with splitting-on-demand [1]. The naïve approach mentioned at the beginning of this section can be simulated by applying the \(\mathsf {ReluSplit}\) rule eagerly until it no longer applies and then solving each derived sub-problem separately (this reduction trivially guarantees termination just as do branch-and-cut techniques in mixed integer solvers [29]). However, a more scalable strategy is to try to fix broken ReLU pairs using the \(\mathsf {Update}_{b}\) and \(\mathsf {Update}_{f}\) rules first, and split only when the number of updates to a specific ReLU pair exceeds some threshold. Intuitively, this is likely to limit splits to “problematic” ReLU pairs, while still guaranteeing termination (see Sect. III of the supplementary material [15]). Additional details appear in Sect. 6.
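The following outline (in Python-style pseudocode; the helper routines are hypothetical placeholders for applications of the derivation rules) summarizes this search strategy, including the split threshold:

```python
def reluplex_search(state, relu_pairs, split_threshold):
    """Illustrative outline of the Reluplex search strategy: repair
    out-of-bounds variables first, then broken ReLU pairs, and split
    on a pair only after it has been repaired too many times.
    All helper methods are hypothetical stand-ins for the rules."""
    repairs = {pair: 0 for pair in relu_pairs}
    while True:
        if state.has_out_of_bounds_variable():
            if not state.fix_out_of_bounds():      # Update / Pivot_1 / Pivot_2
                return "UNSAT"                     # Failure rule applies
            continue
        broken = state.broken_relu_pairs()         # pairs with y != max(0, x)
        if not broken:
            return "SAT"                           # ReluSuccess rule applies
        pair = broken[0]
        if repairs[pair] >= split_threshold:
            return state.case_split(pair)          # ReluSplit, delegated to SAT engine
        state.repair_relu(pair)                    # Update_b / Update_f / PivotForRelu
        repairs[pair] += 1
```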

Example. To illustrate the use of the derivation rules, we use Reluplex to solve a simple example. Consider the network in Fig. 4, and suppose we wish to check whether it is possible to satisfy \(v_{11}\in [0,1]\) and \(v_{31}\in [0.5,1]\). As we know that the network outputs its input unchanged (\(v_{31}\equiv v_{11}\)), we expect Reluplex to be able to derive SAT. The initial Reluplex configuration is obtained by introducing new basic variables \(a_1,a_2,a_3\), and encoding the network with the equations:

$$ a_1 = -v_{11} + v^b_{21} \qquad a_2 = v_{11} + v^b_{22} \qquad a_3 = - v^f_{21} - v^f_{22} + v_{31} $$

The equations above form the initial tableau \(T_0\), and the initial set of basic variables is \(\mathcal {B}{} = \{a_1,a_2,a_3\}\). The set of ReLU connections is \(R=\{ \langle v^b_{21},v^f_{21} \rangle , \langle v^b_{22},v^f_{22} \rangle \}\). The initial assignment of all variables is set to 0. The lower and upper bounds of the basic variables are set to 0, in order to enforce the equalities that they represent. The bounds for the input and output variables are set according to the problem at hand; and the hidden variables are unbounded, except that forward-facing variables are, by definition, non-negative:

$$ v_{11}\in [0,1], \quad v^b_{21},v^b_{22}\in (-\infty ,\infty ), \quad v^f_{21},v^f_{22}\in [0,\infty ), \quad v_{31}\in [0.5,1], \quad a_1,a_2,a_3\in [0,0] $$

Starting from this initial configuration, our search strategy is to first fix any out-of-bounds variables. Variable \(v_{31}\) is non-basic and is out of bounds, so we perform an \(\mathsf {Update}\) step and set it to 0.5. As a result, \(a_3\), which depends on \(v_{31}\), is also set to 0.5. \(a_3\) is now basic and out of bounds, so we pivot it with \(v_{21}^f\), and then update \(a_3\) back to 0. The tableau now consists of the equations:

$$ a_1 = -v_{11} + v^b_{21} \qquad a_2 = v_{11} + v^b_{22} \qquad v_{21}^f = -v_{22}^f + v_{31} -a_3 $$

And the assignment is \(\alpha {}(v_{21}^f) = 0.5\), \(\alpha {}(v_{31}) = 0.5\), and \(\alpha {}(v) = 0\) for all other variables v. At this point, all variables are within their bounds, but the \(\mathsf {ReluSuccess}\) rule does not apply because \(\alpha {}(v_{21}^f) = 0.5 \ne 0 = \max {}(0,\alpha {}(v_{21}^b))\).

The next step is to fix the broken ReLU pair \(\langle {v_{21}^b, v_{21}^f}\rangle \). Since \(v_{21}^b\) is non-basic, we use \(\mathsf {Update}_{b}\) to increase its value by 0.5. The assignment becomes \(\alpha {}(v_{21}^b) = 0.5\), \(\alpha {}(v_{21}^f) = 0.5\), \(\alpha {}(v_{31}) = 0.5\), \(\alpha {}(a_1) = 0.5\), and \(\alpha {}(v) = 0\) for all other variables v. All ReLU constraints hold, but \(a_1\) is now out of bounds. This is fixed by pivoting \(a_1\) with \(v_{11}\) and then updating it. The resulting tableau is:

$$ v_{11} = v_{21}^b - a_1 \qquad a_2 = v_{21}^b + v^b_{22} - a_1 \qquad v_{21}^f = -v_{22}^f + v_{31} -a_3 $$

Observe that because \(v_{11}\) is now basic, it was eliminated from the equation for \(a_2\) and replaced with \(v_{21}^b - a_1\). The non-zero assignments are now \(\alpha {}(v_{11}) = 0.5\), \(\alpha {}(v_{21}^b) = 0.5\), \(\alpha {}(v_{21}^f) = 0.5\), \(\alpha {}(v_{31}) = 0.5\), \(\alpha {}(a_2) = 0.5\). Variable \(a_2\) is now too large, and so we have a final round of pivot-and-update: \(a_2\) is pivoted with \(v_{22}^b\) and then updated back to 0. The final tableau and assignments are:

$$ v_{11} = v^b_{21} - a_1 \qquad v^b_{22} = -v^b_{21} + a_1 + a_2 \qquad v^f_{21} = -v^f_{22} + v_{31} - a_3 $$

with \(\alpha {}(v_{11}) = \alpha {}(v^b_{21}) = \alpha {}(v^f_{21}) = \alpha {}(v_{31}) = 0.5\), \(\alpha {}(v^b_{22}) = -0.5\), and \(\alpha {}(v) = 0\) for all other variables v,

and the algorithm halts with the feasible solution it has found. A key observation is that we did not ever split on any of the ReLU connections. Instead, it was sufficient to simply use updates to adjust the ReLU variables as needed.

4 Efficiently Implementing Reluplex

We next discuss three techniques that significantly boost the performance of Reluplex: use of tighter bound derivation, conflict analysis and floating point arithmetic. A fourth technique, under-approximation, is discussed in Sect. IV of the supplementary material [15].

Tighter Bound Derivation. The simplex and Reluplex procedures naturally lend themselves to deriving tighter variable bounds as the search progresses [17]. Consider a basic variable \(x_i\in \mathcal {B}{}\) and let \(\text {pos} {}(x_i) =\{x_j\notin \mathcal {B}{}\ | \ T_{i,j}>0\}\) and \(\text {neg} {}(x_i) =\{x_j\notin \mathcal {B}{}\ | \ T_{i,j}<0\}.\) Throughout the execution, the following rules can be used to derive tighter bounds for \(x_i\), regardless of the current assignment:

$$ u(x_i) := \min \Big ( u(x_i),\ \sum _{x_j\in \text {pos} {}(x_i)} T_{i,j}\, u(x_j) + \sum _{x_j\in \text {neg} {}(x_i)} T_{i,j}\, l(x_j) \Big ) $$
$$ l(x_i) := \max \Big ( l(x_i),\ \sum _{x_j\in \text {pos} {}(x_i)} T_{i,j}\, l(x_j) + \sum _{x_j\in \text {neg} {}(x_i)} T_{i,j}\, u(x_j) \Big ) $$

The derived bounds can later be used to derive additional, tighter bounds.

When tighter bounds are derived for ReLU variables, these variables can sometimes be eliminated, i.e., fixed to the active or inactive state, without splitting. For a ReLU pair \(x^f=\text {ReLU} {}{}(x^b)\), discovering that either \(l{}(x^b)\) or \(l{}(x^f)\) is strictly positive means that in any feasible solution this ReLU connection will be active. Similarly, discovering that \(u(x^b)<0\) implies inactivity.
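A minimal sketch of these two steps, for a single tableau row \(x_i = \sum _j T_{i,j} x_j\) and a single ReLU pair (our illustration; unbounded variables are represented by \(\pm \infty \)):

```python
INF = float("inf")

def tighten_basic_bounds(row, lower, upper, xi):
    """Interval-propagate the row x_i = sum_j T_ij * x_j to tighten
    the bounds of the basic variable x_i."""
    ub = sum(c * (upper[xj] if c > 0 else lower[xj]) for xj, c in row.items())
    lb = sum(c * (lower[xj] if c > 0 else upper[xj]) for xj, c in row.items())
    upper[xi] = min(upper[xi], ub)
    lower[xi] = max(lower[xi], lb)

def relu_phase(lower, upper, xb, xf):
    """Return 'active', 'inactive', or None for a pair x_f = ReLU(x_b)."""
    if lower[xb] > 0 or lower[xf] > 0:
        return "active"       # fixed active: no split will be needed
    if upper[xb] < 0:
        return "inactive"     # fixed inactive: no split will be needed
    return None               # phase still undetermined
```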

Bound tightening operations incur overhead, and simplex implementations often use them sparingly [17]. In Reluplex, however, the benefits of eliminating ReLUs justify the cost. The actual amount of bound tightening to perform can be determined heuristically; we describe the heuristic that we used in Sect. 6.

Derived Bounds and Conflict Analysis. Bound derivation can lead to situations where we learn that \(l{}(x)>u{}(x)\) for some variable x. Such contradictions allow Reluplex to immediately undo a previous split (or answer UNSAT if no previous splits exist). However, in many cases more than just the previous split can be undone. For example, if we have performed 8 nested splits so far, it may be that the conflicting bounds for x are the direct result of split number 5 but have only just been discovered. In this case we can immediately undo splits number 8, 7, and 6. This is a particular case of conflict analysis, which is a standard technique in SAT and SMT solvers [26].

Floating Point Arithmetic. SMT solvers typically use precise (as opposed to floating point) arithmetic to avoid roundoff errors and guarantee soundness. Unfortunately, precise computation is usually at least an order of magnitude slower than its floating point equivalent. Invoking Reluplex on a large DNN can require millions of pivot operations, each of which involves the multiplication and division of rational numbers, potentially with large numerators or denominators—making the use of floating point arithmetic important for scalability.

There are standard techniques for keeping the roundoff error small when implementing simplex using floating point, which we incorporated into our implementation. For example, one important practice is trying to avoid \(\mathsf {Pivot}_{}\) operations involving the inversion of extremely small numbers [35].

To provide increased confidence that any roundoff error remained within an acceptable range, we also added the following safeguards: (i) After a certain number of \(\mathsf {Pivot}_{}\) steps we would measure the accumulated roundoff error; and (ii) If the error exceeded a threshold M, we would restore the coefficients of the current tableau T using the initial tableau \(T_0\).

Cumulative roundoff error can be measured by plugging the current assignment values for the non-basic variables into the equations of the initial tableau \(T_0\), using them to calculate the values for every basic variable \(x_i\), and then measuring by how much these values differ from the current assignment \(\alpha {}(x_i)\). We define the cumulative roundoff error as:

$$ \sum _{x_i\in \mathcal {B}{}_0} |\alpha {}(x_i) - \sum _{x_j\notin \mathcal {B}_0} T_{0_{i,j}} \cdot \alpha {}(x_j)| $$
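A direct sketch of this measurement (assuming, as in the earlier sketches, that the rows of the initial tableau \(T_0\) are stored as coefficient dictionaries):

```python
def cumulative_roundoff_error(T0, alpha):
    """Sum over the initial basic variables x_i of
    |alpha(x_i) - sum_j T0_ij * alpha(x_j)|."""
    error = 0.0
    for xi, row in T0.items():                     # x_i ranges over B_0
        predicted = sum(c * alpha[xj] for xj, c in row.items())
        error += abs(alpha[xi] - predicted)
    return error
```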

T is restored by starting from \(T_0\) and performing a short series of \(\mathsf {Pivot}_{}\) steps that result in the same set of basic variables as in T. In general, the shortest sequence of pivot steps to transform \(T_0\) to T is much shorter than the series of steps that was followed by Reluplex—and hence, although it is also performed using floating point arithmetic, it incurs a smaller roundoff error.

The tableau restoration technique serves to increase our confidence in the algorithm’s results when using floating point arithmetic, but it does not guarantee soundness. Providing true soundness when using floating point arithmetic remains a future goal (see Sect. 8).

5 Case Study: The ACAS Xu System

Airborne collision avoidance systems are critical for ensuring the safe operation of aircraft. The Traffic Alert and Collision Avoidance System (TCAS) was developed in response to midair collisions between commercial aircraft, and is currently mandated on all large commercial aircraft worldwide [24]. Recent work has focused on creating a new system, known as Airborne Collision Avoidance System X (ACAS X) [19, 20]. This system adopts an approach that involves solving a partially observable Markov decision process to optimize the alerting logic and further reduce the probability of midair collisions, while minimizing unnecessary alerts [19, 20, 22].

The unmanned variant of ACAS X, known as ACAS Xu, produces horizontal maneuver advisories. So far, development of ACAS Xu has focused on using a large lookup table that maps sensor measurements to advisories [13]. However, this table requires over 2 GB of memory. There is concern about the memory requirements for certified avionics hardware. To overcome this challenge, a DNN representation was explored as a potential replacement for the table [13]. Initial results show a dramatic reduction in memory requirements without compromising safety. In fact, due to its continuous nature, the DNN approach can sometimes outperform the discrete lookup table [13]. Recently, in order to reduce lookup time, the DNN approach was improved further, and the single DNN was replaced by an array of 45 DNNs. As a result, the original 2 GB table can now be substituted with efficient DNNs that require less than 3 MB of memory.

A DNN implementation of ACAS Xu presents new certification challenges. Proving that a set of inputs cannot produce an erroneous alert is paramount for certifying the system for use in safety-critical settings. Previous certification methodologies included exhaustively testing the system in 1.5 million simulated encounters [21], but this is insufficient for proving that faulty behaviors do not exist within the continuous DNNs. This highlights the need for verifying DNNs and makes the ACAS Xu DNNs prime candidates on which to apply Reluplex.

Network Functionality. The ACAS Xu system maps input variables to action advisories. Each advisory is assigned a score, with the lowest score corresponding to the best action. The input state is composed of seven dimensions (shown in Fig. 6) which represent information determined from sensor measurements [20]: (i) \(\rho \): Distance from ownship to intruder; (ii) \(\theta \): Angle to intruder relative to ownship heading direction; (iii) \(\psi \): Heading angle of intruder relative to ownship heading direction; (iv) \(v_\text {own}\): Speed of ownship; (v) \(v_\text {int}\): Speed of intruder; (vi) \(\tau \): Time until loss of vertical separation; and (vii) \(a_\text {prev}\): Previous advisory. There are five outputs which represent the different horizontal advisories that can be given to the ownship: Clear-of-Conflict (COC), weak right, strong right, weak left, or strong left. Weak and strong mean heading rates of 1.5 \(^{\circ }\)/s and 3.0 \(^{\circ }\)/s, respectively.

Fig. 6. Geometry for the ACAS Xu horizontal logic table.

The array of 45 DNNs was produced by discretizing \(\tau \) and \(a_\text {prev}\), and producing a network for each discretized combination. Each of these networks thus has five inputs (one for each of the other dimensions) and five outputs. The DNNs are fully connected, use ReLU activation functions, and have 6 hidden layers with a total of 300 ReLU nodes each.

Network Properties. It is desirable to verify that the ACAS Xu networks assign correct scores to the output advisories in various input domains. Figure 7 illustrates this kind of property by showing a top-down view of a head-on encounter scenario, in which each pixel is colored to represent the best action if the intruder were at that location. We expect the DNN’s advisories to be consistent in each of these regions; however, Fig. 7 was generated from a finite set of input samples, and there may exist other inputs for which a wrong advisory is produced, possibly leading to collision. Therefore, we used Reluplex to prove properties from the following categories on the DNNs: (i) The system does not give unnecessary turning advisories; (ii) Alerting regions are uniform and do not contain inconsistent alerts; and (iii) Strong alerts do not appear for high \(\tau \) values.

Fig. 7. Advisories for a head-on encounter with \(a_\text {prev} = \text {COC}\) and \(\tau = 0\) s.

6 Evaluation

We used a proof-of-concept implementation of Reluplex to check realistic properties on the 45 ACAS Xu DNNs. Our implementation consists of three main logical components: (i) A simplex engine for providing core functionality such as tableau representation and pivot and update operations; (ii) A Reluplex engine for driving the search and performing bound derivation, ReLU pivots and ReLU updates; and (iii) A simple SMT core for providing splitting-on-demand services. For the simplex engine we used the GLPK open-source LP solver with some modifications, for instance in order to allow the Reluplex core to perform bound tightening on tableau equations calculated by GLPK. Our implementation, together with the experiments described in this section, is available online [14].

Our search strategy was to repeatedly fix any out-of-bounds violations first, and only then correct any violated ReLU constraints (possibly introducing new out-of-bounds violations). We performed bound tightening on the entering variable after every pivot operation, and performed a more thorough bound tightening on all the equations in the tableau once every few thousand pivot steps. Tighter bound derivation proved extremely useful, and we often observed that after splitting on about 10% of the ReLU variables it led to the elimination of all remaining ReLUs. We counted the number of times a ReLU pair was fixed via \(\mathsf {Update}_{b}\) or \(\mathsf {Update}_{f}\) or pivoted via \(\mathsf {PivotForRelu}\), and split only when this number reached 5 (a number empirically determined to work well). We also implemented conflict analysis and back-jumping. Finally, we checked the accumulated roundoff error (due to the use of double-precision floating point arithmetic) after every 5000 \(\mathsf {Pivot}_{}\) steps, and restored the tableau if the error exceeded \(10^{-6}\). Most experiments described below required two tableau restorations or fewer.

We began by comparing our implementation of Reluplex to state-of-the-art solvers: the CVC4, Z3, Yices and MathSat SMT solvers and the Gurobi LP solver (see Table 1). We ran all solvers with a 4 h timeout on 2 of the ACAS Xu networks (selected arbitrarily), trying to solve for 8 simple satisfiable properties \(\varphi _1,\ldots ,\varphi _8\), each of the form \(x\ge c\) for a fixed output variable x and a constant c. The SMT solvers generally performed poorly, with only Yices and MathSat successfully solving two instances each. We attribute the results to these solvers’ lack of direct support for encoding ReLUs, and to their use of precise arithmetic. Gurobi solved 3 instances quickly, but timed out on all the rest. Its logs indicated that whenever Gurobi could solve the problem without case-splitting, it did so quickly; but whenever the problem required case-splitting, Gurobi would time out. Reluplex was able to solve all 8 instances. See Sect. V of the supplementary material [15] for the SMT and LP encodings that we used.

Table 1. Comparison to SMT and LP solvers. Entries indicate solution time (in seconds).

Next, we used Reluplex to test a set of 10 quantitative properties \(\phi _1,\ldots ,\phi _{10}\). The properties, described below, are formally defined in Sect. VI of the supplementary material [15]. Table 2 depicts for each property the number of tested networks (specified as part of the property), the test results and the total duration (in seconds). The Stack and Splits columns list the maximal depth of nested case-splits reached (averaged over the tested networks) and the total number of case-splits performed, respectively. For each property, we looked for an input that would violate it; thus, an UNSAT result indicates that a property holds, and a SAT result indicates that it does not hold. In the SAT case, the satisfying assignment is an example of an input that violates the property.

Table 2. Verifying properties of the ACAS Xu networks.

Property \(\phi _1\) states that if the intruder is distant and is significantly slower than the ownship, the score of a COC advisory will always be below a certain fixed threshold (recall that the best action has the lowest score). Property \(\phi _2\) states that under similar conditions, the score for COC can never be maximal, meaning that it can never be the worst action to take. This property was discovered not to hold for 35 networks, but this was later determined to be acceptable behavior: the DNNs have a strong bias for producing the same advisory they had previously produced, and this can result in advisories other than COC even for far-away intruders if the previous advisory was also something other than COC. Properties \(\phi _3\) and \(\phi _4\) deal with situations where the intruder is directly ahead of the ownship, and state that the DNNs will never issue a COC advisory.

Properties \(\phi _5\) through \(\phi _{10}\) each involve a single network, and check for consistent behavior in a specific input region. For example, \(\phi _5\) states that if the intruder is near and approaching from the left, the network advises “strong right”. Property \(\phi _7\), on which we timed out, states that when the vertical separation is large the network will never advise a strong turn. The large input domain and the particular network proved difficult to verify. Property \(\phi _8\) states that for a large vertical separation and a previous “weak left” advisory, the network will either output COC or continue advising “weak left”. Here, we were able to find a counter-example, exposing an input on which the DNN was inconsistent with the lookup table. This confirmed the existence of a discrepancy that had also been seen in simulations, and which will be addressed by retraining the DNN. We observe that for all properties, the maximal depth of nested splits was always well below the total number of ReLU nodes, 300, illustrating the fact that Reluplex did not split on many of them. Also, the total number of case-splits indicates that large portions of the search space were pruned.

Another class of properties that we tested is adversarial robustness properties. DNNs have been shown to be susceptible to adversarial inputs [34]: correctly classified inputs that an adversary slightly perturbs, leading to their misclassification by the network. Adversarial robustness is thus a safety consideration, and adversarial inputs can be used to train the network further, making it more robust [8]. There exist approaches for finding adversarial inputs [3, 8], but the ability to verify their absence is limited.

We say that a network is \(\delta \)-locally-robust at input point \(\varvec{x}\) if for every \(\varvec{x'}\) such that \(\Vert \varvec{x} - \varvec{x'}\Vert _{\infty }\le \delta \), the network assigns the same label to \(\varvec{x}\) and \(\varvec{x'}\). In the case of the ACAS Xu DNNs, this means that the same output has the lowest score for both \(\varvec{x}\) and \(\varvec{x'}\). Reluplex can be used to prove local robustness for a given \(\varvec{x}\) and \(\delta \), as depicted in Table 3. We used one of the ACAS Xu networks, and tested combinations of 5 arbitrary points and 5 values of \(\delta \). SAT results show that Reluplex found an adversarial input within the prescribed neighborhood, and UNSAT results indicate that no such inputs exist. Using binary search on values of \(\delta \), Reluplex can thus be used for approximating the optimal \(\delta \) value up to a desired precision: for example, for point 4 the optimal \(\delta \) is between 0.025 and 0.05. It is expected that different input points will have different local robustness, and the acceptable thresholds will thus need to be set individually.
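The binary search described above can be sketched as follows, where is_locally_robust(x, delta) stands for a hypothetical wrapper that runs Reluplex on the corresponding query and returns True on an UNSAT result (no adversarial input within the \(\delta \)-neighborhood):

```python
def approximate_optimal_delta(x, is_locally_robust, lo=0.0, hi=0.1, precision=1e-3):
    """Binary search for the largest delta at which the network is still
    delta-locally-robust at x (sketch; the robustness checker is hypothetical)."""
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if is_locally_robust(x, mid):   # UNSAT: robust within radius mid
            lo = mid                    # try a larger perturbation radius
        else:                           # SAT: an adversarial input was found
            hi = mid
    return lo, hi                       # the optimal delta lies in [lo, hi]
```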

Table 3. Local adversarial robustness tests. All times are in seconds.

Finally, we mention an additional variant of adversarial robustness which we term global adversarial robustness, and which can also be solved by Reluplex. Whereas local adversarial robustness is measured for a specific \(\varvec{x}\), global adversarial robustness applies to all inputs simultaneously. This is expressed by encoding two side-by-side copies of the DNN in question, \(N_1\) and \(N_2\), operating on separate input variables \(\varvec{x_1}\) and \(\varvec{x_2}\), respectively, such that \(\varvec{x_2}\) represents an adversarial perturbation of \(\varvec{x_1}\). We can then check whether \(\Vert \varvec{x_1} - \varvec{x_2}\Vert _{\infty }\le \delta \) implies that the two copies of the DNN produce similar outputs. Formally, we require that if \(N_1\) and \(N_2\) assign a given output the values \(p_1\) and \(p_2\), respectively, then \(|p_1-p_2|\le \epsilon \). If this holds for every output, we say that the network is \(\epsilon \)-globally-robust. Global adversarial robustness is harder to prove than the local variant, because encoding two copies of the network results in twice as many ReLU nodes and because the problem is not restricted to a small input domain. We were able to prove global adversarial robustness only on small networks; improving the scalability of this technique is left for future work.
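Schematically, for each output index k the encoded query asks whether the conjunction

$$ \Vert \varvec{x_1} - \varvec{x_2}\Vert _{\infty }\le \delta \ \wedge \ p_1 = N_1(\varvec{x_1})_k \ \wedge \ p_2 = N_2(\varvec{x_2})_k \ \wedge \ |p_1 - p_2| > \epsilon $$

is satisfiable; an UNSAT answer for every output k establishes \(\epsilon \)-global-robustness.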

7 Related Work

In [30], the authors propose an approach for verifying properties of neural networks with sigmoid activation functions. They replace the activation functions with piecewise linear approximations thereof, and then invoke black-box SMT solvers. When spurious counter-examples are found, the approximation is refined. The authors highlight the difficulty in scaling up this technique, and are able to tackle only small networks with at most 20 hidden nodes [31].

The authors of [3] propose a technique for finding local adversarial examples in DNNs with ReLUs. Given an input point \(\varvec{x}\), they encode the problem as a linear program and invoke a black-box LP solver. The activation function issue is circumvented by considering a sufficiently small neighborhood of \(\varvec{x}\), in which all ReLUs are fixed at the active or inactive state, making the problem convex. Thus, it is unclear how to address an \(\varvec{x}\) for which one or more ReLUs are on the boundary between active and inactive states. In contrast, Reluplex can be used on input domains for which ReLUs can have more than one possible state.

In a recent paper [10], the authors propose a method for proving the local adversarial robustness of DNNs. For a specific input point \(\varvec{x}\), the authors attempt to prove consistent labeling in a neighborhood of \(\varvec{x}\) by means of discretization: they reduce the infinite neighborhood into a finite set of points, and check that the labeling of these points is consistent. This process is then propagated through the network, layer by layer. While the technique is general in the sense that it is not tailored for a specific activation function, the discretization process means that any UNSAT result only holds modulo the assumption that the finite sets correctly represent their infinite domains. In contrast, our technique can guarantee that there are no irregularities hiding between the discrete points.

Finally, in [12], the authors employ hybrid techniques to analyze an ACAS X controller given in lookup-table form, seeking to identify safe input regions in which collisions cannot occur. It will be interesting to combine our technique with that of [12], in order to verify that following the advisories provided by the DNNs indeed leads to collision avoidance.

8 Conclusion and Next Steps

We presented a novel decision algorithm for solving queries on deep neural networks with ReLU activation functions. The technique is based on extending the simplex algorithm to support the non-convex ReLUs in a way that allows their inputs and outputs to be temporarily inconsistent and then fixed as the algorithm progresses. To guarantee termination, some ReLU connections may need to be split upon—but in many cases this is not required, resulting in an efficient solution. Our success in verifying properties of the ACAS Xu networks indicates that the technique holds much potential for verifying real-world DNNs.

In the future, we plan to increase the technique’s scalability. Apart from making engineering improvements to our implementation, we plan to explore better strategies for the application of the Reluplex rules, and to employ advanced conflict analysis techniques for reducing the amount of case-splitting required. Another direction is to provide better soundness guarantees without harming performance, for example by replaying floating-point solutions using precise arithmetic [18], or by producing externally-checkable correctness proofs [16]. Finally, we plan to extend our approach to handle DNNs with additional kinds of layers. We speculate that the mechanism we applied to ReLUs can be applied to other piecewise linear layers, such as max-pooling layers.