Unifying SAT-Based Approaches to Maximum Satisfiability Solving

Maximum satisfiability (MaxSAT), employing propositional logic as the declarative language of choice, has turned into a viable approach to solving NP-hard optimization problems arising from artificial intelligence and other real-world settings. A key contributing factor to the success of MaxSAT is the rise of increasingly effective exact solvers that are based on iterative calls to a Boolean satisfiability (SAT) solver. The three types of SAT-based MaxSAT solving approaches, each with its distinguishing features, implemented in current state-of-the-art MaxSAT solvers are the core-guided, the implicit hitting set (IHS), and the objective-bounding approaches. The objective-bounding approach is based on directly searching over the objective function range by iteratively querying a SAT solver whether the MaxSAT instance at hand has a solution under different bounds on the objective. In contrast, both core-guided and IHS are so-called unsatisfiability-based approaches that employ a SAT solver as an unsatisfiable core extractor to determine sources of inconsistencies, but critically differ in how the extracted unsatisfiable cores are used in the search for a provably optimal solution. Furthermore, a variety of algorithmic variants, of the core-guided approach in particular, have been proposed and implemented in solvers. It is well-acknowledged that each of the three approaches has its advantages and disadvantages, which is also witnessed by instance- and problem-domain-specific runtime performance differences (and at times similarities) of MaxSAT solvers implementing variants of the approaches. However, to what extent the approaches are fundamentally different, and how the benefits of the individual methods could be combined in a single algorithmic approach, is currently not fully understood. In this work, we approach these questions by developing UniMaxSAT, a general unifying algorithmic framework.
Based on the recent notion of abstract cores, UniMaxSAT captures core-guided, IHS, and objective-bounding computations in a general manner. The framework offers a unified way of establishing, in considerable generality, the correctness of the current approaches. We illustrate this by formally showing that UniMaxSAT can simulate the computations of various algorithmic instantiations of the three types of MaxSAT solving approaches. Furthermore, UniMaxSAT can be instantiated in novel ways, giving rise to new algorithmic variants of the approaches. We illustrate this aspect by developing a prototype implementation of an algorithmic variant for MaxSAT based on the framework.


Introduction
The declarative paradigm of maximum satisfiability (MaxSAT) (Li & Manyà, 2021; Bacchus et al., 2021) is today a viable approach to solving NP-hard optimization problems arising from AI and other real-world settings. Much of the success of MaxSAT is due to advances in practical algorithms for MaxSAT and their fine-grained implementations as MaxSAT solvers.
MaxSAT solvers can be categorized into exact (or "complete") and inexact (or "incomplete") solvers. Inexact solvers are typically based on stochastic local search (Jiang et al., 1995; Cai et al., 2014; Lei & Cai, 2018; Chu et al., 2023) and/or combinations of techniques from exact solvers (Joshi et al., 2019; Berg et al., 2019; Nadel, 2020), and are in general geared towards finding "good" solutions in relatively short time instead of providing guarantees on finding provably optimal solutions. In contrast, exact solvers are guaranteed to find optimal solutions, given enough runtime resources. A majority of research on developing increasingly effective MaxSAT solvers has to date focused on exact solvers, which are also the focus of this work.
The underlying algorithmic approaches implemented in the various SAT-based MaxSAT solvers today can be categorized into three types: the so-called core-guided approach (Fu & Malik, 2006; Marques-Silva & Planes, 2007; Heras et al., 2011; Ansótegui et al., 2013; Morgado et al., 2013, 2014; Narodytska & Bacchus, 2014; Alviano et al., 2015; Ansótegui et al., 2016; Ansótegui & Gabàs, 2017), the implicit hitting set (IHS) approach (Davies & Bacchus, 2011, 2013; Saikko et al., 2016), and the objective-bounding approach (Fu & Malik, 2006; Eén & Sörensson, 2006; Le Berre & Parrain, 2010; Koshimura et al., 2012; Heras et al., 2011; Ignatiev et al., 2014). The objective-bounding approach is based on directly searching over the objective function range by iteratively querying a SAT solver whether the MaxSAT instance at hand has a solution under different bounds on the objective, using different strategies such as model-improving search (Eén & Sörensson, 2006; Le Berre & Parrain, 2010; Koshimura et al., 2012), binary search (Fu & Malik, 2006; Heras et al., 2011; Piotrów, 2020) or progression-based search (Ignatiev et al., 2014). In contrast, both core-guided and IHS are unsatisfiability-based approaches, relying on iteratively extracting sources of inconsistencies in terms of unsatisfiable cores using a SAT solver (Eén & Sörensson, 2003) as a core-extracting decision oracle. However, core-guided and IHS solvers deal with the cores extracted during search differently. Core-guided algorithms reformulate the current working instance (starting with the input MaxSAT instance) to take into account the so-far extracted cores in subsequent search iterations towards an optimal solution. The various core-guided algorithms differ in the way in which the reformulation steps change the working instance. In contrast, in each iteration of IHS search, the SAT solver is invoked on a subset of clauses of the input instance, without reformulation-style modifications to the instance. The choice of the subset of
constraints to consider in each iteration is dictated by computing a (minimum-cost) hitting set of the so-far accumulated set of cores.
In practice, state-of-the-art core-guided, IHS and objective-bounding (especially model-improving) MaxSAT solvers are all competitive in terms of runtime performance. However, the relative performance on distinct problem domains can vary noticeably between solvers implementing a specific approach (Bacchus et al., 2019). The fundamental reasons behind this are not well understood, despite recent advances showing that, for a specific classic variant of core-guided search, the cores extracted from the reformulated working formulas during core-guided search are tightly related to cores extracted in IHS search on the original instance (Bacchus & Narodytska, 2014; Narodytska & Bjørner, 2022). Furthermore, fundamental insights into how to combine the best of each of the three types of algorithmic approaches are currently lacking.
In this work, we develop a general algorithmic framework that captures core-guided, IHS, and objective-bounding computations in a unifying way. The framework is based on the recently-proposed notion of abstract cores, originally presented as a performance-improving variant of IHS for MaxSAT (Berg et al., 2020) that brings a flavor of core-guided reformulation into the representation of the hitting set problems solved during IHS search. While the correctness of the objective-bounding approach is relatively straightforward to establish directly, this is not generally the case for fine-grained variants of the core-guided approach, which would also translate into non-trivial individual correctness proofs for any non-trivial combinations of, e.g., the core-guided and IHS approaches. Our framework provides a unified way of establishing the correctness of variants of the core-guided and IHS approaches. The framework also has the potential of being instantiated in novel ways, thereby giving rise to new variants of provably-correct MaxSAT algorithms. While the main focus of this work is on the general formal algorithmic framework, as an illustration of its potential for obtaining novel types of unsatisfiability-based algorithms we briefly outline and provide a prototype implementation of a core-guided variant for MaxSAT obtained through the framework.
In terms of related work, the motivations underlying the UniMaxSAT framework are in part similar to those of generic frameworks developed for capturing the reasoning performed by modern SAT solvers and closely related solver technologies. These include the inprocessing rules framework (Järvisalo et al., 2012; Fazekas et al., 2019) for capturing the various types of reasoning steps applied by inprocessing SAT solvers, as well as the DPLL(T) framework (Nieuwenhuis et al., 2006) and its extensions, which have been developed for fine-grained formalization of satisfiability modulo theories (SMT) solvers (Cimatti et al., 2010; Barrett et al., 2021; Bjørner & Fazekas, 2023), specific types of optimization approaches in SMT (Fazekas et al., 2018), as well as, e.g., reasoning performed by answer set programming (ASP) solvers (Baselice et al., 2005; Gebser et al., 2009). Both the inprocessing and the DPLL(T) framework take the view of formalizing solving steps as transition systems, describing the possible next-state transitions from a current solver state. The inprocessing framework is instantiated by specific redundancy notions (which have more recently been generalized to the realm of MaxSAT (Ihalainen et al., 2022)) which themselves can generally cover the various reasoning techniques applied in SAT solvers. However, such redundancy notions on the MaxSAT level do not allow for directly capturing the multitude of SAT-based MaxSAT algorithms. In contrast, a key motivation behind the UniMaxSAT framework is to cover all three main SAT-based MaxSAT solving approaches, also with the aim of providing a unifying view towards novel types of MaxSAT algorithms that would combine aspects of the different approaches in correct ways.
A shorter preliminary version of this work was presented at the IJCAI 2023 conference (Ihalainen et al., 2023). The present article considerably revises and extends the work reported at IJCAI 2023. Most notably, we here thoroughly revise the details of the UniMaxSAT framework. As a result, the framework now allows for capturing not only the core-guided and implicit hitting set approaches to MaxSAT, but SAT-based MaxSAT solving more generally, including what we will refer to as objective-bounding MaxSAT algorithms, instantiations of which include the model-improving approach as well as binary and progression-based search for MaxSAT. The revised framework is also arguably cleaner in terms of being more directly connected with how the various SAT-based MaxSAT algorithms perform search. The formal proofs have accordingly been thoroughly revised and further details included, among them formal explanations of how the general framework captures several further algorithmic instantiations of SAT-based MaxSAT approaches.
The rest of this article is organized as follows. We start with background on maximum satisfiability (Section 2) and an overview of the objective-bounding, core-guided, and implicit hitting set approaches to MaxSAT solving (Section 3). Then, as the main contribution of this work, we detail the UniMaxSAT framework (Section 4) and argue that it can simulate the behavior of the objective-bounding and implicit hitting set approaches (Section 5) as well as the various variants of the core-guided approach proposed in the literature so far (Section 6). Before conclusions, we further detail a novel variant of the core-guided approach to illustrate the use of the UniMaxSAT framework for formulating new algorithmic variants (Section 7).

Maximum Satisfiability
For a Boolean variable x there are two literals, x and its negation x̄. A clause C = l_1 ∨ . . . ∨ l_n is a disjunction of literals, and a conjunctive normal form (CNF) formula is a set of clauses F = {C_1, . . ., C_m}. For a clause C, the set var(C) consists of the variables x for which either x ∈ C or x̄ ∈ C. An assignment τ maps variables to 1 (true) or 0 (false). Assignments extend to literals, clauses and formulas in the standard way: τ(x̄) = 1 − τ(x), τ(C) = max{τ(l) | l ∈ C}, and τ(F) = min{τ(C) | C ∈ F}. Interchangeably, we may treat τ as the set of literals τ assigns to 1. Then l ∈ τ denotes τ(l) = 1 and l̄ ∈ τ denotes τ(l) = 0. The set of variables assigned by τ is var(τ) = {x | x ∈ τ or x̄ ∈ τ}; τ is complete for F if it assigns each variable in F a value, and otherwise partial. An assignment τ that assigns each literal in a clause C to 0 falsifies C, denoted by τ ⊇ ¬C.
Pseudo-Boolean constraints are linear inequalities of the form ∑_i c_i x_i ≥ k, where each x_i is a Boolean variable, each c_i a positive coefficient, and k a positive constant. The constraint ∑_i c_i x_i ≥ k is satisfied by an assignment τ if ∑_i c_i τ(x_i) ≥ k. When we do not make assumptions about how exactly pseudo-Boolean constraints are represented as CNF formulas¹, we abstractly use asCNF(∑_i c_i x_i ≥ k) to denote a CNF formula that is satisfied by an assignment τ iff ∑_i c_i τ(x_i) ≥ k. Taking a name o_k to indicate whether a pseudo-Boolean constraint is satisfied, we also use asCNF(∑_i c_i x_i ≥ k ↔ o_k) to denote a (CNF representation of a) reified pseudo-Boolean constraint, i.e., a CNF formula that is satisfied by any assignment τ that sets τ(o_k) = 1 iff ∑_i c_i τ(x_i) ≥ k. An important special case of pseudo-Boolean constraints is the so-called cardinality constraints ∑_i x_i ≥ k, i.e., pseudo-Boolean constraints in which each coefficient is 1. Notice how the o_k variable of a reified cardinality constraint asCNF(∑_i x_i ≥ k ↔ o_k) essentially indicates whether the number of x_i variables assigned to 1 is at least k.
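The intended semantics of asCNF and its reification can be made concrete with a small sketch. The following Python fragment (illustrative only; it evaluates constraints directly rather than building any CNF encoding, and the function names are ours) checks a pseudo-Boolean constraint under an assignment and verifies the reification semantics by enumeration.

```python
from itertools import product

def pb_satisfied(tau, terms, k):
    """Evaluate a pseudo-Boolean constraint sum_i c_i * x_i >= k under tau."""
    return sum(c * tau[x] for x, c in terms) >= k

def reified_semantics_ok(tau, xs, k):
    """Intended semantics of asCNF(sum_i x_i >= k <-> o_k): the name "o" is
    assigned 1 in tau exactly when at least k of the x_i are assigned 1."""
    return tau["o"] == (1 if sum(tau[x] for x in xs) >= k else 0)

# Enumerate all assignments over three inputs and the name o_2: for each of
# the 8 input assignments exactly one value of the name is consistent.
xs = ["x1", "x2", "x3"]
consistent = [
    dict(zip(xs + ["o"], bits))
    for bits in product((0, 1), repeat=4)
    if reified_semantics_ok(dict(zip(xs + ["o"], bits)), xs, 2)
]
```

Any correct CNF encoding of the reified constraint would be satisfied by exactly the assignments collected in `consistent` (extended to its auxiliary variables).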
An instance F = (F, O) of (weighted partial) maximum satisfiability (MaxSAT for short) consists of a CNF formula F and an objective function O = ∑_i w_i b_i + W_lb under minimization, where the w_i are positive integers and the b_i are variables of F. Notice that we here include the constant term W_lb for convenience. This allows for explicitly representing lower bounds on the costs of solutions as computed by core-guided MaxSAT algorithms (as detailed in Section 3.2).
Remark 1. The definition of MaxSAT in terms of a CNF formula and an objective that we use in this work is equivalent to the arguably more classical (clausal) definition of MaxSAT in terms of hard and weighted soft clauses in the following sense. Going from the objective function representation to the clausal representation, the clauses remain hard clauses, and each term w_i b_i in the objective function O is equivalently represented as a soft clause ⟨(b̄_i), w_i⟩, i.e., a unit soft clause (b̄_i) with weight w_i. In the other direction, any clausal instance of MaxSAT can be converted to an instance where each soft clause is a unit clause by the blocking variable transformation (Bacchus et al., 2021) standardly employed in SAT-based MaxSAT solvers before search: introduce a fresh variable b_i for each non-unit soft clause C_i with weight w_i and replace C_i with the hard clause C_i ∨ b_i and the soft clause ⟨(b̄_i), w_i⟩. After this transformation, the introduced soft unit clauses are evidently equivalent to the objective function ∑_i w_i b_i. For example, consider the following (clausal) MaxSAT instance (F_H, F_S) consisting of the hard clauses F_H = {(x ∨ b_1), (y ∨ z ∨ b_2)} and the soft clauses F_S = {⟨(ȳ ∨ x), 1⟩, ⟨(b̄_1), 2⟩, ⟨(b̄_2), 5⟩, ⟨(z ∨ x ∨ b_1), 3⟩}. Applying the blocking variable transformation to each non-unit soft clause results in the set of hard clauses {(x ∨ b_1), (y ∨ z ∨ b_2), (ȳ ∨ x ∨ b_3), (z ∨ x ∨ b_1 ∨ b_4)}. This instance can be equivalently represented using the objective function representation as the instance consisting of these hard clauses together with the objective O = 2b_1 + 5b_2 + b_3 + 3b_4. The set var(O) consists of the variables that occur in O. A complete satisfying assignment τ to F is a solution to F and has cost O(τ) = ∑_i w_i τ(b_i) + W_lb. A solution is optimal if there are no solutions of lower cost. The cost of optimal solutions to a MaxSAT instance F is denoted by opt(F).
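The blocking variable transformation described in Remark 1 can be sketched as follows; the clause representation and the fresh-variable naming scheme (`_b1`, `_b2`, ...) are choices made for this illustration, and the sketch only handles unit soft clauses over negated variables, as produced by the transformation itself.

```python
def to_objective_form(hard, soft):
    """Blocking-variable transformation sketch: replace each non-unit soft
    clause <C, w> by a hard clause C v b and an objective term w*b for a
    fresh blocking variable b; a unit soft clause <(~x), w> contributes the
    term w*x directly. Clauses are frozensets of (variable, polarity)
    literals, e.g. ("x", True) for x and ("x", False) for its negation."""
    hard, objective, fresh = list(hard), {}, 0
    for clause, w in soft:
        if len(clause) == 1:
            (var, pos), = clause
            assert not pos, "sketch handles unit softs over negated variables"
            objective[var] = objective.get(var, 0) + w
        else:
            fresh += 1
            b = f"_b{fresh}"  # fresh blocking variable; naming is illustrative
            hard.append(clause | {(b, True)})
            objective[b] = objective.get(b, 0) + w
    return hard, objective

# The clausal instance of Remark 1: two hard clauses and four soft clauses;
# the transformation relaxes the two non-unit soft clauses.
F_H = [frozenset({("x", True), ("b1", True)}),
       frozenset({("y", True), ("z", True), ("b2", True)})]
F_S = [(frozenset({("y", False), ("x", True)}), 1),
       (frozenset({("b1", False)}), 2),
       (frozenset({("b2", False)}), 5),
       (frozenset({("z", True), ("x", True), ("b1", True)}), 3)]
new_hard, objective = to_objective_form(F_H, F_S)
```

After the call, `new_hard` contains four hard clauses and `objective` the coefficient of each (original or fresh) objective variable.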

Example 1. Consider the MaxSAT instance
Algorithm 1 The objective-bounding search approach to MaxSAT
Input: A MaxSAT instance F = (F, O) where O = ∑_i w_i b_i.
Output: An optimal solution τ to F.

SAT-Based Approaches to MaxSAT
We develop a unifying algorithmic framework for modern SAT-based MaxSAT algorithms, capturing forms of objective-bounding search, core-guided algorithms, and algorithms based on the implicit hitting set (IHS) approach. As necessary background, we describe each of these approaches in general terms; practical solver implementations employ various heuristics and optimizations that do not affect our main contributions and, as such, are not detailed here.
Common to the three types of modern SAT-based MaxSAT algorithms is the use of an incremental SAT solver that can determine the satisfiability of CNF formulas under different sets of assumptions (Eén & Sörensson, 2003). Given a CNF formula F and a partial assignment γ_A (constituting a set of assumptions, represented as a set of literals), we abstract the SAT solver into the subroutine Extract-Core that returns a triplet (res, C, τ). Here res = 'true' if there is a solution τ ⊇ γ_A to F. If there is no such solution, res = 'false' and C is a clause over a subset of the variables in γ_A that is entailed by F. When invoked on F under a set of assumptions γ_A for which F ∧ γ_A is unsatisfiable, modern SAT solvers provide such a clause C at termination without computational overhead. In the context of SAT-based MaxSAT algorithms, such a C found during MaxSAT search will be an unsatisfiable core of the current working instance.
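As a concrete (if naive) stand-in for the Extract-Core abstraction, the following sketch replaces the incremental SAT solver by exhaustive enumeration and shrinks the returned core greedily; a real solver obtains the entailed clause from the assumption-based SAT call itself. Names and representations are ours.

```python
from itertools import product

def _models(formula, assumptions, variables):
    """Yield assignments satisfying the formula and the assumption literals."""
    for bits in product((False, True), repeat=len(variables)):
        tau = dict(zip(variables, bits))
        if all(any(tau[v] == pol for v, pol in cl) for cl in formula) \
                and all(tau[v] == pol for v, pol in assumptions):
            yield tau

def extract_core(formula, assumptions):
    """Brute-force stand-in for an assumption-based incremental SAT call.
    Clauses are frozensets of (variable, polarity) literals. Returns
    (res, C, tau): a satisfying assignment extending the assumptions, or,
    if none exists, a clause over a subset of the assumption variables that
    is entailed by the formula (an unsatisfiable core)."""
    variables = sorted({v for cl in formula for v, _ in cl} |
                       {v for v, _ in assumptions})
    model = next(_models(formula, assumptions, variables), None)
    if model is not None:
        return "true", None, model
    # No model: the clause of negated assumptions is entailed. Greedily drop
    # assumptions that are not needed for unsatisfiability.
    core = set(assumptions)
    for lit in sorted(core):
        if next(_models(formula, core - {lit}, variables), None) is None:
            core.discard(lit)
    return "false", frozenset((v, not pol) for v, pol in core), None
```

For instance, asking for a model of (x ∨ y) ∧ (z) under the assumptions x̄ and ȳ yields res = 'false' with the entailed clause (x ∨ y).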

Objective-Bounding Search
The so-called objective-bounding search algorithms (captured by Algorithm 1) compute an optimal solution to a given MaxSAT instance (F, O) by iteratively selecting a value w (Line 3) and querying Extract-Core for a solution τ to F satisfying O(τ) ≤ w (Line 4). In practice, the query is formed by encoding a pseudo-Boolean constraint enforcing the bound w on the objective O. The solution of lowest cost found so far is stored in τ* and updated whenever Extract-Core returns res = 'true' and a new solution (Line 5).
The search terminates when the algorithm establishes that there is no solution τ for which O(τ ) ≤ O(τ * ) − 1 (Line 6).
Solution-improving search is an upper-bounding approach that, starting from some upper bound such as the sum of all coefficients of the objective, in each iteration sets w to be one lower than the cost of the best currently known solution τ*. This strategy guarantees that the algorithm terminates when Extract-Core returns res = 'false'. Solution-improving search is today the most widely employed objective-bounding search algorithm. In contrast, UNSAT-SAT search is a lower-bounding approach that starts from w = 0 and increments w each time Extract-Core returns res = 'false'. The search terminates when Extract-Core returns res = 'true'.² Binary search algorithms perform binary search over the range of the objective function, maintaining both an upper and a lower bound on the optimal cost. The upper bound is updated whenever Extract-Core returns res = 'true', and the lower bound whenever Extract-Core returns res = 'false'.³ Finally, progression-based search can be seen as a combination of UNSAT-SAT and binary search. Progression-based search initially tests the values w = 2^0 − 1, 2^1 − 1, 2^2 − 1, . . . until Extract-Core reports res = 'true' and a solution on a particular ith iteration. At this stage, the algorithm has determined that the optimal cost is between 2^{i−1} − 1 and 2^i − 1 and switches to binary search using these as the initial lower and upper bounds.
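The solution-improving strategy can be sketched as follows, with the SAT call under an encoded objective bound replaced by brute-force enumeration; the names and clause representation are chosen for this illustration only, not taken from any particular solver.

```python
from itertools import product

def solution_improving(formula, objective):
    """Sketch of solution-improving (model-improving) search. The SAT call
    with an encoded pseudo-Boolean bound O(tau) <= w is replaced here by
    brute-force enumeration. Clauses are frozensets of (variable, polarity)
    literals; objective maps variables to positive weights. Returns an
    optimal solution and its cost."""
    variables = sorted({v for cl in formula for v, _ in cl} | set(objective))

    def query(w):  # the oracle call: a model of the formula with cost <= w?
        for bits in product((False, True), repeat=len(variables)):
            tau = dict(zip(variables, bits))
            if all(any(tau[v] == pol for v, pol in cl) for cl in formula) \
                    and sum(c for v, c in objective.items() if tau[v]) <= w:
                return tau
        return None

    best = query(sum(objective.values()))  # initial bound: sum of all weights
    assert best is not None, "hard clauses are unsatisfiable"
    while True:
        cost = sum(c for v, c in objective.items() if best[v])
        improved = query(cost - 1)         # ask for a strictly better solution
        if improved is None:               # res='false': best is optimal
            return best, cost
        best = improved
```

On the instance (x ∨ b_1) ∧ (y ∨ b_2) ∧ (x̄ ∨ ȳ) with objective b_1 + 2b_2, the search terminates with the cost-1 solution that sets b_1 to 1.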

Core-Guided Search
Turning to core-guided search, we outline as Algorithm 2 a general abstraction of the core-guided approach to computing an optimal solution to a given MaxSAT instance F = (F, O). The algorithm first initializes a set Constraints of cardinality constraints as the empty set and a reformulated objective function O_R as the objective function O (Lines 1-2). In each iteration of the main loop (Lines 3-9) a SAT solver is queried for a solution τ that (i) satisfies all clauses in F and all of the cardinality constraints in Constraints and (ii) falsifies all objective variables of the current reformulated objective O_R, i.e., O_R(τ) = W_lb (Lines 4-5). If there is such a τ, it is returned as an optimal solution to the input MaxSAT instance (Line 6). Otherwise, a core C of (F ∪ Constraints, O_R) is obtained. The core is then relaxed (Lines 7-9) by transforming the current working instance in a way that enables (at most) one variable in core C to incur cost in subsequent iterations. This is achieved by adding a cardinality constraint over the core to Constraints (Lines 7-8) and updating the current working objective (Line 9).

2. Today, UNSAT-SAT search is not commonly used. This is mainly because core-guided algorithms can be seen as refined versions of UNSAT-SAT search and generally outperform UNSAT-SAT search in practice.
3. In theory, binary search has the desirable property of guaranteed termination within a logarithmic number of calls to Extract-Core in terms of the sum of objective coefficients. In practice, however, it is commonly acknowledged by MaxSAT solver developers that implementations of binary search are often outperformed by implementations of solution-improving search. This is due to the fact that the intermediate calls to Extract-Core that report res = 'false' can often be challenging when solving real-world instances.

Algorithm 2 The core-guided approach to MaxSAT
Input: A MaxSAT instance F = (F, O).
Output: An optimal solution τ to F.
6: if res = 'true' then return τ
7: (D, out) = Generate-Cardinality-Constraints(C)
Conceptually, modern core-guided algorithms differ mainly in the specifics of the core-relaxation step. We detail the relaxation of the core-guided OLL algorithm (Andres et al., 2012; Morgado et al., 2014) as arguably one of the most successful core-guided approaches. In OLL, an invocation of Generate-Cardinality-Constraints(C) returns the set of cardinality constraints D = { asCNF(∑_{l∈C} l ≥ k ↔ o^C_k) | k = 2, . . ., |C| } together with the set out = { o^C_k | k = 2, . . ., |C| } of output variables. Intuitively, as enforcing o^C_k to 0 limits the number of literals in C assigned to 1 to at most k − 1, the new cardinality constraints define output variables that count the number of literals in C assigned to 1 in subsequent iterations. The output variable with index 1 is not introduced, since the fact that C is a core implies that every solution to the instance assigns at least one literal to 1. In the objective reformulation step (the Refine-Objective procedure in Algorithm 2) OLL adds the newly-introduced outputs to the objective in a way that preserves the set of optimal solutions. The coefficient of each x ∈ C is decreased by w_C = min_{x∈C}{c_x}, where c_x is the coefficient of x in O_R, and every literal whose coefficient decreases to 0 is removed from O_R. The coefficient of each output variable in out is set to w_C and the constant term of O_R is increased by w_C. During the reformulation step, the coefficient of at least one variable in C decreases to 0. Thus, at least one more literal may incur cost in subsequent iterations.
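The objective-reformulation bookkeeping of OLL can be sketched as follows; the output-variable names are illustrative, and the cardinality-constraint definitions of the outputs are assumed to be generated separately.

```python
def oll_refine(objective, w_lb, core):
    """Sketch of OLL's Refine-Objective bookkeeping. objective maps objective
    literals to positive coefficients, w_lb is the constant term W_lb, and
    core is a list of objective literals. Returns the refined objective, the
    new constant term, and the weight w_C given to the fresh output variables
    o_2, ..., o_|core| (names illustrative; a fresh name per core is needed
    in general)."""
    w_c = min(objective[l] for l in core)
    refined = dict(objective)
    for l in core:                       # decrease coefficients over the core
        refined[l] -= w_c
        if refined[l] == 0:              # drop literals whose coefficient hits 0
            del refined[l]
    for k in range(2, len(core) + 1):    # outputs counting >= k true core literals
        refined[f"o{k}"] = w_c
    return refined, w_lb + w_c, w_c

# Relaxing a core over b1, b2, b4 (weights assumed for illustration):
refined, lb, w_c = oll_refine({"b1": 1, "b2": 1, "b4": 1, "b3": 2, "b5": 2},
                              0, ["b1", "b2", "b4"])
```

All three core literals drop out of the objective, two outputs of weight 1 enter it, and the constant term (the lower bound) increases by w_C = 1.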
Example 3. Invoke OLL on F = (F, O) from Example 1. The first call to Extract-Core is under the assumptions γ_A = { b̄_1, b̄_2, b̄_3, b̄_4, b̄_5 }. Let the first core obtained be C_1 = (b_1 ∨ b_2 ∨ b_4). The new objective O_R is formed based on the following observations: (i) as C_1 is a core, any solution to F assigns one literal in C_1 to 1 and as such incurs 1 cost in O, and (ii) each additional literal of C_1 assigned to 1 should incur precisely 1 more cost. Notice how observation (i) results in the addition of a constant term 1 to the objective and observation (ii) in the addition of the outputs of the new cardinality constraint to the objective.
Algorithm 3 The implicit hitting set approach to MaxSAT
Input: A MaxSAT instance F = (F, O).
Output: An optimal solution τ to F.

Implicit Hitting Set Approach
A generic abstraction of the implicit hitting set (IHS) approach to MaxSAT is outlined as Algorithm 3. IHS iteratively extracts cores of a given MaxSAT instance F = (F, O) and stores them in the set K. In contrast to core-guided algorithms, instead of reformulating the objective after each core-extraction step, IHS invokes the Mincost-HS(K) procedure that computes a minimum-cost hitting set (MCHS) over K under O. Here an MCHS is a minimum-cost (in terms of O) subset hs of the objective variables such that assigning the variables in hs to 1 satisfies all cores in K. In each iteration of the main loop (Lines 2-6), Extract-Core is queried for a solution that falsifies all objective variables that are not contained in the hs computed over the current set of cores (Lines 3-4). (Note that the assumptions γ_A set up on Line 3 constitute a partial assignment over the objective variables that can be extended to a solution to K in a unique way.) If there is such a τ, it is an optimal solution to the input instance (Line 5). Otherwise, a new core is obtained and added to K (Line 6). The MCHS computed in each iteration represents a way of satisfying all cores found so far in an optimal way under O, thus giving a lower bound on the optimal cost of F. IHS iterates until the most recent MCHS can be extended to a solution to F, at which point the solution satisfies ("hits") all cores of the instance (not only those currently accumulated in K) and is thereby an optimal solution to F.

Example 4. Consider the MaxSAT instance F = (F, O) from Example 1. In the first iteration there are no cores and hence Mincost-HS(K) = ∅. The first call to Extract-Core is under the assumptions γ_A = { b̄_1, . . ., b̄_5 }. There are a number of cores that could be returned; let the first core obtained be C_1 = (b_1 ∨ b_2 ∨ b_4). In the second iteration, there are three different MCHSes over K = {C_1}. Assume that Mincost-HS returns {b_1}. The assumptions for the next call to Extract-Core are γ_A = { b̄_2, b̄_3, b̄_4, b̄_5 }. Assume that the next core obtained is C_2 = (b_3 ∨ b_4 ∨ b_5). In this third iteration, the only MCHS over K = {C_1, C_2} is {b_4}. The assumptions for the next call to Extract-Core are γ_A = { b̄_1, b̄_2, b̄_3, b̄_5 }, under which a third core C_3 is obtained. In this fourth iteration, there are two possible MCHSes over K = {C_1, C_2, C_3}. Assume that Mincost-HS returns {b_2, b_4}. This leads to the assumptions γ_A = { b̄_1, b̄_3, b̄_5 }. Given these assumptions, Extract-Core returns the solution τ = { b̄_1, b_2, x, b̄_3, b_4, b̄_5 } as an optimal solution to F.
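The Mincost-HS subroutine can be illustrated with a brute-force sketch; the weights below are assumed for illustration (the full instance of Example 1 is not reproduced in this excerpt), and practical IHS solvers delegate this NP-hard step to an integer programming solver.

```python
from itertools import combinations

def mincost_hs(cores, weights):
    """Brute-force Mincost-HS: a cheapest (w.r.t. the objective weights) set
    of objective variables whose assignment to 1 satisfies every core. Cores
    are sets of objective variables."""
    universe = sorted(set().union(*cores)) if cores else []
    best = None
    for r in range(len(universe) + 1):
        for subset in combinations(universe, r):
            # A hitting set must intersect every core.
            if all(any(v in subset for v in core) for core in cores):
                cost = sum(weights[v] for v in subset)
                if best is None or cost < best[1]:
                    best = (set(subset), cost)
    return best

# Two cores as in the running example, with assumed weights: b4 hits both
# cores at the lowest cost, so {b4} is the unique MCHS.
weights = {"b1": 1, "b2": 1, "b3": 2, "b4": 1, "b5": 2}
hs, lb = mincost_hs([{"b1", "b2", "b4"}, {"b3", "b4", "b5"}], weights)
```

The cost of the returned hitting set is exactly the lower bound on the optimal cost that IHS obtains from the accumulated cores.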

UniMaxSAT: A General Framework for SAT-Based MaxSAT Algorithms
As the main contribution of this article, we present UniMaxSAT as a general algorithmic framework unifying SAT-based MaxSAT algorithms. The framework makes use of the notion of abstract cores, originally proposed as a basis for a refinement of IHS (Berg et al., 2020). Here, going considerably beyond their original intended purpose, we build on abstract cores to obtain a framework that captures SAT-based MaxSAT algorithms in general terms.

Abstraction Sets and Abstract Cores
We start by defining abstraction sets and abstract cores. On a high level, abstraction sets and abstract cores of a MaxSAT instance capture generic properties of the instance compactly, in the sense that a large number of "standard" cores would be needed to express the same properties (Berg et al., 2020).
Informally speaking, an abstraction set models a relationship between a set in of input literals and a set out of output literals via a CNF formula D. In typical practical instantiations, the formula D corresponds to a cardinality constraint that essentially counts the number of input literals assigned to 1 by satisfying assignments of D, assigning the kth output literal to 1 if and only if at least k input literals are assigned to 1. In the following, we give a more general definition that is sufficient for proving the correctness of UniMaxSAT.
Definition 1. An abstraction set ab = (in, D, out) consists of a set in of input literals, a set out of output literals, and a satisfiable CNF formula D over a superset of the variables of the literals in in ∪ out, i.e., var(in ∪ out) ⊆ var(D). Solutions to D are uniquely determined by assignments to the inputs: for any assignment τ over in there is exactly one extension τ_E ⊇ τ that satisfies D.
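The unique-extension requirement of Definition 1 can be checked by enumeration for small definitions. The following sketch uses a toy reification of a conjunction as the definition D; a typical practical instantiation would instead be a CNF encoding of a cardinality constraint.

```python
from itertools import product

def unique_extension(inputs, outputs, definition):
    """Check the key property of Definition 1 by enumeration: every assignment
    over the inputs has exactly one extension to the outputs that satisfies
    the definitions D. Clauses are frozensets of (variable, polarity)
    literals."""
    for in_bits in product((False, True), repeat=len(inputs)):
        tau = dict(zip(inputs, in_bits))
        extensions = 0
        for out_bits in product((False, True), repeat=len(outputs)):
            full = {**tau, **dict(zip(outputs, out_bits))}
            if all(any(full[v] == pol for v, pol in cl) for cl in definition):
                extensions += 1
        if extensions != 1:
            return False
    return True

# A toy definition reifying o <-> (x1 and x2) as the CNF clauses
# (~x1 v ~x2 v o), (x1 v ~o), (x2 v ~o): every input assignment fixes o.
D = [frozenset({("x1", False), ("x2", False), ("o", True)}),
     frozenset({("x1", True), ("o", False)}),
     frozenset({("x2", True), ("o", False)})]
```

With an empty definition, by contrast, an output variable is left unconstrained and the property fails.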
For a given abstraction set ab = (in, D, out), we refer to D as the definitions of the outputs out. For a collection AB = {(in_i, D_i, out_i) | i = 1, . . ., n} of abstraction sets, the CNF formula DEF(AB) = ⋀_{i=1}^{n} D_i is the conjunction of the definitions in AB, OUTS(AB) = ⋃_{i=1}^{n} out_i is the set of outputs occurring in AB, and INPUTS(AB) = ⋃_{i=1}^{n} in_i is the set of inputs occurring in AB. We say that AB is feasible for a MaxSAT instance F = (F, O) if DEF(AB) does not change the set of solutions to F, i.e., if every solution τ to F can be extended to a solution τ_E ⊇ τ to F ∪ DEF(AB). We will only consider collections of abstraction sets that are feasible for the MaxSAT instances at hand.
An abstract core of a MaxSAT instance F = (F, O) is a clause that is logically entailed by F and the definitions of some feasible collection of abstraction sets. Importantly, an abstract core can contain both objective variables and outputs of abstraction sets. Every (standard) core of a MaxSAT instance F is also an abstract core of F with respect to any collection of feasible abstraction sets.
Example 5. Consider the MaxSAT instance F = (F, O) from Example 1 and the collection AB of abstraction sets corresponding to the cardinality constraints introduced by OLL in Example 3. We have that C = (o_2 ∨ b_3) is an abstract core of F, as any assignment that satisfies F ∪ DEF(AB) also satisfies C.

Note how the abstract core C in Example 5 corresponds to the core C_2 in Example 3. This demonstrates how cores of the reformulated instance extracted by OLL can be viewed as abstract cores of the original instance.

The UniMaxSAT framework is based on computing minimum-cost solutions to abstract cores and extending them to a solution to the MaxSAT instance at hand. To differentiate solutions to an input MaxSAT instance from solutions to cores, we call solutions to a set of abstract cores candidate solutions (or candidates for short). More precisely, for a MaxSAT instance F = (F, O), a collection AB of feasible abstraction sets and a set K of abstract cores, an assignment δ that satisfies K ∪ DEF(AB) and assigns each variable in var(K) ∪ var(O) is a (K, AB)-candidate; a minimum-cost (K, AB)-candidate is one of minimum cost O(δ) among all (K, AB)-candidates. Abstraction sets and abstract cores are employed in the UniMaxSAT framework for computing lower bounds (which are in turn used to prove the optimality of solutions) based on the following proposition.
Proposition 1. Let F = (F, O) be a MaxSAT instance, AB a set of feasible abstraction sets, K a set of abstract cores of F wrt AB, and δ a minimum-cost (K, AB)-candidate. Then O(δ) ≤ opt(F).
Proof. Consider an arbitrary solution τ to F. By feasibility of AB there is an extension τ_E ⊇ τ that satisfies F ∪ DEF(AB). As every abstract core in K is entailed by F ∪ DEF(AB), the extension τ_E is a (K, AB)-candidate, and hence O(δ) ≤ O(τ_E) = O(τ).

A simple corollary to Proposition 1 is that any assignment τ that extends a minimum-cost (K, AB)-candidate δ and satisfies F is an optimal solution to F. The UniMaxSAT framework we develop works intuitively by iteratively computing minimum-cost (K, AB)-candidates for increasing sets AB of feasible abstraction sets and K of abstract cores, respectively, and checking whether they can be extended to solutions to the whole instance. Each check either determines that an optimal solution has been found or provides a new abstract core that is falsified by the current (K, AB)-candidate. In the latter case, the new abstract core is an explanation for why the current (K, AB)-candidate cannot be extended to an optimal solution of the instance at hand. While this is similar to a correctness proof for basic IHS search (see, e.g., Davies & Bacchus, 2011), the generality of UniMaxSAT allows for capturing also other types of SAT-based MaxSAT algorithms. Intuitively, the ability of UniMaxSAT to simulate core-guided algorithms follows from (i) the use of abstract cores and (ii) the fact that UniMaxSAT rules out not only complete (K, AB)-candidates but also partial assignments that extend solely to minimum-cost (K, AB)-candidates.
The following notion of a (minimum-cost) (K, AB)-abstract candidate is central for establishing the correctness and generality of UniMaxSAT.
Definition 3. Let F = (F, O) be a MaxSAT instance, AB a collection of feasible abstraction sets, and K a set of abstract cores with respect to AB. A partial assignment γ_A over a subset of the variables in var(K) ∪ var(O) is a (K, AB)-abstract candidate if (i) there is at least one extension τ ⊇ γ_A which is a solution to DEF(AB) ∪ K, i.e., a (K, AB)-candidate of F, and (ii) all such extensions are minimum-cost (K, AB)-candidates.
Example 6. Consider the MaxSAT instance F = (F, O) from Example 1, the empty collection AB = ∅ of abstraction sets and a set K of cores of F.

An important insight is that the assumptions enforced during the iterations of a core-guided algorithm can be seen as (K, AB)-abstract candidates for the set AB of abstraction sets that corresponds to the cardinality constraints added by the core-guided algorithm and the set K of cores of the reformulated instance extracted by the algorithm. The following example illustrates this for the OLL algorithm (we will detail other core-guided algorithms in Section 5).

Example 7. Recall the MaxSAT instance F = (F, O) from Example 1. The assignment γ^A considered here is a (K, AB)-abstract candidate since it can be extended to a solution to K ∪ DEF(AB) by assigning exactly one literal in {b_1, b_2, b_4} to 1 and the rest to 0. Note that γ^A is exactly the set of assumptions that Extract-Core is queried under during the second iteration of the OLL invocation detailed in Example 3.

UniMaxSAT in Detail
With the necessary preliminaries in place, we now detail the UniMaxSAT framework. A high-level view of the framework is shown in Figure 1, and the framework is detailed in pseudo-code as Algorithm 4 (UniMaxSAT, a unifying framework for SAT-based MaxSAT algorithms; input: a MaxSAT instance F = (F, O), output: an optimal solution τ* to F; its termination test returns τ once res = 'true' and O(τ) = lb_i). Given a MaxSAT instance F = (F, O) as input, UniMaxSAT outputs an optimal solution to F.^5 UniMaxSAT accumulates two sets, AB and K, of abstraction sets and abstract cores, respectively. In each iteration, the Optimize subroutine computes an assignment γ^A over OUTS(AB) ∪ var(O) and a lower bound lb for the optimal cost of F. The subroutine Extract-AbstractCore is invoked to check for an extension of γ^A to a solution to F. If such an extension τ exists (i.e., if res = 'true'), UniMaxSAT checks if the cost of τ matches lb. If this is the case, UniMaxSAT terminates and returns τ as an optimal solution. Otherwise, a new abstract core falsified by γ^A is obtained and added to K.
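The iteration just described can be sketched as a short Python loop. The clause and weight representations, the brute-force stand-ins for Optimize and Extract-AbstractCore, and all function names below are illustrative assumptions (a real implementation uses an incremental SAT solver and, e.g., an IP-based optimizer); the sketch instantiates the framework with AB = ∅, i.e., without abstraction sets.

```python
from itertools import product

def satisfies(assign, clause):
    # clause is a list of ints: v means "variable v is 1", -v means "variable v is 0"
    return any(assign.get(abs(lit), False) == (lit > 0) for lit in clause)

def extract_abstract_core(clauses, gamma):
    # Brute-force stand-in for an assumption-based SAT call: either extend
    # gamma to a solution of the hard clauses, or report a clause that is
    # satisfied by all solutions and falsified by gamma (a core).
    variables = sorted({abs(l) for c in clauses for l in c} | set(gamma))
    free = [v for v in variables if v not in gamma]
    for bits in product([False, True], repeat=len(free)):
        tau = {**gamma, **dict(zip(free, bits))}
        if all(satisfies(tau, c) for c in clauses):
            return True, tau
    # no extension exists, so the negation of gamma holds in every solution
    return False, [(-v if val else v) for v, val in gamma.items()]

def optimize(cores, weights):
    # Minimum-cost assignment of the objective variables satisfying all
    # accumulated cores, returned together with its cost as the lower bound.
    objs = sorted(weights)
    best = None
    for bits in product([False, True], repeat=len(objs)):
        gamma = dict(zip(objs, bits))
        if all(satisfies(gamma, c) for c in cores):
            cost = sum(w for v, w in weights.items() if gamma[v])
            if best is None or cost < best[1]:
                best = (gamma, cost)
    return best

def unimaxsat(clauses, weights):
    # Main loop of the framework with AB = ∅: this Optimize always returns an
    # abstract candidate, so a SAT answer matching the lower bound is optimal.
    cores = []
    while True:
        gamma, lb = optimize(cores, weights)
        res, out = extract_abstract_core(clauses, gamma)
        if res:
            cost = sum(w for v, w in weights.items() if out.get(v, False))
            if cost == lb:
                return out, cost
        else:
            cores.append(out)  # new core falsified by gamma
```

On the toy instance with hard clauses (b1 ∨ b2) and (b2 ∨ b3) and objective b1 + 2·b2 + b3, the loop terminates with an optimal solution of cost 2.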
To ensure termination, we require that the γ^A returned by Optimize must correspond to a (K, AB)-abstract candidate sufficiently often. Furthermore, when γ^A is a (K, AB)-abstract candidate, the lower bound lb returned by Optimize must equal the cost of its extensions. Since all such extensions are minimum-cost, it follows that, whenever Optimize computes a (K, AB)-abstract candidate, the lower bound Optimize returns is as high as possible in terms of the currently accumulated set of cores. Note that Optimize does not need to identify that the assignment it computes is a (K, AB)-abstract candidate. (The identification of (K, AB)-abstract candidates could be computationally challenging for many practical instantiations and is in fact not required for the correctness of UniMaxSAT.) We formalize the correctness of Algorithm 4 in the following terms. UniMaxSAT terminates on any MaxSAT instance and outputs an optimal solution to the input MaxSAT instance at hand, subject to the generic properties of its three subroutines. Importantly, the correctness of the general framework allows for establishing the correctness of any of its instantiations (including variants of objective-bounding, core-guided and IHS algorithms for MaxSAT) by showing how each algorithm can be viewed as an instantiation of UniMaxSAT.

5. We note that the framework as presented here is a significant modification of the framework described in the preliminary version of this article (Ihalainen et al., 2023). In particular, the correctness of the present version does not require computing an abstract candidate in every iteration, only that each iteration is succeeded by another one on which an abstract candidate is computed. Compared to the preliminary version, this modification allows not only to further capture objective-bounding search algorithms but also to more directly capture core-guided search algorithms that do not compute abstract candidates in every iteration.
First, we establish general conditions that instantiations of Optimize need to meet in order to guarantee that UniMaxSAT correctly computes an optimal solution to an arbitrary input MaxSAT instance.
Definition 4 (Correctness condition). An instantiation of Optimize satisfies the correctness condition if the following conditions hold at every iteration i of UniMaxSAT when invoked on an arbitrary input MaxSAT instance F = (F, O).

1. γ^A_i assigns a subset of the variables in var(O) ∪ OUTS(AB_i).

2. lb_i is a lower bound on the optimal cost of F, i.e., lb_i ≤ opt(F).

3. If γ^A_i is a (K_i, AB_i)-abstract candidate, then lb_i is equal to the cost of an extension of γ^A_i to a (K_i, AB_i)-candidate of F.

4. There exists an r ≥ 0 such that Optimize returns a (K_{i+r}, AB_{i+r})-abstract candidate in iteration i + r.
In words, condition 1 ensures that in the iterations in which Extract-AbstractCore determines the instance to be unsatisfiable, an abstract core of the instance is obtained. Condition 2 ensures that UniMaxSAT does not terminate before finding an optimal solution, while conditions 3 and 4 ensure that termination takes place eventually. The following main theorem formalizes this intuition and establishes generic conditions that the other subroutines of UniMaxSAT need to satisfy to ensure the correctness of UniMaxSAT.
Theorem 1. Assume that the following three properties hold in every iteration i of UniMaxSAT on an input MaxSAT instance F = (F, O) that has a solution.

1. Optimize satisfies the correctness condition (Definition 4).

2. Extract-AbstractCore(F, DEF(AB_i), γ^A_i) computes either a solution τ ⊇ γ^A_i to F ∪ DEF(AB_i) or a(n abstract) core C_i that is satisfied by all solutions to F ∪ DEF(AB_i) and falsified by γ^A_i.

3. AB_i is feasible for F.
Then UniMaxSAT terminates and returns an optimal solution to F.
The formal proof of Theorem 1 relies on the following lemma stating that, in each iteration i in which a (K_i, AB_i)-abstract candidate of F is computed for the current sets AB_i and K_i of abstraction sets and abstract cores, respectively, the set of assignments from which Optimize will return an assignment shrinks.
Lemma 1. Invoke UniMaxSAT on a MaxSAT instance F = (F, O) and consider an iteration i. Let K_i and AB_i be the sets of abstract cores and abstraction sets obtained so far, respectively, and denote by obj-sols_i the restrictions of all solutions to K_i ∪ DEF(AB_i) onto var(O). Assume UniMaxSAT does not terminate on iteration i and Optimize computes a (K_i, AB_i)-abstract candidate of F. Then obj-sols_{i+1} ⊊ obj-sols_i.
Proof. The fact that every element in obj-sols_{i+1} is also an element of obj-sols_i follows from the fact that the sets of abstract cores and abstraction sets monotonically increase during the execution of UniMaxSAT. To show that there is a τ_o ∈ obj-sols_i \ obj-sols_{i+1}, consider the (K_i, AB_i)-abstract candidate γ^A_i and the abstract core C_i computed in iteration i. By definition, there is a minimum-cost (K_i, AB_i)-candidate δ ⊇ γ^A_i ⊇ ¬C_i that falsifies C_i. Let τ_o be the restriction of δ onto the objective variables var(O). The claim of the lemma is equivalent to the claim that there is no extension of τ_o to a solution to DEF(AB_i) that satisfies C_i. As AB_i is feasible, there is exactly one way of extending τ_o to a solution to DEF(AB_i). Since δ is such an extension and ¬C_i ⊆ δ, we conclude that τ_o cannot be extended to a solution to DEF(AB_i) that satisfies C_i.
We are now ready to give a proof of Theorem 1.
Proof of Theorem 1. First note that by assumption 2 of the theorem and condition 1 of the correctness condition, whenever Extract-AbstractCore reports that the current assignment γ^A_i is not extendable to a solution on Line 4 of Algorithm 4, the clause C_i returned by Extract-AbstractCore is an abstract core of the instance at hand with respect to the current set of abstraction sets. As the sets of abstraction sets monotonically increase, AB_i ⊆ AB_{i+1} holds for all i during the execution of UniMaxSAT. We conclude that, in each iteration i of UniMaxSAT, all clauses in K_i are abstract cores of F wrt AB_i.
Optimality of returned solutions. Assume that UniMaxSAT terminates in iteration i and returns a solution τ. On termination O(τ) = lb_i, and lb_i ≤ opt(F) by the correctness condition; since τ is a solution to F we also have O(τ) ≥ opt(F), and hence O(τ) = opt(F), i.e., τ is an optimal solution.

Termination. Given that F has solutions and AB_i is feasible, F ∪ DEF(AB_i) has a solution for each i. By the definition of abstract cores, all solutions to F ∪ DEF(AB_i) are solutions to DEF(AB_i) ∪ K_i. Thus termination follows by showing that Optimize will eventually return a (K_i, AB_i)-abstract candidate γ^A_i that can be extended to a solution τ to F ∪ DEF(AB_i). This in turn follows from the number of solutions to DEF(AB_i) ∪ K_i, in particular, from the fact that each new core rules out at least one of the (finitely many) solutions that Optimize may return. More specifically, whenever Optimize returns a (K_i, AB_i)-abstract candidate and UniMaxSAT does not terminate, the number of assignments to var(O) that can be extended to solutions to the cores decreases by Lemma 1. As Optimize satisfies the correctness condition, the sequence of iterations in which it returns (K_i, AB_i)-abstract candidates cannot end as long as UniMaxSAT runs. This implies that eventually a (K_i, AB_i)-abstract candidate can be extended to a solution to F ∪ DEF(AB_i).
Figure 2 provides a road map for Sections 5-7. We start with general observations on the extraction of abstract cores and feasibility of abstraction sets in Section 5.1, and then proceed in Sections 5.2-5.3 by detailing how IHS, IHS with abstract cores, and objective-bounding search algorithms can be viewed as direct instantiations of UniMaxSAT in a way that satisfies the assumptions of Theorem 1. Section 6 is dedicated to viewing core-guided algorithms via UniMaxSAT. For capturing the core-guided algorithms, we will define a generic core-guided instantiation of UniMaxSAT (Definition 5) that captures general properties of core-guided algorithms sufficient for obtaining correct instantiations of UniMaxSAT (as established by Theorem 2). We further introduce the notion of a cardinality-based CG instantiation of UniMaxSAT (Definition 6) that intuitively models core-guided algorithms that add a single cardinality constraint for each extracted core into the instance. We will show that all cardinality-based CG instantiations are also core-guided instantiations (Theorem 3), thereby establishing that cardinality-based CG instantiations are also correct instantiations of UniMaxSAT that satisfy Theorem 1. Turning to individual existing core-guided algorithms, we will show in Section 6.3 that the OLL, PMRES, K, WPM3 and MSU3 algorithms are cardinality-based CG instantiations of UniMaxSAT, thereby also establishing the correctness of each of the algorithms in terms of Theorem 1. Finally, to further demonstrate the usefulness of the hierarchy depicted in Figure 2 for defining new algorithms, we detail the novel AbstCG algorithm in Section 7 and show that it is a core-guided instantiation of UniMaxSAT but not a cardinality-based CG instantiation.

Extraction of Abstract Cores and Feasibility of Abstraction Sets
For all specific instantiations of UniMaxSAT we discuss in Sections 5.2, 5.3, 6, and 7, the Extract-AbstractCore subroutine of UniMaxSAT is assumed to be a core-extracting SAT solver. Given a MaxSAT instance F = (F, O), a feasible collection AB of abstraction sets and an assignment γ^A, Extract-AbstractCore invokes a SAT solver on F ∪ DEF(AB) under the assumptions γ^A. The search returns either a solution to F ∪ DEF(AB) that extends γ^A, or a clause entailed by F ∪ DEF(AB) that is falsified by γ^A. Assumption 2 of Theorem 1 on the Extract-AbstractCore subroutine follows directly from established properties of incremental SAT solvers (Eén & Sörensson, 2003; Audemard et al., 2013) instantiating the CDCL SAT solving paradigm (Silva & Sakallah, 1999; Zhang et al., 2001; Marques-Silva et al., 2021).
The feasibility of all abstraction sets computed, i.e., assumption 3 of Theorem 1, follows from the fact that the definitions of every new abstraction set computed only intersect with the MaxSAT instance and previous abstraction sets on the inputs. More precisely, the abstraction sets computed in every instantiation of Add-AbstractionSets we consider satisfy the assumptions of the following lemma. With these considerations, we will from now on assume all abstraction sets to be feasible and Extract-AbstractCore to be instantiated with a SAT solver.

Capturing IHS with UniMaxSAT
UniMaxSAT gives the (basic) IHS (Algorithm 3) through the following instantiation. Instantiate Add-AbstractionSets to never add any abstraction sets, i.e., so that AB_i = ∅ for all i. Further, instantiate Optimize as a procedure that, given a set K of cores, returns the tuple (γ^A, lb) where γ^A = {x̄ | x ∈ var(O) \ Mincost-HS(K)} is an assignment that sets all literals in the objective to 0 except the ones in the most recent minimum-cost hitting set Mincost-HS(K) over K. The value of lb is the sum of the coefficients of the variables in the minimum-cost hitting set. The correctness of Algorithm 3 now follows by Theorem 1 by observing that the only extension of γ^A to a (K, ∅)-candidate is δ = {x̄ | x ∈ var(O) \ Mincost-HS(K)} ∪ {x | x ∈ Mincost-HS(K)}; this candidate is minimum-cost and has O(δ) = lb. Hence γ^A is a (K, ∅)-abstract candidate and lb a lower bound on the optimal cost by Proposition 1.
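This instantiation of Optimize can be sketched as follows; the brute-force Mincost-HS (practical IHS solvers use an IP solver for this step) and the representation of cores as lists of objective variables and of O as a weight dict are assumptions for the sake of illustration.

```python
from itertools import combinations

def mincost_hitting_set(cores, weights):
    # Brute-force Mincost-HS: a cheapest set of objective variables
    # intersecting every accumulated core.
    objs = sorted(weights)
    best = None
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            hs = set(subset)
            if all(hs & set(core) for core in cores):
                cost = sum(weights[v] for v in hs)
                if best is None or cost < best[1]:
                    best = (hs, cost)
    return best

def ihs_optimize(cores, weights):
    # gamma sets every objective variable outside the hitting set to 0 and
    # leaves the hitting-set variables unassigned; lb is the cost of the set.
    hs, lb = mincost_hitting_set(cores, weights)
    gamma = {v: False for v in weights if v not in hs}
    return gamma, lb
```

For cores {b1, b2} and {b2, b3} under weights 1, 3, 1, the cheapest hitting set is {b1, b3} of cost 2, so only b2 is assumed to be 0.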

Abstract cores. UniMaxSAT gives IHS enhanced with abstract cores by instantiating Add-AbstractionSets to (heuristically) compute abstraction sets (in, D, out), where the inputs in = {x_1, . . ., x_n} are a subset of n objective variables that all have the same coefficient in O, out = {o_1, . . ., o_n} is a set of n new variables, and D contains CNF definitions of the outputs in terms of the inputs. Informally speaking, the outputs of the added abstraction sets count the number of inputs assigned to 1 in all satisfying assignments. The instantiation of the Optimize subroutine first computes a minimum-cost solution γ to K ∪ DEF(AB) that assigns all variables in var(O) ∪ OUTS(AB), and then returns either γ^A_1 as the restriction of γ onto the objective variables var(O) assigned to 0, or γ^A_2 as the restriction of γ onto the outputs of the current abstraction sets and the objective variables that are not inputs to any abstraction set assigned to 0. More precisely, if INPUTS(AB) is the set of all variables that are inputs to some abstraction set, then γ^A_2 assigns to 0 all variables in OUTS(AB) ∪ (var(O) \ INPUTS(AB)) that are assigned to 0 by γ and does not assign any other variables. In both cases, Optimize also returns the cost of γ as lb.
The correctness of IHS enhanced with abstract cores now follows by Theorem 1 by showing that Optimize satisfies the correctness condition. This in turn follows directly from showing that both γ^A_1 and γ^A_2 are (K, AB)-abstract candidates, since then γ is a minimum-cost (K, AB)-candidate, which by Proposition 1 implies that lb = O(γ) is a lower bound on the optimal cost. The fact that γ^A_1 is a (K, AB)-abstract candidate follows since the only extension of γ^A_1 to a solution to K ∪ DEF(AB) is γ. Further, γ^A_2 is a (K, AB)-abstract candidate since (i) γ is an extension of γ^A_2 to a solution to K ∪ DEF(AB) and (ii) every such extension sets equally many inputs of each abstraction set to 1, thus incurring exactly the same cost (since the inputs to each abstraction set have the same coefficient in O).
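The intended semantics of the counting outputs can be stated compactly. The function below is only a semantic sketch (in a solver, D would be a CNF cardinality encoding such as a totalizer), and its names and the dict representation of assignments are assumptions.

```python
def abstraction_outputs(tau, inputs):
    # Outputs o_1, ..., o_n of an abstraction set (in, D, out) "sort" the
    # inputs: o_j is 1 exactly when at least j inputs are assigned 1, so the
    # number of outputs set to 1 equals the number of inputs set to 1.
    k = sum(1 for x in inputs if tau[x])
    return [j <= k for j in range(1, len(inputs) + 1)]
```

For example, with two of three inputs set to 1, exactly the first two outputs are 1.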

Capturing Objective-Bounding Search with UniMaxSAT
To capture objective-bounding search by UniMaxSAT, Add-AbstractionSets is instantiated to compute a single abstraction set ab = (var(O), D, out), where all of the variables in the objective occur as inputs, there is an output o_w for every w = 1, . . ., W, where W is the sum of coefficients of O = Σ_i w_i b_i, and where the definitions, informally speaking, count the sum of coefficients of objective variables set to 1 by satisfying assignments, i.e., the costs of satisfying assignments. The instantiations of Optimize for capturing different objective-bounding search algorithms follow by noting that the cost of any solution τ to F ∪ DEF({ab}) can be read from the outputs of ab: τ(o_k) = 0 if and only if O(τ) < k holds for all solutions τ to F ∪ DEF({ab}). Similarly, abstract cores of this instance map to lower bounds on the optimal cost: the unit clause (o_k) is an abstract core if and only if k ≤ opt(F). Thus solution-improving, UNSAT-SAT, binary, and progression-based search are all obtained by an instantiation of Optimize that returns assignments that set a single output variable to 0 and a lower bound equal to the largest index w for which (o_w) has been determined to be an abstract core. Termination occurs once the Extract-AbstractCore subroutine has determined (o_opt(F)) to be an abstract core and {ō_opt(F)+1} a (K, {ab})-abstract candidate for the set K of cores obtained so far.
As a concrete example, to capture solution-improving search, Optimize returns in the first iteration the assignment (ō_W) and lb = 0. In subsequent iterations Optimize returns (ō_O(τ)) and lb = 0, where O(τ) is the cost of the latest solution computed by Extract-AbstractCore; the assumption ō_O(τ) asks for a solution of cost strictly below O(τ). When Extract-AbstractCore reports unsatisfiability, an abstract core (o_w) is obtained. Correctness of solution-improving search in terms of Theorem 1 follows from the fact that the assignment γ^A = (ō_{w+1}) is a ({(o_w)}, {ab})-abstract candidate the extensions of which have cost lb = w = opt(F).
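Under these conventions, solution-improving search can be sketched as follows; the brute-force oracle (which imposes the bound "cost < ub" directly instead of materializing the abstraction set ab) and all names are assumptions for illustration.

```python
from itertools import product

def solve_below(clauses, weights, ub):
    # Brute-force stand-in for a SAT call under the assumption that output
    # o_ub is 0, i.e., that the cost of the solution is strictly below ub.
    variables = sorted({abs(l) for c in clauses for l in c} | set(weights))
    for bits in product([False, True], repeat=len(variables)):
        tau = dict(zip(variables, bits))
        sat = all(any(tau[abs(l)] == (l > 0) for l in c) for c in clauses)
        cost = sum(w for v, w in weights.items() if tau[v])
        if sat and cost < ub:
            return tau, cost
    return None  # "unsatisfiable": the abstract core (o_ub) is obtained

def solution_improving(clauses, weights):
    ub = sum(weights.values()) + 1  # no effective bound on the first call
    best = None
    while True:
        res = solve_below(clauses, weights, ub)
        if res is None:
            return best  # the last solution found is optimal
        best = res
        ub = res[1]  # demand a strictly better solution next time
```

On the toy instance with hard clauses (b1 ∨ b2) and (b2 ∨ b3) and objective b1 + 2·b2 + b3, the search terminates with cost 2.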

Capturing Core-Guided Search with UniMaxSAT
We turn to detailing how various modern core-guided algorithms can be viewed as instantiations of UniMaxSAT. Key to viewing any core-guided algorithm through UniMaxSAT is to view the cardinality constraints a core-guided algorithm introduces as abstraction sets, and the reformulation of the objective as an implicit computation of a (K, AB)-abstract candidate for the instance, where K and AB are the sets of abstract cores and abstraction sets computed so far, respectively.

Capturing Generic Properties of Core-Guided Algorithms
Compared to the IHS and objective-bounding search algorithms, significantly more variants of core-guided algorithms have been proposed, differing in the specifics of how the core-relaxation steps are performed. We will in the following identify properties of core-relaxation steps shared by all core-guided algorithms we consider. These general properties allow for a more generic proof of correctness for core-guided algorithms via viewing the algorithms as instantiations of UniMaxSAT.
Definition 5. Consider the ith iteration of UniMaxSAT when invoked on a MaxSAT instance F = (F, O). Let AB and K be the accumulated sets of abstraction sets and abstract cores, respectively. We say that an instantiation of UniMaxSAT is a core-guided instantiation if the following properties hold.
• Extract-AbstractCore is a core-extracting SAT solver and Add-AbstractionSets introduces feasible abstraction sets.
• Optimize maintains a reformulated objective function O^R and in each iteration returns as assumptions the assignment {x̄ | x ∈ var(O^R)} that sets all of the variables in O^R to 0. It also returns the constant term of O^R as the lower bound on the optimal cost.
• In each iteration we have O^R(τ) = O(τ) for any solution τ to DEF(AB) ∪ K.
• Given a core, Optimize reformulates O^R in a way that increases the constant term of O^R.
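For concreteness, the OLL-style objective reformulation used by several of the algorithms discussed later can be sketched as follows; the dict representation of O^R and the function and variable names are assumptions, and the |core| − 1 output variables would be introduced by the accompanying abstraction set.

```python
def oll_reformulate(obj, const, core, outputs):
    # obj: mapping from objective variable to coefficient; const: constant
    # term of the reformulated objective O^R; core: objective variables of
    # the extracted core; outputs: the |core| - 1 fresh output variables.
    w = min(obj[x] for x in core)  # weight removed from every core literal
    new_obj = dict(obj)
    for x in core:
        new_obj[x] -= w
        if new_obj[x] == 0:
            del new_obj[x]  # fully reformulated literals leave O^R
    for o in outputs:
        new_obj[o] = new_obj.get(o, 0) + w  # counting outputs enter O^R
    return new_obj, const + w  # constant term (the lower bound) grows by w
```

On the objective b1 + 2·b2 + b3 and the core {b1, b2, b3}, the minimum weight 1 moves to the two outputs and the constant term, so the new lower bound is 1.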
We will now show that any core-guided instantiation of UniMaxSAT correctly computes optimal solutions to MaxSAT instances.
Theorem 2. For any input MaxSAT instance F that has a solution, any core-guided instantiation of UniMaxSAT terminates and returns an optimal solution to F.
We prove Theorem 2 by showing that a core-guided instantiation of UniMaxSAT satisfies the assumptions of Theorem 1. The non-trivial part of the proof deals with arguing that Optimize satisfies the correctness condition (Definition 4).
In the following lemmas, we consider the ith iteration of a core-guided instantiation of UniMaxSAT when invoked on a MaxSAT instance F = (F, O). Let AB_i and K_i be the sets of abstraction sets and abstract cores collected, respectively; O^R_i the reformulated objective maintained by Optimize; and W^lb_i the constant term of O^R_i. The following two observations follow directly from the definition of core-guided instantiations.

Observation 1. W^lb_i is a lower bound on opt(F).

Observation 2. For any solution γ to DEF(AB_i) ∪ K_i, we have O(γ) = O^R_i(γ) ≥ W^lb_i.

We next show that the assignment γ^A_{i+r} = {x̄ | x ∈ var(O^R_{i+r})} computed by Optimize will be a (K_{i+r}, AB_{i+r})-abstract candidate in iteration i + r for some r ≥ 0. We first establish that whenever γ^A_i can be extended to a solution to DEF(AB_i) ∪ K_i, any extension of γ^A_i will be minimum-cost.

Lemma 3. Assume that the assignment γ^A_i = {x̄ | x ∈ var(O^R_i)} can be extended to a solution to DEF(AB_i) ∪ K_i. Then any such extension is a minimum-cost (K_i, AB_i)-candidate.
Proof. Consider an extension τ of γ^A_i to a solution to DEF(AB_i) ∪ K_i. We show that τ is minimum-cost. Since τ(x) = 0 for all x ∈ var(O^R_i), we have that O^R_i(τ) = W^lb_i. By the definition of core-guided instantiations, this implies O(τ) = W^lb_i. The fact that τ is minimum-cost follows from observing that O(δ) = O^R_i(δ) ≥ W^lb_i holds for all solutions δ to DEF(AB_i) ∪ K_i.
Next we show that the assignment returned by Optimize can be extended to a solution of the abstract cores and the definitions at some future iteration.
Lemma 4. Assume F has a solution. There is an r ≥ 0 such that the assignment γ^A_{i+r} = {x̄ | x ∈ var(O^R_{i+r})} can be extended to a solution to DEF(AB_{i+r}) ∪ K_{i+r}.
Proof. Assume for contradiction that γ^A_k cannot be extended to a solution to DEF(AB_k) ∪ K_k for any k ≥ i. Then in each iteration k ≥ i Extract-AbstractCore will return a core. Let τ* be an optimal solution to F and, for a fixed k, τ^E_k its extension to a solution of DEF(AB_k) ∪ K_k. By the properties of core-guided instantiations, we have O^R_k(τ^E_k) = O(τ^E_k) = opt(F) for each k ≥ i. Furthermore, the constant term W^lb of the reformulated objective increases in each iteration when a core is obtained. Thus eventually for some iteration k′ we have

W^lb_{k′} > opt(F) = O^R_{k′}(τ^E_{k′}) ≥ W^lb_{k′},

contradicting the initial assumption.
A simple corollary of Lemmas 3 and 4 is that Optimize is guaranteed to compute an abstract candidate in some future iteration.
Corollary 1. If F has a solution, then Optimize returns a (K_{i+r}, AB_{i+r})-abstract candidate in iteration i + r for some integer r ≥ 0.
Proof. By Lemma 3 it suffices to show that there is an r ≥ 0 such that the assignment γ^A_{i+r} = {x̄ | x ∈ var(O^R_{i+r})} can be extended to a solution to DEF(AB_{i+r}) ∪ K_{i+r}. This is established by Lemma 4.
Finally, the proof of Theorem 2 follows from the previous statements.
Proof of Theorem 2. Observations 1 and 2 together with Corollary 1 imply that Optimize satisfies the correctness condition (assumption 1 of Theorem 1). Assumptions 2 and 3 of Theorem 1 follow directly from the definition of core-guided instantiations.
In summary, Theorem 2 provides an alternative way of establishing the correctness of a range of core-guided algorithms by arguing that the core-guided algorithm at hand is a core-guided instantiation.
We also establish a similar result for a specific case of core-guided instantiations which we will refer to as cardinality-based CG instantiations. The notion of cardinality-based CG instantiations of UniMaxSAT captures in particular core-guided algorithms which introduce a single cardinality constraint over each extracted core.

Definition 6. We say that an instantiation of UniMaxSAT is a cardinality-based CG instantiation if the following conditions hold.
(i) Extract-AbstractCore is a core-extracting SAT solver.
(ii) Given a core C as input, Add-AbstractionSets computes an abstraction set ab = (in, D, out) for which the following hold.

- The set of variables of D intersects the set of previous variables only on the inputs, i.e., var(D) ∩ var(F ∪ DEF(AB)) ⊆ in.

- The number of outputs is one less than the number of variables in the core, i.e., |out| = |C| − 1.

- For any solution τ to D ∪ {C}, we have Σ_{x∈C} τ(x) = 1 + Σ_{o∈out} τ(o).

Here K and AB are the set of cores computed and abstraction sets introduced by the iteration in which C is obtained. We establish that cardinality-based CG instantiations of UniMaxSAT are a special case of core-guided instantiations.

Theorem 3. Any cardinality-based CG instantiation of UniMaxSAT is a core-guided instantiation of UniMaxSAT.
Note that this implies that any cardinality-based CG instantiation will return an optimal solution to any MaxSAT instance by Theorem 2.
Proof. The non-trivial part of the proof is to argue that in each iteration i, we have O^R_i(τ) = O(τ) for any solution τ to DEF(AB_i) ∪ K_i, i.e., the definitions of the abstraction sets added and cores computed by iteration i. The proof is by induction on the iteration i. The base case i = 1 directly follows from O^R = O. Now assume that the statement holds in iteration i − 1. Let O^R_i be the reformulated objective in the beginning of iteration i and assume without loss of generality that an abstract core C_i is extracted in iteration i. Let ab_{i+1} = (in_{i+1}, D_{i+1}, out_{i+1}) be the abstraction set computed by Add-AbstractionSets on input C_i. Then O^R_{i+1}(τ) = O^R_i(τ) holds for any solution τ to DEF(AB_{i+1}) ∪ K_{i+1} (*), which together with the induction hypothesis yields the claim. Here (*) follows by the fact that Add-AbstractionSets fits the definition of a cardinality-based CG instantiation stating that Σ_{x∈C_i} τ(x) = 1 + Σ_{x∈out_{i+1}} τ(x), which implies w_{C_i} Σ_{x∈C_i} τ(x) = w_{C_i} + w_{C_i} Σ_{x∈out_{i+1}} τ(x).
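The counting property used in this proof can be checked pointwise; the following helper is a sketch under an assumed dict representation of assignments (with 0/1 values), and its names are hypothetical.

```python
def cardinality_condition_holds(tau, core, outputs):
    # In a solution tau, the number of core variables assigned 1 must exceed
    # the number of outputs assigned 1 by exactly one; this is the property
    # ensuring that the reformulated objective preserves costs.
    return sum(tau[x] for x in core) == 1 + sum(tau[o] for o in outputs)
```

For example, a solution setting two core variables and one output to 1 satisfies the property, while one setting a single core variable and one output does not.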

Contrasting Core-Guided and IHS Algorithms through UniMaxSAT
Before moving on to concretely capturing the core-relaxation steps of individual core-guided algorithms, we note that the definition of core-guided instantiations of UniMaxSAT allows for observing an interesting contrast between core-guided and IHS algorithms. In particular, in contrast to IHS, the cores extracted by core-guided instantiations of UniMaxSAT always refute all possible minimum-cost solutions to the cores accumulated by that iteration.
Proposition 2. Consider the ith iteration of a core-guided instantiation of UniMaxSAT invoked on a MaxSAT instance F = (F, O). Let K_i and AB_i be the sets of abstract cores and abstraction sets accumulated by the beginning of iteration i, respectively. Assume that Optimize returns a (K_i, AB_i)-abstract candidate and Extract-AbstractCore a core C_i. Let then γ^A be any (K_i, AB_i)-abstract candidate of F. Then every extension of γ^A to a minimum-cost (K_i, AB_i)-candidate falsifies C_i.

Less formally, Proposition 2 states that whenever a core-guided instantiation of UniMaxSAT extracts a core falsified by a (K, AB)-abstract candidate γ of F, where K and AB are a set of cores and abstraction sets, respectively, the core refutes not only γ but all possible (K, AB)-abstract candidates, and thereby all minimum-cost (K, AB)-candidates. In contrast, the following example demonstrates that cores extracted by IHS over an abstract candidate do not necessarily refute all possible abstract candidates.
6. Recall that IHS simulated in UniMaxSAT does not add abstraction sets.

Capturing Algorithm-Specific Core Relaxations of Core-Guided Algorithms
Having established general conditions for the correctness of core-guided algorithms, we move on to detailing how the algorithm-specific core relaxations of modern core-guided algorithms can be viewed as cardinality-based CG instantiations of UniMaxSAT. Specifically, we detail this individually for OLL (Andres et al., 2012; Morgado et al., 2014), WPM3 (Ansótegui & Gabàs, 2017), MSU3 (Marques-Silva & Planes, 2007), PMRES (Narodytska & Bacchus, 2014) and K (Alviano et al., 2015) as key representatives of modern core-guided algorithms. More precisely, as the definition of cardinality-based CG instantiations prescribes how Extract-AbstractCore and Optimize are instantiated in each of these algorithms, we now detail how to instantiate Add-AbstractionSets in ways that fit Lemma 2 and Definition 6, and at the same time match the core-relaxation step of the individual algorithms.

OLL
The OLL algorithm (recall Section 3.2) is viewed as a cardinality-based CG instantiation of UniMaxSAT by instantiating Add-AbstractionSets to introduce, given a core C as input, the abstraction set ab_C matching the OLL core relaxation. For any reasonable encoding of the cardinality constraint, ab_C clearly satisfies condition (ii) of Definition 6. Hence OLL is a cardinality-based CG instantiation of UniMaxSAT.
Example 10. Invoke the cardinality-based CG instantiation of UniMaxSAT that corresponds to OLL on the MaxSAT instance F = (F, O) from Example 1. Before the main search loop, the reformulated objective O^R of Optimize is set to O, and the sets K_1 and AB_1 both to ∅. In the first iteration, Optimize(O, AB_1, K_1) returns the assignment γ^A_1 = {b̄_1, . . ., b̄_5} containing the negation of all variables in O^R, and lb_1 = 0 corresponding to the constant term of O^R. The call Extract-AbstractCore(F, DEF(AB_1), γ^A_1) then returns res = 'false' and, e.g., the core C_1. The set of cores is then updated by letting K_2 = {C_1}, after which Add-AbstractionSets(K_2) forms the new abstraction set ab_1 and UniMaxSAT sets AB_2 = {ab_1}.
In the second iteration Optimize(O, AB_2, K_2) uses the core C_1 and abstraction set ab_1 to update O^R following Definition 6. The call Extract-AbstractCore(F, DEF(AB_2), γ^A_2) on the new reformulated objective then returns res = 'false' and, e.g., the (abstract) core C_2 = (o^1_2 ∨ b_3). Adding C_2 to K, i.e., letting K_3 = K_2 ∪ {C_2}, leads to Add-AbstractionSets introducing the abstraction set ab_2.

PMRES
The PMRES algorithm is viewed as a cardinality-based CG instantiation of UniMaxSAT by instantiating Add-AbstractionSets to introduce for every core C the abstraction set matching the PMRES core relaxation. In practice, the definition of each o_i is represented in CNF in the style of the standard Tseitin encoding (Tseitin, 1983; Prestwich, 2021) by taking a name for each definition. To establish the correctness of PMRES in terms of UniMaxSAT through Theorem 3, we argue that this instantiation of Add-AbstractionSets satisfies condition (ii) of Definition 6. Consider a solution τ to D ∪ {C}. Let in′ = {b_{i_1}, . . ., b_{i_m}} be the set of variables occurring in C assigned to 1 by τ. We show that the set of outputs that τ assigns to 1 is out′ = {o_{i_2}, . . ., o_{i_m}}. As τ is a solution to D, it assigns all o ∈ out′ to 1. In the opposite direction, any other output o_k ∉ out′ assigned to 1 by τ would result in a variable of the core b_k ∉ in′ also being assigned to 1 by τ, which is a contradiction.

K
The core relaxation of the K algorithm is intuitively a combination of the PMRES and OLL relaxations. K partitions the found cores into subsets of bounded size, relaxes each partition similarly to OLL and then merges the relaxed partitions similarly to PMRES.^7 More precisely, given a core, K partitions it into subsets P_1, . . ., P_m of bounded size k, and introduces for each partition P a cardinality constraint with outputs o^P_i for i = 1, . . ., k similarly as in OLL. Finally, the cardinality constraints of the partitions are "merged" by adding a PMRES-style constraint over the outputs o^{P_1}_1, . . ., o^{P_m}_1, where o^P_1 is the first output of the cardinality constraint introduced for the partition P. For formalizing K as a cardinality-based CG instantiation of UniMaxSAT, consider an instantiation of Add-AbstractionSets that returns a single abstraction set that combines both of these relaxations. More precisely, consider a core C of size mk partitioned into C = P_1 ∨ . . . ∨ P_m by m subsets P_1, . . ., P_m with P_i = (b^{P_i}_1 ∨ . . . ∨ b^{P_i}_k). We define three different types of abstraction sets: an OLL-style abstraction set ab^{P_j} for each partition P_j, a PMRES-style abstraction set ab^R over the first outputs of the sets ab^{P_j}, and the combined abstraction set ab whose definitions collect each DEF(ab^{P_j}) and DEF(ab^R), with outputs out = {o^{P_j}_i | j = 1, . . ., m and i = 2, . . ., k} together with the outputs of ab^R. The intuition underlying these sets is that each ab^{P_j} corresponds to the OLL-style relaxation of the partition P_j and the set ab^R to the PMRES-style relaxation that combines all of them. In other words, the set ab collects all of the relaxations into a single abstraction set that satisfies the condition of Definition 6.

7. The auxiliary d_i variables do not obstruct the main observations made here.
The fact that an instantiation of Add-AbstractionSets which on input C returns the abstraction set ab satisfies condition (ii) of Definition 6 follows straightforwardly, albeit being somewhat tedious to formally prove, as a consequence of the arguments we already made for PMRES and OLL. For some intuition, note that the set out contains m(k − 1) + (m − 1) = mk − 1 output variables, which is one less than the number of variables in C. Furthermore, any solution τ to the definitions of C and definitions of ab assigns in each ab^{P_j} exactly the same number of inputs and outputs to 1. The output with index 1 from each P_j is then further relaxed by the PMRES-style abstraction set ab^R. This ensures that exactly one less output in out will be assigned to 1.

MSU3
The MSU3 algorithm is specific to unweighted MaxSAT instances, i.e., instances in which the objective coefficients are all equal. On an unweighted MaxSAT instance (F, O), MSU3 maintains a single cardinality constraint over a set active that contains the objective variables that have occurred in the so-far extracted cores; the bound bound counts the number of cores that have been extracted so far. When a new core is extracted, the objective variables in the core are added to active and the bound is incremented by one. Informally speaking, in each iteration, the SAT solver is queried for a solution that sets exactly bound objective variables to 1. The increment is due to the fact that each new core obtained implies that the optimal cost of the instance is at least one higher than bound.
To see that MSU3 is a cardinality-based CG instantiation of UniMaxSAT, we define an instantiation of Add-AbstractionSets that corresponds to the core relaxation performed by MSU3 as just described. When the first core C_1 is extracted, Add-AbstractionSets initializes a bound bound to 1 and introduces the abstraction set ab_1 = (in_1, D_1, out_1), where in_1 = C_1 and out_1 = {o_1, . . ., o_{|C_1|−1}}.
In iteration i > 1, the obtained core C_i is first extended with all outputs in out_{i−1} of the previous abstraction set ab_{i−1}. The resulting C′_i = C_i ∪ out_{i−1} is clearly a core as well. Then the bound is incremented by one and a new abstraction set ab_i = (in_i, D_i, out_i) is introduced, where in_i contains the objective variables that are in C_i and the inputs in_{i−1} of the previous abstraction set. Notice that due to the core-extension step and the instance being unweighted, all outputs of ab_{i−1} are removed from the reformulated objective, and as such ignored in subsequent iterations.
The core-extension step is a minor technical detail required to fit the formalization of MSU3 into the definition of a cardinality-based CG instantiation of UniMaxSAT. It does not affect the algorithm in any meaningful way. In the formalization, Optimize-CB returns γ^A_1 = {x̄ | x ∈ var(O^R)}, which in particular sets all outputs of ab_i to 0. An exact correspondence to the description of MSU3 given at the beginning of the section would instead return γ^A_2 = {ō^i_1}. These two are, however, essentially equal, since ō^i_t → ō^i_{t+1} holds for all t. Specifically, there is no (K, AB)-candidate of the instance, for the current sets K and AB of cores and abstraction sets, respectively, that would extend γ^A_2 but not γ^A_1. MSU3 can be seen as a special case of the WPM3 algorithm discussed next. As such, a formal proof of the fact that the abstraction set ab_i satisfies condition (ii) of Definition 6 follows from the corresponding proof for WPM3, provided in Appendix A. Informally, the proofs make use of the fact that if k inputs are assigned to 1 by a solution τ, then k ≥ bound and k − bound outputs are assigned to 1 by τ.

WPM3
The WPM3 algorithm combines elements of OLL and MSU3 in that it maintains a set of several cardinality constraints, but only over objective variables. More precisely, assume that WPM3 on a MaxSAT instance (F, O) extracts the core C = (o^{C_1}_{t_1} ∨ . . . ∨ o^{C_n}_{t_n} ∨ b_1 ∨ . . . ∨ b_m). Each o^{C_i}_{t_i} is an output of a cardinality constraint introduced in the relaxation of a previous core C_i, and each b_i an objective variable. WPM3 relaxes C by introducing a new cardinality constraint over a set in_C with a bound bound_C, where in_C contains the objective variables of C and the inputs of all cardinality constraints whose outputs appear in C. The bound bound_C is defined recursively as bound_C = 1 + Σ_{i=1}^{n} bound_i. Here bound_i is the bound of the cardinality constraint introduced when relaxing a previously found core C_i. All of the cardinality constraints the outputs of which appear in C are removed from the working instance. Conceptually, the new cardinality constraint merges the inputs of the constraints whose outputs appear in C, and the bound is the number of inputs that can be inferred to 1 by the cores extracted so far (see Footnote 8). The formalization of WPM3 as a cardinality-based CG instantiation of UniMaxSAT is similar to the formalization of MSU3. In terms of UniMaxSAT, a core extracted by WPM3 is of the form C = (o^{ab_1}_{t_1} ∨ . . . ∨ o^{ab_n}_{t_n} ∨ b_1 ∨ . . . ∨ b_m). Here o^{ab_i}_{t_i} is an output of the abstraction set ab_i introduced earlier. The instantiation of Add-AbstractionSets that corresponds to WPM3 first extends C to C′ ⊇ C by adding, for each output o^{ab_i}_{t_i} ∈ C, all of the outputs of ab_i that are in the reformulated objective maintained by Optimize-CB. The resulting C′ remains a core since all outputs of a fixed abstraction set have the same coefficient when introduced to the reformulated objective. Thus C′ either contains all of the outputs of a previously introduced abstraction set or none of them. A new abstraction set ab_{C′} = (in_{C′}, D_{C′}, out_{C′}) is then introduced. The inputs in_{C′} = {b_1, . . ., b_m} ∪ ⋃_{i=1}^{n} in_i consist of the objective variables in C and the inputs in_i of all previous abstraction sets ab_i whose outputs appear in C′. The outputs out_{C′} = {o^{C′}_1, . . ., o^{C′}_{|in_{C′}|−bound_{C′}}} count the inputs assigned to 1 beyond the bound. The bound bound_{C′} of the abstraction set ab_{C′} is defined analogously to the bounds on cardinality constraints as bound_{C′} = 1 + Σ_{i=1}^{n} bound_i. Here bound_i is the bound of the abstraction set ab_i. The definitions D_{C′} ensure that the outputs count the number of inputs, beyond bound_{C′}, assigned to 1. A formal proof of the fact that the abstraction set ab_{C′} satisfies condition (ii) of Definition 6 is provided in Appendix A. Informally, the result follows from three observations: (i) the definitions D_{C′} ensure that any solution assigning k ≥ bound_{C′} inputs to 1 will assign k − bound_{C′} outputs to 1; (ii) the definitions of previous abstraction sets ensure that such a solution will assign k − bound_{C′} variables of C′ to 1; and (iii) the set of accumulated cores ensures that at least bound_{C′} inputs will be assigned to 1 in any solution.
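As a small illustration of the merge step (and of the additive union of Footnote 8), the following Python sketch computes the merged inputs and bound from a hypothetical data layout in which each previously introduced cardinality constraint is an (inputs, bound) pair whose outputs appear in the extended core; the function name and layout are illustrative, not WPM3's actual data structures.

```python
from collections import Counter

def merge_abstraction_sets(core_obj_vars, merged_sets):
    """Merge step sketch: the new inputs are the additive union of the
    core's objective variables and all previous inputs; the new bound
    adds one (for the fresh core) to the sum of the previous bounds."""
    inputs = Counter(core_obj_vars)
    bound = 1
    for prev_inputs, prev_bound in merged_sets:
        inputs += prev_inputs  # additive union ⊎ as in Footnote 8
        bound += prev_bound
    return inputs, bound

ins, b = merge_abstraction_sets(["b1", "b2"],
                                [(Counter({"b3": 1, "b4": 1}), 1)])
print(sorted(ins.elements()), b)  # -> ['b1', 'b2', 'b3', 'b4'] 2
```

Representing input sets as `Counter` multisets makes the additive union of Footnote 8 a plain `+` on counters.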
8. As a minor technical remark, if objective variables with different coefficients appear in cores together, they may end up as inputs of different cardinality constraints. Whenever the inputs of the cardinality constraints being merged contain the same variable, the union operator should be understood as additive union ⊎ for which, e.g., {x, y} ⊎ {x} = {x, x, y}.

On the Frequency of Abstract Candidates
We end this section with observations on how frequently core-guided algorithms compute abstract candidates. Recall that whenever an instantiation of UniMaxSAT computes a (K, AB)-abstract candidate of the instance with the current sets K and AB of cores and abstraction sets, respectively, a lower bound as high as possible given the cores extracted so far is obtained. Intuitively, the more frequently (K, AB)-abstract candidates are computed during the search, the faster in terms of iterations new lower bounds are obtained. First, we will show that in the general case, cardinality-based CG instantiations of UniMaxSAT will not (necessarily) compute a (K, AB)-abstract candidate in every iteration.
Example 11 is stated in terms of a generic instantiation of UniMaxSAT and hence applies to all cardinality-based CG instantiations, including OLL, PMRES, K, WPM3, and MSU3. In other words, each of these algorithms may compute intermediate lower bounds that are weaker than what could be inferred based on the cores extracted so far. In contrast, it turns out that these algorithms differ in this respect when restricting to unweighted MaxSAT instances in which all objective coefficients are equal. Specifically, PMRES and WPM3 are guaranteed to always compute abstract candidates when invoked on an unweighted instance, thereby obtaining as strong lower bounds as possible, while this is not the case for OLL. To establish this, we first show that there are unweighted instances on which UniMaxSAT instantiated as OLL may not always compute abstract candidates. For some intuition on the reasons for this: in contrast to PMRES, the outputs of abstraction sets introduced by OLL can lead to situations where already-extracted cores imply other cores, irrespective of the input instance. In contrast to WPM3, the outputs introduced by OLL need not be removed from the reformulated objective.
Example 12. Consider an invocation of UniMaxSAT instantiated as OLL on an unweighted MaxSAT instance F. Assume that two cores C and D, both containing three variables, are extracted, resulting in the introduction of the abstraction sets ab_C and ab_D. Assume further that three cores C_1, C_2, C_3 over the outputs of ab_C and ab_D are extracted next, with corresponding abstraction sets ab_1, ab_2 and ab_3, the last of which has no outputs. Figure 3 illustrates the abstraction sets added after extracting these cores. In the subsequent iteration, the partial assignment δ_E computed by Optimize-CB will include ō^{C_1}_2. However, then δ_E is not a (K, AB)-abstract candidate, where K = {C, D, C_1, C_2, C_3} and AB = {ab_C, ab_D, ab_1, ab_2, ab_3}. This is because there is no extension of δ_E to a solution to K ∪ DEF(AB). To see this, note that any solution τ to K ∪ DEF(AB) has to assign o^{C_1}_2 to 1.

In contrast to OLL, when invoked on an unweighted MaxSAT instance, both PMRES and WPM3 are guaranteed to compute abstract candidates in each iteration, and thus both algorithms are guaranteed to obtain as strong lower bounds as possible. This is formalized in the following proposition, the proof of which is provided in Appendix A.
Proposition 3. Invoke UniMaxSAT instantiated as PMRES or WPM3 on an unweighted MaxSAT instance F = (F, O). In the ith iteration of search, let O^R_i be the reformulated objective maintained by Optimize-CB. Let also K_i be the set of cores, and AB_i the collection of abstraction sets, collected so far. The partial assignment γ^A_i = {x̄ | x ∈ var(O^R_i)} returned by Optimize-CB is a (K_i, AB_i)-abstract candidate of F.

The results of Proposition 3 and Example 12 demonstrate the potential of UniMaxSAT for analyzing existing SAT-based MaxSAT solving algorithms.

UniMaxSAT as Basis for New Algorithmic Variants
We emphasize that the main contributions of this work are the formal UniMaxSAT framework and the unifying proofs of correctness for established SAT-based MaxSAT solving approaches the framework yields. However, beyond the already-presented main contributions, we briefly point out that the framework can also be used for obtaining new algorithmic variants of the SAT-based MaxSAT solving approaches, and thereby to provide proofs of correctness for such variants. To illustrate this further potential of the UniMaxSAT framework, we describe a novel variant AbstCG of core-guided search as an instantiation of UniMaxSAT. While AbstCG could be designed on its own, viewing it as an instantiation of UniMaxSAT immediately implies that this new algorithmic variant is correct, highlighting the usefulness of UniMaxSAT in developing new correct MaxSAT algorithms.
For AbstCG, similarly as for other core-guided algorithms, Extract-AbstractCore is a core-extracting SAT solver, Add-AbstractionSets introduces abstraction sets for each core, and Optimize maintains a reformulated objective O^R and always returns {x̄ | x ∈ var(O^R)} as assumptions. Given a core C consisting of variables that have m different coefficients in O^R, the instantiation of Add-AbstractionSets in AbstCG first partitions C into m disjoint sets C = G_1 ∨ . . . ∨ G_m so that all variables in the same set G_i have the same coefficient, with the sets G_i indexed by decreasing coefficients. Starting from G_1 (corresponding to the largest coefficient in O^R), AbstCG introduces for each G_i an abstraction set ab_i = (in_i, D_i, out_i). The inputs in_i = G_i ∪ out_{i−1} consist of the variables in G_i and the outputs of ab_{i−1}. Since C is an abstract core, at least one of its variables is assigned to 1 in any solution. Hence the first output o_{m,1} of the last abstraction set is not included in out_m.
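The partitioning step can be sketched in a few lines of Python; the function name and the string variable names are illustrative, and the coefficient map stands in for the reformulated objective O^R.

```python
from collections import defaultdict

def partition_by_coefficient(core, obj):
    """Sketch of the AbstCG partitioning step: group the core's
    variables by their current coefficient in the reformulated
    objective and order the groups G_1, ..., G_m by decreasing
    coefficient."""
    groups = defaultdict(list)
    for x in core:
        groups[obj[x]].append(x)
    # largest coefficient first, as required for G_1
    return [groups[w] for w in sorted(groups, reverse=True)]

print(partition_by_coefficient(["b1", "b2", "b3"],
                               {"b1": 3, "b2": 1, "b3": 3}))
# -> [['b1', 'b3'], ['b2']]
```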
The Optimize instantiation in AbstCG updates the reformulated objective O^R by processing each abstraction set ab_i = (in_i, D_i, out_i) in order, starting from i = 1. The coefficient of each x ∈ in_i is decreased by w_i = min({O^R(x) | x ∈ in_i}) and each output x ∈ out_i is included in O^R with coefficient w_i. After processing all abstraction sets, the constant w_m is added to O^R.
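The coefficient bookkeeping can be traced with the following Python sketch on a hypothetical core with coefficients 3, 3, 1. The partition, the output name o11, and the empty final output list are illustrative inputs encoding the construction described above; the definitions D_i are omitted since only the objective arithmetic is shown.

```python
def abstcg_reformulate(obj, const, partitions, outputs):
    """Arithmetic sketch of the AbstCG objective update: `partitions`
    lists G_1..G_m in decreasing coefficient order, `outputs` the
    output lists out_1..out_m of the corresponding abstraction sets
    (the last one excludes its first output)."""
    prev_out = []
    for G, outs in zip(partitions, outputs):
        ins = G + prev_out                 # in_j = G_j ∪ out_{j-1}
        w = min(obj[x] for x in ins)       # round minimum w_j
        for x in ins:
            obj[x] -= w                    # decrease input coefficients
        for o in outs:
            obj[o] = obj.get(o, 0) + w     # add outputs with weight w_j
        prev_out = outs
        last_w = w
    obj = {x: c for x, c in obj.items() if c > 0}  # drop zeroed variables
    return obj, const + last_w             # constant grows by w_m

obj, lb = abstcg_reformulate({"b1": 3, "b2": 3, "b3": 1}, 0,
                             [["b1", "b2"], ["b3"]], [["o11"], []])
print(obj, lb)  # -> {'o11': 2} 1
```

Note that all original core variables end with coefficient 0 and disappear from the reformulated objective, matching the property that AbstCG removes all literals of the core from O^R.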
For an informal connection between AbstCG and OLL, note that when the variables in C all have the same coefficient in O^R, the reformulation performed by AbstCG is the same as the one performed by OLL.

We establish the correctness of AbstCG as a core-guided instantiation of UniMaxSAT via Theorem 2. The non-trivial part is to argue that the reformulated objective O^R maintained by AbstCG preserves the costs of all solutions.
Proposition 4. Assume that the AbstCG instantiation of UniMaxSAT is invoked on a MaxSAT instance. Let O^R_i be the reformulated objective in iteration i, and AB_i and K_i the set of abstraction sets introduced and cores obtained by iteration i, respectively. Let τ be a solution to DEF(AB_i) ∪ K_i. Then O^R_i(τ) = O(τ).

Proof. By induction on the iteration i. The base case i = 1 follows from O^R_1 = O. Assume then that the statement holds in iteration i − 1. Let O^R_i be the reformulated objective in iteration i (i.e., the reformulated objective in the beginning of iteration i). Assume without loss of generality that an abstract core C_i is extracted in iteration i and partitioned into m subsets G_1, . . ., G_m. Let ab^i_j = (in^i_j, D^i_j, out^i_j) for j = 1, 2, . . ., m be the abstraction sets computed by Add-AbstractionSets on input C_i, and let w^i_j be the minimum weight of each G_j for j = 1, 2, . . ., m. Then O^R_{i+1}(τ) = O^R_i(τ), where the step marked (*) follows from the observation that Σ_{b∈in^i_m} τ(b) = Σ_{o∈out^i_m} τ(o) + 1 by the definitions of the abstraction sets.

While the main focus of this work is on the UniMaxSAT framework, we developed a prototype implementation of AbstCG on top of CGSS2, a state-of-the-art C++ implementation of OLL (Ihalainen, 2022). This implementation of AbstCG is available online at https://bitbucket.org/coreo-group/cgss2/ as a command line option of CGSS2. When a new core is extracted, the prototype implementation dynamically selects between relaxing it in the style of OLL and relaxing it in the style of AbstCG, always choosing the relaxation that results in fewer clauses added. Intuitively, this choice aims to balance the benefits of relaxing cores in the style of AbstCG (such as removing all literals in the core from the reformulated objective) with the potentially smaller size of the core-relaxation constraint of OLL. We empirically compare the runtimes of CGSS2-AbstCG (our prototype implementation of AbstCG) to those of CGSS2-OLL, i.e., the base CGSS2 implementation. We emphasize that the goal of these experiments is to demonstrate that UniMaxSAT allows for novel algorithmic
instantiations that can be used to obtain practical solvers competitive with the state of the art. As benchmarks, we used the 558 weighted instances (see Footnote 9) from the exact track of MaxSAT Evaluation 2023 (https://maxsat-evaluations.github.io/2023/). The experiments were run using 2.6-GHz AMD EPYC 7H12 processors under a per-instance 3600-second time and 16-GB memory limit.
Figure 5 (left) provides a runtime comparison for the two solvers with all additional heuristics implemented in CGSS2 enabled. We observe that CGSS2-AbstCG is competitive with base CGSS2 (referred to as CGSS2-OLL in the following). CGSS2-AbstCG exhibits somewhat faster runtimes on instances that take around 2000 seconds to solve, with CGSS2-AbstCG solving 415 instances within 2100 seconds compared to the 412 solved by CGSS2-OLL. Both solvers solve 419 instances within the per-instance time limit. For a more fine-grained view, we also ran both solvers with the weight-aware core extraction (WCE) (Berg & Järvisalo, 2017) and structure sharing (SS) (Ihalainen et al., 2021) techniques disabled. We note that with these two techniques CGSS2 delays the core-relaxation steps and heuristically attempts to order the literals in the individual core relaxations in a way that allows reusing constraints between multiple core relaxations. Figure 5 (right) provides a runtime comparison for the two solvers with WCE and SS disabled. Interestingly, disabling these heuristics seems to degrade the performance of CGSS2-OLL more than that of CGSS2-AbstCG, with the former solving 406 instances and the latter 412 within the time limit. The results suggest that a solver using purely OLL benefits more from WCE and SS than a solver using the AbstCG core relaxation. Overall, as a proof of concept, AbstCG appears an interesting example of a new instantiation of the general framework. Beyond the main focus of this work, we note that a more fine-grained integration of the AbstCG relaxation within CGSS2 could lead to further improvements in solver runtimes.

9. Note that when all literals in a core to be relaxed have the same coefficients, the reformulation used by AbstCG is exactly the same as the one used by OLL. We hence excluded the unweighted benchmarks from the experiments.

Conclusions
Building on the recently-proposed notion of abstract cores, we developed a general algorithmic framework that captures in a unifying way the computations performed by SAT-based MaxSAT solvers. The framework covers the three most popular modern practical algorithmic approaches to MaxSAT, namely, the core-guided, the implicit hitting set (IHS), and the objective-bounding approaches, variants of which are today implemented in various publicly-available MaxSAT solvers. The framework provides a uniform way of proving the correctness of the current and potential forthcoming algorithms of the three approaches, as well as algorithms combining techniques of the different approaches. To illustrate this, we formally detailed how the framework captures various existing instantiations of the approaches. The framework also suggests novel algorithmic variants through different instantiations; we detailed one such instantiation and showed as a proof of concept that it results in a potentially interesting solver variant for MaxSAT from a practical perspective.
While the framework developed in this work captures quite generally the current mainstream approaches to SAT-based MaxSAT solving, as a potential direction for further study it would be interesting to develop further understanding of the distinguishing features of branch-and-bound based MaxSAT solvers and their connections to our framework. Furthermore, while our discussion was grounded in MaxSAT, we note that the framework could be extended to cover related constraint optimization paradigms, such as pseudo-Boolean optimization (Devriendt et al., 2021; Smirnov et al., 2021, 2022), finite-domain constraint optimization (Delisle & Bacchus, 2013; Gange et al., 2020), and answer set programming (Andres et al., 2012; Saikko et al., 2018; Alviano & Dodaro, 2020), for which core-guided and IHS-style solvers have been developed. Such extensions would seem reasonable, since, as implemented in the already-proposed solvers for PBO, COP and ASP, the core-guided and IHS approaches are essentially agnostic to the constraint language at hand, assuming that a suitable decision oracle for core extraction exists. In particular, the UniMaxSAT framework could analogously be presented on the level of these more high-level constraint languages instead of the propositional representation focused on in this article. Studying new ways of instantiating the framework towards developing novel practical SAT-based algorithms for MaxSAT and related constraint optimization paradigms is also an interesting direction for further work.
We show that C′ fulfills condition (ii) of Definition 6. The non-trivial part to show is that the number of variables in C′ assigned to 1 by any satisfying assignment to the cores and definitions found so far is one more than the number of outputs of the abstraction set introduced in its relaxation. This is formalized in the following proposition. The induction step is marked with (*). If Σ_{i=1}^{m} τ(b_i) ≥ 1, we are done. Otherwise, assume that Σ_{i=1}^{m} τ(b_i) = 0. We establish that there exists an abstraction set ab_k with more than bound_k inputs set to true by τ. Since τ(C′) = 1, we can fix k to be an index for which τ(o^{ab_k}_1) = 1. Now, if the core C_k that prompted the addition of ab_k does not contain any outputs of previous abstraction sets, we are done, since then bound_k = 1 and, by the definitions of abstraction sets (satisfied by τ), τ(o^{ab_k}_1) = 1 implies that at least two inputs are assigned to 1. Otherwise, we recurse. Since τ(C_k) = 1, there is another abstraction set for which τ assigns at least two inputs to true. At some point, the core corresponding to the found abstraction set will not contain any output literals, at which point the recursion is guaranteed to end.

A.2 PMRES and WPM3 Compute Abstract Candidates on Unweighted Instances
We prove Proposition 3 separately for PMRES and WPM3. For the following, let AB_i and K_i be the set of abstraction sets and abstract cores extracted so far, respectively. Both proofs make use of the fact that, by Lemma 3, it suffices to show that there exists a solution τ_i ⊃ γ^A_i to K_i ∪ DEF(AB_i).

Proof of Proposition 3 for PMRES. We prove the existence of such a τ_i by induction on i. More precisely, for every i we construct an assignment τ^o_i to the objective variables that extends to such a τ_i. As a tool for the induction, we use a function β_i that maps each variable x ∈ var(γ^A_i) to a unique objective variable β_i(x) = y ∈ var(O). The mapping has the following properties: (1) τ^o_i(y) = 0, and (2) for every x ∈ var(γ^A_i), any solution δ to DEF(AB_i) ∪ K_i that agrees with τ_i on every variable in var(O) except β_i(x) also agrees with τ_i on every variable in var(O^R_i) except for x.

Base case (i = 1): We have K_1 ∪ DEF(AB_1) = ∅, so τ^o_1 = γ^A_1 = {x̄ | x ∈ var(O)} and β_1(x) = x for each x ∈ var(O).
Induction step: Let the core obtained in iteration i be C_i = (x_1 ∨ · · · ∨ x_n) and the abstraction set introduced be ab_i = (C_i, {x_j ∧ (x_{j+1} ∨ · · · ∨ x_n) ↔ o^i_j | 1 ≤ j ≤ n − 1}, {o^i_1, . . ., o^i_{n−1}}). By the induction assumption, there exists an assignment τ^o_i to the objective variables that extends to a solution τ_i ⊇ γ^A_i that satisfies DEF(AB_i) ∪ K_i, and a function β_i that satisfies properties (1) and (2) outlined in the beginning of this proof.
Let then b = β_i(x_n), let τ^o_{i+1} be obtained from τ^o_i by flipping the assignment of b to 1, and consider the (unique) extension of τ^o_{i+1} to a solution τ_{i+1} ⊃ τ^o_{i+1} of DEF(AB_{i+1}) = DEF(AB_i ∪ {ab_i}). By property (2) of β_i, τ_{i+1} agrees with τ_i on all variables in var(O^R_i) except for x_n, which τ_i assigns to 0 and τ_{i+1} to 1. As τ_i is a solution to K_i and τ_{i+1} assigns τ_{i+1}(x_n) = 1, it follows that τ_{i+1} is a solution to K_{i+1} = K_i ∪ {C_i}. Finally, let β_{i+1}(x) = β_i(x) for all x except for every x_j ∈ C_i (as these variables will no longer be part of the assumptions in subsequent iterations), and let β_{i+1}(o^i_j) = β_i(x_j) for o^i_j ∈ {o^i_1, . . ., o^i_{n−1}}. Then the mapping β_{i+1} has the properties (1) and (2) outlined in the beginning of the proof.

Proof of
(b) Follows from the properties of core-guided instantiations. As τ_{i+1} assigns exactly one more objective variable to 1 than τ_i, we have that O(τ_{i+1}) = O(τ_i) + 1, which implies O^R_{i+1}(τ_{i+1}) = O^R_i(τ_{i+1}) = O^R_i(τ_i) + 1. By the induction assumption, O(τ_i) is equal to the constant term W^lb_i of O^R_i, as τ_i assigns every variable in var(O^R_i) to 0. Since W^lb_i + 1 = W^lb_{i+1}, it follows that O^R_{i+1}(τ_{i+1}) = W^lb_{i+1}, so τ_{i+1} assigns every variable in var(O^R_{i+1}) to 0.
A clause C is a core of F if all literals in C are objective variables (i.e., var(C) ⊆ var(O)) and every solution to F satisfies C (i.e., F logically entails C).

Example 2. The clauses (b_1 ∨ b_2 ∨ b_3) and (b_3 ∨ b_4 ∨ b_5) are two of the cores of the MaxSAT instance detailed in Example 1.

Definition 2.
For a MaxSAT instance F = (F, O) and collection AB of feasible abstraction sets, a clause C is an abstract core of F wrt AB if (i) var(C) ⊆ var(O) ∪ var(OUTS(AB)) and (ii) τ(C) = 1 for each solution τ to F ∪ DEF(AB).
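On small instances, condition (ii) can be checked by brute force; the following self-contained Python sketch (clauses as lists of signed integers, with an empty list of abstraction-set definitions in the toy calls) illustrates the definition.

```python
from itertools import product

def is_abstract_core(clause, formula, defs, n_vars):
    """Brute-force check of Definition 2(ii): every solution of
    F ∪ DEF(AB) must satisfy the candidate clause. Variables are
    ints 1..n_vars; clauses are lists of signed ints."""
    cnf = formula + defs
    for bits in product([0, 1], repeat=n_vars):
        m = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(m[abs(l)] == (l > 0) for l in c) for c in cnf):
            # a solution falsifying the clause refutes core-ness
            if not any(m[abs(l)] == (l > 0) for l in clause):
                return False
    return True

# (b1), (b2) entail the core (b1 v b2) but not (-b1 v -b2)
print(is_abstract_core([1, 2], [[1], [2]], [], 2))    # -> True
print(is_abstract_core([-1, -2], [[1], [2]], [], 2))  # -> False
```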
Figure 1: A schematic overview of UniMaxSAT, invoked on a MaxSAT instance (F, O).

Figure 2: The structure of the results of Sections 5-7. In the figure, an arrow A → B from algorithm A to B indicates that A can be seen as a special case of B.

Lemma 2.
Consider a MaxSAT instance F = (F, O) and a set of feasible abstraction sets AB. Let (in, D, out) be an abstraction set and assume var(D) ∩ var(F ∪ DEF(AB)) ⊆ in. Then AB ∪ {(in, D, out)} is feasible for F.
Proof. Let τ be a solution to F. By the feasibility of AB, there is a solution τ_e ⊇ τ to F ∪ DEF(AB). Since the only variables of D assigned by τ_e are in in, by Definition 1 there is a solution τ_E ⊇ τ_e to D. Such a τ_E is a solution to F ∪ DEF(AB ∪ {(in, D, out)}), establishing the feasibility of AB ∪ {(in, D, out)}.

(iii) Optimize is instantiated as Optimize-CB, which maintains a reformulated objective O^R, initialized to be the objective O of the input MaxSAT instance. In each iteration i, Optimize-CB returns the assignment {x̄ | x ∈ var(O^R)} that sets all variables in O^R to 0, and the constant term of O^R as the lower bound. Given an abstract core C and an abstraction set ab = (in, D, out), Optimize-CB updates O^R as follows. (1) The coefficient of each x ∈ C is decreased by the minimum over the coefficients in O^R of the variables in the core, i.e., by w_C = min_{x∈C}{O^R(x)}. (2) Each x ∈ out is added to O^R with the coefficient w_C. (3) The variables in O^R with coefficient 0 are removed. (4) The constant term of O^R is incremented by w_C.

Example 11.
Consider the MaxSAT instance F = (F, O) with F = {(b_1), (b_2)} and O = b_1 + 2b_2. Invoke a cardinality-based CG instantiation of UniMaxSAT on F. Assume that the first core extracted is C_1 = (b_1 ∨ b_2). Then Add-AbstractionSets introduces an abstraction set ab_1 with one output o_1. Further, Optimize-CB updates its reformulated objective to O^R_1 = b_2 + o_1 and returns the set of assumptions γ^A = {b̄_2, ō_1}. Since Add-AbstractionSets fulfills condition (ii) of Definition 6 and any solution to F assigns both of the variables in C_1 to 1, any solution assigns the variable o_1 to 1. Thus C_2 = (o_1) is an abstract core that can be extracted in the next iteration, which results in an abstraction set ab_2 without any outputs. In the next call to Optimize-CB, its objective is updated to O^R_2 = b_2 and the set γ^A = {b̄_2} of assumptions is returned.

Figure 3: Structure of cores and abstraction sets of Example 12. Each pair of connected ellipse and rectangle nodes corresponds to an abstraction set, with the inputs (the extracted core) of that set appearing in the ellipse and the outputs in the rectangle. The dashed edges visualize how outputs of ab_C and ab_D appear as inputs to new abstraction sets.

Figure 4: An abstraction set introduced by OLL (left) and by AbstCG (right) when relaxing the core C_1 = {b_1, b_2, b_3, b_4, b_5} from Example 13. Connected ellipse and rectangle nodes correspond to an abstraction set, with the inputs of a set mentioned within the ellipse and the outputs within the rectangle. Variables with a positive coefficient in the reformulated objective O^R after the reformulation are highlighted in boldface.

Figure 5: Left: runtime comparison of OLL and AbstCG with WCE and SS enabled. Right: runtime comparison of OLL and AbstCG with WCE and SS disabled.

Proposition 5.
Let τ be an assignment satisfying DEF(AB) ∪ K ∪ {D_{C′}} ∪ C′. We have that Σ_{b∈C′} τ(b) = Σ_{o∈out_{C′}} τ(o) + 1.

The proof of Proposition 5 employs the following lemma.

Lemma 5. Let τ be an assignment satisfying DEF(AB) ∪ K ∪ {D_{C′}} ∪ C′. We have that Σ_{b∈in_{C′}} τ(b) ≥ bound_{C′}.

Proof. By induction on the added abstraction sets. Base case: C′ only contains objective variables. Now bound_{C′} = 1 and the statement follows by noting that C′ = C = in_{C′} and that τ satisfies C′. Induction step: Assume that C′ contains outputs of previous abstraction sets (for which the statement holds by the induction assumption). Then the inputs inherited from the previous abstraction sets contribute at least Σ_i bound_i = bound_{C′} − 1 variables assigned to 1 by τ, and the satisfaction of C′ contributes at least one more, so Σ_{b∈in_{C′}} τ(b) ≥ bound_{C′}.
Proof of Proposition 3 for WPM3. We show the existence of a solution τ_i ⊃ γ^A_i = {x̄ | x ∈ var(O^R_i)} to K_i ∪ DEF(AB_i). The proof is by induction on the iteration i.

Base case (i = 1): Immediate by K_1 ∪ DEF(AB_1) = ∅ and O^R_1 = O.

Induction step: Assume that the (extended) core C_i = (b_1 ∨ . . . ∨ b_m) ∨ ⋁_{j=1}^{n} (o^{ab_j}_1 ∨ . . . ∨ o^{ab_j}_{|out_j|}) is extracted in iteration i, and the new abstraction set ab_i = (in_i, D_i, out_i) introduced as detailed in Section 6.3.5. By the induction assumption, there exists a solution τ_i ⊃ γ^A_i = {x̄ | x ∈ var(O^R_i)} to K_i ∪ DEF(AB_i). By the properties of the Extract-AbstractCore subroutine, τ_i falsifies C_i. Let τ^o_i be the restriction of τ_i onto the objective variables and select any b ∈ in_i for which τ^o_i(b) = 0; at least one such b exists as τ_i(C_i) = 0. Finally, let τ^o_{i+1} be the assignment to the objective variables that agrees with τ^o_i on all variables except b, and consider the (unique) extension τ_{i+1} ⊃ τ^o_{i+1} of τ^o_{i+1} to a solution of DEF(AB_{i+1}) = DEF(AB_i ∪ {ab_i}). The proposition follows from showing two things: (a) τ_{i+1} satisfies K_{i+1} = K_i ∪ {C_i}, and (b) τ_{i+1} ⊃ γ^A_{i+1} = {x̄ | x ∈ var(O^R_{i+1})}.

(a) For the non-trivial case, assume that b is not among the objective variables b_1, . . ., b_m of C_i. Then b is an input to some abstraction set ab_j for which o^{ab_j}_1 ∈ C_i, and τ_{i+1} assigns bound_j + 1 inputs of ab_j to 1. This follows by a recursive argument similar to the one made in the proof of Lemma 5. By the construction of the abstraction sets, and by recursing if needed, we can assume that the core C_j that prompted the addition of ab_j only contains objective variables. As τ_i(C_j) = 1, we have that τ_i assigns τ_i(b_o) = 1 for at least one variable b_o ∈ C_j. As b ∈ C_j, τ_i(b) = 0 and τ_{i+1} agrees with τ_i on all objective variables except b, we have that τ_{i+1} assigns at least 2 variables in C_j to 1. As bound_j = 1, τ_{i+1} also assigns τ_{i+1}(o^{ab_j}_1) = 1 and satisfies C_i. Furthermore, by the definitions of abstraction sets, τ_{i+1} assigns at least the same variables to 1 as τ_i, thus it clearly also satisfies all other cores. We conclude that τ_{i+1} is a solution to K_{i+1}.