Partial Strategyproofness: Relaxing Strategyproofness for the Random Assignment Problem

We present partial strategyproofness, a new, relaxed notion of strategyproofness for studying the incentive properties of non-strategyproof assignment mechanisms that involve some form of randomness. Informally, a mechanism is partially strategyproof if it makes truthful reporting a dominant strategy for those agents whose preference intensities differ sufficiently between any two objects. A single numerical parameter, the degree of strategyproofness, controls the extent to which these intensities must differ. We demonstrate that partial strategyproofness is a natural and useful relaxation of strategyproofness: It is axiomatically motivated, it allows a meaningful parametric and algorithmic comparison of mechanisms by their incentive properties, and it provides new insights across a wide range of popular mechanisms.


Introduction
The assignment problem is concerned with the allocation of indivisible objects to self-interested agents who have private preferences over these objects. Monetary transfers are not permitted, which makes this problem different from auctions and other settings with transferable utility. In practice, assignment problems often arise in situations that are of great importance to people's lives; for example, when assigning students to seats at public schools (Abdulkadiroglu and Sönmez, 2003), medical school graduates to entry level positions (Roth, 1984), or tenants to subsidized housing (Abdulkadiroglu and Sönmez, 1998). Not surprisingly, since the seminal paper of Hylland and Zeckhauser (1979), the assignment problem has attracted much attention from mechanism designers.
In this paper, we study mechanisms for the assignment problem that take preference orders over objects as input. As mechanism designers, we care specifically about incentives for truthtelling under the mechanisms we design. A mechanism is strategyproof if truthtelling is a dominant strategy equilibrium of the associated preference revelation game. Participating in a strategyproof mechanism is simple for the agents because it eliminates the need to take the preferences or strategies of other agents into account. Strategyproofness thus yields a robust prediction of equilibrium behavior. [1] This advantage makes strategyproofness the gold standard among incentive concepts. [2]

The advantages of strategyproofness, however, come at a cost: Zhou (1990) showed that, in the assignment domain, it is impossible to achieve the optimum with respect to incentives, efficiency, and fairness simultaneously. [3] This makes the assignment problem an interesting mechanism design challenge. For example, the Random Serial Dictatorship mechanism is strategyproof and anonymous, but only ex-post efficient. In fact, it is conjectured to be the unique mechanism that satisfies all three properties (Lee and Sethuraman, 2011; Bade, 2016). The more demanding ordinal efficiency is achieved by the Probabilistic Serial mechanism, but any mechanism that achieves ordinal efficiency and equal treatment of equals cannot be strategyproof (Bogomolnaia and Moulin, 2001). Finally, rank efficiency, an even stronger efficiency concept, can be achieved with Rank Value mechanisms (Featherstone, 2011), but it is incompatible with strategyproofness, even without additional fairness requirements. Obviously, strategyproofness is in conflict with many other desiderata, and mechanism designers are therefore interested in studying non-strategyproof mechanisms. This highlights the need for good tools to capture the incentive properties of non-strategyproof mechanisms and to analyze what trade-offs are possible between incentives for truthtelling and other desiderata.

In practice, non-strategyproof mechanisms, such as the Boston mechanism for the assignment of seats at public schools, are ubiquitous. School choice administrators frequently have the explicit objective of assigning as many students as possible to their top choices (Dur, Mennle and Seuken, 2016). If students report their preferences truthfully, the Boston mechanism intuitively fares well with respect to this objective; however, it is known to be highly manipulable in theory (Abdulkadiroglu and Sönmez, 2003), and it was found to be manipulated by students in practice (Calsamiglia and Güell, 2014). A second example is the Teach for America program, which used a mechanism that aimed at rank efficiency when assigning new teachers to positions at different schools (Featherstone, 2011). While this mechanism was not strategyproof, the organizers were confident that the majority of preferences were reported truthfully because participants lacked the information necessary to determine beneficial misreports.

The prevalence of non-strategyproof mechanisms in practice raises a number of questions: What incentive guarantees do these mechanisms provide, if any? What honest and useful strategic advice can we give to the agents? How do these mechanisms compare in terms of their incentive properties? And how can we quantify their "non-truthfulness"? To be useful for market design, an incentive concept should provide answers to these questions.

[1] In addition, strategyproofness yields a form of fairness, as it levels the playing field between agents who have different levels of strategic ability (Pathak and Sönmez, 2008). Moreover, the truthful preference information that strategyproof mechanisms elicit can be useful beyond its role as input to the mechanism; for example, researchers rely on such information for computational experiments (see, e.g., Abdulkadiroglu, Pathak and Roth, 2009), and in the context of course assignment, universities may wish to adjust course capacities in response to the true demand.
[2] Recently, Li (2016) proposed obvious strategyproofness, a strict refinement of strategyproofness, which requires that agents can understand the optimality of truthful reporting without contingent reasoning. While the partial strategyproofness concept that we introduce in the present paper retains the ability to give honest and useful strategic advice to the agents (see Section 5.2), it is agnostic about the information-theoretic complexity of verifying the validity of this advice.
[3] Specifically, Zhou (1990) showed that no (possibly random and possibly cardinal) assignment mechanism can satisfy strategyproofness, ex-ante efficiency, and equal treatment of equals.
It should come as no surprise that researchers have long called for useful relaxations of strategyproofness (Budish, 2012). In this paper, we introduce partial strategyproofness, a new, relaxed notion of strategyproofness. Partial strategyproofness is particularly suited to the analysis of assignment mechanisms that involve some form of randomness. The randomness may come from any of three sources: First, it may be an intrinsic feature of the mechanism (e.g., of the Probabilistic Serial mechanism (Bogomolnaia and Moulin, 2001)), which corresponds to our typical understanding of the random assignment problem. Second, the randomness may be injected via random priorities; e.g., ties in school choice mechanisms are typically broken via lotteries (Erdil and Ergin, 2008). Third, the randomness may arise from the agents' uncertainty about the other agents' reports (which Azevedo and Budish (2015) call the interim perspective). We show that partial strategyproofness provides compelling answers to all the questions we have raised, independent of the particular source of randomness.
We now illustrate the definition of partial strategyproofness with a motivating example. Consider a setting in which three agents, conveniently named 1, 2, 3, compete for three objects, a, b, c, each in unit capacity. Suppose that the agents' preferences are P_1: a ≻ b ≻ c, P_2: b ≻ a ≻ c, and P_3: b ≻ c ≻ a; that agent 1 has utility 0 for its last choice c; and that the non-strategyproof Probabilistic Serial mechanism is used to assign the objects. If all agents report their preferences truthfully, then agent 1 receives a, b, c with probabilities 3/4, 0, 1/4, respectively. If agent 1 reports P'_1: b ≻ a ≻ c instead, then these probabilities change to 1/2, 1/3, 1/6. Observe that whether or not the misreport P'_1 increases agent 1's expected utility depends on how intensely it prefers a over b: If u_1(a) is close to u_1(b), then agent 1 would benefit from the misreport P'_1. If u_1(a) is significantly larger than u_1(b), then agent 1 would prefer to report truthfully. Specifically, agent 1 prefers to report truthfully if 3/4 · u_1(a) ≥ u_1(b).
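The probabilities above can be reproduced with the simultaneous-eating description of Probabilistic Serial: all agents "eat" their favorite remaining object at unit speed and move on when it is exhausted. The following minimal sketch uses exact rational arithmetic; function and variable names are our own:

```python
from fractions import Fraction

def probabilistic_serial(prefs, capacities):
    """Simultaneous-eating algorithm with unit eating speeds.

    prefs: agent -> list of objects, most preferred first.
    capacities: object -> available units.
    Returns a dict (agent, object) -> assignment probability (Fraction).
    """
    remaining = {o: Fraction(c) for o, c in capacities.items()}
    assign = {(i, o): Fraction(0) for i in prefs for o in capacities}
    clock = Fraction(0)
    while clock < 1:
        # each agent eats its most-preferred object with supply left
        eating = {i: next(o for o in prefs[i] if remaining[o] > 0)
                  for i in prefs}
        eaters = {}
        for i, o in eating.items():
            eaters.setdefault(o, []).append(i)
        # advance until some object runs out or total time 1 is reached
        step = min(min(remaining[o] / len(ag) for o, ag in eaters.items()),
                   1 - clock)
        for o, ag in eaters.items():
            for i in ag:
                assign[(i, o)] += step
            remaining[o] -= step * len(ag)
        clock += step
    return assign
```

Running it on the example reproduces agent 1's probabilities (3/4, 0, 1/4) under truthful reporting and (1/2, 1/3, 1/6) under the misreport P'_1.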
Our definition of partial strategyproofness generalizes the intuition from this motivating example: For some bound r in [0, 1], we say that agent i's utility function u_i satisfies uniformly relatively bounded indifference with respect to r (URBI(r)) if r · u_i(a) ≥ u_i(b) holds whenever i prefers a to b (after appropriate normalization). A mechanism is r-partially strategyproof if it makes truthful reporting a dominant strategy for any agent whose utility function satisfies URBI(r). Varying the degree of strategyproofness r between 0 and 1 gives rise to a new, parametrized class of incentive concepts. In fact, we find (in Section 8.2) that the Probabilistic Serial mechanism has a degree of strategyproofness of exactly 3/4 in a setting with three agents and three objects.
We argue that partial strategyproofness is a natural and useful way to think about the incentive properties of non-strategyproof assignment mechanisms that involve some form of randomness. The following three arguments support our claim.
Partial strategyproofness has a compelling axiomatic motivation. Towards showing this, we first prove that full strategyproofness can be decomposed into three simple axioms. Each of the axioms restricts the way in which a mechanism can react when an agent swaps two consecutively ranked objects in its preference report (e.g., from P_i: a ≻ b to P'_i: b ≻ a).

1. A mechanism is swap monotonic if either the swap makes it more likely that the agent receives b (the object that it claims to prefer) and less likely that the agent receives a, or the mechanism does not react to the swap at all. In other words, any swap must affect at least the agent's probabilities for the objects that trade positions, and the change must be monotonic in the agent's reported preferences.

2. A mechanism is upper invariant if an agent cannot improve its chances for objects that it likes more by misrepresenting its preferences for objects that it likes less. Precisely, the swap of a and b must not affect the agent's chances for any object that it prefers strictly to a.

3. A mechanism is lower invariant if an agent cannot affect its chances for less-preferred objects by swapping a and b. This axiom is the complement to upper invariance but for objects that the agent likes strictly less than b.
For our first main result, we show that strategyproofness can be decomposed into these three axioms: A mechanism is strategyproof if and only if it is swap monotonic, upper invariant, and lower invariant (Theorem 1). Intuitively, lower invariance is the least important of the three axioms (see Section 5.2 for a formal argument), and by dropping it, we arrive at the larger class of partially strategyproof mechanisms: We show that a mechanism is r-partially strategyproof for some r > 0 if and only if it is swap monotonic and upper invariant (Theorem 2). Thus, partial strategyproofness describes the incentive properties of (possibly non-strategyproof) mechanisms that retain the two most important of the three axioms that constitute strategyproofness.

Partial strategyproofness allows a meaningful parametric and algorithmic comparison of mechanisms by their incentive properties. By construction, a greater degree of strategyproofness r means that the mechanism is guaranteed to provide good incentives for a larger set of utility functions. An r of 1 is equivalent to strategyproofness, and the lower limit concept of r-partial strategyproofness as r approaches 0 is the weaker lexicographic strategyproofness (Theorem 3). The degree of strategyproofness thus parametrizes the spectrum of incentive concepts between these two extremes. In terms of computability, we show that partial strategyproofness can be verified algorithmically, and that the degree of strategyproofness can be computed (for any mechanism in any finite setting). Other approaches to studying the incentive properties of non-strategyproof mechanisms include strategyproofness in the large (Azevedo and Budish, 2015) and the comparison of mechanisms by their vulnerability to manipulation (Pathak and Sönmez, 2013). The insights from these other approaches are not in conflict with those from partial strategyproofness. However, in contrast to strategyproofness in the large, partial strategyproofness describes incentives in every finite setting separately. Moreover, neither of the two other approaches is parametric, and no algorithms are known to verify that they apply to a given mechanism.
Partial strategyproofness provides new insights into the incentive properties of existing (and new) non-strategyproof mechanisms. The Probabilistic Serial mechanism (PS) serves as a typical example of a partially strategyproof mechanism. Bogomolnaia and Moulin (2001) had already shown that PS is weakly strategyproof, and Balbuzanov (2015) strengthened this result by showing that it is convex strategyproof. Since partial strategyproofness implies convex strategyproofness, our result that PS is partially strategyproof (Proposition 5) is the strongest statement about its incentive properties in any finite setting that is known at present. [11]

A second insight from partial strategyproofness concerns the comparison of two popular non-strategyproof school choice mechanisms. Under the classical Boston mechanism (BM), [12] all applications in the k-th round go to the students' k-th choices. In particular, a student may waste one round by applying to a school with no more unfilled seats. This induces an obvious incentive to misreport by skipping exhausted schools in the application process. The adaptive Boston mechanism (ABM) (Mennle and Seuken, 2017d) works similarly to BM, except that students apply to their best available school in each round. Intuitively, this tweak removes part of the manipulability, but a formal understanding of this intuition has so far remained elusive. [13] Partial strategyproofness now enables this formal understanding: With priorities determined by a single, uniform lottery, ABM is partially strategyproof, but BM is not.
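The round-by-round behavior of BM and ABM's "best available school" tweak can be made concrete in a short simulation. The sketch below is our own code: a fixed strict priority order stands in for a single realized lottery draw, and all student and school names are hypothetical.

```python
def boston(prefs, capacity, priority, adaptive=False):
    """Classical Boston mechanism (BM) and adaptive variant (ABM).

    prefs: student -> list of schools, best first.
    priority: students in priority order (e.g., one lottery draw).
    In round k, BM students apply to their k-th reported choice; ABM
    students apply to their best reported school with seats left.
    Acceptances are final.
    """
    seats = dict(capacity)
    assigned = {}
    m = max(len(p) for p in prefs.values())
    for k in range(max(m, len(prefs))):
        for s in priority:                       # higher priority first
            if s in assigned:
                continue
            if adaptive:                         # best school with open seats
                options = [c for c in prefs[s] if seats[c] > 0]
                school = options[0] if options else None
            else:                                # k-th choice, even if full
                school = prefs[s][k] if k < len(prefs[s]) else None
            if school is not None and seats[school] > 0:
                seats[school] -= 1
                assigned[s] = school
    return assigned
```

In a toy instance with unit capacities, truthful s2 wastes the first round applying to the exhausted school a under BM and ends up with its third choice, while skipping a secures b; under ABM, the truthful report already yields b, so this particular manipulation gains nothing.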
We conclude that the axiomatic motivation, the parametric degree of strategyproofness, and the new insights about non-strategyproof mechanisms qualify partial strategyproofness as a natural and useful relaxed notion of strategyproofness for assignment mechanisms that involve some form of randomness.

Formal Model
A setting (N, M, q) consists of a set of agents N (n = #N), a set of objects M (m = #M), and a vector q = (q_1, ..., q_m) of capacities (i.e., there are q_j units of object j available). We assume n ≤ Σ_{j∈M} q_j (i.e., there are not more agents than the total number of units); otherwise we include a dummy object with capacity n. Each agent i ∈ N has a strict preference order P_i over objects, where P_i: a ≻ b indicates that agent i prefers object a to object b. Let 𝒫 be the set of all possible preference orders. A preference profile P = (P_i)_{i∈N} ∈ 𝒫^N is a collection of preference orders from all agents, and P_{-i} ∈ 𝒫^{N∖{i}} is a collection of preference orders of all agents except i. We extend agents' preferences to lotteries via von Neumann-Morgenstern utility functions: A utility function u_i: M → ℝ is consistent with a preference order P_i if u_i(a) > u_i(b) whenever P_i: a ≻ b, denoted u_i ~ P_i. U_{P_i} = {u_i | u_i ~ P_i} denotes the set of all utility functions consistent with P_i. A (random) assignment is represented by an n×m matrix x = (x_{i,j})_{i∈N, j∈M}, where no object is assigned beyond capacity (i.e., Σ_{i∈N} x_{i,j} ≤ q_j for all j ∈ M), and each agent receives some object with certainty (i.e., Σ_{j∈M} x_{i,j} = 1 for all i ∈ N and x_{i,j} ≥ 0 for all i ∈ N, j ∈ M). The value x_{i,j} is the probability that agent i gets object j. An assignment x is deterministic if x_{i,j} ∈ {0, 1} for all i ∈ N, j ∈ M. The i-th row x_i = (x_{i,j})_{j∈M} of x is called the assignment vector of i (or i's assignment). The Birkhoff-von Neumann Theorem and its extensions (Budish et al., 2013) ensure that for any random assignment we can find a lottery over deterministic assignments that implements its marginal probabilities. Finally, let X and Δ(X) denote the spaces of all deterministic and random assignments, respectively.

[11] We show that r-partial strategyproofness (r > 0) implies weak SD-strategyproofness (Bogomolnaia and Moulin, 2001), convex strategyproofness (Balbuzanov, 2015), approximate strategyproofness (Carroll, 2013), and lexicographic dominance-strategyproofness (Cho, 2012).
[12] Under the Boston mechanism (Abdulkadiroglu and Sönmez, 2003), students apply to schools in rounds. All students first apply to their first choice, and applications are accepted according to priority. All students who did not receive their first choice apply to their second choice in the second round and are accepted into unfilled seats. This process continues until no new applications are received.
[13] For fixed priorities, BM and ABM are as manipulable as each other, and for random priorities they cannot be compared by their vulnerability to manipulation (Dur, Mennle and Seuken, 2016).
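The Birkhoff-von Neumann Theorem is constructive. A minimal sketch of the decomposition (our own code, for the unit-capacity square case with exact rationals; the cited extensions handle general capacities) repeatedly extracts a permutation matrix supported on the remaining probabilities:

```python
from fractions import Fraction

def birkhoff(x):
    """Write a doubly stochastic matrix (exact rationals) as a lottery
    over permutation matrices.  Returns (weight, perm) pairs, where
    perm[row] = assigned column."""
    n = len(x)
    x = [[Fraction(v) for v in row] for row in x]
    lottery = []
    while any(v for row in x for v in row):
        match = [-1] * n                    # column -> matched row

        def augment(r, seen):
            # Kuhn's augmenting-path search on the support of x
            for c in range(n):
                if x[r][c] > 0 and c not in seen:
                    seen.add(c)
                    if match[c] == -1 or augment(match[c], seen):
                        match[c] = r
                        return True
            return False

        for r in range(n):
            augment(r, set())               # perfect matching exists (Hall)
        perm = [0] * n
        for c, r in enumerate(match):
            perm[r] = c
        w = min(x[r][perm[r]] for r in range(n))
        for r in range(n):                  # peel off this permutation
            x[r][perm[r]] -= w
        lottery.append((w, perm))
    return lottery
```

Each iteration zeroes at least one matrix entry, so at most n² permutation matrices are produced, and their weighted sum recovers the marginal probabilities.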
A (random assignment) mechanism is a mapping ϕ: 𝒫^N → Δ(X) that selects an assignment based on a preference profile. ϕ_i(P_i, P_{-i}) denotes the assignment of agent i when i reports P_i and the other agents report P_{-i}. The mechanism ϕ is deterministic if it selects deterministic assignments (i.e., ϕ: 𝒫^N → X). Note that we only consider ordinal mechanisms, where the assignment depends only on the reported preference profile but is independent of the underlying utility functions. If agent i with utility function u_i reports P_i and the other agents report P_{-i}, then agent i's expected utility is

    E_{u_i}[ϕ_i(P_i, P_{-i})] = Σ_{j∈M} u_i(j) · ϕ_{i,j}(P_i, P_{-i}).    (1)

Remark 1. Our formal model treats randomness as an intrinsic feature of mechanisms. This corresponds to the typical understanding of the random assignment problem as studied by Bogomolnaia and Moulin (2001). Randomness may also arise from random priorities, which play an important role for tie-breaking in school choice. Since random priorities are outside the control of the agents, the randomness that they induce can formally be treated as if it were an intrinsic feature of the mechanism. A third potential source of randomness is each agent's uncertainty about the preference reports from the other agents; Azevedo and Budish (2015) call this the interim perspective. In Section 16 of the Online Appendix, we present an extension of our model to this interim perspective.
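Expected utility as in equation (1) is a single dot product. With the utilities from the motivating example (u_1(a) = 1, u_1(b) = 3/4, u_1(c) = 0, which lie exactly on the boundary where 3/4 · u_1(a) = u_1(b)), truthful reporting and the misreport tie:

```python
def expected_utility(u, x_i):
    """Expected utility of assignment vector x_i (object -> probability),
    as in equation (1)."""
    return sum(u[j] * p for j, p in x_i.items())

u1 = {'a': 1.0, 'b': 0.75, 'c': 0.0}          # boundary utilities
truth = {'a': 0.75, 'b': 0.0, 'c': 0.25}      # PS assignment, truthful
lie = {'a': 0.5, 'b': 1 / 3, 'c': 1 / 6}      # PS assignment, misreport
```

Here `expected_utility(u1, truth)` and `expected_utility(u1, lie)` both evaluate to 3/4, confirming the indifference threshold from the Introduction.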

Strategyproofness and Incentive Axioms
In this section, we formally define strategyproofness and introduce the three axioms that we subsequently use to decompose and relax strategyproofness.
Definition 1 (Strategyproofness). A mechanism ϕ is strategyproof if, for all agents i ∈ N, all preference profiles (P_i, P_{-i}) ∈ 𝒫^N, all misreports P'_i ∈ 𝒫, and all consistent utility functions u_i ∈ U_{P_i}, we have

    E_{u_i}[ϕ_i(P_i, P_{-i})] ≥ E_{u_i}[ϕ_i(P'_i, P_{-i})].    (2)

In words, strategyproofness requires that truthful reporting maximizes any agent's expected utility, independent of that agent's particular preferences, its utility function, or the preference reports of the other agents.
Alternatively, strategyproofness can be defined in terms of stochastic dominance: For a preference order P_i ∈ 𝒫 (where P_i: j_1 ≻ ... ≻ j_m) and two assignment vectors x_i, y_i, we say that x_i stochastically dominates y_i at P_i if, for all ranks K ∈ {1, ..., m}, we have

    Σ_{k=1}^{K} x_{i,j_k} ≥ Σ_{k=1}^{K} y_{i,j_k}.    (3)

In words, i has a weakly higher probability of obtaining one of its top-K choices under x_i than under y_i. This dominance is strict if inequality (3) is strict for at least one K ∈ {1, ..., m}. A mechanism ϕ is stochastic dominance-strategyproof (SD-strategyproof) if, for all agents i ∈ N, all preference profiles (P_i, P_{-i}) ∈ 𝒫^N, and all misreports P'_i ∈ 𝒫, ϕ_i(P_i, P_{-i}) stochastically dominates ϕ_i(P'_i, P_{-i}) at P_i. Erdil (2014) showed that strategyproofness (Definition 1) and SD-strategyproofness are equivalent; throughout the paper, we refer to this property simply as strategyproofness.
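The stochastic dominance comparison amounts to checking m cumulative sums. A small helper (our own naming), applied to the motivating example, confirms that neither the truthful nor the misreported assignment dominates the other, which is why the comparison there had to depend on utility intensities:

```python
from itertools import accumulate

def sd_dominates(x_i, y_i, pref, strict=False):
    """Does x_i stochastically dominate y_i at preference order pref
    (best object first)?  Checks inequality (3) for every rank K."""
    cx = list(accumulate(x_i[j] for j in pref))
    cy = list(accumulate(y_i[j] for j in pref))
    weak = all(a >= b - 1e-12 for a, b in zip(cx, cy))
    if strict:
        return weak and any(a > b + 1e-12 for a, b in zip(cx, cy))
    return weak
```

For the Probabilistic Serial example, the cumulative sums at P_1: a ≻ b ≻ c are (3/4, 3/4, 1) for the truthful report and (1/2, 5/6, 1) for the misreport, so the check fails in both directions.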
Next, we introduce three axioms. To state the axioms formally, we require two auxiliary concepts: For any preference order P_i ∈ 𝒫, the neighborhood of P_i is the set of all preference orders that differ from P_i by a swap of two consecutively ranked objects, denoted N_{P_i} (e.g., the neighborhood of P_i: a ≻ b ≻ c includes P'_i: b ≻ a ≻ c but not P''_i: c ≻ a ≻ b). For any object j ∈ M, the upper contour set of j at P_i is the set of objects that i prefers strictly to j, denoted U(j, P_i). Conversely, the lower contour set of j at P_i is the set of objects that i likes strictly less than j, denoted L(j, P_i). For example, the upper contour set of b at P_i: a ≻ b ≻ c ≻ d is {a}, and the lower contour set is {c, d}.
Swapping two consecutive objects in the true preference order (or equivalently, reporting a preference order from the neighborhood of the true preference order) is a basic kind of misreport. The axioms we define limit the way in which a mechanism can change an agent's assignment when this agent uses such a basic kind of misreport.
Axiom 1 (Swap Monotonicity). A mechanism ϕ is swap monotonic if, for all agents i ∈ N, all preference profiles (P_i, P_{-i}) ∈ 𝒫^N, and all misreports P'_i ∈ N_{P_i} from the neighborhood of P_i with P_i: a ≻ b but P'_i: b ≻ a, one of the following holds:
1. ϕ_i(P'_i, P_{-i}) = ϕ_i(P_i, P_{-i}) (the agent's assignment is unchanged), or
2. ϕ_{i,b}(P'_i, P_{-i}) > ϕ_{i,b}(P_i, P_{-i}) and ϕ_{i,a}(P'_i, P_{-i}) < ϕ_{i,a}(P_i, P_{-i}) (the assignment for b strictly increases and the assignment for a strictly decreases).

Swap monotonicity requires that the mechanism reacts to the swap in a direct and responsive way: The swap reveals information about the agent's relative preference over a and b. Thus, if anything changes about that agent's assignment, then the assignment for a and b must be affected directly. Moreover, the mechanism must respond to the new information by increasing the assignment for the object that the agent reports to like more and reducing the assignment for the object that the agent reports to like less. For deterministic mechanisms, swap monotonicity is equivalent to strategyproofness (see Proposition 6 in Section 11 of the Online Appendix). For the more general class of random mechanisms, swap monotonicity is weaker than strategyproofness but prevents a certain obvious kind of manipulability: Consider a mechanism that assigns an agent's reported first choice with probability 1/3 and its reported second choice with probability 2/3. The agent is unambiguously better off by ranking its second choice first. Swap monotonicity precludes such opportunities for manipulation. Nevertheless, even swap monotonic mechanisms may be manipulable in a stochastic dominance sense, as the following example shows.
Example 1. Consider a mechanism where reporting P_i: a ≻ b ≻ c ≻ d leads to an assignment of (0, 1/2, 0, 1/2) for a, b, c, d, respectively, and swapping b and c (i.e., reporting P'_i: a ≻ c ≻ b ≻ d) leads to (1/2, 0, 1/2, 0). This is consistent with swap monotonicity; yet, the latter assignment stochastically dominates the former at P_i.

Note that while the swap monotonic mechanism in Example 1 is manipulable in a stochastic dominance sense, the manipulation leads to changes in the assignment for a, an object that the agent prefers to both b and c. Our next axiom restricts the changes in the assignment for these objects under a swap.
Axiom 2 (Upper Invariance). A mechanism ϕ is upper invariant if, for all agents i ∈ N, all preference profiles (P_i, P_{-i}) ∈ 𝒫^N, and all misreports P'_i ∈ N_{P_i} with P_i: a ≻ b but P'_i: b ≻ a, we have ϕ_{i,j}(P'_i, P_{-i}) = ϕ_{i,j}(P_i, P_{-i}) for all objects j ∈ U(a, P_i) in the upper contour set of a.

Intuitively, upper invariance ensures that agents cannot influence their chances of obtaining more-preferred objects by changing the order of less-preferred objects. This axiom was introduced by Hashimoto et al. (2014) as one of the central axioms to characterize the Probabilistic Serial mechanism. If an outside option is available and if the mechanism is individually rational, then upper invariance is equivalent to truncation robustness. Many assignment mechanisms satisfy upper invariance, including Random Serial Dictatorship, Probabilistic Serial, the Boston mechanism, Deferred Acceptance (for the proposing agents), Top Trading Cycles, and the HBS Draft mechanism.
Finally, our third axiom is symmetric to upper invariance but restricts the changes in the assignment for objects that the agent likes less than those objects that get swapped.
Axiom 3 (Lower Invariance). A mechanism ϕ is lower invariant if, for all agents i ∈ N, all preference profiles (P_i, P_{-i}) ∈ 𝒫^N, and all misreports P'_i ∈ N_{P_i} with P_i: a ≻ b but P'_i: b ≻ a, we have ϕ_{i,j}(P'_i, P_{-i}) = ϕ_{i,j}(P_i, P_{-i}) for all objects j ∈ L(b, P_i) in the lower contour set of b.

In words, a mechanism is lower invariant if changing the order of two consecutive objects does not affect the agent's chances of obtaining any less-preferred objects.
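All three axioms constrain the mechanism's reaction to a single swap, so for one (truthful, misreport) pair of assignment vectors they are directly machine-checkable. The sketch below uses our own naming; applied to Example 1, it confirms that swap monotonicity holds while upper and lower invariance both fail:

```python
def axiom_checks(before, after, a, b, pref):
    """Check the three axioms for one swap misreport: the true report
    ranks a directly above b, the misreport swaps them.

    before/after: object -> probability under truth/misreport.
    pref: the true preference order, best object first.
    Returns (swap_monotonic, upper_invariant, lower_invariant).
    """
    upper = pref[:pref.index(a)]            # U(a, P_i)
    lower = pref[pref.index(b) + 1:]        # L(b, P_i)
    swap_monotonic = (before == after or
                      (after[b] > before[b] and after[a] < before[a]))
    upper_invariant = all(after[j] == before[j] for j in upper)
    lower_invariant = all(after[j] == before[j] for j in lower)
    return swap_monotonic, upper_invariant, lower_invariant
```

A swap that merely shifts probability from a to b, leaving all other objects untouched, passes all three checks, which is exactly the behavior Theorem 1 below shows to characterize strategyproofness.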

Decomposition of Strategyproofness
Each of the three axioms that we have introduced affects incentives by preventing manipulations from being beneficial in particular ways. We are now ready to formulate our first main result that strategyproofness can be decomposed into these axioms.

Theorem 1. A mechanism ϕ is strategyproof if and only if it is swap monotonic, upper invariant, and lower invariant.
Proof. "⇒". Assume towards contradiction that ϕ is strategyproof but not upper invariant. Then there exist some agent i ∈ N, some preference profile P = (P_i, P_{-i}) ∈ 𝒫^N, some misreport P'_i ∈ N_{P_i} with P_i: a ≻ b but P'_i: b ≻ a, and an object j that i prefers strictly to a and b with ϕ_{i,j}(P'_i, P_{-i}) ≠ ϕ_{i,j}(P_i, P_{-i}). Choose j to be i's most-preferred such object. Without loss of generality, ϕ_{i,j}(P'_i, P_{-i}) > ϕ_{i,j}(P_i, P_{-i}) (otherwise, we can reverse the roles of P'_i and P_i). Then ϕ_i(P_i, P_{-i}) does not stochastically dominate ϕ_i(P'_i, P_{-i}) at P_i, a contradiction to strategyproofness. Lower invariance follows by an analogous argument, except that we take j to be the least-preferred object for which i's assignment changes.

By upper and lower invariance, any swap of two consecutive objects (e.g., from P_i: a ≻ b to P'_i: b ≻ a) only leads to a redistribution of probability between a and b. If the misreport P'_i leads to a strictly higher assignment for a, then ϕ_i(P'_i, P_{-i}) strictly stochastically dominates ϕ_i(P_i, P_{-i}) at P_i, a contradiction; and swap monotonicity follows.

"⇐". We invoke a result from Carroll (2012) that strategyproofness can be shown by verifying that no agent can benefit from swapping two consecutive objects. Let ϕ be a swap monotonic, upper invariant, and lower invariant mechanism, and consider any agent i ∈ N, any preference profile (P_i, P_{-i}) ∈ 𝒫^N, and any misreport P'_i ∈ N_{P_i} from the neighborhood of P_i with P_i: a ≻ b but P'_i: b ≻ a. Observe that ϕ_i(P_i, P_{-i}) stochastically dominates ϕ_i(P'_i, P_{-i}) at P_i: By upper and lower invariance, i's assignment for all objects remains constant under the misreport, except possibly its assignment for a and b; and, by swap monotonicity, i's assignments for a and b can only decrease and increase, respectively.
Theorem 1 illustrates why strategyproofness is such a restrictive requirement: If an agent swaps two consecutive objects (e.g., from P_i: a ≻ b to P'_i: b ≻ a), the only way in which a strategyproof mechanism can react is by increasing that agent's assignment for b and decreasing its assignment for a by the same amount.
In his seminal paper, Gibbard (1977) gave a similar decomposition of strategyproofness for random voting rules. He showed that any rule is strategyproof if and only if it is localized and non-perverse. Of course, all decompositions of strategyproofness are (by definition) equivalent. The appeal of our Theorem 1 lies in the choice of axioms, which are simple and easily motivated. They make our decomposition particularly useful, e.g., when verifying strategyproofness of new mechanisms, or when encoding strategyproofness as constraints under the automated mechanism design paradigm (Sandholm, 2003).
Remark 2. While we focus on strict preferences in the present paper, Theorem 1 can be extended to the case of weak preferences; see (Mennle and Seuken, 2017a).
Remark 3. To obtain strategyproofness in Theorem 1, we have used a local sufficiency result for strategyproofness by Carroll (2012). Cho (2012) proved an analogous local sufficiency result for lexicographic strategyproofness. In (Mennle and Seuken, 2017c), we have proven a local sufficiency result for partial strategyproofness that unifies these two prior local sufficiency results.

Partial Strategyproofness
Theorem 1 serves as another reminder that strategyproofness is a restrictive requirement, and that it may be worthwhile to think about the incentive properties of non-strategyproof mechanisms. In this section, we define partial strategyproofness, a new, relaxed notion of strategyproofness, and we show how this concept arises by dropping lower invariance from the decomposition of strategyproofness in Theorem 1.

URBI Domain Restriction and Partial Strategyproofness
Recall the motivating example from the Introduction, where agent 1 contemplates a misreport under the Probabilistic Serial mechanism. There, r = 3/4 is the pivotal degree of indifference that determines whether the misreport is beneficial to the agent or not. The following domain restriction generalizes this intuition of agents who differentiate sufficiently between different objects.
Definition 2 (URBI). A utility function u_i satisfies uniformly relatively bounded indifference with respect to bound r ∈ [0, 1] (URBI(r)) if, for all objects a, b ∈ M with u_i(a) > u_i(b), we have

    r · (u_i(a) − min_{j∈M} u_i(j)) ≥ u_i(b) − min_{j∈M} u_i(j).    (4)

If min_{j∈M} u_i(j) = 0 (i.e., the last choice yields utility 0), then inequality (4) simplifies to r · u_i(a) ≥ u_i(b). In words, agent i must value b at least a factor r less than a whenever i prefers a to b.

[Figure 1: Geometric interpretation of uniformly relatively bounded indifference.]

For a geometric interpretation, consider Figure 1: The condition means that i's utility function (e.g., the point labeled u_i) cannot be arbitrarily close to the indifference hyperplane H(P_i, P'_i), but it must lie within the shaded triangular area. r is the slope of the dashed line that bounds this area at the top. Any utility function in U_{P_i} that lies outside the shaded area (e.g., the point labeled ũ_i) violates URBI(r). For convenience, we denote by URBI(r) the set of utility functions that satisfy this constraint.
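Checking URBI(r) membership follows Definition 2 directly. In this small helper (our own naming), note the shift by the worst object's utility, which makes the condition invariant to shifting and positive scaling of the utility function:

```python
def satisfies_urbi(u, r):
    """URBI(r) per Definition 2: after shifting so the worst object has
    utility 0, r * u(a) >= u(b) must hold whenever u(a) > u(b)."""
    lo = min(u.values())
    v = {j: uj - lo for j, uj in u.items()}
    return all(r * v[a] >= v[b] for a in v for b in v if v[a] > v[b])
```

For example, the boundary utilities (1, 3/4, 0) from the motivating example satisfy URBI(3/4) but not URBI(0.7), and shifting all utilities by a constant does not change the verdict.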
Remark 4. To gain some intuition about the size of the set URBI(r), consider a setting with m = 3 objects. Suppose that min_{j∈M} u_i(j) = 0 and that the utilities for the first and second choice are determined by drawing a point uniformly at random from (0, 1)² ∖ H(P_i, P'_i) (i.e., from the open unit square excluding the indifference hyperplane). Then the share of utilities that satisfy URBI(r) is exactly r. For example, if r = 3/4, the probability of drawing a utility function that satisfies URBI(3/4) is 3/4.
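The share claimed in Remark 4 is easy to confirm by Monte Carlo simulation. This is our own sketch; the estimate should land near r up to sampling error:

```python
import random

def urbi_share(r, trials=200_000, seed=0):
    """Estimate the share of utility draws from the open unit square
    (worst object fixed at 0, m = 3) that satisfy URBI(r)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        hi, lo = max(x, y), min(x, y)   # first- and second-choice utilities
        if r * hi >= lo:                # the only binding URBI constraint
            hits += 1
    return hits / trials
```

With 200,000 draws, the estimate at r = 3/4 comes out within about a percentage point of 0.75.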
We are now ready to define our new relaxed incentive concept.
Definition 3 (Partial Strategyproofness). Given a setting (N, M, q) and a bound r ∈ [0, 1], a mechanism ϕ is r-partially strategyproof (in the setting (N, M, q)) if, for all agents i ∈ N, all preference profiles (P_i, P_{-i}) ∈ 𝒫^N, all misreports P'_i ∈ 𝒫, and all utility functions u_i ∈ U_{P_i} ∩ URBI(r), we have

    E_{u_i}[ϕ_i(P_i, P_{-i})] ≥ E_{u_i}[ϕ_i(P'_i, P_{-i})].    (5)

We say that ϕ is partially strategyproof (in (N, M, q)) if it is r-partially strategyproof in (N, M, q) for some positive bound r > 0.
In words, an r-partially strategyproof mechanism makes truthful reporting a dominant strategy, but only for agents whose utility functions satisfy URBI(r). Since all utility functions satisfy URBI(1), 1-partial strategyproofness is equivalent to full strategyproofness, and since the set URBI(r) is smaller for smaller values of r, the r-partial strategyproofness requirement is less demanding for smaller r.
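Definition 3 quantifies over a continuum of utility functions, but for a fixed (truthful, misreport) pair of assignment vectors the utility comparison is linear in u_i, so it suffices to evaluate it at extreme points of the normalized URBI(r) utilities. With the worst object at 0 and the best at 1, these extreme points take the geometric form (1, r, r², ..., r^{K−1}, 0, ..., 0); this reduction is our own sketch, not a statement from the paper. Reassuringly, at r = 1 the test reduces to the stochastic dominance comparison of inequality (3), and as r → 0 it approaches a lexicographic comparison, matching the two limits discussed above.

```python
def urbi_vertex_test(truth, lie, pref, r):
    """Sketch (our own reformulation): no utility in URBI(r) strictly
    prefers `lie` to `truth` iff the gain is non-positive at every
    extreme utility (1, r, r**2, ..., 0, ..., 0) along pref."""
    m = len(pref)
    for K in range(1, m + 1):
        gain = sum(r ** k * (lie[pref[k]] - truth[pref[k]])
                   for k in range(K))
        if gain > 1e-12:
            return False
    return True
```

On the motivating example from the Introduction, the test passes at r = 3/4 and fails at r = 0.8, consistent with the degree of strategyproofness of exactly 3/4 reported for the 3×3 setting.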

Decomposition of Partial Strategyproofness
For our second main result, we prove that partial strategyproofness can be decomposed into the axioms swap monotonicity and upper invariance.
Theorem 2. Given a setting (N, M, q), a mechanism ϕ is partially strategyproof (i.e., r-partially strategyproof for some r > 0) if and only if it is swap monotonic and upper invariant.
Proof. We use the following lemma, which is proven in Appendix A.
Lemma 1. Given a setting (N, M, q) and a mechanism ϕ, the following are equivalent:
a. For all agents i ∈ N, all preference profiles (P_i, P_{−i}) ∈ P^N, and all misreports P'_i ∈ P with ϕ_i(P_i, P_{−i}) ≠ ϕ_i(P'_i, P_{−i}), there exists an object a ∈ M such that ϕ_{i,a}(P_i, P_{−i}) > ϕ_{i,a}(P'_i, P_{−i}) and ϕ_{i,j}(P_i, P_{−i}) = ϕ_{i,j}(P'_i, P_{−i}) for all j ∈ U(a, P_i).
b. ϕ is swap monotonic and upper invariant.
Lemma 1 establishes a connection between the behavior of mechanisms under arbitrary misreports (i.e., statement a) and local properties (i.e., swap monotonicity and upper invariance in statement b). It replaces the local sufficiency result of Carroll (2012), which we used to prove Theorem 1.
Towards Theorem 2, we must show that partial strategyproofness and statement a of Lemma 1 are equivalent. For the fixed setting (N, M, q), let δ be the smallest non-zero variation in the assignment resulting from any change of report by any agent; formally,

    δ = min { |ϕ_{i,j}(P_i, P_{−i}) − ϕ_{i,j}(P'_i, P_{−i})| : i ∈ N, (P_i, P_{−i}) ∈ P^N, P'_i ∈ P, j ∈ M, ϕ_{i,j}(P_i, P_{−i}) ≠ ϕ_{i,j}(P'_i, P_{−i}) }.

δ is strictly positive for any non-constant ϕ, and well-defined because N, M, and P are finite.
Necessity (partial strategyproofness ⇐ a). For any agent i ∈ N and any preference profile (P_i, P_{−i}) ∈ P^N, suppose that i considers some misreport P'_i ∈ P. If ϕ_i(P_i, P_{−i}) ≠ ϕ_i(P'_i, P_{−i}), then a implies that there exists some object a for which i's assignment strictly decreases, and i's assignment for all more-preferred objects j ∈ U(a, P_i) remains unchanged. Since i's assignment for a changes, it must decrease by at least δ. Let b be the object that i ranks directly below a in P_i. Then i's gain in expected utility from misreporting is greatest if, first, i receives a with probability δ and its last choice with probability (1 − δ) when being truthful, and second, i receives b with certainty when misreporting. The gain is thus bounded from above by

    u_i(b) − (δ · u_i(a) + (1 − δ) · min_{j∈M} u_i(j)),    (7)

which is weakly negative if

    u_i(b) − min_{j∈M} u_i(j) ≤ δ · (u_i(a) − min_{j∈M} u_i(j)).    (8)

Inequality (8) holds for all utility functions in U_{P_i} ∩ URBI(δ). Thus, if i's utility function satisfies URBI(δ), then no misreport increases i's expected utility; equivalently, ϕ is δ-partially strategyproof.

Sufficiency (partial strategyproofness ⇒ a). Let ϕ be r-partially strategyproof for some r > 0, and assume towards contradiction that ϕ violates a. This is equivalent to saying that there exist an agent i ∈ N, a preference profile (P_i, P_{−i}) ∈ P^N, a misreport P'_i ∈ P with ϕ_i(P_i, P_{−i}) ≠ ϕ_i(P'_i, P_{−i}), and an object a ∈ M, such that ϕ_{i,a}(P_i, P_{−i}) < ϕ_{i,a}(P'_i, P_{−i}) but ϕ_{i,j}(P_i, P_{−i}) = ϕ_{i,j}(P'_i, P_{−i}) for all j ∈ U(a, P_i). Again, let b be the object that i ranks directly below a in P_i. Since i's assignment for a increases, it must increase by at least δ. Thus, i's gain in expected utility is smallest if, first, i receives b with certainty when being truthful, and second, i receives a with probability δ and its last choice with probability (1 − δ) when misreporting. This makes

    δ · u_i(a) + (1 − δ) · min_{j∈M} u_i(j) − u_i(b)

a lower bound on i's gain from misreporting. This bound is strictly positive if

    u_i(b) − min_{j∈M} u_i(j) < δ · (u_i(a) − min_{j∈M} u_i(j)),

which holds for all utility functions in U_{P_i} ∩ URBI(r) with r < δ.
Therefore, ϕ cannot be r-partially strategyproof for any r > 0, a contradiction.
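The necessity argument can be illustrated numerically: for any utility function in URBI(δ) with last-choice utility 0, the upper bound (7) on the misreporting gain is nonpositive. A sketch (δ and the sampling scheme are illustrative):

```python
import random

random.seed(1)
delta = 0.3
gaps = []
for _ in range(10_000):
    # sample a utility function with min utility 0 satisfying URBI(delta)
    ua = random.random() + 0.01           # utility for object a
    ub = random.uniform(0.0, delta * ua)  # URBI(delta): ub <= delta * ua
    # upper bound (7) on the misreporting gain, with last-choice utility 0:
    gaps.append(ub - (delta * ua + (1 - delta) * 0.0))
print(max(gaps) <= 1e-12)  # True: the bound is never positive
```

The check is deliberately simple; it only confirms that the algebra of the necessity step goes through for sampled URBI(δ) utilities.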
Theorem 2 provides an axiomatic motivation for the definition of partial strategyproofness: The class of partially strategyproof mechanisms consists exactly of those mechanisms that are swap monotonic and upper invariant, but may violate lower invariance. The equivalence also allows the formulation of straightforward and honest strategic advice that can be given to agents who participate in swap monotonic and upper invariant mechanisms: They are best off reporting their preferences truthfully as long as their preference intensities for any two different objects are sufficiently different. 16 Remark 5. In light of the interpretation of upper invariance as robustness to manipulation by truncation (Hashimoto et al., 2014) and the fact that, for deterministic mechanisms, swap monotonicity is equivalent to strategyproofness (see Proposition 6 in Section 11 of the Online Appendix), dropping lower invariance suggests itself as a sensible approach to relaxing strategyproofness. Alternatively, one may consider mechanisms that violate a different axiom instead (i.e., swap monotonicity or upper invariance). However, neither of these approaches admits the construction of interesting ordinally efficient mechanisms. 17

Maximality of the URBI Domain Restriction
In this section, we study how well the URBI(r) domain restriction captures the incentive properties of non-strategyproof assignment mechanisms. By definition, r-partial strategyproofness implies that the set URBI(r) must be contained in the set of utility functions for which truthful reporting is a dominant strategy. However, the two sets may not be exactly equal, as the following example shows.

16 As we show in Section 6.2, this advice can be refined further: An agent who is close to being indifferent between some objects may potentially gain from misreporting, but this gain is bounded.
17 There exist mechanisms that are swap monotonic (SM), upper invariant (UI), ordinally efficient (OE), anonymous (AN), neutral (NE), and non-bossy (NB) (e.g., Probabilistic Serial); however, as we have shown in (Mennle and Seuken, 2017e), no mechanism is SM, LI, OE, AN, NE, and NB. Moreover, no mechanism is UI, LI, AN, and OE. Thus, dropping either UI or SM (instead of LI) from strategyproofness does not enable the design of new and appealing ordinally efficient mechanisms.
Example 2. Consider a setting with four agents and four objects a, b, c, d in unit capacity. In this setting, the adaptive Boston mechanism (see Section 8.4) with priorities determined by a single, uniform lottery is 1/3-partially strategyproof but not r-partially strategyproof for any r > 1/3. However, it is a simple (though tedious) exercise to verify that an agent with utilities u_i(a) = 6, u_i(b) = 2, u_i(c) = 1, u_i(d) = 0 cannot benefit from misreporting, independent of the reports of the other agents. But u_i violates URBI(1/3), since

    u_i(c) − min_{j∈M} u_i(j) = 1 > 2/3 = (1/3) · (u_i(b) − min_{j∈M} u_i(j)).

Example 2 shows that for some specific r-partially strategyproof mechanism there may exist utility functions that violate URBI(r) but for which truthful reporting is nonetheless a dominant strategy. This raises the question whether URBI(r) is too small in the sense that it excludes some utility functions for which all r-partially strategyproof mechanisms make truthful reporting a dominant strategy. Our next proposition, which shows maximality of the URBI(r) domain restriction, dispels this concern.
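The URBI(1/3) violation in Example 2 can be verified mechanically; the sketch below (helper name is mine) checks every ordered pair of utilities:

```python
def satisfies_urbi(u, r):
    # URBI(r): for all pairs with u[a] > u[b],
    # (u[b] - min(u)) <= r * (u[a] - min(u))
    lo = min(u)
    return all((ub - lo) <= r * (ua - lo) + 1e-12
               for ua in u for ub in u if ua > ub)

u = [6, 2, 1, 0]                    # utilities for a, b, c, d from Example 2
assert not satisfies_urbi(u, 1/3)   # violated by the pair (b, c): 1 > (1/3)*2
assert satisfies_urbi(u, 1/2)       # the same utilities do satisfy URBI(1/2)
```

Note that the pair (a, b) is borderline (2 = (1/3)·6), but the pair (b, c) strictly violates the bound, which is what the inequality in Example 2 records.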
Proposition 1. For all settings (N, M, q) with m ≥ 3 objects, all bounds r ∈ (0, 1), all preference orders P̃_i ∈ P, and all utility functions ũ_i ∈ U_{P̃_i} that violate URBI(r), there exists a mechanism ϕ̃ such that
1. ϕ̃ is r-partially strategyproof,
2. E_{ϕ̃_i(P̃_i, P_{−i})}[ũ_i] < E_{ϕ̃_i(P'_i, P_{−i})}[ũ_i] for some misreport P'_i ∈ P and all P_{−i} ∈ P^{N∖{i}}.
Furthermore, ϕ̃ can be chosen to satisfy anonymity.
Proof sketch. Since ũ_i violates URBI(r), there exist objects a, b ∈ M with ũ_i(a) > ũ_i(b) and ũ_i(b) − min_{j∈M} ũ_i(j) > r · (ũ_i(a) − min_{j∈M} ũ_i(j)). Observe that b cannot be i's last choice because 0 / (ũ_i(a) − min_{j∈M} ũ_i(j)) ≤ r is trivially satisfied. Define the mechanism ϕ̃ by setting the assignment for the distinguished agent i as follows: Fix parameters δ_a, δ_b ∈ [0, 1/m]; then, for any P_{−i} ∈ P^{N∖{i}}, any preference order P̃_i ∈ P, and any object j ∈ M, specify ϕ̃_{i,j}(P̃_i, P_{−i}) in terms of δ_a, δ_b, and the last choice d under P̃_i. For all other agents, distribute the remaining probabilities evenly. With parameters δ_a, δ_b ∈ [0, 1/m], this mechanism is well defined. Next, choose δ_b = 1/m and δ_a from the interval [r/m, r̃/m), where r̃ = (ũ_i(b) − min_{j∈M} ũ_i(j)) / (ũ_i(a) − min_{j∈M} ũ_i(j)) > r. It is a straightforward, albeit tedious, exercise to verify that ϕ̃ is r-partially strategyproof but manipulable for agent i with utility function ũ_i (Lemma 2 in Appendix B). To get an anonymous mechanism with the same properties, randomly assign each agent to the role of agent i.
In words, Proposition 1 means that for any utility function that violates URBI(r), there exists some r-partially strategyproof mechanism under which truthful reporting is not a dominant strategy for an agent with that utility function. Thus, unless we are given additional structural information about the mechanism (besides the fact that it is r-partially strategyproof and possibly anonymous), URBI(r) is in fact the largest set of utility functions for which truthful reporting is guaranteed to be a dominant strategy.

The Degree of Strategyproofness
The partial strategyproofness concept induces a new, intuitive measure for the incentive properties of swap monotonic and upper invariant mechanisms.
Definition 4 (Degree of Strategyproofness). Given a setting (N, M, q) and a mechanism ϕ, the degree of strategyproofness of ϕ (in the setting (N, M, q)) is defined as

    ρ_{(N,M,q)}(ϕ) = max { r ∈ [0, 1] | ϕ is r-partially strategyproof in (N, M, q) }. 18

Observe that for 0 < r' < r ≤ 1 we have URBI(r') ⊂ URBI(r) by construction. Thus, a higher degree of strategyproofness corresponds to a stronger guarantee. By maximality of the URBI(r) domain restriction (Proposition 1), the degree of strategyproofness constitutes a meaningful measure of incentive properties: If the only known attributes of ϕ are that it is swap monotonic and upper invariant, then no single-parameter measure conveys strictly more information about the incentive properties of ϕ.
The degree of strategyproofness also allows for the comparison of two mechanisms: If ρ_{(N,M,q)}(ϕ) > ρ_{(N,M,q)}(ψ), then ϕ makes truthful reporting a dominant strategy on a strictly larger URBI(r) set of utility functions than ψ does. From a quantitative perspective, one might ask for how many more utility functions ϕ is guaranteed to have good incentives, compared to ψ. To answer this question, recall Remark 4, where we considered utility functions in a setting with 3 objects, min_{j∈M} u_i(j) = 0, and where the utilities for the first and second choice were chosen uniformly at random from the unit square. Suppose that ρ_{(N,M,q)}(ϕ) = 4/5 and ρ_{(N,M,q)}(ψ) = 2/5. Then the set URBI(4/5) has twice the size of URBI(2/5). Thus, the guarantee for ϕ extends over twice as many utility functions as the guarantee for ψ.
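Under the sampling model of Remark 4, this quantitative comparison can be checked by simulation; the sketch below (helper name is mine) estimates the shares for r = 4/5 and r = 2/5 and their ratio:

```python
import random

def urbi_share(r, trials=200_000, seed=0):
    """Monte Carlo share of URBI(r) in the Remark-4 model: m = 3 objects,
    last-choice utility 0, top-two utilities uniform on the unit square."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        u1, u2 = max(x, y), min(x, y)  # first- and second-choice utility
        hits += (u2 <= r * u1)         # binding URBI pair when min = 0
    return hits / trials

ratio = urbi_share(4/5) / urbi_share(2/5)
print(ratio)  # ≈ 2: URBI(4/5) covers twice as many utilities as URBI(2/5)
```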
A different approach to comparing mechanisms by their vulnerability to manipulation was proposed by Pathak and Sönmez (2013), and an extension of their concept to the cardinal case is straightforward: We say that ψ is strongly as manipulable as ϕ if, whenever an agent with utility function u i finds a beneficial misreport under ϕ, the same agent in the same situation also finds a beneficial misreport under ψ. Interestingly, this comparison is consistent with the comparison of mechanisms by their degrees of strategyproofness in the sense that the two comparisons never yield contradictory results. 19 However, despite this consistency, neither concept is always better at strictly differentiating between two mechanisms. A comparison by vulnerability to manipulation may be inconclusive when the degree of strategyproofness yields a strict winner. Conversely, the degrees of strategyproofness may be equal, even though one of the mechanisms is in fact more strongly manipulable than the other.
An important difference is that the comparison by vulnerability to manipulation considers each preference profile separately, while the partial strategyproofness constraints must hold uniformly for all preference profiles. Vulnerability to manipulation thus yields a best response notion of incentives, while the degree of strategyproofness yields a dominant strategy notion of incentives. However, the degree of strategyproofness has two important advantages. First, it is computable (see Section 7), whereas Pathak and Sönmez (2013) do not present a method to perform their comparison algorithmically, and the definition of such a method is not trivial. Second, and more importantly, the degree of strategyproofness allows a parametric comparison, whereas the comparison by vulnerability to manipulation is binary (i.e., it does not indicate how much more robust one mechanism is to strategic manipulation than another). Moreover, it is simple for mechanism designers to express a minimal acceptable degree of strategyproofness and identify mechanisms within this restricted class that have other desirable properties (and this search can be performed algorithmically).

Partial Strategyproofness and Other Concepts
In this section, we show that partial strategyproofness takes an intermediate position between strategyproofness and other incentive concepts that have been discussed previously in the context of the assignment problem or more broadly in domains with no monetary transfers. Figure 2 illustrates these relations.

Relation to Weak SD-Strategyproofness
Weak SD-strategyproofness requires that no agent can obtain, by misreporting, an assignment that it strictly prefers independent of its underlying utility function.

Definition 5 (Weak SD-Strategyproofness). A mechanism ϕ is weakly SD-strategyproof
if, for all agents i ∈ N, all preference profiles (P_i, P_{−i}) ∈ P^N, and all misreports P'_i ∈ P, ϕ_i(P_i, P_{−i}) is not strictly stochastically dominated by ϕ_i(P'_i, P_{−i}) at P_i.

Bogomolnaia and Moulin (2001) employed weak SD-strategyproofness to describe the incentive properties of the Probabilistic Serial mechanism. The next proposition establishes its relation with partial strategyproofness.
Proposition 2. Given a setting (N, M, q), if a mechanism ϕ is partially strategyproof (i.e., r-partially strategyproof for some r > 0), then it is weakly SD-strategyproof. The converse may not hold.
Proof. Observe that r-partial strategyproofness implies (but is not implied by) convex strategyproofness, 20 which implies weak SD-strategyproofness. Balbuzanov (2015) gave an example showing that weak SD-strategyproofness does not imply convex strategyproofness; thus, it does not imply partial strategyproofness.
Proposition 2 is interesting because, as we show in Section 8.2, the Probabilistic Serial mechanism is actually partially strategyproof. Thus, partial strategyproofness is a strictly more informative statement about the incentive properties of this mechanism than either weak SD-strategyproofness or convex strategyproofness.
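A stochastic-dominance membership check of the kind Definition 5 relies on can be sketched as follows (helper names are illustrative):

```python
from itertools import accumulate

def strictly_sd(x, y, eps=1e-12):
    """True if assignment vector x strictly stochastically dominates y.
    Both vectors are ordered from the agent's first to last choice."""
    cx, cy = list(accumulate(x)), list(accumulate(y))
    weak = all(a >= b - eps for a, b in zip(cx, cy))
    strict = any(a > b + eps for a, b in zip(cx, cy))
    return weak and strict

# Weak SD-strategyproofness forbids the misreport assignment from strictly
# dominating the truthful one; for example:
truthful  = [0.5, 0.0, 0.25, 0.25]
misreport = [0.5, 0.25, 0.25, 0.0]
print(strictly_sd(misreport, truthful))  # True: such a change would violate it
```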

Relation to Approximate Strategyproofness
The impossibility results pertaining to strategyproofness have motivated the introduction of various notions of approximate strategyproofness. The general idea is that non-strategyproof mechanisms should at least be close to strategyproof in a quantifiable sense. In domains with money, this idea translates into the requirement that the agents' gain (measured in units of money) from misreporting should be bounded. 21 However, utilities in the assignment domain are not comparable across agents, which makes the definition of meaningful notions of approximate strategyproofness more challenging. Earlier work has overcome this difficulty by bounding the utility functions, e.g., between 0 and 1 (Birrell and Pass, 2011; Carroll, 2013; Lee, 2015). 22

Definition 6 (Approximate Strategyproofness). Given a setting (N, M, q) and a bound ε ∈ [0, 1], a mechanism ϕ is ε-approximately strategyproof if, for all agents i ∈ N, all preference profiles (P_i, P_{−i}) ∈ P^N, all misreports P'_i ∈ P, and all utility functions u_i ∈ U_{P_i} that are bounded between 0 and 1, we have

    E_{ϕ_i(P_i, P_{−i})}[u_i] ≥ E_{ϕ_i(P'_i, P_{−i})}[u_i] − ε.

Obviously, ε-approximate strategyproofness implies ε'-approximate strategyproofness for any ε' > ε; 0-approximate strategyproofness is equivalent to strategyproofness; and 1-approximate strategyproofness is a void constraint. In this sense, approximate strategyproofness yields a different parametric relaxation of strategyproofness, where the constraint is stronger when the bound ε is smaller. The next proposition shows the relation of approximate strategyproofness with partial strategyproofness.

20 A mechanism is convex strategyproof if it makes truthful reporting a dominant strategy for at least one utility function (Balbuzanov, 2015).
21 Lubin and Parkes (2012) provided a survey of applications in quasi-linear domains.
22 Liu and Pycia (2013) defined the notion of ε-strategyproofness differently by imposing a bound on the violations of the stochastic dominance constraints. Proposition 8 in Section 14 of the Online Appendix shows that their notion and our notion are in fact equivalent.
Proposition 3. Given a setting (N, M, q), if a mechanism ϕ is r-partially strategyproof for some r > 0, then it is ε-approximately strategyproof for some ε < 1. The converse may not hold. Moreover, for any ε > 0, there exists an r < 1 such that any r-partially strategyproof mechanism is ε-approximately strategyproof.
We give the proof in Appendix C.
Recall that r-partially strategyproof mechanisms are guaranteed to make truthful reporting a dominant strategy for any agent whose utility function satisfies URBI(r), which enables us to give honest and useful strategic advice: It is in the agent's best interest to report truthfully if the agent's utilities for any two objects are sufficiently different. Proposition 3 now allows a refinement of this advice. Since partial strategyproofness implies approximate strategyproofness, any agent's potential gain from misreporting is also quantitatively bounded, even if the agent's utility function does not satisfy URBI(r); and this quantitative bound is tighter the closer r is to 1.

Relation to Strategyproofness in the Large

Azevedo and Budish (2015) proposed strategyproofness in the large (SP-L) as an alternative incentive concept when strategyproofness is incompatible with other essential design objectives. This concept captures the intuition that the ability of any single agent to benefit from misreporting may vanish when more agents participate. For example, in school choice, where thousands of students compete for seats at a relatively small number of schools, this requirement may facilitate interesting design alternatives. Azevedo and Budish (2015) defined strategyproofness in the large with respect to a finite set of utility functions {u^1, …, u^K}. Loosely speaking, a mechanism satisfies SP-L if, for any ε > 0, there exists a number n_0 of agents such that, in any setting with at least n_0 agents, no agent with a utility function from {u^1, …, u^K} can improve its expected utility by more than ε through misreporting. 23 To connect this concept with partial strategyproofness, we need to specify in what sense assignment problems get large. Here, we follow Kojima and Manea (2010): We keep the number of objects constant, let the number of agents grow, and increase the objects' capacities such that the total capacity is not lower than the number of agents.
Proposition 4. Fix any finite set of utility functions {u^1, …, u^K} and consider any sequence of settings (N^n, M^n, q^n)_{n≥1} with #N^n = n, M^n = M, Σ_{j∈M} q^n_j ≥ n, and min_{j∈M} q^n_j → ∞ as n → ∞. If the degree of strategyproofness of ϕ converges to 1 as n → ∞, then ϕ is strategyproof in the large (with respect to {u^1, …, u^K}).
Proof. Any utility function u_i ∈ U_{P_i} satisfies URBI(r) for some (sufficiently large) r < 1. Thus, we can find r̄ such that u^k ∈ URBI(r̄) for all k ∈ {1, …, K}. Since the degree of strategyproofness converges to 1 as n grows, ϕ is r̄-partially strategyproof in any setting with sufficiently many agents. Thus, the gain from misreporting vanishes for large n.
Proposition 4 shows that convergence of r to 1 in large settings implies SP-L. In Section 8.2, we explain how Proposition 4 might be applied to obtain an elegant, parametric proof of strategyproofness in the large of the Probabilistic Serial mechanism.

Relation to Lexicographic Strategyproofness
Finally, we establish the relation of partial strategyproofness with strategyproofness for agents with lexicographic preferences (Cho, 2012). An agent with lexicographic preferences prefers any (arbitrarily small) increase in the probability for some more preferred object to any (arbitrarily large) increase in the probability for some less preferred object.
Definition 7 (Lexicographic Dominance). For a preference order P_i ∈ P and assignment vectors x_i, y_i, we say that x_i lexicographically dominates y_i at P_i if either x_i = y_i, or x_{i,a} > y_{i,a} for some a ∈ M and x_{i,j} = y_{i,j} for all j ∈ U(a, P_i).
Lexicographic dominance (LD) induces LD-strategyproofness in the same way in which stochastic dominance induces SD-strategyproofness.
Definition 8 (LD-Strategyproofness). A mechanism ϕ is LD-strategyproof if, for all agents i ∈ N, all preference profiles (P_i, P_{−i}) ∈ P^N, and all misreports P'_i ∈ P, ϕ_i(P_i, P_{−i}) lexicographically dominates ϕ_i(P'_i, P_{−i}) at P_i.
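The lexicographic dominance test of Definition 7 reduces to comparing assignment vectors at the first rank where they differ; a minimal sketch (helper name is mine):

```python
def lex_dominates(x, y, eps=1e-12):
    """True if x lexicographically dominates y; both vectors are ordered
    from the agent's first to last choice."""
    for xa, ya in zip(x, y):
        if xa > ya + eps:
            return True   # more probability at the first differing rank
        if xa < ya - eps:
            return False
    return True           # x == y

assert lex_dominates([0.5, 0.3, 0.2], [0.5, 0.2, 0.3])
assert not lex_dominates([0.5, 0.2, 0.3], [0.5, 0.3, 0.2])
```

The upper-contour condition in Definition 7 is equivalent to requiring that the first rank at which the two vectors differ favors x, which is exactly what the loop checks.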
Theorem 3 shows the relation of partial strategyproofness and LD-strategyproofness.
Theorem 3. Given a setting (N, M, q), a mechanism ϕ is partially strategyproof (i.e., r-partially strategyproof for some r > 0) if and only if ϕ is LD-strategyproof.
Proof. This follows from Lemma 1 in the proof of Theorem 2.
Theorem 3 has two interesting consequences: First, by describing a mechanism as being LD-strategyproof, one ignores the parametric nature of the set of utility functions for which this mechanism is guaranteed to make truthful reporting a dominant strategy. The partial strategyproofness concept thus provides a more refined understanding of the incentive properties of non-strategyproof mechanisms.
Second, Theorem 3 shows that LD-strategyproofness is the lower bounding concept for partial strategyproofness, symmetric to the sense in which strategyproofness is the upper bounding concept. To see this, let SP(N, M, q), r-PSP(N, M, q), and LD-SP(N, M, q) denote the sets of mechanisms that are strategyproof, r-partially strategyproof, and LD-strategyproof, respectively, in the setting (N, M, q). Observe that

    SP(N, M, q) = ⋂_{r<1} r-PSP(N, M, q).

In words, any mechanism that is r-partially strategyproof for all r < 1 must be strategyproof. Theorem 3 implies the complementary statement that

    LD-SP(N, M, q) = ⋃_{r>0} r-PSP(N, M, q).

In words, any mechanism that is LD-strategyproof must be r-partially strategyproof for some strictly positive r > 0. Thus, the degree of strategyproofness ρ_{(N,M,q)} parametrizes the space of incentive concepts between strategyproofness on the one hand and the weaker LD-strategyproofness on the other.

Discounted Dominance and Partial Strategyproofness
Recall that strategyproofness (defined in terms of expected utility) is equivalent to SD-strategyproofness. In this section, we present r-discounted dominance, a new dominance notion that generalizes stochastic dominance but includes r as a discount factor. We then provide an alternative definition of partial strategyproofness by showing that it is equivalent to the incentive concept induced by r-discounted dominance.
Definition 9 (Discounted Dominance). For a bound r ∈ [0, 1], a preference order P_i ∈ P with P_i : j_1 ≻ … ≻ j_m, and assignment vectors x_i, y_i, we say that x_i r-discounted dominates y_i at P_i if, for all ranks K ∈ {1, …, m}, we have

    Σ_{k=1}^{K} r^k · x_{i,j_k} ≥ Σ_{k=1}^{K} r^k · y_{i,j_k}.    (17)

Observe that for r = 1, this is precisely stochastic dominance. However, for r < 1, the difference in the agent's assignment for the k-th choice is discounted by the factor r^k. Analogous to stochastic dominance for SD-strategyproofness, we can use r-discounted dominance (r-DD) to define the corresponding incentive concept.
Definition 10 (r-DD-Strategyproofness). Given a setting (N, M, q) and a bound r ∈ (0, 1], a mechanism ϕ is r-DD-strategyproof if, for all agents i ∈ N, all preference profiles (P_i, P_{−i}) ∈ P^N, and all misreports P'_i ∈ P, ϕ_i(P_i, P_{−i}) r-discounted dominates ϕ_i(P'_i, P_{−i}) at P_i. The next theorem yields the equivalence to r-partial strategyproofness.
Theorem 4. Given a setting (N, M, q) and a bound r ∈ [0, 1], a mechanism ϕ is r-partially strategyproof if and only if it is r-DD-strategyproof.
We give the proof in Appendix D. Theorem 4 generalizes the equivalence between strategyproofness and SD-strategyproofness. Moreover, it yields an alternative definition of r-partial strategyproofness in terms of discounted dominance. This shows that the partial strategyproofness concept integrates nicely into the landscape of existing incentive concepts, many of which are defined using dominance notions (e.g., SD-, weak SD-, LD-, and ST-strategyproofness 24).
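The prefix-sum condition of Definition 9 is straightforward to check directly; the sketch below (helper name is mine) also illustrates how the dominance relation can hold for a small discount bound r but fail for a larger one:

```python
def r_dd(x, y, r, eps=1e-12):
    """True if x r-discounted dominates y; vectors are ordered from the
    agent's first to last choice:
    sum_{k<=K} r**k * x[k] >= sum_{k<=K} r**k * y[k] for every prefix K."""
    sx = sy = 0.0
    for k, (xa, ya) in enumerate(zip(x, y), start=1):
        sx += r**k * xa
        sy += r**k * ya
        if sx < sy - eps:
            return False
    return True

x = [0.5, 0.2, 0.3]
y = [0.4, 0.4, 0.2]
assert r_dd(x, y, 0.5)      # x dominates y for r <= 1/2 ...
assert not r_dd(x, y, 0.6)  # ... but not for larger discount bounds
assert r_dd(x, x, 1.0)      # r = 1 reduces to (weak) stochastic dominance
```

For this pair, the binding prefix is K = 2, where the condition reduces to 0.1·r ≥ 0.2·r², i.e., r ≤ 1/2.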

[Table 1: Overview of the mechanisms discussed in Section 8, by source of randomness, the axioms swap monotonicity (SM), upper invariance (UI), and lower invariance (LI), partial strategyproofness (PSP), strategyproofness (SP), and the section in which each mechanism is treated. The first row covers Random Serial Dictatorship (RSD), whose randomness is intrinsic.]

The dominance interpretation also unlocks the partial strategyproofness concept to algorithmic analysis. Recall that the definition of r-partial strategyproofness imposes inequalities that have to hold for all utility functions within the set URBI(r). This set is infinite, which makes algorithmic verification of r-partial strategyproofness via its original definition infeasible. However, by the equivalence from Theorem 4, it suffices to verify that all (finitely many) constraints for r-discounted dominance are satisfied (i.e., the inequalities (17) from Definition 9). These inequalities can also be used to encode r-partial strategyproofness as linear constraints in an optimization problem. This enables an automated search in the set of r-partially strategyproof mechanisms while optimizing for some other design objective under the automated mechanism design paradigm (Sandholm, 2003).
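Building on the finitely many r-discounted dominance constraints, the degree of strategyproofness can be approximated by bisection over r, using the downward closure of r-partial strategyproofness in r. A sketch (the helpers and the single constraint pair are illustrative, not derived from an actual mechanism):

```python
def r_dd(x, y, r, eps=1e-12):
    # prefix-sum condition of r-discounted dominance (Definition 9)
    sx = sy = 0.0
    for k, (xa, ya) in enumerate(zip(x, y), start=1):
        sx += r**k * xa
        sy += r**k * ya
        if sx < sy - eps:
            return False
    return True

def degree_of_sp(pairs, tol=1e-4):
    """Largest r in [0, 1] such that every (truthful, misreport) assignment
    pair satisfies r-discounted dominance.  Bisection is valid because
    r-partial strategyproofness for r implies it for all r' < r."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if all(r_dd(x, y, mid) for x, y in pairs):
            lo = mid
        else:
            hi = mid
    return lo

# Single illustrative constraint: dominance holds exactly up to r = 1/2.
pairs = [([0.5, 0.2, 0.3], [0.4, 0.4, 0.2])]
print(degree_of_sp(pairs))  # ≈ 0.5
```

In a full implementation, `pairs` would range over all agents, preference profiles, and misreports of the mechanism under study, which is exactly the finite constraint system that Theorem 4 makes available.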

Applications
In this section, we apply the new partial strategyproofness concept to analyze the incentive properties of popular and new non-strategyproof mechanisms. Table 1 provides an overview of our results.

Random Serial Dictatorship
Under the Random Serial Dictatorship mechanism (RSD), agents line up in a random order and then take turns picking their most preferred object that is still available. RSD is strategyproof (Abdulkadiroglu and Sönmez, 1998); therefore, it satisfies all three axioms and is 1-partially strategyproof in any setting.

Probabilistic Serial Mechanism
The Probabilistic Serial mechanism (PS), introduced by Bogomolnaia and Moulin (2001), is one of the most well-studied mechanisms for the random assignment problem. Our next proposition shows that PS is partially strategyproof.
Proposition 5. Given a setting (N, M, q), PS is r-partially strategyproof for some r > 0.
Regarding the incentive properties of PS, Bogomolnaia and Moulin (2001) showed that it is weakly SD-strategyproof (but not strategyproof), and Balbuzanov (2015) showed that it is convex strategyproof. Since partial strategyproofness implies both of these properties, Proposition 5 establishes the most demanding description of the incentive properties of PS in finite settings known to date. The fact that PS is partially strategyproof also justifies our choice of this mechanism for the introductory example. Figure 3 shows the degrees of strategyproofness of PS in various settings. 25 Observe that, for a fixed number of objects, growing capacities, and a growing number of agents, these values increase. This suggests the conjecture that the degree of strategyproofness of PS converges to 1 in large settings.
Remark 6. Kojima and Manea (2010) proved that, for an arbitrary, fixed utility function, PS makes truthful reporting a dominant strategy for agents with that utility function if there are sufficiently many copies of each object; and Azevedo and Budish (2015) used this to conclude that PS is SP-L. A proof of our conjecture that the degree of strategyproofness of PS converges to 1 in large settings would strengthen the result of Kojima and Manea (2010); and, in combination with our Proposition 4, it would yield an elegant, parametric proof that PS is SP-L. 26

[Figure 3: Degrees of strategyproofness of PS for m = 3 (left) and m = 4 (right) objects, for varying numbers of agents n, and evenly distributed capacities q_j = n/m.]
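For concreteness, the simultaneous-eating procedure that defines PS can be sketched as follows (a minimal implementation for unit demands with total capacity at least the number of agents; function and variable names are illustrative):

```python
from fractions import Fraction

def probabilistic_serial(prefs, capacity):
    """Simultaneous eating: every agent eats its best remaining object at
    unit speed until time 1.  prefs[i] is agent i's order (best first);
    capacity maps objects to supply.  Returns per-agent probabilities."""
    n = len(prefs)
    remaining = {j: Fraction(cap) for j, cap in capacity.items()}
    eaten = [{j: Fraction(0) for j in capacity} for _ in range(n)]
    clock = Fraction(0)
    while clock < 1:
        # each agent points to its best object with remaining supply
        target = [next(j for j in prefs[i] if remaining[j] > 0)
                  for i in range(n)]
        eaters = {}
        for j in target:
            eaters[j] = eaters.get(j, 0) + 1
        # advance until the next exhaustion (or until time runs out)
        dt = min([remaining[j] / k for j, k in eaters.items()] + [1 - clock])
        for i in range(n):
            eaten[i][target[i]] += dt
            remaining[target[i]] -= dt
        clock += dt
    return eaten

# Preferences from Example 3 below: 1: a>b>c>d, 2: a>c>b>d, 3,4: b>c>a>d.
prefs = [list("abcd"), list("acbd"), list("bcad"), list("bcad")]
result = probabilistic_serial(prefs, {j: 1 for j in "abcd"})
print(dict(result[0]))  # agent 1 eats a until t=1/2, then c, then d
```

On this profile, a and b are exhausted at t = 1/2, c at t = 3/4, so agent 1's assignment is (1/2, 0, 1/4, 1/4) over a, b, c, d.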

Boston Mechanism
Under the Boston mechanism (BM), agents apply to their first choice in the first round, and objects are assigned to applicants according to some priority order. Agents who have not received their first choice in the first round apply to their second choice in the second round, and the process repeats. This mechanism is frequently used to assign students to seats at public schools, but it has been heavily criticized for its susceptibility to manipulation (Abdulkadiroglu and Sönmez, 2003). We consider this mechanism with priorities determined by a single, uniform lottery. Intuitively, it is upper invariant because the object to which an agent applies in the k-th round has no effect on the applications (or assignments) in any previous round (see (Mennle and Seuken, 2017d) for a formal proof of upper invariance). However, BM fails swap monotonicity, as the following example shows, and thus it violates r-partial strategyproofness for any r > 0.
Example 3. Consider a setting with four agents, four objects a, b, c, d in unit capacity, and preferences P_1: a ≻ b ≻ c ≻ d, P_2: a ≻ c ≻ b ≻ d, and P_3, P_4: b ≻ c ≻ a ≻ d. Agent 1's assignment is BM_1(P_1, P_{−1}) = (1/2, 0, 0, 1/2) for the objects a, b, c, d, respectively. If agent 1 swaps b and c in its report, its assignment changes to BM_1(P'_1, P_{−1}) = (1/2, 0, 1/4, 1/4). This violates swap monotonicity as agent 1's assignment for b has not changed, but the overall assignment has. It also violates lower invariance as agent 1's assignment for d has changed, even though d is in the lower contour set of c.
This insight adds an axiomatic argument to the criticism of the manipulability of BM.

Adaptive Boston Mechanism
Some obvious opportunities for manipulation arise under BM: An agent who knows that their second choice will be exhausted in the first round can manipulate by ranking their third choice in the second position. This misreport improves that agent's chance of obtaining their third choice without reducing their chance of obtaining their first or second choice. If instead the agent automatically skipped exhausted objects in the application process, then this manipulation heuristic would no longer be effective. In (Mennle and Seuken, 2017d) we have shown that such an adaptive Boston mechanism (ABM) with priorities determined by a single, uniform lottery is swap monotonic and upper invariant, and thus partially strategyproof. However, ABM is not fully strategyproof.
Interestingly, a comparison of BM and ABM by their vulnerability to manipulation is either not differentiating (for fixed priorities) or inconclusive (for random priorities) (Dur, Mennle and Seuken, 2016). In contrast, partial strategyproofness provides a formal understanding of the nuanced differences between BM and ABM in terms of their incentive properties.
As for PS, we have computed the degree of strategyproofness of ABM for various settings. 27 The results are shown in Figure 4. Observe that ρ_{(N,M,q)}(ABM) is significantly lower than ρ_{(N,M,q)}(PS) (see Figure 3). Furthermore, it does not grow for larger settings but appears to remain constant (i.e., ρ_{(N,M,q)}(ABM) = 1/2 for m = 3 objects and n = 3, 6, 9 agents; and ρ_{(N,M,q)}(ABM) = 1/3 for m = 4 objects and n = 4, 8 agents). Partial strategyproofness thus yields a formal argument that ABM has intermediate incentive properties, which are stronger than those of BM but weaker than those of PS.

Rank Efficient Mechanisms

Rank efficiency often closely reflects the welfare criteria used in practical applications, e.g., in school choice (Mennle and Seuken, 2017d) or in the assignment of teachers to schools (Featherstone, 2011). However, no rank efficient mechanism is even weakly SD-strategyproof (Theorem 3 in (Featherstone, 2011)). Furthermore, any rank efficient mechanism is neither swap monotonic, nor upper invariant, nor lower invariant (see Examples 4 and 5 in Section 15 of the Online Appendix); thus, rank efficient mechanisms are not partially strategyproof. This highlights that the attractive efficiency properties of rank efficient mechanisms come at a price, as these mechanisms fail all of the axioms that constitute strategyproofness.

Hybrid Mechanisms
In (Mennle and Seuken, 2017b), we have investigated a natural way to trade off strategyproofness and efficiency in the random assignment problem. The main idea of hybrid mechanisms is to mix two different mechanisms, one with good incentive properties and the other with good efficiency properties. We have shown that, under certain technical conditions, the resulting mechanisms are partially strategyproof, but can also improve efficiency beyond the ex-post efficiency of Random Serial Dictatorship. Furthermore, the trade-offs are scalable in the sense that the mechanism designer can accept a lower degree of strategyproofness in exchange for more efficiency. Partial strategyproofness enables this analysis because it allows a meaningful parametric evaluation of incentive properties. Moreover, its equivalent formulation in terms of discounted dominance enables the computation of optimal mixing factors between the two mechanisms.
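To illustrate the mixing idea, the following Python sketch forms the convex combination that defines a hybrid mechanism. The assignment matrices `phi_good` and `phi_eff` are hypothetical placeholders for the outputs of a mechanism with good incentive properties and one with good efficiency properties on some fixed preference profile; `beta` is the mixing factor.

```python
def hybrid_assignment(phi_good, phi_eff, beta):
    """Convex combination of two random assignment matrices.

    phi_good, phi_eff: lists of rows (one row of object probabilities
    per agent), standing in for the assignments produced by the two
    component mechanisms. beta in [0, 1] is the weight placed on the
    mechanism with better efficiency properties.
    """
    return [
        [(1.0 - beta) * p + beta * q for p, q in zip(row_g, row_e)]
        for row_g, row_e in zip(phi_good, phi_eff)
    ]

# Illustrative 2-agent, 2-object example (hypothetical assignments).
phi_good = [[0.5, 0.5], [0.5, 0.5]]   # e.g., a uniformly random assignment
phi_eff = [[1.0, 0.0], [0.0, 1.0]]    # e.g., an efficient deterministic assignment
mixed = hybrid_assignment(phi_good, phi_eff, beta=0.4)
# Each row of `mixed` is again a probability distribution over objects.
```

Because each row of the result is again a valid probability vector, the hybrid is itself a random assignment mechanism; the cited work identifies the conditions under which such mixtures remain partially strategyproof.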

Deterministic and Multi-Unit Assignment Mechanisms
In the examples discussed so far, randomness was either an intrinsic feature of the mechanism (e.g., with PS) or it was induced by random priorities (e.g., with BM and ABM). A third source of randomness may arise from the fact that agents are not certain about the preference reports of the other agents. By considering the effect of this uncertainty on the agents' strategic situation, one takes what Azevedo and Budish (2015) call the interim perspective. In Section 16 of the Online Appendix, we show formally how the notion of partial strategyproofness can be applied from this interim perspective. In particular, we show that mechanisms (random or deterministic) are partially strategyproof from the interim perspective precisely if they satisfy three axioms: upper invariance, weak monotonicity, and sensitivity. Since the combination of weak monotonicity and sensitivity is a strictly weaker requirement than swap monotonicity, this allows the application of partial strategyproofness to a larger class of mechanisms, including deterministic ones. In Sections 8.3 and 8.4, we have already observed that partial strategyproofness captures the subtle differences in terms of incentives between BM and ABM when priorities are random. However, priorities in school choice may be known, e.g., if they are based on walk-zones, siblings, or exam scores. Nonetheless, partial strategyproofness can be applied to study the incentive properties of BM and ABM even with non-random priorities: Both mechanisms satisfy upper invariance, weak monotonicity, and sensitivity (Mennle and Seuken, 2017d); they can therefore be analyzed from the interim perspective.
Multi-unit assignment is an important extension of the assignment problem, where each agent receives K objects (instead of just one). For example, the HBS Draft mechanism is used to assign schedules of courses to students at Harvard Business School (Budish and Cantillon, 2012). 28 Another example of a multi-unit assignment mechanism is the straightforward extension of the Probabilistic Serial mechanism (Heo, 2014), where agents consume a total of K (instead of 1.0) probability shares. Neither the HBS Draft mechanism nor the Probabilistic Serial mechanism is strategyproof for the multi-unit assignment problem, even if agents have additive utilities. Furthermore, while both mechanisms are upper invariant and involve randomization, neither is swap monotonic. However, they satisfy weak monotonicity and sensitivity (see Proposition 9 in Section 16.3 of the Online Appendix). Thus, partial strategyproofness can be applied to study their incentive properties from the interim perspective.

Conclusion
We have introduced partial strategyproofness, a new concept to understand the incentive properties of non-strategyproof assignment mechanisms that involve some form of randomness; and we have argued why this concept is an important addition to the mechanism designer's toolbox. The definition of partial strategyproofness is motivated by the observation that incentives for misreporting can be attributed to small differences in the agents' preference intensities. Interestingly, this definition also has an appealing axiomatic motivation, and it yields a meaningful parametric measure to study and compare the incentive properties of non-strategyproof mechanisms. In addition, partial strategyproofness comes with an array of useful technical results: First, the locality of the axioms swap monotonicity and upper invariance makes them intuitive and easily accessible. Second, its dominance interpretation enables the algorithmic verification of partial strategyproofness and the computation of the degree of strategyproofness (for any mechanism in any finite setting). Third, partial strategyproofness provides a unified view on many other relaxed incentive requirements. Fourth, it allows market designers to give honest and useful strategic advice: If an agent differentiates sufficiently between any two objects, then the agent is best off reporting their preferences truthfully; otherwise, the agent's incentive to misreport may be positive, but it is still quantitatively bounded. These facts distinguish the partial strategyproofness concept as a natural and powerful relaxed notion of strategyproofness for the assignment problem.
When applied to concrete mechanisms, the concept has provided new and useful insights, such as a more refined understanding of the incentive properties of the Probabilistic Serial mechanism and a meaningful comparison of the two variants of the Boston mechanism. In addition, it is applicable across a broad spectrum of other mechanisms. In the future, we expect the partial strategyproofness concept to be useful in research on assignment mechanisms. For example, from a theoretical perspective, it enables the development of new mechanisms with appealing properties that are incompatible with full strategyproofness; and for practical market design, it opens up new ways of reasoning about the trade-offs between incentives for truthtelling and other desiderata in school choice settings.

A. Proof of Lemma 1
a ⇒ b. First, assume that ϕ satisfies (a) but violates upper invariance. Then there exist an agent i ∈ N, a preference profile (P_i, P_{−i}) ∈ P^N, and a misreport P'_i ∈ N_{P_i} with P_i : a > b but P'_i : b > a, such that i's assignment for some j ∈ U(a, P_i) changes. Let j̄ be the highest-ranking such object; then (a) implies ϕ_{i,j̄}(P_i, P_{−i}) > ϕ_{i,j̄}(P'_i, P_{−i}). Moreover, (a) imposes restrictions in the opposite direction when i changes its report from P'_i to P_i. Since j̄ remains the highest-ranking object for which the assignment changes, we get ϕ_{i,j̄}(P'_i, P_{−i}) > ϕ_{i,j̄}(P_i, P_{−i}), a contradiction.
Second, assume towards contradiction that ϕ satisfies (a) but violates swap monotonicity. Again, there exist i, (P_i, P_{−i}), and P'_i as above such that the change in i's assignment violates swap monotonicity. This violation can occur in four ways:
1. i's assignment may not change for a and b, but for some other object. Then, by upper invariance, this object must be some j ∈ L(b, P_i). We can choose j̄ to be the highest-ranking such object and derive a contradiction by a two-fold application of (a), as in the proof of upper invariance.
2. i's assignment may change for a but not for b. Then, by upper invariance, a is the highest-ranking object for which i's assignment changes. The contradiction again arises by a two-fold application of (a).
3. i's assignment may change for b but not for a. This case is analogous to case 2, with the roles of P_i and P'_i reversed.
4. i's assignment may change for both a and b, but in the wrong direction. If i's assignment for a increases, then a is the highest-ranking object for which i's assignment changes, which leads to a direct contradiction with (a). If i's assignment for b decreases, we obtain a contradiction by reversing the roles of P_i and P'_i.
b ⇒ a. For this proof we introduce the notion of transitions between preference profiles. For any two preference orders P_i, P'_i ∈ P, a transition from P'_i to P_i is a finite sequence of preference orders P^0, ..., P^K such that
• P^0 = P'_i and P^K = P_i,
• for all k ∈ {0, ..., K−1} we have P^k ∈ N_{P^{k+1}}.
Intuitively, such a transition resembles a series of consecutive swaps that transform the preference order P'_i into the preference order P_i. The canonical transition (from P'_i to P_i) is a particular transition that is inspired by the bubble-sort algorithm: Initially, we set P^0 = P'_i. The preference orders P^1, ..., P^K are constructed in phases. In the first phase, we identify the highest-ranking object under P_i that is not ranked in the same position under P'_i, say j. Then we construct the preference orders P^1, P^2, ... by swapping j with the respective next-more-preferred objects. When j has reached the same position under P^k as under P_i, the first phase ends. Likewise, at the beginning of the second phase, we identify the object that is ranked highest under P_i among those objects that are not ranked in the same positions under P^k; new preference orders are constructed by swapping this object up to its final position under P_i. Subsequent phases are analogous. The construction ends when the last created preference order P^K coincides with P_i.
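The phase-by-phase construction can be made concrete in code. The following Python sketch (the function name `canonical_transition` is ours; preference orders are represented as lists from best to worst) generates the canonical transition as a sequence of orders, each differing from its predecessor by one adjacent swap:

```python
def canonical_transition(p_from, p_to):
    """Bubble-sort-style canonical transition from p_from to p_to.

    Each preference order is a list of objects, best first. In phase t,
    the object ranked t-th under p_to is swapped upward until it reaches
    position t; every adjacent swap yields one new preference order.
    """
    sequence = [list(p_from)]
    current = list(p_from)
    for target_pos, obj in enumerate(p_to):
        pos = current.index(obj)
        while pos > target_pos:
            # one adjacent swap: obj trades places with the object just above it
            current[pos - 1], current[pos] = current[pos], current[pos - 1]
            pos -= 1
            sequence.append(list(current))
    return sequence

steps = canonical_transition(["c", "b", "a"], ["a", "b", "c"])
# steps: ['c','b','a'] -> ['c','a','b'] -> ['a','c','b'] -> ['a','b','c']
```

Each consecutive pair in the returned sequence differs by exactly one swap of neighboring objects, which is precisely the neighborhood relation N used in the definition of a transition.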
Consider a fixed setting (N, M, q) and a mechanism ϕ that is swap monotonic and upper invariant. Given an agent i ∈ N, a preference profile (P_i, P_{−i}) ∈ P^N, and a misreport P'_i ∈ P, we must verify statement (a) in Lemma 1: If j is the highest-ranking object under P_i for which i's assignment changes, then i's assignment for j must decrease strictly if i reports P'_i instead of P_i. Let P_i : j_1 > ... > j_m and P'_i : j'_1 > ... > j'_m, where P_i and P'_i coincide on the top-k choices but j_{k+1} ≠ j'_{k+1}, and consider the canonical transition from P'_i to P_i. In the first phase, the object j_{k+1} is swapped upward in P'_i until it reaches the (k+1)-st position. By upper invariance, i's assignment for the top-k choices j_1, ..., j_k does not change during this phase. Moreover, by swap monotonicity, i's assignment for j_{k+1} cannot decrease, and, again by upper invariance, it does not change for the remainder of the canonical transition. Suppose that i's assignment for j_{k+1} changes strictly during the first phase. Then a change of report from P_i to P'_i must lead to a strict decrease in i's assignment for j_{k+1}, while i's assignment for the more preferred objects j_1, ..., j_k does not change. But this is precisely statement (a) for the combination i, (P_i, P_{−i}), P'_i. Alternatively, suppose that i's assignment for j_{k+1} does not change during the first phase. Then swap monotonicity implies that i's assignment cannot change at all during this phase. In this case, we proceed to the second phase, where the arguments are analogous. Finally, if i's assignment does not change in any phase, then it does not change at all, which is also consistent with statement (a).

B. Proof of Lemma 2 for Proposition 1
Lemma 2. The mechanism ϕ̃ constructed in the proof of Proposition 1 is r-partially strategyproof and manipulable for agent i with utility function ũ_i.
Proof. Observe that under ϕ̃, agent i's assignment is independent of the other agents' preference reports, so the mechanism is strategyproof for all agents except i. Thus, we can establish r-partial strategyproofness of ϕ̃ by verifying that truthful reporting maximizes i's expected utility for any preference order P_i and any utility function u_i ∈ U_{P_i} that satisfies URBI(r).
If P_i : a > b, then i's assignment remains unchanged unless i claims to prefer b to a (i.e., P̂_i : b > a). If a is i's reported last choice under P̂_i, then the change in i's expected utility from the misreport is −δ_b·u_i(a) + δ_b·u_i(b) ≤ 0. If d̂ ≠ a is i's reported last choice, then this change is −δ_a·u_i(a) + δ_b·u_i(b) + (δ_a − δ_b)·u_i(d̂). Since δ_a < δ_b by assumption, this value strictly increases if i ranks its true last choice d last. Thus, i's gain from misreporting is upper bounded by

−δ_a·u_i(a) + δ_b·u_i(b) + (δ_a − δ_b)·u_i(d)    (20)
= −δ_a·(u_i(a) − min_{j∈M} u_i(j)) + δ_b·(u_i(b) − min_{j∈M} u_i(j)).
This bound is guaranteed to be weakly negative if δ_a ≥ r·δ_b, where we use the fact that u_i satisfies URBI(r). Next, if P_i : b > a, then i's assignment from truthful reporting stochastically dominates any assignment that i can obtain by misreporting.
Finally, we need to verify that i finds a beneficial misreport if its utility function is ũ_i. Let P'_i be the same preference order as P̂_i, except that a and b trade positions. The change in i's expected utility from reporting P'_i is

−δ_a·ũ_i(a) + δ_b·ũ_i(b) + (δ_a − δ_b)·ũ_i(d)
= −δ_a·(ũ_i(a) − min_{j∈M} ũ_i(j)) + δ_b·(ũ_i(b) − min_{j∈M} ũ_i(j)),

where d ≠ b is i's true last choice. This change is strictly positive if δ_a < r̃·δ_b.
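To make the role of URBI(r) in this argument concrete, here is a small Python sketch (our own illustration, not part of the paper's formal apparatus) that checks the bounded-indifference condition used above: for any object b ranked directly below a, the min-normalized utility of b must be at most r times that of a.

```python
def satisfies_urbi(utilities, preference, r):
    """Check URBI(r) for a utility function.

    utilities: dict mapping each object to its utility.
    preference: list of objects, best first, consistent with utilities.
    Condition: for consecutively ranked objects a > b,
        u(b) - min_u <= r * (u(a) - min_u).
    """
    min_u = min(utilities.values())
    for a, b in zip(preference, preference[1:]):
        if utilities[b] - min_u > r * (utilities[a] - min_u) + 1e-12:
            return False
    return True

u = {"a": 1.0, "b": 0.5, "c": 0.0}
satisfies_urbi(u, ["a", "b", "c"], r=0.5)   # intensities differ sufficiently
satisfies_urbi(u, ["a", "b", "c"], r=0.4)   # fails: 0.5 > 0.4 * 1.0
```

Intuitively, smaller r demands that preference intensities fall off more steeply from one rank to the next, which is exactly the sense in which an agent must "differentiate sufficiently" between objects to be covered by the truthtelling guarantee.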

C. Proof of Proposition 3
Proof of Proposition 3. Given a setting (N, M, q), if a mechanism ϕ is r-partially strategyproof for some r > 0, then it is ε-approximately strategyproof for some ε < 1. The converse may not hold. Moreover, for any ε > 0, there exists an r < 1 such that any r-partially strategyproof mechanism is ε-approximately strategyproof.
Let ϕ be r-partially strategyproof for some r > 0 and let δ be the smallest non-zero variation in the assignment from any change of report by any agent (see (6) in the proof of Theorem 2). We need to find ε < 1 such that, for all agents i ∈ N with a utility function u_i that is bounded between 0 and 1, all preference profiles (P_i, P_{−i}) ∈ P^N, and all misreports P'_i ∈ P, i's gain from misreporting P'_i (instead of reporting P_i truthfully) is at most ε. Let a, b, d ∈ M be objects such that P_i : ... > a > b > ... > d, where a ∈ M is the most-preferred object (under P_i) for which i's assignment changes, b is the object that i ranks just below a, and d is i's least-preferred object. Lemma 1 yields ϕ_{i,a}(P_i, P_{−i}) − ϕ_{i,a}(P'_i, P_{−i}) ≥ δ. Thus, i's gain is upper bounded by

−u_i(a)·δ + (u_i(b) − u_i(d))·(1 − δ) ≤ 1 − δ.
Since δ > 0, ϕ is ε-approximately strategyproof for all ε ≥ 1 − δ. It is simple to see that the converse does not hold: Any mechanism that is almost constant (i.e., responds to misreports only with small changes to the assignments) but manipulable in a stochastic dominance sense can serve as a counter-example.
To see the second statement, fix an arbitrary ε > 0. Theorem 4 and r-partial strategyproofness of ϕ imply that for all i ∈ N, (P_i, P_{−i}) ∈ P^N (where P_i : j_1 > ... > j_m), P'_i ∈ P, and K ∈ {1, ..., m}, we have

Σ_{k=1}^{K} r^k·(ϕ_{i,j_k}(P_i, P_{−i}) − ϕ_{i,j_k}(P'_i, P_{−i})) ≥ 0.

With r > 0, we obtain

Σ_{k=1}^{K} (ϕ_{i,j_k}(P'_i, P_{−i}) − ϕ_{i,j_k}(P_i, P_{−i})) ≤ Σ_{k=1}^{K} (1 − r^k)·(ϕ_{i,j_k}(P'_i, P_{−i}) − ϕ_{i,j_k}(P_i, P_{−i})) ≤ Σ_{k=1}^{K} (1 − r^k) ≤ Σ_{k=1}^{m} (1 − r^k).    (26)
The last term in (26) is arbitrarily small for r sufficiently close to 1. With this, Proposition 8 in Section 14 of the Online Appendix implies ε-approximate strategyproofness of ϕ.
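The driver of this argument is that the final bound in (26) vanishes as r approaches 1. A quick numerical sketch (the helper name `dominance_gap_bound` is ours):

```python
def dominance_gap_bound(r, m):
    """Compute sum_{k=1}^{m} (1 - r^k), the final upper bound in (26)."""
    return sum(1.0 - r ** k for k in range(1, m + 1))

# The bound shrinks to 0 as r -> 1, for any fixed number of objects m.
dominance_gap_bound(0.9, 3)   # 0.1 + 0.19 + 0.271 = 0.561
dominance_gap_bound(1.0, 3)   # 0.0
```

For any target ε, one can therefore pick r close enough to 1 that the bound falls below the threshold required for ε-approximate strategyproofness.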

D. Proof of Theorem 4
Proof of Theorem 4. Given a setting (N, M, q) and a bound r ∈ [0, 1], a mechanism ϕ is r-partially strategyproof if and only if it is r-DD-strategyproof.
Given the setting (N, M, q), we fix an agent i ∈ N, a preference profile (P_i, P_{−i}) ∈ P^N, and a misreport P'_i ∈ P. The following claim establishes the equivalence of the r-partial strategyproofness constraints and the r-DD-strategyproofness constraints for any such combination (i, (P_i, P_{−i}), P'_i) with x = ϕ_i(P_i, P_{−i}) and y = ϕ_i(P'_i, P_{−i}).
Claim 1. Given a setting (N, M, q), a preference order P_i ∈ P, assignment vectors x, y, and a bound r ∈ [0, 1], the following are equivalent:
a. For all utility functions u_i ∈ U_{P_i} ∩ URBI(r), we have Σ_{j∈M} u_i(j)·x_j ≥ Σ_{j∈M} u_i(j)·y_j.
b. x r-discounted dominates y at P_i.
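The dominance condition in statement b is directly checkable by machine, which is what enables the algorithmic verification of partial strategyproofness. The following Python sketch (our illustration; assignment vectors are represented as dicts of object probabilities) tests whether x r-discounted dominates y at a preference order by verifying that every discounted partial sum is non-negative:

```python
def r_discounted_dominates(x, y, preference, r):
    """True iff x r-discounted dominates y at the given preference order.

    x, y: dicts mapping objects to assignment probabilities.
    preference: list of objects, best first (j_1, ..., j_m).
    Checks sum_{k<=K} r^k * (x[j_k] - y[j_k]) >= 0 for every rank K.
    """
    partial = 0.0
    for k, j in enumerate(preference, start=1):
        partial += (r ** k) * (x[j] - y[j])
        if partial < -1e-12:   # dominance violated at rank k
            return False
    return True

x = {"a": 0.6, "b": 0.3, "c": 0.1}
y = {"a": 0.5, "b": 0.2, "c": 0.3}
r_discounted_dominates(x, y, ["a", "b", "c"], r=0.9)   # True
r_discounted_dominates(y, x, ["a", "b", "c"], r=0.9)   # False
```

At r = 1 this reduces to ordinary first-order stochastic dominance checks on partial sums; smaller r discounts changes at lower ranks more heavily.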
Proof of Claim 1. b ⇒ a. Let P_i : j_1 > ... > j_m. Assume towards contradiction that b holds, but that for some utility function u_i ∈ U_{P_i} ∩ URBI(r) we have

Σ_{l=1}^{m} u_i(j_l)·(x_{j_l} − y_{j_l}) < 0.

Without loss of generality, we can assume min_{j∈M} u_i(j) = 0. Let δ_k = x_{j_k} − y_{j_k}, and let

S(K) = Σ_{k=1}^{K} u_i(j_k)·(x_{j_k} − y_{j_k}) = Σ_{k=1}^{K} u_i(j_k)·δ_k.

Write c_k = u_i(j_k)/r^k and T(K) = Σ_{k=1}^{K} r^k·δ_k. Since u_i satisfies URBI(r) with min_{j∈M} u_i(j) = 0, we have u_i(j_{k+1}) ≤ r·u_i(j_k), so the coefficients c_k are non-negative and non-increasing; and since b holds, T(K) ≥ 0 for all K. Summation by parts yields

S(m) = Σ_{k=1}^{m−1} (c_k − c_{k+1})·T(k) + c_m·T(m) ≥ 0,

which contradicts the assumption S(m) < 0.

a ⇒ b. Assume towards contradiction that a holds but b fails, and let K be the smallest rank at which the discounted dominance inequality (35) is violated. Then the value

∆ = Σ_{k=1}^{K} r^k·(y_{j_k} − x_{j_k})

is strictly positive. Let D ≥ d > 0, and let u_i be the utility function defined by u_i(j_k) = D·r^k for k ≤ K and u_i(j_k) = d·r^k for k > K. This utility function satisfies URBI(r). Furthermore, we have

Σ_{j∈M} u_i(j)·(x_j − y_j) = D·Σ_{k=1}^{K} r^k·(x_{j_k} − y_{j_k}) + d·Σ_{k=K+1}^{m} r^k·(x_{j_k} − y_{j_k}) ≤ −D·∆ + d.

For d < D·∆, the difference Σ_{j∈M} u_i(j)·x_j − Σ_{j∈M} u_i(j)·y_j is strictly negative, a contradiction.
This concludes the proof of Theorem 4.