Manipulability in a group activity selection problem

We consider strategic manipulation in a group activity selection problem. Given a set of activities in which they might participate, the agents have preferences over the activities themselves and over the number of participants in each activity; the goal is to assign agents to activities on the basis of their preferences. In this paper, we consider the possibility of strategic manipulation involved in providing solutions in such a setting, for the solution concepts of maximum individual rationality, core stability, and Pareto optimality respectively. For three different preference extensions (the Gärdenfors, maxi-min, and maxi-max extensions) we analyze strategic manipulability with respect to the number of activities available. In general, the considered solution concepts turn out to be prone to strategic manipulation; in some natural special cases, however, such an aggregation is strategyproof.


Introduction
We investigate the aspect of strategic manipulability in a group activity selection problem considered in Darmann et al. (2018) and Darmann (2018) respectively. In this setting, there is a set of agents and a set of activities to which the agents should be assigned, where each agent can take part in at most one activity. The agents' preferences depend on the activity itself and the number of participants in that activity. As particular examples, consider the organizer of a workshop who plans to set up social activities for the free afternoon, or a company that wants to provide free sports classes for its employees (Skowron et al. 2015) in order to raise their overall satisfaction. Since these take place simultaneously, each agent can take part in at most one activity; of course, the choice of abstaining from any activity, i.e., doing nothing, should be a valid option as well. It is plausible to assume that the preferences of the agents do not depend on the activity alone but also on the number of agents taking part in the respective activity, since, e.g., a table tennis tournament with 40 players and only one table will not be desired even by a passionate table tennis player. A natural goal of the organizer would now be to find a reasonable assignment of agents to activities without forcing an agent to participate when she is not willing to. Another example would be a company that has several possible projects to which some of its employees should be assigned instead of performing their common working tasks, as a bonus in the form of a variation from their usual work or in order to let them gain additional experience in project work. However, each of the employees might have different preferences over the projects and over the number of agents engaged in the corresponding project teams. As an efficiency consideration, the company is interested in assigning the employees in a way such that they bear a reasonable level of motivation in working on the projects; in particular, the company does not want employees to be so poorly motivated for the assigned project with the corresponding team size that an employee would rather opt out (and perform their usual working tasks in the company instead).
In this paper, we consider the group activity selection problem with ordinal preferences (o-GASP), in which the agents' preferences are strict orders over pairs "(activity, group size)", including the possibility "do nothing", to which we refer as the void activity. The goal, of course, is to assign agents to activities in a reasonable manner. As indicated above, a main requirement is that the assignment should be individually rational, meaning that no agent should be forced to take part in an alternative she deems unacceptable, i.e., an alternative to which she would prefer doing nothing. The purpose of this paper is to study the aspect of strategyproofness involved when such assignments are provided.
Our contribution and relation to the literature. Our focus lies on the main solution concepts studied in the group activity selection problem: maximum individually rational, core stable, and Pareto optimal assignments respectively. In natural special preference domains we analyze the strategyproofness of the respective single-valued aggregation functions and possibly multi-valued aggregation correspondences with respect to the number of activities involved. Darmann et al. (2018) introduce the general group activity selection problem GASP, where the agents' preferences are weak orders over the pairs "(activity, group size)". There, the problems of finding a stable assignment, for stability notions such as Nash and core stability, and, above all, finding a maximum individually rational assignment (that is, an assignment maximizing the number of agents assigned to a non-void activity among the individually rational assignments) are studied from a computational viewpoint in the approval-based variant a-GASP. Darmann (2018) considers the problem of finding Pareto optimal and stable solutions in the strict preference setting of o-GASP, for different stability notions including that of core stability. In both works the focus lies on the special cases of increasing and decreasing preferences. Loosely speaking, with increasing preferences an agent would like as many other agents as possible to join the same activity; in the decreasing preferences case, an agent would like to share the same activity with as few other agents as possible. In this paper we analyze the aspect of manipulability involved in providing maximum individually rational, core stable, and Pareto optimal assignments. Typically, these assignments are not unique; our main interest hence lies in aggregation correspondences which output the set of all maximum individually rational, core stable, and Pareto optimal assignments respectively. We particularly focus on the special cases of increasing, decreasing, and, more generally, single-peaked preferences. Our results show that such an aggregation is, unfortunately, susceptible to strategic manipulation already for a small number of activities: while for the cases of one and two activities some robustness results can be achieved, for three activities all of the considered solution concepts allow for strategic manipulation with respect to each of the preference extensions considered. In addition, it turns out that the negative results generalize to an impossibility result for any aggregation process that respects individual rationality once the mild condition of unanimity is imposed (which is satisfied by basically any reasonable aggregation), already for restricted instances of o-GASP (see Sect. 5.4).
Whether the aggregation of individual preferences into a group solution is susceptible to strategic manipulation is one of the central questions in social choice theory. Such an aggregation function (which outputs a single outcome) or aggregation correspondence (which outputs a set of outcomes) is strategyproof, and hence not manipulable, if no agent can be better off by misrepresenting her true preferences. In the classical framework, strategyproofness has been well-studied both for aggregation functions [see, for instance, Barberà (2010), and the seminal papers by Gibbard (1973) and Satterthwaite (1975)] and for aggregation correspondences [see, e.g., Barberà et al. (2001), Brandt and Brill (2011) and Brandt and Geist (2014)]. Clearly, comparing different assignments, an agent will prefer one which yields the best alternative for her. Comparing sets of assignments, however, is less obvious. Instead of asking the agents to give a ranking over all possible sets of outcomes (which requires ranking an exponential number of possibilities), the typical assumption is that the preferences over the single alternatives can be extended to binary relations over sets of alternatives. Of course, such a preference extension can be performed in various ways [see, e.g., Barberà et al. (2004) and Barberà (2010)]. We consider the well-known Gärdenfors extension (Gärdenfors 1976) and include the natural maxi-max extension and maxi-min extension (see Moretti and Tsoukiàs 2012) in our analysis: in an optimistic mindset, one might hope for the most-preferred among the possible alternatives; with a pessimistic view, one might be worried about the least-preferred among the alternatives.
Strategyproofness of coalition formation rules has been studied in several papers, often axiomatically motivated. Rodríguez-Álvarez (2009) characterizes single-lapping rules over the domain of additively representable or separable preferences by four axioms including strategyproofness. Pápai (2004) uses strategyproofness as one of the characterizing axioms of these rules in general domains. Further literature on strategyproofness of coalition formation games, in connection with core stability, includes the works of Alcalde and Revilla (2004), Cechlárová and Romero-Medina (2001), and Sönmez (1999).
Closely related problems have been considered in the works of Lee and Shoham (2015) and Long (2018). Both in the anonymous stable invitation problem (ASIP) of Lee and Shoham (2015) and in the group selection problem of Long (2018), the objective is to determine a single subgroup of agents in a reasonable way given the agents' preferences over group size (including 0). ASIP can be understood as the group activity selection problem with a single activity; Lee and Shoham (2015) provide the impossibility result that a strategyproof mechanism for ASIP that always outputs an individually rational and envy-free solution cannot exist. Also the group selection problem of Long (2018) might be interpreted as the group activity selection problem with a single activity under a certain domain restriction. Long (2018) assumes the agents' preferences to be strict and single-peaked, where their notion of single-peakedness implies that each agent's set of group sizes preferred over the outside option is a (possibly empty) integer interval starting from one. In contrast, the notion of single-peakedness for the group activity selection problem used in this paper does not yield the analogous implication. However, Long (2018) proposes two aggregation functions (rules) that output a Pareto optimal and individually rational assignment, and provides an axiomatic justification for each of them. We add to these results by showing that, in the terminology used in our paper, a strategyproof aggregation function outputting a maximum individually rational assignment cannot exist in the group activity selection problem with only one activity when all agents' preferences are decreasing (and therefore single-peaked), while for the increasing case each such aggregation function is strategyproof. In addition, imposing a rather mild and natural condition (which is satisfied by all of the considered aggregation correspondences, including the one restricted to Pareto optimal assignments), we provide a general impossibility result for aggregation functions and correspondences (w.r.t. the Gärdenfors and maxi-min extensions) for the case of two activities and increasing preferences. Note that our work differs from Lee and Shoham (2015) and Long (2018) in several respects. First, we mainly focus on aggregation correspondences instead of single-valued aggregation functions. Second, with maximum individual rationality and core stability we consider different solution concepts. Third, we also take into account the case of more than one activity. In particular, for the maxi-max, maxi-min and Gärdenfors extensions we provide the link between strategic manipulability and the number of available activities in the considered group activity selection problem.
Further related work includes the works of Jackson and Nicolò (2004) and Massó and Nicolò (2008). In their settings, agents have preferences over both alternatives and group size, and the goal is to determine a single alternative together with a group of agents who jointly use the alternative. Massó and Nicolò (2008) assume gregarious preferences, i.e., for each alternative the agents want additional agents to join the group; the focus lies on efficient and both internally and externally stable allocations. Jackson and Nicolò (2004) assume that for each group size the agents' preferences over the alternatives are single-peaked; they consider the domains of pure congestion and pure cost-sharing as special cases (these domains translate to the cases of decreasing and increasing preferences respectively in our framework). They provide the result that, even in the domain of pure congestion, only dictatorship is compatible with strategyproofness, Pareto efficiency and the property of outsider independence.
Finally, the group activity selection problem is also related to anonymous and non-anonymous hedonic games. Note that in non-anonymous hedonic games, agents have preferences over the possible coalitions they could be part of; in the anonymous variant these preferences depend only on the size of the coalition. In contrast, in o-GASP (and in GASP in general), the agents' preferences depend both on the considered activity and on the size of the group of agents participating in that activity. However, the setting of GASP (and hence o-GASP) can, in a somewhat bulky and artificial way, be embedded in the general hedonic game framework (see Darmann et al. 2018 for details). In particular, the model considered in this work allows for a much more compact representation and has some natural special cases to which we turn our attention.
The paper is organized as follows. In Sect. 2 we present the model of o-GASP and some basic definitions. The concepts of strategyproofness and the preference extensions involved are presented in Sect. 3. In Sect. 4 we discuss manipulability in the case of a single activity. Section 5 considers the aspect of strategic manipulation involved when there are at least two activities, and ends with a general impossibility result for any aggregation function/correspondence satisfying unanimity. Section 6 provides an outlook towards future research questions and concludes the paper.

Formal model
We begin with the model considered in this work and some basic definitions (see also Darmann 2018, Darmann et al. 2018, and the survey by Darmann and Lang 2017).
Given a set of agents N = {1, ..., n} and a set of activities A = A* ∪ {a∅}, where A* = {a_1, ..., a_m} and activity a∅ is the void activity, the set of alternatives W is given by W = W* ∪ {a∅}, with W* = A* × {1, ..., n}; alternative (a, k) ∈ W* is interpreted as "activity a with k participants". The vote ≻_i of an agent i ∈ N is a strict order over W. A preference profile P = (≻_1, ..., ≻_n) over W consists of n votes (one for each agent). The set of all preference profiles over W is denoted by P(N, A). We refer to the set S_i := {(a, k) ∈ W* | (a, k) ≻_i a∅} as the induced approval vote of agent i, and say that agent i approves of all alternatives in S_i.
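As a small illustration of the induced approval vote, consider the following Python sketch. The encoding and names (e.g., `approval_set`, the string `"void"` for a∅) are ours, not the paper's:

```python
# Alternatives are (activity, group size) pairs; VOID stands for the
# void activity a_0. A vote is a strict ranking, best alternative first.
VOID = ("void", 0)

def approval_set(ranking):
    """Induced approval vote S_i: alternatives ranked strictly above VOID."""
    return set(ranking[:ranking.index(VOID)])

# n = 2 agents, A* = {a, b}: this agent approves (a, 2) and (b, 1) only.
ranking = [("a", 2), ("b", 1), VOID, ("a", 1), ("b", 2)]
print(sorted(approval_set(ranking)))  # [('a', 2), ('b', 1)]
```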
An instance of the group activity selection problem with ordinal preferences (o-GASP) consists of a triple (N, A, P). An assignment for an instance (N, A, P) of o-GASP is a mapping π : N → A; for a ∈ A, we write π_a := {i ∈ N | π(i) = a} for the set of agents assigned to a, and π_i := π_{π(i)} for the set of agents assigned to the same activity as agent i. Abusing notation, we say that assignment π assigns agent i to alternative (a, k) if π(i) = a with |π_a| = k. Also, we identify the void activity a∅ representing the outside option "do nothing" with the alternative (a∅, k), for any k ∈ {1, ..., n}. For the sake of readability, in this paper we omit the repeated use of the term "non-void" when referring to the number of activities in an instance (N, A, P) of o-GASP; that is, with abuse of notation we refer to |A*| as the number of activities in the instance.
As the main requirement considered for an assignment, no agent should be assigned to an alternative she deems unacceptable. Formally, an assignment π : N → A is said to be individually rational if for every a ∈ A* and every agent i ∈ π_a it holds that (a, |π_a|) ∈ S_i.
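Individual rationality is straightforward to check mechanically. The following sketch uses our own encoding (the void activity is the string `"void"`, and `approval[i]` is agent i's induced approval set S_i):

```python
from collections import Counter

def is_individually_rational(pi, approval):
    """pi: dict agent -> activity ("void" for the void activity);
    approval[i]: agent i's induced approval set S_i of (activity, size) pairs."""
    sizes = Counter(a for a in pi.values() if a != "void")
    return all(a == "void" or (a, sizes[a]) in approval[i]
               for i, a in pi.items())

approval = {1: {("a", 1), ("a", 2)}, 2: {("a", 2)}}
print(is_individually_rational({1: "a", 2: "a"}, approval))     # True
print(is_individually_rational({1: "a", 2: "void"}, approval))  # True: (a,1) in S_1
print(is_individually_rational({1: "void", 2: "a"}, approval))  # False: (a,1) not in S_2
```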
Clearly, in any instance (N, A, P) of o-GASP the trivial assignment π∅ which assigns each agent to a∅ is individually rational. As a consequence, an individually rational assignment always exists. A natural goal of a benevolent central authority, however, might be to maximize the number of agents assigned to a non-void activity. Let #(π) := |{i ∈ N | π(i) ≠ a∅}| denote the total number of agents assigned to non-void activities under assignment π.
Definition 1 Given an instance (N, A, P) of o-GASP, an assignment π is said to be maximum individually rational if π is individually rational and #(π) ≥ #(π′) holds for every individually rational assignment π′.
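For tiny instances, the set of all maximum individually rational assignments can be enumerated by brute force. The sketch below is our own illustration (exponential in n, with the void activity written as the string `"void"`):

```python
from collections import Counter
from itertools import product

def is_ir(pi, approval):
    sizes = Counter(a for a in pi.values() if a != "void")
    return all(a == "void" or (a, sizes[a]) in approval[i]
               for i, a in pi.items())

def mir_assignments(agents, activities, approval):
    """All maximum individually rational assignments (the set C_mir)."""
    candidates = [dict(zip(agents, choice))
                  for choice in product(activities + ["void"], repeat=len(agents))]
    ir = [pi for pi in candidates if is_ir(pi, approval)]
    best = max(sum(a != "void" for a in pi.values()) for pi in ir)
    return [pi for pi in ir
            if sum(a != "void" for a in pi.values()) == best]

approval = {1: {("a", 1), ("a", 2)}, 2: {("a", 2)}}
print(mir_assignments([1, 2], ["a"], approval))  # [{1: 'a', 2: 'a'}]
```

Here the unique maximum individually rational assignment sends both agents to a, since agent 2 only approves of a with two participants.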
While the above concept of maximum individual rationality is certainly appealing, it does not take into account the possible desire of a group of agents to deviate from the assignment in favor of a different alternative. The well-known concept of the core is concerned with stability against such group deviations. In particular, an assignment is core stable if no subgroup of agents wants to deviate from the assignment in order to join some other activity (see also Darmann 2018).
Definition 2 Given an instance (N, A, P) of o-GASP, an assignment π is core stable (or in the core) if π is individually rational and there are no E ⊆ N with E ≠ ∅ and a ∈ A* with π_a ⊂ E such that (a, |E|) ≻_i (π(i), |π_i|) holds for each i ∈ E.

The requirement π_a ⊂ E in the above definition represents the intuition that (and hence covers scenarios in which) a deviating group of agents cannot prevent agents from participating in their assigned activity; therefore, the group requires the cooperation of the agents assigned to that activity. Finally, we will consider Pareto optimal assignments, i.e., individually rational assignments for which there is no other assignment in which an agent is better off while no agent changes for the worse.
Definition 3 Given an instance (N, A, P) of o-GASP, an assignment π is Pareto optimal if π is individually rational and there is no assignment π′ which Pareto-dominates π, i.e., no assignment π′ such that for each i ∈ N either (π′(i), |π′_i|) = (π(i), |π_i|) or (π′(i), |π′_i|) ≻_i (π(i), |π_i|) holds, with strict preference for at least one agent.
Observe that there are instances of o-GASP which do not admit a core stable assignment (see Darmann 2018), while maximum individually rational assignments and Pareto optimal assignments always exist.
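Core stability can be checked directly, if expensively, by enumerating deviating coalitions. The sketch below is our own encoding (the void activity is the string `"void"`; helper names like `is_core_stable` are ours); it follows the reading that a deviating coalition E must strictly contain π_a and every member of E must strictly prefer (a, |E|) to her current alternative:

```python
from collections import Counter
from itertools import combinations

VOID = ("void", 0)

def assigned_alt(pi, i, sizes):
    a = pi[i]
    return VOID if a == "void" else (a, sizes[a])

def is_core_stable(pi, agents, activities, rank):
    """rank[i]: alternative -> position in i's strict order (0 = best).
    Alternatives missing from rank[i] are treated as worst."""
    sizes = Counter(a for a in pi.values() if a != "void")
    pos = lambda i, x: rank[i].get(x, len(rank[i]))
    # individual rationality: no agent ranks her alternative below the void one
    if any(pos(i, assigned_alt(pi, i, sizes)) > pos(i, VOID) for i in agents):
        return False
    # no coalition E strictly containing pi_a wants to deviate to a
    for a in activities:
        members = {i for i in agents if pi[i] == a}
        outsiders = [i for i in agents if pi[i] != a]
        for k in range(1, len(outsiders) + 1):
            for extra in combinations(outsiders, k):
                E = members | set(extra)
                if all(pos(i, (a, len(E))) < pos(i, assigned_alt(pi, i, sizes))
                       for i in E):
                    return False
    return True

rank = {1: {("a", 2): 0, ("b", 2): 1, VOID: 2},
        2: {("b", 2): 0, ("a", 2): 1, VOID: 2},
        3: {("b", 2): 0, ("a", 2): 1, VOID: 2}}
print(is_core_stable({1: "void", 2: "b", 3: "b"}, [1, 2, 3], ["a", "b"], rank))          # True
print(is_core_stable({1: "void", 2: "void", 3: "void"}, [1, 2, 3], ["a", "b"], rank))    # False
```

On this three-agent profile, assigning agents 2 and 3 to b and agent 1 to the void activity is core stable, while the trivial assignment is not (agents 2 and 3 would jointly deviate to b).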
We will consider natural special cases of the agents' preferences. Informally speaking, an agent has increasing preferences with respect to an activity a if she prefers to participate in a together with as many other agents as possible; an agent has decreasing preferences with respect to an activity a if she wishes to share a with as few other agents as possible. Both increasing and decreasing preferences are special cases of single-peaked preferences, in which agent i has a conception p_i(a) of the ideal group size in activity a; for any group size j < p_i(a) the agent prefers j + 1 agents participating in a to j, and for any group size j > p_i(a) the agent prefers j − 1 agents participating in a to j.
Definition 4 Given an instance (N, A, P) of o-GASP, with respect to activity a ∈ A* agent i's preferences are
• increasing if (a, j + 1) ≻_i (a, j) holds for each j ∈ {1, ..., n − 1};
• decreasing if (a, j) ≻_i (a, j + 1) holds for each j ∈ {1, ..., n − 1};
• single-peaked if there is a peak p_i(a) ∈ {1, ..., n} such that (a, j + 1) ≻_i (a, j) holds for each j < p_i(a) and (a, j) ≻_i (a, j + 1) holds for each j ≥ p_i(a).

We say that an agent has single-peaked (respectively, increasing/decreasing) preferences if her preferences are single-peaked (resp., increasing/decreasing) with respect to each activity a ∈ A*. In the instance considered in Example 1 each agent has single-peaked and, in particular, increasing preferences. In what follows, since we are interested in individually rational assignments only, for the sake of brevity alternatives ranked below a∅ will typically be omitted in the description of specific profiles. In addition, throughout this paper we assume that each agent approves of at least one alternative (otherwise, the agent has to be assigned to the void activity in any individually rational assignment), and, in order to exclude trivial instances, that n ≥ 2 holds.
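The three preference restrictions can be checked per activity from an agent's ranking. A sketch, in our own encoding (`rank` maps each alternative (a, k), k = 1, ..., n, to its position, lower = better):

```python
def prefers(rank, x, y):
    return rank[x] < rank[y]  # lower position = more preferred

def is_increasing(rank, a, n):
    return all(prefers(rank, (a, j + 1), (a, j)) for j in range(1, n))

def is_decreasing(rank, a, n):
    return all(prefers(rank, (a, j), (a, j + 1)) for j in range(1, n))

def is_single_peaked(rank, a, n):
    # peak p_i(a) = best-ranked group size for activity a
    peak = min(range(1, n + 1), key=lambda k: rank[(a, k)])
    return (all(prefers(rank, (a, j + 1), (a, j)) for j in range(1, peak)) and
            all(prefers(rank, (a, j), (a, j + 1)) for j in range(peak, n)))

# (a,2) > (a,3) > (a,1): single-peaked with peak 2, neither increasing nor decreasing
rank = {("a", 2): 0, ("a", 3): 1, ("a", 1): 2}
print(is_single_peaked(rank, "a", 3), is_increasing(rank, "a", 3),
      is_decreasing(rank, "a", 3))  # True False False
```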

Strategyproofness and preference extensions
We study strategyproofness in connection with maximum individually rational, core stable, and Pareto optimal assignments respectively. Particular focus is laid on aggregation correspondences which output the set of all maximum individually rational assignments (respectively, core stable/Pareto optimal assignments) for a given preference profile. In contrast, aggregation functions output exactly one specific assignment for each preference profile.
Given a set N of agents and a set A of activities, where P(N, A) denotes the set of all preference profiles over the set of alternatives, let α(N, A) := {π | π : N → A} denote the set of all assignments. A function f : P(N, A) → α(N, A) is called an aggregation function, and a correspondence C : P(N, A) → 2^{α(N,A)}\{∅} is called an aggregation correspondence. With abuse of notation, we say that an aggregation correspondence is individually rational if it outputs individually rational assignments only; i.e., aggregation correspondence C is individually rational if for each preference profile P and each π ∈ C(P), π is an individually rational assignment in the respective instance of o-GASP. We study three particular members of the family of individually rational aggregation correspondences defined below. The aggregation correspondence C_mir such that, for each instance I = (N, A, P) of o-GASP, C_mir(P) corresponds to the set of all maximum individually rational assignments in I is called the mir-aggregation correspondence. Analogously, the po-aggregation correspondence C_po outputs, for a given preference profile P, the set of Pareto optimal assignments of the respective instance of o-GASP. An aggregation function f is called a mir-aggregation function (resp. po-aggregation function) if, for each instance I = (N, A, P), f(P) is a single maximum individually rational (Pareto optimal) assignment in I.
The aggregation correspondence C_cs such that, for each instance I = (N, A, P) of o-GASP, C_cs(P) is the core of I if the core is non-empty, and {π∅} otherwise, is called the cs-aggregation correspondence. An aggregation function f is called a cs-aggregation function if, for each instance I = (N, A, P), f(P) is in the core of I if the core is non-empty, and f(P) = π∅ otherwise. Recall that in an instance of o-GASP a core stable assignment might not exist; in such a case, the cs-aggregation function (correspondence) outputs (the singleton set made up of) the trivial assignment. Observe that any mir/po/cs-aggregation function is a special case of an ir-aggregation function, which, for each given preference profile, outputs a single individually rational assignment in the respective instance. Let us now turn to strategyproofness of aggregation functions and aggregation correspondences.
Definition 5 An aggregation function f is called manipulable if there exist an instance (N, A, P) of o-GASP, an agent i ∈ N and a profile P′ ∈ P(N, A) with P′|_{N\{i}} = P|_{N\{i}} such that (f(P′)(i), |f(P′)_i|) ≻_i (f(P)(i), |f(P)_i|) holds. In such a case, to refer to the respective profile and agent, we say that f is manipulable at profile P by agent i. f is called strategyproof if f is not manipulable.
In order to consider manipulability of an aggregation correspondence, we adapt the natural extension axiom (see Barberà et al. 2004) which, formulated for sets of objects, states that the comparison of singleton sets containing one object each should be consistent with the rankings over the objects. In our setting, the agents' ordering of sets containing one assignment each should be consistent with the rankings over the respective alternatives assigned. More formally, a transitive binary relation ≽^ε (with strict part ≻^ε_i) over sets of assignments is a preference extension of a strict order ≻_i over alternatives if for all π, π′ ∈ α(N, A) the following holds: (π(i), |π_i|) ≻_i (π′(i), |π′_i|) implies {π} ≻^ε_i {π′}.

Definition 6 Let ε be a preference extension. C is ε-manipulable if there exist an instance (N, A, P) of o-GASP, an agent i ∈ N and a profile P′ ∈ P(N, A) with P′|_{N\{i}} = P|_{N\{i}} such that C(P′) ≻^ε_i C(P) holds. In such a case, we say that C is ε-manipulable at profile P by agent i; C is called ε-strategyproof if C is not ε-manipulable.

Thus, given the sincere preferences of the other agents, when strategyproofness is provided by a social choice correspondence then no agent has an incentive to misreport her preferences. In what follows, we apply particular representatives of preference extensions to our setting. These are the intuitive maxi-max and maxi-min extensions (see Moretti and Tsoukiàs 2012), and the well-known Gärdenfors extension (Gärdenfors 1976). Let (N, A, P) be an instance of o-GASP. Given agent i and a set X of assignments, we call π ∈ X a max-assignment (respectively, min-assignment) for i in X if there is no π′ ∈ X with (π′(i), |π′_i|) ≻_i (π(i), |π_i|) (respectively, (π(i), |π_i|) ≻_i (π′(i), |π′_i|)).
Observe that, given agent i and a set X of assignments, a max-assignment for i in X is not necessarily unique; but, due to the strict preferences of the agent over the alternatives, the alternative to which i is assigned must be the same under all max-assignments for i in X. That is, for all max-assignments π, π′ for i in X, we have (π(i), |π_i|) = (π′(i), |π′_i|). Analogously, for each agent i and set X of assignments, the alternative to which i is assigned must be the same under all min-assignments for i in X.
In the maxi-max extension, an agent considers a set X of assignments better than a set Y of assignments, if she prefers the best alternative assigned to her by an assignment in X to the best alternative assigned to her by an assignment in Y .Similarly, in the maxi-min extension, an agent considers a set X of assignments better than a set Y of assignments, if she prefers the worst alternative assigned to her by an assignment in X to the worst alternative assigned to her by an assignment in Y .

Definition 7
The maxi-max extension is defined by: for X, Y ∈ 2^{α(N,A)}\{∅}, X ≻^mx_i Y if (x(i), |x_i|) ≻_i (y(i), |y_i|) holds for a max-assignment x for i in X and a max-assignment y for i in Y. Analogously, the maxi-min extension is defined by: for X, Y ∈ 2^{α(N,A)}\{∅}, X ≻^mn_i Y if (x(i), |x_i|) ≻_i (y(i), |y_i|) holds for a min-assignment x for i in X and a min-assignment y for i in Y.

In our setting, according to the Gärdenfors extension (Gärdenfors 1976), an agent considers X better than Y if one of the following holds: (1) X can be "created" from Y by adding (removing) assignments, and the agent considers each added (remaining) assignment at least as good as each of the original (removed) assignments, with strict preference in at least one case; (2) otherwise, the agent considers each assignment in X\Y at least as good as each assignment in Y\X, with strict preference in at least one case.

Definition 8
The Gärdenfors extension is defined as follows. For i ∈ N and X, Y ∈ 2^{α(N,A)}\{∅}, X ≻^G_i Y if one of the three following conditions is satisfied:
1. X ⊂ Y, each x ∈ X satisfies (x(i), |x_i|) ≽_i (y(i), |y_i|) for each y ∈ Y\X, and there are x ∈ X and y ∈ Y\X with (x(i), |x_i|) ≻_i (y(i), |y_i|);
2. Y ⊂ X, each x ∈ X\Y satisfies (x(i), |x_i|) ≽_i (y(i), |y_i|) for each y ∈ Y, and there are x ∈ X\Y and y ∈ Y with (x(i), |x_i|) ≻_i (y(i), |y_i|);
3. neither X ⊆ Y nor Y ⊆ X, each x ∈ X\Y satisfies (x(i), |x_i|) ≽_i (y(i), |y_i|) for each y ∈ Y\X, and there are x ∈ X\Y and y ∈ Y\X with (x(i), |x_i|) ≻_i (y(i), |y_i|).
Here (x(i), |x_i|) ≽_i (y(i), |y_i|) stands for (x(i), |x_i|) = (y(i), |y_i|) or (x(i), |x_i|) ≻_i (y(i), |y_i|).

Let us begin our study with two basic observations for the mir-aggregation correspondence C_mir.
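Before doing so, the three set extensions can be made concrete in code. The sketch below is our own simplification: sets contain the alternatives assigned to the fixed agent (rather than full assignments), `rank` maps each alternative to its position (lower = better), and the Gärdenfors comparison follows one plausible reading of the extension:

```python
def best(rank, X):  return min(X, key=lambda x: rank[x])
def worst(rank, X): return max(X, key=lambda x: rank[x])

def maximax(rank, X, Y):   # X better than Y via the best elements
    return rank[best(rank, X)] < rank[best(rank, Y)]

def maximin(rank, X, Y):   # X better than Y via the worst elements
    return rank[worst(rank, X)] < rank[worst(rank, Y)]

def gardenfors(rank, X, Y):
    X, Y = set(X), set(Y)
    # which elements get compared depends on the subset relation
    A, B = (X, Y - X) if X < Y else (X - Y, Y) if Y < X else (X - Y, Y - X)
    return (bool(A) and bool(B)
            and all(rank[x] <= rank[y] for x in A for y in B)
            and any(rank[x] < rank[y] for x in A for y in B))

rank = {"a": 0, "b": 1, "c": 2}  # a > b > c for this agent
print(maximax(rank, {"a", "c"}, {"b"}))              # True: a beats b
print(maximin(rank, {"a", "c"}, {"b"}))              # False: c loses to b
print(gardenfors(rank, {"a", "b"}, {"a", "b", "c"}))  # True: dropping the worst
```

The first two comparisons illustrate how the optimistic and pessimistic views can disagree on the same pair of sets.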

Aggregation correspondence C mir : basic observations
In what follows, for an instance I = (N, A, P) and a preference profile P′ = (≻′_1, ..., ≻′_n) with P|_{N\{i}} = P′|_{N\{i}}, let S′_i denote the approval set of agent i with respect to ≻′_i; also, let I′ = (N, A, P′).
The first observation states that, for each of the preference extensions ε considered, we can w.l.o.g. assume that an agent i who manipulates C_mir at profile P does so by reducing her approval set.
Lemma 1 Given an instance I = (N, A, P) of o-GASP and ε ∈ {maxi-min, maxi-max, Gärdenfors}. If C_mir is ε-manipulable at P by agent i, then there is an instance I′ = (N, A, P′) with P|_{N\{i}} = P′|_{N\{i}} and S′_i ⊂ S_i such that C_mir(P′) ≻^ε_i C_mir(P) holds.
Proof Assume that C_mir is ε-manipulable at profile P by agent i, let P′ be the respective manipulated preference profile and I′ = (N, A, P′). Assume i ranks disapproved alternatives above a∅ in ≻′_i, i.e., has (a, k) ≻′_i a∅ for some (a, k) ∈ W*\S_i. For any such alternative (a, k), note that there is no individually rational assignment in I that assigns i to a such that a total of k agents is assigned to a, whereas there might be maximum individually rational assignments in I′ that do so. By the choice of ε and by C_mir(P′) ≻^ε_i C_mir(P), it follows that, given I′, removing all such (a, k) ∈ W*\S_i from S′_i (ceteris paribus) results in an instance I″ = (N, A, P″) with C_mir(P″) ≻^ε_i C_mir(P). Thus, for the mir-aggregation correspondence we can assume that S′_i ⊂ S_i holds.
Observe that Lemma 1 immediately implies #(π′) ≤ #(π) for π′ ∈ C_mir(P′), π ∈ C_mir(P). In addition, we have either C_mir(P′) ⊆ C_mir(P) or C_mir(P′) ∩ C_mir(P) = ∅. As a consequence, for C_mir we can note that (with X = C_mir(P′), Y = C_mir(P)) the second condition of Gärdenfors manipulability is redundant.
As a second observation, it follows that for C_mir the concepts of Gärdenfors strategyproofness and maxi-min strategyproofness coincide (Lemma 2 below). In general, and in particular for C_cs and C_po, this is not the case; we provide an example for C_cs and refer to Theorems 14 and 15 for C_po.
Example 2 Let N = {1, 2, 3} and A* = {a, b}, with P given by ≻_1 : (a, 2) ≻_1 (b, 2) ≻_1 a∅, and ≻_i : (b, 2) ≻_i (a, 2) ≻_i a∅ for i ∈ {2, 3}. In (N, A, P), the unique core stable assignment is the assignment π with π(1) = a∅ and π(2) = π(3) = b. Observe that C_cs is not maxi-min manipulable at profile P: clearly, agents 2 and 3 cannot maxi-min manipulate, since π assigns them to their top-ranked alternative. Also, agent 1 cannot maxi-min manipulate at P, since π is core stable in any manipulated profile P′ with P′|_{{2,3}} = P|_{{2,3}} unless agent 1 approves of (a, 1) in P′; this, however, would result in an assignment π′ which is core stable in P′ with a∅ ≻_1 (a, 1), and would hence make agent 1 worse off.

Lemma 2 Given an instance I = (N, A, P) of o-GASP, C_mir is Gärdenfors manipulable at P if and only if C_mir is maxi-min manipulable at P.
Proof "⇒": Assume C_mir is Gärdenfors manipulable at profile P by agent i; let I′ = (N, A, P′) be the respective manipulated instance. By Lemma 1 and the subsequent remarks we can assume that either C_mir(P′) ⊆ C_mir(P) or C_mir(P′) ∩ C_mir(P) = ∅ holds.
Case I: C_mir(P′) ⊆ C_mir(P). Clearly, manipulability yields C_mir(P′) ⊂ C_mir(P). Also, Gärdenfors manipulability implies (1) and (2). Therewith v(i) = a∅ follows, since otherwise (1) contradicts the individual rationality of w, and (2) implies that π(i) = a∅ holds for all π ∈ C_mir(P)\C_mir(P′), which, in turn, would imply C_mir(P)\C_mir(P′) = ∅ (and thus contradict C_mir(P′) ⊂ C_mir(P)), because any individually rational assignment in I that assigns i to a∅ is also individually rational in I′.
In addition, due to Gärdenfors manipulability we have (x(i), |x_i|) ≻_i (y(i), |y_i|) for x ∈ max_i C_mir(P′) and y ∈ min_i(C_mir(P)\C_mir(P′)). Thus, for the profile P″ which results from P′ by agent i disapproving of all alternatives ranked below (x(i), |x_i|), we get C_mir(P″) ⊆ C_mir(P′). In particular, for all z ∈ C_mir(P″) we have (z(i), |z_i|) ≻_i (y(i), |y_i|). Therewith, C_mir is maxi-min manipulable at profile P by agent i.

Case II: C_mir(P′) ∩ C_mir(P) = ∅. Let y ∈ max_i C_mir(P). Analogously to Case I, π′(i) = a∅ for all π′ ∈ C_mir(P′) follows [here, (2) contradicts C_mir(P′) ∩ C_mir(P) = ∅]. In addition, as above, the instance I″ defined in Case I yields maxi-min manipulability of C_mir.
"⇐": Assume C_mir is maxi-min manipulable at profile P by agent i; let P′ be the respective manipulated profile, with I′ = (N, A, P′). Observe that, by Lemma 1, any individually rational assignment π in I with π(i) = a∅ is also individually rational in I′; hence, any maximum individually rational assignment π in I with π(i) = a∅ must be maximum individually rational also in I′. Thus, π(i) ≠ a∅ must hold for each π ∈ C_mir(P), since otherwise C_mir would not be maxi-min manipulable at profile P by agent i. We distinguish the following cases.
Case I: for all π, λ ∈ C_mir(P) we have (π(i), |π_i|) = (λ(i), |λ_i|).

Let us now, for the preference extensions considered, turn to manipulability in connection with the considered solution concepts, with respect to the number of activities involved.

Manipulability in o-GASP with a single activity
In the case of only one activity a, the size of a maximum individually rational assignment is given by the maximum number k for which (a, k) is approved by at least k agents; i.e., for π ∈ C_mir(P) we have #(π) = max({k ∈ {1, ..., n} : |{i ∈ N | (a, k) ∈ S_i}| ≥ k} ∪ {0}).

Assume that all agents have increasing preferences. Then, since each agent approves of at least one alternative (which is different from a∅), each agent approves of (a, n) and hence #(π) = n holds for π ∈ C_mir(P). In particular, the unique maximum individually rational assignment is to assign each agent to a. Observe that this assignment is also the unique core stable and the unique Pareto optimal assignment: the whole set N of agents would prefer to jointly participate in a over any other assignment. Thus, by the nature of increasing preferences (each agent prefers (a, n) over any other alternative) no agent has an incentive to manipulate.
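The single-activity observation (the size of a maximum individually rational assignment equals the largest k such that (a, k) is approved by at least k agents) translates directly into code; the encoding, with `approvals[i]` holding agent i's set of approved group sizes for the unique activity, is our own:

```python
def max_ir_size(approvals, n):
    """Largest k such that at least k agents approve group size k (0 if none)."""
    return max((k for k in range(1, n + 1)
                if sum(k in sizes for sizes in approvals.values()) >= k),
               default=0)

# S_1 = {1, 2}, S_2 = {2, 3}, S_3 = {3}: two agents approve size 2, so k = 2
print(max_ir_size({1: {1, 2}, 2: {2, 3}, 3: {3}}, 3))  # 2
```

Any k agents who approve (a, k) can then be assigned to a, so the bound is attained.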
In the remainder of this section, we will hence consider the aspect of strategic manipulation involved in maximum individually rational, core stable, and Pareto optimal assignments respectively in the single-activity case, with a focus on the special cases of single-peaked and decreasing preferences.

Maximum individually rational assignments
Considering maximum individually rational assignments in the case of a single activity, when all agents have decreasing preferences it turns out that the mir-aggregation correspondence C_mir is maxi-min and Gärdenfors strategyproof (but maxi-max manipulable). On the negative side, that strategyproofness result for the mir-aggregation correspondence C_mir cannot be extended from decreasing to single-peaked preferences; in particular, in the latter case C_mir is ε-manipulable for each preference extension ε (see Theorem 2). We begin with a short example which will be useful for the proofs of Theorem 1 and Proposition 1.

Theorem 1 When all agents have decreasing preferences and A* consists of one activity, then the mir-aggregation correspondence C_mir is
• maxi-min and Gärdenfors strategyproof, and
• maxi-max manipulable.
Proof Maxi-max manipulability Consider the instances I, I′ of Example 3. Comparing the max-assignments of the instances yields (λ(1), 1) ≻_1 (π(1), 2) because (a, 1) ≻_1 (a, 2) holds. Hence, C_mir is maxi-max manipulable by agent 1 at profile P.
Maxi-min strategyproofness Assume that C_mir is maxi-min manipulable, i.e., there are instances I = (N, A, P), I′ = (N, A, P′) with A* = {a} such that for some i ∈ N we have P′|_{N\{i}} = P|_{N\{i}}, agent i's true preferences are ≻_i, but she is better off misreporting her preferences as ≻′_i. By Lemma 1 we can assume that agent i misreports by ranking some approved alternatives below a_∅, i.e., removing alternatives from her approval set. Let π ∈ min_i C_mir(P). We distinguish two cases.
Case I π(i) = a_∅. Clearly, due to P′|_{N\{i}} = P|_{N\{i}} it follows that π is individually rational, and hence maximum individually rational, also in instance I′. Since a_∅ is the worst possible alternative for agent i in any individually rational assignment, we thus cannot get C_mir(P′) ≻_i^min C_mir(P).
Case II π(i) = a. Let k := #(π). Since π is individually rational, (a, k) ∈ S_i holds. If (a, k) ∈ S′_i it follows that π ∈ C_mir(P′) and hence C_mir(P′) ≻_i^min C_mir(P) cannot hold. Otherwise, let ℓ := #(π′) for a maximum individually rational assignment π′ in instance I′. By assumption, each agent approves of at least one alternative (see also the end of Sect. 2); thus ℓ ≥ 1 must hold. If ℓ = k, by (a, k) ∉ S′_i we can conclude π′(i) = a_∅; therefore, C_mir(P) ⪰_i^min C_mir(P′) holds. If ℓ ≠ k, by Lemma 1 we can conclude ℓ < k. Now, in the case (a, ℓ) ∉ S′_i clearly π′(i) = a_∅ follows, again implying C_mir(P) ⪰_i^min C_mir(P′). Recall that by decreasing preferences all j ∈ N with (a, k) ∈ S_j have (a, ℓ) ∈ S_j; thus, if (a, ℓ) ∈ S′_i then more than ℓ agents approve of (a, ℓ) in I′. Hence, in I′ there is a maximum individually rational assignment λ with λ(i) = a_∅; again, this implies C_mir(P) ⪰_i^min C_mir(P′). Either way, we get a contradiction with maxi-min manipulability.
Unfortunately, the above strategyproofness result does not generalize to single-peaked preferences; in particular, in that more general domain strategyproofness of C_mir cannot be achieved for any preference extension ε.
From the proof of the above theorem we can also conclude that any mir-aggregation function is manipulable over the domain of single-peaked preferences. This, however, holds even for the case of decreasing preferences, as the following proposition shows.

Case II f(P^(3)) = μ. At P^(3), agent 1 is now able to manipulate by reporting ≻_1 instead of ≻_1^(3), i.e., "creating" P^(1). In this way, agent 1 is the only agent assigned to a (see 1.), which she prefers to a_∅ under ≻_1^(3). Thus, either choice of f(P^(3)) admits a possibility to manipulate. Therefore, there is no strategyproof mir-aggregation function in the case of one activity and decreasing preferences.

Core stable and Pareto optimal assignments
In the single-activity case, it turns out that core stable assignments are less prone to strategic manipulation than maximum individually rational assignments. Providing a general result, Theorem 3 states that the cs-aggregation correspondence C_cs is always maxi-max strategyproof. In addition, for decreasing preferences, C_cs is strategyproof for each of the considered preference extensions (Theorem 4). On the negative side, only maxi-max strategyproofness is provided when the preferences of the agents are single-peaked (see Theorem 5). Also, observe that in the single-activity case, an assignment is core stable if and only if it is Pareto optimal (see Lemma 3 below), which allows us to restrict our attention to core stable assignments in the remainder of this section.

Lemma 3 In an instance of o-GASP with a single non-void activity an assignment is core stable if and only if it is Pareto optimal.
Proof In instance (N, A, P) of o-GASP let A* = {a}. Assume π is core stable but not Pareto optimal. Then there is an assignment μ in which at least one agent is better off and no agent is worse off under μ than under π. Since there is only one non-void activity, this means that π_a ⊆ μ_a holds, because otherwise μ assigns an agent of π_a to a_∅, making that agent worse off. In particular, π_a ⊂ μ_a must follow from μ ≠ π. Observe that this means (a, |μ_a|) ≻_i (a, |π_a|) for all i ∈ μ_a. This, however, implies that π is not core stable, which contradicts our assumption. On the other hand, assume that an assignment π is Pareto optimal. If it is not core stable, then there is an assignment μ with π_a ⊂ μ_a such that (a, |μ_a|) ≻_i (a, |π_a|) for all i ∈ μ_a. This contradicts the Pareto optimality of π.
Theorem 3 When A* consists of one activity, the cs-aggregation correspondence C_cs is maxi-max strategyproof.
Proof Assume the opposite. Given instance I = (N, A, P) with A* = {a}, some agent i misreports her preferences, resulting in instance I′ such that for μ ∈ max_i C_cs(P′) and π ∈ max_i C_cs(P) we have
(μ(i), |μ_i|) ≻_i (π(i), |π_i|).  (2)
Clearly, μ ∉ C_cs(P) holds. If μ is not individually rational in instance I, then (a, |μ_a|) ≺_i a_∅ follows, since μ is individually rational in I′ and for each agent j ∈ N\{i} we have ≻′_j = ≻_j; this contradicts (2) and the fact that π is individually rational in I. Thus, μ is individually rational in I. Since μ is not core stable in instance I, there is a set E ⊃ μ_a such that for each e ∈ E we have (a, |E|) ≻_e (μ(e), |μ_e|).
Let E′ be the largest of these sets. The assignment λ which assigns each agent of E′ to a and the remaining agents to a_∅ is hence preferred over μ by each member of E′, and in particular by agent i. Observe that λ must be core stable in I, because otherwise a superset F of E′ exists which makes each of its members better off, and hence (a, |F|) ≻_f (a, |E′|) ≻_f (μ(f), |μ_f|) holds for each f ∈ F; this contradicts the choice of E′. Thus, we have (a, |λ_a|) ≻_i (a, |π_a|) with λ ∈ C_cs(P), which contradicts π ∈ max_i C_cs(P).

Theorem 4 When all agents have decreasing preferences and A* consists of one activity, then the cs-aggregation correspondence C_cs is
• maxi-min strategyproof,
• maxi-max strategyproof, and
• Gärdenfors strategyproof.
Proof Maxi-max strategyproofness follows from Theorem 3.
Maxi-min/Gärdenfors strategyproofness Consider an instance I = (N, A, P) with A* = {a} and each agent having decreasing preferences. Note that in any assignment, each agent assigned to a objects to other agents joining a because of decreasing preferences. Thus, the set C_cs(P) of core stable assignments is the set of non-trivial individually rational assignments, i.e., the set of assignments that assign exactly k agents approving of (a, k) to a, for any choice of k ≥ 1. Assume some agent i tries to manipulate, and let the resulting instance be denoted by I′ = (N, A, P′). We first show that C_cs(P′) ⊆ C_cs(P) follows.
Assume there is an assignment π′ ∈ C_cs(P′)\C_cs(P). Suppose π′(i) = a_∅. Then π′ is also individually rational in instance I because of P|_{N\{i}} = P′|_{N\{i}}. Also, π′(j) = a must hold for at least some j ∈ N, since otherwise π′ is not core stable in instance I′ by decreasing preferences. Hence, π′ is a non-trivial individually rational and hence core stable assignment in instance I, which contradicts our assumption. Suppose π′(i) = a. Then π′ must fail individual rationality in I, since otherwise it would be core stable in I. However, this implies a_∅ ≻_i (a, #(π′)). It is not difficult to verify that this contradicts both maxi-min and Gärdenfors manipulability. Therewith, C_cs(P′) ⊆ C_cs(P) holds. The case C_cs(P′) = C_cs(P) is trivial. Assume C_cs(P′) ⊂ C_cs(P). By decreasing preferences, for each agent the top-ranked alternative is (a, 1). Thus, there is a core stable assignment in I′ that assigns an agent j ∈ N\{i} alone to a and each other agent to a_∅. Agent i is hence assigned to a_∅ in a min-assignment for i in C_cs(P′), which contradicts maxi-min manipulability. For the Gärdenfors extension, consider the set C_cs(P)\C_cs(P′). Note that each assignment in C_cs(P) that assigns i to a_∅ must also be in C_cs(P′) due to decreasing preferences of the agents assigned to a. Thus, for each μ ∈ C_cs(P)\C_cs(P′), μ(i) = a holds. C_cs(P′) ≻_i^G C_cs(P) hence would require that a_∅ ≻_i (a, |μ_a|) holds, which contradicts the individual rationality of μ. Therewith, C_cs is Gärdenfors strategyproof.
Theorem 5 When all agents have single-peaked preferences and A* consists of one activity, then the cs-aggregation correspondence C_cs is
• maxi-max strategyproof,
• maxi-min manipulable, and
• Gärdenfors manipulable.
Proof Maxi-max strategyproofness follows from Theorem 3. Consider instances I, I′ in Example 4. In I, there are two core stable assignments π and λ, given by π(1) = π(2) = a and λ(1) = a, λ(2) = a_∅. In instance I′ the assignment λ is the unique core stable assignment. Therewith, C_cs is both maxi-min and Gärdenfors manipulable by agent 1 at profile P.

Manipulability in o-GASP with at least two activities
In what follows, we consider the scenario in which at least two activities are involved in the ordinal group activity selection problem. Our results show that again the mir-aggregation correspondence C_mir is less robust to manipulation than the cs-aggregation correspondence C_cs and the po-aggregation correspondence C_po. However, all three considered aggregation correspondences turn out to be prone to strategic manipulation. Even worse, it turns out that each aggregation correspondence that satisfies a natural unanimity property (and hence basically any reasonable individually rational aggregation correspondence) is prone to strategic manipulation (see Sect. 5.4).

Maximum individually rational assignments
In this section we show that C_mir is manipulable for each of the considered preference extensions when there are two activities, in both the special cases of decreasing and increasing preferences. In the latter case this negative result holds for each possible preference extension.
Theorem 6 Even when all agents have increasing preferences and A* consists of two activities, C_mir is ε-manipulable for every preference extension ε.
The above proof immediately yields the following proposition.
Proposition 2 Even when all agents have increasing preferences and A* consists of two activities, every mir-aggregation function is manipulable.
Consider the following example, to which we will refer in the proofs of Theorems 7 and 9.

Theorem 7 Even when all agents have decreasing preferences and A* consists of two activities, the mir-aggregation correspondence C_mir is
• maxi-max manipulable, and
• maxi-min and Gärdenfors manipulable.

Core stable assignments
As a general positive result, we can show that C_cs is maxi-max strategyproof when all agents have decreasing preferences (irrespective of the number of activities available). Unfortunately, for the maxi-min extension and the Gärdenfors extension an analogous result does not hold.
Theorem 8 When all agents have decreasing preferences, then the cs-aggregation correspondence C_cs is maxi-max strategyproof.
Proof It suffices to show that for each agent i there is a core stable assignment that assigns i alone to the activity a_i of her top-ranked alternative (a_i, 1). Fix agent i. Consider assignment π which assigns agent i alone to a_i, and assigns the remaining agents to activities sequentially, according to some fixed order s over these agents, as follows. As long as there are unassigned agents, activities to which no agent has been assigned yet, and corresponding alternatives approved by some of these agents, π assigns the first (w.r.t. s) of these agents, say j, to b_j, where (b_j, 1) denotes j's top-ranked approved alternative among the still available activities.
The resulting assignment π is core stable by construction: no agent assigned to an activity wants an additional agent to join; at the same time, no agent assigned to the void activity approves of an alternative whose corresponding activity has no agent assigned to it. Observe that the above proof implies that the case of decreasing preferences admits a (single-valued) strategyproof cs-aggregation function. Consider the serial dictatorship aggregation function f_sd, which lets agents, according to some pre-defined order, sequentially pick their best-ranked approved alternative (a, k) among the yet unused activities a ∈ A* and assign a total of k agents including herself to a (or assign themselves to a_∅ if such an alternative does not exist). In the case of decreasing preferences, this means the agents in turn assign themselves alone to a for their best-ranked available alternative (a, 1) (or to a_∅ if no such alternative is available). Also, observe that for decreasing preferences the resulting assignment is both core stable and Pareto optimal. We hence get the following proposition.
Proposition 3 When all agents have decreasing preferences, f_sd is a strategyproof cs- and po-aggregation function.
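For decreasing preferences, f_sd admits a compact implementation, since each agent simply claims her best still-unused activity alone. The following is a minimal sketch; the dict-based data layout is an assumption for illustration, not the paper's notation.

```python
def serial_dictatorship_decreasing(order, rankings):
    """Serial dictatorship f_sd under decreasing preferences: each agent, in
    the pre-defined order, assigns herself alone to her best-ranked approved
    activity among those not yet used, or to the void activity otherwise.

    order: list of agents (the dictatorship order).
    rankings[i]: agent i's approved activities, best first; under decreasing
    preferences, approving activity a means (a, 1) is the relevant alternative.
    Returns a dict mapping each agent to an activity, or to None for a_∅.
    """
    used = set()
    assignment = {}
    for i in order:
        # best-ranked approved activity that no earlier agent has claimed
        pick = next((a for a in rankings[i] if a not in used), None)
        if pick is not None:
            used.add(pick)
        assignment[i] = pick
    return assignment

# Both agents rank activity 'a' above 'b'; agent 1 picks first.
print(serial_dictatorship_decreasing([1, 2], {1: ['a', 'b'], 2: ['a', 'b']}))
# {1: 'a', 2: 'b'}
```

Since each agent receives her best alternative among the activities left when her turn comes, no misreport can improve her outcome, which is the intuition behind Proposition 3.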
Theorem 9 When all agents have decreasing preferences and A* consists of two activities, then the cs-aggregation correspondence C_cs is
• maxi-max strategyproof, and
• maxi-min and Gärdenfors manipulable.
Proof Maxi-max strategyproofness is implied by Theorem 8, while maxi-min and Gärdenfors manipulability follow from Example 5.
In the remainder of this section, we first show that C_cs is Gärdenfors- and maxi-min manipulable already for two activities and increasing preferences (see Example 6 below); on the positive side, in this case C_cs is maxi-max strategyproof (Theorem 10). This result, however, establishes a boundary for maxi-max strategyproofness of C_cs: neither can maxi-max strategyproofness be achieved in the two-activity case with single-peaked preferences (Theorem 11), nor in the case of increasing preferences and three activities (Theorem 12).
Observe that for each j ∈ N\{i} the preferences in P correspond to those in P′, i.e., we have ≻_j = ≻′_j. Thus, for any choice of h ≥ 1 there can be at most h − 1 agents of π_{a_∅} with (b, ℓ + h) ∈ S_j, since otherwise π is not core stable in I′. By assumption, π is not core stable in instance I. In this respect, we distinguish the following cases in instance I.
Case 1 In I, there is no set of agents containing π_a that wants to deviate to a. Since π is not core stable in I, there hence must be a set of agents containing π_b that wants to deviate to b, i.e., there is a non-empty set D ⊃ π_b such that (b, |D|) ≻_j (π(j), |π_j|) holds for each j ∈ D. Note that i ∈ D must hold by core stability of π in instance I′. In the remainder of Case 1 consider the original instance I. Starting from π, the idea is to construct another assignment which augments the set of agents assigned to b and from which again no set of agents wants to deviate to a. Stepwise application of this idea finally leads to an assignment from which no set of agents wants to deviate at all but which makes agent i better off than under π; this contradicts the choice of π and hence maxi-max manipulability.
Starting from π, construct assignment μ by assigning each agent of D to b; from the remaining agents (i.e., agents from N\D), determine the largest set F of agents that satisfies (a, |F|) ∈ S_f for each f ∈ F, and assign each of these agents to a. Note that F ⊆ N\D and in particular i ∉ F hold.
Next, we show that in assignment μ there is no set of agents including μ_a that wishes to deviate to a. For the sake of contradiction, assume that, in assignment μ, there is a set R ⊃ μ_a of agents that prefers (a, |R|) over the alternative assigned under μ. Observe that R ⊆ π_a cannot hold: each j ∈ π_a ∩ μ_b prefers (b, |μ_b|) over (a, k) and thus over (a, |R|); hence R ⊆ π_a implies R ⊆ (π_a\μ_b), which is ruled out by the choice of F.
Therefore, R must contain some agents of the set π_b ∪ π_{a_∅}. Observe that R can be rewritten as R = (R ∩ π_a) ∪ (R ∩ π_b) ∪ (R ∩ π_{a_∅}), because the sets π_a, π_b, π_{a_∅} form a partition of the set N of agents. Recall that k ≥ |R ∩ π_a| holds and no agent of π_b ∪ π_{a_∅} is better off with π than with μ. Hence, the fact that (a, |R ∪ π_a|) ≻_j (π(j), |π_j|) for each j ∈ R ∪ π_a contradicts our assumption in Case 1.
Thus, also in assignment μ there is no coalition that wishes to deviate to a in instance I. Recall that agent i is assigned to b under μ and we have (b, |μ_b|) ≻_i (a, k). If μ is core stable in I, we thus have a contradiction with maxi-max manipulability. Hence, again there must be a coalition that wants to deviate to b. Arguing for assignment μ in the same manner as for assignment π, we end up with an assignment γ with γ_b ⊃ μ_b and γ_a ⊆ μ_a from which no coalition of agents wishes to deviate to a. By increasing preferences and i ∈ μ_b, for agent i we get (b, |γ_b|) ≻_i (b, |μ_b|). By the fact that the number of agents assigned to b is strictly growing in each step, repeating this argumentation we finally must end up with an assignment under which no coalition of agents wishes to deviate at all. Thus, in instance I there is a core stable assignment η which assigns agent i to activity b such that (b, |η_b|) ≻_i (a, k) holds. This contradicts maxi-max manipulability.
Case 2 In I, there is a set of agents containing π_a that wants to deviate to a. In this case, starting from π we will construct an assignment ρ with ρ_a ⊃ π_a and ρ_b ⊆ π_b from which no set of agents wants to deviate to a. Under ρ, some set of agents might wish to deviate to b, but that set has to contain i; making use of Case 1 then concludes the proof.
Consider assignment π. In this case, in instance I there hence is a non-empty set E ⊃ π_a such that (a, |E|) ≻_j (π(j), |π_j|) holds for each j ∈ E. We proceed as follows. Construct assignment δ by assigning each agent of E to a; from the remaining agents (i.e., agents of N\E) assign the largest set H of agents to b that satisfies (b, |H|) ∈ S_h for each h ∈ H. If under δ there is no set of agents including δ_a that wishes to deviate to a, set ρ = δ. Otherwise, we stepwise derive our desired assignment ρ: in the next step we repeat the above procedure for assignment δ instead of π, and so on, until we end up with an assignment ρ under which no set of agents (including ρ_a) wishes to deviate to a. This requires less than n such steps due to increasing preferences and the fact that in each step the number of agents assigned to a is strictly increasing. From the latter fact and by construction we immediately get ρ_a ⊃ π_a, which in turn implies ρ_b ⊂ (π_b ∪ π_{a_∅}). We now show that ρ_b ⊆ π_b holds as well. In order to do so, it is sufficient to show that ρ_b ∩ π_{a_∅} = ∅ holds. Assume the opposite, that is, that the set ρ_b ∩ π_{a_∅} is non-empty.
Now, assume that there is a set D of agents including ρ_b that prefers (b, |D|) over the alternative assigned under ρ. In what follows, we show that D has to contain agent i.
Assume D ⊆ π_b. Since D is a superset of ρ_b, by the choice of H in constructing ρ, D must contain at least one agent assigned to a under ρ. Now, in the above procedure to construct ρ, consider the last assignment γ where all agents of D were jointly assigned to b. Such an assignment must exist due to our assumption D ⊆ π_b. In the assignment constructed from γ, some of the agents of D must have been assigned to a; by the fact that the number of agents assigned to a is increasing during the construction of ρ, this means these agents prefer (a, |ρ_a|) over (b, |D|) due to increasing preferences. This, however, contradicts our assumption that all agents of D are better off with joining b. Thus, any set D of agents that, under ρ, wants to deviate to b has to contain at least one agent of π_a ∪ π_{a_∅}. Recall that by increasing preferences each agent of π_a is better off under ρ than under π. Also, no agent of π_{a_∅} is made worse off under ρ. Hence, each agent of D prefers (b, |D|), and thus any alternative (b, |D| + h) with h ≥ 0, over the alternative assigned by π. As a consequence, D has to contain agent i, because otherwise π is not core stable in I′. Then, since there is no set of agents under ρ that wishes to deviate to a, we can apply Case 1 (starting with ρ instead of π) and again get a contradiction with maxi-max manipulability.
Unfortunately, the above maxi-max strategyproofness result for two activities and increasing preferences cannot be generalized to the domain of single-peaked preferences, as we show in Theorem 11 below.
Theorem 11 When all agents have single-peaked preferences and A* consists of two activities, then the cs-aggregation correspondence C_cs is maxi-max manipulable.
Finally, for the sake of completeness, we provide the details for the above sets of core stable assignments. In I, to identify the set of core stable assignments, let us consider all individually rational assignments. The trivial assignment is not core stable since, e.g., agent 1 would like to deviate to a. Assignment γ with γ(1) = γ(2) = b, γ(3) = a_∅ is not core stable because agent 1 prefers (a, 1) over (b, 2). Assignment μ with μ(1) = a_∅, μ(2) = μ(3) = a is not core stable either, because agents 1 and 2 prefer (b, 2) over their assigned alternative and hence want to deviate to b. Finally, assignment π with π(1) = a, π(2) = π(3) = a_∅ is core stable because agent 1 is the only agent assigned to a and (a, 1) is that agent's top-ranked alternative; thus, (1) agent 1 would be worse off with a set of agents joining a, and (2) no set of agents can deviate to b, because the latter would require also agent 1 to deviate to b. Considering P′ instead of P, the set of individually rational assignments remains unchanged. Exactly as argued above, it follows that π is core stable whereas γ and the trivial assignment are not. On the other hand, given P′ instead of P, assignment μ is core stable, because (1) (a, 2) is the top-ranked alternative of agents 2 and 3 in P′, and (2) agent 2 does not want to deviate to b.
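Case analyses like the one above can be automated for small instances by brute force. The following sketch checks core stability directly from the definition (a deviating coalition must strictly extend an activity's current participants and make all of its members better off); the utility-dict encoding of the preference profile is a hypothetical layout chosen for illustration, and the search is exponential in the number of agents.

```python
from itertools import combinations

def is_core_stable(assignment, utility, agents, activities):
    """Brute-force core-stability test for a small o-GASP instance.

    assignment: dict agent -> activity, with None standing for a_∅.
    utility[i]: dict mapping alternatives (activity, group size) to numbers;
    a_∅ has utility 0 and unlisted alternatives are unapproved (below a_∅).
    """
    def val(i, act, size):
        return 0 if act is None else utility[i].get((act, size), -1)

    group = {a: sum(1 for j in agents if assignment[j] == a) for a in activities}
    current = {i: val(i, assignment[i], group.get(assignment[i], 0)) for i in agents}
    if any(v < 0 for v in current.values()):
        return False  # not even individually rational
    for a in activities:
        members = [i for i in agents if assignment[i] == a]
        outsiders = [i for i in agents if assignment[i] != a]
        # a deviating set E must strictly extend the current participants of a
        for r in range(1, len(outsiders) + 1):
            for extra in combinations(outsiders, r):
                E = set(members) | set(extra)
                if all(val(i, a, len(E)) > current[i] for i in E):
                    return False  # every member of E prefers (a, |E|)
    return True
```

For instance, with two agents who (by decreasing preferences) approve only (a, 1), the assignment sending agent 1 alone to a is reported core stable, while the trivial assignment is not.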
As the maxi-max strategyproofness result of C_cs stated in Theorem 10 cannot be generalized to the more general domain of single-peaked preferences, we might be interested in whether, when restricted to increasing preferences, the result still holds when more than two activities are involved.

… agent 2, the only remaining agent approving of (c, 3), prefers the assigned alternative (a, 2) over (c, 3). Hence, no deviation to c would make all members of the deviating group of agents better off. Finally, consider a possible deviation to b. Since (b, 4) is not approved by 4 agents, we can restrict attention to (b, 3). Whereas agents 1 and 3 want to deviate to b, the only remaining agent approving of (b, 3), agent 4, prefers (a, 2) over (b, 3). Therefore, we can rule out such a deviation as well, and μ is core stable in I′. Hence, C_cs(P′) = {π, μ}.

Pareto optimal assignments
Consider an agent i ∈ N, and let, among the alternatives i approves of, (a, k) be the best-ranked (w.r.t. ≻_i) among the alternatives (b, ℓ) approved by at least ℓ agents. Then there is always a Pareto optimal assignment which assigns i to a such that exactly k agents are assigned to a (see Darmann 2018). Thus, it immediately follows that C_po is maxi-max strategyproof.
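The criterion above can be evaluated directly for each agent: scan her approved alternatives best-first and return the first one whose required group size is met by the overall approvals. A minimal sketch, assuming a hypothetical layout in which each agent's approved alternatives are listed best-first:

```python
def best_feasible_alternative(i, rankings, approvals):
    """Agent i's best-ranked approved alternative (a, k) such that (a, k) is
    approved by at least k agents overall; by the observation above, a Pareto
    optimal assignment realizing it for i always exists.

    rankings[i]: agent i's approved alternatives (activity, size), best first.
    approvals: dict agent -> set of approved alternatives.
    """
    for (a, k) in rankings[i]:
        # (a, k) is realizable only if at least k agents approve of it
        if sum(1 for s in approvals.values() if (a, k) in s) >= k:
            return (a, k)
    return None  # i can only be assigned to the void activity

rankings = {1: [('a', 2), ('a', 1)], 2: [('a', 2)]}
approvals = {i: set(alts) for i, alts in rankings.items()}
print(best_feasible_alternative(1, rankings, approvals))  # ('a', 2)
```

Since this value is the best outcome agent i can obtain in any individually rational assignment, reporting truthfully already secures it in some max-assignment, which is the intuition behind Theorem 13.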

Theorem 13 The po-aggregation correspondence C_po is maxi-max strategyproof.
In the case of decreasing preferences, C_po also turns out to be maxi-min strategyproof; on the negative side, C_po is Gärdenfors manipulable even if restricted to instances with only two activities.
Theorem 14 When all agents have decreasing preferences, then the po-aggregation correspondence C_po is maxi-min strategyproof.
Theorem 15 When all agents have decreasing preferences and A* consists of two activities, then the po-aggregation correspondence C_po is Gärdenfors manipulable.

Conclusion and outlook
We have analyzed the aspect of strategic manipulation in the group activity selection problem when the minimum requirement of individual rationality should be respected. Even when restricted to instances with increasing preferences and two activities, it turns out that a strategyproof aggregation correspondence (w.r.t. the maxi-min and Gärdenfors extensions) or a strategyproof aggregation function meeting the mild condition of unanimity does not exist. Observe that basically all reasonable aggregation processes satisfy unanimity, including the main solution concepts considered in the group activity selection problem (maximum individually rational, core stable, and Pareto optimal assignments) to which we paid particular attention. Thus, a strategyproofness result (w.r.t. the maxi-min and Gärdenfors extensions) in that domain is ruled out for basically all reasonable aggregation processes respecting individual rationality. The latter also applies to single-valued aggregation functions. Concerning the particular ir-aggregation functions considered, we have shown that in the case of decreasing preferences a strategyproof cs- and po-aggregation function always exists, while a strategyproof mir-aggregation function is ruled out even in restricted instances with only one activity.
For the considered aggregation correspondences, an overview of our results is given in Tables 4 and 5. While in the one-activity case several positive results could be derived, all three correspondences turn out to be susceptible to strategic manipulation when two activities are involved, even for restricted instances of our problem. However, compared to the mir-aggregation correspondence, the aggregation correspondence that outputs all core stable assignments is more robust against manipulation insofar as it is maxi-max strategyproof in the case of one activity, for decreasing preferences irrespective of the number of activities involved, and for two activities in the case of increasing preferences; if a third activity is contained in the setting, the latter result unfortunately does not hold anymore. The aggregation correspondence that outputs all Pareto optimal assignments turns out to be the one least prone to manipulation among the three correspondences. In particular, it is always maxi-max strategyproof, and remains maxi-min strategyproof in the case of decreasing preferences.
There are different approaches for further work on strategic manipulation in the group activity selection problem.For instance, other preference extensions could be adapted to the setting and analyzed with respect to manipulability.Of course, also a characterization of preference extensions and corresponding domains for which strategyproofness is guaranteed would be of interest.
Alternatively, one could hope for positive results at the price of dropping the requirement of individual rationality.This is, for instance, the case in serial dictatorship where

Fig. 1
Fig. 1 Relations between the sets of agents assigned to activities by assignments π and μ in the proof of Theorem 10

Table 1
Preference profile P