Manuscript



Introduction
The internet has created many new markets and industries that rely on the wisdom of the crowd, namely on information provided by market participants. Online reviews are a major feature of this trend. Yelp, TripAdvisor and Angie's List are billion-dollar platforms dedicated to offering online reviews of nearly every existing product and service. An extra star in a restaurant's Yelp rating can increase revenues by 5-9% (see Luca and Zervas (2016)). While online reviews are ubiquitous and have become an essential part of a consumer's everyday decision making, their credibility has been undermined by the incentives of reviewed businesses (or their competitors) to manipulate them. Cases of businesses caught hiring fake reviewers, or of individuals offering fake online review services, abound in the popular press. The extent of review manipulation, while hard to measure precisely, can be inferred indirectly. For instance, Yelp, which alone contains over 80 million reviews, filters out 16% of restaurant reviews and has even created a special list of "recommended reviews" by removing the 30% of reviews that look suspicious. Fake reviews can also be negative, in the sense that businesses plant unfavorable fake reviews of competitors, especially in highly competitive markets.
Given that some reviews are written by benevolent agents who truthfully report their experience while others are written by strategically interested parties whose objective is to manipulate readers' beliefs, a natural question arises: Should review platforms (such as Yelp) simply report all reviews (knowing that some of them may be fake) or could they apply a filtering mechanism to reduce the fake reviewers' influence?
In this paper, we obtain a surprisingly strong result, which we refer to as full transparency: a review platform cannot do better than simply reporting all messages. More specifically, when a platform reports all messages as they are, a learning process takes place, and any attempt by the platform to manipulate the reviews (e.g., by blocking "extreme" reviews or by pooling them) will make future users of the reviews worse off in expectation.
In the model, receivers, namely potential future consumers of a particular product, obtain review information from multiple senders by means of a platform. Senders are either "honest", and thus truthfully reveal the (noisy) signal they received while using the product, or "fake", in which case they wish to persuade the receiver that the product is good (in the case of a "positive fake") or bad (in the case of a "negative fake"). The platform is uninformed about the state of the world and can commit to a reporting mechanism that maps the senders' reviews to a report to be sent to the receivers. The platform's objective is to maximize the receivers' welfare. We say that the platform is nonstrategic if it simply reports all reviews to the receivers.
We characterize the equilibrium of the dynamic setup and prove that it is unique. We show that the platform cannot do better than to simply report all the reviews. Note that the only way that manipulating reviews could possibly benefit the receiver is if the platform can somehow affect the fake sender's strategy in a way that makes his messages less harmful. However, we show that any attempt by the platform to do so will influence the fake sender's strategy in a way that makes the messages sent by an honest sender less informative. By the same reasoning, we also show that if the "honest" sender could behave strategically (in order to maximize the receivers' welfare), then multiple equilibria would exist; however, the best equilibrium for the receivers is achieved when the honest sender is nonstrategic.
The analysis proceeds as follows: Section 2 reviews the literature. Section 3 presents a one-period model with a nonstrategic platform, a nonstrategic honest sender and a fake sender. The one-period equilibrium is presented in Section 4. In Section 5, we relax the assumption that the platform is nonstrategic. Section 6 extends the model to the case where there are many periods, many senders and many receivers. Section 7 concludes.
Related Literature
Despite the pervasive use of online reviews and the extent of review manipulation, until recently there has not been any theoretical work explicitly studying optimal reporting mechanisms used by such platforms. Nonetheless, there is a vast related literature.
In a recent (and independently written) working paper, Lahr and Winkelman (2019) also study a model with multiple senders who can either share the same preferences as the receiver or prefer that the receiver always take a particular action. They show, as does our model, that in equilibrium the fake senders ("partisans" in their analysis) randomize over some messages and honest senders ("advisors" in their analysis) simply report the truth even if strategic. In contrast to the current paper, they do not consider the design of the optimal reporting mechanism.
The literature includes models of manipulation/elimination of existing reviews, such as Aköz, Arbatl and Çelik (2018) and Smirnov and Strakov (2018), in which the firm does not produce fake reviews but rather alters or eliminates existing ones. Such setups apply only to reviews on a business' own site, while we focus on mass review platforms such as Yelp or TripAdvisor, where existing reviews cannot be altered by an interested party, but only by the platform itself.
There is an extensive theoretical literature that looks at static models of communication in which the sender can be either strategic or honest (see, for example, Benabou), or in which the receiver also has private information and can therefore assess the honesty of the sender (Olszewski (2004)). The equilibria in these models share some of the properties of the one-period model presented here. However, thanks to an independence result obtained here, we are able to derive novel results regarding the properties of the market's equilibrium in a multi-period model, as well as its implications.
A growing empirical literature examines the impact of reviews on consumers along various dimensions. Kim and Martin (2018) use online experiments to ascertain how individuals interpret ratings. Laouenan and Rathelot (2018) use data from an online marketplace of vacation rentals (Airbnb) to measure discrimination against ethnic-minority hosts and find that an additional review helps to close the price gap between minority and majority hosts. This is consistent with our model, which predicts that in expectation an additional review incrementally corrects mistaken beliefs. Mayzlin, Dover and Chevalier (2014) compare fake reviews of hotels on platforms where only consumers can post reviews (such as Expedia) and platforms where anyone can (such as TripAdvisor) and show that fake reviews, whether positive or negative, are much more frequent on the latter platforms and when competition is stronger. Lastly, our model relates to the phenomenon of fake news in the sense that it explores the extent to which a long-run anonymous player with a political agenda can derail information aggregation.

The One-Period Model
We start with the case of two players: a sender ($S$) and a receiver ($R$). Player $S$ can be one of two types: with probability $q$ he is honest ($S_h$) and with probability $1-q$ he is fake ($S_f$). The sender's type is chosen by nature before the beginning of the game.
The "state of the world" (e.g., the quality of the product) is a random variable $\theta \in \{0,1\}$ and is not known to either player; $p$ is the common prior that $\theta = 1$. If and only if the sender is of type $S_h$, then conditional on the realization of the state of the world $\theta$, the sender (but not the receiver) receives a signal $x$, which takes a value in $[0,1]$ according to the density $t_\theta(x)$ and the cdf $T_\theta(x)$. Assumption A.2 (hereafter referred to as MLRP (monotone likelihood ratio property)) captures the idea that the larger the signal, the more likely it is that $\theta = 1$. Define $\bar{x}$ to be the (unique) signal for which $t_1(\bar{x}) = t_0(\bar{x})$. That is, $\bar{x}$, referred to as the neutral news signal, does not change the sender's prior. In fact, by MLRP, signals above (below) neutral news imply positive (negative) updating.
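To make the updating concrete, here is a minimal sketch of the honest-signal posterior. The linear densities $t_0(x) = 2(1-x)$ and $t_1(x) = 2x$ are an assumption for illustration only (the text does not specify them; any pair satisfying MLRP would do). For this pair the neutral news signal is $\bar{x} = 1/2$.

```python
# Posterior after an honest signal, for ASSUMED linear densities
# t0(x) = 2(1 - x), t1(x) = 2x (illustrative only; any MLRP pair works).
def posterior(p, x):
    """Pr(theta = 1 | honest signal x), starting from prior p."""
    t0 = 2.0 * (1.0 - x)
    t1 = 2.0 * x
    return p * t1 / (p * t1 + (1.0 - p) * t0)

# At the neutral signal x_bar = 1/2 the densities coincide, so the prior
# is unchanged; signals above (below) 1/2 raise (lower) the posterior.
print(posterior(0.4, 0.5))   # -> 0.4
```

Signals above $\bar{x}$ strictly raise the posterior and signals below it strictly lower it, exactly as MLRP implies.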
After observing the signal $x$, the sender sends a message $m \in [0,1]$ to the platform.
Upon receiving a message $m$ from $S$, the receiver uses Bayes' rule to update her beliefs about the state of the world. We assume that the receiver does not know the sender's type and assigns probability $q$ to the event that $S$ is honest. The honest sender, $S_h$, reports his signal truthfully (i.e., $m = x$), whereas the fake sender, $S_f$ (to be referred to as Fake), chooses $m$ strategically. We assume first that Fake's payoff is increasing in $R$'s posterior that the state is 1, although later we explore the case in which Fake's payoff can either increase or decrease with the receiver's posterior. Initially, we assume that the platform is not strategic and simply forwards the message to the receiver. We will later show that such behavior is optimal for the receiver even when the sender is potentially fake.

Preliminaries
We define $\hat{p}(m)$ as the posterior probability that the state is 1, given the message $m$ and given that the sender is known to be honest; let $\bar{p}_\theta$ denote the expected value of $\hat{p}(m)$ given that the true state is $\theta$. By Assumption A.2, $\hat{p}(m)$ is clearly increasing in $m$.
A strategy for $S_f$ is a distribution function defined over the set of all messages $M = [0,1]$. Note that upon observing the neutral news message $m = \bar{x}$, the receiver will not update her prior regardless of Fake's strategy: the receiver's prior will not change whether $m = \bar{x}$ is known to be coming from a fake sender (because it never does) or from an honest sender (because it would not for $m = \bar{x}$). The following lemma states simply that, in equilibrium, Fake never assigns a strictly positive probability to any message $m$:

Lemma 1 Fake's equilibrium strategy is atomless.
Proof. See the appendix.
The intuition for this result is that, since the true/honest signal distributions have no atoms, atoms cannot help the Fake sender since they would reveal his identity to the receiver.
Having ruled out atoms in equilibrium, we now view Fake's strategy as a density $f(m)$. Let $\hat{p}(m \mid f)$ denote the receiver's posterior probability that the state is 1 upon receiving the message $m$, given that with probability $q$ the sender is honest and with probability $1-q$ he is fake; let $\bar{p}_\theta(f)$ denote the expected value of $\hat{p}(m \mid f)$ given that the true state is $\theta$. Given $(p, q, t_0, t_1)$, let
$$\hat{P}(m \mid f) = P \cdot \frac{Q\,t_1(m) + f(m)}{Q\,t_0(m) + f(m)},$$
where $P = p/(1-p)$ and $Q = q/(1-q)$. Thus $\hat{P}(m \mid f)$, hereafter referred to as the receiver's likelihood ratio, can be thought of, without loss of generality, as Fake's payoff from sending the message $m$ when the receiver believes that Fake is playing the strategy $f$. For the range of values this ratio can take, note that when $f(m) = 0$ it reduces to $P\,t_1(m)/t_0(m)$, while as $f(m)$ grows large it approaches $P$.
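A numerical sketch of the receiver's likelihood ratio, taking the (reconstructed) form $\hat{P}(m \mid f) = P\,\frac{Q\,t_1(m) + f(m)}{Q\,t_0(m) + f(m)}$ and again assuming the illustrative linear densities $t_0(x) = 2(1-x)$, $t_1(x) = 2x$:

```python
# Receiver's likelihood ratio P_hat(m | f), assuming the reconstructed
# form P * (Q t1(m) + f(m)) / (Q t0(m) + f(m)) and ASSUMED linear
# densities t0(x) = 2(1 - x), t1(x) = 2x.
def likelihood_ratio(m, f_m, p, q):
    P = p / (1.0 - p)   # prior odds that theta = 1
    Q = q / (1.0 - q)   # odds that the sender is honest
    t0 = 2.0 * (1.0 - m)
    t1 = 2.0 * m
    return P * (Q * t1 + f_m) / (Q * t0 + f_m)

# With no Fake weight at m the ratio is the honest one, P * t1(m)/t0(m);
# a large Fake weight drowns the signal and pushes the ratio toward P.
honest_only = likelihood_ratio(0.8, 0.0, 0.5, 0.75)    # = 1 * (1.6 / 0.4) = 4
swamped     = likelihood_ratio(0.8, 1e9, 0.5, 0.75)    # close to P = 1
```

The two limiting cases bracket the range of the ratio: Fake's density at a message can only pull the receiver's inference toward her prior odds, never past the honest-sender benchmark.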

One-Period Equilibrium
Let $f$ denote Fake's equilibrium strategy. The following proposition provides a set of conditions that $f$ must satisfy:

Proposition 1 If $f$ is an equilibrium strategy for Fake, then there exists a point $z \in (\bar{x}, 1)$ such that $f(m) > 0$ if and only if $m \in (z, 1]$, and $\hat{P}(m \mid f) = \hat{P}(z \mid f)$ for all $m \in (z, 1]$.

Proof. See the appendix.
The following theorem establishes existence and uniqueness:

Theorem 1 (i) An equilibrium exists and is unique.
(ii) Fake's equilibrium strategy is
$$f(m) = \begin{cases} Q\,\dfrac{t_0(z)\,t_1(m) - t_1(z)\,t_0(m)}{t_1(z) - t_0(z)} & \text{for } m \in (z, 1], \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $z$ (hereafter referred to as Fake's cutoff) is the unique solution to
$$\int_z^1 f(m)\,dm = 1.$$
(iii) Fake's cutoff $z$ is increasing in $q$.

Proof. See the appendix.
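The equilibrium objects can be computed numerically from the two conditions (a constant posterior on $(z,1]$ and total Fake mass one). A minimal sketch, again assuming the illustrative linear densities $t_0(x)=2(1-x)$ and $t_1(x)=2x$ (an assumption, since the text's example does not spell them out); with $q=3/4$ the bisection recovers the cutoff $z=2/3$:

```python
# Equilibrium cutoff z for Fake's strategy, assuming ASSUMED linear
# densities t0(x) = 2(1 - x), t1(x) = 2x (illustrative only).
def t0(x): return 2.0 * (1.0 - x)
def t1(x): return 2.0 * x

def fake_density(m, z, Q):
    # reconstructed density that keeps the posterior constant on (z, 1]
    return Q * (t0(z) * t1(m) - t1(z) * t0(m)) / (t1(z) - t0(z))

def total_mass(z, Q, steps=4000):
    # midpoint-rule integral of f over (z, 1]; total mass 1 pins down z
    h = (1.0 - z) / steps
    return sum(fake_density(z + (i + 0.5) * h, z, Q) for i in range(steps)) * h

def cutoff(q, lo=0.51, hi=0.999, tol=1e-10):
    # total mass falls monotonically as z rises from x_bar toward 1
    Q = q / (1.0 - q)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_mass(mid, Q) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(cutoff(0.75), 4))   # -> 0.6667
```

Raising $q$ raises the computed cutoff, in line with part (iii) of the theorem.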
Intuitively, the equilibrium can be described as follows: Fake randomizes over an interval $(z, 1]$ in a way that generates the same posterior for the receiver at all $m \in (z, 1]$, which is equal to the receiver's posterior after the threshold message $m = z$. That is, the equilibrium likelihood ratio for all $m \in (z, 1]$ has the property
$$\hat{P}(m \mid f) = \hat{P}(z \mid f) = P\,\frac{t_1(z)}{t_0(z)}.$$
To guarantee the constant posterior, $f(m)$ is strictly increasing in $m \in (z, 1]$, implying that higher values of $m$ are also more likely to originate from Fake. Lastly, the more likely it is that the sender is fake (the lower is $q$), the lower $z$ will be and consequently the lower will be the posterior/persuasion Fake can generate. Thus, even though Fake is able to "manipulate" the receiver's beliefs by generating a posterior $\hat{P}(z \mid f)$ that is higher than the prior $P$, he can only do so to a limited extent, and his ability to manipulate increases with $q$.

Example 1 Consider a linear example with $q = 3/4$; then $z = 2/3$. Figure 1 depicts Fake's strategy.

The following result, hereafter referred to as the independence result, follows immediately from (3) and will be essential in the subsequent development of the model.

Corollary 2 (Independence) Fake's equilibrium strategy does not depend on the prior $p$.

Thus, Fake's equilibrium strategy is not affected by the receiver's prior beliefs about the state of the world. An immediate and interesting implication of Corollary 2 is that even if Fake is uncertain about the receiver's prior, or if he faces a distribution of many receivers with possibly different priors, his equilibrium strategy will still be the one presented in Theorem 1, a result that will be particularly useful in the analysis of the multi-period multi-sender game. The independence result is intuitive since it emerges from the fact that if some piece of information is better news than another under some prior, then the same should be true under any other prior. As a consequence, information that maximizes the posterior (i.e., is best news) under some prior should also do so under any other prior.
We next present a learning result which states that: (i) as long as there is some strictly positive probability that the sender is honest, the receiver benefits from paying attention to the sender's messages; and (ii) the more likely it is that the sender is honest, the higher is the receiver's benefit. While the first part of the proposition is somewhat obvious, the second part is more surprising given that the fake sender is strategic and becomes more aggressive as $q$ increases (see (iii) in Theorem 1).
In order to prove this proposition, it will be more convenient to focus on the receiver's prior $p$ rather than the likelihood ratio $P$, and on her posterior probability $\hat{p}(m \mid f)$ rather than her posterior likelihood ratio $\hat{P}(m \mid f)$. Let $E_\theta[\hat{p} \mid f]$ denote the receiver's expected posterior probability that the state is 1, given that the true state is $\theta$.
Proposition 2 (Learning) For every $q > 0$, $E_1[\hat{p} \mid f] > p > E_0[\hat{p} \mid f]$; moreover, $E_1[\hat{p} \mid f]$ is increasing and $E_0[\hat{p} \mid f]$ is decreasing in $q$.

Proof. See the appendix.

Remark 1 Suppose that Fake can be one of two types: Fake-1's payoff increases with the receiver's posterior, while Fake-0's decreases with the receiver's posterior. The sender is honest with probability $q > 0$, Fake-1 with probability $q_1 > 0$ and Fake-0 with probability $q_0 > 0$, where $q + q_1 + q_0 = 1$. An analysis similar to the one above shows that the (unique) equilibrium is characterized by two cutoffs, $z_1 \in (\bar{x}, 1)$ for Fake-1 and $z_0 \in (0, \bar{x})$ for Fake-0, such that Fake-1's equilibrium strategy coincides with Fake's when he is the only fake sender and the probability of the sender being honest is $q/(1-q_0)$. Fake-0's equilibrium strategy is the mirror image of Fake's when he is the only fake sender (whose objective is to increase the receiver's posterior that the state is 1) and the probability of the sender being honest is $q/(1-q_1)$.

Example 2 Consider the linear case discussed in Example 1. Figures 4 and 5 depict the equilibrium strategies of the two fake types.
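The learning result can be illustrated numerically. For the linear specification assumed in the earlier sketches ($t_0(x)=2(1-x)$, $t_1(x)=2x$, an assumption throughout), the cutoff has the closed form $z = (Q+1-\sqrt{Q+1})/Q$ (derived for this specification only), so the expected posteriors conditional on the state can be computed by simple quadrature:

```python
import math

# Expected posteriors E_theta[p_hat | f] in the one-period equilibrium,
# ASSUMING linear densities t0(x) = 2(1-x), t1(x) = 2x. For this
# specification the cutoff is z = (Q + 1 - sqrt(Q + 1)) / Q.
def expected_posteriors(q, p=0.5, steps=20000):
    Q = q / (1.0 - q)
    P = p / (1.0 - p)
    z = (Q + 1.0 - math.sqrt(Q + 1.0)) / Q
    def f(m):  # Fake's equilibrium density, supported on (z, 1]
        return 2.0 * Q * (m - z) / (2.0 * z - 1.0) if m > z else 0.0
    e0 = e1 = 0.0
    h = 1.0 / steps
    for i in range(steps):
        m = (i + 0.5) * h
        t0, t1 = 2.0 * (1.0 - m), 2.0 * m
        P_hat = P * (Q * t1 + f(m)) / (Q * t0 + f(m))
        p_hat = P_hat / (1.0 + P_hat)
        e0 += p_hat * (q * t0 + (1.0 - q) * f(m)) * h   # message density, theta = 0
        e1 += p_hat * (q * t1 + (1.0 - q) * f(m)) * h   # message density, theta = 1
    return e0, e1
```

Running this for increasing values of $q$ shows the posteriors moving toward the truth in each state and the gains growing in $q$, while the unconditional expected posterior stays at the prior (the martingale property of Bayesian beliefs).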

Strategic Platform
Up to this point, we have assumed that the platform is nonstrategic in the sense that it truthfully reports any message it receives. The question that arises is whether the platform can do better by somehow manipulating the messages it receives before sending them to the receiver. By doing so, the platform might induce the fake sender to alter his strategy in a way that makes it less harmful to the receiver. Examples of such manipulation include deleting extreme messages or pooling some messages (policies often used by platforms). In what follows, however, we show that any such manipulation by the platform can only make the receiver worse off.
We assume that the platform's objective is to be as informative (à la Blackwell) as possible. Such an assumption is intuitive in a market where platforms compete for users who make choices (such as choosing the share of risky assets in their portfolio) based on the information they obtain from the platform. This will be the case, for example, if the receiver chooses an action $x \in [0,1]$ and his VNM utility from choosing $x$ in state $\theta \in \{0,1\}$ is $-(x - \theta)^2$. The notion of informativeness employed here is second-order stochastic dominance (see Blackwell and Girshick (1979)).
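For readers who want the quadratic-loss logic spelled out: with $u(x,\theta) = -(x-\theta)^2$, the receiver's optimal action after a message is her posterior itself, and her expected loss at posterior $\hat{p}$ is $\hat{p}(1-\hat{p})$. Because this loss is concave, a mean-preserving spread of posteriors (a more informative distribution in the sense used here) lowers expected loss. A minimal sketch:

```python
# With u(x, theta) = -(x - theta)^2, the optimal action given posterior
# p_hat is x* = p_hat, and the expected loss at that action is
# p_hat * (1 - p_hat). Concavity of this loss makes mean-preserving
# spreads of the posterior distribution beneficial to the receiver.
def expected_loss(posteriors, weights):
    return sum(w * ph * (1.0 - ph) for ph, w in zip(posteriors, weights))

uninformative = expected_loss([0.5], [1.0])             # no learning: 0.25
informative = expected_loss([0.2, 0.8], [0.5, 0.5])     # same mean, spread out
```

The two posterior distributions have the same mean (the prior, $0.5$), but the more dispersed one yields a strictly lower expected loss.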
We adopt a mechanism design approach and assume that the platform commits in advance to a reporting policy $g$, known to both the sender and the receiver. Formally, the platform's strategy $g : [0,1] \to [0,1]$ assigns to each message it receives from the sender a message to be sent to the receiver. Since the platform does not have any information other than the message it receives, the only manipulation it can apply is to pool some messages. Clearly, if not for the presence of a fake sender, such a strategy would make the receiver worse off.
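Pooling can be sketched directly: if the policy $g$ maps two messages to the same report, the receiver's posterior after the pooled report is the probability-weighted average of the two original posteriors. Absent a fake sender this is a mean-preserving contraction of the posterior distribution, so under the quadratic preferences of the previous paragraph it weakly raises the receiver's expected loss. A sketch (the three-posterior example is hypothetical):

```python
# Pooling two messages under a reporting policy g: the receiver sees only
# the pooled report, so her posterior becomes the weighted average of the
# two original posteriors -- a mean-preserving contraction.
def pool(posteriors, weights, i, j):
    w = weights[i] + weights[j]
    merged = (weights[i] * posteriors[i] + weights[j] * posteriors[j]) / w
    new_p = [ph for k, ph in enumerate(posteriors) if k not in (i, j)] + [merged]
    new_w = [wk for k, wk in enumerate(weights) if k not in (i, j)] + [w]
    return new_p, new_w

def expected_loss(posteriors, weights):
    # receiver's expected loss under u = -(x - theta)^2, acting optimally
    return sum(w * ph * (1.0 - ph) for ph, w in zip(posteriors, weights))

p0, w0 = [0.2, 0.5, 0.8], [1 / 3, 1 / 3, 1 / 3]
p1, w1 = pool(p0, w0, 1, 2)   # pool the two most favorable messages
```

The pooled distribution keeps the same mean but is less dispersed, so the receiver's expected loss strictly rises in this example.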
The driving force behind the result is the characterization of Fake's equilibrium strategy $f_g$ when the platform applies the policy $g$. Essentially, Fake's optimal strategy in this case is similar to the one he used when the platform was not strategic, with the only modification that he is now facing a different distribution of messages. In other words, Fake will, as before, assign positive weight to those messages that, in his absence, would yield the highest posterior, and he will do so in a way that equalizes the posteriors across all these messages. Notice, however, that since the posterior induced in the presence of a strategic platform and in the absence of the fake sender is not necessarily monotonic, the support of Fake's strategy may consist of more than one interval.
Recall that $\hat{p}(m)$ is the receiver's posterior that the state is 1 in the absence of the fake sender and in the absence of manipulation by the platform. Similarly, let $\hat{p}_g(m)$ denote the receiver's posterior belief that the state is 1, given that the sender is honest and given that the platform received the message $m$ and applies the strategy $g$. Notice that $m$ is not necessarily the message sent by the platform but rather the message sent by the sender before the manipulation by the platform. Thus, $\hat{p}_g(m)$ is the posterior that the sender induces when he sends the message $m$. Let $\hat{p}_g(m \mid f_g)$ denote the receiver's posterior given the sender's (fake or honest) message $m$, Fake's strategy $f_g$ and the platform's strategy $g$. We can now state the following proposition (the proof of which is omitted since it is essentially a repetition of the arguments in the proof of Proposition 1).
Proposition 3 If $f_g$ is an equilibrium strategy for Fake, then: (i) if $f_g(m') > 0$ and $f_g(m'') > 0$, then $\hat{p}_g(m' \mid f_g) = \hat{p}_g(m'' \mid f_g) \equiv p_g$; (ii) if $\hat{p}_g(m) \le p_g$, then $f_g(m) = 0$.

Thus, in equilibrium, all messages to which Fake assigns a positive probability induce the same posterior $p_g$, which is the highest induced posterior inferred by the receiver. Notice, however, that unlike the case in which the platform is not strategic, and since $\hat{p}_g(m)$ is not necessarily monotonic, Fake's strategy may not be monotonic in this case. Recall that $\hat{p}(\cdot \mid f)$ is the distribution of posteriors in equilibrium when the platform is nonstrategic, and $\hat{p}(z \mid f) \equiv p^*$ is the highest point in its support. The following proposition establishes that the highest point in the support of the distribution of posteriors when the platform is strategic, i.e., $p_g$, cannot be above $p^*$. Roughly speaking, the reason for this result is that the introduction of a strategic platform shifts the distribution of posteriors to the left. Therefore, when the fake agent assigns positive weight to these posteriors, they are shifted even further to the left.
The inequality above states that, since the platform is simply garbling messages, for any given strategy used by Fake the induced posterior can never be larger than $p^*$. Therefore, since $p_g > p^*$ by hypothesis, in order for the fake sender to induce a posterior higher than $p^*$, it must be that under his new strategy $f_g$ there exists a message $m'$ for which $f_g(m') > 0$ and $\hat{p}_g(m' \mid f_g) > p^*$. It must then also be that there exists a message $m''$ with $f_g(m'') > 0$ for which, by (6), $\hat{p}_g(m'' \mid f_g) \le p^* < \hat{p}_g(m' \mid f_g)$. But this contradicts Proposition 3, in which it is shown that in equilibrium Fake assigns strictly positive probability only to messages that induce the highest posterior.
We will now prove that the receiver cannot be better off when the platform is strategic.
That is, the equilibrium distribution of posteriors when the platform is strategic second-order stochastically dominates the distribution of posteriors when the platform is not strategic. Denote the equilibrium cumulative distribution of posteriors by $G(\cdot \mid f_g)$ when the platform is strategic and by $G(\cdot \mid f)$ when it is not.
Theorem 3 $\hat{p}(\cdot \mid f)$ is more informative than $\hat{p}_G(\cdot \mid f_g)$.

Proof. We need to show that (7) holds for all $x \in [0,1]$. First, observe that for all $x \in [0, p_g]$ the inequality holds, since the only difference between $\hat{p}(\cdot \mid f)$ and $\hat{p}_G(\cdot \mid f_g)$ in this region is the result of the platform's pooling strategy (Fake does not operate there). Next, notice that since $p_g \le p^*$, it follows that for every $x > p_g$ we have $G(x \mid f_g) \ge G(x \mid f)$. Since the two distributions have the same expected value (i.e., the prior), inequality (7) holds for all $x \in [0,1]$.
Remark 2 (Strategic Honest Sender) We have assumed throughout that the honest sender is not strategic and simply reports his signal. In the context of consumer reviews, this appears to be a realistic assumption. Given the above result, which showed that the receiver cannot benefit from a strategic platform, it is straightforward to show (using the same arguments) that if the honest sender is strategic, then multiple equilibria exist, but the best equilibrium from the receiver's viewpoint is the one in which the honest sender is not strategic.

The N-Period Model
In this section, we extend the model to $N$ periods and allow for many senders, some of whom are honest and some of whom are fake, who send messages at different times.
Senders can appear more than once, and fake senders are not necessarily myopic when choosing their strategies. We also allow for multiple receivers who form their beliefs after observing messages at various points in time. To characterize the equilibrium of the general model, we rely heavily on the independence result in Corollary 2, which implies that a fake sender's action in a given period is not affected by previous messages (sent either by himself or by other senders) and will not affect his actions or those of other senders in future periods.
In what follows, we begin by assuming that the platform is not strategic and then show that this assumption is without loss of generality in the multi-period model as well.
There is a pool of receivers, and in every period $n \in \{1, 2, \ldots, N\}$ one of them (who may have already been a receiver in a previous period) is drawn from that pool and forms her posterior based on the history of messages up to period $n$. Let $L_0$ be the set of fake senders whose objective is to minimize all receivers' posterior that the state is 1. Likewise, $L_1$ is the set of fake senders whose objective is to maximize the receivers' posterior that the state is 1, and $L_h$ is the set of honest senders. The set of all senders is denoted by $L$. In every period, sender $l$ is selected with probability $q_l$, where $q_l \ge 0$ and $\sum_{l \in L} q_l = 1$, to send a message in that period. For $i \in \{0, 1, h\}$, let $q_i = \sum_{l \in L_i} q_l$ and observe that $q_0 + q_1 + q_h = 1$. If $l$ is honest, then he truthfully reports his signal; otherwise he reports strategically.
A strategy for a fake sender $l$ specifies his move in every period $n$, given the history of previous messages, in the case that he is selected to move in that period. Let $f_{q_0}$ and $f_{q_1}$ be the equilibrium strategies of Fake-0 and Fake-1, respectively, in the two-sided one-period model where the sender is of type Fake-$\theta$ with probability $q_\theta$ and is honest with probability $1 - q_0 - q_1$. The following proposition states that a fake agent's strategy is stationary, in the sense that it is independent of $n$ and of the history of messages up to that period. Furthermore, in every period a type Fake-$\theta$'s strategy chooses messages in the same way that a sender of his type would have done in the (two-sided) one-period model in which his type is chosen with probability $q_\theta$. Given that the last period's strategies are independent of the history, we can move one step backwards and apply the same argument; proceeding by backward induction, the claim holds in all periods.
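The stationarity and independence logic can be illustrated with a small updating sketch: given stationary per-period strategies, each message contributes a multiplicative likelihood ratio to the receiver's posterior odds, so the posterior after any history depends only on the multiset of messages, not on their order. (This is a sketch of the Bayesian bookkeeping only, not of the equilibrium strategies themselves; the ratios below are hypothetical.)

```python
# Posterior odds after a history of messages: with stationary per-period
# strategies, each message m contributes a likelihood ratio r(m), and
# only the multiset of messages matters, not their order.
def posterior_odds(prior_odds, ratios):
    odds = prior_odds
    for r in ratios:
        odds *= r
    return odds

history_a = [2.0, 0.5, 3.0]   # hypothetical per-message likelihood ratios
history_b = [3.0, 2.0, 0.5]   # same messages, different order
```

Both histories lead the receiver to the same posterior odds, which is why a fake sender's stationary play in each period can be analyzed with the one-period tools.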

Strategic platform in the N-period model
In a multi-period model, a strategic platform can apply strategies that are not feasible in the one-period case. For example, the platform can condition the messages it forwards to the receiver on the messages it received in previous periods. Since the platform can commit to a reporting mechanism, such a policy could potentially alter the fake sender's strategy in some periods so as to benefit receivers overall. However, as in the one-period model, such a strategy can only harm the receivers.

Theorem 4
In the N-period model, the optimal strategy for a strategic platform is to truthfully reveal the signals it receives in every period.
Proof. Consider a fake sender in period $n < N$. Regardless of the platform's strategy, the expected posterior in all periods $n' > n$ is the posterior obtained at the end of period $n$. Thus, Fake's strategy in period $n$ is not affected by the platform's strategy in any period $n' > n$, but only by the platform's strategy in period $n$. With this in mind, we can now use Theorem 3 to argue that the platform's optimal strategy is to truthfully reveal the message it receives in every period.

Conclusion and Further Research
We have proposed a simple and parsimonious model of information aggregation in the presence of fake reviews. Despite its simplicity, the model has several potential applications and extensions. Since the model is malleable and delivers a unique prediction, it can be used to answer a number of questions regarding the supply of fake reviews that are examined in the industrial organization (IO) domain. For instance, when would a business want to hire fake reviewers? What proportion of fake reviews is optimal for a (dishonest) business? When can negative fake reviews be used to sink new and competing, though as yet unreviewed, products? To answer these questions, one needs to know the amount of persuasion that can be obtained from each fake review. Therefore, several (first-stage) IO questions can be addressed by recasting our simple model as the second stage of the game.
A natural extension of the model would be to add uncertainty, so that learning occurs about the proportion of fake reviews. This would entail dynamic path dependence, since long-term senders would also be trying to persuade receivers that the number of fake reviews is low, so as to better disguise their fake messages and achieve a greater impact on beliefs. A special case of this, which applies more to fake news (than to fake reviews, where aliases are used), is that of non-anonymous, possibly honest, long-term senders. If a fake sender is recognized as a sender of multiple messages (i.e., possibly fake news articles), then he might want to occasionally send true news articles in order to conceal his objective. Along these lines, our minimal model can serve as a benchmark to analyze the effect of fake long-term senders on information aggregation and voting outcomes in a fake news world.

Appendix
Proof of Lemma 1. Suppose, by contradiction, that Fake's equilibrium strategy assigns a strictly positive mass to some message $m$. Since the honest sender's signal distribution is atomless, when a receiver observes the message $m$ she must conclude that the sender of $m$ is fake, and therefore she will not update her prior. Since such atoms do not increase the receiver's posterior, they are of no use to Fake, and a profitable deviation is easy to find. Since the number of messages with strictly positive mass in any probability distribution is countable, there must be a message $m' > \bar{x}$ such that $f$ has no atom at $m'$. Upon receiving this $m'$, the receiver's posterior increases: there is positive updating, as the receiver can no longer rule out that $m'$ comes from an honest sender. Thus, deviating to the message $m'$ is strictly better for Fake than sending the message $m$, a contradiction.
Proof of Proposition 1. The proof is established by proving a series of claims.
Proof. Assume, by contradiction, that $f(1) = 0$, and therefore $\hat{P}(1 \mid f) = P\,\frac{t_1(1)}{t_0(1)} > \hat{P}(m \mid f)$ for all $m < 1$, where the latter inequality follows from the assumed MLRP of $t_\theta(\cdot)$. Thus, deviating to $m = 1$ is profitable for Fake, contradicting that $t_1(m') \le t_0(m')$ and $t_1(1) > t_0(1)$. In the following claim, we show that in equilibrium Fake mixes over an interval of messages whose average posterior is $p$. Since the terms in $f$ cancel out, we obtain $\int f(m)\,(t_1(m) - t_0(m))\,dm > 0$. The inequality follows because MLRP implies strict first-order stochastic dominance and because $\hat{p}_f(m)$ is non-decreasing overall and (by MLRP) increasing below the neutral signal, where $\hat{p}_f(m) = \hat{p}(m)$. (ii) We prove the theorem as follows: since the terms in $f$ cancel out, and since $\frac{d\hat{p}_f}{dq} = 0$ for $m \le z$, applying the Leibniz rule yields an expression whose first term is positive due to MLRP and whose last term is positive since $\frac{d\hat{p}_f}{dq} > 0$ and $t_1(m) > t_0(m)$ for all $m > z$.